Updates from: 07/19/2023 02:09:25
Service Microsoft Docs article Related commit history on GitHub Change details
active-directory-domain-services Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-domain-services/policy-reference.md
Title: Built-in policy definitions for Azure Active Directory Domain Services description: Lists Azure Policy built-in policy definitions for Azure Active Directory Domain Services. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 07/06/2023 Last updated : 07/18/2023
active-directory Application Provisioning Config Problem Scim Compatibility https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-provisioning/application-provisioning-config-problem-scim-compatibility.md
Below are sample requests to help outline what the sync engine currently sends v
**Without feature flag** ```json
-{
+ {
"schemas": [ "urn:ietf:params:scim:api:messages:2.0:PatchOp" ], "Operations": [
- {
+ {
"op": "Add", "path": "nickName", "value": "Babs"
- }
- ]
-}
-
+ }
+ ]
+ }
``` **With feature flag** ```json
-{
- "schemas": ["urn:ietf:params:scim:api:messages:2.0:PatchOp"],
- "Operations": [
- {
- "op": "add",
- "path": "nickName",
- "value": "Babs"
- }
- ]
-}
+ {
+ "schemas": ["urn:ietf:params:scim:api:messages:2.0:PatchOp"],
+ "Operations": [
+ {
+ "op": "add",
+ "path": "nickName",
+ "value": "Babs"
+ }
+ ]
+ }
``` **Requests to replace multiple attributes:** **Without feature flag** ```json
-{
+ {
"schemas": [ "urn:ietf:params:scim:api:messages:2.0:PatchOp" ],
Below are sample requests to help outline what the sync engine currently sends v
"value": "Eqpj" } ]
-}
+ }
``` **With feature flag** ```json
-{
+ {
"schemas": [ "urn:ietf:params:scim:api:messages:2.0:PatchOp" ],
Below are sample requests to help outline what the sync engine currently sends v
} } ]
-}
+ }
``` **Requests made to remove a group member:** **Without feature flag** ```json
-{
+ {
"schemas": [ "urn:ietf:params:scim:api:messages:2.0:PatchOp" ],
Below are sample requests to help outline what the sync engine currently sends v
] } ]
-}
+ }
``` **With feature flag** ```json
-{
+ {
"schemas": [ "urn:ietf:params:scim:api:messages:2.0:PatchOp" ],
Below are sample requests to help outline what the sync engine currently sends v
"path": "members[value eq \"7f4bc1a3-285e-48ae-8202-5accb43efb0e\"]" } ]
-}
+ }
``` - * **Downgrade URL:** Once the new SCIM compliant behavior becomes the default on the non-gallery application, you can use the following URL to roll back to the old, non SCIM compliant behavior: AzureAdScimPatch2017
active-directory Inbound Provisioning Api Faqs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-provisioning/inbound-provisioning-api-faqs.md
Yes, the provisioning API supports on-premises AD domains as a target.
## How do we get the /bulkUpload API endpoint for our provisioning app?
-The /bulkUpload API is available only for apps of the type: "API-driven inbound provisioning to Azure AD" and "API-driven inbound provisioning to on-premises Active Directory". You can retrieve the unique API endpoint for each provisioning app from the Provisioning blade home page. In **Statistics to date** > **View technical information** and copy the **Provisioning API Endpoint** URL. It has the format:
+The /bulkUpload API is available only for apps of the type: "API-driven inbound provisioning to Azure AD" and "API-driven inbound provisioning to on-premises Active Directory". You can retrieve the unique API endpoint for each provisioning app from the Provisioning blade home page. In **Statistics to date** > **View technical information**, copy the **Provisioning API Endpoint** URL. It has the format:
```http
https://graph.microsoft.com/beta/servicePrincipals/{servicePrincipalId}/synchronization/jobs/{jobId}/bulkUpload
```
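For illustration only (not from the article), here is a minimal Microsoft Graph PowerShell sketch of calling this endpoint. The permission scope, the placeholder service principal and job IDs, and the single-user SCIM bulk payload are assumptions.

```powershell
# Minimal sketch (assumptions noted above): upload one user to the bulkUpload endpoint.
Connect-MgGraph -Scopes "SynchronizationData-User.Upload"   # assumed permission scope

# Replace the placeholders with the values copied from the Provisioning API Endpoint URL.
$uri = "https://graph.microsoft.com/beta/servicePrincipals/<servicePrincipalId>/synchronization/jobs/<jobId>/bulkUpload"

$scimBulkRequest = @{
    schemas    = @("urn:ietf:params:scim:api:messages:2.0:BulkRequest")
    Operations = @(
        @{
            method = "POST"
            bulkId = "00001"
            path   = "/Users"
            data   = @{
                schemas    = @("urn:ietf:params:scim:schemas:core:2.0:User")
                externalId = "E0001"               # hypothetical HR record ID
                userName   = "babs@contoso.com"    # hypothetical user
                active     = $true
            }
        }
    )
} | ConvertTo-Json -Depth 10

Invoke-MgGraphRequest -Method POST -Uri $uri -Body $scimBulkRequest -ContentType "application/scim+json"
```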
The current API only supports inbound data. Here are some options to consider fo
## Next steps - [Configure API-driven inbound provisioning app](inbound-provisioning-api-configure-app.md)-- To learn more about API-driven inbound provisioning, see [inbound user provisioning API concepts](inbound-provisioning-api-concepts.md).
+- To learn more about API-driven inbound provisioning, see [inbound user provisioning API concepts](inbound-provisioning-api-concepts.md).
active-directory Plan Auto User Provisioning https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-provisioning/plan-auto-user-provisioning.md
This article uses the following terms:
| Resources| Link and Description | | - | - | | On-demand webinars| [Manage your Enterprise Applications with Azure AD](https://info.microsoft.com/CO-AZUREPLAT-WBNR-FY18-03Mar-06-ManageYourEnterpriseApplicationsOption1-MCW0004438_02OnDemandRegistration-ForminBody.html)<br>Learn how Azure AD can help you achieve SSO to your enterprise SaaS applications and best practices for controlling access. |
-| Videos| [What is user provisioning in Active Azure Directory?](https://youtu.be/_ZjARPpI6NI) <br> [How to deploy user provisioning in Active Azure Directory?](https://youtu.be/pKzyts6kfrw) <br> [Integrating Salesforce with Azure AD: How to automate User Provisioning](https://azure.microsoft.com/resources/videos/integrating-salesforce-with-azure-ad-how-to-automate-user-provisioning/) |
+| Videos| [What is user provisioning in Azure Active Directory?](https://youtu.be/_ZjARPpI6NI) <br> [How to deploy user provisioning in Azure Active Directory?](https://youtu.be/pKzyts6kfrw) <br> [Integrating Salesforce with Azure AD: How to automate User Provisioning](https://youtu.be/MAy8s5WSe3A) |
| Online courses| SkillUp Online: [Managing Identities](https://skillup.online/courses/course-v1:Microsoft+AZ-100.5+2018_T3/) <br> Learn how to integrate Azure AD with many SaaS applications and to secure user access to those applications. | | Books| [Modern Authentication with Azure Active Directory for Web Applications (Developer Reference) 1st Edition](https://www.amazon.com/Authentication-Directory-Applications-Developer-Reference/dp/0735696942/ref=sr_1_fkmr0_1?keywords=Azure+multifactor+authentication&qid=1550168894&s=gateway&sr=8-1-fkmr0). <br> This is an authoritative, deep-dive guide to building Active Directory authentication solutions for these new environments. | | Tutorials| See the [list of tutorials on how to integrate SaaS apps with Azure AD](../saas-apps/tutorial-list.md). |
active-directory Concept Authentication Phone Options https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/concept-authentication-phone-options.md
# Authentication methods in Azure Active Directory - phone options
-For direct authentication using text message, you can [Configure and enable users for SMS-based authentication](howto-authentication-sms-signin.md). SMS-based sign-in is great for Frontline workers. With SMS-based sign-in, users don't need to know a username and password to access applications and services. The user instead enters their registered mobile phone number, receives a text message with a verification code, and enters that in the sign-in interface.
+Microsoft recommends users move away from using SMS or voice calls for multifactor authentication (MFA). Modern authentication methods like [Microsoft Authenticator](concept-authentication-authenticator-app.md) are a recommended alternative. For more information, see [It's Time to Hang Up on Phone Transports for Authentication](https://aka.ms/hangup). Users can still verify themselves using a mobile phone or office phone as a secondary form of authentication for MFA or self-service password reset (SSPR).
-Users can also verify themselves using a mobile phone or office phone as secondary form of authentication used during Azure AD Multi-Factor Authentication or self-service password reset (SSPR). Azure AD Multi-Factor Authentication and SSPR support phone extensions only for office phones.
+You can [configure and enable users for SMS-based authentication](howto-authentication-sms-signin.md) for direct authentication using text messages. SMS-based sign-in is convenient for frontline workers. With SMS-based sign-in, users don't need to know a username and password to access applications and services. The user instead enters their registered mobile phone number, receives a text message with a verification code, and enters that in the sign-in interface.
>[!NOTE] >Phone call verification isn't available for Azure AD tenants with trial subscriptions. For example, if you sign up for a trial license Microsoft Enterprise Mobility and Security (EMS), phone call verification isn't available. Phone numbers must be provided in the format *+CountryCode PhoneNumber*, for example, *+1 4251234567*. There must be a space between the country/region code and the phone number.
Users can also verify themselves using a mobile phone or office phone as seconda
For Azure AD Multi-Factor Authentication or SSPR, users can choose to receive an SMS message with a verification code to enter in the sign-in interface, or receive a phone call.
-If users don't want their mobile phone number to be visible in the directory but want to use it for password reset, administrators shouldn't populate the phone number in the directory. Instead, users should populate their **Authentication Phone** attribute via the combined security info registration at [https://aka.ms/setupsecurityinfo](https://aka.ms/setupsecurityinfo). Administrators can see this information in the user's profile, but it's not published elsewhere.
+If users don't want their mobile phone number to be visible in the directory but want to use it for password reset, administrators shouldn't populate the phone number in the directory. Instead, users should populate their **Authentication Phone** at [My Sign-Ins](https://aka.ms/setupsecurityinfo). Administrators can see this information in the user's profile, but it's not published elsewhere.
:::image type="content" source="media/concept-authentication-methods/user-authentication-methods.png" alt-text="Screenshot of the Azure portal that shows authentication methods with a phone number populated":::
+> [!NOTE]
+> Phone extensions are supported only for office phones.
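As a hedged aside (not from the article), an administrator could review the phone methods a user has registered with Microsoft Graph PowerShell; a minimal sketch, assuming the Microsoft.Graph module and a hypothetical user:

```powershell
# Sketch only: list the phone-based authentication methods registered for a user.
Connect-MgGraph -Scopes "UserAuthenticationMethod.Read.All"

$userId = "babs@contoso.com"   # hypothetical user
Get-MgUserAuthenticationPhoneMethod -UserId $userId |
    Select-Object Id, PhoneNumber, PhoneType, SmsSignInState
```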
+ Microsoft doesn't guarantee consistent SMS or voice-based Azure AD Multi-Factor Authentication prompt delivery by the same number. In the interest of our users, we may add or remove short codes at any time as we make route adjustments to improve SMS deliverability. Microsoft doesn't support short codes for countries/regions besides the United States and Canada. > [!NOTE]
active-directory Concept Registration Mfa Sspr Combined https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/concept-registration-mfa-sspr-combined.md
Or, you can specify a tenant by URL to access security information.
`https://mysignins.microsoft.com/security-info/?tenantId=<Tenant ID>`
+> [!NOTE]
+> Customers attempting to register or manage security info through combined registration or the My Sign-ins page should use a modern browser such as Microsoft Edge.
+>
+> IE11 is not officially supported for creating a webview or browser in applications as it will not work as expected in all scenarios.
+>
+> Applications that haven't been updated, are still using the Azure AD Authentication Library (ADAL), and rely on legacy webviews can fall back to older versions of IE. In these scenarios, users will experience a blank page when directed to the My Sign-ins page. To resolve this issue, switch to a modern browser.
+ ## Next steps To get started, see the tutorials to [enable self-service password reset](tutorial-enable-sspr.md) and [enable Azure AD Multi-Factor Authentication](tutorial-enable-azure-mfa.md).
active-directory All Reports https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-infrastructure-entitlement-management/all-reports.md
This article provides you with a list and description of the system reports avai
| Report name | Type of the report | File format | Description | Availability | Collated report? | |-|--|--|| -|-| | Access Key Entitlements and Usage Report | Summary </p>Detailed | CSV | This report displays: </p> - Access key age, last rotation date, and last usage date availability in the summary report. Use this report to decide when to rotate access keys. </p> - Granted task and Permissions creep index (PCI) score. This report provides supporting information when you want to take the action on the keys. | AWS</p>Azure</p>GCP | Yes |
-| All Permissions for Identity | Detailed | CSV | This report lists all the assigned permissions for the selected identities. | AWS</p>Azure</p>GCP | N/A |
+| All Permissions for Identity | Summary | CSV | This report lists all the assigned permissions for the selected identities. | AWS</p>Azure</p>GCP | N/A |
| Group Entitlements and Usage | Summary | CSV | This report tracks all group level entitlements and the permission assignment, PCI. The number of members is also listed as part of this report. | AWS</p>Azure</p>GCP | Yes | | Identity Permissions | Summary | CSV | This report tracks any, or specific, task usage per **User**, **Group**, **Role**, or **App**. | AWS</p>Azure</p>GCP | N/A | | AWS Role Policy Audit | Detailed | CSV | This report gives the list of AWS roles, which can be assumed by **User**, **Group**, **resource** or **AWS Role**. | AWS | N/A | | Cross Account Access Details| Detailed | CSV | This report helps track **User**, **Group** from other AWS accounts have cross account access to the specified AWS account. | AWS | N/A | | PCI History | Summary | CSV | This report helps track **Monthly PCI History** for each authorized system. It can be used to plot the trend of the PCI. | AWS</p>Azure</p>GCP | Yes |
-| Permissions Analytics Report (PAR) | Detailed | CSV | This report lists the different key findings in the selected authorized systems. The key findings include **Super identities**, **Inactive identities**, **Over-provisioned active identities**, **Storage bucket hygiene**, **Access key age (AWS)**, and so on. </p>This report helps administrators to visualize the findings across the organization and make decisions. | AWS</p>Azure</p>GCP | Yes |
+| Permissions Analytics Report (PAR) | Detailed | XLSX, PDF | This report lists the different key findings in the selected authorized systems. The key findings include **Super identities**, **Inactive identities**, **Over-provisioned active identities**, **Storage bucket hygiene**, **Access key age (AWS)**, and so on. </p>This report helps administrators to visualize the findings across the organization and make decisions. | AWS</p>Azure</p>GCP | Yes for XLSX |
| Role/Policy Details | Summary | CSV | This report captures **Assigned/Unassigned** and **Custom/system policy with used/unused condition** for specific or all AWS accounts. </p>Similar data can be captured for Azure and GCP for assigned and unassigned roles. | AWS</p>Azure</p>GCP | No | | User Entitlements and Usage | Detailed <p>Summary <p> Permissions | CSV | **Summary** This report provides the summary view of all the identities with Permissions Creep Index (PCI), granted and executed tasks per Azure subscription, AWS account, GCP project. </p>**Detailed** This report provides a detailed view of Azure role assignments, GCP role assignments and AWS policy assignment along with Permissions Creep Index (PCI), tasks used by each identity. </p>**Permissions** This report provides the list of role assignments for Azure, GCP and policy assignments in AWS per identity. | AWS</p>Azure</p>GCP | Yes |
active-directory Ui Triggers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-infrastructure-entitlement-management/ui-triggers.md
Title: View information about activity triggers in Permissions Management
-description: How to view information about activity triggers in the Activity triggers dashboard in Permissions Management.
+ Title: View information about alerts and alert triggers in Permissions Management
+description: How to view information about alerts and alert triggers in the Alerts dashboard in Permissions Management.
Last updated 02/23/2022
-# View information about activity triggers
+# View information about alerts and alert triggers
-This article describes how to use the **Activity triggers** dashboard in Permissions Management to view information about activity alerts and triggers.
+This article describes how to use the **Alerts** dashboard in Permissions Management to view information about alerts and alert triggers.
-## Display the Activity triggers dashboard
+## Display the Alerts dashboard
-- In the Permissions Management home page, select **Activity triggers** (the bell icon).
+- In the Permissions Management home page, select **Alerts** (the bell icon).
- The **Activity triggers** dashboard has four tabs:
+ The **Alerts** dashboard has four tabs:
- **Activity** - **Rule-Based Anomaly**
The **Rule-Based Anomaly** tab and the **Statistical Anomaly** tab both have one
- **Columns**: Select the columns you want to display: **Task**, **Resource**, and **Identity**. - To return to the system default settings, select **Reset to default**.
+Alert triggers are based on data collected. All alerts, if triggered, are shown every hour under the Alerts subtab.
++ ## View information about alert triggers The **Alert Triggers** subtab in the **Activity**, **Rule-Based Anomaly**, **Statistical Anomaly**, and **Permission Analytics** tab displays the following information:
active-directory Concept Conditional Access Conditions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/conditional-access/concept-conditional-access-conditions.md
Previously updated : 07/07/2023 Last updated : 07/17/2023 -+ - # Conditional Access: Conditions Within a Conditional Access policy, an administrator can make use of signals from conditions like risk, device platform, or location to enhance their policy decisions.
Within a Conditional Access policy, an administrator can make use of signals fro
Multiple conditions can be combined to create fine-grained and specific Conditional Access policies.
-For example, when accessing a sensitive application an administrator may factor sign-in risk information from Identity Protection and location into their access decision in addition to other controls like multifactor authentication.
+When users access a sensitive application, an administrator may factor multiple conditions into their access decisions like:
+
+- Sign-in risk information from Identity Protection
+- Network location
+- Device information
## Sign-in risk
-For customers with access to [Identity Protection](../identity-protection/overview-identity-protection.md), sign-in risk can be evaluated as part of a Conditional Access policy. Sign-in risk represents the probability that a given authentication request isn't authorized by the identity owner. More information about sign-in risk can be found in the articles, [What is risk](../identity-protection/concept-identity-protection-risks.md) and [How To: Configure and enable risk policies](../identity-protection/howto-identity-protection-configure-risk-policies.md).
+Administrators with access to [Identity Protection](../identity-protection/overview-identity-protection.md) can evaluate sign-in risk as part of a Conditional Access policy. Sign-in risk represents the probability that a given authentication request wasn't made by the identity owner. More information about sign-in risk can be found in the articles, [What is risk](../identity-protection/concept-identity-protection-risks.md) and [How To: Configure and enable risk policies](../identity-protection/howto-identity-protection-configure-risk-policies.md).
## User risk
-For customers with access to [Identity Protection](../identity-protection/overview-identity-protection.md), user risk can be evaluated as part of a Conditional Access policy. User risk represents the probability that a given identity or account is compromised. More information about user risk can be found in the articles, [What is risk](../identity-protection/concept-identity-protection-risks.md) and [How To: Configure and enable risk policies](../identity-protection/howto-identity-protection-configure-risk-policies.md).
+Administrators with access to [Identity Protection](../identity-protection/overview-identity-protection.md) can evaluate user risk as part of a Conditional Access policy. User risk represents the probability that a given identity or account is compromised. More information about user risk can be found in the articles, [What is risk](../identity-protection/concept-identity-protection-risks.md) and [How To: Configure and enable risk policies](../identity-protection/howto-identity-protection-configure-risk-policies.md).
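As a non-authoritative sketch (not part of the article), a risk-based policy like those described here could be created with Microsoft Graph PowerShell; the display name, scoping, and report-only state are assumptions:

```powershell
# Sketch only: report-only policy requiring MFA for medium/high sign-in risk or high user risk.
Connect-MgGraph -Scopes "Policy.ReadWrite.ConditionalAccess"

$policy = @{
    displayName = "Example - require MFA for risky sign-ins"   # illustrative name
    state       = "enabledForReportingButNotEnforced"          # report-only
    conditions  = @{
        users            = @{ includeUsers = @("All") }
        applications     = @{ includeApplications = @("All") }
        signInRiskLevels = @("high", "medium")
        userRiskLevels   = @("high")
    }
    grantControls = @{
        operator        = "OR"
        builtInControls = @("mfa")
    }
}

New-MgIdentityConditionalAccessPolicy -BodyParameter $policy
```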
## Device platforms
-The device platform is characterized by the operating system that runs on a device. Azure AD identifies the platform by using information provided by the device, such as user agent strings. Since user agent strings can be modified, this information is unverified. Device platform should be used in concert with Microsoft Intune device compliance policies or as part of a block statement. The default is to apply to all device platforms.
+Conditional Access identifies the device platform by using information provided by the device, such as user agent strings. Since user agent strings can be modified, this information is unverified. Device platform should be used in concert with Microsoft Intune device compliance policies or as part of a block statement. The default is to apply to all device platforms.
-Azure AD Conditional Access supports the following device platforms:
+Conditional Access supports the following device platforms:
- Android - iOS
We don't support selecting macOS or Linux device platforms when selecting **Requ
## Locations
-When configuring location as a condition, organizations can choose to include or exclude locations. These named locations may include the public IPv4 or IPv6 network information, country or region, unknown areas that don't map to specific countries or regions, and [Global Secure Access' compliant network](../../global-secure-access/how-to-compliant-network.md).
+When administrators configure location as a condition, they can choose to include or exclude locations. These named locations may include the public IPv4 or IPv6 network information, country or region, unknown areas that don't map to specific countries or regions, and [Global Secure Access' compliant network](../../global-secure-access/how-to-compliant-network.md).
-When including **any location**, this option includes any IP address on the internet not just configured named locations. When selecting **any location**, administrators can choose to exclude **all trusted** or **selected locations**.
+When including **any location**, this option includes any IP address on the internet, not just configured named locations. When administrators select **any location**, they can choose to exclude **all trusted** or **selected locations**.
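For illustration only (an assumption about the Microsoft Graph policy schema, not text from the article), the location condition described above might be expressed like this:

```powershell
# Sketch only: apply to any location except all locations marked as trusted.
# This hashtable would sit under a policy's "conditions" block, as in the
# risk-based example earlier in this digest.
$locationCondition = @{
    locations = @{
        includeLocations = @("All")          # any IP address on the internet
        excludeLocations = @("AllTrusted")   # all trusted named locations
    }
}
```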
Administrators can create policies that target specific locations along with other conditions. More information about locations can be found in the article, [What is the location condition in Azure Active Directory Conditional Access](location-condition.md). ## Client apps
-By default, all newly created Conditional Access policies will apply to all client app types even if the client apps condition isn't configured.
+By default, all newly created Conditional Access policies apply to all client app types even if the client apps condition isn't configured.
> [!NOTE] > The behavior of the client apps condition was updated in August 2020. If you have existing Conditional Access policies, they will remain unchanged. However, if you click on an existing policy, the configure toggle has been removed and the client apps the policy applies to are selected. > [!IMPORTANT]
-> Sign-ins from legacy authentication clients don't support MFA and don't pass device state information to Azure AD, so they will be blocked by Conditional Access grant controls, like requiring MFA or compliant devices. If you have accounts which must use legacy authentication, you must either exclude those accounts from the policy, or configure the policy to only apply to modern authentication clients.
+> Sign-ins from legacy authentication clients don't support multifactor authentication (MFA) and don't pass device state information, so they are blocked by Conditional Access grant controls, like requiring MFA or compliant devices. If you have accounts which must use legacy authentication, you must either exclude those accounts from the policy, or configure the policy to only apply to modern authentication clients.
The **Configure** toggle when set to **Yes** applies to checked items; when set to **No**, it applies to all client apps, including modern and legacy authentication clients. This toggle doesn't appear in policies created before August 2020.
The **Configure** toggle when set to **Yes** applies to checked items, when set
- Legacy authentication clients - Exchange ActiveSync clients - This selection includes all use of the Exchange ActiveSync (EAS) protocol.
- When policy blocks the use of Exchange ActiveSync, the affected user will receive a single quarantine email. This email will provide information on why they're blocked and include remediation instructions if able.
+ - When policy blocks the use of Exchange ActiveSync, the affected user receives a single quarantine email. This email provides information on why they're blocked and includes remediation instructions if able.
 - Administrators can apply policy only to supported platforms (such as iOS, Android, and Windows) through the Conditional Access Microsoft Graph API. - Other clients - This option includes clients that use basic/legacy authentication protocols that don't support modern authentication.
The **Configure** toggle when set to **Yes** applies to checked items, when set
- POP3 - Used by POP email clients. - Reporting Web Services - Used to retrieve report data in Exchange Online.
-These conditions are commonly used when requiring a managed device, blocking legacy authentication, and blocking web applications but allowing mobile or desktop apps.
+These conditions are commonly used to:
+
+- Require a managed device
+- Block legacy authentication
+- Block web applications but allow mobile or desktop apps
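As a hedged sketch of the second scenario in the list above (blocking legacy authentication), the relevant condition and grant fragments in Microsoft Graph PowerShell might look like the following; the enum values are assumptions about the Graph schema:

```powershell
# Sketch only: condition and grant fragments for blocking legacy authentication clients.
# Combine these with user and application assignments in a full policy, as in the
# risk-based example earlier in this digest.
$conditions = @{
    clientAppTypes = @("exchangeActiveSync", "other")   # legacy authentication clients
}
$grantControls = @{
    operator        = "OR"
    builtInControls = @("block")
}
```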
### Supported browsers
This setting works with all browsers. However, to satisfy a device policy, like
These browsers support device authentication, allowing the device to be identified and validated against a policy. The device check fails if the browser is running in private mode or if cookies are disabled. > [!NOTE]
-> Edge 85+ requires the user to be signed in to the browser to properly pass device identity. Otherwise, it behaves like Chrome without the accounts extension. This sign-in might not occur automatically in a Hybrid Azure AD Join scenario.
+> Edge 85+ requires the user to be signed in to the browser to properly pass device identity. Otherwise, it behaves like Chrome without the accounts extension. This sign-in might not occur automatically in a hybrid device join scenario.
> Safari is supported for device-based Conditional Access on a managed device, but it cannot satisfy the **Require approved client app** or **Require app protection policy** conditions. A managed browser like Microsoft Edge will satisfy approved client app and app protection policy requirements. > On iOS with a third-party MDM solution, only the Microsoft Edge browser supports device policy.
These browsers support device authentication, allowing the device to be identifi
#### Why do I see a certificate prompt in the browser
-On Windows 7, iOS, Android, and macOS Azure AD identifies the device using a client certificate that is provisioned when the device is registered with Azure AD. When a user first signs in through the browser the user is prompted to select the certificate. The user must select this certificate before using the browser.
+On Windows 7, iOS, Android, and macOS, devices are identified using a client certificate. This certificate is provisioned when the device is registered. When a user first signs in through the browser, the user is prompted to select the certificate. The user must select this certificate before using the browser.
#### Chrome support
For Chrome support in **Windows 8.1 and 7**, create the following registry key:
### Supported mobile applications and desktop clients
-Organizations can select **Mobile apps and desktop clients** as client app.
+Administrators can select **Mobile apps and desktop clients** as client app.
-This setting has an impact on access attempts made from the following mobile apps and desktop clients:
+This setting has an effect on access attempts made from the following mobile apps and desktop clients:
| Client apps | Target Service | Platform | | | | |
This setting has an impact on access attempts made from the following mobile app
### Exchange ActiveSync clients -- Organizations can only select Exchange ActiveSync clients when assigning policy to users or groups. Selecting **All users**, **All guest and external users**, or **Directory roles** will cause all users to be subject of the policy.-- When creating a policy assigned to Exchange ActiveSync clients, **Exchange Online** should be the only cloud application assigned to the policy. -- Organizations can narrow the scope of this policy to specific platforms using the **Device platforms** condition.
+- Administrators can only select Exchange ActiveSync clients when assigning policy to users or groups. Selecting **All users**, **All guest and external users**, or **Directory roles** causes all users to be subject to the policy.
+- When administrators create a policy assigned to Exchange ActiveSync clients, **Exchange Online** should be the only cloud application assigned to the policy.
+- Administrators can narrow the scope of this policy to specific platforms using the **Device platforms** condition.
If the access control assigned to the policy uses **Require approved client app**, the user is directed to install and use the Outlook mobile client. In the case that **Multifactor Authentication**, **Terms of use**, or **custom controls** are required, affected users are blocked, because basic authentication doesn't support these controls.
By selecting **Other clients**, you can specify a condition that affects apps th
## Device state (deprecated)
-**This preview feature has been deprecated.** Customers should use the **Filter for devices** condition in the Conditional Access policy, to satisfy scenarios previously achieved using device state (deprecated) condition.
--
-The device state condition was used to exclude devices that are hybrid Azure AD joined and/or devices marked as compliant with a Microsoft Intune compliance policy from an organization's Conditional Access policies.
-
-For example, *All users* accessing the *Microsoft Azure Management* cloud app including **All device state** excluding **Device Hybrid Azure AD joined** and **Device marked as compliant** and for *Access controls*, **Block**.
-- This example would create a policy that only allows access to Microsoft Azure Management from devices that are either hybrid Azure AD joined or devices marked as compliant.-
-The above scenario, can be configured using *All users* accessing the *Microsoft Azure Management* cloud app with **Filter for devices** condition in **exclude** mode using the following rule **device.trustType -eq "ServerAD" -or device.isCompliant -eq True** and for *Access controls*, **Block**.
-- This example would create a policy that blocks access to Microsoft Azure Management cloud app from unmanaged or non-compliant devices.
+**This feature has been deprecated.** Customers should use the **Filter for devices** condition in the Conditional Access policy to satisfy scenarios previously achieved using the device state condition.
> [!IMPORTANT] > Device state and filters for devices cannot be used together in Conditional Access policy. Filters for devices provides more granular targeting including support for targeting device state information through the `trustType` and `isCompliant` property. ## Filter for devices
-ThereΓÇÖs a new optional condition in Conditional Access called filter for devices. When configuring filter for devices as a condition, organizations can choose to include or exclude devices based on a filter using a rule expression on device properties. The rule expression for filter for devices can be authored using rule builder or rule syntax. This experience is similar to the one used for dynamic membership rules for groups. For more information, see the article [Conditional Access: Filter for devices](concept-condition-filters-for-devices.md).
+When administrators configure filter for devices as a condition, they can choose to include or exclude devices based on a filter using a rule expression on device properties. The rule expression for filter for devices can be authored using rule builder or rule syntax. This experience is similar to the one used for dynamic membership rules for groups. For more information, see the article [Conditional Access: Filter for devices](concept-condition-filters-for-devices.md).
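As an illustration (not from the article), a filter for devices rule written in the rule syntax, reusing the rule string quoted earlier in this entry, might look like the following fragment:

```powershell
# Sketch only: exclude hybrid-joined or compliant devices by filtering on device properties.
# This hashtable would sit under a policy's "conditions" block.
$deviceCondition = @{
    devices = @{
        deviceFilter = @{
            mode = "exclude"
            rule = 'device.trustType -eq "ServerAD" -or device.isCompliant -eq True'
        }
    }
}
```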
## Next steps
active-directory Concept Conditional Access Grant https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/conditional-access/concept-conditional-access-grant.md
Previously updated : 02/16/2023 Last updated : 06/26/2023
Administrators can choose to require [specific authentication strengths](../auth
Organizations that have deployed Intune can use the information returned from their devices to identify devices that meet specific policy compliance requirements. Intune sends compliance information to Azure AD so Conditional Access can decide to grant or block access to resources. For more information about compliance policies, see [Set rules on devices to allow access to resources in your organization by using Intune](/intune/protect/device-compliance-get-started).
-A device can be marked as compliant by Intune for any device operating system or by a third-party mobile device management system for Windows 10 devices. You can find a list of supported third-party mobile device management systems in [Support third-party device compliance partners in Intune](/mem/intune/protect/device-compliance-partners).
+A device can be marked as compliant by Intune for any device operating system or by a third-party mobile device management system for Windows devices. You can find a list of supported third-party mobile device management systems in [Support third-party device compliance partners in Intune](/mem/intune/protect/device-compliance-partners).
Devices must be registered in Azure AD before they can be marked as compliant. You can find more information about device registration in [What is a device identity?](../devices/overview.md). The **Require device to be marked as compliant** control:+ - Only supports Windows 10+, iOS, Android, and macOS devices registered with Azure AD and enrolled with Intune.
- - Considers Microsoft Edge in InPrivate mode a non-compliant device.
+ - Microsoft Edge in InPrivate mode is considered a non-compliant device.
> [!NOTE]
-> On Windows 7, iOS, Android, macOS, and some third-party web browsers, Azure AD identifies the device by using a client certificate that is provisioned when the device is registered with Azure AD. When a user first signs in through the browser, the user is prompted to select the certificate. The user must select this certificate before they can continue to use the browser.
+> On Windows, iOS, Android, macOS, and some third-party web browsers, Azure AD identifies the device by using a client certificate that is provisioned when the device is registered with Azure AD. When a user first signs in through the browser, the user is prompted to select the certificate. The user must select this certificate before they can continue to use the browser.
You can use the Microsoft Defender for Endpoint app with the approved client app policy in Intune to set the device compliance policy to Conditional Access policies. There's no exclusion required for the Microsoft Defender for Endpoint app while you're setting up Conditional Access. Although Microsoft Defender for Endpoint on Android and iOS (app ID dd47d17a-3194-4d86-bfd5-c6ae6f5651e3) isn't an approved app, it has permission to report device security posture. This permission enables the flow of compliance information to Conditional Access.
Organizations can require that an approved client app is used to access selecte
To apply this grant control, the device must be registered in Azure AD, which requires using a broker app. The broker app can be Microsoft Authenticator for iOS, or either Microsoft Authenticator or Microsoft Company Portal for Android devices. If a broker app isn't installed on the device when the user attempts to authenticate, the user is redirected to the appropriate app store to install the required broker app.
-The following client apps support this setting, this list isn't exhaustive and is subject to change::
+The following client apps support this setting. This list isn't exhaustive and is subject to change:
- Microsoft Azure Information Protection - Microsoft Cortana
See [Require approved client apps for cloud app access with Conditional Access](
### Require app protection policy
-In your Conditional Access policy, you can require that an [Intune app protection policy](/intune/app-protection-policy) is present on the client app before access is available to the selected cloud apps.
+In Conditional Access policy, you can require that an [Intune app protection policy](/intune/app-protection-policy) is present on the client app before access is available to the selected applications. These mobile application management (MAM) app protection policies allow you to manage and protect your organization's data within specific applications.
-To apply this grant control, Conditional Access requires that the device is registered in Azure AD, which requires using a broker app. The broker app can be either Microsoft Authenticator for iOS or Microsoft Company Portal for Android devices. If a broker app isn't installed on the device when the user attempts to authenticate, the user is redirected to the app store to install the broker app.
+To apply this grant control, Conditional Access requires that the device is registered in Azure AD, which requires using a broker app. The broker app can be either Microsoft Authenticator for iOS or Microsoft Company Portal for Android devices. If a broker app isn't installed on the device when the user attempts to authenticate, the user is redirected to the app store to install the broker app. App protection policies are generally available for iOS and Android, and in public preview for Microsoft Edge on Windows. [Windows devices support no more than 3 Azure AD user accounts in the same session](../devices/faq.yml#i-can-t-add-more-than-3-azure-ad-user-accounts-under-the-same-user-session-on-a-windows-10-11-device--why). For more information about how to apply policy to Windows devices, see the article [Require an app protection policy on Windows devices (preview)](how-to-app-protection-policy-windows.md).
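As a hedged aside (an assumption about the Microsoft Graph representation, not text from the article), the grant controls discussed here might be expressed with the built-in `compliantApplication` control, with `approvedApplication` covering the approved client app grant:

```powershell
# Sketch only: grant fragment requiring an app protection policy or an approved client app.
$grantControls = @{
    operator        = "OR"
    builtInControls = @("compliantApplication", "approvedApplication")
}
```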
-Applications must have the Intune SDK with policy assurance implemented and must meet certain other requirements to support this setting. Developers who are implementing applications with the Intune SDK can find more information on these requirements in the [SDK documentation](/mem/intune/developer/app-sdk-get-started).
+Applications must meet certain requirements to support app protection policies. Developers can find more information about these requirements in the section [Apps you can manage with app protection policies](/mem/intune/apps/app-protection-policy#apps-you-can-manage-with-app-protection-policies).
-The following client apps are confirmed to support this setting, this list isn't exhaustive and is subject to change:
+The following client apps support this setting. This list isn't exhaustive and is subject to change. If your app isn't in the list, please check with the application vendor to confirm support:
- Adobe Acrobat Reader mobile app - iAnnotate for Office 365
The following client apps are confirmed to support this setting, this list isn't
- Provectus - Secure Contacts - Yammer (Android, iOS, and iPadOS)
-This list isn't all encompassing, if your app isn't in this list please check with the application vendor to confirm support.
- > [!NOTE] > Kaizala, Skype for Business, and Visio don't support the **Require app protection policy** grant. If you require these apps to work, use the **Require approved apps** grant exclusively. Using the "or" clause between the two grants will not work for these three applications.
-Apps for the app protection policy support the Intune mobile application management feature with policy protection.
-
-The **Require app protection policy** control:
--- Only supports iOS and Android for device platform condition.-- Requires a broker app to register the device. On iOS, the broker app is Microsoft Authenticator. On Android, the broker app is Intune Company Portal.- See [Require app protection policy and an approved client app for cloud app access with Conditional Access](app-protection-based-conditional-access.md) for configuration examples. ### Require password change
-When user risk is detected, administrators can employ the user risk policy conditions to have the user securely change a password by using Azure AD self-service password reset. Users can perform a self-service password reset to self-remediate. This process will close the user risk event to prevent unnecessary alerts for administrators.
+When user risk is detected, administrators can employ the user risk policy conditions to have the user securely change a password by using Azure AD self-service password reset. Users can perform a self-service password reset to self-remediate. This process closes the user risk event to prevent unnecessary alerts for administrators.
When a user is prompted to change a password, they'll first be required to complete multifactor authentication. Make sure all users have registered for multifactor authentication, so they're prepared in case risk is detected for their account.
active-directory How To App Protection Policy Windows https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/conditional-access/how-to-app-protection-policy-windows.md
+
+ Title: Conditional Access - Require app protection policy for Windows
+description: Create a Conditional Access policy to require app protection policy for Windows.
+++++ Last updated : 07/14/2023++++++++
+# Require an app protection policy on Windows devices (preview)
+
+App protection policies apply mobile application management (MAM) to specific applications on a device. These policies allow for securing data within an application in support of scenarios like bring your own device (BYOD). In the preview, we support applying policy to the Microsoft Edge browser on Windows 11 devices.
+
+![Screenshot of a browser requiring the user to sign in to their Microsoft Edge profile to access an application.](./media/how-to-app-protection-policy-windows/browser-sign-in-with-edge-profile.png)
+
+## Prerequisites
+
+The following requirements must be met before you can apply an [app protection policy](/mem/intune/apps/app-protection-policy) to Windows client devices:
+
+- Ensure your Windows client version is Windows 11, build 10.0.22621 (22H2) or newer.
+- Ensure your device isn't managed, including:
+ - Not Azure AD joined or enrolled in Mobile Device Management (MDM) for the same tenant as your MAM user.
+ - Not Azure AD registered (workplace joined) with more than two users besides the MAM user. There's a limit of no more than [three Azure AD registered users to a device](../devices/faq.yml#i-can-t-add-more-than-3-azure-ad-user-accounts-under-the-same-user-session-on-a-windows-10-11-device--why).
+- Clients must be running Microsoft Edge build v115.0.1901.155 or newer.
+ - You can check the version by going to `edge://settings/help` in the address bar.
+- Clients must have the **Enable MAM on Edge desktop platforms** flag enabled.
+ - You can enable this by going to `edge://flags/#edge-desktop-mam` in the address bar.
+ - Enable **Enable MAM on Edge desktop platforms**
+ - Click the **Restart** button at the bottom of the window.
+
+## User exclusions
+
+## Create a Conditional Access policy
+
+The following policy is put into [Report-only mode](howto-conditional-access-insights-reporting.md) to start so administrators can determine the impact it has on existing users. When administrators are comfortable that the policy applies as they intend, they can switch to **On** or stage the deployment by adding specific groups and excluding others.
+
+### Require app protection policy for Windows devices
+
+The following steps help create a Conditional Access policy requiring an app protection policy when using a Windows device. The app protection policy must also be configured and assigned to your users in Microsoft Intune. For more information about how to create the app protection policy, see the article [Preview: App protection policy settings for Windows](/mem/intune/apps/app-protection-policy-settings-windows).
+
+1. Sign in to the **[Microsoft Entra admin center](https://entra.microsoft.com)** as a [Conditional Access Administrator](../roles/permissions-reference.md#conditional-access-administrator).
+1. Browse to **Microsoft Entra ID (Azure AD)** > **Protection** > **Conditional Access**.
+1. Select **Create new policy**.
+1. Give your policy a name. We recommend that organizations create a meaningful standard for the names of their policies.
+1. Under **Assignments**, select **Users or workload identities**.
+ 1. Under **Include**, select **All users**.
+ 1. Under **Exclude**, select **Users and groups** and choose at least your organization's emergency access or break-glass accounts.
+1. Under **Target resources** > **Cloud apps** > **Include**, select **Office 365**.
+1. Under **Conditions**:
+ 1. **Device platforms**, set **Configure** to **Yes**.
+ 1. Under **Include**, **Select device platforms**.
+ 1. Choose **Windows** only.
+ 1. Select **Done**.
+ 1. **Client apps**, set **Configure** to **Yes**.
+ 1. Select **Browser** only.
+1. Under **Access controls** > **Grant**, select **Grant access**.
+ 1. Select **Require app protection policy**.
+ 1. **For multiple controls**, select **Require one of the selected controls**.
+1. Confirm your settings and set **Enable policy** to **Report-only**.
+1. Select **Create** to create and enable your policy.
+
+After confirming your settings using [report-only mode](howto-conditional-access-insights-reporting.md), an administrator can move the **Enable policy** toggle from **Report-only** to **On**.
+
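The portal steps above could be approximated with Microsoft Graph PowerShell; the following is a hedged sketch, with a placeholder emergency-access group ID and an assumed mapping of the portal options to Graph property values:

```powershell
# Sketch only: report-only policy requiring an app protection policy for browser
# access to Office 365 from Windows devices. Replace the excluded group ID with
# your organization's emergency access (break-glass) group.
Connect-MgGraph -Scopes "Policy.ReadWrite.ConditionalAccess"

$policy = @{
    displayName = "Example - require app protection policy on Windows"
    state       = "enabledForReportingButNotEnforced"   # report-only
    conditions  = @{
        users = @{
            includeUsers  = @("All")
            excludeGroups = @("<emergency-access-group-id>")   # placeholder
        }
        applications   = @{ includeApplications = @("Office365") }
        platforms      = @{ includePlatforms = @("windows") }
        clientAppTypes = @("browser")
    }
    grantControls = @{
        operator        = "OR"
        builtInControls = @("compliantApplication")   # require app protection policy
    }
}

New-MgIdentityConditionalAccessPolicy -BodyParameter $policy
```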
+## Sign in to Windows devices
+
+When users attempt to sign in to a site that is protected by an app protection policy for the first time, they're prompted: To access your service, app or website, you may need to sign in to Microsoft Edge using `username@domain.com` or register your device with `organization` if you are already signed in.
+
+Clicking on **Switch Edge profile** opens a window listing their Work or school account along with an option to **Sign in to sync data**.
+
+ ![Screenshot showing the popup in Microsoft Edge asking user to sign in.](./media/how-to-app-protection-policy-windows/browser-sign-in-continue-with-work-or-school-account.png)
+
+This process opens a window offering to allow Windows to remember your account and automatically sign you in to your apps and websites.
+
+> [!CAUTION]
+> You must *UNCHECK* the box **Allow my organization to manage my device**. Leaving this checked enrolls your device in mobile device management (MDM), not mobile application management (MAM).
+
+![Screenshot showing the stay signed in to all your apps window. Uncheck the allow my organization to manage my device checkbox.](./media/how-to-app-protection-policy-windows/stay-signed-in-to-all-your-apps.png)
+
+After selecting **OK**, you may see a progress window while policy is applied. After a few moments, you should see a window saying "You're all set," and app protection policies are applied.
+
+## Troubleshooting
+
+### Common issues
+
+In some circumstances, after getting the "you're all set" page you may still be prompted to sign in with your work account. This prompt may happen when:
+
+- Your profile has been added to Microsoft Edge, but MAM enrollment is still being processed.
+- Your profile has been added to Microsoft Edge, but you selected "this app only" on the heads up page.
+- You're enrolled in MAM but your enrollment has expired or isn't compliant with your organization's requirements.
+
+To resolve these possible scenarios:
+
+- Wait a few minutes and try again in a new tab.
+- Go to **Settings** > **Accounts** > **Access work or school**, then add the account there.
+- Contact your administrator to check that Microsoft Intune MAM policies are applying to your account correctly.
+
+### Existing account
+
+If there's a pre-existing, unregistered account like `user@contoso.com` in Microsoft Edge, or if a user signs in without registering using the heads up page, the account isn't properly enrolled in MAM and the user is blocked from enrolling. This is a known issue that is currently being worked on.
+
+## Next steps
+
+- [What is Microsoft Intune app management?](/mem/intune/apps/app-management)
+- [App protection policies overview](/mem/intune/apps/app-protection-policy)
active-directory Howto Policy Approved App Or App Protection https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/conditional-access/howto-policy-approved-app-or-app-protection.md
Previously updated : 08/22/2022 Last updated : 06/26/2023
People regularly use their mobile devices for both personal and work tasks. Whil
With Conditional Access, organizations can restrict access to [approved (modern authentication capable) client apps with Intune app protection policies](concept-conditional-access-grant.md#require-app-protection-policy). For older client apps that may not support app protection policies, administrators can restrict access to [approved client apps](concept-conditional-access-grant.md#require-approved-client-app). > [!WARNING]
-> App protection policies are supported on iOS and Android only.
+> App protection policies are supported on iOS and Android where applications meet specific requirements. **App protection policies are supported on Windows in preview for the Microsoft Edge browser only.**
> Not all applications are supported as approved applications or support application protection policies. For a list of some common client apps, see [App protection policy requirement](concept-conditional-access-grant.md#require-app-protection-policy). If your application is not listed there, contact the application developer. >
Organizations can choose to deploy this policy using the steps outlined below or
1. Under **Cloud apps or actions**, select **All cloud apps**. 1. Under **Conditions** > **Device platforms**, set **Configure** to **Yes**. 1. Under **Include**, **Select device platforms**.
- 1. Choose **Android** and **iOS**
+ 1. Choose **Android** and **iOS**.
1. Select **Done**. 1. Under **Access controls** > **Grant**, select **Grant access**. 1. Select **Require approved client app** and **Require app protection policy**
active-directory Publisher Verification Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/publisher-verification-overview.md
App developers must meet a few requirements to complete the publisher verificati
- The Azure AD tenant where the app is registered must be associated with the PGA. If the tenant where the app is registered isn't the primary tenant associated with the PGA, complete the steps to [set up the MPN PGA as a multitenant account and associate the Azure AD tenant](/partner-center/multi-tenant-account#add-an-azure-ad-tenant-to-your-account). -- The app must be registered in an Azure AD tenant and have a [publisher domain](howto-configure-publisher-domain.md) set.
+- The app must be registered in an Azure AD tenant and have a [publisher domain](howto-configure-publisher-domain.md) set. The feature isn't supported in Azure AD B2C tenants.
- The domain of the email address that's used during MPN account verification must either match the publisher domain that's set for the app or be a DNS-verified [custom domain](../fundamentals/add-custom-domain.md) that's added to the Azure AD tenant. (**NOTE**: the app's publisher domain can't be *.onmicrosoft.com to be publisher verified)
active-directory Supported Accounts Validation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/supported-accounts-validation.md
See the following table for the validation differences of various properties for
| Property | `AzureADMyOrg` | `AzureADMultipleOrgs` | `AzureADandPersonalMicrosoftAccount` and `PersonalMicrosoftAccount` | | -- | | | -- |
-| Application ID URI (`identifierURIs`) | Must be unique in the tenant <br><br> `urn://` schemes are supported <br><br> Wildcards aren't supported <br><br> Query strings and fragments are supported <br><br> Maximum length of 255 characters <br><br> No limit\* on number of identifierURIs | Must be globally unique <br><br> `urn://` schemes are supported <br><br> Wildcards aren't supported <br><br> Query strings and fragments are supported <br><br> Maximum length of 255 characters <br><br> No limit\* on number of identifierURIs | Must be globally unique <br><br> urn:// schemes aren't supported <br><br> Wildcards, fragments, and query strings aren't supported <br><br> Maximum length of 120 characters <br><br> Maximum of 50 identifierURIs |
+| Application ID URI (`identifierURIs`) | Must be unique in the tenant <br><br> `urn://` schemes are supported <br><br> Wildcards aren't supported <br><br> Query strings and fragments are supported <br><br> Maximum length of 255 characters <br><br> No limit\* on number of identifierURIs | Must be globally unique <br><br> `urn://` schemes are supported <br><br> Wildcards aren't supported <br><br> Query strings and fragments are supported <br><br> Maximum length of 255 characters <br><br> No limit\* on number of identifierURIs | Must be globally unique <br><br> `urn://` schemes aren't supported <br><br> Wildcards, fragments, and query strings aren't supported <br><br> Maximum length of 120 characters <br><br> Maximum of 50 identifierURIs |
| National clouds | Supported | Supported | Not supported | | Certificates (`keyCredentials`) | Symmetric signing key | Symmetric signing key | Encryption and asymmetric signing key | | Client secrets (`passwordCredentials`) | No limit\* | No limit\* | If liveSDK is enabled: Maximum of two client secrets |
active-directory Licensing Groups Resolve Problems https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/enterprise-users/licensing-groups-resolve-problems.md
To see which users and groups are consuming licenses, select a product. Under **
**Problem:** One of the products that's specified in the group contains a service plan that conflicts with another service plan that's already assigned to the user via a different product. Some service plans are configured in a way that they can't be assigned to the same user as another, related service plan.
-Consider the following example. A user has a license for Office 365 Enterprise *E1* assigned directly, with all the plans enabled. The user has been added to a group that has the Office 365 Enterprise *E3* product assigned to it. The E3 product contains service plans that can't overlap with the plans that are included in E1, so the group license assignment fails with the "Conflicting service plans" error. In this example, the conflicting service plans are:
--- Exchange Online (Plan 2) conflicts with Exchange Online (Plan 1).-
-To solve this conflict, you need to disable one of the plans. You can disable the E1 license that's directly assigned to the user. Or, you need to modify the entire group license assignment and disable the plans in the E3 license. Alternatively, you might decide to remove the E1 license from the user if it's redundant in the context of the E3 license.
+> [!TIP]
+> Exchange Online Plan 1 and Plan 2 were previously service plans that couldn't be duplicated for the same user; they can now be duplicated.
+> If you're experiencing conflicts with these service plans, try reprocessing the affected license assignments.
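+
+If you want to script the reprocessing suggested in the tip, a minimal Microsoft Graph PowerShell sketch is shown below. The user object ID is a placeholder, and the scope your tenant requires may differ.
+
+```powershell
+# Sign in with a scope that can update license assignments (assumed sufficient here).
+Connect-MgGraph -Scopes "User.ReadWrite.All"
+
+# Ask Azure AD to re-evaluate the license assignments for one affected user.
+Invoke-MgGraphRequest -Method POST -Uri "https://graph.microsoft.com/v1.0/users/<user-object-id>/reprocessLicenseAssignment"
+```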
The decision about how to resolve conflicting product licenses always belongs to the administrator. Azure AD doesn't automatically resolve license conflicts.
active-directory Licensing Powershell Graph Examples https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/enterprise-users/licensing-powershell-graph-examples.md
foreach ($userId in $skus.Keys) {
Write-Host "" }-
+```
## Remove direct licenses for users with group licenses
active-directory Users Default Permissions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/users-default-permissions.md
You can restrict default permissions for member users in the following ways:
| **Create Microsoft 365 groups** | Setting this option to **No** prevents users from creating Microsoft 365 groups. Setting this option to **Some** allows a set of users to create Microsoft 365 groups. Global administrators and user administrators can still create Microsoft 365 groups. To learn how, see [Azure Active Directory cmdlets for configuring group settings](../enterprise-users/groups-settings-cmdlets.md). | | **Restrict access to Azure AD administration portal** | **What does this switch do?** <br>**No** lets non-administrators browse the Azure AD administration portal. <br>**Yes** restricts non-administrators from browsing the Azure AD administration portal. Non-administrators who are owners of groups or applications are unable to use the Azure portal to manage their owned resources. </p><p></p><p>**What does it not do?** <br> It doesn't restrict access to Azure AD data using PowerShell, the Microsoft Graph API, or other clients such as Visual Studio. <br>It doesn't restrict access as long as a user is assigned a custom role (or any role). </p><p></p><p>**When should I use this switch?** <br>Use this option to prevent users from misconfiguring the resources that they own. </p><p></p><p>**When should I not use this switch?** <br>Don't use this switch as a security measure. Instead, create a Conditional Access policy that targets Microsoft Azure Management and blocks non-administrators' access to [Microsoft Azure Management](../conditional-access/concept-conditional-access-cloud-apps.md#microsoft-azure-management). </p><p></p><p> **How do I grant only specific non-administrator users the ability to use the Azure AD administration portal?** <br> Set this option to **Yes**, then assign them a role like global reader. </p><p></p><p>**Restrict access to the Entra administration portal** <br>A Conditional Access policy that targets Microsoft Azure Management targets access to all Azure management. | | **Restrict non-admin users from creating tenants** | Users can create tenants in the Azure AD and Entra administration portal under Manage tenant. The creation of a tenant is recorded in the Audit log as category DirectoryManagement and activity Create Company. Anyone who creates a tenant becomes the Global Administrator of that tenant. The newly created tenant doesn't inherit any settings or configurations. </p><p></p><p>**What does this switch do?** <br> Setting this option to **Yes** restricts creation of Azure AD tenants to the Global Administrator or tenant creator roles. Setting this option to **No** allows non-admin users to create Azure AD tenants. Tenant creation will continue to be recorded in the Audit log. </p><p></p><p>**How do I grant only specific non-administrator users the ability to create new tenants?** <br> Set this option to **Yes**, then assign them the tenant creator role. |
-| **Restrict users from recovering the BitLocker key(s) for their owned devices** | Setting this option to **Yes** restricts users from being able to self-service recover BitLocker key(s) for their owned devices. Users will have to contact their organization's helpdesk to retrieve their BitLocker keys. Setting this option to **No** allows users to recover their BitLocker key(s). |
+| **Restrict users from recovering the BitLocker key(s) for their owned devices** | This setting is found in the Azure AD and Entra portals under **Device Settings**. Setting this option to **Yes** restricts users from using self-service to recover the BitLocker key(s) for their owned devices; users have to contact their organization's helpdesk to retrieve their BitLocker keys. Setting this option to **No** allows users to recover their BitLocker key(s). |
| **Read other users** | This setting is available in Microsoft Graph and PowerShell only. Setting this flag to `$false` prevents all non-admins from reading user information from the directory. This flag doesn't prevent reading user information in other Microsoft services like Exchange Online.</p><p>This setting is meant for special circumstances, so we don't recommend setting the flag to `$false`. | The **Restrict non-admin users from creating tenants** option is shown [below](https://portal.azure.com/#view/Microsoft_AAD_IAM/ActiveDirectoryMenuBlade/~/UserSettings)
Users can perform the following actions on owned enterprise applications. An ent
| microsoft.directory/servicePrincipals/permissions/update | Update the `servicePrincipals.permissions` property in Azure AD. | | microsoft.directory/servicePrincipals/policies/update | Update the `servicePrincipals.policies` property in Azure AD. | | microsoft.directory/signInReports/allProperties/read | Read all properties (including privileged properties) on sign-in reports in Azure AD. |
+| microsoft.directory/servicePrincipals/synchronizationCredentials/manage | Manage application provisioning secrets and credentials |
+| microsoft.directory/servicePrincipals/synchronizationJobs/manage | Start, restart, and pause application provisioning synchronization jobs |
+| microsoft.directory/servicePrincipals/synchronizationSchema/manage | Create and manage application provisioning synchronization jobs and schema |
+| microsoft.directory/servicePrincipals/synchronization/standard/read | Read provisioning settings associated with your service principal |
#### Owned devices
active-directory Whats New Archive https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/whats-new-archive.md
The What's new in Azure Active Directory? release notes provide information abou
+## January 2023
+
+### Public Preview - Cross-tenant synchronization
+
+**Type:** New feature
+**Service category:** Provisioning
+**Product capability:** Collaboration
+
+Cross-tenant synchronization allows you to set up a scalable and automated solution for users to access applications across tenants in your organization. It builds upon the Azure AD B2B functionality and automates creating, updating, and deleting B2B users. For more information, see: [What is cross-tenant synchronization? (preview)](../multi-tenant-organizations/cross-tenant-synchronization-overview.md).
++++
+### General Availability - New Federated Apps available in Azure AD Application gallery - January 2023
+++
+**Type:** New feature
+**Service category:** Enterprise Apps
+**Product capability:** 3rd Party Integration
+
+In January 2023 we've added the following 10 new applications in our App gallery with Federation support:
+
+[MINT TMS](../saas-apps/mint-tms-tutorial.md), [Exterro Legal GRC Software Platform](../saas-apps/exterro-legal-grc-software-platform-tutorial.md), [SIX.ONE Identity Access Manager](https://portal.six.one/), [Lusha](../saas-apps/lusha-tutorial.md), [Descartes](../saas-apps/descartes-tutorial.md), [Travel Management System](https://tms.billetkontoret.dk/), [Pinpoint (SAML)](../saas-apps/pinpoint-tutorial.md), [my.sdworx.com](../saas-apps/mysdworxcom-tutorial.md), [itopia Labs](https://labs.itopia.com/), [Better Stack](https://betteruptime.com/users/sign-up).
+
+You can also find the documentation of all the applications from here https://aka.ms/AppsTutorial.
+
+For listing your application in the Azure AD app gallery, read the details here https://aka.ms/AzureADAppRequest
++++
+### Public Preview - New provisioning connectors in the Azure AD Application Gallery - January 2023
+++
+**Type:** New feature
+**Service category:** App Provisioning
+**Product capability:** 3rd Party Integration
+
+We've added the following new applications in our App gallery with Provisioning support. You can now automate creating, updating, and deleting of user accounts for these newly integrated apps:
+
+- [SurveyMonkey Enterprise](../saas-apps/surveymonkey-enterprise-provisioning-tutorial.md)
++
+For more information about how to better secure your organization by using automated user account provisioning, see: [Automate user provisioning to SaaS applications with Azure AD](../app-provisioning/user-provisioning.md).
++++
+### Public Preview - Azure AD cloud sync new user experience
++
+**Type:** Changed feature
+**Service category:** Azure AD Connect Cloud Sync
+**Product capability:** Identity Governance
+
+Try out the new guided experience for syncing objects from AD to Azure AD using Azure AD Cloud Sync in Azure portal. With this new experience, Hybrid Identity Administrators can easily determine which sync engine to use for their scenarios and learn more about the various options they have with our sync solutions. With a rich set of tutorials and videos, customers are able to learn everything about Azure AD cloud sync in one single place.
+
+This experience helps administrators walk through the different steps involved in setting up a cloud sync configuration and an intuitive experience to help them easily manage it. Admins can also get insights into their sync configuration by using the "Insights" option, which integrates with Azure Monitor and Workbooks.
+
+For more information, see:
+
+- [Create a new configuration for Azure AD Connect cloud sync](../cloud-sync/how-to-configure.md)
+- [Attribute mapping in Azure AD Connect cloud sync](../cloud-sync/how-to-attribute-mapping.md)
+- [Azure AD cloud sync insights workbook](../cloud-sync/how-to-cloud-sync-workbook.md)
+++
+### Public Preview - Support for Directory Extensions using Azure AD cloud sync
+++
+**Type:** New feature
+**Service category:** Provisioning
+**Product capability:** Azure AD Connect Cloud Sync
+
+Hybrid IT Admins now can sync both Active Directory and Azure AD Directory Extensions using Azure AD Cloud Sync. This new capability adds the ability to dynamically discover the schema for both Active Directory and Azure AD, allowing customers to map the needed attributes using Cloud Sync's attribute mapping experience.
+
+For more information on how to enable this feature, see: [Cloud Sync directory extensions and custom attribute mapping](../cloud-sync/custom-attribute-mapping.md)
+++++ ## December 2022 ### Public Preview - Windows 10+ Troubleshooter for Diagnostic Logs
Use multi-stage reviews to create Azure AD access reviews in sequential stages,
In February 2022 we added the following 20 new applications in our App gallery with Federation support:
-[Embark](../saas-apps/embark-tutorial.md), [FENCE-Mobile RemoteManager SSO](../saas-apps/fence-mobile-remotemanager-sso-tutorial.md), [カオナビ](../saas-apps/kao-navi-tutorial.md), [Adobe Identity Management (OIDC)](../saas-apps/adobe-identity-management-tutorial.md), [AppRemo](../saas-apps/appremo-tutorial.md), [Live Center](https://livecenter.norkon.net/Login), [Offishall](https://app.offishall.io/), [MoveWORK Flow](https://www.movework-flow.fm/login), [Cirros SL](https://www.cirros.net/), [ePMX Procurement Software](https://azure.epmxweb.com/admin/index.php?), [Vanta O365](https://app.vanta.com/connections), [Hubble](../saas-apps/hubble-tutorial.md), [Medigold Gateway](https://gateway.medigoldcore.com), [クラウドログ](../saas-apps/crowd-log-tutorial.md),[Amazing People Schools](../saas-apps/amazing-people-schools-tutorial.md), [Salus](https://salus.com/login), [XplicitTrust Network Access](https://console.xplicittrust.com/#/dashboard), [Spike Email - Mail & Team Chat](https://spikenow.com/web/), [AltheaSuite](https://planmanager.altheasuite.com/), [Balsamiq Wireframes](../saas-apps/balsamiq-wireframes-tutorial.md).
+[Embark](../saas-apps/embark-tutorial.md), [FENCE-Mobile RemoteManager SSO](../saas-apps/fence-mobile-remotemanager-sso-tutorial.md), [カオナビ](../saas-apps/kao-navi-tutorial.md), [Adobe Identity Management (OIDC)](../saas-apps/adobe-identity-management-tutorial.md), [AppRemo](../saas-apps/appremo-tutorial.md), [Live Center](https://livecenter.norkon.net/Login), [Offishall](https://app.offishall.io/), [MoveWORK Flow](https://www.movework-flow.fm/login), [Cirros SL](https://www.cirros.net/), [ePMX Procurement Software](https://azure.epmxweb.com/admin/index.php?), [Vanta O365](https://app.vanta.com/connections), [Hubble](../saas-apps/hubble-tutorial.md), [Medigold Gateway](https://gateway.medigoldcore.com), [クラウドログ](../saas-apps/crowd-log-tutorial.md),[Amazing People Schools](../saas-apps/amazing-people-schools-tutorial.md), [XplicitTrust Network Access](https://console.xplicittrust.com/#/dashboard), [Spike Email - Mail & Team Chat](https://spikenow.com/web/), [AltheaSuite](https://planmanager.altheasuite.com/), [Balsamiq Wireframes](../saas-apps/balsamiq-wireframes-tutorial.md).
You can also find the documentation of all the applications from here: [https://aka.ms/AppsTutorial](../saas-apps/tutorial-list.md),
For more information about how to better secure your organization by using autom
In November 2021, we have added following 32 new applications in our App gallery with Federation support:
-[Tide - Connector](https://gallery.ctinsuretech-tide.com/), [Virtual Risk Manager - USA](../saas-apps/virtual-risk-manager-usa-tutorial.md), [Xorlia Policy Management](https://app.xoralia.com/), [WorkPatterns](https://app.workpatterns.com/oauth2/login?data_source_type=office_365_account_calendar_workspace_sync&utm_source=azure_sso), [GHAE](../saas-apps/ghae-tutorial.md), [Nodetrax Project](../saas-apps/nodetrax-project-tutorial.md), [Touchstone Benchmarking](https://app.touchstonebenchmarking.com/), [SURFsecureID - Azure AD Multi-Factor Authentication](../saas-apps/surfsecureid-azure-mfa-tutorial.md), [AiDEA](https://truebluecorp.com/en/prodotti/aidea-en/),[R and D Tax Credit
+[Tide - Connector](https://gallery.ctinsuretech-tide.com/), [Virtual Risk Manager - USA](../saas-apps/virtual-risk-manager-usa-tutorial.md), [Xorlia Policy Management](https://app.xoralia.com/), [WorkPatterns](https://app.workpatterns.com/oauth2/login?data_source_type=office_365_account_calendar_workspace_sync&utm_source=azure_sso), [GHAE](../saas-apps/ghae-tutorial.md), [Nodetrax Project](../saas-apps/nodetrax-project-tutorial.md), [Touchstone Benchmarking](https://app.touchstonebenchmarking.com/), [SURFsecureID - Azure AD Multi-Factor Authentication](../saas-apps/surfsecureid-azure-mfa-tutorial.md), [AiDEA](https://truebluecorp.com/en/prodotti/aidea-en/),[R and D Tax Credit
You can also find the documentation of all the applications [here](../saas-apps/tutorial-list.md).
IT Admins can start using the new "Hybrid Admin" role as the least privileged ro
In May 2020, we've added the following 36 new applications in our App gallery with Federation support:
-[Moula](https://moula.com.au/pay/merchants), [Surveypal](https://www.surveypal.com/app), [Kbot365](https://www.konverso.ai/), [Powell Teams](https://powell-software.com/en/powell-teams-en/), [Talentsoft Assistant](https://msteams.talent-soft.com/), [ASC Recording Insights](https://teams.asc-recording.app/product), [GO1](https://www.go1.com/), [B-Engaged](https://b-engaged.se/), [Competella Contact Center Workgroup](http://www.competella.com/), [Asite](http://www.asite.com/), [ImageSoft Identity](https://identity.imagesoftinc.com/), [My IBISWorld](https://identity.imagesoftinc.com/), [insuite](../saas-apps/insuite-tutorial.md), [Change Process Management](../saas-apps/change-process-management-tutorial.md), [Cyara CX Assurance Platform](../saas-apps/cyara-cx-assurance-platform-tutorial.md), [Smart Global Governance](../saas-apps/smart-global-governance-tutorial.md), [Prezi](../saas-apps/prezi-tutorial.md), [Mapbox](../saas-apps/mapbox-tutorial.md), [Datava Enterprise Service Platform](../saas-apps/datava-enterprise-service-platform-tutorial.md), [Whimsical](../saas-apps/whimsical-tutorial.md), [Trelica](../saas-apps/trelica-tutorial.md), [EasySSO for Confluence](../saas-apps/easysso-for-confluence-tutorial.md), [EasySSO for BitBucket](../saas-apps/easysso-for-bitbucket-tutorial.md), [EasySSO for Bamboo](../saas-apps/easysso-for-bamboo-tutorial.md), [Torii](../saas-apps/torii-tutorial.md), [Axiad Cloud](../saas-apps/axiad-cloud-tutorial.md), [Humanage](../saas-apps/humanage-tutorial.md), [ColorTokens ZTNA](../saas-apps/colortokens-ztna-tutorial.md), [CCH Tagetik](../saas-apps/cch-tagetik-tutorial.md), [ShareVault](../saas-apps/sharevault-tutorial.md), [Vyond](../saas-apps/vyond-tutorial.md), [TextExpander](../saas-apps/textexpander-tutorial.md), [Anyone Home CRM](../saas-apps/anyone-home-crm-tutorial.md), [askSpoke](../saas-apps/askspoke-tutorial.md), [ice Contact Center](../saas-apps/ice-contact-center-tutorial.md)
+ [Surveypal](https://www.surveypal.com/app), [Kbot365](https://www.konverso.ai/), [Powell Teams](https://powell-software.com/en/powell-teams-en/), [Talentsoft Assistant](https://msteams.talent-soft.com/), [ASC Recording Insights](https://teams.asc-recording.app/product), [GO1](https://www.go1.com/), [B-Engaged](https://b-engaged.se/), [Competella Contact Center Workgroup](http://www.competella.com/), [Asite](http://www.asite.com/), [ImageSoft Identity](https://identity.imagesoftinc.com/), [My IBISWorld](https://identity.imagesoftinc.com/), [insuite](../saas-apps/insuite-tutorial.md), [Change Process Management](../saas-apps/change-process-management-tutorial.md), [Cyara CX Assurance Platform](../saas-apps/cyara-cx-assurance-platform-tutorial.md), [Smart Global Governance](../saas-apps/smart-global-governance-tutorial.md), [Prezi](../saas-apps/prezi-tutorial.md), [Mapbox](../saas-apps/mapbox-tutorial.md), [Datava Enterprise Service Platform](../saas-apps/datava-enterprise-service-platform-tutorial.md), [Whimsical](../saas-apps/whimsical-tutorial.md), [Trelica](../saas-apps/trelica-tutorial.md), [EasySSO for Confluence](../saas-apps/easysso-for-confluence-tutorial.md), [EasySSO for BitBucket](../saas-apps/easysso-for-bitbucket-tutorial.md), [EasySSO for Bamboo](../saas-apps/easysso-for-bamboo-tutorial.md), [Torii](../saas-apps/torii-tutorial.md), [Axiad Cloud](../saas-apps/axiad-cloud-tutorial.md), [Humanage](../saas-apps/humanage-tutorial.md), [ColorTokens ZTNA](../saas-apps/colortokens-ztna-tutorial.md), [CCH Tagetik](../saas-apps/cch-tagetik-tutorial.md), [ShareVault](../saas-apps/sharevault-tutorial.md), [Vyond](../saas-apps/vyond-tutorial.md), [TextExpander](../saas-apps/textexpander-tutorial.md), [Anyone Home CRM](../saas-apps/anyone-home-crm-tutorial.md), [askSpoke](../saas-apps/askspoke-tutorial.md), [ice Contact Center](../saas-apps/ice-contact-center-tutorial.md)
You can also find the documentation of all the applications from here https://aka.ms/AppsTutorial.
The new [policy details blade](../conditional-access/troubleshoot-conditional-ac
In April 2020, we've added these 31 new apps with Federation support to the app gallery:
-[SincroPool Apps](https://www.sincropool.com/), [SmartDB](https://hibiki.dreamarts.co.jp/smartdb/trial/), [Float](../saas-apps/float-tutorial.md), [LMS365](https://lms.365.systems/), [IWT Procurement Suite](../saas-apps/iwt-procurement-suite-tutorial.md), [Lunni](https://lunni.fi/), [EasySSO for Jira](../saas-apps/easysso-for-jira-tutorial.md), [Virtual Training Academy](https://vta.c3p.c)
+[SincroPool Apps](https://www.sincropool.com/), [SmartDB](https://hibiki.dreamarts.co.jp/smartdb/trial/), [Float](../saas-apps/float-tutorial.md), [LMS365](https://lms.365.systems/), [IWT Procurement Suite](../saas-apps/iwt-procurement-suite-tutorial.md), [Lunni](https://lunni.fi/), [EasySSO for Jira](../saas-apps/easysso-for-jira-tutorial.md), [Virtual Training Academy](https://vta.c3p.c)
For more information about the apps, see [SaaS application integration with Azure Active Directory](../saas-apps/tutorial-list.md). For more information about listing your application in the Azure AD app gallery, see [List your application in the Azure Active Directory application gallery](../manage-apps/v2-howto-app-gallery-listing.md).
active-directory Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/whats-new.md
For more information about how to better secure your organization by using autom
-
-## January 2023
-
-### Public Preview - Cross-tenant synchronization
-
-**Type:** New feature
-**Service category:** Provisioning
-**Product capability:** Collaboration
-
-Cross-tenant synchronization allows you to set up a scalable and automated solution for users to access applications across tenants in your organization. It builds upon the Azure AD B2B functionality and automates creating, updating, and deleting B2B users. For more information, see: [What is cross-tenant synchronization? (preview)](../multi-tenant-organizations/cross-tenant-synchronization-overview.md).
----
-### General Availability - New Federated Apps available in Azure AD Application gallery - January 2023
---
-**Type:** New feature
-**Service category:** Enterprise Apps
-**Product capability:** 3rd Party Integration
-
-In January 2023 we've added the following 10 new applications in our App gallery with Federation support:
-
-[MINT TMS](../saas-apps/mint-tms-tutorial.md), [Exterro Legal GRC Software Platform](../saas-apps/exterro-legal-grc-software-platform-tutorial.md), [SIX.ONE Identity Access Manager](https://portal.six.one/), [Lusha](../saas-apps/lusha-tutorial.md), [Descartes](../saas-apps/descartes-tutorial.md), [Travel Management System](https://tms.billetkontoret.dk/), [Pinpoint (SAML)](../saas-apps/pinpoint-tutorial.md), [my.sdworx.com](../saas-apps/mysdworxcom-tutorial.md), [itopia Labs](https://labs.itopia.com/), [Better Stack](https://betteruptime.com/users/sign-up).
-
-You can also find the documentation of all the applications from here https://aka.ms/AppsTutorial.
-
-For listing your application in the Azure AD app gallery, read the details here https://aka.ms/AzureADAppRequest
----
-### Public Preview - New provisioning connectors in the Azure AD Application Gallery - January 2023
---
-**Type:** New feature
-**Service category:** App Provisioning
-**Product capability:** 3rd Party Integration
-
-We've added the following new applications in our App gallery with Provisioning support. You can now automate creating, updating, and deleting of user accounts for these newly integrated apps:
--- [SurveyMonkey Enterprise](../saas-apps/surveymonkey-enterprise-provisioning-tutorial.md)--
-For more information about how to better secure your organization by using automated user account provisioning, see: [Automate user provisioning to SaaS applications with Azure AD](../app-provisioning/user-provisioning.md).
----
-### Public Preview - Azure AD cloud sync new user experience
--
-**Type:** Changed feature
-**Service category:** Azure AD Connect Cloud Sync
-**Product capability:** Identity Governance
-
-Try out the new guided experience for syncing objects from AD to Azure AD using Azure AD Cloud Sync in Azure portal. With this new experience, Hybrid Identity Administrators can easily determine which sync engine to use for their scenarios and learn more about the various options they have with our sync solutions. With a rich set of tutorials and videos, customers are able to learn everything about Azure AD cloud sync in one single place.
-
-This experience helps administrators walk through the different steps involved in setting up a cloud sync configuration and an intuitive experience to help them easily manage it. Admins can also get insights into their sync configuration by using the "Insights" option, which integrates with Azure Monitor and Workbooks.
-
-For more information, see:
--- [Create a new configuration for Azure AD Connect cloud sync](../cloud-sync/how-to-configure.md)-- [Attribute mapping in Azure AD Connect cloud sync](../cloud-sync/how-to-attribute-mapping.md)-- [Azure AD cloud sync insights workbook](../cloud-sync/how-to-cloud-sync-workbook.md)---
-### Public Preview - Support for Directory Extensions using Azure AD cloud sync
---
-**Type:** New feature
-**Service category:** Provisioning
-**Product capability:** Azure AD Connect Cloud Sync
-
-Hybrid IT Admins now can sync both Active Directory and Azure AD Directory Extensions using Azure AD Cloud Sync. This new capability adds the ability to dynamically discover the schema for both Active Directory and Azure AD, allowing customers to map the needed attributes using Cloud Sync's attribute mapping experience.
-
-For more information on how to enable this feature, see: [Cloud Sync directory extensions and custom attribute mapping](../cloud-sync/custom-attribute-mapping.md)
----
active-directory How To Connect Syncservice Features https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/connect/how-to-connect-syncservice-features.md
This topic explains how the following features of the **Azure AD Connect sync se
These settings are configured by the [Azure Active Directory Module for Windows PowerShell](/previous-versions/azure/jj151815(v=azure.100)). Download and install it separately from Azure AD Connect. The cmdlets documented in this topic were introduced in the [2016 March release (build 9031.1)](https://social.technet.microsoft.com/wiki/contents/articles/28552.microsoft-azure-active-directory-powershell-module-version-release-history.aspx#Version_9031_1). If you do not have the cmdlets documented in this topic or they do not produce the same result, then make sure you run the latest version.
-To see the configuration in your Azure AD directory, run `Get-MsolDirSyncFeatures`.
+To see the configuration in your Azure AD directory, run `Get-MsolDirSyncFeatures`.
![Get-MsolDirSyncFeatures result](./media/how-to-connect-syncservice-features/getmsoldirsyncfeatures.png)
+To see the configuration in your Azure AD directory using Microsoft Graph PowerShell, use the following commands:
+```powershell
+Connect-MgGraph -Scopes OnPremDirectorySynchronization.Read.All, OnPremDirectorySynchronization.ReadWrite.All
+
+Get-MgDirectoryOnPremiseSynchronization | Select-Object -ExpandProperty Features | Format-List
+```
+
+The output looks similar to `Get-MsolDirSyncFeatures`:
+```powershell
+BlockCloudObjectTakeoverThroughHardMatchEnabled : False
+BlockSoftMatchEnabled : False
+BypassDirSyncOverridesEnabled : False
+CloudPasswordPolicyForPasswordSyncedUsersEnabled : False
+ConcurrentCredentialUpdateEnabled : False
+ConcurrentOrgIdProvisioningEnabled : False
+DeviceWritebackEnabled : False
+DirectoryExtensionsEnabled : True
+FopeConflictResolutionEnabled : False
+GroupWriteBackEnabled : False
+PasswordSyncEnabled : True
+PasswordWritebackEnabled : False
+QuarantineUponProxyAddressesConflictEnabled : False
+QuarantineUponUpnConflictEnabled : False
+SoftMatchOnUpnEnabled : True
+SynchronizeUpnForManagedUsersEnabled : False
+UnifiedGroupWritebackEnabled : True
+UserForcePasswordChangeOnLogonEnabled : False
+UserWritebackEnabled : True
+AdditionalProperties : {}
+```
+ Many of these settings can only be changed by Azure AD Connect. The following settings can be configured by `Set-MsolDirSyncFeature`:
If you need to match on-premises AD accounts with existing accounts created in t
This feature is on by default for newly created Azure AD directories. You can see if this feature is enabled for you by running: ```powershell
+## Using the MSOnline module
Get-MsolDirSyncFeatures -Feature EnableSoftMatchOnUpn
+
+## Using the Graph PowerShell module
+$Config = Get-MgDirectoryOnPremiseSynchronization
+$Config.Features.SoftMatchOnUpnEnabled
``` If this feature is not enabled for your Azure AD directory, then you can enable it by running:
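
For example, a minimal sketch using the MSOnline cmdlet mentioned earlier:

```powershell
# Enable soft match on userPrincipalName for the tenant.
Set-MsolDirSyncFeature -Feature EnableSoftMatchOnUpn -Enable $true
```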
Enabling this feature allows the sync engine to update the userPrincipalName whe
This feature is on by default for newly created Azure AD directories. You can see if this feature is enabled for you by running: ```powershell
+## Using the MSOnline module
Get-MsolDirSyncFeatures -Feature SynchronizeUpnForManagedUsers
+
+## Using the Graph PowerShell module
+$config = Get-MgDirectoryOnPremiseSynchronization
+$config.Features.SynchronizeUpnForManagedUsersEnabled
``` If this feature is not enabled for your Azure AD directory, then you can enable it by running:
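
Similarly, a minimal sketch with the MSOnline cmdlet mentioned earlier:

```powershell
# Enable userPrincipalName updates for managed (non-federated) users.
Set-MsolDirSyncFeature -Feature SynchronizeUpnForManagedUsers -Enable $true
```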
active-directory Get Started https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/get-started.md
Use these tasks if you're deploying Azure AD Connect to integrate with Active Di
|Task|Description| |--|--|
-|[Determine which sync tool is correct for you](https://setup.microsoft.com/azure/add-or-sync-users-to-microsoft-365) |Use the wizard to determine whether cloud sync or Azure AD Connect is the right tool for you.|
+|[Determine which sync tool is correct for you](https://setup.microsoft.com/azure/add-or-sync-users-to-azure-ad) |Use the wizard to determine whether cloud sync or Azure AD Connect is the right tool for you.|
|[Review the Azure AD Connect prerequisites](connect/how-to-connect-install-prerequisites.md)|Review the necessary prerequisites before getting started.| |[Review and choose an installation type](connect/how-to-connect-install-select-installation.md)|Determine whether you'll use express or custom installation.| |[Download Azure AD Connect](https://www.microsoft.com/download/details.aspx?id=47594)|Download Azure AD Connect.|
active-directory Managed Identities Status https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/managed-identities-azure-resources/managed-identities-status.md
The following Azure services support managed identities for Azure resources:
| Azure Container Apps | [Managed identities in Azure Container Apps](../../container-apps/managed-identity.md) | | Azure Container Instance | [How to use managed identities with Azure Container Instances](../../container-instances/container-instances-managed-identity.md) | | Azure Container Registry | [Use an Azure-managed identity in ACR Tasks](../../container-registry/container-registry-tasks-authentication-managed-identity.md) |
-| Azure Cognitive Services | [Configure customer-managed keys with Azure Key Vault for Cognitive Services](../../cognitive-services/encryption/cognitive-services-encryption-keys-portal.md) |
+| Azure Cognitive Services | [Configure customer-managed keys with Azure Key Vault for Cognitive Services](../../ai-services/encryption/cognitive-services-encryption-keys-portal.md) |
| Azure Data Box | [Use customer-managed keys in Azure Key Vault for Azure Data Box](../../databox/data-box-customer-managed-encryption-key-portal.md) | | Azure Data Explorer | [Configure managed identities for your Azure Data Explorer cluster](/azure/data-explorer/configure-managed-identities-cluster?tabs=portal) | | Azure Data Factory | [Managed identity for Data Factory](../../data-factory/data-factory-service-identity.md) |
active-directory Qs Configure Powershell Windows Vm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/managed-identities-azure-resources/qs-configure-powershell-windows-vm.md
If you have a Virtual Machine that no longer needs the system-assigned managed i
```azurepowershell-interactive $vm = Get-AzVM -ResourceGroupName myResourceGroup -Name myVM
- Update-AzVm -ResourceGroupName myResourceGroup -VM $vm -IdentityType "UserAssigned"
+ Update-AzVm -ResourceGroupName myResourceGroup -VM $vm -IdentityType "UserAssigned" -IdentityID "/subscriptions/<SUBSCRIPTION ID>/resourcegroups/<RESOURCE GROUP>/providers/Microsoft.ManagedIdentity/userAssignedIdentities/<USER ASSIGNED IDENTITY NAME>..."
``` If you have a virtual machine that no longer needs system-assigned managed identity and it has no user-assigned managed identities, use the following commands:
active-directory Services Azure Active Directory Support https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/managed-identities-azure-resources/services-azure-active-directory-support.md
The following services support Azure AD authentication. New services are added t
| Azure App Services | [Configure your App Service or Azure Functions app to use Azure AD login](../../app-service/configure-authentication-provider-aad.md) | | Azure Batch | [Authenticate Batch service solutions with Active Directory](../../batch/batch-aad-auth.md) | | Azure Container Registry | [Authenticate with an Azure container registry](../../container-registry/container-registry-authentication.md) |
-| Azure Cognitive Services | [Authenticate requests to Azure Cognitive Services](../../cognitive-services/authentication.md?tabs=powershell#authenticate-with-azure-active-directory) |
+| Azure Cognitive Services | [Authenticate requests to Azure Cognitive Services](../../ai-services/authentication.md?tabs=powershell#authenticate-with-azure-active-directory) |
| Azure Communication Services | [Authenticate to Azure Communication Services](../../communication-services/concepts/authentication.md) | | Azure Cosmos DB | [Configure role-based access control with Azure Active Directory for your Azure Cosmos DB account](../../cosmos-db/how-to-setup-rbac.md) | | Azure Databricks | [Authenticate using Azure Active Directory tokens](/azure/databricks/dev-tools/api/latest/aad/)
active-directory Plan Issuance Solution https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/verifiable-credentials/plan-issuance-solution.md
Model based on throughput:
* One for the contract download
-* Maximum signing performance of a Key Vault is 2,000 signing/~10 seconds. This is about 12,000 signings per minute. This means your solution can support up to 4,000 VC issuances per minute.
- * You can't control throttling; however, we recommend you read [Azure Key Vault throttling guidance](../../key-vault/general/overview-throttling.md). * If you are planning a large rollout and onboarding of VCs, consider batching VC creation to ensure you don't exceed limits.
active-directory Plan Verification Solution https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/verifiable-credentials/plan-verification-solution.md
The following provides areas to consider when planning for performance:
* Each verification of a VC requires one Key Vault signature operation.
- * Maximum signing performance of a Key Vault is 2000 signings/~10 seconds. This means your solution can support up to 12,000 VC validation requests per minute.
- * You can't control throttling; however, we recommend you read [Azure Key Vault throttling guidance](../../key-vault/general/overview-throttling.md) so that you understand how throttling might impact performance. ## Plan for reliability
advisor Advisor Performance Recommendations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/advisor/advisor-performance-recommendations.md
We have determined that your VMs are located in a region different or far from w
## Upgrade to the latest version of the Immersive Reader SDK We have identified resources under this subscription using outdated versions of the Immersive Reader SDK. Using the latest version of the Immersive Reader SDK provides you with updated security, performance and an expanded set of features for customizing and enhancing your integration experience.
-Learn more about [Immersive reader SDK](../applied-ai-services/immersive-reader/index.yml).
+Learn more about [Immersive reader SDK](../ai-services/immersive-reader/index.yml).
## Improve VM performance by changing the maximum session limit
ai-services Batch Inference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/Anomaly-Detector/How-to/batch-inference.md
+
+ Title: Trigger batch inference with trained model
+
+description: Trigger batch inference with trained model
++++++ Last updated : 11/01/2022+++
+# Trigger batch inference with trained model
+
+You can choose either the batch inference API or the streaming inference API for detection.
+
+| Batch inference API | Streaming inference API |
+| - | - |
+| More suitable for batch use cases when customers don't need to get inference results immediately and want to detect anomalies and get results over a longer time period.| When customers want to get inference immediately and want to detect multivariate anomalies in real-time, this API is recommended. Also suitable for customers having difficulties conducting the previous compressing and uploading process for inference. |
+
+|API Name| Method | Path | Description |
+| | - | -- | |
+|**Batch Inference**| POST | `{endpoint}`/anomalydetector/v1.1/multivariate/models/`{modelId}`: detect-batch | Trigger an asynchronous inference with `modelId`, which works in a batch scenario |
+|**Get Batch Inference Results**| GET | `{endpoint}`/anomalydetector/v1.1/multivariate/detect-batch/`{resultId}` | Get batch inference results with `resultId` |
+|**Streaming Inference**| POST | `{endpoint}`/anomalydetector/v1.1/multivariate/models/`{modelId}`: detect-last | Trigger a synchronous inference with `modelId`, which works in a streaming scenario |
+
+## Trigger a batch inference
+
+To perform batch inference, provide the blob URL containing the inference data, the start time, and the end time. The inference data must cover at least `1 sliding window` length and at most **20000** timestamps.
+
+To get better performance, we recommend you send out no more than 150,000 data points per batch inference. *(Data points = Number of variables * Number of timestamps)*
+
+This inference is asynchronous, so the results aren't returned immediately. Save the results link from the **response header**, which contains the `resultId`, so that you know where to get the results afterwards.
+
+Failures are usually caused by model issues or data issues. You can't perform inference if the model isn't ready or the data link is invalid. Make sure that the training data and inference data are consistent, meaning they should be **exactly** the same variables but with different timestamps. More variables, fewer variables, or inference with a different set of variables won't pass the data verification phase and errors will occur. Data verification is deferred so that you'll get error messages only when you query the results.
+
+### Request
+
+A sample request:
+
+```json
+{
+ "dataSource": "{{dataSource}}",
+ "topContributorCount": 3,
+ "startTime": "2021-01-02T12:00:00Z",
+ "endTime": "2021-01-03T00:00:00Z"
+}
+```
+#### Required parameters
+
+* **dataSource**: The blob URL that links to your folder or CSV file in Azure Blob Storage. The schema should be the same as your training data, either OneTable or MultiTable, and the variable number and names should be exactly the same as well.
+* **startTime**: The start time of data used for inference. If it's earlier than the actual earliest timestamp in the data, the actual earliest timestamp will be used as the starting point.
+* **endTime**: The end time of data used for inference, which must be later than or equal to `startTime`. If `endTime` is later than the actual latest timestamp in the data, the actual latest timestamp will be used as the ending point.
+
+#### Optional parameters
+
+* **topContributorCount**: A number N from **1 to 30** that returns the details of the top N contributing variables in the anomaly results. For example, if you have 100 variables in the model but you only care about the top five contributing variables in the detection results, set this field to 5. The default number is **10**.
+
+### Response
+
+A sample response:
+
+```json
+{
+ "resultId": "aaaaaaaa-5555-1111-85bb-36f8cdfb3365",
+ "summary": {
+ "status": "CREATED",
+ "errors": [],
+ "variableStates": [],
+ "setupInfo": {
+ "dataSource": "https://mvaddataset.blob.core.windows.net/sample-onetable/sample_data_5_3000.csv",
+ "topContributorCount": 3,
+ "startTime": "2021-01-02T12:00:00Z",
+ "endTime": "2021-01-03T00:00:00Z"
+ }
+ },
+ "results": []
+}
+```
+* **resultId**: This is the information that you'll need to trigger the **Get Batch Inference Results API**.
+* **status**: This indicates whether you triggered a batch inference task successfully. If you see **CREATED**, you don't need to trigger this API again; use the **Get Batch Inference Results API** to get the detection status and anomaly results.
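+
+As a hedged illustration, the following PowerShell sketch submits the batch inference request and captures the `resultId` from the response body. The endpoint, key, model ID, and blob URL are placeholders.
+
+```powershell
+# Placeholders: replace with your resource endpoint, key, trained model ID, and data source.
+$endpoint = "https://<your-resource-name>.cognitiveservices.azure.com"
+$apiKey   = "<your-anomaly-detector-key>"
+$modelId  = "<your-model-id>"
+
+$body = @{
+    dataSource          = "<your-blob-url>"
+    topContributorCount = 3
+    startTime           = "2021-01-02T12:00:00Z"
+    endTime             = "2021-01-03T00:00:00Z"
+} | ConvertTo-Json
+
+$response = Invoke-RestMethod -Method Post `
+    -Uri "$endpoint/anomalydetector/v1.1/multivariate/models/${modelId}:detect-batch" `
+    -Headers @{ "Ocp-Apim-Subscription-Key" = $apiKey } `
+    -ContentType "application/json" -Body $body
+
+# Save the resultId so you can query the detection results later.
+$resultId = $response.resultId
+```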
+
+## Get batch detection results
+
+There's no content in the request body; you only need to put the `resultId` in the API path, which has the following format:
+**{{endpoint}}anomalydetector/v1.1/multivariate/detect-batch/{{resultId}}**
+
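+Assuming the same placeholder variables as the earlier sketch, retrieving the results could look like this:
+
+```powershell
+# Reuses $endpoint, $apiKey, and $resultId from the batch inference request above.
+$results = Invoke-RestMethod -Method Get `
+    -Uri "$endpoint/anomalydetector/v1.1/multivariate/detect-batch/$resultId" `
+    -Headers @{ "Ocp-Apim-Subscription-Key" = $apiKey }
+
+# READY means detection finished; otherwise poll again after a short wait.
+$results.summary.status
+```
+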
+### Response
+
+A sample response:
+
+```json
+{
+ "resultId": "aaaaaaaa-5555-1111-85bb-36f8cdfb3365",
+ "summary": {
+ "status": "READY",
+ "errors": [],
+ "variableStates": [
+ {
+ "variable": "series_0",
+ "filledNARatio": 0.0,
+ "effectiveCount": 721,
+ "firstTimestamp": "2021-01-02T12:00:00Z",
+ "lastTimestamp": "2021-01-03T00:00:00Z"
+ },
+ {
+ "variable": "series_1",
+ "filledNARatio": 0.0,
+ "effectiveCount": 721,
+ "firstTimestamp": "2021-01-02T12:00:00Z",
+ "lastTimestamp": "2021-01-03T00:00:00Z"
+ },
+ {
+ "variable": "series_2",
+ "filledNARatio": 0.0,
+ "effectiveCount": 721,
+ "firstTimestamp": "2021-01-02T12:00:00Z",
+ "lastTimestamp": "2021-01-03T00:00:00Z"
+ },
+ {
+ "variable": "series_3",
+ "filledNARatio": 0.0,
+ "effectiveCount": 721,
+ "firstTimestamp": "2021-01-02T12:00:00Z",
+ "lastTimestamp": "2021-01-03T00:00:00Z"
+ },
+ {
+ "variable": "series_4",
+ "filledNARatio": 0.0,
+ "effectiveCount": 721,
+ "firstTimestamp": "2021-01-02T12:00:00Z",
+ "lastTimestamp": "2021-01-03T00:00:00Z"
+ }
+ ],
+ "setupInfo": {
+ "dataSource": "https://mvaddataset.blob.core.windows.net/sample-onetable/sample_data_5_3000.csv",
+ "topContributorCount": 3,
+ "startTime": "2021-01-02T12:00:00Z",
+ "endTime": "2021-01-03T00:00:00Z"
+ }
+ },
+ "results": [
+ {
+ "timestamp": "2021-01-02T12:00:00Z",
+ "value": {
+ "isAnomaly": false,
+ "severity": 0.0,
+ "score": 0.3377174139022827,
+ "interpretation": []
+ },
+ "errors": []
+ },
+ {
+ "timestamp": "2021-01-02T12:01:00Z",
+ "value": {
+ "isAnomaly": false,
+ "severity": 0.0,
+ "score": 0.24631972312927247,
+ "interpretation": []
+ },
+ "errors": []
+ },
+ {
+ "timestamp": "2021-01-02T12:02:00Z",
+ "value": {
+ "isAnomaly": false,
+ "severity": 0.0,
+ "score": 0.16678125858306886,
+ "interpretation": []
+ },
+ "errors": []
+ },
+ {
+ "timestamp": "2021-01-02T12:03:00Z",
+ "value": {
+ "isAnomaly": false,
+ "severity": 0.0,
+ "score": 0.23783254623413086,
+ "interpretation": []
+ },
+ "errors": []
+ },
+ {
+ "timestamp": "2021-01-02T12:04:00Z",
+ "value": {
+ "isAnomaly": false,
+ "severity": 0.0,
+ "score": 0.24804904460906982,
+ "interpretation": []
+ },
+ "errors": []
+ },
+ {
+ "timestamp": "2021-01-02T12:05:00Z",
+ "value": {
+ "isAnomaly": false,
+ "severity": 0.0,
+ "score": 0.11487171649932862,
+ "interpretation": []
+ },
+ "errors": []
+ },
+ {
+ "timestamp": "2021-01-02T12:06:00Z",
+ "value": {
+ "isAnomaly": true,
+ "severity": 0.32980116622958083,
+ "score": 0.5666913509368896,
+ "interpretation": [
+ {
+ "variable": "series_2",
+ "contributionScore": 0.4130149677604554,
+ "correlationChanges": {
+ "changedVariables": [
+ "series_0",
+ "series_4",
+ "series_3"
+ ]
+ }
+ },
+ {
+ "variable": "series_3",
+ "contributionScore": 0.2993065960239115,
+ "correlationChanges": {
+ "changedVariables": [
+ "series_0",
+ "series_4",
+ "series_3"
+ ]
+ }
+ },
+ {
+ "variable": "series_1",
+ "contributionScore": 0.287678436215633,
+ "correlationChanges": {
+ "changedVariables": [
+ "series_0",
+ "series_4",
+ "series_3"
+ ]
+ }
+ }
+ ]
+ },
+ "errors": []
+ }
+ ]
+}
+```
+
+The response contains the result status, variable information, inference parameters, and inference results.
+
+* **variableStates**: This lists the information of each variable in the inference request.
+* **setupInfo**: This is the request body submitted for this inference.
+* **results**: This contains the detection results. There are three typical types of detection results.
+
+* Error code `InsufficientHistoricalData`. This usually happens only with the first few timestamps because the model inferences data in a window-based manner and it needs historical data to make a decision. For the first few timestamps, there's insufficient historical data, so inference can't be performed on them. In this case, the error message can be ignored.
+
+* **isAnomaly**: `false` indicates the current timestamp isn't an anomaly. `true` indicates an anomaly at the current timestamp.
+ * `severity` indicates the relative severity of the anomaly and for abnormal data it's always greater than 0.
+ * `score` is the raw output of the model on which the model makes a decision. `severity` is a derived value from `score`. Every data point has a `score`.
+
+* **interpretation**: This field only appears when a timestamp is detected as anomalous. It contains `variables`, `contributionScore`, and `correlationChanges`.
+
+* **contributors**: This is a list containing the contribution score of each variable. Higher contribution scores indicate higher possibility of the root cause. This list is often used for interpreting anomalies and diagnosing the root causes.
+
+* **correlationChanges**: This field only appears when a timestamp is detected as anomalous and is included in `interpretation`. It contains `changedVariables` and `changedValues` that interpret which correlations between variables changed.
+
+* **changedVariables**: This field shows which variables have a significant change in correlation with `variable`. The variables in this list are ranked by the extent of correlation changes.
+
+> [!NOTE]
+> A common pitfall is taking all data points with `isAnomaly`=`true` as anomalies. That may end up with too many false positives.
+> You should use both `isAnomaly` and `severity` (or `score`) to sift out anomalies that are not severe and (optionally) use grouping to check the duration of the anomalies to suppress random noise.
+> Please refer to the [FAQ](../concepts/best-practices-multivariate.md#faq) in the best practices document for the difference between `severity` and `score`.
+
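+As an illustration of that guidance, the following sketch filters the batch results from the earlier example on both flags. The 0.3 severity threshold is only an example value, not a recommendation.
+
+```powershell
+# Keep only anomalous timestamps whose severity exceeds the chosen threshold.
+$results.results |
+    Where-Object { $_.value.isAnomaly -and $_.value.severity -gt 0.3 } |
+    Select-Object timestamp, @{ Name = "severity"; Expression = { $_.value.severity } }
+```
+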
+## Next steps
+
+* [Best practices of multivariate anomaly detection](../concepts/best-practices-multivariate.md)
ai-services Create Resource https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/Anomaly-Detector/How-to/create-resource.md
+
+ Title: Create an Anomaly Detector resource
+
+description: Create an Anomaly Detector resource
++++++ Last updated : 11/01/2022++++
+# Create an Anomaly Detector resource
+
+Anomaly Detector service is a cloud-based Azure AI service that uses machine-learning models to detect anomalies in your time series data. Here, you'll learn how to create an Anomaly Detector resource in the Azure portal.
+
+## Create an Anomaly Detector resource in Azure portal
+
+1. Create an Azure subscription if you don't have one - [Create one for free](https://azure.microsoft.com/free/cognitive-services)
+1. Once you have your Azure subscription, [create an Anomaly Detector resource](https://portal.azure.com/#create/Microsoft.CognitiveServicesAnomalyDetector) in the Azure portal, and fill out the following fields:
+
+ - **Subscription**: Select your current subscription.
+ - **Resource group**: The [Azure resource group](/azure/cloud-adoption-framework/govern/resource-consistency/resource-access-management#what-is-an-azure-resource-group) that will contain your resource. You can create a new group or add it to a pre-existing group.
+ - **Region**: Select your local region, see supported [Regions](../regions.md).
+ - **Name**: Enter a name for your resource. We recommend using a descriptive name, for example *multivariate-msft-test*.
+ - **Pricing tier**: The cost of your resource depends on the pricing tier you choose and your usage. For more information, see [pricing details](https://azure.microsoft.com/pricing/details/cognitive-services/anomaly-detector/). You can use the free pricing tier (F0) to try the service, and upgrade later to a paid tier for production.
+
+> [!div class="mx-imgBorder"]
+> ![Screenshot of create a resource user experience](../media/create-resource/create-resource.png)
+
+1. Select **Identity** in the banner above and make sure you set the status to **On**, which enables Anomaly Detector to access your data in Azure in a secure way, and then select **Review + create**.
+
+> [!div class="mx-imgBorder"]
+> ![Screenshot of enable managed identity](../media/create-resource/enable-managed-identity.png)
+
+1. Wait a few seconds until validation passes, and then select the **Create** button in the bottom-left corner.
+1. After you select **Create**, you're redirected to a new page that says **Deployment in progress**. After a few seconds, you'll see a message that says **Your deployment is complete**; then select **Go to resource**.
+
+## Get Endpoint URL and keys
+
+In your resource, select **Keys and Endpoint** on the left navigation bar, and copy the **key** (both key1 and key2 will work) and **endpoint** values from your Anomaly Detector resource. You'll need the key and endpoint values to connect your application to the Anomaly Detector API.
+
+> [!div class="mx-imgBorder"]
+> ![Screenshot of copy key and endpoint user experience](../media/create-resource/copy-key-endpoint.png)
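+
+If you prefer to script these steps instead of using the portal, a hedged Azure PowerShell sketch is shown below. It assumes the Az.CognitiveServices module, and the resource group, name, region, and SKU are example values only.
+
+```powershell
+# Create an Anomaly Detector resource (kind "AnomalyDetector", free F0 tier).
+New-AzCognitiveServicesAccount -ResourceGroupName "my-anomaly-rg" -Name "multivariate-msft-test" `
+    -Type "AnomalyDetector" -SkuName "F0" -Location "westus2"
+
+# Retrieve the endpoint and keys once the resource exists.
+(Get-AzCognitiveServicesAccount -ResourceGroupName "my-anomaly-rg" -Name "multivariate-msft-test").Endpoint
+Get-AzCognitiveServicesAccountKey -ResourceGroupName "my-anomaly-rg" -Name "multivariate-msft-test"
+```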
+
+That's it! You could start preparing your data for further steps!
+
+## Next steps
+
+* [Join us to get more supports!](https://aka.ms/adadvisorsjoin)
ai-services Deploy Anomaly Detection On Container Instances https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/Anomaly-Detector/How-to/deploy-anomaly-detection-on-container-instances.md
+
+ Title: Run Anomaly Detector Container in Azure Container Instances
+
+description: Deploy the Anomaly Detector container to an Azure Container Instance, and test it in a web browser.
+++++++ Last updated : 04/01/2020+++
+# Deploy an Anomaly Detector univariate container to Azure Container Instances
+
+Learn how to deploy the Azure AI services [Anomaly Detector](../anomaly-detector-container-howto.md) container to Azure [Container Instances](../../../container-instances/index.yml). This procedure demonstrates the creation of an Anomaly Detector resource. Then we discuss pulling the associated container image. Finally, we highlight the ability to exercise the orchestration of the two from a browser. Using containers can shift the developers' attention away from managing infrastructure to instead focusing on application development.
+++++
+## Next steps
+
+* Review [Install and run containers](../anomaly-detector-container-configuration.md) for pulling the container image and run the container
+* Review [Configure containers](../anomaly-detector-container-configuration.md) for configuration settings
+* [Learn more about Anomaly Detector API service](https://go.microsoft.com/fwlink/?linkid=2080698&clcid=0x409)
ai-services Deploy Anomaly Detection On Iot Edge https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/Anomaly-Detector/How-to/deploy-anomaly-detection-on-iot-edge.md
+
+ Title: Run Anomaly Detector on IoT Edge
+
+description: Deploy the Anomaly Detector module to IoT Edge.
++++++ Last updated : 12/03/2020+++
+# Deploy an Anomaly Detector univariate module to IoT Edge
+
+Learn how to deploy the Azure AI services [Anomaly Detector](../anomaly-detector-container-howto.md) module to an IoT Edge device. Once it's deployed into IoT Edge, the module runs in IoT Edge together with other modules as container instances. It exposes the exact same APIs as an Anomaly Detector container instance running in a standard docker container environment.
+
+## Prerequisites
+
+* Use an Azure subscription. If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free) before you begin.
+* Install the [Azure CLI](/cli/azure/install-azure-cli).
+* An [IoT Hub](../../../iot-hub/iot-hub-create-through-portal.md) and an [IoT Edge](../../../iot-edge/quickstart-linux.md) device.
++
+## Deploy the Anomaly Detection module to the edge
+
+1. In the Azure portal, enter **Anomaly Detector on IoT Edge** into the search and open the Azure Marketplace result.
+2. It will take you to the Azure portal's [Target Devices for IoT Edge Module page](https://portal.azure.com/#create/azure-cognitive-service.edge-anomaly-detector). Provide the following required information.
+
+ 1. Select your subscription.
+
+ 1. Select your IoT Hub.
+
+ 1. Select **Find device** and find an IoT Edge device.
+
+3. Select the **Create** button.
+
+4. Select the **AnomalyDetectoronIoTEdge** module.
+
+ :::image type="content" source="../media/deploy-anomaly-detection-on-iot-edge/iot-edge-modules.png" alt-text="Image of IoT Edge Modules user interface with AnomalyDetectoronIoTEdge link highlighted with a red box to indicate that this is the item to select.":::
+
+5. Navigate to **Environment Variables** and provide the following information.
+
+ 1. Keep the value accept for **Eula**.
+
+ 1. Fill out **Billing** with your Azure AI services endpoint.
+
+ 1. Fill out **ApiKey** with your Azure AI services API key.
+
+ :::image type="content" source="../media/deploy-anomaly-detection-on-iot-edge/environment-variables.png" alt-text="Environment variables with red boxes around the areas that need values to be filled in for endpoint and API key":::
+
+6. Select **Update**
+
+7. Select **Next: Routes** to define your route. You define all messages from all modules to go to Azure IoT Hub. To learn how to declare a route, see [Establish routes in IoT Edge](../../../iot-edge/module-composition.md?view=iotedge-2020-11&preserve-view=true).
+
+8. Select **Next: Review + create**. You can preview the JSON file that defines all the modules that get deployed to your IoT Edge device.
+
+9. Select **Create** to start the module deployment.
+
+10. After you complete module deployment, you'll go back to the IoT Edge page of your IoT hub. Select your device from the list of IoT Edge devices to see its details.
+
+11. Scroll down and see the modules listed. Check that the runtime status is running for your new module.
+
+To troubleshoot the runtime status of your IoT Edge device, consult the [troubleshooting guide](../../../iot-edge/troubleshoot.md).
+
+## Test Anomaly Detector on an IoT Edge device
+
+You'll make an HTTP call to the Azure IoT Edge device that has the Azure AI services container running. The container provides REST-based endpoint APIs. Use the host, `http://<your-edge-device-ipaddress>:5000`, for module APIs.
+
+Alternatively, you can [create a module client by using the Anomaly Detector client library](../quickstarts/client-libraries.md?tabs=linux&pivots=programming-language-python) on the Azure IoT Edge device, and then call the running Azure AI services container on the edge. Use the host endpoint `http://<your-edge-device-ipaddress>:5000` and leave the host key empty.
+
+If your edge device does not already allow inbound communication on port 5000, you will need to create a new **inbound port rule**.
+
+For an Azure VM, this can be set under **Virtual Machine** > **Settings** > **Networking** > **Inbound port rule** > **Add inbound port rule**.
+
+There are several ways to validate that the module is running. Locate the *External IP* address and exposed port of the edge device in question, and open your favorite web browser. Use the various request URLs below to validate the container is running. The example request URLs listed below are `http://<your-edge-device-ipaddress>:5000`, but your specific container may vary. Keep in mind that you need to use your edge device's *External IP* address.
+
+| Request URL | Purpose |
+|:-|:|
+| `http://<your-edge-device-ipaddress>:5000/` | The container provides a home page. |
+| `http://<your-edge-device-ipaddress>:5000/status` | Also requested with GET, this verifies if the api-key used to start the container is valid without causing an endpoint query. This request can be used for Kubernetes [liveness and readiness probes](https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-probes/). |
+| `http://<your-edge-device-ipaddress>:5000/swagger` | The container provides a full set of documentation for the endpoints and a **Try it out** feature. With this feature, you can enter your settings into a web-based HTML form and make the query without having to write any code. After the query returns, an example CURL command is provided to demonstrate the HTTP headers and body format that's required. |
+
+![Container's home page](../../../../includes/media/cognitive-services-containers-api-documentation/container-webpage.png)
+
+## Next steps
+
+* Review [Install and run containers](../anomaly-detector-container-configuration.md) for pulling the container image and running the container
+* Review [Configure containers](../anomaly-detector-container-configuration.md) for configuration settings
+* [Learn more about Anomaly Detector API service](https://go.microsoft.com/fwlink/?linkid=2080698&clcid=0x409)
ai-services Identify Anomalies https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/Anomaly-Detector/How-to/identify-anomalies.md
+
+ Title: How to use the Anomaly Detector API on your time series data
+
+description: Learn how to detect anomalies in your data either as a batch, or on streaming data.
++++++ Last updated : 10/01/2019+++
+# How to: Use the Anomaly Detector univariate API on your time series data
+
+The [Anomaly Detector API](https://westus2.dev.cognitive.microsoft.com/docs/services/AnomalyDetector/operations/post-timeseries-entire-detect) provides two methods of anomaly detection. You can either detect anomalies as a batch throughout your time series, or detect the anomaly status of the latest data point as your data is generated. The detection model returns anomaly results along with each data point's expected value and the upper and lower anomaly detection boundaries. You can use these values to visualize the range of normal values and the anomalies in the data.
+
+## Anomaly detection modes
+
+The Anomaly Detector API provides two detection modes: batch and streaming.
+
+> [!NOTE]
+> The following request URLs must be combined with the appropriate endpoint for your subscription. For example:
+> `https://<your-custom-subdomain>.api.cognitive.microsoft.com/anomalydetector/v1.0/timeseries/entire/detect`
++
+### Batch detection
+
+To detect anomalies throughout a batch of data points over a given time range, use the following request URI with your time series data:
+
+`/timeseries/entire/detect`.
+
+By sending your time series data at once, the API will generate a model using the entire series, and analyze each data point with it.
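+
+As an illustration, a batch detection request could be sketched with `curl` as follows. This is a minimal sketch: the subdomain, key, and two-point series are placeholders, and a real request needs at least 12 data points.
+
+```bash
+# Minimal sketch of a batch (entire series) detection call.
+# Replace the subdomain and key with your own values; a real series needs at least 12 points.
+curl -X POST "https://<your-custom-subdomain>.api.cognitive.microsoft.com/anomalydetector/v1.0/timeseries/entire/detect" \
+  -H "Ocp-Apim-Subscription-Key: <your-key>" \
+  -H "Content-Type: application/json" \
+  -d '{
+        "granularity": "daily",
+        "series": [
+          { "timestamp": "2018-03-01T00:00:00Z", "value": 32858923 },
+          { "timestamp": "2018-03-02T00:00:00Z", "value": 29615278 }
+        ]
+      }'
+```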
+
+### Streaming detection
+
+To continuously detect anomalies on streaming data, use the following request URI with your latest data point:
+
+`/timeseries/last/detect`.
+
+By sending new data points as you generate them, you can monitor your data in real time. A model will be generated with the data points you send, and the API will determine if the latest point in the time series is an anomaly.
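+
+A latest-point call differs only in the request path; the body format is the same as for batch detection. A minimal sketch, assuming your windowed series is saved in a local file named `series.json` (a hypothetical file name):
+
+```bash
+# Minimal sketch of a latest-point (streaming) detection call.
+# Only the path changes compared to batch detection; the body format is identical.
+curl -X POST "https://<your-custom-subdomain>.api.cognitive.microsoft.com/anomalydetector/v1.0/timeseries/last/detect" \
+  -H "Ocp-Apim-Subscription-Key: <your-key>" \
+  -H "Content-Type: application/json" \
+  -d @series.json
+```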
+
+## Adjusting lower and upper anomaly detection boundaries
+
+By default, the upper and lower boundaries for anomaly detection are calculated using `expectedValue`, `upperMargin`, and `lowerMargin`. If you require different boundaries, we recommend applying a `marginScale` to `upperMargin` or `lowerMargin`. The boundaries would be calculated as follows:
+
+|Boundary |Calculation |
+|--|--|
+|`upperBoundary` | `expectedValue + (100 - marginScale) * upperMargin` |
+|`lowerBoundary` | `expectedValue - (100 - marginScale) * lowerMargin` |
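+
+As a hypothetical illustration of the formula above: with an `expectedValue` of 10, an `upperMargin` of 0.2, and a `marginScale` of 95, the upper boundary would be 10 + (100 - 95) * 0.2 = 11. Lowering `marginScale` widens the boundaries, so fewer points are flagged as anomalies.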
+
+The following examples show an Anomaly Detector API result at different sensitivities.
+
+### Example with sensitivity at 99
+
+![Default Sensitivity](../media/sensitivity_99.png)
+
+### Example with sensitivity at 95
+
+![95 Sensitivity](../media/sensitivity_95.png)
+
+### Example with sensitivity at 85
+
+![85 Sensitivity](../media/sensitivity_85.png)
+
+## Next Steps
+
+* [What is the Anomaly Detector API?](../overview.md)
+* [Quickstart: Detect anomalies in your time series data using the Anomaly Detector](../quickstarts/client-libraries.md)
ai-services Postman https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/Anomaly-Detector/How-to/postman.md
+
+ Title: How to run Multivariate Anomaly Detector API (GA version) in Postman?
+
+description: Learn how to detect anomalies in your data either as a batch, or on streaming data with Postman.
++++++ Last updated : 12/20/2022+++
+# How to run Multivariate Anomaly Detector API in Postman?
+
+This article will walk you through the process of using Postman to access the Multivariate Anomaly Detection REST API.
+
+## Getting started
+
+Select this button to fork the API collection in Postman and follow the steps in this article to test.
+
+[![Run in Postman](../media/postman/button.svg)](https://app.getpostman.com/run-collection/18763802-b90da6d8-0f98-4200-976f-546342abcade?action=collection%2Ffork&collection-url=entityId%3D18763802-b90da6d8-0f98-4200-976f-546342abcade%26entityType%3Dcollection%26workspaceId%3De1370b45-5076-4885-884f-e9a97136ddbc#?env%5BMVAD%5D=W3sia2V5IjoibW9kZWxJZCIsInZhbHVlIjoiIiwiZW5hYmxlZCI6dHJ1ZSwidHlwZSI6ImRlZmF1bHQiLCJzZXNzaW9uVmFsdWUiOiJlNjQxZTJlYy01Mzg5LTExZWQtYTkyMC01MjcyNGM4YTZkZmEiLCJzZXNzaW9uSW5kZXgiOjB9LHsia2V5IjoicmVzdWx0SWQiLCJ2YWx1ZSI6IiIsImVuYWJsZWQiOnRydWUsInR5cGUiOiJkZWZhdWx0Iiwic2Vzc2lvblZhbHVlIjoiOGZkZTAwNDItNTM4YS0xMWVkLTlhNDEtMGUxMGNkOTEwZmZhIiwic2Vzc2lvbkluZGV4IjoxfSx7ImtleSI6Ik9jcC1BcGltLVN1YnNjcmlwdGlvbi1LZXkiLCJ2YWx1ZSI6IiIsImVuYWJsZWQiOnRydWUsInR5cGUiOiJzZWNyZXQiLCJzZXNzaW9uVmFsdWUiOiJjNzNjMGRhMzlhOTA0MjgzODA4ZjBmY2E0Zjc3MTFkOCIsInNlc3Npb25JbmRleCI6Mn0seyJrZXkiOiJlbmRwb2ludCIsInZhbHVlIjoiIiwiZW5hYmxlZCI6dHJ1ZSwidHlwZSI6ImRlZmF1bHQiLCJzZXNzaW9uVmFsdWUiOiJodHRwczovL211bHRpLWFkLXRlc3QtdXNjeC5jb2duaXRpdmVzZXJ2aWNlcy5henVyZS5jb20vIiwic2Vzc2lvbkluZGV4IjozfSx7ImtleSI6ImRhdGFTb3VyY2UiLCJ2YWx1ZSI6IiIsImVuYWJsZWQiOnRydWUsInR5cGUiOiJkZWZhdWx0Iiwic2Vzc2lvblZhbHVlIjoiaHR0cHM6Ly9tdmFkZGF0YXNldC5ibG9iLmNvcmUud2luZG93cy5uZXQvc2FtcGxlLW9uZXRhYmxlL3NhbXBsZV9kYXRhXzVfMzAwMC5jc3YiLCJzZXNzaW9uSW5kZXgiOjR9XQ==)
+
+## Multivariate Anomaly Detector API
+
+1. Select environment as **MVAD**.
+
+ :::image type="content" source="../media/postman/postman-initial.png" alt-text="Screenshot of Postman UI with MVAD selected." lightbox="../media/postman/postman-initial.png":::
+
+2. Select **Environment**, paste your Anomaly Detector `endpoint`, `key`, and dataSource `url` into the **CURRENT VALUE** column, and then select **Save** to let the variables take effect.
+
+ :::image type="content" source="../media/postman/postman-key.png" alt-text="Screenshot of Postman UI with key, endpoint, and datasource filled in." lightbox="../media/postman/postman-key.png":::
+
+3. Select **Collections**, and select the first API - **Create and train a model**, then select **Send**.
+
+ > [!NOTE]
+   > If your data is one CSV file, set the dataSchema to **OneTable**. If your data is multiple CSV files in a folder, set the dataSchema to **MultiTable**.
+
+ :::image type="content" source="../media/postman/create-and-train.png" alt-text="Screenshot of create and train POST request." lightbox="../media/postman/create-and-train.png":::
+
+4. In the response of the first API, copy the `modelId` and paste it into the `modelId` variable in **Environments**, and select **Save**. Then go to **Collections**, select **Get model status**, and select **Send**.
+ ![GIF of process of copying model identifier](../media/postman/model.gif)
+
+5. Select **Batch Detection**, and select **Send**. This API triggers an asynchronous inference task; use the **Get batch detection results** API several times until you get the status and the final results.
+
+ :::image type="content" source="../media/postman/result.png" alt-text="Screenshot of batch detection POST request." lightbox="../media/postman/result.png":::
+
+6. In the response, copy the `resultId` and paste it into the `resultId` variable in **Environments**, and select **Save**. Then go to **Collections**, select **Get batch detection results**, and select **Send**.
+
+ ![GIF of process of copying result identifier](../media/postman/result.gif)
+
+7. For the rest of the API calls, select each one and then select **Send** to test out its request and response.
+
+ :::image type="content" source="../media/postman/detection.png" alt-text="Screenshot of detect last POST result." lightbox="../media/postman/detection.png":::
+
+## Next Steps
+
+* [Create an Anomaly Detector resource](create-resource.md)
+* [Quickstart: Detect anomalies in your time series data using the Anomaly Detector](../quickstarts/client-libraries.md)
ai-services Prepare Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/Anomaly-Detector/How-to/prepare-data.md
+
+ Title: Prepare your data and upload to Storage Account
+
+description: Prepare your data and upload to Storage Account
++++++ Last updated : 11/01/2022++++
+# Prepare your data and upload to Storage Account
+
+Multivariate Anomaly Detection requires training to process your data, and an Azure Storage Account to store your data for further training and inference steps.
+
+## Data preparation
+
+First you need to prepare your data for training and inference.
+
+### Input data schema
+
+Multivariate Anomaly Detection supports two types of data schemas: **OneTable** and **MultiTable**. You could use either of these schemas to prepare your data and upload to Storage Account for further training and inference.
++
+#### Schema 1: OneTable
+**OneTable** is one CSV file that contains all the variables that you want to use to train a Multivariate Anomaly Detection model, plus one `timestamp` column. Download the [One Table sample data](https://mvaddataset.blob.core.windows.net/public-sample-data/sample_data_5_3000.csv).
+* The `timestamp` values should conform to *ISO 8601*; the values of other variables in other columns could be *integers* or *decimals* with any number of decimal places.
+
+* Variables for training and variables for inference should be consistent. For example, if you're using `series_1`, `series_2`, `series_3`, `series_4`, and `series_5` for training, you should provide exactly the same variables for inference.
+
+ ***Example:***
+
+![Diagram of one table schema.](../media/prepare-data/onetable-schema.png)
+
+#### Schema 2: MultiTable
+
+**MultiTable** is multiple CSV files in one folder, where each CSV file contains only the two columns of one variable, with the exact column names **timestamp** and **value**. Download the [Multiple Tables sample data](https://mvaddataset.blob.core.windows.net/public-sample-data/sample_data_5_3000.zip) and unzip it.
+
+* The `timestamp` values should conform to *ISO 8601*; the `value` could be *integers* or *decimals* with any number of decimal places.
+
+* The name of the csv file will be used as the variable name and should be unique. For example, *temperature.csv* and *humidity.csv*.
+
+* Variables for training and variables for inference should be consistent. For example, if you're using `series_1`, `series_2`, `series_3`, `series_4`, and `series_5` for training, you should provide exactly the same variables for inference.
+
+ ***Example:***
+
+> [!div class="mx-imgBorder"]
+> ![Diagram of multi table schema.](../media/prepare-data/multitable.png)
+
+> [!NOTE]
+> If your timestamps have hours, minutes, and/or seconds, ensure that they're properly rounded up before calling the APIs.
+> For example, if your data frequency is supposed to be one data point every 30 seconds, but you're seeing timestamps like "12:00:01" and "12:00:28", it's a strong signal that you should pre-process the timestamps to new values like "12:00:00" and "12:00:30".
+> For details, please refer to the ["Timestamp round-up" section](../concepts/best-practices-multivariate.md#timestamp-round-up) in the best practices document.
+
+## Upload your data to Storage Account
+
+Once you prepare your data with either of the two schemas above, you could upload your CSV file (OneTable) or your data folder (MultiTable) to your Storage Account.
+
+1. [Create a Storage Account](https://portal.azure.com/#create/Microsoft.StorageAccount-ARM) and fill out the fields, similar to the steps for creating an Anomaly Detector resource.
+
+ > [!div class="mx-imgBorder"]
+ > ![Screenshot of Azure Storage account setup page.](../media/prepare-data/create-blob.png)
+
+2. Select **Container** to the left in your Storage Account resource and select **+Container** to create one that will store your data.
+
+3. Upload your data to the container.
+
+ **Upload *OneTable* data**
+
+ Go to the container that you created, and select **Upload**, then choose your prepared CSV file and upload.
+
+ Once your data is uploaded, select your CSV file and copy the **blob URL** through the small blue button. (Please paste the URL somewhere convenient for further steps.)
+
+ > [!div class="mx-imgBorder"]
+ > ![Screenshot of copy blob url for one table.](../media/prepare-data/onetable-copy-url.png)
+
+ **Upload *MultiTable* data**
+
+   Go to the container that you created, select **Upload**, then select **Advanced**, enter a folder name in **Upload to folder**, select all the variables in separate CSV files, and upload.
+
+   Once your data is uploaded, go into the folder, select one CSV file in the folder, and copy the **blob URL**, keeping only the part before the name of that CSV file, so that the final blob URL ***links to the folder***. (Paste the URL somewhere convenient for later steps.)
+
+ > [!div class="mx-imgBorder"]
+ > ![Screenshot of copy blob url for multi table.](../media/prepare-data/multitable-copy-url.png)
+
+4. Grant Anomaly Detector access to read the data in your Storage Account.
+   * In your container, select **Access Control (IAM)** on the left, then select **+ Add** > **Add role assignment**. If **Add role assignment** is disabled, contact your Storage Account owner to add the Owner role to your container.
+
+ > [!div class="mx-imgBorder"]
+ > ![Screenshot of set access control UI.](../media/prepare-data/add-role-assignment.png)
+
+   * Search for and select the **Storage Blob Data Reader** role, and then select **Next**. Technically, the roles highlighted below and the *Owner* role should all work.
+
+ > [!div class="mx-imgBorder"]
+ > ![Screenshot of add role assignment with reader roles selected.](../media/prepare-data/add-reader-role.png)
+
+   * For **Assign access to**, select **Managed identity**, then choose **Select members**, select the Anomaly Detector resource that you created earlier, and select **Review + assign**.
+
+## Next steps
+
+* [Train a multivariate anomaly detection model](train-model.md)
+* [Best practices of multivariate anomaly detection](../concepts/best-practices-multivariate.md)
ai-services Streaming Inference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/Anomaly-Detector/How-to/streaming-inference.md
+
+ Title: Streaming inference with trained model
+
+description: Streaming inference with trained model
++++++ Last updated : 11/01/2022+++
+# Streaming inference with trained model
+
+You can choose either the batch inference API or the streaming inference API for detection.
+
+| Batch inference API | Streaming inference API |
+| - | - |
+| More suitable for batch use cases when customers don't need to get inference results immediately and want to detect anomalies and get results over a longer time period.| When customers want to get inference immediately and want to detect multivariate anomalies in real-time, this API is recommended. Also suitable for customers having difficulties conducting the previous compressing and uploading process for inference. |
+
+|API Name| Method | Path | Description |
+| | - | -- | |
+|**Batch Inference**| POST | `{endpoint}`/anomalydetector/v1.1/multivariate/models/`{modelId}`:detect-batch | Trigger an asynchronous inference with `modelId`, which works in a batch scenario |
+|**Get Batch Inference Results**| GET | `{endpoint}`/anomalydetector/v1.1/multivariate/detect-batch/`{resultId}` | Get batch inference results with `resultId` |
+|**Streaming Inference**| POST | `{endpoint}`/anomalydetector/v1.1/multivariate/models/`{modelId}`:detect-last | Trigger a synchronous inference with `modelId`, which works in a streaming scenario |
+
+## Trigger a streaming inference API
+
+### Request
+
+With the synchronous API, you can get inference results point by point in real time, and there's no need for the compressing and uploading tasks required for training and asynchronous inference. Here are some requirements for the synchronous API:
+* You need to put data in **JSON format** into the API request body.
+* Due to payload limitations, the size of inference data in the request body is limited. It supports at most `2880` timestamps * `300` variables, and requires at least `1` sliding window length of data.
+
+You can submit timestamps and values for multiple variables in JSON format in the request body, with an API call like this:
+
+**{{endpoint}}/anomalydetector/v1.1/multivariate/models/{modelId}:detect-last**
+
+A sample request:
+
+```json
+{
+ "variables": [
+ {
+ "variable": "Variable_1",
+ "timestamps": [
+ "2021-01-01T00:00:00Z",
+ "2021-01-01T00:01:00Z",
+ "2021-01-01T00:02:00Z"
+ //more timestamps
+ ],
+ "values": [
+ 0.4551378545933972,
+ 0.7388603950488748,
+ 0.201088255984052
+ //more values
+ ]
+ },
+ {
+ "variable": "Variable_2",
+ "timestamps": [
+ "2021-01-01T00:00:00Z",
+ "2021-01-01T00:01:00Z",
+ "2021-01-01T00:02:00Z"
+ //more timestamps
+ ],
+ "values": [
+ 0.9617871613964145,
+ 0.24903311574778408,
+ 0.4920561254118613
+ //more values
+ ]
+ },
+ {
+ "variable": "Variable_3",
+ "timestamps": [
+ "2021-01-01T00:00:00Z",
+ "2021-01-01T00:01:00Z",
+ "2021-01-01T00:02:00Z"
+ //more timestamps
+ ],
+ "values": [
+ 0.4030756879437628,
+ 0.15526889968448554,
+ 0.36352226408981103
+ //more values
+ ]
+ }
+ ],
+ "topContributorCount": 2
+}
+```
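+
+As a minimal sketch, assuming the request body above is saved as `request.json` (a hypothetical file name) and that your resource key is sent in the `Ocp-Apim-Subscription-Key` header, the call could look like this:
+
+```bash
+# Sketch of a synchronous (streaming) inference call.
+# Replace {endpoint}, {modelId}, and the key placeholder with your own values.
+curl -X POST "{endpoint}/anomalydetector/v1.1/multivariate/models/{modelId}:detect-last" \
+  -H "Ocp-Apim-Subscription-Key: <your-anomaly-detector-key>" \
+  -H "Content-Type: application/json" \
+  -d @request.json
+```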
+
+#### Required parameters
+
+* **variable**: The name of each variable, which should be exactly the same as in your training data.
+* **timestamps**: The length of the timestamps should be equal to **1 sliding window**, since every streaming inference call will use 1 sliding window to detect the last point in the sliding window.
+* **values**: The values of each variable in every timestamp that was inputted above.
+
+#### Optional parameters
+
+* **topContributorCount**: A number N that you can specify from **1 to 30**, which gives you the details of the top N contributing variables in the anomaly results. For example, if you have 100 variables in the model but you only care about the top five contributing variables in the detection results, then fill this field with 5. The default number is **10**.
+
+### Response
+
+A sample response:
+
+```json
+{
+ "variableStates": [
+ {
+ "variable": "series_0",
+ "filledNARatio": 0.0,
+ "effectiveCount": 1,
+ "firstTimestamp": "2021-01-03T01:59:00Z",
+ "lastTimestamp": "2021-01-03T01:59:00Z"
+ },
+ {
+ "variable": "series_1",
+ "filledNARatio": 0.0,
+ "effectiveCount": 1,
+ "firstTimestamp": "2021-01-03T01:59:00Z",
+ "lastTimestamp": "2021-01-03T01:59:00Z"
+ },
+ {
+ "variable": "series_2",
+ "filledNARatio": 0.0,
+ "effectiveCount": 1,
+ "firstTimestamp": "2021-01-03T01:59:00Z",
+ "lastTimestamp": "2021-01-03T01:59:00Z"
+ },
+ {
+ "variable": "series_3",
+ "filledNARatio": 0.0,
+ "effectiveCount": 1,
+ "firstTimestamp": "2021-01-03T01:59:00Z",
+ "lastTimestamp": "2021-01-03T01:59:00Z"
+ },
+ {
+ "variable": "series_4",
+ "filledNARatio": 0.0,
+ "effectiveCount": 1,
+ "firstTimestamp": "2021-01-03T01:59:00Z",
+ "lastTimestamp": "2021-01-03T01:59:00Z"
+ }
+ ],
+ "results": [
+ {
+ "timestamp": "2021-01-03T01:59:00Z",
+ "value": {
+ "isAnomaly": false,
+ "severity": 0.0,
+ "score": 0.2675322890281677,
+ "interpretation": []
+ },
+ "errors": []
+ }
+ ]
+}
+```
+
+The response contains the result status, variable information, inference parameters, and inference results.
+
+* **variableStates**: This lists the information of each variable in the inference request.
+* **setupInfo**: This is the request body submitted for this inference.
+* **results**: This contains the detection results. There are three typical types of detection results.
+
+* **isAnomaly**: `false` indicates the current timestamp isn't an anomaly. `true` indicates an anomaly at the current timestamp.
+ * `severity` indicates the relative severity of the anomaly and for abnormal data it's always greater than 0.
+ * `score` is the raw output of the model on which the model makes a decision. `severity` is a derived value from `score`. Every data point has a `score`.
+
+* **interpretation**: This field only appears when a timestamp is detected as anomalous. It contains `variables`, `contributionScore`, and `correlationChanges`.
+
+* **contributors**: This is a list containing the contribution score of each variable. Higher contribution scores indicate higher possibility of the root cause. This list is often used for interpreting anomalies and diagnosing the root causes.
+
+* **correlationChanges**: This field only appears when a timestamp is detected as anomalous, which is included in interpretation. It contains `changedVariables` and `changedValues` that interpret which correlations between variables changed.
+
+* **changedVariables**: This field shows which variables have a significant change in correlation with `variable`. The variables in this list are ranked by the extent of the correlation changes.
+
+> [!NOTE]
+> A common pitfall is taking all data points with `isAnomaly`=`true` as anomalies. That may end up with too many false positives.
+> You should use both `isAnomaly` and `severity` (or `score`) to sift out anomalies that are not severe and (optionally) use grouping to check the duration of the anomalies to suppress random noise.
+> Please refer to the [FAQ](../concepts/best-practices-multivariate.md#faq) in the best practices document for the difference between `severity` and `score`.
+
+## Next steps
+
+* [Multivariate Anomaly Detection reference architecture](../concepts/multivariate-architecture.md)
+* [Best practices of multivariate anomaly detection](../concepts/best-practices-multivariate.md)
ai-services Train Model https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/Anomaly-Detector/How-to/train-model.md
+
+ Title: Train a Multivariate Anomaly Detection model
+
+description: Train a Multivariate Anomaly Detection model
++++++ Last updated : 11/01/2022+++
+# Train a Multivariate Anomaly Detection model
+
+To test out Multivariate Anomaly Detection quickly, try the [Code Sample](https://github.com/Azure-Samples/AnomalyDetector)! For more instructions on how to run a Jupyter notebook, please refer to [Install and Run a Jupyter Notebook](https://jupyter-notebook-beginner-guide.readthedocs.io/en/latest/install.html#).
+
+## API Overview
+
+There are 7 APIs provided in Multivariate Anomaly Detection:
+* **Training**: Use `Train Model API` to create and train a model, then use `Get Model Status API` to get the status and model metadata.
+* **Inference**:
+ * Use `Async Inference API` to trigger an asynchronous inference process and use `Get Inference results API` to get detection results on a batch of data.
+ * You could also use `Sync Inference API` to trigger a detection on one timestamp every time.
+* **Other operations**: `List Model API` and `Delete Model API` are supported in Multivariate Anomaly Detection model for model management.
+
+![Diagram of model training workflow and inference workflow](../media/train-model/api-workflow.png)
+
+|API Name| Method | Path | Description |
+| | - | -- | |
+|**Train Model**| POST | `{endpoint}`/anomalydetector/v1.1/multivariate/models | Create and train a model |
+|**Get Model Status**| GET | `{endpoint}`/anomalydetector/v1.1/multivariate/models/`{modelId}` | Get model status and model metadata with `modelId` |
+|**Batch Inference**| POST | `{endpoint}`/anomalydetector/v1.1/multivariate/models/`{modelId}`:detect-batch | Trigger an asynchronous inference with `modelId`, which works in a batch scenario |
+|**Get Batch Inference Results**| GET | `{endpoint}`/anomalydetector/v1.1/multivariate/detect-batch/`{resultId}` | Get batch inference results with `resultId` |
+|**Streaming Inference**| POST | `{endpoint}`/anomalydetector/v1.1/multivariate/models/`{modelId}`:detect-last | Trigger a synchronous inference with `modelId`, which works in a streaming scenario |
+|**List Model**| GET | `{endpoint}`/anomalydetector/v1.1/multivariate/models | List all models |
+|**Delete Model**| DELETE | `{endpoint}`/anomalydetector/v1.1/multivariate/models/`{modelId}` | Delete model with `modelId` |
+
+## Train a model
+
+In this process, you'll use the following information that you created previously:
+
+* **Key** of Anomaly Detector resource
+* **Endpoint** of Anomaly Detector resource
+* **Blob URL** of your data in Storage Account
+
+For training data size, the maximum number of timestamps is **1000000**, and a recommended minimum number is **5000** timestamps.
+
+### Request
+
+Here's a sample request body to train a Multivariate Anomaly Detection model.
+
+```json
+{
+ "slidingWindow": 200,
+ "alignPolicy": {
+ "alignMode": "Outer",
+ "fillNAMethod": "Linear",
+ "paddingValue": 0
+ },
+ "dataSource": "{{dataSource}}", //Example: https://mvaddataset.blob.core.windows.net/sample-onetable/sample_data_5_3000.csv
+ "dataSchema": "OneTable",
+ "startTime": "2021-01-01T00:00:00Z",
+ "endTime": "2021-01-02T09:19:00Z",
+ "displayName": "SampleRequest"
+}
+```
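+
+As a minimal sketch, assuming the body above is saved as `train.json` (a hypothetical file name), the training request could be submitted like this:
+
+```bash
+# Sketch of a Train Model call; the response contains the modelId used in later steps.
+# Replace {endpoint} and the key placeholder with your own values.
+curl -X POST "{endpoint}/anomalydetector/v1.1/multivariate/models" \
+  -H "Ocp-Apim-Subscription-Key: <your-anomaly-detector-key>" \
+  -H "Content-Type: application/json" \
+  -d @train.json
+```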
+
+#### Required parameters
+
+The following parameters are required in training and inference API requests:
+
+* **dataSource**: This is the Blob URL that links to your folder or CSV file in Azure Blob Storage.
+* **dataSchema**: This indicates the schema that you're using: `OneTable` or `MultiTable`.
+* **startTime**: The start time of data used for training or inference. If it's earlier than the actual earliest timestamp in the data, the actual earliest timestamp will be used as the starting point.
+* **endTime**: The end time of data used for training or inference, which must be later than or equal to `startTime`. If `endTime` is later than the actual latest timestamp in the data, the actual latest timestamp will be used as the ending point. If `endTime` equals `startTime`, it means inference on a single data point, which is often used in streaming scenarios.
+
+#### Optional parameters
+
+Other parameters for training API are optional:
+
+* **slidingWindow**: How many data points are used to determine anomalies. An integer between 28 and 2,880. The default value is 300. If `slidingWindow` is `k` for model training, then at least `k` points should be accessible from the source file during inference to get valid results.
+
+ Multivariate Anomaly Detection takes a segment of data points to decide if the next data point is an anomaly. The length of the segment is the `slidingWindow`.
+ Please keep two things in mind when choosing a `slidingWindow` value:
+ 1. The properties of your data: whether it's periodic and the sampling rate. When your data is periodic, you could set the length of 1 - 3 cycles as the `slidingWindow`. When your data is at a high frequency (small granularity) like minute-level or second-level, you could set a relatively higher value of `slidingWindow`.
+ 1. The trade-off between training/inference time and potential performance impact. A larger `slidingWindow` may cause longer training/inference time. There's **no guarantee** that larger `slidingWindow`s will lead to accuracy gains. A small `slidingWindow` may make it difficult for the model to converge on an optimal solution. For example, it's hard to detect anomalies when `slidingWindow` has only two points.
+
+* **alignMode**: How to align multiple variables (time series) on timestamps. There are two options for this parameter, `Inner` and `Outer`, and the default value is `Outer`.
+
+ This parameter is critical when there's misalignment between timestamp sequences of the variables. The model needs to align the variables onto the same timestamp sequence before further processing.
+
+ `Inner` means the model will report detection results only on timestamps on which **every variable** has a value, that is, the intersection of all variables. `Outer` means the model will report detection results on timestamps on which **any variable** has a value, that is, the union of all variables.
+
+  Here's an example to explain different `alignMode` values.
+
+ *Variable-1*
+
+ |timestamp | value|
+ -| --|
+ |2020-11-01| 1
+ |2020-11-02| 2
+ |2020-11-04| 4
+ |2020-11-05| 5
+
+ *Variable-2*
+
+ timestamp | value
+ | -
+ 2020-11-01| 1
+ 2020-11-02| 2
+ 2020-11-03| 3
+ 2020-11-04| 4
+
+ *`Inner` join two variables*
+
+ timestamp | Variable-1 | Variable-2
+ -| - | -
+ 2020-11-01| 1 | 1
+ 2020-11-02| 2 | 2
+ 2020-11-04| 4 | 4
+
+ *`Outer` join two variables*
+
+ timestamp | Variable-1 | Variable-2
+ | - | -
+ 2020-11-01| 1 | 1
+ 2020-11-02| 2 | 2
+ 2020-11-03| `nan` | 3
+ 2020-11-04| 4 | 4
+ 2020-11-05| 5 | `nan`
+
+* **fillNAMethod**: How to fill `nan` in the merged table. There might be missing values in the merged table and they should be properly handled. We provide several methods to fill them up. The options are `Linear`, `Previous`, `Subsequent`, `Zero`, and `Fixed` and the default value is `Linear`.
+
+ | Option | Method |
+ | - | -|
+ | `Linear` | Fill `nan` values by linear interpolation |
+ | `Previous` | Propagate last valid value to fill gaps. Example: `[1, 2, nan, 3, nan, 4]` -> `[1, 2, 2, 3, 3, 4]` |
+ | `Subsequent` | Use next valid value to fill gaps. Example: `[1, 2, nan, 3, nan, 4]` -> `[1, 2, 3, 3, 4, 4]` |
+ | `Zero` | Fill `nan` values with 0. |
+ | `Fixed` | Fill `nan` values with a specified valid value that should be provided in `paddingValue`. |
+
+* **paddingValue**: Padding value is used to fill `nan` when `fillNAMethod` is `Fixed` and must be provided in that case. In other cases, it's optional.
+
+* **displayName**: This is an optional parameter, which is used to identify models. For example, you can use it to mark parameters, data sources, and any other metadata about the model and its input data. The default value is an empty string.
+
+### Response
+
+Within the response, the most important thing is the `modelId`, which you'll use to trigger the Get Model Status API.
+
+A response sample:
+
+```json
+{
+ "modelId": "09c01f3e-5558-11ed-bd35-36f8cdfb3365",
+ "createdTime": "2022-11-01T00:00:00Z",
+ "lastUpdatedTime": "2022-11-01T00:00:00Z",
+ "modelInfo": {
+ "dataSource": "https://mvaddataset.blob.core.windows.net/sample-onetable/sample_data_5_3000.csv",
+ "dataSchema": "OneTable",
+ "startTime": "2021-01-01T00:00:00Z",
+ "endTime": "2021-01-02T09:19:00Z",
+ "displayName": "SampleRequest",
+ "slidingWindow": 200,
+ "alignPolicy": {
+ "alignMode": "Outer",
+ "fillNAMethod": "Linear",
+ "paddingValue": 0.0
+ },
+ "status": "CREATED",
+ "errors": [],
+ "diagnosticsInfo": {
+ "modelState": {
+ "epochIds": [],
+ "trainLosses": [],
+ "validationLosses": [],
+ "latenciesInSeconds": []
+ },
+ "variableStates": []
+ }
+ }
+}
+```
+
+## Get model status
+
+You can use the above API to trigger training, and use the **Get model status** API to check whether the model was trained successfully.
+
+### Request
+
+There's no content in the request body; you only need to include the `modelId` in the API path, which has the following format:
+**{{endpoint}}/anomalydetector/v1.1/multivariate/models/{{modelId}}**
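+
+A minimal `curl` sketch of this call (placeholders assumed):
+
+```bash
+# Sketch of a Get Model Status call; no request body is needed.
+curl -X GET "{endpoint}/anomalydetector/v1.1/multivariate/models/{modelId}" \
+  -H "Ocp-Apim-Subscription-Key: <your-anomaly-detector-key>"
+```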
+
+### Response
+
+* **status**: The `status` in the response body indicates the model status with one of these categories: *CREATED*, *RUNNING*, *READY*, or *FAILED*.
+* **trainLosses & validationLosses**: These are two machine learning metrics that indicate model performance. If the numbers decrease and eventually settle at a relatively small value such as 0.2 or 0.3, it means the model performance is good to some extent. However, the model performance still needs to be validated through inference and comparison with labels, if any.
+* **epochIds**: indicates how many epochs the model has been trained out of a total of 100 epochs. For example, if the model is still in training status, `epochId` might be `[10, 20, 30, 40, 50]` , which means that it has completed its 50th training epoch, and therefore is halfway complete.
+* **latenciesInSeconds**: contains the time cost for each epoch and is recorded every 10 epochs. In this example, the 10th epoch takes approximately 2.2 seconds. This is helpful for estimating the completion time of training.
+* **variableStates**: summarizes information about each variable. It's a list ranked by `filledNARatio` in descending order. It tells how many data points are used for each variable, and `filledNARatio` tells how many points are missing. Usually `filledNARatio` should be reduced as much as possible, because too many missing data points will deteriorate model accuracy.
+* **errors**: Errors during data processing will be included in the `errors` field.
+
+A response sample:
+
+```json
+{
+ "modelId": "09c01f3e-5558-11ed-bd35-36f8cdfb3365",
+ "createdTime": "2022-11-01T00:00:12Z",
+ "lastUpdatedTime": "2022-11-01T00:00:12Z",
+ "modelInfo": {
+ "dataSource": "https://mvaddataset.blob.core.windows.net/sample-onetable/sample_data_5_3000.csv",
+ "dataSchema": "OneTable",
+ "startTime": "2021-01-01T00:00:00Z",
+ "endTime": "2021-01-02T09:19:00Z",
+ "displayName": "SampleRequest",
+ "slidingWindow": 200,
+ "alignPolicy": {
+ "alignMode": "Outer",
+ "fillNAMethod": "Linear",
+ "paddingValue": 0.0
+ },
+ "status": "READY",
+ "errors": [],
+ "diagnosticsInfo": {
+ "modelState": {
+ "epochIds": [
+ 10,
+ 20,
+ 30,
+ 40,
+ 50,
+ 60,
+ 70,
+ 80,
+ 90,
+ 100
+ ],
+ "trainLosses": [
+ 0.30325182933699,
+ 0.24335388161919333,
+ 0.22876543213020673,
+ 0.2439815090461211,
+ 0.22489577260884372,
+ 0.22305156764659015,
+ 0.22466289590705524,
+ 0.22133831883018668,
+ 0.2214335961775346,
+ 0.22268397090109912
+ ],
+ "validationLosses": [
+ 0.29047123109451445,
+ 0.263965221366497,
+ 0.2510373182971068,
+ 0.27116744686858824,
+ 0.2518718700216274,
+ 0.24802495975687047,
+ 0.24790137705176768,
+ 0.24640804830223623,
+ 0.2463938973166726,
+ 0.24831805566344597
+ ],
+ "latenciesInSeconds": [
+ 2.1662967205047607,
+ 2.0658926963806152,
+ 2.112030029296875,
+ 2.130472183227539,
+ 2.183091640472412,
+ 2.1442034244537354,
+ 2.117824077606201,
+ 2.1345198154449463,
+ 2.0993552207946777,
+ 2.1198465824127197
+ ]
+ },
+ "variableStates": [
+ {
+ "variable": "series_0",
+ "filledNARatio": 0.0004999999999999449,
+ "effectiveCount": 1999,
+ "firstTimestamp": "2021-01-01T00:01:00Z",
+ "lastTimestamp": "2021-01-02T09:19:00Z"
+ },
+ {
+ "variable": "series_1",
+ "filledNARatio": 0.0004999999999999449,
+ "effectiveCount": 1999,
+ "firstTimestamp": "2021-01-01T00:01:00Z",
+ "lastTimestamp": "2021-01-02T09:19:00Z"
+ },
+ {
+ "variable": "series_2",
+ "filledNARatio": 0.0004999999999999449,
+ "effectiveCount": 1999,
+ "firstTimestamp": "2021-01-01T00:01:00Z",
+ "lastTimestamp": "2021-01-02T09:19:00Z"
+ },
+ {
+ "variable": "series_3",
+ "filledNARatio": 0.0004999999999999449,
+ "effectiveCount": 1999,
+ "firstTimestamp": "2021-01-01T00:01:00Z",
+ "lastTimestamp": "2021-01-02T09:19:00Z"
+ },
+ {
+ "variable": "series_4",
+ "filledNARatio": 0.0004999999999999449,
+ "effectiveCount": 1999,
+ "firstTimestamp": "2021-01-01T00:01:00Z",
+ "lastTimestamp": "2021-01-02T09:19:00Z"
+ }
+ ]
+ }
+ }
+}
+```
+
+## List models
+
+You may refer to [this page](https://westus2.dev.cognitive.microsoft.com/docs/services/AnomalyDetector-v1-1/operations/ListMultivariateModel) for information about the request URL and request headers. Notice that only the 10 most recently updated models are returned, but you can retrieve other models by setting the `$skip` and `$top` parameters in the request URL. For example, if your request URL is `https://{endpoint}/anomalydetector/v1.1/multivariate/models?$skip=10&$top=20`, then we'll skip the latest 10 models and return the next 20 models.
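+
+A minimal `curl` sketch of a paged List Models call (placeholders assumed):
+
+```bash
+# Sketch of a List Models call; $skip and $top page through your models.
+# Single quotes keep the shell from expanding $skip and $top.
+curl -X GET 'https://{endpoint}/anomalydetector/v1.1/multivariate/models?$skip=10&$top=20' \
+  -H 'Ocp-Apim-Subscription-Key: <your-anomaly-detector-key>'
+```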
+
+A sample response is
+
+```json
+{
+ "models": [
+ {
+ "modelId": "09c01f3e-5558-11ed-bd35-36f8cdfb3365",
+ "createdTime": "2022-10-26T18:00:12Z",
+ "lastUpdatedTime": "2022-10-26T18:03:53Z",
+ "modelInfo": {
+ "dataSource": "https://mvaddataset.blob.core.windows.net/sample-onetable/sample_data_5_3000.csv",
+ "dataSchema": "OneTable",
+ "startTime": "2021-01-01T00:00:00Z",
+ "endTime": "2021-01-02T09:19:00Z",
+ "displayName": "SampleRequest",
+ "slidingWindow": 200,
+ "alignPolicy": {
+ "alignMode": "Outer",
+ "fillNAMethod": "Linear",
+ "paddingValue": 0.0
+ },
+ "status": "READY",
+ "errors": [],
+ "diagnosticsInfo": {
+ "modelState": {
+ "epochIds": [
+ 10,
+ 20,
+ 30,
+ 40,
+ 50,
+ 60,
+ 70,
+ 80,
+ 90,
+ 100
+ ],
+ "trainLosses": [
+ 0.30325182933699,
+ 0.24335388161919333,
+ 0.22876543213020673,
+ 0.2439815090461211,
+ 0.22489577260884372,
+ 0.22305156764659015,
+ 0.22466289590705524,
+ 0.22133831883018668,
+ 0.2214335961775346,
+ 0.22268397090109912
+ ],
+ "validationLosses": [
+ 0.29047123109451445,
+ 0.263965221366497,
+ 0.2510373182971068,
+ 0.27116744686858824,
+ 0.2518718700216274,
+ 0.24802495975687047,
+ 0.24790137705176768,
+ 0.24640804830223623,
+ 0.2463938973166726,
+ 0.24831805566344597
+ ],
+ "latenciesInSeconds": [
+ 2.1662967205047607,
+ 2.0658926963806152,
+ 2.112030029296875,
+ 2.130472183227539,
+ 2.183091640472412,
+ 2.1442034244537354,
+ 2.117824077606201,
+ 2.1345198154449463,
+ 2.0993552207946777,
+ 2.1198465824127197
+ ]
+ },
+ "variableStates": [
+ {
+ "variable": "series_0",
+ "filledNARatio": 0.0004999999999999449,
+ "effectiveCount": 1999,
+ "firstTimestamp": "2021-01-01T00:01:00Z",
+ "lastTimestamp": "2021-01-02T09:19:00Z"
+ },
+ {
+ "variable": "series_1",
+ "filledNARatio": 0.0004999999999999449,
+ "effectiveCount": 1999,
+ "firstTimestamp": "2021-01-01T00:01:00Z",
+ "lastTimestamp": "2021-01-02T09:19:00Z"
+ },
+ {
+ "variable": "series_2",
+ "filledNARatio": 0.0004999999999999449,
+ "effectiveCount": 1999,
+ "firstTimestamp": "2021-01-01T00:01:00Z",
+ "lastTimestamp": "2021-01-02T09:19:00Z"
+ },
+ {
+ "variable": "series_3",
+ "filledNARatio": 0.0004999999999999449,
+ "effectiveCount": 1999,
+ "firstTimestamp": "2021-01-01T00:01:00Z",
+ "lastTimestamp": "2021-01-02T09:19:00Z"
+ },
+ {
+ "variable": "series_4",
+ "filledNARatio": 0.0004999999999999449,
+ "effectiveCount": 1999,
+ "firstTimestamp": "2021-01-01T00:01:00Z",
+ "lastTimestamp": "2021-01-02T09:19:00Z"
+ }
+ ]
+ }
+ }
+ }
+ ],
+ "currentCount": 42,
+ "maxCount": 1000,
+ "nextLink": ""
+}
+```
+
+The response contains four fields, `models`, `currentCount`, `maxCount`, and `nextLink`.
+
+* **models**: This contains the created time, last updated time, model ID, display name, variable counts, and the status of each model.
+* **currentCount**: This contains the number of trained multivariate models in your Anomaly Detector resource.
+* **maxCount**: The maximum number of models supported by your Anomaly Detector resource, which depends on the pricing tier that you choose.
+* **nextLink**: This can be used to fetch more models, because the maximum number of models listed in one API response is **10**.
+
+## Next steps
+
+* [Best practices of multivariate anomaly detection](../concepts/best-practices-multivariate.md)
ai-services Anomaly Detector Container Configuration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/Anomaly-Detector/anomaly-detector-container-configuration.md
+
+ Title: How to configure a container for Anomaly Detector API
+
+description: The Anomaly Detector API container runtime environment is configured using the `docker run` command arguments. This container has several required settings, along with a few optional settings.
++++++ Last updated : 05/07/2020+++
+# Configure Anomaly Detector univariate containers
+
+The **Anomaly Detector** container runtime environment is configured using the `docker run` command arguments. This container has several required settings, along with a few optional settings. Several [examples](#example-docker-run-commands) of the command are available. The container-specific settings are the billing settings.
+
+## Configuration settings
+
+This container has the following configuration settings:
+
+|Required|Setting|Purpose|
+|--|--|--|
+|Yes|[ApiKey](#apikey-configuration-setting)|Used to track billing information.|
+|No|[ApplicationInsights](#applicationinsights-setting)|Allows you to add [Azure Application Insights](/azure/application-insights) telemetry support to your container.|
+|Yes|[Billing](#billing-configuration-setting)|Specifies the endpoint URI of the service resource on Azure.|
+|Yes|[Eula](#eula-setting)| Indicates that you've accepted the license for the container.|
+|No|[Fluentd](#fluentd-settings)|Write log and, optionally, metric data to a Fluentd server.|
+|No|[Http Proxy](#http-proxy-credentials-settings)|Configure an HTTP proxy for making outbound requests.|
+|No|[Logging](#logging-settings)|Provides ASP.NET Core logging support for your container. |
+|No|[Mounts](#mount-settings)|Read and write data from host computer to container and from container back to host computer.|
+
+> [!IMPORTANT]
+> The [`ApiKey`](#apikey-configuration-setting), [`Billing`](#billing-configuration-setting), and [`Eula`](#eula-setting) settings are used together, and you must provide valid values for all three of them; otherwise your container won't start. For more information about using these configuration settings to instantiate a container, see [Billing](anomaly-detector-container-howto.md#billing).
+
+## ApiKey configuration setting
+
+The `ApiKey` setting specifies the Azure resource key used to track billing information for the container. You must specify a value for the ApiKey and the value must be a valid key for the _Anomaly Detector_ resource specified for the [`Billing`](#billing-configuration-setting) configuration setting.
+
+This setting can be found in the following place:
+
+* Azure portal: **Anomaly Detector's** Resource Management, under **Keys**
+
+## ApplicationInsights setting
++
+## Billing configuration setting
+
+The `Billing` setting specifies the endpoint URI of the _Anomaly Detector_ resource on Azure used to meter billing information for the container. You must specify a value for this configuration setting, and the value must be a valid endpoint URI for an _Anomaly Detector_ resource on Azure.
+
+This setting can be found in the following place:
+
+* Azure portal: **Anomaly Detector's** Overview, labeled `Endpoint`
+
+|Required| Name | Data type | Description |
+|--||--|-|
+|Yes| `Billing` | String | Billing endpoint URI. For more information on obtaining the billing URI, see [gather required parameters](anomaly-detector-container-howto.md#gather-required-parameters). For more information and a complete list of regional endpoints, see [Custom subdomain names for Azure AI services](../cognitive-services-custom-subdomains.md). |
+
+## Eula setting
++
+## Fluentd settings
++
+## Http proxy credentials settings
++
+## Logging settings
+
++
+## Mount settings
+
+Use bind mounts to read and write data to and from the container. You can specify an input mount or output mount by specifying the `--mount` option in the [docker run](https://docs.docker.com/engine/reference/commandline/run/) command.
+
+The Anomaly Detector containers don't use input or output mounts to store training or service data.
+
+The exact syntax of the host mount location varies depending on the host operating system. Additionally, the [host computer](anomaly-detector-container-howto.md#the-host-computer)'s mount location may not be accessible due to a conflict between permissions used by the Docker service account and the host mount location permissions.
+
+|Optional| Name | Data type | Description |
+|-||--|-|
+|Not allowed| `Input` | String | Anomaly Detector containers do not use this.|
+|Optional| `Output` | String | The target of the output mount. The default value is `/output`. This is the location of the logs. This includes container logs. <br><br>Example:<br>`--mount type=bind,src=c:\output,target=/output`|
+
+## Example docker run commands
+
+The following examples use the configuration settings to illustrate how to write and use `docker run` commands. Once running, the container continues to run until you [stop](anomaly-detector-container-howto.md#stop-the-container) it.
+
+* **Line-continuation character**: The Docker commands in the following sections use the backslash, `\`, as a line continuation character for a bash shell. Replace or remove this based on your host operating system's requirements. For example, the line continuation character for Windows is a caret, `^`. Replace the backslash with the caret.
+* **Argument order**: Do not change the order of the arguments unless you are very familiar with Docker containers.
+
+Replace values in brackets, `{}`, with your own values:
+
+| Placeholder | Value | Format or example |
+|-|-||
+| **{API_KEY}** | The endpoint key of the `Anomaly Detector` resource on the Azure `Anomaly Detector` Keys page. | `xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx` |
+| **{ENDPOINT_URI}** | The billing endpoint value is available on the Azure `Anomaly Detector` Overview page.| See [gather required parameters](anomaly-detector-container-howto.md#gather-required-parameters) for explicit examples. |
++
+> [!IMPORTANT]
+> The `Eula`, `Billing`, and `ApiKey` options must be specified to run the container; otherwise, the container won't start. For more information, see [Billing](anomaly-detector-container-howto.md#billing).
+> The ApiKey value is the **Key** from the Azure AI Anomaly Detector Resource keys page.
+
+## Anomaly Detector container Docker examples
+
+The following Docker examples are for the Anomaly Detector container.
+
+### Basic example
+
+ ```Docker
+ docker run --rm -it -p 5000:5000 --memory 4g --cpus 1 \
+ mcr.microsoft.com/azure-cognitive-services/decision/anomaly-detector \
+ Eula=accept \
+ Billing={ENDPOINT_URI} \
+ ApiKey={API_KEY}
+ ```
+
+### Logging example with command-line arguments
+
+ ```Docker
+ docker run --rm -it -p 5000:5000 --memory 4g --cpus 1 \
+ mcr.microsoft.com/azure-cognitive-services/decision/anomaly-detector \
+ Eula=accept \
+ Billing={ENDPOINT_URI} ApiKey={API_KEY} \
+ Logging:Console:LogLevel:Default=Information
+ ```
+
+## Next steps
+
+* [Deploy an Anomaly Detector container to Azure Container Instances](how-to/deploy-anomaly-detection-on-container-instances.md)
+* [Learn more about Anomaly Detector API service](https://go.microsoft.com/fwlink/?linkid=2080698&clcid=0x409)
ai-services Anomaly Detector Container Howto https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/Anomaly-Detector/anomaly-detector-container-howto.md
+
+ Title: Install and run Docker containers for the Anomaly Detector API
+
+description: Use the Anomaly Detector API's algorithms to find anomalies in your data, on-premises using a Docker container.
++++++ Last updated : 01/27/2023++
+keywords: on-premises, Docker, container, streaming, algorithms
++
+# Install and run Docker containers for the Anomaly Detector API
++
+Containers enable you to use the Anomaly Detector API in your own environment. Containers are great for specific security and data governance requirements. In this article you'll learn how to download, install, and run an Anomaly Detector container.
+
+Anomaly Detector offers a single Docker container for using the API on-premises. Use the container to:
+* Use the Anomaly Detector's algorithms on your data
+* Monitor streaming data, and detect anomalies as they occur in real time.
+* Detect anomalies throughout your data set as a batch.
+* Detect trend change points in your data set as a batch.
+* Adjust the anomaly detection algorithm's sensitivity to better fit your data.
+
+For detailed information about the API, please see:
+* [Learn more about Anomaly Detector API service](https://go.microsoft.com/fwlink/?linkid=2080698&clcid=0x409)
+
+If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/cognitive-services/) before you begin.
+
+## Prerequisites
+
+You must meet the following prerequisites before using Anomaly Detector containers:
+
+|Required|Purpose|
+|--|--|
+|Docker Engine| You need the Docker Engine installed on a [host computer](#the-host-computer). Docker provides packages that configure the Docker environment on [macOS](https://docs.docker.com/docker-for-mac/), [Windows](https://docs.docker.com/docker-for-windows/), and [Linux](https://docs.docker.com/engine/installation/#supported-platforms). For a primer on Docker and container basics, see the [Docker overview](https://docs.docker.com/engine/docker-overview/).<br><br> Docker must be configured to allow the containers to connect with and send billing data to Azure. <br><br> **On Windows**, Docker must also be configured to support Linux containers.<br><br>|
+|Familiarity with Docker | You should have a basic understanding of Docker concepts, like registries, repositories, containers, and container images, as well as knowledge of basic `docker` commands.|
+|Anomaly Detector resource |In order to use these containers, you must have:<br><br>An Azure _Anomaly Detector_ resource to get the associated API key and endpoint URI. Both values are available on the Azure portal's **Anomaly Detector** Overview and Keys pages and are required to start the container.<br><br>**{API_KEY}**: One of the two available resource keys on the **Keys** page<br><br>**{ENDPOINT_URI}**: The endpoint as provided on the **Overview** page|
++
+## The host computer
++
+<!--* [Azure IoT Edge](../../iot-edge/index.yml). For instructions of deploying Anomaly Detector module in IoT Edge, see [How to deploy Anomaly Detector module in IoT Edge](how-to-deploy-anomaly-detector-module-in-iot-edge.md).-->
+
+### Container requirements and recommendations
+
+The following table describes the minimum and recommended CPU cores and memory to allocate for Anomaly Detector container.
+
+| QPS (queries per second) | Minimum | Recommended |
+|--|--|--|
+| 10 QPS | 4 core, 1-GB memory | 8 core, 2-GB memory |
+| 20 QPS | 8 core, 2-GB memory | 16 core, 4-GB memory |
+
+Each core must be at least 2.6 gigahertz (GHz).
+
+Core and memory correspond to the `--cpus` and `--memory` settings, which are used as part of the `docker run` command.
+
+## Get the container image with `docker pull`
+
+The Anomaly Detector container image can be found on the `mcr.microsoft.com` container registry syndicate. It resides within the `azure-cognitive-services/decision` repository and is named `anomaly-detector`. The fully qualified container image name is `mcr.microsoft.com/azure-cognitive-services/decision/anomaly-detector`.
+
+To use the latest version of the container, you can use the `latest` tag. You can also find a full list of [image tags on the MCR](https://mcr.microsoft.com/product/azure-cognitive-services/decision/anomaly-detector/tags).
+
+Use the [`docker pull`](https://docs.docker.com/engine/reference/commandline/pull/) command to download a container image.
+
+| Container | Repository |
+|--||
+| cognitive-services-anomaly-detector | `mcr.microsoft.com/azure-cognitive-services/decision/anomaly-detector:latest` |
+
+> [!TIP]
+> When using [`docker pull`](https://docs.docker.com/engine/reference/commandline/pull/), pay close attention to the casing of the container registry, repository, container image name and corresponding tag. They are case sensitive.
+
+<!--
+For a full description of available tags, such as `latest` used in the preceding command, see [anomaly-detector](https://go.microsoft.com/fwlink/?linkid=2083827&clcid=0x409) on Docker Hub.
+-->
+
+### Docker pull for the Anomaly Detector container
+
+```Docker
+docker pull mcr.microsoft.com/azure-cognitive-services/decision/anomaly-detector:latest
+```
+
+## How to use the container
+
+Once the container is on the [host computer](#the-host-computer), use the following process to work with the container.
+
+1. [Run the container](#run-the-container-with-docker-run), with the required billing settings. More [examples](anomaly-detector-container-configuration.md#example-docker-run-commands) of the `docker run` command are available.
+1. [Query the container's prediction endpoint](#query-the-containers-prediction-endpoint).
+
+## Run the container with `docker run`
+
+Use the [docker run](https://docs.docker.com/engine/reference/commandline/run/) command to run the container. Refer to [gather required parameters](#gather-required-parameters) for details on how to get the `{ENDPOINT_URI}` and `{API_KEY}` values.
+
+[Examples](anomaly-detector-container-configuration.md#example-docker-run-commands) of the `docker run` command are available.
+
+```bash
+docker run --rm -it -p 5000:5000 --memory 4g --cpus 1 \
+mcr.microsoft.com/azure-cognitive-services/decision/anomaly-detector:latest \
+Eula=accept \
+Billing={ENDPOINT_URI} \
+ApiKey={API_KEY}
+```
+
+This command:
+
+* Runs an Anomaly Detector container from the container image
+* Allocates one CPU core and 4 gigabytes (GB) of memory
+* Exposes TCP port 5000 and allocates a pseudo-TTY for the container
+* Automatically removes the container after it exits. The container image is still available on the host computer.
+
+> [!IMPORTANT]
+> The `Eula`, `Billing`, and `ApiKey` options must be specified to run the container; otherwise, the container won't start. For more information, see [Billing](#billing).
+
+### Running multiple containers on the same host
+
+If you intend to run multiple containers with exposed ports, make sure to run each container with a different port. For example, run the first container on port 5000 and the second container on port 5001.
+
+Replace the `<container-registry>` and `<container-name>` with the values of the containers you use. These do not have to be the same container. You can have the Anomaly Detector container and the LUIS container running on the HOST together or you can have multiple Anomaly Detector containers running.
+
+Run the first container on host port 5000.
+
+```bash
+docker run --rm -it -p 5000:5000 --memory 4g --cpus 1 \
+<container-registry>/microsoft/<container-name> \
+Eula=accept \
+Billing={ENDPOINT_URI} \
+ApiKey={API_KEY}
+```
+
+Run the second container on host port 5001.
++
+```bash
+docker run --rm -it -p 5001:5000 --memory 4g --cpus 1 \
+<container-registry>/microsoft/<container-name> \
+Eula=accept \
+Billing={ENDPOINT_URI} \
+ApiKey={API_KEY}
+```
+
+Each subsequent container should be on a different port.
+
+## Query the container's prediction endpoint
+
+The container provides REST-based query prediction endpoint APIs.
+
+Use the host, `http://localhost:5000`, for container APIs.
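+
+For example, a local query against the running container could be sketched as follows. This assumes the container exposes the same `anomalydetector/v1.0` routes as the cloud endpoint (check `http://localhost:5000/swagger` to confirm the exact paths) and that your request body is saved in a hypothetical `series.json` file; a subscription key header typically isn't needed for local queries, because billing is configured through the container's startup options.
+
+```bash
+# Sketch of a local query against the running container.
+curl -X POST "http://localhost:5000/anomalydetector/v1.0/timeseries/entire/detect" \
+  -H "Content-Type: application/json" \
+  -d @series.json
+```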
+
+<!-- ## Validate container is running -->
++
+## Stop the container
++
+## Troubleshooting
+
+If you run the container with an output [mount](anomaly-detector-container-configuration.md#mount-settings) and logging enabled, the container generates log files that are helpful to troubleshoot issues that happen while starting or running the container.
++++
+## Billing
+
+The Anomaly Detector containers send billing information to Azure, using an _Anomaly Detector_ resource on your Azure account.
++
+For more information about these options, see [Configure containers](anomaly-detector-container-configuration.md).
+
+## Summary
+
+In this article, you learned concepts and workflow for downloading, installing, and running Anomaly Detector containers. In summary:
+
+* Anomaly Detector provides one Linux container for Docker, encapsulating anomaly detection with batch vs streaming, expected range inference, and sensitivity tuning.
+* Container images are downloaded from the Microsoft Container Registry (`mcr.microsoft.com`).
+* Container images run in Docker.
+* You can use either the REST API or SDK to call operations in Anomaly Detector containers by specifying the host URI of the container.
+* You must specify billing information when instantiating a container.
+
+> [!IMPORTANT]
+> Azure AI services containers are not licensed to run without being connected to Azure for metering. Customers need to enable the containers to communicate billing information with the metering service at all times. Azure AI services containers do not send customer data (e.g., the time series data that is being analyzed) to Microsoft.
+
+## Next steps
+
+* Review [Configure containers](anomaly-detector-container-configuration.md) for configuration settings
+* [Deploy an Anomaly Detector container to Azure Container Instances](how-to/deploy-anomaly-detection-on-container-instances.md)
+* [Learn more about Anomaly Detector API service](https://go.microsoft.com/fwlink/?linkid=2080698&clcid=0x409)
ai-services Anomaly Detection Best Practices https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/Anomaly-Detector/concepts/anomaly-detection-best-practices.md
+
+ Title: Best practices when using the Anomaly Detector univariate API
+
+description: Learn about best practices when detecting anomalies with the Anomaly Detector API.
+ Last updated : 01/22/2021
+# Best practices for using the Anomaly Detector univariate API
+
+The Anomaly Detector API is a stateless anomaly detection service. The accuracy and performance of its results can be impacted by:
+
+* How your time series data is prepared.
+* The Anomaly Detector API parameters that were used.
+* The number of data points in your API request.
+
+Use this article to learn about best practices for using the API to get the best results for your data.
+
+## When to use batch (entire) or latest (last) point anomaly detection
+
+The Anomaly Detector API's batch detection endpoint lets you detect anomalies throughout your entire time series data. In this detection mode, a single statistical model is created and applied to each point in the data set. If your time series has the following characteristics, we recommend using batch detection to preview your data in one API call.
+
+* A seasonal time series, with occasional anomalies.
+* A flat trend time series, with occasional spikes/dips.
+
+We don't recommend using batch anomaly detection for real-time data monitoring, or using it on time series data that doesn't have the above characteristics.
+
+* Batch detection creates and applies only one model; the detection for each point is done in the context of the whole series. If the time series data trends up and down without seasonality, some points of change (dips and spikes in the data) may be missed by the model. Similarly, some points of change that are less significant than ones later in the data set may not be counted as significant enough to be incorporated into the model.
+
+* Batch detection is slower than detecting the anomaly status of the latest point when doing real-time data monitoring, because of the number of points being analyzed.
+
+For real-time data monitoring, we recommend detecting the anomaly status of your latest data point only. By continuously applying latest point detection, streaming data monitoring can be done more efficiently and accurately.
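+
+As an illustration of this pattern, here's a hedged Python sketch of a rolling-window monitor; the endpoint, key, window size, and data stream are placeholders and assumptions, not values from this article:
+
+```python
+# Sketch of real-time monitoring with latest-point detection: keep a rolling
+# window of recent points and ask only whether the newest point is anomalous.
+from collections import deque
+import requests
+
+ENDPOINT = "https://<your-resource>.cognitiveservices.azure.com"
+URL = f"{ENDPOINT}/anomalydetector/v1.0/timeseries/last/detect"
+HEADERS = {"Ocp-Apim-Subscription-Key": "<your-key>"}
+
+window = deque(maxlen=96)  # e.g., keep the most recent 96 hourly points
+
+def on_new_point(timestamp: str, value: float) -> bool:
+    """Append the new point and return True if the service flags it as an anomaly."""
+    window.append({"timestamp": timestamp, "value": value})
+    if len(window) < 12:  # the API needs at least 12 points
+        return False
+    payload = {"granularity": "hourly", "series": list(window)}
+    response = requests.post(URL, headers=HEADERS, json=payload)
+    response.raise_for_status()
+    return bool(response.json().get("isAnomaly"))
+```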
+
+The example below describes the impact these detection modes can have on performance. The first picture shows the result of continuously detecting the anomaly status of the latest point along with 28 previously seen data points. The red points are anomalies.
+
+![An image showing anomaly detection using the latest point](../media/last.png)
+
+Below is the same data set using batch anomaly detection. The model built for the operation has ignored several anomalies, marked by rectangles.
+
+![An image showing anomaly detection using the batch method](../media/entire.png)
+
+## Data preparation
+
+The Anomaly Detector API accepts time series data formatted into a JSON request object. A time series can be any numerical data recorded over time in sequential order. You can send windows of your time series data to the Anomaly Detector API endpoint to improve the API's performance. The minimum number of data points you can send is 12, and the maximum is 8640 points. [Granularity](/dotnet/api/microsoft.azure.cognitiveservices.anomalydetector.models.granularity) is defined as the rate that your data is sampled at.
+
+Data points sent to the Anomaly Detector API must have a valid Coordinated Universal Time (UTC) timestamp, and a numerical value.
+
+```json
+{
+ "granularity": "daily",
+ "series": [
+ {
+ "timestamp": "2018-03-01T00:00:00Z",
+ "value": 32858923
+ },
+ {
+ "timestamp": "2018-03-02T00:00:00Z",
+ "value": 29615278
+      }
+ ]
+}
+```
+
+If your data is sampled at a non-standard time interval, you can specify it by adding the `customInterval` attribute in your request. For example, if your series is sampled every 5 minutes, you can add the following to your JSON request:
+
+```json
+{
+ "granularity" : "minutely",
+ "customInterval" : 5
+}
+```
+
+### Missing data points
+
+Missing data points are common in evenly distributed time series data sets, especially ones with a fine granularity (a small sampling interval; for example, data sampled every few minutes). Missing less than 10% of the expected number of points in your data shouldn't have a negative impact on your detection results. Consider filling gaps in your data based on its characteristics, such as substituting data points from an earlier period, using linear interpolation, or applying a moving average.
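+
+A minimal pandas sketch of these fill strategies (the six-point daily series is invented for illustration):
+
+```python
+# Sketch: three common ways to fill gaps before sending data to the API.
+import pandas as pd
+
+series = pd.Series(
+    [10.0, 11.0, None, None, 14.0, 15.0],
+    index=pd.date_range("2023-01-01", periods=6, freq="D"),
+)
+
+filled_previous = series.ffill()                                          # substitute earlier values
+filled_linear = series.interpolate(method="linear")                       # linear interpolation
+filled_rolling = series.fillna(series.rolling(3, min_periods=1).mean())   # moving average
+
+print(filled_linear)
+```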
+
+### Aggregate distributed data
+
+The Anomaly Detector API works best on an evenly distributed time series. If your data is randomly distributed, you should aggregate it by a unit of time, such as per minute, hourly, or daily.
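+
+A small pandas sketch of this kind of aggregation (the raw readings are invented; pick `.mean()`, `.sum()`, or another aggregate to suit your metric):
+
+```python
+# Sketch: aggregate irregularly sampled readings into an even hourly series.
+import pandas as pd
+
+raw = pd.DataFrame(
+    {"value": [1.2, 0.8, 1.5, 2.1, 1.9]},
+    index=pd.to_datetime([
+        "2023-01-01 00:07", "2023-01-01 00:41",
+        "2023-01-01 01:13", "2023-01-01 02:02", "2023-01-01 02:55",
+    ]),
+)
+
+hourly = raw.resample("60min").mean()
+print(hourly)
+```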
+
+## Anomaly detection on data with seasonal patterns
+
+If you know that your time series data has a seasonal pattern (one that occurs at regular intervals), you can improve the accuracy and API response time.
+
+Specifying a `period` when you construct your JSON request can reduce anomaly detection latency by up to 50%. The `period` is an integer that specifies roughly how many data points the time series takes to repeat a pattern. For example, a time series with one data point per day would have a `period` as `7`, and a time series with one point per hour (with the same weekly pattern) would have a `period` of `7*24`. If you're unsure of your data's patterns, you don't have to specify this parameter.
+
+For best results, provide data covering four `period`s, plus one additional data point. For example, hourly data with a weekly pattern as described above should provide 673 data points in the request body (`7 * 24 * 4 + 1`).
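+
+As a quick illustration of that sizing rule (reusing the hourly-data-with-a-weekly-pattern example; the request fields mirror the JSON samples shown earlier in this article):
+
+```python
+# Sketch: derive the period and the recommended request size for hourly data
+# that repeats weekly, matching the 7 * 24 * 4 + 1 = 673 example above.
+points_per_pattern = 7 * 24                      # one weekly pattern of hourly points
+recommended_points = 4 * points_per_pattern + 1  # four patterns plus one extra point
+
+request_body = {
+    "granularity": "hourly",
+    "period": points_per_pattern,
+    "series": [],  # fill with your most recent `recommended_points` data points
+}
+
+print(points_per_pattern, recommended_points)  # 168 673
+```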
+
+### Sampling data for real-time monitoring
+
+If your streaming data is sampled at a short interval (for example seconds or minutes), sending the recommended number of data points may exceed the Anomaly Detector API's maximum number allowed (8640 data points). If your data shows a stable seasonal pattern, consider sending a sample of your time series data at a larger time interval, like hours. Sampling your data in this way can also noticeably improve the API response time.
+
+## Next steps
+
+* [What is the Anomaly Detector API?](../overview.md)
+* [Quickstart: Detect anomalies in your time series data using the Anomaly Detector](../quickstarts/client-libraries.md)
ai-services Best Practices Multivariate https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/Anomaly-Detector/concepts/best-practices-multivariate.md
+
+ Title: Best practices for using the Multivariate Anomaly Detector API
+
+description: Best practices for using the Anomaly Detector Multivariate API's to apply anomaly detection to your time series data.
+ Last updated : 06/07/2022
+keywords: anomaly detection, machine learning, algorithms
+
+# Best practices for using the Multivariate Anomaly Detector API
+
+This article provides guidance around recommended practices to follow when using the multivariate Anomaly Detector (MVAD) APIs.
+In this tutorial, you'll:
+
+> [!div class="checklist"]
+> * **API usage**: Learn how to use MVAD without errors.
+> * **Data engineering**: Learn how to best prepare your data so that MVAD performs with better accuracy.
+> * **Common pitfalls**: Learn how to avoid common pitfalls that customers encounter.
+> * **FAQ**: Learn answers to frequently asked questions.
+
+## API usage
+
+Follow the instructions in this section to avoid errors while using MVAD. If you still get errors, refer to the [full list of error codes](./troubleshoot.md) for explanations and actions to take.
+
+## Data engineering
+
+Now that you can run your code with the MVAD APIs without any errors, what can be done to improve your model's accuracy?
+
+### Data quality
+
+* As the model learns normal patterns from historical data, the training data should represent the **overall normal** state of the system. It's hard for the model to learn these types of patterns if the training data is full of anomalies. Empirically, an abnormal rate of **1%** or below in the training data gives good accuracy.
+* In general, the **missing value ratio of training data should be under 20%**. Too much missing data may end up with automatically filled values (usually linear values or constant values) being learned as normal patterns. That may result in real (not missing) data points being detected as anomalies.
+
+### Data quantity
+
+* The underlying model of MVAD has millions of parameters. It needs a minimum number of data points to learn an optimal set of parameters. The empirical rule is that you need to provide **5,000 or more data points (timestamps) per variable** to train the model for good accuracy. In general, the more training data, the better the accuracy. However, in cases when you're not able to accrue that much data, we still encourage you to experiment with less data and see if the compromised accuracy is still acceptable.
+* Every time you call the inference API, you need to ensure that the source data file contains just enough data points. That is normally `slidingWindow` + the number of data points that **really** need inference results. For example, in a streaming case where every time you want to run inference on **ONE** new timestamp, the data file could contain only the leading `slidingWindow` plus **ONE** data point; then you could move on and create another zip file with the same number of data points (`slidingWindow` + 1) but moved ONE step to the "right" side, and submit it for another inference job.
+
+ Anything beyond that or "before" the leading sliding window won't impact the inference result at all and may only degrade performance. Anything below that may lead to a `NotEnoughInput` error.
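+
+Here's a hedged pandas sketch of that window bookkeeping (the DataFrame stands in for your prepared inference data; nothing here builds the zip file or calls the API):
+
+```python
+# Sketch: build successive streaming-inference inputs of exactly
+# slidingWindow + 1 rows, moving one timestamp to the right each time.
+import pandas as pd
+
+sliding_window = 200
+df = pd.DataFrame(
+    {"var_1": range(1000), "var_2": range(1000)},
+    index=pd.date_range("2023-01-01", periods=1000, freq="min"),
+)
+
+def inference_inputs(data: pd.DataFrame, window: int):
+    """Yield one (window + 1)-row slice per timestamp that needs a result."""
+    for end in range(window + 1, len(data) + 1):
+        yield data.iloc[end - (window + 1):end]
+
+first = next(inference_inputs(df, sliding_window))
+print(len(first))  # 201 rows: the leading window plus the point to score
+```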
+
+### Timestamp round-up
+
+In a group of variables (time series), each variable may be collected from an independent source. The timestamps of different variables may be inconsistent with each other and with the known frequencies. Here's a simple example.
+
+*Variable-1*
+
+| timestamp | value |
+| --------- | ----- |
+| 12:00:01 | 1.0 |
+| 12:00:35 | 1.5 |
+| 12:01:02 | 0.9 |
+| 12:01:31 | 2.2 |
+| 12:02:08 | 1.3 |
+
+*Variable-2*
+
+| timestamp | value |
+| --------- | ----- |
+| 12:00:03 | 2.2 |
+| 12:00:37 | 2.6 |
+| 12:01:09 | 1.4 |
+| 12:01:34 | 1.7 |
+| 12:02:04 | 2.0 |
+
+We have two variables collected from two sensors which send one data point every 30 seconds. However, the sensors aren't sending data points at a strict even frequency, but sometimes earlier and sometimes later. Because MVAD takes into consideration correlations between different variables, timestamps must be properly aligned so that the metrics can correctly reflect the condition of the system. In the above example, timestamps of variable 1 and variable 2 must be properly 'rounded' to their frequency before alignment.
+
+Let's see what happens if they're not pre-processed. If we set `alignMode` to be `Outer` (which means union of two sets), the merged table is:
+
+| timestamp | Variable-1 | Variable-2 |
+| --------- | ---------- | ---------- |
+| 12:00:01 | 1.0 | `nan` |
+| 12:00:03 | `nan` | 2.2 |
+| 12:00:35 | 1.5 | `nan` |
+| 12:00:37 | `nan` | 2.6 |
+| 12:01:02 | 0.9 | `nan` |
+| 12:01:09 | `nan` | 1.4 |
+| 12:01:31 | 2.2 | `nan` |
+| 12:01:34 | `nan` | 1.7 |
+| 12:02:04 | `nan` | 2.0 |
+| 12:02:08 | 1.3 | `nan` |
+
+`nan` indicates missing values. Obviously, the merged table isn't what you might have expected. Variable 1 and variable 2 interleave, and the MVAD model can't extract information about correlations between them. If we set `alignMode` to `Inner`, the merged table is empty as there's no common timestamp in variable 1 and variable 2.
+
+Therefore, the timestamps of variable 1 and variable 2 should be pre-processed (rounded to the nearest 30-second timestamps) and the new time series are:
+
+*Variable-1*
+
+| timestamp | value |
+| --------- | ----- |
+| 12:00:00 | 1.0 |
+| 12:00:30 | 1.5 |
+| 12:01:00 | 0.9 |
+| 12:01:30 | 2.2 |
+| 12:02:00 | 1.3 |
+
+*Variable-2*
+
+| timestamp | value |
+| --------- | ----- |
+| 12:00:00 | 2.2 |
+| 12:00:30 | 2.6 |
+| 12:01:00 | 1.4 |
+| 12:01:30 | 1.7 |
+| 12:02:00 | 2.0 |
+
+Now the merged table is more reasonable.
+
+| timestamp | Variable-1 | Variable-2 |
+| --------- | ---------- | ---------- |
+| 12:00:00 | 1.0 | 2.2 |
+| 12:00:30 | 1.5 | 2.6 |
+| 12:01:00 | 0.9 | 1.4 |
+| 12:01:30 | 2.2 | 1.7 |
+| 12:02:00 | 1.3 | 2.0 |
+
+Values of different variables at close timestamps are well aligned, and the MVAD model can now extract correlation information.
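+
+A minimal pandas sketch of this pre-processing, using the sensor readings from the tables above (the 30-second rounding frequency is specific to this example):
+
+```python
+# Sketch: round each variable's timestamps to its 30-second frequency,
+# then align the two variables on the shared, rounded timestamps.
+import pandas as pd
+
+var1 = pd.DataFrame({
+    "timestamp": ["12:00:01", "12:00:35", "12:01:02", "12:01:31", "12:02:08"],
+    "value": [1.0, 1.5, 0.9, 2.2, 1.3],
+})
+var2 = pd.DataFrame({
+    "timestamp": ["12:00:03", "12:00:37", "12:01:09", "12:01:34", "12:02:04"],
+    "value": [2.2, 2.6, 1.4, 1.7, 2.0],
+})
+
+for frame in (var1, var2):
+    frame["timestamp"] = pd.to_datetime(frame["timestamp"]).dt.round("30s")
+
+merged = var1.merge(var2, on="timestamp", suffixes=("_var1", "_var2"))
+print(merged)
+```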
+
+### Limitations
+
+There are some limitations in both the training and inference APIs. You should be aware of these limitations to avoid errors.
+
+#### General limitations
+
+* Sliding window: 28-2880 timestamps, default is 300. For periodic data, set the length of 2-4 cycles as the sliding window.
+* Variable numbers: For training and batch inference, at most 301 variables.
+
+#### Training limitations
+
+* Timestamps: At most 1000000. Too few timestamps may decrease model quality. We recommend having more than 5,000 timestamps.
+* Granularity: The minimum granularity is `per_second`.
+
+#### Batch inference limitations
+
+* Timestamps: At most 20000, at least 1 sliding window length.
+
+#### Streaming inference limitations
+
+* Timestamps: At most 2880, at least 1 sliding window length.
+* Detecting timestamps: From 1 to 10.
+
+## Model quality
+
+### How to deal with false positives and false negatives in real scenarios?
+We have provided severity that indicates the significance of anomalies. False positives may be filtered out by setting up a threshold on the severity. Sometimes too many false positives may appear when there are pattern shifts in the inference data. In such cases, a model may need to be retrained on new data. If the training data contains too many anomalies, there could be false negatives in the detection results. This is because the model learns patterns from the training data and anomalies may bring bias to the model. Thus, proper data cleaning may help reduce false negatives.
+
+### How to estimate which model is best to use according to training loss and validation loss?
+Generally speaking, it's hard to decide which model is the best without a labeled dataset. However, we can leverage the training and validation losses to have a rough estimation and discard those bad models. First, we need to observe whether training losses converge. Divergent losses often indicate poor quality of the model. Second, loss values may help identify whether underfitting or overfitting occurs. Models that are underfitting or overfitting may not have desired performance. Third, although the definition of the loss function doesn't reflect the detection performance directly, loss values may be an auxiliary tool to estimate model quality. Low loss value is a necessary condition for a good model, thus we may discard models with high loss values.
+
+## Common pitfalls
+
+Apart from the [error code table](./troubleshoot.md), we've learned from customers like you some common pitfalls while using MVAD APIs. This table will help you to avoid these issues.
+
+| Pitfall | Consequence | Explanation and solution |
+| ------- | ----------- | ------------------------ |
+| Timestamps in training data and/or inference data weren't rounded up to align with the respective data frequency of each variable. | The timestamps of the inference results aren't as expected: either too few timestamps or too many timestamps. | Please refer to [Timestamp round-up](#timestamp-round-up). |
+| Too many anomalous data points in the training data | Model accuracy is impacted negatively because the model treats anomalous data points as normal patterns during training. | Empirically, keeping the abnormal rate at or below **1%** will help. |
+| Too little training data | Model accuracy is compromised. | Empirically, training an MVAD model requires 15,000 or more data points (timestamps) per variable to maintain good accuracy. |
+| Taking all data points with `isAnomaly`=`true` as anomalies | Too many false positives | You should use both `isAnomaly` and `severity` (or `score`) to sift out anomalies that aren't severe and (optionally) use grouping to check the duration of the anomalies to suppress random noises. Please refer to the [FAQ](#faq) section below for the difference between `severity` and `score`. |
+| Sub-folders are zipped into the data file for training or inference. | The csv data files inside sub-folders are ignored during training and/or inference. | No sub-folders are allowed in the zip file. Please refer to [Folder structure](#folder-structure) for details. |
+| Too much data in the inference data file: for example, compressing all historical data in the inference data zip file | You may not see any errors but you'll experience degraded performance when you try to upload the zip file to Azure Blob as well as when you try to run inference. | Please refer to [Data quantity](#data-quantity) for details. |
+| Creating Anomaly Detector resources on Azure regions that don't support MVAD yet and calling MVAD APIs | You'll get a "resource not found" error while calling the MVAD APIs. | During preview stage, MVAD is available on limited regions only. Please bookmark [What's new in Anomaly Detector](../whats-new.md) to keep up to date with MVAD region roll-outs. You could also file a GitHub issue or contact us at AnomalyDetector@microsoft.com to request for specific regions. |
+
+## FAQ
+
+### How does MVAD sliding window work?
+
+Let's use two examples to learn how MVAD's sliding window works. Suppose you have set `slidingWindow` = 1,440, and your input data is at one-minute granularity.
+
+* **Streaming scenario**: You want to predict whether the ONE data point at "2021-01-02T00:00:00Z" is anomalous. Your `startTime` and `endTime` will be the same value ("2021-01-02T00:00:00Z"). Your inference data source, however, must contain at least 1,440 + 1 timestamps, because MVAD takes the leading data before the target data point ("2021-01-02T00:00:00Z") into account to decide whether the target is an anomaly. The length of the needed leading data is `slidingWindow`, or 1,440 in this case. Because 1,440 = 60 * 24, your input data must start no later than "2021-01-01T00:00:00Z".
+
+* **Batch scenario**: You have multiple target data points to predict. Your `endTime` will be greater than your `startTime`. Inference in such scenarios is performed in a "moving window" manner. For example, MVAD will use data from `2021-01-01T00:00:00Z` to `2021-01-01T23:59:00Z` (inclusive) to determine whether data at `2021-01-02T00:00:00Z` is anomalous. Then it moves forward and uses data from `2021-01-01T00:01:00Z` to `2021-01-02T00:00:00Z` (inclusive) to determine whether data at `2021-01-02T00:01:00Z` is anomalous. It moves on in the same manner (taking 1,440 data points to compare) until the last timestamp specified by `endTime` (or the actual latest timestamp). Therefore, your inference data source must contain data starting from `startTime` - `slidingWindow` and should ideally contain a total of `slidingWindow` + (`endTime` - `startTime`) data points (see the sketch after this list).
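+
+The sketch below illustrates that bookkeeping with the dates and one-minute granularity from the example above; it only computes the required data range and doesn't call the API:
+
+```python
+# Sketch: compute how much leading data a batch inference request needs.
+from datetime import datetime, timedelta
+
+sliding_window = 1440               # data points in the sliding window
+granularity = timedelta(minutes=1)  # one-minute data
+
+start_time = datetime(2021, 1, 2, 0, 0, 0)
+end_time = datetime(2021, 1, 3, 0, 0, 0)
+
+required_data_start = start_time - sliding_window * granularity
+points_to_score = int((end_time - start_time) / granularity) + 1  # inclusive of both endpoints
+total_points = sliding_window + points_to_score
+
+print(required_data_start)  # 2021-01-01 00:00:00
+print(total_points)         # leading window plus every timestamp to score
+```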
+
+### What's the difference between `severity` and `score`?
+
+Normally, we recommend using `severity` as the filter to sift out 'anomalies' that aren't important to your business. Depending on your scenario and data pattern, anomalies that are less important often have relatively lower `severity` values or standalone (discontinuous) high `severity` values like random spikes.
+
+In cases where you need more sophisticated rules than thresholds against `severity` or the duration of continuous high `severity` values, you may want to use `score` to build more powerful filters. Understanding how MVAD uses `score` to determine anomalies may help:
+
+We consider whether a data point is anomalous from both global and local perspectives. If `score` at a timestamp is higher than a certain threshold, then the timestamp is marked as an anomaly. If `score` is lower than the threshold but is relatively higher in a segment, it's also marked as an anomaly.
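+
+A hedged Python sketch of such a filter follows; the `isAnomaly`/`severity` field names are the response fields discussed here, while the threshold, the minimum run length, and the sample results are illustrative choices rather than service defaults:
+
+```python
+# Sketch: keep only anomalies that are severe enough, then group consecutive
+# anomalous timestamps to suppress isolated random spikes.
+SEVERITY_THRESHOLD = 0.3  # illustrative value; tune per scenario
+MIN_CONSECUTIVE = 2       # require at least 2 consecutive severe points
+
+results = [
+    {"timestamp": "2021-01-02T00:00:00Z", "isAnomaly": True, "severity": 0.10},
+    {"timestamp": "2021-01-02T00:01:00Z", "isAnomaly": True, "severity": 0.45},
+    {"timestamp": "2021-01-02T00:02:00Z", "isAnomaly": True, "severity": 0.50},
+    {"timestamp": "2021-01-02T00:03:00Z", "isAnomaly": False, "severity": 0.00},
+]
+
+groups, current = [], []
+for r in results:
+    if r["isAnomaly"] and r["severity"] >= SEVERITY_THRESHOLD:
+        current.append(r)
+    elif current:
+        groups.append(current)
+        current = []
+if current:
+    groups.append(current)
+
+alerts = [g for g in groups if len(g) >= MIN_CONSECUTIVE]
+print(f"{len(alerts)} alert group(s)")  # 1: the two consecutive severe points
+```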
+
+## Next steps
+
+* [Quickstarts: Use the Anomaly Detector multivariate client library](../quickstarts/client-libraries-multivariate.md).
+* [Learn about the underlying algorithms that power Anomaly Detector Multivariate](https://arxiv.org/abs/2009.02040)
ai-services Multivariate Architecture https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/Anomaly-Detector/concepts/multivariate-architecture.md
+
+ Title: Predictive maintenance architecture for using the Anomaly Detector Multivariate API
+
+description: Reference architecture for using the Anomaly Detector Multivariate APIs to apply anomaly detection to your time series data for predictive maintenance.
+ Last updated : 12/15/2022
+keywords: anomaly detection, machine learning, algorithms
+
+# Predictive maintenance solution with Multivariate Anomaly Detector
+
+Many different industries need predictive maintenance solutions to reduce risks and gain actionable insights through processing data from their equipment. Predictive maintenance evaluates the condition of equipment by performing online monitoring. The goal is to perform maintenance before the equipment degrades or breaks down.
+
+Monitoring the health status of equipment can be challenging, as each component inside the equipment can generate dozens of signals, for example, vibration, orientation, and rotation. This can be even more complex when those signals have an implicit relationship and need to be monitored and analyzed together. Defining different rules for those signals and correlating them with each other manually can be costly. Anomaly Detector's multivariate feature allows:
+
+* Multiple correlated signals to be monitored together, and the inter-correlations between them are accounted for in the model.
+* In each captured anomaly, the contribution rank of different signals can help with anomaly explanation, and incident root cause analysis.
+* The multivariate anomaly detection model is built in an unsupervised manner. Models can be trained specifically for different types of equipment.
+
+Here, we provide a reference architecture for a predictive maintenance solution based on Multivariate Anomaly Detector.
+
+## Reference architecture
+
+[ ![Architectural diagram that starts at sensor data being collected at the edge with a piece of industrial equipment and tracks the processing/analysis pipeline to an end output of an incident alert being generated after Anomaly Detector runs.](../media/multivariate-architecture/multivariate-architecture.png) ](../media/multivariate-architecture/multivariate-architecture.png#lightbox)
+
+In the above architecture, streaming events coming from sensor data will be stored in Azure Data Lake and then processed by a data transforming module to be converted into a time-series format. Meanwhile, the streaming event will trigger real-time detection with the trained model. In general, there will be a module to manage the multivariate model life cycle, like *Bridge Service* in this architecture.
+
+**Model training**: Before using Anomaly Detector multivariate to detect anomalies for a component or piece of equipment, we need to train a model on the specific signals (time series) generated by this entity. The *Bridge Service* will fetch historical data and submit a training job to the Anomaly Detector and then keep the Model ID in the *Model Meta* storage.
+
+**Model validation**: The training time of a model can vary based on the training data volume. The *Bridge Service* could query model status and diagnostic info on a regular basis. Validating model quality could be necessary before putting it online. If there are labels in the scenario, those labels can be used to verify the model quality. Otherwise, the diagnostic info can be used to evaluate the model quality, and you can also perform detection on historical data with the trained model and evaluate the result to backtest the validity of the model.
+
+**Model inference**: Online detection will be performed with the valid model, and the result ID can be stored in the *Inference table*. Both the training process and the inference process are done in an asynchronous manner. In general, a detection task can be completed within seconds. Signals used for detection should be the same ones that have been used for training. For example, if we use vibration, orientation, and rotation for training, in detection the three signals should be included as an input.
+
+**Incident alerting**: The detection results can be queried with result IDs. Each result contains the severity of each anomaly and a contribution rank. Contribution rank can be used to understand why this anomaly happened and which signal caused this incident. Different thresholds can be set on the severity to generate alerts and notifications to be sent to field engineers to conduct maintenance work.
+
+## Next steps
+
+- [Quickstarts](../quickstarts/client-libraries-multivariate.md).
+- [Best Practices](../concepts/best-practices-multivariate.md): This article is about recommended patterns to use with the multivariate APIs.
ai-services Troubleshoot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/Anomaly-Detector/concepts/troubleshoot.md
+
+ Title: Troubleshoot the Anomaly Detector multivariate API
+
+description: Learn how to remediate common error codes when you use the Azure AI Anomaly Detector multivariate API.
+ Last updated : 04/01/2021
+keywords: anomaly detection, machine learning, algorithms
+
+# Troubleshoot the multivariate API
+
+This article provides guidance on how to troubleshoot and remediate common error messages when you use the Azure AI Anomaly Detector multivariate API.
+
+## Multivariate error codes
+
+The following tables list multivariate error codes.
+
+### Common errors
+
+| Error code | HTTP error code | Error message | Comment |
+| ---------- | --------------- | ------------- | ------- |
+| `SubscriptionNotInHeaders` | 400 | apim-subscription-id is not found in headers. | Add your APIM subscription ID in the header. An example header is `{"apim-subscription-id": <Your Subscription ID>}`. |
+| `FileNotExist` | 400 | File \<source> does not exist. | Check the validity of your blob shared access signature. Make sure that it hasn't expired. |
+| `InvalidBlobURL` | 400 | | Your blob shared access signature isn't a valid shared access signature. |
+| `StorageWriteError` | 403 | | This error is possibly caused by permission issues. Our service isn't allowed to write the data to the blob encrypted by a customer-managed key. Either remove the customer-managed key or grant access to our service again. For more information, see [Configure customer-managed keys with Azure Key Vault for Azure AI services](../../encryption/cognitive-services-encryption-keys-portal.md). |
+| `StorageReadError` | 403 | | Same as `StorageWriteError`. |
+| `UnexpectedError` | 500 | | Contact us with detailed error information. You could take the support options from [Azure AI services support and help options](../../cognitive-services-support-options.md?context=%2fazure%2fcognitive-services%2fanomaly-detector%2fcontext%2fcontext) or email us at [AnomalyDetector@microsoft.com](mailto:AnomalyDetector@microsoft.com). |
+
+### Train a multivariate anomaly detection model
+
+| Error code | HTTP error code | Error message | Comment |
+| ---------- | --------------- | ------------- | ------- |
+| `TooManyModels` | 400 | This subscription has reached the maximum number of models. | Each APIM subscription ID is allowed to have 300 active models. Delete unused models before you train a new model. |
+| `TooManyRunningModels` | 400 | This subscription has reached the maximum number of running models. | Each APIM subscription ID is allowed to train five models concurrently. Train a new model after previous models have completed their training process. |
+| `InvalidJsonFormat` | 400 | Invalid JSON format. | Training request isn't a valid JSON. |
+| `InvalidAlignMode` | 400 | The `'alignMode'` field must be one of the following: `'Inner'` or `'Outer'` . | Check the value of `'alignMode'`, which should be either `'Inner'` or `'Outer'` (case sensitive). |
+| `InvalidFillNAMethod` | 400 | The `'fillNAMethod'` field must be one of the following: `'Previous'`, `'Subsequent'`, `'Linear'`, `'Zero'`, `'Fixed'`, `'NotFill'`. It cannot be `'NotFill'` when `'alignMode'` is `'Outer'`. | Check the value of `'fillNAMethod'`. For more information, see [Best practices for using the Anomaly Detector multivariate API](./best-practices-multivariate.md#optional-parameters-for-training-api). |
+| `RequiredPaddingValue` | 400 | The `'paddingValue'` field is required in the request when `'fillNAMethod'` is `'Fixed'`. | You need to provide a valid padding value when `'fillNAMethod'` is `'Fixed'`. For more information, see [Best practices for using the Anomaly Detector multivariate API](./best-practices-multivariate.md#optional-parameters-for-training-api). |
+| `RequiredSource` | 400 | The `'source'` field is required in the request. | Your training request hasn't specified a value for the `'source'` field. An example is `{"source": <Your Blob SAS>}`. |
+| `RequiredStartTime` | 400 | The `'startTime'` field is required in the request. | Your training request hasn't specified a value for the `'startTime'` field. An example is `{"startTime": "2021-01-01T00:00:00Z"}`. |
+| `InvalidTimestampFormat` | 400 | Invalid timestamp format. The `<timestamp>` format is not a valid format. | The format of timestamp in the request body isn't correct. Try `import pandas as pd; pd.to_datetime(timestamp)` to verify. |
+| `RequiredEndTime` | 400 | The `'endTime'` field is required in the request. | Your training request hasn't specified a value for the `'endTime'` field. An example is `{"endTime": "2021-01-01T00:00:00Z"}`. |
+| `InvalidSlidingWindow` | 400 | The `'slidingWindow'` field must be an integer between 28 and 2880. | The `'slidingWindow'` field must be an integer between 28 and 2880 (inclusive). |
+
+### Get a multivariate model with a model ID
+
+| Error code | HTTP error code | Error message | Comment |
+| ---------- | --------------- | ------------- | ------- |
+| `ModelNotExist` | 404 | The model does not exist. | The model with corresponding model ID doesn't exist. Check the model ID in the request URL. |
+
+### List multivariate models
+
+| Error code | HTTP error code | Error message | Comment |
+| ---------- | --------------- | ------------- | ------- |
+|`InvalidRequestParameterError`| 400 | Invalid values for $skip or $top. | Check whether the values for the two parameters are numerical. The values $skip and $top are used to list the models with pagination. Because the API only returns the 10 most recently updated models, you could use $skip and $top to get models updated earlier. |
+
+### Anomaly detection with a trained model
+
+| Error code | HTTP error code | Error message | Comment |
+| ---------- | --------------- | ------------- | ------- |
+| `ModelNotExist` | 404 | The model does not exist. | The model used for inference doesn't exist. Check the model ID in the request URL. |
+| `ModelFailed` | 400 | Model failed to be trained. | The model isn't successfully trained. Get detailed information by getting the model with model ID. |
+| `ModelNotReady` | 400 | The model is not ready yet. | The model isn't ready yet. Wait for a while until the training process completes. |
+| `InvalidFileSize` | 413 | File \<file> exceeds the file size limit (\<size limit> bytes). | The size of inference data exceeds the upper limit, which is currently 2 GB. Use less data for inference. |
+
+### Get detection results
+
+| Error code | HTTP error code | Error message | Comment |
+| ---------- | --------------- | ------------- | ------- |
+| `ResultNotExist` | 404 | The result does not exist. | The result per request doesn't exist. Either inference hasn't completed or the result has expired. The expiration time is seven days. |
+
+### Data processing errors
+
+The following error codes don't have associated HTTP error codes.
+
+| Error code | Error message | Comment |
+| ---------- | ------------- | ------- |
+| `NoVariablesFound` | No variables found. Check that your files are organized as per instruction. | No CSV files could be found from the data source. This error is typically caused by incorrect organization of files. See the sample data for the desired structure. |
+| `DuplicatedVariables` | There are multiple variables with the same name. | There are duplicated variable names. |
+| `FileNotExist` | File \<filename> does not exist. | This error usually happens during inference. The variable has appeared in the training data but is missing in the inference data. |
+| `RedundantFile` | File \<filename> is redundant. | This error usually happens during inference. The variable wasn't in the training data but appeared in the inference data. |
+| `FileSizeTooLarge` | The size of file \<filename> is too large. | The size of the single CSV file \<filename> exceeds the limit. Train with less data. |
+| `ReadingFileError` | Errors occurred when reading \<filename>. \<error messages> | Failed to read the file \<filename>. For more information, see the \<error messages> or verify with `pd.read_csv(filename)` in a local environment. |
+| `FileColumnsNotExist` | Columns timestamp or value in file \<filename> do not exist. | Each CSV file must have two columns with the names **timestamp** and **value** (case sensitive). |
+| `VariableParseError` | Variable \<variable> parse \<error message> error. | Can't process the \<variable> because of runtime errors. For more information, see the \<error message> or contact us with the \<error message>. |
+| `MergeDataFailed` | Failed to merge data. Check data format. | Data merge failed. This error is possibly because of the wrong data format or the incorrect organization of files. See the sample data for the current file structure. |
+| `ColumnNotFound` | Column \<column> cannot be found in the merged data. | A column is missing after merge. Verify the data. |
+| `NumColumnsMismatch` | Number of columns of merged data does not match the number of variables. | Verify the data. |
+| `TooManyData` | Too many data points. Maximum number is 1000000 per variable. | Reduce the size of input data. |
+| `NoData` | There is no effective data. | There's no data to train/inference after processing. Check the start time and end time. |
+| `DataExceedsLimit` | The length of data whose timestamp is between `startTime` and `endTime` exceeds the limit (\<limit>). | The size of data after processing exceeds the limit. Currently, there's no limit on processed data. |
+| `NotEnoughInput` | Not enough data. The length of data is \<data length>, but the minimum length should be larger than sliding window, which is \<sliding window size>. | The minimum number of data points for inference is the size of the sliding window. Try to provide more data for inference. |
ai-services Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/Anomaly-Detector/overview.md
+
+ Title: What is Anomaly Detector?
+
+description: Use the Anomaly Detector API's algorithms to apply anomaly detection on your time series data.
+ Last updated : 10/27/2022
+keywords: anomaly detection, machine learning, algorithms
+
+# What is Anomaly Detector?
+
+Anomaly Detector is an AI service with a set of APIs that enables you to monitor and detect anomalies in your time series data with little machine learning (ML) knowledge, through either batch validation or real-time inference.
+
+This documentation contains the following types of articles:
+* [**Quickstarts**](./Quickstarts/client-libraries.md) are step-by-step instructions that let you make calls to the service and get results in a short period of time.
+* The [**Interactive demo**](https://aka.ms/adDemo) can help you understand how Anomaly Detector works through simple operations.
+* [**How-to guides**](./how-to/identify-anomalies.md) contain instructions for using the service in more specific or customized ways.
+* [**Tutorials**](./tutorials/batch-anomaly-detection-powerbi.md) are longer guides that show you how to use this service as a component in broader business solutions.
+* [**Code samples**](https://github.com/Azure-Samples/AnomalyDetector/tree/master/ipython-notebook) demonstrate how to use Anomaly Detector.
+* [**Conceptual articles**](./concepts/anomaly-detection-best-practices.md) provide in-depth explanations of the service's functionality and features.
+
+## Anomaly Detector capabilities
+
+With Anomaly Detector, you can either detect anomalies in one variable using Univariate Anomaly Detector, or detect anomalies in multiple variables with Multivariate Anomaly Detector.
+
+|Feature |Description |
+|---------|------------|
+|Univariate Anomaly Detection | Detect anomalies in one variable, like revenue, cost, etc. The model is selected automatically based on your data pattern. |
+|Multivariate Anomaly Detection| Detect anomalies in multiple variables with correlations, which are usually gathered from equipment or other complex systems. The underlying model used is a Graph Attention Network.|
+
+### Univariate Anomaly Detection
+
+The Univariate Anomaly Detector API enables you to monitor and detect abnormalities in your time series data without having to know machine learning. The algorithms adapt by automatically identifying and applying the best-fitting models to your data, regardless of industry, scenario, or data volume. Using your time series data, the API determines boundaries for anomaly detection, expected values, and which data points are anomalies.
+
+![Line graph of detect pattern changes in service requests.](./media/anomaly_detection2.png)
+
+Using the Anomaly Detector doesn't require any prior experience in machine learning, and the REST API enables you to easily integrate the service into your applications and processes.
+
+With the Univariate Anomaly Detector, you can automatically detect anomalies throughout your time series data, or as they occur in real-time.
+
+|Feature |Description |
+|---------|------------|
+| Streaming detection| Detect anomalies in your streaming data by using previously seen data points to determine if your latest one is an anomaly. This operation generates a model using the data points you send, and determines if the target point is an anomaly. By calling the API with each new data point you generate, you can monitor your data as it's created. |
+| Batch detection | Use your time series to detect any anomalies that might exist throughout your data. This operation generates a model using your entire time series data, with each point analyzed with the same model. |
+| Change points detection | Use your time series to detect any trend change points that exist in your data. This operation generates a model using your entire time series data, with each point analyzed with the same model. |
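+
+For illustration, here's a hedged Python sketch of the batch (entire-series) detection operation described in the table above; the endpoint and key are placeholders, the daily series is invented, and the `expectedValues`, `upperMargins`, `lowerMargins`, and `isAnomaly` response fields are the same ones used in the code samples elsewhere in this documentation:
+
+```python
+# Sketch: batch (entire-series) detection, then read the detection boundaries.
+import requests
+
+ENDPOINT = "https://<your-resource>.cognitiveservices.azure.com"
+URL = f"{ENDPOINT}/anomalydetector/v1.0/timeseries/entire/detect"
+HEADERS = {"Ocp-Apim-Subscription-Key": "<your-key>"}
+
+payload = {
+    "granularity": "daily",
+    "series": [
+        {"timestamp": f"2018-03-{day:02d}T00:00:00Z", "value": 30000000 + day * 1000}
+        for day in range(1, 15)
+    ],
+}
+
+response = requests.post(URL, headers=HEADERS, json=payload)
+response.raise_for_status()
+body = response.json()
+
+for point, expected, upper, lower, is_anomaly in zip(
+    payload["series"], body["expectedValues"], body["upperMargins"],
+    body["lowerMargins"], body["isAnomaly"],
+):
+    boundary = (expected - lower, expected + upper)
+    print(point["timestamp"], boundary, "anomaly" if is_anomaly else "ok")
+```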
+
+### Multivariate Anomaly Detection
+
+The **Multivariate Anomaly Detection** APIs further enable developers by easily integrating advanced AI for detecting anomalies from groups of metrics, without the need for machine learning knowledge or labeled data. Dependencies and inter-correlations between up to 300 different signals are now automatically counted as key factors. This new capability helps you to proactively protect your complex systems such as software applications, servers, factory machines, spacecraft, or even your business, from failures.
+
+![Line graph for multiple variables including: rotation, optical filter, pressure, bearing with anomalies highlighted in orange.](./media/multivariate-graph.png)
+
+Imagine 20 sensors from an auto engine generating 20 different signals like rotation, fuel pressure, bearing, etc. The readings of those signals individually may not tell you much about system level issues, but together they can represent the health of the engine. When the interaction of those signals deviates outside the usual range, the multivariate anomaly detection feature can sense the anomaly like a seasoned expert. The underlying AI models are trained and customized using your data such that it understands the unique needs of your business. With the new APIs in Anomaly Detector, developers can now easily integrate the multivariate time series anomaly detection capabilities into predictive maintenance solutions, AIOps monitoring solutions for complex enterprise software, or business intelligence tools.
+
+## Join the Anomaly Detector community
+
+Join the [Anomaly Detector Advisors group on Microsoft Teams](https://aka.ms/AdAdvisorsJoin) for better support and any updates!
+
+## Algorithms
+
+* Blogs and papers:
+ * [Introducing Azure AI Anomaly Detector API](https://techcommunity.microsoft.com/t5/AI-Customer-Engineering-Team/Introducing-Azure-Anomaly-Detector-API/ba-p/490162)
+ * [Overview of SR-CNN algorithm in Azure AI Anomaly Detector](https://techcommunity.microsoft.com/t5/AI-Customer-Engineering-Team/Overview-of-SR-CNN-algorithm-in-Azure-Anomaly-Detector/ba-p/982798)
+ * [Introducing Multivariate Anomaly Detection](https://techcommunity.microsoft.com/t5/azure-ai/introducing-multivariate-anomaly-detection/ba-p/2260679)
+ * [Multivariate time series Anomaly Detection via Graph Attention Network](https://arxiv.org/abs/2009.02040)
+ * [Time-Series Anomaly Detection Service at Microsoft](https://arxiv.org/abs/1906.03821) (accepted by KDD 2019)
+
+* Videos:
+ > [!VIDEO https://www.youtube.com/embed/ERTaAnwCarM]
+
+ > [!VIDEO https://www.youtube.com/embed/FwuI02edclQ]
+
+## Next steps
+
+* [Quickstart: Detect anomalies in your time series data using the Univariate Anomaly Detection](quickstarts/client-libraries.md)
+* [Quickstart: Detect anomalies in your time series data using the Multivariate Anomaly Detection](quickstarts/client-libraries-multivariate.md)
+* The Anomaly Detector [REST API reference](https://aka.ms/ad-api)
ai-services Client Libraries Multivariate https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/Anomaly-Detector/quickstarts/client-libraries-multivariate.md
+
+ Title: 'Quickstart: Anomaly detection using the Anomaly Detector client library for multivariate anomaly detection'
+
+description: The Anomaly Detector multivariate offers client libraries to detect abnormalities in your data series either as a batch or on streaming data.
+
+zone_pivot_groups: anomaly-detector-quickstart-multivariate
+ Last updated : 10/27/2022
+keywords: anomaly detection, algorithms
+ms.devlang: csharp, java, javascript, python
+
+# Quickstart: Use the Multivariate Anomaly Detector client library
+
ai-services Client Libraries https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/Anomaly-Detector/quickstarts/client-libraries.md
+
+ Title: 'Quickstart: Anomaly detection using the Anomaly Detector client library'
+
+description: The Anomaly Detector API offers client libraries to detect abnormalities in your data series either as a batch or on streaming data.
+
+zone_pivot_groups: anomaly-detector-quickstart
+ Last updated : 10/27/2022
+keywords: anomaly detection, algorithms
+ms.devlang: csharp, javascript, python
+recommendations: false
+
+# Quickstart: Use the Univariate Anomaly Detector client library
+
ai-services Regions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/Anomaly-Detector/regions.md
+
+ Title: Regions - Anomaly Detector service
+
+description: A list of available regions and endpoints for the Anomaly Detector service, including Univariate Anomaly Detection and Multivariate Anomaly Detection.
+ Last updated : 11/1/2022
+# Anomaly Detector service supported regions
+
+The Anomaly Detector service provides anomaly detection technology on your time series data. The service is available in multiple regions with unique endpoints for the Anomaly Detector SDK and REST APIs.
+
+Keep in mind the following points:
+
+* If your application uses one of the Anomaly Detector service REST APIs, the region is part of the endpoint URI you use when making requests.
+* Keys created for a region are valid only in that region. If you attempt to use them with other regions, you will get authentication errors.
+
+> [!NOTE]
+> The Anomaly Detector service doesn't store or process customer data outside the region the customer deploys the service instance in.
+
+## Univariate Anomaly Detection
+
+The following regions are supported for Univariate Anomaly Detection. The geographies are listed in alphabetical order.
+
+| Geography | Region | Region identifier |
+| -- | -- | -- |
+| Africa | South Africa North | `southafricanorth` |
+| Asia Pacific | East Asia | `eastasia` |
+| Asia Pacific | Southeast Asia | `southeastasia` |
+| Asia Pacific | Australia East | `australiaeast` |
+| Asia Pacific | Central India | `centralindia` |
+| Asia Pacific | Japan East | `japaneast` |
+| Asia Pacific | Japan West | `japanwest` |
+| Asia Pacific | Jio India West | `jioindiawest` |
+| Asia Pacific | Korea Central | `koreacentral` |
+| Canada | Canada Central | `canadacentral` |
+| China | China East 2 | `chinaeast2` |
+| China | China North 2 | `chinanorth2` |
+| Europe | North Europe | `northeurope` |
+| Europe | West Europe | `westeurope` |
+| Europe | France Central | `francecentral` |
+| Europe | Germany West Central | `germanywestcentral` |
+| Europe | Norway East | `norwayeast` |
+| Europe | Switzerland North | `switzerlandnorth` |
+| Europe | UK South | `uksouth` |
+| Middle East | UAE North | `uaenorth` |
+| Qatar | Qatar Central | `qatarcentral` |
+| South America | Brazil South | `brazilsouth` |
+| Sweden | Sweden Central | `swedencentral` |
+| US | Central US | `centralus` |
+| US | East US | `eastus` |
+| US | East US 2 | `eastus2` |
+| US | North Central US | `northcentralus` |
+| US | South Central US | `southcentralus` |
+| US | West Central US | `westcentralus` |
+| US | West US | `westus`|
+| US | West US 2 | `westus2` |
+| US | West US 3 | `westus3` |
+
+## Multivariate Anomaly Detection
+
+The following regions are supported for Multivariate Anomaly Detection. The geographies are listed in alphabetical order.
+
+| Geography | Region | Region identifier |
+| -- | -- | -- |
+| Africa | South Africa North | `southafricanorth` |
+| Asia Pacific | East Asia | `eastasia` |
+| Asia Pacific | Southeast Asia | `southeastasia` |
+| Asia Pacific | Australia East | `australiaeast` |
+| Asia Pacific | Central India | `centralindia` |
+| Asia Pacific | Japan East | `japaneast` |
+| Asia Pacific | Jio India West | `jioindiawest` |
+| Asia Pacific | Korea Central | `koreacentral` |
+| Canada | Canada Central | `canadacentral` |
+| Europe | North Europe | `northeurope` |
+| Europe | West Europe | `westeurope` |
+| Europe | France Central | `francecentral` |
+| Europe | Germany West Central | `germanywestcentral` |
+| Europe | Norway East | `norwayeast` |
+| Europe | Switzerland North | `switzerlandnorth` |
+| Europe | UK South | `uksouth` |
+| Middle East | UAE North | `uaenorth` |
+| South America | Brazil South | `brazilsouth` |
+| US | Central US | `centralus` |
+| US | East US | `eastus` |
+| US | East US 2 | `eastus2` |
+| US | North Central US | `northcentralus` |
+| US | South Central US | `southcentralus` |
+| US | West Central US | `westcentralus` |
+| US | West US | `westus`|
+| US | West US 2 | `westus2` |
+| US | West US 3 | `westus3` |
+
+## Next steps
+
+* [Quickstart: Detect anomalies in your time series data using the Univariate Anomaly Detection](quickstarts/client-libraries.md)
+* [Quickstart: Detect anomalies in your time series data using the Multivariate Anomaly Detection](quickstarts/client-libraries-multivariate.md)
+* The Anomaly Detector [REST API reference](https://aka.ms/ad-api)
ai-services Service Limits https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/Anomaly-Detector/service-limits.md
+
+ Title: Service limits - Anomaly Detector service
+
+description: Service limits for Anomaly Detector service, including Univariate Anomaly Detection and Multivariate Anomaly Detection.
+ Last updated : 1/31/2023
+# Anomaly Detector service quotas and limits
+
+This article contains both a quick reference and detailed description of Azure AI Anomaly Detector service quotas and limits for all pricing tiers. It also contains some best practices to help avoid request throttling.
+
+The quotas and limits apply to all the versions within Azure AI Anomaly Detector service.
+
+## Univariate Anomaly Detection
+
+|Quota<sup>1</sup>|Free (F0)|Standard (S0)|
+|--|--|--|
+| **All APIs per second** | 10 | 500 |
+
+<sup>1</sup> All quotas and limits are defined for one Anomaly Detector resource.
+
+## Multivariate Anomaly Detection
+
+### API call per minute
+
+|Quota<sup>1</sup>|Free (F0)<sup>2</sup>|Standard (S0)|
+|--|--|--|
+| **Training API per minute** | 1 | 20 |
+| **Get model API per minute** | 1 | 20 |
+| **Batch(async) inference API per minute** | 10 | 60 |
+| **Get inference results API per minute** | 10 | 60 |
+| **Last(sync) inference API per minute** | 10 | 60 |
+| **List model API per minute** | 1 | 20 |
+| **Delete model API per minute** | 1 | 20 |
+
+<sup>1</sup> All quotas and limits are defined for one Anomaly Detector resource.
+
+<sup>2</sup> For **Free (F0)** pricing tier see also monthly allowances at the [pricing page](https://azure.microsoft.com/pricing/details/cognitive-services/anomaly-detector/)
+
+### Concurrent models and inference tasks
+|Quota<sup>1</sup>|Free (F0)|Standard (S0)|
+|--|--|--|
+| **Maximum models** *(created, running, ready, failed)*| 20 | 1000 |
+| **Maximum running models** *(created, running)* | 1 | 20 |
+| **Maximum running inference** *(created, running)* | 10 | 60 |
+
+<sup>1</sup> All quotas and limits are defined for one Anomaly Detector resource. If you want to increase the limit, please contact AnomalyDetector@microsoft.com for further communication.
+
+## How to increase the limit for your resource?
+
+For the Standard pricing tier, this limit can be increased. Increasing the **concurrent request limit** doesn't directly affect your costs. The Anomaly Detector service uses a "pay only for what you use" model. The limit defines how high the service may scale before it starts to throttle your requests.
+
+The **concurrent request limit parameter** isn't visible via Azure portal, Command-Line tools, or API requests. To verify the current value, create an Azure Support Request.
+
+If you would like to increase your limit, you can enable auto scaling on your resource. Follow this document to [enable auto scaling](../autoscale.md) on your resource. You can also submit a support request to increase your Transactions Per Second (TPS) limit.
+
+### Have the required information ready
+
+* Anomaly Detector resource ID
+
+* Region
+
+#### Retrieve resource ID and region
+
+* Go to [Azure portal](https://portal.azure.com/)
+* Select the Anomaly Detector Resource for which you would like to increase the transaction limit
+* Select Properties (Resource Management group)
+* Copy and save the values of the following fields:
+ * Resource ID
+ * Location (your endpoint Region)
+
+### Create and submit support request
+
+To request a limit increase for your resource, submit a **Support Request**:
+
+1. Go to [Azure portal](https://portal.azure.com/)
+2. Select the Anomaly Detector Resource for which you would like to increase the limit
+3. Select New support request (Support + troubleshooting group)
+4. A new window will appear with auto-populated information about your Azure Subscription and Azure Resource
+5. Enter Summary (like "Increase Anomaly Detector TPS limit")
+6. In Problem type, select *"Quota or usage validation"*
+7. Select Next: Solutions
+8. Proceed further with the request creation
+9. Under the Details tab, enter the following in the Description field:
+ * A note, that the request is about Anomaly Detector quota.
+ * Provide a TPS expectation you would like to scale to meet.
+ * Azure resource information you collected.
+ * Complete entering the required information and select Create button in *Review + create* tab
+ * Note the support request number in Azure portal notifications. You'll be contacted shortly for further processing.
ai-services Azure Data Explorer https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/Anomaly-Detector/tutorials/azure-data-explorer.md
+
+ Title: "Tutorial: Use Univariate Anomaly Detector in Azure Data Explorer"
+
+description: Learn how to use the Univariate Anomaly Detector with Azure Data Explorer.
+ Last updated : 12/19/2022
+# Tutorial: Use Univariate Anomaly Detector in Azure Data Explorer
+
+## Introduction
+
+The [Anomaly Detector API](../overview.md) enables you to check and detect abnormalities in your time series data without having to know machine learning. The Anomaly Detector API's algorithms adapt by automatically finding and applying the best-fitting models to your data, regardless of industry, scenario, or data volume. Using your time series data, the API decides boundaries for anomaly detection, expected values, and which data points are anomalies.
+
+[Azure Data Explorer](/azure/data-explorer/data-explorer-overview) is a fully managed, high-performance, big data analytics platform that makes it easy to analyze high volumes of data in near real-time. The Azure Data Explorer toolbox gives you an end-to-end solution for data ingestion, query, visualization, and management.
+
+## Anomaly Detection functions in Azure Data Explorer
+
+### Function 1: series_uv_anomalies_fl()
+
+The function **[series_uv_anomalies_fl()](/azure/data-explorer/kusto/functions-library/series-uv-anomalies-fl?tabs=adhoc)** detects anomalies in time series by calling the [Univariate Anomaly Detector API](../overview.md). The function accepts a limited set of time series as numerical dynamic arrays and the required anomaly detection sensitivity level. Each time series is converted into the required JSON (JavaScript Object Notation) format and posted to the Anomaly Detector service endpoint. The service response has dynamic arrays of high/low/all anomalies, the modeled baseline time series, its normal high/low boundaries (a value above or below the high/low boundary is an anomaly) and the detected seasonality.
+
+### Function 2: series_uv_change_points_fl()
+
+The function **[series_uv_change_points_fl()](/azure/data-explorer/kusto/functions-library/series-uv-change-points-fl?tabs=adhoc)** finds change points in time series by calling the Univariate Anomaly Detector API. The function accepts a limited set of time series as numerical dynamic arrays, the change point detection threshold, and the minimum size of the stable trend window. Each time series is converted into the required JSON format and posted to the Anomaly Detector service endpoint. The service response has dynamic arrays of change points, their respective confidence, and the detected seasonality.
+
+These two functions are user-defined [tabular functions](/azure/data-explorer/kusto/query/functions/user-defined-functions#tabular-function) applied using the [invoke operator](/azure/data-explorer/kusto/query/invokeoperator). You can either embed their code in your query, or you can define them as stored functions in your database.
+
+## Where to use these new capabilities?
+
+These two functions are available to use either in Azure Data Explorer website or in the Kusto Explorer application.
+
+![Screenshot of Azure Data Explorer and Kusto Explorer](../media/data-explorer/way-of-use.png)
+
+## Create resources
+
+1. [Create an Azure Data Explorer Cluster](https://portal.azure.com/#create/Microsoft.AzureKusto) in the Azure portal, after the resource is created successfully, go to the resource and create a database.
+2. [Create an Anomaly Detector](https://portal.azure.com/#create/Microsoft.CognitiveServicesAnomalyDetector) resource in the Azure portal and check the keys and endpoints that you'll need later.
+3. Enable plugins in Azure Data Explorer
+ * These new functions have inline Python and require [enabling the python() plugin](/azure/data-explorer/kusto/query/pythonplugin#enable-the-plugin) on the cluster.
+ * These new functions call the anomaly detection service endpoint and require:
+ * Enable the [http_request plugin / http_request_post plugin](/azure/data-explorer/kusto/query/http-request-plugin) on the cluster.
+ * Modify the [callout policy](/azure/data-explorer/kusto/management/calloutpolicy) for type `webapi` to allow accessing the service endpoint.
+
+## Download sample data
+
+This quickstart uses the `request-data.csv` file that can be downloaded from our [GitHub sample data](https://github.com/Azure/azure-sdk-for-python/blob/main/sdk/anomalydetector/azure-ai-anomalydetector/samples/sample_data/request-data.csv)
+
+ You can also download the sample data by running:
+
+```cmd
+curl "https://raw.githubusercontent.com/Azure/azure-sdk-for-python/main/sdk/anomalydetector/azure-ai-anomalydetector/samples/sample_data/request-data.csv" --output request-data.csv
+```
+
+Then ingest the sample data to Azure Data Explorer by following the [ingestion guide](/azure/data-explorer/ingest-sample-data?tabs=ingestion-wizard). Name the new table for the ingested data **univariate**.
+
+Once ingested, your data should look as follows:
++
+## Detect anomalies in an entire time series
+
+In Azure Data Explorer, run the following query to make an anomaly detection chart with your onboarded data. You can also [create a function](/azure/data-explorer/kusto/functions-library/series-uv-anomalies-fl?tabs=persistent) to add the code as a stored function for persistent usage.
+
+```kusto
+let series_uv_anomalies_fl=(tbl:(*), y_series:string, sensitivity:int=85, tsid:string='_tsid')
+{
+    let uri = '[Your-Endpoint]anomalydetector/v1.0/timeseries/entire/detect';
+    let headers=dynamic({'Ocp-Apim-Subscription-Key': h'[Your-key]'});
+    let kwargs = pack('y_series', y_series, 'sensitivity', sensitivity);
+    let code = ```if 1:
+        import json
+        y_series = kargs["y_series"]
+        sensitivity = kargs["sensitivity"]
+        json_str = []
+        for i in range(len(df)):
+            row = df.iloc[i, :]
+            ts = [{'value':row[y_series][j]} for j in range(len(row[y_series]))]
+            json_data = {'series': ts, "sensitivity":sensitivity}     # auto-detect period, or we can force 'period': 84. We can also add 'maxAnomalyRatio':0.25 for maximum 25% anomalies
+            json_str = json_str + [json.dumps(json_data)]
+        result = df
+        result['json_str'] = json_str
+    ```;
+    tbl
+    | evaluate python(typeof(*, json_str:string), code, kwargs)
+    | extend _tsid = column_ifexists(tsid, 1)
+    | partition by _tsid (
+       project json_str
+       | evaluate http_request_post(uri, headers, dynamic(null))
+       | project period=ResponseBody.period, baseline_ama=ResponseBody.expectedValues, ad_ama=series_add(0, ResponseBody.isAnomaly), pos_ad_ama=series_add(0, ResponseBody.isPositiveAnomaly)
+       , neg_ad_ama=series_add(0, ResponseBody.isNegativeAnomaly), upper_ama=series_add(ResponseBody.expectedValues, ResponseBody.upperMargins), lower_ama=series_subtract(ResponseBody.expectedValues, ResponseBody.lowerMargins)
+       | extend _tsid=toscalar(_tsid)
+      )
+}
+;
+let stime=datetime(2018-03-01);
+let etime=datetime(2018-04-16);
+let dt=1d;
+let ts = univariate
+| make-series value=avg(Column2) on Column1 from stime to etime step dt
+| extend _tsid='TS1';
+ts
+| invoke series_uv_anomalies_fl('value')
+| lookup ts on _tsid
+| render anomalychart with(xcolumn=Column1, ycolumns=value, anomalycolumns=ad_ama)
+```
+
+After you run the code, you'll render a chart like this:
++
+## Next steps
+
+* [Best practices of Univariate Anomaly Detection](../concepts/anomaly-detection-best-practices.md)
ai-services Batch Anomaly Detection Powerbi https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/Anomaly-Detector/tutorials/batch-anomaly-detection-powerbi.md
+
+ Title: "Tutorial: Visualize anomalies using batch detection and Power BI"
+
+description: Learn how to use the Anomaly Detector API and Power BI to visualize anomalies throughout your time series data.
++++++ Last updated : 09/10/2020+++
+# Tutorial: Visualize anomalies using batch detection and Power BI (univariate)
+
+Use this tutorial to find anomalies within a time series data set as a batch. Using Power BI Desktop, you'll take an Excel file, prepare the data for the Anomaly Detector API, and visualize statistical anomalies throughout it.
+
+In this tutorial, you'll learn how to:
+
+> [!div class="checklist"]
+> * Use Power BI Desktop to import and transform a time series data set
+> * Integrate Power BI Desktop with the Anomaly Detector API for batch anomaly detection
+> * Visualize anomalies found within your data, including expected and seen values, and anomaly detection boundaries.
+
+## Prerequisites
+* An [Azure subscription](https://azure.microsoft.com/free/cognitive-services)
+* [Microsoft Power BI Desktop](https://powerbi.microsoft.com/get-started/), available for free.
+* An Excel file (.xlsx) containing time series data points.
+* Once you have your Azure subscription, <a href="https://portal.azure.com/#create/Microsoft.CognitiveServicesAnomalyDetector" title="Create an Anomaly Detector resource" target="_blank">create an Anomaly Detector resource </a> in the Azure portal to get your key and endpoint.
+ * You will need the key and endpoint from the resource you create to connect your application to the Anomaly Detector API. You'll do this later in the tutorial.
++
+## Load and format the time series data
+
+To get started, open Power BI Desktop and load the time series data you downloaded from the prerequisites. This Excel file contains a series of Coordinated Universal Time (UTC) timestamp and value pairs.
+
+> [!NOTE]
+> Power BI can use data from a wide variety of sources, such as .csv files, SQL databases, Azure blob storage, and more.
+
+In the main Power BI Desktop window, select the **Home** ribbon. In the **External data** group of the ribbon, open the **Get Data** drop-down menu and select **Excel**.
+
+![An image of the "Get Data" button in Power BI](../media/tutorials/power-bi-get-data-button.png)
+
+After the dialog appears, navigate to the folder where you downloaded the example .xlsx file and select it. When the **Navigator** dialog appears, select **Sheet1**, and then select **Edit**.
+
+![An image of the data source "Navigator" screen in Power BI](../media/tutorials/navigator-dialog-box.png)
+
+Power BI will convert the timestamps in the first column to a `Date/Time` data type. These timestamps must be converted to text in order to be sent to the Anomaly Detector API. If the Power Query editor doesn't automatically open, select **Edit Queries** on the home tab.
+
+Select the **Transform** ribbon in the Power Query Editor. In the **Any Column** group, open the **Data Type:** drop-down menu, and select **Text**.
+
+![An image of the data type drop down](../media/tutorials/data-type-drop-down.png)
+
+When you get a notice about changing the column type, select **Replace Current**. Afterwards, select **Close & Apply** or **Apply** in the **Home** ribbon.
+
+## Create a function to send the data and format the response
+
+To format and send the data file to the Anomaly Detector API, you can invoke a query on the table created above. In the Power Query Editor, from the **Home** ribbon, open the **New Source** drop-down menu and select **Blank Query**.
+
+Make sure your new query is selected, then select **Advanced Editor**.
+
+![An image of the "Advanced Editor" screen](../media/tutorials/advanced-editor-screen.png)
+
+Within the Advanced Editor, use the following Power Query M snippet to extract the columns from the table and send them to the API. Afterwards, the query creates a table from the JSON response and returns it. Replace the `apikey` variable with your valid Anomaly Detector API key, and `endpoint` with your endpoint. After you've entered the query into the Advanced Editor, select **Done**.
+
+```M
+(table as table) => let
+
+ apikey = "[Placeholder: Your Anomaly Detector resource access key]",
+ endpoint = "[Placeholder: Your Anomaly Detector resource endpoint]/anomalydetector/v1.0/timeseries/entire/detect",
+ inputTable = Table.TransformColumnTypes(table,{{"Timestamp", type text},{"Value", type number}}),
+ jsontext = Text.FromBinary(Json.FromValue(inputTable)),
+ jsonbody = "{ ""Granularity"": ""daily"", ""Sensitivity"": 95, ""Series"": "& jsontext &" }",
+ bytesbody = Text.ToBinary(jsonbody),
+ headers = [#"Content-Type" = "application/json", #"Ocp-Apim-Subscription-Key" = apikey],
+ bytesresp = Web.Contents(endpoint, [Headers=headers, Content=bytesbody, ManualStatusHandling={400}]),
+ jsonresp = Json.Document(bytesresp),
+
+ respTable = Table.FromColumns({
+
+ Table.Column(inputTable, "Timestamp")
+ ,Table.Column(inputTable, "Value")
+ , Record.Field(jsonresp, "IsAnomaly") as list
+ , Record.Field(jsonresp, "ExpectedValues") as list
+ , Record.Field(jsonresp, "UpperMargins")as list
+ , Record.Field(jsonresp, "LowerMargins") as list
+ , Record.Field(jsonresp, "IsPositiveAnomaly") as list
+ , Record.Field(jsonresp, "IsNegativeAnomaly") as list
+
+ }, {"Timestamp", "Value", "IsAnomaly", "ExpectedValues", "UpperMargin", "LowerMargin", "IsPositiveAnomaly", "IsNegativeAnomaly"}
+ ),
+
+ respTable1 = Table.AddColumn(respTable , "UpperMargins", (row) => row[ExpectedValues] + row[UpperMargin]),
+ respTable2 = Table.AddColumn(respTable1 , "LowerMargins", (row) => row[ExpectedValues] - row[LowerMargin]),
+ respTable3 = Table.RemoveColumns(respTable2, "UpperMargin"),
+ respTable4 = Table.RemoveColumns(respTable3, "LowerMargin"),
+
+ results = Table.TransformColumnTypes(
+
+ respTable4,
+ {{"Timestamp", type datetime}, {"Value", type number}, {"IsAnomaly", type logical}, {"IsPositiveAnomaly", type logical}, {"IsNegativeAnomaly", type logical},
+ {"ExpectedValues", type number}, {"UpperMargins", type number}, {"LowerMargins", type number}}
+ )
+
+ in results
+```
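+
+If it helps to see what the margin arithmetic in the `respTable1` and `respTable2` steps produces, here's a minimal pandas sketch of the same calculation (boundary = expected value +/- margin). The sample numbers are made up for illustration, and the column names match the ones the M query returns; the M query above already does this for you inside Power BI:
+
+```python
+import pandas as pd
+
+# Illustrative values standing in for the parsed API response lists.
+df = pd.DataFrame({
+    "Value":          [10.2, 10.9, 20.0],
+    "ExpectedValues": [10.0, 11.0, 12.0],
+    "UpperMargin":    [1.0, 1.0, 1.5],
+    "LowerMargin":    [1.0, 1.0, 1.5],
+    "IsAnomaly":      [False, False, True],
+})
+
+# Same arithmetic as respTable1/respTable2: expected value +/- margin.
+df["UpperMargins"] = df["ExpectedValues"] + df["UpperMargin"]
+df["LowerMargins"] = df["ExpectedValues"] - df["LowerMargin"]
+
+# Same cleanup as respTable3/respTable4: drop the raw margin columns.
+df = df.drop(columns=["UpperMargin", "LowerMargin"])
+print(df)
+```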
+
+Invoke the query on your data sheet by selecting `Sheet1` below **Enter Parameter**, and then select **Invoke**.
+
+![An image of the invoke function](../media/tutorials/invoke-function-screenshot.png)
+
+> [!IMPORTANT]
+> Remember to remove the key from your code when you're done, and never post it publicly. For production, use a secure way of storing and accessing your credentials like [Azure Key Vault](../../../key-vault/general/overview.md). See the Azure AI services [security](../../security-features.md) article for more information.
+
+## Data source privacy and authentication
+
+> [!NOTE]
+> Be aware of your organization's policies for data privacy and access. See [Power BI Desktop privacy levels](/power-bi/desktop-privacy-levels) for more information.
+
+You may get a warning message when you attempt to run the query since it utilizes an external data source.
+
+![An image showing a warning created by Power BI](../media/tutorials/blocked-function.png)
+
+To fix this, select **File**, and **Options and settings**. Then select **Options**. Below **Current File**, select **Privacy**, and **Ignore the Privacy Levels and potentially improve performance**.
+
+Additionally, you may get a message asking you to specify how you want to connect to the API.
+
+![An image showing a request to specify access credentials](../media/tutorials/edit-credentials-message.png)
+
+To fix this, select **Edit Credentials** in the message. After the dialog box appears, select **Anonymous** to connect to the API anonymously. Then select **Connect**.
+
+Afterwards, select **Close & Apply** in the **Home** ribbon to apply the changes.
+
+## Visualize the Anomaly Detector API response
+
+In the main Power BI screen, begin using the queries created above to visualize the data. First select **Line Chart** in **Visualizations**. Then add the timestamp from the invoked function to the line chart's **Axis**. Right-click on it, and select **Timestamp**.
+
+![Right-clicking the Timestamp value](../media/tutorials/timestamp-right-click.png)
+
+Add the following fields from the **Invoked Function** to the chart's **Values** field. Use the below screenshot to help build your chart.
+
+* Value
+* UpperMargins
+* LowerMargins
+* ExpectedValues
+
+![An image of the chart settings](../media/tutorials/chart-settings.png)
+
+After adding the fields, select the chart and resize it to show all of the data points. Your chart will look similar to the following screenshot:
+
+![An image of the chart visualization](../media/tutorials/chart-visualization.png)
+
+### Display anomaly data points
+
+On the right side of the Power BI window, below the **FIELDS** pane, right-click on **Value** under the **Invoked Function query**, and select **New quick measure**.
+
+![An image of the new quick measure screen](../media/tutorials/new-quick-measure.png)
+
+On the screen that appears, select **Filtered value** as the calculation. Set **Base value** to `Sum of Value`. Then drag `IsAnomaly` from the **Invoked Function** fields to **Filter**. Select `True` from the **Filter** drop-down menu.
+
+![A second image of the new quick measure screen](../media/tutorials/new-quick-measure-2.png)
+
+After selecting **OK**, you will have a `Value for True` field at the bottom of the list of your fields. Right-click it and rename it to **Anomaly**. Add it to the chart's **Values**. Then select the **Format** tool, and set the X-axis type to **Categorical**.
+
+![An image of the format x axis](../media/tutorials/format-x-axis.png)
+
+Apply colors to your chart by selecting the **Format** tool and **Data colors**. Your chart should look something like the following:
+
+![An image of the final chart](../media/tutorials/final-chart.png)
ai-services Multivariate Anomaly Detection Synapse https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/Anomaly-Detector/tutorials/multivariate-anomaly-detection-synapse.md
+
+ Title: "Tutorial: Use Multivariate Anomaly Detector in Azure Synapse Analytics"
+
+description: Learn how to use the Multivariate Anomaly Detector with Azure Synapse Analytics.
++++++ Last updated : 08/03/2022+++
+# Tutorial: Use Multivariate Anomaly Detector in Azure Synapse Analytics
+
+Use this tutorial to detect anomalies among multiple variables in very large datasets and databases with Azure Synapse Analytics. This solution is well suited to scenarios like equipment predictive maintenance. The underlying power comes from the integration with [SynapseML](https://microsoft.github.io/SynapseML/), an open-source library that aims to simplify the creation of massively scalable machine learning pipelines. It can be installed and used on any Spark 3 infrastructure, including your **local machine**, **Databricks**, **Synapse Analytics**, and others.
+
+For more information, see [SynapseML estimator for Multivariate Anomaly Detector](https://microsoft.github.io/SynapseML/docs/documentation/estimators/estimators_cognitive/#fitmultivariateanomaly).
+
+In this tutorial, you'll learn how to:
+
+> [!div class="checklist"]
+> * Use Azure Synapse Analytics to detect anomalies among multiple variables.
+> * Train a multivariate anomaly detection model and run inference in separate notebooks in Synapse Analytics.
+> * Get anomaly detection results and root cause analysis for each anomaly.
+
+## Prerequisites
+
+In this section, you'll create the following resources in the Azure portal:
+
+* An **Anomaly Detector** resource to get access to the capability of Multivariate Anomaly Detector.
+* An **Azure Synapse Analytics** resource to use the Synapse Studio.
+* A **Storage account** to upload your data for model training and anomaly detection.
+* A **Key Vault** resource to hold the key of Anomaly Detector and the connection string of the Storage Account.
+
+### Create Anomaly Detector and Azure Synapse Analytics resources
+
+* [Create a resource for Azure Synapse Analytics](https://portal.azure.com/#create/Microsoft.Synapse) in the Azure portal and fill in all the required items.
+* [Create an Anomaly Detector](https://portal.azure.com/#create/Microsoft.CognitiveServicesAnomalyDetector) resource in the Azure portal.
+* Sign in to [Azure Synapse Analytics](https://web.azuresynapse.net/) using your subscription and Workspace name.
+
+ ![A screenshot of the Synapse Analytics landing page.](../media/multivariate-anomaly-detector-synapse/synapse-workspace-welcome-page.png)
+
+### Create a storage account resource
+
+* [Create a storage account resource](https://portal.azure.com/#create/Microsoft.StorageAccount) in the Azure portal. After your storage account is built, **create a container** to store intermediate data, since SynapseML will transform your original data to a schema that Multivariate Anomaly Detector supports. (Refer to Multivariate Anomaly Detector [input schema](../how-to/prepare-data.md#input-data-schema))
+
+ > [!NOTE]
+ > For the purposes of this example only, we're setting the security on the container to allow anonymous read access for containers and blobs, since it will only contain our example .csv data. For anything other than demo purposes, this is **not recommended**.
+
+ ![A screenshot of the creating a container in a storage account.](../media/multivariate-anomaly-detector-synapse/create-a-container.png)
+
+### Create a Key Vault to hold Anomaly Detector Key and storage account connection string
+
+* Create a key vault and configure secrets and access
+ 1. Create a [key vault](https://portal.azure.com/#create/Microsoft.KeyVault) in the Azure portal.
+ 2. Go to Key Vault > Access policies, and grant the [Azure Synapse workspace](../../../data-factory/data-factory-service-identity.md?context=%2fazure%2fsynapse-analytics%2fcontext%2fcontext&tabs=synapse-analytics) permission to read secrets from Azure Key Vault.
+
+ ![A screenshot of granting permission to Synapse.](../media/multivariate-anomaly-detector-synapse/grant-synapse-permission.png)
+
+* Create a secret in Key Vault to hold the Anomaly Detector key
+ 1. Go to your Anomaly Detector resource, **Anomaly Detector** > **Keys and Endpoint**. Then copy either of the two keys to the clipboard.
+ 2. Go to **Key Vault** > **Secret** to create a new secret. Specify the name of the secret, and then paste the key from the previous step into the **Value** field. Finally, select **Create**.
+
+ ![A screenshot of the creating a secret.](../media/multivariate-anomaly-detector-synapse/create-a-secret.png)
+
+* Create a secret in Key Vault to hold Connection String of Storage account
+ 1. Go to your Storage account resource, select **Access keys** to copy one of your Connection strings.
+
+ ![A screenshot of copying connection string.](../media/multivariate-anomaly-detector-synapse/copy-connection-string.png)
+
+ 2. Then go to **Key Vault** > **Secret** to create a new secret. Specify the name of the secret (like *myconnectionstring*), and then paste the Connection string from the previous step into the **Value** field. Finally, select **Create**.
+
+## Using a notebook to conduct Multivariate Anomaly Detection in Synapse Analytics
+
+### Create a notebook and a Spark pool
+
+1. Sign in to [Azure Synapse Analytics](https://web.azuresynapse.net/) and create a new notebook for coding.
+
+ ![A screenshot of creating notebook in Synapse.](../media/multivariate-anomaly-detector-synapse/create-a-notebook.png)
+
+2. Select **Manage pools** on the notebook page to create a new Apache Spark pool if you don't have one.
+
+ ![A screenshot of creating spark pool.](../media/multivariate-anomaly-detector-synapse/create-spark-pool.png)
+
+### Writing code in notebook
+
+1. Install the latest version of SynapseML with the Anomaly Detection Spark models. You can also install SynapseML from Spark Packages, or in Databricks, Docker, and so on. Please refer to the [SynapseML homepage](https://microsoft.github.io/SynapseML/).
+
+ If you're using **Spark 3.1**, please use the following code:
+
+ ```python
+ %%configure -f
+ {
+ "name": "synapseml",
+ "conf": {
+ "spark.jars.packages": "com.microsoft.azure:synapseml_2.12:0.9.5-13-d1b51517-SNAPSHOT",
+ "spark.jars.repositories": "https://mmlspark.azureedge.net/maven",
+ "spark.jars.excludes": "org.scala-lang:scala-reflect,org.apache.spark:spark-tags_2.12,org.scalactic:scalactic_2.12,org.scalatest:scalatest_2.12",
+ "spark.yarn.user.classpath.first": "true"
+ }
+ }
+ ```
+
+ If you're using **Spark 3.2**, please use the following code:
+
+ ```python
+ %%configure -f
+ {
+ "name": "synapseml",
+ "conf": {
+ "spark.jars.packages": "com.microsoft.azure:synapseml_2.12:0.9.5",
+ "spark.jars.repositories": "https://mmlspark.azureedge.net/maven",
+ "spark.jars.excludes": "org.scala-lang:scala-reflect,org.apache.spark:spark-tags_2.12,org.scalactic:scalactic_2.12,org.scalatest:scalatest_2.12,io.netty:netty-tcnative-boringssl-static",
+ "spark.yarn.user.classpath.first": "true"
+ }
+ }
+ ```
+
+2. Import the necessary modules and libraries.
+
+ ```python
+ from synapse.ml.cognitive import *
+ from notebookutils import mssparkutils
+ import numpy as np
+ import pandas as pd
+ import pyspark
+ from pyspark.sql.functions import col
+ from pyspark.sql.functions import lit
+ from pyspark.sql.types import DoubleType
+ import synapse.ml
+ ```
+
+3. Load your data. Compose your data in the following format, and upload it to cloud storage that Spark supports, such as an Azure Storage Account. The timestamp column should be in `ISO8601` format, and the feature columns should be of `string` type. **Download sample data [here](https://sparkdemostorage.blob.core.windows.net/mvadcsvdata/spark-demo-data.csv)**.
+
+ ```python
+ df = spark.read.format("csv").option("header", True).load("wasbs://[container_name]@[storage_account_name].blob.core.windows.net/[csv_file_name].csv")
+
+ df = df.withColumn("sensor_1", col("sensor_1").cast(DoubleType())) \
+ .withColumn("sensor_2", col("sensor_2").cast(DoubleType())) \
+ .withColumn("sensor_3", col("sensor_3").cast(DoubleType()))
+
+ df.show(10)
+ ```
+
+ ![A screenshot of raw data.](../media/multivariate-anomaly-detector-synapse/raw-data.png)
+
+4. Train a multivariate anomaly detection model.
+
+ ![A screenshot of training parameter.](../media/multivariate-anomaly-detector-synapse/training-parameter.png)
+
+ ```python
+ #Input your key vault name and anomaly key name in key vault.
+ anomalyKey = mssparkutils.credentials.getSecret("[key_vault_name]","[anomaly_key_secret_name]")
+ #Input your key vault name and connection string name in key vault.
+ connectionString = mssparkutils.credentials.getSecret("[key_vault_name]", "[connection_string_secret_name]")
+
+ #Specify information about your data.
+ startTime = "2021-01-01T00:00:00Z"
+ endTime = "2021-01-02T09:18:00Z"
+ timestampColumn = "timestamp"
+ inputColumns = ["sensor_1", "sensor_2", "sensor_3"]
+ #Specify the container you created in your storage account. You can also use a new name here, and Synapse will create that container for you automatically.
+ containerName = "[container_name]"
+ #Set a folder name in Storage account to store the intermediate data.
+ intermediateSaveDir = "intermediateData"
+
+ simpleMultiAnomalyEstimator = (FitMultivariateAnomaly()
+ .setSubscriptionKey(anomalyKey)
+ #In .setLocation, specify the region of your Anomaly Detector resource in lowercase, for example: eastus.
+ .setLocation("[anomaly_detector_region]")
+ .setStartTime(startTime)
+ .setEndTime(endTime)
+ .setContainerName(containerName)
+ .setIntermediateSaveDir(intermediateSaveDir)
+ .setTimestampCol(timestampColumn)
+ .setInputCols(inputColumns)
+ .setSlidingWindow(200)
+ .setConnectionString(connectionString))
+ ```
+
+ Trigger the training process with the following code.
+
+ ```python
+ model = simpleMultiAnomalyEstimator.fit(df)
+ type(model)
+ ```
+
+5. Trigger the inference process.
+
+ ```python
+ startInferenceTime = "2021-01-02T09:19:00Z"
+ endInferenceTime = "2021-01-03T01:59:00Z"
+ result = (model
+ .setStartTime(startInferenceTime)
+ .setEndTime(endInferenceTime)
+ .setOutputCol("results")
+ .setErrorCol("errors")
+ .setTimestampCol(timestampColumn)
+ .setInputCols(inputColumns)
+ .transform(df))
+ ```
+
+6. Get inference results.
+
+ ```python
+ rdf = (result.select("timestamp",*inputColumns, "results.contributors", "results.isAnomaly", "results.severity").orderBy('timestamp', ascending=True).filter(col('timestamp') >= lit(startInferenceTime)).toPandas())
+
+ def parse(x):
+ if type(x) is list:
+ return dict([item[::-1] for item in x])
+ else:
+ return {'series_0': 0, 'series_1': 0, 'series_2': 0}
+
+ rdf['contributors'] = rdf['contributors'].apply(parse)
+ rdf = pd.concat([rdf.drop(['contributors'], axis=1), pd.json_normalize(rdf['contributors'])], axis=1)
+ rdf
+ ```
+
+ The inference results will look as follows. The `severity` is a number between 0 and 1 that indicates how severe an anomaly is. The last three columns indicate the `contribution score` of each sensor; the higher the number, the more anomalous that sensor is. A small sketch that filters these results by severity follows this list.
+ ![A screenshot of inference result.](../media/multivariate-anomaly-detector-synapse/inference-result.png)
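+
+The following is a minimal sketch, assuming the `rdf` pandas DataFrame produced in the previous step (with the `isAnomaly` and `severity` columns and the expanded contribution-score columns `series_0`, `series_1`, `series_2`). It keeps only the anomalous timestamps above a chosen severity threshold and reports the sensor that contributed most to each one; the 0.3 threshold is an arbitrary example value:
+
+```python
+# Assumes rdf from the previous step.
+contribution_cols = ["series_0", "series_1", "series_2"]
+
+# Keep only anomalies above an example severity threshold of 0.3.
+anomalies = rdf[rdf["isAnomaly"].astype(bool) & (rdf["severity"] >= 0.3)].copy()
+
+# For each anomalous timestamp, find the variable with the highest contribution score.
+anomalies["top_contributor"] = anomalies[contribution_cols].idxmax(axis=1)
+
+print(anomalies[["timestamp", "severity", "top_contributor"]])
+```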
+
+## Clean up intermediate data (optional)
+
+By default, the anomaly detector automatically uploads data to a storage account so that the service can process the data. To clean up the intermediate data, you can run the following code.
+
+```python
+simpleMultiAnomalyEstimator.cleanUpIntermediateData()
+model.cleanUpIntermediateData()
+```
+
+## Use trained model in another notebook with model ID (optional)
+
+If you need to run the training code and inference code in separate notebooks in Synapse, you can first get the model ID in the training notebook and then use that ID to load the model in another notebook by creating a new object.
+
+1. Get the model ID in the training notebook.
+
+ ```python
+ model.getModelId()
+ ```
+
+2. Load the model in the inference notebook.
+
+ ```python
+ retrievedModel = (DetectMultivariateAnomaly()
+ .setSubscriptionKey(anomalyKey)
+ .setLocation("eastus")
+ .setOutputCol("result")
+ .setStartTime(startTime)
+ .setEndTime(endTime)
+ .setContainerName(containerName)
+ .setIntermediateSaveDir(intermediateSaveDir)
+ .setTimestampCol(timestampColumn)
+ .setInputCols(inputColumns)
+ .setConnectionString(connectionString)
+ .setModelId('5bXXXXXX-XXXX-XXXX-XXXX-XXXXXXXXXXe9'))
+ ```
+
+## Learn more
+
+### About Anomaly Detector
+
+* Learn about [what is Multivariate Anomaly Detector](../overview.md).
+* SynapseML documentation with [Multivariate Anomaly Detector feature](https://microsoft.github.io/SynapseML/docs/documentation/estimators/estimators_cognitive/#fitmultivariateanomaly).
+* Recipe: [Azure AI services - Multivariate Anomaly Detector](https://microsoft.github.io/SynapseML/docs/features/cognitive_services/CognitiveServices%20-%20Multivariate%20Anomaly%20Detection/).
+* Need support? [Join the Anomaly Detector Community](https://forms.office.com/pages/responsepage.aspx?id=v4j5cvGGr0GRqy180BHbR2Ci-wb6-iNDoBoNxrnEk9VURjNXUU1VREpOT0U1UEdURkc0OVRLSkZBNC4u).
+
+### About Synapse
+
+* Quick start: [Configure prerequisites for using Azure AI services in Azure Synapse Analytics](../../../synapse-analytics/machine-learning/tutorial-configure-cognitive-services-synapse.md#create-a-key-vault-and-configure-secrets-and-access).
+* Visit the new [SynapseML website](https://microsoft.github.io/SynapseML/) for the latest docs, demos, and examples.
+* Learn more about [Synapse Analytics](../../../synapse-analytics/index.yml).
+* Read about the [SynapseML v0.9.5 release](https://github.com/microsoft/SynapseML/releases/tag/v0.9.5) on GitHub.
ai-services Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/Anomaly-Detector/whats-new.md
+
+ Title: What's New - Anomaly Detector
+description: This article is regularly updated with news about the Azure AI Anomaly Detector.
++++++ Last updated : 12/15/2022++
+# What's new in Anomaly Detector
+
+Learn what's new in the service. These items include release notes, videos, blog posts, papers, and other types of information. Bookmark this page to keep up to date with the service.
+
+We have also added links to some user-generated content, marked with the **[UGC]** tag. Some of these items are hosted on websites external to Microsoft, and Microsoft isn't responsible for the content there. Use discretion when you refer to these resources. Contact AnomalyDetector@microsoft.com or raise an issue on GitHub if you'd like us to remove the content.
+
+## Release notes
+
+### Jan 2023
+* Multivariate Anomaly Detection will begin charging as of January 10th, 2023. For pricing details, see the [pricing page](https://azure.microsoft.com/pricing/details/cognitive-services/anomaly-detector/).
+
+### Dec 2022
+* The following SDKs for Multivariate Anomaly Detection are updated to match with the generally available REST API.
+
+ |SDK Package |Sample Code |
+ |||
+ | [Python](https://pypi.org/project/azure-ai-anomalydetector/3.0.0b6/)|[sample_multivariate_detect.py](https://github.com/Azure/azure-sdk-for-python/blob/main/sdk/anomalydetector/azure-ai-anomalydetector/samples/sample_multivariate_detect.py)|
+ | [.NET](https://www.nuget.org/packages/Azure.AI.AnomalyDetector/3.0.0-preview.6) | [Sample4_MultivariateDetect.cs](https://github.com/Azure/azure-sdk-for-net/blob/40a7d122ac99a3a8a7c62afa16898b7acf82c03d/sdk/anomalydetector/Azure.AI.AnomalyDetector/tests/samples/Sample4_MultivariateDetect.cs)|
+ | [JAVA](https://search.maven.org/artifact/com.azure/azure-ai-anomalydetector/3.0.0-beta.5/jar) | [MultivariateSample.java](https://github.com/Azure/azure-sdk-for-java/blob/e845677d919d47a2c4837153306b37e5f4ecd795/sdk/anomalydetector/azure-ai-anomalydetector/src/samples/java/com/azure/ai/anomalydetector/MultivariateSample.java)|
+ | [JS/TS](https://www.npmjs.com/package/@azure-rest/ai-anomaly-detector/v/1.0.0-beta.1) |[sample_multivariate_detection.ts](https://github.com/Azure/azure-sdk-for-js/blob/main/sdk/anomalydetector/ai-anomaly-detector-rest/samples-dev/sample_multivariate_detection.ts)|
+
+* Check out this AI Show video to learn more about the GA version of Multivariate Anomaly Detection: [AI Show | Multivariate Anomaly Detection is Generally Available](/shows/ai-show/ep-70-the-latest-from-azure-multivariate-anomaly-detection).
+
+### Nov 2022
+
+* Multivariate Anomaly Detection is now a generally available feature in Anomaly Detector service, with a better user experience and better model performance. Learn more about [how to get started using the latest release of Multivariate Anomaly Detection](how-to/create-resource.md).
+
+### June 2022
+
+* New blog released: [Four sets of best practices to use Multivariate Anomaly Detector when monitoring your equipment](https://techcommunity.microsoft.com/t5/ai-cognitive-services-blog/4-sets-of-best-practices-to-use-multivariate-anomaly-detector/ba-p/3490848#footerContent).
+
+### May 2022
+
+* New blog released: [Detect anomalies in equipment with Multivariate Anomaly Detector in Azure Databricks](https://techcommunity.microsoft.com/t5/ai-cognitive-services-blog/detect-anomalies-in-equipment-with-anomaly-detector-in-azure/ba-p/3390688).
+
+### April 2022
+* Univariate Anomaly Detector is now integrated in Azure Data Explorer(ADX). Check out this [announcement blog post](https://techcommunity.microsoft.com/t5/ai-cognitive-services-blog/announcing-univariate-anomaly-detector-in-azure-data-explorer/ba-p/3285400) to learn more!
+
+### March 2022
+* Anomaly Detector (univariate) available in Sweden Central.
+
+### February 2022
+* **Multivariate Anomaly Detector API has been integrated with Synapse.** Check out this [blog](https://techcommunity.microsoft.com/t5/ai-cognitive-services-blog/announcing-multivariate-anomaly-detector-in-synapseml/ba-p/3122486) to learn more!
+
+### January 2022
+* **Multivariate Anomaly Detector API v1.1-preview.1 public preview on 1/18.** In this version, Multivariate Anomaly Detector supports synchronous API for inference and added new fields in API output interpreting the correlation change of variables.
+* Univariate Anomaly Detector added new fields in API output.
+
+### November 2021
+* Multivariate Anomaly Detector available in six more regions: UAE North, France Central, North Central US, Switzerland North, South Africa North, Jio India West. Now in total 26 regions are supported.
+
+### September 2021
+* Anomaly Detector (univariate) available in Jio India West.
+* Multivariate anomaly detector APIs deployed in five more regions: East Asia, West US, Central India, Korea Central, Germany West Central.
+
+### August 2021
+
+* Multivariate anomaly detector APIs deployed in five more regions: West US 3, Japan East, Brazil South, Central US, Norway East. Now in total 15 regions are supported.
+
+### July 2021
+
+* Multivariate anomaly detector APIs deployed in four more regions: Australia East, Canada Central, North Europe, and Southeast Asia. Now in total 10 regions are supported.
+* Anomaly Detector (univariate) available in West US 3 and Norway East.
++
+### June 2021
+
+* Multivariate anomaly detector APIs available in more regions: West US 2, West Europe, East US 2, South Central US, East US, and UK South.
+* Anomaly Detector (univariate) available in Azure cloud for US Government.
+* Anomaly Detector (univariate) available in Azure China (China North 2).
+
+### April 2021
+
+* [IoT Edge module](https://azuremarketplace.microsoft.com/marketplace/apps/azure-cognitive-service.edge-anomaly-detector) (univariate) published.
+* Anomaly Detector (univariate) available in Azure China (China East 2).
+* Multivariate anomaly detector APIs preview in selected regions (West US 2, West Europe).
+
+### September 2020
+
+* Anomaly Detector (univariate) generally available.
+
+### March 2019
+
+* Anomaly Detector announced preview with univariate anomaly detection support.
+
+## Technical articles
+
+* March 12, 2021 [Introducing Multivariate Anomaly Detection](https://techcommunity.microsoft.com/t5/azure-ai/introducing-multivariate-anomaly-detection/ba-p/2260679) - Technical blog on the new multivariate APIs
+* September 2020 [Multivariate Time-series Anomaly Detection via Graph Attention Network](https://arxiv.org/abs/2009.02040) - Paper on multivariate anomaly detection accepted by ICDM 2020
+* November 5, 2019 [Overview of SR-CNN algorithm in Azure AI Anomaly Detector](https://techcommunity.microsoft.com/t5/ai-customer-engineering-team/overview-of-sr-cnn-algorithm-in-azure-anomaly-detector/ba-p/982798) - Technical blog on SR-CNN
+* June 10, 2019 [Time-Series Anomaly Detection Service at Microsoft](https://arxiv.org/abs/1906.03821) - Paper on SR-CNN accepted by KDD 2019
+* April 20, 2019 [Introducing Azure AI Anomaly Detector API](https://techcommunity.microsoft.com/t5/ai-customer-engineering-team/introducing-azure-anomaly-detector-api/ba-p/490162) - Announcement blog
+
+## Videos
+
+* Nov 12, 2022 AI Show: [Multivariate Anomaly Detection is GA](/shows/ai-show/ep-70-the-latest-from-azure-multivariate-anomaly-detection) (Seth with Louise Han).
+* May 7, 2021 [New to Anomaly Detector: Multivariate Capabilities](/shows/AI-Show/New-to-Anomaly-Detector-Multivariate-Capabilities) - AI Show on the new multivariate anomaly detector APIs with Tony Xing and Seth Juarez
+* April 20, 2021 AI Show Live | Episode 11| New to Anomaly Detector: Multivariate Capabilities - AI Show live recording with Tony Xing and Seth Juarez
+* May 18, 2020 [Inside Anomaly Detector](/shows/AI-Show/Inside-Anomaly-Detector) - AI Show with Qun Ying and Seth Juarez
+* September 19, 2019 **[UGC]** [Detect Anomalies in Your Data with the Anomaly Detector](https://www.youtube.com/watch?v=gfb63wvjnYQ) - Video by Jon Wood
+* September 3, 2019 [Anomaly detection on streaming data using Azure Databricks](/shows/AI-Show/Anomaly-detection-on-streaming-data-using-Azure-Databricks) - AI Show with Qun Ying
+* August 27, 2019 [Anomaly Detector v1.0 Best Practices](/shows/AI-Show/Anomaly-Detector-v10-Best-Practices) - AI Show on univariate anomaly detection best practices with Qun Ying
+* August 20, 2019 [Bring Anomaly Detector on-premises with containers support](/shows/AI-Show/Bring-Anomaly-Detector-on-premise-with-containers-support) - AI Show with Qun Ying and Seth Juarez
+* August 13, 2019 [Introducing Azure AI Anomaly Detector](/shows/AI-Show/Introducing-Azure-Anomaly-Detector?WT.mc_id=ai-c9-niner) - AI Show with Qun Ying and Seth Juarez
++
+## Service updates
+
+[Azure update announcements for Azure AI services](https://azure.microsoft.com/updates/?product=cognitive-services)
ai-services Cognitive Services Encryption Keys Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/Encryption/cognitive-services-encryption-keys-portal.md
+
+ Title: Customer-Managed Keys for Azure AI services
+
+description: Learn how to use the Azure portal to configure customer-managed keys with Azure Key Vault. Customer-managed keys enable you to create, rotate, disable, and revoke access controls.
++++ Last updated : 04/07/2021+++
+# Configure customer-managed keys with Azure Key Vault for Azure AI services
+
+The process to enable Customer-Managed Keys with Azure Key Vault for Azure AI services varies by product. Use these links for service-specific instructions:
+
+## Vision
+
+* [Custom Vision encryption of data at rest](../custom-vision-service/encrypt-data-at-rest.md)
+* [Face Services encryption of data at rest](../computer-vision/identity-encrypt-data-at-rest.md)
+* [Document Intelligence encryption of data at rest](../../ai-services/document-intelligence/encrypt-data-at-rest.md)
+
+## Language
+
+* [Language Understanding service encryption of data at rest](../LUIS/encrypt-data-at-rest.md)
+* [QnA Maker encryption of data at rest](../QnAMaker/encrypt-data-at-rest.md)
+* [Translator encryption of data at rest](../translator/encrypt-data-at-rest.md)
+* [Language service encryption of data at rest](../language-service/concepts/encryption-data-at-rest.md)
+
+## Speech
+
+* [Speech encryption of data at rest](../speech-service/speech-encryption-of-data-at-rest.md)
+
+## Decision
+
+* [Content Moderator encryption of data at rest](../Content-Moderator/encrypt-data-at-rest.md)
+* [Personalizer encryption of data at rest](../personalizer/encrypt-data-at-rest.md)
+
+## Azure OpenAI
+
+* [Azure OpenAI encryption of data at rest](../openai/encrypt-data-at-rest.md)
++
+## Next steps
+
+* [What is Azure Key Vault](../../key-vault/general/overview.md)?
+* [Azure AI services Customer-Managed Key Request Form](https://aka.ms/cogsvc-cmk)
ai-services App Schema Definition https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/LUIS/app-schema-definition.md
+
+ Title: App schema definition
+description: The LUIS app is represented in either the `.json` or `.lu` format and includes all intents, entities, example utterances, features, and settings.
++++++ Last updated : 08/22/2020++
+# App schema definition
+++
+The LUIS app is represented in either the `.json` or `.lu` format and includes all intents, entities, example utterances, features, and settings.
+
+## Format
+
+When you import and export the app, choose either `.json` or `.lu`.
+
+|Format|Information|
+|--|--|
+|`.json`| Standard programming format|
+|`.lu`|Supported by the Bot Framework's [Bot Builder tools](https://github.com/microsoft/botbuilder-tools/blob/master/packages/Ludown/docs/lu-file-format.md).|
+
+## Version 7.x
+
+* Moving to version 7.x, the entities are represented as nested machine-learning entities.
+* Support for authoring nested machine-learning entities with `enableNestedChildren` property on the following authoring APIs:
+ * [Add label](https://westus.dev.cognitive.microsoft.com/docs/services/luis-programmatic-apis-v3-0-preview/operations/5890b47c39e2bb052c5b9c08)
+ * [Add batch label](https://westus.dev.cognitive.microsoft.com/docs/services/luis-programmatic-apis-v3-0-preview/operations/5890b47c39e2bb052c5b9c09)
+ * [Review labels](https://westus.dev.cognitive.microsoft.com/docs/services/luis-programmatic-apis-v3-0-preview/operations/5890b47c39e2bb052c5b9c0a)
+ * [Suggest endpoint queries for entities](https://westus.dev.cognitive.microsoft.com/docs/services/luis-programmatic-apis-v3-0-preview/operations/5890b47c39e2bb052c5b9c2e)
+ * [Suggest endpoint queries for intents](https://westus.dev.cognitive.microsoft.com/docs/services/luis-programmatic-apis-v3-0-preview/operations/5890b47c39e2bb052c5b9c2d)
+
+```json
+{
+ "luis_schema_version": "7.0.0",
+ "intents": [
+ {
+ "name": "None",
+ "features": []
+ }
+ ],
+ "entities": [],
+ "hierarchicals": [],
+ "composites": [],
+ "closedLists": [],
+ "prebuiltEntities": [],
+ "utterances": [],
+ "versionId": "0.1",
+ "name": "example-app",
+ "desc": "",
+ "culture": "en-us",
+ "tokenizerVersion": "1.0.0",
+ "patternAnyEntities": [],
+ "regex_entities": [],
+ "phraselists": [
+ ],
+ "regex_features": [],
+ "patterns": [],
+ "settings": []
+}
+```
+
+| element | Comment |
+|--|--|
+| "hierarchicals": [], | Deprecated, use [machine-learning entities](concepts/entities.md). |
+| "composites": [], | Deprecated, use [machine-learning entities](concepts/entities.md). [Composite entity](./reference-entity-machine-learned-entity.md) reference. |
+| "closedLists": [], | [List entities](reference-entity-list.md) reference, primarily used as features to entities. |
+| "versionId": "0.1", | Version of a LUIS app.|
+| "name": "example-app", | Name of the LUIS app. |
+| "desc": "", | Optional description of the LUIS app. |
+| "culture": "en-us", | [Language](luis-language-support.md) of the app, impacts underlying features such as prebuilt entities, machine-learning, and tokenizer. |
+| "tokenizerVersion": "1.0.0", | [Tokenizer](luis-language-support.md#tokenization) |
+| "patternAnyEntities": [], | [Pattern.any entity](reference-entity-pattern-any.md) |
+| "regex_entities": [], | [Regular expression entity](reference-entity-regular-expression.md) |
+| "phraselists": [], | [Phrase lists (feature)](concepts/patterns-features.md#create-a-phrase-list-for-a-concept) |
+| "regex_features": [], | Deprecated, use [machine-learning entities](concepts/entities.md). |
+| "patterns": [], | [Patterns improve prediction accuracy](concepts/patterns-features.md) with [pattern syntax](reference-pattern-syntax.md) |
+| "settings": [] | [App settings](luis-reference-application-settings.md)|
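+
+As a quick sanity check, here's a minimal Python sketch that loads an exported app and lists its schema version and intents. It assumes you exported the app in the 7.x `.json` format above to a file named `example-app.json` (a hypothetical file name):
+
+```python
+import json
+
+# Load an app exported from LUIS in the .json format shown above.
+with open("example-app.json", encoding="utf-8") as f:
+    app = json.load(f)
+
+print(app["luis_schema_version"])                     # for example, "7.0.0"
+print([intent["name"] for intent in app["intents"]])  # intent names
+print(len(app["utterances"]), "example utterances")
+```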
+
+## Version 6.x
+
+* Moving to version 6.x, use the new [machine-learning entity](reference-entity-machine-learned-entity.md) to represent your entities.
+
+```json
+{
+ "luis_schema_version": "6.0.0",
+ "intents": [
+ {
+ "name": "None",
+ "features": []
+ }
+ ],
+ "entities": [],
+ "hierarchicals": [],
+ "composites": [],
+ "closedLists": [],
+ "prebuiltEntities": [],
+ "utterances": [],
+ "versionId": "0.1",
+ "name": "example-app",
+ "desc": "",
+ "culture": "en-us",
+ "tokenizerVersion": "1.0.0",
+ "patternAnyEntities": [],
+ "regex_entities": [],
+ "phraselists": [],
+ "regex_features": [],
+ "patterns": [],
+ "settings": []
+}
+```
+
+## Version 4.x
+
+```json
+{
+ "luis_schema_version": "4.0.0",
+ "versionId": "0.1",
+ "name": "example-app",
+ "desc": "",
+ "culture": "en-us",
+ "tokenizerVersion": "1.0.0",
+ "intents": [
+ {
+ "name": "None"
+ }
+ ],
+ "entities": [],
+ "composites": [],
+ "closedLists": [],
+ "patternAnyEntities": [],
+ "regex_entities": [],
+ "prebuiltEntities": [],
+ "model_features": [],
+ "regex_features": [],
+ "patterns": [],
+ "utterances": [],
+ "settings": []
+}
+```
+
+## Next steps
+
+* Migrate to the [V3 authoring APIs](luis-migration-authoring-entities.md)
ai-services Choose Natural Language Processing Service https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/LUIS/choose-natural-language-processing-service.md
+
+ Title: Use NLP with QnA Maker for chat bots
+description: Azure AI services provides two natural language processing services, Language Understanding and QnA Maker, each with a different purpose. Understand when to use each service and how they complement each other.
++++++ Last updated : 10/20/2020++
+# Use Azure AI services with natural language processing (NLP) to enrich chat bot conversations
++++
+## Next steps
+
+* Learn [enterprise design strategies](how-to/improve-application.md)
ai-services Client Libraries Rest Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/LUIS/client-libraries-rest-api.md
+
+ Title: "Quickstart: Language Understanding (LUIS) SDK client libraries and REST API"
+description: Create and query a LUIS app with the LUIS SDK client libraries and REST API.
+ Last updated : 03/07/2022+++++
+keywords: Azure, artificial intelligence, ai, natural language processing, nlp, LUIS, azure luis, natural language understanding, ai chatbot, chatbot maker, understanding natural language
+ms.devlang: csharp, javascript, python
+
+zone_pivot_groups: programming-languages-set-luis
+
+# Quickstart: Language Understanding (LUIS) client libraries and REST API
++
+In this quickstart, create and query an Azure LUIS artificial intelligence (AI) app with the LUIS SDK client libraries using C#, Python, or JavaScript. You can also use cURL to send requests using the REST API.
+
+Language Understanding (LUIS) enables you to apply natural language processing (NLP) to a user's conversational, natural language text to predict overall meaning, and pull out relevant, detailed information.
+
+* The **authoring** client library and REST API allows you to create, edit, train, and publish your LUIS app.
+* The **prediction runtime** client library and REST API allows you to query the published app.
+++++
+## Clean up resources
+
+You can delete the app from the [LUIS portal](https://www.luis.ai) and delete the Azure resources from the [Azure portal](https://portal.azure.com/).
+
+If you're using the REST API, delete the `ExampleUtterances.JSON` file from the file system when you're done with the quickstart.
+
+## Troubleshooting
+
+* Authenticating to the client library - authentication errors usually indicate that the wrong key and endpoint were used. This quickstart uses the authoring key and endpoint for the prediction runtime as a convenience, but it only works if you haven't already used the monthly quota. If you can't use the authoring key and endpoint, use the prediction runtime key and endpoint when accessing the prediction runtime SDK client library.
+* Creating entities - if you get an error creating the nested machine-learning entity used in this tutorial, make sure you copied the code and didn't alter the code to create a different entity.
+* Creating example utterances - if you get an error creating the labeled example utterance used in this tutorial, make sure you copied the code and didn't alter the code to create a different labeled example.
+* Training - if you get a training error, this usually indicates an empty app (no intents with example utterances), or an app with intents or entities that are malformed.
+* Miscellaneous errors - because the code calls into the client libraries with text and JSON objects, make sure you haven't changed the code.
+
+Other errors - if you get an error not covered in the preceding list, let us know by giving feedback at the bottom of this page. Include the programming language and version of the client libraries you installed.
+
+## Next steps
++
+* [Iterative app development for LUIS](./concepts/application-design.md)
ai-services Application Design https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/LUIS/concepts/application-design.md
+
+ Title: Application Design
+
+description: Application design concepts
++++++++ Last updated : 01/10/2022+++
+# Plan your LUIS app
+++
+A Language Understanding (LUIS) app schema contains [intents](../luis-glossary.md#intent) and [entities](../luis-glossary.md#entity) relevant to your subject [domain](../luis-glossary.md#domain). The intents classify user [utterances](../luis-glossary.md#utterance), and the entities extract data from the user utterances.
+
+A LUIS app learns and performs most efficiently when you iteratively develop it. Here's a typical iteration cycle:
+
+1. Create a new version
+2. Edit the LUIS app schema. This includes:
+ * Intents with example utterances
+ * Entities
+ * Features
+3. Train, test, and publish
+4. Test for active learning by reviewing utterances sent to the prediction endpoint
+5. Gather data from endpoint queries
++
+## Identify your domain
+
+A LUIS app is centered around a subject domain. For example, you may have a travel app that handles booking of tickets, flights, hotels, and rental cars. Another app may provide content related to exercising, tracking fitness efforts and setting goals. Identifying the domain helps you find words or phrases that are relevant to your domain.
+
+> [!TIP]
+> LUIS offers [prebuilt domains](../howto-add-prebuilt-models.md) for many common scenarios. Check to see if you can use a prebuilt domain as a starting point for your app.
+
+## Identify your intents
+
+Think about the [intents](../concepts/intents.md) that are important to your application's task.
+
+Let's take the example of a travel app, with functions to book a flight and check the weather at the user's destination. You can define two intents, BookFlight and GetWeather, for these actions.
+
+In a more complex app with more functions, you likely would have more intents, and you should define them carefully so they aren't too specific. For example, BookFlight and BookHotel may need to be separate intents, but BookInternationalFlight and BookDomesticFlight may be too similar.
+
+> [!NOTE]
+> It is a best practice to use only as many intents as you need to perform the functions of your app. If you define too many intents, it becomes harder for LUIS to classify utterances correctly. If you define too few, they may be so general that they overlap.
+
+If you don't need to identify overall user intention, add all the example user utterances to the `None` intent. If your app grows into needing more intents, you can create them later.
+
+## Create example utterances for each intent
+
+To start, avoid creating too many utterances for each intent. Once you have determined the intents you need for your app, create 15 to 30 example utterances per intent. Each utterance should be different from the previously provided utterances. Include a variety of word counts, word choices, verb tenses, and [punctuation](../luis-reference-application-settings.md#punctuation-normalization).
+
+For more information, see [understanding good utterances for LUIS apps](../concepts/utterances.md).
+
+## Identify your entities
+
+In the example utterances, identify the entities you want extracted. To book a flight, you need information like the destination, date, airline, ticket category, and travel class. Create entities for these data types and then mark the [entities](entities.md) in the example utterances. Entities are important for accomplishing an intent.
+
+When determining which entities to use in your app, remember that there are different types of entities for capturing relationships between object types. See [Entities in LUIS](../concepts/entities.md) for more information about the different types.
+
+> [!TIP]
+> LUIS offers [prebuilt entities](../howto-add-prebuilt-models.md) for common, conversational user scenarios. Consider using prebuilt entities as a starting point for your application development.
+
+## Intents versus entities
+
+An intent is the desired outcome of the _whole_ utterance while entities are pieces of data extracted from the utterance. Usually intents are tied to actions, which the client application should take. Entities are information needed to perform this action. From a programming perspective, an intent would trigger a method call and the entities would be used as parameters to that method call.
+
+This utterance _must_ have an intent and _may_ have entities:
+
+"*Buy an airline ticket from Seattle to Cairo*"
+
+This utterance has a single intention:
+
+* Buying a plane ticket
+
+This utterance may have several entities:
+
+* Locations of Seattle (origin) and Cairo (destination)
+* The quantity of a single ticket
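+
+To make the programming perspective concrete, here's a minimal, hypothetical client-side sketch. The `book_flight` handler, the prediction dictionary, and its field names are illustrative only, not LUIS output format; the point is that the intent selects the method and the entities become its parameters:
+
+```python
+# Hypothetical client-side dispatch: the intent picks the method to call,
+# and the extracted entities are passed as its parameters.
+def book_flight(origin: str, destination: str, quantity: int = 1) -> str:
+    return f"Booking {quantity} ticket(s) from {origin} to {destination}."
+
+handlers = {"BookFlight": book_flight}
+
+# Simplified stand-in for a prediction result for the utterance above.
+prediction = {
+    "topIntent": "BookFlight",
+    "entities": {"origin": "Seattle", "destination": "Cairo", "quantity": 1},
+}
+
+intent = prediction["topIntent"]
+if intent in handlers:
+    print(handlers[intent](**prediction["entities"]))
+else:
+    print("No matching handler; treat as the None intent.")
+```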
+
+## Resolution in utterances with more than one function or intent
+
+In many cases, especially when working with natural conversation, users provide an utterance that can contain more than one function or intent. To address this, a general strategy is to understand that output can be represented by both intents and entities. This representation should be mappable to your client application's actions, and doesn't need to be limited to intents.
+
+**Int-ent-ties** is the concept that actions (usually understood as intents) might also be captured as entities in the app's output, and mapped to specific actions. Negation, for example, commonly relies on intent and entity for full extraction. Consider the following two utterances, which are similar in word choice, but have different results:
+
+* "*Please schedule my flight from Cairo to Seattle*"
+* "*Cancel my flight from Cairo to Seattle*"
+
+Instead of having two separate intents, you should create a single intent with a FlightAction machine learning entity. This machine learning entity should extract the details of the action for both scheduling and canceling requests, and either an origin or destination location.
+
+This FlightAction entity would be structured with the following top-level machine learning entity, and subentities:
+
+* FlightAction
+ * Action
+ * Origin
+ * Destination
+
+To help with extraction, you would add features to the subentities. You would choose features based on the vocabulary you expect to see in user utterances, and the values you want returned in the prediction response.
+
+## Best practices
+
+### Plan Your schema
+
+Before you start building your app's schema, you should identify how and where you plan to use this app. The more thorough and specific your planning, the better your app becomes.
+
+* Research targeted users
+* Define end-to-end personas to represent your app - voice, avatar, issue handling (proactive, reactive)
+* Identify channels of user interactions (such as text or speech), handing off to existing solutions or creating a new solution for this app
+* End-to-end user journey
+ * What do you expect this app to do and not do? What are the priorities of what it should do?
+ * What are the main use cases?
+* Collecting data - [learn about collecting and preparing data](../data-collection.md)
+
+### Don't train and publish with every single example utterance
+
+Add 10 or 15 utterances before training and publishing. That allows you to see the impact on prediction accuracy. Adding a single utterance may not have a visible impact on the score.
+
+### Don't use LUIS as a training platform
+
+LUIS is specific to a language model's domain. It isn't meant to work as a general natural language training platform.
+
+### Build your app iteratively with versions
+
+Each authoring cycle should be contained within a new [version](../concepts/application-design.md), cloned from an existing version.
+
+### Don't publish too quickly
+
+Publishing your app too quickly and without proper planning may lead to several issues such as:
+
+* Your app will not work in your actual scenario at an acceptable level of performance.
+* The schema (intents and entities) might not be appropriate, and if you have developed client app logic following the schema, you may need to redo it. This might cause unexpected delays and extra costs to the project you are working on.
+* Utterances you add to the model might cause biases towards example utterances that are hard to debug and identify. It will also make removing ambiguity difficult after you have committed to a certain schema.
+
+### Do monitor the performance of your app
+
+Monitor the prediction accuracy using a [batch test](../luis-how-to-batch-test.md) set.
+
+Keep a separate set of utterances that aren't used as [example utterances](utterances.md) or endpoint utterances. Keep improving the app for your test set. Adapt the test set to reflect real user utterances. Use this test set to evaluate each iteration or version of the app.
+
+### Don't create phrase lists with all possible values
+
+Provide a few examples in the [phrase lists](patterns-features.md#create-a-phrase-list-for-a-concept) but not every word or phrase. LUIS generalizes and takes context into account.
+
+## Next steps
+[Intents](intents.md)
ai-services Entities https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/LUIS/concepts/entities.md
+
+ Title: Entities
+
+description: Entities concepts
++++++++ Last updated : 07/19/2022+++
+# Entity types
+++
+An entity is an item or an element that is relevant to the user's intent. Entities define data that can be extracted from the utterance and that is essential to completing a user's required action. For example:
++
+| Utterance | Intent predicted | Entities extracted | Explanation |
+|--|--|--|--|
+| Hello, how are you? | Greeting | - | Nothing to extract. |
+| I want to order a small pizza | orderPizza | 'small' | 'Size' entity is extracted as 'small'. |
+| Turn off bedroom light | turnOff | 'bedroom' | 'Room' entity is extracted as 'bedroom'. |
+| Check balance in my savings account ending in 4406 | checkBalance | 'savings', '4406' | 'accountType' entity is extracted as 'savings' and 'accountNumber' entity is extracted as '4406'. |
+| Buy 3 tickets to New York | buyTickets | '3', 'New York' | 'ticketsCount' entity is extracted as '3' and 'Destination' entity is extracted as 'New York'. |
+
+Entities are optional but recommended. You don't need to create entities for every concept in your app, only when:
+
+* The client application needs the data, or
+* The entity acts as a hint or signal to another entity or intent. To learn more about entities as features, see [Entities as features](../concepts/entities.md#entities-as-features).
+
+## Entity types
+
+To create an entity, you have to give it a name and a type. There are several types of entities in LUIS.
+
+## List entity
+
+A list entity represents a fixed, closed set of related words along with their synonyms. You can use list entities to recognize multiple synonyms or variations and extract a normalized output for them. Use the _recommend_ option to see suggestions for new words based on the current list.
+
+A list entity isn't machine-learned, meaning that LUIS doesn't discover more values for list entities. LUIS marks any match to an item in any list as an entity in the response.
+
+Matching a list entity is case sensitive and must be an exact match. Normalized values are also used when matching the list entity. For example:
++
+| Normalized value | Synonyms |
+|--|--|
+| Small | `sm`, `sml`, `tiny`, `smallest` |
+| Medium | `md`, `mdm`, `regular`, `average`, `middle` |
+| Large | `lg`, `lrg`, `big` |
+
+See the [list entities reference article](../reference-entity-list.md) for more information.
+
+## Regex entity
+
+A regular expression entity extracts an entity based on a regular expression pattern you provide. It ignores case and cultural variants. Regular expression entities are best for structured text or a predefined sequence of alphanumeric values that are expected in a certain format. For example:
+
+| Entity | Regular expression | Example |
+|--|--|--|
+| Flight Number | `flight [A-Z]{2} [0-9]{4}` | `flight AS 1234` |
+| Credit Card Number | `[0-9]{16}` | `5478789865437632` |
+
+See the [regex entities reference article](../reference-entity-regular-expression.md) for more information.
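+
+To get a feel for how such a pattern behaves, the following minimal sketch applies the flight-number expression from the table using Python's `re` module. LUIS performs this matching for you at prediction time; the utterance below is only an illustration.
+
+```python
+import re
+
+# The flight-number expression from the table above; LUIS ignores case,
+# so re.IGNORECASE approximates that behavior here.
+flight_number = re.compile(r"flight [A-Z]{2} [0-9]{4}", re.IGNORECASE)
+
+utterance = "please book me on flight as 1234"
+match = flight_number.search(utterance)
+if match:
+    print(match.group())  # flight as 1234
+```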
+
+## Prebuilt entities
+
+LUIS includes a set of prebuilt entities for recognizing common types of information, like dates, times, numbers, measurements, and currency. Prebuilt entity support varies by the culture of your LUIS app. For a full list of the prebuilt entities that LUIS supports, including support by culture, see the [prebuilt entity reference](../luis-reference-prebuilt-entities.md).
+
+When a prebuilt entity is included in your application, its predictions are included in your published application. The behavior of prebuilt entities is pre-trained and can't be modified.
++
+| Prebuilt entity | Example value |
+|--|--|
+| PersonName | James, Bill, Tom |
+| DatetimeV2 | `2019-05-02`, `May 2nd`, `8am on May 2nd 2019` |
+
+See the [prebuilt entities reference article](../luis-reference-prebuilt-entities.md) for more information.
+
+## Pattern.Any entity
+
+A pattern.Any entity is a variable-length placeholder used only in a pattern's template utterance to mark where the entity begins and ends. It follows a specific rule or pattern and best used for sentences with fixed lexical structure. For example:
++
+| Example utterance | Pattern | Entity |
+|--|--|--|
+| Can I have a burger please? | `Can I have a {meal} [please][?]` | burger |
+| Can I have a pizza? | `Can I have a {meal} [please][?]` | pizza |
+| Where can I find The Great Gatsby? | `Where can I find {bookName}?` | The Great Gatsby |
+
+See the [Pattern.Any entities reference article](../reference-entity-pattern-any.md) for more information.
+
+## Machine learned (ML) entity
+
+A machine learned entity uses context to extract entities based on labeled examples. It is the preferred entity for building LUIS applications. It relies on machine-learning algorithms and requires labeling to be tailored to your application successfully. Use an ML entity to identify data that isn't always well formatted but has the same meaning.
++
+| Example utterance | Extracted product entity |
+|--|--|
+| I want to buy a book. | 'book' |
+| Can I get these shoes please? | 'shoes' |
+| Add those shorts to my basket. | 'shorts' |
+
+See [Machine learned entities](../reference-entity-machine-learned-entity.md) for more information.
+
+#### ML Entity with Structure
+
+An ML entity can be composed of smaller sub-entities, each of which can have its own properties. For example, an _Address_ entity could have the following structure:
+
+* Address: 4567 Main Street, NY, 98052, USA
+ * Building Number: 4567
+ * Street Name: Main Street
+ * State: NY
+ * Zip Code: 98052
+ * Country: USA
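+
+When the parent entity and its sub-entities are predicted, the extracted values come back nested in the prediction response. The exact key names depend on how you name the sub-entities in your schema; the fragment below is only a hypothetical sketch of the Address example, shown as a Python dictionary.
+
+```python
+# Hypothetical V3-style prediction fragment for the Address example above.
+# "BuildingNumber", "StreetName", and the other keys are placeholders for
+# whatever names you give the sub-entities in your own app.
+entities = {
+    "Address": [
+        {
+            "BuildingNumber": ["4567"],
+            "StreetName": ["Main Street"],
+            "State": ["NY"],
+            "ZipCode": ["98052"],
+            "Country": ["USA"],
+        }
+    ]
+}
+
+# Read the first extracted address and one of its sub-entities.
+address = entities["Address"][0]
+print(address["StreetName"][0])  # Main Street
+```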
+
+## Building effective ML entities
+
+To build machine learned entities effectively, follow these best practices:
+
+* If you have a machine learned entity with sub-entities, make sure that the different orders and variants of the entity and sub-entities are presented in the labeled utterances. Labeled example utterances should include all valid forms, including cases where entities appear, are absent, or are reordered within the utterance.
+* Avoid overfitting the entities to a fixed set. Overfitting happens when the model doesn't generalize well, and is a common problem in machine-learning models. It means the app wouldn't work adequately on new types of examples. To avoid this, vary the labeled example utterances so the app can generalize beyond the limited examples you provide.
+* Your labeling should be consistent across the intents. This includes even utterances you provide in the _None_ intent that include this entity. Otherwise the model won't be able to determine the sequences effectively.
+
+## Entities as features
+
+Another important function of entities is to use them as features, or distinguishing traits, for other intents or entities so that your system observes and learns through them.
+
+## Entities as features for intents
+
+You can use entities as a signal for an intent. For example, the presence of a certain entity in the utterance can help determine which intent the utterance falls under.
+
+| Example utterance | Entity | Intent |
+|--|--|--|
+| Book me a _flight to New York_. | City | Book Flight |
+| Book me the _main conference room_. | Room | Reserve Room |
+
+## Entities as features for entities
+
+You can also use entities as an indicator of the presence of other entities. A common example of this is using a prebuilt entity as a feature for another ML entity. If you're building a flight booking system and your utterance looks like 'Book me a flight from Cairo to Seattle', you likely will have _Origin City_ and _Destination City_ as ML entities. A good practice would be to use the prebuilt GeographyV2 entity as a feature for both entities.
+
+For more information, see the [GeographyV2 entities reference article](../luis-reference-prebuilt-geographyv2.md).
+
+You can also use entities as required features for other entities. This helps in the resolution of extracted entities. For example, if you're creating a pizza-ordering application and you have a Size ML entity, you can create a SizeList list entity and use it as a required feature for the Size entity. Your application will return the normalized value as the extracted entity from the utterance.
+
+See [features](../concepts/patterns-features.md) for more information, and [prebuilt entities](../luis-reference-prebuilt-entities.md) to learn more about prebuilt entities resolution available in your culture.
+
+## Data from entities
+
+Most chat bots and applications need more than the intent name. This additional, optional data comes from entities discovered in the utterance. Each type of entity returns different information about the match.
+
+A single word or phrase in an utterance can match more than one entity. In that case, each matching entity is returned with its score.
+
+All entities are returned in the entities array of the response from the prediction endpoint.
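+
+As a rough illustration, a client application might read the intent and entities out of a prediction response like this. This is a minimal sketch assuming a V3-style response; the intent and entity names are placeholders for whatever your app defines.
+
+```python
+# Hypothetical V3-style prediction response for "I want to order a small pizza".
+response = {
+    "query": "I want to order a small pizza",
+    "prediction": {
+        "topIntent": "orderPizza",
+        "intents": {"orderPizza": {"score": 0.97}},
+        "entities": {"Size": ["small"]},
+    },
+}
+
+prediction = response["prediction"]
+print(prediction["topIntent"])  # orderPizza
+for name, values in prediction["entities"].items():
+    print(name, values)  # Size ['small']
+```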
+
+## Best practices for entities
+
+### Use machine-learning entities
+
+Machine learned entities are tailored to your app and require labeling to be successful. If you aren't using machine learned entities, you might be using the wrong entities.
+
+Machine learned entities can use other entities as features. These other entities can be custom entities such as regular expression entities or list entities, or you can use prebuilt entities as features.
+
+Learn about [effective machine learned entities](../concepts/entities.md#machine-learned-ml-entity).
+
+## Next steps
+
+* [How to use entities in your LUIS app](../how-to/entities.md)
+* [Utterances concepts](utterances.md)
ai-services Intents https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/LUIS/concepts/intents.md
+
+ Title: What are intents in LUIS
+
+description: Learn about intents and how they're used in LUIS
+ Last updated : 07/19/2022
+# Intents
+++
+An intent represents a task or action the user wants to perform. It is a purpose or goal expressed in a user's [utterance](utterances.md).
+
+Define a set of intents that corresponds to actions users want to take in your application. For example, a travel app would have several intents:
+
+| Travel app intents | Example utterances |
+|--|--|
+| BookFlight | "Book me a flight to Rio next week" <br/> "Fly me to Rio on the 24th" <br/> "I need a plane ticket next Sunday to Rio de Janeiro" |
+| Greeting | "Hi" <br/>"Hello" <br/>"Good morning" |
+| CheckWeather | "What's the weather like in Boston?" <br/> "Show me the forecast for this weekend" |
+| None | "Get me a cookie recipe"<br>"Did the Lakers win?" |
+
+All applications come with the predefined intent, "[None](#none-intent)", which is the fallback intent.
+
+## Prebuilt intents
+
+LUIS provides prebuilt intents and their utterances for each of its prebuilt domains. Intents can be added without adding the whole domain. Adding an intent is the process of adding an intent and its utterances to your app. Both the intent name and the utterance list can be modified.
+
+## Return all intents' scores
+
+You assign an utterance to a single intent. When LUIS receives an utterance, by default it returns the top intent for that utterance.
+
+If you want the scores for all intents for the utterance, you can provide a flag in the query string of the prediction API.
+
+|Prediction API version|Flag|
+|--|--|
+|V2|`verbose=true`|
+|V3|`show-all-intents=true`|
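+
+With the V3 prediction endpoint, the flag is just an extra query-string parameter. The sketch below builds both request URLs; the resource name and app ID are placeholders.
+
+```python
+from urllib.parse import urlencode
+
+# Placeholders - substitute your own resource name and app ID.
+base = (
+    "https://YOUR-RESOURCE.api.cognitive.microsoft.com"
+    "/luis/prediction/v3.0/apps/YOUR-APP-ID/slots/production/predict"
+)
+
+# Default: only the top-scoring intent is returned.
+top_only = base + "?" + urlencode({"query": "Book me a flight to Rio"})
+
+# With the flag: scores for every intent in the app are returned.
+all_intents = base + "?" + urlencode(
+    {"query": "Book me a flight to Rio", "show-all-intents": "true"}
+)
+
+print(all_intents)
+```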
+
+## Intent compared to entity
+
+The intent represents the action the application should take for the user, based on the entire utterance. An utterance can have only one top-scoring intent, but it can have many entities.
+
+Create an intent when the user's intention would trigger an action in your client application, like a call to the CheckWeather() function from the table above. Then create entities to represent parameters required to execute the action.
+
+|Intent | Entity | Example utterance |
+||||
+| CheckWeather | { "type": "location", "entity": "Seattle" }<br>{ "type": "builtin.datetimeV2.date","entity": "tomorrow","resolution":"2018-05-23" } | What's the weather like in `Seattle` `tomorrow`? |
+| CheckWeather | { "type": "date_range", "entity": "this weekend" } | Show me the forecast for `this weekend` |
+||||
++
+## None intent
+
+The **None** intent is created but left empty on purpose. The **None** intent is a required intent and can't be deleted or renamed. Fill it with utterances that are outside of your domain.
+
+The **None** intent is the fallback intent, and should have 10% of the total utterances. It is important in every app, because it's used to teach LUIS utterances that are not important in the app domain (subject area). If you do not add any utterances for the **None** intent, LUIS forces an utterance that is outside the domain into one of the domain intents. This will skew the prediction scores by teaching LUIS the wrong intent for the utterance.
+
+When an utterance is predicted as the None intent, the client application can ask more questions or provide a menu to direct the user to valid choices.
+
+## Negative intentions
+
+If you want to determine negative and positive intentions, such as "I **want** a car" and "I **don't** want a car", you can create two intents (one positive, and one negative) and add appropriate utterances for each. Or you can create a single intent and mark the two different positive and negative terms as an entity.
+
+## Intents and patterns
+
+If you have example utterances that can be defined, in part or in whole, as a regular expression, consider using the [regular expression entity](../concepts/entities.md#regex-entity) paired with a [pattern](../concepts/patterns-features.md).
+
+Using a regular expression entity guarantees the data extraction so that the pattern is matched. The pattern matching guarantees an exact intent is returned.
+
+## Intent balance
+
+The app domain intents should have a balance of utterances across each intent. For example, do not have most of your intents with 10 utterances and another intent with 500 utterances. This is not balanced. In this situation, you would want to review the intent with 500 utterances to see if many of the utterances can be reorganized into a [pattern](../concepts/patterns-features.md).
+
+The **None** intent is not included in the balance. That intent should contain 10% of the total utterances in the app.
+
+### Intent limits
+
+Review the [limits](../luis-limits.md) to understand how many intents you can add to a model.
+
+> [!Tip]
+> If you need more than the maximum number of intents, consider whether your system is using too many intents and determine whether multiple intents can be combined into a single intent with entities.
+> Intents that are too similar can make it more difficult for LUIS to distinguish between them. Intents should be varied enough to capture the main tasks that the user is asking for, but they don't need to capture every path your code takes. For example, two intents: BookFlight() and FlightCustomerService() might be separate intents in a travel app, but BookInternationalFlight() and BookDomesticFlight() are too similar. If your system needs to distinguish them, use entities or other logic rather than intents.
++
+### Request help for apps with a significant number of intents
+
+If reducing the number of intents or dividing your intents into multiple apps doesn't work for you, contact support. If your Azure subscription includes support services, contact [Azure technical support](https://azure.microsoft.com/support/options/).
++
+## Best practices for intents
+
+### Define distinct intents
+
+Make sure the vocabulary for each intent is just for that intent and not overlapping with a different intent. For example, if you want to have an app that handles travel arrangements such as airline flights and hotels, you can choose to have these subject areas as separate intents or the same intent with entities for specific data inside the utterance.
+
+If the vocabulary between two intents is the same, combine the intents and use entities.
+
+Consider the following example utterances:
+
+1. Book a flight
+2. Book a hotel
+
+"Book a flight" and "book a hotel" use the same vocabulary of "book a *\<noun\>*". This format is the same so it should be the same intent with the different words of flight and hotel as extracted entities.
+
+### Do add features to intents
+
+Features describe concepts for an intent. A feature can be a phrase list of words that are significant to that intent or an entity that is significant to that intent.
+
+### Do find a sweet spot for intents
+
+Use prediction data from LUIS to determine if your intents are overlapping. Overlapping intents confuse LUIS. The result is that the top scoring intent is too close to another intent. Because LUIS does not use the exact same path through the data for training each time, an overlapping intent has a chance of being first or second in training. You want the utterance's score for each intent to be farther apart, so this variance doesn't happen. Good distinction between intents should result in the expected top intent every time.
+
+### Balance utterances across intents
+
+For LUIS predictions to be accurate, the quantity of example utterances in each intent (except for the None intent) must be relatively equal.
+
+If you have an intent with 500 example utterances and all your other intents with 10 example utterances, the 500-utterance intent will have a higher rate of prediction.
+
+### Add example utterances to none intent
+
+This intent is the fallback intent, indicating everything outside your application. Add one example utterance to the None intent for every 10 example utterances in the rest of your LUIS app.
+
+### Don't add many example utterances to intents
+
+After the app is published, only add utterances from active learning in the development lifecycle process. If utterances are too similar, add a pattern.
+
+### Don't mix the definition of intents and entities
+
+Create an intent for any action your bot will take. Use entities as parameters that make that action possible.
+
+For example, for a bot that will book airline flights, create a **BookFlight** intent. Do not create an intent for every airline or every destination. Use those pieces of data as [entities](../concepts/entities.md) and mark them in the example utterances.
+
+## Next steps
+
+[How to use intents](../how-to/intents.md)
ai-services Patterns Features https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/LUIS/concepts/patterns-features.md
+
+ Title: Patterns and features
+
+description: Use this article to learn about patterns and features in LUIS
+ Last updated : 07/19/2022
+# Patterns in LUIS apps
+++
+Patterns are designed to improve accuracy when multiple utterances are very similar. A pattern allows you to gain more accuracy for an intent without providing several more utterances.
+
+## Patterns solve low intent confidence
+
+Consider a Human Resources app that reports on the organizational chart in relation to an employee. Given an employee's name and relationship, LUIS returns the employees involved. Consider an employee, Tom, with a manager named Alice, and a team of subordinates named: Michael, Rebecca, and Carl.
+++
+| Utterances | Intent predicted | Intent score |
+|--|--|--|
+| Who is Tom's subordinate? | GetOrgChart | 0.30 |
+| Who is the subordinate of Tom? | GetOrgChart | 0.30 |
+
+If an app has between 10 and 20 utterances with different lengths of sentence, different word order, and even different words (synonyms of "subordinate", "manage", "report"), LUIS may return a low confidence score. Create a pattern to help LUIS understand the importance of the word order.
+
+Patterns solve the following situations:
+
+* The intent score is low
+* The correct intent is not the top-scoring intent, but its score is too close to the top score.
+
+## Patterns are not a guarantee of intent
+
+Patterns use a mix of prediction techniques. Setting an intent for a template utterance in a pattern is not a guarantee of the intent prediction, but it is a strong signal.
+
+## Patterns do not improve machine-learning entity detection
+
+A pattern is primarily meant to help the prediction of intents and roles. The _pattern.any_ entity is used to extract free-form entities. While patterns use entities, a pattern does not help detect a machine-learning entity.
+
+Do not expect to see improved entity prediction if you collapse multiple utterances into a single pattern. For simple entities to be utilized by your app, you need to add utterances or use list entities.
+
+## Patterns use entity roles
+
+If two or more entities in a pattern are contextually related, patterns use entity [roles](entities.md) to extract contextual information about entities.
+
+## Prediction scores with and without patterns
+
+Given enough example utterances, LUIS may be able to increase prediction confidence without patterns. Patterns increase the confidence score without having to provide as many utterances.
+
+## Pattern matching
+
+A pattern is matched by detecting the entities inside the pattern first, then validating the rest of the words and word order of the pattern. Entities are required in the pattern for a pattern to match. The pattern is applied at the token level, not the character level.
+
+## Pattern.any entity
+
+The pattern.any entity allows you to find free-form data where the wording of the entity makes it difficult to determine the end of the entity from the rest of the utterance.
+
+For example, consider a Human Resources app that helps employees find company documents. This app might need to understand the following example utterances.
+
+* "_Where is **HRF-123456**?_"
+* "_Who authored **HRF-123234**?_"
+* "_Is **HRF-456098** published in French?_"
+
+However, each document has both a formatted name (used in the above list), and a human-readable name, such as Request relocation from employee new to the company 2018 version 5.
+
+Utterances with the human-readable name might look like:
+
+* "_Where is **Request relocation from employee new to the company 2018 version 5**?_"
+* _"Who authored **"Request relocation from employee new to the company 2018 version 5"**?_"
+* _Is **Request relocation from employee new to the company 2018 version 5** is published in French?_"
+
+The utterances include words that may confuse LUIS about where the entity ends. Using a Pattern.any entity in a pattern allows you to specify the beginning and end of the document name, so LUIS correctly extracts the form name. For example, the following template utterances:
+
+* Where is {FormName}[?]
+* Who authored {FormName}[?]
+* Is {FormName} published in French[?]
+
+## Best practices for patterns
+
+#### Do add patterns in later iterations
+
+You should understand how the app behaves before adding patterns because patterns are weighted more heavily than example utterances and will skew confidence.
+
+Once you understand how your app behaves, add patterns as they apply to your app. You do not need to add them each time you iterate on the app's design.
+
+There is no harm in adding them in the beginning of your model design, but it is easier to see how each pattern changes the model after the model is tested with utterances.
+
+#### Don't add many patterns
+
+Don't add too many patterns. LUIS is meant to learn quickly with fewer examples. Don't overload the system unnecessarily.
+
+## Features
+
+In machine learning, a _feature_ is a distinguishing trait or attribute of data that your system observes and learns through.
+
+Machine-learning features give LUIS important cues for where to look for things that distinguish a concept. They're hints that LUIS can use, but they aren't hard rules. LUIS uses these hints with the labels to find the data.
+
+A feature can be described as a function, like `f(x) = y`. In the example utterance, the feature tells you where to look for the distinguishing trait. Use this information to help create your schema.
+
+## Types of features
+
+Features are a necessary part of your schema design. LUIS supports both phrase lists and models as features:
+
+* Phrase list feature
+* Model (intent or entity) as a feature
+
+## Find features in your example utterances
+
+Because LUIS is a language-based application, the features are text-based. Choose text that indicates the trait you want to distinguish. For LUIS, the smallest unit is the _token_. For the English language, a token is a contiguous span of letters and numbers that has no spaces or punctuation.
+
+Because spaces and punctuation aren't tokens, focus on the text clues that you can use as features. Remember to include variations of words, such as:
+
+* Plural forms
+* Verb tenses
+* Abbreviations
+* Spellings and misspellings
+
+Determine if the text needs the following because it distinguishes a trait:
+
+* Match an exact word or phrase: Consider adding a regular expression entity or a list entity as a feature to the entity or intent.
+* Match a well-known concept like dates, times, or people's names: Use a prebuilt entity as a feature to the entity or intent.
+* Learn new examples over time: Use a phrase list of some examples of the concept as a feature to the entity or intent.
+
+## Create a phrase list for a concept
+
+A phrase list is a list of words or phrases that describe a concept. A phrase list is applied as a case-insensitive match at the token level.
+
+When adding a phrase list, you can set the feature to [**global**](#global-features). A global feature applies to the entire app.
+
+## When to use a phrase list
+
+Use a phrase list when you need your LUIS app to generalize and identify new items for the concept. Phrase lists are like domain-specific vocabulary. They enhance the quality of understanding for intents and entities.
+
+## How to use a phrase list
+
+With a phrase list, LUIS considers context and generalizes to identify items that are similar to, but aren't, an exact text match. Follow these steps to use a phrase list:
+
+1. Start with a machine-learning entity:
+ 1. Add example utterances.
+ 2. Label with a machine-learning entity.
+2. Add a phrase list:
+ 1. Add words with similar meaning. Don't add every possible word or phrase. Instead, add a few words or phrases at a time. Then retrain and publish.
+ 2. Review and add suggested words.
+
+## A typical scenario for a phrase list
+
+A typical scenario for a phrase list is to boost words related to a specific idea.
+
+Medical terms are a good example of words that might need a phrase list to boost their significance. These terms can have specific physical, chemical, therapeutic, or abstract meanings. LUIS won't know the terms are important to your subject domain without a phrase list.
+
+For example, to extract the medical terms:
+
+1. Create example utterances and label medical terms within those utterances.
+2. Create a phrase list with examples of the terms within the subject domain. This phrase list should include the actual term you labeled and other terms that describe the same concept.
+3. Add the phrase list to the entity or subentity that extracts the concept used in the phrase list. The most common scenario is a component (child) of a machine-learning entity. If the phrase list should be applied across all intents or entities, mark the phrase list as a global phrase list. The **enabledForAllModels** flag controls this model scope in the API.
+
+## Token matches for a phrase list
+
+A phrase list always applies at the token level. The following table shows how a phrase list that has the word **Ann** applies to variations of the same characters in that order.
+
+| Token variation of "Ann" | Phrase list match when the token is found |
+|--|--|
+| **ANN** <br> **aNN** | Yes - token is **Ann** |
+| **Ann's** | Yes - token is **Ann** |
+| **Anne** | No - token is **Anne** |
+
+## A model as a feature helps another model
+
+You can add a model (intent or entity) as a feature to another model (intent or entity). By adding an existing intent or entity as a feature, you're adding a well-defined concept that has labeled examples.
+
+When adding a model as a feature, you can set the feature as:
+
+* [**Required**](#required-features). A required feature must be found for the model to be returned from the prediction endpoint.
+* [**Global**](#global-features). A global feature applies to the entire app.
+
+## When to use an entity as a feature to an intent
+
+Add an entity as a feature to an intent when the detection of that entity is significant for the intent.
+
+For example, if the intent is for booking a flight, like **BookFlight** , and the entity is ticket information (such as the number of seats, origin, and destination), then finding the ticket-information entity should add significant weight to the prediction of the **BookFlight** intent.
+
+## When to use an entity as a feature to another entity
+
+An entity (A) should be added as a feature to another entity (B) when the detection of that entity (A) is significant for the prediction of entity (B).
+
+For example, if a shipping-address entity is contained in a street-address subentity, then finding the street-address subentity adds significant weight to the prediction for the shipping address entity.
+
+* Shipping address (machine-learning entity):
+ * Street number (subentity)
+ * Street address (subentity)
+ * City (subentity)
+ * State or Province (subentity)
+ * Country/Region (subentity)
+ * Postal code (subentity)
+
+## Nested subentities with features
+
+A machine-learning subentity indicates to its parent that a concept is present. The parent can be another subentity or the top entity. The value of the subentity acts as a feature to its parent.
+
+A subentity can have both a phrase list and a model (another entity) as a feature.
+
+When the subentity has a phrase list, it boosts the vocabulary of the concept but won't add any information to the JSON response of the prediction.
+
+When the subentity has a feature of another entity, the JSON response includes the extracted data of that other entity.
+
+## Required features
+
+A required feature has to be found in order for the model to be returned from the prediction endpoint. Use a required feature when you know your incoming data must match the feature.
+
+If the utterance text doesn't match the required feature, it won't be extracted.
+
+A required feature uses a non-machine-learning entity:
+
+* Regular-expression entity
+* List entity
+* Prebuilt entity
+
+If you're confident that your model will be found in the data, set the feature as required. A required feature doesn't return anything if it isn't found.
+
+Continuing with the example of the shipping address:
+
+Shipping address (machine learned entity)
+
+* Street number (subentity)
+* Street address (subentity)
+* Street name (subentity)
+* City (subentity)
+* State or Province (subentity)
+* Country/Region (subentity)
+* Postal code (subentity)
+
+## Required feature using prebuilt entities
+
+Prebuilt entities such as city, state, and country/region are generally a closed set of lists, meaning they don't change much over time. These entities could have the relevant recommended features and those features could be marked as required. However, the isRequired flag is only related to the entity it is assigned to and doesn't affect the hierarchy. If the prebuilt sub-entity feature is not found, this will not affect the detection and return of the parent entity.
+
+As an example of a required feature, consider you want to detect addresses. You might consider making a street number a requirement. This would allow a user to enter "1 Microsoft Way" or "One Microsoft Way", and both would resolve to the numeral "1" for the street number sub-entity. See the [prebuilt entity](../luis-reference-prebuilt-entities.md) article for more information.
+
+## Required feature using list entities
+
+A [list entity](../reference-entity-list.md) is used as a list of canonical names along with their synonyms. As a required feature, if the utterance doesn't include either the canonical name or a synonym, then the entity isn't returned as part of the prediction endpoint.
+
+Suppose that your company only ships to a limited set of countries/regions. You can create a list entity that includes several ways for your customer to reference the country/region. If LUIS doesn't find an exact match within the text of the utterance, then the entity (that has the required feature of the list entity) isn't returned in the prediction.
+
+| Canonical name | Synonyms |
+|--|--|
+| United States | U.S.<br> U.S.A <br> US <br> USA <br> 0 |
+
+A client application, such as a chat bot, can ask a follow-up question to help. This helps the customer understand that the country/region selection is limited and _required_.
+
+## Required feature using regular expression entities
+
+A [regular expression entity](../reference-entity-regular-expression.md) that's used as a required feature provides rich text-matching capabilities.
+
+In the shipping address example, you can create a regular expression that captures syntax rules of the country/region postal codes.
+
+## Global features
+
+While the most common use is to apply a feature to a specific model, you can configure the feature as a **global feature** to apply it to your entire application.
+
+The most common use for a global feature is to add additional vocabulary to the app. For example, if your customers use a primary language, but expect to be able to use another language within the same utterance, you can add a feature that includes words from the secondary language.
+
+Because the user expects to use the secondary language across any intent or entity, add words from the secondary language to the phrase list. Configure the phrase list as a global feature.
+
+## Combine features for added benefit
+
+You can use more than one feature to describe a trait or concept. A common pairing is to use:
+
+* A phrase list feature: You can use multiple phrase lists as features to the same model.
+* A model as a feature: [prebuilt entity](../luis-reference-prebuilt-entities.md), [regular expression entity](../reference-entity-regular-expression.md), [list entity](../reference-entity-list.md).
+
+## Example: ticket-booking entity features for a travel app
+
+As a basic example, consider an app for booking a flight with a flight-reservation _intent_ and a ticket-booking _entity_. The ticket-booking entity captures the information to book an airplane ticket in a reservation system.
+
+The machine-learning entity for ticket booking has two subentities to capture origin and destination. The features need to be added to each subentity, not the top-level entity.
++
+The ticket-booking entity is a machine-learning entity, with subentities including _Origin_ and _Destination_. These subentities both indicate a geographical location. To help extract the locations, and distinguish between _Origin_ and _Destination_, each subentity should have features.
+
+| Type | Origin subentity | Destination subentity |
+|--|--|--|
+| Model as a feature | [geographyV2](../luis-reference-prebuilt-geographyv2.md) prebuilt entity | [geographyV2](../luis-reference-prebuilt-geographyv2.md) prebuilt entity |
+| Phrase list | **Origin words** : start at, begin from, leave | **Destination words** : to, arrive, land at, go, going, stay, heading |
+| Phrase list | Airport codes - same list for both origin and destination | Airport codes - same list for both origin and destination |
+| Phrase list | Airport names - same list for both origin and destination | Airport names - same list for both origin and destination |
++
+If you anticipate that people use airport codes and airport names, then LUIS should have phrase lists that use both types of phrases. Airport codes may be more common with text entered in a chatbot while airport names may be more common with spoken conversation such as a speech-enabled chatbot.
+
+The matching details of the features are returned only for models, not for phrase lists because only models are returned in prediction JSON.
+
+## Ticket-booking labeling in the intent
+
+After you create the machine-learning entity, you need to add example utterances to an intent, and label the parent entity and all subentities.
+
+For the ticket booking example, label the example utterances in the intent with the TicketBooking entity and any subentities in the text.
+++
+## Example: pizza ordering app
+
+For a second example, consider an app for a pizza restaurant, which receives pizza orders including the details of the type of pizza someone is ordering. Each detail of the pizza should be extracted, if possible, in order to complete the order processing.
+
+The machine-learning entity in this example is more complex with nested subentities, phrase lists, prebuilt entities, and custom entities.
++
+This example uses features at the subentity level and child of subentity level. Which level gets what kind of phrase list or model as a feature is an important part of your entity design.
+
+While subentities can have many phrase lists as features that help detect the entity, each subentity has only one model as a feature. In this [pizza app](https://github.com/Azure/pizza_luis_bot/blob/master/CognitiveModels/MicrosoftPizza.json), those models are primarily lists.
++
+The correctly labeled example utterances are displayed in a way that shows how the entities are nested.
+
+## Next steps
+
+* [LUIS application design](application-design.md)
+* [Prebuilt models](../luis-concept-prebuilt-model.md)
+* [Intents](intents.md)
+* [Entities](entities.md)
ai-services Utterances https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/LUIS/concepts/utterances.md
+
+ Title: Utterances
+description: Utterances concepts
+++
+ Last updated : 07/19/2022
+# Utterances
+++
+Utterances are inputs from users that your app needs to interpret. To train LUIS to extract intents and entities from these inputs, it's important to capture various different example utterances for each intent. Active learning, or the process of continuing to train on new utterances, is essential to the machine-learning intelligence that LUIS provides.
+
+Collect utterances that you think users will enter. Include utterances that mean the same thing but are constructed in various ways:
+
+* Utterance length - short, medium, and long for your client-application
+* Word and phrase length
+* Word placement - entity at beginning, middle, and end of utterance
+* Grammar
+* Pluralization
+* Stemming
+* Noun and verb choice
+* [Punctuation](../luis-reference-application-settings.md#punctuation-normalization) - using both correct and incorrect grammar
+
+## Choose varied utterances
+
+When you start [adding example utterances](../how-to/entities.md) to your LUIS model, there are several principles to keep in mind:
+
+## Utterances aren't always well formed
+
+Your app may need to process sentences, like "Book a ticket to Paris for me", or a fragment of a sentence, like "Booking" or "Paris flight". Users also often make spelling mistakes. When planning your app, consider whether or not you want to use [Bing Spell Check](../luis-tutorial-bing-spellcheck.md) to correct user input before passing it to LUIS.
+
+If you do not spell check user utterances, you should train LUIS on utterances that include typos and misspellings.
+
+### Use the representative language of the user
+
+When choosing utterances, be aware that what you think are common terms or phrases might not be common for the typical user of your client application. They may not have domain experience or use different terminology. Be careful when using terms or phrases that a user would only say if they were an expert.
+
+### Choose varied terminology and phrasing
+
+You will find that even if you make efforts to create varied sentence patterns, you will still repeat some vocabulary. For example, the following utterances have similar meaning, but different terminology and phrasing:
+
+* "*How do I get a computer?*"
+* "*Where do I get a computer?*"
+* "*I want to get a computer, how do I go about it?*"
+* "*When can I have a computer?*"
+
+The core term here, _computer_, isn't varied. Use alternatives such as desktop computer, laptop, workstation, or even just machine. LUIS can intelligently infer synonyms from context, but when you create utterances for training, it's always better to vary them.
+
+## Example utterances in each intent
+
+Each intent needs to have example utterances - at least 15. If you have an intent that does not have any example utterances, you will not be able to train LUIS. If you have an intent with one or few example utterances, LUIS may not accurately predict the intent.
+
+## Add small groups of utterances
+
+Each time you iterate on your model to improve it, don't add large quantities of utterances. Consider adding utterances in quantities of 15. Then [Train](../how-to/train-test.md), [publish](../how-to/publish.md), and [test](../how-to/train-test.md) again.
+
+LUIS builds effective models with utterances that are carefully selected by the LUIS model author. Adding too many utterances isn't valuable because it introduces confusion.
+
+It is better to start with a few utterances, then [review the endpoint utterances](../how-to/improve-application.md) for correct intent prediction and entity extraction.
+
+## Utterance normalization
+
+Utterance normalization is the process of ignoring the effects of types of text, such as punctuation and diacritics, during training and prediction.
+
+Utterance normalization settings are turned off by default. These settings include:
+
+* Word forms
+* Diacritics
+* Punctuation
+
+If you turn on a normalization setting, scores in the **Test** pane, batch tests, and endpoint queries will change for all utterances for that normalization setting.
+
+When you clone a version in the LUIS portal, the version settings are kept in the new cloned version.
+
+Set your app's version settings using the LUIS portal by selecting **Manage** from the top navigation menu, in the **Application Settings** page. You can also use the [Update Version Settings API](https://westus.dev.cognitive.microsoft.com/docs/services/5890b47c39e2bb17b84a55ff/operations/versions-update-application-version-settings). See the [Reference](../luis-reference-application-settings.md) documentation for more information.
+
+## Word forms
+
+Normalizing **word forms** ignores the differences in words that expand beyond the root.
+
+## Diacritics
+
+Diacritics are marks or signs within the text, such as:
+
+`İ ı Ş Ğ ş ğ ö ü`
+
+## Punctuation marks
+
+Normalizing **punctuation** means that before your models get trained and before your endpoint queries get predicted, punctuation will be removed from the utterances.
+
+Punctuation is a separate token in LUIS. An utterance that contains a period at the end is a different utterance from one that does not contain a period at the end, and may get two different predictions.
+
+If punctuation is not normalized, LUIS doesn't ignore punctuation marks by default because some client applications may place significance on these marks. Make sure to include example utterances that use punctuation, and ones that don't, for both styles to return the same relative scores.
+
+Make sure the model handles punctuation either in the example utterances (both having and not having punctuation) or in [patterns](../concepts/patterns-features.md) where it is easier to ignore punctuation. For example: I am applying for the {Job} position[.]
+
+If punctuation has no specific meaning in your client application, consider [ignoring punctuation](../concepts/utterances.md#utterance-normalization) by normalizing punctuation.
+
+## Ignoring words and punctuation
+
+If you want to ignore specific words or punctuation in patterns, use a [pattern](../concepts/patterns-features.md) with the _ignore_ syntax of square brackets, `[]`.
+
+## Training with all utterances
+
+Training is generally non-deterministic: utterance prediction can vary slightly across versions or apps. You can remove non-deterministic training by updating the [version settings](https://westus.dev.cognitive.microsoft.com/docs/services/5890b47c39e2bb17b84a55ff/operations/versions-update-application-version-settings) API with the UseAllTrainingData name/value pair to use all training data.
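+
+A minimal sketch of such an update with the REST API follows. The settings path and the name/value body shape are assumptions based on the Update Version Settings API linked above, so verify them against that reference; the resource name, app ID, version, and key are placeholders.
+
+```python
+import requests
+
+# Placeholders - substitute your own authoring resource, app ID, version, and key.
+# The "/settings" path and list-of-pairs body are assumptions; confirm them in
+# the Update Version Settings API reference linked above.
+url = (
+    "https://YOUR-RESOURCE.api.cognitive.microsoft.com"
+    "/luis/authoring/v3.0-preview/apps/YOUR-APP-ID/versions/0.1/settings"
+)
+headers = {"Ocp-Apim-Subscription-Key": "YOUR-AUTHORING-KEY"}
+body = [{"name": "UseAllTrainingData", "value": "true"}]
+
+response = requests.put(url, headers=headers, json=body)
+response.raise_for_status()
+```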
+
+## Testing utterances
+
+Developers should start testing their LUIS application with real data by sending utterances to the [prediction endpoint](../luis-how-to-azure-subscription.md) URL. These utterances are used to improve the performance of the intents and entities with [Review utterances](../how-to/improve-application.md). Tests submitted using the testing pane in the LUIS portal are not sent through the endpoint, and don't contribute to active learning.
+
+## Review utterances
+
+After your model is trained, published, and receiving [endpoint](../luis-glossary.md#endpoint) queries, [review the utterances](../how-to/improve-application.md) suggested by LUIS. LUIS selects endpoint utterances that have low scores for either the intent or entity.
+
+## Best practices
+
+### Label for word meaning
+
+If the word choice or word arrangement is the same, but doesn't mean the same thing, do not label it with the entity.
+
+In the following utterances, the word fair is a homograph, which means it's spelled the same but has a different meaning:
+* "*What kind of county fairs are happening in the Seattle area this summer?*"
+* "*Is the current 2-star rating for the restaurant fair?*
+
+If you want an event entity to find all event data, label the word fair in the first utterance, but not in the second.
+
+### Don't ignore possible utterance variations
+
+LUIS expects variations in an intent's utterances. The utterances can vary while having the same overall meaning. Variations can include utterance length, word choice, and word placement.
++
+| Don't use the same format | Do use varying formats |
+|--|--|
+| Buy a ticket to Seattle|Buy 1 ticket to Seattle|
+|Buy a ticket to Paris|Reserve two seats on the red eye to Paris next Monday|
+|Buy a ticket to Orlando |I would like to book 3 tickets to Orlando for spring break |
++
+The second column uses different verbs (buy, reserve, book), different quantities (1, "two", 3), and different arrangements of words, but all have the same intention of purchasing airline tickets for travel.
+
+### Don't add too many example utterances to intents
+
+After the app is published, only add utterances from active learning in the development lifecycle process. If utterances are too similar, add a pattern.
+
+## Next steps
+
+* [Intents](intents.md)
+* [Patterns and features concepts](patterns-features.md)
ai-services Data Collection https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/LUIS/data-collection.md
+
+ Title: Data collection
+description: Learn what example data to collect while developing your app
+ Last updated : 05/06/2020
+# Data collection for your app
+++
+A Language Understanding (LUIS) app needs data as part of app development.
+
+## Data used in LUIS
+
+LUIS uses text as data to train and test your LUIS app for classification for [intents](concepts/intents.md) and for extraction of [entities](concepts/entities.md). You need a data set large enough to create separate training and test sets that have the diversity and distribution called out below. The data in each of these sets should not overlap.
+
+## Training data selection for example utterances
+
+Select utterances for your training set based on the following criteria:
+
+* **Real data is best**:
+ * **Real data from client application**: Select utterances that are real data from your client application. If the customer sends a web form with their inquiry today, and you're building a bot, you can start by using the web form data.
+ * **Crowd-sourced data**: If you don't have any existing data, consider crowd sourcing utterances. Try to crowd-source utterances from your actual user population for your scenario to get the best approximation of the real data your application will see. Crowd-sourced human utterances are better than computer-generated utterances. When you build a data set of synthetic utterances generated on specific patterns, it will lack much of the natural variation you'll see with people creating the utterances and won't end up generalizing well in production.
+* **Data diversity**:
+ * **Region diversity**: Make sure the data for each intent is as diverse as possible including _phrasing_ (word choice), and _grammar_. If you are teaching an intent about HR policies about vacation days, make sure you have utterances that represent the terms that are used for all regions you're serving. For example, in Europe people might ask about `taking a holiday` and in the US people might ask about `taking vacation days`.
+ * **Language diversity**: If you have users with various native languages that are communicating in a second language, make sure to have utterances that represent non-native speakers.
+ * **Input diversity**: Consider your data input path. If you are collecting data from one person, department or input device (microphone) you are likely missing diversity that will be important for your app to learn about all input paths.
+ * **Punctuation diversity**: Consider that people use varying levels of punctuation in text applications and make sure you have a diversity of how punctuation is used. If you're using data that comes from speech, it won't have any punctuation, so your data shouldn't either.
+* **Data distribution**: Make sure the data spread across intents represents the same spread of data your client application receives. If your LUIS app will classify utterances that are requests to schedule a leave (50%), but it will also see utterances about inquiring about leave days left (20%), approving leaves (20%) and some out of scope and chit chat (10%) then your data set should have the same percentages of each type of utterance.
+* **Use all data forms**: If your LUIS app will take data in multiple forms, make sure to include those forms in your training utterances. For example, if your client application takes both speech and typed text input, you need to have speech to text generated utterances as well as typed utterances. You will see different variations in how people speak from how they type as well as different errors in speech recognition and typos. All of this variation should be represented in your training data.
+* **Positive and negative examples**: To teach a LUIS app, it must learn about what the intent is (positive) and what it is not (negative). In LUIS, utterances can only be positive for a single intent. When an utterance is added to an intent, LUIS automatically makes that same example utterance a negative example for all the other intents.
+* **Data outside of application scope**: If your application will see utterances that fall outside of your defined intents, make sure to provide those. The examples that aren't assigned to a particular defined intent will be labeled with the **None** intent. It's important to have realistic examples for the **None** intent to properly predict utterances that are outside the scope of the defined intents.
+
+ For example, if you are creating an HR bot focused on leave time and you have three intents:
+ * schedule or edit a leave
+ * inquire about available leave days
+ * approve/disapprove leave
+
+ You want to make sure you have utterances that cover all of those intents, but also utterances outside that scope that the application should still handle, like these:
+ * `What are my medical benefits?`
+ * `Who is my HR rep?`
+ * `tell me a joke`
+* **Rare examples**: Your app will need to have rare examples as well as common examples. If your app has never seen rare examples, it won't be able to identify them in production. If you're using real data, you will be able to more accurately predict how your LUIS app will work in production.
+
+### Quality instead of quantity
+
+Consider the quality of your existing data before you add more data. With LUIS, you're using Machine Teaching. The combination of your labels and the machine learning features you define is what your LUIS app uses. It doesn't simply rely on the quantity of labels to make the best prediction. The diversity of examples and their representation of what your LUIS app will see in production is the most important part.
+
+### Preprocessing data
+
+The following preprocessing steps will help build a better LUIS app:
+
+* **Remove duplicates**: Duplicate utterances won't hurt, but they don't help either, so removing them will save labeling time.
+* **Apply same client-app preprocess**: If your client application, which calls the LUIS prediction endpoint, applies data processing at runtime before sending the text to LUIS, you should train the LUIS app on data that is processed in the same way.
+* **Don't apply new cleanup processes that the client app doesn't use**: If your client app accepts speech-generated text directly without any cleanup such as grammar or punctuation, your utterances need to reflect the same, including any missing punctuation and any other misrecognition you'll need to account for.
+* **Don't clean up data**: Don't get rid of malformed input that you might get from garbled speech recognition, accidental keypresses, or mistyped/misspelled text. If your app will see inputs like these, it's important for it to be trained and tested on them. Add a _malformed input_ intent if you wouldn't expect your app to understand it. Label this data to help your LUIS app predict the correct response at runtime. Your client application can choose an appropriate response to unintelligible utterances such as `Please try again`.
+
+### Labeling data
+
+* **Label text as if it was correct**: The example utterances should have all forms of an entity labeled. This includes text that is misspelled, mistyped, and mistranslated.
+
+### Data review after LUIS app is in production
+
+[Review endpoint utterances](how-to/improve-application.md) to monitor real utterance traffic once you have deployed an app to production. This allows you to update your training utterances with real data, which will improve your app. Any app built with crowd-sourced or non-real scenario data will need to be improved based on its real use.
+
+## Test data selection for batch testing
+
+All of the principles listed above for training utterances apply to utterances you should use for your [test set](./luis-how-to-batch-test.md). Ensure the distribution across intents and entities mirrors the real distribution as closely as possible.
+
+Don't reuse utterances from your training set in your test set. This improperly biases your results and won't give you the right indication of how your LUIS app will perform in production.
+
+Once the first version of your app is published, you should update your test set with utterances from real traffic to ensure your test set reflects your production distribution and you can monitor realistic performance over time.
+
+## Next steps
+
+[Learn how LUIS alters your data before prediction](luis-concept-data-alteration.md)
ai-services Developer Reference Resource https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/LUIS/developer-reference-resource.md
+
+ Title: Developer resources - Language Understanding
+description: SDKs, REST APIs, CLI, help you develop Language Understanding (LUIS) apps in your programming language. Manage your Azure resources and LUIS predictions.
+ Last updated : 01/12/2021
+ms.devlang: csharp, javascript
+++
+# SDK, REST, and CLI developer resources for Language Understanding (LUIS)
+++
+SDKs, REST APIs, and the CLI help you develop Language Understanding (LUIS) apps in your programming language, and manage your Azure resources and LUIS predictions.
+
+## Azure resource management
+
+Use the Azure AI services Management layer to create, edit, list, and delete the Language Understanding or Azure AI services resource.
+
+Find reference documentation based on the tool:
+
+* [Azure CLI](/cli/azure/cognitiveservices#az-cognitiveservices-list)
+
+* [Azure RM PowerShell](/powershell/module/azurerm.cognitiveservices/#cognitive_services)
++
+## Language Understanding authoring and prediction requests
+
+The Language Understanding service is accessed from an Azure resource you need to create. There are two resources:
+
+* Use the **authoring** resource for training to create, edit, train, and publish.
+* Use the **prediction** resource at runtime to send the user's text and receive a prediction.
+
+Learn about the [V3 prediction endpoint](luis-migration-api-v3.md).
+
+Use [Azure AI services sample code](https://github.com/Azure-Samples/cognitive-services-quickstart-code) to learn and use the most common tasks.
+
+### REST specifications
+
+The [LUIS REST specifications](https://github.com/Azure/azure-rest-api-specs/tree/master/specification/cognitiveservices/data-plane/LUIS), along with all [Azure REST specifications](https://github.com/Azure/azure-rest-api-specs), are publicly available on GitHub.
+
+### REST APIs
+
+Both authoring and prediction endpoint APIs are available as REST APIs:
+
+|Type|Version|
+|--|--|
+|Authoring|[V2](https://go.microsoft.com/fwlink/?linkid=2092087)<br>[preview V3](https://westeurope.dev.cognitive.microsoft.com/docs/services/luis-programmatic-apis-v3-0-preview)|
+|Prediction|[V2](https://go.microsoft.com/fwlink/?linkid=2092356)<br>[V3](https://westcentralus.dev.cognitive.microsoft.com/docs/services/luis-endpoint-api-v3-0/)|
+
+### REST Endpoints
+
+LUIS currently has two types of endpoints:
+
+* **authoring** on the training endpoint
+* query **prediction** on the runtime endpoint.
+
+|Purpose|URL|
+|--|--|
+|V2 Authoring on training endpoint|`https://{your-resource-name}.api.cognitive.microsoft.com/luis/api/v2.0/apps/{appID}/`|
+|V3 Authoring on training endpoint|`https://{your-resource-name}.api.cognitive.microsoft.com/luis/authoring/v3.0-preview/apps/{appID}/`|
+|V2 Prediction - all predictions on runtime endpoint|`https://{your-resource-name}.api.cognitive.microsoft.com/luis/v2.0/apps/{appId}?q={q}[&timezoneOffset][&verbose][&spellCheck][&staging][&bing-spell-check-subscription-key][&log]`|
+|V3 Prediction - versions prediction on runtime endpoint|`https://{your-resource-name}.api.cognitive.microsoft.com/luis/prediction/v3.0/apps/{appId}/versions/{versionId}/predict?query={query}[&verbose][&log][&show-all-intents]`|
+|V3 Prediction - slot prediction on runtime endpoint|`https://{your-resource-name}.api.cognitive.microsoft.com/luis/prediction/v3.0/apps/{appId}/slots/{slotName}/predict?query={query}[&verbose][&log][&show-all-intents]`|
+
+The following table explains the parameters, denoted with curly braces `{}`, in the previous table.
+
+|Parameter|Purpose|
+|--|--|
+|`your-resource-name`|Azure resource name|
+|`q` or `query`|utterance text sent from client application such as chat bot|
+|`versionId`|10-character version name|
+|`slotName`|`production` or `staging`|
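+As a reference point, here's a minimal sketch of a V3 slot prediction request using Python's `requests` library. The resource name, app ID, and key below are placeholders, and the query parameters mirror the table above.
+
+```python
+# Minimal sketch of a V3 slot prediction request (all names and keys are placeholders).
+import requests
+
+resource_name = "your-resource-name"                     # Azure prediction resource
+app_id = "00000000-0000-0000-0000-000000000000"          # LUIS app ID
+prediction_key = "your-prediction-key"                   # prediction resource key
+
+url = (
+    f"https://{resource_name}.api.cognitive.microsoft.com"
+    f"/luis/prediction/v3.0/apps/{app_id}/slots/production/predict"
+)
+params = {
+    "query": "I want two large pepperoni pizzas",  # utterance text
+    "verbose": "true",                             # include entity metadata
+    "show-all-intents": "true",                    # return scores for every intent
+    "log": "false",                                # don't log this utterance
+}
+headers = {"Ocp-Apim-Subscription-Key": prediction_key}
+
+response = requests.get(url, params=params, headers=headers)
+print(response.json())
+```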
+
+### REST query string parameters
++
+## App schema
+
+The [app schema](app-schema-definition.md) is imported and exported in a `.json` or `.lu` format.
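+For example, exporting a version's schema programmatically could look like the following sketch; the route follows the V2 authoring export operation, and the resource name, app ID, version, and key are placeholders.
+
+```python
+# Sketch: export an app version's schema as JSON via the V2 authoring API (placeholders throughout).
+import json
+import requests
+
+authoring_endpoint = "https://your-resource-name.api.cognitive.microsoft.com"
+app_id = "00000000-0000-0000-0000-000000000000"
+version_id = "0.1"
+headers = {"Ocp-Apim-Subscription-Key": "your-authoring-key"}
+
+url = f"{authoring_endpoint}/luis/api/v2.0/apps/{app_id}/versions/{version_id}/export"
+schema = requests.get(url, headers=headers).json()
+
+with open("app-schema.json", "w") as f:
+    json.dump(schema, f, indent=2)
+```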
+
+### Language-based SDKs
+
+|Language |Reference documentation|Package|Quickstarts|
+|--|--|--|--|
+|C#|[Authoring](/dotnet/api/microsoft.azure.cognitiveservices.language.luis.authoring)</br>[Prediction](/dotnet/api/microsoft.azure.cognitiveservices.language.luis.runtime)|[NuGet authoring](https://www.nuget.org/packages/Microsoft.Azure.CognitiveServices.Language.LUIS.Authoring/)<br>[NuGet prediction](https://www.nuget.org/packages/Microsoft.Azure.CognitiveServices.Language.LUIS.Runtime/)|[Authoring](./client-libraries-rest-api.md?pivots=rest-api)<br>[Query prediction](./client-libraries-rest-api.md?pivots=rest-api)|
+|Go|[Authoring and prediction](https://godoc.org/github.com/Azure/azure-sdk-for-go/services/cognitiveservices/v2.0/luis)|[SDK](https://github.com/Azure-Samples/cognitive-services-dotnet-sdk-samples/tree/master/LUIS)||
+|Java|[Authoring and prediction](/java/api/overview/azure/cognitiveservices/client/languageunderstanding)|[Maven authoring](https://search.maven.org/artifact/com.microsoft.azure.cognitiveservices/azure-cognitiveservices-luis-authoring)<br>[Maven prediction](https://search.maven.org/artifact/com.microsoft.azure.cognitiveservices/azure-cognitiveservices-luis-runtime)||
+|JavaScript|[Authoring](/javascript/api/@azure/cognitiveservices-luis-authoring/)<br>[Prediction](/javascript/api/@azure/cognitiveservices-luis-runtime/)|[NPM authoring](https://www.npmjs.com/package/@azure/cognitiveservices-luis-authoring)<br>[NPM prediction](https://www.npmjs.com/package/@azure/cognitiveservices-luis-runtime)|[Authoring](./client-libraries-rest-api.md?pivots=rest-api)<br>[Prediction](./client-libraries-rest-api.md?pivots=rest-api)|
+|Python|[Authoring and prediction](./client-libraries-rest-api.md?pivots=rest-api)|[Pip](https://pypi.org/project/azure-cognitiveservices-language-luis/)|[Authoring](./client-libraries-rest-api.md?pivots=rest-api)<br>[Prediction](./client-libraries-rest-api.md?pivots=rest-api)|
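+For example, a prediction query with the Python package above follows this general pattern. Treat it as a sketch based on the runtime SDK quickstart pattern; the endpoint, key, and app ID are placeholders, and method names may vary by SDK version.
+
+```python
+# Sketch of a query using the azure-cognitiveservices-language-luis runtime client.
+# Endpoint, key, and app ID are placeholders.
+from azure.cognitiveservices.language.luis.runtime import LUISRuntimeClient
+from msrest.authentication import CognitiveServicesCredentials
+
+runtime_client = LUISRuntimeClient(
+    "https://your-resource-name.api.cognitive.microsoft.com",
+    CognitiveServicesCredentials("your-prediction-key"),
+)
+
+prediction_request = {"query": "I want two small pepperoni pizzas"}
+response = runtime_client.prediction.get_slot_prediction(
+    "your-app-id", "Production", prediction_request
+)
+print(response.prediction.top_intent)
+```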
++
+### Containers
+
+Language Understanding (LUIS) provides a [container](luis-container-howto.md) so you can run on-premises, contained versions of your app.
+
+### Export and import formats
+
+Language Understanding provides the ability to manage your app and its models in a JSON format, the `.LU` ([LUDown](https://github.com/microsoft/botbuilder-tools/blob/master/packages/Ludown)) format, and a compressed package for the Language Understanding container.
+
+Importing and exporting these formats is available from the APIs and from the LUIS portal. The portal provides import and export as part of the Apps list and Versions list.
+
+## Workshops
+
+* GitHub: (Workshop) [Conversational-AI : NLU using LUIS](https://github.com/GlobalAICommunity/Workshop-Conversational-AI)
+
+## Continuous integration tools
+
+* GitHub: (Preview) [Developing a LUIS app using DevOps practices](https://github.com/Azure-Samples/LUIS-DevOps-Template)
+* GitHub: [NLU.DevOps](https://github.com/microsoft/NLU.DevOps) - Tools supporting continuous integration and deployment for NLU services.
+
+## Bot Framework tools
+
+The bot framework is available as [an SDK](https://github.com/Microsoft/botframework) in a variety of languages and as a service using [Azure AI Bot Service](https://dev.botframework.com/).
+
+Bot framework provides [several tools](https://github.com/microsoft/botbuilder-tools) to help with Language Understanding, including:
+* [Bot Framework emulator](https://github.com/Microsoft/BotFramework-Emulator/releases) - a desktop application that allows bot developers to test and debug bots built using the Bot Framework SDK
+* [Bot Framework Composer](https://github.com/microsoft/BotFramework-Composer/blob/stable/README.md) - an integrated development tool for developers and multi-disciplinary teams to build bots and conversational experiences with the Microsoft Bot Framework
+* [Bot Framework Samples](https://github.com/microsoft/botbuilder-samples) - in C#, JavaScript, TypeScript, and Python
+
+## Next steps
+
+* Learn about the common [HTTP error codes](luis-reference-response-codes.md)
+* [Reference documentation](../../index.yml) for all APIs and SDKs
+* [Bot framework](https://github.com/Microsoft/botbuilder-dotnet) and [Azure AI Bot Service](https://dev.botframework.com/)
+* [LUDown](https://github.com/microsoft/botbuilder-tools/blob/master/packages/Ludown)
+* [Cognitive Containers](../cognitive-services-container-support.md)
ai-services Encrypt Data At Rest https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/LUIS/encrypt-data-at-rest.md
+
+ Title: Language Understanding service encryption of data at rest
+
+description: Microsoft offers Microsoft-managed encryption keys, and also lets you manage your Azure AI services subscriptions with your own keys, called customer-managed keys (CMK). This article covers data encryption at rest for Language Understanding (LUIS), and how to enable and manage CMK.
++++++ Last updated : 08/28/2020+
+#Customer intent: As a user of the Language Understanding (LUIS) service, I want to learn how encryption at rest works.
++
+# Language Understanding service encryption of data at rest
+++
+The Language Understanding service automatically encrypts your data when it is persisted to the cloud. The Language Understanding service encryption protects your data and helps you meet your organizational security and compliance commitments.
+
+## About Azure AI services encryption
+
+Data is encrypted and decrypted using [FIPS 140-2](https://en.wikipedia.org/wiki/FIPS_140-2) compliant [256-bit AES](https://en.wikipedia.org/wiki/Advanced_Encryption_Standard) encryption. Encryption and decryption are transparent, meaning encryption and access are managed for you. Your data is secure by default and you don't need to modify your code or applications to take advantage of encryption.
+
+## About encryption key management
+
+By default, your subscription uses Microsoft-managed encryption keys. There is also the option to manage your subscription with your own keys, called customer-managed keys (CMK). CMK offers greater flexibility to create, rotate, disable, and revoke access controls. You can also audit the encryption keys used to protect your data.
+
+## Customer-managed keys with Azure Key Vault
+
+There is also an option to manage your subscription with your own keys. Customer-managed keys (CMK), also known as Bring your own key (BYOK), offer greater flexibility to create, rotate, disable, and revoke access controls. You can also audit the encryption keys used to protect your data.
+
+You must use Azure Key Vault to store your customer-managed keys. You can either create your own keys and store them in a key vault, or you can use the Azure Key Vault APIs to generate keys. The Azure AI services resource and the key vault must be in the same region and in the same Azure Active Directory (Azure AD) tenant, but they can be in different subscriptions. For more information about Azure Key Vault, see [What is Azure Key Vault?](../../key-vault/general/overview.md).
+
+### Customer-managed keys for Language Understanding
+
+To request the ability to use customer-managed keys, fill out and submit the [LUIS Service Customer-Managed Key Request Form](https://aka.ms/cogsvc-cmk). It will take approximately 3-5 business days to hear back on the status of your request. Depending on demand, you may be placed in a queue and approved as space becomes available. Once approved for using CMK with LUIS, you'll need to create a new Language Understanding resource from the Azure portal and select E0 as the Pricing Tier. The new SKU will function the same as the F0 SKU that is already available except for CMK. Users won't be able to upgrade from the F0 to the new E0 SKU.
+
+![LUIS subscription image](../media/cognitive-services-encryption/luis-subscription.png)
+
+### Limitations
+
+There are some limitations when using the E0 tier with existing/previously created applications:
+
+* Migration to an E0 resource will be blocked. Users will only be able to migrate their apps to F0 resources. After you've migrated an existing resource to F0, you can create a new resource in the E0 tier. Learn more about [migration](./luis-migration-authoring.md).
+* Moving applications to or from an E0 resource will be blocked. A workaround for this limitation is to export your existing application and import it as an E0 resource.
+* The Bing Spell check feature isn't supported.
+* Logging end-user traffic is disabled if your application is E0.
+* The Speech priming capability from the Azure AI Bot Service isn't supported for applications in the E0 tier. This feature is available via the Azure AI Bot Service, which doesn't support CMK.
+* The speech priming capability from the portal requires Azure Blob Storage. For more information, see [bring your own storage](../Speech-Service/speech-encryption-of-data-at-rest.md#bring-your-own-storage-byos-for-customization-and-logging).
+
+### Enable customer-managed keys
+
+A new Azure AI services resource is always encrypted using Microsoft-managed keys. It's not possible to enable customer-managed keys at the time that the resource is created. Customer-managed keys are stored in Azure Key Vault, and the key vault must be provisioned with access policies that grant key permissions to the managed identity that is associated with the Azure AI services resource. The managed identity is available only after the resource is created using the Pricing Tier for CMK.
+
+To learn how to use customer-managed keys with Azure Key Vault for Azure AI services encryption, see:
+
+- [Configure customer-managed keys with Key Vault for Azure AI services encryption from the Azure portal](../Encryption/cognitive-services-encryption-keys-portal.md)
+
+Enabling customer managed keys will also enable a system assigned managed identity, a feature of Azure AD. Once the system assigned managed identity is enabled, this resource will be registered with Azure Active Directory. After being registered, the managed identity will be given access to the Key Vault selected during customer managed key setup. You can learn more about [Managed Identities](../../active-directory/managed-identities-azure-resources/overview.md).
+
+> [!IMPORTANT]
+> If you disable system assigned managed identities, access to the key vault will be removed and any data encrypted with the customer keys will no longer be accessible. Any features that depend on this data will stop working.
+
+> [!IMPORTANT]
+> Managed identities do not currently support cross-directory scenarios. When you configure customer-managed keys in the Azure portal, a managed identity is automatically assigned under the covers. If you subsequently move the subscription, resource group, or resource from one Azure AD directory to another, the managed identity associated with the resource is not transferred to the new tenant, so customer-managed keys may no longer work. For more information, see **Transferring a subscription between Azure AD directories** in [FAQs and known issues with managed identities for Azure resources](../../active-directory/managed-identities-azure-resources/known-issues.md#transferring-a-subscription-between-azure-ad-directories).
+
+### Store customer-managed keys in Azure Key Vault
+
+To enable customer-managed keys, you must use an Azure Key Vault to store your keys. You must enable both the **Soft Delete** and **Do Not Purge** properties on the key vault.
+
+Only RSA keys of size 2048 are supported with Azure AI services encryption. For more information about keys, see **Key Vault keys** in [About Azure Key Vault keys, secrets and certificates](../../key-vault/general/about-keys-secrets-certificates.md).
+
+### Rotate customer-managed keys
+
+You can rotate a customer-managed key in Azure Key Vault according to your compliance policies. When the key is rotated, you must update the Azure AI services resource to use the new key URI. To learn how to update the resource to use a new version of the key in the Azure portal, see the section titled **Update the key version** in [Configure customer-managed keys for Azure AI services by using the Azure portal](../Encryption/cognitive-services-encryption-keys-portal.md).
+
+Rotating the key does not trigger re-encryption of data in the resource. There is no further action required from the user.
+
+### Revoke access to customer-managed keys
+
+To revoke access to customer-managed keys, use PowerShell or Azure CLI. For more information, see [Azure Key Vault PowerShell](/powershell/module/az.keyvault//) or [Azure Key Vault CLI](/cli/azure/keyvault). Revoking access effectively blocks access to all data in the Azure AI services resource, as the encryption key is inaccessible by Azure AI services.
+
+## Next steps
+
+* [LUIS Service Customer-Managed Key Request Form](https://aka.ms/cogsvc-cmk)
+* [Learn more about Azure Key Vault](../../key-vault/general/overview.md)
ai-services Faq https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/LUIS/faq.md
+
+ Title: LUIS frequently asked questions
+description: Use this article to see frequently asked questions about LUIS, and troubleshooting information
+++
+ms.
++ Last updated : 07/19/2022++
+# Language Understanding Frequently Asked Questions (FAQ)
++++
+## What are the maximum limits for LUIS application?
+
+LUIS has several limit areas. The first is the model limit, which controls intents, entities, and features in LUIS. The second area is quota limits based on key type. A third area of limits is the keyboard combination for controlling the LUIS website. A fourth area is the world region mapping between the LUIS authoring website and the LUIS endpoint APIs. See [LUIS limits](luis-limits.md) for more details.
+
+## What is the difference between Authoring and Prediction keys?
+
+An authoring resource lets you create, manage, train, test, and publish your applications. A prediction resource lets you query your prediction endpoint beyond the 1,000 requests provided by the authoring resource. See [Authoring and query prediction endpoint keys in LUIS](luis-how-to-azure-subscription.md) to learn about the differences between the authoring key and the prediction runtime key.
+
+## Does LUIS support speech to text?
+
+Yes, [speech to text](../speech-service/how-to-recognize-intents-from-speech-csharp.md#luis-and-speech) is provided as an integration with LUIS.
+
+## What are Synonyms and word variations?
+
+LUIS has little or no knowledge of the broader _NLP_ aspects, such as semantic similarity, without explicit identification in examples. For example, the following tokens (words) are three different things until they are used in similar contexts in the examples provided:
+
+* Buy
+* Buying
+* Bought
+
+For semantic similarity Natural Language Understanding (NLU), you can use [Conversational Language Understanding](../language-service/conversational-language-understanding/overview.md).
+
+## What are the Authoring and prediction pricing?
+Language Understanding has separate resources: one type for authoring, and one type for querying the prediction endpoint. Each has its own pricing. See [Resource usage and limits](luis-limits.md#resource-usage-and-limits)
+
+## What are the supported regions?
+
+See [region support](https://azure.microsoft.com/global-infrastructure/services/?products=cognitive-services)
+
+## How does LUIS store data?
+
+LUIS stores data encrypted in an Azure data store corresponding to the region specified by the key. Data used to train the model, such as entities, intents, and utterances, is saved in LUIS for the lifetime of the application. If an owner or contributor deletes the app, this data is deleted with it. If an application hasn't been used in 90 days, it will be deleted. See [Data retention](luis-concept-data-storage.md) for more details about data storage.
+
+## Does LUIS support Customer-Managed Keys (CMK)?
+
+The Language Understanding service automatically encrypts your data when it is persisted to the cloud. The Language Understanding service encryption protects your data and helps you meet your organizational security and compliance commitments. See [the CMK article](encrypt-data-at-rest.md#customer-managed-keys-with-azure-key-vault) for more details about customer-managed keys.
+
+## Is it important to train the None intent?
+
+Yes, it is good to train your **None** intent with utterances, especially as you add more labels to other intents. See [none intent](concepts/intents.md#none-intent) for details.
+
+## How do I edit my LUIS app programmatically?
+
+To edit your LUIS app programmatically, use the [Authoring API](https://go.microsoft.com/fwlink/?linkid=2092087). See [Call LUIS authoring API](get-started-get-model-rest-apis.md) and [Build a LUIS app programmatically using Node.js](luis-tutorial-node-import-utterances-csv.md) for examples of how to call the Authoring API. The Authoring API requires that you use an [authoring key](luis-how-to-azure-subscription.md) rather than an endpoint key. Programmatic authoring allows up to 1,000,000 calls per month and five transactions per second. For more info on the keys you use with LUIS, see [Manage keys](luis-how-to-azure-subscription.md).
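+As a hedged illustration, adding a single labeled example utterance through the V2 authoring API looks roughly like the following; the endpoint, app ID, version, key, and intent name are placeholders, and the exact route can vary by API version.
+
+```python
+# Sketch: add one example utterance to an intent via the V2 authoring API.
+# All identifiers below are placeholders.
+import requests
+
+authoring_endpoint = "https://your-resource-name.api.cognitive.microsoft.com"
+app_id = "00000000-0000-0000-0000-000000000000"
+version_id = "0.1"
+headers = {"Ocp-Apim-Subscription-Key": "your-authoring-key"}
+
+body = {
+    "text": "I want to change my pizza order to large please",
+    "intentName": "ModifyOrder",
+}
+
+url = f"{authoring_endpoint}/luis/api/v2.0/apps/{app_id}/versions/{version_id}/example"
+response = requests.post(url, json=body, headers=headers)
+print(response.status_code, response.json())
+```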
+
+## Should variations of an example utterance include punctuation?
+
+Use one of the following solutions:
+
+* Ignore [punctuation](luis-reference-application-settings.md#punctuation-normalization)
+* Add the different variations as example utterances to the intent
+* Add the pattern of the example utterance with the [syntax to ignore](concepts/utterances.md#utterance-normalization) the punctuation.
+
+## Why is my app getting different scores every time I train?
+
+Enable or disable the non-deterministic training option. When disabled, training uses all available data. When enabled (by default), training uses a random sample each time the app is trained, to be used as a negative for the intent. To make sure that you get the same scores every time, train your LUIS app with all your data. See the [training article](how-to/train-test.md#change-deterministic-training-settings-using-the-version-settings-api) for more information.
+
+## I received an HTTP 403 error status code. How do I fix it? Can I handle more requests per second?
+
+You get 403 and 429 error status codes when you exceed the transactions per second or transactions per month for your pricing tier. Increase your pricing tier, or use Language Understanding Docker [containers](luis-container-howto.md).
+
+When you use all of the free 1,000 endpoint queries or you exceed your pricing tier's monthly transactions quota, you will receive an HTTP 403 error status code.
+
+To fix this error, you need to either [change your pricing tier](luis-how-to-azure-subscription.md#change-the-pricing-tier) to a higher tier or [create a new resource](luis-get-started-create-app.md#sign-in-to-luis-portal) and assign it to your app.
+
+Solutions for this error include:
+
+* In the [Azure portal](https://portal.azure.com/), navigate to your Language Understanding resource, select **Resource Management**, then select **Pricing tier**, and change your pricing tier. You don't need to change anything in the Language Understanding portal if your resource is already assigned to your Language Understanding app.
+* If your usage exceeds the highest pricing tier, add more Language Understanding resources with a load balancer in front of them. The [Language Understanding container](luis-container-howto.md) with Kubernetes or Docker Compose can help with this.
+
+An HTTP 429 error code is returned when your transactions per second exceed your pricing tier.
+
+Solutions include:
+
+* You can [increase your pricing tier](luis-how-to-azure-subscription.md#change-the-pricing-tier), if you are not at the highest tier.
+* If your usage exceeds the highest pricing tier, add more Language Understanding resources with a load balancer in front of them. The [Language Understanding container](luis-container-howto.md) with Kubernetes or Docker Compose can help with this.
+* You can gate your client application requests with a [retry policy](/azure/architecture/best-practices/transient-faults#general-guidelines) you implement yourself when you get this status code, as shown in the sketch after this list.
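+A minimal retry sketch, assuming a plain `requests`-based client and using the `Retry-After` header when the service returns it:
+
+```python
+# Hedged sketch of client-side retry with backoff for HTTP 429 (throttling) responses.
+import time
+import requests
+
+def predict_with_retry(url, params, headers, max_retries=3):
+    """Call the prediction endpoint, backing off when throttled."""
+    response = None
+    for attempt in range(max_retries):
+        response = requests.get(url, params=params, headers=headers)
+        if response.status_code != 429:
+            return response
+        # Honor Retry-After when present; otherwise back off exponentially.
+        delay = int(response.headers.get("Retry-After", 2 ** attempt))
+        time.sleep(delay)
+    return response
+```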
+
+## Why does LUIS add spaces to the query around or in the middle of words?
+
+LUIS [tokenizes](luis-glossary.md#token) the utterance based on the [culture](luis-language-support.md#tokenization). Both the original value and the tokenized value are available for [data extraction](luis-concept-data-extraction.md#tokenized-entity-returned).
+
+## What do I do when I expect LUIS requests to go beyond the quota?
+
+LUIS has a monthly quota and a per-second quota, based on the pricing tier of the Azure resource.
+
+If your LUIS app request rate exceeds the allowed [quota rate](https://azure.microsoft.com/pricing/details/cognitive-services/language-understanding-intelligent-services/), you can:
+
+* Spread the load to more LUIS apps with the [same app definition](how-to/improve-application.md). This includes, optionally, running LUIS from a [container](./luis-container-howto.md).
+* Create and [assign multiple keys](how-to/improve-application.md) to the app.
+
+## Can I Use multiple apps with same app definition?
+
+Yes, export the original LUIS app and import the app back into separate apps. Each app has its own app ID. When you publish, instead of using the same key across all apps, create a separate key for each app. Balance the load across all apps so that no single app is overwhelmed. Add [Application Insights](/azure/bot-service/bot-builder-howto-v4-luis) to monitor usage.
+
+To get the same top intent between all the apps, make sure the prediction score gap between the first and second intent is wide enough that LUIS isn't confused and doesn't give different results between apps for minor variations in utterances.
+
+When training these apps, make sure to [train with all data](how-to/train-test.md).
+
+Designate a single main app. Any utterances that are suggested for review should be added to the main app, then moved back to all the other apps. This is either a full export of the app, or loading the labeled utterances from the main app to the other apps. Loading can be done from either the [LUIS](./luis-reference-regions.md) website or the authoring API for a [single utterance](https://westus.dev.cognitive.microsoft.com/docs/services/5890b47c39e2bb17b84a55ff/operations/5890b47c39e2bb052c5b9c08) or for a [batch](https://westus.dev.cognitive.microsoft.com/docs/services/5890b47c39e2bb17b84a55ff/operations/5890b47c39e2bb052c5b9c09).
+
+Schedule a periodic review, such as every two weeks, of [endpoint utterances](how-to/improve-application.md) for active learning, then retrain and republish the app.
+
+## How do I download a log of user utterances?
+
+By default, your LUIS app logs utterances from users. To download a log of utterances that users send to your LUIS app, go to **My Apps**, and select the app. In the contextual toolbar, select **Export Endpoint Logs**. The log is formatted as a comma-separated value (CSV) file.
+
+## How can I disable the logging of utterances?
+
+You can turn off the logging of user utterances by setting `log=false` in the Endpoint URL that your client application uses to query LUIS. However, turning off logging disables your LUIS app's ability to suggest utterances or improve performance that's based on [active learning](how-to/improve-application.md). If you set `log=false` because of data-privacy concerns, you can't download a record of those user utterances from LUIS or use those utterances to improve your app.
+
+Logging is the only storage of utterances.
+
+## Why don't I want all my endpoint utterances logged?
+
+If you are using your log for prediction analysis, do not capture test utterances in your log.
+
+## What are the supported languages?
+
+See [supported languages](luis-language-support.md). For multilingual NLU, consider using the new [Conversational Language Understanding (CLU)](../language-service/conversational-language-understanding/overview.md) feature of the Language service.
+
+## Is Language Understanding (LUIS) available on-premises or in a private cloud?
+
+Yes, you can use the LUIS [container](luis-container-howto.md) for these scenarios if you have the necessary connectivity to meter usage.
+
+## How do I integrate LUIS with Azure AI Bot Services?
+
+Use this [tutorial](/composer/how-to-add-luis) to integrate a LUIS app with a bot.
ai-services Get Started Get Model Rest Apis https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/LUIS/get-started-get-model-rest-apis.md
+
+ Title: "How to update your LUIS model using the REST API"
+
+description: In this article, add example utterances to change a model and train the app.
++++
+ms.devlang: csharp, golang, java, javascript, python
++++ Last updated : 11/30/2020
+zone_pivot_groups: programming-languages-set-one
+#Customer intent: As an API developer familiar with REST but new to the LUIS service, I want to query the LUIS endpoint of a published model so that I can see the JSON prediction response.
++
+# How to update the LUIS model with REST APIs
+++
+In this article, you will add example utterances to a Pizza app and train the app. Example utterances are conversational user text mapped to an intent. By providing example utterances for intents, you teach LUIS what kinds of user-supplied text belong to which intent.
+++++
ai-services How To Application Settings Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/LUIS/how-to-application-settings-portal.md
+
+ Title: "Application settings"
+description: Configure your application and version settings in the LUIS portal such as utterance normalization and app privacy.
++++++ Last updated : 11/30/2020++
+# Application and version settings
+++
+Configure your application settings in the LUIS portal such as utterance normalization and app privacy.
+
+## View application name, description, and ID
+
+You can edit your application name and description. You can copy your App ID. The culture can't be changed.
+
+1. Sign into the [LUIS portal](https://www.luis.ai).
+1. Select an app from the **My apps** list.
+.
+1. Select **Manage** from the top navigation bar, then **Settings** from the left navigation bar.
+
+> [!div class="mx-imgBorder"]
+> ![Screenshot of LUIS portal, Manage section, Application Settings page](media/app-settings/luis-portal-manage-section-application-settings.png)
++
+## Change application settings
+
+To change a setting, select the toggle on the page.
++
+## Change version settings
+
+To change a setting, select the toggle on the page.
++
+## Next steps
+
+* How to [collaborate](luis-how-to-collaborate.md) with other authors
+* [Publish settings](how-to/publish.md#configure-publish-settings)
ai-services Entities https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/LUIS/how-to/entities.md
+
+ Title: How to use entities in LUIS
+description: Learn how to use entities with LUIS.
+++
+ms.
++ Last updated : 01/05/2022++
+# Add entities to extract data
+++
+Create entities to extract key data from user utterances in Language Understanding (LUIS) apps. Extracted entity data is used by your client application to fulfill customer requests.
+
+The entity represents a word or phrase inside the utterance that you want extracted. Entities describe information relevant to the intent, and sometimes they are essential for your app to perform its task.
+
+## How to create a new entity
+
+The following process works for [machine learned entities](../concepts/entities.md#machine-learned-ml-entity), [list entities](../concepts/entities.md#list-entity), and [regular expression entities](../concepts/entities.md#regex-entity).
+
+1. Sign in to the [LUIS portal](https://www.luis.ai/), and select your **Subscription** and **Authoring resource** to see the apps assigned to that authoring resource.
+2. Open your app by selecting its name on **My Apps** page.
+3. Select **Build** from the top navigation menu, then select **Entities** from the left panel. Select **+ Create**, then select the entity type.
+4. Continue configuring the entity. Select **Create** when you are done.
+
+## Create a machine learned entity
+Following the pizza example, we would need to create a "PizzaOrder" entity to extract pizza orders from utterances.
+
+1. Select **Build** from the top navigation menu, then select **Entities** from the left panel
+2. In the **Create an entity type** dialog box, enter the name of the entity and select [**Machine learned**](../concepts/entities.md#machine-learned-ml-entity). To add subentities, select **Add structure**. Then select **Create**.
+
+ :::image type="content" source="../media/add-entities/machine-learned-entity-with-structure.png" alt-text="A screenshot creating a machine learned entity." lightbox="../media/add-entities/machine-learned-entity-with-structure.png":::
+
+ A pizza order might include many details, like quantity and type. To add these details, we would create a subentity.
+
+3. In **Add subentities** , add a subentity by selecting the **+** on the parent entity row.
+
+ :::image type="content" source="../media/add-entities/machine-learned-entity-with-subentities.png" alt-text="A screenshot of adding subentities." lightbox="../media/add-entities/machine-learned-entity-with-subentities.png":::
+
+4. Select **Create** to finish the creation process.
+
+## Add a feature to a machine learned entity
+Some entities include many details. Imagine a "PizzaOrder" entity; it may include "_ToppingModifiers_" or "_FullPizzaWithModifiers_". These could be added as features to a machine learned entity.
+
+1. Select **Build** from the top navigation bar, then select **Entities** from the left panel.
+2. Add a feature by selecting **+ Add feature** on the entity or subentity row.
+3. Select one of the existing entities and phrase lists.
+4. If the entity should only be extracted if the feature is found, select the asterisk for that feature.
+
+ :::image type="content" source="../media/add-entities/machine-learned-entity-schema-with-features.png" alt-text="A screenshot of adding feature to entity." lightbox="../media/add-entities/machine-learned-entity-schema-with-features.png":::
+
+## Create a regular expression entity
+For extracting structured text or a predefined sequence of alphanumeric values, use regular expression entities. For example, _OrderNumber_ could be predefined as exactly five characters, each a digit from 0 to 9.
+
+1. Select **Build** from the top navigation bar, then select **Entities** from the left panel.
+2. Select **+ Create**.
+3. In the **Create an entity type** dialog box, enter the name of the entity and select **RegEx**, enter the regular expression in the **Regex** field, and select **Create**.
+
+ :::image type="content" source="../media/add-entities/add-regular-expression-entity.png" alt-text="A screenshot of creating a regular expression entity." lightbox="../media/add-entities/add-regular-expression-entity.png":::
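+To sanity-check a pattern like the _OrderNumber_ example above before entering it in the **Regex** field, you can try it locally. This is only an illustrative sketch, not part of the portal workflow.
+
+```python
+# Illustrative check of the five-digit OrderNumber pattern described above.
+import re
+
+order_number = re.compile(r"[0-9]{5}")  # exactly five digits, 0-9
+
+print(bool(order_number.fullmatch("12345")))  # True
+print(bool(order_number.fullmatch("1234a")))  # False
+```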
+
+## Create a list entity
+
+List entities represent a fixed, closed set of related words. While you, as the author, can change the list, LUIS won't grow or shrink the list. You can also import to an existing list entity using a [list entity .json format](../reference-entity-list.md#example-json-to-import-into-list-entity).
+
+Use the following procedure to create a list entity. Once the list entity is created, you don't need to label example utterances in an intent. List items and synonyms are matched using exact text. A "_Size_" entity could be of type list, and it would include different sizes like "_small_", "_medium_", "_large_", and "_family_".
+
+1. From the **Build** section, select **Entities** in the left panel, and then select **+ Create**.
+2. In the **Create an entity type** dialog box, enter the name of the entity, such as _Size_ and select **List**.
+3. In the **Create a list entity** dialog box, in **Add new sublist...**, enter the list item name, such as _large_. You can also add synonyms to a list item, like _huge_ and _mega_ for the item _large_.
+
+ :::image type="content" source="../media/add-entities/create-list-entity-colors.png" alt-text="Create a list of sizes as a list entity in the Entity detail page." lightbox="../media/add-entities/create-list-entity-colors.png":::
+
+4. When you are finished adding list items and synonyms, select **Create**.
+
+When you are done with a group of changes to the app, remember to **Train** the app. Do not train the app after a single change.
+
+> [!NOTE]
+> This procedure demonstrates creating and labeling a list entity from an example utterance in the **Intent detail** page. You can also create the same entity from the **Entities** page.
+
+## Add a prebuilt domain entity
+
+1. Select **Entities** in the left side.
+2. On the **Entities** page, select **Add prebuilt domain entity**.
+3. In **Add prebuilt domain models** dialog box, select the prebuilt domain entity.
+4. Select **Done**. After the entity is added, you do not need to train the app.
+
+## Add a prebuilt entity
+To recognize common types of information, add a [prebuilt entity](../concepts/entities.md#prebuilt-entities)
+1. Select **Entities** in the left side.
+2. On the **Entities** page, select **Add prebuilt entity**.
+3. In **Add prebuilt entities** dialog box, select the prebuilt entity.
+
+ :::image type="content" source="../media/luis-prebuilt-domains/add-prebuilt-entity.png" alt-text="A screenshot showing the dialog box for a prebuilt entity." lightbox="../media/luis-prebuilt-domains/add-prebuilt-entity.png":::
+
+4. Select **Done**. After the entity is added, you do not need to train the app.
+
+## Add a role to distinguish different contexts
+A role is a named subtype of an entity, based on context. In the following utterance, there are two locations, and each is specified semantically by the words around it such as to and from:
+
+_Pick up the pizza order from Seattle and deliver to New York City._
+
+In this procedure, add origin and destination roles to a prebuilt geographyV2 entity.
+
+1. From the **Build** section, select **Entities** in the left panel.
+2. Select **+ Add prebuilt entity**. Select **geographyV2** then select **Done**. A prebuilt entity will be added to the app.
+
+If you find that your pattern, when it includes a Pattern.any, extracts entities incorrectly, use an [explicit list](../reference-pattern-syntax.md#explicit-lists) to correct this problem.
+
+1. Select the newly added prebuilt geographyV2 entity from the **Entities** page list of entities.
+2. To add a new role, select **+** next to **No roles added**.
+3. In the **Type role...** text box, enter the name of the role, Origin, and then press Enter. Add a second role named Destination, and then press Enter.
+
+ :::image type="content" source="../media/how-to-add-entities/add-role-to-prebuilt-geographyv2-entity.png" alt-text="A screenshot showing how to add an origin role to a location entity." lightbox="../media/how-to-add-entities//add-role-to-prebuilt-geographyv2-entity.png":::
+
+The role is added to the prebuilt entity but isn't added to any utterances using that entity.
+
+## Create a pattern.any entity
+Patterns are designed to improve accuracy when multiple utterances are very similar. A pattern allows you to gain more accuracy for an intent without providing several more utterances. The [**Pattern.any**](../concepts/entities.md#patternany-entity) entity is only available with patterns. See the [patterns article](../concepts/patterns-features.md) for more information.
+
+## Next steps
+
+* [Label your example utterances](label-utterances.md)
+* [Train and test your application](train-test.md)
ai-services Improve Application https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/LUIS/how-to/improve-application.md
+
+ Title: How to improve LUIS application
+description: Learn how to improve LUIS application
+++
+ms.
++ Last updated : 01/07/2022++
+# How to improve a LUIS app
+++
+Use this article to learn how you can improve your LUIS apps, such as reviewing for correct predictions, and working with optional text in utterances.
+
+## Active Learning
+
+The process of reviewing endpoint utterances for correct predictions is called Active learning. Active learning captures queries that are sent to the endpoint, and selects user utterances that it is unsure of. You review these utterances to select the intent and mark the entities for these real-world utterances. Then you can accept these changes into your app's example utterances, then [train](./train-test.md) and [publish](./publish.md) the app. This helps LUIS identify utterances more accurately.
+
+## Log user queries to enable active learning
+
+To enable active learning, you must log user queries. This is accomplished by calling the [endpoint query](../luis-get-started-create-app.md#query-the-v3-api-prediction-endpoint) with the `log=true` query string parameter and value.
+
+> [!Note]
+> To disable active learning, don't log user queries. You can change the query parameters by setting log=false in the endpoint query or omit the log parameter because the default value is false for the V3 endpoint.
+
+Use the LUIS portal to construct the correct endpoint query.
+
+1. Sign in to the [LUIS portal](https://www.luis.ai/), and select your **Subscription** and **Authoring resource** to see the apps assigned to that authoring resource.
+2. Open your app by selecting its name on **My Apps** page.
+3. Go to the **Manage** section, then select **Azure resources**.
+4. For the assigned prediction resource, select **Change query parameters**
++
+5. Toggle **Save logs** then save by selecting **Done**.
++
+This action changes the example URL by adding the `log=true` query string parameter. Copy and use the changed example query URL when making prediction queries to the runtime endpoint.
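+For instance, a prediction call with logging enabled might look like this minimal sketch; the endpoint, app ID, and key are placeholders.
+
+```python
+# Sketch: a V3 prediction request with log=true so the utterance is captured for review.
+import requests
+
+endpoint = "https://your-resource-name.api.cognitive.microsoft.com"   # placeholder
+app_id = "00000000-0000-0000-0000-000000000000"                       # placeholder
+headers = {"Ocp-Apim-Subscription-Key": "your-prediction-key"}        # placeholder
+params = {"query": "cancel my pizza order", "log": "true"}            # log=true enables active learning
+
+response = requests.get(
+    f"{endpoint}/luis/prediction/v3.0/apps/{app_id}/slots/production/predict",
+    params=params,
+    headers=headers,
+)
+print(response.json())
+```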
+
+## Correct predictions to align utterances
+
+Each utterance has a suggested intent displayed in the **Predicted Intent** column, and the suggested entities in dotted bounding boxes.
++
+If you agree with the predicted intent and entities, select the check mark next to the utterance. If the check mark is disabled, this means that there is nothing to confirm.
+If you disagree with the suggested intent, select the correct intent from the predicted intent's drop-down list. If you disagree with the suggested entities, start labeling them. After you are done, select the check mark next to the utterance to confirm what you labeled. Select **save utterance** to move it from the review list and add it to its respective intent.
+
+If you are unsure whether you should delete the utterance, either move it to the "*None*" intent, or create a new intent such as *miscellaneous* and move the utterance to it.
+
+## Working with optional text and prebuilt entities
+
+Suppose you have a Human Resources app that handles queries about an organization's personnel. It might allow for current and future dates in the utterance text - text that uses `s`, `'s`, and `?`.
+
+If you create an "*OrganizationChart*" intent, you might consider the following example utterances:
+
+|Intent|Example utterances with optional text and prebuilt entities|
+|:--|:--|
+|OrgChart-Manager|"Who was Jill Jones manager on March 3?"|
+|OrgChart-Manager|"Who is Jill Jones manager now?"|
+|OrgChart-Manager|"Who will be Jill Jones manager in a month?"|
+|OrgChart-Manager|"Who will be Jill Jones manager on March 3?"|
+
+Each of these examples uses:
+* A verb tense: "_was_", "_is_", "_will be_"
+* A date: "_March 3_", "_now_", "_in a month_"
+
+LUIS needs these to make predictions correctly. Notice that the last two examples in the table use almost the same text except for "_in_" and "_on_".
+
+Using patterns, the following example template utterances would allow for optional information:
+
+|Intent|Example utterances with optional text and prebuilt entities|
+|:--|:--|
+|OrgChart-Manager|Who was {EmployeeListEntity}['s] manager [[on]{datetimeV2}?]|
+|OrgChart-Manager|Who is {EmployeeListEntity}['s] manager [[on]{datetimeV2}?]|
+
+The optional square brackets syntax "*[ ]*" lets you add optional text to the template utterance and can be nested in a second level "*[ [ ] ]*" and include entities or text.
+
+> [!CAUTION]
+> Remember that entities are found first, then the pattern is matched.
+
+### Next steps
+
+To test how performance improves, you can access the test console by selecting **Test** in the top panel. For instructions on how to test your app using the test console, see [Train and test your app](train-test.md).
ai-services Intents https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/LUIS/how-to/intents.md
+
+ Title: How to use intents in LUIS
+description: Learn how to use intents with LUIS.
+++
+ms.
++ Last updated : 01/07/2022+++
+# Add intents to determine user intention of utterances
+++
+Add [intents](../concepts/intents.md) to your LUIS app to identify groups of questions or commands that have the same intention.
+
+## Add an intent to your app
+
+1. Sign in to the [LUIS portal](https://www.luis.ai/), and select your **Subscription** and **Authoring resource** to see the apps assigned to that authoring resource.
+2. Open your app by selecting its name on the **My Apps** page.
+3. Select **Build** from the top navigation bar, then select **Intents** from the left panel.
+4. On the **Intents** page, select **+ Create**.
+5. In the **Create new intent** dialog box, enter the intent name, for example *ModifyOrder*, and select **Done**.
+
+ :::image type="content" source="../media/luis-how-to-add-intents/Addintent-dialogbox.png" alt-text="A screenshot showing the add intent dialog box." lightbox="../media/luis-how-to-add-intents/Addintent-dialogbox.png":::
+
+The intent needs [example utterances](../concepts/utterances.md) in order to predict utterances at the published prediction endpoint.
+
+## Add an example utterance
+
+Example utterances are text examples of user questions or commands. To teach Language Understanding (LUIS) when to predict the intent, you need to add example utterances. Carefully consider each utterance you add. Each utterance added should be different from the examples already added to the intent.
+
+On the intent details page, enter a relevant utterance you expect from your users, such as "*I want to change my pizza order to large please*" in the text box below the intent name, and then press Enter.
+
+
+LUIS converts all utterances to lowercase and adds spaces around [tokens](../luis-language-support.md#tokenization), such as hyphens.
+
+## Intent prediction errors
+
+An intent prediction error occurs when the trained app doesn't predict the labeled intent for an example utterance.
+
+1. To find utterance prediction errors and fix them, use the **Incorrect** and **Unclear** filter options.
+
+ :::image type="content" source="../media/luis-how-to-add-intents/find-intent-prediction-errors.png" alt-text="A screenshot showing how to find and fix utterance prediction errors, using the filter option." lightbox="../media/luis-how-to-add-intents/find-intent-prediction-errors.png":::
+
+2. To display the score value on the Intent details page, select **Show details intent scores** from the **View** menu.
+
+When the filters and view are applied and there are example utterances with errors, the example utterance list will show the utterances and the issues.
+
+Each row shows the current training's prediction score for the example utterance, and the nearest other intent score, which is the difference between these two scores.
+
+> [!Tip]
+> To fix intent prediction errors, use the [Summary dashboard](../luis-how-to-use-dashboard.md). The summary dashboard provides analysis for the active version's last training and offers the top suggestions to fix your model.
+
+## Add a prebuilt intent
+
+Now imagine you want to quickly create a confirmation intent. You can use one of the prebuilt intents to create a confirmation intent.
+
+1. On the **Intents** page, select **Add prebuilt domain intent** from the toolbar above the intents list.
+2. Select an intent from the pop-up dialog.
+ :::image type="content" source="../media/luis-prebuilt-domains/add-prebuilt-domain-intents.png" alt-text="A screenshot showing the menu for adding prebuilt intents." lightbox="../media/luis-prebuilt-domains/add-prebuilt-domain-intents.png":::
+
+3. Select the **Done** button.
+
+## Next steps
+
+* [Add entities](entities.md)
+* [Label entities](label-utterances.md)
+* [Train and test](train-test.md)
ai-services Label Utterances https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/LUIS/how-to/label-utterances.md
+
+ Title: How to label example utterances in LUIS
+description: Learn how to label example utterance in LUIS.
+++
+ms.
++ Last updated : 01/05/2022++
+# How to label example utterances
+++
+Labeling an entity in an example utterance gives LUIS an example of what the entity is and where the entity can appear in the utterance. You can label machine-learned entities and subentities.
+
+You only label machine-learned entities and sub-entities. Other entity types can be added as features to them when applicable.
+
+## Label example utterances from the Intent detail page
+
+To label examples of entities within the utterance, select the utterance's intent.
+
+1. Sign in to the [LUIS portal](https://www.luis.ai/), and select your **Subscription** and **Authoring resource** to see the apps assigned to that authoring resource.
+2. Open your app by selecting its name on **My Apps** page.
+3. Select the Intent that has the example utterances you want to label for extraction with an entity.
+4. Select the text you want to label then select the entity.
+
+### Two techniques to label entities
+
+Two labeling techniques are supported on the Intent detail page.
+
+* Select the entity or subentity from the [Entity Palette](../how-to/entities.md), then select within the example utterance text. This is the recommended technique because you can visually verify you are working with the correct entity or subentity, according to your schema.
+* Select within the example utterance text first. A menu will appear with labeling choices.
+
+### Label with the Entity Palette visible
+
+After you've [planned your schema with entities](../concepts/application-design.md), keep the **Entity palette** visible while labeling. The **Entity palette** is a reminder of what entities you planned to extract.
+
+To access the **Entity Palette** , select the **@** symbol in the contextual toolbar above the example utterance list.
++
+### Label entity from Entity Palette
+
+The entity palette offers an alternative to the previous labeling experience. It allows you to brush over text to instantly label it with an entity.
+
+1. Open the entity palette by selecting on the **@** symbol at the top right of the utterance table.
+2. Select the entity from the palette that you want to label. This action is visually indicated with a new cursor. The cursor follows the mouse as you move in the LUIS portal.
+3. In the example utterance, _paint_ the entity with the cursor.
+
+ :::image type="content" source="../media/label-utterances/example-1-label-machine-learned-entity-palette-label-action.png" alt-text="A screenshot showing an entity painted with the cursor." lightbox="../media/label-utterances/example-1-label-machine-learned-entity-palette-label-action.png":::
+
+## Add entity as a feature from the Entity Palette
+
+The Entity Palette's lower section allows you to add features to the currently selected entity. You can select from all existing entities and phrase lists or create a new phrase list.
++
+### Label text with a role in an example utterance
+
+> [!TIP]
+> Roles can be replaced by labeling with subentities of a machine-learned entity.
+
+1. Go to the Intent details page, which has example utterances that use the role.
+2. To label with the role, select the entity label (solid line under text) in the example utterance, then select **View in entity pane** from the drop-down list.
+
+ :::image type="content" source="../media/add-entities/view-in-entity-pane.png" alt-text="A screenshot showing the view in entity menu." lightbox="../media/add-entities/view-in-entity-pane.png":::
+
+ The entity palette opens to the right.
+
+3. Select the entity, then go to the bottom of the palette and select the role.
+
+ :::image type="content" source="../media/add-entities/select-role-in-entity-palette.png" alt-text="A screenshot showing where to select a role." lightbox="../media/add-entities/select-role-in-entity-palette.png":::
++
+## Label entity from in-place menu
+
+Labeling in-place allows you to quickly select the text within the utterance and label it. You can also create a machine learning entity or list entity from the labeled text.
+
+Consider the example utterance: "hi, please i want a cheese pizza in 20 minutes".
+
+Select the left-most text, then select the right-most text of the entity. In the menu that appears, pick the entity you want to label.
++
+## Review labeled text
+
+After labeling, review the example utterance and ensure the selected span of text has been underlined with the chosen entity. The solid line indicates the text has been labeled.
++
+## Confirm predicted entity
+
+If there is a dotted-lined box around the span of text, it indicates the text is predicted but _not labeled yet_. To turn the prediction into a label, select the utterance row, then select **Confirm entities** from the contextual toolbar.
+
+<!--:::image type="content" source="../media/add-entities/prediction-confirm.png" alt-text="A screenshot showing confirming prediction." lightbox="../media/add-entities/prediction-confirm.png":::-->
+
+> [!Note]
+> You do not need to label for punctuation. Use [application settings](../luis-reference-application-settings.md) to control how punctuation impacts utterance predictions.
++
+## Unlabel entities
+
+> [!NOTE]
+> Only machine learned entities can be unlabeled. You can't label or unlabel regular expression entities, list entities, or prebuilt entities.
+
+To unlabel an entity, select the entity and select **Unlabel** from the in-place menu.
++
+## Automatic labeling for parent and child entities
+
+If you are labeling for a subentity, the parent will be labeled automatically.
+
+## Automatic labeling for non-machine learned entities
+
+Non-machine learned entities include prebuilt entities, regular expression entities, list entities, and pattern.any entities. These are automatically labeled by LUIS so they are not required to be manually labeled by users.
+
+## Entity prediction errors
+
+Entity prediction errors indicate the predicted entity doesn't match the labeled entity. This is visualized with a caution indicator next to the utterance.
++
+## Next steps
+
+[Train and test your application](train-test.md)
ai-services Orchestration Projects https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/LUIS/how-to/orchestration-projects.md
+
+ Title: Use LUIS and question answering
+description: Learn how to use LUIS and question answering using orchestration.
+++
+ms.
++ Last updated : 05/23/2022++
+# Combine LUIS and question answering capabilities
+++
+Azure AI services provides two natural language processing services, [Language Understanding](../what-is-luis.md) (LUIS) and question answering, each with a different purpose. Understand when to use each service and how they complement each other.
+
+Natural language processing (NLP) allows your client application, such as a chat bot, to work with your users' natural language.
+
+## When to use each feature
+
+LUIS and question answering solve different problems. LUIS determines the intent of a user's text (known as an utterance), while question answering determines the answer to a user's text (known as a query).
+
+To pick the correct service, you need to understand the user text coming from your client application, and what information it needs to get from the Azure AI service features.
+
+As an example, if your chat bot receives the text "How do I get to the Human Resources building on the Seattle north campus", use the table below to understand how each service works with the text.
++
+| Service | Client application determines |
+|||
+| LUIS | Determines user's intention of text - the service doesn't return the answer to the question. For example, this text would be classified as matching a "FindLocation" intent.|
+| Question answering | Returns the answer to the question from a custom knowledge base. For example, this text would be determined as a question, with the static text answer being "Get on the #9 bus and get off at Franklin street". |
+
+## Create an orchestration project
+
+Orchestration helps you connect more than one project and service together. Each connection in the orchestration is represented by a type and relevant data. The intent needs to have a name, a project type (LUIS, question answering, or conversational language understanding), and the name of the project you want to connect to.
+
+You can use orchestration workflow to create new orchestration projects. See [orchestration workflow](../../language-service/orchestration-workflow/how-to/create-project.md) for more information.
+
+## Set up orchestration between Azure AI services features
+
+To use an orchestration project to connect LUIS, question answering, and conversational language understanding, you need:
+
+* A language resource in [Language Studio](https://language.azure.com/) or the Azure portal.
+* To change your LUIS authoring resource to the Language resource. You can also optionally export your application from LUIS, and then [import it into conversational language understanding](../../language-service/orchestration-workflow/how-to/create-project.md#import-an-orchestration-workflow-project).
+
+>[!Note]
+>LUIS can be used with Orchestration projects in West Europe only, and requires the authoring resource to be a Language resource. You can either import the application in the West Europe Language resource or change the authoring resource from the portal.
+
+## Change a LUIS resource to a language resource:
+
+Follow these steps to change the LUIS authoring resource to a Language resource:
+
+1. Log in to the [LUIS portal](https://www.luis.ai/).
+2. From the list of LUIS applications, select the application you want to change to a Language resource.
+3. From the menu at the top of the screen, select **Manage**.
+4. From the left menu, select **Azure resource**.
+5. Select **Authoring resource**, then change your LUIS authoring resource to the Language resource.
++
+## Next steps
+
+* [Conversational language understanding documentation](../../language-service/conversational-language-understanding/how-to/create-project.md)
ai-services Publish https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/LUIS/how-to/publish.md
+
+ Title: Publish
+description: Learn how to publish.
+++
+ms.
++ Last updated : 12/14/2021++
+# Publish your active, trained app
+++
+When you finish building, training, and testing your active LUIS app, you make it available to your client application by publishing it to an endpoint.
+
+## Publishing
+
+1. Sign in to the [LUIS portal](https://www.luis.ai/), and select your **Subscription** and **Authoring resource** to see the apps assigned to that authoring resource.
+2. Open your app by selecting its name on **My Apps** page.
+3. To publish to the endpoint, select **Publish** in the top-right corner of the panel.
+
+ :::image type="content" source="../media/luis-how-to-publish-app/publish-top-nav-bar.png" alt-text="A screenshot showing the navigation bar at the top of the screen, with the publish button in the top-right." lightbox="../media/luis-how-to-publish-app/publish-top-nav-bar.png":::
+
+1. Select your settings for the published prediction endpoint, then select **Publish**.
++
+## Publishing slots
+
+Select the correct slot when the pop-up window displays:
+
+* Staging
+* Production
+
+By using both publishing slots, you can have two different versions of your app available at the published endpoints, or the same version on two different endpoints.
+
+## Publish in more than one region
+
+The app is published to all regions associated with the LUIS prediction resources. You can find your LUIS prediction resources in the LUIS portal by clicking **Manage** from the top navigation menu, and selecting [Azure Resources](../luis-how-to-azure-subscription.md#assign-luis-resources).
+
+For example, if you add two prediction resources to an application in two regions, **westus** and **eastus**, the app is published in both regions. For more information about LUIS regions, see [Regions](../luis-reference-regions.md).
+
+## Configure publish settings
+
+After you select the slot, configure the publish settings for:
+
+* Sentiment analysis:
+Sentiment analysis allows LUIS to integrate with the Language service to provide sentiment and key phrase analysis. You do not have to provide a Language service key and there is no billing charge for this service to your Azure account. See [Sentiment analysis](../luis-reference-prebuilt-sentiment.md) for more information about the sentiment analysis JSON endpoint response.
+
+* Speech priming:
+Speech priming is the process of sending the LUIS model to the Speech service before speech is converted to text. This allows the Speech service to convert speech to text more accurately for your model, and it allows Speech and LUIS requests and responses to be combined in one call: you make one speech call and get back a LUIS response. It provides less latency overall.
+
+After you publish, these settings are available for review from the **Manage** section's **Publish settings** page. You can change the settings with every publish. If you cancel a publish, any changes you made during the publish are also canceled.
+
+## When your app is published
+
+When your app is successfully published, a success notification will appear at the top of the browser. The notification also includes a link to the endpoints.
+
+If you need the endpoint URL, select the link or select **Manage** in the top menu, then select **Azure Resources** in the left menu.
++
+## Next steps
+
+- See [Manage keys](../luis-how-to-azure-subscription.md) to add keys to LUIS.
+- See [Train and test your app](train-test.md) for instructions on how to test your published app in the test console.
ai-services Sign In https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/LUIS/how-to/sign-in.md
+
+ Title: Sign in to the LUIS portal and create an app
+description: Learn how to sign in to LUIS and create application.
+
+ Last updated : 07/19/2022
+
+# Sign in to the LUIS portal and create an app
+++
+Use this article to get started with the LUIS portal, and create an authoring resource. After completing the steps in this article, you will be able to create and publish LUIS apps.
+
+## Access the portal
+
+1. To get started with LUIS, go to the [LUIS Portal](https://www.luis.ai/). If you do not already have a subscription, you will be prompted to create a [free account](https://azure.microsoft.com/free/cognitive-services/) and return to the portal.
+2. Refresh the page to update it with your newly created subscription.
+3. Select your subscription from the dropdown list.
++
+4. If your subscription lives under another tenant, you will not be able to switch tenants from the existing window. You can switch tenants by closing this window and selecting the avatar containing your initials in the top-right section of the screen. Select **Choose a different authoring resource** from the top to reopen the window.
++
+5. If you have an existing LUIS authoring resource associated with your subscription, choose it from the dropdown list. You can view all applications that are created under this authoring resource.
+6. If not, then select **Create a new authoring resource** at the bottom of this modal.
+7. When creating a new authoring resource, provide the following information:
++
+* **Tenant Name** - the tenant your Azure subscription is associated with. You will not be able to switch tenants from the existing window. You can switch tenants by closing this window and selecting the avatar at the top-right corner of the screen, containing your initials. Select **Choose a different authoring resource** from the top to reopen the window.
+* **Azure Resource group name** - a custom resource group name you choose in your subscription. Resource groups allow you to group Azure resources for access and management. If you currently do not have a resource group in your subscription, you will not be allowed to create one in the LUIS portal. Go to [Azure portal](https://portal.azure.com/#create/Microsoft.ResourceGroup) to create one then go to LUIS to continue the sign-in process.
+* **Azure Resource name** - a custom name you choose, used as part of the URL for your authoring transactions. Your resource name can only include alphanumeric characters, `-`, and can't start or end with `-`. If any other symbols are included in the name, creating a resource will fail.
+* **Location** - Choose to author your applications in one of the [three authoring locations](../luis-reference-regions.md) that are currently supported by LUIS: West US, West Europe, and Australia East.
+* **Pricing tier** - By default, the F0 authoring pricing tier is selected, as it is the recommended tier. Create a [customer managed key](../encrypt-data-at-rest.md#customer-managed-keys-for-language-understanding) from the Azure portal if you are looking for an extra layer of security.
+
+8. Now you have successfully signed in to LUIS. You can now start creating applications.
+
+>[!Note]
+> * When creating a new resource, make sure that the resource name only includes alphanumeric characters, '-', and can't start or end with '-'. Otherwise, it will fail.
++
+## Create a new LUIS app
+There are a couple of ways to create a LUIS app. You can create a LUIS app in the LUIS portal, or through the LUIS authoring [APIs](../developer-reference-resource.md).
+
+**Using the LUIS portal** You can create a new app in the portal in several ways:
+* Start with an empty app and create intents, utterances, and entities.
+* Start with an empty app and add a [prebuilt domain](../luis-concept-prebuilt-model.md).
+* Import a LUIS app from a .lu or .json file that already contains intents, utterances, and entities.
+
+**Using the authoring APIs** You can create a new app with the authoring APIs in a couple of ways:
+* [Add application](https://westeurope.dev.cognitive.microsoft.com/docs/services/luis-programmatic-apis-v3-0-preview/operations/5890b47c39e2bb052c5b9c2f) - start with an empty app and create intents, utterances, and entities.
+* [Add prebuilt application](https://westeurope.dev.cognitive.microsoft.com/docs/services/luis-programmatic-apis-v3-0-preview/operations/59104e515aca2f0b48c76be5) - start with a prebuilt domain, including intents, utterances, and entities.
+
+## Create new app in LUIS using portal
+1. On the **My Apps** page, select your **Subscription** and **Authoring resource**, then select **+ New App**.
+
+1. In the dialog box, enter the name of your application, such as Pizza Tutorial.
+2. Choose your application culture, and then select **Done**. The description and prediction resource are optional at this point. You can set them at any time in the **Manage** section of the portal.
+ >[!NOTE]
+ > The culture cannot be changed once the application is created.
+
+ After the app is created, the LUIS portal shows the **Intents** list with the None intent already created for you. You now have an empty app.
+
+ :::image type="content" source="../media/pizza-tutorial-new-app-empty-intent-list.png" alt-text="Intents list with a None intent and no example utterances" lightbox="../media/pizza-tutorial-new-app-empty-intent-list.png":::
+
+
+## Next steps
+
+If your app design includes intent detection, [create new intents](intents.md), and add example utterances. If your app design is only data extraction, add example utterances to the None intent, then [create entities](entities.md), and label the example utterances with those entities.
ai-services Train Test https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/LUIS/how-to/train-test.md
+
+ Title: How to use train and test
+description: Learn how to train and test the application.
+
+ Last updated : 01/10/2022
+
+# Train and test your LUIS app
+++
+Training is the process of teaching your Language Understanding (LUIS) app to extract intent and entities from user utterances. Training comes after you make updates to the model, such as: adding, editing, labeling, or deleting entities, intents, or utterances.
+
+Training and testing an app is an iterative process. After you train your LUIS app, you test it with sample utterances to see if the intents and entities are recognized correctly. If they're not, you should make updates to the LUIS app, then train and test again.
+
+Training is applied to the active version in the LUIS portal.
+
+## How to train interactively
+
+Before you start training your app in the [LUIS portal](https://www.luis.ai/), make sure every intent has at least one utterance. You must train your LUIS app at least once to test it.
+
+1. Access your app by selecting its name on the **My Apps** page.
+2. In your app, select **Train** in the top-right part of the screen.
+3. When training is complete, a notification appears at the top of the browser.
+
+>[!Note]
+>The training dates and times are in GMT + 2.
+
+## Start the training process
+
+> [!TIP]
+>You do not need to train after every single change. Training should be done after a group of changes is applied to the model, or if you want to test or publish the app.
+
+To train your app in the LUIS portal, you only need to select the **Train** button on the top-right corner of the screen.
+
+Training with the REST APIs is a two-step process.
+
+1. Send an HTTP POST [request for training](https://westus.dev.cognitive.microsoft.com/docs/services/5890b47c39e2bb17b84a55ff/operations/5890b47c39e2bb052c5b9c45).
+2. Request the [training status](https://westus.dev.cognitive.microsoft.com/docs/services/5890b47c39e2bb17b84a55ff/operations/5890b47c39e2bb052c5b9c46) with an HTTP GET request.
+
+In order to know when training is complete, you must poll the status until all models are successfully trained.
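+
+The following C# sketch shows one way to run the two steps and poll until training finishes. It is an illustration only: the authoring endpoint path and region, the environment variable name, and the status strings checked here are assumptions, so confirm the exact request format against the linked API reference.
+
+```csharp
+using System;
+using System.Net.Http;
+using System.Threading.Tasks;
+
+class TrainLuisApp
+{
+    static async Task Main()
+    {
+        // Placeholders - substitute your own values.
+        var authoringKey = Environment.GetEnvironmentVariable("LUIS_AUTHORING_KEY");
+        var appId = "<your-app-id>";
+        var versionId = "0.1";
+        var trainUrl = $"https://westus.api.cognitive.microsoft.com/luis/api/v2.0/apps/{appId}/versions/{versionId}/train";
+
+        using var client = new HttpClient();
+        client.DefaultRequestHeaders.Add("Ocp-Apim-Subscription-Key", authoringKey);
+
+        // Step 1: start training with an HTTP POST (empty body).
+        var start = await client.PostAsync(trainUrl, null);
+        start.EnsureSuccessStatusCode();
+
+        // Step 2: poll the training status with an HTTP GET until no model is queued or in progress.
+        string status;
+        do
+        {
+            await Task.Delay(TimeSpan.FromSeconds(2));
+            status = await client.GetStringAsync(trainUrl);
+        } while (status.Contains("InProgress") || status.Contains("Queued"));
+
+        Console.WriteLine("Training complete: " + status);
+    }
+}
+```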
+
+## Test your application
+
+Testing is the process of providing sample utterances to LUIS and getting a response of recognized intents and entities. You can test your LUIS app interactively one utterance at a time, or provide a set of utterances. While testing, you can compare the current active model's prediction response to the published model's prediction response.
+
+Testing an app is an iterative process. After training your LUIS app, test it with sample utterances to see if the intents and entities are recognized correctly. If they're not, make updates to the LUIS app, train, and test again.
+
+## Interactive testing
+
+Interactive testing is done from the **Test** panel of the LUIS portal. You can enter an utterance to see how intents and entities are identified and scored. If LUIS isn't predicting an utterance's intents and entities as you would expect, copy the utterance to the **Intent** page as a new utterance. Then label parts of that utterance for entities to train your LUIS app.
+
+See [batch testing](../luis-how-to-batch-test.md) if you are testing more than one utterance at a time, and the [Prediction scores](../luis-concept-prediction-score.md) article to learn more about prediction scores.
++
+## Test an utterance
+
+The test utterance should not be exactly the same as any example utterances in the app. The test utterance should include the word choice, phrase length, and entity usage that you expect from a user.
+
+1. Sign in to the [LUIS portal](https://www.luis.ai/), and select your **Subscription** and **Authoring resource** to see the apps assigned to that authoring resource.
+2. Open your app by selecting its name on **My Apps** page.
+3. Select **Test** in the top-right corner of the screen for your app, and a panel will slide into view.
++
+4. Enter an utterance in the text box and press the enter button on the keyboard. You can test a single utterance in the **Test** box, or multiple utterances as a batch in the **Batch testing panel**.
+5. The utterance, its top intent, and score are added to the list of utterances under the text box. In the above example, this is displayed as 'None (0.43)'.
+
+## Inspect the prediction
+
+Inspect the test result details in the **Inspect** panel.
+
+1. With the **Test** panel open, select **Inspect** for an utterance you want to compare. **Inspect** is located next to the utterance's top intent and score. Refer to the above image.
+
+2. The **Inspection** panel will appear. The panel includes the top scoring intent and any identified entities. The panel shows the prediction of the selected utterance.
++
+> [!TIP]
+>From the inspection panel, you can add the test utterance to an intent by selecting **Add to example utterances**.
+
+## Change deterministic training settings using the version settings API
+
+Use the [Version settings API](https://westus.dev.cognitive.microsoft.com/docs/services/5890b47c39e2bb17b84a55ff/operations/versions-update-application-version-settings) with the UseAllTrainingData set to *true* to turn off deterministic training.
+
+## Change deterministic training settings using the LUIS portal
+
+Sign in to the [LUIS portal](https://www.luis.ai/) and select your app. Select **Manage** at the top of the screen, then select **Settings**. Enable or disable the **use non-deterministic training** option. When the option is disabled, training uses all available data. When it is enabled, training uses only a _random_ sample of data from other intents as negative data when training each intent.
++
+## View sentiment results
+
+If sentiment analysis is configured on the [**Publish**](publish.md) page, the test results will include the sentiment found in the utterance.
+
+## Correct matched pattern's intent
+
+If you are using [Patterns](../concepts/patterns-features.md) and the utterance matched is a pattern, but the wrong intent was predicted, select the **Edit** link by the pattern and select the correct intent.
+
+## Compare with published version
+
+You can test the active version of your app with the published [endpoint](../luis-glossary.md#endpoint) version. In the **Inspect** panel, select **Compare with published**.
+> [!NOTE]
+> Any testing against the published model is deducted from your Azure subscription quota balance.
++
+## View endpoint JSON in test panel
+
+You can view the endpoint JSON returned for the comparison by selecting the **Show JSON view** in the top-right corner of the panel.
++
+## Next steps
+
+If you need to test a batch of utterances, see [batch testing](../luis-how-to-batch-test.md).
+
+If testing indicates that your LUIS app doesn't recognize the correct intents and entities, you can work to improve your LUIS app's accuracy by labeling more utterances or adding features.
+
+* [Improve your application](./improve-application.md)
+* [Publishing your application](./publish.md)
ai-services Howto Add Prebuilt Models https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/LUIS/howto-add-prebuilt-models.md
+
+ Title: Prebuilt models for Language Understanding
+
+description: LUIS includes a set of prebuilt models for quickly adding common, conversational user scenarios.
+
+ Last updated : 05/17/2020
+
+# Add prebuilt models for common usage scenarios
+++
+LUIS includes a set of prebuilt models for quickly adding common, conversational user scenarios. This is a quick and easy way to add abilities to your conversational client application without having to design the models for those abilities.
+
+## Add a prebuilt domain
+
+1. Sign in to the [LUIS portal](https://www.luis.ai), and select your **Subscription** and **Authoring resource** to see the apps assigned to that authoring resource.
+1. Open your app by selecting its name on **My Apps** page.
+
+1. Select **Prebuilt Domains** from the left toolbar.
+
+1. Find the domain you want to add to the app, then select the **Add domain** button.
+
+ > [!div class="mx-imgBorder"]
+ > ![Add Calendar prebuilt domain](./media/luis-prebuilt-domains/add-prebuilt-domain.png)
+
+## Add a prebuilt intent
+
+1. Sign in to the [LUIS portal](https://www.luis.ai), and select your **Subscription** and **Authoring resource** to see the apps assigned to that authoring resource.
+1. Open your app by selecting its name on **My Apps** page.
+
+1. On the **Intents** page, select **Add prebuilt domain intent** from the toolbar above the intents list.
+
+1. Select an intent from the pop-up dialog.
+
+ > [!div class="mx-imgBorder"]
+ > ![Add prebuilt intent](./media/luis-prebuilt-domains/add-prebuilt-domain-intents.png)
+
+1. Select the **Done** button.
+
+## Add a prebuilt entity
+1. Sign in to the [LUIS portal](https://www.luis.ai), and select your **Subscription** and **Authoring resource** to see the apps assigned to that authoring resource.
+1. Open your app by selecting its name on **My Apps** page.
+1. Select **Entities** on the left side of the screen.
+
+1. On the **Entities** page, select **Add prebuilt entity**.
+
+1. In **Add prebuilt entities** dialog box, select the prebuilt entity.
+
+ > [!div class="mx-imgBorder"]
+ > ![Add prebuilt entity dialog box](./media/luis-prebuilt-domains/add-prebuilt-entity.png)
+
+1. Select **Done**. After the entity is added, you do not need to train the app.
+
+## Add a prebuilt domain entity
+1. Sign in to the [LUIS portal](https://www.luis.ai), and select your **Subscription** and **Authoring resource** to see the apps assigned to that authoring resource.
+1. Open your app by selecting its name on **My Apps** page.
+1. Select **Entities** on the left side of the screen.
+
+1. On the **Entities** page, select **Add prebuilt domain entity**.
+
+1. In **Add prebuilt domain models** dialog box, select the prebuilt domain entity.
+
+1. Select **Done**. After the entity is added, you do not need to train the app.
+
+## Publish to view prebuilt model from prediction endpoint
+
+The easiest way to view the value of a prebuilt model is to query from the published endpoint.
+
+## Entities containing a prebuilt entity token
+
+If you have a machine-learning entity that needs a required feature of a prebuilt entity, add a subentity to the machine-learning entity, then add a _required_ feature of a prebuilt entity.
+
+## Next steps
+> [!div class="nextstepaction"]
+> [Build model from .csv with REST APIs](./luis-tutorial-node-import-utterances-csv.md)
ai-services Luis Concept Data Alteration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/LUIS/luis-concept-data-alteration.md
+
+ Title: Data alteration - LUIS
+description: Learn how data can be changed before predictions in Language Understanding (LUIS)
+
+ Last updated : 05/06/2020
+ms.devlang: csharp
+++
+# Alter utterance data before or during prediction
++
+LUIS provides ways to manipulate the utterance before or during the prediction. These include [fixing spelling](luis-tutorial-bing-spellcheck.md), and fixing timezone issues for prebuilt [datetimeV2](luis-reference-prebuilt-datetimev2.md).
+
+## Correct spelling errors in utterance
++
+### V3 runtime
+
+Preprocess text for spelling corrections before you send the utterance to LUIS. Use example utterances with the correct spelling to ensure you get the correct predictions.
+
+Use [Bing Spell Check](../../cognitive-services/bing-spell-check/overview.md) to correct text before sending it to LUIS.
+
+### Prior to V3 runtime
+
+LUIS uses [Bing Spell Check API V7](../../cognitive-services/bing-spell-check/overview.md) to correct spelling errors in the utterance. LUIS needs the key associated with that service. Create the key, then add the key as a querystring parameter at the [endpoint](https://go.microsoft.com/fwlink/?linkid=2092356).
+
+The endpoint requires two params for spelling corrections to work:
+
+|Param|Value|
+|--|--|
+|`spellCheck`|boolean|
+|`bing-spell-check-subscription-key`|[Bing Spell Check API V7](https://azure.microsoft.com/services/cognitive-services/spell-check/) endpoint key|
+
+When [Bing Spell Check API V7](https://azure.microsoft.com/services/cognitive-services/spell-check/) detects an error, the original utterance, and the corrected utterance are returned along with predictions from the endpoint.
+
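+For illustration, a client might append the two parameters to a V2 prediction request as shown in the following sketch. The region, app ID, and keys are placeholders, and the query layout follows the V2 endpoint format shown later in this article.
+
+```csharp
+using System;
+using System.Net.Http;
+using System.Threading.Tasks;
+
+class SpellCheckedPrediction
+{
+    static async Task Main()
+    {
+        // Placeholders - substitute your own values.
+        var appId = "<your-app-id>";
+        var predictionKey = "<your-prediction-key>";
+        var bingSpellCheckKey = "<your-bing-spell-check-key>";
+        var utterance = Uri.EscapeDataString("Book a flite to London?");
+
+        var url = $"https://westus.api.cognitive.microsoft.com/luis/v2.0/apps/{appId}" +
+                  $"?subscription-key={predictionKey}" +
+                  "&spellCheck=true" +
+                  $"&bing-spell-check-subscription-key={bingSpellCheckKey}" +
+                  $"&q={utterance}";
+
+        using var client = new HttpClient();
+        // The response contains both "query" and "alteredQuery", as shown in the samples below.
+        Console.WriteLine(await client.GetStringAsync(url));
+    }
+}
+```
+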
+#### [V2 prediction endpoint response](#tab/V2)
+
+```JSON
+{
+ "query": "Book a flite to London?",
+ "alteredQuery": "Book a flight to London?",
+ "topScoringIntent": {
+ "intent": "BookFlight",
+ "score": 0.780123
+ },
+ "entities": []
+}
+```
+
+#### [V3 prediction endpoint response](#tab/V3)
+
+```JSON
+{
+ "query": "Book a flite to London?",
+ "prediction": {
+ "normalizedQuery": "book a flight to london?",
+ "topIntent": "BookFlight",
+ "intents": {
+ "BookFlight": {
+ "score": 0.780123
+ }
+ },
+ "entities": {},
+ }
+}
+```
+
+* * *
+
+### List of allowed words
+The Bing spell check API used in LUIS does not support a list of words to ignore during the spell check alterations. If you need to allow a list of words or acronyms, process the utterance in the client application before sending the utterance to LUIS for intent prediction.
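+
+For example, one possible client-side approach (a sketch only, not a LUIS feature) is to check the utterance against your own allow list and skip or adjust spell check when a protected term is present:
+
+```csharp
+using System;
+using System.Collections.Generic;
+using System.Linq;
+
+class AllowListCheck
+{
+    // Hypothetical allow list of acronyms and product names that spell check should not alter.
+    static readonly HashSet<string> AllowedWords = new(StringComparer.OrdinalIgnoreCase)
+    {
+        "LUIS", "contoso", "kubectl"
+    };
+
+    // Returns true when the client should send the utterance with spellCheck=false, or correct it itself.
+    static bool ContainsAllowedWord(string utterance) =>
+        utterance.Split(' ', StringSplitOptions.RemoveEmptyEntries)
+                 .Any(word => AllowedWords.Contains(word.Trim('?', '!', '.', ',')));
+
+    static void Main()
+    {
+        Console.WriteLine(ContainsAllowedWord("How do I restart kubectl?")); // True
+        Console.WriteLine(ContainsAllowedWord("Book a flite to London?"));   // False
+    }
+}
+```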
+
+## Change time zone of prebuilt datetimeV2 entity
+When a LUIS app uses the prebuilt [datetimeV2](luis-reference-prebuilt-datetimev2.md) entity, a datetime value can be returned in the prediction response. The timezone of the request is used to determine the correct datetime to return. If the request is coming from a bot or another centralized application before getting to LUIS, correct the timezone LUIS uses.
+
+### V3 prediction API to alter timezone
+
+In V3, the `datetimeReference` determines the timezone offset. Learn more about [V3 predictions](luis-migration-api-v3.md#v3-post-body).
+
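+For illustration, the following sketch posts a query with `options.datetimeReference` set in the request body. The endpoint path, slot name, and values are placeholders and assumptions, so confirm the exact body shape against the linked V3 POST body documentation.
+
+```csharp
+using System;
+using System.Net.Http;
+using System.Text;
+using System.Text.Json;
+using System.Threading.Tasks;
+
+class DatetimeReferenceExample
+{
+    static async Task Main()
+    {
+        // Placeholders - substitute your own values.
+        var predictionKey = "<your-prediction-key>";
+        var appId = "<your-app-id>";
+        var url = $"https://westus.api.cognitive.microsoft.com/luis/prediction/v3.0/apps/{appId}/slots/production/predict";
+
+        // datetimeReference tells LUIS which local time to resolve relative datetime expressions against.
+        var body = new
+        {
+            query = "Turn on the lights in 30 minutes",
+            options = new { datetimeReference = "2021-05-05T12:00:00" }
+        };
+
+        using var client = new HttpClient();
+        client.DefaultRequestHeaders.Add("Ocp-Apim-Subscription-Key", predictionKey);
+
+        var content = new StringContent(JsonSerializer.Serialize(body), Encoding.UTF8, "application/json");
+        var response = await client.PostAsync(url, content);
+        Console.WriteLine(await response.Content.ReadAsStringAsync());
+    }
+}
+```
+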
+### V2 prediction API to alter timezone
+The timezone is corrected by adding the user's timezone to the endpoint request using the `timezoneOffset` parameter (the parameter name depends on the API version). The value of the parameter should be a positive or negative number, in minutes, to alter the time.
+
+#### V2 prediction daylight savings example
+If you need the returned prebuilt datetimeV2 to adjust for daylight savings time, you should use the querystring parameter with a +/- value in minutes for the [endpoint](https://go.microsoft.com/fwlink/?linkid=2092356) query.
+
+Add 60 minutes:
+
+`https://{region}.api.cognitive.microsoft.com/luis/v2.0/apps/{appId}?q=Turn the lights on&timezoneOffset=60&verbose={boolean}&spellCheck={boolean}&staging={boolean}&bing-spell-check-subscription-key={string}&log={boolean}`
+
+Remove 60 minutes:
+
+`https://{region}.api.cognitive.microsoft.com/luis/v2.0/apps/{appId}?q=Turn the lights on&timezoneOffset=-60&verbose={boolean}&spellCheck={boolean}&staging={boolean}&bing-spell-check-subscription-key={string}&log={boolean}`
+
+#### V2 prediction C# code determines correct value of parameter
+
+The following C# code uses the [TimeZoneInfo](/dotnet/api/system.timezoneinfo) class's [FindSystemTimeZoneById](/dotnet/api/system.timezoneinfo.findsystemtimezonebyid#examples) method to determine the correct offset value based on system time:
+
+```csharp
+// Get CST zone id
+TimeZoneInfo targetZone = TimeZoneInfo.FindSystemTimeZoneById("Central Standard Time");
+
+// Get local machine's value of Now
+DateTime utcDatetime = DateTime.UtcNow;
+
+// Get Central Standard Time value of Now
+DateTime cstDatetime = TimeZoneInfo.ConvertTimeFromUtc(utcDatetime, targetZone);
+
+// Find timezoneOffset/datetimeReference
+int offset = (int)((cstDatetime - utcDatetime).TotalMinutes);
+```
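+
+As a usage note, the computed `offset` can then be appended to the V2 endpoint query shown above. The following continuation is a sketch only, and the region, app ID, and key remain placeholders:
+
+```csharp
+// Continuing the snippet above: append the computed offset to the V2 endpoint request.
+string query = Uri.EscapeDataString("Turn the lights on");
+string url = "https://{region}.api.cognitive.microsoft.com/luis/v2.0/apps/{appId}" +
+             "?subscription-key={subscription-key}" +
+             $"&timezoneOffset={offset}&q={query}";
+```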
+
+## Next steps
+
+[Correct spelling mistakes with this tutorial](luis-tutorial-bing-spellcheck.md)
ai-services Luis Concept Data Conversion https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/LUIS/luis-concept-data-conversion.md
+
+ Title: Data conversion - LUIS
+
+description: Learn how utterances can be changed before predictions in Language Understanding (LUIS)
+
+ Last updated : 03/21/2022
+
+# Convert data format of utterances
++
+LUIS provides the following conversions of a user utterance before prediction.
+
+* Speech to text using [Azure AI Speech](../speech-service/overview.md) service.
+
+## Speech to text
+
+Speech to text is provided as an integration with LUIS.
+
+### Intent conversion concepts
+Conversion of speech to text in LUIS allows you to send spoken utterances to an endpoint and receive a LUIS prediction response. The process is an integration of the [Speech](../speech-service/overview.md) service with LUIS. Learn more about Speech to Intent with a [tutorial](../speech-service/how-to-recognize-intents-from-speech-csharp.md).
+
+### Key requirements
+You do not need to create a **Bing Speech API** key for this integration. A **Language Understanding** key created in the Azure portal works for this integration. Do not use the LUIS starter key.
+
+### Pricing Tier
+This integration uses a different [pricing](luis-limits.md#resource-usage-and-limits) model than the usual Language Understanding pricing tiers.
+
+### Quota usage
+See [Key limits](luis-limits.md#resource-usage-and-limits) for information.
+
+## Next steps
+
+> [!div class="nextstepaction"]
+> [Extracting data](luis-concept-data-extraction.md)
ai-services Luis Concept Data Extraction https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/LUIS/luis-concept-data-extraction.md
+
+ Title: Data extraction - LUIS
+description: Extract data from utterance text with intents and entities. Learn what kind of data can be extracted from Language Understanding (LUIS).
+
+ Last updated : 03/21/2022
+
+# Extract data from utterance text with intents and entities
++
+LUIS gives you the ability to get information from a user's natural language utterances. The information is extracted in a way that it can be used by a program, application, or chat bot to take action. In the following sections, learn what data is returned from intents and entities with examples of JSON.
+
+The hardest data to extract is the machine-learning data because it isn't an exact text match. Data extraction of the machine-learning [entities](concepts/entities.md) needs to be part of the [authoring cycle](concepts/application-design.md) until you're confident you receive the data you expect.
+
+## Data location and key usage
+LUIS extracts data from the user's utterance at the published [endpoint](luis-glossary.md#endpoint). The **HTTPS request** (POST or GET) contains the utterance as well as some optional configurations such as staging or production environments.
+
+**V2 prediction endpoint request**
+
+`https://westus.api.cognitive.microsoft.com/luis/v2.0/apps/<appID>?subscription-key=<subscription-key>&verbose=true&timezoneOffset=0&q=book 2 tickets to paris`
+
+**V3 prediction endpoint request**
+
+`https://westus.api.cognitive.microsoft.com/luis/v3.0-preview/apps/<appID>/slots/<slot-type>/predict?subscription-key=<subscription-key>&verbose=true&timezoneOffset=0&query=book 2 tickets to paris`
+
+The `appID` is available on the **Settings** page of your LUIS app as well as part of the URL (after `/apps/`) when you're editing that LUIS app. The `subscription-key` is the endpoint key used for querying your app. While you can use your free authoring/starter key while you're learning LUIS, it is important to change the endpoint key to a key that supports your [expected LUIS usage](luis-limits.md#resource-usage-and-limits). The `timezoneOffset` unit is minutes.
+
+The **HTTPS response** contains all the intent and entity information LUIS can determine based on the current published model of either the staging or production endpoint. The endpoint URL is found on the [LUIS](luis-reference-regions.md) website, in the **Manage** section, on the **Keys and endpoints** page.
+
+## Data from intents
+The primary data is the top scoring **intent name**. The endpoint response is:
+
+#### [V2 prediction endpoint response](#tab/V2)
+
+```JSON
+{
+ "query": "when do you open next?",
+ "topScoringIntent": {
+ "intent": "GetStoreInfo",
+ "score": 0.984749258
+ },
+ "entities": []
+}
+```
+
+#### [V3 prediction endpoint response](#tab/V3)
+
+```JSON
+{
+ "query": "when do you open next?",
+ "prediction": {
+ "normalizedQuery": "when do you open next?",
+ "topIntent": "GetStoreInfo",
+ "intents": {
+ "GetStoreInfo": {
+ "score": 0.984749258
+ }
+ }
+ },
+ "entities": []
+}
+```
+
+Learn more about the [V3 prediction endpoint](luis-migration-api-v3.md).
+
+* * *
+
+|Data Object|Data Type|Data Location|Value|
+|--|--|--|--|
+|Intent|String|topScoringIntent.intent|"GetStoreInfo"|
+
+If your chatbot or LUIS-calling app makes a decision based on more than one intent score, return all the intents' scores.
++
+#### [V2 prediction endpoint response](#tab/V2)
+
+Set the querystring parameter, `verbose=true`. The endpoint response is:
+
+```JSON
+{
+ "query": "when do you open next?",
+ "topScoringIntent": {
+ "intent": "GetStoreInfo",
+ "score": 0.984749258
+ },
+ "intents": [
+ {
+ "intent": "GetStoreInfo",
+ "score": 0.984749258
+ },
+ {
+ "intent": "None",
+ "score": 0.2040639
+ }
+ ],
+ "entities": []
+}
+```
+
+#### [V3 prediction endpoint response](#tab/V3)
+
+Set the querystring parameter, `show-all-intents=true`. The endpoint response is:
+
+```JSON
+{
+ "query": "when do you open next?",
+ "prediction": {
+ "normalizedQuery": "when do you open next?",
+ "topIntent": "GetStoreInfo",
+ "intents": {
+ "GetStoreInfo": {
+ "score": 0.984749258
+ },
+ "None": {
+ "score": 0.2040639
+ }
+ },
+ "entities": {
+ }
+ }
+}
+```
+
+Learn more about the [V3 prediction endpoint](luis-migration-api-v3.md).
+
+* * *
+
+The intents are ordered from highest to lowest score.
+
+|Data Object|Data Type|Data Location|Value|Score|
+|--|--|--|--|:--|
+|Intent|String|intents[0].intent|"GetStoreInfo"|0.984749258|
+|Intent|String|intents[1].intent|"None"|0.2040639|
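+
+For example, a client might apply a simple threshold rule over the ranked scores before acting on the top intent. The following sketch parses a trimmed copy of the verbose V2 response shown above; the threshold values are illustrative assumptions, not part of LUIS.
+
+```csharp
+using System;
+using System.Linq;
+using System.Text.Json;
+
+class IntentDecision
+{
+    static void Main()
+    {
+        // Verbose V2 response trimmed to the fields used here.
+        var json = @"{
+          ""query"": ""when do you open next?"",
+          ""intents"": [
+            { ""intent"": ""GetStoreInfo"", ""score"": 0.984749258 },
+            { ""intent"": ""None"", ""score"": 0.2040639 }
+          ]
+        }";
+
+        using var doc = JsonDocument.Parse(json);
+        var intents = doc.RootElement.GetProperty("intents")
+            .EnumerateArray()
+            .Select(i => (Name: i.GetProperty("intent").GetString(),
+                          Score: i.GetProperty("score").GetDouble()))
+            .OrderByDescending(i => i.Score)
+            .ToList();
+
+        // Illustrative rule: act on the top intent only when it clearly beats the runner-up.
+        var top = intents[0];
+        var runnerUp = intents.Count > 1 ? intents[1].Score : 0.0;
+        bool confident = top.Score >= 0.7 && top.Score - runnerUp >= 0.2;
+
+        Console.WriteLine(confident
+            ? $"Route to the handler for '{top.Name}'"
+            : "Ask the user to clarify their request");
+    }
+}
+```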
+
+If you add prebuilt domains, the intent name indicates the domain, such as `Utilities` or `Communication`, as well as the intent:
+
+#### [V2 prediction endpoint response](#tab/V2)
+
+```JSON
+{
+ "query": "Turn on the lights next monday at 9am",
+ "topScoringIntent": {
+ "intent": "Utilities.ShowNext",
+ "score": 0.07842206
+ },
+ "intents": [
+ {
+ "intent": "Utilities.ShowNext",
+ "score": 0.07842206
+ },
+ {
+ "intent": "Communication.StartOver",
+ "score": 0.0239675418
+ },
+ {
+ "intent": "None",
+ "score": 0.0168218873
+ }],
+ "entities": []
+}
+```
+
+#### [V3 prediction endpoint response](#tab/V3)
+
+```JSON
+{
+ "query": "Turn on the lights next monday at 9am",
+ "prediction": {
+ "normalizedQuery": "Turn on the lights next monday at 9am",
+ "topIntent": "Utilities.ShowNext",
+ "intents": {
+ "Utilities.ShowNext": {
+ "score": 0.07842206
+ },
+ "Communication.StartOver": {
+ "score": 0.0239675418
+ },
+ "None": {
+ "score": 0.00085447653
+ }
+ },
+ "entities": []
+ }
+}
+```
+
+Learn more about the [V3 prediction endpoint](luis-migration-api-v3.md).
+
+* * *
+
+|Domain|Data Object|Data Type|Data Location|Value|
+|--|--|--|--|--|
+|Utilities|Intent|String|intents[0].intent|"<b>Utilities</b>.ShowNext"|
+|Communication|Intent|String|intents[1].intent|"<b>Communication</b>.StartOver"|
+||Intent|String|intents[2].intent|"None"|
++
+## Data from entities
+Most chat bots and applications need more than the intent name. This additional, optional data comes from entities discovered in the utterance. Each type of entity returns different information about the match.
+
+A single word or phrase in an utterance can match more than one entity. In that case, each matching entity is returned with its score.
+
+All entities are returned in the **entities** array of the response from the endpoint.
+
+## Tokenized entity returned
+
+Review the [token support](luis-language-support.md#tokenization) in LUIS.
++
+## Prebuilt entity data
+[Prebuilt](concepts/entities.md) entities are discovered based on a regular expression match using the open-source [Recognizers-Text](https://github.com/Microsoft/Recognizers-Text) project. Prebuilt entities are returned in the entities array and use the type name prefixed with `builtin::`.
+
+## List entity data
+
+[List entities](reference-entity-list.md) represent a fixed, closed set of related words along with their synonyms. LUIS does not discover additional values for list entities. Use the **Recommend** feature to see suggestions for new words based on the current list. If there is more than one list entity with the same value, each entity is returned in the endpoint query.
+
+## Regular expression entity data
+
+A [regular expression entity](reference-entity-regular-expression.md) extracts an entity based on a regular expression you provide.
+
+## Extracting names
+Getting names from an utterance is difficult because a name can be almost any combination of letters and words. Depending on what type of name you're extracting, you have several options. The following suggestions are not rules but more guidelines.
+
+### Add prebuilt PersonName and GeographyV2 entities
+
+[PersonName](luis-reference-prebuilt-person.md) and [GeographyV2](luis-reference-prebuilt-geographyV2.md) entities are available in some [language cultures](luis-reference-prebuilt-entities.md).
+
+### Names of people
+
+People's name can have some slight format depending on language and culture. Use either a prebuilt **[personName](luis-reference-prebuilt-person.md)** entity or a **[simple entity](concepts/entities.md)** with roles of first and last name.
+
+If you use the simple entity, make sure to give examples that use the first and last name in different parts of the utterance, in utterances of different lengths, and utterances across all intents including the None intent. [Review](./how-to/improve-application.md) endpoint utterances on a regular basis to label any names that were not predicted correctly.
+
+### Names of places
+
+Location names are set and known such as cities, counties, states, provinces, and countries/regions. Use the prebuilt entity **[geographyV2](luis-reference-prebuilt-geographyv2.md)** to extract location information.
+
+### New and emerging names
+
+Some apps need to be able to find new and emerging names such as products or companies. These types of names are the most difficult type of data extraction. Begin with a **[simple entity](concepts/entities.md)** and add a [phrase list](concepts/patterns-features.md). [Review](./how-to/improve-application.md) endpoint utterances on a regular basis to label any names that were not predicted correctly.
+
+## Pattern.any entity data
+
+[Pattern.any](reference-entity-pattern-any.md) is a variable-length placeholder used only in a pattern's template utterance to mark where the entity begins and ends. The entity used in the pattern must be found in order for the pattern to be applied.
+
+## Sentiment analysis
+If sentiment analysis is configured while [publishing](how-to/publish.md), the LUIS json response includes sentiment analysis. Learn more about sentiment analysis in the [Language service](../language-service/sentiment-opinion-mining/overview.md) documentation.
+
+## Key phrase extraction entity data
+The [key phrase extraction entity](luis-reference-prebuilt-keyphrase.md) returns key phrases in the utterance, provided by the [Language service](../language-service/key-phrase-extraction/overview.md).
+
+## Data matching multiple entities
+
+LUIS returns all entities discovered in the utterance. As a result, your chat bot may need to make a decision based on the results.
+
+## Data matching multiple list entities
+
+If a word or phrase matches more than one list entity, the endpoint query returns each List entity.
+
+For the query `when is the best time to go to red rock?`, and the app has the word `red` in more than one list, LUIS recognizes all the entities and returns an array of entities as part of the JSON endpoint response.
+
+## Next steps
+
+See [Add entities](how-to/entities.md) to learn more about how to add entities to your LUIS app.
ai-services Luis Concept Data Storage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/LUIS/luis-concept-data-storage.md
+
+ Title: Data storage - LUIS
+
+description: LUIS stores data encrypted in an Azure data store corresponding to the region specified by the key.
+
+ Last updated : 12/07/2020
+
+# Data storage and removal in Language Understanding (LUIS) Azure AI services
+++
+LUIS stores data encrypted in an Azure data store corresponding to [the region](luis-reference-regions.md) specified by the key.
+
+* Data used to train the model such as entities, intents, and utterances will be saved in LUIS for the lifetime of the application. If an owner or contributor deletes the app, this data will be deleted with it. If an application hasn't been used in 90 days, it will be deleted.
+
+* Application authors can choose to [enable logging](how-to/improve-application.md#log-user-queries-to-enable-active-learning) on the utterances that are sent to a published application. If enabled, utterances will be saved for 30 days, and can be viewed by the application author. If logging isn't enabled when the application is published, this data is not stored.
+
+## Export and delete app
+Users have full control over [exporting](how-to/sign-in.md) and [deleting](how-to/sign-in.md) the app.
+
+## Utterances
+
+Utterances can be stored in two different places.
+
+* During **the authoring process**, utterances are created and stored in the intent. Utterances in intents are required for a successful LUIS app. Once the app is published and receives queries at the endpoint, the endpoint request's querystring parameter, `log=false`, determines whether the endpoint utterance is stored. If the endpoint utterance is stored, it becomes part of the active learning utterances found in the **Build** section of the portal, in the **Review endpoint utterances** section.
+* When you **review endpoint utterances**, and add an utterance to an intent, the utterance is no longer stored as part of the endpoint utterances to be reviewed. It is added to the app's intents.
+
+<a name="utterances-in-an-intent"></a>
+
+### Delete example utterances from an intent
+
+Delete example utterances used for training [LUIS](luis-reference-regions.md). If you delete an example utterance from your LUIS app, it is removed from the LUIS web service and is unavailable for export.
+
+<a name="utterances-in-review"></a>
+
+### Delete utterances in review from active learning
+
+You can delete utterances from the list of user utterances that LUIS suggests in the **[Review endpoint utterances page](how-to/improve-application.md)**. Deleting utterances from this list prevents them from being suggested, but doesn't delete them from logs.
+
+If you don't want active learning utterances, you can [disable active learning](how-to/improve-application.md). Disabling active learning also disables logging.
+
+### Disable logging utterances
+[Disabling active learning](how-to/improve-application.md) also disables logging.
++
+<a name="accounts"></a>
+
+## Delete an account
+If you have not migrated, you can delete your account, and all your apps will be deleted along with their example utterances and logs. The data is retained for 90 days before the account and data are deleted permanently.
+
+You can delete your account from the **Settings** page. Select your account name in the top-right navigation bar to go to the **Settings** page.
+
+## Delete an authoring resource
+If you have [migrated to an authoring resource](./luis-migration-authoring.md), deleting the resource itself from the Azure portal will delete all your applications associated with that resource, along with their example utterances and logs. The data is retained for 90 days before it is deleted permanently.
+
+To delete your resource, go to the [Azure portal](https://portal.azure.com/#home) and select your LUIS authoring resource. Go to the **Overview** tab and select the **Delete** button on the top of the page. Then confirm your resource was deleted.
+
+## Data inactivity as an expired subscription
+For the purposes of data retention and deletion, an inactive LUIS app may at _Microsoft's discretion_ be treated as an expired subscription. An app is considered inactive if it meets the following criteria for the last 90 days:
+
+* Has had **no** calls made to it.
+* Has not been modified.
+* Does not have a current key assigned to it.
+* Has not had a user sign in to it.
+
+## Next steps
+
+[Learn about exporting and deleting an app](how-to/sign-in.md)
ai-services Luis Concept Devops Automation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/LUIS/luis-concept-devops-automation.md
+
+ Title: Continuous Integration and Continuous Delivery workflows for LUIS apps
+description: How to implement CI/CD workflows for DevOps for Language Understanding (LUIS).
+ Last updated : 06/01/2021
+
+# Continuous Integration and Continuous Delivery workflows for LUIS DevOps
+++
+Software engineers who are developing a Language Understanding (LUIS) app can apply DevOps practices around [source control](luis-concept-devops-sourcecontrol.md), [automated builds](luis-concept-devops-automation.md), [testing](luis-concept-devops-testing.md), and [release management](luis-concept-devops-automation.md#release-management). This article describes concepts for implementing automated builds for LUIS.
+
+## Build automation workflows for LUIS
+
+![CI workflows](./media/luis-concept-devops-automation/luis-automation.png)
+
+In your source code management (SCM) system, configure automated build pipelines to run at the following events:
+
+1. **PR workflow** triggered when a [pull request](https://help.github.com/github/collaborating-with-issues-and-pull-requests/about-pull-requests) (PR) is raised. This workflow validates the contents of the PR *before* the updates get merged into the main branch.
+1. **CI/CD workflow** triggered when updates are pushed to the main branch, for example upon merging the changes from a PR. This workflow ensures the quality of all updates to the main branch.
+
+The **CI/CD workflow** combines two complementary development processes:
+
+* [Continuous Integration](/devops/develop/what-is-continuous-integration) (CI) is the engineering practice of frequently committing code in a shared repository, and performing an automated build on it. Paired with an automated [testing](luis-concept-devops-testing.md) approach, continuous integration allows us to verify not only that, for each update, the LUDown source is still valid and can be imported into a LUIS app, but also that it passes a group of tests that verify the trained app can recognize the intents and entities required for your solution.
+
+* [Continuous Delivery](/devops/deliver/what-is-continuous-delivery) (CD) takes the Continuous Integration concept further to automatically deploy the application to an environment where you can do more in-depth testing. CD enables us to learn early about any unforeseen issues that arise from our changes as quickly as possible, and also to learn about gaps in our test coverage.
+
+The goal of continuous integration and continuous delivery is to ensure that "main is always shippable". For a LUIS app, this means that we could, if we needed to, take any version of the LUIS app from the main branch and ship it to production.
+
+### Tools for building automation workflows for LUIS
+
+> [!TIP]
+> You can find a complete solution for implementing DevOps in the [LUIS DevOps template repo](#apply-devops-to-luis-app-development-using-github-actions).
+
+There are different build automation technologies available to create build automation workflows. All of them require that you can script steps using a command-line interface (CLI) or REST calls so that they can execute on a build server.
+
+Use the following tools for building automation workflows for LUIS:
+
+* [Bot Framework Tools LUIS CLI](https://github.com/microsoft/botbuilder-tools/tree/master/packages/LUIS) to work with LUIS apps and versions, train, test, and publish them within the LUIS service.
+
+* [Azure CLI](/cli/azure/) to query Azure subscriptions, fetch LUIS authoring and prediction keys, and to create an Azure [service principal](/cli/azure/ad/sp) used for automation authentication.
+
+* [NLU.DevOps](https://github.com/microsoft/NLU.DevOps) tool for [testing a LUIS app](luis-concept-devops-testing.md) and to analyze test results.
+
+### The PR workflow
+
+As mentioned, you configure this workflow to run when a developer raises a PR to propose changes to be merged from a feature branch into the main branch. Its purpose is to verify the quality of the changes in the PR before they're merged to the main branch.
+
+This workflow should:
+
+* Create a temporary LUIS app by importing the `.lu` source in the PR.
+* Train and publish the LUIS app version.
+* Run all the [unit tests](luis-concept-devops-testing.md) against it.
+* Pass the workflow if all the tests pass, otherwise fail it.
+* Clean up and delete the temporary app.
+
+If supported by your SCM, configure branch protection rules so that this workflow must complete successfully before the PR can be completed.
+
+### The main branch CI/CD workflow
+
+Configure this workflow to run after the updates in the PR have been merged into the main branch. Its purpose is to keep the quality bar for your main branch high by testing the updates. If the updates meet the quality bar, this workflow deploys the new LUIS app version to an environment where you can do more in-depth testing.
+
+This workflow should:
+
+* Build a new version in your primary LUIS app (the app you maintain for the main branch) using the updated source code.
+
+* Train and publish the LUIS app version.
+
+ > [!NOTE]
+ > As explained in [Running tests in an automated build workflow](luis-concept-devops-testing.md#running-tests-in-an-automated-build-workflow) you must publish the LUIS app version under test so that tools such as NLU.DevOps can access it. LUIS only supports two named publication slots, *staging* and *production* for a LUIS app, but you can also [publish a version directly](https://github.com/microsoft/botframework-cli/blob/master/packages/luis/README.md#bf-luisapplicationpublish) and [query by version](./luis-migration-api-v3.md#changes-by-slot-name-and-version-name). Use direct version publishing in your automation workflows to avoid being limited to using the named publishing slots.
+
+* Run all the [unit tests](luis-concept-devops-testing.md).
+
+* Optionally run [batch tests](luis-concept-devops-testing.md#how-to-do-unit-testing-and-batch-testing) to measure the quality and accuracy of the LUIS app version and compare it to some baseline.
+
+* If the tests complete successfully:
+ * Tag the source in the repo.
+ * Run the Continuous Delivery (CD) job to deploy the LUIS app version to environments for further testing.
+
+### Continuous delivery (CD)
+
+The CD job in a CI/CD workflow runs conditionally on success of the build and automated unit tests. Its job is to automatically deploy the LUIS application to an environment where you can do more testing.
+
+There's no one recommended solution on how best to deploy your LUIS app, and you must implement the process that is appropriate for your project. The [LUIS DevOps template](https://github.com/Azure-Samples/LUIS-DevOps-Template) repo implements a simple solution for this which is to [publish the new LUIS app version](./how-to/publish.md) to the *production* publishing slot. This is fine for a simple setup. However, if you need to support a number of different production environments at the same time, such as *development*, *staging* and *UAT*, then the limit of two named publishing slots per app will prove insufficient.
+
+Other options for deploying an app version include:
+
+* Leave the app version published to the direct version endpoint and implement a process to configure downstream production environments with the direct version endpoint as required.
+* Maintain a different LUIS app for each production environment and write automation steps to import the `.lu` into a new version in the LUIS app for the target production environment, then train and publish it.
+* Export the tested LUIS app version into a [LUIS docker container](./luis-container-howto.md?tabs=v3) and deploy the LUIS container to Azure [Container instances](../../container-instances/index.yml).
+
+## Release management
+
+Generally we recommend that you do continuous delivery only to your non-production environments, such as to development and staging. Most teams require a manual review and approval process for deployment to a production environment. For a production deployment, you might want to make sure it happens when key people on the development team are available for support, or during low-traffic periods.
++
+## Apply DevOps to LUIS app development using GitHub Actions
+
+Go to the [LUIS DevOps template repo](https://github.com/Azure-Samples/LUIS-DevOps-Template) for a complete solution that implements DevOps and software engineering best practices for LUIS. You can use this template repo to create your own repository with built-in support for CI/CD workflows and practices that enable [source control](luis-concept-devops-sourcecontrol.md), automated builds, [testing](luis-concept-devops-testing.md), and release management with LUIS for your own project.
+
+The [LUIS DevOps template repo](https://github.com/Azure-Samples/LUIS-DevOps-Template) walks through how to:
+
+* **Clone the template repo** - Copy the template to your own GitHub repository.
+* **Configure LUIS resources** - Create the [LUIS authoring and prediction resources in Azure](./luis-how-to-azure-subscription.md) that will be used by the continuous integration workflows.
+* **Configure the CI/CD workflows** - Configure parameters for the CI/CD workflows and store them in [GitHub Secrets](https://help.github.com/actions/configuring-and-managing-workflows/creating-and-storing-encrypted-secrets).
+* **Walk through the ["dev inner loop"](/dotnet/architecture/containerized-lifecycle/design-develop-containerized-apps/docker-apps-inner-loop-workflow)** - The developer makes updates to a sample LUIS app while working in a development branch, tests the updates, and then raises a pull request to propose changes and to seek review approval.
+* **Execute CI/CD workflows** - Execute [continuous integration workflows to build and test a LUIS app](#build-automation-workflows-for-luis) using GitHub Actions.
+* **Perform automated testing** - Perform [automated batch testing for a LUIS app](luis-concept-devops-testing.md) to evaluate the quality of the app.
+* **Deploy the LUIS app** - Execute a [continuous delivery (CD) job](#continuous-delivery-cd) to publish the LUIS app.
+* **Use the repo with your own project** - Explains how to use the repo with your own LUIS application.
+
+## Next steps
+
+* Learn how to write a [GitHub Actions workflow with NLU.DevOps](https://github.com/Azure-Samples/LUIS-DevOps-Template/blob/master/docs/4-pipeline.md)
+
+* Use the [LUIS DevOps template repo](https://github.com/Azure-Samples/LUIS-DevOps-Template) to apply DevOps with your own project.
+* [Source control and branch strategies for LUIS](luis-concept-devops-sourcecontrol.md)
+* [Testing for LUIS DevOps](luis-concept-devops-testing.md)
ai-services Luis Concept Devops Sourcecontrol https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/LUIS/luis-concept-devops-sourcecontrol.md
+
+ Title: Source control and development branches - LUIS
+description: How to maintain your Language Understanding (LUIS) app under source control. How to apply updates to a LUIS app while working in a development branch.
+ Last updated : 06/14/2022
+
+# DevOps practices for LUIS
+++
+Software engineers who are developing a Language Understanding (LUIS) app can apply DevOps practices around [source control](luis-concept-devops-sourcecontrol.md), [automated builds](luis-concept-devops-automation.md), [testing](luis-concept-devops-testing.md), and [release management](luis-concept-devops-automation.md#release-management) by following these guidelines.
+
+## Source control and branch strategies for LUIS
+
+One of the key factors that the success of DevOps depends upon is [source control](/azure/devops/user-guide/source-control). A source control system allows developers to collaborate on code and to track changes. The use of branches allows developers to switch between different versions of the code base, and to work independently from other members of the team. When developers raise a [pull request](https://help.github.com/github/collaborating-with-issues-and-pull-requests/about-pull-requests) (PR) to propose updates from one branch to another, or when changes are merged, these can be the trigger for [automated builds](luis-concept-devops-automation.md) to build and continuously test code.
+
+By using the concepts and guidance that are described in this document, you can develop a LUIS app while tracking changes in a source control system, and follow these software engineering best practices:
+
+- **Source Control**
+ - Source code for your LUIS app is in a human-readable format.
+ - The model can be built from source in a repeatable fashion.
+ - The source code can be managed by a source code repository.
+ - Credentials and secrets such as keys are never stored in source code.
+
+- **Branching and Merging**
+ - Developers can work from independent branches.
+ - Developers can work in multiple branches concurrently.
+ - It's possible to integrate changes to a LUIS app from one branch into another through rebase or merge.
+ - Developers can merge a PR to the parent branch.
+
+- **Versioning**
+ - Each component in a large application should be versioned independently, allowing developers to detect breaking changes or updates just by looking at the version number.
+
+- **Code Reviews**
+ - The changes in the PR are presented as human readable source code that can be reviewed before accepting the PR.
+
+## Source control
+
+To maintain the [App schema definition](./app-schema-definition.md) of a LUIS app in a source code management system, use the [LUDown format (`.lu`)](/azure/bot-service/file-format/bot-builder-lu-file-format) representation of the app. `.lu` format is preferred to `.json` format because it's human readable, which makes it easier to make and review changes in PRs.
+
+### Save a LUIS app using the LUDown format
+
+To save a LUIS app in `.lu` format and place it under source control:
+
+- EITHER: [Export the app version](./luis-how-to-manage-versions.md#other-actions) as `.lu` from the [LUIS portal](https://www.luis.ai/) and add it to your source control repository
+
+- OR: Use a text editor to create a `.lu` file for a LUIS app and add it to your source control repository
+
+> [!TIP]
+> If you are working with the JSON export of a LUIS app, you can [convert it to LUDown](https://github.com/microsoft/botframework-cli/tree/master/packages/luis#bf-luisconvert). Use the `--sort` option to ensure that intents and utterances are sorted alphabetically.
+> Note that the **.LU** export capability built into the LUIS portal already sorts the output.
+
+### Build the LUIS app from source
+
+For a LUIS app, to *build from source* means to [create a new LUIS app version by importing the `.lu` source](./luis-how-to-manage-versions.md#import-version) , to [train the version](./how-to/train-test.md) and to [publish it](./how-to/publish.md). You can do this in the LUIS portal, or at the command line:
+
+- Use the LUIS portal to [import the `.lu` version](./luis-how-to-manage-versions.md#import-version) of the app from source control, and [train](./how-to/train-test.md) and [publish](./how-to/publish.md) the app.
+
+- Use the [Bot Framework Command Line Interface for LUIS](https://github.com/microsoft/botbuilder-tools/tree/master/packages/LUIS) at the command line or in a CI/CD workflow to [import](https://github.com/microsoft/botframework-cli/blob/master/packages/luis/README.md#bf-luisversionimport) the `.lu` version of the app from source control into a LUIS application, and [train](https://github.com/microsoft/botframework-cli/blob/master/packages/luis/README.md#bf-luistrainrun) and [publish](https://github.com/microsoft/botframework-cli/blob/master/packages/luis/README.md#bf-luisapplicationpublish) the app.
+
+### Files to maintain under source control
+
+The following types of files for your LUIS application should be maintained under source control:
+
+- `.lu` file for the LUIS application
+
+- [Unit Test definition files](luis-concept-devops-testing.md#writing-tests) (utterances and expected results)
+
+- [Batch test files](./luis-how-to-batch-test.md#batch-test-file) (utterances and expected results) used for performance testing
+
+### Credentials and keys are not checked in
+
+Do not include keys or similar confidential values in files that you check in to your repo where they might be visible to unauthorized personnel. The keys and other values that you should prevent from check-in include:
+
+- LUIS Authoring and Prediction keys
+- LUIS Authoring and Prediction endpoints
+- Azure resource keys
+- Access tokens, such as the token for an Azure [service principal](/cli/azure/ad/sp) used for automation authentication
+
+#### Strategies for securely managing secrets
+
+Strategies for securely managing secrets include:
+
+- If you're using Git version control, you can store runtime secrets in a local file and prevent check-in of the file by adding a pattern that matches the filename to a [.gitignore](https://git-scm.com/docs/gitignore) file (see the sketch after this list).
+- In an automation workflow, you can store secrets securely in the parameters configuration offered by that automation technology. For example, if you're using [GitHub Actions](https://github.com/features/actions), you can store secrets securely in [GitHub secrets](https://help.github.com/en/actions/configuring-and-managing-workflows/creating-and-storing-encrypted-secrets).
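+
+For example, a minimal `.gitignore` sketch for the first strategy, assuming a bash-style shell and an illustrative local settings file named `luis-settings.local.json`:
+
+```console
+# Add a pattern for the local secrets file and confirm that Git ignores it
+echo "luis-settings.local.json" >> .gitignore
+git add .gitignore
+git check-ignore -v luis-settings.local.json
+```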
+
+## Branching and merging
+
+Distributed version control systems like Git give flexibility in how team members publish, share, review, and iterate on code changes through development branches shared with others. Adopt a [Git branching strategy](/azure/devops/repos/git/git-branching-guidance) that is appropriate for your team.
+
+Whichever branching strategy you adopt, a key principle of all of them is that team members can work on the solution within a *feature branch* independently from the work that is going on in other branches.
+
+To support independent working in branches with a LUIS project:
+
+- **The main branch has its own LUIS app.** This app represents the current state of your solution for your project and its current active version should always map to the `.lu` source that is in the main branch. All updates to the `.lu` source for this app should be reviewed and tested so that this app could be deployed to build environments such as Production at any time. When updates to the `.lu` are merged into main from a feature branch, you should create a new version in the LUIS app and [bump the version number](#versioning).
+
+- **Each feature branch must use its own instance of a LUIS app**. Developers work with this app in a feature branch without risk of affecting developers who are working in other branches. This 'dev branch' app is a working copy that should be deleted when the feature branch is deleted.
+
+![Git feature branch](./media/luis-concept-devops-sourcecontrol/feature-branch.png)
+
+### Developers can work from independent branches
+
+Developers can work on updates on a LUIS app independently from other branches by:
+
+1. Create a feature branch from the parent branch (depending on your branch strategy, usually main or develop).
+
+1. [Create a new LUIS app in the LUIS portal](./how-to/sign-in.md) (the "*dev branch app*") solely to support the work in the feature branch.
+
+ * If the `.lu` source for your solution already exists in your branch, because it was saved after work done in another branch earlier in the project, create your dev branch LUIS app by importing the `.lu` file.
+
+ * If you are starting work on a new project, you will not yet have the `.lu` source for your main LUIS app in the repo. You will create the `.lu` file by exporting your dev branch app from the portal when you have completed your feature branch work, and submit it as a part of your PR.
+
+1. Work on the active version of your dev branch app to implement the required changes. We recommend that you work only in a single version of your dev branch app for all the feature branch work. If you create more than one version in your dev branch app, be careful to track which version contains the changes you want to check in when you raise your PR.
+
+1. Test the updates - see [Testing for LUIS DevOps](luis-concept-devops-testing.md) for details on testing your dev branch app.
+
+1. Export the active version of your dev branch app as `.lu` from the [versions list](./luis-how-to-manage-versions.md).
+
+1. Check in your updates and invite peer review. If you're using GitHub, you'll raise a [pull request](https://help.github.com/en/github/collaborating-with-issues-and-pull-requests/about-pull-requests). A sketch of the Git commands for steps 1 and 6 appears after this list.
+
+1. When the changes are approved, merge the updates into the main branch. At this point, you will create a new [version](./luis-how-to-manage-versions.md) of the *main* LUIS app, using the updated `.lu` in main. See [Versioning](#versioning) for considerations on setting the version name.
+
+1. When the feature branch is deleted, it's a good idea to delete the dev branch LUIS app you created for the feature branch work.
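+
+A minimal sketch of the Git side of steps 1 and 6, assuming a bash-style shell and illustrative branch and file names:
+
+```console
+git checkout -b feature/book-flight       # step 1: create the feature branch
+# steps 2-5: create the dev branch LUIS app, make and test your changes, export the .lu
+git add MyLuisApp.lu                      # step 6: check in the exported .lu source
+git commit -m "Add BookFlight intent and related entities"
+git push -u origin feature/book-flight    # then raise a PR from this branch
+```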
+
+### Developers can work in multiple branches concurrently
+
+If you follow the pattern described above in [Developers can work from independent branches](#developers-can-work-from-independent-branches), then you will use a unique LUIS application in each feature branch. A single developer can work on multiple branches concurrently, as long as they switch to the correct dev branch LUIS app for the branch they're currently working on.
+
+We recommend that you use the same name for both the feature branch and for the dev branch LUIS app that you create for the feature branch work, to make it less likely that you'll accidentally work on the wrong app.
+
+As noted above, we recommend that for simplicity, you work in a single version in each dev branch app. If you are using multiple versions, take care to activate the correct version as you switch between dev branch apps.
+
+### Multiple developers can work on the same branch concurrently
+
+You can support multiple developers working on the same feature branch at the same time:
+
+- Developers check out the same feature branch and push and pull changes submitted by themselves and other developers while work proceeds, as normal.
+
+- If you follow the pattern described above in [Developers can work from independent branches](#developers-can-work-from-independent-branches), then this branch will use a unique LUIS application to support development. That 'dev branch' LUIS app will be created by the first member of the development team who begins work in the feature branch.
+
+- [Add team members as contributors](./luis-how-to-collaborate.md) to the dev branch LUIS app.
+
+- When the feature branch work is complete, export the active version of the dev branch LUIS app as `.lu` from the [versions list](./luis-how-to-manage-versions.md), save the updated `.lu` file in the repo, and check in and PR the changes.
+
+### Incorporating changes from one branch to another with rebase or merge
+
+Some other developers on your team working in another branch may have made updates to the `.lu` source and merged them to the main branch after you created your feature branch. You may want to incorporate their changes into your working version before you continue to make your own changes within your feature branch. You can do this by [rebase or merge to main](https://git-scm.com/book/en/v2/Git-Branching-Rebasing) in the same way as any other code asset (a sketch of the Git commands follows the tips below). Since the LUIS app in LUDown format is human readable, it supports merging using standard merge tools.
+
+Follow these tips if you're rebasing your LUIS app in a feature branch:
+
+- Before you rebase or merge, make sure your local copy of the `.lu` source for your app has all your latest changes that you've applied using the LUIS portal, by re-exporting your app from the portal first. That way, you can make sure that any changes you've made in the portal and not yet exported don't get lost.
+
+- During the merge, use standard tools to resolve any merge conflicts.
+
+- Don't forget after rebase or merge is complete to re-import the app back into the portal, so that you're working with the updated app as you continue to apply your own changes.
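+
+A minimal sketch of the rebase itself, assuming a bash-style shell and that `origin/main` is the parent branch (re-export from the portal before you start, and re-import afterwards, as described in the tips above):
+
+```console
+git fetch origin
+git rebase origin/main
+# Resolve any conflicts in the .lu file with your standard merge tools, then:
+git rebase --continue
+```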
+
+### Merge PRs
+
+After your PR is approved, you can merge your changes to your main branch. No special considerations apply to the LUDown source for a LUIS app: it's human readable and so supports merging using standard merge tools. Any merge conflicts may be resolved in the same way as with other source files.
+
+After your PR has been merged, it's recommended that you clean up (a sketch of the Git commands follows this list):
+
+- Delete the branch in your repo
+
+- Delete the 'dev branch' LUIS app you created for the feature branch work.
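+
+A minimal sketch of deleting the merged branch, assuming a bash-style shell and an illustrative branch name (the dev branch LUIS app is deleted separately in the LUIS portal):
+
+```console
+git branch -d feature/book-flight
+git push origin --delete feature/book-flight
+```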
+
+In the same way as with application code assets, you should write unit tests to accompany LUIS app updates. You should employ continuous integration workflows to test:
+
+- Updates in a PR before the PR is merged
+- The main branch LUIS app after a PR has been approved and the changes have been merged into main.
+
+For more information on testing for LUIS DevOps, see [Testing for DevOps for LUIS](luis-concept-devops-testing.md). For more details on implementing workflows, see [Automation workflows for LUIS DevOps](luis-concept-devops-automation.md).
+
+## Code reviews
+
+A LUIS app in LUDown format is human readable, which makes the changes in a PR easy to communicate and review. Unit test files can also be written in LUDown format, which makes them easy to review in a PR as well.
+
+## Versioning
+
+An application consists of multiple components that might include things such as a bot running in [Azure AI Bot Service](/azure/bot-service/bot-service-overview-introduction), [QnA Maker](https://www.qnamaker.ai/), [Azure AI Speech service](../speech-service/overview.md), and more. To achieve the goal of loosely coupled applications, use [version control](/devops/develop/git/what-is-version-control) so that each component of an application is versioned independently, allowing developers to detect breaking changes or updates just by looking at the version number. It's easier to version your LUIS app independently from other components if you maintain it in its own repo.
+
+The LUIS app for the main branch should have a versioning scheme applied. When you merge updates to the `.lu` for a LUIS app into main, you'll then import that updated source into a new version in the LUIS app for the main branch.
+
+It is recommended that you use a numeric versioning scheme for the main LUIS app version, for example:
+
+`major.minor[.build[.revision]]`
+
+With each update, increment the version number at the last digit.
+
+The major / minor version can be used to indicate the scope of the changes to the LUIS app functionality:
+
+* Major Version: A significant change, such as support for a new [Intent](./concepts/intents.md) or [Entity](concepts/entities.md)
+* Minor Version: A backwards-compatible minor change, such as after significant new training
+* Build: No functionality change, just a different build.
+
+Once you've determined the version number for the latest revision of your main LUIS app, you need to build and test the new app version, and publish it to an endpoint where it can be used in different build environments, such as Quality Assurance or Production. It's highly recommended that you automate all these steps in a continuous integration (CI) workflow.
+
+See:
+- [Automation workflows](luis-concept-devops-automation.md) for details on how to implement a CI workflow to test and release a LUIS app.
+- [Release Management](luis-concept-devops-automation.md#release-management) for information on how to deploy your LUIS app.
+
+### Versioning the 'feature branch' LUIS app
+
+When you are working with a 'dev branch' LUIS app that you've created to support work in a feature branch, you will export your app when your work is complete and include the updated `.lu` in your PR. The branch in your repo, and the 'dev branch' LUIS app, should be deleted after the PR is merged into main. Since this app exists solely to support the work in the feature branch, there's no particular versioning scheme you need to apply within this app.
+
+When your changes in your PR are merged into main, that is when the versioning should be applied, so that all updates to main are versioned independently.
+
+## Next steps
+
+* Learn about [testing for LUIS DevOps](luis-concept-devops-testing.md)
+* Learn how to [implement DevOps for LUIS with GitHub](./luis-concept-devops-automation.md)
ai-services Luis Concept Devops Testing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/LUIS/luis-concept-devops-testing.md
+
+ Title: Testing for DevOps for LUIS apps
+description: How to test your Language Understanding (LUIS) app in a DevOps environment.
++++++ Last updated : 06/3/2020++
+# Testing for LUIS DevOps
+++
+Software engineers who are developing a Language Understanding (LUIS) app can apply DevOps practices around [source control](luis-concept-devops-sourcecontrol.md), [automated builds](luis-concept-devops-automation.md), [testing](luis-concept-devops-testing.md), and [release management](luis-concept-devops-automation.md#release-management) by following these guidelines.
+
+In agile software development methodologies, testing plays an integral role in building quality software. Every significant change to a LUIS app should be accompanied by tests designed to test the new functionality the developer is building into the app. These tests are checked into your source code repository along with the `.lu` source of your LUIS app. The implementation of the change is finished when the app satisfies the tests.
+
+Tests are a critical part of [CI/CD workflows](luis-concept-devops-automation.md). When changes to a LUIS app are proposed in a pull request (PR) or after changes are merged into your main branch, then CI workflows should run the tests to verify that the updates haven't caused any regressions.
+
+## How to do Unit testing and Batch testing
+
+There are two different kinds of testing for a LUIS app that you need to perform in continuous integration workflows:
+
+- **Unit tests** - Relatively simple tests that verify the key functionality of your LUIS app. A unit test passes when the expected intent and the expected entities are returned for a given test utterance. All unit tests must pass for the test run to complete successfully.
+This kind of testing is similar to [Interactive testing](./how-to/train-test.md) that you can do in the [LUIS portal](https://www.luis.ai/).
+
+- **Batch tests** - Batch testing is a comprehensive test on your current trained model to measure its performance. Unlike unit tests, batch testing isn't pass/fail testing. The expectation with batch testing is not that every test will return the expected intent and expected entities. Instead, a batch test helps you view the accuracy of each intent and entity in your app and helps you to compare over time as you make improvements.
+This kind of testing is the same as the [Batch testing](./luis-how-to-batch-test.md) that you can perform interactively in the LUIS portal.
+
+You can employ unit testing from the beginning of your project. Batch testing is only really of value once you've developed the schema of your LUIS app and you're working on improving its accuracy.
+
+For both unit tests and batch tests, make sure that your test utterances are kept separate from your training utterances. If you test on the same data you train on, you'll get the false impression your app is performing well when it's just overfitting to the testing data. Tests must be unseen by the model to test how well it is generalizing.
+
+### Writing tests
+
+When you write a set of tests, for each test you need to define:
+
+* Test utterance
+* Expected intent
+* Expected entities.
+
+Use the LUIS [batch file syntax](./luis-how-to-batch-test.md#batch-syntax-template-for-intents-with-entities) to define a group of tests in a JSON-formatted file. For example:
+
+```JSON
+[
+ {
+ "text": "example utterance goes here",
+ "intent": "intent name goes here",
+ "entities":
+ [
+ {
+ "entity": "entity name 1 goes here",
+ "startPos": 14,
+ "endPos": 23
+ },
+ {
+ "entity": "entity name 2 goes here",
+ "startPos": 14,
+ "endPos": 23
+ }
+ ]
+ }
+]
+```
+
+Some test tools, such as [NLU.DevOps](https://github.com/microsoft/NLU.DevOps), also support LUDown-formatted test files.
+
+#### Designing unit tests
+
+Unit tests should be designed to test the core functionality of your LUIS app. In each iteration, or sprint, of your app development, you should write a sufficient number of tests to verify that the key functionality you are implementing in that iteration is working correctly.
+
+In each unit test, for a given test utterance, you can:
+
+* Test that the correct intent is returned
+* Test that the 'key' entities - those that are critical to your solution - are being returned.
+* Test that the [prediction score](./luis-concept-prediction-score.md) for intent and entities exceeds a threshold that you define. For example, you could decide that you will only consider that a test has passed if the prediction score for the intent and for your key entities exceeds 0.75.
+
+In unit tests, it's a good idea to test that your key entities have been returned in the prediction response, but to ignore any false positives. *False positives* are entities that are found in the prediction response but which are not defined in the expected results for your test. By ignoring false positives, it makes it less onerous to author unit tests while still allowing you to focus on testing that the data that is key to your solution is being returned in a prediction response.
+
+> [!TIP]
+> The [NLU.DevOps](https://github.com/microsoft/NLU.DevOps) tool supports all your LUIS testing needs. The `compare` command when used in [unit test mode](https://github.com/microsoft/NLU.DevOps/blob/master/docs/Analyze.md#unit-test-mode) will assert that all tests pass, and will ignore false positive results for entities that are not labeled in the expected results.
+
+#### Designing Batch tests
+
+Batch test sets should contain a large number of test cases, designed to test across all intents and all entities in your LUIS app. See [Batch testing in the LUIS portal](./luis-how-to-batch-test.md) for information on defining a batch test set.
+
+### Running tests
+
+The LUIS portal offers features to help with interactive testing:
+
+* [**Interactive testing**](./how-to/train-test.md) allows you to submit a sample utterance and get a response of LUIS-recognized intents and entities. You verify the success of the test by visual inspection.
+
+* [**Batch testing**](./luis-how-to-batch-test.md) uses a batch test file as input to validate your active trained version to measure its prediction accuracy. A batch test helps you view the accuracy of each intent and entity in your active version, displaying results with a chart.
+
+#### Running tests in an automated build workflow
+
+The interactive testing features in the LUIS portal are useful, but for DevOps, automated testing performed in a CI/CD workflow brings certain requirements:
+
+* Test tools must run in a workflow step on a build server. This means the tools must be able to run on the command line.
+* The test tools must be able to execute a group of tests against an endpoint and automatically verify the expected results against the actual results.
+* If the tests fail, the test tools must return a status code to halt the workflow and "fail the build".
+
+LUIS does not offer a command-line tool or a high-level API that offers these features. We recommend that you use the [NLU.DevOps](https://github.com/microsoft/NLU.DevOps) tool to run tests and verify results, both at the command line and during automated testing within a CI/CD workflow.
+
+The testing capabilities that are available in the LUIS portal don't require a published endpoint and are a part of the LUIS authoring capabilities. When you're implementing testing in an automated build workflow, you must publish the LUIS app version to be tested to an endpoint so that test tools such as NLU.DevOps can send prediction requests as part of testing.
+
+> [!TIP]
+> * If you're implementing your own testing solution and writing code to send test utterances to an endpoint, remember that if you are using the LUIS authoring key, the allowed transaction rate is limited to 5 TPS (transactions per second). Either throttle the sending rate or use a prediction key instead.
+> * When sending test queries to an endpoint, remember to use `log=false` in the query string of your prediction request. This ensures that your test utterances do not get logged by LUIS and end up in the endpoint utterances review list presented by the LUIS [active learning](./how-to/improve-application.md) feature and, as a result, accidentally get added to the training utterances of your app.
+
+#### Running Unit tests at the command line and in CI/CD workflows
+
+You can use the [NLU.DevOps](https://github.com/microsoft/NLU.DevOps) package to run tests at the command line, as sketched after this list:
+
+* Use the NLU.DevOps [test command](https://github.com/microsoft/NLU.DevOps/blob/master/docs/Test.md) to submit tests from a test file to an endpoint and to capture the actual prediction results in a file.
+* Use the NLU.DevOps [compare command](https://github.com/microsoft/NLU.DevOps/blob/master/docs/Analyze.md) to compare the actual results with the expected results defined in the input test file. The `compare` command generates NUnit test output, and when used in [unit test mode](https://github.com/microsoft/NLU.DevOps/blob/master/docs/Analyze.md#unit-test-mode) by use of the `--unit-test` flag, will assert that all tests pass.
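+
+A minimal sketch of that sequence, assuming NLU.DevOps is installed as a .NET global tool and that the file names are illustrative. The flag names are assumptions based on the linked Test and Analyze documentation, so verify them there before use:
+
+```console
+# Send the test utterances to the endpoint and capture the actual results (flag names are assumptions)
+dotnet nlu test --service luis --utterances tests.json --output results.json
+# Compare actual results to expected results; --unit-test asserts that all tests pass
+dotnet nlu compare --expected tests.json --actual results.json --unit-test
+```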
+
+### Running Batch tests at the command line and in CI/CD workflows
+
+You can also use the NLU.DevOps package to run batch tests at the command line.
+
+* Use the NLU.DevOps [test command](https://github.com/microsoft/NLU.DevOps/blob/master/docs/Test.md) to submit tests from a test file to an endpoint and to capture the actual prediction results in a file, same as with unit tests.
+* Use the NLU.DevOps [compare command](https://github.com/microsoft/NLU.DevOps/blob/master/docs/Analyze.md) in [Performance test mode](https://github.com/microsoft/NLU.DevOps/blob/master/docs/Analyze.md#performance-test-mode) to measure the performance of your app. You can also compare the performance of your app against a baseline performance benchmark, for example, the results from the latest commit to main or the current release. In Performance test mode, the `compare` command generates NUnit test output and [batch test results](./luis-glossary.md#batch-test) in JSON format.
+
+## LUIS non-deterministic training and the effect on testing
+
+When LUIS is training a model, such as an intent, it needs both positive data - the labeled training utterances that you've supplied to train the app for the model - and negative data - data that is *not* valid examples of the usage of that model. During training, LUIS builds the negative data of one model from all the positive data you've supplied for the other models, but in some cases that can produce a data imbalance. To avoid this imbalance, LUIS samples a subset of the negative data in a non-deterministic fashion to optimize for a better balanced training set, improved model performance, and faster training time.
+
+The result of this non-deterministic training is that you may get a slightly [different prediction response between different training sessions](./luis-concept-prediction-score.md), usually for intents and/or entities where the [prediction score](./luis-concept-prediction-score.md) is not high.
+
+If you want to disable non-deterministic training for those LUIS app versions that you're building for the purpose of testing, use the [Version settings API](https://westus.dev.cognitive.microsoft.com/docs/services/5890b47c39e2bb17b84a55ff/operations/versions-update-application-version-settings) with the `UseAllTrainingData` setting set to `true`.
+
+## Next steps
+
+* Learn about [implementing CI/CD workflows](luis-concept-devops-automation.md)
+* Learn how to [implement DevOps for LUIS with GitHub](./luis-concept-devops-automation.md)
ai-services Luis Concept Model https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/LUIS/luis-concept-model.md
+
+ Title: Design with models - LUIS
+description: Language understanding provides several types of models. Some models can be used in more than one way.
+++
+ms.
++ Last updated : 01/07/2022++
+# Design with intent and entity models
+++
+Language understanding provides two types of models for you to define your app schema. Your app schema determines what information you receive from the prediction of a new user utterance.
+
+The app schema is built from models you create using [machine teaching](#authoring-uses-machine-teaching):
+* [Intents](#intents-classify-utterances) classify user utterances
+* [Entities](#entities-extract-data) extract data from utterances
+
+## Authoring uses machine teaching
+
+LUIS's machine teaching methodology allows you to easily teach concepts to a machine. Understanding _machine learning_ is not necessary to use LUIS. Instead, you, as the teacher, communicate a concept to LUIS by providing examples of the concept and explaining how it should be modeled using other related concepts. You can also improve LUIS's model interactively by identifying and fixing prediction mistakes.
+
+<a name="v3-authoring-model-decomposition"></a>
+
+## Intents classify utterances
+
+You train an intent by adding example utterances that teach LUIS about the intent. Example utterances within an intent are used as positive examples of that intent. These same utterances are used as negative examples in all other intents.
+
+Consider an app that needs to determine a user's intention to order a book and that also needs the customer's shipping address. This app has two intents: `OrderBook` and `ShippingLocation`.
+
+The following utterance is a **positive example** for the `OrderBook` intent and a **negative example** for the `ShippingLocation` and `None` intents:
+
+`Buy the top-rated book on bot architecture.`
+
+## Entities extract data
+
+An entity represents a unit of data you want extracted from the utterance. A machine-learning entity is a top-level entity containing subentities, which are also machine-learning entities.
+
+An example of a machine-learning entity is an order for a plane ticket. Conceptually this is a single transaction with many smaller units of data such as date, time, quantity of seats, type of seat such as first class or coach, origin location, destination location, and meal choice.
+
+## Intents versus entities
+
+An intent is the desired outcome of the _whole_ utterance while entities are pieces of data extracted from the utterance. Usually intents are tied to actions, which the client application should take. Entities are information needed to perform this action. From a programming perspective, an intent would trigger a method call and the entities would be used as parameters to that method call.
+
+This utterance _must_ have an intent and _may_ have entities:
+
+`Buy an airline ticket from Seattle to Cairo`
+
+This utterance has a single intention:
+
+* Buying a plane ticket
+
+This utterance _may_ have several entities:
+
+* Locations of Seattle (origin) and Cairo (destination)
+* The quantity of a single ticket
+
+## Entity model decomposition
+
+LUIS supports _model decomposition_ with the authoring APIs, breaking down a concept into smaller parts. This allows you to build your models with confidence in how the various parts are constructed and predicted.
+
+Model decomposition has the following parts:
+
+* [intents](#intents-classify-utterances)
+ * [features](#features)
+* [machine-learning entities](reference-entity-machine-learned-entity.md)
+ * subentities (also machine-learning entities)
+ * [features](#features)
+ * [phrase list](concepts/patterns-features.md)
+ * [non-machine-learning entities](concepts/patterns-features.md) such as [regular expressions](reference-entity-regular-expression.md), [lists](reference-entity-list.md), and [prebuilt entities](luis-reference-prebuilt-entities.md)
+
+<a name="entities-extract-data"></a>
+<a name="machine-learned-entities"></a>
+
+## Features
+
+A [feature](concepts/patterns-features.md) is a distinguishing trait or attribute of data that your system observes. Machine learning features give LUIS important cues for where to look for things that will distinguish a concept. They are hints that LUIS can use, but not hard rules. These hints are used in conjunction with the labels to find the data.
+
+## Patterns
+
+[Patterns](concepts/patterns-features.md) are designed to improve accuracy when several utterances are very similar. A pattern allows you to gain more accuracy for an intent without providing many more utterances.
+
+## Extending the app at runtime
+
+The app's schema (models and features) is trained and published to the prediction endpoint. You can [pass new information](schema-change-prediction-runtime.md), along with the user's utterance, to the prediction endpoint to augment the prediction.
+
+## Next steps
+
+* Understand [intents](concepts/patterns-features.md) and [entities](concepts/entities.md).
+* Learn more about [features](concepts/patterns-features.md)
ai-services Luis Concept Prebuilt Model https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/LUIS/luis-concept-prebuilt-model.md
+
+ Title: Prebuilt models - LUIS
+
+description: Prebuilt models provide domains, intents, utterances, and entities. You can start your app with a prebuilt domain or add a relevant domain to your app later.
++++++++ Last updated : 10/10/2019+++
+# Prebuilt models
+++
+Prebuilt models provide domains, intents, utterances, and entities. You can start your app with a prebuilt model or add a relevant model to your app later.
+
+## Types of prebuilt models
+
+LUIS provides three types of prebuilt models. Each model can be added to your app at any time.
+
+|Model type|Includes|
+|--|--|
+|[Domain](luis-reference-prebuilt-domains.md)|Intents, utterances, entities|
+|Intents|Intents, utterances|
+|[Entities](luis-reference-prebuilt-entities.md)|Entities only|
+
+## Prebuilt domains
+
+Language Understanding (LUIS) provides *prebuilt domains*, which are pre-trained models of [intents](how-to/intents.md) and [entities](concepts/entities.md) that work together for domains or common categories of client applications.
+
+The prebuilt domains are trained and ready to add to your LUIS app. The intents and entities of a prebuilt domain are fully customizable once you've added them to your app.
+
+> [!TIP]
+> The intents and entities in a prebuilt domain work best together. It's better to combine intents and entities from the same domain when possible.
+> The Utilities prebuilt domain has intents that you can customize for use in any domain. For example, you can add `Utilities.Repeat` to your app and train it to recognize whatever actions a user might want to repeat in your application.
+
+### Changing the behavior of a prebuilt domain intent
+
+You might find that a prebuilt domain contains an intent that is similar to an intent you want to have in your LUIS app but you want it to behave differently. For example, the **Places** prebuilt domain provides a `MakeReservation` intent for making a restaurant reservation, but you want your app to use that intent to make hotel reservations. In that case, you can modify the behavior of that intent by adding example utterances to the intent about making hotel reservations and then retrain the app.
+
+You can find a full listing of the prebuilt domains in the [Prebuilt domains reference](./luis-reference-prebuilt-domains.md).
+
+## Prebuilt intents
+
+LUIS provides prebuilt intents and their utterances for each of its prebuilt domains. Intents can be added without adding the whole domain. Adding an intent is the process of adding an intent and its utterances to your app. Both the intent name and the utterance list can be modified.
+
+## Prebuilt entities
+
+LUIS includes a set of prebuilt entities for recognizing common types of information, like dates, times, numbers, measurements, and currency. Prebuilt entity support varies by the culture of your LUIS app. For a full list of the prebuilt entities that LUIS supports, including support by culture, see the [prebuilt entity reference](./luis-reference-prebuilt-entities.md).
+
+When a prebuilt entity is included in your application, its predictions are included in your published application. The behavior of prebuilt entities is pre-trained and **cannot** be modified.
+
+## Next steps
+
+Learn how to [add prebuilt entities](./howto-add-prebuilt-models.md) to your app.
ai-services Luis Concept Prediction Score https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/LUIS/luis-concept-prediction-score.md
+
+ Title: Prediction scores - LUIS
+description: A prediction score indicates the degree of confidence the LUIS API service has for prediction results, based on a user utterance.
++++++ Last updated : 04/14/2020++
+# Prediction scores indicate prediction accuracy for intent and entities
+++
+A prediction score indicates the degree of confidence LUIS has for prediction results of a user utterance.
+
+A prediction score is between zero (0) and one (1). An example of a highly confident LUIS score is 0.99. An example of a score of low confidence is 0.01.
+
+|Score value|Confidence|
+|--|--|
+|1|definite match|
+|0.99|high confidence|
+|0.01|low confidence|
+|0|definite failure to match|
+
+## Top-scoring intent
+
+Every utterance prediction returns a top-scoring intent. This prediction is a numerical comparison of prediction scores.
+
+## Proximity of scores to each other
+
+The top 2 scores can have a very small difference between them. LUIS doesn't indicate this proximity other than returning the top score.
+
+## Return prediction score for all intents
+
+A test or endpoint result can include all intents. This configuration is set on the prediction request by using the correct querystring name/value pair, as shown in the following table and the example after it.
+
+|Prediction API|Querystring name|
+|--|--|
+|V3|`show-all-intents=true`|
+|V2|`verbose=true`|
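+
+For example, a V3 prediction request that returns scores for all intents might look like the following sketch, where the endpoint, app ID, key, and query are placeholders:
+
+```console
+# Query the V3 prediction endpoint and return scores for all intents
+curl -G "https://{PREDICTION_ENDPOINT}/luis/prediction/v3.0/apps/{APP_ID}/slots/production/predict" --data-urlencode "query=book a flight to Cairo" --data-urlencode "show-all-intents=true" --data-urlencode "subscription-key={PREDICTION_KEY}"
+```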
+
+## Review intents with similar scores
+
+Reviewing the score for all intents is a good way to verify that not only is the correct intent identified, but that the next identified intent's score is significantly and consistently lower for utterances.
+
+If multiple intents have close prediction scores, based on the context of an utterance, LUIS may switch between the intents. To fix this situation, continue to add utterances to each intent with a wider variety of contextual differences or you can have the client application, such as a chat bot, make programmatic choices about how to handle the 2 top intents.
+
+The 2 intents, which are too-closely scored, may invert due to **non-deterministic training**. The top score could become the second top, and the second top score could become the top score. In order to prevent this situation, add example utterances to each of the top two intents for that utterance with word choice and context that differentiates the 2 intents. The two intents should have about the same number of example utterances. A rule of thumb for enough separation to prevent inversion due to training is a 15% difference in scores.
+
+You can turn off the **non-deterministic training** by [training with all data](how-to/train-test.md).
+
+## Differences with predictions between different training sessions
+
+If you train the same model in a different app and the scores are not the same, the difference is primarily due to **non-deterministic training** (an element of randomness). Additionally, any overlap of an utterance across more than one intent means the top intent for the same utterance can change based on training.
+
+If your chat bot requires a specific LUIS score to indicate confidence in an intent, you should use the score difference between the top two intents. This situation provides flexibility for variations in training.
+
+You can turn off the **non-deterministic training** by [training with all data](how-to/train-test.md).
+
+## E (exponent) notation
+
+Prediction scores can use exponent (E) notation, which can _appear_ to be above the 0-1 range, such as `9.910309E-07`. This score actually indicates a very **small** number.
+
+|E notation score |Actual score|
+|--|--|
+|9.910309E-07|0.0000009910309|
+
+<a name="punctuation"></a>
+
+## Application settings
+
+Use [application settings](luis-reference-application-settings.md) to control how diacritics and punctuation impact prediction scores.
+
+## Next steps
+
+See [Add entities](how-to/entities.md) to learn more about how to add entities to your LUIS app.
ai-services Luis Container Configuration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/LUIS/luis-container-configuration.md
+
+ Title: Docker container settings - LUIS
+
+description: The LUIS container runtime environment is configured using the `docker run` command arguments. LUIS has several required settings, along with a few optional settings.
+++++++ Last updated : 04/01/2020+++
+# Configure Language Understanding Docker containers
+++
+The **Language Understanding** (LUIS) container runtime environment is configured using the `docker run` command arguments. LUIS has several required settings, along with a few optional settings. Several [examples](#example-docker-run-commands) of the command are available. The container-specific settings are the input [mount settings](#mount-settings) and the billing settings.
+
+## Configuration settings
+
+This container has the following configuration settings:
+
+|Required|Setting|Purpose|
+|--|--|--|
+|Yes|[ApiKey](#apikey-setting)|Used to track billing information.|
+|No|[ApplicationInsights](#applicationinsights-setting)|Allows you to add [Azure Application Insights](/azure/application-insights) telemetry support to your container.|
+|Yes|[Billing](#billing-setting)|Specifies the endpoint URI of the service resource on Azure.|
+|Yes|[Eula](#eula-setting)| Indicates that you've accepted the license for the container.|
+|No|[Fluentd](#fluentd-settings)|Write log and, optionally, metric data to a Fluentd server.|
+|No|[Http Proxy](#http-proxy-credentials-settings)|Configure an HTTP proxy for making outbound requests.|
+|No|[Logging](#logging-settings)|Provides ASP.NET Core logging support for your container. |
+|Yes|[Mounts](#mount-settings)|Read and write data from host computer to container and from container back to host computer.|
+
+> [!IMPORTANT]
+> The [`ApiKey`](#apikey-setting), [`Billing`](#billing-setting), and [`Eula`](#eula-setting) settings are used together, and you must provide valid values for all three of them; otherwise your container won't start. For more information about using these configuration settings to instantiate a container, see [Billing](luis-container-howto.md#billing).
+
+## ApiKey setting
+
+The `ApiKey` setting specifies the Azure resource key used to track billing information for the container. You must specify a value for the ApiKey and the value must be a valid key for the _Azure AI services_ resource specified for the [`Billing`](#billing-setting) configuration setting.
+
+This setting can be found in the following places:
+
+* Azure portal: **Azure AI services** Resource Management, under **Keys**
+* LUIS portal: **Keys and Endpoint settings** page.
+
+Do not use the starter key or the authoring key.
+
+## ApplicationInsights setting
++
+## Billing setting
+
+The `Billing` setting specifies the endpoint URI of the _Azure AI services_ resource on Azure used to meter billing information for the container. You must specify a value for this configuration setting, and the value must be a valid endpoint URI for an _Azure AI services_ resource on Azure. The container reports usage about every 10 to 15 minutes.
+
+This setting can be found in the following places:
+
+* Azure portal: **Azure AI services** Overview, labeled `Endpoint`
+* LUIS portal: **Keys and Endpoint settings** page, as part of the endpoint URI.
+
+| Required | Name | Data type | Description |
+|-||--|-|
+| Yes | `Billing` | string | Billing endpoint URI. For more information on obtaining the billing URI, see [gather required parameters](luis-container-howto.md#gather-required-parameters). For more information and a complete list of regional endpoints, see [Custom subdomain names for Azure AI services](../cognitive-services-custom-subdomains.md). |
+
+## Eula setting
++
+## Fluentd settings
++
+## HTTP proxy credentials settings
++
+## Logging settings
+
+
+## Mount settings
+
+Use bind mounts to read and write data to and from the container. You can specify an input mount or output mount by specifying the `--mount` option in the [docker run](https://docs.docker.com/engine/reference/commandline/run/) command.
+
+The LUIS container doesn't use input or output mounts to store training or service data.
+
+The exact syntax of the host mount location varies depending on the host operating system. Additionally, the [host computer](luis-container-howto.md#the-host-computer)'s mount location may not be accessible due to a conflict between permissions used by the docker service account and the host mount location permissions.
+
+The following table describes the settings supported.
+
+|Required| Name | Data type | Description |
+|-||--|-|
+|Yes| `Input` | String | The target of the input mount. The default value is `/input`. This is the location of the LUIS package files. <br><br>Example:<br>`--mount type=bind,src=c:\input,target=/input`|
+|No| `Output` | String | The target of the output mount. The default value is `/output`. This is the location of the logs. This includes LUIS query logs and container logs. <br><br>Example:<br>`--mount type=bind,src=c:\output,target=/output`|
+
+## Example docker run commands
+
+The following examples use the configuration settings to illustrate how to write and use `docker run` commands. Once running, the container continues to run until you [stop](luis-container-howto.md#stop-the-container) it.
+
+* These examples use the directory off the `C:` drive to avoid any permission conflicts on Windows. If you need to use a specific directory as the input directory, you may need to grant the docker service permission.
+* Do not change the order of the arguments unless you are very familiar with docker containers.
+* If you are using a different operating system, use the correct console/terminal, folder syntax for mounts, and line continuation character for your system. These examples assume a Windows console with a line continuation character `^`. Because the container is a Linux operating system, the target mount uses a Linux-style folder syntax.
+
+Replace {_argument_name_} with your own values:
+
+| Placeholder | Value | Format or example |
+|-|-||
+| **{API_KEY}** | The endpoint key of the `LUIS` resource on the Azure `LUIS` Keys page. | `xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx` |
+| **{ENDPOINT_URI}** | The billing endpoint value is available on the Azure `LUIS` Overview page.| See [gather required parameters](luis-container-howto.md#gather-required-parameters) for explicit examples. |
++
+> [!IMPORTANT]
+> The `Eula`, `Billing`, and `ApiKey` options must be specified to run the container; otherwise, the container won't start. For more information, see [Billing](luis-container-howto.md#billing).
+> The ApiKey value is the **Key** from the Keys and Endpoints page in the LUIS portal and is also available on the Azure `Azure AI services` resource keys page.
+
+### Basic example
+
+The following example has the fewest arguments possible to run the container:
+
+```console
+docker run --rm -it -p 5000:5000 --memory 4g --cpus 2 ^
+--mount type=bind,src=c:\input,target=/input ^
+--mount type=bind,src=c:\output,target=/output ^
+mcr.microsoft.com/azure-cognitive-services/luis:latest ^
+Eula=accept ^
+Billing={ENDPOINT_URI} ^
+ApiKey={API_KEY}
+```
+
+### ApplicationInsights example
+
+The following example sets the ApplicationInsights argument to send telemetry to Application Insights while the container is running:
+
+```console
+docker run --rm -it -p 5000:5000 --memory 6g --cpus 2 ^
+--mount type=bind,src=c:\input,target=/input ^
+--mount type=bind,src=c:\output,target=/output ^
+mcr.microsoft.com/azure-cognitive-services/luis:latest ^
+Eula=accept ^
+Billing={ENDPOINT_URI} ^
+ApiKey={API_KEY} ^
+InstrumentationKey={INSTRUMENTATION_KEY}
+```
+
+### Logging example
+
+The following command sets the `Logging:Console:LogLevel` configuration setting to [`Information`](https://msdn.microsoft.com) to control the logging level.
+
+```console
+docker run --rm -it -p 5000:5000 --memory 6g --cpus 2 ^
+--mount type=bind,src=c:\input,target=/input ^
+--mount type=bind,src=c:\output,target=/output ^
+mcr.microsoft.com/azure-cognitive-services/luis:latest ^
+Eula=accept ^
+Billing={ENDPOINT_URI} ^
+ApiKey={API_KEY} ^
+Logging:Console:LogLevel:Default=Information
+```
+
+## Next steps
+
+* Review [How to install and run containers](luis-container-howto.md)
+* Refer to [Troubleshooting](faq.md) to resolve issues related to LUIS functionality.
+* Use more [Azure AI containers](../cognitive-services-container-support.md)
ai-services Luis Container Howto https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/LUIS/luis-container-howto.md
+
+ Title: Install and run Docker containers for LUIS
+
+description: Use the LUIS container to load your trained or published app, and gain access to its predictions on-premises.
+++++++ Last updated : 03/02/2023+
+keywords: on-premises, Docker, container
++
+# Install and run Docker containers for LUIS
++++
+Containers enable you to use LUIS in your own environment. Containers are great for specific security and data governance requirements. In this article you'll learn how to download, install, and run a LUIS container.
+
+The Language Understanding (LUIS) container loads your trained or published Language Understanding model. Like a [LUIS app](https://www.luis.ai) in the cloud, the docker container provides access to query predictions from the container's API endpoints. You can collect query logs from the container and upload them back to the Language Understanding app to improve the app's prediction accuracy.
+
+The following video demonstrates using this container.
+
+[![Container demonstration for Azure AI services](./media/luis-container-how-to/luis-containers-demo-video-still.png)](https://aka.ms/luis-container-demo)
+
+If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/cognitive-services/) before you begin.
+
+## Prerequisites
+
+To run the LUIS container, note the following prerequisites:
+
+* [Docker](https://docs.docker.com/) installed on a host computer. Docker must be configured to allow the containers to connect with and send billing data to Azure.
+ * On Windows, Docker must also be configured to support Linux containers.
+ * You should have a basic understanding of [Docker concepts](https://docs.docker.com/get-started/overview/).
+* A <a href="https://portal.azure.com/#create/Microsoft.CognitiveServicesLUISAllInOne" title="Create a LUIS resource" target="_blank">LUIS resource </a> with the free (F0) or standard (S) [pricing tier](https://azure.microsoft.com/pricing/details/cognitive-services/language-understanding-intelligent-services/).
+* A trained or published app packaged as a mounted input to the container with its associated App ID. You can get the packaged file from the LUIS portal or the Authoring APIs. If you are getting the packaged LUIS app from the [authoring APIs](#authoring-apis-for-package-file), you will also need your _Authoring Key_.
++
+### App ID `{APP_ID}`
+
+This ID is used to select the app. You can find the app ID in the [LUIS portal](https://www.luis.ai/) by clicking **Manage** at the top of the screen for your app, and then **Settings**.
++
+### Authoring key `{AUTHORING_KEY}`
+
+This key is used to get the packaged app from the LUIS service in the cloud and upload the query logs back to the cloud. You will need your authoring key if you [export your app using the REST API](#export-published-apps-package-from-api), described later in the article.
+
+You can get your authoring key from the [LUIS portal](https://www.luis.ai/) by clicking **Manage** at the top of the screen for your app, and then **Azure Resources**.
+++
+### Authoring APIs for package file
+
+Authoring APIs for packaged apps:
+
+* [Published package API](https://westus.dev.cognitive.microsoft.com/docs/services/5890b47c39e2bb17b84a55ff/operations/apps-packagepublishedapplicationasgzip)
+* [Not-published, trained-only package API](https://westus.dev.cognitive.microsoft.com/docs/services/5890b47c39e2bb17b84a55ff/operations/apps-packagetrainedapplicationasgzip)
+
+### The host computer
++
+### Container requirements and recommendations
+
+The following table lists the minimum and recommended values for the container host. Your requirements may change depending on traffic volume.
+
+|Container| Minimum | Recommended | TPS<br>(Minimum, Maximum)|
+|--||-|--|
+|LUIS|1 core, 2-GB memory|1 core, 4-GB memory|20, 40|
+
+* Each core must be at least 2.6 gigahertz (GHz) or faster.
+* TPS - transactions per second
+
+Core and memory correspond to the `--cpus` and `--memory` settings, which are used as part of the `docker run` command.
+
+## Get the container image with `docker pull`
+
+The LUIS container image can be found on the `mcr.microsoft.com` container registry syndicate. It resides within the `azure-cognitive-services/language` repository and is named `luis`. The fully qualified container image name is `mcr.microsoft.com/azure-cognitive-services/language/luis`.
+
+To use the latest version of the container, you can use the `latest` tag. You can also find a full list of [tags on the MCR](https://mcr.microsoft.com/product/azure-cognitive-services/language/luis/tags).
+
+Use the [`docker pull`](https://docs.docker.com/engine/reference/commandline/pull/) command to download a container image from the `mcr.microsoft.com/azure-cognitive-services/language/luis` repository:
+
+```
+docker pull mcr.microsoft.com/azure-cognitive-services/language/luis:latest
+```
+
+For a full description of available tags, such as `latest` used in the preceding command, see [LUIS](https://go.microsoft.com/fwlink/?linkid=2043204) on Docker Hub.
++
+## How to use the container
+
+Once the container is on the [host computer](#the-host-computer), use the following process to work with the container.
+
+![Process for using Language Understanding (LUIS) container](./media/luis-container-how-to/luis-flow-with-containers-diagram.jpg)
+
+1. [Export package](#export-packaged-app-from-luis) for container from LUIS portal or LUIS APIs.
+1. Move the package file into the required **input** directory on the [host computer](#the-host-computer). Do not rename, alter, overwrite, or decompress the LUIS package file.
+1. [Run the container](#run-the-container-with-docker-run), with the required _input mount_ and billing settings. More [examples](luis-container-configuration.md#example-docker-run-commands) of the `docker run` command are available.
+1. [Query the container's prediction endpoint](#query-the-containers-prediction-endpoint).
+1. When you are done with the container, [import the endpoint logs](#import-the-endpoint-logs-for-active-learning) from the output mount in the LUIS portal and [stop](#stop-the-container) the container.
+1. Use LUIS portal's [active learning](how-to/improve-application.md) on the **Review endpoint utterances** page to improve the app.
+
+The app running in the container can't be altered. In order to change the app in the container, you need to change the app in the LUIS service using the [LUIS](https://www.luis.ai) portal or use the LUIS [authoring APIs](https://westus.dev.cognitive.microsoft.com/docs/services/5890b47c39e2bb17b84a55ff/operations/5890b47c39e2bb052c5b9c2f). Then train and/or publish the app, download a new package, and run the container again.
+
+The LUIS app inside the container can't be exported back to the LUIS service. Only the query logs can be uploaded.
+
+## Export packaged app from LUIS
+
+The LUIS container requires a trained or published LUIS app to answer prediction queries of user utterances. In order to get the LUIS app, use either the trained or published package API.
+
+The default location is the `input` subdirectory in relation to where you run the `docker run` command.
+
+Place the package file in a directory and reference this directory as the input mount when you run the docker container.
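+
+For example, on a Windows host using the same `c:\input` directory as the `docker run` examples later in this article, and a Production package file named using the format shown in the next section:
+
+```console
+REM Create the input mount directory referenced by the docker run examples
+mkdir c:\input
+REM Copy the downloaded package into it without renaming it
+copy {APP_ID}_PRODUCTION.gz c:\input
+```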
+
+### Package types
+
+The input mount directory can contain the **Production**, **Staging**, and **Versioned** models of the app simultaneously. All the packages are mounted.
+
+|Package Type|Query Endpoint API|Query availability|Package filename format|
+|--|--|--|--|
+|Versioned|GET, POST|Container only|`{APP_ID}_v{APP_VERSION}.gz`|
+|Staging|GET, POST|Azure and container|`{APP_ID}_STAGING.gz`|
+|Production|GET, POST|Azure and container|`{APP_ID}_PRODUCTION.gz`|
+
+> [!IMPORTANT]
+> Do not rename, alter, overwrite, or decompress the LUIS package files.
+
+### Packaging prerequisites
+
+Before packaging a LUIS application, you must have the following:
+
+|Packaging Requirements|Details|
+|--|--|
+|_Azure AI services_ resource instance in Azure|Supported regions include<br><br>West US (`westus`)<br>West Europe (`westeurope`)<br>Australia East (`australiaeast`)|
+|Trained or published LUIS app|With no [unsupported dependencies][unsupported-dependencies]. |
+|Access to the [host computer](#the-host-computer)'s file system |The host computer must allow an [input mount](luis-container-configuration.md#mount-settings).|
+
+### Export app package from LUIS portal
+
+The LUIS [portal](https://www.luis.ai) provides the ability to export the trained or published app's package.
+
+### Export published app's package from LUIS portal
+
+The published app's package is available from the **My Apps** list page.
+
+1. Sign on to the LUIS [portal](https://www.luis.ai).
+1. Select the checkbox to the left of the app name in the list.
+1. Select the **Export** item from the contextual toolbar above the list.
+1. Select **Export for container (GZIP)**.
+1. Select the environment of **Production slot** or **Staging slot**.
+1. The package is downloaded from the browser.
+
+![Export the published package for the container from the App page's Export menu](./media/luis-container-how-to/export-published-package-for-container.png)
+
+### Export versioned app's package from LUIS portal
+
+The versioned app's package is available from the **Versions** list page.
+
+1. Sign on to the LUIS [portal](https://www.luis.ai).
+1. Select the app in the list.
+1. Select **Manage** in the app's navigation bar.
+1. Select **Versions** in the left navigation bar.
+1. Select the checkbox to the left of the version name in the list.
+1. Select the **Export** item from the contextual toolbar above the list.
+1. Select **Export for container (GZIP)**.
+1. The package is downloaded from the browser.
+
+![Export the trained package for the container from the Versions page's Export menu](./media/luis-container-how-to/export-trained-package-for-container.png)
+
+### Export published app's package from API
+
+Use the following REST API method to package a LUIS app that you've already [published](how-to/publish.md). Substitute your own values for the placeholders in the API call, using the table below the HTTP specification.
+
+```http
+GET /luis/api/v2.0/package/{APP_ID}/slot/{SLOT_NAME}/gzip HTTP/1.1
+Host: {AZURE_REGION}.api.cognitive.microsoft.com
+Ocp-Apim-Subscription-Key: {AUTHORING_KEY}
+```
+
+| Placeholder | Value |
+|-|-|
+| **{APP_ID}** | The application ID of the published LUIS app. |
+| **{SLOT_NAME}** | The environment of the published LUIS app. Use one of the following values:<br/>`PRODUCTION`<br/>`STAGING` |
+| **{AUTHORING_KEY}** | The authoring key of the LUIS account for the published LUIS app.<br/>You can get your authoring key from the **User Settings** page on the LUIS portal. |
+| **{AZURE_REGION}** | The appropriate Azure region:<br/><br/>`westus` - West US<br/>`westeurope` - West Europe<br/>`australiaeast` - Australia East |
+
+To download the published package, refer to the [API documentation here][download-published-package]. If successfully downloaded, the response is a LUIS package file. Save the file in the storage location specified for the input mount of the container.
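+
+For example, you might download the production slot package with curl and save it into your input mount directory, keeping the expected filename format (a sketch; substitute your own values for the placeholders and adjust the region and output path for your environment):
+
+```bash
+# Sketch: download the published (PRODUCTION slot) package and save it with the
+# filename format listed earlier. {APP_ID} and {AUTHORING_KEY} are placeholders;
+# adjust the region and the output path to your own input mount.
+curl -X GET \
+  "https://westus.api.cognitive.microsoft.com/luis/api/v2.0/package/{APP_ID}/slot/PRODUCTION/gzip" \
+  -H "Ocp-Apim-Subscription-Key: {AUTHORING_KEY}" \
+  --output "/path/to/input/{APP_ID}_PRODUCTION.gz"
+```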
+
+### Export versioned app's package from API
+
+Use the following REST API method to package a LUIS application that you've already [trained](how-to/train-test.md). Substitute your own values for the placeholders in the API call, using the table below the HTTP specification.
+
+```http
+GET /luis/api/v2.0/package/{APP_ID}/versions/{APP_VERSION}/gzip HTTP/1.1
+Host: {AZURE_REGION}.api.cognitive.microsoft.com
+Ocp-Apim-Subscription-Key: {AUTHORING_KEY}
+```
+
+| Placeholder | Value |
+|-|-|
+| **{APP_ID}** | The application ID of the trained LUIS app. |
+| **{APP_VERSION}** | The application version of the trained LUIS app. |
+| **{AUTHORING_KEY}** | The authoring key of the LUIS account for the published LUIS app.<br/>You can get your authoring key from the **User Settings** page on the LUIS portal. |
+| **{AZURE_REGION}** | The appropriate Azure region:<br/><br/>`westus` - West US<br/>`westeurope` - West Europe<br/>`australiaeast` - Australia East |
+
+To download the versioned package, refer to the [API documentation here][download-versioned-package]. If successfully downloaded, the response is a LUIS package file. Save the file in the storage location specified for the input mount of the container.
+
+## Run the container with `docker run`
+
+Use the [docker run](https://docs.docker.com/engine/reference/commandline/run/) command to run the container. Refer to [gather required parameters](#gather-required-parameters) for details on how to get the `{ENDPOINT_URI}` and `{API_KEY}` values.
+
+[Examples](luis-container-configuration.md#example-docker-run-commands) of the `docker run` command are available.
+
+```console
+docker run --rm -it -p 5000:5000 ^
+--memory 4g ^
+--cpus 2 ^
+--mount type=bind,src=c:\input,target=/input ^
+--mount type=bind,src=c:\output\,target=/output ^
+mcr.microsoft.com/azure-cognitive-services/language/luis ^
+Eula=accept ^
+Billing={ENDPOINT_URI} ^
+ApiKey={API_KEY}
+```
+
+* This example uses directories off the `C:` drive to avoid permission conflicts on Windows. If you need to use a specific directory as the input directory, you may need to grant the Docker service permission.
+* Do not change the order of the arguments unless you are familiar with Docker containers.
+* If you are using a different operating system, use the correct console/terminal, folder syntax for mounts, and line continuation character for your system. These examples assume a Windows console with a line continuation character `^`. Because the container is a Linux operating system, the target mount uses a Linux-style folder syntax.
+
+This command:
+
+* Runs a container from the LUIS container image
+* Loads LUIS app from input mount at *C:\input*, located on container host
+* Allocates two CPU cores and 4 gigabytes (GB) of memory
+* Exposes TCP port 5000 and allocates a pseudo-TTY for the container
+* Saves container and LUIS logs to output mount at *C:\output*, located on container host
+* Automatically removes the container after it exits. The container image is still available on the host computer.
+
+More [examples](luis-container-configuration.md#example-docker-run-commands) of the `docker run` command are available.
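+
+If your host is Linux or macOS, the equivalent command might look like the following sketch, using `\` for line continuation and Linux-style host paths for the mounts (the `~/luis/input` and `~/luis/output` directories are examples; substitute your own):
+
+```bash
+# Sketch: Linux/macOS equivalent of the docker run command above.
+# ~/luis/input and ~/luis/output are example host directories.
+docker run --rm -it -p 5000:5000 \
+--memory 4g \
+--cpus 2 \
+--mount type=bind,src="$HOME/luis/input",target=/input \
+--mount type=bind,src="$HOME/luis/output",target=/output \
+mcr.microsoft.com/azure-cognitive-services/language/luis \
+Eula=accept \
+Billing={ENDPOINT_URI} \
+ApiKey={API_KEY}
+```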
+
+> [!IMPORTANT]
+> The `Eula`, `Billing`, and `ApiKey` options must be specified to run the container; otherwise, the container won't start. For more information, see [Billing](#billing).
+> The ApiKey value is the **Key** from the **Azure Resources** page in the LUIS portal and is also available on the Azure `Azure AI services` resource keys page.
++
+## Endpoint APIs supported by the container
+
+Both V2 and [V3](luis-migration-api-v3.md) versions of the API are available with the container.
+
+## Query the container's prediction endpoint
+
+The container provides REST-based query prediction endpoint APIs. Endpoints for published (staging or production) apps have a _different_ route than endpoints for versioned apps.
+
+Use the host, `http://localhost:5000`, for container APIs.
+
+# [V3 prediction endpoint](#tab/v3)
+
+|Package type|HTTP verb|Route|Query parameters|
+|--|--|--|--|
+|Published|GET, POST|`/luis/v3.0/apps/{appId}/slots/{slotName}/predict?` `/luis/prediction/v3.0/apps/{appId}/slots/{slotName}/predict?`|`query={query}`<br>[`&verbose`]<br>[`&log`]<br>[`&show-all-intents`]|
+|Versioned|GET, POST|`/luis/v3.0/apps/{appId}/versions/{versionId}/predict?` `/luis/prediction/v3.0/apps/{appId}/versions/{versionId}/predict`|`query={query}`<br>[`&verbose`]<br>[`&log`]<br>[`&show-all-intents`]|
+
+The query parameters configure how and what is returned in the query response:
+
+|Query parameter|Type|Purpose|
+|--|--|--|
+|`query`|string|The user's utterance.|
+|`verbose`|boolean|A boolean value indicating whether to return all the metadata for the predicted models. Default is false.|
+|`log`|boolean|Logs queries, which can be used later for [active learning](how-to/improve-application.md). Default is false.|
+|`show-all-intents`|boolean|A boolean value indicating whether to return all the intents or the top scoring intent only. Default is false.|
+
+# [V2 prediction endpoint](#tab/v2)
+
+|Package type|HTTP verb|Route|Query parameters|
+|--|--|--|--|
+|Published|[GET](https://westus.dev.cognitive.microsoft.com/docs/services/5819c76f40a6350ce09de1ac/operations/5819c77140a63516d81aee78), [POST](https://westus.dev.cognitive.microsoft.com/docs/services/5819c76f40a6350ce09de1ac/operations/5819c77140a63516d81aee79)|`/luis/v2.0/apps/{appId}?`|`q={q}`<br>`&staging`<br>[`&timezoneOffset`]<br>[`&verbose`]<br>[`&log`]<br>|
+|Versioned|GET, POST|`/luis/v2.0/apps/{appId}/versions/{versionId}?`|`q={q}`<br>[`&timezoneOffset`]<br>[`&verbose`]<br>[`&log`]|
+
+The query parameters configure how and what is returned in the query response:
+
+|Query parameter|Type|Purpose|
+|--|--|--|
+|`q`|string|The user's utterance.|
+|`timezoneOffset`|number|The timezoneOffset allows you to [change the timezone](luis-concept-data-alteration.md#change-time-zone-of-prebuilt-datetimev2-entity) used by the prebuilt entity datetimeV2.|
+|`verbose`|boolean|Returns all intents and their scores when set to true. Default is false, which returns only the top intent.|
+|`staging`|boolean|Returns query from staging environment results if set to true. |
+|`log`|boolean|Logs queries, which can be used later for [active learning](how-to/improve-application.md). Default is true.|
+
+***
+
+### Query the LUIS app
+
+An example CURL command for querying the container for a published app is:
+
+# [V3 prediction endpoint](#tab/v3)
+
+To query a model in a slot, use the following API:
+
+```bash
+curl -G \
+-d verbose=false \
+-d log=true \
+--data-urlencode "query=turn the lights on" \
+"http://localhost:5000/luis/v3.0/apps/{APP_ID}/slots/production/predict"
+```
+
+To make queries to the **Staging** environment, replace `production` in the route with `staging`:
+
+`http://localhost:5000/luis/v3.0/apps/{APP_ID}/slots/staging/predict`
+
+To query a versioned model, use the following API:
+
+```bash
+curl -G \
+-d verbose=false \
+-d log=false \
+--data-urlencode "query=turn the lights on" \
+"http://localhost:5000/luis/v3.0/apps/{APP_ID}/versions/{APP_VERSION}/predict"
+```
+
+# [V2 prediction endpoint](#tab/v2)
+
+To query a model in a slot, use the following API:
+
+```bash
+curl -X GET \
+"http://localhost:5000/luis/v2.0/apps/{APP_ID}?q=turn%20on%20the%20lights&staging=false&timezoneOffset=0&verbose=false&log=true" \
+-H "accept: application/json"
+```
+To make queries to the **Staging** environment, change the **staging** query string parameter value to true:
+
+`staging=true`
+
+To query a versioned model, use the following API:
+
+```bash
+curl -X GET \
+"http://localhost:5000/luis/v2.0/apps/{APP_ID}/versions/{APP_VERSION}?q=turn%20on%20the%20lights&timezoneOffset=0&verbose=false&log=true" \
+-H "accept: application/json"
+```
+The version name has a maximum of 10 characters and contains only characters allowed in a URL.
+
+***
+
+## Import the endpoint logs for active learning
+
+If an output mount is specified for the LUIS container, app query log files are saved in the output directory, where `{INSTANCE_ID}` is the container ID. The app query log contains the query, response, and timestamps for each prediction query submitted to the LUIS container.
+
+The following location shows the nested directory structure for the container's log files.
+```
+/output/luis/{INSTANCE_ID}/
+```
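+
+On the host, these files appear under the directory you mounted to `/output` in the `docker run` command. You can list them before importing, for example (a sketch assuming a Linux host with `./output` as the output mount):
+
+```bash
+# Sketch: list the app query log files written to the host output mount.
+# Replace ./output with the host directory you mounted to /output.
+ls -R ./output/luis/
+```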
+
+From the LUIS portal, select your app, then select **Import endpoint logs** to upload these logs.
+
+![Import container's log files for active learning](./media/luis-container-how-to/upload-endpoint-log-files.png)
+
+After the log is uploaded, [review the endpoint](./how-to/improve-application.md) utterances in the LUIS portal.
+
+<!-- ## Validate container is running -->
++
+## Run the container disconnected from the internet
++
+## Stop the container
+
+To shut down the container, in the command-line environment where the container is running, press **Ctrl+C**.
+
+## Troubleshooting
+
+If you run the container with an output [mount](luis-container-configuration.md#mount-settings) and logging enabled, the container generates log files that are helpful to troubleshoot issues that happen while starting or running the container.
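+
+You can also inspect the container's console output directly with standard Docker commands, for example (a sketch; the container ID comes from your own environment):
+
+```bash
+# Sketch: find the running LUIS container and view its console output.
+docker ps --filter "ancestor=mcr.microsoft.com/azure-cognitive-services/language/luis"
+docker logs <CONTAINER_ID>
+```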
+++
+## Billing
+
+The LUIS container sends billing information to Azure, using an _Azure AI services_ resource on your Azure account.
++
+For more information about these options, see [Configure containers](luis-container-configuration.md).
+
+## Summary
+
+In this article, you learned concepts and workflow for downloading, installing, and running Language Understanding (LUIS) containers. In summary:
+
+* Language Understanding (LUIS) provides one Linux container for Docker that serves endpoint query predictions for utterances.
+* Container images are downloaded from the Microsoft Container Registry (MCR).
+* Container images run in Docker.
+* You can use the REST API to query the container endpoints by specifying the host URI of the container.
+* You must specify billing information when instantiating a container.
+
+> [!IMPORTANT]
+> Azure AI containers are not licensed to run without being connected to Azure for metering. Customers need to enable the containers to communicate billing information with the metering service at all times. Azure AI containers do not send customer data (for example, the image or text that is being analyzed) to Microsoft.
+
+## Next steps
+
+* Review [Configure containers](luis-container-configuration.md) for configuration settings.
+* See [LUIS container limitations](luis-container-limitations.md) for known capability restrictions.
+* Refer to [Troubleshooting](faq.md) to resolve issues related to LUIS functionality.
+* Use more [Azure AI containers](../cognitive-services-container-support.md)
+
+<!-- Links - external -->
+[download-published-package]: https://westus.dev.cognitive.microsoft.com/docs/services/5890b47c39e2bb17b84a55ff/operations/apps-packagepublishedapplicationasgzip
+[download-versioned-package]: https://westus.dev.cognitive.microsoft.com/docs/services/5890b47c39e2bb17b84a55ff/operations/apps-packagetrainedapplicationasgzip
+
+[unsupported-dependencies]: luis-container-limitations.md#unsupported-dependencies-for-latest-container
ai-services Luis Container Limitations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/LUIS/luis-container-limitations.md
+
+ Title: Container limitations - LUIS
+
+description: The LUIS container languages that are supported.
++++++ Last updated : 10/28/2021+++
+# Language Understanding (LUIS) container limitations
+++
+The LUIS containers have a few notable limitations, from unsupported dependencies to a subset of supported languages. This article details these restrictions.
+
+## Supported dependencies for `latest` container
+
+The latest LUIS container supports:
+
+* [New prebuilt domains](luis-reference-prebuilt-domains.md): these enterprise-focused domains include entities, example utterances, and patterns. Extend these domains for your own use.
+
+## Unsupported dependencies for `latest` container
+
+To [export for container](luis-container-howto.md#export-packaged-app-from-luis), you must remove unsupported dependencies from your LUIS app. When you attempt to export for container, the LUIS portal reports these unsupported features that you need to remove.
+
+You can use a LUIS application if it **doesn't include** any of the following dependencies:
+
|Unsupported app configurations|Details|
+|--|--|
+|Unsupported container cultures| The Dutch (`nl-NL`), Japanese (`ja-JP`) and German (`de-DE`) languages are only supported with the [1.0.2 tokenizer](luis-language-support.md#custom-tokenizer-versions).|
+|Unsupported entities for all cultures|[KeyPhrase](luis-reference-prebuilt-keyphrase.md) prebuilt entity for all cultures|
+|Unsupported entities for English (`en-US`) culture|[GeographyV2](luis-reference-prebuilt-geographyV2.md) prebuilt entities|
+|Speech priming|External dependencies are not supported in the container.|
+|Sentiment analysis|External dependencies are not supported in the container.|
+|Bing spell check|External dependencies are not supported in the container.|
+
+## Languages supported
+
+LUIS containers support a subset of the [languages supported](luis-language-support.md#languages-supported) by LUIS proper. The LUIS containers are capable of understanding utterances in the following languages:
+
+| Language | Locale | Prebuilt domain | Prebuilt entity | Phrase list recommendations | [Sentiment analysis](../language-service/sentiment-opinion-mining/language-support.md) and [key phrase extraction](../language-service/key-phrase-extraction/language-support.md)|
+|--|--|:--:|:--:|:--:|:--:|
+| English (United States) | `en-US` | ✔️ | ✔️ | ✔️ | ✔️ |
+| Arabic (preview - modern standard Arabic) |`ar-AR`|❌|❌|❌|❌|
+| *[Chinese](#chinese-support-notes) |`zh-CN` | ✔️ | ✔️ | ✔️ | ❌ |
+| French (France) |`fr-FR` | ✔️ | ✔️ | ✔️ | ✔️ |
+| French (Canada) |`fr-CA` | ❌ | ❌ | ❌ | ✔️ |
+| German |`de-DE` | ✔️ | ✔️ | ✔️ | ✔️ |
+| Hindi | `hi-IN`| ❌ | ❌ | ❌ | ❌ |
+| Italian |`it-IT` | ✔️ | ✔️ | ✔️ | ✔️ |
+| Korean |`ko-KR` | ✔️ | ❌ | ❌ | *Key phrase* only |
+| Marathi | `mr-IN`|❌|❌|❌|❌|
+| Portuguese (Brazil) |`pt-BR` | ✔️ | ✔️ | ✔️ | not all sub-cultures |
+| Spanish (Spain) |`es-ES` | ✔️ | ✔️ |✔️|✔️|
+| Spanish (Mexico)|`es-MX` | ❌ | ❌ |✔️|✔️|
+| Tamil | `ta-IN`|❌|❌|❌|❌|
+| Telugu | `te-IN`|❌|❌|❌|❌|
+| Turkish | `tr-TR` |✔️| ❌ | ❌ | *Sentiment* only |
++
ai-services Luis Get Started Create App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/LUIS/luis-get-started-create-app.md
+
+ Title: "Quickstart: Build your app in LUIS portal"
+description: This quickstart shows how to create a LUIS app that uses the prebuilt domain `HomeAutomation` for turning lights and appliances on and off. This prebuilt domain provides intents, entities, and example utterances for you. When you're finished, you'll have a LUIS endpoint running in the cloud.
++++
+ms.
+ Last updated : 07/19/2022+
+#Customer intent: As a new user, I want to quickly get a LUIS app created so I can understand the model and actions to train, test, publish, and query.
++
+# Quickstart: Build your app in LUIS portal
++
+In this quickstart, create a LUIS app using the prebuilt home automation domain for turning lights and appliances on and off. This prebuilt domain provides intents, entities, and example utterances for you. Next, try customizing your app by adding more intents and entities. When you're finished, you'll have a LUIS endpoint running in the cloud.
+++
+## Create a new app
+
+You can create and manage your applications on **My Apps**.
+
+### Create an application
+
+To create an application, click **+ New app**.
+
+In the window that appears, enter the following information:
+
+|Name |Description |
+|||
+|Name | A name for your app. For example, "home automation". |
+|Culture | The language that your app understands and speaks. |
+|Description | A description for your app. |
+|Prediction resource | The prediction resource that will receive queries. |
+
+Select **Done**.
+
+>[!NOTE]
+>The culture cannot be changed once the application is created.
+
+## Add prebuilt domain
+
+LUIS offers a set of prebuilt domains that can help you get started with your application. A prebuilt domain app is already populated with [intents](./concepts/intents.md), [entities](concepts/entities.md) and [utterances](concepts/utterances.md).
+
+1. In the left navigation, select **Prebuilt domains**.
+2. Search for **HomeAutomation**.
+3. Select **Add domain** on the HomeAutomation card.
+
+ > [!div class="mx-imgBorder"]
+ > ![Select 'Prebuilt domains' then search for 'HomeAutomation'. Select 'Add domain' on the HomeAutomation card.](media/luis-quickstart-new-app/home-automation.png)
+
+ When the domain is successfully added, the prebuilt domain box displays a **Remove domain** button.
+
+## Check out intents and entities
+
+1. Select **Intents** in the left navigation menu to see the HomeAutomation domain intents, such as `HomeAutomation.QueryState` and `HomeAutomation.SetDevice`. Each intent includes example utterances.
+
+ > [!NOTE]
+ > **None** is an intent provided by all LUIS apps. You use it to handle utterances that don't correspond to functionality your app provides.
+
+2. Select the **HomeAutomation.TurnOff** intent. The intent contains a list of example utterances that are labeled with entities.
+
+ > [!div class="mx-imgBorder"]
+ > [![Screenshot of HomeAutomation.TurnOff intent](media/luis-quickstart-new-app/home-automation-turnoff.png "Screenshot of HomeAutomation.TurnOff intent")](media/luis-quickstart-new-app/home-automation-turnoff.png)
+
+3. If you want to view the entities for the app, select **Entities**. If you select one of the entities, such as **HomeAutomation.DeviceName**, you will see a list of values associated with it.
+
+    :::image type="content" source="media/luis-quickstart-new-app/entities-page.png" alt-text="Screenshot of the Entities page showing the values associated with the HomeAutomation.DeviceName entity." lightbox="media/luis-quickstart-new-app/entities-page.png":::
+
+## Train the LUIS app
+
+After your application is populated with intents, entities, and utterances, you need to train the application so that your changes take effect.
++
+## Test your app
+
+Once you've trained your app, you can test it.
+
+1. Select **Test** from the top-right navigation.
+
+1. Type a test utterance into the interactive test pane, and press Enter. For example, *Turn off the lights*.
+
+ In this example, *Turn off the lights* is correctly identified as the top scoring intent of **HomeAutomation.TurnOff**.
+
+ :::image type="content" source="media/luis-quickstart-new-app/review-test-inspection-pane-in-portal.png" alt-text="Screenshot of test panel with utterance highlighted" lightbox="media/luis-quickstart-new-app/review-test-inspection-pane-in-portal.png":::
+
+1. Select **Inspect** to view more information about the prediction.
+
+ :::image type="content" source="media/luis-quickstart-new-app/test.png" alt-text="Screenshot of test panel with inspection information" lightbox="media/luis-quickstart-new-app/test.png":::
+
+1. Close the test pane.
+
+## Customize your application
+
+Besides the prebuilt domains, LUIS allows you to create your own custom applications or to customize on top of prebuilt ones.
+
+### Create Intents
+
+To add more intents to your app:
+
+1. Select **Intents** in the left navigation menu.
+2. Select **Create**.
+3. Enter the intent name, `HomeAutomation.AddDeviceAlias`, and then select **Done**.
+
+### Create Entities
+
+To add more entities to your app:
+
+1. Select **Entities** in the left navigation menu.
+2. Select **Create**.
+3. Enter the entity name, `HomeAutomation.DeviceAlias`, select **Machine learned** from **Type**, and then select **Create**.
+
+### Add example utterances
+
+Example utterances are text that a user enters in a chat bot or other client applications. They map the intention of the user's text to a LUIS intent.
+
+On the **Intents** page for `HomeAutomation.AddDeviceAlias`, add the following example utterances under **Example Utterance**:
+
+|#|Example utterances|
+|--|--|
+|1|`Add alias to my fan to be wind machine`|
+|2|`Alias lights to illumination`|
+|3|`nickname living room speakers to our speakers a new fan`|
+|4|`rename living room tv to main tv`|
+++
+### Label example utterances
+
+Labeling your utterances is needed because you added an ML entity. Labeling is used by your application to learn how to extract the ML entities you created.
++
+## Create Prediction resource
+
+At this point, you have completed authoring your application. To receive predictions in a chat bot or other client applications through the prediction endpoint, you need to create a prediction resource and publish your application.
+
+To create a prediction resource from the LUIS portal:
+++
+## Publish the app to get the endpoint URL
++++
+## Query the V3 API prediction endpoint
++
+2. In the browser address bar, for the query string, make sure the following values are in the URL. If they are not in the query string, add them:
+
+ * `verbose=true`
+ * `show-all-intents=true`
+
+3. In the browser address bar, go to the end of the URL and enter *turn off the living room light* for the query string, then press Enter.
+
+ ```json
+ {
+ "query": "turn off the living room light",
+ "prediction": {
+ "topIntent": "HomeAutomation.TurnOff",
+ "intents": {
+ "HomeAutomation.TurnOff": {
+ "score": 0.969448864
+ },
+ "HomeAutomation.QueryState": {
+ "score": 0.0122336326
+ },
+ "HomeAutomation.TurnUp": {
+ "score": 0.006547436
+ },
+ "HomeAutomation.TurnDown": {
+ "score": 0.0050634006
+ },
+ "HomeAutomation.SetDevice": {
+ "score": 0.004951761
+ },
+ "HomeAutomation.TurnOn": {
+ "score": 0.00312553928
+ },
+ "None": {
+ "score": 0.000552945654
+ }
+ },
+ "entities": {
+ "HomeAutomation.Location": [
+ "living room"
+ ],
+ "HomeAutomation.DeviceName": [
+ [
+ "living room light"
+ ]
+ ],
+ "HomeAutomation.DeviceType": [
+ [
+ "light"
+ ]
+ ],
+ "$instance": {
+ "HomeAutomation.Location": [
+ {
+ "type": "HomeAutomation.Location",
+ "text": "living room",
+ "startIndex": 13,
+ "length": 11,
+ "score": 0.902181149,
+ "modelTypeId": 1,
+ "modelType": "Entity Extractor",
+ "recognitionSources": [
+ "model"
+ ]
+ }
+ ],
+ "HomeAutomation.DeviceName": [
+ {
+ "type": "HomeAutomation.DeviceName",
+ "text": "living room light",
+ "startIndex": 13,
+ "length": 17,
+ "modelTypeId": 5,
+ "modelType": "List Entity Extractor",
+ "recognitionSources": [
+ "model"
+ ]
+ }
+ ],
+ "HomeAutomation.DeviceType": [
+ {
+ "type": "HomeAutomation.DeviceType",
+ "text": "light",
+ "startIndex": 25,
+ "length": 5,
+ "modelTypeId": 5,
+ "modelType": "List Entity Extractor",
+ "recognitionSources": [
+ "model"
+ ]
+ }
+ ]
+ }
+ }
+ }
+ }
+ ```
+
+Learn more about the [V3 prediction endpoint](luis-migration-api-v3.md).
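+
+The same prediction request can also be made from a terminal. The following curl sketch uses placeholders for the endpoint subdomain, app ID, and prediction key taken from your own endpoint URL:
+
+```bash
+# Sketch: query the published app's V3 prediction endpoint with curl.
+# Replace the placeholders with the values from your own endpoint URL.
+curl -G \
+  -d verbose=true \
+  -d show-all-intents=true \
+  -d subscription-key={YOUR-PREDICTION-KEY} \
+  --data-urlencode "query=turn off the living room light" \
+  "https://{YOUR-ENDPOINT-SUBDOMAIN}.api.cognitive.microsoft.com/luis/prediction/v3.0/apps/{YOUR-APP-ID}/slots/production/predict"
+```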
++
+## Clean up resources
++
+## Next steps
+
+* [Iterative app design](./concepts/application-design.md)
+* [Best practices](./faq.md)
ai-services Luis Get Started Get Intent From Browser https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/LUIS/luis-get-started-get-intent-from-browser.md
+
+ Title: "How to query for predictions using a browser - LUIS"
+description: In this article, use an available public LUIS app to determine a user's intention from conversational text in a browser.
++++++ Last updated : 03/26/2021+
+#Customer intent: As a developer familiar with how to use a browser but new to the LUIS service, I want to query the LUIS endpoint of a published model so that I can see the JSON prediction response.
++
+# How to query the prediction runtime with user text
+++
+To understand what a LUIS prediction endpoint returns, view a prediction result in a web browser.
+
+## Prerequisites
+
+In order to query a public app, you need:
+
+* Your Language Understanding (LUIS) resource information:
+ * **Prediction key** - which can be obtained from [LUIS Portal](https://www.luis.ai/). If you do not already have a subscription to create a key, you can register for a [free account](https://azure.microsoft.com/free/cognitive-services).
+ * **Prediction endpoint subdomain** - the subdomain is also the **name** of your LUIS resource.
+* A LUIS app ID - use the public IoT app ID of `df67dcdb-c37d-46af-88e1-8b97951ca1c2`. The user query used in the quickstart code is specific to that app. This app should work with any prediction resource outside the Europe and Australia regions, since it uses "westus" as the authoring region.
+
+## Use the browser to see predictions
+
+1. Open a web browser.
+1. Use the complete URLs below, replacing `YOUR-LUIS-ENDPOINT-SUBDOMAIN` with your LUIS resource name and `YOUR-LUIS-PREDICTION-KEY` with your own LUIS prediction key. The requests are GET requests and include the authorization, with your LUIS prediction key, as a query string parameter.
+
+ #### [V3 prediction request](#tab/V3-1-1)
++
+ The format of the V3 URL for a **GET** endpoint (by slots) request is:
+
+ `
+ https://YOUR-LUIS-ENDPOINT-SUBDOMAIN.api.cognitive.microsoft.com/luis/prediction/v3.0/apps/df67dcdb-c37d-46af-88e1-8b97951ca1c2/slots/production/predict?query=turn on all lights&subscription-key=YOUR-LUIS-PREDICTION-KEY
+ `
+
+ #### [V2 prediction request](#tab/V2-1-2)
+
+ The format of the V2 URL for a **GET** endpoint request is:
+
+ `
+ https://YOUR-LUIS-ENDPOINT-SUBDOMAIN.api.cognitive.microsoft.com/luis/v2.0/apps/df67dcdb-c37d-46af-88e1-8b97951ca1c2?subscription-key=YOUR-LUIS-PREDICTION-KEY&q=turn on all lights
+ `
+
+1. Paste the URL into a browser window and press Enter. The browser displays a JSON result that indicates that LUIS detects the `HomeAutomation.TurnOn` intent as the top intent and the `HomeAutomation.Operation` entity with the value `on`.
+
+ #### [V3 prediction response](#tab/V3-2-1)
+
+ ```JSON
+ {
+ "query": "turn on all lights",
+ "prediction": {
+ "topIntent": "HomeAutomation.TurnOn",
+ "intents": {
+ "HomeAutomation.TurnOn": {
+ "score": 0.5375382
+ }
+ },
+ "entities": {
+ "HomeAutomation.Operation": [
+ "on"
+ ]
+ }
+ }
+ }
+ ```
+
+ #### [V2 prediction response](#tab/V2-2-2)
+
+ ```json
+ {
+ "query": "turn on all lights",
+ "topScoringIntent": {
+ "intent": "HomeAutomation.TurnOn",
+ "score": 0.5375382
+ },
+ "entities": [
+ {
+ "entity": "on",
+ "type": "HomeAutomation.Operation",
+ "startIndex": 5,
+ "endIndex": 6,
+ "score": 0.724984169
+ }
+ ]
+ }
+ ```
+
+ * * *
+
+1. To see all the intents, add the appropriate query string parameter.
+
+ #### [V3 prediction endpoint](#tab/V3-3-1)
+
+ Add `show-all-intents=true` to the end of the querystring to **show all intents**, and `verbose=true` to return all detailed information for entities.
+
+ `
+ https://YOUR-LUIS-ENDPOINT-SUBDOMAIN.api.cognitive.microsoft.com/luis/prediction/v3.0/apps/df67dcdb-c37d-46af-88e1-8b97951ca1c2/slots/production/predict?query=turn on all lights&subscription-key=YOUR-LUIS-PREDICTION-KEY&show-all-intents=true&verbose=true
+ `
+
+ ```JSON
+ {
+    "query": "turn on all lights",
+ "prediction": {
+ "topIntent": "HomeAutomation.TurnOn",
+ "intents": {
+ "HomeAutomation.TurnOn": {
+ "score": 0.5375382
+ },
+ "None": {
+ "score": 0.08687421
+ },
+ "HomeAutomation.TurnOff": {
+ "score": 0.0207554
+ }
+ },
+ "entities": {
+ "HomeAutomation.Operation": [
+ "on"
+ ]
+ }
+ }
+ }
+ ```
+
+ #### [V2 prediction endpoint](#tab/V2)
+
+ Add `verbose=true` to the end of the querystring to **show all intents**:
+
+ `
+ https://YOUR-LUIS-ENDPOINT-SUBDOMAIN.api.cognitive.microsoft.com/luis/v2.0/apps/df67dcdb-c37d-46af-88e1-8b97951ca1c2?q=turn on all lights&subscription-key=YOUR-LUIS-PREDICTION-KEY&verbose=true
+ `
+
+ ```json
+ {
+ "query": "turn on all lights",
+ "topScoringIntent": {
+ "intent": "HomeAutomation.TurnOn",
+ "score": 0.5375382
+ },
+ "intents": [
+ {
+ "intent": "HomeAutomation.TurnOn",
+ "score": 0.5375382
+ },
+ {
+ "intent": "None",
+ "score": 0.08687421
+ },
+ {
+ "intent": "HomeAutomation.TurnOff",
+ "score": 0.0207554
+ }
+ ],
+ "entities": [
+ {
+ "entity": "on",
+ "type": "HomeAutomation.Operation",
+ "startIndex": 5,
+ "endIndex": 6,
+ "score": 0.724984169
+ }
+ ]
+ }
+ ```
+
+## Next steps
+
+* [V3 prediction endpoint](luis-migration-api-v3.md)
+* [Custom subdomains](../cognitive-services-custom-subdomains.md)
+* [Use the client libraries or REST API](client-libraries-rest-api.md)
ai-services Luis Glossary https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/LUIS/luis-glossary.md
+
+ Title: Glossary - LUIS
+description: The glossary explains terms that you might encounter as you work with the LUIS API Service.
+++ Last updated : 03/21/2022+++++
+# Language understanding glossary of common vocabulary and concepts
++
+The Language Understanding (LUIS) glossary explains terms that you might encounter as you work with the LUIS service.
+
+## Active version
+
+The active version is the [version](luis-how-to-manage-versions.md) of your app that is updated when you make changes to the model using the LUIS portal. In the LUIS portal, if you want to make changes to a version that is not the active version, you need to first set that version as active.
+
+## Active learning
+
+Active learning is a technique of machine learning in which the machine learned model is used to identify informative new examples to label. In LUIS, active learning refers to adding utterances from the endpoint traffic whose current predictions are unclear, so you can label them to improve your model. Select **Review endpoint utterances** to view utterances to label.
+
+See also:
+* [Conceptual information](how-to/improve-application.md)
+* [Tutorial reviewing endpoint utterances](how-to/improve-application.md)
+* How to improve the LUIS app by [reviewing endpoint utterances](how-to/improve-application.md)
+
+## Application (App)
+
+In LUIS, your application, or app, is a collection of machine learned models, built on the same data set, that works together to predict intents and entities for a particular scenario. Each application has a separate prediction endpoint.
+
+If you are building an HR bot, you might have a set of intents, such as "Schedule leave time", "inquire about benefits" and "update personal information" and entities for each one of those intents that you group into a single application.
+
+## Authoring
+
+Authoring is the ability to create, manage and deploy a LUIS app, either using the LUIS portal or the authoring APIs.
+
+### Authoring Key
+
+The [authoring key](luis-how-to-azure-subscription.md) is used to author the app. Not used for production-level endpoint queries. For more information, see [resource limits](luis-limits.md#resource-usage-and-limits).
+
+### Authoring Resource
+
+Your LUIS [authoring resource](luis-how-to-azure-subscription.md) is a manageable item that is available through Azure. The resource is your access to the associated authoring, training, and publishing abilities of the Azure service. The resource includes authentication, authorization, and security information you need to access the associated Azure service.
+
+The authoring resource has an Azure "kind" of `LUIS-Authoring`.
+
+## Batch test
+
+Batch testing is the ability to validate a current LUIS app's models with a consistent and known test set of user utterances. The batch test is defined in a [JSON formatted file](./luis-how-to-batch-test.md#batch-test-file).
++
+See also:
+* [Concepts](./luis-how-to-batch-test.md)
+* [How-to](luis-how-to-batch-test.md) run a batch test
+* [Tutorial](./luis-how-to-batch-test.md) - create and run a batch test
+
+### F-measure
+
+In batch testing, a measure of the test's accuracy.
+
+### False negative (FN)
+
+In batch testing, the data points represent utterances in which your app incorrectly predicted the absence of the target intent/entity.
+
+### False positive (FP)
+
+In batch testing, the data points represent utterances in which your app incorrectly predicted the existence of the target intent/entity.
+
+### Precision
+In batch testing, precision (also called positive predictive value) is the fraction of relevant utterances among the retrieved utterances.
+
+An example for an animal batch test is the number of sheep that were predicted divided by the total number of animals (sheep and non-sheep alike).
+
+### Recall
+
+In batch testing, recall (also known as sensitivity) is the ability of LUIS to generalize.
+
+An example for an animal batch test is the number of sheep that were predicted divided by the total number of sheep available.
+
+### True negative (TN)
+
+A true negative is when your app correctly predicts no match. In batch testing, a true negative occurs when your app does not predict an intent or entity for an example that has not been labeled with that intent or entity.
+
+### True positive (TP)
+
+A true positive is when your app correctly predicts a match. In batch testing, a true positive occurs when your app predicts an intent or entity for an example that has been labeled with that intent or entity.
+
+## Classifier
+
+A classifier is a machine learned model that predicts what category or class an input fits into.
+
+An [intent](#intent) is an example of a classifier.
+
+## Collaborator
+
+A collaborator is conceptually the same thing as a [contributor](#contributor). A collaborator is granted access when an owner adds the collaborator's email address to an app that isn't controlled with Azure role-based access control (Azure RBAC). If you are still using collaborators, you should migrate your LUIS account, and use LUIS authoring resources to manage contributors with Azure RBAC.
+
+## Contributor
+
+A contributor is not the [owner](#owner) of the app, but has the same permissions to add, edit, and delete the intents, entities, utterances. A contributor provides Azure role-based access control (Azure RBAC) to a LUIS app.
+
+See also:
+* [How-to](luis-how-to-collaborate.md#add-contributor-to-azure-authoring-resource) add contributors
+
+## Descriptor
+
+A descriptor is the term formerly used for a machine learning [feature](#features).
+
+## Domain
+
+In the LUIS context, a domain is an area of knowledge. Your domain is specific to your scenario. Different domains use specific language and terminology that have meaning in the context of the domain. For example, if you are building an application to play music, your application would have terms and language specific to music: words like "song, track, album, lyrics, b-side, artist". For examples of domains, see [prebuilt domains](#prebuilt-domain).
+
+## Endpoint
+
+### Authoring endpoint
+
+The LUIS authoring endpoint URL is where you author, train, and publish your app. The endpoint URL contains the region or custom subdomain of the published app as well as the app ID.
+
+Learn more about authoring your app programmatically from the [Developer reference](developer-reference-resource.md#rest-endpoints)
+
+### Prediction endpoint
+
+The LUIS prediction endpoint URL is where you submit LUIS queries after the [LUIS app](#application-app) is authored and published. The endpoint URL contains the region or custom subdomain of the published app as well as the app ID. You can find the endpoint on the **[Azure resources](luis-how-to-azure-subscription.md)** page of your app, or you can get the endpoint URL from the [Get App Info](https://westus.dev.cognitive.microsoft.com/docs/services/5890b47c39e2bb17b84a55ff/operations/5890b47c39e2bb052c5b9c37) API.
+
+Your access to the prediction endpoint is authorized with the LUIS prediction key.
+
+## Entity
+
+[Entities](concepts/entities.md) are words in utterances that describe information used to fulfill or identify an intent. If your entity is complex and you would like your model to identify specific parts, you can break your model into subentities. For example, you might want you model to predict an address, but also the subentities of street, city, state, and zipcode. Entities can also be used as features to models. Your response from the LUIS app will include both the predicted intents and all the entities.
+
+### Entity extractor
+
+An entity extractor, sometimes known simply as an extractor, is the type of machine learned model that LUIS uses to predict entities.
+
+### Entity schema
+
+The entity schema is the structure you define for machine learned entities with subentities. The prediction endpoint returns all of the extracted entities and subentities defined in the schema.
+
+### Entity's subentity
+
+A subentity is a child entity of a machine-learning entity.
+
+### Non-machine-learning entity
+
+An entity that uses text matching to extract data:
+* List entity
+* Regular expression entity
+
+### List entity
+
+A [list entity](reference-entity-list.md) represents a fixed, closed set of related words along with their synonyms. List entities are exact matches, unlike machine learned entities.
+
+The entity is predicted whenever one of the list's words appears in the utterance. For example, if you have a list entity called "size" with the values "small, medium, large", then the size entity is predicted for all utterances where the words "small", "medium", or "large" are used, regardless of the context.
+
+### Regular expression
+
+A [regular expression entity](reference-entity-regular-expression.md) represents a regular expression. Regular expression entities are exact matches, unlike machine learned entities.
+
+### Prebuilt entity
+
+See Prebuilt model's entry for [prebuilt entity](#prebuilt-entity)
+
+## Features
+
+In machine learning, a feature is a characteristic that helps the model recognize a particular concept. It is a hint that LUIS can use, but not a hard rule.
+
+This term is also referred to as a **[machine-learning feature](concepts/patterns-features.md)**.
+
+These hints are used in conjunction with the labels to learn how to predict new data. LUIS supports both phrase lists and using other models as features.
+
+### Required feature
+
+A required feature is a way to constrain the output of a LUIS model. When a feature for an entity is marked as required, the feature must be present in the example for the entity to be predicted, regardless of what the machine learned model predicts.
+
+Consider an example where you have a prebuilt-number feature that you have marked as required on the quantity entity for a menu ordering bot. When your bot sees `I want a bajillion large pizzas?`, bajillion will not be predicted as a quantity regardless of the context in which it appears. Bajillion is not a valid number and won't be predicted by the number pre-built entity.
+
+## Intent
+
+An [intent](concepts/intents.md) represents a task or action the user wants to perform. It is a purpose or goal expressed in a user's input, such as booking a flight or paying a bill. In LUIS, an utterance as a whole is classified as an intent, but parts of the utterance are extracted as entities.
+
+## Labeling examples
+
+Labeling, or marking, is the process of associating a positive or negative example with a model.
+
+### Labeling for intents
+In LUIS, intents within an app are mutually exclusive. This means when you add an utterance to an intent, it is considered a _positive_ example for that intent and a _negative_ example for all other intents. Negative examples should not be confused with the "None" intent, which represents utterances that are outside the scope of the app.
+
+### Labeling for entities
+In LUIS, you [label](how-to/entities.md) a word or phrase in an intent's example utterance with an entity as a _positive_ example. Labeling shows the intent what it should predict for that utterance. The labeled utterances are used to train the intent.
+
+## LUIS app
+
+See the definition for [application (app)](#application-app).
+
+## Model
+
+A (machine learned) model is a function that makes a prediction on input data. In LUIS, we refer to intent classifiers and entity extractors generically as "models", and we refer to a collection of models that are trained, published, and queried together as an "app".
+
+## Normalized value
+
+You add values to your [list](#list-entity) entities. Each of those values can have a list of one or more synonyms. Only the normalized value is returned in the response.
+
+## Overfitting
+
+Overfitting happens when the model is fixated on the specific examples and is not able to generalize well.
+
+## Owner
+
+Each app has one owner who is the person that created the app. The owner manages permissions to the application in the Azure portal.
+
+## Phrase list
+
+A [phrase list](concepts/patterns-features.md) is a specific type of machine learning feature that includes a group of values (words or phrases) that belong to the same class and must be treated similarly (for example, names of cities or products).
+
+## Prebuilt model
+
+A [prebuilt model](luis-concept-prebuilt-model.md) is an intent, entity, or collection of both, along with labeled examples. These common prebuilt models can be added to your app to reduce the model development work required for your app.
+
+### Prebuilt domain
+
+A prebuilt domain is a LUIS app configured for a specific domain such as home automation (HomeAutomation) or restaurant reservations (RestaurantReservation). The intents, utterances, and entities are configured for this domain.
+
+### Prebuilt entity
+
+A prebuilt entity is an entity LUIS provides for common types of information such as number, URL, and email. These are created based on public data. You can choose to add a prebuilt entity as a stand-alone entity, or as a feature to an entity.
+
+### Prebuilt intent
+
+A prebuilt intent is an intent LUIS provides for common types of information, and it comes with its own labeled example utterances.
+
+## Prediction
+
+A prediction is a REST request to the Azure LUIS prediction service that takes in new data (user utterance), and applies the trained and published application to that data to determine what intents and entities are found.
+
+### Prediction key
+
+The [prediction key](luis-how-to-azure-subscription.md) is the key associated with the LUIS service you created in Azure that authorizes your usage of the prediction endpoint.
+
+This key is not the authoring key. If you have a prediction endpoint key, it should be used for any endpoint requests instead of the authoring key. You can see your current prediction key inside the endpoint URL at the bottom of the Azure Resources page in the LUIS portal. It is the value of the `subscription-key` name/value pair.
+
+### Prediction resource
+
+Your LUIS prediction resource is a manageable item that is available through Azure. The resource is your access to the associated prediction of the Azure service. The resource includes predictions.
+
+The prediction resource has an Azure "kind" of `LUIS`.
+
+### Prediction score
+
+The [score](luis-concept-prediction-score.md) is a number between 0 and 1 that is a measure of how confident the system is that a particular input utterance matches a particular intent. A score closer to 1 means the system is very confident about its output, and a score closer to 0 means the system is confident that the input does not match a particular output. Scores in the middle mean the system is very unsure of how to make the decision.
+
+For example, take a model that is used to identify if some customer text includes a food order. It might give a score of 1 for "I'd like to order one coffee" (the system is very confident that this is an order) and a score of 0 for "my team won the game last night" (the system is very confident that this is NOT an order). And it might have a score of 0.5 for "let's have some tea" (isn't sure if this is an order or not).
+
+## Programmatic key
+
+Renamed to [authoring key](#authoring-key).
+
+## Publish
+
+[Publishing](how-to/publish.md) means making a LUIS active version available on either the staging or production [endpoint](#endpoint).
+
+## Quota
+
+LUIS quota is the limitation of the Azure subscription tier. The LUIS quota can be limited by both requests per second (HTTP Status 429) and total requests in a month (HTTP Status 403).
+
+## Schema
+
+Your schema includes your intents and entities along with the subentities. The schema is initially planned for then iterated over time. The schema doesn't include app settings, features, or example utterances.
+
+## Sentiment Analysis
+Sentiment analysis provides positive or negative values of the utterances provided by the [Language service](../language-service/sentiment-opinion-mining/overview.md).
+
+## Speech priming
+
+Speech priming improves the recognition of spoken words and phrases that are commonly used in your scenario with [Speech Services](../speech-service/overview.md). For speech priming enabled applications, all LUIS labeled examples are used to improve speech recognition accuracy by creating a customized speech model for this specific application. For example, in a chess game you want to make sure that when the user says "Move knight", it isn't interpreted as "Move night". The LUIS app should include examples in which "knight" is labeled as an entity.
+
+## Starter key
+
+A free key to use when first starting out using LUIS.
+
+## Synonyms
+
+In LUIS [list entities](reference-entity-list.md), you can create normalized values, each of which can have a list of synonyms. For example, you might create a size entity that has normalized values of small, medium, large, and extra-large, with synonyms for each value like this:
+
+|Normalized value| Synonyms|
+|--|--|
+|Small| the little one, 8 ounce|
+|Medium| regular, 12 ounce|
+|Large| big, 16 ounce|
+|Xtra large| the biggest one, 24 ounce|
+
+The model will return the normalized value for the entity when any of the synonyms are seen in the input.
+
+## Test
+
+[Testing](./how-to/train-test.md) a LUIS app means viewing model predictions.
+
+## Timezone offset
+
+The endpoint includes [timezoneOffset](luis-concept-data-alteration.md#change-time-zone-of-prebuilt-datetimev2-entity). This is the number in minutes you want to add or remove from the datetimeV2 prebuilt entity. For example, if the utterance is "what time is it now?", the datetimeV2 returned is the current time for the client request. If your client request is coming from a bot or other application that is not the same as your bot's user, you should pass in the offset between the bot and the user.
+
+See [Change time zone of prebuilt datetimeV2 entity](luis-concept-data-alteration.md?#change-time-zone-of-prebuilt-datetimev2-entity).
+
+## Token
+A [token](luis-language-support.md#tokenization) is the smallest unit of text that LUIS can recognize. This differs slightly across languages.
+
+For **English**, a token is a continuous span (no spaces or punctuation) of letters and numbers. A space is NOT a token.
+
+|Phrase|Token count|Explanation|
+|--|--|--|
+|`Dog`|1|A single word with no punctuation or spaces.|
+|`RMT33W`|1|A record locator number. It may have numbers and letters, but does not have any punctuation.|
+|`425-555-5555`|5|A phone number. Each punctuation mark is a single token so `425-555-5555` would be 5 tokens:<br>`425`<br>`-`<br>`555`<br>`-`<br>`5555` |
+|`https://luis.ai`|7|`https`<br>`:`<br>`/`<br>`/`<br>`luis`<br>`.`<br>`ai`<br>|
+
+## Train
+
+[Training](how-to/train-test.md) is the process of teaching LUIS about any changes to the active version since the last training.
+
+### Training data
+
+Training data is the set of information that is needed to train a model. This includes the schema, labeled utterances, features, and application settings.
+
+### Training errors
+
+Training errors are predictions on your training data that do not match their labels.
+
+## Utterance
+
+An [utterance](concepts/utterances.md) is user input that is short text representative of a sentence in a conversation. It is a natural language phrase such as "book 2 tickets to Seattle next Tuesday". Example utterances are added to train the model, and the model predicts on new utterances at runtime.
+
+## Version
+
+A LUIS [version](luis-how-to-manage-versions.md) is a specific instance of a LUIS application associated with a LUIS app ID and the published endpoint. Every LUIS app has at least one version.
ai-services Luis How To Azure Subscription https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/LUIS/luis-how-to-azure-subscription.md
+
+ Title: How to create and manage LUIS resources
+
+description: Learn how to use and manage Azure resources for LUIS.
+++++++ Last updated : 07/19/2022+
+ms.devlang: azurecli
++
+# How to create and manage LUIS resources
+++
+Use this article to learn about the types of Azure resources you can use with LUIS, and how to manage them.
+
+## Authoring Resource
+
+An authoring resource lets you create, manage, train, test, and publish your applications. One [pricing tier](https://azure.microsoft.com/pricing/details/cognitive-services/language-understanding-intelligent-services/) is available for the LUIS authoring resource - the free (F0) tier, which gives you:
+
+* 1 million authoring transactions
+* 1,000 testing prediction endpoint requests per month.
+
+You can use the [v3.0-preview LUIS Programmatic APIs](https://westus.dev.cognitive.microsoft.com/docs/services/luis-programmatic-apis-v3-0-preview/operations/5890b47c39e2bb052c5b9c2f) to manage authoring resources.
+
+## Prediction resource
+
+A prediction resource lets you query your prediction endpoint beyond the 1,000 requests provided by the authoring resource. Two [pricing tiers](https://azure.microsoft.com/pricing/details/cognitive-services/language-understanding-intelligent-services/) are available for the prediction resource:
+
+* The free (F0) prediction resource, which gives you 10,000 prediction endpoint requests monthly.
+* Standard (S0) prediction resource, which is the paid tier.
+
+You can use the [v3.0-preview LUIS Endpoint API](https://westus.dev.cognitive.microsoft.com/docs/services/luis-endpoint-api-v3-0-preview/operations/5f68f4d40a511ce5a7440859) to manage prediction resources.
+
+> [!Note]
+> * You can also use a [multi-service resource](../multi-service-resource.md?pivots=azcli) to get a single endpoint you can use for multiple Azure AI services.
+> * LUIS provides two types of F0 (free tier) resources: one for authoring transactions and one for prediction transactions. If you're running out of free quota for prediction transactions, make sure you're using the F0 prediction resource, which gives you 10,000 free transactions monthly, and not the authoring resource, which gives you 1,000 prediction transactions monthly.
+> * You should author LUIS apps in the [regions](luis-reference-regions.md#publishing-regions) where you want to publish and query.
+
+## Create LUIS resources
+
+To create LUIS resources, you can use the LUIS portal, [Azure portal](https://portal.azure.com/#create/Microsoft.CognitiveServicesLUISAllInOne), or Azure CLI. After you've created your resources, you'll need to assign them to your apps before the apps can use them.
+
+# [LUIS portal](#tab/portal)
+
+### Create a LUIS authoring resource using the LUIS portal
+
+1. Sign in to the [LUIS portal](https://www.luis.ai), select your country/region and agree to the terms of use. If you see the **My Apps** section in the portal, a LUIS resource already exists and you can skip the next step.
+
+2. In the **Choose an authoring** window that appears, find your Azure subscription, and LUIS authoring resource. If you don't have a resource, you can create a new one.
+
+ :::image type="content" source="./media/luis-how-to-azure-subscription/choose-authoring-resource.png" alt-text="Choose a type of Language Understanding authoring resource.":::
+
+ When you create a new authoring resource, provide the following information:
+ * **Tenant name**: the tenant your Azure subscription is associated with.
+ * **Azure subscription name**: the subscription that will be billed for the resource.
+ * **Azure resource group name**: a custom resource group name you choose or create. Resource groups allow you to group Azure resources for access and management.
+ * **Azure resource name**: a custom name you choose, used as part of the URL for your authoring and prediction endpoint queries.
+ * **Pricing tier**: the pricing tier determines the maximum transaction per second and month.
+
+### Create a LUIS Prediction resource using the LUIS portal
++
+# [Without LUIS portal](#tab/without-portal)
+
+### Create LUIS resources without using the LUIS portal
+
+Use the [Azure CLI](/cli/azure/install-azure-cli) to create each resource individually.
+
+> [!TIP]
+> * The authoring resource `kind` is `LUIS.Authoring`
+> * The prediction resource `kind` is `LUIS`
+
+1. Sign in to the Azure CLI:
+
+ ```azurecli
+ az login
+ ```
+
+ This command opens a browser so you can select the correct account and provide authentication.
+
+2. Create a LUIS authoring resource of kind `LUIS.Authoring`, named `my-luis-authoring-resource`. Create it in the _existing_ resource group named `my-resource-group` for the `westus` region.
+
+ ```azurecli
+ az cognitiveservices account create -n my-luis-authoring-resource -g my-resource-group --kind LUIS.Authoring --sku F0 -l westus --yes
+ ```
+
+3. Create a LUIS prediction endpoint resource of kind `LUIS`, named `my-luis-prediction-resource`. Create it in the _existing_ resource group named `my-resource-group` for the `westus` region. If you want higher throughput than the free tier provides, change `F0` to `S0`. [Learn more about pricing tiers and throughput.](luis-limits.md#resource-usage-and-limits)
+
+ ```azurecli
+ az cognitiveservices account create -n my-luis-prediction-resource -g my-resource-group --kind LUIS --sku F0 -l westus --yes
+ ```
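+
+After the resources are created, you can retrieve their keys with the Azure CLI, for example (a sketch using the resource names from the commands above):
+
+```azurecli
+az cognitiveservices account keys list -n my-luis-authoring-resource -g my-resource-group
+az cognitiveservices account keys list -n my-luis-prediction-resource -g my-resource-group
+```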
++++
+## Assign LUIS resources
+
+Creating a resource doesn't necessarily mean that it is put to use; you need to assign it to your apps. You can assign an authoring resource for a single app or for all apps in LUIS.
+
+# [LUIS portal](#tab/portal)
+
+### Assign resources using the LUIS portal
+
+**Assign an authoring resource to all your apps**
+
+ The following procedure assigns the authoring resource to all apps.
+
+1. Sign in to the [LUIS portal](https://www.luis.ai).
+1. In the upper-right corner, select your user account, and then select **Settings**.
+1. On the **User Settings** page, select **Add authoring resource**, and then select an existing authoring resource. Select **Save**.
+
+**Assign a resource to a specific app**
+
+The following procedure assigns a resource to a specific app.
+
+1. Sign in to the [LUIS portal](https://www.luis.ai). Select an app from the **My apps** list.
+1. Go to **Manage** > **Azure Resources**:
+
+ :::image type="content" source="./media/luis-how-to-azure-subscription/manage-azure-resources-prediction.png" alt-text="Choose a type of Language Understanding prediction resource." lightbox="./media/luis-how-to-azure-subscription/manage-azure-resources-prediction.png":::
+
+1. On the **Prediction resource** or **Authoring resource** tab, select the **Add prediction resource** or **Add authoring resource** button.
+1. Use the fields in the form to find the correct resource, and then select **Save**.
+
+# [Without LUIS portal](#tab/without-portal)
+
+## Assign prediction resource without using the LUIS portal
+
+For automated processes like CI/CD pipelines, you can automate the assignment of a LUIS resource to a LUIS app with the following steps:
+
+1. Get an [Azure Resource Manager token](https://resources.azure.com/api/token?plaintext=true), which is an alphanumeric string of characters. This token expires, so use it right away. You can also use the following Azure CLI command.
+
+ ```azurecli
+ az account get-access-token --resource=https://management.core.windows.net/ --query accessToken --output tsv
+ ```
+
+1. Use the token to request the LUIS runtime resources across subscriptions. Use the API to [get the LUIS Azure account](https://westus.dev.cognitive.microsoft.com/docs/services/5890b47c39e2bb17b84a55ff/operations/5be313cec181ae720aa2b26c) that your user account has access to.
+
+ This POST API requires the following values:
+
+ |Header|Value|
+ |--|--|
+ |`Authorization`|The value of `Authorization` is `Bearer {token}`. The token value must be preceded by the word `Bearer` and a space.|
+ |`Ocp-Apim-Subscription-Key`|Your authoring key.|
+
+ The API returns an array of JSON objects that represent your LUIS subscriptions. Returned values include the subscription ID, resource group, and resource name, returned as `AccountName`. Find the item in the array that's the LUIS resource that you want to assign to the LUIS app.
+
+1. Assign the LUIS resource to the app by using the [Assign a LUIS Azure accounts to an application](https://westus.dev.cognitive.microsoft.com/docs/services/5890b47c39e2bb17b84a55ff/operations/5be32228e8473de116325515) API.
+
+ This POST API requires the following values:
+
+ |Type|Setting|Value|
+ |--|--|--|
+ |Header|`Authorization`|The value of `Authorization` is `Bearer {token}`. The token value must be preceded by the word `Bearer` and a space.|
+ |Header|`Ocp-Apim-Subscription-Key`|Your authoring key.|
+ |Header|`Content-type`|`application/json`|
+ |Querystring|`appid`|The LUIS app ID.
+ |Body||{`AzureSubscriptionId`: Your Subscription ID,<br>`ResourceGroup`: Resource Group name that has your prediction resource,<br>`AccountName`: Name of your prediction resource}|
+
+    When this API call succeeds, it returns a `201 Created` status.
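+
+    For example, the following sketch shows the shape of this call using the Azure CLI's `az rest` command. It is only a sketch: the exact request URL comes from the linked API reference (shown here as the placeholder `<ASSIGN-API-URL>`), and the key, IDs, and resource names are placeholders you must replace.
+
+    ```azurecli
+    # Sketch only: <ASSIGN-API-URL> stands for the URL documented in the linked
+    # "Assign a LUIS Azure accounts to an application" API reference.
+    token=$(az account get-access-token --resource=https://management.core.windows.net/ --query accessToken --output tsv)
+
+    az rest --method post \
+      --url "<ASSIGN-API-URL>?appid=<YOUR-APP-ID>" \
+      --skip-authorization-header \
+      --headers "Authorization=Bearer $token" "Ocp-Apim-Subscription-Key=<YOUR-AUTHORING-KEY>" "Content-Type=application/json" \
+      --body '{"AzureSubscriptionId": "<YOUR-SUBSCRIPTION-ID>", "ResourceGroup": "<YOUR-RESOURCE-GROUP>", "AccountName": "<YOUR-PREDICTION-RESOURCE-NAME>"}'
+    ```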
+++
+## Unassign a resource
+
+When you unassign a resource, it's not deleted from Azure. It's only unlinked from LUIS.
+
+# [LUIS portal](#tab/portal)
+
+## Unassign resources using LUIS portal
+
+1. Sign in to the [LUIS portal](https://www.luis.ai), and then select an app from the **My apps** list.
+1. Go to **Manage** > **Azure Resources**.
+1. Select the **Unassign resource** button for the resource.
+
+# [Without LUIS portal](#tab/without-portal)
+
+## Unassign prediction resource without using the LUIS portal
+
+1. Get an [Azure Resource Manager token](https://resources.azure.com/api/token?plaintext=true), which is an alphanumeric string of characters. This token expires, so use it right away. You can also use the following Azure CLI command.
+
+ ```azurecli
+ az account get-access-token --resource=https://management.core.windows.net/ --query accessToken --output tsv
+ ```
+
+1. Use the token to request the LUIS runtime resources across subscriptions. Use the [Get LUIS Azure accounts API](https://westus.dev.cognitive.microsoft.com/docs/services/5890b47c39e2bb17b84a55ff/operations/5be313cec181ae720aa2b26c) to get the LUIS Azure accounts that your user account has access to.
+
+ This POST API requires the following values:
+
+ |Header|Value|
+ |--|--|
+ |`Authorization`|The value of `Authorization` is `Bearer {token}`. The token value must be preceded by the word `Bearer` and a space.|
+ |`Ocp-Apim-Subscription-Key`|Your authoring key.|
+
+    The API returns an array of JSON objects that represent your LUIS subscriptions. Returned values include the subscription ID, resource group, and resource name, returned as `AccountName`. Find the item in the array that's the LUIS resource that you want to unassign from the LUIS app.
+
+1. Unassign the LUIS resource from the app by using the [Unassign a LUIS Azure account from an application](https://westus.dev.cognitive.microsoft.com/docs/services/5890b47c39e2bb17b84a55ff/operations/5be32554f8591db3a86232e1/console) API.
+
+ This DELETE API requires the following values:
+
+ |Type|Setting|Value|
+ |--|--|--|
+ |Header|`Authorization`|The value of `Authorization` is `Bearer {token}`. The token value must be preceded by the word `Bearer` and a space.|
+ |Header|`Ocp-Apim-Subscription-Key`|Your authoring key.|
+ |Header|`Content-type`|`application/json`|
+ |Querystring|`appid`|The LUIS app ID.
+ |Body||{`AzureSubscriptionId`: Your Subscription ID,<br>`ResourceGroup`: Resource Group name that has your prediction resource,<br>`AccountName`: Name of your prediction resource}|
+
+    When this API call succeeds, it returns a `200 OK` status.
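+
+    As with the assign call, the following is only a sketch using `az rest`; `<UNASSIGN-API-URL>` stands for the URL documented in the linked API reference, and the key, IDs, and resource names are placeholders.
+
+    ```azurecli
+    # Sketch only: <UNASSIGN-API-URL> stands for the URL documented in the linked
+    # "Unassign a LUIS Azure account from an application" API reference.
+    token=$(az account get-access-token --resource=https://management.core.windows.net/ --query accessToken --output tsv)
+
+    az rest --method delete \
+      --url "<UNASSIGN-API-URL>?appid=<YOUR-APP-ID>" \
+      --skip-authorization-header \
+      --headers "Authorization=Bearer $token" "Ocp-Apim-Subscription-Key=<YOUR-AUTHORING-KEY>" "Content-Type=application/json" \
+      --body '{"AzureSubscriptionId": "<YOUR-SUBSCRIPTION-ID>", "ResourceGroup": "<YOUR-RESOURCE-GROUP>", "AccountName": "<YOUR-PREDICTION-RESOURCE-NAME>"}'
+    ```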
+++
+## Resource ownership
+
+An Azure resource, like a LUIS resource, is owned by the subscription that contains the resource.
+
+To change the ownership of a resource, you can take one of these actions:
+* Transfer the [ownership](../../cost-management-billing/manage/billing-subscription-transfer.md) of your subscription.
+* Export the LUIS app as a file, and then import the app on a different subscription. Export is available on the **My apps** page in the LUIS portal.
+
+## Resource limits
+
+### Authoring key creation limits
+
+You can create as many as 10 authoring keys per region, per subscription. Publishing regions are different from authoring regions. Make sure you create an app in the authoring region that corresponds to the publishing region where you want your client application to be located. For information on how authoring regions map to publishing regions, see [Authoring and publishing regions](luis-reference-regions.md).
+
+See [resource limits](luis-limits.md#resource-usage-and-limits) for more information.
+
+### Errors for key usage limits
+
+Usage limits are based on the pricing tier.
+
+If you exceed your transactions-per-second (TPS) quota, you receive an HTTP 429 error. If you exceed your transaction-per-month (TPM) quota, you receive an HTTP 403 error.
+
+## Change the pricing tier
+
+1. In the [Azure portal](https://portal.azure.com), go to **All resources** and select your resource.
+
+ :::image type="content" source="./media/luis-usage-tiers/find.png" alt-text="Screenshot that shows a LUIS subscription in the Azure portal." lightbox="./media/luis-usage-tiers/find.png":::
+
+1. From the left side menu, select **Pricing tier** to see the available pricing tiers.
+1. Select the pricing tier you want, and click **Select** to save your change. When the pricing change is complete, a notification appears in the top right confirming the pricing tier update. You can also change the tier from the command line, as shown in the sketch after these steps.
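+
+If you prefer to script the change, the pricing tier can also be updated with the Azure CLI. The following is a minimal sketch that reuses the example resource names from earlier in this article; substitute your own resource name, resource group, and target SKU.
+
+```azurecli
+az cognitiveservices account update -n my-luis-prediction-resource -g my-resource-group --sku S0
+```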
+
+## View Azure resource metrics
+
+## View a summary of Azure resource usage
+You can view LUIS usage information in the Azure portal. The **Overview** page shows a summary, including recent calls and errors. If you make a LUIS endpoint request, allow up to five minutes for the change to appear.
++
+## Customizing Azure resource usage charts
+The **Metrics** page provides a more detailed view of the data.
+You can configure your metrics charts for a specific **time period** and **metric**.
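+
+You can also query the same metrics from the command line. The following sketch assumes the metric name `TotalCalls`; list the metric definitions first to confirm the metric names available for your resource.
+
+```azurecli
+# List the available metric definitions, then query one of them at one-hour granularity.
+resourceId=$(az cognitiveservices account show -n my-luis-prediction-resource -g my-resource-group --query id -o tsv)
+az monitor metrics list-definitions --resource $resourceId
+az monitor metrics list --resource $resourceId --metric TotalCalls --interval PT1H
+```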
++
+## Total transactions threshold alert
+If you want to know when you reach a certain transaction threshold, for example 10,000 transactions, you can create an alert:
+
+1. From the left side menu, select **Alerts**.
+2. From the top menu, select **New alert rule**.
+
+ :::image type="content" source="./media/luis-usage-tiers/alerts.png" alt-text="Screenshot that shows the alert rules page." lightbox="./media/luis-usage-tiers/alerts.png":::
+
+3. Select **Add condition**
+
+ :::image type="content" source="./media/luis-usage-tiers/alerts-2.png" alt-text="Screenshot that shows the add condition page for alert rules." lightbox="./media/luis-usage-tiers/alerts-2.png":::
+
+4. Select **Total calls**
+
+ :::image type="content" source="./media/luis-usage-tiers/alerts-3.png" alt-text="Screenshot that shows the total calls page for alerts." lightbox="./media/luis-usage-tiers/alerts-3.png":::
+
+5. Scroll down to the **Alert logic** section, set the attributes as you want, and then select **Done**.
+
+ :::image type="content" source="./media/luis-usage-tiers/alerts-4.png" alt-text="Screenshot that shows the alert logic page." lightbox="./media/luis-usage-tiers/alerts-4.png":::
+
+6. To send notifications or invoke actions when the alert rule triggers, go to the **Actions** section and add your action group. A command-line sketch of a similar alert follows these steps.
+
+ :::image type="content" source="./media/luis-usage-tiers/alerts-5.png" alt-text="Screenshot that shows the actions page for alerts." lightbox="./media/luis-usage-tiers/alerts-5.png":::
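+
+The same alert can also be created from the command line. The following is a sketch only: the metric name `TotalCalls` is an assumption, and no action group is attached; adjust both to match your environment.
+
+```azurecli
+# Create a metric alert that fires when total calls exceed 10,000 over the evaluation window.
+resourceId=$(az cognitiveservices account show -n my-luis-prediction-resource -g my-resource-group --query id -o tsv)
+az monitor metrics alert create -n luis-total-calls-alert -g my-resource-group \
+  --scopes $resourceId \
+  --condition "total TotalCalls > 10000" \
+  --description "Alert when LUIS total calls exceed 10,000"
+```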
+
+### Reset an authoring key
+
+For [migrated authoring resource](luis-migration-authoring.md) apps: If your authoring key is compromised, reset the key in the Azure portal, on the **Keys** page for the authoring resource.
+
+For apps that haven't been migrated: The key is reset on all your apps in the LUIS portal. If you author your apps via the authoring APIs, you need to change the value of `Ocp-Apim-Subscription-Key` to the new key.
+
+### Regenerate an Azure key
+
+You can regenerate an Azure key from the **Keys** page in the Azure portal.
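+
+If you want to script the rotation, a minimal sketch with placeholder resource names:
+
+```azurecli
+# Regenerate Key1 for the resource; Key2 stays valid until you regenerate it as well.
+az cognitiveservices account keys regenerate -n my-luis-prediction-resource -g my-resource-group --key-name Key1
+```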
+
+<a name="securing-the-endpoint"></a>
+
+## App ownership, access, and security
+
+An app is defined by its Azure resources, which are determined by the owner's subscription.
+
+You can move your LUIS app. Use the following resources to help you do so by using the Azure portal or Azure CLI:
+
+* [Move an app between LUIS authoring resources](https://westus.dev.cognitive.microsoft.com/docs/services/5890b47c39e2bb17b84a55ff/operations/apps-move-app-to-another-luis-authoring-azure-resource)
+* [Move a resource to a new resource group or subscription](../../azure-resource-manager/management/move-resource-group-and-subscription.md)
+* [Move a resource within the same subscription or across subscriptions](../../azure-resource-manager/management/move-limitations/app-service-move-limitations.md)
++
+## Next steps
+
+* Learn [how to use versions](luis-how-to-manage-versions.md) to control your app life cycle.
++
ai-services Luis How To Batch Test https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/LUIS/luis-how-to-batch-test.md
+
+ Title: How to perform a batch test - LUIS
+
+description: Use Language Understanding (LUIS) batch testing sets to find utterances with incorrect intents and entities.
++++++++ Last updated : 01/19/2022+++
+# Batch testing with a set of example utterances
+++
+Batch testing validates your active trained version to measure its prediction accuracy. A batch test helps you view the accuracy of each intent and entity in your active version. Review the batch test results to take appropriate action to improve accuracy, such as adding more example utterances to an intent if your app frequently fails to identify the correct intent or labeling entities within the utterance.
+
+## Group data for batch test
+
+It is important that utterances used for batch testing are new to LUIS. If you have a data set of utterances, divide the utterances into three sets: example utterances added to an intent, utterances received from the published endpoint, and utterances used to batch test LUIS after it is trained.
+
+The batch JSON file you use should include utterances with top-level machine-learning entities labeled including start and end position. The utterances should not be part of the examples already in the app. They should be utterances you want to positively predict for intent and entities.
+
+You can separate out tests by intent and/or entity, or have all the tests (up to 1,000 utterances) in the same file.
+
+### Common errors importing a batch
+
+If you run into errors uploading your batch file to LUIS, check for the following common issues:
+
+* More than 1,000 utterances in a batch file
+* An utterance JSON object that doesn't have an entities property. The property can be an empty array.
+* Word(s) labeled in multiple entities
+* Entity labels starting or ending on a space.
+
+## Fixing batch errors
+
+If there are errors in the batch testing, you can add more utterances to an intent, label more utterances with the entity, or both, to help LUIS discriminate between intents. If you have added and labeled utterances and still get prediction errors in batch testing, consider adding a [phrase list](concepts/patterns-features.md) feature with domain-specific vocabulary to help LUIS learn faster.
++
+<a name="batch-testing"></a>
+
+# [LUIS portal](#tab/portal)
+
+## Batch testing using the LUIS portal
+
+### Import and train an example app
+
+Import an app that takes a pizza order such as `1 pepperoni pizza on thin crust`.
+
+1. Download and save [app JSON file](https://github.com/Azure-Samples/cognitive-services-sample-data-files/blob/master/luis/apps/pizza-with-machine-learned-entity.json?raw=true).
+
+1. Sign in to the [LUIS portal](https://www.luis.ai), and select your **Subscription** and **Authoring resource** to see the apps assigned to that authoring resource.
+1. Select the arrow next to **New app** and click **Import as JSON** to import the JSON into a new app. Name the app `Pizza app`.
++
+1. Select **Train** in the top-right corner of the navigation to train the app.
+++
+### Batch test file
+
+The example JSON includes one utterance with a labeled entity to illustrate what a test file looks like. In your own tests, you should have many utterances with correct intent and machine-learning entity labeled.
+
+1. Create `pizza-with-machine-learned-entity-test.json` in a text editor or [download](https://github.com/Azure-Samples/cognitive-services-sample-data-files/blob/master/luis/batch-tests/pizza-with-machine-learned-entity-test.json?raw=true) it.
+
+2. In the JSON-formatted batch file, add an utterance with the **Intent** you want predicted in the test.
+
+ [!code-json[Add the intents to the batch test file](~/samples-cognitive-services-data-files/luis/batch-tests/pizza-with-machine-learned-entity-test.json "Add the intent to the batch test file")]
+
+## Run the batch
+
+1. Select **Test** in the top navigation bar.
+
+2. Select **Batch testing panel** in the right-side panel.
+
+ ![Batch Testing Link](./media/luis-how-to-batch-test/batch-testing-link.png)
+
+3. Select **Import**. In the dialog box that appears, select **Choose File** and locate a JSON file with the correct JSON format that contains *no more than 1,000* utterances to test.
+
+ Import errors are reported in a red notification bar at the top of the browser. When an import has errors, no dataset is created. For more information, see [Common errors](#common-errors-importing-a-batch).
+
+4. Choose the file location of the `pizza-with-machine-learned-entity-test.json` file.
+
+5. Name the dataset `pizza test` and select **Done**.
+
+6. Select the **Run** button.
+
+7. After the batch test completes, you can see the following columns:
+
+ | Column | Description |
+ | -- | - |
+ | State | Status of the test. **See results** is only visible after the test is completed. |
+ | Name | The name you have given to the test. |
+ | Size | Number of tests in this batch test file. |
+ | Last Run | Date of last run of this batch test file. |
+ | Last result | Number of successful predictions in the test. |
+
+8. To view detailed results of the test, select **See results**.
+
+ > [!TIP]
+ > * Selecting **Download** will download the same file that you uploaded.
+    > * If the batch test failed, at least one utterance's intent did not match the prediction.
+
+<a name="access-batch-test-result-details-in-a-visualized-view"></a>
+
+### Review batch results for intents
+
+To review the batch test results, select **See results**. The test results show graphically how the test utterances were predicted against the active version.
+
+The batch chart displays four quadrants of results. To the right of the chart is a filter. The filter contains intents and entities. When you select a [section of the chart](#review-batch-results-for-intents) or a point within the chart, the associated utterance(s) display below the chart.
+
+While hovering over the chart, you can use the mouse wheel to enlarge or reduce the display. This is useful when many points on the chart are clustered tightly together.
+
+The chart is in four quadrants, with two of the sections displayed in red.
+
+1. Select the **ModifyOrder** intent in the filter list. The utterance is predicted as a **True Positive**, meaning the utterance successfully matched its positive prediction listed in the batch file.
+
+ > [!div class="mx-imgBorder"]
+ > ![Utterance successfully matched its positive prediction](./media/luis-tutorial-batch-testing/intent-predicted-true-positive.png)
+
+ The green checkmarks in the filters list also indicate the success of the test for each intent. All the other intents are listed with a 1/1 positive score because the utterance was tested against each intent, as a negative test for any intents not listed in the batch test.
+
+1. Select the **Confirmation** intent. This intent isn't listed in the batch test so this is a negative test of the utterance that is listed in the batch test.
+
+ > [!div class="mx-imgBorder"]
+ > ![Utterance successfully predicted negative for unlisted intent in batch file](./media/luis-tutorial-batch-testing/true-negative-intent.png)
+
+ The negative test was successful, as noted with the green text in the filter, and the grid.
+
+### Review batch test results for entities
+
+The ModifyOrder entity, as a machine-learning entity with subentities, displays whether the top-level entity matched and how the subentities are predicted.
+
+1. Select the **ModifyOrder** entity in the filter list then select the circle in the grid.
+
+1. The entity prediction displays below the chart. The display includes solid lines for predictions that match the expectation and dotted lines for predictions that don't match the expectation.
+
+ > [!div class="mx-imgBorder"]
+ > ![Entity parent successfully predicted in batch file](./media/luis-tutorial-batch-testing/labeled-entity-prediction.png)
+
+<a name="filter-chart-results-by-intent-or-entity"></a>
+
+#### Filter chart results
+
+To filter the chart by a specific intent or entity, select the intent or entity in the right-side filtering panel. The data points and their distribution update in the graph according to your selection.
+
+![Visualized Batch Test Result](./media/luis-how-to-batch-test/filter-by-entity.png)
+
+### Chart result examples
+
+In the chart in the LUIS portal, you can perform the following actions:
+
+#### View single-point utterance data
+
+In the chart, hover over a data point to see the certainty score of its prediction. Select a data point to retrieve its corresponding utterance in the utterances list at the bottom of the page.
+
+![Selected utterance](./media/luis-how-to-batch-test/selected-utterance.png)
++
+<a name="relabel-utterances-and-retrain"></a>
+<a name="false-test-results"></a>
+
+#### View section data
+
+In the four-section chart, select the section name, such as **False Positive** at the top-right of the chart. All utterances in that section display in a list below the chart.
+
+![Selected utterances by section](./media/luis-how-to-batch-test/selected-utterances-by-section.png)
+
+In the preceding image, the utterance `switch on` is labeled with the TurnAllOn intent, but received a prediction of the None intent. This indicates that the TurnAllOn intent needs more example utterances in order to make the expected prediction.
+
+The two red sections of the chart indicate utterances that did not match the expected prediction. These are utterances for which LUIS needs more training.
+
+The two green sections of the chart contain utterances that matched the expected prediction.
+
+# [REST API](#tab/rest)
+
+## Batch testing using the REST API
+
+LUIS lets you batch test using the LUIS portal and REST API. The endpoints for the REST API are listed below. For information on batch testing using the LUIS portal, see [Tutorial: batch test data sets](). Use the complete URLs below, replacing the placeholder values with your own LUIS Prediction key and endpoint.
+
+Remember to add your LUIS key to `Ocp-Apim-Subscription-Key` in the header, and set `Content-Type` to `application/json`.
+
+### Start a batch test
+
+Start a batch test using either an app version ID or a publishing slot. Send a **POST** request to one of the following endpoint formats. Include your batch file in the body of the request.
+
+Publishing slot
+* `<YOUR-PREDICTION-ENDPOINT>/luis/v3.0-preview/apps/<YOUR-APP-ID>/slots/<YOUR-SLOT-NAME>/evaluations`
+
+App version ID
+* `<YOUR-PREDICTION-ENDPOINT>/luis/v3.0-preview/apps/<YOUR-APP-ID>/versions/<YOUR-APP-VERSION-ID>/evaluations`
+
+These endpoints will return an operation ID that you will use to check the status, and get results.
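+
+For example, the following sketch starts a batch test against a publishing slot by using the Azure CLI's `az rest` command. The endpoint, app ID, slot name, prediction key, and batch file name are placeholders; the batch file format is described later in this article.
+
+```azurecli
+az rest --method post \
+  --url "<YOUR-PREDICTION-ENDPOINT>/luis/v3.0-preview/apps/<YOUR-APP-ID>/slots/<YOUR-SLOT-NAME>/evaluations" \
+  --skip-authorization-header \
+  --headers "Ocp-Apim-Subscription-Key=<YOUR-PREDICTION-KEY>" "Content-Type=application/json" \
+  --body @batch-test-file.json
+```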
++
+### Get the status of an ongoing batch test
+
+Use the operation ID from the batch test you started to get its status from the following endpoint formats:
+
+Publishing slot
+* `<YOUR-PREDICTION-ENDPOINT>/luis/v3.0-preview/apps/<YOUR-APP-ID>/slots/<YOUR-SLOT-ID>/evaluations/<YOUR-OPERATION-ID>/status`
+
+App version ID
+* `<YOUR-PREDICTION-ENDPOINT>/luis/v3.0-preview/apps/<YOUR-APP-ID>/versions/<YOUR-APP-VERSION-ID>/evaluations/<YOUR-OPERATION-ID>/status`
+
+### Get the results from a batch test
+
+Use the operation ID from the batch test you started to get its results from the following endpoint formats:
+
+Publishing slot
+* `<YOUR-PREDICTION-ENDPOINT>/luis/v3.0-preview/apps/<YOUR-APP-ID>/slots/<YOUR-SLOT-ID>/evaluations/<YOUR-OPERATION-ID>/result`
+
+App version ID
+* `<YOUR-PREDICTION-ENDPOINT>/luis/v3.0-preview/apps/<YOUR-APP-ID>/versions/<YOUR-APP-VERSION-ID>/evaluations/<YOUR-OPERATION-ID>/result`
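+
+As a sketch, the status and result calls use the same key header and the operation ID returned by the start call (all values are placeholders):
+
+```azurecli
+# Poll the status of the batch test, then fetch its results.
+az rest --method get \
+  --url "<YOUR-PREDICTION-ENDPOINT>/luis/v3.0-preview/apps/<YOUR-APP-ID>/slots/<YOUR-SLOT-ID>/evaluations/<YOUR-OPERATION-ID>/status" \
+  --skip-authorization-header \
+  --headers "Ocp-Apim-Subscription-Key=<YOUR-PREDICTION-KEY>"
+
+az rest --method get \
+  --url "<YOUR-PREDICTION-ENDPOINT>/luis/v3.0-preview/apps/<YOUR-APP-ID>/slots/<YOUR-SLOT-ID>/evaluations/<YOUR-OPERATION-ID>/result" \
+  --skip-authorization-header \
+  --headers "Ocp-Apim-Subscription-Key=<YOUR-PREDICTION-KEY>"
+```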
++
+### Batch file of utterances
+
+Submit a batch file of utterances, known as a *data set*, for batch testing. The data set is a JSON-formatted file containing a maximum of 1,000 labeled utterances. You can test up to 10 data sets in an app. If you need to test more, delete a data set and then add a new one. All custom entities in the model appear in the batch test entities filter even if there are no corresponding entities in the batch file data.
+
+The batch file consists of utterances. Each utterance must have an expected intent prediction along with any [machine-learning entities](concepts/entities.md#machine-learned-ml-entity) you expect to be detected.
+
+### Batch syntax template for intents with entities
+
+Use the following template to start your batch file:
+
+```JSON
+{
+ "LabeledTestSetUtterances": [
+ {
+ "text": "play a song",
+ "intent": "play_music",
+ "entities": [
+ {
+ "entity": "song_parent",
+ "startPos": 0,
+ "endPos": 15,
+ "children": [
+ {
+ "entity": "pre_song",
+ "startPos": 0,
+ "endPos": 3
+ },
+ {
+ "entity": "song_info",
+ "startPos": 5,
+ "endPos": 15
+ }
+ ]
+ }
+ ]
+ }
+ ]
+}
+
+```
+
+The batch file uses the **startPos** and **endPos** properties to note the beginning and end of an entity. The values are zero-based and should not begin or end on a space. This is different from the query logs, which use the `startIndex` and `endIndex` properties.
+
+If you do not want to test entities, include the `entities` property and set the value as an empty array, `[]`.
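+
+For example, based on the template above, an intent-only test utterance looks like this:
+
+```JSON
+{
+    "LabeledTestSetUtterances": [
+        {
+            "text": "play a song",
+            "intent": "play_music",
+            "entities": []
+        }
+    ]
+}
+```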
+
+### REST API batch test results
+
+There are several objects returned by the API:
+
+* Information about the intent models, such as precision, recall, and F-score.
+* Information about the entity models, such as precision, recall, and F-score, for each entity.
+  * Using the `verbose` flag, you can get more information about the entity, such as `entityTextFScore` and `entityTypeFScore`.
+* Provided utterances with the predicted and labeled intent names.
+* A list of false positive entities and a list of false negative entities.
+++
+## Next steps
+
+If testing indicates that your LUIS app doesn't recognize the correct intents and entities, you can work to improve your LUIS app's performance by labeling more utterances or adding features.
+
+* [Label suggested utterances with LUIS](how-to/improve-application.md)
+* [Use features to improve your LUIS app's performance](concepts/patterns-features.md)
ai-services Luis How To Collaborate https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/LUIS/luis-how-to-collaborate.md
+
+ Title: Collaborate with others - LUIS
+
+description: An app owner can add contributors to the authoring resource. These contributors can modify the model, train, and publish the app.
++++++++ Last updated : 05/17/2021+++
+# Add contributors to your app
+++
+An app owner can add contributors to apps. These contributors can modify the model, train, and publish the app. Once you have [migrated](luis-migration-authoring.md) your account, _contributors_ are managed in the Azure portal for the authoring resource, using the **Access control (IAM)** page. Add a user, using the collaborator's email address and the _contributor_ role.
+
+## Add contributor to Azure authoring resource
+
+You have migrated if your LUIS authoring experience is tied to an Authoring resource on the **Manage -> Azure resources** page in the LUIS portal.
+
+In the Azure portal, find your Language Understanding (LUIS) authoring resource. It has the type `LUIS.Authoring`. In the resource's **Access Control (IAM)** page, add the role of **contributor** for the user that you want to contribute. For detailed steps, see [Assign Azure roles using the Azure portal](../../role-based-access-control/role-assignments-portal.md).
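+
+If you prefer to script the role assignment instead of using the portal, a minimal Azure CLI sketch (the resource names and user email are placeholders):
+
+```azurecli
+# Grant the Contributor role on the LUIS authoring resource to a user.
+resourceId=$(az cognitiveservices account show -n my-luis-authoring-resource -g my-resource-group --query id -o tsv)
+az role assignment create --assignee "user@contoso.com" --role "Contributor" --scope $resourceId
+```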
+
+## View the app as a contributor
+
+After you have been added as a contributor, [sign in to the LUIS portal](how-to/sign-in.md).
++
+### Users with multiple emails
+
+If you add contributors to a LUIS app, you are specifying the exact email address. While Azure Active Directory (Azure AD) allows a single user to have more than one email account used interchangeably, LUIS requires the user to sign in with the email address specified when adding the contributor.
+
+<a name="owner-and-collaborators"></a>
+
+### Azure Active Directory resources
+
+If you use [Azure Active Directory](../../active-directory/index.yml) (Azure AD) in your organization, Language Understanding (LUIS) needs permission to the information about your users' access when they want to use LUIS. The resources that LUIS requires are minimal.
+
+You see the detailed description when you attempt to sign up with an account that has admin consent or that does not require admin consent:
+
+* Allows you to sign in to the app with your organizational account and lets the app read your profile. It also allows the app to read basic company information. This gives LUIS permission to read basic profile data, such as user ID, email, and name.
+* Allows the app to see and update your data, even when you are not currently using the app. The permission is required to refresh the access token of the user.
++
+### Azure Active Directory tenant user
+
+LUIS uses standard Azure Active Directory (Azure AD) consent flow.
+
+The tenant admin should work directly with the user who needs access granted to use LUIS in the Azure AD.
+
+* First, the user signs in to LUIS and sees the pop-up dialog that needs admin approval. The user contacts the tenant admin before continuing.
+* Second, the tenant admin signs in to LUIS and sees a consent flow pop-up dialog. This is the dialog where the admin gives permission for the user. Once the admin accepts the permission, the user is able to continue with LUIS. If the tenant admin will not sign in to LUIS, the admin can access [consent](https://account.activedirectory.windowsazure.com/r#/applications) for LUIS. On this page, the admin can filter the list to items that include the name `LUIS`.
+
+If the tenant admin wants only certain users to use LUIS, there are a couple of possible solutions:
+* Give "admin consent" (consent to all users of the Azure AD tenant), then set **User assignment required** to **Yes** under the Enterprise application properties, and finally assign or add only the wanted users to the application. With this method, the administrator still provides "admin consent" to the app, but it's possible to control which users can access it.
+* Alternatively, use the [Azure AD identity and access management API in Microsoft Graph](/graph/azuread-identity-access-management-concept-overview) to provide consent to each specific user.
+
+Learn more about Azure Active Directory users and consent:
+* [Restrict your app](../../active-directory/develop/howto-restrict-your-app-to-a-set-of-users.md) to a set of users
+
+## Next steps
+
+* Learn [how to use versions](luis-how-to-manage-versions.md) to control your app life cycle.
+* Learn about [authoring resources](luis-how-to-azure-subscription.md) and [adding contributors](luis-how-to-collaborate.md) on that resource.
+* Learn [how to create](luis-how-to-azure-subscription.md) authoring and runtime resources
+* Migrate to the new [authoring resource](luis-migration-authoring.md)
ai-services Luis How To Manage Versions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/LUIS/luis-how-to-manage-versions.md
+
+ Title: Manage versions - LUIS
+
+description: Versions allow you to build and publish different models. A good practice is to clone the current active model to a different version of the app before making changes to the model.
++++++++ Last updated : 10/25/2021++
+# Use versions to edit and test without impacting staging or production apps
+++
+Versions allow you to build and publish different models. A good practice is to clone the current active model to a different [version](./concepts/application-design.md) of the app before making changes to the model.
+
+The active version is the version you are editing in the LUIS portal **Build** section with intents, entities, features, and patterns. When using the authoring APIs, you don't need to set the active version because the version-specific REST API calls include the version in the route.
+
+To work with versions, open your app by selecting its name on the **My Apps** page, select **Manage** in the top bar, and then select **Versions** in the left navigation.
+
+The list of versions shows which versions are published, where they are published, and which version is currently active.
+
+## Clone a version
+
+1. Select the version you want to clone then select **Clone** from the toolbar.
+
+2. In the **Clone version** dialog box, type a name for the new version such as "0.2".
+
+ ![Clone Version dialog box](./media/luis-how-to-manage-versions/version-clone-version-dialog.png)
+
+ > [!NOTE]
+    > Version ID can consist only of letters, digits, or '.' and cannot be longer than 10 characters.
+
+ A new version with the specified name is created and set as the active version.
+
+## Set active version
+
+Select a version from the list, then select **Activate** from the toolbar.
+
+## Import version
+
+You can import a `.json` or a `.lu` version of your application.
+
+1. Select **Import** from the toolbar, then select the format.
+
+2. In the **Import new version** pop-up window, enter the new version name of up to 10 characters. You only need to set a version ID if the version in the file already exists in the app.
+
+ ![Manage section, versions page, importing new version](./media/luis-how-to-manage-versions/versions-import-pop-up.png)
+
+ Once you import a version, the new version becomes the active version.
+
+### Import errors
+
+* Tokenizer errors: If you get a **tokenizer error** when importing, you are trying to import a version that uses a different [tokenizer](luis-language-support.md#custom-tokenizer-versions) than the app currently uses. To fix this, see [Migrating between tokenizer versions](luis-language-support.md#migrating-between-tokenizer-versions).
+
+<a name = "export-version"></a>
+
+## Other actions
+
+* To **delete** a version, select a version from the list, then select **Delete** from the toolbar. Select **Ok**.
+* To **rename** a version, select a version from the list, then select **Rename** from the toolbar. Enter new name and select **Done**.
+* To **export** a version, select a version from the list, then select **Export app** from the toolbar. Choose JSON or LU to export for a backup or to save in source control. Choose **Export for container** to [use this app in a LUIS container](luis-container-howto.md).
+
+## See also
+
+See the following links to view the REST APIs for importing and exporting applications:
+
+* [Importing applications](https://westus.dev.cognitive.microsoft.com/docs/services/luis-programmatic-apis-v3-0-preview/operations/5892283039e2bb0d9c2805f5)
+* [Exporting applications](https://westus.dev.cognitive.microsoft.com/docs/services/luis-programmatic-apis-v3-0-preview/operations/5890b47c39e2bb052c5b9c40)
ai-services Luis How To Model Intent Pattern https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/LUIS/luis-how-to-model-intent-pattern.md
+
+ Title: Patterns add accuracy - LUIS
+
+description: Add pattern templates to improve prediction accuracy in Language Understanding (LUIS) applications.
++++++++ Last updated : 01/07/2022++
+# How to add patterns to improve prediction accuracy
++
+After a LUIS app receives endpoint utterances, use a [pattern](concepts/patterns-features.md) to improve prediction accuracy for utterances that reveal a pattern in word order and word choice. Patterns use specific [syntax](concepts/patterns-features.md) to indicate the location of [entities](concepts/entities.md), entity [roles](./concepts/entities.md), and optional text.
+
+>[!Note]
+>* After you add, edit, remove, or reassign a pattern, [train](how-to/train-test.md) and [publish](how-to/publish.md) your app for your changes to affect endpoint queries.
+>* Patterns only include machine-learning entity parents, not subentities.
+
+## Add template utterance using correct syntax
+
+1. Sign in to the [LUIS portal](https://www.luis.ai), and select your **Subscription** and **Authoring resource** to see the apps assigned to that authoring resource.
+1. Open your app by selecting its name on the **My Apps** page.
+1. Select **Patterns** in the left panel, under **Improve app performance**.
+
+1. Select the correct intent for the pattern.
+
+1. In the template textbox, type the template utterance and select Enter. When you want to enter the entity name, use the correct pattern entity syntax. Begin the entity syntax with `{`. The list of entities displays. Select the correct entity.
+
+ > [!div class="mx-imgBorder"]
+ > ![Screenshot of entity for pattern](./media/luis-how-to-model-intent-pattern/patterns-3.png)
+
+ If your entity includes a [role](./concepts/entities.md), indicate the role with a single colon, `:`, after the entity name, such as `{Location:Origin}`. The list of roles for the entities displays in a list. Select the role, and then select Enter.
+
+ > [!div class="mx-imgBorder"]
+ > ![Screenshot of entity with role](./media/luis-how-to-model-intent-pattern/patterns-4.png)
+
+ After you select the correct entity, finish entering the pattern, and then select Enter. When you are done entering patterns, [train](how-to/train-test.md) your app.
+
+ > [!div class="mx-imgBorder"]
+ > ![Screenshot of entered pattern with both types of entities](./media/luis-how-to-model-intent-pattern/patterns-5.png)
+
+## Create a pattern.any entity
+
+[Pattern.any](concepts/entities.md) entities are only valid in [patterns](luis-how-to-model-intent-pattern.md), not intents' example utterances. This type of entity helps LUIS find the end of entities of varying length and word choice. Because this entity is used in a pattern, LUIS knows where the end of the entity is in the utterance template.
+
+1. Sign in to the [LUIS portal](https://www.luis.ai), and select your **Subscription** and **Authoring resource** to see the apps assigned to that authoring resource.
+1. Open your app by selecting its name on the **My Apps** page.
+1. From the **Build** section, select **Entities** in the left panel, and then select **+ Create**.
+
+1. In the **Choose an entity type** dialog box, enter the entity name in the **Name** box, select **Pattern.Any** as the **Type**, and then select **Create**.
+
+ Once you [create a pattern utterance](luis-how-to-model-intent-pattern.md) using this entity, the entity is extracted with a combined machine-learning and text-matching algorithm.
+
+## Adding example utterances as pattern
+
+If you want to add a pattern for an entity, the _easiest_ way is to create the pattern from the Intent details page. This ensures your syntax matches the example utterance.
+
+1. Sign in to the [LUIS portal](https://www.luis.ai), and select your **Subscription** and **Authoring resource** to see the apps assigned to that authoring resource.
+1. Open your app by selecting its name on the **My Apps** page.
+1. On the **Intents** list page, select the intent name of the example utterance you want to create a template utterance from.
+1. On the Intent details page, select the row for the example utterance you want to use as the template utterance, then select **+ Add as pattern** from the context toolbar.
+
+ > [!div class="mx-imgBorder"]
+ > ![Screenshot of selecting example utterance as a template pattern on the Intent details page.](./media/luis-how-to-model-intent-pattern/add-example-utterances-as-pattern-template-utterance-from-intent-detail-page.png)
+
+ The utterance must include an entity in order to create a pattern from the utterance.
+
+1. In the pop-up box, select **Done** on the **Confirm patterns** page. You don't need to define the entities' subentities, or features. You only need to list the machine-learning entity.
+
+ > [!div class="mx-imgBorder"]
+ > ![Screenshot of confirming example utterance as a template pattern on the Intent details page.](./media/luis-how-to-model-intent-pattern/confirm-patterns-from-example-utterance-intent-detail-page.png)
+
+1. If you need to edit the template, such as selecting text as optional, with the `[]` (square) brackets, you need to make this edit from the **Patterns** page.
+
+1. In the navigation bar, select **Train** to train the app with the new pattern.
+
+## Use the OR operator and groups
+
+The following two patterns can be combined into a single pattern using the group "_( )_" and OR "_|_" syntax.
+
+|Intent|Example utterances with optional text and prebuilt entities|
+|--|--|
+|OrgChart-Manager|"who will be {EmployeeListEntity}['s] manager [[in]{datetimeV2}?]"|
+|OrgChart-Manager|"who will be {EmployeeListEntity}['s] manager [[on]{datetimeV2}?]"|
+
+The new template utterance will be:
+
+"who ( was | is | will be ) {EmployeeListEntity}['s] manager [([in]|[on]){datetimeV2}?]" .
+
+This uses a **group** around the required verb tense and the optional 'in' and 'on' with an **or** pipe between them.
++
+## Template utterances
+
+Due to the nature of the Human Resource subject domain, there are a few common ways of asking about employee relationships in organizations, such as the following example utterances:
+
+* "Who does Jill Jones report to?"
+* "Who reports to Jill Jones?"
+
+These utterances are too close to determine the contextual uniqueness of each without providing _many_ utterance examples. By adding a pattern for an intent, LUIS learns common utterance patterns for an intent without needing to supply several more utterance examples.
+
+>[!Tip]
+>Each utterance can be deleted from the review list. Once deleted, it will not appear in the list again. This is true even if the user enters the same utterance from the endpoint.
+
+Template utterance examples for this intent would include:
+
+|Template utterances examples|syntax meaning|
+|--|--|
+|Who does {EmployeeListEntity} report to[?]|interchangeable: {EmployeeListEntity} <br> ignore: [?]|
+|Who reports to {EmployeeListEntity}[?]|interchangeable: {EmployeeListEntity} <br> ignore: [?]|
+
+The "_{EmployeeListEntity}_" syntax marks the entity location within the template utterance and which entity it is. The optional syntax, "_[?]_", marks words or [punctuation](luis-reference-application-settings.md) that is optional. LUIS matches the utterance, ignoring the optional text inside the brackets.
+
+> [!IMPORTANT]
+> While the syntax looks like a regular expression, it is not a regular expression. Only the curly bracket, "_{ }_", and square bracket, "_[ ]_", syntax is supported. They can be nested up to two levels.
+
+For a pattern to be matched to an utterance, _first_ the entities within the utterance must match the entities in the template utterance. This means the entities need to have enough examples in example utterances with a high degree of prediction before patterns with entities are successful. The template doesn't help predict entities, however. The template only predicts intents.
+
+> [!NOTE]
+> While patterns allow you to provide fewer example utterances, if the entities are not detected, the pattern will not match.
+
+## Add phrase list as a feature
+[Features](concepts/patterns-features.md) help LUIS by providing hints that certain words and phrases are part of an app domain vocabulary.
+
+1. Sign in to the [LUIS portal](https://www.luis.ai/), and select your **Subscription** and **Authoring resource** to see the apps assigned to that authoring resource.
+2. Open your app by selecting its name on the **My apps** page.
+3. Select **Build** , then select **Features** in your app&#39;s left panel.
+4. On the **Features** page, select **+ Create**.
+5. In the **Create new phrase list feature** dialog box, enter a name such as Pizza Toppings. In the **Value** box, enter examples of toppings, such as _Ham_. You can type one value at a time, or a set of values separated by commas, and then press **Enter**.
++
+6. Keep the **These values are interchangeable** selector enabled if the phrases can be used interchangeably. The interchangeable phrase list feature serves as a list of synonyms for training. Non-interchangeable phrase lists serve as separate features for training, meaning that features are similar but the intent changes when you swap phrases.
+7. The phrase list can apply to the entire app with the **Global** setting, or to a specific model (intent or entity). If you create the phrase list as a _feature_ from an intent or entity, the toggle is not set to global. In this case, the toggle specifies that the feature is local only to that model, therefore, _not global_ to the application.
+8. Select **Done**. The new feature is added to the **ML Features** page.
+
+> [!Note]
+> * You can delete, or deactivate a phrase list from the contextual toolbar on the **ML Features** page.
+> * A phrase list should be applied to the intent or entity it is intended to help but there may be times when a phrase list should be applied to the entire app as a **Global** feature. On the **Machine Learning** Features page, select the phrase list, then select **Make global** in the top contextual toolbar.
++
+## Add entity as a feature to an intent
+
+To add an entity as a feature to an intent, select the intent from the Intents page, then select **+ Add feature** above the contextual toolbar. The list will include all phrase lists and entities that can be applied as features.
+
+To add an entity as a feature to another entity, you can add the feature either on the Intent detail page using the [Entity Palette](./how-to/entities.md) or you can add the feature on the Entity detail page.
++
+## Next steps
+
+* [Train and test](how-to/train-test.md) your app after improvement.
ai-services Luis How To Use Dashboard https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/LUIS/luis-how-to-use-dashboard.md
+
+ Title: Dashboard - Language Understanding - LUIS
+
+description: Fix intents and entities with your trained app's dashboard. The dashboard displays overall app information, with highlights of intents that should be fixed.
++++++++ Last updated : 01/07/2022++
+# How to use the Dashboard to improve your app
+++
+Find and fix problems with your trained app's intents when you are using example utterances. The dashboard displays overall app information, with highlights of intents that should be fixed.
+
+Reviewing dashboard analysis is an iterative process; repeat it as you change and improve your model.
+
+This page will not have relevant analysis for apps that do not have any example utterances in the intents, known as _pattern-only_ apps.
+
+## What issues can be fixed from dashboard?
+
+The three problems addressed in the dashboard are:
+
+|Issue|Chart color|Explanation|
+|--|--|--|
+|Data imbalance|-|This occurs when the quantity of example utterances varies significantly. All intents need to have _roughly_ the same number of example utterances - except the None intent. It should only have 10%-15% of the total quantity of utterances in the app.<br><br> If the data is imbalanced but the intent accuracy is above a certain threshold, this imbalance is not reported as an issue.<br><br>**Start with this issue - it may be the root cause of the other issues.**|
+|Unclear predictions|Orange|This occurs when the top intent and the next intent's scores are close enough that they may flip on the next training, due to [negative sampling](how-to/train-test.md) or more example utterances added to the intent. |
+|Incorrect predictions|Red|This occurs when an example utterance is not predicted for the labeled intent (the intent it is in).|
+
+Correct predictions are represented with the color blue.
+
+The dashboard shows these issues and tells you which intents are affected and suggests what you should do to improve the app.
+
+## Before app is trained
+
+Before you train the app, the dashboard does not contain any suggestions for fixes. Train your app to see these suggestions.
+
+## Check your publishing status
+
+The **Publishing status** card contains information about the active version's last publish.
+
+Check that the active version is the version you want to fix.
+
+![Dashboard shows app's external services, published regions, and aggregated endpoint hits.](./media/luis-how-to-use-dashboard/analytics-card-1-shows-app-summary-and-endpoint-hits.png)
+
+This also shows any external services, published regions, and aggregated endpoint hits.
+
+## Review training evaluation
+
+The **Training evaluation** card contains the aggregated summary of your app's overall accuracy by area. The score indicates intent quality.
+
+![The Training evaluation card contains the first area of information about your app's overall accuracy.](./media/luis-how-to-use-dashboard/analytics-card-2-shows-app-overall-accuracy.png)
+
+The chart indicates the correctly predicted intents and the problem areas with different colors. As you improve the app with the suggestions, this score increases.
+
+The suggested fixes are separated out by problem type and are the most significant for your app. If you would prefer to review and fix issues per intent, use the **[Intents with errors](#intents-with-errors)** card at the bottom of the page.
+
+Each problem area has intents that need to be fixed. When you select the intent name, the **Intent** page opens with a filter applied to the utterances. This filter allows you to focus on the utterances that are causing the problem.
+
+### Compare changes across versions
+
+Create a new version before making changes to the app. In the new version, make the suggested changes to the intent's example utterances, then train again. On the Dashboard page's **Training evaluation** card, use the **Show change from trained version** to compare the changes.
+
+![Compare changes across versions](./media/luis-how-to-use-dashboard/compare-improvement-across-versions.png)
+
+### Fix version by adding or editing example utterances and retraining
+
+The primary method of fixing your app will be to add or edit example utterances and retrain. The new or changed utterances need to follow guidelines for [varied utterances](concepts/utterances.md).
+
+Adding example utterances should be done by someone who:
+
+* has a high degree of understanding of what utterances are in the different intents.
+* knows how utterances in one intent may be confused with another intent.
+* is able to decide if two intents, which are frequently confused with each other, should be collapsed into a single intent. If this is the case, the different data must be pulled out with entities.
+
+### Patterns and phrase lists
+
+The analytics page doesn't indicate when to use [patterns](concepts/patterns-features.md) or [phrase lists](concepts/patterns-features.md). If you do add them, they can help with incorrect or unclear predictions but won't help with data imbalance.
+
+### Review data imbalance
+
+Start with this issue - it may be the root cause of the other issues.
+
+The **data imbalance** intent list shows intents that need more utterances in order to correct the data imbalance.
+
+**To fix this issue**:
+
+* Add more utterances to the intent then train again.
+
+Do not add utterances to the None intent unless that is suggested on the dashboard.
+
+> [!Tip]
+> Use the third section on the page, **Utterances per intent** with the **Utterances (number)** setting, as a quick visual guide of which intents need more utterances.
+ ![Use 'Utterances (number)' to find intents with data imbalance.](./media/luis-how-to-use-dashboard/predictions-per-intent-number-of-utterances.png)
+
+### Review incorrect predictions
+
+The **incorrect prediction** intent list shows intents that have utterances, which are used as examples for a specific intent, but are predicted for different intents.
+
+**To fix this issue**:
+
+* Edit utterances to be more specific to the intent and train again.
+* Combine intents if utterances are too closely aligned and train again.
+
+### Review unclear predictions
+
+The **unclear prediction** intent list shows intents with utterances whose prediction scores are not far enough away from the nearest rival's score, so the top intent for the utterance may change on the next training, due to [negative sampling](how-to/train-test.md).
+
+**To fix this issue**:
+
+* Edit utterances to be more specific to the intent and train again.
+* Combine intents if utterances are too closely aligned and train again.
+
+## Utterances per intent
+
+This card shows the overall app health across the intents. As you fix intents and retrain, continue to glance at this card for issues.
+
+The following chart shows a well-balanced app with almost no issues to fix.
+
+![The following chart shows a well-balanced app with almost no issues to fix.](./media/luis-how-to-use-dashboard/utterance-per-intent-shows-data-balance.png)
+
+The following chart shows a poorly balanced app with many issues to fix.
+
+![Screenshot shows Predictions per intent with several Unclear or Incorrectly predicted results.](./media/luis-how-to-use-dashboard/utterance-per-intent-shows-data-imbalance.png)
+
+Hover over each intent's bar to get information about the intent.
+
+![Screenshot shows Predictions per intent with details of Unclear or Incorrectly predicted results.](./media/luis-how-to-use-dashboard/utterances-per-intent-with-details-of-errors.png)
+
+Use the **Sort by** feature to arrange the intents by issue type so you can focus on the most problematic intents with that issue.
+
+## Intents with errors
+
+This card allows you to review issues for a specific intent. The default view of this card is the most problematic intents so you know where to focus your efforts.
+
+![The Intents with errors card allows you to review issues for a specific intent. The card is filtered to the most problematic intents, by default, so you know where to focus your efforts.](./media/luis-how-to-use-dashboard/most-problematic-intents-with-errors.png)
+
+The top donut chart shows the issues with the intent across the three problem types. If there are issues in the three problem types, each type has its own chart below, along with any rival intents.
+
+### Filter intents by issue and percentage
+
+This section of the card allows you to find example utterances that are falling outside your error threshold. Ideally, you want the percentage of correct predictions to be significant. That percentage is business and customer driven.
+
+Determine the threshold percentages that you are comfortable with for your business.
+
+The filter allows you to find intents with specific issue:
+
+|Filter|Suggested percentage|Purpose|
+|--|--|--|
+|Most problematic intents|-|**Start here** - Fixing the utterances in this intent will improve the app more than other fixes.|
+|Correct predictions below|60%|This is the percentage of utterances in the selected intent that are correct but have a confidence score below the threshold. |
+|Unclear predictions above|15%|This is the percentage of utterances in the selected intent that are confused with the nearest rival intent.|
+|Incorrect predictions above|15%|This is the percentage of utterances in the selected intent that are incorrectly predicted. |
+
+### Correct prediction threshold
+
+What is a confident prediction confidence score to you? At the beginning of app development, 60% may be your target. Use the **Correct predictions below** with the percentage of 60% to find any utterances in the selected intent that need to be fixed.
+
+### Unclear or incorrect prediction threshold
+
+These two filters allow you to find utterances in the selected intent beyond your threshold. You can think of these two percentages as error percentages. If you are comfortable with a 10-15% error rate for predictions, set the filter threshold to 15% to find all utterances above this value.
+
+## Next steps
+
+* [Manage your Azure resources](luis-how-to-azure-subscription.md)
ai-services Luis Language Support https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/LUIS/luis-language-support.md
+
+ Title: Language support - LUIS
+
+description: LUIS has a variety of features within the service. Not all features are at the same language parity. Make sure the features you are interested in are supported in the language culture you are targeting. A LUIS app is culture-specific and cannot be changed once it is set.
++++++++ Last updated : 01/18/2022++
+# Language and region support for LUIS
+++
+LUIS has a variety of features within the service. Not all features are at the same language parity. Make sure the features you are interested in are supported in the language culture you are targeting. A LUIS app is culture-specific and cannot be changed once it is set.
+
+## Multilingual LUIS apps
+
+If you need a multilingual LUIS client application such as a chatbot, you have a few options. If LUIS supports all the languages, you develop a LUIS app for each language. Each LUIS app has a unique app ID, and endpoint log. If you need to provide language understanding for a language LUIS does not support, you can use the [Translator service](../translator/translator-overview.md) to translate the utterance into a supported language, submit the utterance to the LUIS endpoint, and receive the resulting scores.
+
+> [!NOTE]
+> A newer version of Language Understanding capabilities is now available as part of Azure AI Language. For more information, see [Azure AI Language Documentation](../language-service/index.yml). For language understanding capabilities that support multiple languages within the Language Service, see [Conversational Language Understanding](../language-service/conversational-language-understanding/concepts/multiple-languages.md).
+
+## Languages supported
+
+LUIS understands utterances in the following languages:
+
+| Language |Locale | Prebuilt domain | Prebuilt entity | Phrase list recommendations | [Sentiment analysis](../language-service/sentiment-opinion-mining/overview.md) and [key phrase extraction](../language-service/key-phrase-extraction/overview.md)|
+|--|--|:--:|:--:|:--:|:--:|
+| Arabic (preview - modern standard Arabic) |`ar-AR`|-|-|-|-|
+| *[Chinese](#chinese-support-notes) |`zh-CN` | ✔ | ✔ |✔|-|
+| Dutch |`nl-NL` |✔|-|-|✔|
+| English (United States) |`en-US` | ✔ | ✔ |✔|✔|
+| English (UK) |`en-GB` | ✔ | ✔ |✔|✔|
+| French (Canada) |`fr-CA` |-|-|-|✔|
+| French (France) |`fr-FR` |✔| ✔ |✔|✔|
+| German |`de-DE` |✔| ✔ |✔|✔|
+| Gujarati (preview) | `gu-IN`|-|-|-|-|
+| Hindi (preview) | `hi-IN`|-|✔|-|-|
+| Italian |`it-IT` |✔| ✔ |✔|✔|
+| *[Japanese](#japanese-support-notes) |`ja-JP` |✔| ✔ |✔|Key phrase only|
+| Korean |`ko-KR` |✔|-|-|Key phrase only|
+| Marathi (preview) | `mr-IN`|-|-|-|-|
+| Portuguese (Brazil) |`pt-BR` |✔| ✔ |✔|not all sub-cultures|
+| Spanish (Mexico)|`es-MX` |-|✔|✔|✔|
+| Spanish (Spain) |`es-ES` |✔| ✔ |✔|✔|
+| Tamil (preview) | `ta-IN`|-|-|-|-|
+| Telugu (preview) | `te-IN`|-|-|-|-|
+| Turkish | `tr-TR` |✔|✔|-|Sentiment only|
++++
+Language support varies for [prebuilt entities](luis-reference-prebuilt-entities.md) and [prebuilt domains](luis-reference-prebuilt-domains.md).
++
+### *Japanese support notes
+
+ - Because LUIS does not provide syntactic analysis and will not understand the difference between Keigo and informal Japanese, you need to incorporate the different levels of formality as training examples for your applications.
+ - でございます is not the same as です.
+ - です is not the same as だ.
++
+### Speech API supported languages
+See Speech [Supported languages](../speech-service/speech-to-text.md) for Speech dictation mode languages.
+
+### Bing Spell Check supported languages
+See Bing Spell Check [Supported languages](../../cognitive-services/bing-spell-check/language-support.md) for a list of supported languages and status.
+
+## Rare or foreign words in an application
+In the `en-us` culture, LUIS learns to distinguish most English words, including slang. In the `zh-cn` culture, LUIS learns to distinguish most Chinese characters. If you use a rare word in `en-us` or character in `zh-cn`, and you see that LUIS seems unable to distinguish that word or character, you can add that word or character to a [phrase-list feature](concepts/patterns-features.md). For example, words outside of the culture of the application -- that is, foreign words -- should be added to a phrase-list feature.
+
+<!--This phrase list should be marked non-interchangeable, to indicate that the set of rare words forms a class that LUIS should learn to recognize, but they are not synonyms or interchangeable with each other.-->
+
+### Hybrid languages
+Hybrid languages combine words from two cultures such as English and Chinese. These languages are not supported in LUIS because an app is based on a single culture.
+
+## Tokenization
+To perform machine learning, LUIS breaks an utterance into [tokens](luis-glossary.md#token) based on culture.
+
+|Language| Every space or special character | Character level | Compound words |
+|--|:--:|:--:|:--:|
+|Arabic|✔|||
+|Chinese||✔||
+|Dutch|✔||✔|
+|English (en-us)|✔ |||
+|English (en-GB)|✔ |||
+|French (fr-FR)|✔|||
+|French (fr-CA)|✔|||
+|German|✔||✔|
+|Gujarati|✔|||
+|Hindi|✔|||
+|Italian|✔|||
+|Japanese|||✔|
+|Korean||✔||
+|Marathi|✔|||
+|Portuguese (Brazil)|✔|||
+|Spanish (es-ES)|✔|||
+|Spanish (es-MX)|✔|||
+|Tamil|✔|||
+|Telugu|✔|||
+|Turkish|✔|||
++
+### Custom tokenizer versions
+
+The following cultures have custom tokenizer versions:
+
+|Culture|Version|Purpose|
+|--|--|--|
+|German<br>`de-de`|1.0.0|Tokenizes words by splitting them with a machine learning-based tokenizer that tries to break composite words into their components.<br>If a user enters `Ich fahre einen krankenwagen` as an utterance, it is turned into `Ich fahre einen kranken wagen`, which allows `kranken` and `wagen` to be marked independently as different entities.|
+|German<br>`de-de`|1.0.2|Tokenizes words by splitting them on spaces.<br>If a user enters `Ich fahre einen krankenwagen` as an utterance, it remains a single token. Thus `krankenwagen` is marked as a single entity.|
+|Dutch<br>`nl-nl`|1.0.0|Tokenizes words by splitting them with a machine learning-based tokenizer that tries to break composite words into their components.<br>If a user enters `Ik ga naar de kleuterschool` as an utterance, it is turned into `Ik ga naar de kleuter school`, which allows `kleuter` and `school` to be marked independently as different entities.|
+|Dutch<br>`nl-nl`|1.0.1|Tokenizes words by splitting them on spaces.<br>If a user enters `Ik ga naar de kleuterschool` as an utterance, it remains a single token. Thus `kleuterschool` is marked as a single entity.|
++
+### Migrating between tokenizer versions
+<!--
+Your first choice is to change the tokenizer version in the app file, then import the version. This action changes how the utterances are tokenized but allows you to keep the same app ID.
+
+Tokenizer JSON for 1.0.0. Notice the property value for `tokenizerVersion`.
+
+```JSON
+{
+ "luis_schema_version": "3.2.0",
+ "versionId": "0.1",
+ "name": "german_app_1.0.0",
+ "desc": "",
+ "culture": "de-de",
+ "tokenizerVersion": "1.0.0",
+ "intents": [
+ {
+ "name": "i1"
+ },
+ {
+ "name": "None"
+ }
+ ],
+ "entities": [
+ {
+ "name": "Fahrzeug",
+ "roles": []
+ }
+ ],
+ "composites": [],
+ "closedLists": [],
+ "patternAnyEntities": [],
+ "regex_entities": [],
+ "prebuiltEntities": [],
+ "model_features": [],
+ "regex_features": [],
+ "patterns": [],
+ "utterances": [
+ {
+ "text": "ich fahre einen krankenwagen",
+ "intent": "i1",
+ "entities": [
+ {
+ "entity": "Fahrzeug",
+ "startPos": 23,
+ "endPos": 27
+ }
+ ]
+ }
+ ],
+ "settings": []
+}
+```
+
+Tokenizer JSON for version 1.0.1. Notice the property value for `tokenizerVersion`.
+
+```JSON
+{
+ "luis_schema_version": "3.2.0",
+ "versionId": "0.1",
+ "name": "german_app_1.0.1",
+ "desc": "",
+ "culture": "de-de",
+ "tokenizerVersion": "1.0.1",
+ "intents": [
+ {
+ "name": "i1"
+ },
+ {
+ "name": "None"
+ }
+ ],
+ "entities": [
+ {
+ "name": "Fahrzeug",
+ "roles": []
+ }
+ ],
+ "composites": [],
+ "closedLists": [],
+ "patternAnyEntities": [],
+ "regex_entities": [],
+ "prebuiltEntities": [],
+ "model_features": [],
+ "regex_features": [],
+ "patterns": [],
+ "utterances": [
+ {
+ "text": "ich fahre einen krankenwagen",
+ "intent": "i1",
+ "entities": [
+ {
+ "entity": "Fahrzeug",
+ "startPos": 16,
+ "endPos": 27
+ }
+ ]
+ }
+ ],
+ "settings": []
+}
+```
+-->
+
+Tokenization happens at the app level. There is no support for version-level tokenization.
+
+[Import the file as a new app](how-to/sign-in.md), instead of a version. This action means the new app has a different app ID but uses the tokenizer version specified in the file.
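+
+For reference, the following is a trimmed sketch of the top of an exported app file with the tokenizer version set before import; the app name and version are placeholders, and the rest of the export (intents, entities, utterances) is omitted:
+
+```JSON
+{
+  "luis_schema_version": "3.2.0",
+  "versionId": "0.1",
+  "name": "german_app",
+  "desc": "",
+  "culture": "de-de",
+  "tokenizerVersion": "1.0.2"
+}
+```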
ai-services Luis Limits https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/LUIS/luis-limits.md
+
+ Title: Limits - LUIS
+description: This article contains the known limits of Azure AI services Language Understanding (LUIS). LUIS has several limit areas: model limits control intents, entities, and features; quota limits are based on key type; and keyboard combinations control the LUIS website.
++++++ Last updated : 03/21/2022++
+# Limits for your LUIS model and keys
+++
+LUIS has several limit areas. The first is the [model limit](#model-limits), which controls intents, entities, and features in LUIS. The second area is [quota limits](#resource-usage-and-limits) based on resource type. A third area of limits is the [keyboard combination](#keyboard-controls) for controlling the LUIS website. A fourth area is the [world region mapping](luis-reference-regions.md) between the LUIS authoring website and the LUIS [endpoint](luis-glossary.md#endpoint) APIs.
+
+## Model limits
+
+If your app exceeds the LUIS model limits, consider using a [LUIS dispatch](how-to/improve-application.md) app or using a [LUIS container](luis-container-howto.md).
+
+| Area | Limit |
+|--|:--|
+| [App name][luis-get-started-create-app] | \*Default character max |
+| Applications | 500 applications per Azure authoring resource |
+| [Batch testing][batch-testing] | 10 datasets, 1000 utterances per dataset |
+| Explicit list | 50 per application |
+| External entities | no limits |
+| [Intents][intents] | 500 per application: 499 custom intents, and the required _None_ intent.<br>A [dispatch-based](https://aka.ms/dispatch-tool) application has a corresponding limit of 500 dispatch sources. |
+| [List entities](concepts/entities.md) | Parent: 50, child: 20,000 items. Canonical name is \*default character max. Synonym values have no length restriction. |
+| [machine-learning entities + roles](concepts/entities.md):<br> composite,<br>simple,<br>entity role | A limit of either 100 parent entities or 330 entities, whichever limit the user hits first. A role counts as an entity for the purpose of this limit. For example, a composite entity that contains a simple entity with 2 roles counts as: 1 composite + 1 simple + 2 roles = 4 of the 330 entities.<br>Subentities can be nested up to 5 levels, with a maximum of 20 children per level. |
+| Model as a feature | A maximum of 10 models can be used as a feature to a specific model. A maximum of 10 phrase lists can be used as a feature for a specific model. |
+| [Preview - Dynamic list entities](./luis-migration-api-v3.md) | 2 lists of \~1k per query prediction endpoint request |
+| [Patterns](concepts/patterns-features.md) | 500 patterns per application.<br>Maximum length of pattern is 400 characters.<br>3 Pattern.any entities per pattern<br>Maximum of 2 nested optional texts in pattern |
+| [Pattern.any](concepts/entities.md) | 100 per application, 3 pattern.any entities per pattern |
+| [Phrase list][phrase-list] | 500 phrase lists. 10 global phrase lists due to the model as a feature limit. Non-interchangeable phrase list has max of 5,000 phrases. Interchangeable phrase list has max of 50,000 phrases. Maximum number of total phrases per application of 500,000 phrases. |
+| [Prebuilt entities](./howto-add-prebuilt-models.md) | no limit |
+| [Regular expression entities](concepts/entities.md) | 20 entities<br>500 character max. per regular expression entity pattern |
+| [Roles](concepts/entities.md) | 300 roles per application. 10 roles per entity |
+| [Utterance][utterances] | 500 characters<br><br>If you have text longer than this character limit, you need to segment the utterance prior to input to LUIS and you will receive individual intent responses per segment. There are obvious breaks you can work with, such as punctuation marks and long pauses in speech. |
+| [Utterance examples][utterances] | 15,000 per application - there is no limit on the number of utterances per intent<br><br>If you need to train the application with more examples, use a [dispatch](https://github.com/Microsoft/botbuilder-tools/tree/master/packages/Dispatch) model approach. You train individual LUIS apps (known as child apps to the parent dispatch app) with one or more intents and then train a dispatch app that samples from each child LUIS app's utterances to direct the prediction request to the correct child app. |
+| [Versions](./concepts/application-design.md) | 100 versions per application |
+| [Version name][luis-how-to-manage-versions] | 128 characters |
+
+\*Default character max is 50 characters.
+
+## Name uniqueness
+
+Object names must be unique when compared to other objects of the same level.
+
+| Objects | Restrictions |
+|--|--|
+| Intent, entity | All intent and entity names must be unique in a version of an app. |
+| ML entity components | All machine-learning entity components (child entities) must be unique, within that entity for components at the same level. |
+| Features | All named features, such as phrase lists, must be unique within a version of an app. |
+| Entity roles | All roles on an entity or entity component must be unique when they are at the same entity level (parent, child, grandchild, etc.). |
+
+## Object naming
+
+Do not use the following characters in the following names.
+
+| Object | Exclude characters |
+|--|--|
+| Intent, entity, and role names | `:`, `$`, `&`, `%`, `*`, `(`, `)`, `+`, `?`, `~` |
+| Version name | `\`, `/`, `:`, `?`, `&`, `=`, `*`, `+`, `(`, `)`, `%`, `@`, `$`, `~`, `!`, `#` |
+
+## Resource usage and limits
+
+Language Understanding has separate resource types: one for authoring, and one for querying the prediction endpoint. To learn more about the differences between key types, see [Authoring and query prediction endpoint keys in LUIS](luis-how-to-azure-subscription.md).
+
+### Authoring resource limits
+
+Use the _kind_, `LUIS.Authoring`, when filtering resources in the Azure portal. LUIS allows 500 applications per Azure authoring resource.
+
+| Authoring resource | Authoring TPS |
+|--|--|
+| F0 - Free tier | 1 million/month, 5/second |
+
+* TPS = Transactions per second
+
+[Learn more about pricing.][pricing]
+
+### Query prediction resource limits
+
+Use the _kind_, `LUIS`, when filtering resources in the Azure portal. The LUIS query prediction endpoint resource, used at runtime, is only valid for endpoint queries.
+
+| Query Prediction resource | Query TPS |
+|--|--|
+| F0 - Free tier | 10 thousand/month, 5/second |
+| S0 - Standard tier | 50/second |
+
+### Sentiment analysis
+
+[Sentiment analysis integration](how-to/publish.md), which provides sentiment information, is provided without requiring another Azure resource.
+
+### Speech integration
+
+[Speech integration](../speech-service/how-to-recognize-intents-from-speech-csharp.md) provides 1 thousand endpoint requests per unit cost.
+
+[Learn more about pricing.][pricing]
+
+## Keyboard controls
+
+| Keyboard input | Description |
+|--|--|
+| Control+E | switches between tokens and entities on utterances list |
+
+## Website sign-in time period
+
+Your sign-in access is for **60 minutes**. After this time period, you'll get an error and need to sign in again.
+
+[BATCH-TESTING]: ./how-to/train-test.md
+[INTENTS]: ./concepts/intents.md
+[LUIS-GET-STARTED-CREATE-APP]: ./luis-get-started-create-app.md
+[LUIS-HOW-TO-MANAGE-VERSIONS]: ./luis-how-to-manage-versions.md
+[PHRASE-LIST]: ./concepts/patterns-features.md
+[PRICING]: https://azure.microsoft.com/pricing/details/cognitive-services/language-understanding-intelligent-services/
+[UTTERANCES]: ./concepts/utterances.md
ai-services Luis Migration Api V1 To V2 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/LUIS/luis-migration-api-v1-to-v2.md
+
+ Title: v1 to v2 API Migration
+
+description: The version 1 endpoint and authoring Language Understanding APIs are deprecated. Use this guide to understand how to migrate to version 2 endpoint and authoring APIs.
++++++++ Last updated : 04/02/2019++
+# API v1 to v2 Migration guide for LUIS apps
++
+The version 1 [endpoint](https://aka.ms/v1-endpoint-api-docs) and [authoring](https://aka.ms/v1-authoring-api-docs) APIs are deprecated. Use this guide to understand how to migrate to version 2 [endpoint](https://go.microsoft.com/fwlink/?linkid=2092356) and [authoring](https://go.microsoft.com/fwlink/?linkid=2092087) APIs.
+
+## New Azure regions
+LUIS has new [regions](./luis-reference-regions.md) provided for the LUIS APIs. LUIS provides a different portal for each region group. The application must be authored in the same region you expect to query. Applications do not automatically migrate regions. Export the app from one region and then import it into another to make it available in the new region.
+
+## Authoring route changes
+The authoring API route changed from using the **prog** route to using the **api** route.
++
+| version | route |
+|--|--|
+|1|/luis/v1.0/**prog**/apps|
+|2|/luis/**api**/v2.0/apps|
++
+## Endpoint route changes
+The endpoint API has new query string parameters as well as a different response. If the verbose flag is true, all intents, regardless of score, are returned in an array named intents, in addition to the topScoringIntent.
+
+| version | GET route |
+|--|--|
+|1|/luis/v1/application?ID={appId}&q={q}|
+|2|/luis/v2.0/apps/{appId}?q={q}[&timezoneOffset][&verbose][&spellCheck][&staging][&bing-spell-check-subscription-key][&log]|
++
+v1 endpoint success response:
+```json
+{
+ "odata.metadata":"https://dialogice.cloudapp.net/odata/$metadata#domain","value":[
+ {
+ "id":"bccb84ee-4bd6-4460-a340-0595b12db294","q":"turn on the camera","response":"[{\"intent\":\"OpenCamera\",\"score\":0.976928055},{\"intent\":\"None\",\"score\":0.0230718572}]"
+ }
+ ]
+}
+```
+
+v2 endpoint success response:
+```json
+{
+ "query": "forward to frank 30 dollars through HSBC",
+ "topScoringIntent": {
+ "intent": "give",
+ "score": 0.3964121
+ },
+ "entities": [
+ {
+ "entity": "30",
+ "type": "builtin.number",
+ "startIndex": 17,
+ "endIndex": 18,
+ "resolution": {
+ "value": "30"
+ }
+ },
+ {
+ "entity": "frank",
+ "type": "frank",
+ "startIndex": 11,
+ "endIndex": 15,
+ "score": 0.935219169
+ },
+ {
+ "entity": "30 dollars",
+ "type": "builtin.currency",
+ "startIndex": 17,
+ "endIndex": 26,
+ "resolution": {
+ "unit": "Dollar",
+ "value": "30"
+ }
+ },
+ {
+ "entity": "hsbc",
+ "type": "Bank",
+ "startIndex": 36,
+ "endIndex": 39,
+ "resolution": {
+ "values": [
+ "BankeName"
+ ]
+ }
+ }
+ ]
+}
+```
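+
+When the `verbose` flag is set to true, the v2 response also includes the `intents` array alongside `topScoringIntent`. A trimmed sketch with illustrative scores:
+
+```json
+{
+    "query": "forward to frank 30 dollars through HSBC",
+    "topScoringIntent": {
+        "intent": "give",
+        "score": 0.3964121
+    },
+    "intents": [
+        { "intent": "give", "score": 0.3964121 },
+        { "intent": "None", "score": 0.1049 }
+    ],
+    "entities": []
+}
+```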
+
+## Key management no longer in API
+The subscription endpoint key APIs are deprecated, returning 410 GONE.
+
+| version | route |
+|--|--|
+|1|/luis/v1.0/prog/subscriptions|
+|1|/luis/v1.0/prog/subscriptions/{subscriptionKey}|
+
+Azure [endpoint keys](luis-how-to-azure-subscription.md) are generated in the Azure portal. You assign the key to a LUIS app on the **[Publish](luis-how-to-azure-subscription.md)** page. You do not need to know the actual key value. LUIS uses the subscription name to make the assignment.
+
+## New versioning route
+The v2 model is now contained in a [version](luis-how-to-manage-versions.md). A version name is 10 characters in the route. The default version is "0.1".
+
+| version | route |
+|--|--|
+|1|/luis/v1.0/**prog**/apps/{appId}/entities|
+|2|/luis/**api**/v2.0/apps/{appId}/**versions**/{versionId}/entities|
+
+## Metadata renamed
+Several APIs that return LUIS metadata have new names.
+
+| v1 route name | v2 route name |
+|--|--|
+|PersonalAssistantApps |assistants|
+|applicationcultures|cultures|
+|applicationdomains|domains|
+|applicationusagescenarios|usagescenarios|
++
+## "Sample" renamed to "suggest"
+LUIS suggests utterances from existing [endpoint utterances](how-to/improve-application.md) that may enhance the model. In the previous version, this was named **sample**. In the new version, the name is changed from sample to **suggest**. This is called **[Review endpoint utterances](how-to/improve-application.md)** in the LUIS website.
+
+| version | route |
+|--|--|
+|1|/luis/v1.0/**prog**/apps/{appId}/entities/{entityId}/**sample**|
+|1|/luis/v1.0/**prog**/apps/{appId}/intents/{intentId}/**sample**|
+|2|/luis/**api**/v2.0/apps/{appId}/**versions**/{versionId}/entities/{entityId}/**suggest**|
+|2|/luis/**api**/v2.0/apps/{appId}/**versions**/{versionId}/intents/{intentId}/**suggest**|
++
+## Create app from prebuilt domains
+[Prebuilt domains](./howto-add-prebuilt-models.md) provide a predefined domain model. Prebuilt domains allow you to quickly develop your LUIS application for common domains. This API allows you to create a new app based on a prebuilt domain. The response is the new appID.
+
+|v2 route|verb|
+|--|--|
+|/luis/api/v2.0/apps/customprebuiltdomains |get, post|
+|/luis/api/v2.0/apps/customprebuiltdomains/{culture} |get|
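+
+As a sketch, the POST request body is assumed to name the prebuilt domain and culture, along these lines (the property names are an assumption; confirm against the authoring API reference, and the domain value is only an example):
+
+```json
+{
+    "domainName": "Calendar",
+    "culture": "en-us"
+}
+```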
+
+## Importing 1.x app into 2.x
+The exported 1.x app's JSON has some areas that you need to change before importing into [LUIS][LUIS] 2.0.
+
+### Prebuilt entities
+The [prebuilt entities](./howto-add-prebuilt-models.md) have changed. Make sure you are using the V2 prebuilt entities. This includes using [datetimeV2](luis-reference-prebuilt-datetimev2.md), instead of datetime.
+
+### Actions
+The `actions` property is no longer valid. It should be an empty array.
+
+### Labeled utterances
+V1 allowed labeled utterances to include spaces at the beginning or end of the word or phrase. Remove these spaces before importing, as shown in the sketch that follows.
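+
+Taken together, the three changes above might leave the affected fragments of an exported 1.x app looking like the following sketch before import. The intent name and utterance text are placeholders, and the exact export schema may differ by schema version:
+
+```json
+{
+    "prebuiltEntities": [
+        {
+            "name": "datetimeV2",
+            "roles": []
+        }
+    ],
+    "actions": [],
+    "utterances": [
+        {
+            "text": "book a flight tomorrow",
+            "intent": "BookFlight",
+            "entities": []
+        }
+    ]
+}
+```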
+
+## Common reasons for HTTP response status codes
+See [LUIS API response codes](luis-reference-response-codes.md).
+
+## Next steps
+
+Use the v2 API documentation to update existing REST calls to LUIS [endpoint](https://go.microsoft.com/fwlink/?linkid=2092356) and [authoring](https://go.microsoft.com/fwlink/?linkid=2092087) APIs.
+
+[LUIS]: ./luis-reference-regions.md
ai-services Luis Migration Api V3 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/LUIS/luis-migration-api-v3.md
+
+ Title: Prediction endpoint changes in the V3 API
+description: The query prediction endpoint V3 APIs have changed. Use this guide to understand how to migrate to version 3 endpoint APIs.
++
+ms.
+++ Last updated : 05/28/2021++++
+# Prediction endpoint changes for V3
+++
+The query prediction endpoint V3 APIs have changed. Use this guide to understand how to migrate to version 3 endpoint APIs. There is currently no date by which migration needs to be completed.
+
+**Generally available status** - this V3 API includes significant JSON request and response changes from the V2 API.
+
+The V3 API provides the following new features:
+
+* [External entities](schema-change-prediction-runtime.md#external-entities-passed-in-at-prediction-time)
+* [Dynamic lists](schema-change-prediction-runtime.md#dynamic-lists-passed-in-at-prediction-time)
+* [Prebuilt entity JSON changes](#prebuilt-entity-changes)
+
+The prediction endpoint [request](#request-changes) and [response](#response-changes) have significant changes to support the new features listed above, including the following:
+
+* [Response object changes](#top-level-json-changes)
+* [Entity role name references instead of entity name](#entity-role-name-instead-of-entity-name)
+* [Properties to mark entities in utterances](#marking-placement-of-entities-in-utterances)
+
+[Reference documentation](https://aka.ms/luis-api-v3) is available for V3.
+
+## V3 changes from preview to GA
+
+V3 made the following changes as part of the move to GA:
+
+* The following prebuilt entities have different JSON responses:
+ * [OrdinalV1](luis-reference-prebuilt-ordinal.md)
+ * [GeographyV2](luis-reference-prebuilt-geographyv2.md)
+ * [DatetimeV2](luis-reference-prebuilt-datetimev2.md)
+ * Measurable unit key name from `units` to `unit`
+
+* Request body JSON change:
+ * from `preferExternalEntities` to `preferExternalEntities`
+ * optional `score` parameter for external entities
+
+* Response body JSON changes:
+ * `normalizedQuery` removed
+
+## Suggested adoption strategy
+
+If you use Bot Framework, Bing Spell Check V7, or want to migrate your LUIS app authoring only, continue to use the V2 endpoint.
+
+If you know that none of your client applications or integrations (Bot Framework and Bing Spell Check V7) are impacted, and you are comfortable migrating your LUIS app authoring and your prediction endpoint at the same time, begin using the V3 prediction endpoint. The V2 prediction endpoint will still be available and is a good fall-back strategy.
+
+For information on using the Bing Spell Check API, see [How to correct misspelled words](luis-tutorial-bing-spellcheck.md).
++
+## Not supported
+
+### Bot Framework and Azure AI Bot Service client applications
+
+Continue to use the V2 API prediction endpoint until V4.7 of the Bot Framework is released.
++
+## Endpoint URL changes
+
+### Changes by slot name and version name
+
+The [format of the V3 endpoint HTTP](developer-reference-resource.md#rest-endpoints) call has changed.
+
+If you want to query by version, you first need to [publish via API](https://westus.dev.cognitive.microsoft.com/docs/services/5890b47c39e2bb17b84a55ff/operations/5890b47c39e2bb052c5b9c3b) with `"directVersionPublish":true`. Query the endpoint referencing the version ID instead of the slot name.
+
+|Valid values for `SLOT-NAME`|
+|--|
+|`production`|
+|`staging`|
+
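+If you publish a version directly as described above, a sketch of the publish request body might look like the following; `versionId` is a placeholder, and the `isStaging` property is assumed to carry over from the existing publish body:
+
+```JSON
+{
+    "versionId": "0.1",
+    "isStaging": false,
+    "directVersionPublish": true
+}
+```
+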
+## Request changes
+
+### Query string changes
++
+### V3 POST body
+
+```JSON
+{
+ "query":"your utterance here",
+ "options":{
+ "datetimeReference": "2019-05-05T12:00:00",
+ "preferExternalEntities": true
+ },
+ "externalEntities":[],
+ "dynamicLists":[]
+}
+```
+
+|Property|Type|Version|Default|Purpose|
+|--|--|--|--|--|
+|`dynamicLists`|array|V3 only|Not required.|[Dynamic lists](schema-change-prediction-runtime.md#dynamic-lists-passed-in-at-prediction-time) allow you to extend an existing trained and published list entity, already in the LUIS app.|
+|`externalEntities`|array|V3 only|Not required.|[External entities](schema-change-prediction-runtime.md#external-entities-passed-in-at-prediction-time) give your LUIS app the ability to identify and label entities during runtime, which can be used as features to existing entities. |
+|`options.datetimeReference`|string|V3 only|No default|Used to determine [datetimeV2 offset](luis-concept-data-alteration.md#change-time-zone-of-prebuilt-datetimev2-entity). The format for the datetimeReference is [ISO 8601](https://en.wikipedia.org/wiki/ISO_8601).|
+|`options.preferExternalEntities`|boolean|V3 only|false|Specifies if user's [external entity (with same name as existing entity)](schema-change-prediction-runtime.md#override-existing-model-predictions) is used or the existing entity in the model is used for prediction. |
+|`query`|string|V3 only|Required.|**In V2**, the utterance to be predicted is in the `q` parameter. <br><br>**In V3**, the functionality is passed in the `query` parameter.|
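+
+The following sketch shows a request with these sections populated. The external entity and dynamic list property names follow the linked concept articles but should be treated as assumptions and confirmed against the V3 reference; the entity names and values are placeholders:
+
+```JSON
+{
+    "query": "book 2 tickets to paris",
+    "options": {
+        "preferExternalEntities": true
+    },
+    "externalEntities": [
+        {
+            "entityName": "Destination",
+            "startIndex": 18,
+            "entityLength": 5,
+            "resolution": { "city": "Paris" },
+            "score": 0.9
+        }
+    ],
+    "dynamicLists": [
+        {
+            "listEntityName": "Cities",
+            "requestLists": [
+                {
+                    "name": "Paris",
+                    "canonicalForm": "Paris",
+                    "synonyms": [ "paris", "cdg" ]
+                }
+            ]
+        }
+    ]
+}
+```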
+
+## Response changes
+
+The query response JSON changed to allow greater programmatic access to the data used most frequently.
+
+### Top level JSON changes
+++
+The top-level JSON properties for V2, when `verbose` is set to true (which returns all intents and their scores in the `intents` property), are:
+
+```JSON
+{
+ "query":"this is your utterance you want predicted",
+ "topScoringIntent":{},
+ "intents":[],
+ "entities":[],
+ "compositeEntities":[]
+}
+```
+
+The top JSON properties for V3 are:
+
+```JSON
+{
+ "query": "this is your utterance you want predicted",
+ "prediction":{
+ "topIntent": "intent-name-1",
+ "intents": {},
+ "entities":{}
+ }
+}
+```
+
+The `intents` object is an unordered list. Do not assume the first child in the `intents` corresponds to the `topIntent`. Instead, use the `topIntent` value to find the score:
+
+```nodejs
+const topIntentName = response.prediction.topIntent;
+const score = response.prediction.intents[topIntentName].score;
+```
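+
+For context, the `intents` object in the V3 response maps each intent name to an object that contains its score (values here are illustrative):
+
+```JSON
+"intents": {
+    "intent-name-1": { "score": 0.98 },
+    "None": { "score": 0.02 }
+}
+```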
+
+The response JSON schema changes allow for:
+
+* Clear distinction between original utterance, `query`, and returned prediction, `prediction`.
+* Easier programmatic access to predicted data. Instead of enumerating through an array in V2, you can access values by **name** for both intents and entities. For predicted entity roles, the role name is returned because it is unique across the entire app.
+* Data types, if determined, are respected. Numerics are no longer returned as strings.
+* Distinction between first priority prediction information and additional metadata, returned in the `$instance` object.
+
+### Entity response changes
+
+#### Marking placement of entities in utterances
+
+**In V2**, an entity was marked in an utterance with the `startIndex` and `endIndex`.
+
+**In V3**, the entity is marked with `startIndex` and `entityLength`.
+
+#### Access `$instance` for entity metadata
+
+If you need entity metadata, the query string needs to use the `verbose=true` flag and the response contains the metadata in the `$instance` object. Examples are shown in the JSON responses in the following sections.
+
+#### Each predicted entity is represented as an array
+
+The `prediction.entities.<entity-name>` object contains an array because each entity can be predicted more than once in the utterance.
+
+<a name="prebuilt-entities-with-new-json"></a>
+
+#### Prebuilt entity changes
+
+The V3 response object includes changes to prebuilt entities. Review [specific prebuilt entities](luis-reference-prebuilt-entities.md) to learn more.
+
+#### List entity prediction changes
+
+The JSON for a list entity prediction has changed to be an array of arrays:
+
+```JSON
+"entities":{
+ "my_list_entity":[
+ ["canonical-form-1","canonical-form-2"],
+ ["canonical-form-2"]
+ ]
+}
+```
+Each interior array corresponds to text inside the utterance. The interior object is an array because the same text can appear in more than one sublist of a list entity.
+
+When mapping between the `entities` object to the `$instance` object, the order of objects is preserved for the list entity predictions.
+
+```nodejs
+const item = 0; // order preserved, use same enumeration for both
+const predictedCanonicalForm = entities.my_list_entity[item];
+const associatedMetadata = entities.$instance.my_list_entity[item];
+```
+
+#### Entity role name instead of entity name
+
+In V2, the `entities` array returned all the predicted entities with the entity name being the unique identifier. In V3, if the entity uses roles and the prediction is for an entity role, the primary identifier is the role name. This is possible because entity role names must be unique across the entire app including other model (intent, entity) names.
+
+In the following example: consider an utterance that includes the text, `Yellow Bird Lane`. This text is predicted as a custom `Location` entity's role of `Destination`.
+
+|Utterance text|Entity name|Role name|
+|--|--|--|
+|`Yellow Bird Lane`|`Location`|`Destination`|
+
+In V2, the entity is identified by the _entity name_ with the role as a property of the object:
+
+```JSON
+"entities":[
+ {
+ "entity": "Yellow Bird Lane",
+ "type": "Location",
+ "startIndex": 13,
+ "endIndex": 20,
+ "score": 0.786378264,
+ "role": "Destination"
+ }
+]
+```
+
+In V3, the entity is referenced by the _entity role_, if the prediction is for the role:
+
+```JSON
+"entities":{
+ "Destination":[
+ "Yellow Bird Lane"
+ ]
+}
+```
+
+In V3, the same result with the `verbose` flag to return entity metadata:
+
+```JSON
+"entities":{
+ "Destination":[
+ "Yellow Bird Lane"
+ ],
+ "$instance":{
+ "Destination": [
+ {
+ "role": "Destination",
+ "type": "Location",
+ "text": "Yellow Bird Lane",
+ "startIndex": 25,
+ "length":16,
+ "score": 0.9837309,
+ "modelTypeId": 1,
+ "modelType": "Entity Extractor"
+ }
+ ]
+ }
+}
+```
+
+<a name="external-entities-passed-in-at-prediction-time"></a>
+<a name="override-existing-model-predictions"></a>
+
+## Extend the app at prediction time
+
+Learn [concepts](schema-change-prediction-runtime.md) about how to extend the app at prediction runtime.
++
+## Next steps
+
+Use the V3 API documentation to update existing REST calls to LUIS [endpoint](https://westcentralus.dev.cognitive.microsoft.com/docs/services/luis-endpoint-api-v3-0/operations/5cb0a9459a1fe8fa44c28dd8) APIs.
ai-services Luis Migration Authoring Entities https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/LUIS/luis-migration-authoring-entities.md
+
+ Title: Migrate to V3 machine-learning entity
+description: The V3 authoring provides one new entity type, the machine-learning entity, along with the ability to add relationships to the machine-learning entity and other entities or features of the application.
++++++ Last updated : 05/28/2021++
+# Migrate to V3 Authoring entity
+++
+The V3 authoring provides one new entity type, the machine-learning entity, along with the ability to add relationships to the machine-learning entity and other entities or features of the application. There is currently no date by which migration needs to be completed.
+
+## Entities are decomposable in V3
+
+Entities created with the V3 authoring APIs, either using the [APIs](https://westeurope.dev.cognitive.microsoft.com/docs/services/luis-programmatic-apis-v3-0-preview) or with the portal, allow you to build a layered entity model with a parent and children. The parent is known as the **machine-learning entity** and the children are known as **subentities** of the machine-learning entity.
+
+Each subentity is also a machine-learning entity but with the added configuration options of features.
+
+* **Required features** are rules that guarantee an entity is extracted when it matches a feature. The rule is defined by adding one of the following as a required feature to the model:
+ * [Prebuilt entity](luis-reference-prebuilt-entities.md)
+ * [Regular expression entity](reference-entity-regular-expression.md)
+ * [List entity](reference-entity-list.md).
+
+## How do these new relationships compare to V2 authoring
+
+V2 authoring provided hierarchical and composite entities along with roles and features to accomplish this same task. Because the entities, features, and roles were not explicitly related to each other, it was difficult to understand how LUIS implied the relationships during prediction.
+
+With V3, the relationship is explicit and designed by the app authors. This allows you, as the app author, to:
+
+* Visually see how LUIS is predicting these relationships, in the example utterances
+* Test for these relationships either with the [interactive test pane](how-to/train-test.md) or at the endpoint
+* Use these relationships in the client application, via a well-structured, named, nested [.json object](reference-entity-machine-learned-entity.md)
+
+## Planning
+
+When you migrate, consider the following in your migration plan:
+
+* Back up your LUIS app, and perform the migration on a separate app. Having a V2 and V3 app available at the same time allows you to validate the changes required and the impact on the prediction results.
+* Capture current prediction success metrics
+* Capture current dashboard information as a snapshot of app status
+* Review existing intents, entities, phrase lists, patterns, and batch tests
+* The following elements can be migrated **without change**:
+ * Intents
+ * Entities
+ * Regular expression entity
+ * List entity
+ * Features
+ * Phrase list
+* The following elements need to be migrated **with changes**:
+ * Entities
+ * Hierarchical entity
+ * Composite entity
+ * Roles - roles can only be applied to a machine-learning (parent) entity. Roles can't be applied to subentities
+ * Batch tests and patterns that use the hierarchical and composite entities
+
+When you design your migration plan, leave time to review the final machine-learning entities after all hierarchical and composite entities have been migrated. While a straight migration will work, after you make the change and review your batch test results and prediction JSON, the more unified JSON may lead you to make changes so the final information delivered to the client-side app is organized differently. This is similar to code refactoring and should be treated with the same review process your organization has in place.
+
+If you don't have batch tests in place for your V2 model, and don't migrate the batch tests to the V3 model as part of the migration, you won't be able to validate how the migration impacts the endpoint prediction results.
+
+## Migrating from V2 entities
+
+As you begin to move to the V3 authoring model, you should consider how to move to the machine-learning entity, and its subentities and features.
+
+The following table notes which entities need to migrate from a V2 to a V3 entity design.
+
+|V2 authoring entity type|V3 authoring entity type|Example|
+|--|--|--|
+|Composite entity|Machine learned entity|[learn more](#migrate-v2-composite-entity)|
+|Hierarchical entity|machine-learning entity's role|[learn more](#migrate-v2-hierarchical-entity)|
+
+## Migrate V2 Composite entity
+
+Each child of the V2 composite should be represented with a subentity of the V3 machine-learning entity. If the composite child is a prebuilt, regular expression, or a list entity, this should be applied as a required feature on the subentity.
+
+Considerations when planning to migrate a composite entity to a machine-learning entity:
+* Child entities can't be used in patterns
+* Child entities are no longer shared
+* Child entities need to be labeled if they used to be non-machine-learned
+
+### Existing features
+
+Any phrase list used to boost words in the composite entity should be applied as a feature to either the machine-learning (parent) entity, the subentity (child) entity, or the intent (if the phrase list only applies to one intent). Plan to add the feature to the entity where it should boost most significantly. Do not add the feature generically to the machine-learning (parent) entity, if it will most significantly boost the prediction of a subentity (child).
+
+### New features
+
+In V3 authoring, add a planning step to evaluate entities as possible features for all the entities and intents.
+
+### Example entity
+
+This entity is an example only. Your own entity migration may require other considerations.
+
+Consider a V2 composite for modifying a pizza `order` that uses:
+* prebuilt datetimeV2 for delivery time
+* phrase list to boost certain words such as pizza, pie, crust, and topping
+* list entity to detect toppings such as mushrooms, olives, pepperoni.
+
+An example utterance for this entity is:
+
+`Change the toppings on my pie to mushrooms and delivery it 30 minutes later`
+
+The following table demonstrates the migration:
+
+|V2 models|V3 models|
+|--|--|
+|Parent - Composite entity named `Order`|Parent - machine-learning entity named `Order`|
+|Child - Prebuilt datetimeV2|* Migrate prebuilt entity to new app.<br>* Add required feature on parent for prebuilt datetimeV2.|
+|Child - list entity for toppings|* Migrate list entity to new app.<br>* Then add a required feature on the parent for the list entity.|
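+
+For the datetimeV2 child, the required feature can then be added with the **Add entity feature relation** body shown later in this article, as a sketch (the feature name must match the prebuilt entity's model name in your app):
+
+```json
+{
+    "featureName": "datetimeV2",
+    "isRequired": true
+}
+```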
++
+## Migrate V2 Hierarchical entity
+
+In V2 authoring, a hierarchical entity was provided before roles existed in LUIS. Both served the same purpose of extracting entities based on context usage. If you have hierarchical entities, you can think of them as simple entities with roles.
+
+In V3 authoring:
+* A role can be applied on the machine-learning (parent) entity.
+* A role can't be applied to any subentities.
+
+This entity is an example only. Your own entity migration may require other considerations.
+
+Consider a V2 hierarchical entity for modifying a pizza `order`:
+* where each child determines either an original topping or the final topping
+
+An example utterance for this entity is:
+
+`Change the topping from mushrooms to olives`
+
+The following table demonstrates the migration:
+
+|V2 models|V3 models|
+|--|--|
+|Parent - Hierarchical entity named `Order`|Parent - machine-learning entity named `Order`|
+|Child - Hierarchical entity with original and final pizza topping|* Add role to `Order` for each topping.|
+
+## API change constraint replaced with required feature
+
+This change was made in May 2020 at the //Build conference and only applies to the v3 authoring APIs where an app is using a constrained feature. If you are migrating from v2 authoring to v3 authoring, or have not used v3 constrained features, skip this section.
+
+**Functionality** - ability to require an existing entity as a feature to another model and only extract that model if the entity is detected. The functionality has not changed but the API and terminology have changed.
+
+|Previous terminology|New terminology|
+|--|--|
+|`constrained feature`<br>`constraint`<br>`instanceOf`|`required feature`<br>`isRequired`|
+
+#### Automatic migration
+
+Starting **June 19 2020**, you won't be allowed to create constraints programmatically using the previous authoring API that exposed this functionality.
+
+All existing constraint features will be automatically migrated to the required feature flag. No programmatic changes are required to your prediction API, and there is no resulting change in prediction accuracy.
+
+#### LUIS portal changes
+
+The LUIS preview portal referenced this functionality as a **constraint**. The current LUIS portal designates this functionality as a **required feature**.
+
+#### Previous authoring API
+
+This functionality was applied in the preview authoring **[Create Entity Child API](https://westus.dev.cognitive.microsoft.com/docs/services/luis-programmatic-apis-v3-0-preview/operations/5d86cf3c6a25a45529767d77)** as the part of an entity's definition, using the `instanceOf` property of an entity's child:
+
+```json
+{
+ "name" : "dayOfWeek",
+ "instanceOf": "datetimeV2",
+ "children": [
+ {
+ "name": "dayNumber",
+ "instanceOf": "number",
+ "children": []
+ }
+ ]
+}
+```
+
+#### New authoring API
+
+This functionality is now applied with the **[Add entity feature relation API](https://westus.dev.cognitive.microsoft.com/docs/services/luis-programmatic-apis-v3-0-preview/operations/5d9dc1781e38aaec1c375f26)** using the `featureName` and `isRequired` properties. The value of the `featureName` property is the name of the model.
+
+```json
+{
+ "featureName": "YOUR-MODEL-NAME-HERE",
+ "isRequired" : true
+}
+```
++
+## Next steps
+
+* [Developer resources](developer-reference-resource.md)
ai-services Luis Migration Authoring https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/LUIS/luis-migration-authoring.md
+
+ Title: Migrate to an Azure resource authoring key
+
+description: This article describes how to migrate Language Understanding (LUIS) authoring authentication from an email account to an Azure resource.
++++++++ Last updated : 03/21/2022++
+# Migrate to an Azure resource authoring key
+++
+> [!IMPORTANT]
+> As of December 3rd 2020, existing LUIS users must have completed the migration process to continue authoring LUIS applications.
+
+Language Understanding (LUIS) authoring authentication has changed from an email account to an Azure resource. Use this article to learn how to migrate your account, if you haven't migrated yet.
++
+## What is migration?
+
+Migration is the process of changing authoring authentication from an email account to an Azure resource. Your account will be linked to an Azure subscription and an Azure authoring resource after you migrate.
+
+Migration has to be done from the [LUIS portal](https://www.luis.ai). If you create the authoring keys by using the LUIS CLI, for example, you'll need to complete the migration process in the LUIS portal. You can still have co-authors on your applications after migration, but these will be added on the Azure resource level instead of the application level. Migrating your account can't be reversed.
+
+> [!Note]
+> * If you need to create a prediction runtime resource, there's [a separate process](luis-how-to-azure-subscription.md#create-luis-resources) to create it.
+> * See the [migration notes](#migration-notes) section below for information on how your applications and contributors will be affected.
+> * Authoring your LUIS app is free, as indicated by the F0 tier. Learn [more about pricing tiers](luis-limits.md#resource-usage-and-limits).
+
+## Migration prerequisites
+
+* A valid Azure subscription. Ask your tenant admin to add you on the subscription, or [sign up for a free one](https://azure.microsoft.com/free/cognitive-services).
+* A LUIS Azure authoring resource from the LUIS portal or from the [Azure portal](https://portal.azure.com/#create/Microsoft.CognitiveServicesLUISAllInOne).
+ * Creating an authoring resource from the LUIS portal is part of the migration process described in the next section.
+* If you're a collaborator on applications, applications won't automatically migrate. You will be prompted to export these apps while going through the migration flow. You can also use the [export API](https://westus.dev.cognitive.microsoft.com/docs/services/5890b47c39e2bb17b84a55ff/operations/5890b47c39e2bb052c5b9c40). You can import the app back into LUIS after migration. The import process creates a new app with a new app ID, for which you're the owner.
+* If you're the owner of the application, you won't need to export your apps because they'll migrate automatically. An email template with a list of all collaborators for each application is provided, so they can be notified of the migration process.
+
+## Migration steps
+
+1. When you sign-in to the [LUIS portal](https://www.luis.ai), an Azure migration window will open with the steps for migration. If you dismiss it, you won't be able to proceed with authoring your LUIS applications, and the only action displayed will be to continue with the migration.
+
+ > [!div class="mx-imgBorder"]
+ > ![Migration Window Intro](./media/migrate-authoring-key/notify-azure-migration.png)
+
+2. If you have collaborators on any of your apps, you will see a list of application names owned by you, along with the authoring region and collaborator emails on each application. We recommend sending your collaborators an email notifying them about the migration by clicking on the **send** symbol button on the left of the application name.
+A `*` symbol will appear next to the application name if a collaborator has a prediction resource assigned to your application. After migration, these apps will still have these prediction resources assigned to them even though the collaborators will not have access to author your applications. However, this assignment will be broken if the owner of the prediction resource [regenerates the keys](./luis-how-to-azure-subscription.md#regenerate-an-azure-key) from the Azure portal.
+
+ > [!div class="mx-imgBorder"]
+ > ![Notify collaborators](./media/migrate-authoring-key/notify-azure-migration-collabs.png)
++
+ For each collaborator and app, the default email application opens with a lightly formatted email. You can edit the email before sending it. The email template includes the exact app ID and app name.
+
+ ```html
+ Dear Sir/Madam,
+
+ I will be migrating my LUIS account to Azure. Consequently, you will no longer have access to the following app:
+
+ App Id: <app-ID-omitted>
+ App name: Human Resources
+
+ Thank you
+ ```
+ > [!Note]
+ > After you migrate your account to Azure, your apps will no longer be available to collaborators.
+
+3. If you're a collaborator on any apps, a list of application names shared with you is shown along with the authoring region and owner emails on each application. It is recommended that you export a copy of the apps by selecting the export button on the left of the application name. You can import these apps back after you migrate, because they won't be automatically migrated with you.
+A `*` symbol will appear next to the application name if you have a prediction resource assigned to an application. After migration, your prediction resource will still be assigned to these applications even though you will no longer have access to author these apps. If you want to break the assignment between your prediction resource and the application, you will need to go to Azure portal and [regenerate the keys](./luis-how-to-azure-subscription.md#regenerate-an-azure-key).
+
+ > [!div class="mx-imgBorder"]
+ > ![Export your applications.](./media/migrate-authoring-key/migration-export-apps.png)
++
+4. In the window for migrating regions, you will be asked to migrate your applications to an Azure resource in the same region they were authored in. LUIS has three authoring regions [and portals](./luis-reference-regions.md#luis-authoring-regions). The window will show the regions where your owned applications were authored. The displayed migration regions may be different depending on the regional portal you use, and apps you've authored.
+
+ > [!div class="mx-imgBorder"]
+ > ![Multi region migration.](./media/migrate-authoring-key/migration-regional-flow.png)
+
+5. For each region, choose to create a new LUIS authoring resource, or to migrate to an existing one using the buttons.
+
+ > [!div class="mx-imgBorder"]
+ > ![choose to create or existing authoring resource](./media/migrate-authoring-key/migration-multiregional-resource.png)
+
+ Provide the following information:
+
+ * **Tenant Name**: The tenant that your Azure subscription is associated with. By default this is set to the tenant you're currently using. You can switch tenants by closing this window and selecting the avatar in the top right of the screen, containing your initials. Select **Migrate to Azure** to re-open the window.
+ * **Azure Subscription Name**: The subscription that will be associated with the resource. If you have more than one subscription that belongs to your tenant, select the one you want from the drop-down list.
+    * **Authoring Resource Name**: A custom name that you choose. It's used as part of the URL for your authoring and prediction endpoint queries. If you are creating a new authoring resource, note that the resource name can only include alphanumeric characters and `-`, and can't start or end with `-`. If any other symbols are included in the name, resource creation and migration will fail.
+ * **Azure Resource Group Name**: A custom resource group name that you choose from the drop-down list. Resource groups allow you to group Azure resources for access and management. If you currently do not have a resource group in your subscription, you will not be allowed to create one in the LUIS portal. Go to [Azure portal](https://portal.azure.com/#create/Microsoft.ResourceGroup) to create one then go to LUIS to continue the sign-in process.
+
+6. After you have successfully migrated in all regions, select finish. You will now have access to your applications. You can continue authoring and maintaining all your applications in all regions within the portal.
+
+## Migration notes
+
+* Before migration, coauthors are known as _collaborators_ on the LUIS app level. After migration, the Azure role of _contributor_ is used for the same functionality on the Azure resource level.
+* If you have signed-in to more than one [LUIS regional portal](./luis-reference-regions.md#luis-authoring-regions), you will be asked to migrate in multiple regions at once.
+* Applications will automatically migrate with you if you're the owner of the application. Applications will not migrate with you if you're a collaborator on the application. However, collaborators will be prompted to export the apps they need.
+* Application owners can't choose a subset of apps to migrate and there is no way for an owner to know if collaborators have migrated.
+* Migration does not automatically move or add collaborators to the Azure authoring resource. The app owner is the one who needs to complete this step after migration. This step requires [permissions to the Azure authoring resource](./luis-how-to-collaborate.md).
+* After contributors are assigned to the Azure resource, they will need to migrate before they can access applications. Otherwise, they won't have access to author the applications.
++
+## Using apps after migration
+
+After the migration process, all your LUIS apps for which you're the owner will now be assigned to a single LUIS authoring resource.
+The **My Apps** list shows the apps migrated to the new authoring resource. Before you access your apps, select **Choose a different authoring resource** to select the subscription and authoring resource to view the apps that can be authored.
+
+> [!div class="mx-imgBorder"]
+> ![select subscription and authoring resource](./media/migrate-authoring-key/select-sub-and-resource.png)
++
+If you plan to edit your apps programmatically, you'll need the authoring key values. These values are displayed by clicking **Manage** at the top of the screen in the LUIS portal, and then selecting **Azure Resources**. They're also available in the Azure portal on the resource's **Keys and Endpoint** page. You can also create more authoring resources and assign them from the same page.
+
+## Adding contributors to authoring resources
++
+Learn [how to add contributors](luis-how-to-collaborate.md) on your authoring resource. Contributors will have access to all applications under that resource.
+
+You can add contributors to the authoring resource from the Azure portal, on the **Access Control (IAM)** page for that resource. For more information, see [Add contributors to your app](luis-how-to-collaborate.md).
+
+> [!Note]
+> If the owner of the LUIS app migrated and added the collaborator as a contributor on the Azure resource, the collaborator will still have no access to the app unless they also migrate.
+
+## Troubleshooting the migration process
+
+If you cannot find your Azure subscription in the drop-down list:
+* Ensure that you have a valid Azure subscription that's authorized to create Azure AI services resources. Go to the [Azure portal](https://portal.azure.com) and check the status of the subscription. If you don't have one, [create a free Azure account](https://azure.microsoft.com/free/cognitive-services/).
+* Ensure that you're in the proper tenant associated with your valid subscription. You can switch tenants by selecting the avatar (containing your initials) in the top right of the screen.
+
+ > [!div class="mx-imgBorder"]
+ > ![Page for switching directories](./media/migrate-authoring-key/switch-directories.png)
+
+If you have an existing authoring resource but can't find it when you select the **Use Existing Authoring Resource** option:
+* Your resource was probably created in a different region than the one you are trying to migrate in.
+* Create a new resource from the LUIS portal instead.
+
+If you select the **Create New Authoring Resource** option and migration fails with the error message "Failed retrieving user's Azure information, retry again later":
+* Your subscription might have 10 or more authoring resources in that region, which is the per-region, per-subscription limit. If that's the case, you won't be able to create a new authoring resource.
+* Migrate by selecting the **Use Existing Authoring Resource** option and selecting one of the existing resources under your subscription.
+
+## Create new support request
+
+If you are having any issues with the migration that are not addressed in the troubleshooting section, please [create a support topic](https://portal.azure.com/#blade/Microsoft_Azure_Support/HelpAndSupportBlade/newsupportrequest) and provide the following information:
+
+ * **Issue Type**: Technical
+ * **Subscription**: Choose a subscription from the dropdown list
+ * **Service**: Search and select "Azure AI services"
+ * **Resource**: Choose a LUIS resource if there is an existing one. If not, select General question.
+
+## Next steps
+
+* Review [concepts about authoring and runtime keys](luis-how-to-azure-subscription.md)
+* Review how to [assign keys](luis-how-to-azure-subscription.md) and [add contributors](luis-how-to-collaborate.md)
ai-services Luis Reference Application Settings https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/LUIS/luis-reference-application-settings.md
+
+ Title: Application settings - LUIS
+description: Application settings for Azure AI services Language Understanding apps are stored in the app and portal.
++++++ Last updated : 05/04/2020++
+# App and version settings
+++
+These settings are stored in the [exported](https://westus.dev.cognitive.microsoft.com/docs/services/5890b47c39e2bb17b84a55ff/operations/5890b47c39e2bb052c5b9c40) app and updated with the REST APIs or LUIS portal.
+
+Changing your app version settings resets your app training status to untrained.
+++
+Text reference and examples include:
+
+* [Punctuation](#punctuation-normalization)
+* [Diacritics](#diacritics-normalization)
+
+## Diacritics normalization
+
+The following utterances show how diacritics normalization impacts utterances:
+
+|With diacritics set to false|With diacritics set to true|
+|--|--|
+|`quiero tomar una piña colada`|`quiero tomar una pina colada`|
+|||
+
+### Language support for diacritics
+
+#### Brazilian Portuguese `pt-br` diacritics
+
+|Diacritics set to false|Diacritics set to true|
+|-|-|
+|`á`|`a`|
+|`â`|`a`|
+|`ã`|`a`|
+|`à`|`a`|
+|`ç`|`c`|
+|`é`|`e`|
+|`ê`|`e`|
+|`í`|`i`|
+|`ó`|`o`|
+|`ô`|`o`|
+|`õ`|`o`|
+|`ú`|`u`|
+|||
+
+#### Dutch `nl-nl` diacritics
+
+|Diacritics set to false|Diacritics set to true|
+|-|-|
+|`á`|`a`|
+|`à`|`a`|
+|`é`|`e`|
+|`ë`|`e`|
+|`è`|`e`|
+|`ï`|`i`|
+|`í`|`i`|
+|`ó`|`o`|
+|`ö`|`o`|
+|`ú`|`u`|
+|`ü`|`u`|
+|||
+
+#### French `fr-` diacritics
+
+This includes both the French (France) and French (Canada) subcultures.
+
+|Diacritics set to false|Diacritics set to true|
+|--|--|
+|`é`|`e`|
+|`à`|`a`|
+|`è`|`e`|
+|`ù`|`u`|
+|`â`|`a`|
+|`ê`|`e`|
+|`î`|`i`|
+|`ô`|`o`|
+|`û`|`u`|
+|`ç`|`c`|
+|`ë`|`e`|
+|`ï`|`i`|
+|`ü`|`u`|
+|`ÿ`|`y`|
+
+#### German `de-de` diacritics
+
+|Diacritics set to false|Diacritics set to true|
+|--|--|
+|`ä`|`a`|
+|`ö`|`o`|
+|`ü`|`u`|
+
+#### Italian `it-it` diacritics
+
+|Diacritics set to false|Diacritics set to true|
+|--|--|
+|`à`|`a`|
+|`è`|`e`|
+|`é`|`e`|
+|`ì`|`i`|
+|`í`|`i`|
+|`î`|`i`|
+|`ò`|`o`|
+|`ó`|`o`|
+|`ù`|`u`|
+|`ú`|`u`|
+
+#### Spanish `es-` diacritics
+
+This includes both the Spanish (Spain) and Spanish (Mexico) subcultures.
+
+|Diacritics set to false|Diacritics set to true|
+|-|-|
+|`á`|`a`|
+|`é`|`e`|
+|`í`|`i`|
+|`ó`|`o`|
+|`ú`|`u`|
+|`ü`|`u`|
+|`ñ`|`n`|
+
+## Punctuation normalization
+
+The following utterances show how punctuation impacts utterances:
+
+|With punctuation set to False|With punctuation set to True|
+|--|--|
+|`Hmm..... I will take the cappuccino`|`Hmm I will take the cappuccino`|
+|||
+
+### Punctuation removed
+
+The following punctuation is removed when `NormalizePunctuation` is set to true.
+
+|Punctuation|
+|--|
+|`-`|
+|`.`|
+|`'`|
+|`"`|
+|`\`|
+|`/`|
+|`?`|
+|`!`|
+|`_`|
+|`,`|
+|`;`|
+|`:`|
+|`(`|
+|`)`|
+|`[`|
+|`]`|
+|`{`|
+|`}`|
+|`+`|
+|`¡`|
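+
+Both normalization settings are stored in the `settings` array of the exported app file. A minimal sketch, assuming the exported file uses name/value pairs with string values and that the setting names are `NormalizePunctuation` and `NormalizeDiacritics`:
+
+```json
+"settings": [
+    { "name": "NormalizePunctuation", "value": "true" },
+    { "name": "NormalizeDiacritics", "value": "true" }
+]
+```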
+
+## Next steps
+
+* Learn [concepts](concepts/utterances.md#utterance-normalization) of diacritics and punctuation.
ai-services Luis Reference Prebuilt Age https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/LUIS/luis-reference-prebuilt-age.md
+
+ Title: Age Prebuilt entity - LUIS
+
+description: This article contains age prebuilt entity information in Language Understanding (LUIS).
++++++++ Last updated : 10/04/2019++
+# Age prebuilt entity for a LUIS app
++
+The prebuilt age entity captures the age value both numerically and in terms of days, weeks, months, and years. Because this entity is already trained, you do not need to add example utterances containing age to the application intents. Age entity is supported in [many cultures](luis-reference-prebuilt-entities.md).
+
+## Types of age
+Age is managed from the [Recognizers-text](https://github.com/Microsoft/Recognizers-Text/blob/master/Patterns/English/English-NumbersWithUnit.yaml#L3) GitHub repository.
+
+## Resolution for prebuilt age entity
+++
+#### [V3 response](#tab/V3)
+
+The following JSON is with the `verbose` parameter set to `false`:
+
+```json
+"entities": {
+ "age": [
+ {
+ "number": 90,
+ "unit": "Day"
+ }
+ ]
+}
+```
+#### [V3 verbose response](#tab/V3-verbose)
+The following JSON is with the `verbose` parameter set to `true`:
+
+```json
+"entities": {
+ "age": [
+ {
+ "number": 90,
+ "unit": "Day"
+ }
+ ],
+ "$instance": {
+ "age": [
+ {
+ "type": "builtin.age",
+ "text": "90 day old",
+ "startIndex": 2,
+ "length": 10,
+ "modelTypeId": 2,
+ "modelType": "Prebuilt Entity Extractor"
+ }
+ ]
+ }
+}
+```
+#### [V2 response](#tab/V2)
+
+The following example shows the resolution of the **builtin.age** entity.
+
+```json
+ "entities": [
+ {
+ "entity": "90 day old",
+ "type": "builtin.age",
+ "startIndex": 2,
+ "endIndex": 11,
+ "resolution": {
+ "unit": "Day",
+ "value": "90"
+ }
+ }
+```
+* * *
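+
+For illustration, the following sketch shows one way a client might call the V3 prediction endpoint and read the prebuilt age values from a response shaped like the examples above. The resource URL, app ID, and key are placeholders, and the `requests` package is assumed to be installed.
+
+```python
+import requests
+
+# Placeholder values; substitute your own prediction resource, app ID, and key.
+ENDPOINT = "https://<your-resource>.cognitiveservices.azure.com"
+APP_ID = "<app-id>"
+KEY = "<prediction-key>"
+
+def get_age_entities(query: str) -> list:
+    url = f"{ENDPOINT}/luis/prediction/v3.0/apps/{APP_ID}/slots/production/predict"
+    response = requests.get(url, params={"subscription-key": KEY, "query": query})
+    response.raise_for_status()
+    prediction = response.json()["prediction"]
+    # The V3 (non-verbose) shape shown above: a list of {"number": ..., "unit": ...} objects.
+    return prediction["entities"].get("age", [])
+
+for age in get_age_entities("my son is a 90 day old baby"):
+    print(age["number"], age["unit"])  # for example: 90 Day
+```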
+
+## Next steps
+
+Learn more about the [V3 prediction endpoint](luis-migration-api-v3.md).
+
+Learn about the [currency](luis-reference-prebuilt-currency.md), [datetimeV2](luis-reference-prebuilt-datetimev2.md), and [dimension](luis-reference-prebuilt-dimension.md) entities.
ai-services Luis Reference Prebuilt Currency https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/LUIS/luis-reference-prebuilt-currency.md
+
+ Title: Currency Prebuilt entity - LUIS
+
+description: This article contains currency prebuilt entity information in Language Understanding (LUIS).
++++++++ Last updated : 10/14/2019++
+# Currency prebuilt entity for a LUIS app
++
+The prebuilt currency entity detects currency in many denominations and countries/regions, regardless of LUIS app culture. Because this entity is already trained, you do not need to add example utterances containing currency to the application intents. Currency entity is supported in [many cultures](luis-reference-prebuilt-entities.md).
+
+## Types of currency
+Currency is managed from the [Recognizers-text](https://github.com/Microsoft/Recognizers-Text/blob/master/Patterns/English/English-NumbersWithUnit.yaml#L26) GitHub repository.
+
+## Resolution for currency entity
+
+#### [V3 response](#tab/V3)
+
+The following JSON is with the `verbose` parameter set to `false`:
+
+```json
+"entities": {
+ "money": [
+ {
+ "number": 10.99,
+ "units": "Dollar"
+ }
+ ]
+}
+```
+#### [V3 verbose response](#tab/V3-verbose)
+The following JSON is with the `verbose` parameter set to `true`:
+
+```json
+"entities": {
+ "money": [
+ {
+ "number": 10.99,
+ "unit": "Dollar"
+ }
+ ],
+ "$instance": {
+ "money": [
+ {
+ "type": "builtin.currency",
+ "text": "$10.99",
+ "startIndex": 23,
+ "length": 6,
+ "modelTypeId": 2,
+ "modelType": "Prebuilt Entity Extractor"
+ }
+ ]
+ }
+}
+```
+
+#### [V2 response](#tab/V2)
+
+The following example shows the resolution of the **builtin.currency** entity.
+
+```json
+"entities": [
+ {
+ "entity": "$10.99",
+ "type": "builtin.currency",
+ "startIndex": 23,
+ "endIndex": 28,
+ "resolution": {
+ "unit": "Dollar",
+ "value": "10.99"
+ }
+ }
+]
+```
+* * *
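+
+As a small sketch, assuming the V3 (non-verbose) response shape shown above, a client could read the detected money values like this (the `response` fragment is illustrative):
+
+```python
+import json
+
+# A fragment shaped like the V3 response above, with illustrative values.
+response = json.loads("""
+{
+  "prediction": {
+    "entities": {
+      "money": [ { "number": 10.99, "units": "Dollar" } ]
+    }
+  }
+}
+""")
+
+for money in response["prediction"]["entities"].get("money", []):
+    print(money["number"], money["units"])  # 10.99 Dollar
+```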
+
+## Next steps
+
+Learn more about the [V3 prediction endpoint](luis-migration-api-v3.md).
+
+Learn about the [datetimeV2](luis-reference-prebuilt-datetimev2.md), [dimension](luis-reference-prebuilt-dimension.md), and [email](luis-reference-prebuilt-email.md) entities.
ai-services Luis Reference Prebuilt Datetimev2 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/LUIS/luis-reference-prebuilt-datetimev2.md
+
+ Title: DatetimeV2 Prebuilt entities - LUIS
+
+description: This article has datetimeV2 prebuilt entity information in Language Understanding (LUIS).
++++++++ Last updated : 04/13/2020++
+# DatetimeV2 prebuilt entity for a LUIS app
+++
+The **datetimeV2** prebuilt entity extracts date and time values. These values resolve in a standardized format for client programs to consume. When an utterance has a date or time that isn't complete, LUIS includes _both past and future values_ in the endpoint response. Because this entity is already trained, you do not need to add example utterances containing datetimeV2 to the application intents.
+
+## Types of datetimeV2
+DatetimeV2 is managed from the [Recognizers-text](https://github.com/Microsoft/Recognizers-Text/blob/master/Patterns/English/English-DateTime.yaml) GitHub repository.
+
+## Example JSON
+
+The following utterance and its partial JSON response are shown below.
+
+`8am on may 2nd 2019`
+
+#### [V3 response](#tab/1-1)
+
+```json
+"entities": {
+ "datetimeV2": [
+ {
+ "type": "datetime",
+ "values": [
+ {
+ "timex": "2019-05-02T08",
+ "resolution": [
+ {
+ "value": "2019-05-02 08:00:00"
+ }
+ ]
+ }
+ ]
+ }
+ ]
+}
+```
+
+#### [V3 verbose response](#tab/1-2)
+
+```json
+
+"entities": {
+ "datetimeV2": [
+ {
+ "type": "datetime",
+ "values": [
+ {
+ "timex": "2019-05-02T08",
+ "resolution": [
+ {
+ "value": "2019-05-02 08:00:00"
+ }
+ ]
+ }
+ ]
+ }
+ ],
+ "$instance": {
+ "datetimeV2": [
+ {
+ "type": "builtin.datetimeV2.datetime",
+ "text": "8am on may 2nd 2019",
+ "startIndex": 0,
+ "length": 19,
+ "modelTypeId": 2,
+ "modelType": "Prebuilt Entity Extractor",
+ "recognitionSources": [
+ "model"
+ ]
+ }
+ ]
+ }
+}
+```
+
+#### [V2 response](#tab/1-3)
+
+```json
+"entities": [
+ {
+ "entity": "8am on may 2nd 2019",
+ "type": "builtin.datetimeV2.datetime",
+ "startIndex": 0,
+ "endIndex": 18,
+ "resolution": {
+ "values": [
+ {
+ "timex": "2019-05-02T08",
+ "type": "datetime",
+ "value": "2019-05-02 08:00:00"
+ }
+ ]
+ }
+ }
+]
+ ```
+
+|Property name |Property type and description|
+|--|--|
+|Entity|**string** - Text extracted from the utterance with type of date, time, date range, or time range.|
+|type|**string** - One of the [subtypes of datetimeV2](#subtypes-of-datetimev2).|
+|startIndex|**int** - The index in the utterance at which the entity begins.|
+|endIndex|**int** - The index in the utterance at which the entity ends.|
+|resolution|Has a `values` array that has one, two, or four [values of resolution](#values-of-resolution).|
+|end|The end value of a time, or date range, in the same format as `value`. Only used if `type` is `daterange`, `timerange`, or `datetimerange`|
+
+* * *
+
+## Subtypes of datetimeV2
+
+The **datetimeV2** prebuilt entity has the following subtypes, and examples of each are provided in the table that follows:
+* `date`
+* `time`
+* `daterange`
+* `timerange`
+* `datetimerange`
++
+## Values of resolution
+* The array has one element if the date or time in the utterance is fully specified and unambiguous.
+* The array has two elements if the datetimeV2 value is ambiguous. Ambiguity includes lack of specific year, time, or time range. See [Ambiguous dates](#ambiguous-dates) for examples. When the time is ambiguous for A.M. or P.M., both values are included.
+* The array has four elements if the utterance has two elements with ambiguity. This ambiguity includes elements that have:
+ * A date or date range that is ambiguous as to year
+ * A time or time range that is ambiguous as to A.M. or P.M. For example, 3:00 April 3rd.
+
+Each element of the `values` array may have the following fields:
+
+|Property name|Property description|
+|--|--|
+|timex|time, date, or date range expressed in TIMEX format that follows the [ISO 8601 standard](https://en.wikipedia.org/wiki/ISO_8601) and the TIMEX3 attributes for annotation using the TimeML language.|
+|mod|term used to describe how to use the value such as `before`, `after`.|
+|type|The subtype, which can be one of the following items: `datetime`, `date`, `time`, `daterange`, `timerange`, `datetimerange`, `duration`, `set`.|
+|value|**Optional.** A datetime object in the format yyyy-MM-dd (date), HH:mm:ss (time), or yyyy-MM-dd HH:mm:ss (datetime). If `type` is `duration`, the value is the number of seconds (duration). <br/> Only used if `type` is `datetime`, `date`, `time`, or `duration`.|
+
+## Valid date values
+
+The **datetimeV2** supports dates between the following ranges:
+
+| Min | Max |
+|-|-|
+| 1st January 1900 | 31st December 2099 |
+
+## Ambiguous dates
+
+If the date can be in the past or future, LUIS provides both values. An example is an utterance that includes the month and date without the year.
+
+For example, given the following utterance:
+
+`May 2nd`
+
+* If today's date is May 3rd 2017, LUIS provides both "2017-05-02" and "2018-05-02" as values.
+* When today's date is May 1st 2017, LUIS provides both "2016-05-02" and "2017-05-02" as values.
+
+The following example shows the resolution of the entity "may 2nd". This resolution assumes that today's date is a date between May 2nd 2017 and May 1st 2018.
+Fields with `X` in the `timex` field are parts of the date that aren't explicitly specified in the utterance.
+
+## Date resolution example
++
+The following utterance and its partial JSON response are shown below.
+
+`May 2nd`
+
+#### [V3 response](#tab/2-1)
+
+```json
+"entities": {
+ "datetimeV2": [
+ {
+ "type": "date",
+ "values": [
+ {
+ "timex": "XXXX-05-02",
+ "resolution": [
+ {
+ "value": "2019-05-02"
+ },
+ {
+ "value": "2020-05-02"
+ }
+ ]
+ }
+ ]
+ }
+ ]
+}
+```
+
+#### [V3 verbose response](#tab/2-2)
+
+```json
+"entities": {
+ "datetimeV2": [
+ {
+ "type": "date",
+ "values": [
+ {
+ "timex": "XXXX-05-02",
+ "resolution": [
+ {
+ "value": "2019-05-02"
+ },
+ {
+ "value": "2020-05-02"
+ }
+ ]
+ }
+ ]
+ }
+ ],
+ "$instance": {
+ "datetimeV2": [
+ {
+ "type": "builtin.datetimeV2.date",
+ "text": "May 2nd",
+ "startIndex": 0,
+ "length": 7,
+ "modelTypeId": 2,
+ "modelType": "Prebuilt Entity Extractor",
+ "recognitionSources": [
+ "model"
+ ]
+ }
+ ]
+ }
+}
+```
+
+#### [V2 response](#tab/2-3)
+
+```json
+ "entities": [
+ {
+ "entity": "may 2nd",
+ "type": "builtin.datetimeV2.date",
+ "startIndex": 0,
+ "endIndex": 6,
+ "resolution": {
+ "values": [
+ {
+ "timex": "XXXX-05-02",
+ "type": "date",
+ "value": "2019-05-02"
+ },
+ {
+ "timex": "XXXX-05-02",
+ "type": "date",
+ "value": "2020-05-02"
+ }
+ ]
+ }
+ }
+ ]
+```
+* * *
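+
+When the year is ambiguous, the resolution array carries two candidate dates, as in the example above. The following is a minimal sketch of one way a client could pick the value that falls on or after today; the resolution shape is copied from the V3 example, and the selection policy itself is an assumption:
+
+```python
+from datetime import date
+
+# Resolution array shaped like the V3 example above (the year is ambiguous).
+resolutions = [{"value": "2019-05-02"}, {"value": "2020-05-02"}]
+
+def pick_on_or_after(resolutions, today):
+    """Return the first resolved date on or after 'today'; fall back to the latest."""
+    candidates = sorted(date.fromisoformat(r["value"]) for r in resolutions)
+    return next((d for d in candidates if d >= today), candidates[-1])
+
+print(pick_on_or_after(resolutions, date(2019, 10, 1)))  # 2020-05-02
+```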
+
+## Date range resolution examples for numeric date
+
+The `datetimeV2` entity extracts date and time ranges. The `start` and `end` fields specify the beginning and end of the range. For the utterance `May 2nd to May 5th`, LUIS provides **daterange** values for both the current year and the next year. In the `timex` field, the `XXXX` values indicate the ambiguity of the year. `P3D` indicates the time period is three days long.
+
+The following utterance and its partial JSON response are shown below.
+
+`May 2nd to May 5th`
+
+#### [V3 response](#tab/3-1)
+
+```json
+
+"entities": {
+ "datetimeV2": [
+ {
+ "type": "daterange",
+ "values": [
+ {
+ "timex": "(XXXX-05-02,XXXX-05-05,P3D)",
+ "resolution": [
+ {
+ "start": "2019-05-02",
+ "end": "2019-05-05"
+ },
+ {
+ "start": "2020-05-02",
+ "end": "2020-05-05"
+ }
+ ]
+ }
+ ]
+ }
+ ]
+}
+```
++
+#### [V3 verbose response](#tab/3-2)
+
+```json
+
+"entities": {
+ "datetimeV2": [
+ {
+ "type": "daterange",
+ "values": [
+ {
+ "timex": "(XXXX-05-02,XXXX-05-05,P3D)",
+ "resolution": [
+ {
+ "start": "2019-05-02",
+ "end": "2019-05-05"
+ },
+ {
+ "start": "2020-05-02",
+ "end": "2020-05-05"
+ }
+ ]
+ }
+ ]
+ }
+ ],
+ "$instance": {
+ "datetimeV2": [
+ {
+ "type": "builtin.datetimeV2.daterange",
+ "text": "May 2nd to May 5th",
+ "startIndex": 0,
+ "length": 18,
+ "modelTypeId": 2,
+ "modelType": "Prebuilt Entity Extractor",
+ "recognitionSources": [
+ "model"
+ ]
+ }
+ ]
+ }
+}
+```
+
+#### [V2 response](#tab/3-3)
+
+```json
+"entities": [
+ {
+ "entity": "may 2nd to may 5th",
+ "type": "builtin.datetimeV2.daterange",
+ "startIndex": 0,
+ "endIndex": 17,
+ "resolution": {
+ "values": [
+ {
+ "timex": "(XXXX-05-02,XXXX-05-05,P3D)",
+ "type": "daterange",
+ "start": "2019-05-02",
+ "end": "2019-05-05"
+ }
+ ]
+ }
+ }
+ ]
+```
+* * *
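+
+As a quick check of the resolved range, the sketch below (assuming the daterange resolution shape shown above) computes the length of the range, which matches the `P3D` period in the `timex` value:
+
+```python
+from datetime import date
+
+# One daterange resolution from the V3 example above.
+resolution = {"start": "2019-05-02", "end": "2019-05-05"}
+
+start = date.fromisoformat(resolution["start"])
+end = date.fromisoformat(resolution["end"])
+print((end - start).days)  # 3 days, matching P3D
+```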
+
+## Date range resolution examples for day of week
+
+The following example shows how LUIS uses **datetimeV2** to resolve the utterance `Tuesday to Thursday`. In this example, the current date is June 19th. LUIS includes **daterange** values for both of the date ranges that precede and follow the current date.
+
+The following utterance and its partial JSON response are shown below.
+
+`Tuesday to Thursday`
+
+#### [V3 response](#tab/4-1)
+
+```json
+"entities": {
+ "datetimeV2": [
+ {
+ "type": "daterange",
+ "values": [
+ {
+ "timex": "(XXXX-WXX-2,XXXX-WXX-4,P2D)",
+ "resolution": [
+ {
+ "start": "2019-10-08",
+ "end": "2019-10-10"
+ },
+ {
+ "start": "2019-10-15",
+ "end": "2019-10-17"
+ }
+ ]
+ }
+ ]
+ }
+ ]
+}
+```
+
+#### [V3 verbose response](#tab/4-2)
+
+```json
+"entities": {
+ "datetimeV2": [
+ {
+ "type": "daterange",
+ "values": [
+ {
+ "timex": "(XXXX-WXX-2,XXXX-WXX-4,P2D)",
+ "resolution": [
+ {
+ "start": "2019-10-08",
+ "end": "2019-10-10"
+ },
+ {
+ "start": "2019-10-15",
+ "end": "2019-10-17"
+ }
+ ]
+ }
+ ]
+ }
+ ],
+ "$instance": {
+ "datetimeV2": [
+ {
+ "type": "builtin.datetimeV2.daterange",
+ "text": "Tuesday to Thursday",
+ "startIndex": 0,
+ "length": 19,
+ "modelTypeId": 2,
+ "modelType": "Prebuilt Entity Extractor",
+ "recognitionSources": [
+ "model"
+ ]
+ }
+ ]
+ }
+}
+```
+
+#### [V2 response](#tab/4-3)
+
+```json
+ "entities": [
+ {
+ "entity": "tuesday to thursday",
+ "type": "builtin.datetimeV2.daterange",
+ "startIndex": 0,
+ "endIndex": 19,
+ "resolution": {
+ "values": [
+ {
+ "timex": "(XXXX-WXX-2,XXXX-WXX-4,P2D)",
+ "type": "daterange",
+ "start": "2019-04-30",
+ "end": "2019-05-02"
+ }
+ ]
+ }
+ }
+ ]
+```
+* * *
+
+## Ambiguous time
+The values array has two time elements if the time or time range is ambiguous. When there's an ambiguous time, values have both the A.M. and P.M. times.
+
+## Time range resolution example
+
+The datetimeV2 JSON response has changed in API V3. The following example shows how LUIS uses **datetimeV2** to resolve an utterance that has a time range.
+
+Changes from API V2:
+* The `datetimeV2.timex.type` property is no longer returned because it is returned at the parent level, `datetimeV2.type`.
+* The `datetimeV2.value` property has been renamed to `datetimeV2.timex`.
+
+The following utterance and its partial JSON response are shown below.
+
+`from 6pm to 7pm`
+
+#### [V3 response](#tab/5-1)
+
+The following JSON is with the `verbose` parameter set to `false`:
+
+```JSON
+
+"entities": {
+ "datetimeV2": [
+ {
+ "type": "timerange",
+ "values": [
+ {
+ "timex": "(T18,T19,PT1H)",
+ "resolution": [
+ {
+ "start": "18:00:00",
+ "end": "19:00:00"
+ }
+ ]
+ }
+ ]
+ }
+ ]
+}
+```
+#### [V3 verbose response](#tab/5-2)
+
+The following JSON is with the `verbose` parameter set to `true`:
+
+```json
+
+"entities": {
+ "datetimeV2": [
+ {
+ "type": "timerange",
+ "values": [
+ {
+ "timex": "(T18,T19,PT1H)",
+ "resolution": [
+ {
+ "start": "18:00:00",
+ "end": "19:00:00"
+ }
+ ]
+ }
+ ]
+ }
+ ],
+ "$instance": {
+ "datetimeV2": [
+ {
+ "type": "builtin.datetimeV2.timerange",
+ "text": "from 6pm to 7pm",
+ "startIndex": 0,
+ "length": 15,
+ "modelTypeId": 2,
+ "modelType": "Prebuilt Entity Extractor",
+ "recognitionSources": [
+ "model"
+ ]
+ }
+ ]
+ }
+}
+```
+#### [V2 response](#tab/5-3)
+
+```json
+ "entities": [
+ {
+ "entity": "6pm to 7pm",
+ "type": "builtin.datetimeV2.timerange",
+ "startIndex": 0,
+ "endIndex": 9,
+ "resolution": {
+ "values": [
+ {
+ "timex": "(T18,T19,PT1H)",
+ "type": "timerange",
+ "start": "18:00:00",
+ "end": "19:00:00"
+ }
+ ]
+ }
+ }
+ ]
+```
+
+* * *
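+
+Assuming the timerange resolution shape shown above, a client can parse the `start` and `end` strings into times and confirm the duration, which matches the `PT1H` period in the `timex` value. This is an illustrative sketch, not part of the LUIS response itself:
+
+```python
+from datetime import datetime
+
+# One timerange resolution from the V3 example above.
+resolution = {"start": "18:00:00", "end": "19:00:00"}
+
+time_format = "%H:%M:%S"
+start = datetime.strptime(resolution["start"], time_format)
+end = datetime.strptime(resolution["end"], time_format)
+print(end - start)  # 1:00:00, matching PT1H
+```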
+
+## Time resolution example
+
+The following utterance and its partial JSON response are shown below.
+
+`8am`
+
+#### [V3 response](#tab/6-1)
+
+```json
+"entities": {
+ "datetimeV2": [
+ {
+ "type": "time",
+ "values": [
+ {
+ "timex": "T08",
+ "resolution": [
+ {
+ "value": "08:00:00"
+ }
+ ]
+ }
+ ]
+ }
+ ]
+}
+```
+#### [V3 verbose response](#tab/6-2)
+
+```json
+"entities": {
+ "datetimeV2": [
+ {
+ "type": "time",
+ "values": [
+ {
+ "timex": "T08",
+ "resolution": [
+ {
+ "value": "08:00:00"
+ }
+ ]
+ }
+ ]
+ }
+ ],
+ "$instance": {
+ "datetimeV2": [
+ {
+ "type": "builtin.datetimeV2.time",
+ "text": "8am",
+ "startIndex": 0,
+ "length": 3,
+ "modelTypeId": 2,
+ "modelType": "Prebuilt Entity Extractor",
+ "recognitionSources": [
+ "model"
+ ]
+ }
+ ]
+ }
+}
+```
+#### [V2 response](#tab/6-3)
+
+```json
+"entities": [
+ {
+ "entity": "8am",
+ "type": "builtin.datetimeV2.time",
+ "startIndex": 0,
+ "endIndex": 2,
+ "resolution": {
+ "values": [
+ {
+ "timex": "T08",
+ "type": "time",
+ "value": "08:00:00"
+ }
+ ]
+ }
+ }
+]
+```
+
+* * *
+
+## Deprecated prebuilt datetime
+
+The `datetime` prebuilt entity is deprecated and replaced by **datetimeV2**.
+
+To replace `datetime` with `datetimeV2` in your LUIS app, complete the following steps:
+
+1. Open the **Entities** pane of the LUIS web interface.
+2. Delete the **datetime** prebuilt entity.
+3. Select **Add prebuilt entity**.
+4. Select **datetimeV2** and click **Save**.
+
+## Next steps
+
+Learn more about the [V3 prediction endpoint](luis-migration-api-v3.md).
+
+Learn about the [dimension](luis-reference-prebuilt-dimension.md), [email](luis-reference-prebuilt-email.md) entities, and [number](luis-reference-prebuilt-number.md).
+
ai-services Luis Reference Prebuilt Deprecated https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/LUIS/luis-reference-prebuilt-deprecated.md
+
+ Title: Deprecated Prebuilt entities - LUIS
+
+description: This article contains deprecated prebuilt entity information in Language Understanding (LUIS).
++++++++ Last updated : 07/29/2019++
+# Deprecated prebuilt entities in a LUIS app
++
+The following prebuilt entities are deprecated and can't be added to new LUIS apps.
+
+* **Datetime**: Existing LUIS apps that use **datetime** should be migrated to **datetimeV2**, although the datetime entity continues to function in pre-existing apps that use it.
+* **Geography**: Existing LUIS apps that use **geography** are supported until December 2018.
+* **Encyclopedia**: Existing LUIS apps that use **encyclopedia** are supported until December 2018.
+
+## Geography culture
+**Geography** is available only in the `en-us` locale.
+
+#### 3 Geography subtypes
+
+Prebuilt entity | Example utterance | JSON
+|--|--|--|
+`builtin.geography.city` | `seattle` |`{ "type": "builtin.geography.city", "entity": "seattle" }`|
+`builtin.geography.city` | `paris` |`{ "type": "builtin.geography.city", "entity": "paris" }`|
+`builtin.geography.country`| `australia` |`{ "type": "builtin.geography.country", "entity": "australia" }`|
+`builtin.geography.country`| `japan` |`{ "type": "builtin.geography.country", "entity": "japan" }`|
+`builtin.geography.pointOfInterest` | `amazon river` |`{ "type": "builtin.geography.pointOfInterest", "entity": "amazon river" }`|
+`builtin.geography.pointOfInterest` | `sahara desert`|`{ "type": "builtin.geography.pointOfInterest", "entity": "sahara desert" }`|
+
+## Encyclopedia culture
+**Encyclopedia** is available only in the `en-US` locale.
+
+#### Encyclopedia subtypes
+The encyclopedia built-in entity includes over 100 subtypes, listed in the following table. In addition, encyclopedia entities often map to multiple types. For example, the query `Ronald Reagan` yields:
+
+```json
+[
+  {
+    "entity": "ronald reagan",
+    "type": "builtin.encyclopedia.people.person"
+  },
+  {
+    "entity": "ronald reagan",
+    "type": "builtin.encyclopedia.film.actor"
+  },
+  {
+    "entity": "ronald reagan",
+    "type": "builtin.encyclopedia.government.us_president"
+  },
+  {
+    "entity": "ronald reagan",
+    "type": "builtin.encyclopedia.book.author"
+  }
+]
+```
++
+Prebuilt entity | Prebuilt entity (sub-types) | Example utterance
+|--|--|--|
+`builtin.encyclopedia.people.person`| `builtin.encyclopedia.people.person`| `bryan adams` |
+`builtin.encyclopedia.people.person`| `builtin.encyclopedia.film.producer`| `walt disney` |
+`builtin.encyclopedia.people.person`| `builtin.encyclopedia.film.cinematographer`| `adam greenberg`|
+`builtin.encyclopedia.people.person`| `builtin.encyclopedia.royalty.monarch`| `elizabeth ii`|
+`builtin.encyclopedia.people.person`| `builtin.encyclopedia.film.director`| `steven spielberg`|
+`builtin.encyclopedia.people.person`| `builtin.encyclopedia.film.writer`| `alfred hitchcock`|
+`builtin.encyclopedia.people.person`| `builtin.encyclopedia.film.actor`| `robert de niro`|
+`builtin.encyclopedia.people.person`| `builtin.encyclopedia.martial_arts.martial_artist`| `bruce lee`|
+`builtin.encyclopedia.people.person`| `builtin.encyclopedia.architecture.architect`| `james gallier`|
+`builtin.encyclopedia.people.person`| `builtin.encyclopedia.geography.mountaineer`| `jean couzy`|
+`builtin.encyclopedia.people.person`| `builtin.encyclopedia.celebrities.celebrity`| `angelina jolie`|
+`builtin.encyclopedia.people.person`| `builtin.encyclopedia.music.musician`| `bob dylan`|
+`builtin.encyclopedia.people.person`| `builtin.encyclopedia.soccer.player`| `diego maradona`|
+`builtin.encyclopedia.people.person`| `builtin.encyclopedia.baseball.player`| `babe ruth`|
+`builtin.encyclopedia.people.person`| `builtin.encyclopedia.basketball.player`| `heiko schaffartzik`|
+`builtin.encyclopedia.people.person`| `builtin.encyclopedia.olympics.athlete`| `andre agassi`|
+`builtin.encyclopedia.people.person`| `builtin.encyclopedia.basketball.coach`| `bob huggins`|
+`builtin.encyclopedia.people.person`| `builtin.encyclopedia.american_football.coach`| `james franklin`|
+`builtin.encyclopedia.people.person`| `builtin.encyclopedia.cricket.coach`| `andy flower`|
+`builtin.encyclopedia.people.person`| `builtin.encyclopedia.ice_hockey.coach`| `david quinn`|
+`builtin.encyclopedia.people.person`| `builtin.encyclopedia.ice_hockey.player`| `vincent lecavalier`|
+`builtin.encyclopedia.people.person`| `builtin.encyclopedia.government.politician`| `harold nicolson`|
+`builtin.encyclopedia.people.person`| `builtin.encyclopedia.government.us_president`| `barack obama`|
+`builtin.encyclopedia.people.person`| `builtin.encyclopedia.government.us_vice_president`| `dick cheney`|
+`builtin.encyclopedia.organization.organization`| `builtin.encyclopedia.organization.organization`| `united nations`|
+`builtin.encyclopedia.organization.organization`| `builtin.encyclopedia.sports.league`| `american league`|
+`builtin.encyclopedia.organization.organization`| `builtin.encyclopedia.ice_hockey.conference`| `western hockey league`|
+`builtin.encyclopedia.organization.organization`| `builtin.encyclopedia.baseball.division`| `american league east`|
+`builtin.encyclopedia.organization.organization`| `builtin.encyclopedia.baseball.league`| `major league baseball`|
+`builtin.encyclopedia.organization.organization`| `builtin.encyclopedia.basketball.conference`| `national basketball league`|
+`builtin.encyclopedia.organization.organization`| `builtin.encyclopedia.basketball.division`| `pacific division`|
+`builtin.encyclopedia.organization.organization`| `builtin.encyclopedia.soccer.league`| `premier league`|
+`builtin.encyclopedia.organization.organization`| `builtin.encyclopedia.american_football.division`| `afc north`|
+`builtin.encyclopedia.organization.organization`| `builtin.encyclopedia.broadcast.broadcast`| `nebraska educational telecommunications`|
+`builtin.encyclopedia.organization.organization`| `builtin.encyclopedia.broadcast.tv_station`| `abc`|
+`builtin.encyclopedia.organization.organization`| `builtin.encyclopedia.broadcast.tv_channel`| `cnbc world`|
+`builtin.encyclopedia.organization.organization`| `builtin.encyclopedia.broadcast.radio_station`| `bbc radio 1`|
+`builtin.encyclopedia.organization.organization`| `builtin.encyclopedia.business.operation`| `bank of china`|
+`builtin.encyclopedia.organization.organization`| `builtin.encyclopedia.music.record_label`| `pixar`|
+`builtin.encyclopedia.organization.organization`| `builtin.encyclopedia.aviation.airline`| `air france`|
+`builtin.encyclopedia.organization.organization`| `builtin.encyclopedia.automotive.company`| `general motors`|
+`builtin.encyclopedia.organization.organization`| `builtin.encyclopedia.music.musical_instrument_company`| `gibson guitar corporation` |
+`builtin.encyclopedia.organization.organization`| `builtin.encyclopedia.tv.network`| `cartoon network`|
+`builtin.encyclopedia.organization.organization`| `builtin.encyclopedia.education.educational_institution`| `cornwall hill college` |
+`builtin.encyclopedia.organization.organization`| `builtin.encyclopedia.education.school`| `boston arts academy`|
+`builtin.encyclopedia.organization.organization`| `builtin.encyclopedia.education.university`| `johns hopkins university`|
+`builtin.encyclopedia.organization.organization`| `builtin.encyclopedia.sports.team`| `united states national handball team`|
+`builtin.encyclopedia.organization.organization`| `builtin.encyclopedia.basketball.team`| `chicago bulls`|
+`builtin.encyclopedia.organization.organization`| `builtin.encyclopedia.sports.professional_sports_team`| `boston celtics`|
+`builtin.encyclopedia.organization.organization`| `builtin.encyclopedia.cricket.team`| `mumbai indians`|
+`builtin.encyclopedia.organization.organization`| `builtin.encyclopedia.baseball.team`| `houston astros`|
+`builtin.encyclopedia.organization.organization`| `builtin.encyclopedia.american_football.team`| `green bay packers`|
+`builtin.encyclopedia.organization.organization`| `builtin.encyclopedia.ice_hockey.team`| `hamilton bulldogs`|
+`builtin.encyclopedia.organization.organization`| `builtin.encyclopedia.soccer.team`| `fc bayern munich`|
+`builtin.encyclopedia.organization.organization`| `builtin.encyclopedia.government.political_party`| `pertubuhan kebangsaan melayu singapura`|
+`builtin.encyclopedia.time.event`| `builtin.encyclopedia.time.event`| `1740 batavia massacre`|
+`builtin.encyclopedia.time.event`| `builtin.encyclopedia.sports.championship_event`| `super bowl xxxix`|
+`builtin.encyclopedia.time.event`| `builtin.encyclopedia.award.competition`| `eurovision song contest 2003`|
+`builtin.encyclopedia.tv.series_episode`| `builtin.encyclopedia.tv.series_episode`| `the magnificent seven`|
+`builtin.encyclopedia.tv.series_episode`| `builtin.encyclopedia.tv.multipart_tv_episode`| `the deadly assassin`|
+`builtin.encyclopedia.commerce.consumer_product`| `builtin.encyclopedia.commerce.consumer_product`| `nokia lumia 620`|
+`builtin.encyclopedia.commerce.consumer_product`| `builtin.encyclopedia.music.album`| `dance pool`|
+`builtin.encyclopedia.commerce.consumer_product`| `builtin.encyclopedia.automotive.model`| `pontiac fiero`|
+`builtin.encyclopedia.commerce.consumer_product`| `builtin.encyclopedia.computer.computer`| `toshiba satellite`|
+`builtin.encyclopedia.commerce.consumer_product`| `builtin.encyclopedia.computer.web_browser`| `internet explorer`|
+`builtin.encyclopedia.commerce.brand`| `builtin.encyclopedia.commerce.brand`| `diet coke`|
+`builtin.encyclopedia.commerce.brand`| `builtin.encyclopedia.automotive.make`| `chrysler`|
+`builtin.encyclopedia.music.artist`| `builtin.encyclopedia.music.artist`| `michael jackson`|
+`builtin.encyclopedia.music.artist`| `builtin.encyclopedia.music.group`| `the yardbirds`|
+`builtin.encyclopedia.music.music_video`| `builtin.encyclopedia.music.music_video`| `the beatles anthology`|
+`builtin.encyclopedia.theater.play`| `builtin.encyclopedia.theater.play`| `camelot`|
+`builtin.encyclopedia.sports.fight_song`| `builtin.encyclopedia.sports.fight_song`| `the cougar song`|
+`builtin.encyclopedia.film.series`| `builtin.encyclopedia.film.series`| `the twilight saga`|
+`builtin.encyclopedia.tv.program`| `builtin.encyclopedia.tv.program`| `late night with david letterman`|
+`builtin.encyclopedia.radio.radio_program`| `builtin.encyclopedia.radio.radio_program`| `grand ole opry`|
+`builtin.encyclopedia.film.film`| `builtin.encyclopedia.film.film`| `alice in wonderland`|
+`builtin.encyclopedia.cricket.tournament`| `builtin.encyclopedia.cricket.tournament`| `cricket world cup`|
+`builtin.encyclopedia.government.government`| `builtin.encyclopedia.government.government`| `european commission`|
+`builtin.encyclopedia.sports.team_owner`| `builtin.encyclopedia.sports.team_owner`| `bob castellini`|
+`builtin.encyclopedia.music.genre`| `builtin.encyclopedia.music.genre`| `eastern europe`|
+`builtin.encyclopedia.ice_hockey.division`| `builtin.encyclopedia.ice_hockey.division`| `hockeyallsvenskan`|
+`builtin.encyclopedia.architecture.style`| `builtin.encyclopedia.architecture.style`| `spanish colonial revival architecture`|
+`builtin.encyclopedia.broadcast.producer`| `builtin.encyclopedia.broadcast.producer`| `columbia tristar television`|
+`builtin.encyclopedia.book.author`| `builtin.encyclopedia.book.author`| `adam maxwell`|
+`builtin.encyclopedia.religion.founding_figur`| `builtin.encyclopedia.religion.founding_figur`| `gautama buddha`|
+`builtin.encyclopedia.martial_arts.martial_art`| `builtin.encyclopedia.martial_arts.martial_art`| `american kenpo`|
+`builtin.encyclopedia.sports.school`| `builtin.encyclopedia.sports.school`| `yale university`|
+`builtin.encyclopedia.business.product_line`| `builtin.encyclopedia.business.product_line`| `canon powershot`|
+`builtin.encyclopedia.internet.website`| `builtin.encyclopedia.internet.website`| `bing`|
+`builtin.encyclopedia.time.holiday`| `builtin.encyclopedia.time.holiday`| `easter`|
+`builtin.encyclopedia.food.candy_bar`| `builtin.encyclopedia.food.candy_bar`| `cadbury dairy milk`|
+`builtin.encyclopedia.finance.stock_exchange`| `builtin.encyclopedia.finance.stock_exchange`| `tokyo stock exchange`|
+`builtin.encyclopedia.film.festival`| `builtin.encyclopedia.film.festival`| `berlin international film festival`|
+
+## Next steps
+
+Learn about the [dimension](luis-reference-prebuilt-dimension.md), [email](luis-reference-prebuilt-email.md) entities, and [number](luis-reference-prebuilt-number.md).
+
ai-services Luis Reference Prebuilt Dimension https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/LUIS/luis-reference-prebuilt-dimension.md
+
+ Title: Dimension Prebuilt entities - LUIS
+
+description: This article contains dimension prebuilt entity information in Language Understanding (LUIS).
++++++++ Last updated : 10/14/2019++
+# Dimension prebuilt entity for a LUIS app
++
+The prebuilt dimension entity detects various types of dimensions, regardless of the LUIS app culture. Because this entity is already trained, you do not need to add example utterances containing dimensions to the application intents. Dimension entity is supported in [many cultures](luis-reference-prebuilt-entities.md).
+
+## Types of dimension
+
+Dimension is managed from the [Recognizers-text](https://github.com/Microsoft/Recognizers-Text/blob/master/Patterns/English/English-NumbersWithUnit.yaml) GitHub repository.
+
+## Resolution for dimension entity
+
+The following entity objects are returned for the query:
+
+`10 1/2 miles of cable`
+
+#### [V3 response](#tab/V3)
+
+The following JSON is with the `verbose` parameter set to `false`:
+
+```json
+"entities": {
+ "dimension": [
+ {
+ "number": 10.5,
+ "units": "Mile"
+ }
+ ]
+}
+```
+#### [V3 verbose response](#tab/V3-verbose)
+The following JSON is with the `verbose` parameter set to `true`:
+
+```json
+"entities": {
+ "dimension": [
+ {
+ "number": 10.5,
+ "units": "Mile"
+ }
+ ],
+ "$instance": {
+ "dimension": [
+ {
+ "type": "builtin.dimension",
+ "text": "10 1/2 miles",
+ "startIndex": 0,
+ "length": 12,
+ "modelTypeId": 2,
+ "modelType": "Prebuilt Entity Extractor",
+ "recognitionSources": [
+ "model"
+ ]
+ }
+ ]
+ }
+}
+```
+
+#### [V2 response](#tab/V2)
+
+The following example shows the resolution of the **builtin.dimension** entity.
+
+```json
+{
+ "entity": "10 1/2 miles",
+ "type": "builtin.dimension",
+ "startIndex": 0,
+ "endIndex": 11,
+ "resolution": {
+ "unit": "Mile",
+ "value": "10.5"
+ }
+}
+```
+* * *
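+
+The resolution pairs a numeric value with a unit string. As a small illustration, a client could convert the resolved value; the conversion table below is our own assumption, keyed on the unit name from the example above:
+
+```python
+# One dimension entity shaped like the V3 response above.
+dimension = {"number": 10.5, "units": "Mile"}
+
+# Illustrative conversion factors keyed by unit names seen in the examples.
+TO_KILOMETERS = {"Mile": 1.609344, "Kilometer": 1.0, "Meter": 0.001}
+
+factor = TO_KILOMETERS.get(dimension["units"])
+if factor is not None:
+    print(f"{dimension['number'] * factor:.2f} km")  # 16.90 km
+```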
+
+## Next steps
+
+Learn more about the [V3 prediction endpoint](luis-migration-api-v3.md).
+
+Learn about the [email](luis-reference-prebuilt-email.md), [number](luis-reference-prebuilt-number.md), and [ordinal](luis-reference-prebuilt-ordinal.md) entities.
ai-services Luis Reference Prebuilt Domains https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/LUIS/luis-reference-prebuilt-domains.md
+
+ Title: Prebuilt domain reference - LUIS
+
+description: Reference for the prebuilt domains, which are prebuilt collections of intents and entities from Language Understanding Intelligent Services (LUIS).
++++++++ Last updated : 04/18/2022++
+# Prebuilt domain reference for your LUIS app
+++
+This reference provides information about the [prebuilt domains](./howto-add-prebuilt-models.md), which are prebuilt collections of intents and entities that LUIS offers.
+
+[Custom domains](how-to/sign-in.md), by contrast, start with no intents and models. You can add any prebuilt domain intents and entities to a custom model.
+
+## Prebuilt domains per language
+
+The table below summarizes the currently supported domains. Support for English is usually more complete than for other languages.
+
+| Entity Type | EN-US | ZH-CN | DE | FR | ES | IT | PT-BR | KO | NL | TR |
+|:--|:--:|:--:|:--:|:--:|:--:|:--:|:--:|:--:|:--:|:--:|
+| Calendar | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ |
+| Communication | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ |
+| Email | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ |
+| HomeAutomation | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ |
+| Notes | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ |
+| Places | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ |
+| RestaurantReservation | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ |
+| ToDo | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ |
+| Utilities | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ |
+| Weather | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ |
+| Web | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ |
+
+Prebuilt domains are **not supported** in:
+
+* French Canadian
+* Hindi
+* Spanish Mexican
+* Japanese
+
+## Next steps
+
+Learn about the [simple entity](reference-entity-simple.md).
ai-services Luis Reference Prebuilt Email https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/LUIS/luis-reference-prebuilt-email.md
+
+ Title: LUIS Prebuilt entities email reference
+
+description: This article contains email prebuilt entity information in Language Understanding (LUIS).
++++++++ Last updated : 09/27/2019++
+# Email prebuilt entity for a LUIS app
++
+Email extraction includes the entire email address from an utterance. Because this entity is already trained, you do not need to add example utterances containing email to the application intents. Email entity is supported in `en-us` culture only.
+
+## Resolution for prebuilt email
+
+The following entity objects are returned for the query:
+
+`please send the information to patti@contoso.com`
+
+#### [V3 response](#tab/V3)
+
+The following JSON is with the `verbose` parameter set to `false`:
+
+```json
+"entities": {
+ "email": [
+ "patti@contoso.com"
+ ]
+}
+```
+#### [V3 verbose response](#tab/V3-verbose)
+
+The following JSON is with the `verbose` parameter set to `true`:
+
+```json
+"entities": {
+ "email": [
+ "patti@contoso.com"
+ ],
+ "$instance": {
+ "email": [
+ {
+ "type": "builtin.email",
+ "text": "patti@contoso.com",
+ "startIndex": 31,
+ "length": 17,
+ "modelTypeId": 2,
+ "modelType": "Prebuilt Entity Extractor",
+ "recognitionSources": [
+ "model"
+ ]
+ }
+ ]
+ }
+}
+```
+#### [V2 response](#tab/V2)
+
+The following example shows the resolution of the **builtin.email** entity.
+
+```json
+"entities": [
+ {
+ "entity": "patti@contoso.com",
+ "type": "builtin.email",
+ "startIndex": 31,
+ "endIndex": 55,
+ "resolution": {
+ "value": "patti@contoso.com"
+ }
+ }
+]
+```
+* * *
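+
+In the V3 (non-verbose) shape shown above, the email entity is a plain list of address strings, so iterating over it is direct. A minimal sketch, using an illustrative fragment:
+
+```python
+# Entities fragment shaped like the V3 response above.
+entities = {"email": ["patti@contoso.com"]}
+
+for address in entities.get("email", []):
+    user, _, domain = address.partition("@")
+    print(user, domain)  # patti contoso.com
+```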
+
+## Next steps
+
+Learn more about the [V3 prediction endpoint](luis-migration-api-v3.md).
+
+Learn about the [number](luis-reference-prebuilt-number.md), [ordinal](luis-reference-prebuilt-ordinal.md), and [percentage](luis-reference-prebuilt-percentage.md).
ai-services Luis Reference Prebuilt Entities https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/LUIS/luis-reference-prebuilt-entities.md
+
+ Title: All Prebuilt entities - LUIS
+
+description: This article contains lists of the prebuilt entities that are included in Language Understanding (LUIS).
++++++++ Last updated : 05/05/2021++
+# Entities per culture in your LUIS model
+++
+Language Understanding (LUIS) provides prebuilt entities.
+
+## Entity resolution
+When a prebuilt entity is included in your application, LUIS includes the corresponding entity resolution in the endpoint response. All example utterances are also labeled with the entity.
+
+The behavior of prebuilt entities can't be modified but you can improve resolution by [adding the prebuilt entity as a feature to a machine-learning entity or sub-entity](concepts/entities.md#prebuilt-entities).
+
+## Availability
+Unless otherwise noted, prebuilt entities are available in all LUIS application locales (cultures). The following table shows the prebuilt entities that are supported for each culture.
+
+|Culture|Subcultures|Notes|
+|--|--|--|
+|Chinese|[zh-CN](#chinese-entity-support)||
+|Dutch|[nl-NL](#dutch-entity-support)||
+|English|[en-US (American)](#english-american-entity-support)||
+|English|[en-GB (British)](#english-british-entity-support)||
+|French|[fr-CA (Canada)](#french-canadian-entity-support), [fr-FR (France)](#french-france-entity-support)||
+|German|[de-DE](#german-entity-support)||
+|Italian|[it-IT](#italian-entity-support)||
+|Japanese|[ja-JP](#japanese-entity-support)||
+|Korean|[ko-KR](#korean-entity-support)||
+|Portuguese|[pt-BR (Brazil)](#portuguese-brazil-entity-support)||
+|Spanish|[es-ES (Spain)](#spanish-spain-entity-support), [es-MX (Mexico)](#spanish-mexico-entity-support)||
+|Turkish|[tr-TR](#turkish-entity-support)||
+
+## Prediction endpoint runtime
+
+The availability of a prebuilt entity in a specific language is determined by the prediction endpoint runtime version.
+
+## Chinese entity support
+
+The following entities are supported:
+
+| Prebuilt entity | zh-CN |
+| -- | :--: |
+[Age](luis-reference-prebuilt-age.md):<br>year<br>month<br>week<br>day | V2, V3 |
+[Currency (money)](luis-reference-prebuilt-currency.md):<br>dollar<br>fractional unit (ex: penny) | V2, V3 |
+[DatetimeV2](luis-reference-prebuilt-datetimev2.md):<br>date<br>daterange<br>time<br>timerange | V2, V3 |
+[Dimension](luis-reference-prebuilt-dimension.md):<br>volume<br>area<br>weight<br>information (ex: bit/byte)<br>length (ex: meter)<br>speed (ex: mile per hour) | V2, V3 |
+[Email](luis-reference-prebuilt-email.md) | V2, V3 |
+[Number](luis-reference-prebuilt-number.md) | V2, V3 |
+[Ordinal](luis-reference-prebuilt-ordinal.md) | V2, V3 |
+[Percentage](luis-reference-prebuilt-percentage.md) | V2, V3 |
+[PersonName](luis-reference-prebuilt-person.md) | V2, V3 |
+[Phonenumber](luis-reference-prebuilt-phonenumber.md) | V2, V3 |
+[Temperature](luis-reference-prebuilt-temperature.md):<br>fahrenheit<br>kelvin<br>rankine<br>delisle<br>celsius | V2, V3 |
+[URL](luis-reference-prebuilt-url.md) | V2, V3 |
+<!--[GeographyV2](luis-reference-prebuilt-geographyV2.md) | - |-->
+<!--[KeyPhrase](luis-reference-prebuilt-keyphrase.md) | - |-->
+<!--[OrdinalV2](luis-reference-prebuilt-ordinal-v2.md) | - |-->
+
+## Dutch entity support
+
+The following entities are supported:
+
+| Prebuilt entity | nl-NL |
+| -- | :--: |
+[Age](luis-reference-prebuilt-age.md):<br>year<br>month<br>week<br>day | V2, V3 |
+[Currency (money)](luis-reference-prebuilt-currency.md):<br>dollar<br>fractional unit (ex: penny) | V2, V3 |
+[Dimension](luis-reference-prebuilt-dimension.md):<br>volume<br>area<br>weight<br>information (ex: bit/byte)<br>length (ex: meter)<br>speed (ex: mile per hour) | V2, V3 |
+[Email](luis-reference-prebuilt-email.md) | V2, V3 |
+[KeyPhrase](luis-reference-prebuilt-keyphrase.md) | V2, V3 |
+[Number](luis-reference-prebuilt-number.md) | V2, V3 |
+[Ordinal](luis-reference-prebuilt-ordinal.md) | V2, V3 |
+[Percentage](luis-reference-prebuilt-percentage.md) | V2, V3 |
+[Phonenumber](luis-reference-prebuilt-phonenumber.md) | V2, V3 |
+[Temperature](luis-reference-prebuilt-temperature.md):<br>fahrenheit<br>kelvin<br>rankine<br>delisle<br>celsius | V2, V3 |
+[URL](luis-reference-prebuilt-url.md) | V2, V3 |
+<!--[Datetime](luis-reference-prebuilt-deprecated.md) | - |-->
+<!--[GeographyV2](luis-reference-prebuilt-geographyV2.md) | - |-->
+<!--[OrdinalV2](luis-reference-prebuilt-ordinal-v2.md) | - |-->
+<!--[PersonName](luis-reference-prebuilt-person.md) | - |-->
+
+## English (American) entity support
+
+The following entities are supported:
+
+| Prebuilt entity | en-US |
+| -- | :--: |
+[Age](luis-reference-prebuilt-age.md):<br>year<br>month<br>week<br>day | V2, V3 |
+[Currency (money)](luis-reference-prebuilt-currency.md):<br>dollar<br>fractional unit (ex: penny) | V2, V3 |
+[DatetimeV2](luis-reference-prebuilt-datetimev2.md):<br>date<br>daterange<br>time<br>timerange | V2, V3 |
+[Dimension](luis-reference-prebuilt-dimension.md):<br>volume<br>area<br>weight<br>information (ex: bit/byte)<br>length (ex: meter)<br>speed (ex: mile per hour) | V2, V3 |
+[Email](luis-reference-prebuilt-email.md) | V2, V3 |
+[GeographyV2](luis-reference-prebuilt-geographyV2.md) | V2, V3 |
+[KeyPhrase](luis-reference-prebuilt-keyphrase.md) | V2, V3 |
+[Number](luis-reference-prebuilt-number.md) | V2, V3 |
+[Ordinal](luis-reference-prebuilt-ordinal.md) | V2, V3 |
+[OrdinalV2](luis-reference-prebuilt-ordinal-v2.md) | V2, V3 |
+[Percentage](luis-reference-prebuilt-percentage.md) | V2, V3 |
+[PersonName](luis-reference-prebuilt-person.md) | V2, V3 |
+[Phonenumber](luis-reference-prebuilt-phonenumber.md) | V2, V3 |
+[Temperature](luis-reference-prebuilt-temperature.md):<br>fahrenheit<br>kelvin<br>rankine<br>delisle<br>celsius | V2, V3 |
+[URL](luis-reference-prebuilt-url.md) | V2, V3 |
+
+## English (British) entity support
+
+The following entities are supported:
+
+| Prebuilt entity | en-GB |
+| -- | :--: |
+[Age](luis-reference-prebuilt-age.md):<br>year<br>month<br>week<br>day | V2, V3 |
+[Currency (money)](luis-reference-prebuilt-currency.md):<br>dollar<br>fractional unit (ex: penny) | V2, V3 |
+[DatetimeV2](luis-reference-prebuilt-datetimev2.md):<br>date<br>daterange<br>time<br>timerange | V2, V3 |
+[Dimension](luis-reference-prebuilt-dimension.md):<br>volume<br>area<br>weight<br>information (ex: bit/byte)<br>length (ex: meter)<br>speed (ex: mile per hour) | V2, V3 |
+[Email](luis-reference-prebuilt-email.md) | V2, V3 |
+[GeographyV2](luis-reference-prebuilt-geographyV2.md) | V2, V3 |
+[KeyPhrase](luis-reference-prebuilt-keyphrase.md) | V2, V3 |
+[Number](luis-reference-prebuilt-number.md) | V2, V3 |
+[Ordinal](luis-reference-prebuilt-ordinal.md) | V2, V3 |
+[OrdinalV2](luis-reference-prebuilt-ordinal-v2.md) | V2, V3 |
+[Percentage](luis-reference-prebuilt-percentage.md) | V2, V3 |
+[PersonName](luis-reference-prebuilt-person.md) | V2, V3 |
+[Phonenumber](luis-reference-prebuilt-phonenumber.md) | V2, V3 |
+[Temperature](luis-reference-prebuilt-temperature.md):<br>fahrenheit<br>kelvin<br>rankine<br>delisle<br>celsius | V2, V3 |
+[URL](luis-reference-prebuilt-url.md) | V2, V3 |
+
+## French (France) entity support
+
+The following entities are supported:
+
+| Prebuilt entity | fr-FR |
+| -- | :--: |
+[Age](luis-reference-prebuilt-age.md):<br>year<br>month<br>week<br>day | V2, V3 |
+[Currency (money)](luis-reference-prebuilt-currency.md):<br>dollar<br>fractional unit (ex: penny) | V2, V3 |
+[DatetimeV2](luis-reference-prebuilt-datetimev2.md):<br>date<br>daterange<br>time<br>timerange | V2, V3 |
+[Dimension](luis-reference-prebuilt-dimension.md):<br>volume<br>area<br>weight<br>information (ex: bit/byte)<br>length (ex: meter)<br>speed (ex: mile per hour) | V2, V3 |
+[Email](luis-reference-prebuilt-email.md) | V2, V3 |
+[GeographyV2](luis-reference-prebuilt-geographyV2.md) | - |
+[KeyPhrase](luis-reference-prebuilt-keyphrase.md) | V2, V3 |
+[Number](luis-reference-prebuilt-number.md) | V2, V3 |
+[Ordinal](luis-reference-prebuilt-ordinal.md) | V2, V3 |
+[Percentage](luis-reference-prebuilt-percentage.md) | V2, V3 |
+[Phonenumber](luis-reference-prebuilt-phonenumber.md) | V2, V3 |
+[Temperature](luis-reference-prebuilt-temperature.md):<br>fahrenheit<br>kelvin<br>rankine<br>delisle<br>celsius | V2, V3 |
+[URL](luis-reference-prebuilt-url.md) | V2, V3 |
+<!--[OrdinalV2](luis-reference-prebuilt-ordinal-v2.md) | - |-->
+<!--[PersonName](luis-reference-prebuilt-person.md) | - |-->
+
+## French (Canadian) entity support
+
+The following entities are supported:
+
+| Prebuilt entity | fr-CA |
+| -- | :--: |
+[Age](luis-reference-prebuilt-age.md):<br>year<br>month<br>week<br>day | V2, V3 |
+[Currency (money)](luis-reference-prebuilt-currency.md):<br>dollar<br>fractional unit (ex: penny) | V2, V3 |
+[DatetimeV2](luis-reference-prebuilt-datetimev2.md):<br>date<br>daterange<br>time<br>timerange | V2, V3 |
+[Dimension](luis-reference-prebuilt-dimension.md):<br>volume<br>area<br>weight<br>information (ex: bit/byte)<br>length (ex: meter)<br>speed (ex: mile per hour) | V2, V3 |
+[Email](luis-reference-prebuilt-email.md) | V2, V3 |
+[KeyPhrase](luis-reference-prebuilt-keyphrase.md) | V2, V3 |
+[Number](luis-reference-prebuilt-number.md) | V2, V3 |
+[Ordinal](luis-reference-prebuilt-ordinal.md) | V2, V3 |
+[Percentage](luis-reference-prebuilt-percentage.md) | V2, V3 |
+[Phonenumber](luis-reference-prebuilt-phonenumber.md) | V2, V3 |
+[Temperature](luis-reference-prebuilt-temperature.md):<br>fahrenheit<br>kelvin<br>rankine<br>delisle<br>celsius | V2, V3 |
+[URL](luis-reference-prebuilt-url.md) | V2, V3 |
+<!--[GeographyV2](luis-reference-prebuilt-geographyV2.md) | - |-->
+<!--[OrdinalV2](luis-reference-prebuilt-ordinal-v2.md) | - |-->
+<!--[PersonName](luis-reference-prebuilt-person.md) | - |-->
+
+## German entity support
+
+The following entities are supported:
+
+|Prebuilt entity | de-DE |
+| -- | :--: |
+[Age](luis-reference-prebuilt-age.md):<br>year<br>month<br>week<br>day | V2, V3 |
+[Currency (money)](luis-reference-prebuilt-currency.md):<br>dollar<br>fractional unit (ex: penny) | V2, V3 |
+[DatetimeV2](luis-reference-prebuilt-datetimev2.md):<br>date<br>daterange<br>time<br>timerange | V2, V3 |
+[Dimension](luis-reference-prebuilt-dimension.md):<br>volume<br>area<br>weight<br>information (ex: bit/byte)<br>length (ex: meter)<br>speed (ex: mile per hour) | V2, V3 |
+[Email](luis-reference-prebuilt-email.md) | V2, V3 |
+[KeyPhrase](luis-reference-prebuilt-keyphrase.md) | V2, V3 |
+[Number](luis-reference-prebuilt-number.md) | V2, V3 |
+[Ordinal](luis-reference-prebuilt-ordinal.md) | V2, V3 |
+[Percentage](luis-reference-prebuilt-percentage.md) | V2, V3 |
+[Phonenumber](luis-reference-prebuilt-phonenumber.md) | V2, V3 |
+[Temperature](luis-reference-prebuilt-temperature.md):<br>fahrenheit<br>kelvin<br>rankine<br>delisle<br>celsius | V2, V3 |
+[URL](luis-reference-prebuilt-url.md) | V2, V3 |
+<!--[GeographyV2](luis-reference-prebuilt-geographyV2.md) | - |-->
+<!--[OrdinalV2](luis-reference-prebuilt-ordinal-v2.md) | - |-->
+<!--[PersonName](luis-reference-prebuilt-person.md) | - |-->
+
+## Italian entity support
+
+The Italian prebuilt age, currency, dimension, number, and percentage _resolution_ changed between V2 and the V3 preview.
+
+The following entities are supported:
+
+| Prebuilt entity | it-IT |
+| -- | :--: |
+[Age](luis-reference-prebuilt-age.md):<br>year<br>month<br>week<br>day | V2, V3 |
+[Currency (money)](luis-reference-prebuilt-currency.md):<br>dollar<br>fractional unit (ex: penny) | V2, V3 |
+[DatetimeV2](luis-reference-prebuilt-datetimev2.md):<br>date<br>daterange<br>time<br>timerange | V2, V3 |
+[Dimension](luis-reference-prebuilt-dimension.md):<br>volume<br>area<br>weight<br>information (ex: bit/byte)<br>length (ex: meter)<br>speed (ex: mile per hour) | V2, V3 |
+[Email](luis-reference-prebuilt-email.md) | V2, V3 |
+[KeyPhrase](luis-reference-prebuilt-keyphrase.md) | V2, V3 |
+[Number](luis-reference-prebuilt-number.md) | V2, V3 |
+[Ordinal](luis-reference-prebuilt-ordinal.md) | V2, V3 |
+[Percentage](luis-reference-prebuilt-percentage.md) | V2, V3 |
+[Phonenumber](luis-reference-prebuilt-phonenumber.md) | V2, V3 |
+[Temperature](luis-reference-prebuilt-temperature.md):<br>fahrenheit<br>kelvin<br>rankine<br>delisle<br>celsius | V2, V3 |
+[URL](luis-reference-prebuilt-url.md) | V2, V3 |
+<!--[GeographyV2](luis-reference-prebuilt-geographyV2.md) | - |-->
+<!--[OrdinalV2](luis-reference-prebuilt-ordinal-v2.md) | - |-->
+<!--[PersonName](luis-reference-prebuilt-person.md) | - |-->
+
+## Japanese entity support
+
+The following entities are supported:
+
+|Prebuilt entity | ja-JP |
+| -- | :--: |
+[Age](luis-reference-prebuilt-age.md):<br>year<br>month<br>week<br>day | V2, - |
+[Currency (money)](luis-reference-prebuilt-currency.md):<br>dollar<br>fractional unit (ex: penny) | V2, - |
+[Dimension](luis-reference-prebuilt-dimension.md):<br>volume<br>area<br>weight<br>information (ex: bit/byte)<br>length (ex: meter)<br>speed (ex: mile per hour) | V2, - |
+[Email](luis-reference-prebuilt-email.md) | V2, V3 |
+[KeyPhrase](luis-reference-prebuilt-keyphrase.md) | V2, V3 |
+[Number](luis-reference-prebuilt-number.md) | V2, - |
+[Ordinal](luis-reference-prebuilt-ordinal.md) | V2, - |
+[Percentage](luis-reference-prebuilt-percentage.md) | V2, - |
+[Phonenumber](luis-reference-prebuilt-phonenumber.md) | V2, V3 |
+[Temperature](luis-reference-prebuilt-temperature.md):<br>fahrenheit<br>kelvin<br>rankine<br>delisle<br>celsius | V2, - |
+[URL](luis-reference-prebuilt-url.md) | V2, V3 |
+<!--[Datetime](luis-reference-prebuilt-deprecated.md) | - |-->
+<!--[GeographyV2](luis-reference-prebuilt-geographyV2.md) | - |-->
+<!--[OrdinalV2](luis-reference-prebuilt-ordinal-v2.md) | - |-->
+<!--[PersonName](luis-reference-prebuilt-person.md) | - |-->
+
+## Korean entity support
+
+The following entities are supported:
+
+| Prebuilt entity | ko-KR |
+| -- | :--: |
+[Email](luis-reference-prebuilt-email.md) | V2, V3 |
+[KeyPhrase](luis-reference-prebuilt-keyphrase.md) | V2, V3 |
+[Phonenumber](luis-reference-prebuilt-phonenumber.md) | V2, V3 |
+[URL](luis-reference-prebuilt-url.md) | V2, V3 |
+<!--[Age](luis-reference-prebuilt-age.md):<br>year<br>month<br>week<br>day | - |-->
+<!--[Currency (money)](luis-reference-prebuilt-currency.md):<br>dollar<br>fractional unit (ex: penny) | - |-->
+<!--[Datetime](luis-reference-prebuilt-deprecated.md) | - |-->
+<!--[Dimension](luis-reference-prebuilt-dimension.md):<br>volume<br>area<br>weight<br>information (ex: bit/byte)<br>length (ex: meter)<br>speed (ex: mile per hour) | - |-->
+<!--[GeographyV2](luis-reference-prebuilt-geographyV2.md) | - |-->
+<!--[Number](luis-reference-prebuilt-number.md) | - |-->
+<!--[Ordinal](luis-reference-prebuilt-ordinal.md) | - |-->
+<!--[OrdinalV2](luis-reference-prebuilt-ordinal-v2.md) | - |-->
+<!--[Percentage](luis-reference-prebuilt-percentage.md) | - |-->
+<!--[PersonName](luis-reference-prebuilt-person.md) | - |-->
+<!--[Temperature](luis-reference-prebuilt-temperature.md):<br>fahrenheit<br>kelvin<br>rankine<br>delisle<br>celsius | - |-->
+
+## Portuguese (Brazil) entity support
+
+The following entities are supported:
+
+| Prebuilt entity | pt-BR |
+| -- | :--: |
+[Age](luis-reference-prebuilt-age.md):<br>year<br>month<br>week<br>day | V2, V3 |
+[Currency (money)](luis-reference-prebuilt-currency.md):<br>dollar<br>fractional unit (ex: penny) | V2, V3 |
+[DatetimeV2](luis-reference-prebuilt-datetimev2.md):<br>date<br>daterange<br>time<br>timerange | V2, V3 |
+[Dimension](luis-reference-prebuilt-dimension.md):<br>volume<br>area<br>weight<br>information (ex: bit/byte)<br>length (ex: meter)<br>speed (ex: mile per hour) | V2, V3 |
+[Email](luis-reference-prebuilt-email.md) | V2, V3 |
+[KeyPhrase](luis-reference-prebuilt-keyphrase.md) | V2, V3 |
+[Number](luis-reference-prebuilt-number.md) | V2, V3 |
+[Ordinal](luis-reference-prebuilt-ordinal.md) | V2, V3 |
+[Percentage](luis-reference-prebuilt-percentage.md) | V2, V3 |
+[Phonenumber](luis-reference-prebuilt-phonenumber.md) | V2, V3 |
+[Temperature](luis-reference-prebuilt-temperature.md):<br>fahrenheit<br>kelvin<br>rankine<br>delisle<br>celsius | V2, V3 |
+[URL](luis-reference-prebuilt-url.md) | V2, V3 |
+<!--[GeographyV2](luis-reference-prebuilt-geographyV2.md) | - |-->
+<!--[OrdinalV2](luis-reference-prebuilt-ordinal-v2.md) | - |-->
+<!--[PersonName](luis-reference-prebuilt-person.md) | - |-->
+
+KeyPhrase is not available in all subcultures of Portuguese (Brazil) - `pt-BR`.
+
+## Spanish (Spain) entity support
+
+The following entities are supported:
+
+| Prebuilt entity | es-ES |
+| -- | :--: |
+[Age](luis-reference-prebuilt-age.md):<br>year<br>month<br>week<br>day | V2, V3 |
+[Currency (money)](luis-reference-prebuilt-currency.md):<br>dollar<br>fractional unit (ex: penny) | V2, V3 |
+[DatetimeV2](luis-reference-prebuilt-datetimev2.md):<br>date<br>daterange<br>time<br>timerange | V2, V3 |
+[Dimension](luis-reference-prebuilt-dimension.md):<br>volume<br>area<br>weight<br>information (ex: bit/byte)<br>length (ex: meter)<br>speed (ex: mile per hour) | V2, V3 |
+[Email](luis-reference-prebuilt-email.md) | V2, V3 |
+[KeyPhrase](luis-reference-prebuilt-keyphrase.md) | V2, V3 |
+[Number](luis-reference-prebuilt-number.md) | V2, V3 |
+[Ordinal](luis-reference-prebuilt-ordinal.md) | V2, V3 |
+[Percentage](luis-reference-prebuilt-percentage.md) | V2, V3 |
+[Phonenumber](luis-reference-prebuilt-phonenumber.md) | V2, V3 |
+[Temperature](luis-reference-prebuilt-temperature.md):<br>fahrenheit<br>kelvin<br>rankine<br>delisle<br>celsius | V2, V3 |
+[URL](luis-reference-prebuilt-url.md) | V2, V3 |
+<!--[GeographyV2](luis-reference-prebuilt-geographyV2.md) | - |-->
+<!--[OrdinalV2](luis-reference-prebuilt-ordinal-v2.md) | - |-->
+<!--[PersonName](luis-reference-prebuilt-person.md) | - |-->
+
+## Spanish (Mexico) entity support
+
+The following entities are supported:
+
+| Prebuilt entity | es-MX |
+| -- | :--: |
+[Age](luis-reference-prebuilt-age.md):<br>year<br>month<br>week<br>day | - |
+[Currency (money)](luis-reference-prebuilt-currency.md):<br>dollar<br>fractional unit (ex: penny) | - |
+[DatetimeV2](luis-reference-prebuilt-datetimev2.md):<br>date<br>daterange<br>time<br>timerange | - |
+[Dimension](luis-reference-prebuilt-dimension.md):<br>volume<br>area<br>weight<br>information (ex: bit/byte)<br>length (ex: meter)<br>speed (ex: mile per hour) | - |
+[Email](luis-reference-prebuilt-email.md) | V2, V3 |
+[KeyPhrase](luis-reference-prebuilt-keyphrase.md) | V2, V3 |
+[Number](luis-reference-prebuilt-number.md) | V2, V3 |
+[Ordinal](luis-reference-prebuilt-ordinal.md) | - |
+[Percentage](luis-reference-prebuilt-percentage.md) | - |
+[Phonenumber](luis-reference-prebuilt-phonenumber.md) | V2, V3 |
+[Temperature](luis-reference-prebuilt-temperature.md):<br>fahrenheit<br>kelvin<br>rankine<br>delisle<br>celsius | - |
+[URL](luis-reference-prebuilt-url.md) | V2, V3 |
+<!--[GeographyV2](luis-reference-prebuilt-geographyV2.md) | - |-->
+<!--[OrdinalV2](luis-reference-prebuilt-ordinal-v2.md) | - |-->
+<!--[PersonName](luis-reference-prebuilt-person.md) | - |-->
+
+<!-- See notes on [Deprecated prebuilt entities](luis-reference-prebuilt-deprecated.md)-->
+
+## Turkish entity support
+
+| Prebuilt entity | tr-tr |
+| -- | :--: |
+[Age](luis-reference-prebuilt-age.md):<br>year<br>month<br>week<br>day | - |
+[Currency (money)](luis-reference-prebuilt-currency.md):<br>dollar<br>fractional unit (ex: penny) | - |
+[DatetimeV2](luis-reference-prebuilt-datetimev2.md):<br>date<br>daterange<br>time<br>timerange | - |
+[Dimension](luis-reference-prebuilt-dimension.md):<br>volume<br>area<br>weight<br>information (ex: bit/byte)<br>length (ex: meter)<br>speed (ex: mile per hour) | - |
+[Email](luis-reference-prebuilt-email.md) | - |
+[Number](luis-reference-prebuilt-number.md) | - |
+[Ordinal](luis-reference-prebuilt-ordinal.md) | - |
+[Percentage](luis-reference-prebuilt-percentage.md) | - |
+[Temperature](luis-reference-prebuilt-temperature.md):<br>fahrenheit<br>kelvin<br>rankine<br>delisle<br>celsius | - |
+[URL](luis-reference-prebuilt-url.md) | - |
+<!--[KeyPhrase](luis-reference-prebuilt-keyphrase.md) | V2, V3 |-->
+<!--[Phonenumber](luis-reference-prebuilt-phonenumber.md) | - |-->
+
+<!-- See notes on [Deprecated prebuilt entities](luis-reference-prebuilt-deprecated.md). -->
+
+## Contribute to prebuilt entity cultures
+The prebuilt entities are developed in the Recognizers-Text open-source project. [Contribute](https://github.com/Microsoft/Recognizers-Text) to the project. This project includes examples of currency per culture.
+
+GeographyV2 and PersonName are not included in the Recognizers-Text project. For issues with these prebuilt entities, please open a [support request](../../azure-portal/supportability/how-to-create-azure-support-request.md).
+
+## Next steps
+
+Learn about the [number](luis-reference-prebuilt-number.md), [datetimeV2](luis-reference-prebuilt-datetimev2.md), and [currency](luis-reference-prebuilt-currency.md) entities.
ai-services Luis Reference Prebuilt Geographyv2 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/LUIS/luis-reference-prebuilt-geographyV2.md
+
+ Title: Geography V2 prebuilt entity - LUIS
+
+description: This article contains geographyV2 prebuilt entity information in Language Understanding (LUIS).
++++++++ Last updated : 10/04/2019++
+# GeographyV2 prebuilt entity for a LUIS app
++
+The prebuilt geographyV2 entity detects places. Because this entity is already trained, you do not need to add example utterances containing GeographyV2 to the application intents. The GeographyV2 entity is supported in the English [culture](luis-reference-prebuilt-entities.md).
+
+## Subtypes
+The geographical locations have subtypes:
+
+|Subtype|Purpose|
+|--|--|
+|`poi`|point of interest|
+|`city`|name of city|
+|`countryRegion`|name of country or region|
+|`continent`|name of continent|
+|`state`|name of state or province|
++
+## Resolution for GeographyV2 entity
+
+The following entity objects are returned for the query:
+
+`Carol is visiting the sphinx in gizah egypt in africa before heading to texas.`
+
+#### [V3 response](#tab/V3)
+
+The following JSON is with the `verbose` parameter set to `false`:
+
+```json
+"entities": {
+ "geographyV2": [
+ {
+ "value": "the sphinx",
+ "type": "poi"
+ },
+ {
+ "value": "gizah",
+ "type": "city"
+ },
+ {
+ "value": "egypt",
+ "type": "countryRegion"
+ },
+ {
+ "value": "africa",
+ "type": "continent"
+ },
+ {
+ "value": "texas",
+ "type": "state"
+ }
+ ]
+}
+```
+
+In the preceding JSON, `poi` is an abbreviation for **Point of Interest**.
+
+#### [V3 verbose response](#tab/V3-verbose)
+
+The following JSON is with the `verbose` parameter set to `true`:
+
+```json
+"entities": {
+ "geographyV2": [
+ {
+ "value": "the sphinx",
+ "type": "poi"
+ },
+ {
+ "value": "gizah",
+ "type": "city"
+ },
+ {
+ "value": "egypt",
+ "type": "countryRegion"
+ },
+ {
+ "value": "africa",
+ "type": "continent"
+ },
+ {
+ "value": "texas",
+ "type": "state"
+ }
+ ],
+ "$instance": {
+ "geographyV2": [
+ {
+ "type": "builtin.geographyV2.poi",
+ "text": "the sphinx",
+ "startIndex": 18,
+ "length": 10,
+ "modelTypeId": 2,
+ "modelType": "Prebuilt Entity Extractor",
+ "recognitionSources": [
+ "model"
+ ]
+ },
+ {
+ "type": "builtin.geographyV2.city",
+ "text": "gizah",
+ "startIndex": 32,
+ "length": 5,
+ "modelTypeId": 2,
+ "modelType": "Prebuilt Entity Extractor",
+ "recognitionSources": [
+ "model"
+ ]
+ },
+ {
+ "type": "builtin.geographyV2.countryRegion",
+ "text": "egypt",
+ "startIndex": 38,
+ "length": 5,
+ "modelTypeId": 2,
+ "modelType": "Prebuilt Entity Extractor",
+ "recognitionSources": [
+ "model"
+ ]
+ },
+ {
+ "type": "builtin.geographyV2.continent",
+ "text": "africa",
+ "startIndex": 47,
+ "length": 6,
+ "modelTypeId": 2,
+ "modelType": "Prebuilt Entity Extractor",
+ "recognitionSources": [
+ "model"
+ ]
+ },
+ {
+ "type": "builtin.geographyV2.state",
+ "text": "texas",
+ "startIndex": 72,
+ "length": 5,
+ "modelTypeId": 2,
+ "modelType": "Prebuilt Entity Extractor",
+ "recognitionSources": [
+ "model"
+ ]
+ }
+ ]
+ }
+}
+```
+#### [V2 response](#tab/V2)
+
+The following example shows the resolution of the **builtin.geographyV2** entity.
+
+```json
+"entities": [
+ {
+ "entity": "the sphinx",
+ "type": "builtin.geographyV2.poi",
+ "startIndex": 18,
+ "endIndex": 27
+ },
+ {
+ "entity": "gizah",
+ "type": "builtin.geographyV2.city",
+ "startIndex": 32,
+ "endIndex": 36
+ },
+ {
+ "entity": "egypt",
+ "type": "builtin.geographyV2.countryRegion",
+ "startIndex": 38,
+ "endIndex": 42
+ },
+ {
+ "entity": "africa",
+ "type": "builtin.geographyV2.continent",
+ "startIndex": 47,
+ "endIndex": 52
+ },
+ {
+ "entity": "texas",
+ "type": "builtin.geographyV2.state",
+ "startIndex": 72,
+ "endIndex": 76
+ },
+ {
+ "entity": "carol",
+ "type": "builtin.personName",
+ "startIndex": 0,
+ "endIndex": 4
+ }
+]
+```
+* * *
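+
+The following JavaScript sketch shows one way a client application could group the returned `geographyV2` values by subtype. It is a minimal illustration only: the `response` object mirrors the non-verbose V3 sample above, and how you obtain it (for example, by calling the prediction endpoint) is up to your client.
+
+```javascript
+// Minimal sketch: group the geographyV2 predictions shown above by subtype.
+// In a real client, `response` would be the JSON returned by the prediction endpoint.
+const response = {
+  entities: {
+    geographyV2: [
+      { value: "the sphinx", type: "poi" },
+      { value: "gizah", type: "city" },
+      { value: "egypt", type: "countryRegion" },
+      { value: "africa", type: "continent" },
+      { value: "texas", type: "state" }
+    ]
+  }
+};
+
+// Build a map of subtype -> values, for example { poi: ["the sphinx"], city: ["gizah"], ... }.
+const bySubtype = {};
+for (const place of response.entities.geographyV2) {
+  (bySubtype[place.type] = bySubtype[place.type] || []).push(place.value);
+}
+
+console.log(bySubtype);
+```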
+
+## Next steps
+
+Learn more about the [V3 prediction endpoint](luis-migration-api-v3.md).
+
+Learn about the [email](luis-reference-prebuilt-email.md), [number](luis-reference-prebuilt-number.md), and [ordinal](luis-reference-prebuilt-ordinal.md) entities.
ai-services Luis Reference Prebuilt Keyphrase https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/LUIS/luis-reference-prebuilt-keyphrase.md
+
+ Title: Keyphrase prebuilt entity - LUIS
+
+description: This article contains keyphrase prebuilt entity information in Language Understanding (LUIS).
++++++++ Last updated : 10/28/2021++
+# keyPhrase prebuilt entity for a LUIS app
++
+The keyPhrase entity extracts a variety of key phrases from an utterance. You don't need to add example utterances containing keyPhrase to the application. The keyPhrase entity is supported in [many cultures](luis-language-support.md#languages-supported) as part of the [Language service](../language-service/overview.md) features.
+
+## Resolution for prebuilt keyPhrase entity
+
+The following entity objects are returned for the query:
+
+`where is the educational requirements form for the development and engineering group`
+
+#### [V3 response](#tab/V3)
+
+The following JSON is with the `verbose` parameter set to `false`:
+
+```json
+"entities": {
+ "keyPhrase": [
+ "educational requirements",
+ "development"
+ ]
+}
+```
+#### [V3 verbose response](#tab/V3-verbose)
+The following JSON is with the `verbose` parameter set to `true`:
+
+```json
+"entities": {
+ "keyPhrase": [
+ "educational requirements",
+ "development"
+ ],
+ "$instance": {
+ "keyPhrase": [
+ {
+ "type": "builtin.keyPhrase",
+ "text": "educational requirements",
+ "startIndex": 13,
+ "length": 24,
+ "modelTypeId": 2,
+ "modelType": "Prebuilt Entity Extractor",
+ "recognitionSources": [
+ "model"
+ ]
+ },
+ {
+ "type": "builtin.keyPhrase",
+ "text": "development",
+ "startIndex": 51,
+ "length": 11,
+ "modelTypeId": 2,
+ "modelType": "Prebuilt Entity Extractor",
+ "recognitionSources": [
+ "model"
+ ]
+ }
+ ]
+ }
+}
+```
+#### [V2 response](#tab/V2)
+
+The following example shows the resolution of the **builtin.keyPhrase** entity.
+
+```json
+"entities": [
+ {
+ "entity": "development",
+ "type": "builtin.keyPhrase",
+ "startIndex": 51,
+ "endIndex": 61
+ },
+ {
+ "entity": "educational requirements",
+ "type": "builtin.keyPhrase",
+ "startIndex": 13,
+ "endIndex": 36
+ }
+]
+```
+* * *
+
+## Next steps
+
+Learn more about the [V3 prediction endpoint](luis-migration-api-v3.md).
+
+Learn about the [percentage](luis-reference-prebuilt-percentage.md), [number](luis-reference-prebuilt-number.md), and [age](luis-reference-prebuilt-age.md) entities.
ai-services Luis Reference Prebuilt Number https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/LUIS/luis-reference-prebuilt-number.md
+
+ Title: Number Prebuilt entity - LUIS
+
+description: This article contains number prebuilt entity information in Language Understanding (LUIS).
++++++++ Last updated : 09/27/2019++
+# Number prebuilt entity for a LUIS app
++
+There are many ways in which numeric values are used to quantify, express, and describe pieces of information. This article covers only some of the possible examples. LUIS interprets the variations in user utterances and returns consistent numeric values. Because this entity is already trained, you do not need to add example utterances containing number to the application intents.
+
+## Types of number
+Number is managed from the [Recognizers-text](https://github.com/Microsoft/Recognizers-Text/blob/master/Patterns/English/English-Numbers.yaml) GitHub repository.
+
+## Examples of number resolution
+
+| Utterance | Entity | Resolution |
+| - |:-:| --:|
+| ```one thousand times``` | ```"one thousand"``` | ```"1000"``` |
+| ```1,000 people``` | ```"1,000"``` | ```"1000"``` |
+| ```1/2 cup``` | ```"1 / 2"``` | ```"0.5"``` |
+| ```one half the amount``` | ```"one half"``` | ```"0.5"``` |
+| ```one hundred fifty orders``` | ```"one hundred fifty"``` | ```"150"``` |
+| ```one hundred and fifty books``` | ```"one hundred and fifty"``` | ```"150"```|
+| ```a grade of one point five```| ```"one point five"``` | ```"1.5"``` |
+| ```buy two dozen eggs``` | ```"two dozen"``` | ```"24"``` |
++
+LUIS includes the recognized value of a **`builtin.number`** entity in the `resolution` field of the JSON response it returns.
+
+## Resolution for prebuilt number
+
+The following entity objects are returned for the query:
+
+`order two dozen eggs`
+
+#### [V3 response](#tab/V3)
+
+The following JSON is with the `verbose` parameter set to `false`:
+
+```json
+"entities": {
+ "number": [
+ 24
+ ]
+}
+```
+#### [V3 verbose response](#tab/V3-verbose)
+
+The following JSON is with the `verbose` parameter set to `true`:
+
+```json
+"entities": {
+ "number": [
+ 24
+ ],
+ "$instance": {
+ "number": [
+ {
+ "type": "builtin.number",
+ "text": "two dozen",
+ "startIndex": 6,
+ "length": 9,
+ "modelTypeId": 2,
+ "modelType": "Prebuilt Entity Extractor",
+ "recognitionSources": [
+ "model"
+ ]
+ }
+ ]
+ }
+}
+```
+#### [V2 response](#tab/V2)
+
+The following example shows a JSON response from LUIS that includes the resolution of the value 24 for the utterance "two dozen".
+
+```json
+"entities": [
+ {
+ "entity": "two dozen",
+ "type": "builtin.number",
+ "startIndex": 6,
+ "endIndex": 14,
+ "resolution": {
+ "subtype": "integer",
+ "value": "24"
+ }
+ }
+]
+```
+* * *
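+
+The following JavaScript sketch shows one way to request this prediction and read the numeric resolutions from the V2 response shape shown above. It is a minimal sketch, assuming Node.js 18+ for the global `fetch` API; the `westus` host, app ID, and subscription key are placeholders, so substitute the endpoint URL format for your own publishing region and resource.
+
+```javascript
+// Minimal sketch: query the V2 endpoint and collect the resolved numeric values.
+// YOUR-APP-ID and YOUR-SUBSCRIPTION-KEY are placeholders for your own values.
+const appId = "YOUR-APP-ID";
+const subscriptionKey = "YOUR-SUBSCRIPTION-KEY";
+const utterance = "order two dozen eggs";
+
+const url = `https://westus.api.cognitive.microsoft.com/luis/v2.0/apps/${appId}` +
+  `?subscription-key=${subscriptionKey}&q=${encodeURIComponent(utterance)}`;
+
+fetch(url)
+  .then(response => response.json())
+  .then(prediction => {
+    // For "two dozen", the resolution value shown above is "24".
+    const numbers = (prediction.entities || [])
+      .filter(entity => entity.type === "builtin.number")
+      .map(entity => entity.resolution.value);
+    console.log(numbers);
+  });
+```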
+
+## Next steps
+
+Learn more about the [V3 prediction endpoint](luis-migration-api-v3.md).
+
+Learn about the [currency](luis-reference-prebuilt-currency.md), [ordinal](luis-reference-prebuilt-ordinal.md), and [percentage](luis-reference-prebuilt-percentage.md) entities.
ai-services Luis Reference Prebuilt Ordinal V2 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/LUIS/luis-reference-prebuilt-ordinal-v2.md
+
+ Title: Ordinal V2 prebuilt entity - LUIS
+
+description: This article contains ordinal V2 prebuilt entity information in Language Understanding (LUIS).
++++++++ Last updated : 09/27/2019++
+# Ordinal V2 prebuilt entity for a LUIS app
++
+Ordinal V2 number expands [Ordinal](luis-reference-prebuilt-ordinal.md) to provide relative references such as `next`, `last`, and `previous`. These are not extracted using the ordinal prebuilt entity.
+
+## Resolution for prebuilt ordinal V2 entity
+
+The following entity objects are returned for the query:
+
+`what is the second to last choice in the list`
+
+#### [V3 response](#tab/V3)
+
+The following JSON is with the `verbose` parameter set to `false`:
+
+```json
+"entities": {
+ "ordinalV2": [
+ {
+ "offset": -1,
+ "relativeTo": "end"
+ }
+ ]
+}
+```
+
+#### [V3 verbose response](#tab/V3-verbose)
+
+The following JSON is with the `verbose` parameter set to `true`:
+
+```json
+"entities": {
+ "ordinalV2": [
+ {
+ "offset": -1,
+ "relativeTo": "end"
+ }
+ ],
+ "$instance": {
+ "ordinalV2": [
+ {
+ "type": "builtin.ordinalV2.relative",
+ "text": "the second to last",
+ "startIndex": 8,
+ "length": 18,
+ "modelTypeId": 2,
+ "modelType": "Prebuilt Entity Extractor",
+ "recognitionSources": [
+ "model"
+ ]
+ }
+ ]
+ }
+}
+```
+#### [V2 response](#tab/V2)
+
+The following example shows the resolution of the **builtin.ordinalV2** entity.
+
+```json
+"entities": [
+ {
+ "entity": "the second to last",
+ "type": "builtin.ordinalV2.relative",
+ "startIndex": 8,
+ "endIndex": 25,
+ "resolution": {
+ "offset": "-1",
+ "relativeTo": "end"
+ }
+ }
+]
+```
+* * *
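+
+Because the resolution is relative (`offset` and `relativeTo`) rather than an absolute index, the client application decides what it refers to. The following JavaScript sketch shows one possible interpretation that maps the resolution above onto a list of choices; the handling of `relativeTo: "start"` is an assumption for illustration, and other relative references (such as `next` and `previous`) are not covered here.
+
+```javascript
+// Minimal sketch: turn an ordinalV2 resolution, such as { offset: -1, relativeTo: "end" }
+// returned above for "the second to last", into a zero-based index into a list.
+function resolveOrdinalV2(ordinal, list) {
+  if (ordinal.relativeTo === "end") {
+    // Offset 0 is the last item, -1 is the second to last, and so on.
+    return list.length - 1 + ordinal.offset;
+  }
+  // Assumed handling for relativeTo "start": offset 1 is the first item, 2 the second, and so on.
+  return ordinal.offset - 1;
+}
+
+const choices = ["small", "medium", "large", "extra large"];
+const ordinal = { offset: -1, relativeTo: "end" };
+
+console.log(choices[resolveOrdinalV2(ordinal, choices)]); // "large"
+```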
+
+## Next steps
+
+Learn more about the [V3 prediction endpoint](luis-migration-api-v3.md).
+
+Learn about the [percentage](luis-reference-prebuilt-percentage.md), [phone number](luis-reference-prebuilt-phonenumber.md), and [temperature](luis-reference-prebuilt-temperature.md) entities.
ai-services Luis Reference Prebuilt Ordinal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/LUIS/luis-reference-prebuilt-ordinal.md
+
+ Title: Ordinal Prebuilt entity - LUIS
+
+description: This article contains ordinal prebuilt entity information in Language Understanding (LUIS).
++++++++ Last updated : 10/14/2019++
+# Ordinal prebuilt entity for a LUIS app
++
+Ordinal number is a numeric representation of an object inside a set: `first`, `second`, `third`. Because this entity is already trained, you do not need to add example utterances containing ordinal to the application intents. Ordinal entity is supported in [many cultures](luis-reference-prebuilt-entities.md).
+
+## Types of ordinal
+Ordinal is managed from the [Recognizers-text](https://github.com/Microsoft/Recognizers-Text/blob/master/Patterns/English/English-Numbers.yaml#L45) GitHub repository.
+
+## Resolution for prebuilt ordinal entity
+
+The following entity objects are returned for the query:
+
+`Order the second option`
+
+#### [V3 response](#tab/V3)
+
+The following JSON is with the `verbose` parameter set to `false`:
+
+```json
+"entities": {
+ "ordinal": [
+ 2
+ ]
+}
+```
+#### [V3 verbose response](#tab/V3-verbose)
+The following JSON is with the `verbose` parameter set to `true`:
+
+```json
+"entities": {
+ "ordinal": [
+ 2
+ ],
+ "$instance": {
+ "ordinal": [
+ {
+ "type": "builtin.ordinal",
+ "text": "second",
+ "startIndex": 10,
+ "length": 6,
+ "modelTypeId": 2,
+ "modelType": "Prebuilt Entity Extractor",
+ "recognitionSources": [
+ "model"
+ ]
+ }
+ ]
+ }
+}
+```
+
+#### [V2 response](#tab/V2)
+
+The following example shows the resolution of the **builtin.ordinal** entity.
+
+```json
+"entities": [
+ {
+ "entity": "second",
+ "type": "builtin.ordinal",
+ "startIndex": 10,
+ "endIndex": 15,
+ "resolution": {
+ "value": "2"
+ }
+ }
+]
+```
+* * *
+
+## Next steps
+
+Learn more about the [V3 prediction endpoint](luis-migration-api-v3.md).
+
+Learn about the [OrdinalV2](luis-reference-prebuilt-ordinal-v2.md), [phone number](luis-reference-prebuilt-phonenumber.md), and [temperature](luis-reference-prebuilt-temperature.md) entities.
ai-services Luis Reference Prebuilt Percentage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/LUIS/luis-reference-prebuilt-percentage.md
+
+ Title: Percentage Prebuilt entity - LUIS
+
+description: This article contains percentage prebuilt entity information in Language Understanding (LUIS).
++++++++ Last updated : 09/27/2019++
+# Percentage prebuilt entity for a LUIS app
++
+Percentage numbers can appear as fractions, `3 1/2`, or as percentage, `2%`. Because this entity is already trained, you do not need to add example utterances containing percentage to the application intents. Percentage entity is supported in [many cultures](luis-reference-prebuilt-entities.md).
+
+## Types of percentage
+Percentage is managed from the [Recognizers-text](https://github.com/Microsoft/Recognizers-Text/blob/master/Patterns/English/English-Numbers.yaml#L114) GitHub repository.
+
+## Resolution for prebuilt percentage entity
+
+The following entity objects are returned for the query:
+
+`set a trigger when my stock goes up 2%`
+
+#### [V3 response](#tab/V3)
+
+The following JSON is with the `verbose` parameter set to `false`:
+
+```json
+"entities": {
+ "percentage": [
+ 2
+ ]
+}
+```
+#### [V3 verbose response](#tab/V3-verbose)
+The following JSON is with the `verbose` parameter set to `true`:
+
+```json
+"entities": {
+ "percentage": [
+ 2
+ ],
+ "$instance": {
+ "percentage": [
+ {
+ "type": "builtin.percentage",
+ "text": "2%",
+ "startIndex": 36,
+ "length": 2,
+ "modelTypeId": 2,
+ "modelType": "Prebuilt Entity Extractor",
+ "recognitionSources": [
+ "model"
+ ]
+ }
+ ]
+ }
+}
+```
+#### [V2 response](#tab/V2)
+
+The following example shows the resolution of the **builtin.percentage** entity.
+
+```json
+"entities": [
+ {
+ "entity": "2%",
+ "type": "builtin.percentage",
+ "startIndex": 36,
+ "endIndex": 37,
+ "resolution": {
+ "value": "2%"
+ }
+ }
+]
+```
+* * *
+
+## Next steps
+
+Learn more about the [V3 prediction endpoint](luis-migration-api-v3.md).
+
+Learn about the [ordinal](luis-reference-prebuilt-ordinal.md), [number](luis-reference-prebuilt-number.md), and [temperature](luis-reference-prebuilt-temperature.md) entities.
ai-services Luis Reference Prebuilt Person https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/LUIS/luis-reference-prebuilt-person.md
+
+ Title: PersonName prebuilt entity - LUIS
+
+description: This article contains personName prebuilt entity information in Language Understanding (LUIS).
++++++++ Last updated : 05/07/2019++
+# PersonName prebuilt entity for a LUIS app
++
+The prebuilt personName entity detects names of people. Because this entity is already trained, you do not need to add example utterances containing personName to the application intents. The personName entity is supported in the English and Chinese [cultures](luis-reference-prebuilt-entities.md).
+
+## Resolution for personName entity
+
+The following entity objects are returned for the query:
+
+`Is Jill Jones in Cairo?`
++
+#### [V3 response](#tab/V3)
++
+The following JSON is with the `verbose` parameter set to `false`:
+
+```json
+"entities": {
+ "personName": [
+ "Jill Jones"
+ ]
+}
+```
+#### [V3 verbose response](#tab/V3-verbose)
+The following JSON is with the `verbose` parameter set to `true`:
+
+```json
+"entities": {
+ "personName": [
+ "Jill Jones"
+ ],
+ "$instance": {
+ "personName": [
+ {
+ "type": "builtin.personName",
+ "text": "Jill Jones",
+ "startIndex": 3,
+ "length": 10,
+ "modelTypeId": 2,
+ "modelType": "Prebuilt Entity Extractor",
+ "recognitionSources": [
+ "model"
+ ]
+ }
+        ]
+ }
+}
+```
+#### [V2 response](#tab/V2)
+
+The following example shows the resolution of the **builtin.personName** entity.
+
+```json
+"entities": [
+{
+ "entity": "Jill Jones",
+ "type": "builtin.personName",
+ "startIndex": 3,
+ "endIndex": 12
+}
+]
+```
+* * *
+
+## Next steps
+
+Learn more about the [V3 prediction endpoint](luis-migration-api-v3.md).
+
+Learn about the [email](luis-reference-prebuilt-email.md), [number](luis-reference-prebuilt-number.md), and [ordinal](luis-reference-prebuilt-ordinal.md) entities.
ai-services Luis Reference Prebuilt Phonenumber https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/LUIS/luis-reference-prebuilt-phonenumber.md
+
+ Title: Phone number Prebuilt entities - LUIS
+
+description: This article contains phone number prebuilt entity information in Language Understanding (LUIS).
++++++++ Last updated : 09/27/2019++
+# Phone number prebuilt entity for a LUIS app
++
+The `phonenumber` entity extracts a variety of phone numbers including country code. Because this entity is already trained, you do not need to add example utterances to the application. The `phonenumber` entity is supported in `en-us` culture only.
+
+## Types of a phone number
+`Phonenumber` is managed from the [Recognizers-text](https://github.com/Microsoft/Recognizers-Text/blob/master/Patterns/Base-PhoneNumbers.yaml) GitHub repository.
+
+## Resolution for this prebuilt entity
+
+The following entity objects are returned for the query:
+
+`my mobile is 1 (800) 642-7676`
+
+#### [V3 response](#tab/V3)
+
+The following JSON is with the `verbose` parameter set to `false`:
+
+```json
+"entities": {
+ "phonenumber": [
+ "1 (800) 642-7676"
+ ]
+}
+```
+#### [V3 verbose response](#tab/V3-verbose)
+The following JSON is with the `verbose` parameter set to `true`:
+
+```json
+"entities": {
+ "phonenumber": [
+ "1 (800) 642-7676"
+ ],
+ "$instance": {
+
+ "phonenumber": [
+ {
+ "type": "builtin.phonenumber",
+ "text": "1 (800) 642-7676",
+ "startIndex": 13,
+ "length": 16,
+ "score": 1.0,
+ "modelTypeId": 2,
+ "modelType": "Prebuilt Entity Extractor",
+ "recognitionSources": [
+ "model"
+ ]
+ }
+ ]
+ }
+}
+```
+#### [V2 response](#tab/V2)
+
+The following example shows the resolution of the **builtin.phonenumber** entity.
+
+```json
+"entities": [
+ {
+ "entity": "1 (800) 642-7676",
+ "type": "builtin.phonenumber",
+ "startIndex": 13,
+ "endIndex": 28,
+ "resolution": {
+ "score": "1",
+ "value": "1 (800) 642-7676"
+ }
+ }
+]
+```
+* * *
+
+## Next steps
+
+Learn more about the [V3 prediction endpoint](luis-migration-api-v3.md).
+
+Learn about the [percentage](luis-reference-prebuilt-percentage.md), [number](luis-reference-prebuilt-number.md), and [temperature](luis-reference-prebuilt-temperature.md) entities.
ai-services Luis Reference Prebuilt Sentiment https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/LUIS/luis-reference-prebuilt-sentiment.md
+
+ Title: Sentiment analysis - LUIS
+
+description: If Sentiment analysis is configured, the LUIS json response includes sentiment analysis.
++++++++ Last updated : 10/28/2021++
+# Sentiment analysis
++
+If Sentiment analysis is configured, the LUIS json response includes sentiment analysis. Learn more about sentiment analysis in the [Language service](../language-service/index.yml) documentation.
+
+LUIS uses V2 of the API.
+
+Sentiment Analysis is configured when publishing your application. See [how to publish an app](./how-to/publish.md) for more information.
+
+## Resolution for sentiment
+
+Sentiment data is a score between 0 and 1 indicating the positive (closer to 1) or negative (closer to 0) sentiment of the data.
+
+#### [English language](#tab/english)
+
+When culture is `en-us`, the response is:
+
+```JSON
+"sentimentAnalysis": {
+ "label": "positive",
+ "score": 0.9163064
+}
+```
+
+#### [Other languages](#tab/other-languages)
+
+For all other cultures, the response is:
+
+```JSON
+"sentimentAnalysis": {
+ "score": 0.9163064
+}
+```
+* * *
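+
+The following JavaScript sketch shows one way a client could consume both shapes of the response above: it uses the `label` when present (as in the `en-us` example) and otherwise falls back to thresholding the score. The 0.5 cutoff is an assumption for illustration, not service guidance.
+
+```javascript
+// Minimal sketch: summarize the sentimentAnalysis block shown above.
+function describeSentiment(sentimentAnalysis) {
+  if (!sentimentAnalysis) {
+    return "sentiment analysis not configured";
+  }
+  // Use the label if the culture returns one; otherwise approximate it from the score.
+  const label = sentimentAnalysis.label ||
+    (sentimentAnalysis.score >= 0.5 ? "positive" : "negative");
+  return `${label} (score ${sentimentAnalysis.score})`;
+}
+
+console.log(describeSentiment({ label: "positive", score: 0.9163064 })); // "positive (score 0.9163064)"
+console.log(describeSentiment({ score: 0.9163064 }));                    // "positive (score 0.9163064)"
+```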
+
+## Next steps
+
+Learn more about the [V3 prediction endpoint](luis-migration-api-v3.md).
ai-services Luis Reference Prebuilt Temperature https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/LUIS/luis-reference-prebuilt-temperature.md
+
+ Title: Temperature Prebuilt entity - LUIS
+
+description: This article contains temperature prebuilt entity information in Language Understanding (LUIS).
++++++++ Last updated : 10/14/2019++
+# Temperature prebuilt entity for a LUIS app
++
+Temperature extracts a variety of temperature types. Because this entity is already trained, you do not need to add example utterances containing temperature to the application. Temperature entity is supported in [many cultures](luis-reference-prebuilt-entities.md).
+
+## Types of temperature
+Temperature is managed from the [Recognizers-text](https://github.com/Microsoft/Recognizers-Text/blob/master/Patterns/English/English-NumbersWithUnit.yaml#L819) GitHub repository.
+
+## Resolution for prebuilt temperature entity
+
+The following entity objects are returned for the query:
+
+`set the temperature to 30 degrees`
++
+#### [V3 response](#tab/V3)
+
+The following JSON is with the `verbose` parameter set to `false`:
+
+```json
+"entities": {
+ "temperature": [
+ {
+ "number": 30,
+ "units": "Degree"
+ }
+ ]
+}
+```
+#### [V3 verbose response](#tab/V3-verbose)
+The following JSON is with the `verbose` parameter set to `true`:
+
+```json
+"entities": {
+ "temperature": [
+ {
+ "number": 30,
+ "units": "Degree"
+ }
+ ],
+ "$instance": {
+ "temperature": [
+ {
+ "type": "builtin.temperature",
+ "text": "30 degrees",
+ "startIndex": 23,
+ "length": 10,
+ "modelTypeId": 2,
+ "modelType": "Prebuilt Entity Extractor",
+ "recognitionSources": [
+ "model"
+ ]
+ }
+ ]
+ }
+}
+```
+#### [V2 response](#tab/V2)
+
+The following example shows the resolution of the **builtin.temperature** entity.
+
+```json
+"entities": [
+ {
+ "entity": "30 degrees",
+ "type": "builtin.temperature",
+ "startIndex": 23,
+ "endIndex": 32,
+ "resolution": {
+ "unit": "Degree",
+ "value": "30"
+ }
+ }
+]
+```
+* * *
+
+## Next steps
+
+Learn more about the [V3 prediction endpoint](luis-migration-api-v3.md).
+
+Learn about the [percentage](luis-reference-prebuilt-percentage.md), [number](luis-reference-prebuilt-number.md), and [age](luis-reference-prebuilt-age.md) entities.
ai-services Luis Reference Prebuilt Url https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/LUIS/luis-reference-prebuilt-url.md
+
+ Title: URL Prebuilt entities - LUIS
+
+description: This article contains url prebuilt entity information in Language Understanding (LUIS).
++++++++ Last updated : 10/04/2019++
+# URL prebuilt entity for a LUIS app
++
+URL entity extracts URLs with domain names or IP addresses. Because this entity is already trained, you do not need to add example utterances containing URLs to the application. URL entity is supported in `en-us` culture only.
+
+## Types of URLs
+URL is managed from the [Recognizers-text](https://github.com/Microsoft/Recognizers-Text/blob/master/Patterns/Base-URL.yaml) GitHub repository.
+
+## Resolution for prebuilt URL entity
+
+The following entity objects are returned for the query:
+
+`https://www.luis.ai is a great Azure AI services example of artificial intelligence`
+
+#### [V3 response](#tab/V3)
+
+The following JSON is with the `verbose` parameter set to `false`:
+
+```json
+"entities": {
+ "url": [
+ "https://www.luis.ai"
+ ]
+}
+```
+#### [V3 verbose response](#tab/V3-verbose)
+
+The following JSON is with the `verbose` parameter set to `true`:
+
+```json
+"entities": {
+ "url": [
+ "https://www.luis.ai"
+ ],
+ "$instance": {
+ "url": [
+ {
+ "type": "builtin.url",
+ "text": "https://www.luis.ai",
+ "startIndex": 0,
+ "length": 17,
+ "modelTypeId": 2,
+ "modelType": "Prebuilt Entity Extractor",
+ "recognitionSources": [
+ "model"
+ ]
+ }
+ ]
+ }
+}
+```
+#### [V2 response](#tab/V2)
+
+The following example shows the resolution of the **builtin.url** entity for the query `https://www.luis.ai is a great Azure AI services example of artificial intelligence`.
+
+```json
+"entities": [
+ {
+ "entity": "https://www.luis.ai",
+ "type": "builtin.url",
+ "startIndex": 0,
+ "endIndex": 17
+ }
+]
+```
+
+* * *
+
+## Next steps
+
+Learn more about the [V3 prediction endpoint](luis-migration-api-v3.md).
+
+Learn about the [ordinal](luis-reference-prebuilt-ordinal.md), [number](luis-reference-prebuilt-number.md), and [temperature](luis-reference-prebuilt-temperature.md) entities.
ai-services Luis Reference Regions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/LUIS/luis-reference-regions.md
+
+ Title: Publishing regions & endpoints - LUIS
+description: The region specified in the Azure portal is the same where you will publish the LUIS app and an endpoint URL is generated for this same region.
+++++ Last updated : 02/08/2022+++
+# Authoring and publishing regions and the associated keys
+++
+LUIS authoring regions are supported by the LUIS portal. To publish a LUIS app to more than one region, you need at least one prediction key per region.
+
+<a name="luis-website"></a>
+
+## LUIS Authoring regions
+
+Authoring regions are the regions where the application gets created and the training takes place.
+
+LUIS has the following authoring regions available with [paired fail-over regions](../../availability-zones/cross-region-replication-azure.md):
+
+* Australia east
+* West Europe
+* West US
+* Switzerland north
+
+LUIS has one portal you can use regardless of region, [www.luis.ai](https://www.luis.ai).
+
+<a name="regions-and-azure-resources"></a>
+
+## Publishing regions and Azure resources
+
+Publishing regions are the regions where the application will be used in runtime. To use the application in a publishing region, you must create a resource in this region and assign your application to it. For example, if you create an app with the *westus* authoring region and publish it to the *eastus* and *brazilsouth* regions, the app will run in those two regions.
++
+## Public apps
+A public app is published in all regions so that a user with a supported prediction resource can access the app in all regions.
+
+<a name="publishing-regions"></a>
+
+## Publishing regions are tied to authoring regions
+
+When you first create your LUIS application, you are required to choose an [authoring region](#luis-authoring-regions). To use the application in runtime, you are required to create a resource in a publishing region.
+
+Every authoring region has corresponding prediction regions that you can publish your application to, which are listed in the tables below. If your app is currently in the wrong authoring region, export the app, and import it into the correct authoring region to match the required publishing region.
++
+## Single data residency
+
+Single data residency means that the data does not leave the boundaries of the region.
+
+> [!Note]
+> * Make sure to set `log=false` for [V3 APIs](https://westus.dev.cognitive.microsoft.com/docs/services/luis-endpoint-api-v3-0/operations/5cb0a91e54c9db63d589f433) to disable active learning. By default this value is `false`, to ensure that data does not leave the boundaries of the runtime region.
+> * If `log=true`, data is returned to the authoring region for active learning.
+
+## Publishing to Europe
+
+ Global region | Authoring API region | Publishing & querying region<br>`API region name` | Endpoint URL format |
+|--||||
+| Europe | `westeurope`| France Central<br>`francecentral` | `https://francecentral.api.cognitive.microsoft.com/luis/v2.0/apps/YOUR-APP-ID?subscription-key=YOUR-SUBSCRIPTION-KEY` |
+| Europe | `westeurope`| North Europe<br>`northeurope` | `https://northeurope.api.cognitive.microsoft.com/luis/v2.0/apps/YOUR-APP-ID?subscription-key=YOUR-SUBSCRIPTION-KEY` |
+| Europe | `westeurope`| West Europe<br>`westeurope` | `https://westeurope.api.cognitive.microsoft.com/luis/v2.0/apps/YOUR-APP-ID?subscription-key=YOUR-SUBSCRIPTION-KEY` |
+| Europe | `westeurope`| UK South<br>`uksouth` | `https://uksouth.api.cognitive.microsoft.com/luis/v2.0/apps/YOUR-APP-ID?subscription-key=YOUR-SUBSCRIPTION-KEY` |
+| Europe | `westeurope`| Switzerland North<br>`switzerlandnorth` | `https://switzerlandnorth.api.cognitive.microsoft.com/luis/v2.0/apps/YOUR-APP-ID?subscription-key=YOUR-SUBSCRIPTION-KEY` |
+| Europe | `westeurope`| Norway East<br>`norwayeast` | `https://norwayeast.api.cognitive.microsoft.com/luis/v2.0/apps/YOUR-APP-ID?subscription-key=YOUR-SUBSCRIPTION-KEY` |
+
+## Publishing to Australia
+
+ Global region | Authoring API region | Publishing & querying region<br>`API region name` | Endpoint URL format |
+|--||||
+| Australia | `australiaeast` | Australia East<br>`australiaeast` | `https://australiaeast.api.cognitive.microsoft.com/luis/v2.0/apps/YOUR-APP-ID?subscription-key=YOUR-SUBSCRIPTION-KEY` |
+
+## Other publishing regions
+
+ Global region | Authoring API region | Publishing & querying region<br>`API region name` | Endpoint URL format |
+|--||||
+| Africa | `westus`<br>[www.luis.ai][www.luis.ai]| South Africa North<br>`southafricanorth` | `https://southafricanorth.api.cognitive.microsoft.com/luis/v2.0/apps/YOUR-APP-ID?subscription-key=YOUR-SUBSCRIPTION-KEY` |
+| Asia | `westus`<br>[www.luis.ai][www.luis.ai]| Central India<br>`centralindia` | `https://centralindia.api.cognitive.microsoft.com/luis/v2.0/apps/YOUR-APP-ID?subscription-key=YOUR-SUBSCRIPTION-KEY` |
+| Asia | `westus`<br>[www.luis.ai][www.luis.ai]| East Asia<br>`eastasia` | `https://eastasia.api.cognitive.microsoft.com/luis/v2.0/apps/YOUR-APP-ID?subscription-key=YOUR-SUBSCRIPTION-KEY` |
+| Asia | `westus`<br>[www.luis.ai][www.luis.ai]| Japan East<br>`japaneast` | `https://japaneast.api.cognitive.microsoft.com/luis/v2.0/apps/YOUR-APP-ID?subscription-key=YOUR-SUBSCRIPTION-KEY` |
+| Asia | `westus`<br>[www.luis.ai][www.luis.ai]| Japan West<br>`japanwest` | `https://japanwest.api.cognitive.microsoft.com/luis/v2.0/apps/YOUR-APP-ID?subscription-key=YOUR-SUBSCRIPTION-KEY` |
+| Asia | `westus`<br>[www.luis.ai][www.luis.ai]| Jio India West<br>`jioindiawest` | `https://jioindiawest.api.cognitive.microsoft.com/luis/v2.0/apps/YOUR-APP-ID?subscription-key=YOUR-SUBSCRIPTION-KEY` |
+| Asia | `westus`<br>[www.luis.ai][www.luis.ai]| Korea Central<br>`koreacentral` | `https://koreacentral.api.cognitive.microsoft.com/luis/v2.0/apps/YOUR-APP-ID?subscription-key=YOUR-SUBSCRIPTION-KEY` |
+| Asia | `westus`<br>[www.luis.ai][www.luis.ai]| Southeast Asia<br>`southeastasia` | `https://southeastasia.api.cognitive.microsoft.com/luis/v2.0/apps/YOUR-APP-ID?subscription-key=YOUR-SUBSCRIPTION-KEY` |
+| Asia | `westus`<br>[www.luis.ai][www.luis.ai]| North UAE<br>`northuae` | `https://northuae.api.cognitive.microsoft.com/luis/v2.0/apps/YOUR-APP-ID?subscription-key=YOUR-SUBSCRIPTION-KEY` |
+| North America |`westus`<br>[www.luis.ai][www.luis.ai] | Canada Central<br>`canadacentral` | `https://canadacentral.api.cognitive.microsoft.com/luis/v2.0/apps/YOUR-APP-ID?subscription-key=YOUR-SUBSCRIPTION-KEY` |
+| North America |`westus`<br>[www.luis.ai][www.luis.ai] | Central US<br>`centralus` | `https://centralus.api.cognitive.microsoft.com/luis/v2.0/apps/YOUR-APP-ID?subscription-key=YOUR-SUBSCRIPTION-KEY` |
+| North America |`westus`<br>[www.luis.ai][www.luis.ai] | East US<br>`eastus` | `https://eastus.api.cognitive.microsoft.com/luis/v2.0/apps/YOUR-APP-ID?subscription-key=YOUR-SUBSCRIPTION-KEY` |
+| North America | `westus`<br>[www.luis.ai][www.luis.ai] | East US 2<br>`eastus2` | `https://eastus2.api.cognitive.microsoft.com/luis/v2.0/apps/YOUR-APP-ID?subscription-key=YOUR-SUBSCRIPTION-KEY` |
+| North America | `westus`<br>[www.luis.ai][www.luis.ai] | North Central US<br>`northcentralus` | `https://northcentralus.api.cognitive.microsoft.com/luis/v2.0/apps/YOUR-APP-ID?subscription-key=YOUR-SUBSCRIPTION-KEY` |
+| North America | `westus`<br>[www.luis.ai][www.luis.ai] | South Central US<br>`southcentralus` | `https://southcentralus.api.cognitive.microsoft.com/luis/v2.0/apps/YOUR-APP-ID?subscription-key=YOUR-SUBSCRIPTION-KEY` |
+| North America |`westus`<br>[www.luis.ai][www.luis.ai] | West Central US<br>`westcentralus` | `https://westcentralus.api.cognitive.microsoft.com/luis/v2.0/apps/YOUR-APP-ID?subscription-key=YOUR-SUBSCRIPTION-KEY` |
+| North America | `westus`<br>[www.luis.ai][www.luis.ai] | West US<br>`westus` | `https://westus.api.cognitive.microsoft.com/luis/v2.0/apps/YOUR-APP-ID?subscription-key=YOUR-SUBSCRIPTION-KEY` |
+| North America |`westus`<br>[www.luis.ai][www.luis.ai] | West US 2<br>`westus2` | `https://westus2.api.cognitive.microsoft.com/luis/v2.0/apps/YOUR-APP-ID?subscription-key=YOUR-SUBSCRIPTION-KEY` |
+| North America |`westus`<br>[www.luis.ai][www.luis.ai] | West US 3<br>`westus3` | `https://westus3.api.cognitive.microsoft.com/luis/v2.0/apps/YOUR-APP-ID?subscription-key=YOUR-SUBSCRIPTION-KEY` |
+| South America | `westus`<br>[www.luis.ai][www.luis.ai] | Brazil South<br>`brazilsouth` | `https://brazilsouth.api.cognitive.microsoft.com/luis/v2.0/apps/YOUR-APP-ID?subscription-key=YOUR-SUBSCRIPTION-KEY` |
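+
+All of the endpoint URLs in the preceding tables follow the same pattern, so a client can assemble them from the publishing region, app ID, and key. The following JavaScript sketch builds that V2 endpoint URL; the app ID and key are placeholders, and the utterance is passed in the `q` query string parameter as shown elsewhere in this documentation.
+
+```javascript
+// Minimal sketch: build the V2 endpoint URL from its parts, matching the format in the tables above.
+function luisEndpointUrl(region, appId, subscriptionKey, utterance) {
+  return `https://${region}.api.cognitive.microsoft.com/luis/v2.0/apps/${appId}` +
+    `?subscription-key=${subscriptionKey}&q=${encodeURIComponent(utterance)}`;
+}
+
+// Placeholder app ID and key; substitute your own values.
+console.log(luisEndpointUrl("brazilsouth", "YOUR-APP-ID", "YOUR-SUBSCRIPTION-KEY", "hello"));
+```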
+
+## Endpoints
+
+Learn more about the [authoring and prediction endpoints](developer-reference-resource.md).
+
+## Failover regions
+
+Each region has a secondary region to fail over to. Failover will only happen in the same geographical region.
+
+Authoring regions have [paired fail-over regions](../../availability-zones/cross-region-replication-azure.md).
+
+The following publishing regions do not have a failover region:
+
+* Brazil South
+* Southeast Asia
+
+## Next steps
++
+> [Prebuilt entities reference](./luis-reference-prebuilt-entities.md)
+
+ [www.luis.ai]: https://www.luis.ai
ai-services Luis Reference Response Codes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/LUIS/luis-reference-response-codes.md
+
+ Title: API HTTP response codes - LUIS
+
+description: Understand what HTTP response codes are returned from the LUIS Authoring and Endpoint APIs
++++++++ Last updated : 06/14/2022+++
+# Common API response codes and their meaning
+++
+The [authoring](https://go.microsoft.com/fwlink/?linkid=2092087) and [endpoint](https://go.microsoft.com/fwlink/?linkid=2092356) APIs return HTTP response codes. While response messages include information specific to a request, the HTTP response status code is general.
+
+## Common status codes
+The following table lists some of the most common HTTP response status codes for the [authoring](https://go.microsoft.com/fwlink/?linkid=2092087) and [endpoint](https://go.microsoft.com/fwlink/?linkid=2092356) APIs:
+
+|Code|API|Explanation|
+|:--|--|--|
+|400|Authoring, Endpoint|request's parameters are incorrect, meaning the required parameters are missing, malformed, or too large|
+|400|Authoring, Endpoint|request's body is incorrect, meaning the JSON is missing, malformed, or too large|
+|401|Authoring|used endpoint key, instead of authoring key|
+|401|Authoring, Endpoint|invalid, malformed, or empty key|
+|401|Authoring, Endpoint| key doesn't match region|
+|401|Authoring|you are not the owner or collaborator|
+|401|Authoring|invalid order of API calls|
+|403|Authoring, Endpoint|total monthly key quota limit exceeded|
+|409|Endpoint|application is still loading|
+|410|Endpoint|application needs to be retrained and republished|
+|414|Endpoint|query exceeds maximum character limit|
+|429|Authoring, Endpoint|Rate limit is exceeded (requests/second)|
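+
+For the 429 response in particular (requests per second exceeded), a client typically waits and retries rather than failing immediately. The following JavaScript sketch shows one generic retry pattern; the retry count and backoff delays are illustrative assumptions, and `url` stands in for whichever authoring or endpoint request you are making.
+
+```javascript
+// Minimal sketch: retry a request when the service returns 429 (rate limit exceeded).
+// Assumes Node.js 18+ for the global fetch API; retry counts and delays are illustrative only.
+async function queryWithRetry(url, maxRetries = 3) {
+  for (let attempt = 0; ; attempt++) {
+    const response = await fetch(url);
+    if (response.status !== 429 || attempt === maxRetries) {
+      return response; // Success, a non-retryable status, or out of retries.
+    }
+    const delayMs = 1000 * 2 ** attempt; // Back off: 1s, 2s, 4s, ...
+    await new Promise(resolve => setTimeout(resolve, delayMs));
+  }
+}
+```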
+
+## Next steps
+
+* REST API [authoring](https://westus.dev.cognitive.microsoft.com/docs/services/5890b47c39e2bb17b84a55ff/operations/5890b47c39e2bb052c5b9c2f) and [endpoint](https://westus.dev.cognitive.microsoft.com/docs/services/5819c76f40a6350ce09de1ac/operations/5819c77140a63516d81aee78) documentation
ai-services Luis Traffic Manager https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/LUIS/luis-traffic-manager.md
+
+ Title: Increase endpoint quota - LUIS
+
+description: Language Understanding (LUIS) offers the ability to increase the endpoint request quota beyond a single key's quota. This is done by creating more keys for LUIS and adding them to the LUIS application on the **Publish** page in the **Resources and Keys** section.
+++
+ms.devlang: javascript
+++++ Last updated : 08/20/2019
+#Customer intent: As an advanced user, I want to understand how to use multiple LUIS endpoint keys to increase the number of endpoint requests my application receives.
++
+# Use Microsoft Azure Traffic Manager to manage endpoint quota across keys
++
+Language Understanding (LUIS) offers the ability to increase the endpoint request quota beyond a single key's quota. This is done by creating more keys for LUIS and adding them to the LUIS application on the **Publish** page in the **Resources and Keys** section.
+
+The client application has to manage the traffic across the keys. LUIS doesn't do that.
+
+This article explains how to manage the traffic across keys with Azure [Traffic Manager][traffic-manager-marketing]. You must already have a trained and published LUIS app. If you do not have one, follow the Prebuilt domain [quickstart](luis-get-started-create-app.md).
++
+## Connect to PowerShell in the Azure portal
+In the [Azure][azure-portal] portal, open the PowerShell window. The icon for the PowerShell window is the **>_** in the top navigation bar. By using PowerShell from the portal, you get the latest PowerShell version and you are authenticated. PowerShell in the portal requires an [Azure Storage](https://azure.microsoft.com/services/storage/) account.
+
+![Screenshot of Azure portal with PowerShell window open](./media/traffic-manager/azure-portal-powershell.png)
+
+The following sections use [Traffic Manager PowerShell cmdlets](/powershell/module/az.trafficmanager/#traffic_manager).
+
+## Create Azure resource group with PowerShell
+Before creating the Azure resources, create a resource group to contain all the resources. Name the resource group `luis-traffic-manager` and use the `West US` region. The region of the resource group stores metadata about the group. It won't slow down your resources if they are in another region.
+
+Create the resource group with the **[New-AzResourceGroup](/powershell/module/az.resources/new-azresourcegroup)** cmdlet:
+
+```powerShell
+New-AzResourceGroup -Name luis-traffic-manager -Location "West US"
+```
+
+## Create LUIS keys to increase total endpoint quota
+1. In the Azure portal, create two **Language Understanding** keys, one in the `West US` region and one in the `East US` region. Use the existing resource group named `luis-traffic-manager`, created in the previous section.
+
+ ![Screenshot of Azure portal with two LUIS keys in luis-traffic-manager resource group](./media/traffic-manager/luis-keys.png)
+
+2. In the [LUIS][LUIS] website, in the **Manage** section, on the **Azure Resources** page, assign keys to the app, and republish the app by selecting the **Publish** button in the top right menu.
+
+ The example URL in the **endpoint** column uses a GET request with the endpoint key as a query parameter. Copy the two new keys' endpoint URLs. They are used as part of the Traffic Manager configuration later in this article.
+
+## Manage LUIS endpoint requests across keys with Traffic Manager
+Traffic Manager creates a new DNS access point for your endpoints. It does not act as a gateway or proxy; it works strictly at the DNS level. This example doesn't change any DNS records. It uses a DNS library to communicate with Traffic Manager to get the correct endpoint for that specific request. _Each_ request intended for LUIS first requires a Traffic Manager request to determine which LUIS endpoint to use.
+
+### Polling uses LUIS endpoint
+Traffic Manager polls the endpoints periodically to make sure each endpoint is still available. The polled Traffic Manager URL needs to be accessible with a GET request and return a 200. The endpoint URL on the **Publish** page does this. Since each endpoint key has a different route and query string parameters, each endpoint key needs a different polling path. Each time Traffic Manager polls, it costs one request from the endpoint quota. The query string parameter **q** of the LUIS endpoint is the utterance sent to LUIS. Instead of sending an utterance, this parameter is used here to add the Traffic Manager polling to the LUIS endpoint log as a debugging technique while you configure Traffic Manager.
+
+Because each LUIS endpoint needs its own path, it needs its own Traffic Manager profile. In order to manage across profiles, create a [_nested_ Traffic Manager](../../traffic-manager/traffic-manager-nested-profiles.md) architecture. One parent profile points to the child profiles and manages traffic across them.
+
+Once Traffic Manager is configured, remember to change the path to use the logging=false query string parameter so your log doesn't fill up with polling requests.
+
+## Configure Traffic Manager with nested profiles
+The following sections create two child profiles, one for the East LUIS key and one for the West LUIS key. Then a parent profile is created and the two child profiles are added to the parent profile.
+
+### Create the East US Traffic Manager profile with PowerShell
+To create the East US Traffic Manager profile, there are several steps: create profile, add endpoint, and set endpoint. A Traffic Manager profile can have many endpoints but each endpoint has the same validation path. Because the LUIS endpoint URLs for the east and west subscriptions are different due to region and endpoint key, each LUIS endpoint has to be a single endpoint in the profile.
+
+1. Create profile with **[New-AzTrafficManagerProfile](/powershell/module/az.trafficmanager/new-aztrafficmanagerprofile)** cmdlet
+
+    Use the following cmdlet to create the profile. Make sure to change the `appIdLuis` and `subscriptionKeyLuis`. The subscriptionKey is for the East US LUIS key. If the path is not correct, including the LUIS app ID and endpoint key, the Traffic Manager polling status is `degraded` because Traffic Manager can't successfully request the LUIS endpoint. Make sure the value of `q` is `traffic-manager-east` so you can see this value in the LUIS endpoint logs.
+
+ ```powerShell
+ $eastprofile = New-AzTrafficManagerProfile -Name luis-profile-eastus -ResourceGroupName luis-traffic-manager -TrafficRoutingMethod Performance -RelativeDnsName luis-dns-eastus -Ttl 30 -MonitorProtocol HTTPS -MonitorPort 443 -MonitorPath "/luis/v2.0/apps/<appID>?subscription-key=<subscriptionKey>&q=traffic-manager-east"
+ ```
+
+ This table explains each variable in the cmdlet:
+
+ |Configuration parameter|Variable name or Value|Purpose|
+ |--|--|--|
+ |-Name|luis-profile-eastus|Traffic Manager name in Azure portal|
+ |-ResourceGroupName|luis-traffic-manager|Created in previous section|
+ |-TrafficRoutingMethod|Performance|For more information, see [Traffic Manager routing methods][routing-methods]. If using performance, the URL request to the Traffic Manager must come from the region of the user. If going through a chatbot or other application, it is the chatbot's responsibility to mimic the region in the call to the Traffic Manager. |
+ |-RelativeDnsName|luis-dns-eastus|This is the subdomain for the service: luis-dns-eastus.trafficmanager.net|
+ |-Ttl|30|Polling interval, 30 seconds|
+ |-MonitorProtocol<BR>-MonitorPort|HTTPS<br>443|Port and protocol for LUIS is HTTPS/443|
+ |-MonitorPath|`/luis/v2.0/apps/<appIdLuis>?subscription-key=<subscriptionKeyLuis>&q=traffic-manager-east`|Replace `<appIdLuis>` and `<subscriptionKeyLuis>` with your own values.|
+
+ A successful request has no response.
+
+2. Add East US endpoint with **[Add-AzTrafficManagerEndpointConfig](/powershell/module/az.trafficmanager/add-aztrafficmanagerendpointconfig)** cmdlet
+
+ ```powerShell
+ Add-AzTrafficManagerEndpointConfig -EndpointName luis-east-endpoint -TrafficManagerProfile $eastprofile -Type ExternalEndpoints -Target eastus.api.cognitive.microsoft.com -EndpointLocation "eastus" -EndpointStatus Enabled
+ ```
+ This table explains each variable in the cmdlet:
+
+ |Configuration parameter|Variable name or Value|Purpose|
+ |--|--|--|
+ |-EndpointName|luis-east-endpoint|Endpoint name displayed under the profile|
+ |-TrafficManagerProfile|$eastprofile|Use profile object created in Step 1|
+ |-Type|ExternalEndpoints|For more information, see [Traffic Manager endpoint][traffic-manager-endpoints] |
+ |-Target|eastus.api.cognitive.microsoft.com|This is the domain for the LUIS endpoint.|
+ |-EndpointLocation|"eastus"|Region of the endpoint|
+ |-EndpointStatus|Enabled|Enable endpoint when it is created|
+
+ The successful response looks like:
+
+ ```console
+ Id : /subscriptions/<azure-subscription-id>/resourceGroups/luis-traffic-manager/providers/Microsoft.Network/trafficManagerProfiles/luis-profile-eastus
+ Name : luis-profile-eastus
+ ResourceGroupName : luis-traffic-manager
+ RelativeDnsName : luis-dns-eastus
+ Ttl : 30
+ ProfileStatus : Enabled
+ TrafficRoutingMethod : Performance
+ MonitorProtocol : HTTPS
+ MonitorPort : 443
+ MonitorPath : /luis/v2.0/apps/<luis-app-id>?subscription-key=f0517d185bcf467cba5147d6260bb868&q=traffic-manager-east
+ MonitorIntervalInSeconds : 30
+ MonitorTimeoutInSeconds : 10
+ MonitorToleratedNumberOfFailures : 3
+ Endpoints : {luis-east-endpoint}
+ ```
+
+3. Set East US endpoint with **[Set-AzTrafficManagerProfile](/powershell/module/az.trafficmanager/set-aztrafficmanagerprofile)** cmdlet
+
+ ```powerShell
+ Set-AzTrafficManagerProfile -TrafficManagerProfile $eastprofile
+ ```
+
+ A successful response will be the same response as step 2.
+
+### Create the West US Traffic Manager profile with PowerShell
+To create the West US Traffic Manager profile, follow the same steps: create profile, add endpoint, and set endpoint.
+
+1. Create profile with **[New-AzTrafficManagerProfile](/powershell/module/az.TrafficManager/New-azTrafficManagerProfile)** cmdlet
+
+    Use the following cmdlet to create the profile. Make sure to change the `appIdLuis` and `subscriptionKeyLuis`. The subscriptionKey is for the West US LUIS key. If the path is not correct, including the LUIS app ID and endpoint key, the Traffic Manager polling status is `degraded` because Traffic Manager can't successfully request the LUIS endpoint. Make sure the value of `q` is `traffic-manager-west` so you can see this value in the LUIS endpoint logs.
+
+ ```powerShell
+ $westprofile = New-AzTrafficManagerProfile -Name luis-profile-westus -ResourceGroupName luis-traffic-manager -TrafficRoutingMethod Performance -RelativeDnsName luis-dns-westus -Ttl 30 -MonitorProtocol HTTPS -MonitorPort 443 -MonitorPath "/luis/v2.0/apps/<appIdLuis>?subscription-key=<subscriptionKeyLuis>&q=traffic-manager-west"
+ ```
+
+ This table explains each variable in the cmdlet:
+
+ |Configuration parameter|Variable name or Value|Purpose|
+ |--|--|--|
+ |-Name|luis-profile-westus|Traffic Manager name in Azure portal|
+ |-ResourceGroupName|luis-traffic-manager|Created in previous section|
+ |-TrafficRoutingMethod|Performance|For more information, see [Traffic Manager routing methods][routing-methods]. If using performance, the URL request to the Traffic Manager must come from the region of the user. If going through a chatbot or other application, it is the chatbot's responsibility to mimic the region in the call to the Traffic Manager. |
+ |-RelativeDnsName|luis-dns-westus|This is the subdomain for the service: luis-dns-westus.trafficmanager.net|
+ |-Ttl|30|Polling interval, 30 seconds|
+ |-MonitorProtocol<BR>-MonitorPort|HTTPS<br>443|Port and protocol for LUIS is HTTPS/443|
+    |-MonitorPath|`/luis/v2.0/apps/<appIdLuis>?subscription-key=<subscriptionKeyLuis>&q=traffic-manager-west`|Replace `<appIdLuis>` and `<subscriptionKeyLuis>` with your own values. Remember that this endpoint key is different from the east endpoint key.|
+
+ A successful request has no response.
+
+2. Add West US endpoint with **[Add-AzTrafficManagerEndpointConfig](/powershell/module/az.TrafficManager/Add-azTrafficManagerEndpointConfig)** cmdlet
+
+ ```powerShell
+ Add-AzTrafficManagerEndpointConfig -EndpointName luis-west-endpoint -TrafficManagerProfile $westprofile -Type ExternalEndpoints -Target westus.api.cognitive.microsoft.com -EndpointLocation "westus" -EndpointStatus Enabled
+ ```
+
+ This table explains each variable in the cmdlet:
+
+ |Configuration parameter|Variable name or Value|Purpose|
+ |--|--|--|
+ |-EndpointName|luis-west-endpoint|Endpoint name displayed under the profile|
+ |-TrafficManagerProfile|$westprofile|Use profile object created in Step 1|
+ |-Type|ExternalEndpoints|For more information, see [Traffic Manager endpoint][traffic-manager-endpoints] |
+ |-Target|westus.api.cognitive.microsoft.com|This is the domain for the LUIS endpoint.|
+ |-EndpointLocation|"westus"|Region of the endpoint|
+ |-EndpointStatus|Enabled|Enable endpoint when it is created|
+
+ The successful response looks like:
+
+ ```console
+ Id : /subscriptions/<azure-subscription-id>/resourceGroups/luis-traffic-manager/providers/Microsoft.Network/trafficManagerProfiles/luis-profile-westus
+ Name : luis-profile-westus
+ ResourceGroupName : luis-traffic-manager
+ RelativeDnsName : luis-dns-westus
+ Ttl : 30
+ ProfileStatus : Enabled
+ TrafficRoutingMethod : Performance
+ MonitorProtocol : HTTPS
+ MonitorPort : 443
+ MonitorPath : /luis/v2.0/apps/c3fc5d1e-5187-40cc-af0f-fbde328aa16b?subscription-key=e3605f07e3cc4bedb7e02698a54c19cc&q=traffic-manager-west
+ MonitorIntervalInSeconds : 30
+ MonitorTimeoutInSeconds : 10
+ MonitorToleratedNumberOfFailures : 3
+ Endpoints : {luis-west-endpoint}
+ ```
+
+3. Set West US endpoint with **[Set-AzTrafficManagerProfile](/powershell/module/az.TrafficManager/Set-azTrafficManagerProfile)** cmdlet
+
+ ```powerShell
+ Set-AzTrafficManagerProfile -TrafficManagerProfile $westprofile
+ ```
+
+ A successful response is the same response as step 2.
+
+### Create parent Traffic Manager profile
+Create the parent Traffic Manager profile and link two child Traffic Manager profiles to the parent.
+
+1. Create parent profile with **[New-AzTrafficManagerProfile](/powershell/module/az.TrafficManager/New-azTrafficManagerProfile)** cmdlet
+
+ ```powerShell
+ $parentprofile = New-AzTrafficManagerProfile -Name luis-profile-parent -ResourceGroupName luis-traffic-manager -TrafficRoutingMethod Performance -RelativeDnsName luis-dns-parent -Ttl 30 -MonitorProtocol HTTPS -MonitorPort 443 -MonitorPath "/"
+ ```
+
+ This table explains each variable in the cmdlet:
+
+ |Configuration parameter|Variable name or Value|Purpose|
+ |--|--|--|
+ |-Name|luis-profile-parent|Traffic Manager name in Azure portal|
+ |-ResourceGroupName|luis-traffic-manager|Created in previous section|
+ |-TrafficRoutingMethod|Performance|For more information, see [Traffic Manager routing methods][routing-methods]. If using performance, the URL request to the Traffic Manager must come from the region of the user. If going through a chatbot or other application, it is the chatbot's responsibility to mimic the region in the call to the Traffic Manager. |
+ |-RelativeDnsName|luis-dns-parent|This is the subdomain for the service: luis-dns-parent.trafficmanager.net|
+ |-Ttl|30|Polling interval, 30 seconds|
+ |-MonitorProtocol<BR>-MonitorPort|HTTPS<br>443|Port and protocol for LUIS is HTTPS/443|
+ |-MonitorPath|`/`|This path doesn't matter because the child endpoint paths are used instead.|
+
+ A successful request has no response.
+
+2. Add East US child profile to parent with **[Add-AzTrafficManagerEndpointConfig](/powershell/module/az.TrafficManager/Add-azTrafficManagerEndpointConfig)** and **NestedEndpoints** type
+
+ ```powerShell
+ Add-AzTrafficManagerEndpointConfig -EndpointName child-endpoint-useast -TrafficManagerProfile $parentprofile -Type NestedEndpoints -TargetResourceId $eastprofile.Id -EndpointStatus Enabled -EndpointLocation "eastus" -MinChildEndpoints 1
+ ```
+
+ This table explains each variable in the cmdlet:
+
+ |Configuration parameter|Variable name or Value|Purpose|
+ |--|--|--|
+ |-EndpointName|child-endpoint-useast|East profile|
+ |-TrafficManagerProfile|$parentprofile|Profile to assign this endpoint to|
+ |-Type|NestedEndpoints|For more information, see [Add-AzTrafficManagerEndpointConfig](/powershell/module/az.trafficmanager/Add-azTrafficManagerEndpointConfig). |
+ |-TargetResourceId|$eastprofile.Id|ID of the child profile|
+ |-EndpointStatus|Enabled|Endpoint status after adding to parent|
+ |-EndpointLocation|"eastus"|[Azure region name](https://azure.microsoft.com/global-infrastructure/regions/) of resource|
+    |-MinChildEndpoints|1|Minimum number of child endpoints|
+
+    The successful response looks like the following and includes the new `child-endpoint-useast` endpoint:
+
+ ```console
+ Id : /subscriptions/<azure-subscription-id>/resourceGroups/luis-traffic-manager/providers/Microsoft.Network/trafficManagerProfiles/luis-profile-parent
+ Name : luis-profile-parent
+ ResourceGroupName : luis-traffic-manager
+ RelativeDnsName : luis-dns-parent
+ Ttl : 30
+ ProfileStatus : Enabled
+ TrafficRoutingMethod : Performance
+ MonitorProtocol : HTTPS
+ MonitorPort : 443
+ MonitorPath : /
+ MonitorIntervalInSeconds : 30
+ MonitorTimeoutInSeconds : 10
+ MonitorToleratedNumberOfFailures : 3
+ Endpoints : {child-endpoint-useast}
+ ```
+
+3. Add West US child profile to parent with **[Add-AzTrafficManagerEndpointConfig](/powershell/module/az.TrafficManager/Add-azTrafficManagerEndpointConfig)** cmdlet and **NestedEndpoints** type
+
+ ```powerShell
+ Add-AzTrafficManagerEndpointConfig -EndpointName child-endpoint-uswest -TrafficManagerProfile $parentprofile -Type NestedEndpoints -TargetResourceId $westprofile.Id -EndpointStatus Enabled -EndpointLocation "westus" -MinChildEndpoints 1
+ ```
+
+ This table explains each variable in the cmdlet:
+
+ |Configuration parameter|Variable name or Value|Purpose|
+ |--|--|--|
+ |-EndpointName|child-endpoint-uswest|West profile|
+ |-TrafficManagerProfile|$parentprofile|Profile to assign this endpoint to|
+ |-Type|NestedEndpoints|For more information, see [Add-AzTrafficManagerEndpointConfig](/powershell/module/az.trafficmanager/Add-azTrafficManagerEndpointConfig). |
+ |-TargetResourceId|$westprofile.Id|ID of the child profile|
+ |-EndpointStatus|Enabled|Endpoint status after adding to parent|
+ |-EndpointLocation|"westus"|[Azure region name](https://azure.microsoft.com/global-infrastructure/regions/) of resource|
+    |-MinChildEndpoints|1|Minimum number of child endpoints|
+
+    The successful response looks like the following and includes both the previous `child-endpoint-useast` endpoint and the new `child-endpoint-uswest` endpoint:
+
+ ```console
+ Id : /subscriptions/<azure-subscription-id>/resourceGroups/luis-traffic-manager/providers/Microsoft.Network/trafficManagerProfiles/luis-profile-parent
+ Name : luis-profile-parent
+ ResourceGroupName : luis-traffic-manager
+ RelativeDnsName : luis-dns-parent
+ Ttl : 30
+ ProfileStatus : Enabled
+ TrafficRoutingMethod : Performance
+ MonitorProtocol : HTTPS
+ MonitorPort : 443
+ MonitorPath : /
+ MonitorIntervalInSeconds : 30
+ MonitorTimeoutInSeconds : 10
+ MonitorToleratedNumberOfFailures : 3
+ Endpoints : {child-endpoint-useast, child-endpoint-uswest}
+ ```
+
+4. Set endpoints with **[Set-AzTrafficManagerProfile](/powershell/module/az.TrafficManager/Set-azTrafficManagerProfile)** cmdlet
+
+ ```powerShell
+ Set-AzTrafficManagerProfile -TrafficManagerProfile $parentprofile
+ ```
+
+ A successful response is the same response as step 3.
+
+### PowerShell variables
+In the previous sections, three PowerShell variables were created: `$eastprofile`, `$westprofile`, `$parentprofile`. These variables are used toward the end of the Traffic Manager configuration. If you chose not to create the variables, or forgot to, or your PowerShell window times out, you can use the PowerShell cmdlet, **[Get-AzTrafficManagerProfile](/powershell/module/az.TrafficManager/Get-azTrafficManagerProfile)**, to get the profile again and assign it to a variable.
+
+Replace the items in angle brackets, `<>`, with the correct values for each of the three profiles you need.
+
+```powerShell
+$<variable-name> = Get-AzTrafficManagerProfile -Name <profile-name> -ResourceGroupName luis-traffic-manager
+```
+
+## Verify Traffic Manager works
+To verify that the Traffic Manager profiles work, the profiles need to have a status of `Online`. This status is based on polling the path of each endpoint.
+
+### View new profiles in the Azure portal
+You can verify that all three profiles are created by looking at the resources in the `luis-traffic-manager` resource group.
+
+![Screenshot of Azure resource group luis-traffic-manager](./media/traffic-manager/traffic-manager-profiles.png)
+
+### Verify the profile status is Online
+Traffic Manager polls the path of each endpoint to make sure it is online. If it is, the status of the child profile is `Online`. This status is displayed on the **Overview** page of each profile.
+
+![Screenshot of Azure Traffic Manager profile Overview showing Monitor Status of Online](./media/traffic-manager/profile-status-online.png)
+
+### Validate Traffic Manager polling works
+Another way to validate that Traffic Manager polling works is with the LUIS endpoint logs. On the [LUIS][LUIS] website apps list page, export the endpoint log for the application. Because Traffic Manager polls the two endpoints often, there are entries in the logs even if the endpoints have only been online a few minutes. Remember to look for entries where the query begins with `traffic-manager-`.
+
+```console
+traffic-manager-west 6/7/2018 19:19 {"query":"traffic-manager-west","intents":[{"intent":"None","score":0.944767}],"entities":[]}
+traffic-manager-east 6/7/2018 19:20 {"query":"traffic-manager-east","intents":[{"intent":"None","score":0.944767}],"entities":[]}
+```
+
+### Validate DNS response from Traffic Manager works
+To validate that the DNS response returns a LUIS endpoint, request the Traffic Manager parent profile DNS using a DNS client library. The DNS name for the parent profile is `luis-dns-parent.trafficmanager.net`.
+
+The following Node.js code makes a request for the parent profile and returns a LUIS endpoint:
+
+```javascript
+const dns = require('dns');
+
+dns.resolveAny('luis-dns-parent.trafficmanager.net', (err, ret) => {
+ console.log('ret', ret);
+});
+```
+
+The successful response with the LUIS endpoint is:
+
+```json
+[
+ {
+ value: 'westus.api.cognitive.microsoft.com',
+ type: 'CNAME'
+ }
+]
+```
+
+## Use the Traffic Manager parent profile
+To manage traffic across endpoints, you need to insert a call to the Traffic Manager DNS to find the LUIS endpoint. This call is made for every LUIS endpoint request, and it needs to simulate the geographic location of the user of the LUIS client application. Add the DNS resolution code between your LUIS client application and the request to LUIS for the endpoint prediction, as sketched below.
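+
+The following Node.js sketch shows one way to wire this together. It reuses the `luis-dns-parent` profile from the earlier steps; the app ID, endpoint key, and V3 prediction URL format are placeholders for illustration and aren't part of the Traffic Manager configuration above.
+
+```javascript
+const dns = require('dns').promises;
+
+// Hypothetical placeholders - replace with your own values.
+const appId = '<your-luis-app-id>';
+const endpointKey = '<your-luis-endpoint-key>';
+const utterance = 'turn on the lights';
+
+async function getLuisHost() {
+    // Resolve the Traffic Manager parent profile to the closest LUIS regional host.
+    const records = await dns.resolveAny('luis-dns-parent.trafficmanager.net');
+    const cname = records.find(record => record.type === 'CNAME');
+    return cname ? cname.value : null;
+}
+
+async function buildPredictionUrl() {
+    const host = await getLuisHost();
+    // Build the prediction request against the resolved regional host (V3 URL shown only as an example).
+    return `https://${host}/luis/prediction/v3.0/apps/${appId}/slots/production/predict` +
+        `?subscription-key=${endpointKey}&query=${encodeURIComponent(utterance)}`;
+}
+
+buildPredictionUrl().then(url => console.log('Requesting:', url)).catch(console.error);
+```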
+
+## Resolving a degraded state
+
+Enable [diagnostic logs](../../traffic-manager/traffic-manager-diagnostic-logs.md) for Traffic Manager to see why endpoint status is degraded.
+
+## Clean up
+Remove the two LUIS endpoint keys, the three Traffic Manager profiles, and the resource group that contained these five resources. You can do this from the Azure portal: delete the five resources from the resources list, and then delete the resource group.
+
+## Next steps
+
+Review [middleware](/azure/bot-service/bot-builder-create-middleware?tabs=csaddmiddleware%252ccsetagoverwrite%252ccsmiddlewareshortcircuit%252ccsfallback%252ccsactivityhandler) options in BotFramework v4 to understand how this traffic management code can be added to a BotFramework bot.
+
+[traffic-manager-marketing]: https://azure.microsoft.com/services/traffic-manager/
+[traffic-manager-docs]: ../../traffic-manager/index.yml
+[LUIS]: ./luis-reference-regions.md#luis-website
+[azure-portal]: https://portal.azure.com/
+[azure-storage]: https://azure.microsoft.com/services/storage/
+[routing-methods]: ../../traffic-manager/traffic-manager-routing-methods.md
+[traffic-manager-endpoints]: ../../traffic-manager/traffic-manager-endpoint-types.md
ai-services Luis Tutorial Bing Spellcheck https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/LUIS/luis-tutorial-bing-spellcheck.md
+
+ Title: Correct misspelled words - LUIS
+
+description: Correct misspelled words in utterances by adding Bing Spell Check API V7 to LUIS endpoint queries.
++++++++ Last updated : 01/05/2022++
+# Correct misspelled words with Bing Resource
+++
+The V3 prediction API now supports the [Bing Spellcheck API](/bing/search-apis/bing-spell-check/overview). Add spell checking to your application by including the key to your Bing search resource in the header of your requests. You can use an existing Bing resource if you already own one, or [create a new one](https://portal.azure.com/#create/Microsoft.BingSearch) to use this feature.
+
+Prediction output example for a misspelled query:
+
+```json
+{
+ "query": "bouk me a fliht to kayro",
+ "prediction": {
+ "alteredQuery": "book me a flight to cairo",
+ "topIntent": "book a flight",
+ "intents": {
+ "book a flight": {
+ "score": 0.9480589
+ },
+ "None": {
+ "score": 0.0332136229
+ }
+ },
+ "entities": {}
+ }
+}
+```
+
+Corrections to spelling are made before the LUIS user utterance prediction. You can see any changes to the original utterance, including spelling, in the response.
+
+## Create Bing Search Resource
+
+To create a Bing Search resource in the Azure portal, follow these instructions:
+
+1. Log in to the [Azure portal](https://portal.azure.com).
+
+2. Select **Create a resource** in the top left corner.
+
+3. In the search box, enter `Bing Search V7` and select the service.
+
+4. An information panel appears to the right, containing details such as the Legal Notice. Select **Create** to begin the subscription creation process.
+
+> [!div class="mx-imgBorder"]
+> ![Bing Spell Check API V7 resource](./media/luis-tutorial-bing-spellcheck/bing-search-resource-portal.png)
+
+5. In the next panel, enter your service settings. Wait for the service creation process to finish.
+
+6. After the resource is created, go to the **Keys and Endpoint** blade on the left.
+
+7. Copy one of the keys to be added to the header of your prediction request. You will only need one of the two keys.
+
+<!--
+## Using the key in LUIS test panel
+There are two places in LUIS to use the key. The first is in the [test panel](how-to/train-test.md#view-bing-spell-check-corrections-in-test-panel). The key isn't saved into LUIS but instead is a session variable. You need to set the key every time you want the test panel to apply the Bing Spell Check API v7 service to the utterance. See [instructions](how-to/train-test.md#view-bing-spell-check-corrections-in-test-panel) in the test panel for setting the key.
+-->
++
+## Adding the key to the endpoint URL
+For each query you want to apply spelling correction to, the endpoint query needs the Bing Spellcheck resource key passed in a request header. You may have a chatbot that calls LUIS, or you may call the LUIS endpoint API directly. Regardless of how the endpoint is called, every call must include the required information in the request header for spelling corrections to work properly. Set the **mkt-bing-spell-check-key** header to the key value, as shown in the sketch after the following table.
+
+|Header Key|Header Value|
+|--|--|
+|`mkt-bing-spell-check-key`|Keys found in **Keys and Endpoint** blade of your resource|
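+
+As an example, the following Node.js sketch passes the header on a V3 prediction request. The endpoint, app ID, and key values are hypothetical placeholders; only the header name comes from the table above, and your query parameters may differ.
+
+```javascript
+// Hedged sketch - replace the placeholder values with your own resource details.
+const predictionEndpoint = 'https://<your-prediction-resource>.cognitiveservices.azure.com';
+const appId = '<your-luis-app-id>';
+const predictionKey = '<your-luis-prediction-key>';
+const bingSpellCheckKey = '<your-bing-search-key>';
+const query = 'bouk me a fliht to kayro';
+
+const url = `${predictionEndpoint}/luis/prediction/v3.0/apps/${appId}/slots/production/predict` +
+    `?query=${encodeURIComponent(query)}`;
+
+// Requires Node.js 18+ for the global fetch API.
+fetch(url, {
+    headers: {
+        'Ocp-Apim-Subscription-Key': predictionKey,
+        // Header name taken from the table above.
+        'mkt-bing-spell-check-key': bingSpellCheckKey
+    }
+})
+    .then(response => response.json())
+    .then(result => console.log(JSON.stringify(result, null, 2)))
+    .catch(console.error);
+```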
+
+## Send misspelled utterance to LUIS
+1. Add a misspelled utterance in the prediction query you will be sending such as "How far is the mountainn?". In English, `mountain`, with one `n`, is the correct spelling.
+
+2. LUIS responds with a JSON result for `How far is the mountain?`. If Bing Spell Check API v7 detects a misspelling, the `query` field in the LUIS app's JSON response contains the original query, and the `alteredQuery` field contains the corrected query sent to LUIS.
+
+```json
+{
+ "query": "How far is the mountainn?",
+ "alteredQuery": "How far is the mountain?",
+ "topScoringIntent": {
+ "intent": "Concierge",
+ "score": 0.183866
+ },
+ "entities": []
+}
+```
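+
+In client code, you can prefer the corrected text whenever `alteredQuery` is present. A minimal sketch:
+
+```javascript
+// response is the parsed JSON object returned by the LUIS endpoint, as shown above.
+function getEffectiveQuery(response) {
+    // Prefer the spell-corrected utterance when Bing Spell Check altered it.
+    return response.alteredQuery || response.query;
+}
+```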
+
+## Ignore spelling mistakes
+
+If you don't want to use the Bing Search API v7 service, you need to teach LUIS both the correct and incorrect spellings.
+
+Two solutions are:
+
+* Label example utterances that have all the different spellings so that LUIS can learn proper spelling as well as typos. This option requires more labeling effort than using a spell checker.
+* Create a phrase list with all variations of the word. With this solution, you do not need to label the word variations in the example utterances.
+
+## Next steps
+[Learn more about example utterances](./how-to/entities.md)
ai-services Luis Tutorial Node Import Utterances Csv https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/LUIS/luis-tutorial-node-import-utterances-csv.md
+
+ Title: Import utterances using Node.js - LUIS
+
+description: Learn how to build a LUIS app programmatically from preexisting data in CSV format using the LUIS Authoring API.
++++
+ms.devlang: javascript
++++ Last updated : 05/17/2021++
+# Build a LUIS app programmatically using Node.js
+++
+LUIS provides a programmatic API that does everything that the [LUIS](luis-reference-regions.md) website does. This can save time when you have pre-existing data and it would be faster to create a LUIS app programmatically than by entering information by hand.
++
+## Prerequisites
+
+* Sign in to the [LUIS](luis-reference-regions.md) website and find your [authoring key](luis-how-to-azure-subscription.md) in Account Settings. You use this key to call the Authoring APIs.
+* If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/cognitive-services/) before you begin.
+* This article starts with a CSV for a hypothetical company's log files of user requests. Download it [here](https://github.com/Azure-Samples/cognitive-services-language-understanding/blob/master/examples/build-app-programmatically-csv/IoT.csv).
+* Install the latest Node.js with NPM. Download it from [here](https://nodejs.org/en/download/).
+* **[Recommended]** Visual Studio Code for IntelliSense and debugging. Download it from [here](https://code.visualstudio.com/) for free.
+
+All of the code in this article is available on the [Azure-Samples Language Understanding GitHub repository](https://github.com/Azure-Samples/cognitive-services-language-understanding/tree/master/examples/build-app-programmatically-csv).
+
+## Map preexisting data to intents and entities
+Even if you have a system that wasn't created with LUIS in mind, if it contains textual data that maps to different things users want to do, you might be able to come up with a mapping from the existing categories of user input to intents in LUIS. If you can identify important words or phrases in what the users said, these words might map to entities.
+
+Open the [`IoT.csv`](https://github.com/Azure-Samples/cognitive-services-language-understanding/blob/master/examples/build-app-programmatically-csv/IoT.csv) file. It contains a log of user queries to a hypothetical home automation service, including how they were categorized, what the user said, and some columns with useful information pulled out of them.
+
+![CSV file of pre-existing data](./media/luis-tutorial-node-import-utterances-csv/csv.png)
+
+You see that the **RequestType** column could be intents, and the **Request** column shows an example utterance. The other fields could be entities if they occur in the utterance. Because there are intents, entities, and example utterances, you have the requirements for a simple, sample app.
+
+## Steps to generate a LUIS app from non-LUIS data
+To generate a new LUIS app from the CSV file:
+
+* Parse the data from the CSV file:
+ * Convert to a format that you can upload to LUIS using the Authoring API.
+ * From the parsed data, gather information about intents and entities.
+* Make authoring API calls to:
+ * Create the app.
+ * Add intents and entities that were gathered from the parsed data.
+ * Once you have created the LUIS app, you can add the example utterances from the parsed data.
+
+You can see this program flow in the last part of the `index.js` file. Copy or [download](https://github.com/Azure-Samples/cognitive-services-language-understanding/blob/master/examples/build-app-programmatically-csv/index.js) this code and save it in `index.js`.
++
+ [!code-javascript[Node.js code for calling the steps to build a LUIS app](~/samples-luis/examples/build-app-programmatically-csv/index.js)]
++
+## Parse the CSV
+
+The column entries that contain the utterances in the CSV have to be parsed into a JSON format that LUIS can understand. This JSON format must contain an `intentName` field that identifies the intent of the utterance. It must also contain an `entityLabels` field, which can be empty if there are no entities in the utterance.
+
+For example, the entry for "Turn on the lights" maps to this JSON:
+
+```json
+ {
+ "text": "Turn on the lights",
+ "intentName": "TurnOn",
+ "entityLabels": [
+ {
+ "entityName": "Operation",
+ "startCharIndex": 5,
+ "endCharIndex": 6
+ },
+ {
+ "entityName": "Device",
+ "startCharIndex": 12,
+ "endCharIndex": 17
+ }
+ ]
+ }
+```
+
+In this example, the `text` comes from the user request under the **Request** column heading in the CSV file, the `intentName` comes from the **RequestType** column, and the `entityName` comes from the other columns with key information. For example, if there's an entry for **Operation** or **Device**, and that string also occurs in the actual request, then it can be labeled as an entity. The following code demonstrates this parsing process. You can copy or [download](https://github.com/Azure-Samples/cognitive-services-language-understanding/blob/master/examples/build-app-programmatically-csv/_parse.js) it and save it to `_parse.js`.
+
+ [!code-javascript[Node.js code for parsing a CSV file to extract intents, entities, and labeled utterances](~/samples-luis/examples/build-app-programmatically-csv/_parse.js)]
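+
+To see just the labeling step in isolation, here is a minimal sketch (not the sample's actual `_parse.js` code) that builds one labeled utterance from hypothetical column values:
+
+```javascript
+// Hypothetical values taken from one CSV row.
+const row = {
+    RequestType: 'TurnOn',
+    Request: 'Turn on the lights',
+    Operation: 'on',
+    Device: 'lights'
+};
+
+// Build the JSON shape shown earlier: intent name plus character-indexed entity labels.
+function toLabeledUtterance(row) {
+    const entityLabels = [];
+    for (const entityName of ['Operation', 'Device']) {
+        const value = row[entityName];
+        const startCharIndex = value ? row.Request.indexOf(value) : -1;
+        if (startCharIndex >= 0) {
+            entityLabels.push({
+                entityName,
+                startCharIndex,
+                endCharIndex: startCharIndex + value.length - 1
+            });
+        }
+    }
+    return { text: row.Request, intentName: row.RequestType, entityLabels };
+}
+
+console.log(JSON.stringify(toLabeledUtterance(row), null, 2));
+```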
+++
+## Create the LUIS app
+Once the data has been parsed into JSON, add it to a LUIS app. The following code creates the LUIS app. Copy or [download](https://github.com/Azure-Samples/cognitive-services-language-understanding/blob/master/examples/build-app-programmatically-csv/_create.js) it, and save it into `_create.js`.
+
+ [!code-javascript[Node.js code for creating a LUIS app](~/samples-luis/examples/build-app-programmatically-csv/_create.js)]
++
+## Add intents
+Once you have an app, you need to add intents to it. The following code adds the intents to the LUIS app. Copy or [download](https://github.com/Azure-Samples/cognitive-services-language-understanding/blob/master/examples/build-app-programmatically-csv/_intents.js) it, and save it into `_intents.js`.
+
+ [!code-javascript[Node.js code for creating a series of intents](~/samples-luis/examples/build-app-programmatically-csv/_intents.js)]
++
+## Add entities
+The following code adds the entities to the LUIS app. Copy or [download](https://github.com/Azure-Samples/cognitive-services-language-understanding/blob/master/examples/build-app-programmatically-csv/_entities.js) it, and save it into `_entities.js`.
+
+ [!code-javascript[Node.js code for creating entities](~/samples-luis/examples/build-app-programmatically-csv/_entities.js)]
+++
+## Add utterances
+Once the entities and intents have been defined in the LUIS app, you can add the utterances. The following code uses the [Utterances_AddBatch](https://westus.dev.cognitive.microsoft.com/docs/services/5890b47c39e2bb17b84a55ff/operations/5890b47c39e2bb052c5b9c09) API, which allows you to add up to 100 utterances at a time. Copy or [download](https://github.com/Azure-Samples/cognitive-services-language-understanding/blob/master/examples/build-app-programmatically-csv/_upload.js) it, and save it into `_upload.js`.
+
+ [!code-javascript[Node.js code for adding utterances](~/samples-luis/examples/build-app-programmatically-csv/_upload.js)]
++
+## Run the code
++
+### Install Node.js dependencies
+Install the Node.js dependencies from NPM in the terminal/command line.
+
+```console
+> npm install
+```
+
+### Change Configuration Settings
+To use this application, you need to change the values in the index.js file to your own authoring key, and provide the name you want the app to have. You can also set the app's culture or change the version number.
+
+Open the index.js file, and change these values at the top of the file.
++
+```javascript
+// Change these values
+const LUIS_programmaticKey = "YOUR_AUTHORING_KEY";
+const LUIS_appName = "Sample App";
+const LUIS_appCulture = "en-us";
+const LUIS_versionId = "0.1";
+```
+
+### Run the script
+Run the script from a terminal/command line with Node.js.
+
+```console
+> node index.js
+```
+
+or
+
+```console
+> npm start
+```
+
+### Application progress
+While the application is running, the command line shows progress. The command line output includes the format of the responses from LUIS.
+
+```console
+> node index.js
+intents: ["TurnOn","TurnOff","Dim","Other"]
+entities: ["Operation","Device","Room"]
+parse done
+JSON file should contain utterances. Next step is to create an app with the intents and entities it found.
+Called createApp, created app with ID 314b306c-0033-4e09-92ab-94fe5ed158a2
+Called addIntents for intent named TurnOn.
+Called addIntents for intent named TurnOff.
+Called addIntents for intent named Dim.
+Called addIntents for intent named Other.
+Results of all calls to addIntent = [{"response":"e7eaf224-8c61-44ed-a6b0-2ab4dc56f1d0"},{"response":"a8a17efd-f01c-488d-ad44-a31a818cf7d7"},{"response":"bc7c32fc-14a0-4b72-bad4-d345d807f965"},{"response":"727a8d73-cd3b-4096-bc8d-d7cfba12eb44"}]
+called addEntity for entity named Operation.
+called addEntity for entity named Device.
+called addEntity for entity named Room.
+Results of all calls to addEntity= [{"response":"6a7e914f-911d-4c6c-a5bc-377afdce4390"},{"response":"56c35237-593d-47f6-9d01-2912fa488760"},{"response":"f1dd440c-2ce3-4a20-a817-a57273f169f3"}]
+retrying add examples...
+
+Results of add utterances = [{"response":[{"value":{"UtteranceText":"turn on the lights","ExampleId":-67649},"hasError":false},{"value":{"UtteranceText":"turn the heat on","ExampleId":-69067},"hasError":false},{"value":{"UtteranceText":"switch on the kitchen fan","ExampleId":-3395901},"hasError":false},{"value":{"UtteranceText":"turn off bedroom lights","ExampleId":-85402},"hasError":false},{"value":{"UtteranceText":"turn off air conditioning","ExampleId":-8991572},"hasError":false},{"value":{"UtteranceText":"kill the lights","ExampleId":-70124},"hasError":false},{"value":{"UtteranceText":"dim the lights","ExampleId":-174358},"hasError":false},{"value":{"UtteranceText":"hi how are you","ExampleId":-143722},"hasError":false},{"value":{"UtteranceText":"answer the phone","ExampleId":-69939},"hasError":false},{"value":{"UtteranceText":"are you there","ExampleId":-149588},"hasError":false},{"value":{"UtteranceText":"help","ExampleId":-81949},"hasError":false},{"value":{"UtteranceText":"testing the circuit","ExampleId":-11548708},"hasError":false}]}]
+upload done
+```
++++
+## Open the LUIS app
+Once the script completes, you can sign in to [LUIS](luis-reference-regions.md) and see the LUIS app you created under **My Apps**. You should be able to see the utterances you added under the **TurnOn**, **TurnOff**, and **None** intents.
+
+![TurnOn intent](./media/luis-tutorial-node-import-utterances-csv/imported-utterances-661.png)
++
+## Next steps
+
+[Test and train your app in LUIS website](how-to/train-test.md)
+
+## Additional resources
+
+This sample application uses the following LUIS APIs:
+- [create app](https://westus.dev.cognitive.microsoft.com/docs/services/5890b47c39e2bb17b84a55ff/operations/5890b47c39e2bb052c5b9c36)
+- [add intents](https://westus.dev.cognitive.microsoft.com/docs/services/5890b47c39e2bb17b84a55ff/operations/5890b47c39e2bb052c5b9c0c)
+- [add entities](https://westus.dev.cognitive.microsoft.com/docs/services/5890b47c39e2bb17b84a55ff/operations/5890b47c39e2bb052c5b9c0e)
+- [add utterances](https://westus.dev.cognitive.microsoft.com/docs/services/5890b47c39e2bb17b84a55ff/operations/5890b47c39e2bb052c5b9c09)
ai-services Luis User Privacy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/LUIS/luis-user-privacy.md
+
+ Title: Export & delete data - LUIS
+
+description: You have full control over viewing, exporting, and deleting your data. Delete customer data to ensure privacy and compliance.
++++++++ Last updated : 04/08/2020++
+# Export and delete your customer data in Language Understanding (LUIS) in Azure AI services
+++
+Delete customer data to ensure privacy and compliance.
+
+## Summary of customer data request features
+Language Understanding Intelligent Service (LUIS) preserves customer content to operate the service, but the LUIS user has full control over viewing, exporting, and deleting their data. This can be done through the LUIS web [portal](luis-reference-regions.md) or the [LUIS Authoring (also known as Programmatic) APIs](https://westus.dev.cognitive.microsoft.com/docs/services/5890b47c39e2bb17b84a55ff/operations/5890b47c39e2bb052c5b9c2f).
++
+Customer content is stored encrypted in Microsoft regional Azure storage and includes:
+
+- User account content collected at registration
+- Training data required to build the models
+- Logged user queries used by [active learning](how-to/improve-application.md) to help improve the model
+ - Users can turn off query logging by appending `&log=false` to the request, details [here](./faq.md#how-can-i-disable-the-logging-of-utterances)
+
+## Deleting customer data
+LUIS users have full control to delete any user content, either through the LUIS web portal or the LUIS Authoring (also known as Programmatic) APIs. The following table displays links assisting with both:
+
+| | **User Account** | **Application** | **Example Utterance(s)** | **End-user queries** |
+|--|--|--|--|--|
+| **Portal** | [Link](luis-concept-data-storage.md#delete-an-account) | [Link](how-to/sign-in.md) | [Link](luis-concept-data-storage.md#utterances-in-an-intent) | [Active learning utterances](how-to/improve-application.md)<br>[Logged Utterances](luis-concept-data-storage.md#disable-logging-utterances) |
+| **APIs** | [Link](https://westus.dev.cognitive.microsoft.com/docs/services/5890b47c39e2bb17b84a55ff/operations/5890b47c39e2bb052c5b9c4c) | [Link](https://westus.dev.cognitive.microsoft.com/docs/services/5890b47c39e2bb17b84a55ff/operations/5890b47c39e2bb052c5b9c39) | [Link](https://westus.dev.cognitive.microsoft.com/docs/services/5890b47c39e2bb17b84a55ff/operations/5890b47c39e2bb052c5b9c0b) | [Link](https://westus.dev.cognitive.microsoft.com/docs/services/5890b47c39e2bb17b84a55ff/operations/58b6f32139e2bb139ce823c9) |
++
+## Exporting customer data
+LUIS users have full control to view the data on the portal; however, it must be exported through the LUIS Authoring (also known as Programmatic) APIs. The following table displays links assisting with data exports via those APIs:
+
+| | **User Account** | **Application** | **Utterance(s)** | **End-user queries** |
+|--|--|--|--|--|
+| **APIs** | [Link](https://westus.dev.cognitive.microsoft.com/docs/services/5890b47c39e2bb17b84a55ff/operations/5890b47c39e2bb052c5b9c48) | [Link](https://westus.dev.cognitive.microsoft.com/docs/services/5890b47c39e2bb17b84a55ff/operations/5890b47c39e2bb052c5b9c40) | [Link](https://westus.dev.cognitive.microsoft.com/docs/services/5890b47c39e2bb17b84a55ff/operations/5890b47c39e2bb052c5b9c0a) | [Link](https://westus.dev.cognitive.microsoft.com/docs/services/5890b47c39e2bb17b84a55ff/operations/5890b47c39e2bb052c5b9c36) |
+
+## Location of active learning
+
+To enable [active learning](how-to/improve-application.md#log-user-queries-to-enable-active-learning), users' logged utterances, received at the published LUIS endpoints, are stored in the following Azure geographies:
+
+* [Europe](#europe)
+* [Australia](#australia)
+* [United States](#united-states)
+* [Switzerland North](#switzerland-north)
+
+With the exception of active learning data (detailed below), LUIS follows the [data storage practices for regional services](https://azuredatacentermap.azurewebsites.net/).
+++
+### Europe
+
+Europe Authoring (also known as Programmatic APIs) resources are hosted in Azure's Europe geography, and support deployment of endpoints to the following Azure geographies:
+
+* Europe
+* France
+* United Kingdom
+
+When deploying to these Azure geographies, the utterances received by the endpoint from end users of your app will be stored in Azure's Europe geography for active learning.
+
+### Australia
+
+Australia Authoring (also known as Programmatic APIs) resources are hosted in Azure's Australia geography, and support deployment of endpoints to the following Azure geographies:
+
+* Australia
+
+When deploying to these Azure geographies, the utterances received by the endpoint from end users of your app will be stored in Azure's Australia geography for active learning.
+
+### United States
+
+United States Authoring (also known as Programmatic APIs) resources are hosted in Azure's United States geography, and support deployment of endpoints to the following Azure geographies:
+
+* Azure geographies not supported by the Europe or Australia authoring regions
+
+When deploying to these Azure geographies, the utterances received by the endpoint from end users of your app will be stored in Azure's United States geography for active learning.
+
+### Switzerland North
+
+Switzerland North Authoring (also known as Programmatic APIs) resources are hosted in Azure's Switzerland geography, and support deployment of endpoints to the following Azure geographies:
+
+* Switzerland
+
+When deploying to these Azure geographies, the utterances received by the endpoint from end users of your app will be stored in Azure's Switzerland geography for active learning.
+
+## Disable active learning
+
+To disable active learning, see [Disable active learning](how-to/improve-application.md).
++
+## Next steps
+
+> [!div class="nextstepaction"]
+> [LUIS regions reference](./luis-reference-regions.md)
ai-services Migrate From Composite Entity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/LUIS/migrate-from-composite-entity.md
+
+ Title: Upgrade composite entity - LUIS
+description: Upgrade composite entity to machine-learning entity with upgrade process in the LUIS portal.
++++++ Last updated : 05/04/2020++
+# Upgrade composite entity to machine-learning entity
+++
+Upgrade composite entity to machine-learning entity to build an entity that receives more complete predictions with better decomposability for debugging the entity.
+
+## Current version model restrictions
+
+The upgrade process creates machine-learning entities, based on the existing composite entities found in your app, into a new version of your app. This includes composite entity children and roles. The process also switches the labels in example utterances to use the new entity.
+
+## Upgrade process
+
+The upgrade process:
+* Creates new machine-learning entity for each composite entity.
+* Child entities:
+ * If a child entity is used only in the composite entity, it's added only to the machine-learning entity.
+ * If a child entity is used in the composite entity _and_ as a separate entity (labeled in example utterances), it's added to the version as an entity and as a subentity of the new machine-learning entity.
+ * If the child entity uses a role, each role will be converted into a subentity of the same name.
+ * If the child entity is a non-machine-learning entity (regular expression, list entity, or prebuilt entity), a new subentity is created with the same name, and the new subentity has a feature using the non-machine-learning entity with the required feature added.
+* Names are retained but must be unique at same subentity/sibling level. Refer to [unique naming limits](./luis-limits.md#name-uniqueness).
+* Labels in example utterances are switched to new machine-learning entity with subentities.
+
+Use the following chart to understand how your model changes:
+
+|Old object|New object|Notes|
+|--|--|--|
+|Composite entity|machine-learning entity with structure|Both objects are parent objects.|
+|Composite's child entity is **simple entity**|subentity|Both objects are child objects.|
+|Composite's child entity is **Prebuilt entity** such as Number|subentity with name of Prebuilt entity such as Number, and subentity has _feature_ of Prebuilt Number entity with constraint option set to _true_.|subentity contains feature with constraint at subentity level.|
+|Composite's child entity is **Prebuilt entity** such as Number, and prebuilt entity has a **role**|subentity with name of role, and subentity has feature of Prebuilt Number entity with constraint option set to true.|subentity contains feature with constraint at subentity level.|
+|Role|subentity|The role name becomes the subentity name. The subentity is a direct descendant of the machine-learning entity.|
+
+## Begin upgrade process
+
+Before updating, make sure to:
+
+* Change versions if you are not on the correct version to upgrade
++
+1. Begin the upgrade process from the notification or you can wait until the next scheduled prompt.
+
+ > [!div class="mx-imgBorder"]
+ > ![Begin upgrade from notifications](./media/update-composite-entity/notification-begin-update.png)
+
+1. On the pop-up, select **Upgrade now**.
+
+1. Review the **What happens when you upgrade** information then select **Continue**.
+
+1. Select the composite entities from the list to upgrade then select **Continue**.
+
+1. You can move any untrained changes from the current version into the upgraded version by selecting the checkbox.
+
+1. Select **Continue** to begin the upgrade process.
+
+1. The progress bar indicates the status of the upgrade process.
+
+1. When the process is done, you are on a new trained version with the new machine-learning entities. Select **Try your new entities** to see the new entity.
+
+ If the upgrade or training failed for the new entity, a notification provides more information.
+
+1. On the Entity list page, the new entities are marked with **NEW** next to the type name.
+
+## Next steps
+
+* [Authors and collaborators](luis-how-to-collaborate.md)
ai-services Reference Entity List https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/LUIS/reference-entity-list.md
+
+ Title: List entity type - LUIS
+description: List entities represent a fixed, closed set of related words along with their synonyms. LUIS does not discover additional values for list entities. Use the Recommend feature to see suggestions for new words based on the current list.
++
+ms.
+++ Last updated : 01/05/2022+
+# List entity
+++
+List entities represent a fixed, closed set of related words along with their synonyms. LUIS does not discover additional values for list entities. Use the **Recommend** feature to see suggestions for new words based on the current list. If there is more than one list entity with the same value, each entity is returned in the endpoint query.
+
+A list entity isn't machine-learned. It is an exact text match. LUIS marks any match to an item in any list as an entity in the response.
+
+**The entity is a good fit when the text data:**
+
+* Is a known set.
+* Doesn't change often. If you need to change the list often or want the list to self-expand, a simple entity boosted with a phrase list is a better choice.
+* The set doesn't exceed the maximum LUIS [boundaries](luis-limits.md) for this entity type.
+* The text in the utterance is a case-insensitive match with a synonym or the canonical name. LUIS doesn't use the list beyond the match. Fuzzy matching, stemming, plurals, and other variations are not resolved with a list entity. To manage variations, consider using a [pattern](reference-pattern-syntax.md#syntax-to-mark-optional-text-in-a-template-utterance) with the optional text syntax.
+
+![list entity](./media/luis-concept-entities/list-entity.png)
+
+## Example .json to import into list entity
+
+ You can import values into an existing list entity using the following .json format:
+
+ ```JSON
+ [
+ {
+ "canonicalForm": "Blue",
+ "list": [
+ "navy",
+ "royal",
+ "baby"
+ ]
+ },
+ {
+ "canonicalForm": "Green",
+ "list": [
+ "kelly",
+ "forest",
+ "avacado"
+ ]
+ }
+ ]
+ ```
+
+## Example JSON response
+
+Suppose the app has a list, named `Cities`, allowing for variations of city names including city of airport (Sea-tac), airport code (SEA), postal zip code (98101), and phone area code (206).
+
+|List item|Item synonyms|
+|||
+|`Seattle`|`sea-tac`, `sea`, `98101`, `206`, `+1` |
+|`Paris`|`cdg`, `roissy`, `ory`, `75001`, `1`, `+33`|
+
+`book 2 tickets to paris`
+
+In the previous utterance, the word `paris` is mapped to the paris item as part of the `Cities` list entity. The list entity matches both the item's normalized name as well as the item synonyms.
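+
+Conceptually, the list lookup behaves like a case-insensitive dictionary from synonyms to a canonical value. The following is a minimal, hypothetical sketch of that idea only; it is not how LUIS is implemented.
+
+```javascript
+// Simplified illustration of list-entity matching against the Cities list above.
+const cities = {
+    Seattle: ['sea-tac', 'sea', '98101', '206', '+1'],
+    Paris: ['cdg', 'roissy', 'ory', '75001', '1', '+33']
+};
+
+function matchCity(token) {
+    const lower = token.toLowerCase();
+    for (const [canonical, synonyms] of Object.entries(cities)) {
+        if (canonical.toLowerCase() === lower || synonyms.includes(lower)) {
+            return canonical; // Return the normalized value, as in the responses below.
+        }
+    }
+    return null;
+}
+
+console.log(matchCity('paris')); // Paris
+```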
+
+#### [V2 prediction endpoint response](#tab/V2)
+
+```JSON
+ "entities": [
+ {
+ "entity": "paris",
+ "type": "Cities",
+ "startIndex": 18,
+ "endIndex": 22,
+ "resolution": {
+ "values": [
+ "Paris"
+ ]
+ }
+ }
+ ]
+```
+
+#### [V3 prediction endpoint response](#tab/V3)
+
+This is the JSON if `verbose=false` is set in the query string:
+
+```json
+"entities": {
+ "Cities": [
+ [
+ "Paris"
+ ]
+ ]
+}
+```
+
+This is the JSON if `verbose=true` is set in the query string:
+
+```json
+"entities": {
+ "Cities": [
+ [
+ "Paris"
+ ]
+ ],
+ "$instance": {
+ "Cities": [
+ {
+ "type": "Cities",
+ "text": "paris",
+ "startIndex": 18,
+ "length": 5,
+ "modelTypeId": 5,
+ "modelType": "List Entity Extractor",
+ "recognitionSources": [
+ "model"
+ ]
+ }
+ ]
+ }
+}
+```
+
+* * *
+
+|Data object|Entity name|Value|
+|--|--|--|
+|List Entity|`Cities`|`paris`|
+
+## Next steps
+
+Learn more about entities:
+
+* [Entity types](concepts/entities.md)
+* [Create entities](how-to/entities.md)
ai-services Reference Entity Machine Learned Entity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/LUIS/reference-entity-machine-learned-entity.md
+
+ Title: Machine-learning entity type - LUIS
+
+description: The machine-learning entity is the preferred entity for building LUIS applications.
+++++++ Last updated : 01/05/2022+
+# Machine-learning entity
+++
+The machine-learning entity is the preferred entity for building LUIS applications.
++
+## Example JSON
+
+Suppose the app takes pizza orders, such as the [decomposable entity tutorial](tutorial/build-decomposable-application.md). Each order can include several different pizzas, including different sizes.
+
+Example utterances include:
+
+|Example utterances for pizza app|
+|--|
+|`Can I get a pepperoni pizza and a can of coke please`|
+|`can I get a small pizza with onions peppers and olives`|
+|`pickup an extra large meat lovers pizza`|
+++
+#### [V3 prediction endpoint response](#tab/V3)
+
+Because a machine-learning entity can have many subentities with required features, this is just an example. Consider it a guide for what your entity will return.
+
+Consider the query:
+
+`deliver 1 large cheese pizza on thin crust and 2 medium pepperoni pizzas on deep dish crust`
+
+This is the JSON if `verbose=false` is set in the query string:
+
+```json
+"entities": {
+ "Order": [
+ {
+ "FullPizzaWithModifiers": [
+ {
+ "PizzaType": [
+ "cheese pizza"
+ ],
+ "Size": [
+ [
+ "Large"
+ ]
+ ],
+ "Quantity": [
+ 1
+ ]
+ },
+ {
+ "PizzaType": [
+ "pepperoni pizzas"
+ ],
+ "Size": [
+ [
+ "Medium"
+ ]
+ ],
+ "Quantity": [
+ 2
+ ],
+ "Crust": [
+ [
+ "Deep Dish"
+ ]
+ ]
+ }
+ ]
+ }
+ ],
+ "ToppingList": [
+ [
+ "Cheese"
+ ],
+ [
+ "Pepperoni"
+ ]
+ ],
+ "CrustList": [
+ [
+ "Thin"
+ ]
+ ]
+}
+
+```
+
+This is the JSON if `verbose=true` is set in the query string:
+
+```json
+"entities": {
+ "Order": [
+ {
+ "FullPizzaWithModifiers": [
+ {
+ "PizzaType": [
+ "cheese pizza"
+ ],
+ "Size": [
+ [
+ "Large"
+ ]
+ ],
+ "Quantity": [
+ 1
+ ],
+ "$instance": {
+ "PizzaType": [
+ {
+ "type": "PizzaType",
+ "text": "cheese pizza",
+ "startIndex": 16,
+ "length": 12,
+ "score": 0.999998868,
+ "modelTypeId": 1,
+ "modelType": "Entity Extractor",
+ "recognitionSources": [
+ "model"
+ ]
+ }
+ ],
+ "Size": [
+ {
+ "type": "SizeList",
+ "text": "large",
+ "startIndex": 10,
+ "length": 5,
+ "score": 0.998720646,
+ "modelTypeId": 1,
+ "modelType": "Entity Extractor",
+ "recognitionSources": [
+ "model"
+ ]
+ }
+ ],
+ "Quantity": [
+ {
+ "type": "builtin.number",
+ "text": "1",
+ "startIndex": 8,
+ "length": 1,
+ "score": 0.999878645,
+ "modelTypeId": 1,
+ "modelType": "Entity Extractor",
+ "recognitionSources": [
+ "model"
+ ]
+ }
+ ]
+ }
+ },
+ {
+ "PizzaType": [
+ "pepperoni pizzas"
+ ],
+ "Size": [
+ [
+ "Medium"
+ ]
+ ],
+ "Quantity": [
+ 2
+ ],
+ "Crust": [
+ [
+ "Deep Dish"
+ ]
+ ],
+ "$instance": {
+ "PizzaType": [
+ {
+ "type": "PizzaType",
+ "text": "pepperoni pizzas",
+ "startIndex": 56,
+ "length": 16,
+ "score": 0.999987066,
+ "modelTypeId": 1,
+ "modelType": "Entity Extractor",
+ "recognitionSources": [
+ "model"
+ ]
+ }
+ ],
+ "Size": [
+ {
+ "type": "SizeList",
+ "text": "medium",
+ "startIndex": 49,
+ "length": 6,
+ "score": 0.999841452,
+ "modelTypeId": 1,
+ "modelType": "Entity Extractor",
+ "recognitionSources": [
+ "model"
+ ]
+ }
+ ],
+ "Quantity": [
+ {
+ "type": "builtin.number",
+ "text": "2",
+ "startIndex": 47,
+ "length": 1,
+ "score": 0.9996054,
+ "modelTypeId": 1,
+ "modelType": "Entity Extractor",
+ "recognitionSources": [
+ "model"
+ ]
+ }
+ ],
+ "Crust": [
+ {
+ "type": "CrustList",
+ "text": "deep dish crust",
+ "startIndex": 76,
+ "length": 15,
+ "score": 0.761551,
+ "modelTypeId": 1,
+ "modelType": "Entity Extractor",
+ "recognitionSources": [
+ "model"
+ ]
+ }
+ ]
+ }
+ }
+ ],
+ "$instance": {
+ "FullPizzaWithModifiers": [
+ {
+ "type": "FullPizzaWithModifiers",
+ "text": "1 large cheese pizza on thin crust",
+ "startIndex": 8,
+ "length": 34,
+ "score": 0.616001546,
+ "modelTypeId": 1,
+ "modelType": "Entity Extractor",
+ "recognitionSources": [
+ "model"
+ ]
+ },
+ {
+ "type": "FullPizzaWithModifiers",
+ "text": "2 medium pepperoni pizzas on deep dish crust",
+ "startIndex": 47,
+ "length": 44,
+ "score": 0.7395033,
+ "modelTypeId": 1,
+ "modelType": "Entity Extractor",
+ "recognitionSources": [
+ "model"
+ ]
+ }
+ ]
+ }
+ }
+ ],
+ "ToppingList": [
+ [
+ "Cheese"
+ ],
+ [
+ "Pepperoni"
+ ]
+ ],
+ "CrustList": [
+ [
+ "Thin"
+ ]
+ ],
+ "$instance": {
+ "Order": [
+ {
+ "type": "Order",
+ "text": "1 large cheese pizza on thin crust and 2 medium pepperoni pizzas on deep dish crust",
+ "startIndex": 8,
+ "length": 83,
+ "score": 0.6881274,
+ "modelTypeId": 1,
+ "modelType": "Entity Extractor",
+ "recognitionSources": [
+ "model"
+ ]
+ }
+ ],
+ "ToppingList": [
+ {
+ "type": "ToppingList",
+ "text": "cheese",
+ "startIndex": 16,
+ "length": 6,
+ "modelTypeId": 5,
+ "modelType": "List Entity Extractor",
+ "recognitionSources": [
+ "model"
+ ]
+ },
+ {
+ "type": "ToppingList",
+ "text": "pepperoni",
+ "startIndex": 56,
+ "length": 9,
+ "modelTypeId": 5,
+ "modelType": "List Entity Extractor",
+ "recognitionSources": [
+ "model"
+ ]
+ }
+ ],
+ "CrustList": [
+ {
+ "type": "CrustList",
+ "text": "thin crust",
+ "startIndex": 32,
+ "length": 10,
+ "modelTypeId": 5,
+ "modelType": "List Entity Extractor",
+ "recognitionSources": [
+ "model"
+ ]
+ }
+ ]
+ }
+}
+```
+#### [V2 prediction endpoint response](#tab/V2)
+
+This entity isn't available in the V2 prediction runtime.
+* * *
+
+## Next steps
+
+Learn more about the machine-learning entity including a [tutorial](./tutorial/build-decomposable-application.md), [concepts](concepts/entities.md#machine-learned-ml-entity), and [how-to guide](how-to/entities.md#create-a-machine-learned-entity).
+
+Learn about the [list](reference-entity-list.md) entity and [regular expression](reference-entity-regular-expression.md) entity.
ai-services Reference Entity Pattern Any https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/LUIS/reference-entity-pattern-any.md
+
+ Title: Pattern.any entity type - LUIS
+
+description: Pattern.any is a variable-length placeholder used only in a pattern's template utterance to mark where the entity begins and ends.
+++++++ Last updated : 09/29/2019+
+# Pattern.any entity
+++
+Pattern.any is a variable-length placeholder used only in a pattern's template utterance to mark where the entity begins and ends.
+
+Pattern.any entities need to be marked in the [Pattern](luis-how-to-model-intent-pattern.md) template examples, not the intent user examples.
+
+**The entity is a good fit when:**
+
+* The ending of the entity can be confused with the remaining text of the utterance.
+
+## Usage
+
+Given a client application that searches for books based on title, the pattern.any extracts the complete title. A template utterance using pattern.any for this book search is `Was {BookTitle} written by an American this year[?]`.
+
+In the following table, each row has two versions of the utterance. The top utterance is how LUIS initially sees the utterance. It isn't clear where the book title begins and ends. The bottom utterance uses a Pattern.any entity to mark the beginning and end of the entity.
+
+|Utterance with entity in bold|
+|--|
+|`Was The Man Who Mistook His Wife for a Hat and Other Clinical Tales written by an American this year?`<br><br>Was **The Man Who Mistook His Wife for a Hat and Other Clinical Tales** written by an American this year?|
+|`Was Half Asleep in Frog Pajamas written by an American this year?`<br><br>Was **Half Asleep in Frog Pajamas** written by an American this year?|
+|`Was The Particular Sadness of Lemon Cake: A Novel written by an American this year?`<br><br>Was **The Particular Sadness of Lemon Cake: A Novel** written by an American this year?|
+|`Was There's A Wocket In My Pocket! written by an American this year?`<br><br>Was **There's A Wocket In My Pocket!** written by an American this year?|
+||
+++
+## Example JSON
+
+Consider the following query:
+
+`where is the form Understand your responsibilities as a member of the community and who needs to sign it after I read it?`
+
+With the embedded form name to extract as a Pattern.any:
+
+`Understand your responsibilities as a member of the community`
+
+#### [V2 prediction endpoint response](#tab/V2)
+
+```JSON
+"entities": [
+ {
+ "entity": "understand your responsibilities as a member of the community",
+ "type": "FormName",
+ "startIndex": 18,
+ "endIndex": 78,
+ "role": ""
+ }
+```
++
+#### [V3 prediction endpoint response](#tab/V3)
+
+This is the JSON if `verbose=false` is set in the query string:
+
+```json
+"entities": {
+ "FormName": [
+ "Understand your responsibilities as a member of the community"
+ ]
+}
+```
+
+This is the JSON if `verbose=true` is set in the query string:
+
+```json
+"entities": {
+ "FormName": [
+ "Understand your responsibilities as a member of the community"
+ ],
+ "$instance": {
+ "FormName": [
+ {
+ "type": "FormName",
+ "text": "Understand your responsibilities as a member of the community",
+ "startIndex": 18,
+ "length": 61,
+ "modelTypeId": 7,
+ "modelType": "Pattern.Any Entity Extractor",
+ "recognitionSources": [
+ "model"
+ ]
+ }
+ ]
+ }
+}
+```
+
+* * *
+
+## Next steps
+
+In this [tutorial](luis-how-to-model-intent-pattern.md), use the **Pattern.any** entity to extract data from utterances where the utterances are well-formatted and where the end of the data may be easily confused with the remaining words of the utterance.
ai-services Reference Entity Regular Expression https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/LUIS/reference-entity-regular-expression.md
+
+ Title: Regular expression entity type - LUIS
+description: A regular expression is best for raw utterance text. It ignores case and ignores cultural variant. Regular expression matching is applied after spell-check alterations at the character level, not the token level.
++++++ Last updated : 05/05/2021+
+# Regular expression entity
+++
+A regular expression entity extracts an entity based on a regular expression pattern you provide.
+
+A regular expression is best for raw utterance text. It ignores case and ignores cultural variants. Regular expression matching is applied after spell-check alterations at the token level. If the regular expression is too complex, such as using many brackets, you're not able to add the expression to the model. LUIS uses part, but not all, of the [.NET Regex](/dotnet/standard/base-types/regular-expressions) library.
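+
+Outside of LUIS, you can get a feel for this case-insensitive matching with a plain JavaScript regular expression. This is only a local approximation for experimenting with a pattern such as `kb[0-9]{6}`; LUIS applies its own engine, which uses a subset of .NET Regex.
+
+```javascript
+// Approximate the entity definition locally with a case-insensitive regular expression.
+const kbPattern = /kb[0-9]{6}/gi;
+
+const utterance = 'When was KB123456 published?';
+console.log(utterance.match(kbPattern)); // [ 'KB123456' ]
+```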
+
+**The entity is a good fit when:**
+
+* The data are consistently formatted with any variation that is also consistent.
+* The regular expression does not need more than 2 levels of nesting.
+
+![Regular expression entity](./media/luis-concept-entities/regex-entity.png)
+
+## Example JSON
+
+When using `kb[0-9]{6}`, as the regular expression entity definition, the following JSON response is an example utterance with the returned regular expression entities for the query:
+
+`When was kb123456 published?`:
+
+#### [V2 prediction endpoint response](#tab/V2)
+
+```JSON
+"entities": [
+ {
+ "entity": "kb123456",
+ "type": "KB number",
+ "startIndex": 9,
+ "endIndex": 16
+ }
+]
+```
++
+#### [V3 prediction endpoint response](#tab/V3)
++
+This is the JSON if `verbose=false` is set in the query string:
+
+```json
+"entities": {
+ "KB number": [
+ "kb123456"
+ ]
+}
+```
+
+This is the JSON if `verbose=true` is set in the query string:
+
+```json
+"entities": {
+ "KB number": [
+ "kb123456"
+ ],
+ "$instance": {
+ "KB number": [
+ {
+ "type": "KB number",
+ "text": "kb123456",
+ "startIndex": 9,
+ "length": 8,
+ "modelTypeId": 8,
+ "modelType": "Regex Entity Extractor",
+ "recognitionSources": [
+ "model"
+ ]
+ }
+ ]
+ }
+}
+```
+
+* * *
+
+## Next steps
+
+Learn more about entities:
+
+* [Entity types](concepts/entities.md)
+* [Create entities](how-to/entities.md)
ai-services Reference Entity Simple https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/LUIS/reference-entity-simple.md
+
+ Title: Simple entity type - LUIS
+
+description: A simple entity describes a single concept from the machine-learning context. Add a phrase list when using a simple entity to improve results.
+++++++ Last updated : 01/07/2022++
+# Simple entity
+++
+A simple entity is a generic entity that describes a single concept and is learned from the machine-learning context. Because simple entities are generally names such as company names, product names, or other categories of names, add a [phrase list](concepts/patterns-features.md) when using a simple entity to boost the signal of the names used.
+
+**The entity is a good fit when:**
+
+* The data aren't consistently formatted but indicate the same thing.
+
+![simple entity](./media/luis-concept-entities/simple-entity.png)
+
+## Example JSON
+
+`Bob Jones wants 3 meatball pho`
+
+In the previous utterance, `Bob Jones` is labeled as a simple `Customer` entity.
+
+The data returned from the endpoint includes the entity name, the discovered text from the utterance, the location of the discovered text, and the score:
+
+#### [V2 prediction endpoint response](#tab/V2)
+
+```JSON
+"entities": [
+ {
+ "entity": "bob jones",
+ "type": "Customer",
+ "startIndex": 0,
+ "endIndex": 8,
+ "score": 0.473899543
+ }
+]
+```
+
+#### [V3 prediction endpoint response](#tab/V3)
+
+This is the JSON if `verbose=false` is set in the query string:
+
+```json
+"entities": {
+ "Customer": [
+ "Bob Jones"
+ ]
+}
+```
+
+This is the JSON if `verbose=true` is set in the query string:
+
+```json
+"entities": {
+ "Customer": [
+ "Bob Jones"
+ ],
+ "$instance": {
+ "Customer": [
+ {
+ "type": "Customer",
+ "text": "Bob Jones",
+ "startIndex": 0,
+ "length": 9,
+ "score": 0.9339134,
+ "modelTypeId": 1,
+ "modelType": "Entity Extractor",
+ "recognitionSources": [
+ "model"
+ ]
+ }
+ ]
+ }
+}
+```
+
+* * *
+
+|Data object|Entity name|Value|
+|--|--|--|
+|Simple Entity|`Customer`|`bob jones`|
+
+## Next steps
+
+> [!div class="nextstepaction"]
+> [Learn pattern syntax](reference-pattern-syntax.md)
ai-services Reference Pattern Syntax https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/LUIS/reference-pattern-syntax.md
+
+ Title: Pattern syntax reference - LUIS
+description: Create entities to extract key data from user utterances in Language Understanding (LUIS) apps. Extracted data is used by the client application.
++++++ Last updated : 04/18/2022++
+# Pattern syntax
+++
+Pattern syntax is a template for an utterance. The template should contain words and entities you want to match as well as words and [punctuation](luis-reference-application-settings.md#punctuation-normalization) you want to ignore. It is **not** a regular expression.
+
+> [!CAUTION]
+> Patterns only include machine-learning entity parents, not subentities.
+
+Entities in patterns are surrounded by curly brackets, `{}`. Patterns can include entities, and entities with roles. [Pattern.any](concepts/entities.md#patternany-entity) is an entity only used in patterns.
+
+Pattern syntax supports the following syntax:
+
+|Function|Syntax|Nesting level|Example|
+|--|--|--|--|
+|entity| {} - curly brackets|2|Where is form {entity-name}?|
+|optional|[] - square brackets<BR><BR>There is a limit of 3 on nesting levels of any combination of optional and grouping |2|The question mark is optional [?]|
+|grouping|() - parentheses|2|is (a \| b)|
+|or| \| - vertical bar (pipe)<br><br>There is a limit of 2 on the vertical bars (Or) in one group |-|Where is form ({form-name-short} &#x7c; {form-name-long} &#x7c; {form-number})|
+|beginning and/or end of utterance|^ - caret|-|^begin the utterance<br>the utterance is done^<br>^strict literal match of entire utterance with {number} entity^|
+
+## Nesting syntax in patterns
+
+The **optional** syntax, with square brackets, can be nested two levels. For example: `[[this]is] a new form`. This example allows for the following utterances:
+
+|Nested optional utterance example|Explanation|
+|--|--|
+|this is a new form|matches all words in pattern|
+|is a new form|matches outer optional word and non-optional words in pattern|
+|a new form|matches required words only|
+
+The **grouping** syntax, with parentheses, can be nested two levels. For example: `(({Entity1:RoleName1} | {Entity1:RoleName2} ) | {Entity2} )`. This feature allows any of the three entities to be matched.
+
+If Entity1 is a Location with roles such as origin (Seattle) and destination (Cairo) and Entity 2 is a known building name from a list entity (RedWest-C), the following utterances would map to this pattern:
+
+|Nested grouping utterance example|Explanation|
+|--|--|
+|RedWest-C|matches outer grouping entity|
+|Seattle|matches one of the inner grouping entities|
+|Cairo|matches one of the inner grouping entities|
+
+## Nesting limits for groups with optional syntax
+
+A combination of **grouping** with **optional** syntax has a limit of 3 nesting levels.
+
+|Allowed|Example|
+|--|--|
+|Yes|( [ ( test1 &#x7c; test2 ) ] &#x7c; test3 )|
+|No|( [ ( [ test1 ] &#x7c; test2 ) ] &#x7c; test3 )|
+
+## Nesting limits for groups with or-ing syntax
+
+A combination of **grouping** with **or-ing** syntax has a limit of 2 vertical bars.
+
+|Allowed|Example|
+|--|--|
+|Yes|( test1 &#x7c; test2 &#x7c; ( test3 &#x7c; test4 ) )|
+|No|( test1 &#x7c; test2 &#x7c; test3 &#x7c; ( test4 &#x7c; test5 ) ) |
+
+## Syntax to add an entity to a pattern template
+
+To add an entity into the pattern template, surround the entity name with curly braces, such as `Who does {Employee} manage?`.
+
+|Pattern with entity|
+|--|
+|`Who does {Employee} manage?`|
+
+## Syntax to add an entity and role to a pattern template
+
+An entity role is denoted as `{entity:role}` with the entity name followed by a colon, then the role name. To add an entity with a role into the pattern template, surround the entity name and role name with curly braces, such as `Book a ticket from {Location:Origin} to {Location:Destination}`.
+
+|Pattern with entity roles|
+|--|
+|`Book a ticket from {Location:Origin} to {Location:Destination}`|
+
+## Syntax to add a pattern.any to pattern template
+
+The Pattern.any entity allows you to add an entity of varying length to the pattern. As long as the pattern template is followed, the pattern.any can be any length.
+
+To add a **Pattern.any** entity into the pattern template, surround the Pattern.any entity with the curly braces, such as `How much does {Booktitle} cost and what format is it available in?`.
+
+|Pattern with Pattern.any entity|
+|--|
+|`How much does {Booktitle} cost and what format is it available in?`|
+
+|Book titles in the pattern|
+|--|
+|How much does **steal this book** cost and what format is it available in?|
+|How much does **ask** cost and what format is it available in?|
+|How much does **The Curious Incident of the Dog in the Night-Time** cost and what format is it available in?|
+
+The words of the book title are not confusing to LUIS because LUIS knows where the book title ends, based on the Pattern.any entity.
+
+## Explicit lists
+
+Create an [Explicit List](https://westus.dev.cognitive.microsoft.com/docs/services/5890b47c39e2bb17b84a55ff/operations/5ade550bd5b81c209ce2e5a8) through the authoring API to allow an exception to the pattern when:
+
+* Your pattern contains a [Pattern.any](concepts/entities.md#patternany-entity)
+* And that pattern syntax allows for the possibility of an incorrect entity extraction based on the utterance.
+
+For example, suppose you have a pattern containing both optional syntax, `[]`, and entity syntax, `{}`, combined in a way to extract data incorrectly.
+
+Consider the pattern `[find] email about {subject} [from {person}]`.
+
+In the following utterances, the **subject** and **person** entity are extracted correctly and incorrectly:
+
+|Utterance|Entity|Correct extraction|
+|--|--|:--:|
+|email about dogs from Chris|subject=dogs<br>person=Chris|✔|
+|email about the man from La Mancha|subject=the man<br>person=La Mancha|X|
+
+In the preceding table, the subject should be `the man from La Mancha` (a book title) but because the subject includes the optional word `from`, the title is incorrectly predicted.
+
+To fix this exception to the pattern, add `the man from la mancha` as an explicit list match for the {subject} entity using the [authoring API for explicit list](https://westus.dev.cognitive.microsoft.com/docs/services/5890b47c39e2bb17b84a55ff/operations/5ade550bd5b81c209ce2e5a8).
+
+## Syntax to mark optional text in a template utterance
+
+Mark optional text in the utterance using the square bracket syntax, `[]`. Optional text can be nested in square brackets up to two levels only.
+
+|Pattern with optional text|Meaning|
+|--|--|
+|`[find] email about {subject} [from {person}]`|`find` and `from {person}` are optional|
+|`Can you help me[?]`|The punctuation mark is optional|
+
+Punctuation marks (`?`, `!`, `.`) should be ignored, and you ignore them by using the square bracket syntax in patterns.
+
+## Next steps
+
+Learn more about patterns:
+
+* [How to add patterns](luis-how-to-model-intent-pattern.md)
+* [How to add pattern.any entity](how-to/entities.md#create-a-patternany-entity)
+* [Patterns Concepts](concepts/patterns-features.md)
+
+Understand how [sentiment](luis-reference-prebuilt-sentiment.md) is returned in the .json response.
ai-services Role Based Access Control https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/LUIS/role-based-access-control.md
+
+ Title: LUIS role-based access control
+
+description: Use this article to learn how to add access control to your LUIS resource
+++++ Last updated : 08/23/2022+++
+# LUIS role-based access control
+++
+LUIS supports Azure role-based access control (Azure RBAC), an authorization system for managing individual access to Azure resources. Using Azure RBAC, you assign different team members different levels of permissions for your LUIS authoring resources. See the [Azure RBAC documentation](../../role-based-access-control/index.yml) for more information.
+
+## Enable Azure Active Directory authentication
+
+To use Azure RBAC, you must enable Azure Active Directory authentication. You can [create a new resource with a custom subdomain](../authentication.md#create-a-resource-with-a-custom-subdomain) or [create a custom subdomain for your existing resource](../cognitive-services-custom-subdomains.md#how-does-this-impact-existing-resources).
+
+## Add role assignment to Language Understanding Authoring resource
+
+Azure RBAC can be assigned to a Language Understanding Authoring resource. To grant access to an Azure resource, you add a role assignment.
+1. In the [Azure portal](https://portal.azure.com/), select **All services**.
+2. Select **Azure AI services**, and navigate to your specific Language Understanding Authoring resource.
+ > [!NOTE]
+ > You can also set up Azure RBAC for whole resource groups, subscriptions, or management groups. Do this by selecting the desired scope level and then navigating to the desired item. For example, selecting **Resource groups** and then navigating to a specific resource group.
+
+1. Select **Access control (IAM)** on the left navigation pane.
+1. Select **Add**, then select **Add role assignment**.
+1. On the **Role** tab on the next screen, select a role you want to add.
+1. On the **Members** tab, select a user, group, service principal, or managed identity.
+1. On the **Review + assign** tab, select **Review + assign** to assign the role.
+
+Within a few minutes, the target will be assigned the selected role at the selected scope. For help with these steps, see [Assign Azure roles using the Azure portal](../../role-based-access-control/role-assignments-portal.md).
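+If you script your deployments, the same assignment can be made with the Azure CLI. The following is a minimal sketch with placeholder values; substitute the assignee, the exact built-in role name shown under **Access control (IAM)** (for example, *Cognitive Services LUIS Writer*), and the resource ID of your authoring resource.
+
+```azurecli
+# Minimal sketch: assign a LUIS role to a user at the scope of a single authoring resource (placeholder values).
+az role assignment create \
+  --assignee "user@contoso.com" \
+  --role "Cognitive Services LUIS Writer" \
+  --scope "/subscriptions/{subscription-id}/resourceGroups/{resource-group-name}/providers/Microsoft.CognitiveServices/accounts/{resource-name}"
+```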
++
+## LUIS role types
+
+Use the following table to determine access needs for your LUIS application.
+
+These custom roles only apply to authoring (Language Understanding Authoring) and not prediction resources (Language Understanding).
+
+> [!NOTE]
+> * *Owner* and *Contributor* roles take priority over the custom LUIS roles.
+> * Azure Active Directory (Azure AD) is only used with custom LUIS roles.
+> * If you are assigned as a *Contributor* on Azure, your role will be shown as *Owner* in the LUIS portal.
++
+### Azure AI services LUIS reader
+
+A user that should only be validating and reviewing LUIS applications, typically a tester to ensure the application is performing well before deploying the project. They may want to review the application's assets (utterances, intents, entities) to notify the app developers of any changes that need to be made, but do not have direct access to make them.
++
+ :::column span="":::
+ **Capabilities**
+ :::column-end:::
+ :::column span="":::
+ **API Access**
+ :::column-end:::
+ :::column span="":::
+ * Read Utterances
+ * Intents
+ * Entities
+ * Test Application
+ :::column-end:::
+ :::column span="":::
+ All GET APIs under:
+ * [LUIS Programmatic v3.0-preview](https://westus.dev.cognitive.microsoft.com/docs/services/luis-programmatic-apis-v3-0-preview/operations/5890b47c39e2bb052c5b9c2f)
+ * [LUIS Programmatic v2.0 APIs](https://westus.dev.cognitive.microsoft.com/docs/services/5890b47c39e2bb17b84a55ff/operations/5890b47c39e2bb052c5b9c2f)
+
+ All the APIs under:
+ * [LUIS Endpoint APIs v2.0](./luis-migration-api-v1-to-v2.md)
+ * [LUIS Endpoint APIs v3.0](https://westcentralus.dev.cognitive.microsoft.com/docs/services/luis-endpoint-api-v3-0/operations/5cb0a9459a1fe8fa44c28dd8)
+ * [LUIS Endpoint APIs v3.0-preview](https://westcentralus.dev.cognitive.microsoft.com/docs/services/luis-endpoint-api-v3-0-preview/operations/5cb0a9459a1fe8fa44c28dd8)
+
+ All the Batch Testing Web APIs
+ :::column-end:::
+
+### Azure AI services LUIS writer
+
+A user that is responsible for building and modifying a LUIS application, as a collaborator in a larger team. The collaborator can modify the LUIS application in any way, train those changes, and validate/test those changes in the portal. However, this user wouldn't have access to deploy this application to the runtime, as they may accidentally reflect their changes in a production environment. They also wouldn't be able to delete the application or alter its prediction resources and endpoint settings (assigning or unassigning prediction resources, making the endpoint public). This restricts this role from altering an application currently being used in a production environment. They may also create new applications under this resource, but with the restrictions mentioned.
+
+ :::column span="":::
+ **Capabilities**
+ :::column-end:::
+ :::column span="":::
+ **API Access**
+ :::column-end:::
+ :::column span="":::
+ * All functionalities under Azure AI services LUIS Reader.
+
+ The ability to add:
+ * Utterances
+ * Intents
+ * Entities
+ :::column-end:::
+ :::column span="":::
+ * All APIs under LUIS reader
+
+ All POST, PUT and DELETE APIs under:
+
+ * [LUIS Programmatic v3.0-preview](https://westus.dev.cognitive.microsoft.com/docs/services/luis-programmatic-apis-v3-0-preview/operations/5890b47c39e2bb052c5b9c2f)
+ * [LUIS Programmatic v2.0 APIs](https://westus.dev.cognitive.microsoft.com/docs/services/5890b47c39e2bb17b84a55ff/operations/5890b47c39e2bb052c5b9c2d)
+
+ Except for
+ * [Delete application](https://westus.dev.cognitive.microsoft.com/docs/services/luis-programmatic-apis-v3-0-preview/operations/5890b47c39e2bb052c5b9c39)
+ * [Move app to another LUIS authoring Azure resource](https://westus.dev.cognitive.microsoft.com/docs/services/luis-programmatic-apis-v3-0-preview/operations/apps-move-app-to-another-luis-authoring-azure-resource)
+ * [Publish an application](https://westus.dev.cognitive.microsoft.com/docs/services/luis-programmatic-apis-v3-0-preview/operations/5890b47c39e2bb052c5b9c3b)
+ * [Update application settings](https://westus.dev.cognitive.microsoft.com/docs/services/luis-programmatic-apis-v3-0-preview/operations/58aeface39e2bb03dcd5909e)
+ * [Assign a LUIS azure accounts to an application](https://westus.dev.cognitive.microsoft.com/docs/services/luis-programmatic-apis-v3-0-preview/operations/5be32228e8473de116325515)
+ * [Remove an assigned LUIS azure accounts from an application](https://westus.dev.cognitive.microsoft.com/docs/services/luis-programmatic-apis-v3-0-preview/operations/5be32554f8591db3a86232e1)
+ :::column-end:::
+
+### Azure AI services LUIS owner
+
+> [!NOTE]
+> * If you are assigned as both an *Owner* and a *LUIS Owner*, you will be shown as *LUIS Owner* in the LUIS portal.
+
+These users are the gatekeepers for LUIS applications in a production environment. They should have full access to any of the underlying functions and thus can view everything in the application and have direct access to edit any changes for both authoring and runtime environments.
+
+ :::column span="":::
+ **Functionality**
+ :::column-end:::
+ :::column span="":::
+ **API Access**
+ :::column-end:::
+ :::column span="":::
+ * All functionalities under Azure AI services LUIS Writer
+ * Deploy a model
+ * Delete an application
+ :::column-end:::
+ :::column span="":::
+ * All APIs available for LUIS
+ :::column-end:::
+
+## Next steps
+
+* [Managing Azure resources](./luis-how-to-azure-subscription.md?tabs=portal#authoring-resource)
ai-services Schema Change Prediction Runtime https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/LUIS/schema-change-prediction-runtime.md
+
+ Title: Extend app at runtime - LUIS
+description: Learn how to extend an already published prediction endpoint to pass new information.
++++++ Last updated : 04/14/2020+
+# Extend app at prediction runtime
+++
+The app's schema (models and features) is trained and published to the prediction endpoint. This published model is used on the prediction runtime. You can pass new information, along with the user's utterance, to the prediction runtime to augment the prediction.
+
+Two prediction runtime schema changes include:
+* [External entities](#external-entities)
+* [Dynamic lists](#dynamic-lists)
+
+<a name="external-entities-passed-in-at-prediction-time"></a>
+
+## External entities
+
+External entities give your LUIS app the ability to identify and label entities during runtime, which can be used as features to existing entities. This allows you to use your own separate and custom entity extractors before sending queries to your prediction endpoint. Because this is done at the query prediction endpoint, you don't need to retrain and publish your model.
+
+The client application provides its own entity extractor by managing entity matching, determining the location of the matched entity within the utterance, and then sending that information with the request.
+
+External entities are the mechanism for extending any entity type while still being used as signals to other models.
+
+This is useful for an entity whose data is available only at query prediction runtime, such as data that changes constantly or data that is specific to each user. For example, you can extend a LUIS contact entity with external information from a user's contact list.
+
+External entities are part of the V3 prediction API. Learn more about [migrating](luis-migration-api-v3.md) to this version.
+
+### Entity already exists in app
+
+The value of `entityName` for the external entity, passed in the endpoint request POST body, must already exist in the trained and published app at the time the request is made. The type of entity doesn't matter; all types are supported.
+
+### First turn in conversation
+
+Consider a first utterance in a chat bot conversation where a user enters the following incomplete information:
+
+`Send Hazem a new message`
+
+The request from the chat bot to LUIS can pass in information in the POST body about `Hazem` so it is directly matched as one of the user's contacts.
+
+```json
+ "externalEntities": [
+ {
+ "entityName":"contacts",
+ "startIndex": 5,
+ "entityLength": 5,
+ "resolution": {
+ "employeeID": "05013",
+ "preferredContactType": "TeamsChat"
+ }
+ }
+ ]
+```
+
+The prediction response includes that external entity, with all the other predicted entities, because it is defined in the request.
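+For context, the fragment above belongs in the body of a `POST` query prediction request. A minimal cURL sketch, with placeholder subdomain, app ID, and key, might look like the following; the route matches the V3 prediction endpoint format used elsewhere in this documentation.
+
+```cURL
+# Minimal sketch of a V3 prediction request that passes an external entity (placeholder values).
+curl -X POST "https://YOUR-CUSTOM-SUBDOMAIN.api.cognitive.microsoft.com/luis/prediction/v3.0/apps/YOUR-APP-ID/slots/production/predict" \
+-H "Ocp-Apim-Subscription-Key: YOUR_SUBSCRIPTION_KEY" \
+-H "Content-Type: application/json" \
+--data-raw '{
+  "query": "Send Hazem a new message",
+  "externalEntities": [
+    {
+      "entityName": "contacts",
+      "startIndex": 5,
+      "entityLength": 5,
+      "resolution": {
+        "employeeID": "05013",
+        "preferredContactType": "TeamsChat"
+      }
+    }
+  ]
+}'
+```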
+
+### Second turn in conversation
+
+The next user utterance into the chat bot uses a more vague term:
+
+`Send him a calendar reminder for the party.`
+
+In this turn of the conversation, the utterance uses `him` as a reference to `Hazem`. The conversational chat bot, in the POST body, can map `him` to the entity value extracted from the first utterance, `Hazem`.
+
+```json
+ "externalEntities": [
+ {
+ "entityName":"contacts",
+ "startIndex": 5,
+ "entityLength": 3,
+ "resolution": {
+ "employeeID": "05013",
+ "preferredContactType": "TeamsChat"
+ }
+ }
+ ]
+```
+
+The prediction response includes that external entity, with all the other predicted entities, because it is defined in the request.
+
+### Override existing model predictions
+
+The `preferExternalEntities` options property specifies whether LUIS uses the external entity passed in the request or the entity predicted by the model when the two overlap and have the same name.
+
+For example, consider the query `today I'm free`. LUIS detects `today` as a datetimeV2 with the following response:
+
+```JSON
+"datetimeV2": [
+ {
+ "type": "date",
+ "values": [
+ {
+ "timex": "2019-06-21",
+ "value": "2019-06-21"
+ }
+ ]
+ }
+]
+```
+
+If the user sends the external entity:
+
+```JSON
+{
+ "entityName": "datetimeV2",
+ "startIndex": 0,
+ "entityLength": 5,
+ "resolution": {
+ "date": "2019-06-21"
+ }
+}
+```
+
+If `preferExternalEntities` is set to `false`, LUIS returns a response as if the external entity were not sent.
+
+```JSON
+"datetimeV2": [
+ {
+ "type": "date",
+ "values": [
+ {
+ "timex": "2019-06-21",
+ "value": "2019-06-21"
+ }
+ ]
+ }
+]
+```
+
+If `preferExternalEntities` is set to `true`, LUIS returns a response that includes:
+
+```JSON
+"datetimeV2": [
+ {
+ "date": "2019-06-21"
+ }
+]
+```
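+The flag itself is set in the `options` object of the query prediction request body. As a minimal sketch (placeholder values for the subdomain, app ID, and key):
+
+```cURL
+# Sketch: send the external datetimeV2 entity and prefer it over the model's own prediction.
+# The apostrophe in "today I'm free" is dropped here to keep the shell quoting simple.
+curl -X POST "https://YOUR-CUSTOM-SUBDOMAIN.api.cognitive.microsoft.com/luis/prediction/v3.0/apps/YOUR-APP-ID/slots/production/predict" \
+-H "Ocp-Apim-Subscription-Key: YOUR_SUBSCRIPTION_KEY" \
+-H "Content-Type: application/json" \
+--data-raw '{
+  "query": "today I am free",
+  "options": { "preferExternalEntities": true },
+  "externalEntities": [
+    { "entityName": "datetimeV2", "startIndex": 0, "entityLength": 5, "resolution": { "date": "2019-06-21" } }
+  ]
+}'
+```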
+++
+#### Resolution
+
+The _optional_ `resolution` property is returned in the prediction response, allowing you to pass in metadata associated with the external entity and then receive it back in the response.
+
+Its primary purpose is to extend prebuilt entities, but it is not limited to that entity type.
+
+The `resolution` property can be a number, a string, an object, or an array:
+
+* "Dallas"
+* {"text": "value"}
+* 12345
+* ["a", "b", "c"]
+
+<a name="dynamic-lists-passed-in-at-prediction-time"></a>
+
+## Dynamic lists
+
+Dynamic lists allow you to extend an existing trained and published list entity, already in the LUIS app.
+
+Use this feature when your list entity values need to change periodically. This feature allows you to extend an already trained and published list entity:
+
+* At the time of the query prediction endpoint request.
+* For a single request.
+
+The list entity can be empty in the LUIS app but it has to exist. The list entity in the LUIS app isn't changed, but the prediction ability at the endpoint is extended to include up to 2 lists with about 1,000 items.
+
+### Dynamic list JSON request body
+
+Send in the following JSON body to add a new sublist with synonyms to the list, and predict the list entity for the text, `LUIS`, with the `POST` query prediction request:
+
+```JSON
+{
+ "query": "Send Hazem a message to add an item to the meeting agenda about LUIS.",
+ "options":{
+ "timezoneOffset": "-8:00"
+ },
+ "dynamicLists": [
+ {
+    "listEntityName": "ProductList",
+ "requestLists":[
+ {
+ "name": "Azure AI services",
+ "canonicalForm": "Azure-Cognitive-Services",
+ "synonyms":[
+ "language understanding",
+ "luis",
+ "qna maker"
+ ]
+ }
+ ]
+ }
+ ]
+}
+```
+
+The prediction response includes that list entity, with all the other predicted entities, because it is defined in the request.
+
+## Next steps
+
+* [Prediction score](luis-concept-prediction-score.md)
+* [Authoring API V3 changes](luis-migration-api-v3.md)
ai-services Build Decomposable Application https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/LUIS/tutorial/build-decomposable-application.md
+
+ Title: Build a decomposable LUIS application
+description: Use this tutorial to learn how to build a decomposable application.
+++
+ms.
++ Last updated : 01/10/2022++
+# Build a decomposable LUIS application
+++
+In this tutorial, you will create a telecom LUIS application that can predict different user intentions. By the end of the tutorial, you will have a telecom application that can predict user intentions based on text provided by users.
+
+We will be handling different user scenarios (intents) such as:
+
+* Signing up for a new telecom line
+* Updating an existing tier
+* Paying a bill
+
+In this tutorial you will learn how to:
+
+1. Create a LUIS application
+2. Create intents
+3. Add entities
+4. Add utterances
+5. Label example utterances
+6. Train an app
+7. Publish an app
+8. Get predictions from the published endpoint
+
+## Create a LUIS application
+
+1. Sign in to the [LUIS portal](https://www.luis.ai/)
+2. Create a new application by selecting **+New app**.
+
+ :::image type="content" source="../media/build-decomposable-app/create-luis-app.png" alt-text="A screenshot of the application creation screen." lightbox="../media/build-decomposable-app/create-luis-app.png":::
+
+3. In the window that appears, enter the name "Telecom Tutorial", keeping the default culture, **English**. The other fields are optional, do not set them. Select **Done**.
+
+ :::image type="content" source="../media/build-decomposable-app/create-new-app.png" alt-text="A screenshot of the LUIS application's creation fields." lightbox="../media/build-decomposable-app/create-new-app.png":::
++
+## User intentions as intents
+
+The first thing you will see in the **Build** section is the app's intents. [Intents](../concepts/intents.md) represent a task or an action a user wants to perform.
+
+Imagine a telecom LUIS application, what would a user need?
+
+They would probably need to perform some type of user action or ask for help. Another user might want to update their tier or **pay a bill**.
+
+The resulting schema is as follows. For more information, see [best practices about planning the schema](../faq.md).
+
+| **Intent** | **Purpose** |
+| | |
+| UserActions | Determine user actions |
+| Help | Request help |
+| UpdateTier | Update current tier |
+| PayBill | Pay outstanding bill |
+| None | Determine if user is asking something the LUIS app is not designed to answer. This intent is provided as part of app creation and can't be deleted. |
+
+## Create a new Intent
+
+An intent is used to classify user utterances based on the user's intention, determined from the natural language text.
+
+To classify an utterance, the intent needs examples of user utterances that should be classified with this intent.
+
+1. Select **Build** from the top navigation menu, then select **Intents** on the left side of the screen. Select **+ Create** to create a new intent. Enter the new intent name, "UserAction", then select **Done**.
+
+ UserAction could be one of many intents. For example, some users might want to sign up for a new line, while others might ask to retrieve information.
+
+2. Add several example utterances to this intent that you expect a user to ask:
+
+ * Hi! I want to sign up for a new line
+ * Can I Sign up for a new line?
+ * Hello, I want a new line
+ * I forgot my line number!
+ * I would like a new line number
+
+ :::image type="content" source="../media/build-decomposable-app/user-actions.png" alt-text="A screenshot showing example utterances for the UserAction intent." lightbox="../media/build-decomposable-app/user-actions.png":::
+
+For the **PayBill** intent, some utterances could be:
+
+* I want to pay my bill
+* Settle my bill
+* Pay bill
+* I want to close my current balance
+* Hey! I want to pay the current bill
+
+By providing _example utterances_, you are teaching LUIS about what kinds of utterances should be predicted for this intent. These are positive examples. The utterances in all the other intents are treated as negative examples for this intent. Ideally, the more example utterances you add, the better your app's predictions will be.
+
+These few utterances are for demonstration purposes only. A real-world app should have at least 15-30 [utterances](../concepts/utterances.md) of varying length, word order, tense, grammatical correctness, punctuation, and word count.
+
+## Creating the remaining intents
+
+Perform the above steps to add the following intents to the app:
+
+**"Help"**
+
+* "I need help"
+* "I need assistance"
+* "Help please"
+* "Can someone support me?"
+* "I'm stuck, can you help me"
+* "Can I get help?"
+
+**"UpdateTier"**
+
+* "I want to update my tier"
+* "Update my tier"
+* "I want to change to VIP tier"
+* "Change my subscription to standard tier"
+
+## Example utterances for the None intent
+
+The client application needs to know if an utterance is not meaningful or appropriate for the application. The "None" intent is added to each application as part of the creation process to determine if an utterance shouldn't be answered by the client application.
+
+If LUIS returns the "None" intent for an utterance, your client application can ask if the user wants to end the conversation or give more directions for continuing the conversation.
+
+If you leave the "None" intent empty, an utterance that should be predicted outside the subject domain will be predicted in one of the existing subject domain intents. The result is that the client application, such as a chat bot, will perform incorrect operations based on an incorrect prediction.
+
+1. Select **Intents** from the left panel.
+2. Select the **None** intent. Add three utterances that your user might enter but are not relevant to your Telecom app. These examples shouldn't use words you expect in your subject domain such as Tier, upgrade, signup, bill.
+
+ * "When is my flight?"
+ * "I need to change my pizza order please"
+ * "What is the weather like for today?"
+
+## Add entities
+
+An entity is an item or element that is relevant to the user's intent. Entities define data that can be extracted from the utterance and is essential to complete a user's required action.
+
+1. In the build section, select **Entities.**
+2. To add a new entity, select **+Create**
+
+    In this example, we will be creating two entities, "**UpdateTierInfo**" as a machine-learned entity type, and "Tier" as a list entity type. LUIS also lets you create [different entity types](../concepts/entities.md).
+
+3. In the window that appears, enter "**UpdateTierInfo**", and select Machine learned from the available types. Select the **Add structure** box to be able to add a structure to this entity.
+
+ :::image type="content" source="../media/build-decomposable-app/create-entity.png" alt-text="A screenshot showing an entity." lightbox="../media/build-decomposable-app/create-entity.png":::
+
+4. Select **Next**.
+5. To add a child subentity, select the "**+**" symbol and start adding the child. For our entity example, "**UpdateTierInfo**", we require three things:
+ * **OriginalTier**
+ * **NewTier**
+ * **PhoneNumber**
+
+ :::image type="content" source="../media/build-decomposable-app/add-subentities.png" alt-text="A screenshot of subentities in the app." lightbox="../media/build-decomposable-app/add-subentities.png":::
+
+6. Select **Create** after adding all the subentities.
+
+ We will create another entity named "**Tier**", but this time it will be a list entity, and it will include all the tiers that we might provide: Standard tier, Premium tier, and VIP tier.
+
+1. To do this, go to the **Entities** tab, select **+ Create**, and then select **List** from the types in the screen that appears.
+
+2. Add the items to your list, and optionally, you can add synonyms to make sure that all cases of that mention will be understood.
+
+ :::image type="content" source="../media/build-decomposable-app/list-entities.png" alt-text="A screenshot of a list entity." lightbox="../media/build-decomposable-app/list-entities.png":::
+
+3. Now go back to the "**UpdateTierInfo**" entity and add the "tier" entity as a feature for the "**OriginalTier**" and "**newTier**" entities we created earlier. It should look something like this:
+
+ :::image type="content" source="../media/build-decomposable-app/update-tier-info.png" alt-text="A screenshot of an entity's features." lightbox="../media/build-decomposable-app/update-tier-info.png":::
+
+    We added the "Tier" entity as a feature for both "**originalTier**" and "**newTier**", and we added the "**Phonenumber**" entity, which is a regular expression (Regex) entity type. It can be created the same way we created the machine learned and list entities.
+
+Now we have successfully created intents, added example utterances, and added entities. We created four intents (other than the "none" intent), and three entities.
+
+## Label example utterances
+
+The machine learned entity is created and the subentities have features. To complete the extraction improvement, the example utterances need to be labeled with the subentities.
+
+There are two ways to label utterances:
+
+1. Using the labeling tool
+    1. Open the **Entity Palette**, and select the "**@**" symbol in the contextual toolbar.
+ 2. Select each entity row in the palette, then use the palette cursor to select the entity in each example utterance.
+2. Highlight the text by dragging your cursor over the text you want to label. In the following image, we highlighted "vip - tier" and selected the "**NewTier**" entity.
+
+ :::image type="content" source="../media/build-decomposable-app/label-example-utterance.png" alt-text="A screenshot showing how to label utterances." lightbox="../media/build-decomposable-app/label-example-utterance.png":::
++
+## Train the app
+
+In the top-right side of the LUIS website, select the **Train** button.
+
+Before training, make sure there is at least one utterance for each intent.
++
+## Publish the app
+
+In order to receive a LUIS prediction in a chat bot or other client application, you need to publish the app to the prediction endpoint. To publish, you must first train your application.
+
+1. Select **Publish** in the top-right navigation.
+
+ :::image type="content" source="../media/build-decomposable-app/publish-app.png" alt-text="A screenshot showing the button for publishing an app." lightbox="../media/build-decomposable-app/publish-app.png":::
+
+2. Select the **Production** slot, then select **Done**.
+
+ :::image type="content" source="../media/build-decomposable-app/production-slot.png" alt-text="A screenshot showing the production slot selector." lightbox="../media/build-decomposable-app/production-slot.png":::
++
+3. Select **Access your endpoint URLs** in the notification to go to the **Azure Resources** page. You will only be able to see the URLs if you have a prediction resource associated with the app. You can also find the **Azure Resources** page by clicking **Manage** on the left of the screen.
+
+ :::image type="content" source="../media/build-decomposable-app/access-endpoint.png" alt-text="A screenshot showing the endpoint access notification." lightbox="../media/build-decomposable-app/access-endpoint.png":::
++
+## Get intent prediction
+
+1. Select **Manage** in the top-right menu, then select **Azure Resources** on the left.
+2. Copy the **Example Query** URL and paste it into a new web browser tab.
+
+ The endpoint URL will have the following format.
+
+ ```http
+    https://YOUR-CUSTOM-SUBDOMAIN.api.cognitive.microsoft.com/luis/prediction/v3.0/apps/YOUR-APP-ID/slots/production/predict?subscription-key=YOUR-KEY-ID&verbose=true&show-all-intents=true&log=true&query=YOUR_QUERY_HERE
+ ```
+
+3. Go to the end of the URL in the address bar and replace the `query=` string parameter with:
+
+ "Hello! I am looking for a new number please."
+
+ The utterance **query** is passed in the URI. This utterance is not the same as any of the example utterances, and should be a good test to check if LUIS predicts the UserAction intent as the top scoring intent.
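+    You can make the same request from the command line with cURL. This is a sketch using the placeholder values from the URL format above; `--data-urlencode` takes care of encoding the utterance into the `query` parameter.
+
+    ```cURL
+    # Sketch: query the published prediction endpoint (placeholder subdomain, app ID, and key).
+    curl -G "https://YOUR-CUSTOM-SUBDOMAIN.api.cognitive.microsoft.com/luis/prediction/v3.0/apps/YOUR-APP-ID/slots/production/predict" \
+      --data-urlencode "subscription-key=YOUR-KEY-ID" \
+      --data-urlencode "verbose=true" \
+      --data-urlencode "show-all-intents=true" \
+      --data-urlencode "query=Hello! I am looking for a new number please."
+    ```
+
+    In either case, the response resembles the following JSON: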
+ ```JSON
+ {
+ "query": "hello! i am looking for a new number please",
+ "prediction":
+ {
+ "topIntent": "UserAction",
+ "intents":
+ {
+ "UserAction": {
+ "score": 0.8607431},
+ "Help":{
+ "score": 0.031376917},
+ "PayBill": {
+ "score": 0.01989629},
+ "None": {
+ "score": 0.013738701},
+ "UpdateTier": {
+ "score": 0.012313577}
+ },
+ "entities": {}
+ }
+ }
+ ```
+The JSON result identifies the top scoring intent in the **prediction.topIntent** property. All scores are between 0 and 1, with better scores being closer to 1.
+
+## Client-application next steps
+
+In this tutorial, you created a LUIS app, created intents and entities, added example utterances to each intent (including the None intent), and then trained, published, and tested at the endpoint. These are the basic steps of building a LUIS model.
+
+LUIS does not provide answers to user utterances, it only identifies what type of information is being asked for in natural language. The conversation follow-up is provided by the client application such as an [Azure Bot](https://azure.microsoft.com/services/bot-services/).
+
+## Clean up resources
+
+When no longer needed, delete the LUIS app. To do so, select **My apps** from the top-left menu. Select the ellipsis (**_..._**) to the right of the app name in the app list, then select **Delete**. In the pop-up dialog named **Delete app?**, select **Ok**.
ai-services What Is Luis https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/LUIS/what-is-luis.md
+
+ Title: Language Understanding (LUIS) Overview
+description: Language Understanding (LUIS) is a cloud-based API service that applies machine-learning intelligence to conversational, natural language text to predict meaning and extract information.
+keywords: Azure, artificial intelligence, ai, natural language processing, nlp, natural language understanding, nlu, LUIS, conversational AI, ai chatbot, nlp ai, azure luis
+++ Last updated : 07/19/2022+++
+ms.
++
+# What is Language Understanding (LUIS)?
+++
+Language Understanding (LUIS) is a cloud-based conversational AI service that applies custom machine-learning intelligence to a user's conversational, natural language text to predict overall meaning, and pull out relevant, detailed information. LUIS provides access through its [custom portal](https://www.luis.ai), [APIs][endpoint-apis] and [SDK client libraries](client-libraries-rest-api.md).
+
+If you're a first-time user, follow these steps to [sign in to the LUIS portal](how-to/sign-in.md "sign in to LUIS portal").
+To get started, you can try a LUIS [prebuilt domain app](luis-get-started-create-app.md).
+
+This documentation contains the following article types:
+
+* [**Quickstarts**](luis-get-started-create-app.md) are getting-started instructions to guide you through making requests to the service.
+* [**How-to guides**](how-to/sign-in.md) contain instructions for using the service in more specific or customized ways.
+* [**Concepts**](concepts/application-design.md) provide in-depth explanations of the service functionality and features.
+* [**Tutorials**](tutorial/build-decomposable-application.md) are longer guides that show you how to use the service as a component in broader business solutions.
+
+## What does LUIS offer?
+
+* **Simplicity**: LUIS removes the need for in-house AI expertise or any prior machine learning knowledge. With only a few clicks you can build your own conversational AI application. You can build your custom application by following one of our [quickstarts](luis-get-started-create-app.md), or you can use one of our [prebuilt domain](luis-get-started-create-app.md) apps.
+* **Security, Privacy and Compliance**: LUIS is backed by Azure infrastructure, which offers enterprise-grade security, privacy, and compliance. Your data remains yours; you can delete your data at any time. Your data is encrypted while it's in storage. Learn more about this [here](https://azure.microsoft.com/support/legal/cognitive-services-compliance-and-privacy).
+* **Integration**: easily integrate your LUIS app with other Microsoft services like [Microsoft Bot framework](/composer/tutorial/tutorial-luis), [QnA Maker](../QnAMaker/choose-natural-language-processing-service.md), and [Speech service](../speech-service/get-started-intent-recognition.md).
++
+## LUIS Scenarios
+* [Build an enterprise-grade conversational bot](/azure/architecture/reference-architectures/ai/conversational-bot): This reference architecture describes how to build an enterprise-grade conversational bot (chatbot) using the Azure Bot Framework.
+* [Commerce Chatbot](/azure/architecture/solution-ideas/articles/commerce-chatbot): Together, the Azure AI Bot Service and Language Understanding service enable developers to create conversational interfaces for various scenarios like banking, travel, and entertainment.
+* [Controlling IoT devices using a Voice Assistant](/azure/architecture/solution-ideas/articles/iot-controlling-devices-with-voice-assistant): Create seamless conversational interfaces with all of your internet-accessible devices, from your connected television or fridge to devices in a connected power plant.
++
+## Application Development life cycle
+
+![LUIS app development life cycle](./media/luis-overview/luis-dev-lifecycle.png "LUIS Application Development Lifecycle")
+
+- **Plan**: Identify the scenarios that users might use your application for. Define the actions and relevant information that needs to be recognized.
+- **Build**: Use your authoring resource to develop your app. Start by defining [intents](concepts/intents.md) and [entities](concepts/entities.md). Then, add training [utterances](concepts/utterances.md) for each intent.
+- **Test and Improve**: Start testing your model with other utterances to get a sense of how the app behaves, and you can decide if any improvement is needed. You can improve your application by following these [best practices](faq.md).
+- **Publish**: Deploy your app for prediction and query the endpoint using your prediction resource. Learn more about authoring and prediction resources [here](luis-how-to-azure-subscription.md).
+- **Connect**: Connect to other services such as [Microsoft Bot framework](/composer/tutorial/tutorial-luis), [QnA Maker](../QnAMaker/choose-natural-language-processing-service.md), and [Speech service](../speech-service/get-started-intent-recognition.md).
+- **Refine**: [Review endpoint utterances](how-to/improve-application.md) to improve your application with real-life examples.
+
+Learn more about planning and building your application [here](concepts/application-design.md).
+
+## Next steps
+
+* [What's new](whats-new.md "What's new") with the service and documentation
+* [Build a LUIS app](tutorial/build-decomposable-application.md)
+* [API reference][endpoint-apis]
+* [Best practices](faq.md)
+* [Developer resources](developer-reference-resource.md "Developer resources") for LUIS.
+* [Plan your app](concepts/application-design.md "Plan your app") with [intents](concepts/intents.md "intents") and [entities](concepts/entities.md "entities").
+
+[bot-framework]: /bot-framework/
+[flow]: /connectors/luis/
+[authoring-apis]: https://go.microsoft.com/fwlink/?linkid=2092087
+[endpoint-apis]: https://go.microsoft.com/fwlink/?linkid=2092356
+[qnamaker]: https://qnamaker.ai/
ai-services Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/LUIS/whats-new.md
+
+ Title: What's New - Language Understanding (LUIS)
+description: This article is regularly updated with news about the Azure AI services Language Understanding API.
++++++ Last updated : 02/24/2022++
+# What's new in Language Understanding
+++
+Learn what's new in the service. These items include release notes, videos, blog posts, and other types of information. Bookmark this page to keep up to date with the service.
+
+## Release notes
+
+### February 2022
+* LUIS containers can be used in [disconnected environments](../containers/disconnected-containers.md?context=/azure/ai-services/luis/context/context).
+
+### January 2022
+* [Updated text recognizer](https://github.com/microsoft/Recognizers-Text/releases/tag/dotnet-v1.8.2) to v1.8.2
+* Added [English (UK)](luis-language-support.md) to supported languages.
+
+### December 2021
+* [Updated text recognizer](https://github.com/microsoft/Recognizers-Text/releases/tag/dotnet-v1.8.1) to v1.8.1
+* Jio India west [publishing region](luis-reference-regions.md#other-publishing-regions)
+
+### November 2021
+* [Azure Role-based access control (RBAC) article](role-based-access-control.md)
+
+### August 2021
+* Norway East [publishing region](luis-reference-regions.md#publishing-to-europe).
+* West US 3 [publishing region](luis-reference-regions.md#other-publishing-regions).
+
+### April 2021
+
+* Switzerland North [authoring region](luis-reference-regions.md#publishing-to-europe).
+
+### January 2021
+
+* The V3 prediction API now supports the [Bing Spellcheck API](luis-tutorial-bing-spellcheck.md).
+* The regional portals (au.luis.ai and eu.luis.ai) have been consolidated into a single portal and URL. If you were using one of these portals, you will be automatically re-directed to luis.ai.
+
+### December 2020
+
+* All LUIS users are required to [migrate to a LUIS authoring resource](luis-migration-authoring.md)
+* New [evaluation endpoints](luis-how-to-batch-test.md#batch-testing-using-the-rest-api) that allow you to submit batch tests using the REST API, and get accuracy results for your intents and entities. Available starting with the v3.0-preview LUIS Endpoint.
+
+### June 2020
+
+* [Preview 3.0 Authoring](luis-migration-authoring-entities.md) SDK -
+ * Version 3.2.0-preview.3 - [.NET - NuGet](https://www.nuget.org/packages/Microsoft.Azure.CognitiveServices.Language.LUIS.Authoring/)
+ * Version 4.0.0-preview.3 - [JS - NPM](https://www.npmjs.com/package/@azure/cognitiveservices-luis-authoring)
+* Applying DevOps practices with LUIS
+ * Concepts
+ * [DevOps practices for LUIS](luis-concept-devops-sourcecontrol.md)
+ * [Continuous Integration and Continuous Delivery workflows for LUIS DevOps](luis-concept-devops-automation.md)
+ * [Testing for LUIS DevOps](luis-concept-devops-testing.md)
+ * How-to
+ * [Apply DevOps to LUIS app development using GitHub Actions](./luis-concept-devops-automation.md)
+ * [Complete code GitHub repo](https://github.com/Azure-Samples/LUIS-DevOps-Template)
+
+### May 2020 - //Build
+
+* Released as **generally available** (GA):
+ * [Language Understanding container](luis-container-howto.md)
+ * Preview portal promoted to [current portal](https://www.luis.ai), [previous](https://previous.luis.ai) portal still available
+ * New machine-learning entity creation and labeling experience
+ * [Upgrade process](migrate-from-composite-entity.md) from composite and simple entities to machine-learning entities
+ * [Setting](how-to-application-settings-portal.md) support for normalizing word variants
+* Preview Authoring API changes
+ * App schema 7.x for nested machine-learning entities
+ * [Migration to required feature](luis-migration-authoring-entities.md#api-change-constraint-replaced-with-required-feature)
+* New resources for developers
+ * [Continuous integration tools](developer-reference-resource.md#continuous-integration-tools)
+ * Workshop - learn best practices for [_Natural Language Understanding_ (NLU) using LUIS](developer-reference-resource.md#workshops)
+* [Customer managed keys](./encrypt-data-at-rest.md) - encrypt all the data you use in LUIS by using your own key
+* [AI show](/Shows/AI-Show/New-Features-in-Language-Understanding) (video) - see the new features in LUIS
+++
+### March 2020
+
+* TLS 1.2 is now enforced for all HTTP requests to this service. For more information, see Azure AI services [security](../security-features.md).
+
+### November 4, 2019 - Ignite
+
+* Video - [Advanced Natural Language Understanding (NLU) models using LUIS and Azure AI services | BRK2188](https://www.youtube.com/watch?v=JdJEV2jV0_Y)
+
+* Improved developer productivity
+ * General availability of our [prediction endpoint V3](luis-migration-api-v3.md).
+ * Ability to import and export apps with `.lu` ([LUDown](https://github.com/microsoft/botbuilder-tools/tree/master/packages/Ludown)) format. This paves the way for an effective CI/CD process.
+* Language expansion
+ * [Arabic and Hindi](luis-language-support.md) in public preview.
+* Prebuilt models
+ * [Prebuilt domains](luis-reference-prebuilt-domains.md) is now generally available (GA)
+ * Japanese [prebuilt entities](luis-reference-prebuilt-entities.md#japanese-entity-support) - age, currency, number, and percentage are not supported in V3.
+ * Italian [prebuilt entities](luis-reference-prebuilt-entities.md#italian-entity-support) - age, currency, dimension, number, and percentage resolution changed from V2.
+* Enhanced user experience in [preview.luis.ai portal](https://preview.luis.ai) - revamped labeling experience to enable building and debugging complex models. Try the preview portal tutorials:
+ * [Intents only](./tutorial/build-decomposable-application.md)
+ * [Decomposable machine-learning entity](./tutorial/build-decomposable-application.md)
+* Advance language understanding capabilities - [building sophisticated language models](concepts/entities.md) with less effort.
+* Define machine learning features at the model level and enable models to be used as signals to other models, for example using entities as features to intents and to other entities.
+* New, expanded [limits](luis-limits.md) - higher maximum for phrase lists and total phrases, new model as a feature limits
+* Extract information from text in the format of deep hierarchy structure, making conversation applications more powerful.
+
+ ![machine-learning entity image](./media/whats-new/deep-entity-extraction-example.png)
+
+### September 3, 2019
+
+* Azure authoring resource - [migrate now](luis-migration-authoring.md).
+ * 500 apps per Azure resource
+ * 100 versions per app
+* Turkish support for prebuilt entities
+* Italian support for datetimeV2
+
+### July 23, 2019
+
+* Update the [Recognizers-Text](https://github.com/microsoft/Recognizers-Text/releases/tag/dotnet-v1.2.3) to 1.2.3
+ * Age, Temperature, Dimension, and Currency recognizers in Italian.
+ * Improvement in Holiday recognition in English to correctly calculate Thanksgiving-based dates.
+ * Improvements in French DateTime to reduce false positives of non-Date and non-Time entities.
+ * Support for calendar/school/fiscal year and acronyms in English DateRange.
+ * Improved PhoneNumber recognition in Chinese and Japanese.
+ * Improved support for NumberRange in English.
+ * Performance improvements.
+
+### June 24, 2019
+
+* [OrdinalV2 prebuilt entity](luis-reference-prebuilt-ordinal-v2.md) to support ordering such as next, previous, and last. English culture only.
+
+### May 6, 2019 - //Build Conference
+
+The following features were released at the Build 2019 Conference:
+
+* [Preview of V3 API migration guide](luis-migration-api-v3.md)
+* [Improved analytics dashboard](luis-how-to-use-dashboard.md)
+* [Improved prebuilt domains](luis-reference-prebuilt-domains.md)
+* [Dynamic list entities](schema-change-prediction-runtime.md#dynamic-lists-passed-in-at-prediction-time)
+* [External entities](schema-change-prediction-runtime.md#external-entities-passed-in-at-prediction-time)
+
+## Blogs
+
+[Bot Framework](https://blog.botframework.com/)
+
+## Videos
+
+### 2019 Ignite videos
+
+[Advanced Natural Language Understanding (NLU) models using LUIS and Azure AI services | BRK2188](https://www.youtube.com/watch?v=JdJEV2jV0_Y)
+
+### 2019 Build videos
+
+[How to use Azure Conversational AI to scale your business for the next generation](https://www.youtube.com/watch?v=_k97jd-csuk&feature=youtu.be)
+
+## Service updates
+
+[Azure update announcements for Azure AI services](https://azure.microsoft.com/updates/?product=cognitive-services)
ai-services Authentication https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/authentication.md
+
+ Title: Authentication in Azure AI services
+
+description: "There are three ways to authenticate a request to an Azure AI services resource: a resource key, a bearer token, or a multi-service subscription. In this article, you'll learn about each method, and how to make a request."
++++++ Last updated : 09/01/2022+++
+# Authenticate requests to Azure AI services
+
+Each request to an Azure AI service must include an authentication header. This header passes along a resource key or authentication token, which is used to validate your subscription for a service or group of services. In this article, you'll learn about three ways to authenticate a request and the requirements for each.
+
+* Authenticate with a [single-service](#authenticate-with-a-single-service-resource-key) or [multi-service](#authenticate-with-a-multi-service-resource-key) resource key
+* Authenticate with a [token](#authenticate-with-an-access-token)
+* Authenticate with [Azure Active Directory (AAD)](#authenticate-with-an-access-token)
+
+## Prerequisites
+
+Before you make a request, you need an Azure account and an Azure AI services subscription. If you already have an account, go ahead and skip to the next section. If you don't have an account, we have a guide to get you set up in minutes: [Create a multi-service resource](multi-service-resource.md?pivots=azportal).
+
+You can get your resource key from the [Azure portal](multi-service-resource.md?pivots=azportal#get-the-keys-for-your-resource) after [creating your account](https://azure.microsoft.com/free/cognitive-services/).
+
+## Authentication headers
+
+Let's quickly review the authentication headers available for use with Azure AI services.
+
+| Header | Description |
+|--|-|
+| Ocp-Apim-Subscription-Key | Use this header to authenticate with a resource key for a specific service or a multi-service resource key. |
+| Ocp-Apim-Subscription-Region | This header is only required when using a multi-service resource key with the [Translator service](./Translator/reference/v3-0-reference.md). Use this header to specify the resource region. |
+| Authorization | Use this header if you are using an access token. The steps to perform a token exchange are detailed in the following sections. The value provided follows this format: `Bearer <TOKEN>`. |
+
+## Authenticate with a single-service resource key
+
+The first option is to authenticate a request with a resource key for a specific service, like Translator. The keys are available in the Azure portal for each resource that you've created. To use a resource key to authenticate a request, it must be passed along as the `Ocp-Apim-Subscription-Key` header.
+
+The following sample request demonstrates how to use the `Ocp-Apim-Subscription-Key` header. Keep in mind that when using this sample, you'll need to include a valid resource key.
+
+This is a sample call to the Translator service:
+```cURL
+curl -X POST 'https://api.cognitive.microsofttranslator.com/translate?api-version=3.0&from=en&to=de' \
+-H 'Ocp-Apim-Subscription-Key: YOUR_SUBSCRIPTION_KEY' \
+-H 'Content-Type: application/json' \
+--data-raw '[{ "text": "How much for the cup of coffee?" }]' | json_pp
+```
+
+The following video demonstrates using an Azure AI services key.
+
+## Authenticate with a multi-service resource key
+
+You can use a [multi-service](./multi-service-resource.md) resource key to authenticate requests. The main difference is that the multi-service resource key isn't tied to a specific service; rather, a single key can be used to authenticate requests for multiple Azure AI services. See [Azure AI services pricing](https://azure.microsoft.com/pricing/details/cognitive-services/) for information about regional availability, supported features, and pricing.
+
+The resource key is provided in each request as the `Ocp-Apim-Subscription-Key` header.
+
+[![Multi-service resource key demonstration for Azure AI services](./media/index/single-key-demonstration-video.png)](https://www.youtube.com/watch?v=psHtA1p7Cas&feature=youtu.be)
+
+### Supported regions
+
+When using the [multi-service](./multi-service-resource.md) resource key to make a request to `api.cognitive.microsoft.com`, you must include the region in the URL. For example: `westus.api.cognitive.microsoft.com`.
+
+When using a multi-service resource key with [Azure AI Translator](./translator/index.yml), you must specify the resource region with the `Ocp-Apim-Subscription-Region` header.
+
+Multi-service authentication is supported in these regions:
+
+- `australiaeast`
+- `brazilsouth`
+- `canadacentral`
+- `centralindia`
+- `eastasia`
+- `eastus`
+- `japaneast`
+- `northeurope`
+- `southcentralus`
+- `southeastasia`
+- `uksouth`
+- `westcentralus`
+- `westeurope`
+- `westus`
+- `westus2`
+- `francecentral`
+- `koreacentral`
+- `northcentralus`
+- `southafricanorth`
+- `uaenorth`
+- `switzerlandnorth`
++
+### Sample requests
+
+This is a sample call to the Translator service:
+
+```cURL
+curl -X POST 'https://api.cognitive.microsofttranslator.com/translate?api-version=3.0&from=en&to=de' \
+-H 'Ocp-Apim-Subscription-Key: YOUR_SUBSCRIPTION_KEY' \
+-H 'Ocp-Apim-Subscription-Region: YOUR_SUBSCRIPTION_REGION' \
+-H 'Content-Type: application/json' \
+--data-raw '[{ "text": "How much for the cup of coffee?" }]' | json_pp
+```
+
+## Authenticate with an access token
+
+Some Azure AI services accept, and in some cases require, an access token. Currently, these services support access tokens:
+
+* Text Translation API
+* Speech
+
+>[!NOTE]
+> QnA Maker also uses the Authorization header, but requires an endpoint key. For more information, see [QnA Maker: Get answer from knowledge base](./qnamaker/quickstarts/get-answer-from-knowledge-base-using-url-tool.md).
+
+>[!WARNING]
+> The services that support access tokens may change over time, please check the API reference for a service before using this authentication method.
+
+Both single service and multi-service resource keys can be exchanged for authentication tokens. Authentication tokens are valid for 10 minutes. They're stored in JSON Web Token (JWT) format and can be queried programmatically using the [JWT libraries](https://jwt.io/libraries).
+
+Access tokens are included in a request as the `Authorization` header. The token value provided must be preceded by `Bearer`, for example: `Bearer YOUR_AUTH_TOKEN`.
+
+### Sample requests
+
+Use this URL to exchange a resource key for an access token: `https://YOUR-REGION.api.cognitive.microsoft.com/sts/v1.0/issueToken`.
+
+```cURL
+curl -v -X POST \
+"https://YOUR-REGION.api.cognitive.microsoft.com/sts/v1.0/issueToken" \
+-H "Content-type: application/x-www-form-urlencoded" \
+-H "Content-length: 0" \
+-H "Ocp-Apim-Subscription-Key: YOUR_SUBSCRIPTION_KEY"
+```
+
+These multi-service regions support token exchange:
+
+- `australiaeast`
+- `brazilsouth`
+- `canadacentral`
+- `centralindia`
+- `eastasia`
+- `eastus`
+- `japaneast`
+- `northeurope`
+- `southcentralus`
+- `southeastasia`
+- `uksouth`
+- `westcentralus`
+- `westeurope`
+- `westus`
+- `westus2`
+
+After you get an access token, you'll need to pass it in each request as the `Authorization` header. This is a sample call to the Translator service:
+
+```cURL
+curl -X POST 'https://api.cognitive.microsofttranslator.com/translate?api-version=3.0&from=en&to=de' \
+-H 'Authorization: Bearer YOUR_AUTH_TOKEN' \
+-H 'Content-Type: application/json' \
+--data-raw '[{ "text": "How much for the cup of coffee?" }]' | json_pp
+```
++
+## Use Azure key vault to securely access credentials
+
+You can [use Azure Key Vault](./use-key-vault.md) to securely develop Azure AI services applications. Key Vault enables you to store your authentication credentials in the cloud, and reduces the chances that secrets may be accidentally leaked, because you won't store security information in your application.
+
+Authentication is done via Azure Active Directory. Authorization may be done via Azure role-based access control (Azure RBAC) or a Key Vault access policy. Azure RBAC can be used both to manage the vaults and to access the data stored in a vault, while a Key Vault access policy can only be used when attempting to access data stored in a vault.
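+As a minimal illustration (the vault and secret names below are placeholders), you could store a resource key as a Key Vault secret with the Azure CLI and read it back from your deployment or startup scripts:
+
+```azurecli
+# Store an Azure AI services resource key as a secret (placeholder names).
+az keyvault secret set --vault-name "my-key-vault" --name "ai-services-key" --value "YOUR_SUBSCRIPTION_KEY"
+
+# Read the secret value back, for example from a deployment script.
+az keyvault secret show --vault-name "my-key-vault" --name "ai-services-key" --query "value" -o tsv
+```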
+
+## See also
+
+* [What are Azure AI services?](./what-are-ai-services.md)
+* [Azure AI services pricing](https://azure.microsoft.com/pricing/details/cognitive-services/)
+* [Custom subdomains](cognitive-services-custom-subdomains.md)
ai-services Autoscale https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/autoscale.md
+
+ Title: Use the autoscale feature
+description: Learn how to use the autoscale feature for Azure AI services to dynamically adjust the rate limit of your service.
++++ Last updated : 06/27/2022++
+# Azure AI services autoscale feature
+
+This article provides guidance for how customers can access higher rate limits on their Azure AI services resources.
+
+## Overview
+
+Each Azure AI services resource has a preconfigured static call rate (transactions per second) which limits the number of concurrent calls that customers can make to the backend service in a given time frame. The autoscale feature will automatically increase/decrease a customer's resource's rate limits based on near-real-time resource usage metrics and backend service capacity metrics.
+
+## Get started with the autoscale feature
+
+This feature is disabled by default for every new resource. Follow these instructions to enable it.
+
+#### [Azure portal](#tab/portal)
+
+Go to your resource's page in the Azure portal, and select the **Overview** tab on the left pane. Under the **Essentials** section, find the **Autoscale** line and select the link to view the **Autoscale Settings** pane and enable the feature.
++
+#### [Azure CLI](#tab/cli)
+
+Run this command after you've created your resource:
+
+```azurecli
+az resource update --namespace Microsoft.CognitiveServices --resource-type accounts --set properties.dynamicThrottlingEnabled=true --resource-group {resource-group-name} --name {resource-name}
+
+```
+++
+## Frequently asked questions
+
+### Does enabling the autoscale feature mean my resource will never be throttled again?
+
+No, you may still get `429` errors when you exceed the rate limit. If your application triggers a spike and your resource reports a `429` response, autoscale checks the available capacity projection to see whether the current capacity can accommodate a rate limit increase, and responds within five minutes.
+
+If the available capacity is enough for an increase, autoscale gradually increases the rate limit cap of your resource. If you continue to call your resource at a high rate that results in more `429` throttling, your TPS rate will continue to increase over time. If this action continues for one hour or more, you should reach the maximum rate (up to 1000 TPS) currently available at that time for that resource.
+
+If the available capacity isn't enough for an increase, the autoscale feature waits five minutes and checks again.
+
+### What if I need a higher default rate limit?
+
+By default, Azure AI services resources have a default rate limit of 10 TPS. If you need a higher default TPS, submit a ticket by following the **New Support Request** link on your resource's page in the Azure portal. Remember to include a business justification in the request.
+
+### Will this feature increase my Azure spend?
+
+Azure AI services pricing hasn't changed and can be accessed [here](https://azure.microsoft.com/pricing/details/cognitive-services/). We'll only bill for successful calls made to Azure AI services APIs. However, increased call rate limits mean more transactions are completed, and you may receive a higher bill.
+
+Be aware of potential errors and their consequences. If a bug in your client application causes it to call the service hundreds of times per second, that would likely lead to a much higher bill, whereas the cost would be much more limited under a fixed rate limit. Errors of this kind are your responsibility. We highly recommend that you perform development and client update tests against a resource with a fixed rate limit prior to using the autoscale feature.
+
+### Can I disable this feature if I'd rather limit the rate than have unpredictable spending?
+
+Yes, you can disable the autoscale feature through Azure portal or CLI and return to your default call rate limit setting. If your resource was previously approved for a higher default TPS, it goes back to that rate. It can take up to five minutes for the changes to go into effect.
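+For example, with the Azure CLI, the same command used to enable the feature should disable it when the property is set back to `false` (resource names are placeholders):
+
+```azurecli
+az resource update --namespace Microsoft.CognitiveServices --resource-type accounts --set properties.dynamicThrottlingEnabled=false --resource-group {resource-group-name} --name {resource-name}
+```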
+
+### Which services support the autoscale feature?
+
+The autoscale feature is available for the following services:
+
+* [Azure AI services multi-key](./multi-service-resource.md?pivots=azportal)
+* [Azure AI Vision](computer-vision/index.yml)
+* [Language](language-service/overview.md) (only available for sentiment analysis, key phrase extraction, named entity recognition, and text analytics for health)
+* [Anomaly Detector](anomaly-detector/overview.md)
+* [Content Moderator](content-moderator/overview.md)
+* [Custom Vision (Prediction)](custom-vision-service/overview.md)
+* [Immersive Reader](immersive-reader/overview.md)
+* [LUIS](luis/what-is-luis.md)
+* [Metrics Advisor](metrics-advisor/overview.md)
+* [Personalizer](personalizer/what-is-personalizer.md)
+* [QnAMaker](qnamaker/overview/overview.md)
+* [Document Intelligence](document-intelligence/overview.md?tabs=v3-0)
+
+### Can I test this feature using a free subscription?
+
+No, the autoscale feature isn't available to free tier subscriptions.
+
+## Next steps
+
+* [Plan and Manage costs for Azure AI services](./plan-manage-costs.md).
+* [Optimize your cloud investment with Azure Cost Management](../cost-management-billing/costs/cost-mgt-best-practices.md?WT.mc_id=costmanagementcontent_docsacmhorizontal_-inproduct-learn).
+* Learn about how to [prevent unexpected costs](../cost-management-billing/cost-management-billing-overview.md?WT.mc_id=costmanagementcontent_docsacmhorizontal_-inproduct-learn).
+* Take the [Cost Management](/training/paths/control-spending-manage-bills?WT.mc_id=costmanagementcontent_docsacmhorizontal_-inproduct-learn) guided learning course.
ai-services Cognitive Services And Machine Learning https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/cognitive-services-and-machine-learning.md
+
+ Title: Azure AI services and Machine Learning
+
+description: Learn where Azure AI services fits in with other Azure offerings for machine learning.
++++++ Last updated : 10/28/2021+
+# Azure AI services and machine learning
+
+Azure AI services provides machine learning capabilities to solve general problems such as analyzing text for emotional sentiment or analyzing images to recognize objects or faces. You don't need special machine learning or data science knowledge to use these services.
+
+[Azure AI services](./what-are-ai-services.md) is a group of services, each supporting different, generalized prediction capabilities. The services are divided into different categories to help you find the right service.
+
+|Service category|Purpose|
+|--|--|
+|[Decision](https://azure.microsoft.com/services/cognitive-services/directory/decision/)|Build apps that surface recommendations for informed and efficient decision-making.|
+|[Language](https://azure.microsoft.com/services/cognitive-services/directory/lang/)|Allow your apps to process natural language with pre-built scripts, evaluate sentiment and learn how to recognize what users want.|
+|[Search](https://azure.microsoft.com/services/cognitive-services/directory/search/)|Add Bing Search APIs to your apps and harness the ability to comb billions of webpages, images, videos, and news with a single API call.|
+|[Speech](https://azure.microsoft.com/services/cognitive-services/directory/speech/)|Convert speech into text and text into natural-sounding speech. Translate from one language to another and enable speaker verification and recognition.|
+|[Vision](https://azure.microsoft.com/services/cognitive-services/directory/vision/)|Recognize, identify, caption, index, and moderate your pictures, videos, and digital ink content.|
+
+Use Azure AI services when you:
+
+* Can use a generalized solution.
+* Can access the solution through a REST API or SDK.
+
+Use other machine-learning solutions when you:
+
+* Need to choose the algorithm and train on very specific data.
+
+## What is machine learning?
+
+Machine learning is a concept where you bring together data and an algorithm to solve a specific need. Once the data and algorithm are trained, the output is a model that you can use again with different data. The trained model provides insights based on the new data.
+
+The process of building a machine learning system requires some knowledge of machine learning or data science.
+
+Machine learning is provided using [Azure Machine Learning (AML) products and services](/azure/architecture/data-guide/technology-choices/data-science-and-machine-learning?context=azure%2fmachine-learning%2fstudio%2fcontext%2fml-context).
+
+## What is an Azure AI service?
+
+An Azure AI service provides part or all of the components in a machine learning solution: data, algorithm, and trained model. These services are meant to require general knowledge about your data without needing experience with machine learning or data science. These services provide both REST API(s) and language-based SDKs. As a result, you need to have programming language knowledge to use the services.
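+
+For example, a minimal REST sketch that sends text to the Language service for sentiment analysis might look like the following (the resource name and key are placeholders, and the request assumes the `analyze-text` API version `2022-05-01`):
+
+```bash
+# Placeholders: replace the resource name and key with your own values.
+curl -X POST "https://<your-language-resource>.cognitiveservices.azure.com/language/:analyze-text?api-version=2022-05-01" \
+  -H "Ocp-Apim-Subscription-Key: <your-key>" \
+  -H "Content-Type: application/json" \
+  -d '{
+        "kind": "SentimentAnalysis",
+        "analysisInput": { "documents": [ { "id": "1", "language": "en", "text": "The staff was wonderful." } ] }
+      }'
+```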
+
+## How are Azure AI services and Azure Machine Learning (AML) similar?
+
+Both have the end goal of applying artificial intelligence (AI) to enhance business operations, though how each offering provides this is different.
+
+Generally, the audiences are different:
+
+* Azure AI services are for developers without machine-learning experience.
+* Azure Machine Learning is tailored for data scientists.
+
+## How are Azure AI services different from machine learning?
+
+Azure AI services provide a trained model for you. This brings data and an algorithm together, available through a REST API or SDK. You can implement this service within minutes, depending on your scenario. An Azure AI service provides answers to general problems such as key phrases in text or item identification in images.
+
+Machine learning is a process that generally requires a longer period of time to implement successfully. This time is spent on data collection, cleaning, transformation, algorithm selection, model training, and deployment to get to the same level of functionality provided by an Azure AI service. With machine learning, it is possible to provide answers to highly specialized and/or specific problems. Machine learning problems require familiarity with the specific subject matter and data of the problem under consideration, as well as expertise in data science.
+
+## What kind of data do you have?
+
+Azure AI services, as a group of services, can require none, some, or all custom data for the trained model.
+
+### No additional training data required
+
+Services that provide a fully trained model can be treated as an _opaque box_. You don't need to know how they work or what data was used to train them. You bring your data to a fully trained model to get a prediction.
+
+### Some or all training data required
+
+Some services allow you to bring your own data, then train a model. This allows you to extend the model using the Service's data and algorithm with your own data. The output matches your needs. When you bring your own data, you may need to tag the data in a way specific to the service. For example, if you are training a model to identify flowers, you can provide a catalog of flower images along with the location of the flower in each image to train the model.
+
+A service may _allow_ you to provide data to enhance its own data. A service may _require_ you to provide data.
+
+### Real-time or near real-time data required
+
+A service may need real-time or near real-time data to build an effective model. These services process significant amounts of model data.
+
+## Service requirements for the data model
+
+The following table categorizes each service by the kind of data it allows or requires.
+
+|Azure AI service|No training data required|You provide some or all training data|Real-time or near real-time data collection|
+|--|--|--|--|
+|[Anomaly Detector](./Anomaly-Detector/overview.md)|x|x|x|
+|[Content Moderator](./Content-Moderator/overview.md)|x||x|
+|[Custom Vision](./custom-vision-service/overview.md)||x||
+|[Face](./computer-vision/overview-identity.md)|x|x||
+|[Language Understanding (LUIS)](./LUIS/what-is-luis.md)||x||
+|[Personalizer](./personalizer/what-is-personalizer.md)<sup>1</sup>|x|x|x|
+|[QnA Maker](./QnAMaker/Overview/overview.md)||x||
+|[Speaker Recognizer](./speech-service/speaker-recognition-overview.md)||x||
+|[Speech Text to speech (TTS)](speech-service/text-to-speech.md)|x|x||
+|[Speech Speech to text (STT)](speech-service/speech-to-text.md)|x|x||
+|[Speech Translation](speech-service/speech-translation.md)|x|||
+|[Language](./language-service/overview.md)|x|||
+|[Translator](./translator/translator-overview.md)|x|||
+|[Translator - custom translator](./translator/custom-translator/overview.md)||x||
+|[Vision](./computer-vision/overview.md)|x|||
+
+<sup>1</sup> Personalizer only needs training data collected by the service (as it operates in real-time) to evaluate your policy and data. Personalizer does not need large historical datasets for up-front or batch training.
+
+## Where can you use Azure AI services?
+
+The services can be used in any application that can make REST API or SDK calls. Examples of applications include websites, bots, virtual or mixed reality experiences, and desktop and mobile applications.
+
+## How is Azure Cognitive Search related to Azure AI services?
+
+[Azure Cognitive Search](../search/search-what-is-azure-search.md) is a separate cloud search service that optionally uses Azure AI services to add image and natural language processing to indexing workloads. Azure AI services is exposed in Azure Cognitive Search through [built-in skills](../search/cognitive-search-predefined-skills.md) that wrap individual APIs. You can use a free resource for walkthroughs, but plan on creating and attaching a [billable resource](../search/cognitive-search-attach-cognitive-services.md) for larger volumes.
+
+## How can you use Azure AI services?
+
+Each service provides information about your data. You can combine services together to chain solutions such as converting speech (audio) to text, translating the text into many languages, then using the translated languages to get answers from a knowledge base. While Azure AI services can be used to create intelligent solutions on their own, they can also be combined with traditional machine learning projects to supplement models or accelerate the development process.
+
+Azure AI services that provide exported models for other machine learning tools:
+
+|Azure AI service|Model information|
+|--|--|
+|[Custom Vision](./custom-vision-service/overview.md)|[Export](./custom-vision-service/export-model-python.md) for Tensorflow for Android, CoreML for iOS11, ONNX for Windows ML|
+
+## Learn more
+
+* [Architecture Guide - What are the machine learning products at Microsoft?](/azure/architecture/data-guide/technology-choices/data-science-and-machine-learning)
+* [Machine learning - Introduction to deep learning vs. machine learning](../machine-learning/concept-deep-learning-vs-machine-learning.md)
+
+## Next steps
+
+* Create your Azure AI services resource in the [Azure portal](multi-service-resource.md?pivots=azportal) or with [Azure CLI](./multi-service-resource.md?pivots=azcli).
+* Learn how to [authenticate](authentication.md) with your Azure AI service.
+* Use [diagnostic logging](diagnostic-logging.md) for issue identification and debugging.
+* Deploy an Azure AI service in a Docker [container](cognitive-services-container-support.md).
+* Keep up to date with [service updates](https://azure.microsoft.com/updates/?product=cognitive-services).
ai-services Cognitive Services Container Support https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/cognitive-services-container-support.md
+
+ Title: Use Azure AI services Containers on-premises
+
+description: Learn how to use Docker containers to use Azure AI services on-premises.
++++++ Last updated : 05/08/2023+
+keywords: on-premises, Docker, container, Kubernetes
+#Customer intent: As a potential customer, I want to know more about how Azure AI services provides and supports Docker containers for each service.
++
+# What are Azure AI services containers?
+
+Azure AI services provides several [Docker containers](https://www.docker.com/what-container) that let you use the same APIs that are available in Azure, on-premises. Using these containers gives you the flexibility to bring Azure AI services closer to your data for compliance, security or other operational reasons. Container support is currently available for a subset of Azure AI services.
+
+> [!VIDEO https://www.youtube.com/embed/hdfbn4Q8jbo]
+
+Containerization is an approach to software distribution in which an application or service, including its dependencies & configuration, is packaged together as a container image. With little or no modification, a container image can be deployed on a container host. Containers are isolated from each other and the underlying operating system, with a smaller footprint than a virtual machine. Containers can be instantiated from container images for short-term tasks, and removed when no longer needed.
+
+## Features and benefits
+
+- **Immutable infrastructure**: Enable DevOps teams to leverage a consistent and reliable set of known system parameters, while being able to adapt to change. Containers provide the flexibility to pivot within a predictable ecosystem and avoid configuration drift.
+- **Control over data**: Choose where your data gets processed by Azure AI services. This can be essential if you can't send data to the cloud but need access to Azure AI services APIs. Support consistency in hybrid environments across data, management, identity, and security.
+- **Control over model updates**: Flexibility in versioning and updating of models deployed in their solutions.
+- **Portable architecture**: Enables the creation of a portable application architecture that can be deployed on Azure, on-premises and the edge. Containers can be deployed directly to [Azure Kubernetes Service](../aks/index.yml), [Azure Container Instances](../container-instances/index.yml), or to a [Kubernetes](https://kubernetes.io/) cluster deployed to [Azure Stack](/azure-stack/operator). For more information, see [Deploy Kubernetes to Azure Stack](/azure-stack/user/azure-stack-solution-template-kubernetes-deploy).
+- **High throughput / low latency**: Provide customers the ability to scale for high throughput and low latency requirements by enabling Azure AI services to run physically close to their application logic and data. Containers don't cap transactions per second (TPS) and can be made to scale both up and out to handle demand if you provide the necessary hardware resources.
+- **Scalability**: With the ever-growing popularity of containerization and container orchestration software such as Kubernetes, scalability is at the forefront of technological advancements. Building on a scalable cluster foundation, application development caters to high availability.
+
+## Containers in Azure AI services
+
+Azure AI services containers provide the following set of Docker containers, each of which contains a subset of functionality from services in Azure AI services. You can find instructions and image locations in the tables below.
+
+> [!NOTE]
+> See [Install and run Document Intelligence containers](document-intelligence/containers/install-run.md) for **Azure AI Document Intelligence** container instructions and image locations.
+
+### Decision containers
+
+| Service | Container | Description | Availability |
+|--|--|--|--|
+| [Anomaly detector][ad-containers] | **Anomaly Detector** ([image](https://mcr.microsoft.com/product/azure-cognitive-services/decision/anomaly-detector/about)) | The Anomaly Detector API enables you to monitor and detect abnormalities in your time series data with machine learning. | Generally available |
+
+### Language containers
+
+| Service | Container | Description | Availability |
+|--|--|--|--|
+| [LUIS][lu-containers] | **LUIS** ([image](https://mcr.microsoft.com/product/azure-cognitive-services/language/luis/about)) | Loads a trained or published Language Understanding model, also known as a LUIS app, into a docker container and provides access to the query predictions from the container's API endpoints. You can collect query logs from the container and upload these back to the [LUIS portal](https://www.luis.ai) to improve the app's prediction accuracy. | Generally available. <br> This container can also [run in disconnected environments](containers/disconnected-containers.md). |
+| [Language service][ta-containers-keyphrase] | **Key Phrase Extraction** ([image](https://mcr.microsoft.com/product/azure-cognitive-services/textanalytics/keyphrase/about)) | Extracts key phrases to identify the main points. For example, for the input text "The food was delicious and there were wonderful staff", the API returns the main talking points: "food" and "wonderful staff". | Generally available. <br> This container can also [run in disconnected environments](containers/disconnected-containers.md). |
+| [Language service][ta-containers-language] | **Text Language Detection** ([image](https://mcr.microsoft.com/product/azure-cognitive-services/textanalytics/language/about)) | For up to 120 languages, detects which language the input text is written in and reports a single language code for every document submitted in the request. The language code is paired with a confidence score. | Generally available. <br> This container can also [run in disconnected environments](containers/disconnected-containers.md). |
+| [Language service][ta-containers-sentiment] | **Sentiment Analysis** ([image](https://mcr.microsoft.com/product/azure-cognitive-services/textanalytics/sentiment/about)) | Analyzes raw text for clues about positive or negative sentiment. This version of sentiment analysis returns sentiment labels (for example *positive* or *negative*) for each document and sentence within it. | Generally available. <br> This container can also [run in disconnected environments](containers/disconnected-containers.md). |
+| [Language service][ta-containers-health] | **Text Analytics for health** ([image](https://mcr.microsoft.com/product/azure-cognitive-services/textanalytics/healthcare/about))| Extract and label medical information from unstructured clinical text. | Generally available |
+| [Language service][ta-containers-cner] | **Custom Named Entity Recognition** ([image](https://mcr.microsoft.com/product/azure-cognitive-services/textanalytics/customner/about))| Extract named entities from text, using a custom model you create using your data. | Preview |
+| [Translator][tr-containers] | **Translator** ([image](https://mcr.microsoft.com/product/azure-cognitive-services/translator/text-translation/about))| Translate text in several languages and dialects. | Generally available. Gated - [request access](https://aka.ms/csgate-translator). <br>This container can also [run in disconnected environments](containers/disconnected-containers.md). |
+
+### Speech containers
+
+> [!NOTE]
+> To use Speech containers, you will need to complete an [online request form](https://aka.ms/csgate).
+
+| Service | Container | Description | Availability |
+|--|--|--|--|
+| [Speech Service API][sp-containers-stt] | **Speech to text** ([image](https://hub.docker.com/_/microsoft-azure-cognitive-services-speechservices-speech-to-text)) | Transcribes continuous real-time speech into text. | Generally available. <br> This container can also [run in disconnected environments](containers/disconnected-containers.md). |
+| [Speech Service API][sp-containers-cstt] | **Custom Speech to text** ([image](https://hub.docker.com/_/microsoft-azure-cognitive-services-speechservices-custom-speech-to-text)) | Transcribes continuous real-time speech into text using a custom model. | Generally available <br> This container can also [run in disconnected environments](containers/disconnected-containers.md). |
+| [Speech Service API][sp-containers-ntts] | **Neural Text to speech** ([image](https://hub.docker.com/_/microsoft-azure-cognitive-services-speechservices-neural-text-to-speech)) | Converts text to natural-sounding speech using deep neural network technology, allowing for more natural synthesized speech. | Generally available. <br> This container can also [run in disconnected environments](containers/disconnected-containers.md). |
+| [Speech Service API][sp-containers-lid] | **Speech language detection** ([image](https://hub.docker.com/_/microsoft-azure-cognitive-services-speechservices-language-detection)) | Determines the language of spoken audio. | Gated preview |
+
+### Vision containers
++
+| Service | Container | Description | Availability |
+|--|--|--|--|
+| [Azure AI Vision][cv-containers] | **Read OCR** ([image](https://hub.docker.com/_/microsoft-azure-cognitive-services-vision-read)) | The Read OCR container allows you to extract printed and handwritten text from images and documents with support for JPEG, PNG, BMP, PDF, and TIFF file formats. For more information, see the [Read API documentation](./computer-vision/overview-ocr.md). | Generally Available. Gated - [request access](https://aka.ms/csgate). <br>This container can also [run in disconnected environments](containers/disconnected-containers.md). |
+| [Spatial Analysis][spa-containers] | **Spatial analysis** ([image](https://hub.docker.com/_/microsoft-azure-cognitive-services-vision-spatial-analysis)) | Analyzes real-time streaming video to understand spatial relationships between people, their movement, and interactions with objects in physical environments. | Preview |
+
+<!--
+|[Personalizer](./personalizer/what-is-personalizer.md) |F0, S0|**Personalizer** ([image](https://go.microsoft.com/fwlink/?linkid=2083928&clcid=0x409))|Azure AI Personalizer is a cloud-based API service that allows you to choose the best experience to show to your users, learning from their real-time behavior.|
+-->
+
+Additionally, some containers are supported in the Azure AI services [multi-service resource](multi-service-resource.md?pivots=azportal) offering. You can create one single Azure AI services All-In-One resource and use the same billing key across the following supported services:
+
+* Azure AI Vision
+* LUIS
+* Language service
+
+## Prerequisites
+
+You must satisfy the following prerequisites before using Azure AI services containers:
+
+**Docker Engine**: You must have Docker Engine installed locally. Docker provides packages that configure the Docker environment on [macOS](https://docs.docker.com/docker-for-mac/), [Linux](https://docs.docker.com/engine/installation/#supported-platforms), and [Windows](https://docs.docker.com/docker-for-windows/). On Windows, Docker must be configured to support Linux containers. Docker containers can also be deployed directly to [Azure Kubernetes Service](../aks/index.yml) or [Azure Container Instances](../container-instances/index.yml).
+
+Docker must be configured to allow the containers to connect with and send billing data to Azure.
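+
+As an illustrative sketch only (check each container's documentation for the exact image name, tag, billing arguments, and hardware requirements), starting a container with its billing settings generally looks like this:
+
+```bash
+# Placeholders: use your own resource endpoint and key. The image path follows the
+# Key Phrase Extraction product page linked in the table above; the memory and CPU
+# values are illustrative.
+docker run --rm -it -p 5000:5000 --memory 4g --cpus 1 \
+  mcr.microsoft.com/azure-cognitive-services/textanalytics/keyphrase \
+  Eula=accept \
+  Billing=https://<your-resource>.cognitiveservices.azure.com/ \
+  ApiKey=<your-key>
+```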
+
+**Familiarity with Microsoft Container Registry and Docker**: You should have a basic understanding of both Microsoft Container Registry and Docker concepts, like registries, repositories, containers, and container images, as well as knowledge of basic `docker` commands.
+
+For a primer on Docker and container basics, see the [Docker overview](https://docs.docker.com/engine/docker-overview/).
+
+Individual containers can have their own requirements, as well, including server and memory allocation requirements.
++
+## Developer samples
+
+Developer samples are available at our [GitHub repository](https://github.com/Azure-Samples/cognitive-services-containers-samples).
+
+## Next steps
+
+Learn about [container recipes](containers/container-reuse-recipe.md) you can use with the Azure AI services.
+
+Install and explore the functionality provided by containers in Azure AI services:
+
+* [Anomaly Detector containers][ad-containers]
+* [Azure AI Vision containers][cv-containers]
+* [Language Understanding (LUIS) containers][lu-containers]
+* [Speech Service API containers][sp-containers]
+* [Language service containers][ta-containers]
+* [Translator containers][tr-containers]
+
+<!--* [Personalizer containers](https://go.microsoft.com/fwlink/?linkid=2083928&clcid=0x409)
+-->
+
+[ad-containers]: anomaly-Detector/anomaly-detector-container-howto.md
+[cv-containers]: computer-vision/computer-vision-how-to-install-containers.md
+[lu-containers]: luis/luis-container-howto.md
+[sp-containers]: speech-service/speech-container-howto.md
+[spa-containers]: ./computer-vision/spatial-analysis-container.md
+[sp-containers-lid]: speech-service/speech-container-lid.md
+[sp-containers-stt]: speech-service/speech-container-stt.md
+[sp-containers-cstt]: speech-service/speech-container-cstt.md
+[sp-containers-ntts]: speech-service/speech-container-ntts.md
+[ta-containers]: language-service/overview.md#deploy-on-premises-using-docker-containers
+[ta-containers-keyphrase]: language-service/key-phrase-extraction/how-to/use-containers.md
+[ta-containers-language]: language-service/language-detection/how-to/use-containers.md
+[ta-containers-sentiment]: language-service/sentiment-opinion-mining/how-to/use-containers.md
+[ta-containers-health]: language-service/text-analytics-for-health/how-to/use-containers.md
+[ta-containers-cner]: language-service/custom-named-entity-recognition/how-to/use-containers.md
+[tr-containers]: translator/containers/translator-how-to-install-container.md
+[request-access]: https://aka.ms/csgate
ai-services Cognitive Services Custom Subdomains https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/cognitive-services-custom-subdomains.md
+
+ Title: Custom subdomains
+
+description: Custom subdomain names for each Azure AI services resource are created through the Azure portal, Azure Cloud Shell, or Azure CLI.
++++++ Last updated : 12/04/2020+++
+# Custom subdomain names for Azure AI services
+
+Azure AI services use custom subdomain names for each resource created through the [Azure portal](https://portal.azure.com), [Azure Cloud Shell](https://azure.microsoft.com/features/cloud-shell/), or [Azure CLI](/cli/azure/install-azure-cli). Unlike regional endpoints, which were common for all customers in a specific Azure region, custom subdomain names are unique to the resource. Custom subdomain names are required to enable features like Azure Active Directory (Azure AD) for authentication.
+
+## How does this impact existing resources?
+
+Azure AI services resources created before July 1, 2019 will use the regional endpoints for the associated service. These endpoints will work with existing and new resources.
+
+If you'd like to migrate an existing resource to leverage custom subdomain names, so that you can enable features like Azure AD, follow these instructions:
+
+1. Sign in to the Azure portal and locate the Azure AI services resource that you'd like to add a custom subdomain name to.
+2. In the **Overview** blade, locate and select **Generate Custom Domain Name**.
+3. This opens a panel with instructions to create a unique custom subdomain for your resource.
+ > [!WARNING]
+ > After you've created a custom subdomain name it **cannot** be changed.
+
+## Do I need to update my existing resources?
+
+No. The regional endpoint will continue to work for new and existing Azure AI services resources, and the custom subdomain name is optional. Even if a custom subdomain name is added, the regional endpoint will continue to work with the resource.
+
+## What if an SDK asks me for the region for a resource?
+
+> [!WARNING]
+> Speech Services use custom subdomains with [private endpoints](Speech-Service/speech-services-private-link.md) **only**. In all other cases use **regional endpoints** with Speech Services and associated SDKs.
+
+Regional endpoints and custom subdomain names are both supported and can be used interchangeably. However, the full endpoint is required.
+
+Region information is available in the **Overview** blade for your resource in the [Azure portal](https://portal.azure.com). For the full list of regional endpoints, see [Is there a list of regional endpoints?](#is-there-a-list-of-regional-endpoints)
+
+## Are custom subdomain names regional?
+
+Yes. Using a custom subdomain name doesn't change any of the regional aspects of your Azure AI services resource.
+
+## What are the requirements for a custom subdomain name?
+
+A custom subdomain name is unique to your resource. The name can only include alphanumeric characters and the `-` character; it must be between 2 and 64 characters in length and cannot end with a `-`.
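+
+As a rough illustration of those rules only (not an official validator), a quick shell check could look like this:
+
+```bash
+# Checks the stated rules: 2-64 characters, alphanumeric or '-', and not ending with '-'.
+is_valid_subdomain() {
+  local name="$1"
+  [[ "$name" =~ ^[A-Za-z0-9-]{2,64}$ && "$name" != *- ]]
+}
+
+is_valid_subdomain "my-ai-resource" && echo "looks valid" || echo "invalid"
+```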
+
+## Can I change a custom domain name?
+
+No. After a custom subdomain name is created and associated with a resource it cannot be changed.
+
+## Can I reuse a custom domain name?
+
+Each custom subdomain name is unique, so in order to reuse a custom subdomain name that you've assigned to an Azure AI services resource, you'll need to delete the existing resource. After the resource has been deleted, you can reuse the custom subdomain name.
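+
+For example, deleting the existing resource with the Azure CLI (the names below are placeholders) frees its custom subdomain name for reuse:
+
+```bash
+# Delete the resource so its custom subdomain name can be reused.
+az cognitiveservices account delete --name myaccount --resource-group myresourcegroup
+```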
+
+## Is there a list of regional endpoints?
+
+Yes. This is a list of regional endpoints that you can use with Azure AI services resources.
+
+> [!NOTE]
+> The Translator service and Bing Search APIs use global endpoints.
+
+| Endpoint type | Region | Endpoint |
+||--|-|
+| Public | Global (Translator & Bing) | `https://api.cognitive.microsoft.com` |
+| | Australia East | `https://australiaeast.api.cognitive.microsoft.com` |
+| | Brazil South | `https://brazilsouth.api.cognitive.microsoft.com` |
+| | Canada Central | `https://canadacentral.api.cognitive.microsoft.com` |
+| | Central US | `https://centralus.api.cognitive.microsoft.com` |
+| | East Asia | `https://eastasia.api.cognitive.microsoft.com` |
+| | East US | `https://eastus.api.cognitive.microsoft.com` |
+| | East US 2 | `https://eastus2.api.cognitive.microsoft.com` |
+| | France Central | `https://francecentral.api.cognitive.microsoft.com` |
+| | India Central | `https://centralindia.api.cognitive.microsoft.com` |
+| | Japan East | `https://japaneast.api.cognitive.microsoft.com` |
+| | Korea Central | `https://koreacentral.api.cognitive.microsoft.com` |
+| | North Central US | `https://northcentralus.api.cognitive.microsoft.com` |
+| | North Europe | `https://northeurope.api.cognitive.microsoft.com` |
+| | South Africa North | `https://southafricanorth.api.cognitive.microsoft.com` |
+| | South Central US | `https://southcentralus.api.cognitive.microsoft.com` |
+| | Southeast Asia | `https://southeastasia.api.cognitive.microsoft.com` |
+| | UK South | `https://uksouth.api.cognitive.microsoft.com` |
+| | West Central US | `https://westcentralus.api.cognitive.microsoft.com` |
+| | West Europe | `https://westeurope.api.cognitive.microsoft.com` |
+| | West US | `https://westus.api.cognitive.microsoft.com` |
+| | West US 2 | `https://westus2.api.cognitive.microsoft.com` |
+| US Gov | US Gov Virginia | `https://virginia.api.cognitive.microsoft.us` |
+| China | China East 2 | `https://chinaeast2.api.cognitive.azure.cn` |
+| | China North | `https://chinanorth.api.cognitive.azure.cn` |
+
+## See also
+
+* [What are Azure AI services?](./what-are-ai-services.md)
+* [Authentication](authentication.md)
ai-services Cognitive Services Data Loss Prevention https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/cognitive-services-data-loss-prevention.md
+
+ Title: Data Loss Prevention
+description: Azure AI services Data Loss Prevention capabilities allow customers to configure the list of outbound URLs their Azure AI services resources are allowed to access. This configuration creates another level of control for customers to prevent data loss.
++++ Last updated : 03/31/2023+++
+# Configure data loss prevention for Azure AI services
+
+Azure AI services data loss prevention capabilities allow customers to configure the list of outbound URLs their Azure AI services resources are allowed to access. This creates another level of control for customers to prevent data loss. In this article, we'll cover the steps required to enable the data loss prevention feature for Azure AI services resources.
+
+## Prerequisites
+
+Before you make a request, you need an Azure account and an Azure AI services subscription. If you already have an account, go ahead and skip to the next section. If you don't have an account, we have a guide to get you set up in minutes: [Create a multi-service resource](multi-service-resource.md?pivots=azportal).
+
+You can get your subscription key from the [Azure portal](multi-service-resource.md?pivots=azportal#get-the-keys-for-your-resource) after [creating your account](https://azure.microsoft.com/free/cognitive-services/).
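+
+If you prefer the command line, a minimal Azure CLI sketch for retrieving the keys (with placeholder names) is:
+
+```bash
+# List both keys for the resource.
+az cognitiveservices account keys list --name myaccount --resource-group myresourcegroup
+```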
+
+## Enabling data loss prevention
+
+There are two parts to enabling data loss prevention. First, the `restrictOutboundNetworkAccess` property must be set to `true`. When this property is set to `true`, you also need to provide the list of approved URLs. The list of URLs is added to the `allowedFqdnList` property, which contains an array of comma-separated URLs.
+
+>[!NOTE]
+>
+> * The `allowedFqdnList` property value supports a maximum of 1000 URLs.
+> * The property supports both IP addresses and fully qualified domain names (for example, `www.microsoft.com`) as values.
+> * It can take up to 15 minutes for the updated list to take effect.
+
+# [Azure CLI](#tab/azure-cli)
+
+1. Install the [Azure CLI](/cli/azure/install-azure-cli) and [sign in](/cli/azure/authenticate-azure-cli), or select **Try it**.
+
+1. View the details of the Azure AI services resource.
+
+ ```azurecli-interactive
+ az cognitiveservices account show \
+ -g "myresourcegroup" -n "myaccount" \
+ ```
+
+1. View the current properties of the Azure AI services resource.
+
+ ```azurecli-interactive
+ az rest -m get \
+    -u /subscriptions/{subscription ID}/resourceGroups/{resource group}/providers/Microsoft.CognitiveServices/accounts/{account name}?api-version=2021-04-30
+ ```
+
+1. Configure the `restrictOutboundNetworkAccess` property and update the `allowedFqdnList` property with the approved URLs.
+
+ ```azurecli-interactive
+ az rest -m patch \
+    -u /subscriptions/{subscription ID}/resourceGroups/{resource group}/providers/Microsoft.CognitiveServices/accounts/{account name}?api-version=2021-04-30 \
+ -b '{"properties": { "restrictOutboundNetworkAccess": true, "allowedFqdnList": [ "microsoft.com" ] }}'
+ ```
+
+# [PowerShell](#tab/powershell)
+
+1. Install the [Azure PowerShell](/powershell/azure/install-azure-powershell) and [sign in](/powershell/azure/authenticate-azureps), or select **Try it**.
+
+1. Display the current properties of the Azure AI services resource.
+
+ ```azurepowershell-interactive
+ $getParams = @{
+ ResourceGroupName = 'myresourcegroup'
+ ResourceProviderName = 'Microsoft.CognitiveServices'
+ ResourceType = 'accounts'
+ Name = 'myaccount'
+ ApiVersion = '2021-04-30'
+ Method = 'GET'
+ }
+ Invoke-AzRestMethod @getParams
+ ```
+
+1. Configure the `restrictOutboundNetworkAccess` property and update the `allowedFqdnList` property with the approved URLs.
+
+ ```azurepowershell-interactive
+ $patchParams = @{
+ ResourceGroupName = 'myresourcegroup'
+ ResourceProviderName = 'Microsoft.CognitiveServices'
+ ResourceType = 'accounts'
+ Name = 'myaccount'
+ ApiVersion = '2021-04-30'
+ Payload = '{"properties": { "restrictOutboundNetworkAccess": true, "allowedFqdnList": [ "microsoft.com" ] }}'
+ Method = 'PATCH'
+ }
+ Invoke-AzRestMethod @patchParams
+ ```
+++
+## Supported services
+
+The following services support data loss prevention configuration:
+
+* Azure OpenAI
+* Azure AI Vision
+* Content Moderator
+* Custom Vision
+* Face
+* Document Intelligence
+* Speech Service
+* QnA Maker
+
+## Next steps
+
+* [Configure Virtual Networks](cognitive-services-virtual-networks.md)
ai-services Cognitive Services Development Options https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/cognitive-services-development-options.md
+
+ Title: Azure AI services development options
+description: Learn how to use Azure AI services with different development and deployment options such as client libraries, REST APIs, Logic Apps, Power Automate, Azure Functions, Azure App Service, Azure Databricks, and many more.
++++++ Last updated : 10/28/2021++
+# Azure AI services development options
+
+This document provides a high-level overview of development and deployment options to help you get started with Azure AI services.
+
+Azure AI services are cloud-based AI services that allow developers to build intelligence into their applications and products without deep knowledge of machine learning. With Azure AI services, you have access to AI capabilities or models that are built, trained, and updated by Microsoft - ready to be used in your applications. In many cases, you also have the option to customize the models for your business needs.
+
+Azure AI services are organized into four categories: Decision, Language, Speech, and Vision. Typically you would access these services through REST APIs, client libraries, and custom tools (like command-line interfaces) provided by Microsoft. However, this is only one path to success. Through Azure, you also have access to several development options, such as:
+
+* Automation and integration tools like Logic Apps and Power Automate.
+* Deployment options such as Azure Functions and the App Service.
+* Azure AI services Docker containers for secure access.
+* Tools like Apache Spark, Azure Databricks, Azure Synapse Analytics, and Azure Kubernetes Service for big data scenarios.
+
+Before we jump in, it's important to know that the Azure AI services are primarily used for two distinct tasks. Based on the task you want to perform, you have different development and deployment options to choose from.
+
+* [Development options for prediction and analysis](#development-options-for-prediction-and-analysis)
+* [Tools to customize and configure models](#tools-to-customize-and-configure-models)
+
+## Development options for prediction and analysis
+
+The tools that you will use to customize and configure models are different from those that you'll use to call the Azure AI services. Out of the box, most Azure AI services allow you to send data and receive insights without any customization. For example:
+
+* You can send an image to the Azure AI Vision service to detect words and phrases or count the number of people in the frame
+* You can send an audio file to the Speech service and get transcriptions and translate the speech to text at the same time
+
+Azure offers a wide range of tools that are designed for different types of users, many of which can be used with Azure AI services. Designer-driven tools are the easiest to use, and are quick to set up and automate, but may have limitations when it comes to customization. Our REST APIs and client libraries provide users with more control and flexibility, but require more effort, time, and expertise to build a solution. If you use REST APIs and client libraries, there is an expectation that you're comfortable working with modern programming languages like C#, Java, Python, JavaScript, or another popular programming language.
+
+Let's take a look at the different ways that you can work with the Azure AI services.
+
+### Client libraries and REST APIs
+
+Azure AI services client libraries and REST APIs provide you direct access to your service. These tools provide programmatic access to the Azure AI services, their baseline models, and in many cases allow you to programmatically customize your models and solutions.
+
+* **Target user(s)**: Developers and data scientists
+* **Benefits**: Provides the greatest flexibility to call the services from any language and environment.
+* **UI**: N/A - Code only
+* **Subscription(s)**: Azure account + Azure AI services resources
+
+If you want to learn more about available client libraries and REST APIs, use our [Azure AI services overview](index.yml) to pick a service and get started with one of our quickstarts for vision, decision, language, and speech.
+
+### Azure AI services for big data
+
+With Azure AI services for big data you can embed continuously improving, intelligent models directly into Apache Spark&trade; and SQL computations. These tools liberate developers from low-level networking details, so that they can focus on creating smart, distributed applications. Azure AI services for big data support the following platforms and connectors: Azure Databricks, Azure Synapse, Azure Kubernetes Service, and Data Connectors.
+
+* **Target user(s)**: Data scientists and data engineers
+* **Benefits**: The Azure AI services for big data let users channel terabytes of data through Azure AI services using Apache Spark&trade;. It's easy to create large-scale intelligent applications with any datastore.
+* **UI**: N/A - Code only
+* **Subscription(s)**: Azure account + Azure AI services resources
+
+To learn more about big data for Azure AI services, see [Azure AI services in Azure Synapse Analytics](../synapse-analytics/machine-learning/overview-cognitive-services.md).
+
+### Azure Functions and Azure Service Web Jobs
+
+[Azure Functions](../azure-functions/index.yml) and [Azure App Service Web Jobs](../app-service/index.yml) both provide code-first integration services designed for developers and are built on [Azure App Services](../app-service/index.yml). These products provide serverless infrastructure for writing code. Within that code you can make calls to our services using our client libraries and REST APIs.
+
+* **Target user(s)**: Developers and data scientists
+* **Benefits**: Serverless compute service that lets you run event-triggered code.
+* **UI**: Yes
+* **Subscription(s)**: Azure account + Azure AI services resource + Azure Functions subscription
+
+### Azure Logic Apps
+
+[Azure Logic Apps](../logic-apps/index.yml) share the same workflow designer and connectors as Power Automate but provide more advanced control, including integrations with Visual Studio and DevOps. Logic Apps make it easy to integrate with your Azure AI services resources through service-specific connectors that provide a proxy or wrapper around the APIs. These are the same connectors as those available in Power Automate.
+
+* **Target user(s)**: Developers, integrators, IT pros, DevOps
+* **Benefits**: Designer-first (declarative) development model providing advanced options and integration in a low-code solution
+* **UI**: Yes
+* **Subscription(s)**: Azure account + Azure AI services resource + Logic Apps deployment
+
+### Power Automate
+
+Power Automate is a service in the [Power Platform](/power-platform/) that helps you create automated workflows between apps and services without writing code. We offer several connectors to make it easy to interact with your Azure AI services resource in a Power Automate solution. Power Automate is built on top of Logic Apps.
+
+* **Target user(s)**: Business users (analysts) and SharePoint administrators
+* **Benefits**: Automate repetitive manual tasks simply by recording mouse clicks, keystrokes, and copy-and-paste steps from your desktop.
+* **UI tools**: Yes - UI only
+* **Subscription(s)**: Azure account + Azure AI services resource + Power Automate Subscription + Office 365 Subscription
+
+### AI Builder
+
+[AI Builder](/ai-builder/overview) is a Microsoft Power Platform capability you can use to improve business performance by automating processes and predicting outcomes. AI Builder brings the power of AI to your solutions through a point-and-click experience. Many Azure AI services such as the Language service, and Azure AI Vision have been directly integrated here and you don't need to create your own Azure AI services.
+
+* **Target user(s)**: Business users (analysts) and SharePoint administrators
+* **Benefits**: A turnkey solution that brings the power of AI through a point-and-click experience. No coding or data science skills required.
+* **UI tools**: Yes - UI only
+* **Subscription(s)**: AI Builder
+
+### Continuous integration and deployment
+
+You can use Azure DevOps and GitHub Actions to manage your deployments. In the [section below](#continuous-integration-and-delivery-with-devops-and-github-actions), we have two examples of CI/CD integrations to train and deploy custom models for Speech and the Language Understanding (LUIS) service.
+
+* **Target user(s)**: Developers, data scientists, and data engineers
+* **Benefits**: Allows you to continuously adjust, update, and deploy applications and models programmatically. There is significant benefit when regularly using your data to improve and update models for Speech, Vision, Language, and Decision.
+* **UI tools**: N/A - Code only
+* **Subscription(s)**: Azure account + Azure AI services resource + GitHub account
+
+## Tools to customize and configure models
+
+As you progress on your journey building an application or workflow with the Azure AI services, you may find that you need to customize the model to achieve the desired performance. Many of our services allow you to build on top of the pre-built models to meet your specific business needs. For all our customizable services, we provide both a UI-driven experience for walking through the process as well as APIs for code-driven training. For example:
+
+* You want to train a Custom Speech model to correctly recognize medical terms with a word error rate (WER) below 3 percent
+* You want to build an image classifier with Custom Vision that can tell the difference between coniferous and deciduous trees
+* You want to build a custom neural voice with your personal voice data for an improved automated customer experience
+
+The tools that you will use to train and configure models are different from those that you'll use to call the Azure AI services. In many cases, Azure AI services that support customization provide portals and UI tools designed to help you train, evaluate, and deploy models. Let's quickly take a look at a few options:<br><br>
+
+| Pillar | Service | Customization UI | Quickstart |
+|--||||
+| Vision | Custom Vision | https://www.customvision.ai/ | [Quickstart](./custom-vision-service/quickstarts/image-classification.md?pivots=programming-language-csharp) |
+| Decision | Personalizer | UI is available in the Azure portal under your Personalizer resource. | [Quickstart](./personalizer/quickstart-personalizer-sdk.md) |
+| Language | Language Understanding (LUIS) | https://www.luis.ai/ | |
+| Language | QnA Maker | https://www.qnamaker.ai/ | [Quickstart](./qnamaker/quickstarts/create-publish-knowledge-base.md) |
+| Language | Translator/Custom Translator | https://portal.customtranslator.azure.ai/ | [Quickstart](./translator/custom-translator/quickstart.md) |
+| Speech | Custom Commands | https://speech.microsoft.com/ | [Quickstart](./speech-service/custom-commands.md) |
+| Speech | Custom Speech | https://speech.microsoft.com/ | [Quickstart](./speech-service/custom-speech-overview.md) |
+| Speech | Custom Voice | https://speech.microsoft.com/ | [Quickstart](./speech-service/how-to-custom-voice.md) |
+
+### Continuous integration and delivery with DevOps and GitHub Actions
+
+Language Understanding and the Speech service offer continuous integration and continuous deployment solutions that are powered by Azure DevOps and GitHub Actions. These tools are used for automated training, testing, and release management of custom models.
+
+* [CI/CD for Custom Speech](./speech-service/how-to-custom-speech-continuous-integration-continuous-deployment.md)
+* [CI/CD for LUIS](./luis/luis-concept-devops-automation.md)
+
+## On-premises containers
+
+Many of the Azure AI services can be deployed in containers for on-premises access and use. Using these containers gives you the flexibility to bring Azure AI services closer to your data for compliance, security, or other operational reasons. For a complete list of Azure AI containers, see [On-premises containers for Azure AI services](./cognitive-services-container-support.md).
+
+## Next steps
+
+* [Create a multi-service resource and start building](./multi-service-resource.md?pivots=azportal)
ai-services Cognitive Services Environment Variables https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/cognitive-services-environment-variables.md
+
+ Title: Use environment variables with Azure AI services
+
+description: "This guide shows you how to set and retrieve environment variables to handle your Azure AI services subscription credentials in a more secure way when you test out applications."
+++++ Last updated : 09/09/2022+++
+# Use environment variables with Azure AI services
+
+This guide shows you how to set and retrieve environment variables to handle your Azure AI services subscription credentials in a more secure way when you test out applications.
+
+## Set an environment variable
+
+To set environment variables, use one of the following commands, where `ENVIRONMENT_VARIABLE_KEY` is the name of the key and `value` is the value stored in the environment variable.
+
+# [Command Line](#tab/command-line)
+
+Use the following command to create and assign a persisted environment variable, given the input value.
+
+```CMD
+:: Assigns the env var to the value
+setx ENVIRONMENT_VARIABLE_KEY "value"
+```
+
+In a new instance of the Command Prompt, use the following command to read the environment variable.
+
+```CMD
+:: Prints the env var value
+echo %ENVIRONMENT_VARIABLE_KEY%
+```
+
+# [PowerShell](#tab/powershell)
+
+Use the following command to create and assign a persisted environment variable, given the input value.
+
+```powershell
+# Assigns the env var to the value
+[System.Environment]::SetEnvironmentVariable('ENVIRONMENT_VARIABLE_KEY', 'value', 'User')
+```
+
+In a new instance of Windows PowerShell, use the following command to read the environment variable.
+
+```powershell
+# Prints the env var value
+[System.Environment]::GetEnvironmentVariable('ENVIRONMENT_VARIABLE_KEY')
+```
+
+# [Bash](#tab/bash)
+
+Use the following command to create and assign a persisted environment variable, given the input value.
+
+```Bash
+# Assigns the env var to the value
+echo export ENVIRONMENT_VARIABLE_KEY="value" >> /etc/environment && source /etc/environment
+```
+
+In a new instance of Bash, use the following command to read the environment variable.
+
+```Bash
+# Prints the env var value
+echo "${ENVIRONMENT_VARIABLE_KEY}"
+
+# Or use printenv:
+# printenv ENVIRONMENT_VARIABLE_KEY
+```
+++
+> [!TIP]
+> After you set an environment variable, restart your integrated development environment (IDE) to ensure that the newly added environment variables are available.
+
+## Retrieve an environment variable
+
+To use an environment variable in your code, it must be read into memory. Use one of the following code snippets, depending on which language you're using. These code snippets demonstrate how to get an environment variable given the `ENVIRONMENT_VARIABLE_KEY` and assign the value to a program variable named `value`.
+
+# [C#](#tab/csharp)
+
+For more information, see <a href="/dotnet/api/system.environment.getenvironmentvariable" target="_blank">`Environment.GetEnvironmentVariable` </a>.
+
+```csharp
+using static System.Environment;
+
+class Program
+{
+ static void Main()
+ {
+ // Get the named env var, and assign it to the value variable
+ var value =
+ GetEnvironmentVariable(
+ "ENVIRONMENT_VARIABLE_KEY");
+ }
+}
+```
+
+# [C++](#tab/cpp)
+
+For more information, see <a href="/cpp/c-runtime-library/reference/getenv-s-wgetenv-s" target="_blank">`getenv_s`</a> and <a href="/cpp/c-runtime-library/reference/getenv-wgetenv" target="_blank">`getenv`</a>.
+
+```cpp
+#include <iostream>
+#include <memory>
+#include <stdlib.h>
+#include <string>
+
+std::string GetEnvironmentVariable(const char* name);
+
+int main()
+{
+ // Get the named env var, and assign it to the value variable
+ auto value = GetEnvironmentVariable("ENVIRONMENT_VARIABLE_KEY");
+}
+
+std::string GetEnvironmentVariable(const char* name)
+{
+#if defined(_MSC_VER)
+ size_t requiredSize = 0;
+ (void)getenv_s(&requiredSize, nullptr, 0, name);
+ if (requiredSize == 0)
+ {
+ return "";
+ }
+ auto buffer = std::make_unique<char[]>(requiredSize);
+ (void)getenv_s(&requiredSize, buffer.get(), requiredSize, name);
+ return buffer.get();
+#else
+ auto value = getenv(name);
+ return value ? value : "";
+#endif
+}
+```
+
+# [Java](#tab/java)
+
+For more information, see <a href="https://docs.oracle.com/javase/7/docs/api/java/lang/System.html#getenv(java.lang.String)" target="_blank">`System.getenv` </a>.
+
+```java
+import java.lang.*;
+
+public class Program {
+ public static void main(String[] args) throws Exception {
+ // Get the named env var, and assign it to the value variable
+ String value =
+ System.getenv(
+                "ENVIRONMENT_VARIABLE_KEY");
+ }
+}
+```
+
+# [Node.js](#tab/node-js)
+
+For more information, see <a href="https://nodejs.org/api/process.html#process_process_env" target="_blank">`process.env` </a>.
+
+```javascript
+// Get the named env var, and assign it to the value variable
+const value =
+ process.env.ENVIRONMENT_VARIABLE_KEY;
+```
+
+# [Python](#tab/python)
+
+For more information, see <a href="https://docs.python.org/2/library/os.html#os.environ" target="_blank">`os.environ` </a>.
+
+```python
+import os
+
+# Get the named env var, and assign it to the value variable
+value = os.environ['ENVIRONMENT_VARIABLE_KEY']
+```
+
+# [Objective-C](#tab/objective-c)
+
+For more information, see <a href="https://developer.apple.com/documentation/foundation/nsprocessinfo/1417911-environment?language=objc" target="_blank">`environment` </a>.
+
+```objectivec
+// Get the named env var, and assign it to the value variable
+NSString* value =
+ [[[NSProcessInfo processInfo]environment]objectForKey:@"ENVIRONMENT_VARIABLE_KEY"];
+```
+++
+## Next steps
+
+* Explore [Azure AI services](./what-are-ai-services.md) and choose a service to get started.
ai-services Cognitive Services Limited Access https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/cognitive-services-limited-access.md
+
+ Title: Limited Access features for Azure AI services
+
+description: Azure AI services that are available with Limited Access are described below.
+++++ Last updated : 10/27/2022+++
+# Limited Access features for Azure AI services
+
+Our vision is to empower developers and organizations to use AI to transform society in positive ways. We encourage responsible AI practices to protect the rights and safety of individuals. To achieve this, Microsoft has implemented a Limited Access policy grounded in our [AI Principles](https://www.microsoft.com/ai/responsible-ai) to support responsible deployment of Azure services.
+
+## What is Limited Access?
+
+Limited Access services require registration, and only customers managed by Microsoft, meaning those who are working directly with Microsoft account teams, are eligible for access. The use of these services is limited to the use case selected at the time of registration. Customers must acknowledge that they've reviewed and agree to the terms of service. Microsoft may require customers to reverify this information.
+
+Limited Access services are made available to customers under the terms governing their subscription to Microsoft Azure Services (including the [Service Specific Terms](https://go.microsoft.com/fwlink/?linkid=2018760)). Review these terms carefully as they contain important conditions and obligations governing your use of Limited Access services.
+
+## List of Limited Access services
+
+The following services are Limited Access:
+
+- [Custom Neural Voice](/legal/cognitive-services/speech-service/custom-neural-voice/limited-access-custom-neural-voice?context=/azure/ai-services/speech-service/context/context): Pro features
+- [Speaker Recognition](/legal/cognitive-services/speech-service/speaker-recognition/limited-access-speaker-recognition?context=/azure/ai-services/speech-service/context/context): All features
+- [Face API](/legal/cognitive-services/computer-vision/limited-access-identity?context=/azure/ai-services/computer-vision/context/context): Identify and Verify features, face ID property
+- [Azure AI Vision](/legal/cognitive-services/computer-vision/limited-access?context=/azure/ai-services/computer-vision/context/context): Celebrity Recognition feature
+- [Azure AI Video Indexer](../azure-video-indexer/limited-access-features.md): Celebrity Recognition and Face Identify features
+- [Azure OpenAI](/legal/cognitive-services/openai/limited-access): Azure OpenAI Service, modified abuse monitoring, and modified content filters
+
+Features of these services that aren't listed above are available without registration.
+
+## FAQ about Limited Access
+
+### How do I register for access?
+
+Submit a registration form for each Limited Access service you would like to use:
+
+- [Custom Neural Voice](https://aka.ms/customneural): Pro features
+- [Speaker Recognition](https://aka.ms/azure-speaker-recognition): All features
+- [Face API](https://aka.ms/facerecognition): Identify and Verify features
+- [Azure AI Vision](https://aka.ms/facerecognition): Celebrity Recognition feature
+- [Azure AI Video Indexer](https://aka.ms/facerecognition): Celebrity Recognition and Face Identify features
+- [Azure OpenAI](/legal/cognitive-services/openai/limited-access): Azure OpenAI Service, modified abuse monitoring, and modified content filters
+
+### How long will the registration process take?
+
+Review may take 5-10 business days. You will receive an email as soon as your application is reviewed.
+
+### Who is eligible to use Limited Access services?
+
+Limited Access services are available only to customers managed by Microsoft. Additionally, Limited Access services are only available for certain use cases, and customers must select their intended use case in their registration form.
+
+Please use an email address affiliated with your organization in your registration form. Registration forms submitted with personal email addresses will be denied.
+
+If you're not a managed customer, we invite you to submit an application using the same forms and we will reach out to you about any opportunities to join an eligibility program.
+
+### What is a managed customer? What if I don't know whether I'm a managed customer?
+
+Managed customers work with Microsoft account teams. We invite you to submit a registration form for the features you'd like to use, and we'll verify your eligibility for access. We are not able to accept requests to become a managed customer at this time.
+
+### What happens if I'm an existing customer and I don't register?
+
+Existing customers have until June 30, 2023 to submit a registration form and be approved to continue using Limited Access services after June 30, 2023. We recommend allowing 10 business days for review. Without an approved application, you will be denied access after June 30, 2023.
+
+The registration forms can be found here:
+
+- [Custom Neural Voice](https://aka.ms/customneural): Pro features
+- [Speaker Recognition](https://aka.ms/azure-speaker-recognition): All features
+- [Face API](https://aka.ms/facerecognition): Identify and Verify features
+- [Azure AI Vision](https://aka.ms/facerecognition): Celebrity Recognition feature
+- [Azure AI Video Indexer](https://aka.ms/facerecognition): Celebrity Recognition and Face Identify features
+- [Azure OpenAI](https://customervoice.microsoft.com/Pages/ResponsePage.aspx?id=v4j5cvGGr0GRqy180BHbR7en2Ais5pxKtso_Pz4b1_xUOFA5Qk1UWDRBMjg0WFhPMkIzTzhKQ1dWNyQlQCN0PWcu): Azure OpenAI Service, modified abuse monitoring, and modified content filters
+
+### I'm an existing customer who applied for access to Custom Neural Voice or Speaker Recognition, do I have to register to keep using these services?
+
+We're always looking for opportunities to improve our Responsible AI program, and Limited Access is an update to our service gating processes. If you've previously applied for and been granted access to Custom Neural Voice or Speaker Recognition, we request that you submit a new registration form to continue using these services beyond June 30, 2023.
+
+If you're an existing customer using Custom Neural Voice or Speaker Recognition on June 21, 2022, you have until June 30, 2023 to submit a registration form with your selected use case and receive approval to continue using these services after June 30, 2023. We recommend allowing 10 days for application processing. Existing customers can continue using the service until June 30, 2023, after which they must be approved for access. The registration forms can be found here:
+
+- [Custom Neural Voice](https://aka.ms/customneural): Pro features
+- [Speaker Recognition](https://aka.ms/azure-speaker-recognition): All features
+
+### What if my use case isn't on the registration form?
+
+Limited Access features are only available for the use cases listed on the registration forms. If your desired use case isn't listed, let us know in this [feedback form](https://aka.ms/CogSvcsLimitedAccessFeedback) so we can improve our service offerings.
+
+### Where can I use Limited Access services?
+
+Search [here](https://azure.microsoft.com/global-infrastructure/services/) for a Limited Access service to view its regional availability. In the Brazil South and UAE North datacenter regions, we are prioritizing access for commercial customers managed by Microsoft.
+
+Detailed information about supported regions for Custom Neural Voice and Speaker Recognition operations can be found [here](./speech-service/regions.md).
+
+### What happens to my data if my application is denied?
+
+If you're an existing customer and your application for access is denied, you will no longer be able to use Limited Access features after June 30, 2023. Your data is subject to Microsoft's data retention [policies](https://www.microsoft.com/trust-center/privacy/data-management#:~:text=If%20you%20terminate%20a%20cloud,data%20or%20renew%20your%20subscription.).
+
+### How long will the registration process take?
+
+You'll receive communication from us about your application within 10 business days. In some cases, reviews can take longer. You'll receive an email as soon as your application is reviewed.
+
+## Help and support
+
+Report abuse of Limited Access services [here](https://aka.ms/reportabuse).
ai-services Cognitive Services Support Options https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/cognitive-services-support-options.md
+
+ Title: Azure AI services support and help options
+description: How to obtain help and support for questions and problems when you create applications that integrate with Azure AI services.
+++++ Last updated : 06/28/2022+++
+# Azure AI services support and help options
+
+Are you just starting to explore the functionality of Azure AI services? Perhaps you are implementing a new feature in your application. Or after using the service, do you have suggestions on how to improve it? Here are options for where you can get support, stay up-to-date, give feedback, and report bugs for Azure AI services.
+
+## Create an Azure support request
+
+<div class='icon is-large'>
+ <img alt='Azure support' src='/media/logos/logo_azure.svg'>
+</div>
+
+Explore the range of [Azure support options and choose the plan](https://azure.microsoft.com/support/plans) that best fits, whether you're a developer just starting your cloud journey or a large organization deploying business-critical, strategic applications. Azure customers can create and manage support requests in the Azure portal.
+
+* [Azure portal](https://portal.azure.com/#blade/Microsoft_Azure_Support/HelpAndSupportBlade/overview)
+* [Azure portal for the United States government](https://portal.azure.us)
+
+## Post a question on Microsoft Q&A
+
+For quick and reliable answers on your technical product questions from Microsoft Engineers, Azure Most Valuable Professionals (MVPs), or our expert community, engage with us on [Microsoft Q&A](/answers/products/azure?product=all), Azure's preferred destination for community support.
+
+If you can't find an answer to your problem using search, submit a new question to Microsoft Q&A. Use one of the following tags when you ask your question:
+
+* [Azure AI services](/answers/topics/azure-cognitive-services.html)
+
+**Vision**
+
+* [Azure AI Vision](/answers/topics/azure-computer-vision.html)
+* [Custom Vision](/answers/topics/azure-custom-vision.html)
+* [Face](/answers/topics/azure-face.html)
+* [Document Intelligence](/answers/topics/azure-form-recognizer.html)
+* [Video Indexer](/answers/topics/azure-media-services.html)
+
+**Language**
+
+* [Immersive Reader](/answers/topics/azure-immersive-reader.html)
+* [Language Understanding (LUIS)](/answers/topics/azure-language-understanding.html)
+* [QnA Maker](/answers/topics/azure-qna-maker.html)
+* [Language service](/answers/topics/azure-text-analytics.html)
+* [Translator](/answers/topics/azure-translator.html)
+
+**Speech**
+
+* [Speech service](/answers/topics/azure-speech.html)
++
+**Decision**
+
+* [Anomaly Detector](/answers/topics/azure-anomaly-detector.html)
+* [Content Moderator](/answers/topics/azure-content-moderator.html)
+* [Metrics Advisor](/answers/topics/148981/azure-metrics-advisor.html)
+* [Personalizer](/answers/topics/azure-personalizer.html)
+
+**Azure OpenAI**
+
+* [Azure OpenAI](/answers/topics/azure-openai.html)
+
+## Post a question to Stack Overflow
+
+<div class='icon is-large'>
+ <img alt='Stack Overflow' src='/media/logos/logo_stackoverflow.svg'>
+</div>
+
+For answers on your developer questions from the largest community developer ecosystem, ask your question on Stack Overflow.
+
+If you do submit a new question to Stack Overflow, please use one or more of the following tags when you create the question:
+
+* [Azure AI services](https://stackoverflow.com/questions/tagged/azure-cognitive-services)
+
+**Vision**
+
+* [Azure AI Vision](https://stackoverflow.com/search?q=azure+computer+vision)
+* [Custom Vision](https://stackoverflow.com/search?q=azure+custom+vision)
+* [Face](https://stackoverflow.com/search?q=azure+face)
+* [Document Intelligence](https://stackoverflow.com/search?q=azure+form+recognizer)
+* [Video Indexer](https://stackoverflow.com/search?q=azure+video+indexer)
+
+**Language**
+
+* [Immersive Reader](https://stackoverflow.com/search?q=azure+immersive+reader)
+* [Language Understanding (LUIS)](https://stackoverflow.com/search?q=azure+luis+language+understanding)
+* [QnA Maker](https://stackoverflow.com/search?q=azure+qna+maker)
+* [Language service](https://stackoverflow.com/search?q=azure+text+analytics)
+* [Translator](https://stackoverflow.com/search?q=azure+translator+text)
+
+**Speech**
+
+* [Speech service](https://stackoverflow.com/search?q=azure+speech)
+
+**Decision**
+
+* [Anomaly Detector](https://stackoverflow.com/search?q=azure+anomaly+detector)
+* [Content Moderator](https://stackoverflow.com/search?q=azure+content+moderator)
+* [Metrics Advisor](https://stackoverflow.com/search?q=azure+metrics+advisor)
+* [Personalizer](https://stackoverflow.com/search?q=azure+personalizer)
+
+**Azure OpenAI**
+
+* [Azure OpenAI](https://stackoverflow.com/search?q=azure+openai)
+
+## Submit feedback
+
+To request new features, post them on https://feedback.azure.com. Share your ideas for making Azure AI services and its APIs work better for the applications you develop.
+
+* [Azure AI services](https://feedback.azure.com/d365community/forum/09041fae-0b25-ec11-b6e6-000d3a4f0858)
+
+**Vision**
+
+* [Azure AI Vision](https://feedback.azure.com/d365community/forum/09041fae-0b25-ec11-b6e6-000d3a4f0858?c=7a8853b4-0b25-ec11-b6e6-000d3a4f0858)
+* [Custom Vision](https://feedback.azure.com/d365community/forum/09041fae-0b25-ec11-b6e6-000d3a4f0858?c=7a8853b4-0b25-ec11-b6e6-000d3a4f0858)
+* [Face](https://feedback.azure.com/d365community/forum/09041fae-0b25-ec11-b6e6-000d3a4f0858?c=7a8853b4-0b25-ec11-b6e6-000d3a4f0858)
+* [Document Intelligence](https://feedback.azure.com/d365community/forum/09041fae-0b25-ec11-b6e6-000d3a4f0858?c=7a8853b4-0b25-ec11-b6e6-000d3a4f0858)
+* [Video Indexer](https://feedback.azure.com/d365community/forum/09041fae-0b25-ec11-b6e6-000d3a4f0858?c=6483a3c0-0b25-ec11-b6e6-000d3a4f0858)
+
+**Language**
+
+* [Immersive Reader](https://feedback.azure.com/d365community/forum/09041fae-0b25-ec11-b6e6-000d3a4f0858?c=449a6fba-0b25-ec11-b6e6-000d3a4f0858)
+* [Language Understanding (LUIS)](https://feedback.azure.com/d365community/forum/09041fae-0b25-ec11-b6e6-000d3a4f0858?c=449a6fba-0b25-ec11-b6e6-000d3a4f0858)
+* [QnA Maker](https://feedback.azure.com/d365community/forum/09041fae-0b25-ec11-b6e6-000d3a4f0858?c=449a6fba-0b25-ec11-b6e6-000d3a4f0858)
+* [Language service](https://feedback.azure.com/d365community/forum/09041fae-0b25-ec11-b6e6-000d3a4f0858?c=449a6fba-0b25-ec11-b6e6-000d3a4f0858)
+* [Translator](https://feedback.azure.com/d365community/forum/09041fae-0b25-ec11-b6e6-000d3a4f0858?c=449a6fba-0b25-ec11-b6e6-000d3a4f0858)
+
+**Speech**
+
+* [Speech service](https://feedback.azure.com/d365community/forum/09041fae-0b25-ec11-b6e6-000d3a4f0858?c=21041fae-0b25-ec11-b6e6-000d3a4f0858)
+
+**Decision**
+
+* [Anomaly Detector](https://feedback.azure.com/d365community/forum/09041fae-0b25-ec11-b6e6-000d3a4f0858?c=6c8853b4-0b25-ec11-b6e6-000d3a4f0858)
+* [Content Moderator](https://feedback.azure.com/d365community/forum/09041fae-0b25-ec11-b6e6-000d3a4f0858?c=6c8853b4-0b25-ec11-b6e6-000d3a4f0858)
+* [Metrics Advisor](https://feedback.azure.com/d365community/search/?q=%22Metrics+Advisor%22)
+* [Personalizer](https://feedback.azure.com/d365community/forum/09041fae-0b25-ec11-b6e6-000d3a4f0858?c=6c8853b4-0b25-ec11-b6e6-000d3a4f0858)
+
+## Stay informed
+
+Staying informed about features in a new release or news on the Azure blog can help you find the difference between a programming error, a service bug, or a feature not yet available in Azure AI services.
+
+* Learn more about product updates, roadmap, and announcements in [Azure Updates](https://azure.microsoft.com/updates/?category=ai-machine-learning&query=Azure%20Cognitive%20Services).
+* News about Azure AI services is shared in the [Azure blog](https://azure.microsoft.com/blog/topics/cognitive-services/).
+* [Join the conversation on Reddit](https://www.reddit.com/r/AZURE/search/?q=Cognitive%20Services&restrict_sr=1) about Azure AI services.
+
+## Next steps
+
+> [!div class="nextstepaction"]
+> [What are Azure AI services?](./what-are-ai-services.md)
ai-services Cognitive Services Virtual Networks https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/cognitive-services-virtual-networks.md
+
+ Title: Configure Virtual Networks for Azure AI services
+
+description: Configure layered network security for your Azure AI services resources.
++++++ Last updated : 07/04/2023+++
+# Configure Azure AI services virtual networks
+
+Azure AI services provides a layered security model. This model enables you to secure your Azure AI services accounts to a specific subset of networks. When network rules are configured, only applications requesting data over the specified set of networks can access the account. You can limit access to your resources with request filtering, allowing only requests that originate from specified IP addresses, IP ranges, or from a list of subnets in [Azure Virtual Networks](../virtual-network/virtual-networks-overview.md).
+
+An application that accesses an Azure AI services resource when network rules are in effect requires authorization. Authorization is supported with [Azure Active Directory](../active-directory/fundamentals/active-directory-whatis.md) (Azure AD) credentials or with a valid API key.
+
+> [!IMPORTANT]
+> Turning on firewall rules for your Azure AI services account blocks incoming requests for data by default. In order to allow requests through, one of the following conditions needs to be met:
+>
+> * The request should originate from a service operating within an Azure Virtual Network (VNet) on the allowed subnet list of the target Azure AI services account. The endpoint in requests originating from the VNet needs to be set as the [custom subdomain](cognitive-services-custom-subdomains.md) of your Azure AI services account (an illustrative CLI sketch follows this note).
+> * Or the request should originate from an allowed list of IP addresses.
+>
+> Requests that are blocked include those from other Azure services, from the Azure portal, from logging and metrics services, and so on.
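+
+As an illustration only (not part of the original guidance), here's a minimal Azure CLI sketch that creates an account with a custom subdomain, so VNet clients have a dedicated endpoint to target. The resource group, account name, kind, SKU, and region are placeholders; swap in values that match your scenario.
+
+```azurecli
+# Placeholder names and values; --custom-domain gives the account a custom
+# subdomain endpoint, which VNet clients must target when network rules are in effect
+az cognitiveservices account create \
+    -g "myresourcegroup" -n "myaccount" \
+    --kind "TextAnalytics" --sku "S" -l "westus2" \
+    --custom-domain "myaccount" --yes
+```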
++
+## Scenarios
+
+To secure your Azure AI services resource, you should first configure a rule to deny access to traffic from all networks (including internet traffic) by default. Then, you should configure rules that grant access to traffic from specific VNets. This configuration enables you to build a secure network boundary for your applications. You can also configure rules to grant access to traffic from select public internet IP address ranges, enabling connections from specific internet or on-premises clients.
+
+Network rules are enforced on all network protocols to Azure AI services, including REST and WebSocket. To access data using tools such as the Azure test consoles, explicit network rules must be configured. You can apply network rules to existing Azure AI services resources, or when you create new Azure AI services resources. Once network rules are applied, they're enforced for all requests.
+
+## Supported regions and service offerings
+
+Virtual networks (VNETs) are supported in [regions where Azure AI services are available](https://azure.microsoft.com/global-infrastructure/services/). Azure AI services supports service tags for network rules configuration. The services listed below are included in the **CognitiveServicesManagement** service tag.
+
+> [!div class="checklist"]
+> * Anomaly Detector
+> * Azure OpenAI
+> * Azure AI Vision
+> * Content Moderator
+> * Custom Vision
+> * Face
+> * Language Understanding (LUIS)
+> * Personalizer
+> * Speech service
+> * Language service
+> * QnA Maker
+> * Translator Text
++
+> [!NOTE]
+> If you're using Azure OpenAI, LUIS, Speech Services, or Language services, the **CognitiveServicesManagement** tag only enables you to use the service with the SDK or REST API. To access and use Azure OpenAI Studio, the LUIS portal, Speech Studio, or Language Studio from a virtual network, you need to use the following tags (an illustrative sketch follows this note):
+>
+> * **AzureActiveDirectory**
+> * **AzureFrontDoor.Frontend**
+> * **AzureResourceManager**
+> * **CognitiveServicesManagement**
+> * **CognitiveServicesFrontEnd**
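+
+As an illustration only, the tags above can be referenced from an outbound network security group rule. The sketch below is a minimal example that assumes an existing NSG named `mynsg`; the rule name, priority, and scope are placeholders you should adapt to your environment.
+
+```azurecli
+# Hypothetical outbound rule allowing HTTPS traffic from the VNet to the service tags
+# needed for the studio experiences; adjust the priority to fit your existing rule set
+az network nsg rule create \
+    -g "myresourcegroup" --nsg-name "mynsg" -n "allow-ai-studios-outbound" \
+    --direction Outbound --access Allow --protocol Tcp \
+    --priority 200 \
+    --source-address-prefixes VirtualNetwork \
+    --destination-port-ranges 443 \
+    --destination-address-prefixes AzureActiveDirectory AzureFrontDoor.Frontend AzureResourceManager CognitiveServicesManagement CognitiveServicesFrontEnd
+```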
++
+## Change the default network access rule
+
+By default, Azure AI services resources accept connections from clients on any network. To limit access to selected networks, you must first change the default action.
+
+> [!WARNING]
+> Making changes to network rules can impact your applications' ability to connect to Azure AI services. Setting the default network rule to **deny** blocks all access to the data unless specific network rules that **grant** access are also applied. Be sure to grant access to any allowed networks by using network rules before you change the default rule to deny access. If you're allowlisting IP addresses for your on-premises network, be sure to add all possible outgoing public IP addresses from your on-premises network.
+
+### Managing default network access rules
+
+You can manage default network access rules for Azure AI services resources through the Azure portal, PowerShell, or the Azure CLI.
+
+# [Azure portal](#tab/portal)
+
+1. Go to the Azure AI services resource you want to secure.
+
+1. Under **RESOURCE MANAGEMENT**, select **Virtual network**.
+
+ ![Virtual network option](media/vnet/virtual-network-blade.png)
+
+1. To deny access by default, choose to allow access from **Selected networks**. With the **Selected networks** setting alone, and no configured **Virtual networks** or **Address ranges**, all access is effectively denied. When all access is denied, requests attempting to consume the Azure AI services resource aren't permitted. You can still use the Azure portal, Azure PowerShell, or the Azure CLI to configure the Azure AI services resource.
+1. To allow traffic from all networks, choose to allow access from **All networks**.
+
+ ![Virtual networks deny](media/vnet/virtual-network-deny.png)
+
+1. Select **Save** to apply your changes.
+
+# [PowerShell](#tab/powershell)
+
+1. Install the [Azure PowerShell](/powershell/azure/install-azure-powershell) and [sign in](/powershell/azure/authenticate-azureps), or select **Try it**.
+
+1. Display the status of the default rule for the Azure AI services resource.
+
+ ```azurepowershell-interactive
+ $parameters = @{
+ "ResourceGroupName"= "myresourcegroup"
+ "Name"= "myaccount"
+    }
+ (Get-AzCognitiveServicesAccountNetworkRuleSet @parameters).DefaultAction
+ ```
+
+1. Set the default rule to deny network access by default.
+
+ ```azurepowershell-interactive
+    $parameters = @{
+        ResourceGroupName = "myresourcegroup"
+        Name = "myaccount"
+        DefaultAction = "Deny"
+    }
+    Update-AzCognitiveServicesAccountNetworkRuleSet @parameters
+ ```
+
+1. Set the default rule to allow network access by default.
+
+ ```azurepowershell-interactive
+    $parameters = @{
+        ResourceGroupName = "myresourcegroup"
+        Name = "myaccount"
+        DefaultAction = "Allow"
+    }
+    Update-AzCognitiveServicesAccountNetworkRuleSet @parameters
+ ```
+
+# [Azure CLI](#tab/azure-cli)
+
+1. Install the [Azure CLI](/cli/azure/install-azure-cli) and [sign in](/cli/azure/authenticate-azure-cli), or select **Try it**.
+
+1. Display the status of the default rule for the Azure AI services resource.
+
+ ```azurecli-interactive
+ az cognitiveservices account show \
+ -g "myresourcegroup" -n "myaccount" \
+ --query networkRuleSet.defaultAction
+ ```
+
+1. Set the default rule to deny network access by default.
+
+ ```azurecli-interactive
+ az resource update \
+ --ids {resourceId} \
+ --set properties.networkAcls="{'defaultAction':'Deny'}"
+ ```
+
+1. Set the default rule to allow network access by default.
+
+ ```azurecli-interactive
+ az resource update \
+ --ids {resourceId} \
+ --set properties.networkAcls="{'defaultAction':'Allow'}"
+ ```
+
+***
+
+## Grant access from a virtual network
+
+You can configure Azure AI services resources to allow access only from specific subnets. The allowed subnets may belong to a VNet in the same subscription, or in a different subscription, including subscriptions belonging to a different Azure Active Directory tenant.
+
+Enable a [service endpoint](../virtual-network/virtual-network-service-endpoints-overview.md) for Azure AI services within the VNet. The service endpoint routes traffic from the VNet through an optimal path to the Azure AI services service. The identities of the subnet and the virtual network are also transmitted with each request. Administrators can then configure network rules for the Azure AI services resource that allow requests to be received from specific subnets in a VNet. Clients granted access via these network rules must continue to meet the authorization requirements of the Azure AI services resource to access the data.
+
+Each Azure AI services resource supports up to 100 virtual network rules, which may be combined with [IP network rules](#grant-access-from-an-internet-ip-range).
+
+### Required permissions
+
+To apply a virtual network rule to an Azure AI services resource, the user must have the appropriate permissions for the subnets being added. The required permission is the default *Contributor* role, or the *Azure AI services Contributor* role. Required permissions can also be added to custom role definitions.
+
+The Azure AI services resource and the virtual networks that are granted access may be in different subscriptions, including subscriptions that are part of a different Azure AD tenant.
+
+> [!NOTE]
+> Configuration of rules that grant access to subnets in virtual networks that are a part of a different Azure Active Directory tenant is currently only supported through PowerShell, CLI and REST APIs. Such rules cannot be configured through the Azure portal, though they may be viewed in the portal.
+
+### Managing virtual network rules
+
+You can manage virtual network rules for Azure AI services resources through the Azure portal, PowerShell, or the Azure CLI.
+
+# [Azure portal](#tab/portal)
+
+1. Go to the Azure AI services resource you want to secure.
+
+1. Under **RESOURCE MANAGEMENT**, select **Virtual network**.
+
+1. Check that you've selected to allow access from **Selected networks**.
+
+1. To grant access to a virtual network with an existing network rule, under **Virtual networks**, select **Add existing virtual network**.
+
+ ![Add existing vNet](media/vnet/virtual-network-add-existing.png)
+
+1. Select the **Virtual networks** and **Subnets** options, and then select **Enable**.
+
+ ![Add existing vNet details](media/vnet/virtual-network-add-existing-details.png)
+
+1. To create a new virtual network and grant it access, select **Add new virtual network**.
+
+ ![Add new vNet](media/vnet/virtual-network-add-new.png)
+
+1. Provide the information necessary to create the new virtual network, and then select **Create**.
+
+ ![Create vNet](media/vnet/virtual-network-create.png)
+
+ > [!NOTE]
+ > If a service endpoint for Azure AI services wasn't previously configured for the selected virtual network and subnets, you can configure it as part of this operation.
+ >
+ > Presently, only virtual networks belonging to the same Azure Active Directory tenant are shown for selection during rule creation. To grant access to a subnet in a virtual network belonging to another tenant, please use PowerShell, CLI or REST APIs.
+
+1. To remove a virtual network or subnet rule, select **...** to open the context menu for the virtual network or subnet, and select **Remove**.
+
+ ![Remove vNet](media/vnet/virtual-network-remove.png)
+
+1. Select **Save** to apply your changes.
+
+# [PowerShell](#tab/powershell)
+
+1. Install the [Azure PowerShell](/powershell/azure/install-azure-powershell) and [sign in](/powershell/azure/authenticate-azureps), or select **Try it**.
+
+1. List virtual network rules.
+
+ ```azurepowershell-interactive
+ $parameters = @{
+ "ResourceGroupName"= "myresourcegroup"
+ "Name"= "myaccount"
+    }
+ (Get-AzCognitiveServicesAccountNetworkRuleSet @parameters).VirtualNetworkRules
+ ```
+
+1. Enable service endpoint for Azure AI services on an existing virtual network and subnet.
+
+ ```azurepowershell-interactive
+ Get-AzVirtualNetwork -ResourceGroupName "myresourcegroup" `
+ -Name "myvnet" | Set-AzVirtualNetworkSubnetConfig -Name "mysubnet" `
+ -AddressPrefix "10.0.0.0/24" `
+ -ServiceEndpoint "Microsoft.CognitiveServices" | Set-AzVirtualNetwork
+ ```
+
+1. Add a network rule for a virtual network and subnet.
+
+ ```azurepowershell-interactive
+    $subParameters = @{
+        ResourceGroupName = "myresourcegroup"
+        Name = "myvnet"
+    }
+    $subnet = Get-AzVirtualNetwork @subParameters | Get-AzVirtualNetworkSubnetConfig -Name "mysubnet"
+
+    $parameters = @{
+        ResourceGroupName = "myresourcegroup"
+        Name = "myaccount"
+        VirtualNetworkResourceId = $subnet.Id
+    }
+    Add-AzCognitiveServicesAccountNetworkRule @parameters
+ ```
+
+ > [!TIP]
+ > To add a network rule for a subnet in a VNet belonging to another Azure AD tenant, use a fully-qualified **VirtualNetworkResourceId** parameter in the form "/subscriptions/subscription-ID/resourceGroups/resourceGroup-Name/providers/Microsoft.Network/virtualNetworks/vNet-name/subnets/subnet-name".
+
+1. Remove a network rule for a virtual network and subnet.
+
+ ```azurepowershell-interactive
+    $subParameters = @{
+        ResourceGroupName = "myresourcegroup"
+        Name = "myvnet"
+    }
+    $subnet = Get-AzVirtualNetwork @subParameters | Get-AzVirtualNetworkSubnetConfig -Name "mysubnet"
+
+    $parameters = @{
+        ResourceGroupName = "myresourcegroup"
+        Name = "myaccount"
+        VirtualNetworkResourceId = $subnet.Id
+    }
+    Remove-AzCognitiveServicesAccountNetworkRule @parameters
+ ```
+
+# [Azure CLI](#tab/azure-cli)
+
+1. Install the [Azure CLI](/cli/azure/install-azure-cli) and [sign in](/cli/azure/authenticate-azure-cli), or select **Try it**.
+
+1. List virtual network rules.
+
+ ```azurecli-interactive
+ az cognitiveservices account network-rule list \
+ -g "myresourcegroup" -n "myaccount" \
+ --query virtualNetworkRules
+ ```
+
+1. Enable service endpoint for Azure AI services on an existing virtual network and subnet.
+
+ ```azurecli-interactive
+ az network vnet subnet update -g "myresourcegroup" -n "mysubnet" \
+ --vnet-name "myvnet" --service-endpoints "Microsoft.CognitiveServices"
+ ```
+
+1. Add a network rule for a virtual network and subnet.
+
+ ```azurecli-interactive
+    subnetid=$(az network vnet subnet show \
+    -g "myresourcegroup" -n "mysubnet" --vnet-name "myvnet" \
+    --query id --output tsv)
+
+ # Use the captured subnet identifier as an argument to the network rule addition
+ az cognitiveservices account network-rule add \
+ -g "myresourcegroup" -n "myaccount" \
+ --subnet $subnetid
+ ```
+
+ > [!TIP]
+ > To add a rule for a subnet in a VNet belonging to another Azure AD tenant, use a fully-qualified subnet ID in the form "/subscriptions/subscription-ID/resourceGroups/resourceGroup-Name/providers/Microsoft.Network/virtualNetworks/vNet-name/subnets/subnet-name".
+ >
+ > You can use the **subscription** parameter to retrieve the subnet ID for a VNet belonging to another Azure AD tenant.
+
+1. Remove a network rule for a virtual network and subnet.
+
+ ```azurecli-interactive
+    subnetid=$(az network vnet subnet show \
+    -g "myresourcegroup" -n "mysubnet" --vnet-name "myvnet" \
+    --query id --output tsv)
+
+ # Use the captured subnet identifier as an argument to the network rule removal
+ az cognitiveservices account network-rule remove \
+ -g "myresourcegroup" -n "myaccount" \
+ --subnet $subnetid
+ ```
+
+***
+
+> [!IMPORTANT]
+> Be sure to [set the default rule](#change-the-default-network-access-rule) to **deny**, or network rules have no effect.
+
+## Grant access from an internet IP range
+
+You can configure Azure AI services resources to allow access from specific public internet IP address ranges. This configuration grants access to specific services and on-premises networks, effectively blocking general internet traffic.
+
+Provide allowed internet address ranges using [CIDR notation](https://tools.ietf.org/html/rfc4632) in the form `16.17.18.0/24` or as individual IP addresses like `16.17.18.19`.
+
+ > [!Tip]
+ > Small address ranges using "/31" or "/32" prefix sizes are not supported. These ranges should be configured using individual IP address rules.
+
+IP network rules are only allowed for **public internet** IP addresses. IP address ranges reserved for private networks (as defined in [RFC 1918](https://tools.ietf.org/html/rfc1918#section-3)) aren't allowed in IP rules. Private networks include addresses that start with `10.*`, `172.16.*` - `172.31.*`, and `192.168.*`.
+
+Only IPV4 addresses are supported at this time. Each Azure AI services resource supports up to 100 IP network rules, which may be combined with [Virtual network rules](#grant-access-from-a-virtual-network).
+
+### Configuring access from on-premises networks
+
+To grant access from your on-premises networks to your Azure AI services resource with an IP network rule, you must identify the internet facing IP addresses used by your network. Contact your network administrator for help.
+
+If you're using [ExpressRoute](../expressroute/expressroute-introduction.md) on-premises for public peering or Microsoft peering, you need to identify the NAT IP addresses. For public peering, each ExpressRoute circuit by default uses two NAT IP addresses. Each is applied to Azure service traffic when the traffic enters the Microsoft Azure network backbone. For Microsoft peering, the NAT IP addresses that are used are either customer provided or are provided by the service provider. To allow access to your service resources, you must allow these public IP addresses in the resource IP firewall setting. To find your public peering ExpressRoute circuit IP addresses, [open a support ticket with ExpressRoute](https://portal.azure.com/#blade/Microsoft_Azure_Support/HelpAndSupportBlade/overview) via the Azure portal. Learn more about [NAT for ExpressRoute public and Microsoft peering.](../expressroute/expressroute-nat.md#nat-requirements-for-azure-public-peering)
+
+### Managing IP network rules
+
+You can manage IP network rules for Azure AI services resources through the Azure portal, PowerShell, or the Azure CLI.
+
+# [Azure portal](#tab/portal)
+
+1. Go to the Azure AI services resource you want to secure.
+
+1. Under **RESOURCE MANAGEMENT**, select **Virtual network**.
+
+1. Check that you've selected to allow access from **Selected networks**.
+
+1. To grant access to an internet IP range, enter the IP address or address range (in [CIDR format](https://tools.ietf.org/html/rfc4632)) under **Firewall** > **Address Range**. Only valid public IP (non-reserved) addresses are accepted.
+
+ ![Add IP range](media/vnet/virtual-network-add-ip-range.png)
+
+1. To remove an IP network rule, select the trash can <span class="docon docon-delete x-hidden-focus"></span> icon next to the address range.
+
+ ![Delete IP range](media/vnet/virtual-network-delete-ip-range.png)
+
+1. Select **Save** to apply your changes.
+
+# [PowerShell](#tab/powershell)
+
+1. Install the [Azure PowerShell](/powershell/azure/install-azure-powershell) and [sign in](/powershell/azure/authenticate-azureps), or select **Try it**.
+
+1. List IP network rules.
+
+ ```azurepowershell-interactive
+ $parameters = @{
+ "ResourceGroupName"= "myresourcegroup"
+ "Name"= "myaccount"
+    }
+ (Get-AzCognitiveServicesAccountNetworkRuleSet @parameters).IPRules
+ ```
+
+1. Add a network rule for an individual IP address.
+
+ ```azurepowershell-interactive
+    $parameters = @{
+        ResourceGroupName = "myresourcegroup"
+        Name = "myaccount"
+        IPAddressOrRange = "16.17.18.19"
+    }
+    Add-AzCognitiveServicesAccountNetworkRule @parameters
+ ```
+
+1. Add a network rule for an IP address range.
+
+ ```azurepowershell-interactive
+    $parameters = @{
+        ResourceGroupName = "myresourcegroup"
+        Name = "myaccount"
+        IPAddressOrRange = "16.17.18.0/24"
+    }
+    Add-AzCognitiveServicesAccountNetworkRule @parameters
+ ```
+
+1. Remove a network rule for an individual IP address.
+
+ ```azurepowershell-interactive
+    $parameters = @{
+        ResourceGroupName = "myresourcegroup"
+        Name = "myaccount"
+        IPAddressOrRange = "16.17.18.19"
+    }
+    Remove-AzCognitiveServicesAccountNetworkRule @parameters
+ ```
+
+1. Remove a network rule for an IP address range.
+
+ ```azurepowershell-interactive
+    $parameters = @{
+        ResourceGroupName = "myresourcegroup"
+        Name = "myaccount"
+        IPAddressOrRange = "16.17.18.0/24"
+    }
+    Remove-AzCognitiveServicesAccountNetworkRule @parameters
+ ```
+
+# [Azure CLI](#tab/azure-cli)
+
+1. Install the [Azure CLI](/cli/azure/install-azure-cli) and [sign in](/cli/azure/authenticate-azure-cli), or select **Try it**.
+
+1. List IP network rules.
+
+ ```azurecli-interactive
+ az cognitiveservices account network-rule list \
+ -g "myresourcegroup" -n "myaccount" --query ipRules
+ ```
+
+1. Add a network rule for an individual IP address.
+
+ ```azurecli-interactive
+ az cognitiveservices account network-rule add \
+ -g "myresourcegroup" -n "myaccount" \
+ --ip-address "16.17.18.19"
+ ```
+
+1. Add a network rule for an IP address range.
+
+ ```azurecli-interactive
+ az cognitiveservices account network-rule add \
+ -g "myresourcegroup" -n "myaccount" \
+ --ip-address "16.17.18.0/24"
+ ```
+
+1. Remove a network rule for an individual IP address.
+
+ ```azurecli-interactive
+ az cognitiveservices account network-rule remove \
+ -g "myresourcegroup" -n "myaccount" \
+ --ip-address "16.17.18.19"
+ ```
+
+1. Remove a network rule for an IP address range.
+
+ ```azurecli-interactive
+ az cognitiveservices account network-rule remove \
+ -g "myresourcegroup" -n "myaccount" \
+ --ip-address "16.17.18.0/24"
+ ```
+
+***
+
+> [!IMPORTANT]
+> Be sure to [set the default rule](#change-the-default-network-access-rule) to **deny**, or network rules have no effect.
+
+## Use private endpoints
+
+You can use [private endpoints](../private-link/private-endpoint-overview.md) for your Azure AI services resources to allow clients on a virtual network (VNet) to securely access data over a [Private Link](../private-link/private-link-overview.md). The private endpoint uses an IP address from the VNet address space for your Azure AI services resource. Network traffic between the clients on the VNet and the resource traverses the VNet and a private link on the Microsoft backbone network, eliminating exposure from the public internet.
+
+Private endpoints for Azure AI services resources let you:
+
+* Secure your Azure AI services resource by configuring the firewall to block all connections on the public endpoint for the Azure AI services service.
+* Increase security for the VNet, by enabling you to block exfiltration of data from the VNet.
+* Securely connect to Azure AI services resources from on-premises networks that connect to the VNet using [VPN](../vpn-gateway/vpn-gateway-about-vpngateways.md) or [ExpressRoutes](../expressroute/expressroute-locations.md) with private-peering.
+
+### Conceptual overview
+
+A private endpoint is a special network interface for an Azure resource in your [VNet](../virtual-network/virtual-networks-overview.md). Creating a private endpoint for your Azure AI services resource provides secure connectivity between clients in your VNet and your resource. The private endpoint is assigned an IP address from the IP address range of your VNet. The connection between the private endpoint and the Azure AI services service uses a secure private link.
+
+Applications in the VNet can connect to the service over the private endpoint seamlessly, using the same connection strings and authorization mechanisms that they would use otherwise. The exception is the Speech Services, which require a separate endpoint. See the section on [Private endpoints with the Speech Services](#private-endpoints-with-the-speech-services). Private endpoints can be used with all protocols supported by the Azure AI services resource, including REST.
+
+Private endpoints can be created in subnets that use [Service Endpoints](../virtual-network/virtual-network-service-endpoints-overview.md). Clients in a subnet can connect to one Azure AI services resource using a private endpoint, while using service endpoints to access others.
+
+When you create a private endpoint for an Azure AI services resource in your VNet, a consent request is sent for approval to the Azure AI services resource owner. If the user requesting the creation of the private endpoint is also an owner of the resource, this consent request is automatically approved.
+
+Azure AI services resource owners can manage consent requests and the private endpoints, through the '*Private endpoints*' tab for the Azure AI services resource in the [Azure portal](https://portal.azure.com).
+
+### Private endpoints
+
+When creating the private endpoint, you must specify the Azure AI services resource it connects to. For more information on creating a private endpoint, see:
+
+* [Create a private endpoint using the Private Link Center in the Azure portal](../private-link/create-private-endpoint-portal.md)
+* [Create a private endpoint using Azure CLI](../private-link/create-private-endpoint-cli.md)
+* [Create a private endpoint using Azure PowerShell](../private-link/create-private-endpoint-powershell.md)
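+
+If you're scripting the setup, here's a minimal Azure CLI sketch of the same operation. The resource group, account, VNet, subnet, and endpoint names are placeholders, and the sketch assumes the VNet and subnet already exist; `account` is the group ID (sub-resource) used by Azure AI services accounts.
+
+```azurecli
+# Look up the resource ID of the Azure AI services account (placeholder names)
+accountid=$(az cognitiveservices account show \
+    -g "myresourcegroup" -n "myaccount" \
+    --query id --output tsv)
+
+# Create the private endpoint in an existing VNet and subnet
+az network private-endpoint create \
+    -g "myresourcegroup" -n "myaccount-pe" \
+    --vnet-name "myvnet" --subnet "mysubnet" \
+    --private-connection-resource-id "$accountid" \
+    --group-id account \
+    --connection-name "myaccount-pe-connection"
+```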
+
+### Connecting to private endpoints
+
+> [!NOTE]
+> Azure OpenAI Service uses a different private DNS zone and public DNS zone forwarder than other Azure AI services. Refer to the [Azure services DNS zone configuration article](../private-link/private-endpoint-dns.md#azure-services-dns-zone-configuration) for the correct zone and forwarder names.
+
+Clients on a VNet using the private endpoint should use the same connection string for the Azure AI services resource as clients connecting to the public endpoint. The exception is the Speech Services, which require a separate endpoint. See the section on [Private endpoints with the Speech Services](#private-endpoints-with-the-speech-services). We rely upon DNS resolution to automatically route the connections from the VNet to the Azure AI services resource over a private link.
+
+We create a [private DNS zone](../dns/private-dns-overview.md) attached to the VNet with the necessary updates for the private endpoints, by default. However, if you're using your own DNS server, you may need to make more changes to your DNS configuration. The section on [DNS changes](#dns-changes-for-private-endpoints) below describes the updates required for private endpoints.
+
+### Private endpoints with the Speech Services
+
+See [Using Speech Services with private endpoints provided by Azure Private Link](Speech-Service/speech-services-private-link.md).
+
+### DNS changes for private endpoints
+
+When you create a private endpoint, the DNS CNAME resource record for the Azure AI services resource is updated to an alias in a subdomain with the prefix `privatelink`. By default, we also create a [private DNS zone](../dns/private-dns-overview.md), corresponding to the `privatelink` subdomain, with the DNS A resource records for the private endpoints.
+
+When you resolve the endpoint URL from outside the VNet with the private endpoint, it resolves to the public endpoint of the Azure AI services resource. When resolved from the VNet hosting the private endpoint, the endpoint URL resolves to the private endpoint's IP address.
+
+This approach enables access to the Azure AI services resource using the same connection string for clients in the VNet hosting the private endpoints and clients outside the VNet.
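+
+If you manage this configuration yourself instead of relying on the automatically created zone, the following is a minimal Azure CLI sketch. The zone shown is the general Azure AI services `privatelink` zone; Azure OpenAI uses a different zone, as noted earlier, and all resource names are placeholders.
+
+```azurecli
+# Create the private DNS zone (placeholder resource group)
+az network private-dns zone create \
+    -g "myresourcegroup" -n "privatelink.cognitiveservices.azure.com"
+
+# Link the zone to the VNet that hosts the private endpoint
+az network private-dns link vnet create \
+    -g "myresourcegroup" -n "myvnet-dns-link" \
+    --zone-name "privatelink.cognitiveservices.azure.com" \
+    --virtual-network "myvnet" --registration-enabled false
+
+# Attach the zone to the private endpoint so its A records are managed for you
+az network private-endpoint dns-zone-group create \
+    -g "myresourcegroup" --endpoint-name "myaccount-pe" \
+    --name "default" \
+    --private-dns-zone "privatelink.cognitiveservices.azure.com" \
+    --zone-name "cognitiveservices"
+```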
+
+If you're using a custom DNS server on your network, clients must be able to resolve the fully qualified domain name (FQDN) for the Azure AI services resource endpoint to the private endpoint IP address. Configure your DNS server to delegate your private link subdomain to the private DNS zone for the VNet.
+
+> [!TIP]
+> When using a custom or on-premises DNS server, you should configure your DNS server to resolve the Azure AI services resource name in the 'privatelink' subdomain to the private endpoint IP address. You can do this by delegating the 'privatelink' subdomain to the private DNS zone of the VNet, or configuring the DNS zone on your DNS server and adding the DNS A records.
+
+For more information on configuring your own DNS server to support private endpoints, see the following articles:
+
+* [Name resolution for resources in Azure virtual networks](../virtual-network/virtual-networks-name-resolution-for-vms-and-role-instances.md#name-resolution-that-uses-your-own-dns-server)
+* [DNS configuration for private endpoints](../private-link/private-endpoint-overview.md#dns-configuration)
+
+### Pricing
+
+For pricing details, see [Azure Private Link pricing](https://azure.microsoft.com/pricing/details/private-link).
+
+## Next steps
+
+* Explore the various [Azure AI services](./what-are-ai-services.md)
+* Learn more about [Azure Virtual Network Service Endpoints](../virtual-network/virtual-network-service-endpoints-overview.md)
ai-services Commitment Tier https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/commitment-tier.md
+
+ Title: Create an Azure AI services resource with commitment tier pricing
+description: Learn how to sign up for commitment tier pricing, which is different than pay-as-you-go pricing.
+++++ Last updated : 12/01/2022++
+# Purchase commitment tier pricing
+
+Azure AI offers commitment tier pricing, with each tier offering a discounted rate compared to the pay-as-you-go pricing model. With commitment tier pricing, you can commit to using the following Azure AI services features for a fixed fee, enabling you to have a predictable total cost based on the needs of your workload:
+
+* Speech to text (Standard)
+* Text to speech (Neural)
+* Text Translation (Standard)
+* Language Understanding standard (Text Requests)
+* Azure AI Language
+ * Sentiment Analysis
+ * Key Phrase Extraction
+ * Language Detection
+
+Commitment tier pricing is also available for the following Azure AI services:
+
+* Azure AI Language
+ * Sentiment Analysis
+ * Key Phrase Extraction
+ * Language Detection
+
+* Azure AI Vision - OCR
+
+* Document Intelligence - Custom/Invoice
+
+For more information, see [Azure AI services pricing](https://azure.microsoft.com/pricing/details/cognitive-services/).
+
+## Create a new resource
+
+1. Sign in to the [Azure portal](https://portal.azure.com/) and select **Create a new resource** for one of the applicable Azure AI services listed.
+
+2. Enter the applicable information to create your resource. Be sure to select the standard pricing tier.
+
+ > [!NOTE]
+    > If you intend to purchase a commitment tier for disconnected container usage, you will need to request separate access and select the **Commitment tier disconnected containers** pricing tier. For more information, see [disconnected containers](./containers/disconnected-containers.md).
+
+ :::image type="content" source="media/commitment-tier/create-resource.png" alt-text="A screenshot showing resource creation on the Azure portal." lightbox="media/commitment-tier/create-resource.png":::
+
+3. Once your resource is created, you can change your pricing from pay-as-you-go to a commitment plan.
+
+## Purchase a commitment plan by updating your Azure resource
+
+1. Sign in to the [Azure portal](https://portal.azure.com/) with your Azure subscription.
+2. In your Azure resource for one of the applicable features listed, select **Commitment tier pricing**.
+3. Select **Change** to view the available commitments for hosted API and container usage. Choose a commitment plan for one or more of the following offerings:
+ * **Web**: web-based APIs, where you send data to Azure for processing.
+ * **Connected container**: Docker containers that enable you to [deploy Azure AI services on premises](cognitive-services-container-support.md), and maintain an internet connection for billing and metering.
+
+ :::image type="content" source="media/commitment-tier/commitment-tier-pricing.png" alt-text="A screenshot showing the commitment tier pricing page on the Azure portal." lightbox="media/commitment-tier/commitment-tier-pricing.png":::
+
+4. In the window that appears, select both a **Tier** and **Auto-renewal** option.
+
+ * **Commitment tier** - The commitment tier for the feature. The commitment tier is enabled immediately when you select **Purchase** and you're charged the commitment amount on a pro-rated basis.
+
+ * **Auto-renewal** - Choose how you want to renew, change, or cancel the current commitment plan starting with the next billing cycle. If you decide to autorenew, the **Auto-renewal date** is the date (in your local timezone) when you'll be charged for the next billing cycle. This date coincides with the start of the calendar month.
+
+ > [!CAUTION]
+ > Once you select **Purchase** you will be charged for the tier you select. Once purchased, the commitment plan is non-refundable.
+ >
+ > Commitment plans are charged monthly, except the first month upon purchase which is pro-rated (cost and quota) based on the number of days remaining in that month. For the subsequent months, the charge is incurred on the first day of the month.
+
+ :::image type="content" source="media/commitment-tier/enable-commitment-plan.png" alt-text="A screenshot showing the commitment tier pricing and renewal details on the Azure portal." lightbox="media/commitment-tier/enable-commitment-plan-large.png":::
++
+## Overage pricing
+
+If you use the resource above the quota provided, you're charged for the additional usage at the overage rate specified in the commitment tier.
+
+## Purchase a different commitment plan
+
+The commitment plans have a calendar month commitment period. You can purchase a commitment plan at any time from the default pay-as-you-go pricing model. When you purchase a plan, you're charged a pro-rated price for the remaining month. During the commitment period, you can't change the commitment plan for the current month. However, you can choose a different commitment plan for the next calendar month. The billing for the next month would happen on the first day of the next month.
+
+If you need a larger commitment plan than any of the ones offered, contact `csgate@microsoft.com`.
+
+## End a commitment plan
+
+If you decide that you don't want to continue purchasing a commitment plan, you can set your resource's autorenewal to **Do not auto-renew**. Your commitment plan expires on the displayed commitment end date. After this date, you won't be charged for the commitment plan. You're able to continue using the Azure resource to make API calls, charged at pay-as-you-go pricing. You have until midnight (UTC) on the last day of each month to end a commitment plan, and not be charged for the following month.
+
+## See also
+
+* [Azure AI services pricing](https://azure.microsoft.com/pricing/details/cognitive-services/).
ai-services Category Taxonomy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/computer-vision/Category-Taxonomy.md
+
+ Title: Taxonomy of image categories - Azure AI Vision
+
+description: Get the 86 categories of taxonomy for the Azure AI Vision API in Azure AI services.
+++++++ Last updated : 04/17/2019+++
+# Azure AI Vision 86-category taxonomy
+
+abstract_
+
+abstract_net
+
+abstract_nonphoto
+
+abstract_rect
+
+abstract_shape
+
+abstract_texture
+
+animal_
+
+animal_bird
+
+animal_cat
+
+animal_dog
+
+animal_horse
+
+animal_panda
+
+building_
+
+building_arch
+
+building_brickwall
+
+building_church
+
+building_corner
+
+building_doorwindows
+
+building_pillar
+
+building_stair
+
+building_street
+
+dark_
+
+drink_
+
+drink_can
+
+dark_fire
+
+dark_fireworks
+
+dark_light
+
+food_
+
+food_bread
+
+food_fastfood
+
+food_grilled
+
+food_pizza
+
+indoor_
+
+indoor_churchwindow
+
+indoor_court
+
+indoor_doorwindows
+
+indoor_marketstore
+
+indoor_room
+
+indoor_venue
+
+object_screen
+
+object_sculpture
+
+others_
+
+outdoor_
+
+outdoor_city
+
+outdoor_field
+
+outdoor_grass
+
+outdoor_house
+
+outdoor_mountain
+
+outdoor_oceanbeach
+
+outdoor_playground
+
+outdoor_pool
+
+outdoor_railway
+
+outdoor_road
+
+outdoor_sportsfield
+
+outdoor_stonerock
+
+outdoor_street
+
+outdoor_water
+
+outdoor_waterside
+
+people_
+
+people_baby
+
+people_crowd
+
+people_group
+
+people_hand
+
+people_many
+
+people_portrait
+
+people_show
+
+people_swimming
+
+people_tattoo
+
+people_young
+
+plant_
+
+plant_branch
+
+plant_flower
+
+plant_leaves
+
+plant_tree
+
+sky_cloud
+
+sky_object
+
+sky_sun
+
+text_
+
+text_mag
+
+text_map
+
+text_menu
+
+text_sign
+
+trans_bicycle
+
+trans_bus
+
+trans_car
+
+trans_trainstation
ai-services Build Enrollment App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/computer-vision/Tutorials/build-enrollment-app.md
+
+ Title: Build a React app to add users to a Face service
+
+description: Learn how to set up your development environment and deploy a Face app to get consent from customers.
++++++ Last updated : 11/17/2020+++
+# Build a React app to add users to a Face service
+
+This guide will show you how to get started with the sample Face enrollment application. The app demonstrates best practices for obtaining meaningful consent to add users into a face recognition service and acquiring high-accuracy face data. An integrated system could use an app like this to provide touchless access control, identity verification, attendance tracking, or a personalization kiosk experience, based on users' face data.
+
+When launched, the application shows users a detailed consent screen. If the user gives consent, the app prompts for a username and password and then captures a high-quality face image using the device's camera.
+
+The sample app is written using JavaScript and the React Native framework. It can currently be deployed on Android and iOS devices; more deployment options are coming in the future.
+
+## Prerequisites
+
+* An Azure subscription - [Create one for free](https://azure.microsoft.com/free/cognitive-services/).
+* Once you have your Azure subscription, [create a Face resource](https://portal.azure.com/#create/Microsoft.CognitiveServicesFace) in the Azure portal to get your key and endpoint. After it deploys, select **Go to resource**.
+ * You'll need the key and endpoint from the resource you created to connect your application to Face API.
+
+### Important security considerations
+* For local development and initial limited testing, it's acceptable (although not best practice) to use environment variables to hold the API key and endpoint; a minimal retrieval sketch follows this list. For pilot and final deployments, the API key should be stored securely, which likely involves using an intermediate service to validate a user token generated during login.
+* Never store the API key or endpoint in code or commit them to a version control system (e.g. Git). If that happens by mistake, you should immediately generate a new API key/endpoint and revoke the previous ones.
+* As a best practice, consider having separate API keys for development and production.
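+
+For local development only, here's a minimal sketch (assuming you use the Azure CLI) that retrieves the endpoint and a key and exports them as environment variables. The resource names are placeholders, and the `FACE_API_ENDPOINT` and `FACE_API_KEY` variable names are this sketch's convention rather than something the sample app requires.
+
+```azurecli
+# Placeholder resource group and account names; for local development only,
+# never commit the retrieved values to source control
+export FACE_API_ENDPOINT=$(az cognitiveservices account show \
+    -g "myresourcegroup" -n "myfaceaccount" \
+    --query properties.endpoint --output tsv)
+
+export FACE_API_KEY=$(az cognitiveservices account keys list \
+    -g "myresourcegroup" -n "myfaceaccount" \
+    --query key1 --output tsv)
+```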
+
+## Set up the development environment
+
+#### [Android](#tab/android)
+
+1. Clone the git repository for the [sample app](https://github.com/azure-samples/cognitive-services-FaceAPIEnrollmentSample).
+1. To set up your development environment, follow the <a href="https://reactnative.dev/docs/environment-setup" title="React Native documentation" target="_blank">React Native documentation <span class="docon docon-navigate-external x-hidden-focus"></span></a>. Select **React Native CLI Quickstart**. Select your development OS and **Android** as the target OS. Complete the sections **Installing dependencies** and **Android development environment**.
+1. Download your preferred text editor such as [Visual Studio Code](https://code.visualstudio.com/).
+1. Retrieve your FaceAPI endpoint and key in the Azure portal under the **Overview** tab of your resource. Don't check in your Face API key to your remote repository.
+
+ > [!WARNING]
+ > For local development and testing only, you can enter the API key and endpoint as environment variables. For final deployment, store the API key in a secure location and never in the code or environment variables. See the [Azure AI services Authentication guide](../../authentication.md) for other ways to authenticate the service.
+
+1. Run the app using either the Android Virtual Device emulator from Android Studio, or your own Android device. To test your app on a physical device, follow the relevant <a href="https://reactnative.dev/docs/running-on-device" title="React Native documentation" target="_blank">React Native documentation <span class="docon docon-navigate-external x-hidden-focus"></span></a>.
+
+#### [iOS](#tab/ios)
+
+1. Clone the git repository for the [sample app](https://github.com/azure-samples/cognitive-services-FaceAPIEnrollmentSample).
+1. To set up your development environment, follow the <a href="https://reactnative.dev/docs/environment-setup" title="React Native documentation" target="_blank">React Native documentation <span class="docon docon-navigate-external x-hidden-focus"></span></a>. Select **React Native CLI Quickstart**. Select **macOS** as your development OS and **iOS** as the target OS. Complete the section **Installing dependencies**.
+1. Download your preferred text editor such as [Visual Studio Code](https://code.visualstudio.com/). You will also need to download Xcode.
+1. Retrieve your FaceAPI endpoint and key in the Azure portal under the **Overview** tab of your resource.
+
+ > [!WARNING]
+ > For local development and testing only, you can enter the API key and endpoint as environment variables. For final deployment, store the API key in a secure location and never in the code or environment variables. See the [Azure AI services Authentication guide](../../authentication.md) for other ways to authenticate the service.
+
+1. Run the app using either a simulated device from Xcode, or your own iOS device. To test your app on a physical device, follow the relevant <a href="https://reactnative.dev/docs/running-on-device" title="React Native documentation" target="_blank">React Native documentation <span class="docon docon-navigate-external x-hidden-focus"></span></a>.
+++
+## Customize the app for your business
+
+Now that you have set up the sample app, you can tailor it to your own needs.
+
+For example, you may want to add situation-specific information on your consent page:
+
+> [!div class="mx-imgBorder"]
+> ![app consent page](../media/enrollment-app/1-consent-1.jpg)
+
+1. Add more instructions to improve verification accuracy.
+
+ Many face recognition issues are caused by low-quality reference images. Some factors that can degrade model performance are:
+ * Face size (faces that are distant from the camera)
+ * Face orientation (faces turned or tilted away from camera)
+ * Poor lighting conditions (either low light or backlighting) where the image may be poorly exposed or have too much noise
+    * Occlusion (partially hidden or obstructed faces, including by accessories like hats or thick-rimmed glasses)
+ * Blur (such as by rapid face movement when the photograph was taken).
+
+    The service provides image quality checks to help you decide whether an image is of sufficient quality, based on the above factors, to add the customer or attempt face recognition. This app demonstrates how to access frames from the device's camera, detect quality, show user interface messages that help the user capture a higher-quality image, select the highest-quality frames, and add the detected face into the Face API service (an illustrative REST call sketch follows this list).
++
+> [!div class="mx-imgBorder"]
+> ![app image capture instruction page](../media/enrollment-app/4-instruction.jpg)
+
+1. The sample app offers functionality for deleting the user's information and the option to re-add it. You can enable or disable these operations based on your business requirements.
+
+> [!div class="mx-imgBorder"]
+> ![profile management page](../media/enrollment-app/10-manage-2.jpg)
+
+To extend the app's functionality to cover the full experience, read the [overview](../enrollment-overview.md) for additional features to implement and best practices.
+
+1. Configure your database to map each person with their ID
+
+    You need to use a database to store the face image along with user metadata. A social security number or other unique person identifier can be used as a key to look up their face ID.
+
+1. For secure methods of passing your subscription key and endpoint to Face service, see the Azure AI services [Security](../../security-features.md?tabs=command-line%2Ccsharp) guide.
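+
+To illustrate the quality check mentioned above, here's a hedged REST sketch of the Face detect call that returns the `qualityForRecognition` attribute. The endpoint and key variables are placeholders (see the earlier environment-variable sketch), `selfie.jpg` is a hypothetical local image, and your app may call the equivalent SDK method instead.
+
+```bash
+# Detect a face and request the quality-for-recognition attribute; this attribute
+# requires the detection_03 and recognition_04 models
+curl -s -X POST \
+  "${FACE_API_ENDPOINT}/face/v1.0/detect?detectionModel=detection_03&recognitionModel=recognition_04&returnFaceId=true&returnFaceAttributes=qualityForRecognition" \
+  -H "Ocp-Apim-Subscription-Key: ${FACE_API_KEY}" \
+  -H "Content-Type: application/octet-stream" \
+  --data-binary "@selfie.jpg"
+```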
++
+## Deploy the app
+
+#### [Android](#tab/android)
+
+First, make sure that your app is ready for production deployment: remove any keys or secrets from the app code and make sure you have followed the [security best practices](../../security-features.md?tabs=command-line%2ccsharp).
+
+When you're ready to release your app for production, you'll generate a release-ready APK file, which is the package file format for Android apps. This APK file must be signed with a private key. With this release build, you can begin distributing the app to your devices directly.
+
+Follow the <a href="https://developer.android.com/studio/publish/preparing#publishing-build" title="Prepare for release" target="_blank">Prepare for release <span class="docon docon-navigate-external x-hidden-focus"></span></a> documentation to learn how to generate a private key, sign your application, and generate a release APK.
+
+Once you've created a signed APK, see the <a href="https://developer.android.com/studio/publish" title="Publish your app" target="_blank">Publish your app <span class="docon docon-navigate-external x-hidden-focus"></span></a> documentation to learn more about how to release your app.
+
+#### [iOS](#tab/ios)
+
+First, make sure that your app is ready for production deployment: remove any keys or secrets from the app code and make sure you have followed the [security best practices](../../security-features.md?tabs=command-line%2ccsharp). To prepare for distribution, you'll need to create an app icon and a launch screen, and configure the deployment info settings. Follow the [documentation from Xcode](https://developer.apple.com/documentation/Xcode/preparing_your_app_for_distribution) to prepare your app for distribution.
+
+When you're ready to release your app for production, you'll build an archive of your app. Follow the [Xcode documentation](https://developer.apple.com/documentation/Xcode/distributing_your_app_for_beta_testing_and_releases) on how to create an archive build and options for distributing your app.
+++
+## Next steps
+
+In this guide, you learned how to set up your development environment and get started with the sample app. If you're new to React Native, you can read the [getting started docs](https://reactnative.dev/docs/getting-started) for more background information. It may also be helpful to familiarize yourself with the [Face API](../overview-identity.md). Read the other sections on adding users before you begin development.
ai-services Storage Lab Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/computer-vision/Tutorials/storage-lab-tutorial.md
+
+ Title: "Tutorial: Generate metadata for Azure images"
+
+description: In this tutorial, you'll learn how to integrate the Azure AI Vision service into a web app to generate metadata for images.
++++++ Last updated : 12/29/2022+
+ms.devlang: csharp
+
+#Customer intent: As a developer of an image-intensive web app, I want to be able to automatically generate captions and search keywords for each of my images.
++
+# Tutorial: Use Azure AI Vision to generate image metadata in Azure Storage
+
+In this tutorial, you'll learn how to integrate the Azure AI Vision service into a web app to generate metadata for uploaded images. This is useful for [digital asset management (DAM)](../overview.md#azure-ai-vision-for-digital-asset-management) scenarios, such as when a company wants to quickly generate descriptive captions or searchable keywords for all of its images.
+
+You'll use Visual Studio to write an MVC Web app that accepts images uploaded by users and stores the images in Azure blob storage. You'll learn how to read and write blobs in C# and use blob metadata to attach additional information to the blobs you create. Then you'll submit each image uploaded by the user to the Azure AI Vision API to generate a caption and search metadata for the image. Finally, you can deploy the app to the cloud using Visual Studio.
+
+This tutorial shows you how to:
+
+> [!div class="checklist"]
+> * Create a storage account and storage containers using the Azure portal
+> * Create a Web app in Visual Studio and deploy it to Azure
+> * Use the Azure AI Vision API to extract information from images
+> * Attach metadata to Azure Storage images
+> * Check image metadata using [Azure Storage Explorer](http://storageexplorer.com/)
+
+> [!TIP]
+> The section [Use Azure AI Vision to generate metadata](#Exercise5) is most relevant to Image Analysis. Skip to that section if you just want to see how Image Analysis is integrated into an established application.
+
+If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/cognitive-services) before you begin.
+
+## Prerequisites
+
+- [Visual Studio 2017 Community edition](https://www.visualstudio.com/products/visual-studio-community-vs.aspx) or higher, with the "ASP.NET and web development" and "Azure development" workloads installed.
+- The [Azure Storage Explorer](http://storageexplorer.com/) tool installed.
+
+<a name="Exercise1"></a>
+## Create a storage account
+
+In this section, you'll use the [Azure portal](https://portal.azure.com?WT.mc_id=academiccontent-github-cxa) to create a storage account. Then you'll create a pair of containers: one to store images uploaded by the user, and another to store image thumbnails generated from the uploaded images.
+
+1. Open the [Azure portal](https://portal.azure.com?WT.mc_id=academiccontent-github-cxa) in your browser. If you're asked to sign in, do so using your Microsoft account.
+1. To create a storage account, select **+ Create a resource** in the ribbon on the left. Then select **Storage**, followed by **Storage account**.
+
+ ![Creating a storage account](Images/new-storage-account.png)
+1. Enter a unique name for the storage account in the **Name** field and make sure a green check mark appears next to it. The name is important, because it forms one part of the URL through which blobs created under this account are accessed. Place the storage account in a new resource group named "IntellipixResources," and select the region nearest you. Finish by selecting the **Review + create** button at the bottom of the screen to create the new storage account.
+ > [!NOTE]
+ > Storage account names can be 3 to 24 characters in length and can only contain numbers and lowercase letters. In addition, the name you enter must be unique within Azure. If someone else has chosen the same name, you'll be notified that the name isn't available with a red exclamation mark in the **Name** field.
+
+ ![Specifying parameters for a new storage account](Images/create-storage-account.png)
+1. Select **Resource groups** in the ribbon on the left. Then select the "IntellipixResources" resource group.
+
+ ![Opening the resource group](Images/open-resource-group.png)
+1. In the tab that opens for the resource group, select the storage account you created. If the storage account isn't there yet, you can select **Refresh** at the top of the tab until it appears.
+
+ ![Opening the new storage account](Images/open-storage-account.png)
+1. In the tab for the storage account, select **Blobs** to view a list of containers associated with this account.
+
+ ![Viewing blobs button](Images/view-containers.png)
+
+1. The storage account currently has no containers. Before you can create a blob, you must create a container to store it in. Select **+ Container** to create a new container. Type `photos` into the **Name** field and select **Blob** as the **Public access level**. Then select **OK** to create a container named "photos."
+
+ > By default, containers and their contents are private. Selecting **Blob** as the access level makes the blobs in the "photos" container publicly accessible, but doesn't make the container itself public. This is what you want because the images stored in the "photos" container will be linked to from a Web app.
+
+ ![Creating a "photos" container](Images/create-photos-container.png)
+
+1. Repeat the previous step to create a container named "thumbnails," once more ensuring that the container's **Public access level** is set to **Blob**.
+1. Confirm that both containers appear in the list of containers for this storage account, and that the names are spelled correctly.
+
+ ![The new containers](Images/new-containers.png)
+
+1. Close the "Blob service" screen. Select **Access keys** in the menu on the left side of the storage-account screen, and then select the **Copy** button next to **KEY** for **key1**. Paste this access key into your favorite text editor for later use.
+
+ ![Copying the access key](Images/copy-storage-account-access-key.png)
+
+You've now created a storage account to hold images uploaded to the app you're going to build, and containers to store the images in.
+
+<a name="Exercise2"></a>
+## Run Azure Storage Explorer
+
+[Azure Storage Explorer](http://storageexplorer.com/) is a free tool that provides a graphical interface for working with Azure Storage on PCs running Windows, macOS, and Linux. It provides most of the same functionality as the Azure portal and offers other features like the ability to view blob metadata. In this section, you'll use the Microsoft Azure Storage Explorer to view the containers you created in the previous section.
+
+1. If you haven't installed Storage Explorer or would like to make sure you're running the latest version, go to http://storageexplorer.com/ and download and install it.
+1. Start Storage Explorer. If you're asked to sign in, do so using your Microsoft account&mdash;the same one that you used to sign in to the Azure portal. If you don't see the storage account in Storage Explorer's left pane, select the **Manage Accounts** button highlighted below and make sure both your Microsoft account and the subscription used to create the storage account have been added to Storage Explorer.
+
+ ![Managing accounts in Storage Explorer](Images/add-account.png)
+
+1. Select the small arrow next to the storage account to display its contents, and then select the arrow next to **Blob Containers**. Confirm that the containers you created appear in the list.
+
+ ![Viewing blob containers](Images/storage-explorer.png)
+
+The containers are currently empty, but that will change once your app is deployed and you start uploading photos. Having Storage Explorer installed will make it easy for you to see what your app writes to blob storage.
+
+<a name="Exercise3"></a>
+## Create a new Web app in Visual Studio
+
+In this section, you'll create a new Web app in Visual Studio and add code to implement the basic functionality required to upload images, write them to blob storage, and display them in a Web page.
+
+1. Start Visual Studio and use the **File -> New -> Project** command to create a new Visual C# **ASP.NET Web Application** project named "Intellipix" (short for "Intelligent Pictures").
+
+ ![Creating a new Web Application project](Images/new-web-app.png)
+
+1. In the "New ASP.NET Web Application" dialog, make sure **MVC** is selected. Then select **OK**.
+
+ ![Creating a new ASP.NET MVC project](Images/new-mvc-project.png)
+
+1. Take a moment to review the project structure in Solution Explorer. Among other things, there's a folder named **Controllers** that holds the project's MVC controllers, and a folder named **Views** that holds the project's views. You'll be working with assets in these folders and others as you implement the application.
+
+ ![The project shown in Solution Explorer](Images/project-structure.png)
+
+1. Use Visual Studio's **Debug -> Start Without Debugging** command (or press **Ctrl+F5**) to launch the application in your browser. Here's how the application looks in its present state:
+
+ ![The initial application](Images/initial-application.png)
+
+1. Close the browser and return to Visual Studio. In Solution Explorer, right-click the **Intellipix** project and select **Manage NuGet Packages...**. Select **Browse**. Then type `imageresizer` into the search box and select the NuGet package named **ImageResizer**. Finally, select **Install** to install the latest stable version of the package. ImageResizer contains APIs that you'll use to create image thumbnails from the images uploaded to the app. OK any changes and accept any licenses presented to you.
+
+ ![Installing ImageResizer](Images/install-image-resizer.png)
+
+1. Repeat this process to add the NuGet package named **WindowsAzure.Storage** to the project. This package contains APIs for accessing Azure Storage from .NET applications. OK any changes and accept any licenses presented to you.
+
+ ![Installing WindowsAzure.Storage](Images/install-storage-package.png)
+
+1. Open _Web.config_ and add the following statement to the ```<appSettings>``` section, replacing ACCOUNT_NAME with the name of the storage account you created in the first section, and ACCOUNT_KEY with the access key you saved.
+
+ ```xml
+ <add key="StorageConnectionString" value="DefaultEndpointsProtocol=https;AccountName=ACCOUNT_NAME;AccountKey=ACCOUNT_KEY" />
+ ```
+
+ > [!IMPORTANT]
+ > The _Web.config_ file is meant to hold sensitive information like your subscription keys, and any HTTP request to a file with the _.config_ extension is handled by the ASP.NET engine, which returns a "This type of page is not served" message. However, if an attacker is able to find some other exploit that allows them to view your _Web.config_ contents, then they'll be able to expose that information. See [Protecting Connection Strings and Other Configuration Information](/aspnet/web-forms/overview/data-access/advanced-data-access-scenarios/protecting-connection-strings-and-other-configuration-information-cs) for extra steps you can take to further secure your _Web.config_ data.
+
+1. Open the file named *_Layout.cshtml* in the project's **Views/Shared** folder. On line 19, change "Application name" to "Intellipix." The line should look like this:
+
+ ```C#
+ @Html.ActionLink("Intellipix", "Index", "Home", new { area = "" }, new { @class = "navbar-brand" })
+ ```
+
+ > [!NOTE]
+ > In an ASP.NET MVC project, *_Layout.cshtml* is a special view that serves as a template for other views. You typically define header and footer content that is common to all views in this file.
+
+1. Right-click the project's **Models** folder and use the **Add -> Class...** command to add a class file named *BlobInfo.cs* to the folder. Then replace the empty **BlobInfo** class with the following class definition:
+
+ ```C#
+ public class BlobInfo
+ {
+ public string ImageUri { get; set; }
+ public string ThumbnailUri { get; set; }
+ public string Caption { get; set; }
+ }
+ ```
+
+1. Open *HomeController.cs* from the project's **Controllers** folder and add the following `using` statements to the top of the file:
+
+ ```C#
+ using ImageResizer;
+ using Intellipix.Models;
+ using Microsoft.WindowsAzure.Storage;
+ using Microsoft.WindowsAzure.Storage.Blob;
+ using System.Configuration;
+ using System.Threading.Tasks;
+ using System.IO;
+ ```
+
+1. Replace the **Index** method in *HomeController.cs* with the following implementation:
+
+ ```C#
+ public ActionResult Index()
+ {
+ // Pass a list of blob URIs in ViewBag
+ CloudStorageAccount account = CloudStorageAccount.Parse(ConfigurationManager.AppSettings["StorageConnectionString"]);
+ CloudBlobClient client = account.CreateCloudBlobClient();
+ CloudBlobContainer container = client.GetContainerReference("photos");
+ List<BlobInfo> blobs = new List<BlobInfo>();
+
+ foreach (IListBlobItem item in container.ListBlobs())
+ {
+ var blob = item as CloudBlockBlob;
+
+ if (blob != null)
+ {
+ blobs.Add(new BlobInfo()
+ {
+ ImageUri = blob.Uri.ToString(),
+ ThumbnailUri = blob.Uri.ToString().Replace("/photos/", "/thumbnails/")
+ });
+ }
+ }
+
+ ViewBag.Blobs = blobs.ToArray();
+ return View();
+ }
+ ```
+
+ The new **Index** method enumerates the blobs in the `"photos"` container and passes an array of **BlobInfo** objects representing those blobs to the view through ASP.NET MVC's **ViewBag** property. Later, you'll modify the view to enumerate these objects and display a collection of photo thumbnails. The classes you'll use to access your storage account and enumerate the blobs&mdash;**[CloudStorageAccount](/dotnet/api/microsoft.azure.storage.cloudstorageaccount?view=azure-dotnet&preserve-view=true)**, **[CloudBlobClient](/dotnet/api/microsoft.azure.storage.blob.cloudblobclient?view=azure-dotnet-legacy&preserve-view=true)**, and **[CloudBlobContainer](/dotnet/api/microsoft.azure.storage.blob.cloudblobcontainer?view=azure-dotnet-legacy&preserve-view=true)**&mdash;come from the **WindowsAzure.Storage** package you installed through NuGet.
+
+1. Add the following method to the **HomeController** class in *HomeController.cs*:
+
+ ```C#
+ [HttpPost]
+ public async Task<ActionResult> Upload(HttpPostedFileBase file)
+ {
+ if (file != null && file.ContentLength > 0)
+ {
+ // Make sure the user selected an image file
+ if (!file.ContentType.StartsWith("image"))
+ {
+ TempData["Message"] = "Only image files may be uploaded";
+ }
+ else
+ {
+ try
+ {
+ // Save the original image in the "photos" container
+ CloudStorageAccount account = CloudStorageAccount.Parse(ConfigurationManager.AppSettings["StorageConnectionString"]);
+ CloudBlobClient client = account.CreateCloudBlobClient();
+ CloudBlobContainer container = client.GetContainerReference("photos");
+ CloudBlockBlob photo = container.GetBlockBlobReference(Path.GetFileName(file.FileName));
+ await photo.UploadFromStreamAsync(file.InputStream);
+
+ // Generate a thumbnail and save it in the "thumbnails" container
+ using (var outputStream = new MemoryStream())
+ {
+ file.InputStream.Seek(0L, SeekOrigin.Begin);
+ var settings = new ResizeSettings { MaxWidth = 192 };
+ ImageBuilder.Current.Build(file.InputStream, outputStream, settings);
+ outputStream.Seek(0L, SeekOrigin.Begin);
+ container = client.GetContainerReference("thumbnails");
+ CloudBlockBlob thumbnail = container.GetBlockBlobReference(Path.GetFileName(file.FileName));
+ await thumbnail.UploadFromStreamAsync(outputStream);
+ }
+ }
+ catch (Exception ex)
+ {
+ // In case something goes wrong
+ TempData["Message"] = ex.Message;
+ }
+ }
+ }
+
+ return RedirectToAction("Index");
+ }
+ ```
+
+ This is the method that's called when you upload a photo. It stores each uploaded image as a blob in the `"photos"` container, creates a thumbnail image from the original image using the `ImageResizer` package, and stores the thumbnail image as a blob in the `"thumbnails"` container.
+
+1. Open *Index.cshtml* in the project's **Views/Home** folder and replace its contents with the following code and markup:
+
+ ```HTML
+ @{
+ ViewBag.Title = "Intellipix Home Page";
+ }
+
+ @using Intellipix.Models
+
+ <div class="container" style="padding-top: 24px">
+ <div class="row">
+ <div class="col-sm-8">
+ @using (Html.BeginForm("Upload", "Home", FormMethod.Post, new { enctype = "multipart/form-data" }))
+ {
+ <input type="file" name="file" id="upload" style="display: none" onchange="$('#submit').click();" />
+ <input type="button" value="Upload a Photo" class="btn btn-primary btn-lg" onclick="$('#upload').click();" />
+ <input type="submit" id="submit" style="display: none" />
+ }
+ </div>
+ <div class="col-sm-4 pull-right">
+ </div>
+ </div>
+
+ <hr />
+
+ <div class="row">
+ <div class="col-sm-12">
+ @foreach (BlobInfo blob in ViewBag.Blobs)
+ {
+ <img src="@blob.ThumbnailUri" width="192" title="@blob.Caption" style="padding-right: 16px; padding-bottom: 16px" />
+ }
+ </div>
+ </div>
+ </div>
+
+ @section scripts
+ {
+ <script type="text/javascript" language="javascript">
+ if ("@TempData["Message"]" !== "") {
+ alert("@TempData["Message"]");
+ }
+ </script>
+ }
+ ```
+
+ The language used here is [Razor](https://www.asp.net/web-pages/overview/getting-started/introducing-razor-syntax-c), which lets you embed executable code in HTML markup. The ```@foreach``` statement in the middle of the file enumerates the **BlobInfo** objects passed from the controller in **ViewBag** and creates HTML ```<img>``` elements from them. The ```src``` property of each element is initialized with the URI of the blob containing the image thumbnail.
+
+1. Download and unzip the _photos.zip_ file from the [GitHub sample data repository](https://github.com/Azure-Samples/cognitive-services-sample-data-files/tree/master/ComputerVision/storage-lab-tutorial). This is an assortment of different photos you can use to test the app.
+
+1. Save your changes and press **Ctrl+F5** to launch the application in your browser. Then select **Upload a Photo** and upload one of the images you downloaded. Confirm that a thumbnail version of the photo appears on the page.
+
+ ![Intellipix with one photo uploaded](Images/one-photo-uploaded.png)
+
+1. Upload a few more images from your **photos** folder. Confirm that they appear on the page, too:
+
+ ![Intellipix with three photos uploaded](Images/three-photos-uploaded.png)
+
+1. Right-click in your browser and select **View page source** to view the source code for the page. Find the ```<img>``` elements representing the image thumbnails. Observe that the URLs assigned to the images refer directly to blobs in blob storage. This is because you set the containers' **Public access level** to **Blob**, which makes the blobs inside publicly accessible.
+
+1. Return to Azure Storage Explorer (or restart it if you didn't leave it running) and select the `"photos"` container under your storage account. The number of blobs in the container should equal the number of photos you uploaded. Double-click one of the blobs to download it and see the image stored in the blob.
+
+ ![Contents of the "photos" container](Images/photos-container.png)
+
+1. Open the `"thumbnails"` container in Storage Explorer. Open one of the blobs to view the thumbnail images generated from the image uploads.
+
+The app doesn't yet offer a way to view the original images that you uploaded. Ideally, selecting an image thumbnail should display the original image. You'll add that feature next.
+
+<a name="Exercise4"></a>
+## Add a lightbox for viewing photos
+
+In this section, you'll use a free, open-source JavaScript library to add a lightbox viewer that enables users to see the original images they've uploaded (rather than just the image thumbnails). The files are provided for you. All you have to do is integrate them into the project and make a minor modification to *Index.cshtml*.
+
+1. Download the _lightbox.css_ and _lightbox.js_ files from the [GitHub code repository](https://github.com/Azure-Samples/cognitive-services-quickstart-code/tree/master/dotnet/ComputerVision/storage-lab-tutorial).
+1. In Solution Explorer, right-click your project's **Scripts** folder and use the **Add -> New Item...** command to create a *lightbox.js* file. Paste in the contents from the example file in the [GitHub code repository](https://github.com/Azure-Samples/cognitive-services-quickstart-code/blob/master/dotnet/ComputerVision/storage-lab-tutorial/scripts/lightbox.js).
+
+1. Right-click the project's "Content" folder and use the **Add -> New Item...** command to create a *lightbox.css* file. Paste in the contents from the example file in the [GitHub code repository](https://github.com/Azure-Samples/cognitive-services-quickstart-code/blob/master/dotnet/ComputerVision/storage-lab-tutorial/css/lightbox.css).
+1. Download and unzip the _buttons.zip_ file from the GitHub data files repository: https://github.com/Azure-Samples/cognitive-services-sample-data-files/tree/master/ComputerVision/storage-lab-tutorial. You should have four button images.
+
+1. Right-click the Intellipix project in Solution Explorer and use the **Add -> New Folder** command to add a folder named "Images" to the project.
+
+1. Right-click the **Images** folder and use the **Add -> Existing Item...** command to import the four images you downloaded.
+
+1. Open *BundleConfig.cs* in the project's "App_Start" folder. Add the following statement to the ```RegisterBundles``` method in **BundleConfig.cs**:
+
+ ```C#
+ bundles.Add(new ScriptBundle("~/bundles/lightbox").Include(
+ "~/Scripts/lightbox.js"));
+ ```
+
+1. In the same method, find the statement that creates a ```StyleBundle``` from "~/Content/css" and add *lightbox.css* to the list of style sheets in the bundle. Here is the modified statement:
+
+ ```C#
+ bundles.Add(new StyleBundle("~/Content/css").Include(
+ "~/Content/bootstrap.css",
+ "~/Content/site.css",
+ "~/Content/lightbox.css"));
+ ```
+
+1. Open *_Layout.cshtml* in the project's **Views/Shared** folder and add the following statement just before the ```@RenderSection``` statement near the bottom:
+
+ ```C#
+ @Scripts.Render("~/bundles/lightbox")
+ ```
+
+1. The final task is to incorporate the lightbox viewer into the home page. To do that, open *Index.cshtml* (it's in the project's **Views/Home** folder) and replace the ```@foreach``` loop with this one:
+
+ ```HTML
+ @foreach (BlobInfo blob in ViewBag.Blobs)
+ {
+ <a href="@blob.ImageUri" rel="lightbox" title="@blob.Caption">
+ <img src="@blob.ThumbnailUri" width="192" title="@blob.Caption" style="padding-right: 16px; padding-bottom: 16px" />
+ </a>
+ }
+ ```
+
+1. Save your changes and press **Ctrl+F5** to launch the application in your browser. Then select one of the images you uploaded earlier. Confirm that a lightbox appears and shows an enlarged view of the image.
+
+ ![An enlarged image](Images/lightbox-image.png)
+
+1. Select the **X** in the lower-right corner of the lightbox to dismiss it.
+
+Now you have a way to view the images you uploaded. The next step is to do more with those images.
+
+<a name="Exercise5"></a>
+## Use Azure AI Vision to generate metadata
+
+### Create a Vision resource
+
+You'll need to create a Vision resource for your Azure account; this resource manages your access to the Azure AI Vision service.
+
+1. Follow the instructions in [Create an Azure AI services resource](../../multi-service-resource.md?pivots=azportal) to create a multi-service resource or a Vision resource.
+
+1. Then go to the menu for your resource group and select the Vision resource that you created. Copy the URL under **Endpoint** to somewhere you can easily retrieve it in a moment. Then select **Show access keys**.
+
+ ![Azure portal page with the endpoint URL and access keys link outlined](Images/copy-vision-endpoint.png)
+
+ [!INCLUDE [Custom subdomains notice](../../../../includes/cognitive-services-custom-subdomains-note.md)]
++
+1. In the next window, copy the value of **KEY 1** to the clipboard.
+
+ ![Manage keys dialog, with the copy button outlined](Images/copy-vision-key.png)
+
+### Add Azure AI Vision credentials
+
+Next, you'll add the required credentials to your app so that it can access Vision resources.
+
+Navigate to the *Web.config* file at the root of the project. Add the following statements to the `<appSettings>` section of the file, replacing `VISION_KEY` with the key you copied in the previous step, and `VISION_ENDPOINT` with the URL you saved in the step before.
+
+```xml
+<add key="SubscriptionKey" value="VISION_KEY" />
+<add key="VisionEndpoint" value="VISION_ENDPOINT" />
+```
+
+Then in the Solution Explorer, right-click the project and use the **Manage NuGet Packages** command to install the package **Microsoft.Azure.CognitiveServices.Vision.ComputerVision**. This package contains the types needed to call the Azure AI Vision API.
+
+### Add metadata generation code
+
+Next, you'll add the code that actually uses the Azure AI Vision service to create metadata for images.
+
+1. Open the *HomeController.cs* file in the project's **Controllers** folder and add the following `using` statements at the top of the file:
+
+ ```csharp
+ using Microsoft.Azure.CognitiveServices.Vision.ComputerVision;
+ using Microsoft.Azure.CognitiveServices.Vision.ComputerVision.Models;
+ ```
+
+1. Then, go to the **Upload** method; this method converts and uploads images to blob storage. Add the following code immediately after the block that begins with `// Generate a thumbnail` (or at the end of your image-blob-creation process). This code takes the blob containing the image (`photo`), and uses Azure AI Vision to generate a description for that image. The Azure AI Vision API also generates a list of keywords that apply to the image. The generated description and keywords are stored in the blob's metadata so that they can be retrieved later on.
+
+ ```csharp
+ // Submit the image to the Azure AI Vision API
+ ComputerVisionClient vision = new ComputerVisionClient(
+ new ApiKeyServiceClientCredentials(ConfigurationManager.AppSettings["SubscriptionKey"]),
+ new System.Net.Http.DelegatingHandler[] { });
+ vision.Endpoint = ConfigurationManager.AppSettings["VisionEndpoint"];
+
+ List<VisualFeatureTypes?> features = new List<VisualFeatureTypes?>() { VisualFeatureTypes.Description };
+ var result = await vision.AnalyzeImageAsync(photo.Uri.ToString(), features);
+
+ // Record the image description and tags in blob metadata
+ photo.Metadata.Add("Caption", result.Description.Captions[0].Text);
+
+ for (int i = 0; i < result.Description.Tags.Count; i++)
+ {
+ string key = String.Format("Tag{0}", i);
+ photo.Metadata.Add(key, result.Description.Tags[i]);
+ }
+
+ await photo.SetMetadataAsync();
+ ```
+
+1. Next, go to the **Index** method in the same file. This method enumerates the stored image blobs in the targeted blob container (as **IListBlobItem** instances) and passes them to the application view. Replace the `foreach` block in this method with the following code. This code calls **CloudBlockBlob.FetchAttributes** to get each blob's attached metadata. It extracts the computer-generated description (`caption`) from the metadata and adds it to the **BlobInfo** object, which gets passed to the view.
+
+ ```csharp
+ foreach (IListBlobItem item in container.ListBlobs())
+ {
+ var blob = item as CloudBlockBlob;
+
+ if (blob != null)
+ {
+ blob.FetchAttributes(); // Get blob metadata
+ var caption = blob.Metadata.ContainsKey("Caption") ? blob.Metadata["Caption"] : blob.Name;
+
+ blobs.Add(new BlobInfo()
+ {
+ ImageUri = blob.Uri.ToString(),
+ ThumbnailUri = blob.Uri.ToString().Replace("/photos/", "/thumbnails/"),
+ Caption = caption
+ });
+ }
+ }
+ ```
+
+### Test the app
+
+Save your changes in Visual Studio and press **Ctrl+F5** to launch the application in your browser. Use the app to upload a few more images, either from the photo set you downloaded or from your own folder. When you hover the cursor over one of the new images in the view, a tooltip window should appear and display the computer-generated caption for the image.
+
+![The computer-generated caption](Images/thumbnail-with-tooltip.png)
+
+To view all of the attached metadata, use Azure Storage Explorer to view the storage container you're using for images. Right-click any of the blobs in the container and select **Properties**. In the dialog, you'll see a list of key-value pairs. The computer-generated image description is stored in the item `Caption`, and the search keywords are stored in `Tag0`, `Tag1`, and so on. When you're finished, select **Cancel** to close the dialog.
+
+![Image properties dialog window, with metadata tags listed](Images/blob-metadata.png)
+
+<a name="Exercise6"></a>
+## Add search to the app
+
+In this section, you will add a search box to the home page, enabling users to do keyword searches on the images that they've uploaded. The keywords are the ones generated by the Azure AI Vision API and stored in blob metadata.
+
+1. Open *Index.cshtml* in the project's **Views/Home** folder and add the following statements to the empty ```<div>``` element with the ```class="col-sm-4 pull-right"``` attribute:
+
+ ```HTML
+ @using (Html.BeginForm("Search", "Home", FormMethod.Post, new { enctype = "multipart/form-data", @class = "navbar-form" }))
+ {
+ <div class="input-group">
+ <input type="text" class="form-control" placeholder="Search photos" name="term" value="@ViewBag.Search" style="max-width: 800px">
+ <span class="input-group-btn">
+ <button class="btn btn-primary" type="submit">
+ <i class="glyphicon glyphicon-search"></i>
+ </button>
+ </span>
+ </div>
+ }
+ ```
+
+ This code and markup adds a search box and a **Search** button to the home page.
+
+1. Open *HomeController.cs* in the project's **Controllers** folder and add the following method to the **HomeController** class:
+
+ ```C#
+ [HttpPost]
+ public ActionResult Search(string term)
+ {
+ return RedirectToAction("Index", new { id = term });
+ }
+ ```
+
+ This is the method that's called when the user selects the **Search** button added in the previous step. It refreshes the page and includes a search parameter in the URL.
+
+1. Replace the **Index** method with the following implementation:
+
+ ```C#
+ public ActionResult Index(string id)
+ {
+ // Pass a list of blob URIs and captions in ViewBag
+ CloudStorageAccount account = CloudStorageAccount.Parse(ConfigurationManager.AppSettings["StorageConnectionString"]);
+ CloudBlobClient client = account.CreateCloudBlobClient();
+ CloudBlobContainer container = client.GetContainerReference("photos");
+ List<BlobInfo> blobs = new List<BlobInfo>();
+
+ foreach (IListBlobItem item in container.ListBlobs())
+ {
+ var blob = item as CloudBlockBlob;
+
+ if (blob != null)
+ {
+ blob.FetchAttributes(); // Get blob metadata
+
+ if (String.IsNullOrEmpty(id) || HasMatchingMetadata(blob, id))
+ {
+ var caption = blob.Metadata.ContainsKey("Caption") ? blob.Metadata["Caption"] : blob.Name;
+
+ blobs.Add(new BlobInfo()
+ {
+ ImageUri = blob.Uri.ToString(),
+ ThumbnailUri = blob.Uri.ToString().Replace("/photos/", "/thumbnails/"),
+ Caption = caption
+ });
+ }
+ }
+ }
+
+ ViewBag.Blobs = blobs.ToArray();
+ ViewBag.Search = id; // Prevent search box from losing its content
+ return View();
+ }
+ ```
+
+ Observe that the **Index** method now accepts a parameter _id_ that contains the value the user typed into the search box. An empty or missing _id_ parameter indicates that all the photos should be displayed.
+
+1. Add the following helper method to the **HomeController** class:
+
+ ```C#
+ private bool HasMatchingMetadata(CloudBlockBlob blob, string term)
+ {
+ foreach (var item in blob.Metadata)
+ {
+ if (item.Key.StartsWith("Tag") && item.Value.Equals(term, StringComparison.InvariantCultureIgnoreCase))
+ return true;
+ }
+
+ return false;
+ }
+ ```
+
+ This method is called by the **Index** method to determine whether the metadata keywords attached to a given image blob contain the search term that the user entered.
+
+1. Launch the application again and upload several photos. Feel free to use your own photos, not just the ones provided with the tutorial.
+
+1. Type a keyword such as "river" into the search box. Then select the **Search** button.
+
+ ![Performing a search](Images/enter-search-term.png)
+
+1. Search results will vary depending on what you typed and what images you uploaded. But the result should be a filtered list of images whose metadata keywords include all or part of the keyword that you typed.
+
+ ![Search results](Images/search-results.png)
+
+1. Select the browser's back button to display all of the images again.
+
+You're almost finished. It's time to deploy the app to the cloud.
+
+<a name="Exercise7"></a>
+## Deploy the app to Azure
+
+In this section, you'll deploy the app to Azure from Visual Studio. You'll let Visual Studio create the Azure Web App for you, so you don't have to go into the Azure portal and create it separately.
+
+1. Right-click the project in Solution Explorer and select **Publish...** from the context menu. Make sure **Microsoft Azure App Service** and **Create New** are selected, and then select the **Publish** button.
+
+ ![Publishing the app](Images/publish-1.png)
+
+1. In the next dialog, select the "IntellipixResources" resource group under **Resource Group**. Select the **New...** button next to "App Service Plan" and create a new App Service Plan in the same location you selected for the storage account in [Create a storage account](#Exercise1), accepting the defaults everywhere else. Finish by selecting the **Create** button.
+
+ ![Creating an Azure Web App](Images/publish-2.png)
+
+1. After a few moments, the app will appear in a browser window. Note the URL in the address bar. The app is no longer running locally; it's on the Web, where it's publicly reachable.
+
+ ![The finished product!](Images/vs-intellipix.png)
+
+If you make changes to the app and want to push the changes out to the Web, go through the publish process again. You can still test your changes locally before publishing to the Web.
+
+## Clean up resources
+
+If you'd like to keep working on your web app, see the [Next steps](#next-steps) section. If you don't plan to continue using this application, you should delete all app-specific resources. To do so, delete the resource group that contains your Azure Storage account and Vision resource. This removes the storage account, the blobs uploaded to it, and the App Service resource needed to connect with the ASP.NET web app.
+
+To delete the resource group, open the **Resource groups** tab in the portal, navigate to the resource group you used for this project, and select **Delete resource group** at the top of the view. You'll be asked to type the resource group's name to confirm you want to delete it. Once deleted, a resource group can't be recovered.
+
+## Next steps
+
+There's much more that you could do to use Azure and develop your Intellipix app even further. For example, you could add support for authenticating users and deleting photos, and rather than force the user to wait for Azure AI services to process a photo following an upload, you could use [Azure Functions](https://azure.microsoft.com/services/functions/?WT.mc_id=academiccontent-github-cxa) to call the Azure AI Vision API asynchronously each time an image is added to blob storage. You could also do any number of other Image Analysis operations on the image, outlined in the overview.
+
+> [!div class="nextstepaction"]
+> [Image Analysis overview](../overview-image-analysis.md)
ai-services Computer Vision How To Install Containers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/computer-vision/computer-vision-how-to-install-containers.md
+
+ Title: Azure AI Vision 3.2 GA Read OCR container
+
+description: Use the Read 3.2 OCR containers from Azure AI Vision to extract text from images and documents, on-premises.
++++++ Last updated : 03/02/2023++
+keywords: on-premises, OCR, Docker, container
++
+# Install Azure AI Vision 3.2 GA Read OCR container
++
+Containers enable you to run the Azure AI Vision APIs in your own environment. Containers are great for specific security and data governance requirements. In this article, you'll learn how to download, install, and run the Read (OCR) container.
+
+The Read container allows you to extract printed and handwritten text from images and documents with support for JPEG, PNG, BMP, PDF, and TIFF file formats. For more information, see the [Read API how-to guide](how-to/call-read-api.md).
+
+## What's new
+The `3.2-model-2022-04-30` GA version of the Read container is available with support for [164 languages and other enhancements](./whats-new.md#may-2022). If you're an existing customer, follow the [download instructions](#get-the-container-image) to get started.
+
+The Read 3.2 OCR container is the latest GA model and provides:
+* New models for enhanced accuracy.
+* Support for multiple languages within the same document.
+* Support for a total of 164 languages. See the full list of [OCR-supported languages](./language-support.md#optical-character-recognition-ocr).
+* A single operation for both documents and images.
+* Support for larger documents and images.
+* Confidence scores.
+* Support for documents with both print and handwritten text.
+* Ability to extract text from only selected page(s) in a document.
+* Option to choose the text line output order, from the default to a more natural reading order (Latin languages only).
+* Text line classification as handwritten style or not (Latin languages only).
+
+If you're using Read 2.0 containers today, see the [migration guide](read-container-migration-guide.md) to learn about changes in the new versions.
+
+## Prerequisites
+
+You must meet the following prerequisites before using the containers:
+
+|Required|Purpose|
+|--|--|
+|Docker Engine| You need the Docker Engine installed on a [host computer](#host-computer-requirements). Docker provides packages that configure the Docker environment on [macOS](https://docs.docker.com/docker-for-mac/), [Windows](https://docs.docker.com/docker-for-windows/), and [Linux](https://docs.docker.com/engine/install/#server). For a primer on Docker and container basics, see the [Docker overview](https://docs.docker.com/engine/docker-overview/).<br><br> Docker must be configured to allow the containers to connect with and send billing data to Azure. <br><br> **On Windows**, Docker must also be configured to support Linux containers.<br><br>|
+|Familiarity with Docker | You should have a basic understanding of Docker concepts, like registries, repositories, containers, and container images, as well as knowledge of basic `docker` commands.|
+|Computer Vision resource |In order to use the container, you must have:<br><br>A **Computer Vision** resource and the associated API key and endpoint URI. Both values are available on the Overview and Keys pages for the resource and are required to start the container.<br><br>**{API_KEY}**: One of the two available resource keys on the **Keys** page<br><br>**{ENDPOINT_URI}**: The endpoint as provided on the **Overview** page|
+
+If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/cognitive-services/) before you begin.
+
+## Request approval to run the container
+
+Fill out and submit the [request form](https://aka.ms/csgate) to request approval to run the container.
+++
+### Host computer requirements
++
+### Advanced Vector Extension support
+
+The **host** computer is the computer that runs the docker container. The host *must support* [Advanced Vector Extensions](https://en.wikipedia.org/wiki/Advanced_Vector_Extensions#CPUs_with_AVX2) (AVX2). You can check for AVX2 support on Linux hosts with the following command:
+
+```console
+grep -q avx2 /proc/cpuinfo && echo AVX2 supported || echo No AVX2 support detected
+```
+
+> [!WARNING]
+> The host computer is *required* to support AVX2. The container *will not* function correctly without AVX2 support.
+
+### Container requirements and recommendations
++
+## Get the container image
+
+The Azure AI Vision Read OCR container image can be found on the `mcr.microsoft.com` container registry syndicate. It resides within the `azure-cognitive-services` repository and is named `read`. The fully qualified container image name is `mcr.microsoft.com/azure-cognitive-services/vision/read`.
+
+To use the latest version of the container, you can use the `latest` tag. You can also find a full list of [tags on the MCR](https://mcr.microsoft.com/product/azure-cognitive-services/vision/read/tags).
+
+The following container images for Read are available.
+
+| Container | Container Registry / Repository / Image Name | Tags |
+|--||--|
+| Read 3.2 GA | `mcr.microsoft.com/azure-cognitive-services/vision/read:3.2-model-2022-04-30` | latest, 3.2, 3.2-model-2022-04-30 |
+
+Use the [`docker pull`](https://docs.docker.com/engine/reference/commandline/pull/) command to download a container image.
+
+```bash
+docker pull mcr.microsoft.com/azure-cognitive-services/vision/read:3.2-model-2022-04-30
+```
++++
+## How to use the container
+
+Once the container is on the [host computer](#host-computer-requirements), use the following process to work with the container.
+
+1. [Run the container](#run-the-container), with the required billing settings. More [examples](computer-vision-resource-container-config.md) of the `docker run` command are available.
+1. [Query the container's prediction endpoint](#query-the-containers-prediction-endpoint).
+
+## Run the container
+
+Use the [docker run](https://docs.docker.com/engine/reference/commandline/run/) command to run the container. Refer to [gather required parameters](#gather-required-parameters) for details on how to get the `{ENDPOINT_URI}` and `{API_KEY}` values.
+
+[Examples](computer-vision-resource-container-config.md#example-docker-run-commands) of the `docker run` command are available.
+
+```bash
+docker run --rm -it -p 5000:5000 --memory 16g --cpus 8 \
+mcr.microsoft.com/azure-cognitive-services/vision/read:3.2-model-2022-04-30 \
+Eula=accept \
+Billing={ENDPOINT_URI} \
+ApiKey={API_KEY}
+```
+
+The above command:
+
+* Runs the Read OCR latest GA container from the container image.
+* Allocates 8 CPU cores and 16 gigabytes (GB) of memory.
+* Exposes TCP port 5000 and allocates a pseudo-TTY for the container.
+* Automatically removes the container after it exits. The container image is still available on the host computer.
+
+You can alternatively run the container using environment variables:
+
+```bash
+docker run --rm -it -p 5000:5000 --memory 16g --cpus 8 \
+--env Eula=accept \
+--env Billing={ENDPOINT_URI} \
+--env ApiKey={API_KEY} \
+mcr.microsoft.com/azure-cognitive-services/vision/read:3.2-model-2022-04-30
+```
++
+More [examples](./computer-vision-resource-container-config.md#example-docker-run-commands) of the `docker run` command are available.
+
+> [!IMPORTANT]
+> The `Eula`, `Billing`, and `ApiKey` options must be specified to run the container; otherwise, the container won't start. For more information, see [Billing](#billing).
+
+If you need higher throughput (for example, when processing multi-page files), consider deploying multiple containers [on a Kubernetes cluster](deploy-computer-vision-on-premises.md), using [Azure Storage](../../storage/common/storage-account-create.md) and [Azure Queue](../../storage/queues/storage-queues-introduction.md).
+
+If you're using Azure Storage to store images for processing, you can create a [connection string](../../storage/common/storage-configure-connection-string.md) to use when calling the container.
+
+To find your connection string:
+
+1. Navigate to **Storage accounts** on the Azure portal, and find your account.
+2. Select **Access keys** in the left navigation list.
+3. Your connection string is located below **Connection string**.
++
+<!-- ## Validate container is running -->
++
+## Query the container's prediction endpoint
+
+The container provides REST-based query prediction endpoint APIs.
+
+Use the host, `http://localhost:5000`, for container APIs. You can view the Swagger path at: `http://localhost:5000/swagger/`.
+
+### Asynchronous Read
+
+You can use the `POST /vision/v3.2/read/analyze` and `GET /vision/v3.2/read/operations/{operationId}` operations in concert to asynchronously read an image, similar to how the Azure AI Vision service uses those corresponding REST operations. The asynchronous POST method will return an `operationId` that is used as the identifier to the HTTP GET request.
+
+From the Swagger UI, select `Analyze` to expand it in the browser. Then select **Try it out** > **Choose file**. In this example, we'll use the following image:
+
+![tabs vs spaces](media/tabs-vs-spaces.png)
+
+When the asynchronous POST has run successfully, it returns an **HTTP 202** status code. As part of the response, there is an `operation-location` header that holds the result endpoint for the request.
+
+```http
+ content-length: 0
+ date: Fri, 04 Sep 2020 16:23:01 GMT
+ operation-location: http://localhost:5000/vision/v3.2/read/operations/a527d445-8a74-4482-8cb3-c98a65ec7ef9
+ server: Kestrel
+```
+
+The `operation-location` is the fully qualified URL and is accessed via an HTTP GET. Here is the JSON response from executing the `operation-location` URL from the preceding image:
+
+```json
+{
+ "status": "succeeded",
+ "createdDateTime": "2021-02-04T06:32:08.2752706+00:00",
+ "lastUpdatedDateTime": "2021-02-04T06:32:08.7706172+00:00",
+ "analyzeResult": {
+ "version": "3.2.0",
+ "readResults": [
+ {
+ "page": 1,
+ "angle": 2.1243,
+ "width": 502,
+ "height": 252,
+ "unit": "pixel",
+ "lines": [
+ {
+ "boundingBox": [
+ 58,
+ 42,
+ 314,
+ 59,
+ 311,
+ 123,
+ 56,
+ 121
+ ],
+ "text": "Tabs vs",
+ "appearance": {
+ "style": {
+ "name": "handwriting",
+ "confidence": 0.96
+ }
+ },
+ "words": [
+ {
+ "boundingBox": [
+ 68,
+ 44,
+ 225,
+ 59,
+ 224,
+ 122,
+ 66,
+ 123
+ ],
+ "text": "Tabs",
+ "confidence": 0.933
+ },
+ {
+ "boundingBox": [
+ 241,
+ 61,
+ 314,
+ 72,
+ 314,
+ 123,
+ 239,
+ 122
+ ],
+ "text": "vs",
+ "confidence": 0.977
+ }
+ ]
+ },
+ {
+ "boundingBox": [
+ 286,
+ 171,
+ 415,
+ 165,
+ 417,
+ 197,
+ 287,
+ 201
+ ],
+ "text": "paces",
+ "appearance": {
+ "style": {
+ "name": "handwriting",
+ "confidence": 0.746
+ }
+ },
+ "words": [
+ {
+ "boundingBox": [
+ 286,
+ 179,
+ 404,
+ 166,
+ 405,
+ 198,
+ 290,
+ 201
+ ],
+ "text": "paces",
+ "confidence": 0.938
+ }
+ ]
+ }
+ ]
+ }
+ ]
+ }
+}
+```
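+
+If you want to drive this flow from your own code instead of the Swagger UI, the following minimal C# sketch shows the POST-then-poll pattern against the container. It's an illustrative sketch, not an official sample: the host address, image path, and one-second polling interval are assumptions, and the status values checked here follow the public Read API reference.
+
+```csharp
+using System;
+using System.IO;
+using System.Linq;
+using System.Net.Http;
+using System.Net.Http.Headers;
+using System.Text.Json;
+using System.Threading.Tasks;
+
+class ReadContainerClient
+{
+    static async Task Main()
+    {
+        var host = "http://localhost:5000";      // assumed container address
+        var imagePath = "tabs-vs-spaces.png";    // assumed local image file
+
+        using var http = new HttpClient();
+
+        // 1. Submit the image bytes to the asynchronous Read operation.
+        using var content = new ByteArrayContent(await File.ReadAllBytesAsync(imagePath));
+        content.Headers.ContentType = new MediaTypeHeaderValue("application/octet-stream");
+        var post = await http.PostAsync($"{host}/vision/v3.2/read/analyze", content);
+        post.EnsureSuccessStatusCode();          // expect HTTP 202
+
+        // 2. The operation-location header holds the URL to poll for the result.
+        var operationLocation = post.Headers.GetValues("Operation-Location").First();
+
+        // 3. Poll until the operation is no longer running.
+        string status, body;
+        do
+        {
+            await Task.Delay(TimeSpan.FromSeconds(1));
+            body = await http.GetStringAsync(operationLocation);
+            using var doc = JsonDocument.Parse(body);
+            status = doc.RootElement.GetProperty("status").GetString();
+        } while (status == "notStarted" || status == "running");
+
+        Console.WriteLine(body);                 // JSON in the shape shown above
+    }
+}
+```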
++
+> [!IMPORTANT]
+> If you deploy multiple Read OCR containers behind a load balancer, for example, under Docker Compose or Kubernetes, you must have an external cache. Because the processing container and the GET request container might not be the same, an external cache stores the results and shares them across containers. For details about cache settings, see [Configure Azure AI Vision Docker containers](./computer-vision-resource-container-config.md).
+
+### Synchronous read
+
+You can use the following operation to synchronously read an image.
+
+`POST /vision/v3.2/read/syncAnalyze`
+
+When the image is read in its entirety, then and only then does the API return a JSON response. The only exception to this behavior is if an error occurs. If an error occurs, the following JSON is returned:
+
+```json
+{
+ "status": "Failed"
+}
+```
+
+The JSON response object has the same object graph as the asynchronous version. If you're a JavaScript user and want type safety, consider using TypeScript to cast the JSON response.
+
+For an example use-case, see the <a href="https://aka.ms/ts-read-api-types" target="_blank" rel="noopener noreferrer">TypeScript sandbox here </a> and select **Run** to visualize its ease-of-use.
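+
+As a rough C# sketch of the synchronous call (again assuming the container is listening on `http://localhost:5000` and a local image file is available; this isn't an official sample), you post the image and read the result directly from the response body, with no polling step:
+
+```csharp
+using System;
+using System.IO;
+using System.Net.Http;
+using System.Net.Http.Headers;
+using System.Threading.Tasks;
+
+class SyncReadExample
+{
+    static async Task Main()
+    {
+        using var http = new HttpClient();
+
+        // Assumed local image path; replace with your own file.
+        using var content = new ByteArrayContent(await File.ReadAllBytesAsync("tabs-vs-spaces.png"));
+        content.Headers.ContentType = new MediaTypeHeaderValue("application/octet-stream");
+
+        // The synchronous endpoint returns only after the whole image has been read.
+        var response = await http.PostAsync("http://localhost:5000/vision/v3.2/read/syncAnalyze", content);
+        response.EnsureSuccessStatusCode();
+
+        // Same JSON object graph as the asynchronous result shown earlier.
+        Console.WriteLine(await response.Content.ReadAsStringAsync());
+    }
+}
+```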
+
+## Run the container disconnected from the internet
++
+## Stop the container
++
+## Troubleshooting
+
+If you run the container with an output [mount](./computer-vision-resource-container-config.md#mount-settings) and logging enabled, the container generates log files that are helpful to troubleshoot issues that happen while starting or running the container.
+++
+## Billing
+
+The Azure AI services containers send billing information to Azure, using the corresponding resource on your Azure account.
++
+For more information about these options, see [Configure containers](./computer-vision-resource-container-config.md).
+
+## Summary
+
+In this article, you learned concepts and workflow for downloading, installing, and running Azure AI Vision containers. In summary:
+
+* Azure AI Vision provides a Linux container for Docker, encapsulating Read.
+* The read container image requires an application to run it.
+* Container images run in Docker.
+* You can use either the REST API or SDK to call operations in Read OCR containers by specifying the host URI of the container.
+* You must specify billing information when instantiating a container.
+
+> [!IMPORTANT]
+> Azure AI services containers are not licensed to run without being connected to Azure for metering. Customers need to enable the containers to communicate billing information with the metering service at all times. Azure AI services containers do not send customer data (for example, the image or text that is being analyzed) to Microsoft.
+
+## Next steps
+
+* Review [Configure containers](computer-vision-resource-container-config.md) for configuration settings
+* Review the [OCR overview](overview-ocr.md) to learn more about recognizing printed and handwritten text
+* Refer to the [Read API](https://westus.dev.cognitive.microsoft.com/docs/services/computer-vision-v3-2/operations/5d986960601faab4bf452005) for details about the methods supported by the container.
+* Refer to [Frequently asked questions (FAQ)](FAQ.yml) to resolve issues related to Azure AI Vision functionality.
+* Use more [Azure AI services Containers](../cognitive-services-container-support.md)
ai-services Computer Vision Resource Container Config https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/computer-vision/computer-vision-resource-container-config.md
+
+ Title: Configure Read OCR containers - Azure AI Vision
+
+description: This article shows you how to configure both required and optional settings for Read OCR containers in Azure AI Vision.
++++++ Last updated : 04/09/2021++++
+# Configure Read OCR Docker containers
+
+You configure the Azure AI Vision Read OCR container's runtime environment by using the `docker run` command arguments. This container has several required settings, along with a few optional settings. Several [examples](#example-docker-run-commands) of the command are available. The container-specific settings are the billing settings.
+
+## Configuration settings
++
+> [!IMPORTANT]
+> The [`ApiKey`](#apikey-configuration-setting), [`Billing`](#billing-configuration-setting), and [`Eula`](#eula-setting) settings are used together, and you must provide valid values for all three of them; otherwise your container won't start. For more information about using these configuration settings to instantiate a container, see [Billing](computer-vision-how-to-install-containers.md).
+
+The container also has the following container-specific configuration settings:
+
+|Required|Setting|Purpose|
+|--|--|--|
+|No|ReadEngineConfig:ResultExpirationPeriod| v2.0 containers only. Result expiration period in hours. The default is 48 hours. The setting specifies when the system should clear recognition results. For example, if `resultExpirationPeriod=1`, the system clears the recognition result 1 hour after the process. If `resultExpirationPeriod=0`, the system clears the recognition result after the result is retrieved.|
+|No|Cache:Redis| v2.0 containers only. Enables Redis storage for storing results. A cache is *required* if multiple read OCR containers are placed behind a load balancer.|
+|No|Queue:RabbitMQ|v2.0 containers only. Enables RabbitMQ for dispatching tasks. The setting is useful when multiple read OCR containers are placed behind a load balancer.|
+|No|Queue:Azure:QueueVisibilityTimeoutInMilliseconds | v3.x containers only. The time for a message to be invisible when another worker is processing it. |
+|No|Storage::DocumentStore::MongoDB|v2.0 containers only. Enables MongoDB for permanent result storage. |
+|No|Storage:ObjectStore:AzureBlob:ConnectionString| v3.x containers only. Azure blob storage connection string. |
+|No|Storage:TimeToLiveInDays| v3.x containers only. Result expiration period in days. The setting specifies when the system should clear recognition results. The default is 2 days, which means any result that lives longer than that period isn't guaranteed to be successfully retrieved. The value is an integer between 1 and 7 days.|
+|No|StorageTimeToLiveInMinutes| v3.2-model-2021-09-30-preview and newer containers. Result expiration period in minutes. The setting specifies when the system should clear recognition results. The default is 2 days (2880 minutes), which means any result that lives longer than that period isn't guaranteed to be successfully retrieved. The value is an integer between 60 minutes and 7 days (10080 minutes).|
+|No|Task:MaxRunningTimeSpanInMinutes| v3.x containers only. Maximum running time for a single request. The default is 60 minutes. |
+|No|EnableSyncNTPServer| v3.x containers only, except for v3.2-model-2021-09-30-preview and newer containers. Enables the NTP server synchronization mechanism, which ensures synchronization between the system time and expected task runtime. Note that this requires external network traffic. The default is `true`. |
+|No|NTPServerAddress| v3.x containers only, except for v3.2-model-2021-09-30-preview and newer containers. NTP server for the time sync-up. The default is `time.windows.com`. |
+|No|Mounts:Shared| v3.x containers only. Local folder for storing recognition result. The default is `/share`. For running container without using Azure blob storage, we recommend mounting a volume to this folder to ensure you have enough space for the recognition results. |
+
+## ApiKey configuration setting
+
+The `ApiKey` setting specifies the Vision resource key used to track billing information for the container. You must specify a value for the ApiKey and the value must be a valid key for the Vision resource specified for the [`Billing`](#billing-configuration-setting) configuration setting.
+
+This setting can be found in the following place:
+
+* Azure portal: **Azure AI services** Resource Management, under **Keys**
+
+## ApplicationInsights setting
++
+## Billing configuration setting
+
+The `Billing` setting specifies the endpoint URI of the _Azure AI services_ resource on Azure used to meter billing information for the container. You must specify a value for this configuration setting, and the value must be a valid endpoint URI for an _Azure AI services_ resource on Azure. The container reports usage about every 10 to 15 minutes.
+
+This setting can be found in the following place:
+
+* Azure portal: **Azure AI services** Overview, labeled `Endpoint`
+
+Remember to add the `vision/<version>` routing to the endpoint URI as shown in the following table.
+
+|Required| Name | Data type | Description |
+|--||--|-|
+|Yes| `Billing` | String | Billing endpoint URI<br><br>Example:<br>`Billing=https://westcentralus.api.cognitive.microsoft.com/vision/v3.2` |
+
+## Eula setting
++
+## Fluentd settings
++
+## HTTP proxy credentials settings
++
+## Logging settings
+
+
+## Mount settings
+
+Use bind mounts to read and write data to and from the container. You can specify an input mount or output mount by specifying the `--mount` option in the [docker run](https://docs.docker.com/engine/reference/commandline/run/) command.
+
+The Azure AI Vision containers don't use input or output mounts to store training or service data.
+
+The exact syntax of the host mount location varies depending on the host operating system. Additionally, the [host computer](computer-vision-how-to-install-containers.md#host-computer-requirements)'s mount location may not be accessible due to a conflict between permissions used by the Docker service account and the host mount location permissions.
+
+|Optional| Name | Data type | Description |
+|-||--|-|
+|Not allowed| `Input` | String | Azure AI Vision containers do not use this.|
+|Optional| `Output` | String | The target of the output mount. The default value is `/output`. This is the location of the logs, including container logs. <br><br>Example:<br>`--mount type=bind,src=c:\output,target=/output`|
+
+## Example docker run commands
+
+The following examples use the configuration settings to illustrate how to write and use `docker run` commands. Once running, the container continues to run until you [stop](computer-vision-how-to-install-containers.md#stop-the-container) it.
+
+* **Line-continuation character**: The Docker commands in the following sections use the backslash, `\`, as a line continuation character. Replace or remove it based on your host operating system's requirements.
+* **Argument order**: Do not change the order of the arguments unless you are very familiar with Docker containers.
+
+Replace {_argument_name_} with your own values:
+
+| Placeholder | Value | Format or example |
+|-|-||
+| **{API_KEY}** | The endpoint key of the Vision resource on the resource keys page. | `xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx` |
+| **{ENDPOINT_URI}** | The billing endpoint value is available on the resource overview page.| See [gather required parameters](computer-vision-how-to-install-containers.md#gather-required-parameters) for explicit examples. |
++
+> [!IMPORTANT]
+> The `Eula`, `Billing`, and `ApiKey` options must be specified to run the container; otherwise, the container won't start. For more information, see [Billing](computer-vision-how-to-install-containers.md#billing).
+> The ApiKey value is the **Key** from the Vision resource keys page.
+
+## Container Docker examples
+
+The following Docker examples are for the Read OCR container.
++
+# [Version 3.2](#tab/version-3-2)
+
+### Basic example
+
+```bash
+docker run --rm -it -p 5000:5000 --memory 16g --cpus 8 \
+mcr.microsoft.com/azure-cognitive-services/vision/read:3.2-model-2022-04-30 \
+Eula=accept \
+Billing={ENDPOINT_URI} \
+ApiKey={API_KEY}
+
+```
+
+### Logging example
+
+```bash
+docker run --rm -it -p 5000:5000 --memory 16g --cpus 8 \
+mcr.microsoft.com/azure-cognitive-services/vision/read:3.2-model-2022-04-30 \
+Eula=accept \
+Billing={ENDPOINT_URI} \
+ApiKey={API_KEY} \
+Logging:Console:LogLevel:Default=Information
+```
+
+# [Version 2.0-preview](#tab/version-2)
+
+### Basic example
+
+```bash
+docker run --rm -it -p 5000:5000 --memory 16g --cpus 8 \
+mcr.microsoft.com/azure-cognitive-services/vision/read:2.0-preview \
+Eula=accept \
+Billing={ENDPOINT_URI} \
+ApiKey={API_KEY}
+
+```
+
+### Logging example
+
+```bash
+docker run --rm -it -p 5000:5000 --memory 16g --cpus 8 \
+mcr.microsoft.com/azure-cognitive-services/vision/read:2.0-preview \
+Eula=accept \
+Billing={ENDPOINT_URI} \
+ApiKey={API_KEY} \
+Logging:Console:LogLevel:Default=Information
+```
+++
+## Next steps
+
+* Review [How to install and run containers](computer-vision-how-to-install-containers.md).
ai-services Concept Background Removal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/computer-vision/concept-background-removal.md
+
+ Title: Background removal - Image Analysis
+
+description: Learn about background removal, an operation of Image Analysis
+ Last updated : 03/02/2023
+# Background removal (version 4.0 preview)
+
+The Image Analysis service can divide images into multiple segments or regions to help the user identify different objects or parts of the image. Background removal creates an alpha matte that separates the foreground object from the background in an image.
+
+> [!div class="nextstepaction"]
+> [Call the Background removal API](./how-to/background-removal.md)
+
+This feature provides two possible outputs based on the customer's needs:
+
+- The foreground object of the image without the background. This edited image shows the foreground object and makes the background transparent, allowing the foreground to be placed on a new background.
+- An alpha matte that shows the opacity of the detected foreground object. This matte can be used to separate the foreground object from the background for further processing.
+
+This service is currently in preview, and the API may change in the future.
+
+## Background removal examples
+
+The following example images illustrate what the Image Analysis service returns when removing the background of an image and creating an alpha matte.
++
+|Original image |With background removed |Alpha matte |
+|::|::|::|
+| :::image type="content" source="media/background-removal/building-1.png" alt-text="Photo of a city near water."::: | :::image type="content" source="media/background-removal/building-1-result.png" alt-text="Photo of a city near water; sky is transparent."::: | :::image type="content" source="media/background-removal/building-1-matte.png" alt-text="Alpha matte of a city skyline."::: |
+| :::image type="content" source="media/background-removal/person-5.png" alt-text="Photo of a group of people using a tablet."::: | :::image type="content" source="media/background-removal/person-5-result.png" alt-text="Photo of a group of people using a tablet; background is transparent."::: | :::image type="content" source="media/background-removal/person-5-matte.png" alt-text="Alpha matte of a group of people."::: |
+| :::image type="content" source="media/background-removal/bears.png" alt-text="Photo of a group of bears in the woods."::: | :::image type="content" source="media/background-removal/bears-result.png" alt-text="Photo of a group of bears; background is transparent."::: | :::image type="content" source="media/background-removal/bears-alpha.png" alt-text="Alpha matte of a group of bears."::: |
++
+## Limitations
+
+It's important to note the limitations of background removal:
+
+* Background removal works best for categories such as people and animals, buildings and environmental structures, furniture, vehicles, food, text and graphics, and personal belongings.
+* Objects that aren't prominent in the foreground may not be identified as part of the foreground.
+* Images with thin and detailed structures, like hair or fur, may show some artifacts when overlaid on backgrounds with strong contrast to the original background.
+* The latency of the background removal operation is higher for large images, up to several seconds. We suggest you experiment with integrating both output modes into your workflow to find the best usage for your needs (for instance, calling background removal on the original image versus calling foreground matting on a downsampled version of the image, then resizing the alpha matte to the original size and applying it to the original image).
+
+## Use the API
+
+The background removal feature is available through the [Segment](https://centraluseuap.dev.cognitive.microsoft.com/docs/services/unified-vision-apis-public-preview-2023-02-01-preview/operations/63e6b6d9217d201194bbecbd) API (`imageanalysis:segment`). You can call this API through the REST API or the Vision SDK. See the [Background removal how-to guide](./how-to/background-removal.md) for more information.
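+
+The following is a minimal REST sketch, assuming the `2023-02-01-preview` API version and a `mode` query parameter of either `backgroundRemoval` or `foregroundMatting`; the endpoint, key, and image URL are placeholders. See the how-to guide linked above for the authoritative request format.
+
+```bash
+# Sketch: call the Segment API for background removal.
+# Swap mode=foregroundMatting to receive the alpha matte instead of the cut-out foreground.
+curl -X POST "https://<endpoint>/computervision/imageanalysis:segment?api-version=2023-02-01-preview&mode=backgroundRemoval" \
+  -H "Ocp-Apim-Subscription-Key: <subscription-key>" \
+  -H "Content-Type: application/json" \
+  -d '{"url":"https://example.com/image.jpg"}' \
+  --output foreground.png
+```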
+
+## Next steps
+
+* [Call the background removal API](./how-to/background-removal.md)
ai-services Concept Brand Detection https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/computer-vision/concept-brand-detection.md
+
+ Title: Brand detection - Azure AI Vision
+
+description: Learn about brand and logo detection, a specialized mode of object detection, using the Azure AI Vision API.
+ Last updated : 07/05/2022
+# Brand detection
+
+Brand detection is a specialized mode of [object detection](concept-object-detection.md) that uses a database of thousands of global logos to identify commercial brands in images or video. You can use this feature, for example, to discover which brands are most popular on social media or most prevalent in media product placement.
+
+The Azure AI Vision service detects whether there are brand logos in a given image; if there are, it returns the brand name, a confidence score, and the coordinates of a bounding box around the logo.
+
+The built-in logo database covers popular brands in consumer electronics, clothing, and more. If you find that the brand you're looking for is not detected by the Azure AI Vision service, you could also try creating and training your own logo detector using the [Custom Vision](../custom-vision-service/index.yml) service.
+
+## Brand detection example
+
+The following JSON responses illustrate what Azure AI Vision returns when detecting brands in the example images.
+
+![A red shirt with a Microsoft label and logo on it](./Images/red-shirt-logo.jpg)
+
+```json
+"brands":[
+ {
+ "name":"Microsoft",
+ "rectangle":{
+ "x":20,
+ "y":97,
+ "w":62,
+ "h":52
+ }
+ }
+]
+```
+
+In some cases, the brand detector will pick up both the logo image and the stylized brand name as two separate logos.
+
+![A gray sweatshirt with a Microsoft label and logo on it](./Images/gray-shirt-logo.jpg)
+
+```json
+"brands":[
+ {
+ "name":"Microsoft",
+ "rectangle":{
+ "x":58,
+ "y":106,
+ "w":55,
+ "h":46
+ }
+ },
+ {
+ "name":"Microsoft",
+ "rectangle":{
+ "x":58,
+ "y":86,
+ "w":202,
+ "h":63
+ }
+ }
+]
+```
+
+## Use the API
+
+The brand detection feature is part of the [Analyze Image](https://westcentralus.dev.cognitive.microsoft.com/docs/services/computer-vision-v3-2/operations/56f91f2e778daf14a499f21b) API. You can call this API through a native SDK or through REST calls. Include `Brands` in the **visualFeatures** query parameter. Then, when you get the full JSON response, simply parse the string for the contents of the `"brands"` section.
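+
+As a rough sketch (the endpoint, key, and image URL are placeholders, and piping to `jq` is optional), the equivalent REST request might look like the following:
+
+```bash
+# Sketch: request brand detection from the Analyze Image 3.2 API and print the "brands" section.
+curl -s -X POST "https://<endpoint>/vision/v3.2/analyze?visualFeatures=Brands" \
+  -H "Ocp-Apim-Subscription-Key: <subscription-key>" \
+  -H "Content-Type: application/json" \
+  -d '{"url":"https://example.com/red-shirt-logo.jpg"}' | jq '.brands'
+```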
+
+* [Quickstart: Vision REST API or client libraries](./quickstarts-sdk/image-analysis-client-library.md?pivots=programming-language-csharp)
ai-services Concept Categorizing Images https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/computer-vision/concept-categorizing-images.md
+
+ Title: Image categorization - Azure AI Vision
+
+description: Learn concepts related to the image categorization feature of the Image Analysis API.
+ Last updated : 07/05/2022
+# Image categorization
+
+In addition to tags and a description, Image Analysis can return the taxonomy-based categories detected in an image. Unlike tags, categories are organized in a parent/child hierarchy, and there are fewer of them (86, as opposed to thousands of tags). All category names are in English. Categorization can be done by itself or alongside the newer tags model.
+
+## The 86-category hierarchy
+
+Azure AI Vision can categorize an image broadly or specifically, using the list of 86 categories in the following diagram. For the full taxonomy in text format, see [Category Taxonomy](category-taxonomy.md).
+
+![Grouped lists of all the categories in the category taxonomy](./Images/analyze_categories-v2.png)
+
+## Image categorization examples
+
+The following JSON response illustrates what Azure AI Vision returns when categorizing the example image based on its visual features.
+
+![A woman on the roof of an apartment building](./Images/woman_roof.png)
+
+```json
+{
+ "categories": [
+ {
+ "name": "people_",
+ "score": 0.81640625
+ }
+ ],
+ "requestId": "bae7f76a-1cc7-4479-8d29-48a694974705",
+ "metadata": {
+ "height": 200,
+ "width": 300,
+ "format": "Jpeg"
+ }
+}
+```
+
+The following table illustrates a typical image set and the category returned by Azure AI Vision for each image.
+
+| Image | Category |
+|-|-|
+| ![Four people posed together as a family](./Images/family_photo.png) | people_group |
+| ![A puppy sitting in a grassy field](./Images/cute_dog.png) | animal_dog |
+| ![A person standing on a mountain rock at sunset](./Images/mountain_vista.png) | outdoor_mountain |
+| ![A pile of bread rolls on a table](./Images/bread.png) | food_bread |
+
+## Use the API
+
+The categorization feature is part of the [Analyze Image](https://westcentralus.dev.cognitive.microsoft.com/docs/services/computer-vision-v3-2/operations/56f91f2e778daf14a499f21b) API. You can call this API through a native SDK or through REST calls. Include `Categories` in the **visualFeatures** query parameter. Then, when you get the full JSON response, simply parse the string for the contents of the `"categories"` section.
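+
+A comparable REST sketch, again with placeholder endpoint, key, and image URL values:
+
+```bash
+# Sketch: request the taxonomy-based categories for an image.
+curl -s -X POST "https://<endpoint>/vision/v3.2/analyze?visualFeatures=Categories" \
+  -H "Ocp-Apim-Subscription-Key: <subscription-key>" \
+  -H "Content-Type: application/json" \
+  -d '{"url":"https://example.com/woman-roof.png"}'
+```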
+
+* [Quickstart: Vision REST API or client libraries](./quickstarts-sdk/image-analysis-client-library.md?pivots=programming-language-csharp)
+
+## Next steps
+
+Learn the related concepts of [tagging images](concept-tagging-images.md) and [describing images](concept-describing-images.md).
ai-services Concept Describe Images 40 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/computer-vision/concept-describe-images-40.md
+
+ Title: Image captions - Image Analysis 4.0
+
+description: Concepts related to the image captioning feature of the Image Analysis 4.0 API.
+ Last updated : 01/24/2023
+# Image captions (version 4.0 preview)
+Image captions in Image Analysis 4.0 (preview) are available through the **Caption** and **Dense Captions** features.
+
+Caption generates a one-sentence description of the entire image. Dense Captions provides more detail by generating one-sentence descriptions of up to 10 regions of the image in addition to describing the whole image. Dense Captions also returns the bounding box coordinates of the described image regions. Both features use the latest Florence-based AI models.
+
+At this time, image captioning is available in English language only.
+
+### Gender-neutral captions
+By default, captions contain the gender terms "man", "woman", "boy", and "girl". You have the option to replace these terms with "person" in your results and receive gender-neutral captions. To do so, set the optional API request parameter **gender-neutral-caption** to `true` in the request URL.
+
+> [!IMPORTANT]
+> Image captioning in Image Analysis 4.0 is only available in the following Azure data center regions at this time: East US, France Central, Korea Central, North Europe, Southeast Asia, West Europe, West US, East Asia. You must use a Vision resource located in one of these regions to get results from Caption and Dense Captions features.
+>
+> If you have to use a Vision resource outside these regions to generate image captions, please use [Image Analysis 3.2](concept-describing-images.md) which is available in all Azure AI Vision regions.
+
+Try out the image captioning features quickly and easily in your browser using Vision Studio.
+
+> [!div class="nextstepaction"]
+> [Try Vision Studio](https://portal.vision.cognitive.azure.com/)
+
+## Caption and Dense Captions examples
+
+#### [Caption](#tab/image)
+
+The following JSON response illustrates what the Analysis 4.0 API returns when describing the example image based on its visual features.
+
+![Photo of a man pointing at a screen](./Media/quickstarts/presentation.png)
+
+```json
+"captions": [
+ {
+ "text": "a man pointing at a screen",
+ "confidence": 0.4891590476036072
+ }
+]
+```
+
+#### [Dense Captions](#tab/dense)
+
+The following JSON response illustrates what the Analysis 4.0 API returns when generating dense captions for the example image.
+
+![Photo of a tractor on a farm](./Images/farm.png)
+
+```json
+{
+ "denseCaptionsResult": {
+ "values": [
+ {
+ "text": "a man driving a tractor in a farm",
+ "confidence": 0.535620927810669,
+ "boundingBox": {
+ "x": 0,
+ "y": 0,
+ "w": 850,
+ "h": 567
+ }
+ },
+ {
+ "text": "a man driving a tractor in a field",
+ "confidence": 0.5428450107574463,
+ "boundingBox": {
+ "x": 132,
+ "y": 266,
+ "w": 209,
+ "h": 219
+ }
+ },
+ {
+ "text": "a blurry image of a tree",
+ "confidence": 0.5139822363853455,
+ "boundingBox": {
+ "x": 147,
+ "y": 126,
+ "w": 76,
+ "h": 131
+ }
+ },
+ {
+ "text": "a man riding a tractor",
+ "confidence": 0.4799223840236664,
+ "boundingBox": {
+ "x": 206,
+ "y": 264,
+ "w": 64,
+ "h": 97
+ }
+ },
+ {
+ "text": "a blue sky above a hill",
+ "confidence": 0.35495415329933167,
+ "boundingBox": {
+ "x": 0,
+ "y": 0,
+ "w": 837,
+ "h": 166
+ }
+ },
+ {
+ "text": "a tractor in a field",
+ "confidence": 0.47338250279426575,
+ "boundingBox": {
+ "x": 0,
+ "y": 243,
+ "w": 838,
+ "h": 311
+ }
+ }
+ ]
+ },
+ "modelVersion": "2023-02-01-preview",
+ "metadata": {
+ "width": 850,
+ "height": 567
+ }
+}
+```
+++
+## Use the API
+
+#### [Image captions](#tab/image)
+
+The image captioning feature is part of the [Analyze Image](https://aka.ms/vision-4-0-ref) API. Include `Caption` in the **features** query parameter. Then, when you get the full JSON response, parse the string for the contents of the `"captionResult"` section.
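+
+As an illustrative sketch only (the endpoint, key, and image URL are placeholders, and the API version shown is an assumption based on the current preview), a request that also asks for gender-neutral captions might look like this:
+
+```bash
+# Sketch: request a caption from the Image Analysis 4.0 preview API,
+# with the optional gender-neutral-caption parameter enabled.
+curl -s -X POST "https://<endpoint>/computervision/imageanalysis:analyze?api-version=2023-02-01-preview&features=caption&gender-neutral-caption=true" \
+  -H "Ocp-Apim-Subscription-Key: <subscription-key>" \
+  -H "Content-Type: application/json" \
+  -d '{"url":"https://example.com/presentation.png"}'
+```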
+
+#### [Dense captions](#tab/dense)
+
+The dense captioning feature is part of the [Analyze Image](https://aka.ms/vision-4-0-ref) API. You can call this API using REST. Include `denseCaptions` in the **features** query parameter. Then, when you get the full JSON response, parse the string for the contents of the `"denseCaptionsResult"` section.
+++
+## Next steps
+
+* Learn the related concept of [object detection](concept-object-detection-40.md).
+* [Quickstart: Image Analysis REST API or client libraries](./quickstarts-sdk/image-analysis-client-library-40.md?pivots=programming-language-csharp)
+* [Call the Analyze Image API](./how-to/call-analyze-image-40.md)
ai-services Concept Describing Images https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/computer-vision/concept-describing-images.md
+
+ Title: Image descriptions - Azure AI Vision
+
+description: Concepts related to the image description feature of the Azure AI Vision API.
+ Last updated : 07/04/2023
+# Image descriptions
+
+Azure AI Vision can analyze an image and generate a human-readable phrase that describes its contents. The algorithm returns several descriptions based on different visual features, and each description is given a confidence score. The final output is a list of descriptions ordered from highest to lowest confidence.
+
+At this time, English is the only supported language for image description.
+
+Try out the image captioning features quickly and easily in your browser using Vision Studio.
+
+> [!div class="nextstepaction"]
+> [Try Vision Studio](https://portal.vision.cognitive.azure.com/)
+
+## Image description example
+
+The following JSON response illustrates what the Analyze API returns when describing the example image based on its visual features.
+
+![A black and white picture of buildings in Manhattan](./Images/bw_buildings.png)
+
+```json
+{
+ "description":{
+ "tags":[
+ "outdoor",
+ "city",
+ "white"
+ ],
+ "captions":[
+ {
+ "text":"a city with tall buildings",
+ "confidence":0.48468858003616333
+ }
+ ]
+ },
+ "requestId":"7e5e5cac-ef16-43ca-a0c4-02bd49d379e9",
+ "metadata":{
+ "height":300,
+ "width":239,
+ "format":"Png"
+ },
+ "modelVersion":"2021-05-01"
+}
+```
+
+## Use the API
+
+The image description feature is part of the [Analyze Image](https://westcentralus.dev.cognitive.microsoft.com/docs/services/computer-vision-v3-2/operations/56f91f2e778daf14a499f21b) API. You can call this API through a native SDK or through REST calls. Include `Description` in the **visualFeatures** query parameter. Then, when you get the full JSON response, parse the string for the contents of the `"description"` section.
+
+* [Quickstart: Image Analysis REST API or client libraries](./quickstarts-sdk/image-analysis-client-library.md?pivots=programming-language-csharp)
+
+## Next steps
+
+Learn the related concepts of [tagging images](concept-tagging-images.md) and [categorizing images](concept-categorizing-images.md).
ai-services Concept Detecting Adult Content https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/computer-vision/concept-detecting-adult-content.md
+
+ Title: Adult, racy, gory content - Azure AI Vision
+
+description: Concepts related to detecting adult content in images using the Azure AI Vision API.
+ Last updated : 12/27/2022
+# Adult content detection
+
+Azure AI Vision can detect adult material in images so that developers can restrict the display of these images in their software. Content flags are applied with a score between zero and one so developers can interpret the results according to their own preferences.
+
+Try out the adult content detection features quickly and easily in your browser using Vision Studio.
+
+> [!div class="nextstepaction"]
+> [Try Vision Studio](https://portal.vision.cognitive.azure.com/)
+
+## Content flag definitions
+
+The "adult" classification contains several different categories:
+
+- **Adult** images are explicitly sexual in nature and often show nudity and sexual acts.
+- **Racy** images are sexually suggestive in nature and often contain less sexually explicit content than images tagged as **Adult**.
+- **Gory** images show blood/gore.
+
+## Use the API
+
+You can detect adult content with the [Analyze Image 3.2](https://westcentralus.dev.cognitive.microsoft.com/docs/services/computer-vision-v3-2/operations/56f91f2e778daf14a499f21b) API. When you add the value of `Adult` to the **visualFeatures** query parameter, the API returns three boolean properties&mdash;`isAdultContent`, `isRacyContent`, and `isGoryContent`&mdash;in its JSON response. The method also returns corresponding properties&mdash;`adultScore`, `racyScore`, and `goreScore`&mdash;which represent confidence scores between zero and one for each respective category.
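+
+A minimal REST sketch, with placeholder endpoint, key, and image URL values, that requests only the adult content flags:
+
+```bash
+# Sketch: request the adult, racy, and gory flags and scores for an image.
+curl -s -X POST "https://<endpoint>/vision/v3.2/analyze?visualFeatures=Adult" \
+  -H "Ocp-Apim-Subscription-Key: <subscription-key>" \
+  -H "Content-Type: application/json" \
+  -d '{"url":"https://example.com/image.jpg"}' | jq '.adult'
+```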
+
+- [Quickstart: Vision REST API or client libraries](./quickstarts-sdk/image-analysis-client-library.md?pivots=programming-language-csharp)
ai-services Concept Detecting Color Schemes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/computer-vision/concept-detecting-color-schemes.md
+
+ Title: Color scheme detection - Azure AI Vision
+
+description: Concepts related to detecting the color scheme in images using the Azure AI Vision API.
+ Last updated : 11/17/2021
+# Color schemes detection
+
+Azure AI Vision analyzes the colors in an image to provide three different attributes: the dominant foreground color, the dominant background color, and the larger set of dominant colors in the image. The set of possible returned colors is: black, blue, brown, gray, green, orange, pink, purple, red, teal, white, and yellow.
+
+Azure AI Vision also extracts an accent color, which represents the most vibrant color in the image, based on a combination of the dominant color set and saturation. The accent color is returned as a hexadecimal HTML color code (for example, `#00CC00`).
+
+Azure AI Vision also returns a boolean value indicating whether the image is a black-and-white image.
+
+## Color scheme detection examples
+
+The following example illustrates the JSON response returned by Azure AI Vision when it detects the color scheme of an image.
+
+> [!NOTE]
+> In this case, the example image is not a black and white image, but the dominant foreground and background colors are black, and the dominant colors for the image as a whole are black and white.
+
+![Outdoor Mountain at sunset, with a person's silhouette](./Images/mountain_vista.png)
+
+```json
+{
+ "color": {
+ "dominantColorForeground": "Black",
+ "dominantColorBackground": "Black",
+ "dominantColors": ["Black", "White"],
+ "accentColor": "BB6D10",
+ "isBwImg": false
+ },
+ "requestId": "0dc394bf-db50-4871-bdcc-13707d9405ea",
+ "metadata": {
+ "height": 202,
+ "width": 300,
+ "format": "Jpeg"
+ }
+}
+```
+
+### Dominant color examples
+
+The following table shows the returned foreground, background, and image colors for each sample image.
+
+| Image | Dominant colors |
+|-|--|
+|![A white flower with a green background](./Images/flower.png)| Foreground: Black<br/>Background: White<br/>Colors: Black, White, Green|
+|![A train running through a station](./Images/train_station.png) | Foreground: Black<br/>Background: Black<br/>Colors: Black |
+
+### Accent color examples
+
+ The following table shows the returned accent color, as a hexadecimal HTML color value, for each example image.
+
+| Image | Accent color |
+|-|--|
+|![A person standing on a mountain rock at sunset](./Images/mountain_vista.png) | #BB6D10 |
+|![A white flower with a green background](./Images/flower.png) | #C6A205 |
+|![A train running through a station](./Images/train_station.png) | #474A84 |
+
+### Black and white detection examples
+
+The following table shows Azure AI Vision's black and white evaluation in the sample images.
+
+| Image | Black & white? |
+|-|-|
+|![A black and white picture of buildings in Manhattan](./Images/bw_buildings.png) | true |
+|![A blue house and the front yard](./Images/house_yard.png) | false |
+
+## Use the API
+
+The color scheme detection feature is part of the [Analyze Image](https://westcentralus.dev.cognitive.microsoft.com/docs/services/computer-vision-v3-2/operations/56f91f2e778daf14a499f21b) API. You can call this API through a native SDK or through REST calls. Include `Color` in the **visualFeatures** query parameter. Then, when you get the full JSON response, simply parse the string for the contents of the `"color"` section.
+
+* [Quickstart: Vision REST API or client libraries](./quickstarts-sdk/image-analysis-client-library.md?pivots=programming-language-csharp)
ai-services Concept Detecting Domain Content https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/computer-vision/concept-detecting-domain-content.md
+
+ Title: Domain-specific content - Azure AI Vision
+
+description: Learn how to specify an image categorization domain to return more detailed information about an image.
+ Last updated : 02/08/2019
+# Domain-specific content detection
+
+In addition to tagging and high-level categorization, Azure AI Vision also supports further domain-specific analysis using models that have been trained on specialized data.
+
+There are two ways to use the domain-specific models: by themselves (scoped analysis) or as an enhancement to the categorization feature.
+
+### Scoped analysis
+
+You can analyze an image using only the chosen domain-specific model by calling the [Models/\<model\>/Analyze](https://westcentralus.dev.cognitive.microsoft.com/docs/services/computer-vision-v3-2/operations/56f91f2e778daf14a499f21b) API.
+
+The following is a sample JSON response returned by the **models/celebrities/analyze** API for the given image:
+
+![Satya Nadella standing, smiling](./images/satya.jpeg)
+
+```json
+{
+ "result": {
+ "celebrities": [{
+ "faceRectangle": {
+ "top": 391,
+ "left": 318,
+ "width": 184,
+ "height": 184
+ },
+ "name": "Satya Nadella",
+ "confidence": 0.99999856948852539
+ }]
+ },
+ "requestId": "8217262a-1a90-4498-a242-68376a4b956b",
+ "metadata": {
+ "width": 800,
+ "height": 1200,
+ "format": "Jpeg"
+ }
+}
+```
+
+### Enhanced categorization analysis
+
+You can also use domain-specific models to supplement general image analysis. You do this as part of [high-level categorization](concept-categorizing-images.md) by specifying domain-specific models in the *details* parameter of the [Analyze](https://westcentralus.dev.cognitive.microsoft.com/docs/services/computer-vision-v3-2/operations/56f91f2e778daf14a499f21b) API call.
+
+In this case, the 86-category taxonomy classifier is called first. If any of the detected categories have a matching domain-specific model, the image is passed through that model as well and the results are added.
+
+The following JSON response shows how domain-specific analysis can be included as the `detail` node in a broader categorization analysis.
+
+```json
+"categories":[
+ {
+ "name":"abstract_",
+ "score":0.00390625
+ },
+ {
+ "name":"people_",
+ "score":0.83984375,
+ "detail":{
+ "celebrities":[
+ {
+ "name":"Satya Nadella",
+ "faceRectangle":{
+ "left":597,
+ "top":162,
+ "width":248,
+ "height":248
+ },
+ "confidence":0.999028444
+ }
+ ],
+ "landmarks":[
+ {
+ "name":"Forbidden City",
+ "confidence":0.9978346
+ }
+ ]
+ }
+ }
+]
+```
+
+## List the domain-specific models
+
+Currently, Azure AI Vision supports the following domain-specific models:
+
+| Name | Description |
+||-|
+| celebrities | Celebrity recognition, supported for images classified in the `people_` category |
+| landmarks | Landmark recognition, supported for images classified in the `outdoor_` or `building_` categories |
+
+Calling the [Models](https://westcentralus.dev.cognitive.microsoft.com/docs/services/computer-vision-v3-2/operations/56f91f2e778daf14a499f20e) API will return this information along with the categories to which each model can apply:
+
+```json
+{
+ "models":[
+ {
+ "name":"celebrities",
+ "categories":[
+ "people_",
+ "Σ║║_",
+ "pessoas_",
+ "gente_"
+ ]
+ },
+ {
+ "name":"landmarks",
+ "categories":[
+ "outdoor_",
+ "户外_",
+ "屋外_",
+ "aoarlivre_",
+ "alairelibre_",
+ "building_",
+ "σ╗║τ¡æ_",
+ "建物_",
+ "edifício_"
+ ]
+ }
+ ]
+}
+```
+
+## Use the API
+
+This feature is available through the [Analyze Image](https://westcentralus.dev.cognitive.microsoft.com/docs/services/computer-vision-v3-2/operations/56f91f2e778daf14a499f21b) API. You can call this API through a native SDK or through REST calls. Include `Celebrities` or `Landmarks` in the **details** query parameter. Then, when you get the full JSON response, simply parse the string for the contents of the `"details"` section.
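+
+The following sketches show both usage patterns with placeholder endpoint, key, and image URL values; they aren't official samples.
+
+```bash
+# Sketch: scoped analysis with the celebrities domain model only.
+curl -s -X POST "https://<endpoint>/vision/v3.2/models/celebrities/analyze" \
+  -H "Ocp-Apim-Subscription-Key: <subscription-key>" \
+  -H "Content-Type: application/json" \
+  -d '{"url":"https://example.com/satya.jpeg"}'
+
+# Sketch: enhanced categorization, adding celebrity and landmark details to the category results.
+curl -s -X POST "https://<endpoint>/vision/v3.2/analyze?visualFeatures=Categories&details=Celebrities,Landmarks" \
+  -H "Ocp-Apim-Subscription-Key: <subscription-key>" \
+  -H "Content-Type: application/json" \
+  -d '{"url":"https://example.com/satya.jpeg"}'
+```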
+
+* [Quickstart: Vision REST API or client libraries](./quickstarts-sdk/image-analysis-client-library.md?pivots=programming-language-csharp)
ai-services Concept Detecting Faces https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/computer-vision/concept-detecting-faces.md
+
+ Title: Face detection - Azure AI Vision
+
+description: Learn concepts related to the face detection feature of the Azure AI Vision API.
+ Last updated : 12/27/2022
+# Face detection with Image Analysis
+
+Image Analysis can detect human faces within an image and generate rectangle coordinates for each detected face.
+
+> [!NOTE]
+> This feature is also offered by the dedicated [Face](./overview-identity.md) service. Use this alternative for more detailed face analysis, including face identification and head pose detection.
++
+Try out the face detection features quickly and easily in your browser using Vision Studio.
+
+> [!div class="nextstepaction"]
+> [Try Vision Studio](https://portal.vision.cognitive.azure.com/)
+
+## Face detection examples
+
+The following example demonstrates the JSON response returned by the Analyze API for an image containing a single human face.
+
+![Vision Analyze Woman Roof Face](./Images/woman_roof_face.png)
+
+```json
+{
+ "faces": [
+ {
+ "age": 23,
+ "gender": "Female",
+ "faceRectangle": {
+ "top": 45,
+ "left": 194,
+ "width": 44,
+ "height": 44
+ }
+ }
+ ],
+ "requestId": "8439ba87-de65-441b-a0f1-c85913157ecd",
+ "metadata": {
+ "height": 200,
+ "width": 300,
+ "format": "Png"
+ }
+}
+```
+
+The next example demonstrates the JSON response returned for an image containing multiple human faces.
+
+![Vision Analyze Family Photo Face](./Images/family_photo_face.png)
+
+```json
+{
+ "faces": [
+ {
+ "age": 11,
+ "gender": "Male",
+ "faceRectangle": {
+ "top": 62,
+ "left": 22,
+ "width": 45,
+ "height": 45
+ }
+ },
+ {
+ "age": 11,
+ "gender": "Female",
+ "faceRectangle": {
+ "top": 127,
+ "left": 240,
+ "width": 42,
+ "height": 42
+ }
+ },
+ {
+ "age": 37,
+ "gender": "Female",
+ "faceRectangle": {
+ "top": 55,
+ "left": 200,
+ "width": 41,
+ "height": 41
+ }
+ },
+ {
+ "age": 41,
+ "gender": "Male",
+ "faceRectangle": {
+ "top": 45,
+ "left": 103,
+ "width": 39,
+ "height": 39
+ }
+ }
+ ],
+ "requestId": "3a383cbe-1a05-4104-9ce7-1b5cf352b239",
+ "metadata": {
+ "height": 230,
+ "width": 300,
+ "format": "Png"
+ }
+}
+```
+
+## Use the API
+
+The face detection feature is part of the [Analyze Image 3.2](https://westcentralus.dev.cognitive.microsoft.com/docs/services/computer-vision-v3-2/operations/56f91f2e778daf14a499f21b) API. You can call this API through a native SDK or through REST calls. Include `Faces` in the **visualFeatures** query parameter. Then, when you get the full JSON response, simply parse the string for the contents of the `"faces"` section.
+
+* [Quickstart: Vision REST API or client libraries](./quickstarts-sdk/image-analysis-client-library.md?pivots=programming-language-csharp)
ai-services Concept Detecting Image Types https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/computer-vision/concept-detecting-image-types.md
+
+ Title: Image type detection - Azure AI Vision
+
+description: Concepts related to the image type detection feature of the Azure AI Vision API.
+ Last updated : 03/11/2019
+# Image type detection
+
+With the [Analyze Image](https://westcentralus.dev.cognitive.microsoft.com/docs/services/computer-vision-v3-2/operations/56f91f2e778daf14a499f21b) API, Azure AI Vision can analyze the content type of images, indicating whether an image is clip art or a line drawing.
+
+## Detecting clip art
+
+Azure AI Vision analyzes an image and rates the likelihood of the image being clip art on a scale of 0 to 3, as described in the following table.
+
+| Value | Meaning |
+|-||
+| 0 | Non-clip-art |
+| 1 | Ambiguous |
+| 2 | Normal-clip-art |
+| 3 | Good-clip-art |
+
+### Clip art detection examples
+
+The following JSON responses illustrate what Azure AI Vision returns when rating the likelihood of the example images being clip art.
+
+![A clip art image of a slice of cheese](./Images/cheese_clipart.png)
+
+```json
+{
+ "imageType": {
+ "clipArtType": 3,
+ "lineDrawingType": 0
+ },
+ "requestId": "88c48d8c-80f3-449f-878f-6947f3b35a27",
+ "metadata": {
+ "height": 225,
+ "width": 300,
+ "format": "Jpeg"
+ }
+}
+```
+
+![A blue house and the front yard](./Images/house_yard.png)
+
+```json
+{
+ "imageType": {
+ "clipArtType": 0,
+ "lineDrawingType": 0
+ },
+ "requestId": "a9c8490a-2740-4e04-923b-e8f4830d0e47",
+ "metadata": {
+ "height": 200,
+ "width": 300,
+ "format": "Jpeg"
+ }
+}
+```
+
+## Detecting line drawings
+
+Azure AI Vision analyzes an image and returns a boolean value indicating whether the image is a line drawing.
+
+### Line drawing detection examples
+
+The following JSON responses illustrate what Azure AI Vision returns when indicating whether the example images are line drawings.
+
+![A line drawing image of a lion](./Images/lion_drawing.png)
+
+```json
+{
+ "imageType": {
+ "clipArtType": 2,
+ "lineDrawingType": 1
+ },
+ "requestId": "6442dc22-476a-41c4-aa3d-9ceb15172f01",
+ "metadata": {
+ "height": 268,
+ "width": 300,
+ "format": "Jpeg"
+ }
+}
+```
+
+![A white flower with a green background](./Images/flower.png)
+
+```json
+{
+ "imageType": {
+ "clipArtType": 0,
+ "lineDrawingType": 0
+ },
+ "requestId": "98437d65-1b05-4ab7-b439-7098b5dfdcbf",
+ "metadata": {
+ "height": 200,
+ "width": 300,
+ "format": "Jpeg"
+ }
+}
+```
+
+## Use the API
+
+The image type detection feature is part of the [Analyze Image](https://westcentralus.dev.cognitive.microsoft.com/docs/services/computer-vision-v3-2/operations/56f91f2e778daf14a499f21b) API. You can call this API through a native SDK or through REST calls. Include `ImageType` in the **visualFeatures** query parameter. Then, when you get the full JSON response, simply parse the string for the contents of the `"imageType"` section.
+
+* [Quickstart: Vision REST API or client libraries](./quickstarts-sdk/image-analysis-client-library.md?pivots=programming-language-csharp)
ai-services Concept Face Detection https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/computer-vision/concept-face-detection.md
+
+ Title: "Face detection and attributes - Face"
+
+description: Learn more about face detection; face detection is the action of locating human faces in an image and optionally returning different kinds of face-related data.
+ Last updated : 07/04/2023
+# Face detection and attributes
++
+This article explains the concepts of face detection and face attribute data. Face detection is the process of locating human faces in an image and optionally returning different kinds of face-related data.
+
+You use the [Face - Detect](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f30395236) API to detect faces in an image. To get started using the REST API or a client SDK, follow a [quickstart](./quickstarts-sdk/identity-client-library.md). Or, for a more in-depth guide, see [Call the detect API](./how-to/identity-detect-faces.md).
+
+## Face rectangle
+
+Each detected face corresponds to a `faceRectangle` field in the response. This is a set of pixel coordinates for the left, top, width, and height of the detected face. Using these coordinates, you can get the location and size of the face. In the API response, faces are listed in size order from largest to smallest.
+
+Try out the capabilities of face detection quickly and easily using Vision Studio.
+> [!div class="nextstepaction"]
+> [Try Vision Studio](https://portal.vision.cognitive.azure.com/)
+
+## Face ID
+
+The face ID is a unique identifier string for each detected face in an image. Face ID requires limited access approval, which you can apply for by filling out the [intake form](https://aka.ms/facerecognition). For more information, see the Face [limited access page](/legal/cognitive-services/computer-vision/limited-access-identity?context=%2Fazure%2Fcognitive-services%2Fcomputer-vision%2Fcontext%2Fcontext). You can request a face ID in your [Face - Detect](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f30395236) API call.
+
+## Face landmarks
+
+Face landmarks are a set of easy-to-find points on a face, such as the pupils or the tip of the nose. By default, there are 27 predefined landmark points. The following figure shows all 27 points:
+
+![A face diagram with all 27 landmarks labeled](./media/landmarks.1.jpg)
+
+The coordinates of the points are returned in units of pixels.
+
+The Detection_03 model currently has the most accurate landmark detection. The eye and pupil landmarks it returns are precise enough to enable gaze tracking of the face.
+
+## Attributes
++
+Attributes are a set of features that can optionally be detected by the [Face - Detect](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f30395236) API. The following attributes can be detected:
+
+* **Accessories**. Indicates whether the given face has accessories. This attribute returns possible accessories including headwear, glasses, and mask, with confidence score between zero and one for each accessory.
+* **Age**. The estimated age in years of a particular face.
+* **Blur**. The blurriness of the face in the image. This attribute returns a value between zero and one and an informal rating of low, medium, or high.
+* **Emotion**. A list of emotions with their detection confidence for the given face. Confidence scores are normalized, and the scores across all emotions add up to one. The emotions returned are happiness, sadness, neutral, anger, contempt, disgust, surprise, and fear.
+* **Exposure**. The exposure of the face in the image. This attribute returns a value between zero and one and an informal rating of underExposure, goodExposure, or overExposure.
+* **Facial hair**. The estimated facial hair presence and the length for the given face.
+* **Gender**. The estimated gender of the given face. Possible values are male, female, and genderless.
+* **Glasses**. Whether the given face has eyeglasses. Possible values are NoGlasses, ReadingGlasses, Sunglasses, and Swimming Goggles.
+* **Hair**. The hair type of the face. This attribute shows whether the hair is visible, whether baldness is detected, and what hair colors are detected.
+* **Head pose**. The face's orientation in 3D space. This attribute is described by the roll, yaw, and pitch angles in degrees, which are defined according to the [right-hand rule](https://en.wikipedia.org/wiki/Right-hand_rule). The order of three angles is roll-yaw-pitch, and each angle's value range is from -180 degrees to 180 degrees. 3D orientation of the face is estimated by the roll, yaw, and pitch angles in order. See the following diagram for angle mappings:
+
+ ![A head with the pitch, roll, and yaw axes labeled](./media/headpose.1.jpg)
+
+ For more information on how to use these values, see the [Head pose how-to guide](./how-to/use-headpose.md).
+* **Makeup**. Indicates whether the face has makeup. This attribute returns a Boolean value for eyeMakeup and lipMakeup.
+* **Mask**. Indicates whether the face is wearing a mask. This attribute returns a possible mask type, and a Boolean value to indicate whether nose and mouth are covered.
+* **Noise**. The visual noise detected in the face image. This attribute returns a value between zero and one and an informal rating of low, medium, or high.
+* **Occlusion**. Indicates whether there are objects blocking parts of the face. This attribute returns a Boolean value for eyeOccluded, foreheadOccluded, and mouthOccluded.
+* **Smile**. The smile expression of the given face. This value is between zero for no smile and one for a clear smile.
+* **QualityForRecognition**. The overall image quality, indicating whether the image is of sufficient quality to attempt face recognition. The value is an informal rating of low, medium, or high. Only "high" quality images are recommended for person enrollment, and quality at or above "medium" is recommended for identification scenarios.
+ >[!NOTE]
+ > The availability of each attribute depends on the detection model specified. QualityForRecognition attribute also depends on the recognition model, as it is currently only available when using a combination of detection model detection_01 or detection_03, and recognition model recognition_03 or recognition_04.
+
+> [!IMPORTANT]
+> Face attributes are predicted through the use of statistical algorithms. They might not always be accurate. Use caution when you make decisions based on attribute data.
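+
+As an illustration only, the following sketch requests a subset of these attributes from the Detect API; the endpoint, key, attribute list, and image URL are placeholder assumptions, and `returnFaceId` is disabled because face IDs require limited access approval.
+
+```bash
+# Sketch: detect faces, return landmarks, and request a few attributes.
+# qualityForRecognition requires detection_01 or detection_03 with recognition_03 or recognition_04.
+curl -s -X POST "https://<endpoint>/face/v1.0/detect?returnFaceId=false&returnFaceLandmarks=true&returnFaceAttributes=headPose,glasses,occlusion,blur,qualityForRecognition&detectionModel=detection_01&recognitionModel=recognition_04" \
+  -H "Ocp-Apim-Subscription-Key: <subscription-key>" \
+  -H "Content-Type: application/json" \
+  -d '{"url":"https://example.com/family-photo.png"}'
+```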
+
+## Input data
+
+Use the following tips to make sure that your input images give the most accurate detection results:
+
+* The supported input image formats are JPEG, PNG, GIF (the first frame), BMP.
+* The image file size should be no larger than 6 MB.
+* The minimum detectable face size is 36 x 36 pixels in an image that is no larger than 1920 x 1080 pixels. Images larger than 1920 x 1080 pixels have a proportionally larger minimum face size. Reducing the face size might cause some faces not to be detected, even if they're larger than the minimum detectable face size.
+* The maximum detectable face size is 4096 x 4096 pixels.
+* Faces outside the size range of 36 x 36 to 4096 x 4096 pixels will not be detected.
+* Some faces might not be recognized because of technical challenges, such as:
+ * Images with extreme lighting, for example, severe backlighting.
+ * Obstructions that block one or both eyes.
+ * Differences in hair type or facial hair.
+ * Changes in facial appearance because of age.
+ * Extreme facial expressions.
+
+### Input data with orientation information
+
+Some input images with JPEG format might contain orientation information in Exchangeable image file format (EXIF) metadata. If EXIF orientation is available, images are automatically rotated to the correct orientation before sending for face detection. The face rectangle, landmarks, and head pose for each detected face are estimated based on the rotated image.
+
+To properly display the face rectangle and landmarks, you need to make sure the image is rotated correctly. Most image visualization tools automatically rotate the image according to its EXIF orientation by default. For other tools, you might need to apply the rotation using your own code. The following examples show a face rectangle on a rotated image (left) and a non-rotated image (right).
+
+![Two face images with and without rotation](./media/image-rotation.png)
+
+### Video input
+
+If you're detecting faces from a video feed, you may be able to improve performance by adjusting certain settings on your video camera:
+
+* **Smoothing**: Many video cameras apply a smoothing effect. You should turn this off if you can because it creates a blur between frames and reduces clarity.
+* **Shutter Speed**: A faster shutter speed reduces the amount of motion between frames and makes each frame clearer. We recommend shutter speeds of 1/60 second or faster.
+* **Shutter Angle**: Some cameras specify shutter angle instead of shutter speed. You should use a lower shutter angle if possible. This results in clearer video frames.
+
+ >[!NOTE]
+ > A camera with a lower shutter angle will receive less light in each frame, so the image will be darker. You'll need to determine the right level to use.
+
+## Next steps
+
+Now that you're familiar with face detection concepts, learn how to write a script that detects faces in a given image.
+
+* [Call the detect API](./how-to/identity-detect-faces.md)
ai-services Concept Face Recognition https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/computer-vision/concept-face-recognition.md
+
+ Title: "Face recognition - Face"
+
+description: Learn the concept of Face recognition, its related operations, and the underlying data structures.
+ Last updated : 12/27/2022
+# Face recognition
+
+This article explains the concept of Face recognition, its related operations, and the underlying data structures. Broadly, face recognition is the act of verifying or identifying individuals by their faces. Face recognition is important in implementing the identity verification scenario, which enterprises and apps can use to verify that a (remote) user is who they claim to be.
+
+You can try out the capabilities of face recognition quickly and easily using Vision Studio.
+> [!div class="nextstepaction"]
+> [Try Vision Studio](https://portal.vision.cognitive.azure.com/)
++
+## Face recognition operations
++
+This section details how the underlying operations use the above data structures to identify and verify a face.
+
+### PersonGroup creation and training
+
+You need to create a [PersonGroup](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f30395244) or [LargePersonGroup](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/599acdee6ac60f11b48b5a9d) to store the set of people to match against. PersonGroups hold [Person](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f3039523c) objects, which each represent an individual person and hold a set of face data belonging to that person.
+
+The [Train](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f30395249) operation prepares the data set to be used in face data comparisons.
+
+### Identification
+
+The [Identify](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f30395239) operation takes one or several source face IDs (from a DetectedFace or PersistedFace object) and a PersonGroup or LargePersonGroup. It returns a list of the Person objects that each source face might belong to. Returned Person objects are wrapped as Candidate objects, which have a prediction confidence value.
+
+### Verification
+
+The [Verify](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f3039523a) operation takes a single face ID (from a DetectedFace or PersistedFace object) and a Person object. It determines whether the face belongs to that same person. Verification is one-to-one matching and can be used as a final check on the results from the Identify API call. However, you can optionally pass in the PersonGroup to which the candidate Person belongs to improve the API performance.
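+
+The following is a condensed sketch of that flow using placeholder IDs, names, and image URLs; it isn't an official sample, and error handling and response parsing are omitted.
+
+```bash
+# Sketch of the end-to-end recognition flow with placeholder values.
+ENDPOINT="https://<endpoint>"
+KEY="<subscription-key>"
+
+# 1. Create a PersonGroup.
+curl -s -X PUT "$ENDPOINT/face/v1.0/persongroups/my-group" \
+  -H "Ocp-Apim-Subscription-Key: $KEY" -H "Content-Type: application/json" \
+  -d '{"name":"My group","recognitionModel":"recognition_04"}'
+
+# 2. Create a Person in the group, then add a face image to that Person.
+curl -s -X POST "$ENDPOINT/face/v1.0/persongroups/my-group/persons" \
+  -H "Ocp-Apim-Subscription-Key: $KEY" -H "Content-Type: application/json" \
+  -d '{"name":"Anna"}'   # the response contains the personId used below
+curl -s -X POST "$ENDPOINT/face/v1.0/persongroups/my-group/persons/<personId>/persistedfaces" \
+  -H "Ocp-Apim-Subscription-Key: $KEY" -H "Content-Type: application/json" \
+  -d '{"url":"https://example.com/anna-1.jpg"}'
+
+# 3. Train the group, then identify a detected face against it.
+curl -s -X POST "$ENDPOINT/face/v1.0/persongroups/my-group/train" \
+  -H "Ocp-Apim-Subscription-Key: $KEY"
+curl -s -X POST "$ENDPOINT/face/v1.0/identify" \
+  -H "Ocp-Apim-Subscription-Key: $KEY" -H "Content-Type: application/json" \
+  -d '{"personGroupId":"my-group","faceIds":["<faceId-from-detect>"]}'
+```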
+
+## Related data structures
+
+The recognition operations use mainly the following data structures. These objects are stored in the cloud and can be referenced by their ID strings. ID strings are always unique within a subscription, but name fields may be duplicated.
+
+|Name|Description|
+|:--|:--|
+|DetectedFace| This single face representation is retrieved by the [face detection](./how-to/identity-detect-faces.md) operation. Its ID expires 24 hours after it's created.|
+|PersistedFace| When DetectedFace objects are added to a group, such as FaceList or Person, they become PersistedFace objects. They can be [retrieved](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f3039524c) at any time and don't expire.|
+|[FaceList](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f3039524b) or [LargeFaceList](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/5a157b68d2de3616c086f2cc)| This data structure is an assorted list of PersistedFace objects. A FaceList has a unique ID, a name string, and optionally a user data string.|
+|[Person](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f3039523c)| This data structure is a list of PersistedFace objects that belong to the same person. It has a unique ID, a name string, and optionally a user data string.|
+|[PersonGroup](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f30395244) or [LargePersonGroup](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/599acdee6ac60f11b48b5a9d)| This data structure is an assorted list of Person objects. It has a unique ID, a name string, and optionally a user data string. A PersonGroup must be [trained](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f30395249) before it can be used in recognition operations.|
+|PersonDirectory | This data structure is like **LargePersonGroup** but offers additional storage capacity and other added features. For more information, see [Use the PersonDirectory structure (preview)](./how-to/use-persondirectory.md).
++
+## Input data
+
+Use the following tips to ensure that your input images give the most accurate recognition results:
+
+* The supported input image formats are JPEG, PNG, GIF (the first frame), BMP.
+* Image file size should be no larger than 6 MB.
+* When you create Person objects, use photos that feature different kinds of angles and lighting.
+* Some faces might not be recognized because of technical challenges, such as:
+ * Images with extreme lighting, for example, severe backlighting.
+ * Obstructions that block one or both eyes.
+ * Differences in hair type or facial hair.
+ * Changes in facial appearance because of age.
+ * Extreme facial expressions.
+* You can use the qualityForRecognition attribute in the [face detection](./how-to/identity-detect-faces.md) operation (with applicable detection models) as a general guideline for whether the image is likely of sufficient quality to attempt face recognition. Only "high" quality images are recommended for person enrollment, and quality at or above "medium" is recommended for identification scenarios.
+
+## Next steps
+
+Now that you're familiar with face recognition concepts, write a script that identifies faces against a trained PersonGroup.
+
+* [Face quickstart](./quickstarts-sdk/identity-client-library.md)
ai-services Concept Generate Thumbnails 40 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/computer-vision/concept-generate-thumbnails-40.md
+
+ Title: Smart-cropped thumbnails - Image Analysis 4.0
+
+description: Concepts related to generating thumbnails for images using the Image Analysis 4.0 API.
+ Last updated : 01/24/2023
+# Smart-cropped thumbnails (version 4.0 preview)
+
+A thumbnail is a reduced-size representation of an image. Thumbnails are used to represent images and other data in a more economical, layout-friendly way. The Azure AI Vision API uses smart cropping to create intuitive image thumbnails that include the most important regions of an image with priority given to any detected faces.
+
+The Azure AI Vision smart-cropping utility takes one or more aspect ratios in the range [0.75, 1.80] and returns the bounding box coordinates (in pixels) of the region(s) identified. Your app can then crop and return the image using those coordinates.
+
+> [!IMPORTANT]
+> This feature uses face detection to help determine important regions in the image. The detection does not involve distinguishing one face from another face, predicting or classifying facial attributes, or creating a facial template (a unique set of numbers generated from an image that represents the distinctive features of a face).
+
+## Examples
+
+The generated bounding box can vary widely depending on what you specify for aspect ratio, as shown in the following images.
+
+| Aspect ratio | Bounding box |
+|-|--|
+| original | :::image type="content" source="Images/cropped-original.png" alt-text="Photo of a man with a dog at a table."::: |
+| 0.75 | :::image type="content" source="Images/cropped-075-bb.png" alt-text="Photo of a man with a dog at a table. A 0.75 ratio bounding box is drawn."::: |
+| 1.00 | :::image type="content" source="Images/cropped-1-0-bb.png" alt-text="Photo of a man with a dog at a table. A 1.00 ratio bounding box is drawn."::: |
+| 1.50 | :::image type="content" source="Images/cropped-150-bb.png" alt-text="Photo of a man with a dog at a table. A 1.50 ratio bounding box is drawn."::: |
++
+## Use the API
+
+The smart cropping feature is available through the [Analyze Image API](https://aka.ms/vision-4-0-ref). Include `SmartCrops` in the **features** query parameter. Also include a **smartcrops-aspect-ratios** query parameter, and set it to a decimal value for the aspect ratio you want (defined as width / height) in the range [0.75, 1.80]. Multiple aspect ratio values should be comma-separated. If no aspect ratio value is provided, the API returns a crop with an aspect ratio that best preserves the image's most important region.
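+
+For illustration, a request asking for smart-crop suggestions at two aspect ratios might look like the following sketch (the endpoint, key, image URL, and API version are placeholder assumptions):
+
+```bash
+# Sketch: request smart-crop bounding boxes for 0.75 and 1.50 aspect ratios.
+curl -s -X POST "https://<endpoint>/computervision/imageanalysis:analyze?api-version=2023-02-01-preview&features=smartCrops&smartcrops-aspect-ratios=0.75,1.50" \
+  -H "Ocp-Apim-Subscription-Key: <subscription-key>" \
+  -H "Content-Type: application/json" \
+  -d '{"url":"https://example.com/man-with-dog.png"}'
+```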
+
+## Next steps
+
+* [Call the Analyze Image API](./how-to/call-analyze-image-40.md)
ai-services Concept Generating Thumbnails https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/computer-vision/concept-generating-thumbnails.md
+
+ Title: Smart-cropped thumbnails - Azure AI Vision
+
+description: Concepts related to generating thumbnails for images using the Azure AI Vision API.
+ Last updated : 11/09/2022
+# Smart-cropped thumbnails
+
+A thumbnail is a reduced-size representation of an image. Thumbnails are used to represent images and other data in a more economical, layout-friendly way. The Azure AI Vision API uses smart cropping to create intuitive image thumbnails that include the most important regions of an image with priority given to any detected faces.
+
+The Azure AI Vision thumbnail generation algorithm works as follows:
+
+1. Remove distracting elements from the image and identify the _area of interest_&mdash;the area of the image in which the main object(s) appears.
+1. Crop the image based on the identified _area of interest_.
+1. Change the aspect ratio to fit the target thumbnail dimensions.
+
+## Area of interest
+
+When you upload an image, the Azure AI Vision API analyzes it to determine the *area of interest*. It can then use this region to determine how to crop the image. The cropping operation, however, will always match the desired aspect ratio if one is specified.
+
+You can also get the raw bounding box coordinates of this same *area of interest* by calling the **areaOfInterest** API instead. You can then use this information to modify the original image however you wish.
+
+## Examples
+
+The generated thumbnail can vary widely depending on what you specify for height, width, and smart cropping, as shown in the following image.
+
+![A mountain image next to various cropping configurations](./Images/thumbnail-demo.png)
+
+The following table illustrates thumbnails defined by smart-cropping for the example images. The thumbnails were generated for a specified target height and width of 50 pixels, with smart cropping enabled.
+
+| Image | Thumbnail |
+|-|--|
+|![Outdoor Mountain at sunset, with a person's silhouette](./Images/mountain_vista.png) | ![Thumbnail of Outdoor Mountain at sunset, with a person's silhouette](./Images/mountain_vista_thumbnail.png) |
+|![A white flower with a green background](./Images/flower.png) | ![Vision Analyze Flower thumbnail](./Images/flower_thumbnail.png) |
+|![A woman on the roof of an apartment building](./Images/woman_roof.png) | ![thumbnail of a woman on the roof of an apartment building](./Images/woman_roof_thumbnail.png) |
++
+## Use the API
++
+The generate thumbnail feature is available through the [Get Thumbnail](https://westus.dev.cognitive.microsoft.com/docs/services/computer-vision-v3-2/operations/56f91f2e778daf14a499f20c) and [Get Area of Interest](https://westus.dev.cognitive.microsoft.com/docs/services/computer-vision-v3-2/operations/b156d0f5e11e492d9f64418d) APIs. You can call this API through a native SDK or through REST calls.
++
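+As a hedged illustration, the following Python sketch requests a 50 x 50 smart-cropped thumbnail over REST. The `width`, `height`, and `smartCropping` parameters correspond to the Get Thumbnail operation linked above; the exact path segment, resource name, and image URL are assumptions to verify against the API reference.
+
+```python
+import requests
+
+endpoint = "https://<your-resource>.cognitiveservices.azure.com"  # placeholder
+key = "<your-key>"
+
+params = {"width": 50, "height": 50, "smartCropping": "true"}
+headers = {"Ocp-Apim-Subscription-Key": key, "Content-Type": "application/json"}
+body = {"url": "https://example.com/mountain.jpg"}  # hypothetical image URL
+
+response = requests.post(
+    f"{endpoint}/vision/v3.2/generateThumbnail",  # assumed v3.2 operation path
+    params=params, headers=headers, json=body,
+)
+
+# The response body is the binary thumbnail image itself.
+with open("thumbnail.jpg", "wb") as f:
+    f.write(response.content)
+```
+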
+* [Generate a thumbnail (how-to)](./how-to/generate-thumbnail.md)
ai-services Concept Image Retrieval https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/computer-vision/concept-image-retrieval.md
+
+ Title: Image Retrieval concepts - Image Analysis 4.0
+
+description: Concepts related to image vectorization using the Image Analysis 4.0 API.
+++++++ Last updated : 03/06/2023+++
+# Image retrieval (version 4.0 preview)
+
+Image retrieval is the process of searching a large collection of images to find those that are most similar to a given query image. Image retrieval systems have traditionally used features extracted from the images, such as content labels, tags, and image descriptors, to compare images and rank them by similarity. However, vector similarity search is gaining more popularity due to a number of benefits over traditional keyword-based search and is becoming a vital component in popular content search services.
+
+## What's the difference between vector search and keyword-based search?
+
+Keyword search is the most basic and traditional method of information retrieval. In this approach, the search engine looks for the exact match of the keywords or phrases entered by the user in the search query and compares with labels and tags provided for the images. The search engine then returns images that contain those exact keywords as content tags and image labels. Keyword search relies heavily on the user's ability to input relevant and specific search terms.
+
+Vector search, on the other hand, searches large collections of vectors in high-dimensional space to find vectors that are similar to a given query. Vector search looks for semantic similarities by capturing the context and meaning of the search query. This approach is often more efficient than traditional image retrieval techniques, as it can reduce search space and improve the accuracy of the results.
+
+## Business Applications
+
+Image retrieval has a variety of applications in different fields, including:
+
+- Digital asset management: Image retrieval can be used to manage large collections of digital images, such as in museums, archives, or online galleries. Users can search for images based on visual features and retrieve the images that match their criteria.
+- Medical image retrieval: Image retrieval can be used in medical imaging to search for images based on their diagnostic features or disease patterns. This can help doctors or researchers to identify similar cases or track disease progression.
+- Security and surveillance: Image retrieval can be used in security and surveillance systems to search for images based on specific features or patterns, such as people and object tracking or threat detection.
+- Forensic image retrieval: Image retrieval can be used in forensic investigations to search for images based on their visual content or metadata, such as in cases of cyber-crime.
+- E-commerce: Image retrieval can be used in online shopping applications to search for similar products based on their features or descriptions or provide recommendations based on previous purchases.
+- Fashion and design: Image retrieval can be used in fashion and design to search for images based on their visual features, such as color, pattern, or texture. This can help designers or retailers to identify similar products or trends.
+
+## What are vector embeddings?
+
+Vector embeddings are a way of representing content&mdash;text or images&mdash;as vectors of real numbers in a high-dimensional space. Vector embeddings are often learned from large amounts of textual and visual data using machine learning algorithms, such as neural networks. Each dimension of the vector corresponds to a different feature or attribute of the content, such as its semantic meaning, syntactic role, or context in which it commonly appears.
+
+> [!NOTE]
+> Vector embeddings can only be meaningfully compared if they are from the same model type.
+
+## How does it work?
++
+1. Vectorize Images and Text: the Image Retrieval APIs, **VectorizeImage** and **VectorizeText**, extract a feature vector from an image or a text string, respectively. The APIs return a single feature vector representing the entire input.
+1. Measure similarity: Vector search systems typically use distance metrics, such as cosine distance or Euclidean distance, to compare vectors and rank them by similarity. The [Vision studio](https://portal.vision.cognitive.azure.com/) demo uses [cosine distance](./how-to/image-retrieval.md#calculate-vector-similarity) to measure similarity (a minimal calculation sketch follows this list).
+1. Retrieve Images: Use the top _N_ vectors similar to the search query and retrieve images corresponding to those vectors from your photo library to provide as the final result.
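+
+The similarity calculation in step 2 can be reproduced with a few lines of plain Python. This is a minimal sketch using toy four-dimensional vectors that stand in for real embeddings; it isn't the service's implementation.
+
+```python
+import math
+
+def cosine_similarity(v1, v2):
+    """Cosine similarity of two equal-length vectors; cosine distance is 1 - similarity."""
+    dot = sum(a * b for a, b in zip(v1, v2))
+    norm1 = math.sqrt(sum(a * a for a in v1))
+    norm2 = math.sqrt(sum(b * b for b in v2))
+    return dot / (norm1 * norm2)
+
+# Toy vectors standing in for real image and text embeddings.
+image_vector = [0.12, -0.40, 0.88, 0.05]
+text_vector = [0.10, -0.35, 0.90, 0.02]
+
+print(cosine_similarity(image_vector, text_vector))  # close to 1.0 means highly similar
+```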
+
+## Next steps
+
+Enable image retrieval for your search service and follow the steps to generate vector embeddings for text and images.
+* [Call the Image retrieval APIs](./how-to/image-retrieval.md)
+
ai-services Concept Model Customization https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/computer-vision/concept-model-customization.md
+
+ Title: Model customization concepts - Image Analysis 4.0
+
+description: Concepts related to the custom model feature of the Image Analysis 4.0 API.
+++++++ Last updated : 02/06/2023+++
+# Model customization (version 4.0 preview)
+
+Model customization lets you train a specialized Image Analysis model for your own use case. Custom models can do either image classification (tags apply to the whole image) or object detection (tags apply to specific areas of the image). Once your custom model is created and trained, it belongs to your Vision resource, and you can call it using the [Analyze Image API](./how-to/call-analyze-image-40.md).
+
+> [!div class="nextstepaction"]
+> [Vision Studio quickstart](./how-to/model-customization.md?tabs=studio)
+
+> [!div class="nextstepaction"]
+> [Python SDK quickstart](./how-to/model-customization.md?tabs=python)
++
+## Scenario components
+
+The main components of a model customization system are the training images, COCO file, dataset object, and model object.
+
+### Training images
+
+Your set of training images should include several examples of each of the labels you want to detect. You'll also want to collect a few extra images to test your model with once it's trained. The images need to be stored in an Azure Storage container in order to be accessible to the model.
+
+In order to train your model effectively, use images with visual variety. Select images that vary by:
+
+- camera angle
+- lighting
+- background
+- visual style
+- individual/grouped subject(s)
+- size
+- type
+
+Additionally, make sure all of your training images meet the following criteria:
+
+- The image must be presented in JPEG, PNG, GIF, BMP, WEBP, ICO, TIFF, or MPO format.
+- The file size of the image must be less than 20 megabytes (MB).
+- The dimensions of the image must be greater than 50 x 50 pixels and less than 16,000 x 16,000 pixels.
+
+### COCO file
+
+The COCO file references all of the training images and associates them with their labeling information. In the case of object detection, it specifies the bounding box coordinates of each tag on each image. This file must be in the COCO format, which is a specific type of JSON file. The COCO file should be stored in the same Azure Storage container as the training images.
+
+> [!TIP]
+> [!INCLUDE [coco-files](includes/coco-files.md)]
+
+### Dataset object
+
+The **Dataset** object is a data structure stored by the Image Analysis service that references the association file. You need to create a **Dataset** object before you can create and train a model.
+
+### Model object
+
+The **Model** object is a data structure stored by the Image Analysis service that represents a custom model. It must be associated with a **Dataset** in order to do initial training. Once it's trained, you can query your model by entering its name in the `model-name` query parameter of the [Analyze Image API call](./how-to/call-analyze-image-40.md).
+
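+As a hedged sketch, querying a trained custom model over REST only requires adding the `model-name` query parameter to an Analyze Image request. The endpoint path and `api-version` value below are assumptions, and `my-custom-model` is a hypothetical model name.
+
+```python
+import requests
+
+endpoint = "https://<your-resource>.cognitiveservices.azure.com"  # placeholder
+key = "<your-key>"
+
+params = {
+    "api-version": "2023-02-01-preview",  # assumed preview API version
+    "model-name": "my-custom-model",      # hypothetical name of your trained Model object
+}
+headers = {"Ocp-Apim-Subscription-Key": key, "Content-Type": "application/json"}
+body = {"url": "https://example.com/test-image.jpg"}  # hypothetical test image
+
+result = requests.post(
+    f"{endpoint}/computervision/imageanalysis:analyze",  # assumed 4.0 analyze path
+    params=params, headers=headers, json=body,
+).json()
+print(result)
+```
+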
+## Quota limits
+
+The following table describes the limits on the scale of your custom model projects.
+
+| Category | Generic image classifier | Generic object detector |
+| - | - | - |
+| Max # training hours | 288 (12 days) | 288 (12 days) |
+| Max # training images | 1,000,000 | 200,000 |
+| Max # evaluation images | 100,000 | 100,000 |
+| Min # training images per category | 2 | 2 |
+| Max # tags per image | multiclass: 1 | NA |
+| Max # regions per image | NA | 1,000 |
+| Max # categories | 2,500 | 1,000 |
+| Min # categories | 2 | 1 |
+| Max image size (Training) | 20 MB | 20 MB |
+| Max image size (Prediction) | Sync: 6 MB, Batch: 20 MB | Sync: 6 MB, Batch: 20 MB |
+| Max image width/height (Training) | 10,240 | 10,240 |
+| Min image width/height (Prediction) | 50 | 50 |
+| Available regions | West US 2, East US, West Europe | West US 2, East US, West Europe |
+| Accepted image types | jpg, png, bmp, gif, jpeg | jpg, png, bmp, gif, jpeg |
+
+## Frequently asked questions
+
+### Why is my COCO file import failing when importing from blob storage?
+
+Currently, Microsoft is addressing an issue that causes COCO file import to fail with large datasets when initiated in Vision Studio. To train using a large dataset, it's recommended to use the REST API instead.
+
+### Why does training take longer/shorter than my specified budget?
+
+The specified training budget is the calibrated **compute time**, not the **wall-clock time**. Some common reasons for the difference are listed:
+
+- **Longer than specified budget:**
+  - Image Analysis is experiencing high training traffic, and GPU resources may be tight. Your job may wait in the queue or be put on hold during training.
+  - The backend training process ran into unexpected failures, which triggered retry logic. The failed runs don't consume your budget, but this can lead to longer training time in general.
+  - Your data is stored in a different region than your Vision resource, which leads to longer data transmission time.
+
+- **Shorter than specified budget:** The following factors speed up training at the cost of consuming more budget per unit of wall-clock time.
+ - Image Analysis sometimes trains with multiple GPUs depending on your data.
+ - Image Analysis sometimes trains multiple exploration trials on multiple GPUs at the same time.
+ - Image Analysis sometimes uses premier (faster) GPU SKUs to train.
+
+### Why does my training fail, and what should I do?
+
+The following are some common reasons for training failure:
+
+- `diverged`: The training can't learn meaningful things from your data. Some common causes are:
+  - Not enough data: providing more data should help.
+  - Data is of poor quality: check whether your images have low resolution or extreme aspect ratios, or whether the annotations are wrong.
+- `notEnoughBudget`: Your specified budget isn't enough for the size of your dataset and model type you're training. Specify a larger budget.
+- `datasetCorrupt`: Usually this means your provided images aren't accessible or the annotation file is in the wrong format.
+- `datasetNotFound`: The dataset can't be found.
+- `unknown`: This could be a backend issue. Reach out to support for investigation.
+
+### What metrics are used for evaluating the models?
+
+The following metrics are used:
+
+- Image classification: Average Precision, Accuracy Top 1, Accuracy Top 5
+- Object detection: Mean Average Precision @ 30, Mean Average Precision @ 50, Mean Average Precision @ 75
+
+### Why does my dataset registration fail?
+
+The API error responses are informative:
+- `DatasetAlreadyExists`: A dataset with the same name already exists.
+- `DatasetInvalidAnnotationUri`: An invalid URI was provided among the annotation URIs at dataset registration time.
+
+### How many images are required for reasonable/good/best model quality?
+
+Although Florence models have strong few-shot capability (achieving good model performance under limited data availability), in general more data makes your trained model better and more robust. Some scenarios require little data (like classifying an apple against a banana), but others require more (like detecting 200 kinds of insects in a rainforest). This makes it difficult to give a single recommendation.
+
+If your data labeling budget is constrained, our recommended workflow is to repeat the following steps:
+
+1. Collect `N` images per class, where `N` is a number that's easy for you to collect (for example, `N=3`).
+1. Train a model and test it on your evaluation set.
+1. If the model performance is:
+
+   - **Good enough** (performance is better than your expectation, or is close to your previous experiment with less data collected): Stop here and use this model.
+   - **Not good** (performance is still below your expectation, or not better than your previous experiment with less data collected by a reasonable margin):
+ - Collect more images for each class&mdash;a number that's easy for you to collect&mdash;and go back to Step 2.
+     - If you notice the performance isn't improving anymore after a few iterations, it could be because:
+       - the problem isn't well defined or is too hard. Reach out to us for case-by-case analysis.
+       - the training data might be of low quality: check whether there are wrong annotations or very low-resolution images.
++
+### How much training budget should I specify?
+
+You should specify the upper limit of budget that you're willing to consume. Image Analysis uses an AutoML system in its backend to try out different models and training recipes to find the best model for your use case. The more budget that's given, the higher the chance of finding a better model.
+
+The AutoML system also stops automatically if it concludes there's no need to try more, even if there is still remaining budget. So, it doesn't always exhaust your specified budget. You're guaranteed not to be billed over your specified budget.
+
+### Can I control the hyper-parameters or use my own models in training?
+
+No. The Image Analysis model customization service uses a low-code AutoML training system that handles hyperparameter search and base model selection in the backend.
+
+### Can I export my model after training?
+
+No. The prediction API is only supported through the cloud service.
+
+### Why does the evaluation fail for my object detection model?
+
+Below are the possible reasons:
+- `internalServerError`: An unknown error occurred. Please try again later.
+- `modelNotFound`: The specified model was not found.
+- `datasetNotFound`: The specified dataset was not found.
+- `datasetAnnotationsInvalid`: An error occurred while trying to download or parse the ground truth annotations associated with the test dataset.
+- `datasetEmpty`: The test dataset did not contain any "ground truth" annotations.
+
+### What is the expected latency for predictions with custom models?
+
+We don't recommend using custom models in business-critical environments due to potentially high latency. When customers train custom models in Vision Studio, those custom models belong to the Azure AI Vision resource that they were trained under, and the customer is able to make calls to those models using the **Analyze Image** API. When they make these calls, the custom model is loaded in memory, and the prediction infrastructure is initialized. While this happens, customers might experience longer than expected latency to receive prediction results.
+
+## Data privacy and security
+
+As with all of the Azure AI services, developers using Image Analysis model customization should be aware of Microsoft's policies on customer data. See the [Azure AI services page](https://www.microsoft.com/trustcenter/cloudservices/cognitiveservices) on the Microsoft Trust Center to learn more.
+
+## Next steps
+
+[Create and train a custom model](./how-to/model-customization.md)
ai-services Concept Object Detection 40 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/computer-vision/concept-object-detection-40.md
+
+ Title: Object detection - Image Analysis 4.0
+
+description: Learn concepts related to the object detection feature of the Image Analysis 4.0 API - usage and limits.
+++++++ Last updated : 01/24/2023++++
+# Object detection (version 4.0 preview)
+
+Object detection is similar to [tagging](concept-tag-images-40.md), but the API returns the bounding box coordinates (in pixels) for each object found in the image. For example, if an image contains a dog, cat and person, the object detection operation will list those objects with their coordinates in the image. You can use this functionality to process the relationships between the objects in an image. It also lets you determine whether there are multiple instances of the same object in an image.
+
+The object detection function applies tags based on the objects or living things identified in the image. There is currently no formal relationship between the tagging taxonomy and the object detection taxonomy. At a conceptual level, the object detection function only finds objects and living things, while the tag function can also include contextual terms like "indoor", which can't be localized with bounding boxes.
+
+Try out the capabilities of object detection quickly and easily in your browser using Vision Studio.
+
+> [!div class="nextstepaction"]
+> [Try Vision Studio](https://portal.vision.cognitive.azure.com/)
+
+## Object detection example
+
+The following JSON response illustrates what the Analysis 4.0 API returns when detecting objects in the example image.
+
+![A woman using a Microsoft Surface device in a kitchen](./Images/windows-kitchen.jpg)
+++
+```json
+{
+ "metadata":
+ {
+ "width": 1260,
+ "height": 473
+ },
+ "objectsResult":
+ {
+ "values":
+ [
+ {
+ "name": "kitchen appliance",
+ "confidence": 0.501,
+ "boundingBox": {"x":730,"y":66,"w":135,"h":85}
+ },
+ {
+ "name": "computer keyboard",
+ "confidence": 0.51,
+ "boundingBox": {"x":523,"y":377,"w":185,"h":46}
+ },
+ {
+ "name": "Laptop",
+ "confidence": 0.85,
+ "boundingBox": {"x":471,"y":218,"w":289,"h":226}
+ },
+ {
+ "name": "person",
+ "confidence": 0.855,
+ "boundingBox": {"x":654,"y":0,"w":584,"h":473}
+ }
+ ]
+ }
+}
+```
+
+## Limitations
+
+It's important to note the limitations of object detection so you can avoid or mitigate the effects of false negatives (missed objects) and limited detail.
+
+* Objects are generally not detected if they're small (less than 5% of the image).
+* Objects are generally not detected if they're arranged closely together (a stack of plates, for example).
+* Objects are not differentiated by brand or product names (different types of sodas on a store shelf, for example). However, you can get brand information from an image by using the [Brand detection](concept-brand-detection.md) feature.
+
+## Use the API
+
+The object detection feature is part of the [Analyze Image](https://aka.ms/vision-4-0-ref) API. You can call this API using REST. Include `Objects` in the **features** query parameter. Then, when you get the full JSON response, parse the string for the contents of the `"objectsResult"` section.
+
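+Once you have the decoded JSON, extracting the detections takes only a short loop. The following is a minimal sketch that assumes a response shaped like the sample above.
+
+```python
+def list_detected_objects(analysis: dict) -> None:
+    """Print each detected object with its confidence and bounding box (in pixels)."""
+    for obj in analysis.get("objectsResult", {}).get("values", []):
+        box = obj["boundingBox"]
+        print(f'{obj["name"]} ({obj["confidence"]:.2f}) at '
+              f'x={box["x"]}, y={box["y"]}, w={box["w"]}, h={box["h"]}')
+
+# Trimmed-down version of the sample response above.
+sample = {
+    "objectsResult": {
+        "values": [
+            {"name": "Laptop", "confidence": 0.85,
+             "boundingBox": {"x": 471, "y": 218, "w": 289, "h": 226}}
+        ]
+    }
+}
+list_detected_objects(sample)  # Laptop (0.85) at x=471, y=218, w=289, h=226
+```
+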
+## Next steps
+
+* [Call the Analyze Image API](./how-to/call-analyze-image-40.md)
+
ai-services Concept Object Detection https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/computer-vision/concept-object-detection.md
+
+ Title: Object detection - Azure AI Vision
+
+description: Learn concepts related to the object detection feature of the Azure AI Vision API - usage and limits.
+++++++ Last updated : 11/03/2022++++
+# Object detection
+
+Object detection is similar to [tagging](concept-tagging-images.md), but the API returns the bounding box coordinates (in pixels) for each object found in the image. For example, if an image contains a dog, cat and person, the Detect operation will list those objects with their coordinates in the image. You can use this functionality to process the relationships between the objects in an image. It also lets you determine whether there are multiple instances of the same object in an image.
+
+The object detection function applies tags based on the objects or living things identified in the image. There is currently no formal relationship between the tagging taxonomy and the object detection taxonomy. At a conceptual level, the object detection function only finds objects and living things, while the tag function can also include contextual terms like "indoor", which can't be localized with bounding boxes.
+
+Try out the capabilities of object detection quickly and easily in your browser using Vision Studio.
+
+> [!div class="nextstepaction"]
+> [Try Vision Studio](https://portal.vision.cognitive.azure.com/)
+
+## Object detection example
+
+The following JSON response illustrates what the Analyze API returns when detecting objects in the example image.
+
+![A woman using a Microsoft Surface device in a kitchen](./Images/windows-kitchen.jpg)
++
+```json
+{
+ "objects":[
+ {
+ "rectangle":{
+ "x":730,
+ "y":66,
+ "w":135,
+ "h":85
+ },
+ "object":"kitchen appliance",
+ "confidence":0.501
+ },
+ {
+ "rectangle":{
+ "x":523,
+ "y":377,
+ "w":185,
+ "h":46
+ },
+ "object":"computer keyboard",
+ "confidence":0.51
+ },
+ {
+ "rectangle":{
+ "x":471,
+ "y":218,
+ "w":289,
+ "h":226
+ },
+ "object":"Laptop",
+ "confidence":0.85,
+ "parent":{
+ "object":"computer",
+ "confidence":0.851
+ }
+ },
+ {
+ "rectangle":{
+ "x":654,
+ "y":0,
+ "w":584,
+ "h":473
+ },
+ "object":"person",
+ "confidence":0.855
+ }
+ ],
+ "requestId":"25018882-a494-4e64-8196-f627a35c1135",
+ "metadata":{
+ "height":473,
+ "width":1260,
+ "format":"Jpeg"
+ },
+ "modelVersion":"2021-05-01"
+}
+```
++
+## Limitations
+
+It's important to note the limitations of object detection so you can avoid or mitigate the effects of false negatives (missed objects) and limited detail.
+
+* Objects are generally not detected if they're small (less than 5% of the image).
+* Objects are generally not detected if they're arranged closely together (a stack of plates, for example).
+* Objects are not differentiated by brand or product names (different types of sodas on a store shelf, for example). However, you can get brand information from an image by using the [Brand detection](concept-brand-detection.md) feature.
+
+## Use the API
+
+The object detection feature is part of the [Analyze Image](https://westcentralus.dev.cognitive.microsoft.com/docs/services/computer-vision-v3-2/operations/56f91f2e778daf14a499f21b) API. You can call this API through a native SDK or through REST calls. Include `Objects` in the **visualFeatures** query parameter. Then, when you get the full JSON response, parse the string for the contents of the `"objects"` section.
++
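+The following is a minimal Python sketch of that REST call. The `visualFeatures` parameter name and the response fields come from the documentation above; the v3.2 path, resource name, and image URL are assumptions to verify against the API reference.
+
+```python
+import requests
+
+endpoint = "https://<your-resource>.cognitiveservices.azure.com"  # placeholder
+key = "<your-key>"
+
+params = {"visualFeatures": "Objects"}
+headers = {"Ocp-Apim-Subscription-Key": key, "Content-Type": "application/json"}
+body = {"url": "https://example.com/kitchen.jpg"}  # hypothetical image URL
+
+analysis = requests.post(
+    f"{endpoint}/vision/v3.2/analyze",  # assumed v3.2 operation path
+    params=params, headers=headers, json=body,
+).json()
+
+# Each entry in "objects" has a pixel rectangle, an object name, and a confidence score.
+for obj in analysis.get("objects", []):
+    print(obj["object"], obj["confidence"], obj["rectangle"])
+```
+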
+* [Quickstart: Vision REST API or client libraries](./quickstarts-sdk/image-analysis-client-library.md?pivots=programming-language-csharp)
ai-services Concept Ocr https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/computer-vision/concept-ocr.md
+
+ Title: OCR for images - Azure AI Vision
+
+description: Extract text from in-the-wild and non-document images with a fast and synchronous Azure AI Vision Image Analysis 4.0 API.
+++++++ Last updated : 07/04/2023+++
+# OCR for images (version 4.0 preview)
+
+> [!NOTE]
+>
+> For extracting text from PDF, Office, and HTML documents and document images, use the [Document Intelligence Read OCR model](../../ai-services/document-intelligence/concept-read.md), which is optimized for text-heavy digital and scanned documents and uses an asynchronous API that makes it easy to power your intelligent document processing scenarios.
+
+OCR traditionally started as a machine-learning-based technique for extracting text from in-the-wild and non-document images like product labels, user-generated images, screenshots, street signs, and posters. For several scenarios, such as single images that aren't text-heavy, you need a fast, synchronous API or service. This allows OCR to be embedded in near real-time user experiences to enrich content understanding and follow-up user actions with fast turn-around times.
+
+## What is Computer Vision v4.0 Read OCR (preview)?
+
+The new Computer Vision Image Analysis 4.0 REST API offers the ability to extract printed or handwritten text from images in a unified, performance-enhanced synchronous API that makes it easy to get all image insights, including OCR results, in a single API operation. The Read OCR engine is built on top of multiple deep learning models supported by universal script-based models for [global language support](./language-support.md).
+
+## Text extraction example
+
+The following JSON response illustrates what the Image Analysis 4.0 API returns when extracting text from the given image.
+
+![Photo of a sticky note with writing on it.](./Images/handwritten-note.jpg)
+
+```json
+{
+ "metadata":
+ {
+ "width": 1000,
+ "height": 945
+ },
+ "readResult":
+ {
+ "stringIndexType": "TextElements",
+ "content": "You must be the change you\nWish to see in the world !\nEverything has its beauty , but\nnot everyone sees it !",
+ "pages":
+ [
+ {
+ "height": 945,
+ "width": 1000,
+ "angle": -1.099,
+ "pageNumber": 1,
+ "words":
+ [
+ {
+ "content": "You",
+ "boundingBox": [253,268,301,267,304,318,256,318],
+ "confidence": 0.998,
+ "span": {"offset":0,"length":3}
+ },
+ {
+ "content": "must",
+ "boundingBox": [310,266,376,265,378,316,313,317],
+ "confidence": 0.988,
+ "span": {"offset":4,"length":4}
+ },
+ {
+ "content": "be",
+ "boundingBox": [385,264,426,264,428,314,388,316],
+ "confidence": 0.928,
+ "span": {"offset":9,"length":2}
+ },
+ {
+ "content": "the",
+ "boundingBox": [435,263,494,263,496,311,437,314],
+ "confidence": 0.997,
+ "span": {"offset":12,"length":3}
+ },
+ {
+ "content": "change",
+ "boundingBox": [503,263,600,262,602,306,506,311],
+ "confidence": 0.995,
+ "span": {"offset":16,"length":6}
+ },
+ {
+ "content": "you",
+ "boundingBox": [609,262,665,263,666,302,611,305],
+ "confidence": 0.998,
+ "span": {"offset":23,"length":3}
+ },
+ {
+ "content": "Wish",
+ "boundingBox": [327,348,391,343,392,380,328,382],
+ "confidence": 0.98,
+ "span": {"offset":27,"length":4}
+ },
+ {
+ "content": "to",
+ "boundingBox": [406,342,438,340,439,378,407,379],
+ "confidence": 0.997,
+ "span": {"offset":32,"length":2}
+ },
+ {
+ "content": "see",
+ "boundingBox": [446,340,492,337,494,376,447,378],
+ "confidence": 0.998,
+ "span": {"offset":35,"length":3}
+ },
+ {
+ "content": "in",
+ "boundingBox": [500,337,527,336,529,375,501,376],
+ "confidence": 0.983,
+ "span": {"offset":39,"length":2}
+ },
+ {
+ "content": "the",
+ "boundingBox": [534,336,588,334,590,373,536,375],
+ "confidence": 0.993,
+ "span": {"offset":42,"length":3}
+ },
+ {
+ "content": "world",
+ "boundingBox": [599,334,655,333,658,371,601,373],
+ "confidence": 0.998,
+ "span": {"offset":46,"length":5}
+ },
+ {
+ "content": "!",
+ "boundingBox": [663,333,687,333,690,370,666,371],
+ "confidence": 0.915,
+ "span": {"offset":52,"length":1}
+ },
+ {
+ "content": "Everything",
+ "boundingBox": [255,446,371,441,372,490,256,494],
+ "confidence": 0.97,
+ "span": {"offset":54,"length":10}
+ },
+ {
+ "content": "has",
+ "boundingBox": [380,441,421,440,421,488,381,489],
+ "confidence": 0.793,
+ "span": {"offset":65,"length":3}
+ },
+ {
+ "content": "its",
+ "boundingBox": [430,440,471,439,471,487,431,488],
+ "confidence": 0.998,
+ "span": {"offset":69,"length":3}
+ },
+ {
+ "content": "beauty",
+ "boundingBox": [480,439,552,439,552,485,481,487],
+ "confidence": 0.296,
+ "span": {"offset":73,"length":6}
+ },
+ {
+ "content": ",",
+ "boundingBox": [561,439,571,439,571,485,562,485],
+ "confidence": 0.742,
+ "span": {"offset":80,"length":1}
+ },
+ {
+ "content": "but",
+ "boundingBox": [580,439,636,439,636,485,580,485],
+ "confidence": 0.885,
+ "span": {"offset":82,"length":3}
+ },
+ {
+ "content": "not",
+ "boundingBox": [364,516,412,512,413,546,366,549],
+ "confidence": 0.994,
+ "span": {"offset":86,"length":3}
+ },
+ {
+ "content": "everyone",
+ "boundingBox": [422,511,520,504,521,540,423,545],
+ "confidence": 0.993,
+ "span": {"offset":90,"length":8}
+ },
+ {
+ "content": "sees",
+ "boundingBox": [530,503,586,500,588,538,531,540],
+ "confidence": 0.988,
+ "span": {"offset":99,"length":4}
+ },
+ {
+ "content": "it",
+ "boundingBox": [596,500,627,498,628,536,598,537],
+ "confidence": 0.998,
+ "span": {"offset":104,"length":2}
+ },
+ {
+ "content": "!",
+ "boundingBox": [634,498,657,497,659,536,635,536],
+ "confidence": 0.994,
+ "span": {"offset":107,"length":1}
+ }
+ ],
+ "spans":
+ [
+ {
+ "offset": 0,
+ "length": 108
+ }
+ ],
+ "lines":
+ [
+ {
+ "content": "You must be the change you",
+ "boundingBox": [253,267,670,262,671,307,254,318],
+ "spans": [{"offset":0,"length":26}]
+ },
+ {
+ "content": "Wish to see in the world !",
+ "boundingBox": [326,343,691,332,693,369,327,382],
+ "spans": [{"offset":27,"length":26}]
+ },
+ {
+ "content": "Everything has its beauty , but",
+ "boundingBox": [254,443,640,438,641,485,255,493],
+ "spans": [{"offset":54,"length":31}]
+ },
+ {
+ "content": "not everyone sees it !",
+ "boundingBox": [364,512,658,496,660,534,365,549],
+ "spans": [{"offset":86,"length":22}]
+ }
+ ]
+ }
+ ],
+ "styles":
+ [
+ {
+ "isHandwritten": true,
+ "spans":
+ [
+ {
+ "offset": 0,
+ "length": 26
+ }
+ ],
+ "confidence": 0.95
+ },
+ {
+ "isHandwritten": true,
+ "spans":
+ [
+ {
+ "offset": 27,
+ "length": 58
+ }
+ ],
+ "confidence": 1
+ },
+ {
+ "isHandwritten": true,
+ "spans":
+ [
+ {
+ "offset": 86,
+ "length": 22
+ }
+ ],
+ "confidence": 0.9
+ }
+ ]
+ }
+}
+```
+
+## Use the API
+
+The text extraction feature is part of the [Analyze Image API](https://aka.ms/vision-4-0-ref). Include `Read` in the **features** query parameter. Then, when you get the full JSON response, parse the string for the contents of the `"readResult"` section.
++
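+As a hedged convenience, the recognized text can be reassembled from a response shaped like the sample above with a short helper; the field names are taken from that sample.
+
+```python
+def extract_lines(analysis: dict) -> list:
+    """Return the recognized text lines from an Image Analysis 4.0 read result."""
+    lines = []
+    for page in analysis.get("readResult", {}).get("pages", []):
+        for line in page.get("lines", []):
+            lines.append(line["content"])
+    return lines
+
+# With the sample response above, this returns four lines, the first being
+# "You must be the change you".
+```
+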
+## Next steps
+
+Follow the [Image Analysis quickstart](./quickstarts-sdk/image-analysis-client-library-40.md) to extract text from an image using the Image Analysis 4.0 API.
ai-services Concept People Detection https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/computer-vision/concept-people-detection.md
+
+ Title: People detection - Azure AI Vision
+
+description: Learn concepts related to the people detection feature of the Azure AI Vision API - usage and limits.
++++++++ Last updated : 09/12/2022+++
+# People detection (version 4.0 preview)
+
+Version 4.0 of Image Analysis offers the ability to detect people appearing in images. The bounding box coordinates of each detected person are returned, along with a confidence score.
+
+> [!IMPORTANT]
+> We built this model by enhancing our object detection model for person detection scenarios. People detection does not involve distinguishing one face from another face, predicting or classifying facial attributes, or creating a facial template (a unique set of numbers generated from an image that represents the distinctive features of a face).
+
+## People detection example
+
+The following JSON response illustrates what the Analysis 4.0 API returns when describing the example image based on its visual features.
+
+![Photo of four people.](./Images/family_photo.png)
+
+```json
+{
+ "modelVersion": "2023-02-01-preview",
+ "metadata": {
+ "width": 300,
+ "height": 231
+ },
+ "peopleResult": {
+ "values": [
+ {
+ "boundingBox": {
+ "x": 0,
+ "y": 41,
+ "w": 95,
+ "h": 189
+ },
+ "confidence": 0.9474349617958069
+ },
+ {
+ "boundingBox": {
+ "x": 204,
+ "y": 96,
+ "w": 95,
+ "h": 134
+ },
+ "confidence": 0.9470965266227722
+ },
+ {
+ "boundingBox": {
+ "x": 53,
+ "y": 20,
+ "w": 136,
+ "h": 210
+ },
+ "confidence": 0.8943784832954407
+ },
+ {
+ "boundingBox": {
+ "x": 170,
+ "y": 31,
+ "w": 91,
+ "h": 199
+ },
+ "confidence": 0.2713555097579956
+ }
+ ]
+ }
+}
+```
+
+## Use the API
+
+The people detection feature is part of the [Image Analysis 4.0 API](https://aka.ms/vision-4-0-ref). Include `People` in the **features** query parameter. Then, when you get the full JSON response, parse the string for the contents of the `"peopleResult"` section.
+
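+Because low-confidence detections (like the 0.27 entry in the sample above) are often noise, a typical first step is to filter by confidence. The following is a minimal sketch that assumes a response shaped like that sample.
+
+```python
+def confident_people(analysis: dict, threshold: float = 0.5) -> list:
+    """Return bounding boxes of detected people above a confidence threshold."""
+    values = analysis.get("peopleResult", {}).get("values", [])
+    return [p["boundingBox"] for p in values if p["confidence"] >= threshold]
+
+# With the sample response above and the default threshold of 0.5,
+# three of the four detections are kept.
+```
+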
+## Next steps
+
+* [Call the Analyze Image API](./how-to/call-analyze-image-40.md)
ai-services Concept Shelf Analysis https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/computer-vision/concept-shelf-analysis.md
+
+ Title: Product Recognition - Image Analysis 4.0
+
+description: Learn concepts related to the Product Recognition feature set of Image Analysis 4.0 - usage and limits.
+++++++ Last updated : 05/03/2023++++
+# Product Recognition (version 4.0 preview)
+
+The Product Recognition APIs let you analyze photos of shelves in a retail store. You can detect the presence of products and get their bounding box coordinates. Use it in combination with model customization to train a model to identify your specific products. You can also compare Product Recognition results to your store's planogram document.
+
+Try out the capabilities of Product Recognition quickly and easily in your browser using Vision Studio.
+
+> [!div class="nextstepaction"]
+> [Try Vision Studio](https://portal.vision.cognitive.azure.com/)
++
+> [!NOTE]
+> The brands shown in the images are not affiliated with Microsoft and do not indicate any form of endorsement of Microsoft or Microsoft products by the brand owners, or an endorsement of the brand owners or their products by Microsoft.
+
+## Product Recognition features
+
+### Image modification
+
+The [stitching and rectification APIs](./how-to/shelf-modify-images.md) let you modify images to improve the accuracy of the Product Understanding results. You can use these APIs to:
+* Stitch together multiple images of a shelf to create a single image.
+* Rectify an image to remove perspective distortion.
+
+### Product Understanding (pretrained)
+
+The [Product Understanding API](./how-to/shelf-analyze.md) lets you analyze a shelf image using the out-of-box pretrained model. This operation detects products and gaps in the shelf image and returns the bounding box coordinates of each product and gap, along with a confidence score for each.
+
+The following JSON response illustrates what the Product Understanding API returns.
+
+```json
+{
+ "imageMetadata": {
+ "width": 2000,
+ "height": 1500
+ },
+ "products": [
+ {
+ "id": "string",
+ "boundingBox": {
+ "x": 1234,
+ "y": 1234,
+ "w": 12,
+ "h": 12
+ },
+ "classifications": [
+ {
+ "confidence": 0.9,
+ "label": "string"
+ }
+ ]
+ }
+ ],
+ "gaps": [
+ {
+ "id": "string",
+ "boundingBox": {
+ "x": 1234,
+ "y": 1234,
+ "w": 123,
+ "h": 123
+ },
+ "classifications": [
+ {
+ "confidence": 0.8,
+ "label": "string"
+ }
+ ]
+ }
+ ]
+}
+```
+
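+As a hedged convenience, a response shaped like the one above can be summarized in a few lines of Python; the `products` and `gaps` field names are taken from the sample response, and the example counts are hypothetical.
+
+```python
+def count_shelf_detections(result: dict) -> tuple:
+    """Return (product_count, gap_count) from a Product Understanding response."""
+    return len(result.get("products", [])), len(result.get("gaps", []))
+
+# Example: count_shelf_detections(response_json) might return (38, 4) for a shelf
+# with 38 detected products and 4 detected gaps (hypothetical numbers).
+```
+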
+### Product Understanding (custom)
+
+The Product Understanding API can also be used with a [custom trained model](./how-to/shelf-model-customization.md) to detect your specific products. This operation returns the bounding box coordinates of each product and gap, along with the label of each product.
+
+The following JSON response illustrates what the Product Understanding API returns when used with a custom model.
+
+```json
+"detectedProducts": {
+ "imageMetadata": {
+ "width": 21,
+ "height": 25
+ },
+ "products": [
+ {
+ "id": "01",
+ "boundingBox": {
+ "x": 123,
+ "y": 234,
+ "w": 34,
+ "h": 45
+ },
+ "classifications": [
+ {
+ "confidence": 0.8,
+ "label": "Product1"
+ }
+ ]
+ }
+ ],
+ "gaps": [
+ {
+ "id": "02",
+ "boundingBox": {
+ "x": 12,
+ "y": 123,
+ "w": 1234,
+ "h": 123
+ },
+ "classifications": [
+ {
+ "confidence": 0.9,
+ "label": "Product1"
+ }
+ ]
+ }
+ ]
+}
+```
+
+### Planogram matching
+
+The [Planogram matching API](./how-to/shelf-planogram.md) lets you compare the results of the Product Understanding API to a planogram document. This operation matches each detected product and gap to its corresponding position in the planogram document.
+
+It returns a JSON response that accounts for each position in the planogram document, whether it's occupied by a product or gap.
+
+```json
+{
+ "matchedResultsPerPosition": [
+ {
+ "positionId": "01",
+ "detectedObject": {
+ "id": "01",
+ "boundingBox": {
+ "x": 12,
+ "y": 1234,
+ "w": 123,
+ "h": 12345
+ },
+ "classifications": [
+ {
+ "confidence": 0.9,
+ "label": "Product1"
+ }
+ ]
+ }
+ }
+ ]
+}
+```
+
+## Limitations
+
+* Product Recognition is only available in the **East US** and **West US 2** Azure regions.
+* Shelf images can be up to 20 MB in size. The recommended size is 4 MB.
+* We recommend you do [stitching and rectification](./how-to/shelf-modify-images.md) on the shelf images before uploading them for analysis.
+* Using a [custom model](./how-to/shelf-model-customization.md) is optional in Product Recognition, but it's required for the [planogram matching](./how-to/shelf-planogram.md) function.
++
+## Next steps
+
+Get started with Product Recognition by trying out the stitching and rectification APIs. Then do basic analysis with the Product Understanding API.
+* [Prepare images for Product Recognition](./how-to/shelf-modify-images.md)
+* [Analyze a shelf image](./how-to/shelf-analyze.md)
ai-services Concept Tag Images 40 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/computer-vision/concept-tag-images-40.md
+
+ Title: Content tags - Image Analysis 4.0
+
+description: Learn concepts related to the images tagging feature of the Image Analysis 4.0 API.
+++++++ Last updated : 01/24/2023++++
+# Image tagging (version 4.0 preview)
+
+Image Analysis can return content tags for thousands of recognizable objects, living beings, scenery, and actions that appear in images. Tagging is not limited to the main subject, such as a person in the foreground, but also includes the setting (indoor or outdoor), furniture, tools, plants, animals, accessories, gadgets, and so on. Tags are not organized as a taxonomy and do not have inheritance hierarchies. When tags are ambiguous or not common knowledge, the API response provides hints to clarify the meaning of the tag in context of a known setting.
+
+Try out the image tagging features quickly and easily in your browser using Vision Studio.
+
+> [!div class="nextstepaction"]
+> [Try Vision Studio](https://portal.vision.cognitive.azure.com/)
+
+## Image tagging example
+
+The following JSON response illustrates what Azure AI Vision returns when tagging visual features detected in the example image.
+
+![A blue house and the front yard](./Images/house_yard.png)
++
+```json
+{
+ "metadata":
+ {
+ "width": 300,
+ "height": 200
+ },
+ "tagsResult":
+ {
+ "values":
+ [
+ {
+ "name": "grass",
+ "confidence": 0.9960499405860901
+ },
+ {
+ "name": "outdoor",
+ "confidence": 0.9956876635551453
+ },
+ {
+ "name": "building",
+ "confidence": 0.9893627166748047
+ },
+ {
+ "name": "property",
+ "confidence": 0.9853052496910095
+ },
+ {
+ "name": "plant",
+ "confidence": 0.9791355729103088
+ },
+ {
+ "name": "sky",
+ "confidence": 0.976455569267273
+ },
+ {
+ "name": "home",
+ "confidence": 0.9732913374900818
+ },
+ {
+ "name": "house",
+ "confidence": 0.9726771116256714
+ },
+ {
+ "name": "real estate",
+ "confidence": 0.972320556640625
+ },
+ {
+ "name": "yard",
+ "confidence": 0.9480281472206116
+ },
+ {
+ "name": "siding",
+ "confidence": 0.945357620716095
+ },
+ {
+ "name": "porch",
+ "confidence": 0.9410697221755981
+ },
+ {
+ "name": "cottage",
+ "confidence": 0.9143695831298828
+ },
+ {
+ "name": "tree",
+ "confidence": 0.9111745357513428
+ },
+ {
+ "name": "farmhouse",
+ "confidence": 0.8988940119743347
+ },
+ {
+ "name": "window",
+ "confidence": 0.894851803779602
+ },
+ {
+ "name": "lawn",
+ "confidence": 0.894050121307373
+ },
+ {
+ "name": "backyard",
+ "confidence": 0.8931854963302612
+ },
+ {
+ "name": "garden buildings",
+ "confidence": 0.8859137296676636
+ },
+ {
+ "name": "roof",
+ "confidence": 0.8695330619812012
+ },
+ {
+ "name": "driveway",
+ "confidence": 0.8670969009399414
+ },
+ {
+ "name": "land lot",
+ "confidence": 0.856428861618042
+ },
+ {
+ "name": "landscaping",
+ "confidence": 0.8540748357772827
+ }
+ ]
+ }
+}
+```
+
+## Use the API
+
+The tagging feature is part of the [Analyze Image](https://aka.ms/vision-4-0-ref) API. You can call this API using REST. Include `Tags` in the **features** query parameter. Then, when you get the full JSON response, parse the string for the contents of the `"tagsResult"` section.
++
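+A common follow-up step is to keep only the highest-confidence tags. The following is a minimal sketch that assumes a response shaped like the sample above.
+
+```python
+def top_tags(analysis: dict, n: int = 5) -> list:
+    """Return the n highest-confidence (name, confidence) tag pairs."""
+    tags = analysis.get("tagsResult", {}).get("values", [])
+    ranked = sorted(tags, key=lambda t: t["confidence"], reverse=True)
+    return [(t["name"], t["confidence"]) for t in ranked[:n]]
+
+# With the sample response above, the top tag is ("grass", 0.996...).
+```
+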
+* [Quickstart: Image Analysis REST API or client libraries](./quickstarts-sdk/image-analysis-client-library-40.md?pivots=programming-language-csharp)
+
+## Next steps
+
+* Learn the related concept of [describing images](concept-describe-images-40.md).
+* [Call the Analyze Image API](./how-to/call-analyze-image-40.md)
+
ai-services Concept Tagging Images https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/computer-vision/concept-tagging-images.md
+
+ Title: Content tags - Azure AI Vision
+
+description: Learn concepts related to the images tagging feature of the Azure AI Vision API.
+++++++ Last updated : 09/20/2022++++
+# Image tagging
+
+Image Analysis can return content tags for thousands of recognizable objects, living beings, scenery, and actions that appear in images. Tags are not organized as a taxonomy and do not have inheritance hierarchies. A collection of content tags forms the foundation for an image [description](./concept-describing-images.md) displayed as human readable language formatted in complete sentences. When tags are ambiguous or not common knowledge, the API response provides hints to clarify the meaning of the tag in context of a known setting.
+
+After you upload an image or specify an image URL, the Analyze API can output tags based on the objects, living beings, and actions identified in the image. Tagging is not limited to the main subject, such as a person in the foreground, but also includes the setting (indoor or outdoor), furniture, tools, plants, animals, accessories, gadgets, and so on.
+
+Try out the image tagging features quickly and easily in your browser using Vision Studio.
+
+> [!div class="nextstepaction"]
+> [Try Vision Studio](https://portal.vision.cognitive.azure.com/)
+
+## Image tagging example
+
+The following JSON response illustrates what Azure AI Vision returns when tagging visual features detected in the example image.
+
+![A blue house and the front yard](./Images/house_yard.png)
+
+```json
+{
+ "tags":[
+ {
+ "name":"grass",
+ "confidence":0.9960499405860901
+ },
+ {
+ "name":"outdoor",
+ "confidence":0.9956876635551453
+ },
+ {
+ "name":"building",
+ "confidence":0.9893627166748047
+ },
+ {
+ "name":"property",
+ "confidence":0.9853052496910095
+ },
+ {
+ "name":"plant",
+ "confidence":0.9791355133056641
+ },
+ {
+ "name":"sky",
+ "confidence":0.9764555096626282
+ },
+ {
+ "name":"home",
+ "confidence":0.9732913970947266
+ },
+ {
+ "name":"house",
+ "confidence":0.9726772904396057
+ },
+ {
+ "name":"real estate",
+ "confidence":0.972320556640625
+ },
+ {
+ "name":"yard",
+ "confidence":0.9480282068252563
+ },
+ {
+ "name":"siding",
+ "confidence":0.945357620716095
+ },
+ {
+ "name":"porch",
+ "confidence":0.9410697221755981
+ },
+ {
+ "name":"cottage",
+ "confidence":0.9143695831298828
+ },
+ {
+ "name":"tree",
+ "confidence":0.9111741185188293
+ },
+ {
+ "name":"farmhouse",
+ "confidence":0.8988939523696899
+ },
+ {
+ "name":"window",
+ "confidence":0.894851565361023
+ },
+ {
+ "name":"lawn",
+ "confidence":0.8940501809120178
+ },
+ {
+ "name":"backyard",
+ "confidence":0.8931854963302612
+ },
+ {
+ "name":"garden buildings",
+ "confidence":0.885913610458374
+ },
+ {
+ "name":"roof",
+ "confidence":0.8695329427719116
+ },
+ {
+ "name":"driveway",
+ "confidence":0.8670971393585205
+ },
+ {
+ "name":"land lot",
+ "confidence":0.8564285039901733
+ },
+ {
+ "name":"landscaping",
+ "confidence":0.8540750741958618
+ }
+ ],
+ "requestId":"d60ac02b-966d-4f62-bc24-fbb1fec8bd5d",
+ "metadata":{
+ "height":200,
+ "width":300,
+ "format":"Png"
+ },
+ "modelVersion":"2021-05-01"
+}
+```
+
+## Use the API
+
+The tagging feature is part of the [Analyze Image](https://westcentralus.dev.cognitive.microsoft.com/docs/services/computer-vision-v3-2/operations/56f91f2e778daf14a499f21b) API. You can call this API through a native SDK or through REST calls. Include `Tags` in the **visualFeatures** query parameter. Then, when you get the full JSON response, parse the string for the contents of the `"tags"` section.
++
+* [Quickstart: Image Analysis REST API or client libraries](./quickstarts-sdk/image-analysis-client-library.md?pivots=programming-language-csharp)
+
+## Next steps
+
+Learn the related concepts of [categorizing images](concept-categorizing-images.md) and [describing images](concept-describing-images.md).
ai-services Deploy Computer Vision On Premises https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/computer-vision/deploy-computer-vision-on-premises.md
+
+ Title: Use Azure AI Vision container with Kubernetes and Helm
+
+description: Learn how to deploy the Azure AI Vision container using Kubernetes and Helm.
++++++ Last updated : 05/09/2022++++
+# Use Azure AI Vision container with Kubernetes and Helm
+
+One option to manage your Azure AI Vision containers on-premises is to use Kubernetes and Helm. Using Kubernetes and Helm to define an Azure AI Vision container image, we'll create a Kubernetes package. This package will be deployed to a Kubernetes cluster on-premises. Finally, we'll explore how to test the deployed services. For more information about running Docker containers without Kubernetes orchestration, see [install and run Azure AI Vision containers](computer-vision-how-to-install-containers.md).
+
+## Prerequisites
+
+The following prerequisites are required before you use Azure AI Vision containers on-premises:
+
+| Required | Purpose |
+|-||
+| Azure Account | If you don't have an Azure subscription, create a [free account][free-azure-account] before you begin. |
+| Kubernetes CLI | The [Kubernetes CLI][kubernetes-cli] is required for managing the shared credentials from the container registry. Kubernetes is also needed before Helm, which is the Kubernetes package manager. |
+| Helm CLI | Install the [Helm CLI][helm-install], which is used to install a helm chart (container package definition). |
+| Computer Vision resource |In order to use the container, you must have:<br><br>A **Computer Vision** resource and the associated API key and endpoint URI. Both values are available on the Overview and Keys pages for the resource and are required to start the container.<br><br>**{API_KEY}**: One of the two available resource keys on the **Keys** page<br><br>**{ENDPOINT_URI}**: The endpoint as provided on the **Overview** page|
++
+### The host computer
++
+### Container requirements and recommendations
++
+## Connect to the Kubernetes cluster
+
+The host computer is expected to have an available Kubernetes cluster. See this tutorial on [deploying a Kubernetes cluster](../../aks/tutorial-kubernetes-deploy-cluster.md) for a conceptual understanding of how to deploy a Kubernetes cluster to a host computer. You can find more information on deployments in the [Kubernetes documentation](https://kubernetes.io/docs/concepts/workloads/controllers/deployment/).
+
+## Configure Helm chart values for deployment
+
+Begin by creating a folder named *read*. Then, paste the following YAML content in a new file named `chart.yaml`:
+
+```yaml
+apiVersion: v2
+name: read
+version: 1.0.0
+description: A Helm chart to deploy the Read OCR container to a Kubernetes cluster
+dependencies:
+- name: rabbitmq
+ condition: read.image.args.rabbitmq.enabled
+ version: ^6.12.0
+ repository: https://kubernetes-charts.storage.googleapis.com/
+- name: redis
+ condition: read.image.args.redis.enabled
+ version: ^6.0.0
+ repository: https://kubernetes-charts.storage.googleapis.com/
+```
+
+To configure the Helm chart default values, copy and paste the following YAML into a file named `values.yaml`. Replace the `# {ENDPOINT_URI}` and `# {API_KEY}` comments with your own values. Configure `resultExpirationPeriod`, Redis, and RabbitMQ if needed.
+
+```yaml
+# These settings are deployment specific and users can provide customizations
+read:
+ enabled: true
+ image:
+ name: cognitive-services-read
+ registry: mcr.microsoft.com/
+ repository: azure-cognitive-services/vision/read
+ tag: 3.2-preview.1
+ args:
+ eula: accept
+ billing: # {ENDPOINT_URI}
+ apikey: # {API_KEY}
+
+ # Result expiration period setting. Specify when the system should clean up recognition results.
+ # For example, resultExpirationPeriod=1, the system will clear the recognition result 1hr after the process.
+ # resultExpirationPeriod=0, the system will clear the recognition result after result retrieval.
+ resultExpirationPeriod: 1
+
+ # Redis storage, if configured, will be used by read OCR container to store result records.
+ # A cache is required if multiple read OCR containers are placed behind load balancer.
+ redis:
+ enabled: false # {true/false}
+ password: password
+
+ # RabbitMQ is used for dispatching tasks. This can be useful when multiple read OCR containers are
+ # placed behind load balancer.
+ rabbitmq:
+ enabled: false # {true/false}
+ rabbitmq:
+ username: user
+ password: password
+```
+
+> [!IMPORTANT]
+> - If the `billing` and `apikey` values aren't provided, the services expire after 15 minutes. Likewise, verification fails because the services aren't available.
+>
+> - If you deploy multiple Read OCR containers behind a load balancer, for example, under Docker Compose or Kubernetes, you must have an external cache. Because the processing container and the GET request container might not be the same, an external cache stores the results and shares them across containers. For details about cache settings, see [Configure Azure AI Vision Docker containers](./computer-vision-resource-container-config.md).
+>
+
+Create a *templates* folder under the *read* directory. Copy and paste the following YAML into a file named `deployment.yaml`. The `deployment.yaml` file will serve as a Helm template.
+
+> Templates generate manifest files, which are YAML-formatted resource descriptions that Kubernetes can understand. [- Helm Chart Template Guide][chart-template-guide]
+
+```yaml
+apiVersion: apps/v1
+kind: Deployment
+metadata:
+ name: read
+ labels:
+ app: read-deployment
+spec:
+ selector:
+ matchLabels:
+ app: read-app
+ template:
+ metadata:
+ labels:
+ app: read-app
+ spec:
+ containers:
+ - name: {{.Values.read.image.name}}
+ image: {{.Values.read.image.registry}}{{.Values.read.image.repository}}
+ ports:
+ - containerPort: 5000
+ env:
+ - name: EULA
+ value: {{.Values.read.image.args.eula}}
+ - name: billing
+ value: {{.Values.read.image.args.billing}}
+ - name: apikey
+ value: {{.Values.read.image.args.apikey}}
+ args:
+ - ReadEngineConfig:ResultExpirationPeriod={{ .Values.read.image.args.resultExpirationPeriod }}
+ {{- if .Values.read.image.args.rabbitmq.enabled }}
+ - Queue:RabbitMQ:HostName={{ include "rabbitmq.hostname" . }}
+ - Queue:RabbitMQ:Username={{ .Values.read.image.args.rabbitmq.rabbitmq.username }}
+ - Queue:RabbitMQ:Password={{ .Values.read.image.args.rabbitmq.rabbitmq.password }}
+ {{- end }}
+ {{- if .Values.read.image.args.redis.enabled }}
+ - Cache:Redis:Configuration={{ include "redis.connStr" . }}
+ {{- end }}
+ imagePullSecrets:
+ - name: {{.Values.read.image.pullSecret}}
+
+---
+apiVersion: v1
+kind: Service
+metadata:
+ name: read-service
+spec:
+ type: LoadBalancer
+ ports:
+ - port: 5000
+ selector:
+ app: read-app
+```
+
+In the same *templates* folder, copy and paste the following helper functions into `helpers.tpl`. `helpers.tpl` defines useful functions to help generate Helm template.
+
+```yaml
+{{- define "rabbitmq.hostname" -}}
+{{- printf "%s-rabbitmq" .Release.Name -}}
+{{- end -}}
+
+{{- define "redis.connStr" -}}
+{{- $hostMain := printf "%s-redis-master:6379" .Release.Name }}
+{{- $hostReplica := printf "%s-redis-replica:6379" .Release.Name -}}
+{{- $passWord := printf "password=%s" .Values.read.image.args.redis.password -}}
+{{- $connTail := "ssl=False,abortConnect=False" -}}
+{{- printf "%s,%s,%s,%s" $hostMain $hostReplica $passWord $connTail -}}
+{{- end -}}
+```
+The template specifies a load balancer service and the deployment of your container/image for Read.
+
+### The Kubernetes package (Helm chart)
+
+The *Helm chart* contains the configuration of which docker image(s) to pull from the `mcr.microsoft.com` container registry.
+
+> A [Helm chart][helm-charts] is a collection of files that describe a related set of Kubernetes resources. A single chart might be used to deploy something simple, like a memcached pod, or something complex, like a full web app stack with HTTP servers, databases, caches, and so on.
+
+The provided *Helm charts* pull the Docker images of the Azure AI Vision service and the corresponding service from the `mcr.microsoft.com` container registry.
+
+## Install the Helm chart on the Kubernetes cluster
+
+To install the *Helm chart*, we'll need to execute the [`helm install`][helm-install-cmd] command. Be sure to execute the install command from the directory above the `read` folder.
+
+```console
+helm install read ./read
+```
+
+Here is an example output you might expect to see from a successful install execution:
+
+```console
+NAME: read
+LAST DEPLOYED: Thu Sep 04 13:24:06 2019
+NAMESPACE: default
+STATUS: DEPLOYED
+
+RESOURCES:
+==> v1/Pod(related)
+NAME READY STATUS RESTARTS AGE
+read-57cb76bcf7-45sdh 0/1 ContainerCreating 0 0s
+
+==> v1/Service
+NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
+read LoadBalancer 10.110.44.86 localhost 5000:31301/TCP 0s
+
+==> v1beta1/Deployment
+NAME READY UP-TO-DATE AVAILABLE AGE
+read 0/1 1 0 0s
+```
+
+The Kubernetes deployment can take several minutes to complete. To confirm that both pods and services are properly deployed and available, execute the following command:
+
+```console
+kubectl get all
+```
+
+You should expect to see something similar to the following output:
+
+```console
+kubectl get all
+NAME READY STATUS RESTARTS AGE
+pod/read-57cb76bcf7-45sdh 1/1 Running 0 17s
+
+NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
+service/kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 45h
+service/read LoadBalancer 10.110.44.86 localhost 5000:31301/TCP 17s
+
+NAME READY UP-TO-DATE AVAILABLE AGE
+deployment.apps/read 1/1 1 1 17s
+
+NAME DESIRED CURRENT READY AGE
+replicaset.apps/read-57cb76bcf7 1 1 1 17s
+```
+
+## Deploy multiple v3 containers on the Kubernetes cluster
+
+Starting in v3 of the container, you can use the containers in parallel at both the task and page level.
+
+By design, each v3 container has a dispatcher and a recognition worker. The dispatcher is responsible for splitting a multi-page task into multiple single page sub-tasks. The recognition worker is optimized for recognizing a single page document. To achieve page level parallelism, deploy multiple v3 containers behind a load balancer and let the containers share a universal storage and queue.
+
+> [!NOTE]
+> Currently only Azure Storage and Azure Queue are supported.
+
+The container receiving the request can split the task into single page sub-tasks, and add them to the universal queue. Any recognition worker from a less busy container can consume single page sub-tasks from the queue, perform recognition, and upload the result to the storage. The throughput can be improved up to `n` times, depending on the number of containers that are deployed.
+
+The v3 container exposes the liveness probe API under the `/ContainerLiveness` path. Use the following deployment example to configure a liveness probe for Kubernetes.
+
+Copy and paste the following YAML into a file named `deployment.yaml`. Replace the `# {ENDPOINT_URI}` and `# {API_KEY}` comments with your own values. Replace the `# {AZURE_STORAGE_CONNECTION_STRING}` comment with your Azure Storage connection string. Replace the `# {NUMBER_OF_READ_CONTAINERS}` comment with the number of replicas you want, for example `3`.
+
+```yaml
+apiVersion: apps/v1
+kind: Deployment
+metadata:
+ name: read
+ labels:
+ app: read-deployment
+spec:
+ selector:
+ matchLabels:
+ app: read-app
+ replicas: # {NUMBER_OF_READ_CONTAINERS}
+ template:
+ metadata:
+ labels:
+ app: read-app
+ spec:
+ containers:
+ - name: cognitive-services-read
+ image: mcr.microsoft.com/azure-cognitive-services/vision/read
+ ports:
+ - containerPort: 5000
+ env:
+ - name: EULA
+ value: accept
+ - name: billing
+ value: # {ENDPOINT_URI}
+ - name: apikey
+ value: # {API_KEY}
+ - name: Storage__ObjectStore__AzureBlob__ConnectionString
+ value: # {AZURE_STORAGE_CONNECTION_STRING}
+ - name: Queue__Azure__ConnectionString
+ value: # {AZURE_STORAGE_CONNECTION_STRING}
+ livenessProbe:
+ httpGet:
+ path: /ContainerLiveness
+ port: 5000
+ initialDelaySeconds: 60
+ periodSeconds: 60
+ timeoutSeconds: 20
+---
+apiVersion: v1
+kind: Service
+metadata:
+ name: azure-cognitive-service-read
+spec:
+ type: LoadBalancer
+ ports:
+ - port: 5000
+ targetPort: 5000
+ selector:
+ app: read-app
+```
+
+Run the following command.
+
+```console
+kubectl apply -f deployment.yaml
+```
+
+Here's example output from a successful deployment:
+
+```console
+deployment.apps/read created
+service/azure-cognitive-service-read created
+```
+
+The Kubernetes deployment can take several minutes to complete. To confirm that both pods and services are properly deployed and available, execute the following command:
+
+```console
+kubectl get all
+```
+
+You should see console output similar to the following:
+
+```console
+kubectl get all
+NAME READY STATUS RESTARTS AGE
+pod/read-6cbbb6678-58s9t 1/1 Running 0 3s
+pod/read-6cbbb6678-kz7v4 1/1 Running 0 3s
+pod/read-6cbbb6678-s2pct 1/1 Running 0 3s
+
+NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
+service/azure-cognitive-service-read LoadBalancer 10.0.134.0 <none> 5000:30846/TCP 17h
+service/kubernetes ClusterIP 10.0.0.1 <none> 443/TCP 78d
+
+NAME READY UP-TO-DATE AVAILABLE AGE
+deployment.apps/read 3/3 3 3 3s
+
+NAME DESIRED CURRENT READY AGE
+replicaset.apps/read-6cbbb6678 3 3 3 3s
+```
+
+<!-- ## Validate container is running -->
++
+## Next steps
+
+For more details on installing applications with Helm in Azure Kubernetes Service (AKS), [visit here][installing-helm-apps-in-aks].
+
+> [!div class="nextstepaction"]
+> [Azure AI services Containers][cog-svcs-containers]
+
+<!-- LINKS - external -->
+[free-azure-account]: https://azure.microsoft.com/free
+[git-download]: https://git-scm.com/downloads
+[azure-cli]: /cli/azure/install-azure-cli
+[docker-engine]: https://www.docker.com/products/docker-engine
+[kubernetes-cli]: https://kubernetes.io/docs/tasks/tools/install-kubectl
+[helm-install]: https://helm.sh/docs/intro/install/
+[helm-install-cmd]: https://helm.sh/docs/intro/using_helm/#helm-install-installing-a-package
+[helm-charts]: https://helm.sh/docs/topics/charts/
+[kubectl-create]: https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#create
+[kubectl-get]: https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#get
+[helm-test]: https://helm.sh/docs/helm/#helm-test
+[chart-template-guide]: https://helm.sh/docs/chart_template_guide
+
+<!-- LINKS - internal -->
+[vision-container-host-computer]: computer-vision-how-to-install-containers.md#the-host-computer
+[installing-helm-apps-in-aks]: ../../aks/kubernetes-helm.md
+[cog-svcs-containers]: ../cognitive-services-container-support.md
ai-services Enrollment Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/computer-vision/enrollment-overview.md
+
+ Title: Best practices for adding users to a Face service
+
+description: Learn about the process of Face enrollment to register users in a face recognition service.
++++++ Last updated : 09/27/2021+++
+# Best practices for adding users to a Face service
+
+In order to use the Azure AI Face API for face verification or identification, you need to enroll faces into a **LargePersonGroup** or similar data structure. This deep-dive demonstrates best practices for gathering meaningful consent from users and example logic to create high-quality enrollments that will optimize recognition accuracy.
+
+## Meaningful consent
+
+One of the key purposes of an enrollment application for facial recognition is to give users the opportunity to consent to the use of images of their face for specific purposes, such as access to a worksite. Because facial recognition technologies may be perceived as collecting sensitive personal data, it's especially important to ask for consent in a way that is both transparent and respectful. Consent is meaningful to users when it empowers them to make the decision that they feel is best for them.
+
+Based on Microsoft user research, Microsoft's Responsible AI principles, and [external research](ftp://ftp.cs.washington.edu/tr/2000/12/UW-CSE-00-12-02.pdf), we have found that consent is meaningful when it offers the following to users enrolling in the technology:
+
+* Awareness: Users should have no doubt when they are being asked to provide their face template or enrollment photos.
+* Understanding: Users should be able to accurately describe in their own words what they're being asked for, by whom, to what end, and with what assurances.
+* Freedom of choice: Users should not feel coerced or manipulated when choosing whether to consent and enroll in facial recognition.
+* Control: Users should be able to revoke their consent and delete their data at any time.
+
+This section offers guidance for developing an enrollment application for facial recognition. This guidance has been developed based on Microsoft user research in the context of enrolling individuals in facial recognition for building entry. Therefore, these recommendations might not apply to all facial recognition solutions. Responsible use for Face API depends strongly on the specific context in which it's integrated, so the prioritization and application of these recommendations should be adapted to your scenario.
+
+> [!NOTE]
+> It is your responsibility to align your enrollment application with applicable legal requirements in your jurisdiction and accurately reflect all of your data collection and processing practices.
+
+## Application development
+
+Before you design an enrollment flow, think about how the application you're building can uphold the promises you make to users about how their data is protected. The following recommendations can help you build an enrollment experience that includes responsible approaches to securing personal data, managing users' privacy, and ensuring that the application is accessible to all users.
+
+|Category | Recommendations |
+|||
+|Hardware | Consider the camera quality of the enrollment device. |
+|Recommended enrollment features | Include a log-on step with multi-factor authentication. </br></br>Link user information like an alias or identification number with their face template ID from the Face API (known as person ID). This mapping is necessary to retrieve and manage a user's enrollment. Note: person ID should be treated as a secret in the application.</br></br>Set up an automated process to delete all enrollment data, including the face templates and enrollment photos of people who are no longer users of facial recognition technology, such as former employees. </br></br>Avoid auto-enrollment, as it does not give the user the awareness, understanding, freedom of choice, or control that is recommended for obtaining consent. </br></br>Ask users for permission to save the images used for enrollment. This is useful when there is a model update since new enrollment photos will be required to re-enroll in the new model about every 10 months. If the original images aren't saved, users will need to go through the enrollment process from the beginning.</br></br>Allow users to opt out of storing photos in the system. To make the choice clearer, you can add a second consent request screen for saving the enrollment photos. </br></br>If photos are saved, create an automated process to re-enroll all users when there is a model update. Users who saved their enrollment photos will not have to enroll themselves again. </br></br>Create an app feature that allows designated administrators to override certain quality filters if a user has trouble enrolling. |
+|Security | Azure AI services follow [best practices](../cognitive-services-virtual-networks.md?tabs=portal) for encrypting user data at rest and in transit. The following are other practices that can help uphold the security promises you make to users during the enrollment experience. </br></br>Take security measures to ensure that no one has access to the person ID at any point during enrollment. Note: PersonID should be treated as a secret in the enrollment system. </br></br>Use [role-based access control](../../role-based-access-control/overview.md) with Azure AI services. </br></br>Use token-based authentication and/or shared access signatures (SAS) over keys and secrets to access resources like databases. By using request or SAS tokens, you can grant limited access to data without compromising your account keys, and you can specify an expiry time on the token. </br></br>Never store any secrets, keys, or passwords in your app. |
+|User privacy |Provide a range of enrollment options to address different levels of privacy concerns. Do not mandate that people use their personal devices to enroll into a facial recognition system. </br></br>Allow users to re-enroll, revoke consent, and delete data from the enrollment application at any time and for any reason. |
+|Accessibility |Follow accessibility standards (for example, [ADA](https://www.ada.gov/regs2010/2010ADAStandards/2010ADAstandards.htm) or [W3C](https://www.w3.org/TR/WCAG21/)) to ensure the application is usable by people with mobility or visual impairments. |
+
+## Next steps
+
+Follow the [Build an enrollment app](Tutorials/build-enrollment-app.md) guide to get started with a sample enrollment app. Then customize it or write your own app to suit the needs of your product.
ai-services Add Faces https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/computer-vision/how-to/add-faces.md
+
+ Title: "Example: Add faces to a PersonGroup - Face"
+
+description: This guide demonstrates how to add a large number of persons and faces to a PersonGroup object with the Azure AI Face service.
++++++ Last updated : 04/10/2019+
+ms.devlang: csharp
+++
+# Add faces to a PersonGroup
++
+This guide demonstrates how to add a large number of persons and faces to a PersonGroup object. The same strategy also applies to LargePersonGroup, FaceList, and LargeFaceList objects. This sample is written in C# by using the Azure AI Face .NET client library.
+
+## Step 1: Initialization
+
+The following code declares several variables and implements a helper function to schedule the face add requests:
+
+- `PersonCount` is the total number of persons.
+- `CallLimitPerSecond` is the maximum calls per second according to the subscription tier.
+- `_timeStampQueue` is a Queue to record the request timestamps.
+- `await WaitCallLimitPerSecondAsync()` waits until it's valid to send the next request.
+
+```csharp
+const int PersonCount = 10000;
+const int CallLimitPerSecond = 10;
+static Queue<DateTime> _timeStampQueue = new Queue<DateTime>(CallLimitPerSecond);
+
+static async Task WaitCallLimitPerSecondAsync()
+{
+ Monitor.Enter(_timeStampQueue);
+ try
+ {
+ if (_timeStampQueue.Count >= CallLimitPerSecond)
+ {
+ TimeSpan timeInterval = DateTime.UtcNow - _timeStampQueue.Peek();
+ if (timeInterval < TimeSpan.FromSeconds(1))
+ {
+ await Task.Delay(TimeSpan.FromSeconds(1) - timeInterval);
+ }
+ _timeStampQueue.Dequeue();
+ }
+ _timeStampQueue.Enqueue(DateTime.UtcNow);
+ }
+ finally
+ {
+ Monitor.Exit(_timeStampQueue);
+ }
+}
+```
+
+## Step 2: Authorize the API call
+
+When you use the Face client library, the key and subscription endpoint are passed in through the constructor of the FaceClient class. See the [quickstart](../quickstarts-sdk/identity-client-library.md?pivots=programming-language-csharp&tabs=visual-studio) for instructions on creating a Face client object.
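+
+For reference, here's a minimal sketch of constructing the Face client with a key and endpoint. The environment variable names used here are illustrative placeholders, not part of the original guide.
+
+```csharp
+using System;
+using Microsoft.Azure.CognitiveServices.Vision.Face;
+
+// Read the key and endpoint from environment variables (names are illustrative).
+string faceKey = Environment.GetEnvironmentVariable("FACE_APIKEY");
+string faceEndpoint = Environment.GetEnvironmentVariable("FACE_ENDPOINT");
+
+// Create the client that the rest of this guide refers to as faceClient.
+FaceClient faceClient = new FaceClient(new ApiKeyServiceClientCredentials(faceKey))
+{
+    Endpoint = faceEndpoint
+};
+```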
++
+## Step 3: Create the PersonGroup
+
+A PersonGroup named "MyPersonGroup" is created to save the persons.
+The request time is enqueued in `_timeStampQueue` so that it's counted toward the overall rate-limit validation.
+
+```csharp
+const string personGroupId = "mypersongroupid";
+const string personGroupName = "MyPersonGroup";
+_timeStampQueue.Enqueue(DateTime.UtcNow);
+await faceClient.PersonGroup.CreateAsync(personGroupId, personGroupName);
+```
+
+## Step 4: Create the persons for the PersonGroup
+
+Persons are created concurrently, and `await WaitCallLimitPerSecondAsync()` is also applied to avoid exceeding the call limit.
+
+```csharp
+Person[] persons = new Person[PersonCount];
+Parallel.For(0, PersonCount, async i =>
+{
+ await WaitCallLimitPerSecondAsync();
+
+ string personName = $"PersonName#{i}";
+ persons[i] = await faceClient.PersonGroupPerson.CreateAsync(personGroupId, personName);
+});
+```
+
+## Step 5: Add faces to the persons
+
+Faces added to different persons are processed concurrently. Faces added for one specific person are processed sequentially.
+Again, `await WaitCallLimitPerSecondAsync()` is invoked to ensure that the request frequency is within the scope of limitation.
+
+```csharp
+Parallel.For(0, PersonCount, async i =>
+{
+ Guid personId = persons[i].PersonId;
+ string personImageDir = @"/path/to/person/i/images";
+
+ foreach (string imagePath in Directory.GetFiles(personImageDir, "*.jpg"))
+ {
+ await WaitCallLimitPerSecondAsync();
+
+ using (Stream stream = File.OpenRead(imagePath))
+ {
+ await faceClient.PersonGroupPerson.AddFaceFromStreamAsync(personGroupId, personId, stream);
+ }
+ }
+});
+```
+
+## Summary
+
+In this guide, you learned the process of creating a PersonGroup with a massive number of persons and faces. Several reminders:
+
+- This strategy also applies to FaceLists and LargePersonGroups.
+- Adding or deleting faces in different FaceLists, or for different persons in a LargePersonGroup, is processed concurrently.
+- Adding or deleting faces for one specific FaceList or person in a LargePersonGroup is done sequentially.
+- For simplicity, exception handling is omitted in this guide. To make your application more robust, apply an appropriate retry policy, as shown in the sketch after this list.
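+
+The following is a minimal sketch of such a retry wrapper. It isn't part of the original sample; the retry count, delay, and exception filter are assumptions you should tune for your workload.
+
+```csharp
+// Illustrative retry helper (requires: using System; using System.Threading.Tasks;).
+// Retries an async operation a few times with a fixed delay before giving up.
+static async Task<T> RunWithRetryAsync<T>(Func<Task<T>> operation, int retryCount = 3)
+{
+    for (int attempt = 1; ; attempt++)
+    {
+        try
+        {
+            return await operation();
+        }
+        catch (Exception) when (attempt < retryCount)
+        {
+            // Back off briefly, then retry. Consider catching only expected
+            // exceptions, such as rate-limit errors, in a real application.
+            await Task.Delay(TimeSpan.FromSeconds(1));
+        }
+    }
+}
+
+// Example usage with the face add call from step 5:
+// await RunWithRetryAsync(() =>
+//     faceClient.PersonGroupPerson.AddFaceFromStreamAsync(personGroupId, personId, stream));
+```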
+
+The following features were explained and demonstrated:
+
+- Create PersonGroups by using the [PersonGroup - Create](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f30395244) API.
+- Create persons by using the [PersonGroup Person - Create](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f3039523c) API.
+- Add faces to persons by using the [PersonGroup Person - Add Face](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f3039523b) API.
+
+## Next steps
+
+In this guide, you learned how to add face data to a **PersonGroup**. Next, learn how to use the enhanced data structure **PersonDirectory** to do more with your face data.
+
+- [Use the PersonDirectory structure (preview)](use-persondirectory.md)
ai-services Analyze Video https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/computer-vision/how-to/analyze-video.md
+
+ Title: Analyze videos in near real time - Azure AI Vision
+
+description: Learn how to perform near real-time analysis on frames that are taken from a live video stream by using the Azure AI Vision API.
+++++++ Last updated : 07/05/2022
+ms.devlang: csharp
+++
+# Analyze videos in near real time
+
+This article demonstrates how to perform near real-time analysis on frames that are taken from a live video stream by using the Azure AI Vision API. The basic elements of such an analysis are:
+
+- Acquiring frames from a video source.
+- Selecting which frames to analyze.
+- Submitting these frames to the API.
+- Consuming each analysis result that's returned from the API call.
+
+The samples in this article are written in C#. To access the code, go to the [Video frame analysis sample](https://github.com/Microsoft/Cognitive-Samples-VideoFrameAnalysis/) page on GitHub.
+
+## Approaches to running near real-time analysis
+
+You can solve the problem of running near real-time analysis on video streams by using a variety of approaches. This article outlines three of them, in increasing levels of sophistication.
+
+### Design an infinite loop
+
+The simplest design for near real-time analysis is an infinite loop. In each iteration of this loop, you grab a frame, analyze it, and then consume the result:
+
+```csharp
+while (true)
+{
+ Frame f = GrabFrame();
+ if (ShouldAnalyze(f))
+ {
+ AnalysisResult r = await Analyze(f);
+ ConsumeResult(r);
+ }
+}
+```
+
+If your analysis were to consist of a lightweight, client-side algorithm, this approach would be suitable. However, when the analysis occurs in the cloud, the resulting latency means that an API call might take several seconds. During this time, you're not capturing images, and your thread is essentially doing nothing. Your maximum frame rate is limited by the latency of the API calls; for example, if each call takes 800 milliseconds, you can process at most about one frame per second.
+
+### Allow the API calls to run in parallel
+
+Although a simple, single-threaded loop makes sense for a lightweight, client-side algorithm, it doesn't fit well with the latency of a cloud API call. The solution to this problem is to allow the long-running API call to run in parallel with the frame-grabbing. In C#, you could do this by using task-based parallelism. For example, you can run the following code:
+
+```csharp
+while (true)
+{
+ Frame f = GrabFrame();
+ if (ShouldAnalyze(f))
+ {
+ var t = Task.Run(async () =>
+ {
+ AnalysisResult r = await Analyze(f);
+ ConsumeResult(r);
+        });
+ }
+}
+```
+
+With this approach, you launch each analysis in a separate task. The task can run in the background while you continue grabbing new frames. The approach avoids blocking the main thread as you wait for an API call to return. However, the approach can present certain disadvantages:
+* It costs you some of the guarantees that the simple version provided. That is, multiple API calls might occur in parallel, and the results might get returned in the wrong order.
+* It could also cause multiple threads to enter the ConsumeResult() function simultaneously, which might be dangerous if the function isn't thread-safe.
+* Finally, this simple code doesn't keep track of the tasks that get created, so exceptions silently disappear. Thus, you need to add a "consumer" thread that tracks the analysis tasks, raises exceptions, kills long-running tasks, and ensures that the results get consumed in the correct order, one at a time.
+
+### Design a producer-consumer system
+
+For your final approach, designing a "producer-consumer" system, you build a producer thread that looks similar to your previously mentioned infinite loop. However, instead of consuming the analysis results as soon as they're available, the producer simply places the tasks in a queue to keep track of them.
+
+```csharp
+// Queue that will contain the API call tasks.
+var taskQueue = new BlockingCollection<Task<ResultWrapper>>();
+
+// Producer thread.
+while (true)
+{
+ // Grab a frame.
+ Frame f = GrabFrame();
+
+ // Decide whether to analyze the frame.
+ if (ShouldAnalyze(f))
+ {
+ // Start a task that will run in parallel with this thread.
+ var analysisTask = Task.Run(async () =>
+ {
+ // Put the frame, and the result/exception into a wrapper object.
+ var output = new ResultWrapper(f);
+ try
+ {
+ output.Analysis = await Analyze(f);
+ }
+ catch (Exception e)
+ {
+ output.Exception = e;
+ }
+ return output;
+        });
+
+ // Push the task onto the queue.
+ taskQueue.Add(analysisTask);
+ }
+}
+```
+
+You also create a consumer thread, which takes tasks off the queue, waits for them to finish, and either displays the result or raises the exception that was thrown. By using the queue, you can guarantee that the results get consumed one at a time, in the correct order, without limiting the maximum frame rate of the system.
+
+```csharp
+// Consumer thread.
+while (true)
+{
+ // Get the oldest task.
+ Task<ResultWrapper> analysisTask = taskQueue.Take();
+
+ // Wait until the task is completed.
+ var output = await analysisTask;
+
+ // Consume the exception or result.
+ if (output.Exception != null)
+ {
+ throw output.Exception;
+ }
+ else
+ {
+ ConsumeResult(output.Analysis);
+ }
+}
+```
+
+## Implement the solution
+
+### Get sample code
+
+To help get your app up and running as quickly as possible, we've implemented the system that's described in the preceding section. It's intended to be flexible enough to accommodate many scenarios, while being easy to use. To access the code, go to the [Video frame analysis sample](https://github.com/Microsoft/Cognitive-Samples-VideoFrameAnalysis/) page on GitHub.
+
+The library contains the `FrameGrabber` class, which implements the previously discussed producer-consumer system to process video frames from a webcam. Users can specify the exact form of the API call, and the class uses events to let the calling code know when a new frame is acquired, or when a new analysis result is available.
+
+To illustrate some of the possibilities, we've provided two sample apps that use the library.
+
+The first sample app is a simple console app that grabs frames from the default webcam and then submits them to the Face service for face detection. A simplified version of the app is reproduced in the following code:
+
+```csharp
+using System;
+using System.Linq;
+using System.Threading.Tasks;
+using Microsoft.Azure.CognitiveServices.Vision.Face;
+using Microsoft.Azure.CognitiveServices.Vision.Face.Models;
+using VideoFrameAnalyzer;
+
+namespace BasicConsoleSample
+{
+ internal class Program
+ {
+ const string ApiKey = "<your API key>";
+ const string Endpoint = "https://<your API region>.api.cognitive.microsoft.com";
+
+ private static async Task Main(string[] args)
+ {
+ // Create grabber.
+ FrameGrabber<DetectedFace[]> grabber = new FrameGrabber<DetectedFace[]>();
+
+ // Create Face Client.
+ FaceClient faceClient = new FaceClient(new ApiKeyServiceClientCredentials(ApiKey))
+ {
+ Endpoint = Endpoint
+ };
+
+ // Set up a listener for when we acquire a new frame.
+ grabber.NewFrameProvided += (s, e) =>
+ {
+ Console.WriteLine($"New frame acquired at {e.Frame.Metadata.Timestamp}");
+ };
+
+ // Set up a Face API call.
+ grabber.AnalysisFunction = async frame =>
+ {
+ Console.WriteLine($"Submitting frame acquired at {frame.Metadata.Timestamp}");
+ // Encode image and submit to Face service.
+ return (await faceClient.Face.DetectWithStreamAsync(frame.Image.ToMemoryStream(".jpg"))).ToArray();
+ };
+
+ // Set up a listener for when we receive a new result from an API call.
+ grabber.NewResultAvailable += (s, e) =>
+ {
+ if (e.TimedOut)
+ Console.WriteLine("API call timed out.");
+ else if (e.Exception != null)
+ Console.WriteLine("API call threw an exception.");
+ else
+ Console.WriteLine($"New result received for frame acquired at {e.Frame.Metadata.Timestamp}. {e.Analysis.Length} faces detected");
+ };
+
+ // Tell grabber when to call the API.
+ // See also TriggerAnalysisOnPredicate
+ grabber.TriggerAnalysisOnInterval(TimeSpan.FromMilliseconds(3000));
+
+ // Start running in the background.
+ await grabber.StartProcessingCameraAsync();
+
+ // Wait for key press to stop.
+ Console.WriteLine("Press any key to stop...");
+ Console.ReadKey();
+
+ // Stop, blocking until done.
+ await grabber.StopProcessingAsync();
+ }
+ }
+}
+```
+
+The second sample app is a bit more interesting. It allows you to choose which API to call on the video frames. On the left side, the app shows a preview of the live video. On the right, it overlays the most recent API result on the corresponding frame.
+
+In most modes, there's a visible delay between the live video on the left and the visualized analysis on the right. This delay is the time that it takes to make the API call. An exception is in the "EmotionsWithClientFaceDetect" mode, which performs face detection locally on the client computer by using OpenCV before it submits any images to Azure AI services.
+
+By using this approach, you can visualize the detected face immediately. You can then update the emotions later, after the API call returns. This demonstrates the possibility of a "hybrid" approach. That is, some simple processing can be performed on the client, and then Azure AI services APIs can be used to augment this processing with more advanced analysis when necessary.
+
+![The LiveCameraSample app displaying an image with tags](../images/frame-by-frame.jpg)
+
+### Integrate samples into your codebase
+
+To get started with this sample, do the following:
+
+1. Create an [Azure account](https://azure.microsoft.com/free/cognitive-services/). If you already have one, you can skip to the next step.
+2. Create resources for Azure AI Vision and Face in the Azure portal to get your key and endpoint. Make sure to select the free tier (F0) during setup.
+ - [Azure AI Vision](https://portal.azure.com/#create/Microsoft.CognitiveServicesComputerVision)
+ - [Face](https://portal.azure.com/#create/Microsoft.CognitiveServicesFace)
+ After the resources are deployed, select **Go to resource** to collect your key and endpoint for each resource.
+3. Clone the [Cognitive-Samples-VideoFrameAnalysis](https://github.com/Microsoft/Cognitive-Samples-VideoFrameAnalysis/) GitHub repo.
+4. Open the sample in Visual Studio 2015 or later, and then build and run the sample applications:
+ - For BasicConsoleSample, the Face key is hard-coded directly in [BasicConsoleSample/Program.cs](https://github.com/Microsoft/Cognitive-Samples-VideoFrameAnalysis/blob/master/Windows/BasicConsoleSample/Program.cs).
+ - For LiveCameraSample, enter the keys in the **Settings** pane of the app. The keys are persisted across sessions as user data.
+
+When you're ready to integrate the samples, reference the VideoFrameAnalyzer library from your own projects.
+
+The image-, voice-, video-, and text-understanding capabilities of VideoFrameAnalyzer use Azure AI services. Microsoft receives the images, audio, video, and other data that you upload (via this app) and might use them for service-improvement purposes. We ask for your help in protecting the people whose data your app sends to Azure AI services.
+
+## Next steps
+
+In this article, you learned how to run near real-time analysis on live video streams by using the Face and Azure AI Vision services. You also learned how you can use our sample code to get started.
+
+Feel free to provide feedback and suggestions in the [GitHub repository](https://github.com/Microsoft/Cognitive-Samples-VideoFrameAnalysis/). To provide broader API feedback, go to our [UserVoice](https://feedback.azure.com/d365community/forum/09041fae-0b25-ec11-b6e6-000d3a4f0858) site.
+
+- [Call the Image Analysis API (how to)](call-analyze-image.md)
ai-services Background Removal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/computer-vision/how-to/background-removal.md
+
+ Title: Remove the background in images
+
+description: Learn how to call the Segment API to isolate and remove the background from images.
+++++++ Last updated : 03/03/2023+++
+# Remove the background from images
+
+This article demonstrates how to call the Image Analysis 4.0 API to segment an image. It also shows you how to parse the returned information.
+
+## Prerequisites
+
+This guide assumes you have successfully followed the steps mentioned in the [quickstart](../quickstarts-sdk/image-analysis-client-library-40.md) page. This means:
+
+* You have <a href="https://portal.azure.com/#create/Microsoft.CognitiveServicesComputerVision" title="created a Vision resource" target="_blank">created a Vision resource </a> and obtained a key and endpoint URL.
+* If you're using the client SDK, you have the appropriate SDK package installed and you have a running quickstart application. You modify this quickstart application based on code examples here.
+* If you're using 4.0 REST API calls directly, you have successfully made a `curl.exe` call to the service (or used an alternative tool). You modify the `curl.exe` call based on the examples here.
+
+The quickstart shows you how to extract visual features from an image; however, the concepts are similar to background removal, so you benefit from starting with the quickstart and making modifications.
+
+> [!IMPORTANT]
+> Background removal is only available in the following Azure regions: East US, France Central, Korea Central, North Europe, Southeast Asia, West Europe, West US.
+
+## Authenticate against the service
+
+To authenticate against the Image Analysis service, you need an Azure AI Vision key and endpoint URL.
+
+> [!TIP]
+> Don't include the key directly in your code, and never post it publicly. See the Azure AI services [security](../../security-features.md) article for more authentication options like [Azure Key Vault](../../use-key-vault.md).
+
+The SDK example assumes that you defined the environment variables `VISION_KEY` and `VISION_ENDPOINT` with your key and endpoint.
+
+#### [C#](#tab/csharp)
+
+Start by creating a [VisionServiceOptions](/dotnet/api/azure.ai.vision.core.options.visionserviceoptions) object using one of the constructors. For example:
+
+[!code-csharp[](~/azure-ai-vision-sdk/docs/learn.microsoft.com/csharp/image-analysis/1/Program.cs?name=vision_service_options)]
+
+#### [Python](#tab/python)
+
+Start by creating a [VisionServiceOptions](/python/api/azure-ai-vision/azure.ai.vision.visionserviceoptions) object using one of the constructors. For example:
+
+[!code-python[](~/azure-ai-vision-sdk/docs/learn.microsoft.com/python/image-analysis/1/main.py?name=vision_service_options)]
+
+#### [C++](#tab/cpp)
+
+At the start of your code, use one of the static constructor methods [VisionServiceOptions::FromEndpoint](/cpp/cognitive-services/vision/service-visionserviceoptions#fromendpoint-1) to create a *VisionServiceOptions* object. For example:
+
+[!code-cpp[](~/azure-ai-vision-sdk/docs/learn.microsoft.com/cpp/image-analysis/1/1.cpp?name=vision_service_options)]
+
+Where we used this helper function to read the value of an environment variable:
+
+[!code-cpp[](~/azure-ai-vision-sdk/docs/learn.microsoft.com/cpp/image-analysis/1/1.cpp?name=get_env_var)]
+
+#### [REST API](#tab/rest)
+
+Authentication is done by adding the HTTP request header **Ocp-Apim-Subscription-Key** and setting it to your vision key. The call is made to the URL `https://<endpoint>/computervision/imageanalysis:segment?api-version=2023-02-01-preview`, where `<endpoint>` is your unique Azure AI Vision endpoint URL. See the [Select a mode](./background-removal.md#select-a-mode) section for another query string that you add to this URL.
+++
+## Select the image to analyze
+
+The code in this guide uses remote images referenced by URL. You may want to try different images on your own to see the full capability of the Image Analysis features.
+
+#### [C#](#tab/csharp)
+
+Create a new **VisionSource** object from the URL of the image you want to analyze, using the static constructor [VisionSource.FromUrl](/dotnet/api/azure.ai.vision.core.input.visionsource.fromurl).
+
+**VisionSource** implements **IDisposable**, therefore create the object with a **using** statement or explicitly call **Dispose** method after analysis completes.
+
+[!code-csharp[](~/azure-ai-vision-sdk/docs/learn.microsoft.com/csharp/image-analysis/1/Program.cs?name=vision_source)]
+
+> [!TIP]
+> You can also analyze a local image by passing in the full-path image file name. See [VisionSource.FromFile](/dotnet/api/azure.ai.vision.core.input.visionsource.fromfile).
+
+#### [Python](#tab/python)
+
+In your script, create a new [VisionSource](/python/api/azure-ai-vision/azure.ai.vision.visionsource) object from the URL of the image you want to analyze.
+
+[!code-python[](~/azure-ai-vision-sdk/docs/learn.microsoft.com/python/image-analysis/1/main.py?name=vision_source)]
+
+> [!TIP]
+> You can also analyze a local image by passing in the full-path image file name to the **VisionSource** constructor instead of the image URL.
+
+#### [C++](#tab/cpp)
+
+Create a new **VisionSource** object from the URL of the image you want to analyze, using the static constructor [VisionSource::FromUrl](/cpp/cognitive-services/vision/input-visionsource#fromurl).
+
+[!code-cpp[](~/azure-ai-vision-sdk/docs/learn.microsoft.com/cpp/image-analysis/1/1.cpp?name=vision_source)]
+
+> [!TIP]
+> You can also analyze a local image by passing in the full-path image file name. See [VisionSource::FromFile](/cpp/cognitive-services/vision/input-visionsource#fromfile).
+
+#### [REST API](#tab/rest)
+
+When analyzing a remote image, you specify the image's URL by formatting the request body like this: `{"url":"https://learn.microsoft.com/azure/ai-services/computer-vision/images/windows-kitchen.jpg"}`. The **Content-Type** should be `application/json`.
+
+To analyze a local image, you'd put the binary image data in the HTTP request body. The **Content-Type** should be `application/octet-stream` or `multipart/form-data`.
+++
+## Select a mode
+
+### [C#](#tab/csharp)
+
+<!-- TODO: After C# ref-docs get published, add link to SegmentationMode (/dotnet/api/azure.ai.vision.imageanalysis.imageanalysisoptions.segmentationmode) & ImageSegmentationMode (/dotnet/api/azure.ai.vision.imageanalysis.imagesegmentationmode) -->
+
+Create a new [ImageAnalysisOptions](/dotnet/api/azure.ai.vision.imageanalysis.imageanalysisoptions) object and set the property `SegmentationMode`. This property must be set if you want to do segmentation. See `ImageSegmentationMode` for supported values.
+
+[!code-csharp[](~/azure-ai-vision-sdk/docs/learn.microsoft.com/csharp/image-analysis/segmentation/Program.cs?name=segmentation_mode)]
+
+### [Python](#tab/python)
+
+<!-- TODO: Where Python ref-docs get published, add link to SegmentationMode (/python/api/azure-ai-vision/azure.ai.vision.imageanalysisoptions#azure-ai-vision-imageanalysisoptions-segmentation-mode) & ImageSegmentationMode (/python/api/azure-ai-vision/azure.ai.vision.enums.imagesegmentationmode)> -->
+
+Create a new [ImageAnalysisOptions](/python/api/azure-ai-vision/azure.ai.vision.imageanalysisoptions) object and set the property `segmentation_mode`. This property must be set if you want to do segmentation. See `ImageSegmentationMode` for supported values.
+
+[!code-python[](~/azure-ai-vision-sdk/docs/learn.microsoft.com/python/image-analysis/segmentation/main.py?name=segmentation_mode)]
+
+### [C++](#tab/cpp)
+
+Create a new [ImageAnalysisOptions](/cpp/cognitive-services/vision/imageanalysis-imageanalysisoptions) object and call the [SetSegmentationMode](/cpp/cognitive-services/vision/imageanalysis-imageanalysisoptions#setsegmentationmode) method. You must call this method if you want to do segmentation. See [ImageSegmentationMode](/cpp/cognitive-services/vision/azure-ai-vision-imageanalysis-namespace#enum-imagesegmentationmode) for supported values.
+
+[!code-cpp[](~/azure-ai-vision-sdk/docs/learn.microsoft.com/cpp/image-analysis/segmentation/segmentation.cpp?name=segmentation_mode)]
+
+### [REST](#tab/rest)
+
+Set the query string **mode** to one of these two values. This query string is mandatory if you want to do image segmentation.
+
+|URL parameter | Value |Description |
+|--||-|
+| `mode` | `backgroundRemoval` | Outputs an image of the detected foreground object with a transparent background. |
+| `mode` | `foregroundMatting` | Outputs a gray-scale alpha matte image showing the opacity of the detected foreground object. |
+
+A populated URL for backgroundRemoval would look like this: `https://<endpoint>/computervision/imageanalysis:segment?api-version=2023-02-01-preview&mode=backgroundRemoval`
+++
+## Get results from the service
+
+This section shows you how to make the API call and parse the results.
+
+#### [C#](#tab/csharp)
+
+The following code calls the Image Analysis API and saves the resulting segmented image to a file named **output.png**. It also displays some metadata about the segmented image.
+
+**SegmentationResult** implements **IDisposable**, therefore create the object with a **using** statement or explicitly call **Dispose** method after analysis completes.
+
+[!code-csharp[](~/azure-ai-vision-sdk/docs/learn.microsoft.com/csharp/image-analysis/segmentation/Program.cs?name=segment)]
+
+#### [Python](#tab/python)
+
+The following code calls the Image Analysis API and saves the resulting segmented image to a file named **output.png**. It also displays some metadata about the segmented image.
+
+[!code-python[](~/azure-ai-vision-sdk/docs/learn.microsoft.com/python/image-analysis/segmentation/main.py?name=segment)]
+
+#### [C++](#tab/cpp)
+
+The following code calls the Image Analysis API and saves the resulting segmented image to a file named **output.png**. It also displays some metadata about the segmented image.
+
+[!code-cpp[](~/azure-ai-vision-sdk/docs/learn.microsoft.com/cpp/image-analysis/segmentation/segmentation.cpp?name=segment)]
+
+#### [REST](#tab/rest)
+
+The service returns a `200` HTTP response on success with `Content-Type: image/png`, and the body contains the returned PNG image in the form of a binary stream.
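+
+As an illustration only, the following standalone C# sketch makes the same REST call with `HttpClient` and saves the returned PNG. The endpoint and key placeholders, and the use of the sample image URL shown earlier, are assumptions for this sketch rather than part of the article's SDK samples.
+
+```csharp
+using System;
+using System.IO;
+using System.Net.Http;
+using System.Text;
+using System.Threading.Tasks;
+
+class SegmentSample
+{
+    static async Task Main()
+    {
+        string endpoint = "<your-endpoint>"; // For example: https://<resource-name>.cognitiveservices.azure.com
+        string key = "<your-key>";
+        string url = endpoint + "/computervision/imageanalysis:segment?api-version=2023-02-01-preview&mode=backgroundRemoval";
+
+        using var client = new HttpClient();
+        client.DefaultRequestHeaders.Add("Ocp-Apim-Subscription-Key", key);
+
+        // Request body with the URL of the remote image to segment.
+        var body = new StringContent(
+            "{\"url\":\"https://learn.microsoft.com/azure/ai-services/computer-vision/images/windows-kitchen.jpg\"}",
+            Encoding.UTF8, "application/json");
+
+        HttpResponseMessage response = await client.PostAsync(url, body);
+        response.EnsureSuccessStatusCode();
+
+        // The response body is the segmented image as a binary PNG stream.
+        byte[] png = await response.Content.ReadAsByteArrayAsync();
+        await File.WriteAllBytesAsync("output.png", png);
+        Console.WriteLine($"Saved output.png ({png.Length} bytes)");
+    }
+}
+```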
+++
+As an example, assume background removal is run on the following image:
++
+On a successful background removal call, the following four-channel PNG image is the response for the `backgroundRemoval` mode:
++
+The following one-channel PNG image is the response for the `foregroundMatting` mode:
++
+The API returns an image that is the same size as the original for the `foregroundMatting` mode, but at most 16 megapixels (preserving the image aspect ratio) for the `backgroundRemoval` mode.
++
+## Error codes
++++
+## Next steps
+
+[Background removal concepts](../concept-background-removal.md)
++
ai-services Blob Storage Search https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/computer-vision/how-to/blob-storage-search.md
+
+ Title: Configure your blob storage for image retrieval and video search in Vision Studio
+
+description: To get started with the **Search photos with natural language** or with **Video summary and frame locator** in Vision Studio, you will need to select or create a new storage account.
+++++++ Last updated : 03/06/2023++++
+# Configure your blob storage for image retrieval and video search in Vision Studio
+
+To get started with the **Search photos with natural language** or **Video summary and frame locator** scenario in Vision Studio, you need to select or create a new Azure storage account. Your storage account can be in any region, but creating it in the same region as your Vision resource is more efficient and reduces cost.
+
+> [!IMPORTANT]
+> You need to create your storage account on the same Azure subscription as the Vision resource you're using in the **Search photos with natural language** or **Video summary and frame locator** scenarios as shown below.
++
+## Create a new storage account
+
+To get started, <a href="https://ms.portal.azure.com/#create/Microsoft.StorageAccount" title="create a new storage account" target="_blank">create a new storage account</a>.
+++
+Fill in the required parameters to configure your storage account, then select `Review` and `Create`.
+
+Once your storage account has been deployed, select `Go to resource` to open the storage account overview.
+++
+## Configure CORS rule on the storage account
+
+In your storage account overview, find the **Settings** section in the left-hand navigation and select `Resource sharing (CORS)`, as shown below.
+++
+Create a CORS rule by setting the **Allowed Origins** field to `https://portal.vision.cognitive.azure.com`.
+
+In the **Allowed Methods** field, select the `GET` checkbox to allow an authenticated request from a different domain. In the **Max age** field, enter the value `9999`, and select `Save`.
+
+[Learn more about CORS support for Azure Storage](/rest/api/storageservices/cross-origin-resource-sharing--cors--support-for-the-azure-storage-services).
++++
+This allows Vision Studio to access images and videos in your blob storage container and extract insights from your data.
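+
+If you prefer to configure the same CORS rule from code instead of the portal, a minimal sketch using the Azure.Storage.Blobs .NET package might look like the following. The connection string placeholder and the wildcard header values are assumptions for illustration; the origin, method, and max age mirror the portal steps above.
+
+```csharp
+using System.Collections.Generic;
+using Azure.Storage.Blobs;
+using Azure.Storage.Blobs.Models;
+
+var serviceClient = new BlobServiceClient("<your-storage-connection-string>");
+
+// Get the current Blob service properties so existing settings are preserved.
+BlobServiceProperties properties = serviceClient.GetProperties().Value;
+properties.Cors ??= new List<BlobCorsRule>();
+
+// Add the rule that lets Vision Studio issue GET requests against your blobs.
+properties.Cors.Add(new BlobCorsRule
+{
+    AllowedOrigins = "https://portal.vision.cognitive.azure.com",
+    AllowedMethods = "GET",
+    AllowedHeaders = "*",   // assumption; restrict as needed
+    ExposedHeaders = "*",   // assumption; restrict as needed
+    MaxAgeInSeconds = 9999
+});
+
+serviceClient.SetProperties(properties);
+```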
+
+## Upload images and videos in Vision Studio
+
+In the **Try with your own video** or **Try with your own image** section in Vision Studio, select the storage account that you configured with the CORS rule. Select the container in which your images or videos are stored. If you don't have a container, you can create one and upload the images or videos from your local device. If you have updated the CORS rules on the storage account, refresh the Blob container or Video files on container sections.
++++++++
ai-services Call Analyze Image 40 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/computer-vision/how-to/call-analyze-image-40.md
+
+ Title: Call the Image Analysis 4.0 Analyze API
+
+description: Learn how to call the Image Analysis 4.0 API and configure its behavior.
+++++++ Last updated : 01/24/2023+++
+# Call the Image Analysis 4.0 Analyze API (preview)
+
+This article demonstrates how to call the Image Analysis 4.0 API to return information about an image's visual features. It also shows you how to parse the returned information.
+
+## Prerequisites
+
+This guide assumes you have successfully followed the steps mentioned in the [quickstart](../quickstarts-sdk/image-analysis-client-library-40.md) page. This means:
+
+* You have <a href="https://portal.azure.com/#create/Microsoft.CognitiveServicesComputerVision" title="created a Vision resource" target="_blank">created a Vision resource</a> and obtained a key and endpoint URL.
+* If you're using the client SDK, you have the appropriate SDK package installed and you have a running [quickstart](../quickstarts-sdk/image-analysis-client-library-40.md) application. You can modify this quickstart application based on code examples here.
+* If you're using 4.0 REST API calls directly, you have successfully made a `curl.exe` call to the service (or used an alternative tool). You modify the `curl.exe` call based on the examples here.
+
+## Authenticate against the service
+
+To authenticate against the Image Analysis service, you need an Azure AI Vision key and endpoint URL.
+
+> [!TIP]
+> Don't include the key directly in your code, and never post it publicly. See the Azure AI services [security](../../security-features.md) article for more authentication options like [Azure Key Vault](../../use-key-vault.md).
+
+The SDK example assumes that you defined the environment variables `VISION_KEY` and `VISION_ENDPOINT` with your key and endpoint.
+
+#### [C#](#tab/csharp)
+
+Start by creating a [VisionServiceOptions](/dotnet/api/azure.ai.vision.core.options.visionserviceoptions) object using one of the constructors. For example:
+
+[!code-csharp[](~/azure-ai-vision-sdk/docs/learn.microsoft.com/csharp/image-analysis/1/Program.cs?name=vision_service_options)]
+
+#### [Python](#tab/python)
+
+Start by creating a [VisionServiceOptions](/python/api/azure-ai-vision/azure.ai.vision.visionserviceoptions) object using one of the constructors. For example:
+
+[!code-python[](~/azure-ai-vision-sdk/docs/learn.microsoft.com/python/image-analysis/1/main.py?name=vision_service_options)]
+
+#### [C++](#tab/cpp)
+
+At the start of your code, use one of the static constructor methods [VisionServiceOptions::FromEndpoint](/cpp/cognitive-services/vision/service-visionserviceoptions#fromendpoint-1) to create a *VisionServiceOptions* object. For example:
+
+[!code-cpp[](~/azure-ai-vision-sdk/docs/learn.microsoft.com/cpp/image-analysis/1/1.cpp?name=vision_service_options)]
+
+Where we used this helper function to read the value of an environment variable:
+
+[!code-cpp[](~/azure-ai-vision-sdk/docs/learn.microsoft.com/cpp/image-analysis/1/1.cpp?name=get_env_var)]
+
+#### [REST API](#tab/rest)
+
+Authentication is done by adding the HTTP request header **Ocp-Apim-Subscription-Key** and setting it to your vision key. The call is made to the URL `https://<endpoint>/computervision/imageanalysis:analyze?api-version=2023-02-01-preview`, where `<endpoint>` is your unique Azure AI Vision endpoint URL. You add query strings based on your analysis options.
+++
+## Select the image to analyze
+
+The code in this guide uses remote images referenced by URL. You may want to try different images on your own to see the full capability of the Image Analysis features.
+
+#### [C#](#tab/csharp)
+
+Create a new **VisionSource** object from the URL of the image you want to analyze, using the static constructor [VisionSource.FromUrl](/dotnet/api/azure.ai.vision.core.input.visionsource.fromurl).
+
+**VisionSource** implements **IDisposable**, therefore create the object with a **using** statement or explicitly call **Dispose** method after analysis completes.
+
+[!code-csharp[](~/azure-ai-vision-sdk/docs/learn.microsoft.com/csharp/image-analysis/1/Program.cs?name=vision_source)]
+
+> [!TIP]
+> You can also analyze a local image by passing in the full-path image file name. See [VisionSource.FromFile](/dotnet/api/azure.ai.vision.core.input.visionsource.fromfile).
+
+#### [Python](#tab/python)
+
+In your script, create a new [VisionSource](/python/api/azure-ai-vision/azure.ai.vision.visionsource) object from the URL of the image you want to analyze.
+
+[!code-python[](~/azure-ai-vision-sdk/docs/learn.microsoft.com/python/image-analysis/1/main.py?name=vision_source)]
+
+> [!TIP]
+> You can also analyze a local image by passing in the full-path image file name to the **VisionSource** constructor instead of the image URL.
+
+#### [C++](#tab/cpp)
+
+Create a new **VisionSource** object from the URL of the image you want to analyze, using the static constructor [VisionSource::FromUrl](/cpp/cognitive-services/vision/input-visionsource#fromurl).
+
+[!code-cpp[](~/azure-ai-vision-sdk/docs/learn.microsoft.com/cpp/image-analysis/1/1.cpp?name=vision_source)]
+
+> [!TIP]
+> You can also analyze a local image by passing in the full-path image file name. See [VisionSource::FromFile](/cpp/cognitive-services/vision/input-visionsource#fromfile).
+
+#### [REST API](#tab/rest)
+
+When analyzing a remote image, you specify the image's URL by formatting the request body like this: `{"url":"https://learn.microsoft.com/azure/ai-services/computer-vision/images/windows-kitchen.jpg"}`. The **Content-Type** should be `application/json`.
+
+To analyze a local image, you'd put the binary image data in the HTTP request body. The **Content-Type** should be `application/octet-stream` or `multipart/form-data`.
+++
+## Select analysis options
+
+### Select visual features when using the standard model
+
+The Analysis 4.0 API gives you access to all of the service's image analysis features. Choose which operations to do based on your own use case. See the [overview](../overview.md) for a description of each feature. The example in this section adds all of the available visual features, but for practical usage you likely need fewer.
+
+Visual features 'Captions' and 'DenseCaptions' are only supported in the following Azure regions: East US, France Central, Korea Central, North Europe, Southeast Asia, West Europe, West US.
+
+> [!NOTE]
+> The REST API uses the terms **Smart Crops** and **Smart Crops Aspect Ratios**. The SDK uses the terms **Crop Suggestions** and **Cropping Aspect Ratios**. They both refer to the same service operation. Similarly, the REST API uses the term **Read** for detecting text in the image, whereas the SDK uses the term **Text** for the same operation.
+
+#### [C#](#tab/csharp)
+
+Create a new [ImageAnalysisOptions](/dotnet/api/azure.ai.vision.imageanalysis.imageanalysisoptions) object and specify the visual features you'd like to extract, by setting the [Features](/dotnet/api/azure.ai.vision.imageanalysis.imageanalysisoptions.features#azure-ai-vision-imageanalysis-imageanalysisoptions-features) property. [ImageAnalysisFeature](/dotnet/api/azure.ai.vision.imageanalysis.imageanalysisfeature) enum defines the supported values.
+
+[!code-csharp[](~/azure-ai-vision-sdk/docs/learn.microsoft.com/csharp/image-analysis/1/Program.cs?name=visual_features)]
+
+#### [Python](#tab/python)
+
+Create a new [ImageAnalysisOptions](/python/api/azure-ai-vision/azure.ai.vision.imageanalysisoptions) object and specify the visual features you'd like to extract, by setting the [features](/python/api/azure-ai-vision/azure.ai.vision.imageanalysisoptions#azure-ai-vision-imageanalysisoptions-features) property. [ImageAnalysisFeature](/python/api/azure-ai-vision/azure.ai.vision.enums.imageanalysisfeature) enum defines the supported values.
+
+[!code-python[](~/azure-ai-vision-sdk/docs/learn.microsoft.com/python/image-analysis/1/main.py?name=visual_features)]
+
+#### [C++](#tab/cpp)
+
+Create a new [ImageAnalysisOptions](/cpp/cognitive-services/vision/imageanalysis-imageanalysisoptions) object. Then specify an `std::vector` of visual features you'd like to extract, by calling the [SetFeatures](/cpp/cognitive-services/vision/imageanalysis-imageanalysisoptions#setfeatures) method. [ImageAnalysisFeature](/cpp/cognitive-services/vision/azure-ai-vision-imageanalysis-namespace#enum-imageanalysisfeature) enum defines the supported values.
+
+[!code-cpp[](~/azure-ai-vision-sdk/docs/learn.microsoft.com/cpp/image-analysis/1/1.cpp?name=visual_features)]
+
+#### [REST API](#tab/rest)
+
+You can specify which features you want to use by setting the URL query parameters of the [Analysis 4.0 API](https://aka.ms/vision-4-0-ref). A parameter can have multiple values, separated by commas.
+
+|URL parameter | Value | Description|
+|||--|
+|`features`|`read` | Reads the visible text in the image and outputs it as structured JSON data.|
+|`features`|`caption` | Describes the image content with a complete sentence in supported languages.|
+|`features`|`denseCaptions` | Generates detailed captions for up to 10 prominent image regions. |
+|`features`|`smartCrops` | Finds the rectangle coordinates that would crop the image to a desired aspect ratio while preserving the area of interest.|
+|`features`|`objects` | Detects various objects within an image, including the approximate location. The Objects argument is only available in English.|
+|`features`|`tags` | Tags the image with a detailed list of words related to the image content.|
+|`features`|`people` | Detects people appearing in images, including the approximate locations. |
+
+A populated URL might look like this:
+
+`https://<endpoint>/computervision/imageanalysis:analyze?api-version=2023-02-01-preview&features=tags,read,caption,denseCaptions,smartCrops,objects,people`
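+
+As an illustration only, the following standalone C# sketch calls a URL like this one with `HttpClient` and prints the JSON result. The endpoint and key placeholders and the reduced feature list are assumptions for this sketch, not part of the article's SDK samples.
+
+```csharp
+using System;
+using System.Net.Http;
+using System.Text;
+using System.Threading.Tasks;
+
+class AnalyzeSample
+{
+    static async Task Main()
+    {
+        string endpoint = "<your-endpoint>";
+        string key = "<your-key>";
+        string url = endpoint + "/computervision/imageanalysis:analyze" +
+                     "?api-version=2023-02-01-preview&features=caption,tags";
+
+        using var client = new HttpClient();
+        client.DefaultRequestHeaders.Add("Ocp-Apim-Subscription-Key", key);
+
+        // Request body with the URL of the remote image to analyze.
+        var body = new StringContent(
+            "{\"url\":\"https://learn.microsoft.com/azure/ai-services/computer-vision/images/windows-kitchen.jpg\"}",
+            Encoding.UTF8, "application/json");
+
+        HttpResponseMessage response = await client.PostAsync(url, body);
+        response.EnsureSuccessStatusCode();
+
+        // The response is a JSON document with one section per requested feature.
+        string json = await response.Content.ReadAsStringAsync();
+        Console.WriteLine(json);
+    }
+}
+```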
+++
+### Set model name when using a custom model
+
+You can also do image analysis with a custom trained model. To create and train a model, see [Create a custom Image Analysis model](./model-customization.md). Once your model is trained, all you need is the model's name. You do not need to specify visual features if you use a custom model.
+
+### [C#](#tab/csharp)
+
+To use a custom model, create the [ImageAnalysisOptions](/dotnet/api/azure.ai.vision.imageanalysis.imageanalysisoptions) object and set the [ModelName](/dotnet/api/azure.ai.vision.imageanalysis.imageanalysisoptions.modelname#azure-ai-vision-imageanalysis-imageanalysisoptions-modelname) property. You don't need to set any other properties on **ImageAnalysisOptions**. There's no need to set the [Features](/dotnet/api/azure.ai.vision.imageanalysis.imageanalysisoptions.features#azure-ai-vision-imageanalysis-imageanalysisoptions-features) property, as you do with the standard model, since your custom model already implies the visual features the service extracts.
+
+[!code-csharp[](~/azure-ai-vision-sdk/docs/learn.microsoft.com/csharp/image-analysis/3/Program.cs?name=model_name)]
+
+### [Python](#tab/python)
+
+To use a custom model, create the [ImageAnalysisOptions](/python/api/azure-ai-vision/azure.ai.vision.imageanalysisoptions) object and set the [model_name](/python/api/azure-ai-vision/azure.ai.vision.imageanalysisoptions#azure-ai-vision-imageanalysisoptions-model-name) property. You don't need to set any other properties on **ImageAnalysisOptions**. There's no need to set the [features](/python/api/azure-ai-vision/azure.ai.vision.imageanalysisoptions#azure-ai-vision-imageanalysisoptions-features) property, as you do with the standard model, since your custom model already implies the visual features the service extracts.
+
+[!code-python[](~/azure-ai-vision-sdk/docs/learn.microsoft.com/python/image-analysis/3/main.py?name=model_name)]
+
+### [C++](#tab/cpp)
+
+To use a custom model, create the [ImageAnalysisOptions](/cpp/cognitive-services/vision/imageanalysis-imageanalysisoptions) object and call the [SetModelName](/cpp/cognitive-services/vision/imageanalysis-imageanalysisoptions#setmodelname) method. You don't need to call any other methods on **ImageAnalysisOptions**. There's no need to call [SetFeatures](/cpp/cognitive-services/vision/imageanalysis-imageanalysisoptions#setfeatures) as you do with standard model, since your custom model already implies the visual features the service extracts.
+
+[!code-cpp[](~/azure-ai-vision-sdk/docs/learn.microsoft.com/cpp/image-analysis/3/3.cpp?name=model_name)]
+
+### [REST API](#tab/rest)
+
+To use a custom model, don't use the features query parameter. Instead, set the `model-name` parameter to the name of your model as shown here. Replace `MyCustomModelName` with your custom model name.
+
+`https://<endpoint>/computervision/imageanalysis:analyze?api-version=2023-02-01-preview&model-name=MyCustomModelName`
+++
+### Specify languages
+
+You can specify the language of the returned data. The language is optional, with the default being English. See [Language support](https://aka.ms/cv-languages) for a list of supported language codes and which visual features are supported for each language.
+
+The language option only applies when you're using the standard model.
+
+#### [C#](#tab/csharp)
+
+Use the [Language](/dotnet/api/azure.ai.vision.imageanalysis.imageanalysisoptions.language) property of your **ImageAnalysisOptions** object to specify a language.
+
+[!code-csharp[](~/azure-ai-vision-sdk/docs/learn.microsoft.com/csharp/image-analysis/1/Program.cs?name=language)]
+
+#### [Python](#tab/python)
+
+Use the [language](/python/api/azure-ai-vision/azure.ai.vision.imageanalysisoptions#azure-ai-vision-imageanalysisoptions-language) property of your **ImageAnalysisOptions** object to specify a language.
+
+[!code-python[](~/azure-ai-vision-sdk/docs/learn.microsoft.com/python/image-analysis/1/main.py?name=language)]
+
+#### [C++](#tab/cpp)
+
+Call the [SetLanguage](/cpp/cognitive-services/vision/imageanalysis-imageanalysisoptions#setlanguage) method on your **ImageAnalysisOptions** object to specify a language.
+
+[!code-cpp[](~/azure-ai-vision-sdk/docs/learn.microsoft.com/cpp/image-analysis/1/1.cpp?name=language)]
+
+#### [REST API](#tab/rest)
+
+The following URL query parameter specifies the language. The default value is `en`.
+
+|URL parameter | Value | Description|
+|||--|
+|`language`|`en` | English|
+|`language`|`es` | Spanish|
+|`language`|`ja` | Japanese|
+|`language`|`pt` | Portuguese|
+|`language`|`zh` | Simplified Chinese|
+
+A populated URL might look like this:
+
+`https://<endpoint>/computervision/imageanalysis:analyze?api-version=2023-02-01-preview&features=caption&language=en`
+++
+### Select gender neutral captions
+
+If you're extracting captions or dense captions, you can ask for gender neutral captions. Gender neutral captions are optional, with gendered captions as the default. For example, in English, when you select gender neutral captions, terms like **woman** or **man** are replaced with **person**, and **boy** or **girl** are replaced with **child**.
+
+The gender neutral caption option only applies when you're using the standard model.
+
+#### [C#](#tab/csharp)
+
+Set the [GenderNeutralCaption](/dotnet/api/azure.ai.vision.imageanalysis.imageanalysisoptions.genderneutralcaption) property of your **ImageAnalysisOptions** object to true to enable gender neutral captions.
+
+[!code-csharp[](~/azure-ai-vision-sdk/docs/learn.microsoft.com/csharp/image-analysis/1/Program.cs?name=gender_neutral_caption)]
+
+#### [Python](#tab/python)
+
+Set the [gender_neutral_caption](/python/api/azure-ai-vision/azure.ai.vision.imageanalysisoptions#azure-ai-vision-imageanalysisoptions-gender-neutral-caption) property of your **ImageAnalysisOptions** object to true to enable gender neutral captions.
+
+[!code-python[](~/azure-ai-vision-sdk/docs/learn.microsoft.com/python/image-analysis/1/main.py?name=gender_neutral_caption)]
+
+#### [C++](#tab/cpp)
+
+Call the [SetGenderNeutralCaption](/cpp/cognitive-services/vision/imageanalysis-imageanalysisoptions#setgenderneutralcaption) method of your **ImageAnalysisOptions** object with **true** as the argument, to enable gender neutral captions.
+
+[!code-cpp[](~/azure-ai-vision-sdk/docs/learn.microsoft.com/cpp/image-analysis/1/1.cpp?name=gender_neutral_caption)]
+
+#### [REST API](#tab/rest)
+
+Add the optional query parameter `gender-neutral-caption` with a value of `true` or `false` (the default).
+
+A populated URL might look like this:
+
+`https://<endpoint>/computervision/imageanalysis:analyze?api-version=2023-02-01-preview&features=caption&gender-neutral-caption=true`
+++
+### Select smart cropping aspect ratios
+
+An aspect ratio is calculated by dividing the target crop width by the height. Supported values are from 0.75 to 1.8 (inclusive). Setting this property is only relevant when the **smartCrop** option (REST API) or **CropSuggestions** (SDK) was selected as part of the visual feature list. If you select smartCrop/CropSuggestions but don't specify aspect ratios, the service returns one crop suggestion with an aspect ratio of its choosing. In this case, the aspect ratio is between 0.5 and 2.0 (inclusive).
+
+Smart cropping aspect ratios only apply when you're using the standard model.
+
+#### [C#](#tab/csharp)
+
+Set the [CroppingAspectRatios](/dotnet/api/azure.ai.vision.imageanalysis.imageanalysisoptions.croppingaspectratios) property of your **ImageAnalysisOptions** to a list of aspect ratios. For example, to set aspect ratios of 0.9 and 1.33:
+
+[!code-csharp[](~/azure-ai-vision-sdk/docs/learn.microsoft.com/csharp/image-analysis/1/Program.cs?name=cropping_aspect_rations)]
+
+#### [Python](#tab/python)
+
+Set the [cropping_aspect_ratios](/python/api/azure-ai-vision/azure.ai.vision.imageanalysisoptions#azure-ai-vision-imageanalysisoptions-cropping-aspect-ratios) property of your **ImageAnalysisOptions** to a list of aspect ratios. For example, to set aspect ratios of 0.9 and 1.33:
+
+[!code-python[](~/azure-ai-vision-sdk/docs/learn.microsoft.com/python/image-analysis/1/main.py?name=cropping_aspect_rations)]
+
+#### [C++](#tab/cpp)
+
+Call the [SetCroppingAspectRatios](/cpp/cognitive-services/vision/imageanalysis-imageanalysisoptions#setcroppingaspectratios) method of your **ImageAnalysisOptions** with an `std::vector` of aspect ratios. For example, to set aspect ratios of 0.9 and 1.33:
+
+[!code-cpp[](~/azure-ai-vision-sdk/docs/learn.microsoft.com/cpp/image-analysis/1/1.cpp?name=cropping_aspect_rations)]
+
+#### [REST API](#tab/rest)
+
+Add the optional query parameter `smartcrops-aspect-ratios`, with one or more aspect ratios separated by commas.
+
+A populated URL might look like this:
+
+`https://<endpoint>/computervision/imageanalysis:analyze?api-version=2023-02-01-preview&features=smartCrops&smartcrops-aspect-ratios=0.8,1.2`
+++
+## Get results from the service
+
+### Get results using the standard model
+
+This section shows you how to make an analysis call to the service using the standard model, and get the results.
+
+#### [C#](#tab/csharp)
+
+1. Using the **VisionServiceOptions**, **VisionSource** and **ImageAnalysisOptions** objects, construct a new [ImageAnalyzer](/dotnet/api/azure.ai.vision.imageanalysis.imageanalyzer) object. **ImageAnalyzer** implements **IDisposable**, therefore create the object with a **using** statement, or explicitly call the **Dispose** method after analysis completes.
+
+1. Call the **Analyze** method on the **ImageAnalyzer** object, as shown here. This is a blocking (synchronous) call; it returns when the service returns the results or an error occurs. Alternatively, you can call the nonblocking **AnalyzeAsync** method.
+
+1. Check the **Reason** property on the [ImageAnalysisResult](/dotnet/api/azure.ai.vision.imageanalysis.imageanalysisresult) object, to determine if analysis succeeded or failed.
+
+1. If succeeded, proceed to access the relevant result properties based on your selected visual features, as shown here. Additional information (not commonly needed) can be obtained by constructing the [ImageAnalysisResultDetails](/dotnet/api/azure.ai.vision.imageanalysis.imageanalysisresultdetails) object.
+
+1. If failed, you can construct the [ImageAnalysisErrorDetails](/dotnet/api/azure.ai.vision.imageanalysis.imageanalysiserrordetails) object to get information on the failure.
+
+[!code-csharp[](~/azure-ai-vision-sdk/docs/learn.microsoft.com/csharp/image-analysis/1/Program.cs?name=analyze)]
+
+#### [Python](#tab/python)
+
+1. Using the **VisionServiceOptions**, **VisionSource** and **ImageAnalysisOptions** objects, construct a new [ImageAnalyzer](/python/api/azure-ai-vision/azure.ai.vision.imageanalyzer) object.
+
+1. Call the **analyze** method on the **ImageAnalyzer** object, as shown here. This is a blocking (synchronous) call; it returns when the service returns the results or an error occurs. Alternatively, you can call the nonblocking **analyze_async** method.
+
+1. Check the **reason** property on the [ImageAnalysisResult](/python/api/azure-ai-vision/azure.ai.vision.imageanalysisresult) object, to determine if analysis succeeded or failed.
+
+1. If succeeded, proceed to access the relevant result properties based on your selected visual features, as shown here. Additional information (not commonly needed) can be obtained by constructing the [ImageAnalysisResultDetails](/python/api/azure-ai-vision/azure.ai.vision.imageanalysisresultdetails) object.
+
+1. If failed, you can construct the [ImageAnalysisErrorDetails](/python/api/azure-ai-vision/azure.ai.vision.imageanalysiserrordetails) object to get information on the failure.
+
+[!code-python[](~/azure-ai-vision-sdk/docs/learn.microsoft.com/python/image-analysis/1/main.py?name=analyze)]
+
+#### [C++](#tab/cpp)
+
+1. Using the **VisionServiceOptions**, **VisionSource** and **ImageAnalysisOptions** objects, construct a new [ImageAnalyzer](/cpp/cognitive-services/vision/imageanalysis-imageanalyzer) object.
+
+1. Call the **Analyze** method on the **ImageAnalyzer** object, as shown here. This is a blocking (synchronous) call; it returns when the service returns the results or an error occurs. Alternatively, you can call the nonblocking **AnalyzeAsync** method.
+
+1. Call the **GetReason** method on the [ImageAnalysisResult](/cpp/cognitive-services/vision/imageanalysis-imageanalysisresult) object, to determine if analysis succeeded or failed.
+
+1. If succeeded, proceed to call the relevant **Get** methods on the result based on your selected visual features, as shown here. Additional information (not commonly needed) can be obtained by constructing the [ImageAnalysisResultDetails](/cpp/cognitive-services/vision/imageanalysis-imageanalysisresultdetails) object.
+
+1. If failed, you can construct the [ImageAnalysisErrorDetails](/cpp/cognitive-services/vision/imageanalysis-imageanalysiserrordetails) object to get information on the failure.
+
+[!code-cpp[](~/azure-ai-vision-sdk/docs/learn.microsoft.com/cpp/image-analysis/1/1.cpp?name=analyze)]
+
+The code uses the following helper method to display the coordinates of a bounding polygon:
+
+[!code-cpp[](~/azure-ai-vision-sdk/docs/learn.microsoft.com/cpp/image-analysis/1/1.cpp?name=polygon_to_string)]
+
+#### [REST API](#tab/rest)
+
+The service returns a `200` HTTP response, and the body contains the returned data in the form of a JSON string. The following text is an example of a JSON response.
+
+```json
+{
+ "captionResult":
+ {
+ "text": "a person using a laptop",
+ "confidence": 0.55291348695755
+ },
+ "objectsResult":
+ {
+ "values":
+ [
+ {"boundingBox":{"x":730,"y":66,"w":135,"h":85},"tags":[{"name":"kitchen appliance","confidence":0.501}]},
+ {"boundingBox":{"x":523,"y":377,"w":185,"h":46},"tags":[{"name":"computer keyboard","confidence":0.51}]},
+ {"boundingBox":{"x":471,"y":218,"w":289,"h":226},"tags":[{"name":"Laptop","confidence":0.85}]},
+ {"boundingBox":{"x":654,"y":0,"w":584,"h":473},"tags":[{"name":"person","confidence":0.855}]}
+ ]
+ },
+ "modelVersion": "2023-02-01-preview",
+ "metadata":
+ {
+ "width": 1260,
+ "height": 473
+ },
+ "tagsResult":
+ {
+ "values":
+ [
+ {"name":"computer","confidence":0.9865934252738953},
+ {"name":"clothing","confidence":0.9695653915405273},
+ {"name":"laptop","confidence":0.9658201932907104},
+ {"name":"person","confidence":0.9536289572715759},
+ {"name":"indoor","confidence":0.9420197010040283},
+ {"name":"wall","confidence":0.8871886730194092},
+ {"name":"woman","confidence":0.8632704019546509},
+ {"name":"using","confidence":0.5603535771369934}
+ ]
+ },
+ "readResult":
+ {
+ "stringIndexType": "TextElements",
+ "content": "",
+ "pages":
+ [
+ {"height":473,"width":1260,"angle":0,"pageNumber":1,"words":[],"spans":[{"offset":0,"length":0}],"lines":[]}
+ ],
+ "styles": [],
+ "modelVersion": "2022-04-30"
+ },
+ "smartCropsResult":
+ {
+ "values":
+ [
+ {"aspectRatio":1.94,"boundingBox":{"x":158,"y":20,"w":840,"h":433}}
+ ]
+ },
+ "peopleResult":
+ {
+ "values":
+ [
+ {"boundingBox":{"x":660,"y":0,"w":584,"h":471},"confidence":0.9698998332023621},
+ {"boundingBox":{"x":566,"y":276,"w":24,"h":30},"confidence":0.022009700536727905},
+ {"boundingBox":{"x":587,"y":273,"w":20,"h":28},"confidence":0.01859394833445549},
+ {"boundingBox":{"x":609,"y":271,"w":19,"h":30},"confidence":0.003902678843587637},
+ {"boundingBox":{"x":563,"y":279,"w":15,"h":28},"confidence":0.0034854013938456774},
+ {"boundingBox":{"x":566,"y":299,"w":22,"h":41},"confidence":0.0031260766554623842},
+ {"boundingBox":{"x":570,"y":311,"w":29,"h":38},"confidence":0.0026493810582906008},
+ {"boundingBox":{"x":588,"y":275,"w":24,"h":54},"confidence":0.001754675293341279},
+ {"boundingBox":{"x":574,"y":274,"w":53,"h":64},"confidence":0.0012078586732968688},
+ {"boundingBox":{"x":608,"y":270,"w":32,"h":59},"confidence":0.0011869356967508793},
+ {"boundingBox":{"x":591,"y":305,"w":29,"h":42},"confidence":0.0010676260571926832}
+ ]
+ }
+}
+```
+++
+### Get results using custom model
+
+This section shows you how to make an analysis call to the service, when using a custom model.
+
+#### [C#](#tab/csharp)
+
+The code is similar to the standard model case. The only difference is that results from the custom model are available on the **CustomTags** and/or **CustomObjects** properties of the [ImageAnalysisResult](/dotnet/api/azure.ai.vision.imageanalysis.imageanalysisresult) object.
+
+[!code-csharp[](~/azure-ai-vision-sdk/docs/learn.microsoft.com/csharp/image-analysis/3/Program.cs?name=analyze)]
+
+#### [Python](#tab/python)
+
+The code is similar to the standard model case. The only difference is that results from the custom model are available on the **custom_tags** and/or **custom_objects** properties of the [ImageAnalysisResult](/python/api/azure-ai-vision/azure.ai.vision.imageanalysisresult) object.
+
+[!code-python[](~/azure-ai-vision-sdk/docs/learn.microsoft.com/python/image-analysis/3/main.py?name=analyze)]
+
+#### [C++](#tab/cpp)
+
+The code is similar to the standard model case. The only difference is that results from the custom model are available by calling the **GetCustomTags** and/or **GetCustomObjects** methods of the [ImageAnalysisResult](/cpp/cognitive-services/vision/imageanalysis-imageanalysisresult) object.
+
+[!code-cpp[](~/azure-ai-vision-sdk/docs/learn.microsoft.com/cpp/image-analysis/3/3.cpp?name=analyze)]
+
+#### [REST API](#tab/rest)
+++
+## Error codes
++
+## Next steps
+
+* Explore the [concept articles](../concept-describe-images-40.md) to learn more about each feature.
+* Explore the [SDK code samples on GitHub](https://github.com/Azure-Samples/azure-ai-vision-sdk).
+* See the [REST API reference](https://aka.ms/vision-4-0-ref) to learn more about the API functionality.
ai-services Call Analyze Image https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/computer-vision/how-to/call-analyze-image.md
+
+ Title: Call the Image Analysis API
+
+description: Learn how to call the Image Analysis API and configure its behavior.
+++++++ Last updated : 12/27/2022+++
+# Call the Image Analysis API
+
+This article demonstrates how to call the Image Analysis API to return information about an image's visual features. It also shows you how to parse the returned information using the client SDKs or REST API.
+
+This guide assumes you've already <a href="https://portal.azure.com/#create/Microsoft.CognitiveServicesComputerVision" title="created a Vision resource" target="_blank">created a Vision resource </a> and obtained a key and endpoint URL. If you're using a client SDK, you'll also need to authenticate a client object. If you haven't done these steps, follow the [quickstart](../quickstarts-sdk/image-analysis-client-library.md) to get started.
+
+## Submit data to the service
+
+The code in this guide uses remote images referenced by URL. You may want to try different images on your own to see the full capability of the Image Analysis features.
+
+#### [REST](#tab/rest)
+
+When analyzing a remote image, you specify the image's URL by formatting the request body like this: `{"url":"http://example.com/images/test.jpg"}`.
+
+To analyze a local image, you'd put the binary image data in the HTTP request body.
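+
+As a minimal sketch, a cURL call for a remote image might look like the following. The endpoint, key, and selected features are placeholders; feature selection is covered in the next section.
+
+```bash
+# Hypothetical example: analyze a remote image (placeholders for endpoint, key, and features)
+curl -X POST "https://<endpoint>/computervision/imageanalysis:analyze?api-version=2023-02-01-preview&features=Tags" \
+  -H "Ocp-Apim-Subscription-Key: <subscriptionKey>" \
+  -H "Content-Type: application/json" \
+  -d "{\"url\":\"http://example.com/images/test.jpg\"}"
+```
+
+For a local image, you'd instead send the image bytes, for example with `-H "Content-Type: application/octet-stream" --data-binary @image.jpg`.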
+
+#### [C#](#tab/csharp)
+
+In your main class, save a reference to the URL of the image you want to analyze.
+
+[!code-csharp[](~/cognitive-services-quickstart-code/dotnet/ComputerVision/ImageAnalysisQuickstart.cs?name=snippet_analyze_url)]
+
+> [!TIP]
+> You can also analyze a local image. See the [ComputerVisionClient](/dotnet/api/microsoft.azure.cognitiveservices.vision.computervision.computervisionclient) methods, such as **AnalyzeImageInStreamAsync**. Or, see the sample code on [GitHub](https://github.com/Azure-Samples/cognitive-services-quickstart-code/blob/master/dotnet/ComputerVision/ImageAnalysisQuickstart.cs) for scenarios involving local images.
+
+#### [Java](#tab/java)
+
+In your main class, save a reference to the URL of the image you want to analyze.
+
+[!code-java[](~/cognitive-services-quickstart-code/java/ComputerVision/src/main/java/ImageAnalysisQuickstart.java?name=snippet_urlimage)]
+
+> [!TIP]
+> You can also analyze a local image. See the [ComputerVision](/java/api/com.microsoft.azure.cognitiveservices.vision.computervision.computervision) methods, such as **AnalyzeImage**. Or, see the sample code on [GitHub](https://github.com/Azure-Samples/cognitive-services-quickstart-code/blob/master/java/ComputerVision/src/main/java/ImageAnalysisQuickstart.java) for scenarios involving local images.
+
+#### [JavaScript](#tab/javascript)
+
+In your main function, save a reference to the URL of the image you want to analyze.
+
+[!code-javascript[](~/cognitive-services-quickstart-code/javascript/ComputerVision/ImageAnalysisQuickstart.js?name=snippet_describe_image)]
+
+> [!TIP]
+> You can also analyze a local image. See the [ComputerVisionClient](/javascript/api/@azure/cognitiveservices-computervision/computervisionclient) methods, such as **describeImageInStream**. Or, see the sample code on [GitHub](https://github.com/Azure-Samples/cognitive-services-quickstart-code/blob/master/javascript/ComputerVision/ImageAnalysisQuickstart.js) for scenarios involving local images.
+
+#### [Python](#tab/python)
+
+Save a reference to the URL of the image you want to analyze.
+
+[!code-python[](~/cognitive-services-quickstart-code/python/ComputerVision/ImageAnalysisQuickstart.py?name=snippet_remoteimage)]
+
+> [!TIP]
+> You can also analyze a local image. See the [ComputerVisionClientOperationsMixin](/python/api/azure-cognitiveservices-vision-computervision/azure.cognitiveservices.vision.computervision.operations.computervisionclientoperationsmixin) methods, such as **analyze_image_in_stream**. Or, see the sample code on [GitHub](https://github.com/Azure-Samples/cognitive-services-quickstart-code/blob/master/python/ComputerVision/ImageAnalysisQuickstart.py) for scenarios involving local images.
++++
+## Determine how to process the data
+
+### Select visual features
+
+The Analyze API gives you access to all of the service's image analysis features. Choose which operations to do based on your own use case. See the [overview](../overview.md) for a description of each feature. The examples in the sections below add all of the available visual features, but for practical usage you'll likely only need one or two.
+
+#### [REST](#tab/rest)
+
+You can specify which features you want to use by setting the URL query parameters of the [Analyze API](https://aka.ms/vision-4-0-ref). A parameter can have multiple values, separated by commas. Each feature you specify will require more computation time, so only specify what you need.
+
+|URL parameter | Value | Description|
+|||--|
+|`features`|`Read` | reads the visible text in the image and outputs it as structured JSON data.|
+|`features`|`Description` | describes the image content with a complete sentence in supported languages.|
+|`features`|`SmartCrops` | finds the rectangle coordinates that would crop the image to a desired aspect ratio while preserving the area of interest.|
+|`features`|`Objects` | detects various objects within an image, including the approximate location. The Objects argument is only available in English.|
+|`features`|`Tags` | tags the image with a detailed list of words related to the image content.|
+
+A populated URL might look like this:
+
+`https://<endpoint>/computervision/imageanalysis:analyze?api-version=2023-02-01-preview&features=Tags`
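+
+The `features` parameter can take multiple values separated by commas. For example, a URL that requests both tags and OCR results might look like this (a hypothetical combination):
+
+`https://<endpoint>/computervision/imageanalysis:analyze?api-version=2023-02-01-preview&features=Tags,Read`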
+
+#### [C#](#tab/csharp)
+
+Define your new method for image analysis. Add the code below, which specifies visual features you'd like to extract in your analysis. See the **[VisualFeatureTypes](/dotnet/api/microsoft.azure.cognitiveservices.vision.computervision.models.visualfeaturetypes)** enum for a complete list.
+
+[!code-csharp[](~/cognitive-services-quickstart-code/dotnet/ComputerVision/ImageAnalysisQuickstart.cs?name=snippet_visualfeatures)]
++
+#### [Java](#tab/java)
+
+Specify which visual features you'd like to extract in your analysis. See the [VisualFeatureTypes](/java/api/com.microsoft.azure.cognitiveservices.vision.computervision.models.visualfeaturetypes) enum for a complete list.
+
+[!code-java[](~/cognitive-services-quickstart-code/java/ComputerVision/src/main/java/ImageAnalysisQuickstart.java?name=snippet_features_remote)]
+
+#### [JavaScript](#tab/javascript)
+
+Specify which visual features you'd like to extract in your analysis. See the [VisualFeatureTypes](/javascript/api/@azure/cognitiveservices-computervision/computervisionmodels.visualfeaturetypes) enum for a complete list.
+
+[!code-javascript[](~/cognitive-services-quickstart-code/javascript/ComputerVision/ImageAnalysisQuickstart.js?name=snippet_features_remote)]
+
+#### [Python](#tab/python)
+
+Specify which visual features you'd like to extract in your analysis. See the [VisualFeatureTypes](/python/api/azure-cognitiveservices-vision-computervision/azure.cognitiveservices.vision.computervision.models.visualfeaturetypes?view=azure-python&preserve-view=true) enum for a complete list.
+
+[!code-python[](~/cognitive-services-quickstart-code/python/ComputerVision/ImageAnalysisQuickstart.py?name=snippet_features_remote)]
+++++
+### Specify languages
+
+You can also specify the language of the returned data.
+
+#### [REST](#tab/rest)
+
+The following URL query parameter specifies the language. The default value is `en`.
+
+|URL parameter | Value | Description|
+|||--|
+|`language`|`en` | English|
+|`language`|`es` | Spanish|
+|`language`|`ja` | Japanese|
+|`language`|`pt` | Portuguese|
+|`language`|`zh` | Simplified Chinese|
+
+A populated URL might look like this:
+
+`https://<endpoint>/computervision/imageanalysis:analyze?api-version=2023-02-01-preview&features=Tags&language=en`
+
+#### [C#](#tab/csharp)
+
+Use the *language* parameter of [AnalyzeImageAsync](/dotnet/api/microsoft.azure.cognitiveservices.vision.computervision.computervisionclientextensions.analyzeimageasync?view=azure-dotnet#microsoft-azure-cognitiveservices-vision-computervision-computervisionclientextensions-analyzeimageasync(microsoft-azure-cognitiveservices-vision-computervision-icomputervisionclient-system-string-system-collections-generic-ilist((system-nullable((microsoft-azure-cognitiveservices-vision-computervision-models-visualfeaturetypes))))-system-collections-generic-ilist((system-nullable((microsoft-azure-cognitiveservices-vision-computervision-models-details))))-system-string-system-collections-generic-ilist((system-nullable((microsoft-azure-cognitiveservices-vision-computervision-models-descriptionexclude))))-system-string-system-threading-cancellationtoken&preserve-view=true)) call to specify a language. A method call that specifies a language might look like the following.
+
+```csharp
+ImageAnalysis results = await client.AnalyzeImageAsync(imageUrl, visualFeatures: features, language: "en");
+```
+
+#### [Java](#tab/java)
+
+Use the [AnalyzeImageOptionalParameter](/java/api/com.microsoft.azure.cognitiveservices.vision.computervision.models.analyzeimageoptionalparameter) input in your Analyze call to specify a language. A method call that specifies a language might look like the following.
++
+```java
+ImageAnalysis analysis = compVisClient.computerVision().analyzeImage().withUrl(pathToRemoteImage)
+ .withVisualFeatures(featuresToExtractFromLocalImage)
+ .language("en")
+ .execute();
+```
+
+#### [JavaScript](#tab/javascript)
+
+Use the **language** property of the [ComputerVisionClientAnalyzeImageOptionalParams](/javascript/api/@azure/cognitiveservices-computervision/computervisionmodels.computervisionclientanalyzeimageoptionalparams) input in your Analyze call to specify a language. A method call that specifies a language might look like the following.
+
+```javascript
+const result = (await computerVisionClient.analyzeImage(imageURL, {visualFeatures: features, language: 'en'}));
+```
+
+#### [Python](#tab/python)
+
+Use the *language* parameter of your [analyze_image](/python/api/azure-cognitiveservices-vision-computervision/azure.cognitiveservices.vision.computervision.operations.computervisionclientoperationsmixin?view=azure-python#azure-cognitiveservices-vision-computervision-operations-computervisionclientoperationsmixin-analyze-image&preserve-view=true) call to specify a language. A method call that specifies a language might look like the following.
+
+```python
+results_remote = computervision_client.analyze_image(remote_image_url, remote_image_features, remote_image_details, 'en')
+```
++++
+## Get results from the service
+
+This section shows you how to parse the results of the API call. It includes the API call itself.
+
+> [!NOTE]
+> **Scoped API calls**
+>
+> Some of the features in Image Analysis can be called directly as well as through the Analyze API call. For example, you can do a scoped analysis of only image tags by making a request to `https://<endpoint>/vision/v3.2/tag` (or to the corresponding method in the SDK). See the [reference documentation](https://westus.dev.cognitive.microsoft.com/docs/services/computer-vision-v3-2/operations/56f91f2e778daf14a499f21b) for other features that can be called separately.
+
+#### [REST](#tab/rest)
+
+The service returns a `200` HTTP response, and the body contains the returned data in the form of a JSON string. The following text is an example of a JSON response.
+
+```json
+{
+ "metadata":
+ {
+ "width": 300,
+ "height": 200
+ },
+ "tagsResult":
+ {
+ "values":
+ [
+ {
+ "name": "grass",
+ "confidence": 0.9960499405860901
+ },
+ {
+ "name": "outdoor",
+ "confidence": 0.9956876635551453
+ },
+ {
+ "name": "building",
+ "confidence": 0.9893627166748047
+ },
+ {
+ "name": "property",
+ "confidence": 0.9853052496910095
+ },
+ {
+ "name": "plant",
+ "confidence": 0.9791355729103088
+ }
+ ]
+ }
+}
+```
+
+### Error codes
+
+See the following list of possible errors and their causes:
+
+* 400
+ * `InvalidImageUrl` - Image URL is badly formatted or not accessible.
+ * `InvalidImageFormat` - Input data is not a valid image.
+ * `InvalidImageSize` - Input image is too large.
+ * `NotSupportedVisualFeature` - Specified feature type isn't valid.
+ * `NotSupportedImage` - Unsupported image, for example child pornography.
+ * `InvalidDetails` - Unsupported `detail` parameter value.
+ * `NotSupportedLanguage` - The requested operation isn't supported in the language specified.
+ * `BadArgument` - More details are provided in the error message.
+* 415 - Unsupported media type error. The Content-Type isn't in the allowed types:
+ * For an image URL, Content-Type should be `application/json`
+  * For binary image data, Content-Type should be `application/octet-stream` or `multipart/form-data`
+* 500
+ * `FailedToProcess`
+ * `Timeout` - Image processing timed out.
+ * `InternalServerError`
++
+#### [C#](#tab/csharp)
+
+The following code calls the Image Analysis API and prints the results to the console.
+
+[!code-csharp[](~/cognitive-services-quickstart-code/dotnet/ComputerVision/ImageAnalysisQuickstart.cs?name=snippet_analyze)]
+
+#### [Java](#tab/java)
+
+The following code calls the Image Analysis API and prints the results to the console.
+
+[!code-java[](~/cognitive-services-quickstart-code/java/ComputerVision/src/main/java/ImageAnalysisQuickstart.java?name=snippet_analyze)]
+
+#### [JavaScript](#tab/javascript)
+
+The following code calls the Image Analysis API and prints the results to the console.
+
+[!code-javascript[](~/cognitive-services-quickstart-code/javascript/ComputerVision/ImageAnalysisQuickstart.js?name=snippet_analyze)]
+
+#### [Python](#tab/python)
+
+The following code calls the Image Analysis API and prints the results to the console.
+
+[!code-python[](~/cognitive-services-quickstart-code/python/ComputerVision/ImageAnalysisQuickstart.py?name=snippet_analyze)]
++++
+> [!TIP]
+> While working with Azure AI Vision, you might encounter transient failures caused by [rate limits](https://azure.microsoft.com/pricing/details/cognitive-services/computer-vision/) enforced by the service, or other transient problems like network outages. For information about handling these types of failures, see [Retry pattern](/azure/architecture/patterns/retry) in the Cloud Design Patterns guide, and the related [Circuit Breaker pattern](/azure/architecture/patterns/circuit-breaker).
++
+## Next steps
+
+* Explore the [concept articles](../concept-object-detection.md) to learn more about each feature.
+* See the [API reference](https://aka.ms/vision-4-0-ref) to learn more about the API functionality.
ai-services Call Read Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/computer-vision/how-to/call-read-api.md
+
+ Title: How to call the Read API
+
+description: Learn how to call the Read API and configure its behavior in detail.
++++++++ Last updated : 11/03/2022+++
+# Call the Azure AI Vision 3.2 GA Read API
+
+In this guide, you'll learn how to call the v3.2 GA Read API to extract text from images. You'll learn the different ways you can configure the behavior of this API to meet your needs. This guide assumes you have already <a href="https://portal.azure.com/#create/Microsoft.CognitiveServicesComputerVision" title="created a Vision resource" target="_blank">created a Vision resource</a> and obtained a key and endpoint URL. If you haven't, follow a [quickstart](../quickstarts-sdk/client-library.md) to get started.
++
+## Input requirements
+
+The **Read** call takes images and documents as its input. They have the following requirements:
+
+* Supported file formats: JPEG, PNG, BMP, PDF, and TIFF
+* For PDF and TIFF files, up to 2000 pages (only the first two pages for the free tier) are processed.
+* The file size of images must be less than 500 MB (4 MB for the free tier), and image dimensions must be at least 50 x 50 pixels and at most 10,000 x 10,000 pixels. PDF files do not have a size limit.
+* The minimum height of the text to be extracted is 12 pixels for a 1024 x 768 image. This corresponds to about 8 font point text at 150 DPI.
+
+>[!NOTE]
+> You don't need to crop an image for text lines. Send the whole image to the Read API and it recognizes all of the text in it.
+
+## Determine how to process the data (optional)
+
+### Specify the OCR model
+
+By default, the service will use the latest generally available (GA) model to extract text. Starting with Read 3.2, a `model-version` parameter allows choosing between the GA and preview models for a given API version. The model you specify will be used to extract text with the Read operation.
+
+When using the Read operation, use the following values for the optional `model-version` parameter.
+
+|Value| Model used |
+|:--|:-|
+| Not provided | Latest GA model |
+| latest | Latest GA model|
+| [2022-04-30](../whats-new.md#may-2022) | Latest GA model. 164 languages for print text and 9 languages for handwritten text along with several enhancements on quality and performance |
+| [2022-01-30-preview](../whats-new.md#february-2022) | Preview model adds print text support for Hindi, Arabic and related languages. For handwritten text, adds support for Japanese and Korean. |
+| [2021-09-30-preview](../whats-new.md#september-2021) | Preview model adds print text support for Russian and other Cyrillic languages. For handwritten text, adds support for Chinese Simplified, French, German, Italian, Portuguese, and Spanish. |
+| 2021-04-12 | 2021 GA model |
+
+### Input language
+
+By default, the service extracts all text from your images or documents including mixed languages. The [Read operation](https://westus.dev.cognitive.microsoft.com/docs/services/computer-vision-v3-2/operations/5d986960601faab4bf452005) has an optional request parameter for language. Only provide a language code if you want to force the document to be processed as that specific language. Otherwise, the service may return incomplete and incorrect text.
+
+### Natural reading order output (Latin languages only)
+
+By default, the service outputs the text lines in left-to-right order. Optionally, with the `readingOrder` request parameter, use `natural` for a more human-friendly reading order output as shown in the following example. This feature is only supported for Latin languages.
++
+### Select page(s) or page ranges for text extraction
+
+By default, the service extracts text from all pages in the documents. Optionally, use the `pages` request parameter to specify page numbers or page ranges to extract text from only those pages. The following example shows a document with 10 pages, with text extracted for both cases - all pages (1-10) and selected pages (3-6).
++
+## Submit data to the service
+
+You submit either a local image or a remote image to the Read API. For local, you put the binary image data in the HTTP request body. For remote, you specify the image's URL by formatting the request body like the following: `{"url":"http://example.com/images/test.jpg"}`.
+
+The Read API's [Read call](https://centraluseuap.dev.cognitive.microsoft.com/docs/services/computer-vision-v3-2/operations/5d986960601faab4bf452005) takes an image or PDF document as the input and extracts text asynchronously.
+
+`https://{endpoint}/vision/v3.2/read/analyze[?language][&pages][&readingOrder]`
+
+The call returns with a response header field called `Operation-Location`. The `Operation-Location` value is a URL that contains the Operation ID to be used in the next step.
+
+|Response header| Example value |
+|:--|:-|
+|Operation-Location | `https://cognitiveservice/vision/v3.2/read/analyzeResults/49a36324-fc4b-4387-aa06-090cfbf0064f` |
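+
+As a rough sketch, a cURL submission might look like the following. The endpoint, key, and image URL are placeholders, and the `-i` flag prints the response headers so you can copy the `Operation-Location` value:
+
+```bash
+# Hypothetical example: submit an image to the Read operation and show response headers
+curl -i -X POST "https://<endpoint>/vision/v3.2/read/analyze?language=en&readingOrder=natural" \
+  -H "Ocp-Apim-Subscription-Key: <subscriptionKey>" \
+  -H "Content-Type: application/json" \
+  -d "{\"url\":\"http://example.com/images/test.jpg\"}"
+```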
+
+> [!NOTE]
+> **Billing**
+>
+> The [Azure AI Vision pricing](https://azure.microsoft.com/pricing/details/cognitive-services/computer-vision/) page includes the pricing tier for Read. Each analyzed image or page is one transaction. If you call the operation with a PDF or TIFF document containing 100 pages, the Read operation will count it as 100 transactions and you will be billed for 100 transactions. If you made 50 calls to the operation and each call submitted a document with 100 pages, you will be billed for 50 x 100 = 5,000 transactions.
++
+## Get results from the service
+
+The second step is to call the [Get Read Results](https://centraluseuap.dev.cognitive.microsoft.com/docs/services/computer-vision-v3-2/operations/5d9869604be85dee480c8750) operation. This operation takes as input the operation ID that was created by the Read operation.
+
+`https://{endpoint}/vision/v3.2/read/analyzeResults/{operationId}`
+
+It returns a JSON response that contains a **status** field with the following possible values.
+
+|Value | Meaning |
+|:--|:-|
+| `notStarted`| The operation has not started. |
+| `running`| The operation is being processed. |
+| `failed`| The operation has failed. |
+| `succeeded`| The operation has succeeded. |
+
+You call this operation iteratively until it returns with the **succeeded** value. Use an interval of 1 to 2 seconds to avoid exceeding the requests per second (RPS) rate.
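+
+As a sketch, a polling request with cURL might look like the following, where the endpoint, key, and operation ID are placeholders:
+
+```bash
+# Hypothetical example: poll the Get Read Results operation until the status is "succeeded"
+curl "https://<endpoint>/vision/v3.2/read/analyzeResults/<operationId>" \
+  -H "Ocp-Apim-Subscription-Key: <subscriptionKey>"
+```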
+
+> [!NOTE]
+> The free tier limits the request rate to 20 calls per minute. The paid tier allows 30 requests per second (RPS) that can be increased upon request. Note your Azure resource identifier and region, and open an Azure support ticket or contact your account team to request a higher request per second (RPS) rate.
+
+When the **status** field has the `succeeded` value, the JSON response contains the extracted text content from your image or document. The JSON response maintains the original line groupings of recognized words. It includes the extracted text lines and their bounding box coordinates. Each text line includes all extracted words with their coordinates and confidence scores.
+
+> [!NOTE]
+> The data submitted to the `Read` operation are temporarily encrypted and stored at rest for a short duration, and then deleted. This lets your applications retrieve the extracted text as part of the service response.
+
+### Sample JSON output
+
+See the following example of a successful JSON response:
+
+```json
+{
+ "status": "succeeded",
+ "createdDateTime": "2021-02-04T06:32:08.2752706+00:00",
+ "lastUpdatedDateTime": "2021-02-04T06:32:08.7706172+00:00",
+ "analyzeResult": {
+ "version": "3.2",
+ "readResults": [
+ {
+ "page": 1,
+ "angle": 2.1243,
+ "width": 502,
+ "height": 252,
+ "unit": "pixel",
+ "lines": [
+ {
+ "boundingBox": [
+ 58,
+ 42,
+ 314,
+ 59,
+ 311,
+ 123,
+ 56,
+ 121
+ ],
+ "text": "Tabs vs",
+ "appearance": {
+ "style": {
+ "name": "handwriting",
+ "confidence": 0.96
+ }
+ },
+ "words": [
+ {
+ "boundingBox": [
+ 68,
+ 44,
+ 225,
+ 59,
+ 224,
+ 122,
+ 66,
+ 123
+ ],
+ "text": "Tabs",
+ "confidence": 0.933
+ },
+ {
+ "boundingBox": [
+ 241,
+ 61,
+ 314,
+ 72,
+ 314,
+ 123,
+ 239,
+ 122
+ ],
+ "text": "vs",
+ "confidence": 0.977
+ }
+ ]
+ }
+ ]
+ }
+ ]
+ }
+}
+```
+
+### Handwritten classification for text lines (Latin languages only)
+
+The response classifies each text line as handwritten or not, along with a confidence score. This feature is only supported for Latin languages. The following example shows the handwritten classification for the text in the image.
++
+## Next steps
+
+- Get started with the [OCR (Read) REST API or client library quickstarts](../quickstarts-sdk/client-library.md).
+- Learn about the [Read 3.2 REST API](https://westus.dev.cognitive.microsoft.com/docs/services/computer-vision-v3-2/operations/5d986960601faab4bf452005).
ai-services Coco Verification https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/computer-vision/how-to/coco-verification.md
+
+ Title: Verify a COCO annotation file
+
+description: Use a Python script to verify your COCO file for custom model training.
++++++ Last updated : 03/21/2023+++
+# Check the format of your COCO annotation file
+
+<!-- nbstart https://raw.githubusercontent.com/Azure-Samples/cognitive-service-vision-model-customization-python-samples/main/docs/check_coco_annotation.ipynb -->
+
+> [!TIP]
+> Contents of _check_coco_annotation.ipynb_. **[Open in GitHub](https://github.com/Azure-Samples/cognitive-service-vision-model-customization-python-samples/blob/main/docs/check_coco_annotation.ipynb)**.
+
+This notebook demonstrates how to check if the format of your annotation file is correct. First, install the Python samples package from the command line:
+
+```bash
+pip install cognitive-service-vision-model-customization-python-samples
+```
+
+Then, run the following Python code to check the file's format. You can either enter this code in a Python script, or run the [Jupyter Notebook](https://github.com/Azure-Samples/cognitive-service-vision-model-customization-python-samples/blob/main/docs/check_coco_annotation.ipynb) on a compatible platform.
+
+```python
+from cognitive_service_vision_model_customization_python_samples import check_coco_annotation_file, AnnotationKind, Purpose
+import pathlib
+import json
+
+coco_file_path = pathlib.Path("{your_coco_file_path}")
+annotation_kind = AnnotationKind.MULTICLASS_CLASSIFICATION # or AnnotationKind.OBJECT_DETECTION
+purpose = Purpose.TRAINING # or Purpose.EVALUATION
+
+check_coco_annotation_file(json.loads(coco_file_path.read_text()), annotation_kind, purpose)
+```
+
+<!-- nbend -->
+
+## Use COCO file in a new project
+
+Once your COCO file is verified, you're ready to import it to your model customization project. See [Create and train a custom model](model-customization.md) and go to the section on selecting/importing a COCO file&mdash;you can follow the guide from there to the end.
+
+## Next steps
+
+* [Create and train a custom model](model-customization.md)
ai-services Find Similar Faces https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/computer-vision/how-to/find-similar-faces.md
+
+ Title: "Find similar faces"
+
+description: Use the Face service to find similar faces (face search by image).
+++++++ Last updated : 11/07/2022++++
+# Find similar faces
++
+The Find Similar operation does face matching between a target face and a set of candidate faces, finding a smaller set of faces that look similar to the target face. This is useful for doing a face search by image.
+
+This guide demonstrates how to use the Find Similar feature in the different language SDKs. The following sample code assumes you have already authenticated a Face client object. For details on how to do this, follow a [quickstart](../quickstarts-sdk/identity-client-library.md).
+
+## Set up sample URL
+
+This guide uses remote images that are accessed by URL. Save a reference to the following URL string. All of the images accessed in this guide are located at this URL path.
+
+```
+https://raw.githubusercontent.com/Azure-Samples/cognitive-services-sample-data-files/master/Face/images/
+```
+
+## Detect faces for comparison
+
+You need to detect faces in images before you can compare them. In this guide, the following remote image, called *findsimilar.jpg*, will be used as the source:
+
+![Photo of a man who is smiling.](../media/quickstarts/find-similar.jpg)
+
+#### [C#](#tab/csharp)
+
+The following face detection method is optimized for comparison operations. It doesn't extract detailed face attributes, and it uses an optimized recognition model.
+
+[!code-csharp[](~/cognitive-services-quickstart-code/dotnet/Face/FaceQuickstart.cs?name=snippet_face_detect_recognize)]
+
+The following code uses the above method to get face data from a series of images.
+
+[!code-csharp[](~/cognitive-services-quickstart-code/dotnet/Face/FaceQuickstart.cs?name=snippet_loadfaces)]
++
+#### [JavaScript](#tab/javascript)
+
+The following face detection method is optimized for comparison operations. It doesn't extract detailed face attributes, and it uses an optimized recognition model.
++
+The following code uses the above method to get face data from a series of images.
+++
+#### [REST API](#tab/rest)
+
+Copy the following cURL command and insert your key and endpoint where appropriate. Then run the command to detect one of the target faces.
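+
+A sketch of such a command, assuming the standard Face Detect endpoint and one of the sample images under the URL path above, might look like the following. The endpoint and key are placeholders:
+
+```bash
+# Hypothetical example: detect a face in one of the sample target images
+curl -X POST "https://<endpoint>/face/v1.0/detect?detectionModel=detection_03&recognitionModel=recognition_04&returnFaceId=true" \
+  -H "Ocp-Apim-Subscription-Key: <key>" \
+  -H "Content-Type: application/json" \
+  -d "{\"url\":\"https://raw.githubusercontent.com/Azure-Samples/cognitive-services-sample-data-files/master/Face/images/Family1-Dad1.jpg\"}"
+```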
++
+Find the `"faceId"` value in the JSON response and save it to a temporary location. Then, call the above command again for these other image URLs, and save their face IDs as well. You'll use these IDs as the target group of faces from which to find a similar face.
++
+Finally, detect the single source face that you'll use for matching, and save its ID. Keep this ID separate from the others.
++++
+## Find and print matches
+
+In this guide, the face detected in the *Family1-Dad1.jpg* image should be returned as the face that's similar to the source image face.
+
+![Photo of a man who is smiling; this is the same person as the previous image.](../media/quickstarts/family-1-dad-1.jpg)
+
+#### [C#](#tab/csharp)
+
+The following code calls the Find Similar API on the saved list of faces.
+
+[!code-csharp[](~/cognitive-services-quickstart-code/dotnet/Face/FaceQuickstart.cs?name=snippet_find_similar)]
+
+The following code prints the match details to the console:
+
+[!code-csharp[](~/cognitive-services-quickstart-code/dotnet/Face/FaceQuickstart.cs?name=snippet_find_similar_print)]
+
+#### [JavaScript](#tab/javascript)
+
+The following method takes a set of target faces and a single source face. Then, it compares them and finds all the target faces that are similar to the source face. Finally, it prints the match details to the console.
+++
+#### [REST API](#tab/rest)
+
+Copy the following cURL command and insert your key and endpoint where appropriate.
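+
+A sketch of the command, assuming the standard Find Similar endpoint, might look like the following. The endpoint and key are placeholders, and `body.json` holds the JSON shown in the next step:
+
+```bash
+# Hypothetical example: call Find Similar with a request body saved in body.json
+curl -X POST "https://<endpoint>/face/v1.0/findsimilars" \
+  -H "Ocp-Apim-Subscription-Key: <key>" \
+  -H "Content-Type: application/json" \
+  -d @body.json
+```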
++
+Paste in the following JSON content for the `body` value:
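+
+A sketch of the body, with placeholder face IDs, might look like this (the `maxNumOfCandidatesReturned` and `mode` fields are optional):
+
+```json
+{
+    "faceId": "<source-face-id>",
+    "faceIds": [
+        "<target-face-id-1>",
+        "<target-face-id-2>",
+        "<target-face-id-3>"
+    ],
+    "maxNumOfCandidatesReturned": 10,
+    "mode": "matchPerson"
+}
+```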
++
+Copy the source face ID value into the `"faceId"` field, and then copy the other face IDs, separated by commas, into the `"faceIds"` array.
+
+Run the command, and the returned JSON should show the correct face ID as a similar match.
+++
+## Next steps
+
+In this guide, you learned how to call the Find Similar API to do a face search by similarity in a larger group of faces. Next, learn more about the different recognition models available for face comparison operations.
+
+* [Specify a face recognition model](specify-recognition-model.md)
ai-services Generate Thumbnail https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/computer-vision/how-to/generate-thumbnail.md
+
+ Title: "Generate a smart-cropped thumbnail - Image Analysis"
+
+description: Use the Image Analysis REST API to generate a thumbnail with smart cropping.
+++++++ Last updated : 07/20/2022+++
+# Generate a smart-cropped thumbnail
+
+You can use Image Analysis to generate a thumbnail with smart cropping. You specify the desired height and width, which can differ in aspect ratio from the input image. Image Analysis uses smart cropping to intelligently identify the area of interest and generate cropping coordinates around that region.
+
+## Call the Generate Thumbnail API
+
+To call the API, do the following steps:
+
+1. Copy the following command into a text editor.
+1. Make the following changes in the command where needed:
+ 1. Replace the value of `<subscriptionKey>` with your key.
+ 1. Replace the value of `<thumbnailFile>` with the path and name of the file in which to save the returned thumbnail image.
+    1. Replace the first part of the request URL (`westus`) with the text in your own endpoint URL.
+ [!INCLUDE [Custom subdomains notice](../../../../includes/cognitive-services-custom-subdomains-note.md)]
+    1. Optionally, change the image URL in the request body (`https://upload.wikimedia.org/wikipedia/commons/thumb/5/56/Shorkie_Poo_Puppy.jpg/1280px-Shorkie_Poo_Puppy.jpg`) to the URL of a different image from which to generate a thumbnail.
+1. Open a command prompt window.
+1. Paste the command from the text editor into the command prompt window.
+1. Press enter to run the program.
+
+ ```bash
+ curl -H "Ocp-Apim-Subscription-Key: <subscriptionKey>" -o <thumbnailFile> -H "Content-Type: application/json" "https://westus.api.cognitive.microsoft.com/vision/v3.2/generateThumbnail?width=100&height=100&smartCropping=true" -d "{\"url\":\"https://upload.wikimedia.org/wikipedia/commons/thumb/5/56/Shorkie_Poo_Puppy.jpg/1280px-Shorkie_Poo_Puppy.jpg\"}"
+ ```
+
+## Examine the response
+
+A successful response writes the thumbnail image to the file specified in `<thumbnailFile>`. If the request fails, the response contains an error code and a message to help determine what went wrong. If the request seems to succeed but the created thumbnail isn't a valid image file, it's possible that your key is not valid.
+
+## Next steps
+
+If you'd like to call Image Analysis APIs using a native SDK in the language of your choice, follow the quickstart to get set up.
+
+[Quickstart (Image Analysis)](../quickstarts-sdk/image-analysis-client-library.md)
ai-services Identity Access Token https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/computer-vision/how-to/identity-access-token.md
+
+ Title: "Use limited access tokens - Face"
+
+description: Learn how ISVs can manage the Face API usage of their clients by issuing access tokens that grant access to Face features which are normally gated.
+++++++ Last updated : 05/11/2023+++
+# Use limited access tokens for Face
+
+Independent software vendors (ISVs) can manage the Face API usage of their clients by issuing access tokens that grant access to Face features which are normally gated. This allows client companies to use the Face API without having to go through the formal approval process.
+
+This guide shows you how to generate the access tokens, if you're an approved ISV, and how to use the tokens if you're a client.
+
+The LimitedAccessToken feature is a part of the existing [Azure AI services token service](https://westus.dev.cognitive.microsoft.com/docs/services/57346a70b4769d2694911369/operations/issueScopedToken). We have added a new operation for the purpose of bypassing the Limited Access gate for approved scenarios. Only ISVs that pass the gating requirements will be given access to this feature.
+
+## Example use case
+
+A company sells software that uses the Azure AI Face service to operate door access security systems. Their clients, individual manufacturers of door devices, subscribe to the software and run it on their devices. These client companies want to make Face API calls from their devices to perform Limited Access operations like face identification. By relying on access tokens from the ISV, they can bypass the formal approval process for face identification. The ISV, which has already been approved, can grant the client just-in-time access tokens.
+
+## Expectation of responsibility
+
+The issuing ISV is responsible for ensuring that the tokens are used only for the approved purpose.
+
+If the ISV learns that a client is using the LimitedAccessToken for non-approved purposes, the ISV should stop generating tokens for that customer. Microsoft can track the issuance and usage of LimitedAccessTokens, and we reserve the right to revoke an ISV's access to the **issueLimitedAccessToken** API if abuse is not addressed.
+
+## Prerequisites
+
+* [cURL](https://curl.haxx.se/) installed (or another tool that can make HTTP requests).
+* The ISV needs to have either an [Azure AI Face](https://ms.portal.azure.com/#view/Microsoft_Azure_ProjectOxford/CognitiveServicesHub/~/Face) resource or an [Azure AI services multi-service](https://ms.portal.azure.com/#view/Microsoft_Azure_ProjectOxford/CognitiveServicesHub/~/AllInOne) resource.
+* The client needs to have an [Azure AI Face](https://ms.portal.azure.com/#view/Microsoft_Azure_ProjectOxford/CognitiveServicesHub/~/Face) resource.
+
+## Step 1: ISV obtains client's Face resource ID
+
+The ISV should set up a communication channel between their own secure cloud service (which will generate the access token) and their application running on the client's device. The client's Face resource ID must be known prior to generating the LimitedAccessToken.
+
+The Face resource ID has the following format:
+
+`/subscriptions/<subscription-id>/resourceGroups/<resource-group-name>/providers/Microsoft.CognitiveServices/accounts/<face-resource-name>`
+
+For example:
+
+`/subscriptions/dc4d27d9-ea49-4921-938f-7782a774e151/resourceGroups/client-rg/providers/Microsoft.CognitiveServices/accounts/client-face-api`
+
+## Step 2: ISV generates a token
+
+The ISV's cloud service, running in a secure environment, calls the **issueLimitedAccessToken** API using their end customer's known Face resource ID.
+
+To call the **issueLimitedAccessToken** API, copy the following cURL command to a text editor.
+
+```bash
+curl -X POST 'https://<isv-endpoint>/sts/v1.0/issueLimitedAccessToken?expiredTime=3600' \
+-H 'Ocp-Apim-Subscription-Key: <client-face-key>' \
+-H 'Content-Type: application/json' \
+-d '{
+ "resourceId": "/subscriptions/<subscription-id>/resourceGroups/<resource-group-name>/providers/Microsoft.CognitiveServices/accounts/<face-resource-name>",
+ "featureFlags": ["Face.Identification", "Face.Verification"]
+}'
+```
+
+Then, make the following changes:
+1. Replace `<isv-endpoint>` with the endpoint of the ISV's resource. For example, **westus.api.cognitive.microsoft.com**.
+1. Optionally, use the `expiredTime` parameter to set the expiration time of the token in seconds. It must be between 60 and 86400. The default value is 3600 (one hour).
+1. Replace `<client-face-key>` with the key of the client's Face resource.
+1. Replace `<subscription-id>` with the subscription ID of the client's Azure subscription.
+1. Replace `<resource-group-name>` with the name of the client's resource group.
+1. Replace `<face-resource-name>` with the name of the client's Face resource.
+1. Set `"featureFlags"` to the set of access roles you want to grant. The available flags are `"Face.Identification"`, `"Face.Verification"`, and `"LimitedAccess.HighRisk"`. An ISV can only grant permissions that it has been granted itself by Microsoft. For example, if the ISV has been granted access to face identification, it can create a LimitedAccessToken for **Face.Identification** for the client. All token creations and uses are logged for usage and security purposes.
+
+Then, paste the command into a terminal window and run it.
+
+The API should return a `200` response with the token in the form of a JSON web token (`application/jwt`). If you want to inspect the LimitedAccessToken, you can do so using [JWT](https://jwt.io/).
+
+## Step 3: Client application uses the token
+
+The ISV's application can then pass the LimitedAccessToken as an HTTP request header for future Face API requests on behalf of the client. This works independently of other authentication mechanisms, so no personal information of the client's is ever leaked to the ISV.
+
+> [!CAUTION]
+> The client doesn't need to be aware of the token value, as it can be passed in the background. If the client were to use a web monitoring tool to intercept the traffic, they'd be able to view the LimitedAccessToken header. However, because the token expires after a short period of time, they are limited in what they can do with it. This risk is known and considered acceptable.
+>
+> It's up to each ISV to decide exactly how it passes the token from its cloud service to the client application.
+
+#### [REST API](#tab/rest)
+
+An example Face API request using the access token looks like this:
+
+```bash
+curl -X POST 'https://<client-endpoint>/face/v1.0/identify' \
+-H 'Ocp-Apim-Subscription-Key: <client-face-key>' \
+-H 'LimitedAccessToken: Bearer <token>' \
+-H 'Content-Type: application/json' \
+-d '{
+ "largePersonGroupId": "sample_group",
+ "faceIds": [
+ "c5c24a82-6845-4031-9d5d-978df9175426",
+ "65d083d4-9447-47d1-af30-b626144bf0fb"
+ ],
+ "maxNumOfCandidatesReturned": 1,
+ "confidenceThreshold": 0.5
+}'
+```
+
+> [!NOTE]
+> The endpoint URL and Face key belong to the client's Face resource, not the ISV's resource. The `<token>` is passed as an HTTP request header.
+
+#### [C#](#tab/csharp)
+
+The following code snippets show you how to use an access token with the [Face SDK for C#](https://www.nuget.org/packages/Microsoft.Azure.CognitiveServices.Vision.Face).
+
+The following class uses an access token to create a **ServiceClientCredentials** object that can be used to authenticate a Face API client object. It automatically adds the access token as a header in every request that the Face client will make.
+
+```csharp
+public class LimitedAccessTokenWithApiKeyClientCredential : ServiceClientCredentials
+{
+ /// <summary>
+ /// Creates a new instance of the LimitedAccessTokenWithApiKeyClientCredential class
+ /// </summary>
+ /// <param name="apiKey">API Key for the Face API or CognitiveService endpoint</param>
+    /// <param name="limitedAccessToken">LimitedAccessToken to bypass the limited access program, requires ISV sponsorship.</param>
+
+ public LimitedAccessTokenWithApiKeyClientCredential(string apiKey, string limitedAccessToken)
+ {
+ this.ApiKey = apiKey;
+ this.LimitedAccessToken = limitedAccessToken;
+ }
+
+ private readonly string ApiKey;
+    private readonly string LimitedAccessToken;
+
+    /// <summary>
+    /// Add the subscription key and LimitedAccessToken headers to each outgoing request
+    /// </summary>
+    /// <param name="request">The outgoing request</param>
+    /// <param name="cancellationToken">A token to cancel the operation</param>
+    public override Task ProcessHttpRequestAsync(HttpRequestMessage request, CancellationToken cancellationToken)
+    {
+        if (request == null)
+            throw new ArgumentNullException("request");
+        request.Headers.Add("Ocp-Apim-Subscription-Key", ApiKey);
+        request.Headers.Add("LimitedAccessToken", $"Bearer {LimitedAccessToken}");
+
+ return Task.FromResult<object>(null);
+ }
+}
+```
+
+In the client-side application, the helper class can be used like in this example:
+
+```csharp
+static void Main(string[] args)
+{
+ // create Face client object
+ var faceClient = new FaceClient(new LimitedAccessTokenWithApiKeyClientCredential(apiKey: "<client-face-key>", limitedAccessToken: "<token>"));
+
+ faceClient.Endpoint = "https://willtest-eastus2.cognitiveservices.azure.com";
+
+ // use Face client in an API call
+ using (var stream = File.OpenRead("photo.jpg"))
+ {
+ var result = faceClient.Face.DetectWithStreamAsync(stream, detectionModel: "Detection_03", recognitionModel: "Recognition_04", returnFaceId: true).Result;
+
+ Console.WriteLine(JsonConvert.SerializeObject(result));
+ }
+}
+```
++
+## Next steps
+* [LimitedAccessToken API reference](https://westus.dev.cognitive.microsoft.com/docs/services/57346a70b4769d2694911369/operations/issueLimitedAccessToken)
ai-services Identity Detect Faces https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/computer-vision/how-to/identity-detect-faces.md
+
+ Title: "Call the Detect API - Face"
+
+description: This guide demonstrates how to use face detection to extract attributes like age, emotion, or head pose from a given image.
+++++++ Last updated : 12/27/2022+
+ms.devlang: csharp
+++
+# Call the Detect API
+++
+This guide demonstrates how to use the face detection API to extract attributes like age, emotion, or head pose from a given image. You'll learn the different ways to configure the behavior of this API to meet your needs.
+
+The code snippets in this guide are written in C# by using the Azure AI Face client library. The same functionality is available through the [REST API](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f30395236).
++
+## Setup
+
+This guide assumes that you already constructed a [FaceClient](/dotnet/api/microsoft.azure.cognitiveservices.vision.face.faceclient) object, named `faceClient`, using a Face key and endpoint URL. For instructions on how to set up this feature, follow one of the quickstarts.
+
+## Submit data to the service
+
+To find faces and get their locations in an image, call the [DetectWithUrlAsync](/dotnet/api/microsoft.azure.cognitiveservices.vision.face.faceoperationsextensions.detectwithurlasync) or [DetectWithStreamAsync](/dotnet/api/microsoft.azure.cognitiveservices.vision.face.faceoperationsextensions.detectwithstreamasync) method. **DetectWithUrlAsync** takes a URL string as input, and **DetectWithStreamAsync** takes the raw byte stream of an image as input.
++
+You can query the returned [DetectedFace](/dotnet/api/microsoft.azure.cognitiveservices.vision.face.models.detectedface) objects for the rectangles that give the pixel coordinates of each face. If you set _returnFaceId_ to `true` (approved customers only), you can get the unique ID for each face, which you can use in later face recognition tasks.
++
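+As a reference point, here's a minimal sketch of such a call. The image URL is a placeholder, and `faceClient` is the client from the Setup section.
+
+```csharp
+// Detect faces in a remote image and print each face rectangle.
+IList<DetectedFace> faces = await faceClient.Face.DetectWithUrlAsync(
+    "<url-of-an-image-with-faces>",
+    returnFaceId: false, // set to true only if your subscription is approved for face IDs
+    detectionModel: DetectionModel.Detection03,
+    recognitionModel: RecognitionModel.Recognition04);
+
+foreach (DetectedFace face in faces)
+{
+    FaceRectangle rect = face.FaceRectangle;
+    Console.WriteLine($"Face at left={rect.Left}, top={rect.Top}, size={rect.Width}x{rect.Height}");
+}
+```
+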
+For information on how to parse the location and dimensions of the face, see [FaceRectangle](/dotnet/api/microsoft.azure.cognitiveservices.vision.face.models.facerectangle). Usually, this rectangle contains the eyes, eyebrows, nose, and mouth. The top of head, ears, and chin aren't necessarily included. To use the face rectangle to crop a complete head or get a mid-shot portrait, you should expand the rectangle in each direction.
+
+## Determine how to process the data
+
+This guide focuses on the specifics of the Detect call, such as what arguments you can pass and what you can do with the returned data. We recommend that you query for only the features you need, because each additional feature adds processing time.
+
+### Get face landmarks
+
+[Face landmarks](../concept-face-detection.md#face-landmarks) are a set of easy-to-find points on a face, such as the pupils or the tip of the nose. To get face landmark data, set the _detectionModel_ parameter to `DetectionModel.Detection01` and the _returnFaceLandmarks_ parameter to `true`.
++
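+A minimal sketch of that call, using a placeholder image URL:
+
+```csharp
+// Request face landmarks using detection model 1.
+IList<DetectedFace> faces = await faceClient.Face.DetectWithUrlAsync(
+    "<url-of-an-image-with-faces>",
+    returnFaceId: false,
+    returnFaceLandmarks: true,
+    detectionModel: DetectionModel.Detection01);
+```
+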
+### Get face attributes
+
+Besides face rectangles and landmarks, the face detection API can analyze several conceptual attributes of a face. For a full list, see the [Face attributes](../concept-face-detection.md#attributes) conceptual section.
+
+To analyze face attributes, set the _detectionModel_ parameter to `DetectionModel.Detection01` and the _returnFaceAttributes_ parameter to a list of [FaceAttributeType Enum](/dotnet/api/microsoft.azure.cognitiveservices.vision.face.models.faceattributetype) values.
+++
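+A minimal sketch of such a call follows. Depending on your SDK version, the attribute list may be typed as `IList<FaceAttributeType>` rather than `IList<FaceAttributeType?>`.
+
+```csharp
+// Request a few example attributes using detection model 1.
+var requestedAttributes = new List<FaceAttributeType?>
+{
+    FaceAttributeType.HeadPose,
+    FaceAttributeType.Glasses,
+    FaceAttributeType.Emotion
+};
+
+IList<DetectedFace> faces = await faceClient.Face.DetectWithUrlAsync(
+    "<url-of-an-image-with-faces>",
+    returnFaceId: false,
+    returnFaceAttributes: requestedAttributes,
+    detectionModel: DetectionModel.Detection01);
+```
+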
+## Get results from the service
+
+### Face landmark results
+
+The following code demonstrates how you might retrieve the locations of the nose and pupils:
++
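+For example, a minimal sketch that assumes `faces` holds the results of a detect call made with `returnFaceLandmarks: true`:
+
+```csharp
+foreach (DetectedFace face in faces)
+{
+    FaceLandmarks landmarks = face.FaceLandmarks;
+    if (landmarks == null) continue;
+
+    Console.WriteLine($"Nose tip at ({landmarks.NoseTip.X}, {landmarks.NoseTip.Y})");
+    Console.WriteLine($"Left pupil at ({landmarks.PupilLeft.X}, {landmarks.PupilLeft.Y})");
+    Console.WriteLine($"Right pupil at ({landmarks.PupilRight.X}, {landmarks.PupilRight.Y})");
+}
+```
+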
+You also can use face landmark data to accurately calculate the direction of the face. For example, you can define the rotation of the face as a vector from the center of the mouth to the center of the eyes. The following code calculates this vector:
++
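+One way to compute such a vector, sketched with the landmarks from the previous example (image y-coordinates increase downward):
+
+```csharp
+FaceLandmarks landmarks = faces[0].FaceLandmarks;
+
+// Center of the mouth and center of the eyes.
+double mouthCenterX = (landmarks.MouthLeft.X + landmarks.MouthRight.X) / 2;
+double mouthCenterY = (landmarks.MouthLeft.Y + landmarks.MouthRight.Y) / 2;
+double eyeCenterX = (landmarks.PupilLeft.X + landmarks.PupilRight.X) / 2;
+double eyeCenterY = (landmarks.PupilLeft.Y + landmarks.PupilRight.Y) / 2;
+
+// Vector from the mouth center to the eye center.
+double directionX = eyeCenterX - mouthCenterX;
+double directionY = eyeCenterY - mouthCenterY;
+
+// Approximate rotation from upright, in degrees (0 means the eyes are directly above the mouth).
+double rotationInDegrees = Math.Atan2(directionX, -directionY) * 180 / Math.PI;
+```
+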
+When you know the direction of the face, you can rotate the rectangular face frame to align it more properly. To crop faces in an image, you can programmatically rotate the image so the faces always appear upright.
++
+### Face attribute results
+
+The following code shows how you might retrieve the face attribute data that you requested in the original call.
++
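+For example, a minimal sketch that assumes `faces` comes from a detect call requesting the `HeadPose`, `Glasses`, and `Emotion` attributes:
+
+```csharp
+foreach (DetectedFace face in faces)
+{
+    FaceAttributes attributes = face.FaceAttributes;
+    if (attributes == null) continue;
+
+    // Head pose angles are reported in degrees.
+    Console.WriteLine($"Head pose: pitch={attributes.HeadPose.Pitch}, roll={attributes.HeadPose.Roll}, yaw={attributes.HeadPose.Yaw}");
+    Console.WriteLine($"Glasses: {attributes.Glasses}");
+    Console.WriteLine($"Happiness: {attributes.Emotion.Happiness}");
+}
+```
+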
+To learn more about each of the attributes, see the [Face detection and attributes](../concept-face-detection.md) conceptual guide.
+
+## Next steps
+
+In this guide, you learned how to use the various functionalities of face detection and analysis. Next, integrate these features into an app to add face data from users.
+
+- [Tutorial: Add users to a Face service](../enrollment-overview.md)
+
+## Related articles
+
+- [Reference documentation (REST)](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f30395236)
+- [Reference documentation (.NET SDK)](/dotnet/api/overview/azure/cognitiveservices/face-readme)
ai-services Image Retrieval https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/computer-vision/how-to/image-retrieval.md
+
+ Title: Do image retrieval using vectorization - Image Analysis 4.0
+
+description: Learn how to call the image retrieval API to vectorize image and search terms.
+++++++ Last updated : 02/21/2023++++
+# Do image retrieval using vectorization (version 4.0 preview)
+
+The Image Retrieval APIs enable the _vectorization_ of images and text queries. They convert images to coordinates in a multi-dimensional vector space. Then, incoming text queries can also be converted to vectors, and images can be matched to the text based on semantic closeness. This allows the user to search a set of images using text, without the need to use image tags or other metadata. Semantic closeness often produces better results in search.
+
+> [!IMPORTANT]
+> These APIs are only available in the following geographic regions: East US, France Central, Korea Central, North Europe, Southeast Asia, West Europe, West US.
+
+## Prerequisites
+
+* Azure subscription - [Create one for free](https://azure.microsoft.com/free/cognitive-services)
+* Once you have your Azure subscription, <a href="https://portal.azure.com/#create/Microsoft.CognitiveServicesComputerVision" title="Create a Vision resource" target="_blank">create a Vision resource </a> in the Azure portal to get your key and endpoint. Be sure to create it in one of the permitted geographic regions: East US, France Central, Korea Central, North Europe, Southeast Asia, West Europe, West US.
+ * After it deploys, select **Go to resource**. Copy the key and endpoint to a temporary location to use later on.
+
+## Try out Image Retrieval
+
+You can try out the Image Retrieval feature quickly and easily in your browser using Vision Studio.
+
+> [!IMPORTANT]
+> The Vision Studio experience is limited to 500 images. To use a larger image set, create your own search application using the APIs in this guide.
+
+> [!div class="nextstepaction"]
+> [Try Vision Studio](https://portal.vision.cognitive.azure.com/)
+
+## Call the Vectorize Image API
+
+The `retrieval:vectorizeImage` API lets you convert an image's data to a vector. To call it, make the following changes to the cURL command below:
+
+1. Replace `<endpoint>` with your Azure AI Vision endpoint.
+1. Replace `<subscription-key>` with your Azure AI Vision key.
+1. In the request body, set `"url"` to the URL of a remote image you want to use.
+
+```bash
+curl.exe -v -X POST "https://<endpoint>/computervision/retrieval:vectorizeImage?api-version=2023-02-01-preview&modelVersion=latest" -H "Content-Type: application/json" -H "Ocp-Apim-Subscription-Key: <subscription-key>" --data-ascii "
+{
+'url':'https://learn.microsoft.com/azure/ai-services/computer-vision/media/quickstarts/presentation.png'
+}"
+```
+
+To vectorize a local image, you'd put the binary image data in the HTTP request body.
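+
+For example, a sketch of the same call with a local file. This assumes the endpoint accepts raw binary data with an `application/octet-stream` content type, which is the usual pattern for Image Analysis calls; the file name is a placeholder.
+
+```bash
+curl.exe -v -X POST "https://<endpoint>/computervision/retrieval:vectorizeImage?api-version=2023-02-01-preview&modelVersion=latest" -H "Content-Type: application/octet-stream" -H "Ocp-Apim-Subscription-Key: <subscription-key>" --data-binary "@presentation.png"
+```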
+
+The API call returns a **vector** JSON object, which defines the image's coordinates in the high-dimensional vector space.
+
+```json
+{
+ "modelVersion": "2022-04-11",
+ "vector": [ -0.09442752, -0.00067171326, -0.010985051, ... ]
+}
+```
+
+## Call the Vectorize Text API
+
+The `retrieval:vectorizeText` API lets you convert a text string to a vector. To call it, make the following changes to the cURL command below:
+
+1. Replace `<endpoint>` with your Azure AI Vision endpoint.
+1. Replace `<subscription-key>` with your Azure AI Vision key.
+1. In the request body, set `"text"` to the example search term you want to use.
+
+```bash
+curl.exe -v -X POST "https://<endpoint>/computervision/retrieval:vectorizeText?api-version=2023-02-01-preview&modelVersion=latest" -H "Content-Type: application/json" -H "Ocp-Apim-Subscription-Key: <subscription-key>" --data-ascii "
+{
+'text':'cat jumping'
+}"
+```
+
+The API call returns a **vector** JSON object, which defines the text string's coordinates in the high-dimensional vector space.
+
+```json
+{
+ "modelVersion": "2022-04-11",
+ "vector": [ -0.09442752, -0.00067171326, -0.010985051, ... ]
+}
+```
+
+## Calculate vector similarity
+
+Cosine similarity is a method for measuring the similarity of two vectors. In an Image Retrieval scenario, you'll compare the search query vector with each image's vector. Images that are above a certain threshold of similarity can then be returned as search results.
+
+The following example C# code calculates the cosine similarity between two vectors. It's up to you to decide what similarity threshold to use for returning images as search results.
+
+```csharp
+public static float GetCosineSimilarity(float[] vector1, float[] vector2)
+{
+ float dotProduct = 0;
+ int length = Math.Min(vector1.Length, vector2.Length);
+ for (int i = 0; i < length; i++)
+ {
+ dotProduct += vector1[i] * vector2[i];
+ }
+    float magnitude1 = (float)Math.Sqrt(vector1.Select(x => x * x).Sum());
+    float magnitude2 = (float)Math.Sqrt(vector2.Select(x => x * x).Sum());
+
+ return dotProduct / (magnitude1 * magnitude2);
+}
+```
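+
+As a usage sketch, you might rank stored image vectors against a query vector like this. The `queryVector` and `imageVectors` variables are hypothetical; they would come from the two vectorize calls shown earlier.
+
+```csharp
+// imageVectors: map of image URL (or ID) to its stored vector; queryVector: vector of the search text.
+var rankedResults = imageVectors
+    .Select(pair => (Image: pair.Key, Score: GetCosineSimilarity(queryVector, pair.Value)))
+    .Where(result => result.Score > 0.6f)   // example threshold; tune it for your data
+    .OrderByDescending(result => result.Score)
+    .ToList();
+```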
+
+## Next steps
+
+[Image retrieval concepts](../concept-image-retrieval.md)
ai-services Migrate Face Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/computer-vision/how-to/migrate-face-data.md
+
+ Title: "Migrate your face data across subscriptions - Face"
+
+description: This guide shows you how to migrate your stored face data from one Face subscription to another.
++++++ Last updated : 02/22/2021+
+ms.devlang: csharp
+++
+# Migrate your face data to a different Face subscription
+
+> [!CAUTION]
+> The Snapshot API will be retired for all users on June 30, 2023.
+
+This guide shows you how to move face data, such as a saved PersonGroup object with faces, to a different Azure AI Face subscription. To move the data, you use the Snapshot feature. This way you avoid having to repeatedly build and train a PersonGroup or FaceList object when you move or expand your operations. For example, perhaps you created a PersonGroup object with a free subscription and now want to migrate it to your paid subscription. Or you might need to sync face data across subscriptions in different regions for a large enterprise operation.
+
+This same migration strategy also applies to LargePersonGroup and LargeFaceList objects. If you aren't familiar with the concepts in this guide, see their definitions in the [Face recognition concepts](../concept-face-recognition.md) guide. This guide uses the Face .NET client library with C#.
+
+> [!WARNING]
+> The Snapshot feature might move your data outside the geographic region you originally selected. Data might move to West US, West Europe, and Southeast Asia regions.
+
+## Prerequisites
+
+You need the following items:
+
+- Two Face keys, one with the existing data and one to migrate to. To subscribe to the Face service and get your key, follow the instructions in [Create a multi-service resource](../../multi-service-resource.md?pivots=azportal).
+- The Face subscription ID string that corresponds to the target subscription. To find it, select **Overview** in the Azure portal.
+- Any edition of [Visual Studio 2015 or 2017](https://www.visualstudio.com/downloads/).
+
+## Create the Visual Studio project
+
+This guide uses a simple console app to run the face data migration. For a full implementation, see the [Face snapshot sample](https://github.com/Azure-Samples/cognitive-services-dotnet-sdk-samples/tree/master/app-samples/FaceApiSnapshotSample/FaceApiSnapshotSample) on GitHub.
+
+1. In Visual Studio, create a new Console app .NET Framework project. Name it **FaceApiSnapshotSample**.
+1. Get the required NuGet packages. Right-click your project in the Solution Explorer, and select **Manage NuGet Packages**. Select the **Browse** tab, and select **Include prerelease**. Find and install the following package:
+ - [Microsoft.Azure.CognitiveServices.Vision.Face 2.3.0-preview](https://www.nuget.org/packages/Microsoft.Azure.CognitiveServices.Vision.Face/2.2.0-preview)
+
+## Create face clients
+
+In the **Main** method in *Program.cs*, create two [FaceClient](/dotnet/api/microsoft.azure.cognitiveservices.vision.face.faceclient) instances for your source and target subscriptions. This example uses a Face subscription in the East Asia region as the source and a West US subscription as the target. This example demonstrates how to migrate data from one Azure region to another.
++
+```csharp
+var FaceClientEastAsia = new FaceClient(new ApiKeyServiceClientCredentials("<East Asia Key>"))
+ {
+        Endpoint = "https://southeastasia.api.cognitive.microsoft.com/"
+ };
+
+var FaceClientWestUS = new FaceClient(new ApiKeyServiceClientCredentials("<West US Key>"))
+ {
+ Endpoint = "https://westus.api.cognitive.microsoft.com/"
+ };
+```
+
+Fill in the key values and endpoint URLs for your source and target subscriptions.
++
+## Prepare a PersonGroup for migration
+
+You need the ID of the PersonGroup in your source subscription to migrate it to the target subscription. Use the [PersonGroupOperationsExtensions.ListAsync](/dotnet/api/microsoft.azure.cognitiveservices.vision.face.persongroupoperationsextensions.listasync) method to retrieve a list of your PersonGroup objects. Then get the [PersonGroup.PersonGroupId](/dotnet/api/microsoft.azure.cognitiveservices.vision.face.models.persongroup.persongroupid#Microsoft_Azure_CognitiveServices_Vision_Face_Models_PersonGroup_PersonGroupId) property. This process looks different based on what PersonGroup objects you have. In this guide, the source PersonGroup ID is stored in `personGroupId`.
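+
+As a minimal sketch, if you simply want to migrate the first group returned by the list call:
+
+```csharp
+// List the PersonGroups in the source subscription and pick the one to migrate.
+var personGroups = await FaceClientEastAsia.PersonGroup.ListAsync();
+var personGroupId = personGroups.First().PersonGroupId;
+```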
+
+> [!NOTE]
+> The [sample code](https://github.com/Azure-Samples/cognitive-services-dotnet-sdk-samples/tree/master/app-samples/FaceApiSnapshotSample/FaceApiSnapshotSample) creates and trains a new PersonGroup to migrate. In most cases, you should already have a PersonGroup to use.
+
+## Take a snapshot of a PersonGroup
+
+A snapshot is temporary remote storage for certain Face data types. It functions as a kind of clipboard to copy data from one subscription to another. First, you take a snapshot of the data in the source subscription. Then you apply it to a new data object in the target subscription.
+
+Use the source subscription's FaceClient instance to take a snapshot of the PersonGroup. Use [TakeAsync](/dotnet/api/microsoft.azure.cognitiveservices.vision.face.snapshotoperationsextensions.takeasync) with the PersonGroup ID and the target subscription's ID. If you have multiple target subscriptions, add them as array entries in the third parameter.
+
+```csharp
+var takeSnapshotResult = await FaceClientEastAsia.Snapshot.TakeAsync(
+ SnapshotObjectType.PersonGroup,
+ personGroupId,
+ new[] { "<Azure West US Subscription ID>" /* Put other IDs here, if multiple target subscriptions wanted */ });
+```
+
+> [!NOTE]
+> The process of taking and applying snapshots doesn't disrupt any regular calls to the source or target PersonGroups or FaceLists. However, don't make simultaneous calls that change the source object, such as [FaceList management calls](/dotnet/api/microsoft.azure.cognitiveservices.vision.face.facelistoperations) or the [PersonGroup Train](/dotnet/api/microsoft.azure.cognitiveservices.vision.face.persongroupoperations) call. If you do, the snapshot operation might run before or after those operations or might encounter errors.
+
+## Retrieve the snapshot ID
+
+The method used to take snapshots is asynchronous, so you must wait for its completion. Snapshot operations can't be canceled. In this code, the `WaitForOperation` method monitors the asynchronous call. It checks the status every 100 ms. After the operation finishes, retrieve an operation ID by parsing the `OperationLocation` field.
+
+```csharp
+var takeOperationId = Guid.Parse(takeSnapshotResult.OperationLocation.Split('/')[2]);
+var operationStatus = await WaitForOperation(FaceClientEastAsia, takeOperationId);
+```
+
+A typical `OperationLocation` value looks like this:
+
+```csharp
+"/operations/a63a3bdd-a1db-4d05-87b8-dbad6850062a"
+```
+
+The `WaitForOperation` helper method is here:
+
+```csharp
+/// <summary>
+/// Waits for the take/apply operation to complete and returns the final operation status.
+/// </summary>
+/// <returns>The final operation status.</returns>
+private static async Task<OperationStatus> WaitForOperation(IFaceClient client, Guid operationId)
+{
+ OperationStatus operationStatus = null;
+ do
+ {
+ if (operationStatus != null)
+ {
+ Thread.Sleep(TimeSpan.FromMilliseconds(100));
+ }
+
+ // Get the status of the operation.
+ operationStatus = await client.Snapshot.GetOperationStatusAsync(operationId);
+
+ Console.WriteLine($"Operation Status: {operationStatus.Status}");
+ }
+ while (operationStatus.Status != OperationStatusType.Succeeded
+ && operationStatus.Status != OperationStatusType.Failed);
+
+ return operationStatus;
+}
+```
+
+After the operation status shows `Succeeded`, get the snapshot ID by parsing the `ResourceLocation` field of the returned OperationStatus instance.
+
+```csharp
+var snapshotId = Guid.Parse(operationStatus.ResourceLocation.Split('/')[2]);
+```
+
+A typical `resourceLocation` value looks like this:
+
+```csharp
+"/snapshots/e58b3f08-1e8b-4165-81df-aa9858f233dc"
+```
+
+## Apply a snapshot to a target subscription
+
+Next, create the new PersonGroup in the target subscription by using a randomly generated ID. Then use the target subscription's FaceClient instance to apply the snapshot to this PersonGroup. Pass in the snapshot ID and the new PersonGroup ID.
+
+```csharp
+var newPersonGroupId = Guid.NewGuid().ToString();
+var applySnapshotResult = await FaceClientWestUS.Snapshot.ApplyAsync(snapshotId, newPersonGroupId);
+```
++
+> [!NOTE]
+> A Snapshot object is valid for only 48 hours. Only take a snapshot if you intend to use it for data migration soon after.
+
+A snapshot apply request returns another operation ID. To get this ID, parse the `OperationLocation` field of the returned applySnapshotResult instance.
+
+```csharp
+var applyOperationId = Guid.Parse(applySnapshotResult.OperationLocation.Split('/')[2]);
+```
+
+The snapshot application process is also asynchronous, so again use `WaitForOperation` to wait for it to finish.
+
+```csharp
+operationStatus = await WaitForOperation(FaceClientWestUS, applyOperationId);
+```
+
+## Test the data migration
+
+After you apply the snapshot, the new PersonGroup in the target subscription populates with the original face data. By default, training results are also copied. The new PersonGroup is ready for face identification calls without needing retraining.
+
+To test the data migration, run the following operations and compare the results they print to the console:
+
+```csharp
+await DisplayPersonGroup(FaceClientEastAsia, personGroupId);
+await IdentifyInPersonGroup(FaceClientEastAsia, personGroupId);
+
+await DisplayPersonGroup(FaceClientWestUS, newPersonGroupId);
+// No need to retrain the PersonGroup before identification,
+// training results are copied by snapshot as well.
+await IdentifyInPersonGroup(FaceClientWestUS, newPersonGroupId);
+```
+
+Use the following helper methods:
+
+```csharp
+private static async Task DisplayPersonGroup(IFaceClient client, string personGroupId)
+{
+ var personGroup = await client.PersonGroup.GetAsync(personGroupId);
+ Console.WriteLine("PersonGroup:");
+ Console.WriteLine(JsonConvert.SerializeObject(personGroup));
+
+ // List persons.
+ var persons = await client.PersonGroupPerson.ListAsync(personGroupId);
+
+ foreach (var person in persons)
+ {
+ Console.WriteLine(JsonConvert.SerializeObject(person));
+ }
+
+ Console.WriteLine();
+}
+```
+
+```csharp
+private static async Task IdentifyInPersonGroup(IFaceClient client, string personGroupId)
+{
+ using (var fileStream = new FileStream("data\\PersonGroup\\Daughter\\Daughter1.jpg", FileMode.Open, FileAccess.Read))
+ {
+ var detectedFaces = await client.Face.DetectWithStreamAsync(fileStream);
+
+ var result = await client.Face.IdentifyAsync(detectedFaces.Select(face => face.FaceId.Value).ToList(), personGroupId);
+ Console.WriteLine("Test identify against PersonGroup");
+ Console.WriteLine(JsonConvert.SerializeObject(result));
+ Console.WriteLine();
+ }
+}
+```
+
+Now you can use the new PersonGroup in the target subscription.
+
+To update the target PersonGroup again in the future, create a new PersonGroup to receive the snapshot. To do this, follow the steps in this guide. A single PersonGroup object can have a snapshot applied to it only one time.
+
+## Clean up resources
+
+After you finish migrating face data, manually delete the snapshot object.
+
+```csharp
+await FaceClientEastAsia.Snapshot.DeleteAsync(snapshotId);
+```
+
+## Next steps
+
+Next, see the relevant API reference documentation, explore a sample app that uses the Snapshot feature, or follow a how-to guide to start using the other API operations mentioned here:
+
+- [Snapshot reference documentation (.NET SDK)](/dotnet/api/microsoft.azure.cognitiveservices.vision.face.snapshotoperations)
+- [Face snapshot sample](https://github.com/Azure-Samples/cognitive-services-dotnet-sdk-samples/tree/master/app-samples/FaceApiSnapshotSample/FaceApiSnapshotSample)
+- [Add faces](add-faces.md)
+- [Call the detect API](identity-detect-faces.md)
ai-services Migrate From Custom Vision https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/computer-vision/how-to/migrate-from-custom-vision.md
+
+ Title: "Migrate a Custom Vision project to Image Analysis 4.0"
+
+description: Learn how to generate an annotation file from an old Custom Vision project, so you can train a custom Image Analysis model on previous training data.
++++++ Last updated : 02/06/2023+++
+# Migrate a Custom Vision project to Image Analysis 4.0 preview
+
+You can migrate an existing Azure AI Custom Vision project to the new Image Analysis 4.0 system. [Custom Vision](../../custom-vision-service/overview.md) is a model customization service that existed before Image Analysis 4.0.
+
+This guide uses Python code to take all of the training data from an existing Custom Vision project (images and their label data) and convert it to a COCO file. You can then import the COCO file into Vision Studio to train a custom model. See [Create and train a custom model](model-customization.md) and go to the section on importing a COCO file&mdash;you can follow the guide from there to the end.
+
+## Prerequisites
+
+* An Azure subscription - [Create one for free](https://azure.microsoft.com/free/cognitive-services/)
+* [Python 3.x](https://www.python.org/)
+* A Custom Vision resource where an existing project is stored.
+* An Azure Storage resource - [Create one](../../../storage/common/storage-account-create.md?tabs=azure-portal)
+
+#### [Jupyter Notebook](#tab/notebook)
+
+This notebook exports your image data and annotations from the workspace of a Custom Vision Service project to your own COCO file in a storage blob, ready for training with Image Analysis Model Customization. You can run the code in this section using a custom Python script, or you can download and run the [Notebook](https://github.com/Azure-Samples/cognitive-service-vision-model-customization-python-samples/blob/main/docs/export_cvs_data_to_blob_storage.ipynb) on a compatible platform.
+
+<!-- nbstart https://raw.githubusercontent.com/Azure-Samples/cognitive-service-vision-model-customization-python-samples/main/docs/export_cvs_data_to_blob_storage.ipynb -->
+
+> [!TIP]
+> Contents of _export_cvs_data_to_blob_storage.ipynb_. **[Open in GitHub](https://github.com/Azure-Samples/cognitive-service-vision-model-customization-python-samples/blob/main/docs/export_cvs_data_to_blob_storage.ipynb)**.
++
+## Install the python samples package
+
+Run the following command to install the required python samples package:
+
+```python
+pip install cognitive-service-vision-model-customization-python-samples
+```
+
+## Authentication
+
+Next, provide the credentials of your Custom Vision project and your blob storage container.
+
+You need to fill in the correct parameter values. You need the following information:
+
+- The name of the Azure Storage account you want to use with your new custom model project
+- The key for that storage account
+- The name of the container you want to use in that storage account
+- Your Custom Vision training key
+- Your Custom Vision endpoint URL
+- The project ID of your Custom Vision project
+
+The Azure Storage credentials can be found on that resource's page in the Azure portal. The Custom Vision credentials can be found in the Custom Vision project settings page on the [Custom Vision web portal](https://customvision.ai).
++
+```python
+azure_storage_account_name = ''
+azure_storage_account_key = ''
+azure_storage_container_name = ''
+
+custom_vision_training_key = ''
+custom_vision_endpoint = ''
+custom_vision_project_id = ''
+```
+
+## Run the migration
+
+When you run the migration code, the Custom Vision training images will be saved to a `{project_name}_{project_id}/images` folder in your specified Azure blob storage container, and the COCO file will be saved to `{project_name}_{project_id}/train.json` in that same container. Both tagged and untagged images will be exported, including any **Negative**-tagged images.
+
+> [!IMPORTANT]
+> Image Analysis Model Customization does not currently support **multilabel** classification training, but you can still export data from a Custom Vision multilabel classification project.
+
+```python
+from cognitive_service_vision_model_customization_python_samples import export_data
+import logging
+logging.getLogger().setLevel(logging.INFO)
+logging.getLogger('azure.core.pipeline.policies.http_logging_policy').setLevel(logging.WARNING)
+
+n_process = 8
+export_data(azure_storage_account_name, azure_storage_account_key, azure_storage_container_name, custom_vision_endpoint, custom_vision_training_key, custom_vision_project_id, n_process)
+```
+
+<!-- nbend -->
+
+#### [Python](#tab/python)
+
+## Install libraries
+
+This script requires certain Python libraries. Install them in your project directory with the following command.
+
+```bash
+pip install azure-storage-blob azure-cognitiveservices-vision-customvision cffi
+```
+
+## Prepare the migration script
+
+Create a new Python file&mdash;_export-cvs-data-to-coco.py_, for example. Then open it in a text editor and paste in the following contents.
+
+```python
+from typing import List, Union
+from azure.cognitiveservices.vision.customvision.training import CustomVisionTrainingClient
+from azure.cognitiveservices.vision.customvision.training.models import Image, ImageTag, ImageRegion, Project
+from msrest.authentication import ApiKeyCredentials
+import argparse
+import time
+import json
+import pathlib
+import logging
+from azure.storage.blob import ContainerClient, BlobClient
+import multiprocessing
++
+N_PROCESS = 8
++
+def get_file_name(sub_folder, image_id):
+ return f'{sub_folder}/images/{image_id}'
++
+def blob_copy(params):
+ container_client, sub_folder, image = params
+ blob_client: BlobClient = container_client.get_blob_client(get_file_name(sub_folder, image.id))
+ blob_client.start_copy_from_url(image.original_image_uri)
+ return blob_client
++
+def wait_for_completion(blobs, time_out=5):
+ pendings = blobs
+ time_break = 0.5
+ while pendings and time_out > 0:
+ pendings = [b for b in pendings if b.get_blob_properties().copy.status == 'pending']
+ if pendings:
+ logging.info(f'{len(pendings)} pending copies. wait for {time_break} seconds.')
+ time.sleep(time_break)
+ time_out -= time_break
++
+def copy_images_with_retry(pool, container_client, sub_folder, images: List, batch_id, n_retries=5):
+ retry_limit = n_retries
+ urls = []
+ while images and n_retries > 0:
+ params = [(container_client, sub_folder, image) for image in images]
+ img_and_blobs = zip(images, pool.map(blob_copy, params))
+ logging.info(f'Batch {batch_id}: Copied {len(images)} images.')
+ urls = urls or [b.url for _, b in img_and_blobs]
+
+ wait_for_completion([b for _, b in img_and_blobs])
+ images = [image for image, b in img_and_blobs if b.get_blob_properties().copy.status in ['failed', 'aborted']]
+ n_retries -= 1
+ if images:
+ time.sleep(0.5 * (retry_limit - n_retries))
+
+ if images:
+ raise RuntimeError(f'Copy failed for some images in batch {batch_id}')
+
+ return urls
++
+class CocoOperator:
+ def __init__(self):
+ self._images = []
+ self._annotations = []
+ self._categories = []
+ self._category_name_to_id = {}
+
+ @property
+ def num_imges(self):
+ return len(self._images)
+
+ @property
+ def num_categories(self):
+ return len(self._categories)
+
+ @property
+ def num_annotations(self):
+ return len(self._annotations)
+
+ def add_image(self, width, height, coco_url, file_name):
+ self._images.append(
+ {
+ 'id': len(self._images) + 1,
+ 'width': width,
+ 'height': height,
+ 'coco_url': coco_url,
+ 'file_name': file_name,
+ })
+
+ def add_annotation(self, image_id, category_id_or_name: Union[int, str], bbox: List[float] = None):
+ self._annotations.append({
+ 'id': len(self._annotations) + 1,
+ 'image_id': image_id,
+ 'category_id': category_id_or_name if isinstance(category_id_or_name, int) else self._category_name_to_id[category_id_or_name]})
+
+ if bbox:
+ self._annotations[-1]['bbox'] = bbox
+
+ def add_category(self, name):
+ self._categories.append({
+ 'id': len(self._categories) + 1,
+ 'name': name
+ })
+
+ self._category_name_to_id[name] = len(self._categories)
+
+ def to_json(self) -> str:
+ coco_dict = {
+ 'images': self._images,
+ 'categories': self._categories,
+ 'annotations': self._annotations,
+ }
+
+ return json.dumps(coco_dict, ensure_ascii=False, indent=2)
++
+def log_project_info(training_client: CustomVisionTrainingClient, project_id):
+ project: Project = training_client.get_project(project_id)
+ proj_settings = project.settings
+ project.settings = None
+ logging.info(f'Project info dict: {project.__dict__}')
+ logging.info(f'Project setting dict: {proj_settings.__dict__}')
+ logging.info(f'Project info: n tags: {len(training_client.get_tags(project_id))},'
+ f' n images: {training_client.get_image_count(project_id)} (tagged: {training_client.get_tagged_image_count(project_id)},'
+ f' untagged: {training_client.get_untagged_image_count(project_id)})')
++
+def export_data(azure_storage_account_name, azure_storage_key, azure_storage_container_name, custom_vision_endpoint, custom_vision_training_key, custom_vision_project_id, n_process):
+ azure_storage_account_url = f"https://{azure_storage_account_name}.blob.core.windows.net"
+ container_client = ContainerClient(azure_storage_account_url, azure_storage_container_name, credential=azure_storage_key)
+ credentials = ApiKeyCredentials(in_headers={"Training-key": custom_vision_training_key})
+ trainer = CustomVisionTrainingClient(custom_vision_endpoint, credentials)
+
+ coco_operator = CocoOperator()
+ for tag in trainer.get_tags(custom_vision_project_id):
+ coco_operator.add_category(tag.name)
+
+ skip = 0
+ batch_id = 0
+ project_name = trainer.get_project(custom_vision_project_id).name
+ log_project_info(trainer, custom_vision_project_id)
+ sub_folder = f'{project_name}_{custom_vision_project_id}'
+ with multiprocessing.Pool(n_process) as pool:
+ while True:
+ images: List[Image] = trainer.get_images(project_id=custom_vision_project_id, skip=skip)
+ if not images:
+ break
+ urls = copy_images_with_retry(pool, container_client, sub_folder, images, batch_id)
+ for i, image in enumerate(images):
+ coco_operator.add_image(image.width, image.height, urls[i], get_file_name(sub_folder, image.id))
+ image_tags: List[ImageTag] = image.tags
+ image_regions: List[ImageRegion] = image.regions
+ if image_regions:
+ for img_region in image_regions:
+ coco_operator.add_annotation(coco_operator.num_imges, img_region.tag_name, [img_region.left, img_region.top, img_region.width, img_region.height])
+ elif image_tags:
+ for img_tag in image_tags:
+ coco_operator.add_annotation(coco_operator.num_imges, img_tag.tag_name)
+
+ skip += len(images)
+ batch_id += 1
+
+ coco_json_file_name = 'train.json'
+ local_json = pathlib.Path(coco_json_file_name)
+ local_json.write_text(coco_operator.to_json(), encoding='utf-8')
+ coco_json_blob_client: BlobClient = container_client.get_blob_client(f'{sub_folder}/{coco_json_file_name}')
+ if coco_json_blob_client.exists():
+ logging.warning(f'coco json file exists in blob. Skipped uploading. If existing one is outdated, please manually upload your new coco json from ./train.json to {coco_json_blob_client.url}')
+ else:
+ coco_json_blob_client.upload_blob(local_json.read_bytes())
+ logging.info(f'coco file train.json uploaded to {coco_json_blob_client.url}.')
++
+def parse_args():
+ parser = argparse.ArgumentParser('Export Custom Vision workspace data to blob storage.')
+
+ parser.add_argument('--custom_vision_project_id', '-p', type=str, required=True, help='Custom Vision Project Id.')
+ parser.add_argument('--custom_vision_training_key', '-k', type=str, required=True, help='Custom Vision training key.')
+ parser.add_argument('--custom_vision_endpoint', '-e', type=str, required=True, help='Custom Vision endpoint.')
+
+ parser.add_argument('--azure_storage_account_name', '-a', type=str, required=True, help='Azure storage account name.')
+ parser.add_argument('--azure_storage_account_key', '-t', type=str, required=True, help='Azure storage account key.')
+ parser.add_argument('--azure_storage_container_name', '-c', type=str, required=True, help='Azure storage container name.')
+
+ parser.add_argument('--n_process', '-n', type=int, required=False, default=8, help='Number of processes used in exporting data.')
+
+ return parser.parse_args()
++
+def main():
+ args = parse_args()
+
+ export_data(args.azure_storage_account_name, args.azure_storage_account_key, args.azure_storage_container_name,
+ args.custom_vision_endpoint, args.custom_vision_training_key, args.custom_vision_project_id, args.n_process)
++
+if __name__ == '__main__':
+ main()
+```
+
+## Run the script
+
+Run the script using the `python` command.
+
+```console
+python export-cvs-data-to-coco.py -p <project ID> -k <training key> -e <endpoint url> -a <storage account> -t <storage key> -c <container name>
+```
+
+You need to fill in the correct parameter values. You need the following information:
+
+- The project ID of your Custom Vision project
+- Your Custom Vision training key
+- Your Custom Vision endpoint URL
+- The name of the Azure Storage account you want to use with your new custom model project
+- The key for that storage account
+- The name of the container you want to use in that storage account
+++
+## Use COCO file in a new project
+
+The script generates a COCO file and uploads it to the blob storage location you specified. You can now import it to your Model Customization project. See [Create and train a custom model](model-customization.md) and go to the section on selecting/importing a COCO file&mdash;you can follow the guide from there to the end.
+
+## Next steps
+
+* [Create and train a custom model](model-customization.md)
ai-services Mitigate Latency https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/computer-vision/how-to/mitigate-latency.md
+
+ Title: How to mitigate latency when using the Face service
+
+description: Learn how to mitigate latency when using the Face service.
+++++ Last updated : 11/07/2021+
+ms.devlang: csharp
+++
+# How to: mitigate latency when using the Face service
+
+You may encounter latency when using the Face service. Latency refers to any kind of delay that occurs when communicating over a network. In general, possible causes of latency include:
+- The physical distance each packet must travel from source to destination.
+- Problems with the transmission medium.
+- Errors in routers or switches along the transmission path.
+- The time required by antivirus applications, firewalls, and other security mechanisms to inspect packets.
+- Malfunctions in client or server applications.
+
+This article discusses possible causes of latency that are specific to Azure AI services and how you can mitigate them.
+
+> [!NOTE]
+> Azure AI services does not provide any Service Level Agreement (SLA) regarding latency.
+
+## Possible causes of latency
+
+### Slow connection between Azure AI services and a remote URL
+
+Some Azure AI services provide methods that obtain data from a remote URL that you provide. For example, when you call the [DetectWithUrlAsync method](/dotnet/api/microsoft.azure.cognitiveservices.vision.face.faceoperationsextensions.detectwithurlasync#Microsoft_Azure_CognitiveServices_Vision_Face_FaceOperationsExtensions_DetectWithUrlAsync_Microsoft_Azure_CognitiveServices_Vision_Face_IFaceOperations_System_String_System_Nullable_System_Boolean__System_Nullable_System_Boolean__System_Collections_Generic_IList_System_Nullable_Microsoft_Azure_CognitiveServices_Vision_Face_Models_FaceAttributeType___System_String_System_Nullable_System_Boolean__System_String_System_Threading_CancellationToken_) of the Face service, you can specify the URL of an image in which the service tries to detect faces.
+
+```csharp
+var faces = await client.Face.DetectWithUrlAsync("https://www.biography.com/.image/t_share/MTQ1MzAyNzYzOTgxNTE0NTEz/john-f-kennedymini-biography.jpg");
+```
+
+The Face service must then download the image from the remote server. If the connection from the Face service to the remote server is slow, that will affect the response time of the Detect method.
+
+To mitigate this situation, consider [storing the image in Azure Premium Blob Storage](../../../storage/blobs/storage-upload-process-images.md?tabs=dotnet). For example:
+
+``` csharp
+var faces = await client.Face.DetectWithUrlAsync("https://raw.githubusercontent.com/Azure-Samples/cognitive-services-sample-data-files/master/Face/images/Family1-Daughter1.jpg");
+```
+
+Be sure to use a storage account in the same region as the Face resource. This will reduce the latency of the connection between the Face service and the storage account.
+
+### Large upload size
+
+Some Azure services provide methods that obtain data from a file that you upload. For example, when you call the [DetectWithStreamAsync method](/dotnet/api/microsoft.azure.cognitiveservices.vision.face.faceoperationsextensions.detectwithstreamasync#Microsoft_Azure_CognitiveServices_Vision_Face_FaceOperationsExtensions_DetectWithStreamAsync_Microsoft_Azure_CognitiveServices_Vision_Face_IFaceOperations_System_IO_Stream_System_Nullable_System_Boolean__System_Nullable_System_Boolean__System_Collections_Generic_IList_System_Nullable_Microsoft_Azure_CognitiveServices_Vision_Face_Models_FaceAttributeType___System_String_System_Nullable_System_Boolean__System_String_System_Threading_CancellationToken_) of the Face service, you can upload an image in which the service tries to detect faces.
+
+```csharp
+using FileStream fs = File.OpenRead(@"C:\images\face.jpg");
+System.Collections.Generic.IList<DetectedFace> faces = await client.Face.DetectWithStreamAsync(fs, detectionModel: DetectionModel.Detection02);
+```
+
+If the file to upload is large, that will impact the response time of the `DetectWithStreamAsync` method, for the following reasons:
+- It takes longer to upload the file.
+- It takes the service longer to process the file, in proportion to the file size.
+
+Mitigations:
+- Consider [storing the image in Azure Premium Blob Storage](../../../storage/blobs/storage-upload-process-images.md?tabs=dotnet). For example:
+``` csharp
+var faces = await client.Face.DetectWithUrlAsync("https://raw.githubusercontent.com/Azure-Samples/cognitive-services-sample-data-files/master/Face/images/Family1-Daughter1.jpg");
+```
+- Consider uploading a smaller file.
+ - See the guidelines regarding [input data for face detection](../concept-face-detection.md#input-data) and [input data for face recognition](../concept-face-recognition.md#input-data).
+ - For face detection, when using detection model `DetectionModel.Detection01`, reducing the image file size will increase processing speed. When you use detection model `DetectionModel.Detection02`, reducing the image file size will only increase processing speed if the image file is smaller than 1920x1080.
+ - For face recognition, reducing the face size to 200x200 pixels doesn't affect the accuracy of the recognition model.
+ - The performance of the `DetectWithUrlAsync` and `DetectWithStreamAsync` methods also depends on how many faces are in an image. The Face service can return up to 100 faces for an image. Faces are ranked by face rectangle size from large to small.
+ - If you need to call multiple service methods, consider calling them in parallel if your application design allows for it. For example, if you need to detect faces in two images to perform a face comparison:
+```csharp
+var faces_1 = client.Face.DetectWithUrlAsync("https://www.biography.com/.image/t_share/MTQ1MzAyNzYzOTgxNTE0NTEz/john-f-kennedymini-biography.jpg");
+var faces_2 = client.Face.DetectWithUrlAsync("https://www.biography.com/.image/t_share/MTQ1NDY3OTIxMzExNzM3NjE3/john-f-kennedydebating-richard-nixon.jpg");
+Task.WaitAll (new Task<IList<DetectedFace>>[] { faces_1, faces_2 });
+IEnumerable<DetectedFace> results = faces_1.Result.Concat (faces_2.Result);
+```
+
+### Slow connection between your compute resource and the Face service
+
+If your computer has a slow connection to the Face service, this will affect the response time of service methods.
+
+Mitigations:
+- When you create your Face subscription, make sure to choose the region closest to where your application is hosted.
+- If you need to call multiple service methods, consider calling them in parallel if your application design allows for it. See the previous section for an example.
+- If longer latencies affect the user experience, choose a timeout threshold (for example, maximum 5 seconds) before retrying the API call.
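+
+For the last point, here's a minimal sketch of a per-call timeout combined with a simple retry. The `client` object and image URL are assumed from the earlier examples.
+
+```csharp
+IList<DetectedFace> faces = null;
+for (int attempt = 0; attempt < 3 && faces == null; attempt++)
+{
+    // Cancel the call if it takes longer than 5 seconds, then retry.
+    using (var timeout = new CancellationTokenSource(TimeSpan.FromSeconds(5)))
+    {
+        try
+        {
+            faces = await client.Face.DetectWithUrlAsync(
+                "https://raw.githubusercontent.com/Azure-Samples/cognitive-services-sample-data-files/master/Face/images/Family1-Daughter1.jpg",
+                cancellationToken: timeout.Token);
+        }
+        catch (OperationCanceledException)
+        {
+            // The call exceeded the timeout threshold; loop to retry.
+        }
+    }
+}
+```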
+
+## Next steps
+
+In this guide, you learned how to mitigate latency when using the Face service. Next, learn how to scale up from existing PersonGroup and FaceList objects to LargePersonGroup and LargeFaceList objects, respectively.
+
+> [!div class="nextstepaction"]
+> [Example: Use the large-scale feature](use-large-scale.md)
+
+## Related topics
+
+- [Reference documentation (REST)](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f30395236)
+- [Reference documentation (.NET SDK)](/dotnet/api/overview/azure/cognitiveservices/face-readme)
ai-services Model Customization https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/computer-vision/how-to/model-customization.md
+
+ Title: Create a custom Image Analysis model
+
+description: Learn how to create and train a custom model to do image classification and object detection that's specific to your use case.
+++++ Last updated : 02/06/2023++++
+# Create a custom Image Analysis model (preview)
+
+Image Analysis 4.0 allows you to train a custom model using your own training images. By manually labeling your images, you can train a model to apply custom tags to the images (image classification) or detect custom objects (object detection). Image Analysis 4.0 models are especially effective at few-shot learning, so you can get accurate models with less training data.
+
+This guide shows you how to create and train a custom image classification model. The few differences between training an image classification model and object detection model are noted.
+
+## Prerequisites
+
+* Azure subscription - [Create one for free](https://azure.microsoft.com/free/cognitive-services)
+* Once you have your Azure subscription, <a href="https://portal.azure.com/#create/Microsoft.CognitiveServicesComputerVision" title="Create a Vision resource" target="_blank">create a Vision resource </a> in the Azure portal to get your key and endpoint. If you're following this guide using Vision Studio, you must create your resource in the East US region. If you're using the Python library, you can create it in the East US, West US 2, or West Europe region. After it deploys, select **Go to resource**. Copy the key and endpoint to a temporary location to use later on.
+* An Azure Storage resource - [Create one](/azure/storage/common/storage-account-create?tabs=azure-portal)
+* A set of images with which to train your classification model. You can use the set of [sample images on GitHub](https://github.com/Azure-Samples/cognitive-services-sample-data-files/tree/master/CustomVision/ImageClassification/Images). Or, you can use your own images. You only need about 3-5 images per class.
+
+> [!NOTE]
+> We do not recommend you use custom models for business critical environments due to potential high latency. When customers train custom models in Vision Studio, those custom models belong to the Vision resource that they were trained under and the customer is able to make calls to those models using the **Analyze Image** API. When they make these calls, the custom model is loaded in memory and the prediction infrastructure is initialized. While this happens, customers might experience longer than expected latency to receive prediction results.
+
+#### [Python](#tab/python)
+
+Train your own image classifier (IC) or object detector (OD) with your own data using Image Analysis model customization and Python.
+
+You can run through all of the model customization steps using a Python sample package. You can run the code in this section using a Python script, or you can download and run the Notebook on a compatible platform.
+
+> [!TIP]
+> Contents of _cognitive_service_vision_model_customization.ipynb_. **[Open in GitHub](https://github.com/Azure-Samples/cognitive-service-vision-model-customization-python-samples/blob/main/docs/cognitive_service_vision_model_customization.ipynb)**.
+
+## Install the python samples package
+
+Install the [sample code](https://pypi.org/project/cognitive-service-vision-model-customization-python-samples/) to train/predict custom models with Python:
+
+```bash
+pip install cognitive-service-vision-model-customization-python-samples
+```
+
+## Authentication
+
+Enter your Azure AI Vision endpoint URL, key, and resource name in the code below.
+
+```python
+# Resource and key
+import logging
+logging.getLogger().setLevel(logging.INFO)
+from cognitive_service_vision_model_customization_python_samples import ResourceType
+
+resource_type = ResourceType.SINGLE_SERVICE_RESOURCE # or ResourceType.MULTI_SERVICE_RESOURCE
+
+resource_name = None
+multi_service_endpoint = None
+
+if resource_type == ResourceType.SINGLE_SERVICE_RESOURCE:
+ resource_name = '{specify_your_resource_name}'
+ assert resource_name
+else:
+ multi_service_endpoint = '{specify_your_service_endpoint}'
+ assert multi_service_endpoint
+
+resource_key = '{specify_your_resource_key}'
+```
+
+## Prepare a dataset from Azure blob storage
+
+To train a model with your own dataset, the dataset should be arranged in the COCO format described below, hosted on Azure blob storage, and accessible from your Vision resource.
+
+### Dataset annotation format
+
+Image Analysis uses the COCO file format for indexing/organizing the training images and their annotations. Below are examples and explanations of what specific format is needed for multiclass classification and object detection.
+
+Image Analysis model customization for classification is different from other kinds of vision training, as we utilize your class names, as well as image data, in training. So, be sure to provide meaningful category names in the annotations.
+
+> [!NOTE]
+> In the example dataset, there are few images for the sake of simplicity. Although [Florence models](https://www.microsoft.com/research/publication/florence-a-new-foundation-model-for-computer-vision/) achieve great few-shot performance (high model quality even with little data available), it's good to have more data for the model to learn. Our recommendation is to have at least five images per class, and the more the better.
+
+Once your COCO annotation file is prepared, you can use the [COCO file verification script](coco-verification.md) to check the format.
+
+#### Multiclass classification example
+
+```json
+{
+ "images": [{"id": 1, "width": 224.0, "height": 224.0, "file_name": "images/siberian-kitten.jpg", "absolute_url": "https://{your_blob}.blob.core.windows.net/datasets/cat_dog/images/siberian-kitten.jpg"},
+ {"id": 2, "width": 224.0, "height": 224.0, "file_name": "images/kitten-3.jpg", "absolute_url": "https://{your_blob}.blob.core.windows.net/datasets/cat_dog/images/kitten-3.jpg"}],
+ "annotations": [
+ {"id": 1, "category_id": 1, "image_id": 1},
+ {"id": 2, "category_id": 1, "image_id": 2},
+ ],
+ "categories": [{"id": 1, "name": "cat"}, {"id": 2, "name": "dog"}]
+}
+```
+
+Besides `absolute_url`, you can also use `coco_url` (the system accepts either field name).
+
+#### Object detection example
+
+```json
+{
+ "images": [{"id": 1, "width": 224.0, "height": 224.0, "file_name": "images/siberian-kitten.jpg", "absolute_url": "https://{your_blob}.blob.core.windows.net/datasets/cat_dog/images/siberian-kitten.jpg"},
+ {"id": 2, "width": 224.0, "height": 224.0, "file_name": "images/kitten-3.jpg", "absolute_url": "https://{your_blob}.blob.core.windows.net/datasets/cat_dog/images/kitten-3.jpg"}],
+ "annotations": [
+ {"id": 1, "category_id": 1, "image_id": 1, "bbox": [0.1, 0.1, 0.3, 0.3]},
+ {"id": 2, "category_id": 1, "image_id": 2, "bbox": [0.3, 0.3, 0.6, 0.6]},
+ {"id": 3, "category_id": 2, "image_id": 2, "bbox": [0.2, 0.2, 0.7, 0.7]}
+ ],
+ "categories": [{"id": 1, "name": "cat"}, {"id": 2, "name": "dog"}]
+}
+```
+
+The values in `bbox: [left, top, width, height]` are relative to the image width and height.
+
+### Blob storage directory structure
+
+Following the examples above, the data directory in your Azure Blob Container `https://{your_blob}.blob.core.windows.net/datasets/` should be arranged like below, where `train_coco.json` is the annotation file.
+
+```
+cat_dog/
+ images/
+ 1.jpg
+ 2.jpg
+ train_coco.json
+```
+
+> [!TIP]
+> Quota limit information, including the maximum number of images and categories supported, maximum image size, and so on, can be found on the [concept page](../concept-model-customization.md).
+
+### Grant Azure AI Vision access to your Azure data blob
+
+You need to take an extra step to give your Vision resource access to read the contents of your Azure blob storage container. There are two ways to do this.
+
+#### Option 1: Shared access signature (SAS)
+
+You can generate a SAS token with at least `read` permission on your Azure Blob Container. This is the option used in the code below. For instructions on acquiring a SAS token, see [Create SAS tokens](../../translator/document-translation/how-to-guides/create-sas-tokens.md?tabs=Containers).
+
+#### Option 2: Managed Identity or public accessible
+
+You can also use [Managed Identity](/azure/active-directory/managed-identities-azure-resources/overview) to grant access.
+
+Below is a series of steps for allowing the system-assigned Managed Identity of your Vision resource to access your blob storage. In the Azure portal:
+
+1. Go to the **Identity / System assigned** tab of your Vision resource, and change the **Status** to **On**.
+1. Go to the **Access Control (IAM) / Role assignment** tab of your blob storage resource, select **Add / Add role assignment**, and choose either **Storage Blob Data Contributor** or **Storage Blob Data Reader**.
+1. Select **Next**, and choose **Managed Identity** under **Assign access to**, and then select **Select members**.
+1. Choose your subscription, select **Azure AI Vision** as the managed identity, and then look up the identity that matches your Vision resource name.
+
+### Register the dataset
+
+Once your dataset has been prepared and hosted on your Azure blob storage container, with access granted to your Vision resource, you can register it with the service.
+
+> [!NOTE]
+> The service only accesses your storage data during training. It doesn't keep copies of your data beyond the training cycle.
+
+```python
+from cognitive_service_vision_model_customization_python_samples import DatasetClient, Dataset, AnnotationKind, AuthenticationKind, Authentication
+
+dataset_name = '{specify_your_dataset_name}'
+auth_kind = AuthenticationKind.SAS # or AuthenticationKind.MI
+
+dataset_client = DatasetClient(resource_type, resource_name, multi_service_endpoint, resource_key)
+annotation_file_uris = ['{specify_your_annotation_uri}'] # example: https://example_data.blob.core.windows.net/datasets/cat_dog/train_coco.json
+# register dataset
+if auth_kind == AuthenticationKind.SAS:
+ # option 1: sas
+ sas_auth = Authentication(AuthenticationKind.SAS, '{your_sas_token}') # note the token/query string is needed, not the full url
+ dataset = Dataset(name=dataset_name,
+ annotation_kind=AnnotationKind.MULTICLASS_CLASSIFICATION, # checkout AnnotationKind for all annotation kinds
+ annotation_file_uris=annotation_file_uris,
+ authentication=sas_auth)
+else:
+    # option 2: managed identity or publicly accessible storage. If the storage isn't publicly accessible, make sure it's reachable via the managed identity.
+ dataset = Dataset(name=dataset_name,
+ annotation_kind=AnnotationKind.MULTICLASS_CLASSIFICATION, # checkout AnnotationKind for all annotation kinds
+ annotation_file_uris=annotation_file_uris)
+
+reg_dataset = dataset_client.register_dataset(dataset)
+logging.info(f'Register dataset: {reg_dataset.__dict__}')
+
+# Specify your evaluation dataset here; you can follow the same registration process as for the training dataset.
+eval_dataset = None
+if eval_dataset:
+ reg_eval_dataset = dataset_client.register_dataset(eval_dataset)
+ logging.info(f'Register eval dataset: {reg_eval_dataset.__dict__}')
+```
+
+## Train a model
+
+After you register the dataset, use it to train a custom model:
+
+```python
+from cognitive_service_vision_model_customization_python_samples import TrainingClient, Model, ModelKind, TrainingParameters, EvaluationParameters
+
+model_name = '{specify_your_model_name}'
+
+training_client = TrainingClient(resource_type, resource_name, multi_service_endpoint, resource_key)
+train_params = TrainingParameters(training_dataset_name=dataset_name, time_budget_in_hours=1, model_kind=ModelKind.GENERIC_IC) # checkout ModelKind for all valid model kinds
+eval_params = EvaluationParameters(test_dataset_name=eval_dataset.name) if eval_dataset else None
+model = Model(model_name, train_params, eval_params)
+model = training_client.train_model(model)
+logging.info(f'Start training: {model.__dict__}')
+```
+
+## Check the training status
+
+Use the following code to check the status of the asynchronous training operation.
+
+```python
+from cognitive_service_vision_model_customization_python_samples import TrainingClient
+
+training_client = TrainingClient(resource_type, resource_name, multi_service_endpoint, resource_key)
+model = training_client.wait_for_completion(model_name, 30)
+```
+
+## Predict with a sample image
+
+Use the following code to get a prediction with a new sample image.
+
+```python
+from cognitive_service_vision_model_customization_python_samples import PredictionClient
+prediction_client = PredictionClient(resource_type, resource_name, multi_service_endpoint, resource_key)
+
+with open('path_to_your_test_image.png', 'rb') as f:
+ img = f.read()
+
+prediction = prediction_client.predict(model_name, img, content_type='image/png')
+logging.info(f'Prediction: {prediction}')
+```
+
+<!-- nbend -->
++
+#### [Vision Studio](#tab/studio)
+
+## Create a new custom model
+
+Begin by going to [Vision Studio](https://portal.vision.cognitive.azure.com/) and selecting the **Image analysis** tab. Then select the **Customize models** tile.
++
+Then, sign in with your Azure account and select your Vision resource. If you don't have one, you can create one from this screen.
+
+> [!IMPORTANT]
+> To train a custom model in Vision Studio, your Azure subscription needs to be approved for access. Please request access using [this form](https://aka.ms/visionaipublicpreview).
++
+## Prepare training images
+
+You need to upload your training images to an Azure Blob Storage container. Go to your storage resource in the Azure portal and navigate to the **Storage browser** tab. Here you can create a blob container and upload your images. Put them all at the root of the container.
+
+## Add a dataset
+
+To train a custom model, you need to associate it with a **Dataset** where you provide images and their label information as training data. In Vision Studio, select the **Datasets** tab to view your datasets.
+
+To create a new dataset, select **add new dataset**. Enter a name and select a dataset type: If you'd like to do image classification, select `Multi-class image classification`. If you'd like to do object detection, select `Object detection`.
+++
+![Choose Blob Storage]( ../media/customization/create-dataset.png)
+
+Then, select the container from the Azure Blob Storage account where you stored the training images. Check the box to allow Vision Studio to read and write to the blob storage container. This is a necessary step to import labeled data. Create the dataset.
+
+## Create an Azure Machine Learning labeling project
+
+You need a COCO file to convey the labeling information. An easy way to generate a COCO file is to create an Azure Machine Learning project, which comes with a data-labeling workflow.
+
+In the dataset details page, select **Add a new Data Labeling project**. Name it and select **Create a new workspace**. That opens a new Azure portal tab where you can create the Azure Machine Learning project.
+
+![Choose Azure Machine Learning]( ../media/customization/dataset-details.png)
+
+Once the Azure Machine Learning project is created, return to the Vision Studio tab and select it under **Workspace**. The Azure Machine Learning portal will then open in a new browser tab.
+
+## Azure Machine Learning: Create labels
+
+To start labeling, follow the **Please add label classes** prompt to add label classes.
+
+![Label classes]( ../media/customization/azure-machine-learning-home-page.png)
+
+![Add class labels]( ../media/customization/azure-machine-learning-label-categories.png)
+
+Once you've added all the class labels, save them, select **start** on the project, and then select **Label data** at the top.
+
+![Start labeling]( ../media/customization/azure-machine-learning-start.png)
++
+### Azure Machine Learning: Manually label training data
+
+Choose **Start labeling** and follow the prompts to label all of your images. When you're finished, return to the Vision Studio tab in your browser.
+
+Now select **Add COCO file**, then select **Import COCO file from an Azure ML Data Labeling project**. This imports the labeled data from Azure Machine Learning.
+
+The COCO file you just created is now stored in the Azure Storage container that you linked to this project. You can now import it into the model customization workflow. Select it from the drop-down list. Once the COCO file is imported into the dataset, the dataset can be used for training a model.
+
+> [!NOTE]
+> ## Import COCO files from elsewhere
+>
+> If you have a ready-made COCO file you want to import, go to the **Datasets** tab and select `Add COCO files to this dataset`. You can choose to add a specific COCO file from a Blob storage account or import from the Azure Machine Learning labeling project.
+>
+> Currently, Microsoft is addressing an issue that causes COCO file import to fail for large datasets when it's initiated in Vision Studio. To train with a large dataset, we recommend using the REST API instead.
+>
+> ![Choose COCO]( ../media/customization/import-coco.png)
+>
+> [!INCLUDE [coco-files](../includes/coco-files.md)]
+
+## Train the custom model
+
+To start training a model with your COCO file, go to the **Custom models** tab and select **Add a new model**. Enter a name for the model and select `Image classification` or `Object detection` as the model type.
+
+![Create custom model]( ../media/customization/start-model-training.png)
+
+Select your dataset, which is now associated with the COCO file containing the labeling information.
+
+Then select a time budget and train the model. For small examples, you can use a `1 hour` budget.
+
+![Review training details]( ../media/customization/train-model.png)
+
+It may take some time for the training to complete. Image Analysis 4.0 models can be accurate with only a small set of training data, but they take longer to train than previous models.
+
+## Evaluate the trained model
+
+After training has completed, you can view the model's performance evaluation. The following metrics are used:
+
+- Image classification: Average Precision, Accuracy Top 1, Accuracy Top 5
+- Object detection: Mean Average Precision @ 30, Mean Average Precision @ 50, Mean Average Precision @ 75
+
+If an evaluation set isn't provided when training the model, the reported performance is estimated on part of the training set. We strongly recommend that you use an evaluation dataset (created through the same process as above) to get a reliable estimate of your model's performance.
+
+![Screenshot of evaluation]( ../media/customization/training-result.png)
+
+## Test custom model in Vision Studio
+
+Once you've built a custom model, you can test it by selecting the **Try it out** button on the model evaluation screen.
++
+This takes you to the **Extract common tags from images** page. Choose your custom model from the drop-down menu and upload a test image.
+
+![Screenshot of selecting the test model in Vision Studio.]( ../media/customization/quick-test.png)
+
+The prediction results appear in the right column.
+
+#### [REST API](#tab/rest)
+
+## Prepare training data
+
+The first thing you need to do is create a COCO file from your training data. You can create a COCO file by converting an old Custom Vision project using the [migration script](migrate-from-custom-vision.md). Or, you can create a COCO file from scratch using some other labeling tool. See the following specification:
++
+## Upload to storage
+
+Upload your COCO file to a blob storage container, ideally the same blob container that holds the training images themselves.
+
+## Create your training dataset
+
+The `datasets/<dataset-name>` API lets you create a new dataset object that references the training data. Make the following changes to the cURL command below:
+
+1. Replace `<endpoint>` with your Azure AI Vision endpoint.
+1. Replace `<dataset-name>` with a name for your dataset.
+1. Replace `<subscription-key>` with your Azure AI Vision key.
+1. In the request body, set `"annotationKind"` to either `"imageClassification"` or `"imageObjectDetection"`, depending on your project.
+1. In the request body, set the `"annotationFileUris"` array to an array of string(s) that show the URI location(s) of your COCO file(s) in blob storage.
+
+```bash
+curl.exe -v -X PUT "https://<endpoint>/computervision/datasets/<dataset-name>?api-version=2023-02-01-preview" -H "Content-Type: application/json" -H "Ocp-Apim-Subscription-Key: <subscription-key>" --data-ascii "
+{
+'annotationKind':'imageClassification',
+'annotationFileUris':['<URI>']
+}"
+```
+
+## Create and train a model
+
+The `models/<model-name>` API lets you create a new custom model and associate it with an existing dataset. It also starts the training process. Make the following changes to the cURL command below:
+
+1. Replace `<endpoint>` with your Azure AI Vision endpoint.
+1. Replace `<model-name>` with a name for your model.
+1. Replace `<subscription-key>` with your Azure AI Vision key.
+1. In the request body, set `"trainingDatasetName"` to the name of the dataset from the previous step.
+1. In the request body, set `"modelKind"` to either `"Generic-Classifier"` or `"Generic-Detector"`, depending on your project.
+
+```bash
+curl.exe -v -X PUT "https://<endpoint>/computervision/models/<model-name>?api-version=2023-02-01-preview" -H "Content-Type: application/json" -H "Ocp-Apim-Subscription-Key: <subscription-key>" --data-ascii "
+{
+'trainingParameters': {
+ 'trainingDatasetName':'<dataset-name>',
+ 'timeBudgetInHours':1,
+ 'modelKind':'Generic-Classifier'
+ }
+}"
+```
+
+## Evaluate the model's performance on a dataset
+
+The `models/<model-name>/evaluations/<eval-name>` API evaluates the performance of an existing model. Make the following changes to the cURL command below:
+
+1. Replace `<endpoint>` with your Azure AI Vision endpoint.
+1. Replace `<model-name>` with the name of your model.
+1. Replace `<eval-name>` with a name that can be used to uniquely identify the evaluation.
+1. Replace `<subscription-key>` with your Azure AI Vision key.
+1. In the request body, set `"testDatasetName"` to the name of the dataset you want to use for evaluation. If you don't have a dedicated dataset, you can use the same dataset you used for training.
+
+```bash
+curl.exe -v -X PUT "https://<endpoint>/computervision/models/<model-name>/evaluations/<eval-name>?api-version=2023-02-01-preview" -H "Content-Type: application/json" -H "Ocp-Apim-Subscription-Key: <subscription-key>" --data-ascii "
+{
+'evaluationParameters':{
+ 'testDatasetName':'<dataset-name>'
+ }
+}"
+```
+
+The API call returns a **ModelPerformance** JSON object, which lists the model's scores in several categories. The following metrics are used:
+
+- Image classification: Average Precision, Accuracy Top 1, Accuracy Top 5
+- Object detection: Mean Average Precision @ 30, Mean Average Precision @ 50, Mean Average Precision @ 75
+
+## Test the custom model on an image
+
+The `imageanalysis:analyze` API does ordinary Image Analysis operations. By specifying some parameters, you can use this API to query your own custom model instead of the prebuilt Image Analysis models. Make the following changes to the cURL command below:
+
+1. Replace `<endpoint>` with your Azure AI Vision endpoint.
+1. Replace `<model-name>` with the name of your model.
+1. Replace `<subscription-key>` with your Azure AI Vision key.
+1. In the request body, set `"url"` to the URL of a remote image you want to test your model on.
+
+```bash
+curl.exe -v -X POST "https://<endpoint>/computervision/imageanalysis:analyze?model-name=<model-name>&api-version=2023-02-01-preview" -H "Content-Type: application/json" -H "Ocp-Apim-Subscription-Key: <subscription-key>" --data-ascii "
+{'url':'https://learn.microsoft.com/azure/ai-services/computer-vision/media/quickstarts/presentation.png'
+}"
+```
+
+The API call returns an **ImageAnalysisResult** JSON object, which contains all the detected tags for an image classifier, or objects for an object detector, with their confidence scores.
+
+```json
+{
+ "kind": "imageAnalysisResult",
+ "metadata": {
+ "height": 900,
+ "width": 1260
+ },
+ "customModelResult": {
+ "classifications": [
+ {
+ "confidence": 0.97970027,
+ "label": "hemlock"
+ },
+ {
+ "confidence": 0.020299695,
+ "label": "japanese-cherry"
+ }
+ ],
+ "objects": [],
+ "imageMetadata": {
+ "width": 1260,
+ "height": 900
+ }
+ }
+}
+```
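+
+If you prefer to call the API from code rather than cURL, the following is a minimal Python sketch of the same request using the `requests` library. The endpoint, key, and model name are the same placeholders used in the cURL example, and the parsing is based on the sample response above; adjust it to your own model's output.
+
+```python
+import requests
+
+endpoint = "https://<endpoint>"     # your Azure AI Vision endpoint
+key = "<subscription-key>"          # your Azure AI Vision key
+model_name = "<model-name>"         # the custom model you trained
+
+response = requests.post(
+    f"{endpoint}/computervision/imageanalysis:analyze",
+    params={"model-name": model_name, "api-version": "2023-02-01-preview"},
+    headers={"Ocp-Apim-Subscription-Key": key, "Content-Type": "application/json"},
+    json={"url": "https://learn.microsoft.com/azure/ai-services/computer-vision/media/quickstarts/presentation.png"},
+)
+response.raise_for_status()
+result = response.json()
+
+# Print each classification label with its confidence score.
+for classification in result["customModelResult"]["classifications"]:
+    print(f'{classification["label"]}: {classification["confidence"]:.3f}')
+```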
+++
+## Next steps
+
+In this guide, you created and trained a custom image classification model using Image Analysis. Next, learn more about the Analyze Image 4.0 API, so you can call your custom model from an application using REST or library SDKs.
+
+* See the [Model customization concepts](../concept-model-customization.md) guide for a broad overview of this feature and a list of frequently asked questions.
+* [Call the Analyze Image API](./call-analyze-image-40.md). Note the sections [Set model name when using a custom model](./call-analyze-image-40.md#set-model-name-when-using-a-custom-model) and [Get results using custom model](./call-analyze-image-40.md#get-results-using-custom-model).
ai-services Shelf Analyze https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/computer-vision/how-to/shelf-analyze.md
+
+ Title: Analyze a shelf image using pretrained models
+
+description: Use the Product Understanding API to analyze a shelf image and receive rich product data.
+++++ Last updated : 04/26/2023++++
+# Analyze a shelf image using pretrained models
+
+The fastest way to start using Product Recognition is to use the built-in pretrained AI models. With the Product Understanding API, you can upload a shelf image and get the locations of products and gaps.
++
+> [!NOTE]
+> The brands shown in the images are not affiliated with Microsoft and do not indicate any form of endorsement of Microsoft or Microsoft products by the brand owners, or an endorsement of the brand owners or their products by Microsoft.
+
+## Prerequisites
+* An Azure subscription - [Create one for free](https://azure.microsoft.com/free/cognitive-services/)
+* Once you have your Azure subscription, <a href="https://portal.azure.com/#create/Microsoft.CognitiveServicesComputerVision" title="create a Vision resource" target="_blank">create a Vision resource</a> in the Azure portal. It must be deployed in the **East US** or **West US 2** region. After it deploys, select **Go to resource**.
+ * You'll need the key and endpoint from the resource you create to connect your application to the Azure AI Vision service. You'll paste your key and endpoint into the code below later in the guide.
+* An Azure Storage resource with a blob storage container. [Create one](/azure/storage/common/storage-account-create?tabs=azure-portal)
+* [cURL](https://curl.haxx.se/) installed. Or, you can use a different REST platform, like Postman, Swagger, or the REST Client extension for VS Code.
+* A shelf image. You can download our [sample image](https://github.com/Azure-Samples/cognitive-services-sample-data-files/blob/master/ComputerVision/shelf-analysis/shelf.png) or bring your own images. The maximum file size per image is 20 MB.
+
+## Analyze shelf images
+
+To analyze a shelf image, do the following steps:
+
+1. Upload the images you'd like to analyze to your blob storage container, and get the absolute URL.
+1. Copy the following `curl` command into a text editor.
+
+ ```bash
+ curl.exe -H "Ocp-Apim-Subscription-Key: <subscriptionKey>" -H "Content-Type: application/json" "https://<endpoint>/vision/v4.0-preview.1/operations/shelfanalysis-productunderstanding:analyze" -d "{
+ 'url':'<your_url_string>'
+ }"
+ ```
+1. Make the following changes in the command where needed:
+ 1. Replace the value of `<subscriptionKey>` with your Vision resource key.
+ 1. Replace the value of `<endpoint>` with your Vision resource endpoint. For example: `https://YourResourceName.cognitiveservices.azure.com`.
+ 1. Replace the `<your_url_string>` contents with the blob URL of the image.
+1. Open a command prompt window.
+1. Paste your edited `curl` command from the text editor into the command prompt window, and then run the command.
++
+## Examine the response
+
+A successful response is returned in JSON. The product understanding API results are returned in a `ProductUnderstandingResultApiModel` JSON field:
+
+```json
+{
+ "imageMetadata": {
+ "width": 2000,
+ "height": 1500
+ },
+ "products": [
+ {
+ "id": "string",
+ "boundingBox": {
+ "x": 1234,
+ "y": 1234,
+ "w": 12,
+ "h": 12
+ },
+ "classifications": [
+ {
+ "confidence": 0.9,
+ "label": "string"
+ }
+ ]
+ }
+ ],
+ "gaps": [
+ {
+ "id": "string",
+ "boundingBox": {
+ "x": 1234,
+ "y": 1234,
+ "w": 123,
+ "h": 123
+ },
+ "classifications": [
+ {
+ "confidence": 0.8,
+ "label": "string"
+ }
+ ]
+ }
+ ]
+}
+```
+
+See the following sections for definitions of each JSON field.
+
+### Product Understanding Result API model
+
+Results from the product understanding operation.
+
+| Name | Type | Description | Required |
+| - | - | -- | -- |
+| `imageMetadata` | [ImageMetadataApiModel](#image-metadata-api-model) | The image metadata information such as height, width and format. | Yes |
+| `products` | [DetectedObjectApiModel](#detected-object-api-model) | Products detected in the image. | Yes |
+| `gaps` | [DetectedObjectApiModel](#detected-object-api-model) | Gaps detected in the image. | Yes |
+
+### Image Metadata API model
+
+The image metadata information such as height, width and format.
+
+| Name | Type | Description | Required |
+| - | - | -- | -- |
+| `width` | integer | The width of the image in pixels. | Yes |
+| `height` | integer | The height of the image in pixels. | Yes |
+
+### Detected Object API model
+
+Describes a detected object in an image.
+
+| Name | Type | Description | Required |
+| - | - | -- | -- |
+| `id` | string | ID of the detected object. | No |
+| `boundingBox` | [BoundingBoxApiModel](#bounding-box-api-model) | A bounding box for an area inside an image. | Yes |
+| `classifications` | [ImageClassificationApiModel](#image-classification-api-model) | Classification confidences of the detected object. | Yes |
+
+### Bounding Box API model
+
+A bounding box for an area inside an image.
+
+| Name | Type | Description | Required |
+| - | - | -- | -- |
+| `x` | integer | Left-coordinate of the top left point of the area, in pixels. | Yes |
+| `y` | integer | Top-coordinate of the top left point of the area, in pixels. | Yes |
+| `w` | integer | Width measured from the top-left point of the area, in pixels. | Yes |
+| `h` | integer | Height measured from the top-left point of the area, in pixels. | Yes |
+
+### Image Classification API model
+
+Describes the image classification confidence of a label.
+
+| Name | Type | Description | Required |
+| - | - | -- | -- |
+| `confidence` | float | Confidence of the classification prediction. | Yes |
+| `label` | string | Label of the classification prediction. | Yes |
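+
+If you're processing the response in Python, the following sketch shows one way to walk the fields defined above. It isn't an official SDK sample; it assumes you've already saved the JSON response text (for example, from the cURL command earlier in this guide).
+
+```python
+import json
+
+def summarize_shelf_result(response_text: str) -> None:
+    """Print a one-line summary for every detected product and gap."""
+    result = json.loads(response_text)
+    meta = result["imageMetadata"]
+    print(f'Image size: {meta["width"]}x{meta["height"]} pixels')
+
+    # Products and gaps both follow the Detected Object API model.
+    for kind in ("products", "gaps"):
+        for item in result.get(kind, []):
+            box = item["boundingBox"]
+            # Pick the highest-confidence classification for the summary.
+            top = max(item["classifications"], key=lambda c: c["confidence"])
+            print(f'{kind[:-1]} at ({box["x"]}, {box["y"]}), '
+                  f'{box["w"]}x{box["h"]} px: {top["label"]} ({top["confidence"]:.2f})')
+```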
+
+## Next steps
+
+In this guide, you learned how to make a basic analysis call using the pretrained Product Understanding REST API. Next, learn how to use a custom Product Recognition model to better meet your business needs.
+
+> [!div class="nextstepaction"]
+> [Train a custom model for Product Recognition](../how-to/shelf-model-customization.md)
+
+* [Image Analysis overview](../overview-image-analysis.md)
ai-services Shelf Model Customization https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/computer-vision/how-to/shelf-model-customization.md
+
+ Title: Train a custom model for Product Recognition
+
+description: Learn how to use the Image Analysis model customization feature to train a model to recognize specific products in a Product Recognition task.
+++++++ Last updated : 05/02/2023+++
+# Train a custom Product Recognition model
+
+You can train a custom model to recognize specific retail products for use in a Product Recognition scenario. The out-of-box [Analyze](shelf-analyze.md) operation doesn't differentiate between products, but you can build this capability into your app through custom labeling and training.
++
+> [!NOTE]
+> The brands shown in the images are not affiliated with Microsoft and do not indicate any form of endorsement of Microsoft or Microsoft products by the brand owners, or an endorsement of the brand owners or their products by Microsoft.
+
+## Use the model customization feature
+
+The [Model customization how-to guide](./model-customization.md) shows you how to train and publish a custom Image Analysis model. You can follow that guide, with a few specifications, to make a model for Product Recognition.
+
+> [!div class="nextstepaction"]
+> [Model customization](model-customization.md)
++
+## Dataset specifications
+
+Your training dataset should consist of images of the retail shelves. When you first create the model, you need to set the _ModelKind_ parameter to **ProductRecognitionModel**.
+
+Also, save the value of the _ModelName_ parameter, so you can use it as a reference later.
+
+## Custom labeling
+
+When you go through the labeling workflow, create labels for each of the products you want to recognize. Then label each product's bounding box in each image.
+
+## Analyze shelves with a custom model
+
+When your custom model is trained and ready (you've completed the steps in the [Model customization guide](./model-customization.md)), you can use it through the Shelf Analyze operation. Set the _PRODUCT_CLASSIFIER_MODEL_ URL parameter to the name of your custom model (the _ModelName_ value you used in the creation step).
+
+The API call will look like this:
+
+```bash
+curl.exe -H "Ocp-Apim-Subscription-Key: <subscriptionKey>" -H "Content-Type: application/json" "https://<endpoint>/vision/v4.0-preview.1/operations/shelfanalysis-productunderstanding:analyze?PRODUCT_CLASSIFIER_MODEL=myModelName" -d "{
+ 'url':'<your_url_string>'
+}"
+```
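+
+If you'd rather call the operation from Python than cURL, a minimal sketch of the same request might look like the following. The endpoint, key, model name, and image URL are placeholders for your own values.
+
+```python
+import requests
+
+endpoint = "https://<endpoint>"   # your Vision resource endpoint
+key = "<subscriptionKey>"         # your Vision resource key
+
+response = requests.post(
+    f"{endpoint}/vision/v4.0-preview.1/operations/shelfanalysis-productunderstanding:analyze",
+    params={"PRODUCT_CLASSIFIER_MODEL": "myModelName"},  # your custom model's name
+    headers={"Ocp-Apim-Subscription-Key": key, "Content-Type": "application/json"},
+    json={"url": "<your_url_string>"},                   # blob URL of the shelf image
+)
+response.raise_for_status()
+print(response.json())
+```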
+
+## Next steps
+
+In this guide, you learned how to use a custom Product Recognition model to better meet your business needs. Next, set up planogram matching, which works in conjunction with custom Product Recognition.
+
+> [!div class="nextstepaction"]
+> [Planogram matching](shelf-planogram.md)
+
+* [Image Analysis overview](../overview-image-analysis.md)
ai-services Shelf Modify Images https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/computer-vision/how-to/shelf-modify-images.md
+
+ Title: Prepare images for Product Recognition
+
+description: Use the stitching and rectification APIs to prepare organic photos of retail shelves for accurate image analysis.
+++++ Last updated : 07/10/2023++++
+# Prepare images for Product Recognition
+
+Part of the Product Recognition workflow involves fixing and modifying the input images so the service can perform correctly.
+
+This guide shows you how to use the Stitching API to combine multiple images of the same physical shelf. Stitching gives you a composite image of the entire retail shelf, even when each camera captures only part of it.
+
+This guide also shows you how to use the Rectification API to correct for perspective distortion when you stitch together different images.
+
+## Prerequisites
+* An Azure subscription - [Create one for free](https://azure.microsoft.com/free/cognitive-services/)
+* Once you have your Azure subscription, <a href="https://portal.azure.com/#create/Microsoft.CognitiveServicesComputerVision" title="create a Vision resource" target="_blank">create a Vision resource</a> in the Azure portal. It must be deployed in the **East US** or **West US 2** region. After it deploys, select **Go to resource**.
+ * You'll need the key and endpoint from the resource you create to connect your application to the Azure AI Vision service. You'll paste your key and endpoint into the code below later in the quickstart.
+* An Azure Storage resource with a blob storage container. [Create one](/azure/storage/common/storage-account-create?tabs=azure-portal)
+* [cURL](https://curl.haxx.se/) installed. Or, you can use a different REST platform, like Postman, Swagger, or the REST Client extension for VS Code.
+* A set of photos that show adjacent parts of the same shelf. A 50% overlap between images is recommended. You can download and use the sample "unstitched" images from [GitHub](https://github.com/Azure-Samples/cognitive-services-sample-data-files/tree/master/ComputerVision/shelf-analysis).
++
+## Use the Stitching API
+
+The Stitching API combines multiple images of the same physical shelf.
++
+> [!NOTE]
+> The brands shown in the images are not affiliated with Microsoft and do not indicate any form of endorsement of Microsoft or Microsoft products by the brand owners, or an endorsement of the brand owners or their products by Microsoft.
+
+To run the image stitching operation on a set of images, follow these steps:
+
+1. Upload the images you'd like to stitch together to your blob storage container, and get the absolute URL for each image. You can stitch up to 10 images at once.
+1. Copy the following `curl` command into a text editor.
+
+ ```bash
+ curl.exe -H "Ocp-Apim-Subscription-Key: <subscriptionKey>" -H "Content-Type: application/json" "<endpoint>/computervision/imagecomposition:stitch?api-version=2023-04-01-preview" --output <your_filename> -d "{
+ 'images': [
+ {
+ 'url':'<your_url_string>'
+ },
+ {
+ 'url':'<your_url_string_2>'
+ },
+ ...
+ ]
+ }"
+ ```
+1. Make the following changes in the command where needed:
+ 1. Replace the value of `<subscriptionKey>` with your Vision resource key.
+ 1. Replace the value of `<endpoint>` with your Vision resource endpoint. For example: `https://YourResourceName.cognitiveservices.azure.com`.
+ 1. Replace the `<your_url_string>` contents with the blob URLs of the images. The images should be ordered left to right and top to bottom, according to the physical spaces they show.
+ 1. Replace `<your_filename>` with the name and extension of the file where you'd like to get the result (for example, `download.jpg`).
+1. Open a command prompt window.
+1. Paste your edited `curl` command from the text editor into the command prompt window, and then run the command.
+
+## Examine the stitching response
+
+The API returns a `200` response, and the new file is downloaded to the location you specified.
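+
+If you're scripting the operation, the following Python sketch makes the same stitching request with the `requests` library and saves the returned composite image to disk. The endpoint, key, image URLs, and output file name are placeholders for your own values.
+
+```python
+import requests
+
+endpoint = "https://<endpoint>"   # your Vision resource endpoint
+key = "<subscriptionKey>"         # your Vision resource key
+
+body = {
+    "images": [
+        {"url": "<your_url_string>"},    # photos ordered left to right,
+        {"url": "<your_url_string_2>"},  # top to bottom
+    ]
+}
+
+response = requests.post(
+    f"{endpoint}/computervision/imagecomposition:stitch",
+    params={"api-version": "2023-04-01-preview"},
+    headers={"Ocp-Apim-Subscription-Key": key, "Content-Type": "application/json"},
+    json=body,
+)
+response.raise_for_status()
+
+# The service returns the stitched image itself, so write the binary content to a file.
+with open("<your_filename>", "wb") as f:
+    f.write(response.content)
+```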
+
+## Use the Rectification API
+
+After you complete the stitching operation, we recommend you do the rectification operation for optimal analysis results.
++
+> [!NOTE]
+> The brands shown in the images are not affiliated with Microsoft and do not indicate any form of endorsement of Microsoft or Microsoft products by the brand owners, or an endorsement of the brand owners or their products by Microsoft.
+
+To correct the perspective distortion in the composite image, follow these steps:
+
+1. Upload the image you'd like to rectify to your blob storage container, and get the absolute URL.
+1. Copy the following `curl` command into a text editor.
+
+ ```bash
+ curl.exe -H "Ocp-Apim-Subscription-Key: <subscriptionKey>" -H "Content-Type: application/json" "<endpoint>/computervision/imagecomposition:rectify?api-version=2023-04-01-preview" --output <your_filename> -d "{
+ 'url': '<your_url_string>',
+ 'controlPoints': {
+ 'topLeft': {
+ 'x': 0.1,
+ 'y': 0.1
+ },
+ 'topRight': {
+ 'x': 0.2,
+ 'y': 0.2
+ },
+ 'bottomLeft': {
+ 'x': 0.3,
+ 'y': 0.3
+ },
+ 'bottomRight': {
+ 'x': 0.4,
+ 'y': 0.4
+ }
+ }
+ }"
+ ```
+1. Make the following changes in the command where needed:
+ 1. Replace the value of `<subscriptionKey>` with your Vision resource key.
+ 1. Replace the value of `<endpoint>` with your Vision resource endpoint. For example: `https://YourResourceName.cognitiveservices.azure.com`.
+ 1. Replace `<your_url_string>` with the blob storage URL of the image.
+ 1. Replace the four control point coordinates in the request body. X is the horizontal coordinate and Y is the vertical coordinate. The coordinates are normalized: for example, 0.5,0.5 indicates the center of the image, and 1,1 indicates the bottom-right corner. Set the coordinates to define the four corners of the shelf fixture as it appears in the image. If you know the corner locations in pixels, see the sketch after these steps for one way to convert them to normalized values.
+
+ :::image type="content" source="../media/shelf/rectify.png" alt-text="Photo of a shelf with its four corners outlined.":::
+
+ > [!NOTE]
+ > The brands shown in the images are not affiliated with Microsoft and do not indicate any form of endorsement of Microsoft or Microsoft products by the brand owners, or an endorsement of the brand owners or their products by Microsoft.
+
+ 1. Replace `<your_filename>` with the name and extension of the file where you'd like to get the result (for example, `download.jpg`).
+1. Open a command prompt window.
+1. Paste your edited `curl` command from the text editor into the command prompt window, and then run the command.
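+
+If you've measured the shelf corners in pixel coordinates (for example, in an image editor), the following sketch shows one way to convert them to the normalized values the request body expects. The image size and corner locations are made-up example numbers.
+
+```python
+# Convert example pixel coordinates of the four shelf corners into normalized (0-1) values.
+image_width, image_height = 4000, 3000   # size of your composite image, in pixels
+
+corners_px = {                            # hypothetical corner locations, in pixels
+    "topLeft": (400, 300),
+    "topRight": (3600, 350),
+    "bottomLeft": (450, 2700),
+    "bottomRight": (3550, 2650),
+}
+
+control_points = {
+    name: {"x": round(x / image_width, 3), "y": round(y / image_height, 3)}
+    for name, (x, y) in corners_px.items()
+}
+print(control_points)   # e.g. {'topLeft': {'x': 0.1, 'y': 0.1}, ...}
+```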
++
+## Examine the rectification response
+
+The API returns a `200` response, and the new file is downloaded to the location you specified.
+
+## Next steps
+
+In this guide, you learned how to prepare shelf photos for analysis. Next, call the Product Understanding API to get analysis results.
+
+> [!div class="nextstepaction"]
+> [Analyze a shelf image](./shelf-analyze.md)
+
+* [Image Analysis overview](../overview-image-analysis.md)
ai-services Shelf Planogram https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/computer-vision/how-to/shelf-planogram.md
+
+ Title: Check planogram compliance with Image Analysis
+
+description: Learn how to use the Planogram Matching API to check that a retail shelf in a photo matches its planogram layout.
++++++ Last updated : 05/02/2023+++
+# Check planogram compliance with Image Analysis
+
+A planogram is a diagram that indicates the correct placement of retail products on shelves. The Image Analysis Planogram Matching API lets you compare analysis results from a photo to the store's planogram input. It returns an account of all the positions in the planogram, and whether a product was found in each position.
++
+> [!NOTE]
+> The brands shown in the images are not affiliated with Microsoft and do not indicate any form of endorsement of Microsoft or Microsoft products by the brand owners, or an endorsement of the brand owners or their products by Microsoft.
+
+## Prerequisites
+* You must have already set up and run basic [Product Understanding analysis](./shelf-analyze.md) with the Product Understanding API.
+* [cURL](https://curl.haxx.se/) installed. Or, you can use a different REST platform, like Postman, Swagger, or the REST Client extension for VS Code.
+
+## Prepare a planogram schema
+
+You need to have your planogram data in a specific JSON format. See the sections below for field definitions.
+
+```json
+"planogram": {
+ "width": 100.0,
+ "height": 50.0,
+ "products": [
+ {
+ "id": "string",
+ "name": "string",
+ "w": 12.34,
+ "h": 123.4
+ }
+ ],
+ "fixtures": [
+ {
+ "id": "string",
+ "w": 2.0,
+ "h": 10.0,
+ "x": 0.0,
+ "y": 3.0
+ }
+ ],
+ "positions": [
+ {
+ "id": "string",
+ "productId": "string",
+ "fixtureId": "string",
+ "x": 12.0,
+ "y": 34.0
+ }
+ ]
+}
+```
+
+The X and Y coordinates are relative to a top-left origin, and the width and height extend each bounding box down and to the right. The following diagram shows examples of the coordinate system.
++
+> [!NOTE]
+> The brands shown in the images are not affiliated with Microsoft and do not indicate any form of endorsement of Microsoft or Microsoft products by the brand owners, or an endorsement of the brand owners or their products by Microsoft.
+
+Quantities in the planogram schema are in nonspecific units. They can correspond to inches, centimeters, or any other unit of measurement. The matching algorithm calculates the relationship between the photo analysis units (pixels) and the planogram units.
+
+### Planogram API Model
+
+Describes the planogram for planogram matching operations.
+
+| Name | Type | Description | Required |
+| - | - | -- | -- |
+| `width` | double | Width of the planogram. | Yes |
+| `height` | double | Height of the planogram. | Yes |
+| `products` | [ProductApiModel](#product-api-model) | List of products in the planogram. | Yes |
+| `fixtures` | [FixtureApiModel](#fixture-api-model) | List of fixtures in the planogram. | Yes |
+| `positions` | [PositionApiModel](#position-api-model)| List of positions in the planogram. | Yes |
+
+### Product API Model
+
+Describes a product in the planogram.
+
+| Name | Type | Description | Required |
+| - | - | -- | -- |
+| `id` | string | ID of the product. | Yes |
+| `name` | string | Name of the product. | Yes |
+| `w` | double | Width of the product. | Yes |
+| `h` | double | Height of the product. | Yes |
+
+### Fixture API Model
+
+Describes a fixture (shelf or similar hardware) in a planogram.
+
+| Name | Type | Description | Required |
+| - | - | -- | -- |
+| `id` | string | ID of the fixture. | Yes |
+| `w` | double | Width of the fixture. | Yes |
+| `h` | double | Height of the fixture. | Yes |
+| `x` | double | Left offset from the origin, in units of inches or centimeters. | Yes |
+| `y` | double | Top offset from the origin, in units of inches or centimeters. | Yes |
+
+### Position API Model
+
+Describes a product's position in a planogram.
+
+| Name | Type | Description | Required |
+| - | - | -- | -- |
+| `id` | string | ID of the position. | Yes |
+| `productId` | string | ID of the product. | Yes |
+| `fixtureId` | string | ID of the fixture that the product is on. | Yes |
+| `x` | double | Left offset from the origin, in units of inches or centimeters. | Yes |
+| `y` | double | Top offset from the origin, in units of inches or centimeters. | Yes |
+
+## Get analysis results
+
+Next, you need to do a [Product Understanding](./shelf-analyze.md) API call with a [custom model](./shelf-model-customization.md).
+
+The returned JSON text should be a `"detectedProducts"` structure. It shows all the products that were detected on the shelf, with the product-specific labels you used in the training stage.
+
+```json
+"detectedProducts": {
+ "imageMetadata": {
+ "width": 21,
+ "height": 25
+ },
+ "products": [
+ {
+ "id": "01",
+ "boundingBox": {
+ "x": 123,
+ "y": 234,
+ "w": 34,
+ "h": 45
+ },
+ "classifications": [
+ {
+ "confidence": 0.8,
+ "label": "Product1"
+ }
+ ]
+ }
+ ],
+ "gaps": [
+ {
+ "id": "02",
+ "boundingBox": {
+ "x": 12,
+ "y": 123,
+ "w": 1234,
+ "h": 123
+ },
+ "classifications": [
+ {
+ "confidence": 0.9,
+ "label": "Product1"
+ }
+ ]
+ }
+ ]
+}
+```
+
+## Prepare the matching request
+
+Join the JSON content of your planogram schema with the JSON content of the analysis results, like this:
+
+```json
+"planogram": {
+ "width": 100.0,
+ "height": 50.0,
+ "products": [
+ {
+ "id": "string",
+ "name": "string",
+ "w": 12.34,
+ "h": 123.4
+ }
+ ],
+ "fixtures": [
+ {
+ "id": "string",
+ "w": 2.0,
+ "h": 10.0,
+ "x": 0.0,
+ "y": 3.0
+ }
+ ],
+ "positions": [
+ {
+ "id": "string",
+ "productId": "string",
+ "fixtureId": "string",
+ "x": 12.0,
+ "y": 34.0
+ }
+ ]
+},
+"detectedProducts": {
+ "imageMetadata": {
+ "width": 21,
+ "height": 25
+ },
+ "products": [
+ {
+ "id": "01",
+ "boundingBox": {
+ "x": 123,
+ "y": 234,
+ "w": 34,
+ "h": 45
+ },
+ "classifications": [
+ {
+ "confidence": 0.8,
+ "label": "Product1"
+ }
+ ]
+ }
+ ],
+ "gaps": [
+ {
+ "id": "02",
+ "boundingBox": {
+ "x": 12,
+ "y": 123,
+ "w": 1234,
+ "h": 123
+ },
+ "classifications": [
+ {
+ "confidence": 0.9,
+ "label": "Product1"
+ }
+ ]
+ }
+ ]
+}
+```
+
+This is the text you'll use in your API request body.
+
+## Call the planogram matching API
+
+1. Copy the following `curl` command into a text editor.
+
+ ```bash
+ curl.exe -H "Ocp-Apim-Subscription-Key: <subscriptionKey>" -H "Content-Type: application/json" "https://<endpoint>/vision/v4.0-preview.1/operations/shelfanalysis-planogrammatching:analyze" -d "<body>"
+ ```
+1. Make the following changes in the command where needed:
+ 1. Replace the value of `<subscriptionKey>` with your Vision resource key.
+ 1. Replace the value of `<endpoint>` with your Vision resource endpoint. For example: `https://YourResourceName.cognitiveservices.azure.com`.
+ 1. Replace the value of `<body>` with the joined JSON string you prepared in the previous section.
+1. Open a command prompt window.
+1. Paste your edited `curl` command from the text editor into the command prompt window, and then run the command.
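+
+If you'd rather build the request body and call the operation from Python, a minimal sketch might look like the following. It assumes you've saved the planogram schema and the product understanding result to local JSON files (the file names here are hypothetical), and it reads the response fields shown in the next section.
+
+```python
+import json
+import requests
+
+endpoint = "https://<endpoint>"   # your Vision resource endpoint
+key = "<subscriptionKey>"         # your Vision resource key
+
+with open("planogram.json") as f:          # the planogram schema you prepared
+    planogram = json.load(f)
+with open("detected-products.json") as f:  # the product understanding result
+    detected_products = json.load(f)
+
+response = requests.post(
+    f"{endpoint}/vision/v4.0-preview.1/operations/shelfanalysis-planogrammatching:analyze",
+    headers={"Ocp-Apim-Subscription-Key": key, "Content-Type": "application/json"},
+    json={"planogram": planogram, "detectedProducts": detected_products},
+)
+response.raise_for_status()
+
+for match in response.json()["matchedResultsPerPosition"]:
+    label = match["detectedObject"]["classifications"][0]["label"]
+    print(f'Position {match["positionId"]}: {label}')
+```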
+
+## Examine the response
+
+A successful response is returned in JSON, showing the products (or gaps) detected at each planogram position. See the sections below for field definitions.
+
+```json
+{
+ "matchedResultsPerPosition": [
+ {
+ "positionId": "01",
+ "detectedObject": {
+ "id": "01",
+ "boundingBox": {
+ "x": 12,
+ "y": 1234,
+ "w": 123,
+ "h": 12345
+ },
+ "classifications": [
+ {
+ "confidence": 0.9,
+ "label": "Product1"
+ }
+ ]
+ }
+ }
+ ]
+}
+```
+
+### Planogram Matching Position API Model
+
+Paired planogram position ID and corresponding detected object from product understanding result.
+
+| Name | Type | Description | Required |
+| - | - | -- | -- |
+| `positionId` | string | The position ID from the planogram matched to the corresponding detected object. | No |
+| `detectedObject` | DetectedObjectApiModel | Describes a detected object in an image. | No |
+
+## Next steps
+
+* [Image Analysis overview](../overview-image-analysis.md)
ai-services Specify Detection Model https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/computer-vision/how-to/specify-detection-model.md
+
+ Title: How to specify a detection model - Face
+
+description: This article will show you how to choose which face detection model to use with your Azure AI Face application.
+++++++ Last updated : 03/05/2021+
+ms.devlang: csharp
+++
+# Specify a face detection model
+
+This guide shows you how to specify a face detection model for the Azure AI Face service.
+
+The Face service uses machine learning models to perform operations on human faces in images. We continue to improve the accuracy of our models based on customer feedback and advances in research, and we deliver these improvements as model updates. Developers have the option to specify which version of the face detection model they'd like to use; they can choose the model that best fits their use case.
+
+Read on to learn how to specify the face detection model in certain face operations. The Face service uses face detection whenever it converts an image of a face into some other form of data.
+
+If you aren't sure whether you should use the latest model, skip to the [Evaluate different models](#evaluate-different-models) section to evaluate the new model and compare results using your current data set.
+
+## Prerequisites
+
+You should be familiar with the concept of AI face detection. If you aren't, see the face detection conceptual guide or how-to guide:
+
+* [Face detection concepts](../concept-face-detection.md)
+* [Call the detect API](identity-detect-faces.md)
++
+## Evaluate different models
+
+The different face detection models are optimized for different tasks. See the following table for an overview of the differences.
+
+|**detection_01** |**detection_02** |**detection_03**
+||||
+|Default choice for all face detection operations. | Released in May 2019 and available optionally in all face detection operations. | Released in February 2021 and available optionally in all face detection operations.
+|Not optimized for small, side-view, or blurry faces. | Improved accuracy on small, side-view, and blurry faces. | Further improved accuracy, including on smaller faces (64x64 pixels) and rotated face orientations.
+|Returns main face attributes (head pose, age, emotion, and so on) if they're specified in the detect call. | Does not return face attributes. | Returns mask and head pose attributes if they're specified in the detect call.
+|Returns face landmarks if they're specified in the detect call. | Does not return face landmarks. | Returns face landmarks if they're specified in the detect call.
+
+The best way to compare the performances of the detection models is to use them on a sample dataset. We recommend calling the [Face - Detect] API on a variety of images, especially images of many faces or of faces that are difficult to see, using each detection model. Pay attention to the number of faces that each model returns.
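+
+As a starting point, the following rough Python sketch calls the Face - Detect REST endpoint directly and counts the faces each detection model returns for the same set of images. The endpoint, key, and image URLs are placeholders for your own values.
+
+```python
+import requests
+
+endpoint = "https://<your-resource-name>.cognitiveservices.azure.com"
+key = "<subscription-key>"
+image_urls = [
+    "https://example.com/group-photo-1.jpg",   # hypothetical test images
+    "https://example.com/group-photo-2.jpg",
+]
+
+for model in ("detection_01", "detection_02", "detection_03"):
+    total = 0
+    for url in image_urls:
+        response = requests.post(
+            f"{endpoint}/face/v1.0/detect",
+            params={"detectionModel": model, "returnFaceId": "false"},
+            headers={"Ocp-Apim-Subscription-Key": key, "Content-Type": "application/json"},
+            json={"url": url},
+        )
+        response.raise_for_status()
+        total += len(response.json())   # the service returns one object per detected face
+    print(f"{model}: {total} faces detected across {len(image_urls)} images")
+```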
+
+## Detect faces with specified model
+
+Face detection finds the bounding-box locations of human faces and identifies their visual landmarks. It extracts the face's features and stores them for later use in [recognition](../concept-face-recognition.md) operations.
+
+When you use the [Face - Detect] API, you can assign the model version with the `detectionModel` parameter. The available values are:
+
+* `detection_01`
+* `detection_02`
+* `detection_03`
+
+A request URL for the [Face - Detect] REST API will look like this:
+
+`https://westus.api.cognitive.microsoft.com/face/v1.0/detect[?returnFaceId][&returnFaceLandmarks][&returnFaceAttributes][&recognitionModel][&returnRecognitionModel][&detectionModel]&subscription-key=<Subscription key>`
+
+If you are using the client library, you can assign the value for `detectionModel` by passing in an appropriate string. If you leave it unassigned, the API will use the default model version (`detection_01`). See the following code example for the .NET client library.
+
+```csharp
+string imageUrl = "https://news.microsoft.com/ceo/assets/photos/06_web.jpg";
+var faces = await faceClient.Face.DetectWithUrlAsync(url: imageUrl, returnFaceId: false, returnFaceLandmarks: false, recognitionModel: "recognition_04", detectionModel: "detection_03");
+```
+
+## Add face to Person with specified model
+
+The Face service can extract face data from an image and associate it with a **Person** object through the [PersonGroup Person - Add Face](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f3039523b) API. In this API call, you can specify the detection model in the same way as in [Face - Detect].
+
+See the following code example for the .NET client library.
+
+```csharp
+// Create a PersonGroup and add a person with face detected by "detection_03" model
+string personGroupId = "mypersongroupid";
+await faceClient.PersonGroup.CreateAsync(personGroupId, "My Person Group Name", recognitionModel: "recognition_04");
+
+string personId = (await faceClient.PersonGroupPerson.CreateAsync(personGroupId, "My Person Name")).PersonId;
+
+string imageUrl = "https://news.microsoft.com/ceo/assets/photos/06_web.jpg";
+await faceClient.PersonGroupPerson.AddFaceFromUrlAsync(personGroupId, personId, imageUrl, detectionModel: "detection_03");
+```
+
+This code creates a **PersonGroup** with ID `mypersongroupid` and adds a **Person** to it. Then it adds a Face to this **Person** using the `detection_03` model. If you don't specify the *detectionModel* parameter, the API will use the default model, `detection_01`.
+
+> [!NOTE]
+> You don't need to use the same detection model for all faces in a **Person** object, and you don't need to use the same detection model when detecting new faces to compare with a **Person** object (in the [Face - Identify] API, for example).
+
+## Add face to FaceList with specified model
+
+You can also specify a detection model when you add a face to an existing **FaceList** object. See the following code example for the .NET client library.
+
+```csharp
+await faceClient.FaceList.CreateAsync(faceListId, "My face collection", recognitionModel: "recognition_04");
+
+string imageUrl = "https://news.microsoft.com/ceo/assets/photos/06_web.jpg";
+await faceClient.FaceList.AddFaceFromUrlAsync(faceListId, imageUrl, detectionModel: "detection_03");
+```
+
+This code creates a **FaceList** called `My face collection` and adds a Face to it with the `detection_03` model. If you don't specify the *detectionModel* parameter, the API will use the default model, `detection_01`.
+
+> [!NOTE]
+> You don't need to use the same detection model for all faces in a **FaceList** object, and you don't need to use the same detection model when detecting new faces to compare with a **FaceList** object.
++
+## Next steps
+
+In this article, you learned how to specify the detection model to use with different Face APIs. Next, follow a quickstart to get started with face detection and analysis.
+
+* [Face .NET SDK](../quickstarts-sdk/identity-client-library.md?pivots=programming-language-csharp)
+* [Face Python SDK](../quickstarts-sdk/identity-client-library.md?pivots=programming-language-python)
+
+[Face - Detect]: https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d
+[Face - Find Similar]: https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f30395237
+[Face - Identify]: https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f30395239
+[Face - Verify]: https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f3039523a
+[PersonGroup - Create]: https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f30395244
+[PersonGroup - Get]: https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f30395246
+[PersonGroup Person - Add Face]: https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f3039523b
+[PersonGroup - Train]: https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f30395249
+[LargePersonGroup - Create]: https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/599acdee6ac60f11b48b5a9d
+[FaceList - Create]: https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f3039524b
+[FaceList - Get]: https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f3039524c
+[FaceList - Add Face]: https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f30395250
+[LargeFaceList - Create]: https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/5a157b68d2de3616c086f2cc
ai-services Specify Recognition Model https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/computer-vision/how-to/specify-recognition-model.md
+
+ Title: How to specify a recognition model - Face
+
+description: This article will show you how to choose which recognition model to use with your Azure AI Face application.
++++++ Last updated : 03/05/2021+
+ms.devlang: csharp
+++
+# Specify a face recognition model
++
+This guide shows you how to specify a face recognition model for face detection, identification and similarity search using the Azure AI Face service.
+
+The Face service uses machine learning models to perform operations on human faces in images. We continue to improve the accuracy of our models based on customer feedback and advances in research, and we deliver these improvements as model updates. Developers can specify which version of the face recognition model they'd like to use. They can choose the model that best fits their use case.
+
+The Azure AI Face service has four recognition models available. The models _recognition_01_ (published 2017), _recognition_02_ (published 2019), and _recognition_03_ (published 2020) are continually supported to ensure backwards compatibility for customers using FaceLists or **PersonGroup**s created with these models. A **FaceList** or **PersonGroup** will always use the recognition model it was created with, and new faces will become associated with this model when they're added. This can't be changed after creation and customers will need to use the corresponding recognition model with the corresponding **FaceList** or **PersonGroup**.
+
+You can move to later recognition models at your own convenience; however, you'll need to create new FaceLists and PersonGroups with the recognition model of your choice.
+
+The _recognition_04_ model (published 2021) is the most accurate model currently available. If you're a new customer, we recommend using this model. _Recognition_04_ will provide improved accuracy for both similarity comparisons and person-matching comparisons. _Recognition_04_ improves recognition for enrolled users wearing face covers (surgical masks, N95 masks, cloth masks). Now you can build safe and seamless user experiences that use the latest _detection_03_ model to detect whether an enrolled user is wearing a face cover. Then you can use the latest _recognition_04_ model to recognize their identity. Each model operates independently of the others, and a confidence threshold set for one model isn't meant to be compared across the other recognition models.
+
+Read on to learn how to specify a selected model in different Face operations while avoiding model conflicts. If you're an advanced user and would like to determine whether you should switch to the latest model, skip to the [Evaluate different models](#evaluate-different-models) section. You can evaluate the new model and compare results using your current data set.
++
+## Prerequisites
+
+You should be familiar with the concepts of AI face detection and identification. If you aren't, see these guides first:
+
+* [Face detection concepts](../concept-face-detection.md)
+* [Face recognition concepts](../concept-face-recognition.md)
+* [Call the detect API](identity-detect-faces.md)
+
+## Detect faces with specified model
+
+Face detection identifies the visual landmarks of human faces and finds their bounding-box locations. It also extracts the face's features and stores them temporarily for up to 24 hours for use in identification. All of this information forms the representation of one face.
+
+The recognition model is used when the face features are extracted, so you can specify a model version when performing the Detect operation.
+
+When using the [Face - Detect] API, assign the model version with the `recognitionModel` parameter. The available values are:
+* recognition_01
+* recognition_02
+* recognition_03
+* recognition_04
++
+Optionally, you can specify the _returnRecognitionModel_ parameter (default **false**) to indicate whether _recognitionModel_ should be returned in response. So, a request URL for the [Face - Detect] REST API will look like this:
+
+`https://westus.api.cognitive.microsoft.com/face/v1.0/detect[?returnFaceId][&returnFaceLandmarks][&returnFaceAttributes][&recognitionModel][&returnRecognitionModel]&subscription-key=<Subscription key>`
+
+If you're using the client library, you can assign the value for `recognitionModel` by passing a string representing the version. If you leave it unassigned, a default model version of `recognition_01` will be used. See the following code example for the .NET client library.
+
+```csharp
+string imageUrl = "https://news.microsoft.com/ceo/assets/photos/06_web.jpg";
+var faces = await faceClient.Face.DetectWithUrlAsync(url: imageUrl, returnFaceId: true, returnFaceLandmarks: true, recognitionModel: "recognition_01", returnRecognitionModel: true);
+```
+
+> [!NOTE]
+> The _returnFaceId_ parameter must be set to `true` in order to enable the face recognition scenarios in later steps.
+
+## Identify faces with specified model
+
+The Face service can extract face data from an image and associate it with a **Person** object (through the [Add face](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f3039523b) API call, for example), and multiple **Person** objects can be stored together in a **PersonGroup**. Then, a new face can be compared against a **PersonGroup** (with the [Face - Identify] call), and the matching person within that group can be identified.
+
+A **PersonGroup** should have one unique recognition model for all of the **Person**s, and you can specify this using the `recognitionModel` parameter when you create the group ([PersonGroup - Create] or [LargePersonGroup - Create]). If you don't specify this parameter, the original `recognition_01` model is used. A group will always use the recognition model it was created with, and new faces will become associated with this model when they're added to it. This can't be changed after a group's creation. To see what model a **PersonGroup** is configured with, use the [PersonGroup - Get] API with the _returnRecognitionModel_ parameter set as **true**.
+
+See the following code example for the .NET client library.
+
+```csharp
+// Create an empty PersonGroup with "recognition_04" model
+string personGroupId = "mypersongroupid";
+await faceClient.PersonGroup.CreateAsync(personGroupId, "My Person Group Name", recognitionModel: "recognition_04");
+```
+
+In this code, a **PersonGroup** with ID `mypersongroupid` is created, and it's set up to use the _recognition_04_ model to extract face features.
+
+Correspondingly, you need to specify which model to use when detecting faces to compare against this **PersonGroup** (through the [Face - Detect] API). The model you use should always be consistent with the **PersonGroup**'s configuration; otherwise, the operation will fail due to incompatible models.
+
+There is no change in the [Face - Identify] API; you only need to specify the model version in detection.
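+
+To illustrate, here's a rough Python sketch of the detect-then-identify flow over REST. It assumes the `mypersongroupid` group above was created with _recognition_04_ and has already had faces added and been trained; the endpoint, key, and image URL are placeholders for your own values.
+
+```python
+import requests
+
+endpoint = "https://<your-resource-name>.cognitiveservices.azure.com"
+key = "<subscription-key>"
+headers = {"Ocp-Apim-Subscription-Key": key, "Content-Type": "application/json"}
+
+# Detect with the same recognition model the PersonGroup was created with.
+detect = requests.post(
+    f"{endpoint}/face/v1.0/detect",
+    params={"recognitionModel": "recognition_04", "returnFaceId": "true"},
+    headers=headers,
+    json={"url": "https://example.com/person-to-identify.jpg"},
+)
+detect.raise_for_status()
+face_ids = [face["faceId"] for face in detect.json()]
+
+# Identify takes no model parameter; model consistency is handled at detection time.
+identify = requests.post(
+    f"{endpoint}/face/v1.0/identify",
+    headers=headers,
+    json={"personGroupId": "mypersongroupid", "faceIds": face_ids},
+)
+identify.raise_for_status()
+print(identify.json())
+```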
+
+## Find similar faces with specified model
+
+You can also specify a recognition model for similarity search. You can assign the model version with `recognitionModel` when creating the **FaceList** with [FaceList - Create] API or [LargeFaceList - Create]. If you don't specify this parameter, the `recognition_01` model is used by default. A **FaceList** will always use the recognition model it was created with, and new faces will become associated with this model when they're added to the list; you can't change this after creation. To see what model a **FaceList** is configured with, use the [FaceList - Get] API with the _returnRecognitionModel_ parameter set as **true**.
+
+See the following code example for the .NET client library.
+
+```csharp
+await faceClient.FaceList.CreateAsync(faceListId, "My face collection", recognitionModel: "recognition_04");
+```
+
+This code creates a **FaceList** called `My face collection`, using the _recognition_04_ model for feature extraction. When you search this **FaceList** for similar faces to a new detected face, that face must have been detected ([Face - Detect]) using the _recognition_04_ model. As in the previous section, the model needs to be consistent.
+
+There is no change in the [Face - Find Similar] API; you only specify the model version in detection.
+
+## Verify faces with specified model
+
+The [Face - Verify] API checks whether two faces belong to the same person. There is no change in the Verify API with regard to recognition models, but you can only compare faces that were detected with the same model.
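+
+For example, a minimal REST sketch of the Verify call looks like the following; `face_id_1` and `face_id_2` are assumed to come from two Detect calls that used the same `recognitionModel`.
+
+```python
+import requests
+
+endpoint = "https://<your-resource-name>.cognitiveservices.azure.com"
+key = "<subscription-key>"
+
+face_id_1 = "<faceId-from-first-detect-call>"    # both IDs must come from detections
+face_id_2 = "<faceId-from-second-detect-call>"   # made with the same recognition model
+
+response = requests.post(
+    f"{endpoint}/face/v1.0/verify",
+    headers={"Ocp-Apim-Subscription-Key": key, "Content-Type": "application/json"},
+    json={"faceId1": face_id_1, "faceId2": face_id_2},
+)
+response.raise_for_status()
+print(response.json())   # includes isIdentical and a confidence score
+```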
+
+## Evaluate different models
+
+If you'd like to compare the performances of different recognition models on your own data, you'll need to:
+1. Create four PersonGroups using _recognition_01_, _recognition_02_, _recognition_03_, and _recognition_04_ respectively.
+1. Use your image data to detect faces and register them to **Person**s within these four **PersonGroup**s.
+1. Train your PersonGroups using the PersonGroup - Train API.
+1. Test with Face - Identify on all four **PersonGroup**s and compare the results.
++
+If you normally specify a confidence threshold (a value between zero and one that determines how confident the model must be to identify a face), you may need to use different thresholds for different models. A threshold for one model isn't meant to be shared with another and won't necessarily produce the same results.
+
+## Next steps
+
+In this article, you learned how to specify the recognition model to use with different Face service APIs. Next, follow a quickstart to get started with face detection.
+
+* [Face .NET SDK](../quickstarts-sdk/identity-client-library.md?pivots=programming-language-csharp)
+* [Face Python SDK](../quickstarts-sdk/identity-client-library.md?pivots=programming-language-python)
+
+[Face - Detect]: https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d
+[Face - Find Similar]: https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f30395237
+[Face - Identify]: https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f30395239
+[Face - Verify]: https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f3039523a
+[PersonGroup - Create]: https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f30395244
+[PersonGroup - Get]: https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f30395246
+[PersonGroup Person - Add Face]: https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f3039523b
+[PersonGroup - Train]: https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f30395249
+[LargePersonGroup - Create]: https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/599acdee6ac60f11b48b5a9d
+[FaceList - Create]: https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f3039524b
+[FaceList - Get]: https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f3039524c
+[LargeFaceList - Create]: https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/5a157b68d2de3616c086f2cc
ai-services Use Headpose https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/computer-vision/how-to/use-headpose.md
+
+ Title: Use the HeadPose attribute
+
+description: Learn how to use the HeadPose attribute to automatically rotate the face rectangle or detect head gestures in a video feed.
++++++ Last updated : 02/23/2021+
+ms.devlang: csharp
+++
+# Use the HeadPose attribute
+
+In this guide, you'll see how you can use the HeadPose attribute of a detected face to enable some key scenarios.
+
+## Rotate the face rectangle
+
+The face rectangle, returned with every detected face, marks the location and size of the face in the image. By default, the rectangle is always aligned with the image (its sides are vertical and horizontal); this can be inefficient for framing angled faces. In situations where you want to programmatically crop faces in an image, it's better to be able to rotate the rectangle to crop.
+
+The [Azure AI Face WPF (Windows Presentation Foundation)](https://github.com/Azure-Samples/cognitive-services-dotnet-sdk-samples/tree/master/app-samples/Cognitive-Services-Face-WPF) sample app uses the HeadPose attribute to rotate its detected face rectangles.
+
+### Explore the sample code
+
+You can programmatically rotate the face rectangle by using the HeadPose attribute. If you specify this attribute when detecting faces (see [Call the detect API](identity-detect-faces.md)), you will be able to query it later. The following method from the [Azure AI Face WPF](https://github.com/Azure-Samples/cognitive-services-dotnet-sdk-samples/tree/master/app-samples/Cognitive-Services-Face-WPF) app takes a list of **DetectedFace** objects and returns a list of **[Face](https://github.com/Azure-Samples/cognitive-services-dotnet-sdk-samples/blob/master/app-samples/Cognitive-Services-Face-WPF/Sample-WPF/Controls/Face.cs)** objects. **Face** here is a custom class that stores face data, including the updated rectangle coordinates. New values are calculated for **top**, **left**, **width**, and **height**, and a new field **FaceAngle** specifies the rotation.
+
+```csharp
+/// <summary>
+/// Calculate the rendering face rectangle
+/// </summary>
+/// <param name="faces">Detected face from service</param>
+/// <param name="maxSize">Image rendering size</param>
+/// <param name="imageInfo">Image width and height</param>
+/// <returns>Face structure for rendering</returns>
+public static IEnumerable<Face> CalculateFaceRectangleForRendering(IList<DetectedFace> faces, int maxSize, Tuple<int, int> imageInfo)
+{
+ var imageWidth = imageInfo.Item1;
+ var imageHeight = imageInfo.Item2;
+ var ratio = (float)imageWidth / imageHeight;
+ int uiWidth = 0;
+ int uiHeight = 0;
+ if (ratio > 1.0)
+ {
+ uiWidth = maxSize;
+ uiHeight = (int)(maxSize / ratio);
+ }
+ else
+ {
+ uiHeight = maxSize;
+ uiWidth = (int)(ratio * uiHeight);
+ }
+
+ var uiXOffset = (maxSize - uiWidth) / 2;
+ var uiYOffset = (maxSize - uiHeight) / 2;
+ var scale = (float)uiWidth / imageWidth;
+
+ foreach (var face in faces)
+ {
+ var left = (int)(face.FaceRectangle.Left * scale + uiXOffset);
+ var top = (int)(face.FaceRectangle.Top * scale + uiYOffset);
+
+ // Angle of face rectangles, default value is 0 (not rotated).
+ double faceAngle = 0;
+
+ // If head pose attributes have been obtained, re-calculate the left & top (X & Y) positions.
+ if (face.FaceAttributes?.HeadPose != null)
+ {
+ // Head pose's roll value acts directly as the face angle.
+ faceAngle = face.FaceAttributes.HeadPose.Roll;
+ var angleToPi = Math.Abs((faceAngle / 180) * Math.PI);
+
+ // _____ | / \ |
+ // |____| => |/ /|
+ // | \ / |
+ // Re-calculate the face rectangle's left & top (X & Y) positions.
+ var newLeft = face.FaceRectangle.Left +
+ face.FaceRectangle.Width / 2 -
+ (face.FaceRectangle.Width * Math.Sin(angleToPi) + face.FaceRectangle.Height * Math.Cos(angleToPi)) / 2;
+
+ var newTop = face.FaceRectangle.Top +
+ face.FaceRectangle.Height / 2 -
+ (face.FaceRectangle.Height * Math.Sin(angleToPi) + face.FaceRectangle.Width * Math.Cos(angleToPi)) / 2;
+
+ left = (int)(newLeft * scale + uiXOffset);
+ top = (int)(newTop * scale + uiYOffset);
+ }
+
+ yield return new Face()
+ {
+ FaceId = face.FaceId?.ToString(),
+ Left = left,
+ Top = top,
+ OriginalLeft = (int)(face.FaceRectangle.Left * scale + uiXOffset),
+ OriginalTop = (int)(face.FaceRectangle.Top * scale + uiYOffset),
+ Height = (int)(face.FaceRectangle.Height * scale),
+ Width = (int)(face.FaceRectangle.Width * scale),
+ FaceAngle = faceAngle,
+ };
+ }
+}
+```
+
+### Display the updated rectangle
+
+From here, you can use the returned **Face** objects in your display. The following lines from [FaceDetectionPage.xaml](https://github.com/Azure-Samples/cognitive-services-dotnet-sdk-samples/blob/master/app-samples/Cognitive-Services-Face-WPF/Sample-WPF/Controls/FaceDetectionPage.xaml) show how the new rectangle is rendered from this data:
+
+```xaml
+ <DataTemplate>
+ <Rectangle Width="{Binding Width}" Height="{Binding Height}" Stroke="#FF26B8F4" StrokeThickness="1">
+ <Rectangle.LayoutTransform>
+ <RotateTransform Angle="{Binding FaceAngle}"/>
+ </Rectangle.LayoutTransform>
+ </Rectangle>
+</DataTemplate>
+```
+
+## Detect head gestures
+
+You can detect head gestures like nodding and head shaking by tracking HeadPose changes in real time. You can use this feature as a custom liveness detector.
+
+Liveness detection is the task of determining that a subject is a real person and not an image or video representation. A head gesture detector can serve as one way to help verify liveness, because a static image of a person can't perform a head gesture.
+
+> [!CAUTION]
+> To detect head gestures in real time, you'll need to call the Face API at a high rate (more than once per second). If you have a free-tier (f0) subscription, this will not be possible. If you have a paid-tier subscription, make sure you've calculated the costs of making rapid API calls for head gesture detection.
+
+See the [Face HeadPose Sample](https://github.com/Azure-Samples/cognitive-services-dotnet-sdk-samples/tree/master/app-samples/FaceAPIHeadPoseSample) on GitHub for a working example of head gesture detection.
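+
+The gesture logic itself is independent of the Face API. The following is a minimal sketch, not the sample's implementation, of a head-shake detector that watches the HeadPose yaw angle over a short window; the window length and angle threshold are illustrative assumptions.
+
+```csharp
+using System;
+using System.Collections.Generic;
+using System.Linq;
+
+/// <summary>
+/// Minimal head-shake detector sketch. Feed it the HeadPose yaw value from each
+/// Face API detection result; the threshold and window length are illustrative only.
+/// </summary>
+public class HeadShakeDetector
+{
+    private readonly List<(DateTime Time, double Yaw)> samples = new List<(DateTime, double)>();
+    private static readonly TimeSpan Window = TimeSpan.FromSeconds(2); // assumed gesture window
+    private const double YawThreshold = 15.0;                          // assumed yaw swing in degrees
+
+    /// <summary>Adds a yaw sample and returns true when a left-and-right swing is seen.</summary>
+    public bool AddSample(double yaw)
+    {
+        var now = DateTime.UtcNow;
+        samples.Add((now, yaw));
+        samples.RemoveAll(s => now - s.Time > Window);
+
+        // A head shake shows up as the yaw swinging past the threshold in both directions.
+        bool wentLeft = samples.Any(s => s.Yaw < -YawThreshold);
+        bool wentRight = samples.Any(s => s.Yaw > YawThreshold);
+        return wentLeft && wentRight;
+    }
+}
+```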
+
+## Next steps
+
+See the [Azure AI Face WPF](https://github.com/Azure-Samples/cognitive-services-dotnet-sdk-samples/tree/master/app-samples/Cognitive-Services-Face-WPF) app on GitHub for a working example of rotated face rectangles. Or, see the [Face HeadPose Sample](https://github.com/Azure-Samples/cognitive-services-dotnet-sdk-samples/tree/master/app-samples) app, which tracks the HeadPose attribute in real time to detect head movements.
ai-services Use Large Scale https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/computer-vision/how-to/use-large-scale.md
+
+ Title: "Example: Use the Large-Scale feature - Face"
+
+description: This guide is an article on how to scale up from existing PersonGroup and FaceList objects to LargePersonGroup and LargeFaceList objects.
+++++++ Last updated : 05/01/2019+
+ms.devlang: csharp
+++
+# Example: Use the large-scale feature
++
+This guide is an advanced article on how to scale up from existing PersonGroup and FaceList objects to LargePersonGroup and LargeFaceList objects, respectively. PersonGroups can hold up to 1000 persons in the free tier and 10,000 in the paid tier, while LargePersonGroups can hold up to one million persons in the paid tier.
+
+This guide demonstrates the migration process. It assumes a basic familiarity with PersonGroup and FaceList objects, the [Train](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/599ae2d16ac60f11b48b5aa4) operation, and the face recognition functions. To learn more about these subjects, see the [face recognition](../concept-face-recognition.md) conceptual guide.
+
+LargePersonGroup and LargeFaceList are collectively referred to as large-scale operations. LargePersonGroup can contain up to 1 million persons, each with a maximum of 248 faces. LargeFaceList can contain up to 1 million faces. The large-scale operations are similar to the conventional PersonGroup and FaceList but have some differences because of the new architecture.
+
+The samples are written in C# by using the Azure AI Face client library.
+
+> [!NOTE]
+> To enable Face search performance for Identification and FindSimilar at large scale, introduce a Train operation to preprocess the LargeFaceList and LargePersonGroup. The training time varies from seconds to about half an hour based on the actual capacity. During the training period, it's possible to perform Identification and FindSimilar if a successful training operation was completed before. The drawback is that newly added persons and faces don't appear in the results until a new training operation is completed.
+
+## Step 1: Initialize the client object
+
+When you use the Face client library, the key and subscription endpoint are passed in through the constructor of the FaceClient class. See the [quickstart](../quickstarts-sdk/identity-client-library.md?pivots=programming-language-csharp&tabs=visual-studio) for instructions on creating a Face client object.
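+
+For example, with the Microsoft.Azure.CognitiveServices.Vision.Face client library, initialization looks roughly like the following sketch; the key and endpoint values are placeholders for the values from your own Face resource.
+
+```csharp
+using Microsoft.Azure.CognitiveServices.Vision.Face;
+
+// Placeholders: replace with the key and endpoint of your Face resource.
+const string SubscriptionKey = "<your-face-key>";
+const string Endpoint = "https://<your-resource-name>.cognitiveservices.azure.com";
+
+// The key is passed through the credentials object; the endpoint is set on the client.
+var faceClient = new FaceClient(new ApiKeyServiceClientCredentials(SubscriptionKey))
+{
+    Endpoint = Endpoint
+};
+```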
+
+## Step 2: Code migration
+
+This section focuses on how to migrate PersonGroup or FaceList implementation to LargePersonGroup or LargeFaceList. Although LargePersonGroup or LargeFaceList differs from PersonGroup or FaceList in design and internal implementation, the API interfaces are similar for backward compatibility.
+
+Data migration isn't supported. You re-create the LargePersonGroup or LargeFaceList instead.
+
+### Migrate a PersonGroup to a LargePersonGroup
+
+Migration from a PersonGroup to a LargePersonGroup is simple. They share exactly the same group-level operations.
+
+For PersonGroup or Person-related implementation, you only need to change the API paths or SDK class/module to LargePersonGroup and LargePersonGroup Person.
+
+Add all of the faces and persons from the PersonGroup to the new LargePersonGroup. For more information, see [Add faces](add-faces.md).
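+
+As a rough sketch with the client library (the group ID, person name, and image path below are placeholders), re-creating the data in a LargePersonGroup might look like this:
+
+```csharp
+// Create the LargePersonGroup that replaces the old PersonGroup.
+const string LargePersonGroupId = "mylargepersongroupid_001";
+await faceClient.LargePersonGroup.CreateAsync(LargePersonGroupId, "MyLargePersonGroupDisplayName");
+
+// Re-create each person and re-add its faces; data migration isn't supported.
+var person = await faceClient.LargePersonGroupPerson.CreateAsync(LargePersonGroupId, "Anna");
+using (Stream stream = File.OpenRead(@"/path/to/person/images/anna_1.jpg"))
+{
+    await faceClient.LargePersonGroupPerson.AddFaceFromStreamAsync(LargePersonGroupId, person.PersonId, stream);
+}
+```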
+
+### Migrate a FaceList to a LargeFaceList
+
+| FaceList APIs | LargeFaceList APIs |
+|::|::|
+| Create | Create |
+| Delete | Delete |
+| Get | Get |
+| List | List |
+| Update | Update |
+| - | Train |
+| - | Get Training Status |
+
+The preceding table is a comparison of list-level operations between FaceList and LargeFaceList. As is shown, LargeFaceList comes with new operations, Train and Get Training Status, when compared with FaceList. Training the LargeFaceList is a precondition of the
+[FindSimilar](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f30395237) operation. Training isn't required for FaceList. The following snippet is a helper function to wait for the training of a LargeFaceList:
+
+```csharp
+/// <summary>
+/// Helper function to train LargeFaceList and wait for finish.
+/// </summary>
+/// <remarks>
+/// The time interval can be adjusted considering the following factors:
+/// - The training time which depends on the capacity of the LargeFaceList.
+/// - The acceptable latency for getting the training status.
+/// - The call frequency and cost.
+///
+/// Estimated training time for LargeFaceList in different scale:
+/// - 1,000 faces cost about 1 to 2 seconds.
+/// - 10,000 faces cost about 5 to 10 seconds.
+/// - 100,000 faces cost about 1 to 2 minutes.
+/// - 1,000,000 faces cost about 10 to 30 minutes.
+/// </remarks>
+/// <param name="largeFaceListId">The Id of the LargeFaceList for training.</param>
+/// <param name="timeIntervalInMilliseconds">The time interval for getting training status in milliseconds.</param>
+/// <returns>A task of waiting for LargeFaceList training finish.</returns>
+private static async Task TrainLargeFaceList(
+ string largeFaceListId,
+ int timeIntervalInMilliseconds = 1000)
+{
+    // Trigger the train call.
+    await faceClient.LargeFaceList.TrainAsync(largeFaceListId);
+
+    // Poll the training status until training finishes.
+    while (true)
+    {
+        await Task.Delay(timeIntervalInMilliseconds);
+        var status = await faceClient.LargeFaceList.GetTrainingStatusAsync(largeFaceListId);
+
+        if (status.Status == TrainingStatusType.Running)
+        {
+            continue;
+        }
+        else if (status.Status == TrainingStatusType.Succeeded)
+        {
+            break;
+        }
+        else
+        {
+            throw new Exception("The train operation failed.");
+        }
+    }
+}
+```
+
+Previously, a typical use of FaceList with added faces and FindSimilar looked like the following:
+
+```csharp
+// Create a FaceList.
+const string FaceListId = "myfacelistid_001";
+const string FaceListName = "MyFaceListDisplayName";
+const string ImageDir = @"/path/to/FaceList/images";
+faceClient.FaceList.CreateAsync(FaceListId, FaceListName).Wait();
+
+// Add Faces to the FaceList.
+Parallel.ForEach(
+ Directory.GetFiles(ImageDir, "*.jpg"),
+ async imagePath =>
+ {
+ using (Stream stream = File.OpenRead(imagePath))
+ {
+ await faceClient.FaceList.AddFaceFromStreamAsync(FaceListId, stream);
+ }
+ });
+
+// Perform FindSimilar.
+const string QueryImagePath = @"/path/to/query/image";
+var results = new List<SimilarPersistedFace[]>();
+using (Stream stream = File.OpenRead(QueryImagePath))
+{
+ var faces = faceClient.Face.DetectWithStreamAsync(stream).Result;
+ foreach (var face in faces)
+ {
+ results.Add(await faceClient.Face.FindSimilarAsync(face.FaceId, FaceListId, 20));
+ }
+}
+```
+
+When migrating it to LargeFaceList, it becomes the following:
+
+```csharp
+// Create a LargeFaceList.
+const string LargeFaceListId = "mylargefacelistid_001";
+const string LargeFaceListName = "MyLargeFaceListDisplayName";
+const string ImageDir = @"/path/to/FaceList/images";
+faceClient.LargeFaceList.CreateAsync(LargeFaceListId, LargeFaceListName).Wait();
+
+// Add Faces to the LargeFaceList.
+Parallel.ForEach(
+ Directory.GetFiles(ImageDir, "*.jpg"),
+ async imagePath =>
+ {
+ using (Stream stream = File.OpenRead(imagePath))
+ {
+ await faceClient.LargeFaceList.AddFaceFromStreamAsync(LargeFaceListId, stream);
+ }
+ });
+
+// Train() is a newly added operation for LargeFaceList.
+// It must be called before FindSimilarAsync() to ensure that newly added faces are searchable.
+await TrainLargeFaceList(LargeFaceListId);
+
+// Perform FindSimilar.
+const string QueryImagePath = @"/path/to/query/image";
+var results = new List<SimilarPersistedFace[]>();
+using (Stream stream = File.OpenRead(QueryImagePath))
+{
+ var faces = faceClient.Face.DetectWithStreamAsync(stream).Result;
+ foreach (var face in faces)
+ {
+ results.Add(await faceClient.Face.FindSimilarAsync(face.FaceId, largeFaceListId: LargeFaceListId));
+ }
+}
+```
+
+As previously shown, the data management and the FindSimilar part are almost the same. The only exception is that a fresh preprocessing Train operation must complete in the LargeFaceList before FindSimilar works.
+
+## Step 3: Train suggestions
+
+Although the Train operation speeds up [FindSimilar](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f30395237)
+and [Identification](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f30395239), the training time increases, especially at large scale. The estimated training time for different scales is listed in the following table.
+
+| Scale for faces or persons | Estimated training time |
+|::|::|
+| 1,000 | 1-2 sec |
+| 10,000 | 5-10 sec |
+| 100,000 | 1-2 min |
+| 1,000,000 | 10-30 min |
+
+To better utilize the large-scale feature, we recommend the following strategies.
+
+### Step 3.1: Customize time interval
+
+As shown in `TrainLargeFaceList()`, a time interval in milliseconds controls how often the training status is polled. For a LargeFaceList with more faces, a larger interval reduces the number of calls and the cost. Customize the time interval according to the expected capacity of the LargeFaceList.
+
+The same strategy also applies to LargePersonGroup. For example, when you train a LargePersonGroup with 1 million persons, `timeIntervalInMilliseconds` might be 60,000, which is a 1-minute interval.
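+
+For example, you could reuse the `TrainLargeFaceList` helper shown earlier with a sparser polling interval for a larger list; the one-minute value here is only an illustration.
+
+```csharp
+// Poll the training status once a minute instead of once a second for a large list.
+await TrainLargeFaceList("mylargefacelistid_001", timeIntervalInMilliseconds: 60 * 1000);
+```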
+
+### Step 3.2: Small-scale buffer
+
+Persons or faces in a LargePersonGroup or a LargeFaceList are searchable only after being trained. In a dynamic scenario, new persons or faces are constantly added and must be immediately searchable, yet training might take longer than desired.
+
+To mitigate this problem, use an extra small-scale LargePersonGroup or LargeFaceList as a buffer only for the newly added entries. Because of its smaller size, this buffer trains quickly, so new entries become searchable almost immediately. Use this buffer in combination with the master LargePersonGroup or LargeFaceList by running the master training on a sparser interval, for example nightly or daily (a sketch of the merged identification call follows the workflow below).
+
+An example workflow:
+
+1. Create a master LargePersonGroup or LargeFaceList, which is the master collection. Create a buffer LargePersonGroup or LargeFaceList, which is the buffer collection. The buffer collection is only for newly added persons or faces.
+1. Add new persons or faces to both the master collection and the buffer collection.
+1. Only train the buffer collection with a short time interval to ensure that the newly added entries take effect.
+1. Call Identification or FindSimilar against both the master collection and the buffer collection. Merge the results.
+1. When the buffer collection size increases to a threshold or at a system idle time, create a new buffer collection. Trigger the Train operation on the master collection.
+1. Delete the old buffer collection after the Train operation finishes on the master collection.
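+
+A minimal sketch of the identification step in this workflow (step 4) with the client library follows; the group IDs and the detected face ID are placeholders, and the merge simply concatenates the results and keeps the highest-confidence candidate per face.
+
+```csharp
+// Identify against both the master and the buffer collection, then merge.
+var faceIds = new List<Guid> { detectedFaceId };    // placeholder: ID from a previous Detect call
+
+var masterResults = await faceClient.Face.IdentifyAsync(faceIds, largePersonGroupId: "master-group-id");
+var bufferResults = await faceClient.Face.IdentifyAsync(faceIds, largePersonGroupId: "buffer-group-id");
+
+// Keep the best candidate per face across both collections.
+var bestCandidates = masterResults.Concat(bufferResults)
+    .GroupBy(result => result.FaceId)
+    .Select(group => group.SelectMany(result => result.Candidates)
+                          .OrderByDescending(candidate => candidate.Confidence)
+                          .FirstOrDefault())
+    .ToList();
+```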
+
+### Step 3.3: Standalone training
+
+If a relatively long latency is acceptable, it isn't necessary to trigger the Train operation right after you add new data. Instead, the Train operation can be split from the main logic and triggered regularly. This strategy is suitable for dynamic scenarios with acceptable latency. It can be applied to static scenarios to further reduce the Train frequency.
+
+Suppose there's a `TrainLargePersonGroup` function similar to `TrainLargeFaceList`. A typical implementation of the standalone training on a LargePersonGroup by invoking the [`Timer`](/dotnet/api/system.timers.timer) class in `System.Timers` is:
+
+```csharp
+private static void Main()
+{
+ // Create a LargePersonGroup.
+ const string LargePersonGroupId = "mylargepersongroupid_001";
+ const string LargePersonGroupName = "MyLargePersonGroupDisplayName";
+ faceClient.LargePersonGroup.CreateAsync(LargePersonGroupId, LargePersonGroupName).Wait();
+
+ // Set up standalone training at regular intervals.
+ const int TimeIntervalForStatus = 1000 * 60; // 1-minute interval for getting training status.
+ const double TimeIntervalForTrain = 1000 * 60 * 60; // 1-hour interval for training.
+ var trainTimer = new Timer(TimeIntervalForTrain);
+ trainTimer.Elapsed += (sender, args) => TrainTimerOnElapsed(LargePersonGroupId, TimeIntervalForStatus);
+ trainTimer.AutoReset = true;
+ trainTimer.Enabled = true;
+
+ // Other operations like creating persons, adding faces, and identification, except for Train.
+ // ...
+}
+
+private static void TrainTimerOnElapsed(string largePersonGroupId, int timeIntervalInMilliseconds)
+{
+ TrainLargePersonGroup(largePersonGroupId, timeIntervalInMilliseconds).Wait();
+}
+```
+
+For more information about data management and identification-related implementations, see [Add faces](add-faces.md).
+
+## Summary
+
+In this guide, you learned how to migrate the existing PersonGroup or FaceList code, not data, to the LargePersonGroup or LargeFaceList:
+
+- LargePersonGroup and LargeFaceList work similarly to PersonGroup and FaceList, except that the Train operation is required for LargeFaceList.
+- Choose an appropriate Train strategy for dynamic data updates in large-scale data sets.
+
+## Next steps
+
+Follow a how-to guide to learn how to add faces to a PersonGroup or write a script to do the Identify operation on a PersonGroup.
+
+- [Add faces](add-faces.md)
+- [Face client library quickstart](../quickstarts-sdk/identity-client-library.md)
ai-services Use Persondirectory https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/computer-vision/how-to/use-persondirectory.md
+
+ Title: "Example: Use the PersonDirectory data structure - Face"
+
+description: Learn how to use the PersonDirectory data structure to store face and person data at greater capacity and with other new features.
+++++++ Last updated : 07/20/2022+
+ms.devlang: csharp
+++
+# Use the PersonDirectory data structure (preview)
++
+To perform face recognition operations such as Identify and Find Similar, Face API customers need to create an assorted list of **Person** objects. **PersonDirectory** is a data structure in Public Preview that contains unique IDs, optional name strings, and optional user metadata strings for each **Person** identity added to the directory. Follow this guide to learn how to do basic tasks with **PersonDirectory**.
+
+## Advantages of PersonDirectory
+
+Currently, the Face API offers the **LargePersonGroup** structure, which has similar functionality but is limited to 1 million identities. The **PersonDirectory** structure can scale up to 75 million identities.
+
+Another major difference between **PersonDirectory** and previous data structures is that you'll no longer need to make any Train API calls after adding faces to a **Person** object&mdash;the update process happens automatically.
+
+## Prerequisites
+* Azure subscription - [Create one for free](https://azure.microsoft.com/free/cognitive-services/).
+* Once you have your Azure subscription, [create a Face resource](https://portal.azure.com/#create/Microsoft.CognitiveServicesFace) in the Azure portal to get your key and endpoint. After it deploys, select **Go to resource**.
+ * You'll need the key and endpoint from the resource you create to connect your application to the Face API. You'll paste your key and endpoint into the code below.
+ * You can use the free pricing tier (F0) to try the service, and upgrade later to a paid tier for production.
+
+## Add Persons to the PersonDirectory
+
+**Persons** are the base enrollment units in the **PersonDirectory**. Once you add a **Person** to the directory, you can add up to 248 face images to that **Person**, per recognition model. Then you can identify faces against them using varying scopes.
+
+### Create the Person
+To create a **Person**, you need to call the **CreatePerson** API and provide a name or userData property value.
+
+```csharp
+using Newtonsoft.Json;
+using System;
+using System.Collections.Generic;
+using System.Linq;
+using System.Net.Http;
+using System.Net.Http.Headers;
+using System.Text;
+using System.Threading.Tasks;
+
+var client = new HttpClient();
+
+// Request headers
+client.DefaultRequestHeaders.Add("Ocp-Apim-Subscription-Key", "{subscription key}");
+
+var addPersonUri = "https://{endpoint}/face/v1.0-preview/persons";
+
+HttpResponseMessage response;
+
+// Request body
+var body = new Dictionary<string, object>();
+body.Add("name", "Example Person");
+body.Add("userData", "User defined data");
+byte[] byteData = Encoding.UTF8.GetBytes(JsonConvert.SerializeObject(body));
+
+using (var content = new ByteArrayContent(byteData))
+{
+ content.Headers.ContentType = new MediaTypeHeaderValue("application/json");
+ response = await client.PostAsync(addPersonUri, content);
+}
+```
+
+The CreatePerson call will return a generated ID for the **Person** and an operation location. The **Person** data will be processed asynchronously, so you use the operation location to fetch the results.
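+
+For example, assuming the response body contains a `personId` field, you could read the generated ID like this; the operation location comes from the `Operation-Location` response header, which is used in the next section.
+
+```csharp
+// Read the generated Person ID from the response body (assumes a "personId" field).
+var responseJson = await response.Content.ReadAsStringAsync();
+var personId = (string)Newtonsoft.Json.Linq.JObject.Parse(responseJson)["personId"];
+```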
+
+### Wait for asynchronous operation completion
+You'll need to query the async operation status using the returned operation location string to check the progress.
+
+First, you should define a data model like the following to handle the status response.
+
+```csharp
+// The [DataMember] attributes require the System.Runtime.Serialization namespace.
+using System.Runtime.Serialization;
+
+[Serializable]
+public class AsyncStatus
+{
+ [DataMember(Name = "status")]
+ public string Status { get; set; }
+
+ [DataMember(Name = "createdTime")]
+ public DateTime CreatedTime { get; set; }
+
+ [DataMember(Name = "lastActionTime")]
+ public DateTime? LastActionTime { get; set; }
+
+ [DataMember(Name = "finishedTime", EmitDefaultValue = false)]
+ public DateTime? FinishedTime { get; set; }
+
+ [DataMember(Name = "resourceLocation", EmitDefaultValue = false)]
+ public string ResourceLocation { get; set; }
+
+ [DataMember(Name = "message", EmitDefaultValue = false)]
+ public string Message { get; set; }
+}
+```
+
+Using the HttpResponseMessage from above, you can then poll the URL and wait for results.
+
+```csharp
+string operationLocation = response.Headers.GetValues("Operation-Location").FirstOrDefault();
+
+Stopwatch s = Stopwatch.StartNew();
+string status = "notstarted";
+do
+{
+    // Pause briefly between polls.
+    await Task.Delay(500);
+
+ var operationResponseMessage = await client.GetAsync(operationLocation);
+
+ var asyncOperationObj = JsonConvert.DeserializeObject<AsyncStatus>(await operationResponseMessage.Content.ReadAsStringAsync());
+ status = asyncOperationObj.Status;
+
+} while ((status == "running" || status == "notstarted") && s.Elapsed < TimeSpan.FromSeconds(30));
+```
++
+Once the status returns as "succeeded", the **Person** object is considered added to the directory.
+
+> [!NOTE]
+> The asynchronous operation from the Create **Person** call does not have to show "succeeded" status before faces can be added to it, but it does need to be completed before the **Person** can be added to a **DynamicPersonGroup** (see the Create and update a **DynamicPersonGroup** section below) or compared during an Identify call. Verify calls work immediately after faces are successfully added to the **Person**.
++
+### Add faces to Persons
+
+Once you have the **Person** ID from the Create Person call, you can add up to 248 face images to a **Person** per recognition model. Specify the recognition model (and optionally the detection model) to use in the call, as data under each recognition model will be processed separately inside the **PersonDirectory**.
+
+The currently supported recognition models are:
+* `Recognition_02`
+* `Recognition_03`
+* `Recognition_04`
+
+Additionally, if the image contains multiple faces, you'll need to specify the rectangle bounding box for the face that is the intended target. The following code adds faces to a **Person** object.
+
+```csharp
+var client = new HttpClient();
+
+// Request headers
+client.DefaultRequestHeaders.Add("Ocp-Apim-Subscription-Key", "{subscription key}");
+
+// Optional query strings for more fine grained face control
+var queryString = "userData={userDefinedData}&targetFace={left,top,width,height}&detectionModel={detectionModel}";
+var uri = "https://{endpoint}/face/v1.0-preview/persons/{personId}/recognitionModels/{recognitionModel}/persistedFaces?" + queryString;
+
+HttpResponseMessage response;
+
+// Request body
+var body = new Dictionary<string, object>();
+body.Add("url", "{image url}");
+byte[] byteData = Encoding.UTF8.GetBytes(JsonConvert.SerializeObject(body));
+
+using (var content = new ByteArrayContent(byteData))
+{
+ content.Headers.ContentType = new MediaTypeHeaderValue("application/json");
+ response = await client.PostAsync(uri, content);
+}
+```
+
+After the Add Faces call, the face data will be processed asynchronously, and you'll need to wait for the success of the operation in the same manner as before.
+
+When the operation for the face addition finishes, the data will be ready for use in Identify calls.
+
+## Create and update a **DynamicPersonGroup**
+
+**DynamicPersonGroups** are collections of references to **Person** objects within a **PersonDirectory**; they're used to create subsets of the directory. A common use is when you want to get fewer false positives and increased accuracy in an Identify operation by limiting the scope to just the **Person** objects you expect to match. Practical use cases include directories for specific building access among a larger campus or organization. The organization directory may contain 5 million individuals, but you only need to search a specific 800 people for a particular building, so you would create a **DynamicPersonGroup** containing those specific individuals.
+
+If you've used a **PersonGroup** before, take note of two major differences:
+* Each **Person** inside a **DynamicPersonGroup** is a reference to the actual **Person** in the **PersonDirectory**, meaning that it's not necessary to recreate a **Person** in each group.
+* As mentioned in previous sections, there's no need to make Train calls, as the face data is processed at the Directory level automatically.
+
+### Create the group
+
+To create a **DynamicPersonGroup**, you need to provide a group ID with alphanumeric or dash characters. This ID will function as the unique identifier for all usage purposes of the group.
+
+There are two ways to initialize a group collection. You can create an empty group initially, and populate it later:
+
+```csharp
+var client = new HttpClient();
+
+// Request headers
+client.DefaultRequestHeaders.Add("Ocp-Apim-Subscription-Key", "{subscription key}");
+
+var uri = "https://{endpoint}/face/v1.0-preview/dynamicpersongroups/{dynamicPersonGroupId}";
+
+HttpResponseMessage response;
+
+// Request body
+var body = new Dictionary<string, object>();
+body.Add("name", "Example DynamicPersonGroup");
+body.Add("userData", "User defined data");
+byte[] byteData = Encoding.UTF8.GetBytes(JsonConvert.SerializeObject(body));
+
+using (var content = new ByteArrayContent(byteData))
+{
+ content.Headers.ContentType = new MediaTypeHeaderValue("application/json");
+ response = await client.PutAsync(uri, content);
+}
+```
+
+This process is immediate and there's no need to wait for any asynchronous operations to succeed.
+
+Alternatively, you can create it with a set of **Person** IDs to contain those references from the beginning by providing the set in the _AddPersonIds_ argument:
+
+```csharp
+var client = new HttpClient();
+
+// Request headers
+client.DefaultRequestHeaders.Add("Ocp-Apim-Subscription-Key", "{subscription key}");
+
+var uri = "https://{endpoint}/face/v1.0-preview/dynamicpersongroups/{dynamicPersonGroupId}";
+
+HttpResponseMessage response;
+
+// Request body
+var body = new Dictionary<string, object>();
+body.Add("name", "Example DynamicPersonGroup");
+body.Add("userData", "User defined data");
+body.Add("addPersonIds", new List<string>{"{guid1}", "{guid2}", …});
+byte[] byteData = Encoding.UTF8.GetBytes(JsonConvert.SerializeObject(body));
+
+using (var content = new ByteArrayContent(byteData))
+{
+ content.Headers.ContentType = new MediaTypeHeaderValue("application/json");
+ response = await client.PutAsync(uri, content);
+
+ // Async operation location to query the completion status from
+    var operationLocation = response.Headers.GetValues("Operation-Location").FirstOrDefault();
+}
+```
+
+> [!NOTE]
+> As soon as the call returns, the created **DynamicPersonGroup** will be ready to use in an Identify call, with any **Person** references provided in the process. The completion status of the returned operation ID, on the other hand, indicates the update status of the person-to-group relationship.
+
+### Update the DynamicPersonGroup
+
+After the initial creation, you can add and remove **Person** references from the **DynamicPersonGroup** with the Update Dynamic Person Group API. To add **Person** objects to the group, list the **Person** IDs in the _addPersonIds_ argument. To remove **Person** objects, list them in the _removePersonIds_ argument. Both adding and removing can be performed in a single call:
+
+```csharp
+var client = new HttpClient();
+
+// Request headers
+client.DefaultRequestHeaders.Add("Ocp-Apim-Subscription-Key", "{subscription key}");
+
+var uri = "https://{endpoint}/face/v1.0-preview/dynamicpersongroups/{dynamicPersonGroupId}";
+
+HttpResponseMessage response;
+
+// Request body
+var body = new Dictionary<string, object>();
+body.Add("name", "Example Dynamic Person Group updated");
+body.Add("userData", "User defined data updated");
+body.Add("addPersonIds", new List<string>{"{guid1}", "{guid2}", …});
+body.Add("removePersonIds", new List<string>{"{guid1}", "{guid2}", …});
+byte[] byteData = Encoding.UTF8.GetBytes(JsonConvert.SerializeObject(body));
+
+using (var content = new ByteArrayContent(byteData))
+{
+ content.Headers.ContentType = new MediaTypeHeaderValue("application/json");
+ response = await client.PatchAsync(uri, content);
+
+ // Async operation location to query the completion status from
+    var operationLocation = response.Headers.GetValues("Operation-Location").FirstOrDefault();
+}
+```
+
+Once the call returns, the updates to the collection will be reflected when the group is queried. As with the creation API, the returned operation indicates the update status of person-to-group relationship for any **Person** that's involved in the update. You don't need to wait for the completion of the operation before making further Update calls to the group.
+
+## Identify faces in a PersonDirectory
+
+The most common way to use face data in a **PersonDirectory** is to compare the enrolled **Person** objects against a given face and identify the most likely candidate it belongs to. Multiple faces can be provided in the request, and each will receive its own set of comparison results in the response.
+
+In **PersonDirectory**, there are three types of scopes each face can be identified against:
+
+### Scenario 1: Identify against a DynamicPersonGroup
+
+Specifying the _dynamicPersonGroupId_ property in the request compares the face against every **Person** referenced in the group. Only a single **DynamicPersonGroup** can be identified against in a call.
+
+```csharp
+var client = new HttpClient();
+
+// Request headers
+client.DefaultRequestHeaders.Add("Ocp-Apim-Subscription-Key", "{subscription key}");
+
+// Optional query strings for more fine grained face control
+var uri = "https://{endpoint}/face/v1.0-preview/identify";
+
+HttpResponseMessage response;
+
+// Request body
+var body = new Dictionary<string, object>();
+body.Add("faceIds", new List<string>{"{guid1}", "{guid2}", …});
+body.Add("dynamicPersonGroupId", "{dynamicPersonGroupIdToIdentifyIn}");
+byte[] byteData = Encoding.UTF8.GetBytes(JsonConvert.SerializeObject(body));
+
+using (var content = new ByteArrayContent(byteData))
+{
+ content.Headers.ContentType = new MediaTypeHeaderValue("application/json");
+ response = await client.PostAsync(uri, content);
+}
+```
+
+### Scenario 2: Identify against a specific list of persons
+
+You can also specify a list of **Person** IDs in the _personIds_ property to compare the face against each of them.
+
+```csharp
+var client = new HttpClient();
+
+// Request headers
+client.DefaultRequestHeaders.Add("Ocp-Apim-Subscription-Key", "{subscription key}");
+
+var uri = "https://{endpoint}/face/v1.0-preview/identify";
+
+HttpResponseMessage response;
+
+// Request body
+var body = new Dictionary<string, object>();
+body.Add("faceIds", new List<string>{"{guid1}", "{guid2}", …});
+body.Add("personIds", new List<string>{"{guid1}", "{guid2}", …});
+byte[] byteData = Encoding.UTF8.GetBytes(JsonConvert.SerializeObject(body));
+
+using (var content = new ByteArrayContent(byteData))
+{
+ content.Headers.ContentType = new MediaTypeHeaderValue("application/json");
+ response = await client.PostAsync(uri, content);
+}
+```
+
+### Scenario 3: Identify against the entire **PersonDirectory**
+
+Providing a single asterisk in the _personIds_ property in the request compares the face against every single **Person** enrolled in the **PersonDirectory**.
+
+
+```csharp
+var client = new HttpClient();
+
+// Request headers
+client.DefaultRequestHeaders.Add("Ocp-Apim-Subscription-Key", "{subscription key}");
+
+var uri = "https://{endpoint}/face/v1.0-preview/identify";
+
+HttpResponseMessage response;
+
+// Request body
+var body = new Dictionary<string, object>();
+body.Add("faceIds", new List<string>{"{guid1}", "{guid2}", …});
+body.Add("personIds", new List<string>{"*"});
+byte[] byteData = Encoding.UTF8.GetBytes(JsonConvert.SerializeObject(body));
+
+using (var content = new ByteArrayContent(byteData))
+{
+ content.Headers.ContentType = new MediaTypeHeaderValue("application/json");
+ response = await client.PostAsync(uri, content);
+}
+```
+For all three scenarios, identification only compares the incoming face against faces whose AddPersonFace call has returned a "succeeded" response.
+
+## Verify faces against persons in the **PersonDirectory**
+
+With a face ID returned from a detection call, you can verify if the face belongs to a specific **Person** enrolled inside the **PersonDirectory**. Specify the **Person** using the _personId_ property.
+
+```csharp
+var client = new HttpClient();
+
+// Request headers
+client.DefaultRequestHeaders.Add("Ocp-Apim-Subscription-Key", "{subscription key}");
+
+var uri = "https://{endpoint}/face/v1.0-preview/verify";
+
+HttpResponseMessage response;
+
+// Request body
+var body = new Dictionary<string, object>();
+body.Add("faceId", "{guid1}");
+body.Add("personId", "{guid1}");
+byte[] byteData = Encoding.UTF8.GetBytes(JsonConvert.SerializeObject(body));
+
+using (var content = new ByteArrayContent(byteData))
+{
+ content.Headers.ContentType = new MediaTypeHeaderValue("application/json");
+ response = await client.PostAsync(uri, content);
+}
+```
+
+The response will contain a Boolean value indicating whether the service considers the new face to belong to the same **Person**, and a confidence score for the prediction.
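+
+For example, assuming the response body contains `isIdentical` and `confidence` fields, you could read the result like this:
+
+```csharp
+// Parse the Verify response; the field names "isIdentical" and "confidence" are assumed here.
+var verifyJson = Newtonsoft.Json.Linq.JObject.Parse(await response.Content.ReadAsStringAsync());
+bool isIdentical = (bool)verifyJson["isIdentical"];
+double confidence = (double)verifyJson["confidence"];
+```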
+
+## Next steps
+
+In this guide, you learned how to use the **PersonDirectory** structure to store face and person data for your Face app. Next, learn the best practices for adding your users' face data.
+
+* [Best practices for adding users](../enrollment-overview.md)
ai-services Identity Api Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/computer-vision/identity-api-reference.md
+
+ Title: API Reference - Face
+
+description: API reference provides information about the Person, LargePersonGroup/PersonGroup, LargeFaceList/FaceList, and Face Algorithms APIs.
+++++++ Last updated : 02/17/2021+++
+# Face API reference list
+
+Azure AI Face is a cloud-based service that provides algorithms for face detection and recognition. The Face APIs comprise the following categories:
+
+- Face Algorithm APIs: Cover core functions such as [Detection](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f30395236), [Find Similar](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f30395237), [Verification](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f3039523a), [Identification](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f30395239), and [Group](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f30395238).
+- [FaceList APIs](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f3039524b): Used to manage a FaceList for [Find Similar](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f30395237).
+- [LargePersonGroup Person APIs](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/599adcba3a7b9412a4d53f40): Used to manage LargePersonGroup Person Faces for [Identification](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f30395239).
+- [LargePersonGroup APIs](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/599acdee6ac60f11b48b5a9d): Used to manage a LargePersonGroup dataset for [Identification](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f30395239).
+- [LargeFaceList APIs](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/5a157b68d2de3616c086f2cc): Used to manage a LargeFaceList for [Find Similar](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f30395237).
+- [PersonGroup Person APIs](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f3039523c): Used to manage PersonGroup Person Faces for [Identification](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f30395239).
+- [PersonGroup APIs](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f30395244): Used to manage a PersonGroup dataset for [Identification](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f30395239).
+- [Snapshot APIs](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/snapshot-take): Used to manage a Snapshot for data migration across subscriptions.
ai-services Identity Encrypt Data At Rest https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/computer-vision/identity-encrypt-data-at-rest.md
+
+ Title: Face service encryption of data at rest
+
+description: Microsoft offers Microsoft-managed encryption keys, and also lets you manage your Azure AI services subscriptions with your own keys, called customer-managed keys (CMK). This article covers data encryption at rest for Face, and how to enable and manage CMK.
++++++ Last updated : 08/28/2020++
+#Customer intent: As a user of the Face service, I want to learn how encryption at rest works.
++
+# Face service encryption of data at rest
+
+The Face service automatically encrypts your data when persisted to the cloud. The Face service encryption protects your data and helps you to meet your organizational security and compliance commitments.
++
+> [!IMPORTANT]
+> Customer-managed keys are only available on the E0 pricing tier. To request the ability to use customer-managed keys, fill out and submit the [Face Service Customer-Managed Key Request Form](https://aka.ms/cogsvc-cmk). It will take approximately 3-5 business days to hear back on the status of your request. Depending on demand, you may be placed in a queue and approved as space becomes available. Once approved for using CMK with the Face service, you will need to create a new Face resource and select E0 as the Pricing Tier. Once your Face resource with the E0 pricing tier is created, you can use Azure Key Vault to set up your managed identity.
++
+## Next steps
+
+* For a full list of services that support CMK, see [Customer-Managed Keys for Azure AI services](../encryption/cognitive-services-encryption-keys-portal.md)
+* [What is Azure Key Vault](../../key-vault/general/overview.md)?
+* [Azure AI services Customer-Managed Key Request Form](https://aka.ms/cogsvc-cmk)
ai-services Intro To Spatial Analysis Public Preview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/computer-vision/intro-to-spatial-analysis-public-preview.md
+
+ Title: What is Spatial Analysis?
+
+description: This document explains the basic concepts and features of the Azure Spatial Analysis container.
+++++++ Last updated : 12/27/2022+++
+# What is Spatial Analysis?
+
+You can use Azure AI Vision Spatial Analysis to detect the presence and movements of people in video. Ingest video streams from cameras, extract insights, and generate events to be used by other systems. The service can do things like count the number of people entering a space or measure compliance with face mask and social distancing guidelines. By processing video streams from physical spaces, you're able to learn how people use them and maximize the space's value to your organization.
+
+Try out the capabilities of Spatial Analysis quickly and easily in your browser using Vision Studio.
+
+> [!div class="nextstepaction"]
+> [Try Vision Studio](https://portal.vision.cognitive.azure.com/)
+
+<!--This documentation contains the following types of articles:
+* The [quickstarts](./quickstarts-sdk/analyze-image-client-library.md) are step-by-step instructions that let you make calls to the service and get results in a short period of time.
+* The [how-to guides](./how-to/call-analyze-image.md) contain instructions for using the service in more specific or customized ways.
+* The [conceptual articles]() provide in-depth explanations of the service's functionality and features.
+* The [tutorials](./tutorials/storage-lab-tutorial.md) are longer guides that show you how to use this service as a component in broader business solutions.-->
+
+## What it does
+Spatial Analysis ingests video and then detects people in the video. After people are detected, the system tracks them as they move around over time and generates events as they interact with regions of interest. All operations give insights from a single camera's field of view.
+
+### People counting
+The PersonCount operation counts the number of people in a specific zone over time. It generates an independent count for each processed frame without attempting to track people across frames. This operation can be used to estimate the number of people in a space or to generate an alert when a person appears.
+
+![Spatial Analysis counts the number of people in the camera's field of view](https://user-images.githubusercontent.com/11428131/139924111-58637f2e-f2f6-42d8-8812-ab42fece92b4.gif)
+
+### Entrance Counting
+This feature monitors how long people stay in an area or when they enter through a doorway. This monitoring can be done using the PersonCrossingPolygon or PersonCrossingLine operations. In retail scenarios, these operations can be used to measure wait times for a checkout line or engagement at a display. Also, these operations could measure foot traffic in a lobby or a specific floor in other commercial building scenarios.
+
+![Video frames of people moving in and out of a bordered space, with rectangles drawn around them.](https://user-images.githubusercontent.com/11428131/137016574-0d180d9b-fb9a-42a9-94b7-fbc0dbc18560.gif)
+
+### Social distancing and face mask detection
+This feature analyzes how well people follow social distancing requirements in a space. The system uses the PersonDistance operation to automatically calibrate itself as people walk around in the space. Then it identifies when people violate a specific distance threshold (6 ft. or 10 ft.).
+
+![Spatial Analysis visualizes social distance violation events showing lines between people showing the distance](https://user-images.githubusercontent.com/11428131/139924062-b5e10c0f-3cf8-4ff1-bb58-478571c022d7.gif)
+
+Spatial Analysis can also be configured to detect if a person is wearing a protective face covering such as a mask. A mask classifier can be enabled for the PersonCount, PersonCrossingLine, and PersonCrossingPolygon operations by configuring the `ENABLE_FACE_MASK_CLASSIFIER` parameter.
+
+![Spatial Analysis classifies whether people have facemasks in an elevator](https://user-images.githubusercontent.com/11428131/137015842-ce524f52-3ac4-4e42-9067-25d19b395803.png)
+
+## Get started
+
+Follow the [quickstart](spatial-analysis-container.md) to set up the Spatial Analysis container and begin analyzing video.
+
+## Responsible use of Spatial Analysis technology
+
+To learn how to use Spatial Analysis technology responsibly, see the [transparency note](/legal/cognitive-services/computer-vision/transparency-note-spatial-analysis?context=%2fazure%2fcognitive-services%2fComputer-vision%2fcontext%2fcontext). Microsoft's transparency notes help you understand how our AI technology works and the choices system owners can make that influence system performance and behavior. They focus on the importance of thinking about the whole system including the technology, people, and environment.
+
+## Next steps
+
+> [!div class="nextstepaction"]
+> [Quickstart: Spatial Analysis container](spatial-analysis-container.md)
ai-services Language Support https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/computer-vision/language-support.md
+
+ Title: Language support - Azure AI Vision
+
+description: This article provides a list of natural languages supported by Azure AI Vision features; OCR, Image analysis.
+++++++ Last updated : 12/27/2022+++
+# Language support for Azure AI Vision
+
+Some capabilities of Azure AI Vision support multiple languages; any capabilities not mentioned here only support English.
+
+## Optical Character Recognition (OCR)
+
+The Azure AI Vision [Read API](./overview-ocr.md) supports many languages. The `Read` API can extract text from images and documents with mixed languages, including from the same text line, without requiring a language parameter.
+
+> [!NOTE]
+> **Language code optional**
+>
+> `Read` OCR's deep-learning-based universal models extract all multi-lingual text in your documents, including text lines with mixed languages, and do not require specifying a language code. Do not provide the language code as the parameter unless you are sure about the language and want to force the service to apply only the relevant model. Otherwise, the service may return incomplete and incorrect text.
+
+See [How to specify the `Read` model](./how-to/call-read-api.md#determine-how-to-process-the-data-optional) to use the new languages.
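+
+As a minimal sketch, submitting an image to the `Read` API without a language parameter might look like the following; the endpoint, key, image URL, and API version shown here are placeholder assumptions.
+
+```csharp
+using System.Linq;
+using System.Net.Http;
+using System.Text;
+
+var client = new HttpClient();
+client.DefaultRequestHeaders.Add("Ocp-Apim-Subscription-Key", "<your-vision-key>");
+
+// No language parameter: the universal model detects the languages automatically.
+var body = new StringContent("{\"url\":\"https://example.com/mixed-language-document.png\"}",
+    Encoding.UTF8, "application/json");
+var response = await client.PostAsync(
+    "https://<your-resource-name>.cognitiveservices.azure.com/vision/v3.2/read/analyze", body);
+
+// Poll the URL in the Operation-Location header to retrieve the extracted text.
+string operationLocation = response.Headers.GetValues("Operation-Location").First();
+```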
+
+### Handwritten text
+
+The following table lists the OCR supported languages for handwritten text by the most recent `Read` GA model.
+
+|Language| Language code (optional) | Language| Language code (optional) |
+|:--|:-:|:--|:-:|
+|English|`en`|Japanese |`ja`|
+|Chinese Simplified |`zh-Hans`|Korean|`ko`|
+|French |`fr`|Portuguese |`pt`|
+|German |`de`|Spanish |`es`|
+|Italian |`it`|
+
+### Print text
+
+The following table lists the OCR supported languages for print text by the most recent `Read` GA model.
+
+|Language| Code (optional) |Language| Code (optional) |
+|:--|:-:|:--|:-:|
+|Afrikaans|`af`|Khasi | `kha` |
+|Albanian |`sq`|K'iche' | `quc` |
+|Angika (Devanagiri) | `anp`| Korean | `ko` |
+|Arabic | `ar` | Korku | `kfq`|
+|Asturian |`ast`| Koryak | `kpy`|
+|Awadhi-Hindi (Devanagiri) | `awa`| Kosraean | `kos`|
+|Azerbaijani (Latin) | `az`| Kumyk (Cyrillic) | `kum`|
+|Bagheli | `bfy`| Kurdish (Arabic) | `ku-arab`|
+|Basque |`eu`| Kurdish (Latin) | `ku-latn`
+|Belarusian (Cyrillic) | `be`, `be-cyrl`|Kurukh (Devanagiri) | `kru`|
+|Belarusian (Latin) | `be`, `be-latn`| Kyrgyz (Cyrillic) | `ky`
+|Bhojpuri-Hindi (Devanagiri) | `bho`| Lakota | `lkt` |
+|Bislama |`bi`| Latin | `la` |
+|Bodo (Devanagiri) | `brx`| Lithuanian | `lt` |
+|Bosnian (Latin) | `bs`| Lower Sorbian | `dsb` |
+|Brajbha | `bra`|Lule Sami | `smj`|
+|Breton |`br`|Luxembourgish | `lb` |
+|Bulgarian | `bg`|Mahasu Pahari (Devanagiri) | `bfz`|
+|Bundeli | `bns`|Malay (Latin) | `ms` |
+|Buryat (Cyrillic) | `bua`|Maltese | `mt`
+|Catalan |`ca`|Malto (Devanagiri) | `kmj`
+|Cebuano |`ceb`|Manx | `gv` |
+|Chamling | `rab`|Maori | `mi`|
+|Chamorro |`ch`|Marathi | `mr`|
+|Chhattisgarhi (Devanagiri)| `hne`| Mongolian (Cyrillic) | `mn`|
+|Chinese Simplified | `zh-Hans`|Montenegrin (Cyrillic) | `cnr-cyrl`|
+|Chinese Traditional | `zh-Hant`|Montenegrin (Latin) | `cnr-latn`|
+|Cornish |`kw`|Neapolitan | `nap` |
+|Corsican |`co`|Nepali | `ne`|
+|Crimean Tatar (Latin)|`crh`|Niuean | `niu`|
+|Croatian | `hr`|Nogay | `nog`
+|Czech | `cs` |Northern Sami (Latin) | `sme`|
+|Danish | `da` |Norwegian | `no` |
+|Dari | `prs`|Occitan | `oc` |
+|Dhimal (Devanagiri) | `dhi`| Ossetic | `os`|
+|Dogri (Devanagiri) | `doi`|Pashto | `ps`|
+|Dutch | `nl` |Persian | `fa`|
+|English | `en` |Polish | `pl` |
+|Erzya (Cyrillic) | `myv`|Portuguese | `pt` |
+|Estonian |`et`|Punjabi (Arabic) | `pa`|
+|Faroese | `fo`|Ripuarian | `ksh`|
+|Fijian |`fj`|Romanian | `ro` |
+|Filipino |`fil`|Romansh | `rm` |
+|Finnish | `fi` | Russian | `ru` |
+|French | `fr` |Sadri (Devanagiri) | `sck` |
+|Friulian | `fur` | Samoan (Latin) | `sm`
+|Gagauz (Latin) | `gag`|Sanskrit (Devanagari) | `sa`|
+|Galician | `gl` |Santali(Devanagiri) | `sat` |
+|German | `de` | Scots | `sco` |
+|Gilbertese | `gil` | Scottish Gaelic | `gd` |
+|Gondi (Devanagiri) | `gon`| Serbian (Latin) | `sr`, `sr-latn`|
+|Greenlandic | `kl` | Sherpa (Devanagiri) | `xsr` |
+|Gurung (Devanagiri) | `gvr`| Sirmauri (Devanagiri) | `srx`|
+|Haitian Creole | `ht` | Skolt Sami | `sms` |
+|Halbi (Devanagiri) | `hlb`| Slovak | `sk`|
+|Hani | `hni` | Slovenian | `sl` |
+|Haryanvi | `bgc`|Somali (Arabic) | `so`|
+|Hawaiian | `haw`|Southern Sami | `sma`
+|Hindi | `hi`|Spanish | `es` |
+|Hmong Daw (Latin)| `mww` | Swahili (Latin) | `sw` |
+|Ho(Devanagiri) | `hoc`|Swedish | `sv` |
+|Hungarian | `hu` |Tajik (Cyrillic) | `tg` |
+|Icelandic | `is`| Tatar (Latin) | `tt` |
+|Inari Sami | `smn`|Tetum | `tet` |
+|Indonesian | `id` | Thangmi | `thf` |
+|Interlingua | `ia` |Tongan | `to`|
+|Inuktitut (Latin) | `iu` | Turkish | `tr` |
+|Irish | `ga` |Turkmen (Latin) | `tk`|
+|Italian | `it` |Tuvan | `tyv`|
+|Japanese | `ja` |Upper Sorbian | `hsb` |
+|Jaunsari (Devanagiri) | `Jns`|Urdu | `ur`|
+|Javanese | `jv` |Uyghur (Arabic) | `ug`|
+|Kabuverdianu | `kea` |Uzbek (Arabic) | `uz-arab`|
+|Kachin (Latin) | `kac` |Uzbek (Cyrillic) | `uz-cyrl`|
+|Kangri (Devanagiri) | `xnr`|Uzbek (Latin) | `uz` |
+|Karachay-Balkar | `krc`|Volapük | `vo` |
+|Kara-Kalpak (Cyrillic) | `kaa-cyrl`|Walser | `wae` |
+|Kara-Kalpak (Latin) | `kaa` |Welsh | `cy` |
+|Kashubian | `csb` |Western Frisian | `fy` |
+|Kazakh (Cyrillic) | `kk-cyrl`|Yucatec Maya | `yua` |
+|Kazakh (Latin) | `kk-latn`|Zhuang | `za` |
+|Khaling | `klr`|Zulu | `zu` |
+
+## Image analysis
+
+Some features of the [Analyze - Image](https://westcentralus.dev.cognitive.microsoft.com/docs/services/computer-vision-v3-1-g) API support multiple languages; see the linked API reference for a list of all the actions you can do with image analysis. Languages for tagging are only available in API version 3.2 or later.
+
+|Language | Language code | Categories | Tags | Description | Adult | Brands | Color | Faces | ImageType | Objects | Celebrities | Landmarks | Captions/Dense captions|
+|:|::|:-:|::|::|::|::|::|::|::|::|::|::|:--:|
+|Arabic |`ar`| | ✅| |||||| ||||
+|Azeri (Azerbaijani) |`az`| | ✅| |||||| ||||
+|Bulgarian |`bg`| | ✅| |||||| ||||
+|Bosnian Latin |`bs`| | ✅| |||||| ||||
+|Catalan |`ca`| | ✅| |||||| ||||
+|Czech |`cs`| | ✅| |||||| ||||
+|Welsh |`cy`| | ✅| |||||| ||||
+|Danish |`da`| | ✅| |||||| ||||
+|German |`de`| | ✅| |||||| ||||
+|Greek |`el`| | ✅| |||||| ||||
+|English |`en`|✅ | ✅| ✅|✅|✅|✅|✅|✅|✅|✅|✅|✅|
+|Spanish |`es`|✅ | ✅| ✅|||||| |✅|✅||
+|Estonian |`et`| | ✅| |||||| ||||
+|Basque |`eu`| | ✅| |||||| ||||
+|Finnish |`fi`| | ✅| |||||| ||||
+|French |`fr`| | ✅| |||||| ||||
+|Irish |`ga`| | ✅| |||||| ||||
+|Galician |`gl`| | ✅| |||||| ||||
+|Hebrew |`he`| | ✅| |||||| ||||
+|Hindi |`hi`| | ✅| |||||| ||||
+|Croatian |`hr`| | ✅| |||||| ||||
+|Hungarian |`hu`| | ✅| |||||| ||||
+|Indonesian |`id`| | ✅| |||||| ||||
+|Italian |`it`| | ✅| |||||| ||||
+|Japanese |`ja`|✅ | ✅| ✅|||||| |✅|✅||
+|Kazakh |`kk`| | ✅| |||||| ||||
+|Korean |`ko`| | ✅| |||||| ||||
+|Lithuanian |`lt`| | ✅| |||||| ||||
+|Latvian |`lv`| | ✅| |||||| ||||
+|Macedonian |`mk`| | ✅| |||||| ||||
+|Malay Malaysia |`ms`| | ✅| |||||| ||||
+|Norwegian (Bokmal) |`nb`| | ✅| |||||| ||||
+|Dutch |`nl`| | ✅| |||||| ||||
+|Polish |`pl`| | ✅| |||||| ||||
+|Dari |`prs`| | ✅| |||||| ||||
+| Portuguese-Brazil|`pt-BR`| | ✅| |||||| ||||
+| Portuguese-Portugal |`pt`|✅ | ✅| ✅|||||| |✅|✅||
+| Portuguese-Portugal |`pt-PT`| | ✅| |||||| ||||
+|Romanian |`ro`| | ✅| |||||| ||||
+|Russian |`ru`| | ✅| |||||| ||||
+|Slovak |`sk`| | ✅| |||||| ||||
+|Slovenian |`sl`| | ✅| |||||| ||||
+|Serbian - Cyrillic RS |`sr-Cryl`| | ✅| |||||| ||||
+|Serbian - Latin RS |`sr-Latn`| | ✅| |||||| ||||
+|Swedish |`sv`| | ✅| |||||| ||||
+|Thai |`th`| | ✅| |||||| ||||
+|Turkish |`tr`| | ✅| |||||| ||||
+|Ukrainian |`uk`| | ✅| |||||| ||||
+|Vietnamese |`vi`| | ✅| |||||| ||||
+|Chinese Simplified |`zh`|✅ | ✅| ✅|||||| |✅|✅||
+|Chinese Simplified |`zh-Hans`| | ✅| |||||| ||||
+|Chinese Traditional |`zh-Hant`| | ✅| |||||| ||||
ai-services Overview Identity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/computer-vision/overview-identity.md
+
+ Title: What is the Azure AI Face service?
+
+description: The Azure AI Face service provides AI algorithms that you use to detect, recognize, and analyze human faces in images.
++++++ Last updated : 07/04/2023++
+keywords: facial recognition, facial recognition software, facial analysis, face matching, face recognition app, face search by image, facial recognition search
+#Customer intent: As the developer of an app that deals with images of humans, I want to learn what the Face service does so I can determine if I should use its features.
++
+# What is the Azure AI Face service?
+
+> [!WARNING]
+> On June 11, 2020, Microsoft announced that it will not sell facial recognition technology to police departments in the United States until strong regulation, grounded in human rights, has been enacted. As such, customers may not use facial recognition features or functionality included in Azure Services, such as Face or Video Indexer, if a customer is, or is allowing use of such services by or for, a police department in the United States. When you create a new Face resource, you must acknowledge and agree in the Azure Portal that you will not use the service by or for a police department in the United States and that you have reviewed the Responsible AI documentation and will use this service in accordance with it.
++
+The Azure AI Face service provides AI algorithms that detect, recognize, and analyze human faces in images. Facial recognition software is important in many different scenarios, such as identity verification, touchless access control, and face blurring for privacy.
+
+You can use the Face service through a client library SDK or by calling the REST API directly. Follow the quickstart to get started.
+
+> [!div class="nextstepaction"]
+> [Quickstart](quickstarts-sdk/identity-client-library.md)
+
+Or, you can try out the capabilities of Face service quickly and easily in your browser using Vision Studio.
+
+> [!div class="nextstepaction"]
+> [Try Vision Studio](https://portal.vision.cognitive.azure.com/)
+
+This documentation contains the following types of articles:
+* The [quickstarts](./quickstarts-sdk/identity-client-library.md) are step-by-step instructions that let you make calls to the service and get results in a short period of time.
+* The [how-to guides](./how-to/identity-detect-faces.md) contain instructions for using the service in more specific or customized ways.
+* The [conceptual articles](./concept-face-detection.md) provide in-depth explanations of the service's functionality and features.
+* The [tutorials](./enrollment-overview.md) are longer guides that show you how to use this service as a component in broader business solutions.
+
+For a more structured approach, follow a Training module for Face.
+* [Detect and analyze faces with the Face service](/training/modules/detect-analyze-faces/)
+
+## Example use cases
+
+**Identity verification**: Verify someone's identity against a government-issued ID card like a passport or driver's license, or against another enrollment image. You can use this verification to grant access to digital or physical services or to recover an account. Specific access scenarios include opening a new account, verifying a worker, or administering an online assessment. Identity verification can be done once when a person is onboarded, and repeated when they access a digital or physical service.
+
+**Touchless access control**: Compared to today's methods like cards or tickets, opt-in face identification enables an enhanced access control experience while reducing the hygiene and security risks from card sharing, loss, or theft. Facial recognition assists the check-in process, with a human in the loop, at airports, stadiums, theme parks, buildings, office reception kiosks, hospitals, gyms, clubs, and schools.
+
+**Face redaction**: Redact or blur detected faces of people recorded in a video to protect their privacy.
++
+## Face detection and analysis
+
+Face detection is required as a first step in all the other scenarios. The Detect API detects human faces in an image and returns the rectangle coordinates of their locations. It also returns a unique ID that represents the stored face data; this ID is used in later operations to identify or verify faces.
+
+Optionally, face detection can extract a set of face-related attributes, such as head pose, age, emotion, facial hair, and glasses. These attributes are general predictions, not actual classifications. Some attributes are useful to ensure that your application is getting high-quality face data when users add themselves to a Face service. For example, your application could advise users who are wearing sunglasses to take them off.
++
+For more information on face detection and analysis, see the [Face detection](concept-face-detection.md) concepts article. Also see the [Detect API](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f30395236) reference documentation.
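+
+To make the detection step concrete, the following is a minimal sketch of the Detect REST call made with Python and the `requests` library. The endpoint, key, image URL, and the attribute list shown are placeholders you replace with your own values.
+
+```python
+import requests
+
+ENDPOINT = "https://<your-face-resource>.cognitiveservices.azure.com"  # placeholder
+KEY = "<your-key>"                                                      # placeholder
+
+def detect_faces(image_url: str) -> list:
+    """Detect faces in an image URL and return the list of detected faces."""
+    response = requests.post(
+        f"{ENDPOINT}/face/v1.0/detect",
+        params={
+            "returnFaceId": "true",                      # needed for Identify/Verify later
+            "returnFaceAttributes": "headPose,glasses",  # optional example attributes
+            "detectionModel": "detection_01",
+            "recognitionModel": "recognition_04",
+        },
+        headers={"Ocp-Apim-Subscription-Key": KEY, "Content-Type": "application/json"},
+        json={"url": image_url},
+    )
+    response.raise_for_status()
+    return response.json()  # each entry has a faceId and a faceRectangle
+
+for face in detect_faces("https://example.com/photo.jpg"):
+    print(face["faceId"], face["faceRectangle"])
+```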
++
+## Identity verification
+
+Modern enterprises and apps can use the Face identification and Face verification operations to verify that a user is who they claim to be.
+
+### Identification
+
+Face identification can address "one-to-many" matching of one face in an image to a set of faces in a secure repository. Match candidates are returned based on how closely their face data matches the query face. This scenario is used in granting building or airport access to a certain group of people or verifying the user of a device.
+
+The following image shows an example of a database named `"myfriends"`. Each group can contain up to 1 million different person objects. Each person object can have up to 248 faces registered.
+
+![A grid with three columns for different people, each with three rows of face images](./media/person.group.clare.jpg)
+
+After you create and train a group, you can do identification against the group with a new detected face. If the face is identified as a person in the group, the person object is returned.
+
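+As an illustration of the identification flow, here is a minimal sketch that submits a previously detected `faceId` to the Identify REST operation. The endpoint and key are placeholders, and the group name reuses the `myfriends` example above.
+
+```python
+import requests
+
+ENDPOINT = "https://<your-face-resource>.cognitiveservices.azure.com"  # placeholder
+KEY = "<your-key>"                                                      # placeholder
+
+def identify_face(face_id: str, person_group_id: str = "myfriends") -> list:
+    """Match a detected face against the persons in a trained person group."""
+    response = requests.post(
+        f"{ENDPOINT}/face/v1.0/identify",
+        headers={"Ocp-Apim-Subscription-Key": KEY, "Content-Type": "application/json"},
+        json={
+            "faceIds": [face_id],               # faceId returned by a prior Detect call
+            "personGroupId": person_group_id,
+            "maxNumOfCandidatesReturned": 1,
+            "confidenceThreshold": 0.5,
+        },
+    )
+    response.raise_for_status()
+    return response.json()  # candidates with personId and confidence for each query face
+```
+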
+Try out the capabilities of face identification quickly and easily using Vision Studio.
+> [!div class="nextstepaction"]
+> [Try Vision Studio](https://portal.vision.cognitive.azure.com/)
+
+### Verification
+
+The verification operation answers the question, "Do these two faces belong to the same person?".
+
+Verification is also a "one-to-one" matching of a face in an image to a single face from a secure repository or photo to verify that they're the same individual. Verification can be used for Identity Verification, such as a banking app that enables users to open a credit account remotely by taking a new picture of themselves and sending it with a picture of their photo ID.
+
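+A minimal sketch of the Verify call, assuming two `faceId` values returned by earlier Detect calls and placeholder endpoint and key values:
+
+```python
+import requests
+
+ENDPOINT = "https://<your-face-resource>.cognitiveservices.azure.com"  # placeholder
+KEY = "<your-key>"                                                      # placeholder
+
+def verify_faces(face_id_1: str, face_id_2: str) -> dict:
+    """Ask whether two detected faces belong to the same person."""
+    response = requests.post(
+        f"{ENDPOINT}/face/v1.0/verify",
+        headers={"Ocp-Apim-Subscription-Key": KEY, "Content-Type": "application/json"},
+        json={"faceId1": face_id_1, "faceId2": face_id_2},
+    )
+    response.raise_for_status()
+    return response.json()  # {"isIdentical": true|false, "confidence": 0.0-1.0}
+```
+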
+For more information about identity verification, see the [Facial recognition](concept-face-recognition.md) concepts guide or the [Identify](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f30395239) and [Verify](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f3039523a) API reference documentation.
+
+Try out the capabilities of face verification quickly and easily using Vision Studio.
+> [!div class="nextstepaction"]
+> [Try Vision Studio](https://portal.vision.cognitive.azure.com/)
+
+## Find similar faces
+
+The Find Similar operation does face matching between a target face and a set of candidate faces, finding a smaller set of faces that look similar to the target face. This is useful for doing a face search by image.
+
+The service supports two working modes, **matchPerson** and **matchFace**. The **matchPerson** mode returns similar faces after filtering for the same person by using the [Verify API](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f3039523a). The **matchFace** mode ignores the same-person filter. It returns a list of similar candidate faces that may or may not belong to the same person.
+
+The following example shows the target face:
+
+![A woman smiling](./media/FaceFindSimilar.QueryFace.jpg)
+
+And these images are the candidate faces:
+
+![Five images of people smiling. Images A and B show the same person.](./media/FaceFindSimilar.Candidates.jpg)
+
+To find four similar faces, the **matchPerson** mode returns A and B, which show the same person as the target face. The **matchFace** mode returns A, B, C, and D, which is exactly four candidates, even if some aren't the same person as the target or have low similarity. For more information, see the [Facial recognition](concept-face-recognition.md) concepts guide or the [Find Similar API](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f30395237) reference documentation.
+
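+To show how the two modes are selected, here is a minimal sketch of the Find Similar REST call. The endpoint and key are placeholders, and the face IDs come from earlier Detect calls.
+
+```python
+import requests
+
+ENDPOINT = "https://<your-face-resource>.cognitiveservices.azure.com"  # placeholder
+KEY = "<your-key>"                                                      # placeholder
+
+def find_similar(target_face_id: str, candidate_face_ids: list, mode: str = "matchFace") -> list:
+    """Return candidate faces that look similar to the target face."""
+    response = requests.post(
+        f"{ENDPOINT}/face/v1.0/findsimilars",
+        headers={"Ocp-Apim-Subscription-Key": KEY, "Content-Type": "application/json"},
+        json={
+            "faceId": target_face_id,
+            "faceIds": candidate_face_ids,
+            "maxNumOfCandidatesReturned": 4,
+            "mode": mode,  # "matchPerson" filters to the same person; "matchFace" doesn't
+        },
+    )
+    response.raise_for_status()
+    return response.json()  # candidates with faceId and confidence
+```
+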
+## Group faces
+
+The Group operation divides a set of unknown faces into several smaller groups based on similarity. Each group is a disjoint proper subset of the original set of faces. It also returns a single "messyGroup" array that contains the face IDs for which no similarities were found.
+
+All of the faces in a returned group are likely to belong to the same person, but there can be several different groups for a single person. Those groups are differentiated by another factor, such as expression. For more information, see the [Facial recognition](concept-face-recognition.md) concepts guide or the [Group API](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f30395238) reference documentation.
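+
+A minimal sketch of the Group call, assuming a list of `faceId` values from earlier Detect calls and placeholder endpoint and key values:
+
+```python
+import requests
+
+ENDPOINT = "https://<your-face-resource>.cognitiveservices.azure.com"  # placeholder
+KEY = "<your-key>"                                                      # placeholder
+
+def group_faces(face_ids: list) -> dict:
+    """Divide a set of detected faces into groups of similar-looking faces."""
+    response = requests.post(
+        f"{ENDPOINT}/face/v1.0/group",
+        headers={"Ocp-Apim-Subscription-Key": KEY, "Content-Type": "application/json"},
+        json={"faceIds": face_ids},
+    )
+    response.raise_for_status()
+    return response.json()  # {"groups": [[...], ...], "messyGroup": [...]}
+```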
+
+## Data privacy and security
+
+As with all of the Azure AI services resources, developers who use the Face service must be aware of Microsoft's policies on customer data. For more information, see the [Azure AI services page](https://www.microsoft.com/trustcenter/cloudservices/cognitiveservices) on the Microsoft Trust Center.
+
+## Next steps
+
+Follow a quickstart to code the basic components of a face recognition app in the language of your choice.
+
+- [Face quickstart](quickstarts-sdk/identity-client-library.md).
ai-services Overview Image Analysis https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/computer-vision/overview-image-analysis.md
+
+ Title: What is Image Analysis?
+
+description: The Image Analysis service uses pretrained AI models to extract many different visual features from images.
+++++++ Last updated : 07/04/2023+
+keywords: Azure AI Vision, Azure AI Vision applications, Azure AI Vision service
++
+# What is Image Analysis?
+
+The Azure AI Vision Image Analysis service can extract a wide variety of visual features from your images. For example, it can determine whether an image contains adult content, find specific brands or objects, or find human faces.
+
+The latest version of Image Analysis, 4.0, which is now in public preview, has new features like synchronous OCR and people detection. We recommend you use this version going forward.
+
+You can use Image Analysis through a client library SDK or by calling the [REST API](https://aka.ms/vision-4-0-ref) directly. Follow the [quickstart](quickstarts-sdk/image-analysis-client-library-40.md) to get started.
+
+> [!div class="nextstepaction"]
+> [Quickstart](quickstarts-sdk/image-analysis-client-library-40.md)
+
+Or, you can try out the capabilities of Image Analysis quickly and easily in your browser using Vision Studio.
+
+> [!div class="nextstepaction"]
+> [Try Vision Studio](https://portal.vision.cognitive.azure.com/)
+
+This documentation contains the following types of articles:
+* The [quickstarts](./quickstarts-sdk/image-analysis-client-library.md) are step-by-step instructions that let you make calls to the service and get results in a short period of time.
+* The [how-to guides](./how-to/call-analyze-image.md) contain instructions for using the service in more specific or customized ways.
+* The [conceptual articles](concept-tagging-images.md) provide in-depth explanations of the service's functionality and features.
+* The [tutorials](./tutorials/storage-lab-tutorial.md) are longer guides that show you how to use this service as a component in broader business solutions.
+
+For a more structured approach, follow a Training module for Image Analysis.
+* [Analyze images with the Azure AI Vision service](/training/modules/analyze-images-computer-vision/)
++
+## Analyze Image
+
+You can analyze images to provide insights about their visual features and characteristics. All of the features in this list are provided by the Analyze Image API. Follow a [quickstart](./quickstarts-sdk/image-analysis-client-library-40.md) to get started.
+
+| Name | Description | Concept page |
+||||
+|**Model customization** (v4.0 preview only)|You can create and train custom models to do image classification or object detection. Bring your own images, label them with custom tags, and Image Analysis trains a model customized for your use case.|[Model customization](./concept-model-customization.md)|
+|**Read text from images** (v4.0 preview only)| Version 4.0 preview of Image Analysis offers the ability to extract readable text from images. Compared with the async Computer Vision 3.2 Read API, the new version offers the familiar Read OCR engine in a unified performance-enhanced synchronous API that makes it easy to get OCR along with other insights in a single API call. |[OCR for images](concept-ocr.md)|
+|**Detect people in images** (v4.0 preview only)|Version 4.0 preview of Image Analysis offers the ability to detect people appearing in images. The bounding box coordinates of each detected person are returned, along with a confidence score. |[People detection](concept-people-detection.md)|
+|**Generate image captions** | Generate a caption of an image in human-readable language, using complete sentences. Computer Vision's algorithms generate captions based on the objects identified in the image. <br/><br/>The version 4.0 image captioning model is a more advanced implementation and works with a wider range of input images. It's only available in the following geographic regions: East US, France Central, Korea Central, North Europe, Southeast Asia, West Europe, West US. <br/><br/>Version 4.0 also lets you use dense captioning, which generates detailed captions for individual objects that are found in the image. The API returns the bounding box coordinates (in pixels) of each object found in the image, plus a caption. You can use this functionality to generate descriptions of separate parts of an image.<br/><br/>:::image type="content" source="Images/description.png" alt-text="Photo of cows with a simple description on the right.":::| [Generate image captions (v3.2)](concept-describing-images.md)<br/>[(v4.0 preview)](concept-describe-images-40.md)|
+|**Detect objects** |Object detection is similar to tagging, but the API returns the bounding box coordinates for each tag applied. For example, if an image contains a dog, cat and person, the Detect operation lists those objects together with their coordinates in the image. You can use this functionality to process further relationships between the objects in an image. It also lets you know when there are multiple instances of the same tag in an image. <br/><br/>:::image type="content" source="Images/detect-objects.png" alt-text="Photo of an office with a rectangle drawn around a laptop.":::| [Detect objects (v3.2)](concept-object-detection.md)<br/>[(v4.0 preview)](concept-object-detection-40.md)
+|**Tag visual features**| Identify and tag visual features in an image, from a set of thousands of recognizable objects, living things, scenery, and actions. When the tags are ambiguous or not common knowledge, the API response provides hints to clarify the context of the tag. Tagging isn't limited to the main subject, such as a person in the foreground, but also includes the setting (indoor or outdoor), furniture, tools, plants, animals, accessories, gadgets, and so on.<br/><br/>:::image type="content" source="Images/tagging.png" alt-text="Photo of a skateboarder with tags listed on the right.":::|[Tag visual features (v3.2)](concept-tagging-images.md)<br/>[(v4.0 preview)](concept-tag-images-40.md)|
+|**Get the area of interest / smart crop** |Analyze the contents of an image to return the coordinates of the *area of interest* that matches a specified aspect ratio. Computer Vision returns the bounding box coordinates of the region, so the calling application can modify the original image as desired. <br/><br/>The version 4.0 smart cropping model is a more advanced implementation and works with a wider range of input images. It's only available in the following geographic regions: East US, France Central, Korea Central, North Europe, Southeast Asia, West Europe, West US. | [Generate a thumbnail (v3.2)](concept-generating-thumbnails.md)<br/>[(v4.0 preview)](concept-generate-thumbnails-40.md)|
+|**Detect brands** (v3.2 only) | Identify commercial brands in images or videos from a database of thousands of global logos. You can use this feature, for example, to discover which brands are most popular on social media or most prevalent in media product placement. |[Detect brands](concept-brand-detection.md)|
+|**Categorize an image** (v3.2 only)|Identify and categorize an entire image, using a [category taxonomy](Category-Taxonomy.md) with parent/child hereditary hierarchies. Categories can be used alone, or with our new tagging models.<br/><br/>Currently, English is the only supported language for tagging and categorizing images. |[Categorize an image](concept-categorizing-images.md)|
+| **Detect faces** (v3.2 only) |Detect faces in an image and provide information about each detected face. Azure AI Vision returns the coordinates, rectangle, gender, and age for each detected face.<br/><br/>You can also use the dedicated [Face API](./overview-identity.md) for these purposes. It provides more detailed analysis, such as facial identification and pose detection.|[Detect faces](concept-detecting-faces.md)|
+|**Detect image types** (v3.2 only)|Detect characteristics about an image, such as whether an image is a line drawing or the likelihood of whether an image is clip art.| [Detect image types](concept-detecting-image-types.md)|
+| **Detect domain-specific content** (v3.2 only)|Use domain models to detect and identify domain-specific content in an image, such as celebrities and landmarks. For example, if an image contains people, Azure AI Vision can use a domain model for celebrities to determine if the people detected in the image are known celebrities.| [Detect domain-specific content](concept-detecting-domain-content.md)|
+|**Detect the color scheme** (v3.2 only) |Analyze color usage within an image. Azure AI Vision can determine whether an image is black & white or color and, for color images, identify the dominant and accent colors.| [Detect the color scheme](concept-detecting-color-schemes.md)|
+|**Moderate content in images** (v3.2 only) |You can use Azure AI Vision to detect adult content in an image and return confidence scores for different classifications. The threshold for flagging content can be set on a sliding scale to accommodate your preferences.|[Detect adult content](concept-detecting-adult-content.md)|
++
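+As a quick illustration of how the features in the preceding table are requested, the following is a minimal sketch of a v3.2 Analyze Image REST call made with Python and `requests`. The endpoint, key, image URL, and feature list are placeholders you adjust for your scenario.
+
+```python
+import requests
+
+ENDPOINT = "https://<your-vision-resource>.cognitiveservices.azure.com"  # placeholder
+KEY = "<your-key>"                                                       # placeholder
+
+def analyze_image(image_url: str, features: str = "Tags,Description,Objects") -> dict:
+    """Analyze an image URL for the requested visual features."""
+    response = requests.post(
+        f"{ENDPOINT}/vision/v3.2/analyze",
+        params={"visualFeatures": features},
+        headers={"Ocp-Apim-Subscription-Key": KEY, "Content-Type": "application/json"},
+        json={"url": image_url},
+    )
+    response.raise_for_status()
+    return response.json()
+
+result = analyze_image("https://example.com/photo.jpg")
+print([tag["name"] for tag in result.get("tags", [])])
+```
+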
+## Product Recognition (v4.0 preview only)
+
+The Product Recognition APIs let you analyze photos of shelves in a retail store. You can detect the presence or absence of products and get their bounding box coordinates. Use it in combination with model customization to train a model to identify your specific products. You can also compare Product Recognition results to your store's planogram document.
+
+[Product Recognition](./concept-shelf-analysis.md)
+
+## Image Retrieval (v4.0 preview only)
+
+The Image Retrieval APIs enable the _vectorization_ of images and text queries. They convert images to coordinates in a multi-dimensional vector space. Then, incoming text queries can also be converted to vectors, and images can be matched to the text based on semantic closeness. This allows the user to search a set of images using text, without the need to use image tags or other metadata. Semantic closeness often produces better results in search.
+
+These APIs are only available in the following geographic regions: East US, France Central, Korea Central, North Europe, Southeast Asia, West Europe, West US.
+
+[Image Retrieval](./concept-image-retrieval.md)
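+
+The service returns the embedding vectors; ranking images against a text query then reduces to a vector-similarity comparison such as cosine similarity. The following is a minimal, self-contained sketch with made-up placeholder vectors rather than real service output:
+
+```python
+import math
+
+def cosine_similarity(a: list, b: list) -> float:
+    """Semantic closeness of two embedding vectors; higher means more similar."""
+    dot = sum(x * y for x, y in zip(a, b))
+    return dot / (math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b)))
+
+# Placeholder vectors standing in for vectorized images and a vectorized text query.
+image_vectors = {"beach.jpg": [0.1, 0.2, 0.6], "office.jpg": [0.9, 0.1, 0.0]}
+query_vector = [0.1, 0.3, 0.5]
+
+ranked = sorted(image_vectors, key=lambda name: cosine_similarity(query_vector, image_vectors[name]), reverse=True)
+print(ranked)  # images ordered from most to least similar to the query
+```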
+
+## Background removal (v4.0 preview only)
+
+Image Analysis 4.0 (preview) offers the ability to remove the background of an image. This feature can either output an image of the detected foreground object with a transparent background, or a grayscale alpha matte image showing the opacity of the detected foreground object.
+
+[Background removal](./concept-background-removal.md)
+
+|Original image |With background removed |Alpha matte |
+|::|::|::|
+| :::image type="content" source="media/background-removal/person-5.png" alt-text="Photo of a group of people using a tablet."::: | :::image type="content" source="media/background-removal/person-5-result.png" alt-text="Photo of a group of people using a tablet; background is transparent."::: | :::image type="content" source="media/background-removal/person-5-matte.png" alt-text="Alpha matte of a group of people."::: |
+
+## Image requirements
+
+#### [Version 4.0](#tab/4-0)
+
+Image Analysis works on images that meet the following requirements:
+
+- The image must be presented in JPEG, PNG, GIF, BMP, WEBP, ICO, TIFF, or MPO format
+- The file size of the image must be less than 20 megabytes (MB)
+- The dimensions of the image must be greater than 50 x 50 pixels and less than 16,000 x 16,000 pixels
++
+#### [Version 3.2](#tab/3-2)
+
+Image Analysis works on images that meet the following requirements:
+
+- The image must be presented in JPEG, PNG, GIF, or BMP format
+- The file size of the image must be less than 4 megabytes (MB)
+- The dimensions of the image must be greater than 50 x 50 pixels and less than 16,000 x 16,000 pixels
+++
+## Data privacy and security
+
+As with all of the Azure AI services, developers using the Azure AI Vision service should be aware of Microsoft's policies on customer data. See the [Azure AI services page](https://www.microsoft.com/trustcenter/cloudservices/cognitiveservices) on the Microsoft Trust Center to learn more.
+
+## Next steps
+
+Get started with Image Analysis by following the quickstart guide in your preferred development language:
+
+- [Quickstart (v4.0 preview): Vision REST API or client libraries](./quickstarts-sdk/image-analysis-client-library-40.md)
+- [Quickstart (v3.2): Vision REST API or client libraries](./quickstarts-sdk/image-analysis-client-library.md)
ai-services Overview Ocr https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/computer-vision/overview-ocr.md
+
+ Title: OCR - Optical Character Recognition
+
+description: Learn how the optical character recognition (OCR) services extract print and handwritten text from images and documents in global languages.
+++++++ Last updated : 07/04/2023++++
+# OCR - Optical Character Recognition
+
+OCR or Optical Character Recognition is also referred to as text recognition or text extraction. Machine-learning-based OCR techniques allow you to extract printed or handwritten text from images such as posters, street signs, and product labels, as well as from documents like articles, reports, forms, and invoices. The text is typically extracted as words, text lines, and paragraphs or text blocks, enabling access to a digital version of the scanned text. This eliminates or significantly reduces the need for manual data entry.
+
+## How is OCR related to Intelligent Document Processing (IDP)?
+
+Intelligent Document Processing (IDP) uses OCR as its foundational technology to additionally extract structure, relationships, key-values, entities, and other document-centric insights with an advanced machine-learning based AI service like [Document Intelligence](../../ai-services/document-intelligence/overview.md). Document Intelligence includes a document-optimized version of **Read** as its OCR engine while delegating to other models for higher-end insights. If you are extracting text from scanned and digital documents, use [Document Intelligence Read OCR](../../ai-services/document-intelligence/concept-read.md).
+
+## OCR engine
+
+Microsoft's **Read** OCR engine is composed of multiple advanced machine-learning based models supporting [global languages](./language-support.md). It can extract printed and handwritten text including mixed languages and writing styles. **Read** is available as a cloud service and as an on-premises container for deployment flexibility. With the latest preview, it's also available as a synchronous API for single, non-document, image-only scenarios with performance enhancements that make it easier to implement OCR-assisted user experiences.
+
+> [!WARNING]
+> The Azure AI Vision legacy [OCR API in v3.2](https://westus.dev.cognitive.microsoft.com/docs/services/computer-vision-v3-2/operations/56f91f2e778daf14a499f20d) and [RecognizeText API in v2.1](https://westus.dev.cognitive.microsoft.com/docs/services/5cd27ec07268f6c679a3e641/operations/587f2c6a1540550560080311) operations are not recommended for use.
++
+## How to use OCR
+
+Try out OCR by using Vision Studio. Then follow one of the links to the Read edition that best meets your requirements.
+
+> [!div class="nextstepaction"]
+> [Try Vision Studio](https://portal.vision.cognitive.azure.com/)
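+
+To show the basic submit-then-poll pattern of the cloud Read 3.2 API, here is a minimal Python sketch using `requests`. The endpoint, key, and image URL are placeholders, and production code would add error handling and a timeout.
+
+```python
+import time
+import requests
+
+ENDPOINT = "https://<your-vision-resource>.cognitiveservices.azure.com"  # placeholder
+KEY = "<your-key>"                                                       # placeholder
+
+def read_text(image_url: str) -> list:
+    """Submit an image to the async Read 3.2 operation and poll for the extracted lines."""
+    submit = requests.post(
+        f"{ENDPOINT}/vision/v3.2/read/analyze",
+        headers={"Ocp-Apim-Subscription-Key": KEY, "Content-Type": "application/json"},
+        json={"url": image_url},
+    )
+    submit.raise_for_status()
+    result_url = submit.headers["Operation-Location"]  # where to poll for results
+
+    while True:
+        result = requests.get(result_url, headers={"Ocp-Apim-Subscription-Key": KEY}).json()
+        if result["status"] in ("succeeded", "failed"):
+            break
+        time.sleep(1)
+
+    lines = []
+    for page in result.get("analyzeResult", {}).get("readResults", []):
+        lines.extend(line["text"] for line in page["lines"])
+    return lines
+
+print(read_text("https://example.com/printed-note.jpg"))
+```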
++
+## OCR supported languages
+
+Both **Read** versions available today in Azure AI Vision support several languages for printed and handwritten text. OCR for printed text includes support for English, French, German, Italian, Portuguese, Spanish, Chinese, Japanese, Korean, Russian, Arabic, Hindi, and other international languages that use Latin, Cyrillic, Arabic, and Devanagari scripts. OCR for handwritten text includes support for English, Chinese Simplified, French, German, Italian, Japanese, Korean, Portuguese, and Spanish languages.
+
+Refer to the full list of [OCR-supported languages](./language-support.md#optical-character-recognition-ocr).
+
+## OCR common features
+
+The Read OCR model is available in Azure AI Vision and Document Intelligence with common baseline capabilities while optimizing for respective scenarios. The following list summarizes the common features:
+
+* Printed and handwritten text extraction in supported languages
+* Pages, text lines and words with location and confidence scores
+* Support for mixed languages, mixed mode (print and handwritten)
+* Available as a Distroless Docker container for on-premises deployment
+
+## Use the OCR cloud APIs or deploy on-premises
+
+The cloud APIs are the preferred option for most customers because of their ease of integration and fast productivity out of the box. Azure and the Azure AI Vision service handle scale, performance, data security, and compliance needs while you focus on meeting your customers' needs.
+
+For on-premises deployment, the [Read Docker container](./computer-vision-how-to-install-containers.md) enables you to deploy the Azure AI Vision v3.2 generally available OCR capabilities in your own local environment. Containers are great for specific security and data governance requirements.
+
+## OCR data privacy and security
+
+As with all of the Azure AI services, developers using the Azure AI Vision service should be aware of Microsoft's policies on customer data. See the [Azure AI services page](https://www.microsoft.com/trustcenter/cloudservices/cognitiveservices) on the Microsoft Trust Center to learn more.
+
+## Next steps
+
+- OCR for general (non-document) images: try the [Azure AI Vision 4.0 preview Image Analysis REST API quickstart](./concept-ocr.md).
+- OCR for PDF, Office and HTML documents and document images: start with [Document Intelligence Read](../../ai-services/document-intelligence/concept-read.md).
+- Looking for the previous GA version? Refer to the [Azure AI Vision 3.2 GA SDK or REST API quickstarts](./quickstarts-sdk/client-library.md).
ai-services Overview Vision Studio https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/computer-vision/overview-vision-studio.md
+
+ Title: What is Vision Studio?
+
+description: Learn how to set up and use Vision Studio to test features of Azure AI Vision on the web.
++++++ Last updated : 12/27/2022++++
+# What is Vision Studio?
+
+[Vision Studio](https://portal.vision.cognitive.azure.com/) is a set of UI-based tools that lets you explore, build, and integrate features from Azure AI Vision.
+
+Vision Studio provides you with a platform to try several service features and sample their returned data in a quick, straightforward manner. Using Studio, you can start experimenting with the services and learning what they offer without needing to write any code. Then, use the available client libraries and REST APIs to get started embedding these services into your own applications.
+
+## Get started using Vision Studio
+
+To use Vision Studio, you'll need an Azure subscription and an Azure AI services resource for authentication. You can also use this resource to call the services in the try-it-out experiences. Follow these steps to get started.
+
+1. Create an Azure Subscription if you don't have one already. You can [create one for free](https://azure.microsoft.com/free/ai/).
+
+1. Go to the [Vision Studio website](https://portal.vision.cognitive.azure.com/). If it's your first time logging in, you'll see a popup window that prompts you to sign in to Azure and then choose or create a Vision resource. You can skip this step and do it later.
+
+ :::image type="content" source="./Images/vision-studio-wizard-1.png" alt-text="Screenshot of Vision Studio startup wizard.":::
+
+1. Select **Choose resource**, then select an existing resource within your subscription. If you'd like to create a new one, select **Create a new resource**. Then enter information for your new resource, such as a name, location, and resource group.
+
+ :::image type="content" source="./Images/vision-studio-wizard-2.png" alt-text="Screenshot of Vision Studio resource selection panel.":::
+
+ > [!TIP]
+ > * When you select a location for your Azure resource, choose one that's closest to you for lower latency.
+ > * If you use the free pricing tier, you can keep using the Vision service even after your Azure free trial or service credit expires.
+
+1. Select **Create resource**. Your resource will be created, and you'll be able to try the different features offered by Vision Studio.
+
+ :::image type="content" source="./Images/vision-studio-home-page.png" alt-text="Screenshot of Vision Studio home page.":::
+
+1. From here, you can select any of the different features offered by Vision Studio. Some of them are outlined in the service quickstarts:
+ * [OCR quickstart](quickstarts-sdk/client-library.md?pivots=vision-studio)
+ * Image Analysis [4.0 quickstart](quickstarts-sdk/image-analysis-client-library-40.md?pivots=vision-studio) and [3.2 quickstart](quickstarts-sdk/image-analysis-client-library.md?pivots=vision-studio)
+ * [Face quickstart](quickstarts-sdk/identity-client-library.md?pivots=vision-studio)
+
+## Pre-configured features
+
+Azure AI Vision offers multiple features that use prebuilt, pre-configured models for performing various tasks, such as understanding how people move through a space, detecting faces in images, and extracting text from images. See the [Azure AI Vision overview](overview.md) for a list of features offered by the Vision service.
+
+Each of these features has one or more try-it-out experiences in Vision Studio that allow you to upload images and receive JSON and text responses. These experiences help you quickly test the features using a no-code approach.
+
+## Cleaning up resources
+
+If you want to remove an Azure AI services resource after using Vision Studio, you can delete the resource or resource group. Deleting the resource group also deletes any other resources associated with it. You can't delete your resource directly from Vision Studio, so use one of the following methods:
+* [Using the Azure portal](../multi-service-resource.md?pivots=azportal#clean-up-resources)
+* [Using the Azure CLI](../multi-service-resource.md?pivots=azcli#clean-up-resources)
+
+> [!TIP]
+> In Vision Studio, you can find your resource's details (such as its name and pricing tier) as well as switch resources by selecting the Settings icon in the top-right corner of the Vision Studio screen.
+
+## Next steps
+
+* Go to [Vision Studio](https://portal.vision.cognitive.azure.com/) to begin using features offered by the service.
+* For more information on the features offered, see the [Azure AI Vision overview](overview.md).
ai-services Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/computer-vision/overview.md
+
+ Title: What is Azure AI Vision?
+
+description: The Azure AI Vision service provides you with access to advanced algorithms for processing images and returning information.
+++++++ Last updated : 07/04/2023++
+keywords: Azure AI Vision, Azure AI Vision applications, Azure AI Vision service
+#Customer intent: As a developer, I want to evaluate image processing functionality, so that I can determine if it will work for my information extraction or object detection scenarios.
++
+# What is Azure AI Vision?
++
+The Azure AI Vision service gives you access to advanced algorithms that process images and return information based on the visual features you're interested in.
+
+| Service|Description|
+|||
+| [Optical Character Recognition (OCR)](overview-ocr.md)|The Optical Character Recognition (OCR) service extracts text from images. You can use the new Read API to extract printed and handwritten text from photos and documents. It uses deep-learning-based models and works with text on various surfaces and backgrounds. These include business documents, invoices, receipts, posters, business cards, letters, and whiteboards. The OCR APIs support extracting printed text in [several languages](./language-support.md). Follow the [OCR quickstart](quickstarts-sdk/client-library.md) to get started.|
+|[Image Analysis](overview-image-analysis.md)| The Image Analysis service extracts many visual features from images, such as objects, faces, adult content, and auto-generated text descriptions. Follow the [Image Analysis quickstart](quickstarts-sdk/image-analysis-client-library-40.md) to get started.|
+| [Face](overview-identity.md) | The Face service provides AI algorithms that detect, recognize, and analyze human faces in images. Facial recognition software is important in many different scenarios, such as identity verification, touchless access control, and face blurring for privacy. Follow the [Face quickstart](quickstarts-sdk/identity-client-library.md) to get started. |
+| [Spatial Analysis](intro-to-spatial-analysis-public-preview.md)| The Spatial Analysis service analyzes the presence and movement of people on a video feed and produces events that other systems can respond to. Install the [Spatial Analysis container](spatial-analysis-container.md) to get started.|
+
+## Azure AI Vision for digital asset management
+
+Azure AI Vision can power many digital asset management (DAM) scenarios. DAM is the business process of organizing, storing, and retrieving rich media assets and managing digital rights and permissions. For example, a company may want to group and identify images based on visible logos, faces, objects, colors, and so on. Or, you might want to automatically [generate captions for images](./Tutorials/storage-lab-tutorial.md) and attach keywords so they're searchable. For an all-in-one DAM solution using Azure AI services, Azure Cognitive Search, and intelligent reporting, see the [Knowledge Mining Solution Accelerator Guide](https://github.com/Azure-Samples/azure-search-knowledge-mining) on GitHub. For other DAM examples, see the [Azure AI Vision Solution Templates](https://github.com/Azure-Samples/Cognitive-Services-Vision-Solution-Templates) repository.
+
+## Getting started
+
+Use [Vision Studio](https://portal.vision.cognitive.azure.com/) to try out Azure AI Vision features quickly in your web browser.
+
+To get started building Azure AI Vision into your app, follow a quickstart.
+* [Quickstart: Optical character recognition (OCR)](quickstarts-sdk/client-library.md)
+* [Quickstart: Image Analysis](quickstarts-sdk/image-analysis-client-library.md)
+* [Quickstart: Spatial Analysis container](spatial-analysis-container.md)
+
+## Image requirements
+
+Azure AI Vision can analyze images that meet the following requirements:
+
+- The image must be presented in JPEG, PNG, GIF, or BMP format
+- The file size of the image must be less than 4 megabytes (MB)
+- The dimensions of the image must be greater than 50 x 50 pixels
+ - For the Read API, the dimensions of the image must be between 50 x 50 and 10,000 x 10,000 pixels.
+
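+As a convenience, you can check these limits on the client before uploading. The following is a minimal sketch that assumes the Pillow library (`pip install pillow`) is available; it simply mirrors the list above and isn't an official validation routine.
+
+```python
+import os
+from PIL import Image  # assumes Pillow is installed
+
+def meets_vision_requirements(path: str, for_read_api: bool = False) -> bool:
+    """Check a local image file against the documented limits before uploading."""
+    if os.path.getsize(path) >= 4 * 1024 * 1024:   # file size must be under 4 MB
+        return False
+    with Image.open(path) as img:
+        if img.format not in ("JPEG", "PNG", "GIF", "BMP"):
+            return False
+        width, height = img.size
+    if for_read_api and (width > 10000 or height > 10000):  # Read API upper bound
+        return False
+    return width > 50 and height > 50              # must exceed 50 x 50 pixels
+
+print(meets_vision_requirements("photo.jpg"))
+```
+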
+## Data privacy and security
+
+As with all of the Azure AI services, developers using the Azure AI Vision service should be aware of Microsoft's policies on customer data. See the [Azure AI services page](https://www.microsoft.com/trustcenter/cloudservices/cognitiveservices) on the Microsoft Trust Center to learn more.
+
+## Next steps
+
+Follow a quickstart to implement and run a service in your preferred development language.
+
+* [Quickstart: Optical character recognition (OCR)](quickstarts-sdk/client-library.md)
+* [Quickstart: Image Analysis](quickstarts-sdk/image-analysis-client-library-40.md)
+* [Quickstart: Face](quickstarts-sdk/identity-client-library.md)
+* [Quickstart: Spatial Analysis container](spatial-analysis-container.md)
ai-services Client Library https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/computer-vision/quickstarts-sdk/client-library.md
+
+ Title: "Quickstart: Optical character recognition (OCR)"
+
+description: Learn how to use Optical character recognition (OCR) in your application through a native client library in the language of your choice.
++++++ Last updated : 07/04/2023+
+ms.devlang: csharp, golang, java, javascript, python
+
+zone_pivot_groups: programming-languages-ocr
+keywords: Azure AI Vision, Azure AI Vision service
++
+# Quickstart: Azure AI Vision v3.2 GA Read
++
+Get started with the Azure AI Vision Read REST API or client libraries. The Read API provides you with AI algorithms for extracting text from images and returning it as structured strings. Follow these steps to install a package to your application and try out the sample code for basic tasks.
+++++++++++++++
ai-services Identity Client Library https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/computer-vision/quickstarts-sdk/identity-client-library.md
+
+ Title: 'Quickstart: Use the Face service'
+
+description: The Face API offers client libraries that make it easy to detect, find similar, identify, verify and more.
+++
+zone_pivot_groups: programming-languages-set-face
+++ Last updated : 07/04/2023+
+ms.devlang: csharp, golang, javascript, python
+
+keywords: face search by image, facial recognition search, facial recognition, face recognition app
++
+# Quickstart: Use the Face service
+++++++++++++++
ai-services Image Analysis Client Library 40 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/computer-vision/quickstarts-sdk/image-analysis-client-library-40.md
+
+ Title: "Quickstart: Image Analysis 4.0"
+
+description: Learn how to tag images in your application using Image Analysis 4.0 through a native client SDK in the language of your choice.
++++++ Last updated : 01/24/2023+
+ms.devlang: csharp, golang, java, javascript, python
+
+zone_pivot_groups: programming-languages-computer-vision-40
+keywords: Azure AI Vision, Azure AI Vision service
++
+# Quickstart: Image Analysis 4.0
+
+Get started with the Image Analysis 4.0 REST API or client SDK to set up a basic image analysis application. The Image Analysis service provides you with AI algorithms for processing images and returning information on their visual features. Follow these steps to install a package to your application and try out the sample code.
++++++++++++
ai-services Image Analysis Client Library https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/computer-vision/quickstarts-sdk/image-analysis-client-library.md
+
+ Title: "Quickstart: Image Analysis"
+
+description: Learn how to tag images in your application using Image Analysis through a native client library in the language of your choice.
++++++ Last updated : 12/27/2022+
+ms.devlang: csharp, golang, java, javascript, python
+
+zone_pivot_groups: programming-languages-computer-vision
+keywords: Azure AI Vision, Azure AI Vision service
++
+# Quickstart: Image Analysis
+
+Get started with the Image Analysis REST API or client libraries to set up a basic image tagging script. The Analyze Image service provides you with AI algorithms for processing images and returning information on their visual features. Follow these steps to install a package to your application and try out the sample code.
++++++++++++++++++
ai-services Read Container Migration Guide https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/computer-vision/read-container-migration-guide.md
+
+ Title: Migrating to the Read v3.x containers
+
+description: Learn how to migrate to the v3 Read OCR containers
++++++ Last updated : 09/28/2021++++
+# Migrate to the Read v3.x OCR containers
+
+If you're using version 2 of the Azure AI Vision Read OCR container, use this article to learn about upgrading your application to use version 3.x of the container.
++
+## Configuration changes
+
+* `ReadEngineConfig:ResultExpirationPeriod` is no longer supported. The Read OCR container has a built-in cron job that removes the results and metadata associated with a request after 48 hours.
+* `Cache:Redis:Configuration` is no longer supported. The Cache isn't used in the v3.x containers, so you don't need to set it.
+
+## API changes
+
+The Read v3.2 container uses version 3 of the Azure AI Vision API and has the following endpoints:
+
+* `/vision/v3.2/read/analyzeResults/{operationId}`
+* `/vision/v3.2/read/analyze`
+* `/vision/v3.2/read/syncAnalyze`
+
+See the [Azure AI Vision v3 REST API migration guide](./upgrade-api-versions.md) for detailed information on updating your applications to use version 3 of the cloud-based Read API. This information applies to the container as well. Sync operations are only supported in containers.
+
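+For example, once the container is running, a client can call the synchronous endpoint directly. The sketch below assumes the container's port is mapped to port 5000 on the host (a common choice in the container documentation) and sends a local image as a binary body; adjust the host, port, and file path for your deployment.
+
+```python
+import requests
+
+CONTAINER = "http://localhost:5000"  # assumed host:port mapping for the running container
+
+def sync_read(image_path: str) -> dict:
+    """Call the container-only synchronous Read endpoint with a local image file."""
+    with open(image_path, "rb") as image_file:
+        response = requests.post(
+            f"{CONTAINER}/vision/v3.2/read/syncAnalyze",
+            headers={"Content-Type": "application/octet-stream"},
+            data=image_file,
+        )
+    response.raise_for_status()
+    return response.json()  # OCR result as JSON
+
+print(sync_read("scanned-letter.jpg"))
+```
+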
+## Memory requirements
+
+The requirements and recommendations are based on benchmarks with a single request per second, using a 523-KB image of a scanned business letter that contains 29 lines and a total of 803 characters. The following table describes the minimum and recommended allocations of resources for each Read OCR container.
+
+|Container |Minimum | Recommended |
+||||
+|Read 3.2 **2022-04-30** | 4 cores, 8-GB memory | 8 cores, 16-GB memory |
+
+Each core must be 2.6 gigahertz (GHz) or faster.
+
+Core and memory correspond to the `--cpus` and `--memory` settings, which are used as part of the docker run command.
+
+## Storage implementations
+
+>[!NOTE]
+> MongoDB is no longer supported in 3.x versions of the container. Instead, the containers support Azure Storage and offline file systems.
+
+| Implementation | Required runtime argument(s) |
+|||
+|File level (default) | No runtime arguments required. `/share` directory will be used. |
+|Azure Blob | `Storage:ObjectStore:AzureBlob:ConnectionString={AzureStorageConnectionString}` |
+
+## Queue implementations
+
+In v3.x of the container, RabbitMQ is currently not supported. The supported backing implementations are:
+
+| Implementation | Runtime Argument(s) | Intended use |
+|||-|
+| In Memory (default) | No runtime arguments required. | Development and testing |
+| Azure Queues | `Queue:Azure:ConnectionString={AzureStorageConnectionString}` | Production |
+| RabbitMQ | Unavailable | Production |
+
+For added redundancy, the Read v3.x container uses a visibility timer to ensure requests can be successfully processed if a crash occurs when running in a multi-container setup.
+
+Set the timer with `Queue:Azure:QueueVisibilityTimeoutInMilliseconds`, which sets the time for a message to be invisible while another worker is processing it. To prevent pages from being processed redundantly, we recommend setting the timeout period to 120 seconds. The default value is 30 seconds.
+
+| Default value | Recommended value |
+|||
+| 30000 | 120000 |
++
+## Next steps
+
+* Review [Configure containers](computer-vision-resource-container-config.md) for configuration settings
+* Review [OCR overview](overview-ocr.md) to learn more about recognizing printed and handwritten text
+* Refer to the [Read API](//westus.dev.cognitive.microsoft.com/docs/services/5adf991815e1060e6355ad44/operations/56f91f2e778daf14a499e1fa) for details about the methods supported by the container.
+* Refer to [Frequently asked questions (FAQ)](FAQ.yml) to resolve issues related to Azure AI Vision functionality.
+* Use more [Azure AI services Containers](../cognitive-services-container-support.md)
ai-services Spatial Analysis Camera Placement https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/computer-vision/spatial-analysis-camera-placement.md
+
+ Title: Spatial Analysis camera placement
+
+description: Learn how to set up a camera for use with Spatial Analysis
++++++ Last updated : 06/08/2021++++
+# Where to place the camera?
+
+This article provides camera placement recommendations for Spatial Analysis (public preview). It includes general guidelines as well as specific recommendations for height, angle, and camera-to-focal-point-distance for all the included operations.
+
+> [!NOTE]
+> This guide is designed for the Axis M3045-V camera. This camera uses a resolution of 1920x1080, a 106-degree horizontal field of view, a 59-degree vertical field of view, and a fixed 2.8 mm focal length. The principles below apply to all cameras, but specific guidelines around camera height and camera-to-focal-point distance will need to be adjusted for use with other cameras.
+
+## General guidelines
+
+Consider the following general guidelines when positioning cameras for Spatial Analysis:
+
+* **Lighting height.** Place cameras below lighting fixtures so the fixtures don't block the cameras.
+* **Obstructions.** To avoid obstructing camera views, take note of obstructions such as poles, signage, shelving, walls, and existing LP cameras.
+* **Environmental backlighting.** Outdoor backlighting affects camera image quality. To avoid severe backlighting conditions, avoid directing cameras at external-facing windows and glass doors.
+* **Local privacy rules and regulations.** Local regulations may restrict what cameras can capture. Make sure that you understand local rules and regulations before placing cameras.
+* **Building structure.** HVAC, sprinklers, and existing wiring may limit hard mounting of cameras.
+* **Cable management.** Make sure you can route an ethernet cable from planned camera mounting locations to the Power over Ethernet (PoE) switch.
+
+## Height, focal-point distance, and angle
+
+You need to consider three things when deciding how to install a camera for Spatial Analysis:
+- Camera height
+- Camera-to-focal-point distance
+- The angle of the camera relative to the floor plane
+
+If possible, it's also important to know the direction in which most people walk (the person walking direction) relative to the camera's field of view, because this direction affects system performance.
+
+![Image of a person walking in a direction](./media/spatial-analysis/person-walking-direction.png)
+
+The following illustration shows the elevation view for person walking direction.
+
+![Elevation and plan view](./media/spatial-analysis/person-walking-direction-diagram.png)
+
+## Camera height
+
+Generally, cameras should be mounted 12-14 feet from the ground. For Face mask detection, we recommend mounting cameras 8-12 feet from the ground. When planning your camera mounting in this range, consider obstructions (for example: shelving, hanging lights, hanging signage, and displays) that might affect the camera view, and then adjust the height as necessary.
+
+## Camera-to-focal-point distance
+
+_Camera-to-focal-point distance_ is the linear distance from the focal point (or center of the camera image) to the camera measured on the ground.
+
+![Camera-to-focal-point-distance](./media/spatial-analysis/camera-focal-point.png)
+
+This distance is measured on the floor plane.
+
+![How camera-to-focal-point-distance is measured from the floor](./media/spatial-analysis/camera-focal-point-floor-plane.png)
+
+From above, it looks like this:
+
+![How camera-to-focal-point-distance is measured from above](./media/spatial-analysis/camera-focal-point-above.png)
+
+Use the table below to determine the camera's distance from the focal point based on specific mounting heights. These distances are for optimal placement. Note that the table provides guidance below the 12'-14' recommendation since some ceilings can limit height. For Face mask detection, the recommended camera-to-focal-point distance (min/max) is 4'-10' for camera heights between 8' and 12'.
+
+| Camera height | Camera-to-focal-point distance (min/max) |
+| - | - |
+| 8' | 4.6'-8' |
+| 10' | 5.8'-10' |
+| 12' | 7'-12' |
+| 14' | 8'-14' |
+| 16' | 9.2'-16' |
+| 20' | 11.5'-20' |
+
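+The values in this table are consistent with tilting the camera roughly 30 to 45 degrees away from vertical, so that the distance is approximately the camera height multiplied by the tangent of the tilt angle. That relationship is an inference from the table, not a stated specification, but it can help you interpolate for mounting heights that aren't listed, as in the sketch below.
+
+```python
+import math
+
+def focal_point_distance_range(camera_height_ft: float) -> tuple:
+    """Approximate min/max camera-to-focal-point distance (feet) for a mounting height.
+
+    Assumes the table's apparent pattern of a 30-45 degree tilt from vertical,
+    that is, distance = height * tan(tilt). This is an inference, not a spec.
+    """
+    return (round(camera_height_ft * math.tan(math.radians(30)), 1),
+            round(camera_height_ft * math.tan(math.radians(45)), 1))
+
+print(focal_point_distance_range(12))  # (6.9, 12.0), matching the 7'-12' row above
+```
+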
+The following illustration simulates camera views from the closest and farthest camera-to-focal-point distances.
+
+| Closest | Farthest |
+| -- | |
+| ![Closest camera-to-focal-point distance](./media/spatial-analysis/focal-point-closest.png) | ![Farthest camera-to-focal-point distance](./media/spatial-analysis/focal-point-farthest.png) |
+
+## Camera angle mounting ranges
+
+This section describes the acceptable camera mounting angle ranges that give optimal placement.
+
+### Line configuration
+
+For the **cognitiveservices.vision.spatialanalysis-personcrossingline** operation, +/-5° is the optimal camera mounting angle to maximize accuracy.
+
+For Face mask detection, +/-30 degrees is the optimal camera mounting angle for camera heights between 8' and 12'.
+
+The following illustration simulates camera views using the leftmost (-) and rightmost (+) mounting angle recommendations for using **cognitiveservices.vision.spatialanalysis-personcrossingline** to do entrance counting in a door way.
+
+| Leftmost view | Rightmost view |
+| - | -- |
+| ![Leftmost camera angle](./media/spatial-analysis/camera-angle-left.png) | ![Rightmost camera angle](./media/spatial-analysis/camera-angle-right.png) |
+
+The following illustration shows camera placement and mounting angles from a birds-eye view.
+
+![Bird's eye view](./media/spatial-analysis/camera-angle-top.png)
+
+### Zone configuration
+
+We recommend that you place cameras at 10 feet or more above the ground to ensure the covered area is large enough.
+
+When the zone is next to an obstacle like a wall or shelf, mount cameras at the specified distance from the target within the acceptable 120-degree angle range, as shown in the following illustration.
+
+![Acceptable camera angle](./media/spatial-analysis/camera-angle-acceptable.png)
+
+The following illustration provides simulations for the left and right camera views of an area next to a shelf.
+
+| Left view | Right view |
+| - | -- |
+| ![Left view](./media/spatial-analysis/end-cap-left.png) | ![Right view](./media/spatial-analysis/end-cap-right.png) |
+
+#### Queues
+
+The **cognitiveservices.vision.spatialanalysis-personcount**, **cognitiveservices.vision.spatialanalysis-persondistance**, and **cognitiveservices.vision.spatialanalysis-personcrossingpolygon** skills may be used to monitor queues. For optimal queue data quality, retractable belt barriers are preferred, to minimize occlusion of the people in the queue and to ensure the queue's location is consistent over time.
+
+![Retractable belt queue](./media/spatial-analysis/retractable-belt-queue.png)
+
+This type of barrier is preferred over opaque barriers for queue formation to maximize the accuracy of the insights from the system.
+
+There are two types of queues: linear and zig-zag.
+
+The following illustration shows recommendations for linear queues:
+
+![Linear queue recommendation](./media/spatial-analysis/camera-angle-linear-queue.png)
+
+The following illustration provides simulations for the left and right camera views of linear queues. Note that you can mount the camera on the opposite side of the queue.
+
+| Left view | Right view |
+| - | -- |
+| ![Left angle for linear queue](./media/spatial-analysis/camera-angle-linear-left.png) | ![Right angle for linear queue](./media/spatial-analysis/camera-angle-linear-right.png) |
+
+For zig-zag queues, it's best to avoid placing the camera directly facing the queue line direction, as shown in the following illustration. Note that each of the four example camera positions in the illustration provides the ideal view, with an acceptable deviation of +/- 15 degrees in each direction.
+
+The following illustrations simulate the view from a camera placed in the ideal locations for a zig-zag queue.
+
+| View 1 | View 2 |
+| - | - |
+| ![View 1](./media/spatial-analysis/camera-angle-ideal-location-right.png) | ![View 2](./media/spatial-analysis/camera-angle-ideal-location-left.png) |
+
+| View 3 | View 4 |
+| - | - |
+| ![View 3](./media/spatial-analysis/camera-angle-ideal-location-back.png) | ![View 4](./media/spatial-analysis/camera-angle-ideal-location-back-left.png) |
+
+##### Organic queues
+
+Organic queue lines form organically. This style of queue is acceptable if queues don't form beyond 2-3 people and the line forms within the zone definition. If the queue length is typically more than 2-3 people, we recommend using a retractable belt barrier to help guide the queue direction and ensure the line forms within the zone definition.
+
+## Next steps
+
+* [Deploy a People Counting web application](spatial-analysis-web-app.md)
+* [Configure Spatial Analysis operations](./spatial-analysis-operations.md)
+* [Logging and troubleshooting](spatial-analysis-logging.md)
+* [Zone and line placement guide](spatial-analysis-zone-line-placement.md)
ai-services Spatial Analysis Container https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/computer-vision/spatial-analysis-container.md
+
+ Title: How to install and run the Spatial Analysis container - Azure AI Vision
+
+description: The Spatial Analysis container lets you detect people and the distances between them.
++++++ Last updated : 12/27/2022++++
+# Install and run the Spatial Analysis container (Preview)
+
+The Spatial Analysis container enables you to analyze real-time streaming video to understand spatial relationships between people, their movement, and interactions with objects in physical environments. Containers are great for specific security and data governance requirements.
+
+## Prerequisites
+
+* Azure subscription - [Create one for free](https://azure.microsoft.com/free/cognitive-services)
+* [!INCLUDE [contributor-requirement](../includes/quickstarts/contributor-requirement.md)]
+* Once you have your Azure subscription, <a href="https://portal.azure.com/#create/Microsoft.CognitiveServicesComputerVision" title="Create a Vision resource" target="_blank">create a Vision resource </a> for the Standard S1 tier in the Azure portal to get your key and endpoint. After it deploys, select **Go to resource**.
+ * You'll need the key and endpoint from the resource you create to run the Spatial Analysis container. You'll use your key and endpoint later.
+
+### Spatial Analysis container requirements
+
+To run the Spatial Analysis container, you need a compute device with an NVIDIA CUDA Compute Capable GPU 6.0 or higher (for example, [NVIDIA Tesla T4](https://www.nvidia.com/en-us/data-center/tesla-t4/), A2, 1080Ti, or 2080Ti). We recommend that you use [Azure Stack Edge](https://azure.microsoft.com/products/azure-stack/edge/) with GPU acceleration; however, the container runs on any other desktop machine that meets the minimum requirements. We'll refer to this device as the host computer.
+
+#### [Azure Stack Edge device](#tab/azure-stack-edge)
+
+Azure Stack Edge is a Hardware-as-a-Service solution and an AI-enabled edge computing device with network data transfer capabilities. For detailed preparation and setup instructions, see the [Azure Stack Edge documentation](../../databox-online/azure-stack-edge-deploy-prep.md).
+
+#### [Desktop machine](#tab/desktop-machine)
+
+#### Minimum hardware requirements
+
+* 4 GB of system RAM
+* 4 GB of GPU RAM
+* 8 core CPU
+* One NVIDIA CUDA Compute Capable GPU 6.0 or higher (for example, [NVIDIA Tesla T4](https://www.nvidia.com/en-us/data-center/tesla-t4/), A2, 1080Ti, or 2080Ti)
+* 20 GB of HDD space
+
+#### Recommended hardware
+
+* 32 GB of system RAM
+* 16 GB of GPU RAM
+* 8 core CPU
+* Two NVIDIA CUDA Compute Capable GPUs 6.0 or higher (for example, [NVIDIA Tesla T4](https://www.nvidia.com/en-us/data-center/tesla-t4/), A2, 1080Ti, or 2080Ti)
+* 50 GB of SSD space
+
+In this article, you'll download and install the following software packages. The host computer must be able to run them (see below for instructions):
+
+* [NVIDIA graphics drivers](https://docs.nvidia.com/datacenter/tesla/tesla-installation-notes/https://docsupdatetracker.net/index.html) and [NVIDIA CUDA Toolkit](https://docs.nvidia.com/cuda/cuda-installation-guide-linux/https://docsupdatetracker.net/index.html). The minimum GPU driver version is 460 with CUDA 11.1.
+* Configurations for [NVIDIA MPS](https://docs.nvidia.com/deploy/pdf/CUDA_Multi_Process_Service_Overview.pdf) (Multi-Process Service).
+* [Docker CE](https://docs.docker.com/install/linux/docker-ce/ubuntu/#install-docker-enginecommunity-1) and [NVIDIA-Docker2](https://github.com/NVIDIA/nvidia-docker)
+* [Azure IoT Edge](../../iot-edge/how-to-provision-single-device-linux-symmetric.md) runtime.
+
+#### [Azure VM with GPU](#tab/virtual-machine)
+In our example, we'll utilize an [NCv3 series VM](../../virtual-machines/ncv3-series.md) that has one v100 GPU.
+++
+| Requirement | Description |
+|--|--|
+| Camera | The Spatial Analysis container isn't tied to a specific camera brand. The camera device needs to: support Real-Time Streaming Protocol (RTSP) and H.264 encoding, be accessible to the host computer, and be capable of streaming at 15 FPS and 1080p resolution. |
+| Linux OS | [Ubuntu Desktop 18.04 LTS](http://releases.ubuntu.com/18.04/) must be installed on the host computer. |
+
+## Set up the host computer
+
+We recommend that you use an Azure Stack Edge device for your host computer. Select **Desktop Machine** if you're configuring a different device, or **Virtual Machine** if you're utilizing a VM.
+
+#### [Azure Stack Edge device](#tab/azure-stack-edge)
+
+### Configure compute on the Azure Stack Edge portal
+
+Spatial Analysis uses the compute features of the Azure Stack Edge to run an AI solution. To enable the compute features, make sure that:
+
+* You've [connected and activated](../../databox-online/azure-stack-edge-deploy-connect-setup-activate.md) your Azure Stack Edge device.
+* You have a Windows client system running PowerShell 5.0 or later, to access the device.
+* To deploy a Kubernetes cluster, you need to configure your Azure Stack Edge device via the **Local UI** on the [Azure portal](https://portal.azure.com/):
+ 1. Enable the compute feature on your Azure Stack Edge device. To enable compute, go to the **Compute** page in the web interface for your device.
+ 2. Select a network interface that you want to enable for compute, then select **Enable**. This will create a virtual switch on your device, on that network interface.
+ 3. Leave the Kubernetes test node IP addresses and the Kubernetes external services IP addresses blank.
+ 4. Select **Apply**. This operation may take about two minutes.
+
+![Configure compute](media/spatial-analysis/configure-compute.png)
+
+### Set up Azure Stack Edge role and create an IoT Hub resource
+
+In the [Azure portal](https://portal.azure.com/), navigate to your Azure Stack Edge resource. On the **Overview** page or navigation list, select the Edge compute **Get started** button. In the **Configure Edge compute** tile, select **Configure**.
+
+![Link](media/spatial-analysis/configure-edge-compute-tile.png)
+
+In the **Configure Edge compute** page, choose an existing IoT Hub, or choose to create a new one. By default, a Standard (S1) pricing tier is used to create an IoT Hub resource. To use a free tier IoT Hub resource, create one and then select it. The IoT Hub resource uses the same subscription and resource group that is used by the Azure Stack Edge resource.
+
+Select **Create**. The IoT Hub resource creation may take a couple of minutes. After the IoT Hub resource is created, the **Configure Edge compute** tile will update to show the new configuration. To confirm that the Edge compute role has been configured, select **View config** on the **Configure compute** tile.
+
+When the Edge compute role is set up on the Edge device, it creates two devices: an IoT device and an IoT Edge device. Both devices can be viewed in the IoT Hub resource. The Azure IoT Edge Runtime will already be running on the IoT Edge device.
+
+> [!NOTE]
+> * Currently only the Linux platform is supported for IoT Edge devices. For help troubleshooting the Azure Stack Edge device, see the [logging and troubleshooting](spatial-analysis-logging.md) article.
+> * To learn more about how to configure an IoT Edge device to communicate through a proxy server, see [Configure an IoT Edge device to communicate through a proxy server](../../iot-edge/how-to-configure-proxy-support.md#azure-portal)
+
+### Enable MPS on Azure Stack Edge
+
+Follow these steps to remotely connect from a Windows client.
+
+1. Run a Windows PowerShell session as an administrator.
+2. Make sure that the Windows Remote Management service is running on your client. At the command prompt, type:
+
+ ```powershell
+ winrm quickconfig
+ ```
+
+ For more information, see [Installation and configuration for Windows Remote Management](/windows/win32/winrm/installation-and-configuration-for-windows-remote-management#quick-default-configuration).
+
+3. Assign a variable to the connection string used in the `hosts` file.
+
+ ```powershell
+ $Name = "<Node serial number>.<DNS domain of the device>"
+ ```
+
+ Replace `<Node serial number>` and `<DNS domain of the device>` with the node serial number and DNS domain of your device. You can get the values for node serial number from the **Certificates** page and DNS domain from the **Device** page in the local web UI of your device.
+
+4. To add this connection string for your device to the client's trusted hosts list, type the following command:
+
+ ```powershell
+ Set-Item WSMan:\localhost\Client\TrustedHosts $Name -Concatenate -Force
+ ```
+
+5. Start a Windows PowerShell session on the device:
+
+ ```powershell
+ Enter-PSSession -ComputerName $Name -Credential ~\EdgeUser -ConfigurationName Minishell -UseSSL
+ ```
+
+ If you see an error related to trust relationship, then check if the signing chain of the node certificate uploaded to your device is also installed on the client accessing your device.
+
+6. Provide the password when prompted. Use the same password that is used to sign into the local web UI. The default local web UI password is *Password1*. When you successfully connect to the device using remote PowerShell, you see the following sample output:
+
+ ```
+ Windows PowerShell
+ Copyright (C) Microsoft Corporation. All rights reserved.
+
+ PS C:\WINDOWS\system32> winrm quickconfig
+ WinRM service is already running on this machine.
+ PS C:\WINDOWS\system32> $Name = "1HXQG13.wdshcsso.com"
+ PS C:\WINDOWS\system32> Set-Item WSMan:\localhost\Client\TrustedHosts $Name -Concatenate -Force
+ PS C:\WINDOWS\system32> Enter-PSSession -ComputerName $Name -Credential ~\EdgeUser -ConfigurationName Minishell -UseSSL
+
+ WARNING: The Windows PowerShell interface of your device is intended to be used only for the initial network configuration. Please engage Microsoft Support if you need to access this interface to troubleshoot any potential issues you may be experiencing. Changes made through this interface without involving Microsoft Support could result in an unsupported configuration.
+ [1HXQG13.wdshcsso.com]: PS>
+ ```
+
+#### [Desktop machine](#tab/desktop-machine)
+
+Follow these instructions if your host computer isn't an Azure Stack Edge device.
+
+#### Install NVIDIA CUDA Toolkit and NVIDIA graphics drivers on the host computer
+
+Use the following bash script to install the required NVIDIA graphics drivers and CUDA Toolkit.
+
+```bash
+wget https://developer.download.nvidia.com/compute/cuda/repos/ubuntu1804/x86_64/cuda-ubuntu1804.pin
+sudo mv cuda-ubuntu1804.pin /etc/apt/preferences.d/cuda-repository-pin-600
+sudo apt-key adv --fetch-keys https://developer.download.nvidia.com/compute/cuda/repos/ubuntu1804/x86_64/3bf863cc.pub
+sudo add-apt-repository "deb http://developer.download.nvidia.com/compute/cuda/repos/ubuntu1804/x86_64/ /"
+sudo apt-get update
+sudo apt-get -y install cuda
+```
+
+Reboot the machine, and run the following command.
+
+```bash
+nvidia-smi
+```
+
+You should see the following output.
+
+![NVIDIA driver output](media/spatial-analysis/nvidia-driver-output.png)
+
+### Install Docker CE and nvidia-docker2 on the host computer
+
+Install Docker CE on the host computer.
+
+```bash
+sudo apt-get update
+sudo apt-get install -y apt-transport-https ca-certificates curl gnupg-agent software-properties-common
+curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -
+sudo add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable"
+sudo apt-get update
+sudo apt-get install -y docker-ce docker-ce-cli containerd.io
+```
+
+Install the *nvidia-docker-2* software package.
+
+```bash
+distribution=$(. /etc/os-release;echo $ID$VERSION_ID)
+curl -s -L https://nvidia.github.io/nvidia-docker/gpgkey | sudo apt-key add -
+curl -s -L https://nvidia.github.io/nvidia-docker/$distribution/nvidia-docker.list | sudo tee /etc/apt/sources.list.d/nvidia-docker.list
+sudo apt-get update
+sudo apt-get install -y docker-ce nvidia-docker2
+sudo systemctl restart docker
+```
+
+## Enable NVIDIA MPS on the host computer
+
+> [!TIP]
+> * Don't install MPS if your GPU compute capability is less than 7.x (pre-Volta). See [CUDA Compatibility](https://docs.nvidia.com/deploy/cuda-compatibility/index.html#support-title) for reference.
+> * Run the MPS instructions from a terminal window on the host computer, not from inside your Docker container instance.
+
+For best performance and utilization, configure the host computer's GPU(s) for [NVIDIA Multiprocess Service (MPS)](https://docs.nvidia.com/deploy/pdf/CUDA_Multi_Process_Service_Overview.pdf). Run the MPS instructions from a terminal window on the host computer.
+
+```bash
+# Set GPU(s) compute mode to EXCLUSIVE_PROCESS
+sudo nvidia-smi --compute-mode=EXCLUSIVE_PROCESS
+
+# Cronjob for setting GPU(s) compute mode to EXCLUSIVE_PROCESS on boot
+echo "SHELL=/bin/bash" > /tmp/nvidia-mps-cronjob
+echo "PATH=/usr/local/sbin:/usr/local/bin:/sbin:/bin:/usr/sbin:/usr/bin" >> /tmp/nvidia-mps-cronjob
+echo "@reboot root nvidia-smi --compute-mode=EXCLUSIVE_PROCESS" >> /tmp/nvidia-mps-cronjob
+
+sudo chown root:root /tmp/nvidia-mps-cronjob
+sudo mv /tmp/nvidia-mps-cronjob /etc/cron.d/
+
+# Service entry for automatically starting MPS control daemon
+echo "[Unit]" > /tmp/nvidia-mps.service
+echo "Description=NVIDIA MPS control service" >> /tmp/nvidia-mps.service
+echo "After=cron.service" >> /tmp/nvidia-mps.service
+echo "" >> /tmp/nvidia-mps.service
+echo "[Service]" >> /tmp/nvidia-mps.service
+echo "Restart=on-failure" >> /tmp/nvidia-mps.service
+echo "ExecStart=/usr/bin/nvidia-cuda-mps-control -f" >> /tmp/nvidia-mps.service
+echo "" >> /tmp/nvidia-mps.service
+echo "[Install]" >> /tmp/nvidia-mps.service
+echo "WantedBy=multi-user.target" >> /tmp/nvidia-mps.service
+
+sudo chown root:root /tmp/nvidia-mps.service
+sudo mv /tmp/nvidia-mps.service /etc/systemd/system/
+
+sudo systemctl --now enable nvidia-mps.service
+```
+
+## Configure Azure IoT Edge on the host computer
+
+To deploy the Spatial Analysis container on the host computer, create an instance of an [Azure IoT Hub](../../iot-hub/iot-hub-create-through-portal.md) service using the Standard (S1) or Free (F0) pricing tier.
+
+Use the Azure CLI to create an instance of Azure IoT Hub. Replace the parameters where appropriate. Alternatively, you can create the Azure IoT Hub on the [Azure portal](https://portal.azure.com/).
+
+```bash
+curl -sL https://aka.ms/InstallAzureCLIDeb | sudo bash
+```
+```azurecli
+sudo az login
+```
+```azurecli
+sudo az account set --subscription "<name or ID of Azure Subscription>"
+```
+```azurecli
+sudo az group create --name "<resource-group-name>" --location "<your-region>"
+```
+See [Region Support](https://azure.microsoft.com/global-infrastructure/services/?products=cognitive-services) for available regions.
+```azurecli
+sudo az iot hub create --name "<iothub-group-name>" --sku S1 --resource-group "<resource-group-name>"
+```
+```azurecli
+sudo az iot hub device-identity create --hub-name "<iothub-name>" --device-id "<device-name>" --edge-enabled
+```
+
+You'll need to install [Azure IoT Edge](../../iot-edge/how-to-provision-single-device-linux-symmetric.md) version 1.0.9. Follow these steps to download the correct version:
+
+Ubuntu Server 18.04:
+```bash
+curl https://packages.microsoft.com/config/ubuntu/18.04/multiarch/prod.list > ./microsoft-prod.list
+```
+
+Copy the generated list.
+```bash
+sudo cp ./microsoft-prod.list /etc/apt/sources.list.d/
+```
+
+Install the Microsoft GPG public key.
+
+```bash
+curl https://packages.microsoft.com/keys/microsoft.asc | gpg --dearmor > microsoft.gpg
+```
+```bash
+sudo cp ./microsoft.gpg /etc/apt/trusted.gpg.d/
+```
+
+Update the package lists on your device.
+
+```bash
+sudo apt-get update
+```
+
+Install the 1.0.9 release:
+
+```bash
+sudo apt-get install iotedge=1.1* libiothsm-std=1.1*
+```
+
+Next, register the host computer as an IoT Edge device in your IoT Hub instance, using a [connection string](../../iot-edge/how-to-provision-single-device-linux-symmetric.md#register-your-device).
+
+You need to connect the IoT Edge device to your Azure IoT Hub. You need to copy the connection string from the IoT Edge device you created earlier. Alternatively, you can run the below command in the Azure CLI.
+
+```azurecli
+sudo az iot hub device-identity connection-string show --device-id my-edge-device --hub-name test-iot-hub-123
+```
+
+On the host computer open `/etc/iotedge/config.yaml` for editing. Replace `ADD DEVICE CONNECTION STRING HERE` with the connection string. Save and close the file.
+Run this command to restart the IoT Edge service on the host computer.
+
+```bash
+sudo systemctl restart iotedge
+```
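+
+If you'd rather script that edit than open the file by hand, the following is a minimal sketch. It assumes the default placeholder text (`<ADD DEVICE CONNECTION STRING HERE>`) is still present in `/etc/iotedge/config.yaml`; adjust the pattern if your file differs.
+
+```bash
+# Paste the connection string returned by the earlier `az iot hub device-identity connection-string show` command
+CONN_STR='HostName=<iothub-name>.azure-devices.net;DeviceId=<device-name>;SharedAccessKey=<key>'
+
+# Replace the placeholder in the IoT Edge config, then restart the service
+sudo sed -i "s|<ADD DEVICE CONNECTION STRING HERE>|${CONN_STR}|" /etc/iotedge/config.yaml
+sudo systemctl restart iotedge
+```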
+
+Deploy the Spatial Analysis container as an IoT Module on the host computer, either from the [Azure portal](../../iot-edge/how-to-deploy-modules-portal.md) or [Azure CLI](../multi-service-resource.md?pivots=azcli). If you're using the portal, set the image URI to the location of your Azure Container Registry.
+
+Use the below steps to deploy the container using the Azure CLI.
+
+#### [Azure VM with GPU](#tab/virtual-machine)
+
+An Azure Virtual Machine with a GPU can also be used to run Spatial Analysis. The example below uses an [NCv3 series VM](../../virtual-machines/ncv3-series.md) that has one V100 GPU.
+
+#### Create the VM
+
+Open the [Create a Virtual Machine](https://portal.azure.com/#create/Microsoft.VirtualMachine) wizard in the Azure portal.
+
+Give your VM a name and select **(US) West US 2** as the region.
+
+> [!IMPORTANT]
+> Be sure to set `Availability Options` to "No infrastructure redundancy required". Refer to the below figure for the complete configuration and the next step for help locating the correct VM size.
++
+To locate the VM size, select **See all sizes**, then browse the **N-Series** list and select **NC6s_v3**.
++
+Next, create the VM. Once it's created, navigate to the VM resource in the Azure portal and select `Extensions` from the left pane. Select **Add** to bring up the extensions window with all available extensions. Search for and select `NVIDIA GPU Driver Extension`, select **Create**, and complete the wizard.
+
+Once the extension is successfully applied, navigate to the VM main page in the Azure portal and select `Connect`. The VM can be accessed either through SSH or RDP. RDP is helpful because it lets you view the visualizer window (explained later). Configure RDP access by following [these steps](../../virtual-machines/linux/use-remote-desktop.md) and opening a remote desktop connection to the VM.
+
+### Verify Graphics Drivers are Installed
+
+Run the following command to verify that the graphics drivers have been successfully installed.
+
+```bash
+nvidia-smi
+```
+
+You should see the following output.
+
+![NVIDIA driver output](media/spatial-analysis/nvidia-driver-output.png)
+
+### Install Docker CE and nvidia-docker2 on the VM
+
+Run the following commands one at a time in order to install Docker CE and nvidia-docker2 on the VM.
+
+Install Docker CE on the host computer.
+
+```bash
+sudo apt-get update
+```
+```bash
+sudo apt-get install -y apt-transport-https ca-certificates curl gnupg-agent software-properties-common
+```
+```bash
+curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -
+```
+```bash
+sudo add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable"
+```
+```bash
+sudo apt-get update
+```
+```bash
+sudo apt-get install -y docker-ce docker-ce-cli containerd.io
+```
++
+Install the *nvidia-docker-2* software package.
+
+```bash
+DISTRIBUTION=$(. /etc/os-release;echo $ID$VERSION_ID)
+```
+```bash
+curl -s -L https://nvidia.github.io/nvidia-docker/gpgkey | sudo apt-key add -
+```
+```bash
+curl -s -L https://nvidia.github.io/nvidia-docker/$DISTRIBUTION/nvidia-docker.list | sudo tee /etc/apt/sources.list.d/nvidia-docker.list
+```
+```bash
+sudo apt-get update
+```
+```bash
+sudo apt-get install -y docker-ce nvidia-docker2
+```
+```bash
+sudo systemctl restart docker
+```
+
+Now that you have set up and configured your VM, follow the steps below to configure Azure IoT Edge.
+
+## Configure Azure IoT Edge on the VM
+
+To deploy the Spatial Analysis container on the VM, create an instance of an [Azure IoT Hub](../../iot-hub/iot-hub-create-through-portal.md) service using the Standard (S1) or Free (F0) pricing tier.
+
+Use the Azure CLI to create an instance of Azure IoT Hub. Replace the parameters where appropriate. Alternatively, you can create the Azure IoT Hub on the [Azure portal](https://portal.azure.com/).
+
+```bash
+curl -sL https://aka.ms/InstallAzureCLIDeb | sudo bash
+```
+```azurecli
+sudo az login
+```
+```azurecli
+sudo az account set --subscription "<name or ID of Azure Subscription>"
+```
+```azurecli
+sudo az group create --name "<resource-group-name>" --location "<your-region>"
+```
+See [Region Support](https://azure.microsoft.com/global-infrastructure/services/?products=cognitive-services) for available regions.
+```azurecli
+sudo az iot hub create --name "<iothub-name>" --sku S1 --resource-group "<resource-group-name>"
+```
+```azurecli
+sudo az iot hub device-identity create --hub-name "<iothub-name>" --device-id "<device-name>" --edge-enabled
+```
+
+You'll need to install [Azure IoT Edge](../../iot-edge/how-to-provision-single-device-linux-symmetric.md) version 1.0.9. Follow these steps to download the correct version:
+
+Ubuntu Server 18.04:
+```bash
+curl https://packages.microsoft.com/config/ubuntu/18.04/multiarch/prod.list > ./microsoft-prod.list
+```
+
+Copy the generated list.
+```bash
+sudo cp ./microsoft-prod.list /etc/apt/sources.list.d/
+```
+
+Install the Microsoft GPG public key.
+
+```bash
+curl https://packages.microsoft.com/keys/microsoft.asc | gpg --dearmor > microsoft.gpg
+```
+```bash
+sudo cp ./microsoft.gpg /etc/apt/trusted.gpg.d/
+```
+
+Update the package lists on your device.
+
+```bash
+sudo apt-get update
+```
+
+Install the 1.0.9 release:
+
+```bash
+sudo apt-get install iotedge=1.1* libiothsm-std=1.1*
+```
+
+Next, register the VM as an IoT Edge device in your IoT Hub instance, using a [connection string](../../iot-edge/how-to-provision-single-device-linux-symmetric.md#register-your-device).
+
+You need to connect the IoT Edge device to your Azure IoT Hub. You need to copy the connection string from the IoT Edge device you created earlier. Alternatively, you can run the below command in the Azure CLI.
+
+```azurecli
+sudo az iot hub device-identity connection-string show --device-id my-edge-device --hub-name test-iot-hub-123
+```
+
+On the VM open `/etc/iotedge/config.yaml` for editing. Replace `ADD DEVICE CONNECTION STRING HERE` with the connection string. Save and close the file.
+Run this command to restart the IoT Edge service on the VM.
+
+```bash
+sudo systemctl restart iotedge
+```
+
+Deploy the Spatial Analysis container as an IoT Module on the VM, either from the [Azure portal](../../iot-edge/how-to-deploy-modules-portal.md) or [Azure CLI](../multi-service-resource.md?pivots=azcli). If you're using the portal, set the image URI to the location of your Azure Container Registry.
+
+Use the below steps to deploy the container using the Azure CLI.
+++
+### IoT Deployment manifest
+
+To streamline container deployment on multiple host computers, you can create a deployment manifest file to specify the container creation options, and environment variables. You can find an example of a deployment manifest [for Azure Stack Edge](https://go.microsoft.com/fwlink/?linkid=2142179), [other desktop machines](https://go.microsoft.com/fwlink/?linkid=2152270), and [Azure VM with GPU](https://go.microsoft.com/fwlink/?linkid=2152189) on GitHub.
+
+The following table shows the various Environment Variables used by the IoT Edge Module. You can also set them in the deployment manifest linked above, using the `env` attribute in `spatialanalysis`:
+
+| Setting Name | Value | Description|
+||||
+| ARCHON_LOG_LEVEL | Info; Verbose | Logging level, select one of the two values|
+| ARCHON_SHARED_BUFFER_LIMIT | 377487360 | Do not modify|
+| ARCHON_PERF_MARKER| false| Set this to true for performance logging, otherwise this should be false|
+| ARCHON_NODES_LOG_LEVEL | Info; Verbose | Logging level, select one of the two values|
+| OMP_WAIT_POLICY | PASSIVE | Do not modify|
+| QT_X11_NO_MITSHM | 1 | Do not modify|
+| APIKEY | your API Key| Collect this value from the Azure portal for your Vision resource. You can find it in the **Key and endpoint** section for your resource. |
+| BILLING | your Endpoint URI| Collect this value from the Azure portal for your Vision resource. You can find it in the **Key and endpoint** section for your resource.|
+| EULA | accept | This value needs to be set to *accept* for the container to run |
+| DISPLAY | :1 | This value needs to be the same as the output of `echo $DISPLAY` on the host computer. Azure Stack Edge devices don't have a display, so this setting isn't applicable to them.|
+| KEY_ENV | ASE Encryption key | Add this environment variable if Video_URL is an obfuscated string |
+| IV_ENV | Initialization vector | Add this environment variable if Video_URL is an obfuscated string|
+
+> [!IMPORTANT]
+> The `Eula`, `Billing`, and `ApiKey` options must be specified to run the container; otherwise, the container won't start. For more information, see [Billing](#billing).
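+
+If you're templating the deployment manifest rather than editing it by hand, you can quickly confirm that the `env` section of the Spatial Analysis module carries the values from the table above. The following is a hedged sketch: the module name `spatialanalysis` matches the sample manifests, `jq` must be installed, and the JSON path may differ slightly if your manifest wraps `modulesContent` in a `content` property.
+
+```bash
+# Print the environment variables configured for the spatialanalysis module
+jq '.modulesContent."$edgeAgent"."properties.desired".modules.spatialanalysis.env' DeploymentManifest.json
+```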
+
+Once you update the Deployment manifest for [Azure Stack Edge devices](https://go.microsoft.com/fwlink/?linkid=2142179), [a desktop machine](https://go.microsoft.com/fwlink/?linkid=2152270) or [Azure VM with GPU](https://go.microsoft.com/fwlink/?linkid=2152189) with your own settings and selection of operations, you can use the below [Azure CLI](../multi-service-resource.md?pivots=azcli) command to deploy the container on the host computer, as an IoT Edge Module.
+
+```azurecli
+sudo az login
+sudo az extension add --name azure-iot
+sudo az iot edge set-modules --hub-name "<iothub-name>" --device-id "<device-name>" --content DeploymentManifest.json --subscription "<name or ID of Azure Subscription>"
+```
+
+|Parameter |Description |
+|||
+| `--hub-name` | Your Azure IoT Hub name. |
+| `--content` | The name of the deployment file. |
+| `--target-condition` | Your IoT Edge device name for the host computer. |
+| `--subscription` | Subscription ID or name. |
+
+This command will start the deployment. Navigate to the page of your Azure IoT Hub instance in the Azure portal to see the deployment status. The status may show as *417 - The device's deployment configuration is not set* until the device finishes downloading the container images and starts running.
+
+## Validate that the deployment is successful
+
+There are several ways to validate that the container is running. Locate the *Runtime Status* in the **IoT Edge Module Settings** for the Spatial Analysis module in your Azure IoT Hub instance on the Azure portal. Validate that the **Desired Value** and **Reported Value** for the *Runtime Status* are *Running*.
+
+![Example deployment verification](./media/spatial-analysis/deployment-verification.png)
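+
+You can also check the runtime status from the command line. The following is a hedged sketch; the module name `spatialanalysis` matches the sample deployment manifests, but yours may differ:
+
+```bash
+# On the host computer: list the IoT Edge modules and their runtime status
+sudo iotedge list
+
+# From any machine with the Azure CLI (azure-iot extension): read what edgeAgent reports
+az iot hub module-twin show --hub-name "<iothub-name>" --device-id "<device-name>" \
+  --module-id '$edgeAgent' --query "properties.reported.modules.spatialanalysis.runtimeStatus"
+```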
+
+Once the deployment is complete and the container is running, the **host computer** will start sending events to the Azure IoT Hub. If you used the `.debug` version of the operations, you'll see a visualizer window for each camera you configured in the deployment manifest. You can now define the lines and zones you want to monitor in the deployment manifest and follow the instructions to deploy again.
+
+## Configure the operations performed by Spatial Analysis
+
+You'll need to use [Spatial Analysis operations](spatial-analysis-operations.md) to configure the container to use connected cameras, configure the operations, and more. For each camera device you configure, the operations for Spatial Analysis will generate an output stream of JSON messages, sent to your instance of Azure IoT Hub.
+
+## Use the output generated by the container
+
+If you want to start consuming the output generated by the container, see the following articles:
+
+* Use the Azure Event Hubs SDK for your chosen programming language to connect to the Azure IoT Hub endpoint and receive the events. For more information, see [Read device-to-cloud messages from the built-in endpoint](../../iot-hub/iot-hub-devguide-messages-read-builtin.md).
+* Set up Message Routing on your Azure IoT Hub to send the events to other endpoints or save the events to Azure Blob Storage, etc. See [IoT Hub Message Routing](../../iot-hub/iot-hub-devguide-messages-d2c.md) for more information.
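+
+For a quick look at the events before you write any code, you can also watch the device-to-cloud messages with the Azure CLI. This is a sketch and assumes the `azure-iot` extension is installed:
+
+```bash
+# Stream the JSON events the Spatial Analysis module sends to your IoT Hub
+az iot hub monitor-events --hub-name "<iothub-name>" --device-id "<device-name>"
+```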
+
+## Troubleshooting
+
+If you encounter issues when starting or running the container, see [Telemetry and troubleshooting](spatial-analysis-logging.md) for steps for common issues. This article also contains information on generating and collecting logs and collecting system health.
++
+## Billing
+
+The Spatial Analysis container sends billing information to Azure, using a Vision resource on your Azure account. The use of Spatial Analysis in public preview is currently free.
+
+Azure AI services containers aren't licensed to run without being connected to the metering / billing endpoint. You must always enable the containers to communicate billing information with the billing endpoint. Azure AI services containers don't send customer data, such as the video or image that's being analyzed, to Microsoft.
++
+## Summary
+
+In this article, you learned concepts and workflow for downloading, installing, and running the Spatial Analysis container. In summary:
+
+* Spatial Analysis is a Linux container for Docker.
+* Container images are downloaded from the Microsoft Container Registry.
+* Container images run as IoT Modules in Azure IoT Edge.
+* Configure the container and deploy it on a host machine.
+
+## Next steps
+
+* [Deploy a People Counting web application](spatial-analysis-web-app.md)
+* [Configure Spatial Analysis operations](spatial-analysis-operations.md)
+* [Logging and troubleshooting](spatial-analysis-logging.md)
+* [Camera placement guide](spatial-analysis-camera-placement.md)
+* [Zone and line placement guide](spatial-analysis-zone-line-placement.md)
ai-services Spatial Analysis Local https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/computer-vision/spatial-analysis-local.md
+
+ Title: Run Spatial Analysis on a local video file
+
+description: Use this guide to learn how to run Spatial Analysis on a recorded local video.
++++++ Last updated : 06/28/2022+++
+# Run Spatial Analysis on a local video file
+
+You can use Spatial Analysis with either recorded or live video. Use this guide to learn how to run Spatial Analysis on a recorded local video.
+
+## Prerequisites
+
+* Set up a Spatial Analysis container by following the steps in [Set up the host machine and run the container](spatial-analysis-container.md).
+
+## Analyze a video file
+
+To use Spatial Analysis for recorded video, record a video file and save it as a .mp4 file. Then take the following steps:
+
+1. Create a blob storage account in Azure, or use an existing one. Then update the following blob storage settings in the Azure portal:
+ 1. Change **Secure transfer required** to **Disabled**
+ 1. Change **Allow Blob public access** to **Enabled**
+
+1. Navigate to the **Container** section, and either create a new container or use an existing one. Then upload the video file to the container. Expand the file settings for the uploaded file, and select **Generate SAS**. Be sure to set the **Expiry Date** long enough to cover the testing period. Set **Allowed Protocols** to *HTTP* (*HTTPS* is not supported).
+
+1. Select **Generate SAS Token and URL** and copy the Blob SAS URL. Replace the starting `https` with `http` and test the URL in a browser that supports video playback.
+
+1. Replace `VIDEO_URL` in the deployment manifest for your [Azure Stack Edge device](https://go.microsoft.com/fwlink/?linkid=2142179), [desktop machine](https://go.microsoft.com/fwlink/?linkid=2152270), or [Azure VM with GPU](https://go.microsoft.com/fwlink/?linkid=2152189) with the URL you created, for all of the graphs. Set `VIDEO_IS_LIVE` to `false`, and redeploy the Spatial Analysis container with the updated manifest. See the example below.
+
+The Spatial Analysis module will start consuming the video file and will continuously replay it.
++
+```json
+"zonecrossing": {
+ "operationId" : "cognitiveservices.vision.spatialanalysis-personcrossingpolygon",
+ "version": 1,
+ "enabled": true,
+ "parameters": {
+ "VIDEO_URL": "Replace http url here",
+ "VIDEO_SOURCE_ID": "personcountgraph",
+ "VIDEO_IS_LIVE": false,
+ "VIDEO_DECODE_GPU_INDEX": 0,
+ "DETECTOR_NODE_CONFIG": "{ \"gpu_index\": 0, \"do_calibration\": true }",
+ "SPACEANALYTICS_CONFIG": "{\"zones\":[{\"name\":\"queue\",\"polygon\":[[0.3,0.3],[0.3,0.9],[0.6,0.9],[0.6,0.3],[0.3,0.3]], \"events\": [{\"type\": \"zonecrossing\", \"config\": {\"threshold\": 16.0, \"focus\": \"footprint\"}}]}]}"
+ }
+ },
+
+```
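+
+As an alternative to the portal steps above, you can upload the recording and generate the SAS URL with the Azure CLI. This is a hedged sketch; the storage account, container, and file names are placeholders, and you still need to replace `https` with `http` in the resulting URL before using it as `VIDEO_URL`.
+
+```bash
+# Upload the recorded video to a blob container
+az storage blob upload --connection-string "<storage-connection-string>" \
+  --container-name "videos" --name "sample.mp4" --file "./sample.mp4"
+
+# Generate a read-only SAS URL with an expiry that covers your testing period
+az storage blob generate-sas --connection-string "<storage-connection-string>" \
+  --container-name "videos" --name "sample.mp4" \
+  --permissions r --expiry "<yyyy-mm-ddTHH:MMZ>" --full-uri
+```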
+
+## Next steps
+
+* [Deploy a People Counting web application](spatial-analysis-web-app.md)
+* [Configure Spatial Analysis operations](spatial-analysis-operations.md)
+* [Logging and troubleshooting](spatial-analysis-logging.md)
+* [Camera placement guide](spatial-analysis-camera-placement.md)
+* [Zone and line placement guide](spatial-analysis-zone-line-placement.md)
ai-services Spatial Analysis Logging https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/computer-vision/spatial-analysis-logging.md
+
+ Title: Telemetry and logging for Spatial Analysis containers
+
+description: Spatial Analysis provides each container with a common configuration framework for insights, logging, and security settings.
++++++ Last updated : 06/08/2021++++
+# Telemetry and troubleshooting
+
+Spatial Analysis includes a set of features to monitor the health of the system and help with diagnosing issues.
+
+## Enable visualizations
+
+To enable a visualization of AI Insight events in a video frame, you need to use the `.debug` version of a [Spatial Analysis operation](spatial-analysis-operations.md) on a desktop machine or Azure VM. The visualization is not possible on Azure Stack Edge devices. There are four debug operations available.
+
+If your device is a local desktop machine or an Azure GPU VM (with remote desktop enabled), then you can switch to the `.debug` version of any operation and visualize the output.
+
+1. Open the desktop either locally or by using a remote desktop client on the host computer running Spatial Analysis.
+2. In the terminal run `xhost +`
+3. Update the [deployment manifest](https://github.com/Azure-Samples/cognitive-services-sample-data-files/blob/master/ComputerVision/spatial-analysis/DeploymentManifest_for_non_ASE_devices.json) under the `spaceanalytics` module with the value of the `DISPLAY` environment variable. You can find its value by running `echo $DISPLAY` in the terminal on the host computer.
+ ```
+ "env": {
+ "DISPLAY": {
+ "value": ":11"
+ }
+ }
+ ```
+4. Update the graph in the deployment manifest you want to run in debug mode. In the example below, we update the operationId to cognitiveservices.vision.spatialanalysis-personcrossingpolygon.debug. A new parameter `VISUALIZER_NODE_CONFIG` is required to enable the visualizer window. All operations are available in debug flavor. When using shared nodes, use the cognitiveservices.vision.spatialanalysis.debug operation and add `VISUALIZER_NODE_CONFIG` to the instance parameters.
+
+ ```
+ "zonecrossing": {
+ "operationId" : "cognitiveservices.vision.spatialanalysis-personcrossingpolygon.debug",
+ "version": 1,
+ "enabled": true,
+ "parameters": {
+ "VIDEO_URL": "Replace http url here",
+ "VIDEO_SOURCE_ID": "zonecrossingcamera",
+ "VIDEO_IS_LIVE": false,
+ "VIDEO_DECODE_GPU_INDEX": 0,
+ "DETECTOR_NODE_CONFIG": "{ \"gpu_index\": 0 }",
+ "CAMERACALIBRATOR_NODE_CONFIG": "{ \"gpu_index\": 0}",
+ "VISUALIZER_NODE_CONFIG": "{ \"show_debug_video\": true }",
+ "SPACEANALYTICS_CONFIG": "{\"zones\":[{\"name\":\"queue\",\"polygon\":[[0.3,0.3],[0.3,0.9],[0.6,0.9],[0.6,0.3],[0.3,0.3]], \"threshold\":35.0}]}"
+ }
+ }
+ ```
+
+5. Redeploy and you will see the visualizer window on the host computer.
+6. After the deployment has completed, you might have to copy the `.Xauthority` file from the host computer to the container and restart it. In the sample below, `peopleanalytics` is the name of the container on the host computer.
+
+ ```bash
+ sudo docker cp $XAUTHORITY peopleanalytics:/root/.Xauthority
+ sudo docker stop peopleanalytics
+ sudo docker start peopleanalytics
+ xhost +
+ ```
+
+## Collect system health telemetry
+
+Telegraf is an open source image that works with Spatial Analysis, and is available in the Microsoft Container Registry. It takes the following inputs and sends them to Azure Monitor. The telegraf module can be built with desired custom inputs and outputs. The telegraf module configuration in Spatial Analysis is part of the deployment manifest (linked above). This module is optional and can be removed from the manifest if you don't need it.
+
+Inputs:
+1. Spatial Analysis Metrics
+2. Disk Metrics
+3. CPU Metrics
+4. Docker Metrics
+5. GPU Metrics
+
+Outputs:
+1. Azure Monitor
+
+The supplied Spatial Analysis telegraf module will publish all the telemetry data emitted by the Spatial Analysis container to Azure Monitor. See the [Azure Monitor overview](../../azure-monitor/overview.md) for information on adding Azure Monitor to your subscription.
+
+After setting up Azure Monitor, you will need to create credentials that enable the module to send telemetry. You can use the Azure portal to create a new Service Principal, or use the Azure CLI command below to create one.
+
+> [!NOTE]
+> This command requires you to have Owner privileges on the subscription.
+
+```azurecli
+# Find your Azure IoT Hub resource ID by running this command. The resource ID should start with something like
+# "/subscriptions/b60d6458-1234-4be4-9885-c7e73af9ced8/resourceGroups/..."
+az iot hub list
+
+# Create a Service Principal with `Monitoring Metrics Publisher` role in the IoTHub resource:
+# Save the output from this command. The values will be used in the deployment manifest. The password won't be shown again so make sure to write it down
+az ad sp create-for-rbac --role="Monitoring Metrics Publisher" --name "<principal name>" --scopes="<resource ID of IoT Hub>"
+```
+
+In the deployment manifest for your [Azure Stack Edge device](https://go.microsoft.com/fwlink/?linkid=2142179), [desktop machine](https://go.microsoft.com/fwlink/?linkid=2152270), or [Azure VM with GPU](https://go.microsoft.com/fwlink/?linkid=2152189), look for the *telegraf* module, and replace the following values with the Service Principal information from the previous step and redeploy.
+
+```json
+
+"telegraf": {
+  "settings": {
+  "image": "mcr.microsoft.com/azure-cognitive-services/vision/spatial-analysis/telegraf:1.0",
+  "createOptions": "{\"HostConfig\":{\"Runtime\":\"nvidia\",\"NetworkMode\":\"azure-iot-edge\",\"Memory\":33554432,\"Binds\":[\"/var/run/docker.sock:/var/run/docker.sock\"]}}"
+},
+"type": "docker",
+"env": {
+ "AZURE_TENANT_ID": {
+ "value": "<Tenant Id>"
+ },
+ "AZURE_CLIENT_ID": {
+ "value": "Application Id"
+ },
+ "AZURE_CLIENT_SECRET": {
+ "value": "<Password>"
+ },
+ "region": {
+ "value": "<Region>"
+ },
+ "resource_id": {
+ "value": "/subscriptions/{subscriptionId}/resourceGroups/{resoureGroupName}/providers/Microsoft.Devices/IotHubs/{IotHub}"
+ },
+...
+```
+
+Once the telegraf module is deployed, the reported metrics can be accessed either through the Azure Monitor service, or by selecting **Monitoring** in the IoT Hub on the Azure portal.
++
+### System health events
+
+| Event Name | Description |
+|--|-|
+| archon_exit | Sent when a user changes the Spatial Analysis module status from *running* to *stopped*. |
+| archon_error | Sent when any of the processes inside the container crash. This is a critical error. |
+| InputRate | The rate at which the graph processes video input. Reported every 5 minutes. |
+| OutputRate | The rate at which the graph outputs AI insights. Reported every 5 minutes. |
+| archon_allGraphsStarted | Sent when all graphs have finished starting up. |
+| archon_configchange | Sent when a graph configuration has changed. |
+| archon_graphCreationFailed | Sent when the graph with the reported `graphId` fails to start. |
+| archon_graphCreationSuccess | Sent when the graph with the reported `graphId` starts successfully. |
+| archon_graphCleanup | Sent when the graph with the reported `graphId` cleans up and exits. |
+| archon_graphHeartbeat | Heartbeat sent every minute for every graph of a skill. |
+| archon_apiKeyAuthFail | Sent when the Vision resource key fails to authenticate the container for more than 24 hours, due to the following reasons: Out of Quota, Invalid, Offline. |
+| VideoIngesterHeartbeat | Sent every hour to indicate that video is streamed from the Video source, with the number of errors in that hour. Reported for each graph. |
+| VideoIngesterState | Reports *Stopped* or *Started* for video streaming. Reported for each graph. |
+
+## Troubleshooting an IoT Edge Device
+
+You can use `iotedge` command line tool to check the status and logs of the running modules. For example:
+* `iotedge list`: Reports a list of running modules.
+ You can further check for errors with `iotedge logs edgeAgent`. If `iotedge` gets stuck, you can try restarting it with `iotedge restart edgeAgent`
+* `iotedge logs <module-name>`
+* `iotedge restart <module-name>` to restart a specific module
+
+## Collect log files with the diagnostics container
+
+Spatial Analysis generates Docker debugging logs that you can use to diagnose runtime issues, or include in support tickets. The Spatial Analysis diagnostics module is available in the Microsoft Container Registry for you to download. In the manifest deployment file for your [Azure Stack Edge Device](https://go.microsoft.com/fwlink/?linkid=2142179), [desktop machine](https://go.microsoft.com/fwlink/?linkid=2152270), or [Azure VM with GPU](https://go.microsoft.com/fwlink/?linkid=2152189) look for the *diagnostics* module.
+
+In the "env" section add the following configuration:
+
+```json
+"diagnostics": {  
+  "settings": {
+  "image": "mcr.microsoft.com/azure-cognitive-services/vision/spatial-analysis/diagnostics:1.0",
+  "createOptions": "{\"HostConfig\":{\"Mounts\":[{\"Target\":\"/usr/bin/docker\",\"Source\":\"/home/data/docker\",\"Type\":\"bind\"},{\"Target\":\"/var/run\",\"Source\":\"/run\",\"Type\":\"bind\"}],\"LogConfig\":{\"Config\":{\"max-size\":\"500m\"}}}}"
+  }
+```
+
+To optimize logs uploaded to a remote endpoint, such as Azure Blob Storage, we recommend maintaining a small file size. See the example below for the recommended Docker logs configuration.
+
+```json
+{
+ "HostConfig": {
+ "LogConfig": {
+ "Config": {
+ "max-size": "500m",
+ "max-file": "1000"
+ }
+ }
+ }
+}
+```
+
+### Configure the log level
+
+Log level configuration allows you to control the verbosity of the generated logs. Supported log levels are: `none`, `verbose`, `info`, `warning`, and `error`. The default log level for both nodes and platform is `info`.
+
+Log levels can be modified globally by setting the `ARCHON_LOG_LEVEL` environment variable to one of the allowed values.
+It can also be set through the IoT Edge Module Twin document either globally, for all deployed skills, or for every specific skill by setting the values for `platformLogLevel` and `nodesLogLevel` as shown below.
+
+```json
+{
+ "version": 1,
+ "properties": {
+ "desired": {
+ "globalSettings": {
+ "platformLogLevel": "verbose"
+ },
+ "graphs": {
+ "samplegraph": {
+ "nodesLogLevel": "verbose",
+ "platformLogLevel": "verbose"
+ }
+ }
+ }
+ }
+}
+```
+
+### Collecting Logs
+
+> [!NOTE]
+> The `diagnostics` module doesn't affect the logging content; it only assists in collecting, filtering, and uploading existing logs.
+> You must have Docker API version 1.40 or higher to use this module.
+
+The sample deployment manifest file for your [Azure Stack Edge device](https://go.microsoft.com/fwlink/?linkid=2142179), [desktop machine](https://go.microsoft.com/fwlink/?linkid=2152270), or [Azure VM with GPU](https://go.microsoft.com/fwlink/?linkid=2152189) includes a module named `diagnostics` that collects and uploads logs. This module is disabled by default and should be enabled through the IoT Edge module configuration when you need to access logs.
+
+The `diagnostics` collection is on-demand and controlled via an IoT Edge direct method, and can send logs to an Azure Blob Storage.
+
+### Configure diagnostics upload targets
+
+From the IoT Edge portal, select your device and then the **diagnostics** module. In the sample Deployment manifest file for your [Azure Stack Edge device](https://go.microsoft.com/fwlink/?linkid=2142179), [desktop machines](https://go.microsoft.com/fwlink/?linkid=2152270), or [Azure VM with GPU](https://go.microsoft.com/fwlink/?linkid=2152189) look for the **Environment Variables** section for diagnostics, named `env`, and add the following information:
+
+**Configure Upload to Azure Blob Storage**
+
+1. Create your own Azure Blob Storage account, if you haven't already.
+2. Get the **Connection String** for your storage account from the Azure portal. It will be located in **Access Keys**.
+3. Spatial Analysis logs will be automatically uploaded into a Blob Storage container named *rtcvlogs* with the following file name format: `{CONTAINER_NAME}/{START_TIME}-{END_TIME}-{QUERY_TIME}.log`.
+
+```json
+"env":{
+ "IOTEDGE_WORKLOADURI":"fd://iotedge.socket",
+ "AZURE_STORAGE_CONNECTION_STRING":"XXXXXX", //from the Azure Blob Storage account
+ "ARCHON_LOG_LEVEL":"info"
+}
+```
+
+### Uploading Spatial Analysis logs
+
+Logs are uploaded on-demand with the `getRTCVLogs` IoT Edge method, in the `diagnostics` module.
++
+1. Go to your IoT Hub portal page, select **Edge Devices**, then select your device and your diagnostics module.
+2. Go to the details page of the module and select the ***direct method*** tab.
+3. Type `getRTCVLogs` in **Method Name**, and a JSON format string in the payload. You can enter `{}`, which is an empty payload.
+4. Set the connection and method timeouts, and select **Invoke Method**.
+5. Select your target container, and build a payload json string using the parameters described in the **Logging syntax** section. Select **Invoke Method** to perform the request.
+
+>[!NOTE]
+> Invoking the `getRTCVLogs` method with an empty payload will return a list of all containers deployed on the device. The method name is case sensitive. You will get a 501 error if an incorrect method name is given.
+
+![getRTCVLogs Direct method page](./media/spatial-analysis/direct-log-collection.png)
+
+
+### Logging syntax
+
+The below table lists the parameters you can use when querying logs.
+
+| Keyword | Description | Default Value |
+|--|--|--|
+| StartTime | Desired logs start time, in milliseconds UTC. | `-1`, the start of the container's runtime. When `[-1,-1]` is used as a time range, the API returns logs from the last one hour.|
+| EndTime | Desired logs end time, in milliseconds UTC. | `-1`, the current time. When the `[-1,-1]` time range is used, the API returns logs from the last one hour. |
+| ContainerId | Target container for fetching logs.| `null`, when there is no container ID. The API returns all available containers information with IDs.|
+| DoPost | Perform the upload operation. When this is set to `false`, it performs the requested operation and returns the upload size without performing the upload. When set to `true`, it will initiate the asynchronous upload of the selected logs | `false`, do not upload.|
+| Throttle | Indicate how many lines of logs to upload per batch | `1000`, Use this parameter to adjust post speed. |
+| Filters | Filters logs to be uploaded | `null`, filters can be specified as key value pairs based on the Spatial Analysis logs structure: `[UTC, LocalTime, LOGLEVEL,PID, CLASS, DATA]`. For example: `{"TimeFilter":[-1,1573255761112]}, {"CLASS":["myNode"]}`|
+
+The following table lists the attributes in the query response.
+
+| Keyword | Description|
+|--|--|
+|DoPost| Either *true* or *false*. Indicates if logs have been uploaded or not. When you choose not to upload logs, the API returns information ***synchronously***. When you choose to upload logs, the API returns 200, if the request is valid, and starts uploading logs ***asynchronously***.|
+|TimeFilter| Time filter applied to the logs.|
+|ValueFilters| Keywords filters applied to the logs. |
+|TimeStamp| Method execution start time. |
+|ContainerId| Target container ID. |
+|FetchCounter| Total number of log lines. |
+|FetchSizeInByte| Total amount of log data in bytes. |
+|MatchCounter| Valid number of log lines. |
+|MatchSizeInByte| Valid amount of log data in bytes. |
+|FilterCount| Total number of log lines after applying filter. |
+|FilterSizeInByte| Total amount of log data in bytes after applying filter. |
+|FetchLogsDurationInMiliSec| Fetch operation duration. |
+|PaseLogsDurationInMiliSec| Filter operation duration. |
+|PostLogsDurationInMiliSec| Post operation duration. |
+
+#### Example request
+
+```json
+{
+ "StartTime": -1,
+ "EndTime": -1,
+ "ContainerId": "5fa17e4d8056e8d16a5a998318716a77becc01b36fde25b3de9fde98a64bf29b",
+ "DoPost": false,
+ "Filters": null
+}
+```
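+
+To send this same request from the command line instead of the portal, you can invoke the direct method with the Azure CLI. This is a sketch and assumes the `azure-iot` extension is installed; remember that the method name is case sensitive.
+
+```bash
+az iot hub invoke-module-method --hub-name "<iothub-name>" --device-id "<device-name>" \
+  --module-id diagnostics --method-name getRTCVLogs \
+  --method-payload '{"StartTime":-1,"EndTime":-1,"ContainerId":"<container-id>","DoPost":false,"Filters":null}'
+```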
+
+#### Example response
+
+```json
+{
+ "status": 200,
+ "payload": {
+ "DoPost": false,
+ "TimeFilter": [-1, 1581310339411],
+ "ValueFilters": {},
+ "Metas": {
+ "TimeStamp": "2020-02-10T04:52:19.4365389+00:00",
+ "ContainerId": "5fa17e4d8056e8d16a5a998318716a77becc01b36fde25b3de9fde98a64bf29b",
+ "FetchCounter": 61,
+ "FetchSizeInByte": 20470,
+ "MatchCounter": 61,
+ "MatchSizeInByte": 20470,
+ "FilterCount": 61,
+ "FilterSizeInByte": 20470,
+ "FetchLogsDurationInMiliSec": 0,
+ "PaseLogsDurationInMiliSec": 0,
+ "PostLogsDurationInMiliSec": 0
+ }
+ }
+}
+```
+
+Check the fetched logs' line counts, times, and sizes. If those look good, set ***DoPost*** to `true` to push the logs with the same filters to the destinations.
+
+You can export logs from the Azure Blob Storage when troubleshooting issues.
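+
+For example, a hedged sketch for listing and downloading the uploaded log blobs with the Azure CLI (the `rtcvlogs` container name comes from the naming convention above; blob names will vary):
+
+```bash
+# List the uploaded Spatial Analysis log blobs
+az storage blob list --connection-string "<storage-connection-string>" \
+  --container-name rtcvlogs --query "[].name" --output tsv
+
+# Download one of them for inspection
+az storage blob download --connection-string "<storage-connection-string>" \
+  --container-name rtcvlogs --name "<blob-name>" --file "./spatial-analysis.log"
+```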
+
+## Troubleshooting the Azure Stack Edge device
+
+The following section is provided for help with debugging and verification of the status of your Azure Stack Edge device.
+
+### Access the Kubernetes API endpoint
+
+1. In the local UI of your device, go to the **Devices** page.
+2. Under **Device endpoints**, copy the Kubernetes API service endpoint. This endpoint is a string in the following format: `https://compute.[device-IP-address]`.
+3. Save the endpoint string. You will use this later when configuring `kubectl` to access the Kubernetes cluster.
+
+### Connect to PowerShell interface
+
+Connect remotely from a Windows client. After the Kubernetes cluster is created, you can manage the applications via this cluster. You will need to connect to the PowerShell interface of the device. Depending on the operating system of the client, the procedures to remotely connect to the device may be different. The following steps are for a Windows client running PowerShell.
+
+> [!TIP]
+> * Before you begin, make sure that your Windows client is running Windows PowerShell 5.0 or later.
+> * PowerShell is also [available on Linux](/powershell/scripting/install/installing-powershell-core-on-linux).
+
+1. Run a Windows PowerShell session as an Administrator.
+ 1. Make sure that the Windows Remote Management service is running on your client. At the command prompt, type `winrm quickconfig`.
+
+2. Assign a variable for the device IP address. For example, `$ip = "<device-ip-address>"`.
+
+3. Use the following command to add the IP address of your device to the client's trusted hosts list.
+
+ ```powershell
+ Set-Item WSMan:\localhost\Client\TrustedHosts $ip -Concatenate -Force
+ ```
+
+4. Start a Windows PowerShell session on the device.
+
+ ```powershell
+ Enter-PSSession -ComputerName $ip -Credential $ip\EdgeUser -ConfigurationName Minishell
+ ```
+
+5. Provide the password when prompted. Use the same password that is used to sign into the local web interface. The default local web interface password is `Password1`.
+
+### Access the Kubernetes cluster
+
+After the Kubernetes cluster is created, you can use the `kubectl` command line tool to access the cluster.
+
+1. Create a new namespace.
+
+ ```powershell
+ New-HcsKubernetesNamespace -Namespace <namespace-name>
+ ```
+
+2. Create a user and get a config file. This command will output configuration information for the Kubernetes cluster. Copy this information and save it in a file named *config*. Do not save the file with a file extension.
+
+ ```powershell
+ New-HcsKubernetesUser -UserName <user-name>
+ ```
+
+3. Add the *config* file to the *.kube* folder in your user profile on the local machine.
+
+4. Associate the namespace with the user you created.
+
+ ```powershell
+ Grant-HcsKubernetesNamespaceAccess -Namespace <namespace-name> -UserName <user-name>
+ ```
+
+5. Install `kubectl` on your Windows client using the following command:
+
+ ```powershell
+ curl https://storage.googleapis.com/kubernetes-release/release/v1.15.2/bin/windows/amd64/kubectl.exe -O kubectl.exe
+ ```
+
+6. Add a DNS entry to the hosts file on your system.
+ 1. Run Notepad as administrator and open the *hosts* file located at `C:\windows\system32\drivers\etc\hosts`.
+ 2. Create an entry in the hosts file with the device IP address and DNS domain you got from the **Device** page in the local UI. The endpoint you should use will look similar to: `https://compute.asedevice.microsoftdatabox.com/10.100.10.10`.
+
+7. Verify you can connect to the Kubernetes pods.
+
+ ```powershell
+ kubectl get pods -n "iotedge"
+ ```
+
+To get container logs, run the following command:
+
+```powershell
+kubectl logs <pod-name> -n <namespace> --all-containers
+```
+
+### Useful commands
+
+|Command |Description |
+|||
+|`Get-HcsKubernetesUserConfig -AseUser` | Generates a Kubernetes configuration file. When using the command, copy the information into a file named *config*. Do not save the file with a file extension. |
+| `Get-HcsApplianceInfo` | Returns information about your device. |
+| `Enable-HcsSupportAccess` | Generates access credentials to start a support session. |
++
+## How to file a support ticket for Spatial Analysis
+
+If you need more support in finding a solution to a problem you're having with the Spatial Analysis container, follow these steps to fill out and submit a support ticket. Our team will get back to you with additional guidance.
+
+### Fill out the basics
+Create a new support ticket at the [New support request](https://portal.azure.com/#blade/Microsoft_Azure_Support/HelpAndSupportBlade/newsupportrequest) page. Follow the prompts to fill in the following parameters:
+
+![Support basics](./media/support-ticket-page-1-final.png)
+
+1. Set **Issue Type** to be `Technical`.
+2. Select the subscription that you are utilizing to deploy the Spatial Analysis container.
+3. Select `My services` and select `Azure AI services` as the service.
+4. Select the resource that you are utilizing to deploy the Spatial Analysis container.
+5. Write a brief description detailing the problem you are facing.
+6. Select `Spatial Analysis` as your problem type.
+7. Select the appropriate subtype from the drop down.
+8. Select **Next: Solutions** to move on to the next page.
+
+### Recommended solutions
+The next stage will offer recommended solutions for the problem type that you selected. These solutions will solve the most common problems, but if none of them resolves your issue, select **Next: Details** to go to the next step.
+
+### Details
+On this page, add some additional details about the problem you've been facing. Be sure to include as much detail as possible, as this will help our engineers better narrow down the issue. Include your preferred contact method and the severity of the issue so we can contact you appropriately, and select **Next: Review + create** to move to the next step.
+
+### Review and create
+Review the details of your support request to ensure everything is accurate and represents the problem effectively. Once you are ready, select **Create** to send the ticket to our team! You will receive an email confirmation once your ticket is received, and our team will work to get back to you as soon as possible. You can view the status of your ticket in the Azure portal.
+
+## Next steps
+
+* [Deploy a People Counting web application](spatial-analysis-web-app.md)
+* [Configure Spatial Analysis operations](./spatial-analysis-operations.md)
+* [Camera placement guide](spatial-analysis-camera-placement.md)
+* [Zone and line placement guide](spatial-analysis-zone-line-placement.md)
ai-services Spatial Analysis Operations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/computer-vision/spatial-analysis-operations.md
+
+ Title: Spatial Analysis operations
+
+description: The Spatial Analysis operations.
++++++ Last updated : 02/02/2022++++
+# Spatial Analysis operations
+
+Spatial Analysis lets you analyze video streams from camera devices in real time. For each camera device you configure, the Spatial Analysis operations will generate an output stream of JSON messages sent to your instance of Azure IoT Hub.
+
+The Spatial Analysis container implements the following operations. You can configure these operations in the deployment manifest of your container.
+
+| Operation Identifier| Description|
+|||
+| cognitiveservices.vision.spatialanalysis-personcount | Counts people in a designated zone in the camera's field of view. The zone must be fully covered by a single camera in order for **PersonCount** to record an accurate total. <br> Emits an initial _personCountEvent_ event and then _personCountEvent_ events when the count changes. |
+| cognitiveservices.vision.spatialanalysis-personcrossingline | Tracks when a person crosses a designated line in the camera's field of view. <br>Emits a _personLineEvent_ event when the person crosses the line and provides directional info.
+| cognitiveservices.vision.spatialanalysis-personcrossingpolygon | Emits a _personZoneEnterExitEvent_ event when a person enters or exits the designated zone and provides directional info with the side of the zone that was crossed. Emits a _personZoneDwellTimeEvent_ when the person exits the zone and provides directional info as well as the number of milliseconds the person spent inside the zone. |
+| cognitiveservices.vision.spatialanalysis-persondistance | Tracks when people violate a minimum-distance rule. <br> Emits a _personDistanceEvent_ periodically with the location of each distance violation. |
+| cognitiveservices.vision.spatialanalysis | The generic operation, which can be used to run all scenarios mentioned above. This option is more useful when you want to run multiple scenarios on the same camera or use system resources (the GPU, for example) more efficiently. |
+
+All of the above operations are also available in the `.debug` version of the service (for example, `cognitiveservices.vision.spatialanalysis-personcount.debug`). Debug has the capability to visualize video frames as they're being processed. You'll need to run `xhost +` on the host computer to enable the visualization of video frames and events.
++
+> [!IMPORTANT]
+> The Azure AI Vision AI models detect and locate human presence in video footage and output a bounding box around the human body. The AI models do not attempt to discover the identities or demographics of individuals.
+
+## Operation parameters
+
+The following are the parameters required by each of the Spatial Analysis operations.
+
+| Operation parameters| Description|
+|||
+| `Operation ID` | The Operation Identifier from table above.|
+| `enabled` | Boolean: true or false|
+| `VIDEO_URL`| The RTSP url for the camera device (Example: `rtsp://username:password@url`). Spatial Analysis supports H.264 encoded stream either through RTSP, http, or mp4. Video_URL can be provided as an obfuscated base64 string value using AES encryption, and if the video url is obfuscated then `KEY_ENV` and `IV_ENV` need to be provided as environment variables. Sample utility to generate keys and encryption can be found [here](/dotnet/api/system.security.cryptography.aesmanaged). |
+| `VIDEO_SOURCE_ID` | A friendly name for the camera device or video stream. This will be returned with the event JSON output.|
+| `VIDEO_IS_LIVE`| True for camera devices; false for recorded videos.|
+| `VIDEO_DECODE_GPU_INDEX`| Which GPU to decode the video frame. By default it is 0. Should be the same as the `gpu_index` in other node config like `DETECTOR_NODE_CONFIG` and `CAMERACALIBRATOR_NODE_CONFIG`.|
+| `INPUT_VIDEO_WIDTH` | Input video/stream's frame width (for example, 1920). This is an optional field and if provided, the frame will be scaled to this dimension while preserving the aspect ratio.|
+| `DETECTOR_NODE_CONFIG` | JSON indicating which GPU to run the detector node on. It should be in the following format: `"{ \"gpu_index\": 0 }",`|
+| `TRACKER_NODE_CONFIG` | JSON indicating whether to compute speed in the tracker node or not. It should be in the following format: `"{ \"enable_speed\": true }",`|
+| `CAMERA_CONFIG` | JSON indicating the calibrated camera parameters for multiple cameras. If the skill you use requires calibration and you already have the camera parameters, you can use this config to provide them directly. It should be in the following format: `"{ \"cameras\": [{\"source_id\": \"endcomputer.0.persondistancegraph.detector+end_computer1\", \"camera_height\": 13.105561256408691, \"camera_focal_length\": 297.60003662109375, \"camera_tiltup_angle\": 0.9738943576812744}] }"`. The `source_id` is used to identify each camera and can be obtained from the `source_info` of the events the container publishes. It only takes effect when `do_calibration=false` in `DETECTOR_NODE_CONFIG`.|
+| `CAMERACALIBRATOR_NODE_CONFIG` | JSON indicating which GPU to run the camera calibrator node on and whether to use calibration or not. It should be in the following format: `"{ \"gpu_index\": 0, \"do_calibration\": true, \"enable_orientation\": true}",`|
+| `CALIBRATION_CONFIG` | JSON indicating parameters to control how the camera calibration works. It should be in the following format: `"{\"enable_recalibration\": true, \"quality_check_frequency_seconds\": 86400}",`|
+| `SPACEANALYTICS_CONFIG` | JSON configuration for zone and line as outlined below.|
+| `ENABLE_FACE_MASK_CLASSIFIER` | `True` to enable detecting people wearing face masks in the video stream, `False` to disable it. By default this is disabled. Face mask detection requires input video width parameter to be 1920 `"INPUT_VIDEO_WIDTH": 1920`. The face mask attribute won't be returned if detected people aren't facing the camera or are too far from it. For more information, see the [camera placement](spatial-analysis-camera-placement.md). |
+| `STATIONARY_TARGET_REMOVER_CONFIG` | JSON indicating the parameters for stationary target removal, which adds the capability to learn and ignore long-term stationary false positive targets such as mannequins or people in pictures. Configuration should be in the following format: `"{\"enable\": true, \"bbox_dist_threshold-in_pixels\": 5, \"buffer_length_in_seconds\": 3600, \"filter_ratio\": 0.2 }"`|
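+
+If you prefer not to store the RTSP URL in plain text, `VIDEO_URL` can be supplied as an AES-encrypted, base64-encoded value together with the `KEY_ENV` and `IV_ENV` environment variables, as noted in the table above. The following Python sketch shows one way such a value could be produced. It's only a sketch: the exact cipher mode and padding expected by the container aren't specified here, so AES-CBC with PKCS7 padding and the helper name are assumptions for illustration.
+
+```python
+# Hypothetical sketch: AES-encrypt and base64-encode an RTSP URL for VIDEO_URL.
+# AES-CBC with PKCS7 padding is an assumption; confirm the scheme your deployment expects.
+import base64
+import os
+
+from cryptography.hazmat.primitives import padding
+from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes
+
+def obfuscate_rtsp_url(rtsp_url: str) -> dict:
+    key = os.urandom(32)   # 256-bit AES key
+    iv = os.urandom(16)    # 128-bit initialization vector
+
+    padder = padding.PKCS7(algorithms.AES.block_size).padder()
+    padded = padder.update(rtsp_url.encode("utf-8")) + padder.finalize()
+
+    encryptor = Cipher(algorithms.AES(key), modes.CBC(iv)).encryptor()
+    ciphertext = encryptor.update(padded) + encryptor.finalize()
+
+    # VIDEO_URL carries the encrypted URL; KEY_ENV and IV_ENV are passed as environment variables.
+    return {
+        "VIDEO_URL": base64.b64encode(ciphertext).decode("ascii"),
+        "KEY_ENV": base64.b64encode(key).decode("ascii"),
+        "IV_ENV": base64.b64encode(iv).decode("ascii"),
+    }
+
+print(obfuscate_rtsp_url("rtsp://username:password@camera.local/stream"))
+```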
+
+### Detector node parameter settings
+
+The following is an example of the `DETECTOR_NODE_CONFIG` parameters for all Spatial Analysis operations.
+
+```json
+{
+  "gpu_index": 0,
+  "enable_breakpad": false
+}
+```
+
+| Name | Type| Description|
+||||
+| `gpu_index` | string| The GPU index on which this operation will run.|
+| `enable_breakpad`| bool | Indicates whether to enable breakpad, which is used to generate a crash dump for debug use. It is `false` by default. If you set it to `true`, you also need to add `"CapAdd": ["SYS_PTRACE"]` in the `HostConfig` part of the container `createOptions`. By default, the crash dump is uploaded to the [RealTimePersonTracking](https://appcenter.ms/orgs/Microsoft-Organization/apps/RealTimePersonTracking/crashes/errors?version=&appBuild=&period=last90Days&status=&errorType=all&sortCol=lastError&sortDir=desc) AppCenter app. If you want the crash dumps to be uploaded to your own AppCenter app, override the environment variable `RTPT_APPCENTER_APP_SECRET` with your app's app secret.|
+
+### Camera calibration node parameter settings
+The following is an example of the `CAMERACALIBRATOR_NODE_CONFIG` parameters for all spatial analysis operations.
+
+```json
+{
+ "gpu_index": 0,
+ "do_calibration": true,
+ "enable_breakpad": false,
+ "enable_orientation": true
+}
+```
+
+| Name | Type | Description |
+||||
+| `do_calibration` | string | Indicates that calibration is turned on. `do_calibration` must be true for **cognitiveservices.vision.spatialanalysis-persondistance** to function properly. `do_calibration` is set by default to `True`. |
+| `enable_breakpad`| bool | Indicates whether to enable breakpad, which is used to generate a crash dump for debug use. It is `false` by default. If you set it to `true`, you also need to add `"CapAdd": ["SYS_PTRACE"]` in the `HostConfig` part of the container `createOptions`. By default, the crash dump is uploaded to the [RealTimePersonTracking](https://appcenter.ms/orgs/Microsoft-Organization/apps/RealTimePersonTracking/crashes/errors?version=&appBuild=&period=last90Days&status=&errorType=all&sortCol=lastError&sortDir=desc) AppCenter app. If you want the crash dumps to be uploaded to your own AppCenter app, override the environment variable `RTPT_APPCENTER_APP_SECRET` with your app's app secret.|
+| `enable_orientation` | bool | Indicates whether you want to compute the orientation for the detected people or not. `enable_orientation` is set by default to `True`. |
+
+### Calibration config
+
+This is an example of the `CALIBRATION_CONFIG` parameters for all spatial analysis operations.
+
+```json
+{
+ "enable_recalibration": true,
+ "calibration_quality_check_frequency_seconds": 86400,
+ "calibration_quality_check_sample_collect_frequency_seconds": 300,
+ "calibration_quality_check_one_round_sample_collect_num": 10,
+ "calibration_quality_check_queue_max_size": 1000,
+ "calibration_event_frequency_seconds": -1
+}
+```
+
+| Name | Type| Description|
+||||
+| `enable_recalibration` | bool | Indicates whether automatic recalibration is turned on. Default is `true`.|
+| `calibration_quality_check_frequency_seconds` | int | Minimum number of seconds between each quality check to determine whether or not recalibration is needed. Default is `86400` (24 hours). Only used when `enable_recalibration=True`.|
+| `calibration_quality_check_sample_collect_frequency_seconds` | int | Minimum number of seconds between collecting new data samples for recalibration and quality checking. Default is `300` (5 minutes). Only used when `enable_recalibration=True`.|
+| `calibration_quality_check_one_round_sample_collect_num` | int | Minimum number of new data samples to collect per round of sample collection. Default is `10`. Only used when `enable_recalibration=True`.|
+| `calibration_quality_check_queue_max_size` | int | Maximum number of data samples to store when camera model is calibrated. Default is `1000`. Only used when `enable_recalibration=True`.|
+| `calibration_event_frequency_seconds` | int | Output frequency (seconds) of camera calibration events. A value of `-1` indicates that the camera calibration shouldn't be sent unless the camera calibration info has been changed. Default is `-1`.|
+
+### Camera calibration output
+
+The following is an example of the output from camera calibration if enabled. Ellipses indicate more of the same type of objects in a list.
+
+```json
+{
+ "type": "cameraCalibrationEvent",
+ "sourceInfo": {
+ "id": "camera1",
+ "timestamp": "2021-04-20T21:15:59.100Z",
+ "width": 512,
+ "height": 288,
+ "frameId": 531,
+ "cameraCalibrationInfo": {
+ "status": "Calibrated",
+ "cameraHeight": 13.294151306152344,
+ "focalLength": 372.0000305175781,
+ "tiltupAngle": 0.9581864476203918,
+ "lastCalibratedTime": "2021-04-20T21:15:59.058"
+ }
+ },
+ "zonePlacementInfo": {
+ "optimalZoneRegion": {
+ "type": "POLYGON",
+ "points": [
+ {
+ "x": 0.8403755868544601,
+ "y": 0.5515320334261838
+ },
+ {
+ "x": 0.15805946791862285,
+ "y": 0.5487465181058496
+ }
+ ],
+ "name": "optimal_zone_region"
+ },
+ "fairZoneRegion": {
+ "type": "POLYGON",
+ "points": [
+ {
+ "x": 0.7871674491392802,
+ "y": 0.7437325905292479
+ },
+ {
+ "x": 0.22065727699530516,
+ "y": 0.7325905292479109
+ }
+ ],
+ "name": "fair_zone_region"
+ },
+ "uniformlySpacedPersonBoundingBoxes": [
+ {
+ "type": "RECTANGLE",
+ "points": [
+ {
+ "x": 0.0297339593114241,
+ "y": 0.0807799442896936
+ },
+ {
+ "x": 0.10015649452269171,
+ "y": 0.2757660167130919
+ }
+ ]
+ }
+ ],
+ "personBoundingBoxGroundPoints": [
+ {
+ "x": -22.944068908691406,
+ "y": 31.487680435180664
+ }
+ ]
+ }
+}
+```
+
+See [Spatial analysis operation output](#spatial-analysis-operation-output) for details on `source_info`.
+
+| ZonePlacementInfo Field Name | Type| Description|
+||||
+| `optimalZonePolygon` | object| A polygon in the camera image where lines or zones for your operations can be placed for optimal results. <br/> Each value pair represents the x,y for vertices of a polygon. The polygon represents the areas in which people are tracked or counted and polygon points are based on normalized coordinates (0-1), where the top left corner is (0.0, 0.0) and the bottom right corner is (1.0, 1.0).|
+| `fairZonePolygon` | object| A polygon in the camera image where lines or zones for your operations can be placed for good, but possibly not optimal, results. <br/> See `optimalZonePolygon` above for an in-depth explanation of the contents. |
+| `uniformlySpacedPersonBoundingBoxes` | list | A list of bounding boxes of people within the camera image distributed uniformly in real space. Values are based on normalized coordinates (0-1).|
+| `personBoundingBoxGroundPoints` | list | A list of coordinates on the floor plane relative to the camera. Each coordinate corresponds to the bottom right of the bounding box in `uniformlySpacedPersonBoundingBoxes` with the same index. <br/> See the `centerGroundPointX/centerGroundPointY` fields under the [JSON format for cognitiveservices.vision.spatialanalysis-persondistance AI Insights](#json-format-for-cognitiveservicesvisionspatialanalysis-persondistance-ai-insights) section for more details on how coordinates on the floor plane are calculated. |
+
+Example of the zone placement info output visualized on a video frame:
+![Zone placement info visualization](./media/spatial-analysis/zone-placement-info-visualization.png)
+
+The zone placement info provides suggestions for your configurations, but the guidelines in [Camera configuration](#camera-configuration) must still be followed for best results.
+
+### Tracker node parameter settings
+You can configure the speed computation through the tracker node parameter settings.
+
+```json
+{
+  "enable_speed": true,
+  "remove_stationary_objects": true,
+  "stationary_objects_dist_threshold_in_pixels": 5,
+  "stationary_objects_buffer_length_in_seconds": 3600,
+  "stationary_objects_filter_ratio": 0.2
+}
+```
+| Name | Type| Description|
+||||
+| `enable_speed` | bool | Indicates whether you want to compute the speed for the detected people or not. `enable_speed` is set by default to `True`. It is highly recommended that you enable both speed and orientation to have the best estimated values. |
+| `remove_stationary_objects` | bool | Indicates whether you want to remove stationary objects. `remove_stationary_objects` is set by default to True. |
+| `stationary_objects_dist_threshold_in_pixels` | int | The neighborhood distance threshold to decide whether two detection boxes can be treated as the same detection. `stationary_objects_dist_threshold_in_pixels` is set by default to 5. |
+| `stationary_objects_buffer_length_in_seconds` | int | The minimum length of time in seconds that the system has to look back to decide whether a target is a stationary target or not. `stationary_objects_buffer_length_in_seconds` is set by default to 3600. |
+| `stationary_objects_filter_ratio` | float | If a target is repeatedly detected at the same location (as defined by `stationary_objects_dist_threshold_in_pixels`) for more than `stationary_objects_filter_ratio` of the `stationary_objects_buffer_length_in_seconds` interval (for example, 0.2 means 20%, or 720 seconds of the default 3600-second buffer), it is treated as a stationary target. `stationary_objects_filter_ratio` is set by default to 0.2. |
+
+## Spatial Analysis operations configuration and output
+
+### Zone configuration for cognitiveservices.vision.spatialanalysis-personcount
+
+The following is an example of a JSON input for the `SPACEANALYTICS_CONFIG` parameter that configures a zone. You may configure multiple zones for this operation.
+
+```json
+{
+ "zones": [
+ {
+ "name": "lobbycamera",
+ "polygon": [[0.3,0.3], [0.3,0.9], [0.6,0.9], [0.6,0.3], [0.3,0.3]],
+ "events": [
+ {
+ "type": "count",
+ "config": {
+ "trigger": "event",
+ "focus": "footprint"
+ }
+ }
+ ]
+ }
+ ]
+}
+```
+
+| Name | Type| Description|
+||||
+| `zones` | list| List of zones. |
+| `name` | string| Friendly name for this zone.|
+| `polygon` | list| Each value pair represents the x,y for vertices of a polygon. The polygon represents the areas in which people are tracked or counted. Polygon points are based on normalized coordinates (0-1), where the top left corner is (0.0, 0.0) and the bottom right corner is (1.0, 1.0).
+| `threshold` | float| Events are egressed when the person is greater than this number of pixels inside the zone. This is an optional field and value is in ratio (0-1). For example, value 0.0253 will be 13 pixels on a video with image width = 512 (0.0253 X 512 = ~13).|
+| `type` | string| For **cognitiveservices.vision.spatialanalysis-personcount**, this should be `count`.|
+| `trigger` | string| The type of trigger for sending an event. Supported values are `event` for sending events when the count changes or `interval` for sending events periodically, irrespective of whether the count has changed or not.
+| `output_frequency` | int | The rate at which events are egressed. When `output_frequency` = X, every X event is egressed, ex. `output_frequency` = 2 means every other event is output. The `output_frequency` is applicable to both `event` and `interval`. |
+| `focus` | string| The point location within person's bounding box used to calculate events. Focus's value can be `footprint` (the footprint of person), `bottom_center` (the bottom center of person's bounding box), `center` (the center of person's bounding box).|
+
+### Line configuration for cognitiveservices.vision.spatialanalysis-personcrossingline
+
+The following is an example of a JSON input for the `SPACEANALYTICS_CONFIG` parameter that configures a line. You may configure multiple crossing lines for this operation.
+
+```json
+{
+ "lines": [
+ {
+ "name": "doorcamera",
+ "line": {
+ "start": {
+ "x": 0,
+ "y": 0.5
+ },
+ "end": {
+ "x": 1,
+ "y": 0.5
+ }
+ },
+ "events": [
+ {
+ "type": "linecrossing",
+ "config": {
+ "trigger": "event",
+ "focus": "footprint"
+ }
+ }
+ ]
+ }
+ ]
+}
+```
+
+| Name | Type| Description|
+||||
+| `lines` | list| List of lines.|
+| `name` | string| Friendly name for this line.|
+| `line` | list| The definition of the line. This is a directional line allowing you to understand "entry" vs. "exit".|
+| `start` | value pair| x, y coordinates for line's starting point. The float values represent the position of the vertex relative to the top left corner. To calculate the absolute x, y values, you multiply these values with the frame size. |
+| `end` | value pair| x, y coordinates for line's ending point. The float values represent the position of the vertex relative to the top left corner. To calculate the absolute x, y values, you multiply these values with the frame size. |
+| `threshold` | float| Events are egressed when the person is greater than this number of pixels inside the zone. This is an optional field and value is in ratio (0-1). For example, value 0.0253 will be 13 pixels on a video with image width = 512 (0.0253 X 512 = ~13).|
+| `type` | string| For **cognitiveservices.vision.spatialanalysis-personcrossingline**, this should be `linecrossing`.|
+|`trigger`|string|The type of trigger for sending an event.<br>Supported Values: "event": fire when someone crosses the line.|
+| `focus` | string| The point location within person's bounding box used to calculate events. Focus's value can be `footprint` (the footprint of person), `bottom_center` (the bottom center of person's bounding box), `center` (the center of person's bounding box). The default value is footprint.|
+
+### Zone configuration for cognitiveservices.vision.spatialanalysis-personcrossingpolygon
+
+This is an example of a JSON input for the `SPACEANALYTICS_CONFIG` parameter that configures a zone. You may configure multiple zones for this operation.
+
+```json
+{
+"zones":[
+ {
+ "name": "queuecamera",
+ "polygon": [[0.3,0.3], [0.3,0.9], [0.6,0.9], [0.6,0.3], [0.3,0.3]],
+ "events":[{
+ "type": "zonecrossing",
+ "config":{
+ "trigger": "event",
+ "focus": "footprint"
+ }
+ }]
+ },
+ {
+ "name": "queuecamera1",
+ "polygon": [[0.3,0.3], [0.3,0.9], [0.6,0.9], [0.6,0.3], [0.3,0.3]],
+ "events":[{
+ "type": "zonedwelltime",
+ "config":{
+ "trigger": "event",
+ "focus": "footprint"
+ }
+ }]
+ }]
+}
+```
+
+| Name | Type| Description|
+||||
+| `zones` | list| List of zones. |
+| `name` | string| Friendly name for this zone.|
+| `polygon` | list| Each value pair represents the x,y for vertices of polygon. The polygon represents the areas in which people are tracked or counted. The float values represent the position of the vertex relative to the top left corner. To calculate the absolute x, y values, you multiply these values with the frame size.
+| `target_side` | int| Specifies a side of the zone defined by `polygon` to measure how long people face that side while in the zone. 'dwellTimeForTargetSide' will output that estimated time. Each side is a numbered edge between the two vertices of the polygon that represents your zone. For example, the edge between the first two vertices of the polygon represents the first side, 'side'=1. The value of `target_side` is between `[0,N-1]` where `N` is the number of sides of the `polygon`. This is an optional field. |
+| `threshold` | float| Events are egressed when the person is greater than this number of pixels inside the zone. This is an optional field and value is in ratio (0-1). For example, value 0.074 will be 38 pixels on a video with image width = 512 (0.074 X 512 = ~38).|
+| `type` | string| For **cognitiveservices.vision.spatialanalysis-personcrossingpolygon** this should be `zonecrossing` or `zonedwelltime`.|
+| `trigger`|string|The type of trigger for sending an event<br>Supported Values: "event": fire when someone enters or exits the zone.|
+| `focus` | string| The point location within person's bounding box used to calculate events. Focus's value can be `footprint` (the footprint of person), `bottom_center` (the bottom center of person's bounding box), `center` (the center of person's bounding box). The default value is footprint.|
+
+### Zone configuration for cognitiveservices.vision.spatialanalysis-persondistance
+
+This is an example of a JSON input for the `SPACEANALYTICS_CONFIG` parameter that configures a zone for **cognitiveservices.vision.spatialanalysis-persondistance**. You may configure multiple zones for this operation.
+
+```json
+{
+"zones":[{
+ "name": "lobbycamera",
+ "polygon": [[0.3,0.3], [0.3,0.9], [0.6,0.9], [0.6,0.3], [0.3,0.3]],
+ "events":[{
+ "type": "persondistance",
+ "config":{
+ "trigger": "event",
+ "output_frequency":1,
+ "minimum_distance_threshold":6.0,
+ "maximum_distance_threshold":35.0,
+ "aggregation_method": "average",
+ "focus": "footprint"
+ }
+ }]
+ }]
+}
+```
+
+| Name | Type| Description|
+||||
+| `zones` | list| List of zones. |
+| `name` | string| Friendly name for this zone.|
+| `polygon` | list| Each value pair represents the x,y for vertices of polygon. The polygon represents the areas in which people are counted and the distance between people is measured. The float values represent the position of the vertex relative to the top left corner. To calculate the absolute x, y values, you multiply these values with the frame size.
+| `threshold` | float| Events are egressed when the person is greater than this number of pixels inside the zone. This is an optional field and value is in ratio (0-1). For example, value 0.0253 will be 13 pixels on a video with image width = 512 (0.0253 X 512 = ~13).|
+| `type` | string| For **cognitiveservices.vision.spatialanalysis-persondistance**, this should be `persondistance`.|
+| `trigger` | string| The type of trigger for sending an event. Supported values are `event` for sending events when the count changes or `interval` for sending events periodically, irrespective of whether the count has changed or not.
+| `output_frequency` | int | The rate at which events are egressed. When `output_frequency` = X, every X event is egressed, ex. `output_frequency` = 2 means every other event is output. The `output_frequency` is applicable to both `event` and `interval`.|
+| `minimum_distance_threshold` | float| A distance in feet that will trigger a "TooClose" event when people are less than that distance apart.|
+| `maximum_distance_threshold` | float| A distance in feet that will trigger a "TooFar" event when people are greater than that distance apart.|
+| `aggregation_method` | string| The method for aggregating the `persondistance` result. Supported values are `mode` and `average`.|
+| `focus` | string| The point location within person's bounding box used to calculate events. Focus's value can be `footprint` (the footprint of person), `bottom_center` (the bottom center of person's bounding box), `center` (the center of person's bounding box).|
+
+### Configuration for cognitiveservices.vision.spatialanalysis
+The following is an example of a JSON input for the `SPACEANALYTICS_CONFIG` parameter that configures a line and zone for **cognitiveservices.vision.spatialanalysis**. You may configure multiple lines/zones for this operation and each line/zone can have different events.
+
+```json
+{
+ "lines": [
+ {
+ "name": "doorcamera",
+ "line": {
+ "start": {
+ "x": 0,
+ "y": 0.5
+ },
+ "end": {
+ "x": 1,
+ "y": 0.5
+ }
+ },
+ "events": [
+ {
+ "type": "linecrossing",
+ "config": {
+ "trigger": "event",
+ "focus": "footprint"
+ }
+ }
+ ]
+ }
+ ],
+ "zones": [
+ {
+ "name": "lobbycamera",
+ "polygon": [[0.3, 0.3],[0.3, 0.9],[0.6, 0.9],[0.6, 0.3],[0.3, 0.3]],
+ "events": [
+ {
+ "type": "persondistance",
+ "config": {
+ "trigger": "event",
+ "output_frequency": 1,
+ "minimum_distance_threshold": 6.0,
+ "maximum_distance_threshold": 35.0,
+ "focus": "footprint"
+ }
+ },
+ {
+ "type": "count",
+ "config": {
+ "trigger": "event",
+ "output_frequency": 1,
+ "focus": "footprint"
+ }
+ },
+ {
+ "type": "zonecrossing",
+ "config": {
+ "focus": "footprint"
+ }
+ },
+ {
+ "type": "zonedwelltime",
+ "config": {
+ "focus": "footprint"
+ }
+ }
+ ]
+ }
+ ]
+}
+```
+
+## Camera configuration
+
+See the [camera placement](spatial-analysis-camera-placement.md) guidelines to learn about more about how to configure zones and lines.
+
+## Spatial Analysis operation output
+
+The events from each operation are egressed to Azure IoT Hub in JSON format.
+
+### JSON format for cognitiveservices.vision.spatialanalysis-personcount AI Insights
+
+Sample JSON for an event output by this operation.
+
+```json
+{
+ "events": [
+ {
+ "id": "b013c2059577418caa826844223bb50b",
+ "type": "personCountEvent",
+ "detectionIds": [
+ "bc796b0fc2534bc59f13138af3dd7027",
+ "60add228e5274158897c135905b5a019"
+ ],
+ "properties": {
+ "personCount": 2
+ },
+ "zone": "lobbycamera",
+ "trigger": "event"
+ }
+ ],
+ "sourceInfo": {
+ "id": "camera_id",
+ "timestamp": "2020-08-24T06:06:57.224Z",
+ "width": 608,
+ "height": 342,
+ "frameId": "1400",
+ "cameraCalibrationInfo": {
+ "status": "Calibrated",
+ "cameraHeight": 10.306597709655762,
+ "focalLength": 385.3199462890625,
+ "tiltupAngle": 1.0969393253326416
+ },
+ "imagePath": ""
+ },
+ "detections": [
+ {
+ "type": "person",
+ "id": "bc796b0fc2534bc59f13138af3dd7027",
+ "region": {
+ "type": "RECTANGLE",
+ "points": [
+ {
+ "x": 0.612683747944079,
+ "y": 0.25340268765276636
+ },
+ {
+ "x": 0.7185954043739721,
+ "y": 0.6425260577285499
+ }
+ ]
+ },
+ "confidence": 0.9559211134910583,
+ "metadata": {
+ "centerGroundPointX": "2.6310102939605713",
+ "centerGroundPointY": "0.0",
+ "groundOrientationAngle": "1.3",
+ "footprintX": "0.7306610584259033",
+ "footprintY": "0.8814966493381893"
+ },
+ "attributes": [
+ {
+ "label": "face_mask",
+ "confidence": 0.99,
+ "task": ""
+ }
+ ]
+ },
+ {
+ "type": "person",
+ "id": "60add228e5274158897c135905b5a019",
+ "region": {
+ "type": "RECTANGLE",
+ "points": [
+ {
+ "x": 0.22326200886776573,
+ "y": 0.17830915618361087
+ },
+ {
+ "x": 0.34922296122500773,
+ "y": 0.6297955429344847
+ }
+ ]
+ },
+ "confidence": 0.9389744400978088,
+ "metadata": {
+ "centerGroundPointX": "2.6310102939605713",
+ "centerGroundPointY": "18.635927200317383",
+ "groundOrientationAngle": "1.3",
+ "footprintX": "0.7306610584259033",
+ "footprintY": "0.8814966493381893"
+ },
+ "attributes": [
+ {
+ "label": "face_mask",
+ "confidence": 0.99,
+ "task": ""
+ }
+ ]
+ }
+ ],
+ "schemaVersion": "2.0"
+}
+```
+
+| Event Field Name | Type| Description|
+||||
+| `id` | string| Event ID|
+| `type` | string| Event type|
+| `detectionIds` | array| Array of unique identifiers of the person detections that triggered this event|
+| `properties` | collection| Collection of values|
+| `trackingId` | string| Unique identifier of the person detected|
+| `zone` | string | The "name" field of the polygon that represents the zone that was crossed|
+| `trigger` | string| The trigger type is 'event' or 'interval' depending on the value of `trigger` in SPACEANALYTICS_CONFIG|
+
+| Detections Field Name | Type| Description|
+||||
+| `id` | string| Detection ID|
+| `type` | string| Detection type|
+| `region` | collection| Collection of values|
+| `type` | string| Type of region|
+| `points` | collection| Top left and bottom right points when the region type is RECTANGLE |
+| `confidence` | float| Algorithm confidence|
+| `attributes` | array| Array of attributes. Each attribute consists of label, task, and confidence |
+| `label` | string| The attribute value (for example, `{label: face_mask}` indicates the detected person is wearing a face mask) |
+| `confidence (attribute)` | float| The attribute confidence value with range of 0 to 1 (for example, `{confidence: 0.9, label: face_nomask}` indicates the detected person is *not* wearing a face mask) |
+| `task` | string | The attribute classification task/class |
++
+| SourceInfo Field Name | Type| Description|
+||||
+| `id` | string| Camera ID|
+| `timestamp` | date| UTC date when the JSON payload was emitted|
+| `width` | int | Video frame width|
+| `height` | int | Video frame height|
+| `frameId` | int | Frame identifier|
+| `cameraCalibrationInfo` | collection | Collection of values|
+| `status` | string | The status of the calibration in the format of `state[;progress description]`. The state can be `Calibrating`, `Recalibrating` (if recalibration is enabled), or `Calibrated`. The progress description part is only valid when it is in `Calibrating` and `Recalibrating` state, which is used to show the progress of current calibration process.|
+| `cameraHeight` | float | The height of the camera above the ground in feet. This is inferred from autocalibration. |
+| `focalLength` | float | The focal length of the camera in pixels. This is inferred from autocalibration. |
+| `tiltUpAngle` | float | The camera tilt angle from vertical. This is inferred from autocalibration.|
++
+### JSON format for cognitiveservices.vision.spatialanalysis-personcrossingline AI Insights
+
+Sample JSON for detections output by this operation.
+
+```json
+{
+ "events": [
+ {
+ "id": "3733eb36935e4d73800a9cf36185d5a2",
+ "type": "personLineEvent",
+ "detectionIds": [
+ "90d55bfc64c54bfd98226697ad8445ca"
+ ],
+ "properties": {
+ "trackingId": "90d55bfc64c54bfd98226697ad8445ca",
+ "status": "CrossLeft"
+ },
+ "zone": "doorcamera"
+ }
+ ],
+ "sourceInfo": {
+ "id": "camera_id",
+ "timestamp": "2020-08-24T06:06:53.261Z",
+ "width": 608,
+ "height": 342,
+ "frameId": "1340",
+ "imagePath": ""
+ },
+ "detections": [
+ {
+ "type": "person",
+ "id": "90d55bfc64c54bfd98226697ad8445ca",
+ "region": {
+ "type": "RECTANGLE",
+ "points": [
+ {
+ "x": 0.491627341822574,
+ "y": 0.2385801348769874
+ },
+ {
+ "x": 0.588894994635331,
+ "y": 0.6395559924387793
+ }
+ ]
+ },
+ "confidence": 0.9005028605461121,
+ "metadata": {
+ "centerGroundPointX": "2.6310102939605713",
+ "centerGroundPointY": "18.635927200317383",
+ "groundOrientationAngle": "1.3",
+ "trackingId": "90d55bfc64c54bfd98226697ad8445ca",
+ "speed": "1.2",
+ "footprintX": "0.7306610584259033",
+ "footprintY": "0.8814966493381893"
+ },
+ "attributes": [
+ {
+ "label": "face_mask",
+ "confidence": 0.99,
+ "task": ""
+ }
+ ]
+ }
+ ],
+ "schemaVersion": "2.0"
+}
+```
+| Event Field Name | Type| Description|
+||||
+| `id` | string| Event ID|
+| `type` | string| Event type|
+| `detectionIds` | array| Array of size 1 containing the unique identifier of the person detection that triggered this event|
+| `properties` | collection| Collection of values|
+| `trackingId` | string| Unique identifier of the person detected|
+| `status` | string| Direction of line crossings, either 'CrossLeft' or 'CrossRight'. Direction is based on imagining standing at the "start" facing the "end" of the line. CrossRight is crossing from left to right. CrossLeft is crossing from right to left.|
+| `orientationDirection` | string| The orientation direction of the detected person after crossing the line. The value can be 'Left', 'Right', or 'Straight'. This value is output if `enable_orientation` is set to `True` in `CAMERACALIBRATOR_NODE_CONFIG`. |
+| `zone` | string | The "name" field of the line that was crossed|
+
+| Detections Field Name | Type| Description|
+||||
+| `id` | string| Detection ID|
+| `type` | string| Detection type|
+| `region` | collection| Collection of values|
+| `type` | string| Type of region|
+| `points` | collection| Top left and bottom right points when the region type is RECTANGLE |
+| `groundOrientationAngle` | float| The clockwise radian angle of the person's orientation on the inferred ground plane |
+| `mappedImageOrientation` | float| The projected clockwise radian angle of the person's orientation on the 2D image space |
+| `speed` | float| The estimated speed of the detected person. The unit is `foot per second (ft/s)`|
+| `confidence` | float| Algorithm confidence|
+| `attributes` | array| Array of attributes. Each attribute consists of label, task, and confidence |
+| `label` | string| The attribute value (for example, `{label: face_mask}` indicates the detected person is wearing a face mask) |
+| `confidence (attribute)` | float| The attribute confidence value with range of 0 to 1 (for example, `{confidence: 0.9, label: face_nomask}` indicates the detected person is *not* wearing a face mask) |
+| `task` | string | The attribute classification task/class |
+
+| SourceInfo Field Name | Type| Description|
+||||
+| `id` | string| Camera ID|
+| `timestamp` | date| UTC date when the JSON payload was emitted|
+| `width` | int | Video frame width|
+| `height` | int | Video frame height|
+| `frameId` | int | Frame identifier|
++
+> [!IMPORTANT]
+> The AI model detects a person irrespective of whether the person is facing towards or away from the camera. The AI model doesn't run face recognition and doesn't emit any biometric information.
+
+### JSON format for cognitiveservices.vision.spatialanalysis-personcrossingpolygon AI Insights
+
+Sample JSON for detections output by this operation with `zonecrossing` type SPACEANALYTICS_CONFIG.
+
+```json
+{
+ "events": [
+ {
+ "id": "f095d6fe8cfb4ffaa8c934882fb257a5",
+ "type": "personZoneEnterExitEvent",
+ "detectionIds": [
+ "afcc2e2a32a6480288e24381f9c5d00e"
+ ],
+ "properties": {
+ "trackingId": "afcc2e2a32a6480288e24381f9c5d00e",
+ "status": "Enter",
+ "side": "1"
+ },
+ "zone": "queuecamera"
+ }
+ ],
+ "sourceInfo": {
+ "id": "camera_id",
+ "timestamp": "2020-08-24T06:15:09.680Z",
+ "width": 608,
+ "height": 342,
+ "frameId": "428",
+ "imagePath": ""
+ },
+ "detections": [
+ {
+ "type": "person",
+ "id": "afcc2e2a32a6480288e24381f9c5d00e",
+ "region": {
+ "type": "RECTANGLE",
+ "points": [
+ {
+ "x": 0.8135572734631991,
+ "y": 0.6653949670624315
+ },
+ {
+ "x": 0.9937645761590255,
+ "y": 0.9925406829655519
+ }
+ ]
+ },
+ "confidence": 0.6267998814582825,
+ "metadata": {
+ "centerGroundPointX": "2.6310102939605713",
+ "centerGroundPointY": "18.635927200317383",
+ "groundOrientationAngle": "1.3",
+ "trackingId": "afcc2e2a32a6480288e24381f9c5d00e",
+ "speed": "1.2",
+ "footprintX": "0.7306610584259033",
+ "footprintY": "0.8814966493381893"
+ },
+ "attributes": [
+ {
+ "label": "face_mask",
+ "confidence": 0.99,
+ "task": ""
+ }
+ ]
+ }
+ ],
+ "schemaVersion": "2.0"
+}
+```
+
+Sample JSON for detections output by this operation with `zonedwelltime` type SPACEANALYTICS_CONFIG.
+
+```json
+{
+ "events": [
+ {
+ "id": "f095d6fe8cfb4ffaa8c934882fb257a5",
+ "type": "personZoneDwellTimeEvent",
+ "detectionIds": [
+ "afcc2e2a32a6480288e24381f9c5d00e"
+ ],
+ "properties": {
+ "trackingId": "afcc2e2a32a6480288e24381f9c5d00e",
+ "status": "Exit",
+ "side": "1",
+ "dwellTime": 7132.0,
+ "dwellFrames": 20
+ },
+ "zone": "queuecamera"
+ }
+ ],
+ "sourceInfo": {
+ "id": "camera_id",
+ "timestamp": "2020-08-24T06:15:09.680Z",
+ "width": 608,
+ "height": 342,
+ "frameId": "428",
+ "imagePath": ""
+ },
+ "detections": [
+ {
+ "type": "person",
+ "id": "afcc2e2a32a6480288e24381f9c5d00e",
+ "region": {
+ "type": "RECTANGLE",
+ "points": [
+ {
+ "x": 0.8135572734631991,
+ "y": 0.6653949670624315
+ },
+ {
+ "x": 0.9937645761590255,
+ "y": 0.9925406829655519
+ }
+ ]
+ },
+ "confidence": 0.6267998814582825,
+ "metadata": {
+ "centerGroundPointX": "2.6310102939605713",
+ "centerGroundPointY": "18.635927200317383",
+ "groundOrientationAngle": "1.2",
+ "mappedImageOrientation": "0.3",
+ "speed": "1.2",
+ "trackingId": "afcc2e2a32a6480288e24381f9c5d00e",
+ "footprintX": "0.7306610584259033",
+ "footprintY": "0.8814966493381893"
+ }
+ }
+ ],
+ "schemaVersion": "2.0"
+}
+```
+
+| Event Field Name | Type| Description|
+||||
+| `id` | string| Event ID|
+| `type` | string| Event type. The value can be either _personZoneDwellTimeEvent_ or _personZoneEnterExitEvent_|
+| `detectionIds` | array| Array of size 1 containing the unique identifier of the person detection that triggered this event|
+| `properties` | collection| Collection of values|
+| `trackingId` | string| Unique identifier of the person detected|
+| `status` | string| Direction of polygon crossings, either 'Enter' or 'Exit'|
+| `side` | int| The number of the side of the polygon that the person crossed. Each side is a numbered edge between two vertices of the polygon that represents your zone. The edge between the first two vertices of the polygon represents the first side. 'Side' is empty when the event isn't associated with a specific side due to occlusion. For example, an exit occurred when a person disappeared but wasn't seen crossing a side of the zone, or an enter occurred when a person appeared in the zone but wasn't seen crossing a side.|
+| `dwellTime` | float | The number of milliseconds that represent the time the person spent in the zone. This field is provided when the event type is personZoneDwellTimeEvent|
+| `dwellFrames` | int | The number of frames that the person spent in the zone. This field is provided when the event type is personZoneDwellTimeEvent|
+| `dwellTimeForTargetSide` | float | The number of milliseconds that represent the time the person spent in the zone while facing the `target_side`. This field is provided when `enable_orientation` is `True` in `CAMERACALIBRATOR_NODE_CONFIG` and the value of `target_side` is set in `SPACEANALYTICS_CONFIG`|
+| `avgSpeed` | float| The average speed of the person in the zone. The unit is `foot per second (ft/s)`|
+| `minSpeed` | float| The minimum speed of the person in the zone. The unit is `foot per second (ft/s)`|
+| `zone` | string | The "name" field of the polygon that represents the zone that was crossed|
+
+| Detections Field Name | Type| Description|
+||||
+| `id` | string| Detection ID|
+| `type` | string| Detection type|
+| `region` | collection| Collection of values|
+| `type` | string| Type of region|
+| `points` | collection| Top left and bottom right points when the region type is RECTANGLE |
+| `groundOrientationAngle` | float| The clockwise radian angle of the person's orientation on the inferred ground plane |
+| `mappedImageOrientation` | float| The projected clockwise radian angle of the person's orientation on the 2D image space |
+| `speed` | float| The estimated speed of the detected person. The unit is `foot per second (ft/s)`|
+| `confidence` | float| Algorithm confidence|
+| `attributes` | array| Array of attributes. Each attribute consists of label, task, and confidence |
+| `label` | string| The attribute value (for example, `{label: face_mask}` indicates the detected person is wearing a face mask) |
+| `confidence (attribute)` | float| The attribute confidence value with range of 0 to 1 (for example, `{confidence: 0.9, label: face_nomask}` indicates the detected person is *not* wearing a face mask) |
+| `task` | string | The attribute classification task/class |
+
+### JSON format for cognitiveservices.vision.spatialanalysis-persondistance AI Insights
+
+Sample JSON for detections output by this operation.
+
+```json
+{
+ "events": [
+ {
+ "id": "9c15619926ef417aa93c1faf00717d36",
+ "type": "personDistanceEvent",
+ "detectionIds": [
+ "9037c65fa3b74070869ee5110fcd23ca",
+ "7ad7f43fd1a64971ae1a30dbeeffc38a"
+ ],
+ "properties": {
+ "personCount": 5,
+ "averageDistance": 20.807043981552123,
+ "minimumDistanceThreshold": 6.0,
+ "maximumDistanceThreshold": "Infinity",
+ "eventName": "TooClose",
+ "distanceViolationPersonCount": 2
+ },
+ "zone": "lobbycamera",
+ "trigger": "event"
+ }
+ ],
+ "sourceInfo": {
+ "id": "camera_id",
+ "timestamp": "2020-08-24T06:17:25.309Z",
+ "width": 608,
+ "height": 342,
+ "frameId": "1199",
+ "cameraCalibrationInfo": {
+ "status": "Calibrated",
+ "cameraHeight": 12.9940824508667,
+ "focalLength": 401.2800598144531,
+ "tiltupAngle": 1.057669997215271
+ },
+ "imagePath": ""
+ },
+ "detections": [
+ {
+ "type": "person",
+ "id": "9037c65fa3b74070869ee5110fcd23ca",
+ "region": {
+ "type": "RECTANGLE",
+ "points": [
+ {
+ "x": 0.39988183975219727,
+ "y": 0.2719132942065858
+ },
+ {
+ "x": 0.5051516984638414,
+ "y": 0.6488402517218339
+ }
+ ]
+ },
+ "confidence": 0.948630690574646,
+ "metadata": {
+ "centerGroundPointX": "-1.4638760089874268",
+ "centerGroundPointY": "18.29732322692871",
+ "groundOrientationAngle": "1.3",
+ "footprintX": "0.7306610584259033",
+ "footprintY": "0.8814966493381893"
+ }
+ },
+ {
+ "type": "person",
+ "id": "7ad7f43fd1a64971ae1a30dbeeffc38a",
+ "region": {
+ "type": "RECTANGLE",
+ "points": [
+ {
+ "x": 0.5200299714740954,
+ "y": 0.2875368218672903
+ },
+ {
+ "x": 0.6457497446160567,
+ "y": 0.6183311060855263
+ }
+ ]
+ },
+ "confidence": 0.8235412240028381,
+ "metadata": {
+ "centerGroundPointX": "2.6310102939605713",
+ "centerGroundPointY": "18.635927200317383",
+ "groundOrientationAngle": "1.3",
+ "footprintX": "0.7306610584259033",
+ "footprintY": "0.8814966493381893"
+ }
+ }
+ ],
+ "schemaVersion": "2.0"
+}
+```
+
+| Event Field Name | Type| Description|
+||||
+| `id` | string| Event ID|
+| `type` | string| Event type|
+| `detectionIds` | array| Array of unique identifiers of the person detections that triggered this event|
+| `properties` | collection| Collection of values|
+| `personCount` | int| Number of people detected when the event was emitted|
+| `averageDistance` | float| The average distance between all detected people in feet|
+| `minimumDistanceThreshold` | float| The distance in feet that will trigger a "TooClose" event when people are less than that distance apart.|
+| `maximumDistanceThreshold` | float| The distance in feet that will trigger a "TooFar" event when people are greater than that distance apart.|
+| `eventName` | string| The event name is `TooClose` when the `minimumDistanceThreshold` is violated, `TooFar` when the `maximumDistanceThreshold` is violated, or `unknown` when autocalibration hasn't completed|
+| `distanceViolationPersonCount` | int| Number of people detected in violation of `minimumDistanceThreshold` or `maximumDistanceThreshold`|
+| `zone` | string | The "name" field of the polygon that represents the zone that was monitored for distancing between people|
+| `trigger` | string| The trigger type is 'event' or 'interval' depending on the value of `trigger` in SPACEANALYTICS_CONFIG|
+
+| Detections Field Name | Type| Description|
+||||
+| `id` | string| Detection ID|
+| `type` | string| Detection type|
+| `region` | collection| Collection of values|
+| `type` | string| Type of region|
+| `points` | collection| Top left and bottom right points when the region type is RECTANGLE |
+| `confidence` | float| Algorithm confidence|
+| `centerGroundPointX/centerGroundPointY` | 2 float values| `x`, `y` values with the coordinates of the person's inferred location on the ground in feet. `x` and `y` are coordinates on the floor plane, assuming the floor is level. The camera's location is the origin. |
+
+In `centerGroundPoint`, `x` is the component of distance from the camera to the person that is perpendicular to the camera image plane. `y` is the component of distance that is parallel to the camera image plane.
+
+![Example center ground point](./media/spatial-analysis/x-y-chart.png)
+
+In this example, `centerGroundPoint` is `{centerGroundPointX: 4, centerGroundPointY: 5}`. This means there's a person four feet ahead of the camera and five feet to the right, looking at the room top-down.
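+
+To relate these ground-plane coordinates back to the distance events above, the following Python sketch computes the Euclidean distance in feet between two detections' `centerGroundPointX`/`centerGroundPointY` values, using numbers from the sample output. This is only an interpretation aid for the emitted JSON, not part of the container.
+
+```python
+import math
+
+def ground_distance_ft(det_a: dict, det_b: dict) -> float:
+    """Euclidean distance (in feet) between two detections' ground points."""
+    ax = float(det_a["metadata"]["centerGroundPointX"])
+    ay = float(det_a["metadata"]["centerGroundPointY"])
+    bx = float(det_b["metadata"]["centerGroundPointX"])
+    by = float(det_b["metadata"]["centerGroundPointY"])
+    return math.hypot(ax - bx, ay - by)
+
+# Ground points taken from the persondistance sample output above.
+person_1 = {"metadata": {"centerGroundPointX": "-1.4638760089874268",
+                         "centerGroundPointY": "18.29732322692871"}}
+person_2 = {"metadata": {"centerGroundPointX": "2.6310102939605713",
+                         "centerGroundPointY": "18.635927200317383"}}
+
+distance = ground_distance_ft(person_1, person_2)
+print(f"{distance:.1f} ft")  # ~4.1 ft, below the sample's 6.0 ft minimumDistanceThreshold
+```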
++
+| SourceInfo Field Name | Type| Description|
+||||
+| `id` | string| Camera ID|
+| `timestamp` | date| UTC date when the JSON payload was emitted|
+| `width` | int | Video frame width|
+| `height` | int | Video frame height|
+| `frameId` | int | Frame identifier|
+| `cameraCalibrationInfo` | collection | Collection of values|
+| `status` | string | The status of the calibration in the format of `state[;progress description]`. The state can be `Calibrating`, `Recalibrating` (if recalibration is enabled), or `Calibrated`. The progress description part is only valid when it is in `Calibrating` and `Recalibrating` state, which is used to show the progress of current calibration process.|
+| `cameraHeight` | float | The height of the camera above the ground in feet. This is inferred from autocalibration. |
+| `focalLength` | float | The focal length of the camera in pixels. This is inferred from autocalibration. |
+| `tiltUpAngle` | float | The camera tilt angle from vertical. This is inferred from autocalibration.|
+
+### JSON format for cognitiveservices.vision.spatialanalysis AI Insights
+
+The output of this operation depends on the configured `events`. For example, if a `zonecrossing` event is configured for this operation, the output will be the same as that of `cognitiveservices.vision.spatialanalysis-personcrossingpolygon`.
+
+## Use the output generated by the container
+
+You may want to integrate Spatial Analysis detection or events into your application. Here are a few approaches to consider:
+
+* Use the Azure Event Hubs SDK for your chosen programming language to connect to the Azure IoT Hub endpoint and receive the events (see the sketch after this list). For more information, see [Read device-to-cloud messages from the built-in endpoint](../../iot-hub/iot-hub-devguide-messages-read-builtin.md).
+* Set up **Message Routing** on your Azure IoT Hub to send the events to other endpoints or save the events to your data storage. For more information, see [IoT Hub Message Routing](../../iot-hub/iot-hub-devguide-messages-d2c.md).
+* Set up an Azure Stream Analytics job to process the events in real-time as they arrive and create visualizations.
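+
+For the first option, the following Python sketch uses the `azure-eventhub` package to read Spatial Analysis events from the IoT Hub built-in, Event Hubs-compatible endpoint. The connection string and consumer group are placeholders; treat this as a starting point rather than a production implementation.
+
+```python
+import json
+
+from azure.eventhub import EventHubConsumerClient
+
+# Placeholders: the Event Hubs-compatible connection string from your IoT Hub's
+# "Built-in endpoints" page, and the consumer group you want to read from.
+CONNECTION_STR = "<Event Hubs-compatible connection string>"
+CONSUMER_GROUP = "$Default"
+
+def on_event(partition_context, event):
+    # Each message body is the JSON payload described in the output sections above.
+    payload = json.loads(event.body_as_str())
+    for insight in payload.get("events", []):
+        print(insight.get("type"), insight.get("properties", {}))
+
+client = EventHubConsumerClient.from_connection_string(
+    CONNECTION_STR, consumer_group=CONSUMER_GROUP
+)
+with client:
+    # "-1" starts from the oldest retained event; use "@latest" to read only new events.
+    client.receive(on_event=on_event, starting_position="-1")
+```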
+
+## Deploy Spatial Analysis operations at scale (multiple cameras)
+
+In order to get the best performance and utilization of the GPUs, you can deploy any Spatial Analysis operations on multiple cameras using graph instances. Below is a sample configuration for running the `cognitiveservices.vision.spatialanalysis-personcrossingline` operation on 15 cameras.
+
+```json
+ "properties.desired": {
+ "globalSettings": {
+ "PlatformTelemetryEnabled": false,
+ "CustomerTelemetryEnabled": true
+ },
+ "graphs": {
+ "personzonelinecrossing": {
+ "operationId": "cognitiveservices.vision.spatialanalysis-personcrossingline",
+ "version": 1,
+ "enabled": true,
+ "sharedNodes": {
+ "shared_detector0": {
+ "node": "PersonCrossingLineGraph.detector",
+ "parameters": {
+ "DETECTOR_NODE_CONFIG": "{ \"gpu_index\": 0, \"batch_size\": 7, \"do_calibration\": true}",
+ }
+ },
+ "shared_calibrator0": {
+ "node": "PersonCrossingLineGraph/cameracalibrator",
+ "parameters": {
+ "CAMERACALIBRATOR_NODE_CONFIG": "{ \"gpu_index\": 0, \"do_calibration\": true, \"enable_zone_placement\": true}",
+ "CALIBRATION_CONFIG": "{\"enable_recalibration\": true, \"quality_check_frequency_seconds\": 86400}",
+ }
+ },
+ "parameters": {
+ "VIDEO_DECODE_GPU_INDEX": 0,
+ "VIDEO_IS_LIVE": true
+ },
+ "instances": {
+ "1": {
+ "sharedNodeMap": {
+ "PersonCrossingLineGraph/detector": "shared_detector0",
+ "PersonCrossingLineGraph/cameracalibrator": "shared_calibrator0",
+ },
+ "parameters": {
+ "VIDEO_URL": "<Replace RTSP URL for camera 1>",
+ "VIDEO_SOURCE_ID": "camera 1",
+ "SPACEANALYTICS_CONFIG": "{\"zones\":[{\"name\":\"queue\",\"polygon\":[[0,0],[1,0],[0,1],[1,1],[0,0]]}]}"
+ }
+ },
+ "2": {
+ "sharedNodeMap": {
+ "PersonCrossingLineGraph/detector": "shared_detector0",
+ "PersonCrossingLineGraph/cameracalibrator": "shared_calibrator0",
+ },
+ "parameters": {
+ "VIDEO_URL": "<Replace RTSP URL for camera 2>",
+ "VIDEO_SOURCE_ID": "camera 2",
+ "SPACEANALYTICS_CONFIG": "<Replace the zone config value, same format as above>"
+ }
+ },
+ "3": {
+ "sharedNodeMap": {
+ "PersonCrossingLineGraph/detector": "shared_detector0",
+ "PersonCrossingLineGraph/cameracalibrator": "shared_calibrator0",
+ },
+ "parameters": {
+ "VIDEO_URL": "<Replace RTSP URL for camera 3>",
+ "VIDEO_SOURCE_ID": "camera 3",
+ "SPACEANALYTICS_CONFIG": "<Replace the zone config value, same format as above>"
+ }
+ },
+ "4": {
+ "sharedNodeMap": {
+ "PersonCrossingLineGraph/detector": "shared_detector0",
+ "PersonCrossingLineGraph/cameracalibrator": "shared_calibrator0",
+ },
+ "parameters": {
+ "VIDEO_URL": "<Replace RTSP URL for camera 4>",
+ "VIDEO_SOURCE_ID": "camera 4",
+ "SPACEANALYTICS_CONFIG": "<Replace the zone config value, same format as above>"
+ }
+ },
+ "5": {
+ "sharedNodeMap": {
+ "PersonCrossingLineGraph/detector": "shared_detector0",
+ "PersonCrossingLineGraph/cameracalibrator": "shared_calibrator0",
+ },
+ "parameters": {
+ "VIDEO_URL": "<Replace RTSP URL for camera 5>",
+ "VIDEO_SOURCE_ID": "camera 5",
+ "SPACEANALYTICS_CONFIG": "<Replace the zone config value, same format as above>"
+ }
+ },
+ "6": {
+ "sharedNodeMap": {
+ "PersonCrossingLineGraph/detector": "shared_detector0",
+ "PersonCrossingLineGraph/cameracalibrator": "shared_calibrator0",
+ },
+ "parameters": {
+ "VIDEO_URL": "<Replace RTSP URL for camera 6>",
+ "VIDEO_SOURCE_ID": "camera 6",
+ "SPACEANALYTICS_CONFIG": "<Replace the zone config value, same format as above>"
+ }
+ },
+ "7": {
+ "sharedNodeMap": {
+ "PersonCrossingLineGraph/detector": "shared_detector0",
+ "PersonCrossingLineGraph/cameracalibrator": "shared_calibrator0",
+ },
+ "parameters": {
+ "VIDEO_URL": "<Replace RTSP URL for camera 7>",
+ "VIDEO_SOURCE_ID": "camera 7",
+ "SPACEANALYTICS_CONFIG": "<Replace the zone config value, same format as above>"
+ }
+ },
+ "8": {
+ "sharedNodeMap": {
+ "PersonCrossingLineGraph/detector": "shared_detector0",
+ "PersonCrossingLineGraph/cameracalibrator": "shared_calibrator0",
+ },
+ "parameters": {
+ "VIDEO_URL": "<Replace RTSP URL for camera 8>",
+ "VIDEO_SOURCE_ID": "camera 8",
+ "SPACEANALYTICS_CONFIG": "<Replace the zone config value, same format as above>"
+ }
+ },
+ "9": {
+ "sharedNodeMap": {
+ "PersonCrossingLineGraph/detector": "shared_detector0",
+ "PersonCrossingLineGraph/cameracalibrator": "shared_calibrator0",
+ },
+ "parameters": {
+ "VIDEO_URL": "<Replace RTSP URL for camera 9>",
+ "VIDEO_SOURCE_ID": "camera 9",
+ "SPACEANALYTICS_CONFIG": "<Replace the zone config value, same format as above>"
+ }
+ },
+ "10": {
+ "sharedNodeMap": {
+ "PersonCrossingLineGraph/detector": "shared_detector0",
+ "PersonCrossingLineGraph/cameracalibrator": "shared_calibrator0",
+ },
+ "parameters": {
+ "VIDEO_URL": "<Replace RTSP URL for camera 10>",
+ "VIDEO_SOURCE_ID": "camera 10",
+ "SPACEANALYTICS_CONFIG": "<Replace the zone config value, same format as above>"
+ }
+ },
+ "11": {
+ "sharedNodeMap": {
+ "PersonCrossingLineGraph/detector": "shared_detector0",
+ "PersonCrossingLineGraph/cameracalibrator": "shared_calibrator0",
+ },
+ "parameters": {
+ "VIDEO_URL": "<Replace RTSP URL for camera 11>",
+ "VIDEO_SOURCE_ID": "camera 11",
+ "SPACEANALYTICS_CONFIG": "<Replace the zone config value, same format as above>"
+ }
+ },
+ "12": {
+ "sharedNodeMap": {
+ "PersonCrossingLineGraph/detector": "shared_detector0",
+ "PersonCrossingLineGraph/cameracalibrator": "shared_calibrator0",
+ },
+ "parameters": {
+ "VIDEO_URL": "<Replace RTSP URL for camera 12>",
+ "VIDEO_SOURCE_ID": "camera 12",
+ "SPACEANALYTICS_CONFIG": "<Replace the zone config value, same format as above>"
+ }
+ },
+ "13": {
+ "sharedNodeMap": {
+ "PersonCrossingLineGraph/detector": "shared_detector0",
+ "PersonCrossingLineGraph/cameracalibrator": "shared_calibrator0",
+ },
+ "parameters": {
+ "VIDEO_URL": "<Replace RTSP URL for camera 13>",
+ "VIDEO_SOURCE_ID": "camera 13",
+ "SPACEANALYTICS_CONFIG": "<Replace the zone config value, same format as above>"
+ }
+ },
+ "14": {
+ "sharedNodeMap": {
+ "PersonCrossingLineGraph/detector": "shared_detector0",
+ "PersonCrossingLineGraph/cameracalibrator": "shared_calibrator0",
+ },
+ "parameters": {
+ "VIDEO_URL": "<Replace RTSP URL for camera 14>",
+ "VIDEO_SOURCE_ID": "camera 14",
+ "SPACEANALYTICS_CONFIG": "<Replace the zone config value, same format as above>"
+ }
+ },
+ "15": {
+ "sharedNodeMap": {
+ "PersonCrossingLineGraph/detector": "shared_detector0",
+ "PersonCrossingLineGraph/cameracalibrator": "shared_calibrator0",
+ },
+ "parameters": {
+ "VIDEO_URL": "<Replace RTSP URL for camera 15>",
+ "VIDEO_SOURCE_ID": "camera 15",
+ "SPACEANALYTICS_CONFIG": "<Replace the zone config value, same format as above>"
+ }
+ }
+ }
+ },
+ }
+ }
+ ```
+| Name | Type| Description|
+||||
+| `batch_size` | int | If all of the cameras have the same resolution, set `batch_size` to the number of cameras that will be used in that operation, otherwise, set `batch_size` to 1 or leave it as default (1), which indicates no batch is supported. |
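+
+Writing out 15 nearly identical `instances` entries by hand is error-prone. The following Python sketch shows one way to generate that portion of the graph configuration from a list of camera URLs; the camera URLs and zone configuration are placeholders to replace with your own values.
+
+```python
+import json
+
+# Placeholders: replace with your cameras' RTSP URLs and your own zone/line configuration.
+camera_urls = [f"rtsp://user:password@camera-{i}.local/stream" for i in range(1, 16)]
+space_analytics_config = json.dumps(
+    {"zones": [{"name": "queue",
+                "polygon": [[0, 0], [1, 0], [1, 1], [0, 1], [0, 0]],
+                "events": [{"type": "count"}]}]}
+)
+
+instances = {}
+for index, url in enumerate(camera_urls, start=1):
+    instances[str(index)] = {
+        "sharedNodeMap": {
+            "PersonCrossingLineGraph/detector": "shared_detector0",
+            "PersonCrossingLineGraph/cameracalibrator": "shared_calibrator0",
+        },
+        "parameters": {
+            "VIDEO_URL": url,
+            "VIDEO_SOURCE_ID": f"camera {index}",
+            "SPACEANALYTICS_CONFIG": space_analytics_config,
+        },
+    }
+
+# Paste the result into the "instances" section of the graph configuration above.
+print(json.dumps(instances, indent=2))
+```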
+
+## Next steps
+
+* [Deploy a People Counting web application](spatial-analysis-web-app.md)
+* [Logging and troubleshooting](spatial-analysis-logging.md)
+* [Camera placement guide](spatial-analysis-camera-placement.md)
+* [Zone and line placement guide](spatial-analysis-zone-line-placement.md)
ai-services Spatial Analysis Web App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/computer-vision/spatial-analysis-web-app.md
+
+ Title: Deploy a Spatial Analysis web app
+
+description: Learn how to use Spatial Analysis in a web application.
+Last updated : 06/08/2021
+# How to: Deploy a Spatial Analysis web application
+
+Use this article to learn how to deploy a web app that collects Spatial Analysis data (insights) from Azure IoT Hub and visualizes it. This can have useful applications across a wide range of scenarios and industries. For example, a company that wants to optimize the use of its real estate space can quickly create a solution with different scenarios.
+
+In this tutorial you will learn how to:
+
+* Deploy the Spatial Analysis container
+* Configure the operation and camera
+* Configure the IoT Hub connection in the Web Application
+* Deploy and test the Web Application
+
+This app showcases the following scenarios:
+
+* Count of people entering and exiting a space/store
+* Count of people entering and exiting a checkout area/zone and the time spent in the checkout line (dwell time)
+* Count of people wearing a face mask
+* Count of people violating social distancing guidelines
+
+## Prerequisites
+
+* Azure subscription - [create one for free](https://azure.microsoft.com/free/cognitive-services/)
+* Basic understanding of Azure IoT Edge deployment configurations, and an [Azure IoT Hub](../../iot-hub/index.yml)
+* A configured [host computer](spatial-analysis-container.md).
+
+## Deploy the Spatial Analysis container
+
+Follow [the Host Computer Setup](./spatial-analysis-container.md) to configure the host computer and connect an IoT Edge device to Azure IoT Hub.
+
+### Deploy an Azure IoT Hub service in your subscription
+
+First, create an instance of an Azure IoT Hub service with either the Standard Pricing Tier (S1) or Free Tier (S0). Follow these instructions to create this instance using the Azure CLI.
+
+Fill in the required parameters:
+* Subscription: The name or ID of your Azure subscription
+* Resource group: A name for your resource group
+* IoT Hub Name: A name for your IoT Hub
+* Edge Device Name: A name for your Edge device
+
+```azurecli
+az login
+az account set --subscription <name or ID of Azure Subscription>
+az group create --name "<Resource Group Name>" --location "WestUS"
+
+az iot hub create --name "<IoT Hub Name>" --sku S1 --resource-group "<Resource Group Name>"
+
+az iot hub device-identity create --hub-name "<IoT Hub Name>" --device-id "<Edge Device Name>" --edge-enabled
+```
+
+### Deploy the container on Azure IoT Edge on the host computer
+
+The next step is to deploy the **spatial analysis** container as an IoT Module on the host computer using the Azure CLI. The deployment process requires a Deployment Manifest file which outlines the required containers, variables, and configurations for your deployment. A sample Deployment Manifest can be found at [DeploymentManifest.json](https://github.com/Azure-Samples/cognitive-services-spatial-analysis/blob/main/deployment.json) which includes pre-built configurations for all scenarios.
+
+### Set environment variables
+
+Most of the **Environment Variables** for the IoT Edge Module are already set in the sample *DeploymentManifest.json* files linked above. In the file, search for the `ENDPOINT` and `APIKEY` environment variables, shown below. Replace the values with the Endpoint URI and the API Key that you created earlier. Ensure that the EULA value is set to "accept".
+
+```json
+"EULA": {
+ "value": "accept"
+},
+"BILLING":{
+ "value": "<Use the endpoint from your Vision resource>"
+},
+"APIKEY":{
+ "value": "<Use a key from your Vision resource>"
+}
+```
+
+### Configure the operation parameters
+
+If you are using the sample [DeploymentManifest.json](https://github.com/Azure-Samples/cognitive-services-spatial-analysis/blob/main/deployment.json), which already has all of the required configurations (operations, recorded video file URLs, zones, and so on), you can skip to the **Execute the deployment** section.
+
+Now that the initial configuration of the spatial analysis container is complete, the next step is to configure the operations parameters and add them to the deployment.
+
+The first step is to update the sample [DeploymentManifest.json](https://github.com/Azure-Samples/cognitive-services-spatial-analysis/blob/main/deployment.json) and configure the desired operation. For example, configuration for cognitiveservices.vision.spatialanalysis-personcount is shown below:
+
+```json
+"personcount": {
+ "operationId": "cognitiveservices.vision.spatialanalysis-personcount",
+ "version": 1,
+ "enabled": true,
+ "parameters": {
+ "VIDEO_URL": "<Replace RTSP URL here>",
+ "VIDEO_SOURCE_ID": "<Replace with friendly name>",
+ "VIDEO_IS_LIVE":true,
+ "DETECTOR_NODE_CONFIG": "{ \"gpu_index\": 0 }",
+ "SPACEANALYTICS_CONFIG": "{\"zones\":[{\"name\":\"queue\",\"polygon\":[<Replace with your values>], \"events\": [{\"type\":\"count\"}], \"threshold\":<use 0 for no threshold.}]}"
+ }
+},
+```
+
+After the deployment manifest is updated, follow the camera manufacturer's instructions to install the camera, configure the camera url, and configure the user name and password.
+
+Next, set `VIDEO_URL` to the RTSP url of the camera, and the credentials for connecting to the camera.
+
+If the edge device has more than one GPU, select the GPU on which to run this operation. Make sure you load balance the operations so that no more than 8 operations are running on a single GPU at a time.
+
+Next, configure the zone in which you want to count people. To configure the zone polygon, first follow the manufacturer's instructions to retrieve a frame from the camera. To determine each vertex of the polygon, select a point on the frame, take the x,y pixel coordinates of the point relative to the left, top corner of the frame, and divide by the corresponding frame dimensions. Set the results as x,y coordinates of the vertex. You can set the zone polygon configuration in the `SPACEANALYTICS_CONFIG` field.
+
+This is a sample video frame that shows how the vertex coordinates are calculated for a frame of size 1920 x 1080.
+![Sample video frame](./media/spatial-analysis/sample-video-frame.jpg)
+
+You can also select a confidence threshold for when detected people are counted and events are generated. Set the threshold to 0 if you'd like all events to be output.
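+
+If it helps, here's a small illustrative Python sketch (not part of the product documentation) of that calculation, assuming a 1920 x 1080 frame and hypothetical pixel coordinates for the zone corners. It prints the JSON payload that the `SPACEANALYTICS_CONFIG` parameter expects; remember that this payload is embedded as an escaped string in the deployment manifest, as shown in the personcount example above.
+
+```python
+import json
+
+# Hypothetical pixel coordinates of the zone corners, taken from a 1920 x 1080 frame.
+FRAME_WIDTH, FRAME_HEIGHT = 1920, 1080
+pixel_vertices = [(480, 620), (1500, 620), (1620, 1050), (360, 1050)]
+
+# Normalize each vertex by dividing by the corresponding frame dimension.
+polygon = [[round(x / FRAME_WIDTH, 6), round(y / FRAME_HEIGHT, 6)] for x, y in pixel_vertices]
+
+# Build the SPACEANALYTICS_CONFIG payload; a threshold of 0 outputs all events.
+spaceanalytics_config = json.dumps({
+    "zones": [{
+        "name": "queue",
+        "polygon": polygon,
+        "events": [{"type": "count"}],
+        "threshold": 0
+    }]
+})
+
+print(spaceanalytics_config)
+```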
+
+### Execute the deployment
+
+Now that the deployment manifest is complete, use this command in the Azure CLI to deploy the container on the host computer as an IoT Edge Module.
+
+```azurecli
+az login
+az extension add --name azure-iot
+az iot edge set-modules --hub-name "<IoT Hub name>" --device-id "<IoT Edge device name>" --content DeploymentManifest.json --subscription "<subscriptionId>"
+```
+
+Fill in the required parameters:
+
+* IoT Hub Name: Your Azure IoT Hub name
+* DeploymentManifest.json: The name of your deployment file
+* IoT Edge device name: The IoT Edge device name of your host computer
+* Subscription: Your subscription ID or name
+
+This command will begin the deployment, and you can view the deployment status in your Azure IoT Hub instance in the Azure portal. The status may show as *417 - The device's deployment configuration is not set* until the device finishes downloading the container images and starts running.
+
+### Validate that the deployment was successful
+
+Locate the *Runtime Status* in the IoT Edge Module Settings for the spatial-analysis module in your IoT Hub instance in the Azure portal. The **Desired Value** and **Reported Value** for the *Runtime Status* should say `Running`. The following image shows what this looks like in the Azure portal.
+
+![Example deployment verification](./media/spatial-analysis/deployment-verification.png)
+
+At this point, the spatial analysis container is running the operation. It emits AI insights for the operations and routes these insights as telemetry to your Azure IoT Hub instance. To configure additional cameras, you can update the deployment manifest file and execute the deployment again.
+
+## Spatial Analysis Web Application
+
+The Spatial Analysis Web Application enables developers to quickly configure a sample web app, host it in their Azure environment, and use the app to validate end-to-end (E2E) events.
+
+## Build Docker Image
+
+Follow the [guide](https://github.com/Azure-Samples/cognitive-services-spatial-analysis/blob/main/README.md#docker-image) to build and push the image to an Azure Container Registry in your subscription.
+
+## Setup Steps
+
+To install the container, create a new Azure App Service and fill in the required parameters. Then go to the **Docker** tab, select **Single Container**, and then select **Azure Container Registry**. Use the Azure Container Registry instance to which you pushed the image above.
+
+![Enter image details](./media/spatial-analysis/solution-app-create-screen.png)
+
+After entering the above parameters, select **Review+Create** and create the app.
+
+### Configure the app
+
+Wait for setup to complete, and navigate to your resource in the Azure portal. Go to the **Configuration** section and add the following two **application settings**.
+
+* `EventHubConsumerGroup`: The string name of the consumer group from your Azure IoT Hub. You can create a new consumer group in your IoT Hub or use the default group.
+* `IotHubConnectionString`: The connection string for your Azure IoT Hub. You can retrieve it from the **Keys** section of your Azure IoT Hub resource.
+![Configure Parameters](./media/spatial-analysis/solution-app-config-page.png)
+
+Once these two settings are added, select **Save**. Then select **Authentication/Authorization** in the left navigation menu, and update it with the desired level of authentication. We recommend Azure Active Directory (Azure AD) express.
+
+### Test the app
+
+Go to the Azure App Service and verify that the deployment was successful and the web app is running. Navigate to the configured URL, `<yourapp>.azurewebsites.net`, to view the running app.
+
+![Test the deployment](./media/spatial-analysis/solution-app-output.png)
+
+## Get the PersonCount source code
+If you'd like to view or modify the source code for this application, you can find it [on GitHub](https://github.com/Azure-Samples/cognitive-services-spatial-analysis).
+
+## Next steps
+
+* [Configure Spatial Analysis operations](./spatial-analysis-operations.md)
+* [Logging and troubleshooting](spatial-analysis-logging.md)
+* [Camera placement guide](spatial-analysis-camera-placement.md)
+* [Zone and line placement guide](spatial-analysis-zone-line-placement.md)
ai-services Spatial Analysis Zone Line Placement https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/computer-vision/spatial-analysis-zone-line-placement.md
+
+ Title: Spatial Analysis zone and line placement
+
+description: Learn how to set up zones and lines with Spatial Analysis
++++++ Last updated : 06/08/2021++++
+# Zone and Line Placement Guide
+
+This article provides guidelines for how to define zones and lines for Spatial Analysis operations to achieve accurate analysis of people's movements in a space. This applies to all operations.
+
+Zones and lines are defined using the JSON `SPACEANALYTICS_CONFIG` parameter. See the [Spatial Analysis operations](spatial-analysis-operations.md) article for more information.
+
+## Guidelines for drawing zones
+
+Remember that every space is different; you'll need to update the position or size depending on your needs.
+
+If you want to monitor a specific section of your camera view, create the largest zone that you can, covering the specific floor area that you're interested in but not other areas that you don't want to track. This increases the accuracy of the data collected and prevents false positives from areas you don't want to track. Be careful when placing the corners of your polygon, and make sure they're not outside the area you want to track.
+
+### Example of a well-shaped zone
+
+The zone should be big enough to accommodate three people standing along each edge and focused on the area of interest. Spatial Analysis will identify people whose feet are placed in the zone, so when drawing zones on the 2D image, imagine the zone as a carpet lying on the floor.
+
+![Well-shaped zone](./media/spatial-analysis/zone-good-example.png)
+
+### Examples of zones that aren't well-shaped
+
+The following examples show poorly shaped zones. In these examples, the area of interest is the space in front of the *It's Game Time* display.
+
+**Zone is not on the floor.**
+
+![Poorly shaped zone](./media/spatial-analysis/zone-not-on-floor.png)
+
+**Zone is too small.**
+
+![zone is too small](./media/spatial-analysis/zone-too-small.png)
+
+**Zone doesn't fully capture the area around the display.**
+
+![zone doesn't fully capture the area around end cap](./media/spatial-analysis/zone-bad-capture.png)
+
+**Zone is too close to the edge of the camera image and doesn't capture the right display.**
+
+![zone is too close to the edge of the camera image and doesn't capture the right display](./media/spatial-analysis/zone-edge.png)
+
+**Zone is partially blocked by the shelf, so people and floor aren't fully visible.**
+
+![zone is partially blocked, so people aren't fully visible](./media/spatial-analysis/zone-partially-blocked.png)
+
+### Example of a well-shaped line
+
+The line should be long enough to accommodate the entire entrance. Spatial Analysis will identify people whose feet cross the line, so when drawing lines on the 2D image, imagine you're drawing them as if they lie on the floor.
+
+If possible, extend the line wider than the actual entrance, as long as doing so won't result in extra crossings (as in the image below, where the line runs against a wall).
+
+![Well-shaped line](./media/spatial-analysis/zone-line-good-example.png)
+
+### Examples of lines that aren't well-shaped
+
+The following examples show poorly defined lines.
+
+**Line doesn't cover the entire entry way on the floor.**
+
+![Line doesn't cover the entire entry way on the floor](./media/spatial-analysis/zone-line-bad-coverage.png)
+
+**Line is too high and doesn't cover the entirety of the door.**
+
+![Line is too high and doesn't cover the entirety of the door](./media/spatial-analysis/zone-line-too-high.png)
+
+## Next steps
+
+* [Deploy a People Counting web application](spatial-analysis-web-app.md)
+* [Configure Spatial Analysis operations](./spatial-analysis-operations.md)
+* [Logging and troubleshooting](spatial-analysis-logging.md)
+* [Camera placement guide](spatial-analysis-camera-placement.md)
ai-services Upgrade Api Versions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/computer-vision/upgrade-api-versions.md
+
+ Title: Upgrade to Read v3.0 of the Azure AI Vision API
+
+description: Learn how to upgrade to Azure AI Vision v3.0 Read API from v2.0/v2.1.
+++++++ Last updated : 08/11/2020+++++
+# Upgrade from Read v2.x to Read v3.x
+
+This guide shows how to upgrade your existing container or cloud API code from Read v2.x to Read v3.x.
+
+## Determine your API path
+Use the following table to determine the **version string** in the API path based on the Read 3.x version you're migrating to.
+
+|Product type| Version | Version string in 3.x API path |
+|:--|:-|:-|
+|Service | Read 3.0, 3.1, or 3.2 | **v3.0**, **v3.1**, or **v3.2** respectively |
+|Service | Read 3.2 preview | **v3.2-preview.1** |
+|Container | Read 3.0 preview or Read 3.1 preview | **v3.0** or **v3.1-preview.2** respectively |
++
+Next, use the following sections to narrow your operations and replace the **version string** in your API path with the value from the table. For example, for **Read v3.2 preview** cloud and container versions, update the API path to **https://{endpoint}/vision/v3.2-preview.1/read/analyze[?language]**.
+
+## Service/Container
+
+### `Batch Read File`
+
+|Read 2.x |Read 3.x |
+|-|--|
+|https://{endpoint}/vision/**v2.0/read/core/asyncBatchAnalyze** |https://{endpoint}/vision/<**version string**>/read/analyze[?language]|
+
+A new optional _language_ parameter is available. If you don't know the language of your document, or if it may be multilingual, don't include the parameter.
+
+### `Get Read Results`
+
+|Read 2.x |Read 3.x |
+|-|--|
+|https://{endpoint}/vision/**v2.0/read/operations**/{operationId} |https://{endpoint}/vision/<**version string**>/read/analyzeResults/{operationId}|
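+
+As a hedged illustration of the two v3.x calls above (not an official sample), the following Python sketch submits an image to the new analyze endpoint and then polls the analyzeResults endpoint. The endpoint, key, version string, and image URL are placeholders you'd replace with your own values.
+
+```python
+import time
+
+import requests
+
+endpoint = "https://<your-resource>.cognitiveservices.azure.com"  # placeholder
+key = "<your-key>"                                                # placeholder
+version = "v3.2"                                                  # version string from the table above
+headers = {"Ocp-Apim-Subscription-Key": key, "Content-Type": "application/json"}
+
+# Submit the image (replaces the v2.0 read/core/asyncBatchAnalyze call).
+submit = requests.post(
+    f"{endpoint}/vision/{version}/read/analyze",
+    headers=headers,
+    json={"url": "https://example.com/sample-document.jpg"},  # placeholder image URL
+)
+submit.raise_for_status()
+operation_url = submit.headers["Operation-Location"]  # points at /read/analyzeResults/{operationId}
+
+# Poll for the result (replaces the v2.0 read/operations call); note the lowercase status values.
+while True:
+    result = requests.get(operation_url, headers=headers).json()
+    if result["status"] in ("succeeded", "failed"):
+        break
+    time.sleep(1)
+
+print(result["status"])
+```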
+
+### `Get Read Operation Result` status flag
+
+When the call to `Get Read Operation Result` is successful, it returns a status string field in the JSON body.
+
+|Read 2.x |Read 3.x |
+|-|--|
+|`"NotStarted"` | `"notStarted"`|
+|`"Running"` | `"running"`|
+|`"Failed"` | `"failed"`|
+|`"Succeeded"` | `"succeeded"`|
+
+### API response (JSON)
+
+Note the following changes to the JSON:
+* In v2.x, `Get Read Operation Result` returns the OCR recognition JSON when the status is `"Succeeded"`. In v3.0, this value is `"succeeded"`.
+* To get the root for the page array, change the JSON hierarchy from `recognitionResults` to `analyzeResult`/`readResults`. The per-page line and word JSON hierarchy remains unchanged, so no code changes are required.
+* The page angle `clockwiseOrientation` has been renamed to `angle`, and its range has changed from 0 to 360 degrees to -180 to 180 degrees. Depending on your code, you may or may not have to make changes, because most math functions can handle either range.
+
+The v3.0 API also introduces the following improvements you can optionally use:
+* `createdDateTime` and `lastUpdatedDateTime` are added so you can track the duration of processing.
+* `version` tells you the version of the API used to generate the results.
+* A per-word `confidence` has been added. This value is calibrated so that a value of 0.95 means that there is a 95% chance the recognition is correct. The confidence score can be used to select which text to send to human review.
+
+
+In v2.x, the output format is as follows:
+
+```json
+ {
+ {
+ "status": "Succeeded",
+ "recognitionResults": [
+ {
+ "page": 1,
+ "language": "en",
+ "clockwiseOrientation": 349.59,
+ "width": 2661,
+ "height": 1901,
+ "unit": "pixel",
+ "lines": [
+ {
+ "boundingBox": [
+ 67,
+ 646,
+ 2582,
+ 713,
+ 2580,
+ 876,
+ 67,
+ 821
+ ],
+ "text": "The quick brown fox jumps",
+ "words": [
+ {
+ "boundingBox": [
+ 143,
+ 650,
+ 435,
+ 661,
+ 436,
+ 823,
+ 144,
+ 824
+ ],
+ "text": "The",
+ },
+ // The rest of result is omitted for brevity
+
+}
+```
+
+In v3.0, it has been adjusted:
+
+```json
+ {
+ {
+ "status": "succeeded",
+ "createdDateTime": "2020-05-28T05:13:21Z",
+ "lastUpdatedDateTime": "2020-05-28T05:13:22Z",
+ "analyzeResult": {
+ "version": "3.0.0",
+ "readResults": [
+ {
+ "page": 1,
+ "language": "en",
+ "angle": 0.8551,
+ "width": 2661,
+ "height": 1901,
+ "unit": "pixel",
+ "lines": [
+ {
+ "boundingBox": [
+ 67,
+ 646,
+ 2582,
+ 713,
+ 2580,
+ 876,
+ 67,
+ 821
+ ],
+ "text": "The quick brown fox jumps",
+ "words": [
+ {
+ "boundingBox": [
+ 143,
+ 650,
+ 435,
+ 661,
+ 436,
+ 823,
+ 144,
+ 824
+ ],
+ "text": "The",
+ "confidence": 0.958
+ },
+ // The rest of result is omitted for brevity
+
+ }
+```
+
+## Service only
+
+### `Recognize Text`
+`Recognize Text` is a *preview* operation that is being *deprecated in all versions of Azure AI Vision API*. You must migrate from `Recognize Text` to `Read` (v3.0) or `Batch Read File` (v2.0, v2.1). v3.0 of `Read` includes newer, better models for text recognition and other features, so it's recommended. To upgrade from `Recognize Text` to `Read`:
+
+|Recognize Text 2.x |Read 3.x |
+|-|--|
+|https://{endpoint}/vision/**v2.0/recognizeText[?mode]**|https://{endpoint}/vision/<**version string**>/read/analyze[?language]|
+
+The _mode_ parameter isn't supported in `Read`. Both handwritten and printed text are automatically supported.
+
+A new optional _language_ parameter is available in v3.0. If you don't know the language of your document, or if it may be multilingual, don't include the parameter.
+
+### `Get Recognize Text Operation Result`
+
+|Recognize Text 2.x |Read 3.x |
+|-|--|
+|https://{endpoint}/vision/**v2.0/textOperations/**{operationId}|https://{endpoint}/vision/<**version string**>/read/analyzeResults/{operationId}|
+
+### `Get Recognize Text Operation Result` status flags
+When the call to `Get Recognize Text Operation Result` is successful, it returns a status string field in the JSON body.
+
+|Recognize Text 2.x |Read 3.x |
+|-|--|
+|`"NotStarted"` | `"notStarted"`|
+|`"Running"` | `"running"`|
+|`"Failed"` | `"failed"`|
+|`"Succeeded"` | `"succeeded"`|
+
+### API response (JSON)
+
+Note the following changes to the JSON:
+* In v2.x, `Get Recognize Text Operation Result` returns the OCR recognition JSON when the status is `"Succeeded"`. In v3.x, this value is `"succeeded"`.
+* To get the root for the page array, change the JSON hierarchy from `recognitionResult` to `analyzeResult`/`readResults`. The per-page line and word JSON hierarchy remains unchanged, so no code changes are required.
+
+The v3.0 API also introduces the following improvements you can optionally use. See the API reference for more details:
+* `createdDateTime` and `lastUpdatedDateTime` are added so you can track the duration of processing.
+* `version` tells you the version of the API used to generate the results.
+* A per-word `confidence` has been added. This value is calibrated so that a value of 0.95 means that there is a 95% chance the recognition is correct. The confidence score can be used to select which text to send to human review.
+* `angle`: the general orientation of the text in the clockwise direction, measured in degrees in the range (-180, 180].
+* `width` and `height` give you the dimensions of your document, and `unit` provides the unit of those dimensions (pixels or inches, depending on the document type).
+* `page`: multipage documents are supported.
+* `language`: the input language of the document (from the optional _language_ parameter).
++
+In v2.x, the output format is as follows:
+
+```json
+ {
+ {
+ "status": "Succeeded",
+ "recognitionResult": [
+ {
+ "lines": [
+ {
+ "boundingBox": [
+ 67,
+ 646,
+ 2582,
+ 713,
+ 2580,
+ 876,
+ 67,
+ 821
+ ],
+ "text": "The quick brown fox jumps",
+ "words": [
+ {
+ "boundingBox": [
+ 143,
+ 650,
+ 435,
+ 661,
+ 436,
+ 823,
+ 144,
+ 824
+ ],
+ "text": "The",
+ },
+ // The rest of result is omitted for brevity
+
+ }
+```
+
+In v3.x, it has been adjusted:
+
+```json
+ {
+ {
+ "status": "succeeded",
+ "createdDateTime": "2020-05-28T05:13:21Z",
+ "lastUpdatedDateTime": "2020-05-28T05:13:22Z",
+ "analyzeResult": {
+ "version": "3.0.0",
+ "readResults": [
+ {
+ "page": 1,
+ "angle": 0.8551,
+ "width": 2661,
+ "height": 1901,
+ "unit": "pixel",
+ "lines": [
+ {
+ "boundingBox": [
+ 67,
+ 646,
+ 2582,
+ 713,
+ 2580,
+ 876,
+ 67,
+ 821
+ ],
+ "text": "The quick brown fox jumps",
+ "words": [
+ {
+ "boundingBox": [
+ 143,
+ 650,
+ 435,
+ 661,
+ 436,
+ 823,
+ 144,
+ 824
+ ],
+ "text": "The",
+ "confidence": 0.958
+ },
+ // The rest of result is omitted for brevity
+
+ }
+```
+
+## Container only
+
+### `Synchronous Read`
+
+|Read 2.0 |Read 3.x |
+|-|--|
+|https://{endpoint}/vision/**v2.0/read/core/Analyze** |https://{endpoint}/vision/<**version string**>/read/syncAnalyze[?language]|
ai-services Use Case Alt Text https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/computer-vision/use-case-alt-text.md
+
+ Title: "Overview: Generate alt text of images with Image Analysis"
+
+description: Grow your customer base by making your products and services more accessible. Generate a description of an image in human-readable language, using complete sentences.
+++++++ Last updated : 03/17/2023++++
+# Overview: Generate image alt-text with Image Analysis
+
+## What is alt text?
+
+Alt text, or alternative text, is an HTML attribute added to the `<img>` tag that displays images on an application or web page. It looks like this in plain HTML code:
+
+`<img src="elephant.jpg" alt="An elephant in a grassland">`
+
+Alt text enables website owners to describe an image in plain text. These image descriptions improve accessibility by enabling screen readers such as Microsoft Narrator, JAWS, and NVDA to accurately communicate image content to their visually impaired and blind users.
+
+Alt text is also vital for image search engine optimization (SEO). It helps search engines understand the visual content in your images, so they're better able to include and rank your website in search results when users search for that content.
+
+## Auto-generate alt text with Image Analysis
+
+Image Analysis offers image captioning models that generate one-sentence descriptions of an image's visual content. You can use these AI-generated captions as alt text for your images.
++
+Auto-generated caption: "An elephant in a grassland."
+
+Microsoft's own products, such as PowerPoint, Word, and the Edge browser, use image captioning by Image Analysis to generate alt text.
++
+## Benefits for your website
+
+- **Improve accessibility and user experience for blind and low-vision users**. Alt Text makes visual information in images available to screen readers used by blind and low-vision users.
+- **Meet legal compliance requirements**. Some websites may be legally required to remove all accessibility barriers. Using alt text for accessibility helps website owners minimize risk of legal action now and in the future.
+- **Make your website more discoverable and searchable**. Image alt text helps search engine crawlers find images on your website more easily and rank them higher in search results.
+
+## Frequently Asked Questions
+
+### What languages are image captions available in?
+
+Image captions are available in English, Chinese, Portuguese, Japanese, and Spanish in the Image Analysis 3.2 API. In the Image Analysis 4.0 API (preview), image captions are only available in English.
+
+### What confidence threshold should I use?
+
+To ensure accurate alt text for all images, you can choose to only accept captions above a certain confidence level. The right confidence level varies for each user depending on the type of images and usage scenario.
+
+In general, we advise a confidence threshold of `0.4` for the Image Analysis 3.2 API and of `0.0` for the Image Analysis 4.0 API (preview).
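+
+As a rough sketch only (see the quickstart linked below for the authoritative request shape), the following Python example calls the Image Analysis 3.2 Describe operation and uses the generated caption as alt text only when its confidence clears the suggested `0.4` threshold. The endpoint, key, and image URL are placeholders.
+
+```python
+import requests
+
+endpoint = "https://<your-resource>.cognitiveservices.azure.com"  # placeholder
+key = "<your-key>"                                                # placeholder
+
+response = requests.post(
+    f"{endpoint}/vision/v3.2/describe",
+    params={"maxCandidates": 1, "language": "en"},
+    headers={"Ocp-Apim-Subscription-Key": key},
+    json={"url": "https://example.com/elephant.jpg"},  # placeholder image URL
+)
+response.raise_for_status()
+caption = response.json()["description"]["captions"][0]
+
+# Only use the caption as alt text if it clears the suggested confidence threshold.
+alt_text = caption["text"] if caption["confidence"] >= 0.4 else "Photo"  # fall back to a generic label
+print(f'<img src="elephant.jpg" alt="{alt_text}">')
+```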
+
+### What can I do about embarrassing or erroneous captions?
+
+On rare occasions, image captions can contain embarrassing errors, such as labeling a male-identifying person as a "woman" or labeling an adult woman as a "girl". We encourage users to consider using the latest Image Analysis 4.0 API (preview) which eliminates some errors by supporting gender-neutral captions.
+
+Please report any embarrassing or offensive captions by going to the [Azure portal](https://ms.portal.azure.com/#home) and navigating to the **Feedback** button in the top right.
+
+## Next Steps
+Follow a quickstart to begin automatically generating alt text by using image captioning on Image Analysis.
+
+> [!div class="nextstepaction"]
+> [Image Analysis quickstart](./quickstarts-sdk/image-analysis-client-library-40.md)
ai-services Use Case Dwell Time https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/computer-vision/use-case-dwell-time.md
+
+ Title: "Overview: Monitor dwell time in front of displays"
+
+description: Spatial Analysis can provide real-time information about how long customers spend in front of a display in a retail store.
+++++++ Last updated : 07/22/2022+++
+# Overview: Monitor dwell time in front of displays with Spatial Analysis
+
+Spatial Analysis can provide real-time information about how long customers spend in front of a display in a retail store. The service monitors the length of time customers spend in a zone you specify. You can use this information to track customer engagement with promotions/displays within a store or understand customers' preference toward specific products.
++
+## Key features
+
+You can deploy an end-to-end solution to track customers' time spent in front of displays.
+- **Customize areas to monitor**: with Spatial Analysis, you can customize any space to monitor, such as the area in front of promotions or the area in front of displays.
+- **Count individual entrances**: you can measure how long people spend within these spaces.
+- **Customize models to differentiate between shoppers and employees**: with the customization options offered, you can train and deploy models to differentiate shoppers and employees so that you only generate data about customer behavior.
++
+## Benefits for your business
+
+In-store dwell time is an important metric to help store managers track customers' behaviors and generate insights about customers to make data-driven decisions. Tracking in-store dwell time will help store managers in the following ways:
+* **Generate data-driven insights about customers**: by tracking and analyzing the time customers spend in front of specific displays, you can understand how and why customers react to your displays.
+* **Collect real-time customer reactions**: instead of sending and receiving feedback from customers, you have access to customer interaction with displays in real time.
+* **Measure display/promotion effectiveness**: You can track, analyze and compare footfall and time spent by your customers to measure the effectiveness of promotions or displays.
+* **Manage inventory**: You can track and monitor changes in foot traffic and time spent by your customers to modify shift plans and inventory.
+* **Make data-driven decisions**: with data about your customers, you're empowered to make decisions that improve customer experience and satisfy customer needs.
++
+## Next steps
+
+Get started using Spatial Analysis in a container.
+
+> [!div class="nextstepaction"]
+> [Install and run the Spatial Analysis container](./spatial-analysis-container.md)
+
ai-services Use Case Identity Verification https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/computer-vision/use-case-identity-verification.md
+
+ Title: "Overview: Identity verification with Face"
+
+description: Provide the best-in-class face verification experience in your business solution using Azure AI Face service. You can verify someone's identity against a government-issued ID card like a passport or driver's license.
+++++++ Last updated : 07/22/2022+++
+# Overview: Identity verification with Face
+
+Provide the best-in-class face verification experience in your business solution using Azure AI Face service. You can verify someone's identity against a government-issued ID card like a passport or driver's license. Use this verification to grant access to digital or physical services or recover an account. Specific access scenarios include opening a new account, verifying a user, or proctoring an online assessment. Identity verification can be done when a person is onboarded to your service, and repeated when they access a digital or physical service.
++
+## Benefits for your business
+
+Identity verification verifies that the user is who they claim to be. Most organizations require some type of identity verification. Biometric identity verification with Face service provides the following benefits to your business:
+
+* Seamless end user experience: instead of entering a passcode manually, users can look at a camera to get access.
+* Improved security: biometric verification is more secure than alternative verification methods that request knowledge from users, since that information can be more easily obtained by bad actors.
+
+## Key features
+
+Face service can power an end-to-end, low-friction, high-accuracy identity verification solution.
+
+* Face Detection ("Detection" / "Detect") answers the question, "Are there one or more human faces in this image?" Detection finds human faces in an image and returns bounding boxes indicating their locations. Face detection models alone don't find individually identifying features, only a bounding box. All of the other operations are dependent on Detection: before Face can identify or verify a person (see below), it must know the locations of the faces to be recognized.
+* Face Detection for attributes: The Detect API can optionally be used to analyze attributes about each face, such as head pose and facial landmarks, using other AI models. The attribute functionality is separate from the verification and identification functionality of Face. The full list of attributes is described in the [Face detection concept guide](concept-face-detection.md). The values returned by the API for each attribute are predictions of the perceived attributes and are best used to make aggregated approximations of attribute representation rather than individual assessments.
+* Face Verification ("Verification" / "Verify") builds on Detect and addresses the question, "Are these two images of the same person?" Verification is also called "one-to-one" matching because the probe image is compared to only one enrolled template. Verification can be used in identity verification or access control scenarios to verify that a picture matches a previously captured image (such as from a photo from a government-issued ID card).
+* Face Group ("Group") also builds on Detect and creates smaller groups of faces that look similar to each other from all enrollment templates.
+
+## Next steps
+
+Follow a quickstart to do identity verification with Face.
+
+> [!div class="nextstepaction"]
+> [Identity verification quickstart](./quickstarts-sdk/identity-client-library.md)
ai-services Use Case Queue Time https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/computer-vision/use-case-queue-time.md
+
+ Title: "Overview: Measure retail queue wait time with Spatial Analysis"
+
+description: Spatial Analysis can generate real-time information about how long users are waiting in a queue in a retail store. A store manager can use this information to manage open checkout stations and employee shifts.
+++++++ Last updated : 07/22/2022+++
+# Overview: Measure queue wait time with Spatial Analysis
+
+Spatial Analysis can generate real-time information about how long users are waiting in a queue in a retail store. The service monitors the length of time customers spend standing inside a checkout zone you specify. A store manager can use this information to manage open checkout stations and employee shifts.
++
+## Key features
+
+You can deploy an end-to-end solution to track customers' queue wait time.
+* **Customize areas to monitor**: With Spatial Analysis, you can customize the space you want to monitor.
+* **Count individual entrances**: You can monitor how long people spend within these spaces.
+* **Customize models to differentiate between shoppers and employees**: with the customization options offered, you can train and deploy models to differentiate shoppers and employees so that you only generate data about customer behavior.
+
+## Benefits for your business
+
+By managing queue wait time and making informed decisions, you can help your business in the following ways:
+* **Improve customer experience**: customer needs are met effectively and quickly by equipping store managers and employees with actionable alerts triggered by long checkout lines.
+* **Reduce unnecessary operation costs**: optimize staffing needs by monitoring checkout lines and closing vacant checkout stations.
+* **Make long-term data-driven decisions**: predict ideal future store operations by analyzing the traffic at different times of the day and on different days.
+
+
+## Next steps
+
+Get started using Spatial Analysis in a container.
+
+> [!div class="nextstepaction"]
+> [Install and run the Spatial Analysis container](./spatial-analysis-container.md)
ai-services Vehicle Analysis https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/computer-vision/vehicle-analysis.md
+
+ Title: Configure vehicle analysis containers
+
+description: Vehicle analysis provides each container with a common configuration framework, so that you can easily configure and manage compute, AI insight egress, logging, and security settings.
+++++++ Last updated : 11/07/2022+++
+# Install and run vehicle analysis (preview)
+
+Vehicle analysis is a set of capabilities that, when used with the Spatial Analysis container, enable you to analyze real-time streaming video to understand vehicle characteristics and placement. In this article, you'll learn how to use the capabilities of the spatial analysis container to deploy vehicle analysis operations.
+
+## Prerequisites
+
+* To utilize the operations of vehicle analysis, you must first follow the steps to [install and run spatial analysis container](./spatial-analysis-container.md) including configuring your host machine, downloading and configuring your [DeploymentManifest.json](https://github.com/Azure-Samples/cognitive-services-sample-data-files/blob/master/ComputerVision/spatial-analysis/DeploymentManifest.json) file, executing the deployment, and setting up device [logging](spatial-analysis-logging.md).
+ * When you configure your [DeploymentManifest.json](https://github.com/Azure-Samples/cognitive-services-sample-data-files/blob/master/ComputerVision/spatial-analysis/DeploymentManifest.json) file, refer to the steps below to add the graph configurations for vehicle analysis to your manifest prior to deploying the container. Or, once the spatial analysis container is up and running, you may add the graph configurations and follow the steps to redeploy. The steps below will outline how to properly configure your container.
+* If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin.
+
+> [!NOTE]
+> Make sure that the edge device has at least 50 GB of disk space available before deploying the Spatial Analysis module.
+++
+## Vehicle analysis operations
+
+Similar to Spatial Analysis, vehicle analysis enables the analysis of real-time streaming video from camera devices. For each camera device you configure, the operations for vehicle analysis will generate an output stream of JSON messages that are sent to your instance of Azure IoT Hub.
+
+The following operations for vehicle analysis are available in the current Spatial Analysis container. Vehicle analysis offers operations optimized for both GPU and CPU (CPU operations include the *.cpu* distinction).
+
+| Operation identifier | Description |
+| -- | - |
+| **cognitiveservices.vision.vehicleanalysis-vehiclecount-preview** and **cognitiveservices.vision.vehicleanalysis-vehiclecount.cpu-preview** | Counts vehicles parked in a designated zone in the camera's field of view. </br> Emits an initial _vehicleCountEvent_ event and then _vehicleCountEvent_ events when the count changes. |
+| **cognitiveservices.vision.vehicleanalysis-vehicleinpolygon-preview** and **cognitiveservices.vision.vehicleanalysis-vehicleinpolygon.cpu-preview** | Identifies when a vehicle parks in a designated parking region in the camera's field of view. </br> Emits a _vehicleInPolygonEvent_ event when the vehicle is parked inside a parking space. |
+
+In addition to exposing the vehicle location, other estimated attributes for **cognitiveservices.vision.vehicleanalysis-vehiclecount-preview**, **cognitiveservices.vision.vehicleanalysis-vehiclecount.cpu-preview**, **cognitiveservices.vision.vehicleanalysis-vehicleinpolygon-preview** and **cognitiveservices.vision.vehicleanalysis-vehicleinpolygon.cpu-preview** include vehicle color and vehicle type. All of the possible values for these attributes are found in the output section (below).
+
+### Operation parameters for vehicle analysis
+
+The following table shows the parameters required by each of the vehicle analysis operations. Many are shared with Spatial Analysis; the only one not shared is the `PARKING_REGIONS` setting. The full list of Spatial Analysis operation parameters can be found in the [Spatial Analysis container](./spatial-analysis-container.md?tabs=azure-stack-edge#iot-deployment-manifest) guide.
+
+| Operation parameters| Description|
+|||
+| Operation ID | The Operation Identifier from table above.|
+| enabled | Boolean: true or false|
+| VIDEO_URL| The RTSP URL for the camera device (for example: `rtsp://username:password@url`). Spatial Analysis supports H.264 encoded streams either through RTSP, HTTP, or MP4. |
+| VIDEO_SOURCE_ID | A friendly name for the camera device or video stream. This will be returned with the event JSON output.|
+| VIDEO_IS_LIVE| True for camera devices; false for recorded videos.|
+| VIDEO_DECODE_GPU_INDEX| Index specifying which GPU will decode the video frame. By default it is 0. This should be the same as the `gpu_index` in other node configurations like `VICA_NODE_CONFIG`, `DETECTOR_NODE_CONFIG`.|
+| PARKING_REGIONS | JSON configuration for zone and line as outlined below. </br> PARKING_REGIONS must contain four points in normalized coordinates ([0, 1]) that define a convex region (the points follow a clockwise or counterclockwise order).|
+| EVENT_OUTPUT_MODE | Can be ON_INPUT_RATE or ON_CHANGE. ON_INPUT_RATE will generate an output on every single frame received (one FPS). ON_CHANGE will generate an output when something changes (number of vehicles or parking spot occupancy). |
+| PARKING_SPOT_METHOD | Can be BOX or PROJECTION. BOX will use an overlap between the detected bounding box and a reference bounding box. PROJECTION will project the centroid point into the parking spot polygon drawn on the floor. This parameter is only used for the parking spot operation and can be omitted.|
+
+This is an example of a valid `PARKING_REGIONS` configuration:
+
+```json
+"{\"parking_slot1\": {\"type\": \"SingleSpot\", \"region\": [[0.20833333, 0.46203704], [0.3015625 , 0.66203704], [0.13229167, 0.7287037 ], [0.07395833, 0.51574074]]}}"
+```
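+
+Because `PARKING_REGIONS` is itself a JSON document embedded as a string, it can be easier to generate the value than to hand-escape it. The following Python sketch is illustrative only (the zone name and coordinates are placeholders); it produces a string equivalent to the example above.
+
+```python
+import json
+
+parking_regions = {
+    "parking_slot1": {
+        "type": "SingleSpot",
+        "region": [
+            [0.20833333, 0.46203704],
+            [0.3015625, 0.66203704],
+            [0.13229167, 0.7287037],
+            [0.07395833, 0.51574074],
+        ],
+    }
+}
+
+# json.dumps twice: once to build the inner JSON document, and once more to escape it
+# as a quoted string value that you can paste into the deployment manifest.
+inner = json.dumps(parking_regions)
+manifest_value = json.dumps(inner)
+print(manifest_value)
+```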
+
+### Zone configuration for cognitiveservices.vision.vehicleanalysis-vehiclecount-preview and cognitiveservices.vision.vehicleanalysis-vehiclecount.cpu-preview
+
+This is an example of a JSON input for the `PARKING_REGIONS` parameter that configures a zone. You may configure multiple zones for this operation.
+
+```json
+{
+    "zone1": {
+        "type": "Queue",
+        "region": [[x1, y1], [x2, y2], [x3, y3], [x4, y4]]
+    }
+}
+```
+
+| Name | Type| Description|
+||||
+| `zones` | dictionary | Keys are the zone names and the values are a field with type and region.|
+| `name` | string| Friendly name for this zone.|
+| `region` | list| Each value pair represents the x,y coordinates of a vertex of the polygon. The polygon represents the areas in which vehicles are tracked or counted. The float values represent the position of the vertex relative to the top-left corner. To calculate the absolute x,y values, multiply these values by the frame dimensions.|
+| `type` | string| For **cognitiveservices.vision.vehicleanalysis-vehiclecount** this should be "Queue".|
++
+### Zone configuration for cognitiveservices.vision.vehicleanalysis-vehicleinpolygon-preview and cognitiveservices.vision.vehicleanalysis-vehicleinpolygon.cpu-preview
+
+This is an example of a JSON input for the `PARKING_REGIONS` parameter that configures a zone. You may configure multiple zones for this operation.
+
+```json
+{
+    "zone1": {
+        "type": "SingleSpot",
+        "region": [[x1, y1], [x2, y2], [x3, y3], [x4, y4]]
+    }
+}
+```
+
+| Name | Type| Description|
+||||
+| `zones` | dictionary | Keys are the zone names and the values are a field with type and region.|
+| `name` | string| Friendly name for this zone.|
+| `region` | list| Each value pair represents the x,y coordinates of a vertex of the polygon. The polygon represents the areas in which vehicles are tracked or counted. The float values represent the position of the vertex relative to the top-left corner. To calculate the absolute x,y values, multiply these values by the frame dimensions.|
+| `type` | string| For **cognitiveservices.vision.vehicleanalysis-vehicleinpolygon-preview** and **cognitiveservices.vision.vehicleanalysis-vehicleinpolygon.cpu-preview** this should be "SingleSpot".|
+
+## Configuring the vehicle analysis operations
+
+You must configure the graphs for vehicle analysis in your [DeploymentManifest.json](https://github.com/Azure-Samples/cognitive-services-sample-data-files/blob/master/ComputerVision/spatial-analysis/DeploymentManifest.json) file to enable the vehicle analysis operations. Below are sample graphs for vehicle analysis. You can add these JSON snippets to your deployment manifest in the "graphs" configuration section, configure the parameters for your video stream, and deploy the module. If you only intend to utilize the vehicle analysis capabilities, you may replace the existing graphs in the [DeploymentManifest.json](https://github.com/Azure-Samples/cognitive-services-sample-data-files/blob/master/ComputerVision/spatial-analysis/DeploymentManifest.json) with the vehicle analysis graphs.
+
+Below is the graph optimized for the **vehicle count** operation.
+
+```json
+"vehiclecount": {
+ "operationId": "cognitiveservices.vision.vehicleanalysis-vehiclecount-preview",
+ "version": 1,
+ "enabled": true,
+ "parameters": {
+ "VIDEO_URL": "<Replace RTSP URL here>",
+ "VIDEO_SOURCE_ID": "vehiclecountgraph",
+ "VIDEO_DECODE_GPU_INDEX": 0,
+ "VIDEO_IS_LIVE":true,
+        "PARKING_REGIONS": "{\"1\": {\"type\": \"Queue\", \"region\": [[0.20833333, 0.46203704], [0.3015625, 0.66203704], [0.13229167, 0.7287037], [0.07395833, 0.51574074]]}}"
+ }
+}
+```
+
+Below is the graph optimized for the **vehicle in polygon** operation, utilized for vehicles in a parking spot.
+
+```json
+"vehicleinpolygon": {
+ "operationId": "cognitiveservices.vision.vehicleanalysis-vehicleinpolygon-preview",
+ "version": 1,
+ "enabled": true,
+ "parameters": {
+ "VIDEO_URL": "<Replace RTSP URL here>",
+        "VIDEO_SOURCE_ID": "vehicleinpolygon",
+ "VIDEO_DECODE_GPU_INDEX": 0,
+ "VIDEO_IS_LIVE":true,
+        "PARKING_REGIONS": "{\"1\": {\"type\": \"SingleSpot\", \"region\": [[0.20833333, 0.46203704], [0.3015625, 0.66203704], [0.13229167, 0.7287037], [0.07395833, 0.51574074]]}}"
+ }
+}
+```
+
+## Sample cognitiveservices.vision.vehicleanalysis-vehiclecount-preview and cognitiveservices.vision.vehicleanalysis-vehiclecount.cpu-preview output
+
+The JSON below demonstrates an example of the vehicle count operation graph output.
+
+```json
+{
+ "events": [
+ {
+ "id": "95144671-15f7-3816-95cf-2b62f7ff078b",
+ "type": "vehicleCountEvent",
+ "detectionIds": [
+ "249acdb1-65a0-4aa4-9403-319c39e94817",
+ "200900c1-9248-4487-8fed-4b0c48c0b78d"
+ ],
+ "properties": {
+ "@type": "type.googleapis.com/microsoft.rtcv.insights.VehicleCountEventMetadata",
+ "detectionCount": 3
+ },
+ "zone": "1",
+ "trigger": ""
+ }
+ ],
+ "sourceInfo": {
+ "id": "vehiclecountTest",
+ "timestamp": "2022-09-21T19:31:05.558Z",
+ "frameId": "4",
+ "width": 0,
+ "height": 0,
+ "imagePath": ""
+ },
+ "detections": [
+ {
+ "type": "vehicle",
+ "id": "200900c1-9248-4487-8fed-4b0c48c0b78d",
+ "region": {
+ "type": "RECTANGLE",
+ "points": [
+ {
+ "x": 0.5962499976158142,
+ "y": 0.46250003576278687,
+ "visible": false,
+ "label": ""
+ },
+ {
+ "x": 0.7544531226158142,
+ "y": 0.64000004529953,
+ "visible": false,
+ "label": ""
+ }
+ ],
+ "name": "",
+ "normalizationType": "UNSPECIFIED_NORMALIZATION"
+ },
+ "confidence": 0.9934938549995422,
+ "attributes": [
+ {
+ "task": "VehicleType",
+ "label": "Bicycle",
+ "confidence": 0.00012480001896619797
+ },
+ {
+ "task": "VehicleType",
+ "label": "Bus",
+ "confidence": 1.4998147889855318e-05
+ },
+ {
+ "task": "VehicleType",
+ "label": "Car",
+ "confidence": 0.9200984239578247
+ },
+ {
+ "task": "VehicleType",
+ "label": "Motorcycle",
+ "confidence": 0.0058081308379769325
+ },
+ {
+ "task": "VehicleType",
+ "label": "Pickup_Truck",
+ "confidence": 0.0001521655503893271
+ },
+ {
+ "task": "VehicleType",
+ "label": "SUV",
+ "confidence": 0.04790870100259781
+ },
+ {
+ "task": "VehicleType",
+ "label": "Truck",
+ "confidence": 1.346438511973247e-05
+ },
+ {
+ "task": "VehicleType",
+ "label": "Van/Minivan",
+ "confidence": 0.02388562448322773
+ },
+ {
+ "task": "VehicleType",
+ "label": "type_other",
+ "confidence": 0.0019937530159950256
+ },
+ {
+ "task": "VehicleColor",
+ "label": "Black",
+ "confidence": 0.49258527159690857
+ },
+ {
+ "task": "VehicleColor",
+ "label": "Blue",
+ "confidence": 0.47634875774383545
+ },
+ {
+ "task": "VehicleColor",
+ "label": "Brown/Beige",
+ "confidence": 0.007451261859387159
+ },
+ {
+ "task": "VehicleColor",
+ "label": "Green",
+ "confidence": 0.0002614705008454621
+ },
+ {
+ "task": "VehicleColor",
+ "label": "Grey",
+ "confidence": 0.0005819533253088593
+ },
+ {
+ "task": "VehicleColor",
+ "label": "Red",
+ "confidence": 0.0026496786158531904
+ },
+ {
+ "task": "VehicleColor",
+ "label": "Silver",
+ "confidence": 0.012039118446409702
+ },
+ {
+ "task": "VehicleColor",
+ "label": "White",
+ "confidence": 0.007863214239478111
+ },
+ {
+ "task": "VehicleColor",
+ "label": "Yellow/Gold",
+ "confidence": 4.345366687630303e-05
+ },
+ {
+ "task": "VehicleColor",
+ "label": "color_other",
+ "confidence": 0.0001758455764502287
+ }
+ ],
+ "metadata": {
+ "tracking_id": ""
+ }
+ },
+ {
+ "type": "vehicle",
+ "id": "249acdb1-65a0-4aa4-9403-319c39e94817",
+ "region": {
+ "type": "RECTANGLE",
+ "points": [
+ {
+ "x": 0.44859376549720764,
+ "y": 0.5375000238418579,
+ "visible": false,
+ "label": ""
+ },
+ {
+ "x": 0.6053906679153442,
+ "y": 0.7537500262260437,
+ "visible": false,
+ "label": ""
+ }
+ ],
+ "name": "",
+ "normalizationType": "UNSPECIFIED_NORMALIZATION"
+ },
+ "confidence": 0.9893689751625061,
+ "attributes": [
+ {
+ "task": "VehicleType",
+ "label": "Bicycle",
+ "confidence": 0.0003215899341739714
+ },
+ {
+ "task": "VehicleType",
+ "label": "Bus",
+ "confidence": 3.258735432609683e-06
+ },
+ {
+ "task": "VehicleType",
+ "label": "Car",
+ "confidence": 0.825579047203064
+ },
+ {
+ "task": "VehicleType",
+ "label": "Motorcycle",
+ "confidence": 0.14065399765968323
+ },
+ {
+ "task": "VehicleType",
+ "label": "Pickup_Truck",
+ "confidence": 0.00044341650209389627
+ },
+ {
+ "task": "VehicleType",
+ "label": "SUV",
+ "confidence": 0.02949284389615059
+ },
+ {
+ "task": "VehicleType",
+ "label": "Truck",
+ "confidence": 1.625348158995621e-05
+ },
+ {
+ "task": "VehicleType",
+ "label": "Van/Minivan",
+ "confidence": 0.003406822681427002
+ },
+ {
+ "task": "VehicleType",
+ "label": "type_other",
+ "confidence": 8.27941985335201e-05
+ },
+ {
+ "task": "VehicleColor",
+ "label": "Black",
+ "confidence": 2.028317430813331e-05
+ },
+ {
+ "task": "VehicleColor",
+ "label": "Blue",
+ "confidence": 0.00022600525699090213
+ },
+ {
+ "task": "VehicleColor",
+ "label": "Brown/Beige",
+ "confidence": 3.327144668219262e-06
+ },
+ {
+ "task": "VehicleColor",
+ "label": "Green",
+ "confidence": 5.160827640793286e-05
+ },
+ {
+ "task": "VehicleColor",
+ "label": "Grey",
+ "confidence": 5.614096517092548e-05
+ },
+ {
+ "task": "VehicleColor",
+ "label": "Red",
+ "confidence": 1.0396311012073056e-07
+ },
+ {
+ "task": "VehicleColor",
+ "label": "Silver",
+ "confidence": 0.9996315240859985
+ },
+ {
+ "task": "VehicleColor",
+ "label": "White",
+ "confidence": 1.0256461791868787e-05
+ },
+ {
+ "task": "VehicleColor",
+ "label": "Yellow/Gold",
+ "confidence": 1.8006812751991674e-07
+ },
+ {
+ "task": "VehicleColor",
+ "label": "color_other",
+ "confidence": 5.103976263853838e-07
+ }
+ ],
+ "metadata": {
+ "tracking_id": ""
+ }
+ }
+ ],
+ "schemaVersion": "2.0"
+}
+```
+
+| Event Field Name | Type| Description|
+||||
+| `id` | string| Event ID|
+| `type` | string| Event type|
+| `detectionIds` | array| Array of unique identifiers of the vehicle detections that triggered this event|
+| `properties` | collection| Collection of values [detectionCount]|
+| `zone` | string | The "name" field of the polygon that represents the zone that was crossed|
+| `trigger` | string| Not used |
+
+| Detections Field Name | Type| Description|
+||||
+| `id` | string| Detection ID|
+| `type` | string| Detection type|
+| `region` | collection| Collection of values|
+| `type` | string| Type of region|
+| `points` | collection| Top left and bottom right points when the region type is RECTANGLE |
+| `confidence` | float| Algorithm confidence|
+
+| Attribute | Type | Description |
+||||
+| `VehicleType` | float | Detected vehicle types. Possible detections include "VehicleType_Bicycle", "VehicleType_Bus", "VehicleType_Car", "VehicleType_Motorcycle", "VehicleType_Pickup_Truck", "VehicleType_SUV", "VehicleType_Truck", "VehicleType_Van/Minivan", "VehicleType_type_other" |
+| `VehicleColor` | float | Detected vehicle colors. Possible detections include "VehicleColor_Black", "VehicleColor_Blue", "VehicleColor_Brown/Beige", "VehicleColor_Green", "VehicleColor_Grey", "VehicleColor_Red", "VehicleColor_Silver", "VehicleColor_White", "VehicleColor_Yellow/Gold", "VehicleColor_color_other" |
+| `confidence` | float| Algorithm confidence|
+
+| SourceInfo Field Name | Type| Description|
+||||
+| `id` | string| Camera ID|
+| `timestamp` | date| UTC date when the JSON payload was emitted in format YYYY-MM-DDTHH:MM:SS.ssZ |
+| `width` | int | Video frame width|
+| `height` | int | Video frame height|
+| `frameId` | int | Frame identifier|
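+
+For reference, here's a short illustrative Python sketch (not part of the container) that reads one of these vehicle count payloads and reports the count plus the most likely type and color for each detection, using the fields described in the tables above.
+
+```python
+import json
+
+def summarize_vehicle_count(payload):
+    """Summarize a vehicleCountEvent message emitted by the vehicle count operation."""
+    message = json.loads(payload)
+
+    for event in message["events"]:
+        if event["type"] == "vehicleCountEvent":
+            print(f'Zone {event["zone"]}: {event["properties"]["detectionCount"]} vehicle(s)')
+
+    for detection in message["detections"]:
+        # Pick the highest-confidence label for each attribute task (VehicleType, VehicleColor).
+        best = {}
+        for attribute in detection["attributes"]:
+            task, label, conf = attribute["task"], attribute["label"], attribute["confidence"]
+            if conf > best.get(task, ("", -1.0))[1]:
+                best[task] = (label, conf)
+        vehicle_type = best.get("VehicleType", ("unknown", 0.0))[0]
+        vehicle_color = best.get("VehicleColor", ("unknown", 0.0))[0]
+        print(f'Detection {detection["id"]}: {vehicle_color} {vehicle_type}')
+```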
+
+## Sample cognitiveservices.vision.vehicleanalysis-vehicleinpolygon-preview and cognitiveservices.vision.vehicleanalysis-vehicleinpolygon.cpu-preview output
+
+The JSON below demonstrates an example of the vehicle in polygon operation graph output.
+
+```json
+{
+ "events": [
+ {
+ "id": "1b812a6c-1fa3-3827-9769-7773aeae733f",
+ "type": "vehicleInPolygonEvent",
+ "detectionIds": [
+ "de4256ea-8b38-4883-9394-bdbfbfa5bd41"
+ ],
+ "properties": {
+ "@type": "type.googleapis.com/microsoft.rtcv.insights.VehicleInPolygonEventMetadata",
+ "status": "PARKED"
+ },
+ "zone": "1",
+ "trigger": ""
+ }
+ ],
+ "sourceInfo": {
+ "id": "vehicleInPolygonTest",
+ "timestamp": "2022-09-21T20:36:47.737Z",
+ "frameId": "3",
+ "width": 0,
+ "height": 0,
+ "imagePath": ""
+ },
+ "detections": [
+ {
+ "type": "vehicle",
+ "id": "de4256ea-8b38-4883-9394-bdbfbfa5bd41",
+ "region": {
+ "type": "RECTANGLE",
+ "points": [
+ {
+ "x": 0.18703125417232513,
+ "y": 0.32875001430511475,
+ "visible": false,
+ "label": ""
+ },
+ {
+ "x": 0.2650781273841858,
+ "y": 0.42500001192092896,
+ "visible": false,
+ "label": ""
+ }
+ ],
+ "name": "",
+ "normalizationType": "UNSPECIFIED_NORMALIZATION"
+ },
+ "confidence": 0.9583820700645447,
+ "attributes": [
+ {
+ "task": "VehicleType",
+ "label": "Bicycle",
+ "confidence": 0.0005135730025358498
+ },
+ {
+ "task": "VehicleType",
+ "label": "Bus",
+ "confidence": 2.502854385966202e-07
+ },
+ {
+ "task": "VehicleType",
+ "label": "Car",
+ "confidence": 0.9575894474983215
+ },
+ {
+ "task": "VehicleType",
+ "label": "Motorcycle",
+ "confidence": 0.03809007629752159
+ },
+ {
+ "task": "VehicleType",
+ "label": "Pickup_Truck",
+ "confidence": 6.314369238680229e-05
+ },
+ {
+ "task": "VehicleType",
+ "label": "SUV",
+ "confidence": 0.003204471431672573
+ },
+ {
+ "task": "VehicleType",
+ "label": "Truck",
+ "confidence": 4.916510079056025e-07
+ },
+ {
+ "task": "VehicleType",
+ "label": "Van/Minivan",
+ "confidence": 0.00029918691143393517
+ },
+ {
+ "task": "VehicleType",
+ "label": "type_other",
+ "confidence": 0.00023934587079565972
+ },
+ {
+ "task": "VehicleColor",
+ "label": "Black",
+ "confidence": 0.7501943111419678
+ },
+ {
+ "task": "VehicleColor",
+ "label": "Blue",
+ "confidence": 0.02153826877474785
+ },
+ {
+ "task": "VehicleColor",
+ "label": "Brown/Beige",
+ "confidence": 0.0013857109006494284
+ },
+ {
+ "task": "VehicleColor",
+ "label": "Green",
+ "confidence": 0.0006621106876991689
+ },
+ {
+ "task": "VehicleColor",
+ "label": "Grey",
+ "confidence": 0.007349356077611446
+ },
+ {
+ "task": "VehicleColor",
+ "label": "Red",
+ "confidence": 0.1460476964712143
+ },
+ {
+ "task": "VehicleColor",
+ "label": "Silver",
+ "confidence": 0.015320491977036
+ },
+ {
+ "task": "VehicleColor",
+ "label": "White",
+ "confidence": 0.053948428481817245
+ },
+ {
+ "task": "VehicleColor",
+ "label": "Yellow/Gold",
+ "confidence": 0.0030805091373622417
+ },
+ {
+ "task": "VehicleColor",
+ "label": "color_other",
+ "confidence": 0.0004731453664135188
+ }
+ ],
+ "metadata": {
+ "tracking_id": ""
+ }
+ }
+ ],
+ "schemaVersion": "2.0"
+}
+```
+
+| Event Field Name | Type| Description|
+||||
+| `id` | string| Event ID|
+| `type` | string| Event type|
+| `detectionIds` | array| Array of unique identifiers of the vehicle detection that triggered this event|
+| `properties` | collection| Collection of values; status is one of [PARKED/EXITED]|
+| `zone` | string | The "name" field of the polygon that represents the zone that was crossed|
+| `trigger` | string| Not used |
+
+| Detections Field Name | Type| Description|
+||||
+| `id` | string| Detection ID|
+| `type` | string| Detection type|
+| `region` | collection| Collection of values|
+| `type` | string| Type of region|
+| `points` | collection| Top left and bottom right points when the region type is RECTANGLE |
+| `confidence` | float| Algorithm confidence|
+
+| Attribute | Type | Description |
+||||
+| `VehicleType` | float | Detected vehicle types. Possible detections include "VehicleType_Bicycle", "VehicleType_Bus", "VehicleType_Car", "VehicleType_Motorcycle", "VehicleType_Pickup_Truck", "VehicleType_SUV", "VehicleType_Truck", "VehicleType_Van/Minivan", "VehicleType_type_other" |
+| `VehicleColor` | float | Detected vehicle colors. Possible detections include "VehicleColor_Black", "VehicleColor_Blue", "VehicleColor_Brown/Beige", "VehicleColor_Green", "VehicleColor_Grey", "VehicleColor_Red", "VehicleColor_Silver", "VehicleColor_White", "VehicleColor_Yellow/Gold", "VehicleColor_color_other" |
+| `confidence` | float| Algorithm confidence|
+
+| SourceInfo Field Name | Type| Description|
+||||
+| `id` | string| Camera ID|
+| `timestamp` | date| UTC date when the JSON payload was emitted in format YYYY-MM-DDTHH:MM:SS.ssZ |
+| `width` | int | Video frame width|
+| `height` | int | Video frame height|
+| `frameId` | int | Frame identifier|
+
+## Zone and line configuration for vehicle analysis
+
+For guidelines on where to place your zones for vehicle analysis, you can refer to the [zone and line placement](spatial-analysis-zone-line-placement.md) guide for spatial analysis. Configuring zones for vehicle analysis can be more straightforward than zones for spatial analysis if the parking spaces are already defined in the zone which you're analyzing.
+
+## Camera placement for vehicle analysis
+
+For guidelines on where and how to place your camera for vehicle analysis, refer to the [camera placement](spatial-analysis-camera-placement.md) guide found in the spatial analysis documentation. Also consider the height at which the camera is mounted in the parking lot. When you analyze vehicle patterns, a higher vantage point is ideal to ensure that the camera's field of view is wide enough to accommodate one or more vehicles, depending on your scenario.
+
+## Billing
+
+The vehicle analysis container sends billing information to Azure, using a Vision resource on your Azure account. The use of vehicle analysis in public preview is currently free.
+
+Azure AI services containers aren't licensed to run without being connected to the metering / billing endpoint. You must enable the containers to communicate billing information with the billing endpoint at all times. Azure AI services containers don't send customer data, such as the video or image that's being analyzed, to Microsoft.
+
+## Next steps
+
+* Set up a [Spatial Analysis container](spatial-analysis-container.md)
ai-services Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/computer-vision/whats-new.md
+
+ Title: What's new in Azure AI Vision?
+
+description: Stay up to date on recent releases and updates to Azure AI Vision.
+++++++ Last updated : 12/27/2022+++
+# What's new in Azure AI Vision
+
+Learn what's new in the service. These items may be release notes, videos, blog posts, and other types of information. Bookmark this page to stay up to date with new features, enhancements, fixes, and documentation updates.
+
+## May 2023
+
+### Image Analysis 4.0 Product Recognition (public preview)
+
+The Product Recognition APIs let you analyze photos of shelves in a retail store. You can detect the presence and absence of products and get their bounding box coordinates. Use it in combination with model customization to train a model to identify your specific products. You can also compare Product Recognition results to your store's planogram document. [Product Recognition](./concept-shelf-analysis.md).
+
+## April 2023
+
+### Face limited access tokens
+
+Independent software vendors (ISVs) can manage the Face API usage of their clients by issuing access tokens that grant access to Face features which are normally gated. This allows client companies to use the Face API without having to go through the formal approval process. [Use limited access tokens](how-to/identity-access-token.md).
+
+## March 2023
+
+### Azure AI Vision Image Analysis 4.0 SDK public preview
+
+The [Florence foundation model](https://www.microsoft.com/en-us/research/project/projectflorence/) is now integrated into Azure AI Vision. The improved Vision Services enable developers to create market-ready, responsible Azure AI Vision applications across various industries. Customers can now seamlessly digitize, analyze, and connect their data to natural language interactions, unlocking powerful insights from their image and video content to support accessibility, drive acquisition through SEO, protect users from harmful content, enhance security, and improve incident response times. For more information, see [Announcing Microsoft's Florence foundation model](https://aka.ms/florencemodel).
+
+### Image Analysis 4.0 SDK (public preview)
+
+Image Analysis 4.0 is now available through client library SDKs in C#, C++, and Python. This update also includes the Florence-powered image captioning and dense captioning at human parity performance.
+
+### Image Analysis V4.0 Captioning and Dense Captioning (public preview):
+
+"Caption" replaces "Describe" in V4.0 as the significantly improved image captioning feature rich with details and semantic understanding. Dense Captions provides more detail by generating one sentence descriptions of up to 10 regions of the image in addition to describing the whole image. Dense Captions also returns bounding box coordinates of the described image regions. There's also a new gender-neutral parameter to allow customers to choose whether to enable probabilistic gender inference for alt-text and Seeing AI applications. Automatically deliver rich captions, accessible alt-text, SEO optimization, and intelligent photo curation to support digital content. [Image captions](./concept-describe-images-40.md).
+
+### Video summary and frame locator (public preview)
+
+Search and interact with video content in the same intuitive way you think and write. Locate relevant content without the need for additional metadata. Available only in [Vision Studio](https://aka.ms/VisionStudio).
++
+### Image Analysis 4.0 model customization (public preview)
+
+You can now create and train your own [custom image classification and object detection models](./concept-model-customization.md), using Vision Studio or the v4.0 REST APIs.
+
+### Image Retrieval APIs (public preview)
+
+The [Image Retrieval APIs](./how-to/image-retrieval.md), part of the Image Analysis 4.0 API, enable the _vectorization_ of images and text queries. They let you convert images and text to coordinates in a multi-dimensional vector space. You can now search with natural language and find relevant images using vector similarity search.
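+
+As an illustrative sketch (the `retrieval:vectorizeText` route and `api-version` value are assumptions; check the Image Retrieval reference for the exact shape), a text vectorization call might look like the following, with the resource name and key as placeholders:
+
+```console
+# Hypothetical request that converts a text query into a vector
+curl -X POST "https://<your-resource-name>.cognitiveservices.azure.com/computervision/retrieval:vectorizeText?api-version=2023-02-01-preview" \
+  -H "Ocp-Apim-Subscription-Key: <your-key>" \
+  -H "Content-Type: application/json" \
+  -d '{"text": "a dog playing in the snow"}'
+```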
+
+### Background removal APIs (public preview)
+
+As part of the Image Analysis 4.0 API, the [Background removal API](./concept-background-removal.md) lets you remove the background of an image. This operation can either output an image of the detected foreground object with a transparent background, or a grayscale alpha matte image showing the opacity of the detected foreground object.
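+
+As a sketch only (the `imageanalysis:segment` route, `api-version` value, and `mode` parameter are assumptions; confirm them in the Background removal reference), a request that saves the foreground cut-out might look like this, with placeholders for the resource name, key, and image URL:
+
+```console
+# Hypothetical background removal request; the response body is the output image
+curl -X POST "https://<your-resource-name>.cognitiveservices.azure.com/computervision/imageanalysis:segment?api-version=2023-02-01-preview&mode=backgroundRemoval" \
+  -H "Ocp-Apim-Subscription-Key: <your-key>" \
+  -H "Content-Type: application/json" \
+  -d '{"url": "https://example.com/sample-image.jpg"}' \
+  --output foreground.png
+```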
+
+### Azure AI Vision 3.0 & 3.1 previews deprecation
+
+The preview versions of the Azure AI Vision 3.0 and 3.1 APIs are scheduled to be retired on September 30, 2023. Customers won't be able to make any calls to these APIs past this date. Customers are encouraged to migrate their workloads to the generally available (GA) 3.2 API instead. Mind the following changes when migrating from the preview versions to the 3.2 API:
+- The [Analyze Image](https://westus.dev.cognitive.microsoft.com/docs/services/computer-vision-v3-2/operations/56f91f2e778daf14a499f21b) and [Read](https://westus.dev.cognitive.microsoft.com/docs/services/computer-vision-v3-2/operations/5d986960601faab4bf452005) API calls take an optional _model-version_ parameter that you can use to specify which AI model to use. By default, they will use the latest model.
+- The [Analyze Image](https://westus.dev.cognitive.microsoft.com/docs/services/computer-vision-v3-2/operations/56f91f2e778daf14a499f21b) and [Read](https://westus.dev.cognitive.microsoft.com/docs/services/computer-vision-v3-2/operations/5d986960601faab4bf452005) API calls also return a `model-version` field in successful API responses. This field reports which model was used.
+- Azure AI Vision 3.2 API uses a different error-reporting format. See the [API reference documentation](https://westus.dev.cognitive.microsoft.com/docs/services/computer-vision-v3-2/operations/56f91f2e778daf14a499f21b) to learn how to adjust any error-handling code.
+
+## October 2022
+
+### Azure AI Vision Image Analysis 4.0 (public preview)
+
+Image Analysis 4.0 has been released in public preview. The new API includes image captioning, image tagging, object detection, smart crops, people detection, and Read OCR functionality, all available through one Analyze Image operation. The OCR is optimized for general, non-document images in a performance-enhanced synchronous API that makes it easier to embed OCR-powered experiences in your workflows.
+
+## September 2022
+
+### Azure AI Vision 3.0/3.1 Read previews deprecation
+
+The preview versions of the Azure AI Vision 3.0 and 3.1 Read API are scheduled to be retired on January 31, 2023. Customers are encouraged to refer to the [How-To](./how-to/call-read-api.md) and [QuickStarts](./quickstarts-sdk/client-library.md?tabs=visual-studio&pivots=programming-language-csharp) to get started with the generally available (GA) version of the Read API instead. The latest GA versions provide the following benefits:
+* 2022 latest generally available OCR model
+* Significant expansion of OCR language coverage including support for handwritten text
+* Significantly improved OCR quality
+
+## June 2022
+
+### Vision Studio launch
+
+Vision Studio is a UI tool that lets you explore, build, and integrate features from Azure AI Vision into your applications.
+
+Vision Studio provides you with a platform to try several service features, and see what they return in a visual manner. Using the Studio, you can get started without needing to write code, and then use the available client libraries and REST APIs in your application.
+
+### Responsible AI for Face
+
+#### Face transparency documentation
+* The [transparency documentation](https://aka.ms/faceraidocs) provides guidance to help our customers improve the accuracy and fairness of their systems by incorporating meaningful human review to detect and resolve cases of misidentification or other failures, by providing support to people who believe their results were incorrect, and by identifying and addressing fluctuations in accuracy due to variations in operational conditions.
+
+#### Retirement of sensitive attributes
+
+* We have retired facial analysis capabilities that purport to infer emotional states and identity attributes, such as gender, age, smile, facial hair, hair, and makeup.
+* Facial detection capabilities (including detecting blur, exposure, glasses, headpose, landmarks, noise, occlusion, and facial bounding box) remain generally available and don't require an application.
+
+#### Fairlearn package and Microsoft's Fairness Dashboard
+
+* [The open-source Fairlearn package and Microsoft's Fairness Dashboard](https://github.com/microsoft/responsible-ai-toolbox/tree/main/notebooks/cognitive-services-examples/face-verification) aim to help customers measure the fairness of Microsoft's facial verification algorithms on their own data, allowing them to identify and address potential fairness issues that could affect different demographic groups before they deploy their technology.
+
+#### Limited Access policy
+
+* As a part of aligning Face to the updated Responsible AI Standard, a new [Limited Access policy](https://aka.ms/AAh91ff) has been implemented for the Face API and Azure AI Vision. Existing customers have one year to apply and receive approval for continued access to the facial recognition services based on their provided use cases. See details on Limited Access for Face [here](/legal/cognitive-services/computer-vision/limited-access-identity?context=/azure/ai-services/computer-vision/context/context) and for Azure AI Vision [here](/legal/cognitive-services/computer-vision/limited-access?context=/azure/ai-services/computer-vision/context/context).
+
+### Azure AI Vision 3.2-preview deprecation
+
+The preview versions of the 3.2 API are scheduled to be retired in December of 2022. Customers are encouraged to use the generally available (GA) version of the API instead. Mind the following changes when migrating from the 3.2-preview versions:
+1. The [Analyze Image](https://westus.dev.cognitive.microsoft.com/docs/services/computer-vision-v3-2/operations/56f91f2e778daf14a499f21b) and [Read](https://westus.dev.cognitive.microsoft.com/docs/services/computer-vision-v3-2/operations/5d986960601faab4bf452005) API calls now take an optional _model-version_ parameter that you can use to specify which AI model to use. By default, they will use the latest model.
+1. The [Analyze Image](https://westus.dev.cognitive.microsoft.com/docs/services/computer-vision-v3-2/operations/56f91f2e778daf14a499f21b) and [Read](https://westus.dev.cognitive.microsoft.com/docs/services/computer-vision-v3-2/operations/5d986960601faab4bf452005) API calls also return a `model-version` field in successful API responses. This field reports which model was used.
+1. Image Analysis APIs now use a different error-reporting format. See the [API reference documentation](https://westus.dev.cognitive.microsoft.com/docs/services/computer-vision-v3-2/operations/56f91f2e778daf14a499f21b) to learn how to adjust any error-handling code.
+
+## May 2022
+
+### OCR (Read) API model is generally available (GA)
+
+Azure AI Vision's [OCR (Read) API](overview-ocr.md) latest model with [164 supported languages](language-support.md) is now generally available as a cloud service and container.
+
+* OCR support for print text expands to 164 languages including Russian, Arabic, Hindi and other languages using Cyrillic, Arabic, and Devanagari scripts.
+* OCR support for handwritten text expands to 9 languages with English, Chinese Simplified, French, German, Italian, Japanese, Korean, Portuguese, and Spanish.
+* Enhanced support for single characters, handwritten dates, amounts, names, and other entities commonly found in receipts and invoices.
+* Improved processing of digital PDF documents.
+* Input file size limit increased 10x to 500 MB.
+* Performance and latency improvements.
+* Available as [cloud service](overview-ocr.md) and [Docker container](computer-vision-how-to-install-containers.md).
+
+See the [OCR how-to guide](how-to/call-read-api.md#determine-how-to-process-the-data-optional) to learn how to use the GA model.
+
+> [!div class="nextstepaction"]
+> [Get Started with the Read API](./quickstarts-sdk/client-library.md)
+
+## February 2022
+
+### OCR (Read) API Public Preview supports 164 languages
+
+Azure AI Vision's [OCR (Read) API](overview-ocr.md) expands [supported languages](language-support.md) to 164 with its latest preview:
+
+* OCR support for print text expands to 42 new languages including Arabic, Hindi and other languages using Arabic and Devanagari scripts.
+* OCR support for handwritten text expands to Japanese and Korean in addition to English, Chinese Simplified, French, German, Italian, Portuguese, and Spanish.
+* Enhancements including better support for extracting handwritten dates, amounts, names, and single character boxes.
+* General performance and AI quality improvements
+
+See the [OCR how-to guide](how-to/call-read-api.md#determine-how-to-process-the-data-optional) to learn how to use the new preview features.
+
+> [!div class="nextstepaction"]
+> [Get Started with the Read API](./quickstarts-sdk/client-library.md)
+
+### New Quality Attribute in Detection_01 and Detection_03
+* To help system builders and their customers capture high-quality images, which are necessary for high-quality outputs from Face API, we're introducing a new quality attribute **QualityForRecognition** to help decide whether an image is of sufficient quality to attempt face recognition. The value is an informal rating of low, medium, or high. The new attribute is only available when using any combinations of detection models `detection_01` or `detection_03`, and recognition models `recognition_03` or `recognition_04`. Only "high" quality images are recommended for person enrollment and quality above "medium" is recommended for identification scenarios. To learn more about the new quality attribute, see [Face detection and attributes](concept-face-detection.md) and see how to use it with [QuickStart](./quickstarts-sdk/identity-client-library.md?pivots=programming-language-csharp&tabs=visual-studio).
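+
+As a sketch of how this attribute might be requested (the endpoint shape and query parameter names are assumptions based on the Face - Detect API and should be confirmed against the reference; the resource name, key, and image URL are placeholders):
+
+```console
+# Hypothetical detection call that asks for the qualityForRecognition attribute
+curl -X POST "https://<your-resource-name>.cognitiveservices.azure.com/face/v1.0/detect?detectionModel=detection_03&recognitionModel=recognition_04&returnFaceAttributes=qualityForRecognition" \
+  -H "Ocp-Apim-Subscription-Key: <your-key>" \
+  -H "Content-Type: application/json" \
+  -d '{"url": "https://example.com/sample-face.jpg"}'
+```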
+
+## September 2021
+
+### OCR (Read) API Public Preview supports 122 languages
+
+Azure AI Vision's [OCR (Read) API](overview-ocr.md) expands [supported languages](language-support.md) to 122 with its latest preview:
+
+* OCR support for print text in 49 new languages including Russian, Bulgarian, and other Cyrillic and more Latin languages.
+* OCR support for handwritten text in 6 new languages that include English, Chinese Simplified, French, German, Italian, Portuguese, and Spanish.
+* Enhancements for processing digital PDFs and Machine Readable Zone (MRZ) text in identity documents.
+* General performance and AI quality improvements
+
+See the [OCR how-to guide](how-to/call-read-api.md#determine-how-to-process-the-data-optional) to learn how to use the new preview features.
+
+> [!div class="nextstepaction"]
+> [Get Started with the Read API](./quickstarts-sdk/client-library.md)
+
+## August 2021
+
+### Image tagging language expansion
+
+The [latest version (v3.2)](https://westus.dev.cognitive.microsoft.com/docs/services/computer-vision-v3-2/operations/56f91f2e778daf14a499f200) of the Image tagger now supports tags in 50 languages. See the [language support](language-support.md) page for more information.
+
+## July 2021
+
+### New HeadPose and Landmarks improvements for Detection_03
+
+* The Detection_03 model has been updated to support facial landmarks.
+* The landmarks feature in Detection_03 is much more precise, especially in the eyeball landmarks which are crucial for gaze tracking.
+
+## May 2021
+
+### Spatial Analysis container update
+
+A new version of the [Spatial Analysis container](spatial-analysis-container.md) has been released with a new feature set. This Docker container lets you analyze real-time streaming video to understand spatial relationships between people and their movement through physical environments.
+
+* [Spatial Analysis operations](spatial-analysis-operations.md) can now be configured to detect the orientation that a person is facing.
+ * An orientation classifier can be enabled for the `personcrossingline` and `personcrossingpolygon` operations by configuring the `enable_orientation` parameter. It is set to off by default.
+
+* [Spatial Analysis operations](spatial-analysis-operations.md) now also offer configuration to detect a person's speed while walking or running.
+ * Speed can be detected for the `personcrossingline` and `personcrossingpolygon` operations by turning on the `enable_speed` classifier, which is off by default. The output is reflected in the `speed`, `avgSpeed`, and `minSpeed` outputs.
+
+## April 2021
+
+### Azure AI Vision v3.2 GA
+
+The Azure AI Vision API v3.2 is now generally available with the following updates:
+
+* Improved image tagging model: analyzes visual content and generates relevant tags based on objects, actions, and content displayed in the image. This model is available through the [Tag Image API](https://westus.dev.cognitive.microsoft.com/docs/services/computer-vision-v3-2/operations/56f91f2e778daf14a499f200). See the Image Analysis [how-to guide](./how-to/call-analyze-image.md) and [overview](./overview-image-analysis.md) to learn more.
+* Updated content moderation model: detects presence of adult content and provides flags to filter images containing adult, racy, and gory visual content. This model is available through the [Analyze API](https://westus.dev.cognitive.microsoft.com/docs/services/computer-vision-v3-2/operations/56f91f2e778daf14a499f21b). See the Image Analysis [how-to guide](./how-to/call-analyze-image.md) and [overview](./overview-image-analysis.md) to learn more.
+* [OCR (Read) available for 73 languages](./language-support.md#optical-character-recognition-ocr) including Simplified and Traditional Chinese, Japanese, Korean, and Latin languages.
+* [OCR (Read)](./overview-ocr.md) also available as a [Distroless container](./computer-vision-how-to-install-containers.md?tabs=version-3-2) for on-premises deployment.
+
+> [!div class="nextstepaction"]
+> [See Azure AI Vision v3.2 GA](https://westus.dev.cognitive.microsoft.com/docs/services/computer-vision-v3-2/operations/5d986960601faab4bf452005)
+
+### PersonDirectory data structure (preview)
+
+* In order to perform face recognition operations such as Identify and Find Similar, Face API customers need to create an assorted list of **Person** objects. The new **PersonDirectory** is a data structure that contains unique IDs, optional name strings, and optional user metadata strings for each **Person** identity added to the directory. Currently, the Face API offers the **LargePersonGroup** structure which has similar functionality but is limited to 1 million identities. The **PersonDirectory** structure can scale up to 75 million identities.
+* Another major difference between **PersonDirectory** and previous data structures is that you'll no longer need to make any Train calls after adding faces to a **Person** object&mdash;the update process happens automatically. For more details see [Use the PersonDirectory structure](how-to/use-persondirectory.md).
+
+## March 2021
+
+### Azure AI Vision 3.2 Public Preview update
+
+The Azure AI Vision API v3.2 public preview has been updated. The preview release has all Azure AI Vision features along with updated Read and Analyze APIs.
+
+> [!div class="nextstepaction"]
+> [See Azure AI Vision v3.2 public preview 3](https://westus.dev.cognitive.microsoft.com/docs/services/computer-vision-v3-2/operations/5d986960601faab4bf452005)
+
+## February 2021
+
+### Read API v3.2 Public Preview with OCR support for 73 languages
+
+The Azure AI Vision Read API v3.2 public preview, available as cloud service and Docker container, includes these updates:
+
+* [OCR for 73 languages](./language-support.md#optical-character-recognition-ocr) including Simplified and Traditional Chinese, Japanese, Korean, and Latin languages.
+* Natural reading order for the text line output (Latin languages only)
+* Handwriting style classification for text lines along with a confidence score (Latin languages only).
+* Extract text only for selected pages for a multi-page document.
+* Available as a [Distroless container](./computer-vision-how-to-install-containers.md?tabs=version-3-2) for on-premises deployment.
+
+See the [Read API how-to guide](how-to/call-read-api.md) to learn more.
+
+> [!div class="nextstepaction"]
+> [Use the Read API v3.2 Public Preview](https://westus.dev.cognitive.microsoft.com/docs/services/computer-vision-v3-2/operations/5d986960601faab4bf452005)
++
+### New Face API detection model
+* The new Detection 03 model is the most accurate detection model currently available. If you're a new customer, we recommend using this model. Detection 03 improves both recall and precision on smaller faces found within images (64x64 pixels). Additional improvements include an overall reduction in false positives and improved detection on rotated face orientations. Combining Detection 03 with the new Recognition 04 model will provide improved recognition accuracy as well. See [Specify a face detection model](./how-to/specify-detection-model.md) for more details.
+### New detectable Face attributes
+* The `faceMask` attribute is available with the latest Detection 03 model, along with the additional attribute `"noseAndMouthCovered"` which detects whether the face mask is worn as intended, covering both the nose and mouth. To use the latest mask detection capability, users need to specify the detection model in the API request: assign the model version with the _detectionModel_ parameter to `detection_03`. See [Specify a face detection model](./how-to/specify-detection-model.md) for more details.
+### New Face API Recognition Model
+* The new Recognition 04 model is the most accurate recognition model currently available. If you're a new customer, we recommend using this model for verification and identification. It improves upon the accuracy of Recognition 03, including improved recognition for users wearing face covers (surgical masks, N95 masks, cloth masks). Note that we recommend against enrolling images of users wearing face covers as this will lower recognition quality. Now customers can build safe and seamless user experiences that detect whether a user is wearing a face cover with the latest Detection 03 model, and recognize them with the latest Recognition 04 model. See [Specify a face recognition model](./how-to/specify-recognition-model.md) for more details.
+
+## January 2021
+
+### Spatial Analysis container update
+
+A new version of the [Spatial Analysis container](spatial-analysis-container.md) has been released with a new feature set. This Docker container lets you analyze real-time streaming video to understand spatial relationships between people and their movement through physical environments.
+
+* [Spatial Analysis operations](spatial-analysis-operations.md) can now be configured to detect if a person is wearing a protective face covering such as a mask.
+ * A mask classifier can be enabled for the `personcount`, `personcrossingline` and `personcrossingpolygon` operations by configuring the `ENABLE_FACE_MASK_CLASSIFIER` parameter.
+ * The attributes `face_mask` and `face_noMask` will be returned as metadata with confidence score for each person detected in the video stream
+* The *personcrossingpolygon* operation has been extended to allow the calculation of the dwell time a person spends in a zone. You can set the `type` parameter in the Zone configuration for the operation to `zonedwelltime` and a new event of type *personZoneDwellTimeEvent* will include the `durationMs` field populated with the number of milliseconds that the person spent in the zone.
+* **Breaking change**: The *personZoneEvent* event has been renamed to *personZoneEnterExitEvent*. This event is raised by the *personcrossingpolygon* operation when a person enters or exits the zone and provides directional info with the numbered side of the zone that was crossed.
+* The video URL can be provided as a "Private Parameter/obfuscated" value in all operations. Obfuscation is now optional and only works if `KEY` and `IV` are provided as environment variables.
+* Calibration is enabled by default for all operations. Set `do_calibration: false` to disable it.
+* Added support for auto recalibration (disabled by default) via the `enable_recalibration` parameter. Refer to [Spatial Analysis operations](./spatial-analysis-operations.md) for details.
+* Added camera calibration parameters to the `DETECTOR_NODE_CONFIG`. Refer to [Spatial Analysis operations](./spatial-analysis-operations.md) for details.
+
+### Mitigate latency
+* The Face team published a new article detailing potential causes of latency when using the service and possible mitigation strategies. See [Mitigate latency when using the Face service](./how-to/mitigate-latency.md).
+
+## December 2020
+### Customer configuration for Face ID storage
+* While the Face Service does not store customer images, the extracted face feature(s) will be stored on server. The Face ID is an identifier of the face feature and will be used in [Face - Identify](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f30395239), [Face - Verify](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f3039523a), and [Face - Find Similar](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f30395237). The stored face features will expire and be deleted 24 hours after the original detection call. Customers can now determine the length of time these Face IDs are cached. The maximum value is still up to 24 hours, but a minimum value of 60 seconds can now be set. The new time ranges for Face IDs being cached is any value between 60 seconds and 24 hours. More details can be found in the [Face - Detect](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f30395236) API reference (the *faceIdTimeToLive* parameter).
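+
+As an illustrative sketch (assuming the *faceIdTimeToLive* value is passed as a query parameter expressed in seconds; the resource name, key, and image URL are placeholders):
+
+```console
+# Hypothetical detection call that caches the returned face ID for 10 minutes instead of the default 24 hours
+curl -X POST "https://<your-resource-name>.cognitiveservices.azure.com/face/v1.0/detect?faceIdTimeToLive=600" \
+  -H "Ocp-Apim-Subscription-Key: <your-key>" \
+  -H "Content-Type: application/json" \
+  -d '{"url": "https://example.com/sample-face.jpg"}'
+```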
+
+## November 2020
+### Sample Face enrollment app
+* The team published a sample Face enrollment app to demonstrate best practices for establishing meaningful consent and creating high-accuracy face recognition systems through high-quality enrollments. The open-source sample can be found in the [Build an enrollment app](Tutorials/build-enrollment-app.md) guide and on [GitHub](https://github.com/Azure-Samples/cognitive-services-FaceAPIEnrollmentSample), ready for developers to deploy or customize.
+
+## October 2020
+
+### Azure AI Vision API v3.1 GA
+
+The Azure AI Vision API in General Availability has been upgraded to v3.1.
+
+## September 2020
+
+### Spatial Analysis container preview
+
+The [Spatial Analysis container](spatial-analysis-container.md) is now in preview. The Spatial Analysis feature of Azure AI Vision lets you analyze real-time streaming video to understand spatial relationships between people and their movement through physical environments. Spatial Analysis is a Docker container you can use on-premises.
+
+### Read API v3.1 Public Preview adds OCR for Japanese
+
+The Azure AI Vision Read API v3.1 public preview adds these capabilities:
+
+* OCR for Japanese language
+* For each text line, indicate whether the appearance is Handwriting or Print style, along with a confidence score (Latin languages only).
+* For a multi-page document extract text only for selected pages or page range.
+
+* This preview version of the Read API supports English, Dutch, French, German, Italian, Japanese, Portuguese, Simplified Chinese, and Spanish languages.
+
+See the [Read API how-to guide](how-to/call-read-api.md) to learn more.
+
+> [!div class="nextstepaction"]
+> [Learn more about Read API v3.1 Public Preview 2](https://westus2.dev.cognitive.microsoft.com/docs/services/computer-vision-v3-1-preview-2/operations/5d986960601faab4bf452005)
+
+## August 2020
+### Customer-managed encryption of data at rest
+* The Face service automatically encrypts your data when persisting it to the cloud. The Face service encryption protects your data to help you meet your organizational security and compliance commitments. By default, your subscription uses Microsoft-managed encryption keys. There is also a new option to manage your subscription with your own keys called customer-managed keys (CMK). More details can be found at [Customer-managed keys](./identity-encrypt-data-at-rest.md).
+
+## July 2020
+
+### Read API v3.1 Public Preview with OCR for Simplified Chinese
+
+The Azure AI Vision Read API v3.1 public preview adds support for Simplified Chinese.
+
+* This preview version of the Read API supports English, Dutch, French, German, Italian, Portuguese, Simplified Chinese, and Spanish languages.
+
+See the [Read API how-to guide](how-to/call-read-api.md) to learn more.
+
+> [!div class="nextstepaction"]
+> [Learn more about Read API v3.1 Public Preview 1](https://westus.dev.cognitive.microsoft.com/docs/services/computer-vision-v3-1-preview-1/operations/5d986960601faab4bf452005)
+
+## May 2020
+
+Azure AI Vision API v3.0 entered General Availability, with updates to the Read API:
+
+* Support for English, Dutch, French, German, Italian, Portuguese, and Spanish
+* Improved accuracy
+* Confidence score for each extracted word
+* New output format
+
+See the [OCR overview](overview-ocr.md) to learn more.
+
+## April 2020
+### New Face API Recognition Model
+* The new recognition 03 model is the most accurate model currently available. If you're a new customer, we recommend using this model. Recognition 03 will provide improved accuracy for both similarity comparisons and person-matching comparisons. More details can be found at [Specify a face recognition model](./how-to/specify-recognition-model.md).
+
+## March 2020
+
+* TLS 1.2 is now enforced for all HTTP requests to this service. For more information, see [Azure AI services security](../security-features.md).
+
+## January 2020
+
+### Read API 3.0 Public Preview
+
+You now can use version 3.0 of the Read API to extract printed or handwritten text from images. Compared to earlier versions, 3.0 provides:
+
+* Improved accuracy
+* New output format
+* Confidence score for each extracted word
+* Support for both Spanish and English languages with the language parameter
+
+Follow an [Extract text quickstart](https://github.com/Azure-Samples/cognitive-services-quickstart-code/blob/master/dotnet/ComputerVision/REST/CSharp-hand-text.md?tabs=version-3) to get started using the 3.0 API.
++
+## June 2019
+
+### New Face API detection model
+* The new Detection 02 model features improved accuracy on small, side-view, occluded, and blurry faces. Use it through [Face - Detect](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f30395236), [FaceList - Add Face](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f30395250), [LargeFaceList - Add Face](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/5a158c10d2de3616c086f2d3), [PersonGroup Person - Add Face](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f3039523b) and [LargePersonGroup Person - Add Face](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/599adf2a3a7b9412a4d53f42) by specifying the new face detection model name `detection_02` in `detectionModel` parameter. More details in [How to specify a detection model](how-to/specify-detection-model.md).
+
+## April 2019
+
+### Improved attribute accuracy
+* Improved overall accuracy of the `age` and `headPose` attributes. The `headPose` attribute is also updated with the `pitch` value enabled now. Use these attributes by specifying them in the `returnFaceAttributes` parameter of [Face - Detect](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f30395236).
+### Improved processing speeds
+* Improved speeds of [Face - Detect](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f30395236), [FaceList - Add Face](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f30395250), [LargeFaceList - Add Face](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/5a158c10d2de3616c086f2d3), [PersonGroup Person - Add Face](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f3039523b) and [LargePersonGroup Person - Add Face](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/599adf2a3a7b9412a4d53f42) operations.
+
+## March 2019
+
+### New Face API recognition model
+* The Recognition 02 model has improved accuracy. Use it through [Face - Detect](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f30395236), [FaceList - Create](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f3039524b), [LargeFaceList - Create](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/5a157b68d2de3616c086f2cc), [PersonGroup - Create](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f30395244) and [LargePersonGroup - Create](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/599acdee6ac60f11b48b5a9d) by specifying the new face recognition model name `recognition_02` in `recognitionModel` parameter. More details in [How to specify a recognition model](how-to/specify-recognition-model.md).
+
+## January 2019
+
+### Face Snapshot feature
+* This feature allows the service to support data migration across subscriptions: [Snapshot](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/snapshot-get). More details in [How to Migrate your face data to a different Face subscription](how-to/migrate-face-data.md).
+
+## October 2018
+
+### API messages
+* Refined description for `status`, `createdDateTime`, `lastActionDateTime`, and `lastSuccessfulTrainingDateTime` in [PersonGroup - Get Training Status](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f30395247), [LargePersonGroup - Get Training Status](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/599ae32c6ac60f11b48b5aa5), and [LargeFaceList - Get Training Status](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/5a1582f8d2de3616c086f2cf).
+
+## May 2018
+
+### Improved attribute accuracy
+* Significantly improved the `gender` attribute and also improved the `age`, `glasses`, `facialHair`, `hair`, and `makeup` attributes. Use them through the `returnFaceAttributes` parameter of [Face - Detect](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f30395236).
+### Increased file size limit
+* Increased input image file size limit from 4 MB to 6 MB in [Face - Detect](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f30395236), [FaceList - Add Face](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f30395250), [LargeFaceList - Add Face](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/5a158c10d2de3616c086f2d3), [PersonGroup Person - Add Face](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f3039523b) and [LargePersonGroup Person - Add Face](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/599adf2a3a7b9412a4d53f42).
+
+## March 2018
+
+### New data structure
+* [LargeFaceList](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/5a157b68d2de3616c086f2cc) and [LargePersonGroup](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/599acdee6ac60f11b48b5a9d). More details in [How to use the large-scale feature](how-to/use-large-scale.md).
+* Increased [Face - Identify](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f30395239) `maxNumOfCandidatesReturned` parameter from [1, 5] to [1, 100] and default to 10.
+
+## May 2017
+
+### New detectable Face attributes
+* Added `hair`, `makeup`, `accessory`, `occlusion`, `blur`, `exposure`, and `noise` attributes in [Face - Detect](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f30395236) `returnFaceAttributes` parameter.
+* Supported 10K persons in a PersonGroup and [Face - Identify](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f30395239).
+* Supported pagination in [PersonGroup Person - List](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f30395241) with optional parameters: `start` and `top`.
+* Supported concurrency in adding/deleting faces against different FaceLists and different persons in PersonGroup.
+
+## March 2017
+
+### New detectable Face attribute
+* Added `emotion` attribute in [Face - Detect](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f30395236) `returnFaceAttributes` parameter.
+### Fixed issues
+* Face could not be re-detected with rectangle returned from [Face - Detect](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f30395236) as `targetFace` in [FaceList - Add Face](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f30395250) and [PersonGroup Person - Add Face](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f3039523b).
+* The detectable face size is set to ensure it is strictly between 36x36 and 4096x4096 pixels.
+
+## November 2016
+### New subscription tier
+* Added Face Storage Standard subscription to store additional persisted faces when using [PersonGroup Person - Add Face](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f3039523b) or [FaceList - Add Face](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f30395250) for identification or similarity matching. The stored images are charged at $0.5 per 1000 faces and this rate is prorated on a daily basis. Free tier subscriptions continue to be limited to 1,000 total persons.
+
+## October 2016
+### API messages
+* Changed the error message of more than one face in the `targetFace` from 'There are more than one face in the image' to 'There is more than one face in the image' in [FaceList - Add Face](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f30395250) and [PersonGroup Person - Add Face](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f3039523b).
+
+## July 2016
+### New features
+* Supported Face to Person object authentication in [Face - Verify](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f3039523a).
+* Added optional `mode` parameter enabling selection of two working modes, `matchPerson` and `matchFace`, in [Face - Find Similar](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f30395237); the default is `matchPerson`.
+* Added optional `confidenceThreshold` parameter for the user to set the threshold of whether one face belongs to a Person object in [Face - Identify](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f30395239).
+* Added optional `start` and `top` parameters in [PersonGroup - List](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f30395248) to enable the user to specify the starting point and the total number of PersonGroups to list.
+
+## V1.0 changes from V0
+
+* Updated service root endpoint from ```https://westus.api.cognitive.microsoft.com/face/v0/``` to ```https://westus.api.cognitive.microsoft.com/face/v1.0/```. Changes applied to:
+ [Face - Detect](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f30395236), [Face - Identify](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f30395239), [Face - Find Similar](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f30395237) and [Face - Group](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f30395238).
+* Updated the minimal detectable face size to 36x36 pixels. Faces smaller than 36x36 pixels will not be detected.
+* Deprecated the PersonGroup and Person data in Face V0. Those data cannot be accessed with the Face V1.0 service.
+* Deprecated the V0 endpoint of Face API on June 30, 2016.
++
+## Azure AI services updates
+
+[Azure update announcements for Azure AI services](https://azure.microsoft.com/updates/?product=cognitive-services)
ai-services Azure Container Instance Recipe https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/containers/azure-container-instance-recipe.md
+
+ Title: Azure Container Instance recipe
+
+description: Learn how to deploy Azure AI services Containers on Azure Container Instance
++++++ Last updated : 12/18/2020+
+#Customer intent: As a potential customer, I want to know more about how Azure AI services provides and supports Docker containers for each service.
+
++
+# Deploy and run container on Azure Container Instance
+
+With the following steps, scale Azure AI services applications in the cloud easily with Azure [Container Instances](../../container-instances/index.yml). Containerization helps you focus on building your applications instead of managing the infrastructure. For more information on using containers, see [features and benefits](../cognitive-services-container-support.md#features-and-benefits).
+
+## Prerequisites
+
+The recipe works with any Azure AI services container. The Azure AI services resource must be created before using the recipe. Each Azure AI service that supports containers has a "How to install" article for installing and configuring the service for a container. Some services require a file or set of files as input for the container; it's important that you understand and have used the container successfully before using this solution.
+
+* An Azure resource for the Azure AI service you're using.
+* Azure AI service resource **endpoint URL** - review your specific service's "How to install" article for the container to find where to get the endpoint URL within the Azure portal and what a correct example of the URL looks like. The exact format can change from service to service.
+* Azure AI service resource **key** - the keys are on the **Keys** page for the Azure resource. You only need one of the two keys. The key is a string of 32 alpha-numeric characters.
+
+* A single Azure AI services Container on your local host (your computer). Make sure you can:
+ * Pull down the image with a `docker pull` command.
+ * Run the local container successfully with all required configuration settings with a `docker run` command.
+ * Call the container's endpoint, getting a response of HTTP 2xx and a JSON response back.
+
+All variables in angle brackets, `<>`, need to be replaced with your own values. This replacement includes the angle brackets.
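+
+As a minimal sketch of that local validation pass (the image name, endpoint URL, key, and status path are placeholders or assumptions; use the exact values and commands from your service's "How to install" article):
+
+```console
+# Pull the container image for your service
+docker pull <container-image>
+
+# Run it locally, accepting the EULA and passing your billing endpoint and key
+docker run --rm -it -p 5000:5000 <container-image> Eula=accept Billing=<endpoint-url> ApiKey=<api-key>
+
+# From another console, confirm the container responds (the exact path varies by container)
+curl -i http://localhost:5000/status
+```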
+
+> [!IMPORTANT]
+> The LUIS container requires a `.gz` model file that is pulled in at runtime. The container must be able to access this model file via a volume mount from the container instance. To upload a model file, follow these steps:
+> 1. [Create an Azure file share](../../storage/files/storage-how-to-create-file-share.md). Take note of the Azure Storage account name, key, and file share name as you'll need them later.
+> 2. [Export your LUIS model (packaged app) from the LUIS portal](../LUIS/luis-container-howto.md#export-packaged-app-from-luis).
+> 3. In the Azure portal, navigate to the **Overview** page of your storage account resource, and select **File shares**.
+> 4. Select the file share name that you recently created, then select **Upload**. Then upload your packaged app.
+
+# [Azure portal](#tab/portal)
++
+# [CLI](#tab/cli)
+++++
+## Use the Container Instance
+
+# [Azure portal](#tab/portal)
+
+1. Select **Overview** and copy the IP address. It will be a numeric IP address such as `55.55.55.55`.
+1. Open a new browser tab and go to the IP address, for example, `http://<IP-address>:5000` (`http://55.55.55.55:5000`). You will see the container's home page, letting you know the container is running.
+
+ ![Container's home page](../../../includes/media/cognitive-services-containers-api-documentation/container-webpage.png)
+
+1. Select **Service API Description** to view the swagger page for the container.
+
+1. Select any of the **POST** APIs and select **Try it out**. The parameters are displayed including the input. Fill in the parameters.
+
+1. Select **Execute** to send the request to your Container Instance.
+
+ You have successfully created and used Azure AI services containers in Azure Container Instance.
+
+# [CLI](#tab/cli)
++
+> [!NOTE]
+> If you're running the Text Analytics for health container, use the following URL to submit queries: `http://localhost:5000/text/analytics/v3.2-preview.1/entities/health`
++
ai-services Azure Kubernetes Recipe https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/containers/azure-kubernetes-recipe.md
+
+ Title: Run Language Detection container in Kubernetes Service
+
+description: Deploy the language detection container, with a running sample, to the Azure Kubernetes Service, and test it in a web browser.
++++++ Last updated : 01/10/2022++
+ms.devlang: azurecli
++
+# Deploy a language detection container to Azure Kubernetes Service
+
+Learn how to deploy the language detection container. This procedure shows you how to create the local Docker containers, push the containers to your own private container registry, run the container in a Kubernetes cluster, and test it in a web browser.
+
+## Prerequisites
+
+This procedure requires several tools that must be installed and run locally. Do not use Azure Cloud Shell.
+
+* Use an Azure subscription. If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/cognitive-services) before you begin.
+* [Git](https://git-scm.com/downloads) for your operating system so you can clone the [sample](https://github.com/Azure-Samples/cognitive-services-containers-samples) used in this procedure.
+* [Azure CLI](/cli/azure/install-azure-cli).
+* [Docker engine](https://www.docker.com/products/docker-engine) and validate that the Docker CLI works in a console window.
+* [kubectl](https://storage.googleapis.com/kubernetes-release/release/v1.13.1/bin/windows/amd64/kubectl.exe).
+* An Azure resource with the correct pricing tier. Not all pricing tiers work with this container:
+ * **Language** resource with F0 or Standard pricing tiers only.
+ * **Azure AI services** resource with the S0 pricing tier.
+
+## Running the sample
+
+This procedure loads and runs the Azure AI services Container sample for language detection. The sample has two containers, one for the client application and one for the Azure AI services container. We'll push both of these images to the Azure Container Registry. Once they are on your own registry, create an Azure Kubernetes Service to access these images and run the containers. When the containers are running, use the **kubectl** CLI to watch the containers' performance. Access the client application with an HTTP request and see the results.
+
+![A diagram showing the conceptual idea of running a container on Kubernetes](media/container-instance-sample.png)
+
+## The sample containers
+
+The sample has two container images: one for the frontend website, and a second for the language detection container, which returns the detected language (culture) of text. Both containers are accessible from an external IP address when you're done.
+
+### The language-frontend container
+
+This website is equivalent to your own client-side application that makes requests of the language detection endpoint. When the procedure is finished, you get the detected language of a string of characters by accessing the website container in a browser with `http://<external-IP>/<text-to-analyze>`. An example of this URL is `http://132.12.23.255/helloworld!`. The result in the browser is `English`.
+
+### The language container
+
+The language detection container, in this specific procedure, is accessible to any external request. The container hasn't been changed in any way so the standard Azure AI services container-specific language detection API is available.
+
+For this container, that API is a POST request for language detection. As with all Azure AI services containers, you can learn more about the container from its hosted Swagger information, `http://<external-IP>:5000/swagger/https://docsupdatetracker.net/index.html`.
+
+Port 5000 is the default port used with the Azure AI services containers.
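+
+As an illustration, once the container is deployed and you have its external IP address (from the later sections), you could call the language detection API directly with a request like the following. The API version segment in the path is an assumption and can vary by container version, so confirm it on the container's Swagger page.
+
+```console
+curl -X POST "http://<external-IP>:5000/text/analytics/v3.0/languages" \
+  -H "Content-Type: application/json" \
+  -d '{"documents":[{"id":"1","text":"Hello world"}]}'
+```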
+
+## Create Azure Container Registry service
+
+To deploy the container to the Azure Kubernetes Service, the container images need to be accessible. Create your own Azure Container Registry service to host the images.
+
+1. Sign in to the Azure CLI
+
+ ```azurecli-interactive
+ az login
+ ```
+
+1. Create a resource group named `cogserv-container-rg` to hold every resource created in this procedure.
+
+ ```azurecli-interactive
+ az group create --name cogserv-container-rg --location westus
+ ```
+
+1. Create your own Azure Container Registry with the format of your name then `registry`, such as `pattyregistry`. Do not use dashes or underscore characters in the name.
+
+ ```azurecli-interactive
+ az acr create --resource-group cogserv-container-rg --name pattyregistry --sku Basic
+ ```
+
+ Save the results to get the **loginServer** property. This will be part of the hosted container's address, used later in the `language.yml` file.
+
+ ```output
+ {
+ "adminUserEnabled": false,
+ "creationDate": "2019-01-02T23:49:53.783549+00:00",
+ "id": "/subscriptions/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx/resourceGroups/cogserv-container-rg/providers/Microsoft.ContainerRegistry/registries/pattyregistry",
+ "location": "westus",
+ "loginServer": "pattyregistry.azurecr.io",
+ "name": "pattyregistry",
+ "provisioningState": "Succeeded",
+ "resourceGroup": "cogserv-container-rg",
+ "sku": {
+ "name": "Basic",
+ "tier": "Basic"
+ },
+ "status": null,
+ "storageAccount": null,
+ "tags": {},
+ "type": "Microsoft.ContainerRegistry/registries"
+ }
+ ```
+
+1. Sign in to your container registry. You need to sign in before you can push images to your registry.
+
+ ```azurecli-interactive
+ az acr login --name pattyregistry
+ ```
+
+## Get website Docker image
+
+1. The sample code used in this procedure is in the Azure AI services containers samples repository. Clone the repository to have a local copy of the sample.
+
+ ```console
+ git clone https://github.com/Azure-Samples/cognitive-services-containers-samples
+ ```
+
+ Once the repository is on your local computer, find the website in the [\dotnet\Language\FrontendService](https://github.com/Azure-Samples/cognitive-services-containers-samples/tree/master/dotnet/Language/FrontendService) directory. This website acts as the client application calling the language detection API hosted in the language detection container.
+
+1. Build the Docker image for this website. Make sure the console is in the [\FrontendService](https://github.com/Azure-Samples/cognitive-services-containers-samples/tree/master/dotnet/Language/FrontendService) directory where the Dockerfile is located when you run the following command:
+
+ ```console
+    docker build -t language-frontend -t pattyregistry.azurecr.io/language-frontend:v1 .
+ ```
+
+ To track the version on your container registry, add the tag with a version format, such as `v1`.
+
+1. Push the image to your container registry. This may take a few minutes.
+
+ ```console
+ docker push pattyregistry.azurecr.io/language-frontend:v1
+ ```
+
+ If you get an `unauthorized: authentication required` error, login with the `az acr login --name <your-container-registry-name>` command.
+
+ When the process is done, the results should be similar to:
+
+ ```output
+ The push refers to repository [pattyregistry.azurecr.io/language-frontend]
+ 82ff52ee6c73: Pushed
+ 07599c047227: Pushed
+ 816caf41a9a1: Pushed
+ 2924be3aed17: Pushed
+ 45b83a23806f: Pushed
+ ef68f6734aa4: Pushed
+ v1: digest: sha256:31930445deee181605c0cde53dab5a104528dc1ff57e5b3b34324f0d8a0eb286 size: 1580
+ ```
+
+## Get language detection Docker image
+
+1. Pull the latest version of the Docker image to the local machine. This may take a few minutes. If there is a newer version of this container, change the value from `1.1.006770001-amd64-preview` to the newer version.
+
+ ```console
+ docker pull mcr.microsoft.com/azure-cognitive-services/language:1.1.006770001-amd64-preview
+ ```
+
+1. Tag image with your container registry. Find the latest version and replace the version `1.1.006770001-amd64-preview` if you have a more recent version.
+
+ ```console
+    docker tag mcr.microsoft.com/azure-cognitive-services/language:1.1.006770001-amd64-preview pattyregistry.azurecr.io/language:1.1.006770001-amd64-preview
+ ```
+
+1. Push the image to your container registry. This may take a few minutes.
+
+ ```console
+ docker push pattyregistry.azurecr.io/language:1.1.006770001-amd64-preview
+ ```
+
+## Get Container Registry credentials
+
+The following steps are needed to get the required information to connect your container registry with the Azure Kubernetes Service you create later in this procedure.
+
+1. Create service principal.
+
+ ```azurecli-interactive
+ az ad sp create-for-rbac
+ ```
+
+    Save the `appId` value from the results for the assignee parameter in step 3, `<appId>`. Save the `password` for the next section's client-secret parameter, `<client-secret>`.
+
+ ```output
+ {
+ "appId": "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx",
+ "displayName": "azure-cli-2018-12-31-18-39-32",
+ "name": "http://azure-cli-2018-12-31-18-39-32",
+ "password": "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx",
+ "tenant": "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx"
+ }
+ ```
+
+1. Get your container registry ID.
+
+ ```azurecli-interactive
+    az acr show --resource-group cogserv-container-rg --name pattyregistry --query "id" --output table
+ ```
+
+ Save the output for the scope parameter value, `<acrId>`, in the next step. It looks like:
+
+ ```output
+ /subscriptions/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx/resourceGroups/cogserv-container-rg/providers/Microsoft.ContainerRegistry/registries/pattyregistry
+ ```
+
+ Save the full value for step 3 in this section.
+
+1. To grant the correct access for the AKS cluster to use images stored in your container registry, create a role assignment. Replace `<appId>` and `<acrId>` with the values gathered in the previous two steps.
+
+ ```azurecli-interactive
+ az role assignment create --assignee <appId> --scope <acrId> --role Reader
+ ```
+
+## Create Azure Kubernetes Service
+
+1. Create the Kubernetes cluster. All the parameter values are from previous sections except the name parameter. Choose a name that indicates who created it and its purpose, such as `patty-kube`.
+
+ ```azurecli-interactive
+ az aks create --resource-group cogserv-container-rg --name patty-kube --node-count 2 --service-principal <appId> --client-secret <client-secret> --generate-ssh-keys
+ ```
+
+ This step may take a few minutes. The result is:
+
+ ```output
+ {
+ "aadProfile": null,
+ "addonProfiles": null,
+ "agentPoolProfiles": [
+ {
+ "count": 2,
+ "dnsPrefix": null,
+ "fqdn": null,
+ "maxPods": 110,
+ "name": "nodepool1",
+ "osDiskSizeGb": 30,
+ "osType": "Linux",
+ "ports": null,
+ "storageProfile": "ManagedDisks",
+ "vmSize": "Standard_DS1_v2",
+ "vnetSubnetId": null
+ }
+ ],
+ "dnsPrefix": "patty-kube--65a101",
+ "enableRbac": true,
+ "fqdn": "patty-kube--65a101-341f1f54.hcp.westus.azmk8s.io",
+ "id": "/subscriptions/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx/resourcegroups/cogserv-container-rg/providers/Microsoft.ContainerService/managedClusters/patty-kube",
+ "kubernetesVersion": "1.9.11",
+ "linuxProfile": {
+ "adminUsername": "azureuser",
+ "ssh": {
+ "publicKeys": [
+ {
+ "keyData": "ssh-rsa AAAAB3NzaC...ohR2d81mFC
+ }
+ ]
+ }
+ },
+ "location": "westus",
+ "name": "patty-kube",
+ "networkProfile": {
+ "dnsServiceIp": "10.0.0.10",
+ "dockerBridgeCidr": "172.17.0.1/16",
+ "networkPlugin": "kubenet",
+ "networkPolicy": null,
+ "podCidr": "10.244.0.0/16",
+ "serviceCidr": "10.0.0.0/16"
+ },
+ "nodeResourceGroup": "MC_patty_westus",
+ "provisioningState": "Succeeded",
+ "resourceGroup": "cogserv-container-rg",
+ "servicePrincipalProfile": {
+ "clientId": "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx",
+ "keyVaultSecretRef": null,
+ "secret": null
+ },
+ "tags": null,
+ "type": "Microsoft.ContainerService/ManagedClusters"
+ }
+ ```
+
+ The service is created but it doesn't have the website container or language detection container yet.
+
+1. Get credentials of the Kubernetes cluster.
+
+ ```azurecli-interactive
+ az aks get-credentials --resource-group cogserv-container-rg --name patty-kube
+ ```
+
+## Load the orchestration definition into your Kubernetes service
+
+This section uses the **kubectl** CLI to talk with the Azure Kubernetes Service.
+
+1. Before loading the orchestration definition, check that **kubectl** has access to the nodes.
+
+ ```console
+ kubectl get nodes
+ ```
+
+ The response looks like:
+
+ ```output
+ NAME STATUS ROLES AGE VERSION
+ aks-nodepool1-13756812-0 Ready agent 6m v1.9.11
+ aks-nodepool1-13756812-1 Ready agent 6m v1.9.11
+ ```
+
+1. Copy the following file and name it `language.yml`. The file has a `service` section and a `deployment` section for each of the two container types: the `language-frontend` website container and the `language` detection container.
+
+ [!code-yml[Kubernetes orchestration file for the Azure AI services containers sample](~/samples-cogserv-containers/Kubernetes/language/language.yml "Kubernetes orchestration file for the Azure AI services containers sample")]
+
+1. Change the language-frontend deployment lines of `language.yml` based on the following table to add your own container registry image name and client secret.
+
+    |Language-frontend deployment settings|Purpose|
+    |--|--|
+ |Line 32<br> `image` property|Image location for the frontend image in your Container Registry<br>`<container-registry-name>.azurecr.io/language-frontend:v1`|
+ |Line 44<br> `name` property|Container Registry secret for the image, referred to as `<client-secret>` in a previous section.|
+
+1. Change the language deployment lines of `language.yml` based on the following table to add your own container registry image names, client secret, and Language service settings.
+
+ |Language deployment settings|Purpose|
+ |--|--|
+ |Line 78<br> `image` property|Image location for the language image in your Container Registry<br>`<container-registry-name>.azurecr.io/language:1.1.006770001-amd64-preview`|
+ |Line 95<br> `name` property|Container Registry secret for the image, referred to as `<client-secret>` in a previous section.|
+ |Line 91<br> `apiKey` property|Your Language service resource key|
+ |Line 92<br> `billing` property|The billing endpoint for your Language service resource.<br>`https://westus.api.cognitive.microsoft.com/text/analytics/v2.1`|
+
+ Because the **apiKey** and **billing endpoint** are set as part of the Kubernetes orchestration definition, the website container doesn't need to know about these or pass them as part of the request. The website container refers to the language detection container by its orchestrator name `language`.
+
+1. Load the orchestration definition file for this sample from the folder where you created and saved the `language.yml`.
+
+ ```console
+ kubectl apply -f language.yml
+ ```
+
+ The response is:
+
+ ```output
+ service "language-frontend" created
+ deployment.apps "language-frontend" created
+ service "language" created
+ deployment.apps "language" created
+ ```
+
+## Get external IPs of containers
+
+For the two containers, verify the `language-frontend` and `language` services are running and get the external IP address.
+
+```console
+kubectl get all
+```
+
+```output
+NAME READY STATUS RESTARTS AGE
+pod/language-586849d8dc-7zvz5 1/1 Running 0 13h
+pod/language-frontend-68b9969969-bz9bg 1/1 Running 1 13h
+
+NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
+service/kubernetes ClusterIP 10.0.0.1 <none> 443/TCP 14h
+service/language LoadBalancer 10.0.39.169 104.42.172.68 5000:30161/TCP 13h
+service/language-frontend LoadBalancer 10.0.42.136 104.42.37.219 80:30943/TCP 13h
+
+NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE
+deployment.extensions/language 1 1 1 1 13h
+deployment.extensions/language-frontend 1 1 1 1 13h
+
+NAME DESIRED CURRENT READY AGE
+replicaset.extensions/language-586849d8dc 1 1 1 13h
+replicaset.extensions/language-frontend-68b9969969 1 1 1 13h
+
+NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE
+deployment.apps/language 1 1 1 1 13h
+deployment.apps/language-frontend 1 1 1 1 13h
+
+NAME DESIRED CURRENT READY AGE
+replicaset.apps/language-586849d8dc 1 1 1 13h
+replicaset.apps/language-frontend-68b9969969 1 1 1 13h
+```
+
+If the `EXTERNAL-IP` for a service is shown as `<pending>`, rerun the command until the IP address is shown before you move to the next step.
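+
+One way to avoid rerunning the command manually is to watch the services until the external IP addresses are assigned:
+
+```console
+kubectl get services --watch
+```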
+
+## Test the language detection container
+
+Open a browser and navigate to the external IP of the `language` container from the previous section: `http://<external-ip>:5000/swagger/index.html`. You can use the `Try it` feature of the API to test the language detection endpoint.
+
+![A screenshot showing the container's swagger documentation](./media/language-detection-container-swagger-documentation.png)
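+
+As a command-line alternative (a sketch, assuming the container exposes the Text Analytics v2.1 `languages` route shown on its Swagger page), you can post a document directly to the endpoint. Replace `<external-ip>` with the IP address from the previous section.
+
+```console
+curl -X POST "http://<external-ip>:5000/text/analytics/v2.1/languages" -H "Content-Type: application/json" -d "{\"documents\":[{\"id\":\"1\",\"text\":\"hello world\"}]}"
+```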
+
+## Test the client application container
+
+Change the URL in the browser to the external IP of the `language-frontend` container using the following format: `http://<external-ip>/helloworld`. The English text `helloworld` is predicted as `English`.
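+
+The same check works from the command line. This sketch assumes the frontend accepts any text value in the path, as the `helloworld` example suggests:
+
+```console
+curl "http://<external-ip>/helloworld"
+```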
+
+## Clean up resources
+
+When you are done with the cluster, delete the Azure resource group.
+
+```azurecli-interactive
+az group delete --name cogserv-container-rg
+```
+
+## Related information
+
+* [kubectl for Docker Users](https://kubernetes.io/docs/reference/kubectl/docker-cli-to-kubectl/)
+
+## Next steps
+
+[Azure AI services Containers](../cognitive-services-container-support.md)
ai-services Container Reuse Recipe https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/containers/container-reuse-recipe.md
+
+ Title: Recipes for Docker containers
+
+description: Learn how to build, test, and store containers with some or all of your configuration settings for deployment and reuse.
++++++ Last updated : 10/28/2021+
+#Customer intent: As a potential customer, I want to know how to configure containers so I can reuse them.
++
+# Create containers for reuse
+
+Use these container recipes to create Azure AI services containers that can be reused. Containers can be built with some or all configuration settings baked in so that those settings are _not_ needed when the container is started.
+
+Once you have this new container layer (with settings), and you have tested it locally, you can store the container in a container registry. When the container starts, it only needs the settings that are not already stored in the container. The private registry container provides configuration space for you to pass those settings in.
+
+## Docker run syntax
+
+Any `docker run` examples in this document assume a Windows console with a `^` line continuation character. Consider the following for your own use:
+
+* Do not change the order of the arguments unless you are very familiar with docker containers.
+* If you are using an operating system other than Windows, or a console other than Windows console, use the correct console/terminal, folder syntax for mounts, and line continuation character for your console and system. Because the Azure AI services container is a Linux operating system, the target mount uses a Linux-style folder syntax.
+* `docker run` examples use the directory off the `c:` drive to avoid any permission conflicts on Windows. If you need to use a specific directory as the input directory, you may need to grant the docker service permission.
+
+## Store no configuration settings in image
+
+The example `docker run` commands for each service do not store any configuration settings in the container. When you start the container from a console or registry service, those configuration settings need to be passed in. The private registry container provides configuration space for you to pass those settings in.
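+
+For reference, a startup command that passes the settings at run time typically looks like the following sketch, shown here for the sentiment container used later in this article. The `{BILLING_ENDPOINT}` and `{ENDPOINT_KEY}` placeholders are values you supply.
+
+```console
+docker run --rm -it -p 5000:5000 ^
+mcr.microsoft.com/azure-cognitive-services/sentiment:latest ^
+Eula=accept ^
+Billing={BILLING_ENDPOINT} ^
+ApiKey={ENDPOINT_KEY}
+```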
+
+## Reuse recipe: store all configuration settings with container
+
+In order to store all configuration settings, create a `Dockerfile` with those settings.
+
+Issues with this approach:
+
+* The new container has a separate name and tag from the original container.
+* In order to change these settings, you will have to change the values of the Dockerfile, rebuild the image, and republish to your registry.
+* If someone gets access to your container registry or your local host, they can run the container and use the Azure AI services endpoints.
+* If the Azure AI service that you're using doesn't require input mounts, don't add the `COPY` lines to your Dockerfile.
+
+Create a Dockerfile that pulls from the existing Azure AI services container you want to use, then use docker commands in the Dockerfile to set or pull in the information the container needs.
+
+This example:
+
+* Sets the billing endpoint, `{BILLING_ENDPOINT}`, from the host's environment key using `ENV`.
+* Sets the billing API key, `{ENDPOINT_KEY}`, from the host's environment key using `ENV`.
+
+### Reuse recipe: store billing settings with container
+
+This example shows how to build the Language service's sentiment container from a Dockerfile.
+
+```Dockerfile
+FROM mcr.microsoft.com/azure-cognitive-services/sentiment:latest
+ENV billing={BILLING_ENDPOINT}
+ENV apikey={ENDPOINT_KEY}
+ENV EULA=accept
+```
+
+Build and run the container [locally](#how-to-use-container-on-your-local-host) or from your [private registry container](#how-to-add-container-to-private-registry) as needed.
+
+### Reuse recipe: store billing and mount settings with container
+
+This example shows how to use Language Understanding, saving billing settings and models in the Dockerfile.
+
+* Copies the Language Understanding (LUIS) model file from the host's file system using `COPY`.
+* The LUIS container supports more than one model. If all models are stored in the same folder, you only need one `COPY` statement.
+* Run the Docker commands from the relative parent of the model input directory. For the following example, run the `docker build` and `docker run` commands from the relative parent of `/input`. The first `/input` on the `COPY` command is the host computer's directory. The second `/input` is the container's directory.
+
+```Dockerfile
+FROM <container-registry>/<cognitive-service-container-name>:<tag>
+ENV billing={BILLING_ENDPOINT}
+ENV apikey={ENDPOINT_KEY}
+ENV EULA=accept
+COPY /input /input
+```
+
+Build and run the container [locally](#how-to-use-container-on-your-local-host) or from your [private registry container](#how-to-add-container-to-private-registry) as needed.
+
+## How to use container on your local host
+
+To build the Docker file, replace `<your-image-name>` with the new name of the image, then use:
+
+```console
+docker build -t <your-image-name> .
+```
+
+To run the image, and remove it when the container stops (`--rm`):
+
+```console
+docker run --rm <your-image-name>
+```
+
+## How to add container to private registry
+
+Follow these steps to use the Dockerfile and place the new image in your private container registry.
+
+1. Create a `Dockerfile` with the text from the reuse recipe. A `Dockerfile` doesn't have an extension.
+
+1. Replace any values in the angle brackets with your own values.
+
+1. Build the file into an image at the command line or terminal, using the following command. Replace the values in the angle brackets, `<>`, with your own container name and tag.
+
+ The tag option, `-t`, is a way to add information about what you have changed for the container. For example, a container name of `modified-LUIS` indicates the original container has been layered. A tag name of `with-billing-and-model` indicates how the Language Understanding (LUIS) container has been modified.
+
+ ```Bash
+ docker build -t <your-new-container-name>:<your-new-tag-name> .
+ ```
+
+1. Sign in to Azure CLI from a console. This command opens a browser and requires authentication. Once authenticated, you can close the browser and continue working in the console.
+
+ ```azurecli
+ az login
+ ```
+
+1. Sign in to your private registry with Azure CLI from a console.
+
+ Replace the values in the angle brackets, `<my-registry>`, with your own registry name.
+
+ ```azurecli
+ az acr login --name <my-registry>
+ ```
+
+ You can also sign in with docker login if you are assigned a service principal.
+
+ ```Bash
+ docker login <my-registry>.azurecr.io
+ ```
+
+1. Tag the container with the private registry location. Replace the values in the angle brackets, `<my-registry>`, with your own registry name.
+
+ ```Bash
+ docker tag <your-new-container-name>:<your-new-tag-name> <my-registry>.azurecr.io/<your-new-container-name-in-registry>:<your-new-tag-name>
+ ```
+
+ If you don't use a tag name, `latest` is implied.
+
+1. Push the new image to your private container registry. When you view your private container registry, the container name used in the following CLI command will be the name of the repository.
+
+ ```Bash
+ docker push <my-registry>.azurecr.io/<your-new-container-name-in-registry>:<your-new-tag-name>
+ ```
+
+## Next steps
+
+[Create and use Azure Container Instance](azure-container-instance-recipe.md)
+
+<!--
+## Store input and output configuration settings
+
+Bake in input params only
+
+FROM containerpreview.azurecr.io/microsoft/cognitive-services-luis:<tag>
+COPY luisModel1 /input/
+COPY luisModel2 /input/
+
+## Store all configuration settings
+
+If you are a single manager of the container, you may want to store all settings in the container. The new, resulting container will not need any variables passed in to run.
+
+Issues with this approach:
+
+* In order to change these settings, you will have to change the values of the Dockerfile and rebuild the file.
+* If someone gets access to your container registry or your local host, they can run the container and use the Azure AI services endpoints.
+
+The following _partial_ Dockerfile shows how to statically set the values for billing and model. This example uses the
+
+```Dockerfile
+FROM <container-registry>/<cognitive-service-container-name>:<tag>
+ENV billing=<billing value>
+ENV apikey=<apikey value>
+COPY luisModel1 /input/
+COPY luisModel2 /input/
+```
+
+-->
ai-services Disconnected Containers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/containers/disconnected-containers.md
+
+ Title: Use Docker containers in disconnected environments
+
+description: Learn how to run Azure AI services Docker containers disconnected from the internet.
+++++ Last updated : 06/28/2023+++
+# Use Docker containers in disconnected environments
+
+Containers enable you to run Azure AI services APIs in your own environment, and are great for your specific security and data governance requirements. Disconnected containers enable you to use several of these APIs disconnected from the internet. Currently, the following containers can be run in this manner:
+
+* [Speech to text](../speech-service/speech-container-howto.md?tabs=stt)
+* [Custom Speech to text](../speech-service/speech-container-howto.md?tabs=cstt)
+* [Neural Text to speech](../speech-service/speech-container-howto.md?tabs=ntts)
+* [Text Translation (Standard)](../translator/containers/translator-disconnected-containers.md)
+* Azure AI Language
+ * [Sentiment Analysis](../language-service/sentiment-opinion-mining/how-to/use-containers.md)
+ * [Key Phrase Extraction](../language-service/key-phrase-extraction/how-to/use-containers.md)
+ * [Language Detection](../language-service/language-detection/how-to/use-containers.md)
+* [Azure AI Vision - Read](../computer-vision/computer-vision-how-to-install-containers.md)
+* [Document Intelligence](../../ai-services/document-intelligence/containers/disconnected.md)
+
+Before attempting to run a Docker container in an offline environment, make sure you know the steps to successfully download and use the container. For example:
+
+* Host computer requirements and recommendations.
+* The Docker `pull` command you'll use to download the container.
+* How to validate that a container is running.
+* How to send queries to the container's endpoint, once it's running.
+
+## Request access to use containers in disconnected environments
+
+Fill out and submit the [request form](https://aka.ms/csdisconnectedcontainers) to request access to the containers disconnected from the internet.
++
+Access is limited to customers that meet the following requirements:
+
+* Your organization should be identified as a strategic customer or partner with Microsoft.
+* Disconnected containers are expected to run fully offline, so your use cases must meet one of the following or similar requirements:
+  * Environment or device(s) with zero connectivity to the internet.
+  * Remote location that occasionally has internet access.
+  * Organization under strict regulations that prohibit sending any kind of data back to the cloud.
+* Application completed as instructed. Pay close attention to the guidance provided throughout the application to ensure you provide all the information required for approval.
+
+## Purchase a commitment plan to use containers in disconnected environments
+
+### Create a new resource
+
+1. Sign in to the [Azure portal](https://portal.azure.com/) and select **Create a new resource** for one of the applicable Azure AI services listed above.
+
+2. Enter the applicable information to create your resource. Be sure to select **Commitment tier disconnected containers** as your pricing tier.
+
+ > [!NOTE]
+ >
+ > * You will only see the option to purchase a commitment tier if you have been approved by Microsoft.
+ > * Pricing details are for example only.
+
+1. Select **Review + Create** at the bottom of the page. Review the information, and select **Create**.
+
+### Configure container for disconnected usage
+
+See the following documentation for steps on downloading and configuring the container for disconnected usage:
+
+* [Vision - Read](../computer-vision/computer-vision-how-to-install-containers.md#run-the-container-disconnected-from-the-internet)
+* [Language Understanding (LUIS)](../LUIS/luis-container-howto.md#run-the-container-disconnected-from-the-internet)
+* [Text Translation (Standard)](../translator/containers/translator-disconnected-containers.md)
+* [Document Intelligence](../../ai-services/document-intelligence/containers/disconnected.md)
+
+**Speech service**
+
+* [Speech to text](../speech-service/speech-container-stt.md?tabs=disconnected#run-the-container-with-docker-run)
+* [Custom Speech to text](../speech-service/speech-container-cstt.md?tabs=disconnected#run-the-container-with-docker-run)
+* [Neural Text to speech](../speech-service/speech-container-ntts.md?tabs=disconnected#run-the-container-with-docker-run)
+
+**Language service**
+
+* [Sentiment Analysis](../language-service/sentiment-opinion-mining/how-to/use-containers.md#run-the-container-disconnected-from-the-internet)
+* [Key Phrase Extraction](../language-service/key-phrase-extraction/how-to/use-containers.md#run-the-container-disconnected-from-the-internet)
+* [Language Detection](../language-service/language-detection/how-to/use-containers.md#run-the-container-disconnected-from-the-internet)
+
+
+## Container image and license updates
++
+## Usage records
+
+When operating Docker containers in a disconnected environment, the container will write usage records to a volume where they're collected over time. You can also call a REST endpoint to generate a report about service usage.
+
+### Arguments for storing logs
+
+When run in a disconnected environment, an output mount must be available to the container to store usage logs. For example, you would include `-v /host/output:{OUTPUT_PATH}` and `Mounts:Output={OUTPUT_PATH}` in the example below, replacing `{OUTPUT_PATH}` with the path where the logs will be stored:
+
+```Docker
+docker run -v /host/output:{OUTPUT_PATH} ... <image> ... Mounts:Output={OUTPUT_PATH}
+```
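+
+For instance, with `{OUTPUT_PATH}` set to `/output` inside the container, a disconnected run might look like the following sketch. The placeholders (`<image>`, `{BILLING_ENDPOINT}`, `{API_KEY}`) and the license-mount arguments are illustrative only; use the exact arguments from the container-specific instructions linked earlier in this article.
+
+```Docker
+docker run --rm -it -v /host/license:/license -v /host/output:/output <image> eula=accept billing={BILLING_ENDPOINT} apikey={API_KEY} DownloadLicense=False Mounts:License=/license Mounts:Output=/output
+```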
+
+### Get records using the container endpoints
+
+The container provides two endpoints for returning records about its usage.
+
+#### Get all records
+
+The following endpoint will provide a report summarizing all of the usage collected in the mounted billing record directory.
+
+```http
+https://<service>/records/usage-logs/
+```
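+
+For example, if the container's HTTP endpoint is mapped to local port 5000 (an assumption for this sketch), you can request the report with `curl`:
+
+```console
+curl http://localhost:5000/records/usage-logs/
+```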
+
+It will return JSON similar to the example below.
+
+```json
+{
+ "apiType": "noop",
+ "serviceName": "noop",
+ "meters": [
+ {
+ "name": "Sample.Meter",
+ "quantity": 253
+ }
+ ]
+}
+```
+
+#### Get records for a specific month
+
+The following endpoint will provide a report summarizing usage over a specific month and year.
+
+```HTTP
+https://<service>/records/usage-logs/{MONTH}/{YEAR}
+```
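+
+Under the same local-port assumption, a request for March 2023 would look like:
+
+```console
+curl http://localhost:5000/records/usage-logs/3/2023
+```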
+
+It will return a JSON response similar to the example below:
+
+```json
+{
+ "apiType": "string",
+ "serviceName": "string",
+ "meters": [
+ {
+ "name": "string",
+ "quantity": 253
+ }
+ ]
+}
+```
+
+## Purchase a different commitment plan for disconnected containers
+
+Commitment plans for disconnected containers have a calendar year commitment period. When you purchase a plan, you'll be charged the full price immediately. During the commitment period, you can't change your commitment plan; however, you can purchase additional unit(s) at a pro-rated price for the remaining days in the year. You have until midnight (UTC) on the last day of your commitment to end a commitment plan.
+
+You can choose a different commitment plan in the **Commitment Tier pricing** settings of your resource.
+
+## End a commitment plan
+
+If you decide that you don't want to continue purchasing a commitment plan, you can set your resource's auto-renewal to **Do not auto-renew**. Your commitment plan will expire on the displayed commitment end date. After this date, you won't be charged for the commitment plan. You'll be able to continue using the Azure resource to make API calls, charged at pay-as-you-go pricing. You have until midnight (UTC) on the last day of the year to end a commitment plan for disconnected containers, and not be charged for the following year.
+
+## Troubleshooting
+
+If you run the container with an output mount and logging enabled, the container generates log files that are helpful to troubleshoot issues that happen while starting or running the container.
+
+> [!TIP]
+> For more troubleshooting information and guidance, see [Disconnected containers Frequently asked questions (FAQ)](disconnected-container-faq.yml).
+
+## Next steps
+
+[Azure AI services containers overview](../cognitive-services-container-support.md)
+++++++++
ai-services Docker Compose Recipe https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/containers/docker-compose-recipe.md
+
+ Title: Use Docker Compose to deploy multiple containers
+
+description: Learn how to deploy multiple Azure AI services containers. This article shows you how to orchestrate multiple Docker container images by using Docker Compose.
++++++ Last updated : 10/29/2020+
+#Customer intent: As a potential customer, I want to know how to configure containers so I can reuse them.
+
+# SME: Brendan Walsh
++
+# Use Docker Compose to deploy multiple containers
+
+This article shows you how to deploy multiple Azure AI services containers. Specifically, you'll learn how to use Docker Compose to orchestrate multiple Docker container images.
+
+> [Docker Compose](https://docs.docker.com/compose/) is a tool for defining and running multi-container Docker applications. In Compose, you use a YAML file to configure your application's services. Then, you create and start all the services from your configuration by running a single command.
+
+It can be useful to orchestrate multiple container images on a single host computer. In this article, we'll pull together the Read and Document Intelligence containers.
+
+## Prerequisites
+
+This procedure requires several tools that must be installed and run locally:
+
+* An Azure subscription. If you don't have one, create a [free account](https://azure.microsoft.com/free/cognitive-services) before you begin.
+* [Docker Engine](https://www.docker.com/products/docker-engine). Confirm that the Docker CLI works in a console window.
+* An Azure resource with the correct pricing tier. Only the following pricing tiers work with this container:
+ * **Azure AI Vision** resource with F0 or Standard pricing tier only.
+ * **Document Intelligence** resource with F0 or Standard pricing tier only.
+ * **Azure AI services** resource with the S0 pricing tier.
+* If you're using a gated preview container, you'll need to complete the [online request form](https://aka.ms/csgate/) to use it.
+
+## Docker Compose file
+
+The YAML file defines all the services to be deployed. These services rely on either a `DockerFile` or an existing container image. In this case, we'll use two preview images. Copy and paste the following YAML file, and save it as *docker-compose.yaml*. Provide the appropriate **apikey**, **billing**, and **EndpointUri** values in the file.
+
+```yaml
+version: '3.7'
+
+services:
+ forms:
+ image: "mcr.microsoft.com/azure-cognitive-services/form-recognizer/layout"
+ environment:
+ eula: accept
+ billing: # < Your Document Intelligence billing URL >
+ apikey: # < Your Document Intelligence API key >
+ FormRecognizer__ComputerVisionApiKey: # < Your Document Intelligence API key >
+ FormRecognizer__ComputerVisionEndpointUri: # < Your Document Intelligence URI >
+ volumes:
+ - type: bind
+ source: E:\publicpreview\output
+ target: /output
+ - type: bind
+ source: E:\publicpreview\input
+ target: /input
+ ports:
+ - "5010:5000"
+
+ ocr:
+ image: "mcr.microsoft.com/azure-cognitive-services/vision/read:3.1-preview"
+ environment:
+ eula: accept
+ apikey: # < Your Azure AI Vision API key >
+ billing: # < Your Azure AI Vision billing URL >
+ ports:
+ - "5021:5000"
+```
+
+> [!IMPORTANT]
+> Create the directories on the host machine that are specified under the **volumes** node. This approach is required because the directories must exist before you try to mount an image by using volume bindings.
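+
+For example, to create the folders used in the sample file from a Windows command prompt:
+
+```console
+mkdir E:\publicpreview\input
+mkdir E:\publicpreview\output
+```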
+
+## Start the configured Docker Compose services
+
+A Docker Compose file enables the management of all the stages in a defined service's life cycle: starting, stopping, and rebuilding services; viewing the service status; and log streaming. Open a command-line interface from the project directory (where the docker-compose.yaml file is located).
+
+> [!NOTE]
+> To avoid errors, make sure that the host machine correctly shares drives with Docker Engine. For example, if *E:\publicpreview* is used as a directory in the *docker-compose.yaml* file, share drive **E** with Docker.
+
+From the command-line interface, execute the following command to start (or restart) all the services defined in the *docker-compose.yaml* file:
+
+```console
+docker-compose up
+```
+
+The first time Docker executes the **docker-compose up** command by using this configuration, it pulls the images configured under the **services** node and then downloads and mounts them:
+
+```console
+Pulling forms (mcr.microsoft.com/azure-cognitive-services/form-recognizer/layout:)...
+latest: Pulling from azure-cognitive-services/form-recognizer/layout
+743f2d6c1f65: Pull complete
+72befba99561: Pull complete
+2a40b9192d02: Pull complete
+c7715c9d5c33: Pull complete
+f0b33959f1c4: Pull complete
+b8ab86c6ab26: Pull complete
+41940c21ed3c: Pull complete
+e3d37dd258d4: Pull complete
+cdb5eb761109: Pull complete
+fd93b5f95865: Pull complete
+ef41dcbc5857: Pull complete
+4d05c86a4178: Pull complete
+34e811d37201: Pull complete
+Pulling ocr (mcr.microsoft.com/azure-cognitive-services/vision/read:3.1-preview:)...
+latest: Pulling from /azure-cognitive-services/vision/read:3.1-preview
+f476d66f5408: Already exists
+8882c27f669e: Already exists
+d9af21273955: Already exists
+f5029279ec12: Already exists
+1a578849dcd1: Pull complete
+45064b1ab0bf: Download complete
+4bb846705268: Downloading [=========================================> ] 187.1MB/222.8MB
+c56511552241: Waiting
+e91d2aa0f1ad: Downloading [==============================================> ] 162.2MB/176.1MB
+```
+
+After the images are downloaded, the image services are started:
+
+```console
+Starting docker_ocr_1 ... done
+Starting docker_forms_1 ... doneAttaching to docker_ocr_1, docker_forms_1forms_1 | forms_1 | forms_1 | Notice: This Preview is made available to you on the condition that you agree to the Supplemental Terms of Use for Microsoft Azure Previews [https://go.microsoft.com/fwlink/?linkid=2018815], which supplement your agreement [https://go.microsoft.com/fwlink/?linkid=2018657] governing your use of Azure. If you do not have an existing agreement governing your use of Azure, you agree that your agreement governing use of Azure is the Microsoft Online Subscription Agreement [https://go.microsoft.com/fwlink/?linkid=2018755] (which incorporates the Online Services Terms [https://go.microsoft.com/fwlink/?linkid=2018760]). By using the Preview you agree to these terms.
+forms_1 |
+forms_1 |
+forms_1 | Using '/input' for reading models and other read-only data.
+forms_1 | Using '/output/forms/812d811d1bcc' for writing logs and other output data.
+forms_1 | Logging to console.
+forms_1 | Submitting metering to 'https://westus2.api.cognitive.microsoft.com/'.
+forms_1 | WARNING: No access control enabled!
+forms_1 | warn: Microsoft.AspNetCore.Server.Kestrel[0]
+forms_1 | Overriding address(es) 'http://+:80'. Binding to endpoints defined in UseKestrel() instead.
+forms_1 | Hosting environment: Production
+forms_1 | Content root path: /app/forms
+forms_1 | Now listening on: http://0.0.0.0:5000
+forms_1 | Application started. Press Ctrl+C to shut down.
+ocr_1 |
+ocr_1 |
+ocr_1 | Notice: This Preview is made available to you on the condition that you agree to the Supplemental Terms of Use for Microsoft Azure Previews [https://go.microsoft.com/fwlink/?linkid=2018815], which supplement your agreement [https://go.microsoft.com/fwlink/?linkid=2018657] governing your use of Azure. If you do not have an existing agreement governing your use of Azure, you agree that your agreement governing use of Azure is the Microsoft Online Subscription Agreement [https://go.microsoft.com/fwlink/?linkid=2018755] (which incorporates the Online Services Terms [https://go.microsoft.com/fwlink/?linkid=2018760]). By using the Preview you agree to these terms.
+ocr_1 |
+ocr_1 |
+ocr_1 | Logging to console.
+ocr_1 | Submitting metering to 'https://westcentralus.api.cognitive.microsoft.com/'.
+ocr_1 | WARNING: No access control enabled!
+ocr_1 | Hosting environment: Production
+ocr_1 | Content root path: /
+ocr_1 | Now listening on: http://0.0.0.0:5000
+ocr_1 | Application started. Press Ctrl+C to shut down.
+```
+
+## Verify the service availability
++
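+
+The verification step comes from a shared include in the source article. As a sketch, one way to confirm that both images are present locally is to list them with `docker images` (the format string is optional):
+
+```console
+docker images --format "table {{.ID}}\t{{.Repository}}\t{{.Tag}}"
+```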
+Here's some example output:
+
+```
+IMAGE ID REPOSITORY TAG
+2ce533f88e80 mcr.microsoft.com/azure-cognitive-services/form-recognizer/layout latest
+4be104c126c5 mcr.microsoft.com/azure-cognitive-services/vision/read:3.1-preview latest
+```
+
+### Test containers
+
+Open a browser on the host machine and go to **localhost** by using the specified port from the *docker-compose.yaml* file, such as http://localhost:5021/swagger/index.html. For example, you could use the **Try It** feature in the API to test the Document Intelligence endpoint. Both containers' Swagger pages should be available and testable.
+
+![Document Intelligence Container](media/form-recognizer-swagger-page.png)
+
+## Next steps
+
+[Azure AI services containers](../cognitive-services-container-support.md)
ai-services Api Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/content-moderator/api-reference.md
+
+ Title: API reference - Content Moderator
+
+description: Learn about the content moderation APIs for Content Moderator.
++++++ Last updated : 05/29/2019++++
+# Content Moderator API reference
+
+You can get started with Azure AI Content Moderator APIs by doing the following:
+
+- In the Azure portal, [subscribe to the Content Moderator API](https://portal.azure.com/#create/Microsoft.CognitiveServicesContentModerator).
+
+You can use the following **Content Moderator APIs** to set up your post-moderation workflows.
+
+| Description | Reference |
+| -- |-|
+| **Image Moderation API**<br /><br />Scan images and detect potential adult and racy content by using tags, confidence scores, and other extracted information. | [Image Moderation API reference](https://westus.dev.cognitive.microsoft.com/docs/services/57cf753a3f9b070c105bd2c1/operations/57cf753a3f9b070868a1f66c "Image Moderation API reference") |
+| **Text Moderation API**<br /><br />Scan text content. Profanity terms and personal data are returned. | [Text Moderation API reference](https://westus.dev.cognitive.microsoft.com/docs/services/57cf753a3f9b070c105bd2c1/operations/57cf753a3f9b070868a1f66f "Text Moderation API reference") |
+| **Video Moderation API**<br /><br />Scan videos and detect potential adult and racy content. | [Video Moderation API overview](video-moderation-api.md "Video Moderation API overview") |
+| **List Management API**<br /><br />Create and manage custom exclusion or inclusion lists of images and text. If enabled, the **Image - Match** and **Text - Screen** operations do fuzzy matching of the submitted content against your custom lists. <br /><br />For efficiency, you can skip the machine learning-based moderation step.<br /><br /> | [List Management API reference](https://westus.dev.cognitive.microsoft.com/docs/services/57cf755e3f9b070c105bd2c2/operations/57cf755e3f9b070868a1f675 "List Management API reference") |
ai-services Client Libraries https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/content-moderator/client-libraries.md
+
+ Title: 'Quickstart: Use the Content Moderator client library'
+
+description: The Content Moderator API offers client libraries that make it easy to integrate Content Moderator into your applications.
+++
+zone_pivot_groups: programming-languages-set-conmod
+++ Last updated : 09/28/2021+
+ms.devlang: csharp, java, python
+
+keywords: content moderator, Azure AI Content Moderator, online moderator, content filtering software
++
+# Quickstart: Use the Content Moderator client library
++++++++++++
ai-services Encrypt Data At Rest https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/content-moderator/encrypt-data-at-rest.md
+
+ Title: Content Moderator encryption of data at rest
+
+description: Content Moderator encryption of data at rest.
+++++ Last updated : 03/13/2020+
+#Customer intent: As a user of the Content Moderator service, I want to learn how encryption at rest works.
++
+# Content Moderator encryption of data at rest
+
+Content Moderator automatically encrypts your data when it is persisted to the cloud, helping to meet your organizational security and compliance goals.
++
+> [!IMPORTANT]
+> Customer-managed keys are only available on the E0 pricing tier. To request the ability to use customer-managed keys, fill out and submit the [Content Moderator Customer-Managed Key Request Form](https://aka.ms/cogsvc-cmk). It will take approximately 3-5 business days to hear back on the status of your request. Depending on demand, you may be placed in a queue and approved as space becomes available. Once approved for using CMK with the Content Moderator service, you will need to create a new Content Moderator resource and select E0 as the Pricing Tier. Once your Content Moderator resource with the E0 pricing tier is created, you can use Azure Key Vault to set up your managed identity.
+
+Customer-managed keys are available in all Azure regions.
++
+## Next steps
+
+* For a full list of services that support CMK, see [Customer-Managed Keys for Azure AI services](../encryption/cognitive-services-encryption-keys-portal.md)
+* [What is Azure Key Vault](../../key-vault/general/overview.md)?
+* [Azure AI services Customer-Managed Key Request Form](https://aka.ms/cogsvc-cmk)
ai-services Export Delete Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/content-moderator/export-delete-data.md
+
+ Title: Export or delete user data - Content Moderator
+
+description: You have full control over your data. Learn how to view, export or delete your data in Content Moderator.
++++++ Last updated : 02/07/2019+++
+# Export or delete user data in Content Moderator
++
+Content Moderator collects user data to operate the service, but customers have full control to view, export, and delete their data using the [Moderation APIs](./api-reference.md).
++
+For more information on how to export and delete user data in Content Moderator, see the following table.
+
+| Data | Export Operation | Delete Operation |
+| - | - | - |
+| Account Info (Subscription Keys) | N/A | Delete using the Azure portal (Azure Subscriptions). |
+| Images for custom matching | Call the [Get image IDs API](https://westus.dev.cognitive.microsoft.com/docs/services/57cf755e3f9b070c105bd2c2/operations/57cf755e3f9b070868a1f676). Images are stored in a one-way proprietary hash format, and there is no way to extract the actual images. | Call the [Delete all Images API](https://westus.dev.cognitive.microsoft.com/docs/services/57cf755e3f9b070c105bd2c2/operations/57cf755e3f9b070868a1f686). Or delete the Content Moderator resource using the Azure portal. |
+| Terms for custom matching | Call the [Get all terms API](https://westus.dev.cognitive.microsoft.com/docs/services/57cf755e3f9b070c105bd2c2/operations/57cf755e3f9b070868a1f67e). | Call the [Delete all terms API](https://westus.dev.cognitive.microsoft.com/docs/services/57cf755e3f9b070c105bd2c2/operations/57cf755e3f9b070868a1f67d). Or delete the Content Moderator resource using the Azure portal. |
ai-services Image Lists Quickstart Dotnet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/content-moderator/image-lists-quickstart-dotnet.md
+
+ Title: "Check images against custom lists in C# - Content Moderator"
+
+description: How to moderate images with custom image lists using the Content Moderator SDK for C#.
++++++ Last updated : 10/24/2019++
+#Customer intent: As a C# developer of content-providing software, I want to check images against a custom list of inappropriate images so that I can handle them more efficiently.
++
+# Moderate with custom image lists in C#
+
+This article provides information and code samples to help you get started using
+the [Content Moderator SDK for .NET](https://www.nuget.org/packages/Microsoft.Azure.CognitiveServices.ContentModerator/) to:
+- Create a custom image list
+- Add and remove images from the list
+- Get the IDs of all images in the list
+- Retrieve and update list metadata
+- Refresh the list search index
+- Screen images against images in the list
+- Delete all images from the list
+- Delete the custom list
+
+> [!NOTE]
+> There is a maximum limit of **5 image lists** with each list **not to exceed 10,000 images**.
+
+The console application for this guide simulates some of the tasks you
+can perform with the image list API.
+
+If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/cognitive-services/) before you begin.
+
+## Sign up for Content Moderator services
+
+Before you can use Content Moderator services through the REST API or the SDK, you'll need an API subscription key. Subscribe to Content Moderator service in the [Azure portal](https://portal.azure.com/#create/Microsoft.CognitiveServicesContentModerator) to obtain it.
+
+## Create your Visual Studio project
+
+1. Add a new **Console app (.NET Framework)** project to your solution.
+
+ In the sample code, name the project **ImageLists**.
+
+1. Select this project as the single startup project for the solution.
+
+### Install required packages
+
+Install the following NuGet packages:
+
+- Microsoft.Azure.CognitiveServices.ContentModerator
+- Microsoft.Rest.ClientRuntime
+- Newtonsoft.Json
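+
+If you prefer the Package Manager Console to the NuGet UI, the packages can be installed with commands like these (latest stable versions assumed):
+
+```console
+Install-Package Microsoft.Azure.CognitiveServices.ContentModerator
+Install-Package Microsoft.Rest.ClientRuntime
+Install-Package Newtonsoft.Json
+```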
+
+### Update the program's using statements
+
+Add the following `using` statements:
+
+```csharp
+using Microsoft.Azure.CognitiveServices.ContentModerator;
+using Microsoft.Azure.CognitiveServices.ContentModerator.Models;
+using Newtonsoft.Json;
+using System;
+using System.Collections.Generic;
+using System.IO;
+using System.Threading;
+```
+
+### Create the Content Moderator client
+
+Add the following code to create a Content Moderator client for your subscription. Update the `AzureEndpoint` and `CMSubscriptionKey` fields with the values of your endpoint URL and subscription key. You can find these in the **Quick start** tab of your resource in the Azure portal.
+
+```csharp
+/// <summary>
+/// Wraps the creation and configuration of a Content Moderator client.
+/// </summary>
+/// <remarks>This class library contains insecure code. If you adapt this
+/// code for use in production, use a secure method of storing and using
+/// your Content Moderator subscription key.</remarks>
+public static class Clients
+{
+ /// <summary>
+ /// The base URL for Content Moderator calls.
+ /// </summary>
+ private static readonly string AzureEndpoint = "YOUR ENDPOINT URL";
+
+ /// <summary>
+ /// Your Content Moderator subscription key.
+ /// </summary>
+ private static readonly string CMSubscriptionKey = "YOUR API KEY";
+
+ /// <summary>
+ /// Returns a new Content Moderator client for your subscription.
+ /// </summary>
+ /// <returns>The new client.</returns>
+ /// <remarks>The <see cref="ContentModeratorClient"/> is disposable.
+ /// When you have finished using the client,
+ /// you should dispose of it either directly or indirectly. </remarks>
+ public static ContentModeratorClient NewClient()
+ {
+ // Create and initialize an instance of the Content Moderator API wrapper.
+ ContentModeratorClient client = new ContentModeratorClient(new ApiKeyServiceClientCredentials(CMSubscriptionKey));
+
+ client.Endpoint = AzureEndpoint;
+ return client;
+ }
+}
+```
+
+> [!IMPORTANT]
+> Remember to remove the key from your code when you're done, and never post it publicly. For production, use a secure way of storing and accessing your credentials like [Azure Key Vault](../../key-vault/general/overview.md). See the Azure AI services [security](../security-features.md) article for more information.
+
+### Initialize application-specific settings
+
+Add the following classes and static fields to the **Program** class in Program.cs.
+
+```csharp
+/// <summary>
+/// The minimum amount of time, in milliseconds, to wait between calls
+/// to the Image List API.
+/// </summary>
+private const int throttleRate = 3000;
+
+/// <summary>
+/// The number of minutes to delay after updating the search index before
+/// performing image match operations against the list.
+/// </summary>
+private const double latencyDelay = 0.5;
+
+/// <summary>
+/// Define constants for the labels to apply to the image list.
+/// </summary>
+private class Labels
+{
+ public const string Sports = "Sports";
+ public const string Swimsuit = "Swimsuit";
+}
+
+/// <summary>
+/// Define input data for images for this sample.
+/// </summary>
+private class Images
+{
+ /// <summary>
+ /// Represents a group of images that all share the same label.
+ /// </summary>
+ public class Data
+ {
+ /// <summary>
+ /// The label for the images.
+ /// </summary>
+ public string Label;
+
+ /// <summary>
+ /// The URLs of the images.
+ /// </summary>
+ public string[] Urls;
+ }
+
+ /// <summary>
+ /// The initial set of images to add to the list with the sports label.
+ /// </summary>
+ public static readonly Data Sports = new Data()
+ {
+ Label = Labels.Sports,
+ Urls = new string[] {
+ "https://moderatorsampleimages.blob.core.windows.net/samples/sample4.png",
+ "https://moderatorsampleimages.blob.core.windows.net/samples/sample6.png",
+ "https://moderatorsampleimages.blob.core.windows.net/samples/sample9.png"
+ }
+ };
+
+ /// <summary>
+ /// The initial set of images to add to the list with the swimsuit label.
+ /// </summary>
+ /// <remarks>We're adding sample16.png (image of a puppy), to simulate
+ /// an improperly added image that we will later remove from the list.
+ /// Note: each image can have only one entry in a list, so sample4.png
+ /// will throw an exception when we try to add it with a new label.</remarks>
+ public static readonly Data Swimsuit = new Data()
+ {
+ Label = Labels.Swimsuit,
+ Urls = new string[] {
+ "https://moderatorsampleimages.blob.core.windows.net/samples/sample1.jpg",
+ "https://moderatorsampleimages.blob.core.windows.net/samples/sample3.png",
+ "https://moderatorsampleimages.blob.core.windows.net/samples/sample4.png",
+ "https://moderatorsampleimages.blob.core.windows.net/samples/sample16.png"
+ }
+ };
+
+ /// <summary>
+ /// The set of images to subsequently remove from the list.
+ /// </summary>
+ public static readonly string[] Corrections = new string[] {
+ "https://moderatorsampleimages.blob.core.windows.net/samples/sample16.png"
+ };
+}
+
+/// <summary>
+/// The images to match against the image list.
+/// </summary>
+/// <remarks>Samples 1 and 4 should scan as matches; samples 5 and 16 should not.</remarks>
+private static readonly string[] ImagesToMatch = new string[] {
+ "https://moderatorsampleimages.blob.core.windows.net/samples/sample1.jpg",
+ "https://moderatorsampleimages.blob.core.windows.net/samples/sample4.png",
+ "https://moderatorsampleimages.blob.core.windows.net/samples/sample5.png",
+ "https://moderatorsampleimages.blob.core.windows.net/samples/sample16.png"
+};
+
+/// <summary>
+/// A dictionary that tracks the ID assigned to each image URL when
+/// the image is added to the list.
+/// </summary>
+/// <remarks>Indexed by URL.</remarks>
+private static readonly Dictionary<string, int> ImageIdMap =
+ new Dictionary<string, int>();
+
+/// <summary>
+/// The name of the file to contain the output from the list management operations.
+/// </summary>
+/// <remarks>Relative paths are relative to the execution directory.</remarks>
+private static string OutputFile = "ListOutput.log";
+
+/// <summary>
+/// A static reference to the text writer to use for logging.
+/// </summary>
+private static TextWriter writer;
+
+/// <summary>
+/// A copy of the list details.
+/// </summary>
+/// <remarks>Used to initially create the list, and later to update the
+/// list details.</remarks>
+private static Body listDetails;
+```
+
+> [!NOTE]
+> Your Content Moderator service key has a requests-per-second (RPS)
+> rate limit, and if you exceed the limit, the SDK throws an exception with a 429 error code. A free tier key has a one-RPS rate limit.
++
+## Create a method to write messages to the log file
+
+Add the following method to the **Program** class.
+
+```csharp
+/// <summary>
+/// Writes a message to the log file, and optionally to the console.
+/// </summary>
+/// <param name="message">The message.</param>
+/// <param name="echo">if set to <c>true</c>, write the message to the console.</param>
+private static void WriteLine(string message = null, bool echo = false)
+{
+ writer.WriteLine(message ?? String.Empty);
+
+ if (echo)
+ {
+ Console.WriteLine(message ?? String.Empty);
+ }
+}
+```
+
+## Create a method to create the custom list
+
+Add the following method to the **Program** class.
+
+```csharp
+/// <summary>
+/// Creates the custom list.
+/// </summary>
+/// <param name="client">The Content Moderator client.</param>
+/// <returns>The response object from the operation.</returns>
+private static ImageList CreateCustomList(ContentModeratorClient client)
+{
+ // Create the request body.
+ listDetails = new Body("MyList", "A sample list",
+ new BodyMetadata("Acceptable", "Potentially racy"));
+
+ WriteLine($"Creating list {listDetails.Name}.", true);
+
+ var result = client.ListManagementImageLists.Create(
+ "application/json", listDetails);
+ Thread.Sleep(throttleRate);
+
+ WriteLine("Response:");
+ WriteLine(JsonConvert.SerializeObject(result, Formatting.Indented));
+
+ return result;
+}
+```
+
+## Create a method to add a collection of images to the list
+
+Add the following method to the **Program** class. This guide does not demonstrate how to apply tags to images in the list.
+
+```csharp
+/// <summary>
+/// Adds images to an image list.
+/// </summary>
+/// <param name="client">The Content Moderator client.</param>
+/// <param name="listId">The list identifier.</param>
+/// <param name="imagesToAdd">The images to add.</param>
+/// <param name="label">The label to apply to each image.</param>
+/// <remarks>Images are assigned content IDs when they are added to the list.
+/// Track the content ID assigned to each image.</remarks>
+private static void AddImages(
+ContentModeratorClient client, int listId,
+IEnumerable<string> imagesToAdd, string label)
+{
+ foreach (var imageUrl in imagesToAdd)
+ {
+ WriteLine();
+ WriteLine($"Adding {imageUrl} to list {listId} with label {label}.", true);
+ try
+ {
+ var result = client.ListManagementImage.AddImageUrlInput(
+ listId.ToString(), "application/json", new BodyModel("URL", imageUrl), null, label);
+
+ ImageIdMap.Add(imageUrl, Int32.Parse(result.ContentId));
+
+ WriteLine("Response:");
+ WriteLine(JsonConvert.SerializeObject(result, Formatting.Indented));
+ }
+ catch (Exception ex)
+ {
+ WriteLine($"Unable to add image to list. Caught {ex.GetType().FullName}: {ex.Message}", true);
+ }
+ finally
+ {
+ Thread.Sleep(throttleRate);
+ }
+ }
+}
+```
+
+## Create a method to remove images from the list
+
+Add the following method to the **Program** class.
+
+```csharp
+/// <summary>
+/// Removes images from an image list.
+/// </summary>
+/// <param name="client">The Content Moderator client.</param>
+/// <param name="listId">The list identifier.</param>
+/// <param name="imagesToRemove">The images to remove.</param>
+/// <remarks>Images are assigned content IDs when they are added to the list.
+/// Use the content ID to remove the image.</remarks>
+private static void RemoveImages(
+ ContentModeratorClient client, int listId,
+ IEnumerable<string> imagesToRemove)
+{
+ foreach (var imageUrl in imagesToRemove)
+ {
+ if (!ImageIdMap.ContainsKey(imageUrl)) continue;
+ int imageId = ImageIdMap[imageUrl];
+
+ WriteLine();
+ WriteLine($"Removing entry for {imageUrl} (ID = {imageId}) from list {listId}.", true);
+
+ var result = client.ListManagementImage.DeleteImage(
+ listId.ToString(), imageId.ToString());
+ Thread.Sleep(throttleRate);
+
+ ImageIdMap.Remove(imageUrl);
+
+ WriteLine("Response:");
+ WriteLine(JsonConvert.SerializeObject(result, Formatting.Indented));
+ }
+}
+```
+
+## Create a method to get all of the content IDs for images in the list
+
+Add the following method to the **Program** class.
+
+```csharp
+/// <summary>
+/// Gets all image IDs in an image list.
+/// </summary>
+/// <param name="client">The Content Moderator client.</param>
+/// <param name="listId">The list identifier.</param>
+/// <returns>The response object from the operation.</returns>
+private static ImageIds GetAllImageIds(
+ ContentModeratorClient client, int listId)
+{
+ WriteLine();
+ WriteLine($"Getting all image IDs for list {listId}.", true);
+
+ var result = client.ListManagementImage.GetAllImageIds(listId.ToString());
+ Thread.Sleep(throttleRate);
+
+ WriteLine("Response:");
+ WriteLine(JsonConvert.SerializeObject(result, Formatting.Indented));
+
+ return result;
+}
+```
+
+## Create a method to update the details of the list
+
+Add the following method to the **Program** class.
+
+```csharp
+/// <summary>
+/// Updates the details of an image list.
+/// </summary>
+/// <param name="client">The Content Moderator client.</param>
+/// <param name="listId">The list identifier.</param>
+/// <returns>The response object from the operation.</returns>
+private static ImageList UpdateListDetails(
+ ContentModeratorClient client, int listId)
+{
+ WriteLine();
+ WriteLine($"Updating details for list {listId}.", true);
+
+ listDetails.Name = "Swimsuits and sports";
+
+ var result = client.ListManagementImageLists.Update(
+ listId.ToString(), "application/json", listDetails);
+ Thread.Sleep(throttleRate);
+
+ WriteLine("Response:");
+ WriteLine(JsonConvert.SerializeObject(result, Formatting.Indented));
+
+ return result;
+}
+```
+
+## Create a method to retrieve the details of the list
+
+Add the following method to the **Program** class.
+
+```csharp
+/// <summary>
+/// Gets the details for an image list.
+/// </summary>
+/// <param name="client">The Content Moderator client.</param>
+/// <param name="listId">The list identifier.</param>
+/// <returns>The response object from the operation.</returns>
+private static ImageList GetListDetails(
+ ContentModeratorClient client, int listId)
+{
+ WriteLine();
+ WriteLine($"Getting details for list {listId}.", true);
+
+ var result = client.ListManagementImageLists.GetDetails(listId.ToString());
+ Thread.Sleep(throttleRate);
+
+ WriteLine("Response:");
+ WriteLine(JsonConvert.SerializeObject(result, Formatting.Indented));
+
+ return result;
+}
+```
+
+## Create a method to refresh the search index of the list
+
+Add the following method to the **Program** class. Any time you update a list, you need to refresh the search index before using the
+list to screen images.
+
+```csharp
+/// <summary>
+/// Refreshes the search index for an image list.
+/// </summary>
+/// <param name="client">The Content Moderator client.</param>
+/// <param name="listId">The list identifier.</param>
+/// <returns>The response object from the operation.</returns>
+private static RefreshIndex RefreshSearchIndex(
+ ContentModeratorClient client, int listId)
+{
+ WriteLine();
+ WriteLine($"Refreshing the search index for list {listId}.", true);
+
+ var result = client.ListManagementImageLists.RefreshIndexMethod(listId.ToString());
+ Thread.Sleep(throttleRate);
+
+ WriteLine("Response:");
+ WriteLine(JsonConvert.SerializeObject(result, Formatting.Indented));
+
+ return result;
+}
+```
+
+## Create a method to match images against the list
+
+Add the following method to the **Program** class.
+
+```csharp
+/// <summary>
+/// Matches images against an image list.
+/// </summary>
+/// <param name="client">The Content Moderator client.</param>
+/// <param name="listId">The list identifier.</param>
+/// <param name="imagesToMatch">The images to screen.</param>
+private static void MatchImages(
+ ContentModeratorClient client, int listId,
+ IEnumerable<string> imagesToMatch)
+{
+ foreach (var imageUrl in imagesToMatch)
+ {
+ WriteLine();
+ WriteLine($"Matching image {imageUrl} against list {listId}.", true);
+
+ var result = client.ImageModeration.MatchUrlInput(
+ "application/json", new BodyModel("URL", imageUrl), listId.ToString());
+ Thread.Sleep(throttleRate);
+
+ WriteLine("Response:");
+ WriteLine(JsonConvert.SerializeObject(result, Formatting.Indented));
+ }
+}
+```
+
+## Create a method to delete all images from the list
+
+Add the following method to the **Program** class.
+
+```csharp
+/// <summary>
+/// Deletes all images from an image list.
+/// </summary>
+/// <param name="client">The Content Moderator client.</param>
+/// <param name="listId">The list identifier.</param>
+private static void DeleteAllImages(
+ ContentModeratorClient client, int listId)
+{
+ WriteLine();
+ WriteLine($"Deleting all images from list {listId}.", true);
+
+ var result = client.ListManagementImage.DeleteAllImages(listId.ToString());
+ Thread.Sleep(throttleRate);
+
+ WriteLine("Response:");
+ WriteLine(JsonConvert.SerializeObject(result, Formatting.Indented));
+}
+```
+
+## Create a method to delete the list
+
+Add the following method to the **Program** class.
+
+```csharp
+/// <summary>
+/// Deletes an image list.
+/// </summary>
+/// <param name="client">The Content Moderator client.</param>
+/// <param name="listId">The list identifier.</param>
+private static void DeleteCustomList(
+ ContentModeratorClient client, int listId)
+{
+ WriteLine();
+ WriteLine($"Deleting list {listId}.", true);
+
+ var result = client.ListManagementImageLists.Delete(listId.ToString());
+ Thread.Sleep(throttleRate);
+
+ WriteLine("Response:");
+ WriteLine(JsonConvert.SerializeObject(result, Formatting.Indented));
+}
+```
+
+## Create a method to retrieve IDs for all image lists
+
+Add the following method to the **Program** class.
+
+```csharp
+/// <summary>
+/// Gets all list identifiers for the client.
+/// </summary>
+/// <param name="client">The Content Moderator client.</param>
+/// <returns>The response object from the operation.</returns>
+private static IList<ImageList> GetAllListIds(ContentModeratorClient client)
+{
+ WriteLine();
+ WriteLine($"Getting all image list IDs.", true);
+
+ var result = client.ListManagementImageLists.GetAllImageLists();
+ Thread.Sleep(throttleRate);
+
+ WriteLine("Response:");
+ WriteLine(JsonConvert.SerializeObject(result, Formatting.Indented));
+
+ return result;
+}
+```
+
+## Add code to simulate the use of an image list
+
+Add the following code to the **Main** method. This code simulates many of the operations that you would perform in defining and managing the list, as well as using the list to screen images. The logging features allow you to see the response objects generated by the SDK calls to the Content Moderator service.
+
+```csharp
+// Create the text writer to use for logging, and cache a static reference to it.
+using (StreamWriter outputWriter = new StreamWriter(OutputFile))
+{
+ writer = outputWriter;
+
+ // Create a Content Moderator client.
+ using (var client = Clients.NewClient())
+ {
+ // Create a custom image list and record the ID assigned to it.
+ var creationResult = CreateCustomList(client);
+ if (creationResult.Id.HasValue)
+ {
+ // Cache the ID of the new image list.
+ int listId = creationResult.Id.Value;
+
+ // Perform various operations using the image list.
+ AddImages(client, listId, Images.Sports.Urls, Images.Sports.Label);
+ AddImages(client, listId, Images.Swimsuit.Urls, Images.Swimsuit.Label);
+
+ GetAllImageIds(client, listId);
+ UpdateListDetails(client, listId);
+ GetListDetails(client, listId);
+
+ // Be sure to refresh search index
+ RefreshSearchIndex(client, listId);
+
+ // WriteLine();
+ WriteLine($"Waiting {latencyDelay} minutes to allow the server time to propagate the index changes.", true);
+ Thread.Sleep((int)(latencyDelay * 60 * 1000));
+
+ // Match images against the image list.
+ MatchImages(client, listId, ImagesToMatch);
+
+ // Remove images
+ RemoveImages(client, listId, Images.Corrections);
+
+ // Be sure to refresh search index
+ RefreshSearchIndex(client, listId);
+
+ WriteLine();
+ WriteLine($"Waiting {latencyDelay} minutes to allow the server time to propagate the index changes.", true);
+ Thread.Sleep((int)(latencyDelay * 60 * 1000));
+
+ // Match images again against the image list. The removed image should not get matched.
+ MatchImages(client, listId, ImagesToMatch);
+
+ // Delete all images from the list.
+ DeleteAllImages(client, listId);
+
+ // Delete the image list.
+ DeleteCustomList(client, listId);
+
+ // Verify that the list was deleted.
+ GetAllListIds(client);
+ }
+ }
+
+ writer.Flush();
+ writer.Close();
+ writer = null;
+}
+
+Console.WriteLine();
+Console.WriteLine("Press any key to exit...");
+Console.ReadKey();
+```
+
+## Run the program and review the output
+
+The list ID and the image content IDs are different each time you run the application.
+The log file written by the program has the following output:
+
+```json
+Creating list MyList.
+Response:
+{
+ "Id": 169642,
+ "Name": "MyList",
+ "Description": "A sample list",
+ "Metadata": {
+ "Key One": "Acceptable",
+ "Key Two": "Potentially racy"
+ }
+}
+
+Adding https://moderatorsampleimages.blob.core.windows.net/samples/sample4.png to list 169642 with label Sports.
+Response:
+{
+ "ContentId": "169490",
+ "AdditionalInfo": [
+ {
+ "Key": "Source",
+ "Value": "169642"
+ },
+ {
+ "Key": "ImageDownloadTimeInMs",
+ "Value": "233"
+ },
+ {
+ "Key": "ImageSizeInBytes",
+ "Value": "2945548"
+ }
+ ],
+ "Status": {
+ "Code": 3000,
+ "Description": "OK",
+ "Exception": null
+ },
+ "TrackingId": "WE_f0527c49616243c5ac65e1cc3482d390_ContentModerator.Preview_b4d3e20a-0751-4760-8829-475e5da33ce8"
+}
+
+Adding https://moderatorsampleimages.blob.core.windows.net/samples/sample6.png to list 169642 with label Sports.
+Response:
+{
+ "ContentId": "169491",
+ "AdditionalInfo": [
+ {
+ "Key": "Source",
+ "Value": "169642"
+ },
+ {
+ "Key": "ImageDownloadTimeInMs",
+ "Value": "215"
+ },
+ {
+ "Key": "ImageSizeInBytes",
+ "Value": "2440050"
+ }
+ ],
+ "Status": {
+ "Code": 3000,
+ "Description": "OK",
+ "Exception": null
+ },
+ "TrackingId": "WE_f0527c49616243c5ac65e1cc3482d390_ContentModerator.Preview_cc1eb6af-2463-4e5e-9145-2a11dcecbc30"
+}
+
+Adding https://moderatorsampleimages.blob.core.windows.net/samples/sample9.png to list 169642 with label Sports.
+Response:
+{
+ "ContentId": "169492",
+ "AdditionalInfo": [
+ {
+ "Key": "Source",
+ "Value": "169642"
+ },
+ {
+ "Key": "ImageDownloadTimeInMs",
+ "Value": "98"
+ },
+ {
+ "Key": "ImageSizeInBytes",
+ "Value": "1631958"
+ }
+ ],
+ "Status": {
+ "Code": 3000,
+ "Description": "OK",
+ "Exception": null
+ },
+ "TrackingId": "WE_f0527c49616243c5ac65e1cc3482d390_ContentModerator.Preview_01edc1f2-b448-48cf-b7f6-23b64d5040e9"
+}
+
+Adding https://moderatorsampleimages.blob.core.windows.net/samples/sample1.jpg to list 169642 with label Swimsuit.
+Response:
+{
+ "ContentId": "169493",
+ "AdditionalInfo": [
+ {
+ "Key": "Source",
+ "Value": "169642"
+ },
+ {
+ "Key": "ImageDownloadTimeInMs",
+ "Value": "27"
+ },
+ {
+ "Key": "ImageSizeInBytes",
+ "Value": "17280"
+ }
+ ],
+ "Status": {
+ "Code": 3000,
+ "Description": "OK",
+ "Exception": null
+ },
+ "TrackingId": "WE_f0527c49616243c5ac65e1cc3482d390_ContentModerator.Preview_41f7bc6f-8778-4576-ba46-37b43a6c2434"
+}
+
+Adding https://moderatorsampleimages.blob.core.windows.net/samples/sample3.png to list 169642 with label Swimsuit.
+Response:
+{
+ "ContentId": "169494",
+ "AdditionalInfo": [
+ {
+ "Key": "Source",
+ "Value": "169642"
+ },
+ {
+ "Key": "ImageDownloadTimeInMs",
+ "Value": "129"
+ },
+ {
+ "Key": "ImageSizeInBytes",
+ "Value": "1242855"
+ }
+ ],
+ "Status": {
+ "Code": 3000,
+ "Description": "OK",
+ "Exception": null
+ },
+ "TrackingId": "WE_f0527c49616243c5ac65e1cc3482d390_ContentModerator.Preview_61a48f33-eb55-4fd9-ac97-20eb0f3622a5"
+}
+
+Adding https://moderatorsampleimages.blob.core.windows.net/samples/sample4.png to list 169642 with label Swimsuit.
+Unable to add image to list. Caught Microsoft.CognitiveServices.ContentModerator.Models.APIErrorException: Operation returned an invalid status code 'Conflict'
+
+Adding https://moderatorsampleimages.blob.core.windows.net/samples/sample16.png to list 169642 with label Swimsuit.
+Response:
+{
+ "ContentId": "169495",
+ "AdditionalInfo": [
+ {
+ "Key": "Source",
+ "Value": "169642"
+ },
+ {
+ "Key": "ImageDownloadTimeInMs",
+ "Value": "65"
+ },
+ {
+ "Key": "ImageSizeInBytes",
+ "Value": "1088127"
+ }
+ ],
+ "Status": {
+ "Code": 3000,
+ "Description": "OK",
+ "Exception": null
+ },
+ "TrackingId": "WE_f0527c49616243c5ac65e1cc3482d390_ContentModerator.Preview_1c1f3de4-58b9-4aa8-82fa-1b0f479f6d7c"
+}
+
+Getting all image IDs for list 169642.
+Response:
+{
+ "ContentSource": "169642",
+ "ContentIds": [
+ 169490,
+ 169491,
+ 169492,
+ 169493,
+ 169494,
+ 169495
+  ],
+  "Status": {
+    "Code": 3000,
+    "Description": "OK",
+    "Exception": null
+  },
+  "TrackingId": "WE_f0527c49616243c5ac65e1cc3482d390_ContentModerator.Preview_0d017deb-38fa-4701-a7b1-5b6608c79da2"
+}
+
+Updating details for list 169642.
+Response:
+{
+ "Id": 169642,
+ "Name": "Swimsuits and sports",
+ "Description": "A sample list",
+ "Metadata": {
+ "Key One": "Acceptable",
+ "Key Two": "Potentially racy"
+ }
+}
+
+Getting details for list 169642.
+Response:
+{
+ "Id": 169642,
+ "Name": "Swimsuits and sports",
+ "Description": "A sample list",
+ "Metadata": {
+ "Key One": "Acceptable",
+ "Key Two": "Potentially racy"
+ }
+}
+
+Refreshing the search index for list 169642.
+Response:
+{
+ "ContentSourceId": "169642",
+ "IsUpdateSuccess": true,
+ "AdvancedInfo": [],
+ "Status": {
+ "Code": 3000,
+ "Description": "RefreshIndex successfully completed.",
+ "Exception": null
+ },
+ "TrackingId": "WE_f0527c49616243c5ac65e1cc3482d390_ContentModerator.Preview_c72255cd-55a0-415e-9c18-0b9c08a9f25b"
+}
+Waiting 0.5 minutes to allow the server time to propagate the index changes.
+
+Matching image https://moderatorsampleimages.blob.core.windows.net/samples/sample1.jpg against list 169642.
+Response:
+{
+ "TrackingId": "WE_f0527c49616243c5ac65e1cc3482d390_ContentModerator.Preview_ec384878-dbaa-4999-9042-6ac986355967",
+ "CacheID": null,
+ "IsMatch": true,
+ "Matches": [
+ {
+ "Score": 1.0,
+ "MatchId": 169493,
+ "Source": "169642",
+ "Tags": [],
+ "Label": "Swimsuit"
+ }
+ ],
+ "Status": {
+ "Code": 3000,
+ "Description": "OK",
+ "Exception": null
+ }
+}
+
+Matching image https://moderatorsampleimages.blob.core.windows.net/samples/sample4.png against list 169642.
+Response:
+{
+ "TrackingId": "WE_f0527c49616243c5ac65e1cc3482d390_ContentModerator.Preview_e9db4b8f-3067-400f-9552-d3e6af2474c0",
+ "CacheID": null,
+ "IsMatch": true,
+ "Matches": [
+ {
+ "Score": 1.0,
+ "MatchId": 169490,
+ "Source": "169642",
+ "Tags": [],
+ "Label": "Sports"
+ }
+ ],
+ "Status": {
+ "Code": 3000,
+ "Description": "OK",
+ "Exception": null
+ }
+}
+
+Matching image https://moderatorsampleimages.blob.core.windows.net/samples/sample5.png against list 169642.
+Response:
+{
+ "TrackingId": "WE_f0527c49616243c5ac65e1cc3482d390_ContentModerator.Preview_25991575-05da-4904-89db-abe88270b403",
+ "CacheID": null,
+ "IsMatch": false,
+ "Matches": [],
+ "Status": {
+ "Code": 3000,
+ "Description": "OK",
+ "Exception": null
+ }
+}
+
+Matching image https://moderatorsampleimages.blob.core.windows.net/samples/sample16.png against list 169642.
+Response:
+{
+ "TrackingId": "WE_f0527c49616243c5ac65e1cc3482d390_ContentModerator.Preview_c65d1c91-0d8a-4511-8ac6-814e04adc845",
+ "CacheID": null,
+ "IsMatch": true,
+ "Matches": [
+ {
+ "Score": 1.0,
+ "MatchId": 169495,
+ "Source": "169642",
+ "Tags": [],
+ "Label": "Swimsuit"
+ }
+ ],
+ "Status": {
+ "Code": 3000,
+ "Description": "OK",
+ "Exception": null
+ }
+}
+
+Removing entry for https://moderatorsampleimages.blob.core.windows.net/samples/sample16.png (ID = 169495) from list 169642.
+Response:
+""
+
+Refreshing the search index for list 169642.
+Response:
+{
+ "ContentSourceId": "169642",
+ "IsUpdateSuccess": true,
+ "AdvancedInfo": [],
+ "Status": {
+ "Code": 3000,
+ "Description": "RefreshIndex successfully completed.",
+ "Exception": null
+ },
+ "TrackingId": "WE_f0527c49616243c5ac65e1cc3482d390_ContentModerator.Preview_b55a375e-30a1-4612-aa7b-81edcee5bffb"
+}
+
+Waiting 0.5 minutes to allow the server time to propagate the index changes.
+
+Matching image https://moderatorsampleimages.blob.core.windows.net/samples/sample1.jpg against list 169642.
+Response:
+{
+ "TrackingId": "WE_f0527c49616243c5ac65e1cc3482d390_ContentModerator.Preview_00544948-2936-489c-98c8-b507b654bff5",
+ "CacheID": null,
+ "IsMatch": true,
+ "Matches": [
+ {
+ "Score": 1.0,
+ "MatchId": 169493,
+ "Source": "169642",
+ "Tags": [],
+ "Label": "Swimsuit"
+ }
+ ],
+ "Status": {
+ "Code": 3000,
+ "Description": "OK",
+ "Exception": null
+ }
+}
+
+Matching image https://moderatorsampleimages.blob.core.windows.net/samples/sample4.png against list 169642.
+Response:
+{
+ "TrackingId": "WE_f0527c49616243c5ac65e1cc3482d390_ContentModerator.Preview_c36ec646-53c2-4705-86b2-d72b5c2273c7",
+ "CacheID": null,
+ "IsMatch": true,
+ "Matches": [
+ {
+ "Score": 1.0,
+ "MatchId": 169490,
+ "Source": "169642",
+ "Tags": [],
+ "Label": "Sports"
+ }
+ ],
+ "Status": {
+ "Code": 3000,
+ "Description": "OK",
+ "Exception": null
+ }
+}
+
+Matching image https://moderatorsampleimages.blob.core.windows.net/samples/sample5.png against list 169642.
+Response:
+{
+  "TrackingId": "WE_f0527c49616243c5ac65e1cc3482d390_ContentModerator.Preview_22edad74-690d-4fbc-b7d0-bf64867c4cb9",
+ "CacheID": null,
+ "IsMatch": false,
+ "Matches": [],
+ "Status": {
+ "Code": 3000,
+ "Description": "OK",
+ "Exception": null
+ }
+}
+
+Matching image https://moderatorsampleimages.blob.core.windows.net/samples/sample16.png against list 169642.
+Response:
+{
+ "TrackingId": "WE_f0527c49616243c5ac65e1cc3482d390_ContentModerator.Preview_abd4a178-3238-4601-8e4f-cf9ee66f605a",
+ "CacheID": null,
+ "IsMatch": false,
+ "Matches": [],
+ "Status": {
+ "Code": 3000,
+ "Description": "OK",
+ "Exception": null
+ }
+}
+
+Deleting all images from list 169642.
+Response:
+"Reset Successful."
+
+Deleting list 169642.
+Response:
+""
+
+Getting all image list IDs.
+Response:
+[]
+```
+
+## Next steps
+
+Get the [Content Moderator .NET SDK](https://www.nuget.org/packages/Microsoft.Azure.CognitiveServices.ContentModerator/) and the [Visual Studio solution](https://github.com/Azure-Samples/cognitive-services-dotnet-sdk-samples/tree/master/ContentModerator) for this and other Content Moderator quickstarts for .NET, and get started on your integration.
ai-services Image Moderation Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/content-moderator/image-moderation-api.md
+
+ Title: Image Moderation - Content Moderator
+
+description: Use Content Moderator's machine-assisted image moderation to moderate images for adult and racy content.
++++++ Last updated : 10/27/2021++++
+# Learn image moderation concepts
+
+Use Content Moderator's machine-assisted image moderation to moderate images for adult and racy content. Scan images for text content, extract that text, and detect faces. You can match images against custom lists and take further action.
+
+## Evaluating for adult and racy content
+
+The **Evaluate** operation returns a confidence score between 0 and 1 and a boolean classification for each category (adult and racy). These values predict whether the image contains potential adult or racy content. When you call the API with your image (file or URL), the returned response includes the following information:
+
+```json
+"ImageModeration": {
+ .............
+ "adultClassificationScore": 0.019196987152099609,
+ "isImageAdultClassified": false,
+ "racyClassificationScore": 0.032390203326940536,
+ "isImageRacyClassified": false,
+ ............
+  },
+```
+
+> [!NOTE]
+>
+> - `isImageAdultClassified` represents the potential presence of images that may be considered sexually explicit or adult in certain situations.
+> - `isImageRacyClassified` represents the potential presence of images that may be considered sexually suggestive or mature in certain situations.
+> - The scores are between 0 and 1. The higher the score, the higher the model is predicting that the category may be applicable. This preview relies on a statistical model rather than manually coded outcomes. We recommend testing with your own content to determine how each category aligns to your requirements.
+> - The boolean values are either true or false depending on the internal score thresholds. Customers should assess whether to use this value or decide on custom thresholds based on their content policies.
+
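+The following is a minimal .NET sketch of calling the **Evaluate** operation. It isn't part of this article's samples: the `Clients.NewClient()` helper comes from the .NET quickstarts, and the `EvaluateUrlInput` method and `BodyModel` type are recalled from the [Content Moderator .NET SDK](https://www.nuget.org/packages/Microsoft.Azure.CognitiveServices.ContentModerator/), so treat them as assumptions.
+
+```csharp
+using Microsoft.Azure.CognitiveServices.ContentModerator;
+using Microsoft.Azure.CognitiveServices.ContentModerator.Models;
+using System;
+
+// Sketch only: Clients.NewClient() is the helper class shown in the .NET quickstarts.
+using (ContentModeratorClient client = Clients.NewClient())
+{
+    var imageUrl = new BodyModel("URL",
+        "https://moderatorsampleimages.blob.core.windows.net/samples/sample1.jpg");
+
+    // Evaluate returns the adult/racy scores and boolean classifications shown above.
+    var result = client.ImageModeration.EvaluateUrlInput("application/json", imageUrl, true);
+
+    Console.WriteLine($"Adult score: {result.AdultClassificationScore} (classified: {result.IsImageAdultClassified})");
+    Console.WriteLine($"Racy score: {result.RacyClassificationScore} (classified: {result.IsImageRacyClassified})");
+}
+```
+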
+## Detecting text with Optical Character Recognition (OCR)
+
+The **Optical Character Recognition (OCR)** operation predicts the presence of text content in an image and extracts it for text moderation, among other uses. You can specify the language. If you do not specify a language, the detection defaults to English.
+
+The response includes the following information:
+- The original text.
+- The detected text elements with their confidence scores.
+
+Example extract:
+
+```json
+"TextDetection": {
+ "status": {
+ "code": 3000.0,
+ "description": "OK",
+ "exception": null
+ },
+ .........
+ "language": "eng",
+ "text": "IF WE DID \r\nALL \r\nTHE THINGS \r\nWE ARE \r\nCAPABLE \r\nOF DOING, \r\nWE WOULD \r\nLITERALLY \r\nASTOUND \r\nOURSELVE \r\n",
+ "candidates": []
+},
+```
+
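+The following sketch shows the corresponding SDK call, reusing the `client` from the Evaluate sketch above. The `OCRUrlInput` method name and the shape of its response are assumptions based on the .NET SDK.
+
+```csharp
+// Sketch only: reuses the ContentModeratorClient ("client") created earlier.
+var ocrImage = new BodyModel("URL",
+    "https://moderatorsampleimages.blob.core.windows.net/samples/sample2.jpg");
+
+// Pass the language explicitly; detection defaults to English if you omit it.
+var ocrResult = client.ImageModeration.OCRUrlInput("eng", "application/json", ocrImage, true);
+
+Console.WriteLine(ocrResult.Text);
+```
+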
+## Detecting faces
+
+Face detection helps you find personal data, such as faces, in your images. The operation detects potential faces and returns the number of potential faces in each image.
+
+A response includes this information:
+
+- Faces count
+- List of locations of faces detected
+
+Example extract:
+
+```json
+"FaceDetection": {
+ ......
+ "result": true,
+ "count": 2,
+ "advancedInfo": [
+ .....
+ ],
+ "faces": [
+ {
+ "bottom": 598,
+ "left": 44,
+ "right": 268,
+ "top": 374
+ },
+ {
+ "bottom": 620,
+ "left": 308,
+ "right": 532,
+ "top": 396
+ }
+ ]
+}
+```
+
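+The following sketch shows the corresponding SDK call, again reusing the `client` from the earlier sketches; the `FindFacesUrlInput` method name is an assumption based on the .NET SDK.
+
+```csharp
+// Sketch only: detect faces in a publicly accessible image.
+var faceImage = new BodyModel("URL",
+    "https://moderatorsampleimages.blob.core.windows.net/samples/sample16.png");
+
+var faceResult = client.ImageModeration.FindFacesUrlInput("application/json", faceImage, true);
+
+Console.WriteLine($"Found {faceResult.Count} face(s).");
+```
+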
+## Creating and managing custom lists
+
+In many online communities, after users upload images or other types of content, offensive items may get shared multiple times over the following days, weeks, and months. Repeatedly scanning and filtering out the same image, or even slightly modified versions of it, from multiple places is expensive and error-prone.
+
+Instead of moderating the same image multiple times, you add the offensive images to your custom list of blocked content. That way, your content moderation system compares incoming images against your custom lists and stops any further processing.
+
+> [!NOTE]
+> There is a maximum limit of **5 image lists**, and each list must **not exceed 10,000 images**.
+>
+
+The Content Moderator provides a complete [Image List Management API](try-image-list-api.md) with operations for managing lists of custom images. Start with the [Image Lists API Console](try-image-list-api.md) and use the REST API code samples. Also check out the [Image List .NET quickstart](image-lists-quickstart-dotnet.md) if you are familiar with Visual Studio and C#.
+
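+As a rough sketch of the SDK calls involved, creating a list, adding an image, and refreshing the index might look like the following. The method and type names are recalled from the image list quickstart sample and should be treated as assumptions; the snippet reuses the `client` from the sketches earlier in this article.
+
+```csharp
+// Sketch only: create a custom image list, add one image, and refresh the index.
+var listBody = new Body("MyList", "A sample list",
+    new BodyMetadata("Acceptable", "Potentially racy"));
+var list = client.ListManagementImageLists.Create("application/json", listBody);
+string listId = list.Id.Value.ToString();
+
+var blockedImage = new BodyModel("URL",
+    "https://moderatorsampleimages.blob.core.windows.net/samples/sample1.jpg");
+client.ListManagementImage.AddImageUrlInput(listId, "application/json", blockedImage, null, "Blocked");
+
+// Refresh the search index so the new image is included in future Match calls.
+client.ListManagementImageLists.RefreshIndexMethod(listId);
+```
+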
+## Matching against your custom lists
+
+The Match operation allows fuzzy matching of incoming images against any of your custom lists, created and managed using the List operations.
+
+If a match is found, the operation returns the identifier and the moderation tags of the matched image. The response includes this information:
+
+- Match score (between 0 and 1)
+- Matched image
+- Image tags (assigned during previous moderation)
+- Image labels
+
+Example extract:
+
+```json
+{
+ ..............,
+ "IsMatch": true,
+ "Matches": [
+ {
+ "Score": 1.0,
+ "MatchId": 169490,
+ "Source": "169642",
+ "Tags": [],
+ "Label": "Sports"
+ }
+ ],
+ ....
+}
+```
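+
+The following sketch shows a Match call against the list created in the previous sketch (the `MatchUrlInput` method name is an assumption based on the .NET SDK):
+
+```csharp
+// Sketch only: match an incoming image against the custom image list ("listId" from above).
+var incomingImage = new BodyModel("URL",
+    "https://moderatorsampleimages.blob.core.windows.net/samples/sample4.png");
+
+var match = client.ImageModeration.MatchUrlInput("application/json", incomingImage, listId, true);
+
+if (match.IsMatch == true)
+{
+    foreach (var hit in match.Matches)
+    {
+        Console.WriteLine($"Matched image {hit.MatchId} (label: {hit.Label}, score: {hit.Score}).");
+    }
+}
+```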
++
+## Next steps
+
+Test drive the [Image Moderation API console](try-image-api.md) and use the REST API code samples.
ai-services Language Support https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/content-moderator/language-support.md
+
+ Title: Language support - Content Moderator API
+
+description: This is a list of natural languages that the Azure AI services Content Moderator API supports.
++++++ Last updated : 10/27/2021++++
+# Language support for Content Moderator API
+
+> [!NOTE]
+> For the **language** parameter, assign `eng` or leave it empty to see the machine-assisted **classification** response (preview feature). **This feature supports English only**.
+>
+> For **profanity terms** detection, use the [ISO 639-3 code](http://www-01.sil.org/iso639-3/codes.asp) of the supported languages listed in this article, or leave it empty.
++
+| Language detection | Profanity | OCR | Auto-correction |
+| ------------------ | --------- | --- | --------------- |
+| Arabic (Romanized) | Afrikaans | Arabic | Arabic |
+| Balinese | Albanian | Chinese (Simplified) | Danish |
+| Bengali | Amharic | Chinese (Traditional) | Dutch |
+| Buginese | Arabic | Czech | English |
+| Buhid | Armenian | Danish | Finnish |
+| Carian | Assamese | Dutch | French |
+| Chinese (Simplified) | Azerbaijani | English | Greek (modern) |
+| Chinese (Traditional) | Bangla - Bangladesh | Finnish | Italian |
+| Church (Slavic) | Bangla - India | French | Korean |
+| Coptic | Basque | German | Norwegian |
+| Czech | Belarusian | Greek (modern) | Polish |
+| Dhivehi | Bosnian - Cyrillic | Hungarian | Portuguese |
+| Dutch | Bosnian - Latin | Italian | Romanian |
+| English (Creole) | Breton [non-GeoPol] | Japanese | Russian |
+| Farsi | Bulgarian | Korean | Slovak |
+| French | Catalan | Norwegian | Spanish |
+| German | Central Kurdish | Polish | Turkish |
+| Greek | Cherokee | Portuguese | |
+| Haitian | Chinese (Simplified) | Romanian | |
+| Hebrew | Chinese (Traditional) - Hong Kong SAR | Russian | |
+| Hindi | Chinese (Traditional) - Taiwan | Serbian Cyrillic | |
+| Hmong | Croatian | Serbian Latin | |
+| Hungarian | Czech | Slovak | |
+| Italian | Danish | Spanish | |
+| Japanese | Dari | Swedish | |
+| Korean | Dutch | Turkish | |
+| Kurdish (Arabic) | English | | |
+| Kurdish (Latin) | Estonian | | |
+| Lepcha | Filipino | | |
+| Limbu | Finnish | | |
+| Lu | French | | |
+| Lycian | | | |
+| Lydian | Georgian | | |
+| Mycenaean (Greek) | German | | |
+| Nko | Greek | | |
+| Norwegian (Bokmal) | Gujarati | | |
+| Norwegian (Nynorsk) | Hausa | | |
+| Old (Persian) | Hebrew | | |
+| Pashto | Hindi | | |
+| Polish | Hungarian | | |
+| Portuguese | Icelandic | | |
+| Punjabi | Igbo | | |
+| Rejang | Indonesian | | |
+| Russian | Inuktitut | | |
+| Santali | Irish | | |
+| Sasak | isiXhosa | | |
+| Saurashtra | isiZulu | | |
+| Serbian (Cyrillic) | Italian | | |
+| Serbian (Latin) | Japanese | | |
+| Sinhala | Kannada | | |
+| Slovenian | Kazakh | | |
+| Spanish | Khmer | | |
+| Swedish | K'iche | | |
+| Sylheti | Kinyarwanda | | |
+| Syriac | Kiswahili | | |
+| Tagbanwa | Konkani | | |
+| Tai (Nua) | Korean | | |
+| Tamashek | Kyrgyz | | |
+| Turkish | Lao | | |
+| Ugaritic | Latvian | | |
+| Uzbek (Cyrillic) | Lithuanian | | |
+| Uzbek (Latin) | Luxembourgish | | |
+| Vai | Macedonian | | |
+| Yi | Malay | | |
+| Zhuang, Chuang | Malayalam | | |
+| | Maltese | | |
+| | Maori | | |
+| | Marathi | | |
+| | Mongolian | | |
+| | Nepali | | |
+| | Norwegian (Bokmål) | | |
+| | Norwegian (Nynorsk) | | |
+| | Odia | | |
+| | Pashto | | |
+| | Persian | | |
+| | Polish | | |
+| | Portuguese - Brazil | | |
+| | Portuguese - Portugal | | |
+| | Pulaar | | |
+| | Punjabi | | |
+| | Punjabi (Pakistan) | | |
+| | Quechua (Peru) | | |
+| | Romanian | | |
+| | Russian | | |
+| | Serbian (Cyrillic) | | |
+| | Serbian (Cyrillic, Bosnia and Herzegovina) | | |
+| | Serbian (Latin) | | |
+| | Sesotho | | |
+| | Sesotho sa Leboa | | |
+| | Setswana | | |
+| | Sindhi | | |
+| | Sinhala | | |
+| | Slovak | | |
+| | Slovenian | | |
+| | Spanish | | |
+| | Swedish | | |
+| | Tajik | | |
+| | Tamil | | |
+| | Tatar | | |
+| | Telugu | | |
+| | Thai | | |
+| | Tigrinya | | |
+| | Turkish | | |
+| | Turkmen | | |
+| | Ukrainian | | |
+| | Urdu | | |
+| | Uyghur | | |
+| | Uzbek | | |
+| | Valencian | | |
+| | Vietnamese | | |
+| | Wolof | | |
+| | Yoruba | | |
ai-services Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/content-moderator/overview.md
+
+ Title: What is Azure AI Content Moderator?
+
+description: Learn how to use Content Moderator to track, flag, assess, and filter inappropriate material in user-generated content.
++++++ Last updated : 11/06/2021++
+keywords: content moderator, Azure AI Content Moderator, online moderator, content filtering software, content moderation service, content moderation
+
+#Customer intent: As a developer of content management software, I want to find out whether Azure AI Content Moderator is the right solution for my moderation needs.
++
+# What is Azure AI Content Moderator?
+++
+Azure AI Content Moderator is an AI service that lets you handle content that is potentially offensive, risky, or otherwise undesirable. It includes the AI-powered content moderation service, which scans text, images, and videos and applies content flags automatically.
+
+You may want to build content filtering software into your app to comply with regulations or maintain the intended environment for your users.
+
+This documentation contains the following article types:
+
+* [**Quickstarts**](client-libraries.md) are getting-started instructions to guide you through making requests to the service.
+* [**How-to guides**](try-text-api.md) contain instructions for using the service in more specific or customized ways.
+* [**Concepts**](text-moderation-api.md) provide in-depth explanations of the service functionality and features.
+
+For a more structured approach, follow a Training module for Content Moderator.
+* [Introduction to Content Moderator](/training/modules/intro-to-content-moderator/)
+* [Classify and moderate text with Azure AI Content Moderator](/training/modules/classify-and-moderate-text-with-azure-content-moderator/)
+
+## Where it's used
+
+The following are a few scenarios in which a software developer or team would require a content moderation service:
+
+- Online marketplaces that moderate product catalogs and other user-generated content.
+- Gaming companies that moderate user-generated game artifacts and chat rooms.
+- Social messaging platforms that moderate images, text, and videos added by their users.
+- Enterprise media companies that implement centralized moderation for their content.
+- K-12 education solution providers filtering out content that is inappropriate for students and educators.
+
+> [!IMPORTANT]
+> You cannot use Content Moderator to detect illegal child exploitation images. However, qualified organizations can use the [PhotoDNA Cloud Service](https://www.microsoft.com/photodna "Microsoft PhotoDNA Cloud Service") to screen for this type of content.
+
+## What it includes
+
+The Content Moderator service consists of several web service APIs available through both REST calls and a .NET SDK.
+
+## Moderation APIs
+
+The Content Moderator service includes Moderation APIs, which check content for material that is potentially inappropriate or objectionable.
+
+![block diagram for Content Moderator moderation APIs](images/content-moderator-mod-api.png)
+
+The following table describes the different types of moderation APIs.
+
+| API group | Description |
+| --------- | ----------- |
+|[**Text moderation**](text-moderation-api.md)| Scans text for offensive content, sexually explicit or suggestive content, profanity, and personal data.|
+|[**Custom term lists**](try-terms-list-api.md)| Scans text against a custom list of terms along with the built-in terms. Use custom lists to block or allow content according to your own content policies.|
+|[**Image moderation**](image-moderation-api.md)| Scans images for adult or racy content, detects text in images with the Optical Character Recognition (OCR) capability, and detects faces.|
+|[**Custom image lists**](try-image-list-api.md)| Scans images against a custom list of images. Use custom image lists to filter out instances of commonly recurring content that you don't want to classify again.|
+|[**Video moderation**](video-moderation-api.md)| Scans videos for adult or racy content and returns time markers for said content.|
+
+## Data privacy and security
+
+As with all of the Azure AI services, developers using the Content Moderator service should be aware of Microsoft's policies on customer data. See the [Azure AI services page](https://www.microsoft.com/trustcenter/cloudservices/cognitiveservices) on the Microsoft Trust Center to learn more.
+
+## Next steps
+
+* Complete a [client library or REST API quickstart](client-libraries.md) to implement the basic scenarios in code.
ai-services Samples Dotnet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/content-moderator/samples-dotnet.md
+
+ Title: Code samples - Content Moderator, .NET
+
+description: Learn how to use Azure AI services Content Moderator in your .NET applications through the SDK.
++++++ Last updated : 10/27/2021+
+ms.devlang: csharp
++
+# Content Moderator .NET SDK samples
+
+The following list includes links to the code samples built using the Azure AI Content Moderator SDK for .NET.
+
+- **Image moderation**: [Evaluate an image for adult and racy content, text, and faces](https://github.com/Azure-Samples/cognitive-services-dotnet-sdk-samples/blob/master/ContentModerator/ImageModeration/Program.cs). See the [.NET SDK quickstart](./client-libraries.md?pivots=programming-language-csharp%253fpivots%253dprogramming-language-csharp).
+- **Custom images**: [Moderate with custom image lists](https://github.com/Azure-Samples/cognitive-services-dotnet-sdk-samples/blob/master/ContentModerator/ImageListManagement/Program.cs). See the [.NET SDK quickstart](./client-libraries.md?pivots=programming-language-csharp%253fpivots%253dprogramming-language-csharp).
+
+> [!NOTE]
+> There is a maximum limit of **5 image lists**, and each list must **not exceed 10,000 images**.
+>
+
+- **Text moderation**: [Screen text for profanity and personal data](https://github.com/Azure-Samples/cognitive-services-dotnet-sdk-samples/blob/master/ContentModerator/TextModeration/Program.cs). See the [.NET SDK quickstart](./client-libraries.md?pivots=programming-language-csharp%253fpivots%253dprogramming-language-csharp).
+- **Custom terms**: [Moderate with custom term lists](https://github.com/Azure-Samples/cognitive-services-dotnet-sdk-samples/blob/master/ContentModerator/TermListManagement/Program.cs). See the [.NET SDK quickstart](./client-libraries.md?pivots=programming-language-csharp%253fpivots%253dprogramming-language-csharp).
+
+> [!NOTE]
+> There is a maximum limit of **5 term lists**, and each list must **not exceed 10,000 terms**.
+>
+
+- **Video moderation**: [Scan a video for adult and racy content and get results](https://github.com/Azure-Samples/cognitive-services-dotnet-sdk-samples/blob/master/ContentModerator/VideoModeration/Program.cs). See [quickstart](video-moderation-api.md).
++
+See all .NET samples at the [Content Moderator .NET samples on GitHub](https://github.com/Azure-Samples/cognitive-services-dotnet-sdk-samples/tree/master/ContentModerator).
ai-services Samples Rest https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/content-moderator/samples-rest.md
+
+ Title: Code samples - Content Moderator, C#
+
+description: Use Azure AI services Content Moderator feature based samples in your applications through REST API calls.
++++++ Last updated : 01/10/2019+++
+# Content Moderator REST samples in C#
+
+The following list includes links to code samples built using the Azure AI Content Moderator API.
+
+- [Image moderation](https://github.com/MicrosoftContentModerator/ContentModerator-API-Samples/tree/master/ImageModeration)
+- [Text moderation](https://github.com/MicrosoftContentModerator/ContentModerator-API-Samples/tree/master/TextModeration)
+- [Video moderation](https://github.com/MicrosoftContentModerator/ContentModerator-API-Samples/tree/master/VideoModeration)
+
+For walkthroughs of these samples, check out the [on-demand webinar](https://info.microsoft.com/cognitive-services-content-moderator-ondemand.html).
ai-services Term Lists Quickstart Dotnet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/content-moderator/term-lists-quickstart-dotnet.md
+
+ Title: "Check text against a custom term list in C# - Content Moderator"
+
+description: How to moderate text with custom term lists using the Content Moderator SDK for C#.
++++++ Last updated : 10/24/2019++
+#Customer intent: As a C# developer of content-providing software, I want to analyze text content for terms that are particular to my product, so that I can categorize and handle it accordingly.
++
+# Check text against a custom term list in C#
+
+The default global list of terms in Azure AI Content Moderator is sufficient for most content moderation needs. However, you might need to screen for terms that are specific to your organization.
+
+You can use the [Content Moderator SDK for .NET](https://www.nuget.org/packages/Microsoft.Azure.CognitiveServices.ContentModerator/) to create custom lists of terms to use with the Text Moderation API.
+
+This article provides information and code samples to help you get started using
+the Content Moderator SDK for .NET to:
+- Create a list.
+- Add terms to a list.
+- Screen terms against the terms in a list.
+- Delete terms from a list.
+- Delete a list.
+- Edit list information.
+- Refresh the index so that changes to the list are included in a new scan.
+
+If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/cognitive-services/) before you begin.
+
+## Sign up for Content Moderator services
+
+Before you can use Content Moderator services through the REST API or the SDK, you'll need a subscription key. Subscribe to the Content Moderator service in the [Azure portal](https://portal.azure.com/#create/Microsoft.CognitiveServicesContentModerator) to obtain one.
+
+## Create your Visual Studio project
+
+1. Add a new **Console app (.NET Framework)** project to your solution.
+
+1. Name the project **TermLists**. Select this project as the single startup project for the solution.
+
+### Install required packages
+
+Install the following NuGet packages for the TermLists project:
+
+- Microsoft.Azure.CognitiveServices.ContentModerator
+- Microsoft.Rest.ClientRuntime
+- Microsoft.Rest.ClientRuntime.Azure
+- Newtonsoft.Json
+
+### Update the program's using statements
+
+Add the following `using` statements.
+
+```csharp
+using Microsoft.Azure.CognitiveServices.ContentModerator;
+using Microsoft.Azure.CognitiveServices.ContentModerator.Models;
+using Newtonsoft.Json;
+using System;
+using System.Collections.Generic;
+using System.IO;
+using System.Threading;
+```
+
+### Create the Content Moderator client
+
+Add the following code to create a Content Moderator client for your subscription. Update the `AzureEndpoint` and `CMSubscriptionKey` fields with the values of your endpoint URL and subscription key. You can find these in the **Quick start** tab of your resource in the Azure portal.
+
+```csharp
+/// <summary>
+/// Wraps the creation and configuration of a Content Moderator client.
+/// </summary>
+/// <remarks>This class library contains insecure code. If you adapt this
+/// code for use in production, use a secure method of storing and using
+/// your Content Moderator subscription key.</remarks>
+public static class Clients
+{
+ /// <summary>
+ /// The base URL fragment for Content Moderator calls.
+ /// </summary>
+ private static readonly string AzureEndpoint = "YOUR ENDPOINT URL";
+
+ /// <summary>
+ /// Your Content Moderator subscription key.
+ /// </summary>
+ private static readonly string CMSubscriptionKey = "YOUR API KEY";
+
+ /// <summary>
+ /// Returns a new Content Moderator client for your subscription.
+ /// </summary>
+ /// <returns>The new client.</returns>
+ /// <remarks>The <see cref="ContentModeratorClient"/> is disposable.
+ /// When you have finished using the client,
+ /// you should dispose of it either directly or indirectly. </remarks>
+ public static ContentModeratorClient NewClient()
+ {
+ // Create and initialize an instance of the Content Moderator API wrapper.
+ ContentModeratorClient client = new ContentModeratorClient(new ApiKeyServiceClientCredentials(CMSubscriptionKey));
+
+ client.Endpoint = AzureEndpoint;
+ return client;
+ }
+}
+```
+
+> [!IMPORTANT]
+> Remember to remove the key from your code when you're done, and never post it publicly. For production, use a secure way of storing and accessing your credentials like [Azure Key Vault](../../key-vault/general/overview.md). See the Azure AI services [security](../security-features.md) article for more information.
+
+### Add private properties
+
+Add the following private properties to namespace TermLists, class Program.
+
+```csharp
+/// <summary>
+/// The language of the terms in the term lists.
+/// </summary>
+private const string lang = "eng";
+
+/// <summary>
+/// The minimum amount of time, in milliseconds, to wait between calls
+/// to the Content Moderator APIs.
+/// </summary>
+private const int throttleRate = 3000;
+
+/// <summary>
+/// The number of minutes to delay after updating the search index before
+/// performing text screening operations against the list.
+/// </summary>
+private const double latencyDelay = 0.5;
+```
+
+## Create a term list
+
+You create a term list with **ContentModeratorClient.ListManagementTermLists.Create**. The first parameter to **Create** is a string that contains a MIME type, which should be "application/json". For more information, see the [API reference](https://westus2.dev.cognitive.microsoft.com/docs/services/57cf755e3f9b070c105bd2c2/operations/57cf755e3f9b070868a1f67f). The second parameter is a **Body** object that contains a name and description for the new term list.
+
+> [!NOTE]
+> There is a maximum limit of **5 term lists**, and each list must **not exceed 10,000 terms**.
+
+Add the following method definition to namespace TermLists, class Program.
+
+> [!NOTE]
+> Your Content Moderator service key has a requests-per-second (RPS)
+> rate limit, and if you exceed the limit, the SDK throws an exception with a 429 error code. A free tier key has a one-RPS rate limit.
+
+```csharp
+/// <summary>
+/// Creates a new term list.
+/// </summary>
+/// <param name="client">The Content Moderator client.</param>
+/// <returns>The term list ID.</returns>
+static string CreateTermList (ContentModeratorClient client)
+{
+ Console.WriteLine("Creating term list.");
+
+ Body body = new Body("Term list name", "Term list description");
+ TermList list = client.ListManagementTermLists.Create("application/json", body);
+ if (false == list.Id.HasValue)
+ {
+ throw new Exception("TermList.Id value missing.");
+ }
+ else
+ {
+ string list_id = list.Id.Value.ToString();
+ Console.WriteLine("Term list created. ID: {0}.", list_id);
+ Thread.Sleep(throttleRate);
+ return list_id;
+ }
+}
+```
+
+## Update term list name and description
+
+You update the term list information with **ContentModeratorClient.ListManagementTermLists.Update**. The first parameter to **Update** is the term list ID. The second parameter is a MIME type, which should be "application/json". For more information, see the [API reference](https://westus2.dev.cognitive.microsoft.com/docs/services/57cf755e3f9b070c105bd2c2/operations/57cf755e3f9b070868a1f685). The third parameter is a **Body** object, which contains the new name and description.
+
+Add the following method definition to namespace TermLists, class Program.
+
+```csharp
+/// <summary>
+/// Update the information for the indicated term list.
+/// </summary>
+/// <param name="client">The Content Moderator client.</param>
+/// <param name="list_id">The ID of the term list to update.</param>
+/// <param name="name">The new name for the term list.</param>
+/// <param name="description">The new description for the term list.</param>
+static void UpdateTermList (ContentModeratorClient client, string list_id, string name = null, string description = null)
+{
+ Console.WriteLine("Updating information for term list with ID {0}.", list_id);
+ Body body = new Body(name, description);
+ client.ListManagementTermLists.Update(list_id, "application/json", body);
+ Thread.Sleep(throttleRate);
+}
+```
+
+## Add a term to a term list
+
+Add the following method definition to namespace TermLists, class Program.
+
+```csharp
+/// <summary>
+/// Add a term to the indicated term list.
+/// </summary>
+/// <param name="client">The Content Moderator client.</param>
+/// <param name="list_id">The ID of the term list to update.</param>
+/// <param name="term">The term to add to the term list.</param>
+static void AddTerm (ContentModeratorClient client, string list_id, string term)
+{
+ Console.WriteLine("Adding term \"{0}\" to term list with ID {1}.", term, list_id);
+ client.ListManagementTerm.AddTerm(list_id, term, lang);
+ Thread.Sleep(throttleRate);
+}
+```
+
+## Get all terms in a term list
+
+Add the following method definition to namespace TermLists, class Program.
+
+```csharp
+/// <summary>
+/// Get all terms in the indicated term list.
+/// </summary>
+/// <param name="client">The Content Moderator client.</param>
+/// <param name="list_id">The ID of the term list from which to get all terms.</param>
+static void GetAllTerms(ContentModeratorClient client, string list_id)
+{
+ Console.WriteLine("Getting terms in term list with ID {0}.", list_id);
+ Terms terms = client.ListManagementTerm.GetAllTerms(list_id, lang);
+ TermsData data = terms.Data;
+ foreach (TermsInList term in data.Terms)
+ {
+ Console.WriteLine(term.Term);
+ }
+ Thread.Sleep(throttleRate);
+}
+```
+
+## Add code to refresh the search index
+
+After you make changes to a term list, you refresh its search index so that the changes are included the next time you use the term list to screen text. This is similar to how a search engine on your desktop (if enabled) or a web search engine continually refreshes its index to include new files or pages.
+
+You refresh a term list search index with **ContentModeratorClient.ListManagementTermLists.RefreshIndexMethod**.
+
+Add the following method definition to namespace TermLists, class Program.
+
+```csharp
+/// <summary>
+/// Refresh the search index for the indicated term list.
+/// </summary>
+/// <param name="client">The Content Moderator client.</param>
+/// <param name="list_id">The ID of the term list to refresh.</param>
+static void RefreshSearchIndex (ContentModeratorClient client, string list_id)
+{
+ Console.WriteLine("Refreshing search index for term list with ID {0}.", list_id);
+ client.ListManagementTermLists.RefreshIndexMethod(list_id, lang);
+ Thread.Sleep((int)(latencyDelay * 60 * 1000));
+}
+```
+
+## Screen text using a term list
+
+You screen text using a term list with **ContentModeratorClient.TextModeration.ScreenText**, which takes the following parameters.
+
+- The language of the terms in the term list.
+- A MIME type, which can be "text/html", "text/xml", "text/markdown", or "text/plain".
+- The text to screen.
+- A boolean value. Set this field to **true** to autocorrect the text before screening it.
+- A boolean value. Set this field to **true** to detect personal data in the text.
+- The term list ID.
+
+For more information, see the [API reference](https://westus2.dev.cognitive.microsoft.com/docs/services/57cf753a3f9b070c105bd2c1/operations/57cf753a3f9b070868a1f66f).
+
+**ScreenText** returns a **Screen** object, which has a **Terms** property that lists any terms that Content Moderator detected in the screening. Note that if Content Moderator did not detect any terms during the screening, the **Terms** property has value **null**.
+
+Add the following method definition to namespace TermLists, class Program.
+
+```csharp
+/// <summary>
+/// Screen the indicated text for terms in the indicated term list.
+/// </summary>
+/// <param name="client">The Content Moderator client.</param>
+/// <param name="list_id">The ID of the term list to use to screen the text.</param>
+/// <param name="text">The text to screen.</param>
+static void ScreenText (ContentModeratorClient client, string list_id, string text)
+{
+ Console.WriteLine("Screening text: \"{0}\" using term list with ID {1}.", text, list_id);
+ Screen screen = client.TextModeration.ScreenText(lang, "text/plain", text, false, false, list_id);
+ if (null == screen.Terms)
+ {
+ Console.WriteLine("No terms from the term list were detected in the text.");
+ }
+ else
+ {
+ foreach (DetectedTerms term in screen.Terms)
+ {
+ Console.WriteLine(String.Format("Found term: \"{0}\" from list ID {1} at index {2}.", term.Term, term.ListId, term.Index));
+ }
+ }
+ Thread.Sleep(throttleRate);
+}
+```
+
+## Delete terms and lists
+
+Deleting a term or a list is straightforward. You use the SDK to do the following tasks:
+
+- Delete a term. (**ContentModeratorClient.ListManagementTerm.DeleteTerm**)
+- Delete all the terms in a list without deleting the list. (**ContentModeratorClient.ListManagementTerm.DeleteAllTerms**)
+- Delete a list and all of its contents. (**ContentModeratorClient.ListManagementTermLists.Delete**)
+
+### Delete a term
+
+Add the following method definition to namespace TermLists, class Program.
+
+```csharp
+/// <summary>
+/// Delete a term from the indicated term list.
+/// </summary>
+/// <param name="client">The Content Moderator client.</param>
+/// <param name="list_id">The ID of the term list from which to delete the term.</param>
+/// <param name="term">The term to delete.</param>
+static void DeleteTerm (ContentModeratorClient client, string list_id, string term)
+{
+ Console.WriteLine("Removed term \"{0}\" from term list with ID {1}.", term, list_id);
+ client.ListManagementTerm.DeleteTerm(list_id, term, lang);
+ Thread.Sleep(throttleRate);
+}
+```
+
+### Delete all terms in a term list
+
+Add the following method definition to namespace TermLists, class Program.
+
+```csharp
+/// <summary>
+/// Delete all terms from the indicated term list.
+/// </summary>
+/// <param name="client">The Content Moderator client.</param>
+/// <param name="list_id">The ID of the term list from which to delete all terms.</param>
+static void DeleteAllTerms (ContentModeratorClient client, string list_id)
+{
+ Console.WriteLine("Removing all terms from term list with ID {0}.", list_id);
+ client.ListManagementTerm.DeleteAllTerms(list_id, lang);
+ Thread.Sleep(throttleRate);
+}
+```
+
+### Delete a term list
+
+Add the following method definition to namespace TermLists, class Program.
+
+```csharp
+/// <summary>
+/// Delete the indicated term list.
+/// </summary>
+/// <param name="client">The Content Moderator client.</param>
+/// <param name="list_id">The ID of the term list to delete.</param>
+static void DeleteTermList (ContentModeratorClient client, string list_id)
+{
+ Console.WriteLine("Deleting term list with ID {0}.", list_id);
+ client.ListManagementTermLists.Delete(list_id);
+ Thread.Sleep(throttleRate);
+}
+```
+
+## Compose the Main method
+
+Add the **Main** method definition to namespace **TermLists**, class **Program**. Finally, close the **Program** class and the **TermLists** namespace.
+
+```csharp
+static void Main(string[] args)
+{
+ using (var client = Clients.NewClient())
+ {
+ string list_id = CreateTermList(client);
+
+ UpdateTermList(client, list_id, "name", "description");
+ AddTerm(client, list_id, "term1");
+ AddTerm(client, list_id, "term2");
+
+ GetAllTerms(client, list_id);
+
+ // Always remember to refresh the search index of your list
+ RefreshSearchIndex(client, list_id);
+
+ string text = "This text contains the terms \"term1\" and \"term2\".";
+ ScreenText(client, list_id, text);
+
+ DeleteTerm(client, list_id, "term1");
+
+ // Always remember to refresh the search index of your list
+ RefreshSearchIndex(client, list_id);
+
+ text = "This text contains the terms \"term1\" and \"term2\".";
+ ScreenText(client, list_id, text);
+
+ DeleteAllTerms(client, list_id);
+ DeleteTermList(client, list_id);
+
+ Console.WriteLine("Press ENTER to close the application.");
+ Console.ReadLine();
+ }
+}
+```
+
+## Run the application to see the output
+
+Your console output will look like the following:
+
+```console
+Creating term list.
+Term list created. ID: 252.
+Updating information for term list with ID 252.
+
+Adding term "term1" to term list with ID 252.
+Adding term "term2" to term list with ID 252.
+
+Getting terms in term list with ID 252.
+term1
+term2
+
+Refreshing search index for term list with ID 252.
+
+Screening text: "This text contains the terms "term1" and "term2"." using term list with ID 252.
+Found term: "term1" from list ID 252 at index 32.
+Found term: "term2" from list ID 252 at index 46.
+
+Removed term "term1" from term list with ID 252.
+
+Refreshing search index for term list with ID 252.
+
+Screening text: "This text contains the terms "term1" and "term2"." using term list with ID 252.
+Found term: "term2" from list ID 252 at index 46.
+
+Removing all terms from term list with ID 252.
+Deleting term list with ID 252.
+Press ENTER to close the application.
+```
+
+## Next steps
+
+Get the [Content Moderator .NET SDK](https://www.nuget.org/packages/Microsoft.Azure.CognitiveServices.ContentModerator/) and the [Visual Studio solution](https://github.com/Azure-Samples/cognitive-services-dotnet-sdk-samples/tree/master/ContentModerator) for this and other Content Moderator quickstarts for .NET, and get started on your integration.
ai-services Text Moderation Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/content-moderator/text-moderation-api.md
+
+ Title: Text Moderation - Content Moderator
+
+description: Use text moderation for possible unwanted text, personal data, and custom lists of terms.
++++++ Last updated : 10/27/2021++++
+# Learn text moderation concepts
+
+Use Content Moderator's text moderation models to analyze text content from sources such as chat rooms, discussion boards, chatbots, e-commerce catalogs, and documents.
+
+The service response includes the following information:
+
+- Profanity: term-based matching against a built-in list of profane terms in various languages
+- Classification: machine-assisted classification into three categories
+- Personal data
+- Auto-corrected text
+- Original text
+- Language
+
+## Profanity
+
+If the API detects any profane terms in any of the [supported languages](./language-support.md), those terms are included in the response. The response also contains their location (`Index`) in the original text. The `ListId` in the following sample JSON refers to terms found in [custom term lists](try-terms-list-api.md) if available.
+
+```json
+"Terms": [
+ {
+ "Index": 118,
+ "OriginalIndex": 118,
+ "ListId": 0,
+ "Term": "<offensive word>"
+ }
+```
+
+> [!NOTE]
+> For the **language** parameter, assign `eng` or leave it empty to see the machine-assisted **classification** response (preview feature). **This feature supports English only**.
+>
+> For **profanity terms** detection, use the [ISO 639-3 code](http://www-01.sil.org/iso639-3/codes.asp) of the supported languages listed in this article, or leave it empty.
+
+## Classification
+
+Content Moderator's machine-assisted **text classification feature** supports **English only** and helps detect potentially undesired content. Depending on context, the flagged content may be assessed as inappropriate; the response conveys the likelihood of each category. The feature uses a trained model to identify possibly abusive, derogatory, or discriminatory language, including slang, abbreviated words, offensive terms, and intentionally misspelled words.
+
+The following JSON extract shows an example output:
+
+```json
+"Classification": {
+ "ReviewRecommended": true,
+ "Category1": {
+ "Score": 1.5113095059859916E-06
+ },
+ "Category2": {
+ "Score": 0.12747249007225037
+ },
+ "Category3": {
+ "Score": 0.98799997568130493
+ }
+}
+```
+
+### Explanation
+
+- `Category1` refers to potential presence of language that may be considered sexually explicit or adult in certain situations.
+- `Category2` refers to potential presence of language that may be considered sexually suggestive or mature in certain situations.
+- `Category3` refers to potential presence of language that may be considered offensive in certain situations.
+- `Score` is between 0 and 1. The higher the score, the higher the model is predicting that the category may be applicable. This feature relies on a statistical model rather than manually coded outcomes. We recommend testing with your own content to determine how each category aligns to your requirements.
+- `ReviewRecommended` is either true or false depending on the internal score thresholds. Customers should assess whether to use this value or decide on custom thresholds based on their content policies.
+
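+The following minimal sketch applies custom thresholds on top of `ReviewRecommended`, assuming a `Screen` response object named `screenResult` whose `Classification` property mirrors the JSON fields shown above (the property names are assumptions, and the threshold values are arbitrary examples):
+
+```csharp
+// Sketch only: combine ReviewRecommended with your own per-category thresholds.
+const double category1Threshold = 0.9;  // sexually explicit or adult
+const double category2Threshold = 0.8;  // sexually suggestive or mature
+const double category3Threshold = 0.8;  // offensive
+
+var classification = screenResult.Classification;
+bool needsHumanReview =
+    classification.ReviewRecommended == true ||
+    (classification.Category1?.Score ?? 0) >= category1Threshold ||
+    (classification.Category2?.Score ?? 0) >= category2Threshold ||
+    (classification.Category3?.Score ?? 0) >= category3Threshold;
+```
+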
+## Personal data
+
+The personal data feature detects the potential presence of this information:
+
+- Email address
+- US mailing address
+- IP address
+- US phone number
+
+The following example shows a sample response:
+
+```json
+"pii":{
+ "email":[
+ {
+ "detected":"abcdef@abcd.com",
+ "sub_type":"Regular",
+ "text":"abcdef@abcd.com",
+ "index":32
+ }
+ ],
+ "ssn":[
+
+ ],
+ "ipa":[
+ {
+ "sub_type":"IPV4",
+ "text":"255.255.255.255",
+ "index":72
+ }
+ ],
+ "phone":[
+ {
+ "country_code":"US",
+ "text":"6657789887",
+ "index":56
+ }
+ ],
+ "address":[
+ {
+ "text":"1 Microsoft Way, Redmond, WA 98052",
+ "index":89
+ }
+ ]
+}
+```
+
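+With the .NET SDK, you request personal data detection by setting the PII flag in the `ScreenText` call used in the [term lists quickstart](term-lists-quickstart-dotnet.md). The following is a sketch only; it assumes a `ContentModeratorClient` named `client` and `using Newtonsoft.Json;`, and it serializes the whole response rather than guessing at individual model property names.
+
+```csharp
+// Sketch only: the fourth argument is autocorrect, the fifth enables personal data (PII)
+// detection, and the last is an optional custom term list ID.
+Screen screenResult = client.TextModeration.ScreenText(
+    "eng", "text/plain", "Contact me at abcdef@abcd.com or 255.255.255.255.",
+    false, true, null);
+
+Console.WriteLine(JsonConvert.SerializeObject(screenResult, Formatting.Indented));
+```
+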
+## Auto-correction
+
+The text moderation response can optionally return the text with basic auto-correction applied.
+
+For example, the following input text has a misspelling.
+
+> The quick brown fox jumps over the lazzy dog.
+
+If you specify auto-correction, the response contains the corrected version of the text:
+
+> The quick brown fox jumps over the lazy dog.
+
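+A minimal SDK sketch of requesting auto-correction follows. The call mirrors the one in the term lists quickstart (the fourth argument enables auto-correction); the `AutoCorrectedText` property name is an assumption based on the SDK's `Screen` model.
+
+```csharp
+// Sketch only: the fourth argument (true) requests auto-correction; PII detection is off.
+Screen corrected = client.TextModeration.ScreenText(
+    "eng", "text/plain", "The quick brown fox jumps over the lazzy dog.",
+    true, false, null);
+
+Console.WriteLine(corrected.AutoCorrectedText);
+```
+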
+## Creating and managing your custom lists of terms
+
+While the default, global list of terms works great for most cases, you may want to screen against terms that are specific to your business needs. For example, you may want to filter out any competitive brand names from posts by users.
+
+> [!NOTE]
+> There is a maximum limit of **5 term lists**, and each list must **not exceed 10,000 terms**.
+>
+
+The following example shows the matching List ID:
+
+```json
+"Terms": [
+ {
+ "Index": 118,
+ "OriginalIndex": 118,
+    "ListId": 231,
+ "Term": "<offensive word>"
+ }
+```
+
+The Content Moderator provides a [Term List API](https://westus.dev.cognitive.microsoft.com/docs/services/57cf755e3f9b070c105bd2c2/operations/57cf755e3f9b070868a1f67f) with operations for managing custom term lists. Start with the [Term Lists API Console](try-terms-list-api.md) and use the REST API code samples. Also check out the [Term Lists .NET quickstart](term-lists-quickstart-dotnet.md) if you are familiar with Visual Studio and C#.
+
+## Next steps
+
+Test out the APIs with the [Text moderation API console](try-text-api.md).
ai-services Try Image Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/content-moderator/try-image-api.md
+
+ Title: Moderate images with the API Console - Content Moderator
+
+description: Use the Image Moderation API in Azure AI Content Moderator to scan image content.
++++++ Last updated : 01/10/2019++++
+# Moderate images from the API console
+
+Use the [Image Moderation API](https://westus.dev.cognitive.microsoft.com/docs/services/57cf753a3f9b070c105bd2c1/operations/57cf753a3f9b070868a1f66c) in Azure AI Content Moderator to scan image content. The moderation operation evaluates images for adult and racy content, and compares them against custom and shared image lists.
+
+## Use the API console
+Before you can test-drive the API in the online console, you need your subscription key. This is located on the **Settings** tab, in the **Ocp-Apim-Subscription-Key** box. For more information, see [Overview](overview.md).
+
+1. Go to [Image Moderation API reference](https://westus.dev.cognitive.microsoft.com/docs/services/57cf753a3f9b070c105bd2c1/operations/57cf753a3f9b070868a1f66c).
+
+ The **Image - Evaluate** image moderation page opens.
+
+2. For **Open API testing console**, select the region that most closely describes your location.
+
+ ![Try Image - Evaluate page region selection](images/test-drive-region.png)
+
+ The **Image - Evaluate** API console opens.
+
+3. In the **Ocp-Apim-Subscription-Key** box, enter your subscription key.
+
+ ![Try Image - Evaluate console subscription key](images/try-image-api-1.png)
+
+4. In the **Request body** box, use the default sample image, or specify an image to scan. You can submit the image itself as binary bit data, or specify a publicly accessible URL for an image.
+
+ For this example, use the path provided in the **Request body** box, and then select **Send**.
+
+ ![Try Image - Evaluate console Request body](images/try-image-api-2.png)
+
+ This is the image at that URL:
+
+ ![Try Image - Evaluate console sample image](images/sample-image.jpg)
+
+5. Select **Send**.
+
+6. The API returns a probability score for each classification. It also returns a determination of whether the image meets the conditions (**true** or **false**).
+
+ ![Try Image - Evaluate console probability score and condition determination](images/try-image-api-3.png)
+
+## Face detection
+
+You can use the Image Moderation API to locate faces in an image. This option can be useful when you have privacy concerns and want to prevent a specific face from being posted on your platform.
+
+1. In the [Image Moderation API reference](https://westus.dev.cognitive.microsoft.com/docs/services/57cf753a3f9b070c105bd2c1/operations/57cf753a3f9b070868a1f66c), in the left menu, under **Image**, select **Find Faces**.
+
+ The **Image - Find Faces** page opens.
+
+2. For **Open API testing console**, select the region that most closely describes your location.
+
+ ![Try Image - Find Faces page region selection](images/test-drive-region.png)
+
+ The **Image - Find Faces** API console opens.
+
+3. Specify an image to scan. You can submit the image itself as binary bit data, or specify a publicly accessible URL to an image. This example links to an image that's used in a CNN story.
+
+ ![Try Image - Find Faces sample image](images/try-image-api-face-image.jpg)
+
+ ![Try Image - Find Faces sample request](images/try-image-api-face-request.png)
+
+4. Select **Send**. In this example, the API finds two faces, and returns their coordinates in the image.
+
+ ![Try Image - Find Faces sample Response content box](images/try-image-api-face-response.png)
+
+## Text detection via OCR capability
+
+You can use the Content Moderator OCR capability to detect text in images.
+
+1. In the [Image Moderation API reference](https://westus.dev.cognitive.microsoft.com/docs/services/57cf753a3f9b070c105bd2c1/operations/57cf753a3f9b070868a1f66c), in the left menu, under **Image**, select **OCR**.
+
+ The **Image - OCR** page opens.
+
+2. For **Open API testing console**, select the region that most closely describes your location.
+
+ ![Image - OCR page region selection](images/test-drive-region.png)
+
+ The **Image - OCR** API console opens.
+
+3. In the **Ocp-Apim-Subscription-Key** box, enter your subscription key.
+
+4. In the **Request body** box, use the default sample image. This is the same image that's used in the preceding section.
+
+5. Select **Send**. The extracted text is displayed in JSON:
+
+ ![Image - OCR sample Response content box](images/try-image-api-ocr.png)
+
+## Next steps
+
+Use the REST API in your code, or follow the [.NET SDK quickstart](./client-libraries.md?pivots=programming-language-csharp%253fpivots%253dprogramming-language-csharp) to add image moderation to your application.
ai-services Try Image List Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/content-moderator/try-image-list-api.md
+
+ Title: Moderate images with custom lists and the API console - Content Moderator
+
+description: You use the List Management API in Azure AI Content Moderator to create custom lists of images.
++++++ Last updated : 01/10/2019++++
+# Moderate with custom image lists in the API console
+
+You use the [List Management API](https://westus.dev.cognitive.microsoft.com/docs/services/57cf755e3f9b070c105bd2c2/operations/57cf755e3f9b070868a1f672) in Azure AI Content Moderator to create custom lists of images. Use the custom lists of images with the Image Moderation API. The image moderation operation evaluates your image. If you create custom lists, the operation also compares it to the images in your custom lists. You can use custom lists to block or allow the image.
+
+> [!NOTE]
+> There is a maximum limit of **5 image lists**, and each list must **not exceed 10,000 images**.
+>
+
+You use the List Management API to do the following tasks:
+
+- Create a list.
+- Add images to a list.
+- Screen images against the images in a list.
+- Delete images from a list.
+- Delete a list.
+- Edit list information.
+- Refresh the index so that changes to the list are included in a new scan.
+
+## Use the API console
+Before you can test-drive the API in the online console, you need your subscription key. This key is located on the **Settings** tab, in the **Ocp-Apim-Subscription-Key** box. For more information, see [Overview](overview.md).
+
+## Refresh search index
+
+After you make changes to an image list, you must refresh its index for changes to be included in future scans. This step is similar to how a search engine on your desktop (if enabled) or a web search engine continually refreshes its index to include new files or pages.
+
+1. In the [Image List Management API reference](https://westus.dev.cognitive.microsoft.com/docs/services/57cf755e3f9b070c105bd2c2/operations/57cf755e3f9b070868a1f672), in the left menu, select **Image Lists**, and then select **Refresh Search Index**.
+
+ The **Image Lists - Refresh Search Index** page opens.
+
+2. For **Open API testing console**, select the region that most closely describes your location.
+
+ ![Image Lists - Refresh Search Index page region selection](images/test-drive-region.png)
+
+ The **Image Lists - Refresh Search Index** API console opens.
+
+3. In the **listId** box, enter the list ID. Enter your subscription key, and then select **Send**.
+
+ ![Image Lists - Refresh Search Index console Response content box](images/try-image-list-refresh-1.png)
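+
+If you maintain your lists from a script, you can trigger the same refresh from code. The following minimal Python sketch is modeled on the console request above; the endpoint host and the `imagelists/{listId}/RefreshIndex` path are assumptions to verify against the API reference.
+
+```python
+import requests
+
+# Assumptions: placeholder endpoint, key, and list ID.
+ENDPOINT = "https://westus.api.cognitive.microsoft.com"
+SUBSCRIPTION_KEY = "<your_subscription_key>"
+LIST_ID = "<your_image_list_id>"
+
+url = f"{ENDPOINT}/contentmoderator/lists/v1.0/imagelists/{LIST_ID}/RefreshIndex"
+headers = {"Ocp-Apim-Subscription-Key": SUBSCRIPTION_KEY}
+
+# Refreshing the index takes no request body.
+response = requests.post(url, headers=headers)
+response.raise_for_status()
+print(response.json())   # refresh status, as shown in the console response box
+```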
++
+## Create an image list
+
+1. Go to the [Image List Management API reference](https://westus.dev.cognitive.microsoft.com/docs/services/57cf755e3f9b070c105bd2c2/operations/57cf755e3f9b070868a1f672).
+
+ The **Image Lists - Create** page opens.
+
+3. For **Open API testing console**, select the region that most closely describes your location.
+
+ ![Image Lists - Create page region selection](images/test-drive-region.png)
+
+ The **Image Lists - Create** API console opens.
+
+4. In the **Ocp-Apim-Subscription-Key** box, enter your subscription key.
+
+5. In the **Request body** box, enter values for **Name** (for example, MyList) and **Description**.
+
+ ![Image Lists - Create console Request body name and description](images/try-terms-list-create-1.png)
+
+6. Use key-value pair placeholders to assign more descriptive metadata to your list.
+
+ ```json
+ {
+ "Name": "MyExclusionList",
+ "Description": "MyListDescription",
+ "Metadata":
+ {
+ "Category": "Competitors",
+ "Type": "Exclude"
+ }
+ }
+ ```
+
+ Add list metadata as key-value pairs, and not the actual images.
+
+7. Select **Send**. Your list is created. Note the **ID** value that is associated with the new list. You need this ID for other image list management functions.
+
+ ![Image Lists - Create console Response content box shows the list ID](images/try-terms-list-create-2.png)
+
+8. Next, add images to MyList. In the left menu, select **Image**, and then select **Add Image**.
+
+ The **Image - Add Image** page opens.
+
+9. For **Open API testing console**, select the region that most closely describes your location.
+
+ ![Image - Add Image page region selection](images/test-drive-region.png)
+
+ The **Image - Add Image** API console opens.
+
+10. In the **listId** box, enter the list ID that you generated, and then enter the URL of the image that you want to add. Enter your subscription key, and then select **Send**.
+
+11. To verify that the image has been added to the list, in the left menu, select **Image**, and then select **Get All Image Ids**.
+
+ The **Image - Get All Image Ids** API console opens.
+
+12. In the **listId** box, enter the list ID, and then enter your subscription key. Select **Send**.
+
+ ![Image - Get All Image Ids console Response content box lists the images that you entered](images/try-image-list-create-11.png)
+
+13. Add a few more images. Now that you have created a custom list of images, try [evaluating images](try-image-api.md) by using the custom image list.
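+
+The same create, add, and verify flow can be scripted. The following Python sketch uses the `requests` library; the endpoint host, the List Management paths, the body fields, and the `Id` field in the create response are assumptions taken from the console walkthrough above, so verify them against the API reference.
+
+```python
+import requests
+
+# Assumptions: placeholder endpoint, key, and image URL.
+ENDPOINT = "https://westus.api.cognitive.microsoft.com"
+SUBSCRIPTION_KEY = "<your_subscription_key>"
+IMAGE_URL = "<publicly_accessible_image_url>"
+
+headers = {
+    "Ocp-Apim-Subscription-Key": SUBSCRIPTION_KEY,
+    "Content-Type": "application/json",
+}
+base = f"{ENDPOINT}/contentmoderator/lists/v1.0/imagelists"
+
+# 1. Create the list, with metadata as key-value pairs (not actual images).
+list_body = {
+    "Name": "MyExclusionList",
+    "Description": "MyListDescription",
+    "Metadata": {"Category": "Competitors", "Type": "Exclude"},
+}
+create = requests.post(base, headers=headers, json=list_body)
+create.raise_for_status()
+list_id = create.json()["Id"]   # "Id" mirrors the console response; adjust if it differs
+print("Created list:", list_id)
+
+# 2. Add an image to the list by URL.
+image_body = {"DataRepresentation": "URL", "Value": IMAGE_URL}
+add = requests.post(f"{base}/{list_id}/images", headers=headers, json=image_body)
+print("Add image:", add.status_code, add.json())
+
+# 3. Verify by listing all image IDs in the list.
+ids = requests.get(f"{base}/{list_id}/images", headers=headers)
+print("Image IDs:", ids.json())
+```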
+
+## Delete images and lists
+
+Deleting an image or a list is straightforward. You can use the API to do the following tasks:
+
+- Delete an image. (**Image - Delete**)
+- Delete all the images in a list without deleting the list. (**Image - Delete All Images**)
+- Delete a list and all of its contents. (**Image Lists - Delete**)
+
+This example deletes a single image:
+
+1. In the [Image List Management API reference](https://westus.dev.cognitive.microsoft.com/docs/services/57cf755e3f9b070c105bd2c2/operations/57cf755e3f9b070868a1f672), in the left menu, select **Image**, and then select **Delete**.
+
+ The **Image - Delete** page opens.
+
+2. For **Open API testing console**, select the region that most closely describes your location.
+
+ ![Image - Delete page region selection](images/test-drive-region.png)
+
+ The **Image - Delete** API console opens.
+
+3. In the **listId** box, enter the ID of the list to delete an image from. This ID is the number returned in the **Image - Get All Image Ids** console for MyList. Then, enter the **ImageId** of the image to delete.
+
+   In our example, the list ID is **58953**, the value for **ContentSource**. The image ID is **59021**, the value for **ContentIds**.
+
+4. Enter your subscription key, and then select **Send**.
+
+5. To verify that the image has been deleted, use the **Image - Get All Image Ids** console.
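+
+The equivalent delete call from code looks like the following Python sketch. The list ID and image ID are the example values from the steps above, and the DELETE path is an assumption based on the **Image - Delete** console; verify it in the API reference.
+
+```python
+import requests
+
+# Assumptions: placeholder endpoint and key; example IDs from the steps above.
+ENDPOINT = "https://westus.api.cognitive.microsoft.com"
+SUBSCRIPTION_KEY = "<your_subscription_key>"
+LIST_ID = "58953"     # example list ID
+IMAGE_ID = "59021"    # example image ID
+
+url = f"{ENDPOINT}/contentmoderator/lists/v1.0/imagelists/{LIST_ID}/images/{IMAGE_ID}"
+headers = {"Ocp-Apim-Subscription-Key": SUBSCRIPTION_KEY}
+
+response = requests.delete(url, headers=headers)
+print("Delete image:", response.status_code)
+
+# Verify the deletion by listing the remaining image IDs.
+remaining = requests.get(
+    f"{ENDPOINT}/contentmoderator/lists/v1.0/imagelists/{LIST_ID}/images",
+    headers=headers,
+)
+print("Remaining image IDs:", remaining.json())
+```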
+
+## Change list information
+
+You can edit a list's name and description, and add metadata items.
+
+1. In the [Image List Management API reference](https://westus.dev.cognitive.microsoft.com/docs/services/57cf755e3f9b070c105bd2c2/operations/57cf755e3f9b070868a1f672), in the left menu, select **Image Lists**, and then select **Update Details**.
+
+ The **Image Lists - Update Details** page opens.
+
+2. For **Open API testing console**, select the region that most closely describes your location.
+
+ ![Image Lists - Update Details page region selection](images/test-drive-region.png)
+
+ The **Image Lists - Update Details** API console opens.
+
+3. In the **listId** box, enter the list ID, and then enter your subscription key.
+
+4. In the **Request body** box, make your edits, and then select the **Send** button on the page.
+
+ ![Image Lists - Update Details console Request body edits](images/try-terms-list-change-1.png)
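+
+To script the same update, you can send the new details in a PUT request, as in the following Python sketch. The endpoint host, path, and body fields are assumptions modeled on the console request above; check them against the API reference.
+
+```python
+import requests
+
+# Assumptions: placeholder endpoint, key, and list ID.
+ENDPOINT = "https://westus.api.cognitive.microsoft.com"
+SUBSCRIPTION_KEY = "<your_subscription_key>"
+LIST_ID = "<your_image_list_id>"
+
+url = f"{ENDPOINT}/contentmoderator/lists/v1.0/imagelists/{LIST_ID}"
+headers = {
+    "Ocp-Apim-Subscription-Key": SUBSCRIPTION_KEY,
+    "Content-Type": "application/json",
+}
+
+# Edit the name, description, or metadata key-value pairs.
+body = {
+    "Name": "MyRenamedList",
+    "Description": "Updated description",
+    "Metadata": {"Category": "Competitors", "Type": "Exclude"},
+}
+
+response = requests.put(url, headers=headers, json=body)
+response.raise_for_status()
+print(response.json())
+```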
+
+
+## Next steps
+
+Use the REST API in your code or start with the [Image lists .NET quickstart](image-lists-quickstart-dotnet.md) to integrate with your application.
ai-services Try Terms List Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/content-moderator/try-terms-list-api.md
+
+ Title: Moderate text with custom term lists - Content Moderator
+
+description: Use the List Management API to create custom lists of terms to use with the Text Moderation API.
+ Last updated : 01/10/2019
+# Moderate with custom term lists in the API console
+
+The default global list of terms in Azure AI Content Moderator is sufficient for most content moderation needs. However, you might need to screen for terms that are specific to your organization. For example, you might want to tag competitor names for further review.
+
+Use the [List Management API](https://westus.dev.cognitive.microsoft.com/docs/services/57cf755e3f9b070c105bd2c2/operations/57cf755e3f9b070868a1f67f) to create custom lists of terms to use with the Text Moderation API. The **Text - Screen** operation scans your text for profanity, and also compares text against custom and shared blocklists.
+
+> [!NOTE]
+> There is a maximum limit of **5 term lists**, and each list must **not exceed 10,000 terms**.
+>
+
+You can use the List Management API to do the following tasks:
+- Create a list.
+- Add terms to a list.
+- Screen terms against the terms in a list.
+- Delete terms from a list.
+- Delete a list.
+- Edit list information.
+- Refresh the index so that changes to the list are included in a new scan.
+
+## Use the API console
+
+Before you can test-drive the API in the online console, you need your subscription key. This key is located on the **Settings** tab, in the **Ocp-Apim-Subscription-Key** box. For more information, see [Overview](overview.md).
+
+## Refresh search index
+
+After you make changes to a term list, you must refresh its index for changes to be included in future scans. This step is similar to how a search engine on your desktop (if enabled) or a web search engine continually refreshes its index to include new files or pages.
+
+1. In the [Term List Management API reference](https://westus.dev.cognitive.microsoft.com/docs/services/57cf755e3f9b070c105bd2c2/operations/57cf755e3f9b070868a1f67f), in the left menu, select **Term Lists**, and then select **Refresh Search Index**.
+
+ The **Term Lists - Refresh Search Index** page opens.
+
+2. For **Open API testing console**, select the region that most closely describes your location.
+
+ ![Term Lists - Refresh Search Index page region selection](images/test-drive-region.png)
+
+ The **Term Lists - Refresh Search Index** API console opens.
+
+3. In the **listId** box, enter the list ID. Enter your subscription key, and then select **Send**.
+
+ ![Term Lists API - Refresh Search Index console Response content box](images/try-terms-list-refresh-1.png)
+
+## Create a term list
+1. Go to the [Term List Management API reference](https://westus.dev.cognitive.microsoft.com/docs/services/57cf755e3f9b070c105bd2c2/operations/57cf755e3f9b070868a1f67f).
+
+ The **Term Lists - Create** page opens.
+
+2. For **Open API testing console**, select the region that most closely describes your location.
+
+ ![Term Lists - Create page region selection](images/test-drive-region.png)
+
+ The **Term Lists - Create** API console opens.
+
+3. In the **Ocp-Apim-Subscription-Key** box, enter your subscription key.
+
+4. In the **Request body** box, enter values for **Name** (for example, MyList) and **Description**.
+
+ ![Term Lists - Create console Request body name and description](images/try-terms-list-create-1.png)
+
+5. Use key-value pair placeholders to assign more descriptive metadata to your list.
+
+ ```json
+ {
+ "Name": "MyExclusionList",
+ "Description": "MyListDescription",
+ "Metadata":
+ {
+ "Category": "Competitors",
+ "Type": "Exclude"
+ }
+ }
+ ```
+
+ Add list metadata as key-value pairs, and not actual terms.
+
+6. Select **Send**. Your list is created. Note the **ID** value that is associated with the new list. You need this ID for other term list management functions.
+
+ ![Term Lists - Create console Response content box shows the list ID](images/try-terms-list-create-2.png)
+
+7. Add terms to MyList. In the left menu, under **Term**, select **Add Term**.
+
+ The **Term - Add Term** page opens.
+
+8. For **Open API testing console**, select the region that most closely describes your location.
+
+ ![Term - Add Term page region selection](images/test-drive-region.png)
+
+ The **Term - Add Term** API console opens.
+
+9. In the **listId** box, enter the list ID that you generated, and select a value for **language**. Enter your subscription key, and then select **Send**.
+
+ ![Term - Add Term console query parameters](images/try-terms-list-create-3.png)
+
+10. To verify that the term has been added to the list, in the left menu, select **Term**, and then select **Get All Terms**.
+
+ The **Term - Get All Terms** API console opens.
+
+11. In the **listId** box, enter the list ID, and then enter your subscription key. Select **Send**.
+
+12. In the **Response content** box, verify the terms you entered.
+
+ ![Term - Get All Terms console Response content box lists the terms that you entered](images/try-terms-list-create-4.png)
+
+13. Add a few more terms. Now that you have created a custom list of terms, try [scanning some text](try-text-api.md) by using the custom term list.
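+
+The same flow can be scripted instead of using the console. The following Python sketch creates a list, adds a term, and verifies it; the endpoint host, the Term List Management paths (with the term in the URL path and a `language` query parameter), and the `Id` field in the create response are assumptions based on the console walkthrough above, so verify them against the API reference.
+
+```python
+import requests
+
+# Assumptions: placeholder endpoint and key.
+ENDPOINT = "https://westus.api.cognitive.microsoft.com"
+SUBSCRIPTION_KEY = "<your_subscription_key>"
+LANGUAGE = "eng"
+
+headers = {
+    "Ocp-Apim-Subscription-Key": SUBSCRIPTION_KEY,
+    "Content-Type": "application/json",
+}
+base = f"{ENDPOINT}/contentmoderator/lists/v1.0/termlists"
+
+# 1. Create the list, with metadata as key-value pairs (not actual terms).
+list_body = {
+    "Name": "MyExclusionList",
+    "Description": "MyListDescription",
+    "Metadata": {"Category": "Competitors", "Type": "Exclude"},
+}
+create = requests.post(base, headers=headers, json=list_body)
+create.raise_for_status()
+list_id = create.json()["Id"]   # "Id" mirrors the console response; adjust if it differs
+print("Created list:", list_id)
+
+# 2. Add a term to the list. The term is part of the URL path.
+term = "exampleterm"
+add = requests.post(
+    f"{base}/{list_id}/terms/{term}", headers=headers, params={"language": LANGUAGE}
+)
+print("Add term:", add.status_code)
+
+# 3. Verify by getting all terms in the list.
+terms = requests.get(
+    f"{base}/{list_id}/terms", headers=headers, params={"language": LANGUAGE}
+)
+print("Terms:", terms.json())
+```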
+
+## Delete terms and lists
+
+Deleting a term or a list is straightforward. You use the API to do the following tasks:
+
+- Delete a term. (**Term - Delete**)
+- Delete all the terms in a list without deleting the list. (**Term - Delete All Terms**)
+- Delete a list and all of its contents. (**Term Lists - Delete**)
+
+This example deletes a single term.
+
+1. In the [Term List Management API reference](https://westus.dev.cognitive.microsoft.com/docs/services/57cf755e3f9b070c105bd2c2/operations/57cf755e3f9b070868a1f67f), in the left menu, select **Term**, and then select **Delete**.
+
+    The **Term - Delete** page opens.
+
+2. For **Open API testing console**, select the region that most closely describes your location.
+
+ ![Term - Delete page region selection](images/test-drive-region.png)
+
+ The **Term - Delete** API console opens.
+
+3. In the **listId** box, enter the ID of the list that you want to delete a term from. This ID is the number (in our example, **122**) that is returned in the **Term Lists - Get Details** console for MyList. Enter the term and select a language.
+
+ ![Term - Delete console query parameters](images/try-terms-list-delete-1.png)
+
+4. Enter your subscription key, and then select **Send**.
+
+5. To verify that the term has been deleted, use the **Term Lists - Get All** console.
+
+ ![Term Lists - Get All console Response content box shows that term is deleted](images/try-terms-list-delete-2.png)
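+
+The equivalent delete call from code follows the same pattern as the earlier sketches. The DELETE path, the `language` parameter, and the example list ID are assumptions based on the console steps above; verify them in the API reference.
+
+```python
+import requests
+
+# Assumptions: placeholder endpoint and key; example list ID from the steps above.
+ENDPOINT = "https://westus.api.cognitive.microsoft.com"
+SUBSCRIPTION_KEY = "<your_subscription_key>"
+LIST_ID = "122"          # example list ID
+TERM = "exampleterm"     # term to remove
+LANGUAGE = "eng"
+
+url = f"{ENDPOINT}/contentmoderator/lists/v1.0/termlists/{LIST_ID}/terms/{TERM}"
+headers = {"Ocp-Apim-Subscription-Key": SUBSCRIPTION_KEY}
+
+response = requests.delete(url, headers=headers, params={"language": LANGUAGE})
+print("Delete term:", response.status_code)
+
+# Verify the deletion by getting all terms that remain in the list.
+remaining = requests.get(
+    f"{ENDPOINT}/contentmoderator/lists/v1.0/termlists/{LIST_ID}/terms",
+    headers=headers,
+    params={"language": LANGUAGE},
+)
+print("Remaining terms:", remaining.json())
+```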
+
+## Change list information
+
+You can edit a list's name and description, and add metadata items.
+
+1. In the [Term List Management API reference](https://westus.dev.cognitive.microsoft.com/docs/services/57cf755e3f9b070c105bd2c2/operations/57cf755e3f9b070868a1f67f), in the left menu, select **Term Lists**, and then select **Update Details**.
+
+ The **Term Lists - Update Details** page opens.
+
+2. For **Open API testing console**, select the region that most closely describes your location.
+
+ ![Term Lists - Update Details page region selection](images/test-drive-region.png)
+
+ The **Term Lists - Update Details** API console opens.
+
+3. In the **listId** box, enter the list ID, and then enter your subscription key.
+
+4. In the **Request body** box, make your edits, and then select **Send**.
+
+ ![Term Lists - Update Details console Request body edits](images/try-terms-list-change-1.png)
+
+
+## Next steps
+
+Use the REST API in your code or start with the [Term lists .NET quickstart](term-lists-quickstart-dotnet.md) to integrate with your application.
ai-services Try Text Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/content-moderator/try-text-api.md
+
+ Title: Moderate text by using the Text Moderation API - Content Moderator
+
+description: Test-drive text moderation by using the Text Moderation API in the online console.
+ Last updated : 10/27/2021
+# Moderate text from the API console
+
+Use the [Text Moderation API](https://westus.dev.cognitive.microsoft.com/docs/services/57cf753a3f9b070c105bd2c1/operations/57cf753a3f9b070868a1f66f) in Azure AI Content Moderator to scan your text content for profanity and compare it against custom and shared lists.
+
+## Get your API key
+
+Before you can test-drive the API in the online console, you need your subscription key. This key is located on the **Settings** tab, in the **Ocp-Apim-Subscription-Key** box. For more information, see [Overview](overview.md).
+
+## Navigate to the API reference
+
+Go to the [Text Moderation API reference](https://westus.dev.cognitive.microsoft.com/docs/services/57cf753a3f9b070c105bd2c1/operations/57cf753a3f9b070868a1f66f).
+
+ The **Text - Screen** page opens.
+
+## Open the API console
+
+For **Open API testing console**, select the region that most closely describes your location.
+
+ ![Text - Screen page region selection](images/test-drive-region.png)
+
+ The **Text - Screen** API console opens.
+
+## Select the inputs
+
+### Parameters
+
+Select the query parameters that you want to use in your text screen. For this example, use the default value for **language**. You can also leave it blank because the operation will automatically detect the likely language as part of its execution.
+
+> [!NOTE]
+> For the **language** parameter, assign `eng` or leave it empty to see the machine-assisted **classification** response (preview feature). **This feature supports English only**.
+>
+> For **profanity terms** detection, use the [ISO 639-3 code](http://www-01.sil.org/iso639-3/codes.asp) of the supported languages listed in this article, or leave it empty.
+
+For **autocorrect**, **PII**, and **classify (preview)**, select **true**. Leave the **ListId** field empty.
+
+ ![Text - Screen console query parameters](images/text-api-console-inputs.png)
+
+### Content type
+
+For **Content-Type**, select the type of content you want to screen. For this example, use the default **text/plain** content type. In the **Ocp-Apim-Subscription-Key** box, enter your subscription key.
+
+### Sample text to scan
+
+In the **Request body** box, enter some text. The following example shows an intentional typo in the text.
+
+```
+Is this a grabage or <offensive word> email abcdef@abcd.com, phone: 4255550111, IP:
+255.255.255.255, 1234 Main Boulevard, Panapolis WA 96555.
+```
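+
+You can send the same request from code instead of the console. The following Python sketch posts the sample text as `text/plain` with the query parameters described above; the endpoint host and the `ProcessText/Screen` path are assumptions to confirm in the API reference.
+
+```python
+import requests
+
+# Assumptions: placeholder endpoint and key.
+ENDPOINT = "https://westus.api.cognitive.microsoft.com"
+SUBSCRIPTION_KEY = "<your_subscription_key>"
+
+url = f"{ENDPOINT}/contentmoderator/moderate/v1.0/ProcessText/Screen"
+headers = {
+    "Ocp-Apim-Subscription-Key": SUBSCRIPTION_KEY,
+    "Content-Type": "text/plain",
+}
+params = {
+    "autocorrect": "True",
+    "PII": "True",
+    "classify": "True",
+    "language": "eng",   # leave empty to let the service detect the language
+}
+
+sample_text = (
+    "Is this a grabage or <offensive word> email abcdef@abcd.com, phone: 4255550111, "
+    "IP: 255.255.255.255, 1234 Main Boulevard, Panapolis WA 96555."
+)
+
+response = requests.post(url, headers=headers, params=params, data=sample_text.encode("utf-8"))
+response.raise_for_status()
+
+# The response includes detected profanity terms, PII, classification (preview),
+# and the auto-corrected text, as described in the next section.
+print(response.json())
+```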
+
+## Analyze the response
+
+The following response shows the various insights from the API. It contains potential profanity, personal data, classification (preview), and the auto-corrected version.
+
+> [!NOTE]
+> The machine-assisted 'Classification' feature is in preview and supports English only.
+
+```json
+{
+ "original_text":"Is this a grabage or <offensive word> email abcdef@abcd.com, phone:
+ 6657789887, IP: 255.255.255.255, 1 Microsoft Way, Redmond, WA 98052.",
+ "normalized_text":" grabage <offensive word> email abcdef@abcd.com, phone:
+ 6657789887, IP: 255.255.255.255, 1 Microsoft Way, Redmond, WA 98052.",
+ "auto_corrected_text":"Is this a garbage or <offensive word> email abcdef@abcd.com, phone:
+ 6657789887, IP: 255.255.255.255, 1 Microsoft Way, Redmond, WA 98052.",
+ "status":{
+ "code":3000,
+ "description":"OK"
+ },
+ "pii":{
+ "email":[
+ {
+ "detected":"abcdef@abcd.com",
+ "sub_type":"Regular",
+ "text":"abcdef@abcd.com",
+ "index":32
+ }
+ ],
+ "ssn":[
+
+ ],
+ "ipa":[
+ {
+ "sub_type":"IPV4",
+ "text":"255.255.255.255",
+ "index":72
+ }
+ ],
+ "phone":[
+ {
+ "country_code":"US",
+ "text":"6657789887",
+ "index":56
+ }
+ ],
+ "address":[
+ {
+ "text":"1 Microsoft Way, Redmond, WA 98052",
+ "index":89
+ }
+ ]
+ },
+ "language":"eng",
+ "terms":[
+ {
+ "index":12,
+ "original_index":21,
+ "list_id":0,
+ "term":"<offensive word>"
+ }
+ ],
+ "tracking_id":"WU_ibiza_65a1016d-0f67-45d2-b838-b8f373d6d52e_ContentModerator.
+ F0_fe000d38-8ecd-47b5-a8b0-4764df00e3b5"
+}
+```
+
+For a detailed explanation of all sections in the JSON response, refer to the [Text moderation](text-moderation-api.md) conceptual guide.
+
+## Next steps
+
+Use the REST API in your code, or follow the [.NET SDK quickstart](./client-libraries.md?pivots=programming-language-csharp) to integrate with your application.
ai-services Video Moderation Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/content-moderator/video-moderation-api.md
+
+ Title: "Analyze video content for objectionable material in C# - Content Moderator"
+
+description: How to analyze video content for various objectionable material using the Content Moderator SDK for .NET
+ Last updated : 10/27/2021
+ms.devlang: csharp
+
+#Customer intent: As a C# developer of content management software, I want to analyze video content for offensive or inappropriate material so that I can categorize and handle it accordingly.
++
+# Analyze video content for objectionable material in C#
+
+This article provides information and code samples to help you get started using the [Content Moderator SDK for .NET](https://www.nuget.org/packages/Microsoft.Azure.CognitiveServices.ContentModerator/) to scan video content for adult or racy content.
+
+If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/cognitive-services/) before you begin.
+
+## Prerequisites
+- Any edition of [Visual Studio 2015 or 2017](https://www.visualstudio.com/downloads/)
+
+## Set up Azure resources
+
+The Content Moderator's video moderation capability is available as a free public preview **media processor** in Azure Media Services (AMS). Azure Media Services is a specialized Azure service for storing and streaming video content.
+
+### Create an Azure Media Services account
+
+Follow the instructions in [Create an Azure Media Services account](/azure/media-services/previous/media-services-portal-create-account) to subscribe to AMS and create an associated Azure storage account. In that storage account, create a new Blob storage container.
+
+### Create an Azure Active Directory application
+
+Navigate to your new AMS subscription in the Azure portal and select **API access** from the side menu. Select **Connect to Azure Media Services with service principal**. Note the value in the **REST API endpoint** field; you will need this later.
+
+In the **Azure AD app** section, select **Create New** and name your new Azure AD application registration (for example, "VideoModADApp"). Select **Save** and wait a few minutes while the application is configured. Then, you should see your new app registration under the **Azure AD app** section of the page.
+
+Select your app registration and click the **Manage application** button below it. Note the value in the **Application ID** field; you will need this later. Select **Settings** > **Keys**, and enter a description for a new key (such as "VideoModKey"). Select **Save**, and then notice the new key value. Copy this string and save it somewhere secure.
+
+For a more thorough walkthrough of the above process, see [Get started with Azure AD authentication](/azure/media-services/previous/media-services-portal-get-started-with-aad).
+
+Once you've done this, you can use the video moderation media processor in two different ways.
+
+## Use Azure Media Services Explorer
+
+The Azure Media Services Explorer is a user-friendly frontend for AMS. Use it to browse your AMS account, upload videos, and scan content with the Content Moderator media processor. Download and install it from [GitHub](https://github.com/Azure/Azure-Media-Services-Explorer/releases), or see the [Azure Media Services Explorer blog post](https://azure.microsoft.com/blog/managing-media-workflows-with-the-new-azure-media-services-explorer-tool/) for more information.
+
+![Azure Media Services explorer with Content Moderator](images/ams-explorer-content-moderator.png)
+
+## Create the Visual Studio project
+
+1. In Visual Studio, create a new **Console app (.NET Framework)** project and name it **VideoModeration**.
+1. If there are other projects in your solution, select this one as the single startup project.
+1. Get the required NuGet packages. Right-click on your project in the Solution Explorer and select **Manage NuGet Packages**; then find and install the following packages:
+ - windowsazure.mediaservices
+ - windowsazure.mediaservices.extensions
+
+## Add video moderation code
+
+Next, you'll copy and paste the code from this guide into your project to implement a basic content moderation scenario.
+
+### Update the program's using statements
+
+Add the following `using` statements to the top of your _Program.cs_ file.
+
+```csharp
+using System;
+using System.Linq;
+using System.Threading.Tasks;
+using Microsoft.WindowsAzure.MediaServices.Client;
+using System.IO;
+using System.Threading;
+using Microsoft.WindowsAzure.Storage.Blob;
+using Microsoft.WindowsAzure.Storage;
+using Microsoft.WindowsAzure.Storage.Auth;
+using System.Collections.Generic;
+```
+
+### Set up resource references
+
+Add the following static fields to the **Program** class in _Program.cs_. These fields hold the information necessary for connecting to your AMS subscription. Fill them in with the values you got in the steps above. Note that `CLIENT_ID` is the **Application ID** value of your Azure AD app, and `CLIENT_SECRET` is the value of the "VideoModKey" that you created for that app.
+
+```csharp
+// declare constants and globals
+private static CloudMediaContext _context = null;
+private static CloudStorageAccount _StorageAccount = null;
+
+// Azure Media Services (AMS) associated Storage Account, Key, and the Container that has
+// a list of Blobs to be processed.
+static string STORAGE_NAME = "YOUR AMS ASSOCIATED BLOB STORAGE NAME";
+static string STORAGE_KEY = "YOUR AMS ASSOCIATED BLOB STORAGE KEY";
+static string STORAGE_CONTAINER_NAME = "YOUR BLOB CONTAINER FOR VIDEO FILES";
+
+private static StorageCredentials _StorageCredentials = null;
+
+// Azure Media Services authentication.
+private const string AZURE_AD_TENANT_NAME = "microsoft.onmicrosoft.com";
+private const string CLIENT_ID = "YOUR CLIENT ID";
+private const string CLIENT_SECRET = "YOUR CLIENT SECRET";
+
+// REST API endpoint, for example "https://accountname.restv2.westcentralus.media.azure.net/API".
+private const string REST_API_ENDPOINT = "YOUR API ENDPOINT";
+
+// Content Moderator Media Processor Name
+private const string MEDIA_PROCESSOR = "Azure Media Content Moderator";
+
+// Input and Output files in the current directory of the executable
+private const string INPUT_FILE = "VIDEO FILE NAME";
+private const string OUTPUT_FOLDER = "";
+
+// JSON settings file
+private static readonly string CONTENT_MODERATOR_PRESET_FILE = "preset.json";
+
+```
+
+> [!IMPORTANT]
+> Remember to remove the keys from your code when you're done, and never post them publicly. For production, use a secure way of storing and accessing your credentials like [Azure Key Vault](../../key-vault/general/overview.md). See the Azure AI services [security](../security-features.md) article for more information.
+
+If you wish to use a local video file (simplest case), add it to the project and enter its path as the `INPUT_FILE` value (relative paths are relative to the execution directory).
+
+You will also need to create the _preset.json_ file in the current directory and use it to specify a version number. For example:
+
+```JSON
+{
+ "version": "2.0"
+}
+```
+
+### Load the input video(s)
+
+The **Main** method of the **Program** class will create an Azure Media Context and then an Azure Storage Context (in case your videos are in blob storage). The remaining code scans a video from a local folder, blob, or multiple blobs within an Azure storage container. You can try all options by commenting out the other lines of code.
+
+```csharp
+// Create Azure Media Context
+CreateMediaContext();
+
+// Create Storage Context
+CreateStorageContext();
+
+// Use a file as the input.
+IAsset asset = CreateAssetfromFile();
+
+// -- OR
+
+// Or a blob as the input
+// IAsset asset = CreateAssetfromBlob((CloudBlockBlob)GetBlobsList().First());
+
+// Then submit the asset to Content Moderator
+RunContentModeratorJob(asset);
+
+//-- OR -
+
+// Just run the content moderator on all blobs in a list (from a Blob Container)
+// RunContentModeratorJobOnBlobs();
+```
+
+### Create an Azure Media Context
+
+Add the following method to the **Program** class. This uses your AMS credentials to allow communication with AMS.
+
+```csharp
+// Creates a media context from azure credentials
+static void CreateMediaContext()
+{
+ // Get Azure AD credentials
+ var tokenCredentials = new AzureAdTokenCredentials(AZURE_AD_TENANT_NAME,
+ new AzureAdClientSymmetricKey(CLIENT_ID, CLIENT_SECRET),
+ AzureEnvironments.AzureCloudEnvironment);
+
+ // Initialize an Azure AD token
+ var tokenProvider = new AzureAdTokenProvider(tokenCredentials);
+
+ // Create a media context
+ _context = new CloudMediaContext(new Uri(REST_API_ENDPOINT), tokenProvider);
+}
+```
+
+### Add the code to create an Azure Storage Context
+
+Add the following method to the **Program** class. You use the Storage Context, created from your storage credentials, to access your blob storage.
+
+```csharp
+// Creates a storage context from the AMS associated storage name and key
+static void CreateStorageContext()
+{
+ // Get a reference to the storage account associated with a Media Services account.
+ if (_StorageCredentials == null)
+ {
+ _StorageCredentials = new StorageCredentials(STORAGE_NAME, STORAGE_KEY);
+ }
+ _StorageAccount = new CloudStorageAccount(_StorageCredentials, false);
+}
+```
+
+### Add the code to create Azure Media Assets from local file and blob
+
+The Content Moderator media processor runs jobs on **Assets** within the Azure Media Services platform.
+These methods create the Assets from a local file or an associated blob.
+
+```csharp
+// Creates an Azure Media Services Asset from the video file
+static IAsset CreateAssetfromFile()
+{
+    return _context.Assets.CreateFromFile(INPUT_FILE, AssetCreationOptions.None);
+}
+
+// Creates an Azure Media Services asset from your blob storage
+static IAsset CreateAssetfromBlob(CloudBlockBlob Blob)
+{
+ // Create asset from the FIRST blob in the list and return it
+ return _context.Assets.CreateFromBlob(Blob, _StorageCredentials, AssetCreationOptions.None);
+}
+```
+
+### Add the code to scan a collection of videos (as blobs) within a container
+
+```csharp
+// Runs the Content Moderator Job on all Blobs in a given container name
+static void RunContentModeratorJobOnBlobs()
+{
+ // Get the reference to the list of Blobs. See the following method.
+ var blobList = GetBlobsList();
+
+ // Iterate over the Blob list items or work on specific ones as needed
+ foreach (var sourceBlob in blobList)
+ {
+ // Create an Asset
+ IAsset asset = _context.Assets.CreateFromBlob((CloudBlockBlob)sourceBlob,
+ _StorageCredentials, AssetCreationOptions.None);
+ asset.Update();
+
+ // Submit to Content Moderator
+ RunContentModeratorJob(asset);
+ }
+}
+
+// Get all blobs in your container
+static IEnumerable<IListBlobItem> GetBlobsList()
+{
+ // Get a reference to the Container within the Storage Account
+ // that has the files (blobs) for moderation
+ CloudBlobClient CloudBlobClient = _StorageAccount.CreateCloudBlobClient();
+ CloudBlobContainer MediaBlobContainer = CloudBlobClient.GetContainerReference(STORAGE_CONTAINER_NAME);
+
+ // Get the reference to the list of Blobs
+ var blobList = MediaBlobContainer.ListBlobs();
+ return blobList;
+}
+```
+
+### Add the method to run the Content Moderator Job
+
+```csharp
+// Run the Content Moderator job on the designated Asset from local file or blob storage
+static void RunContentModeratorJob(IAsset asset)
+{
+ // Grab the presets
+ string configuration = File.ReadAllText(CONTENT_MODERATOR_PRESET_FILE);
+
+ // grab instance of Azure Media Content Moderator MP
+ IMediaProcessor mp = _context.MediaProcessors.GetLatestMediaProcessorByName(MEDIA_PROCESSOR);
+
+ // create Job with Content Moderator task
+ IJob job = _context.Jobs.Create(String.Format("Content Moderator {0}",
+ asset.AssetFiles.First() + "_" + Guid.NewGuid()));
+
+ ITask contentModeratorTask = job.Tasks.AddNew("Adult and racy classifier task",
+ mp, configuration,
+ TaskOptions.None);
+ contentModeratorTask.InputAssets.Add(asset);
+ contentModeratorTask.OutputAssets.AddNew("Adult and racy classifier output",
+ AssetCreationOptions.None);
+
+ job.Submit();
+
+ // Create progress printing and querying tasks
+ Task progressPrintTask = new Task(() =>
+ {
+ IJob jobQuery = null;
+ do
+ {
+ var progressContext = _context;
+ jobQuery = progressContext.Jobs
+ .Where(j => j.Id == job.Id)
+ .First();
+ Console.WriteLine(string.Format("{0}\t{1}",
+ DateTime.Now,
+ jobQuery.State));
+ Thread.Sleep(10000);
+ }
+ while (jobQuery.State != JobState.Finished &&
+ jobQuery.State != JobState.Error &&
+ jobQuery.State != JobState.Canceled);
+ });
+ progressPrintTask.Start();
+
+ Task progressJobTask = job.GetExecutionProgressTask(
+ CancellationToken.None);
+ progressJobTask.Wait();
+
+ // If job state is Error, the event handling
+ // method for job progress should log errors. Here we check
+ // for error state and exit if needed.
+ if (job.State == JobState.Error)
+ {
+ ErrorDetail error = job.Tasks.First().ErrorDetails.First();
+ Console.WriteLine(string.Format("Error: {0}. {1}",
+ error.Code,
+ error.Message));
+ }
+
+ DownloadAsset(job.OutputMediaAssets.First(), OUTPUT_FOLDER);
+}
+```
+
+### Add helper functions
+
+These methods download the Content Moderator output file (JSON) from the Azure Media Services asset, and help track the status of the moderation job so that the program can log a running status to the console.
+
+```csharp
+static void DownloadAsset(IAsset asset, string outputDirectory)
+{
+ foreach (IAssetFile file in asset.AssetFiles)
+ {
+ file.Download(Path.Combine(outputDirectory, file.Name));
+ }
+}
+
+// event handler for Job State
+static void StateChanged(object sender, JobStateChangedEventArgs e)
+{
+ Console.WriteLine("Job state changed event:");
+ Console.WriteLine(" Previous state: " + e.PreviousState);
+ Console.WriteLine(" Current state: " + e.CurrentState);
+ switch (e.CurrentState)
+ {
+ case JobState.Finished:
+ Console.WriteLine();
+ Console.WriteLine("Job finished.");
+ break;
+ case JobState.Canceling:
+ case JobState.Queued:
+ case JobState.Scheduled:
+ case JobState.Processing:
+ Console.WriteLine("Please wait...\n");
+ break;
+ case JobState.Canceled:
+ Console.WriteLine("Job is canceled.\n");
+ break;
+ case JobState.Error:
+ Console.WriteLine("Job failed.\n");
+ break;
+ default:
+ break;
+ }
+}
+```
+
+### Run the program and review the output
+
+After the Content Moderation job is completed, analyze the JSON response. It consists of these elements:
+
+- Video information summary
+- **Shots** as "**fragments**"
+- **Key frames** as "**events**" with a **reviewRecommended** (true or false) flag based on **Adult** and **Racy** scores
+- **start**, **duration**, **totalDuration**, and **timestamp** are in "ticks". Divide by **timescale** to get the number in seconds.
+
+> [!NOTE]
+> - `adultScore` represents the potential presence and prediction score of content that may be considered sexually explicit or adult in certain situations.
+> - `racyScore` represents the potential presence and prediction score of content that may be considered sexually suggestive or mature in certain situations.
+> - `adultScore` and `racyScore` are between 0 and 1. The higher the score, the more likely the model considers the category to apply. This preview relies on a statistical model rather than manually coded outcomes. We recommend testing with your own content to determine how each category aligns to your requirements.
+> - `reviewRecommended` is either true or false depending on the internal score thresholds. Customers should assess whether to use this value or decide on custom thresholds based on their content policies.
+
+```json
+{
+"version": 2,
+"timescale": 90000,
+"offset": 0,
+"framerate": 50,
+"width": 1280,
+"height": 720,
+"totalDuration": 18696321,
+"fragments": [
+{
+ "start": 0,
+ "duration": 18000
+},
+{
+ "start": 18000,
+ "duration": 3600,
+ "interval": 3600,
+ "events": [
+ [
+ {
+ "reviewRecommended": false,
+ "adultScore": 0.00001,
+ "racyScore": 0.03077,
+ "index": 5,
+ "timestamp": 18000,
+ "shotIndex": 0
+ }
+ ]
+ ]
+},
+{
+ "start": 18386372,
+ "duration": 119149,
+ "interval": 119149,
+ "events": [
+ [
+ {
+ "reviewRecommended": true,
+ "adultScore": 0.00000,
+ "racyScore": 0.91902,
+ "index": 5085,
+ "timestamp": 18386372,
+ "shotIndex": 62
+ }
+ ]
+ ]
+}
+]
+}
+```
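+
+Once the output JSON is downloaded (see the `DownloadAsset` helper above), a short script can convert the tick values into seconds and surface the key frames flagged for review. The following Python sketch assumes the output was saved to a local file named `contentmoderation.json`; adjust the file name to whatever your job produced.
+
+```python
+import json
+
+# Assumption: the moderation output from DownloadAsset was saved to this file name.
+with open("contentmoderation.json") as f:
+    result = json.load(f)
+
+# Tick values (start, duration, timestamp) divided by timescale give seconds.
+timescale = result["timescale"]
+print(f"Total duration: {result['totalDuration'] / timescale:.1f} s")
+
+for fragment in result.get("fragments", []):
+    for event_group in fragment.get("events", []):
+        for event in event_group:
+            if event.get("reviewRecommended"):
+                seconds = event["timestamp"] / timescale
+                print(
+                    f"Review recommended at {seconds:.1f} s "
+                    f"(adultScore={event['adultScore']}, racyScore={event['racyScore']})"
+                )
+```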
+
+## Next steps
+
+[Download the Visual Studio solution](https://github.com/Azure-Samples/cognitive-services-dotnet-sdk-samples/tree/master/ContentModerator) for this and other Content Moderator quickstarts for .NET.
ai-services Harm Categories https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/content-safety/concepts/harm-categories.md
+
+ Title: "Harm categories in Azure AI Content Safety"
+
+description: Learn about the different content moderation flags and severity levels that the Content Safety service returns.
+ Last updated : 04/06/2023
+keywords:
+++
+# Harm categories in Azure AI Content Safety
+
+This guide describes all of the harm categories and ratings that Content Safety uses to flag content. Both text and image content use the same set of flags.
+
+## Harm categories
+
+Content Safety recognizes four distinct categories of objectionable content.
+
+| Category | Description |
+| | - |
+| Hate | The hate category describes language attacks or uses that include pejorative or discriminatory language with reference to a person or identity group on the basis of certain differentiating attributes of these groups including but not limited to race, ethnicity, nationality, gender identity and expression, sexual orientation, religion, immigration status, ability status, personal appearance, and body size. |
+| Sexual | The sexual category describes language related to anatomical organs and genitals, romantic relationships, acts portrayed in erotic or affectionate terms, physical sexual acts, including those portrayed as an assault or a forced sexual violent act against one's will, prostitution, pornography, and abuse. |
+| Violence | The violence category describes language related to physical actions intended to hurt, injure, damage, or kill someone or something; describes weapons, etc. |
+| Self-harm | The self-harm category describes language related to physical actions intended to purposely hurt, injure, or damage one's body, or kill oneself. |
+
+Classification can be multi-labeled. For example, when a text sample goes through the text moderation model, it could be classified as both Sexual content and Violence.
+
+## Severity levels
+
+Every harm category the service applies also comes with a severity level rating. The severity level is meant to indicate the severity of the consequences of showing the flagged content.
+
+| Severity Levels | Label |
+| -- | -- |
+|Severity Level 0 - Safe | Content may be related to violence, self-harm, sexual or hate categories but the terms are used in general, journalistic, scientific, medical, and similar professional contexts which are appropriate for most audiences. |
+|Severity Level 2 - Low | Content that expresses prejudiced, judgmental, or opinionated views, includes offensive use of language, stereotyping, use cases exploring a fictional world (e.g., gaming, literature) and depictions at low intensity. |
+|Severity Level 4 - Medium| Content that uses offensive, insulting, mocking, intimidating, or demeaning language towards specific identity groups, includes depictions of seeking and executing harmful instructions, fantasies, glorification, promotion of harm at medium intensity. |
+|Severity Level 6 - High | Content that displays explicit and severe harmful instructions, actions, damage, or abuse, includes endorsement, glorification, promotion of severe harmful acts, extreme or illegal forms of harm, radicalization, and non-consensual power exchange or abuse. |
+
+## Next steps
+
+Follow a quickstart to get started using Content Safety in your application.
+
+> [!div class="nextstepaction"]
+> [Content Safety quickstart](../quickstart-text.md)
ai-services Response Codes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/content-safety/concepts/response-codes.md
+
+ Title: "Content Safety error codes"
+
+description: See the possible error codes for the Content Safety APIs.
+ Last updated : 05/09/2023
+# Content Safety Error codes
+
+The content APIs may return the following error codes:
+
+| Error Code | Possible reasons | Suggestions |
+| - | - | -- |
+| InvalidRequestBody | One or more fields in the request body do not match the API definition. | 1. Check the API version you specified in the API call. <br/>2. Check the corresponding API definition for the API version you selected. |
+| InvalidResourceName | The resource name you specified in the URL does not meet the requirements, like the blocklist name, blocklist term ID, etc. | 1. Check the API version you specified in the API call. <br/>2. Check whether the given name has invalid characters according to the API definition. |
+| ResourceNotFound | The resource you specified in the URL may not exist, like the blocklist name. | 1. Check the API version you specified in the API call. <br/> 2. Double check the existence of the resource specified in the URL. |
+| InternalError | Some unexpected situations on the server side have been triggered. | 1. You may want to retry a few times after a small period and see if the issue happens again. <br/> 2. Contact Azure Support if this issue persists. |
+| ServerBusy | The server side cannot process the request temporarily. | 1. You may want to retry a few times after a small period and see if the issue happens again. <br/>2. Contact Azure Support if this issue persists. |
+| TooManyRequests | The current RPS has exceeded the quota for your current SKU. | 1. Check the pricing table to understand the RPS quota. <br/>2. Contact Azure Support if you need a higher RPS quota. |
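+
+For the transient errors in the table (**InternalError**, **ServerBusy**, **TooManyRequests**), a short retry with backoff is usually enough. The following Python sketch illustrates the idea; the `text:analyze` endpoint, the request body, and the error response shape (`{"error": {"code": ...}}`) are assumptions, so adapt them to the Content Safety call you're actually making.
+
+```python
+import time
+import requests
+
+# Assumptions: placeholder endpoint, key, and API path.
+ENDPOINT = "<endpoint>"
+KEY = "<enter_your_key_here>"
+URL = f"{ENDPOINT}/contentsafety/text:analyze?api-version=2023-04-30-preview"
+
+def analyze_with_retry(text, max_retries=3):
+    headers = {"Ocp-Apim-Subscription-Key": KEY, "Content-Type": "application/json"}
+    for attempt in range(max_retries + 1):
+        response = requests.post(URL, headers=headers, json={"text": text})
+        if response.ok:
+            return response.json()
+        try:
+            error_code = response.json().get("error", {}).get("code", "")
+        except ValueError:
+            error_code = ""
+        # Retry transient failures with exponential backoff, per the table above.
+        if error_code in ("InternalError", "ServerBusy", "TooManyRequests") and attempt < max_retries:
+            time.sleep(2 ** attempt)
+            continue
+        response.raise_for_status()
+
+print(analyze_with_retry("sample text"))
+```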
ai-services Encrypt Data At Rest https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/content-safety/how-to/encrypt-data-at-rest.md
+
+ Title: Azure AI Content Safety Service encryption of data at rest
+description: Learn how Azure AI Content Safety encrypts your data when it's persisted to the cloud.
+ Last updated : 07/04/2023
+# Azure AI Content Safety encryption of data at rest
+
+Azure AI Content Safety automatically encrypts your data when it's persisted to the cloud. The encryption protects your data and helps you meet your organizational security and compliance commitments. This article covers how Azure AI Content Safety handles encryption of data at rest.
+
+## About Azure AI services encryption
+
+Azure AI Content Safety is part of Azure AI services. Azure AI services data is encrypted and decrypted using [FIPS 140-2](https://en.wikipedia.org/wiki/FIPS_140-2) compliant [256-bit AES](https://en.wikipedia.org/wiki/Advanced_Encryption_Standard) encryption. Encryption and decryption are transparent, meaning encryption and access are managed for you. Your data is secure by default and you don't need to modify your code or applications to take advantage of encryption.
++
+## About encryption key management
+
+By default, your subscription uses Microsoft-managed encryption keys. There's also the option to manage your subscription with your own keys called customer-managed keys (CMK). CMK offers greater flexibility to create, rotate, disable, and revoke access controls. You can also audit the encryption keys used to protect your data.
+
+> [!IMPORTANT]
+> For blocklist names, only Microsoft-managed key (MMK) encryption is applied, regardless of whether CMK is enabled. All other data is encrypted with either MMK or CMK, depending on what you've selected.
+
+## Customer-managed keys with Azure Key Vault
+
+Customer-managed keys (CMK), also known as Bring your own key (BYOK), offer greater flexibility to create, rotate, disable, and revoke access controls. You can also audit the encryption keys used to protect your data.
+
+You must use Azure Key Vault to store your customer-managed keys. You can either create your own keys and store them in a key vault, or you can use the Azure Key Vault APIs to generate keys. The Azure AI services resource and the key vault must be in the same region and in the same Azure Active Directory (Azure AD) tenant, but they can be in different subscriptions. For more information about Azure Key Vault, see [What is Azure Key Vault?](/azure/key-vault/general/overview).
+
+To enable customer-managed keys, you must also enable both the **Soft Delete** and **Do Not Purge** properties on the key vault.
+
+Only RSA keys of size 2048 are supported with Azure AI services encryption. For more information about keys, see **Key Vault keys** in [About Azure Key Vault keys, secrets and certificates](/azure/key-vault/general/about-keys-secrets-certificates).
++
+## Enable customer-managed keys for your resource
+
+To enable customer-managed keys in the Azure portal, follow these steps:
+
+1. Go to your Azure AI services resource.
+2. On the left, select **Encryption**.
+3. Under **Encryption type**, select **Customer Managed Keys**, as shown in the following screenshot.
+++
+## Specify a key
+
+After you enable customer-managed keys, you can specify a key to associate with the Azure AI services resource.
+
+### Specify a key as a URI
+
+To specify a key as a URI, follow these steps:
+
+1. In the Azure portal, go to your key vault.
+
+2. Under **Settings**, select **Keys**.
+
+3. Select the desired key, and then select the key to view its versions. Select a key version to view the settings for that version.
+
+4. Copy the **Key Identifier** value, which provides the URI.
+
+ ![Screenshot of the Azure portal page for a key version. The Key Identifier box contains a placeholder for a key URI.](../../media/cognitive-services-encryption/key-uri-portal.png)
+
+5. Go back to your Azure AI services resource, and then select **Encryption**.
+
+6. Under **Encryption key**, select **Enter key URI**.
+
+7. Paste the URI that you copied into the **Key URI** box.
+
+ ![Screenshot of the Encryption page for an Azure AI services resource. The Enter key URI option is selected, and the Key URI box contains a value.](../../media/cognitive-services-encryption/ssecmk2.png)
+
+8. Under **Subscription**, select the subscription that contains the key vault.
+
+9. Save your changes.
+++
+### Specify a key from a key vault
+
+To specify a key from a key vault, first make sure that you have a key vault that contains a key. Then follow these steps:
+
+1. Go to your Azure AI services resource, and then select **Encryption**.
+
+2. Under **Encryption key**, select **Select from Key Vault**.
+
+3. Select the key vault that contains the key that you want to use.
+
+4. Select the key that you want to use.
+
+ ![Screenshot of the Select key from Azure Key Vault page in the Azure portal. The Subscription, Key vault, Key, and Version boxes contain values.](../../media/cognitive-services-encryption/ssecmk3.png)
+
+5. Save your changes.
++
+## Update the key version
+
+When you create a new version of a key, update the Azure AI services resource to use the new version. Follow these steps:
+
+1. Go to your Azure AI services resource, and then select **Encryption**.
+2. Enter the URI for the new key version. Alternately, you can select the key vault and then select the key again to update the version.
+3. Save your changes.
++
+## Use a different key
+
+To change the key that you use for encryption, follow these steps:
+
+1. Go to your Azure AI services resource, and then select **Encryption**.
+2. Enter the URI for the new key. Alternately, you can select the key vault and then select a new key.
+3. Save your changes.
++
+## Rotate customer-managed keys
+
+You can rotate a customer-managed key in Key Vault according to your compliance policies. When the key is rotated, you must update the Azure AI services resource to use the new key URI. To learn how to update the resource to use a new version of the key in the Azure portal, see [Update the key version](../../openai/encrypt-data-at-rest.md#update-the-key-version).
+
+Rotating the key doesn't trigger re-encryption of data in the resource. No further action is required from the user.
++
+## Revoke a customer-managed key
+
+To revoke access to customer-managed keys, use PowerShell or Azure CLI. For more information, see [Azure Key Vault PowerShell](/powershell/module/az.keyvault//) or [Azure Key Vault CLI](/cli/azure/keyvault). Revoking access effectively blocks access to all data in the Azure AI services resource, because the encryption key is inaccessible by Azure AI services.
++
+## Disable customer-managed keys
+
+When you disable customer-managed keys, your Azure AI services resource is then encrypted with Microsoft-managed keys. To disable customer-managed keys, follow these steps:
+
+1. Go to your Azure AI services resource, and then select **Encryption**.
+2. Select **Microsoft Managed Keys** > **Save**.
+
+When you previously enabled customer-managed keys, a system-assigned managed identity (a feature of Azure AD) was also enabled. Once the system-assigned managed identity is enabled, the resource is registered with Azure Active Directory. After registration, the managed identity is given access to the key vault that you selected during customer-managed key setup. You can learn more about [Managed Identities](/azure/active-directory/managed-identities-azure-resources/overview).
+
+> [!IMPORTANT]
+> If you disable system-assigned managed identities, access to the key vault is removed and any data encrypted with the customer keys is no longer accessible. Any features that depend on this data will stop working.
+
+> [!IMPORTANT]
+> Managed identities do not currently support cross-directory scenarios. When you configure customer-managed keys in the Azure portal, a managed identity is automatically assigned under the covers. If you subsequently move the subscription, resource group, or resource from one Azure AD directory to another, the managed identity associated with the resource is not transferred to the new tenant, so customer-managed keys may no longer work. For more information, see **Transferring a subscription between Azure AD directories** in [FAQs and known issues with managed identities for Azure resources](/azure/active-directory/managed-identities-azure-resources/known-issues#transferring-a-subscription-between-azure-ad-directories).
+
+## Next steps
+* [Content Safety overview](../overview.md)
ai-services Use Blocklist https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/content-safety/how-to/use-blocklist.md
+
+ Title: "Use blocklists for text moderation"
+
+description: Learn how to customize text moderation in Content Safety by using your own list of blockItems.
+ Last updated : 04/21/2023
+keywords:
+++
+# Use a blocklist
+
+> [!CAUTION]
+> The sample data in this guide may contain offensive content. User discretion is advised.
+
+The default AI classifiers are sufficient for most content moderation needs. However, you may need to screen for items that are specific to your use case.
+
+## Prerequisites
+
+* An Azure subscription - [Create one for free](https://azure.microsoft.com/free/cognitive-services/)
+* Once you have your Azure subscription, <a href="https://aka.ms/acs-create" title="Create a Content Safety resource" target="_blank">create a Content Safety resource </a> in the Azure portal to get your key and endpoint. Enter a unique name for your resource, select the subscription you entered on the application form, select a resource group, supported region, and supported pricing tier. Then select **Create**.
+  * The resource takes a few minutes to deploy. After it finishes, select **Go to resource**. In the left pane, under **Resource Management**, select **Subscription Key and Endpoint**. The endpoint and either of the keys are used to call APIs.
+* One of the following installed:
+ * [cURL](https://curl.haxx.se/) for REST API calls.
+ * [Python 3.x](https://www.python.org/) installed
+ * Your Python installation should include [pip](https://pip.pypa.io/en/stable/). You can check if you have pip installed by running `pip --version` on the command line. Get pip by installing the latest version of Python.
+ * If you're using Python, you'll need to install the Azure AI Content Safety client library for Python. Run the command `pip install azure-ai-contentsafety` in your project directory.
+ * [.NET Runtime](https://dotnet.microsoft.com/download/dotnet/) installed.
+ * [.NET 6.0](https://dotnet.microsoft.com/download/dotnet-core) SDK or above installed.
+ * If you're using .NET, you'll need to install the Azure AI Content Safety client library for .NET. Run the command `dotnet add package Azure.AI.ContentSafety --prerelease` in your project directory.
++
+## Analyze text with a blocklist
+
+You can create blocklists to use with the Text API. The following steps help you get started.
+
+### Create or modify a blocklist
+
+#### [REST API](#tab/rest)
+
+Copy the cURL command below to a text editor and make the following changes:
+
+1. Replace `<endpoint>` with your endpoint URL.
+1. Replace `<enter_your_key_here>` with your key.
+1. Replace `<your_list_id>` (in the URL) with a custom name for your list. Also replace the last term of the REST URL with the same name. Allowed characters: 0-9, A-Z, a-z, `- . _ ~`.
+1. Optionally replace the value of the `"description"` field with a custom description.
++
+```shell
+curl --location --request PATCH '<endpoint>/contentsafety/text/blocklists/<your_list_id>?api-version=2023-04-30-preview' \
+--header 'Ocp-Apim-Subscription-Key: <enter_your_key_here>' \
+--header 'Content-Type: application/json' \
+--data-raw '{
+ "description": "This is a violence list"
+}'
+```
+
+The response code should be `201` (created a new list) or `200` (updated an existing list).
+
+#### [C#](#tab/csharp)
+
+Create a new C# console app and open it in your preferred editor or IDE. Paste in the following code.
+
+```csharp
+string endpoint = "<endpoint>";
+string key = "<enter_your_key_here>";
+ContentSafetyClient client = new ContentSafetyClient(new Uri(endpoint), new AzureKeyCredential(key));
+
+var blocklistName = "<your_list_id>";
+var blocklistDescription = "<description>";
+
+var data = new
+{
+ description = blocklistDescription,
+};
+
+var createResponse = client.CreateOrUpdateTextBlocklist(blocklistName, RequestContent.Create(data));
+if (createResponse.Status == 201)
+{
+ Console.WriteLine("\nBlocklist {0} created.", blocklistName);
+}
+else if (createResponse.Status == 200)
+{
+ Console.WriteLine("\nBlocklist {0} updated.", blocklistName);
+}
+```
+
+1. Replace `<endpoint>` with your endpoint URL.
+1. Replace `<enter_your_key_here>` with your key.
+1. Replace `<your_list_id>` with a custom name for your list. Allowed characters: 0-9, A-Z, a-z, `- . _ ~`.
+1. Optionally replace `<description>` with a custom description.
+1. Run the script.
+
+#### [Python](#tab/python)
+
+Create a new Python script and open it in your preferred editor or IDE. Paste in the following code.
+
+```python
+import os
+from azure.ai.contentsafety import ContentSafetyClient
+from azure.core.credentials import AzureKeyCredential
+from azure.ai.contentsafety.models import TextBlocklist
+from azure.core.exceptions import HttpResponseError
+
+endpoint = "<endpoint>"
+key = "<enter_your_key_here>"
+
+# Create a Content Safety client
+client = ContentSafetyClient(endpoint, AzureKeyCredential(key))
+
+def create_or_update_text_blocklist(name, description):
+ try:
+ return client.create_or_update_text_blocklist(
+ blocklist_name=name, resource=TextBlocklist(description=description)
+ )
+ except HttpResponseError as e:
+ print("\nCreate or update text blocklist failed: ")
+ if e.error:
+ print(f"Error code: {e.error.code}")
+ print(f"Error message: {e.error.message}")
+ raise
+ print(e)
+ raise
+
+if __name__ == "__main__":
+ blocklist_name = "<your_list_id>"
+ blocklist_description = "<description>"
+
+ # create blocklist
+ result = create_or_update_text_blocklist(name=blocklist_name, description=blocklist_description)
+ if result is not None:
+ print("Blocklist created: {}".format(result))
+```
+
+1. Replace `<endpoint>` with your endpoint URL.
+1. Replace `<enter_your_key_here>` with your key.
+1. Replace `<your_list_id>` with a custom name for your list. Allowed characters: 0-9, A-Z, a-z, `- . _ ~`.
+1. Optionally replace `<description>` with a custom description.
+1. Run the script.
+++
+### Add blockItems in the list
+
+> [!NOTE]
+>
+> There is a maximum limit of **10,000 terms** in total across all lists. You can add at most 100 blockItems in one request.
+
+#### [REST API](#tab/rest)
+
+Copy the cURL command below to a text editor and make the following changes:
+
+1. Replace `<endpoint>` with your endpoint URL.
+1. Replace `<enter_your_key_here>` with your key.
+1. Replace `<your_list_id>` (in the URL) with the ID value you used in the list creation step.
+1. Optionally replace the value of the `"description"` field with a custom description.
+1. Replace the value of the `"text"` field with the item you'd like to add to your blocklist. The maximum length of a blockItem is 128 characters.
+
+```shell
+curl --location --request POST '<endpoint>/contentsafety/text/blocklists/<your_list_id>:addBlockItems?api-version=2023-04-30-preview' \
+--header 'Ocp-Apim-Subscription-Key: <enter_your_key_here>' \
+--header 'Content-Type: application/json' \
+--data-raw '"blockItems": [{
+ "description": "string",
+ "text": "bleed"
+}]'
+```
+
+> [!TIP]
+> You can add multiple blockItems in one API call. Pass a JSON array of blockItem objects in the `"blockItems"` field:
+>
+> [{
+> "description": "string",
+> "text": "bleed"
+> },
+> {
+> "description": "string",
+> "text": "blood"
+> }]
++
+The response code should be `200`.
+
+```console
+{
+ "blockItemId": "string",
+ "description": "string",
+ "text": "bleed"
+
+}
+```
+
+#### [C#](#tab/csharp)
+
+Create a new C# console app and open it in your preferred editor or IDE. Paste in the following code.
+
+```csharp
+string endpoint = "<endpoint>";
+string key = "<enter_your_key_here>";
+ContentSafetyClient client = new ContentSafetyClient(new Uri(endpoint), new AzureKeyCredential(key));
+
+var blocklistName = "<your_list_id>";
+
+string blockItemText1 = "k*ll";
+string blockItemText2 = "h*te";
+
+var blockItems = new TextBlockItemInfo[] { new TextBlockItemInfo(blockItemText1), new TextBlockItemInfo(blockItemText2) };
+var addedBlockItems = client.AddBlockItems(blocklistName, new AddBlockItemsOptions(blockItems));
+
+if (addedBlockItems != null && addedBlockItems.Value != null)
+{
+ Console.WriteLine("\nBlockItems added:");
+ foreach (var addedBlockItem in addedBlockItems.Value.Value)
+ {
+ Console.WriteLine("BlockItemId: {0}, Text: {1}, Description: {2}", addedBlockItem.BlockItemId, addedBlockItem.Text, addedBlockItem.Description);
+ }
+}
+```
+
+1. Replace `<endpoint>` with your endpoint URL.
+1. Replace `<enter_your_key_here>` with your key.
+1. Replace `<your_list_id>` with the ID value you used in the list creation step.
+1. Replace the value of the `blockItemText1` variable with the item you'd like to add to your blocklist. The maximum length of a blockItem is 128 characters.
+1. Optionally add more blockItem strings to the `blockItems` parameter.
+1. Run the script.
+
+#### [Python](#tab/python)
+
+Create a new Python script and open it in your preferred editor or IDE. Paste in the following code.
+
+```python
+import os
+from azure.ai.contentsafety import ContentSafetyClient
+from azure.core.credentials import AzureKeyCredential
+from azure.ai.contentsafety.models import TextBlockItemInfo, AddBlockItemsOptions
+from azure.core.exceptions import HttpResponseError
+import time
+
+endpoint = "<endpoint>"
+key = "<enter_your_key_here>"
+
+# Create a Content Safety client
+client = ContentSafetyClient(endpoint, AzureKeyCredential(key))
+
+def add_block_items(name, items):
+ block_items = [TextBlockItemInfo(text=i) for i in items]
+ try:
+ response = client.add_block_items(
+ blocklist_name=name,
+ body=AddBlockItemsOptions(block_items=block_items),
+ )
+ except HttpResponseError as e:
+ print("\nAdd block items failed: ")
+ if e.error:
+ print(f"Error code: {e.error.code}")
+ print(f"Error message: {e.error.message}")
+ raise
+ print(e)
+ raise
+
+ return response.value
++
+if __name__ == "__main__":
+ blocklist_name = "<your_list_id>"
+
+ block_item_text_1 = "k*ll"
+ input_text = "I h*te you and I want to k*ll you."
+
+ # add block items
+ result = add_block_items(name=blocklist_name, items=[block_item_text_1])
+ if result is not None:
+ print("Block items added: {}".format(result))
+```
+
+1. Replace `<endpoint>` with your endpoint URL.
+1. Replace `<enter_your_key_here>` with your key.
+1. Replace `<your_list_id>` with the ID value you used in the list creation step.
+1. Replace the value of the `block_item_text_1` field with the item you'd like to add to your blocklist. The maximum length of a blockItem is 128 characters.
+1. Optionally add more blockItem strings to the `items` list passed to `add_block_items`.
+1. Run the script.
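+
+If you need to load a large term list, keep in mind the limits noted earlier: at most 100 blockItems per request and 10,000 terms in total. One way to stay within the per-request limit is to batch the calls. The following is a minimal sketch, reusing `client` and `blocklist_name` from the script above; the `terms` list is a hypothetical placeholder for your own data.
+
+```python
+from azure.ai.contentsafety.models import TextBlockItemInfo, AddBlockItemsOptions
+
+terms = ["term1", "term2", "term3"]  # hypothetical list of terms to add
+BATCH_SIZE = 100  # the service accepts at most 100 blockItems per request
+
+for start in range(0, len(terms), BATCH_SIZE):
+    batch = [TextBlockItemInfo(text=t) for t in terms[start:start + BATCH_SIZE]]
+    # client and blocklist_name come from the earlier steps in this article
+    client.add_block_items(
+        blocklist_name=blocklist_name,
+        body=AddBlockItemsOptions(block_items=batch),
+    )
+```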
++++
+> [!NOTE]
+>
+> There will be some delay after you add or edit a blockItem before it takes effect on text analysis, usually **not more than five minutes**.
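+
+Because of this propagation delay, you might want to poll briefly after adding a term before you rely on it. Here's a minimal Python sketch, assuming the `client` and `blocklist_name` from the previous steps and a sample term you just added; the retry count and sleep interval are illustrative.
+
+```python
+import time
+from azure.ai.contentsafety.models import AnalyzeTextOptions
+
+sample_text = "k*ll"  # a term you just added to the blocklist (hypothetical)
+
+# Poll for up to about five minutes until the new blockItem is matched.
+for _ in range(30):
+    response = client.analyze_text(
+        AnalyzeTextOptions(text=sample_text, blocklist_names=[blocklist_name])
+    )
+    if response.blocklists_match_results:
+        print("BlockItem is now active.")
+        break
+    time.sleep(10)
+else:
+    print("BlockItem not matched yet; try again later.")
+```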
+
+### Analyze text with a blocklist
+
+#### [REST API](#tab/rest)
+
+Copy the cURL command below to a text editor and make the following changes:
+1. Replace `<endpoint>` with your endpoint URL.
+1. Replace `<enter_your_key_here>` with your key.
+1. Replace `<your_list_id>` with the ID value you used in the list creation step. The `"blocklistNames"` field can contain an array of multiple list IDs.
+1. Optionally change the value of `"breakByBlocklists"`. `true` indicates that once a blocklist is matched, the analysis will return immediately without model output. `false` will cause the model to continue to do analysis in the default categories.
+1. Optionally change the value of the `"text"` field to whatever text you want to analyze.
+
+```shell
+curl --location --request POST '<endpoint>/contentsafety/text:analyze?api-version=2023-04-30-preview' \
+--header 'Ocp-Apim-Subscription-Key: <enter_your_key_here>' \
+--header 'Content-Type: application/json' \
+--data-raw '{
+ "text": "I want to beat you till you bleed",
+ "categories": [
+ "Hate",
+ "Sexual",
+ "SelfHarm",
+ "Violence"
+ ],
+ "blocklistNames":["<your_list_id>"],
+ "breakByBlocklists": true
+}'
+```
+
+The JSON response contains a `"blocklistMatchResults"` field that indicates any matches with your blocklist. It reports the location in the text string where the match was found.
+
+```json
+{
+ "blocklistMatchResults": [
+ {
+ "blocklistName": "string",
+ "blockItemID": "string",
+ "blockItemText": "bleed",
+ "offset": "28",
+ "length": "5"
+ }
+ ]
+}
+```
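+
+If you keep the submitted text around, you can use the `offset` and `length` values to recover the matched substring. A minimal Python sketch, using a hypothetical parsed copy of the response above:
+
+```python
+input_text = "I want to beat you till you bleed"  # the text that was analyzed
+
+# Hypothetical parsed response; in practice, parse the JSON returned by the API.
+analysis = {
+    "blocklistMatchResults": [
+        {"blocklistName": "string", "blockItemText": "bleed", "offset": "28", "length": "5"}
+    ]
+}
+
+for match in analysis["blocklistMatchResults"]:
+    start = int(match["offset"])
+    end = start + int(match["length"])
+    print(f"Matched '{input_text[start:end]}' from blocklist '{match['blocklistName']}'")
+```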
+
+#### [C#](#tab/csharp)
+
+Create a new C# console app and open it in your preferred editor or IDE. Paste in the following code.
+
+```csharp
+string endpoint = "<endpoint>";
+string key = "<enter_your_key_here>";
+ContentSafetyClient client = new ContentSafetyClient(new Uri(endpoint), new AzureKeyCredential(key));
+
+var blocklistName = "<your_list_id>";
+
+// After you edit your blocklist, it usually takes up to five minutes for the changes to take effect. Wait before you analyze text against the updated blocklist.
+var request = new AnalyzeTextOptions("I h*te you and I want to k*ll you");
+request.BlocklistNames.Add(blocklistName);
+request.BreakByBlocklists = true;
+
+Response<AnalyzeTextResult> response;
+try
+{
+ response = client.AnalyzeText(request);
+}
+catch (RequestFailedException ex)
+{
+ Console.WriteLine("Analyze text failed.\nStatus code: {0}, Error code: {1}, Error message: {2}", ex.Status, ex.ErrorCode, ex.Message);
+ throw;
+}
+
+if (response.Value.BlocklistsMatchResults != null)
+{
+ Console.WriteLine("\nBlocklist match result:");
+ foreach (var matchResult in response.Value.BlocklistsMatchResults)
+ {
+ Console.WriteLine("Blockitem was hit in text: Offset: {0}, Length: {1}", matchResult.Offset, matchResult.Length);
+ Console.WriteLine("BlocklistName: {0}, BlockItemId: {1}, BlockItemText: {2}, ", matchResult.BlocklistName, matchResult.BlockItemId, matchResult.BlockItemText);
+ }
+}
+```
+
+1. Replace `<endpoint>` with your endpoint URL.
+1. Replace `<enter_your_key_here>` with your key.
+1. Replace `<your_list_id>` with the ID value you used in the list creation step.
+1. Replace the `request` input text with whatever text you want to analyze.
+1. Run the script.
+
+#### [Python](#tab/python)
+
+Create a new Python script and open it in your preferred editor or IDE. Paste in the following code.
+
+```python
+import os
+from azure.ai.contentsafety import ContentSafetyClient
+from azure.core.credentials import AzureKeyCredential
+from azure.ai.contentsafety.models import AnalyzeTextOptions
+from azure.core.exceptions import HttpResponseError
+import time
+
+endpoint = "<endpoint>"
+key = "<enter_your_key_here>"
+
+# Create a Content Safety client
+client = ContentSafetyClient(endpoint, AzureKeyCredential(key))
+
+def analyze_text_with_blocklists(name, text):
+ try:
+ response = client.analyze_text(
+ AnalyzeTextOptions(text=text, blocklist_names=[name], break_by_blocklists=False)
+ )
+ except HttpResponseError as e:
+ print("\nAnalyze text failed: ")
+ if e.error:
+ print(f"Error code: {e.error.code}")
+ print(f"Error message: {e.error.message}")
+ raise
+ print(e)
+ raise
+
+ return response.blocklists_match_results
++
+if __name__ == "__main__":
+ blocklist_name = "<your_list_id>"
+ input_text = "I h*te you and I want to k*ll you."
+
+ # analyze text
+ match_results = analyze_text_with_blocklists(name=blocklist_name, text=input_text)
+ for match_result in match_results:
+ print("Block item {} in {} was hit, text={}, offset={}, length={}."
+ .format(match_result.block_item_id, match_result.blocklist_name, match_result.block_item_text, match_result.offset, match_result.length))
+```
+
+1. Replace `<endpoint>` with your endpoint URL.
+1. Replace `<enter_your_key_here>` with your key.
+1. Replace `<your_list_id>` with the ID value you used in the list creation step.
+1. Replace the `input_text` variable with whatever text you want to analyze.
+1. Run the script.
++
+## Other blocklist operations
+
+This section contains more operations to help you manage and use the blocklist feature.
+
+### Get all blockItems in a list
+
+#### [REST API](#tab/rest)
++
+Copy the cURL command below to a text editor and make the following changes:
+
+1. Replace `<endpoint>` with your endpoint URL.
+1. Replace `<enter_your_key_here>` with your key.
+1. Replace `<your_list_id>` (in the request URL) with the ID value you used in the list creation step.
+
+```shell
+curl --location --request GET '<endpoint>/contentsafety/text/blocklists/<your_list_id>/blockItems?api-version=2023-04-30-preview' \
+--header 'Ocp-Apim-Subscription-Key: <enter_your_key_here>' \
+--header 'Content-Type: application/json'
+```
+
+The status code should be `200` and the response body should look like this:
+
+```json
+{
+ "values": [
+ {
+ "blockItemId": "string",
+ "description": "string",
+ "text": "bleed",
+ }
+ ]
+}
+```
+
+#### [C#](#tab/csharp)
+
+Create a new C# console app and open it in your preferred editor or IDE. Paste in the following code.
+
+```csharp
+string endpoint = "<endpoint>";
+string key = "<enter_your_key_here>";
+ContentSafetyClient client = new ContentSafetyClient(new Uri(endpoint), new AzureKeyCredential(key));
+
+var blocklistName = "<your_list_id>";
+
+var allBlockitems = client.GetTextBlocklistItems(blocklistName);
+Console.WriteLine("\nList BlockItems:");
+foreach (var blocklistItem in allBlockitems)
+{
+ Console.WriteLine("BlockItemId: {0}, Text: {1}, Description: {2}", blocklistItem.BlockItemId, blocklistItem.Text, blocklistItem.Description);
+}
+```
+
+1. Replace `<endpoint>` with your endpoint URL.
+1. Replace `<enter_your_key_here>` with your key.
+1. Replace `<your_list_id>` with the ID value you used in the list creation step.
+1. Run the script.
+
+#### [Python](#tab/python)
+
+Create a new Python script and open it in your preferred editor or IDE. Paste in the following code.
++
+```python
+import os
+from azure.ai.contentsafety import ContentSafetyClient
+from azure.core.credentials import AzureKeyCredential
+from azure.core.exceptions import HttpResponseError
+
+endpoint = "<endpoint>"
+key = "<enter_your_key_here>"
+
+# Create a Content Safety client
+client = ContentSafetyClient(endpoint, AzureKeyCredential(key))
+
+def list_block_items(name):
+ try:
+ response = client.list_text_blocklist_items(blocklist_name=name)
+ return list(response)
+ except HttpResponseError as e:
+ print("\nList block items failed: ")
+ if e.error:
+ print(f"Error code: {e.error.code}")
+ print(f"Error message: {e.error.message}")
+ raise
+ print(e)
+ raise
++
+if __name__ == "__main__":
+ blocklist_name = "<your_list_id>"
+
+ result = list_block_items(name=blocklist_name)
+ if result is not None:
+ print("Block items: {}".format(result))
+```
+
+1. Replace `<endpoint>` with your endpoint URL.
+1. Replace `<enter_your_key_here>` with your key.
+1. Replace `<your_list_id>` with the ID value you used in the list creation step.
+1. Run the script.
++++
+### Get all blocklists
+
+#### [REST API](#tab/rest)
+
+Copy the cURL command below to a text editor and make the following changes:
+
+1. Replace `<endpoint>` with your endpoint URL.
+1. Replace `<enter_your_key_here>` with your key.
++
+```shell
+curl --location --request GET '<endpoint>/contentsafety/text/blocklists?api-version=2023-04-30-preview' \
+--header 'Ocp-Apim-Subscription-Key: <enter_your_key_here>' \
+--header 'Content-Type: application/json'
+```
+
+The status code should be `200`. The JSON response looks like this:
+
+```json
+"value": [
+ {
+ "blocklistName": "string",
+ "description": "string"
+ }
+]
+```
+
+#### [C#](#tab/csharp)
+
+Create a new C# console app and open it in your preferred editor or IDE. Paste in the following code.
+
+```csharp
+string endpoint = "<endpoint>";
+string key = "<enter_your_key_here>";
+ContentSafetyClient client = new ContentSafetyClient(new Uri(endpoint), new AzureKeyCredential(key));
++
+var blocklists = client.GetTextBlocklists();
+Console.WriteLine("\nList blocklists:");
+foreach (var blocklist in blocklists)
+{
+ Console.WriteLine("BlocklistName: {0}, Description: {1}", blocklist.BlocklistName, blocklist.Description);
+}
+```
+
+1. Replace `<endpoint>` with your endpoint URL.
+1. Replace `<enter_your_key_here>` with your key.
+1. Run the script.
+
+#### [Python](#tab/python)
+
+Create a new Python script and open it in your preferred editor or IDE. Paste in the following code.
+
+```python
+import os
+from azure.ai.contentsafety import ContentSafetyClient
+from azure.core.credentials import AzureKeyCredential
+from azure.core.exceptions import HttpResponseError
++
+endpoint = "<endpoint>"
+key = "<enter_your_key_here>"
++
+# Create a Content Safety client
+client = ContentSafetyClient(endpoint, AzureKeyCredential(key))
+
+def list_text_blocklists():
+ try:
+ return client.list_text_blocklists()
+ except HttpResponseError as e:
+ print("\nList text blocklists failed: ")
+ if e.error:
+ print(f"Error code: {e.error.code}")
+ print(f"Error message: {e.error.message}")
+ raise
+ print(e)
+ raise
+if __name__ == "__main__":
+ # list blocklists
+ result = list_text_blocklists()
+ if result is not None:
+ print("List blocklists: ")
+ for l in result:
+ print(l)
+```
+
+1. Replace `<endpoint>` with your endpoint URL.
+1. Replace `<enter_your_key_here>` with your key.
+1. Run the script.
+++
+### Get a blocklist by name
+
+#### [REST API](#tab/rest)
+
+Copy the cURL command below to a text editor and make the following changes:
+
+1. Replace `<endpoint>` with your endpoint URL.
+1. Replace `<enter_your_key_here>` with your key.
+1. Replace `<your_list_id>` (in the request URL) with the ID value you used in the list creation step.
+
+```shell
+curl --location --request GET '<endpoint>/contentsafety/text/blocklists/<your_list_id>?api-version=2023-04-30-preview' \
+--header 'Ocp-Apim-Subscription-Key: <enter_your_key_here>'
+```
+
+The status code should be `200`. The JSON response looks like this:
+
+```json
+{
+ "blocklistName": "string",
+ "description": "string"
+}
+```
+
+#### [C#](#tab/csharp)
+
+Create a new C# console app and open it in your preferred editor or IDE. Paste in the following code.
+
+```csharp
+string endpoint = "<endpoint>";
+string key = "<enter_your_key_here>";
+ContentSafetyClient client = new ContentSafetyClient(new Uri(endpoint), new AzureKeyCredential(key));
+
+var blocklistName = "<your_list_id>";
+
+var getBlocklist = client.GetTextBlocklist(blocklistName);
+if (getBlocklist != null && getBlocklist.Value != null)
+{
+ Console.WriteLine("\nGet blocklist:");
+ Console.WriteLine("BlocklistName: {0}, Description: {1}", getBlocklist.Value.BlocklistName, getBlocklist.Value.Description);
+}
+```
+1. Replace `<endpoint>` with your endpoint URL.
+1. Replace `<enter_your_key_here>` with your key.
+1. Replace `<your_list_id>` with the ID value you used in the list creation step.
+1. Run the script.
+
+#### [Python](#tab/python)
+
+Create a new Python script and open it in your preferred editor or IDE. Paste in the following code.
+
+```python
+import os
+from azure.ai.contentsafety import ContentSafetyClient
+from azure.core.credentials import AzureKeyCredential
+from azure.ai.contentsafety.models import TextBlocklist
+from azure.core.exceptions import HttpResponseError
+
+endpoint = "<endpoint>"
+key = "<enter_your_key_here>"
+
+# Create a Content Safety client
+client = ContentSafetyClient(endpoint, AzureKeyCredential(key))
+
+def get_text_blocklist(name):
+ try:
+ return client.get_text_blocklist(blocklist_name=name)
+ except HttpResponseError as e:
+ print("\nGet text blocklist failed: ")
+ if e.error:
+ print(f"Error code: {e.error.code}")
+ print(f"Error message: {e.error.message}")
+ raise
+ print(e)
+ raise
+
+if __name__ == "__main__":
+ blocklist_name = "<your_list_id>"
+
+ # get blocklist
+ result = get_text_blocklist(blocklist_name)
+ if result is not None:
+ print("Get blocklist: {}".format(result))
+```
+
+1. Replace `<endpoint>` with your endpoint URL.
+1. Replace `<enter_your_key_here>` with your key.
+1. Replace `<your_list_id>` with the ID value you used in the list creation step.
+1. Run the script.
++++
+### Get a blockItem by blockItem ID
+
+#### [REST API](#tab/rest)
+
+Copy the cURL command below to a text editor and make the following changes:
+
+1. Replace `<endpoint>` with your endpoint URL.
+1. Replace `<enter_your_key_here>` with your key.
+1. Replace `<your_list_id>` (in the request URL) with the ID value you used in the list creation step.
+1. Replace `<your_item_id>` with the ID value for the blockItem. This is the value of the `"blockItemId"` field from the **Add blockItem** or **Get all blockItems** API calls.
++
+```shell
+curl --location --request GET '<endpoint>/contentsafety/text/blocklists/<your_list_id>/blockItems/<your_item_id>?api-version=2023-04-30-preview' \
+--header 'Ocp-Apim-Subscription-Key: <enter_your_key_here>'
+```
+
+The status code should be `200`. The JSON response looks like this:
+
+```json
+{
+ "blockItemId": "string",
+ "description": "string",
+ "text": "string"
+}
+```
+
+#### [C#](#tab/csharp)
+
+Create a new C# console app and open it in your preferred editor or IDE. Paste in the following code.
+
+```csharp
+string endpoint = "<endpoint>";
+string key = "<enter_your_key_here>";
+ContentSafetyClient client = new ContentSafetyClient(new Uri(endpoint), new AzureKeyCredential(key));
+
+var blocklistName = "<your_list_id>";
+
+// addedBlockItems is the response from the earlier AddBlockItems call; you can also supply a known blockItem ID directly.
+var getBlockItemId = addedBlockItems.Value.Value[0].BlockItemId;
+var getBlockItem = client.GetTextBlocklistItem(blocklistName, getBlockItemId);
+Console.WriteLine("\nGet BlockItem:");
+Console.WriteLine("BlockItemId: {0}, Text: {1}, Description: {2}", getBlockItem.Value.BlockItemId, getBlockItem.Value.Text, getBlockItem.Value.Description);
+```
+
+1. Replace `<endpoint>` with your endpoint URL.
+1. Replace `<enter_your_key_here>` with your key.
+1. Replace `<your_list_id>` with the ID value you used in the list creation step.
+1. Optionally change the value of `getBlockItemId` to the ID of a previously added item.
+1. Run the script.
+
+#### [Python](#tab/python)
+
+Create a new Python script and open it in your preferred editor or IDE. Paste in the following code.
+
+```python
+import os
+from azure.ai.contentsafety import ContentSafetyClient
+from azure.core.credentials import AzureKeyCredential
+from azure.core.exceptions import HttpResponseError
+
+endpoint = "<endpoint>"
+key = "<enter_your_key_here>"
+
+# Create a Content Safety client
+client = ContentSafetyClient(endpoint, AzureKeyCredential(key))
+
+def get_block_item(name, item_id):
+ try:
+ return client.get_text_blocklist_item(blocklist_name=name, block_item_id=item_id)
+ except HttpResponseError as e:
+ print("Get block item failed.")
+ print("Error code: {}".format(e.error.code))
+ print("Error message: {}".format(e.error.message))
+ return None
+ except Exception as e:
+ print(e)
+ return None
+
+if __name__ == "__main__":
+ blocklist_name = "<your_list_id>"
+ block_item = get_block_item(blocklist_name, "<your_item_id>")
+```
+
+1. Replace `<endpoint>` with your endpoint URL.
+1. Replace `<enter_your_key_here>` with your key.
+1. Replace `<your_list_id>` with the ID value you used in the list creation step.
+1. Replace `<your_item_id>` with the ID value for the blockItem. This is the value of the `"blockItemId"` field from the **Add blockItem** or **Get all blockItems** API calls.
+1. Run the script.
+++
+### Remove a blockItem from a list
+
+> [!NOTE]
+>
+> There will be some delay after you delete an item before it takes effect on text analysis, usually **not more than five minutes**.
+
+#### [REST API](#tab/rest)
++
+Copy the cURL command below to a text editor and make the following changes:
+
+1. Replace `<endpoint>` with your endpoint URL.
+1. Replace `<enter_your_key_here>` with your key.
+1. Replace `<your_list_id>` (in the request URL) with the ID value you used in the list creation step.
+1. Replace `<item_id>` with the ID value for the blockItem. This is the value of the `"blockItemId"` field from the **Add blockItem** or **Get all blockItems** API calls.
++
+```shell
+curl --location --request DELETE '<endpoint>/contentsafety/text/blocklists/<your_list_id>/removeBlockItems?api-version=2023-04-30-preview' \
+--header 'Ocp-Apim-Subscription-Key: <enter_your_key_here>' \
+--header 'Content-Type: application/json' \
+--data-raw '{
+    "blockItemIds": [
+        "<item_id>"
+    ]
+}'
+```
+
+> [!TIP]
+> You can delete multiple blockItems in one API call. Add multiple ID values to the `"blockItemIds"` array in the request body.
+
+The response code should be `204`.
+
+#### [C#](#tab/csharp)
+
+Create a new C# console app and open it in your preferred editor or IDE. Paste in the following code.
+
+```csharp
+string endpoint = "<endpoint>";
+string key = "<enter_your_key_here>";
+ContentSafetyClient client = new ContentSafetyClient(new Uri(endpoint), new AzureKeyCredential(key));
+
+var blocklistName = "<your_list_id>";
+
+// addedBlockItems is the response from the earlier AddBlockItems call; you can also supply a known blockItem ID directly.
+var removeBlockItemId = addedBlockItems.Value.Value[0].BlockItemId;
+var removeBlockItemIds = new List<string> { removeBlockItemId };
+var removeResult = client.RemoveBlockItems(blocklistName, new RemoveBlockItemsOptions(removeBlockItemIds));
+
+if (removeResult != null && removeResult.Status == 204)
+{
+ Console.WriteLine("\nBlockItem removed: {0}.", removeBlockItemId);
+}
+```
+
+1. Replace `<endpoint>` with your endpoint URL.
+1. Replace `<enter_your_key_here>` with your key.
+1. Replace `<your_list_id>` with the ID value you used in the list creation step.
+1. Optionally change the value of `removeBlockItemId` to the ID of a previously added item.
+1. Run the script.
+
+#### [Python](#tab/python)
+
+Create a new Python script and open it in your preferred editor or IDE. Paste in the following code.
+
+```python
+import os
+from azure.ai.contentsafety import ContentSafetyClient
+from azure.core.credentials import AzureKeyCredential
+from azure.ai.contentsafety.models import RemoveBlockItemsOptions
+from azure.core.exceptions import HttpResponseError
++
+endpoint = "<endpoint>"
+key = "<enter_your_key_here>"
+
+# Create a Content Safety client
+client = ContentSafetyClient(endpoint, AzureKeyCredential(key))
+
+def remove_block_items(name, ids):
+ request = RemoveBlockItemsOptions(block_item_ids=ids)
+ try:
+ client.remove_block_items(blocklist_name=name, body=request)
+ return True
+ except HttpResponseError as e:
+ print("Remove block items failed.")
+ print("Error code: {}".format(e.error.code))
+ print("Error message: {}".format(e.error.message))
+ return False
+ except Exception as e:
+ print(e)
+ return False
+
+if __name__ == "__main__":
+ blocklist_name = "<your_list_id>"
+ remove_id = "<your_item_id>"
+
+ # remove one blocklist item
+ if remove_block_items(name=blocklist_name, ids=[remove_id]):
+ print("Block item removed: {}".format(remove_id))
+```
+
+1. Replace `<endpoint>` with your endpoint URL.
+1. Replace `<enter_your_key_here>` with your key.
+1. Replace `<your_list_id>` with the ID value you used in the list creation step.
+1. Replace `<your_item_id>` with the ID value for the blockItem. This is the value of the `"blockItemId"` field from the **Add blockItem** or **Get all blockItems** API calls.
+1. Run the script.
+++
+### Delete a list and all of its contents
+
+> [!NOTE]
+>
+> There will be some delay after you delete a list before it takes effect on text analysis, usually **not more than five minutes**.
+
+#### [REST API](#tab/rest)
+
+Copy the cURL command below to a text editor and make the following changes:
++
+1. Replace `<endpoint>` with your endpoint URL.
+1. Replace `<enter_your_key_here>` with your key.
+1. Replace `<your_list_id>` (in the request URL) with the ID value you used in the list creation step.
+
+```shell
+curl --location --request DELETE '<endpoint>/contentsafety/text/blocklists/<your_list_id>?api-version=2023-04-30-preview' \
+--header 'Ocp-Apim-Subscription-Key: <enter_your_key_here>' \
+--header 'Content-Type: application/json'
+```
+
+The response code should be `204`.
+
+#### [C#](#tab/csharp)
+
+Create a new C# console app and open it in your preferred editor or IDE. Paste in the following code.
+
+```csharp
+string endpoint = "<endpoint>";
+string key = "<enter_your_key_here>";
+ContentSafetyClient client = new ContentSafetyClient(new Uri(endpoint), new AzureKeyCredential(key));
+
+var blocklistName = "<your_list_id>";
+
+var deleteResult = client.DeleteTextBlocklist(blocklistName);
+if (deleteResult != null && deleteResult.Status == 204)
+{
+ Console.WriteLine("\nDeleted blocklist.");
+}
+```
+
+1. Replace `<endpoint>` with your endpoint URL.
+1. Replace `<enter_your_key_here>` with your key.
+1. Replace `<your_list_id>` with the ID value you used in the list creation step.
+1. Run the script.
+
+#### [Python](#tab/python)
+
+Create a new Python script and open it in your preferred editor or IDE. Paste in the following code.
+
+```python
+import os
+from azure.ai.contentsafety import ContentSafetyClient
+from azure.core.credentials import AzureKeyCredential
+from azure.core.exceptions import HttpResponseError
+
+endpoint = "<endpoint>"
+key = "<enter_your_key_here>"
+
+# Create a Content Safety client
+client = ContentSafetyClient(endpoint, AzureKeyCredential(key))
+def delete_blocklist(name):
+ try:
+ client.delete_text_blocklist(blocklist_name=name)
+ return True
+ except HttpResponseError as e:
+ print("Delete blocklist failed.")
+ print("Error code: {}".format(e.error.code))
+ print("Error message: {}".format(e.error.message))
+ return False
+ except Exception as e:
+ print(e)
+ return False
+
+if __name__ == "__main__":
+ blocklist_name = "<your_list_id>"
+ # delete blocklist
+ if delete_blocklist(name=blocklist_name):
+ print("Blocklist {} deleted successfully.".format(blocklist_name))
+```
+
+1. Replace `<endpoint>` with your endpoint URL.
+1. Replace `<enter_your_key_here>` with your key.
+1. Replace `<your_list_id>` with the ID value you used in the list creation step.
+1. Run the script.
+++
+## Next steps
+
+See the API reference documentation to learn more about the APIs used in this guide.
+
+* [Content Safety API reference](https://aka.ms/content-safety-api)
ai-services Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/content-safety/overview.md
+
+ Title: What is Azure AI Content Safety? (preview)
+
+description: Learn how to use Content Safety to track, flag, assess, and filter inappropriate material in user-generated content.
++++++ Last updated : 07/18/2023+
+keywords: content safety, Azure AI Content Safety, online content safety, content filtering software, content moderation service, content moderation
+
+#Customer intent: As a developer of content management software, I want to find out whether Azure AI Content Safety is the right solution for my moderation needs.
++
+# What is Azure AI Content Safety? (preview)
++
+Azure AI Content Safety detects harmful user-generated and AI-generated content in applications and services. Content Safety includes text and image APIs that let you detect harmful material. The interactive Content Safety Studio also lets you view, explore, and try out sample code for detecting harmful content across different modalities.
+
+Content filtering software can help your app comply with regulations or maintain the intended environment for your users.
+++
+This documentation contains the following article types:
+
+* **[Quickstarts](./quickstart-text.md)** are getting-started instructions to guide you through making requests to the service.
+* **[How-to guides](./how-to/use-blocklist.md)** contain instructions for using the service in more specific or customized ways.
+* **[Concepts](concepts/harm-categories.md)** provide in-depth explanations of the service functionality and features.
+
+## Where it's used
+
+The following are a few scenarios in which a software developer or team would require a content moderation service:
+
+- Online marketplaces that moderate product catalogs and other user-generated content.
+- Gaming companies that moderate user-generated game artifacts and chat rooms.
+- Social messaging platforms that moderate images and text added by their users.
+- Enterprise media companies that implement centralized moderation for their content.
+- K-12 education solution providers filtering out content that is inappropriate for students and educators.
+
+> [!IMPORTANT]
+> You cannot use Content Safety to detect illegal child exploitation images.
+
+## Product types
+
+There are different types of analysis available from this service. The following table describes the currently available APIs.
+
+| Type | Functionality |
+| :-- | :- |
+| Text Detection API | Scans text for sexual content, violence, hate, and self-harm with multi-severity levels. |
+| Image Detection API | Scans images for sexual content, violence, hate, and self-harm with multi-severity levels. |
+
+## Content Safety Studio
+
+[Azure AI Content Safety Studio](https://contentsafety.cognitive.azure.com) is an online tool designed to handle potentially offensive, risky, or undesirable content using cutting-edge content moderation ML models. It provides templates and customized workflows, enabling users to choose and build their own content moderation system. Users can upload their own content or try it out with provided sample content.
+
+Content Safety Studio not only contains the out-of-the-box AI models, but also includes Microsoft's built-in terms blocklists to flag profanities and stay up to date with new trends. You can also upload your own blocklists to enhance the coverage of harmful content that's specific to your use case.
+
+Studio also lets you set up a moderation workflow, where you can continuously monitor and improve content moderation performance. It can help you meet content requirements from all kinds of industries like gaming, media, education, E-commerce, and more. Businesses can easily connect their services to the Studio and have their content moderated in real time, whether user-generated or AI-generated.
+
+All of these capabilities are handled by the Studio and its backend; customers don't need to worry about model development. You can onboard your data for quick validation and monitor your KPIs accordingly, like technical metrics (latency, accuracy, recall), or business metrics (block rate, block volume, category proportions, language proportions and more). With simple operations and configurations, customers can test different solutions quickly and find the best fit, instead of spending time experimenting with custom models or doing moderation manually.
+
+> [!div class="nextstepaction"]
+> [Content Safety Studio](https://contentsafety.cognitive.azure.com)
++
+### Content Safety Studio features
+
+In Content Safety Studio, the following Content Safety service features are available:
+
+* [Moderate Text Content](https://contentsafety.cognitive.azure.com/text): With the text moderation tool, you can easily run tests on text content. Whether you want to test a single sentence or an entire dataset, our tool offers a user-friendly interface that lets you assess the test results directly in the portal. You can experiment with different sensitivity levels to configure your content filters and blocklist management, ensuring that your content is always moderated to your exact specifications. Plus, with the ability to export the code, you can implement the tool directly in your application, streamlining your workflow and saving time.
+
+* [Moderate Image Content](https://contentsafety.cognitive.azure.com/image): With the image moderation tool, you can easily run tests on images to ensure that they meet your content standards. Our user-friendly interface allows you to evaluate the test results directly in the portal, and you can experiment with different sensitivity levels to configure your content filters. Once you've customized your settings, you can easily export the code to implement the tool in your application.
+
+* [Monitor Online Activity](https://contentsafety.cognitive.azure.com/monitor): The powerful monitoring page allows you to easily track your moderation API usage and trends across different modalities. With this feature, you can access detailed response information, including category and severity distribution, latency, error, and blocklist detection. This information provides you with a complete overview of your content moderation performance, enabling you to optimize your workflow and ensure that your content is always moderated to your exact specifications. With our user-friendly interface, you can quickly and easily navigate the monitoring page to access the information you need to make informed decisions about your content moderation strategy. You have the tools you need to stay on top of your content moderation performance and achieve your content goals.
+
+## Input requirements
+
+The default maximum length for text submissions is 1000 characters. If you need to analyze longer blocks of text, you can split the input text (for example, by punctuation or spacing) across multiple related submissions.
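+
+For example, one simple approach is to group sentences into chunks that fit the limit and submit each chunk as its own request. The helper below is an illustrative sketch, not part of the service or SDK; it assumes no single sentence exceeds the limit.
+
+```python
+def split_text(text, max_length=1000):
+    """Group sentences into chunks of at most max_length characters (illustrative only)."""
+    chunks, current = [], ""
+    for sentence in text.split("."):
+        sentence = sentence.strip()
+        if not sentence:
+            continue
+        candidate = f"{current} {sentence}.".strip()
+        if len(candidate) > max_length and current:
+            chunks.append(current)
+            current = f"{sentence}."
+        else:
+            current = candidate
+    if current:
+        chunks.append(current)
+    return chunks
+
+# Each returned chunk can then be sent as a separate analyze request.
+```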
+
+The maximum size for image submissions is 4 MB, and image dimensions must be between 50 x 50 pixels and 2,048 x 2,048 pixels. Images can be in JPEG, PNG, GIF, BMP, TIFF, or WEBP formats.
+
+## Security
+
+### Use Azure Active Directory or Managed Identity to manage access
+
+For enhanced security, you can use Azure Active Directory (Azure AD) or Managed Identity (MI) to manage access to your resources.
+* Managed Identity is automatically enabled when you create a Content Safety resource.
+* Azure Active Directory is supported in both API and SDK scenarios. Refer to the general AI services guideline of [Authenticating with Azure Active Directory](/azure/ai-services/authentication?tabs=powershell#authenticate-with-azure-active-directory). You can also grant access to other users within your organization by assigning them the roles of **Cognitive Services Users** and **Reader**. To learn more about granting user access to Azure resources using the Azure portal, refer to the [Role-based access control guide](/azure/role-based-access-control/quickstart-assign-role-user-portal).
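+
+For example, with the Python SDK you can pass an Azure AD token credential instead of a key. This is a minimal sketch that assumes the SDK version you're using accepts a token credential and that your identity (or the resource's managed identity) has already been granted an appropriate role; the endpoint value is a placeholder.
+
+```python
+from azure.ai.contentsafety import ContentSafetyClient
+from azure.identity import DefaultAzureCredential
+
+endpoint = "<endpoint>"  # your Content Safety resource endpoint
+
+# DefaultAzureCredential resolves a managed identity, environment credentials, or an Azure CLI login.
+client = ContentSafetyClient(endpoint, DefaultAzureCredential())
+```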
+
+### Encryption of data at rest
+
+Learn how Content Safety handles the [encryption and decryption of your data](./how-to/encrypt-data-at-rest.md). Customer-managed keys (CMK), also known as Bring your own key (BYOK), offer greater flexibility to create, rotate, disable, and revoke access controls. You can also audit the encryption keys used to protect your data.
+
+## Pricing
+
+Currently, the public preview features are available in the **F0 and S0** pricing tier.
+
+## Service limits
+
+### Language availability
+
+This API supports eight languages: English, German, Japanese, Spanish, French, Italian, Portuguese, Chinese. You don't need to specify a language code for text analysis; we automatically detect your input language.
+
+### Region / location
+
+To use the preview APIs, you must create your Azure AI Content Safety resource in a supported region. Currently, the public preview features are available in the following Azure regions:
+
+- East US
+- West Europe
+
+Feel free to [contact us](mailto:acm-team@microsoft.com) if you need other regions for your business.
+
+### Query rates
+
+| Pricing Tier | Requests per second (RPS) |
+| :-- | :-- |
+| F0 | 5 |
+| S0 | 10 |
+
+If you need a faster rate, please [contact us](mailto:acm-team@microsoft.com) to request.
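+
+If your application occasionally exceeds its allowance, one option is a light client-side retry with backoff when the service throttles a call. The helper below is an illustrative sketch, not an official pattern; the retry count and backoff values are arbitrary.
+
+```python
+import time
+from azure.core.exceptions import HttpResponseError
+
+def analyze_with_retry(client, request, max_retries=3):
+    """Call analyze_text, retrying with exponential backoff on HTTP 429 (throttling)."""
+    for attempt in range(max_retries + 1):
+        try:
+            return client.analyze_text(request)
+        except HttpResponseError as e:
+            if e.status_code == 429 and attempt < max_retries:
+                time.sleep(2 ** attempt)  # back off before retrying
+                continue
+            raise
+```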
++
+## Contact us
+
+If you get stuck, [email us](mailto:acm-team@microsoft.com) or use the feedback widget on the upper right of any docs page.
+
+## Next steps
+
+Follow a quickstart to get started using Content Safety in your application.
+
+> [!div class="nextstepaction"]
+> [Content Safety quickstart](./quickstart-text.md)
ai-services Quickstart Image https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/content-safety/quickstart-image.md
+
+ Title: "Quickstart: Analyze image content"
+
+description: Get started using Content Safety to analyze image content for objectionable material.
+++++++ Last updated : 05/08/2023+
+zone_pivot_groups: programming-languages-content-safety
+keywords:
++
+# QuickStart: Analyze image content
+
+Get started with the Content Safety Studio, REST API, or client SDKs to do basic image moderation. The Content Safety service provides you with AI algorithms for flagging objectionable content. Follow these steps to try it out.
+
+> [!NOTE]
+>
+> The sample data and code may contain offensive content. User discretion is advised.
++++++++++++
+## Clean up resources
+
+If you want to clean up and remove an Azure AI services subscription, you can delete the resource or resource group. Deleting the resource group also deletes any other resources associated with it.
+
+- [Portal](../multi-service-resource.md?pivots=azportal#clean-up-resources)
+- [Azure CLI](../multi-service-resource.md?pivots=azcli#clean-up-resources)
+
+## Next steps
+
+Configure filters for each category and test on datasets using [Content Safety Studio](studio-quickstart.md). Then export the code and deploy it in your application.
+
+> [!div class="nextstepaction"]
+> [Content Safety Studio quickstart](./studio-quickstart.md)
ai-services Quickstart Text https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/content-safety/quickstart-text.md
+
+ Title: "Quickstart: Analyze image and text content"
+
+description: Get started using Content Safety to analyze image and text content for objectionable material.
+++++++ Last updated : 07/18/2023+
+zone_pivot_groups: programming-languages-content-safety
+keywords:
++
+# QuickStart: Analyze text content
+
+Get started with the Content Safety Studio, REST API, or client SDKs to do basic text moderation. The Content Safety service provides you with AI algorithms for flagging objectionable content. Follow these steps to try it out.
+
+> [!NOTE]
+>
+> The sample data and code may contain offensive content. User discretion is advised.
++++++++++++
+## Clean up resources
+
+If you want to clean up and remove an Azure AI services subscription, you can delete the resource or resource group. Deleting the resource group also deletes any other resources associated with it.
+
+- [Portal](../multi-service-resource.md?pivots=azportal#clean-up-resources)
+- [Azure CLI](../multi-service-resource.md?pivots=azcli#clean-up-resources)
+
+## Next steps
+Configure filters for each category and test on datasets using [Content Safety Studio](studio-quickstart.md). Then export the code and deploy it in your application.
+
+> [!div class="nextstepaction"]
+> [Content Safety Studio quickstart](./studio-quickstart.md)
ai-services Studio Quickstart https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/content-safety/studio-quickstart.md
+
+ Title: "Quickstart: Content Safety Studio"
+
+description: In this quickstart, get started with the Content Safety service using Content Safety Studio in your browser.
+++++++ Last updated : 04/27/2023+++
+# QuickStart: Azure AI Content Safety Studio
+
+In this quickstart, get started with the Content Safety service using Content Safety Studio in your browser.
+
+> [!CAUTION]
+> Some of the sample content provided by Content Safety Studio may be offensive. Sample images are blurred by default. User discretion is advised.
+
+## Prerequisites
+
+* An active Azure account. If you don't have one, you can [create one for free](https://azure.microsoft.com/free/cognitive-services/).
+* A [Content Safety](https://aka.ms/acs-create) Azure resource.
+* Sign in to [Content Safety Studio](https://contentsafety.cognitive.azure.com) with your Azure subscription and Content Safety resource.
++
+## Analyze text content
+The [Moderate text content](https://contentsafety.cognitive.azure.com/text) page lets you quickly try out text moderation.
++
+1. Select the **Moderate text content** panel.
+1. Add text to the input field, or select sample text from the panels on the page.
+1. Select **Run test**.
+
+The service returns all the categories that were detected, with the severity level for each (0-Safe, 2-Low, 4-Medium, 6-High). It also returns a binary **Accepted**/**Rejected** result, based on the filters you configure. Use the matrix in the **Configure filters** tab on the right to set your allowed/prohibited severity levels for each category. Then you can run the text again to see how the filter works.
+
+The **Use blocklist** tab on the right lets you create, edit, and add a blocklist to the moderation workflow. If you have a blocklist enabled when you run the test, you get a **Blocklist detection** panel under **Results**. It reports any matches with the blocklist.
+
+## Analyze image content
+The [Moderate image content](https://contentsafety.cognitive.azure.com/image) page lets you quickly try out image moderation.
++
+1. Select the **Moderate image content** panel.
+1. Select a sample image from the panels on the page, or upload your own image. The maximum size for image submissions is 4 MB, and image dimensions must be between 50 x 50 pixels and 2,048 x 2,048 pixels. Images can be in JPEG, PNG, GIF, BMP, TIFF, or WEBP formats.
+1. Select **Run test**.
+
+The service returns all the categories that were detected, with the severity level for each (0-Safe, 2-Low, 4-Medium, 6-High). It also returns a binary **Accepted**/**Rejected** result, based on the filters you configure. Use the matrix in the **Configure filters** tab on the right to set your allowed/prohibited severity levels for each category. Then you can run the test again to see how the filter works.
+
+## View and export code
+You can use the **View Code** feature on both the *Analyze text content* and *Analyze image content* pages to view and copy the sample code, which includes configuration for severity filtering, blocklists, and moderation functions. You can then deploy the code on your end.
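+
+Conceptually, the exported code compares each category's returned severity with the threshold you configured. A minimal, hypothetical Python sketch of that accept/reject check (the threshold values are examples, not service defaults):
+
+```python
+# Reject when a category's severity is at or above its configured threshold.
+thresholds = {"Hate": 4, "Sexual": 4, "SelfHarm": 4, "Violence": 2}
+
+# Hypothetical severities from an analyze call (0-Safe, 2-Low, 4-Medium, 6-High).
+severities = {"Hate": 0, "Sexual": 0, "SelfHarm": 0, "Violence": 4}
+
+rejected = [c for c, s in severities.items() if s >= thresholds.get(c, 6)]
+print("Rejected" if rejected else "Accepted", rejected)
+```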
+++
+## Monitor online activity
+
+The [Monitor online activity](https://contentsafety.cognitive.azure.com/monitor) page lets you view your API usage and trends.
++
+You can choose which **Media type** to monitor. You can also specify the time range that you want to check by selecting **Show data for the last __**.
+
+In the **Reject rate per category** chart, you can also adjust the severity thresholds for each category.
++
+You can also edit blocklists if you want to change some terms, based on the **Top 10 blocked terms** chart.
+
+## Manage your resource
+To view resource details such as name and pricing tier, select the **Settings** icon in the top-right corner of the Content Safety Studio home page and select the **Resource** tab. If you have other resources, you can switch resources here as well.
++
+## Clean up resources
+
+If you want to clean up and remove an Azure AI services resource, you can delete the resource or resource group. Deleting the resource group also deletes any other resources associated with it.
+
+- [Portal](../multi-service-resource.md?pivots=azportal#clean-up-resources)
+- [Azure CLI](../multi-service-resource.md?pivots=azcli#clean-up-resources)
+
+## Next steps
+
+Next, get started using Content Safety through the REST APIs or a client SDK, so you can seamlessly integrate the service into your application.
+
+> [!div class="nextstepaction"]
+> [Quickstart: REST API and client SDKs](./quickstart-text.md)
ai-services Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/content-safety/whats-new.md
+
+ Title: What's new in Content Safety?
+
+description: Stay up to date on recent releases and updates to Azure AI Content Safety.
+++++++ Last updated : 04/07/2023+++
+# What's new in Content Safety
+
+Learn what's new in the service. These items may be release notes, videos, blog posts, and other types of information. Bookmark this page to stay up to date with new features, enhancements, fixes, and documentation updates.
+
+## July 2023
+
+### Content Safety C# SDK
+
+The Azure AI Content Safety service is now available through a C# SDK. The SDK is available on [NuGet](https://www.nuget.org/packages/Azure.AI.ContentSafety/). Follow the [quickstart](./quickstart-text.md) to get started.
+
+## May 2023
+
+### Content Safety public preview
+
+Azure AI Content Safety detects material that is potentially offensive, risky, or otherwise undesirable. This service offers state-of-the-art text and image models that detect problematic content. Azure AI Content Safety helps make applications and services safer from harmful user-generated and AI-generated content. Follow a [quickstart](./quickstart-text.md) to get started.
+
+## Azure AI services updates
+
+[Azure update announcements for Azure AI services](https://azure.microsoft.com/updates/?product=cognitive-services)
ai-services Create Account Bicep https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/create-account-bicep.md
+
+ Title: Create an Azure AI services resource using Bicep | Microsoft Docs
+description: Create an Azure AI service resource with Bicep.
+keywords: Azure AI services, cognitive solutions, cognitive intelligence, cognitive artificial intelligence
++++ Last updated : 01/19/2023++++
+# Quickstart: Create an Azure AI services resource using Bicep
+
+Follow this quickstart to create Azure AI services resource using Bicep.
+
+Azure AI services are cloud-based artificial intelligence (AI) services that help developers build cognitive intelligence into applications without having direct AI or data science skills or knowledge. They are available through REST APIs and client library SDKs in popular development languages. Azure AI services enables developers to easily add cognitive features into their applications with cognitive solutions that can see, hear, speak, and analyze.
++
+## Things to consider
+
+Using Bicep to create an Azure AI services resource lets you create a multi-service resource. This enables you to:
+
+* Access multiple Azure AI services with a single key and endpoint.
+* Consolidate billing from the services you use.
+* [!INCLUDE [terms-azure-portal](./includes/quickstarts/terms-azure-portal.md)]
+
+## Prerequisites
+
+* If you don't have an Azure subscription, [create one for free](https://azure.microsoft.com/free/cognitive-services).
+
+## Review the Bicep file
+
+The Bicep file used in this quickstart is from [Azure Quickstart Templates](https://azure.microsoft.com/resources/templates/cognitive-services-universalkey/).
+
+> [!NOTE]
+> * If you use a different resource `kind` (listed below), you may need to change the `sku` parameter to match the [pricing](https://azure.microsoft.com/pricing/details/cognitive-services/) tier you wish to use. For example, the `TextAnalytics` kind uses `S` instead of `S0`.
+> * Many of the Azure AI services have a free `F0` pricing tier that you can use to try the service.
+
+Be sure to change the `sku` parameter to the [pricing](https://azure.microsoft.com/pricing/details/cognitive-services/) instance you want. The `sku` depends on the resource `kind` that you are using. For example, the `TextAnalytics` kind uses `S` instead of `S0`.
++
+One Azure resource is defined in the Bicep file: [Microsoft.CognitiveServices/accounts](/azure/templates/microsoft.cognitiveservices/accounts) specifies that it is an Azure AI services resource. The `kind` field in the Bicep file defines the type of resource.
++
+## Deploy the Bicep file
+
+1. Save the Bicep file as **main.bicep** to your local computer.
+1. Deploy the Bicep file using either Azure CLI or Azure PowerShell.
+
+ # [CLI](#tab/CLI)
+
+ ```azurecli
+ az group create --name exampleRG --location eastus
+ az deployment group create --resource-group exampleRG --template-file main.bicep
+ ```
+
+ # [PowerShell](#tab/PowerShell)
+
+ ```azurepowershell
+ New-AzResourceGroup -Name exampleRG -Location eastus
+ New-AzResourceGroupDeployment -ResourceGroupName exampleRG -TemplateFile ./main.bicep
+ ```
+
+
+
+ When the deployment finishes, you should see a message indicating the deployment succeeded.
+
+## Review deployed resources
+
+Use the Azure portal, Azure CLI, or Azure PowerShell to list the deployed resources in the resource group.
+
+# [CLI](#tab/CLI)
+
+```azurecli-interactive
+az resource list --resource-group exampleRG
+```
+
+# [PowerShell](#tab/PowerShell)
+
+```azurepowershell-interactive
+Get-AzResource -ResourceGroupName exampleRG
+```
+++
+## Clean up resources
+
+When no longer needed, use the Azure portal, Azure CLI, or Azure PowerShell to delete the resource group and its resources.
+
+# [CLI](#tab/CLI)
+
+```azurecli-interactive
+az group delete --name exampleRG
+```
+
+# [PowerShell](#tab/PowerShell)
+
+```azurepowershell-interactive
+Remove-AzResourceGroup -Name exampleRG
+```
+++
+If you need to recover a deleted resource, see [Recover deleted Azure AI services resources](manage-resources.md).
+
+## See also
+
+* See **[Authenticate requests to Azure AI services](authentication.md)** on how to securely work with Azure AI services.
+* See **[What are Azure AI services?](./what-are-ai-services.md)** for a list of Azure AI services.
+* See **[Natural language support](language-support.md)** to see the list of natural languages that Azure AI services supports.
+* See **[Use Azure AI services as containers](cognitive-services-container-support.md)** to understand how to use Azure AI services on-prem.
+* See **[Plan and manage costs for Azure AI services](plan-manage-costs.md)** to estimate cost of using Azure AI services.
ai-services Create Account Resource Manager Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/create-account-resource-manager-template.md
+
+ Title: Create an Azure AI services resource using ARM templates | Microsoft Docs
+description: Create an Azure AI service resource with ARM template.
+keywords: Azure AI services, cognitive solutions, cognitive intelligence, cognitive artificial intelligence
+++++ Last updated : 09/01/2022++++
+# Quickstart: Create an Azure AI services resource using an ARM template
+
+This quickstart describes how to use an Azure Resource Manager template (ARM template) to create Azure AI services.
+
+Azure AI services are cloud-based artificial intelligence (AI) services that help developers build cognitive intelligence into applications without having direct AI or data science skills or knowledge. They are available through REST APIs and client library SDKs in popular development languages. Azure AI services enables developers to easily add cognitive features into their applications with cognitive solutions that can see, hear, speak, and analyze.
+
+Create a resource using an Azure Resource Manager template (ARM template). This multi-service resource lets you:
+
+* Access multiple Azure AI services with a single key and endpoint.
+* Consolidate billing from the services you use.
+* [!INCLUDE [terms-azure-portal](./includes/quickstarts/terms-azure-portal.md)]
++
+## Prerequisites
+
+* If you don't have an Azure subscription, [create one for free](https://azure.microsoft.com/free/cognitive-services).
+
+## Review the template
+
+The template used in this quickstart is from [Azure Quickstart Templates](https://azure.microsoft.com/resources/templates/cognitive-services-universalkey/).
++
+One Azure resource is defined in the Bicep file: [Microsoft.CognitiveServices/accounts](/azure/templates/microsoft.cognitiveservices/accounts) specifies that it is an Azure AI services resource. The `kind` field in the Bicep file defines the type of resource.
++
+## Deploy the template
+
+# [Azure portal](#tab/portal)
+
+1. Select the **Deploy to Azure** button.
+
+ [![Deploy to Azure](../media/template-deployments/deploy-to-azure.svg)](https://portal.azure.com/#create/Microsoft.Template/uri/https%3A%2F%2Fraw.githubusercontent.com%2FAzure%2Fazure-quickstart-templates%2Fmaster%2Fquickstarts%2Fmicrosoft.cognitiveservices%2Fcognitive-services-universalkey%2Fazuredeploy.json)
+
+2. Enter the following values.
+
+ |Value |Description |
+ |||
+ | **Subscription** | Select an Azure subscription. |
+ | **Resource group** | Select **Create new**, enter a unique name for the resource group, and then select **OK**. |
+ | **Region** | Select a region. For example, **East US** |
+ | **Cognitive Service Name** | Replace with a unique name for your Azure AI services resource. You will need the name in the next section when you validate the deployment. |
+ | **Location** | Replace with the region used above. |
+ | **Sku** | The [pricing tier](https://azure.microsoft.com/pricing/details/cognitive-services/) for your resource. |
+
+ :::image type="content" source="media/arm-template/universal-key-portal-template.png" alt-text="Resource creation screen.":::
+
+3. Select **Review + Create**, then **Create**. After the resource has successfully finished deploying, the **Go to resource** button will be highlighted.
+
+# [Azure CLI](#tab/CLI)
+
+> [!NOTE]
+> `az deployment group create` requires Azure CLI version 2.6 or later. To display the version, type `az --version`. For more information, see the [documentation](/cli/azure/deployment/group).
+
+Run the following script via the Azure CLI, either from [your local machine](/cli/azure/install-azure-cli), or from a browser by using the **Try it** button. Enter a name and location (for example `centralus`) for a new resource group, and the ARM template will be used to deploy an Azure AI services resource within it. Remember the name you use. You will use it later to validate the deployment.
+
+```azurecli-interactive
+read -p "Enter a name for your new resource group:" resourceGroupName &&
+read -p "Enter the location (i.e. centralus):" location &&
+templateUri="https://raw.githubusercontent.com/Azure/azure-quickstart-templates/master/quickstarts/microsoft.cognitiveservices/cognitive-services-universalkey/azuredeploy.json" &&
+az group create --name $resourceGroupName --location "$location" &&
+az deployment group create --resource-group $resourceGroupName --template-uri $templateUri &&
+echo "Press [ENTER] to continue ..." &&
+read
+```
+++
+> [!Tip]
+> If your subscription doesn't allow you to create an Azure AI services resource, you may need to enable the privilege of that [Azure resource provider](../azure-resource-manager/management/resource-providers-and-types.md#register-resource-provider) using the [Azure portal](../azure-resource-manager/management/resource-providers-and-types.md#azure-portal), [PowerShell command](../azure-resource-manager/management/resource-providers-and-types.md#azure-powershell) or an [Azure CLI command](../azure-resource-manager/management/resource-providers-and-types.md#azure-cli). If you are not the subscription owner, ask the *Subscription Owner* or someone with a role of *admin* to complete the registration for you or ask for the **/register/action** privileges to be granted to your account.
+
+## Review deployed resources
+
+# [Portal](#tab/portal)
+
+When your deployment finishes, you will be able to select the **Go to resource** button to see your new resource. You can also find the resource group by:
+
+1. Selecting **Resource groups** from the left navigation menu.
+2. Selecting the resource group name.
+
+# [Azure CLI](#tab/CLI)
+
+Using the Azure CLI, run the following script, and enter the name of the resource group you created earlier.
+
+```azurecli-interactive
+echo "Enter the resource group where the Azure AI services resource exists:" &&
+read resourceGroupName &&
+az cognitiveservices account list -g $resourceGroupName
+```
++++
+## Clean up resources
+
+If you want to clean up and remove an Azure AI services subscription, you can delete the resource or resource group. Deleting the resource group also deletes any other resources contained in the group.
+
+# [Azure portal](#tab/portal)
+
+1. In the Azure portal, expand the menu on the left side to open the menu of services, and choose **Resource Groups** to display the list of your resource groups.
+2. Locate the resource group containing the resource to be deleted.
+3. Right-click on the resource group listing. Select **Delete resource group**, and confirm.
+
+# [Azure CLI](#tab/CLI)
+
+Using the Azure CLI, run the following script, and enter the name of the resource group you created earlier.
+
+```azurecli-interactive
+echo "Enter the resource group name, for deletion:" &&
+read resourceGroupName &&
+az group delete --name $resourceGroupName
+```
+++
+If you need to recover a deleted resource, see [Recover deleted Azure AI services resources](manage-resources.md).
+
+## See also
+
+* See **[Authenticate requests to Azure AI services](authentication.md)** to learn how to work securely with Azure AI services.
+* See **[What are Azure AI services?](./what-are-ai-services.md)** for a list of Azure AI services.
+* See **[Natural language support](language-support.md)** for the list of natural languages that Azure AI services support.
+* See **[Use Azure AI services as containers](cognitive-services-container-support.md)** to understand how to use Azure AI services on-premises.
+* See **[Plan and manage costs for Azure AI services](plan-manage-costs.md)** to estimate the cost of using Azure AI services.
ai-services Create Account Terraform https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/create-account-terraform.md
+
+ Title: 'Quickstart: Create an Azure AI services resource using Terraform'
+description: 'In this article, you create an Azure AI services resource using Terraform'
+keywords: Azure AI services, cognitive solutions, cognitive intelligence, cognitive artificial intelligence
+++ Last updated : 4/14/2023+++
+content_well_notification:
+ - AI-contribution
++
+# Quickstart: Create an Azure AI services resource using Terraform
+
+This article shows how to create an [Azure AI services account](multi-service-resource.md?pivots=azportal) by using [Terraform](/azure/developer/terraform/quickstart-configure).
+
+Azure AI services are cloud-based artificial intelligence (AI) services that help developers build cognitive intelligence into applications without having direct AI or data science skills or knowledge. They are available through REST APIs and client library SDKs in popular development languages. Azure AI services enables developers to easily add cognitive features into their applications with cognitive solutions that can see, hear, speak, and analyze.
++
+In this article, you learn how to:
+
+> [!div class="checklist"]
+> * Create a random pet name for the Azure resource group name using [random_pet](https://registry.terraform.io/providers/hashicorp/random/latest/docs/resources/pet)
+> * Create an Azure resource group using [azurerm_resource_group](https://registry.terraform.io/providers/hashicorp/azurerm/latest/docs/resources/resource_group)
+> * Create a random string using [random_string](https://registry.terraform.io/providers/hashicorp/random/latest/docs/resources/string)
+> * Create an Azure AI services account using [azurerm_cognitive_account](https://registry.terraform.io/providers/hashicorp/azurerm/latest/docs/resources/cognitive_account)
+
+## Prerequisites
+
+- [Install and configure Terraform](/azure/developer/terraform/quickstart-configure)
+
+## Implement the Terraform code
+
+> [!NOTE]
+> The sample code for this article is located in the [Azure Terraform GitHub repo](https://github.com/Azure/terraform/tree/master/quickstart/101-cognitive-services-account). You can view the log file containing the [test results from current and previous versions of Terraform](https://github.com/Azure/terraform/tree/master/quickstart/101-cognitive-services-account/TestRecord.md).
+>
+> See more [articles and sample code showing how to use Terraform to manage Azure resources](/azure/terraform)
+
+1. Create a directory in which to test and run the sample Terraform code and make it the current directory.
+
+1. Create a file named `main.tf` and insert the following code:
+
+ [!code-terraform[master](~/terraform_samples/quickstart/101-cognitive-services-account/main.tf)]
+
+1. Create a file named `outputs.tf` and insert the following code:
+
+ [!code-terraform[master](~/terraform_samples/quickstart/101-cognitive-services-account/outputs.tf)]
+
+1. Create a file named `providers.tf` and insert the following code:
+
+ [!code-terraform[master](~/terraform_samples/quickstart/101-cognitive-services-account/providers.tf)]
+
+1. Create a file named `variables.tf` and insert the following code:
+
+ [!code-terraform[master](~/terraform_samples/quickstart/101-cognitive-services-account/variables.tf)]
+
+## Initialize Terraform
++
+## Create a Terraform execution plan
++
+## Apply a Terraform execution plan
++
+## Verify the results
+
+#### [Azure CLI](#tab/azure-cli)
+
+1. Get the name of the Azure resource group in which the Azure AI services account was created.
+
+ ```console
+ resource_group_name=$(terraform output -raw resource_group_name)
+ ```
+
+1. Get the Azure AI services account name.
+
+ ```console
+ azurerm_cognitive_account_name=$(terraform output -raw azurerm_cognitive_account_name)
+ ```
+
+1. Run [az cognitiveservices account show](/cli/azure/cognitiveservices/account#az-cognitiveservices-account-show) to show the Azure AI services account you created in this article.
+
+ ```azurecli
+ az cognitiveservices account show --name $azurerm_cognitive_account_name \
+ --resource-group $resource_group_name
+ ```
+
+#### [Azure PowerShell](#tab/azure-powershell)
+
+1. Get the name of the Azure resource group in which the Azure AI services account was created.
+
+ ```console
+ $resource_group_name=$(terraform output -raw resource_group_name)
+ ```
+
+1. Get the Azure AI services account name.
+
+ ```console
+ $azurerm_cognitive_account_name=$(terraform output -raw azurerm_cognitive_account_name)
+ ```
+
+1. Run [Get-AzCognitiveServicesAccount](/powershell/module/az.cognitiveservices/get-azcognitiveservicesaccount) to display information about the new service.
+
+ ```azurepowershell
+ Get-AzCognitiveServicesAccount -ResourceGroupName $resource_group_name `
+ -Name $azurerm_cognitive_account_name
+ ```
+++
+## Clean up resources
++
+## Troubleshoot Terraform on Azure
+
+[Troubleshoot common problems when using Terraform on Azure](/azure/developer/terraform/troubleshoot)
+
+## Next steps
+
+> [!div class="nextstepaction"]
+> [Recover deleted Azure AI services resources](manage-resources.md)
ai-services Copy Move Projects https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/custom-vision-service/copy-move-projects.md
+
+ Title: Copy and back up Custom Vision projects
+
+description: Learn how to use the ExportProject and ImportProject APIs to copy and back up your Custom Vision projects.
+++++ Last updated : 01/20/2022+++
+# Copy and back up your Custom Vision projects
+
+After you've created and trained a Custom Vision project, you may want to copy your project to another resource. If your app or business depends on the use of a Custom Vision project, we recommend you copy your model to another Custom Vision account in another region. Then if a regional outage occurs, you can access your project in the region where it was copied.
+
+The **[ExportProject](https://westus2.dev.cognitive.microsoft.com/docs/services/Custom_Vision_Training_3.3/operations/5eb0bcc6548b571998fddeb3)** and **[ImportProject](https://westus2.dev.cognitive.microsoft.com/docs/services/Custom_Vision_Training_3.3/operations/5eb0bcc7548b571998fddee3)** APIs enable this scenario by allowing you to copy projects from one Custom Vision account into others. This guide shows you how to use these REST APIs with cURL. You can also use an HTTP request service like Postman to issue the requests.
+
+> [!TIP]
+> For an example of this scenario using the Python client library, see the [Move Custom Vision Project](https://github.com/Azure-Samples/custom-vision-move-project/tree/master/) repository on GitHub.
++
+## Prerequisites
+
+- Two Azure AI Custom Vision resources. If you don't have them, go to the Azure portal and [create a new Custom Vision resource](https://portal.azure.com/?microsoft_azure_marketplace_ItemHideKey=microsoft_azure_cognitiveservices_customvision#create/Microsoft.CognitiveServicesCustomVision?azure-portal=true).
+- The training keys and endpoint URLs of your Custom Vision resources. You can find these values on the resource's **Overview** tab on the Azure portal.
+- A created Custom Vision project. See [Build a classifier](./getting-started-build-a-classifier.md) for instructions on how to do this.
+- [PowerShell version 6.0+](/powershell/scripting/install/installing-powershell-core-on-windows), or a similar command-line utility.
+
+## Process overview
+
+The process for copying a project consists of the following steps:
+
+1. First, you get the ID of the project in your source account you want to copy.
+1. Then you call the **ExportProject** API using the project ID and the training key of your source account. You'll get a temporary token string.
+1. Then you call the **ImportProject** API using the token string and the training key of your target account. The project will then be listed under your target account.
+
+## Get the project ID
+
+First call **[GetProjects](https://westus2.dev.cognitive.microsoft.com/docs/services/Custom_Vision_Training_3.3/operations/5eb0bcc6548b571998fddead)** to see a list of your existing Custom Vision projects and their IDs. Use the training key and endpoint of your source account.
+
+```curl
+curl -v -X GET "{endpoint}/customvision/v3.3/Training/projects"
+-H "Training-key: {training key}"
+```
+
+You'll get a `200 OK` response with a list of projects and their metadata in the body. The `"id"` value is the string to copy for the next steps.
+
+```json
+[
+ {
+ "id": "00000000-0000-0000-0000-000000000000",
+ "name": "string",
+ "description": "string",
+ "settings": {
+ "domainId": "00000000-0000-0000-0000-000000000000",
+ "classificationType": "Multiclass",
+ "targetExportPlatforms": [
+ "CoreML"
+ ],
+ "useNegativeSet": true,
+ "detectionParameters": "string",
+ "imageProcessingSettings": {
+ "augmentationMethods": {}
+ }
+ },
+ "created": "string",
+ "lastModified": "string",
+ "thumbnailUri": "string",
+ "drModeEnabled": true,
+ "status": "Succeeded"
+ }
+]
+```
+
+## Export the project
+
+Call **[ExportProject](https://westus2.dev.cognitive.microsoft.com/docs/services/Custom_Vision_Training_3.3/operations/5eb0bcc6548b571998fddeb3)** using the project ID and your source training key and endpoint.
+
+```curl
+curl -v -X GET "{endpoint}/customvision/v3.3/Training/projects/{projectId}/export"
+-H "Training-key: {training key}"
+```
+
+You'll get a `200 OK` response with metadata about the exported project and a reference string `"token"`. Copy the value of the token.
+
+```json
+{
+ "iterationCount": 0,
+ "imageCount": 0,
+ "tagCount": 0,
+ "regionCount": 0,
+ "estimatedImportTimeInMS": 0,
+ "token": "string"
+}
+```
+
+> [!TIP]
+> If you get an "Invalid Token" error when you import your project, it could be that the token URL string isn't web encoded. You can encode the token using a [URL Encoder](https://meyerweb.com/eric/tools/dencoder/).
+
+## Import the project
+
+Call **[ImportProject](https://westus2.dev.cognitive.microsoft.com/docs/services/Custom_Vision_Training_3.3/operations/5eb0bcc7548b571998fddee3)** using your target training key and endpoint, along with the reference token. You can also give your project a name in its new account.
+
+```curl
+curl -v -G -X POST "{endpoint}/customvision/v3.3/Training/projects/import"
+--data-urlencode "token={token}" --data-urlencode "name={name}"
+-H "Training-key: {training key}" -H "Content-Length: 0"
+```
+
+You'll get a `200 OK` response with metadata about your newly imported project.
+
+```json
+{
+ "id": "00000000-0000-0000-0000-000000000000",
+ "name": "string",
+ "description": "string",
+ "settings": {
+ "domainId": "00000000-0000-0000-0000-000000000000",
+ "classificationType": "Multiclass",
+ "targetExportPlatforms": [
+ "CoreML"
+ ],
+ "useNegativeSet": true,
+ "detectionParameters": "string",
+ "imageProcessingSettings": {
+ "augmentationMethods": {}
+ }
+ },
+ "created": "string",
+ "lastModified": "string",
+ "thumbnailUri": "string",
+ "drModeEnabled": true,
+ "status": "Succeeded"
+}
+```
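+
+Putting the two calls together, here's a minimal Python sketch that uses the `requests` library to mirror the cURL commands above. It's an illustration rather than an official sample; the endpoint, key, project ID, and project name values are placeholders, and error handling is kept to a minimum.
+
+```python
+# Minimal sketch: copy a project between Custom Vision accounts via the REST API.
+# All angle-bracket values are placeholders you replace with your own.
+import requests
+
+source_endpoint = "<source endpoint>"
+source_key = "<source training key>"
+target_endpoint = "<target endpoint>"
+target_key = "<target training key>"
+project_id = "<source project ID>"
+
+# ExportProject: returns a temporary reference token for the project.
+export = requests.get(
+    f"{source_endpoint}/customvision/v3.3/Training/projects/{project_id}/export",
+    headers={"Training-key": source_key},
+)
+export.raise_for_status()
+token = export.json()["token"]
+
+# ImportProject: requests URL-encodes the token and name query parameters for you.
+imported = requests.post(
+    f"{target_endpoint}/customvision/v3.3/Training/projects/import",
+    params={"token": token, "name": "<new project name>"},
+    headers={"Training-key": target_key},
+)
+imported.raise_for_status()
+print("Imported project ID:", imported.json()["id"])
+```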
+
+## Next steps
+
+In this guide, you learned how to copy and move a project between Custom Vision resources. Next, explore the API reference docs to see what else you can do with Custom Vision.
+* [REST API reference documentation](/rest/api/custom-vision/)
ai-services Custom Vision Onnx Windows Ml https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/custom-vision-service/custom-vision-onnx-windows-ml.md
+
+ Title: "Use an ONNX model with Windows ML - Custom Vision Service"
+
+description: Learn how to create a Windows UWP app that uses an ONNX model exported from Azure AI services.
+++++++ Last updated : 04/29/2020+
+#Customer intent: As a developer, I want to use a custom vision model with Windows ML.
++
+# Use an ONNX model from Custom Vision with Windows ML (preview)
+
+Learn how to use an ONNX model exported from the Custom Vision service with Windows ML (preview).
+
+In this guide, you'll learn how to use an ONNX file exported from the Custom Vision Service with Windows ML. You'll use the example UWP application with your own trained image classifier.
+
+## Prerequisites
+
+* Windows 10 version 1809 or higher
+* Windows SDK for build 17763 or higher
+* Visual Studio 2017 version 15.7 or later with the __Universal Windows Platform development__ workload enabled.
+* Developer mode enabled on your PC. For more information, see [Enable your device for development](/windows/uwp/get-started/enable-your-device-for-development).
+
+## About the example app
+
+The included application is a generic Windows UWP app. It allows you to select an image from your computer, which is then processed by a locally stored classification model. The tags and scores returned by the model are displayed next to the image.
+
+## Get the example code
+
+The example application is available at the [Azure AI services ONNX Custom Vision Sample](https://github.com/Azure-Samples/cognitive-services-onnx-customvision-sample) repo on GitHub. Clone it to your local machine and open *SampleOnnxEvaluationApp.sln* in Visual Studio.
+
+## Test the application
+
+1. Use the `F5` key to start the application from Visual Studio. You may be prompted to enable Developer mode.
+1. When the application starts, use the button to select an image for scoring. The default ONNX model is trained to classify different types of plankton.
+
+## Use your own model
+
+To use your own image classifier model, follow these steps:
+
+1. Create and train a classifier with the Custom Vision Service. For instructions on how to do this, see [Create and train a classifier](./getting-started-build-a-classifier.md). Use one of the **compact** domains such as **General (compact)**.
+ * If you have an existing classifier that uses a different domain, you can convert it to **compact** in the project settings. Then, re-train your project before continuing.
+1. Export your model. Switch to the Performance tab and select an iteration that was trained with a **compact** domain. Select the **Export** button that appears. Then select **ONNX**, and then **Export**. Once the file is ready, select the **Download** button. For more information on export options, see [Export your model](./export-your-model.md).
+1. Open the downloaded *.zip* file and extract the *model.onnx* file from it. This file contains your classifier model.
+1. In the Solution Explorer in Visual Studio, right-click the **Assets** Folder and select __Add Existing Item__. Select your ONNX file.
+1. In Solution Explorer, right-click the ONNX file and select **Properties**. Change the following properties for the file:
+ * __Build Action__ -> __Content__
+ * __Copy to Output Directory__ -> __Copy if newer__
+1. Then open _MainPage.xaml.cs_ and change the value of `_ourOnnxFileName` to the name of your ONNX file.
+1. Press `F5` to build and run the project.
+1. Select the button to choose an image to evaluate.
+
+## Next steps
+
+To discover other ways to export and use a Custom Vision model, see the following documents:
+
+* [Export your model](./export-your-model.md)
+* [Use exported Tensorflow model in an Android application](https://github.com/Azure-Samples/cognitive-services-android-customvision-sample)
+* [Use exported CoreML model in a Swift iOS application](https://go.microsoft.com/fwlink/?linkid=857726)
+* [Use exported CoreML model in an iOS application with Xamarin](https://github.com/xamarin/ios-samples/tree/master/ios11/CoreMLAzureModel)
+
+For more information on using ONNX models with Windows ML, see [Integrate a model into your app with Windows ML](/windows/ai/windows-ml/integrate-model).
ai-services Encrypt Data At Rest https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/custom-vision-service/encrypt-data-at-rest.md
+
+ Title: Custom Vision encryption of data at rest
+
+description: Microsoft offers Microsoft-managed encryption keys, and also lets you manage your Azure AI services subscriptions with your own keys, called customer-managed keys (CMK). This article covers data encryption at rest for Custom Vision, and how to enable and manage CMK.
++++++ Last updated : 08/28/2020++
+#Customer intent: As a user of the Face service, I want to learn how encryption at rest works.
++
+# Custom Vision encryption of data at rest
+
+Azure AI Custom Vision automatically encrypts your data when it's persisted to the cloud. Custom Vision encryption protects your data and helps you meet your organizational security and compliance commitments.
++
+> [!IMPORTANT]
+> Customer-managed keys are only available for resources created after May 11, 2020. To use CMK with Custom Vision, you will need to create a new Custom Vision resource. Once the resource is created, you can use Azure Key Vault to set up your managed identity.
++
+## Next steps
+
+* For a full list of services that support CMK, see [Customer-Managed Keys for Azure AI services](../encryption/cognitive-services-encryption-keys-portal.md)
+* [What is Azure Key Vault](../../key-vault/general/overview.md)?
+* [Azure AI services Customer-Managed Key Request Form](https://aka.ms/cogsvc-cmk)
ai-services Export Delete Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/custom-vision-service/export-delete-data.md
+
+ Title: View or delete your data - Custom Vision Service
+
+description: You maintain full control over your data. This article explains how you can view, export or delete your data in the Custom Vision Service.
+++++++ Last updated : 03/21/2019++++
+# View or delete user data in Custom Vision
+
+Custom Vision collects user data to operate the service, but customers have full control over viewing and deleting their data using the Custom Vision [Training APIs](https://go.microsoft.com/fwlink/?linkid=865446).
++
+To learn how to view and delete user data in Custom Vision, see the following table.
+
+| Data | View operation | Delete operation |
+| - | - | - |
+| Account info (Keys) | [GetAccountInfo](https://go.microsoft.com/fwlink/?linkid=865446) | Delete using the Azure portal (Azure subscriptions), or use the "Delete Your Account" button on the CustomVision.ai settings page (Microsoft account subscriptions) |
+| Iteration details | [GetIteration](https://go.microsoft.com/fwlink/?linkid=865446) | [DeleteIteration](https://go.microsoft.com/fwlink/?linkid=865446) |
+| Iteration performance details | [GetIterationPerformance](https://go.microsoft.com/fwlink/?linkid=865446) | [DeleteIteration](https://go.microsoft.com/fwlink/?linkid=865446) |
+| List of iterations | [GetIterations](https://go.microsoft.com/fwlink/?linkid=865446) | [DeleteIteration](https://go.microsoft.com/fwlink/?linkid=865446) |
+| Projects and project details | [GetProject](https://go.microsoft.com/fwlink/?linkid=865446) and [GetProjects](https://go.microsoft.com/fwlink/?linkid=865446) | [DeleteProject](https://go.microsoft.com/fwlink/?linkid=865446) |
+| Image tags | [GetTag](https://go.microsoft.com/fwlink/?linkid=865446) and [GetTags](https://go.microsoft.com/fwlink/?linkid=865446) | [DeleteTag](https://go.microsoft.com/fwlink/?linkid=865446) |
+| Images | [GetTaggedImages](https://go.microsoft.com/fwlink/?linkid=865446) (provides uri for image download) and [GetUntaggedImages](https://go.microsoft.com/fwlink/?linkid=865446) (provides uri for image download) | [DeleteImages](https://go.microsoft.com/fwlink/?linkid=865446) |
+| Exported iterations | [GetExports](https://go.microsoft.com/fwlink/?linkid=865446) | Deleted upon account deletion |
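+
+As an illustration only (not an official sample), equivalent operations are also exposed by the Python training client. The endpoint, key, and project ID below are placeholders.
+
+```python
+# Illustrative sketch: view and delete data with the Custom Vision training SDK.
+from azure.cognitiveservices.vision.customvision.training import CustomVisionTrainingClient
+from msrest.authentication import ApiKeyCredentials
+
+credentials = ApiKeyCredentials(in_headers={"Training-key": "<training key>"})
+trainer = CustomVisionTrainingClient("<endpoint>", credentials)
+
+# View: list projects and their IDs.
+for project in trainer.get_projects():
+    print(project.id, project.name)
+
+# Delete: permanently remove a project you no longer need.
+# trainer.delete_project("<project id>")
+```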
ai-services Export Model Python https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/custom-vision-service/export-model-python.md
+
+ Title: "Tutorial: Run TensorFlow model in Python - Custom Vision Service"
+
+description: Run a TensorFlow model in Python. This article only applies to models exported from image classification projects in the Custom Vision service.
+++++++ Last updated : 07/05/2022+
+ms.devlang: python
+++
+# Tutorial: Run a TensorFlow model in Python
+
+After you've [exported your TensorFlow model](./export-your-model.md) from the Custom Vision Service, this quickstart will show you how to use this model locally to classify images.
+
+> [!NOTE]
+> This tutorial applies only to models exported from "General (compact)" image classification projects. If you exported other models, please visit our [sample code repository](https://github.com/Azure-Samples/customvision-export-samples).
+
+## Prerequisites
+
+To use this tutorial, first do the following:
+
+- Install either Python 2.7+ or Python 3.6+.
+- Install pip.
+
+Next, you'll need to install the following packages:
+
+```bash
+pip install tensorflow
+pip install pillow
+pip install numpy
+pip install opencv-python
+```
+
+## Load your model and tags
+
+The downloaded _.zip_ file contains a _model.pb_ and a _labels.txt_ file. These files represent the trained model and the classification labels. The first step is to load the model into your project. Add the following code to a new Python script.
+
+```Python
+import tensorflow as tf
+import os
+
+graph_def = tf.compat.v1.GraphDef()
+labels = []
+
+# These are set to the default names from exported models, update as needed.
+filename = "model.pb"
+labels_filename = "labels.txt"
+
+# Import the TF graph
+with tf.io.gfile.GFile(filename, 'rb') as f:
+ graph_def.ParseFromString(f.read())
+ tf.import_graph_def(graph_def, name='')
+
+# Create a list of labels.
+with open(labels_filename, 'rt') as lf:
+ for l in lf:
+ labels.append(l.strip())
+```
+
+## Prepare an image for prediction
+
+There are a few steps you need to take to prepare the image for prediction. These steps mimic the image manipulation performed during training.
+
+### Open the file and create an image in the BGR color space
+
+```Python
+from PIL import Image
+import numpy as np
+import cv2
+
+# Load from a file
+imageFile = "<path to your image file>"
+image = Image.open(imageFile)
+
+# Update orientation based on EXIF tags, if the file has orientation info.
+image = update_orientation(image)
+
+# Convert to OpenCV format
+image = convert_to_opencv(image)
+```
+
+### Handle images with a dimension >1600
+
+```Python
+# If the image has either w or h greater than 1600 we resize it down respecting
+# aspect ratio such that the largest dimension is 1600
+image = resize_down_to_1600_max_dim(image)
+```
+
+### Crop the largest center square
+
+```Python
+# We next get the largest center square
+h, w = image.shape[:2]
+min_dim = min(w,h)
+max_square_image = crop_center(image, min_dim, min_dim)
+```
+
+### Resize down to 256x256
+
+```Python
+# Resize that square down to 256x256
+augmented_image = resize_to_256_square(max_square_image)
+```
+
+### Crop the center for the specific input size for the model
+
+```Python
+# Get the input size of the model
+with tf.compat.v1.Session() as sess:
+ input_tensor_shape = sess.graph.get_tensor_by_name('Placeholder:0').shape.as_list()
+network_input_size = input_tensor_shape[1]
+
+# Crop the center for the specified network_input_Size
+augmented_image = crop_center(augmented_image, network_input_size, network_input_size)
+
+```
+
+### Add helper functions
+
+The steps above use the following helper functions:
+
+```Python
+def convert_to_opencv(image):
+ # RGB -> BGR conversion is performed as well.
+ image = image.convert('RGB')
+ r,g,b = np.array(image).T
+ opencv_image = np.array([b,g,r]).transpose()
+ return opencv_image
+
+def crop_center(img,cropx,cropy):
+ h, w = img.shape[:2]
+ startx = w//2-(cropx//2)
+ starty = h//2-(cropy//2)
+ return img[starty:starty+cropy, startx:startx+cropx]
+
+def resize_down_to_1600_max_dim(image):
+ h, w = image.shape[:2]
+ if (h < 1600 and w < 1600):
+ return image
+
+ new_size = (1600 * w // h, 1600) if (h > w) else (1600, 1600 * h // w)
+ return cv2.resize(image, new_size, interpolation = cv2.INTER_LINEAR)
+
+def resize_to_256_square(image):
+ h, w = image.shape[:2]
+ return cv2.resize(image, (256, 256), interpolation = cv2.INTER_LINEAR)
+
+def update_orientation(image):
+ exif_orientation_tag = 0x0112
+ if hasattr(image, '_getexif'):
+ exif = image._getexif()
+ if (exif != None and exif_orientation_tag in exif):
+ orientation = exif.get(exif_orientation_tag, 1)
+ # orientation is 1 based, shift to zero based and flip/transpose based on 0-based values
+ orientation -= 1
+ if orientation >= 4:
+ image = image.transpose(Image.TRANSPOSE)
+ if orientation == 2 or orientation == 3 or orientation == 6 or orientation == 7:
+ image = image.transpose(Image.FLIP_TOP_BOTTOM)
+ if orientation == 1 or orientation == 2 or orientation == 5 or orientation == 6:
+ image = image.transpose(Image.FLIP_LEFT_RIGHT)
+ return image
+```
+
+## Classify an image
+
+Once the image is prepared as a tensor, we can send it through the model for a prediction.
+
+```Python
+
+# These names are part of the model and cannot be changed.
+output_layer = 'loss:0'
+input_node = 'Placeholder:0'
+
+with tf.compat.v1.Session() as sess:
+ try:
+ prob_tensor = sess.graph.get_tensor_by_name(output_layer)
+ predictions = sess.run(prob_tensor, {input_node: [augmented_image] })
+ except KeyError:
+ print ("Couldn't find classification output layer: " + output_layer + ".")
+ print ("Verify this a model exported from an Object Detection project.")
+ exit(-1)
+```
+
+## Display the results
+
+The results of running the image tensor through the model will then need to be mapped back to the labels.
+
+```Python
+ # Print the highest probability label
+ highest_probability_index = np.argmax(predictions)
+ print('Classified as: ' + labels[highest_probability_index])
+ print()
+
+ # Or you can print out all of the results mapping labels to probabilities.
+ label_index = 0
+    for p in predictions[0]:
+        truncated_probability = np.float64(np.round(p, 8))
+        print(labels[label_index], truncated_probability)
+ label_index += 1
+```
+
+## Next steps
+
+Next, learn how to wrap your model into a mobile application:
+* [Use your exported Tensorflow model in an Android application](https://github.com/Azure-Samples/cognitive-services-android-customvision-sample)
+* [Use your exported CoreML model in an Swift iOS application](https://go.microsoft.com/fwlink/?linkid=857726)
+* [Use your exported CoreML model in an iOS application with Xamarin](https://github.com/xamarin/ios-samples/tree/master/ios11/CoreMLAzureModel)
ai-services Export Programmatically https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/custom-vision-service/export-programmatically.md
+
+ Title: "Export a model programmatically"
+
+description: Use the Custom Vision client library to export a trained model.
+++++++ Last updated : 06/28/2021+
+ms.devlang: python
++
+# Export a model programmatically
+
+All of the export options available on the [Custom Vision website](https://www.customvision.ai/) are also available programmatically through the client libraries. You may want to do this so you can fully automate the process of retraining and updating the model iteration you use on a local device.
+
+This guide shows you how to export your model to an ONNX file with the Python SDK.
+
+## Create a training client
+
+You need to have a [CustomVisionTrainingClient](/python/api/azure-cognitiveservices-vision-customvision/azure.cognitiveservices.vision.customvision.training.customvisiontrainingclient) object to export a model iteration. Create variables for your Custom Vision training resource's Azure endpoint and key, and use them to create the client object.
+
+```python
+ENDPOINT = "PASTE_YOUR_CUSTOM_VISION_TRAINING_ENDPOINT_HERE"
+training_key = "PASTE_YOUR_CUSTOM_VISION_TRAINING_SUBSCRIPTION_KEY_HERE"
+
+credentials = ApiKeyCredentials(in_headers={"Training-key": training_key})
+trainer = CustomVisionTrainingClient(ENDPOINT, credentials)
+```
+
+> [!IMPORTANT]
+> Remember to remove the keys from your code when you're done, and never post them publicly. For production, consider using a secure way of storing and accessing your credentials. For more information, see the Azure AI services [security](../security-features.md) article.
+
+## Call the export method
+
+Call the **export_iteration** method.
+* Provide the project ID and iteration ID of the model you want to export.
+* The *platform* parameter specifies the platform to export to: allowed values are `CoreML`, `TensorFlow`, `DockerFile`, `ONNX`, `VAIDK`, and `OpenVino`.
+* The *flavor* parameter specifies the format of the exported model: allowed values are `Linux`, `Windows`, `ONNX10`, `ONNX12`, `ARM`, `TensorFlowNormal`, and `TensorFlowLite`.
+* The *raw* parameter gives you the option to retrieve the raw JSON response along with the object model response.
+
+```python
+project_id = "PASTE_YOUR_PROJECT_ID"
+iteration_id = "PASTE_YOUR_ITERATION_ID"
+platform = "ONNX"
+flavor = "ONNX10"
+export = trainer.export_iteration(project_id, iteration_id, platform, flavor, raw=False)
+```
+
+For more information, see the **[export_iteration](/python/api/azure-cognitiveservices-vision-customvision/azure.cognitiveservices.vision.customvision.training.operations.customvisiontrainingclientoperationsmixin#export-iteration-project-id--iteration-id--platform--flavor-none--custom-headers-none--raw-false-operation-config-)** method.
+
+> [!IMPORTANT]
+> If you've already exported a particular iteration, you cannot call the **export_iteration** method again. Instead, skip ahead to the **get_exports** method call to get a link to your existing exported model.
+
+## Download the exported model
+
+Next, you'll call the **get_exports** method to check the status of the export operation. The operation runs asynchronously, so you should poll this method until the operation completes. When it completes, you can retrieve the URI where you can download the model iteration to your device.
+
+```python
+while (export.status == "Exporting"):
+ print ("Waiting 10 seconds...")
+ time.sleep(10)
+ exports = trainer.get_exports(project_id, iteration_id)
+ # Locate the export for this iteration and check its status
+ for e in exports:
+ if e.platform == export.platform and e.flavor == export.flavor:
+ export = e
+ break
+ print("Export status is: ", export.status)
+```
+
+For more information, see the **[get_exports](/python/api/azure-cognitiveservices-vision-customvision/azure.cognitiveservices.vision.customvision.training.operations.customvisiontrainingclientoperationsmixin#get-exports-project-id--iteration-id--custom-headers-none--raw-false-operation-config-)** method.
+
+Then, you can programmatically download the exported model to a location on your device.
+
+```python
+if export.status == "Done":
+ # Success, now we can download it
+ export_file = requests.get(export.download_uri)
+ with open("export.zip", "wb") as file:
+ file.write(export_file.content)
+```
+
+## Next steps
+
+Integrate your exported model into an application by exploring one of the following articles or samples:
+
+* [Use your Tensorflow model with Python](export-model-python.md)
+* [Use your ONNX model with Windows Machine Learning](custom-vision-onnx-windows-ml.md)
+* See the sample for [CoreML model in an iOS application](https://go.microsoft.com/fwlink/?linkid=857726) for real-time image classification with Swift.
+* See the sample for [Tensorflow model in an Android application](https://github.com/Azure-Samples/cognitive-services-android-customvision-sample) for real-time image classification on Android.
+* See the sample for [CoreML model with Xamarin](https://github.com/xamarin/ios-samples/tree/master/ios11/CoreMLAzureModel) for real-time image classification in a Xamarin iOS app.
ai-services Export Your Model https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/custom-vision-service/export-your-model.md
+
+ Title: Export your model to mobile - Custom Vision Service
+
+description: This article shows you how to export your model for use in mobile applications, or to run it locally for real-time classification.
+++++++ Last updated : 07/05/2022+++
+# Export your model for use with mobile devices
+
+Custom Vision Service lets you export your classifiers to be run offline. You can embed your exported classifier into an application and run it locally on a device for real-time classification.
+
+## Export options
+
+Custom Vision Service supports the following exports:
+
+* __TensorFlow__ for __Android__.
+* **TensorFlow.js** for JavaScript frameworks like React, Angular, and Vue. This will run on both **Android** and **iOS** devices.
+* __CoreML__ for __iOS11__.
+* __ONNX__ for __Windows ML__, **Android**, and **iOS**.
+* __[Vision AI Developer Kit](https://azure.github.io/Vision-AI-DevKit-Pages/)__.
+* A __Docker container__ for Windows, Linux, or ARM architecture. The container includes a TensorFlow model and service code to use the Custom Vision API.
+
+> [!IMPORTANT]
+> Custom Vision Service only exports __compact__ domains. The models generated by compact domains are optimized for the constraints of real-time classification on mobile devices. Classifiers built with a compact domain may be slightly less accurate than those built with a standard domain using the same amount of training data.
+>
+> For information on improving your classifiers, see the [Improving your classifier](getting-started-improving-your-classifier.md) document.
+
+## Convert to a compact domain
+
+> [!NOTE]
+> The steps in this section only apply if you have an existing model that is not set to compact domain.
+
+To convert the domain of an existing model, take the following steps:
+
+1. On the [Custom vision website](https://customvision.ai), select the __Home__ icon to view a list of your projects.
+
+ ![Image of the home icon and projects list](./media/export-your-model/projects-list.png)
+
+1. Select a project, and then select the __Gear__ icon in the upper right of the page.
+
+ ![Image of the gear icon](./media/export-your-model/gear-icon.png)
+
+1. In the __Domains__ section, select one of the __compact__ domains. Select __Save Changes__ to save the changes.
+
+ > [!NOTE]
+ > For Vision AI Dev Kit, the project must be created with the __General (Compact)__ domain, and you must specify the **Vision AI Dev Kit** option under the **Export Capabilities** section.
+
+ ![Image of domains selection](./media/export-your-model/domains.png)
+
+1. From the top of the page, select __Train__ to retrain using the new domain.
+
+## Export your model
+
+To export the model after retraining, use the following steps:
+
+1. Go to the **Performance** tab and select __Export__.
+
+ ![Image of the export icon](./media/export-your-model/export.png)
+
+ > [!TIP]
+ > If the __Export__ entry is not available, then the selected iteration does not use a compact domain. Use the __Iterations__ section of this page to select an iteration that uses a compact domain, and then select __Export__.
+
+1. Select your desired export format, and then select __Export__ to download the model.
+
+## Next steps
+
+Integrate your exported model into an application by exploring one of the following articles or samples:
+
+* [Use your TensorFlow model with Python](export-model-python.md)
+* [Use your ONNX model with Windows Machine Learning](custom-vision-onnx-windows-ml.md)
+* See the sample for [CoreML model in an iOS application](https://go.microsoft.com/fwlink/?linkid=857726) for real-time image classification with Swift.
+* See the sample for [TensorFlow model in an Android application](https://github.com/Azure-Samples/cognitive-services-android-customvision-sample) for real-time image classification on Android.
+* See the sample for [CoreML model with Xamarin](https://github.com/xamarin/ios-samples/tree/master/ios11/CoreMLAzureModel) for real-time image classification in a Xamarin iOS app.
ai-services Get Started Build Detector https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/custom-vision-service/get-started-build-detector.md
+
+ Title: "Quickstart: Build an object detector with the Custom Vision website"
+
+description: In this quickstart, you'll learn how to use the Custom Vision website to create, train, and test an object detector model.
++++++ Last updated : 12/27/2022++
+keywords: image recognition, image recognition app, custom vision
++
+# Quickstart: Build an object detector with the Custom Vision website
+
+In this quickstart, you'll learn how to use the Custom Vision website to create an object detector model. Once you build a model, you can test it with new images and integrate it into your own image recognition app.
+
+If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/cognitive-services/) before you begin.
+
+## Prerequisites
+
+- A set of images with which to train your detector model. You can use the set of [sample images](https://github.com/Azure-Samples/cognitive-services-python-sdk-samples/tree/master/samples/vision/images) on GitHub. Or, you can choose your own images using the tips below.
+- A [supported web browser](overview.md#supported-browsers-for-custom-vision-web-portal)
+
+## Create Custom Vision resources
++
+## Create a new project
+
+In your web browser, navigate to the [Custom Vision web page](https://customvision.ai) and select __Sign in__. Sign in with the same account you used to sign into the Azure portal.
+
+![Image of the sign-in page](./media/browser-home.png)
++
+1. To create your first project, select **New Project**. The **Create new project** dialog box will appear.
+
+ ![The new project dialog box has fields for name, description, and domains.](./media/get-started-build-detector/new-project.png)
+
+1. Enter a name and a description for the project. Then select your Custom Vision Training Resource. If your signed-in account is associated with an Azure account, the Resource dropdown will display all of your compatible Azure resources.
+
+ > [!NOTE]
+ > If no resource is available, please confirm that you have logged into [customvision.ai](https://customvision.ai) with the same account as you used to log into the [Azure portal](https://portal.azure.com/). Also, please confirm you have selected the same "Directory" in the Custom Vision website as the directory in the Azure portal where your Custom Vision resources are located. In both sites, you may select your directory from the drop down account menu at the top right corner of the screen.
+
+1. Select __Object Detection__ under __Project Types__.
+
+1. Next, select one of the available domains. Each domain optimizes the detector for specific types of images, as described in the following table. You can change the domain later if you want to.
+
+ |Domain|Purpose|
+ |||
+ |__General__| Optimized for a broad range of object detection tasks. If none of the other domains are appropriate, or if you're unsure about which domain to choose, select the __General__ domain. |
+ |__Logo__|Optimized for finding brand logos in images.|
+ |__Products on shelves__|Optimized for detecting and classifying products on shelves.|
+ |__Compact domains__| Optimized for the constraints of real-time object detection on mobile devices. The models generated by compact domains can be exported to run locally.|
+
+1. Finally, select __Create project__.
+
+## Choose training images
++
+## Upload and tag images
+
+In this section, you'll upload and manually tag images to help train the detector.
+
+1. To add images, select __Add images__ and then select __Browse local files__. Select __Open__ to upload the images.
+
+ ![The add images control is shown in the upper left, and as a button at bottom center.](./media/get-started-build-detector/add-images.png)
+
+1. You'll see your uploaded images in the **Untagged** section of the UI. The next step is to manually tag the objects that you want the detector to learn to recognize. Select the first image to open the tagging dialog window.
+
+ ![Images uploaded, in Untagged section](./media/get-started-build-detector/images-untagged.png)
+
+1. Select and drag a rectangle around the object in your image. Then, enter a new tag name with the **+** button, or select an existing tag from the drop-down list. It's important to tag every instance of the object(s) you want to detect, because the detector uses the untagged background area as a negative example in training. When you're done tagging, select the arrow on the right to save your tags and move on to the next image.
+
+ ![Tagging an object with a rectangular selection](./media/get-started-build-detector/image-tagging.png)
+
+To upload another set of images, return to the top of this section and repeat the steps.
+
+## Train the detector
+
+To train the detector model, select the **Train** button. The detector uses all of the current images and their tags to create a model that identifies each tagged object. This process can take several minutes.
+
+![The train button in the top right of the web page's header toolbar](./media/getting-started-build-a-classifier/train01.png)
+
+The training process should only take a few minutes. During this time, information about the training process is displayed in the **Performance** tab.
+
+![The browser window with a training dialog in the main section](./media/get-started-build-detector/training.png)
+
+## Evaluate the detector
+
+After training has completed, the model's performance is calculated and displayed. The Custom Vision service uses the images that you submitted for training to calculate precision, recall, and mean average precision. Precision and recall are two different measurements of the effectiveness of a detector:
+
+- **Precision** indicates the fraction of identified classifications that were correct. For example, if the model identified 100 images as dogs, and 99 of them were actually of dogs, then the precision would be 99%.
+- **Recall** indicates the fraction of actual classifications that were correctly identified. For example, if there were actually 100 images of apples, and the model identified 80 as apples, the recall would be 80%.
+- **Mean average precision** is the average value of the average precision (AP). AP is the area under the precision/recall curve (precision plotted against recall for each prediction made).
+
+![The training results show the overall precision and recall, and mean average precision.](./media/get-started-build-detector/trained-performance.png)
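+
+To make the mean average precision definition concrete, the following is a small, self-contained Python sketch. It's an approximation of the definition above, not the exact algorithm the Custom Vision service uses, and the example numbers are made up.
+
+```python
+# Illustrative only: average precision (area under the precision/recall curve) for one tag.
+import numpy as np
+
+def average_precision(scores, is_correct, total_ground_truth):
+    # Sort predictions by confidence, highest first.
+    order = np.argsort(scores)[::-1]
+    correct = np.asarray(is_correct, dtype=float)[order]
+
+    tp = np.cumsum(correct)          # cumulative true positives
+    fp = np.cumsum(1.0 - correct)    # cumulative false positives
+    precision = tp / (tp + fp)
+    recall = tp / total_ground_truth
+
+    # Trapezoidal area under the precision/recall curve.
+    area = 0.0
+    for i in range(1, len(recall)):
+        area += (recall[i] - recall[i - 1]) * (precision[i] + precision[i - 1]) / 2.0
+    return area
+
+# Example: five scored predictions for a tag, four objects in the ground truth.
+print(average_precision([0.95, 0.90, 0.80, 0.60, 0.40], [True, True, False, True, False], 4))
+```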
+
+### Probability threshold
++
+### Overlap threshold
+
+The **Overlap Threshold** slider deals with how correct an object prediction must be to be considered "correct" in training. It sets the minimum allowed overlap between the predicted object's bounding box and the actual user-entered bounding box. If the bounding boxes don't overlap to this degree, the prediction won't be considered correct.
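+
+The overlap measure described here is commonly computed as intersection over union (IoU). The following sketch is an illustration of that calculation, not the service's internal code; boxes are given as (left, top, width, height) in the same units.
+
+```python
+# Illustrative only: intersection over union (IoU) between two bounding boxes,
+# each given as (left, top, width, height).
+def intersection_over_union(box_a, box_b):
+    ax, ay, aw, ah = box_a
+    bx, by, bw, bh = box_b
+
+    # Intersection rectangle.
+    inter_w = max(0.0, min(ax + aw, bx + bw) - max(ax, bx))
+    inter_h = max(0.0, min(ay + ah, by + bh) - max(ay, by))
+    inter = inter_w * inter_h
+
+    union = aw * ah + bw * bh - inter
+    return inter / union if union > 0 else 0.0
+
+# Example: a prediction that mostly covers the labeled box.
+print(intersection_over_union((0.10, 0.10, 0.40, 0.40), (0.12, 0.12, 0.40, 0.40)))
+```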
+
+## Manage training iterations
+
+Each time you train your detector, you create a new _iteration_ with its own updated performance metrics. You can view all of your iterations in the left pane of the **Performance** tab. In the left pane you'll also find the **Delete** button, which you can use to delete an iteration if it's obsolete. When you delete an iteration, you delete any images that are uniquely associated with it.
+
+See [Use your model with the prediction API](./use-prediction-api.md) to learn how to access your trained models programmatically.
+
+## Next steps
+
+In this quickstart, you learned how to create and train an object detector model using the Custom Vision website. Next, get more information on the iterative process of improving your model.
+
+> [!div class="nextstepaction"]
+> [Test and retrain a model](test-your-model.md)
+
+* [What is Custom Vision?](./overview.md)
ai-services Getting Started Build A Classifier https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/custom-vision-service/getting-started-build-a-classifier.md
+
+ Title: "Quickstart: Build an image classification model with the Custom Vision portal"
+
+description: In this quickstart, you'll learn how to use the Custom Vision web portal to create, train, and test an image classification model.
++++++ Last updated : 11/03/2022++
+keywords: image recognition, image recognition app, custom vision
++
+# Quickstart: Build an image classification model with the Custom Vision portal
+
+In this quickstart, you'll learn how to use the Custom Vision web portal to create an image classification model. Once you build a model, you can test it with new images and eventually integrate it into your own image recognition app.
+
+If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/cognitive-services/) before you begin.
+
+## Prerequisites
+
+- A set of images with which to train your classification model. You can use the set of [sample images](https://github.com/Azure-Samples/cognitive-services-sample-data-files/tree/master/CustomVision/ImageClassification/Images) on GitHub. Or, you can choose your own images using the tips below.
+- A [supported web browser](overview.md#supported-browsers-for-custom-vision-web-portal)
++
+## Create Custom Vision resources
++
+## Create a new project
+
+In your web browser, navigate to the [Custom Vision web page](https://customvision.ai) and select __Sign in__. Sign in with the same account you used to sign into the Azure portal.
+
+![Image of the sign-in page](./media/browser-home.png)
++
+1. To create your first project, select **New Project**. The **Create new project** dialog box will appear.
+
+ ![The new project dialog box has fields for name, description, and domains.](./media/getting-started-build-a-classifier/new-project.png)
+
+1. Enter a name and a description for the project. Then select your Custom Vision Training Resource. If your signed-in account is associated with an Azure account, the Resource dropdown will display all of your compatible Azure resources.
+
+ > [!NOTE]
+ > If no resource is available, please confirm that you have logged into [customvision.ai](https://customvision.ai) with the same account as you used to log into the [Azure portal](https://portal.azure.com/). Also, please confirm you have selected the same "Directory" in the Custom Vision website as the directory in the Azure portal where your Custom Vision resources are located. In both sites, you may select your directory from the drop down account menu at the top right corner of the screen.
+
+1. Select __Classification__ under __Project Types__. Then, under __Classification Types__, choose either **Multilabel** or **Multiclass**, depending on your use case. Multilabel classification applies any number of your tags to an image (zero or more), while multiclass classification sorts images into single categories (every image you submit will be sorted into the most likely tag). You'll be able to change the classification type later if you want to.
+
+1. Next, select one of the available domains. Each domain optimizes the model for specific types of images, as described in the following table. You can change the domain later if you wish.
+
+ |Domain|Purpose|
+ |||
+ |__Generic__| Optimized for a broad range of image classification tasks. If none of the other domains are appropriate, or you're unsure of which domain to choose, select the Generic domain. |
+ |__Food__|Optimized for photographs of dishes as you would see them on a restaurant menu. If you want to classify photographs of individual fruits or vegetables, use the Food domain.|
+ |__Landmarks__|Optimized for recognizable landmarks, both natural and artificial. This domain works best when the landmark is clearly visible in the photograph. This domain works even if the landmark is slightly obstructed by people in front of it.|
+ |__Retail__|Optimized for images that are found in a shopping catalog or shopping website. If you want high precision classifying between dresses, pants, and shirts, use this domain.|
+ |__Compact domains__| Optimized for the constraints of real-time classification on mobile devices. The models generated by compact domains can be exported to run locally.|
+
+1. Finally, select __Create project__.
+
+## Choose training images
++
+## Upload and tag images
+
+In this section, you'll upload and manually tag images to help train the classifier.
+
+1. To add images, select __Add images__ and then select __Browse local files__. Select __Open__ to move to tagging. Your tag selection will be applied to the entire group of images you've selected to upload, so it's easier to upload images in separate groups according to their applied tags. You can also change the tags for individual images after they've been uploaded.
+
+ ![The add images control is shown in the upper left, and as a button at bottom center.](./media/getting-started-build-a-classifier/add-images01.png)
++
+1. To create a tag, enter text in the __My Tags__ field and press Enter. If the tag already exists, it will appear in a dropdown menu. In a multilabel project, you can add more than one tag to your images, but in a multiclass project you can add only one. To finish uploading the images, use the __Upload [number] files__ button.
+
+ ![Image of the tag and upload page](./media/getting-started-build-a-classifier/add-images03.png)
+
+1. Select __Done__ once the images have been uploaded.
+
+ ![The progress bar shows all tasks completed.](./media/getting-started-build-a-classifier/add-images04.png)
+
+To upload another set of images, return to the top of this section and repeat the steps.
+
+## Train the classifier
+
+To train the classifier, select the **Train** button. The classifier uses all of the current images to create a model that identifies the visual qualities of each tag. This process can take several minutes.
+
+![The train button in the top right of the web page's header toolbar](./media/getting-started-build-a-classifier/train01.png)
+
+The training process should only take a few minutes. During this time, information about the training process is displayed in the **Performance** tab.
+
+![The browser window with a training dialog in the main section](./media/getting-started-build-a-classifier/train02.png)
+
+## Evaluate the classifier
+
+After training has completed, the model's performance is estimated and displayed. The Custom Vision Service uses the images that you submitted for training to calculate precision and recall. Precision and recall are two different measurements of the effectiveness of a classifier:
+
+- **Precision** indicates the fraction of identified classifications that were correct. For example, if the model identified 100 images as dogs, and 99 of them were actually of dogs, then the precision would be 99%.
+- **Recall** indicates the fraction of actual classifications that were correctly identified. For example, if there were actually 100 images of apples, and the model identified 80 as apples, the recall would be 80%.
+
+![The training results show the overall precision and recall, and the precision and recall for each tag in the classifier.](./media/getting-started-build-a-classifier/train03.png)
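+
+To make these two measures concrete, here's a tiny illustrative Python sketch (not part of the product) that computes them from counts of true positives, false positives, and false negatives.
+
+```python
+# Illustrative only: precision and recall from raw counts.
+def precision_recall(true_positives, false_positives, false_negatives):
+    precision = true_positives / (true_positives + false_positives)
+    recall = true_positives / (true_positives + false_negatives)
+    return precision, recall
+
+# The examples above: 99 of 100 predicted dogs were dogs (precision),
+# and 80 of 100 actual apples were found (recall).
+print(precision_recall(99, 1, 0)[0])   # 0.99
+print(precision_recall(80, 0, 20)[1])  # 0.8
+```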
+
+### Probability threshold
++
+## Manage training iterations
+
+Each time you train your classifier, you create a new _iteration_ with updated performance metrics. You can view all of your iterations in the left pane of the **Performance** tab. You'll also find the **Delete** button, which you can use to delete an iteration if it's obsolete. When you delete an iteration, you delete any images that are uniquely associated with it.
+
+See [Use your model with the prediction API](./use-prediction-api.md) to learn how to access your trained models programmatically.
+
+## Next steps
+
+In this quickstart, you learned how to create and train an image classification model using the Custom Vision web portal. Next, get more information on the iterative process of improving your model.
+
+> [!div class="nextstepaction"]
+> [Test and retrain a model](test-your-model.md)
+
+* [What is Custom Vision?](./overview.md)
ai-services Getting Started Improving Your Classifier https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/custom-vision-service/getting-started-improving-your-classifier.md
+
+ Title: Improving your model - Custom Vision Service
+
+description: In this article you'll learn how the amount, quality and variety of data can improve the quality of your model in the Custom Vision service.
+++++++ Last updated : 07/05/2022++++
+# How to improve your Custom Vision model
+
+In this guide, you'll learn how to improve the quality of your Custom Vision Service model. The quality of your [classifier](./getting-started-build-a-classifier.md) or [object detector](./get-started-build-detector.md) depends on the amount, quality, and variety of the labeled data you provide it and how balanced the overall dataset is. A good model has a balanced training dataset that is representative of what will be submitted to it. The process of building such a model is iterative; it's common to take a few rounds of training to reach expected results.
+
+The following is a general pattern to help you train a more accurate model:
+
+1. First-round training
+1. Add more images and balance data; retrain
+1. Add images with varying background, lighting, object size, camera angle, and style; retrain
+1. Use new image(s) to test prediction
+1. Modify existing training data according to prediction results
+
+## Prevent overfitting
+
+Sometimes a model will learn to make predictions based on arbitrary characteristics that your images have in common. For example, if you're creating a classifier for apples vs. citrus, and you've used images of apples in hands and of citrus on white plates, the classifier may give undue importance to hands vs. plates, rather than apples vs. citrus.
+
+To correct this problem, provide images with different angles, backgrounds, object size, groups, and other variations. The following sections expand upon these concepts.
+
+## Data quantity
+
+The number of training images is the most important factor for your dataset. We recommend using at least 50 images per label as a starting point. With fewer images, there's a higher risk of overfitting, and while your performance numbers may suggest good quality, your model may struggle with real-world data.
+
+## Data balance
+
+It's also important to consider the relative quantities of your training data. For instance, using 500 images for one label and 50 images for another label makes for an imbalanced training dataset. This will cause the model to be more accurate in predicting one label than another. You're likely to see better results if you maintain at least a 1:2 ratio between the label with the fewest images and the label with the most images. For example, if the label with the most images has 500 images, the label with the least images should have at least 250 images for training.
+
+## Data variety
+
+Be sure to use images that are representative of what will be submitted to the classifier during normal use. Otherwise, your model could learn to make predictions based on arbitrary characteristics that your images have in common. For example, if you're creating a classifier for apples vs. citrus, and you've used images of apples in hands and of citrus on white plates, the classifier may give undue importance to hands vs. plates, rather than apples vs. citrus.
+
+![Photo of fruits with unexpected matching.](./media/getting-started-improving-your-classifier/unexpected.png)
+
+To correct this problem, include a variety of images to ensure that your model can generalize well. Below are some ways you can make your training set more diverse:
+
+* __Background:__ Provide images of your object in front of different backgrounds. Photos in natural contexts are better than photos in front of neutral backgrounds as they provide more information for the classifier.
+
+ ![Photo of background samples.](./media/getting-started-improving-your-classifier/background.png)
+
+* __Lighting:__ Provide images with varied lighting (that is, taken with flash, high exposure, and so on), especially if the images used for prediction have different lighting. It's also helpful to use images with varying saturation, hue, and brightness.
+
+ ![Photo of lighting samples.](./media/getting-started-improving-your-classifier/lighting.png)
+
+* __Object Size:__ Provide images in which the objects vary in size and number (for example, a photo of bunches of bananas and a closeup of a single banana). Different sizing helps the classifier generalize better.
+
+ ![Photo of size samples.](./media/getting-started-improving-your-classifier/size.png)
+
+* __Camera Angle:__ Provide images taken with different camera angles. Alternatively, if all of your photos must be taken with fixed cameras (such as surveillance cameras), be sure to assign a different label to every regularly occurring object to avoid overfitting&mdash;interpreting unrelated objects (such as lampposts) as the key feature.
+
+ ![Photo of angle samples.](./media/getting-started-improving-your-classifier/angle.png)
+
+* __Style:__ Provide images of different styles of the same class (for example, different varieties of the same fruit). However, if you have objects of drastically different styles (such as Mickey Mouse vs. a real-life mouse), we recommend you label them as separate classes to better represent their distinct features.
+
+ ![Photo of style samples.](./media/getting-started-improving-your-classifier/style.png)
+
+## Negative images (classifiers only)
+
+If you're using an image classifier, you may need to add _negative samples_ to help make your classifier more accurate. Negative samples are images that don't match any of the other tags. When you upload these images, apply the special **Negative** label to them.
+
+Object detectors handle negative samples automatically, because any image areas outside of the drawn bounding boxes are considered negative.
+
+> [!NOTE]
+> The Custom Vision Service supports some automatic negative image handling. For example, if you are building a grape vs. banana classifier and submit an image of a shoe for prediction, the classifier should score that image as close to 0% for both grape and banana.
+>
+> On the other hand, in cases where the negative images are just a variation of the images used in training, it is likely that the model will classify the negative images as a labeled class due to the great similarities. For example, if you have an orange vs. grapefruit classifier, and you feed in an image of a clementine, it may score the clementine as an orange because many features of the clementine resemble those of oranges. If your negative images are of this nature, we recommend you create one or more additional tags (such as **Other**) and label the negative images with this tag during training to allow the model to better differentiate between these classes.
+
+## Occlusion and truncation (object detectors only)
+
+If you want your object detector to detect truncated objects (objects that are partially cut out of the image) or occluded objects (objects that are partially blocked by other objects in the image), you'll need to include training images that cover those cases.
+
+> [!NOTE]
+> The issue of objects being occluded by other objects is not to be confused with **Overlap Threshold**, a parameter for rating model performance. The **Overlap Threshold** slider on the [Custom Vision website](https://customvision.ai) deals with how much a predicted bounding box must overlap with the true bounding box to be considered correct.
+
+## Use prediction images for further training
+
+When you use or test the model by submitting images to the prediction endpoint, the Custom Vision service stores those images. You can then use them to improve the model.
+
+1. To view images submitted to the model, open the [Custom Vision web page](https://customvision.ai), go to your project, and select the __Predictions__ tab. The default view shows images from the current iteration. You can use the __Iteration__ drop-down menu to view images submitted during previous iterations.
+
+ ![screenshot of the predictions tab, with images in view](./media/getting-started-improving-your-classifier/predictions.png)
+
+2. Hover over an image to see the tags that were predicted by the model. Images are sorted so that the ones that can bring the most improvements to the model are listed at the top. To use a different sorting method, make a selection in the __Sort__ section.
+
+ To add an image to your existing training data, select the image, set the correct tag(s), and select __Save and close__. The image will be removed from __Predictions__ and added to the set of training images. You can view it by selecting the __Training Images__ tab.
+
+ ![Screenshot of the tagging page.](./media/getting-started-improving-your-classifier/tag.png)
+
+3. Then use the __Train__ button to retrain the model.
+
+## Visually inspect predictions
+
+To inspect image predictions, go to the __Training Images__ tab, select your previous training iteration in the **Iteration** drop-down menu, and check one or more tags under the **Tags** section. The view should now display a red box around each of the images for which the model failed to correctly predict the given tag.
+
+![Image of the iteration history](./media/getting-started-improving-your-classifier/iteration.png)
+
+Sometimes a visual inspection can identify patterns that you can then correct by adding more training data or modifying existing training data. For example, a classifier for apples vs. limes may incorrectly label all green apples as limes. You can then correct this problem by adding and providing training data that contains tagged images of green apples.
+
+## Next steps
+
+In this guide, you learned several techniques to make your custom image classification model or object detector model more accurate. Next, learn how to test images programmatically by submitting them to the Prediction API.
+
+> [!div class="nextstepaction"]
+> [Use the prediction API](use-prediction-api.md)
ai-services Iot Visual Alerts Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/custom-vision-service/iot-visual-alerts-tutorial.md
+
+ Title: "Tutorial: IoT Visual Alerts sample"
+
+description: In this tutorial, you use Custom Vision with an IoT device to recognize and report visual states from a camera's video feed.
+++++++ Last updated : 11/23/2020+
+ms.devlang: csharp
++
+# Tutorial: Use Custom Vision with an IoT device to report visual states
+
+This sample app illustrates how to use Custom Vision to train a device with a camera to detect visual states. You can run this detection scenario on an IoT device by using an exported ONNX model.
+
+A visual state describes the content of an image: an empty room or a room with people, an empty driveway or a driveway with a truck, and so on. In the image below, you can see the app detect when a banana or an apple is placed in front of the camera.
+
+![Animation of a UI labeling fruit in front of the camera](./media/iot-visual-alerts-tutorial/scoring.gif)
+
+This tutorial will show you how to:
+> [!div class="checklist"]
+> * Configure the sample app to use your own Custom Vision and IoT Hub resources.
+> * Use the app to train your Custom Vision project.
+> * Use the app to score new images in real time and send the results to Azure.
+
+If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/cognitive-services) before you begin.
+
+## Prerequisites
+
+* [!INCLUDE [create-resources](includes/create-resources.md)]
+ > [!IMPORTANT]
+ > This project needs to be a **Compact** image classification project, because we will be exporting the model to ONNX later.
+* You'll also need to [create an IoT Hub resource](https://portal.azure.com/#create/Microsoft.IotHub) on Azure.
+* [Visual Studio 2015 or later](https://www.visualstudio.com/downloads/)
+* Optionally, an IoT device running Windows 10 IoT Core version 17763 or higher. You can also run the app directly from your PC.
+ * For Raspberry Pi 2 and 3, you can set up Windows 10 directly from the IoT Dashboard app. For other devices such as DragonBoard, you'll need to flash it using the [eMMC method](/windows/iot-core/tutorials/quickstarter/devicesetup#flashing-with-emmc-for-dragonboard-410c-other-qualcomm-devices). If you need help with setting up a new device, see [Setting up your device](/windows/iot-core/tutorials/quickstarter/devicesetup) in the Windows IoT documentation.
+
+## About the Visual Alerts app
+
+The IoT Visual Alerts app runs in a continuous loop, switching between four different states as appropriate:
+
+* **No Model**: A no-op state. The app will continually sleep for one second and check the camera.
+* **Capturing Training Images**: In this state, the app captures a picture and uploads it as a training image to the target Custom Vision project. The app then sleeps for 500 ms and repeats the operation until the target number of images has been captured. Then it triggers training of the Custom Vision model.
+* **Waiting For Trained Model**: In this state, the app calls the Custom Vision API every second to check whether the target project contains a trained iteration. When it finds one, it downloads the corresponding ONNX model to a local file and switches to the **Scoring** state.
+* **Scoring**: In this state, the app uses Windows ML to evaluate a single frame from the camera against the local ONNX model. The resulting image classification is displayed on the screen and sent as a message to the IoT Hub. The app then sleeps for one second before scoring a new image.
+
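+The loop above can be sketched roughly as follows. This is a simplified illustration, not the sample's actual code (the real logic lives in _MainPage.xaml.cs_), and the comments describe what the real app does at each step:
+
+```csharp
+// Simplified sketch of the IoT Visual Alerts state loop.
+using System.Threading.Tasks;
+
+enum AppState { NoModel, CapturingTrainingImages, WaitingForTrainedModel, Scoring }
+
+class StateLoopSketch
+{
+    AppState state = AppState.NoModel;
+
+    public async Task RunAsync()
+    {
+        while (true)
+        {
+            switch (state)
+            {
+                case AppState.NoModel:
+                    // Real app: nothing to do yet; sleep one second, then check the camera again.
+                    await Task.Delay(1000);
+                    break;
+                case AppState.CapturingTrainingImages:
+                    // Real app: capture a frame, upload it as a training image, sleep 500 ms, and
+                    // repeat until the target count is reached, then trigger training.
+                    await Task.Delay(500);
+                    state = AppState.WaitingForTrainedModel;   // sketch: advance immediately
+                    break;
+                case AppState.WaitingForTrainedModel:
+                    // Real app: poll the Custom Vision API once per second for a trained iteration,
+                    // then download the ONNX model and switch to Scoring.
+                    await Task.Delay(1000);
+                    state = AppState.Scoring;                  // sketch: advance immediately
+                    break;
+                case AppState.Scoring:
+                    // Real app: evaluate a frame with Windows ML, show the result, send it to IoT Hub,
+                    // and sleep one second before scoring again.
+                    await Task.Delay(1000);
+                    break;
+            }
+        }
+    }
+}
+```
+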
+## Examine the code structure
+
+The following files handle the main functionality of the app.
+
+| File | Description |
+|-|-|
+| [MainPage.xaml](https://github.com/Azure-Samples/Cognitive-Services-Vision-Solution-Templates/blob/master/IoTVisualAlerts/MainPage.xaml) | This file defines the XAML user interface. It hosts the web camera control and contains the labels used for status updates.|
+| [MainPage.xaml.cs](https://github.com/Azure-Samples/Cognitive-Services-Vision-Solution-Templates/blob/master/IoTVisualAlerts/MainPage.xaml.cs) | This code controls the behavior of the XAML UI. It contains the state machine processing code.|
+| [CustomVision\CustomVisionServiceWrapper.cs](https://github.com/Azure-Samples/Cognitive-Services-Vision-Solution-Templates/blob/master/IoTVisualAlerts/CustomVision/CustomVisionServiceWrapper.cs) | This class is a wrapper that handles integration with the Custom Vision Service.|
+| [CustomVision\CustomVisionONNXModel.cs](https://github.com/Azure-Samples/Cognitive-Services-Vision-Solution-Templates/blob/master/IoTVisualAlerts/CustomVision/CustomVisionONNXModel.cs) | This class is a wrapper that handles integration with Windows ML for loading the ONNX model and scoring images against it.|
+| [IoTHub\IotHubWrapper.cs](https://github.com/Azure-Samples/Cognitive-Services-Vision-Solution-Templates/blob/master/IoTVisualAlerts/IoTHub/IoTHubWrapper.cs) | This class is a wrapper that handles integration with IoT Hub for uploading scoring results to Azure.|
+
+## Set up the Visual Alerts app
+
+Follow these steps to get the IoT Visual Alerts app running on your PC or IoT device.
+
+1. Clone or download the [IoTVisualAlerts sample](https://github.com/Azure-Samples/Cognitive-Services-Vision-Solution-Templates/tree/master/IoTVisualAlerts) on GitHub.
+1. Open the solution _IoTVisualAlerts.sln_ in Visual Studio.
+1. Integrate your Custom Vision project:
+ 1. In the _CustomVision\CustomVisionServiceWrapper.cs_ script, update the `ApiKey` variable with your training key.
+ 1. Then update the `Endpoint` variable with the endpoint URL associated with your key.
+ 1. Update the `targetCVSProjectGuid` variable with the corresponding ID of the Custom Vision project that you want to use.
+1. Set up the IoT Hub resource:
+ 1. In the _IoTHub\IotHubWrapper.cs_ script, update the `s_connectionString` variable with the proper connection string for your device.
+ 1. On the Azure portal, load your IoT Hub instance, select **IoT devices** under **Explorers**, select your target device (or create one if needed), and find the connection string under **Primary Connection String**. The string will contain your IoT Hub name, device ID, and shared access key; it has the following format: `{your iot hub name}.azure-devices.net;DeviceId={your device id};SharedAccessKey={your access key}`.
+
+## Run the app
+
+If you're running the app on your PC, select **Local Machine** for the target device in Visual Studio, and select **x64** or **x86** for the target platform. Then press F5 to run the program. The app should start and display the live feed from the camera and a status message.
+
+If you're deploying to an IoT device with an ARM processor, you'll need to select **ARM** as the target platform and **Remote Machine** as the target device. Provide the IP address of your device when prompted (it must be on the same network as your PC). You can get the IP address from the Windows IoT default app once you boot the device and connect it to the network. Press F5 to run the program.
+
+When you run the app for the first time, it won't have any knowledge of visual states. It will display a status message that no model is available.
+
+## Capture training images
+
+To set up a model, you need to put the app in the **Capturing Training Images** state. Take one of the following steps:
+* If you're running the app on PC, use the button on the top-right corner of the UI.
+* If you're running the app on an IoT device, call the `EnterLearningMode` method on the device through the IoT Hub. You can call it through the device entry in the IoT Hub menu on the Azure portal, or with a tool such as [IoT Hub Device Explorer](https://github.com/Azure/azure-iot-sdk-csharp).
+
+When the app enters the **Capturing Training Images** state, it will capture about two images every second until it has reached the target number of images. By default, the target is 30 images, but you can set this parameter by passing the desired number as an argument to the `EnterLearningMode` IoT Hub method.
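+
+If you prefer code over the portal or a tool, a back-end app could invoke the `EnterLearningMode` direct method with the Azure IoT service SDK, roughly as sketched below. The connection string, device ID, and payload shape are placeholders and assumptions; they aren't required by the tutorial itself.
+
+```csharp
+// Sketch: invoke the EnterLearningMode direct method from a back-end app.
+using System;
+using Microsoft.Azure.Devices;                         // NuGet: Microsoft.Azure.Devices (service SDK)
+
+var serviceClient = ServiceClient.CreateFromConnectionString("<IoT Hub service connection string>");
+var method = new CloudToDeviceMethod("EnterLearningMode");
+method.SetPayloadJson("30");                           // assumed payload: the target number of training images
+CloudToDeviceMethodResult result =
+    await serviceClient.InvokeDeviceMethodAsync("<your device id>", method);
+Console.WriteLine($"Method returned status {result.Status}");
+```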
+
+While the app is capturing images, you must expose the camera to the types of visual states that you want to detect (for example, an empty room, a room with
+people, an empty desk, a desk with a toy truck, and so on).
+
+## Train the Custom Vision model
+
+Once the app has finished capturing images, it will upload them and then switch to the **Waiting For Trained Model** state. At this point, you need to go to the [Custom Vision website](https://www.customvision.ai/) and build a model based on the new training images. The following animation shows an example of this process.
+
+![Animation: tagging multiple images of bananas](./media/iot-visual-alerts-tutorial/labeling.gif)
+
+To repeat this process with your own scenario:
+
+1. Sign in to the [Custom Vision website](http://customvision.ai).
+1. Find your target project, which should now have all the training images that the app uploaded.
+1. For each visual state that you want to identify, select the appropriate images and manually apply the tag.
+ * For example, if your goal is to distinguish between an empty room and a room with people in it, we recommend tagging five or more images with people as a new class, **People**, and tagging five or more images without people as the **Negative** tag. This will help the model differentiate between the two states.
+ * As another example, if your goal is to approximate how full a shelf is, then you might use tags such as **EmptyShelf**, **PartiallyFullShelf**, and **FullShelf**.
+1. When you're finished, select the **Train** button.
+1. Once training is complete, the app will detect that a trained iteration is available. It will start the process of exporting the trained model to ONNX and downloading it to the device.
+
+## Use the trained model
+
+Once the app downloads the trained model, it will switch to the **Scoring** state and start scoring images from the camera in a continuous loop.
+
+For each captured image, the app will display the top tag on the screen. If it doesn't recognize the visual state, it will display **No Matches**. The app also sends these messages to the IoT Hub, and if there is a class being detected, the message will include the label, the confidence score, and a property called `detectedClassAlert`, which can be used by IoT Hub clients interested in doing fast message routing based on properties.
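+
+On the device side, attaching a routable property such as `detectedClassAlert` to a device-to-cloud message generally looks like the following sketch. The payload, tag name, and connection string are illustrative; this isn't the sample's exact _IotHubWrapper.cs_ code.
+
+```csharp
+// Sketch: send a scoring result to IoT Hub with a routable application property.
+using System.Text;
+using Microsoft.Azure.Devices.Client;                  // NuGet: Microsoft.Azure.Devices.Client
+
+DeviceClient deviceClient = DeviceClient.CreateFromConnectionString("<device connection string>");
+
+string body = "{\"label\":\"banana\",\"confidence\":0.97}";   // illustrative message body
+var message = new Message(Encoding.UTF8.GetBytes(body));
+message.Properties.Add("detectedClassAlert", "banana");       // IoT Hub message routing can filter on this property
+await deviceClient.SendEventAsync(message);
+```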
+
+In addition, the sample uses a [Sense HAT library](https://github.com/emmellsoft/RPi.SenseHat) to detect when it's running on a Raspberry Pi with a Sense HAT unit, so it can use the unit as an output display: it sets all display lights to red whenever it detects a class, and blanks them when it doesn't detect anything.
+
+## Reuse the app
+
+If you'd like to reset the app back to its original state, you can do so by clicking on the button on the top-right corner of the UI, or by invoking the method `DeleteCurrentModel` through the IoT Hub.
+
+At any point, you can repeat the step of uploading training images by clicking the top-right UI button or calling the `EnterLearningMode` method again.
+
+If you're running the app on a device and need to retrieve the IP address again (to establish a remote connection through the [Windows IoT Remote Client](https://www.microsoft.com/p/windows-iot-remote-client/9nblggh5mnxz#activetab=pivot:overviewtab), for example), you can call the `GetIpAddress` method through IoT Hub.
+
+## Clean up resources
+
+Delete your Custom Vision project if you no longer want to maintain it. On the [Custom Vision website](https://customvision.ai), navigate to **Projects** and select the trash can under your new project.
+
+![Screenshot of a panel labeled My New Project with a trash can icon](./media/csharp-tutorial/delete_project.png)
+
+## Next steps
+
+In this tutorial, you set up and ran an application that detects visual state information on an IoT device and sends the results to the IoT Hub. Next, explore the source code further or make one of the suggested modifications below.
+
+> [!div class="nextstepaction"]
+> [IoTVisualAlerts sample (GitHub)](https://github.com/Azure-Samples/Cognitive-Services-Vision-Solution-Templates/tree/master/IoTVisualAlerts)
+
+* Add an IoT Hub method to switch the app directly to the **Waiting For Trained Model** state. This way, you can train the model with images that aren't captured by the device itself and then push the new model to the device on command.
+* Follow the [Visualize real-time sensor data](../../iot-hub/iot-hub-live-data-visualization-in-power-bi.md) tutorial to create a Power BI Dashboard to visualize the IoT Hub alerts sent by the sample.
+* Follow the [IoT remote monitoring](../../iot-hub/iot-hub-monitoring-notifications-with-azure-logic-apps.md) tutorial to create a Logic App that responds to the IoT Hub alerts when visual states are detected.
ai-services Limits And Quotas https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/custom-vision-service/limits-and-quotas.md
+
+ Title: Limits and quotas - Custom Vision Service
+
+description: This article explains the different types of licensing keys and about the limits and quotas for the Custom Vision Service.
+++++++ Last updated : 07/05/2022+++
+# Limits and quotas
+
+There are two tiers of keys for the Custom Vision service. You can sign up for a F0 (free) or S0 (standard) subscription through the Azure portal. See the corresponding [Azure AI services Pricing page](https://azure.microsoft.com/pricing/details/cognitive-services/custom-vision-service/) for details on pricing and transactions.
+
+The number of training images per project and tags per project are expected to increase over time for S0 projects.
+
+|Factor|**F0 (free)**|**S0 (standard)**|
+|--|--|--|
+|Projects|2|100|
+|Training images per project |5,000|100,000|
+|Predictions / month|10,000 |Unlimited|
+|Tags / project|50|500|
+|Iterations |20|20|
+|Min labeled images per Tag, Classification (50+ recommended) |5|5|
+|Min labeled images per Tag, Object Detection (50+ recommended)|15|15|
+|How long prediction images stored|30 days|30 days|
+|[Prediction](https://go.microsoft.com/fwlink/?linkid=865445) operations with storage (Transactions Per Second)|2|10|
+|[Prediction](https://go.microsoft.com/fwlink/?linkid=865445) operations without storage (Transactions Per Second)|2|20|
+|[TrainProject](https://go.microsoft.com/fwlink/?linkid=865446) (API calls Per Second)|2|10|
+|[Other API calls](https://go.microsoft.com/fwlink/?linkid=865446) (Transactions Per Second)|10|10|
+|Accepted image types|jpg, png, bmp, gif|jpg, png, bmp, gif|
+|Min image height/width in pixels|256 (see note)|256 (see note)|
+|Max image height/width in pixels|10,240|10,240|
+|Max image size (training image upload) |6 MB|6 MB|
+|Max image size (prediction)|4 MB|4 MB|
+|Max number of regions per image (object detection)|300|300|
+|Max number of tags per image (classification)|100|100|
+
+> [!NOTE]
+> Images smaller than 256 pixels will be accepted but upscaled.
+> Image aspect ratio should not be larger than 25:1.
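+
+Clients that exceed the transactions-per-second limits above will typically receive throttled (HTTP 429) responses. The following retry-with-backoff sketch is illustrative only and not an official SDK feature:
+
+```csharp
+// Illustrative only: retry an HTTP prediction call with exponential backoff on HTTP 429.
+using System;
+using System.Net;
+using System.Net.Http;
+using System.Threading.Tasks;
+
+static class ThrottlingRetry
+{
+    // A request message can't be reused after it's sent, so a factory creates a fresh one per attempt.
+    public static async Task<HttpResponseMessage> SendWithRetryAsync(
+        HttpClient client, Func<HttpRequestMessage> createRequest, int maxAttempts = 5)
+    {
+        for (int attempt = 1; ; attempt++)
+        {
+            HttpResponseMessage response = await client.SendAsync(createRequest());
+            if (response.StatusCode != (HttpStatusCode)429 || attempt == maxAttempts)
+            {
+                return response;
+            }
+            await Task.Delay(TimeSpan.FromSeconds(Math.Pow(2, attempt)));   // wait 2, 4, 8, ... seconds
+        }
+    }
+}
+```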
ai-services Logo Detector Mobile https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/custom-vision-service/logo-detector-mobile.md
+
+ Title: "Tutorial: Use custom logo detector to recognize Azure services - Custom Vision"
+
+description: In this tutorial, you will step through a sample app that uses Custom Vision as part of a logo detection scenario. Learn how Custom Vision is used with other components to deliver an end-to-end application.
++++++ Last updated : 07/04/2023+
+ms.devlang: csharp
+++
+# Tutorial: Recognize Azure service logos in camera pictures
+
+In this tutorial, you'll explore a sample app that uses Custom Vision as part of a larger scenario. The AI Visual Provision app, a Xamarin.Forms application for mobile platforms, analyzes photos of Azure service logos and then deploys those services to the user's Azure account. Here you'll learn how it uses Custom Vision in coordination with other components to deliver a useful end-to-end application. You can run the whole app scenario for yourself, or you can complete only the Custom Vision part of the setup and explore how the app uses it.
+
+This tutorial shows you how to:
+
+> [!div class="checklist"]
+> - Create a custom object detector to recognize Azure service logos.
+> - Connect your app to Azure AI Vision and Custom Vision.
+> - Create an Azure service principal account to deploy Azure services from the app.
+
+If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/cognitive-services/) before you begin.
+
+## Prerequisites
+
+- [Visual Studio 2017 or later](https://www.visualstudio.com/downloads/)
+- The Xamarin workload for Visual Studio (see [Installing Xamarin](/xamarin/cross-platform/get-started/installation/windows))
+- An iOS or Android emulator for Visual Studio
+- The [Azure CLI](/cli/azure/install-azure-cli-windows) (optional)
+
+## Get the source code
+
+If you want to use the provided app, clone or download the app's source code from the [AI Visual Provision](https://github.com/Microsoft/AIVisualProvision) repository on GitHub. Open the *Source/VisualProvision.sln* file in Visual Studio. Later, you'll edit some of the project files so you can run the app yourself.
+
+## Create an object detector
+
+Sign in to the [Custom Vision web portal](https://customvision.ai/) and create a new project. Specify an Object Detection project and use the Logo domain; this will let the service use an algorithm optimized for logo detection.
+
+![New-project window on the Custom Vision website in the Chrome browser](media/azure-logo-tutorial/new-project.png)
+
+## Upload and tag images
+
+Next, train the logo detection algorithm by uploading images of Azure service logos and tagging them manually. The AIVisualProvision repository includes a set of training images that you can use. On the website, select the **Add images** button on the **Training Images** tab. Then go to the **Documents/Images/Training_DataSet** folder of the repository. You'll need to manually tag the logos in each image, so if you're only testing out this project, you might want to upload only a subset of the images. Upload at least 15 instances of each tag you plan to use.
+
+After you upload the training images, select the first one on the display. The tagging window will appear. Draw boxes and assign tags for each logo in each image.
+
+![Logo tagging on the Custom Vision website](media/azure-logo-tutorial/tag-logos.png)
+
+The app is configured to work with specific tag strings. You'll find the definitions in the *Source\VisualProvision\Services\Recognition\RecognitionService.cs* file:
+
+[!code-csharp[Tag definitions](~/AIVisualProvision/Source/VisualProvision/Services/Recognition/RecognitionService.cs?name=snippet_constants)]
+
+After you tag an image, go to the right to tag the next one. Close the tagging window when you finish.
+
+## Train the object detector
+
+In the left pane, set the **Tags** switch to **Tagged** to display your images. Then select the green button at the top of the page to train the model. The algorithm will train to recognize the same tags in new images. It will also test the model on some of your existing images to generate accuracy scores.
+
+![The Custom Vision website, on the Training Images tab. In this screenshot, the Train button is outlined](media/azure-logo-tutorial/train-model.png)
+
+## Get the prediction URL
+
+After your model is trained, you're ready to integrate it into your app. You'll need to get the endpoint URL (the address of your model, which the app will query) and the prediction key (to grant the app access to prediction requests). On the **Performance** tab, select the **Prediction URL** button at the top of the page.
+
+![The Custom Vision website, showing a Prediction API window that displays a URL address and API key](media/azure-logo-tutorial/cusvis-endpoint.png)
+
+Copy the endpoint URL and the **Prediction-Key** value to the appropriate fields in the *Source\VisualProvision\AppSettings.cs* file:
+
+[!code-csharp[Custom Vision fields](~/AIVisualProvision/Source/VisualProvision/AppSettings.cs?name=snippet_cusvis_keys)]
+
+## Examine Custom Vision usage
+
+Open the *Source/VisualProvision/Services/Recognition/CustomVisionService.cs* file to see how the app uses your Custom Vision key and endpoint URL. The **PredictImageContentsAsync** method takes a byte stream of an image file along with a cancellation token (for asynchronous task management), calls the Custom Vision prediction API, and returns the result of the prediction.
+
+[!code-csharp[Custom Vision fields](~/AIVisualProvision/Source/VisualProvision/Services/Recognition/CustomVisionService.cs?name=snippet_prediction)]
+
+This result takes the form of a **PredictionResult** instance, which itself contains a list of **Prediction** instances. A **Prediction** contains a detected tag and its bounding box location in the image.
+
+[!code-csharp[Custom Vision fields](~/AIVisualProvision/Source/VisualProvision/Services/Recognition/Prediction.cs?name=snippet_prediction_class)]
+
+To learn more about how the app handles this data, start with the **GetResourcesAsync** method. This method is defined in the *Source/VisualProvision/Services/Recognition/RecognitionService.cs* file.
+
+## Add text recognition
+
+The Custom Vision portion of the tutorial is complete. If you want to run the app, you'll need to integrate the Azure AI Vision service as well. The app uses the Azure AI Vision text recognition feature to supplement the logo detection process. An Azure logo can be recognized by its appearance *or* by the text printed near it. Unlike Custom Vision models, Azure AI Vision is pretrained to perform certain operations on images or videos.
+
+Subscribe to the Azure AI Vision service to get a key and endpoint URL. For help on this step, see [How to obtain keys](../multi-service-resource.md?pivots=azportal).
+
+![The Azure AI Vision service in the Azure portal, with the Quickstart menu selected. A link for keys is outlined, as is the API endpoint URL](media/azure-logo-tutorial/comvis-keys.png)
+
+Next, open the *Source\VisualProvision\AppSettings.cs* file and populate the `ComputerVisionEndpoint` and `ComputerVisionKey` variables with the correct values.
+
+[!code-csharp[Azure AI Vision fields](~/AIVisualProvision/Source/VisualProvision/AppSettings.cs?name=snippet_comvis_keys)]
+
+## Create a service principal
+
+The app requires an Azure service principal account to deploy services to your Azure subscription. A service principal lets you delegate specific permissions to an app using Azure role-based access control. To learn more, see the [service principals guide](/azure-stack/operator/azure-stack-create-service-principals).
+
+You can create a service principal by using either Azure Cloud Shell or the Azure CLI, as shown here. To begin, sign in and select the subscription you want to use.
+
+```azurecli
+az login
+az account list
+az account set --subscription "<subscription name or subscription id>"
+```
+
+Then create your service principal. (This process might take some time to finish.)
+
+```azurecli
+az ad sp create-for-rbac --name <servicePrincipalName> --role Contributor --scopes /subscriptions/<subscription_id> --password <yourSPStrongPassword>
+```
+
+Upon successful completion, you should see the following JSON output, including the necessary credentials.
+
+```json
+{
+ "clientId": "(...)",
+ "clientSecret": "(...)",
+ "subscriptionId": "(...)",
+ "tenantId": "(...)",
+ ...
+}
+```
+
+Take note of the `clientId` and `tenantId` values. Add them to the appropriate fields in the *Source\VisualProvision\AppSettings.cs* file.
+
+[!code-csharp[Azure AI Vision fields](~/AIVisualProvision/Source/VisualProvision/AppSettings.cs?name=snippet_serviceprincipal)]
+
+## Run the app
+
+At this point, you've given the app access to:
+
+- A trained Custom Vision model
+- The Azure AI Vision service
+- A service principal account
+
+Follow these steps to run the app:
+
+1. In Visual Studio Solution Explorer, select either the **VisualProvision.Android** project or the **VisualProvision.iOS** project. Choose a corresponding emulator or connected mobile device from the drop-down menu on the main toolbar. Then run the app.
+
+ > [!NOTE]
+ > You will need a macOS device to run an iOS emulator.
+
+1. On the first screen, enter your service principal client ID, tenant ID, and password. Select the **Login** button.
+
+ > [!NOTE]
+ > On some emulators, the **Login** button might not be activated at this step. If this happens, stop the app, open the *Source/VisualProvision/Pages/LoginPage.xaml* file, find the `Button` element labeled **LOGIN BUTTON**, remove the following line, and then run the app again.
+ > ```xaml
+ > IsEnabled="{Binding IsValid}"
+ > ```
+
+ ![The app screen, showing fields for service principal credentials](media/azure-logo-tutorial/app-credentials.png)
+
+1. On the next screen, select your Azure subscription from the drop-down menu. (This menu should contain all of the subscriptions to which your service principal has access.) Select the **Continue** button. At this point, the app might prompt you to grant access to the device's camera and photo storage. Grant the access permissions.
+
+ ![The app screen, showing a drop-down field for Target Azure subscription](media/azure-logo-tutorial/app-az-subscription.png)
++
+1. The camera on your device will be activated. Take a photo of one of the Azure service logos that you trained. A deployment window should prompt you to select a region and resource group for the new services (as you would do if you were deploying them from the Azure portal).
+
+ ![A smartphone camera screen focused on two paper cutouts of Azure logos](media/azure-logo-tutorial/app-camera-capture.png)
+
+ ![An app screen showing fields for the deployment region and resource group](media/azure-logo-tutorial/app-deployment-options.png)
+
+## Clean up resources
+
+If you've followed all of the steps of this scenario and used the app to deploy Azure services to your account, go to the [Azure portal](https://portal.azure.com/). There, cancel the services you don't want to use.
+
+If you plan to create your own object detection project with Custom Vision, you might want to delete the logo detection project you created in this tutorial. A free subscription for Custom Vision allows for only two projects. To delete the logo detection project, on the [Custom Vision website](https://customvision.ai), open **Projects** and then select the trash icon under **My New Project**.
+
+## Next steps
+
+In this tutorial, you set up and explored a full-featured Xamarin.Forms app that uses the Custom Vision service to detect logos in mobile camera images. Next, learn the best practices for building a Custom Vision model so that when you create one for your own app, you can make it powerful and accurate.
+
+> [!div class="nextstepaction"]
+> [How to improve your classifier](getting-started-improving-your-classifier.md)
ai-services Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/custom-vision-service/overview.md
+
+ Title: What is Custom Vision?
+
+description: Learn how to use the Azure AI Custom Vision service to build custom AI models to detect objects or classify images.
+++++++ Last updated : 07/04/2023++
+keywords: image recognition, image identifier, image recognition app, custom vision
+#Customer intent: As a data scientist/developer, I want to understand what the Custom Vision service does so that I can determine if it's suitable for my project.
++
+# What is Custom Vision?
+
+Azure AI Custom Vision is an image recognition service that lets you build, deploy, and improve your own image identifier models. An image identifier applies labels to images, according to their visual characteristics. Each label represents a classification or object. Unlike the [Azure AI Vision](../computer-vision/overview.md) service, Custom Vision allows you to specify your own labels and train custom models to detect them.
+
+> [!TIP]
+> The Azure AI Vision Image Analysis API now supports custom models. [Use Image Analysis 4.0](../computer-vision/how-to/model-customization.md) to create custom image identifier models using the latest technology from Azure. To migrate a Custom Vision project to the new Image Analysis 4.0 system, see the [Migration guide](../computer-vision/how-to/migrate-from-custom-vision.md).
+
+You can use Custom Vision through a client library SDK, REST API, or through the [Custom Vision web portal](https://customvision.ai/). Follow a quickstart to get started.
+
+> [!div class="nextstepaction"]
+> [Quickstart (web portal)](getting-started-build-a-classifier.md)
++
+This documentation contains the following types of articles:
+* The [quickstarts](./getting-started-build-a-classifier.md) are step-by-step instructions that let you make calls to the service and get results in a short period of time.
+* The [how-to guides](./test-your-model.md) contain instructions for using the service in more specific or customized ways.
+* The [tutorials](./iot-visual-alerts-tutorial.md) are longer guides that show you how to use this service as a component in broader business solutions.
+<!--* The [conceptual articles](Vision-API-How-to-Topics/call-read-api.md) provide in-depth explanations of the service's functionality and features.-->
+
+For a more structured approach, follow a Training module for Custom Vision:
+* [Classify images with the Custom Vision service](/training/modules/classify-images-custom-vision/)
+* [Classify endangered bird species with Custom Vision](/training/modules/cv-classify-bird-species/)
+
+## How it works
+
+The Custom Vision service uses a machine learning algorithm to analyze images. You submit sets of images that have and don't have the visual characteristics you're looking for. Then you label the images with your own custom labels (tags) at the time of submission. The algorithm trains to this data and calculates its own accuracy by testing itself on the same images. Once you've trained your model, you can test, retrain, and eventually use it in your image recognition app to [classify images](getting-started-build-a-classifier.md) or [detect objects](get-started-build-detector.md). You can also [export the model](export-your-model.md) for offline use.
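+
+As a rough sketch of that flow with the .NET training client, the steps look like the following. The project name, tag, image path, key, and endpoint are placeholders; see the SDK quickstart for the authoritative walkthrough.
+
+```csharp
+// Minimal sketch of creating, tagging, and training a Custom Vision project.
+// NuGet: Microsoft.Azure.CognitiveServices.Vision.CustomVision.Training
+using System;
+using System.Collections.Generic;
+using System.IO;
+using System.Threading;
+using Microsoft.Azure.CognitiveServices.Vision.CustomVision.Training;
+
+var trainer = new CustomVisionTrainingClient(new ApiKeyServiceClientCredentials("<training key>"))
+{
+    Endpoint = "<your training endpoint>"
+};
+
+var project = trainer.CreateProject("fruit-classifier");           // placeholder project name
+var appleTag = trainer.CreateTag(project.Id, "apple");
+
+// In practice, upload at least 5 (ideally 50+) tagged images per tag before training.
+using (var stream = File.OpenRead("apple-01.jpg"))                  // placeholder image path
+{
+    trainer.CreateImagesFromData(project.Id, stream, new List<Guid> { appleTag.Id });
+}
+
+var iteration = trainer.TrainProject(project.Id);
+while (iteration.Status == "Training")                              // poll until training finishes
+{
+    Thread.Sleep(1000);
+    iteration = trainer.GetIteration(project.Id, iteration.Id);
+}
+Console.WriteLine($"Training status: {iteration.Status}");
+```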
+
+### Classification and object detection
+
+Custom Vision functionality can be divided into two features. **[Image classification](getting-started-build-a-classifier.md)** applies one or more labels to an entire image. **[Object detection](get-started-build-detector.md)** is similar, but it returns the coordinates in the image where the applied label(s) can be found.
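+
+The difference shows up in the prediction results: a classification call returns tag probabilities for the whole image, while a detection call also returns bounding boxes. The following .NET prediction client sketch uses placeholder keys, project IDs, and file names; in practice a single project is either a classifier or a detector, not both.
+
+```csharp
+// Sketch: compare classification output (tags only) with detection output (tags plus bounding boxes).
+// NuGet: Microsoft.Azure.CognitiveServices.Vision.CustomVision.Prediction
+using System;
+using System.IO;
+using Microsoft.Azure.CognitiveServices.Vision.CustomVision.Prediction;
+
+var predictor = new CustomVisionPredictionClient(new ApiKeyServiceClientCredentials("<prediction key>"))
+{
+    Endpoint = "<your prediction endpoint>"
+};
+
+using (var image = File.OpenRead("test.jpg"))
+{
+    // Image classification: one or more labels for the entire image.
+    var result = predictor.ClassifyImage(Guid.Parse("<classification project id>"), "<published model name>", image);
+    foreach (var p in result.Predictions)
+        Console.WriteLine($"{p.TagName}: {p.Probability:P1}");
+}
+
+using (var image = File.OpenRead("test.jpg"))
+{
+    // Object detection: labels plus normalized bounding-box coordinates.
+    var result = predictor.DetectImage(Guid.Parse("<detection project id>"), "<published model name>", image);
+    foreach (var p in result.Predictions)
+        Console.WriteLine($"{p.TagName}: {p.Probability:P1} at ({p.BoundingBox.Left:F2}, {p.BoundingBox.Top:F2})");
+}
+```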
+
+### Optimization
+
+The Custom Vision service is optimized to quickly recognize major differences between images, so you can start prototyping your model with a small amount of data. It's generally a good start to use 50 images per label. However, the service isn't optimal for detecting subtle differences in images (for example, detecting minor cracks or dents in quality assurance scenarios).
+
+Additionally, you can choose from several variations of the Custom Vision algorithm that are optimized for images with certain subject material&mdash;for example, landmarks or retail items. For more information, see [Select a domain](select-domain.md).
+
+## How to use it
+
+The Custom Vision Service is available as a set of native SDKs and through a web-based interface on the [Custom Vision portal](https://customvision.ai/). You can create, test, and train a model through either interface or use both together.
+
+### Supported browsers for Custom Vision web portal
+
+The Custom Vision portal can be used by the following web browsers:
+- Microsoft Edge (latest version)
+- Google Chrome (latest version)
+
+![Custom Vision website in a Chrome browser window](media/browser-home.png)
+
+## Backup and disaster recovery
+
+As a part of Azure, Custom Vision Service has components that are maintained across multiple regions. Service zones and regions are used by all of our services to provide continued service to our customers. For more information on zones and regions, see [Azure regions](../../availability-zones/az-overview.md). If you need additional information or have any issues, [contact support](/answers/topics/azure-custom-vision.html).
++
+## Data privacy and security
+
+As with all of the Azure AI services, developers using the Custom Vision service should be aware of Microsoft's policies on customer data. See the [Azure AI services page](https://www.microsoft.com/trustcenter/cloudservices/cognitiveservices) on the Microsoft Trust Center to learn more.
+
+## Data residency
+
+Custom Vision generally doesn't replicate data outside of the specified region. The exception is the `NorthCentralUS` region, where there is no local Azure support.
+
+## Next steps
+
+* Follow the [Build a classifier](getting-started-build-a-classifier.md) quickstart to get started using Custom Vision on the web portal.
+* Or, complete an [SDK quickstart](quickstarts/image-classification.md) to implement the basic scenarios with code.
ai-services Image Classification https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/custom-vision-service/quickstarts/image-classification.md
+
+ Title: "Quickstart: Image classification with Custom Vision client library or REST API"
+
+description: "Quickstart: Create an image classification project, add tags, upload images, train your project, and make a prediction using the Custom Vision client library or the REST API"
+++++ Last updated : 11/03/2022
+ms.devlang: csharp, golang, java, javascript, python
+
+keywords: custom vision, image recognition, image recognition app, image analysis, image recognition software
+zone_pivot_groups: programming-languages-set-cusvis
++
+# Quickstart: Create an image classification project with the Custom Vision client library or REST API
++++++
ai-services Object Detection https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/custom-vision-service/quickstarts/object-detection.md
+
+ Title: "Quickstart: Object detection with Custom Vision client library"
+
+description: "Quickstart: Create an object detection project, add custom tags, upload images, train the model, and detect objects in images using the Custom Vision client library."
+++++ Last updated : 11/03/2022
+ms.devlang: csharp, golang, java, javascript, python
+
+keywords: custom vision
+zone_pivot_groups: programming-languages-set-one
++
+# Quickstart: Create an object detection project with the Custom Vision client library
+++++
ai-services Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/custom-vision-service/release-notes.md
+
+ Title: Release Notes - Custom Vision Service
+
+description: Get the latest information on new releases from the Custom Vision team.
+++++++ Last updated : 04/03/2019+++
+# Custom Vision Service Release Notes
+
+## May 2, 2019 and May 10, 2019
+
+- Bugfixes and backend improvements
+
+## May 23, 2019
+
+- Improved the portal user experience related to Azure subscriptions, making it easier to select your Azure directories.
+
+## April 18, 2019
+
+- Added Object Detection export for the Vision AI Dev Kit.
+- UI tweaks, including project search.
+
+## April 3, 2019
+
+- Increased limit on number of bounding boxes per image to 200.
+- Bugfixes, including substantial performance update for models exported to TensorFlow.
+
+## March 26, 2019
+
+- Custom Vision Service has entered General Availability on Azure!
+- Added Advanced Training feature with a new machine learning backend for improved performance, especially on challenging datasets and fine-grained classification. With advanced training, you can specify a compute time budget for training and Custom Vision will experimentally identify the best training and augmentation settings. For quick iterations, you can continue to use the existing fast training.
+- Introduced 3.0 APIs. Announced coming deprecation of pre-3.0 APIs on October 1, 2019. See the documentation [quickstarts](./quickstarts/image-classification.md) for examples on how to get started.
+- Replaced "Default Iterations" with Publish/Unpublish in the 3.0 APIs.
+- New model export targets have been added. Dockerfile export has been upgraded to support ARM for Raspberry Pi 3. Export support has been added to the Vision AI Dev Kit.
+- Increased limit of Tags per project to 500 for S0 tier. Increased limit of Images per project to 100,000 for S0 tier.
+- Removed Adult domain. General domain is recommended instead.
+- Announced [pricing](https://azure.microsoft.com/pricing/details/cognitive-services/custom-vision-service/) for General Availability.
+
+## February 25, 2019
+
+- Announced the end of Limited Trial projects (projects not associated with an Azure resource), as Custom Vision nears completion of its move to Azure public preview. Beginning March 25, 2019, the CustomVision.ai site will only support viewing projects associated with an Azure resource, such as the free Custom Vision resource. Through October 1, 2019, you'll still be able to access your existing limited trial projects via the Custom Vision APIs. This will give you time to update API keys for any apps you've written with Custom Vision. After October 1, 2019, any limited trial projects you haven't moved to Azure will be deleted.
+
+## January 22, 2019
+
+- Support added for new Azure regions: West US 2, East US, East US 2, West Europe, North Europe, Southeast Asia, Australia East, Central India, UK South, Japan East, and North Central US. Support continues for South Central US.
+
+## December 12, 2018
+
+- Support export for Object Detection models (introduced Object Detection Compact Domain).
+- Fixed a number of accessibility issues for improved screen reader and keyboard navigation support.
+- UX updates for image viewer and improved object detection tagging experience for faster tagging.
+- Updated base model for Object Detection Domain for better quality object detection.
+- Bug fixes.
+
+## November 6, 2018
+
+- Added support for Logo Domain in Object Detection.
+
+## October 9, 2018
+
+- Object Detection enters paid preview. You can now create Object Detection projects with an Azure resource.
+- Added a "Move to Azure" feature to the website, to make it easier to upgrade a Limited Trial project to an Azure resource-linked project (F0 or S0). You can find this on the Settings page for your product.
+- Added export to ONNX 1.2, to support the Windows 10 October 2018 Update version of Windows ML.
+- Bug fixes, including for ONNX export with special characters.
+
+## August 14, 2018
+
+- Added "Get Started" widget to customvision.ai site to guide users through project training.
+- Further improvements to the machine learning pipeline to benefit multilabel projects (new loss layer).
+
+## June 28, 2018
+
+- Bug fixes & backend improvements.
+- Enabled multiclass classification, for projects where images have exactly one label. In Predictions for multiclass mode, probabilities will sum to one (all images are classified among your specified Tags).
+
+## June 13, 2018
+
+- UX refresh, focused on ease of use and accessibility.
+- Improvements to the machine learning pipeline to benefit multilabel projects with a large number of tags.
+- Fixed bug in TensorFlow export. Enabled exported model versioning, so iterations can be exported more than once.
+
+## May 7, 2018
+
+- Introduced preview Object Detection feature for Limited Trial projects.
+- Upgrade to 2.0 APIs
+- S0 tier expanded to up to 250 tags and 50,000 images.
+- Significant backend improvements to the machine learning pipeline for image classification projects. Projects trained after April 27, 2018 will benefit from these updates.
+- Added model export to ONNX, for use with Windows ML.
+- Added model export to Dockerfile. This allows you to download the artifacts to build your own Windows or Linux containers, including a DockerFile, TensorFlow model, and service code.
+- For newly trained models exported to TensorFlow in the General (Compact) and Landmark (Compact) Domains, [Mean Values are now (0,0,0)](https://github.com/azure-samples/cognitive-services-android-customvision-sample), for consistency across all projects.
+
+## March 1, 2018
+
+- Entered paid preview and onboarded onto the Azure portal. Projects can now be attached to Azure resources with an F0 (Free) or S0 (Standard) tier. Introduced S0 tier projects, which allow up to 100 tags and 25,000 images.
+- Backend changes to the machine learning pipeline/normalization parameter. This will give customers better control of precision-recall tradeoffs when adjusting the Probability Threshold. As a part of these changes, the default Probability Threshold in the CustomVision.ai portal was set to be 50%.
+
+## December 19, 2017
+
+- Export to Android (TensorFlow) added, in addition to the previously released export to iOS (CoreML). This allows a trained compact model to be exported and run offline in an application.
+- Added Retail and Landmark "compact" domains to enable model export for these domains.
+- Released version [1.2 Training API](https://westus2.dev.cognitive.microsoft.com/docs/services/f2d62aa3b93843d79e948fe87fa89554/operations/5a3044ee08fa5e06b890f11f) and [1.1 Prediction API](https://westus2.dev.cognitive.microsoft.com/docs/services/57982f59b5964e36841e22dfbfe78fc1/operations/5a3044f608fa5e06b890f164). Updated APIs support model export, new Prediction operation that does not save images to "Predictions," and introduced batch operations to the Training API.
+- UX tweaks, including the ability to see which domain was used to train an iteration.
+- Updated [C# SDK and sample](https://github.com/Microsoft/Cognitive-CustomVision-Windows).
ai-services Role Based Access Control https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/custom-vision-service/role-based-access-control.md
+
+ Title: "Azure role-based access control - Custom Vision"
+
+description: This article will show you how to configure Azure role-based access control for your Custom Vision projects.
+++++ Last updated : 09/11/2020+++
+# Azure role-based access control
+
+Custom Vision supports Azure role-based access control (Azure RBAC), an authorization system for managing individual access to Azure resources. Using Azure RBAC, you assign different team members different levels of permissions for your Custom Vision projects. For more information on Azure RBAC, see the [Azure RBAC documentation](../../role-based-access-control/index.yml).
+
+## Add role assignment to Custom Vision resource
+
+Azure RBAC can be assigned to a Custom Vision resource. To grant access to an Azure resource, you add a role assignment.
+1. In the [Azure portal](https://portal.azure.com/), select **All services**.
+1. Then select **Azure AI services**, and navigate to your specific Custom Vision training resource.
+ > [!NOTE]
+ > You can also set up Azure RBAC for whole resource groups, subscriptions, or management groups. Do this by selecting the desired scope level and then navigating to the desired item (for example, selecting **Resource groups** and then clicking through to your wanted resource group).
+1. Select **Access control (IAM)** on the left navigation pane.
+1. Select **Add** -> **Add role assignment**.
+1. On the **Role** tab on the next screen, select a role you want to add.
+1. On the **Members** tab, select a user, group, service principal, or managed identity.
+1. On the **Review + assign** tab, select **Review + assign** to assign the role.
+
+Within a few minutes, the target will be assigned the selected role at the selected scope. For help with these steps, see [Assign Azure roles using the Azure portal](../../role-based-access-control/role-assignments-portal.md).
+
+## Custom Vision role types
+
+Use the following table to determine access needs for your Custom Vision resources.
+
+|Role |Permissions |
+|--|--|
+|`Cognitive Services Custom Vision Contributor` | Full access to the projects, including the ability to create, edit, or delete a project. |
+|`Cognitive Services Custom Vision Trainer` | Full access except the ability to create or delete a project. Trainers can view and edit projects and train, publish, unpublish, or export the models. |
+|`Cognitive Services Custom Vision Labeler` | Ability to upload, edit, or delete training images and create, add, remove, or delete tags. Labelers can view projects but can't update anything other than training images and tags. |
+|`Cognitive Services Custom Vision Deployment` | Ability to publish, unpublish, or export the models. Deployers can view projects but can't update a project, training images, or tags. |
+|`Cognitive Services Custom Vision Reader` | Ability to view projects. Readers can't make any changes. |
ai-services Select Domain https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/custom-vision-service/select-domain.md
+
+ Title: "Select a domain for a Custom Vision project - Azure AI Vision"
+
+description: This article will show you how to select a domain for your project in the Custom Vision Service.
++++++ Last updated : 06/13/2022+++
+# Select a domain for a Custom Vision project
+
+From the **settings** tab of your project on the Custom Vision web portal, you can select a model domain for your project. You'll want to choose the domain that's closest to your scenario. If you're accessing Custom Vision through a client library or REST API, you'll need to specify a domain ID when creating the project. You can get a list of domain IDs with [Get Domains](https://westus2.dev.cognitive.microsoft.com/docs/services/Custom_Vision_Training_3.3/operations/5eb0bcc6548b571998fddeab), or use the table below.
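+
+For example, with the .NET training client you could list the available domains and their IDs roughly as follows (a sketch; the key and endpoint are placeholders):
+
+```csharp
+// Sketch: list domain names, types, and IDs with the Custom Vision training SDK.
+using System;
+using Microsoft.Azure.CognitiveServices.Vision.CustomVision.Training;
+
+var trainer = new CustomVisionTrainingClient(new ApiKeyServiceClientCredentials("<training key>"))
+{
+    Endpoint = "<your training endpoint>"
+};
+
+foreach (var domain in trainer.GetDomains())
+{
+    Console.WriteLine($"{domain.Type}\t{domain.Name}\t{domain.Id}");
+}
+```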
+
+## Image Classification
+
+|Domain|Purpose|
+|--|--|
+|__General__| Optimized for a broad range of image classification tasks. If none of the other specific domains are appropriate, or if you're unsure of which domain to choose, select one of the General domains. ID: `ee85a74c-405e-4adc-bb47-ffa8ca0c9f31`|
+|__General [A1]__| Optimized for better accuracy with comparable inference time as General domain. Recommended for larger datasets or more difficult user scenarios. This domain requires more training time. ID: `a8e3c40f-fb4a-466f-832a-5e457ae4a344`|
+|__General [A2]__| Optimized for better accuracy with faster inference time than General[A1] and General domains. Recommended for most datasets. This domain requires less training time than General and General [A1] domains. ID: `2e37d7fb-3a54-486a-b4d6-cfc369af0018` |
+|__Food__|Optimized for photographs of dishes as you would see them on a restaurant menu. If you want to classify photographs of individual fruits or vegetables, use the Food domain. ID: `c151d5b5-dd07-472a-acc8-15d29dea8518`|
+|__Landmarks__|Optimized for recognizable landmarks, both natural and artificial. This domain works best when the landmark is clearly visible in the photograph. This domain works even if the landmark is slightly obstructed by people in front of it. ID: `ca455789-012d-4b50-9fec-5bb63841c793`|
+|__Retail__|Optimized for images that are found in a shopping catalog or shopping website. If you want high-precision classifying between dresses, pants, and shirts, use this domain. ID: `b30a91ae-e3c1-4f73-a81e-c270bff27c39`|
+|__Compact domains__| Optimized for the constraints of real-time classification on edge devices.|
++
+> [!NOTE]
+> The General[A1] and General[A2] domains can be used for a broad set of scenarios and are optimized for accuracy. Use the General[A2] model for better inference speed and shorter training time. For larger datasets, you may want to use General[A1] to render better accuracy than General[A2], though it requires more training and inference time. The General model requires more inference time than both General[A1] and General[A2].
+
+## Object Detection
+
+|Domain|Purpose|
+|--|--|
+|__General__| Optimized for a broad range of object detection tasks. If none of the other domains are appropriate, or you are unsure of which domain to choose, select the General domain. ID: `da2e3a8a-40a5-4171-82f4-58522f70fbc1`|
+|__General [A1]__| Optimized for better accuracy with comparable inference time as General domain. Recommended for more accurate region location needs, larger datasets, or more difficult user scenarios. This domain requires more training time, and results are not deterministic: expect a +-1% mean Average Precision (mAP) difference with the same training data provided. ID: `9c616dff-2e7d-ea11-af59-1866da359ce6`|
+|__Logo__|Optimized for finding brand logos in images. ID: `1d8ffafe-ec40-4fb2-8f90-72b3b6cecea4`|
+|__Products on shelves__|Optimized for detecting and classifying products on shelves. ID: `3780a898-81c3-4516-81ae-3a139614e1f3`|
+|__Compact domains__| Optimized for the constraints of real-time object detection on edge devices.|
+
+## Compact domains
+
+The models generated by compact domains can be exported to run locally. In the Custom Vision 3.4 public preview API, you can get a list of the exportable platforms for compact domains by calling the GetDomains API.
+
+All of the following domains support export in ONNX, TensorFlow, TensorFlow Lite, TensorFlow.js, CoreML, and VAIDK formats, with the exception that the **Object Detection General (compact)** domain does not support VAIDK.
+
+Model performance varies by selected domain. In the table below, we report the model size and inference time on Intel Desktop CPU and NVidia GPU \[1\]. These numbers don't include preprocessing and postprocessing time.
+
+|Task|Domain|ID|Model Size|CPU inference time|GPU inference time|
+|--|--|--|--|--|--|
+|Classification|General (compact)|`0732100f-1a38-4e49-a514-c9b44c697ab5`|6 MB|10 ms|5 ms|
+|Classification|General (compact) [S1]|`a1db07ca-a19a-4830-bae8-e004a42dc863`|43 MB|50 ms|5 ms|
+|Object Detection|General (compact)|`a27d5ca5-bb19-49d8-a70a-fec086c47f5b`|45 MB|35 ms|5 ms|
+|Object Detection|General (compact) [S1]|`7ec2ac80-887b-48a6-8df9-8b1357765430`|14 MB|27 ms|7 ms|
+
+>[!NOTE]
+>__General (compact)__ domain for Object Detection requires special postprocessing logic. For details, see the example script in the exported zip package. If you need a model without the postprocessing logic, use __General (compact) [S1]__.
+
+>[!IMPORTANT]
+>There is no guarantee that the exported models give exactly the same results as the prediction API in the cloud. Slight differences in the runtime platform or the preprocessing implementation can cause differences in the model outputs. For details on the preprocessing logic, see [this document](quickstarts/image-classification.md).
+
+\[1\] Intel Xeon E5-2690 CPU and NVIDIA Tesla M60
ai-services Storage Integration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/custom-vision-service/storage-integration.md
+
+ Title: Integrate Azure storage for notifications and model backup
+
+description: Learn how to integrate Azure storage to receive push notifications when you train or export Custom Vision models. You can also save a backup of exported models.
+++++ Last updated : 06/25/2021++++
+# Integrate Azure storage for notifications and backup
+
+You can integrate your Custom Vision project with an Azure blob storage queue to get push notifications of project training/export activity and backup copies of published models. This feature is useful to avoid continually polling the service for results when long operations are running. Instead, you can integrate the storage queue notifications into your workflow.
+
+This guide shows you how to use these REST APIs with cURL. You can also use an HTTP request service like Postman to issue the requests.
+
+> [!NOTE]
+> Push notifications depend on the optional _notificationQueueUri_ parameter in the **CreateProject** API, and model backups require that you also use the optional _exportModelContainerUri_ parameter. This guide will use both for the full set of features.
+
+## Prerequisites
+
+- A Custom Vision resource in Azure. If you don't have one, go to the Azure portal and [create a new Custom Vision resource](https://portal.azure.com/?microsoft_azure_marketplace_ItemHideKey=microsoft_azure_cognitiveservices_customvision#create/Microsoft.CognitiveServicesCustomVision?azure-portal=true). This feature doesn't currently support the [Azure AI services multi-service resource](../multi-service-resource.md).
+- An Azure Storage account with a blob container. Follow the [Storage quickstart](../../storage/blobs/storage-quickstart-blobs-portal.md) if you need help with this step.
+- [PowerShell version 6.0+](/powershell/scripting/install/installing-powershell-core-on-windows), or a similar command-line application.
+
+## Set up Azure storage integration
+
+Go to your Custom Vision training resource on the Azure portal, select the **Identity** page, and enable system assigned managed identity.
+
+Next, go to your storage resource in the Azure portal. Go to the **Access control (IAM)** page and select **Add role assignment (Preview)**. Then add a role assignment for either integration feature, or both:
+* If you plan to use the model backup feature, select the **Storage Blob Data Contributor** role, and add your Custom Vision training resource as a member. Select **Review + assign** to complete.
+* If you plan to use the notification queue feature, then select the **Storage Queue Data Contributor** role, and add your Custom Vision training resource as a member. Select **Review + assign** to complete.
+
+For help with role assignments, see [Assign Azure roles using the Azure portal](../../role-based-access-control/role-assignments-portal.md).
+
+### Get integration URLs
+
+Next, you'll get the URLs that allow your Custom Vision resource to access these endpoints.
+
+For the notification queue integration URL, go to the **Queues** page of your storage account, add a new queue, and save its URL to a temporary location.
+
+> [!div class="mx-imgBorder"]
+> ![Azure storage queue page](./media/storage-integration/queue-url.png)
+
+For the model backup integration URL, go to the **Containers** page of your storage account and create a new container. Then select it and go to the **Properties** page. Copy the URL to a temporary location.
+
+> [!div class="mx-imgBorder"]
+> ![Azure storage container properties page](./media/storage-integration/container-url.png)
++
+## Integrate Custom Vision project
+
+Now that you have the integration URLs, you can create a new Custom Vision project that integrates the Azure Storage features. You can also update an existing project to add the features.
+
+### Create new project
+
+When you call the [CreateProject](https://westus2.dev.cognitive.microsoft.com/docs/services/Custom_Vision_Training_3.3/operations/5eb0bcc6548b571998fddeae) API, add the optional parameters _exportModelContainerUri_ and _notificationQueueUri_. Assign the URL values you got in the previous section.
+
+```curl
+curl -v -X POST "{endpoint}/customvision/v3.3/Training/projects?exportModelContainerUri={inputUri}&notificationQueueUri={inputUri}&name={inputName}" \
+-H "Training-key: {subscription key}"
+```
+
+If you receive a `200/OK` response, that means the URLs have been set up successfully. You should see your URL values in the JSON response as well:
+
+```json
+{
+ "id": "00000000-0000-0000-0000-000000000000",
+ "name": "string",
+ "description": "string",
+ "settings": {
+ "domainId": "00000000-0000-0000-0000-000000000000",
+ "classificationType": "Multiclass",
+ "targetExportPlatforms": [
+ "CoreML"
+ ],
+ "useNegativeSet": true,
+ "detectionParameters": "string",
+ "imageProcessingSettings": {
+ "augmentationMethods": {}
+    },
+    "exportModelContainerUri": {url},
+    "notificationQueueUri": {url}
+  },
+ "created": "string",
+ "lastModified": "string",
+ "thumbnailUri": "string",
+ "drModeEnabled": true,
+ "status": "Succeeded"
+}
+```
+
+### Update existing project
+
+To update an existing project with Azure storage feature integration, call the [UpdateProject](https://westus2.dev.cognitive.microsoft.com/docs/services/Custom_Vision_Training_3.3/operations/5eb0bcc6548b571998fddeb1) API, using the ID of the project you want to update.
+
+```curl
+curl -v -X PATCH "{endpoint}/customvision/v3.3/Training/projects/{projectId}" \
+-H "Content-Type: application/json" \
+-H "Training-key: {subscription key}" \
+--data-ascii "{body}"
+```
+
+Set the request body (`body`) to the following JSON format, filling in the appropriate values for _exportModelContainerUri_ and _notificationQueueUri_:
+
+```json
+{
+ "name": "string",
+ "description": "string",
+ "settings": {
+ "domainId": "00000000-0000-0000-0000-000000000000",
+ "classificationType": "Multiclass",
+ "targetExportPlatforms": [
+ "CoreML"
+ ],
+ "imageProcessingSettings": {
+ "augmentationMethods": {}
+    },
+    "exportModelContainerUri": {url},
+    "notificationQueueUri": {url}
+  },
+ "status": "Succeeded"
+}
+```
+
+If you receive a `200/OK` response, that means the URLs have been set up successfully.
+
+## Verify the connection
+
+Your API call in the previous section should have already triggered new information in your Azure storage account.
+
+In your designated container, there should be a test blob inside a **CustomVision-TestPermission** folder. This blob will only exist temporarily.
+
+In your notification queue, you should see a test notification in the following format:
+
+```json
+{
+"version": "1.0" ,
+"type": "ConnectionTest",
+"Content":
+ {
+ "projectId": "00000000-0000-0000-0000-000000000000"
+ }
+}
+```
+
+## Get event notifications
+
+When you're ready, call the [TrainProject](https://westus2.dev.cognitive.microsoft.com/docs/services/Custom_Vision_Training_3.3/operations/5eb0bcc7548b571998fddee1) API on your project to do an ordinary training operation.
+
+In your Storage notification queue, you'll receive a notification once training finishes:
+
+```json
+{
+"version": "1.0" ,
+"type": "Training",
+"Content":
+ {
+ "projectId": "00000000-0000-0000-0000-000000000000",
+ "iterationId": "00000000-0000-0000-0000-000000000000",
+ "trainingStatus": "TrainingCompleted"
+ }
+}
+```
+
+The `"trainingStatus"` field may be either `"TrainingCompleted"` or `"TrainingFailed"`. The `"iterationId"` field is the ID of the trained model.
+
+## Get model export backups
+
+When you're ready, call the [ExportIteration](https://westus2.dev.cognitive.microsoft.com/docs/services/Custom_Vision_Training_3.3/operations/5eb0bcc6548b571998fddece) API to export a trained model into a specified platform.
+
+In your designated storage container, a backup copy of the exported model will appear. The blob name will have the format:
+
+```
+{projectId} - {iterationId}.{platformType}
+```
+
+Also, you'll receive a notification in your queue when the export finishes.
+
+```json
+{
+"version": "1.0" ,
+"type": "Export",
+"Content":
+ {
+ "projectId": "00000000-0000-0000-0000-000000000000",
+ "iterationId": "00000000-0000-0000-0000-000000000000",
+ "exportStatus": "ExportCompleted",
+ "modelUri": {url}
+ }
+}
+```
+
+The `"exportStatus"` field may be either `"ExportCompleted"` or `"ExportFailed"`. The `"modelUri"` field will contain the URL of the backup model stored in your container, assuming you integrated queue notifications in the beginning. If you didn't, the `"modelUri"` field will show the SAS URL for your Custom Vision model blob.
+
+## Next steps
+
+In this guide, you learned how to integrate Azure storage with a Custom Vision project to receive activity notifications and back up exported models. Next, explore the API reference docs to see what else you can do with Custom Vision.
+* [REST API reference documentation (training)](https://westus2.dev.cognitive.microsoft.com/docs/services/Custom_Vision_Training_3.3/operations/5eb0bcc6548b571998fddeb3)
+* [REST API reference documentation (prediction)](https://westus2.dev.cognitive.microsoft.com/docs/services/Custom_Vision_Prediction_3.1/operations/5eb37d24548b571998fde5f3)
ai-services Suggested Tags https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/custom-vision-service/suggested-tags.md
+
+ Title: "Label images faster with Smart Labeler"
+
+description: In this guide, you'll learn how to use Smart Labeler to generate suggested tags for images. This lets you label a large number of images more quickly when training a Custom Vision model.
+++++++ Last updated : 12/27/2022+++
+# Tag images faster with Smart Labeler
+
+In this guide, you'll learn how to use Smart Labeler to generate suggested tags for images. This lets you label a large number of images more quickly when you're training a Custom Vision model.
+
+When you tag images for a Custom Vision model, the service uses the latest trained iteration of the model to predict the labels of new images. It shows these predictions as suggested tags, based on the selected confidence threshold and prediction uncertainty. You can then either confirm or change the suggestions, speeding up the process of manually tagging the images for training.
+
+## When to use Smart Labeler
+
+Keep the following limitations in mind:
+
+* Only request suggested tags for tags that the model has already been trained on at least once. Don't get suggestions for a new tag that you're just beginning to train.
+
+> [!IMPORTANT]
+> The Smart Labeler feature uses the same [pricing model](https://azure.microsoft.com/pricing/details/cognitive-services/custom-vision-service/) as regular predictions. The first time you trigger suggested tags for a set of images, you'll be charged the same as for prediction calls. After that, the service stores the results for the selected images in a database for 30 days, and you can access them anytime for free within that period. After 30 days, you'll be charged if you request their suggested tags again.
+
+## Smart Labeler workflow
+
+Follow these steps to use Smart Labeler:
+
+1. Upload all of your training images to your Custom Vision project.
+1. Label part of your data set, choosing an equal number of images for each tag.
+ > [!TIP]
+ > Make sure you use all of the tags for which you want suggestions later on.
+1. Start the training process.
+1. When training is complete, navigate to the **Untagged** view and select the **Get suggested tags** button on the left pane.
+ > [!div class="mx-imgBorder"]
+ > ![The suggested tags button is shown under the untagged images tab.](./media/suggested-tags/suggested-tags-button.png)
+1. In the popup window that appears, set the number of images for which you want suggestions. You should only get initial tag suggestions for a portion of the untagged images. You'll get better tag suggestions as you iterate through this process.
+1. Confirm the suggested tags, fixing any that aren't correct.
+ > [!TIP]
+ > Images with suggested tags are sorted by their prediction uncertainty (lower values indicate higher confidence). You can change the sorting order with the **Sort by uncertainty** option. If you set the order to **high to low**, you can correct the high-uncertainty predictions first and then quickly confirm the low-uncertainty ones.
+ * In image classification projects, you can select and confirm tags in batches. Filter the view by a given suggested tag, deselect images that are tagged incorrectly, and then confirm the rest in a batch.
+ > [!div class="mx-imgBorder"]
+ > ![Suggested tags are displayed in batch mode for IC with filters.](./media/suggested-tags/ic-batch-mode.png)
+
+ You can also use suggested tags in individual image mode by selecting an image from the gallery.
+
+ ![Suggested tags are displayed in individual image mode for IC.](./media/suggested-tags/ic-individual-image-mode.png)
+ * In object detection projects, batch confirmations aren't supported, but you can still filter and sort by suggested tags for a more organized labeling experience. Thumbnails of your untagged images will show an overlay of bounding boxes indicating the locations of suggested tags. If you don't select a suggested tag filter, all of your untagged images will appear without overlaying bounding boxes.
+ > [!div class="mx-imgBorder"]
+ > ![Suggested tags are displayed in batch mode for OD with filters.](./media/suggested-tags/od-batch-mode.png)
+
+ To confirm object detection tags, you need to apply them to each individual image in the gallery.
+
+ ![Suggested tags are displayed in individual image mode for OD.](./media/suggested-tags/od-individual-image-mode.png)
+1. Start the training process again.
+1. Repeat the preceding steps until you're satisfied with the suggestion quality.
+
+## Next steps
+
+Follow a quickstart to get started creating and training a Custom Vision project.
+
+* [Build a classifier](getting-started-build-a-classifier.md)
+* [Build an object detector](get-started-build-detector.md)
ai-services Test Your Model https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/custom-vision-service/test-your-model.md
+
+ Title: Test and retrain a model - Custom Vision Service
+
+description: Learn how to test an image and then use it to retrain your model in the Custom Vision service.
+++++++ Last updated : 07/05/2022+++
+# Test and retrain a model with Custom Vision Service
+
+After you train your Custom Vision model, you can quickly test it using a locally stored image or a URL pointing to a remote image. The test uses the most recently trained iteration of your model. Then you can decide whether further training is needed.
+
+## Test your model
+
+1. From the [Custom Vision web portal](https://customvision.ai), select your project. Select **Quick Test** on the right of the top menu bar. This action opens a window labeled **Quick Test**.
+
+ ![The Quick Test button is shown in the upper right corner of the window.](./media/test-your-model/quick-test-button.png)
+
+1. In the **Quick Test** window, select in the **Submit Image** field and enter the URL of the image you want to use for your test. If you want to use a locally stored image instead, select the **Browse local files** button and select a local image file.
+
+ ![Screenshot of the submit image page.](./media/test-your-model/submit-image.png)
+
+The image you select appears in the middle of the page. Then the prediction results appear below the image in the form of a table with two columns, labeled **Tags** and **Confidence**. After you view the results, you may close the **Quick Test** window.
+
+## Use the predicted image for training
+
+You can now take the image submitted previously for testing and use it to retrain your model.
+
+1. To view images submitted to the classifier, open the [Custom Vision web page](https://customvision.ai) and select the __Predictions__ tab.
+
+ ![Image of the predictions tab](./media/test-your-model/predictions-tab.png)
+
+ > [!TIP]
+ > The default view shows images from the current iteration. You can use the __Iteration__ drop down field to view images submitted during previous iterations.
+
+1. Hover over an image to see the tags that were predicted by the classifier.
+
+ > [!TIP]
+ > Images are ranked, so that the images that can bring the most gains to the classifier are at the top. To select a different sorting, use the __Sort__ section.
+
+ To add an image to your training data, select the image, manually select the tag(s), and then select __Save and close__. The image is removed from __Predictions__ and added to the training images. You can view it by selecting the __Training Images__ tab.
+
+ ![Screenshot of the tagging page.](./media/test-your-model/tag-image.png)
+
+1. Use the __Train__ button to retrain the classifier.
+
+## Next steps
+
+[Improve your model](getting-started-improving-your-classifier.md)
ai-services Update Application To 3.0 Sdk https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/custom-vision-service/update-application-to-3.0-sdk.md
+
+ Title: How to update your project to the 3.0 API
+
+description: Learn how to update Custom Vision projects from the previous version of the API to the 3.0 API.
+++++++ Last updated : 12/27/2022++
+# Update to the 3.0 API
+
+Custom Vision has now reached General Availability and has undergone an API update. This update includes a few new features and, importantly, a few breaking changes:
+
+* The Prediction API is now split into two APIs based on the project type.
+* The Vision AI Developer Kit (VAIDK) export option requires creating a project in a specific way.
+* Default iterations have been removed in favor of publishing / unpublishing a named iteration.
+
+See the [Release notes](release-notes.md) for a full list of the changes. This guide will show you how to update your projects to work with the new API version.
+
+## Use the updated Prediction API
+
+The 2.x APIs used the same prediction call for both image classification and object detection projects. Both project types could be used with the **PredictImage** and **PredictImageUrl** calls. Starting with 3.0, we have split this API so that you need to match the project type to the call (a cURL sketch of the URL-based calls follows the list):
+
+* Use **[ClassifyImage](https://westus2.dev.cognitive.microsoft.com/docs/services/Custom_Vision_Prediction_3.0/operations/5c82db60bf6a2b11a8247c15)** and **[ClassifyImageUrl](https://westus2.dev.cognitive.microsoft.com/docs/services/Custom_Vision_Prediction_3.0/operations/5c82db60bf6a2b11a8247c14)** to get predictions for image classification projects.
+* Use **[DetectImage](https://westus2.dev.cognitive.microsoft.com/docs/services/Custom_Vision_Prediction_3.0/operations/5c82db60bf6a2b11a8247c19)** and **[DetectImageUrl](https://westus2.dev.cognitive.microsoft.com/docs/services/Custom_Vision_Prediction_3.0/operations/5c82db60bf6a2b11a8247c18)** to get predictions for object detection projects.
+
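+A hedged cURL sketch of the URL-based calls (the placeholders and exact route casing are illustrative; copy the real prediction URL and key from your own resource):
+
+```curl
+# Image classification project
+curl -v -X POST "{endpoint}/customvision/v3.0/prediction/{projectId}/classify/iterations/{publishedName}/url" \
+-H "Prediction-Key: {prediction key}" \
+-H "Content-Type: application/json" \
+--data-ascii "{\"Url\": \"https://example.com/image.jpg\"}"
+
+# Object detection project
+curl -v -X POST "{endpoint}/customvision/v3.0/prediction/{projectId}/detect/iterations/{publishedName}/url" \
+-H "Prediction-Key: {prediction key}" \
+-H "Content-Type: application/json" \
+--data-ascii "{\"Url\": \"https://example.com/image.jpg\"}"
+```
+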
+## Use the new iteration publishing workflow
+
+The 2.x APIs used the default iteration or a specified iteration ID to choose the iteration to use for prediction. Starting in 3.0, we have adopted a publishing flow whereby you first publish an iteration under a specified name from the training API. You then pass the name to the prediction methods to specify which iteration to use.
+
+> [!IMPORTANT]
+> The 3.0 APIs do not use the default iteration feature. Until we deprecate the older APIs, you can continue to use the 2.x APIs to toggle an iteration as the default. These APIs will be maintained for a period of time, and you can call the **[UpdateIteration](https://westus2.dev.cognitive.microsoft.com/docs/services/Custom_Vision_Training_3.0/operations/5c771cdcbf6a2b18a0c3b818)** method to mark an iteration as default.
+
+### Publish an iteration
+
+Once an iteration is trained, you can make it available for prediction using the **[PublishIteration](https://westus2.dev.cognitive.microsoft.com/docs/services/Custom_Vision_Training_3.0/operations/5c82db28bf6a2b11a8247bbc)** method. To publish an iteration, you'll need the prediction resource ID, which is available on the Custom Vision website's settings page.
+
+![The Custom Vision website settings page with the prediction resource ID outlined.](./media/update-application-to-3.0-sdk/prediction-id.png)
+
+> [!TIP]
+> You can also get this information from the [Azure Portal](https://portal.azure.com) by going to the Custom Vision Prediction resource and selecting **Properties**.
+
+Once your iteration is published, apps can use it for prediction by specifying the name in their prediction API call. To make an iteration unavailable for prediction calls, use the **[UnpublishIteration](https://westus2.dev.cognitive.microsoft.com/docs/services/Custom_Vision_Training_3.0/operations/5c771cdcbf6a2b18a0c3b81a)** API.
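+
+For reference, a hedged cURL sketch of the publish call against the 3.0 training API (the route and query parameter names here are assumptions that follow the patterns elsewhere in these docs; confirm them in the training API reference):
+
+```curl
+curl -v -X POST "{endpoint}/customvision/v3.0/training/projects/{projectId}/iterations/{iterationId}/publish?publishName={name}&predictionId={predictionResourceId}" \
+-H "Training-key: {training key}"
+```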
+
+## Next steps
+
+* [Training API reference documentation (REST)](https://go.microsoft.com/fwlink/?linkid=865446)
+* [Prediction API reference documentation (REST)](https://go.microsoft.com/fwlink/?linkid=865445)
ai-services Use Prediction Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/custom-vision-service/use-prediction-api.md
+
+ Title: "Use prediction endpoint to programmatically test images with classifier - Custom Vision"
+
+description: Learn how to use the API to programmatically test images with your Custom Vision Service classifier.
+++++ Last updated : 12/27/2022+
+ms.devlang: csharp
+++
+# Call the prediction API
+
+After you've trained your model, you can test images programmatically by submitting them to the prediction API endpoint. In this guide, you'll learn how to call the prediction API to score an image. You'll learn the different ways you can configure the behavior of this API to meet your needs.
+
+> [!NOTE]
+> This document demonstrates use of the .NET client library for C# to submit an image to the Prediction API. For more information and examples, see the [Prediction API reference](https://westus2.dev.cognitive.microsoft.com/docs/services/Custom_Vision_Prediction_3.0/operations/5c82db60bf6a2b11a8247c15).
+
+## Setup
+
+### Publish your trained iteration
+
+From the [Custom Vision web page](https://customvision.ai), select your project and then select the __Performance__ tab.
+
+To submit images to the Prediction API, you'll first need to publish your iteration for prediction, which can be done by selecting __Publish__ and specifying a name for the published iteration. This will make your model accessible to the Prediction API of your Custom Vision Azure resource.
+
+![The performance tab is shown, with a red rectangle surrounding the Publish button.](./media/use-prediction-api/unpublished-iteration.png)
+
+Once your model has been successfully published, you'll see a "Published" label appear next to your iteration in the left-hand sidebar, and its name will appear in the description of the iteration.
+
+![The performance tab is shown, with a red rectangle surrounding the Published label and the name of the published iteration.](./media/use-prediction-api/published-iteration.png)
+
+### Get the URL and prediction key
+
+Once your model has been published, you can retrieve the required information by selecting __Prediction URL__. This will open up a dialog with information for using the Prediction API, including the __Prediction URL__ and __Prediction-Key__.
+
+![The performance tab is shown with a red rectangle surrounding the Prediction URL button.](./media/use-prediction-api/published-iteration-prediction-url.png)
+
+![The performance tab is shown with a red rectangle surrounding the Prediction URL value for using an image file and the Prediction-Key value.](./media/use-prediction-api/prediction-api-info.png)
+
+## Submit data to the service
+
+This guide assumes that you already constructed a **[CustomVisionPredictionClient](/dotnet/api/microsoft.azure.cognitiveservices.vision.customvision.prediction.customvisionpredictionclient)** object, named `predictionClient`, with your Custom Vision prediction key and endpoint URL. For instructions on how to set up this feature, follow one of the [quickstarts](quickstarts/image-classification.md).
+
+In this guide, you'll use a local image, so download an image you'd like to submit to your trained model. The following code prompts the user to specify a local path and gets the bytestream of the file at that path.
+
+```csharp
+Console.Write("Enter image file path: ");
+string imageFilePath = Console.ReadLine();
+byte[] byteData = GetImageAsByteArray(imageFilePath);
+```
+
+Include the following helper method:
+
+```csharp
+private static byte[] GetImageAsByteArray(string imageFilePath)
+{
+    // Read the image file into a byte array; dispose the stream and reader when done.
+    using FileStream fileStream = new FileStream(imageFilePath, FileMode.Open, FileAccess.Read);
+    using BinaryReader binaryReader = new BinaryReader(fileStream);
+    return binaryReader.ReadBytes((int)fileStream.Length);
+}
+```
+
+The **[ClassifyImageAsync](/dotnet/api/microsoft.azure.cognitiveservices.vision.customvision.prediction.customvisionpredictionclientextensions.classifyimageasync#Microsoft_Azure_CognitiveServices_Vision_CustomVision_Prediction_CustomVisionPredictionClientExtensions_ClassifyImageAsync_Microsoft_Azure_CognitiveServices_Vision_CustomVision_Prediction_ICustomVisionPredictionClient_System_Guid_System_String_System_IO_Stream_System_String_System_Threading_CancellationToken_)** method takes the project ID and the locally stored image, and scores the image against the given model.
+
+```csharp
+// Make a prediction against the published iteration
+Console.WriteLine("Making a prediction:");
+using var imageStream = new MemoryStream(byteData);
+var result = await predictionClient.ClassifyImageAsync(project.Id, publishedModelName, imageStream);
+```
+
+## Determine how to process the data
+
+You can optionally configure how the service does the scoring operation by choosing alternate methods (see the methods of the **[CustomVisionPredictionClient](/dotnet/api/microsoft.azure.cognitiveservices.vision.customvision.prediction.customvisionpredictionclient)** class).
+
+You can use a non-async version of the method above for simplicity, but it may cause the program to lock up for a noticeable amount of time.
+
+The **-WithNoStore** methods instruct the service not to retain the prediction image after prediction is complete. Normally, the service retains these images so you have the option of adding them as training data for future iterations of your model.
+
+The **-WithHttpMessages** methods return the raw HTTP response of the API call.
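+
+For example, a sketch of the no-store variant, assuming the same `predictionClient`, `project`, and `publishedModelName` values used earlier (the method name follows the **-WithNoStore** pattern described above):
+
+```csharp
+// Score the image without the service retaining a copy of it.
+using var noStoreStream = new MemoryStream(byteData);
+var noStoreResult = await predictionClient.ClassifyImageWithNoStoreAsync(project.Id, publishedModelName, noStoreStream);
+```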
+
+## Get results from the service
+
+The service returns results in the form of an **[ImagePrediction](/dotnet/api/microsoft.azure.cognitiveservices.vision.customvision.prediction.models.imageprediction)** object. The **Predictions** property contains a list of **[PredictionModel](/dotnet/api/microsoft.azure.cognitiveservices.vision.customvision.prediction.models.predictionmodel)** objects, each of which represents a single prediction. They include the name of the label and, for object detection projects, the bounding box coordinates where the object was detected in the image. Your app can then parse this data to, for example, display the image with labeled object fields on a screen.
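+
+For example, a minimal sketch that walks the predictions from the earlier `result` variable (the property names come from the `PredictionModel` class; the bounding box is only populated for object detection projects):
+
+```csharp
+foreach (var prediction in result.Predictions)
+{
+    // Each PredictionModel carries a tag name and its probability.
+    Console.WriteLine($"{prediction.TagName}: {prediction.Probability:P1}");
+
+    // BoundingBox is populated for object detection projects.
+    if (prediction.BoundingBox != null)
+    {
+        Console.WriteLine($"  left={prediction.BoundingBox.Left}, top={prediction.BoundingBox.Top}, width={prediction.BoundingBox.Width}, height={prediction.BoundingBox.Height}");
+    }
+}
+```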
+
+## Next steps
+
+In this guide, you learned how to submit images to your custom image classifier/detector and receive a response programmatically with the C# SDK. Next, learn how to complete end-to-end scenarios with C#, or get started using a different language SDK.
+
+* [Quickstart: Custom Vision SDK](quickstarts/image-classification.md)
ai-services Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/custom-vision-service/whats-new.md
+
+ Title: What's new in Custom Vision?
+
+description: This article contains news about Custom Vision.
++++++ Last updated : 09/27/2021+++
+# What's new in Custom Vision
+
+Learn what's new in the service. These items may be release notes, videos, blog posts, and other types of information. Bookmark this page to keep up to date with the service.
+
+## May 2022
+
+### Estimated Minimum Budget
+- In the Custom Vision portal, users are now able to view the minimum estimated budget needed to train their project. This estimate (shown in hours) is calculated based on the volume of images uploaded and the domain selected by the user.
+
+## October 2020
+
+### Custom base model
+
+- Some applications have a large amount of joint training data but need to fine-tune their models separately; this results in better performance for images from different sources with minor differences. In this case, you can train the first model as usual with a large volume of training data. Then call **TrainProject** in the 3.4 public preview API with _CustomBaseModelInfo_ in the request body to use the first-stage trained model as the base model for downstream projects (a hypothetical request sketch follows below). If the source project and the downstream target project have similar image characteristics, you can expect better performance.
+
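+A hypothetical sketch of such a request (the route version segment and the property names inside the body are assumptions; take the exact schema from the 3.4 public preview API reference):
+
+```curl
+curl -v -X POST "{endpoint}/customvision/v3.4-preview/training/projects/{targetProjectId}/train" \
+-H "Training-key: {training key}" \
+-H "Content-Type: application/json" \
+--data-ascii "{\"customBaseModelInfo\": {\"projectId\": \"{sourceProjectId}\", \"iterationId\": \"{sourceIterationId}\"}}"
+```
+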
+### New domain information
+
+- The domain information returned from **GetDomains** in the Custom Vision 3.4 public preview API now includes supported exportable platforms, a brief description of model architecture, and the size of the model for compact domains.
+
+### Training divergence feedback
+
+- The Custom Vision 3.4 public preview API now returns **TrainingErrorDetails** from the **GetIteration** call. On failed iterations, this reveals whether the failure was caused by training divergence, which can be remedied with more and higher-quality training data.
+
+## July 2020
+
+### Azure role-based access control
+
+* Custom Vision supports Azure role-based access control (Azure RBAC), an authorization system for managing individual access to Azure resources. To learn how to manage access to your Custom Vision projects, see [Azure role-based access control](./role-based-access-control.md).
+
+### Subset training
+
+* When training an object detection project, you can optionally train on only a subset of your applied tags. You may want to do this if you haven't applied enough of certain tags yet, but you do have enough of others. Follow the [Client library quickstart](./quickstarts/object-detection.md) for C# or Python to learn more.
+
+### Azure storage notifications
+
+* You can integrate your Custom Vision project with an Azure blob storage queue to get push notifications of project training/export activity and backup copies of published models. This feature is useful to avoid continually polling the service for results when long operations are running. Instead, you can integrate the storage queue notifications into your workflow. See the [Storage integration](./storage-integration.md) guide to learn more.
+
+### Copy and move projects
+
+* You can now copy projects from one Custom Vision account into others. You might want to move a project from a development to production environment, or back up a project to an account in a different Azure region for increased data security. See the [Copy and move projects](./copy-move-projects.md) guide to learn more.
+
+## September 2019
+
+### Suggested tags
+
+* The Smart Labeler tool on the [Custom Vision website](https://www.customvision.ai/) generates suggested tags for your training images. This lets you label a large number of images more quickly when training a Custom Vision model. For instructions on how to use this feature, see [Suggested tags](./suggested-tags.md).
+
+## Azure AI services updates
+
+[Azure update announcements for Azure AI services](https://azure.microsoft.com/updates/?product=cognitive-services)
ai-services Diagnostic Logging https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/diagnostic-logging.md
+
+ Title: Diagnostic logging
+
+description: This guide provides step-by-step instructions to enable diagnostic logging for an Azure AI service. These logs provide rich, frequent data about the operation of a resource that are used for issue identification and debugging.
+++++ Last updated : 07/19/2021+++
+# Enable diagnostic logging for Azure AI services
+
+This guide provides step-by-step instructions to enable diagnostic logging for an Azure AI service. These logs provide rich, frequent data about the operation of a resource that are used for issue identification and debugging. Before you continue, you must have an Azure account with a subscription to at least one Azure AI service, such as [Speech Services](./speech-service/overview.md), or [LUIS](./luis/what-is-luis.md).
+
+## Prerequisites
+
+To enable diagnostic logging, you'll need somewhere to store your log data. This tutorial uses Azure Storage and Log Analytics.
+
+* [Azure storage](../azure-monitor/essentials/resource-logs.md#send-to-azure-storage) - Retains diagnostic logs for policy audit, static analysis, or backup. The storage account does not have to be in the same subscription as the resource emitting logs as long as the user who configures the setting has appropriate Azure RBAC access to both subscriptions.
+* [Log Analytics](../azure-monitor/essentials/resource-logs.md#send-to-log-analytics-workspace) - A flexible log search and analytics tool that allows for analysis of raw logs generated by an Azure resource.
+
+> [!NOTE]
+> * Additional configuration options are available. To learn more, see [Collect and consume log data from your Azure resources](../azure-monitor/essentials/platform-logs-overview.md).
+> * "Trace" in diagnostic logging is only available for [Custom question answering](./qnamaker/how-to/get-analytics-knowledge-base.md?tabs=v2).
+
+## Enable diagnostic log collection
+
+Let's start by enabling diagnostic logging using the Azure portal.
+
+> [!NOTE]
+> To enable this feature using PowerShell or the Azure CLI, use the instructions provided in [Collect and consume log data from your Azure resources](../azure-monitor/essentials/platform-logs-overview.md).
+
+1. Navigate to the Azure portal. Then locate and select an Azure AI services resource. For example, your subscription to Speech Services.
+2. Next, from the left-hand navigation menu, locate **Monitoring** and select **Diagnostic settings**. This screen contains all previously created diagnostic settings for this resource.
+3. If there is a previously created resource that you'd like to use, you can select it now. Otherwise, select **+ Add diagnostic setting**.
+4. Enter a name for the setting. Then select **Archive to a storage account** and **Send to Log Analytics**.
+5. When prompted to configure, select the storage account and OMS workspace that you'd like to use to store your diagnostic logs. **Note**: If you don't have a storage account or OMS workspace, follow the prompts to create one.
+6. Select **Audit**, **RequestResponse**, and **AllMetrics**. Then set the retention period for your diagnostic log data. If a retention policy is set to zero, events for that log category are stored indefinitely.
+7. Select **Save**.
+
+It can take up to two hours before logging data is available to query and analyze. So don't worry if you don't see anything right away.
+
+## View and export diagnostic data from Azure Storage
+
+Azure Storage is a robust object storage solution that is optimized for storing large amounts of unstructured data. In this section, you'll learn to query your storage account for total transactions over a 30-day timeframe and export the data to Excel.
+
+1. From the Azure portal, locate the Azure Storage resource that you created in the last section.
+2. From the left-hand navigation menu, locate **Monitoring** and select **Metrics**.
+3. Use the available drop-downs to configure your query. For this example, let's set the time range to **Last 30 days** and the metric to **Transaction**.
+4. When the query is complete, you'll see a visualization of transactions over the last 30 days. To export this data, use the **Export to Excel** button located at the top of the page.
+
+Learn more about what you can do with diagnostic data in [Azure Storage](../storage/blobs/storage-blobs-introduction.md).
+
+## View logs in Log Analytics
+
+Follow these instructions to explore log analytics data for your resource.
+
+1. From the Azure portal, locate and select **Log Analytics** from the left-hand navigation menu.
+2. Locate and select the resource you created when enabling diagnostics.
+3. Under **General**, locate and select **Logs**. From this page, you can run queries against your logs.
+
+### Sample queries
+
+Here are a few basic Kusto queries you can use to explore your log data.
+
+Run this query for all diagnostic logs from Azure AI services for a specified time period:
+
+```kusto
+AzureDiagnostics
+| where ResourceProvider == "MICROSOFT.COGNITIVESERVICES"
+```
+
+Run this query to see the 10 most recent logs:
+
+```kusto
+AzureDiagnostics
+| where ResourceProvider == "MICROSOFT.COGNITIVESERVICES"
+| take 10
+```
+
+Run this query to group operations by **Resource**:
+
+```kusto
+AzureDiagnostics
+| where ResourceProvider == "MICROSOFT.COGNITIVESERVICES"
+| summarize count() by Resource
+```
+Run this query to find the average time it takes to perform an operation:
+
+```kusto
+AzureDiagnostics
+| where ResourceProvider == "MICROSOFT.COGNITIVESERVICES"
+| summarize avg(DurationMs)
+by OperationName
+```
+
+Run this query to view the volume of operations over time, split by OperationName, with counts binned every 10 seconds:
+
+```kusto
+AzureDiagnostics
+| where ResourceProvider == "MICROSOFT.COGNITIVESERVICES"
+| summarize count()
+by bin(TimeGenerated, 10s), OperationName
+| render areachart kind=unstacked
+```
+
+## Next steps
+
+* To understand how to enable logging, and also the metrics and log categories that are supported by the various Azure services, read both the [Overview of metrics](../azure-monitor/data-platform.md) in Microsoft Azure and [Overview of Azure Diagnostic Logs](../azure-monitor/essentials/platform-logs-overview.md) articles.
+* Read these articles to learn about event hubs:
+ * [What is Azure Event Hubs?](../event-hubs/event-hubs-about.md)
+ * [Get started with Event Hubs](../event-hubs/event-hubs-dotnet-standard-getstarted-send.md)
+* Read [Understand log searches in Azure Monitor logs](../azure-monitor/logs/log-query-overview.md).
ai-services Changelog Release History https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/document-intelligence/changelog-release-history.md
+
+ Title: Document Intelligence changelog and release history
+
+description: A version-based description of Document Intelligence feature and capability releases, changes, enhancements, and updates.
+++++ Last updated : 07/18/2023+
+monikerRange: '<=doc-intel-3.0.0'
++
+<!-- markdownlint-disable MD001 -->
+<!-- markdownlint-disable MD033 -->
+<!-- markdownlint-disable MD051 -->
+
+# Changelog and release history
+
+This reference article provides a version-based description of Document Intelligence feature and capability releases, changes, updates, and enhancements.
+
+#### Document Intelligence SDK April 2023 preview release
+
+This release includes the following updates:
+
+### [**C#**](#tab/csharp)
+
+* **Version 4.1.0-beta.1 (2023-04-13)**
+* **Targets 2023-02-28-preview by default**
+* **No breaking changes**
+
+[**Package (NuGet)**](https://www.nuget.org/packages/Azure.AI.FormRecognizer/4.1.0-beta.1)
+
+[**Changelog/Release History**](https://github.com/Azure/azure-sdk-for-net/blob/main/sdk/formrecognizer/Azure.AI.FormRecognizer/CHANGELOG.md#410-beta1-2023-04-13)
+
+[**ReadMe**](https://github.com/Azure/azure-sdk-for-net/blob/main/sdk/formrecognizer/Azure.AI.FormRecognizer/README.md)
+
+[**Samples**](https://github.com/Azure/azure-sdk-for-net/blob/main/sdk/formrecognizer/Azure.AI.FormRecognizer/samples/README.md)
+
+### [**Java**](#tab/java)
+
+* **Version 4.1.0-beta.1 (2023-04-12)**
+* **Targets 2023-02-28-preview by default**
+* **No breaking changes**
+
+[**Package (MVN)**](https://mvnrepository.com/artifact/com.azure/azure-ai-formrecognizer/4.1.0-beta.1)
+
+[**Changelog/Release History**](https://github.com/Azure/azure-sdk-for-jav#410-beta1-2023-04-12)
+
+[**ReadMe**](https://github.com/Azure/azure-sdk-for-jav)
+
+[**Samples**](https://github.com/Azure/azure-sdk-for-java/tree/main/sdk/formrecognizer/azure-ai-formrecognizer/src/samples#readme)
+
+### [**JavaScript**](#tab/javascript)
+
+* **Version 4.1.0-beta.1 (2023-04-11)**
+* **Targets 2023-02-28-preview by default**
+* **No breaking changes**
+
+[**Package (npm)**](https://www.npmjs.com/package/@azure/ai-form-recognizer/v/4.1.0-beta.1)
+
+[**Changelog/Release History**](https://github.com/Azure/azure-sdk-for-js/blob/a162daee4be05eadff0be1caa7fb2071960bbf44/sdk/formrecognizer/ai-form-recognizer/CHANGELOG.md#410-beta1-2023-04-11)
+
+[**ReadMe**](https://github.com/Azure/azure-sdk-for-js/blob/a162daee4be05eadff0be1caa7fb2071960bbf44/sdk/formrecognizer/ai-form-recognizer/README.md)
+
+[**Samples**](https://github.com/Azure/azure-sdk-for-js/tree/a162daee4be05eadff0be1caa7fb2071960bbf44/sdk/formrecognizer/ai-form-recognizer/samples/v4-beta)
+
+### [**Python**](#tab/python)
+
+* **Version 3.3.0b1 (2023-04-13)**
+* **Targets 2023-02-28-preview by default**
+* **No breaking changes**
+
+[**Package (PyPi)**](https://pypi.org/project/azure-ai-formrecognizer/3.3.0b1/)
+
+[**Changelog/Release History**](https://github.com/Azure/azure-sdk-for-python/blob/azure-ai-formrecognizer_3.3.0b1/sdk/formrecognizer/azure-ai-formrecognizer/CHANGELOG.md#330b1-2023-04-13)
+
+[**ReadMe**](https://github.com/Azure/azure-sdk-for-python/blob/azure-ai-formrecognizer_3.3.0b1/sdk/formrecognizer/azure-ai-formrecognizer/README.md)
+
+[**Samples**](https://github.com/Azure/azure-sdk-for-python/tree/azure-ai-formrecognizer_3.3.0b1/sdk/formrecognizer/azure-ai-formrecognizer/samples)
+++
+#### Document Intelligence SDK September 2022 (GA) release
+
+This release includes the following updates:
+
+> [!IMPORTANT]
+> The `DocumentAnalysisClient` and `DocumentModelAdministrationClient` now target API version v3.0 GA, released 2022-08-31. These clients are no longer supported by API versions 2020-06-30-preview or earlier.
+
+### [**C#**](#tab/csharp)
+
+* **Version 4.0.0 GA (2022-09-08)**
+* **Supports REST API v3.0 and v2.0 clients**
+
+[**Package (NuGet)**](https://www.nuget.org/packages/Azure.AI.FormRecognizer/4.0.0)
+
+[**Changelog/Release History**](https://github.com/Azure/azure-sdk-for-net/blob/main/sdk/formrecognizer/Azure.AI.FormRecognizer/CHANGELOG.md)
+
+[**Migration guide**](https://github.com/Azure/azure-sdk-for-net/blob/Azure.AI.FormRecognizer_4.0.0/sdk/formrecognizer/Azure.AI.FormRecognizer/MigrationGuide.md)
+
+[**ReadMe**](https://github.com/Azure/azure-sdk-for-net/blob/Azure.AI.FormRecognizer_4.0.0/sdk/formrecognizer/Azure.AI.FormRecognizer/README.md)
+
+[**Samples**](https://github.com/Azure/azure-sdk-for-net/blob/Azure.AI.FormRecognizer_4.0.0/sdk/formrecognizer/Azure.AI.FormRecognizer/samples/README.md)
+
+### [**Java**](#tab/java)
+
+* **Version 4.0.0 GA (2022-09-08)**
+* **Supports REST API v3.0 and v2.0 clients**
+
+[**Package (Maven)**](https://oss.sonatype.org/#nexus-search;quick~azure-ai-formrecognizer)
+
+[**Changelog/Release History**](https://github.com/Azure/azure-sdk-for-jav)
+
+[**Migration guide**](https://github.com/Azure/azure-sdk-for-jav)
+
+[**ReadMe**](https://github.com/Azure/azure-sdk-for-jav)
+
+[**Samples**](https://github.com/Azure/azure-sdk-for-jav)
+
+### [**JavaScript**](#tab/javascript)
+
+* **Version 4.0.0 GA (2022-09-08)**
+* **Supports REST API v3.0 and v2.0 clients**
+
+[**Package (npm)**](https://www.npmjs.com/package/@azure/ai-form-recognizer)
+
+[**Changelog/Release History**](https://github.com/Azure/azure-sdk-for-js/blob/%40azure/ai-form-recognizer_4.0.0/sdk/formrecognizer/ai-form-recognizer/CHANGELOG.md)
+
+[**Migration guide**](https://github.com/Azure/azure-sdk-for-js/blob/%40azure/ai-form-recognizer_4.0.0/sdk/formrecognizer/ai-form-recognizer/MIGRATION-v3_v4.md)
+
+[**ReadMe**](https://github.com/Azure/azure-sdk-for-js/blob/%40azure/ai-form-recognizer_4.0.0/sdk/formrecognizer/ai-form-recognizer/README.md)
+
+[**Samples**](https://github.com/witemple-msft/azure-sdk-for-js/blob/7e3196f7e529212a6bc329f5f06b0831bf4cc174/sdk/formrecognizer/ai-form-recognizer/samples/v4/javascript/README.md)
+
+### [**Python**](#tab/python)
+
+> [!NOTE]
+> Python 3.7 or later is required to use this package.
+
+* **Version 3.2.0 GA (2022-09-08)**
+* **Supports REST API v3.0 and v2.0 clients**
+
+[**Package (PyPi)**](https://pypi.org/project/azure-ai-formrecognizer/3.2.0/)
+
+[**Changelog/Release History**](https://github.com/Azure/azure-sdk-for-python/blob/azure-ai-formrecognizer_3.2.0/sdk/formrecognizer/azure-ai-formrecognizer/CHANGELOG.md)
+
+[**Migration guide**](https://github.com/Azure/azure-sdk-for-python/blob/azure-ai-formrecognizer_3.2.0/sdk/formrecognizer/azure-ai-formrecognizer/MIGRATION_GUIDE.md)
+
+[**ReadMe**](https://github.com/Azure/azure-sdk-for-python/blob/azure-ai-formrecognizer_3.2.0/sdk/formrecognizer/azure-ai-formrecognizer/README.md)
+
+[**Samples**](https://github.com/Azure/azure-sdk-for-python/blob/azure-ai-formrecognizer_3.2.0/sdk/formrecognizer/azure-ai-formrecognizer/samples/README.md)
+++++++
+#### Document Intelligence SDK beta August 2022 preview release
+
+This release includes the following updates:
+
+### [**C#**](#tab/csharp)
+
+**Version 4.0.0-beta.5 (2022-08-09)**
+**Supports REST API 2022-06-30-preview clients**
+
+[**Changelog/Release History**](https://github.com/Azure/azure-sdk-for-net/blob/main/sdk/formrecognizer/Azure.AI.FormRecognizer/CHANGELOG.md#400-beta5-2022-08-09)
+
+[**Package (NuGet)**](https://www.nuget.org/packages/Azure.AI.FormRecognizer/4.0.0-beta.5)
+
+[**SDK reference documentation**](/dotnet/api/overview/azure/ai.formrecognizer-readme?view=azure-dotnet-preview&preserve-view=true)
+
+### [**Java**](#tab/java)
+
+**Version 4.0.0-beta.6 (2022-08-10)**
+**Supports REST API 2022-06-30-preview and earlier clients**
+
+[**Changelog/Release History**](https://github.com/Azure/azure-sdk-for-jav#400-beta6-2022-08-10)
+
+ [**Package (Maven)**](https://oss.sonatype.org/#nexus-search;quick~azure-ai-formrecognizer)
+
+ [**SDK reference documentation**](/java/api/overview/azure/ai-formrecognizer-readme?view=azure-java-preview&preserve-view=true)
+
+### [**JavaScript**](#tab/javascript)
+
+**Version 4.0.0-beta.6 (2022-08-09)**
+**Supports REST API 2022-06-30-preview and earlier clients**
+
+ [**Changelog/Release History**](https://github.com/Azure/azure-sdk-for-js/blob/%40azure/ai-form-recognizer_4.0.0-beta.6/sdk/formrecognizer/ai-form-recognizer/CHANGELOG.md)
+
+ [**Package (npm)**](https://www.npmjs.com/package/@azure/ai-form-recognizer/v/4.0.0-beta.6)
+
+ [**SDK reference documentation**](/javascript/api/overview/azure/ai-form-recognizer-readme?view=azure-node-preview&preserve-view=true)
+
+### [**Python**](#tab/python)
+
+> [!IMPORTANT]
+> Python 3.6 is no longer supported in this release. Use Python 3.7 or later.
+
+**Version 3.2.0b6 (2022-08-09)**
+**Supports REST API 2022-06-30-preview and earlier clients**
+
+ [**Changelog/Release History**](https://github.com/Azure/azure-sdk-for-python/blob/azure-ai-formrecognizer_3.2.0b6/sdk/formrecognizer/azure-ai-formrecognizer/CHANGELOG.md)
+
+ [**Package (PyPi)**](https://pypi.org/project/azure-ai-formrecognizer/3.2.0b6/)
+
+ [**SDK reference documentation**](https://pypi.org/project/azure-ai-formrecognizer/3.2.0b6/)
+++++++
+### Document Intelligence SDK beta June 2022 preview release
+
+This release includes the following updates:
+
+### [**C#**](#tab/csharp)
+
+**Version 4.0.0-beta.4 (2022-06-08)**
+
+[**Changelog/Release History**](https://github.com/Azure/azure-sdk-for-net/blob/Azure.AI.FormRecognizer_4.0.0-beta.4/sdk/formrecognizer/Azure.AI.FormRecognizer/CHANGELOG.md)
+
+[**Package (NuGet)**](https://www.nuget.org/packages/Azure.AI.FormRecognizer/4.0.0-beta.4)
+
+[**SDK reference documentation**](/dotnet/api/azure.ai.formrecognizer?view=azure-dotnet-preview&preserve-view=true)
+
+### [**Java**](#tab/java)
+
+**Version 4.0.0-beta.5 (2022-06-07)**
+
+[**Changelog/Release History**](https://github.com/Azure/azure-sdk-for-jav)
+
+ [**Package (Maven)**](https://search.maven.org/artifact/com.azure/azure-ai-formrecognizer/4.0.0-beta.5/jar)
+
+ [**SDK reference documentation**](/java/api/overview/azure/ai-formrecognizer-readme?view=azure-java-preview&preserve-view=true)
+
+### [**JavaScript**](#tab/javascript)
+
+**Version 4.0.0-beta.4 (2022-06-07)**
+
+ [**Changelog/Release History**](https://github.com/Azure/azure-sdk-for-js/blob/%40azure/ai-form-recognizer_4.0.0-beta.4/sdk/formrecognizer/ai-form-recognizer/CHANGELOG.md)
+
+ [**Package (npm)**](https://www.npmjs.com/package/@azure/ai-form-recognizer/v/4.0.0-beta.4)
+
+ [**SDK reference documentation**](/javascript/api/@azure/ai-form-recognizer/?view=azure-node-preview&preserve-view=true)
+
+### [**Python**](#tab/python)
+
+**Version 3.2.0b5 (2022-06-07)**
+
+ [**Changelog/Release History**](https://github.com/Azure/azure-sdk-for-python/blob/azure-ai-formrecognizer_3.2.0b5/sdk/formrecognizer/azure-ai-formrecognizer/CHANGELOG.md)
+
+ [**Package (PyPi)**](https://pypi.org/project/azure-ai-formrecognizer/3.2.0b5/)
+
+ [**SDK reference documentation**](/python/api/azure-ai-formrecognizer/azure.ai.formrecognizer?view=azure-python-preview&preserve-view=true)
++++++
ai-services Choose Model Feature https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/document-intelligence/choose-model-feature.md
+
+ Title: Choose the best Document Intelligence model for your applications and workflows
+
+description: Choose the best Document Intelligence model to meet your needs.
+++++ Last updated : 07/18/2023+
+monikerRange: 'doc-intel-3.0.0'
++++++
+# Which model should I choose?
+
+Azure AI Document Intelligence supports a wide variety of models that enable you to add intelligent document processing to your applications and optimize your workflows. Selecting the right model is essential to ensure the success of your enterprise. In this article, we explore the available Document Intelligence models and provide guidance for how to choose the best solution for your projects.
+
+> [!VIDEO https://www.microsoft.com/en-us/videoplayer/embed/RE5fX1b]
+
+The following decision charts highlight the features of each **Document Intelligence v3.0** supported model and help you choose the best model to meet the needs and requirements of your application.
+
+> [!IMPORTANT]
+> Be sure to check the [**language support**](language-support.md) page for supported language text and field extraction by feature.
+
+## Pretrained document-analysis models
+
+| Document type | Example| Data to extract | Your best solution |
+| --|--|--|-|
+|**A generic document**. | A contract or letter. |You want to primarily extract written or printed text lines, words, locations, and detected languages.|[**Read OCR model**](concept-read.md)|
+|**A document that includes structural information**. |A report or study.| In addition to written or printed text, you need to extract structural information like tables, selection marks, paragraphs, titles, headings, and subheadings.| [**Layout analysis model**](concept-layout.md)|
+|**A structured or semi-structured document that includes content formatted as fields and values**.|A form or document that is a standardized format commonly used in your business or industry like a credit application or survey. | You want to extract fields and values including ones not covered by the scenario-specific prebuilt models **without having to train a custom model**.| [**General document model**](concept-general-document.md)|
+
+## Pretrained scenario-specific models
+
+| Document type | Data to extract | Your best solution |
+| --|--|-|
+|**U.S. W-2 tax form**|You want to extract key information such as salary, wages, and taxes withheld.|[**W-2 model**](concept-w2.md)|
+|**Health insurance card** or health insurance ID.| You want to extract key information such as insurer, member ID, prescription coverage, and group number.|[**Health insurance card model**](./concept-insurance-card.md)|
+|**Invoice** or billing statement.|You want to extract key information such as customer name, billing address, and amount due.|[**Invoice model**](concept-invoice.md)|
+|**Receipt**, voucher, or single-page hotel receipt.|You want to extract key information such as merchant name, transaction date, and transaction total.|[**Receipt model**](concept-receipt.md)|
+|**Identity document (ID)** like a U.S. driver's license or international passport. |You want to extract key information such as first name, last name, date of birth, address, and signature. | [**Identity document (ID) model**](concept-id-document.md)|
+|**Business card** or calling card.|You want to extract key information such as first name, last name, company name, email address, and phone number.|[**Business card model**](concept-business-card.md)|
+|**Mixed-type document(s)** with structured, semi-structured, and/or unstructured elements. | You want to extract key-value pairs, selection marks, tables, signature fields, and selected regions not extracted by prebuilt or general document models.| [**Custom model**](concept-custom.md)|
+
+>[!Tip]
+>
+> * If you're still unsure which pretrained model to use, try the **General Document model** to extract key-value pairs.
+> * The General Document model is powered by the Read OCR engine to detect text lines, words, locations, and languages.
+> * General document also extracts the same data as the Layout model (pages, tables, styles).
+
+## Custom extraction models
+
+| Training set | Example documents | Your best solution |
+| --|--|-|
+|**Structured, consistent, documents with a static layout**. |Structured forms such as questionnaires or applications. | [**Custom template model**](./concept-custom-template.md)|
+|**Structured, semi-structured, and unstructured documents**.|&#9679; Structured &rightarrow; surveys</br>&#9679; Semi-structured &rightarrow; invoices</br>&#9679; Unstructured &rightarrow; letters| [**Custom neural model**](concept-custom-neural.md)|
+|**A collection of several models each trained on similar-type documents.** |&#9679; Supply purchase orders</br>&#9679; Equipment purchase orders</br>&#9679; Furniture purchase orders</br> **All composed into a single model**.| [**Composed custom model**](concept-composed-models.md)|
+
+## Next steps
+
+* [Learn how to process your own forms and documents](quickstarts/try-document-intelligence-studio.md) with the [Document Intelligence Studio](https://formrecognizer.appliedai.azure.com/studio)
ai-services Concept Accuracy Confidence https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/document-intelligence/concept-accuracy-confidence.md
+
+ Title: Interpret and improve model accuracy and analysis confidence scores
+
+description: Best practices to interpret the accuracy score from the train model operation and the confidence score from analysis operations.
+++++ Last updated : 07/18/2023+
+monikerRange: '<=doc-intel-3.0.0'
+++++
+# Custom models: accuracy and confidence scores
++
+> [!NOTE]
+>
+> * **Custom neural models do not provide accuracy scores during training**.
+> * Confidence scores for structured fields such as tables are currently unavailable.
+
+Custom models generate an estimated accuracy score when trained. Documents analyzed with a custom model produce a confidence score for extracted fields. In this article, learn to interpret accuracy and confidence scores and best practices for using those scores to improve accuracy and confidence results.
+
+## Accuracy scores
+
+The output of a `build` (v3.0) or `train` (v2.1) custom model operation includes the estimated accuracy score. This score represents the model's ability to accurately predict the labeled value on a visually similar document.
+The accuracy value range is a percentage between 0% (low) and 100% (high). The estimated accuracy is calculated by running a few different combinations of the training data to predict the labeled values.
+
+**Document Intelligence Studio** </br>
+**Trained custom model (invoice)**
++
+## Confidence scores
+
+Document Intelligence analysis results return an estimated confidence for predicted words, key-value pairs, selection marks, regions, and signatures. Currently, not all document fields return a confidence score.
+
+Field confidence indicates an estimated probability between 0 and 1 that the prediction is correct. For example, a confidence value of 0.95 (95%) indicates that the prediction is likely correct 19 out of 20 times. For scenarios where accuracy is critical, confidence may be used to determine whether to automatically accept the prediction or flag it for human review.
+
+Confidence scores have two data points: the field-level confidence score and the text extraction confidence score. In addition to the field-level confidence, which is reported with the field's position and span, the text extraction confidence in the ```pages``` section of the response is the model's confidence in the text extraction (OCR) process. The two confidence scores should be combined to generate one overall confidence score.
+
+**Document Intelligence Studio** </br>
+**Analyzed invoice prebuilt-invoice model**
++
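+A minimal sketch of one way to combine the two scores, assuming you're working with the raw JSON analyze response (fields expose `confidence` and `spans`; words under `pages` expose `confidence` and `span`). The multiplication rule is an illustrative choice, not a prescribed formula:
+
+```python
+# Illustrative only: multiply the field-level confidence by the OCR confidence of each
+# word whose span overlaps the field's spans. Field and page objects are plain dicts
+# taken from the analyze response JSON.
+def _overlaps(a, b):
+    return a["offset"] < b["offset"] + b["length"] and b["offset"] < a["offset"] + a["length"]
+
+def combined_confidence(field, pages):
+    score = field["confidence"]
+    for page in pages:
+        for word in page.get("words", []):
+            if any(_overlaps(word["span"], span) for span in field.get("spans", [])):
+                score *= word["confidence"]
+    return score
+```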
+## Interpret accuracy and confidence scores
+
+The following table demonstrates how to interpret both the accuracy and confidence scores to measure your custom model's performance.
+
+| Accuracy | Confidence | Result |
+|--|--|--|
+| High| High | <ul><li>The model is performing well with the labeled keys and document formats. </li><li>You have a balanced training dataset</li></ul> |
+| High | Low | <ul><li>The analyzed document appears different from the training dataset.</li><li>The model would benefit from retraining with at least five more labeled documents. </li><li>These results could also indicate a format variation between the training dataset and the analyzed document. </br>Consider adding a new model.</li></ul> |
+| Low | High | <ul><li>This result is most unlikely.</li><li>For low accuracy scores, add more labeled data or split visually distinct documents into multiple models.</li></ul> |
+| Low | Low| <ul><li>Add more labeled data.</li><li>Split visually distinct documents into multiple models.</li></ul>|
+
+## Ensure high model accuracy
+
+Variances in the visual structure of your documents affect the accuracy of your model. Reported accuracy scores can be inconsistent when the analyzed documents differ from documents used in training. Keep in mind that a document set can look similar when viewed by humans but appear dissimilar to an AI model. The following list outlines the best practices for training models with the highest accuracy. Following these guidelines should produce a model with higher accuracy and confidence scores during analysis and reduce the number of documents flagged for human review.
+
+* Ensure that all variations of a document are included in the training dataset. Variations include different formats, for example, digital versus scanned PDFs.
+
+* If you expect the model to analyze both types of PDF documents, add at least five samples of each type to the training dataset.
+
+* Separate visually distinct document types to train different models.
+ * As a general rule, if you remove all user-entered values and the documents look similar, you need to add more training data to the existing model.
+ * If the documents are dissimilar, split your training data into different folders and train a model for each variation. You can then [compose](how-to-guides/compose-custom-models.md?view=doc-intel-2.1.0&preserve-view=true#create-a-composed-model) the different variations into a single model.
+
+* Make sure that you don't have any extraneous labels.
+
+* For signature and region labeling, don't include the surrounding text.
+
+## Next step
+
+> [!div class="nextstepaction"]
+> [Learn to create custom models](quickstarts/try-document-intelligence-studio.md#custom-models)
ai-services Concept Add On Capabilities https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/document-intelligence/concept-add-on-capabilities.md
+
+ Title: Add-on capabilities - Document Intelligence
+
+description: How to increase service limit capacity with add-on capabilities.
+++++ Last updated : 07/18/2023+
+monikerRange: 'doc-intel-3.0.0'
+++++
+<!-- markdownlint-disable MD033 -->
+
+# Document Intelligence add-on capabilities (preview)
+
+**This article applies to:** ![Document Intelligence checkmark](medi) supported by Document Intelligence REST API version [2023-02-28-preview](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-2023-02-28-preview/operations/AnalyzeDocument)**.
+
+> [!NOTE]
+>
+> Add-on capabilities for Document Intelligence Studio are only available within the Read and Layout models for the `2023-02-28-preview` release.
+
+Document Intelligence now supports more sophisticated analysis capabilities. These optional capabilities can be enabled and disabled depending on the document extraction scenario. Three add-on capabilities are available for the `2023-02-28-preview` release; a request sketch that enables them follows this list:
+
+* [`ocr.highResolution`](#high-resolution-extraction)
+
+* [`ocr.formula`](#formula-extraction)
+
+* [`ocr.font`](#font-property-extraction)
+
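+The sketch below shows one way to request these capabilities over REST from Python. It assumes the analyze operation accepts a comma-separated `features` query parameter and that the feature identifiers are `ocrHighResolution`, `formulas`, and `styleFont`; check the preview API reference for the exact names. The endpoint, key, and file name are placeholders.
+
+```python
+# Hedged sketch: enable add-on capabilities on an analyze request via the assumed
+# `features` query parameter. Endpoint, key, and file are placeholders.
+import requests
+
+endpoint = "https://<your-resource>.cognitiveservices.azure.com"
+key = "<your-key>"
+url = (f"{endpoint}/formrecognizer/documentModels/prebuilt-layout:analyze"
+       "?api-version=2023-02-28-preview&features=ocrHighResolution,formulas,styleFont")
+
+with open("engineering-drawing.pdf", "rb") as f:
+    response = requests.post(
+        url,
+        headers={"Ocp-Apim-Subscription-Key": key, "Content-Type": "application/pdf"},
+        data=f.read(),
+    )
+
+# The analyze call is asynchronous; poll the Operation-Location URL for the result.
+print(response.headers["Operation-Location"])
+```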
+## High resolution extraction
+
+The task of recognizing small text from large-size documents, like engineering drawings, is a challenge. Often the text is mixed with other graphical elements and has varying fonts, sizes and orientations. Moreover, the text may be broken into separate parts or connected with other symbols. Document Intelligence now supports extracting content from these types of documents with the `ocr.highResolution` capability. You get improved quality of content extraction from A1/A2/A3 documents by enabling this add-on capability.
+
+## Formula extraction
+
+The `ocr.formula` capability extracts all identified formulas, such as mathematical equations, in the `formulas` collection as a top level object under `content`. Inside `content`, detected formulas are represented as `:formula:`. Each entry in this collection represents a formula that includes the formula type as `inline` or `display`, and its LaTeX representation as `value` along with its `polygon` coordinates. Initially, formulas appear at the end of each page.
+
+ > [!NOTE]
+ > The `confidence` score is hard-coded for the `2023-02-28` public preview release.
+
+ ```json
+ "content": ":formula:",
+ "pages": [
+ {
+ "pageNumber": 1,
+ "formulas": [
+ {
+ "kind": "inline",
+ "value": "\\frac { \\partial a } { \\partial b }",
+ "polygon": [...],
+ "span": {...},
+ "confidence": 0.99
+ },
+ {
+ "kind": "display",
+ "value": "y = a \\times b + a \\times c",
+ "polygon": [...],
+ "span": {...},
+ "confidence": 0.99
+ }
+ ]
+ }
+ ]
+ ```
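+
+A minimal sketch that walks the response shape shown above (as plain JSON) and prints each detected formula with its kind and LaTeX value:
+
+```python
+# Minimal sketch: list formulas per page from the analyze result JSON shown above.
+def print_formulas(result):
+    for page in result.get("pages", []):
+        for formula in page.get("formulas", []):
+            print(f'page {page["pageNumber"]}: [{formula["kind"]}] {formula["value"]}')
+```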
+
+## Font property extraction
+
+The `ocr.font` capability extracts all font properties of text extracted in the `styles` collection as a top-level object under `content`. Each style object specifies a single font property, the text span it applies to, and its corresponding confidence score. The existing style property is extended with more font properties such as `similarFontFamily` for the font of the text, `fontStyle` for styles such as italic and normal, `fontWeight` for bold or normal, `color` for color of the text, and `backgroundColor` for color of the text bounding box.
+
+ ```json
+ "content": "Foo bar",
+ "styles": [
+ {
+ "similarFontFamily": "Arial, sans-serif",
+ "spans": [ { "offset": 0, "length": 3 } ],
+ "confidence": 0.98
+ },
+ {
+ "similarFontFamily": "Times New Roman, serif",
+ "spans": [ { "offset": 4, "length": 3 } ],
+ "confidence": 0.98
+ },
+ {
+ "fontStyle": "italic",
+ "spans": [ { "offset": 1, "length": 2 } ],
+ "confidence": 0.98
+ },
+ {
+ "fontWeight": "bold",
+ "spans": [ { "offset": 2, "length": 3 } ],
+ "confidence": 0.98
+ },
+ {
+ "color": "#FF0000",
+ "spans": [ { "offset": 4, "length": 2 } ],
+ "confidence": 0.98
+ },
+ {
+ "backgroundColor": "#00FF00",
+ "spans": [ { "offset": 5, "length": 2 } ],
+ "confidence": 0.98
+ }
+ ]
+ ```
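+
+A minimal sketch that resolves each style object in the response above back to the text it applies to, using its span offsets into the top-level `content` string:
+
+```python
+# Minimal sketch: map each font/style property to the text it covers.
+def styled_text(result):
+    content = result["content"]
+    for style in result.get("styles", []):
+        text = "".join(content[s["offset"]:s["offset"] + s["length"]] for s in style["spans"])
+        properties = {k: v for k, v in style.items() if k not in ("spans", "confidence")}
+        print(properties, "->", repr(text))
+```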
+
+## Next steps
+
+> [!div class="nextstepaction"]
+> Learn more:
+> [**Read model**](concept-read.md) [**Layout model**](concept-layout.md).
ai-services Concept Analyze Document Response https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/document-intelligence/concept-analyze-document-response.md
+
+ Title: Document Intelligence APIs analyze document response
+
+description: Description of the different objects returned as part of the analyze document response and how to use the document analysis response in your applications.
+++++ Last updated : 07/18/2023++
+monikerRange: 'doc-intel-3.0.0'
++++
+# Analyze document API response
+
+In this article, let's examine the different objects returned as part of the analyze document response and how to use the document analysis API response in your applications.
+
+## Analyze document request
+
+The Document Intelligence APIs analyze images, PDFs, and other document files to extract and detect various content, layout, style, and semantic elements. The analyze operation is an async API. Submitting a document returns an **Operation-Location** header that contains the URL to poll for completion. When an analysis request completes successfully, the response contains the elements described in the [model data extraction](concept-model-overview.md#model-data-extraction).
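+
+The following is a hedged sketch of that request/poll pattern using plain REST calls from Python; the endpoint, key, model ID, and file name are placeholders, and the `api-version` should match the release you're targeting.
+
+```python
+# Hedged sketch: submit a document for analysis, then poll the Operation-Location URL.
+import time
+import requests
+
+endpoint, key = "https://<your-resource>.cognitiveservices.azure.com", "<your-key>"
+url = f"{endpoint}/formrecognizer/documentModels/prebuilt-layout:analyze?api-version=2022-08-31"
+
+with open("sample.pdf", "rb") as f:
+    submit = requests.post(url, data=f.read(),
+                           headers={"Ocp-Apim-Subscription-Key": key,
+                                    "Content-Type": "application/pdf"})
+operation_url = submit.headers["Operation-Location"]
+
+while True:
+    status = requests.get(operation_url, headers={"Ocp-Apim-Subscription-Key": key}).json()
+    if status["status"] in ("succeeded", "failed"):
+        break
+    time.sleep(2)
+
+analyze_result = status.get("analyzeResult")  # pages, paragraphs, styles, documents, ...
+```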
+
+### Response elements
+
+* Content elements are the basic text elements extracted from the document.
+
+* Layout elements group content elements into structural units.
+
+* Style elements describe the font and language of content elements.
+
+* Semantic elements assign meaning to the specified content elements.
+
+All content elements are grouped according to pages, specified by page number (`1`-indexed). They're also sorted in reading order, which arranges semantically contiguous elements together, even if they cross line or column boundaries. When the reading order among paragraphs and other layout elements is ambiguous, the service generally returns the content in a left-to-right, top-to-bottom order.
+
+> [!NOTE]
+> Currently, Document Intelligence does not support reading order across page boundaries. Selection marks are not positioned within the surrounding words.
+
+The top-level content property contains a concatenation of all content elements in reading order. All elements specify their position in this reading order via spans within the content string. The content of some elements may not be contiguous.
+
+## Analyze response
+
+The analyze response for each API returns different objects. API responses contain elements from component models where applicable.
+
+| Response content | Description | API |
+|--|--|--|
+| **pages**| Words, lines and spans recognized from each page of the input document. | Read, Layout, General Document, Prebuilt, and Custom models|
+| **paragraphs**| Content recognized as paragraphs. | Read, Layout, General Document, Prebuilt, and Custom models|
+| **styles**| Identified text element properties. | Read, Layout, General Document, Prebuilt, and Custom models|
+| **languages**| Identified language associated with each span of the text extracted | Read |
+| **tables**| Tabular content identified and extracted from the document. Tables relate to tables identified by the pretrained layout model. Content labeled as tables is extracted as structured fields in the documents object. | Layout, General Document, Invoice, and Custom models |
+| **keyValuePairs**| Key-value pairs recognized by a pretrained model. The key is a span of text from the document with the associated value. | General document and Invoice models |
+| **documents**| Fields recognized are returned in the ```fields``` dictionary within the list of documents| Prebuilt models, Custom models|
+
+For more information on the objects returned by each API, see [model data extraction](concept-model-overview.md#model-data-extraction).
+
+## Element properties
+
+### Spans
+
+Spans specify the logical position of each element in the overall reading order, with each span specifying a character offset and length into the top-level content string property. By default, character offsets and lengths are returned in units of user-perceived characters (also known as [`grapheme clusters`](/dotnet/standard/base-types/character-encoding-introduction) or text elements). To accommodate development environments that use different character units, you can specify the `stringIndexType` query parameter to return span offsets and lengths in Unicode code points (Python 3) or UTF-16 code units (Java, JavaScript, .NET) instead. For more information, *see* [multilingual/emoji support](../../ai-services/language-service/concepts/multilingual-emoji-support.md).
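+
+A minimal sketch of resolving a span back to text, assuming you requested offsets in Unicode code points so that Python string indexing lines up with the returned spans:
+
+```python
+# Minimal sketch: slice an element's text out of the top-level content string.
+def span_text(content, spans):
+    return "".join(content[s["offset"]:s["offset"] + s["length"]] for s in spans)
+
+# Example: text of the first paragraph in the analyze result.
+# span_text(analyze_result["content"], analyze_result["paragraphs"][0]["spans"])
+```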
++
+### Bounding Region
+
+Bounding regions describe the visual position of each element in the file. Since elements may not be visually contiguous or may cross pages (tables), the positions of most elements are described via an array of bounding regions. Each region specifies the page number (`1`-indexed) and bounding polygon. The bounding polygon is described as a sequence of points, clockwise from the left relative to the natural orientation of the element. For quadrilaterals, plot points are the top-left, top-right, bottom-right, and bottom-left corners. Each point represents its x, y coordinate in the page unit specified by the unit property. In general, the unit of measure for images is pixels, while PDFs use inches.
++
+> [!NOTE]
+> Currently, Document Intelligence only returns 4-vertex quadrilaterals as bounding polygons. Future versions may return a different number of points to describe more complex shapes, such as curved lines or non-rectangular images. Bounding regions apply only to rendered files; if the file isn't rendered, bounding regions aren't returned. Currently, files in docx/xlsx/pptx/html format aren't rendered.
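+
+A minimal sketch for working with the returned polygons, which are flat lists of alternating x and y values (see the `polygon` arrays in the JSON examples later in this article):
+
+```python
+# Minimal sketch: pair up a flat polygon array [x1, y1, x2, y2, ...] into corner points.
+def polygon_points(polygon):
+    return list(zip(polygon[0::2], polygon[1::2]))
+
+# polygon_points([7.38, 7.46, 7.91, 7.46, 7.91, 7.61, 7.38, 7.61])
+# -> [(7.38, 7.46), (7.91, 7.46), (7.91, 7.61), (7.38, 7.61)]
+```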
+
+### Content elements
+
+#### Word
+
+A word is a content element composed of a sequence of characters. With Document Intelligence, a word is defined as a sequence of adjacent characters, with whitespace separating words from one another. For languages that don't use space separators between words, each character is returned as a separate word, even if it doesn't represent a semantic word unit.
++
+#### Selection marks
+
+A selection mark is a content element that represents a visual glyph indicating the state of a selection. A checkbox is a common form of selection mark. However, selection marks may also be represented via radio buttons or a boxed cell in a visual form. The state of a selection mark may be selected or unselected, with different visual representations indicating the state.
++
+### Layout elements
+
+#### Line
+
+A line is an ordered sequence of consecutive content elements separated by a visual space, or ones that are immediately adjacent for languages without space delimiters between words. Content elements in the same horizontal plane (row) but separated by more than a single visual space are most often split into multiple lines. While this feature sometimes splits semantically contiguous content into separate lines, it enables the representation of textual content split into multiple columns or cells. Lines in vertical writing are detected in the vertical direction.
++
+#### Paragraph
+
+A paragraph is an ordered sequence of lines that form a logical unit. Typically, the lines share common alignment and spacing between lines. Paragraphs are often delimited via indentation, added spacing, or bullets/numbering. Content can only be assigned to a single paragraph.
+Select paragraphs may also be associated with a functional role in the document. Currently supported roles include page header, page footer, page number, title, section heading, and footnote.
++
+#### Page
+
+A page is a grouping of content that typically corresponds to one side of a sheet of paper. A rendered page is characterized via width and height in the specified unit. In general, images use pixels while PDFs use inches. The angle property describes the overall text angle in degrees for pages that may be rotated.
+
+> [!NOTE]
+> For spreadsheets like Excel, each sheet is mapped to a page. For presentations, like PowerPoint, each slide is mapped to a page. For file formats like HTML or Word documents that have no native concept of pages until they're rendered, the main content of the file is considered a single page.
+
+#### Table
+
+A table organizes content into a group of cells in a grid layout. The rows and columns may be visually separated by grid lines, color banding, or greater spacing. The position of a table cell is specified via its row and column indices. A cell may span across multiple rows and columns.
+
+Based on its position and styling, a cell may be classified as general content, row header, column header, stub head, or description:
+
+* A row header cell is typically the first cell in a row that describes the other cells in the row.
+
+* A column header cell is typically the first cell in a column that describes the other cells in a column.
+
+* A row or column may contain multiple header cells to describe hierarchical content.
+
+* A stub head cell is typically the cell in the first row and first column position. It may be empty or describe the values in the header cells in the same row/column.
+
+* A description cell generally appears at the topmost or bottom area of a table, describing the overall table content. However, it may sometimes appear in the middle of a table to break the table into sections. Typically, description cells span across multiple cells in a single row.
+
+* A table caption specifies content that explains the table. A table may further have an associated caption and a set of footnotes. Unlike a description cell, a caption typically lies outside the grid layout. A table footnote annotates content inside the table, often marked with a footnote symbol. It's often found below the table grid.
+
+**Layout tables differ from document fields extracted from tabular data**. Layout tables are extracted from tabular visual content in the document without considering the semantics of the content. In fact, some layout tables are designed purely for visual layout and may not always contain structured data. The method to extract structured data from documents with diverse visual layout, like itemized details of a receipt, generally requires significant post processing. It's essential to map the row or column headers to structured fields with normalized field names. Depending on the document type, use prebuilt models or train a custom model to extract such structured content. The resulting information is exposed as document fields. Such trained models can also handle tabular data without headers and structured data in nontabular forms, for example the work experience section of a resume.
++
+#### Form field (key value pair)
+
+A form field consists of a field label (key) and value. The field label is generally a descriptive text string describing the meaning of the field. It often appears to the left of the value, though it can also appear over or under the value. The field value contains the content value of a specific field instance. The value may consist of words, selection marks, and other content elements. It may also be empty for unfilled form fields. A special type of form field has a selection mark value with the field label to its right.
+A document field is a similar but distinct concept from a general form field. The field label (key) in a general form field must appear in the document, so it can't generally capture information like the merchant name in a receipt. Document fields are labeled and don't extract a key; they only map an extracted value to a labeled key. For more information, *see* document fields.
++
+### Style elements
+
+#### Style
+
+A style element describes the font style to apply to text content. The content is specified via spans into the global content property. Currently, the only detected font style is whether the text is handwritten. As other styles are added, text may be described via multiple nonconflicting style objects. For compactness, all text sharing the particular font style (with the same confidence) are described via a single style object.
++
+```json
+
+{
+ "confidence": 1,
+ "spans": [
+ {
+ "offset": 2402,
+ "length": 7
+ }
+ ],
+ "isHandwritten": true
+}
+```
+
+#### Language
+
+A language element describes the detected language for content specified via spans into the global content property. The detected language is specified via a [BCP-47 language tag](https://en.wikipedia.org/wiki/IETF_language_tag) to indicate the primary language and optional script and region information. For example, English and traditional Chinese are recognized as "en" and *zh-Hant*, respectively. Regional spelling differences for UK English may lead the text to be detected as *en-GB*. Language elements don't cover text without a dominant language (ex. numbers).
+
+### Semantic elements
+
+#### Document
+
+A document is a semantically complete unit. A file may contain multiple documents, such as multiple tax forms within a PDF file, or multiple receipts within a single page. However, the ordering of documents within the file doesn't fundamentally affect the information it conveys.
+
+> [!NOTE]
+> Currently, Document Intelligence does not support multiple documents on a single page.
+
+The document type describes documents sharing a common set of semantic fields, represented by a structured schema, independent of its visual template or layout. For example, all documents of type "receipt" may contain the merchant name, transaction date, and transaction total, although restaurant and hotel receipts often differ in appearance.
+
+A document element includes the list of recognized fields from among the fields specified by the semantic schema of the detected document type. A document field may be extracted or inferred. Extracted fields are represented via the extracted content and optionally its normalized value, if interpretable. Inferred fields don't have a content property and are represented only via their value. Array fields don't include a content property, as the content can be concatenated from the content of the array elements. Object fields do contain a content property that specifies the full content representing the object, which may be a superset of the extracted subfields.
+
+The semantic schema of a document type is described via the fields it may contain. Each field schema is specified via its canonical name and value type. Field value types include basic (ex. string), compound (ex. address), and structured (ex. array, object) types. The field value type also specifies the semantic normalization performed to convert detected content into a normalized representation. Normalization may be locale dependent.
+
+#### Basic types
+
+| Field value type| Description | Normalized representation | Example (Field content -> Value) |
+|--|--|--|--|
+| string | Plain text | Same as content | MerchantName: "Contoso" → "Contoso" |
+| date | Date | ISO 8601 - YYYY-MM-DD | InvoiceDate: "5/7/2022" → "2022-05-07" |
+| time | Time | ISO 8601 - hh:mm:ss | TransactionTime: "9:45 PM" → "21:45:00" |
+| phoneNumber | Phone number | E.164 - +{CountryCode}{SubscriberNumber} | WorkPhone: "(800) 555-7676" → "+18005557676"|
+| countryRegion | Country/region | ISO 3166-1 alpha-3 | CountryRegion: "United States" → "USA" |
+| selectionMark | Is selected | "selected" or "unselected" | AcceptEula: ☑ → "selected" |
+| signature | Is signed | "signed" or "unsigned" | LendeeSignature: {signature} → "signed" |
+| number | Floating point number | Floating point number | Quantity: "1.20" → 1.2|
+| integer | Integer number | 64-bit signed number | Count: "123" → 123 |
+| boolean | Boolean value | true/false | IsStatutoryEmployee: ☑ → true |
+
+#### Compound types
+
+* Currency: Currency amount with an optional currency unit. For example: ```InvoiceTotal: $123.45```
+
+ ```json
+ {
+ "amount": 123.45,
+ "currencySymbol": "$"
+ }
+ ```
+
+* Address: Parsed address. For example: ```ShipToAddress: 123 Main St., Redmond, WA 98052```
+
+```json
+ {
+ "poBox": "PO Box 12",
+ "houseNumber": "123",
+ "streetName": "Main St.",
+ "city": "Redmond",
+ "state": "WA",
+ "postalCode": "98052",
+ "countryRegion": "USA",
+ "streetAddress": "123 Main St."
+ }
+
+```
+
+#### Structured types
+
+* Array: List of fields of the same type
+
+```json
+"Items": {
+ "type": "array",
+ "valueArray": [
+
+ ]
+}
+
+```
+
+* Object: Named list of subfields of potentially different types
+
+```json
+"InvoiceTotal": {
+ "type": "currency",
+ "valueCurrency": {
+ "currencySymbol": "$",
+ "amount": 110
+ },
+ "content": "$110.00",
+ "boundingRegions": [
+ {
+ "pageNumber": 1,
+ "polygon": [
+ 7.3842,
+ 7.465,
+ 7.9181,
+ 7.465,
+ 7.9181,
+ 7.6089,
+ 7.3842,
+ 7.6089
+ ]
+ }
+ ],
+ "confidence": 0.945,
+ "spans": [
+ {
+ "offset": 806,
+ "length": 7
+ }
+ ]
+}
+```
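+
+A minimal sketch that reads a field from the `documents` list and prefers its normalized `value*` property (such as `valueCurrency` or `valueArray` above) over the raw `content`; the field names used are illustrative:
+
+```python
+# Minimal sketch: return the normalized value of a document field when present,
+# falling back to its raw content. Field objects are plain dicts from the response.
+def field_value(field):
+    value_key = "value" + field["type"][0].upper() + field["type"][1:]  # e.g. "valueCurrency"
+    return field.get(value_key, field.get("content"))
+
+# invoice = analyze_result["documents"][0]
+# field_value(invoice["fields"]["InvoiceTotal"])  # {"currencySymbol": "$", "amount": 110}
+```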
+
+## Next steps
+
+* Try processing your own forms and documents with [Document Intelligence Studio](https://formrecognizer.appliedai.azure.com/studio)
+
+* Complete a [Document Intelligence quickstart](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-3.0.0&preserve-view=true) and get started creating a document processing app in the development language of your choice.
ai-services Concept Business Card https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/document-intelligence/concept-business-card.md
+
+ Title: Business card data extraction - Document Intelligence
+
+description: OCR and machine learning based business card scanning in Document Intelligence extracts key data from business cards.
+++++ Last updated : 07/18/2023+
+monikerRange: '<=doc-intel-3.0.0'
++
+<!-- markdownlint-disable MD033 -->
+
+# Document Intelligence business card model
+++
+The Document Intelligence business card model combines powerful Optical Character Recognition (OCR) capabilities with deep learning models to analyze and extract data from business card images. The API analyzes printed business cards; extracts key information such as first name, last name, company name, email address, and phone number; and returns a structured JSON data representation.
+
+## Business card data extraction
+
+Business cards are a great way to represent a business or a professional. The company logo, fonts, and background images found in business cards help promote the company branding and differentiate it from others. Applying OCR and machine-learning based techniques to automate scanning of business cards is a common image processing scenario. Enterprise systems used by sales and marketing teams typically integrate business card data extraction capabilities for the benefit of their users.
+
+***Sample business card processed with [Document Intelligence Studio](https://formrecognizer.appliedai.azure.com/studio/prebuilt?formType=businessCard)***
++++
+***Sample business processed with [Document Intelligence Sample Labeling tool](https://fott-2-1.azurewebsites.net/)***
+++
+## Development options
++
+Document Intelligence v3.0 supports the following tools:
+
+| Feature | Resources | Model ID |
+|-|-|--|
+|**Business card model**| <ul><li>[**Document Intelligence Studio**](https://formrecognizer.appliedai.azure.com)</li><li>[**REST API**](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-2022-08-31/operations/AnalyzeDocument)</li><li>[**C# SDK**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-3.0.0&preserve-view=true)</li><li>[**Python SDK**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-3.0.0&preserve-view=true)</li><li>[**Java SDK**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-3.0.0&preserve-view=true)</li><li>[**JavaScript SDK**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-3.0.0&preserve-view=true)</li></ul>|**prebuilt-businessCard**|
+++
+Document Intelligence v2.1 supports the following tools:
+
+| Feature | Resources |
+|-|-|
+|**Business card model**| <ul><li>[**Document Intelligence labeling tool**](https://fott-2-1.azurewebsites.net/prebuilts-analyze)</li><li>[**REST API**](./how-to-guides/use-sdk-rest-api.md?pivots=programming-language-rest-api&preserve-view=true&tabs=windows&view=doc-intel-2.1.0#analyze-business-cards)</li><li>[**Client-library SDK**](~/articles/ai-services/document-intelligence/how-to-guides/use-sdk-rest-api.md?view=doc-intel-2.1.0&preserve-view=true)</li><li>[**Document Intelligence Docker container**](containers/install-run.md?tabs=business-card#run-the-container-with-the-docker-compose-up-command)</li></ul>|
++
+### Try business card data extraction
+
+See how data, including name, job title, address, email, and company name, is extracted from business cards. You need the following resources:
+
+* An Azure subscription. You can [create one for free](https://azure.microsoft.com/free/cognitive-services/)
+
+* A [Form Recognizer instance (Document Intelligence forthcoming)](https://portal.azure.com/#create/Microsoft.CognitiveServicesFormRecognizer) in the Azure portal. You can use the free pricing tier (`F0`) to try the service. After your resource deploys, select **Go to resource** to get your key and endpoint.
+
+ :::image type="content" source="media/containers/keys-and-endpoint.png" alt-text="Screenshot: keys and endpoint location in the Azure portal.":::
++
+#### Document Intelligence Studio
+
+> [!NOTE]
+> Document Intelligence Studio is available with the v3.0 API.
+
+1. On the Document Intelligence Studio home page, select **Business cards**.
+
+1. You can analyze the sample business card or select the **+ Add** button to upload your own sample.
+
+1. Select the **Analyze** button:
+
+ :::image type="content" source="media/studio/business-card-analyze.png" alt-text="Screenshot: analyze business card menu.":::
+
+ > [!div class="nextstepaction"]
+ > [Try Document Intelligence Studio](https://formrecognizer.appliedai.azure.com/studio/prebuilt?formType=businessCard)
+++
+## Document Intelligence Sample Labeling tool
+
+1. Navigate to the [Document Intelligence Sample Tool](https://fott-2-1.azurewebsites.net/).
+
+1. On the sample tool home page, select the **Use prebuilt model to get data** tile.
+
+ :::image type="content" source="media/label-tool/prebuilt-1.jpg" alt-text="Screenshot of the layout model analyze results operation.":::
+
+1. Select the **Form Type** to analyze from the dropdown menu.
+
+1. Choose a URL for the file you would like to analyze from the below options:
+
+ * [**Sample invoice document**](https://raw.githubusercontent.com/Azure-Samples/cognitive-services-REST-api-samples/master/curl/form-recognizer/invoice_sample.jpg).
+ * [**Sample ID document**](https://raw.githubusercontent.com/Azure-Samples/cognitive-services-REST-api-samples/master/curl/form-recognizer/DriverLicense.png).
+ * [**Sample receipt image**](https://raw.githubusercontent.com/Azure-Samples/cognitive-services-REST-api-samples/master/curl/form-recognizer/contoso-allinone.jpg).
+ * [**Sample business card image**](https://raw.githubusercontent.com/Azure/azure-sdk-for-python/master/sdk/formrecognizer/azure-ai-formrecognizer/samples/sample_forms/business_cards/business-card-english.jpg).
+
+1. In the **Source** field, select **URL** from the dropdown menu, paste the selected URL, and select the **Fetch** button.
+
+ :::image type="content" source="media/label-tool/fott-select-url.png" alt-text="Screenshot of source location dropdown menu.":::
+
+1. In the **Document Intelligence service endpoint** field, paste the endpoint that you obtained with your Document Intelligence subscription.
+
+1. In the **key** field, paste the key you obtained from your Document Intelligence resource.
+
+ :::image type="content" source="media/fott-select-form-type.png" alt-text="Screenshot of the select-form-type dropdown menu.":::
+
+1. Select **Run analysis**. The Document Intelligence Sample Labeling tool calls the Analyze Prebuilt API and analyzes the document.
+
+1. View the results - see the key-value pairs extracted, line items, highlighted text extracted and tables detected.
+
+ :::image type="content" source="media/business-card-results.png" alt-text="Screenshot of the business card model analyze results operation.":::
+
+> [!NOTE]
+> The [Sample Labeling tool](https://fott-2-1.azurewebsites.net/) does not support the BMP file format. This is a limitation of the tool, not the Document Intelligence service.
++
+## Input requirements
+++++
+* Supported file formats: JPEG, PNG, PDF, and TIFF
+* For PDF and TIFF, up to 2000 pages are processed. For free tier subscribers, only the first two pages are processed.
+* The file size must be less than 50 MB and dimensions at least 50 x 50 pixels and at most 10,000 x 10,000 pixels.
+++
+## Supported languages and locales
+
+>[!NOTE]
+> It's not necessary to specify a locale. This is an optional parameter. The Document Intelligence deep-learning technology will auto-detect the language of the text in your image.
+
+| Model | Language - Locale code | Default |
+|--|:-|:|
+|Business card (v3.0 API)| <ul><li>English (United States) - en-US</li><li>English (Australia) - en-AU</li><li>English (Canada) - en-CA</li><li>English (United Kingdom) - en-GB</li><li>English (India) - en-IN</li><li>English (Japan) - en-JP</li><li>Japanese (Japan) - ja-JP</li></ul> | Autodetected (en-US or ja-JP) |
+|Business card (v2.1 API)| <ul><li>English (United States) - en-US</li><li>English (Australia) - en-AU</li><li>English (Canada) - en-CA</li><li>English (United Kingdom) - en-GB</li><li>English (India) - en-IN</li></ul> | Autodetected |
+
+## Field extractions
+
+|Name| Type | Description |Standardized output |
+|:--|:-|:-|:-:|
+| ContactNames | Array of objects | Contact name | |
+| FirstName | String | First (given) name of contact | |
+| LastName | String | Last (family) name of contact | |
+| CompanyNames | Array of strings | Company name(s)| |
+| Departments | Array of strings | Department(s) or organization(s) of contact | |
+| JobTitles | Array of strings | Listed Job title(s) of contact | |
+| Emails | Array of strings | Contact email address(es) | |
+| Websites | Array of strings | Company website(s) | |
+| Addresses | Array of strings | Address(es) extracted from business card | |
+| MobilePhones | Array of phone numbers | Mobile phone number(s) from business card |+1 xxx xxx xxxx |
+| Faxes | Array of phone numbers | Fax phone number(s) from business card | +1 xxx xxx xxxx |
+| WorkPhones | Array of phone numbers | Work phone number(s) from business card | +1 xxx xxx xxxx |
+| OtherPhones | Array of phone numbers | Other phone number(s) from business card | +1 xxx xxx xxxx |
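+
+A hedged sketch of retrieving these fields with the Python SDK referenced in the development options above (`azure-ai-formrecognizer`); the endpoint, key, and file name are placeholders:
+
+```python
+# Hedged sketch: analyze a business card with the prebuilt-businessCard model and
+# print the extracted fields listed in the table above.
+from azure.core.credentials import AzureKeyCredential
+from azure.ai.formrecognizer import DocumentAnalysisClient
+
+client = DocumentAnalysisClient(
+    "https://<your-resource>.cognitiveservices.azure.com", AzureKeyCredential("<your-key>")
+)
+
+with open("business-card.jpg", "rb") as f:
+    poller = client.begin_analyze_document("prebuilt-businessCard", document=f)
+result = poller.result()
+
+for card in result.documents:
+    for name, field in card.fields.items():
+        print(f"{name}: {field.value} (confidence {field.confidence})")
+```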
+++
+### Fields extracted
+
+|Name| Type | Description | Text |
+|:--|:-|:-|:-|
+| ContactNames | array of objects | Contact name extracted from business card | [{ "FirstName": "John"`,` "LastName": "Doe" }] |
+| FirstName | string | First (given) name of contact | "John" |
+| LastName | string | Last (family) name of contact | "Doe" |
+| CompanyNames | array of strings | Company name extracted from business card | ["Contoso"] |
+| Departments | array of strings | Department or organization of contact | ["R&D"] |
+| JobTitles | array of strings | Listed Job title of contact | ["Software Engineer"] |
+| Emails | array of strings | Contact email extracted from business card | ["johndoe@contoso.com"] |
+| Websites | array of strings | Website extracted from business card | ["https://www.contoso.com"] |
+| Addresses | array of strings | Address extracted from business card | ["123 Main Street, Redmond, WA 98052"] |
+| MobilePhones | array of phone numbers | Mobile phone number extracted from business card | ["+19876543210"] |
+| Faxes | array of phone numbers | Fax phone number extracted from business card | ["+19876543211"] |
+| WorkPhones | array of phone numbers | Work phone number extracted from business card | ["+19876543231"] |
+| OtherPhones | array of phone numbers | Other phone number extracted from business card | ["+19876543233"] |
+
+## Supported locales
+
+**Prebuilt business cards v2.1** supports the following locales:
+
+* **en-us**
+* **en-au**
+* **en-ca**
+* **en-gb**
+* **en-in**
+
+### Migration guide and REST API v3.0
+
+* Follow our [**Document Intelligence v3.0 migration guide**](v3-migration-guide.md) to learn how to use the v3.0 version in your applications and workflows.
++
+## Next steps
++
+* Try processing your own forms and documents with the [Document Intelligence Studio](https://formrecognizer.appliedai.azure.com/studio)
+
+* Complete a [Document Intelligence quickstart](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-3.0.0&preserve-view=true) and get started creating a document processing app in the development language of your choice.
+++
+* Try processing your own forms and documents with the [Document Intelligence Sample Labeling tool](https://fott-2-1.azurewebsites.net/)
+
+* Complete a [Document Intelligence quickstart](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-2.1.0&preserve-view=true) and get started creating a document processing app in the development language of your choice.
+
ai-services Concept Composed Models https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/document-intelligence/concept-composed-models.md
+
+ Title: Composed custom models - Document Intelligence
+
+description: Compose several custom models into a single model for easier data extraction from groups of distinct form types.
+++++ Last updated : 07/18/2023+
+monikerRange: '<=doc-intel-3.0.0'
++
+# Document Intelligence composed custom models
++++
+**Composed models**. A composed model is created by taking a collection of custom models and assigning them to a single model built from your form types. When a document is submitted for analysis using a composed model, the service performs a classification to decide which custom model best represents the submitted document.
+
+With composed models, you can assign multiple custom models to a composed model called with a single model ID. It's useful when you've trained several models and want to group them to analyze similar form types. For example, your composed model might include custom models trained to analyze your supply, equipment, and furniture purchase orders. Instead of manually trying to select the appropriate model, you can use a composed model to determine the appropriate custom model for each analysis and extraction.
+
+* ```Custom form``` and ```Custom template``` models can be composed together into a single composed model.
+
+* With the model compose operation, you can assign up to 200 trained custom models to a single composed model. To analyze a document with a composed model, Document Intelligence first classifies the submitted form, chooses the best-matching assigned model, and returns results.
+
+* For **_custom template models_**, the composed model can be created using variations of a custom template or different form types. This operation is useful when incoming forms may belong to one of several templates.
+
+* The response includes a ```docType``` property to indicate which of the composed models was used to analyze the document.
+
+* For ```Custom neural``` models, the best practice is to add all the different variations of a single document type into a single training dataset and train a custom neural model. Model compose is best suited for scenarios when you have documents of different types being submitted for analysis (see the compose request sketch after this list).
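+
+The following is a hedged sketch of the compose request over REST (the ComposeDocumentModel operation linked in the development options below); the model IDs are placeholders and the request body shape is an assumption drawn from that reference.
+
+```python
+# Hedged sketch: compose several trained custom models into one composed model.
+# Model IDs and the body shape (componentModels list) are assumptions for illustration.
+import requests
+
+endpoint, key = "https://<your-resource>.cognitiveservices.azure.com", "<your-key>"
+body = {
+    "modelId": "purchase-orders-composed",
+    "description": "Supply, equipment, and furniture purchase orders",
+    "componentModels": [
+        {"modelId": "supply-po-model"},
+        {"modelId": "equipment-po-model"},
+        {"modelId": "furniture-po-model"},
+    ],
+}
+response = requests.post(
+    f"{endpoint}/formrecognizer/documentModels:compose?api-version=2022-08-31",
+    headers={"Ocp-Apim-Subscription-Key": key},
+    json=body,
+)
+# Compose is asynchronous; poll the Operation-Location header until it completes.
+print(response.headers["Operation-Location"])
+```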
+++
+With the introduction of [**custom classification models**](./concept-custom-classifier.md), you can choose to use a [**composed model**](./concept-composed-models.md) or [**classification model**](concept-custom-classifier.md) as an explicit step before analysis. For a deeper understanding of when to use a classification or composed model, _see_ [**Custom classification models**](concept-custom-classifier.md#compare-custom-classification-and-composed-models).
+
+## Compose model limits
+
+> [!NOTE]
+> With the addition of **_custom neural model_** , there are a few limits to the compatibility of models that can be composed together.
+
+### Composed model compatibility
+
+|Custom model type|Models trained with v2.1 and v2.0 | Custom template models v3.0 |Custom neural models v3.0 (preview) |Custom neural models 3.0 (GA)|
+|--|--|--|--|--|
+|**Models trained with version 2.1 and v2.0** |Supported|Supported|Not Supported|Not Supported|
+|**Custom template models v3.0** |Supported|Supported|Not Supported|Not Supported|
+|**Custom template models v3.0 (GA)** |Not Supported|Not Supported|Supported|Not Supported|
+|**Custom neural models v3.0 (preview)**|Not Supported|Not Supported|Supported|Not Supported|
+|**Custom Neural models v3.0 (GA)**|Not Supported|Not Supported|Not Supported|Supported|
+
+* To compose a model trained with a prior version of the API (v2.1 or earlier), train a model with the v3.0 API using the same labeled dataset. That addition ensures that the v2.1 model can be composed with other models.
+
+* Models composed with v2.1 of the API continue to be supported and require no updates.
+
+* The limit for maximum number of custom models that can be composed is 100.
++
+## Development options
+
+The following resources are supported by Document Intelligence **v3.0** :
+
+| Feature | Resources |
+|-|-|
+|_**Custom model**_| <ul><li>[Document Intelligence Studio](https://formrecognizer.appliedai.azure.com/studio/custommodel/projects)</li><li>[REST API](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-2022-08-31/operations/AnalyzeDocument)</li><li>[C# SDK](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-3.0.0&preserve-view=true)</li><li>[Java SDK](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-3.0.0&preserve-view=true)</li><li>[JavaScript SDK](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-3.0.0&preserve-view=true)</li><li>[Python SDK](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-3.0.0&preserve-view=true)</li></ul>|
+| _**Composed model**_| <ul><li>[Document Intelligence Studio](https://formrecognizer.appliedai.azure.com/studio/custommodel/projects)</li><li>[REST API](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-2022-08-31/operations/ComposeDocumentModel)</li><li>[C# SDK](/dotnet/api/azure.ai.formrecognizer.training.formtrainingclient.startcreatecomposedmodel)</li><li>[Java SDK](/java/api/com.azure.ai.formrecognizer.training.formtrainingclient.begincreatecomposedmodel)</li><li>[JavaScript SDK](/javascript/api/@azure/ai-form-recognizer/documentmodeladministrationclient?view=azure-node-latest#@azure-ai-form-recognizer-documentmodeladministrationclient-begincomposemodel&preserve-view=true)</li><li>[Python SDK](/python/api/azure-ai-formrecognizer/azure.ai.formrecognizer.formtrainingclient?view=azure-python#azure-ai-formrecognizer-formtrainingclient-begin-create-composed-model&preserve-view=true)</li></ul>|
++
+Document Intelligence v2.1 supports the following resources:
+
+| Feature | Resources |
+|-|-|
+|_**Custom model**_| <ul><li>[Document Intelligence labeling tool](https://fott-2-1.azurewebsites.net)</li><li>[REST API](./how-to-guides/use-sdk-rest-api.md?pivots=programming-language-rest-api&preserve-view=true&tabs=windows&view=doc-intel-2.1.0#analyze-forms-with-a-custom-model)</li><li>[Client library SDK](~/articles/ai-services/document-intelligence/how-to-guides/use-sdk-rest-api.md?view=doc-intel-2.1.0&preserve-view=true)</li><li>[Document Intelligence Docker container](containers/install-run.md?tabs=custom#run-the-container-with-the-docker-compose-up-command)</li></ul>|
+| _**Composed model**_ |<ul><li>[Document Intelligence labeling tool](https://fott-2-1.azurewebsites.net/)</li><li>[REST API](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-v2-1/operations/Compose)</li><li>[C# SDK](/dotnet/api/azure.ai.formrecognizer.training.createcomposedmodeloperation?view=azure-dotnet&preserve-view=true)</li><li>[Java SDK](/java/api/com.azure.ai.formrecognizer.models.createcomposedmodeloptions?view=azure-java-stable&preserve-view=true)</li><li>JavaScript SDK</li><li>[Python SDK](/python/api/azure-ai-formrecognizer/azure.ai.formrecognizer.formtrainingclient?view=azure-python#azure-ai-formrecognizer-formtrainingclient-begin-create-composed-model&preserve-view=true)</li></ul>|
+
+## Next steps
+
+Learn to create and compose custom models:
+
+> [!div class="nextstepaction"]
+> [**Build a custom model**](how-to-guides/build-a-custom-model.md)
+> [**Compose custom models**](how-to-guides/compose-custom-models.md)
+>
ai-services Concept Custom Classifier https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/document-intelligence/concept-custom-classifier.md
+
+ Title: Custom classification model - Document Intelligence
+
+description: Use the custom classification model to train a model to identify and split the documents you process within your application.
+++++ Last updated : 07/18/2023++
+monikerRange: 'doc-intel-3.0.0'
+++
+# Document Intelligence custom classification model
+
+**This article applies to:** ![Document Intelligence checkmark](medi) supported by Document Intelligence REST API version [2023-02-28-preview](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-2023-02-28-preview/operations/AnalyzeDocument)**.
+
+> [!IMPORTANT]
+>
+> Custom classification model is currently in public preview. Features, approaches, and processes may change, prior to General Availability (GA), based on user feedback.
+>
+
+Custom classification models are deep-learning-model types that combine layout and language features to accurately detect and identify documents you process within your application. Custom classification models can classify each page in an input file to identify the document(s) within and can also identify multiple documents or multiple instances of a single document within an input file.
+
+## Model capabilities
+
+Custom classification models can analyze single-file or multi-file documents to identify whether any of the trained document types are contained within an input file. Here are the currently supported scenarios:
+
+* A single file containing one document. For instance, a loan application form.
+
+* A single file containing multiple documents. For instance, a loan application package containing a loan application form, payslip, and bank statement.
+
+* A single file containing multiple instances of the same document. For instance, a collection of scanned invoices.
+
+Training a custom classifier requires at least two distinct classes and a minimum of five samples per class.
+
+### Compare custom classification and composed models
+
+A custom classification model can replace [a composed model](concept-composed-models.md) in some scenarios but there are a few differences to be aware of:
+
+| Capability | Custom classifier process | Composed model process |
+|--|--|--|
+|Analyze a single document of unknown type belonging to one of the types trained for extraction model processing.| &#9679; Requires multiple calls. </br> &#9679; Call the classification model based on the document class. This step allows for a confidence-based check before invoking the extraction model analysis.</br> &#9679; Invoke the extraction model. | &#9679; Requires a single call to a composed model containing the model corresponding to the input document type. |
+ |Analyze a single document of unknown type belonging to several types trained for extraction model processing.| &#9679;Requires multiple calls.</br> &#9679; Make a call to the classifier that ignores documents not matching a designated type for extraction.</br> &#9679; Invoke the extraction model. | &#9679; Requires a single call to a composed model. The service selects a custom model within the composed model with the highest match.</br> &#9679; A composed model can't ignore documents.|
+|Analyze a file containing multiple documents of known or unknown type belonging to one of the types trained for extraction model processing.| &#9679; Requires multiple calls. </br> &#9679; Call the extraction model for each identified document in the input file.</br> &#9679; Invoke the extraction model. | &#9679; Requires a single call to a composed model.</br> &#9679; The composed model invokes the component model once on the first instance of the document. </br> &#9679;The remaining documents are ignored. |
+
+## Language support
+
+Classification models currently only support English language documents.
+
+## Best practices
+
+Custom classification models require a minimum of five samples per class to train. If the classes are similar, adding extra training samples improves model accuracy.
+
+## Training a model
+
+Custom classification models are only available in the [v3.0 API](v3-migration-guide.md) starting with API version ```2023-02-28-preview```. [Document Intelligence Studio](https://formrecognizer.appliedai.azure.com/studio) provides a no-code user interface to interactively train a custom classifier.
+
+When using the REST API, if you've organized your documents by folders, you can use the ```azureBlobSource``` property of the request to train a classification model.
+
+```rest
+https://{endpoint}/formrecognizer/documentClassifiers:build?api-version=2023-02-28-preview
+
+{
+ "classifierId": "demo2.1",
+ "description": "",
+ "docTypes": {
+ "car-maint": {
+ "azureBlobSource": {
+ "containerUrl": "SAS URL to container",
+ "prefix": "sample1/car-maint/"
+ }
+ },
+ "cc-auth": {
+ "azureBlobSource": {
+ "containerUrl": "SAS URL to container",
+ "prefix": "sample1/cc-auth/"
+ }
+ },
+ "deed-of-trust": {
+ "azureBlobSource": {
+ "containerUrl": "SAS URL to container",
+ "prefix": "sample1/deed-of-trust/"
+ }
+ }
+ }
+}
+
+```
+
+Alternatively, if you have a flat list of files or only plan to use a few select files within each folder to train the model, you can use the ```azureBlobFileListSource``` property to train the model. This step requires a ```file list``` in [JSON Lines](https://jsonlines.org/) format. For each class, add a new file with a list of files to be submitted for training.
+
+```rest
+{
+ "classifierId": "demo2",
+ "description": "",
+ "docTypes": {
+ "car-maint": {
+ "azureBlobFileListSource": {
+ "containerUrl": "SAS URL to container",
+ "fileList": "sample1/car-maint.jsonl"
+ }
+ },
+ "cc-auth": {
+ "azureBlobFileListSource": {
+ "containerUrl": "SAS URL to container",
+ "fileList": "sample1/cc-auth.jsonl"
+ }
+ },
+ "deed-of-trust": {
+ "azureBlobFileListSource": {
+ "containerUrl": "SAS URL to container",
+ "fileList": "sample1/deed-of-trust.jsonl"
+ }
+ }
+ }
+}
+
+```
+
+File list `car-maint.jsonl` contains the following files.
+
+```json
+{"file":"sample1/car-maint/Commercial Motor Vehicle - Adatum.pdf"}
+{"file":"sample1/car-maint/Commercial Motor Vehicle - Fincher.pdf"}
+{"file":"sample1/car-maint/Commercial Motor Vehicle - Lamna.pdf"}
+{"file":"sample1/car-maint/Commercial Motor Vehicle - Liberty.pdf"}
+{"file":"sample1/car-maint/Commercial Motor Vehicle - Trey.pdf"}
+```
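+
+A minimal sketch for generating such a file list in JSON Lines format; the paths below are the sample paths shown above and would be replaced with your own:
+
+```python
+# Minimal sketch: write one {"file": ...} JSON object per line for a class's file list.
+import json
+
+files = [
+    "sample1/car-maint/Commercial Motor Vehicle - Adatum.pdf",
+    "sample1/car-maint/Commercial Motor Vehicle - Fincher.pdf",
+]
+with open("car-maint.jsonl", "w") as out:
+    for path in files:
+        out.write(json.dumps({"file": path}) + "\n")
+```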
+
+## Next steps
+
+Learn to create custom classification models:
+
+> [!div class="nextstepaction"]
+> [**Build a custom classification model**](how-to-guides/build-a-custom-classifier.md)
+> [**Custom models overview**](concept-custom.md)
ai-services Concept Custom Label Tips https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/document-intelligence/concept-custom-label-tips.md
+
+ Title: Labeling tips for custom models in the Document Intelligence Studio
+
+description: Label tips and tricks for Document Intelligence Studio
+++++ Last updated : 07/18/2023++
+monikerRange: '<=doc-intel-3.0.0'
+++
+# Tips for building labeled datasets
+
+This article highlights the best methods for labeling custom model datasets in the Document Intelligence Studio. Labeling documents can be time consuming when you have a large number of labels, long documents, or documents with varying structure. These tips should help you label documents more efficiently.
+
+## Video: Custom labels best practices
+
+* The following video is the second of two presentations intended to help you build custom models with higher accuracy (the first presentation explores [How to create a balanced data set](concept-custom-label.md#video-custom-label-tips-and-pointers)).
+
+* Here, we examine best practices for labeling your selected documents. With semantically relevant and consistent labeling, you should see an improvement in model performance.</br></br>
+
+ > [!VIDEO https://www.microsoft.com/en-us/videoplayer/embed/RE5fZKB ]
+
+## Search
+
+The Studio now includes a search box for instances when you know you need to find specific words to label, but just don't know where they're located in the document. Simply search for the word or phrase and navigate to the specific section in the document to label the occurrence.
+
+## Auto label tables
+
+Tables can be challenging to label when they have many rows or dense text. If the table that layout extracts gives the result you need, use that result and skip the labeling process. When the layout table isn't exactly what you need, you can start by generating the table field from the values layout extracts. Select the table icon on the page, and then select the auto label button. You can then edit the values as needed. Auto label currently only supports single-page tables.
+
+## Shift select
+
+When labeling a large span of text, rather than mark each word in the span, hold down the shift key as you're selecting the words to speed up labeling and ensure you don't miss any words in the span of text.
+
+## Region labeling
+
+A second option for labeling larger spans of text is to use region labeling. When region labeling is used, the OCR results are populated in the value at training time. The difference between the shift select and region labeling is only in the visual feedback the shift labeling approach provides.
+
+## Field subtypes
+
+When creating a field, select the right subtype to minimize post processing, for instance select the ```dmy``` option for dates to extract the values in a ```dd-mm-yyyy``` format.
+
+## Batch layout
+
+When creating a project, select the batch layout option to prepare all documents in your dataset for labeling. This feature ensures that you no longer have to select each document and wait for the layout results before you can start labeling.
+
+## Next steps
+
+* Learn more about custom labeling:
+
+ > [!div class="nextstepaction"]
+ > [Custom labels](concept-custom-label.md)
+
+* Learn more about custom template models:
+
+ > [!div class="nextstepaction"]
+ > [Custom template models](concept-custom-template.md )
+
+* Learn more about custom neural models:
+
+ > [!div class="nextstepaction"]
+ > [Custom neural models](concept-custom-neural.md )
ai-services Concept Custom Label https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/document-intelligence/concept-custom-label.md
+
+ Title: Best practices for labeling documents in the Document Intelligence Studio
+
+description: Label documents in the Studio to create a training dataset. Labeling guidelines aimed at training a model with high accuracy
+++++ Last updated : 07/18/2023++
+monikerRange: 'doc-intel-3.0.0'
+++
+# Best practices: generating labeled datasets
+
+Custom models (template and neural) require a labeled dataset of at least five documents to train a model. The quality of the labeled dataset affects the accuracy of the trained model. This guide helps you learn more about generating a model with high accuracy by assembling a diverse dataset and provides best practices for labeling your documents.
+
+## Understand the components of a labeled dataset
+
+A labeled dataset consists of several files:
+
+* You provide a set of sample documents (typically PDFs or images). A minimum of five documents is needed to train a model.
+
+* Additionally, the labeling process generates the following files:
+
+ * A `fields.json` file is created when the first field is added. There's one `fields.json` file for the entire training dataset; the field list contains the field names and their associated subfields and types.
+
+ * The Studio runs each of the documents through the [Layout API](concept-layout.md). The layout response for each of the sample files in the dataset is added as `{file}.ocr.json`. The layout response is used to generate the field labels when a specific span of text is labeled.
+
+ * A `{file}.labels.json` file is created or updated when a field is labeled in a document. The label file contains the spans of text and associated polygons from the layout output for each span of text the user adds as a value for a specific field.
+
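+The following is a simplified, illustrative sketch of the general shape of these files. The field names, file name, and exact property names shown here are hypothetical; the Studio generates the authoritative schema for you, so treat this only as a mental model of what gets created. A hypothetical `fields.json` might look like:
+
+```json
+{
+  "fields": [
+    { "fieldKey": "effective_date", "fieldType": "date", "fieldFormat": "dmy" },
+    { "fieldKey": "vendor_name", "fieldType": "string" }
+  ]
+}
+```
+
+and a matching, abbreviated `{file}.labels.json` might look like:
+
+```json
+{
+  "document": "contract-001.pdf",
+  "labels": [
+    {
+      "label": "effective_date",
+      "value": [
+        { "page": 1, "text": "01-02-2023", "boundingBoxes": [ [ 0.11, 0.22, 0.25, 0.22, 0.25, 0.24, 0.11, 0.24 ] ] }
+      ]
+    }
+  ]
+}
+```
+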
+## Video: Custom label tips and pointers
+
+* The following video is the first of two presentations intended to help you build custom models with higher accuracy (The second presentation examines [Best practices for labeling documents](concept-custom-label-tips.md#video-custom-labels-best-practices)).
+
+* Here, we explore how to create a balanced data set and select the right documents to label. This process sets you on the path to higher quality models.</br></br>
+
+ > [!VIDEO https://www.microsoft.com/en-us/videoplayer/embed/RWWHru]
+
+## Create a balanced dataset
+
+Before you start labeling, it's a good idea to look at a few different samples of the document to identify which samples you want to use in your labeled dataset. A balanced dataset represents all the typical variations you would expect to see for the document. Creating a balanced dataset results in a model with the highest possible accuracy. A few examples to consider are:
+
+* **Document formats**: If you expect to analyze both digital and scanned documents, add a few examples of each type to the training dataset
+
+* **Variations (template model)**: Consider splitting the dataset into folders and training a model for each variation. Any variations in structure or layout should be split into different models. You can then compose the individual models into a single [composed model](concept-composed-models.md).
+
+* **Variations (neural models)**: When your dataset has a manageable set of variations, about 15 or fewer, create a single dataset with a few samples of each of the different variations to train a single model. If the number of template variations is larger than 15, train multiple models and [compose](concept-composed-models.md) them together.
+
+* **Tables**: For documents containing tables with a variable number of rows, ensure that the training dataset also represents documents with different numbers of rows.
+
+* **Multi-page tables**: When tables span multiple pages, label a single table. Add documents to the training dataset with the expected variations represented: documents with the table on a single page only, and documents with the table spanning two or more pages with all the rows labeled.
+
+* **Optional fields**: If your dataset contains documents with optional fields, validate that the training dataset has a few documents with the options represented.
+
+## Start by identifying the fields
+
+Take the time to identify each of the fields you plan to label in the dataset. Pay attention to optional fields. Define the fields with the labels that best match the supported types.
+
+Use the following guidelines to define the fields:
+
+* For custom neural models, use semantically relevant names for fields. For example, if the value being extracted is `Effective Date`, name it `effective_date` or `EffectiveDate`, not a generic name like **date1**.
+
+* Ideally, name your fields with Pascal or camel case.
+
+* If a value is part of a visually repeating structure and you only need a single value, label it as a table and extract the required value during post-processing.
+
+* For tabular fields spanning multiple pages, define and label the fields as a single table.
+
+> [!NOTE]
+> Custom neural models share the same labeling format and strategy as custom template models. Currently custom neural models only support a subset of the field types supported by custom template models.
+
+## Model capabilities
+
+Custom neural models currently only support key-value pairs, structured fields (tables), and selection marks.
+
+| Model type | Form fields | Selection marks | Tabular fields | Signature | Region |
+|--|--|--|--|--|--|
+| Custom neural | ✔️Supported | ✔️Supported | ✔️Supported | Unsupported | ✔️Supported<sup>1</sup> |
+| Custom template | ✔️Supported| ✔️Supported | ✔️Supported | ✔️Supported | ✔️Supported |
+
+<sup>1</sup> Region labeling implementation differs between template and neural models. For template models, the training process injects synthetic data at training time if no text is found in the region labeled. With neural models, no synthetic text is injected and the recognized text is used as is.
+
+## Tabular fields
+
+Tabular fields (tables) are supported with custom neural models starting with API version ```2022-06-30-preview```. Models trained with API version ```2022-06-30-preview``` or later accept tabular field labels, and documents analyzed with those models using API version ```2022-06-30-preview``` or later produce tabular fields in the ```documents``` section of the result in the ```analyzeResult``` object.
+
+Tabular fields support **cross page tables** by default. To label a table that spans multiple pages, label each row of the table across the different pages in the single table. As a best practice, ensure that your dataset contains a few samples of the expected variations. For example, include both samples where an entire table is on a single page and samples of a table spanning two or more pages.
+
+Tabular fields are also useful when extracting repeating information within a document that isn't recognized as a table. For example, a repeating section of work experiences in a resume can be labeled and extracted as a tabular field.
+
+## Labeling guidelines
+
+* **Label values only.** Don't include the surrounding text. For example, when labeling a checkbox, name the field to indicate the checkbox selection, for example ```selectionYes``` and ```selectionNo```, rather than labeling the yes or no text in the document.
+
+* **Don't provide interleaving field values**. The words and/or regions that make up the value of one field must be either a consecutive sequence in natural reading order without interleaving with other fields, or in a region that doesn't cover any other fields.
+
+* **Consistent labeling**. If a value appears in multiple contexts within the document, consistently pick the same context across documents to label the value.
+
+* **Visually repeating data**. Tables support visually repeating groups of information, not just explicit tables. Explicit tables are identified in the tables section of the analyzed documents as part of the layout output and don't need to be labeled as tables. Only label a table field if the information is visually repeating and not identified as a table in the layout response. An example is the repeating work experience section of a resume.
+
+* **Region labeling (custom template)**. Labeling specific regions allows you to define a value when none exists. If the value is optional, ensure that you leave a few sample documents with the region not labeled. When labeling regions, don't include the surrounding text with the label.
+
+## Next steps
+
+* Train a custom model:
+
+ > [!div class="nextstepaction"]
+ > [How to train a model](how-to-guides/build-a-custom-model.md?view=doc-intel-3.0.0&preserve-view=true)
+
+* Learn more about custom template models:
+
+ > [!div class="nextstepaction"]
+ > [Custom template models](concept-custom-template.md)
+
+* Learn more about custom neural models:
+
+ > [!div class="nextstepaction"]
+ > [Custom neural models](concept-custom-neural.md)
+
+* View the REST API:
+
+ > [!div class="nextstepaction"]
+ > [Document Intelligence API v3.0](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-v3-0-preview-2/operations/AnalyzeDocument)
ai-services Concept Custom Neural https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/document-intelligence/concept-custom-neural.md
+
+ Title: Custom neural document model - Document Intelligence
+
+description: Use the custom neural document model to train a model to extract data from structured, semistructured, and unstructured documents.
+++++ Last updated : 07/18/2023++
+monikerRange: 'doc-intel-3.0.0'
+++
+# Document Intelligence custom neural model
+
+**This article applies to:** ![Document Intelligence v3.0 checkmark](media/yes-icon.png) **Document Intelligence v3.0**.
+
+Custom neural document models, or neural models, are a deep-learned model type that combines layout and language features to accurately extract labeled fields from documents. The base custom neural model is trained on various document types, which makes it suitable to be trained for extracting fields from structured, semi-structured, and unstructured documents. The following table lists common document types for each category:
+
+|Documents | Examples |
+||--|
+|structured| surveys, questionnaires|
+|semi-structured | invoices, purchase orders |
+|unstructured | contracts, letters|
+
+Custom neural models share the same labeling format and strategy as [custom template](concept-custom-template.md) models. Currently custom neural models only support a subset of the field types supported by custom template models.
+
+## Model capabilities
+
+Custom neural models currently support key-value pairs, selection marks, and structured fields (tables); future releases will include support for signatures.
+
+| Form fields | Selection marks | Tabular fields | Signature | Region |
+|:--:|:--:|:--:|:--:|:--:|
+| Supported | Supported | Supported | Unsupported | Supported <sup>1</sup> |
+
+<sup>1</sup> Region labels in custom neural models use the results from the Layout API for the specified region. This feature is different from template models where, if no value is present, text is generated at training time.
+
+### Build mode
+
+The build custom model operation has added support for the *template* and *neural* custom models. Previous versions of the REST API and SDKs only supported a single build mode that is now known as the *template* mode.
+
+Neural models support documents that have the same information, but different page structures. Examples of these documents include United States W2 forms, which share the same information, but may vary in appearance across companies. For more information, *see* [Custom model build mode](concept-custom.md#build-mode).
+
+## Language support
+
+Neural models now support added languages in the ```2023-02-28-preview``` API.
+
+| Languages | API version |
+|:--:|:--:|
+| English | `2022-08-31` (GA), `2023-02-28-preview`|
+| German | `2023-02-28-preview`|
+| Italian | `2023-02-28-preview`|
+| French | `2023-02-28-preview`|
+| Spanish | `2023-02-28-preview`|
+| Dutch | `2023-02-28-preview`|
+
+## Tabular fields
+
+With the release of API versions **2022-06-30-preview** and later, custom neural models support tabular fields (tables):
+
+* Models trained with API version 2022-08-31 or later accept tabular field labels.
+* Documents analyzed with custom neural models using API version 2022-06-30-preview or later produce tabular fields aggregated across the tables.
+* The results can be found in the ```analyzeResult``` object's ```documents``` array that is returned following an analysis operation. An abbreviated example follows this list.
+
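+The following abbreviated, illustrative snippet shows how a hypothetical tabular field named `WorkExperience` could surface in that ```documents``` array. Property names follow the v3.0 result conventions, but the values are made up and most of the response is omitted:
+
+```json
+{
+  "analyzeResult": {
+    "documents": [
+      {
+        "docType": "resume-model",
+        "fields": {
+          "WorkExperience": {
+            "type": "array",
+            "valueArray": [
+              {
+                "type": "object",
+                "valueObject": {
+                  "Company": { "type": "string", "valueString": "Contoso", "content": "Contoso" },
+                  "Years": { "type": "string", "valueString": "2019-2021", "content": "2019-2021" }
+                }
+              }
+            ]
+          }
+        }
+      }
+    ]
+  }
+}
+```
+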
+Tabular fields support **cross page tables** by default:
+
+* To label a table that spans multiple pages, label each row of the table across the different pages in a single table.
+* As a best practice, ensure that your dataset contains a few samples of the expected variations. For example, include samples where the entire table is on a single page and where tables span two or more pages.
+
+Tabular fields are also useful when extracting repeating information within a document that isn't recognized as a table. For example, a repeating section of work experiences in a resume can be labeled and extracted as a tabular field.
+
+## Supported regions
+
+As of October 18, 2022, Document Intelligence custom neural model training will only be available in the following Azure regions until further notice:
+
+* Australia East
+* Brazil South
+* Canada Central
+* Central India
+* Central US
+* East Asia
+* East US
+* East US 2
+* France Central
+* Japan East
+* South Central US
+* Southeast Asia
+* UK South
+* West Europe
+* West US 2
+* US Gov Arizona
+* US Gov Virginia
+
+> [!TIP]
+> You can [copy a model](disaster-recovery.md#copy-api-overview) trained in one of the select regions listed to **any other region** and use it accordingly.
+>
+> Use the [**REST API**](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-2022-08-31/operations/CopyDocumentModelTo) or [**Document Intelligence Studio**](https://formrecognizer.appliedai.azure.com/studio/custommodel/projects) to copy a model to another region. A sketch of the REST flow follows this tip.
+
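+The general REST flow, sketched here with placeholder names, is to request a copy authorization from the *target* resource and then start the copy from the *source* resource; consult the linked REST reference for the authoritative request and response schemas.
+
+```REST
+POST https://{targetEndpoint}/formrecognizer/documentModels:authorizeCopy?api-version=2022-08-31
+
+{
+  "modelId": "my-copied-neural-model",
+  "description": "copy of a model trained in another region"
+}
+```
+
+The JSON returned by the authorize call is then sent as the body of `POST https://{sourceEndpoint}/formrecognizer/documentModels/{sourceModelId}:copyTo?api-version=2022-08-31` on the source resource.
+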
+## Best practices
+
+Custom neural models differ from custom template models in a few ways. The custom template model relies on a consistent visual template to extract the labeled data. Custom neural models support structured, semi-structured, and unstructured documents for field extraction. When you're choosing between the two model types, start with a neural model, and test to determine if it supports your functional needs.
+
+### Dealing with variations
+
+Custom neural models can generalize across different formats of a single document type. As a best practice, create a single model for all variations of a document type. Add at least five labeled samples for each of the different variations to the training dataset.
+
+### Field naming
+
+When you label your data, naming fields so that they're relevant to the values they hold improves the accuracy of the extracted key-value pairs. For example, for a field value containing the supplier ID, consider naming the field *supplier_id*. Field names should be in the language of the document.
+
+### Labeling contiguous values
+
+The tokens/words that make up the value of one field must be either:
+
+* A consecutive sequence in natural reading order without interleaving with other fields
+* In a region that doesn't cover any other fields
+
+### Representative data
+
+Values in training cases should be diverse and representative. For example, if a field is named *date*, values for this field should be dates. Synthetic values like random strings can affect model performance.
+
+## Current limitations
+
+* The model doesn't recognize values split across page boundaries.
+* Custom neural models are only trained in English. Model performance is lower for documents in other languages.
+* If a dataset labeled for custom template models is used to train a custom neural model, the unsupported field types are ignored.
+* Custom neural models are limited to 20 build operations per month. Open a support request if you need the limit increased. For more information, see [Document Intelligence service quotas and limits](service-limits.md)
+
+## Training a model
+
+Custom neural models are only available in the [v3 API](v3-migration-guide.md).
+
+| Document Type | REST API | SDK | Label and Test Models|
+|--|--|--|--|
+| Custom document | [Document Intelligence 3.0 ](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-2022-08-31/operations/AnalyzeDocument)| [Document Intelligence SDK](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-3.0.0&preserve-view=true)| [Document Intelligence Studio](https://formrecognizer.appliedai.azure.com/studio)
+
+The build operation to train a model supports a new ```buildMode``` property. To train a custom neural model, set the ```buildMode``` to ```neural```.
+
+```REST
+https://{endpoint}/formrecognizer/documentModels:build?api-version=2022-08-31
+
+{
+ "modelId": "string",
+ "description": "string",
+ "buildMode": "neural",
+ "azureBlobSource":
+ {
+ "containerUrl": "string",
+ "prefix": "string"
+ }
+}
+```
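+
+The build request is asynchronous. As a rough sketch of the follow-up step (see the REST reference for the exact response shape), the service returns an `Operation-Location` header that you poll until the operation reports a ```succeeded``` status, at which point the model is ready to use:
+
+```REST
+GET https://{endpoint}/formrecognizer/operations/{operationId}?api-version=2022-08-31
+```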
+
+## Next steps
+
+Learn to create and compose custom models:
+
+> [!div class="nextstepaction"]
+> [**Build a custom model**](how-to-guides/build-a-custom-model.md)
+> [**Compose custom models**](how-to-guides/compose-custom-models.md)
ai-services Concept Custom Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/document-intelligence/concept-custom-template.md
+
+ Title: Custom template document model - Document Intelligence
+
+description: Use the custom template document model to train a model to extract data from structured or templated forms.
+++++ Last updated : 07/18/2023+
+monikerRange: '<=doc-intel-3.0.0'
+++
+# Document Intelligence custom template model
+++
+Custom template (formerly custom form) is an easy-to-train document model that accurately extracts labeled key-value pairs, selection marks, tables, regions, and signatures from documents. Template models use layout cues to extract values from documents and are suitable to extract fields from highly structured documents with defined visual templates.
++
+Custom template models share the same labeling format and strategy as custom neural models, with support for more field types and languages.
++
+## Model capabilities
+
+Custom template models support key-value pairs, selection marks, tables, signature fields, and selected regions.
+
+| Form fields | Selection marks | Tabular fields (Tables) | Signature | Selected regions |
+|:--:|:--:|:--:|:--:|:--:|
+| Supported| Supported | Supported | Supported| Supported |
++
+## Tabular fields
+
+With the release of API versions **2022-06-30-preview** and later, custom template models add support for **cross-page** tabular fields (tables):
+
+* To label a table that spans multiple pages, label each row of the table across the different pages in a single table.
+* As a best practice, ensure that your dataset contains a few samples of the expected variations. For example, include samples where the entire table is on a single page and where tables span two or more pages if you expect to see those variations in documents.
+
+Tabular fields are also useful when extracting repeating information within a document that isn't recognized as a table. For example, a repeating section of work experiences in a resume can be labeled and extracted as a tabular field.
++
+## Dealing with variations
+
+Template models rely on a defined visual template; changes to the template result in lower accuracy. In those instances, split your training dataset to include at least five samples of each template and train a model for each of the variations. You can then [compose](concept-composed-models.md) the models into a single endpoint. For subtle variations, like digital PDF documents and images, it's best to include at least five examples of each type in the same training dataset.
+
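+With the v3.0 API, composing two trained template models into a single model ID looks roughly like the following illustrative sketch; the model IDs are placeholders, and the exact request schema is covered in the composed models article and the REST reference.
+
+```REST
+POST https://{endpoint}/formrecognizer/documentModels:compose?api-version=2022-08-31
+
+{
+  "modelId": "contoso-composed-model",
+  "description": "composed model covering two template variations",
+  "componentModels": [
+    { "modelId": "contoso-template-variation-a" },
+    { "modelId": "contoso-template-variation-b" }
+  ]
+}
+```
+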
+## Training a model
++
+Custom template models are generally available with the [v3.0 API](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-2022-08-31/operations/BuildDocumentModel). If you're starting with a new project or have an existing labeled dataset, use the v3 API with Document Intelligence Studio to train a custom template model.
+
+| Model | REST API | SDK | Label and Test Models|
+|--|--|--|--|
+| Custom template | [Document Intelligence 3.0 ](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-2022-08-31/operations/AnalyzeDocument)| [Document Intelligence SDK](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-3.0.0&preserve-view=true)| [Document Intelligence Studio](https://formrecognizer.appliedai.azure.com/studio)|
+
+With the v3.0 API, the build operation to train a model supports a new ```buildMode``` property. To train a custom template model, set the ```buildMode``` to ```template```.
+
+```REST
+https://{endpoint}/formrecognizer/documentModels:build?api-version=2022-08-31
+
+{
+ "modelId": "string",
+ "description": "string",
+ "buildMode": "template",
+ "azureBlobSource":
+ {
+ "containerUrl": "string",
+ "prefix": "string"
+ }
+}
+```
+++
+Custom (template) models are generally available with the [v2.1 API](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-v2-1/operations/AnalyzeWithCustomForm).
+
+| Model | REST API | SDK | Label and Test Models|
+|--|--|--|--|
+| Custom model (template) | [Document Intelligence 2.1 ](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-v2-1/operations/AnalyzeWithCustomForm)| [Document Intelligence SDK](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-2.1.0&preserve-view=true?pivots=programming-language-python)| [Document Intelligence Sample labeling tool](https://fott-2-1.azurewebsites.net/)|
++
+## Next steps
+
+Learn to create and compose custom models:
++
+> [!div class="nextstepaction"]
+> [**Build a custom model**](how-to-guides/build-a-custom-model.md)
+> [**Compose custom models**](how-to-guides/compose-custom-models.md)
+++
+> [!div class="nextstepaction"]
+> [**Build a custom model**](concept-custom.md#build-a-custom-model)
+> [**Compose custom models**](concept-composed-models.md#development-options)
+
ai-services Concept Custom https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/document-intelligence/concept-custom.md
+
+ Title: Custom document models - Document Intelligence
+
+description: Label and train customized models for your documents and compose multiple models into a single model identifier.
+++++ Last updated : 07/18/2023+
+monikerRange: '<=doc-intel-3.0.0'
++
+# Document Intelligence custom models
+++
+Document Intelligence uses advanced machine learning technology to identify documents, detect and extract information from forms and documents, and return the extracted data in a structured JSON output. With Document Intelligence, you can use prebuilt or pretrained document analysis models, or your own trained standalone custom models.
+
+Custom models now include [custom classification models](./concept-custom-classifier.md) for scenarios where you need to identify the document type prior to invoking the extraction model. Classifier models are available starting with the ```2023-02-28-preview``` API. A classification model can be paired with a custom extraction model to analyze and extract fields from forms and documents specific to your business to create a document processing solution. Standalone custom extraction models can be combined to create [composed models](concept-composed-models.md).
++
+## Custom document model types
+
+Custom document models can be one of two types: [**custom template**](concept-custom-template.md) (also called custom form) and [**custom neural**](concept-custom-neural.md) (also called custom document) models. The labeling and training process for both model types is identical, but the models differ as follows:
+
+### Custom extraction models
+
+To create a custom extraction model, label a dataset of documents with the values you want extracted and train the model on the labeled dataset. You only need five examples of the same form or document type to get started.
+
+### Custom template model
+
+The custom template or custom form model relies on a consistent visual template to extract the labeled data. Variances in the visual structure of your documents affect the accuracy of your model. Structured forms such as questionnaires or applications are examples of consistent visual templates.
+
+Your training set consists of structured documents where the formatting and layout are static and constant from one document instance to the next. Custom template models support key-value pairs, selection marks, tables, signature fields, and regions. Template models can be trained on documents in any of the [supported languages](language-support.md). For more information, *see* [custom template models](concept-custom-template.md).
+
+> [!TIP]
+>
+>To confirm that your training documents present a consistent visual template, remove all the user-entered data from each form in the set. If the blank forms are identical in appearance, they represent a consistent visual template.
+>
+> For more information, *see* [Interpret and improve accuracy and confidence for custom models](concept-accuracy-confidence.md).
+
+### Custom neural model
+
+The custom neural (custom document) model uses deep learning models and a base model trained on a large collection of documents. This model is then fine-tuned or adapted to your data when you train the model with a labeled dataset. Custom neural models support structured, semi-structured, and unstructured documents to extract fields. Custom neural models currently support English-language documents. When you're choosing between the two model types, start with a neural model to determine if it meets your functional needs. See [neural models](concept-custom-neural.md) to learn more about custom document models.
+
+### Build mode
+
+The build custom model operation has added support for the *template* and *neural* custom models. Previous versions of the REST API and SDKs only supported a single build mode that is now known as the *template* mode.
+
+* Template models only accept documents that have the same basic page structure (a uniform visual appearance) or the same relative positioning of elements within the document.
+
+* Neural models support documents that have the same information, but different page structures. Examples of these documents include United States W2 forms, which share the same information, but may vary in appearance across companies. Neural models currently only support English text.
+
+This table provides links to the build mode programming language SDK references and code samples on GitHub:
+
+|Programming language | SDK reference | Code sample |
+||||
+| C#/.NET | [DocumentBuildMode Struct](/dotnet/api/azure.ai.formrecognizer.documentanalysis.documentbuildmode?view=azure-dotnet&preserve-view=true#properties) | [Sample_BuildCustomModelAsync.cs](https://github.com/Azure/azure-sdk-for-net/blob/main/sdk/formrecognizer/Azure.AI.FormRecognizer/tests/samples/Sample_BuildCustomModelAsync.cs)
+|Java| DocumentBuildMode Class | [BuildModel.java](https://github.com/Azure/azure-sdk-for-java/blob/main/sdk/formrecognizer/azure-ai-formrecognizer/src/samples/java/com/azure/ai/formrecognizer/administration/BuildDocumentModel.java)|
+|JavaScript | [DocumentBuildMode type](/javascript/api/@azure/ai-form-recognizer/documentbuildmode?view=azure-node-latest&preserve-view=true)| [buildModel.js](https://github.com/Azure/azure-sdk-for-js/blob/main/sdk/formrecognizer/ai-form-recognizer/samples/v4-beta/javascript/buildModel.js)|
+|Python | DocumentBuildMode Enum| [sample_build_model.py](https://github.com/Azure/azure-sdk-for-python/blob/azure-ai-formrecognizer_3.2.0b3/sdk/formrecognizer/azure-ai-formrecognizer/samples/v3.2-beta/sample_build_model.py)|
+
+## Compare model features
+
+The following table compares custom template and custom neural features:
+
+|Feature|Custom template (form) | Custom neural (document) |
+||||
+|Document structure|Template, form, and structured | Structured, semi-structured, and unstructured|
+|Training time | 1 to 5 minutes | 20 minutes to 1 hour |
+|Data extraction | Key-value pairs, tables, selection marks, coordinates, and signatures | Key-value pairs, selection marks and tables|
+|Document variations | Requires a model per each variation | Uses a single model for all variations |
+|Language support | Multiple [language support](language-support.md#read-layout-and-custom-form-template-model) | English, with preview support for Spanish, French, German, Italian and Dutch [language support](language-support.md#custom-neural-model) |
+
+### Custom classification model
+
+Document classification is a new scenario supported by Document Intelligence with the ```2023-02-28-preview``` API. The document classifier API supports classification and splitting scenarios. Train a classification model to identify the different types of documents your application supports. The input file for the classification model can contain multiple documents, and the model classifies each document within an associated page range. See [custom classification](concept-custom-classifier.md) models to learn more.
+
+## Custom model tools
+
+Document Intelligence v3.0 supports the following tools:
+
+| Feature | Resources | Model ID|
+|||:|
+|Custom model| <ul><li>[Document Intelligence Studio](https://formrecognizer.appliedai.azure.com/studio/customform/projects)</li><li>[REST API](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-2022-08-31/operations/AnalyzeDocument)</li><li>[C# SDK](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-3.0.0&preserve-view=true)</li><li>[Python SDK](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-3.0.0&preserve-view=true)</li></ul>|***custom-model-id***|
+++
+Document Intelligence v2.1 supports the following tools:
+
+> [!NOTE]
+> Custom model types [custom neural](concept-custom-neural.md) and [custom template](concept-custom-template.md) are only available with Document Intelligence version v3.0.
+
+| Feature | Resources |
+|||
+|Custom model| <ul><li>[Document Intelligence labeling tool](https://fott-2-1.azurewebsites.net)</li><li>[REST API](./how-to-guides/use-sdk-rest-api.md?pivots=programming-language-rest-api&preserve-view=true&tabs=windows&view=doc-intel-2.1.0#analyze-forms-with-a-custom-model)</li><li>[Client library SDK](~/articles/ai-services/document-intelligence/how-to-guides/use-sdk-rest-api.md?view=doc-intel-2.1.0&preserve-view=true)</li><li>[Document Intelligence Docker container](containers/install-run.md?tabs=custom#run-the-container-with-the-docker-compose-up-command)</li></ul>|
++
+## Build a custom model
+
+### [Custom extraction](#tab/extraction)
+
+Extract data from your specific or unique documents using custom models. You need the following resources:
+
+* An Azure subscription. You can [create one for free](https://azure.microsoft.com/free/cognitive-services/).
+* A [Form Recognizer instance (Document Intelligence forthcoming)](https://portal.azure.com/#create/Microsoft.CognitiveServicesFormRecognizer) in the Azure portal. You can use the free pricing tier (`F0`) to try the service. After your resource deploys, select **Go to resource** to get your key and endpoint.
+
+ :::image type="content" source="media/containers/keys-and-endpoint.png" alt-text="Screenshot that shows the keys and endpoint location in the Azure portal.":::
++
+## Sample Labeling tool
+
+>[!TIP]
+>
+> * For an enhanced experience and advanced model quality, try the [Document Intelligence v3.0 Studio ](https://formrecognizer.appliedai.azure.com/studio).
+> * The v3.0 Studio supports any model trained with v2.1 labeled data.
+> * You can refer to the API migration guide for detailed information about migrating from v2.1 to v3.0.
+> * *See* our [**REST API**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-3.0.0&preserve-view=true) or [**C#**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-3.0.0&preserve-view=true), [**Java**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-3.0.0&preserve-view=true), [**JavaScript**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-3.0.0&preserve-view=true), or [Python](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-3.0.0&preserve-view=true) SDK quickstarts to get started with the v3.0 version.
+
+* The Document Intelligence Sample Labeling tool is an open source tool that enables you to test the latest Document Intelligence and Optical Character Recognition (OCR) features.
+
+* Try the [**Sample Labeling tool quickstart**](quickstarts/try-sample-label-tool.md#train-a-custom-model) to get started building and using a custom model.
+++
+## Document Intelligence Studio
+
+> [!NOTE]
+> Document Intelligence Studio is available with the v3.0 API.
+
+1. On the **Document Intelligence Studio** home page, select **Custom extraction models**.
+
+1. Under **My Projects**, select **Create a project**.
+
+1. Complete the project details fields.
+
+1. Configure the service resource by adding your **Storage account** and **Blob container** to **Connect your training data source**.
+
+1. Review and create your project.
+
+1. Add your sample documents to label, build and test your custom model.
+
+ > [!div class="nextstepaction"]
+ > [Try Document Intelligence Studio](https://formrecognizer.appliedai.azure.com/studio/customform/projects)
+
+For a detailed walkthrough to create your first custom extraction model, see [how to create a custom extraction model](how-to-guides/build-a-custom-model.md)
+
+### [Custom classification](#tab/classification)
+
+Extract data from your specific or unique documents using custom models. You need the following resources:
+
+* An Azure subscription. You can [create one for free](https://azure.microsoft.com/free/cognitive-services/).
+* A [Form Recognizer instance (Document Intelligence forthcoming)](https://portal.azure.com/#create/Microsoft.CognitiveServicesFormRecognizer) in the Azure portal. You can use the free pricing tier (`F0`) to try the service. After your resource deploys, select **Go to resource** to get your key and endpoint.
+
+ :::image type="content" source="media/containers/keys-and-endpoint.png" alt-text="Screenshot that shows the keys and endpoint location in the Azure portal.":::
+
+## Document Intelligence Studio
+
+> [!NOTE]
+> Document Intelligence Studio is available with the v3.0 API.
+
+1. On the **Document Intelligence Studio** home page, select **Custom classification models**.
+
+1. Under **My Projects**, select **Create a project**.
+
+1. Complete the project details fields.
+
+1. Configure the service resource by adding your **Storage account** and **Blob container** to **Connect your training data source**.
+
+1. Review and create your project.
+
+1. Label your documents to build and test your custom classification model.
+
+ > [!div class="nextstepaction"]
+ > [Try Document Intelligence Studio](https://formrecognizer.appliedai.azure.com/studio/document-classifier/projects)
+
+For a detailed walkthrough to create your first custom classification model, see [how to create a custom classification model](how-to-guides/build-a-custom-classifier.md).
++++
+## Custom model extraction summary
+
+This table compares the supported data extraction areas:
+
+|Model| Form fields | Selection marks | Structured fields (Tables) | Signature | Region labeling |
+|--|:--:|:--:|:--:|:--:|:--:|
+|Custom template| ✔️ | ✔️ | ✔️ | ✔️ | ✔️ |
+|Custom neural| ✔️ | ✔️ | ✔️ | **n/a** | * |
+
+**Table symbols**:
+✔️ - supported;
+**n/a** - currently unavailable;
+\* - behaves differently. With template models, synthetic data is generated at training time. With neural models, existing text recognized in the region is selected.
+
+> [!TIP]
+> When choosing between the two model types, start with a custom neural model and test to determine whether it meets your functional needs. See [custom neural](concept-custom-neural.md) to learn more about custom neural models.
+
+## Custom model development options
+
+The following table describes the features available with the associated tools and SDKs. As a best practice, ensure that you use the compatible tools listed here.
+
+| Document type | REST API | SDK | Label and Test Models|
+|--|--|--|--|
+| Custom form 2.1 | [Document Intelligence 2.1 GA API](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-v2-1/operations/AnalyzeWithCustomForm) | [Document Intelligence SDK](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-2.1.0&preserve-view=true?pivots=programming-language-python)| [Sample labeling tool](https://fott-2-1.azurewebsites.net/)|
+| Custom template 3.0 | [Document Intelligence 3.0](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-2022-08-31/operations/AnalyzeDocument)| [Document Intelligence SDK](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-3.0.0&preserve-view=true)| [Document Intelligence Studio](https://formrecognizer.appliedai.azure.com/studio)|
+| Custom neural | [Document Intelligence 3.0](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-2022-08-31/operations/AnalyzeDocument)| [Document Intelligence SDK](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-3.0.0&preserve-view=true)| [Document Intelligence Studio](https://formrecognizer.appliedai.azure.com/studio)
+
+> [!NOTE]
+> Custom template models trained with the 3.0 API will have a few improvements over the 2.1 API stemming from improvements to the OCR engine. Datasets used to train a custom template model using the 2.1 API can still be used to train a new model using the 3.0 API.
+
+* For best results, provide one clear photo or high-quality scan per document.
+* Supported file formats are JPEG/JPG, PNG, BMP, TIFF, and PDF (text-embedded or scanned). Text-embedded PDFs are best to eliminate the possibility of error in character extraction and location.
+* For PDF and TIFF files, up to 2,000 pages can be processed. With a free tier subscription, only the first two pages are processed.
+* The file size must be less than 500 MB for paid (S0) tier and 4 MB for free (F0) tier.
+* Image dimensions must be between 50 x 50 pixels and 10,000 x 10,000 pixels.
+* PDF dimensions are up to 17 x 17 inches, corresponding to Legal or A3 paper size, or smaller.
+* The total size of the training data is 500 pages or less.
+* If your PDFs are password-locked, you must remove the lock before submission.
+
+ > [!TIP]
+ > Training data:
+ >
+ >* If possible, use text-based PDF documents instead of image-based documents. Scanned PDFs are handled as images.
+ > * Please supply only a single instance of the form per document.
+ > * For filled-in forms, use examples that have all their fields filled in.
+ > * Use forms with different values in each field.
+ >* If your form images are of lower quality, use a larger dataset. For example, use 10 to 15 images.
+
+## Supported languages and locales
+
+>[!NOTE]
+ > It's not necessary to specify a locale. This is an optional parameter. The Document Intelligence deep-learning technology will auto-detect the language of the text in your image.
+
+ The Document Intelligence v3.0 version introduces more language support for custom models. For a list of supported handwritten and printed text, see [Language support](language-support.md).
+
+ Document Intelligence v3.0 introduces several new features and capabilities:
+
+* **Custom model API**: This version supports signature detection for custom forms. When you train custom models, you can specify certain fields as signatures. When a document is analyzed with your custom model, it indicates whether a signature was detected or not.
+* [Document Intelligence v3.0 migration guide](v3-migration-guide.md): This guide shows you how to use the v3.0 version in your applications and workflows.
+* [REST API](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-2022-08-31/operations/AnalyzeDocument): This API shows you more about the v3.0 version and new capabilities.
+
+### Try signature detection
+
+1. Build your training dataset.
+
+1. Go to [Document Intelligence Studio](https://formrecognizer.appliedai.azure.com/studio). Under **Custom models**, select **Custom form**.
+
+ :::image type="content" source="media/label-tool/select-custom-form.png" alt-text="Screenshot that shows selecting the Document Intelligence Studio Custom form page.":::
+
+1. Follow the workflow to create a new project:
+
+ 1. Follow the **Custom model** input requirements.
+
+ 1. Label your documents. For signature fields, use **Region** labeling for better accuracy.
+
+ :::image type="content" source="media/label-tool/signature-label-region-too.png" alt-text="Screenshot that shows the Label signature field.":::
+
+After your training set is labeled, you can train your custom model and use it to analyze documents. The signature fields specify whether a signature was detected or not.
++++
+## Next steps
++
+* Try processing your own forms and documents with the [Document Intelligence Studio](https://formrecognizer.appliedai.azure.com/studio)
+
+* Complete a [Document Intelligence quickstart](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-3.0.0&preserve-view=true) and get started creating a document processing app in the development language of your choice.
+++
+* Try processing your own forms and documents with the [Document Intelligence Sample Labeling tool](https://fott-2-1.azurewebsites.net/)
+
+* Complete a [Document Intelligence quickstart](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-2.1.0&preserve-view=true) and get started creating a document processing app in the development language of your choice.
+
ai-services Concept Document Intelligence Studio https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/document-intelligence/concept-document-intelligence-studio.md
+
+ Title: "Document Intelligence Studio"
+
+description: "Concept: Form and document processing, data extraction, and analysis using Document Intelligence Studio "
+++++ Last updated : 07/18/2023+
+monikerRange: 'doc-intel-3.0.0'
+++
+# Document Intelligence Studio
+
+**This article applies to:** ![Document Intelligence v3.0 checkmark](media/yes-icon.png) **Document Intelligence v3.0**.
+
+[Document Intelligence Studio](https://formrecognizer.appliedai.azure.com/) is an online tool for visually exploring, understanding, and integrating features from the Document Intelligence service into your applications. Use the [Document Intelligence Studio quickstart](quickstarts/try-document-intelligence-studio.md) to get started analyzing documents with pretrained models. Build custom template models and reference the models in your applications using the [Python SDK v3.0](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-3.0.0&preserve-view=true) and other quickstarts.
+
+The following image shows the Invoice prebuilt model feature at work.
++
+## Document Intelligence Studio features
+
+The following Document Intelligence service features are available in the Studio.
+
+* **Read**: Try out Document Intelligence's Read feature to extract text lines, words, detected languages, and handwritten style if detected. Start with the [Studio Read feature](https://formrecognizer.appliedai.azure.com/studio/read). Explore with sample documents and your documents. Use the interactive visualization and JSON output to understand how the feature works. See the [Read overview](concept-read.md) to learn more and get started with the [Python SDK quickstart for Layout](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-3.0.0&preserve-view=true).
+
+* **Layout**: Try out Document Intelligence's Layout feature to extract text, tables, selection marks, and structure information. Start with the [Studio Layout feature](https://formrecognizer.appliedai.azure.com/studio/layout). Explore with sample documents and your documents. Use the interactive visualization and JSON output to understand how the feature works. See the [Layout overview](concept-layout.md) to learn more and get started with the [Python SDK quickstart for Layout](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-3.0.0&preserve-view=true#layout-model).
+
+* **General Documents**: Try out Document Intelligence's General Documents feature to extract key-value pairs. Start with the [Studio General Documents feature](https://formrecognizer.appliedai.azure.com/studio/document). Explore with sample documents and your documents. Use the interactive visualization and JSON output to understand how the feature works. See the [General Documents overview](concept-general-document.md) to learn more and get started with the [Python SDK quickstart for Layout](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-3.0.0&preserve-view=true#general-document-model).
+
+* **Prebuilt models**: Document Intelligence's prebuilt models enable you to add intelligent document processing to your apps and flows without having to train and build your own models. As an example, start with the [Studio Invoice feature](https://formrecognizer.appliedai.azure.com/studio/prebuilt?formType=invoice). Explore with sample documents and your documents. Use the interactive visualization, extracted fields list, and JSON output to understand how the feature works. See the [Models overview](concept-model-overview.md) to learn more and get started with the [Python SDK quickstart for Prebuilt Invoice](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-3.0.0&preserve-view=true#prebuilt-model).
+
+* **Custom models**: Document Intelligence's custom models enable you to extract fields and values from models trained with your data, tailored to your forms and documents. Create standalone custom models or combine two or more custom models to create a composed model to extract data from multiple form types. Start with the [Studio Custom models feature](https://formrecognizer.appliedai.azure.com/studio/custommodel/projects). Use the online wizard, labeling interface, training step, and visualizations to understand how the feature works. Test the custom model with your sample documents and iterate to improve the model. See the [Custom models overview](concept-custom.md) to learn more and use the [Document Intelligence v3.0 migration guide](v3-migration-guide.md) to start integrating the new models with your applications.
+
+## Next steps
+
+* Follow our [**Document Intelligence v3.0 migration guide**](v3-migration-guide.md) to learn the differences from the previous version of the REST API.
+* Explore our [**v3.0 SDK quickstarts**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-3.0.0&preserve-view=true) to try the v3.0 features in your applications using the new SDKs.
+* Refer to our [**v3.0 REST API quickstarts**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-3.0.0&preserve-view=true) to try the v3.0 features using the new REST API.
+
+> [!div class="nextstepaction"]
+> [Document Intelligence Studio quickstart](quickstarts/try-document-intelligence-studio.md)
ai-services Concept General Document https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/document-intelligence/concept-general-document.md
+
+ Title: General key-value extraction - Document Intelligence
+
+description: Extract key-value pairs, tables, selection marks, and text from your documents with Document Intelligence
+++++ Last updated : 07/18/2023+
+monikerRange: 'doc-intel-3.0.0'
++
+<!-- markdownlint-disable MD033 -->
+
+# Document Intelligence general document model
+
+**This article applies to:** ![Document Intelligence v3.0 checkmark](media/yes-icon.png) **Document Intelligence v3.0**.
+
+The General document v3.0 model combines powerful Optical Character Recognition (OCR) capabilities with deep learning models to extract key-value pairs, tables, and selection marks from documents. General document is only available with the v3.0 API. For more information on using the v3.0 API, see our [migration guide](v3-migration-guide.md).
+
+> [!NOTE]
+> The ```2023-02-28-preview``` version of the general document model adds support for **normalized keys**.
+
+## General document features
+
+* The general document model is a pretrained model; it doesn't require labels or training.
+
+* A single API extracts key-value pairs, selection marks, text, tables, and structure from documents.
+
+* The general document model supports structured, semi-structured, and unstructured documents.
+
+* Key names are spans of text within the document that are associated with a value. With the ```2023-02-28-preview``` API version, key names are normalized where applicable.
+
+* Selection marks are identified as fields with a value of ```:selected:``` or ```:unselected:```
+
+***Sample document processed in the Document Intelligence Studio***
++
+## Key-value pair extraction
+
+The general document API supports most form types and analyzes your documents to extract keys and associated values. It's ideal for extracting common key-value pairs from documents. You can use the general document model as an alternative to training a custom model without labels.
+
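+For illustration, a general document analysis is started with a single call against the ```prebuilt-document``` model; the document URL below is a placeholder, and the analysis result is retrieved afterward from the operation referenced in the response headers:
+
+```REST
+POST https://{endpoint}/formrecognizer/documentModels/prebuilt-document:analyze?api-version=2022-08-31
+
+{
+  "urlSource": "https://{your-storage-account}/sample-form.pdf"
+}
+```
+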
+### Key normalization (common name)
+
+When the service analyzes documents with variations in key names like ```Social Security Number```, ```Social Security Nbr```, ```SSN```, the output normalizes the key variations to a single common name, ```SocialSecurityNumber```. This normalization simplifies downstream processing for documents where you no longer need to account for variations in the key name.
++
+## Development options
+
+Document Intelligence v3.0 supports the following tools:
+
+| Feature | Resources | Model ID
+|-|-||
+| **General document model**|<ul ><li>[**Document Intelligence Studio**](https://formrecognizer.appliedai.azure.com)</li><li>[**REST API**](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-2022-08-31/operations/AnalyzeDocument)</li><li>[**C# SDK**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-3.0.0&preserve-view=true)</li><li>[**Python SDK**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-3.0.0&preserve-view=true)</li><li>[**Java SDK**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-3.0.0&preserve-view=true)</li><li>[**JavaScript SDK**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-3.0.0&preserve-view=true)</li></ul>|**prebuilt-document**|
+
+### Try Document Intelligence
+
+Try extracting data from forms and documents using the Document Intelligence Studio.
+
+You need the following resources:
+
+* An Azure subscription. You can [create one for free](https://azure.microsoft.com/free/cognitive-services/)
+
+* A [Form Recognizer instance (Document Intelligence forthcoming)](https://portal.azure.com/#create/Microsoft.CognitiveServicesFormRecognizer) in the Azure portal. You can use the free pricing tier (`F0`) to try the service. After your resource deploys, select **Go to resource** to get your key and endpoint.
+
+ :::image type="content" source="media/containers/keys-and-endpoint.png" alt-text="Screenshot: keys and endpoint location in the Azure portal.":::
+
+#### Document Intelligence Studio
+
+> [!NOTE]
+> Document Intelligence Studio and the general document model are available with the v3.0 API.
+
+1. On the Document Intelligence Studio home page, select **General documents**
+
+1. You can analyze the sample document or select the **+ Add** button to upload your own sample.
+
+1. Select the **Analyze** button:
+
+ :::image type="content" source="media/studio/general-document-analyze-1.png" alt-text="Screenshot: analyze general document menu.":::
+
+ > [!div class="nextstepaction"]
+ > [Try Document Intelligence Studio](https://formrecognizer.appliedai.azure.com/studio/prebuilt?formType=document)
+
+## Key-value pairs
+
+Key-value pairs are specific spans within the document that identify a label or key and its associated response or value. In a structured form, these pairs could be the label and the value the user entered for that field. In an unstructured document, they could be the date a contract was executed on based on the text in a paragraph. The AI model is trained to extract identifiable keys and values based on a wide variety of document types, formats, and structures.
+
+Keys can also exist in isolation when the model detects that a key exists, with no associated value or when processing optional fields. For example, a middle name field may be left blank on a form in some instances. Key-value pairs are spans of text contained in the document. For documents where the same value is described in different ways, for example, customer/user, the associated key is either customer or user (based on context).
+
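+The following abbreviated, illustrative snippet shows how extracted key-value pairs, including a selection mark associated with a key, can appear in the analyze result; the values are fictitious and most properties are omitted:
+
+```json
+{
+  "analyzeResult": {
+    "keyValuePairs": [
+      {
+        "key": { "content": "Email:" },
+        "value": { "content": "maria@contoso.com" },
+        "confidence": 0.98
+      },
+      {
+        "key": { "content": "Subscribe to newsletter" },
+        "value": { "content": ":selected:" },
+        "confidence": 0.92
+      }
+    ]
+  }
+}
+```
+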
+## Data extraction
+
+| **Model** | **Text extraction** |**Key-Value pairs** |**Selection Marks** | **Tables** | **Common Names** |
+| | :: |::| :: | :: | :: |
+|General document | ✓ | ✓ | ✓ | ✓ | ✓* |
+
+✓* - Only available in the 2023-02-28-preview API version.
+
+## Input requirements
++
+## Supported languages and locales
+
+>[!NOTE]
+> It's not necessary to specify a locale. This is an optional parameter. The Document Intelligence deep-learning technology will auto-detect the language of the text in your image.
+
+| Model | Language - Locale code | Default |
+|--|:-|:|
+|General document| <ul><li>English (United States) - en-US</li></ul>| English (United States) - en-US|
+
+## Considerations
+
+* Keys are spans of text extracted from the document. For semi-structured documents, keys may need to be mapped to an existing dictionary of keys.
+
+* Expect to see key-value pairs with a key but no value, for example, if a user chose not to provide an email address on the form.
+
+## Next steps
+
+* Follow our [**Document Intelligence v3.0 migration guide**](v3-migration-guide.md) to learn how to use the v3.0 version in your applications and workflows.
+
+* Explore our [**REST API**](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-2022-08-31/operations/AnalyzeDocument) to learn more about the v3.0 version and new capabilities.
+
+> [!div class="nextstepaction"]
+> [Try the Document Intelligence Studio](https://formrecognizer.appliedai.azure.com/studio)
ai-services Concept Id Document https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/document-intelligence/concept-id-document.md
+
+ Title: Identity document (ID) processing - Document Intelligence
+
+description: Automate identity document (ID) processing of driver licenses, passports, and more with Document Intelligence.
+++++ Last updated : 07/18/2023++
+monikerRange: '<=doc-intel-3.0.0'
++
+<!-- markdownlint-disable MD033 -->
+
+# Document Intelligence identity document model
++++
+Document Intelligence Identity document (ID) model combines Optical Character Recognition (OCR) with deep learning models to analyze and extract key information from identity documents. The API analyzes identity documents (including the following) and returns a structured JSON data representation:
+
+* US Drivers Licenses (all 50 states and District of Columbia)
+* International passport biographical pages
+* US state IDs
+* Social Security cards
+* Permanent resident cards
+++
+Document Intelligence can analyze and extract information from government-issued identification documents (IDs) using its prebuilt IDs model. It combines our powerful [Optical Character Recognition (OCR)](../../ai-services/computer-vision/overview-ocr.md) capabilities with ID recognition capabilities to extract key information from Worldwide Passports and U.S. Driver's Licenses (all 50 states and D.C.). The IDs API extracts key information from these identity documents, such as first name, last name, date of birth, document number, and more. This API is available in the Document Intelligence v2.1 as a cloud service.
++
+## Identity document processing
+
+Identity document processing involves extracting data from identity documents either manually or by using OCR-based technology. ID document processing is an important step in any business process that requires proof of identity. Examples include customer verification in banks and other financial institutions, mortgage applications, medical visits, claim processing, the hospitality industry, and more. Individuals provide some proof of their identity via driver licenses, passports, and other similar documents so that the business can efficiently verify them before providing services and benefits.
++
+***Sample U.S. Driver's License processed with [Document Intelligence Studio](https://formrecognizer.appliedai.azure.com/studio/prebuilt?formType=idDocument)***
++++
+## Data extraction
+
+The prebuilt IDs service extracts the key values from worldwide passports and U.S. Driver's Licenses and returns them in an organized structured JSON response.
+
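+The following heavily trimmed, illustrative example shows the kind of structured output returned for a driver's license (shown here in the v3.0 result format); the values are fictitious and only a handful of the available fields are included:
+
+```json
+{
+  "analyzeResult": {
+    "documents": [
+      {
+        "docType": "idDocument.driverLicense",
+        "fields": {
+          "FirstName": { "type": "string", "valueString": "LIAM R." },
+          "LastName": { "type": "string", "valueString": "TALBOT" },
+          "DocumentNumber": { "type": "string", "valueString": "WDLABCD456DG" },
+          "DateOfBirth": { "type": "date", "valueDate": "1958-01-06" },
+          "DateOfExpiration": { "type": "date", "valueDate": "2030-01-06" }
+        }
+      }
+    ]
+  }
+}
+```
+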
+### **Driver's license example**
+
+![Sample Driver's License](./media/id-example-drivers-license.jpg)
+
+### **Passport example**
+
+![Sample Passport](./media/id-example-passport-result.jpg)
++
+## Development options
+
+Document Intelligence v3.0 supports the following tools:
+
+| Feature | Resources | Model ID |
+|-|-|--|
+|**ID document model**|<ul><li> [**Document Intelligence Studio**](https://formrecognizer.appliedai.azure.com)</li><li>[**REST API**](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-2022-08-31/operations/AnalyzeDocument)</li><li>[**C# SDK**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-3.0.0&preserve-view=true)</li><li>[**Python SDK**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-3.0.0&preserve-view=true)</li><li>[**Java SDK**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-3.0.0&preserve-view=true)</li><li>[**JavaScript SDK**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-3.0.0&preserve-view=true)</li></ul>|**prebuilt-idDocument**|
++
+Document Intelligence v2.1 supports the following tools:
+
+| Feature | Resources |
+|-|-|
+|**ID document model**| <ul><li>[**Document Intelligence labeling tool**](https://fott-2-1.azurewebsites.net/prebuilts-analyze)</li><li>[**REST API**](./how-to-guides/use-sdk-rest-api.md?pivots=programming-language-rest-api&preserve-view=true&tabs=windows&view=doc-intel-2.1.0#analyze-identity-id-documents)</li><li>[**Client-library SDK**](~/articles/ai-services/document-intelligence/how-to-guides/use-sdk-rest-api.md?view=doc-intel-2.1.0&preserve-view=true)</li><li>[**Document Intelligence Docker container**](containers/install-run.md?tabs=id-document#run-the-container-with-the-docker-compose-up-command)</li></ul>|
+
+## Input requirements
+++++
+* Supported file formats: JPEG, PNG, PDF, and TIFF
+
+* For PDF and TIFF files, up to 2000 pages are processed. For free-tier subscribers, only the first two pages are processed.
+
+* The file size must be less than 50 MB and dimensions at least 50 x 50 pixels and at most 10,000 x 10,000 pixels.
++
+### Try Document Intelligence
+
+Extract data, including name, birth date, and expiration date, from ID documents. You need the following resources:
+
+* An Azure subscription. You can [create one for free](https://azure.microsoft.com/free/cognitive-services/).
+
+* A [Form Recognizer instance (Document Intelligence forthcoming)](https://portal.azure.com/#create/Microsoft.CognitiveServicesFormRecognizer) in the Azure portal. You can use the free pricing tier (`F0`) to try the service. After your resource deploys, select **Go to resource** to get your key and endpoint.
+
+ :::image type="content" source="media/containers/keys-and-endpoint.png" alt-text="Screenshot: keys and endpoint location in the Azure portal.":::
++
+## Document Intelligence Studio
+
+> [!NOTE]
+> Document Intelligence Studio is available with the v3.0 API (API version 2022-08-31 generally available (GA) release)
+
+1. On the Document Intelligence Studio home page, select **Identity documents**
+
+1. You can analyze the sample ID document or select the **+ Add** button to upload your own sample.
+
+1. Select the **Analyze** button:
+
+ :::image type="content" source="media/studio/id-document-analyze.png" alt-text="Screenshot: analyze ID document menu.":::
+
+ > [!div class="nextstepaction"]
+ > [Try Document Intelligence Studio](https://formrecognizer.appliedai.azure.com/studio/prebuilt?formType=idDocument)
++
+## Document Intelligence Sample Labeling tool
+
+1. Navigate to the [Document Intelligence Sample Tool](https://fott-2-1.azurewebsites.net/).
+
+1. On the sample tool home page, select the **Use prebuilt model to get data** tile.
+
+ :::image type="content" source="media/label-tool/prebuilt-1.jpg" alt-text="Screenshot of the layout model analyze results operation.":::
+
+1. Select the **Form Type** to analyze from the dropdown menu.
+
+1. Choose a URL for the file you would like to analyze from the options below:
+
+ * [**Sample invoice document**](https://raw.githubusercontent.com/Azure-Samples/cognitive-services-REST-api-samples/master/curl/form-recognizer/invoice_sample.jpg).
+ * [**Sample ID document**](https://raw.githubusercontent.com/Azure-Samples/cognitive-services-REST-api-samples/master/curl/form-recognizer/DriverLicense.png).
+ * [**Sample receipt image**](https://raw.githubusercontent.com/Azure-Samples/cognitive-services-REST-api-samples/master/curl/form-recognizer/contoso-allinone.jpg).
+ * [**Sample business card image**](https://raw.githubusercontent.com/Azure/azure-sdk-for-python/master/sdk/formrecognizer/azure-ai-formrecognizer/samples/sample_forms/business_cards/business-card-english.jpg).
+
+1. In the **Source** field, select **URL** from the dropdown menu, paste the selected URL, and select the **Fetch** button.
+
+ :::image type="content" source="media/label-tool/fott-select-url.png" alt-text="Screenshot of source location dropdown menu.":::
+
+1. In the **Document Intelligence service endpoint** field, paste the endpoint that you obtained with your Document Intelligence subscription.
+
+1. In the **key** field, paste the key you obtained from your Document Intelligence resource.
+
+ :::image type="content" source="media/fott-select-form-type.png" alt-text="Screenshot: select document type dropdown menu.":::
+
+1. Select **Run analysis**. The Document Intelligence Sample Labeling tool calls the Analyze Prebuilt API and analyzes the document.
+
+1. View the results - the extracted key-value pairs, line items, highlighted extracted text, and detected tables are displayed.
+
+ :::image type="content" source="media/id-example-drivers-license.jpg" alt-text="Screenshot of the identity model analyze results operation.":::
+
+1. Download the JSON output file to view the detailed results.
+
+ * The "readResults" node contains every line of text with its respective bounding box placement on the page.
+ * The "selectionMarks" node shows every selection mark (checkbox, radio mark) and whether its status is *selected* or *unselected*.
+ * The "pageResults" section includes the tables extracted. For each table, Document Intelligence extracts the text, row, and column index, row and column spanning, bounding box, and more.
+ * The "documentResults" field contains key/value pairs information and line items information for the most relevant parts of the document.
+
+> [!NOTE]
+> The [Sample Labeling tool](https://fott-2-1.azurewebsites.net/) does not support the BMP file format. This is a limitation of the tool, not the Document Intelligence service.
+++
+## Supported document types
+
+| Region | Document Types |
+|--|-|
+|Worldwide|Passport Book, Passport Card|
+|`United States (US)`|Driver License, Identification Card, Residency Permit (Green card), Social Security Card, Military ID|
+|`India (IN)`|Driver License, PAN Card, Aadhaar Card|
+|`Canada (CA)`|Driver License, Identification Card, Residency Permit (Maple Card)|
+|`United Kingdom (GB)`|Driver License, National Identity Card|
+|`Australia (AU)`|Driver License, Photo Card, Key-pass ID (including digital version)|
+
+## Field extractions
+
+The Document Intelligence ID model `prebuilt-idDocument` extracts the following fields per document type in `documents.*.fields`. The JSON output also includes all of the extracted text, organized as documents, words, lines, and styles.
+
+>[!NOTE]
+>
+> In addition to specifying the IdDocument model, you can designate the ID type: driver license, passport, national/regional identity card, residence permit, or US social security card.
+
+### Data extraction (all types)
+
+|**Model ID** | **Text extraction** | **Language detection** | **Selection Marks** | **Tables** | **Paragraphs** | **Structure** |**Key-Value pairs** | **Fields** |
+|:--|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|
+|[prebuilt-idDocument](concept-id-document.md#field-extractions) | ✓ | | | | ✓ | | | ✓ |
+
+### Document types
+
+#### `idDocument.driverLicense` fields extracted
+
+| Field | Type | Description | Example |
+|:|:--|:|:--|
+|`CountryRegion`|`countryRegion`|Country or region code|USA|
+|`Region`|`string`|State or province|Washington|
+|`DocumentNumber`|`string`|Driver license number|WDLABCD456DG|
+|`DocumentDiscriminator`|`string`|Driver license document discriminator|12645646464554646456464544|
+|`FirstName`|`string`|Given name and middle initial if applicable|LIAM R.|
+|`LastName`|`string`|Surname|TALBOT|
+|`Address`|`address`|Address|123 STREET ADDRESS YOUR CITY WA 99999-1234|
+|`DateOfBirth`|`date`|Date of birth|01/06/1958|
+|`DateOfExpiration`|`date`|Date of expiration|08/12/2020|
+|`DateOfIssue`|`date`|Date of issue|08/12/2012|
+|`EyeColor`|`string`|Eye color|BLU|
+|`HairColor`|`string`|Hair color|BRO|
+|`Height`|`string`|Height|5'11"|
+|`Weight`|`string`|Weight|185LB|
+|`Sex`|`string`|Sex|M|
+|`Endorsements`|`string`|Endorsements|L|
+|`Restrictions`|`string`|Restrictions|B|
+|`VehicleClassifications`|`string`|Vehicle classification|D|
+
+#### `idDocument.passport` fields extracted
+
+| Field | Type | Description | Example |
+|:|:--|:|:--|
+|`DocumentNumber`|`string`|Passport number|340020013|
+|`FirstName`|`string`|Given name and middle initial if applicable|JENNIFER|
+|`MiddleName`|`string`|Name between given name and surname|REYES|
+|`LastName`|`string`|Surname|BROOKS|
+|`Aliases`|`array`|||
+|`Aliases.*`|`string`|Also known as|MAY LIN|
+|`DateOfBirth`|`date`|Date of birth|1980-01-01|
+|`DateOfExpiration`|`date`|Date of expiration|2019-05-05|
+|`DateOfIssue`|`date`|Date of issue|2014-05-06|
+|`Sex`|`string`|Sex|F|
+|`CountryRegion`|`countryRegion`|Issuing country or organization|USA|
+|`DocumentType`|`string`|Document type|P|
+|`Nationality`|`countryRegion`|Nationality|USA|
+|`PlaceOfBirth`|`string`|Place of birth|MASSACHUSETTS, U.S.A.|
+|`PlaceOfIssue`|`string`|Place of issue|LA PAZ|
+|`IssuingAuthority`|`string`|Issuing authority|United States Department of State|
+|`PersonalNumber`|`string`|Personal ID. No.|A234567893|
+|`MachineReadableZone`|`object`|Machine readable zone (MRZ)|P<USABROOKS<<JENNIFER<<<<<<<<<<<<<<<<<<<<<<< 3400200135USA8001014F1905054710000307<715816|
+|`MachineReadableZone.FirstName`|`string`|Given name and middle initial if applicable|JENNIFER|
+|`MachineReadableZone.LastName`|`string`|Surname|BROOKS|
+|`MachineReadableZone.DocumentNumber`|`string`|Passport number|340020013|
+|`MachineReadableZone.CountryRegion`|`countryRegion`|Issuing country or organization|USA|
+|`MachineReadableZone.Nationality`|`countryRegion`|Nationality|USA|
+|`MachineReadableZone.DateOfBirth`|`date`|Date of birth|1980-01-01|
+|`MachineReadableZone.DateOfExpiration`|`date`|Date of expiration|2019-05-05|
+|`MachineReadableZone.Sex`|`string`|Sex|F|
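+
+`MachineReadableZone` is returned as an object whose subfields are themselves fields. The following Python snippet is a hedged sketch of one way to read those nested values, assuming `result` is the `AnalyzeResult` of a `prebuilt-idDocument` call made with the `azure-ai-formrecognizer` package (version 3.2 or later).
+
+```python
+# Sketch: read nested MachineReadableZone subfields from a prebuilt-idDocument result.
+from azure.ai.formrecognizer import AnalyzeResult
+
+def print_mrz(result: AnalyzeResult) -> None:
+    for doc in result.documents:
+        mrz = doc.fields.get("MachineReadableZone")
+        if not mrz or not mrz.value:
+            continue
+        # mrz.value is a dictionary of sub-fields, each itself a DocumentField.
+        for name in ("FirstName", "LastName", "DocumentNumber", "Nationality"):
+            sub = mrz.value.get(name)
+            if sub:
+                print(f"MRZ {name}: {sub.value}")
+```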
+
+#### `idDocument.nationalIdentityCard` fields extracted
+
+| Field | Type | Description | Example |
+|:|:--|:|:--|
+|`CountryRegion`|`countryRegion`|Country or region code|USA|
+|`Region`|`string`|State or province|Washington|
+|`DocumentNumber`|`string`|National/regional identity card number|WDLABCD456DG|
+|`DocumentDiscriminator`|`string`|National/regional identity card document discriminator|12645646464554646456464544|
+|`FirstName`|`string`|Given name and middle initial if applicable|LIAM R.|
+|`LastName`|`string`|Surname|TALBOT|
+|`Address`|`address`|Address|123 STREET ADDRESS YOUR CITY WA 99999-1234|
+|`DateOfBirth`|`date`|Date of birth|01/06/1958|
+|`DateOfExpiration`|`date`|Date of expiration|08/12/2020|
+|`DateOfIssue`|`date`|Date of issue|08/12/2012|
+|`EyeColor`|`string`|Eye color|BLU|
+|`HairColor`|`string`|Hair color|BRO|
+|`Height`|`string`|Height|5'11"|
+|`Weight`|`string`|Weight|185LB|
+|`Sex`|`string`|Sex|M|
+
+#### `idDocument.residencePermit` fields extracted
+
+| Field | Type | Description | Example |
+|:|:--|:|:--|
+|`CountryRegion`|`countryRegion`|Country or region code|USA|
+|`DocumentNumber`|`string`|Residence permit number|WDLABCD456DG|
+|`FirstName`|`string`|Given name and middle initial if applicable|LIAM R.|
+|`LastName`|`string`|Surname|TALBOT|
+|`DateOfBirth`|`date`|Date of birth|01/06/1958|
+|`DateOfExpiration`|`date`|Date of expiration|08/12/2020|
+|`DateOfIssue`|`date`|Date of issue|08/12/2012|
+|`Sex`|`string`|Sex|M|
+|`PlaceOfBirth`|`string`|Place of birth|Germany|
+|`Category`|`string`|Permit category|DV2|
+
+#### `idDocument.usSocialSecurityCard` fields extracted
+
+| Field | Type | Description | Example |
+|:|:--|:|:--|
+|`DocumentNumber`|`string`|Social security card number|WDLABCD456DG|
+|`FirstName`|`string`|Given name and middle initial if applicable|LIAM R.|
+|`LastName`|`string`|Surname|TALBOT|
+|`DateOfIssue`|`date`|Date of issue|08/12/2012|
+
+#### `idDocument` fields extracted
+
+| Field | Type | Description | Example |
+|:|:--|:|:--|
+|`Address`|`address`|Address|123 STREET ADDRESS YOUR CITY WA 99999-1234|
+|`DocumentNumber`|`string`|Driver license number|WDLABCD456DG|
+|`FirstName`|`string`|Given name and middle initial if applicable|LIAM R.|
+|`LastName`|`string`|Surname|TALBOT|
+|`DateOfBirth`|`date`|Date of birth|01/06/1958|
+|`DateOfExpiration`|`date`|Date of expiration|08/12/2020|
+++
+## Supported document types and locales
+
+**Prebuilt ID v2.1** extracts key values from worldwide passports and U.S. Driver's Licenses in the **en-us** locale.
+
+## Fields extracted
+
+|Name| Type | Description | Value |
+|:--|:-|:-|:-|
+| Country | country | Country code compliant with ISO 3166 standard | "USA" |
+| DateOfBirth | date | DOB in YYYY-MM-DD format | "1980-01-01" |
+| DateOfExpiration | date | Expiration date in YYYY-MM-DD format | "2019-05-05" |
+| DocumentNumber | string | Relevant passport number, driver's license number, etc. | "340020013" |
+| FirstName | string | Extracted given name and middle initial if applicable | "JENNIFER" |
+| LastName | string | Extracted surname | "BROOKS" |
+| Nationality | country | Country code compliant with ISO 3166 standard | "USA" |
+| Sex | gender | Possible extracted values include "M", "F", and "X" | "F" |
+| MachineReadableZone | object | Extracted Passport MRZ including two lines of 44 characters each | "P<USABROOKS<<JENNIFER<<<<<<<<<<<<<<<<<<<<<<< 3400200135USA8001014F1905054710000307<715816" |
+| DocumentType | string | Document type, for example, Passport, Driver's License | "passport" |
+| Address | string | Extracted address (Driver's License only) | "123 STREET ADDRESS YOUR CITY WA 99999-1234"|
+| Region | string | Extracted region, state, province, etc. (Driver's License only) | "Washington" |
+
+### Migration guide
+
+* Follow our [**Document Intelligence v3.0 migration guide**](v3-migration-guide.md) to learn how to use the v3.0 version in your applications and workflows.
++
+## Next steps
++
+* Try processing your own forms and documents with the [Document Intelligence Studio](https://formrecognizer.appliedai.azure.com/studio)
+
+* Complete a [Document Intelligence quickstart](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-3.0.0&preserve-view=true) and get started creating a document processing app in the development language of your choice.
+++
+* Try processing your own forms and documents with the [Document Intelligence Sample Labeling tool](https://fott-2-1.azurewebsites.net/)
+
+* Complete a [Document Intelligence quickstart](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-2.1.0&preserve-view=true) and get started creating a document processing app in the development language of your choice.
+
ai-services Concept Insurance Card https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/document-intelligence/concept-insurance-card.md
+
+ Title: Document Intelligence insurance card prebuilt model
+
+description: Data extraction and analysis using the insurance card model
+++++ Last updated : 07/18/2023+
+monikerRange: 'doc-intel-3.0.0'
+++
+# Document Intelligence health insurance card model
+
+**This article applies to:** ![Document Intelligence v3.0 checkmark](media/yes-icon.png) **Document Intelligence v3.0**.
+
+The Document Intelligence health insurance card model combines powerful Optical Character Recognition (OCR) capabilities with deep learning models to analyze and extract key information from US health insurance cards. A health insurance card is a key document for care processing and can be digitally analyzed for patient onboarding, financial coverage information, cashless payments, and insurance claim processing. The health insurance card model analyzes health card images; extracts key information such as insurer, member, prescription, and group number; and returns a structured JSON representation. Health insurance cards can be presented in various formats and quality including phone-captured images, scanned documents, and digital PDFs.
+
+***Sample health insurance card processed using Document Intelligence Studio***
++
+## Development options
+
+Document Intelligence v3.0 supports the prebuilt health insurance card model with the following tools:
+
+| Feature | Resources | Model ID |
+|-|-|--|
+|**health insurance card model**|<ul><li> [**Document Intelligence Studio**](https://formrecognizer.appliedai.azure.com)</li><li>[**REST API**](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-2022-08-31/operations/AnalyzeDocument)</li><li>[**C# SDK**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-3.0.0&preserve-view=true#prebuilt-model)</li><li>[**Python SDK**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-3.0.0&preserve-view=true#prebuilt-model)</li><li>[**Java SDK**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-3.0.0&preserve-view=true#prebuilt-model)</li><li>[**JavaScript SDK**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-3.0.0&preserve-view=true#prebuilt-model)</li></ul>|**prebuilt-healthInsuranceCard.us**|
+
+### Try Document Intelligence
+
+See how data is extracted from health insurance cards using the Document Intelligence Studio. You need the following resources:
+
+* An Azure subscription. You can [create one for free](https://azure.microsoft.com/free/cognitive-services/).
+
+* A [Document Intelligence instance](https://ms.portal.azure.com/#create/Microsoft.CognitiveServicesFormRecognizer) in the Azure portal. You can use the free pricing tier (`F0`) to try the service. After your resource deploys, select **Go to resource** to get your key and endpoint.
+
+ :::image type="content" source="media/containers/keys-and-endpoint.png" alt-text="Screenshot of keys and endpoint location in the Azure portal.":::
+
+#### Document Intelligence Studio
+
+> [!NOTE]
+> Document Intelligence Studio is available with API version v3.0.
+
+1. On the [Document Intelligence Studio home page](https://formrecognizer.appliedai.azure.com/studio), select **Health insurance cards**.
+
+1. You can analyze the sample insurance card document or select the **+ Add** button to upload your own sample.
+
+1. Select the **Analyze** button:
+
+ :::image type="content" source="media/studio/insurance-card-analyze.png" alt-text="Screenshot: analyze health insurance card window in the Document Intelligence Studio.":::
+
+ > [!div class="nextstepaction"]
+ > [Try Document Intelligence Studio](https://formrecognizer.appliedai.azure.com/studio/prebuilt?formType=healthInsuranceCard.us)
+
+## Input requirements
++
+## Supported languages and locales
+
+| Model | Language - Locale code | Default |
+|--|:-|:|
+|prebuilt-healthInsuranceCard.us| <ul><li>English (United States)</li></ul>|English (United States) - en-US|
+
+## Field extraction
+
+| Field | Type | Description | Example |
+|:|:--|:|:--|
+|`Insurer`|`string`|Health insurance provider name|PREMERA<br>BLUE CROSS|
+|`Member`|`object`|||
+|`Member.Name`|`string`|Member name|ANGEL BROWN|
+|`Member.BirthDate`|`date`|Member date of birth|01/06/1958|
+|`Member.Employer`|`string`|Member's employer|Microsoft|
+|`Member.Gender`|`string`|Member gender|M|
+|`Member.IdNumberSuffix`|`string`|Identification Number Suffix as it appears on some health insurance cards|01|
+|`Dependents`|`array`|Array holding list of dependents, ordered where possible by membership suffix value||
+|`Dependents.*`|`object`|||
+|`Dependents.*.Name`|`string`|Dependent name|01|
+|`IdNumber`|`object`|||
+|`IdNumber.Prefix`|`string`|Identification Number Prefix as it appears on some health insurance cards|ABC|
+|`IdNumber.Number`|`string`|Identification Number|123456789|
+|`GroupNumber`|`string`|Insurance Group Number|1000000|
+|`PrescriptionInfo`|`object`|||
+|`PrescriptionInfo.Issuer`|`string`|ANSI issuer identification number (IIN)|(80840) 300-11908-77|
+|`PrescriptionInfo.RxBIN`|`string`|Prescription issued BIN number|987654|
+|`PrescriptionInfo.RxPCN`|`string`|Prescription processor control number|63200305|
+|`PrescriptionInfo.RxGrp`|`string`|Prescription group number|BCAAXYZ|
+|`PrescriptionInfo.RxId`|`string`|Prescription identification number. If not present, defaults to membership ID number|P97020065|
+|`PrescriptionInfo.RxPlan`|`string`|Prescription Plan number|A1|
+|`Pbm`|`string`|Pharmacy Benefit Manager for the plan|CVS CAREMARK|
+|`EffectiveDate`|`date`|Date from which the plan is effective|08/12/2012|
+|`Copays`|`array`|Array holding list of copay Benefits||
+|`Copays.*`|`object`|||
+|`Copays.*.Benefit`|`string`|Co-Pay Benefit name|Deductible|
+|`Copays.*.Amount`|`currency`|Co-Pay required amount|$1,500|
+|`Payer`|`object`|||
+|`Payer.Id`|`string`|Payer ID Number|89063|
+|`Payer.Address`|`address`|Payer address|123 Service St., Redmond WA, 98052|
+|`Payer.PhoneNumber`|`phoneNumber`|Payer phone number|+1 (987) 213-5674|
+|`Plan`|`object`|||
+|`Plan.Number`|`string`|Plan number|456|
+|`Plan.Name`|`string`|Plan name (if the card indicates Medicaid, the value is Medicaid)|HEALTH SAVINGS PLAN|
+|`Plan.Type`|`string`|Plan type|PPO|
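+
+Object fields such as `Member` and array fields such as `Copays` contain nested subfields. The following Python snippet is a hedged sketch of one way to read them, assuming `result` is the `AnalyzeResult` of a `prebuilt-healthInsuranceCard.us` call made with the `azure-ai-formrecognizer` package (version 3.2 or later); the field names follow the table above.
+
+```python
+# Sketch: read nested and array-valued fields from a health insurance card result.
+from azure.ai.formrecognizer import AnalyzeResult
+
+def print_insurance_fields(result: AnalyzeResult) -> None:
+    for doc in result.documents:
+        insurer = doc.fields.get("Insurer")
+        if insurer:
+            print(f"Insurer: {insurer.value}")
+        member = doc.fields.get("Member")
+        if member and member.value:
+            name = member.value.get("Name")  # Member is an object of sub-fields
+            if name:
+                print(f"Member name: {name.value}")
+        copays = doc.fields.get("Copays")
+        if copays and copays.value:
+            for copay in copays.value:  # Copays is an array of objects
+                benefit = copay.value.get("Benefit")
+                amount = copay.value.get("Amount")
+                print(f"Copay: {benefit.content if benefit else '?'} = "
+                      f"{amount.content if amount else '?'}")
+```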
+
+### Migration guide and REST API v3.0
+
+* Follow our [**Document Intelligence v3.0 migration guide**](v3-migration-guide.md) to learn how to use the v3.0 version in your applications and workflows.
+
+* Explore our [**REST API**](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-2022-08-31/operations/AnalyzeDocument) to learn more about the v3.0 version and new capabilities.
+
+## Next steps
+
+* Try processing your own forms and documents with the [Document Intelligence Studio](https://formrecognizer.appliedai.azure.com/studio)
+
+* Complete a [Document Intelligence quickstart](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-3.0.0&preserve-view=true) and get started creating a document processing app in the development language of your choice.
ai-services Concept Invoice https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/document-intelligence/concept-invoice.md
+
+ Title: Invoice data extraction - Document Intelligence
+
+description: Automate invoice data extraction with Document Intelligence's invoice model to extract accounts payable data including invoice line items.
+++++ Last updated : 07/18/2023+
+monikerRange: '<=doc-intel-3.0.0'
++
+<!-- markdownlint-disable MD033 -->
+
+# Document Intelligence invoice model
+++
+The Document Intelligence invoice model uses powerful Optical Character Recognition (OCR) capabilities to analyze and extract key fields and line items from sales invoices, utility bills, and purchase orders. Invoices can be of various formats and quality including phone-captured images, scanned documents, and digital PDFs. The API analyzes invoice text; extracts key information such as customer name, billing address, due date, and amount due; and returns a structured JSON data representation. The model currently supports both English and Spanish invoices.
+
+**Supported document types:**
+
+* Invoices
+* Utility bills
+* Sales orders
+* Purchase orders
+
+## Automated invoice processing
+
+Automated invoice processing is the process of extracting key accounts payable fields from billing account documents. Extracted data includes line items from invoices integrated with your accounts payable (AP) workflows for reviews and payments. Historically, the accounts payable process has been handled manually and is, therefore, time consuming. Accurate extraction of key data from invoices is typically the first and one of the most critical steps in the invoice automation process.
++
+**Sample invoice processed with [Document Intelligence Studio](https://formrecognizer.appliedai.azure.com/studio/prebuilt?formType=invoice)**:
++++
+**Sample invoice processed with [Document Intelligence Sample Labeling tool](https://fott-2-1.azurewebsites.net)**:
+++
+## Development options
++
+Document Intelligence v3.0 supports the following tools:
+
+| Feature | Resources | Model ID |
+|-|-|--|
+|**Invoice model** | <ul><li>[**Document Intelligence Studio**](https://formrecognizer.appliedai.azure.com)</li><li>[**REST API**](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-2022-08-31/operations/AnalyzeDocument)</li><li>[**C# SDK**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-3.0.0&preserve-view=true)</li><li>[**Python SDK**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-3.0.0&preserve-view=true)</li><li>[**Java SDK**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-3.0.0&preserve-view=true)</li><li>[**JavaScript SDK**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-3.0.0&preserve-view=true)</li></ul>|**prebuilt-invoice**|
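+
+The following Python snippet is a minimal sketch, not part of the official quickstarts, of how the v3.0 `prebuilt-invoice` model can be called with the `azure-ai-formrecognizer` package (version 3.2 or later). The endpoint, key, and document URL are placeholder assumptions.
+
+```python
+# Hedged sketch: analyze an invoice at a URL with the prebuilt-invoice model.
+from azure.ai.formrecognizer import DocumentAnalysisClient
+from azure.core.credentials import AzureKeyCredential
+
+endpoint = "https://<your-resource>.cognitiveservices.azure.com/"  # placeholder
+key = "<your-key>"  # placeholder
+invoice_url = "https://<your-storage>/sample-invoice.pdf"  # placeholder
+
+client = DocumentAnalysisClient(endpoint, AzureKeyCredential(key))
+poller = client.begin_analyze_document_from_url("prebuilt-invoice", invoice_url)
+result = poller.result()
+
+for invoice in result.documents:
+    for name in ("VendorName", "InvoiceId", "InvoiceDate", "InvoiceTotal", "AmountDue"):
+        field = invoice.fields.get(name)
+        if field:
+            print(f"{name}: {field.content} (confidence {field.confidence})")
+```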
+++
+Document Intelligence v2.1 supports the following tools:
+
+| Feature | Resources |
+|-|-|
+|**Invoice model**| <ul><li>[**Document Intelligence labeling tool**](https://fott-2-1.azurewebsites.net/prebuilts-analyze)</li><li>[**REST API**](./how-to-guides/use-sdk-rest-api.md?pivots=programming-language-rest-api&preserve-view=true&tabs=windows&view=doc-intel-2.1.0#analyze-invoices)</li><li>[**Client-library SDK**](~/articles/ai-services/document-intelligence/how-to-guides/use-sdk-rest-api.md?view=doc-intel-2.1.0&preserve-view=true)</li><li>[**Document Intelligence Docker container**](containers/install-run.md?tabs=invoice#run-the-container-with-the-docker-compose-up-command)</li></ul>|
++
+## Input requirements
+++++
+* Supported file formats: JPEG, PNG, PDF, and TIFF
+* For PDF and TIFF, up to 2000 pages are processed. For free tier subscribers, only the first two pages are processed.
+* The file size must be less than 50 MB and dimensions at least 50 x 50 pixels and at most 10,000 x 10,000 pixels.
++
+## Try invoice data extraction
+
+See how data, including customer information, vendor details, and line items, is extracted from invoices. You need the following resources:
+
+* An Azure subscription. You can [create one for free](https://azure.microsoft.com/free/cognitive-services/).
+
+* A [Form Recognizer instance (Document Intelligence forthcoming)](https://portal.azure.com/#create/Microsoft.CognitiveServicesFormRecognizer) in the Azure portal. You can use the free pricing tier (`F0`) to try the service. After your resource deploys, select **Go to resource** to get your key and endpoint.
+
+ :::image type="content" source="media/containers/keys-and-endpoint.png" alt-text="Screenshot: keys and endpoint location in the Azure portal.":::
++
+## Document Intelligence Studio
+
+1. On the Document Intelligence Studio home page, select **Invoices**
+
+1. You can analyze the sample invoice or select the **+ Add** button to upload your own sample.
+
+1. Select the **Analyze** button:
+
+ :::image type="content" source="media/studio/invoice-analyze.png" alt-text="Screenshot: analyze invoice menu.":::
+
+> [!div class="nextstepaction"]
+> [Try Document Intelligence Studio](https://formrecognizer.appliedai.azure.com/studio/prebuilt?formType=invoice)
+++
+## Document Intelligence Sample Labeling tool
+
+1. Navigate to the [Document Intelligence Sample Tool](https://fott-2-1.azurewebsites.net/).
+
+1. On the sample tool home page, select the **Use prebuilt model to get data** tile.
+
+ :::image type="content" source="media/label-tool/prebuilt-1.jpg" alt-text="Screenshot of layout model analyze results process.":::
+
+1. Select the **Form Type** to analyze from the dropdown menu.
+
+1. Choose a URL for the file you would like to analyze from the options below:
+
+ * [**Sample invoice document**](https://raw.githubusercontent.com/Azure-Samples/cognitive-services-REST-api-samples/master/curl/form-recognizer/invoice_sample.jpg).
+ * [**Sample ID document**](https://raw.githubusercontent.com/Azure-Samples/cognitive-services-REST-api-samples/master/curl/form-recognizer/DriverLicense.png).
+ * [**Sample receipt image**](https://raw.githubusercontent.com/Azure-Samples/cognitive-services-REST-api-samples/master/curl/form-recognizer/contoso-allinone.jpg).
+ * [**Sample business card image**](https://raw.githubusercontent.com/Azure/azure-sdk-for-python/master/sdk/formrecognizer/azure-ai-formrecognizer/samples/sample_forms/business_cards/business-card-english.jpg).
+
+1. In the **Source** field, select **URL** from the dropdown menu, paste the selected URL, and select the **Fetch** button.
+
+ :::image type="content" source="media/label-tool/fott-select-url.png" alt-text="Screenshot of source location dropdown menu.":::
+
+1. In the **Document Intelligence service endpoint** field, paste the endpoint that you obtained with your Document Intelligence subscription.
+
+1. In the **key** field, paste the key you obtained from your Document Intelligence resource.
+
+ :::image type="content" source="media/fott-select-form-type.png" alt-text="Screenshot showing the select-form-type dropdown menu.":::
+
+1. Select **Run analysis**. The Document Intelligence Sample Labeling tool calls the Analyze Prebuilt API and analyzes the document.
+
+1. View the results - the extracted key-value pairs, line items, highlighted extracted text, and detected tables are displayed.
+
+ :::image type="content" source="media/invoice-example-new.jpg" alt-text="Screenshot of layout model analyze results operation.":::
+
+> [!NOTE]
+> The [Sample Labeling tool](https://fott-2-1.azurewebsites.net/) does not support the BMP file format. This is a limitation of the tool, not the Document Intelligence service.
+++
+## Supported languages and locales
+
+>[!NOTE]
+> Document Intelligence auto-detects language and locale data.
+
+| Supported languages | Details |
+|:-|:|
+| &bullet; English (en) | United States (us), Australia (-au), Canada (-ca), United Kingdom (-uk), India (-in)|
+| &bullet; Spanish (es) |Spain (es)|
+| &bullet; German (de) | Germany (de)|
+| &bullet; French (fr) | France (fr) |
+| &bullet; Italian (it) | Italy (it)|
+| &bullet; Portuguese (pt) | Portugal (pt), Brazil (br)|
+| &bullet; Dutch (nl) | Netherlands (nl)|
+
+## Field extraction
+
+|Name| Type | Description | Standardized output |
+|:--|:-|:-|::|
+| CustomerName | String | Invoiced customer| |
+| CustomerId | String | Customer reference ID | |
+| PurchaseOrder | String | Purchase order reference number | |
+| InvoiceId | String | ID for this specific invoice (often "Invoice Number") | |
+| InvoiceDate | Date | Date the invoice was issued | yyyy-mm-dd|
+| DueDate | Date | Date payment for this invoice is due | yyyy-mm-dd|
+| VendorName | String | Vendor name | |
+| VendorTaxId | String | The taxpayer number associated with the vendor | |
+| VendorAddress | String | Vendor mailing address| |
+| VendorAddressRecipient | String | Name associated with the VendorAddress | |
+| CustomerAddress | String | Mailing address for the Customer | |
+| CustomerTaxId | String | The taxpayer number associated with the customer | |
+| CustomerAddressRecipient | String | Name associated with the CustomerAddress | |
+| BillingAddress | String | Explicit billing address for the customer | |
+| BillingAddressRecipient | String | Name associated with the BillingAddress | |
+| ShippingAddress | String | Explicit shipping address for the customer | |
+| ShippingAddressRecipient | String | Name associated with the ShippingAddress | |
+| PaymentTerm | String | The terms of payment for the invoice | |
+|Sub&#8203;Total| Number | Subtotal field identified on this invoice | Integer |
+| TotalTax | Number | Total tax field identified on this invoice | Integer |
+| InvoiceTotal | Number (USD) | Total new charges associated with this invoice | Integer |
+| AmountDue | Number (USD) | Total Amount Due to the vendor | Integer |
+| ServiceAddress | String | Explicit service address or property address for the customer | |
+| ServiceAddressRecipient | String | Name associated with the ServiceAddress | |
+| RemittanceAddress | String | Explicit remittance or payment address for the customer | |
+| RemittanceAddressRecipient | String | Name associated with the RemittanceAddress | |
+| ServiceStartDate | Date | First date for the service period (for example, a utility bill service period) | yyyy-mm-dd |
+| ServiceEndDate | Date | End date for the service period (for example, a utility bill service period) | yyyy-mm-dd|
+| PreviousUnpaidBalance | Number | Explicit previously unpaid balance | Integer |
+| CurrencyCode | String | The currency code associated with the extracted amount | |
+| PaymentDetails | Array | An array that holds Payment Option details such as `IBAN` and `SWIFT` | |
+| TotalDiscount | Number | The total discount applied to an invoice | Integer |
+| TaxItems (en-IN only) | Array | An array that holds added tax information such as `CGST`, `IGST`, and `SGST`. This line item is currently only available for the en-IN locale | |
+
+### Line items
+
+Following are the line items extracted from an invoice in the JSON output response (the following output uses this [sample invoice](media/sample-invoice.jpg))
+
+|Name| Type | Description | Text (line item #1) | Value (standardized output) |
+|:--|:-|:-|:-| :-|
+| Items | String | Full string text line of the line item | 3/4/2021 A123 Consulting Services 2 hours $30.00 10% $60.00 | |
+| Amount | Number | The amount of the line item | $60.00 | 100 |
+| Description | String | The text description for the invoice line item | Consulting service | Consulting service |
+| Quantity | Number | The quantity for this invoice line item | 2 | 2 |
+| UnitPrice | Number | The net or gross price (depending on the gross invoice setting of the invoice) of one unit of this item | $30.00 | 30 |
+| ProductCode | String| Product code, product number, or SKU associated with the specific line item | A123 | |
+| Unit | String| The unit of the line item, for example, kg or lb | Hours | |
+| Date | Date| Date corresponding to each line item. Often it's a date the line item was shipped | 3/4/2021| 2021-03-04 |
+| Tax | Number | Tax associated with each line item. Possible values include tax amount and tax Y/N | 10.00 | |
+| TaxRate | Number | Tax Rate associated with each line item. | 10% | |
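+
+In the SDK response, `Items` is an array field whose entries are themselves objects with the subfields listed above. The following Python snippet is a hedged sketch of one way to iterate them, assuming `result` is the `AnalyzeResult` of a `prebuilt-invoice` call made with the `azure-ai-formrecognizer` package (version 3.2 or later).
+
+```python
+# Sketch: iterate invoice line items from a prebuilt-invoice result.
+from azure.ai.formrecognizer import AnalyzeResult
+
+def print_line_items(result: AnalyzeResult) -> None:
+    for invoice in result.documents:
+        items = invoice.fields.get("Items")
+        if not items or not items.value:
+            continue
+        for idx, item in enumerate(items.value, start=1):  # Items is an array field
+            subfields = item.value or {}
+            description = subfields.get("Description")
+            quantity = subfields.get("Quantity")
+            amount = subfields.get("Amount")
+            print(f"Line {idx}: "
+                  f"{description.content if description else ''} | "
+                  f"qty {quantity.content if quantity else ''} | "
+                  f"{amount.content if amount else ''}")
+```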
+
+The invoice key-value pairs and line items extracted are in the `documentResults` section of the JSON output.
+
+### Key-value pairs
+
+The prebuilt invoice model, in the **2022-06-30** and later releases, returns key-value pairs at no extra cost. Key-value pairs are specific spans within the invoice that identify a label or key and its associated response or value. In an invoice, these pairs could be the label and the value the user entered for that field or telephone number. The AI model is trained to extract identifiable keys and values based on a wide variety of document types, formats, and structures.
+
+Keys can also exist in isolation when the model detects that a key exists with no associated value, or when processing optional fields. For example, a middle name field may be left blank on a form in some instances. Key-value pairs are always spans of text contained in the document. For documents where the same value is described in different ways, for example, customer/user, the associated key is either customer or user (based on context).
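+
+The following Python snippet is a hedged sketch of one way to read the returned key-value pairs, assuming `result` is the `AnalyzeResult` of a `prebuilt-invoice` call made with the `azure-ai-formrecognizer` package (version 3.2 or later).
+
+```python
+# Sketch: read general key-value pairs from an invoice analysis result.
+from azure.ai.formrecognizer import AnalyzeResult
+
+def print_key_value_pairs(result: AnalyzeResult) -> None:
+    for pair in result.key_value_pairs or []:
+        key_text = pair.key.content if pair.key else ""
+        # Keys can exist in isolation, so the value may be missing.
+        value_text = pair.value.content if pair.value else "<no value>"
+        print(f"{key_text}: {value_text}")
+```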
+++
+## Supported locales
+
+**Prebuilt invoice v2.1** supports invoices in the **en-us** locale.
+
+## Fields extracted
+
+The Invoice service extracts the text, tables, and 26 invoice fields. Following are the fields extracted from an invoice in the JSON output response (the following output uses this [sample invoice](media/sample-invoice.jpg)).
+
+|Name| Type | Description | Text | Value (standardized output) |
+|:--|:-|:-|:-| :-|
+| CustomerName | string | Customer being invoiced | Microsoft Corp | |
+| CustomerId | string | Reference ID for the customer | CID-12345 | |
+| PurchaseOrder | string | A purchase order reference number | PO-3333 | |
+| InvoiceId | string | ID for this specific invoice (often "Invoice Number") | INV-100 | |
+| InvoiceDate | date | Date the invoice was issued | 11/15/2019 | 2019-11-15 |
+| DueDate | date | Date payment for this invoice is due | 12/15/2019 | 2019-12-15 |
+| VendorName | string | Vendor who has created this invoice | CONTOSO LTD. | |
+| VendorAddress | string | Mailing address for the Vendor | 123 456th St New York, NY, 10001 | |
+| VendorAddressRecipient | string | Name associated with the VendorAddress | Contoso Headquarters | |
+| CustomerAddress | string | Mailing address for the Customer | 123 Other Street, Redmond WA, 98052 | |
+| CustomerAddressRecipient | string | Name associated with the CustomerAddress | Microsoft Corp | |
+| BillingAddress | string | Explicit billing address for the customer | 123 Bill Street, Redmond WA, 98052 | |
+| BillingAddressRecipient | string | Name associated with the BillingAddress | Microsoft Services | |
+| ShippingAddress | string | Explicit shipping address for the customer | 123 Ship Street, Redmond WA, 98052 | |
+| ShippingAddressRecipient | string | Name associated with the ShippingAddress | Microsoft Delivery | |
+| Sub&#8203;Total | number | Subtotal field identified on this invoice | $100.00 | 100 |
+| TotalTax | number | Total tax field identified on this invoice | $10.00 | 10 |
+| InvoiceTotal | number | Total new charges associated with this invoice | $110.00 | 110 |
+| AmountDue | number | Total Amount Due to the vendor | $610.00 | 610 |
+| ServiceAddress | string | Explicit service address or property address for the customer | 123 Service Street, Redmond WA, 98052 | |
+| ServiceAddressRecipient | string | Name associated with the ServiceAddress | Microsoft Services | |
+| RemittanceAddress | string | Explicit remittance or payment address for the customer | 123 Remit St New York, NY, 10001 | |
+| RemittanceAddressRecipient | string | Name associated with the RemittanceAddress | Contoso Billing | |
+| ServiceStartDate | date | First date for the service period (for example, a utility bill service period) | 10/14/2019 | 2019-10-14 |
+| ServiceEndDate | date | End date for the service period (for example, a utility bill service period) | 11/14/2019 | 2019-11-14 |
+| PreviousUnpaidBalance | number | Explicit previously unpaid balance | $500.00 | 500 |
+
+Following are the line items extracted from an invoice in the JSON output response (the following output uses this [sample invoice](./media/sample-invoice.jpg))
+
+|Name| Type | Description | Text (line item #1) | Value (standardized output) |
+|:--|:-|:-|:-| :-|
+| Items | string | Full string text line of the line item | 3/4/2021 A123 Consulting Services 2 hours $30.00 10% $60.00 | |
+| Amount | number | The amount of the line item | $60.00 | 100 |
+| Description | string | The text description for the invoice line item | Consulting service | Consulting service |
+| Quantity | number | The quantity for this invoice line item | 2 | 2 |
+| UnitPrice | number | The net or gross price (depending on the gross invoice setting of the invoice) of one unit of this item | $30.00 | 30 |
+| ProductCode | string| Product code, product number, or SKU associated with the specific line item | A123 | |
+| Unit | string| The unit of the line item, for example, kg or lb | hours | |
+| Date | date| Date corresponding to each line item. Often it's a date the line item was shipped | 3/4/2021| 2021-03-04 |
+| Tax | number | Tax associated with each line item. Possible values include tax amount, tax %, and tax Y/N | 10% | |
+
+### JSON output
+
+The JSON output has three parts:
+
+* `"readResults"` node contains all of the recognized text and selection marks. Text is organized by page, then by line, then by individual words.
+* `"pageResults"` node contains the tables and cells extracted with their bounding boxes, confidence, and a reference to the lines and words in *readResults*.
+* `"documentResults"` node contains the invoice-specific values and line items that the model discovered. It's where you find all the fields from the invoice, such as invoice ID, ship to, bill to, customer, total, and line items; a parsing sketch follows this list.
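+
+The following Python snippet is a hedged sketch of walking those nodes with only the standard library, assuming the v2.1 JSON response was saved to a file named `invoice-output.json` (a placeholder name) and that each field carries a `text` property as shown in the sample output.
+
+```python
+# Sketch: walk a saved v2.1 Analyze Invoice JSON result.
+import json
+
+with open("invoice-output.json") as f:  # placeholder file name
+    output = json.load(f)
+
+analyze_result = output["analyzeResult"]
+for doc in analyze_result["documentResults"]:
+    for name, field in doc["fields"].items():
+        if not field:
+            continue  # optional fields can be null in the response
+        # Each field includes the raw "text" plus typed values such as valueNumber or valueDate.
+        print(name, "->", field.get("text"))
+```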
+
+## Migration guide
+
+* Follow our [**Document Intelligence v3.0 migration guide**](v3-migration-guide.md) to learn how to use the v3.0 version in your applications and workflows.
++
+## Next steps
++
+* Try processing your own forms and documents with the [Document Intelligence Studio](https://formrecognizer.appliedai.azure.com/studio)
+
+* Complete a [Document Intelligence quickstart](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-3.0.0&preserve-view=true) and get started creating a document processing app in the development language of your choice.
+++
+* Try processing your own forms and documents with the [Document Intelligence Sample Labeling tool](https://fott-2-1.azurewebsites.net/)
+
+* Complete a [Document Intelligence quickstart](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-2.1.0&preserve-view=true) and get started creating a document processing app in the development language of your choice.
+
ai-services Concept Layout https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/document-intelligence/concept-layout.md
+
+ Title: Document layout analysis - Document Intelligence
+
+description: Extract text, tables, selections, titles, section headings, page headers, page footers, and more with layout analysis model from Document Intelligence.
+++++ Last updated : 07/18/2023+
+monikerRange: '<=doc-intel-3.0.0'
+++
+# Document Intelligence layout model
+++
+Document Intelligence layout model is an advanced machine-learning based document analysis API available in the Document Intelligence cloud. It enables you to take documents in various formats and return structured data representations of the documents. It combines an enhanced version of our powerful [Optical Character Recognition (OCR)](../../ai-services/computer-vision/overview-ocr.md) capabilities with deep learning models to extract text, tables, selection marks, and document structure.
+
+## Document layout analysis
+
+Document structure layout analysis is the process of analyzing a document to extract regions of interest and their inter-relationships. The goal is to extract text and structural elements from the page to build better semantic understanding models. There are two types of roles that text plays in a document layout:
+
+* **Geometric roles**: Text, tables, and selection marks are examples of geometric roles.
+* **Logical roles**: Titles, headings, and footers are examples of logical roles.
+
+The following illustration shows the typical components in an image of a sample page.
+++
+***Sample form processed with [Document Intelligence Studio](https://formrecognizer.appliedai.azure.com/studio/layout)***
++
+## Development options
+
+Document Intelligence v3.0 supports the following tools:
+
+| Feature | Resources | Model ID |
+|-|||
+|**Layout model**| <ul><li>[**Document Intelligence Studio**](https://formrecognizer.appliedai.azure.com)</li><li>[**REST API**](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-2022-08-31/operations/AnalyzeDocument)</li><li>[**C# SDK**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-3.0.0&preserve-view=true)</li><li>[**Python SDK**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-3.0.0&preserve-view=true)</li><li>[**Java SDK**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-3.0.0&preserve-view=true)</li><li>[**JavaScript SDK**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-3.0.0&preserve-view=true)</li></ul>|**prebuilt-layout**|
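+
+The following Python snippet is a minimal sketch, not part of the official quickstarts, of how the v3.0 `prebuilt-layout` model can be called with the `azure-ai-formrecognizer` package (version 3.2 or later). The endpoint, key, and file path are placeholder assumptions.
+
+```python
+# Hedged sketch: run the prebuilt-layout model and summarize what was extracted.
+from azure.ai.formrecognizer import DocumentAnalysisClient
+from azure.core.credentials import AzureKeyCredential
+
+endpoint = "https://<your-resource>.cognitiveservices.azure.com/"  # placeholder
+key = "<your-key>"  # placeholder
+
+client = DocumentAnalysisClient(endpoint, AzureKeyCredential(key))
+with open("sample-layout.pdf", "rb") as fd:  # placeholder file path
+    poller = client.begin_analyze_document("prebuilt-layout", document=fd)
+result = poller.result()
+
+print(f"Pages: {len(result.pages)}")
+print(f"Tables: {len(result.tables or [])}")
+print(f"Paragraphs: {len(result.paragraphs or [])}")
+```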
+++
+**Sample document processed with [Document Intelligence Sample Labeling tool layout model](https://fott-2-1.azurewebsites.net/layout-analyze)**:
+++
+## Input requirements
+++++
+* Supported file formats: JPEG, PNG, PDF, and TIFF
+* For PDF and TIFF, up to 2000 pages are processed. For free tier subscribers, only the first two pages are processed.
+* The file size must be less than 50 MB and dimensions at least 50 x 50 pixels and at most 10,000 x 10,000 pixels.
++
+### Try layout extraction
+
+See how data, including text, tables, table headers, selection marks, and structure information is extracted from documents using Document Intelligence. You need the following resources:
+
+* An Azure subscription. You can [create one for free](https://azure.microsoft.com/free/cognitive-services/).
+
+* A [Form Recognizer instance (Document Intelligence forthcoming)](https://portal.azure.com/#create/Microsoft.CognitiveServicesFormRecognizer) in the Azure portal. You can use the free pricing tier (`F0`) to try the service. After your resource deploys, select **Go to resource** to get your key and endpoint.
+
+ :::image type="content" source="media/containers/keys-and-endpoint.png" alt-text="Screenshot: keys and endpoint location in the Azure portal.":::
++
+## Document Intelligence Studio
+
+> [!NOTE]
+> Document Intelligence Studio is available with the v3.0 API.
+
+***Sample form processed with [Document Intelligence Studio](https://formrecognizer.appliedai.azure.com/studio/layout)***
++
+1. On the Document Intelligence Studio home page, select **Layout**
+
+1. You can analyze the sample document or select the **+ Add** button to upload your own sample.
+
+1. Select the **Analyze** button:
+
+ :::image type="content" source="media/studio/layout-analyze.png" alt-text="Screenshot: analyze layout menu.":::
+
+ > [!div class="nextstepaction"]
+ > [Try Document Intelligence Studio](https://formrecognizer.appliedai.azure.com/studio/layout)
+++
+## Document Intelligence Sample Labeling tool
+
+1. Navigate to the [Document Intelligence sample tool](https://fott-2-1.azurewebsites.net/).
+
+1. On the sample tool home page, select **Use Layout to get text, tables and selection marks**.
+
+ :::image type="content" source="media/label-tool/layout-1.jpg" alt-text="Screenshot of connection settings for the Document Intelligence layout process.":::
+
+1. In the **Document Intelligence service endpoint** field, paste the endpoint that you obtained with your Document Intelligence subscription.
+
+1. In the **key** field, paste the key you obtained from your Document Intelligence resource.
+
+1. In the **Source** field, select **URL** from the dropdown menu. You can use our sample document:
+
+ * [**Sample document**](https://raw.githubusercontent.com/Azure-Samples/cognitive-services-REST-api-samples/master/curl/form-recognizer/layout-page-001.jpg)
+
+ * Select the **Fetch** button.
+
+1. Select **Run Layout**. The Document Intelligence Sample Labeling tool calls the Analyze Layout API and analyzes the document.
+
+ :::image type="content" source="media/fott-layout.png" alt-text="Screenshot: Layout dropdown window.":::
+
+1. View the results - the highlighted extracted text, detected selection marks, and detected tables are displayed.
+
+ :::image type="content" source="media/label-tool/layout-3.jpg" alt-text="Screenshot of connection settings for the Document Intelligence Sample Labeling tool.":::
++
+## Supported document types
+
+| **Model** | **Images** | **PDF** | **TIFF** |
+| | | | |
+| Layout | ✓ | ✓ | ✓ |
+
+## Supported languages and locales
+
+*See* [Language Support](language-support.md) for a complete list of supported handwritten and printed languages.
++
+### Data extraction
+
+**Starting with v3.0 GA**, the layout model extracts paragraphs and additional structural information such as titles, section headings, page headers, page footers, page numbers, and footnotes from the document page. These structural elements are examples of the logical roles described in the previous section. This capability is supported for PDF documents and images (JPG, PNG, BMP, TIFF).
+
+| **Model** | **Text** | **Selection Marks** | **Tables** | **Paragraphs** | **Logical roles** |
+| | | | | | |
+| Layout | ✓ | ✓ | ✓ | ✓ | ✓ |
+
+**Supported logical roles for paragraphs**:
+The paragraph roles are best used with unstructured documents. Paragraph roles help analyze the structure of the extracted content for better semantic search and analysis.
+
+* title
+* sectionHeading
+* footnote
+* pageHeader
+* pageFooter
+* pageNumber
+++
+### Data extraction support
+
+| **Model** | **Text** | **Tables** | Selection marks|
+| | | | |
+| Layout | ✓ | ✓ | ✓ |
+
+Document Intelligence v2.1 supports the following tools:
+
+| Feature | Resources |
+|-|-|
+|**Layout API**| <ul><li>[**Document Intelligence labeling tool**](https://fott-2-1.azurewebsites.net/layout-analyze)</li><li>[**REST API**](./how-to-guides/use-sdk-rest-api.md?pivots=programming-language-rest-api&preserve-view=true&tabs=windows&view=doc-intel-2.1.0#analyze-layout)</li><li>[**Client-library SDK**](~/articles/ai-services/document-intelligence/how-to-guides/use-sdk-rest-api.md?view=doc-intel-2.1.0&preserve-view=true)</li><li>[**Document Intelligence Docker container**](containers/install-run.md?branch=main&tabs=layout#run-the-container-with-the-docker-compose-up-command)</li></ul>|
+++
+## Model extraction
+
+The layout model extracts text, selection marks, tables, paragraphs, and paragraph types (`roles`) from your documents.
+
+### Paragraph extraction
+
+The Layout model extracts all identified blocks of text in the `paragraphs` collection as a top level object under `analyzeResults`. Each entry in this collection represents a text block and includes the extracted text as `content` and the bounding `polygon` coordinates. The `span` information points to the text fragment within the top level `content` property that contains the full text from the document.
+
+```json
+"paragraphs": [
+ {
+ "spans": [],
+ "boundingRegions": [],
+ "content": "While healthcare is still in the early stages of its Al journey, we are seeing pharmaceutical and other life sciences organizations making major investments in Al and related technologies.\" TOM LAWRY | National Director for Al, Health and Life Sciences | Microsoft"
+ }
+]
+```
+
+### Paragraph roles
+
+The new machine-learning based page object detection extracts logical roles like titles, section headings, page headers, page footers, and more. The Document Intelligence Layout model assigns certain text blocks in the `paragraphs` collection with their specialized role or type predicted by the model. They're best used with unstructured documents to help understand the layout of the extracted content for a richer semantic analysis. The following paragraph roles are supported:
+
+| **Predicted role** | **Description** |
+| | |
+| `title` | The main heading(s) in the page |
+| `sectionHeading` | One or more subheading(s) on the page |
+| `footnote` | Text near the bottom of the page |
+| `pageHeader` | Text near the top edge of the page |
+| `pageFooter` | Text near the bottom edge of the page |
+| `pageNumber` | Page number |
+
+```json
+{
+ "paragraphs": [
+ {
+ "spans": [],
+ "boundingRegions": [],
+ "role": "title",
+ "content": "NEWS TODAY"
+ },
+ {
+ "spans": [],
+ "boundingRegions": [],
+ "role": "sectionHeading",
+ "content": "Mirjam Nilsson"
+ }
+ ]
+}
+
+```
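+
+The following Python snippet is a hedged sketch of grouping paragraphs by their predicted role, assuming `result` is the `AnalyzeResult` of a `prebuilt-layout` call made with the `azure-ai-formrecognizer` package (version 3.2 or later).
+
+```python
+# Sketch: group extracted paragraphs by their predicted role.
+from collections import defaultdict
+from azure.ai.formrecognizer import AnalyzeResult
+
+def paragraphs_by_role(result: AnalyzeResult) -> dict:
+    grouped = defaultdict(list)
+    for paragraph in result.paragraphs or []:
+        # role is None for body text; otherwise title, sectionHeading, pageHeader, and so on.
+        grouped[paragraph.role or "body"].append(paragraph.content)
+    return dict(grouped)
+```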
+
+### Pages extraction
+
+The pages collection is the first object you see in the service response.
+
+```json
+"pages": [
+ {
+ "pageNumber": 1,
+ "angle": 0,
+ "width": 915,
+ "height": 1190,
+ "unit": "pixel",
+ "words": [],
+ "lines": [],
+ "spans": [],
+ "kind": "document"
+ }
+]
+```
+
+### Text lines and words extraction
+
+The document layout model in Document Intelligence extracts print and handwritten style text as `lines` and `words`. The model outputs bounding `polygon` coordinates and `confidence` for the extracted words. The `styles` collection includes any handwritten style for lines if detected along with the spans pointing to the associated text. This feature applies to [supported handwritten languages](language-support.md).
+
+```json
+"words": [
+ {
+ "content": "While",
+ "polygon": [],
+ "confidence": 0.997,
+ "span": {}
+ },
+],
+"lines": [
+ {
+ "content": "While healthcare is still in the early stages of its Al journey, we",
+ "polygon": [],
+ "spans": [],
+ }
+]
+```
+
+### Selection marks extraction
+
+The Layout model also extracts selection marks from documents. Extracted selection marks appear within the `pages` collection for each page. They include the bounding `polygon`, `confidence`, and selection `state` (`selected/unselected`). Any associated text if extracted is also included as the starting index (`offset`) and `length` that references the top level `content` property that contains the full text from the document.
+
+```json
+{
+ "selectionMarks": [
+ {
+ "state": "unselected",
+ "polygon": [],
+ "confidence": 0.995,
+ "span": {
+ "offset": 1421,
+ "length": 12
+ }
+ }
+ ]
+}
+```
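+
+The following Python snippet is a hedged sketch of listing selection marks per page, assuming `result` is the `AnalyzeResult` of a `prebuilt-layout` call made with the `azure-ai-formrecognizer` package (version 3.2 or later).
+
+```python
+# Sketch: list selection marks and their state for each page.
+from azure.ai.formrecognizer import AnalyzeResult
+
+def print_selection_marks(result: AnalyzeResult) -> None:
+    for page in result.pages:
+        for mark in page.selection_marks or []:
+            print(f"Page {page.page_number}: {mark.state} "
+                  f"(confidence {mark.confidence})")
+```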
+
+### Extract tables from documents and images
+
+Extracting tables is a key requirement for processing documents containing large volumes of data typically formatted as tables. The Layout model extracts tables in the `pageResults` section of the JSON output. Extracted table information includes the number of columns and rows, row span, and column span. Each cell with its bounding polygon is output along with information whether it's recognized as a `columnHeader` or not. The model supports extracting tables that are rotated. Each table cell contains the row and column index and bounding polygon coordinates. For the cell text, the model outputs the `span` information containing the starting index (`offset`). The model also outputs the `length` within the top-level content that contains the full text from the document.
+
+```json
+{
+ "tables": [
+ {
+ "rowCount": 9,
+ "columnCount": 4,
+ "cells": [
+ {
+ "kind": "columnHeader",
+ "rowIndex": 0,
+ "columnIndex": 0,
+ "columnSpan": 4,
+ "content": "(In millions, except earnings per share)",
+ "boundingRegions": [],
+ "spans": []
+ },
+ ]
+ }
+ ]
+}
+
+```
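+
+The following Python snippet is a hedged sketch of reading the extracted tables and flagging column headers, assuming `result` is the `AnalyzeResult` of a `prebuilt-layout` call made with the `azure-ai-formrecognizer` package (version 3.2 or later).
+
+```python
+# Sketch: print table dimensions and cells, flagging detected column headers.
+from azure.ai.formrecognizer import AnalyzeResult
+
+def print_tables(result: AnalyzeResult) -> None:
+    for index, table in enumerate(result.tables or []):
+        print(f"Table {index}: {table.row_count} rows x {table.column_count} columns")
+        for cell in table.cells:
+            header = " (columnHeader)" if cell.kind == "columnHeader" else ""
+            print(f"  [{cell.row_index}, {cell.column_index}] {cell.content}{header}")
+```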
+
+### Handwritten style for text lines (Latin languages only)
+
+The response includes classifying whether each text line is of handwriting style or not, along with a confidence score. This feature is only supported for Latin languages. The following JSON snippet shows an example.
+
+```json
+"styles": [
+    {
+        "confidence": 0.95,
+        "spans": [
+            {
+                "offset": 509,
+                "length": 24
+            }
+        ],
+        "isHandwritten": true
+    }
+]
+```
+
+### Annotations extraction
+
+The Layout model extracts annotations in documents, such as checks and crosses. The response includes the kind of annotation, along with a confidence score and bounding polygon.
+
+```json
+ {
+ "pages": [
+ {
+ "annotations": [
+ {
+ "kind": "cross",
+ "polygon": [...],
+ "confidence": 1
+ }
+ ]
+ }
+ ]
+}
+```
+
+### Barcode extraction
+
+The Layout model extracts all identified barcodes in the `barcodes` collection as a top level object under `content`. Inside the `content`, detected barcodes are represented as `:barcode:`. Each entry in this collection represents a barcode and includes the barcode type as `kind` and the embedded barcode content as `value` along with its `polygon` coordinates. Initially, barcodes appear at the end of each page.
+
+#### Supported barcode types
+
+| **Barcode Type** | **Example** |
+| | |
+| QR Code |:::image type="content" source="media/barcodes/qr-code.png" alt-text="Screenshot of the QR Code.":::|
+| Code 39 |:::image type="content" source="media/barcodes/code-39.png" alt-text="Screenshot of the Code 39.":::|
+| Code 128 |:::image type="content" source="media/barcodes/code-128.png" alt-text="Screenshot of the Code 128.":::|
+| UPC (UPC-A & UPC-E) |:::image type="content" source="media/barcodes/upc.png" alt-text="Screenshot of the UPC.":::|
+| PDF417 |:::image type="content" source="media/barcodes/pdf-417.png" alt-text="Screenshot of the PDF417.":::|
+
+ > [!NOTE]
+ > The `confidence` score is hard-coded for the `2023-02-28` public preview.
+
+ ```json
+ "content": ":barcode:",
+ "pages": [
+ {
+ "pageNumber": 1,
+ "barcodes": [
+ {
+ "kind": "QRCode",
+ "value": "http://test.com/",
+ "span": { ... },
+ "polygon": [...],
+ "confidence": 1
+ }
+ ]
+ }
+ ]
+ ```
+
+### Extract selected pages from documents
+
+For large multi-page documents, use the `pages` query parameter to indicate specific page numbers or page ranges for text extraction.
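+
+The following Python snippet is a hedged sketch of limiting analysis to a page range, assuming the `azure-ai-formrecognizer` package (version 3.2 or later) exposes the query parameter as the `pages` keyword argument. The endpoint, key, and document URL are placeholders.
+
+```python
+# Sketch: analyze only pages 3 through 6 of a large document.
+from azure.ai.formrecognizer import DocumentAnalysisClient
+from azure.core.credentials import AzureKeyCredential
+
+client = DocumentAnalysisClient(
+    "https://<your-resource>.cognitiveservices.azure.com/",  # placeholder
+    AzureKeyCredential("<your-key>"),  # placeholder
+)
+poller = client.begin_analyze_document_from_url(
+    "prebuilt-layout",
+    "https://<your-storage>/large-document.pdf",  # placeholder
+    pages="3-6",  # page range passed through to the service
+)
+result = poller.result()
+print(f"Analyzed {len(result.pages)} page(s)")
+```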
+++
+### Natural reading order output (Latin only)
+
+You can specify the order in which the text lines are output with the `readingOrder` query parameter. Use `natural` for a more human-friendly reading order output as shown in the following example. This feature is only supported for Latin languages.
++
+### Select page numbers or ranges for text extraction
+
+For large multi-page documents, use the `pages` query parameter to indicate specific page numbers or page ranges for text extraction. The following example shows a document with 10 pages, with text extracted for both cases - all pages (1-10) and selected pages (3-6).
++
+## The Get Analyze Layout Result operation
+
+The second step is to call the [Get Analyze Layout Result](https://westcentralus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-v2-1/operations/GetAnalyzeLayoutResult) operation. This operation takes as input the Result ID the Analyze Layout operation created. It returns a JSON response that contains a **status** field with the following possible values.
+
+|Field| Type | Possible values |
+|:--|:-:|:-|
+|status | string | `notStarted`: The analysis operation hasn't started.<br /><br />`running`: The analysis operation is in progress.<br /><br />`failed`: The analysis operation has failed.<br /><br />`succeeded`: The analysis operation has succeeded.|
+
+Call this operation iteratively until it returns the `succeeded` value. Use an interval of 3 to 5 seconds to avoid exceeding the requests per second (RPS) rate.
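+
+The following `requests`-based Python snippet is a hedged sketch of that polling pattern against the v2.1 REST operations linked above. The endpoint, key, and document URL are placeholders, and the route and header names follow the v2.1 reference; verify them against your resource before use.
+
+```python
+# Sketch: submit a document to v2.1 Analyze Layout, then poll Get Analyze Layout Result.
+import time
+import requests
+
+endpoint = "https://<your-resource>.cognitiveservices.azure.com"  # placeholder
+key = "<your-key>"  # placeholder
+headers = {"Ocp-Apim-Subscription-Key": key, "Content-Type": "application/json"}
+
+# Step 1: submit the document; the Operation-Location header is the result URL.
+submit = requests.post(
+    f"{endpoint}/formrecognizer/v2.1/layout/analyze",
+    headers=headers,
+    json={"source": "https://<your-storage>/sample-layout.pdf"},  # placeholder
+)
+submit.raise_for_status()
+result_url = submit.headers["Operation-Location"]
+
+# Step 2: poll every 3-5 seconds until the status is terminal.
+status = "notStarted"
+while status not in ("succeeded", "failed"):
+    time.sleep(4)
+    response = requests.get(result_url, headers={"Ocp-Apim-Subscription-Key": key}).json()
+    status = response["status"]
+
+if status == "succeeded":
+    print(len(response["analyzeResult"]["readResults"]), "page(s) of text extracted")
+```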
+
+When the **status** field has the `succeeded` value, the JSON response includes the extracted layout, text, tables, and selection marks. The extracted data includes extracted text lines and words, bounding boxes, text appearance with handwritten indication, tables, and selection marks with selected/unselected indicated.
+
+### Handwritten classification for text lines (Latin only)
+
+The response includes a classification of whether each text line is written in a handwriting style, along with a confidence score. This feature is only supported for Latin languages. The following example shows the handwritten classification for the text in the image.
++
+### Sample JSON output
+
+The response to the *Get Analyze Layout Result* operation is a structured representation of the document with all the information extracted.
+See here for a [sample document file](https://github.com/Azure-Samples/cognitive-services-REST-api-samples/tree/master/curl/form-recognizer/sample-layout.pdf) and its structured output [sample layout output](https://github.com/Azure-Samples/cognitive-services-REST-api-samples/tree/master/curl/form-recognizer/sample-layout-output.json).
+
+The JSON output has two parts:
+
+* `readResults` node contains all of the recognized text and selection marks. The text presentation hierarchy is page, then line, then individual words.
+* `pageResults` node contains the tables and cells extracted with their bounding boxes, confidence, and a reference to the lines and words in the `readResults` node (see the sketch after this list).
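+For orientation only, the following sketch walks both nodes of the response; the field names follow the v2.1 response shape, and `result` is assumed to be the parsed JSON returned once the status is `succeeded`.
+
+```python
+# Hedged sketch: walk the two parts of the Get Analyze Layout Result payload.
+# `result` is assumed to be the parsed JSON body returned when status == "succeeded".
+analyze_result = result["analyzeResult"]
+
+# readResults: page -> line -> word hierarchy.
+for page in analyze_result["readResults"]:
+    for line in page["lines"]:
+        words = [word["text"] for word in line["words"]]
+        print(page["page"], line["text"], words)
+
+# pageResults: tables and their cells, with references back to readResults.
+for page in analyze_result["pageResults"]:
+    for table in page.get("tables", []):
+        print(f"Table with {table['rows']} rows and {table['columns']} columns")
+        for cell in table["cells"]:
+            print(cell["rowIndex"], cell["columnIndex"], cell["text"])
+```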
+
+## Example Output
+
+### Text
+
+Layout API extracts text from documents and images with multiple text angles and colors. It accepts photos of documents, faxes, printed and/or handwritten (English only) text, and mixed modes. Text is extracted with information provided on lines, words, bounding boxes, confidence scores, and style (handwritten or other). All the text information is included in the `readResults` section of the JSON output.
+
+### Tables with headers
+
+Layout API extracts tables in the `pageResults` section of the JSON output. Documents can be scanned, photographed, or digitized. Tables can be complex with merged cells or columns, with or without borders, and with odd angles. Extracted table information includes the number of columns and rows, row span, and column span. Each cell is output with its bounding box and an indication of whether it's recognized as part of a header. Model-predicted header cells can span multiple rows and aren't necessarily the first rows in a table; header detection also works with rotated tables. Each table cell also includes the full text with references to the individual words in the `readResults` section.
+
+![Tables example](./media/layout-table-header-demo.gif)
+
+### Selection marks
+
+Layout API also extracts selection marks from documents. Extracted selection marks include the bounding box, confidence, and state (selected/unselected). Selection mark information is extracted in the `readResults` section of the JSON output.
+
+### Migration guide
+
+* Follow our [**Document Intelligence v3.0 migration guide**](v3-migration-guide.md) to learn how to use the v3.0 version in your applications and workflows.
+
+## Next steps
++
+* [Learn how to process your own forms and documents](quickstarts/try-document-intelligence-studio.md) with the [Document Intelligence Studio](https://formrecognizer.appliedai.azure.com/studio)
+
+* Complete a [Document Intelligence quickstart](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-3.0.0&preserve-view=true) and get started creating a document processing app in the development language of your choice.
+++
+* [Learn how to process your own forms and documents](quickstarts/try-sample-label-tool.md) with the [Document Intelligence Sample Labeling tool](https://fott-2-1.azurewebsites.net/)
+
+* Complete a [Document Intelligence quickstart](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-2.1.0&preserve-view=true) and get started creating a document processing app in the development language of your choice.
+
ai-services Concept Model Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/document-intelligence/concept-model-overview.md
+
+ Title: Document processing models - Document Intelligence
+
+description: Document processing models for OCR, document layout, invoices, identity, custom models, and more to extract text, structure, and key-value pairs.
+++++ Last updated : 07/18/2023+
+monikerRange: '<=doc-intel-3.0.0'
+++
+<!-- markdownlint-disable MD024 -->
+<!-- markdownlint-disable MD033 -->
+<!-- markdownlint-disable MD011 -->
+
+# Document processing models
+++
+ Azure AI Document Intelligence supports a wide variety of models that enable you to add intelligent document processing to your apps and flows. You can use a prebuilt document analysis or domain specific model or train a custom model tailored to your specific business needs and use cases. Document Intelligence can be used with the REST API or Python, C#, Java, and JavaScript SDKs.
+
+## Model overview
++
+| **Model** | **Description** |
+|--|--|
+|**Document analysis models**||
+| [Read OCR](#read-ocr) | Extract print and handwritten text including words, locations, and detected languages.|
+| [Layout analysis](#layout-analysis) | Extract text and document layout elements like tables, selection marks, titles, section headings, and more.|
+| [General document](#general-document) | Extract key-value pairs in addition to text and document structure information.|
+|**Prebuilt models**||
+| [Health insurance card](#health-insurance-card) | Automate healthcare processes by extracting insurer, member, prescription, group number and other key information from US health insurance cards.|
+| [W-2](#w-2) | Process W2 forms to extract employee, employer, wage, and other information. |
+| [Invoice](#invoice) | Extract key information such as customer name, billing address, due date, amount due, and line items from invoices. |
+| [Receipt](#receipt) | Extract merchant name, transaction date, line items, and totals from sales receipts.|
+| [Identity document (ID)](#identity-document-id) | Extract identity (ID) fields from US driver licenses and international passports. |
+| [Business card](#business-card) | Scan business cards to extract key fields and data into your applications. |
+|**Custom models**||
+| [Custom model (overview)](#custom-models) | Extract data from forms and documents specific to your business. Custom models are trained for your distinct data and use cases. |
+| [Custom extraction models](#custom-extraction)| &#9679; **Custom template models** use layout cues to extract values from documents and are suitable to extract fields from highly structured documents with defined visual templates.</br>&#9679; **Custom neural models** are trained on various document types to extract fields from structured, semi-structured and unstructured documents.|
+| [Custom classification model](#custom-classifier)| The **Custom classification model** classifies each page in an input file to identify the document(s) within, and it can identify multiple documents or multiple instances of a single document in the input file.
+| [Composed models](#composed-models) | Combine several custom models into a single model to automate processing of diverse document types with a single composed model.
+
+### Read OCR
++
+The Read API analyzes and extracts lines, words, their locations, detected languages, and handwritten style if detected.
+
+***Sample document processed using the [Document Intelligence Studio](https://formrecognizer.appliedai.azure.com/studio/read)***:
++
+> [!div class="nextstepaction"]
+> [Learn more: read model](concept-read.md)
+
+### Layout analysis
++
+The Layout analysis model analyzes and extracts text, tables, selection marks, and other structure elements like titles, section headings, page headers, page footers, and more.
+
+***Sample document processed using the [Document Intelligence Studio](https://formrecognizer.appliedai.azure.com/studio/layout)***:
++
+> [!div class="nextstepaction"]
+>
+> [Learn more: layout model](concept-layout.md)
+
+### General document
++
+The general document model is ideal for extracting common key-value pairs from forms and documents. It's a pretrained model and can be directly invoked via the REST API and the SDKs. You can use the general document model as an alternative to training a custom model.
+
+***Sample document processed using the [Document Intelligence Studio](https://formrecognizer.appliedai.azure.com/studio/document)***:
++
+> [!div class="nextstepaction"]
+> [Learn more: general document model](concept-general-document.md)
+
+### Health insurance card
++
+The health insurance card model combines powerful Optical Character Recognition (OCR) capabilities with deep learning models to analyze and extract key information from US health insurance cards.
+
+***Sample US health insurance card processed using [Document Intelligence Studio](https://formrecognizer.appliedai.azure.com/studio/prebuilt?formType=healthInsuranceCard.us)***:
++
+> [!div class="nextstepaction"]
+> [Learn more: Health insurance card model](concept-insurance-card.md)
+
+### W-2
++
+The W-2 form model extracts key information reported in each box on a W-2 form. The model supports standard and customized forms from 2018 to the present, including single and multiple forms on one page.
+
+***Sample W-2 document processed using [Document Intelligence Studio](https://formrecognizer.appliedai.azure.com/studio/prebuilt?formType=tax.us.w2)***:
++
+> [!div class="nextstepaction"]
+> [Learn more: W-2 model](concept-w2.md)
+
+### Invoice
++
+The invoice model automates invoice processing by extracting customer name, billing address, due date, amount due, line items, and other key data. Currently, the model supports English, Spanish, German, French, Italian, Portuguese, and Dutch invoices.
+
+***Sample invoice processed using [Document Intelligence Studio](https://formrecognizer.appliedai.azure.com/studio/prebuilt?formType=invoice)***:
++
+> [!div class="nextstepaction"]
+> [Learn more: invoice model](concept-invoice.md)
+
+### Receipt
++
+Use the receipt model to extract merchant name, dates, line items, quantities, and totals from printed and handwritten sales receipts. Version v3.0 also supports single-page hotel receipt processing.
+
+***Sample receipt processed using [Document Intelligence Studio](https://formrecognizer.appliedai.azure.com/studio/prebuilt?formType=receipt)***:
++
+> [!div class="nextstepaction"]
+> [Learn more: receipt model](concept-receipt.md)
+
+### Identity document (ID)
++
+Use the Identity document (ID) model to process U.S. Driver's Licenses (all 50 states and District of Columbia) and biographical pages from international passports (excluding visa and other travel documents) to extract key fields.
+
+***Sample U.S. Driver's License processed using [Document Intelligence Studio](https://formrecognizer.appliedai.azure.com/studio/prebuilt?formType=idDocument)***:
++
+> [!div class="nextstepaction"]
+> [Learn more: identity document model](concept-id-document.md)
+
+### Business card
++
+Use the business card model to scan and extract key information from business card images.
+
+***Sample business card processed using [Document Intelligence Studio](https://formrecognizer.appliedai.azure.com/studio/prebuilt?formType=businessCard)***:
++
+> [!div class="nextstepaction"]
+> [Learn more: business card model](concept-business-card.md)
+
+### Custom models
++
+Custom document models analyze and extract data from forms and documents specific to your business. They're trained to recognize form fields within your distinct content and extract key-value pairs and table data. You only need five examples of the same form type to get started.
+
+Version v3.0 custom model supports signature detection in custom forms (template model) and cross-page tables in both template and neural models.
+
+***Sample custom template processed using [Document Intelligence Studio](https://formrecognizer.appliedai.azure.com/studio/customform/projects)***:
++
+> [!div class="nextstepaction"]
+> [Learn more: custom model](concept-custom.md)
+
+#### Custom extraction
++
+A custom extraction model can be one of two types, **custom template** or **custom neural**. To create a custom extraction model, label a dataset of documents with the values you want extracted and train the model on the labeled dataset. You only need five examples of the same form or document type to get started.
+
+***Sample custom extraction processed using [Document Intelligence Studio](https://formrecognizer.appliedai.azure.com/studio/customform/projects)***:
++
+> [!div class="nextstepaction"]
+> [Learn more: custom template model](concept-custom-template.md)
+
+> [!div class="nextstepaction"]
+> [Learn more: custom neural model](./concept-custom-neural.md)
+
+#### Custom classifier
++
+The custom classification model enables you to identify the document type prior to invoking the extraction model. The classification model is available starting with the 2023-02-28-preview. Training a custom classification model requires at least two distinct classes and a minimum of five samples per class.
+
+> [!div class="nextstepaction"]
+> [Learn more: custom classification model](concept-custom-classifier.md)
+
+#### Composed models
+
+A composed model is created by taking a collection of custom models and assigning them to a single model built from your form types. You can assign multiple custom models to a composed model called with a single model ID. You can assign up to 200 trained custom models to a single composed model.
+
+***Composed model dialog window in [Document Intelligence Studio](https://formrecognizer.appliedai.azure.com/studio/customform/projects)***:
++
+> [!div class="nextstepaction"]
+> [Learn more: custom model](concept-custom.md)
+
+## Model data extraction
+
+| **Model ID** | **Text extraction** | **Language detection** | **Selection Marks** | **Tables** | **Paragraphs** | **Structure** | **Key-Value pairs** | **Fields** |
+|:--|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|
+| [prebuilt-read](concept-read.md#data-extraction) | ✓ | ✓ | | | ✓ | | | |
+| [prebuilt-healthInsuranceCard.us](concept-insurance-card.md#field-extraction) | ✓ | | ✓ | | ✓ | | | ✓ |
+| [prebuilt-tax.us.w2](concept-w2.md#field-extraction) | ✓ | | ✓ | | ✓ | | | ✓ |
+| [prebuilt-document](concept-general-document.md#data-extraction)| ✓ | | ✓ | ✓ | ✓ | | ✓ | |
+| [prebuilt-layout](concept-layout.md#data-extraction) | ✓ | | ✓ | ✓ | ✓ | ✓ | | |
+| [prebuilt-invoice](concept-invoice.md#field-extraction) | ✓ | | ✓ | ✓ | ✓ | | ✓ | ✓ |
+| [prebuilt-receipt](concept-receipt.md#field-extraction) | ✓ | | | | ✓ | | | ✓ |
+| [prebuilt-idDocument](concept-id-document.md#field-extractions) | ✓ | | | | ✓ | | | ✓ |
+| [prebuilt-businessCard](concept-business-card.md#field-extractions) | ✓ | | | | ✓ | | | ✓ |
+| [Custom](concept-custom.md#compare-model-features) | ✓ | | ✓ | ✓ | ✓ | | | ✓ |
+
+## Input requirements
++
+> [!NOTE]
+> The [Sample Labeling tool](https://fott-2-1.azurewebsites.net/) does not support the BMP file format. This is a limitation of the tool, not the Document Intelligence service.
+
+### Version migration
+
+Learn how to use Document Intelligence v3.0 in your applications by following our [**Document Intelligence v3.0 migration guide**](v3-migration-guide.md).
+++
+| **Model** | **Description** |
+|--|--|
+|**Document analysis**||
+| [Layout](#layout) | Extract text and layout information from documents.|
+|**Prebuilt**||
+| [Invoice](#invoice) | Extract key information from English and Spanish invoices. |
+| [Receipt](#receipt) | Extract key information from English receipts. |
+| [ID document](#id-document) | Extract key information from US driver licenses and international passports. |
+| [Business card](#business-card) | Extract key information from English business cards. |
+|**Custom**||
+| [Custom](#custom) | Extract data from forms and documents specific to your business. Custom models are trained for your distinct data and use cases. |
+| [Composed](#composed-custom-model) | Compose a collection of custom models and assign them to a single model built from your form types.
+
+### Layout
+
+The Layout API analyzes and extracts text, tables and headers, selection marks, and structure information from documents.
+
+***Sample document processed using the [Sample Labeling tool](https://fott-2-1.azurewebsites.net/layout-analyze)***:
++
+> [!div class="nextstepaction"]
+>
+> [Learn more: layout model](concept-layout.md)
+
+### Invoice
+
+The invoice model analyzes and extracts key information from sales invoices. The API analyzes invoices in various formats and extracts key information such as customer name, billing address, due date, and amount due.
+
+***Sample invoice processed using the [Sample Labeling tool](https://fott-2-1.azurewebsites.net/prebuilts-analyze)***:
++
+> [!div class="nextstepaction"]
+> [Learn more: invoice model](concept-invoice.md)
+
+### Receipt
+
+* The receipt model analyzes and extracts key information from printed and handwritten sales receipts.
+
+***Sample receipt processed using [Sample Labeling tool](https://fott-2-1.azurewebsites.net/prebuilts-analyze)***:
++
+> [!div class="nextstepaction"]
+> [Learn more: receipt model](concept-receipt.md)
+
+### ID document
+
+ The ID document model analyzes and extracts key information from the following documents:
+
+* U.S. Driver's Licenses (all 50 states and District of Columbia)
+
+* Biographical pages from international passports (excluding visa and other travel documents). The API analyzes identity documents and extracts key fields.
+
+***Sample U.S. Driver's License processed using the [Sample Labeling tool](https://fott-2-1.azurewebsites.net/prebuilts-analyze)***:
++
+> [!div class="nextstepaction"]
+> [Learn more: identity document model](concept-id-document.md)
+
+### Business card
+
+The business card model analyzes and extracts key information from business card images.
+
+***Sample business card processed using the [Sample Labeling tool](https://fott-2-1.azurewebsites.net/prebuilts-analyze)***:
++
+> [!div class="nextstepaction"]
+> [Learn more: business card model](concept-business-card.md)
+
+### Custom
+
+* Custom models analyze and extract data from forms and documents specific to your business. The API is a machine-learning program trained to recognize form fields within your distinct content and extract key-value pairs and table data. You only need five examples of the same form type to get started, and your custom model can be trained with or without labeled datasets.
+
+***Sample custom model processing using the [Sample Labeling tool](https://fott-2-1.azurewebsites.net/)***:
++
+> [!div class="nextstepaction"]
+> [Learn more: custom model](concept-custom.md)
+
+#### Composed custom model
+
+A composed model is created by taking a collection of custom models and assigning them to a single model built from your form types. You can assign multiple custom models to a composed model called with a single model ID. You can assign up to 100 trained custom models to a single composed model.
+
+***Composed model dialog window using the [Sample Labeling tool](https://formrecognizer.appliedai.azure.com/studio/customform/projects)***:
++
+> [!div class="nextstepaction"]
+> [Learn more: custom model](concept-custom.md)
+
+## Model data extraction
+
+| **Model** | **Text extraction** | **Language detection** | **Selection Marks** | **Tables** | **Paragraphs** | **Paragraph roles** | **Key-Value pairs** | **Fields** |
+|:--|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|
+| [Layout](concept-layout.md#data-extraction) | ✓ | | ✓ | ✓ | ✓ | ✓ | | |
+| [Invoice](concept-invoice.md#field-extraction) | ✓ | | ✓ | ✓ | ✓ | | ✓ | ✓ |
+| [Receipt](concept-receipt.md#field-extraction) | ✓ | | | | ✓ | | | ✓ |
+| [ID Document](concept-id-document.md#field-extractions) | ✓ | | | | ✓ | | | ✓ |
+| [Business Card](concept-business-card.md#field-extractions) | ✓ | | | | ✓ | | | ✓ |
+| [Custom Form](concept-custom.md#compare-model-features) | ✓ | | ✓ | ✓ | ✓ | | | ✓ |
+
+## Input requirements
++
+> [!NOTE]
+> The [Sample Labeling tool](https://fott-2-1.azurewebsites.net/) does not support the BMP file format. This is a limitation of the tool, not the Document Intelligence service.
+
+### Version migration
+
+You can learn how to use Document Intelligence v3.0 in your applications by following our [**Document Intelligence v3.0 migration guide**](v3-migration-guide.md).
++
+## Next steps
++
+* Try processing your own forms and documents with the [Document Intelligence Studio](https://formrecognizer.appliedai.azure.com/studio)
+
+* Complete a [Document Intelligence quickstart](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-3.0.0&preserve-view=true) and get started creating a document processing app in the development language of your choice.
+++
+* Try processing your own forms and documents with the [Document Intelligence Sample Labeling tool](https://fott-2-1.azurewebsites.net/)
+
+* Complete a [Document Intelligence quickstart](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-2.1.0&preserve-view=true) and get started creating a document processing app in the development language of your choice.
+
ai-services Concept Query Fields https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/document-intelligence/concept-query-fields.md
+
+ Title: Query field extraction - Document Intelligence
+
+description: Use Document Intelligence to extract query field data.
+++++ Last updated : 07/18/2023+
+monikerRange: 'doc-intel-3.0.0'
++
+<!-- markdownlint-disable MD033 -->
+
+# Document Intelligence query field extraction (preview)
+
+**This article applies to:** ![Document Intelligence checkmark](medi) supported by Document Intelligence REST API version [2023-02-28-preview](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-2023-02-28-preview/operations/AnalyzeDocument)**.
+
+> [!IMPORTANT]
+>
+> * The Document Intelligence Studio query fields extraction feature is currently in gated preview. Features, approaches, and processes may change prior to General Availability (GA) based on user feedback.
+> * Complete and submit the [**Document Intelligence private preview request form**](https://aka.ms/form-recognizer/preview/survey) to request access.
+
+Document Intelligence now supports query field extractions using Azure OpenAI capabilities. With query field extraction, you can add fields to the extraction process using a query request without the need for added training.
+
+> [!NOTE]
+>
+> Document Intelligence Studio query field extraction is currently available with the general document model for the `2023-02-28-preview` release.
+
+## Select query fields
+
+For query field extraction, specify the fields you want to extract and Document Intelligence analyzes the document accordingly. Here's an example:
+
+* If you're processing a contract in the Document Intelligence Studio, you can pass a list of field labels like `Party1`, `Party2`, `TermsOfUse`, `PaymentTerms`, `PaymentDate`, and `TermEndDate` as part of the analyze document request (a hedged request sketch follows this list).
+
+ :::image type="content" source="media/studio/query-field-select.png" alt-text="Screenshot of query fields selection window in Document Intelligence Studio.":::
+
+* Document Intelligence utilizes the capabilities of both [**Azure OpenAI Service**](../../ai-services/openai/overview.md) and extraction models to analyze and extract the field data and return the values in a structured JSON output.
+
+* In addition to the query fields, the response includes text, tables, selection marks, general document key-value pairs, and other relevant data.
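+The following is a hedged sketch of what such an analyze request could look like against the REST API. The `features` and `queryFields` parameter names reflect the `2023-02-28-preview` release and are assumptions that may change; the endpoint, key, and document URL are placeholders.
+
+```python
+# Hedged sketch: ask the general document model to also return ad-hoc query fields.
+# Parameter names reflect the 2023-02-28-preview and may differ in later releases.
+import requests
+
+endpoint = "https://<your-resource>.cognitiveservices.azure.com"
+url = f"{endpoint}/formrecognizer/documentModels/prebuilt-document:analyze"
+
+response = requests.post(
+    url,
+    params={
+        "api-version": "2023-02-28-preview",
+        "features": "queryFields.premium",
+        "queryFields": "Party1,Party2,TermsOfUse,PaymentTerms,PaymentDate,TermEndDate",
+    },
+    headers={"Ocp-Apim-Subscription-Key": "<your-key>"},
+    json={"urlSource": "https://<your-storage>/contract.pdf"},
+)
+response.raise_for_status()
+# The Operation-Location response header holds the URL used to retrieve the result.
+print(response.headers.get("Operation-Location"))
+```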
+
+## Next steps
+
+> [!div class="nextstepaction"]
+> [Try the Document Intelligence Studio quickstart](./quickstarts/try-document-intelligence-studio.md)
ai-services Concept Read https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/document-intelligence/concept-read.md
+
+ Title: OCR for documents - Document Intelligence
+
+description: Extract print and handwritten text from scanned and digital documents with Document Intelligence's Read OCR model.
+++++ Last updated : 07/18/2023+
+monikerRange: 'doc-intel-3.0.0'
+++
+# Document Intelligence read model
+
+**This article applies to:** ![Document Intelligence v3.0 checkmark](media/yes-icon.png) **Document Intelligence v3.0**.
+
+> [!NOTE]
+>
+> For extracting text from external images like labels, street signs, and posters, use the [Azure AI Vision v4.0 preview Read](../../ai-services/Computer-vision/concept-ocr.md) feature optimized for general, non-document images with a performance-enhanced synchronous API that makes it easier to embed OCR in your user experience scenarios.
+>
+
+Document Intelligence v3.0's Read Optical Character Recognition (OCR) model runs at a higher resolution than Azure AI Vision Read and extracts print and handwritten text from PDF documents and scanned images. It also includes preview support for extracting text from Microsoft Word, Excel, PowerPoint, and HTML documents. It detects paragraphs, text lines, words, locations, and languages. The Read model is the underlying OCR engine for other Document Intelligence prebuilt models like Layout, General Document, Invoice, Receipt, Identity (ID) document, in addition to custom models.
+
+## What is OCR for documents?
+
+Optical Character Recognition (OCR) for documents is optimized for large text-heavy documents in multiple file formats and global languages. It includes features like higher-resolution scanning of document images for better handling of smaller and dense text; paragraph detection; and fillable form management. OCR capabilities also include advanced scenarios like single character boxes and accurate extraction of key fields commonly found in invoices, receipts, and other prebuilt scenarios.
+
+## Read OCR supported document types
+
+> [!NOTE]
+>
+> * Only API Version 2022-06-30-preview supports Microsoft Word, Excel, PowerPoint, and HTML file formats in addition to all other document types supported by the GA versions.
+> * For the preview of Office and HTML file formats, the Read API ignores the pages parameter and extracts all pages by default. Each embedded image counts as 1 page unit, and each worksheet, slide, and page (up to 3,000 characters) counts as 1 page unit.
+
+| **Model** | **Images** | **PDF** | **TIFF** | **Word** | **Excel** | **PowerPoint** | **HTML** |
+|---|---|---|---|---|---|---|---|
+| **prebuilt-read** | GA</br> (2022-08-31)| GA</br> (2022-08-31) | GA</br> (2022-08-31) | Preview</br>(2022-06-30-preview) | Preview</br>(2022-06-30-preview) | Preview</br>(2022-06-30-preview) | Preview</br>(2022-06-30-preview) |
+
+### Data extraction
+
+| **Model** | **Text** | **[Language detection](language-support.md#detected-languages-read-api)** |
+|---|---|---|
+| **prebuilt-read** | ✓ | ✓ |
+
+## Development options
+
+Document Intelligence v3.0 supports the following resources:
+
+| Model | Resources | Model ID |
+|-|-|-|
+|**Read model**| <ul><li>[**Document Intelligence Studio**](https://formrecognizer.appliedai.azure.com)</li><li>[**REST API**](how-to-guides/use-sdk-rest-api.md?view=doc-intel-3.0.0&preserve-view=true)</li><li>[**C# SDK**](how-to-guides/use-sdk-rest-api.md?view=doc-intel-3.0.0&preserve-view=true?pivots=programming-language-csharp)</li><li>[**Python SDK**](how-to-guides/use-sdk-rest-api.md?view=doc-intel-3.0.0&preserve-view=true?pivots=programming-language-python)</li><li>[**Java SDK**](how-to-guides/use-sdk-rest-api.md?view=doc-intel-3.0.0&preserve-view=true?pivots=programming-language-java)</li><li>[**JavaScript**](how-to-guides/use-sdk-rest-api.md?view=doc-intel-3.0.0&preserve-view=true?pivots=programming-language-javascript)</li></ul>|**prebuilt-read**|
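+For example, a minimal Python SDK sketch (using the `azure-ai-formrecognizer` package; the endpoint, key, and file name are placeholders) that runs the Read model on selected pages might look like the following.
+
+```python
+# Minimal sketch: analyze a document with the prebuilt-read model and print its lines.
+# Replace the endpoint, key, and file name placeholders with your own values.
+from azure.ai.formrecognizer import DocumentAnalysisClient
+from azure.core.credentials import AzureKeyCredential
+
+client = DocumentAnalysisClient(
+    endpoint="https://<your-resource>.cognitiveservices.azure.com",
+    credential=AzureKeyCredential("<your-key>"),
+)
+
+with open("sample-report.pdf", "rb") as document:
+    # The pages keyword limits extraction to the listed page range.
+    poller = client.begin_analyze_document("prebuilt-read", document, pages="1-3")
+
+result = poller.result()
+for page in result.pages:
+    for line in page.lines:
+        print(f"Page {page.page_number}: {line.content}")
+```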
+
+## Try OCR in Document Intelligence
+
+Try extracting text from forms and documents using the Document Intelligence Studio. You need the following assets:
+
+* An Azure subscription (you can [create one for free](https://azure.microsoft.com/free/cognitive-services/))
+
+* A [Form Recognizer instance (Document Intelligence forthcoming)](https://portal.azure.com/#create/Microsoft.CognitiveServicesFormRecognizer) in the Azure portal. You can use the free pricing tier (`F0`) to try the service. After your resource deploys, select **Go to resource** to get your key and endpoint.
+
+ :::image type="content" source="media/containers/keys-and-endpoint.png" alt-text="Screenshot: keys and endpoint location in the Azure portal.":::
+
+### Document Intelligence Studio
+
+> [!NOTE]
+> Currently, Document Intelligence Studio doesn't support Microsoft Word, Excel, PowerPoint, and HTML file formats in the Read version v3.0.
+
+***Sample form processed with [Document Intelligence Studio](https://formrecognizer.appliedai.azure.com/studio/read)***
++
+1. On the Document Intelligence Studio home page, select **Read**
+
+1. You can analyze the sample document or select the **+ Add** button to upload your own sample.
+
+1. Select the **Analyze** button:
+
+ :::image type="content" source="media/studio/form-recognizer-studio-read-analyze-v3p2-updated.png" alt-text="Screenshot: analyze read menu.":::
+
+ > [!div class="nextstepaction"]
+ > [Try Document Intelligence Studio](https://formrecognizer.appliedai.azure.com/studio/layout)
+
+## Input requirements
++
+## Supported languages and locales
+
+Document Intelligence v3.0 supports several languages for the read OCR model. *See* our [Language Support](language-support.md) for a complete list of supported handwritten and printed languages.
+
+## Data detection and extraction
+
+### Microsoft Office and HTML text extraction
+
+Use the parameter `api-version=2022-06-30-preview` when using the REST API or the corresponding SDKs of that API version to preview text extraction from Microsoft Word, Excel, PowerPoint, and HTML files. The following illustration shows extraction of the digital text and text from the images embedded in the Word document by running OCR on the images.
++
+The page units in the model output are computed as shown:
+
+| **File format** | **Computed page unit** | **Total pages** |
+|---|---|---|
+|Word | Up to 3,000 characters = 1 page unit, Each embedded image = 1 page unit | Total pages of up to 3,000 characters each + Total embedded images |
+|Excel | Each worksheet = 1 page unit, Each embedded image = 1 page unit | Total worksheets + Total images
+|PowerPoint | Each slide = 1 page unit, Each embedded image = 1 page unit | Total slides + Total images
+|HTML | Up to 3,000 characters = 1 page unit, embedded or linked images not supported | Total pages of up to 3,000 characters each |
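+For example, under this scheme a Word document containing 7,500 characters of text and two embedded images would count as three page units for the text (assuming partial 3,000-character blocks round up to a full page unit) plus two page units for the images, for a total of five page units.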
+
+ ### Barcode extraction
+
+The Read OCR model extracts all identified barcodes in the `barcodes` collection as a top level object under `content`. Inside the `content`, detected barcodes are represented as `:barcode:`. Each entry in this collection represents a barcode and includes the barcode type as `kind` and the embedded barcode content as `value` along with its `polygon` coordinates. Initially, barcodes appear at the end of each page. Here, the `confidence` is hard-coded for the public preview (`2023-02-28`) release.
+
+#### Supported barcode types
+
+| **Barcode Type** | **Example** |
+|:--|:--|
+| QR Code |:::image type="content" source="media/barcodes/qr-code.png" alt-text="Screenshot of the QR Code.":::|
+| Code 39 |:::image type="content" source="media/barcodes/code-39.png" alt-text="Screenshot of the Code 39.":::|
+| Code 128 |:::image type="content" source="media/barcodes/code-128.png" alt-text="Screenshot of the Code 128.":::|
+| UPC (UPC-A & UPC-E) |:::image type="content" source="media/barcodes/upc.png" alt-text="Screenshot of the UPC.":::|
+| PDF417 |:::image type="content" source="media/barcodes/pdf-417.png" alt-text="Screenshot of the PDF417.":::|
+
+```json
+"content": ":barcode:",
+ "pages": [
+ {
+ "pageNumber": 1,
+ "barcodes": [
+ {
+ "kind": "QRCode",
+ "value": "http://test.com/",
+ "span": { ... },
+ "polygon": [...],
+ "confidence": 1
+ }
+ ]
+ }
+ ]
+```
+
+### Paragraphs extraction
+
+The Read OCR model in Document Intelligence extracts all identified blocks of text in the `paragraphs` collection as a top level object under `analyzeResults`. Each entry in this collection represents a text block and includes the extracted text as `content` and the bounding `polygon` coordinates. The `span` information points to the text fragment within the top-level `content` property that contains the full text from the document.
+
+```json
+"paragraphs": [
+ {
+ "spans": [],
+ "boundingRegions": [],
+ "content": "While healthcare is still in the early stages of its Al journey, we are seeing pharmaceutical and other life sciences organizations making major investments in Al and related technologies.\" TOM LAWRY | National Director for Al, Health and Life Sciences | Microsoft"
+ }
+]
+```
+
+### Language detection
+
+The Read OCR model in Document Intelligence adds [language detection](language-support.md#detected-languages-read-api) as a new feature for text lines. Read predicts the detected primary language for each text line along with the `confidence` in the `languages` collection under `analyzeResult`.
+
+```json
+"languages": [
+ {
+ "spans": [
+ {
+ "offset": 0,
+ "length": 131
+ }
+ ],
+ "locale": "en",
+ "confidence": 0.7
+ },
+]
+```
+
+### Extract pages from documents
+
+The page units in the model output are computed as shown:
+
+| **File format** | **Computed page unit** | **Total pages** |
+|---|---|---|
+|Images | Each image = 1 page unit | Total images |
+|PDF | Each page in the PDF = 1 page unit | Total pages in the PDF |
+|TIFF | Each image in the TIFF = 1 page unit | Total images in the TIFF |
+
+```json
+"pages": [
+ {
+ "pageNumber": 1,
+ "angle": 0,
+ "width": 915,
+ "height": 1190,
+ "unit": "pixel",
+ "words": [],
+ "lines": [],
+ "spans": [],
+ "kind": "document"
+ }
+]
+```
+
+### Extract text lines and words
+
+The Read OCR model extracts print and handwritten style text as `lines` and `words`. The model outputs bounding `polygon` coordinates and `confidence` for the extracted words. The `styles` collection includes any handwritten style for lines if detected along with the spans pointing to the associated text. This feature applies to [supported handwritten languages](language-support.md).
+
+For the preview of Microsoft Word, Excel, PowerPoint, and HTML file support, Read extracts all embedded text as is. For any embedded images, it runs OCR on the images to extract text and append the text from each image as an added entry to the `pages` collection. These added entries include the extracted text lines and words, their bounding polygons, confidences, and the spans pointing to the associated text.
+
+```json
+"words": [
+ {
+ "content": "While",
+ "polygon": [],
+ "confidence": 0.997,
+ "span": {}
+ },
+],
+"lines": [
+ {
+ "content": "While healthcare is still in the early stages of its Al journey, we",
+ "polygon": [],
+ "spans": [],
+ }
+]
+```
+
+### Select page(s) for text extraction
+
+For large multi-page PDF documents, use the `pages` query parameter to indicate specific page numbers or page ranges for text extraction.
+
+> [!NOTE]
+> For the preview of Microsoft Word, Excel, PowerPoint, and HTML file support, the Read API ignores the pages parameter and extracts all pages by default.
+
+### Handwritten style for text lines (Latin languages only)
+
+The response includes a classification of whether each text line is written in a handwriting style, along with a confidence score. This feature is only supported for Latin languages. The following JSON snippet shows an example.
+
+```json
+"styles": [
+{
+ "confidence": 0.95,
+ "spans": [
+ {
+ "offset": 509,
+ "length": 24
+ }
+ "isHandwritten": true
+ ]
+}
+```
+
+## Next steps
+
+Complete a Document Intelligence quickstart:
+
+> [!div class="checklist"]
+>
+> * [**REST API**](how-to-guides/use-sdk-rest-api.md?view=doc-intel-3.0.0&preserve-view=true)
+> * [**C# SDK**](how-to-guides/use-sdk-rest-api.md?view=doc-intel-3.0.0&preserve-view=true?pivots=programming-language-csharp)
+> * [**Python SDK**](how-to-guides/use-sdk-rest-api.md?view=doc-intel-3.0.0&preserve-view=true?pivots=programming-language-python)
+> * [**Java SDK**](how-to-guides/use-sdk-rest-api.md?view=doc-intel-3.0.0&preserve-view=true?pivots=programming-language-java)
+> * [**JavaScript**](how-to-guides/use-sdk-rest-api.md?view=doc-intel-3.0.0&preserve-view=true?pivots=programming-language-javascript)
+
+Explore our REST API:
+
+> [!div class="nextstepaction"]
+> [Document Intelligence API v3.0](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-2022-08-31/operations/AnalyzeDocument)
ai-services Concept Receipt https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/document-intelligence/concept-receipt.md
+
+ Title: Receipt data extraction - Document Intelligence
+
+description: Use machine learning powered receipt data extraction model to digitize receipts.
+++++ Last updated : 07/18/2023+
+monikerRange: '<=doc-intel-3.0.0'
++
+<!-- markdownlint-disable MD033 -->
+
+# Document Intelligence receipt model
+++
+The Document Intelligence receipt model combines powerful Optical Character Recognition (OCR) capabilities with deep learning models to analyze and extract key information from sales receipts. Receipts can be of various formats and quality including printed and handwritten receipts. The API extracts key information such as merchant name, merchant phone number, transaction date, tax, and transaction total and returns structured JSON data.
+
+## Receipt data extraction
+
+Receipt digitization is the process of converting scanned receipts into digital form for downstream processing. Document Intelligence OCR-powered receipt data extraction helps to automate the conversion and save time and effort.
++
+***Sample receipt processed with [Document Intelligence Studio](https://formrecognizer.appliedai.azure.com/studio/prebuilt?formType=receipt)***:
++++
+**Sample receipt processed with [Document Intelligence Sample Labeling tool](https://fott-2-1.azurewebsites.net/connection)**:
+++
+## Development options
+
+Document Intelligence v3.0 supports the following tools:
+
+| Feature | Resources | Model ID |
+|-|-|--|
+|**Receipt model**| <ul><li>[**Document Intelligence Studio**](https://formrecognizer.appliedai.azure.com)</li><li>[**REST API**](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-2022-08-31/operations/AnalyzeDocument)</li><li>[**C# SDK**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-3.0.0&preserve-view=true)</li><li>[**Python SDK**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-3.0.0&preserve-view=true)</li></ul>|**prebuilt-receipt**|
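+As a rough illustration, the receipt model can be called from the `azure-ai-formrecognizer` Python SDK as sketched below; the endpoint and key are placeholders, and the sample receipt URL is the same Contoso image referenced later in this article.
+
+```python
+# Hedged sketch: analyze a receipt image by URL with the prebuilt-receipt model
+# and print two extracted fields. Replace the endpoint and key placeholders.
+from azure.ai.formrecognizer import DocumentAnalysisClient
+from azure.core.credentials import AzureKeyCredential
+
+client = DocumentAnalysisClient(
+    endpoint="https://<your-resource>.cognitiveservices.azure.com",
+    credential=AzureKeyCredential("<your-key>"),
+)
+
+poller = client.begin_analyze_document_from_url(
+    "prebuilt-receipt",
+    "https://raw.githubusercontent.com/Azure-Samples/cognitive-services-REST-api-samples/master/curl/form-recognizer/contoso-allinone.jpg",
+)
+result = poller.result()
+
+for receipt in result.documents:
+    merchant = receipt.fields.get("MerchantName")
+    total = receipt.fields.get("Total")
+    if merchant:
+        print("Merchant:", merchant.value, f"(confidence {merchant.confidence:.2f})")
+    if total:
+        print("Total:", total.value, f"(confidence {total.confidence:.2f})")
+```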
+++
+Document Intelligence v2.1 supports the following tools:
+
+| Feature | Resources |
+|-|-|
+|**Receipt model**| <ul><li>[**Document Intelligence labeling tool**](https://fott-2-1.azurewebsites.net/prebuilts-analyze)</li><li>[**REST API**](./how-to-guides/use-sdk-rest-api.md?pivots=programming-language-rest-api&preserve-view=true&tabs=windows&view=doc-intel-2.1.0#analyze-receipts)</li><li>[**Client-library SDK**](~/articles/ai-services/document-intelligence/how-to-guides/use-sdk-rest-api.md?view=doc-intel-2.1.0&preserve-view=true)</li><li>[**Document Intelligence Docker container**](containers/install-run.md?tabs=receipt#run-the-container-with-the-docker-compose-up-command)</li></ul>|
++
+## Input requirements
+++++
+* Supported file formats: JPEG, PNG, PDF, and TIFF
+* For PDF and TIFF files, Document Intelligence can process up to 2,000 pages for standard-tier subscribers, or only the first two pages for free-tier subscribers.
+* The file size must be less than 50 MB, and image dimensions must be at least 50 x 50 pixels and at most 10,000 x 10,000 pixels (a simple local pre-check is sketched after this list).
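+Purely as an illustration, a lightweight local pre-check against the file format and size limits above could look like the following; pixel-dimension checks are omitted to keep the sketch dependency-free.
+
+```python
+# Illustrative pre-flight check against the documented limits: supported file
+# extensions and the 50 MB size cap. This is not part of the service API.
+from pathlib import Path
+
+SUPPORTED_EXTENSIONS = {".jpeg", ".jpg", ".png", ".pdf", ".tiff", ".tif"}
+MAX_BYTES = 50 * 1024 * 1024  # 50 MB
+
+def is_acceptable(path: str) -> bool:
+    file = Path(path)
+    if file.suffix.lower() not in SUPPORTED_EXTENSIONS:
+        return False
+    return file.stat().st_size <= MAX_BYTES
+```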
++
+### Try receipt data extraction
+
+See how Document Intelligence extracts data, including time and date of transactions, merchant information, and amount totals from receipts. You need the following resources:
+
+* An Azure subscription (you can [create one for free](https://azure.microsoft.com/free/cognitive-services/))
+
+* A [Form Recognizer instance (Document Intelligence forthcoming)](https://portal.azure.com/#create/Microsoft.CognitiveServicesFormRecognizer) in the Azure portal. You can use the free pricing tier (`F0`) to try the service. After your resource deploys, select **Go to resource** to get your key and endpoint.
+
+ :::image type="content" source="media/containers/keys-and-endpoint.png" alt-text="Screenshot: keys and endpoint location in the Azure portal.":::
++
+#### Document Intelligence Studio
+
+> [!NOTE]
+> Document Intelligence Studio is available with the v3.0 API.
+
+1. On the Document Intelligence Studio home page, select **Receipts**
+
+1. You can analyze the sample receipt or select the **+ Add** button to upload your own sample.
+
+1. Select the **Analyze** button:
+
+ :::image type="content" source="media/studio/receipt-analyze.png" alt-text="Screenshot: analyze receipt menu.":::
+
+ > [!div class="nextstepaction"]
+ > [Try Document Intelligence Studio](https://formrecognizer.appliedai.azure.com/studio/prebuilt?formType=receipt)
+++
+## Document Intelligence Sample Labeling tool
+
+1. Navigate to the [Document Intelligence Sample Tool](https://fott-2-1.azurewebsites.net/).
+
+1. On the sample tool home page, select the **Use prebuilt model to get data** tile.
+
+ :::image type="content" source="media/label-tool/prebuilt-1.jpg" alt-text="Screenshot of the layout model analyze results process.":::
+
+1. Select the **Form Type** to analyze from the dropdown menu.
+
+1. Choose a URL for the file you would like to analyze from the below options:
+
+ * [**Sample invoice document**](https://raw.githubusercontent.com/Azure-Samples/cognitive-services-REST-api-samples/master/curl/form-recognizer/invoice_sample.jpg).
+ * [**Sample ID document**](https://raw.githubusercontent.com/Azure-Samples/cognitive-services-REST-api-samples/master/curl/form-recognizer/DriverLicense.png).
+ * [**Sample receipt image**](https://raw.githubusercontent.com/Azure-Samples/cognitive-services-REST-api-samples/master/curl/form-recognizer/contoso-allinone.jpg).
+ * [**Sample business card image**](https://raw.githubusercontent.com/Azure/azure-sdk-for-python/master/sdk/formrecognizer/azure-ai-formrecognizer/samples/sample_forms/business_cards/business-card-english.jpg).
+
+1. In the **Source** field, select **URL** from the dropdown menu, paste the selected URL, and select the **Fetch** button.
+
+ :::image type="content" source="media/label-tool/fott-select-url.png" alt-text="Screenshot of source location dropdown menu.":::
+
+1. In the **Document Intelligence service endpoint** field, paste the endpoint that you obtained with your Document Intelligence subscription.
+
+1. In the **key** field, paste the key you obtained from your Document Intelligence resource.
+
+ :::image type="content" source="media/fott-select-form-type.png" alt-text="Screenshot of the select-form-type dropdown menu.":::
+
+1. Select **Run analysis**. The Document Intelligence Sample Labeling tool calls the Analyze Prebuilt API and analyzes the document.
+
+1. View the results - see the extracted key-value pairs, line items, highlighted extracted text, and detected tables.
+
+ :::image type="content" source="media/receipts-example.jpg" alt-text="Screenshot of the layout model analyze results operation.":::
+
+> [!NOTE]
+> The [Sample Labeling tool](https://fott-2-1.azurewebsites.net/) does not support the BMP file format. This is a limitation of the tool, not the Document Intelligence service.
+++
+## Supported languages and locales
+
+>[!NOTE]
+> Document Intelligence auto-detects language and locale data.
+
+### [2023-02-28-preview](#tab/2023-02-28-preview)
+
+#### Thermal receipts (retail, meal, parking, etc.)
+
+| Supported Languages | Details |
+|:--|:-:|
+|English|United States (`en-US`), Australia (`en-AU`), Canada (`en-CA`), United Kingdom (`en-GB`), India (`en-IN`), United Arab Emirates (`en-AE`)|
+|Croatian|Croatia (`hr-HR`)|
+|Czech|Czechia (`cs-CZ`)|
+|Danish|Denmark (`da-DK`)|
+|Dutch|Netherlands (`nl-NL`)|
+|Finnish|Finland (`fi-FI`)|
+|French|Canada (`fr-CA`), France (`fr-FR`)|
+|German|Germany (`de-DE`)|
+|Hungarian|Hungary (`hu-HU`)|
+|Italian|Italy (`it-IT`)|
+|Japanese|Japan (`ja-JP`)|
+|Latvian|Latvia (`lv-LV`)|
+|Lithuanian|Lithuania (`lt-LT`)|
+|Norwegian|Norway (`no-NO`)|
+|Portuguese|Brazil (`pt-BR`), Portugal (`pt-PT`)|
+|Spanish|Spain (`es-ES`)|
+|Swedish|Sweden (`sv-SE`)|
+|Vietnamese|Vietnam (`vi-VN`)|
+
+#### Hotel receipts
+
+| Supported Languages | Details |
+|:--|:-:|
+|English|United States (`en-US`)|
+|French|France (`fr-FR`)|
+|German|Germany (`de-DE`)|
+|Italian|Italy (`it-IT`)|
+|Japanese|Japan (`ja-JP`)|
+|Portuguese|Portugal (`pt-PT`)|
+|Spanish|Spain (`es-ES`)|
+
+### [**2022-08-31 (GA)**](#tab/2022-08-31)
+
+#### Thermal receipts (retail, meal, parking, etc.)
+
+| Supported Languages | Details |
+|:--|:-:|
+|English|United States (`en-US`), Australia (`en-AU`), Canada (`en-CA`), United Kingdom (`en-GB`), India (`en-IN`), United Arab Emirates (`en-AE`)|
+|Croatian|Croatia (`hr-HR`)|
+|Czech|Czechia (`cs-CZ`)|
+|Danish|Denmark (`da-DK`)|
+|Dutch|Netherlands (`nl-NL`)|
+|Finnish|Finland (`fi-FI`)|
+|French|Canada (`fr-CA`), France (`fr-FR`)|
+|German|Germany (`de-DE`)|
+|Hungarian|Hungary (`hu-HU`)|
+|Italian|Italy (`it-IT`)|
+|Japanese|Japan (`ja-JP`)|
+|Latvian|Latvia (`lv-LV`)|
+|Lithuanian|Lithuania (`lt-LT`)|
+|Norwegian|Norway (`no-NO`)|
+|Portuguese|Brazil (`pt-BR`), Portugal (`pt-PT`)|
+|Spanish|Spain (`es-ES`)|
+|Swedish|Sweden (`sv-SE`)|
+|Vietnamese|Vietnam (`vi-VN`)|
+
+#### Hotel receipts
+
+| Supported Languages | Details |
+|:--|:-:|
+|English|United States (`en-US`)|
+|French|France (`fr-FR`)|
+|German|Germany (`de-DE`)|
+|Italian|Italy (`it-IT`)|
+|Japanese|Japan (`ja-JP`)|
+|Portuguese|Portugal (`pt-PT`)|
+|Spanish|Spain (`es-ES`)|
+++
+## Supported languages and locales v2.1
+
+>[!NOTE]
+ > It's not necessary to specify a locale. This is an optional parameter. The Document Intelligence deep-learning technology will auto-detect the language of the text in your image.
+
+| Model | Language - Locale code | Default |
+|--|:-|:-|
+|Receipt| <ul><li>English (United States) - en-US</li><li>English (Australia) - en-AU</li><li>English (Canada) - en-CA</li><li>English (United Kingdom) - en-GB</li><li>English (India) - en-IN</li></ul> | Autodetected |
++
+## Field extraction
++
+|Name| Type | Description | Standardized output |
+|:--|:-|:-|:-|
+| ReceiptType | String | Type of sales receipt | Itemized |
+| MerchantName | String | Name of the merchant issuing the receipt | |
+| MerchantPhoneNumber | phoneNumber | Listed phone number of merchant | +1 xxx xxx xxxx |
+| MerchantAddress | String | Listed address of merchant | |
+| TransactionDate | Date | Date the receipt was issued | yyyy-mm-dd |
+| TransactionTime | Time | Time the receipt was issued | hh-mm-ss (24-hour) |
+| Total | Number (USD)| Full transaction total of receipt | Two-decimal float|
+| Subtotal | Number (USD) | Subtotal of receipt, often before taxes are applied | Two-decimal float|
+ | Tax | Number (USD) | Total tax on receipt (often sales tax or equivalent). **Renamed to "TotalTax" in 2022-06-30 version**. | Two-decimal float |
+| Tip | Number (USD) | Tip included by buyer | Two-decimal float|
+| Items | Array of objects | Extracted line items, with name, quantity, unit price, and total price extracted | |
+| Name | String | Item description. **Renamed to "Description" in 2022-06-30 version**. | |
+| Quantity | Number | Quantity of each item | Two-decimal float |
+| Price | Number | Individual price of each item unit| Two-decimal float |
+| TotalPrice | Number | Total price of line item | Two-decimal float |
+++
+ Document Intelligence v3.0 introduces several new features and capabilities. In addition to thermal receipts, the **Receipt** model supports single-page hotel receipt processing and tax detail extraction for all receipt types.
+
+### [**2022-08-31 (GA)**](#tab/2022-08-31)
+
+#### Thermal receipts (receipt, receipt.retailMeal, receipt.creditCard, receipt.gas, receipt.parking)
+
+| Field | Type | Description | Example |
+|:|:--|:|:--|
+|`MerchantName`|`string`|Name of the merchant issuing the receipt|Contoso|
+|`MerchantPhoneNumber`|`phoneNumber`|Listed phone number of merchant|987-654-3210|
+|`MerchantAddress`|`address`|Listed address of merchant|123 Main St. Redmond WA 98052|
+|`Total`|`number`|Full transaction total of receipt|$14.34|
+|`TransactionDate`|`date`|Date the receipt was issued|June 06, 2019|
+|`TransactionTime`|`time`|Time the receipt was issued|4:49 PM|
+|`Subtotal`|`number`|Subtotal of receipt, often before taxes are applied|$12.34|
+|`TotalTax`|`number`|Tax on receipt, often sales tax or equivalent|$2.00|
+|`Tip`|`number`|Tip included by buyer|$1.00|
+|`Items`|`array`|||
+|`Items.*`|`object`|Extracted line item|1<br>Surface Pro 6<br>$999.00<br>$999.00|
+|`Items.*.TotalPrice`|`number`|Total price of line item|$999.00|
+|`Items.*.Description`|`string`|Item description|Surface Pro 6|
+|`Items.*.Quantity`|`number`|Quantity of each item|1|
+|`Items.*.Price`|`number`|Individual price of each item unit|$999.00|
+|`Items.*.ProductCode`|`string`|Product code, product number, or SKU associated with the specific line item|A123|
+|`Items.*.QuantityUnit`|`string`|Quantity unit of each item||
+|`TaxDetails`|`array`|||
+|`TaxDetails.*`|`object`|Extracted line item|1<br>Surface Pro 6<br>$999.00<br>$999.00|
+|`TaxDetails.*.Amount`|`currency`|The amount of the tax detail|$999.00|
+
+#### Hotel receipts (receipt.hotel)
+
+| Field | Type | Description | Example |
+|:|:--|:|:--|
+|`MerchantName`|`string`|Name of the merchant issuing the receipt|Contoso|
+|`MerchantPhoneNumber`|`phoneNumber`|Listed phone number of merchant|987-654-3210|
+|`MerchantAddress`|`address`|Listed address of merchant|123 Main St. Redmond WA 98052|
+|`Total`|`number`|Full transaction total of receipt|$14.34|
+|`ArrivalDate`|`date`|Date of arrival|27Mar21|
+|`DepartureDate`|`date`|Date of departure|28Mar21|
+|`Currency`|`string`|Currency unit of receipt amounts (ISO 4217), or 'MIXED' if multiple values are found|USD|
+|`MerchantAliases`|`array`|||
+|`MerchantAliases.*`|`string`|Alternative name of merchant|Contoso (R)|
+|`Items`|`array`|||
+|`Items.*`|`object`|Extracted line item|1<br>Surface Pro 6<br>$999.00<br>$999.00|
+|`Items.*.TotalPrice`|`number`|Total price of line item|$999.00|
+|`Items.*.Description`|`string`|Item description|Room Charge|
+|`Items.*.Date`|`date`|Item date|27Mar21|
+|`Items.*.Category`|`string`|Item category|Room|
+
+### [2023-02-28-preview](#tab/2023-02-28-preview)
+
+#### Thermal receipts (receipt, receipt.retailMeal, receipt.creditCard, receipt.gas, receipt.parking)
+| Field | Type | Description | Example |
+|:|:--|:|:--|
+|`MerchantName`|`string`|Name of the merchant issuing the receipt|Contoso|
+|`MerchantPhoneNumber`|`phoneNumber`|Listed phone number of merchant|987-654-3210|
+|`MerchantAddress`|`address`|Listed address of merchant|123 Main St. Redmond WA 98052|
+|`Total`|`number`|Full transaction total of receipt|$14.34|
+|`TransactionDate`|`date`|Date the receipt was issued|June 06, 2019|
+|`TransactionTime`|`time`|Time the receipt was issued|4:49 PM|
+|`Subtotal`|`number`|Subtotal of receipt, often before taxes are applied|$12.34|
+|`TotalTax`|`number`|Tax on receipt, often sales tax or equivalent|$2.00|
+|`Tip`|`number`|Tip included by buyer|$1.00|
+|`Items`|`array`|||
+|`Items.*`|`object`|Extracted line item|1<br>Surface Pro 6<br>$999.00<br>$999.00|
+|`Items.*.TotalPrice`|`number`|Total price of line item|$999.00|
+|`Items.*.Description`|`string`|Item description|Surface Pro 6|
+|`Items.*.Quantity`|`number`|Quantity of each item|1|
+|`Items.*.Price`|`number`|Individual price of each item unit|$999.00|
+|`Items.*.ProductCode`|`string`|Product code, product number, or SKU associated with the specific line item|A123|
+|`Items.*.QuantityUnit`|`string`|Quantity unit of each item||
+|`TaxDetails`|`array`|||
+|`TaxDetails.*`|`object`|Extracted line item|1<br>Surface Pro 6<br>$999.00<br>$999.00|
+|`TaxDetails.*.Amount`|`currency`|The amount of the tax detail|$999.00|
+
+#### Hotel receipts (receipt.hotel)
+
+| Field | Type | Description | Example |
+|:|:--|:|:--|
+|`MerchantName`|`string`|Name of the merchant issuing the receipt|Contoso|
+|`MerchantPhoneNumber`|`phoneNumber`|Listed phone number of merchant|987-654-3210|
+|`MerchantAddress`|`address`|Listed address of merchant|123 Main St. Redmond WA 98052|
+|`Total`|`number`|Full transaction total of receipt|$14.34|
+|`ArrivalDate`|`date`|Date of arrival|27Mar21|
+|`DepartureDate`|`date`|Date of departure|28Mar21|
+|`Currency`|`string`|Currency unit of receipt amounts (ISO 4217), or 'MIXED' if multiple values are found|USD|
+|`MerchantAliases`|`array`|||
+|`MerchantAliases.*`|`string`|Alternative name of merchant|Contoso (R)|
+|`Items`|`array`|||
+|`Items.*`|`object`|Extracted line item|1<br>Surface Pro 6<br>$999.00<br>$999.00|
+|`Items.*.TotalPrice`|`number`|Total price of line item|$999.00|
+|`Items.*.Description`|`string`|Item description|Room Charge|
+|`Items.*.Date`|`date`|Item date|27Mar21|
+|`Items.*.Category`|`string`|Item category|Room|
++++++
+### Migration guide and REST API v3.0
+
+* Follow our [**Document Intelligence v3.0 migration guide**](v3-migration-guide.md) to learn how to use the v3.0 version in your applications and workflows.
++
+## Next steps
++
+* Try processing your own forms and documents with the [Document Intelligence Studio](https://formrecognizer.appliedai.azure.com/studio)
+
+* Complete a [Document Intelligence quickstart](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-3.0.0&preserve-view=true) and get started creating a document processing app in the development language of your choice.
+++
+* Try processing your own forms and documents with the [Document Intelligence Sample Labeling tool](https://fott-2-1.azurewebsites.net/)
+
+* Complete a [Document Intelligence quickstart](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-2.1.0&preserve-view=true) and get started creating a document processing app in the development language of your choice.
+
ai-services Concept W2 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/document-intelligence/concept-w2.md
+
+ Title: Automated W-2 form processing - Document Intelligence
+
+description: Use the Document Intelligence prebuilt W-2 model to automate extraction of W2 form data.
+++++ Last updated : 07/18/2023+
+monikerRange: 'doc-intel-3.0.0'
+++
+# Document Intelligence W-2 form model
+
+**This article applies to:** ![Document Intelligence v3.0 checkmark](media/yes-icon.png) **Document Intelligence v3.0**.
+
+The Document Intelligence W-2 model combines Optical Character Recognition (OCR) with deep learning models to analyze and extract information reported on [US Internal Revenue Service (IRS) tax forms](https://www.irs.gov/forms-pubs/about-form-w-2). A W-2 tax form is a multipart form divided into state and federal sections consisting of more than 14 boxes detailing an employee's income from the previous year. The W-2 tax form is a key document used in employees' federal and state tax filings, as well as other processes like mortgage loans and Social Security Administration (SSA) benefits. The Document Intelligence W-2 model supports both single and multiple standard and customized forms from 2018 to the present.
+
+## Automated W-2 form processing
+
+An employer sends form W-2, also known as the Wage and Tax Statement, to each employee and the Internal Revenue Service (IRS) at the end of the year. A W-2 form reports employees' annual wages and the amount of taxes withheld from their paychecks. The IRS also uses W-2 forms to track individuals' tax obligations. The Social Security Administration (SSA) uses the information on this and other forms to compute the Social Security benefits for all workers.
+
+***Sample W-2 tax form processed using Document Intelligence Studio***
++
+## Development options
+
+Document Intelligence v3.0 supports the following tools:
+
+| Feature | Resources | Model ID |
+|-|-|--|
+|**W-2 model**|<ul><li> [**Document Intelligence Studio**](https://formrecognizer.appliedai.azure.com)</li><li>[**REST API**](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-2022-08-31/operations/AnalyzeDocument)</li><li>[**C# SDK**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-3.0.0&preserve-view=true#prebuilt-model)</li><li>[**Python SDK**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-3.0.0&preserve-view=true#prebuilt-model)</li><li>[**Java SDK**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-3.0.0&preserve-view=true#prebuilt-model)</li><li>[**JavaScript SDK**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-3.0.0&preserve-view=true#prebuilt-model)</li></ul>|**prebuilt-tax.us.w2**|
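+For orientation, a hedged Python SDK sketch for the W-2 model follows; the endpoint, key, and file name are placeholders, and the nested field names mirror the field extraction table later in this article.
+
+```python
+# Hedged sketch: analyze a W-2 form with the prebuilt-tax.us.w2 model and read a
+# few fields. Replace the endpoint, key, and file name placeholders.
+from azure.ai.formrecognizer import DocumentAnalysisClient
+from azure.core.credentials import AzureKeyCredential
+
+client = DocumentAnalysisClient(
+    endpoint="https://<your-resource>.cognitiveservices.azure.com",
+    credential=AzureKeyCredential("<your-key>"),
+)
+
+with open("w2-sample.png", "rb") as document:
+    poller = client.begin_analyze_document("prebuilt-tax.us.w2", document)
+result = poller.result()
+
+for w2 in result.documents:
+    employee = w2.fields.get("Employee")
+    if employee:
+        # Object-valued fields expose their subfields as a dict through .value.
+        name = employee.value.get("Name")
+        if name:
+            print("Employee name:", name.value)
+    wages = w2.fields.get("WagesTipsAndOtherCompensation")
+    if wages:
+        print("Box 1 wages:", wages.value)
+```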
+
+### Try W-2 data extraction
+
+Try extracting data from W-2 forms using the Document Intelligence Studio. You need the following resources:
+
+* An Azure subscription. You can [create one for free](https://azure.microsoft.com/free/cognitive-services/).
+
+* A [Form Recognizer instance (Document Intelligence forthcoming)](https://portal.azure.com/#create/Microsoft.CognitiveServicesFormRecognizer) in the Azure portal. You can use the free pricing tier (`F0`) to try the service. After your resource deploys, select **Go to resource** to get your key and endpoint.
+
+ :::image type="content" source="media/containers/keys-and-endpoint.png" alt-text="Screenshot of keys and endpoint location in the Azure portal.":::
+
+#### Document Intelligence Studio
+
+> [!NOTE]
+> Document Intelligence Studio is available with v3.0 API.
+
+1. On the [Document Intelligence Studio home page](https://formrecognizer.appliedai.azure.com/studio), select **W-2**.
+
+1. You can analyze the sample W-2 document or select the **+ Add** button to upload your own sample.
+
+1. Select the **Analyze** button:
+
+ :::image type="content" source="media/studio/w2-analyze.png" alt-text="Screenshot: analyze W-2 window in the Document Intelligence Studio.":::
+
+ > [!div class="nextstepaction"]
+ > [Try Document Intelligence Studio](https://formrecognizer.appliedai.azure.com/studio/prebuilt?formType=tax.us.w2)
+
+## Input requirements
++
+## Supported languages and locales
+
+| Model | Language and locale code | Default |
+|--|:--|:--|
+|prebuilt-tax.us.w2|<ul><li>English (United States)</li></ul>|English (United States), en-US|
+
+## Field extraction
+
+|Name| Box | Type | Description | Standardized output|
+|:--|:-|:-|:-|:-|
+| Employee.SocialSecurityNumber | a | String | Employee's Social Security Number (SSN). | 123-45-6789 |
+| Employer.IdNumber | b | String | Employer's ID number (EIN), the business equivalent of a social security number| 12-1234567 |
+| Employer.Name | c | String | Employer's name | Contoso |
+| Employer.Address | c | String | Employer's address (with city) | 123 Example Street Sample City, CA |
+| Employer.ZipCode | c | String | Employer's zip code | 12345 |
+| ControlNumber | d | String | A code identifying the unique W-2 in the employer's records | R3D1 |
+| Employee.Name | e | String | Full name of the employee | Henry Ross|
+| Employee.Address | f | String | Employee's address (with city) | 123 Example Street Sample City, CA |
+| Employee.ZipCode | f | String | Employee's zip code | 12345 |
+| WagesTipsAndOtherCompensation | 1 | Number | A summary of your pay, including wages, tips and other compensation | 50000 |
+| FederalIncomeTaxWithheld | 2 | Number | Federal income tax withheld | 1111 |
+| SocialSecurityWages | 3 | Number | Social security wages | 35000 |
+| SocialSecurityTaxWithheld | 4 | Number | Social security tax withheld | 1111 |
+| MedicareWagesAndTips | 5 | Number | Medicare wages and tips | 45000 |
+| MedicareTaxWithheld | 6 | Number | Medicare tax withheld | 1111 |
+| SocialSecurityTips | 7 | Number | Social security tips | 1111 |
+| AllocatedTips | 8 | Number | Allocated tips | 1111 |
+| Verification&#8203;Code | 9 | String | Verification Code on Form W-2 | A123-B456-C789-DXYZ |
+| DependentCareBenefits | 10 | Number | Dependent care benefits | 1111 |
+| NonqualifiedPlans | 11 | Number | The nonqualified plan, a type of retirement savings plan that is employer-sponsored and tax-deferred | 1111 |
+| AdditionalInfo | | Array of objects | An array of LetterCode and Amount | |
+| LetterCode | 12a, 12b, 12c, 12d | String | Letter code. Refer to [IRS/W-2](https://www.irs.gov/pub/irs-prior/fw2--2014.pdf) for the semantics of the code values. | D |
+| Amount | 12a, 12b, 12c, 12d | Number | Amount | 1234 |
+| IsStatutoryEmployee | 13 | String | Whether the Statutory employee box is checked or not | true |
+| IsRetirementPlan | 13 | String | Whether the RetirementPlan box is checked or not | true |
+| IsThirdPartySickPay | 13 | String | Whether the ThirdPartySickPay box is checked or not | false |
+| Other | 14 | String | Other information that employers may report in this field | |
+| StateTaxInfos | | Array of objects | An array of state tax info including State, EmployerStateIdNumber, StateIncomeTax, StateWagesTipsEtc | |
+| State | 15 | String | State | CA |
+| EmployerStateIdNumber | 15 | String | Employer state number | 123-123-1234 |
+| StateWagesTipsEtc | 16 | Number | State wages, tips, etc. | 50000 |
+| StateIncomeTax | 17 | Number | State income tax | 1535 |
+| LocalTaxInfos | | Array of objects | An array of local income tax info including LocalWagesTipsEtc, LocalIncomeTax, LocalityName | |
+| LocalWagesTipsEtc | 18 | Number | Local wages, tips, etc. | 50000 |
+| LocalIncomeTax | 19 | Number | Local income tax | 750 |
+| LocalityName | 20 | String | Locality name | CLEVELAND |
+| W2Copy | | String | Copy of W-2 forms A, B, C, D, 1, or 2 | Copy A For Social Security Administration |
+| TaxYear | | Number | Tax year | 2020 |
+| W2FormVariant | | String | The variants of W-2 forms, including *W-2*, *W-2AS*, *W-2CM*, *W-2GU*, *W-2VI* | W-2 |
+
+### Migration guide and REST API v3.0
+
+* Follow our [**Document Intelligence v3.0 migration guide**](v3-migration-guide.md) to learn how to use the v3.0 version in your applications and workflows.
+
+* Explore our [**REST API**](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-2022-08-31/operations/AnalyzeDocument) to learn more about the v3.0 version and new capabilities.
+
+## Next steps
+
+* Try processing your own forms and documents with the [Document Intelligence Studio](https://formrecognizer.appliedai.azure.com/studio)
+
+* Complete a [Document Intelligence quickstart](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-3.0.0&preserve-view=true) and get started creating a document processing app in the development language of your choice.
ai-services Configuration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/document-intelligence/containers/configuration.md
+
+ Title: Configure Document Intelligence containers
+
+description: Learn how to configure the Document Intelligence container to parse form and table data.
+++++ Last updated : 07/18/2023+
+monikerRange: '<=doc-intel-3.0.0'
++
+# Configure Document Intelligence containers
+
+**This article applies to:** ![Document Intelligence v2.1 checkmark](../media/yes-icon.png) **Document Intelligence v2.1**.
+
+With Document Intelligence containers, you can build an application architecture that's optimized to take advantage of both robust cloud capabilities and edge locality. Containers provide a minimalist, isolated environment that can be easily deployed on-premises and in the cloud. In this article, we show you how to configure the Document Intelligence container run-time environment by using the `docker compose` command arguments. Document Intelligence features are supported by six Document Intelligence feature containers: **Layout**, **Business Card**, **ID Document**, **Receipt**, **Invoice**, and **Custom**. These containers have both required and optional settings. For a few examples, see the [Example docker-compose.yml file](#example-docker-composeyml-file) section.
+
+## Configuration settings
+
+Each container has the following configuration settings:
+
+|Required|Setting|Purpose|
+|--|--|--|
+|Yes|[Key](#key-and-billing-configuration-setting)|Tracks billing information.|
+|Yes|[Billing](#key-and-billing-configuration-setting)|Specifies the endpoint URI of the service resource on Azure. For more information, _see_ [Billing](install-run.md#billing). For more information and a complete list of regional endpoints, _see_ [Custom subdomain names for Azure AI services](../../../ai-services/cognitive-services-custom-subdomains.md).|
+|Yes|[Eula](#eula-setting)| Indicates that you've accepted the license for the container.|
+|No|[ApplicationInsights](#applicationinsights-setting)|Enables adding [Azure Application Insights](/azure/application-insights) customer content support to your container.|
+|No|[Fluentd](#fluentd-settings)|Writes log and, optionally, metric data to a Fluentd server.|
+|No|HTTP Proxy|Configures an HTTP proxy for making outbound requests.|
+|No|[Logging](#logging-settings)|Provides ASP.NET Core logging support for your container. |
+
+> [!IMPORTANT]
+> The [`Key`](#key-and-billing-configuration-setting), [`Billing`](#key-and-billing-configuration-setting), and [`Eula`](#eula-setting) settings are used together. You must provide valid values for all three settings; otherwise, your containers won't start. For more information about using these configuration settings to instantiate a container, see [Billing](install-run.md#billing).
+
+## Key and Billing configuration setting
+
+The `Key` setting specifies the Azure resource key that's used to track billing information for the container. The value for the Key must be a valid key for the resource that's specified for `Billing` in the "Billing configuration setting" section.
+
+The `Billing` setting specifies the endpoint URI of the resource on Azure that's used to meter billing information for the container. The value for this configuration setting must be a valid endpoint URI for a resource on Azure. The container reports usage about every 10 to 15 minutes.
+
+ You can find these settings in the Azure portal on the **Keys and Endpoint** page.
+
+ :::image type="content" source="../media/containers/keys-and-endpoint.png" alt-text="Screenshot: Azure portal keys and endpoint page.":::
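+
+As a quick illustration, the following `docker run` sketch shows how these values are supplied when a container starts; it uses the Layout image, placeholder values that you replace with your own, and the argument style used in the `docker run` examples elsewhere in the Document Intelligence container documentation (the same values can also be set as environment variables in `docker compose`):
+
+```docker
+# Sketch only: replace the placeholders with your resource's endpoint URI and key.
+docker run --rm -it -p 5000:5000 \
+mcr.microsoft.com/azure-cognitive-services/form-recognizer/layout \
+eula=accept \
+billing={FORM_RECOGNIZER_ENDPOINT_URI} \
+key={FORM_RECOGNIZER_KEY}
+```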
+
+## EULA setting
++
+## ApplicationInsights setting
++
+## Fluentd settings
++
+## HTTP proxy credentials settings
++
+## Logging settings
++
+## Volume settings
+
+Use [**volumes**](https://docs.docker.com/storage/volumes/) to read and write data to and from the container. Volumes are the preferred mechanism for persisting data generated and used by Docker containers. You can specify an input mount or an output mount by including the `volumes` option and specifying the `type` (bind), `source` (path to the folder), and `target` (file path parameter).
+
+The Document Intelligence container requires an input volume and an output volume. The input volume can be read-only (`ro`), and it's required for access to the data that's used for training and scoring. The output volume has to be writable, and you use it to store the models and temporary data.
+
+The exact syntax of the host volume location varies depending on the host operating system. Additionally, the volume location of the [host computer](install-run.md#host-computer-requirements) might not be accessible because of a conflict between the Docker service account permissions and the host mount location permissions.
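+
+As a sketch only, bind mounts can also be passed on the command line with the `--mount` option, using the same `type`, `source`, and `target` parameters; every path below is a placeholder that you replace with values appropriate for your host and container:
+
+```docker
+# Sketch: replace the host and container paths, endpoint, and key with your own values.
+docker run --rm -it -p 5000:5000 \
+--mount type=bind,source={HOST_INPUT_FOLDER},target={CONTAINER_INPUT_FOLDER},readonly \
+--mount type=bind,source={HOST_OUTPUT_FOLDER},target={CONTAINER_OUTPUT_FOLDER} \
+mcr.microsoft.com/azure-cognitive-services/form-recognizer/custom-supervised \
+eula=accept \
+billing={FORM_RECOGNIZER_ENDPOINT_URI} \
+key={FORM_RECOGNIZER_KEY}
+```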
+
+## Example docker-compose.yml file
+
+The **docker compose** method is built from three steps:
+
+ 1. Create a Dockerfile.
+ 1. Define the services in a **docker-compose.yml** so they can be run together in an isolated environment.
+ 1. Run `docker-compose up` to start and run your services.
+
+### Single container example
+
+In this example, enter {FORM_RECOGNIZER_ENDPOINT_URI} and {FORM_RECOGNIZER_KEY} values for your Layout container instance.
+
+#### **Layout container**
+
+```yml
+version: "3.9"
+services:
+  azure-cognitive-service-layout:
+    container_name: azure-cognitive-service-layout
+    image: mcr.microsoft.com/azure-cognitive-services/form-recognizer/layout
+    environment:
+      - EULA=accept
+      - billing={FORM_RECOGNIZER_ENDPOINT_URI}
+      - key={FORM_RECOGNIZER_KEY}
+    ports:
+      - "5000"
+    networks:
+      - ocrvnet
+networks:
+  ocrvnet:
+    driver: bridge
+```
+
+### Multiple containers example
+
+#### **Receipt and OCR Read containers**
+
+In this example, enter {FORM_RECOGNIZER_ENDPOINT_URI} and {FORM_RECOGNIZER_KEY} values for your Receipt container and {COMPUTER_VISION_ENDPOINT_URI} and {COMPUTER_VISION_KEY} values for your Azure AI Vision Read container.
+
+```yml
+version: "3"
+services:
+  azure-cognitive-service-receipt:
+    container_name: azure-cognitive-service-receipt
+    image: cognitiveservicespreview.azurecr.io/microsoft/cognitive-services-form-recognizer-receipt:2.1
+    environment:
+      - EULA=accept
+      - billing={FORM_RECOGNIZER_ENDPOINT_URI}
+      - key={FORM_RECOGNIZER_KEY}
+      - AzureCognitiveServiceReadHost=http://azure-cognitive-service-read:5000
+    ports:
+      - "5000:5050"
+    networks:
+      - ocrvnet
+  azure-cognitive-service-read:
+    container_name: azure-cognitive-service-read
+    image: mcr.microsoft.com/azure-cognitive-services/vision/read:3.2
+    environment:
+      - EULA=accept
+      - billing={COMPUTER_VISION_ENDPOINT_URI}
+      - key={COMPUTER_VISION_KEY}
+    networks:
+      - ocrvnet
+
+networks:
+  ocrvnet:
+    driver: bridge
+```
+
+## Next steps
+
+> [!div class="nextstepaction"]
+> [Learn more about running multiple containers and the docker compose command](install-run.md)
+
+* [Azure container instance recipe](../../../ai-services/containers/azure-container-instance-recipe.md)
ai-services Disconnected https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/document-intelligence/containers/disconnected.md
+
+ Title: Use Document Intelligence containers in disconnected environments
+
+description: Learn how to run Azure AI services Docker containers disconnected from the internet.
+++++ Last updated : 07/18/2023+
+monikerRange: '<=doc-intel-3.0.0'
+++
+# Containers in disconnected environments
+
+**This article applies to:** ![Document Intelligence v2.1 checkmark](../media/yes-icon.png) **Document Intelligence v2.1**.
+
+<!-- markdownlint-disable MD036 -->
+<!-- markdownlint-disable MD001 -->
+
+Azure AI Document Intelligence containers allow you to use Document Intelligence APIs with the benefits of containerization. Disconnected containers are offered through commitment tier pricing at a discounted rate compared to pay-as-you-go pricing. With commitment tier pricing, you can commit to using Document Intelligence features for a fixed fee, at a predictable total cost, based on the needs of your workload.
+
+## Get started
+
+Before attempting to run a Docker container in an offline environment, make sure you're familiar with the following requirements to successfully download and use the container:
+
+* Host computer requirements and recommendations.
+* The Docker `pull` command to download the container.
+* How to validate that a container is running.
+* How to send queries to the container's endpoint, once it's running.
+
+## Request access to use containers in disconnected environments
+
+Before you can use Document Intelligence containers in disconnected environments, you must first fill out and [submit a request form](../../../ai-services/containers/disconnected-containers.md#request-access-to-use-containers-in-disconnected-environments) and [purchase a commitment plan](../../../ai-services/containers/disconnected-containers.md#purchase-a-commitment-plan-to-use-containers-in-disconnected-environments).
+
+## Gather required parameters
+
+There are three required parameters for all Azure AI services' containers:
+
+* The end-user license agreement (EULA) must be present with a value of *accept*.
+* The endpoint URL for your resource from the Azure portal.
+* The API key for your resource from the Azure portal.
+
+Both the endpoint URL and API key are needed when you first run the container to configure it for disconnected usage. You can find the key and endpoint on the **Key and endpoint** page for your resource in the Azure portal:
+
+ :::image type="content" source="../media/containers/keys-and-endpoint.png" alt-text="Screenshot: Azure portal keys and endpoint page.":::
+
+> [!IMPORTANT]
+> You will only use your key and endpoint to configure the container to run in a disconnected environment. After you configure the container, you won't need the key and endpoint values to send API requests. Store them securely, for example, using Azure Key Vault. Only one key is necessary for this process.
+
+## Download a Docker container with `docker pull`
+
+Download the Docker container that has been approved to run in a disconnected environment. For example:
+
+| Docker pull command | Value | Format |
+|--|--|--|
+|&bullet; **`docker pull [image]`**</br>&bullet; **`docker pull [image]:latest`**|The latest container image.|&bullet; mcr.microsoft.com/azure-cognitive-services/form-recognizer/layout</br> </br>&bullet; mcr.microsoft.com/azure-cognitive-services/form-recognizer/invoice:latest |
+|&bullet; **`docker pull [image]:[version]`** | A specific container image. |docker pull mcr.microsoft.com/azure-cognitive-services/form-recognizer/receipt:2.1-preview |
+
+ **Example Docker pull command**
+
+```docker
+docker pull mcr.microsoft.com/azure-cognitive-services/form-recognizer/invoice:latest
+```
+
+## Configure the container to be run in a disconnected environment
+
+Now that you've downloaded your container, you need to execute the `docker run` command with the following parameter:
+
+* **`DownloadLicense=True`**. This parameter downloads a license file that enables your Docker container to run when it isn't connected to the internet. It also contains an expiration date, after which the license file is invalid to run the container. You can only use the license file with the corresponding approved container.
+
+> [!IMPORTANT]
+>The `docker run` command will generate a template that you can use to run the container. The template contains parameters you'll need for the downloaded models and configuration file. Make sure you save this template.
+
+The following example shows the formatting for the `docker run` command to use with placeholder values. Replace these placeholder values with your own values.
+
+| Placeholder | Value | Format or example |
+|--|--|--|
+| `{IMAGE}` | The container image you want to use. | `mcr.microsoft.com/azure-cognitive-services/form-recognizer/invoice` |
+| `{LICENSE_MOUNT}` | The path where the license is downloaded, and mounted. | `/host/license:/path/to/license/directory` |
+| `{ENDPOINT_URI}` | The endpoint for authenticating your service request. You can find it on your resource's **Key and endpoint** page, on the Azure portal. | `https://<your-custom-subdomain>.cognitiveservices.azure.com` |
+| `{API_KEY}` | The key for your Document Intelligence resource. You can find it on your resource's **Key and endpoint** page, on the Azure portal. |`{string}`|
+| `{CONTAINER_LICENSE_DIRECTORY}` | Location of the license folder on the container's local filesystem. | `/path/to/license/directory` |
+
+ **Example `docker run` command**
+
+```docker
+
+docker run --rm -it -p 5000:5000 \
+-v {LICENSE_MOUNT} \
+{IMAGE} \
+eula=accept \
+billing={ENDPOINT_URI} \
+apikey={API_KEY} \
+DownloadLicense=True \
+Mounts:License={CONTAINER_LICENSE_DIRECTORY}
+```
+
+After you've configured the container, use the next section to run the container in your environment with the license, and appropriate memory and CPU allocations.
+
+## Document Intelligence container models and configuration
+
+After you've [configured the container](#configure-the-container-to-be-run-in-a-disconnected-environment), the values for the downloaded Document Intelligence models and container configuration will be generated and displayed in the container output.
+
+## Run the container in a disconnected environment
+
+Once the license file has been downloaded, you can run the container in a disconnected environment with your license, appropriate memory, and suitable CPU allocations. The following example shows the formatting of the `docker run` command with placeholder values. Replace these placeholders values with your own values.
+
+Whenever the container is run, the license file must be mounted to the container and the location of the license folder on the container's local filesystem must be specified with `Mounts:License=`. In addition, an output mount must be specified so that billing usage records can be written.
+
+| Placeholder | Value | Format or example |
+|--|--|--|
+| `{IMAGE}` | The container image you want to use. | `mcr.microsoft.com/azure-cognitive-services/form-recognizer/invoice` |
+| `{MEMORY_SIZE}` | The appropriate size of memory to allocate for your container. | `4g` |
+| `{NUMBER_CPUS}` | The appropriate number of CPUs to allocate for your container. | `4` |
+| `{LICENSE_MOUNT}` | The path where the license is located and mounted. | `/host/license:/path/to/license/directory` |
+| `{OUTPUT_PATH}` | The output path for logging [usage records](#usage-records). | `/host/output:/path/to/output/directory` |
+| `{CONTAINER_LICENSE_DIRECTORY}` | Location of the license folder on the container's local filesystem. | `/path/to/license/directory` |
+| `{CONTAINER_OUTPUT_DIRECTORY}` | Location of the output folder on the container's local filesystem. | `/path/to/output/directory` |
+
+ **Example `docker run` command**
+
+```docker
+docker run --rm -it -p 5000:5000 --memory {MEMORY_SIZE} --cpus {NUMBER_CPUS} \
+-v {LICENSE_MOUNT} \
+-v {OUTPUT_PATH} \
+{IMAGE} \
+eula=accept \
+Mounts:License={CONTAINER_LICENSE_DIRECTORY} \
+Mounts:Output={CONTAINER_OUTPUT_DIRECTORY}
+```
+
+## Other parameters and commands
+
+Here are a few more parameters and commands you may need to run the container.
+
+#### Usage records
+
+When operating Docker containers in a disconnected environment, the container will write usage records to a volume where they're collected over time. You can also call a REST API endpoint to generate a report about service usage.
+
+#### Arguments for storing logs
+
+When run in a disconnected environment, an output mount must be available to the container to store usage logs. For example, you would include `-v /host/output:{OUTPUT_PATH}` and `Mounts:Output={OUTPUT_PATH}` in the following example, replacing `{OUTPUT_PATH}` with the path where the logs are stored:
+
+```Docker
+docker run -v /host/output:{OUTPUT_PATH} ... <image> ... Mounts:Output={OUTPUT_PATH}
+```
+
+#### Get records using the container endpoints
+
+The container provides two endpoints for returning records about its usage.
+
+#### Get all records
+
+The following endpoint provides a report summarizing all of the usage collected in the mounted billing record directory.
+
+```http
+https://<service>/records/usage-logs/
+```
+
+ **Example HTTPS endpoint**
+
+ `http://localhost:5000/records/usage-logs`
+
+The usage-log endpoint returns a JSON response similar to the following example:
+
+```json
+{
+ "apiType": "string",
+ "serviceName": "string",
+ "meters": [
+ {
+ "name": "string",
+ "quantity": 256345435
+ }
+ ]
+}
+```
+
+#### Get records for a specific month
+
+The following endpoint provides a report summarizing usage over a specific month and year.
+
+```HTTP
+https://<service>/records/usage-logs/{MONTH}/{YEAR}
+```
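+
+ **Example HTTPS endpoint** (for example, the March 2023 report)
+
+ `http://localhost:5000/records/usage-logs/03/2023`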
+
+This usage-logs endpoint returns a JSON response similar to the following example:
+
+```json
+{
+ "apiType": "string",
+ "serviceName": "string",
+ "meters": [
+ {
+ "name": "string",
+ "quantity": 56097
+ }
+ ]
+}
+```
+
+## Troubleshooting
+
+Run the container with an output mount and logging enabled. These settings enable the container to generate log files that are helpful for troubleshooting issues that occur while starting or running the container.
+
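+The following `docker run` sketch reuses the placeholders defined earlier in this article and adds console logging at the `Information` level (the `Logging:Console` setting that appears in the Document Intelligence docker compose samples); treat it as a starting point rather than a definitive command:
+
+```docker
+# Sketch: reuse the placeholder values from the run command shown earlier in this article.
+docker run --rm -it -p 5000:5000 --memory {MEMORY_SIZE} --cpus {NUMBER_CPUS} \
+-v {LICENSE_MOUNT} \
+-v {OUTPUT_PATH} \
+{IMAGE} \
+eula=accept \
+Logging:Console:LogLevel:Default=Information \
+Mounts:License={CONTAINER_LICENSE_DIRECTORY} \
+Mounts:Output={CONTAINER_OUTPUT_DIRECTORY}
+```
+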
+> [!TIP]
+> For more troubleshooting information and guidance, see [Disconnected containers Frequently asked questions (FAQ)](../../../ai-services/containers/disconnected-container-faq.yml).
+
+## Next steps
+
+* [Deploy the Sample Labeling tool to an Azure Container Instance (ACI)](../deploy-label-tool.md#deploy-with-azure-container-instances-aci)
+* [Change or end a commitment plan](../../../ai-services/containers/disconnected-containers.md#purchase-a-different-commitment-plan-for-disconnected-containers)
ai-services Image Tags https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/document-intelligence/containers/image-tags.md
+
+ Title: Document Intelligence image tags and release notes
+
+description: A listing of all Document Intelligence container image tags.
+++++ Last updated : 07/18/2023+
+monikerRange: '<=doc-intel-3.0.0'
+++++++
+# Document Intelligence container tags
+
+**This article applies to:** ![Document Intelligence v2.1 checkmark](../media/yes-icon.png) **Document Intelligence v2.1**.
+
+## Feature containers
+
+Document Intelligence containers support the following features:
+
+| Container name | Fully qualified image name |
+|--|--|
+| **Layout** | mcr.microsoft.com/azure-cognitive-services/form-recognizer/layout |
+| **Business Card** | mcr.microsoft.com/azure-cognitive-services/form-recognizer/businesscard |
+| **ID Document** | mcr.microsoft.com/azure-cognitive-services/form-recognizer/id-document |
+| **Receipt** | mcr.microsoft.com/azure-cognitive-services/form-recognizer/receipt |
+| **Invoice** | mcr.microsoft.com/azure-cognitive-services/form-recognizer/invoice |
+| **Custom API** | mcr.microsoft.com/azure-cognitive-services/form-recognizer/custom-api |
+| **Custom Supervised** | mcr.microsoft.com/azure-cognitive-services/form-recognizer/custom-supervised |
+
+## Microsoft container registry (MCR)
+
+Document Intelligence container images can be found within the [**Microsoft Container Registry Catalog**](https://mcr.microsoft.com/v2/_catalog) listing, the primary registry for all Microsoft Published Docker images:
+
+ :::image type="content" source="../media/containers/microsoft-container-registry-catalog.png" alt-text="Screenshot of the Microsoft Container Registry (MCR) catalog list.":::
+
+## Document Intelligence tags
+
+The following tags are available for Document Intelligence:
+
+### [Latest version](#tab/current)
+
+Release notes for `v2.1`:
+
+| Container | Tags | Retrieve image |
+|--|:--|--|
+| **Layout**| &bullet; `latest` </br> &bullet; `2.1-preview`| `docker pull mcr.microsoft.com/azure-cognitive-services/form-recognizer/layout`|
+| **Business Card** | &bullet; `latest` </br> &bullet; `2.1-preview` |`docker pull mcr.microsoft.com/azure-cognitive-services/form-recognizer/businesscard` |
+| **ID Document** | &bullet; `latest` </br> &bullet; `2.1-preview`| `docker pull mcr.microsoft.com/azure-cognitive-services/form-recognizer/id-document`|
+| **Receipt**| &bullet; `latest` </br> &bullet; `2.1-preview`| `docker pull mcr.microsoft.com/azure-cognitive-services/form-recognizer/receipt` |
+| **Invoice**| &bullet; `latest` </br> &bullet; `2.1-preview`|`docker pull mcr.microsoft.com/azure-cognitive-services/form-recognizer/invoice` |
+| **Custom API** | &bullet; `latest` </br> &bullet; `2.1-preview`| `docker pull mcr.microsoft.com/azure-cognitive-services/form-recognizer/custom-api`|
+| **Custom Supervised**| &bullet; `latest` </br> &bullet; `2.1-preview`|`docker pull mcr.microsoft.com/azure-cognitive-services/form-recognizer/custom-supervised` |
+
+### [Previous versions](#tab/previous)
+
+> [!IMPORTANT]
+> The Document Intelligence v1.0 container has been retired.
++++++++
+## Next steps
+
+> [!div class="nextstepaction"]
+> [Install and run Document Intelligence containers](install-run.md)
+>
+
+* [Azure container instance recipe](../../../ai-services/containers/azure-container-instance-recipe.md)
ai-services Install Run https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/document-intelligence/containers/install-run.md
+
+ Title: Install and run Docker containers for Document Intelligence
+
+description: Use the Docker containers for Document Intelligence on-premises to identify and extract key-value pairs, selection marks, tables, and structure from forms and documents.
+++++ Last updated : 07/18/2023+
+monikerRange: '<=doc-intel-3.0.0'
+++
+# Install and run containers
+
+<!-- markdownlint-disable MD024 -->
+<!-- markdownlint-disable MD051 -->
+++
+Azure AI Document Intelligence is an Azure AI service that lets you build automated data processing software using machine-learning technology. Document Intelligence enables you to identify and extract text, key/value pairs, selection marks, table data, and more from your form documents. The results are delivered as structured data that includes the relationships in the original file.
+
+In this article you learn how to download, install, and run Document Intelligence containers. Containers enable you to run the Document Intelligence service in your own environment. Containers are great for specific security and data governance requirements.
+
+* **Read**, **Layout**, **General Document**, **ID Document**, **Receipt**, **Invoice**, and **Custom** models are supported by Document Intelligence v3.0 containers.
+
+* **Business Card** model is currently only supported in the [v2.1 containers](install-run.md?view=doc-intel-2.1.0&preserve-view=true).
+++
+> [!IMPORTANT]
+>
+> Document Intelligence v3.0 containers are now generally available. If you are getting started with containers, consider using the v3 containers.
+
+In this article you learn how to download, install, and run Document Intelligence containers. Containers enable you to run the Document Intelligence service in your own environment. Containers are great for specific security and data governance requirements.
+
+* **Layout**, **Business Card**, **ID Document**, **Receipt**, **Invoice**, and **Custom** models are supported by six Document Intelligence feature containers.
+
+* For Receipt, Business Card and ID Document containers you also need the **Read** OCR container.
++
+> [!IMPORTANT]
+>
+> * To use Document Intelligence containers, you must submit an online request, and have it approved. For more information, _see_ [Request approval to run container](#request-approval-to-run-container).
+
+## Prerequisites
+
+To get started, you need an active [**Azure account**](https://azure.microsoft.com/free/cognitive-services/). If you don't have one, you can [**create a free account**](https://azure.microsoft.com/free/).
+
+You also need the following to use Document Intelligence containers:
+
+| Required | Purpose |
+|--|--|
+| **Familiarity with Docker** | You should have a basic understanding of Docker concepts, like registries, repositories, containers, and container images, as well as knowledge of basic `docker` [terminology and commands](/dotnet/architecture/microservices/container-docker-introduction/docker-terminology). |
+| **Docker Engine installed** | <ul><li>You need the Docker Engine installed on a [host computer](#host-computer-requirements). Docker provides packages that configure the Docker environment on [macOS](https://docs.docker.com/docker-for-mac/), [Windows](https://docs.docker.com/docker-for-windows/), and [Linux](https://docs.docker.com/engine/installation/#supported-platforms). For a primer on Docker and container basics, see the [Docker overview](https://docs.docker.com/engine/docker-overview/).</li><li> Docker must be configured to allow the containers to connect with and send billing data to Azure. </li><li> On **Windows**, Docker must also be configured to support **Linux** containers.</li></ul> |
+|**Document Intelligence resource** | A [**single-service Azure AI Document Intelligence**](https://portal.azure.com/#create/Microsoft.CognitiveServicesFormRecognizer) or [**multi-service**](https://portal.azure.com/#create/Microsoft.CognitiveServicesAllInOne) resource in the Azure portal. To use the containers, you must have the associated key and endpoint URI. Both values are available on the Azure portal Document Intelligence **Keys and Endpoint** page: <ul><li>**{FORM_RECOGNIZER_KEY}**: one of the two available resource keys.<li>**{FORM_RECOGNIZER_ENDPOINT_URI}**: the endpoint for the resource used to track billing information.</li></li></ul>|
+
+|Optional|Purpose|
+|--|--|
+|**Azure CLI (command-line interface)** | The [Azure CLI](/cli/azure/install-azure-cli) enables you to use a set of online commands to create and manage Azure resources. It's available to install in Windows, macOS, and Linux environments and can be run in a Docker container and Azure Cloud Shell. |
+
+You also need an **Azure AI Vision API resource to process business cards, ID documents, or receipts**.
+
+* You can access the Recognize Text feature as either an Azure resource (the REST API or SDK) or a **cognitive-services-recognize-text** [container](../../../ai-services/Computer-vision/computer-vision-how-to-install-containers.md#get-the-container-image).
+
+* The usual [billing](#billing) fees apply.
+
+* If you use the **cognitive-services-recognize-text** container, make sure that your Azure AI Vision key for the Document Intelligence container is the key specified in the Azure AI Vision `docker run` or `docker compose` command for the **cognitive-services-recognize-text** container and your billing endpoint is the container's endpoint (for example, `http://localhost:5000`).
+
+* If you use both the Azure AI Vision container and Document Intelligence container together on the same host, they can't both be started with the default port of **5000**.
+
+* Pass in both the key and endpoints for your Azure AI Vision Azure cloud or Azure AI container:
+
+ * **{COMPUTER_VISION_KEY}**: one of the two available resource keys.
+ * **{COMPUTER_VISION_ENDPOINT_URI}**: the endpoint for the resource used to track billing information.
+
+## Request approval to run container
+
+Complete and submit the [**Azure AI services Application for Gated Services**](https://aka.ms/csgate) to request access to the container.
++
+## Host computer requirements
+
+The host is an x64-based computer that runs the Docker container. It can be a computer on your premises or a Docker hosting service in Azure, such as:
+
+* [Azure Kubernetes Service](../../../aks/index.yml).
+* [Azure Container Instances](../../../container-instances/index.yml).
+* A [Kubernetes](https://kubernetes.io/) cluster deployed to [Azure Stack](/azure-stack/operator). For more information, see [Deploy Kubernetes to Azure Stack](/azure-stack/user/azure-stack-solution-template-kubernetes-deploy).
+
+### Container requirements and recommendations
+
+#### Required supporting containers
+
+The following table lists the supporting container(s) for each Document Intelligence container you download. For more information, see the [Billing](#billing) section.
+
+| Feature container | Supporting container(s) |
+|--|--|
+| **Layout** | Not required |
+| **Business Card** | **Azure AI Vision Read**|
+| **ID Document** | **Azure AI Vision Read** |
+| **Invoice** | **Layout** |
+| **Receipt** |**Azure AI Vision Read** |
+| **Custom** | **Custom API**, **Custom Supervised**, **Lay&#8203;out**|
+
+| Feature container | Supporting container(s) |
+|--|--|
+| **Read** | Not required |
+| **Layout** | Not required|
+| **General Document** | **Lay&#8203;out** |
+| **Invoice** | **Lay&#8203;out**|
+| **Receipt** | **Read** |
+| **ID Document** | **Read**|
+| **Custom Template** | **Lay&#8203;out** |
++
+#### Recommended CPU cores and memory
+
+> [!NOTE]
+>
+> The minimum and recommended values are based on Docker limits and *not* the host machine resources.
++
+##### Document Intelligence containers
+
+| Container | Minimum | Recommended |
+|--|--|--|
+| `Read` | `8` cores, 10-GB memory | `8` cores, 24-GB memory|
+| `Layout` | `8` cores, 16-GB memory | `8` cores, 24-GB memory |
+| `General Document` | `8` cores, 12-GB memory | `8` cores, 24-GB memory|
+| `ID Document` | `8` cores, 8-GB memory | `8` cores, 24-GB memory |
+| `Invoice` | `8` cores, 16-GB memory | `8` cores, 24-GB memory|
+| `Receipt` | `8` cores, 11-GB memory | `8` cores, 24-GB memory |
+| `Custom Template` | `8` cores, 16-GB memory | `8` cores, 24-GB memory|
+++
+##### Read, Layout, and prebuilt containers
+
+| Container | Minimum | Recommended |
+|--|--|--|
+| `Read 3.2` | `8` cores, 16-GB memory | `8` cores, 24-GB memory|
+| `Layout 2.1` | `8` cores, 16-GB memory | `8` cores, 24-GB memory |
+| `Business Card 2.1` | `2` cores, 4-GB memory | `4` cores, 4-GB memory |
+| `ID Document 2.1` | `1` core, 2-GB memory |`2` cores, 2-GB memory |
+| `Invoice 2.1` | `4` cores, 8-GB memory | `8` cores, 8-GB memory |
+| `Receipt 2.1` | `4` cores, 8-GB memory | `8` cores, 8-GB memory |
+
+##### Custom containers
+
+The following host machine requirements are applicable to **train and analyze** requests:
+
+| Container | Minimum | Recommended |
+|--|--|--|
+| Custom API| 0.5 cores, 0.5-GB memory| `1` core, 1-GB memory |
+|Custom Supervised | `4` cores, 2-GB memory | `8` cores, 4-GB memory|
+
+* Each core must be at least 2.6 gigahertz (GHz) or faster.
+* Core and memory correspond to the `--cpus` and `--memory` settings, which are used as part of the `docker compose` or `docker run` command, as shown in the sketch after this list.
+
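+For example, here's a sketch of how the recommended Layout values map onto `docker run` resource flags; the billing placeholders are the same ones used throughout this article, and the same limits can also be expressed in a `docker compose` file:
+
+```docker
+# Sketch: allocate the recommended resources for the Layout container (8 cores, 24-GB memory).
+docker run --rm -it -p 5000:5000 --cpus 8 --memory 24g \
+mcr.microsoft.com/azure-cognitive-services/form-recognizer/layout-3.0 \
+eula=accept \
+billing={FORM_RECOGNIZER_ENDPOINT_URI} \
+apiKey={FORM_RECOGNIZER_KEY}
+```
+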
+> [!TIP]
+> You can use the [docker images](https://docs.docker.com/engine/reference/commandline/images/) command to list your downloaded container images. For example, the following command lists the ID, repository, and tag of each downloaded container image, formatted as a table:
+>
+> ```docker
+> docker images --format "table {{.ID}}\t{{.Repository}}\t{{.Tag}}"
+>
+> IMAGE ID REPOSITORY TAG
+> <image-id> <repository-path/name> <tag-name>
+> ```
+
+## Run the container with the **docker-compose up** command
+
+* Replace the {ENDPOINT_URI} and {API_KEY} values with your resource Endpoint URI and the key from the Azure resource page.
+
+ :::image type="content" source="../media/containers/keys-and-endpoint.png" alt-text="Screenshot: Azure portal keys and endpoint page.":::
+
+* Ensure that the EULA value is set to *accept*.
+
+* The `EULA`, `Billing`, and `ApiKey` values must be specified; otherwise the container can't start.
+
+> [!IMPORTANT]
+> The keys are used to access your Document Intelligence resource. Do not share your keys. Store them securely, for example, using Azure Key Vault. We also recommend regenerating these keys regularly. Only one key is necessary to make an API call. When regenerating the first key, you can use the second key for continued access to the service.
++
+### [Read](#tab/read)
+
+The following code sample is a self-contained `docker compose` example to run the Document Intelligence Read container. With `docker compose`, you use a YAML file to configure your application's services. Then, with the `docker-compose up` command, you create and start all the services from your configuration. Enter {FORM_RECOGNIZER_ENDPOINT_URI} and {FORM_RECOGNIZER_KEY} values for your Read container instance.
+
+```yml
+version: "3.9"
+services:
+  azure-form-recognizer-read:
+    container_name: azure-form-recognizer-read
+    image: mcr.microsoft.com/azure-cognitive-services/form-recognizer/read-3.0
+    environment:
+      - EULA=accept
+      - billing={FORM_RECOGNIZER_ENDPOINT_URI}
+      - apiKey={FORM_RECOGNIZER_KEY}
+    ports:
+      - "5000:5000"
+    networks:
+      - ocrvnet
+networks:
+  ocrvnet:
+    driver: bridge
+```
+
+Now, you can start the service with the [**docker compose**](https://docs.docker.com/compose/) command:
+
+```bash
+docker-compose up
+```
+
+### [General Document](#tab/general-document)
+
+The following code sample is a self-contained `docker compose` example to run the Document Intelligence General Document container. With `docker compose`, you use a YAML file to configure your application's services. Then, with `docker-compose up` command, you create and start all the services from your configuration. Enter {FORM_RECOGNIZER_ENDPOINT_URI} and {FORM_RECOGNIZER_KEY} values for your General Document and Layout container instances.
+
+```yml
+version: "3.9"
+services:
+  azure-cognitive-service-document:
+    container_name: azure-cognitive-service-document
+    image: mcr.microsoft.com/azure-cognitive-services/form-recognizer/document-3.0
+    environment:
+      - EULA=accept
+      - billing={FORM_RECOGNIZER_ENDPOINT_URI}
+      - apiKey={FORM_RECOGNIZER_KEY}
+      - AzureCognitiveServiceLayoutHost=http://azure-cognitive-service-layout:5000
+    ports:
+      - "5000:5000"
+  azure-cognitive-service-layout:
+    container_name: azure-cognitive-service-layout
+    image: mcr.microsoft.com/azure-cognitive-services/form-recognizer/layout-3.0
+    environment:
+      - EULA=accept
+      - billing={FORM_RECOGNIZER_ENDPOINT_URI}
+      - apiKey={FORM_RECOGNIZER_KEY}
+```
+
+Now, you can start the service with the [**docker compose**](https://docs.docker.com/compose/) command:
+
+```bash
+docker-compose up
+```
+
+Given the resources on the machine, the General Document container might take some time to start up.
+
+### [Layout](#tab/layout)
+
+The following code sample is a self-contained `docker compose` example to run the Document Intelligence Layout container. With `docker compose`, you use a YAML file to configure your application's services. Then, with `docker-compose up` command, you create and start all the services from your configuration. Enter {FORM_RECOGNIZER_ENDPOINT_URI} and {FORM_RECOGNIZER_KEY} values for your Layout container instance.
+
+```yml
+version: "3.9"
+services:
+  azure-form-recognizer-layout:
+    container_name: azure-form-recognizer-layout
+    image: mcr.microsoft.com/azure-cognitive-services/form-recognizer/layout-3.0
+    environment:
+      - EULA=accept
+      - billing={FORM_RECOGNIZER_ENDPOINT_URI}
+      - apiKey={FORM_RECOGNIZER_KEY}
+    ports:
+      - "5000:5000"
+    networks:
+      - ocrvnet
+networks:
+  ocrvnet:
+    driver: bridge
+```
+
+Now, you can start the service with the [**docker compose**](https://docs.docker.com/compose/) command:
+
+```bash
+docker-compose up
+```
+
+### [Invoice](#tab/invoice)
+
+The following code sample is a self-contained `docker compose` example to run the Document Intelligence Invoice container. With `docker compose`, you use a YAML file to configure your application's services. Then, with `docker-compose up` command, you create and start all the services from your configuration. Enter {FORM_RECOGNIZER_ENDPOINT_URI} and {FORM_RECOGNIZER_KEY} values for your Invoice and Layout container instances.
+
+```yml
+version: "3.9"
+services:
+  azure-cognitive-service-invoice:
+    container_name: azure-cognitive-service-invoice
+    image: mcr.microsoft.com/azure-cognitive-services/form-recognizer/invoice-3.0
+    environment:
+      - EULA=accept
+      - billing={FORM_RECOGNIZER_ENDPOINT_URI}
+      - apiKey={FORM_RECOGNIZER_KEY}
+      - AzureCognitiveServiceLayoutHost=http://azure-cognitive-service-layout:5000
+    ports:
+      - "5000:5000"
+  azure-cognitive-service-layout:
+    container_name: azure-cognitive-service-layout
+    image: mcr.microsoft.com/azure-cognitive-services/form-recognizer/layout-3.0
+    environment:
+      - EULA=accept
+      - billing={FORM_RECOGNIZER_ENDPOINT_URI}
+      - apiKey={FORM_RECOGNIZER_KEY}
+```
+
+Now, you can start the service with the [**docker compose**](https://docs.docker.com/compose/) command:
+
+```bash
+docker-compose up
+```
+
+### [Receipt](#tab/receipt)
+
+The following code sample is a self-contained `docker compose` example to run the Document Intelligence Receipt and Read containers together. With `docker compose`, you use a YAML file to configure your application's services. Then, with the `docker-compose up` command, you create and start all the services from your configuration. Enter {FORM_RECOGNIZER_ENDPOINT_URI} and {FORM_RECOGNIZER_KEY} values for your Receipt and Read container instances.
+
+```yml
+version: "3.9"
+services:
+  azure-cognitive-service-receipt:
+    container_name: azure-cognitive-service-receipt
+    image: mcr.microsoft.com/azure-cognitive-services/form-recognizer/receipt-3.0
+    environment:
+      - EULA=accept
+      - billing={FORM_RECOGNIZER_ENDPOINT_URI}
+      - apiKey={FORM_RECOGNIZER_KEY}
+      - AzureCognitiveServiceReadHost=http://azure-cognitive-service-read:5000
+    ports:
+      - "5000:5000"
+  azure-cognitive-service-read:
+    container_name: azure-cognitive-service-read
+    image: mcr.microsoft.com/azure-cognitive-services/form-recognizer/read-3.0
+    environment:
+      - EULA=accept
+      - billing={FORM_RECOGNIZER_ENDPOINT_URI}
+      - apiKey={FORM_RECOGNIZER_KEY}
+```
+
+Now, you can start the service with the [**docker compose**](https://docs.docker.com/compose/) command:
+
+```bash
+docker-compose up
+```
+
+### [ID Document](#tab/id-document)
+
+The following code sample is a self-contained `docker compose` example to run the Document Intelligence ID Document and Read containers together. With `docker compose`, you use a YAML file to configure your application's services. Then, with the `docker-compose up` command, you create and start all the services from your configuration. Enter {FORM_RECOGNIZER_ENDPOINT_URI} and {FORM_RECOGNIZER_KEY} values for your ID Document and Read container instances.
+
+```yml
+version: "3.9"
+services:
+  azure-cognitive-service-id-document:
+    container_name: azure-cognitive-service-id-document
+    image: mcr.microsoft.com/azure-cognitive-services/form-recognizer/id-document-3.0
+    environment:
+      - EULA=accept
+      - billing={FORM_RECOGNIZER_ENDPOINT_URI}
+      - apiKey={FORM_RECOGNIZER_KEY}
+      - AzureCognitiveServiceReadHost=http://azure-cognitive-service-read:5000
+    ports:
+      - "5000:5000"
+  azure-cognitive-service-read:
+    container_name: azure-cognitive-service-read
+    image: mcr.microsoft.com/azure-cognitive-services/form-recognizer/read-3.0
+    environment:
+      - EULA=accept
+      - billing={FORM_RECOGNIZER_ENDPOINT_URI}
+      - apiKey={FORM_RECOGNIZER_KEY}
+```
+
+Now, you can start the service with the [**docker compose**](https://docs.docker.com/compose/) command:
+
+```bash
+docker-compose up
+```
+
+### [Business Card](#tab/business-card)
+
+```yml
+version: "3.9"
+services:
+  azure-cognitive-service-businesscard:
+    container_name: azure-cognitive-service-businesscard
+    image: mcr.microsoft.com/azure-cognitive-services/form-recognizer/businesscard-3.0
+    environment:
+      - EULA=accept
+      - billing={FORM_RECOGNIZER_ENDPOINT_URI}
+      - apiKey={FORM_RECOGNIZER_KEY}
+      - AzureCognitiveServiceLayoutHost=http://azure-cognitive-service-layout:5000
+    ports:
+      - "5000:5000"
+  azure-cognitive-service-layout:
+    container_name: azure-cognitive-service-layout
+    image: mcr.microsoft.com/azure-cognitive-services/form-recognizer/layout-3.0
+    environment:
+      - EULA=accept
+      - billing={FORM_RECOGNIZER_ENDPOINT_URI}
+      - apiKey={FORM_RECOGNIZER_KEY}
+```
+
+### [Custom](#tab/custom)
+
+In addition to the [prerequisites](#prerequisites), you need to do the following to process a custom document:
+
+#### Create a folder to store the following files
+
+* [**.env**](#-create-an-environment-file)
+* [**nginx.conf**](#-create-an-nginx-file)
+* [**docker-compose.yml**](#-create-a-docker-compose-file)
+
+#### Create a folder to store your input data
+
+* Name this folder **files**.
+* We reference the file path for this folder as **{FILE_MOUNT_PATH}**.
+* Copy the file path to a convenient location; you need to add it to your **.env** file. For example, if the folder is called **files** and is located in the same folder as the docker-compose file, the .env file entry is `FILE_MOUNT_PATH="./files"`.
+
+#### Create a folder to store the logs written by the Document Intelligence service on your local machine
+
+* Name this folder **output**.
+* We reference the file path for this folder as **{OUTPUT_MOUNT_PATH}**.
+* Copy the file path to a convenient location; you need to add it to your **.env** file. For example, if the folder is called **output** and is located in the same folder as the docker-compose file, the .env file entry is `OUTPUT_MOUNT_PATH="./output"`.
+
+#### Create a folder for storing internal processing shared between the containers
+
+* Name this folder **shared**.
+* We reference the file path for this folder as **{SHARED_MOUNT_PATH}**.
+* Copy the file path to a convenient location; you need to add it to your **.env** file. For example, if the folder is called **shared** and is located in the same folder as the docker-compose file, the .env file entry is `SHARED_MOUNT_PATH="./shared"`.
+
+#### Create a folder for the Studio to store project related information
+
+* Name this folder **db**.
+* We reference the file path for this folder as **{DB_MOUNT_PATH}**.
+* Copy the file path to a convenient location; you need to add it to your **.env** file. For example, if the folder is called **db** and is located in the same folder as the docker-compose file, the .env file entry is `DB_MOUNT_PATH="./db"`.
+
+#### Create an environment file
+
+ 1. Name this file **.env**.
+
+ 1. Declare the following environment variables:
+
+ ```text
+SHARED_MOUNT_PATH="./shared"
+OUTPUT_MOUNT_PATH="./output"
+FILE_MOUNT_PATH="./files"
+DB_MOUNT_PATH="./db"
+FORM_RECOGNIZER_ENDPOINT_URI="YourFormRecognizerEndpoint"
+FORM_RECOGNIZER_KEY="YourFormRecognizerKey"
+NGINX_CONF_FILE="./nginx.conf"
+ ```
+
+#### Create an **nginx** file
+
+ 1. Name this file **nginx.conf**.
+
+ 1. Enter the following configuration:
+
+```text
+worker_processes 1;
+
+events { worker_connections 1024; }
+
+http {
+
+ sendfile on;
+ client_max_body_size 90M;
+ upstream docker-custom {
+ server azure-cognitive-service-custom-template:5000;
+ }
+
+ upstream docker-layout {
+ server azure-cognitive-service-layout:5000;
+ }
+
+ server {
+ listen 5000;
+
+ location = / {
+ proxy_set_header Host $host:$server_port;
+ proxy_set_header Referer $scheme://$host:$server_port;
+ proxy_pass http://docker-custom/;
+ }
+
+ location /status {
+ proxy_pass http://docker-custom/status;
+ }
+
+ location /test {
+ return 200 $scheme://$host:$server_port;
+ }
+
+ location /ready {
+ proxy_pass http://docker-custom/ready;
+ }
+
+ location /swagger {
+ proxy_pass http://docker-custom/swagger;
+ }
+
+ location /formrecognizer/documentModels/prebuilt-layout {
+ proxy_set_header Host $host:$server_port;
+ proxy_set_header Referer $scheme://$host:$server_port;
+
+ add_header 'Access-Control-Allow-Origin' '*' always;
+ add_header 'Access-Control-Allow-Headers' 'cache-control,content-type,ocp-apim-subscription-key,x-ms-useragent' always;
+ add_header 'Access-Control-Allow-Methods' 'GET, POST, OPTIONS' always;
+ add_header 'Access-Control-Expose-Headers' '*' always;
+
+ if ($request_method = 'OPTIONS') {
+ return 200;
+ }
+
+ proxy_pass http://docker-layout/formrecognizer/documentModels/prebuilt-layout;
+ }
+
+ location /formrecognizer/documentModels {
+ proxy_set_header Host $host:$server_port;
+ proxy_set_header Referer $scheme://$host:$server_port;
+
+ add_header 'Access-Control-Allow-Origin' '*' always;
+ add_header 'Access-Control-Allow-Headers' 'cache-control,content-type,ocp-apim-subscription-key,x-ms-useragent' always;
+ add_header 'Access-Control-Allow-Methods' 'GET, POST, OPTIONS, DELETE' always;
+ add_header 'Access-Control-Expose-Headers' '*' always;
+
+ if ($request_method = 'OPTIONS') {
+ return 200;
+ }
+
+ proxy_pass http://docker-custom/formrecognizer/documentModels;
+ }
+
+ location /formrecognizer/operations {
+ add_header 'Access-Control-Allow-Origin' '*' always;
+ add_header 'Access-Control-Allow-Headers' 'cache-control,content-type,ocp-apim-subscription-key,x-ms-useragent' always;
+ add_header 'Access-Control-Allow-Methods' 'GET, POST, OPTIONS, PUT, DELETE, PATCH' always;
+ add_header 'Access-Control-Expose-Headers' '*' always;
+
+ if ($request_method = OPTIONS ) {
+ return 200;
+ }
+
+ proxy_pass http://docker-custom/formrecognizer/operations;
+ }
+ }
+}
+
+```
+
+#### Create a **docker compose** file
+
+1. Name this file **docker-compose.yml**
+
+2. The following code sample is a self-contained `docker compose` example to run Document Intelligence Layout, Studio and Custom template containers together. With `docker compose`, you use a YAML file to configure your application's services. Then, with `docker-compose up` command, you create and start all the services from your configuration.
+
+ ```yml
+version: '3.3'
+services:
+  nginx:
+    image: nginx:alpine
+    container_name: reverseproxy
+    volumes:
+      - ${NGINX_CONF_FILE}:/etc/nginx/nginx.conf
+    ports:
+      - "5000:5000"
+  layout:
+    container_name: azure-cognitive-service-layout
+    image: mcr.microsoft.com/azure-cognitive-services/form-recognizer/layout-3.0:latest
+    environment:
+      eula: accept
+      apikey: ${FORM_RECOGNIZER_KEY}
+      billing: ${FORM_RECOGNIZER_ENDPOINT_URI}
+      Logging:Console:LogLevel:Default: Information
+      SharedRootFolder: /shared
+      Mounts:Shared: /shared
+      Mounts:Output: /logs
+    volumes:
+      - type: bind
+        source: ${SHARED_MOUNT_PATH}
+        target: /shared
+      - type: bind
+        source: ${OUTPUT_MOUNT_PATH}
+        target: /logs
+    expose:
+      - "5000"
+
+  custom-template:
+    container_name: azure-cognitive-service-custom-template
+    image: mcr.microsoft.com/azure-cognitive-services/form-recognizer/custom-template-3.0:latest
+    restart: always
+    depends_on:
+      - layout
+    environment:
+      AzureCognitiveServiceLayoutHost: http://azure-cognitive-service-layout:5000
+      eula: accept
+      apikey: ${FORM_RECOGNIZER_KEY}
+      billing: ${FORM_RECOGNIZER_ENDPOINT_URI}
+      Logging:Console:LogLevel:Default: Information
+      SharedRootFolder: /shared
+      Mounts:Shared: /shared
+      Mounts:Output: /logs
+    volumes:
+      - type: bind
+        source: ${SHARED_MOUNT_PATH}
+        target: /shared
+      - type: bind
+        source: ${OUTPUT_MOUNT_PATH}
+        target: /logs
+    expose:
+      - "5000"
+
+  studio:
+    container_name: form-recognizer-studio
+    image: mcr.microsoft.com/azure-cognitive-services/form-recognizer/studio:3.0
+    environment:
+      ONPREM_LOCALFILE_BASEPATH: /onprem_folder
+      STORAGE_DATABASE_CONNECTION_STRING: /onprem_db/Application.db
+    volumes:
+      - type: bind
+        source: ${FILE_MOUNT_PATH} # path to your local folder
+        target: /onprem_folder
+      - type: bind
+        source: ${DB_MOUNT_PATH} # path to your local folder
+        target: /onprem_db
+    ports:
+      - "5001:5001"
+    user: "1000:1000" # echo $(id -u):$(id -g)
+
+ ```
+
+The custom template container can use Azure Storage queues or in memory queues. The `Storage:ObjectStore:AzureBlob:ConnectionString` and `queue:azure:connectionstring` environment variables only need to be set if you're using Azure Storage queues. When running locally, delete these variables.
+
+### Ensure the service is running
+
+To ensure that the service is up and running, run these commands in an Ubuntu shell.
+
+```bash
+cd <folder containing the docker-compose file>
+
+source .env
+
+docker-compose up
+```
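+
+Once the containers start, you can optionally confirm that they're responding. The reverse proxy in the preceding configuration listens on port 5000 and forwards the `/status` and `/ready` routes defined in the **nginx.conf** file, so a simple probe looks like the following sketch:
+
+```bash
+# Optional checks: HTTP 200 responses indicate the container is reachable and ready.
+curl -i http://localhost:5000/status
+curl -i http://localhost:5000/ready
+```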
+
+Custom template containers require a few different configurations and support other optional configurations
+
+| Setting | Required | Description |
+|--|--|--|
+|EULA | Yes | License acceptance. Example: `Eula=accept`|
+|Billing | Yes | Billing endpoint URI of the Document Intelligence (Form Recognizer) resource |
+|ApiKey | Yes | The endpoint key of the Document Intelligence (Form Recognizer) resource |
+| Queue:Azure:ConnectionString | No| Azure Queue connection string |
+|Storage:ObjectStore:AzureBlob:ConnectionString | No| Azure Blob connection string |
+| HealthCheck:MemoryUpperboundInMB | No | Memory threshold for reporting unhealthy to liveness. Default: Same as recommended memory |
+| StorageTimeToLiveInMinutes | No| TTL duration to remove all intermediate and final files. Default: Two days. TTL can be set between five minutes and seven days |
+| Task:MaxRunningTimeSpanInMinutes | No| Maximum running time for treating request as timeout. Default: 60 minutes |
+| HTTP_PROXY_BYPASS_URLS | No | Specify URLs for bypassing the proxy. Example: `HTTP_PROXY_BYPASS_URLS = abc.com, xyz.com` |
+| AzureCognitiveServiceReadHost (Receipt, ID Document containers only)| Yes | Specify the Read container URI. Example: `AzureCognitiveServiceReadHost=http://onprem-frread:5000` |
+| AzureCognitiveServiceLayoutHost (Document, Invoice containers only) | Yes | Specify the Layout container URI. Example: `AzureCognitiveServiceLayoutHost=http://onprem-frlayout:5000` |
+
+#### Use the Document Intelligence Studio to train a model
+
+* Gather a set of at least five forms of the same type. You use this data to train the model and test a form. You can use a [sample data set](https://go.microsoft.com/fwlink/?linkid=2090451) (download and extract _sample_data.zip_).
+
+* Once you confirm that the containers are running, open a browser and navigate to the endpoint where you have the containers deployed. If this deployment is your local machine, the endpoint is [http://localhost:5000](http://localhost:5000).
+* Select the custom extraction model tile.
+* Select the `Create project` option.
+* Provide a project name and, optionally, a description.
+* On the "configure your resource" step, provide the endpoint to your custom template model. If you deployed the containers on your local machine, use the URL [http://localhost:5000](http://localhost:5000).
+* Provide a subfolder for where your training data is located within the files folder.
+* Finally, create the project.
+
+You should now have a project created, ready for labeling. Upload your training data and get started labeling. If you're new to labeling, see [build and train a custom model](../how-to-guides/build-a-custom-model.md)
+
+#### Using the API to train
+
+If you plan to call the APIs directly to train a model, the custom template model train API requires a base64 encoded zip file that is the contents of your labeling project. You can omit the PDF or image files and submit only the JSON files.
+
+Once you have your dataset labeled and the `*.ocr.json`, `*.labels.json`, and `fields.json` files added to a zip, use the following PowerShell commands to generate the base64 encoded string.
+
+```powershell
+$bytes = [System.IO.File]::ReadAllBytes("<your_zip_file>.zip")
+$b64String = [System.Convert]::ToBase64String($bytes, [System.Base64FormattingOptions]::None)
+
+```
+
+Use the build model API to post the request.
+
+```http
+POST http://localhost:5000/formrecognizer/documentModels:build?api-version=2022-08-31
+
+{
+ "modelId": "mymodel",
+ "description": "test model",
+ "buildMode": "template",
+
+ "base64Source": "<Your base64 encoded string>",
+ "tags": {
+ "additionalProp1": "string",
+ "additionalProp2": "string",
+ "additionalProp3": "string"
+ }
+}
+```
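+
+The same request can also be issued with cURL. The following is a sketch only: it mirrors the JSON body above and assumes the container, like the cloud API, returns an `Operation-Location` header that you can poll for build status.
+
+```bash
+# Submit the same build request shown above to the container endpoint.
+curl -i -X POST "http://localhost:5000/formrecognizer/documentModels:build?api-version=2022-08-31" \
+  -H "Content-Type: application/json" \
+  --data '{
+    "modelId": "mymodel",
+    "description": "test model",
+    "buildMode": "template",
+    "base64Source": "<Your base64 encoded string>"
+  }'
+
+# If the response includes an Operation-Location header, poll that URL until the build completes.
+curl -i "<operation-location-url>"
+```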
+++++
+### [Read](#tab/read)
+
+Document Intelligence v2.1 doesn't support the Read container.
+
+### [General Document](#tab/general-document)
+
+Document Intelligence v2.1 doesn't support the General Document container.
+
+### [Layout](#tab/layout)
+
+The following code sample is a self-contained `docker compose` example to run the Document Intelligence Layout container. With `docker compose`, you use a YAML file to configure your application's services. Then, with the `docker-compose up` command, you create and start all the services from your configuration. Enter the {FORM_RECOGNIZER_ENDPOINT_URI} and {FORM_RECOGNIZER_KEY} values for your Layout container instance.
+
+```yml
+version: "3.9"
+services:
+
+ azure-cognitive-service-layout:
+ container_name: azure-cognitive-service-layout
+ image: mcr.microsoft.com/azure-cognitive-services/form-recognizer/layout
+ environment:
+ - EULA=accept
+ - billing={FORM_RECOGNIZER_ENDPOINT_URI}
+ - apiKey={FORM_RECOGNIZER_KEY}
+ ports:
+ - "5000"
+ networks:
+ - ocrvnet
+networks:
+ ocrvnet:
+ driver: bridge
+
+```
+
+Now, you can start the service with the [**docker compose**](https://docs.docker.com/compose/) command:
+
+```bash
+docker-compose up
+```
+
+### [Invoice](#tab/invoice)
+
+The following code sample is a self-contained `docker compose` example to run Document Intelligence Invoice and Layout containers together. With `docker compose`, you use a YAML file to configure your application's services. Then, with the `docker-compose up` command, you create and start all the services from your configuration. Enter the {FORM_RECOGNIZER_ENDPOINT_URI} and {FORM_RECOGNIZER_KEY} values for your Invoice and Layout containers.
+
+```yml
+version: "3.9"
+services:
+
+ azure-cognitive-service-invoice:
+ container_name: azure-cognitive-service-invoice
+ image: mcr.microsoft.com/azure-cognitive-services/form-recognizer/invoice
+ environment:
+ - EULA=accept
+ - billing={FORM_RECOGNIZER_ENDPOINT_URI}
+ - apiKey={FORM_RECOGNIZER_KEY}
+ - AzureCognitiveServiceLayoutHost=http://azure-cognitive-service-layout:5000
+ ports:
+ - "5000:5000"
+ networks:
+ - ocrvnet
+ azure-cognitive-service-layout:
+ container_name: azure-cognitive-service-layout
+ image: mcr.microsoft.com/azure-cognitive-services/form-recognizer/layout
+ environment:
+ - EULA=accept
+ - billing={FORM_RECOGNIZER_ENDPOINT_URI}
+ - apiKey={FORM_RECOGNIZER_KEY}
+ networks:
+ - ocrvnet
+
+networks:
+ ocrvnet:
+ driver: bridge
+```
+
+Now, you can start the service with the [**docker compose**](https://docs.docker.com/compose/) command:
+
+```bash
+docker-compose up
+```
+
+### [Receipt](#tab/receipt)
+
+The following code sample is a self-contained `docker compose` example to run Document Intelligence Receipt and Read containers together. With `docker compose`, you use a YAML file to configure your application's services. Then, with the `docker-compose up` command, you create and start all the services from your configuration. Enter the {FORM_RECOGNIZER_ENDPOINT_URI} and {FORM_RECOGNIZER_KEY} values for your Receipt container. Enter the {COMPUTER_VISION_ENDPOINT_URI} and {COMPUTER_VISION_KEY} values for your Azure AI Vision Read container.
+
+```yml
+version: "3.9"
+services:
+
+ azure-cognitive-service-receipt:
+ container_name: azure-cognitive-service-receipt
+ image: mcr.microsoft.com/azure-cognitive-services/form-recognizer/receipt
+ environment:
+ - EULA=accept
+ - billing={FORM_RECOGNIZER_ENDPOINT_URI}
+ - apiKey={FORM_RECOGNIZER_KEY}
+ - AzureCognitiveServiceReadHost=http://azure-cognitive-service-read:5000
+ ports:
+ - "5000:5000"
+ networks:
+ - ocrvnet
+ azure-cognitive-service-read:
+ container_name: azure-cognitive-service-read
+ image: mcr.microsoft.com/azure-cognitive-services/vision/read:3.2-model-2021-04-12
+ environment:
+ - EULA=accept
+ - billing={COMPUTER_VISION_ENDPOINT_URI}
+ - apiKey={COMPUTER_VISION_KEY}
+ networks:
+ - ocrvnet
+
+networks:
+ ocrvnet:
+ driver: bridge
+```
+
+Now, you can start the service with the [**docker compose**](https://docs.docker.com/compose/) command:
+
+```bash
+docker-compose up
+```
+
+### [ID Document](#tab/id-document)
+
+The following code sample is a self-contained `docker compose` example to run Document Intelligence ID Document and Read containers together. With `docker compose`, you use a YAML file to configure your application's services. Then, with the `docker-compose up` command, you create and start all the services from your configuration. Enter the {FORM_RECOGNIZER_ENDPOINT_URI} and {FORM_RECOGNIZER_KEY} values for your ID document container. Enter the {COMPUTER_VISION_ENDPOINT_URI} and {COMPUTER_VISION_KEY} values for your Azure AI Vision Read container.
+
+```yml
+version: "3.9"
+services:
+
+ azure-cognitive-service-id:
+ container_name: azure-cognitive-service-id
+ image: mcr.microsoft.com/azure-cognitive-services/form-recognizer/id-document
+ environment:
+ - EULA=accept
+ - billing={FORM_RECOGNIZER_ENDPOINT_URI}
+ - apiKey={FORM_RECOGNIZER_KEY}
+ - AzureCognitiveServiceReadHost=http://azure-cognitive-service-read:5000
+ ports:
+ - "5000:5000"
+ networks:
+ - ocrvnet
+ azure-cognitive-service-read:
+ container_name: azure-cognitive-service-read
+ image: mcr.microsoft.com/azure-cognitive-services/vision/read:3.2-model-2021-04-12
+ environment:
+ - EULA=accept
+ - billing={COMPUTER_VISION_ENDPOINT_URI}
+ - apiKey={COMPUTER_VISION_KEY}
+ networks:
+ - ocrvnet
+
+networks:
+ ocrvnet:
+ driver: bridge
+```
+
+Now, you can start the service with the [**docker compose**](https://docs.docker.com/compose/) command:
+
+```bash
+docker-compose up
+```
+
+### [Business Card](#tab/business-card)
+
+The following code sample is a self-contained `docker compose` example to run Document Intelligence Business Card and Read containers together. With `docker compose`, you use a YAML file to configure your application's services. Then, with the `docker-compose up` command, you create and start all the services from your configuration. Enter the {FORM_RECOGNIZER_ENDPOINT_URI} and {FORM_RECOGNIZER_KEY} values for your Business Card container instance. Enter the {COMPUTER_VISION_ENDPOINT_URI} and {COMPUTER_VISION_KEY} values for your Azure AI Vision Read container.
+
+```yml
+version: "3.9"
+services:
+
+ azure-cognitive-service-businesscard:
+ container_name: azure-cognitive-service-businesscard
+ image: mcr.microsoft.com/azure-cognitive-services/form-recognizer/businesscard
+ environment:
+ - EULA=accept
+ - billing={FORM_RECOGNIZER_ENDPOINT_URI}
+ - apiKey={FORM_RECOGNIZER_KEY}
+ - AzureCognitiveServiceReadHost=http://azure-cognitive-service-read:5000
+ ports:
+ - "5000:5000"
+ networks:
+ - ocrvnet
+ azure-cognitive-service-read:
+ container_name: azure-cognitive-service-read
+ image: mcr.microsoft.com/azure-cognitive-services/vision/read:3.2-model-2021-04-12
+ environment:
+ - EULA=accept
+ - billing={COMPUTER_VISION_ENDPOINT_URI}
+ - apiKey={COMPUTER_VISION_KEY}
+ networks:
+ - ocrvnet
+
+networks:
+ ocrvnet:
+ driver: bridge
+```
+
+Now, you can start the service with the [**docker compose**](https://docs.docker.com/compose/) command:
+
+```bash
+docker-compose up
+```
+
+### [Custom](#tab/custom)
+
+In addition to the [prerequisites](#prerequisites), you need to do the following to process a custom document:
+
+#### &bullet; Create a folder to store the following files
+
+ 1. [**.env**](#-create-an-environment-file)
+ 1. [**nginx.conf**](#-create-an-nginx-file)
+ 1. [**docker-compose.yml**](#-create-a-docker-compose-file)
+
+#### &bullet; Create a folder to store your input data
+
+ 1. Name this folder **shared**.
+ 1. We reference the file path for this folder as **{SHARED_MOUNT_PATH}**.
+ 1. Copy the file path in a convenient location, such as *Microsoft Notepad*. You need to add it to your **.env** file.
+
+#### &bullet; Create a folder to store the logs written by the Document Intelligence service on your local machine.
+
+ 1. Name this folder **output**.
+ 1. We reference the file path for this folder as **{OUTPUT_MOUNT_PATH}**.
+ 1. Copy the file path in a convenient location, such as *Microsoft Notepad*. You need to add it to your **.env** file.
+
+#### &bullet; Create an environment file
+
+ 1. Name this file **.env**.
+
+ 1. Declare the following environment variables:
+
+ ```text
+ SHARED_MOUNT_PATH="<file-path-to-shared-folder>"
+    OUTPUT_MOUNT_PATH="<file-path-to-output-folder>"
+ FORM_RECOGNIZER_ENDPOINT_URI="<your-form-recognizer-endpoint>"
+ FORM_RECOGNIZER_KEY="<your-form-recognizer-key>"
+ RABBITMQ_HOSTNAME="rabbitmq"
+ RABBITMQ_PORT=5672
+ NGINX_CONF_FILE="<file-path>"
+ ```
+
+#### &bullet; Create an **nginx** file
+
+ 1. Name this file **nginx.conf**.
+
+ 1. Enter the following configuration:
+
+```text
+worker_processes 1;
+
+events { worker_connections 1024; }
+
+http {
+
+ sendfile on;
+ client_max_body_size 90M;
+ upstream docker-api {
+ server azure-cognitive-service-custom-api:5000;
+ }
+
+ upstream docker-layout {
+ server azure-cognitive-service-layout:5000;
+ }
+
+ server {
+ listen 5000;
+
+ location = / {
+ proxy_set_header Host $host:$server_port;
+ proxy_set_header Referer $scheme://$host:$server_port;
+ proxy_pass http://docker-api/;
+
+ }
+
+ location /status {
+ proxy_pass http://docker-api/status;
+
+ }
+
+ location /ready {
+ proxy_pass http://docker-api/ready;
+
+ }
+
+ location /swagger {
+ proxy_pass http://docker-api/swagger;
+
+ }
+
+ location /formrecognizer/v2.1/custom/ {
+ proxy_set_header Host $host:$server_port;
+ proxy_set_header Referer $scheme://$host:$server_port;
+ proxy_pass http://docker-api/formrecognizer/v2.1/custom/;
+
+ }
+
+ location /formrecognizer/v2.1/layout/ {
+ proxy_set_header Host $host:$server_port;
+ proxy_set_header Referer $scheme://$host:$server_port;
+ proxy_pass http://docker-layout/formrecognizer/v2.1/layout/;
+
+ }
+ }
+}
+```
+
+* Gather a set of at least six forms of the same type. You use this data to train the model and test a form. You can use a [sample data set](https://go.microsoft.com/fwlink/?linkid=2090451) (download and extract *sample_data.zip*). Download the training files to the **shared** folder you created.
+
+* If you want to label your data, download the [Document Intelligence Sample Labeling tool for Windows](https://github.com/microsoft/OCR-Form-Tools/releases). The download includes the labeling tool .exe file that you use to label the data present on your local file system. You can ignore any warnings that occur during the download process.
+
+#### Create a new Sample Labeling tool project
+
+* Open the labeling tool by double-clicking on the Sample Labeling tool .exe file.
+* On the left pane of the tool, select the connections tab.
+* Select the option to create a new project, and give it a name and description.
+* For the provider, choose the local file system option. For the local folder, make sure you enter the path to the folder where you stored the sample data files.
+* Navigate back to the home tab and select the *Use custom to train a model with labels and key-value pairs* option.
+* Select the train button on the left pane to train the labeled model.
+* Save this connection and use it to label your requests.
+* You can choose to analyze the file of your choice against the trained model.
+
+#### &bullet; Create a **docker compose** file
+
+1. Name this file **docker-compose.yml**
+
+2. The following code sample is a self-contained `docker compose` example to run Document Intelligence Layout, Label Tool, Custom API, and Custom Supervised containers together. With `docker compose`, you use a YAML file to configure your application's services. Then, with the `docker-compose up` command, you create and start all the services from your configuration.
+
+ ```yml
+ version: '3.3'
+
+ nginx:
+ image: nginx:alpine
+ container_name: reverseproxy
+ volumes:
+ - ${NGINX_CONF_FILE}:/etc/nginx/nginx.conf
+ ports:
+ - "5000:5000"
+ rabbitmq:
+ container_name: ${RABBITMQ_HOSTNAME}
+ image: rabbitmq:3
+ expose:
+ - "5672"
+ layout:
+ container_name: azure-cognitive-service-layout
+ image: mcr.microsoft.com/azure-cognitive-services/form-recognizer/layout
+ depends_on:
+ - rabbitmq
+ environment:
+ eula: accept
+ apikey: ${FORM_RECOGNIZER_KEY}
+ billing: ${FORM_RECOGNIZER_ENDPOINT_URI}
+ Queue:RabbitMQ:HostName: ${RABBITMQ_HOSTNAME}
+ Queue:RabbitMQ:Port: ${RABBITMQ_PORT}
+ Logging:Console:LogLevel:Default: Information
+ SharedRootFolder: /shared
+ Mounts:Shared: /shared
+ Mounts:Output: /logs
+ volumes:
+ - type: bind
+ source: ${SHARED_MOUNT_PATH}
+ target: /shared
+ - type: bind
+ source: ${OUTPUT_MOUNT_PATH}
+ target: /logs
+ expose:
+ - "5000"
+
+ custom-api:
+ container_name: azure-cognitive-service-custom-api
+ image: mcr.microsoft.com/azure-cognitive-services/form-recognizer/custom-api
+ restart: always
+ depends_on:
+ - rabbitmq
+ environment:
+ eula: accept
+ apikey: ${FORM_RECOGNIZER_KEY}
+ billing: ${FORM_RECOGNIZER_ENDPOINT_URI}
+ Logging:Console:LogLevel:Default: Information
+ Queue:RabbitMQ:HostName: ${RABBITMQ_HOSTNAME}
+ Queue:RabbitMQ:Port: ${RABBITMQ_PORT}
+ SharedRootFolder: /shared
+ Mounts:Shared: /shared
+ Mounts:Output: /logs
+ volumes:
+ - type: bind
+ source: ${SHARED_MOUNT_PATH}
+ target: /shared
+ - type: bind
+ source: ${OUTPUT_MOUNT_PATH}
+ target: /logs
+ expose:
+ - "5000"
+
+ custom-supervised:
+ container_name: azure-cognitive-service-custom-supervised
+ image: mcr.microsoft.com/azure-cognitive-services/form-recognizer/custom-supervised
+ restart: always
+ depends_on:
+ - rabbitmq
+ environment:
+ eula: accept
+ apikey: ${FORM_RECOGNIZER_KEY}
+ billing: ${FORM_RECOGNIZER_ENDPOINT_URI}
+ CustomFormRecognizer:ContainerPhase: All
+ CustomFormRecognizer:LayoutAnalyzeUri: http://azure-cognitive-service-layout:5000/formrecognizer/v2.1/layout/analyze
+ Logging:Console:LogLevel:Default: Information
+ Queue:RabbitMQ:HostName: ${RABBITMQ_HOSTNAME}
+ Queue:RabbitMQ:Port: ${RABBITMQ_PORT}
+ SharedRootFolder: /shared
+ Mounts:Shared: /shared
+ Mounts:Output: /logs
+ volumes:
+ - type: bind
+ source: ${SHARED_MOUNT_PATH}
+ target: /shared
+ - type: bind
+ source: ${OUTPUT_MOUNT_PATH}
+ target: /logs
+ ```
+
+### Ensure the service is running
+
+To ensure that the service is up and running, run these commands in an Ubuntu shell:
+
+```bash
+cd <folder containing the docker-compose file>
+
+source .env
+
+docker-compose up
+```
+
+### Create a new connection
+
+* On the left pane of the tool, select the **connections** tab.
+* Select **create a new project** and give it a name and description.
+* For the provider, choose the **local file system** option. For the local folder, make sure you enter the path to the folder where you stored the **sample data** files.
+* Navigate back to the home tab and select **Use custom to train a model with labels and key-value pairs**.
+* Select the **train button** on the left pane to train the labeled model.
+* **Save** this connection and use it to label your requests.
+* You can choose to analyze the file of your choice against the trained model.
+
+## The Sample Labeling tool and Azure Container Instances (ACI)
+
+To learn how to use the Sample Labeling tool with an Azure Container Instance, *see* [Deploy the Sample Labeling tool](../deploy-label-tool.md#deploy-with-azure-container-instances-aci).
+++++
+## Validate that the service is running
+
+There are several ways to validate that the container is running:
+
+* The container provides a homepage at `/` as a visual validation that the container is running.
+
+* You can open your favorite web browser and navigate to the external IP address and exposed port of the container in question. Use the request URLs listed in the following table to validate that the container is running. The examples use `http://localhost:5000`, but your specific container may vary. Keep in mind that you're navigating to your container's **External IP address** and exposed port.
+
+| Request URL | Purpose |
+|--|--|
+| **http://<span></span>localhost:5000/** | The container provides a home page. |
+| **http://<span></span>localhost:5000/ready** | Requested with GET, this request verifies that the container is ready to accept a query against the model. This request can be used for Kubernetes liveness and readiness probes. |
+| **http://<span></span>localhost:5000/status** | Requested with GET, this request verifies whether the api-key used to start the container is valid without causing an endpoint query. This request can be used for Kubernetes liveness and readiness probes. |
+| **http://<span></span>localhost:5000/swagger** | The container provides a full set of documentation for the endpoints and a **Try it out** feature. With this feature, you can enter your settings into a web-based HTML form and make the query without having to write any code. After the query returns, an example CURL command is provided to demonstrate the required HTTP headers and body format. |
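+
+For example, the readiness and status endpoints can be checked from the command line. This is a quick sketch that assumes the container's port is mapped to `5000` on the host, as in the compose examples in this article:
+
+```bash
+# Returns a success response when the container is ready to accept model queries.
+curl -i http://localhost:5000/ready
+
+# Verifies that the api-key used to start the container is valid, without running a query.
+curl -i http://localhost:5000/status
+```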
++
+## Stop the containers
+
+To stop the containers, use the following command:
+
+```console
+docker-compose down
+```
+
+## Billing
+
+The Document Intelligence containers send billing information to Azure by using a Document Intelligence resource on your Azure account.
+
+> [!NOTE]
+> Currently, Document Intelligence v3 containers only support pay as you go pricing. Support for commitment tiers and disconnected mode will be added in March 2023.
+
+Queries to the container are billed at the pricing tier of the Azure resource that's used for the `Key`. You're billed for each container instance used to process your documents and images. If you use the business card feature, you're billed for the Document Intelligence `BusinessCard` and `Azure AI Vision Read` container instances. For the invoice feature, you're billed for the Document Intelligence `Invoice` and `Layout` container instances. *See* [Document Intelligence](https://azure.microsoft.com/pricing/details/form-recognizer/) and Azure AI Vision [Read feature](https://azure.microsoft.com/pricing/details/cognitive-services/computer-vision/) container pricing.
+
+Azure AI containers aren't licensed to run without being connected to the metering / billing endpoint. Containers must be enabled to always communicate billing information with the billing endpoint. Azure AI containers don't send customer data, such as the image or text that's being analyzed, to Microsoft.
+
+### Connect to Azure
+
+The container needs the billing argument values to run. These values allow the container to connect to the billing endpoint. The container reports usage about every 10 to 15 minutes. If the container doesn't connect to Azure within the allowed time window, the container continues to run, but doesn't serve queries until the billing endpoint is restored. The connection is attempted 10 times at the same time interval of 10 to 15 minutes. If it can't connect to the billing endpoint within the 10 tries, the container stops serving requests. See the [Azure AI container FAQ](../../../ai-services/containers/container-faq.yml#how-does-billing-work) for an example of the information sent to Microsoft for billing.
+
+### Billing arguments
+
+The [**docker-compose up**](https://docs.docker.com/engine/reference/commandline/compose_up/) command starts the container when all three of the following options are provided with valid values:
+
+| Option | Description |
+|--|-|
+| `ApiKey` | The key of the Azure AI services resource that's used to track billing information.<br/>The value of this option must be set to a key for the provisioned resource that's specified in `Billing`. |
+| `Billing` | The endpoint of the Azure AI services resource that's used to track billing information.<br/>The value of this option must be set to the endpoint URI of a provisioned Azure resource.|
+| `Eula` | Indicates that you accepted the license for the container.<br/>The value of this option must be set to **accept**. |
+
+For more information about these options, see [Configure containers](configuration.md).
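+
+If you prefer to start a single container with plain `docker run` rather than compose, the same three values can be supplied as environment variables, mirroring the compose examples earlier in this article. The following is a sketch only; substitute your own endpoint and key, and adjust the image to the container you're running:
+
+```bash
+# Start the Layout container, passing the required EULA, billing endpoint, and key.
+docker run --rm -it -p 5000:5000 \
+  -e EULA=accept \
+  -e billing={FORM_RECOGNIZER_ENDPOINT_URI} \
+  -e apiKey={FORM_RECOGNIZER_KEY} \
+  mcr.microsoft.com/azure-cognitive-services/form-recognizer/layout
+```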
+
+## Summary
+
+That's it! In this article, you learned concepts and workflows for downloading, installing, and running Document Intelligence containers. In summary:
+
+* Document Intelligence provides seven Linux containers for Docker.
+* Container images are downloaded from the Microsoft Container Registry (MCR).
+* Container images run in Docker.
+* The billing information must be specified when you instantiate a container.
+
+> [!IMPORTANT]
+> Azure AI containers are not licensed to run without being connected to Azure for metering. Customers need to enable the containers to communicate billing information with the metering service at all times. Azure AI containers do not send customer data (for example, the image or text that is being analyzed) to Microsoft.
+
+## Next steps
+
+* [Document Intelligence container configuration settings](configuration.md)
+
+* [Azure container instance recipe](../../../ai-services/containers/azure-container-instance-recipe.md)
ai-services Create Document Intelligence Resource https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/document-intelligence/create-document-intelligence-resource.md
+
+ Title: How to create a Document Intelligence resource
+
+description: Create a Document Intelligence resource in the Azure portal
+++++ Last updated : 07/18/2023+
+monikerRange: '<=doc-intel-3.1.0'
+++
+# Create a Document Intelligence resource
++
+Azure AI Document Intelligence is a cloud-based [Azure AI service](../../ai-services/index.yml) that uses machine-learning models to extract key-value pairs, text, and tables from your documents. In this article, learn how to create a Document Intelligence resource in the Azure portal.
+
+## Visit the Azure portal
+
+The Azure portal is a single platform you can use to create and manage Azure services.
+
+Let's get started:
+
+1. Navigate to the Azure portal home page: [Azure home page](https://portal.azure.com/#home).
+
+1. Select **Create a resource** from the Azure home page.
+
+1. Search for and choose **Document Intelligence** from the search bar.
+
+1. Select the **Create** button.
+
+## Create a resource
+
+1. Next, you're going to fill out the **Create Document Intelligence** fields with the following values:
+
+ * **Subscription**. Select your current subscription.
+ * **Resource group**. The [Azure resource group](/azure/cloud-adoption-framework/govern/resource-consistency/resource-access-management#what-is-an-azure-resource-group) that contains your resource. You can create a new group or add it to a pre-existing group.
+ * **Region**. Select your local region.
+ * **Name**. Enter a name for your resource. We recommend using a descriptive name, for example *YourNameFormRecognizer*.
+ * **Pricing tier**. The cost of your resource depends on the pricing tier you choose and your usage. For more information, see [pricing details](https://azure.microsoft.com/pricing/details/cognitive-services/). You can use the free pricing tier (F0) to try the service, and upgrade later to a paid tier for production.
+
+1. Select **Review + Create**.
+
+ :::image type="content" source="media/logic-apps-tutorial/logic-app-connector-demo-two.png" alt-text="Still image showing the correct values for creating Document Intelligence resource.":::
+
+1. Azure runs a quick validation check. After a few seconds, you should see a green banner that says **Validation Passed**.
+
+1. Once the validation banner appears, select the **Create** button from the bottom-left corner.
+
+1. After you select create, you'll be redirected to a new page that says **Deployment in progress**. After a few seconds, you'll see a message that says, **Your deployment is complete**.
+
+## Get Endpoint URL and keys
+
+1. Once you receive the *deployment is complete* message, select the **Go to resource** button.
+
+1. Copy the key and endpoint values from your Document Intelligence resource and paste them in a convenient location, such as *Microsoft Notepad*. You need the key and endpoint values to connect your application to the Document Intelligence API (a sample request follows these steps).
+
+1. If your overview page doesn't have the keys and endpoint visible, you can select the **Keys and Endpoint** button, on the left navigation bar, and retrieve them there.
+
+ :::image border="true" type="content" source="media/containers/keys-and-endpoint.png" alt-text="Still photo showing how to access resource key and endpoint URL":::
+
+That's it! You're now ready to start automating data extraction using Azure AI Document Intelligence.
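+
+For example, a quick way to confirm that the key and endpoint work together is to submit a publicly accessible document URL to the prebuilt layout model over the REST API. This is a sketch only: the endpoint, key, and document URL below are placeholders that you replace with your own values.
+
+```bash
+curl -i -X POST "<your-endpoint>/formrecognizer/documentModels/prebuilt-layout:analyze?api-version=2022-08-31" \
+  -H "Content-Type: application/json" \
+  -H "Ocp-Apim-Subscription-Key: <your-key>" \
+  --data '{"urlSource": "https://<publicly-accessible-url>/sample-document.pdf"}'
+
+# A 202 response with an Operation-Location header means the request was accepted.
+```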
+
+## Next steps
+
+* Try the [Document Intelligence Studio](concept-document-intelligence-studio.md), an online tool for visually exploring, understanding, and integrating features from the Document Intelligence service into your applications.
+
+* Complete a Document Intelligence quickstart and get started creating a document processing app in the development language of your choice:
+
+ * [C#](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-3.0.0&preserve-view=true)
+ * [Python](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-3.0.0&preserve-view=true)
+ * [Java](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-3.0.0&preserve-view=true)
+ * [JavaScript](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-3.0.0&preserve-view=true)
ai-services Create Sas Tokens https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/document-intelligence/create-sas-tokens.md
+
+ Title: Create shared access signature (SAS) tokens for your storage containers and blobs
+description: How to create Shared Access Signature tokens (SAS) for containers and blobs with Microsoft Storage Explorer and the Azure portal.
+++++ Last updated : 07/18/2023+
+monikerRange: '<=doc-intel-3.1.0'
+++
+# Create SAS tokens for storage containers
++
+In this article, learn how to create user delegation shared access signature (SAS) tokens by using the Azure portal or Azure Storage Explorer. User delegation SAS tokens are secured with Azure AD credentials. SAS tokens provide secure, delegated access to resources in your Azure storage account.
++
+At a high level, here's how SAS tokens work:
+
+* Your application submits the SAS token to Azure Storage as part of a REST API request.
+
+* If the storage service verifies that the SAS is valid, the request is authorized.
+
+* If the SAS token is deemed invalid, the request is declined and the error code 403 (Forbidden) is returned.
+
+Azure Blob Storage offers three resource types:
+
+* **Storage** accounts provide a unique namespace in Azure for your data.
+* **Data storage containers** are located in storage accounts and organize sets of blobs.
+* **Blobs** are located in containers and store text and binary data such as files, text, and images.
+
+## When to use a SAS token
+
+* **Training custom models**. Your assembled set of training documents *must* be uploaded to an Azure Blob Storage container. You can opt to use a SAS token to grant access to your training documents.
+
+* **Using storage containers with public access**. You can opt to use a SAS token to grant limited access to your storage resources that have public read access.
+
+ > [!IMPORTANT]
+ >
+ > * If your Azure storage account is protected by a virtual network or firewall, you can't grant access with a SAS token. You'll have to use a [managed identity](managed-identities.md) to grant access to your storage resource.
+ >
+ > * [Managed identity](managed-identities-secured-access.md) supports both privately and publicly accessible Azure Blob Storage accounts.
+ >
+ > * SAS tokens grant permissions to storage resources, and should be protected in the same manner as an account key.
+ >
+ > * Operations that use SAS tokens should be performed only over an HTTPS connection, and SAS URIs should only be distributed on a secure connection such as HTTPS.
+
+## Prerequisites
+
+To get started, you need:
+
+* An active [Azure account](https://azure.microsoft.com/free/cognitive-services/). If you don't have one, you can [create a free account](https://azure.microsoft.com/free/).
+
+* A [Document Intelligence](https://portal.azure.com/#create/Microsoft.CognitiveServicesFormRecognizer) or [multi-service](https://portal.azure.com/#create/Microsoft.CognitiveServicesAllInOne) resource.
+
+* A **standard performance** [Azure Blob Storage account](https://portal.azure.com/#create/Microsoft.StorageAccount-ARM). You need to create containers to store and organize your blob data within your storage account. If you don't know how to create an Azure storage account with a storage container, follow these quickstarts:
+
+ * [Create a storage account](../../storage/common/storage-account-create.md). When you create your storage account, select **Standard** performance in the **Instance details** > **Performance** field.
+ * [Create a container](../../storage/blobs/storage-quickstart-blobs-portal.md#create-a-container). When you create your container, set **Public access level** to **Container** (anonymous read access for containers and blobs) in the **New Container** window.
+
+## Upload your documents
+
+1. Go to the [Azure portal](https://portal.azure.com/#home).
+ * Select **Your storage account** → **Data storage** → **Containers**.
+
+ :::image type="content" source="media/sas-tokens/data-storage-menu.png" alt-text="Screenshot that shows the Data storage menu in the Azure portal.":::
+
+1. Select a container from the list.
+
+1. Select **Upload** from the menu at the top of the page.
+
+ :::image type="content" source="media/sas-tokens/container-upload-button.png" alt-text="Screenshot that shows the container Upload button in the Azure portal.":::
+
+1. The **Upload blob** window appears. Select your files to upload.
+
+ :::image type="content" source="media/sas-tokens/upload-blob-window.png" alt-text="Screenshot that shows the Upload blob window in the Azure portal.":::
+
+ > [!NOTE]
+ > By default, the REST API uses form documents located at the root of your container. You can also use data organized in subfolders if specified in the API call. For more information, see [Organize your data in subfolders](how-to-guides/build-a-custom-model.md?view=doc-intel-2.1.0&preserve-view=true#organize-your-data-in-subfolders-optional).
+
+## Use the Azure portal
+
+The Azure portal is a web-based console that enables you to manage your Azure subscription and resources using a graphical user interface (GUI).
+
+1. Go to the [Azure portal](https://portal.azure.com/#home) and navigate as follows:
+
+ * **Your storage account** → **containers** → **your container**.
+
+1. Select **Generate SAS** from the menu near the top of the page.
+
+1. Select **Signing method** → **User delegation key**.
+
+1. Define **Permissions** by selecting or clearing the appropriate checkbox.</br>
+
+ * Make sure the **Read**, **Write**, **Delete**, and **List** permissions are selected.
+
+ :::image type="content" source="media/sas-tokens/sas-permissions.png" alt-text="Screenshot that shows the SAS permission fields in the Azure portal.":::
+
+ >[!IMPORTANT]
+ >
+ > * If you receive a message similar to the following one, you'll also need to assign access to the blob data in your storage account:
+ >
+ > :::image type="content" source="media/sas-tokens/need-permissions.png" alt-text="Screenshot that shows the lack of permissions warning.":::
+ >
+ > * [Azure role-based access control](../../role-based-access-control/overview.md) (Azure RBAC) is the authorization system used to manage access to Azure resources. Azure RBAC helps you manage access and permissions for your Azure resources.
+ > * [Assign an Azure role for access to blob data](../../role-based-access-control/role-assignments-portal.md?tabs=current) to assign a role that allows for read, write, and delete permissions for your Azure storage container. *See* [Storage Blob Data Contributor](../../role-based-access-control/built-in-roles.md#storage-blob-data-contributor).
+
+1. Specify the signed key **Start** and **Expiry** times.
+
+ * When you create a SAS token, the default duration is 48 hours. After 48 hours, you'll need to create a new token.
+ * Consider setting a longer duration period for the time you're using your storage account for Document Intelligence Service operations.
+ * The value of the expiry time is determined by whether you're using an **Account key** or **User delegation key** **Signing method**:
+ * **Account key**: There's no imposed maximum time limit; however, best practices recommended that you configure an expiration policy to limit the interval and minimize compromise. [Configure an expiration policy for shared access signatures](/azure/storage/common/sas-expiration-policy).
+ * **User delegation key**: The value for the expiry time is a maximum of seven days from the creation of the SAS token. The SAS is invalid after the user delegation key expires, so a SAS with an expiry time of greater than seven days will still only be valid for seven days. For more information,*see* [Use Azure AD credentials to secure a SAS](/azure/storage/blobs/storage-blob-user-delegation-sas-create-cli#use-azure-ad-credentials-to-secure-a-sas).
+
+1. The **Allowed IP addresses** field is optional and specifies an IP address or a range of IP addresses from which to accept requests. If the request IP address doesn't match the IP address or address range specified on the SAS token, authorization fails. The IP address or a range of IP addresses must be public IPs, not private. For more information, *see* [**Specify an IP address or IP range**](/rest/api/storageservices/create-account-sas#specify-an-ip-address-or-ip-range).
+
+1. The **Allowed protocols** field is optional and specifies the protocol permitted for a request made with the SAS token. The default value is HTTPS.
+
+1. Select **Generate SAS token and URL**.
+
+1. The **Blob SAS token** query string and **Blob SAS URL** appear in the lower area of the window. To use the Blob SAS token, append it to a storage service URI.
+
+1. Copy and paste the **Blob SAS token** and **Blob SAS URL** values in a secure location. They're displayed only once and can't be retrieved after the window is closed.
+
+1. To [construct a SAS URL](#use-your-sas-url-to-grant-access), append the SAS token (URI) to the URL for a storage service, as shown in the sketch that follows.
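+
+For example, a constructed SAS URL has the following general shape. The values shown here are placeholders only, not a working token:
+
+```bash
+# Container URL + "?" + SAS token query string = SAS URL (illustrative placeholder values).
+STORAGE_URL="https://<storage-account>.blob.core.windows.net/<container-name>"
+SAS_TOKEN="sv=2022-11-02&sr=c&sp=rwdl&se=2023-08-01T00:00:00Z&sig=<signature>"
+echo "${STORAGE_URL}?${SAS_TOKEN}"
+```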
+
+## Use Azure Storage Explorer
+
+Azure Storage Explorer is a free standalone app that enables you to easily manage your Azure cloud storage resources from your desktop.
+
+### Get started
+
+* You need the [**Azure Storage Explorer**](../../vs-azure-tools-storage-manage-with-storage-explorer.md) app installed in your Windows, macOS, or Linux development environment.
+
+* After the Azure Storage Explorer app is installed, [connect it to the storage account](../../vs-azure-tools-storage-manage-with-storage-explorer.md?tabs=windows#connect-to-a-storage-account-or-service) you're using for Document Intelligence.
+
+### Create your SAS tokens
+
+1. Open the Azure Storage Explorer app on your local machine and navigate to your connected **Storage Accounts**.
+1. Expand the Storage Accounts node and select **Blob Containers**.
+1. Expand the Blob Containers node and right-click a storage **container** node to display the options menu.
+1. Select **Get Shared Access Signature** from the options menu.
+1. In the **Shared Access Signature** window, make the following selections:
+ * Select your **Access policy** (the default is none).
+ * Specify the signed key **Start** and **Expiry** date and time. A short lifespan is recommended because, once generated, a SAS can't be revoked.
+ * Select the **Time zone** for the Start and Expiry date and time (default is Local).
+ * Define your container **Permissions** by selecting the **Read**, **Write**, **List**, and **Delete** checkboxes.
+ * Select **key1** or **key2**.
+ * Review and select **Create**.
+
+1. A new window appears with the **Container** name, **SAS URL**, and **Query string** for your container.
+
+1. **Copy and paste the SAS URL and query string values in a secure location. They'll only be displayed once and can't be retrieved once the window is closed.**
+
+1. To [construct a SAS URL](#use-your-sas-url-to-grant-access), append the SAS token (URI) to the URL for a storage service.
+
+## Use your SAS URL to grant access
+
+The SAS URL includes a special set of [query parameters](/rest/api/storageservices/create-user-delegation-sas#assign-permissions-with-rbac). Those parameters indicate how the client accesses the resources.
+
+### REST API
+
+To use your SAS URL with the [REST API](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-2022-08-31/operations/BuildDocumentModel), add the SAS URL to the request body:
+
+ ```json
+ {
+ "source":"<BLOB SAS URL>"
+ }
+ ```
+
+That's it! You've learned how to create SAS tokens to authorize how clients access your data.
+
+## Next step
+
+> [!div class="nextstepaction"]
+> [Build a training data set](how-to-guides/build-a-custom-model.md?view=doc-intel-2.1.0&preserve-view=true)
ai-services Deploy Label Tool https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/document-intelligence/deploy-label-tool.md
+
+ Title: How to deploy the Document Intelligence Sample Labeling tool
+
+description: Learn the different ways you can deploy the Document Intelligence Sample Labeling tool to help with supervised learning.
+++++ Last updated : 07/18/2023+
+monikerRange: 'doc-intel-2.1.0'
+++
+# Deploy the Sample Labeling tool
+
+**This article applies to:** ![Document Intelligence v2.1 checkmark](media/yes-icon.png) **Document Intelligence v2.1**.
+
+>[!TIP]
+>
+> * For an enhanced experience and advanced model quality, try the [Document Intelligence v3.0 Studio](https://formrecognizer.appliedai.azure.com/studio).
+> * The v3.0 Studio supports any model trained with v2.1 labeled data.
+> * You can refer to the [API migration guide](v3-migration-guide.md) for detailed information about migrating from v2.1 to v3.0.
+> * *See* our [**REST API**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-3.0.0&preserve-view=true) or [**C#**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-3.0.0&preserve-view=true), [**Java**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-3.0.0&preserve-view=true), [**JavaScript**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-3.0.0&preserve-view=true), or [Python](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-3.0.0&preserve-view=true) SDK quickstarts to get started with the v3.0 version.
+
+> [!NOTE]
+> The [cloud hosted](https://fott-2-1.azurewebsites.net/) labeling tool is available at [https://fott-2-1.azurewebsites.net/](https://fott-2-1.azurewebsites.net/). Follow the steps in this document only if you want to deploy the Sample Labeling tool for yourself.
+
+The Document Intelligence Sample Labeling tool is an application that provides a simple user interface (UI), which you can use to manually label forms (documents) for supervised learning. In this article, we provide links and instructions that teach you how to:
+
+* [Run the Sample Labeling tool locally](#run-the-sample-labeling-tool-locally)
+* [Deploy the Sample Labeling tool to an Azure Container Instance (ACI)](#deploy-with-azure-container-instances-aci)
+* [Use and contribute to the open-source OCR Form Labeling Tool](#open-source-on-github)
+
+## Run the Sample Labeling tool locally
+
+The fastest way to start labeling data is to run the Sample Labeling tool locally. The following quickstart uses the Document Intelligence REST API and the Sample Labeling tool to train a custom model with manually labeled data.
+
+* [Get started with Azure AI Document Intelligence](label-tool.md).
+
+## Deploy with Azure Container Instances (ACI)
+
+Before we get started, it's important to note that there are two ways to deploy the Sample Labeling tool to an Azure Container Instance (ACI). Both options are used to run the Sample Labeling tool with ACI:
+
+* [Using the Azure portal](#azure-portal)
+* [Using the Azure CLI](#azure-cli)
+
+### Azure portal
+
+Follow these steps to create a new resource using the Azure portal:
+
+1. Sign in to the [Azure portal](https://portal.azure.com/signin/index/).
+2. Select **Create a resource**.
+3. Next, select **Web App**.
+
+ > [!div class="mx-imgBorder"]
+ > ![Select web app](./media/quickstarts/create-web-app.png)
+
+4. First, make sure that the **Basics** tab is selected. Now, you're going to need to provide some information:
+
+ > [!div class="mx-imgBorder"]
+ > ![Select Basics](./media/quickstarts/select-basics.png)
+ * Subscription - Select an existing Azure subscription
+ * Resource Group - You can reuse an existing resource group or create a new one for this project. Creating a new resource group is recommended.
+ * Name - Give your web app a name.
+ * Publish - Select **Docker Container**
+ * Operating System - Select **Linux**
+ * Region - Choose a region that makes sense for you.
+ * Linux Plan - Select a pricing tier/plan for your app service.
+
+ > [!div class="mx-imgBorder"]
+ > ![Configure your web app](./media/quickstarts/select-docker.png)
+
+5. Next, select the **Docker** tab.
+
+ > [!div class="mx-imgBorder"]
+ > ![Select Docker](./media/quickstarts/select-docker.png)
+
+6. Now let's configure your Docker container. All fields are required unless otherwise noted:
+<!-- markdownlint-disable MD025 -->
+
+* Options - Select **Single Container**
+* Image Source - Select **Private Registry**
+* Server URL - Set to `https://mcr.microsoft.com`
+* Username (Optional) - Create a username.
+* Password (Optional) - Create a secure password that you can remember.
+* Image and tag - Set to `mcr.microsoft.com/azure-cognitive-services/custom-form/labeltool:latest-2.1`
+* Continuous Deployment - Set to **On** if you want to receive automatic updates when the development team makes changes to the Sample Labeling tool.
+* Startup command - Set to `./run.sh eula=accept`
+
+> [!div class="mx-imgBorder"]
+> ![Configure Docker](./media/quickstarts/configure-docker.png)
+
+* Next, select **Review + Create**, then **Create** to deploy your web app. When complete, you can access your web app at the URL provided in the **Overview** for your resource.
+
+### Continuous deployment
+
+After you've created your web app, you can enable the continuous deployment option:
+
+* From the left pane, choose **Container settings**.
+* In the main window, navigate to Continuous deployment and toggle between the **On** and **Off** buttons to set your preference:
++
+> [!NOTE]
+> When creating your web app, you can also configure authorization/authentication. This is not necessary to get started.
+
+> [!IMPORTANT]
+> You may need to enable TLS for your web app in order to view it at its `https` address. Follow the instructions in [Enable a TLS endpoint](../../container-instances/container-instances-container-group-ssl.md) to set up a sidecar container that enables TLS/SSL for your web app.
+<!-- markdownlint-disable MD001 -->
+### Azure CLI
+
+As an alternative to using the Azure portal, you can create a resource using the Azure CLI. Before you continue, you need to install the [Azure CLI](/cli/azure/install-azure-cli). You can skip this step if you're already working with the Azure CLI.
+
+There are a few things you need to know about this command:
+
+* `DNS_NAME_LABEL=aci-demo-$RANDOM` generates a random DNS name.
+* This sample assumes that you have a resource group that you can use to create a resource. Replace `<resource_group_name>` with a valid resource group associated with your subscription.
+* You need to specify where you want to create the resource. Replace `<region name>` with your desired region for the web app.
+* This command automatically accepts the EULA.
+
+From the Azure CLI, run this command to create a web app resource for the Sample Labeling tool:
+
+<!-- markdownlint-disable MD024 -->
+
+```azurecli
+DNS_NAME_LABEL=aci-demo-$RANDOM
+
+az container create \
+ --resource-group <resource_group_name> \
+ --name <name> \
+ --image mcr.microsoft.com/azure-cognitive-services/custom-form/labeltool:latest-2.1 \
+ --ports 3000 \
+ --dns-name-label $DNS_NAME_LABEL \
+ --location <region name> \
+ --cpu 2 \
+ --memory 8 \
+ --command-line "./run.sh eula=accept"
+
+```
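+
+After the deployment completes, you can look up the public fully qualified domain name (FQDN) that was generated from the DNS name label, for example with `az container show` (a sketch; use the same resource group and container name as in the command above). The Sample Labeling tool is served on port 3000 of that host:
+
+```bash
+# Print the FQDN assigned to the container group.
+az container show \
+  --resource-group <resource_group_name> \
+  --name <name> \
+  --query ipAddress.fqdn \
+  --output tsv
+```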
+
+### Connect to Azure AD for authorization
+
+It's recommended that you connect your web app to Azure Active Directory (Azure AD). This connection ensures that only users with valid credentials can sign in and use your web app. Follow the instructions in [Configure your App Service app](../../app-service/configure-authentication-provider-aad.md) to connect to Azure Active Directory.
+
+## Open source on GitHub
+
+The OCR Form Labeling Tool is also available as an open-source project on GitHub. The tool is a web application built using React + Redux, and is written in TypeScript. To learn more or contribute, see [OCR Form Labeling Tool](https://github.com/microsoft/OCR-Form-Tools/blob/master/README.md).
+
+## Next steps
+
+Use the [Train with labels](label-tool.md) quickstart to learn how to use the tool to manually label training data and perform supervised learning.
ai-services Disaster Recovery https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/document-intelligence/disaster-recovery.md
+
+ Title: Disaster recovery guidance for Azure AI Document Intelligence
+
+description: Learn how to use the copy model API to back up your Document Intelligence resources.
+++++ Last updated : 07/18/2023+
+monikerRange: '<=doc-intel-3.0.0'
++
+<!-- markdownlint-disable MD036 -->
+<!-- markdownlint-disable MD033 -->
+
+# Disaster recovery
++++
+When you create a Document Intelligence resource in the Azure portal, you specify a region. From then on, your resource and all of its operations stay associated with that particular Azure server region. It's rare, but not impossible, to encounter a network issue that hits an entire region. If your solution needs to always be available, then you should design it to either fail-over into another region or split the workload between two or more regions. Both approaches require at least two Document Intelligence resources in different regions and the ability to sync custom models across regions.
+
+The Copy API enables this scenario by allowing you to copy custom models from one Document Intelligence account into others, which can exist in any supported geographical region. This guide shows you how to use the Copy REST API with cURL. You can also use an HTTP request service like Postman to issue the requests.
+
+## Business scenarios
+
+If your app or business depends on the use of a Document Intelligence custom model, we recommend you copy your model to another Document Intelligence account in another region. If a regional outage occurs, you can then access your model in the region where it was copied.
+
+## Prerequisites
+
+1. Two Document Intelligence Azure resources in different Azure regions. If you don't have them, go to the Azure portal and [create a new Document Intelligence resource](https://portal.azure.com/#create/Microsoft.CognitiveServicesFormRecognizer).
+1. The key, endpoint URL, and subscription ID for your Document Intelligence resource. You can find these values on the resource's **Overview** tab in the [Azure portal](https://portal.azure.com/#home).
+++
+## Copy API overview
+
+The process for copying a custom model consists of the following steps:
+
+1. First you issue a copy authorization request to the target resource&mdash;that is, the resource that receives the copied model. You receive back the URL of the newly created target model that receives the copied model.
+1. Next you send the copy request to the source resource&mdash;the resource that contains the model to be copied with the payload (copy authorization) returned from the previous call. You receive back a URL that you can query to track the progress of the operation.
+1. You use your source resource credentials to query the progress URL until the operation is a success. With v3.0, you can also query the new model ID in the target resource to get the status of the new model.
+
+## Generate Copy authorization request
+
+The following HTTP request gets copy authorization from your target resource. You need to enter the endpoint and key of your target resource as headers.
+
+```http
+POST https://{TARGET_FORM_RECOGNIZER_RESOURCE_ENDPOINT}/formrecognizer/documentModels:authorizeCopy?api-version=2022-08-31
+Ocp-Apim-Subscription-Key: {TARGET_FORM_RECOGNIZER_RESOURCE_KEY}
+```
+
+Request body
+
+```json
+{
+ "modelId": "target-model-name",
+ "description": "Copied from SCUS"
+}
+```
+
+You receive a `200` response code with response body that contains the JSON payload required to initiate the copy.
+
+```json
+{
+ "targetResourceId": "/subscriptions/{targetSub}/resourceGroups/{targetRG}/providers/Microsoft.CognitiveServices/accounts/{targetService}",
+ "targetResourceRegion": "region",
+ "targetModelId": "target-model-name",
+ "targetModelLocation": "model path",
+ "accessToken": "access token",
+ "expirationDateTime": "timestamp"
+}
+```
+
+## Start Copy operation
+
+The following HTTP request starts the copy operation on the source resource. You need to enter the endpoint and key of your source resource as the url and header. Notice that the request URL contains the model ID of the source model you want to copy.
+
+```http
+POST {{source-endpoint}}formrecognizer/documentModels/{model-to-be-copied}:copyTo?api-version=2022-08-31
+Ocp-Apim-Subscription-Key: {SOURCE_FORM_RECOGNIZER_RESOURCE_KEY}
+```
+
+The body of your request is the response from the previous step.
+
+```json
+{
+ "targetResourceId": "/subscriptions/{targetSub}/resourceGroups/{targetRG}/providers/Microsoft.CognitiveServices/accounts/{targetService}",
+ "targetResourceRegion": "region",
+ "targetModelId": "target-model-name",
+ "targetModelLocation": "model path",
+ "accessToken": "access token",
+ "expirationDateTime": "timestamp"
+}
+```
+
+You receive a `202\Accepted` response with an Operation-Location header. This value is the URL that you use to track the progress of the operation. Copy it to a temporary location for the next step.
+
+```http
+HTTP/1.1 202 Accepted
+Operation-Location: https://{source-resource}.cognitiveservices.azure.com/formrecognizer/operations/{operation-id}?api-version=2022-08-31
+```
+
+> [!NOTE]
+> The Copy API transparently supports the [AEK/CMK](https://msazure.visualstudio.com/Cognitive%20Services/_wiki/wikis/Cognitive%20Services.wiki/52146/Customer-Managed-Keys) feature. This doesn't require any special treatment, but note that if you're copying from an unencrypted resource to an encrypted resource, you need to include the request header `x-ms-forms-copy-degrade: true`. If this header is not included, the copy operation will fail and return a `DataProtectionTransformServiceError`.
+
+## Track Copy progress
+
+```console
+GET https://{source-resource}.cognitiveservices.azure.com/formrecognizer/operations/{operation-id}?api-version=2022-08-31
+Ocp-Apim-Subscription-Key: {SOURCE_FORM_RECOGNIZER_RESOURCE_KEY}
+```
+
+### Track the target model ID
+
+You can also use the **[Get model](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-2022-08-31/operations/GetModel)** API to track the status of the operation by querying the target model. Call the API using the target model ID that you copied down from the [Generate Copy authorization request](#generate-copy-authorization-request) response.
+
+```http
+GET https://{YOUR-ENDPOINT}/formrecognizer/documentModels/{modelId}?api-version=2022-08-31" -H "Ocp-Apim-Subscription-Key: {YOUR-KEY}
+```
+
+In the response body, you see information about the model. Check the `"status"` field for the status of the model.
+
+```http
+HTTP/1.1 200 OK
+Content-Type: application/json; charset=utf-8
+{"modelInfo":{"modelId":"33f4d42c-cd2f-4e74-b990-a1aeafab5a5d","status":"ready","createdDateTime":"2020-02-26T16:59:28Z","lastUpdatedDateTime":"2020-02-26T16:59:34Z"},"trainResult":{"trainingDocuments":[{"documentName":"0.pdf","pages":1,"errors":[],"status":"succeeded"},{"documentName":"1.pdf","pages":1,"errors":[],"status":"succeeded"},{"documentName":"2.pdf","pages":1,"errors":[],"status":"succeeded"},{"documentName":"3.pdf","pages":1,"errors":[],"status":"succeeded"},{"documentName":"4.pdf","pages":1,"errors":[],"status":"succeeded"}],"errors":[]}}
+```
+
+## cURL sample code
+
+The following code snippets use cURL to make API calls. You also need to fill in the model IDs and subscription information specific to your own resources.
+
+### Generate Copy authorization
+
+**Request**
+
+```bash
+curl -i -X POST "{YOUR-ENDPOINT}formrecognizer/documentModels:authorizeCopy?api-version=2022-08-31"
+-H "Content-Type: application/json"
+-H "Ocp-Apim-Subscription-Key: {YOUR-KEY}"
+--data-ascii "{
+ 'modelId': '{modelId}',
+ 'description': '{description}'
+}"
+```
+
+**Successful response**
+
+```json
+{
+ "targetResourceId": "string",
+ "targetResourceRegion": "string",
+ "targetModelId": "string",
+ "targetModelLocation": "string",
+ "accessToken": "string",
+ "expirationDateTime": "string"
+}
+```
+
+### Begin Copy operation
+
+**Request**
+
+```bash
+curl -i -X POST "{YOUR-ENDPOINT}/formrecognizer/documentModels/{modelId}:copyTo?api-version=2022-08-31"
+-H "Content-Type: application/json"
+-H "Ocp-Apim-Subscription-Key: {YOUR-KEY}"
+--data-ascii "{
+ 'targetResourceId': '{targetResourceId}',
+    'targetResourceRegion': '{targetResourceRegion}',
+ 'targetModelId': '{targetModelId}',
+ 'targetModelLocation': '{targetModelLocation}',
+ 'accessToken': '{accessToken}',
+ 'expirationDateTime': '{expirationDateTime}'
+}"
+
+```
+
+**Successful response**
+
+```http
+HTTP/1.1 202 Accepted
+Operation-Location: https://{source-resource}.cognitiveservices.azure.com/formrecognizer/operations/{operation-id}?api-version=2022-08-31
+```
+
+### Track copy operation progress
+
+You can use the [**Get operation**](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-2022-08-31/operations/GetOperation) API to list all document model operations (succeeded, in-progress, or failed) associated with your Document Intelligence resource. Operation information only persists for 24 hours. Here's a list of the operations (operationId) that can be returned:
+
+* documentModelBuild
+* documentModelCompose
+* documentModelCopyTo
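+
+For example, the status of a copy operation can be retrieved by issuing a GET request against the `Operation-Location` URL returned by the copy call. This sketch uses the same placeholder style as the other samples in this section:
+
+```bash
+curl -i -X GET "{YOUR-ENDPOINT}/formrecognizer/operations/{operation-id}?api-version=2022-08-31" \
+  -H "Ocp-Apim-Subscription-Key: {YOUR-KEY}"
+```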
+
+### Track the target model ID
+
+If the operation was successful, the document model can be accessed using the [**getModel**](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-2022-08-31/operations/GetModel) (get a single model), or [**GetModels**](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-2022-08-31/operations/GetModels) (get a list of models) APIs.
+++
+## Copy model overview
+
+The process for copying a custom model consists of the following steps:
+
+1. First you issue a copy authorization request to the target resource&mdash;that is, the resource that receives the copied model. You receive back the URL of the newly created target model that receives the copied model.
+1. Next you send the copy request to the source resource&mdash;the resource that contains the model to be copied with the payload (copy authorization) returned from the previous call. You receive back a URL that you can query to track the progress of the operation.
+1. You use your source resource credentials to query the progress URL until the operation is a success.
+
+## Generate authorization request
+
+The following HTTP request generates a copy authorization from your target resource. You need to enter the endpoint and key of your target resource as headers.
+
+```http
+POST https://{TARGET_FORM_RECOGNIZER_RESOURCE_ENDPOINT}/formrecognizer/v2.1/custom/models/copyAuthorization
+Ocp-Apim-Subscription-Key: {TARGET_FORM_RECOGNIZER_RESOURCE_KEY}
+```
+
+You receive a `201\Created` response with a `modelId` value in the body. This string is the ID of the newly created (blank) model. The `accessToken` is needed for the API to copy data to this resource, and the `expirationDateTimeTicks` value is the expiration of the token. Save all three of these values to a secure location.
+
+```http
+HTTP/1.1 201 Created
+Location: https://{TARGET_FORM_RECOGNIZER_RESOURCE_ENDPOINT}/formrecognizer/v2.1/custom/models/33f4d42c-cd2f-4e74-b990-a1aeafab5a5d
+{"modelId":"<your model ID>","accessToken":"<your access token>","expirationDateTimeTicks":637233481531659440}
+```
+
+## Start the copy operation
+
+The following HTTP request starts the Copy operation on the source resource. You need to enter the endpoint and key of your source resource as headers. Notice that the request URL contains the model ID of the source model you want to copy.
+
+```http
+POST https://{SOURCE_FORM_RECOGNIZER_RESOURCE_ENDPOINT}/formrecognizer/v2.1/custom/models/<your model ID>/copy HTTP/1.1
+Ocp-Apim-Subscription-Key: {SOURCE_FORM_RECOGNIZER_RESOURCE_KEY}
+```
+
+The body of your request needs to have the following format. You need to enter the resource ID and region name of your target resource. You can find your resource ID on the **Properties** tab of your resource in the Azure portal, and you can find the region name on the **Keys and endpoint** tab. You also need the model ID, access token, and expiration value that you copied from the previous step.
+
+```json
+{
+ "targetResourceId": "{TARGET_AZURE_FORM_RECOGNIZER_RESOURCE_ID}",
+ "targetResourceRegion": "{TARGET_AZURE_FORM_RECOGNIZER_RESOURCE_REGION_NAME}",
+ "copyAuthorization": {"modelId":"<your model ID>","accessToken":"<your access token>","expirationDateTimeTicks":637233481531659440}
+}
+```
+
+You receive a `202 Accepted` response with an Operation-Location header. This value is the URL that you use to track the progress of the operation. Copy it to a temporary location for the next step.
+
+```http
+HTTP/1.1 202 Accepted
+Operation-Location: https://{SOURCE_FORM_RECOGNIZER_RESOURCE_ENDPOINT}/formrecognizer/v2.1/custom/models/eccc3f13-8289-4020-ba16-9f1d1374e96f/copyresults/02989ba8-1296-499f-aaf4-55cfff41b8f1
+```
+
+> [!NOTE]
+> The Copy API transparently supports the [AEK/CMK](https://msazure.visualstudio.com/Cognitive%20Services/_wiki/wikis/Cognitive%20Services.wiki/52146/Customer-Managed-Keys) feature. This operation doesn't require any special treatment. However, if you're copying from an encrypted resource to an unencrypted resource, you need to include the request header `x-ms-forms-copy-degrade: true`. If this header isn't included, the copy operation fails and returns a `DataProtectionTransformServiceError`.
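+
+As a hedged example (reusing the placeholders from the earlier Copy request, with the request body saved to a hypothetical `copy-request.json` file), the header is added like any other request header:
+
+```bash
+# Sketch: the same v2.1 Copy request with the degrade header included.
+# copy-request.json is a hypothetical file containing the copy request body shown earlier.
+curl -i -X POST "https://{SOURCE_FORM_RECOGNIZER_RESOURCE_ENDPOINT}/formrecognizer/v2.1/custom/models/{SOURCE_MODELID}/copy" \
+-H "Content-Type: application/json" \
+-H "Ocp-Apim-Subscription-Key: {SOURCE_FORM_RECOGNIZER_RESOURCE_KEY}" \
+-H "x-ms-forms-copy-degrade: true" \
+--data-ascii @copy-request.json
+```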
+
+### Track operation progress
+
+Track your progress by querying the **Get Copy Model Result** API against the source resource endpoint.
+
+```http
+GET https://{SOURCE_FORM_RECOGNIZER_RESOURCE_ENDPOINT}/formrecognizer/v2.1/custom/models/eccc3f13-8289-4020-ba16-9f1d1374e96f/copyresults/02989ba8-1296-499f-aaf4-55cfff41b8f1 HTTP/1.1
+Ocp-Apim-Subscription-Key: {SOURCE_FORM_RECOGNIZER_RESOURCE_KEY}
+```
+
+The response varies depending on the status of the operation. Look for the `"status"` field in the JSON body. If you're automating this API call in a script, we recommend querying the operation once every second.
+
+```http
+HTTP/1.1 200 OK
+Content-Type: application/json; charset=utf-8
+{"status":"succeeded","createdDateTime":"2020-04-23T18:18:01.0275043Z","lastUpdatedDateTime":"2020-04-23T18:18:01.0275048Z","copyResult":{}}
+```
+
+### Track operation status with modelID
+
+You can also use the **Get Custom Model** API to track the status of the operation by querying the target model. Call this API using the target model ID that you copied down in the first step.
+
+```http
+GET https://{TARGET_FORM_RECOGNIZER_RESOURCE_ENDPOINT}/formrecognizer/v2.1/custom/models/33f4d42c-cd2f-4e74-b990-a1aeafab5a5d HTTP/1.1
+Ocp-Apim-Subscription-Key: {TARGET_FORM_RECOGNIZER_RESOURCE_KEY}
+```
+
+In the response body, you receive information about the model. Check the `"status"` field for the status of the model.
+
+```http
+HTTP/1.1 200 OK
+Content-Type: application/json; charset=utf-8
+{"modelInfo":{"modelId":"33f4d42c-cd2f-4e74-b990-a1aeafab5a5d","status":"ready","createdDateTime":"2020-02-26T16:59:28Z","lastUpdatedDateTime":"2020-02-26T16:59:34Z"},"trainResult":{"trainingDocuments":[{"documentName":"0.pdf","pages":1,"errors":[],"status":"succeeded"},{"documentName":"1.pdf","pages":1,"errors":[],"status":"succeeded"},{"documentName":"2.pdf","pages":1,"errors":[],"status":"succeeded"},{"documentName":"3.pdf","pages":1,"errors":[],"status":"succeeded"},{"documentName":"4.pdf","pages":1,"errors":[],"status":"succeeded"}],"errors":[]}}
+```
+
+## cURL code samples
+
+The following code snippets use cURL to make API calls. You also need to fill in the model IDs and subscription information specific to your own resources.
+
+### Generate copy authorization
+
+```bash
+curl -i -X POST "https://{TARGET_FORM_RECOGNIZER_RESOURCE_ENDPOINT}/formrecognizer/v2.1/custom/models/copyAuthorization" -H "Ocp-Apim-Subscription-Key: {TARGET_FORM_RECOGNIZER_RESOURCE_KEY}"
+```
+
+### Start copy operation
+
+```bash
+curl -i -X POST "https://{TARGET_FORM_RECOGNIZER_RESOURCE_ENDPOINT}/formrecognizer/v2.1/custom/models/copyAuthorization" -H "Content-Type: application/json" -H "Ocp-Apim-Subscription-Key: {TARGET_FORM_RECOGNIZER_RESOURCE_KEY}" --data-ascii "{ \"targetResourceId\": \"{TARGET_AZURE_FORM_RECOGNIZER_RESOURCE_ID}\", \"targetResourceRegion\": \"{TARGET_AZURE_FORM_RECOGNIZER_RESOURCE_REGION_NAME}\", \"copyAuthorization\": "{\"modelId\":\"33f4d42c-cd2f-4e74-b990-a1aeafab5a5d\",\"accessToken\":\"1855fe23-5ffc-427b-aab2-e5196641502f\",\"expirationDateTimeTicks\":637233481531659440}"}"
+```
+
+### Track copy progress
+
+```bash
+curl -i GET "https://<SOURCE_FORM_RECOGNIZER_RESOURCE_ENDPOINT>/formrecognizer/v2.1/custom/models/{SOURCE_MODELID}/copyResults/{RESULT_ID}" -H "Content-Type: application/json" -H "Ocp-Apim-Subscription-Key: {SOURCE_FORM_RECOGNIZER_RESOURCE_KEY}"
+```
+++
+### Common error code messages
+
+|Error|Resolution|
+|:--|:--|
+| 400 / Bad Request with `"code": "1002"` | Indicates a validation error or badly formed copy request. Common issues include: a) Invalid or modified `copyAuthorization` payload. b) Expired value for `expirationDateTimeTicks` token (`copyAuthorization` payload is valid for 24 hours). c) Invalid or unsupported `targetResourceRegion`. d) Invalid or malformed `targetResourceId` string.|
+|*Authorization failure due to missing or invalid authorization claims*.| Occurs when the `copyAuthorization` payload or content is modified from the `copyAuthorization` API. Ensure that the payload is the same exact content that was returned from the earlier `copyAuthorization` call.|
+|*Couldn't retrieve authorization metadata*.| Indicates that the `copyAuthorization` payload is being reused with a copy request. A copy request that succeeds doesn't allow any further requests that use the same `copyAuthorization` payload. If an earlier copy request failed with a different error and you later retry it with the same authorization payload, this error is raised. The resolution is to generate a new `copyAuthorization` payload and then reissue the copy request.|
+|*Data transfer request isn't allowed as it downgrades to a less secure data protection scheme*.| Occurs when copying from an `AEK` enabled resource to a non-`AEK` enabled resource. To allow copying an encrypted model to the target as unencrypted, specify the `x-ms-forms-copy-degrade: true` header with the copy request.|
+|"Couldn't fetch information for Cognitive resource with ID...". | Indicates that the Azure resource indicated by the `targetResourceId` isn't a valid Cognitive resource or doesn't exist. Verify and reissue the copy request to resolve this issue.</br> Ensure the resource is valid and exists in the specified region, such as, `westus2`|
++
+## Next steps
++
+In this guide, you learned how to use the Copy API to back up your custom models to a secondary Document Intelligence resource. Next, explore the API reference docs to see what else you can do with Document Intelligence.
+
+* [REST API reference documentation](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-2022-08-31/operations/AnalyzeDocument)
++
+In this guide, you learned how to use the Copy API to back up your custom models to a secondary Document Intelligence resource. Next, explore the API reference docs to see what else you can do with Document Intelligence.
+
+* [REST API reference documentation](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-v2-1/operations/AnalyzeBusinessCardAsync)
+
ai-services Encrypt Data At Rest https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/document-intelligence/encrypt-data-at-rest.md
+
+ Title: Document Intelligence service encryption of data at rest
+
+description: Microsoft offers Microsoft-managed encryption keys, and also lets you manage your Azure AI services subscriptions with your own keys, called customer-managed keys (CMK). This article covers data encryption at rest for Document Intelligence, and how to enable and manage CMK.
+++++ Last updated : 07/18/2023++
+monikerRange: '<=doc-intel-3.0.0'
+++
+# Document Intelligence encryption of data at rest
++
+Azure AI Document Intelligence automatically encrypts your data when persisting it to the cloud. Document Intelligence encryption protects your data to help you to meet your organizational security and compliance commitments.
++
+> [!IMPORTANT]
+> Customer-managed keys are only available for resources created after May 11, 2020. To use CMK with Document Intelligence, you need to create a new Document Intelligence resource. After the resource is created, you can use Azure Key Vault to set up your managed identity.
++
+## Next steps
+
+* [Document Intelligence Customer-Managed Key Request Form](https://aka.ms/cogsvc-cmk)
+* [Learn more about Azure Key Vault](../../key-vault/general/overview.md)
ai-services Build A Custom Classifier https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/document-intelligence/how-to-guides/build-a-custom-classifier.md
+
+ Title: "Build and train a custom classifier"
+
+description: Learn how to label, and build a custom document classification model.
+++++ Last updated : 07/18/2023+
+monikerRange: 'doc-intel-3.0.0'
+++
+# Build and train a custom classification model (preview)
+
+**This article applies to:** ![Document Intelligence checkmark](../medi) supported by Document Intelligence REST API version [2023-02-28-preview](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-2023-02-28-preview/operations/AnalyzeDocument).
+
+> [!IMPORTANT]
+>
+> Custom classification model is currently in public preview. Features, approaches, and processes may change, prior to General Availability (GA), based on user feedback.
+>
+
+Custom classification models can classify each page in an input file to identify the document(s) within. Classifier models can also identify multiple documents or multiple instances of a single document in the input file. To get started training a custom classification model, you need at least **five documents** for each class and at least **two classes** of documents.
+
+## Custom classification model input requirements
+
+Make sure your training data set follows the input requirements for Document Intelligence.
++
+## Training data tips
+
+Follow these tips to further optimize your data set for training:
+
+* If possible, use text-based PDF documents instead of image-based documents. Scanned PDFs are handled as images.
+
+* If your form images are of lower quality, use a larger data set (10-15 images, for example).
+
+## Upload your training data
+
+Once you've put together the set of forms or documents for training, you need to upload it to an Azure blob storage container. If you don't know how to create an Azure storage account with a container, follow the [Azure Storage quickstart for Azure portal](../../../storage/blobs/storage-quickstart-blobs-portal.md). You can use the free pricing tier (F0) to try the service, and upgrade later to a paid tier for production. If your dataset is organized as folders, preserve that structure, because the Studio can use your folder names as labels to simplify the labeling process.
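+
+If you prefer to script the upload, a hedged sketch using the Azure CLI is shown below. The storage account, container, and local folder names are placeholders, and the command assumes you're already signed in with `az login` and have data-plane permissions on the container.
+
+```bash
+# Sketch: upload a local training folder (including subfolders) to a blob container.
+az storage blob upload-batch \
+  --account-name "mytrainingstorage" \
+  --destination "training-data" \
+  --source "./training-data" \
+  --auth-mode login
+```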
+
+## Create a classification project in the Document Intelligence Studio
+
+The Document Intelligence Studio provides and orchestrates all the API calls required to complete your dataset and train your model.
+
+1. Start by navigating to the [Document Intelligence Studio](https://formrecognizer.appliedai.azure.com/studio). The first time you use the Studio, you need to [initialize your subscription, resource group, and resource](../quickstarts/try-document-intelligence-studio.md). Then, follow the [prerequisites for custom projects](../quickstarts/try-document-intelligence-studio.md#added-prerequisites-for-custom-projects) to configure the Studio to access your training dataset.
+
+1. In the Studio, select the **Custom classification model** tile in the custom models section of the page, and then select the **Create a project** button.
+
+ :::image type="content" source="../media/how-to/studio-create-classifier-project.png" alt-text="Screenshot of how to create a classifier project in the Document Intelligence Studio.":::
+
+ 1. On the create project dialog, provide a name for your project, optionally a description, and select continue.
+
+ 1. Next, choose or create a Document Intelligence resource before you select continue.
+
+ :::image type="content" source="../media/how-to/studio-select-resource.png" alt-text="Screenshot showing the project setup dialog window.":::
+
+1. Next, select the storage account you used to upload your custom model training dataset. The **Folder path** should be empty if your training documents are in the root of the container. If your documents are in a subfolder, enter the relative path from the container root in the **Folder path** field. Once your storage account is configured, select continue.
+
+ > [!IMPORTANT]
+ > You can either organize the training dataset by folders where the folder name is the label or class for documents or create a flat list of documents that you can assign a label to in the Studio.
+
+ :::image type="content" source="../media/how-to/studio-select-storage.png" alt-text="Screenshot showing how to select the Document Intelligence resource.":::
+
+1. **Training a custom classifier requires the output from the Layout model for each document in your dataset**. Run layout on all documents prior to the model training process.
+
+1. Finally, review your project settings and select **Create Project** to create a new project. You should now be in the labeling window and see the files in your dataset listed.
+
+## Label your data
+
+In your project, you only need to label each document with the appropriate class label.
++
+You see the files you uploaded to storage in the file list, ready to be labeled. You have a few options to label your dataset.
+
+1. If the documents are organized in folders, the Studio prompts you to use the folder names as labels. This step simplifies your labeling down to a single selection.
+
+1. To assign a label to a document, select the add label selection mark.
+
+1. Use Ctrl+select to choose multiple documents and assign them a label.
+
+You should now have all the documents in your dataset labeled. If you look at the storage account, you find *.ocr.json* files that correspond to each document in your training dataset and a new **class-name.jsonl** file for each class labeled. This training dataset is submitted to train the model.
+
+## Train your model
+
+With your dataset labeled, you're now ready to train your model. Select the train button in the upper-right corner.
+
+1. On the train model dialog, provide a unique classifier ID and, optionally, a description. The classifier ID accepts a string data type.
+
+1. Select **Train** to initiate the training process.
+
+1. Classifier models train in a few minutes.
+
+1. Navigate to the *Models* menu to view the status of the train operation.
+
+## Test the model
+
+Once the model training is complete, you can test your model by selecting the model on the models list page.
+
+1. Select the model and then select the **Test** button.
+
+1. Add a new file by browsing for a file or dropping a file into the document selector.
+
+1. With a file selected, choose the **Analyze** button to test the model.
+
+1. The model results are displayed with the list of identified documents, a confidence score for each identified document, and the page range for each document.
+
+1. Validate your model by evaluating the results for each document identified.
+
+Congratulations, you've trained a custom classification model in the Document Intelligence Studio! Your model is ready for use with the REST API or the SDK to analyze documents.
+
+## Troubleshoot
+
+The [classification model](../concept-custom-classifier.md) requires results from the [layout model](../concept-layout.md) for each training document. If you haven't provided the layout results, the Studio attempts to run the layout model for each document prior to training the classifier. This process is throttled and can result in a 429 response.
+
+In the Studio, prior to training with the classification model, run the [layout model](https://formrecognizer.appliedai.azure.com/studio/layout) on each document and upload it to the same location as the original document. Once the layout results are added, you can train the classifier model with your documents.
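+
+As a hedged illustration of running layout over a single training document with the REST API (the endpoint, key, and document URL are placeholders, and the request shape assumes the `2023-02-28-preview` version referenced earlier):
+
+```bash
+# Sketch: analyze one training document with the prebuilt layout model.
+curl -i -X POST "{YOUR-ENDPOINT}/formrecognizer/documentModels/prebuilt-layout:analyze?api-version=2023-02-28-preview" \
+-H "Content-Type: application/json" \
+-H "Ocp-Apim-Subscription-Key: {YOUR-KEY}" \
+--data-ascii '{ "urlSource": "{URL-of-your-training-document}" }'
+```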
+
+## Next steps
+
+> [!div class="nextstepaction"]
+> [Learn about custom model types](../concept-custom.md)
+
+> [!div class="nextstepaction"]
+> [Learn about accuracy and confidence with custom models](../concept-accuracy-confidence.md)
ai-services Build A Custom Model https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/document-intelligence/how-to-guides/build-a-custom-model.md
+
+ Title: "Build and train a custom model"
+
+description: Learn how to build, label, and train a custom model.
+++++ Last updated : 07/18/2023+
+monikerRange: '<=doc-intel-3.0.0'
+++
+# Build and train a custom model
++
+Document Intelligence models require as few as five training documents to get started. Once you have at least five documents, you can train either a [custom template model (custom form)](../concept-custom-template.md) or a [custom neural model (custom document)](../concept-custom-neural.md). The training process is identical for both model types, and this article walks you through training either one.
+
+## Custom model input requirements
+
+First, make sure your training data set follows the input requirements for Document Intelligence.
++++
+## Training data tips
+
+Follow these tips to further optimize your data set for training:
+
+* If possible, use text-based PDF documents instead of image-based documents. Scanned PDFs are handled as images.
+* For forms with input fields, use examples that have all of the fields completed.
+* Use forms with different values in each field.
+* If your form images are of lower quality, use a larger data set (10-15 images, for example).
+
+## Upload your training data
+
+Once you've put together the set of forms or documents for training, you need to upload it to an Azure blob storage container. If you don't know how to create an Azure storage account with a container, follow the [Azure Storage quickstart for Azure portal](../../../storage/blobs/storage-quickstart-blobs-portal.md). You can use the free pricing tier (F0) to try the service, and upgrade later to a paid tier for production.
+
+## Video: Train your custom model
+
+* Once you've gathered and uploaded your training dataset, you're ready to train your custom model. In the following video, we create a project and explore some of the fundamentals for successfully labeling and training a model.</br></br>
+
+ > [!VIDEO https://www.microsoft.com/en-us/videoplayer/embed/RE5fX1c]
+
+## Create a project in the Document Intelligence Studio
+
+The Document Intelligence Studio provides and orchestrates all the API calls required to complete your dataset and train your model.
+
+1. Start by navigating to the [Document Intelligence Studio](https://formrecognizer.appliedai.azure.com/studio). The first time you use the Studio, you need to [initialize your subscription, resource group, and resource](../quickstarts/try-document-intelligence-studio.md). Then, follow the [prerequisites for custom projects](../quickstarts/try-document-intelligence-studio.md#added-prerequisites-for-custom-projects) to configure the Studio to access your training dataset.
+
+1. In the Studio, select the **Custom models** tile on the custom models page, and then select the **Create a project** button.
+
+ :::image type="content" source="../media/how-to/studio-create-project.png" alt-text="Screenshot: Create a project in the Document Intelligence Studio.":::
+
+ 1. On the create project dialog, provide a name for your project, optionally a description, and select continue.
+
+ 1. On the next step in the workflow, choose or create a Document Intelligence resource before you select continue.
+
+ > [!IMPORTANT]
+    > Custom neural models are only available in a few regions. If you plan on training a neural model, please select or create a resource in one of [these supported regions](../concept-custom-neural.md#supported-regions).
+
+ :::image type="content" source="../media/how-to/studio-select-resource.png" alt-text="Screenshot: Select the Document Intelligence resource.":::
+
+1. Next, select the storage account you used to upload your custom model training dataset. The **Folder path** should be empty if your training documents are in the root of the container. If your documents are in a subfolder, enter the relative path from the container root in the **Folder path** field. Once your storage account is configured, select continue.
+
+ :::image type="content" source="../media/how-to/studio-select-storage.png" alt-text="Screenshot: Select the storage account.":::
+
+1. Finally, review your project settings and select **Create Project** to create a new project. You should now be in the labeling window and see the files in your dataset listed.
+
+## Label your data
+
+In your project, your first task is to label your dataset with the fields you wish to extract.
+
+The files you uploaded to storage are listed on the left of your screen, with the first file ready to be labeled.
+
+1. To start labeling your dataset, create your first field by selecting the plus (➕) button on the top-right of the screen to select a field type.
+
+ :::image type="content" source="../media/how-to/studio-create-label.png" alt-text="Screenshot: Create a label.":::
+
+1. Enter a name for the field.
+
+1. To assign a value to the field, choose a word or words in the document and select the field in either the dropdown or the field list on the right navigation bar. The labeled value is below the field name in the list of fields.
+
+1. Repeat the process for all the fields you wish to label for your dataset.
+
+1. Label the remaining documents in your dataset by selecting each document and selecting the text to be labeled.
+
+You now have all the documents in your dataset labeled. If you look at the storage account, you find the *.labels.json* and *.ocr.json* files that correspond to each document in your training dataset, along with a new *fields.json* file. This training dataset is submitted to train the model.
+
+## Train your model
+
+With your dataset labeled, you're now ready to train your model. Select the train button in the upper-right corner.
+
+1. On the train model dialog, provide a unique model ID and, optionally, a description. The model ID accepts a string data type.
+
+1. For the build mode, select the type of model you want to train. Learn more about the [model types and capabilities](../concept-custom.md).
+
+ :::image type="content" source="../media/how-to/studio-train-model.png" alt-text="Screenshot: Train model dialog":::
+
+1. Select **Train** to initiate the training process.
+
+1. Template models train in a few minutes. Neural models can take up to 30 minutes to train.
+
+1. Navigate to the *Models* menu to view the status of the train operation.
+
+## Test the model
+
+Once the model training is complete, you can test your model by selecting the model on the models list page.
+
+1. Select the model and then select the **Test** button.
+
+1. Select the `+ Add` button to select a file to test the model.
+
+1. With a file selected, choose the **Analyze** button to test the model.
+
+1. The model results are displayed in the main window and the fields extracted are listed in the right navigation bar.
+
+1. Validate your model by evaluating the results for each field.
+
+1. The right navigation bar also has the sample code to invoke your model and the JSON results from the API.
+
+Congratulations, you've trained a custom model in the Document Intelligence Studio! Your model is ready for use with the REST API or the SDK to analyze documents.
+
+## Next steps
+
+> [!div class="nextstepaction"]
+> [Learn about custom model types](../concept-custom.md)
+
+> [!div class="nextstepaction"]
+> [Learn about accuracy and confidence with custom models](../concept-accuracy-confidence.md)
+++
+**Applies to:** ![Document Intelligence v2.1 checkmark](../medi?view=doc-intel-3.0.0&preserve-view=true)
+
+When you use the Document Intelligence custom model, you provide your own training data to the [Train Custom Model](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-v2-1/operations/TrainCustomModelAsync) operation, so that the model can train to your industry-specific forms. Follow this guide to learn how to collect and prepare data to train the model effectively.
+
+You need at least five filled-in forms of the same type.
+
+If you want to use manually labeled training data, you must start with at least five filled-in forms of the same type. You can still use unlabeled forms in addition to the required data set.
+
+## Custom model input requirements
+
+First, make sure your training data set follows the input requirements for Document Intelligence.
++++
+## Training data tips
+
+Follow these tips to further optimize your data set for training.
+
+* If possible, use text-based PDF documents instead of image-based documents. Scanned PDFs are handled as images.
+* For filled-in forms, use examples that have all of their fields filled in.
+* Use forms with different values in each field.
+* If your form images are of lower quality, use a larger data set (10-15 images, for example).
+
+## Upload your training data
+
+When you've put together the set of form documents for training, you need to upload it to an Azure blob storage container. If you don't know how to create an Azure storage account with a container, follow the [Azure Storage quickstart for Azure portal](../../../storage/blobs/storage-quickstart-blobs-portal.md). Use the standard performance tier.
+
+If you want to use manually labeled data, upload the *.labels.json* and *.ocr.json* files that correspond to your training documents. You can use the [Sample Labeling tool](../label-tool.md) (or your own UI) to generate these files.
+
+### Organize your data in subfolders (optional)
+
+By default, the [Train Custom Model](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-v2-1/operations/TrainCustomModelAsync) API only uses documents that are located at the root of your storage container. However, you can train with data in subfolders if you specify it in the API call. Normally, the body of the [Train Custom Model](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-v2-1/operations/TrainCustomModelAsync) call has the following format, where `<SAS URL>` is the Shared access signature URL of your container:
+
+```json
+{
+ "source":"<SAS URL>"
+}
+```
+
+If you add the following content to the request body, the API trains with documents located in subfolders. The `"prefix"` field is optional and limits the training data set to files whose paths begin with the given string. So a value of `"Test"`, for example, causes the API to look at only the files or folders that begin with the word *Test*.
+
+```json
+{
+ "source": "<SAS URL>",
+ "sourceFilter": {
+ "prefix": "<prefix string>",
+ "includeSubFolders": true
+ },
+ "useLabelFile": false
+}
+```
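+
+As a hedged illustration (the endpoint and key are placeholders, and the path assumes the v2.1 Train Custom Model operation linked above), the request might be issued with cURL like this:
+
+```bash
+# Sketch: submit a v2.1 training request that includes documents in subfolders.
+curl -i -X POST "https://{YOUR_FORM_RECOGNIZER_ENDPOINT}/formrecognizer/v2.1/custom/models" \
+-H "Content-Type: application/json" \
+-H "Ocp-Apim-Subscription-Key: {YOUR_FORM_RECOGNIZER_KEY}" \
+--data-ascii '{
+  "source": "<SAS URL>",
+  "sourceFilter": {
+    "prefix": "Test",
+    "includeSubFolders": true
+  },
+  "useLabelFile": false
+}'
+```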
+
+## Next steps
+
+Now that you've learned how to build a training data set, follow a quickstart to train a custom Document Intelligence model and start using it on your forms.
+
+* [Train a model and extract document data using the client library or REST API](../quickstarts/get-started-sdks-rest-api.md)
+* [Train with labels using the Sample Labeling tool](../label-tool.md)
+
+## See also
+
+* [What is Document Intelligence?](../overview.md)
+
ai-services Compose Custom Models https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/document-intelligence/how-to-guides/compose-custom-models.md
+
+ Title: "How to guide: create and compose custom models with Document Intelligence"
+
+description: Learn how to create, use, and manage Document Intelligence custom and composed models
+++++ Last updated : 07/18/2023+
+monikerRange: '<=doc-intel-3.0.0'
+++
+# Compose custom models
+
+<!-- markdownlint-disable MD051 -->
+<!-- markdownlint-disable MD024 -->
+++
+A composed model is created by taking a collection of custom models and assigning them to a single model ID. You can assign up to 200 trained custom models to a single composed model ID. When a document is submitted to a composed model, the service performs a classification step to decide which custom model accurately represents the form presented for analysis. Composed models are useful when you've trained several models and want to group them to analyze similar form types. For example, your composed model might include custom models trained to analyze your supply, equipment, and furniture purchase orders. Instead of manually trying to select the appropriate model, you can use a composed model to determine the appropriate custom model for each analysis and extraction.
+
+To learn more, see [Composed custom models](../concept-composed-models.md).
+
+In this article, you learn how to create and use composed custom models to analyze your forms and documents.
+
+## Prerequisites
+
+To get started, you need the following resources:
+
+* **An Azure subscription**. You can [create a free Azure subscription](https://azure.microsoft.com/free/cognitive-services/).
+
+* **A Document Intelligence instance**. Once you have your Azure subscription, [create a Document Intelligence resource](https://portal.azure.com/#create/Microsoft.CognitiveServicesFormRecognizer) in the Azure portal to get your key and endpoint. If you have an existing Document Intelligence resource, navigate directly to your resource page. You can use the free pricing tier (F0) to try the service, and upgrade later to a paid tier for production.
+
+ 1. After the resource deploys, select **Go to resource**.
+
+ 1. Copy the **Keys and Endpoint** values from the Azure portal and paste them in a convenient location, such as *Microsoft Notepad*. You need the key and endpoint values to connect your application to the Document Intelligence API.
+
+ :::image type="content" source="../media/containers/keys-and-endpoint.png" alt-text="Still photo showing how to access resource key and endpoint URL.":::
+
+ > [!TIP]
+ > For more information, see [**create a Document Intelligence resource**](../create-document-intelligence-resource.md).
+
+* **An Azure storage account.** If you don't know how to create an Azure storage account, follow the [Azure Storage quickstart for Azure portal](../../../storage/blobs/storage-quickstart-blobs-portal.md). You can use the free pricing tier (F0) to try the service, and upgrade later to a paid tier for production.
+
+## Create your custom models
+
+First, you need a set of custom models to compose. You can use the Document Intelligence Studio, REST API, or client-library SDKs. The steps are as follows:
+
+* [**Assemble your training dataset**](#assemble-your-training-dataset)
+* [**Upload your training set to Azure blob storage**](#upload-your-training-dataset)
+* [**Train your custom models**](#train-your-custom-model)
+
+## Assemble your training dataset
+
+Building a custom model begins with establishing your training dataset. You need a minimum of five completed forms of the same type for your sample dataset. They can be of different file types (jpg, png, pdf, tiff) and contain both text and handwriting. Your forms must follow the [input requirements](../how-to-guides/build-a-custom-model.md?view=doc-intel-2.1.0&preserve-view=true#custom-model-input-requirements) for Document Intelligence.
+
+>[!TIP]
+> Follow these tips to optimize your data set for training:
+>
+> * If possible, use text-based PDF documents instead of image-based documents. Scanned PDFs are handled as images.
+> * For filled-in forms, use examples that have all of their fields filled in.
+> * Use forms with different values in each field.
+> * If your form images are of lower quality, use a larger data set (10-15 images, for example).
+
+See [Build a training data set](../how-to-guides/build-a-custom-model.md?view=doc-intel-2.1.0&preserve-view=true) for tips on how to collect your training documents.
+
+## Upload your training dataset
+
+When you've gathered a set of training documents, you need to [upload your training data](../how-to-guides/build-a-custom-model.md?view=doc-intel-2.1.0&preserve-view=true#upload-your-training-data) to an Azure blob storage container.
+
+If you want to use manually labeled data, you have to upload the *.labels.json* and *.ocr.json* files that correspond to your training documents.
+
+## Train your custom model
+
+When you [train your model](https://formrecognizer.appliedai.azure.com/studio/custommodel/projects) with labeled data, the model uses supervised learning to extract values of interest, using the labeled forms you provide. Labeled data results in better-performing models and can produce models that work with complex forms or forms containing values without keys.
+
+Document Intelligence uses the [prebuilt-layout model](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-2022-08-31/operations/AnalyzeDocument) API to learn the expected sizes and positions of typeface and handwritten text elements and extract tables. Then it uses user-specified labels to learn the key/value associations and tables in the documents. We recommend that you use five manually labeled forms of the same type (same structure) to get started with training a new model. Then, add more labeled data, as needed, to improve the model accuracy. Document Intelligence enables training a model to extract key-value pairs and tables using supervised learning capabilities.
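+
+A hedged sketch of a build request made directly against the REST API is shown below. The endpoint, key, model ID, and SAS URL are placeholders, and the body shape assumes the 2022-08-31 build operation with the `template` build mode.
+
+```bash
+# Sketch: build a custom template model from labeled training data in a blob container.
+curl -i -X POST "{YOUR-ENDPOINT}/formrecognizer/documentModels:build?api-version=2022-08-31" \
+-H "Content-Type: application/json" \
+-H "Ocp-Apim-Subscription-Key: {YOUR-KEY}" \
+--data-ascii '{
+  "modelId": "{customModelId}",
+  "buildMode": "template",
+  "azureBlobSource": {
+    "containerUrl": "{SAS-URL-of-your-training-container}"
+  }
+}'
+```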
+
+### [Document Intelligence Studio](#tab/studio)
+
+To create custom models, start with configuring your project:
+
+1. From the Studio homepage, select [**Create new**](https://formrecognizer.appliedai.azure.com/studio/custommodel/projects) from the Custom model card.
+
+1. Use the ➕ **Create a project** command to start the new project configuration wizard.
+
+1. Enter project details, select the Azure subscription and resource, and the Azure Blob storage container that contains your data.
+
+1. Review and submit your settings to create the project.
++
+While creating your custom models, you may need to extract data collections from your documents. The collections may appear in one of two formats, using tables as the visual pattern:
+
+* Dynamic or variable count of values (rows) for a given set of fields (columns)
+
+* Specific collection of values for a given set of fields (columns and/or rows)
+
+See [Document Intelligence Studio: labeling as tables](../quickstarts/try-document-intelligence-studio.md#labeling-as-tables)
+
+### [REST API](#tab/rest)
+
+Training with labels leads to better performance in some scenarios. To train with labels, you need to have special label information files (*\<filename\>.pdf.labels.json*) in your blob storage container alongside the training documents.
+
+Label files contain key-value associations that a user has entered manually. They're needed for labeled data training, but not every source file needs to have a corresponding label file. Source files without labels are treated as ordinary training documents. We recommend five or more labeled files for reliable training. You can use a UI tool like [Document Intelligence Studio](https://formrecognizer.appliedai.azure.com/studio/customform/projects) to generate these files.
+
+Once you have your label files, you can include them by calling the training method with the *useLabelFile* parameter set to `true`.
++
+### [Client libraries](#tab/sdks)
+
+Training with labels leads to better performance in some scenarios. To train with labels, you need to have special label information files (*\<filename\>.pdf.labels.json*) in your blob storage container alongside the training documents. Once you have them, you can call the training method with the *useTrainingLabels* parameter set to `true`.
+
+|Language |Method|
+|--|--|
+|**C#**|**StartBuildModel**|
+|**Java**| [**beginBuildModel**](/java/api/com.azure.ai.formrecognizer.documentanalysis.administration.documentmodeladministrationclient.beginbuildmodel)|
+|**JavaScript** | [**beginBuildModel**](/javascript/api/@azure/ai-form-recognizer/documentmodeladministrationclient?view=azure-node-latest#@azure-ai-form-recognizer-documentmodeladministrationclient-beginbuildmodel&preserve-view=true)|
+| **Python** | [**begin_build_model**](/python/api/azure-ai-formrecognizer/azure.ai.formrecognizer.aio.documentmodeladministrationclient?view=azure-python#azure-ai-formrecognizer-aio-documentmodeladministrationclient-begin-build-model&preserve-view=true)
++++
+## Create a composed model
+
+> [!NOTE]
+> **the `create compose model` operation is only available for custom models trained _with_ labels.** Attempting to compose unlabeled models will produce an error.
+
+With the [**create compose model**](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-2022-08-31/operations/ComposeDocumentModel) operation, you can assign up to 100 trained custom models to a single model ID. When you analyze documents with a composed model, Document Intelligence first classifies the form you submitted, chooses the best matching assigned model, and then returns results for that model. This operation is useful when incoming forms may belong to one of several templates.
+
+### [Document Intelligence Studio](#tab/studio)
+
+Once the training process has successfully completed, you can begin to build your composed model. Here are the steps for creating and using composed models:
+
+* [**Gather your custom model IDs**](#gather-your-model-ids)
+* [**Compose your custom models**](#compose-your-custom-models)
+* [**Analyze documents**](#analyze-documents)
+* [**Manage your composed models**](#manage-your-composed-models)
+
+#### Gather your model IDs
+
+When you train models using the [**Document Intelligence Studio**](https://formrecognizer.appliedai.azure.com/), the model ID is located in the models menu under a project:
++
+#### Compose your custom models
+
+1. Select a custom models project.
+
+1. In the project, select the ```Models``` menu item.
+
+1. From the resulting list of models, select the models you wish to compose.
+
+1. Choose the **Compose button** from the upper-left corner.
+
+1. In the pop-up window, name your newly composed model and select **Compose**.
+
+1. When the operation completes, your newly composed model appears in the list.
+
+1. Once the model is ready, use the **Test** command to validate it with your test documents and observe the results.
+
+#### Analyze documents
+
+The custom model **Analyze** operation requires you to provide the `modelID` in the call to Document Intelligence. You should provide the composed model ID for the `modelID` parameter in your applications.
++
+#### Manage your composed models
+
+You can manage your custom models throughout their life cycles:
+
+* Test and validate new documents.
+* Download your model to use in your applications.
+* Delete your model when its lifecycle is complete.
++
+### [REST API](#tab/rest)
+
+Once the training process has successfully completed, you can begin to build your composed model. Here are the steps for creating and using composed models:
+
+* [**Compose your custom models**](#compose-your-custom-models)
+* [**Analyze documents**](#analyze-documents)
+* [**Manage your composed models**](#manage-your-composed-models)
+
+#### Compose your custom models
+
+The [compose model API](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-2022-08-31/operations/ComposeDocumentModel) accepts a list of model IDs to be composed.
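+
+A minimal sketch of such a request is shown below (placeholders for endpoint, key, and model IDs; the body shape is assumed from the 2022-08-31 compose operation linked above):
+
+```bash
+# Sketch: compose two existing custom models into a single composed model.
+curl -i -X POST "{YOUR-ENDPOINT}/formrecognizer/documentModels:compose?api-version=2022-08-31" \
+-H "Content-Type: application/json" \
+-H "Ocp-Apim-Subscription-Key: {YOUR-KEY}" \
+--data-ascii '{
+  "modelId": "{composedModelId}",
+  "description": "Composed purchase order models",
+  "componentModels": [
+    { "modelId": "{customModelId1}" },
+    { "modelId": "{customModelId2}" }
+  ]
+}'
+```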
++
+#### Analyze documents
+
+To make an [**Analyze document**](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-2022-08-31/operations/AnalyzeDocument) request, use a unique model name in the request parameters.
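+
+As a hedged sketch (endpoint, key, model ID, and document URL are placeholders; the request shape assumes the 2022-08-31 analyze operation linked above):
+
+```bash
+# Sketch: analyze a document with a composed (or custom) model by its model ID.
+curl -i -X POST "{YOUR-ENDPOINT}/formrecognizer/documentModels/{composedModelId}:analyze?api-version=2022-08-31" \
+-H "Content-Type: application/json" \
+-H "Ocp-Apim-Subscription-Key: {YOUR-KEY}" \
+--data-ascii '{ "urlSource": "{URL-of-the-document-to-analyze}" }'
+```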
++
+#### Manage your composed models
+
+You can manage custom models throughout your development needs including [**copying**](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-2022-08-31/operations/CopyDocumentModelTo), [**listing**](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-2022-08-31/operations/GetModels), and [**deleting**](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-2022-08-31/operations/DeleteModel) your models.
+
+### [Client libraries](#tab/sdks)
+
+Once the training process has successfully completed, you can begin to build your composed model. Here are the steps for creating and using composed models:
+
+* [**Create a composed model**](#create-a-composed-model)
+* [**Analyze documents**](#analyze-documents)
+* [**Manage your composed models**](#manage-your-composed-models)
+
+#### Create a composed model
+
+You can use the programming language of your choice to create a composed model:
+
+| Programming language| Code sample |
+|--|--|
+|**C#** | [Model compose](https://github.com/Azure/azure-sdk-for-net/blob/Azure.AI.FormRecognizer_4.0.0/sdk/formrecognizer/Azure.AI.FormRecognizer/samples/Sample_ModelCompose.md)
+|**Java** | [Model compose](https://github.com/Azure/azure-sdk-for-java/blob/afa0d44fa42979ae9ad9b92b23cdba493a562127/sdk/formrecognizer/azure-ai-formrecognizer/src/samples/java/com/azure/ai/formrecognizer/administration/ComposeDocumentModel.java)
+|**JavaScript** | [Compose model](https://github.com/witemple-msft/azure-sdk-for-js/blob/7e3196f7e529212a6bc329f5f06b0831bf4cc174/sdk/formrecognizer/ai-form-recognizer/samples/v4/javascript/composeModel.js)
+|**Python** | [Create composed model](https://github.com/Azure/azure-sdk-for-python/blob/main/sdk/formrecognizer/azure-ai-formrecognizer/samples/v3.2/sample_compose_model.py)
+
+#### Analyze documents
+
+Once you've built your composed model, you can use it to analyze forms and documents. Use your composed `model ID` and let the service decide which of your aggregated custom models fits best according to the document provided.
+
+|Programming language| Code sample |
+|--|--|
+|**C#** | [Analyze a document with a custom/composed model using model ID](https://github.com/Azure/azure-sdk-for-net/blob/main/sdk/formrecognizer/Azure.AI.FormRecognizer/samples/Sample_AnalyzeWithCustomModel.md)
+|**Java** | [Analyze a document with a custom/composed model using model ID](https://github.com/Azure/azure-sdk-for-javocumentFromUrl.java)
+|**JavaScript** | [Analyze a document with a custom/composed model using model ID](https://github.com/witemple-msft/azure-sdk-for-js/blob/7e3196f7e529212a6bc329f5f06b0831bf4cc174/sdk/formrecognizer/ai-form-recognizer/samples/v4/javascript/analyzeDocumentByModelId.js)
+|**Python** | [Analyze a document with a custom/composed model using model ID](https://github.com/Azure/azure-sdk-for-python/blob/main/sdk/formrecognizer/azure-ai-formrecognizer/samples/v3.2/sample_analyze_custom_documents.py)
+
+#### Manage your composed models
+
+You can manage a custom model at each stage of its life cycle. You can copy a custom model between resources, view a list of all custom models under your subscription, retrieve information about a specific custom model, and delete custom models from your account.
+
+|Programming language| Code sample |
+|--|--|
+|**C#** | [Copy a custom model between Document Intelligence resources](https://github.com/Azure/azure-sdk-for-net/blob/main/sdk/formrecognizer/Azure.AI.FormRecognizer/samples/Sample_CopyCustomModel.md#copy-a-custom-model-between-form-recognizer-resources)|
+|**Java** | [Copy a custom model between Document Intelligence resources](https://github.com/Azure/azure-sdk-for-java/blob/main/sdk/formrecognizer/azure-ai-formrecognizer/src/samples/java/com/azure/ai/formrecognizer/administration/CopyDocumentModel.java)|
+|**JavaScript** | [Copy a custom model between Document Intelligence resources](https://github.com/witemple-msft/azure-sdk-for-js/blob/7e3196f7e529212a6bc329f5f06b0831bf4cc174/sdk/formrecognizer/ai-form-recognizer/samples/v4/javascript/copyModel.js)|
+|**Python** | [Copy a custom model between Document Intelligence resources](https://github.com/Azure/azure-sdk-for-python/blob/main/sdk/formrecognizer/azure-ai-formrecognizer/samples/v3.2/sample_copy_model_to.py)|
+++
+Great! You've learned the steps to create custom and composed models and use them in your Document Intelligence projects and applications.
+
+## Next steps
+
+Try one of our Document Intelligence quickstarts:
+
+> [!div class="nextstepaction"]
+> [Document Intelligence Studio](../quickstarts/try-document-intelligence-studio.md)
+
+> [!div class="nextstepaction"]
+> [REST API](../quickstarts/get-started-sdks-rest-api.md?view=doc-intel-3.0.0&preserve-view=true)
+
+> [!div class="nextstepaction"]
+> [C#](../quickstarts/get-started-sdks-rest-api.md?view=doc-intel-3.0.0&preserve-view=true#prerequisites)
+
+> [!div class="nextstepaction"]
+> [Java](../quickstarts/get-started-sdks-rest-api.md?view=doc-intel-3.0.0&preserve-view=true)
+
+> [!div class="nextstepaction"]
+> [JavaScript](../quickstarts/get-started-sdks-rest-api.md?view=doc-intel-3.0.0&preserve-view=true)
+
+> [!div class="nextstepaction"]
+> [Python](../quickstarts/get-started-sdks-rest-api.md?view=doc-intel-3.0.0&preserve-view=true)
++++
+Document Intelligence uses advanced machine-learning technology to detect and extract information from document images and return the extracted data in a structured JSON output. With Document Intelligence, you can train standalone custom models or combine custom models to create composed models.
+
+* **Custom models**. Document Intelligence custom models enable you to analyze and extract data from forms and documents specific to your business. Custom models are trained for your distinct data and use cases.
+
+* **Composed models**. A composed model is created by taking a collection of custom models and assigning them to a single model that encompasses your form types. When a document is submitted to a composed model, the service performs a classification step to decide which custom model accurately represents the form presented for analysis.
+
+In this article, you learn how to create Document Intelligence custom and composed models using our [Document Intelligence Sample Labeling tool](../label-tool.md), [REST APIs](../how-to-guides/use-sdk-rest-api.md?view=doc-intel-2.1.0&preserve-view=true#train-a-custom-model), or [client-library SDKs](../how-to-guides/use-sdk-rest-api.md?view=doc-intel-2.1.0&preserve-view=true#train-a-custom-model).
+
+## Sample Labeling tool
+
+Try extracting data from custom forms using our Sample Labeling tool. You need the following resources:
+
+* An Azure subscription. You can [create one for free](https://azure.microsoft.com/free/cognitive-services/)
+
+* A [Form Recognizer instance (Document Intelligence forthcoming)](https://portal.azure.com/#create/Microsoft.CognitiveServicesFormRecognizer) in the Azure portal. You can use the free pricing tier (`F0`) to try the service. After your resource deploys, select **Go to resource** to get your key and endpoint.
+
+ :::image type="content" source="../media/containers/keys-and-endpoint.png" alt-text="Screenshot: keys and endpoint location in the Azure portal.":::
+
+> [!div class="nextstepaction"]
+> [Try it](https://fott-2-1.azurewebsites.net/projects/create)
+
+In the Document Intelligence UI:
+
+1. Select **Use Custom to train a model with labels and get key value pairs**.
+
+ :::image type="content" source="../media/label-tool/fott-use-custom.png" alt-text="Screenshot of the FOTT tool select custom model option.":::
+
+1. In the next window, select **New project**:
+
+ :::image type="content" source="../media/label-tool/fott-new-project.png" alt-text="Screenshot of the FOTT tool select new project option.":::
+
+## Create your models
+
+The steps for building, training, and using custom and composed models are as follows:
+
+* [**Assemble your training dataset**](#assemble-your-training-dataset)
+* [**Upload your training set to Azure blob storage**](#upload-your-training-dataset)
+* [**Train your custom model**](#train-your-custom-model)
+* [**Compose custom models**](#create-a-composed-model)
+* [**Analyze documents**](#analyze-documents-with-your-custom-or-composed-model)
+* [**Manage your custom models**](#manage-your-custom-models)
+
+## Assemble your training dataset
+
+Building a custom model begins with establishing your training dataset. You need a minimum of five completed forms of the same type for your sample dataset. They can be of different file types (jpg, png, pdf, tiff) and contain both text and handwriting. Your forms must follow the [input requirements](build-a-custom-model.md?view=doc-intel-2.1.0&preserve-view=true#custom-model-input-requirements) for Document Intelligence.
+
+## Upload your training dataset
+
+You need to [upload your training data](build-a-custom-model.md?view=doc-intel-2.1.0&preserve-view=true#upload-your-training-data)
+to an Azure blob storage container. If you don't know how to create an Azure storage account with a container, *see* [Azure Storage quickstart for Azure portal](../../../storage/blobs/storage-quickstart-blobs-portal.md). You can use the free pricing tier (F0) to try the service, and upgrade later to a paid tier for production.
+
+## Train your custom model
+
+You [train your model](build-a-custom-model.md?view=doc-intel-2.1.0&preserve-view=true#train-your-model) with labeled data sets. Labeled datasets rely on the prebuilt-layout API, but supplementary human input is included such as your specific labels and field locations. Start with at least five completed forms of the same type for your labeled training data.
+
+When you train with labeled data, the model uses supervised learning to extract values of interest, using the labeled forms you provide. Labeled data results in better-performing models and can produce models that work with complex forms or forms containing values without keys.
+
+Document Intelligence uses the [Layout](../concept-layout.md) API to learn the expected sizes and positions of typeface and handwritten text elements and extract tables. Then it uses user-specified labels to learn the key/value associations and tables in the documents. We recommend that you use five manually labeled forms of the same type (same structure) to get started when training a new model. Add more labeled data as needed to improve the model accuracy. Document Intelligence enables training a model to extract key value pairs and tables using supervised learning capabilities.
+
+[Get started with Train with labels](../label-tool.md)
+
+> [!VIDEO https://learn.microsoft.com/Shows/Docs-Azure/Azure-Form-Recognizer/player]
+
+## Create a composed model
+
+> [!NOTE]
+> **Model Compose is only available for custom models trained *with* labels.** Attempting to compose unlabeled models will produce an error.
+
+With the Model Compose operation, you can assign up to 200 trained custom models to a single model ID. When you call Analyze with the composed model ID, Document Intelligence classifies the form you submitted first, chooses the best matching assigned model, and then returns results for that model. This operation is useful when incoming forms may belong to one of several templates.
+
+Using the Document Intelligence Sample Labeling tool, the REST API, or the Client-library SDKs, follow the steps to set up a composed model:
+
+1. [**Gather your custom model IDs**](#gather-your-custom-model-ids)
+1. [**Compose your custom models**](#compose-your-custom-models)
+
+### Gather your custom model IDs
+
+Once the training process has successfully completed, your custom model is assigned a model ID. You can retrieve a model ID as follows:
+
+<!-- Applies to FOTT but labeled studio to eliminate tab grouping warning -->
+### [**Document Intelligence Sample Labeling tool**](#tab/studio)
+
+When you train models using the [**Document Intelligence Sample Labeling tool**](https://fott-2-1.azurewebsites.net/), the model ID is located in the Train Result window:
++
+### [**REST API**](#tab/rest)
+
+The [**REST API**](build-a-custom-model.md?view=doc-intel-2.1.0&preserve-view=true#train-your-model) returns a `201 (Success)` response with a **Location** header. The value of the last parameter in this header is the model ID for the newly trained model:
++
+### [**Client-library SDKs**](#tab/sdks)
+
+ The [**client-library SDKs**](build-a-custom-model.md?view=doc-intel-2.1.0&preserve-view=true#train-your-model) return a model object that can be queried to return the trained model ID:
+
+* C\# | [CustomFormModel Class](/dotnet/api/azure.ai.formrecognizer.training.customformmodel?view=azure-dotnet&preserve-view=true#properties "Azure SDK for .NET")
+
+* Java | [CustomFormModelInfo Class](/java/api/com.azure.ai.formrecognizer.training.models.customformmodelinfo?view=azure-java-stable&preserve-view=true#methods "Azure SDK for Java")
+
+* JavaScript | CustomFormModelInfo interface
+
+* Python | [CustomFormModelInfo Class](/python/api/azure-ai-formrecognizer/azure.ai.formrecognizer.customformmodelinfo?view=azure-python&preserve-view=true&branch=main#variables "Azure SDK for Python")
+++
+#### Compose your custom models
+
+After you've gathered your custom models corresponding to a single form type, you can compose them into a single model.
+
+<!-- Applies to FOTT but labeled studio to eliminate tab grouping warning -->
+### [**Document Intelligence Sample Labeling tool**](#tab/studio)
+
+The **Sample Labeling tool** enables you to quickly get started training models and composing them to a single model ID.
+
+After you have completed training, compose your models as follows:
+
+1. On the left rail menu, select the **Model Compose** icon (merging arrow).
+
+1. In the main window, select the models you wish to assign to a single model ID. Models with the arrows icon are already composed models.
+
+1. Choose the **Compose button** from the upper-left corner.
+
+1. In the pop-up window, name your newly composed model and select **Compose**.
+
+When the operation completes, your newly composed model appears in the list.
+
+ :::image type="content" source="../media/custom-model-compose.png" alt-text="Screenshot of the model compose window." lightbox="../media/custom-model-compose-expanded.png":::
+
+### [**REST API**](#tab/rest)
+
+Using the **REST API**, you can make a [**Compose Custom Model**](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-2022-08-31/operations/ComposeDocumentModel) request to create a single composed model from existing models. The request body requires a string array of your `modelIds` to compose and you can optionally define the `modelName`.
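+
+A hedged sketch of such a request is shown below. The endpoint, key, and model IDs are placeholders; the path assumes the v2.1 compose endpoint, and the body shape follows the description above.
+
+```bash
+# Sketch: compose several custom models trained with labels into a single model ID.
+curl -i -X POST "https://{YOUR_FORM_RECOGNIZER_ENDPOINT}/formrecognizer/v2.1/custom/models/compose" \
+-H "Content-Type: application/json" \
+-H "Ocp-Apim-Subscription-Key: {YOUR_FORM_RECOGNIZER_KEY}" \
+--data-ascii '{
+  "modelIds": ["{modelId1}", "{modelId2}"],
+  "modelName": "{composedModelName}"
+}'
+```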
+
+### [**Client-library SDKs**](#tab/sdks)
+
+Use the programming language code of your choice to create a composed model that is called with a single model ID. The following links are code samples that demonstrate how to create a composed model from existing custom models:
+
+* [**C#/.NET**](https://github.com/Azure/azure-sdk-for-net/blob/main/sdk/formrecognizer/Azure.AI.FormRecognizer/samples/Sample_ModelCompose.md).
+
+* [**Java**](https://github.com/Azure/azure-sdk-for-java/blob/main/sdk/formrecognizer/azure-ai-formrecognizer/src/samples/java/com/azure/ai/formrecognizer/administration/ComposeDocumentModel.java).
+
+* [**JavaScript**](https://github.com/Azure/azure-sdk-for-js/blob/main/sdk/formrecognizer/ai-form-recognizer/samples/v3/javascript/createComposedModel.js).
+
+* [**Python**](https://github.com/Azure/azure-sdk-for-python/blob/main/sdk/formrecognizer/azure-ai-formrecognizer/samples/v3.2/sample_compose_model.py)
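+
+For reference, a minimal Python sketch using the `azure-ai-formrecognizer` package might look like the following. The endpoint, key, and model IDs are placeholders:
+
+```python
+from azure.core.credentials import AzureKeyCredential
+from azure.ai.formrecognizer import FormTrainingClient
+
+# Placeholders: substitute your own endpoint, key, and trained model IDs.
+endpoint = "https://<your-resource>.cognitiveservices.azure.com/"
+key = "<your-key>"
+model_ids = ["<model-id-1>", "<model-id-2>"]
+
+client = FormTrainingClient(endpoint, AzureKeyCredential(key))
+
+# Compose the trained models into a single model with its own model ID.
+poller = client.begin_create_composed_model(model_ids, model_name="my-composed-model")
+composed_model = poller.result()
+
+print(composed_model.model_id)
+```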
++++
+## Analyze documents with your custom or composed model
+
+ The custom form **Analyze** operation requires you to provide the `modelID` in the call to Document Intelligence. You can provide a single custom model ID or a composed model ID for the `modelID` parameter.
++
+### [**Document Intelligence Sample Labeling tool**](#tab/studio)
+
+1. On the tool's left-pane menu, select the **Analyze icon** (light bulb).
+
+1. Choose a local file or image URL to analyze.
+
+1. Select the **Run Analysis** button.
+
+1. The tool applies tags in bounding boxes and reports the confidence percentage for each tag.
++
+### [**REST API**](#tab/rest)
+
+Using the REST API, you can make an [Analyze Document](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-2022-08-31/operations/AnalyzeDocument) request to analyze a document and extract key-value pairs and table data.
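+
+For illustration, a request against the linked `2022-08-31` API version might look like the following sketch. The endpoint, model ID, key, and document URL are placeholders. The call is asynchronous, so you poll the URL returned in the `Operation-Location` response header for results:
+
+```console
+curl -i -X POST "https://<your-resource>.cognitiveservices.azure.com/formrecognizer/documentModels/<your-model-id>:analyze?api-version=2022-08-31" \
+  -H "Content-Type: application/json" \
+  -H "Ocp-Apim-Subscription-Key: <your-key>" \
+  -d '{"urlSource": "<your-document-url>"}'
+```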
+
+### [**Client-library SDKs**](#tab/sdks)
+
+Use the programming language of your choice to analyze a form or document with a custom or composed model. You need your Document Intelligence endpoint, key, and model ID. A minimal Python sketch follows this list of samples:
+
+* [**C#/.NET**](https://github.com/Azure/azure-sdk-for-net/blob/main/sdk/formrecognizer/Azure.AI.FormRecognizer/samples/Sample_ModelCompose.md)
+
+* [**Java**](https://github.com/Azure/azure-sdk-for-javocumentFromUrl.java)
+
+* [**JavaScript**](https://github.com/Azure/azure-sdk-for-js/blob/main/sdk/formrecognizer/ai-form-recognizer/samples/v3/javascript/recognizeCustomForm.js)
+
+* [**Python**](https://github.com/Azure/azure-sdk-for-python/blob/main/sdk/formrecognizer/azure-ai-formrecognizer/samples/v3.1/sample_recognize_custom_forms.py)
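+
+As one example, a minimal Python sketch using the `azure-ai-formrecognizer` package might look like the following. The endpoint, key, model ID, and form URL are placeholders:
+
+```python
+from azure.core.credentials import AzureKeyCredential
+from azure.ai.formrecognizer import FormRecognizerClient
+
+# Placeholders: substitute your own endpoint, key, model ID, and form URL.
+endpoint = "https://<your-resource>.cognitiveservices.azure.com/"
+key = "<your-key>"
+model_id = "<your-custom-or-composed-model-id>"
+form_url = "<your-form-url>"
+
+client = FormRecognizerClient(endpoint, AzureKeyCredential(key))
+
+# Analyze a form at a URL with the custom or composed model.
+poller = client.begin_recognize_custom_forms_from_url(model_id=model_id, form_url=form_url)
+recognized_forms = poller.result()
+
+for form in recognized_forms:
+    print(f"Form type: {form.form_type}")
+    for name, field in form.fields.items():
+        print(f"{name}: {field.value} (confidence: {field.confidence})")
+```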
++++
+Test your newly trained models by [analyzing forms](build-a-custom-model.md?view=doc-intel-2.1.0&preserve-view=true#test-the-model) that weren't part of the training dataset. Depending on the reported accuracy, you may want to do further training to improve the model. You can continue further training to [improve results](../label-tool.md#improve-results).
+
+## Manage your custom models
+
+You can [manage your custom models](../how-to-guides/use-sdk-rest-api.md?view=doc-intel-2.1.0&preserve-view=true#manage-custom-models) throughout their lifecycle by viewing a [list of all custom models](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-2022-08-31/operations/GetModels) under your subscription, retrieving information about [a specific custom model](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-2022-08-31/operations/GetModel), and [deleting custom models](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-2022-08-31/operations/DeleteModel) from your account.
+
+Great! You've learned the steps to create custom and composed models and use them in your Document Intelligence projects and applications.
+
+## Next steps
+
+Learn more about the Document Intelligence client library by exploring our API reference documentation.
+
+> [!div class="nextstepaction"]
+> [Document Intelligence API reference](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-2022-08-31/operations/AnalyzeDocument)
+
ai-services Estimate Cost https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/document-intelligence/how-to-guides/estimate-cost.md
+
+ Title: "Check my usage and estimate the cost"
+
+description: Learn how to use Azure portal to check how many pages are analyzed and estimate the total price.
+++++ Last updated : 07/18/2023+
+monikerRange: '<=doc-intel-3.0.0'
+++
+# Check usage and estimate costs
++
+ In this guide, you'll learn how to use the metrics dashboard in the Azure portal to view how many pages were processed by Azure AI Document Intelligence. You'll also learn how to estimate the cost of processing those pages using the Azure pricing calculator.
+
+## Check how many pages were processed
+
+We'll start by looking at the page processing data for a given time period:
+
+1. Sign in to the [Azure portal](https://portal.azure.com).
+
+1. Navigate to your Document Intelligence resource.
+
+1. From the **Overview** page, select the **Monitoring** tab located near the middle of the page.
+
+ :::image type="content" source="../media/azure-portal-overview-menu.png" alt-text="Screenshot of the Azure portal overview page menu.":::
+
+1. Select a time range and you'll see the **Processed Pages** chart displayed.
+
+ :::image type="content" source="../media/azure-portal-overview-monitoring.png" alt-text="Screenshot that shows how many pages are processed on the resource overview page." lightbox="../media/azure-portal-processed-pages.png":::
+
+### Examine analyzed pages
+
+We can now take a deeper dive to see each model's analyzed pages (a command-line alternative is sketched after these steps):
+
+1. Under the **Monitoring** section, select **Metrics** from the left navigation menu.
+
+ :::image type="content" source="../media/azure-portal-monitoring-metrics.png" alt-text="Screenshot of the monitoring menu in the Azure portal.":::
+
+1. On the **Metrics** page, select **Add metric**.
+
+1. Select the Metric dropdown menu and, under **USAGE**, choose **Processed Pages**.
+
+ :::image type="content" source="../media/azure-portal-add-metric.png" alt-text="Screenshot that shows how to add new metrics on Azure portal.":::
+
+1. From the upper right corner, configure the time range and select the **Apply** button.
+
+ :::image type="content" source="../media/azure-portal-processed-pages-timeline.png" alt-text="Screenshot of time period options for metrics in the Azure portal." lightbox="../media/azure-portal-metrics-timeline.png":::
+
+1. Select **Apply splitting**.
+
+ :::image type="content" source="../media/azure-portal-apply-splitting.png" alt-text="Screenshot of the Apply splitting option in the Azure portal.":::
+
+1. Choose **FeatureName** from the **Values** dropdown menu.
+
+ :::image type="content" source="../media/azure-portal-splitting-on-feature-name.png" alt-text="Screenshot of the Apply splitting values dropdown menu.":::
+
+1. You'll see a breakdown of the pages analyzed by each model.
+
+ :::image type="content" source="../media/azure-portal-metrics-drill-down.png" alt-text="Screenshot demonstrating how to drill down to check analyzed pages by model." lightbox="../media/azure-portal-drill-down-closeup.png":::
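+
+If you prefer to script this check, the Azure CLI can query the same metric. The following is a sketch only; the metric name `ProcessedPages` and the dimension `FeatureName` are assumptions based on the portal labels above, so confirm them with `az monitor metrics list-definitions` first:
+
+```console
+# Confirm the exact metric and dimension names available on your resource.
+az monitor metrics list-definitions --resource "<your-document-intelligence-resource-id>"
+
+# Query daily totals for the last 30 days, split by feature (assumed names shown).
+az monitor metrics list \
+  --resource "<your-document-intelligence-resource-id>" \
+  --metric "ProcessedPages" \
+  --aggregation Total \
+  --interval P1D \
+  --offset 30d \
+  --filter "FeatureName eq '*'"
+```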
+
+## Estimate price
+
+Now that we have the page processed data from the portal, we can use the Azure pricing calculator to estimate the cost:
+
+1. Sign in to [Azure pricing calculator](https://azure.microsoft.com/pricing/calculator/) with the same credentials you use for the Azure portal.
+
+ > Press Ctrl + right-click to open in a new tab!
+
+1. Search for **Azure AI Document Intelligence** in the **Search products** search box.
+
+1. Select **Azure AI Document Intelligence** and you'll see that it has been added to the page.
+
+1. Under **Your Estimate**, select the relevant **Region**, **Payment Option** and **Instance** for your Document Intelligence resource. For more information, *see* [Azure AI Document Intelligence pricing options](https://azure.microsoft.com/pricing/details/form-recognizer/#pricing).
+
+1. Enter the number of pages processed from the Azure portal metrics dashboard. You can find that data by following the steps in the [Check how many pages were processed](#check-how-many-pages-were-processed) or [Examine analyzed pages](#examine-analyzed-pages) sections above.
+
+1. The estimated price is on the right, after the equal (**=**) sign.
+
+ :::image type="content" source="../media/azure-portal-pricing.png" alt-text="Screenshot of how to estimate the price based on processed pages.":::
+
+That's it. You now know where to find how many pages you have processed using Document Intelligence and how to estimate the cost.
+
+## Next steps
+
+> [!div class="nextstepaction"]
+>
+> [Learn more about Document Intelligence service quotas and limits](../service-limits.md)
ai-services Project Share Custom Models https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/document-intelligence/how-to-guides/project-share-custom-models.md
+
+ Title: "Share custom model projects using Document Intelligence Studio"
+
+description: Learn how to share custom model projects using Document Intelligence Studio.
+++++ Last updated : 07/18/2023+
+monikerRange: 'doc-intel-3.0.0'
+++
+# Project sharing using Document Intelligence Studio
+
+Document Intelligence Studio is an online tool to visually explore, understand, train, and integrate features from the Document Intelligence service into your applications. The Studio supports project sharing for custom extraction models: you can share a project easily via a project token, and the same token can be used to import the project.
+
+## Prerequisite
+
+To share and import your custom extraction projects seamlessly, both users (the user who shares and the user who imports) need an active [**Azure account**](https://azure.microsoft.com/free/cognitive-services/). If you don't have one, you can [**create a free account**](https://azure.microsoft.com/free/). Both users also need to configure permissions to grant access to the Document Intelligence and storage resources.
+
+Generally, the requirements for project sharing are met when you create a custom model project. However, if the project sharing feature doesn't work, check the following sections.
+
+## Granted access and permissions
+
+ > [!IMPORTANT]
+ > Custom model projects can be imported only if you have access to the storage account that's associated with the project you're trying to import. Check your storage account permissions before you start to share or import projects with others.
+
+### Virtual networks and firewalls
+
+If virtual network (VNet) or firewall restrictions are enabled on your storage account, the project can't be shared. To share the project, make sure those restrictions are turned off.
+
+A workaround is to manually create a project using the same settings as the project being shared.
++
+## Share a custom extraction model with Document Intelligence Studio
+
+Follow these steps to share your project using Document Intelligence Studio:
+
+1. Start by navigating to the [Document Intelligence Studio](https://formrecognizer.appliedai.azure.com/studio).
+
+1. In the Studio, select the **Custom extraction models** tile, under the custom models section.
+
+ :::image type="content" source="../media/how-to/studio-custom-extraction.png" alt-text="Screenshot showing how to select a custom extraction model in the Studio.":::
+
+1. On the custom extraction models page, select the desired model to share and then select the **Share** button.
+
+ :::image type="content" source="../media/how-to/studio-project-share.png" alt-text="Screenshot showing how to select the desired model and select the share option.":::
+
+1. On the share project dialog, copy the project token for the selected project.
++
+## Import custom extraction model with Document Intelligence Studio
+
+Follow these steps to import a project using Document Intelligence Studio.
+
+1. Start by navigating to the [Document Intelligence Studio](https://formrecognizer.appliedai.azure.com/studio).
+
+1. In the Studio, select the **Custom extraction models** tile, under the custom models section.
+
+ :::image type="content" source="../media/how-to/studio-custom-extraction.png" alt-text="Screenshot: Select custom extraction model in the Studio.":::
+
+1. On the custom extraction models page, select the **Import** button.
+
+ :::image type="content" source="../media/how-to/studio-project-import.png" alt-text="Screenshot: Select import within custom extraction model page.":::
+
+1. On the import project dialog, paste the project token that was shared with you, and then select **Import**.
++
+## Next steps
+
+> [!div class="nextstepaction"]
+> [Back up and recover models](../disaster-recovery.md)
ai-services Use Sdk Rest Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/document-intelligence/how-to-guides/use-sdk-rest-api.md
+
+ Title: "Use Document Intelligence client library SDKs or REST API "
+
+description: How to use Document Intelligence SDKs or REST API and create apps to extract key data from documents.
++++++ Last updated : 07/18/2023+
+zone_pivot_groups: programming-languages-set-formre
+monikerRange: '<=doc-intel-3.0.0'
++
+<!-- markdownlint-disable MD051 -->
+
+# Use Document Intelligence models
++
+ In this guide, you learn how to add Document Intelligence models to your applications and workflows using a programming language SDK of your choice or the REST API. Azure AI Document Intelligence is a cloud-based Azure AI service that uses machine learning to extract key text and structure elements from documents. We recommend that you use the free service as you're learning the technology. Remember that the number of free pages is limited to 500 per month.
+
+Choose from the following Document Intelligence models to analyze and extract data and values from forms and documents (a minimal Python example follows the list):
+
+> [!div class="checklist"]
+>
+> * The [prebuilt-read](../concept-read.md) model is at the core of all Document Intelligence models and can detect lines, words, locations, and languages. Layout, general document, prebuilt, and custom models all use the read model as a foundation for extracting texts from documents.
+>
+> * The [prebuilt-layout](../concept-layout.md) model extracts text and text locations, tables, selection marks, and structure information from documents and images.
+>
+> * The [prebuilt-document](../concept-general-document.md) model extracts key-value pairs, tables, and selection marks from documents and can be used as an alternative to training a custom model without labels.
+>
+> * The [prebuilt-healthInsuranceCard.us](../concept-insurance-card.md) model extracts key information from US health insurance cards.
+>
+> * The [prebuilt-tax.us.w2](../concept-w2.md) model extracts information reported on US Internal Revenue Service (IRS) tax forms.
+>
+> * The [prebuilt-invoice](../concept-invoice.md) model extracts key fields and line items from sales invoices in various formats and quality including phone-captured images, scanned documents, and digital PDFs.
+>
+> * The [prebuilt-receipt](../concept-receipt.md) model extracts key information from printed and handwritten sales receipts.
+>
+> * The [prebuilt-idDocument](../concept-id-document.md) model extracts key information from US driver's licenses, international passport biographical pages, US state IDs, social security cards, and permanent resident (green) cards.
+>
+> * The [prebuilt-businessCard](../concept-business-card.md) model extracts key information from business card images.
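+
+As an illustration, a minimal Python sketch that calls one of these prebuilt models (here `prebuilt-invoice`) with the `azure-ai-formrecognizer` package might look like the following. The endpoint, key, and document URL are placeholders:
+
+```python
+from azure.core.credentials import AzureKeyCredential
+from azure.ai.formrecognizer import DocumentAnalysisClient
+
+# Placeholders: substitute your own endpoint, key, and document URL.
+endpoint = "https://<your-resource>.cognitiveservices.azure.com/"
+key = "<your-key>"
+invoice_url = "<your-invoice-url>"
+
+client = DocumentAnalysisClient(endpoint, AzureKeyCredential(key))
+
+# Analyze a sales invoice with the prebuilt-invoice model.
+poller = client.begin_analyze_document_from_url("prebuilt-invoice", invoice_url)
+result = poller.result()
+
+for document in result.documents:
+    for name, field in document.fields.items():
+        print(f"{name}: {field.content} (confidence: {field.confidence})")
+```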
++++++++++++++++++
+## Next steps
+
+Congratulations! You've learned to use Document Intelligence models to analyze various documents in different ways. Next, explore the Document Intelligence Studio and reference documentation.
+>[!div class="nextstepaction"]
+> [**Try the Document Intelligence Studio**](https://formrecognizer.appliedai.azure.com/studio)
+
+> [!div class="nextstepaction"]
+> [**Explore the Document Intelligence REST API v3.0**](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-2022-08-31/operations/AnalyzeDocument)
++
+In this how-to guide, you learn how to add Document Intelligence to your applications and workflows using an SDK, in a programming language of your choice, or the REST API. Azure AI Document Intelligence is a cloud-based Azure AI service that uses machine learning to extract key-value pairs, text, and tables from your documents. We recommend that you use the free service when you're learning the technology. Remember that the number of free pages is limited to 500 per month.
+
+You use the following APIs to extract structured data from forms and documents:
+
+* [Authenticate the client](#authenticate-the-client)
+* [Analyze Layout](#analyze-layout)
+* [Analyze receipts](#analyze-receipts)
+* [Analyze business cards](#analyze-business-cards)
+* [Analyze invoices](#analyze-invoices)
+* [Analyze ID documents](#analyze-id-documents)
+* [Train a custom model](#train-a-custom-model)
+* [Analyze forms with a custom model](#analyze-forms-with-a-custom-model)
+* [Manage custom models](#manage-custom-models)
+++++++++++++++
ai-services Label Tool https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/document-intelligence/label-tool.md
+
+ Title: "How-to: Analyze documents, Label forms, train a model, and analyze forms with Document Intelligence"
+
+description: How to use the Document Intelligence sample tool to analyze documents, invoices, receipts etc. Label and create a custom model to extract text, tables, selection marks, structure and key-value pairs from documents.
+++++ Last updated : 07/18/2023+
+monikerRange: 'doc-intel-2.1.0'
++
+<!-- markdownlint-disable MD001 -->
+<!-- markdownlint-disable MD024 -->
+<!-- markdownlint-disable MD033 -->
+<!-- markdownlint-disable MD034 -->
+# Train a custom model using the Sample Labeling tool
+
+**This article applies to:** ![Document Intelligence v2.1 checkmark](media/yes-icon.png) **Document Intelligence v2.1**.
+
+>[!TIP]
+>
+> * For an enhanced experience and advanced model quality, try the [Document Intelligence v3.0 Studio](https://formrecognizer.appliedai.azure.com/studio).
+> * The v3.0 Studio supports any model trained with v2.1 labeled data.
+> * You can refer to the [API migration guide](v3-migration-guide.md) for detailed information about migrating from v2.1 to v3.0.
+> * *See* our [**REST API**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-3.0.0&preserve-view=true) or [**C#**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-3.0.0&preserve-view=true), [**Java**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-3.0.0&preserve-view=true), [**JavaScript**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-3.0.0&preserve-view=true), or [Python](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-3.0.0&preserve-view=true) SDK quickstarts to get started with the V3.0.
+
+In this article, you use the Document Intelligence REST API with the Sample Labeling tool to train a custom model with manually labeled data.
+
+> [!VIDEO https://learn.microsoft.com/Shows/Docs-Azure/Azure-Form-Recognizer/player]
+
+## Prerequisites
+
+ You need the following resources to complete this project:
+
+* Azure subscription - [Create one for free](https://azure.microsoft.com/free/cognitive-services)
+* Once you have your Azure subscription, <a href="https://portal.azure.com/#create/Microsoft.CognitiveServicesFormRecognizer" title="Create a Document Intelligence resource" target="_blank">create a Document Intelligence resource </a> in the Azure portal to get your key and endpoint. After it deploys, select **Go to resource**.
+ * You need the key and endpoint from the resource you create to connect your application to the Document Intelligence API. You paste your key and endpoint into the code later in the quickstart.
+ * You can use the free pricing tier (`F0`) to try the service, and upgrade later to a paid tier for production.
+* A set of at least six forms of the same type. You use this data to train the model and test a form. You can use a [sample data set](https://go.microsoft.com/fwlink/?linkid=2090451) (download and extract *sample_data.zip*) for this quickstart. Upload the training files to the root of a blob storage container in a standard-performance-tier Azure Storage account.
+
+## Create a Document Intelligence resource
++
+## Try it out
+
+Try out the [**Document Intelligence Sample Labeling tool**](https://fott-2-1.azurewebsites.net/) online:
+
+> [!div class="nextstepaction"]
+> [Try Prebuilt Models](https://fott-2-1.azurewebsites.net/)
+
+You need an Azure subscription ([create one for free](https://azure.microsoft.com/free/cognitive-services)) and a [Document Intelligence resource](https://portal.azure.com/#create/Microsoft.CognitiveServicesFormRecognizer) endpoint and key to try out the Document Intelligence service.
+
+## Set up the Sample Labeling tool
+
+> [!NOTE]
+>
+> If your storage data is behind a VNet or firewall, you must deploy the **Document Intelligence Sample Labeling tool** behind your VNet or firewall and grant access by creating a [system-assigned managed identity](managed-identities.md "Azure managed identity is a service principal that creates an Azure Active Directory (Azure AD) identity and specific permissions for Azure managed resources").
+
+You use the Docker engine to run the Sample Labeling tool. Follow these steps to set up the Docker container. For a primer on Docker and container basics, see the [Docker overview](https://docs.docker.com/engine/docker-overview/).
+
+> [!TIP]
+> The OCR Form Labeling Tool is also available as an open source project on GitHub. The tool is a TypeScript web application built using React + Redux. To learn more or contribute, see the [OCR Form Labeling Tool](https://github.com/microsoft/OCR-Form-Tools/blob/master/README.md#run-as-web-application) repo. To try out the tool online, go to the [Document Intelligence Sample Labeling tool website](https://fott-2-1.azurewebsites.net/).
+
+1. First, install Docker on a host computer. This guide shows you how to use your local computer as a host. If you want to use a Docker hosting service in Azure, see the [Deploy the Sample Labeling tool](deploy-label-tool.md) how-to guide.
+
+ The host computer must meet the following hardware requirements:
+
+ | Container | Minimum | Recommended|
+ |:--|:--|:--|
+ |Sample Labeling tool|`2` core, 4-GB memory|`4` core, 8-GB memory|
+
+ Install Docker on your machine by following the appropriate instructions for your operating system:
+
+ * [Windows](https://docs.docker.com/docker-for-windows/)
+ * [macOS](https://docs.docker.com/docker-for-mac/)
+ * [Linux](https://docs.docker.com/install/)
+
+1. Get the Sample Labeling tool container with the `docker pull` command.
+
+ ```console
+ docker pull mcr.microsoft.com/azure-cognitive-services/custom-form/labeltool:latest-2.1
+ ```
+
+1. Now you're ready to run the container with `docker run`.
+
+ ```console
+ docker run -it -p 3000:80 mcr.microsoft.com/azure-cognitive-services/custom-form/labeltool:latest-2.1 eula=accept
+ ```
+
+ This command makes the sample-labeling tool available through a web browser. Go to `http://localhost:3000`.
+
+> [!NOTE]
+> You can also label documents and train models using the Document Intelligence REST API. To train and Analyze with the REST API, see [Train with labels using the REST API and Python](https://github.com/Azure-Samples/cognitive-services-quickstart-code/blob/master/python/FormRecognizer/rest/python-labeled-data.md).
+
+## Set up input data
+
+First, make sure all the training documents are of the same format. If you have forms in multiple formats, organize them into subfolders based on common format. When you train, you need to direct the API to a subfolder.
+
+### Configure cross-domain resource sharing (CORS)
+
+Enable CORS on your storage account. Select your storage account in the Azure portal and then choose the **CORS** tab on the left pane. On the bottom line, fill in the following values, and then select **Save** at the top. An equivalent Azure CLI command is sketched after the screenshot.
+
+* Allowed origins = *
+* Allowed methods = \[select all\]
+* Allowed headers = *
+* Exposed headers = *
+* Max age = 200
+
+> [!div class="mx-imgBorder"]
+> ![CORS setup in the Azure portal](media/label-tool/cors-setup.png)
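+
+If you prefer to configure CORS from the command line, the following Azure CLI sketch applies the same values shown above (the storage account name is a placeholder):
+
+```console
+az storage cors add \
+  --account-name "<your-storage-account>" \
+  --services b \
+  --methods DELETE GET HEAD MERGE OPTIONS POST PUT \
+  --origins "*" \
+  --allowed-headers "*" \
+  --exposed-headers "*" \
+  --max-age 200
+```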
+
+## Connect to the Sample Labeling tool
+
+ The Sample Labeling tool connects to a source (your original uploaded forms) and a target (created labels and output data).
+
+Connections can be set up and shared across projects. They use an extensible provider model, so you can easily add new source/target providers.
+
+To create a new connection, select the **New Connections** (plug) icon, in the left navigation bar.
+
+Fill in the fields with the following values:
+
+* **Display Name** - The connection display name.
+* **Description** - Your project description.
+* **SAS URL** - The shared access signature (SAS) URL of your Azure Blob Storage container. [!INCLUDE [get SAS URL](includes/sas-instructions.md)]
+
+ :::image type="content" source="media/quickstarts/get-sas-url.png" alt-text="SAS URL retrieval":::
++
+## Create a new project
+
+In the Sample Labeling tool, projects store your configurations and settings. Create a new project and fill in the fields with the following values:
+
+* **Display Name** - the project display name
+* **Security Token** - Some project settings can include sensitive values, such as keys or other shared secrets. Each project generates a security token that can be used to encrypt/decrypt sensitive project settings. You can find security tokens in the Application Settings by selecting the gear icon at the bottom of the left navigation bar.
+* **Source Connection** - The Azure Blob Storage connection you created in the previous step that you would like to use for this project.
+* **Folder Path** - Optional - If your source forms are located in a folder on the blob container, specify the folder name here
+* **Document Intelligence Service Uri** - Your Document Intelligence endpoint URL.
+* **Key** - Your Document Intelligence key.
+* **Description** - Optional - Project description
++
+## Label your forms
+
+When you create or open a project, the main tag editor window opens. The tag editor consists of three parts:
+
+* A resizable preview pane that contains a scrollable list of forms from the source connection.
+* The main editor pane that allows you to apply tags.
+* The tags editor pane that allows users to modify, lock, reorder, and delete tags.
+
+### Identify text and tables
+
+Select **Run Layout on unvisited documents** on the left pane to get the text and table layout information for each document. The labeling tool draws bounding boxes around each text element.
+
+The labeling tool also shows which tables have been automatically extracted. Select the table/grid icon on the left-hand side of the document to see the extracted table. In this quickstart, because the table content is automatically extracted, we don't label the table content, but rather rely on the automated extraction.
++
+In v2.1, if your training document doesn't have a value filled in, you can draw a box where the value should be. Use **Draw region** on the upper left corner of the window to make the region taggable.
+
+### Apply labels to text
+
+Next, you create tags (labels) and apply them to the text elements that you want the model to analyze.
+
+1. First, use the tags editor pane to create the tags you'd like to identify.
+ 1. Select **+** to create a new tag.
+ 1. Enter the tag name.
+ 1. Press Enter to save the tag.
+1. In the main editor, select words from the highlighted text elements or a region you drew in.
+1. Select the tag you want to apply, or press the corresponding keyboard key. The number keys are assigned as hotkeys for the first 10 tags. You can reorder your tags using the up and down arrow icons in the tag editor pane.
+1. Follow these steps to label at least five of your forms.
+ > [!Tip]
+ > Keep the following tips in mind when you're labeling your forms:
+ >
+ > * You can only apply one tag to each selected text element.
+ > * Each tag can only be applied once per page. If a value appears multiple times on the same form, create different tags for each instance. For example: "invoice# 1", "invoice# 2" and so on.
+ > * Tags cannot span across pages.
+ > * Label values as they appear on the form; don't try to split a value into two parts with two different tags. For example, an address field should be labeled with a single tag even if it spans multiple lines.
+ > * Don't include keys in your tagged fields&mdash;only the values.
+ > * Table data should be detected automatically and will be available in the final output JSON file. However, if the model fails to detect all of your table data, you can manually tag these fields as well. Tag each cell in the table with a different label. If your forms have tables with varying numbers of rows, make sure you tag at least one form with the largest possible table.
+ > * Use the buttons to the right of the **+** to search, rename, reorder, and delete your tags.
+ > * To remove an applied tag without deleting the tag itself, select the tagged rectangle on the document view and press the delete key.
+ >
++
+### Specify tag value types
+
+You can set the expected data type for each tag. Open the context menu to the right of a tag and select a type from the menu. This feature allows the detection algorithm to make assumptions that improve the text-detection accuracy. It also ensures that the detected values are returned in a standardized format in the final JSON output. Value type information is saved in the **fields.json** file in the same path as your label files.
+
+> [!div class="mx-imgBorder"]
+> ![Value type selection with Sample Labeling tool](media/whats-new/value-type.png)
+
+The following value types and variations are currently supported:
+
+* `string`
+ * default, `no-whitespaces`, `alphanumeric`
+
+* `number`
+ * default, `currency`
+ * Formatted as a Floating point value.
+ * Example: 1234.98 on the document is formatted into 1234.98 on the output
+
+* `date`
+ * default, `dmy`, `mdy`, `ymd`
+
+* `time`
+* `integer`
+ * Formatted as an integer value.
+ * Example: 1234.98 on the document is formatted into 123498 on the output.
+* `selectionMark`
+
+> [!NOTE]
+> See these rules for date formatting:
+>
+> You must specify a format (`dmy`, `mdy`, `ymd`) for date formatting to work.
+>
+> The following characters can be used as date delimiters: `, - / . \`. Whitespace cannot be used as a delimiter. For example:
+>
+> * 01,01,2020
+> * 01-01-2020
+> * 01/01/2020
+>
+> The day and month can each be written as one or two digits, and the year can be two or four digits:
+>
+> * 1-1-2020
+> * 1-01-20
+>
+> If a date string has eight digits, the delimiter is optional:
+>
+> * 01012020
+> * 01 01 2020
+>
+> The month can also be written as its full or short name. If the name is used, delimiter characters are optional. However, this format may be recognized less accurately than others.
+>
+> * 01/Jan/2020
+> * 01Jan2020
+> * 01 Jan 2020
+
+### Label tables (v2.1 only)
+
+At times, your data might lend itself better to being labeled as a table rather than key-value pairs. In this case, you can create a table tag by selecting **Add a new table tag**. Specify whether the table has a fixed number of rows or variable number of rows depending on the document and define the schema.
++
+Once you've defined your table tag, tag the cell values.
++
+## Train a custom model
+
+Choose the Train icon on the left pane to open the Training page. Then select the **Train** button to begin training the model. Once the training process completes, you see the following information:
+
+* **Model ID** - The ID of the model that was created and trained. Each training call creates a new model with its own ID. Copy this string to a secure location; you need it if you want to do prediction calls through the [REST API](~/articles/ai-services/document-intelligence/how-to-guides/use-sdk-rest-api.md?view=doc-intel-2.1.0&preserve-view=true?pivots=programming-language-rest-api&tabs=preview%2cv2-1) or [client library guide](~/articles/ai-services/document-intelligence/how-to-guides/use-sdk-rest-api.md?view=doc-intel-2.1.0&preserve-view=true).
+* **Average Accuracy** - The model's average accuracy. You can improve model accuracy by adding and labeling more forms, then retraining to create a new model. We recommend starting by labeling five forms and adding more forms as needed.
+* The list of tags, and the estimated accuracy per tag.
+++
+After training finishes, examine the **Average Accuracy** value. If it's low, you should add more input documents and repeat the labeling steps. The documents you've already labeled remain in the project index.
+
+> [!TIP]
+> You can also run the training process with a REST API call. To learn how to do this, see [Train with labels using Python](https://github.com/Azure-Samples/cognitive-services-quickstart-code/blob/master/python/FormRecognizer/rest/python-labeled-data.md).
+
+## Compose trained models
+
+With Model Compose, you can compose up to 200 models into a single model ID. When you call Analyze with the composed `modelID`, Document Intelligence first classifies the form you submitted, chooses the best-matching model, and then returns results for that model. This operation is useful when incoming forms belong to one of several templates.
+
+* To compose models in the Sample Labeling tool, select the Model Compose (merging arrow) icon from the navigation bar.
+* Select the models you wish to compose together. Models with the arrows icon are already composed models.
+* Choose the **Compose** button. In the pop-up, name your new composed model and select **Compose**.
+* When the operation completes, your newly composed model should appear in the list.
++
+## Analyze a form
+
+Select the Analyze icon from the navigation bar to test your model. Select source *Local file*. Browse for a file and select a file from the sample dataset that you unzipped in the test folder. Then choose the **Run analysis** button to get key/value pair, text, and table predictions for the form. The tool applies tags in bounding boxes and reports the confidence of each tag.
++
+> [!TIP]
+> You can also run the Analyze API with a REST call. To learn how to do this, see [Train with labels using Python](https://github.com/Azure-Samples/cognitive-services-quickstart-code/blob/master/python/FormRecognizer/rest/python-labeled-data.md).
+
+## Improve results
+
+Depending on the reported accuracy, you may want to do further training to improve the model. After you've done a prediction, examine the confidence values for each of the applied tags. If the average accuracy training value is high, but the confidence scores are low (or the results are inaccurate), add the prediction file to the training set, label it, and train again.
+
+The reported average accuracy, confidence scores, and actual accuracy can be inconsistent when the analyzed documents differ from documents used in training. Keep in mind that some documents look similar when viewed by people but can look distinct to the AI model. For example, you might train with a form type that has two variations, where the training set consists of 20% variation A and 80% variation B. During prediction, the confidence scores for documents of variation A are likely to be lower.
+
+## Save a project and resume later
+
+To resume your project at another time or in another browser, you need to save your project's security token and reenter it later.
+
+### Get project credentials
+
+Go to your project settings page (slider icon) and take note of the security token name. Then go to your application settings (gear icon), which shows all of the security tokens in your current browser instance. Find your project's security token and copy its name and key value to a secure location.
+
+### Restore project credentials
+
+When you want to resume your project, you first need to create a connection to the same blob storage container. To do so, repeat the steps in the [Connect to the Sample Labeling tool](#connect-to-the-sample-labeling-tool) section. Then, go to the application settings page (gear icon) and see if your project's security token is there. If it isn't, add a new security token and copy over your token name and key from the previous step. Select **Save** to retain your settings.
+
+### Resume a project
+
+Finally, go to the main page (house icon) and select **Open Cloud Project**. Then select the blob storage connection, and select your project's `.fott` file. The application loads all of the project's settings because it has the security token.
+
+## Next steps
+
+In this quickstart, you've learned how to use the Document Intelligence Sample Labeling tool to train a model with manually labeled data. If you'd like to build your own utility to label training data, use the REST APIs that deal with labeled data training.
+
+> [!div class="nextstepaction"]
+> [Train with labels using Python](https://github.com/Azure-Samples/cognitive-services-quickstart-code/blob/master/python/FormRecognizer/rest/python-labeled-data.md)
+
+* [What is Document Intelligence?](overview.md)
+* [Document Intelligence quickstart](~/articles/ai-services/document-intelligence/how-to-guides/use-sdk-rest-api.md?view=doc-intel-2.1.0&preserve-view=true)
ai-services Language Support https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/document-intelligence/language-support.md
+
+ Title: Language support - Document Intelligence
+
+description: Learn more about the human languages that are available with Document Intelligence.
+++++ Last updated : 07/18/2023
+monikerRange: '<=doc-intel-3.0.0'
++
+<!-- markdownlint-disable MD036 -->
+
+# Language support
+++
+This article covers the supported languages for text and field **extraction (by feature)** and **[detection (Read only)](#detected-languages-read-api)**. Both groups are mutually exclusive.
+
+<!-- markdownlint-disable MD001 -->
+<!-- markdownlint-disable MD024 -->
+
+## Read, layout, and custom form (template) model
+
+The following lists include the currently GA languages in the most recent v3.0 version for Read, Layout, and Custom template (form) models.
+
+> [!NOTE]
+> **Language code optional**
+>
+> Document Intelligence's deep learning based universal models extract all multi-lingual text in your documents, including text lines with mixed languages, and do not require specifying a language code. Do not provide the language code as the parameter unless you are sure about the language and want to force the service to apply only the relevant model. Otherwise, the service may return incomplete and incorrect text.
+
+To use the v3.0-supported languages, refer to the [v3.0 REST API migration guide](v3-migration-guide.md) to understand the differences from the v2.1 GA API and explore the [v3.0 SDK and REST API quickstarts](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-3.0.0&preserve-view=true).
+
+### Handwritten text
+
+The following table lists the supported languages for extracting handwritten texts.
+
+|Language| Language code (optional) | Language| Language code (optional) |
+|:--|:-:|:--|:-:|
+|English|`en`|Japanese |`ja`|
+|Chinese Simplified |`zh-Hans`|Korean |`ko`|
+|French |`fr`|Portuguese |`pt`|
+|German |`de`|Spanish |`es`|
+|Italian |`it`|
+
+### Print text
+
+The following table lists the supported languages for print text by the most recent GA version.
+
+|Language| Code (optional) |Language| Code (optional) |
+|:--|:-:|:--|:-:|
+|Afrikaans|`af`|Khasi | `kha` |
+|Albanian |`sq`|K'iche' | `quc` |
+|Angika (Devanagari) | `anp`| Korean | `ko` |
+|Arabic | `ar` | Korku | `kfq`|
+|Asturian |`ast`| Koryak | `kpy`|
+|Awadhi-Hindi (Devanagari) | `awa`| Kosraean | `kos`|
+|Azerbaijani (Latin) | `az`| Kumyk (Cyrillic) | `kum`|
+|Bagheli | `bfy`| Kurdish (Arabic) | `ku-arab`|
+|Basque |`eu`| Kurdish (Latin) | `ku-latn`
+|Belarusian (Cyrillic) | `be`, `be-cyrl`|Kurukh (Devanagari) | `kru`|
+|Belarusian (Latin) | `be`, `be-latn`| Kyrgyz (Cyrillic) | `ky`
+|Bhojpuri-Hindi (Devanagari) | `bho`| Lakota | `lkt` |
+|Bislama |`bi`| Latin | `la` |
+|Bodo (Devanagari) | `brx`| Lithuanian | `lt` |
+|Bosnian (Latin) | `bs`| Lower Sorbian | `dsb` |
+|Brajbha | `bra`|Lule Sami | `smj`|
+|Breton |`br`|Luxembourgish | `lb` |
+|Bulgarian | `bg`|Mahasu Pahari (Devanagari) | `bfz`|
+|Bundeli | `bns`|Malay (Latin) | `ms` |
+|Buryat (Cyrillic) | `bua`|Maltese | `mt`
+|Catalan |`ca`|Malto (Devanagari) | `kmj`
+|Cebuano |`ceb`|Manx | `gv` |
+|Chamling | `rab`|Maori | `mi`|
+|Chamorro |`ch`|Marathi | `mr`|
+|Chhattisgarhi (Devanagari)| `hne`| Mongolian (Cyrillic) | `mn`|
+|Chinese Simplified | `zh-Hans`|Montenegrin (Cyrillic) | `cnr-cyrl`|
+|Chinese Traditional | `zh-Hant`|Montenegrin (Latin) | `cnr-latn`|
+|Cornish |`kw`|Neapolitan | `nap` |
+|Corsican |`co`|Nepali | `ne`|
+|Crimean Tatar (Latin)|`crh`|Niuean | `niu`|
+|Croatian | `hr`|Nogay | `nog`
+|Czech | `cs` |Northern Sami (Latin) | `sme`|
+|Danish | `da` |Norwegian | `no` |
+|Dari | `prs`|Occitan | `oc` |
+|Dhimal (Devanagari) | `dhi`| Ossetic | `os`|
+|Dogri (Devanagari) | `doi`|Pashto | `ps`|
+|Dutch | `nl` |Persian | `fa`|
+|English | `en` |Polish | `pl` |
+|Erzya (Cyrillic) | `myv`|Portuguese | `pt` |
+|Estonian |`et`|Punjabi (Arabic) | `pa`|
+|Faroese | `fo`|Ripuarian | `ksh`|
+|Fijian |`fj`|Romanian | `ro` |
+|Filipino |`fil`|Romansh | `rm` |
+|Finnish | `fi` | Russian | `ru` |
+|French | `fr` |Sadri (Devanagari) | `sck` |
+|Friulian | `fur` | Samoan (Latin) | `sm`
+|Gagauz (Latin) | `gag`|Sanskrit (Devanagari) | `sa`|
+|Galician | `gl` |Santali (Devanagari) | `sat` |
+|German | `de` | Scots | `sco` |
+|Gilbertese | `gil` | Scottish Gaelic | `gd` |
+|Gondi (Devanagari) | `gon`| Serbian (Latin) | `sr`, `sr-latn`|
+|Greenlandic | `kl` | Sherpa (Devanagari) | `xsr` |
+|Gurung (Devanagari) | `gvr`| Sirmauri (Devanagari) | `srx`|
+|Haitian Creole | `ht` | Skolt Sami | `sms` |
+|Halbi (Devanagari) | `hlb`| Slovak | `sk`|
+|Hani | `hni` | Slovenian | `sl` |
+|Haryanvi | `bgc`|Somali (Arabic) | `so`|
+|Hawaiian | `haw`|Southern Sami | `sma`
+|Hindi | `hi`|Spanish | `es` |
+|Hmong Daw (Latin)| `mww` | Swahili (Latin) | `sw` |
+|Ho (Devanagari) | `hoc`|Swedish | `sv` |
+|Hungarian | `hu` |Tajik (Cyrillic) | `tg` |
+|Icelandic | `is`| Tatar (Latin) | `tt` |
+|Inari Sami | `smn`|Tetum | `tet` |
+|Indonesian | `id` | Thangmi | `thf` |
+|Interlingua | `ia` |Tongan | `to`|
+|Inuktitut (Latin) | `iu` | Turkish | `tr` |
+|Irish | `ga` |Turkmen (Latin) | `tk`|
+|Italian | `it` |Tuvan | `tyv`|
+|Japanese | `ja` |Upper Sorbian | `hsb` |
+|Jaunsari (Devanagari) | `Jns`|Urdu | `ur`|
+|Javanese | `jv` |Uyghur (Arabic) | `ug`|
+|Kabuverdianu | `kea` |Uzbek (Arabic) | `uz-arab`|
+|Kachin (Latin) | `kac` |Uzbek (Cyrillic) | `uz-cyrl`|
+|Kangri (Devanagari) | `xnr`|Uzbek (Latin) | `uz` |
+|Karachay-Balkar | `krc`|Volapük | `vo` |
+|Kara-Kalpak (Cyrillic) | `kaa-cyrl`|Walser | `wae` |
+|Kara-Kalpak (Latin) | `kaa` |Welsh | `cy` |
+|Kashubian | `csb` |Western Frisian | `fy` |
+|Kazakh (Cyrillic) | `kk-cyrl`|Yucatec Maya | `yua` |
+|Kazakh (Latin) | `kk-latn`|Zhuang | `za` |
+|Khaling | `klr`|Zulu | `zu` |
+
+### Print text in preview (API version 2023-02-28-preview)
+
+Use the parameter `api-version=2023-02-28-preview` when using the REST API or the corresponding SDK to support these languages in your applications.
+
+|Language| Code (optional) |Language| Code (optional) |
+|:--|:-:|:--|:-:|
+|Abaza|`abq`|Malagasy | `mg` |
+|Abkhazian |`ab`|Mandinka | `mnk` |
+|Achinese | `ace`| Mapudungun | `arn` |
+|Acoli | `ach` | Mari (Russia) | `chm`|
+|Adangme |`ada`| Masai | `mas`|
+|Adyghe | `ady`| Mende (Sierra Leone) | `men`|
+|Afar | `aa`| Meru | `mer`|
+|Akan | `ak`| Meta' | `mgo`|
+|Algonquin |`alq`| Minangkabau | `min`
+|Asu (Tanzania) | `asa`|Mohawk| `moh`|
+|Avaric | `av`| Mongondow | `mog`
+|Aymara | `ay`| Morisyen | `mfe` |
+|Bafia |`ksf`| Mundang | `mua` |
+|Bambara | `bm`| Nahuatl | `nah` |
+|Bashkir | `ba`| Navajo | `nv` |
+|Bemba (Zambia) | `bem`| Ndonga | `ng` |
+|Bena (Tanzania) | `bez`|Ngomba | `jgo`|
+|Bikol |`bik`|North Ndebele | `nd` |
+|Bini | `bin`|Nyanja | `ny`|
+|Chechen | `ce`|Nyankole | `nyn` |
+|Chiga | `cgg`|Nzima | `nzi`
+|Choctaw |`cho`|Ojibwa | `oj`
+|Chukot |`ckt`|Oromo | `om` |
+|Chuvash | `cv`|Pampanga | `pam`|
+|Cree |`cr`|Pangasinan | `pag`|
+|Creek| `mus`| Papiamento | `pap`|
+|Crow | `cro`|Pedi | `nso`|
+|Dargwa | `dar`|Quechua | `qu`|
+|Duala |`dua`|Rundi | `rn` |
+|Dungan |`dng`|Rwa | `rwk`|
+|Efik|`efi`|Samburu | `saq`|
+|Fon | `fon`|Sango | `sg`
+|Ga | `gaa` |Sangu (Gabon) | `snq`|
+|Ganda | `lg` |Sena | `seh` |
+|Gayo | `gay`|Serbian (Cyrillic) | `sr-cyrl` |
+|Guarani| `gn`| Shambala | `ksb`|
+|Gusii | `guz`|Shona | `sn`|
+|Greek | `el`|Siksika | `bla`|
+|Hebrew | `he` | Soga | `xog`|
+|Herero | `hz` |Somali (Latin) | `so-latn` |
+|Hiligaynon | `hil` |Songhai | `son` |
+|Iban | `iba`|South Ndebele | `nr`|
+|Igbo |`ig`|Southern Altai | `alt`|
+|Iloko | `ilo`|Southern Sotho | `st` |
+|Ingush |`inh`|Sundanese | `su` |
+|Jola-Fonyi |`dyo`|Swati | `ss` |
+|Kabardian | `kbd` | Tabassaran| `tab` |
+|Kalenjin | `kln` |Tachelhit| `shi` |
+|Kalmyk | `xal` | Tahitian | `ty`|
+|Kanuri | `kr`|Taita | `dav` |
+|Khakas | `kjh` | Tamil | `ta`|
+|Kikuyu | `ki` | Tatar (Cyrillic) | `tt-cyrl` |
+|Kildin Sami | `sjd` | Teso | `teo` |
+|Kinyarwanda| `rw`| Thai | `th`|
+|Komi | `kv` | Tok Pisin | `tpi` |
+|Kongo| `kg`| Tsonga | `ts`|
+|Kpelle | `kpe` | Tswana | `tn` |
+|Kuanyama | `kj`| Udmurt | `udm`|
+|Lak | `lbe` | Uighur (Cyrillic) | `ug-cyrl` |
+|Latvian | `lv`|Ukrainian | `uk`|
+|Lezghian | `lex`|Vietnamese | `vi`
+|Lingala | `ln`|Vunjo | `vun` |
+|Lozi| `loz` | Wolof | `wo` |
+|Luo (Kenya and Tanzania) | `luo`| Xhosa|`xh` |
+|Luyia | `luy` |Yakut | `sah` |
+|Macedonian | `mk`| Zapotec | `zap` |
+|Machame| `jmc`| Zarma | `dje` |
+|Madurese | `mad` |
+|Makhuwa-Meetto | `mgh` |
+|Makonde | `kde` |
+
+## General document model
+
+>[!NOTE]
+> It's not necessary to specify a locale. This is an optional parameter. The Document Intelligence deep-learning technology will auto-detect the language of the text in your image.
+
+| Model | Language - Locale code | Default |
+|--|:-|:-|
+|General document| <ul><li>English (United States) - en-US</li></ul>| English (United States) - en-US|
+
+## Custom neural model
+
+Language| API Version |
+|:--|:-:|
+|English | `2022-08-31` (GA), `2023-02-28-preview`|
+|Spanish | `2023-02-28-preview`|
+|German | `2023-02-28-preview`|
+|French | `2023-02-28-preview`|
+|Italian | `2023-02-28-preview`|
+|Dutch | `2023-02-28-preview`|
+
+## Health insurance card model
+
+|Language| Locale code |
+|:--|:-:|
+|English (United States)| `en-us`|
+
+## W-2 form model
+
+|Language| Locale code |
+|:--|:-:|
+|English (United States)| `en-us`|
+
+## Invoice model
+
+Language| Locale code |
+|:--|:-:|
+|English |`en-US`, `en-CA`, `en-GB`, `en-IN`|
+|Spanish | `es`|
+|German | `de`|
+|French | `fr`|
+|Italian | `it`|
+|Portuguese | `pt`|
+|Dutch | `nl`|
+
+## Receipt model
+
+>[!NOTE]
+ > It's not necessary to specify a locale. This is an optional parameter. The Document Intelligence deep-learning technology will auto-detect the language of the text in your image.
+
+**Receipt supports all English receipts and the following locales:**
+
+|Language| Locale code |
+|:--|:-:|
+|English |`en-au`|
+|English (Canada)|`en-ca`|
+|English (United Kingdom)|`en-gb`|
+|English (India)|`en-in`|
+|English (United States)| `en-us`|
+|French | `fr` |
+| Spanish | `es` |
+
+**Thermal receipts (retail, meal, parking, etc.)**
+
+| Supported Languages | Details |
+|:--|:-:|
+|English|United States (`en-US`), Australia (`en-AU`), Canada (`en-CA`), United Kingdom (`en-GB`), India (`en-IN`), United Arab Emirates (`en-AE`)|
+|Croatian|Croatia (`hr-HR`)|
+|Czech|Czechia (`cs-CZ`)|
+|Danish|Denmark (`da-DK`)|
+|Dutch|Netherlands (`nl-NL`)|
+|Finnish|Finland (`fi-FI`)|
+|French|Canada (`fr-CA`), France (`fr-FR`)|
+|German|Germany (`de-DE`)|
+|Hungarian|Hungary (`hu-HU`)|
+|Italian|Italy (`it-IT`)|
+|Japanese|Japan (`ja-JP`)|
+|Latvian|Latvia (`lv-LV`)|
+|Lithuanian|Lithuania (`lt-LT`)|
+|Norwegian|Norway (`no-NO`)|
+|Portuguese|Brazil (`pt-BR`), Portugal (`pt-PT`)|
+|Spanish|Spain (`es-ES`)|
+|Swedish|Sweden (`sv-SE`)|
+|Vietnamese|Vietnam (`vi-VN`)|
+
+**Hotel receipts**
+
+| Supported Languages | Details |
+|:--|:-:|
+|English|United States (`en-US`)|
+|French|France (`fr-FR`)|
+|German|Germany (`de-DE`)|
+|Italian|Italy (`it-IT`)|
+|Japanese|Japan (`ja-JP`)|
+|Portuguese|Portugal (`pt-PT`)|
+|Spanish|Spain (`es-ES`)|
+
+## ID document model
+
+This technology is currently available for US driver licenses and the biographical page from international passports (excluding visa and other travel documents).
+
+Supported document types
+
+| Region | Document Types |
+|--|-|
+|Worldwide|Passport Book, Passport Card|
+|`United States (US)`|Driver License, Identification Card, Residency Permit (Green card), Social Security Card, Military ID|
+|`India (IN)`|Driver License, PAN Card, Aadhaar Card|
+|`Canada (CA)`|Driver License, Identification Card, Residency Permit (Maple Card)|
+|`United Kingdom (GB)`|Driver License, National Identity Card|
+|`Australia (AU)`|Driver License, Photo Card, Key-pass ID (including digital version)|
+
+## Business card model
+
+>[!NOTE]
+ > It's not necessary to specify a locale. This is an optional parameter. The Document Intelligence deep-learning technology will auto-detect the language of the text in your image.
+
+| Model | Language - Locale code | Default |
+|--|:-|:-|
+|Business card (v3.0 API)| <ul><li>English (United States) - en-US</li><li>English (Australia) - en-AU</li><li>English (Canada) - en-CA</li><li>English (United Kingdom) - en-GB</li><li>English (India) - en-IN</li><li>English (Japan) - en-JP</li><li>Japanese (Japan) - ja-JP</li></ul> | Autodetected (en-US or ja-JP) |
+
+## General Document
+
+Language| Locale code |
+|:--|:-:|
+|English (United States)|en-us|
+
+## Detected languages: Read API
+
+The [Read API](concept-read.md) supports detecting the following languages in your documents. This list may include languages not currently supported for text extraction.
+
+> [!NOTE]
+> **Language detection**
+>
+> Document Intelligence read model can _detect_ possible presence of languages and returns language codes for detected languages. To determine if text can also be
+> extracted for a given language, see previous sections.
+
+> [!NOTE]
+> **Detected languages vs extracted languages**
+>
+> This section lists the languages that the Read model can detect in your documents, if present. This list differs from the list of languages supported for text extraction, which is specified in the previous sections for each model.
+
+| Language | Code |
+|||
+| Afrikaans | `af` |
+| Albanian | `sq` |
+| Amharic | `am` |
+| Arabic | `ar` |
+| Armenian | `hy` |
+| Assamese | `as` |
+| Azerbaijani | `az` |
+| Basque | `eu` |
+| Belarusian | `be` |
+| Bengali | `bn` |
+| Bosnian | `bs` |
+| Bulgarian | `bg` |
+| Burmese | `my` |
+| Catalan | `ca` |
+| Central Khmer | `km` |
+| Chinese | `zh` |
+| Chinese Simplified | `zh_chs` |
+| Chinese Traditional | `zh_cht` |
+| Corsican | `co` |
+| Croatian | `hr` |
+| Czech | `cs` |
+| Danish | `da` |
+| Dari | `prs` |
+| Divehi | `dv` |
+| Dutch | `nl` |
+| English | `en` |
+| Esperanto | `eo` |
+| Estonian | `et` |
+| Fijian | `fj` |
+| Finnish | `fi` |
+| French | `fr` |
+| Galician | `gl` |
+| Georgian | `ka` |
+| German | `de` |
+| Greek | `el` |
+| Gujarati | `gu` |
+| Haitian | `ht` |
+| Hausa | `ha` |
+| Hebrew | `he` |
+| Hindi | `hi` |
+| Hmong Daw | `mww` |
+| Hungarian | `hu` |
+| Icelandic | `is` |
+| Igbo | `ig` |
+| Indonesian | `id` |
+| Inuktitut | `iu` |
+| Irish | `ga` |
+| Italian | `it` |
+| Japanese | `ja` |
+| Javanese | `jv` |
+| Kannada | `kn` |
+| Kazakh | `kk` |
+| Kinyarwanda | `rw` |
+| Kirghiz | `ky` |
+| Korean | `ko` |
+| Kurdish | `ku` |
+| Lao | `lo` |
+| Latin | `la` |
+| Latvian | `lv` |
+| Lithuanian | `lt` |
+| Luxembourgish | `lb` |
+| Macedonian | `mk` |
+| Malagasy | `mg` |
+| Malay | `ms` |
+| Malayalam | `ml` |
+| Maltese | `mt` |
+| Maori | `mi` |
+| Marathi | `mr` |
+| Mongolian | `mn` |
+| Nepali | `ne` |
+| Norwegian | `no` |
+| Norwegian Nynorsk | `nn` |
+| Oriya | `or` |
+| Pashto | `ps` |
+| Persian | `fa` |
+| Polish | `pl` |
+| Portuguese | `pt` |
+| Punjabi | `pa` |
+| Queretaro Otomi | `otq` |
+| Romanian | `ro` |
+| Russian | `ru` |
+| Samoan | `sm` |
+| Serbian | `sr` |
+| Shona | `sn` |
+| Sindhi | `sd` |
+| Sinhala | `si` |
+| Slovak | `sk` |
+| Slovenian | `sl` |
+| Somali | `so` |
+| Spanish | `es` |
+| Sundanese | `su` |
+| Swahili | `sw` |
+| Swedish | `sv` |
+| Tagalog | `tl` |
+| Tahitian | `ty` |
+| Tajik | `tg` |
+| Tamil | `ta` |
+| Tatar | `tt` |
+| Telugu | `te` |
+| Thai | `th` |
+| Tibetan | `bo` |
+| Tigrinya | `ti` |
+| Tongan | `to` |
+| Turkish | `tr` |
+| Turkmen | `tk` |
+| Ukrainian | `uk` |
+| Urdu | `ur` |
+| Uzbek | `uz` |
+| Vietnamese | `vi` |
+| Welsh | `cy` |
+| Xhosa | `xh` |
+| Yiddish | `yi` |
+| Yoruba | `yo` |
+| Yucatec Maya | `yua` |
+| Zulu | `zu` |
++++
+This table lists the written languages supported by each Document Intelligence service.
+
+<!-- markdownlint-disable MD001 -->
+<!-- markdownlint-disable MD024 -->
+
+## Layout and custom model
+
+|Language| Language code |
+|:--|:-:|
+|Afrikaans|`af`|
+|Albanian |`sq`|
+|Asturian |`ast`|
+|Basque |`eu`|
+|Bislama |`bi`|
+|Breton |`br`|
+|Catalan |`ca`|
+|Cebuano |`ceb`|
+|Chamorro |`ch`|
+|Chinese (Simplified) | `zh-Hans`|
+|Chinese (Traditional) | `zh-Hant`|
+|Cornish |`kw`|
+|Corsican |`co`|
+|Crimean Tatar (Latin) |`crh`|
+|Czech | `cs` |
+|Danish | `da` |
+|Dutch | `nl` |
+|English (printed and handwritten) | `en` |
+|Estonian |`et`|
+|Fijian |`fj`|
+|Filipino |`fil`|
+|Finnish | `fi` |
+|French | `fr` |
+|Friulian | `fur` |
+|Galician | `gl` |
+|German | `de` |
+|Gilbertese | `gil` |
+|Greenlandic | `kl` |
+|Haitian Creole | `ht` |
+|Hani | `hni` |
+|Hmong Daw (Latin) | `mww` |
+|Hungarian | `hu` |
+|Indonesian | `id` |
+|Interlingua | `ia` |
+|Inuktitut (Latin) | `iu` |
+|Irish | `ga` |
+|Italian | `it` |
+|Japanese | `ja` |
+|Javanese | `jv` |
+|K'iche' | `quc` |
+|Kabuverdianu | `kea` |
+|Kachin (Latin) | `kac` |
+|Kara-Kalpak | `kaa` |
+|Kashubian | `csb` |
+|Khasi | `kha` |
+|Korean | `ko` |
+|Kurdish (Latin) | `kur` |
+|Luxembourgish | `lb` |
+|Malay (Latin) | `ms` |
+|Manx | `gv` |
+|Neapolitan | `nap` |
+|Norwegian | `no` |
+|Occitan | `oc` |
+|Polish | `pl` |
+|Portuguese | `pt` |
+|Romansh | `rm` |
+|Scots | `sco` |
+|Scottish Gaelic | `gd` |
+|Slovenian | `slv` |
+|Spanish | `es` |
+|Swahili (Latin) | `sw` |
+|Swedish | `sv` |
+|Tatar (Latin) | `tat` |
+|Tetum | `tet` |
+|Turkish | `tr` |
+|Upper Sorbian | `hsb` |
+|Uzbek (Latin) | `uz` |
+|Volapük | `vo` |
+|Walser | `wae` |
+|Western Frisian | `fy` |
+|Yucatec Maya | `yua` |
+|Zhuang | `za` |
+|Zulu | `zu` |
+
+## Prebuilt receipt
+
+>[!NOTE]
+ > It's not necessary to specify a locale. This is an optional parameter. The Document Intelligence deep-learning technology will auto-detect the language of the text in your image.
+
+Prebuilt Receipt and Business Cards support all English receipts and business cards with the following locales:
+
+|Language| Locale code |
+|:--|:-:|
+|English (Australia)|`en-au`|
+|English (Canada)|`en-ca`|
+|English (United Kingdom)|`en-gb`|
+|English (India)|`en-in`|
+|English (United States)| `en-us`|
+
+## Prebuilt invoice
+
+Language| Locale code |
+|:--|:-:|
+|English (United States)|en-us|
+
+## Prebuilt business card
+
+>[!NOTE]
+> It's not necessary to specify a locale. This is an optional parameter. The Document Intelligence deep-learning technology will auto-detect the language of the text in your image.
+
+| Model | Language - Locale code | Default |
+|--|:-|:-|
+|Business card (v2.1 API)| <ul><li>English (United States) - en-US</li><li>English (Australia) - en-AU</li><li>English (Canada) - en-CA</li><li>English (United Kingdom) - en-GB</li><li>English (India) - en-IN</li></ul> | Autodetected |
+
+## Prebuilt identity documents
+
+This technology is currently available for US driver licenses and the biographical page from international passports (excluding visa and other travel documents).
++
+## Next steps
++
+> [!div class="nextstepaction"]
+> [Try Document Intelligence Studio](https://formrecognizer.appliedai.azure.com/studio)
+
+> [!div class="nextstepaction"]
+> [Try Document Intelligence Sample Labeling tool](https://aka.ms/fott-2.1-ga)
ai-services Managed Identities Secured Access https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/document-intelligence/managed-identities-secured-access.md
+
+ Title: "Configure secure access with managed identities and private endpoints"
+
+description: Learn how to configure secure communications between Document Intelligence and other Azure Services.
+++++ Last updated : 07/18/2023+
+monikerRange: '<=doc-intel-3.0.0'
+++
+# Configure secure access with managed identities and private endpoints
++
+This how-to guide walks you through the process of enabling secure connections for your Document Intelligence resource. You can secure the following connections:
+
+* Communication between a client application within a Virtual Network (VNET) and your Document Intelligence Resource.
+
+* Communication between Document Intelligence Studio and your Document Intelligence resource.
+
+* Communication between your Document Intelligence resource and a storage account (needed when training a custom model).
+
+ The following diagram shows the environment you're setting up to secure these resources:
+
+ :::image type="content" source="media/managed-identities/secure-config.png" alt-text="Screenshot of secure configuration with managed identity and private endpoints.":::
+
+## Prerequisites
+
+To get started, you need:
+
+* An active [**Azure account**](https://azure.microsoft.com/free/cognitive-services/). If you don't have one, you can [**create a free account**](https://azure.microsoft.com/free/).
+
+* A [**Document Intelligence**](https://ms.portal.azure.com/#create/Microsoft.CognitiveServicesFormRecognizer) or [**Azure AI services**](https://portal.azure.com/#create/Microsoft.CognitiveServicesAllInOne) resource in the Azure portal. For detailed steps, _see_ [Create a multi-service resource](../../ai-services/multi-service-resource.md?pivots=azportal).
+
+* An [**Azure blob storage account**](https://portal.azure.com/#create/Microsoft.StorageAccount-ARM) in the same region as your Document Intelligence resource. Create containers to store and organize your blob data within your storage account.
+
+* An [**Azure virtual network**](https://portal.azure.com/#create/Microsoft.VirtualNetwork-ARM) in the same region as your Document Intelligence resource. Create a virtual network to deploy your application resources to train models and analyze documents.
+
+* Optionally, an **Azure data science VM** for [**Windows**](../../machine-learning/data-science-virtual-machine/provision-vm.md) or [**Linux/Ubuntu**](../../machine-learning/data-science-virtual-machine/dsvm-ubuntu-intro.md) deployed in the virtual network to test the secure connections you establish.
+
+## Configure resources
+
+Configure each resource so that the resources can communicate with each other:
+
+* Configure the Document Intelligence Studio to use the newly created Document Intelligence resource by accessing the settings page and selecting the resource.
+
+* Validate that the configuration works by selecting the Read API and analyzing a sample document. If the resource was configured correctly, the request successfully completes.
+
+* Add a training dataset to a container in the Storage account you created.
+
+* Select the custom model tile to create a custom project. Ensure that you select the same Document Intelligence resource and the storage account you created in the previous step.
+
+* Select the container with the training dataset you uploaded in the previous step. Ensure that if the training dataset is within a folder, the folder path is set appropriately.
+
+* If you have the required permissions, the Studio sets the CORS setting required to access the storage account. If you don't have the permissions, you need to ensure that the CORS settings are configured on the Storage account before you can proceed.
+
+* Validate that the Studio is configured to access your training data. If you can see your documents in the labeling experience, all the required connections are established.
+
+You now have a working implementation of all the components needed to build a Document Intelligence solution with the default security model:
+
+ :::image type="content" source="media/managed-identities/default-config.png" alt-text="Screenshot of default security configuration.":::
+
+Next, complete the following steps:
+
+* Set up a managed identity on the Document Intelligence resource.
+
+* Secure the storage account so that it accepts traffic only from specific virtual networks and IP addresses.
+
+* Configure the Document Intelligence managed identity to communicate with the storage account.
+
+* Disable public access to the Document Intelligence resource and create a private endpoint to make it accessible from the virtual network.
+
+* Add a private endpoint for the storage account in a selected virtual network.
+
+* Validate that you can train models and analyze documents from within the virtual network.
+
+## Setup managed identity for Document Intelligence
+
+Navigate to the Document Intelligence resource in the Azure portal and select the **Identity** tab. Toggle the **System assigned** managed identity to **On** and save the changes:
+
+ :::image type="content" source="media/managed-identities/v2-fr-mi.png" alt-text="Screenshot of configure managed identity.":::
+
+## Secure the Storage account to limit traffic
+
+Start configuring secure communications by navigating to the **Networking** tab on your **Storage account** in the Azure portal.
+
+1. Under **Firewalls and virtual networks**, choose **Enabled from selected virtual networks and IP addresses** from the **Public network access** list.
+
+1. Ensure that **Allow Azure services on the trusted services list to access this storage account** is selected from the **Exceptions** list.
+
+1. **Save** your changes.
+
+ :::image type="content" source="media/managed-identities/v2-stg-firewall.png" alt-text="Screenshot of configure storage firewall.":::
+
+> [!NOTE]
+>
+> Your storage account won't be accessible from the public internet.
+>
+> Refreshing the custom model labeling page in the Studio will result in an error message.
+
+## Enable access to storage from Document Intelligence
+
+To ensure that the Document Intelligence resource can access the training dataset, you need to add a role assignment for your [managed identity](#setup-managed-identity-for-document-intelligence).
+
+1. Staying on the storage account window in the Azure portal, navigate to the **Access Control (IAM)** tab in the left navigation bar.
+
+1. Select the **Add role assignment** button.
+
+ :::image type="content" source="media/managed-identities/v2-stg-role-assign-role.png" alt-text="Screenshot of add role assignment window.":::
+
+1. On the **Role** tab, search for and select the **Storage Blob Data Reader** permission and select **Next**.
+
+ :::image type="content" source="media/managed-identities/v2-stg-role-assignment.png" alt-text="Screenshot of choose a role tab.":::
+
+1. On the **Members** tab, select the **Managed identity** option and choose **+ Select members**.
+
+1. On the **Select managed identities** dialog window, select the following options:
+
+ * **Subscription**. Select your subscription.
+
+    * **Managed Identity**. Select **Form Recognizer**.
+
+ * **Select**. Choose the Document Intelligence resource you enabled with a managed identity.
+
+ :::image type="content" source="media/managed-identities/v2-stg-role-assign-resource.png" alt-text="Screenshot of managed identities dialog window.":::
+
+1. **Close** the dialog window.
+
+1. Finally, select **Review + assign** to save your changes.
+
+Great! You've configured your Document Intelligence resource to use a managed identity to connect to a storage account.
+
+> [!TIP]
+>
+> When you try the [Document Intelligence Studio](https://formrecognizer.appliedai.azure.com/studio), you'll see the READ API and other prebuilt models don't require storage access to process documents. However, training a custom model requires additional configuration because the Studio can't directly communicate with a storage account.
+> You can enable storage access by selecting **Add your client IP address** from the **Networking** tab of the storage account to configure your machine to access the storage account via IP allowlisting.
+
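+With the role assignment and trusted-services exception in place, a custom model training request can reference the training container by URL without appending a SAS token, because Document Intelligence reaches the container through its managed identity. Here's a minimal sketch using the Python SDK (`azure-ai-formrecognizer` v3.2 or later); the endpoint, key, and container URL are placeholders:
+
+```python
+# A minimal sketch: build a custom model from a container that the
+# Document Intelligence managed identity can read (no SAS token appended).
+# The endpoint, key, and container URL are placeholders.
+from azure.ai.formrecognizer import DocumentModelAdministrationClient, ModelBuildMode
+from azure.core.credentials import AzureKeyCredential
+
+admin_client = DocumentModelAdministrationClient(
+    endpoint="https://<your-resource>.cognitiveservices.azure.com/",
+    credential=AzureKeyCredential("<your-key>"),
+)
+
+poller = admin_client.begin_build_document_model(
+    ModelBuildMode.TEMPLATE,
+    blob_container_url="https://<your-storage-account>.blob.core.windows.net/<training-container>",
+)
+print(poller.result().model_id)
+```
+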
+## Configure private endpoints for access from VNETs
+
+When you connect to resources from a virtual network, adding private endpoints ensures that both the storage account and the Document Intelligence resource are accessible from the virtual network.
+
+Next, configure the virtual network so that only resources within the virtual network or traffic routed through the network have access to the Document Intelligence resource and the storage account.
+
+### Enable your virtual network and private endpoints
+
+1. In the Azure portal, navigate to your Document Intelligence resource.
+
+1. Select the **Networking** tab from the left navigation bar.
+
+1. Enable the **Selected Networking and Private Endpoints** option from the **Firewalls and virtual networks** tab and select **Save**.
+
+> [!NOTE]
+>
+> If you try accessing any of the Document Intelligence Studio features, you'll see an access denied message. To enable access from the Studio on your machine, select the **client IP address checkbox** and **Save** to restore access.
+
+ :::image type="content" source="media/managed-identities/v2-fr-network.png" alt-text="Screenshot showing how to disable public access to Document Intelligence.":::
+
+### Configure your private endpoint
+
+1. Navigate to the **Private endpoint connections** tab and select **+ Private endpoint**. The **Create a private endpoint** dialog page opens.
+
+1. On the **Create private endpoint** dialog page, select the following options:
+
+ * **Subscription**. Select your billing subscription.
+
+ * **Resource group**. Select the appropriate resource group.
+
+ * **Name**. Enter a name for your private endpoint.
+
+ * **Region**. Select the same region as your virtual network.
+
+ * Select **Next: Resource**.
+
+    :::image type="content" source="media/managed-identities/v2-fr-private-end-basics.png" alt-text="Screenshot showing how to set up a private endpoint.":::
+
+### Configure your virtual network
+
+1. On the **Resource** tab, accept the default values and select **Next: Virtual Network**.
+
+1. On the **Virtual Network** tab, make sure that you select the virtual network that you created.
+
+1. If you have multiple subnets, select the subnet where you want the private endpoint to connect. Accept the default value to **Dynamically allocate IP address**.
+
+1. Select **Next: DNS**.
+
+1. Accept the default value **Yes** for **Integrate with private DNS zone**.
+
+ :::image type="content" source="media/managed-identities/v2-fr-private-end-vnet.png" alt-text="Screenshot showing how to configure private endpoint":::
+
+1. Accept the remaining defaults and select **Next: Tags**.
+
+1. Select **Next: Review + create**.
+
+Well done! Your Document Intelligence resource is now accessible only from the virtual network and from any IP addresses in the IP allowlist.
+
+### Configure private endpoints for storage
+
+Navigate to your **storage account** on the Azure portal.
+
+1. Select the **Networking** tab from the left navigation menu.
+
+1. Select the **Private endpoint connections** tab.
+
+1. Select **+ Private endpoint**.
+
+1. Provide a name and choose the same region as the virtual network.
+
+1. Select **Next: Resource**.
+
+ :::image type="content" source="media/managed-identities/v2-stg-private-end-basics.png" alt-text="Screenshot showing how to create a private endpoint":::
+
+1. On the **Resource** tab, select **blob** from the **Target sub-resource** list.
+
+1. Select **Next: Virtual Network**.
+
+ :::image type="content" source="media/managed-identities/v2-stg-private-end-resource.png" alt-text="Screenshot showing how to configure a private endpoint for a blob.":::
+
+1. Select the **Virtual network** and **Subnet**. Make sure **Enable network policies for all private endpoints in this subnet** is selected and **Dynamically allocate IP address** is enabled.
+
+1. Select **Next: DNS**.
+
+1. Make sure that **Yes** is enabled for **Integrate with private DNS zone**.
+
+1. Select **Next: Tags**.
+
+1. Select **Next: Review + create**.
+
+Great work! You now have all the connections between the Document Intelligence resource and storage configured to use managed identities.
+
+> [!NOTE]
+> The resources are only accessible from the virtual network.
+>
+> Studio access and analyze requests to your Document Intelligence resource will fail unless the request originates from the virtual network or is routed via the virtual network.
+
+## Validate your deployment
+
+To validate your deployment, you can deploy a virtual machine (VM) to the virtual network and connect to the resources.
+
+1. Configure a [Data Science VM](https://azuremarketplace.microsoft.com/marketplace/apps/microsoft-dsvm.dsvm-win-2019?tab=Overview) in the virtual network.
+
+1. Remotely connect to the VM from your desktop, then launch a browser session to access Document Intelligence Studio.
+
+1. Analyze requests and training operations should now complete successfully; a minimal check is sketched below.
+
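+The following is a minimal connectivity check you can run from the VM, using the Python SDK (`azure-ai-formrecognizer`) with placeholder endpoint, key, and file values; it succeeds only if the VM can resolve and reach the Document Intelligence private endpoint:
+
+```python
+# A minimal connectivity check run from inside the virtual network.
+# The endpoint, key, and file path are placeholders.
+from azure.ai.formrecognizer import DocumentAnalysisClient
+from azure.core.credentials import AzureKeyCredential
+
+client = DocumentAnalysisClient(
+    endpoint="https://<your-resource>.cognitiveservices.azure.com/",
+    credential=AzureKeyCredential("<your-key>"),
+)
+
+with open("sample.pdf", "rb") as f:
+    poller = client.begin_analyze_document("prebuilt-layout", document=f)
+
+print(f"Pages analyzed: {len(poller.result().pages)}")
+```
+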
+That's it! You've now configured secure access for your Document Intelligence resource with managed identities and private endpoints.
+
+## Common error messages
+
+* **Failed to access Blob container**:
+
+ :::image type="content" source="media/managed-identities/cors-error.png" alt-text="Screenshot of error message when CORS config is required":::
+
+ **Resolution**: [Configure CORS](quickstarts/try-document-intelligence-studio.md#prerequisites-for-new-users).
+
+* **AuthorizationFailure**:
+
+ :::image type="content" source="media/managed-identities/auth-failure.png" alt-text="Screenshot of authorization failure error.":::
+
+ **Resolution**: Ensure that there's a network line-of-sight between the computer accessing the Document Intelligence Studio and the storage account. For example, you may need to add the client IP address in the storage account's networking tab.
+
+* **ContentSourceNotAccessible**:
+
+ :::image type="content" source="media/managed-identities/content-source-error.png" alt-text="Screenshot of content source not accessible error.":::
+
+ **Resolution**: Make sure you've given your Document Intelligence managed identity the role of **Storage Blob Data Reader** and enabled **Trusted services** access or **Resource instance** rules on the networking tab.
+
+* **AccessDenied**:
+
+ :::image type="content" source="media/managed-identities/access-denied.png" alt-text="Screenshot of an access denied error.":::
+
+ **Resolution**: Check to make sure there's connectivity between the computer accessing the Document Intelligence Studio and the Document Intelligence service. For example, you may need to add the client IP address to the Document Intelligence service's networking tab.
+
+## Next steps
+
+> [!div class="nextstepaction"]
+> [Access Azure Storage from a web app using managed identities](../../app-service/scenario-secure-app-access-storage.md?bc=%2fazure%2fapplied-ai-services%2fform-recognizer%2fbreadcrumb%2ftoc.json&toc=%2fazure%2fapplied-ai-services%2fform-recognizer%2ftoc.json)
ai-services Managed Identities https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/document-intelligence/managed-identities.md
+
+ Title: Create and use managed identities with Document Intelligence
+
+description: Understand how to create and use managed identity with Document Intelligence
+++++ Last updated : 07/18/2023+
+monikerRange: '<=doc-intel-3.1.0'
+++
+# Managed identities for Document Intelligence
++
+Managed identities for Azure resources are service principals that create an Azure Active Directory (Azure AD) identity and specific permissions for Azure managed resources:
++
+* You can use managed identities to grant access to any resource that supports Azure AD authentication, including your own applications. Unlike security keys and authentication tokens, managed identities eliminate the need for developers to manage credentials.
+
+* To grant access to an Azure resource, assign an Azure role to a managed identity using [Azure role-based access control (Azure RBAC)](../../role-based-access-control/overview.md).
+
+* There's no added cost to use managed identities in Azure.
+
+> [!IMPORTANT]
+>
+> * Managed identities eliminate the need for you to manage credentials, including Shared Access Signature (SAS) tokens.
+>
+> * Managed identities are a safer way to grant access to data without having credentials in your code.
+
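+As one illustration of credential-free access, the following sketch authenticates to a Document Intelligence resource with an Azure AD token instead of a key. It assumes the `azure-identity` and `azure-ai-formrecognizer` Python packages, a placeholder endpoint that uses a custom subdomain, and that the identity running the code (for example, a managed identity on an Azure VM, or your developer sign-in) already has data-plane access to the resource:
+
+```python
+# A minimal sketch: authenticate with Azure AD instead of a key.
+# The endpoint and file path are placeholders; the calling identity must
+# already have data-plane access to the Document Intelligence resource.
+from azure.ai.formrecognizer import DocumentAnalysisClient
+from azure.identity import DefaultAzureCredential
+
+# DefaultAzureCredential picks up a managed identity when running in Azure,
+# or your developer credentials (for example, Azure CLI) when running locally.
+client = DocumentAnalysisClient(
+    endpoint="https://<your-custom-subdomain>.cognitiveservices.azure.com/",
+    credential=DefaultAzureCredential(),
+)
+
+with open("sample.pdf", "rb") as f:
+    result = client.begin_analyze_document("prebuilt-read", document=f).result()
+
+print(result.content[:200])
+```
+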
+## Private storage account access
+
+ Private Azure storage account access and authentication support [managed identities for Azure resources](../../active-directory/managed-identities-azure-resources/overview.md). If you have an Azure storage account protected by a virtual network (VNet) or firewall, Document Intelligence can't directly access your storage account data. However, once a managed identity is enabled, Document Intelligence can access your storage account using an assigned managed identity credential.
+
+> [!NOTE]
+>
+> * If you intend to analyze your storage data with the [**Document Intelligence Sample Labeling tool (FOTT)**](https://fott-2-1.azurewebsites.net/), you must deploy the tool behind your VNet or firewall.
+>
+> * The Analyze [**Receipt**](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-v2-1/operations/AnalyzeReceiptAsync), [**Business Card**](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-v2-1/operations/AnalyzeBusinessCardAsync), [**Invoice**](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-v2-1/operations/5ed8c9843c2794cbb1a96291), [**ID document**](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-v2-1/operations/5f74a7738978e467c5fb8707), and [**Custom Form**](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-v2-1/operations/AnalyzeWithCustomForm) APIs can extract data from a single document by posting requests as raw binary content. In these scenarios, there is no requirement for a managed identity credential.
+
+## Prerequisites
+
+To get started, you need:
+
+* An active [**Azure account**](https://azure.microsoft.com/free/cognitive-services/). If you don't have one, you can [**create a free account**](https://azure.microsoft.com/free/).
+
+* A [**Document Intelligence**](https://ms.portal.azure.com/#create/Microsoft.CognitiveServicesFormRecognizer) or [**Azure AI services**](https://portal.azure.com/#create/Microsoft.CognitiveServicesAllInOne) resource in the Azure portal. For detailed steps, _see_ [Create a multi-service resource](../../ai-services/multi-service-resource.md?pivots=azportal).
+
+* An [**Azure blob storage account**](https://portal.azure.com/#create/Microsoft.StorageAccount-ARM) in the same region as your Document Intelligence resource. You also need to create containers to store and organize your blob data within your storage account.
+
+ * If your storage account is behind a firewall, **you must enable the following configuration**: </br></br>
+
+ * On your storage account page, select **Security + networking** → **Networking** from the left menu.
+ :::image type="content" source="media/managed-identities/security-and-networking-node.png" alt-text="Screenshot: security + networking tab.":::
+
+ * In the main window, select **Allow access from selected networks**.
+ :::image type="content" source="media/managed-identities/firewalls-and-virtual-networks.png" alt-text="Screenshot: Selected networks radio button selected.":::
+
+ * On the selected networks page, navigate to the **Exceptions** category and make certain that the [**Allow Azure services on the trusted services list to access this storage account**](../../storage/common/storage-network-security.md?tabs=azure-portal#manage-exceptions) checkbox is enabled.
+
+ :::image type="content" source="media/managed-identities/allow-trusted-services-checkbox-portal-view.png" alt-text="Screenshot: allow trusted services checkbox, portal view":::
+* A brief understanding of [**Azure role-based access control (Azure RBAC)**](../../role-based-access-control/role-assignments-portal.md) using the Azure portal.
+
+## Managed identity assignments
+
+There are two types of managed identity: **system-assigned** and **user-assigned**. Currently, Document Intelligence only supports system-assigned managed identity:
+
+* A system-assigned managed identity is **enabled** directly on a service instance. It isn't enabled by default; you must go to your resource and update the identity setting.
+
+* The system-assigned managed identity is tied to your resource throughout its lifecycle. If you delete your resource, the managed identity is deleted as well.
+
+In the following steps, we enable a system-assigned managed identity and grant Document Intelligence limited access to your Azure blob storage account.
+
+## Enable a system-assigned managed identity
+
+>[!IMPORTANT]
+>
+> To enable a system-assigned managed identity, you need **Microsoft.Authorization/roleAssignments/write** permissions, such as [**Owner**](../../role-based-access-control/built-in-roles.md#owner) or [**User Access Administrator**](../../role-based-access-control/built-in-roles.md#user-access-administrator). You can specify a scope at four levels: management group, subscription, resource group, or resource.
+
+1. Sign in to the [Azure portal](https://portal.azure.com) using an account associated with your Azure subscription.
+
+1. Navigate to your **Document Intelligence** resource page in the Azure portal.
+
+1. In the left rail, select **Identity** from the **Resource Management** list:
+
+ :::image type="content" source="media/managed-identities/resource-management-identity-tab.png" alt-text="Screenshot: resource management identity tab in the Azure portal.":::
+
+1. In the main window, toggle the **System assigned Status** tab to **On**.
+
+## Grant access to your storage account
+
+You need to grant Document Intelligence access to your storage account before it can create, read, or delete blobs. Now that you've enabled Document Intelligence with a system-assigned managed identity, you can use Azure role-based access control (Azure RBAC), to give Document Intelligence access to Azure storage. The **Storage Blob Data Reader** role gives Document Intelligence (represented by the system-assigned managed identity) read and list access to the blob container and data.
+
+1. Under **Permissions** select **Azure role assignments**:
+
+ :::image type="content" source="media/managed-identities/enable-system-assigned-managed-identity-portal.png" alt-text="Screenshot: enable system-assigned managed identity in Azure portal.":::
+
+1. On the Azure role assignments page that opens, choose your subscription from the drop-down menu then select **&plus; Add role assignment**.
+
+ :::image type="content" source="media/managed-identities/azure-role-assignments-page-portal.png" alt-text="Screenshot: Azure role assignments page in the Azure portal.":::
+
+ > [!NOTE]
+ >
+   > If you're unable to assign a role in the Azure portal because the Add > Add role assignment option is disabled or you get the permissions error, "you do not have permissions to add role assignment at this scope", check that you're currently signed in as a user with an assigned role that has Microsoft.Authorization/roleAssignments/write permissions, such as Owner or User Access Administrator, at the Storage scope for the storage resource.
+
+1. Next, you're going to assign a **Storage Blob Data Reader** role to your Document Intelligence service resource. In the **Add role assignment** pop-up window, complete the fields as follows and select **Save**:
+
+ | Field | Value|
+ ||--|
+ |**Scope**| **_Storage_**|
+ |**Subscription**| **_The subscription associated with your storage resource_**.|
+ |**Resource**| **_The name of your storage resource_**|
+    |**Role** | **_Storage Blob Data Reader_**, which allows read access to Azure Storage blob containers and data.|
+
+ :::image type="content" source="media/managed-identities/add-role-assignment-window.png" alt-text="Screenshot: add role assignments page in the Azure portal.":::
+
+1. After you've received the _Added Role assignment_ confirmation message, refresh the page to see the added role assignment.
+
+ :::image type="content" source="media/managed-identities/add-role-assignment-confirmation.png" alt-text="Screenshot: Added role assignment confirmation pop-up message.":::
+
+1. If you don't see the change right away, wait and try refreshing the page once more. When you assign or remove role assignments, it can take up to 30 minutes for changes to take effect.
+
+ :::image type="content" source="media/managed-identities/assigned-roles-window.png" alt-text="Screenshot: Azure role assignments window.":::
+
+ That's it! You've completed the steps to enable a system-assigned managed identity. With managed identity and Azure RBAC, you granted Document Intelligence specific access rights to your storage resource without having to manage credentials such as SAS tokens.
+
+## Next steps
+> [!div class="nextstepaction"]
+> [Configure secure access with managed identities and private endpoints](managed-identities-secured-access.md)
ai-services Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/document-intelligence/overview.md
+
+ Title: What is Azure AI Document Intelligence?
+
+description: Azure AI Document Intelligence is a machine-learning based OCR and intelligent document processing service to automate extraction of key data from forms and documents.
+++++ Last updated : 07/18/2023+
+monikerRange: '<=doc-intel-3.0.0'
+++
+<!-- markdownlint-disable MD033 -->
+<!-- markdownlint-disable MD024 -->
+<!-- markdownlint-disable MD036 -->
+<!-- markdownlint-disable MD001 -->
+
+# What is Azure AI Document Intelligence?
+
+> [!NOTE]
+> Form Recognizer is now **Azure AI Document Intelligence**!
+>
+> As of July 2023, Azure AI services encompass all of what were previously known as Azure Cognitive Services and Azure Applied AI Services. There are no changes to pricing. The names "Cognitive Services" and "Azure Applied AI" continue to be used in Azure billing, cost analysis, price list, and price APIs. There are no breaking changes to application programming interfaces (APIs) or SDKs.
+++
+Azure AI Document Intelligence is a cloud-based [Azure AI service](../../ai-services/index.yml) that enables you to build intelligent document processing solutions. Massive amounts of data, spanning a wide variety of data types, are stored in forms and documents. Document Intelligence enables you to effectively manage the velocity at which data is collected and processed and is key to improved operations, informed data-driven decisions, and enlightened innovation. </br></br>
+
+| ✔️ [**Document analysis models**](#document-analysis-models) | ✔️ [**Prebuilt models**](#prebuilt-models) | ✔️ [**Custom models**](#custom-model-overview) | ✔️[**Gated preview models**](#gated-preview-models) |
+
+### Document analysis models
+
+Document analysis models enable text extraction from forms and documents and return structured, business-ready content for your organization to act on and use.
+
+ :::column:::
+ :::image type="icon" source="media/overview/icon-read.png" link="#read":::</br>
+ [**Read**](#read) | Extract printed </br>and handwritten text.
+ :::column-end:::
+ :::column span="":::
+ :::image type="icon" source="media/overview/icon-layout.png" link="#layout":::</br>
+ [**Layout**](#layout) | Extract text </br>and document structure.
+ :::column-end:::
+ :::column span="":::
+ :::image type="icon" source="media/overview/icon-general-document.png" link="#general-document":::</br>
+ [**General document**](#general-document) | Extract text, </br>structure, and key-value pairs.
+ :::column-end:::
+
+### Prebuilt models
+
+Prebuilt models enable you to add intelligent document processing to your apps and flows without having to train and build your own models.
+
+ :::column span="":::
+ :::image type="icon" source="media/overview/icon-invoice.png" link="#invoice":::</br>
+ [**Invoice**](#invoice) | Extract customer </br>and vendor details.
+ :::column-end:::
+ :::column span="":::
+ :::image type="icon" source="media/overview/icon-receipt.png" link="#receipt":::</br>
+ [**Receipt**](#receipt) | Extract sales </br>transaction details.
+ :::column-end:::
+ :::column span="":::
+ :::image type="icon" source="media/overview/icon-id-document.png" link="#identity-id":::</br>
+ [**Identity**](#identity-id) | Extract identification </br>and verification details.
+ :::column-end:::
+ :::column span="":::
+     :::image type="icon" source="media/overview/icon-insurance-card.png" link="#health-insurance-card":::</br>
+     [🆕 **Insurance card**](#health-insurance-card) | Extract health insurance details.
+ :::column-end:::
+ :::column span="":::
+ :::image type="icon" source="media/overview/icon-w2.png" link="#w-2":::</br>
+ [**W2**](#w-2) | Extract taxable </br>compensation details.
+ :::column-end:::
+ :::column span="":::
+ :::image type="icon" source="media/overview/icon-business-card.png" link="#business-card":::</br>
+ [**Business card**](#business-card) | Extract business contact details.
+ :::column-end:::
+ :::column span="":::
+ :::image type="icon" source="media/overview/icon-contract.png" link="#contract-model-preview":::</br>
+ [**Contract**](#contract-model-preview) | Extract agreement</br> and party details.
+ :::column-end:::
+
+### Custom models
+
+Custom models are trained using your labeled datasets to extract distinct data from forms and documents, specific to your use cases. Standalone custom models can be combined to create composed models.
+
+ :::column:::
+ **Extraction models**</br>
+ Custom extraction models are trained to extract labeled fields from documents.
+ :::column-end:::
+
+ :::column:::
+ :::image type="icon" source="media/overview/icon-custom-template.png" link="#custom-template":::</br>
+ [**Custom template**](#custom-template) | Extract data from static layouts.
+ :::column-end:::
+ :::column span="":::
+ :::image type="icon" source="media/overview/icon-custom-neural.png" link="#custom-neural":::</br>
+ [**Custom neural**](#custom-neural) | Extract data from mixed-type documents.
+ :::column-end:::
+ :::column span="":::
+ :::image type="icon" source="media/overview/icon-custom-composed.png" link="#custom-composed":::</br>
+ [**Custom composed**](#custom-composed) | Extract data using a collection of models.
+ :::column-end:::
+
+ :::column:::
+ **Classification model**</br>
+ Custom classifiers analyze input documents to identify document types prior to invoking an extraction model.
+ :::column-end:::
+
+ :::column span="":::
+ :::image type="icon" source="media/overview/icon-custom-classifier.png" link="#custom-classification-model":::</br>
+ [**Custom classifier**](#custom-classification-model) | Identify designated document types (classes) prior to invoking an extraction model.
+ :::column-end:::
+
+### Gated preview models
+
+Document Intelligence Studio preview features are currently in gated preview. Features, approaches and processes may change, prior to General Availability (GA), based on user feedback. Complete and submit the [**Document Intelligence private preview request form**](https://aka.ms/form-recognizer/preview/survey) to request access.
+
+ :::column span="":::
+ :::image type="icon" source="media/overview/icon-1098e.png" link="#us-tax-1098-e-form-preview":::</br>
+ [**US Tax 1098-E form**](#us-tax-1098-e-form-preview) | Extract student loan interest details
+ :::column-end:::
+ :::column span="":::
+ :::image type="icon" source="media/overview/icon-1098.png" link="#us-tax-1098-form-preview":::</br>
+ [**US Tax 1098 form**](#us-tax-1098-form-preview) | Extract mortgage interest details.
+ :::column-end:::
+ :::column span="":::
+ :::image type="icon" source="media/overview/icon-1098t.png" link="#us-tax-1098-t-form-preview":::</br>
+ [**US Tax 1098-T form**](#us-tax-1098-t-form-preview) | Extract qualified tuition details.
+ :::column-end:::
+
+## Models and development options
+
+> [!NOTE]
+> The following document understanding models and development options are supported by the Document Intelligence service v3.0.
+
+You can use Document Intelligence to automate document processing in applications and workflows, enhance data-driven strategies, and enrich document search capabilities. Use the links in the table to learn more about each model and browse development options.
+
+### Read
++
+|About| Description |Automation use cases | Development options |
+|-|--|-|--|
+|[**Read OCR model**](concept-read.md)|&#9679; Extract **text** from documents.</br>&#9679; [Data and field extraction](concept-read.md#data-extraction)| &#9679; Contract processing. </br>&#9679; Financial or medical report processing.|&#9679; [**Document Intelligence Studio**](https://formrecognizer.appliedai.azure.com/studio/read)</br>&#9679; [**REST API**](how-to-guides/use-sdk-rest-api.md?view=doc-intel-3.0.0&preserve-view=true&pivots=programming-language-rest-api)</br>&#9679; [**C# SDK**](how-to-guides/use-sdk-rest-api.md?view=doc-intel-3.0.0&preserve-view=true&pivots=programming-language-csharp)</br>&#9679; [**Python SDK**](how-to-guides/use-sdk-rest-api.md?view=doc-intel-3.0.0&preserve-view=true&pivots=programming-language-python)</br>&#9679; [**Java SDK**](how-to-guides/use-sdk-rest-api.md?view=doc-intel-3.0.0&preserve-view=true&pivots=programming-language-java)</br>&#9679; [**JavaScript**](how-to-guides/use-sdk-rest-api.md?view=doc-intel-3.0.0&preserve-view=true&pivots=programming-language-javascript) |
+
+> [!div class="nextstepaction"]
+> [Return to model types](#document-analysis-models)
+
+### Layout
++
+| About | Description |Automation use cases | Development options |
+|-|--|-|--|
+|[**Layout analysis model**](concept-layout.md) |&#9679; Extract **text and layout** information from documents.</br>&#9679; [Data and field extraction](concept-layout.md#data-extraction)</br>&#9679; Layout API has been updated to a prebuilt model. |&#9679; Document indexing and retrieval by structure.</br>&#9679; Preprocessing prior to OCR analysis. |&#9679; [**Document Intelligence Studio**](https://formrecognizer.appliedai.azure.com/studio/layout)</br>&#9679; [**REST API**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-3.0.0&preserve-view=true)</br>&#9679; [**C# SDK**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-3.0.0&preserve-view=true#layout-model)</br>&#9679; [**Python SDK**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-3.0.0&preserve-view=true#layout-model)</br>&#9679; [**Java SDK**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-3.0.0&preserve-view=true#layout-model)</br>&#9679; [**JavaScript**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-3.0.0&preserve-view=true#layout-model)|
+
+> [!div class="nextstepaction"]
+> [Return to model types](#document-analysis-models)
+
+### General document
++
+| About | Description |Automation use cases | Development options |
+|-|--|-|--|
+|[**General document model**](concept-general-document.md)|&#9679; Extract **text, layout, and key-value pairs** from documents.</br>&#9679; [Data and field extraction](concept-general-document.md#data-extraction)|&#9679; Key-value pair extraction.</br>&#9679; Form processing.</br>&#9679; Survey data collection and analysis.|&#9679; [**Document Intelligence Studio**](https://formrecognizer.appliedai.azure.com/studio/document)</br>&#9679; [**REST API**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-3.0.0&preserve-view=true)</br>&#9679; [**C# SDK**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-3.0.0&preserve-view=true#general-document-model)</br>&#9679; [**Python SDK**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-3.0.0&preserve-view=true#general-document-model)</br>&#9679; [**Java SDK**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-3.0.0&preserve-view=true#general-document-model)</br>&#9679; [**JavaScript**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-3.0.0&preserve-view=true#general-document-model) |
+
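+As a brief illustration, the following sketch prints the key-value pairs returned by the general document model, using the Python SDK (`azure-ai-formrecognizer`) with placeholder endpoint, key, and file values:
+
+```python
+# A minimal sketch: extract key-value pairs with the general document model.
+# The endpoint, key, and file path are placeholders.
+from azure.ai.formrecognizer import DocumentAnalysisClient
+from azure.core.credentials import AzureKeyCredential
+
+client = DocumentAnalysisClient(
+    endpoint="https://<your-resource>.cognitiveservices.azure.com/",
+    credential=AzureKeyCredential("<your-key>"),
+)
+
+with open("form.pdf", "rb") as f:
+    result = client.begin_analyze_document("prebuilt-document", document=f).result()
+
+for pair in result.key_value_pairs:
+    key = pair.key.content if pair.key else ""
+    value = pair.value.content if pair.value else ""
+    print(f"{key}: {value}")
+```
+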
+> [!div class="nextstepaction"]
+> [Return to model types](#document-analysis-models)
+
+### Invoice
++
+| About | Description |Automation use cases | Development options |
+|-|--|-|--|
+|[**Invoice model**](concept-invoice.md) |&#9679; Extract key information from invoices.</br>&#9679; [Data and field extraction](concept-invoice.md#field-extraction) |&#9679; Accounts payable processing.</br>&#9679; Automated tax recording and reporting. |&#9679; [**Document Intelligence Studio**](https://formrecognizer.appliedai.azure.com/studio/prebuilt?formType=invoice)</br>&#9679; [**REST API**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-3.0.0&preserve-view=true&pivots=programming-language-rest-api#analyze-document-post-request)</br>&#9679; [**C# SDK**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-3.0.0&preserve-view=true#prebuilt-model)</br>&#9679; [**Python SDK**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-3.0.0&preserve-view=true#prebuilt-model)</br>&#9679; [**Java SDK**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-3.0.0&preserve-view=true#prebuilt-model)</br>&#9679; [**JavaScript**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-3.0.0&preserve-view=true#prebuilt-model)|
+
+> [!div class="nextstepaction"]
+> [Return to model types](#prebuilt-models)
+
+### Receipt
++
+| About | Description |Automation use cases | Development options |
+|-|--|-|--|
+|[**Receipt model**](concept-receipt.md) |&#9679; Extract key information from receipts.</br>&#9679; [Data and field extraction](concept-receipt.md#field-extraction)</br>&#9679; Receipt model v3.0 supports processing of **single-page hotel receipts**.|&#9679; Expense management.</br>&#9679; Consumer behavior data analysis.</br>&#9679; Customer loyalty program.</br>&#9679; Merchandise return processing.</br>&#9679; Automated tax recording and reporting. |&#9679; [**Document Intelligence Studio**](https://formrecognizer.appliedai.azure.com/studio/prebuilt?formType=receipt)</br>&#9679; [**REST API**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-3.0.0&preserve-view=true&pivots=programming-language-rest-api#analyze-document-post-request)</br>&#9679; [**C# SDK**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-3.0.0&preserve-view=true#prebuilt-model)</br>&#9679; [**Python SDK**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-3.0.0&preserve-view=true#prebuilt-model)</br>&#9679; [**Java SDK**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-3.0.0&preserve-view=true#prebuilt-model)</br>&#9679; [**JavaScript**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-3.0.0&preserve-view=true#prebuilt-model)|
+
+> [!div class="nextstepaction"]
+> [Return to model types](#prebuilt-models)
+
+### Identity (ID)
++
+| About | Description |Automation use cases | Development options |
+|-|--|-|--|
+|[**Identity document (ID) model**](concept-id-document.md) |&#9679; Extract key information from passports and ID cards.</br>&#9679; [Document types](concept-id-document.md#document-types)</br>&#9679; Extract endorsements, restrictions, and vehicle classifications from US driver's licenses. |&#9679; Know your customer (KYC) financial services guidelines compliance.</br>&#9679; Medical account management.</br>&#9679; Identity checkpoints and gateways.</br>&#9679; Hotel registration. |&#9679; [**Document Intelligence Studio**](https://formrecognizer.appliedai.azure.com/studio/prebuilt?formType=idDocument)</br>&#9679; [**REST API**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-3.0.0&preserve-view=true&pivots=programming-language-rest-api#analyze-document-post-request)</br>&#9679; [**C# SDK**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-3.0.0&preserve-view=true#prebuilt-model)</br>&#9679; [**Python SDK**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-3.0.0&preserve-view=true#prebuilt-model)</br>&#9679; [**Java SDK**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-3.0.0&preserve-view=true#prebuilt-model)</br>&#9679; [**JavaScript**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-3.0.0&preserve-view=true#prebuilt-model)|
+
+> [!div class="nextstepaction"]
+> [Return to model types](#prebuilt-models)
+
+### Health insurance card
++
+| About | Description |Automation use cases | Development options |
+|-|--|-|--|
+| [**Health insurance card**](concept-insurance-card.md)|&#9679; Extract key information from US health insurance cards.</br>&#9679; [Data and field extraction](concept-insurance-card.md#field-extraction)|&#9679; Coverage and eligibility verification. </br>&#9679; Predictive modeling.</br>&#9679; Value-based analytics.|&#9679; [**Document Intelligence Studio**](https://formrecognizer.appliedai.azure.com/studio/prebuilt?formType=healthInsuranceCard.us)</br>&#9679; [**REST API**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-3.0.0&preserve-view=true&pivots=programming-language-rest-api#analyze-document-post-request)</br>&#9679; [**C# SDK**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-3.0.0&preserve-view=true#prebuilt-model)</br>&#9679; [**Python SDK**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-3.0.0&preserve-view=true#prebuilt-model)</br>&#9679; [**Java SDK**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-3.0.0&preserve-view=true#prebuilt-model)</br>&#9679; [**JavaScript**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-3.0.0&preserve-view=true#prebuilt-model)
+
+> [!div class="nextstepaction"]
+> [Return to model types](#prebuilt-models)
+
+### W-2
++
+| About | Description |Automation use cases | Development options |
+|-|--|-|--|
+|[**W-2 Form**](concept-w2.md) |&#9679; Extract key information from IRS US W2 tax forms (year 2018-2021).</br>&#9679; [Data and field extraction](concept-w2.md#field-extraction)|&#9679; Automated tax document management.</br>&#9679; Mortgage loan application processing. |&#9679; [**Document Intelligence Studio**](https://formrecognizer.appliedai.azure.com/studio/prebuilt?formType=tax.us.w2)</br>&#9679; [**REST API**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-3.0.0&preserve-view=true&pivots=programming-language-rest-api#analyze-document-post-request)</br>&#9679; [**C# SDK**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-3.0.0&preserve-view=true#prebuilt-model)</br>&#9679; [**Python SDK**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-3.0.0&preserve-view=true#prebuilt-model)</br>&#9679; [**Java SDK**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-3.0.0&preserve-view=true#prebuilt-model)</br>&#9679; [**JavaScript**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-3.0.0&preserve-view=true#prebuilt-model) |
+
+> [!div class="nextstepaction"]
+> [Return to model types](#prebuilt-models)
+
+### Business card
++
+| About | Description |Automation use cases | Development options |
+|-|--|-|--|
+|[**Business card model**](concept-business-card.md) |&#9679; Extract key information from business cards.</br>&#9679; [Data and field extraction](concept-business-card.md#field-extractions) |&#9679; Sales lead and marketing management. |&#9679; [**Document Intelligence Studio**](https://formrecognizer.appliedai.azure.com/studio/prebuilt?formType=businessCard)</br>&#9679; [**REST API**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-3.0.0&preserve-view=true&pivots=programming-language-rest-api#analyze-document-post-request)</br>&#9679; [**C# SDK**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-3.0.0&preserve-view=true#prebuilt-model)</br>&#9679; [**Python SDK**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-3.0.0&preserve-view=true#prebuilt-model)</br>&#9679; [**Java SDK**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-3.0.0&preserve-view=true#prebuilt-model)</br>&#9679; [**JavaScript**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-3.0.0&preserve-view=true#prebuilt-model)|
+
+> [!div class="nextstepaction"]
+> [Return to model types](#prebuilt-models)
+
+### Custom model overview
++
+| About | Description |Automation use cases |Development options |
+|-|--|--|--|
+|[**Custom model**](concept-custom.md) | Extracts information from forms and documents into structured data based on a model created from a set of representative training documents.|Extract distinct data from forms and documents specific to your business and use cases.|&#9679; [**Document Intelligence Studio**](https://formrecognizer.appliedai.azure.com/studio/custommodel/projects)</br>&#9679; [**REST API**](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-2023-02-28-preview/operations/BuildDocumentModel)</br>&#9679; [**C# SDK**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-3.0.0&preserve-view=true)</br>&#9679; [**Java SDK**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-3.0.0&preserve-view=true)</br>&#9679; [**JavaScript SDK**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-3.0.0&preserve-view=true)</br>&#9679; [**Python SDK**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-3.0.0&preserve-view=true)|
+
+> [!div class="nextstepaction"]
+> [Return to custom model types](#custom-models)
+
+#### Custom template
++
+ > [!NOTE]
+ > To train a custom template model, set the `buildMode` property to `template`.
+ > For more information, *see* [Training a template model](concept-custom-template.md#training-a-model).
+
+| About | Description |Automation use cases | Development options |
+|-|--|-|--|
+|[**Custom Template model**](concept-custom-template.md) | The custom template model extracts labeled values and fields from structured and semi-structured documents.</br> | Extract key data from highly structured documents with defined visual templates or common visual layouts, such as forms.| &#9679; [**Document Intelligence Studio**](https://formrecognizer.appliedai.azure.com/studio/custommodel/projects)</br>&#9679; [**REST API**](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-2023-02-28-preview/operations/BuildDocumentModel)</br>&#9679; [**C# SDK**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-3.0.0&preserve-view=true)</br>&#9679; [**Python SDK**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-3.0.0&preserve-view=true)</br>&#9679; [**Java SDK**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-3.0.0&preserve-view=true)</br>&#9679; [**JavaScript SDK**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-3.0.0&preserve-view=true)
+
+> [!div class="nextstepaction"]
+> [Return to model types](#custom-models)
+
+#### Custom neural
++
+ > [!NOTE]
+ > To train a custom neural model, set the `buildMode` property to `neural`.
+ > For more information, *see* [Training a neural model](concept-custom-neural.md#training-a-model).
+
+| About | Description |Automation use cases | Development options |
+|-|--|-|--|
+|[**Custom Neural model**](concept-custom-neural.md)| The custom neural model is used to extract labeled data from structured (surveys, questionnaires), semi-structured (invoices, purchase orders), and unstructured documents (contracts, letters).|Extract text data, checkboxes, and tabular fields from structured and unstructured documents.|&#9679; [**Document Intelligence Studio**](https://formrecognizer.appliedai.azure.com/studio/custommodel/projects)</br>&#9679; [**REST API**](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-2023-02-28-preview/operations/BuildDocumentModel)</br>&#9679; [**C# SDK**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-3.0.0&preserve-view=true)</br>&#9679; [**Java SDK**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-3.0.0&preserve-view=true)</br>&#9679; [**JavaScript SDK**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-3.0.0&preserve-view=true)</br>&#9679; [**Python SDK**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-3.0.0&preserve-view=true)
+
+> [!div class="nextstepaction"]
+> [Return to model types](#custom-models)
+
+#### Custom composed
++
+| About | Description |Automation use cases | Development options |
+|-|--|-|--|
+|[**Composed custom models**](concept-composed-models.md)| A composed model is created by taking a collection of custom models and assigning them to a single model built from your form types.| Useful when you've trained several models and want to group them to analyze similar form types like purchase orders.|&#9679; [**Document Intelligence Studio**](https://formrecognizer.appliedai.azure.com/studio/custommodel/projects)</br>&#9679; [**REST API**](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-2023-02-28-preview/operations/ComposeDocumentModel)</br>&#9679; [**C# SDK**](/dotnet/api/azure.ai.formrecognizer.training.formtrainingclient.startcreatecomposedmodel)</br>&#9679; [**Java SDK**](/jav?view=doc-intel-3.0.0&preserve-view=true)
+
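+As a brief illustration, the following sketch assigns two previously trained custom models to a single composed model, using the Python SDK (`azure-ai-formrecognizer`); the endpoint, key, and component model IDs are placeholders:
+
+```python
+# A minimal sketch: compose previously trained custom models into one model.
+# The endpoint, key, and component model IDs are placeholders.
+from azure.ai.formrecognizer import DocumentModelAdministrationClient
+from azure.core.credentials import AzureKeyCredential
+
+admin_client = DocumentModelAdministrationClient(
+    endpoint="https://<your-resource>.cognitiveservices.azure.com/",
+    credential=AzureKeyCredential("<your-key>"),
+)
+
+poller = admin_client.begin_compose_document_model(
+    ["<purchase-order-model-id>", "<invoice-variant-model-id>"],
+    description="Composed model for related form types",
+)
+print(poller.result().model_id)
+```
+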
+> [!div class="nextstepaction"]
+> [Return to model types](#custom-models)
+
+#### Custom classification model
++
+| About | Description |Automation use cases | Development options |
+|-|--|-|--|
+|[**Custom classification model**](concept-custom-classifier.md)| Custom classification models combine layout and language features to detect, identify, and classify documents within an input file.|&#9679; A loan application package containing an application form, payslip, and bank statement.</br>&#9679; A collection of scanned invoices. |&#9679; [Document Intelligence Studio](https://formrecognizer.appliedai.azure.com/studio/custommodel/projects)</br>&#9679; [REST API](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-2023-02-28-preview/operations/BuildDocumentClassifier)</br>
+
+> [!div class="nextstepaction"]
+> [Return to model types](#custom-models)
+
+### Contract model (preview)
++
+| About | Development options |
+|-|--|
+|Extract contract agreement and party details.|&#9679; [**Document Intelligence Studio**](https://formrecognizer.appliedai.azure.com/studio/prebuilt?formType=contract)</br>&#9679; [**REST API**](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-2023-02-28-preview/operations/AnalyzeDocument)
+
+> [!div class="nextstepaction"]
+> [Return to model types](#gated-preview-models)
+
+### US tax 1098 form (preview)
++
+| About | Development options |
+|-|--|
+|Extract mortgage interest information and details.|&#9679; [**Document Intelligence Studio**](https://formrecognizer.appliedai.azure.com/studio/prebuilt?formType=tax.us.1098)</br>&#9679; [**REST API**](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-2023-02-28-preview/operations/AnalyzeDocument)
+
+> [!div class="nextstepaction"]
+> [Return to model types](#gated-preview-models)
+
+### US tax 1098-E form (preview)
++
+| About | Development options |
+|-|--|
+|Extract student loan information and details.|&#9679; [**Document Intelligence Studio**](https://formrecognizer.appliedai.azure.com/studio/prebuilt?formType=tax.us.1098E)</br>&#9679; [**REST API**](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-2023-02-28-preview/operations/AnalyzeDocument)
+
+> [!div class="nextstepaction"]
+> [Return to model types](#gated-preview-models)
+
+### US tax 1098-T form (preview)
++
+| About | Development options |
+|-|--|
+|Extract tuition information and details.|&#9679; [**Document Intelligence Studio**](https://formrecognizer.appliedai.azure.com/studio/prebuilt?formType=tax.us.1098T)</br>&#9679; [**REST API**](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-2023-02-28-preview/operations/AnalyzeDocument)
+
+> [!div class="nextstepaction"]
+> [Return to model types](#gated-preview-models)
+++
+Azure AI Document Intelligence is a cloud-based [Azure AI service](../../ai-services/index.yml) for developers to build intelligent document processing solutions. Document Intelligence applies machine-learning-based optical character recognition (OCR) and document understanding technologies to extract text, tables, structure, and key-value pairs from documents. You can also label and train custom models to automate data extraction from structured, semi-structured, and unstructured documents. To learn more about each model, *see* the Concepts articles:
+
+| Model type | Model name |
+||--|
+|**Document analysis model**| &#9679; [**Layout analysis model**](concept-layout.md?view=doc-intel-2.1.0&preserve-view=true) </br> |
+| **Prebuilt models** | &#9679; [**Invoice model**](concept-invoice.md?view=doc-intel-2.1.0&preserve-view=true)</br>&#9679; [**Receipt model**](concept-receipt.md?view=doc-intel-2.1.0&preserve-view=true) </br>&#9679; [**Identity document (ID) model**](concept-id-document.md?view=doc-intel-2.1.0&preserve-view=true) </br>&#9679; [**Business card model**](concept-business-card.md?view=doc-intel-2.1.0&preserve-view=true) </br>
+| **Custom models** | &#9679; [**Custom model**](concept-custom.md) </br>&#9679; [**Composed model**](concept-model-overview.md?view=doc-intel-2.1.0&preserve-view=true)|
++++
+## Document Intelligence models and development options
+
+ >[!TIP]
+ >
+ > * For an enhanced experience and advanced model quality, try the [Document Intelligence v3.0 Studio](https://formrecognizer.appliedai.azure.com/studio).
+ > * The v3.0 Studio supports any model trained with v2.1 labeled data.
+ > * You can refer to the API migration guide for detailed information about migrating from v2.1 to v3.0.
+
+> [!NOTE]
+> The following models and development options are supported by the Document Intelligence service v2.1.
+
+Use the links in the table to learn more about each model and browse the API references:
+
+| Model| Description | Development options |
+|-|--|-|
+|[**Layout analysis**](concept-layout.md?view=doc-intel-2.1.0&preserve-view=true) | Extraction and analysis of text, selection marks, tables, and bounding box coordinates, from forms and documents. | &#9679; [**Document Intelligence labeling tool**](quickstarts/try-sample-label-tool.md#analyze-layout)</br>&#9679; [**REST API**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-2.1.0&preserve-view=true#try-it-layout-model)</br>&#9679; [**Client-library SDK**](quickstarts/get-started-sdks-rest-api.md)</br>&#9679; [**Document Intelligence Docker container**](containers/install-run.md?branch=main&tabs=layout#run-the-container-with-the-docker-compose-up-command)|
+|[**Custom model**](concept-custom.md?view=doc-intel-2.1.0&preserve-view=true) | Extraction and analysis of data from forms and documents specific to distinct business data and use cases.| &#9679; [**Document Intelligence labeling tool**](quickstarts/try-sample-label-tool.md#train-a-custom-form-model)</br>&#9679; [**REST API**](quickstarts/get-started-sdks-rest-api.md)</br>&#9679; [**Sample Labeling Tool**](concept-custom.md?view=doc-intel-2.1.0&preserve-view=true#build-a-custom-model)</br>&#9679; [**Document Intelligence Docker container**](containers/install-run.md?tabs=custom#run-the-container-with-the-docker-compose-up-command)|
+|[**Invoice model**](concept-invoice.md?view=doc-intel-2.1.0&preserve-view=true) | Automated data processing and extraction of key information from sales invoices. | &#9679; [**Document Intelligence labeling tool**](quickstarts/try-sample-label-tool.md#analyze-using-a-prebuilt-model)</br>&#9679; [**REST API**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-2.1.0&preserve-view=true#try-it-prebuilt-model)</br>&#9679; [**Client-library SDK**](quickstarts/get-started-sdks-rest-api.md#try-it-prebuilt-model)</br>&#9679; [**Document Intelligence Docker container**](containers/install-run.md?tabs=invoice#run-the-container-with-the-docker-compose-up-command)|
+|[**Receipt model**](concept-receipt.md?view=doc-intel-2.1.0&preserve-view=true) | Automated data processing and extraction of key information from sales receipts.| &#9679; [**Document Intelligence labeling tool**](quickstarts/try-sample-label-tool.md#analyze-using-a-prebuilt-model)</br>&#9679; [**REST API**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-2.1.0&preserve-view=true#try-it-prebuilt-model)</br>&#9679; [**Client-library SDK**](quickstarts/get-started-sdks-rest-api.md)</br>&#9679; [**Document Intelligence Docker container**](containers/install-run.md?tabs=receipt#run-the-container-with-the-docker-compose-up-command)|
+|[**Identity document (ID) model**](concept-id-document.md?view=doc-intel-2.1.0&preserve-view=true) | Automated data processing and extraction of key information from US driver's licenses and international passports.| &#9679; [**Document Intelligence labeling tool**](quickstarts/try-sample-label-tool.md#analyze-using-a-prebuilt-model)</br>&#9679; [**REST API**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-2.1.0&preserve-view=true#try-it-prebuilt-model)</br>&#9679; [**Client-library SDK**](quickstarts/get-started-sdks-rest-api.md)</br>&#9679; [**Document Intelligence Docker container**](containers/install-run.md?tabs=id-document#run-the-container-with-the-docker-compose-up-command)|
+|[**Business card model**](concept-business-card.md?view=doc-intel-2.1.0&preserve-view=true) | Automated data processing and extraction of key information from business cards.| &#9679; [**Document Intelligence labeling tool**](quickstarts/try-sample-label-tool.md#analyze-using-a-prebuilt-model)</br>&#9679; [**REST API**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-2.1.0&preserve-view=true#try-it-prebuilt-model)</br>&#9679; [**Client-library SDK**](quickstarts/get-started-sdks-rest-api.md)</br>&#9679; [**Document Intelligence Docker container**](containers/install-run.md?tabs=business-card#run-the-container-with-the-docker-compose-up-command)|
++
+## Data privacy and security
+
+ As with all AI services, developers using the Document Intelligence service should be aware of Microsoft policies on customer data. See our [Data, privacy, and security for Document Intelligence](/legal/cognitive-services/form-recognizer/fr-data-privacy-security) page.
+
+## Next steps
++
+* [Choose a Document Intelligence model](choose-model-feature.md)
+
+* Try processing your own forms and documents with the [Document Intelligence Studio](https://formrecognizer.appliedai.azure.com/studio)
+
+* Complete a [Document Intelligence quickstart](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-3.0.0&preserve-view=true) and get started creating a document processing app in the development language of your choice.
+++
+* Try processing your own forms and documents with the [Document Intelligence Sample Labeling tool](https://fott-2-1.azurewebsites.net/)
+
+* Complete a [Document Intelligence quickstart](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-2.1.0&preserve-view=true) and get started creating a document processing app in the development language of your choice.
+
ai-services Get Started Sdks Rest Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/document-intelligence/quickstarts/get-started-sdks-rest-api.md
+
+ Title: "Quickstart: Document Intelligence SDKs | REST API "
+
+description: Use a Document Intelligence SDK or the REST API to create a forms processing app that extracts key data and structure elements from your documents.
++++++ Last updated : 07/18/2023+
+zone_pivot_groups: programming-languages-set-formre
+monikerRange: '<=doc-intel-3.0.0'
+++
+# Get started with Document Intelligence
+++
+Get started with the latest version of Azure AI Document Intelligence. Azure AI Document Intelligence is a cloud-based Azure AI service that uses machine learning to extract key-value pairs, text, tables, and key data from your documents. You can easily integrate Document Intelligence models into your workflows and applications by using an SDK in the programming language of your choice or by calling the REST API. For this quickstart, we recommend that you use the free service while you're learning the technology. Remember that the number of free pages is limited to 500 per month.
+
+To learn more about Document Intelligence features and development options, visit our [Overview](../overview.md) page.
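+
+If you want a feel for the client-library approach before choosing a language tab, the following minimal sketch shows one way to call the service with the Python package (azure-ai-formrecognizer 3.2.0). It's illustrative only; the endpoint, key, and document URL are placeholders you supply yourself.
+
+```python
+from azure.ai.formrecognizer import DocumentAnalysisClient
+from azure.core.credentials import AzureKeyCredential
+
+# Placeholders: use your own resource endpoint and key.
+endpoint = "https://<your-resource>.cognitiveservices.azure.com/"
+key = "<your-key>"
+
+client = DocumentAnalysisClient(endpoint, AzureKeyCredential(key))
+
+# Analyze a publicly accessible document with the prebuilt layout model.
+poller = client.begin_analyze_document_from_url(
+    "prebuilt-layout", "https://<your-document-url>"  # placeholder URL
+)
+result = poller.result()
+
+for table in result.tables:
+    print(f"Table with {table.row_count} rows and {table.column_count} columns")
+```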
+++++++++++++++++
+That's it, congratulations!
+
+In this quickstart, you used a form Document Intelligence model to analyze various forms and documents. Next, explore the Document Intelligence Studio and reference documentation to learn about Document Intelligence API in depth.
+
+## Next steps
+
+>[!div class="nextstepaction"]
+> [**Try the Document Intelligence Studio**](https://formrecognizer.appliedai.azure.com/studio)
+
+> [!div class="nextstepaction"]
+> [**Explore our how-to documentation and take a deeper dive into Document Intelligence models**](../how-to-guides/use-sdk-rest-api.md?view=doc-intel-3.0.0&preserve-view=true)
+++
+Get started with Azure AI Document Intelligence using the programming language of your choice or the REST API. Document Intelligence is a cloud-based Azure AI service that uses machine learning to extract key-value pairs, text, and tables from your documents. You can easily call Document Intelligence models by integrating our client library SDKs into your workflows and applications. We recommend that you use the free service when you're learning the technology. Remember that the number of free pages is limited to 500 per month.
+
+To learn more about Document Intelligence features and development options, visit our [Overview](../overview.md) page.
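+
+As a rough illustration of the client-library approach for the v2.1 API, the sketch below uses the Python package's v2.1 client to read a receipt. The endpoint, key, and receipt URL are placeholders, not values from this article.
+
+```python
+from azure.ai.formrecognizer import FormRecognizerClient
+from azure.core.credentials import AzureKeyCredential
+
+client = FormRecognizerClient(
+    "https://<your-resource>.cognitiveservices.azure.com/",  # placeholder endpoint
+    AzureKeyCredential("<your-key>"),                        # placeholder key
+)
+
+# Recognize a publicly accessible receipt image with the v2.1 prebuilt receipt model.
+poller = client.begin_recognize_receipts_from_url("https://<your-receipt-url>")
+for receipt in poller.result():
+    for name, field in receipt.fields.items():
+        print(f"{name}: {field.value} (confidence {field.confidence})")
+```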
++++++++++++++++++
+That's it, congratulations! In this quickstart, you used Document Intelligence models to analyze various forms in different ways.
+
+## Next steps
+
+* For an enhanced experience and advanced model quality, try the [Document Intelligence v3.0 Studio](https://formrecognizer.appliedai.azure.com/studio).
+
+* The v3.0 Studio supports any model trained with v2.1 labeled data.
+
+* You can refer to the API migration guide for detailed information about migrating from v2.1 to v3.0.
+* *See* our [**REST API**](get-started-sdks-rest-api.md?view=doc-intel-3.0.0&preserve-view=true) or [**C#**](get-started-sdks-rest-api.md?view=doc-intel-3.0.0&preserve-view=true), [**Java**](get-started-sdks-rest-api.md?view=doc-intel-3.0.0&preserve-view=true), [**JavaScript**](get-started-sdks-rest-api.md?view=doc-intel-3.0.0&preserve-view=true), or [**Python**](get-started-sdks-rest-api.md?view=doc-intel-3.0.0&preserve-view=true) SDK quickstarts to get started with the v3.0 version.
+
ai-services Try Document Intelligence Studio https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/document-intelligence/quickstarts/try-document-intelligence-studio.md
+
+ Title: "Quickstart: Document Intelligence Studio | v3.0"
+
+description: Form and document processing, data extraction, and analysis using Document Intelligence Studio
+++++ Last updated : 07/18/2023+
+monikerRange: 'doc-intel-3.0.0'
+++
+<!-- markdownlint-disable MD001 -->
+
+# Get started: Document Intelligence Studio
++
+[Document Intelligence Studio](https://formrecognizer.appliedai.azure.com/) is an online tool for visually exploring, understanding, and integrating features from the Document Intelligence service in your applications. You can get started by exploring the pretrained models with sample documents or your own documents. You can also create projects to build custom template models and reference the models in your applications using the [Python SDK](get-started-sdks-rest-api.md?view=doc-intel-3.0.0&preserve-view=true) and other quickstarts.
+
+> [!VIDEO https://www.microsoft.com/en-us/videoplayer/embed/RE56n49]
+
+## Prerequisites for new users
+
+* An active [**Azure account**](https://azure.microsoft.com/free/cognitive-services/). If you don't have one, you can [**create a free account**](https://azure.microsoft.com/free/).
+* A [**Document Intelligence**](https://portal.azure.com/#create/Microsoft.CognitiveServicesFormRecognizer) or [**multi-service**](https://portal.azure.com/#create/Microsoft.CognitiveServicesAllInOne) resource.
+
+> [!TIP]
+> Create an Azure AI services resource if you plan to access multiple Azure AI services under a single endpoint/key. For Document Intelligence access only, create a Document Intelligence resource. Please note that you'll need a single-service resource if you intend to use [Azure Active Directory authentication](../../../active-directory/authentication/overview-authentication.md).
+
+## Models
+
+Prebuilt models help you add Document Intelligence features to your apps without having to build, train, and publish your own models. You can choose from several prebuilt models, each of which has its own set of supported data fields. The choice of model to use for the analyze operation depends on the type of document to be analyzed. Document Intelligence currently supports the following prebuilt models:
+
+#### Document analysis
+
+* [**General document**](https://formrecognizer.appliedai.azure.com/studio/prebuilt?formType=document): extract text, tables, structure, key-value pairs.
+* [**Layout**](https://formrecognizer.appliedai.azure.com/studio/layout): extract text, tables, selection marks, and structure information from documents (PDF, TIFF) and images (JPG, PNG, BMP).
+* [**Read**](https://formrecognizer.appliedai.azure.com/studio/read): extract text lines, words, their locations, detected languages, and handwritten style if detected from documents (PDF, TIFF) and images (JPG, PNG, BMP).
+
+#### Prebuilt
+
+* [**Invoice**](https://formrecognizer.appliedai.azure.com/studio/prebuilt?formType=invoice): extract text, selection marks, tables, key-value pairs, and key information from invoices.
+* [**Receipt**](https://formrecognizer.appliedai.azure.com/studio/prebuilt?formType=receipt): extract text and key information from receipts.
+* [**Health insurance card**](https://formrecognizer.appliedai.azure.com/studio): extract insurer, member, prescription, group number and other key information from US health insurance cards.
+* [**W-2**](https://formrecognizer.appliedai.azure.com/studio/prebuilt?formType=tax.us.w2): extract text and key information from W-2 tax forms.
+* [**ID document**](https://formrecognizer.appliedai.azure.com/studio/prebuilt?formType=idDocument): extract text and key information from driver licenses and international passports.
+* [**Business card**](https://formrecognizer.appliedai.azure.com/studio/prebuilt?formType=businessCard): extract text and key information from business cards.
+
+#### Custom
+
+* [**Custom extraction models**](https://formrecognizer.appliedai.azure.com/studio): extract information from forms and documents with custom extraction models. Quickly train a model by labeling as few as five sample documents.
+* [**Custom classification model**](https://formrecognizer.appliedai.azure.com/studio): train a custom classifier to distinguish between the different document types within your applications. Quickly train a model with as few as two classes and five samples per class.
+
+#### Gated preview models
+
+> [!NOTE]
+> To request access for gated preview models in Document Intelligence Studio, complete and submit the [**Document Intelligence private preview request form**](https://aka.ms/form-recognizer/preview/survey).
+
+* [**General document with query fields**](https://formrecognizer.appliedai.azure.com/studio): extract labels, values such as names, dates, and amounts from documents.
+* [**Contract**](https://formrecognizer.appliedai.azure.com/studio): extract the title and signatory party information (including names, references, and addresses) from contracts.
+* [**Vaccination card**](https://formrecognizer.appliedai.azure.com/studio): extract card holder name, health provider, and vaccination records from US COVID-19 vaccination cards.
+* [**US 1098 tax form**](https://formrecognizer.appliedai.azure.com/studio): extract mortgage interest information from US 1098 tax forms.
+* [**US 1098-E tax form**](https://formrecognizer.appliedai.azure.com/studio): extract student loan information from US 1098-E tax forms.
+* [**US 1098-T tax form**](https://formrecognizer.appliedai.azure.com/studio): extract tuition information from US 1098-T forms.
+
+After you've completed the prerequisites, navigate to [Document Intelligence Studio General Documents](https://formrecognizer.appliedai.azure.com/studio/document).
+
+In the following example, we use the General Documents feature. The steps to use other pretrained features like [W2 tax form](https://formrecognizer.appliedai.azure.com/studio/prebuilt?formType=tax.us.w2), [Read](https://formrecognizer.appliedai.azure.com/studio/read), [Layout](https://formrecognizer.appliedai.azure.com/studio/layout), [Invoice](https://formrecognizer.appliedai.azure.com/studio/prebuilt?formType=invoice), [Receipt](https://formrecognizer.appliedai.azure.com/studio/prebuilt?formType=receipt), [Business card](https://formrecognizer.appliedai.azure.com/studio/prebuilt?formType=businessCard), and [ID documents](https://formrecognizer.appliedai.azure.com/studio/prebuilt?formType=idDocument) models are similar.
+
+ :::image border="true" type="content" source="../media/quickstarts/select-general-document.png" alt-text="Screenshot showing selection of the General Document API to analyze a document in the Document Intelligence Studio.":::
+
+1. Select a Document Intelligence service feature from the Studio home page.
+
+1. This step is a one-time process unless you've already selected the service resource in a previous session. Select your Azure subscription, resource group, and resource. (You can change the resources anytime in "Settings" in the top menu.) Review and confirm your selections.
+
+1. Select the Analyze button to run analysis on the sample document or try your document by using the Add command.
+
+1. Use the controls at the bottom of the screen to zoom in and out and rotate the document view.
+
+1. Observe the highlighted extracted content in the document view. Hover your mouse over the keys and values to see details.
+
+1. In the output section's Result tab, browse the JSON output to understand the service response format.
+
+1. In the Code tab, browse the sample code for integration. Copy and download to get started.
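+
+The Code tab is the authoritative source for integration samples. Purely as an illustration, an equivalent call for the General Documents example above might look like the following Python sketch; the endpoint, key, and document URL are placeholders.
+
+```python
+from azure.ai.formrecognizer import DocumentAnalysisClient
+from azure.core.credentials import AzureKeyCredential
+
+client = DocumentAnalysisClient(
+    "https://<your-resource>.cognitiveservices.azure.com/",  # placeholder endpoint
+    AzureKeyCredential("<your-key>"),                        # placeholder key
+)
+
+# "prebuilt-document" is the general document model shown in the Studio example.
+poller = client.begin_analyze_document_from_url(
+    "prebuilt-document", "https://<your-document-url>"  # placeholder URL
+)
+result = poller.result()
+
+# Print the extracted key-value pairs, mirroring what the Studio highlights.
+for pair in result.key_value_pairs:
+    key_text = pair.key.content if pair.key else ""
+    value_text = pair.value.content if pair.value else ""
+    print(f"{key_text}: {value_text}")
+```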
+
+## Added prerequisites for custom projects
+
+In addition to the Azure account and a Document Intelligence or Azure AI services resource, you need:
+
+### Azure Blob Storage container
+
+A **standard performance** [**Azure Blob Storage account**](https://portal.azure.com/#create/Microsoft.StorageAccount-ARM). You create containers to store and organize your training documents within your storage account. If you don't know how to create an Azure storage account with a container, follow these quickstarts:
+
+* [**Create a storage account**](../../../storage/common/storage-account-create.md). When creating your storage account, make sure to select **Standard** performance in the **Instance details → Performance** field.
+* [**Create a container**](../../../storage/blobs/storage-quickstart-blobs-portal.md#create-a-container). When creating your container, set the **Public access level** field to **Container** (anonymous read access for containers and blobs) in the **New Container** window.
+
+### Configure CORS
+
+[CORS (Cross Origin Resource Sharing)](/rest/api/storageservices/cross-origin-resource-sharing--cors--support-for-the-azure-storage-services) needs to be configured on your Azure storage account for it to be accessible from the Document Intelligence Studio. To configure CORS in the Azure portal, you need access to the CORS tab of your storage account.
+
+1. Select the CORS tab for the storage account.
+
+ :::image type="content" source="../media/quickstarts/cors-setting-menu.png" alt-text="Screenshot of the CORS setting menu in the Azure portal.":::
+
+1. Start by creating a new CORS entry in the Blob service.
+
+1. Set the **Allowed origins** to `https://formrecognizer.appliedai.azure.com`.
+
+ :::image type="content" source="../media/quickstarts/cors-updated-image.png" alt-text="Screenshot that shows CORS configuration for a storage account.":::
+
+ > [!TIP]
+ > You can use the wildcard character '*' rather than a specified domain to allow all origin domains to make requests via CORS.
+
+1. Select all eight available options for **Allowed methods**.
+
+1. Approve all **Allowed headers** and **Exposed headers** by entering an * in each field.
+
+1. Set the **Max Age** to 120 seconds or any acceptable value.
+
+1. Select the save button at the top of the page to save the changes.
+
+CORS should now be configured to use the storage account from Document Intelligence Studio.
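+
+If you prefer to script this configuration, the same rule can be applied with the Azure Storage client library. The following is a sketch only; it assumes the azure-storage-blob package and a storage connection string that you supply yourself.
+
+```python
+from azure.storage.blob import BlobServiceClient, CorsRule
+
+# Placeholder: use your own storage account connection string.
+service = BlobServiceClient.from_connection_string("<storage-connection-string>")
+
+rule = CorsRule(
+    allowed_origins=["https://formrecognizer.appliedai.azure.com"],
+    allowed_methods=["DELETE", "GET", "HEAD", "MERGE", "OPTIONS", "PATCH", "POST", "PUT"],
+    allowed_headers=["*"],
+    exposed_headers=["*"],
+    max_age_in_seconds=120,
+)
+
+# Setting the service properties replaces the existing CORS rules for the Blob service.
+service.set_service_properties(cors=[rule])
+```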
+
+### Sample documents set
+
+1. Go to the [Azure portal](https://portal.azure.com/#home) and navigate as follows: **Your storage account** → **Data storage** → **Containers**
+
+ :::image border="true" type="content" source="../media/sas-tokens/data-storage-menu.png" alt-text="Screenshot: Data storage menu in the Azure portal.":::
+
+1. Select a **container** from the list.
+
+1. Select **Upload** from the menu at the top of the page.
+
+ :::image border="true" type="content" source="../media/sas-tokens/container-upload-button.png" alt-text="Screenshot: container upload button in the Azure portal.":::
+
+1. The **Upload blob** window appears.
+
+1. Select your file(s) to upload.
+
+ :::image border="true" type="content" source="../media/sas-tokens/upload-blob-window.png" alt-text="Screenshot: upload blob window in the Azure portal.":::
+
+> [!NOTE]
+> By default, the Studio will use form documents that are located at the root of your container. However, you can use data organized in folders by specifying the folder path in the Custom form project creation steps. *See* [**Organize your data in subfolders**](../how-to-guides/build-a-custom-model.md?view=doc-intel-2.1.0&preserve-view=true#organize-your-data-in-subfolders-optional)
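+
+You can also upload training documents programmatically. The snippet below is a sketch only; it assumes the azure-storage-blob package, and the connection string, container name, and local folder path are placeholders.
+
+```python
+import os
+
+from azure.storage.blob import BlobServiceClient
+
+service = BlobServiceClient.from_connection_string("<storage-connection-string>")  # placeholder
+container = service.get_container_client("<container-name>")                       # placeholder
+
+# Upload every file from a local folder of sample documents to the container root.
+local_folder = "sample-documents"  # placeholder path
+for file_name in os.listdir(local_folder):
+    with open(os.path.join(local_folder, file_name), "rb") as data:
+        container.upload_blob(name=file_name, data=data, overwrite=True)
+```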
+
+## Custom models
+
+To create custom models, you start with configuring your project:
+
+1. From the Studio home, select the Custom model card to open the Custom models page.
+
+1. Use the "Create a project" command to start the new project configuration wizard.
+
+1. Enter project details, select the Azure subscription and resource, and the Azure Blob storage container that contains your data.
+
+1. Review and submit your settings to create the project.
+
+1. From the labeling view, define the labels and their types that you're interested in extracting.
+
+1. Select the text in the document and select the label from the drop-down list or the labels pane.
+
+1. Label four more documents to get at least five documents labeled.
+
+1. Select the Train command, enter a model name, and select whether you want a custom template (form) or custom neural (document) model, then start training your custom model.
+
+1. Once the model is ready, use the Test command to validate it with your test documents and observe the results.
++
+### Labeling as tables
+
+> [!NOTE]
+>
+> * With the release of API versions 2022-06-30-preview and later, custom template models will add support for [cross page tabular fields (tables)](../concept-custom-template.md#tabular-fields).
+> * With the release of API versions 2022-06-30-preview and later, custom neural models will support [tabular fields (tables)](../concept-custom-template.md#tabular-fields) and models trained with API version 2022-08-31, or later will accept tabular field labels.
+
+1. Use the Delete command to delete models that aren't required.
+
+1. Download model details for offline viewing.
+
+1. Select multiple models and compose them into a new model to be used in your applications.
+
+For custom form models, you may need to extract data collections from your documents. Data collections may appear in a couple of formats. Using tables as the visual pattern:
+
+* Dynamic or variable count of values (rows) for a given set of fields (columns)
+
+* Specific collection of values for a given set of fields (columns and/or rows)
+
+**Label as dynamic table**
+
+Use dynamic tables to extract a variable number of values (rows) for a given set of fields (columns):
+
+1. Add a new "Table" type label, select "Dynamic table" type, and name your label.
+
+1. Add the number of columns (fields) and rows (for data) that you need.
+
+1. Select the text in your page and then choose the cell to assign to the text. Repeat for all rows and columns in all pages in all documents.
++
+**Label as fixed table**
+
+Use fixed tables to extract a specific collection of values for a given set of fields (columns and/or rows):
+
+1. Create a new "Table" type label, select "Fixed table" type, and name it.
+
+1. Add the number of columns and rows that you need corresponding to the two sets of fields.
+
+1. Select the text in your page and then choose the cell to assign the text to. Repeat for other documents.
++
+### Signature detection
+
+>[!NOTE]
+> Signature fields are currently only supported for custom template models. When training a custom neural model, labeled signature fields are ignored.
+
+To label for signature detection: (Custom form only)
+
+1. In the labeling view, create a new "Signature" type label and name it.
+
+1. Use the Region command to create a rectangular region at the expected location of the signature.
+
+1. Select the drawn region and choose the Signature type label to assign it to your drawn region. Repeat for other documents.
++
+## Next steps
+
+* Follow our [**Document Intelligence v3.0 migration guide**](../v3-migration-guide.md) to learn the differences from the previous version of the REST API.
+* Explore our [**v3.0 SDK quickstarts**](get-started-sdks-rest-api.md?view=doc-intel-3.0.0&preserve-view=true) to try the v3.0 features in your applications using the new SDKs.
+* Refer to our [**v3.0 REST API quickstarts**](get-started-sdks-rest-api.md?view=doc-intel-3.0.0&preserve-view=true) to try the v3.0 features using the new REST API.
+
+[Get started with the Document Intelligence Studio](https://formrecognizer.appliedai.azure.com).
ai-services Try Sample Label Tool https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/document-intelligence/quickstarts/try-sample-label-tool.md
+
+ Title: "Quickstart: Label forms, train a model, and analyze forms using the Sample Labeling tool - Document Intelligence"
+
+description: In this quickstart, you'll learn to use the Document Intelligence Sample Labeling tool to manually label form documents. Then you'll train a custom document processing model with the labeled documents and use the model to extract key/value pairs.
+++++ Last updated : 07/18/2023+
+monikerRange: 'doc-intel-2.1.0'
++
+<!-- markdownlint-disable MD001 -->
+<!-- markdownlint-disable MD024 -->
+<!-- markdownlint-disable MD033 -->
+<!-- markdownlint-disable MD034 -->
+<!-- markdownlint-disable MD029 -->
+# Get started with the Document Intelligence Sample Labeling tool
+
+**This article applies to:** ![Document Intelligence v2.1 checkmark](../media/yes-icon.png) **Document Intelligence v2.1**.
+
+>[!TIP]
+>
+> * For an enhanced experience and advanced model quality, try the [Document Intelligence v3.0 Studio](https://formrecognizer.appliedai.azure.com/studio).
+> * The v3.0 Studio supports any model trained with v2.1 labeled data.
+> * You can refer to the API migration guide for detailed information about migrating from v2.1 to v3.0.
+> * *See* our [**REST API**](get-started-sdks-rest-api.md?view=doc-intel-3.0.0&preserve-view=true) or [**C#**](get-started-sdks-rest-api.md?view=doc-intel-3.0.0&preserve-view=true), [**Java**](get-started-sdks-rest-api.md?view=doc-intel-3.0.0&preserve-view=true), [**JavaScript**](get-started-sdks-rest-api.md?view=doc-intel-3.0.0&preserve-view=true), or [Python](get-started-sdks-rest-api.md?view=doc-intel-3.0.0&preserve-view=true) SDK quickstarts to get started with the v3.0 version.
+
+The Azure AI Document Intelligence Sample Labeling tool is an open source tool that enables you to test the latest features of Document Intelligence and Optical Character Recognition (OCR):
+
+* [Analyze documents with the Layout API](#analyze-layout). Try the Layout API to extract text, tables, selection marks, and structure from documents.
+
+* [Analyze documents using a prebuilt model](#analyze-using-a-prebuilt-model). Start with a prebuilt model to extract data from invoices, receipts, identity documents, or business cards.
+
+* [Train and analyze a custom Form](#train-a-custom-form-model). Use a custom model to extract data from documents specific to distinct business data and use cases.
+
+## Prerequisites
+
+You'll need the following to get started:
+
+* An Azure subscription. You can [create one for free](https://azure.microsoft.com/free/cognitive-services/)
+
+* An Azure AI services or Document Intelligence resource. Once you have your Azure subscription, create a [single-service](https://portal.azure.com/#create/Microsoft.CognitiveServicesFormRecognizer), or [multi-service](https://portal.azure.com/#create/Microsoft.CognitiveServicesAllInOne) Document Intelligence resource in the Azure portal to get your key and endpoint. You can use the free pricing tier (`F0`) to try the service, and upgrade later to a paid tier for production.
+
+ > [!TIP]
+ > Create an Azure AI services resource if you plan to access multiple Azure AI services under a single endpoint/key. For Document Intelligence access only, create a Document Intelligence resource. Please note that you'll need a single-service resource if you intend to use [Azure Active Directory authentication](../../../active-directory/authentication/overview-authentication.md).
+
+## Create a Document Intelligence resource
++
+ :::image type="content" source="../media/containers/keys-and-endpoint.png" alt-text="Screenshot: keys and endpoint location in the Azure portal.":::
+
+## Analyze using a Prebuilt model
+
+Document Intelligence offers several prebuilt models to choose from. Each model has its own set of supported fields. The model to use for the analyze operation depends on the type of document to be analyzed. Here are the prebuilt models currently supported by the Document Intelligence service:
+
+* [**Invoice**](../concept-invoice.md): extracts text, selection marks, tables, key-value pairs, and key information from invoices.
+* [**Receipt**](../concept-receipt.md): extracts text and key information from receipts.
+* [**ID document**](../concept-id-document.md): extracts text and key information from driver licenses and international passports.
+* [**Business-card**](../concept-business-card.md): extracts text and key information from business cards.
+
+1. Navigate to the [Document Intelligence Sample Tool](https://fott-2-1.azurewebsites.net/).
+
+1. On the sample tool home page, select the **Use prebuilt model to get data** tile.
+
+ :::image type="content" source="../media/label-tool/prebuilt-1.jpg" alt-text="Screenshot of the layout model analyze results operation.":::
+
+1. Select the **Form Type** to analyze from the dropdown menu.
+
+1. Choose a URL for the file you would like to analyze from the options below:
+
+ * [**Sample invoice document**](https://raw.githubusercontent.com/Azure-Samples/cognitive-services-REST-api-samples/master/curl/form-recognizer/invoice_sample.jpg).
+ * [**Sample ID document**](https://raw.githubusercontent.com/Azure-Samples/cognitive-services-REST-api-samples/master/curl/form-recognizer/DriverLicense.png).
+ * [**Sample receipt image**](https://raw.githubusercontent.com/Azure-Samples/cognitive-services-REST-api-samples/master/curl/form-recognizer/contoso-allinone.jpg).
+ * [**Sample business card image**](https://raw.githubusercontent.com/Azure/azure-sdk-for-python/master/sdk/formrecognizer/azure-ai-formrecognizer/samples/sample_forms/business_cards/business-card-english.jpg).
+
+1. In the **Source** field, select **URL** from the dropdown menu, paste the selected URL, and select the **Fetch** button.
+
+ :::image type="content" source="../media/label-tool/fott-select-url.png" alt-text="Screenshot of source location dropdown menu.":::
+
+1. In the **Document Intelligence service endpoint** field, paste the endpoint that you obtained with your Document Intelligence subscription.
+
+1. In the **key** field, paste the key you obtained from your Document Intelligence resource.
+
+ :::image type="content" source="../media/fott-select-form-type.png" alt-text="Screenshot of the 'select-form-type' dropdown menu.":::
+
+1. Select **Run analysis**. The Document Intelligence Sample Labeling tool will call the Analyze Prebuilt API and analyze the document.
+
+1. View the results: see the extracted key-value pairs, line items, highlighted text, and detected tables.
+
+ :::image type="content" source="../media/label-tool/prebuilt-2.jpg" alt-text="Analyze Results of Document Intelligence invoice model":::
+
+1. Download the JSON output file to view the detailed results.
+
+ * The "readResults" node contains every line of text with its respective bounding box placement on the page.
+ * The "selectionMarks" node shows every selection mark (checkbox, radio mark) and whether its status is "selected" or "unselected".
+ * The "pageResults" section includes the tables extracted. For each table, the text, row, and column index, row and column spanning, bounding box, and more are extracted.
+ * The "documentResults" field contains key/value pairs information and line items information for the most relevant parts of the document.
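+
+The Sample Labeling tool calls the same v2.1 analyze operations you can call yourself. As a rough programmatic equivalent of the invoice example above, the following Python sketch uses the sample invoice URL listed earlier; the endpoint and key are placeholders.
+
+```python
+from azure.ai.formrecognizer import FormRecognizerClient
+from azure.core.credentials import AzureKeyCredential
+
+client = FormRecognizerClient(
+    "https://<your-resource>.cognitiveservices.azure.com/",  # placeholder endpoint
+    AzureKeyCredential("<your-key>"),                        # placeholder key
+)
+
+# Sample invoice from the list of sample documents above.
+invoice_url = (
+    "https://raw.githubusercontent.com/Azure-Samples/cognitive-services-REST-api-samples"
+    "/master/curl/form-recognizer/invoice_sample.jpg"
+)
+
+poller = client.begin_recognize_invoices_from_url(invoice_url)
+for invoice in poller.result():
+    for name, field in invoice.fields.items():
+        print(f"{name}: {field.value} (confidence {field.confidence})")
+```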
+
+## Analyze Layout
+
+The Azure AI Document Intelligence Layout API extracts text, tables, selection marks, and structure information from documents (PDF, TIFF) and images (JPG, PNG, BMP).
+
+1. Navigate to the [Document Intelligence Sample Tool](https://fott-2-1.azurewebsites.net/).
+
+1. On the sample tool home page, select **Use Layout to get text, tables and selection marks**.
+
+ :::image type="content" source="../media/label-tool/layout-1.jpg" alt-text="Connection settings for Layout Document Intelligence tool.":::
+
+1. In the **Document Intelligence service endpoint** field, paste the endpoint that you obtained with your Document Intelligence subscription.
+
+1. In the **key** field, paste the key you obtained from your Document Intelligence resource.
+
+1. In the **Source** field, select **URL** from the dropdown menu, paste the following URL `https://raw.githubusercontent.com/Azure-Samples/cognitive-services-REST-api-samples/master/curl/form-recognizer/layout-page-001.jpg`, and select the **Fetch** button.
+
+1. Select **Run Layout**. The Document Intelligence Sample Labeling tool will call the Analyze Layout API and analyze the document.
+
+ :::image type="content" source="../media/fott-layout.png" alt-text="Screenshot: Layout dropdown menu.":::
+
+1. View the results: see the highlighted extracted text, detected selection marks, and detected tables.
+
+ :::image type="content" source="../media/label-tool/layout-3.jpg" alt-text="Connection settings for Document Intelligence tool.":::
+
+1. Download the JSON output file to view the detailed Layout Results.
+ * The `readResults` node contains every line of text with its respective bounding box placement on the page.
+ * The `selectionMarks` node shows every selection mark (checkbox, radio mark) and whether its status is `selected` or `unselected`.
+ * The `pageResults` section includes the tables extracted. For each table, the text, row, and column index, row and column spanning, bounding box, and more are extracted.
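+
+As with the prebuilt models, a rough programmatic equivalent of the Layout analysis above can be sketched with the Python client library and the same sample URL; the endpoint and key are placeholders.
+
+```python
+from azure.ai.formrecognizer import FormRecognizerClient
+from azure.core.credentials import AzureKeyCredential
+
+client = FormRecognizerClient(
+    "https://<your-resource>.cognitiveservices.azure.com/",  # placeholder endpoint
+    AzureKeyCredential("<your-key>"),                        # placeholder key
+)
+
+layout_url = (
+    "https://raw.githubusercontent.com/Azure-Samples/cognitive-services-REST-api-samples"
+    "/master/curl/form-recognizer/layout-page-001.jpg"
+)
+
+# The layout (content) operation returns pages with text lines and tables.
+poller = client.begin_recognize_content_from_url(layout_url)
+for page in poller.result():
+    print(f"Page has {len(page.lines)} lines and {len(page.tables)} tables")
+```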
+
+## Train a custom form model
+
+Train a custom model to analyze and extract data from forms and documents specific to your business. The API is a machine-learning program trained to recognize form fields within your distinct content and extract key-value pairs and table data. You'll need at least five examples of the same form type to get started, and your custom model can be trained with or without labeled datasets.
+
+### Prerequisites for training a custom form model
+
+* An Azure Storage blob container that contains a set of training data. Make sure all the training documents are of the same format. If you have forms in multiple formats, organize them into subfolders based on common format. For this project, you can use our [sample data set](https://github.com/Azure-Samples/cognitive-services-REST-api-samples/blob/master/curl/form-recognizer/sample_data_without_labels.zip).
+
+* If you don't know how to create an Azure storage account with a container, follow the [Azure Storage quickstart for Azure portal](../../../storage/blobs/storage-quickstart-blobs-portal.md).
+
+* Configure CORS
+
+ [CORS (Cross Origin Resource Sharing)](/rest/api/storageservices/cross-origin-resource-sharing--cors--support-for-the-azure-storage-services) needs to be configured on your Azure storage account for it to be accessible from the Document Intelligence Studio. To configure CORS in the Azure portal, you'll need access to the CORS tab of your storage account.
+
+ 1. Select the CORS tab for the storage account.
+
+ :::image type="content" source="../media/quickstarts/cors-setting-menu.png" alt-text="Screenshot of the CORS setting menu in the Azure portal.":::
+
+ 1. Start by creating a new CORS entry in the Blob service.
+
+ 1. Set the **Allowed origins** to `https://fott-2-1.azurewebsites.net`.
+
+ :::image type="content" source="../media/quickstarts/storage-cors-example.png" alt-text="Screenshot that shows CORS configuration for a storage account.":::
+
+ > [!TIP]
+ > You can use the wildcard character '*' rather than a specified domain to allow all origin domains to make requests via CORS.
+
+ 1. Select all the available 8 options for **Allowed methods**.
+
+ 1. Approve all **Allowed headers** and **Exposed headers** by entering an * in each field.
+
+ 1. Set the **Max Age** to 120 seconds or any acceptable value.
+
+ 1. Select the save button at the top of the page to save the changes.
+
+### Use the Sample Labeling tool
+
+1. Navigate to the [Document Intelligence Sample Tool](https://fott-2-1.azurewebsites.net/).
+
+1. On the sample tool home page, select **Use custom form to train a model with labels and get key-value pairs**.
+
+ :::image type="content" source="../media/label-tool/custom-1.jpg" alt-text="Train a custom model.":::
+
+1. Select **New project**
+
+ :::image type="content" source="../media/fott-new-project.png" alt-text="Screenshot: select a new project prompt.":::
+
+#### Create a new project
+
+Configure the **Project Settings** fields with the following values:
+
+1. **Display Name**. Name your project.
+
+1. **Security Token**. Each project will auto-generate a security token that can be used to encrypt/decrypt sensitive project settings. You can find security tokens in the Application Settings by selecting the gear icon at the bottom of the left navigation bar.
+
+1. **Source connection**. The Sample Labeling tool connects to a source (your original uploaded forms) and a target (created labels and output data). Connections can be set up and shared across projects. They use an extensible provider model, so you can easily add new source/target providers.
+
+ * Create a new connection, select the **Add Connection** button. Complete the fields with the following values:
+
+ > [!div class="checklist"]
+ >
+ > * **Display Name**. Name the connection.
+ > * **Description**. Add a brief description.
+ > * **SAS URL**. Paste the shared access signature (SAS) URL for your Azure Blob Storage container.
+
+ * To retrieve the SAS URL for your custom model training data, go to your storage resource in the Azure portal and select the **Storage Explorer** tab. Navigate to your container, right-click, and select **Get shared access signature**. It's important to get the SAS for your container, not for the storage account itself. Make sure the **Read**, **Write**, **Delete** and **List** permissions are checked, and select **Create**. Then copy the value in the **URL** section to a temporary location. It should have the form: `https://<storage account>.blob.core.windows.net/<container name>?<SAS value>`.
+
+ :::image type="content" source="../media/quickstarts/get-sas-url.png" alt-text="SAS location.":::
+
+1. **Folder Path** (optional). If your source forms are located within a folder in the blob container, specify the folder name.
+
+1. **Document Intelligence Service Uri** - Your Document Intelligence endpoint URL.
+
+1. **Key**. Your Document Intelligence key.
+
+1. **API version**. Keep the v2.1 (default) value.
+
+1. **Description** (optional). Describe your project.
+
+ :::image type="content" source="../media/label-tool/connections.png" alt-text="Connection settings":::
+
+#### Label your forms
+
+ :::image type="content" source="../media/label-tool/new-project.png" alt-text="New project page":::
+
+When you create or open a project, the main tag editor window opens. The tag editor consists of three parts:
+
+* A resizable preview pane that contains a scrollable list of forms from the source connection.
+* The main editor pane that allows you to apply tags.
+* The tags editor pane that allows users to modify, lock, reorder, and delete tags.
+
+##### Identify text and tables
+
+Select **Run Layout on unvisited documents** on the left pane to get the text and table layout information for each document. The labeling tool will draw bounding boxes around each text element.
+
+The labeling tool will also show which tables have been automatically extracted. Select the table/grid icon on the left-hand side of the document to see the extracted table. Because the table content is automatically extracted, we won't label the table content, but rather rely on the automated extraction.
+
+ :::image type="content" source="../media/label-tool/table-extraction.png" alt-text="Table visualization in Sample Labeling tool.":::
+
+##### Apply labels to text
+
+Next, you'll create tags (labels) and apply them to the text elements that you want the model to analyze. Note the Sample Label data set includes already labeled fields; we'll add another field.
+
+Use the tags editor pane to create a new tag you'd like to identify:
+
+1. Select **+** plus sign to create a new tag.
+
+1. Enter "Total" as the tag name.
+
+1. Select **Enter** to save the tag.
+
+1. In the main editor, select the total value from the highlighted text elements.
+
+1. Select the Total tag to apply it to the value, or press the corresponding keyboard key. The number keys are assigned as hotkeys for the first 10 tags. You can reorder your tags using the up and down arrow icons in the tag editor pane. Follow these steps to label all five forms in the sample dataset:
+
+ > [!Tip]
+ > Keep the following tips in mind when you're labeling your forms:
+ >
+ > * You can only apply one tag to each selected text element.
+ >
+ > * Each tag can only be applied once per page. If a value appears multiple times on the same form, create different tags for each instance. For example: "invoice# 1", "invoice# 2" and so on.
+ > * Tags cannot span across pages.
+ > * Label values as they appear on the form; don't try to split a value into two parts with two different tags. For example, an address field should be labeled with a single tag even if it spans multiple lines.
+ > * Don't include keys in your tagged fields&mdash;only the values.
+ > * Table data should be detected automatically and will be available in the final output JSON file in the 'pageResults' section. However, if the model fails to detect all of your table data, you can also label and train a model to detect tables, see [Train a custom model | Label your forms](../label-tool.md#label-your-forms)
+ > * Use the buttons to the right of the **+** to search, rename, reorder, and delete your tags.
+ > * To remove an applied tag without deleting the tag itself, select the tagged rectangle on the document view and press the delete key.
+ >
+
+ :::image type="content" source="../media/label-tool/custom-1.jpg" alt-text="Label the samples.":::
+
+#### Train a custom model
+
+Choose the Train icon on the left pane to open the Training page. Then select the **Train** button to begin training the model. Once the training process completes, you'll see the following information:
+
+* **Model ID** - The ID of the model that was created and trained. Each training call creates a new model with its own ID. Copy this string to a secure location; you'll need it if you want to do prediction calls through the [REST API](./get-started-sdks-rest-api.md?pivots=programming-language-rest-api) or [client library](./get-started-sdks-rest-api.md).
+
+* **Average Accuracy** - The model's average accuracy. You can improve model accuracy by labeling more forms and retraining to create a new model. We recommend starting by labeling five forms, analyzing and testing the results, and then adding more forms if needed.
+* The list of tags, and the estimated accuracy per tag. For more information, _see_ [Interpret and improve accuracy and confidence](../concept-accuracy-confidence.md).
+
+ :::image type="content" source="../media/label-tool/custom-3.jpg" alt-text="Training view tool.":::
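+
+For reference, the same training can be started from code with the v2.1 client library. The sketch below is illustrative only; the endpoint, key, and SAS URL for your training container are placeholders.
+
+```python
+from azure.ai.formrecognizer import FormTrainingClient
+from azure.core.credentials import AzureKeyCredential
+
+training_client = FormTrainingClient(
+    "https://<your-resource>.cognitiveservices.azure.com/",  # placeholder endpoint
+    AzureKeyCredential("<your-key>"),                        # placeholder key
+)
+
+# use_training_labels=True trains with labeled data; set it to False to train without labels.
+poller = training_client.begin_training(
+    "<SAS-URL-of-your-training-container>",  # placeholder SAS URL
+    use_training_labels=True,
+)
+model = poller.result()
+print("Model ID:", model.model_id)
+```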
+
+#### Analyze a custom form
+
+1. Select the **Analyze** icon from the navigation bar to test your model.
+
+1. Select source **Local file** and browse for a file to select from the sample dataset that you unzipped in the test folder.
+
+1. Choose the **Run analysis** button to get key/value pair, text, and table predictions for the form. The tool will apply tags in bounding boxes and will report the confidence of each tag.
+
+ :::image type="content" source="../media/analyze.png" alt-text="Training view.":::
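+
+If you'd like to run the same prediction from code, a sketch along these lines can be used with the v2.1 client library; the model ID, endpoint, key, and form URL are placeholders.
+
+```python
+from azure.ai.formrecognizer import FormRecognizerClient
+from azure.core.credentials import AzureKeyCredential
+
+client = FormRecognizerClient(
+    "https://<your-resource>.cognitiveservices.azure.com/",  # placeholder endpoint
+    AzureKeyCredential("<your-key>"),                        # placeholder key
+)
+
+# Use the Model ID reported on the Training page after training completes.
+poller = client.begin_recognize_custom_forms_from_url(
+    "<your-model-id>",               # placeholder model ID
+    "https://<your-test-form-url>",  # placeholder form URL
+)
+for form in poller.result():
+    for name, field in form.fields.items():
+        print(f"{name}: {field.value} (confidence {field.confidence})")
+```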
+
+That's it! You've learned how to use the Document Intelligence sample tool for Document Intelligence prebuilt, layout and custom models. You've also learned to analyze a custom form with manually labeled data.
+
+## Next steps
+
+>[!div class="nextstepaction"]
+> [**Document Intelligence Studio**](https://formrecognizer.appliedai.azure.com/studio)
ai-services Resource Customer Stories https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/document-intelligence/resource-customer-stories.md
+
+ Title: Customer spotlight
+
+description: Highlight customer stories with Document Intelligence.
+++++ Last updated : 07/18/2023++
+monikerRange: 'doc-intel-2.1.0'
+++
+# Customer spotlight
+
+The following customers and partners have adopted Document Intelligence across a wide range of business and technical scenarios.
+
+| Customer/Partner | Description | Link |
+||-|-|
+| **Acumatica** | [**Acumatica**](https://www.acumatica.com/) is a technology provider that develops cloud and browser-based enterprise resource planning (ERP) software for small and medium-sized businesses (SMBs). To bring expense claims into the modern age, Acumatica incorporated Document Intelligence into its native application. Document Intelligence's prebuilt receipt API and machine learning capabilities are used to automatically extract data from receipts. Acumatica's customers can file multiple, error-free claims in a matter of seconds, freeing up more time to focus on other important tasks. | |
+ | **Air Canada** | In September 2021, [**Air Canada**](https://www.aircanada.com/) was tasked with verifying the COVID-19 vaccination status of thousands of worldwide employees in only two months. After realizing manual verification would be too costly and complex within the time constraint, Air Canada turned to its internal AI team for an automated solution. The AI team partnered with Microsoft and used Document Intelligence to roll out a fully functional, accurate solution within weeks. This partnership met the government mandate on time and saved thousands of hours of manual work. | [Customer story](https://customers.microsoft.com/story/1505667713938806113-air-canada-travel-transportation-azure-form-recognizer)|
+|**Arkas Logistics** | [**Arkas Logistics**](http://www.arkaslojistik.com.tr/) operates under the umbrella of Arkas Holding, Türkiye's leading holding institution, and operates in 23 countries/regions. During the COVID-19 crisis, the company was able to provide outstanding, complete logistical services thanks to its focus on contactless operation and digitalization steps. Document Intelligence powers a solution that maintains the continuity of the supply chain and allows for uninterrupted service. ||
+|**Automation Anywhere**| [**Automation Anywhere**](https://www.automationanywhere.com/) is on a singular and unwavering mission to democratize automation by liberating teams from mundane, repetitive tasks, and allowing more time for innovation and creativity with cloud-native robotic process automation (RPA) software. To protect the citizens of the United Kingdom, healthcare providers must process tens of thousands of COVID-19 tests daily, each one accompanied by a form for the World Health Organization (WHO). Manually completing and processing these forms would potentially slow testing and divert resources away from patient care. In response, Automation Anywhere built an AI-powered bot to help a healthcare provider automatically process and submit the COVID-19 test forms at scale. | |
+|**AvidXchange**| [**AvidXchange**](https://www.avidxchange.com/) has developed an accounts payable automation solution applying Document Intelligence. AvidXchange partners with Azure AI services to deliver an accounts payable automation solution for the middle market. Customers benefit from faster invoice processing times and increased accuracy to ensure their suppliers are paid the right amount, at the right time. ||
+|**Blue Prism**| [**Blue Prism**](https://www.blueprism.com/) Decipher is an AI-powered document processing capability that's directly embedded into the company's connected-RPA platform. Decipher works with Document Intelligence to help organizations process forms faster and with less human effort. One of Blue Prism's customers has been testing the solution to automate invoice handling as part of its procurement process. ||
+|**Chevron**| [**Chevron**](https://www.chevron.com//) Canada Business Unit is now using Document Intelligence with UiPath's robotic process automation platform to automate the extraction of data and move it into back-end systems for analysis. Subject matter experts have more time to focus on higher-value activities and information flows more rapidly. Accelerated operational control enables the company to analyze its business with greater speed, accuracy, and depth. | [Customer story](https://customers.microsoft.com/story/chevron-mining-oil-gas-azure-cognitive-services)|
+|**Cross Masters**|[**Cross Masters**](https://crossmasters.com/), uses cutting-edge AI technologies not only as a passion, but as an essential part of a work culture requiring continuous innovation. One of the latest success stories is automation of manual paperwork required to process thousands of invoices. Cross Masters used Document Intelligence to develop a unique, customized solution, to provide clients with market insights from a large set of collected invoices. Most impressive is the extraction quality and continuous introduction of new features, such as model composing and table labeling. ||
+|**Element**| [**Element**](https://www.element.com/) is a global business that provides specialist testing, inspection, and certification services to a diverse range of businesses. Element is one of the fastest growing companies in the global testing, inspection, and certification sector, with over 6,500 engaged experts working in more than 200 facilities across the globe. When the finance team for the Americas was forced to work from home during the COVID-19 pandemic, it needed to digitalize its paper processes fast. The creativity of the team and its use of Azure AI Document Intelligence delivered more than business as usual; it delivered significant efficiencies. The Element team used the tools in Azure so the next phase could be expedited. Rather than coding from scratch, they saw the opportunity to use Azure AI Document Intelligence. This integration quickly gave them the functionality they needed, together with the agility and security of Azure. Azure Logic Apps is used to automate the process of extracting the documents from email, storing them, and updating the system with the extracted data. Azure AI Vision, part of Azure AI services, partners with Azure AI Document Intelligence to extract the right data points from the invoice documents, whether they're PDFs or scanned images. | [Customer story](https://customers.microsoft.com/story/1414941527887021413-element)|
+|**Emaar Properties**| [**Emaar Properties**](https://www.emaar.com/en/), operates Dubai Mall, the world's most-visited retail and entertainment destination. Each year, the Dubai Mall draws more than 80 million visitors. To enrich the shopping experience, Emaar Properties offers a unique rewards program through a dedicated mobile app. Loyalty program points are earned via submitted receipts. Emaar Properties uses Azure AI Document Intelligence to process submitted receipts and has achieved 92 percent reading accuracy.| [Customer story](https://customers.microsoft.com/story/1459754150957690925-emaar-retailers-azure-en-united-arab-emirates)|
+|**EY**| [**EY**](https://ey.com/) (Ernst & Young Global Limited) is a multinational professional services network that helps to create long-term value for clients and build trust in the capital markets. Enabled by data and technology, diverse EY teams in over 150 countries/regions help clients grow, transform, and operate. EY teams work across assurance, consulting, law, strategy, tax, and transactions to find solutions for complex issues facing our world today. The EY Technology team collaborated with Microsoft to build a platform that hastens invoice extraction and contract comparison processes. Azure AI Document Intelligence and Custom Vision partnered to enable EY teams to automate and improve the OCR and document handling processes for its transactions services clients. | [Customer story](https://customers.microsoft.com/story/1404985164224935715-ey-professional-services-azure-form-recognizer)|
+|**Financial Fabric**| [**Financial Fabric**](https://www.financialfabric.com/), a Microsoft Cloud Solution Provider, delivers data architecture, science, and analytics services to investment managers at hedge funds, family offices, and corporate treasuries. Its daily processes involve extracting and normalizing data from thousands of complex financial documents, such as bank statements and legal agreements. The company then provides custom analytics to help its clients make better investment decisions. Extracting this data previously took days or weeks. By using Document Intelligence, Financial Fabric has reduced the time it takes to go from extraction to analysis to just minutes. ||
+|**Fujitsu**| [**Fujitsu**](https://scanners.us.fujitsu.com/about-us) is the world leader in document scanning technology, with more than 50 percent of global market share, but that doesn't stop the company from constantly innovating. To improve the performance and accuracy of its cloud scanning solution, Fujitsu incorporated Azure AI Document Intelligence. It took only a few months to deploy the new technologies, and they have boosted character recognition rates as high as 99.9 percent. This collaboration helps Fujitsu deliver market-leading innovation and give its customers powerful and flexible tools for end-to-end document management. | [Customer story](https://customers.microsoft.com/en-us/story/1504311236437869486-fujitsu-document-scanning-azure-form-recognizer)|
+|**GEP**| [**GEP**](https://www.gep.com/) has developed an invoice processing solution for a client using Document Intelligence. GEP combined their AI solution with Azure AI Document Intelligence to automate the processing of 4,000 invoices a day for a client, saving them tens of thousands of hours of manual effort. "This collaborative effort improved accuracy, controls, and compliance on a global scale," says Sarateudu Sethi, GEP's Vice President of Artificial Intelligence. ||
+|**HCA Healthcare**| [**HCA Healthcare**](https://hcahealthcare.com/) is one of the nation's leading providers of healthcare with over 180 hospitals and 2,000 sites-of-care located throughout the United States and serving approximately 35 million patients each year. Currently, they're using Azure AI Document Intelligence to simplify and improve the patient onboarding experience and reducing administrative time spent entering repetitive data into the care center's system. | [Customer story](https://customers.microsoft.com/story/1404891793134114534-hca-healthcare-healthcare-provider-azure)|
+|**Icertis**| [**Icertis**](https://www.icertis.com/), is a Software as a Service (SaaS) provider headquartered in Bellevue, Washington. Icertis digitally transforms the contract management process with a cloud-based, AI-powered, contract lifecycle management solution. Azure AI Document Intelligence enables Icertis Contract Intelligence to take key-value pairs embedded in contracts and create structured data understood and operated upon by machine algorithms. Through these and other powerful Azure Cognitive and AI services, Icertis empowers customers in every industry to improve business in multiple ways: optimized manufacturing operations, added agility to retail strategies, reduced risk in IT services, and faster delivery of life-saving pharmaceutical products. ||
+|**Instabase**| [**Instabase**](https://instabase.com/) is a horizontal application platform that provides best-in-class machine learning processes to help retrieve, organize, identify, and understand complex masses of unorganized data. The application platform then brings this data into business workflows as organized information. This workflow provides a repository of integrative applications to orchestrate and harness that information with the means to rapidly extend and enhance them as required. The applications are fully containerized for widespread, infrastructure-agnostic deployment. | [Customer story](https://customers.microsoft.com/en-gb/story/1376278902865681018-instabase-partner-professional-services-azure)|
+|**Northern Trust**| [**Northern Trust**](https://www.northerntrust.com/) is a leading provider of wealth management, asset servicing, asset management, and banking to corporations, institutions, families, and individuals. As part of its initiative to digitize alternative asset servicing, Northern Trust has launched an AI-powered solution to extract unstructured investment data from alternative asset documents and make it accessible and actionable for asset-owner clients. Azure AI services accelerate time-to-value for enterprises building AI solutions. This proprietary solution transforms crucial information from various unstructured formats into digital, actionable insights for investment teams. | [Customer story](https://www.businesswire.com/news/home/20210914005449/en/Northern-Trust-Automates-Data-Extraction-from-Alternative-Asset-Documentation)|
+|**Old Mutual**| [**Old Mutual**](https://www.oldmutual.co.za/) is Africa's leading financial services group with a comprehensive range of investment capabilities. They're the industry leader in retirement fund solutions, investments, asset management, group risk benefits, insurance, and multi-fund management. The Old Mutual team used Microsoft Natural Language Processing and Optical Character Recognition to provide the basis for automating key customer transactions received via emails. It also offered an opportunity to identify incomplete customer requests in order to nudge customers to the correct digital channels. Old Mutual's extensible solution technology was further developed as a microservice to be consumed by any enterprise application through a secure API management layer. | [Customer story](https://customers.microsoft.com/en-us/story/1507561807660098567-old-mutual-banking-capital-markets-azure-en-south-africa)|
+|**Standard Bank**| [**Standard Bank of South Africa**](https://www.standardbank.co.za/southafrica/personal/home) is Africa's largest bank by assets. Standard Bank is headquartered in Johannesburg, South Africa, and has more than 150 years of trade experience in Africa and beyond. When manual due diligence in cross-border transactions began absorbing too much staff time, the bank decided it needed a new way forward. Standard Bank uses Document Intelligence to significantly reduce its cross-border payments registration and processing time. | [Customer story](https://customers.microsoft.com/en-hk/story/1395059149522299983-standard-bank-of-south-africa-banking-capital-markets-azure-en-south-africa)|
+| **WEX**| [**WEX**](https://www.wexinc.com/) has developed a tool to process Explanation of Benefits documents using Document Intelligence. "The technology is truly amazing. I was initially worried that this type of solution wouldn't be feasible, but I soon realized that Document Intelligence can read virtually any document with accuracy," says Matt Dallahan, Senior Vice President of Product Management and Strategy. ||
+|**Wilson Allen** | [**Wilson Allen**](https://wilsonallen.com/) took advantage of AI container support for Azure AI services and created a powerful AI solution that helps firms around the world find unprecedented levels of insight in previously siloed and unstructured data. Its clients can use this data to support business development and foster client relationships. ||
+|**Zelros**| [**Zelros**](http://www.zelros.com/) offers AI-powered software for the insurance industry. Insurers use the platform to take in forms and seamlessly manage customer enrollment and claims filing. The company combined its technology with Document Intelligence to automatically pull key-value pairs and text out of documents. When insurers use the platform, they can quickly process paperwork, ensure high accuracy, and redirect thousands of hours previously spent on manual data extraction toward better service. | [Customer story](https://customers.microsoft.com/story/816397-zelros-insurance-azure)|
ai-services Sdk Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/document-intelligence/sdk-overview.md
+
+ Title: Document Intelligence SDKs
+
+description: Document Intelligence software development kits (SDKs) expose Document Intelligence models, features, and capabilities through the C#, Java, JavaScript, and Python programming languages.
++++++ Last updated : 07/18/2023+
+monikerRange: '<=doc-intel-3.0.0'
+++
+<!-- markdownlint-disable MD024 -->
+<!-- markdownlint-disable MD036 -->
+<!-- markdownlint-disable MD001 -->
+<!-- markdownlint-disable MD051 -->
+
+# Document Intelligence SDK (GA)
++
+> [!IMPORTANT]
+> For more information on the latest public preview version (**2023-02-28-preview**), *see* [Document Intelligence SDK (preview)](sdk-preview.md)
+
+Azure AI Document Intelligence is a cloud service that uses machine learning to analyze text and structured data from documents. The Document Intelligence software development kit (SDK) is a set of libraries and tools that enable you to easily integrate Document Intelligence models and capabilities into your applications. Document Intelligence SDK is available across platforms in C#/.NET, Java, JavaScript, and Python programming languages.
+
+## Supported languages
+
+Document Intelligence SDK supports the following languages and platforms:
+
+| Language → Document Intelligence SDK version | Package| Supported API version| Platform support |
+|:-:|:-|:-| :-|
+| [.NET/C# → 4.0.0 (latest GA release)](https://azuresdkdocs.blob.core.windows.net/$web/dotnet/Azure.AI.FormRecognizer/4.0.0/index.html)|[NuGet](https://www.nuget.org/packages/Azure.AI.FormRecognizer)|[v3.0](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-2022-08-31/operations/AnalyzeDocument)</br> [v2.1](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-v2-1/operations/AnalyzeBusinessCardAsync)</br>[v2.0](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-v2/operations/AnalyzeLayoutAsync) |[Windows, macOS, Linux, Docker](https://dotnet.microsoft.com/download)|
+|[Java → 4.0.6 (latest GA release)](https://azuresdkdocs.blob.core.windows.net/$web/java/azure-ai-formrecognizer/4.0.0/index.html) |[MVN repository](https://mvnrepository.com/artifact/com.azure/azure-ai-formrecognizer/4.0.0-beta.6) |[v3.0](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-2022-08-31/operations/AnalyzeDocument)</br> [v2.1](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-v2-1/operations/AnalyzeBusinessCardAsync)</br>[v2.0](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-v2/operations/AnalyzeLayoutAsync) |[Windows, macOS, Linux](/java/openjdk/install)|
+|[JavaScript → 4.0.0 (latest GA release)](https://azuresdkdocs.blob.core.windows.net/$web/javascript/azure-ai-form-recognizer/4.0.0/index.html)| [npm](https://www.npmjs.com/package/@azure/ai-form-recognizer)| [v3.0](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-2022-08-31/operations/AnalyzeDocument)</br> [v2.1](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-v2-1/operations/AnalyzeBusinessCardAsync)</br>[v2.0](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-v2/operations/AnalyzeLayoutAsync) | [Browser, Windows, macOS, Linux](https://nodejs.org/en/download/) |
+|[Python → 3.2.0 (latest GA release)](https://azuresdkdocs.blob.core.windows.net/$web/python/azure-ai-formrecognizer/3.2.0/index.html) | [PyPI](https://pypi.org/project/azure-ai-formrecognizer/3.2.0/)| [v3.0](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-2022-08-31/operations/AnalyzeDocument)</br> [v2.1](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-v2-1/operations/AnalyzeBusinessCardAsync)</br>[v2.0](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-v2/operations/AnalyzeLayoutAsync) |[Windows, macOS, Linux](/azure/developer/python/configure-local-development-environment?tabs=windows%2Capt%2Ccmd#use-the-azure-cli)
+
+## Supported Clients
+
+| Language| SDK version | API version | Supported clients|
+|:-|:--|:-|:--|
+|.NET/C#</br> Java</br> JavaScript</br>| 4.0.0 (latest GA release)| v3.0 / 2022-08-31 (default)| **DocumentAnalysisClient**</br>**DocumentModelAdministrationClient** |
+|.NET/C#</br> Java</br> JavaScript</br>| 3.1.x | v2.1 (default)</br>v2.0 | **FormRecognizerClient**</br>**FormTrainingClient** |
+|.NET/C#</br> Java</br> JavaScript</br>| 3.0.x| v2.0 | **FormRecognizerClient**</br>**FormTrainingClient** |
+| Python| 3.2.x (latest GA release) | v3.0 / 2022-08-31 (default)| **DocumentAnalysisClient**</br>**DocumentModelAdministrationClient**|
+| Python | 3.1.x | v2.1 (default)</br>v2.0 |**FormRecognizerClient**</br>**FormTrainingClient** |
+| Python | 3.0.0 | v2.0 |**FormRecognizerClient**</br>**FormTrainingClient** |
+
+## Use Document Intelligence SDK in your applications
+
+The Document Intelligence SDK enables the use and management of the Document Intelligence service in your application. The SDK builds on the underlying Document Intelligence REST API allowing you to easily use those APIs within your programming language paradigm. Here's how you use the Document Intelligence SDK for your preferred language:
+
+### 1. Install the SDK client library
+
+### [C#/.NET](#tab/csharp)
+
+```dotnetcli
+dotnet add package Azure.AI.FormRecognizer --version 4.0.0
+```
+
+```powershell
+Install-Package Azure.AI.FormRecognizer -Version 4.0.0
+```
+
+### [Java](#tab/java)
+
+```xml
+<dependency>
+<groupId>com.azure</groupId>
+<artifactId>azure-ai-formrecognizer</artifactId>
+<version>4.0.6</version>
+</dependency>
+```
+
+```kotlin
+implementation("com.azure:azure-ai-formrecognizer:4.0.6")
+```
+
+### [JavaScript](#tab/javascript)
+
+```console
+npm i @azure/ai-form-recognizer@4.0.0
+```
+
+### [Python](#tab/python)
+
+```console
+pip install azure-ai-formrecognizer==3.2.0
+```
+++
+### 2. Import the SDK client library into your application
+
+### [C#/.NET](#tab/csharp)
+
+```csharp
+using Azure;
+using Azure.AI.FormRecognizer.DocumentAnalysis;
+```
+
+### [Java](#tab/java)
+
+```java
+import com.azure.ai.formrecognizer.*;
+import com.azure.ai.formrecognizer.models.*;
+import com.azure.ai.formrecognizer.DocumentAnalysisClient.*;
+
+import com.azure.core.credential.AzureKeyCredential;
+```
+
+### [JavaScript](#tab/javascript)
+
+```javascript
+const { AzureKeyCredential, DocumentAnalysisClient } = require("@azure/ai-form-recognizer");
+```
+
+### [Python](#tab/python)
+
+```python
+from azure.ai.formrecognizer import DocumentAnalysisClient
+from azure.core.credentials import AzureKeyCredential
+```
++++
+### 3. Set up authentication
+
+There are two supported methods for authentication:
+
+* Use a [Document Intelligence API key](#use-your-api-key) with AzureKeyCredential from azure.core.credentials.
+
+* Use a [token credential from azure-identity](#use-an-azure-active-directory-azure-ad-token-credential) to authenticate with [Azure Active Directory](../../active-directory/fundamentals/active-directory-whatis.md).
+
+#### Use your API key
+
+Here's where to find your Document Intelligence API key in the Azure portal:
++
+### [C#/.NET](#tab/csharp)
+
+```csharp
+
+//set `<your-endpoint>` and `<your-key>` variables with the values from the Azure portal to create your `AzureKeyCredential` and `DocumentAnalysisClient` instance
+string key = "<your-key>";
+string endpoint = "<your-endpoint>";
+AzureKeyCredential credential = new AzureKeyCredential(key);
+DocumentAnalysisClient client = new DocumentAnalysisClient(new Uri(endpoint), credential);
+```
+
+### [Java](#tab/java)
+
+```java
+
+// create your `DocumentAnalysisClient` instance and `AzureKeyCredential` variable
+DocumentAnalysisClient client = new DocumentAnalysisClientBuilder()
+ .credential(new AzureKeyCredential("<your-key>"))
+ .endpoint("<your-endpoint>")
+ .buildClient();
+```
+
+### [JavaScript](#tab/javascript)
+
+```javascript
+
+// create your `DocumentAnalysisClient` instance and `AzureKeyCredential` variable
+async function main() {
+ const client = new DocumentAnalysisClient("<your-endpoint>", new AzureKeyCredential("<your-key>"));
+}
+```
+
+### [Python](#tab/python)
+
+```python
+
+# create your `DocumentAnalysisClient` instance and `AzureKeyCredential` variable
+document_analysis_client = DocumentAnalysisClient(endpoint="<your-endpoint>", credential=AzureKeyCredential("<your-key>"))
+```
++++
+#### Use an Azure Active Directory (Azure AD) token credential
+
+> [!NOTE]
+> Regional endpoints do not support AAD authentication. Create a [custom subdomain](../../ai-services/authentication.md?tabs=powershell#create-a-resource-with-a-custom-subdomain) for your resource in order to use this type of authentication.
+
+Authorization is easiest using the `DefaultAzureCredential`. It provides a default token credential, based upon the running environment, capable of handling most Azure authentication scenarios.
+
+### [C#/.NET](#tab/csharp)
+
+Here's how to acquire and use the [DefaultAzureCredential](/dotnet/api/azure.identity.defaultazurecredential?view=azure-dotnet&preserve-view=true) for .NET applications:
+
+1. Install the [Azure Identity library for .NET](/dotnet/api/overview/azure/identity-readme):
+
+ ```console
+ dotnet add package Azure.Identity
+ ```
+
+ ```powershell
+ Install-Package Azure.Identity
+ ```
+
+1. [Register an Azure AD application and create a new service principal](../../ai-services/authentication.md?tabs=powershell#assign-a-role-to-a-service-principal).
+
+1. Grant access to Document Intelligence by assigning the **`Azure AI services User`** role to your service principal.
+
+1. Set the values of the client ID, tenant ID, and client secret in the Azure AD application as environment variables: **`AZURE_CLIENT_ID`**, **`AZURE_TENANT_ID`**, and **`AZURE_CLIENT_SECRET`**, respectively.
+
+1. Create your **`DocumentAnalysisClient`** instance including the **`DefaultAzureCredential`**:
+
+ ```csharp
+ string endpoint = "<your-endpoint>";
+ var client = new DocumentAnalysisClient(new Uri(endpoint), new DefaultAzureCredential());
+ ```
+
+For more information, *see* [Authenticate the client](https://github.com/Azure/azure-sdk-for-net/tree/Azure.AI.FormRecognizer_4.0.0-beta.4/sdk/formrecognizer/Azure.AI.FormRecognizer#authenticate-the-client)
+
+### [Java](#tab/java)
+
+Here's how to acquire and use the [DefaultAzureCredential](/java/api/com.azure.identity.defaultazurecredential?view=azure-java-stable&preserve-view=true) for Java applications:
+
+1. Install the [Azure Identity library for Java](/java/api/overview/azure/identity-readme?view=azure-java-stable&preserve-view=true):
+
+ ```xml
+ <dependency>
+ <groupId>com.azure</groupId>
+ <artifactId>azure-identity</artifactId>
+ <version>1.5.3</version>
+ </dependency>
+ ```
+
+1. [Register an Azure AD application and create a new service principal](../../ai-services/authentication.md?tabs=powershell#assign-a-role-to-a-service-principal).
+
+1. Grant access to Document Intelligence by assigning the **`Azure AI services User`** role to your service principal.
+
+1. Set the values of the client ID, tenant ID, and client secret of the Azure AD application as environment variables: **`AZURE_CLIENT_ID`**, **`AZURE_TENANT_ID`**, and **`AZURE_CLIENT_SECRET`**, respectively.
+
+1. Create your **`DocumentAnalysisClient`** instance and **`TokenCredential`** variable:
+
+ ```java
+ TokenCredential credential = new DefaultAzureCredentialBuilder().build();
+ DocumentAnalysisClient documentAnalysisClient = new DocumentAnalysisClientBuilder()
+ .endpoint("{your-endpoint}")
+ .credential(credential)
+ .buildClient();
+ ```
+
+For more information, *see* [Authenticate the client](https://github.com/Azure/azure-sdk-for-java/tree/main/sdk/formrecognizer/azure-ai-formrecognizer#authenticate-the-client)
+
+### [JavaScript](#tab/javascript)
+
+Here's how to acquire and use the [DefaultAzureCredential](/javascript/api/@azure/identity/defaultazurecredential?view=azure-node-latest&preserve-view=true) for JavaScript applications:
+
+1. Install the [Azure Identity library for JavaScript](/javascript/api/overview/azure/identity-readme?view=azure-node-latest&preserve-view=true):
+
+    ```console
+ npm install @azure/identity
+ ```
+
+1. [Register an Azure AD application and create a new service principal](../../ai-services/authentication.md?tabs=powershell#assign-a-role-to-a-service-principal).
+
+1. Grant access to Document Intelligence by assigning the **`Azure AI services User`** role to your service principal.
+
+1. Set the values of the client ID, tenant ID, and client secret of the Azure AD application as environment variables: **`AZURE_CLIENT_ID`**, **`AZURE_TENANT_ID`**, and **`AZURE_CLIENT_SECRET`**, respectively.
+
+1. Create your **`DocumentAnalysisClient`** instance including the **`DefaultAzureCredential`**:
+
+ ```javascript
+ const { DocumentAnalysisClient } = require("@azure/ai-form-recognizer");
+ const { DefaultAzureCredential } = require("@azure/identity");
+
+ const client = new DocumentAnalysisClient("<your-endpoint>", new DefaultAzureCredential());
+ ```
+
+For more information, *see* [Create and authenticate a client](https://github.com/Azure/azure-sdk-for-js/tree/main/sdk/formrecognizer/ai-form-recognizer#create-and-authenticate-a-client).
+
+### [Python](#tab/python)
+
+Here's how to acquire and use the [DefaultAzureCredential](/python/api/azure-identity/azure.identity.defaultazurecredential?view=azure-python&preserve-view=true) for Python applications.
+
+1. Install the [Azure Identity library for Python](/python/api/overview/azure/identity-readme?view=azure-python&preserve-view=true):
+
+    ```console
+ pip install azure-identity
+ ```
+
+1. [Register an Azure AD application and create a new service principal](../../ai-services/authentication.md?tabs=powershell#assign-a-role-to-a-service-principal).
+
+1. Grant access to Document Intelligence by assigning the **`Azure AI services User`** role to your service principal.
+
+1. Set the values of the client ID, tenant ID, and client secret of the Azure AD application as environment variables: **`AZURE_CLIENT_ID`**, **`AZURE_TENANT_ID`**, and **`AZURE_CLIENT_SECRET`**, respectively.
+
+1. Create your **`DocumentAnalysisClient`** instance including the **`DefaultAzureCredential`**:
+
+ ```python
+ from azure.identity import DefaultAzureCredential
+ from azure.ai.formrecognizer import DocumentAnalysisClient
+
+ credential = DefaultAzureCredential()
+ document_analysis_client = DocumentAnalysisClient(
+ endpoint="https://<my-custom-subdomain>.cognitiveservices.azure.com/",
+ credential=credential
+ )
+ ```
+
+For more information, *see* [Authenticate the client](https://github.com/Azure/azure-sdk-for-python/tree/azure-ai-formrecognizer_3.2.0b5/sdk/formrecognizer/azure-ai-formrecognizer#authenticate-the-client)
++++
+### 4. Build your application
+
+Create a client object to interact with the Document Intelligence SDK, and then call methods on that client object to interact with the service. The SDKs provide both synchronous and asynchronous methods. For more insight, try a [quickstart](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-3.0.0&preserve-view=true) in a language of your choice.
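+
+As an illustration, here's a minimal Python sketch (assuming the GA `azure-ai-formrecognizer` 3.2.0 package installed earlier) that submits a document to the prebuilt layout model and prints a summary of the result. The endpoint, key, and document URL are placeholders you replace with your own values.
+
+```python
+from azure.ai.formrecognizer import DocumentAnalysisClient
+from azure.core.credentials import AzureKeyCredential
+
+# Placeholder values -- replace with your resource's endpoint and key.
+endpoint = "<your-endpoint>"
+key = "<your-key>"
+
+document_analysis_client = DocumentAnalysisClient(endpoint, AzureKeyCredential(key))
+
+# Start a long-running analyze operation against a publicly accessible document URL (placeholder).
+poller = document_analysis_client.begin_analyze_document_from_url(
+    "prebuilt-layout", "<url-to-your-document>"
+)
+result = poller.result()
+
+# Print a short summary of the extracted layout.
+for page in result.pages:
+    print(f"Page {page.page_number}: {len(page.lines)} lines, {len(page.words)} words")
+```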
+
+## Help options
+
+The [Microsoft Q&A](/answers/topics/azure-form-recognizer.html) and [Stack Overflow](https://stackoverflow.com/questions/tagged/azure-form-recognizer) forums are available for the developer community to ask and answer questions about Azure AI Document Intelligence and other services. Microsoft monitors the forums and replies to questions that the community has yet to answer. To make sure that we see your question, tag it with **`azure-form-recognizer`**.
+
+## Next steps
+
+>[!div class="nextstepaction"]
+> [**Explore Document Intelligence REST API v3.0**](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-2022-08-31/operations/AnalyzeDocument)
+
+> [!div class="nextstepaction"]
+> [**Try a Document Intelligence quickstart**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-3.0.0&preserve-view=true)
ai-services Sdk Preview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/document-intelligence/sdk-preview.md
+
+ Title: Document Intelligence SDKs (preview)
+
+description: The preview Document Intelligence software development kits (SDKs) expose Document Intelligence models, features, and capabilities that are in active development for the C#, Java, JavaScript, and Python programming languages.
++++++ Last updated : 07/18/2023+
+monikerRange: 'doc-intel-3.0.0'
+++
+<!-- markdownlint-disable MD024 -->
+<!-- markdownlint-disable MD036 -->
+<!-- markdownlint-disable MD001 -->
+<!-- markdownlint-disable MD051 -->
+
+# Document Intelligence SDK (public preview)
+
+**The SDKs referenced in this article are supported by:** ![Document Intelligence checkmark](media/yes-icon.png) **Document Intelligence REST API version 2023-02-28-preview**.
+
+> [!IMPORTANT]
+>
+> * Document Intelligence public preview releases provide early access to features that are in active development.
+> * Features, approaches, and processes may change prior to General Availability (GA), based on user feedback.
+> * The public preview version of Document Intelligence client libraries default to service version [**Document Intelligence 2023-02-28-preview REST API**](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-2023-02-28-preview/operations/AnalyzeDocument).
+
+Azure AI Document Intelligence is a cloud service that uses machine learning to analyze text and structured data from documents. The Document Intelligence software development kit (SDK) is a set of libraries and tools that enable you to easily integrate Document Intelligence models and capabilities into your applications. Document Intelligence SDK is available across platforms in C#/.NET, Java, JavaScript, and Python programming languages.
+
+## Supported languages
+
+Document Intelligence SDK supports the following languages and platforms:
+
+| Language → Document Intelligence SDK version | Package| Supported API version| Platform support |
+|:-:|:-|:-| :-|
+| [.NET/C# → 4.1.0-beta.1 (preview)](https://azuresdkdocs.blob.core.windows.net/$web/dotnet/Azure.AI.FormRecognizer/4.1.0-beta.1/index.html)|[NuGet](https://www.nuget.org/packages/Azure.AI.FormRecognizer/4.1.0-beta.1)|[**2023-02-28-preview**](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-2023-02-28-preview/operations/AnalyzeDocument)</br> [**2022-08-31 (GA)**](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-2022-08-31/operations/AnalyzeDocument)</br> [**v2.1**](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-v2-1/operations/AnalyzeBusinessCardAsync)</br>[**v2.0**](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-v2/operations/AnalyzeLayoutAsync) |[Windows, macOS, Linux, Docker](https://dotnet.microsoft.com/download)|
+|[Java → 4.1.0-beta.1 (preview)](https://azuresdkdocs.blob.core.windows.net/$web/java/azure-ai-formrecognizer/4.1.0-beta.1/index.html) |[MVN repository](https://mvnrepository.com/artifact/com.azure/azure-ai-formrecognizer/4.1.0-beta.1) |[**2023-02-28-preview**](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-2023-02-28-preview/operations/AnalyzeDocument)</br> [**2022-08-31 (GA)**](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-2022-08-31/operations/AnalyzeDocument)</br> [**v2.1**](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-v2-1/operations/AnalyzeBusinessCardAsync)</br>[**v2.0**](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-v2/operations/AnalyzeLayoutAsync) |[Windows, macOS, Linux](/java/openjdk/install)|
+|[JavaScript → 4.1.0-beta.1 (preview)](https://azuresdkdocs.blob.core.windows.net/$web/javascript/azure-ai-form-recognizer/4.1.0-beta.1/index.html)| [npm](https://www.npmjs.com/package/@azure/ai-form-recognizer/v/4.1.0-beta.1)| [**2023-02-28-preview**](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-2023-02-28-preview/operations/AnalyzeDocument)</br> [**2022-08-31 (GA)**](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-2022-08-31/operations/AnalyzeDocument)</br> [**v2.1**](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-v2-1/operations/AnalyzeBusinessCardAsync)</br>[**v2.0**](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-v2/operations/AnalyzeLayoutAsync) | [Browser, Windows, macOS, Linux](https://nodejs.org/en/download/) |
+|[Python → 3.3.0b1 (preview)](https://azuresdkdocs.blob.core.windows.net/$web/python/azure-ai-formrecognizer/3.3.0b1/index.html) | [PyPI](https://pypi.org/project/azure-ai-formrecognizer/3.3.0b1/)| [**2023-02-28-preview**](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-2023-02-28-preview/operations/AnalyzeDocument)</br> [**2022-08-31 (GA)**](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-2022-08-31/operations/AnalyzeDocument)</br> [**v2.1**](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-v2-1/operations/AnalyzeBusinessCardAsync)</br>[**v2.0**](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-v2/operations/AnalyzeLayoutAsync) |[Windows, macOS, Linux](/azure/developer/python/configure-local-development-environment?tabs=windows%2Capt%2Ccmd#use-the-azure-cli)
+
+## Supported Clients
+
+| Language| SDK version | API version (default) | Supported clients|
+|:-|:--|:-|:--|
+|**.NET/C#**</br> **Java**</br> **JavaScript**</br>| 4.1.0-beta.1 (preview)| 2023-02-28-preview|**DocumentAnalysisClient**</br>**DocumentModelAdministrationClient** |
+|**.NET/C#**</br> **Java**</br> **JavaScript**</br>| 4.0.0 (GA)| v3.0 / 2022-08-31| **DocumentAnalysisClient**</br>**DocumentModelAdministrationClient** |
+|**.NET/C#**</br> **Java**</br> **JavaScript**</br>| 3.1.x | v2.1 | **FormRecognizerClient**</br>**FormTrainingClient** |
+|**.NET/C#**</br> **Java**</br> **JavaScript**</br>| 3.0.x| v2.0 | **FormRecognizerClient**</br>**FormTrainingClient** |
+| **Python**| 3.3.0bx (preview) | 2023-02-28-preview | **DocumentAnalysisClient**</br>**DocumentModelAdministrationClient**|
+| **Python**| 3.2.x (GA) | v3.0 / 2022-08-31| **DocumentAnalysisClient**</br>**DocumentModelAdministrationClient**|
+| **Python**| 3.1.x | v2.1 | **FormRecognizerClient**</br>**FormTrainingClient** |
+| **Python** | 3.0.0 | v2.0 |**FormRecognizerClient**</br>**FormTrainingClient** |
+
+## Use Document Intelligence SDK in your applications
+
+The Document Intelligence SDK enables the use and management of the Document Intelligence service in your application. The SDK builds on the underlying Document Intelligence REST API allowing you to easily use those APIs within your programming language paradigm. Here's how you use the Document Intelligence SDK for your preferred language:
+
+### 1. Install the SDK client library
+
+### [C#/.NET](#tab/csharp)
+
+```dotnetcli
+dotnet add package Azure.AI.FormRecognizer --version 4.1.0-beta.1
+```
+
+```powershell
+Install-Package Azure.AI.FormRecognizer -Version 4.1.0-beta.1
+```
+
+### [Java](#tab/java)
+
+```xml
+ <dependency>
+ <groupId>com.azure</groupId>
+ <artifactId>azure-ai-formrecognizer</artifactId>
+ <version>4.1.0-beta.1</version>
+ </dependency>
+```
+
+```kotlin
+implementation("com.azure:azure-ai-formrecognizer:4.1.0-beta.1")
+```
+
+### [JavaScript](#tab/javascript)
+
+```console
+npm i @azure/ai-form-recognizer@4.1.0-beta.1
+```
+
+### [Python](#tab/python)
+
+```console
+pip install azure-ai-formrecognizer==3.3.0b1
+```
++++
+### 2. Import the SDK client library into your application
+
+### [C#/.NET](#tab/csharp)
+
+```csharp
+using Azure;
+using Azure.AI.FormRecognizer.DocumentAnalysis;
+```
+
+### [Java](#tab/java)
+
+```java
+import com.azure.ai.formrecognizer.*;
+import com.azure.ai.formrecognizer.models.*;
+import com.azure.ai.formrecognizer.DocumentAnalysisClient.*;
+
+import com.azure.core.credential.AzureKeyCredential;
+```
+
+### [JavaScript](#tab/javascript)
+
+```javascript
+const { AzureKeyCredential, DocumentAnalysisClient } = require("@azure/ai-form-recognizer");
+```
+
+### [Python](#tab/python)
+
+```python
+from azure.ai.formrecognizer import DocumentAnalysisClient
+from azure.core.credentials import AzureKeyCredential
+```
++++
+### 3. Set up authentication
+
+There are two supported methods for authentication:
+
+* Use a [Document Intelligence API key](#use-your-api-key) with AzureKeyCredential from azure.core.credentials.
+
+* Use a [token credential from azure-identity](#use-an-azure-active-directory-azure-ad-token-credential) to authenticate with [Azure Active Directory](../../active-directory/fundamentals/active-directory-whatis.md).
+
+#### Use your API key
+
+Here's where to find your Document Intelligence API key in the Azure portal:
++
+### [C#/.NET](#tab/csharp)
+
+```csharp
+
+//set `<your-endpoint>` and `<your-key>` variables with the values from the Azure portal to create your `AzureKeyCredential` and `DocumentAnalysisClient` instance
+string key = "<your-key>";
+string endpoint = "<your-endpoint>";
+AzureKeyCredential credential = new AzureKeyCredential(key);
+DocumentAnalysisClient client = new DocumentAnalysisClient(new Uri(endpoint), credential);
+```
+
+### [Java](#tab/java)
+
+```java
+
+// create your `DocumentAnalysisClient` instance and `AzureKeyCredential` variable
+DocumentAnalysisClient client = new DocumentAnalysisClientBuilder()
+ .credential(new AzureKeyCredential("<your-key>"))
+ .endpoint("<your-endpoint>")
+ .buildClient();
+```
+
+### [JavaScript](#tab/javascript)
+
+```javascript
+
+// create your `DocumentAnalysisClient` instance and `AzureKeyCredential` variable
+async function main() {
+ const client = new DocumentAnalysisClient("<your-endpoint>", new AzureKeyCredential("<your-key>"));
+}
+```
+
+### [Python](#tab/python)
+
+```python
+
+# create your `DocumentAnalysisClient` instance and `AzureKeyCredential` variable
+document_analysis_client = DocumentAnalysisClient(endpoint="<your-endpoint>", credential=AzureKeyCredential("<your-key>"))
+```
++++
+#### Use an Azure Active Directory (Azure AD) token credential
+
+> [!NOTE]
+> Regional endpoints do not support AAD authentication. Create a [custom subdomain](../../ai-services/authentication.md?tabs=powershell#create-a-resource-with-a-custom-subdomain) for your resource in order to use this type of authentication.
+
+Authorization is easiest using the `DefaultAzureCredential`. It provides a default token credential, based upon the running environment, capable of handling most Azure authentication scenarios.
+
+### [C#/.NET](#tab/csharp)
+
+Here's how to acquire and use the [DefaultAzureCredential](/dotnet/api/azure.identity.defaultazurecredential?view=azure-dotnet&preserve-view=true) for .NET applications:
+
+1. Install the [Azure Identity library for .NET](/dotnet/api/overview/azure/identity-readme):
+
+ ```console
+ dotnet add package Azure.Identity
+ ```
+
+ ```powershell
+ Install-Package Azure.Identity
+ ```
+
+1. [Register an Azure AD application and create a new service principal](../../ai-services/authentication.md?tabs=powershell#assign-a-role-to-a-service-principal).
+
+1. Grant access to Document Intelligence by assigning the **`Cognitive Services User`** role to your service principal.
+
+1. Set the values of the client ID, tenant ID, and client secret in the Azure AD application as environment variables: **`AZURE_CLIENT_ID`**, **`AZURE_TENANT_ID`**, and **`AZURE_CLIENT_SECRET`**, respectively.
+
+1. Create your **`DocumentAnalysisClient`** instance including the **`DefaultAzureCredential`**:
+
+ ```csharp
+ string endpoint = "<your-endpoint>";
+ var client = new DocumentAnalysisClient(new Uri(endpoint), new DefaultAzureCredential());
+ ```
+
+For more information, *see* [Authenticate the client](https://github.com/Azure/azure-sdk-for-net/tree/Azure.AI.FormRecognizer_4.0.0-beta.4/sdk/formrecognizer/Azure.AI.FormRecognizer#authenticate-the-client)
+
+### [Java](#tab/java)
+
+Here's how to acquire and use the [DefaultAzureCredential](/java/api/com.azure.identity.defaultazurecredential?view=azure-java-stable&preserve-view=true) for Java applications:
+
+1. Install the [Azure Identity library for Java](/java/api/overview/azure/identity-readme?view=azure-java-stable&preserve-view=true):
+
+ ```xml
+ <dependency>
+ <groupId>com.azure</groupId>
+ <artifactId>azure-identity</artifactId>
+ <version>1.5.3</version>
+ </dependency>
+ ```
+
+1. [Register an Azure AD application and create a new service principal](../../ai-services/authentication.md?tabs=powershell#assign-a-role-to-a-service-principal).
+
+1. Grant access to Document Intelligence by assigning the **`Cognitive Services User`** role to your service principal.
+
+1. Set the values of the client ID, tenant ID, and client secret of the Azure AD application as environment variables: **`AZURE_CLIENT_ID`**, **`AZURE_TENANT_ID`**, and **`AZURE_CLIENT_SECRET`**, respectively.
+
+1. Create your **`DocumentAnalysisClient`** instance and **`TokenCredential`** variable:
+
+ ```java
+ TokenCredential credential = new DefaultAzureCredentialBuilder().build();
+ DocumentAnalysisClient documentAnalysisClient = new DocumentAnalysisClientBuilder()
+ .endpoint("{your-endpoint}")
+ .credential(credential)
+ .buildClient();
+ ```
+
+For more information, *see* [Authenticate the client](https://github.com/Azure/azure-sdk-for-java/tree/main/sdk/formrecognizer/azure-ai-formrecognizer#authenticate-the-client)
+
+### [JavaScript](#tab/javascript)
+
+Here's how to acquire and use the [DefaultAzureCredential](/javascript/api/@azure/identity/defaultazurecredential?view=azure-node-latest&preserve-view=true) for JavaScript applications:
+
+1. Install the [Azure Identity library for JavaScript](/javascript/api/overview/azure/identity-readme?view=azure-node-latest&preserve-view=true):
+
+    ```console
+ npm install @azure/identity
+ ```
+
+1. [Register an Azure AD application and create a new service principal](../../ai-services/authentication.md?tabs=powershell#assign-a-role-to-a-service-principal).
+
+1. Grant access to Document Intelligence by assigning the **`Cognitive Services User`** role to your service principal.
+
+1. Set the values of the client ID, tenant ID, and client secret of the Azure AD application as environment variables: **`AZURE_CLIENT_ID`**, **`AZURE_TENANT_ID`**, and **`AZURE_CLIENT_SECRET`**, respectively.
+
+1. Create your **`DocumentAnalysisClient`** instance including the **`DefaultAzureCredential`**:
+
+ ```javascript
+ const { DocumentAnalysisClient } = require("@azure/ai-form-recognizer");
+ const { DefaultAzureCredential } = require("@azure/identity");
+
+ const client = new DocumentAnalysisClient("<your-endpoint>", new DefaultAzureCredential());
+ ```
+
+For more information, *see* [Create and authenticate a client](https://github.com/Azure/azure-sdk-for-js/tree/main/sdk/formrecognizer/ai-form-recognizer#create-and-authenticate-a-client).
+
+### [Python](#tab/python)
+
+Here's how to acquire and use the [DefaultAzureCredential](/python/api/azure-identity/azure.identity.defaultazurecredential?view=azure-python&preserve-view=true) for Python applications.
+
+1. Install the [Azure Identity library for Python](/python/api/overview/azure/identity-readme?view=azure-python&preserve-view=true):
+
+    ```console
+ pip install azure-identity
+ ```
+
+1. [Register an Azure AD application and create a new service principal](../../ai-services/authentication.md?tabs=powershell#assign-a-role-to-a-service-principal).
+
+1. Grant access to Document Intelligence by assigning the **`Cognitive Services User`** role to your service principal.
+
+1. Set the values of the client ID, tenant ID, and client secret of the Azure AD application as environment variables: **`AZURE_CLIENT_ID`**, **`AZURE_TENANT_ID`**, and **`AZURE_CLIENT_SECRET`**, respectively.
+
+1. Create your **`DocumentAnalysisClient`** instance including the **`DefaultAzureCredential`**:
+
+ ```python
+ from azure.identity import DefaultAzureCredential
+ from azure.ai.formrecognizer import DocumentAnalysisClient
+
+ credential = DefaultAzureCredential()
+ document_analysis_client = DocumentAnalysisClient(
+ endpoint="https://<my-custom-subdomain>.cognitiveservices.azure.com/",
+ credential=credential
+ )
+ ```
+
+For more information, *see* [Authenticate the client](https://github.com/Azure/azure-sdk-for-python/tree/azure-ai-formrecognizer_3.2.0b5/sdk/formrecognizer/azure-ai-formrecognizer#authenticate-the-client)
++++
+### 4. Build your application
+
+Create a client object to interact with the Document Intelligence SDK, and then call methods on that client object to interact with the service. The SDKs provide both synchronous and asynchronous methods. For more insight, try a [quickstart](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-3.0.0&preserve-view=true) in a language of your choice.
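+
+For example, here's a minimal sketch of the asynchronous Python client (assuming the preview `azure-ai-formrecognizer` 3.3.0b1 package installed earlier). The endpoint, key, and document URL are placeholders you replace with your own values.
+
+```python
+import asyncio
+
+from azure.ai.formrecognizer.aio import DocumentAnalysisClient
+from azure.core.credentials import AzureKeyCredential
+
+
+async def analyze():
+    # Placeholder values -- replace with your resource's endpoint and key.
+    client = DocumentAnalysisClient("<your-endpoint>", AzureKeyCredential("<your-key>"))
+    async with client:
+        # Analyze a document at a placeholder URL with the prebuilt read model.
+        poller = await client.begin_analyze_document_from_url(
+            "prebuilt-read", "<url-to-your-document>"
+        )
+        result = await poller.result()
+    print(f"Analyzed {len(result.pages)} page(s)")
+
+
+asyncio.run(analyze())
+```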
+
+## Help options
+
+The [Microsoft Q&A](/answers/topics/azure-form-recognizer.html) and [Stack Overflow](https://stackoverflow.com/questions/tagged/azure-form-recognizer) forums are available for the developer community to ask and answer questions about Azure AI Document Intelligence and other services. Microsoft monitors the forums and replies to questions that the community has yet to answer. To make sure that we see your question, tag it with **`azure-form-recognizer`**.
+
+## Next steps
+
+> [!div class="nextstepaction"]
+> [**Explore Document Intelligence REST API 2023-02-28-preview**](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-2023-02-28-preview/operations/AnalyzeDocument)
ai-services Service Limits https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/document-intelligence/service-limits.md
+
+ Title: Document Intelligence quotas and limits
+
+description: Quick reference, detailed description, and best practices for working within Azure AI Document Intelligence service Quotas and Limits
++++++ Last updated : 07/18/2023+
+monikerRange: '<=doc-intel-3.0.0'
+++
+# Service quotas and limits
+<!-- markdownlint-disable MD033 -->
+++
+This article contains both a quick reference and detailed description of Azure AI Document Intelligence service Quotas and Limits for all [pricing tiers](https://azure.microsoft.com/pricing/details/form-recognizer/). It also contains some best practices to avoid request throttling.
+
+## Model usage
++
+> [!div class="checklist"]
+>
+> * [**Document Intelligence SDKs**](quickstarts/get-started-sdks-rest-api.md)
+> * [**Document Intelligence REST API**](quickstarts/get-started-sdks-rest-api.md)
+> * [**Document Intelligence Studio v3.0**](quickstarts/try-document-intelligence-studio.md)
++
+> [!div class="checklist"]
+>
+> * [**Document Intelligence SDKs**](quickstarts/get-started-sdks-rest-api.md)
+> * [**Document Intelligence REST API**](quickstarts/get-started-sdks-rest-api.md)
+> * [**Sample Labeling Tool v2.1**](https://fott-2-1.azurewebsites.net/)
++
+|Quota|Free (F0)<sup>1</sup>|Standard (S0)|
+|--|--|--|
+| **Transactions Per Second limit** | 1 | 15 (default value) |
+| Adjustable | No | Yes <sup>2</sup> |
+| **Max document size** | 4 MB | 500 MB |
+| Adjustable | No | No |
+| **Max number of pages (Analysis)** | 2 | 2000 |
+| Adjustable | No | No |
+| **Max size of labels file** | 10 MB | 10 MB |
+| Adjustable | No | No |
+| **Max size of OCR json response** | 500 MB | 500 MB |
+| Adjustable | No | No |
+| **Max number of Template models** | 500 | 5000 |
+| Adjustable | No | No |
+| **Max number of Neural models** | 100 | 500 |
+| Adjustable | No | No |
++
+## Custom model usage
+
+> [!div class="checklist"]
+>
+> * [**Custom template model**](concept-custom-template.md)
+> * [**Custom neural model**](concept-custom-neural.md)
+> * [**Composed classification models**](concept-custom-classifier.md)
+> * [**Composed custom models**](concept-composed-models.md)
+
+|Quota|Free (F0) <sup>1</sup>|Standard (S0)|
+|--|--|--|
+| **Compose Model limit** | 5 | 200 (default value) |
+| Adjustable | No | No |
+| **Training dataset size * Neural** | 1 GB <sup>3</sup> | 1 GB (default value) |
+| Adjustable | No | No |
+| **Training dataset size * Template** | 50 MB <sup>4</sup> | 50 MB (default value) |
+| Adjustable | No | No |
+| **Max number of pages (Training) * Template** | 500 | 500 (default value) |
+| Adjustable | No | No |
+| **Max number of pages (Training) * Neural** | 50,000 | 50,000 (default value) |
+| Adjustable | No | No |
+| **Custom neural model train** | 10 per month | 20 per month |
+| Adjustable | No |Yes <sup>3</sup>|
+| **Max number of pages (Training) * Classifier** | 10,000 | 10,000 (default value) |
+| Adjustable | No | No |
+| **Training dataset size * Classifier** | 1 GB | 1 GB (default value) |
+| Adjustable | No | No |
+++
+## Custom model limits
+
+> [!div class="checklist"]
+>
+> * [**Custom template model**](concept-custom-template.md)
+> * [**Composed custom models**](concept-composed-models.md)
+
+| Quota | Free (F0) <sup>1</sup> | Standard (S0) |
+|--|--|--|
+| **Compose Model limit** | 5 | 200 (default value) |
+| Adjustable | No | No |
+| **Training dataset size** | 50 MB | 50 MB (default value) |
+| Adjustable | No | No |
+| **Max number of pages (Training)** | 500 | 500 (default value) |
+| Adjustable | No | No |
+++
+> <sup>1</sup> For **Free (F0)** pricing tier see also monthly allowances at the [pricing page](https://azure.microsoft.com/pricing/details/form-recognizer/).</br>
+> <sup>2</sup> See [best practices](#example-of-a-workload-pattern-best-practice) and [adjustment instructions](#create-and-submit-support-request).</br>
+> <sup>3</sup> Neural models training count is reset every calendar month. Open a support request to increase the monthly training limit.
+> <sup>4</sup> This limit applies to all documents found in your training dataset folder prior to any labeling-related updates.
+
+## Detailed description, Quota adjustment, and best practices
+
+Before requesting a quota increase (where applicable), ensure that it's necessary. The Document Intelligence service uses autoscaling to bring the required computational resources online on demand and, at the same time, to keep customer costs low by deprovisioning unused resources rather than maintaining an excessive amount of hardware capacity.
+
+If your application returns Response Code 429 (*Too many requests*) and your workload is within the defined limits, the most likely cause is that the service is scaling up to meet your demand but hasn't yet reached the required scale. As a result, the service doesn't immediately have enough resources to serve the request. This state is transient and shouldn't last long.
+
+### General best practices to mitigate throttling during autoscaling
+
+To minimize issues related to throttling (Response Code 429), we recommend using the following techniques:
+
+* Implement retry logic in your application (see the sketch after this list)
+* Avoid sharp changes in the workload. Increase the workload gradually. <br/>
+*Example.* Your application is using Document Intelligence and your current workload is 10 TPS (transactions per second). The next second, you increase the load to 40 TPS (that is, four times more). The service immediately starts scaling up to fulfill the new load, but likely can't do so within a second, so some of the requests get Response Code 429.
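+
+As an illustration only, here's a minimal Python sketch of retry logic that backs off exponentially when the service returns HTTP 429. It uses the third-party `requests` package for the HTTP call; the URL, headers, and delay values are hypothetical and should be adjusted to your workload.
+
+```python
+import time
+
+import requests
+
+
+def submit_with_backoff(url, headers, body, max_retries=5):
+    """POST a request and retry with exponential backoff on HTTP 429."""
+    delay = 2  # seconds; doubles after each throttled attempt
+    for _ in range(max_retries):
+        response = requests.post(url, headers=headers, data=body)
+        if response.status_code != 429:
+            return response
+        # Honor the Retry-After header if present; otherwise use the current backoff delay.
+        wait = int(response.headers.get("Retry-After", delay))
+        time.sleep(wait)
+        delay *= 2
+    return response
+```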
+
+The next sections describe specific cases of adjusting quotas.
+Jump to [Document Intelligence: increasing concurrent request limit](#create-and-submit-support-request)
+
+### Increasing transactions per second request limit
+
+By default, the number of transactions per second is limited to 15 for a Document Intelligence resource. For the Standard pricing tier, this amount can be increased. Before submitting the request, ensure you're familiar with the material in [this section](#detailed-description-quota-adjustment-and-best-practices) and aware of these [best practices](#example-of-a-workload-pattern-best-practice).
+
+Increasing the Concurrent Request limit does **not** directly affect your costs. The Document Intelligence service uses a "pay only for what you use" model. The limit defines how high the service may scale before it starts throttling your requests.
+
+The existing value of the Concurrent Request limit parameter is **not** visible via the Azure portal, command-line tools, or API requests. To verify the existing value, create an Azure Support Request.
+
+If you would like to increase your transactions per second, you can enable auto scaling on your resource; follow the steps in [enable auto scaling](../../ai-services/autoscale.md). You can also submit a support request to increase your TPS limit.
+
+#### Have the required information ready
+
+* Document Intelligence Resource ID
+* Region
+
+* **How to get information (Base model)**:
+ * Go to [Azure portal](https://portal.azure.com/)
+ * Select the Document Intelligence Resource for which you would like to increase the transaction limit
+ * Select *Properties* (*Resource Management* group)
+ * Copy and save the values of the following fields:
+ * **Resource ID**
+ * **Location** (your endpoint Region)
+
+#### Create and submit support request
+
+Initiate the increase of the transactions per second (TPS) limit for your resource by submitting the Support Request:
+
+* Ensure you have the [required information](#have-the-required-information-ready)
+* Go to [Azure portal](https://portal.azure.com/)
+* Select the Document Intelligence Resource for which you would like to increase the TPS limit
+* Select *New support request* (*Support + troubleshooting* group)
+* A new window appears with autopopulated information about your Azure Subscription and Azure Resource
+* Enter *Summary* (like "Increase Document Intelligence TPS limit")
+* In *Problem type*, select "Quota or usage validation"
+* Select *Next: Solutions*
+* Proceed further with the request creation
+* Under the *Details* tab, enter the following information in the *Description* field:
+  * A note that the request is about the **Document Intelligence** quota.
+  * The TPS expectation you would like to scale to meet.
+  * The Azure resource information you [collected](#have-the-required-information-ready).
+* Complete entering the required information and select the *Create* button in the *Review + create* tab
+* Note the support request number in Azure portal notifications. You're contacted shortly for further processing
+
+## Example of a workload pattern best practice
+
+This example presents the approach we recommend following to mitigate possible request throttling due to [Autoscaling being in progress](#detailed-description-quota-adjustment-and-best-practices). It isn't an *exact recipe*, but merely a template we invite you to follow and adjust as necessary.
+
+Let us suppose that a Document Intelligence resource has the default limit set. Start the workload to submit your analyze requests. If you find that you're seeing frequent throttling with response code 429, start by implementing an exponential backoff on the GET analyze response request. Use a progressively longer wait time between retries for consecutive error responses, for example a 2-5-13-34 pattern of delays between requests. In general, it's recommended to not call the GET analyze response more than once every 2 seconds for a corresponding POST request.
+
+If you find that you're being throttled on the number of POST requests for documents being submitted, consider adding a delay between the requests. If your workload requires a higher degree of concurrent processing, you then need to create a support request to increase your service limits on transactions per second.
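+
+As one possible way to apply this guidance with the Python SDK, the following sketch spaces out document submissions and passes the `polling_interval` keyword to the `begin_*` method so the result is polled no more than once every 2 seconds when no Retry-After header is returned. It's a sketch rather than a definitive implementation; the endpoint, key, document URLs, and delay values are placeholders.
+
+```python
+import time
+
+from azure.ai.formrecognizer import DocumentAnalysisClient
+from azure.core.credentials import AzureKeyCredential
+
+client = DocumentAnalysisClient("<your-endpoint>", AzureKeyCredential("<your-key>"))
+
+# Hypothetical list of documents to analyze.
+document_urls = ["<url-1>", "<url-2>", "<url-3>"]
+
+results = []
+for url in document_urls:
+    # Poll the GET analyze response at most once every 2 seconds.
+    poller = client.begin_analyze_document_from_url(
+        "prebuilt-layout", url, polling_interval=2
+    )
+    results.append(poller.result())
+    # Space out POST submissions to avoid sharp spikes in request volume.
+    time.sleep(1)
+```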
+
+Generally, it's highly recommended to test the workload and the workload patterns before going to production.
+
+## Next steps
+
+> [!div class="nextstepaction"]
+> [Learn about error codes and troubleshooting](v3-error-guide.md)
ai-services Studio Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/document-intelligence/studio-overview.md
+
+ Title: What is Document Intelligence Studio?
+
+description: Learn how to set up and use Document Intelligence Studio to test features of Azure AI Document Intelligence on the web.
+++++ Last updated : 07/18/2023+
+monikerRange: 'doc-intel-3.0.0'
+++
+<!-- markdownlint-disable MD033 -->
+# What is Document Intelligence Studio?
+
+**This article applies to:** ![Document Intelligence v3.0 checkmark](media/yes-icon.png) **Document Intelligence v3.0**.
+
+Document Intelligence Studio is an online tool to visually explore, understand, train, and integrate features from the Document Intelligence service into your applications. The studio provides a platform for you to experiment with the different Document Intelligence models and sample their returned data in an interactive manner without the need to write code.
+
+The studio supports Document Intelligence v3.0 models and v3.0 model training. Previously trained v2.1 models with labeled data are supported, but not v2.1 model training. Refer to the [REST API migration guide](v3-migration-guide.md) for detailed information about migrating from v2.1 to v3.0.
+
+## Get started using Document Intelligence Studio
+
+1. To use Document Intelligence Studio, you need the following assets:
+
+ * **Azure subscription** - [Create one for free](https://azure.microsoft.com/free/cognitive-services/).
+
+ * **Azure AI services or Document Intelligence resource**. Once you have your Azure subscription, create a [single-service](https://portal.azure.com/#create/Microsoft.CognitiveServicesFormRecognizer) or [multi-service](https://portal.azure.com/#create/Microsoft.CognitiveServicesAllInOne) resource, in the Azure portal to get your key and endpoint. Use the free pricing tier (`F0`) to try the service, and upgrade later to a paid tier for production.
+
+1. Navigate to the [Document Intelligence Studio](https://formrecognizer.appliedai.azure.com/). If it's your first time logging in, a popup window appears prompting you to configure your service resource. You have two options:
+
+ **a. Access by Resource**.
+
+ * Choose your existing subscription.
+ * Select an existing resource group within your subscription or create a new one.
+ * Select your existing Document Intelligence or Azure AI services resource.
+
+ :::image type="content" source="media/studio/welcome-to-studio.png" alt-text="Screenshot of the configure service resource window.":::
+
+ **b. Access by API endpoint and key**.
+
+ * Retrieve your endpoint and key from the Azure portal.
+ * Go to the overview page for your resource and select **Keys and Endpoint** from the left navigation bar.
+ * Enter the values in the appropriate fields.
+
+ :::image type="content" source="media/containers/keys-and-endpoint.png" alt-text="Screenshot: keys and endpoint location in the Azure portal.":::
+
+1. Once you've completed configuring your resource, you're able to try the different models offered by Document Intelligence Studio. From the front page, select any Document Intelligence model to try using with a no-code approach.
+
+ :::image type="content" source="media/studio/form-recognizer-studio-front.png" alt-text="Screenshot of Document Intelligence Studio front page.":::
+
+1. After you've tried Document Intelligence Studio, use the [**C#**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-3.0.0&preserve-view=true), [**Java**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-3.0.0&preserve-view=true), [**JavaScript**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-3.0.0&preserve-view=true) or [**Python**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-3.0.0&preserve-view=true) client libraries or the [**REST API**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-3.0.0&preserve-view=true) to get started incorporating Document Intelligence models into your own applications.
+
+To learn more about each model, *see* concept pages.
++
+### Manage your resource
+
+ To view resource details such as name and pricing tier, select the **Settings** icon in the top-right corner of the Document Intelligence Studio home page and select the **Resource** tab. If you have access to other resources, you can switch resources as well.
++
+With Document Intelligence, you can quickly automate your data processing in applications and workflows, easily enhance data-driven strategies, and skillfully enrich document search capabilities.
+
+## Next steps
+
+* Visit [Document Intelligence Studio](https://formrecognizer.appliedai.azure.com/studio) to begin using the models presented by the service.
+
+* For more information on Document Intelligence capabilities, see [Azure AI Document Intelligence overview](overview.md).
ai-services Supervised Table Tags https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/document-intelligence/supervised-table-tags.md
+
+ Title: "Train your custom template model with the sample-labeling tool and table tags"
+
+description: Learn how to effectively use supervised table tag labeling.
+++++ Last updated : 07/18/2023+
+#Customer intent: As a user of the Document Intelligence custom model service, I want to ensure I'm training my model in the best way.
+monikerRange: 'doc-intel-2.1.0'
+++
+# Train models with the sample-labeling tool
+
+**This article applies to:** ![Document Intelligence v2.1 checkmark](media/yes-icon.png) **Document Intelligence v2.1**.
+
+>[!TIP]
+>
+> * For an enhanced experience and advanced model quality, try the [Document Intelligence v3.0 Studio](https://formrecognizer.appliedai.azure.com/studio).
+> * The v3.0 Studio supports any model trained with v2.1 labeled data.
+> * You can refer to the [API migration guide](v3-migration-guide.md) for detailed information about migrating from v2.1 to v3.0.
+> * *See* our [**REST API**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-3.0.0&preserve-view=true) or [**C#**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-3.0.0&preserve-view=true), [**Java**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-3.0.0&preserve-view=true), [**JavaScript**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-3.0.0&preserve-view=true), or [Python](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-3.0.0&preserve-view=true) SDK quickstarts to get started with version v3.0.
+
+In this article, you'll learn how to train your custom template model with table tags (labels). Some scenarios require more complex labeling than simply aligning key-value pairs. Such scenarios include extracting information from forms with complex hierarchical structures or encountering items that aren't automatically detected and extracted by the service. In these cases, you can use table tags to train your custom template model.
+
+## When should I use table tags?
+
+Here are some examples of when using table tags would be appropriate:
+
+- There's data that you wish to extract presented as tables in your forms, and the structure of the tables is meaningful. For instance, each row of the table represents one item and each column of the row represents a specific feature of that item. In this case, you could use a table tag where a column represents features and a row represents information about each feature.
+- There's data you wish to extract that isn't presented in specific form fields but semantically, the data could fit in a two-dimensional grid. For instance, your form has a list of people that includes a first name, a last name, and an email address for each person. You would like to extract this information. In this case, you could use a table tag with first name, last name, and email address as columns, with each row populated with information about a person from your list.
+
+> [!NOTE]
+> Document Intelligence automatically finds and extracts all tables in your documents whether the tables are tagged or not. Therefore, you don't have to label every table from your form with a table tag and your table tags don't have to replicate the structure of every table found in your form. Tables extracted automatically by Document Intelligence will be included in the pageResults section of the JSON output.
+
+## Create a table tag with the Document Intelligence Sample Labeling tool
+<!-- markdownlint-disable MD004 -->
+* Determine whether you want a **dynamic** or **fixed-size** table tag. If the number of rows vary from document to document use a dynamic table tag. If the number of rows is consistent across your documents, use a fixed-size table tag.
+* If your table tag is dynamic, define the column names and the data type and format for each column.
+* If your table is fixed-size, define the column name, row name, data type, and format for each tag.
+
+## Label your table tag data
+
+* If your project has a table tag, you can open the labeling panel, and populate the tag as you would label key-value fields.
+
+## Next steps
+
+Follow our quickstart to train and use your custom Document Intelligence model:
+
+> [!div class="nextstepaction"]
+> [Train with labels using the Sample Labeling tool](label-tool.md)
+
+## See also
+
+* [Train a custom model with the Sample Labeling tool](label-tool.md)
ai-services Tutorial Azure Function https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/document-intelligence/tutorial-azure-function.md
+
+ Title: "Tutorial: Use an Azure Function to process stored documents"
+
+description: This guide shows you how to use an Azure function to trigger the processing of documents that are uploaded to an Azure blob storage container.
++++++ Last updated : 07/18/2023+++++
+# Tutorial: Use Azure Functions and Python to process stored documents
+
+Document Intelligence can be used as part of an automated data processing pipeline built with Azure Functions. This guide will show you how to use Azure Functions to process documents that are uploaded to an Azure blob storage container. This workflow extracts table data from stored documents using the Document Intelligence layout model and saves the table data in a .csv file in Azure. You can then display the data using Microsoft Power BI (not covered here).
++
+In this tutorial, you learn how to:
+
+> [!div class="checklist"]
+>
+> * Create an Azure Storage account.
+> * Create an Azure Functions project.
+> * Extract layout data from uploaded forms.
+> * Upload extracted layout data to Azure Storage.
+
+## Prerequisites
+
+* **Azure subscription** - [Create one for free](https://azure.microsoft.com/free/cognitive-services)
+
+* **A Document Intelligence resource**. Once you have your Azure subscription, create a [Document Intelligence resource](https://portal.azure.com/#create/Microsoft.CognitiveServicesFormRecognizer) in the Azure portal to get your key and endpoint. You can use the free pricing tier (`F0`) to try the service, and upgrade later to a paid tier for production.
+
+ * After your resource deploys, select **Go to resource**. You need the key and endpoint from the resource you create to connect your application to the Document Intelligence API. You'll paste your key and endpoint into the code below later in the tutorial:
+
+ :::image type="content" source="media/containers/keys-and-endpoint.png" alt-text="Screenshot: keys and endpoint location in the Azure portal.":::
+
+* [**Python 3.6.x, 3.7.x, 3.8.x or 3.9.x**](https://www.python.org/downloads/) (Python 3.10.x isn't supported for this project).
+
+* The latest version of [**Visual Studio Code**](https://code.visualstudio.com/) (VS Code) with the following extensions installed:
+
+ * [**Azure Functions extension**](https://marketplace.visualstudio.com/items?itemName=ms-azuretools.vscode-azurefunctions). Once it's installed, you should see the Azure logo in the left-navigation pane.
+
+ * [**Azure Functions Core Tools**](../../azure-functions/functions-run-local.md?tabs=v3%2cwindows%2ccsharp%2cportal%2cbash) version 3.x (Version 4.x isn't supported for this project).
+
+ * [**Python Extension**](https://marketplace.visualstudio.com/items?itemName=ms-python.python) for Visual Studio code. For more information, *see* [Getting Started with Python in VS Code](https://code.visualstudio.com/docs/python/python-tutorial)
+
+* [**Azure Storage Explorer**](https://azure.microsoft.com/features/storage-explorer/) installed.
+
+* **A local PDF document to analyze**. You can use our [sample pdf document](https://github.com/Azure-Samples/cognitive-services-REST-api-samples/blob/master/curl/form-recognizer/sample-layout.pdf) for this project.
+
+## Create an Azure Storage account
+
+1. [Create a general-purpose v2 Azure Storage account](https://portal.azure.com/#create/Microsoft.StorageAccount-ARM) in the Azure portal. If you don't know how to create an Azure storage account with a storage container, follow these quickstarts:
+
+ * [Create a storage account](../../storage/common/storage-account-create.md). When you create your storage account, select **Standard** performance in the **Instance details** > **Performance** field.
+ * [Create a container](../../storage/blobs/storage-quickstart-blobs-portal.md#create-a-container). When you create your container, set **Public access level** to **Container** (anonymous read access for containers and files) in the **New Container** window.
+
+1. On the left pane, select the **Resource sharing (CORS)** tab, and remove the existing CORS policy if any exists.
+
+1. Once your storage account has deployed, create two empty blob storage containers, named **input** and **output**.
+
+## Create an Azure Functions project
+
+1. Create a new folder named **functions-app** to contain the project and choose **Select**.
+
+1. Open Visual Studio Code and open the Command Palette (Ctrl+Shift+P). Search for and choose **Python: Select Interpreter** → choose an installed Python interpreter that is version 3.6.x, 3.7.x, 3.8.x, or 3.9.x. This selection adds the selected Python interpreter path to your project.
+
+1. Select the Azure logo from the left-navigation pane.
+
+ * You'll see your existing Azure resources in the Resources view.
+
+    * Select the Azure subscription that you're using for this project; you should see the Azure Function App below it.
+
+ :::image type="content" source="media/tutorial-azure-function/azure-extensions-visual-studio-code.png" alt-text="Screenshot of a list showing your Azure resources in a single, unified view.":::
+
+1. Select the Workspace (Local) section located below your listed resources. Select the plus symbol and choose the **Create Function** button.
+
+ :::image type="content" source="media/tutorial-azure-function/workspace-create-function.png" alt-text="Screenshot showing where to begin creating an Azure function.":::
+
+1. When prompted, choose **Create new project** and navigate to the **functions-app** directory. Choose **Select**.
+
+1. You'll be prompted to configure several settings:
+
+ * **Select a language** → choose Python.
+
+ * **Select a Python interpreter to create a virtual environment** → select the interpreter you set as the default earlier.
+
+ * **Select a template** → choose **Azure Blob Storage trigger** and give the trigger a name or accept the default name. Press **Enter** to confirm.
+
+ * **Select setting** → choose **➕Create new local app setting** from the dropdown menu.
+
+ * **Select subscription** → choose your Azure subscription with the storage account you created → select your storage account → then select the name of the storage input container (in this case, `input/{name}`). Press **Enter** to confirm.
+
+    * **Select how you would like to open your project** → choose **Open the project in the current window** from the dropdown menu.
+
+1. Once you've completed these steps, VS Code will add a new Azure Function project with a *\_\_init\_\_.py* Python script. This script will be triggered when a file is uploaded to the **input** storage container:
+
+ ```python
+ import logging
+
+ import azure.functions as func
++
+ def main(myblob: func.InputStream):
+ logging.info(f"Python blob trigger function processed blob \n"
+ f"Name: {myblob.name}\n"
+ f"Blob Size: {myblob.length} bytes")
+ ```
+
+## Test the function
+
+1. Press F5 to run the basic function. VS Code will prompt you to select a storage account to interface with.
+
+1. Select the storage account you created and continue.
+
+1. Open Azure Storage Explorer and upload the sample PDF document to the **input** container. Then check the VS Code terminal. The script should log that it was triggered by the PDF upload.
+
+ :::image type="content" source="media/tutorial-azure-function/visual-studio-code-terminal-test.png" alt-text="Screenshot of the VS Code terminal after uploading a new document.":::
+
+1. Stop the script before continuing.
+
+## Add document processing code
+
+Next, you'll add your own code to the Python script to call the Document Intelligence service and parse the uploaded documents using the Document Intelligence [layout model](concept-layout.md).
+
+1. In VS Code, navigate to the function's *requirements.txt* file. This file defines the dependencies for your script. Add the following Python packages to the file:
+
+ ```txt
+ cryptography
+ azure-functions
+ azure-storage-blob
+ azure-identity
+ requests
+ pandas
+ numpy
+ ```
+
+1. Then, open the *\_\_init\_\_.py* script. Add the following `import` statements:
+
+ ```Python
+ import logging
+ from azure.storage.blob import BlobServiceClient
+ import azure.functions as func
+ import json
+ import time
+ from requests import get, post
+ import os
+ import requests
+ from collections import OrderedDict
+ import numpy as np
+ import pandas as pd
+ ```
+
+1. You can leave the generated `main` function as-is. You'll add your custom code inside this function.
+
+ ```python
+ # This part is automatically generated
+ def main(myblob: func.InputStream):
+ logging.info(f"Python blob trigger function processed blob \n"
+ f"Name: {myblob.name}\n"
+ f"Blob Size: {myblob.length} bytes")
+ ```
+
+1. The following code block calls the Document Intelligence [Analyze Layout](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-v2-1/operations/AnalyzeLayoutAsync) API on the uploaded document. Fill in your endpoint and key values.
+
+ ```Python
+ # This is the call to the Document Intelligence endpoint
+ endpoint = r"Your Document Intelligence Endpoint"
+ apim_key = "Your Document Intelligence Key"
+ post_url = endpoint + "/formrecognizer/v2.1/layout/analyze"
+ source = myblob.read()
+
+ headers = {
+ # Request headers
+ 'Content-Type': 'application/pdf',
+ 'Ocp-Apim-Subscription-Key': apim_key,
+ }
+
+ text1=os.path.basename(myblob.name)
+ ```
+
+ > [!IMPORTANT]
+ > Remember to remove the key from your code when you're done, and never post it publicly. For production, use a secure way of storing and accessing your credentials like [Azure Key Vault](../../key-vault/general/overview.md). For more information, *see* Azure AI services [security](../../ai-services/security-features.md).
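+
+    If you'd rather not hardcode the endpoint and key while you develop, one option (a minimal sketch, assuming you define the settings yourself; the setting names are illustrative) is to read them from application settings, which Azure Functions exposes to the script as environment variables. Locally, add the values to the `Values` section of *local.settings.json*; in Azure, add them to the Function App's application settings.
+
+    ```python
+    # Minimal sketch: read credentials from app settings instead of hardcoding them.
+    # FORM_RECOGNIZER_ENDPOINT and FORM_RECOGNIZER_KEY are illustrative setting names.
+    import os
+
+    endpoint = os.environ["FORM_RECOGNIZER_ENDPOINT"]
+    apim_key = os.environ["FORM_RECOGNIZER_KEY"]
+    post_url = endpoint + "/formrecognizer/v2.1/layout/analyze"
+    ```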
+
+1. Next, add code to query the service and get the returned data.
+
+ ```python
+ resp = requests.post(url=post_url, data=source, headers=headers)
+
+ if resp.status_code != 202:
+ print("POST analyze failed:\n%s" % resp.text)
+ quit()
+ print("POST analyze succeeded:\n%s" % resp.headers)
+ get_url = resp.headers["operation-location"]
+
+ wait_sec = 25
+
+ time.sleep(wait_sec)
+ # The layout API is async therefore the wait statement
+
+ resp = requests.get(url=get_url, headers={"Ocp-Apim-Subscription-Key": apim_key})
+
+ resp_json = json.loads(resp.text)
+
+ status = resp_json["status"]
+
+ if status == "succeeded":
+ print("POST Layout Analysis succeeded:\n%s")
+ results = resp_json
+ else:
+ print("GET Layout results failed:\n%s")
+ quit()
+
+ results = resp_json
+
+ ```
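+
+    Because the layout API is asynchronous, a single fixed `time.sleep` call can be too short for larger documents. An optional refinement (a sketch of ours, not part of the tutorial script; the helper name is illustrative) is to poll the operation URL until the status reaches `succeeded` or `failed`:
+
+    ```python
+    # Sketch: poll the async layout operation instead of waiting a fixed 25 seconds.
+    import time
+    import requests
+
+    def wait_for_layout_result(get_url, key, interval_sec=2, max_tries=30):
+        """Return the operation JSON once its status is terminal, else raise."""
+        for _ in range(max_tries):
+            poll = requests.get(url=get_url, headers={"Ocp-Apim-Subscription-Key": key})
+            poll_json = poll.json()
+            if poll_json.get("status") in ("succeeded", "failed"):
+                return poll_json
+            time.sleep(interval_sec)  # still notStarted or running
+        raise TimeoutError("Layout analysis did not complete in time")
+    ```
+
+    You could then call `wait_for_layout_result(get_url, apim_key)` in place of the fixed wait and keep the rest of the script unchanged.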
+
+1. Add the following code to connect to the Azure Storage **output** container. Fill in your own values for the storage account name and key. You can get the key on the **Access keys** tab of your storage resource in the Azure portal.
+
+ ```Python
+ # This is the connection to the blob storage, with the Azure Python SDK
+    blob_service_client = BlobServiceClient.from_connection_string("DefaultEndpointsProtocol=https;AccountName=<storage-account-name>;AccountKey=<storage-account-key>;EndpointSuffix=core.windows.net")
+ container_client=blob_service_client.get_container_client("output")
+ ```
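+
+    As an alternative to embedding the account key in a connection string, you could authenticate with Azure AD by using `DefaultAzureCredential` from the `azure-identity` package that's already listed in *requirements.txt*. The following is a minimal sketch, assuming your identity has data-plane access to the storage account (for example, the Storage Blob Data Contributor role); the account URL is a placeholder:
+
+    ```python
+    # Sketch: connect to Blob Storage with Azure AD instead of an account key.
+    from azure.identity import DefaultAzureCredential
+    from azure.storage.blob import BlobServiceClient
+
+    account_url = "https://<storage-account-name>.blob.core.windows.net"  # placeholder
+    blob_service_client = BlobServiceClient(account_url, credential=DefaultAzureCredential())
+    container_client = blob_service_client.get_container_client("output")
+    ```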
+
+ The following code parses the returned Document Intelligence response, constructs a .csv file, and uploads it to the **output** container.
+
+ > [!IMPORTANT]
+ > You will likely need to edit this code to match the structure of your own form documents.
+
+ ```python
+ # The code below extracts the json format into tabular data.
+ # Please note that you need to adjust the code below to your form structure.
+ # It probably won't work out-of-the-box for your specific form.
+ pages = results["analyzeResult"]["pageResults"]
+
+ def make_page(p):
+ res=[]
+ res_table=[]
+ y=0
+ page = pages[p]
+ for tab in page["tables"]:
+ for cell in tab["cells"]:
+ res.append(cell)
+ res_table.append(y)
+ y=y+1
+
+ res_table=pd.DataFrame(res_table)
+ res=pd.DataFrame(res)
+ res["table_num"]=res_table[0]
+ h=res.drop(columns=["boundingBox","elements"])
+ h.loc[:,"rownum"]=range(0,len(h))
+ num_table=max(h["table_num"])
+ return h, num_table, p
+
+ h, num_table, p= make_page(0)
+
+ for k in range(num_table+1):
+ new_table=h[h.table_num==k]
+ new_table.loc[:,"rownum"]=range(0,len(new_table))
+ row_table=pages[p]["tables"][k]["rows"]
+ col_table=pages[p]["tables"][k]["columns"]
+ b=np.zeros((row_table,col_table))
+ b=pd.DataFrame(b)
+ s=0
+ for i,j in zip(new_table["rowIndex"],new_table["columnIndex"]):
+ b.loc[i,j]=new_table.loc[new_table.loc[s,"rownum"],"text"]
+ s=s+1
+
+ ```
+
+1. Finally, the last block of code uploads the extracted table and text data to your blob storage element.
+
+ ```Python
+ # Here is the upload to the blob storage
+ tab1_csv=b.to_csv(header=False,index=False,mode='w')
+ name1=(os.path.splitext(text1)[0]) +'.csv'
+ container_client.upload_blob(name=name1,data=tab1_csv)
+ ```
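+
+    If you upload a document with the same name twice, `upload_blob` raises a `ResourceExistsError` because the .csv already exists in the container. A small optional tweak (not part of the tutorial script) is to pass `overwrite=True` so the existing file is replaced:
+
+    ```python
+    # Optional tweak: replace an existing .csv that has the same name.
+    container_client.upload_blob(name=name1, data=tab1_csv, overwrite=True)
+    ```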
+
+## Run the function
+
+1. Press F5 to run the function again.
+
+1. Use Azure Storage Explorer to upload a sample PDF form to the **input** storage container. This action should trigger the script to run, and you should then see the resulting .csv file (displayed as a table) in the **output** container.
+
+You can connect this container to Power BI to create rich visualizations of the data it contains.
+
+## Next steps
+
+In this tutorial, you learned how to use an Azure Function written in Python to automatically process uploaded PDF documents and output their contents in a more data-friendly format. Next, learn how to use Power BI to display the data.
+
+> [!div class="nextstepaction"]
+> [Microsoft Power BI](https://powerbi.microsoft.com/integrations/azure-table-storage/)
+
+* [What is Document Intelligence?](overview.md)
+* Learn more about the [layout model](concept-layout.md)
ai-services Tutorial Logic Apps https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/document-intelligence/tutorial-logic-apps.md
+
+ Title: Use Document intelligence with Azure Logic Apps
+
+description: A tutorial introducing how to use Document intelligence with Logic Apps.
+++++ Last updated : 07/18/2023+
+monikerRange: '<=doc-intel-3.0.0'
++
+# Create a Document Intelligence Logic Apps workflow
++++
+> [!IMPORTANT]
+>
+> This tutorial and the Logic App Document Intelligence connector target Document Intelligence REST API v3.0 and later.
+++
+> [!IMPORTANT]
+>
+> This tutorial and the Logic App Document Intelligence connector target Document Intelligence REST API v2.1 and must be used with the [FOTT Sample Labeling tool](https://fott-2-1.azurewebsites.net/).
++
+Azure Logic Apps is a cloud-based platform that can be used to automate workflows without writing a single line of code. The platform enables you to easily integrate Microsoft and third-party applications with your apps, data, services, and systems. A Logic App is the Azure resource you create when you want to develop a workflow. Here are a few examples of what you can do with a Logic App:
+
+* Create business processes and workflows visually.
+* Integrate workflows with software as a service (SaaS) and enterprise applications.
+* Automate enterprise application integration (EAI), business-to-business (B2B), and electronic data interchange (EDI) tasks.
+
+For more information, *see* [Logic Apps Overview](../../logic-apps/logic-apps-overview.md).
+
+ In this tutorial, we show you how to build a Logic App connector flow to automate the following tasks:
+
+> [!div class="checklist"]
+>
+> * Detect when an invoice has been added to a OneDrive folder.
+> * Process the invoice using the Document Intelligence prebuilt-invoice model.
+> * Send the extracted information from the invoice to a pre-specified email address.
+
+## Prerequisites
+
+To complete this tutorial, you need the following resources:
+
+* **An Azure subscription**. You can [create a free Azure subscription](https://azure.microsoft.com/free/cognitive-services/)
+
+* A free [**OneDrive**](https://onedrive.live.com/signup) or [**OneDrive for Business**](https://www.microsoft.com/microsoft-365/onedrive/onedrive-for-business) cloud storage account.
+
+ > [!NOTE]
+ >
+ > * OneDrive is intended for personal storage.
+ > * OneDrive for Business is part of Office 365 and is designed for organizations. It provides cloud storage where you can store, share, and sync all work files.
+ >
+
+* A free [**Outlook online**](https://signup.live.com/signup.aspx?lic=1&mkt=en-ca) or [**Office 365**](https://www.microsoft.com/microsoft-365/outlook/email-and-calendar-software-microsoft-outlook) email account.
+
+* **A sample invoice to test your Logic App**. You can download and use our [sample invoice document](https://raw.githubusercontent.com/Azure-Samples/cognitive-services-REST-api-samples/master/curl/form-recognizer/invoice-logic-apps-tutorial.pdf) for this tutorial.
+
+* **A Document Intelligence resource**. Once you have your Azure subscription, [create a Document Intelligence resource](https://portal.azure.com/#create/Microsoft.CognitiveServicesFormRecognizer) in the Azure portal to get your key and endpoint. If you have an existing Document Intelligence resource, navigate directly to your resource page. You can use the free pricing tier (F0) to try the service, and upgrade later to a paid tier for production.
+
+ * After the resource deploys, select **Go to resource**. Copy the **Keys and Endpoint** values from your resource in the Azure portal and paste them in a convenient location, such as *Microsoft Notepad*. You need the key and endpoint values to connect your application to the Document Intelligence API. For more information, *see* [**create a Document Intelligence resource**](create-document-intelligence-resource.md).
+
+ :::image border="true" type="content" source="media/containers/keys-and-endpoint.png" alt-text="Screenshot showing how to access resource key and endpoint URL.":::
+
+## Create a OneDrive folder
+
+Before we jump into creating the Logic App, we have to set up a OneDrive folder.
+
+1. Sign in to your [OneDrive](https://onedrive.live.com/) or [**OneDrive for Business**](https://www.microsoft.com/microsoft-365/onedrive/onedrive-for-business) home page.
+
+1. Select the **➕ Add New** button in the upper-left corner sidebar and select **Folder**.
+
+ :::image type="content" source="media/logic-apps-tutorial/add-new-folder.png" alt-text="Screenshot of add-new button. ":::
+
+1. Enter a name for your new folder and select **Create**.
+
+ :::image type="content" source="media/logic-apps-tutorial/create-folder.png" alt-text="Screenshot of create and name folder window.":::
+
+1. You see the new folder in your files.
+
+ :::image type="content" source="media/logic-apps-tutorial/new-file.png" alt-text="Screenshot of the newly created file.":::
+
+1. We're done with OneDrive for now.
+
+## Create a Logic App resource
+
+At this point, you should have a Document Intelligence resource and a OneDrive folder all set. Now, it's time to create a Logic App resource.
+
+1. Navigate to the [Azure portal](https://ms.portal.azure.com/#home).
+
+1. Select **➕ Create a resource** from the Azure home page.
+
+ :::image type="content" source="media/logic-apps-tutorial/azure-create-resource.png" alt-text="Screenshot of create a resource in the Azure portal.":::
+
+1. Search for and choose **Logic App** from the search bar.
+
+1. Select the **Create** button.
+
+ :::image type="content" source="media/logic-apps-tutorial/create-logic-app.png" alt-text="Screenshot of the Create Logic App page.":::
+
+1. Next, you're going to fill out the **Create Logic App** fields with the following values:
+
+ * **Subscription**. Select your current subscription.
+ * **Resource group**. The [Azure resource group](/azure/cloud-adoption-framework/govern/resource-consistency/resource-access-management#what-is-an-azure-resource-group) that contains your resource. Choose the same resource group you have for your Document Intelligence resource.
+ * **Type**. Select **Consumption**. The Consumption resource type runs in global, multi-tenant Azure Logic Apps and uses the [Consumption billing model](../../logic-apps/logic-apps-pricing.md#consumption-pricing).
+ * **Logic App name**. Enter a name for your resource. We recommend using a descriptive name, for example *YourNameLogicApp*.
+ * **Publish**. Select **Workflow**.
+ * **Region**. Select your local region.
+ * **Enable log analytics**. For this project, select **No**.
+ * **Plan Type**. Select **Consumption**. The Consumption resource type runs in global, multi-tenant Azure Logic Apps and uses the [Consumption billing model](../../logic-apps/logic-apps-pricing.md#consumption-pricing).
+ * **Zone Redundancy**. Select **disabled**.
+
+1. When you're done, you have something similar to the following image (Resource group, Logic App name, and Region may be different). After checking these values, select **Review + create** in the bottom-left corner.
+
+ :::image border="true" type="content" source="media/logic-apps-tutorial/create-logic-app-fields.png" alt-text="Screenshot showing field values to create a Logic App resource.":::
+
+1. A short validation check runs. After it completes successfully, select **Create** in the bottom-left corner.
+
+1. Next, you're redirected to a screen that says **Deployment in progress**. Give Azure some time to deploy; it can take a few minutes. After the deployment is complete, you see a banner that says, **Your deployment is complete**. When you reach this screen, select **Go to resource**.
+
+1. Finally, you're redirected to the **Logic Apps Designer** page. There's a short video for a quick introduction to Logic Apps available on the home screen. When you're ready to begin designing your Logic App, select the **Blank Logic App** button from the **Templates** section.
+
+ :::image border="true" type="content" source="media/logic-apps-tutorial/logic-app-designer-templates.png" alt-text="Screenshot showing how to enter the Logic App Designer.":::
+
+1. You see a screen that looks similar to the following image. Now, you're ready to start designing and implementing your Logic App.
+
+ :::image border="true" type="content" source="media/logic-apps-tutorial/logic-app-designer.png" alt-text="Screenshot of the Logic App Designer start page.":::
+
+## Create an automation flow
+
+Now that you have the Logic App connector resource set up and configured, let's create the automation flow and test it out!
+
+1. Search for and select **OneDrive** or **OneDrive for Business** in the search bar. Then, select the **When a file is created** trigger.
+
+ :::image type="content" source="media/logic-apps-tutorial/one-drive-setup.png" alt-text="Screenshot of the OneDrive connector and trigger selection page.":::
+
+1. Next, a pop-up window appears, prompting you to log into your OneDrive account. Select **Sign in** and follow the prompts to connect your account.
+
+ > [!TIP]
+ > If you try to sign into the OneDrive connector using an Office 365 account, you may receive the following error: ***Sorry, we can't sign you in here with your @MICROSOFT.COM account.***
+ >
+    > * This error happens because OneDrive is a cloud-based storage for personal use that can be accessed with an Outlook.com or Microsoft Live account, not with an Office 365 account.
+ > * You can use the OneDrive for Business connector if you want to use an Office 365 account. Make sure that you have [created a OneDrive Folder](#create-a-onedrive-folder) for this project in your OneDrive for Business account.
+
+1. After your account is connected, select the folder you created earlier in your **OneDrive** or **OneDrive for Business** account. Leave the other default values in place.
+
+ :::image type="content" source="media/logic-apps-tutorial/when-file-created.png" alt-text="Screenshot of the When a file is created window.":::
++
+4. Next, we're going to add a new step to the workflow. Select the **➕ New step** button underneath the newly created OneDrive node.
+
+ :::image type="content" source="media/logic-apps-tutorial/one-drive-trigger-setup.png" alt-text="Screenshot of the OneDrive trigger setup.":::
+
+1. A new node is added to the Logic App designer view. Search for "Form Recognizer (Document Intelligence forthcoming)" in the **Choose an operation** search bar and select **Analyze Document for Prebuilt or Custom models (v3.0 API)** from the list.
+
+ :::image type="content" source="media/logic-apps-tutorial/analyze-prebuilt-document-action.png" alt-text="Screenshot of the Analyze Document for Prebuilt or Custom models (v3.0 API) selection button.":::
+
+1. Now, you see a window to create your connection. Specifically, you're going to connect your Document Intelligence resource to the Logic Apps Designer Studio:
+
+ * Enter a **Connection name**. It should be something easy to remember.
+ * Enter the Document Intelligence resource **Endpoint URL** and **Account Key** that you copied previously. If you skipped this step earlier or lost the strings, you can navigate back to your Document Intelligence resource and copy them again. When you're done, select **Create**.
+
+ :::image type="content" source="media/logic-apps-tutorial/create-logic-app-connector.png" alt-text="Screenshot of the logic app connector dialog window":::
+
+1. You see the selection parameters window for the **Analyze Document for Prebuilt or Custom Models (v3.0 API)** connector.
+
+ :::image type="content" source="media/logic-apps-tutorial/prebuilt-model-select-window.png" alt-text="Screenshot of the prebuilt model selection window.":::
+
+1. Complete the fields as follows:
+
+    * **Model Identifier**. Specify which model you want to call. In this case, we're calling the prebuilt invoice model, so enter **prebuilt-invoice**.
+    * **Document/Image File Content**. Select this field. A dynamic content pop-up appears. If it doesn't, select the **Add dynamic content** button below the field and choose **File content**. This step is essentially sending the file(s) to be analyzed to the Document Intelligence prebuilt-invoice model. Once you see the **File content** badge appear in the **Document/Image File Content** field, you've completed this step correctly.
+ * **Document/Image URL**. Skip this field for this project because we're already pointing to the file content directly from the OneDrive folder.
+ * **Add new parameter**. Skip this field for this project.
+
+ :::image type="content" source="media/logic-apps-tutorial/add-file-content.png" alt-text="Screenshot of add file content window.":::
+
+1. We need to add a few more steps. Once again, select the **➕ New step** button to add another action.
+
+1. In the **Choose an operation** search bar, enter *Control* and select the **Control** tile.
+
+ :::image type="content" source="media/logic-apps-tutorial/select-control-tile.png" alt-text="Screenshot of the control tile from the choose an operation menu.":::
+
+1. Scroll down and select the **For each Control** tile from the **Control** list.
+
+ :::image type="content" source="media/logic-apps-tutorial/for-each-tile.png" alt-text="Screenshot of the For each Control tile from the Control menu. ":::
+
+1. In the **For each** step window, there's a field labeled **Select an output from previous steps**. Select this field. A dynamic content pop-up appears. If it doesn't, select the **Add dynamic content** button below the field and choose **documents**.
+
+ :::image type="content" source="media/logic-apps-tutorial/dynamic-content-documents.png" alt-text="Screenshot of the dynamic content list.":::
+
+1. Now, select **Add an Action** from within the **For each** step window.
+
+1. In the **Choose an operation** search bar, enter *Outlook* and select **Outlook.com** (personal) or **Office 365 Outlook** (work).
+
+1. In the actions list, scroll down until you find **Send an email (V2)** and select this action.
+
+ :::image type="content" source="media/logic-apps-tutorial/send-email.png" alt-text="Screenshot of Send an email (V2) action button.":::
+
+1. Just like with OneDrive, you're asked to sign into your Outlook or Office 365 Outlook account. After you sign in, you see a window where we format the email with the dynamic content that Document Intelligence extracts from the invoice.
+
+1. We're going to use the following expression to complete some of the fields:
+
+ ```powerappsfl
+
+ items('For_each')?['fields']?['FIELD-NAME']?['content']
+ ```
+
+1. To access a specific field, select the **Add dynamic content** button and then select the **Expression** tab.
+
+ :::image type="content" source="media/logic-apps-tutorial/function-expression-field.png" alt-text="Screenshot of the expression function field.":::
+
+1. In the **ƒx** box, copy and paste the above formula and replace **FIELD-NAME** with the name of the field we want to extract. For the full list of available fields, refer to the concept page for the given API. In this case, we use the [prebuilt-invoice model field extraction values](concept-invoice.md#field-extraction).
+
+1. We're almost done! Make the following changes to the following fields:
+
+ * **To**. Enter your personal or business email address or any other email address you have access to.
+
+ * **Subject**. Enter ***Invoice received from:*** and then add the following expression:
+
+ ```powerappsfl
+
+ items('For_each')?['fields']?['VendorName']?['content']
+ ```
+
+ * **Body**. We're going to add specific information about the invoice:
+
+ * Type ***Invoice ID:*** and, using the same method as before, append the following expression:
+
+ ```powerappsfl
+
+ items('For_each')?['fields']?['InvoiceId']?['content']
+ ```
+
+ * On a new line type ***Invoice due date:*** and append the following expression:
+
+ ```powerappsfl
+
+ items('For_each')?['fields']?['DueDate']?['content']
+ ```
+
+ * Type ***Amount due:*** and append the following expression:
+
+ ```powerappsfl
+
+ items('For_each')?['fields']?['AmountDue']?['content']
+ ```
+
+    * Lastly, because the amount due is an important number, we also want to send the confidence score for this extraction in the email. To do this, type ***Amount due (confidence):*** and append the following expression:
+
+ ```powerappsfl
+
+ items('For_each')?['fields']?['AmountDue']?['confidence']
+ ```
+
+ * When you're done, the window looks similar to the following image:
+
+ :::image type="content" source="media/logic-apps-tutorial/send-email-functions.png" alt-text="Screenshot of the Send an email (V2) window with completed fields.":::
+
+1. Select **Save** in the upper-left corner.
+
+ :::image type="content" source="media/logic-apps-tutorial/logic-app-designer-save.png" alt-text="Screenshot of the Logic Apps Designer save button.":::
+
+> [!NOTE]
+>
+> * This current version only returns a single invoice per PDF.
+> * The "For each loop" is required around the send email action to enable an output format that may return more than one invoice from PDFs in the future.
+++
+4. Next, we're going to add a new step to the workflow. Select the **➕ New step** button underneath the newly created OneDrive node.
+
+1. A new node is added to the Logic App designer view. Search for "Form Recognizer (Document Intelligence forthcoming)" in the **Choose an operation** search bar and select **Analyze invoice** from the list.
+
+ :::image type="content" source="media/logic-apps-tutorial/analyze-invoice-v-2.png" alt-text="Screenshot of Analyze Invoice action.":::
+
+1. Now, you see a window where you create your connection. Specifically, you're going to connect your Form Recognizer resource to the Logic Apps Designer Studio:
+
+ * Enter a **Connection name**. It should be something easy to remember.
+ * Enter the Form Recognizer resource **Endpoint URL** and **Account Key** that you copied previously. If you skipped this step earlier or lost the strings, you can navigate back to your Form Recognizer resource and copy them again. When you're done, select **Create**.
+
+ :::image type="content" source="media/logic-apps-tutorial/create-logic-app-connector.png" alt-text="Screenshot of the logic app connector dialog window.":::
+
+1. Next, you see the selection parameters window for the **Analyze Invoice** connector.
+
+ :::image type="content" source="media/logic-apps-tutorial/analyze-invoice-parameters.png" alt-text="Screenshot showing the analyze invoice window fields.":::
+
+1. Complete the fields as follows:
+
+    * **Document/Image File Content**. Select this field. A dynamic content pop-up appears. If it doesn't, select the **Add dynamic content** button below the field and choose **File content**. This step is essentially sending the file(s) to be analyzed to the Document Intelligence prebuilt-invoice model. Once you see the **File content** badge appear in the **Document/Image File Content** field, you've completed this step correctly.
+ * **Document/Image URL**. Skip this field for this project because we're already pointing to the file content directly from the OneDrive folder.
+ * **Include Text Details**. Select **Yes**.
+ * **Add new parameter**. Skip this field for this project.
+
+1. We need to add the last step. Once again, select the **➕ New step** button to add another action.
+
+1. In the **Choose an operation** search bar, enter *Outlook* and select **Outlook.com** (personal) or **Office 365 Outlook** (work).
+
+1. In the actions list, scroll down until you find **Send an email (V2)** and select this action.
+
+1. Just like with OneDrive, you're asked to sign into your **Outlook** or **Office 365 Outlook** account. After you sign in, you see a window like the following image. In this window, we're going to format the email to be sent with the dynamic content extracted from the invoice.
+
+ :::image type="content" source="media/logic-apps-tutorial/send-email.png" alt-text="Screenshot of Send an email (V2) action button.":::
+
+1. We're almost done! Type the following entries in the fields:
+
+ * **To**. Enter your personal or business email address or any other email address you have access to.
+
+ * **Subject**. Enter ***Invoice received from:*** and then append dynamic content **Vendor name field Vendor name**.
+
+ * **Body**. We're going to add specific information about the invoice:
+
+ * Type ***Invoice ID:*** and append the dynamic content **Invoice ID field Invoice ID**.
+
+ * On a new line type ***Invoice due date:*** and append the dynamic content **Invoice date field invoice date (date)**.
+
+ * Type ***Amount due:*** and append the dynamic content **Amount due field Amount due (number)**.
+
+    * Lastly, because the amount due is an important number, we also want to send the confidence score for this extraction in the email. To do this, type ***Amount due (confidence):*** and add the dynamic content **Amount due field confidence of amount due**. When you're done, the window looks similar to the following image.
+
+ :::image border="true" type="content" source="media/logic-apps-tutorial/send-email-fields-complete.png" alt-text="Screenshot of the completed Outlook fields.":::
+
+ > [!TIP]
+ > If you don't see the dynamic content display automatically, use the **Search dynamic content** bar to find field entries.
+
+1. Select **Save** in the upper-left corner.
+
+ :::image type="content" source="media/logic-apps-tutorial/logic-app-designer-save.png" alt-text="Screenshot of the Logic Apps Designer save button.":::
+
+ > [!NOTE]
+ >
+ > * This current version only returns a single invoice per PDF.
+ > * The "For each loop" around the send email action enables an output format that may return more than one invoice from PDFs in the future.
++
+## Test the automation flow
+
+Let's quickly review what we've done before we test our flow:
+
+> [!div class="checklist"]
+>
+> * We created a trigger. In this scenario, the trigger is activated when a file is created in a pre-specified folder in our OneDrive account.
+> * We added a Document Intelligence action to our flow. In this scenario, we decided to use the invoice API to automatically analyze an invoice from the OneDrive folder.
+> * We added an Outlook.com action to our flow. We sent some of the analyzed invoice data to a pre-determined email address.
+
+Now that we've created the flow, the last thing to do is to test it and make sure that we're getting the expected behavior.
+
+1. To test the Logic App, first open a new tab and navigate to the OneDrive folder you set up at the beginning of this tutorial. Then add this [sample invoice](https://raw.githubusercontent.com/Azure-Samples/cognitive-services-REST-api-samples/master/curl/form-recognizer/invoice-logic-apps-tutorial.pdf) to the folder.
+
+1. Return to the Logic App designer tab, select the **Run trigger** button, and then select **Run** from the drop-down menu.
+
+ :::image type="content" source="media/logic-apps-tutorial/trigger-run.png" alt-text="Screenshot of Run trigger and Run buttons.":::
+
+1. You see a message in the upper-right corner indicating that the trigger was successful:
+
+ :::image type="content" source="media/logic-apps-tutorial/trigger-successful.png" alt-text="Screenshot of Successful trigger message.":::
+
+1. Navigate to your Logic App overview page by selecting your app name link in the upper-left corner.
+
+ :::image type="content" source="media/logic-apps-tutorial/navigate-overview.png" alt-text="Screenshot of navigate to overview page link.":::
+
+1. Check the status to see whether the run succeeded or failed. You can select the status indicator to check which steps were successful.
+
+ :::image border="true" type="content" source="media/logic-apps-tutorial/succeeded-failed-indicator.png" alt-text="Screenshot of Succeeded or Failed status.":::
+
+1. If your run failed, check the failed step to ensure that you entered the correct information.
+
+ :::image type="content" source="media/logic-apps-tutorial/failed-run-step.png" alt-text="Screenshot of failed step.":::
+
+1. Once the run succeeds, check your email. There's a new email with the information we specified.
+
+ :::image type="content" source="media/logic-apps-tutorial/invoice-received.png" alt-text="Screenshot of received email message.":::
+
+1. Be sure to [disable or delete](../../logic-apps/manage-logic-apps-with-azure-portal.md#disable-or-enable-a-single-logic-app) your Logic App when you're done, so usage stops.
+
+ :::image type="content" source="media/logic-apps-tutorial/disable-delete.png" alt-text="Screenshot of disable and delete buttons.":::
+
+Congratulations! You've officially completed this tutorial.
+
+## Next steps
+
+> [!div class="nextstepaction"]
+> [Learn more about Document Intelligence models](concept-model-overview.md)
ai-services V3 Error Guide https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/document-intelligence/v3-error-guide.md
+
+ Title: "Reference: Document Intelligence Errors"
+
+description: Learn how errors are represented in Document Intelligence and find a list of possible errors returned by the service.
+++++ Last updated : 07/18/2023+
+monikerRange: 'doc-intel-3.0.0'
+++
+# Document Intelligence error guide v3.0
+
+Document Intelligence uses a unified design to represent all errors encountered in the REST APIs. Whenever an API operation returns a 4xx or 5xx status code, additional information about the error is returned in the response JSON body as follows:
+
+```json
+{
+ "error": {
+ "code": "InvalidRequest",
+ "message": "Invalid request.",
+ "innererror": {
+ "code": "InvalidContent",
+ "message": "The file format is unsupported or corrupted. Refer to documentation for the list of supported formats."
+ }
+ }
+}
+```
+
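+Because every failure follows this shape, client code can surface both the top-level and inner error codes in one place. The following is a minimal sketch (ours, not part of the service or an SDK) that summarizes the error from a raw response body:
+
+```python
+# Sketch: summarize a Document Intelligence v3.0 error response body.
+import json
+
+def describe_error(response_body: str) -> str:
+    error = json.loads(response_body).get("error", {})
+    inner = error.get("innererror", {})
+    summary = f"{error.get('code')}: {error.get('message')}"
+    if inner:
+        summary += f" ({inner.get('code')}: {inner.get('message')})"
+    return summary
+```
+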
+For long-running operations where multiple errors may be encountered, the top-level error code is set to the most severe error, with the individual errors listed under the *error.details* property. In such scenarios, the *target* property of each individual error specifies the trigger of the error.
+
+```json
+{
+ "status": "failed",
+ "createdDateTime": "2021-07-14T10:17:51Z",
+ "lastUpdatedDateTime": "2021-07-14T10:17:51Z",
+ "error": {
+ "code": "InternalServerError",
+ "message": "An unexpected error occurred.",
+ "details": [
+ {
+ "code": "InternalServerError",
+ "message": "An unexpected error occurred."
+ },
+ {
+ "code": "InvalidContentDimensions",
+ "message": "The input image dimensions are out of range. Refer to documentation for supported image dimensions.",
+ "target": "2"
+ }
+ ]
+ }
+}
+```
+
+The top-level *error.code* property can be one of the following error code messages:
+
+| Error Code | Message | Http Status |
+| -- | | -- |
+| InvalidRequest | Invalid request. | 400 |
+| InvalidArgument | Invalid argument. | 400 |
+| Forbidden | Access forbidden due to policy or other configuration. | 403 |
+| NotFound | Resource not found. | 404 |
+| MethodNotAllowed | The requested HTTP method is not allowed. | 405 |
+| Conflict | The request could not be completed due to a conflict. | 409 |
+| UnsupportedMediaType | Request content type is not supported. | 415 |
+| InternalServerError | An unexpected error occurred. | 500 |
+| ServiceUnavailable | A transient error has occurred. Try again. | 503 |
+
+When possible, more details are specified in the *inner error* property.
+
+| Top Error Code | Inner Error Code | Message |
+| -- | - | - |
+| Conflict | ModelExists | A model with the provided name already exists. |
+| Forbidden | AuthorizationFailed | Authorization failed: {details} |
+| Forbidden | InvalidDataProtectionKey | Data protection key is invalid: {details} |
+| Forbidden | OutboundAccessForbidden | The request contains a domain name that is not allowed by the current access control policy. |
+| InternalServerError | Unknown | Unknown error. |
+| InvalidArgument | InvalidContentSourceFormat | Invalid content source: {details} |
+| InvalidArgument | InvalidParameter | The parameter {parameterName} is invalid: {details} |
+| InvalidArgument | InvalidParameterLength | Parameter {parameterName} length must not exceed {maxChars} characters. |
+| InvalidArgument | InvalidSasToken | The shared access signature (SAS) is invalid: {details} |
+| InvalidArgument | ParameterMissing | The parameter {parameterName} is required. |
+| InvalidRequest | ContentSourceNotAccessible | Content is not accessible: {details} |
+| InvalidRequest | ContentSourceTimeout | Timeout while receiving the file from client. |
+| InvalidRequest | DocumentModelLimit | Account cannot create more than {maximumModels} models. |
+| InvalidRequest | DocumentModelLimitNeural | Account cannot create more than 10 custom neural models per month. Please contact support to request additional capacity. |
+| InvalidRequest | DocumentModelLimitComposed | Account cannot create a model with more than {details} component models. |
+| InvalidRequest | InvalidContent | The file is corrupted or format is unsupported. Refer to documentation for the list of supported formats. |
+| InvalidRequest | InvalidContentDimensions | The input image dimensions are out of range. Refer to documentation for supported image dimensions. |
+| InvalidRequest | InvalidContentLength | The input image is too large. Refer to documentation for the maximum file size. |
+| InvalidRequest | InvalidFieldsDefinition | Invalid fields: {details} |
+| InvalidRequest | InvalidTrainingContentLength | Training content contains {bytes} bytes. Training is limited to {maxBytes} bytes. |
+| InvalidRequest | InvalidTrainingContentPageCount | Training content contains {pages} pages. Training is limited to {pages} pages. |
+| InvalidRequest | ModelAnalyzeError | Could not analyze using a custom model: {details} |
+| InvalidRequest | ModelBuildError | Could not build the model: {details} |
+| InvalidRequest | ModelComposeError | Could not compose the model: {details} |
+| InvalidRequest | ModelNotReady | Model is not ready for the requested operation. Wait for training to complete or check for operation errors. |
+| InvalidRequest | ModelReadOnly | The requested model is read-only. |
+| InvalidRequest | NotSupportedApiVersion | The requested operation requires {minimumApiVersion} or later. |
+| InvalidRequest | OperationNotCancellable | The operation can no longer be canceled. |
+| InvalidRequest | TrainingContentMissing | Training data is missing: {details} |
+| InvalidRequest | UnsupportedContent | Content is not supported: {details} |
+| NotFound | ModelNotFound | The requested model was not found. It may have been deleted or is still building. |
+| NotFound | OperationNotFound | The requested operation was not found. The identifier may be invalid or the operation may have expired. |
ai-services V3 Migration Guide https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/document-intelligence/v3-migration-guide.md
+
+ Title: "How-to: Migrate your application from Document Intelligence v2.1 to v3.0."
+
+description: In this how-to guide, you'll learn the differences between Document Intelligence API v2.1 and v3.0. You'll also learn the changes you need to move to the newer version of the API.
+++++ Last updated : 07/18/2023+
+monikerRange: '<=doc-intel-3.0.0'
+++
+# Document Intelligence v3.0 migration
+
+> [!IMPORTANT]
+>
+> Document Intelligence REST API v3.0 introduces breaking changes in the REST API request and analyze response JSON.
+
+## Migrating from a v3.0 preview API version
+
+Preview APIs are periodically deprecated. If you're using a preview API version, please update your application to target the GA API version. To migrate from the 2021-09-30-preview, 2022-01-30-preview or the 2022-06-30-preview API versions to the `2022-08-31` (GA) API version using the SDK, update to the [current version of the language specific SDK](sdk-overview.md).
+
+> [!IMPORTANT]
+>
+> Preview API versions 2021-09-30-preview, 2022-01-30-preview and 2022-06-30-preview are being retired July 31st 2023. All analyze requests that use these API versions will fail. Custom neural models trained with any of these API versions will no longer be usable once the API versions are deprecated. All custom neural models trained with preview API versions will need to be retrained with the GA API version.
+
+The `2022-08-31` (GA) API has a few updates from the preview API versions:
+
+* Field rename: `boundingBox` is renamed to `polygon` to support non-quadrilateral polygon regions.
+* Field deleted: `entities` is removed from the result of the general document model.
+* Field rename: `documentLanguage.languageCode` is renamed to `locale`.
+* Added support for the HEIF format.
+* Added paragraph detection, with role classification, for layout and general document models.
+* Added support for parsed address fields.
+
+## Migrating from v2.1
+
+Document Intelligence v3.0 introduces several new features and capabilities:
+
+* [Document Intelligence REST API](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-3.0.0&preserve-view=true) has been redesigned for better usability.
+* [**General document (v3.0)**](concept-general-document.md) model is a new API that extracts text, tables, structure, and key-value pairs, from forms and documents.
+* [**Custom neural model (v3.0)**](concept-custom-neural.md) is a new custom model type to extract fields from structured and unstructured documents.
+* [**Receipt (v3.0)**](concept-receipt.md) model supports single-page hotel receipt processing.
+* [**ID document (v3.0)**](concept-id-document.md) model supports endorsements, restrictions, and vehicle classification extraction from US driver's licenses.
+* [**Custom model API (v3.0)**](concept-custom.md) supports signature detection for custom template models.
+* [**Custom model API (v3.0)**](overview.md) supports analysis of all the newly added prebuilt models. For a complete list of prebuilt models, see the [overview](overview.md) page.
+
+In this article, you'll learn the differences between Document Intelligence v2.1 and v3.0 and how to move to the newer version of the API.
+
+> [!CAUTION]
+>
+> * REST API **2022-08-31** release includes a breaking change in the REST API analyze response JSON.
+> * The `boundingBox` property is renamed to `polygon` in each instance.
+
+## Changes to the REST API endpoints
+
+ The v3.0 REST API combines the analysis operations for layout analysis, prebuilt models, and custom models into a single pair of operations by assigning **`documentModels`** and **`modelId`** to the layout analysis (prebuilt-layout) and prebuilt models.
+
+### POST request
+
+```http
+https://{your-form-recognizer-endpoint}/formrecognizer/documentModels/{modelId}?api-version=2022-08-31
+
+```
+
+### GET request
+
+```http
+https://{your-form-recognizer-endpoint}/formrecognizer/documentModels/{modelId}/AnalyzeResult/{resultId}?api-version=2022-08-31
+```
+
+### Analyze operation
+
+* The request payload and call pattern remain unchanged.
+* The Analyze operation specifies the input document and content-specific configurations; it returns the analyze result URL via the Operation-Location header in the response.
+* Poll this analyze result URL via a GET request to check the status of the analyze operation (the minimum recommended interval between requests is one second).
+* Upon success, status is set to `succeeded` and [analyzeResult](#changes-to-analyze-result) is returned in the response body. If errors are encountered, status is set to `failed` and an error is returned.
+
+| Model | v2.1 | v3.0 |
+|:--| :--| :--|
+| **Request URL prefix**| **https://{your-form-recognizer-endpoint}/formrecognizer/v2.1** | **https://{your-form-recognizer-endpoint}/formrecognizer** |
+| **General document**|N/A|`/documentModels/prebuilt-document:analyze` |
+| **Layout**| /layout/analyze |`/documentModels/prebuilt-layout:analyze`|
+|**Custom**| /custom/{modelId}/analyze |`/documentModels/{modelId}:analyze` |
+| **Invoice** | /prebuilt/invoice/analyze | `/documentModels/prebuilt-invoice:analyze` |
+| **Receipt** | /prebuilt/receipt/analyze | `/documentModels/prebuilt-receipt:analyze` |
+| **ID document** | /prebuilt/idDocument/analyze | `/documentModels/prebuilt-idDocument:analyze` |
+|**Business card**| /prebuilt/businessCard/analyze| `/documentModels/prebuilt-businessCard:analyze`|
+|**W-2**| /prebuilt/w-2/analyze| `/documentModels/prebuilt-w-2:analyze`|
+
+### Analyze request body
+
+The content to be analyzed is provided via the request body. Either a URL or base64-encoded data can be used to construct the request.
+
+ To specify a publicly accessible web URL, set the Content-Type header to **application/json** and send the following JSON body:
+
+ ```json
+ {
+ "urlSource": "{urlPath}"
+ }
+ ```
+
+Base64 encoding is also supported in Document Intelligence v3.0:
+
+```json
+{
+ "base64Source": "{base64EncodedContent}"
+}
+```
+
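+Putting the endpoint format, request body, and polling pattern together, a v3.0 analyze call might look like the following sketch (written with the Python `requests` library; the endpoint, key, and document URL are placeholders, and `prebuilt-invoice` is just one example model ID):
+
+```python
+# Sketch: analyze a document with the v3.0 REST API and poll for the result.
+import time
+import requests
+
+endpoint = "https://<your-form-recognizer-endpoint>"  # placeholder
+key = "<your-key>"                                    # placeholder
+analyze_url = (
+    f"{endpoint}/formrecognizer/documentModels/prebuilt-invoice:analyze"
+    "?api-version=2022-08-31"
+)
+
+response = requests.post(
+    analyze_url,
+    headers={"Ocp-Apim-Subscription-Key": key, "Content-Type": "application/json"},
+    json={"urlSource": "https://<publicly-accessible-url>/sample-invoice.pdf"},
+)
+response.raise_for_status()
+result_url = response.headers["Operation-Location"]  # URL to poll for the result
+
+result = None
+for _ in range(60):
+    poll = requests.get(result_url, headers={"Ocp-Apim-Subscription-Key": key}).json()
+    if poll["status"] in ("succeeded", "failed"):
+        result = poll
+        break
+    time.sleep(1)  # minimum recommended interval between requests
+```
+
+On success, `result["analyzeResult"]` contains the refactored response described in the next section.
+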
+### Additionally supported parameters
+
+Parameters that continue to be supported:
+
+* `pages`: Analyze only a specific subset of pages in the document. Provide a list of page numbers, indexed from `1`, to analyze. Example: "1-3,5,7-9"
+* `locale`: Locale hint for text recognition and document analysis. The value may contain only the language code (for example, "en", "fr") or a BCP 47 language tag (for example, "en-US").
+
+Parameters no longer supported:
+
+* includeTextDetails
+
+The new response format is more compact and the full output is always returned.
+
+## Changes to analyze result
+
+Analyze response has been refactored to the following top-level results to support multi-page elements.
+
+* `pages`
+* `tables`
+* `keyValuePairs`
+* `entities`
+* `styles`
+* `documents`
+
+> [!NOTE]
+>
+> The analyzeResult response includes several changes, such as elements that moved up from being properties of `pages` to top-level properties within `analyzeResult`.
+
+```json
+
+{
+  // Basic analyze result metadata
+  "apiVersion": "2022-08-31", // REST API version used
+  "modelId": "prebuilt-invoice", // ModelId used
+  "stringIndexType": "textElements", // Character unit used for string offsets and lengths:
+                                     // textElements, unicodeCodePoint, utf16CodeUnit
+  // Concatenated content in global reading order across pages.
+  // Words are generally delimited by space, except CJK (Chinese, Japanese, Korean) characters.
+  // Lines and selection marks are generally delimited by newline character.
+  // Selection marks are represented in Markdown emoji syntax (:selected:, :unselected:).
+  "content": "CONTOSO LTD.\nINVOICE\nContoso Headquarters...",
+  "pages": [ // List of pages analyzed
+    {
+      // Basic page metadata
+      "pageNumber": 1, // 1-indexed page number
+      "angle": 0, // Orientation of content in clockwise direction (degree)
+      "width": 0, // Page width
+      "height": 0, // Page height
+      "unit": "pixel", // Unit for width, height, and polygon coordinates
+      "spans": [ // Parts of top-level content covered by page
+        {
+          "offset": 0, // Offset in content
+          "length": 7 // Length in content
+        }
+      ],
+      // List of words in page
+      "words": [
+        {
+          "text": "CONTOSO", // Equivalent to $.content.Substring(span.offset, span.length)
+          "boundingBox": [ ... ], // Position in page
+          "confidence": 0.99, // Extraction confidence
+          "span": { ... } // Part of top-level content covered by word
+        }, ...
+      ],
+      // List of selectionMarks in page
+      "selectionMarks": [
+        {
+          "state": "selected", // Selection state: selected, unselected
+          "boundingBox": [ ... ], // Position in page
+          "confidence": 0.95, // Extraction confidence
+          "span": { ... } // Part of top-level content covered by selection mark
+        }, ...
+      ],
+      // List of lines in page
+      "lines": [
+        {
+          "content": "CONTOSO LTD.", // Concatenated content of line (may contain both words and selectionMarks)
+          "boundingBox": [ ... ], // Position in page
+          "spans": [ ... ] // Parts of top-level content covered by line
+        }, ...
+      ]
+    }, ...
+  ],
+  // List of extracted tables
+  "tables": [
+    {
+      "rowCount": 1, // Number of rows in table
+      "columnCount": 1, // Number of columns in table
+      "boundingRegions": [ // Polygons or bounding boxes, potentially across pages, covered by table
+        {
+          "pageNumber": 1, // 1-indexed page number
+          "polygon": [ ... ] // Previously boundingBox, renamed to polygon in the 2022-08-31 API
+        }
+      ],
+      "spans": [ ... ], // Parts of top-level content covered by table
+      // List of cells in table
+      "cells": [
+        {
+          "kind": "stub", // Cell kind: content (default), rowHeader, columnHeader, stub, description
+          "rowIndex": 0, // 0-indexed row position of cell
+          "columnIndex": 0, // 0-indexed column position of cell
+          "rowSpan": 1, // Number of rows spanned by cell (default=1)
+          "columnSpan": 1, // Number of columns spanned by cell (default=1)
+          "content": "SALESPERSON", // Concatenated content of cell
+          "boundingRegions": [ ... ], // Bounding regions covered by cell
+          "spans": [ ... ] // Parts of top-level content covered by cell
+        }, ...
+      ]
+    }, ...
+  ],
+  // List of extracted key-value pairs
+  "keyValuePairs": [
+    {
+      "key": { // Extracted key
+        "content": "INVOICE:", // Key content
+        "boundingRegions": [ ... ], // Key bounding regions
+        "spans": [ ... ] // Key spans
+      },
+      "value": { // Extracted value corresponding to key, if any
+        "content": "INV-100", // Value content
+        "boundingRegions": [ ... ], // Value bounding regions
+        "spans": [ ... ] // Value spans
+      },
+      "confidence": 0.95 // Extraction confidence
+    }, ...
+  ],
+  "styles": [
+    {
+      "isHandwritten": true, // Is content in this style handwritten?
+      "spans": [ ... ], // Spans covered by this style
+      "confidence": 0.95 // Detection confidence
+    }, ...
+  ],
+  // List of extracted documents
+  "documents": [
+    {
+      "docType": "prebuilt-invoice", // Classified document type (model dependent)
+      "boundingRegions": [ ... ], // Document bounding regions
+      "spans": [ ... ], // Document spans
+      "confidence": 0.99, // Document splitting/classification confidence
+      // List of extracted fields
+      "fields": {
+        "VendorName": { // Field name (docType dependent)
+          "type": "string", // Field value type: string, number, array, object, ...
+          "valueString": "CONTOSO LTD.", // Normalized field value
+          "content": "CONTOSO LTD.", // Raw extracted field content
+          "boundingRegions": [ ... ], // Field bounding regions
+          "spans": [ ... ], // Field spans
+          "confidence": 0.99 // Extraction confidence
+        }, ...
+      }
+    }, ...
+  ]
+}
+
+```
+
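+As a quick orientation to the new shape, the following sketch (ours) reads one field from a completed response, assuming `result` holds the JSON returned by the final GET request:
+
+```python
+# Sketch: read a field from the refactored v3.0 analyzeResult.
+analyze_result = result["analyzeResult"]
+for document in analyze_result.get("documents", []):
+    vendor = document.get("fields", {}).get("VendorName", {})
+    print(vendor.get("content"), vendor.get("confidence"))
+```
+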
+## Build or train model
+
+The model object has three updates in the new API:
+
+* ```modelId``` is now a property that you can set on a model to give it a human-readable name.
+* ```modelName``` has been renamed to ```description```.
+* ```buildMode``` is a new property with values of ```template``` for custom form models or ```neural``` for custom neural models.
+
+The ```build``` operation is invoked to train a model. The request payload and call pattern remain unchanged. The build operation specifies the model and training dataset; it returns the result via the Operation-Location header in the response. Poll this model operation URL via a GET request to check the status of the build operation (the minimum recommended interval between requests is one second). Unlike v2.1, this URL isn't the resource location of the model. Instead, the model URL can be constructed from the given modelId, which is also retrieved from the resourceLocation property in the response. Upon success, status is set to ```succeeded``` and the result contains the custom model info. If errors are encountered, status is set to ```failed``` and the error is returned.
+
+The following code is a sample build request using a SAS token. Note the trailing slash when setting the prefix or folder path.
+
+```json
+POST https://{your-form-recognizer-endpoint}/formrecognizer/documentModels:build?api-version=2022-08-31
+
+{
+ "modelId": {modelId},
+ "description": "Sample model",
+ "buildMode": "template",
+ "azureBlobSource": {
+ "containerUrl": "https://{storageAccount}.blob.core.windows.net/{containerName}?{sasToken}",
+ "prefix": "{folderName/}"
+ }
+}
+```
+
+## Changes to compose model
+
+Model compose is now limited to a single level of nesting. Composed models are now consistent with custom models, with the addition of ```modelId``` and ```description``` properties.
+
+```json
+POST https://{your-form-recognizer-endpoint}/formrecognizer/documentModels:compose?api-version=2022-08-31
+{
+ "modelId": "{composedModelId}",
+ "description": "{composedModelDescription}",
+ "componentModels": [
+ { "modelId": "{modelId1}" },
+    { "modelId": "{modelId2}" }
+ ]
+}
+
+```
+
+## Changes to copy model
+
+The call pattern for copy model remains unchanged:
+
+* Authorize the copy operation with the target resource calling ```authorizeCopy```. Now a POST request.
+* Submit the authorization to the source resource to copy the model calling ```copyTo```
+* Poll the returned operation to validate the operation completed successfully
+
+The only changes to the copy model function are:
+
+* HTTP action on the ```authorizeCopy``` is now a POST request.
+* The authorization payload contains all the information needed to submit the copy request.
+
+***Authorize the copy***
+
+```json
+POST https://{targetHost}/formrecognizer/documentModels:authorizeCopy?api-version=2022-08-31
+{
+ "modelId": "{targetModelId}",
+  "description": "{targetModelDescription}"
+}
+```
+
+Use the response body from the authorize action to construct the request for the copy.
+
+```json
+POST https://{sourceHost}/formrecognizer/documentModels/{sourceModelId}:copyTo?api-version=2022-08-31
+{
+ "targetResourceId": "{targetResourceId}",
+ "targetResourceRegion": "{targetResourceRegion}",
+ "targetModelId": "{targetModelId}",
+ "targetModelLocation": "https://{targetHost}/formrecognizer/documentModels/{targetModelId}",
+ "accessToken": "{accessToken}",
+ "expirationDateTime": "2021-08-02T03:56:11Z"
+}
+```
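+
+The three steps can be chained from code. The following is a minimal sketch (ours, with placeholder hosts, keys, and model IDs) that passes the authorization response directly as the copyTo request body:
+
+```python
+# Sketch: copy a model between resources by chaining authorizeCopy and copyTo.
+import requests
+
+target_host = "https://<target-form-recognizer-endpoint>"  # placeholder
+source_host = "https://<source-form-recognizer-endpoint>"  # placeholder
+target_key = "<target-key>"
+source_key = "<source-key>"
+
+# 1. Ask the target resource for a copy authorization.
+auth = requests.post(
+    f"{target_host}/formrecognizer/documentModels:authorizeCopy?api-version=2022-08-31",
+    headers={"Ocp-Apim-Subscription-Key": target_key},
+    json={"modelId": "<target-model-id>", "description": "Copied model"},
+).json()
+
+# 2. Submit the authorization payload to the source resource as the copyTo body.
+copy_response = requests.post(
+    f"{source_host}/formrecognizer/documentModels/<source-model-id>:copyTo?api-version=2022-08-31",
+    headers={"Ocp-Apim-Subscription-Key": source_key},
+    json=auth,
+)
+
+# 3. Poll the returned operation URL until the copy completes.
+operation_url = copy_response.headers["Operation-Location"]
+```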
+
+## Changes to list models
+
+The list models operation has been extended to return prebuilt and custom models. All prebuilt model names start with ```prebuilt-```. Only models with a status of succeeded are returned. To list models that either failed or are in progress, see [List Operations](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-v3-0-preview-1/operations/GetModels).
+
+***Sample list models request***
+
+```json
+GET https://{your-form-recognizer-endpoint}/formrecognizer/documentModels?api-version=2022-08-31
+```
+
+## Change to get model
+
+Because the get model operation now includes prebuilt models, it returns a ```docTypes``` dictionary. Each document type is described by its name, optional description, field schema, and optional field confidence. The field schema describes the list of fields potentially returned with the document type.
+
+```json
+GET https://{your-form-recognizer-endpoint}/formrecognizer/documentModels/{modelId}?api-version=2022-08-31
+```
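+
+An illustrative response for a custom model is sketched below; the document type name, field names, and confidence values are placeholders:
+
+```json
+{
+  "modelId": "{modelId}",
+  "description": "Sample model",
+  "createdDateTime": "{createdDateTime}",
+  "docTypes": {
+    "{docTypeName}": {
+      "description": "{docTypeDescription}",
+      "fieldSchema": {
+        "{fieldName}": { "type": "string" }
+      },
+      "fieldConfidence": {
+        "{fieldName}": 0.98
+      }
+    }
+  }
+}
+```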
+
+## New get info operation
+
+The ```info``` operation on the service returns the custom model count and custom model limit.
+
+```json
+GET https://{your-form-recognizer-endpoint}/formrecognizer/info?api-version=2022-08-31
+```
+
+***Sample response***
+
+```json
+{
+ "customDocumentModels": {
+ "count": 5,
+ "limit": 100
+ }
+}
+```
+
+## Next steps
+
+In this migration guide, you've learned how to upgrade your existing Document Intelligence application to use the v3.0 APIs.
+
+* [Review the new REST API](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-2022-08-31/operations/AnalyzeDocument)
+* [What is Document Intelligence?](overview.md)
+* [Document Intelligence quickstart](quickstarts/get-started-sdks-rest-api.md)
ai-services Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/document-intelligence/whats-new.md
+
+ Title: What's new in Document Intelligence?
+
+description: Learn the latest changes to the Document Intelligence API.
+++++ Last updated : 07/18/2023++
+monikerRange: '<=doc-intel-3.0.0'
++
+<!-- markdownlint-disable MD024 -->
+<!-- markdownlint-disable MD036 -->
+<!-- markdownlint-disable MD001 -->
+<!-- markdownlint-disable MD051 -->
+
+# What's new in Azure AI Document Intelligence
++
+Document Intelligence service is updated on an ongoing basis. Bookmark this page to stay up to date with release notes, feature enhancements, and our newest documentation.
+
+>[!NOTE]
+> With the release of the 2022-08-31 GA API, the associated preview APIs are being deprecated. If you are using the 2021-09-30-preview or the 2022-01-30-preview API versions, please update your applications to target the 2022-08-31 API version. There are a few minor changes involved; for more information, _see_ the [migration guide](v3-migration-guide.md).
+
+## July 2023
+
+> [!NOTE]
+> Form Recognizer is now Azure AI Document Intelligence!
+>
+> As of July 2023, Azure AI services encompass all of what were previously known as Azure Cognitive Services and Azure Applied AI Services. There are no changes to pricing. The names "Cognitive Services" and "Azure Applied AI" continue to be used in Azure billing, cost analysis, price list, and price APIs. There are no breaking changes to application programming interfaces (APIs) or SDKs.
+
+## May 2023
+
+**Introducing refreshed documentation for Build 2023**
+
+* [🆕 Document Intelligence Overview](overview.md?view=doc-intel-3.0.0&preserve-view=true) has enhanced navigation, structured access points, and enriched images.
+
+* [🆕 Choose a Document Intelligence model](choose-model-feature.md?view=doc-intel-3.0.0&preserve-view=true) provides guidance for choosing the best Document Intelligence solution for your projects and workflows.
+
+## April 2023
+
+**Announcing the latest Document Intelligence client-library public preview release**
+
+* Document Intelligence REST API Version [2023-02-28-preview](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-2023-02-28-preview/operations/AnalyzeDocument) supports the public preview release SDKs. This release includes the following new features and capabilities available for .NET/C# (4.1.0-beta.1), Java (4.1.0-beta.1), JavaScript (4.1.0-beta.1), and Python (3.3.0b1) SDKs:
+
+ * [**Custom classification model**](concept-custom-classifier.md)
+
+ * [**Query fields extraction**](concept-query-fields.md)
+
+ * [**Add-on capabilities**](concept-add-on-capabilities.md)
+
+* For more information, _see_ [**Document Intelligence SDK (public preview)**](./sdk-preview.md) and [March 2023 release](#march-2023) notes.
+
+## March 2023
+
+> [!IMPORTANT]
+> [**`2023-02-28-preview`**](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-2023-02-28-preview/operations/AnalyzeDocument) capabilities are currently only available in the following regions:
+>
+> * West Europe
+> * West US2
+> * East US
+
+* [**Custom classification model**](concept-custom-classifier.md) is a new capability within Document Intelligence starting with the ```2023-02-28-preview``` API. Try the document classification capability using the [Document Intelligence Studio](https://formrecognizer.appliedai.azure.com/studio/document-classifier/projects) or the [REST API](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-2023-02-28-preview/operations/GetClassifyDocumentResult).
+* [**Query fields**](concept-query-fields.md) capabilities have been added to the General Document model and use Azure OpenAI models to extract specific fields from documents. Try the **General documents with query fields** feature using the [Document Intelligence Studio](https://formrecognizer.appliedai.azure.com/studio). Query fields are currently only active for resources in the `East US` region.
+* [**Read**](concept-read.md#barcode-extraction) and [**Layout**](concept-layout.md#barcode-extraction) models support **barcode** extraction with the ```2023-02-28-preview``` API.
+* [**Add-on capabilities**](concept-add-on-capabilities.md)
+ * [**Font extraction**](concept-add-on-capabilities.md#font-property-extraction) is now recognized with the ```2023-02-28-preview``` API.
+ * [**Formula extraction**](concept-add-on-capabilities.md#formula-extraction) is now recognized with the ```2023-02-28-preview``` API.
+ * [**High resolution extraction**](concept-add-on-capabilities.md#high-resolution-extraction) is now recognized with the ```2023-02-28-preview``` API.
+* [**Common name key normalization**](concept-general-document.md#key-normalization-common-name) capabilities are added to the General Document model to improve processing forms with variations in key names.
+* [**Custom extraction model updates**](concept-custom.md)
+ * [**Custom neural model**](concept-custom-neural.md) now supports added languages for training and analysis. Train neural models for Dutch, French, German, Italian and Spanish.
+ * [**Custom template model**](concept-custom-template.md) now has an improved signature detection capability.
+* [**Document Intelligence Studio**](https://formrecognizer.appliedai.azure.com/studio) updates
+ * In addition to support for all the new features like classification and query fields, the Studio now enables project sharing for custom model projects.
+ * New model additions in gated preview: **Vaccination cards**, **Contracts**, **US Tax 1098**, **US Tax 1098-E**, and **US Tax 1098-T**. To request access to gated preview models, complete and submit the [**Document Intelligence private preview request form**](https://aka.ms/form-recognizer/preview/survey).
+* [**Receipt model updates**](concept-receipt.md)
+ * Receipt model has added support for thermal receipts.
+ * Receipt model now has added language support for 18 languages and three regional languages (English, French, Portuguese).
+ * Receipt model now supports `TaxDetails` extraction.
+* [**Layout model**](concept-layout.md) now has improved table recognition.
+* [**Read model**](concept-read.md) now has added improvement for single-digit character recognition.
++++
+## February 2023
+
+* Select Document Intelligence containers for v3.0 are now available for use!
+* Currently **Read v3.0** and **Layout v3.0** containers are available.
+
+ For more information, _see_ [Install and run Document Intelligence containers](containers/install-run.md?view=doc-intel-3.0.0&preserve-view=true)
++++
+## January 2023
+
+* Prebuilt receipt model - added language support. The receipt model now supports these added languages and locales:
+ * Japanese - Japan (ja-JP)
+ * French - Canada (fr-CA)
+ * Dutch - Netherlands (nl-NL)
+ * English - United Arab Emirates (en-AE)
+ * Portuguese - Brazil (pt-BR)
+
+* Prebuilt invoice model - added language support. The invoice model now supports these added languages and locales:
+ * English - United States (en-US), Australia (en-AU), Canada (en-CA), United Kingdom (en-UK), India (en-IN)
+ * Spanish - Spain (es-ES)
+ * French - France (fr-FR)
+ * Italian - Italy (it-IT)
+ * Portuguese - Portugal (pt-PT)
+ * Dutch - Netherlands (nl-NL)
+
+* Prebuilt invoice model - added field recognition. The invoice model now recognizes these added fields:
+ * Currency code
+ * Payment options
+ * Total discount
+ * Tax items (en-IN only)
+
+* Prebuilt ID model - added document type support. The ID model now supports this added document type:
+ * US Military ID
+
+> [!TIP]
+> All January 2023 updates are available with [REST API version **2022-08-31 (GA)**](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-2022-08-31/operations/AnalyzeDocument).
+
+* **[Prebuilt receipt model](concept-receipt.md#supported-languages-and-locales) - additional language support**:
+
+ The **prebuilt receipt model** now has added support for the following languages:
+
+ * English - United Arab Emirates (en-AE)
+ * Dutch - Netherlands (nl-NL)
+ * French - Canada (fr-CA)
+ * German - (de-DE)
+ * Italian - (it-IT)
+ * Japanese - Japan (ja-JP)
+ * Portuguese - Brazil (pt-BR)
+
+* **[Prebuilt invoice model](concept-invoice.md) - additional language support and field extractions**
+
+ The **prebuilt invoice model** now has added support for the following languages:
+
+ * English - Australia (en-AU), Canada (en-CA), United Kingdom (en-UK), India (en-IN)
+ * Portuguese - Brazil (pt-BR)
+
+ The **prebuilt invoice model** now has added support for the following field extractions:
+
+ * Currency code
+ * Payment options
+ * Total discount
+ * Tax items (en-IN only)
+
+* **[Prebuilt ID document model](concept-id-document.md#document-types) - additional document type support**
+
+ The **prebuilt ID document model** now has added support for the following document types:
+
+ * Driver's license expansion supporting India, Canada, United Kingdom and Australia
+ * US military ID cards and documents
+ * India ID cards and documents (PAN and Aadhaar)
+ * Australia ID cards and documents (photo card, Key-pass ID)
+ * Canada ID cards and documents (identification card, Maple card)
+ * United Kingdom ID cards and documents (national/regional identity card)
++++
+## December 2022
+
+* [**Document Intelligence Studio updates**](https://formrecognizer.appliedai.azure.com/studio)
+
+ The December Document Intelligence Studio release includes the latest updates to Document Intelligence Studio. There are significant improvements to user experience, primarily with custom model labeling support.
+
+ * **Page range**. The Studio now supports analyzing specified pages from a document.
+
+ * **Custom model labeling**:
+
+ * **Run Layout API automatically**. You can opt to run the Layout API for all documents automatically in your blob storage during the setup process for custom model.
+
+ * **Search**. The Studio now includes search functionality to locate words within a document. This improvement allows for easier navigation while labeling.
+
+ * **Navigation**. You can select labels to target labeled words within a document.
+
+ * **Auto table labeling**. After you select the table icon within a document, you can opt to autolabel the extracted table in the labeling view.
+
+   * **Label subtypes and second-level subtypes**. The Studio now supports subtypes for table columns, table rows, and second-level subtypes for types such as dates and numbers.
+
+* Building custom neural models is now supported in the US Gov Virginia region.
+
+* Preview API versions ```2022-01-30-preview``` and ```2021-09-30-preview``` will be retired on January 31, 2023. Update to the ```2022-08-31``` API version to avoid any service disruptions.
++++
+## November 2022
+
+* **Announcing the latest stable release of Azure AI Document Intelligence libraries**
+ * This release includes important changes and updates for .NET, Java, JavaScript, and Python SDKs. For more information, _see_ [**Azure SDK DevBlog**](https://devblogs.microsoft.com/azure-sdk/announcing-new-stable-release-of-azure-form-recognizer-libraries/).
+ * The most significant enhancements are the introduction of two new clients, the **`DocumentAnalysisClient`** and the **`DocumentModelAdministrationClient`**.
++++
+## October 2022
+
+* **Document Intelligence versioned content**
+ * Document Intelligence documentation has been updated to present a versioned experience. Now, you can choose to view content targeting the `v3.0 GA` experience or the `v2.1 GA` experience. The v3.0 experience is the default.
+
+ :::image type="content" source="media/versioning-and-monikers.png" alt-text="Screenshot of the Document Intelligence landing page denoting the version dropdown menu.":::
+
+* **Document Intelligence Studio Sample Code**
+ * Sample code for the [Document Intelligence Studio labeling experience](https://github.com/microsoft/Form-Recognizer-Toolkit/tree/main/SampleCode/LabelingUX) is now available on GitHub. Customers can develop and integrate Document Intelligence into their own UX or build their own new UX using the Document Intelligence Studio sample code.
+
+* **Language expansion**
+ * With the latest preview release, Document Intelligence's Read (OCR), Layout, and Custom template models support 134 new languages. These language additions include Greek, Latvian, Serbian, Thai, Ukrainian, and Vietnamese, along with several Latin and Cyrillic languages. Document Intelligence now has a total of 299 supported languages across the most recent GA and new preview versions. Refer to the [supported languages](language-support.md) page to see all supported languages.
+ * Use the REST API parameter `api-version=2022-06-30-preview` when using the API or the corresponding SDK to support the new languages in your applications.
+
+* **New Prebuilt Contract model**
+  * A new prebuilt model that extracts information from contracts such as parties, title, contract ID, execution date, and more. The contracts model is currently in preview; request access [here](https://customervoice.microsoft.com/Pages/ResponsePage.aspx?id=v4j5cvGGr0GRqy180BHbR7en2Ais5pxKtso_Pz4b1_xUQTRDQUdHMTBWUDRBQ01QUVNWNlNYMVFDViQlQCN0PWcu_).
+
+* **Region expansion for training custom neural models**
+  * Training custom neural models is now supported in additional regions.
+ > [!div class="checklist"]
+ >
+ > * East US
+ > * East US2
+ > * US Gov Arizona
++++
+## September 2022
+
+>[!NOTE]
+> Starting with version 4.0.0, a new set of clients has been introduced to leverage the newest features of the Document Intelligence service.
+
+**SDK version 4.0.0 GA release includes the following updates:**
+
+### [**C#**](#tab/csharp)
+
+* **Version 4.0.0 GA (2022-09-08)**
+* **Supports REST API v3.0 and v2.0 clients**
+
+[**Package (NuGet)**](https://www.nuget.org/packages/Azure.AI.FormRecognizer/4.0.0)
+
+[**Changelog/Release History**](https://github.com/Azure/azure-sdk-for-net/blob/main/sdk/formrecognizer/Azure.AI.FormRecognizer/CHANGELOG.md)
+
+[**Migration guide**](https://github.com/Azure/azure-sdk-for-net/blob/Azure.AI.FormRecognizer_4.0.0/sdk/formrecognizer/Azure.AI.FormRecognizer/MigrationGuide.md)
+
+[**ReadMe**](https://github.com/Azure/azure-sdk-for-net/blob/Azure.AI.FormRecognizer_4.0.0/sdk/formrecognizer/Azure.AI.FormRecognizer/README.md)
+
+[**Samples**](https://github.com/Azure/azure-sdk-for-net/blob/Azure.AI.FormRecognizer_4.0.0/sdk/formrecognizer/Azure.AI.FormRecognizer/samples/README.md)
+
+### [**Java**](#tab/java)
+
+* **Version 4.0.0 GA (2022-09-08)**
+* **Supports REST API v3.0 and v2.0 clients**
+
+[**Package (Maven)**](https://oss.sonatype.org/#nexus-search;quick~azure-ai-formrecognizer)
+
+[**Changelog/Release History**](https://github.com/Azure/azure-sdk-for-jav)
+
+[**Migration guide**](https://github.com/Azure/azure-sdk-for-jav)
+
+[**ReadMe**](https://github.com/Azure/azure-sdk-for-jav)
+
+[**Samples**](https://github.com/Azure/azure-sdk-for-jav)
+
+### [**JavaScript**](#tab/javascript)
+
+* **Version 4.0.0 GA (2022-09-08)**
+* **Supports REST API v3.0 and v2.0 clients**
+
+[**Package (npm)**](https://www.npmjs.com/package/@azure/ai-form-recognizer)
+
+[**Changelog/Release History**](https://github.com/Azure/azure-sdk-for-js/blob/%40azure/ai-form-recognizer_4.0.0/sdk/formrecognizer/ai-form-recognizer/CHANGELOG.md)
+
+[**Migration guide**](https://github.com/Azure/azure-sdk-for-js/blob/%40azure/ai-form-recognizer_4.0.0/sdk/formrecognizer/ai-form-recognizer/MIGRATION-v3_v4.md)
+
+[**ReadMe**](https://github.com/Azure/azure-sdk-for-js/blob/%40azure/ai-form-recognizer_4.0.0/sdk/formrecognizer/ai-form-recognizer/README.md)
+
+[**Samples**](https://github.com/witemple-msft/azure-sdk-for-js/blob/7e3196f7e529212a6bc329f5f06b0831bf4cc174/sdk/formrecognizer/ai-form-recognizer/samples/v4/javascript/README.md)
+
+### [Python](#tab/python)
+
+> [!NOTE]
+> Python 3.7 or later is required to use this package.
+
+* **Version 3.2.0 GA (2022-09-08)**
+* **Supports REST API v3.0 and v2.0 clients**
+
+[**Package (PyPi)**](https://pypi.org/project/azure-ai-formrecognizer/3.2.0/)
+
+[**Changelog/Release History**](https://github.com/Azure/azure-sdk-for-python/blob/azure-ai-formrecognizer_3.2.0/sdk/formrecognizer/azure-ai-formrecognizer/CHANGELOG.md)
+
+[**Migration guide**](https://github.com/Azure/azure-sdk-for-python/blob/azure-ai-formrecognizer_3.2.0/sdk/formrecognizer/azure-ai-formrecognizer/MIGRATION_GUIDE.md)
+
+[**ReadMe**](https://github.com/Azure/azure-sdk-for-python/blob/azure-ai-formrecognizer_3.2.0/sdk/formrecognizer/azure-ai-formrecognizer/README.md)
+
+[**Samples**](https://github.com/Azure/azure-sdk-for-python/blob/azure-ai-formrecognizer_3.2.0/sdk/formrecognizer/azure-ai-formrecognizer/samples/README.md)
++++
+* **Region expansion for training custom neural models now supported in six new regions**
+ > [!div class="checklist"]
+ >
+ > * Australia East
+ > * Central US
+ > * East Asia
+ > * France Central
+ > * UK South
+ > * West US2
+
+ * For a complete list of regions where training is supported, see [custom neural models](concept-custom-neural.md).
+
+ * Document Intelligence SDK version `4.0.0 GA` release
+ * **Document Intelligence SDKs version 4.0.0 (.NET/C#, Java, JavaScript) and version 3.2.0 (Python) are generally available and ready for use in production applications!**
+ * For more information on Document Intelligence SDKs, see the [**SDK overview**](sdk-overview.md).
+ * Update your applications using your programming language's **migration guide**.
++++
+## August 2022
+
+**Document Intelligence SDK beta August 2022 preview release includes the following updates:**
+
+### [**C#**](#tab/csharp)
+
+**Version 4.0.0-beta.5 (2022-08-09)**
+
+[**Changelog/Release History**](https://github.com/Azure/azure-sdk-for-net/blob/main/sdk/formrecognizer/Azure.AI.FormRecognizer/CHANGELOG.md#400-beta5-2022-08-09)
+
+[**Package (NuGet)**](https://www.nuget.org/packages/Azure.AI.FormRecognizer/4.0.0-beta.5)
+
+[**SDK reference documentation**](/dotnet/api/overview/azure/ai.formrecognizer-readme?view=azure-dotnet-preview&preserve-view=true)
+
+### [**Java**](#tab/java)
+
+**Version 4.0.0-beta.6 (2022-08-10)**
+
+[**Changelog/Release History**](https://github.com/Azure/azure-sdk-for-jav#400-beta6-2022-08-10)
+
+ [**Package (Maven)**](https://oss.sonatype.org/#nexus-search;quick~azure-ai-formrecognizer)
+
+ [**SDK reference documentation**](/java/api/overview/azure/ai-formrecognizer-readme?view=azure-java-preview&preserve-view=true)
+
+### [**JavaScript**](#tab/javascript)
+
+**Version 4.0.0-beta.6 (2022-08-09)**
+
+ [**Changelog/Release History**](https://github.com/Azure/azure-sdk-for-js/blob/%40azure/ai-form-recognizer_4.0.0-beta.6/sdk/formrecognizer/ai-form-recognizer/CHANGELOG.md)
+
+ [**Package (npm)**](https://www.npmjs.com/package/@azure/ai-form-recognizer/v/4.0.0-beta.6)
+
+ [**SDK reference documentation**](/javascript/api/overview/azure/ai-form-recognizer-readme?view=azure-node-preview&preserve-view=true)
+
+### [Python](#tab/python)
+
+> [!IMPORTANT]
+> Python 3.6 is no longer supported in this release. Use Python 3.7 or later.
+
+**Version 3.2.0b6 (2022-08-09)**
+
+ [**Changelog/Release History**](https://github.com/Azure/azure-sdk-for-python/blob/azure-ai-formrecognizer_3.2.0b6/sdk/formrecognizer/azure-ai-formrecognizer/CHANGELOG.md)
+
+ [**Package (PyPi)**](https://pypi.org/project/azure-ai-formrecognizer/3.2.0b6/)
+
+ [**SDK reference documentation**](https://pypi.org/project/azure-ai-formrecognizer/3.2.0b6/)
++++
+* Document Intelligence v3.0 generally available
+
+ * **Document Intelligence REST API v3.0 is now generally available and ready for use in production applications!** Update your applications with [**REST API version 2022-08-31**](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-2022-08-31/operations/AnalyzeDocument).
+
+* Document Intelligence Studio updates
+ > [!div class="checklist"]
+ >
+ > * **Next steps**. Under each model page, the Studio now has a next steps section. Users can quickly reference sample code, troubleshooting guidelines, and pricing information.
+ > * **Custom models**. The Studio now includes the ability to reorder labels in custom model projects to improve labeling efficiency.
+  > * **Copy models**. Custom models can be copied across Document Intelligence services from within the Studio. The operation enables the promotion of a trained model to other environments and regions.
+  > * **Delete documents**. The Studio now supports deleting documents from the labeled dataset within custom projects.
+
+* Document Intelligence service updates
+
+ * [**prebuilt-read**](concept-read.md). Read OCR model is now also available in Document Intelligence with paragraphs and language detection as the two new features. Document Intelligence Read targets advanced document scenarios aligned with the broader document intelligence capabilities in Document Intelligence.
+ * [**prebuilt-layout**](concept-layout.md). The Layout model extracts paragraphs and whether the extracted text is a paragraph, title, section heading, footnote, page header, page footer, or page number.
+  * [**prebuilt-invoice**](concept-invoice.md). The TotalVAT and Line/VAT fields now resolve to the existing fields TotalTax and Line/Tax, respectively.
+ * [**prebuilt-idDocument**](concept-id-document.md). Data extraction support for US state ID, social security, and green cards. Support for passport visa information.
+ * [**prebuilt-receipt**](concept-receipt.md). Expanded locale support for French (fr-FR), Spanish (es-ES), Portuguese (pt-PT), Italian (it-IT) and German (de-DE).
+ * [**prebuilt-businessCard**](concept-business-card.md). Address parse support to extract subfields for address components like address, city, state, country/region, and zip code.
+
+* **AI quality improvements**
+
+  * [**prebuilt-read**](concept-read.md). Enhanced support for single characters, handwritten dates, amounts, names, and other key data commonly found in receipts and invoices, and improved processing of digital PDF documents.
+ * [**prebuilt-layout**](concept-layout.md). Support for better detection of cropped tables, borderless tables, and improved recognition of long spanning cells.
+ * [**prebuilt-document**](concept-general-document.md). Improved value and check box detection.
+ * [**custom-neural**](concept-custom-neural.md). Improved accuracy for table detection and extraction.
++++
+## June 2022
+
+* Document Intelligence SDK beta June 2022 preview release includes the following updates:
+
+### [**C#**](#tab/csharp)
+
+**Version 4.0.0-beta.4 (2022-06-08)**
+
+[**Changelog/Release History**](https://github.com/Azure/azure-sdk-for-net/blob/Azure.AI.FormRecognizer_4.0.0-beta.4/sdk/formrecognizer/Azure.AI.FormRecognizer/CHANGELOG.md)
+
+[**Package (NuGet)**](https://www.nuget.org/packages/Azure.AI.FormRecognizer/4.0.0-beta.4)
+
+[**SDK reference documentation**](/dotnet/api/azure.ai.formrecognizer?view=azure-dotnet-preview&preserve-view=true)
+
+### [**Java**](#tab/java)
+
+**Version 4.0.0-beta.5 (2022-06-07)**
+
+[**Changelog/Release History**](https://github.com/Azure/azure-sdk-for-jav)
+
+ [**Package (Maven)**](https://search.maven.org/artifact/com.azure/azure-ai-formrecognizer/4.0.0-beta.5/jar)
+
+ [**SDK reference documentation**](/java/api/overview/azure/ai-formrecognizer-readme?view=azure-java-preview&preserve-view=true)
+
+### [**JavaScript**](#tab/javascript)
+
+**Version 4.0.0-beta.4 (2022-06-07)**
+
+ [**Changelog/Release History**](https://github.com/Azure/azure-sdk-for-js/blob/%40azure/ai-form-recognizer_4.0.0-beta.4/sdk/formrecognizer/ai-form-recognizer/CHANGELOG.md)
+
+ [**Package (npm)**](https://www.npmjs.com/package/@azure/ai-form-recognizer/v/4.0.0-beta.4)
+
+ [**SDK reference documentation**](/javascript/api/@azure/ai-form-recognizer/?view=azure-node-preview&preserve-view=true)
+
+### [Python](#tab/python)
+
+**Version 3.2.0b5 (2022-06-07)**
+
+ [**Changelog/Release History**](https://github.com/Azure/azure-sdk-for-python/blob/azure-ai-formrecognizer_3.2.0b5/sdk/formrecognizer/azure-ai-formrecognizer/CHANGELOG.md)
+
+ [**Package (PyPi)**](https://pypi.org/project/azure-ai-formrecognizer/3.2.0b5/)
+
+ [**SDK reference documentation**](/python/api/azure-ai-formrecognizer/azure.ai.formrecognizer?view=azure-python-preview&preserve-view=true)
+++
+* [Document Intelligence Studio](https://formrecognizer.appliedai.azure.com/studio) June release is the latest update to the Document Intelligence Studio. There are considerable user experience and accessibility improvements addressed in this update:
+
+ * **Code sample for JavaScript and C#**. The Studio code tab now adds JavaScript and C# code samples in addition to the existing Python one.
+ * **New document upload UI**. Studio now supports uploading a document with drag & drop into the new upload user interface.
+  * **New feature for custom projects**. Custom projects now support creating a storage account and blobs when configuring the project. In addition, custom projects now support uploading training files directly within the Studio and copying an existing custom model.
+
+* Document Intelligence v3.0 **2022-06-30-preview** release presents extensive updates across the feature APIs:
+
+  * [**Layout extends structure extraction**](concept-layout.md). Layout now includes added structure elements, including sections, section headers, and paragraphs. This update enables finer-grain document segmentation scenarios. For a complete list of structure elements identified, _see_ [enhanced structure](concept-layout.md#data-extraction).
+  * [**Custom neural model tabular fields support**](concept-custom-neural.md). Custom document models now support tabular fields. Tabular fields are also multi-page by default. To learn more about tabular fields in custom neural models, _see_ [tabular fields](concept-custom-neural.md#tabular-fields).
+ * [**Custom template model tabular fields support for cross page tables**](concept-custom-template.md). Custom form models now support tabular fields across pages. To learn more about tabular fields in custom template models, _see_ [tabular fields](concept-custom-neural.md#tabular-fields).
+ * [**Invoice model output now includes general document key-value pairs**](concept-invoice.md). Where invoices contain required fields beyond the fields included in the prebuilt model, the general document model supplements the output with key-value pairs. _See_ [key value pairs](concept-invoice.md#key-value-pairs).
+ * [**Invoice language expansion**](concept-invoice.md). The invoice model includes expanded language support. _See_ [supported languages](concept-invoice.md#supported-languages-and-locales).
+ * [**Prebuilt business card**](concept-business-card.md) now includes Japanese language support. _See_ [supported languages](concept-business-card.md#supported-languages-and-locales).
+ * [**Prebuilt ID document model**](concept-id-document.md). The ID document model now extracts DateOfIssue, Height, Weight, EyeColor, HairColor, and DocumentDiscriminator from US driver's licenses. _See_ [field extraction](concept-id-document.md).
+ * [**Read model now supports common Microsoft Office document types**](concept-read.md). Document types like Word (docx) and PowerPoint (ppt) are now supported with the Read API. See [Microsoft Office and HTML text extraction](concept-read.md#microsoft-office-and-html-text-extraction).
++++
+## February 2022
+
+### [**C#**](#tab/csharp)
+
+**Version 4.0.0-beta.3 (2022-02-10)**
+
+ [**Changelog/Release History**](https://github.com/Azure/azure-sdk-for-net/blob/main/sdk/formrecognizer/Azure.AI.FormRecognizer/CHANGELOG.md#400-beta3-2022-02-10)
+
+ [**Package (NuGet)**](https://www.nuget.org/packages/Azure.AI.FormRecognizer/4.0.0-beta.3)
+
+ [**SDK reference documentation**](/dotnet/api/azure.ai.formrecognizer.documentanalysis?view=azure-dotnet-preview&preserve-view=true)
+
+### [**Java**](#tab/java)
+
+**Version 4.0.0-beta.4 (2022-02-10)**
+
+ [**Changelog/Release History**](https://github.com/Azure/azure-sdk-for-jav#400-beta4-2022-02-10)
+
+ [**Package (Maven)**](https://search.maven.org/artifact/com.azure/azure-ai-formrecognizer/4.0.0-beta.4/jar)
+
+ [**SDK reference documentation**](/java/api/overview/azure/ai-formrecognizer-readme?view=azure-java-preview&preserve-view=true)
+
+### [**JavaScript**](#tab/javascript)
+
+**Version 4.0.0-beta.3 (2022-02-10)**
+
+ [**Changelog/Release History**](https://github.com/Azure/azure-sdk-for-js/blob/main/sdk/formrecognizer/ai-form-recognizer/CHANGELOG.md#400-beta3-2022-02-10)
+
+ [**Package (npm)**](https://www.npmjs.com/package/@azure/ai-form-recognizer/v/4.0.0-beta.3)
+
+ [**SDK reference documentation**](/javascript/api/@azure/ai-form-recognizer/?view=azure-node-preview&preserve-view=true)
+
+### [Python](#tab/python)
+
+**Version 3.2.0b3 (2022-02-10)**
+
+ [**Changelog/Release History**](https://github.com/Azure/azure-sdk-for-python/blob/azure-ai-formrecognizer_3.2.0b3/sdk/formrecognizer/azure-ai-formrecognizer/CHANGELOG.md#320b3-2022-02-10)
+
+ [**Package (PyPI)**](https://pypi.org/project/azure-ai-formrecognizer/3.2.0b3/)
+
+ [**SDK reference documentation**](/python/api/azure-ai-formrecognizer/azure.ai.formrecognizer?view=azure-python-preview&preserve-view=true)
++++
+* Document Intelligence v3.0 preview release introduces several new features, capabilities and enhancements:
+
+ * [**Custom neural model**](concept-custom-neural.md) or custom document model is a new custom model to extract text and selection marks from structured forms, semi-structured and **unstructured documents**.
+ * [**W-2 prebuilt model**](concept-w2.md) is a new prebuilt model to extract fields from W-2 forms for tax reporting and income verification scenarios.
+ * [**Read**](concept-read.md) API extracts printed text lines, words, text locations, detected languages, and handwritten text, if detected.
+  * [**General document**](concept-general-document.md) pretrained model is now updated to support selection marks in addition to text, tables, structure, and key-value pairs from forms and documents.
+  * [**Invoice API**](language-support.md#invoice-model). The invoice prebuilt model expands support to Spanish invoices.
+ * [**Document Intelligence Studio**](https://formrecognizer.appliedai.azure.com) adds new demos for Read, W2, Hotel receipt samples, and support for training the new custom neural models.
+ * [**Language Expansion**](language-support.md) Document Intelligence Read, Layout, and Custom Form add support for 42 new languages including Arabic, Hindi, and other languages using Arabic and Devanagari scripts to expand the coverage to 164 languages. Handwritten language support expands to Japanese and Korean.
+
+* Get started with the new [REST API](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-v3-0-preview-2/operations/AnalyzeDocument), [Python](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-3.0.0&preserve-view=true), or [.NET](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-3.0.0&preserve-view=true) SDK for the v3.0 preview API.
+
+* Document Intelligence model data extraction
+
+  | **Model** | **Text extraction** | **Key-Value pairs** | **Selection Marks** | **Tables** | **Signatures** |
+  |---|:---:|:---:|:---:|:---:|:---:|
+  | Read | ✓ | | | | |
+  | General document | ✓ | ✓ | ✓ | ✓ | |
+  | Layout | ✓ | | ✓ | ✓ | |
+  | Invoice | ✓ | ✓ | ✓ | ✓ | |
+  | Receipt | ✓ | ✓ | | | ✓ |
+  | ID document | ✓ | ✓ | | | |
+  | Business card | ✓ | ✓ | | | |
+  | Custom template | ✓ | ✓ | ✓ | ✓ | ✓ |
+  | Custom neural | ✓ | ✓ | ✓ | ✓ | |
+
+* Document Intelligence SDK beta preview release includes the following updates:
+
+ * [Custom Document models and modes](concept-custom.md):
+ * [Custom template](concept-custom-template.md) (formerly custom form)
+ * [Custom neural](concept-custom-neural.md).
+    * [Custom model - build mode](concept-custom.md#build-mode).
+
+ * [W-2 prebuilt model](concept-w2.md) (prebuilt-tax.us.w2).
+ * [Read prebuilt model](concept-read.md) (prebuilt-read).
+ * [Invoice prebuilt model (Spanish)](concept-invoice.md#supported-languages-and-locales) (prebuilt-invoice).
++++
+## November 2021
+
+### [**C#**](#tab/csharp)
+
+**Version 4.0.0-beta.2 (2021-11-09)**
+
+| [**Package (NuGet)**](https://www.nuget.org/packages/Azure.AI.FormRecognizer/4.0.0-beta.2) | [**Changelog/Release History**](https://github.com/Azure/azure-sdk-for-net/blob/main/sdk/formrecognizer/Azure.AI.FormRecognizer/CHANGELOG.md) | [**API reference documentation**](/dotnet/api/azure.ai.formrecognizer?view=azure-dotnet-preview&preserve-view=true)
+
+### [**Java**](#tab/java)
+
+ | [**Package (Maven)**](https://mvnrepository.com/artifact/com.azure/azure-ai-formrecognizer/4.0.0-beta.2) | [**Changelog/Release History**](https://oss.sonatype.org/service/local/repositories/releases/content/com/azure/azure-ai-formrecognizer/4.0.0-beta.2/azure-ai-formrecognizer-4.0.0-beta.2-changelog.md) | [**API reference documentation**](https://azuresdkdocs.blob.core.windows.net/$web/java/azure-ai-formrecognizer/4.0.0-beta.2/https://docsupdatetracker.net/index.html)
+
+### [**JavaScript**](#tab/javascript)
+
+| [**Package (NPM)**](https://www.npmjs.com/package/@azure/ai-form-recognizer/v/4.0.0-beta.2) | [**Changelog/Release History**](https://github.com/Azure/azure-sdk-for-js/blob/main/sdk/formrecognizer/ai-form-recognizer/CHANGELOG.md) | [**API reference documentation**](/javascript/api/overview/azure/ai-form-recognizer-readme?view=azure-node-preview&preserve-view=true) |
+
+### [**Python**](#tab/python)
+
+| [**Package (PyPI)**](https://pypi.org/project/azure-ai-formrecognizer/3.2.0b2/) | [**Changelog/Release History**](https://github.com/Azure/azure-sdk-for-python/blob/azure-ai-formrecognizer_3.2.0b2/sdk/formrecognizer/azure-ai-formrecognizer/CHANGELOG.md) | [**API reference documentation**](https://azuresdkdocs.blob.core.windows.net/$web/python/azure-ai-formrecognizer/latest/azure.ai.formrecognizer.html)
++++
+* **Document Intelligence v3.0 preview SDK release update (beta.2) incorporates bug fixes and minor feature updates.**
++++
+## October 2021
+
+* **Document Intelligence v3.0 preview release version 4.0.0-beta.1 (2021-10-07) introduces several new features and capabilities:**
+
+ * [**General document**](concept-general-document.md) model is a new API that uses a pretrained model to extract text, tables, structure, and key-value pairs from forms and documents.
+ * [**Hotel receipt**](concept-receipt.md) model added to prebuilt receipt processing.
+  * [**Expanded fields for ID document**](concept-id-document.md). The ID model supports endorsements, restrictions, and vehicle classification extraction from US driver's licenses.
+ * [**Signature field**](concept-custom.md) is a new field type in custom forms to detect the presence of a signature in a form field.
+  * [**Language Expansion**](language-support.md). Support for 122 languages (print) and 7 languages (handwritten). Document Intelligence Layout and Custom Form expand [supported languages](language-support.md) to 122 with this latest preview. The preview includes text extraction for print text in 49 new languages, including Russian, Bulgarian, and other Cyrillic and Latin languages. In addition, extraction of handwritten text now supports seven languages, including English and new previews of Chinese Simplified, French, German, Italian, Portuguese, and Spanish.
+  * **Tables and text extraction enhancements**. Layout now supports extracting single-row tables, also called key-value tables. Text extraction enhancements include better processing of digital PDFs and Machine Readable Zone (MRZ) text in identity documents, along with general performance improvements.
+  * [**Document Intelligence Studio**](https://formrecognizer.appliedai.azure.com). To simplify use of the service, you can now access the Document Intelligence Studio to test the different prebuilt models or label and train a custom model.
+
+ * Get started with the new [REST API](https://westus2.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-v2-1/operations/AnalyzeWithCustomForm), [Python](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-3.0.0&preserve-view=true), or [.NET](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-3.0.0&preserve-view=true) SDK for the v3.0 preview API.
+
+* Document Intelligence model data extraction
+
+  | **Model** | **Text extraction** | **Key-Value pairs** | **Selection Marks** | **Tables** |
+  |---|:---:|:---:|:---:|:---:|
+  | General document | ✓ | ✓ | ✓ | ✓ |
+  | Layout | ✓ | | ✓ | ✓ |
+  | Invoice | ✓ | ✓ | ✓ | ✓ |
+  | Receipt | ✓ | ✓ | | |
+  | ID document | ✓ | ✓ | | |
+  | Business card | ✓ | ✓ | | |
+  | Custom | ✓ | ✓ | ✓ | ✓ |
++++
+## September 2021
+
+* [Azure metrics explorer advanced features](../../azure-monitor/essentials/metrics-charts.md) are available on your Document Intelligence resource overview page in the Azure portal.
+
+* Monitoring menu
+
+ :::image type="content" source="media/portal-metrics.png" alt-text="Screenshot showing the monitoring menu in the Azure portal":::
+
+* Charts
+
+ :::image type="content" source="media/portal-metrics-charts.png" alt-text="Screenshot showing an example metric chart in the Azure portal.":::
+
+* **ID document** model update: given names including a suffix, with or without a period (full stop), process successfully:
+
+ |Input Text | Result with update |
+ ||-|
+ | William Isaac Kirby Jr. |**FirstName**: William Isaac</br></br>**LastName**: Kirby Jr. |
+ | Henry Caleb Ross Sr | **FirstName**: Henry Caleb </br></br> **LastName**: Ross Sr |
++++
+## July 2021
+
+* System-assigned managed identity support: You can now enable a system-assigned managed identity to grant Document Intelligence limited access to private storage accounts, including accounts protected by a Virtual Network (VNet) or firewall or that have bring-your-own-storage (BYOS) enabled. _See_ [Create and use managed identity for your Document Intelligence resource](managed-identities.md) to learn more.
++++
+## June 2021
+
+### [**C#**](#tab/csharp)
+
+| [Reference documentation](/dotnet/api/azure.ai.formrecognizer?view=azure-dotnet&preserve-view=true) | [NuGet package version 3.1.1](https://www.nuget.org/packages/Azure.AI.FormRecognizer) |
+
+### [**Java**](#tab/java)
+
+ | [Reference documentation](/java/api/com.azure.ai.formrecognizer.models?view=azure-java-stable&preserve-view=true)| [Maven artifact package dependency version 3.1.1](https://mvnrepository.com/artifact/com.azure/azure-ai-formrecognizer/3.1.1) |
+
+### [**JavaScript**](#tab/javascript)
+
+> [!NOTE]
+> There are no updates to JavaScript SDK v3.1.0.
+
+| [Reference documentation](/javascript/api/@azure/cognitiveservices-formrecognizer/formrecognizerclient?view=azure-node-latest&preserve-view=true)| [npm package dependency form-recognizer 3.1.0](https://www.npmjs.com/package/@azure/ai-form-recognizer) |
+
+### [**Python**](#tab/python)
+
+| [Reference documentation](/python/api/azure-ai-formrecognizer/azure.ai.formrecognizer?view=azure-python&preserve-view=true)| [PyPi azure-ai-formrecognizer 3.1.1](https://pypi.org/project/azure-ai-formrecognizer/) |
++++
+* Document Intelligence v2.1 containers are released in gated preview with six feature containers - **Layout**, **Business Card**, **ID Document**, **Receipt**, **Invoice**, and **Custom**. To use them, you must submit an [online request](https://customervoice.microsoft.com/Pages/ResponsePage.aspx?id=v4j5cvGGr0GRqy180BHbR7en2Ais5pxKtso_Pz4b1_xUNlpBU1lFSjJUMFhKNzVHUUVLN1NIOEZETiQlQCN0PWcu) and receive approval.
+
+ * *See* [**Install and run Docker containers for Document Intelligence**](containers/install-run.md?branch=main&tabs=layout) and [**Configure Document Intelligence containers**](containers/configuration.md?branch=main)
+
+* Document Intelligence connector released in preview: The [**Document Intelligence connector**](/connectors/formrecognizer) integrates with [Azure Logic Apps](../../logic-apps/logic-apps-overview.md), [Microsoft Power Automate](/power-automate/getting-started), and [Microsoft Power Apps](/powerapps/powerapps-overview). The connector supports workflow actions and triggers to extract and analyze document data and structure from custom and prebuilt forms, invoices, receipts, business cards and ID documents.
+
+* Document Intelligence SDK v3.1.0 patched to v3.1.1 for C#, Java, and Python. The patch addresses invoices that don't have subline item fields detected such as a `FormField` with `Text` but no `BoundingBox` or `Page` information.
++++
+## May 2021
+
+### [**C#**](#tab/csharp)
+
+* **Version 3.1.0 (2021-05-26)**
+
+[**Changelog/Release History**](https://github.com/Azure/azure-sdk-for-net/blob/main/sdk/formrecognizer/Azure.AI.FormRecognizer/CHANGELOG.md#310-2021-05-26)| [Reference documentation](/dotnet/api/azure.ai.formrecognizer?view=azure-dotnet&preserve-view=true) | [NuGet package version 3.0.1](https://www.nuget.org/packages/Azure.AI.FormRecognizer) |
+
+### [**Java**](#tab/java)
+
+* **Version 3.1.0 (2021-05-26)**
+
+ [**Changelog/Release History**](https://github.com/Azure/azure-sdk-for-jav#310-2021-05-26) | [Reference documentation](/java/api/com.azure.ai.formrecognizer.models?view=azure-java-stable&preserve-view=true)| [Maven artifact package dependency version 3.1.0](https://mvnrepository.com/artifact/com.azure/azure-ai-formrecognizer) |
+
+### [**JavaScript**](#tab/javascript)
+
+* **Version 3.1.0 (2021-05-26)**
+
+[**Changelog/Release History**](https://github.com/Azure/azure-sdk-for-js/blob/@azure/ai-form-recognizer_4.0.0/sdk/formrecognizer/ai-form-recognizer/CHANGELOG.md#310-2021-05-26)| [Reference documentation](/javascript/api/@azure/cognitiveservices-formrecognizer/formrecognizerclient?view=azure-node-latest&preserve-view=true)| [npm package dependency form-recognizer 3.1.0](https://www.npmjs.com/package/@azure/ai-form-recognizer) |
+
+### [**Python**](#tab/python)
+
+* **Version 3.1.0 (2021-05-26)**
+
+[**Changelog/Release History**](https://github.com/Azure/azure-sdk-for-python/blob/azure-ai-formrecognizer_3.2.0/sdk/formrecognizer/azure-ai-formrecognizer/CHANGELOG.md#310-2021-05-26)| [Reference documentation](/python/api/azure-ai-formrecognizer/azure.ai.formrecognizer?view=azure-python&preserve-view=true)| [PyPi azure-ai-formrecognizer 3.1.0](https://pypi.org/project/azure-ai-formrecognizer/) |
++++
+* Document Intelligence 2.1 is generally available. The GA release marks the stability of the changes introduced in the prior 2.1 preview package versions. This release enables you to detect and extract information and data from the following document types:
+ > [!div class="checklist"]
+ >
+ > * [Documents](concept-layout.md)
+ > * [Receipts](./concept-receipt.md)
+ > * [Business cards](./concept-business-card.md)
+ > * [Invoices](./concept-invoice.md)
+ > * [Identity documents](./concept-id-document.md)
+ > * [Custom forms](concept-custom.md)
+
+* To get started, try the [Document Intelligence Sample Tool](https://fott-2-1.azurewebsites.net/) and follow the [quickstart](./quickstarts/try-sample-label-tool.md).
+
+* The updated Layout API table feature adds header recognition with column headers that can span multiple rows. Each table cell has an attribute that indicates whether it's part of a header or not. This update can be used to identify which rows make up the table header.
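+
+  In the JSON output, this appears as a flag on each table cell. A minimal, illustrative sketch of one header cell is shown below, assuming the `isHeader` cell property from the v2.1 table schema; the text and coordinate values are placeholders:
+
+  ```json
+  {
+    "rowIndex": 0,
+    "columnIndex": 0,
+    "text": "Item",
+    "boundingBox": [ 0.9, 1.0, 2.1, 1.0, 2.1, 1.5, 0.9, 1.5 ],
+    "isHeader": true
+  }
+  ```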
++++
+## April 2021
+<!-- markdownlint-disable MD029 -->
+
+### [**C#**](#tab/csharp)
+
+* **NuGet package version 3.1.0-beta.4**
+
+* [**Changelog/Release History**](https://github.com/Azure/azure-sdk-for-net/blob/main/sdk/formrecognizer/Azure.AI.FormRecognizer/CHANGELOG.md#310-beta4-2021-04-06)
+
+* **New methods to analyze data from identity documents**:
+
+ **StartRecognizeIdDocumentsFromUriAsync**
+
+ **StartRecognizeIdDocumentsAsync**
+
+ For a list of field values, _see_ [Fields extracted](./concept-id-document.md) in our Document Intelligence documentation.
+
+* Expanded the set of document languages that can be provided to the **[StartRecognizeContent](/dotnet/api/azure.ai.formrecognizer.formrecognizerclient.startrecognizecontent?view=azure-dotnet-preview&preserve-view=true)** method.
+
+* **New property `Pages` supported by the following classes**:
+
+ **[RecognizeBusinessCardsOptions](/dotnet/api/azure.ai.formrecognizer.recognizebusinesscardsoptions?view=azure-dotnet-preview&preserve-view=true)**</br>
+ **[RecognizeCustomFormsOptions](/dotnet/api/azure.ai.formrecognizer.recognizecustomformsoptions?view=azure-dotnet-preview&preserve-view=true)**</br>
+ **[RecognizeInvoicesOptions](/dotnet/api/azure.ai.formrecognizer.recognizeinvoicesoptions?view=azure-dotnet-preview&preserve-view=true)**</br>
+ **[RecognizeReceiptsOptions](/dotnet/api/azure.ai.formrecognizer.recognizereceiptsoptions?view=azure-dotnet-preview&preserve-view=true)**</br>
+
+   The `Pages` property allows you to select individual pages or a range of pages for multi-page PDF and TIFF documents. For individual pages, enter the page number, for example, `3`. For a range of pages (like page 2 and pages 5-7), enter the page numbers and ranges separated by commas: `2, 5-7`.
+
+* **New property `ReadingOrder` supported for the following class**:
+
+ **[RecognizeContentOptions](/dotnet/api/azure.ai.formrecognizer.recognizecontentoptions?view=azure-dotnet-preview&preserve-view=true)**
+
+   The `ReadingOrder` property is an optional parameter that allows you to specify which reading order algorithm, `basic` or `natural`, should be applied to order the extraction of text elements. If not specified, the default value is `basic`.
+
+### [**Java**](#tab/java)
+
+**Maven artifact package dependency version 3.1.0-beta.3**
+
+* **New methods to analyze data from identity documents**:
+
+ **[beginRecognizeIdDocumentsFromUrl]**
+
+ **[beginRecognizeIdDocuments]**
+
+ For a list of field values, _see_ [Fields extracted](./concept-id-document.md) in our Document Intelligence documentation.
+
+* **Bitmap Image file (.bmp) support for custom forms and training methods in the [`FormContentType`](/java/api/com.azure.ai.formrecognizer.models.formcontenttype?view=azure-java-preview&preserve-view=true#fields) fields**:
+
+ * `image/bmp`
+
+ * **New property `Pages` supported by the following classes**:
+
+ **[RecognizeBusinessCardsOptions](/java/api/com.azure.ai.formrecognizer.models.recognizebusinesscardsoptions?view=azure-java-preview&preserve-view=true)**</br>
+ **[RecognizeCustomFormOptions](/java/api/com.azure.ai.formrecognizer.models.recognizecustomformsoptions?view=azure-java-preview&preserve-view=true)**</br>
+ **[RecognizeInvoicesOptions](/java/api/com.azure.ai.formrecognizer.models.recognizeinvoicesoptions?view=azure-java-preview&preserve-view=true)**</br>
+ **[RecognizeReceiptsOptions](/java/api/com.azure.ai.formrecognizer.models.recognizereceiptsoptions?view=azure-java-preview&preserve-view=true)**</br>
+
+ * The `Pages` property allows you to select individual or a range of pages for multi-page PDF and TIFF documents. For individual pages, enter the page number, for example, `3`. For a range of pages (like page 2 and pages 5-7) enter the page numbers and ranges separated by commas: `2, 5-7`.
+
+* **New keyword argument `ReadingOrder` supported for the following methods**:
+
+ * **[beginRecognizeContent](/java/api/com.azure.ai.formrecognizer.formrecognizerclient.beginrecognizecontent?preserve-view=true&view=azure-java-preview)**</br>
+ * **[beginRecognizeContentFromUrl](/java/api/com.azure.ai.formrecognizer.formrecognizerclient.beginrecognizecontentfromurl?view=azure-java-preview&preserve-view=true)**</br>
+    * The `ReadingOrder` keyword argument is an optional parameter that allows you to specify which reading order algorithm, `basic` or `natural`, should be applied to order the extraction of text elements. If not specified, the default value is `basic`.
+
+* The client defaults to the latest supported service version, which currently is **2.1-preview.3**.
+
+### [**JavaScript**](#tab/javascript)
+
+**npm package version 3.1.0-beta.3**
+
+* **New methods to analyze data from identity documents**:
+
+  **beginRecognizeIdDocumentsFromUrl**
+
+ **beginRecognizeIdDocuments**
+
+ For a list of field values, _see_ [Fields extracted](./concept-id-document.md) in our Document Intelligence documentation.
+
+* **New field values added to the FieldValue interface**:
+
+  `gender` - possible values are `M`, `F`, or `X`.</br>
+  `country` - possible values follow the [ISO alpha-3](https://www.iso.org/obp/ui/#search) three-letter country code string.
+
+* New option `pages` supported by all document intelligence methods (custom forms and all prebuilt models). The argument allows you to select individual or a range of pages for multi-page PDF and TIFF documents. For individual pages, enter the page number, for example, `3`. For a range of pages (like page 2 and pages 5-7) enter the page numbers and ranges separated by commas: `2, 5-7`.
+
+* Added support for a **[ReadingOrder](/javascript/api/@azure/ai-form-recognizer/formreadingorder?view=azure-node-latest&preserve-view=true)** type to the content recognition methods. This option enables you to control the algorithm that the service uses to determine how recognized lines of text should be ordered. You can specify which reading order algorithm, `basic` or `natural`, should be applied to order the extraction of text elements. If not specified, the default value is `basic`.
+
+* Split **FormField** type into several different interfaces. This update shouldn't cause any API compatibility issues except in certain edge cases (undefined valueType).
+
+* Migrated to the **`2.1-preview.3`** Document Intelligence service endpoint for all REST API calls.
+
+### [**Python**](#tab/python)
+
+**pip package version 3.1.0b4**
+
+* **New methods to analyze data from identity documents**:
+
+ **[begin_recognize_id_documents_from_url](/python/api/azure-ai-formrecognizer/azure.ai.formrecognizer.formrecognizerclient?view=azure-python&preserve-view=true)**
+
+ **[begin_recognize_id_documents](/python/api/azure-ai-formrecognizer/azure.ai.formrecognizer.formrecognizerclient?view=azure-python&preserve-view=true)**
+
+ For a list of field values, _see_ [Fields extracted](./concept-id-document.md) in our Document Intelligence documentation.
+
+* **New field values added to the [FieldValueType](/python/api/azure-ai-formrecognizer/azure.ai.formrecognizer.fieldvaluetype?view=azure-python-preview&preserve-view=true) enum**:
+
+  gender - possible values are `M`, `F`, or `X`.
+
+  country - possible values follow [ISO alpha-3 Country Codes](https://www.iso.org/obp/ui/#search).
+
+* **Bitmap Image file (.bmp) support for custom forms and training methods in the [FormContentType](/python/api/azure-ai-formrecognizer/azure.ai.formrecognizer.formcontenttype?view=azure-python-preview&preserve-view=true) enum**:
+
+ image/bmp
+
+* **New keyword argument `pages` supported by the following methods**:
+
+ **[begin_recognize_receipts](/python/api/azure-ai-formrecognizer/azure.ai.formrecognizer.formrecognizerclient?view=azure-python-preview&preserve-view=true&branch=main#azure-ai-formrecognizer-formrecognizerclient-begin-recognize-receipts)**
+
+ **[begin_recognize_receipts_from_url](/python/api/azure-ai-formrecognizer/azure.ai.formrecognizer.formrecognizerclient?view=azure-python-preview&preserve-view=true#azure-ai-formrecognizer-formrecognizerclient-begin-recognize-receipts-from-url)**
+
+ **[begin_recognize_business_cards](/python/api/azure-ai-formrecognizer/azure.ai.formrecognizer.formrecognizerclient?view=azure-python-preview&preserve-view=true#azure-ai-formrecognizer-formrecognizerclient-begin-recognize-business-cards)**
+
+ **[begin_recognize_business_cards_from_url](/python/api/azure-ai-formrecognizer/azure.ai.formrecognizer.formrecognizerclient?view=azure-python-preview&preserve-view=true#azure-ai-formrecognizer-formrecognizerclient-begin-recognize-business-cards-from-url)**
+
+ **[begin_recognize_invoices](/python/api/azure-ai-formrecognizer/azure.ai.formrecognizer.formrecognizerclient?view=azure-python-preview&preserve-view=true#azure-ai-formrecognizer-formrecognizerclient-begin-recognize-invoices)**
+
+ **[begin_recognize_invoices_from_url](/python/api/azure-ai-formrecognizer/azure.ai.formrecognizer.formrecognizerclient?view=azure-python-preview&preserve-view=true#azure-ai-formrecognizer-formrecognizerclient-begin-recognize-invoices-from-url)**
+
+ **[begin_recognize_content](/python/api/azure-ai-formrecognizer/azure.ai.formrecognizer.formrecognizerclient?view=azure-python-preview&preserve-view=true#azure-ai-formrecognizer-formrecognizerclient-begin-recognize-content)**
+
+ **[begin_recognize_content_from_url](/python/api/azure-ai-formrecognizer/azure.ai.formrecognizer.formrecognizerclient?view=azure-python-preview&preserve-view=true#azure-ai-formrecognizer-formrecognizerclient-begin-recognize-content-from-url-)**
+
+ The `pages` keyword argument allows you to select individual or a range of pages for multi-page PDF and TIFF documents. For individual pages, enter the page number, for example, `3`. For a range of pages (like page 2 and pages 5-7) enter the page numbers and ranges separated by commas: `2, 5-7`.
+
+* **New keyword argument `readingOrder` supported for the following methods**:
+
+ **[begin_recognize_content](/python/api/azure-ai-formrecognizer/azure.ai.formrecognizer.formrecognizerclient?view=azure-python-preview&preserve-view=true#azure-ai-formrecognizer-formrecognizerclient-begin-recognize-content)**
+
+ **[begin_recognize_content_from_url](/python/api/azure-ai-formrecognizer/azure.ai.formrecognizer.formrecognizerclient?view=azure-python-preview&preserve-view=true#azure-ai-formrecognizer-formrecognizerclient-begin-recognize-content-from-url)**
+
+  The `readingOrder` keyword argument is an optional parameter that allows you to specify which reading order algorithm, `basic` or `natural`, should be applied to order the extraction of text elements. If not specified, the default value is `basic`.
++++
+* **SDK preview updates for API version `2.1-preview.3` introduces feature updates and enhancements.**
++++
+## March 2021
+
+ **Document Intelligence v2.1 public preview v2.1-preview.3 has been released and includes the following features:**
+
+* **New prebuilt ID model** The new prebuilt ID model enables customers to take IDs and return structured data to automate processing. It combines our powerful Optical Character Recognition (OCR) capabilities with ID understanding models to extract key information from passports and U.S. driver licenses.
+
+ [Learn more about the prebuilt ID model](./concept-id-document.md)
+
+ :::image type="content" source="./media/id-canada-passport-example.png" alt-text="Screenshot of a sample passport." lightbox="./media/id-canada-passport-example.png":::
+
+* **Line-item extraction for invoice model** - Prebuilt Invoice model now supports line item extraction; it now extracts full items and their parts - description, amount, quantity, product ID, date and more. With a simple API/SDK call, you can extract useful data from your invoices - text, table, key-value pairs, and line items.
+
+ [Learn more about the invoice model](./concept-invoice.md)
+
+* **Supervised table labeling and training, empty-value labeling** - In addition to Document Intelligence's [state-of-the-art deep learning automatic table extraction capabilities](https://techcommunity.microsoft.com/t5/azure-ai/enhanced-table-extraction-from-documents-with-form-recognizer/ba-p/2058011), it now enables customers to label and train on tables. This new release includes the ability to label and train on line items/tables (dynamic and fixed) and train a custom model to extract key-value pairs and line items. Once a model is trained, the model extracts line items as part of the JSON output in the documentResults section.
+
+ :::image type="content" source="./media/table-labeling.png" alt-text="Screenshot of the table labeling feature." lightbox="./media/table-labeling.png":::
+
+ In addition to labeling tables, you can now label empty values and regions. If some documents in your training set don't have values for certain fields, you can label them so that your model knows to extract values properly from analyzed documents.
+
+* **Support for 66 new languages** - The Layout API and Custom Models for Document Intelligence now support 73 languages.
+
+ [Learn more about Document Intelligence's language support](language-support.md)
+
+* **Natural reading order, handwriting classification, and page selection** - With this update, you can choose to get the text line outputs in the natural reading order instead of the default left-to-right and top-to-bottom ordering. Use the new `readingOrder` query parameter and set it to `natural` for a more human-friendly reading order output, as shown in the request sketch after this list. In addition, for Latin languages, Document Intelligence classifies text lines as handwritten style or not and gives a confidence score.
+
+* **Prebuilt receipt model quality improvements** - This update includes many quality improvements for the prebuilt Receipt model, especially around line item extraction.
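+
+The following is a minimal request sketch showing how the `readingOrder` parameter is passed to the v2.1-preview.3 Layout API; the endpoint, key, and document URL are placeholders, and the second call assumes the `Operation-Location` URL returned by the first:
+
+```bash
+# Submit a document for layout analysis with the natural reading order.
+curl -i -X POST "https://{endpoint}/formrecognizer/v2.1-preview.3/layout/analyze?readingOrder=natural" \
+  -H "Ocp-Apim-Subscription-Key: {key}" \
+  -H "Content-Type: application/json" \
+  -d '{"source": "https://{your-storage}/sample-form.pdf"}'
+
+# Poll the Operation-Location URL returned in the 202 response for the results.
+curl "https://{endpoint}/formrecognizer/v2.1-preview.3/layout/analyzeResults/{resultId}" \
+  -H "Ocp-Apim-Subscription-Key: {key}"
+```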
++++
+## November 2020
+
+* **Document Intelligence v2.1-preview.2 has been released and includes the following features:**
+
+ * **New prebuilt invoice model** - The new prebuilt Invoice model enables customers to take invoices in various formats and return structured data to automate the invoice processing. It combines our powerful Optical Character Recognition (OCR) capabilities with invoice understanding deep learning models to extract key information from invoices in English. It extracts key text, tables, and information such as customer, vendor, invoice ID, invoice due date, total, amount due, tax amount, ship to, and bill to.
+
+ > [Learn more about the prebuilt invoice model](./concept-invoice.md)
+
+ :::image type="content" source="./media/invoice-example.jpg" alt-text="Screenshot of a sample invoice." lightbox="./media/invoice-example.jpg":::
+
+   * **Enhanced table extraction** - Document Intelligence now provides enhanced table extraction, which combines our powerful Optical Character Recognition (OCR) capabilities with a deep learning table extraction model. Document Intelligence can extract data from tables, including complex tables with merged columns or rows, no borders, and more.
+
+ :::image type="content" source="./media/tables-example.jpg" alt-text="Screenshot of tables analysis." lightbox="./media/tables-example.jpg":::
+
+ > [Learn more about Layout extraction](concept-layout.md)
+
+ * **Client library update** - The latest versions of the [client libraries](~/articles/ai-services/document-intelligence/how-to-guides/use-sdk-rest-api.md?view=doc-intel-2.1.0&preserve-view=true) for .NET, Python, Java, and JavaScript support the Document Intelligence 2.1 API.
+  * **New language supported: Japanese** - Japanese (`ja`) is now supported for `AnalyzeLayout` and `AnalyzeCustomForm`. For more information, see [Language support](language-support.md).
+ * **Text line style indication (handwritten/other) (Latin languages only)** - Document Intelligence now outputs an `appearance` object classifying whether each text line is handwritten style or not, along with a confidence score. This feature is supported only for Latin languages.
+  * **Quality improvements** - Extraction improvements, including improved single-digit extraction.
+  * **New try-it-out feature in the Document Intelligence Sample Labeling tool** - Ability to try out the prebuilt Invoice, Receipt, and Business Card models and the Layout API using the Document Intelligence Sample Labeling tool. See how your data is extracted without writing any code.
+
+ * [**Try the Document Intelligence Sample Labeling tool**](https://fott-2-1.azurewebsites.net)
+
+ :::image type="content" source="media/ui-preview.jpg" alt-text="Screenshot of the Sample Labeling tool homepage.":::
+
+  * **Feedback Loop** - When analyzing files via the Sample Labeling tool, you can now add them to the training set, adjust the labels if necessary, and train to improve the model.
+  * **Auto Label Documents** - Automatically labels newly added documents based on previously labeled documents in the project.
++++
+## August 2020
+
+* **Document Intelligence `v2.1-preview.1` has been released and includes the following features:**
+
+ * **REST API reference is available** - View the [`v2.1-preview.1 reference`](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-v2-1/operations/AnalyzeBusinessCardAsync)
+  * **New languages supported** - The following [languages](language-support.md) are now supported for `Layout` and `Train Custom Model`: English (`en`), Chinese (Simplified) (`zh-Hans`), Dutch (`nl`), French (`fr`), German (`de`), Italian (`it`), Portuguese (`pt`), and Spanish (`es`).
+  * **Checkbox / Selection Mark detection** - Document Intelligence supports detection and extraction of selection marks such as check boxes and radio buttons. Selection Marks are extracted in `Layout` and you can now also label and train in `Train Custom Model` - _Train with Labels_ to extract key-value pairs for selection marks.
+  * **Model Compose** - Allows multiple models to be composed and called with a single model ID. When you submit a document to be analyzed with a composed model ID, a classification step is first performed to route it to the correct custom model (see the request sketch after this list). Model Compose is available for `Train Custom Model` - _Train with labels_.
+ * **Model name** - add a friendly name to your custom models for easier management and tracking.
+  * **[New prebuilt model for Business Cards](./concept-business-card.md)** for extracting common fields in English-language business cards.
+  * **[New locales for prebuilt Receipts](./concept-receipt.md)** - In addition to EN-US, support is now available for EN-AU, EN-CA, EN-GB, and EN-IN.
+ * **Quality improvements** for `Layout`, `Train Custom Model` - _Train without Labels_ and _Train with Labels_.
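+
+  The following is a minimal sketch of composing two labeled models into one callable model ID. The route and payload follow the pattern in the v2.1 REST reference and are shown here as an assumption for the preview API; all placeholder values are hypothetical:
+
+  ```bash
+  # Compose two custom models trained with labels into a single model ID.
+  curl -i -X POST "https://{endpoint}/formrecognizer/v2.1-preview.1/custom/models/compose" \
+    -H "Ocp-Apim-Subscription-Key: {key}" \
+    -H "Content-Type: application/json" \
+    -d '{"modelIds": ["{modelId1}", "{modelId2}"], "modelName": "{friendly-name}"}'
+  ```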
+
+* **v2.0** includes the following update:
+
+  * The [client libraries](~/articles/ai-services/document-intelligence/how-to-guides/use-sdk-rest-api.md?view=doc-intel-2.1.0&preserve-view=true) for .NET, Python, Java, and JavaScript have entered General Availability.
+
+ **New samples** are available on GitHub.
+
+ * The [Knowledge Extraction Recipes - Forms Playbook](https://github.com/microsoft/knowledge-extraction-recipes-forms) collects best practices from real Document Intelligence customer engagements and provides usable code samples, checklists, and sample pipelines used in developing these projects.
+ * The [Sample Labeling tool](https://github.com/microsoft/OCR-Form-Tools) has been updated to support the new v2.1 functionality. See this [quickstart](label-tool.md) for getting started with the tool.
+ * The [Intelligent Kiosk](https://github.com/microsoft/Cognitive-Samples-IntelligentKiosk/blob/master/Documentation/FormRecognizer.md) Document Intelligence sample shows how to integrate `Analyze Receipt` and `Train Custom Model` - _Train without Labels_.
++++
+## July 2020
+
+<!-- markdownlint-disable MD004 -->
+* **Document Intelligence v2.0 reference available** - View the [v2.0 API Reference](https://westus2.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-v2/operations/AnalyzeWithCustomForm) and the updated SDKs for [.NET](/dotnet/api/overview/azure/ai.formrecognizer-readme), [Python](/python/api/overview/azure/), [Java](/java/api/overview/azure/ai-formrecognizer-readme), and [JavaScript](/javascript/api/overview/azure/).
+  * **Table enhancements and extraction enhancements** - Includes accuracy improvements and table extraction enhancements, specifically the capability to learn table headers and structures in _custom train without labels_.
+
+ * **Currency support** - Detection and extraction of global currency symbols.
+ * **Azure Gov** - Document Intelligence is now also available in Azure Gov.
+ * **Enhanced security features**:
+ * **Bring your own key** - Document Intelligence automatically encrypts your data when persisted to the cloud to protect it and to help you to meet your organizational security and compliance commitments. By default, your subscription uses Microsoft-managed encryption keys. You can now also manage your subscription with your own encryption keys. [Customer-managed keys, also known as bring your own key (BYOK)](./encrypt-data-at-rest.md), offer greater flexibility to create, rotate, disable, and revoke access controls. You can also audit the encryption keys used to protect your data.
+   * **Private endpoints** - Enables you to [securely access data over a Private Link](../../private-link/private-link-overview.md) from a virtual network.
++++
+## June 2020
+
+* **CopyModel API added to client SDKs** - You can now use the client SDKs to copy models from one subscription to another. See [Back up and recover models](./disaster-recovery.md) for general information on this feature.
+* **Azure Active Directory integration** - You can now use your Azure AD credentials to authenticate your Document Intelligence client objects in the SDKs.
+* **SDK-specific changes** - This change includes both minor feature additions and breaking changes. For more information, _see_ the SDK changelogs.
+ * [C# SDK Preview 3 changelog](https://github.com/Azure/azure-sdk-for-net/blob/master/sdk/formrecognizer/Azure.AI.FormRecognizer/CHANGELOG.md)
+ * [Python SDK Preview 3 changelog](https://github.com/Azure/azure-sdk-for-python/blob/master/sdk/formrecognizer/azure-ai-formrecognizer/CHANGELOG.md)
+ * [Java SDK Preview 3 changelog](https://github.com/Azure/azure-sdk-for-jav)
+ * [JavaScript SDK Preview 3 changelog](https://github.com/Azure/azure-sdk-for-js/blob/%40azure/ai-form-recognizer_1.0.0-preview.3/sdk/formrecognizer/ai-form-recognizer/CHANGELOG.md)
++++
+## April 2020
+
+* **SDK support for Document Intelligence API v2.0 Public Preview** - This month we expanded our service support to include a preview SDK for the Document Intelligence v2.0 release. Use these links to get started with your language of choice:
+* [.NET SDK](/dotnet/api/overview/azure/ai.formrecognizer-readme)
+* [Java SDK](/java/api/overview/azure/ai-formrecognizer-readme)
+* [Python SDK](/python/api/overview/azure/ai-formrecognizer-readme)
+* [JavaScript SDK](/javascript/api/overview/azure/ai-form-recognizer-readme)
+
+The new SDK supports all the features of the v2.0 REST API for Document Intelligence. You can share your feedback on the SDKs through the [SDK Feedback form](https://aka.ms/FR_SDK_v1_feedback).
+
+* **Copy Custom Model** - You can now copy models between regions and subscriptions using the new Copy Custom Model feature. Before invoking the Copy Custom Model API, you must first obtain authorization to copy into the target resource. This authorization is secured by calling the Copy Authorization operation against the target resource endpoint (see the request sketch after the following links).
+
+* [Generate a copy authorization](https://westus2.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-v2/operations/CopyCustomFormModelAuthorization) REST API
+* [Copy a custom model](https://westus2.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-v2/operations/CopyCustomFormModel) REST API
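+
+A minimal sketch of this flow, assuming the v2.0 operations linked above (all placeholder values are hypothetical):
+
+```bash
+# 1. On the target resource, generate a copy authorization.
+curl -i -X POST "https://{target-endpoint}/formrecognizer/v2.0/custom/models/copyAuthorization" \
+  -H "Ocp-Apim-Subscription-Key: {target-key}"
+
+# 2. On the source resource, start the copy using the authorization returned in step 1.
+curl -i -X POST "https://{source-endpoint}/formrecognizer/v2.0/custom/models/{modelId}/copy" \
+  -H "Ocp-Apim-Subscription-Key: {source-key}" \
+  -H "Content-Type: application/json" \
+  -d '{"targetResourceId": "{target-resource-id}", "targetResourceRegion": "{target-region}", "copyAuthorization": {COPY_AUTHORIZATION_FROM_STEP_1}}'
+```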
+
+* Security improvements
+
+* Customer-Managed Keys are now available for FormRecognizer. For more information, see [Data encryption at rest for Document Intelligence](./encrypt-data-at-rest.md).
+* Use Managed Identities for access to Azure resources with Azure Active Directory. For more information, see [Authorize access to managed identities](../../ai-services/authentication.md#authorize-access-to-managed-identities).
++++
+## March 2020
+
+* **Value types for labeling** You can now specify the types of values you're labeling with the Document Intelligence Sample Labeling tool. The following value types and variations are currently supported:
+* `string`
+ * default, `no-whitespaces`, `alphanumeric`
+* `number`
+ * default, `currency`
+* `date`
+ * default, `dmy`, `mdy`, `ymd`
+* `time`
+* `integer`
+
+See the [Sample Labeling tool](label-tool.md#specify-tag-value-types) guide to learn how to use this feature.
+
+* **Table visualization** - The Sample Labeling tool now displays tables that were recognized in the document. This feature lets you view recognized and extracted tables from the document prior to labeling and analyzing, and can be toggled on or off using the layers option.
+
+* The following image is an example of how tables are recognized and extracted:
+
+ :::image type="content" source="media/whats-new/table-viz.png" alt-text="Screenshot of table visualization using the Sample Labeling tool.":::
+
+* The extracted tables are available in the JSON output under `"pageResults"`, as sketched below.
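+
+  The following abbreviated sketch shows the general shape of that output; the values are illustrative only:
+
+  ```json
+  {
+    "analyzeResult": {
+      "pageResults": [
+        {
+          "page": 1,
+          "tables": [
+            {
+              "rows": 2,
+              "columns": 2,
+              "cells": [
+                { "rowIndex": 0, "columnIndex": 0, "text": "Item", "boundingBox": [ 0.5, 1.0, 2.0, 1.0, 2.0, 1.4, 0.5, 1.4 ] }
+              ]
+            }
+          ]
+        }
+      ]
+    }
+  }
+  ```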
+
+ > [!IMPORTANT]
+ > Labeling tables isn't supported. If tables are not recognized and extracted automatically, you can only label them as key/value pairs. When labeling tables as key/value pairs, label each cell as a unique value.
+
+* Extraction enhancements
+
+* This release includes extraction enhancements and accuracy improvements, specifically, the capability to label and extract multiple key/value pairs in the same line of text.
+
+* Sample Labeling tool is now open-source
+
+* The Document Intelligence Sample Labeling tool is now available as an open-source project. You can integrate it within your solutions and make customer-specific changes to meet your needs.
+
+* For more information about the Document Intelligence Sample Labeling tool, review the documentation available on [GitHub](https://github.com/microsoft/OCR-Form-Tools/blob/master/README.md).
+
+* TLS 1.2 enforcement
+
+* TLS 1.2 is now enforced for all HTTP requests to this service. For more information, see [Azure AI services security](../../ai-services/security-features.md).
++++
+## January 2020
+
+This release introduces Document Intelligence v2.0. In the next sections, you'll find more information about new features, enhancements, and changes.
+
+* New features
+
+ * **Custom model**
+ * **Train with labels** You can now train a custom model with manually labeled data. This method results in better-performing models and can produce models that work with complex forms or forms containing values without keys.
+ * **Asynchronous API** You can use async API calls to train with and analyze large data sets and files.
+ * **TIFF file support** You can now train with and extract data from TIFF documents.
+ * **Extraction accuracy improvements**
+
+ * **Prebuilt receipt model**
+ * **Tip amounts** You can now extract tip amounts and other handwritten values.
+ * **Line item extraction** You can extract line item values from receipts.
+ * **Confidence values** You can view the model's confidence for each extracted value.
+ * **Extraction accuracy improvements**
+
+ * **Layout extraction** You can now use the Layout API to extract text data and table data from your forms.
+
+* Custom model API changes
+
+ All of the APIs for training and using custom models have been renamed, and some synchronous methods are now asynchronous. The following are major changes:
+
+  * The process of training a model is now asynchronous. You initiate training through the **/custom/models** API call. This call returns an operation ID, which you can pass into **/custom/models/{modelID}** to return the training results.
+  * Key/value extraction is now initiated by the **/custom/models/{modelID}/analyze** API call. This call returns an operation ID, which you can pass into **/custom/models/{modelID}/analyzeResults/{resultID}** to return the extraction results.
+  * Operation IDs for the Train operation are now found in the **Location** header of HTTP responses, not the **Operation-Location** header. A request sketch of this flow follows this list.
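+
+  A minimal request sketch of that asynchronous flow, assuming a v2.0 endpoint and key (all placeholder values are hypothetical):
+
+  ```bash
+  # Start training asynchronously; the model location is returned in the Location header.
+  curl -i -X POST "https://{endpoint}/formrecognizer/v2.0/custom/models" \
+    -H "Ocp-Apim-Subscription-Key: {key}" \
+    -H "Content-Type: application/json" \
+    -d '{"source": "{SAS-URL-of-your-training-container}"}'
+
+  # Poll the returned model URL for training status and results.
+  curl "https://{endpoint}/formrecognizer/v2.0/custom/models/{modelId}" \
+    -H "Ocp-Apim-Subscription-Key: {key}"
+  ```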
+
+* Receipt API changes
+
+ * The APIs for reading sales receipts have been renamed.
+
+ * Receipt data extraction is now initiated by the **/prebuilt/receipt/analyze** API call. This call returns an operation ID, which you can pass into **/prebuilt/receipt/analyzeResults/{resultID}** to return the extraction results.
+
+* Output format changes
+
+ * The JSON responses for all API calls have new formats. Some keys and values have been added, removed, or renamed. See the quickstarts for examples of the current JSON formats.
++++
+## Next steps
++
+* Try processing your own forms and documents with the [Document Intelligence Studio](https://formrecognizer.appliedai.azure.com/studio)
+
+* Complete a [Document Intelligence quickstart](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-3.0.0&preserve-view=true) and get started creating a document processing app in the development language of your choice.
+++
+* Try processing your own forms and documents with the [Document Intelligence Sample Labeling tool](https://fott-2-1.azurewebsites.net/)
+
+* Complete a [Document Intelligence quickstart](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-2.1.0&preserve-view=true) and get started creating a document processing app in the development language of your choice.
++
ai-services How To Cache Token https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/immersive-reader/how-to-cache-token.md
+
+ Title: "Cache the authentication token"
+
+description: This article will show you how to cache the authentication token.
++++++ Last updated : 01/14/2020++++
+# How to cache the authentication token
+
+This article demonstrates how to cache the authentication token in order to improve performance of your application.
+
+## Using ASP.NET
+
+Import the **Microsoft.Identity.Client** NuGet package, which is used to acquire a token.
+
+Create a confidential client application property.
+
+```csharp
+private IConfidentialClientApplication _confidentialClientApplication;
+private IConfidentialClientApplication ConfidentialClientApplication
+{
+ get {
+ if (_confidentialClientApplication == null) {
+ _confidentialClientApplication = ConfidentialClientApplicationBuilder.Create(ClientId)
+ .WithClientSecret(ClientSecret)
+ .WithAuthority($"https://login.windows.net/{TenantId}")
+ .Build();
+ }
+
+ return _confidentialClientApplication;
+ }
+}
+```
+
+Next, use the following code to acquire an `AuthenticationResult`, using the authentication values you got when you [created the Immersive Reader resource](./how-to-create-immersive-reader.md).
+
+> [!IMPORTANT]
+> The [Microsoft.IdentityModel.Clients.ActiveDirectory](https://www.nuget.org/packages/Microsoft.IdentityModel.Clients.ActiveDirectory) NuGet package and Azure AD Authentication Library (ADAL) have been deprecated. No new features have been added since June 30, 2020. We strongly encourage you to upgrade; see the [migration guide](../../active-directory/develop/msal-migration.md) for more details.
++
+```csharp
+public async Task<string> GetTokenAsync()
+{
+ const string resource = "https://cognitiveservices.azure.com/";
+
+ var authResult = await ConfidentialClientApplication.AcquireTokenForClient(
+ new[] { $"{resource}/.default" })
+ .ExecuteAsync()
+ .ConfigureAwait(false);
+
+ return authResult.AccessToken;
+}
+```
+
+The `AuthenticationResult` object has an `AccessToken` property, which is the actual token you use when launching the Immersive Reader with the SDK. It also has an `ExpiresOn` property, which denotes when the token expires. Before launching the Immersive Reader, check whether the token has expired and acquire a new one only if it has.
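+
+A minimal sketch of that check, reusing the `ConfidentialClientApplication` property above; the `_cachedAuthResult` field and `GetCachedTokenAsync` name are hypothetical:
+
+```csharp
+private AuthenticationResult _cachedAuthResult;
+
+public async Task<string> GetCachedTokenAsync()
+{
+    const string resource = "https://cognitiveservices.azure.com/";
+
+    // Acquire a new token only when there is no cached result or it expires within 5 minutes.
+    if (_cachedAuthResult == null || _cachedAuthResult.ExpiresOn <= DateTimeOffset.UtcNow.AddMinutes(5))
+    {
+        _cachedAuthResult = await ConfidentialClientApplication.AcquireTokenForClient(
+            new[] { $"{resource}/.default" })
+            .ExecuteAsync()
+            .ConfigureAwait(false);
+    }
+
+    return _cachedAuthResult.AccessToken;
+}
+```
+
+Note that MSAL also caches application tokens internally, so this explicit check mainly avoids re-entering the acquisition path on every launch.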
+
+## Using Node.JS
+
+Add the [**request**](https://www.npmjs.com/package/request) npm package to your project. Use the following code to acquire a token, using the authentication values you got when you [created the Immersive Reader resource](./how-to-create-immersive-reader.md).
+
+```javascript
+router.get('/token', function(req, res) {
+ request.post(
+ {
+ headers: { 'content-type': 'application/x-www-form-urlencoded' },
+ url: `https://login.windows.net/${TENANT_ID}/oauth2/token`,
+ form: {
+ grant_type: 'client_credentials',
+ client_id: CLIENT_ID,
+ client_secret: CLIENT_SECRET,
+ resource: 'https://cognitiveservices.azure.com/'
+ }
+ },
+ function(err, resp, json) {
+ const result = JSON.parse(json);
+ return res.send({
+ access_token: result.access_token,
+ expires_on: result.expires_on
+ });
+ }
+ );
+});
+```
+
+The `expires_on` property is the date and time at which the token expires, expressed as the number of seconds since January 1, 1970 UTC. Use this value to determine whether your token has expired before attempting to acquire a new one.
+
+```javascript
+async function getToken() {
+ if (Date.now() / 1000 > CREDENTIALS.expires_on) {
+ CREDENTIALS = await refreshCredentials();
+ }
+ return CREDENTIALS.access_token;
+}
+```
+
+## Next steps
+
+* Explore the [Immersive Reader SDK Reference](./reference.md)
ai-services How To Configure Read Aloud https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/immersive-reader/how-to-configure-read-aloud.md
+
+ Title: "Configure Read Aloud"
+
+description: This article will show you how to configure the various options for Read Aloud.
++++++ Last updated : 06/29/2020+++
+# How to configure Read Aloud
+
+This article demonstrates how to configure the various options for Read Aloud in the Immersive Reader.
+
+## Automatically start Read Aloud
+
+The `options` parameter contains all of the flags that can be used to configure Read Aloud. Set `autoplay` to `true` to enable automatically starting Read Aloud after launching the Immersive Reader.
+
+```typescript
+const options = {
+ readAloudOptions: {
+ autoplay: true
+ }
+};
+
+ImmersiveReader.launchAsync(YOUR_TOKEN, YOUR_SUBDOMAIN, YOUR_DATA, options);
+```
+
+> [!NOTE]
+> Due to browser limitations, automatic playback is not supported in Safari.
+
+## Configure the voice
+
+Set `voice` to either `male` or `female`. Not all languages support both voices. For more information, see the [Language Support](./language-support.md) page.
+
+```typescript
+const options = {
+ readAloudOptions: {
+ voice: 'female'
+ }
+};
+```
+
+## Configure playback speed
+
+Set `speed` to a number between `0.5` (50%) and `2.5` (250%), inclusive. Values outside this range are clamped to either 0.5 or 2.5.
+
+```typescript
+const options = {
+ readAloudOptions: {
+ speed: 1.5
+ }
+};
+```
+
+## Next steps
+
+* Explore the [Immersive Reader SDK Reference](./reference.md)
ai-services How To Configure Translation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/immersive-reader/how-to-configure-translation.md
+
+ Title: "Configure translation"
+
+description: This article will show you how to configure the various options for translation.
++++++ Last updated : 01/06/2022+++
+# How to configure Translation
+
+This article demonstrates how to configure the various options for Translation in the Immersive Reader.
+
+## Configure Translation language
+
+The `options` parameter contains all of the flags that can be used to configure Translation. Set the `language` parameter to the language you wish to translate to. See the [Language Support](./language-support.md) page for the full list of supported languages.
+
+```typescript
+const options = {
+ translationOptions: {
+ language: 'fr-FR'
+ }
+};
+
+ImmersiveReader.launchAsync(YOUR_TOKEN, YOUR_SUBDOMAIN, YOUR_DATA, options);
+```
+
+## Automatically translate the document on load
+
+Set `autoEnableDocumentTranslation` to `true` to enable automatically translating the entire document when the Immersive Reader loads.
+
+```typescript
+const options = {
+ translationOptions: {
+ autoEnableDocumentTranslation: true
+ }
+};
+```
+
+## Automatically enable word translation
+
+Set `autoEnableWordTranslation` to `true` to enable single word translation.
+
+```typescript
+const options = {
+ translationOptions: {
+ autoEnableWordTranslation: true
+ }
+};
+```
+
+## Next steps
+
+* Explore the [Immersive Reader SDK Reference](./reference.md)
ai-services How To Create Immersive Reader https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/immersive-reader/how-to-create-immersive-reader.md
+
+ Title: "Create an Immersive Reader Resource"
+
+description: This article shows you how to create a new Immersive Reader resource with a custom subdomain and then configure Azure AD in your Azure tenant.
+++++++ Last updated : 03/31/2023+++
+# Create an Immersive Reader resource and configure Azure Active Directory authentication
+
+In this article, we provide a script that creates an Immersive Reader resource and configures Azure Active Directory (Azure AD) authentication. Each time an Immersive Reader resource is created, whether with this script or in the portal, it must also be configured with Azure AD permissions.
+
+The script is designed to create and configure all the necessary Immersive Reader and Azure AD resources for you in one step. However, you can also use it just to configure Azure AD authentication for an existing Immersive Reader resource if, for instance, you already created one in the Azure portal.
+
+For some customers, it may be necessary to create multiple Immersive Reader resources, for development versus production, or for the different regions where your service is deployed. In those cases, you can run the script multiple times to create different Immersive Reader resources and configure them with the Azure AD permissions.
+
+The script is designed to be flexible. It first looks for existing Immersive Reader and Azure AD resources in your subscription, and creates them only if they don't already exist. If it's your first time creating an Immersive Reader resource, the script does everything you need. If you want to use it just to configure Azure AD for an existing Immersive Reader resource that was created in the portal, it does that too.
+It can also be used to create and configure multiple Immersive Reader resources.
+
+## Permissions
+
+The listed **Owner** of your Azure subscription has all the required permissions to create an Immersive Reader resource and configure Azure AD authentication.
+
+If you aren't an owner, the following scope-specific permissions are required:
+
+* **Contributor**. You need to have at least a Contributor role associated with the Azure subscription:
+
+ :::image type="content" source="media/contributor-role.png" alt-text="Screenshot of contributor built-in role description.":::
+
+* **Application Developer**. You need to have at least an Application Developer role assigned in Azure AD:
+
+   :::image type="content" source="media/application-developer-role.png" alt-text="Screenshot of the Application Developer built-in role description.":::
+
+For more information, _see_ [Azure AD built-in roles](../../active-directory/roles/permissions-reference.md#application-developer)
+
+## Set up PowerShell environment
+
+1. Start by opening the [Azure Cloud Shell](../../cloud-shell/overview.md). Ensure that Cloud Shell is set to PowerShell in the upper-left hand dropdown or by typing `pwsh`.
+
+1. Copy and paste the following code snippet into the shell.
+
+ ```azurepowershell-interactive
+ function Create-ImmersiveReaderResource(
+ [Parameter(Mandatory=$true, Position=0)] [String] $SubscriptionName,
+ [Parameter(Mandatory=$true)] [String] $ResourceName,
+ [Parameter(Mandatory=$true)] [String] $ResourceSubdomain,
+ [Parameter(Mandatory=$true)] [String] $ResourceSKU,
+ [Parameter(Mandatory=$true)] [String] $ResourceLocation,
+ [Parameter(Mandatory=$true)] [String] $ResourceGroupName,
+ [Parameter(Mandatory=$true)] [String] $ResourceGroupLocation,
+ [Parameter(Mandatory=$true)] [String] $AADAppDisplayName,
+ [Parameter(Mandatory=$true)] [String] $AADAppIdentifierUri,
+ [Parameter(Mandatory=$true)] [String] $AADAppClientSecretExpiration
+ )
+ {
+ $unused = ''
+ if (-not [System.Uri]::TryCreate($AADAppIdentifierUri, [System.UriKind]::Absolute, [ref] $unused)) {
+ throw "Error: AADAppIdentifierUri must be a valid URI"
+ }
+
+ Write-Host "Setting the active subscription to '$SubscriptionName'"
+ $subscriptionExists = Get-AzSubscription -SubscriptionName $SubscriptionName
+ if (-not $subscriptionExists) {
+ throw "Error: Subscription does not exist"
+ }
+ az account set --subscription $SubscriptionName
+
+ $resourceGroupExists = az group exists --name $ResourceGroupName
+ if ($resourceGroupExists -eq "false") {
+ Write-Host "Resource group does not exist. Creating resource group"
+ $groupResult = az group create --name $ResourceGroupName --location $ResourceGroupLocation
+ if (-not $groupResult) {
+ throw "Error: Failed to create resource group"
+ }
+ Write-Host "Resource group created successfully"
+ }
+
+ # Create an Immersive Reader resource if it doesn't already exist
+ $resourceId = az cognitiveservices account show --resource-group $ResourceGroupName --name $ResourceName --query "id" -o tsv
+ if (-not $resourceId) {
+ Write-Host "Creating the new Immersive Reader resource '$ResourceName' (SKU '$ResourceSKU') in '$ResourceLocation' with subdomain '$ResourceSubdomain'"
+ $resourceId = az cognitiveservices account create `
+ --name $ResourceName `
+ --resource-group $ResourceGroupName `
+ --kind ImmersiveReader `
+ --sku $ResourceSKU `
+ --location $ResourceLocation `
+ --custom-domain $ResourceSubdomain `
+ --query "id" `
+ -o tsv
+
+ if (-not $resourceId) {
+ throw "Error: Failed to create Immersive Reader resource"
+ }
+ Write-Host "Immersive Reader resource created successfully"
+ }
+
+ # Create an Azure Active Directory app if it doesn't already exist
+ $clientId = az ad app show --id $AADAppIdentifierUri --query "appId" -o tsv
+ if (-not $clientId) {
+ Write-Host "Creating new Azure Active Directory app"
+ $clientId = az ad app create --display-name $AADAppDisplayName --identifier-uris $AADAppIdentifierUri --query "appId" -o tsv
+ if (-not $clientId) {
+ throw "Error: Failed to create Azure Active Directory application"
+ }
+ Write-Host "Azure Active Directory application created successfully."
+
+ $clientSecret = az ad app credential reset --id $clientId --end-date "$AADAppClientSecretExpiration" --query "password" | % { $_.Trim('"') }
+ if (-not $clientSecret) {
+ throw "Error: Failed to create Azure Active Directory application client secret"
+ }
+ Write-Host "Azure Active Directory application client secret created successfully."
+
+ Write-Host "NOTE: To manage your Active Directory application client secrets after this Immersive Reader Resource has been created please visit https://portal.azure.com and go to Home -> Azure Active Directory -> App Registrations -> (your app) '$AADAppDisplayName' -> Certificates and Secrets blade -> Client Secrets section" -ForegroundColor Yellow
+ }
+
+ # Create a service principal if it doesn't already exist
+ $principalId = az ad sp show --id $AADAppIdentifierUri --query "id" -o tsv
+ if (-not $principalId) {
+ Write-Host "Creating new service principal"
+ az ad sp create --id $clientId | Out-Null
+ $principalId = az ad sp show --id $AADAppIdentifierUri --query "id" -o tsv
+
+ if (-not $principalId) {
+ throw "Error: Failed to create new service principal"
+ }
+ Write-Host "New service principal created successfully"
+
+ # Sleep for 5 seconds to allow the new service principal to propagate
+ Write-Host "Sleeping for 5 seconds"
+ Start-Sleep -Seconds 5
+ }
+
+ Write-Host "Granting service principal access to the newly created Immersive Reader resource"
+ $accessResult = az role assignment create --assignee $principalId --scope $resourceId --role "Cognitive Services Immersive Reader User"
+ if (-not $accessResult) {
+ throw "Error: Failed to grant service principal access"
+ }
+ Write-Host "Service principal access granted successfully"
+
+ # Grab the tenant ID, which is needed when obtaining an Azure AD token
+ $tenantId = az account show --query "tenantId" -o tsv
+
+ # Collect the information needed to obtain an Azure AD token into one object
+ $result = @{}
+ $result.TenantId = $tenantId
+ $result.ClientId = $clientId
+ $result.ClientSecret = $clientSecret
+ $result.Subdomain = $ResourceSubdomain
+
+ Write-Host "`nSuccess! " -ForegroundColor Green -NoNewline
+ Write-Host "Save the following JSON object to a text file for future reference."
+ Write-Host "*****"
+ if($clientSecret -ne $null) {
+
+ Write-Host "This function has created a client secret (password) for you. This secret is used when calling Azure Active Directory to fetch access tokens."
+ Write-Host "This is the only time you will ever see the client secret for your Azure Active Directory application, so save it now." -ForegroundColor Yellow
+ }
+ else{
+ Write-Host "You will need to retrieve the ClientSecret from your original run of this function that created it. If you don't have it, you will need to go create a new client secret for your Azure Active Directory application. Please visit https://portal.azure.com and go to Home -> Azure Active Directory -> App Registrations -> (your app) '$AADAppDisplayName' -> Certificates and Secrets blade -> Client Secrets section." -ForegroundColor Yellow
+ }
+ Write-Host "*****`n"
+ Write-Output (ConvertTo-Json $result)
+ }
+ ```
+
+1. Run the function `Create-ImmersiveReaderResource`, replacing the '<PARAMETER_VALUES>' placeholders with your own values as appropriate.
+
+ ```azurepowershell-interactive
+ Create-ImmersiveReaderResource -SubscriptionName '<SUBSCRIPTION_NAME>' -ResourceName '<RESOURCE_NAME>' -ResourceSubdomain '<RESOURCE_SUBDOMAIN>' -ResourceSKU '<RESOURCE_SKU>' -ResourceLocation '<RESOURCE_LOCATION>' -ResourceGroupName '<RESOURCE_GROUP_NAME>' -ResourceGroupLocation '<RESOURCE_GROUP_LOCATION>' -AADAppDisplayName '<AAD_APP_DISPLAY_NAME>' -AADAppIdentifierUri '<AAD_APP_IDENTIFIER_URI>' -AADAppClientSecretExpiration '<AAD_APP_CLIENT_SECRET_EXPIRATION>'
+ ```
+
+ The full command looks something like the following. Here we have put each parameter on its own line for clarity, so you can see the whole command. __Do not copy or use this command as-is.__ Copy and use the command with your own values. This example has dummy values for the '<PARAMETER_VALUES>'. Yours may be different, as you come up with your own names for these values.
+
+ ```
+ Create-ImmersiveReaderResource
+ -SubscriptionName 'MyOrganizationSubscriptionName'
+ -ResourceName 'MyOrganizationImmersiveReader'
+ -ResourceSubdomain 'MyOrganizationImmersiveReader'
+ -ResourceSKU 'S0'
+ -ResourceLocation 'westus2'
+ -ResourceGroupName 'MyResourceGroupName'
+ -ResourceGroupLocation 'westus2'
+ -AADAppDisplayName 'MyOrganizationImmersiveReaderAADApp'
+ -AADAppIdentifierUri 'api://MyOrganizationImmersiveReaderAADApp'
+ -AADAppClientSecretExpiration '2021-12-31'
+ ```
+
+ | Parameter | Comments |
+ | | |
+ | SubscriptionName |Name of the Azure subscription to use for your Immersive Reader resource. You must have a subscription in order to create a resource. |
+ | ResourceName | Must be alphanumeric, and may contain '-', as long as the '-' isn't the first or last character. Length may not exceed 63 characters.|
+ | ResourceSubdomain |A custom subdomain is needed for your Immersive Reader resource. The subdomain is used by the SDK when calling the Immersive Reader service to launch the Reader. The subdomain must be globally unique. The subdomain must be alphanumeric, and may contain '-', as long as the '-' isn't the first or last character. Length may not exceed 63 characters. This parameter is optional if the resource already exists. |
+ | ResourceSKU |Options: `S0` (Standard tier) or `S1` (Education/Nonprofit organizations). Visit our [Azure AI services pricing page](https://azure.microsoft.com/pricing/details/cognitive-services/immersive-reader/) to learn more about each available SKU. This parameter is optional if the resource already exists. |
+ | ResourceLocation |Options: `australiaeast`, `brazilsouth`, `canadacentral`, `centralindia`, `centralus`, `eastasia`, `eastus`, `eastus2`, `francecentral`, `germanywestcentral`, `japaneast`, `japanwest`, `jioindiawest`, `koreacentral`, `northcentralus`, `northeurope`, `norwayeast`, `southafricanorth`, `southcentralus`, `southeastasia`, `swedencentral`, `switzerlandnorth`, `switzerlandwest`, `uaenorth`, `uksouth`, `westcentralus`, `westeurope`, `westus`, `westus2`, `westus3`. This parameter is optional if the resource already exists. |
+ | ResourceGroupName |Resources are created in resource groups within subscriptions. Supply the name of an existing resource group. If the resource group doesn't already exist, a new one with this name is created. |
+ | ResourceGroupLocation |If your resource group doesn't exist, you need to supply a location in which to create the group. To find a list of locations, run `az account list-locations`. Use the *name* property (without spaces) of the returned result. This parameter is optional if your resource group already exists. |
+ | AADAppDisplayName |The Azure Active Directory application display name. If an existing Azure AD application isn't found, a new one with this name is created. This parameter is optional if the Azure AD application already exists. |
+ | AADAppIdentifierUri |The URI for the Azure AD application. If an existing Azure AD application isn't found, a new one with this URI is created. For example, `api://MyOrganizationImmersiveReaderAADApp`. Here we're using the default Azure AD URI scheme prefix of `api://` for compatibility with the [Azure AD policy of using verified domains](../../active-directory/develop/reference-breaking-changes.md#appid-uri-in-single-tenant-applications-will-require-use-of-default-scheme-or-verified-domains). |
+ | AADAppClientSecretExpiration |The date or datetime after which your Azure AD Application Client Secret (password) will expire (for example, '2020-12-31T11:59:59+00:00' or '2020-12-31'). This function creates a client secret for you. To manage Azure AD application client secrets after you've created this resource, visit https://portal.azure.com and go to Home -> Azure Active Directory -> App Registrations -> (your app) `[AADAppDisplayName]` -> Certificates and Secrets section -> Client Secrets section (as shown in the "Manage your Azure AD application secrets" screenshot).|
+
+ Manage your Azure AD application secrets
+
+ ![Azure Portal Certificates and Secrets blade](./media/client-secrets-blade.png)
+
+1. Copy the JSON output into a text file for later use. The output should look like the following.
+
+ ```json
+ {
+ "TenantId": "...",
+ "ClientId": "...",
+ "ClientSecret": "...",
+ "Subdomain": "..."
+ }
+ ```
+
+## Next steps
+
+* View the [Node.js quickstart](./quickstarts/client-libraries.md?pivots=programming-language-nodejs) to see what else you can do with the Immersive Reader SDK using Node.js
+* View the [Android tutorial](./how-to-launch-immersive-reader.md) to see what else you can do with the Immersive Reader SDK using Java or Kotlin for Android
+* View the [iOS tutorial](./how-to-launch-immersive-reader.md) to see what else you can do with the Immersive Reader SDK using Swift for iOS
+* View the [Python tutorial](./how-to-launch-immersive-reader.md) to see what else you can do with the Immersive Reader SDK using Python
+* Explore the [Immersive Reader SDK](https://github.com/microsoft/immersive-reader-sdk) and the [Immersive Reader SDK Reference](./reference.md)
ai-services How To Customize Launch Button https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/immersive-reader/how-to-customize-launch-button.md
+
+ Title: "Edit the Immersive Reader launch button"
+
+description: This article will show you how to customize the button that launches the Immersive Reader.
+++++++ Last updated : 03/08/2021+++
+# How to customize the Immersive Reader button
+
+This article demonstrates how to customize the button that launches the Immersive Reader to fit the needs of your application.
+
+## Add the Immersive Reader button
+
+The Immersive Reader SDK provides default styling for the button that launches the Immersive Reader. Use the `immersive-reader-button` class attribute to enable this styling.
+
+```html
+<div class='immersive-reader-button'></div>
+```
+
+## Customize the button style
+
+Use the `data-button-style` attribute to set the style of the button. The allowed values are `icon`, `text`, and `iconAndText`. The default value is `icon`.
+
+### Icon button
+
+```html
+<div class='immersive-reader-button' data-button-style='icon'></div>
+```
+
+This renders the following:
+
+![This is the rendered icon button.](./media/button-icon.png)
+
+### Text button
+
+```html
+<div class='immersive-reader-button' data-button-style='text'></div>
+```
+
+This renders the following:
+
+![This is the rendered text button.](./media/button-text.png)
+
+### Icon and text button
+
+```html
+<div class='immersive-reader-button' data-button-style='iconAndText'></div>
+```
+
+This renders the following:
+
+![This is the rendered icon and text button.](./media/button-icon-and-text.png)
+
+## Customize the button text
+
+Configure the language and the alt text for the button using the `data-locale` attribute. The default language is English.
+
+```html
+<div class='immersive-reader-button' data-locale='fr-FR'></div>
+```
+
+## Customize the size of the icon
+
+The size of the Immersive Reader icon can be configured using the `data-icon-px-size` attribute. This sets the size of the icon in pixels. The default size is 20px.
+
+```html
+<div class='immersive-reader-button' data-icon-px-size='50'></div>
+```
+
+## Next steps
+
+* Explore the [Immersive Reader SDK Reference](./reference.md)
ai-services How To Launch Immersive Reader https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/immersive-reader/how-to-launch-immersive-reader.md
+
+ Title: "How to launch the Immersive Reader"
+
+description: Learn how to launch the Immersive reader using JavaScript, Python, Android, or iOS. Immersive Reader uses proven techniques to improve reading comprehension for language learners, emerging readers, and students with learning differences.
+++++ Last updated : 03/04/2021++
+zone_pivot_groups: immersive-reader-how-to-guides
++
+# How to launch the Immersive Reader
+
+In the [overview](./overview.md), you learned about what the Immersive Reader is and how it implements proven techniques to improve reading comprehension for language learners, emerging readers, and students with learning differences. This article demonstrates how to launch the Immersive Reader using JavaScript, Python, Android, or iOS.
++++++++++++
ai-services How To Multiple Resources https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/immersive-reader/how-to-multiple-resources.md
+
+ Title: "Integrate multiple Immersive Reader resources"
+
+description: In this tutorial, you'll create a Node.js application that launches the Immersive Reader using multiple Immersive Reader resources.
++++++ Last updated : 01/14/2020++
+#Customer intent: As a developer, I want to learn more about the Immersive Reader SDK so that I can fully utilize all that the SDK has to offer.
++
+# Integrate multiple Immersive Reader resources
+
+In the [overview](./overview.md), you learned about what the Immersive Reader is and how it implements proven techniques to improve reading comprehension for language learners, emerging readers, and students with learning differences. In the [quickstart](./quickstarts/client-libraries.md), you learned how to use Immersive Reader with a single resource. This tutorial covers how to integrate multiple Immersive Reader resources in the same application. In this tutorial, you learn how to:
+
+> [!div class="checklist"]
+> * Create multiple Immersive Reader resources under an existing resource group
+> * Launch the Immersive Reader using multiple resources
+
+If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/cognitive-services/) before you begin.
+
+## Prerequisites
+
+* Follow the [quickstart](./quickstarts/client-libraries.md?pivots=programming-language-nodejs) to create a web app that launches the Immersive Reader with NodeJS. In that quickstart, you configure a single Immersive Reader resource. We will build on top of that in this tutorial.
+
+## Create the Immersive Reader resources
+
+Follow [these instructions](./how-to-create-immersive-reader.md) to create each Immersive Reader resource. The **Create-ImmersiveReaderResource** script has `ResourceName`, `ResourceSubdomain`, and `ResourceLocation` as parameters. These should be unique for each resource being created. The remaining parameters should be the same as what you used when setting up your first Immersive Reader resource. This way, each resource can be linked to the same Azure resource group and Azure AD application.
+
+The example below shows how to create two resources, one in WestUS, and another in EastUS. Notice the unique values for `ResourceName`, `ResourceSubdomain`, and `ResourceLocation`.
+
+```azurepowershell-interactive
+Create-ImmersiveReaderResource `
+ -SubscriptionName <SUBSCRIPTION_NAME> `
+ -ResourceName Resource_name_wus `
+ -ResourceSubdomain resource-subdomain-wus `
+ -ResourceSKU <RESOURCE_SKU> `
+ -ResourceLocation westus `
+ -ResourceGroupName <RESOURCE_GROUP_NAME> `
+ -ResourceGroupLocation <RESOURCE_GROUP_LOCATION> `
+ -AADAppDisplayName <AAD_APP_DISPLAY_NAME> `
+ -AADAppIdentifierUri <AAD_APP_IDENTIFIER_URI> `
+ -AADAppClientSecretExpiration <AAD_APP_CLIENT_SECRET_EXPIRATION>
+
+Create-ImmersiveReaderResource `
+ -SubscriptionName <SUBSCRIPTION_NAME> `
+ -ResourceName Resource_name_eus `
+ -ResourceSubdomain resource-subdomain-eus `
+ -ResourceSKU <RESOURCE_SKU> `
+ -ResourceLocation eastus `
+ -ResourceGroupName <RESOURCE_GROUP_NAME> `
+ -ResourceGroupLocation <RESOURCE_GROUP_LOCATION> `
+ -AADAppDisplayName <AAD_APP_DISPLAY_NAME> `
+ -AADAppIdentifierUri <AAD_APP_IDENTIFIER_URI> `
+ -AADAppClientSecretExpiration <AAD_APP_CLIENT_SECRET_EXPIRATION>
+```
+
+## Add resources to environment configuration
+
+In the quickstart, you created an environment configuration file that contains the `TenantId`, `ClientId`, `ClientSecret`, and `Subdomain` parameters. Since all of your resources use the same Azure AD application, we can use the same values for the `TenantId`, `ClientId`, and `ClientSecret`. The only change that needs to be made is to list each subdomain for each resource.
+
+Your new __.env__ file should now look something like the following:
+
+```text
+TENANT_ID={YOUR_TENANT_ID}
+CLIENT_ID={YOUR_CLIENT_ID}
+CLIENT_SECRET={YOUR_CLIENT_SECRET}
+SUBDOMAIN_WUS={YOUR_WESTUS_SUBDOMAIN}
+SUBDOMAIN_EUS={YOUR_EASTUS_SUBDOMAIN}
+```
+
+Be sure not to commit this file into source control, as it contains secrets that should not be made public.
+
+Next, we're going to modify the _routes\index.js_ file that we created to support our multiple resources. Replace its content with the following code.
+
+As before, this code creates an API endpoint that acquires an Azure AD authentication token using your service principal password. This time, it allows the user to specify a resource location and pass it in as a query parameter. It then returns an object containing the token and the corresponding subdomain.
+
+```javascript
+var express = require('express');
+var router = express.Router();
+var request = require('request');
+
+/* GET home page. */
+router.get('/', function(req, res, next) {
+ res.render('index', { Title: 'Express' });
+});
+
+router.get('/GetTokenAndSubdomain', function(req, res) {
+ try {
+ request.post({
+ headers: {
+ 'content-type': 'application/x-www-form-urlencoded'
+ },
+ url: `https://login.windows.net/${process.env.TENANT_ID}/oauth2/token`,
+ form: {
+ grant_type: 'client_credentials',
+ client_id: process.env.CLIENT_ID,
+ client_secret: process.env.CLIENT_SECRET,
+ resource: 'https://cognitiveservices.azure.com/'
+ }
+ },
+ function(err, resp, tokenResult) {
+ if (err) {
+ console.log(err);
+ return res.status(500).send('CogSvcs IssueToken error');
+ }
+
+ var tokenResultParsed = JSON.parse(tokenResult);
+
+ if (tokenResultParsed.error) {
+ console.log(tokenResult);
+ return res.send({error : "Unable to acquire Azure AD token. Check the debugger for more information."})
+ }
+
+ var token = tokenResultParsed.access_token;
+
+ var subdomain = "";
+ var region = req.query && req.query.region;
+ switch (region) {
+ case "eus":
+ subdomain = process.env.SUBDOMAIN_EUS
+ break;
+ case "wus":
+ default:
+ subdomain = process.env.SUBDOMAIN_WUS
+ }
+
+ return res.send({token, subdomain});
+ });
+ } catch (err) {
+ console.log(err);
+ return res.status(500).send('CogSvcs IssueToken error');
+ }
+});
+
+module.exports = router;
+```
+
+The **GetTokenAndSubdomain** API endpoint should be secured behind some form of authentication (for example, [OAuth](https://oauth.net/2/)) to prevent unauthorized users from obtaining tokens to use against your Immersive Reader service and billing; that work is beyond the scope of this tutorial.
+
+## Launch the Immersive Reader with sample content
+
+1. Open _views\index.pug_, and replace its content with the following code. This code populates the page with some sample content, and adds two buttons that launch the Immersive Reader: one for the EastUS resource, and another for the WestUS resource.
+
+ ```pug
+ doctype html
+ html
+ head
+ title Immersive Reader Quickstart Node.js
+
+ link(rel='stylesheet', href='https://stackpath.bootstrapcdn.com/bootstrap/3.4.1/css/bootstrap.min.css')
+
+ // A polyfill for Promise is needed for IE11 support.
+ script(src='https://cdn.jsdelivr.net/npm/promise-polyfill@8/dist/polyfill.min.js')
+
+ script(src='https://ircdname.azureedge.net/immersivereadersdk/immersive-reader-sdk.1.2.0.js')
+ script(src='https://code.jquery.com/jquery-3.3.1.min.js')
+
+ style(type="text/css").
+ .immersive-reader-button {
+ background-color: white;
+ margin-top: 5px;
+ border: 1px solid black;
+ float: right;
+ }
+ body
+ div(class="container")
+ button(class="immersive-reader-button" data-button-style="icon" data-locale="en" onclick='handleLaunchImmersiveReader("wus")') WestUS Immersive Reader
+ button(class="immersive-reader-button" data-button-style="icon" data-locale="en" onclick='handleLaunchImmersiveReader("eus")') EastUS Immersive Reader
+
+ h1(id="ir-title") About Immersive Reader
+ div(id="ir-content" lang="en-us")
+ p Immersive Reader is a tool that implements proven techniques to improve reading comprehension for emerging readers, language learners, and people with learning differences. The Immersive Reader is designed to make reading more accessible for everyone. The Immersive Reader
+
+ ul
+ li Shows content in a minimal reading view
+ li Displays pictures of commonly used words
+ li Highlights nouns, verbs, adjectives, and adverbs
+ li Reads your content out loud to you
+ li Translates your content into another language
+ li Breaks down words into syllables
+
+ h3 The Immersive Reader is available in many languages.
+
+ p(lang="es-es") El Lector inmersivo está disponible en varios idiomas.
+ p(lang="zh-cn") 沉浸式阅读器支持许多语言
+      p(lang="de-de") Der plastische Reader ist in vielen Sprachen verfügbar.
+ p(lang="ar-eg" dir="rtl" style="text-align:right") يتوفر \"القارئ الشامل\" في العديد من اللغات.
+
+ script(type="text/javascript").
+ function getTokenAndSubdomainAsync(region) {
+ return new Promise(function (resolve, reject) {
+ $.ajax({
+ url: "/GetTokenAndSubdomain",
+ type: "GET",
+ data: {
+ region: region
+ },
+ success: function (data) {
+ if (data.error) {
+ reject(data.error);
+ } else {
+ resolve(data);
+ }
+ },
+ error: function (err) {
+ reject(err);
+ }
+ });
+ });
+ }
+
+ function handleLaunchImmersiveReader(region) {
+ getTokenAndSubdomainAsync(region)
+ .then(function (response) {
+ const token = response["token"];
+ const subdomain = response["subdomain"];
+ // Learn more about chunk usage and supported MIME types https://learn.microsoft.com/azure/ai-services/immersive-reader/reference#chunk
+ const data = {
+ Title: $("#ir-title").text(),
+ chunks: [{
+ content: $("#ir-content").html(),
+ mimeType: "text/html"
+ }]
+ };
+ // Learn more about options https://learn.microsoft.com/azure/ai-services/immersive-reader/reference#options
+ const options = {
+ "onExit": exitCallback,
+ "uiZIndex": 2000
+ };
+ ImmersiveReader.launchAsync(token, subdomain, data, options)
+ .catch(function (error) {
+ alert("Error in launching the Immersive Reader. Check the console.");
+ console.log(error);
+ });
+ })
+ .catch(function (error) {
+ alert("Error in getting the Immersive Reader token and subdomain. Check the console.");
+ console.log(error);
+ });
+ }
+
+ function exitCallback() {
+ console.log("This is the callback function. It is executed when the Immersive Reader closes.");
+ }
+ ```
+
+2. Our web app is now ready. Start the app by running:
+
+ ```bash
+ npm start
+ ```
+
+3. Open your browser and navigate to `http://localhost:3000`. You should see the above content on the page. Select either the **EastUS Immersive Reader** button or the **WestUS Immersive Reader** button to launch the Immersive Reader using those respective resources.
+
+## Next steps
+
+* Explore the [Immersive Reader SDK](https://github.com/microsoft/immersive-reader-sdk) and the [Immersive Reader SDK Reference](./reference.md)
+* View code samples on [GitHub](https://github.com/microsoft/immersive-reader-sdk/tree/master/js/samples/advanced-csharp)
ai-services How To Prepare Html https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/immersive-reader/how-to-prepare-html.md
+
+ Title: "How to prepare HTML content for Immersive Reader"
+
+description: Learn how to launch the Immersive reader using HTML, JavaScript, Python, Android, or iOS. Immersive Reader uses proven techniques to improve reading comprehension for language learners, emerging readers, and students with learning differences.
++++++ Last updated : 03/04/2021+++
+# How to prepare HTML content for Immersive Reader
+
+This article shows you how to structure your HTML and retrieve the content, so that it can be used by Immersive Reader.
+
+## Prepare the HTML content
+
+Place the content that you want to render in the Immersive Reader inside a container element. Be sure that the container element has a unique `id`. The Immersive Reader provides support for basic HTML elements; see the [reference](reference.md#html-support) for more information.
+
+```html
+<div id='immersive-reader-content'>
+ <b>Bold</b>
+ <i>Italic</i>
+ <u>Underline</u>
+ <strike>Strikethrough</strike>
+ <code>Code</code>
+ <sup>Superscript</sup>
+ <sub>Subscript</sub>
+ <ul><li>Unordered lists</li></ul>
+ <ol><li>Ordered lists</li></ol>
+</div>
+```
+
+## Get the HTML content in JavaScript
+
+Use the `id` of the container element to get the HTML content in your JavaScript code.
+
+```javascript
+const htmlContent = document.getElementById('immersive-reader-content').innerHTML;
+```
+
+## Launch the Immersive Reader with your HTML content
+
+When calling `ImmersiveReader.launchAsync`, set the chunk's `mimeType` property to `text/html` to enable rendering HTML.
+
+```javascript
+const data = {
+ chunks: [{
+ content: htmlContent,
+ mimeType: 'text/html'
+ }]
+};
+
+ImmersiveReader.launchAsync(YOUR_TOKEN, YOUR_SUBDOMAIN, data, YOUR_OPTIONS);
+```
+
+## Next steps
+
+* Explore the [Immersive Reader SDK Reference](reference.md)
ai-services How To Store User Preferences https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/immersive-reader/how-to-store-user-preferences.md
+
+ Title: "Store user preferences"
+
+description: This article will show you how to store the user's preferences.
++++++ Last updated : 06/29/2020+++
+# How to store user preferences
+
+This article demonstrates how to store the user's UI settings, formally known as **user preferences**, via the [-preferences](./reference.md#options) and [-onPreferencesChanged](./reference.md#options) Immersive Reader SDK options.
+
+When the [CookiePolicy](./reference.md#cookiepolicy-options) SDK option is set to *Enabled*, the Immersive Reader application stores the **user preferences** (text size, theme color, font, and so on) in cookies, which are local to a specific browser and device. Each time the user launches the Immersive Reader on the same browser and device, it will open with the user's preferences from their last session on that device. However, if the user opens the Immersive Reader on a different browser or device, the settings will initially be configured with the Immersive Reader's default settings, and the user will have to set their preferences again, and so on for each device they use. The `-preferences` and `-onPreferencesChanged` Immersive Reader SDK options provide a way for applications to roam a user's preferences across various browsers and devices, so that the user has a consistent experience wherever they use the application.
+
+First, by supplying the `-onPreferencesChanged` callback SDK option when launching the Immersive Reader application, the Immersive Reader will send a `-preferences` string back to the host application each time the user changes their preferences during the Immersive Reader session. The host application is then responsible for storing the user preferences in their own system. Then, when that same user launches the Immersive Reader again, the host application can retrieve that user's preferences from storage, and supply them as the `-preferences` string SDK option when launching the Immersive Reader application, so that the user's preferences are restored.
+
+This functionality may be used as an alternate means to storing **user preferences** in the case where using cookies is not desirable or feasible.
+
+> [!CAUTION]
+> **IMPORTANT** Do not attempt to programmatically change the values of the `-preferences` string sent to and from the Immersive Reader application as this may cause unexpected behavior resulting in a degraded user experience for your customers. Host applications should never assign a custom value to or manipulate the `-preferences` string. When using the `-preferences` string option, use only the exact value that was returned from the `-onPreferencesChanged` callback option.
+
+## How to enable storing user preferences
+
+The Immersive Reader SDK [launchAsync](./reference.md#launchasync) `options` parameter contains the `-onPreferencesChanged` callback. This function is called anytime the user changes their preferences. The `value` parameter contains a string that represents the user's current preferences. The host application then stores this string for that user.
+
+```typescript
+const options = {
+ onPreferencesChanged: (value: string) => {
+ // Store user preferences here
+ }
+};
+
+ImmersiveReader.launchAsync(YOUR_TOKEN, YOUR_SUBDOMAIN, YOUR_DATA, options);
+```
+
+## How to load user preferences into the Immersive Reader
+
+Pass in the user's preferences to the Immersive Reader using the `-preferences` option. A trivial example to store and load the user's preferences is as follows:
+
+```typescript
+const storedUserPreferences = localStorage.getItem("USER_PREFERENCES");
+let userPreferences = storedUserPreferences; // null if no preferences have been stored yet
+const options = {
+ preferences: userPreferences,
+ onPreferencesChanged: (value: string) => {
+ userPreferences = value;
+ localStorage.setItem("USER_PREFERENCES", userPreferences);
+ }
+};
+```
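+
+If you want the preferences to roam across devices, store the string in your own backend instead of `localStorage`. The following sketch assumes a hypothetical `/api/user-preferences` endpoint that reads and writes the preferences string for the signed-in user:
+
+```typescript
+async function launchWithRoamingPreferences() {
+    // Hypothetical endpoint: your backend persists the preferences string per user.
+    const response = await fetch('/api/user-preferences');
+    const storedPreferences = response.ok ? await response.text() : null;
+
+    const options = {
+        preferences: storedPreferences,
+        onPreferencesChanged: (value: string) => {
+            // Send back exactly the string that the Immersive Reader returned; never modify it.
+            fetch('/api/user-preferences', { method: 'PUT', body: value });
+        }
+    };
+
+    await ImmersiveReader.launchAsync(YOUR_TOKEN, YOUR_SUBDOMAIN, YOUR_DATA, options);
+}
+```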
+
+## Next steps
+
+* Explore the [Immersive Reader SDK Reference](./reference.md)
ai-services Display Math https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/immersive-reader/how-to/display-math.md
+
+ Title: "Display math in the Immersive Reader"
+
+description: This article will show you how to display math in the Immersive Reader.
++++++ Last updated : 01/14/2020++++
+# How to display math in the Immersive Reader
+
+The Immersive Reader can display math when provided in the form of Mathematical Markup Language ([MathML](https://developer.mozilla.org/docs/Web/MathML)).
+The MIME type can be set through the Immersive Reader [chunk](../reference.md#chunk). See [supported MIME types](../reference.md#supported-mime-types) for more information.
+
+## Send Math to the Immersive Reader
+To send math to the Immersive Reader, supply a chunk containing MathML and set the MIME type to ```application/mathml+xml```.
+
+For example, if your content were the following:
+
+```html
+<div id='ir-content'>
+ <math xmlns='http://www.w3.org/1998/Math/MathML'>
+ <mfrac>
+ <mrow>
+ <msup>
+ <mi>x</mi>
+ <mn>2</mn>
+ </msup>
+ <mo>+</mo>
+ <mn>3</mn>
+ <mi>x</mi>
+ <mo>+</mo>
+ <mn>2</mn>
+ </mrow>
+ <mrow>
+ <mi>x</mi>
+ <mo>−</mo>
+ <mn>3</mn>
+ </mrow>
+ </mfrac>
+ <mo>=</mo>
+ <mn>4</mn>
+ </math>
+</div>
+```
+
+Then you could display your content by using the following JavaScript.
+
+```javascript
+const data = {
+ Title: 'My Math',
+ chunks: [{
+ content: document.getElementById('ir-content').innerHTML.trim(),
+ mimeType: 'application/mathml+xml'
+ }]
+};
+
+ImmersiveReader.launchAsync(YOUR_TOKEN, YOUR_SUBDOMAIN, data, YOUR_OPTIONS);
+```
+
+When you launch the Immersive Reader, you should see:
+
+![Math in Immersive Reader](../media/how-tos/1-math.png)
+
+## Next steps
+
+* Explore the [Immersive Reader SDK](https://github.com/microsoft/immersive-reader-sdk) and the [Immersive Reader SDK Reference](../reference.md)
ai-services Set Cookie Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/immersive-reader/how-to/set-cookie-policy.md
+
+ Title: "Set Immersive Reader Cookie Policy"
+
+description: This article will show you how to set the cookie policy for the Immersive Reader.
+++++++ Last updated : 01/06/2020++++
+# How to set the cookie policy for the Immersive Reader
+
+The Immersive Reader disables cookie usage by default. If you enable cookie usage, the Immersive Reader may use cookies to maintain user preferences and track feature usage. If you enable cookie usage in the Immersive Reader, consider the requirements of the EU Cookie Compliance Policy: it is the responsibility of the host application to obtain any necessary user consent in accordance with that policy.
+
+The cookie policy can be set through the Immersive Reader [options](../reference.md#options).
+
+## Enable Cookie Usage
+
+```javascript
+var options = {
+ 'cookiePolicy': ImmersiveReader.CookiePolicy.Enable
+};
+
+ImmersiveReader.launchAsync(YOUR_TOKEN, YOUR_SUBDOMAIN, YOUR_DATA, options);
+```
+
+## Disable Cookie Usage
+
+```javascript
+var options = {
+ 'cookiePolicy': ImmersiveReader.CookiePolicy.Disable
+};
+
+ImmersiveReader.launchAsync(YOUR_TOKEN, YOUR_SUBDOMAIN, YOUR_DATA, options);
+```
+
+## Next steps
+
+* View the [Node.js quickstart](../quickstarts/client-libraries.md?pivots=programming-language-nodejs) to see what else you can do with the Immersive Reader SDK using Node.js
+* View the [Android tutorial](../how-to-launch-immersive-reader.md) to see what else you can do with the Immersive Reader SDK using Java or Kotlin for Android
+* View the [iOS tutorial](../how-to-launch-immersive-reader.md) to see what else you can do with the Immersive Reader SDK using Swift for iOS
+* View the [Python tutorial](../how-to-launch-immersive-reader.md) to see what else you can do with the Immersive Reader SDK using Python
+* Explore the [Immersive Reader SDK](https://github.com/microsoft/immersive-reader-sdk) and the [Immersive Reader SDK Reference](../reference.md)
ai-services Language Support https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/immersive-reader/language-support.md
+
+ Title: Language support - Immersive Reader
+
+description: Learn more about the human languages that are available with Immersive Reader.
++++++ Last updated : 11/15/2021+++
+# Language support for Immersive Reader
+
+This article lists supported human languages for Immersive Reader features.
++
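+
+The tags in the tables below are the values that you pass to the Immersive Reader SDK, for example as a [chunk](reference.md#chunk)'s `lang` property or as the translation language in the SDK [options](reference.md#options). A minimal sketch (token and subdomain placeholders assumed):
+
+```javascript
+const data = {
+    chunks: [{
+        content: 'Bonjour le monde',
+        lang: 'fr-FR'    // a tag from the tables below
+    }]
+};
+
+const options = {
+    translationOptions: {
+        language: 'es-MX'    // translate into Spanish (Mexico)
+    }
+};
+
+ImmersiveReader.launchAsync(YOUR_TOKEN, YOUR_SUBDOMAIN, data, options);
+```
+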
+## Text to speech
+
+| Language | Tag |
+|-|--|
+| Afrikaans | af |
+| Afrikaans (South Africa) | af-ZA |
+| Albanian | sq |
+| Albanian (Albania) | sq-AL |
+| Amharic | am |
+| Amharic (Ethiopia) | am-ET |
+| Arabic (Egyptian) | ar-EG |
+| Arabic (Lebanon) | ar-LB |
+| Arabic (Oman) | ar-OM |
+| Arabic (Saudi Arabia) | ar-SA |
+| Azerbaijani | az |
+| Azerbaijani (Azerbaijan) | az-AZ |
+| Bangla | bn |
+| Bangla (Bangladesh) | bn-BD |
+| Bangla (India) | bn-IN |
+| Bosnian | bs |
+| Bosnian (Bosnia & Herzegovina) | bs-BA |
+| Bulgarian | bg |
+| Bulgarian (Bulgaria) | bg-BG |
+| Burmese | my |
+| Burmese (Myanmar) | my-MM |
+| Catalan | ca |
+| Catalan (Catalan) | ca-ES |
+| Chinese | zh |
+| Chinese (China) | zh-CN |
+| Chinese (Hong Kong SAR) | zh-HK |
+| Chinese (Macao SAR) | zh-MO |
+| Chinese (Singapore) | zh-SG |
+| Chinese (Taiwan) | zh-TW |
+| Chinese Simplified | zh-Hans |
+| Chinese Simplified (China) | zh-Hans-CN |
+| Chinese Simplified (Singapore) | zh-Hans-SG |
+| Chinese Traditional | zh-Hant |
+| Chinese Traditional (China) | zh-Hant-CN |
+| Chinese Traditional (Hong Kong SAR) | zh-Hant-HK |
+| Chinese Traditional (Macao SAR) | zh-Hant-MO |
+| Chinese Traditional (Taiwan) | zh-Hant-TW |
+| Chinese (Literary) | lzh |
+| Chinese (Literary, China) | lzh-CN |
+| Croatian | hr |
+| Croatian (Croatia) | hr-HR |
+| Czech | cs |
+| Czech (Czech Republic) | cs-CZ |
+| Danish | da |
+| Danish (Denmark) | da-DK |
+| Dutch | nl |
+| Dutch (Belgium) | nl-BE |
+| Dutch (Netherlands) | nl-NL |
+| English | en |
+| English (Australia) | en-AU |
+| English (Canada) | en-CA |
+| English (Hong Kong SAR) | en-HK |
+| English (India) | en-IN |
+| English (Ireland) | en-IE |
+| English (Kenya) | en-KE |
+| English (New Zealand) | en-NZ |
+| English (Nigeria) | en-NG |
+| English (Philippines) | en-PH |
+| English (Singapore) | en-SG |
+| English (South Africa) | en-ZA |
+| English (Tanzania) | en-TZ |
+| English (United Kingdom) | en-GB |
+| English (United States) | en-US |
+| Estonian | et-EE |
+| Filipino | fil |
+| Filipino (Philippines) | fil-PH |
+| Finnish | fi |
+| Finnish (Finland) | fi-FI |
+| French | fr |
+| French (Belgium) | fr-BE |
+| French (Canada) | fr-CA |
+| French (France) | fr-FR |
+| French (Switzerland) | fr-CH |
+| Galician | gl |
+| Galician (Spain) | gl-ES |
+| Georgian | ka |
+| Georgian (Georgia) | ka-GE |
+| German | de |
+| German (Austria) | de-AT |
+| German (Germany) | de-DE |
+| German (Switzerland)| de-CH |
+| Greek | el |
+| Greek (Greece) | el-GR |
+| Gujarati | gu |
+| Gujarati (India) | gu-IN |
+| Hebrew | he |
+| Hebrew (Israel) | he-IL |
+| Hindi | hi |
+| Hindi (India) | hi-IN |
+| Hungarian | hu |
+| Hungarian (Hungary) | hu-HU |
+| Icelandic | is |
+| Icelandic (Iceland) | is-IS |
+| Indonesian | id |
+| Indonesian (Indonesia) | id-ID |
+| Irish | ga |
+| Irish (Ireland) | ga-IE |
+| Italian | it |
+| Italian (Italy) | it-IT |
+| Japanese | ja |
+| Japanese (Japan) | ja-JP |
+| Javanese | jv |
+| Javanese (Indonesia) | jv-ID |
+| Kannada | kn |
+| Kannada (India) | kn-IN |
+| Kazakh | kk |
+| Kazakh (Kazakhstan) | kk-KZ |
+| Khmer | km |
+| Khmer (Cambodia) | km-KH |
+| Korean | ko |
+| Korean (Korea) | ko-KR |
+| Lao | lo |
+| Lao (Laos) | lo-LA |
+| Latvian | lv |
+| Latvian (Latvia) | lv-LV |
+| Lithuanian | lt |
+| Lithuanian (Lithuania) | lt-LT |
+| Macedonian | mk |
+| Macedonian (North Macedonia) | mk-MK |
+| Malay | ms |
+| Malay (Malaysia) | ms-MY |
+| Malayalam | ml |
+| Malayalam (India) | ml-IN |
+| Maltese | mt |
+| Maltese (Malta) | mt-MT |
+| Marathi | mr |
+| Marathi (India) | mr-IN |
+| Mongolian | mn |
+| Mongolian (Mongolia) | mn-MN |
+| Nepali | ne |
+| Nepali (Nepal) | ne-NP |
+| Norwegian Bokmal| nb |
+| Norwegian Bokmal (Norway) | nb-NO |
+| Pashto | ps |
+| Pashto (Afghanistan) | ps-AF |
+| Persian | fa |
+| Persian (Iran) | fa-IR |
+| Polish | pl |
+| Polish (Poland) | pl-PL |
+| Portuguese | pt |
+| Portuguese (Brazil) | pt-BR |
+| Portuguese (Portugal) | pt-PT |
+| Romanian | ro |
+| Romanian (Romania) | ro-RO |
+| Russian | ru |
+| Russian (Russia) | ru-RU |
+| Serbian (Cyrillic) | sr-Cyrl |
+| Serbian (Cyrillic, Serbia) | sr-Cyrl-RS |
+| Sinhala | si |
+| Sinhala (Sri Lanka) | si-LK |
+| Slovak | sk |
+| Slovak (Slovakia) | sk-SK |
+| Slovenian | sl |
+| Slovenian (Slovenia) | sl-SI |
+| Somali | so |
+| Somali (Somalia) | so-SO |
+| Spanish | es |
+| Spanish (Argentina) | es-AR |
+| Spanish (Colombia) | es-CO |
+| Spanish (Latin America) | es-419 |
+| Spanish (Mexico) | es-MX |
+| Spanish (Spain) | es-ES |
+| Spanish (United States) | es-US |
+| Sundanese | su |
+| Sundanese (Indonesia) | su-ID |
+| Swahili | sw |
+| Swahili (Kenya) | sw-KE |
+| Swedish | sv |
+| Swedish (Sweden) | sv-SE |
+| Tamil | ta |
+| Tamil (India) | ta-IN |
+| Tamil (Malaysia) | ta-MY |
+| Telugu | te |
+| Telugu (India) | te-IN |
+| Thai | th |
+| Thai (Thailand) | th-TH |
+| Turkish | tr |
+| Turkish (Türkiye) | tr-TR |
+| Ukrainian | uk |
+| Ukrainian (Ukraine) | uk-UA |
+| Urdu | ur |
+| Urdu (India) | ur-IN |
+| Uzbek | uz |
+| Uzbek (Uzbekistan) | uz-UZ |
+| Vietnamese | vi |
+| Vietnamese (Vietnam) | vi-VN |
+| Welsh | cy |
+| Welsh (United Kingdom) | cy-GB |
+| Zulu | zu |
+| Zulu (South Africa) | zu-ZA |
+
+## Translation
+
+| Language | Tag |
+|-|--|
+| Afrikaans | af |
+| Albanian | sq |
+| Amharic | am |
+| Arabic | ar |
+| Arabic (Egyptian) | ar-EG |
+| Arabic (Saudi Arabia) | ar-SA |
+| Armenian | hy |
+| Assamese | as |
+| Azerbaijani | az |
+| Bangla | bn |
+| Bashkir | ba |
+| Bosnian | bs |
+| Bulgarian | bg |
+| Bulgarian (Bulgaria) | bg-BG |
+| Burmese | my |
+| Catalan | ca |
+| Catalan (Catalan) | ca-ES |
+| Chinese | zh |
+| Chinese (China) | zh-CN |
+| Chinese (Hong Kong SAR) | zh-HK |
+| Chinese (Macao SAR) | zh-MO |
+| Chinese (Singapore) | zh-SG |
+| Chinese (Taiwan) | zh-TW |
+| Chinese Simplified | zh-Hans |
+| Chinese Simplified (China) | zh-Hans-CN |
+| Chinese Simplified (Singapore) | zh-Hans-SG |
+| Chinese Traditional | zh-Hant |
+| Chinese Traditional (China) | zh-Hant-CN |
+| Chinese Traditional (Hong Kong SAR) | zh-Hant-HK |
+| Chinese Traditional (Macao SAR) | zh-Hant-MO |
+| Chinese Traditional (Taiwan) | zh-Hant-TW |
+| Chinese (Literary) | lzh |
+| Croatian | hr |
+| Croatian (Croatia) | hr-HR |
+| Czech | cs |
+| Czech (Czech Republic) | cs-CZ |
+| Danish | da |
+| Danish (Denmark) | da-DK |
+| Dari (Afghanistan) | prs |
+| Divehi | dv |
+| Dutch | nl |
+| Dutch (Netherlands) | nl-NL |
+| English | en |
+| English (Australia) | en-AU |
+| English (Canada) | en-CA |
+| English (Hong Kong SAR) | en-HK |
+| English (India) | en-IN |
+| English (Ireland) | en-IE |
+| English (New Zealand) | en-NZ |
+| English (United Kingdom) | en-GB |
+| English (United States) | en-US |
+| Estonian | et |
+| Faroese | fo |
+| Fijian | fj |
+| Filipino | fil |
+| Finnish | fi |
+| Finnish (Finland) | fi-FI |
+| French | fr |
+| French (Canada) | fr-CA |
+| French (France) | fr-FR |
+| French (Switzerland) | fr-CH |
+| Georgian | ka |
+| German | de |
+| German (Austria) | de-AT |
+| German (Germany) | de-DE |
+| German (Switzerland)| de-CH |
+| Greek | el |
+| Greek (Greece) | el-GR |
+| Gujarati | gu |
+| Haitian (Creole) | ht |
+| Hebrew | he |
+| Hebrew (Israel) | he-IL |
+| Hindi | hi |
+| Hindi (India) | hi-IN |
+| Hmong Daw | mww |
+| Hungarian | hu |
+| Hungarian (Hungary) | hu-HU |
+| Icelandic | is |
+| Indonesian | id |
+| Indonesian (Indonesia) | id-ID |
+| Inuinnaqtun | ikt |
+| Inuktitut | iu |
+| Inuktitut (Latin) | iu-Latn |
+| Irish | ga |
+| Italian | it |
+| Italian (Italy) | it-IT |
+| Japanese | ja |
+| Japanese (Japan) | ja-JP |
+| Kannada | kn |
+| Kazakh | kk |
+| Khmer | km |
+| Kiswahili | sw |
+| Korean | ko |
+| Korean (Korea) | ko-KR |
+| Kurdish (Central) | ku |
+| Kurdish (Northern) | kmr |
+| Kurdish Central | ckb |
+| Kyrgyz | ky |
+| Lao | lo |
+| Latvian | lv |
+| Lithuanian | lt |
+| Macedonian | mk |
+| Malagasy | mg |
+| Malay | ms |
+| Malay (Malaysia) | ms-MY |
+| Malayalam | ml |
+| Maltese | mt |
+| Maori | mi |
+| Marathi | mr |
+| Mongolian (Cyrillic) | mn-Cyrl |
+| Mongolian (Traditional) | mn-Mong |
+| Nepali | ne |
+| Norwegian Bokmal| nb |
+| Norwegian Bokmal (Norway) | nb-NO |
+| Odia | or |
+| Pashto (Afghanistan) | ps |
+| Persian | fa |
+| Polish | pl |
+| Polish (Poland) | pl-PL |
+| Portuguese | pt |
+| Portuguese (Brazil) | pt-BR |
+| Portuguese (Portugal) | pt-PT |
+| Punjabi | pa |
+| Querétaro Otomi | otq |
+| Romanian | ro |
+| Romanian (Romania) | ro-RO |
+| Russian | ru |
+| Russian (Russia) | ru-RU |
+| Samoan | sm |
+| Serbian | sr |
+| Serbian (Cyrillic) | sr-Cyrl |
+| Serbian (Latin) | sr-Latn |
+| Slovak | sk |
+| Slovak (Slovakia) | sk-SK |
+| Slovenian | sl |
+| Slovenian (Slovenia) | sl-SI |
+| Somali | so |
+| Spanish | es |
+| Spanish (Latin America) | es-419 |
+| Spanish (Mexico) | es-MX |
+| Spanish (Spain) | es-ES |
+| Swedish | sv |
+| Swedish (Sweden) | sv-SE |
+| Tahitian | ty |
+| Tamil | ta |
+| Tamil (India) | ta-IN |
+| Tatar | tt |
+| Telugu | te |
+| Telugu (India) | te-IN |
+| Thai | th |
+| Thai (Thailand) | th-TH |
+| Tibetan | bo |
+| Tigrinya | ti |
+| Tongan | to |
+| Turkish | tr |
+| Turkish (Türkiye) | tr-TR |
+| Turkmen | tk |
+| Ukrainian | uk |
+| Upper Sorbian | hsb |
+| Urdu | ur |
+| Uyghur | ug |
+| Vietnamese | vi |
+| Vietnamese (Vietnam) | vi-VN |
+| Welsh | cy |
+| Yucatec Maya | yua |
+| Yue Chinese | yue |
+| Zulu | zu |
+
+## Language detection
+
+| Language | Tag |
+|-|--|
+| Arabic | ar |
+| Arabic (Egyptian) | ar-EG |
+| Arabic (Saudi Arabia) | ar-SA |
+| Basque | eu |
+| Bulgarian | bg |
+| Bulgarian (Bulgaria) | bg-BG |
+| Catalan | ca |
+| Catalan (Catalan) | ca-ES |
+| Chinese Simplified | zh-Hans |
+| Chinese Simplified (China) | zh-Hans-CN |
+| Chinese Simplified (Singapore) | zh-Hans-SG |
+| Chinese Traditional | zh-Hant-CN |
+| Chinese Traditional (Hong Kong SAR) | zh-Hant-HK |
+| Chinese Traditional (Macao SAR) | zh-Hant-MO |
+| Chinese Traditional (Taiwan) | zh-Hant-TW |
+| Croatian | hr |
+| Croatian (Croatia) | hr-HR |
+| Czech | cs |
+| Czech (Czech Republic) | cs-CZ |
+| Danish | da |
+| Danish (Denmark) | da-DK |
+| Dutch | nl |
+| Dutch (Netherlands) | nl-NL |
+| English | en |
+| English (Australia) | en-AU |
+| English (Canada) | en-CA |
+| English (Hong Kong SAR) | en-HK |
+| English (India) | en-IN |
+| English (Ireland) | en-IE |
+| English (New Zealand) | en-NZ |
+| English (United Kingdom) | en-GB |
+| English (United States) | en-US |
+| Estonian | et |
+| Finnish | fi |
+| Finnish (Finland) | fi-FI |
+| French | fr |
+| French (Canada) | fr-CA |
+| French (France) | fr-FR |
+| French (Switzerland) | fr-CH |
+| Galician | gl |
+| German | de |
+| German (Austria) | de-AT |
+| German (Germany) | de-DE |
+| German (Switzerland)| de-CH |
+| Greek | el |
+| Greek (Greece) | el-GR |
+| Hebrew | he |
+| Hebrew (Israel) | he-IL |
+| Hindi | hi |
+| Hindi (India) | hi-IN |
+| Hungarian | hu |
+| Hungarian (Hungary) | hu-HU |
+| Icelandic | is |
+| Indonesian | id |
+| Indonesian (Indonesia) | id-ID |
+| Italian | it |
+| Italian (Italy) | it-IT |
+| Japanese | ja |
+| Japanese (Japan) | ja-JP |
+| Kazakh | kk |
+| Korean | ko |
+| Korean (Korea) | ko-KR |
+| Latvian | lv |
+| Lithuanian | lt |
+| Malay | ms |
+| Malay (Malaysia) | ms-MY |
+| Norwegian Bokmal| nb |
+| Norwegian Bokmal (Norway) | nb-NO |
+| Norwegian Nynorsk | nn |
+| Polish | pl |
+| Polish (Poland) | pl-PL |
+| Portuguese | pt |
+| Portuguese (Brazil) | pt-BR |
+| Portuguese (Portugal) | pt-PT |
+| Romanian | ro |
+| Romanian (Romania) | ro-RO |
+| Russian | ru |
+| Russian (Russia) | ru-RU |
+| Serbian (Cyrillic) | sr-Cyrl |
+| Serbian (Latin) | sr-Latn |
+| Slovak | sk |
+| Slovak (Slovakia) | sk-SK |
+| Slovenian | sl |
+| Slovenian (Slovenia) | sl-SI |
+| Spanish | es |
+| Spanish (Latin America) | es-419 |
+| Spanish (Mexico) | es-MX |
+| Spanish (Spain) | es-ES |
+| Swedish | sv |
+| Swedish (Sweden) | sv-SE |
+| Tamil | ta |
+| Tamil (India) | ta-IN |
+| Telugu | te |
+| Telugu (India) | te-IN |
+| Thai | th |
+| Thai (Thailand) | th-TH |
+| Turkish | tr |
+| Turkish (Türkiye) | tr-TR |
+| Ukrainian | uk |
+| Vietnamese | vi |
+| Vietnamese (Vietnam) | vi-VN |
+| Welsh | cy |
+
+## Syllabification
+
+| Language | Tag |
+|-|--|
+| Basque | eu |
+| Bulgarian | bg |
+| Bulgarian (Bulgaria) | bg-BG |
+| Catalan | ca |
+| Catalan (Catalan) | ca-ES |
+| Croatian | hr |
+| Croatian (Croatia) | hr-HR |
+| Czech | cs |
+| Czech (Czech Republic) | cs-CZ |
+| Danish | da |
+| Danish (Denmark) | da-DK |
+| Dutch | nl |
+| Dutch (Netherlands) | nl-NL |
+| English | en |
+| English (Australia) | en-AU |
+| English (Canada) | en-CA |
+| English (Hong Kong SAR) | en-HK |
+| English (India) | en-IN |
+| English (Ireland) | en-IE |
+| English (New Zealand) | en-NZ |
+| English (United Kingdom) | en-GB |
+| English (United States) | en-US |
+| Estonian | et |
+| Finnish | fi |
+| Finnish (Finland) | fi-FI |
+| French | fr |
+| French (Canada) | fr-CA |
+| French (France) | fr-FR |
+| French (Switzerland) | fr-CH |
+| Galician | gl |
+| German | de |
+| German (Austria) | de-AT |
+| German (Germany) | de-DE |
+| German (Switzerland)| de-CH |
+| Greek | el |
+| Greek (Greece) | el-GR |
+| Hungarian | hu |
+| Hungarian (Hungary) | hu-HU |
+| Icelandic | is |
+| Italian | it |
+| Italian (Italy) | it-IT |
+| Kazakh | kk |
+| Latvian | lv |
+| Lithuanian | lt |
+| Norwegian Bokmal| nb |
+| Norwegian Bokmal (Norway) | nb-NO |
+| Norwegian Nynorsk | nn |
+| Polish | pl |
+| Polish (Poland) | pl-PL |
+| Portuguese | pt |
+| Portuguese (Brazil) | pt-BR |
+| Portuguese (Portugal) | pt-PT |
+| Romanian | ro |
+| Romanian (Romania) | ro-RO |
+| Russian | ru |
+| Russian (Russia) | ru-RU |
+| Serbian | sr |
+| Serbian (Cyrillic) | sr-Cyrl |
+| Serbian (Latin) | sr-Latn |
+| Slovak | sk |
+| Slovak (Slovakia) | sk-SK |
+| Slovenian | sl |
+| Slovenian (Slovenia) | sl-SI |
+| Spanish | es |
+| Spanish (Latin America) | es-419 |
+| Spanish (Mexico) | es-MX |
+| Spanish (Spain) | es-ES |
+| Swedish | sv |
+| Swedish (Sweden) | sv-SE |
+| Turkish | tr |
+| Turkish (Türkiye) | tr-TR |
+| Ukrainian | uk |
+| Welsh | cy |
+
+## Picture dictionary
+
+| Language | Tag |
+|-|--|
+| Danish | da |
+| Danish (Denmark) | da-DK |
+| Dutch | nl |
+| Dutch (Netherlands) | nl-NL |
+| English | en |
+| English (Australia) | en-AU |
+| English (Canada) | en-CA |
+| English (Hong Kong SAR) | en-HK |
+| English (India) | en-IN |
+| English (Ireland) | en-IE |
+| English (New Zealand) | en-NZ |
+| English (United Kingdom) | en-GB |
+| English (United States) | en-US |
+| Finnish | fi |
+| Finnish (Finland) | fi-FI |
+| French | fr |
+| French (Canada) | fr-CA |
+| French (France) | fr-FR |
+| French (Switzerland) | fr-CH |
+| German | de |
+| German (Austria) | de-AT |
+| German (Germany) | de-DE |
+| German (Switzerland)| de-CH |
+| Italian | it |
+| Italian (Italy) | it-IT |
+| Japanese | ja |
+| Japanese (Japan) | ja-JP |
+| Korean | ko |
+| Korean (Korea) | ko-KR |
+| Norwegian Bokmal| nb |
+| Norwegian Bokmal (Norway) | nb-NO |
+| Polish | pl |
+| Polish (Poland) | pl-PL |
+| Portuguese | pt |
+| Portuguese (Brazil) | pt-BR |
+| Russian | ru |
+| Russian (Russia) | ru-RU |
+| Spanish | es |
+| Spanish (Latin America) | es-419 |
+| Spanish (Mexico) | es-MX |
+| Spanish (Spain) | es-ES |
+| Swedish | sv |
+| Swedish (Sweden) | sv-SE |
+
+## Parts of speech
+
+| Language | Tag |
+|-|--|
+| Arabic (Egyptian) | ar-EG |
+| Arabic (Saudi Arabia) | ar-SA |
+| Danish | da |
+| Danish (Denmark) | da-DK |
+| Dutch | nl |
+| Dutch (Netherlands) | nl-NL |
+| English | en |
+| English (Australia) | en-AU |
+| English (Canada) | en-CA |
+| English (Hong Kong SAR) | en-HK |
+| English (India) | en-IN |
+| English (Ireland) | en-IE |
+| English (New Zealand) | en-NZ |
+| English (United Kingdom) | en-GB |
+| English (United States) | en-US |
+| Finnish | fi |
+| Finnish (Finland) | fi-FI |
+| French | fr |
+| French (Canada) | fr-CA |
+| French (France) | fr-FR |
+| French (Switzerland) | fr-CH |
+| German | de |
+| German (Austria) | de-AT |
+| German (Germany) | de-DE |
+| German (Switzerland)| de-CH |
+| Italian | it |
+| Italian (Italy) | it-IT |
+| Japanese | ja |
+| Japanese (Japan) | ja-JP |
+| Korean | ko |
+| Korean (Korea) | ko-KR |
+| Norwegian Bokmal| nb |
+| Norwegian Bokmal (Norway) | nb-NO |
+| Norwegian Nynorsk | nn |
+| Polish | pl |
+| Polish (Poland) | pl-PL |
+| Portuguese | pt |
+| Portuguese (Brazil) | pt-BR |
+| Russian | ru |
+| Russian (Russia) | ru-RU |
+| Spanish | es |
+| Spanish (Latin America) | es-419 |
+| Spanish (Mexico) | es-MX |
+| Spanish (Spain) | es-ES |
+| Swedish | sv |
+| Swedish (Sweden) | sv-SE |
ai-services Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/immersive-reader/overview.md
+
+ Title: What is Azure AI Immersive Reader?
+
+description: Immersive Reader is a tool that is designed to help people with learning differences or help new readers and language learners with reading comprehension.
+++++++ Last updated : 11/15/2021++
+keywords: readers, language learners, display pictures, improve reading, read content, translate
+#Customer intent: As a developer, I want to learn more about the Immersive Reader, which is a new offering in Azure AI services, so that I can embed this package of content into a document to accommodate users with reading differences.
++
+# What is Azure AI Immersive Reader?
++
+[Immersive Reader](https://www.onenote.com/learningtools) is part of [Azure AI services](../../ai-services/what-are-ai-services.md), and is an inclusively designed tool that implements proven techniques to improve reading comprehension for new readers, language learners, and people with learning differences such as dyslexia. With the Immersive Reader client library, you can use the same technology used in Microsoft Word and Microsoft OneNote to improve your web applications.
+
+This documentation contains the following types of articles:
+
+* **[Quickstarts](quickstarts/client-libraries.md)** are getting-started instructions to guide you through making requests to the service.
+* **[How-to guides](how-to-create-immersive-reader.md)** contain instructions for using the service in more specific or customized ways.
+
+## Use Immersive Reader to improve reading accessibility
+
+Immersive Reader is designed to make reading easier and more accessible for everyone. Let's take a look at a few of Immersive Reader's core features.
+
+### Isolate content for improved readability
+
+Immersive Reader isolates content to improve readability.
+
+ ![Isolate content for improved readability with Immersive Reader](./media/immersive-reader.png)
+
+### Display pictures for common words
+
+For commonly used terms, the Immersive Reader will display a picture.
+
+ ![Picture Dictionary with Immersive Reader](./media/picture-dictionary.png)
+
+### Highlight parts of speech
+
+Immersive Reader can be used to help learners understand parts of speech and grammar by highlighting verbs, nouns, pronouns, and more.
+
+ ![Show parts of speech with Immersive Reader](./media/parts-of-speech.png)
+
+### Read content aloud
+
+Speech synthesis (or text-to-speech) is baked into the Immersive Reader service, which lets your readers select text to be read aloud.
+
+ ![Read text aloud with Immersive Reader](./media/read-aloud.png)
+
+### Translate content in real-time
+
+Immersive Reader can translate text into many languages in real-time. This is helpful to improve comprehension for readers learning a new language.
+
+ ![Translate text with Immersive Reader](./media/translation.png)
+
+### Split words into syllables
+
+With Immersive Reader you can break words into syllables to improve readability or to sound out new words.
+
+ ![Break words into syllables with Immersive Reader](./media/syllabification.png)
+
+## How does Immersive Reader work?
+
+Immersive Reader is a standalone web application. When invoked by using the Immersive Reader client library, it's displayed on top of your existing web application in an `iframe`. When your web application calls the Immersive Reader service, you specify the content to show in the reader. The Immersive Reader client library handles the creation and styling of the `iframe` and communication with the Immersive Reader backend service. The Immersive Reader service processes the content for parts of speech, text to speech, translation, and more.
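+
+For example, a minimal sketch of launching the reader from JavaScript (the token and subdomain are placeholders that come from your Immersive Reader resource, as shown in the quickstart):
+
+```javascript
+const content = {
+    title: 'Immersive Reader demo',
+    chunks: [{ content: 'Hello, world!' }]
+};
+
+// YOUR_TOKEN and YOUR_SUBDOMAIN are placeholders for the Azure AD token and
+// the custom subdomain of your Immersive Reader resource.
+ImmersiveReader.launchAsync(YOUR_TOKEN, YOUR_SUBDOMAIN, content)
+    .then(() => console.log('Immersive Reader launched'));
+```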
+
+## Get started with Immersive Reader
+
+The Immersive Reader client library is available in C#, JavaScript, Java (Android), Kotlin (Android), and Swift (iOS). Get started with:
+
+* [Quickstart: Use the Immersive Reader client library](quickstarts/client-libraries.md)
ai-services Client Libraries https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/immersive-reader/quickstarts/client-libraries.md
+
+ Title: "Quickstart: Immersive Reader client library"
+
+description: "The Immersive Reader client library makes it easy to integrate the Immersive Reader service into your web applications to improve reading comprehension. In this quickstart, you'll learn how to use Immersive Reader for text selection, recognizing parts of speech, reading selected text out loud, translation, and more."
+++
+zone_pivot_groups: programming-languages-set-twenty
+++ Last updated : 03/08/2021++
+keywords: display pictures, parts of speech, read selected text, translate words, reading comprehension
++
+# Quickstart: Get started with Immersive Reader
+++++++++++++++
ai-services Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/immersive-reader/reference.md
+
+ Title: "Immersive Reader SDK Reference"
+
+description: The Immersive Reader SDK contains a JavaScript library that allows you to integrate the Immersive Reader into your application.
++++++++ Last updated : 11/15/2021+++
+# Immersive Reader JavaScript SDK Reference (v1.2)
+
+The Immersive Reader SDK contains a JavaScript library that allows you to integrate the Immersive Reader into your application.
+
+You can use `npm`, `yarn`, or an `HTML` `<script>` element to include the latest stable build of the library in your web application:
+
+```html
+<script type='text/javascript' src='https://ircdname.azureedge.net/immersivereadersdk/immersive-reader-sdk.1.2.0.js'></script>
+```
+
+```bash
+npm install @microsoft/immersive-reader-sdk
+```
+
+```bash
+yarn add @microsoft/immersive-reader-sdk
+```
+
+## Functions
+
+The SDK exposes the following functions:
+
+- [`ImmersiveReader.launchAsync(token, subdomain, content, options)`](#launchasync)
+
+- [`ImmersiveReader.close()`](#close)
+
+- [`ImmersiveReader.renderButtons(options)`](#renderbuttons)
+
+<br>
+
+## launchAsync
+
+Launches the Immersive Reader within an `HTML` `iframe` element in your web application. The size of your content is limited to a maximum of 50 MB.
+
+```typescript
+launchAsync(token: string, subdomain: string, content: Content, options?: Options): Promise<LaunchResponse>;
+```
+
+#### launchAsync Parameters
+
+| Name | Type | Description |
+| - | - | |
+| `token` | string | The Azure AD authentication token. For more information, see [How-To Create an Immersive Reader Resource](./how-to-create-immersive-reader.md). |
+| `subdomain` | string | The custom subdomain of your Immersive Reader resource in Azure. For more information, see [How-To Create an Immersive Reader Resource](./how-to-create-immersive-reader.md). |
+| `content` | [Content](#content) | An object containing the content to be shown in the Immersive Reader. |
+| `options` | [Options](#options) | Options for configuring certain behaviors of the Immersive Reader. Optional. |
+
+#### Returns
+
+Returns a `Promise<LaunchResponse>`, which resolves when the Immersive Reader is loaded. The `Promise` resolves to a [`LaunchResponse`](#launchresponse) object.
+
+#### Exceptions
+
+The returned `Promise` will be rejected with an [`Error`](#error) object if the Immersive Reader fails to load. For more information, see the [error codes](#error-codes).
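+
+For example, a sketch of handling a failed launch by inspecting the error code (placeholders assumed for the token, subdomain, content, and options):
+
+```typescript
+ImmersiveReader.launchAsync(YOUR_TOKEN, YOUR_SUBDOMAIN, content, options)
+    .then((launchResponse) => {
+        console.log(`Launched Immersive Reader session ${launchResponse.sessionId}`);
+    })
+    .catch((error) => {
+        // error has the shape of the Error object described later in this reference.
+        if (error.code === 'TokenExpired') {
+            // Acquire a new Azure AD token and launch again.
+        } else {
+            console.error(`Immersive Reader failed to load: ${error.code}: ${error.message}`);
+        }
+    });
+```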
+
+<br>
+
+## close
+
+Closes the Immersive Reader.
+
+An example use case for this function is if the exit button is hidden by setting ```hideExitButton: true``` in [options](#options). Then, a different button (for example, a mobile header's back arrow) can call this ```close``` function when it's clicked.
+
+```typescript
+close(): void;
+```
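+
+For example, a sketch that hides the built-in exit button and closes the reader from a custom button in the host application (the `custom-back-button` element is an assumption):
+
+```javascript
+const options = { hideExitButton: true };
+
+ImmersiveReader.launchAsync(YOUR_TOKEN, YOUR_SUBDOMAIN, YOUR_DATA, options);
+
+// A hypothetical back button in the host page closes the Immersive Reader instead.
+document.getElementById('custom-back-button').addEventListener('click', () => {
+    ImmersiveReader.close();
+});
+```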
+
+<br>
+
+## Immersive Reader Launch Button
+
+The SDK provides default styling for the button for launching the Immersive Reader. Use the `immersive-reader-button` class attribute to enable this styling. For more information, see [How-To Customize the Immersive Reader button](./how-to-customize-launch-button.md).
+
+```html
+<div class='immersive-reader-button'></div>
+```
+
+#### Optional attributes
+
+Use the following attributes to configure the look and feel of the button.
+
+| Attribute | Description |
+| | -- |
+| `data-button-style` | Sets the style of the button. Can be `icon`, `text`, or `iconAndText`. Defaults to `icon`. |
+| `data-locale` | Sets the locale. For example, `en-US` or `fr-FR`. Defaults to English `en`. |
+| `data-icon-px-size` | Sets the size of the icon in pixels. Defaults to 20px. |
+
+<br>
+
+## renderButtons
+
+The ```renderButtons``` function isn't necessary if you're using the [How-To Customize the Immersive Reader button](./how-to-customize-launch-button.md) guidance.
+
+This function styles and updates the document's Immersive Reader button elements. If ```options.elements``` is provided, then the buttons will be rendered within each element provided in ```options.elements```. Using the ```options.elements``` parameter is useful when you have multiple sections in your document on which to launch the Immersive Reader, and want a simplified way to render multiple buttons with the same styling, or want to render the buttons with a simple and consistent design pattern. To use this function with the [renderButtons options](#renderbuttons-options) parameter, call ```ImmersiveReader.renderButtons(options: RenderButtonsOptions);``` on page load as demonstrated in the below code snippet. Otherwise, the buttons will be rendered within the document's elements that have the class ```immersive-reader-button``` as shown in [How-To Customize the Immersive Reader button](./how-to-customize-launch-button.md).
+
+```typescript
+// This snippet assumes there are two empty div elements in
+// the page HTML, button1 and button2.
+const btn1: HTMLDivElement = document.getElementById('button1');
+const btn2: HTMLDivElement = document.getElementById('button2');
+const btns: HTMLDivElement[] = [btn1, btn2];
+ImmersiveReader.renderButtons({elements: btns});
+```
+
+See the above [Optional Attributes](#optional-attributes) for more rendering options. To use these options, add any of the option attributes to each ```HTMLDivElement``` in your page HTML.
+
+```typescript
+renderButtons(options?: RenderButtonsOptions): void;
+```
+
+#### renderButtons Parameters
+
+| Name | Type | Description |
+| - | - | |
+| `options` | [renderButtons options](#renderbuttons-options) | Options for configuring certain behaviors of the renderButtons function. Optional. |
+
+### renderButtons Options
+
+Options for rendering the Immersive Reader buttons.
+
+```typescript
+{
+ elements: HTMLDivElement[];
+}
+```
+
+#### renderButtons Options Parameters
+
+| Setting | Type | Description |
+| - | - | -- |
+| elements | HTMLDivElement[] | Elements to render the Immersive Reader buttons in. |
+
+##### `elements`
+```Parameters
+Type: HTMLDivElement[]
+Required: false
+```
+
+<br>
+
+## LaunchResponse
+
+Contains the response from the call to `ImmersiveReader.launchAsync`. A reference to the `HTML` `iframe` element that contains the Immersive Reader can be accessed via `container.firstChild`.
+
+```typescript
+{
+ container: HTMLDivElement;
+ sessionId: string;
+ charactersProcessed: number;
+}
+```
+
+#### LaunchResponse Parameters
+
+| Setting | Type | Description |
+| - | - | -- |
+| container | HTMLDivElement | HTML element that contains the Immersive Reader `iframe` element. |
+| sessionId | String | Globally unique identifier for this session, used for debugging. |
+| charactersProcessed | number | Total number of characters processed |
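+
+For example, a sketch that reads the `iframe` element and session details after launch:
+
+```javascript
+ImmersiveReader.launchAsync(YOUR_TOKEN, YOUR_SUBDOMAIN, content, options)
+    .then((launchResponse) => {
+        // The iframe hosting the Immersive Reader, accessed via container.firstChild.
+        const iframe = launchResponse.container.firstChild;
+        console.log(`Session ${launchResponse.sessionId} processed ${launchResponse.charactersProcessed} characters`);
+        console.log(iframe);
+    });
+```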
+
+## Error
+
+Contains information about an error.
+
+```typescript
+{
+ code: string;
+ message: string;
+}
+```
+
+#### Error Parameters
+
+| Setting | Type | Description |
+| - | - | -- |
+| code | String | One of a set of error codes. For more information, see [Error codes](#error-codes). |
+| message | String | Human-readable representation of the error. |
+
+#### Error codes
+
+| Code | Description |
+| - | -- |
+| BadArgument | Supplied argument is invalid, see `message` parameter of the [Error](#error). |
+| Timeout | The Immersive Reader failed to load within the specified timeout. |
+| TokenExpired | The supplied token is expired. |
+| Throttled | The call rate limit has been exceeded. |
+
+<br>
+
+## Types
+
+### Content
+
+Contains the content to be shown in the Immersive Reader.
+
+```typescript
+{
+ title?: string;
+ chunks: Chunk[];
+}
+```
+
+#### Content Parameters
+
+| Name | Type | Description |
+| - | - | |
+| title | String | Title text shown at the top of the Immersive Reader (optional) |
+| chunks | [Chunk[]](#chunk) | Array of chunks |
+
+##### `title`
+```Parameters
+Type: String
+Required: false
+Default value: "Immersive Reader"
+```
+
+##### `chunks`
+```Parameters
+Type: Chunk[]
+Required: true
+Default value: null
+```
+
+<br>
+
+### Chunk
+
+A single chunk of data, which will be passed into the Content of the Immersive Reader.
+
+```typescript
+{
+ content: string;
+ lang?: string;
+ mimeType?: string;
+}
+```
+
+#### Chunk Parameters
+
+| Name | Type | Description |
+| - | - | |
+| content | String | The string that contains the content sent to the Immersive Reader. |
+| lang | String | Language of the text. The value is in IETF BCP 47 language tag format, for example, en or es-ES. Language is detected automatically if not specified. For more information, see [Supported Languages](#supported-languages). |
+| mimeType | string | Plain text, MathML, HTML & Microsoft Word DOCX formats are supported. For more information, see [Supported MIME types](#supported-mime-types). |
+
+##### `content`
+```Parameters
+Type: String
+Required: true
+Default value: null
+```
+
+##### `lang`
+```Parameters
+Type: String
+Required: false
+Default value: Automatically detected
+```
+
+##### `mimeType`
+```Parameters
+Type: String
+Required: false
+Default value: "text/plain"
+```
+
+#### Supported MIME types
+
+| MIME Type | Description |
+| | -- |
+| text/plain | Plain text. |
+| text/html | HTML content. [Learn more](#html-support)|
+| application/mathml+xml | Mathematical Markup Language (MathML). [Learn more](./how-to/display-math.md). |
+| application/vnd.openxmlformats-officedocument.wordprocessingml.document | Microsoft Word .docx format document. |
++
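+
+For example, a sketch of a Content object that mixes a plain text chunk and an HTML chunk (the text values are illustrative):
+
+```javascript
+const content = {
+    title: 'Mixed content sample',
+    chunks: [
+        {
+            content: 'This chunk is plain text.',
+            lang: 'en'
+            // mimeType omitted, so it defaults to 'text/plain'
+        },
+        {
+            content: '<p>This chunk is rendered as <b>HTML</b>.</p>',
+            lang: 'en',
+            mimeType: 'text/html'
+        }
+    ]
+};
+
+ImmersiveReader.launchAsync(YOUR_TOKEN, YOUR_SUBDOMAIN, content);
+```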
+<br>
+
+## Options
+
+Contains properties that configure certain behaviors of the Immersive Reader.
+
+```typescript
+{
+ uiLang?: string;
+ timeout?: number;
+ uiZIndex?: number;
+ useWebview?: boolean;
+ onExit?: () => any;
+ customDomain?: string;
+ allowFullscreen?: boolean;
+ parent?: Node;
+ hideExitButton?: boolean;
+ cookiePolicy?: CookiePolicy;
+ disableFirstRun?: boolean;
+ readAloudOptions?: ReadAloudOptions;
+ translationOptions?: TranslationOptions;
+ displayOptions?: DisplayOptions;
+ preferences?: string;
+ onPreferencesChanged?: (value: string) => any;
+ disableGrammar?: boolean;
+ disableTranslation?: boolean;
+ disableLanguageDetection?: boolean;
+}
+```
+
+#### Options Parameters
+
+| Name | Type | Description |
+| - | - | |
+| uiLang | String | Language of the UI. The value is in IETF BCP 47 language tag format, for example, en or es-ES. Defaults to the browser language if not specified. |
+| timeout | Number | Duration (in milliseconds) before [launchAsync](#launchasync) fails with a timeout error (default is 15,000 ms). This timeout only applies to the initial launch of the Reader page, when the Reader page opens successfully and the spinner starts. Adjustment of the timeout shouldn't be necessary. |
+| uiZIndex | Number | Z-index of the `HTML` `iframe` element that will be created (default is 1000). |
+| useWebview | Boolean| Use a webview tag instead of an `HTML` `iframe` element, for compatibility with Chrome Apps (default is false). |
+| onExit | Function | Executes when the Immersive Reader exits. |
+| customDomain | String | Reserved for internal use. Custom domain where the Immersive Reader webapp is hosted (default is null). |
+| allowFullscreen | Boolean | The ability to toggle fullscreen (default is true). |
+| parent | Node | Node in which the `HTML` `iframe` element or `Webview` container is placed. If the element doesn't exist, iframe is placed in `body`. |
+| hideExitButton | Boolean | Hides the Immersive Reader's exit button arrow (default is false). This value should only be true if there's an alternative mechanism provided to exit the Immersive Reader (for example, a mobile toolbar's back arrow). |
+| cookiePolicy | [CookiePolicy](#cookiepolicy-options) | Setting for the Immersive Reader's cookie usage (default is *CookiePolicy.Disable*). It's the responsibility of the host application to obtain any necessary user consent following EU Cookie Compliance Policy. For more information, see [Cookie Policy Options](#cookiepolicy-options). |
+| disableFirstRun | Boolean | Disable the first run experience. |
+| readAloudOptions | [ReadAloudOptions](#readaloudoptions) | Options to configure Read Aloud. |
+| translationOptions | [TranslationOptions](#translationoptions) | Options to configure translation. |
+| displayOptions | [DisplayOptions](#displayoptions) | Options to configure text size, font, theme, and so on. |
+| preferences | String | String returned from onPreferencesChanged representing the user's preferences in the Immersive Reader. For more information, see [Settings Parameters](#settings-parameters) and [How-To Store User Preferences](./how-to-store-user-preferences.md). |
+| onPreferencesChanged | Function | Executes when the user's preferences have changed. For more information, see [How-To Store User Preferences](./how-to-store-user-preferences.md). |
+| disableTranslation | Boolean | Disable the word and document translation experience. |
+| disableGrammar | Boolean | Disable the Grammar experience. This option will also disable Syllables, Parts of Speech and Picture Dictionary, which depends on Parts of Speech. |
+| disableLanguageDetection | Boolean | Disable Language Detection to ensure the Immersive Reader only uses the language that is explicitly specified on the [Content](#content)/[Chunk[]](#chunk). This option should be used sparingly, primarily in situations where language detection isn't working; for instance, detection is more likely to fail with short passages of fewer than 100 characters. You should be certain about the language you're sending, because text-to-speech won't use the correct voice otherwise. Syllables, Parts of Speech, and Picture Dictionary won't work correctly if the language isn't correct. |
+
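+For example, a sketch of an options object that combines several of these settings (the values are illustrative):
+
+```typescript
+const options = {
+    uiLang: 'fr-FR',                                    // French UI
+    timeout: 30000,                                     // allow up to 30 seconds to load
+    cookiePolicy: ImmersiveReader.CookiePolicy.Enable,  // only after obtaining user consent
+    hideExitButton: false,
+    onExit: () => {
+        console.log('Immersive Reader closed');
+    }
+};
+
+ImmersiveReader.launchAsync(YOUR_TOKEN, YOUR_SUBDOMAIN, YOUR_DATA, options);
+```
+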
+##### `uiLang`
+```Parameters
+Type: String
+Required: false
+Default value: User's browser language
+```
+
+##### `timeout`
+```Parameters
+Type: Number
+Required: false
+Default value: 15000
+```
+
+##### `uiZIndex`
+```Parameters
+Type: Number
+Required: false
+Default value: 1000
+```
+
+##### `onExit`
+```Parameters
+Type: Function
+Required: false
+Default value: null
+```
+
+##### `preferences`
+
+> [!CAUTION]
+> **IMPORTANT** Do not attempt to programmatically change the values of the `-preferences` string sent to and from the Immersive Reader application as this may cause unexpected behavior resulting in a degraded user experience for your customers. Host applications should never assign a custom value to or manipulate the `-preferences` string. When using the `-preferences` string option, use only the exact value that was returned from the `-onPreferencesChanged` callback option.
+
+```Parameters
+Type: String
+Required: false
+Default value: null
+```
+
+##### `onPreferencesChanged`
+```Parameters
+Type: Function
+Required: false
+Default value: null
+```
+
+##### `customDomain`
+```Parameters
+Type: String
+Required: false
+Default value: null
+```
+
+<br>
+
+## ReadAloudOptions
+
+```typescript
+type ReadAloudOptions = {
+ voice?: string;
+ speed?: number;
+ autoplay?: boolean;
+};
+```
+
+#### ReadAloudOptions Parameters
+
+| Name | Type | Description |
+| - | - | |
+| voice | String | Voice, either "Female" or "Male". Not all languages support both genders. |
+| speed | Number | Playback speed, must be between 0.5 and 2.5, inclusive. |
+| autoplay | Boolean | Automatically start Read Aloud when the Immersive Reader loads. |
+
+##### `voice`
+```Parameters
+Type: String
+Required: false
+Default value: "Female" or "Male" (determined by language)
+Values available: "Female", "Male"
+```
+
+##### `speed`
+```Parameters
+Type: Number
+Required: false
+Default value: 1
+Values available: 0.5, 0.75, 1, 1.25, 1.5, 1.75, 2, 2.25, 2.5
+```
+
+> [!NOTE]
+> Due to browser limitations, autoplay is not supported in Safari.
+
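+For example, a sketch that starts Read Aloud automatically at a slower speed (the values are illustrative):
+
+```typescript
+const options = {
+    readAloudOptions: {
+        voice: 'Female',
+        speed: 0.75,
+        autoplay: true   // not supported in Safari, per the note above
+    }
+};
+
+ImmersiveReader.launchAsync(YOUR_TOKEN, YOUR_SUBDOMAIN, YOUR_DATA, options);
+```
+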
+<br>
+
+## TranslationOptions
+
+```typescript
+type TranslationOptions = {
+ language: string;
+ autoEnableDocumentTranslation?: boolean;
+ autoEnableWordTranslation?: boolean;
+};
+```
+
+#### TranslationOptions Parameters
+
+| Name | Type | Description |
+| - | - | |
+| language | String | Sets the translation language. The value is in IETF BCP 47 language tag format, for example, fr-FR, es-MX, zh-Hans-CN. Required to automatically enable word or document translation. |
+| autoEnableDocumentTranslation | Boolean | Automatically translate the entire document. |
+| autoEnableWordTranslation | Boolean | Automatically enable word translation. |
+
+##### `language`
+```Parameters
+Type: String
+Required: true
+Default value: null
+Values available: For more information, see the Supported Languages section
+```
+
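+For example, a sketch that automatically translates the document and enables word translation into Spanish (Mexico):
+
+```typescript
+const options = {
+    translationOptions: {
+        language: 'es-MX',
+        autoEnableDocumentTranslation: true,
+        autoEnableWordTranslation: true
+    }
+};
+
+ImmersiveReader.launchAsync(YOUR_TOKEN, YOUR_SUBDOMAIN, YOUR_DATA, options);
+```
+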
+<br>
+
+## ThemeOption
+
+```typescript
+enum ThemeOption { Light, Dark }
+```
+
+## DisplayOptions
+
+```typescript
+type DisplayOptions = {
+ textSize?: number;
+ increaseSpacing?: boolean;
+ fontFamily?: string;
+ themeOption?: ThemeOption
+};
+```
+
+#### DisplayOptions Parameters
+
+| Name | Type | Description |
+| - | - | |
+| textSize | Number | Sets the chosen text size. |
+| increaseSpacing | Boolean | Sets whether text spacing is toggled on or off. |
+| fontFamily | String | Sets the chosen font ("Calibri", "ComicSans", or "Sitka"). |
+| themeOption | ThemeOption | Sets the chosen Theme of the reader ("Light", "Dark"). |
+
+##### `textSize`
+```Parameters
+Type: Number
+Required: false
+Default value: 20, 36 or 42 (Determined by screen size)
+Values available: 14, 20, 28, 36, 42, 48, 56, 64, 72, 84, 96
+```
+
+##### `fontFamily`
+```Parameters
+Type: String
+Required: false
+Default value: "Calibri"
+Values available: "Calibri", "Sitka", "ComicSans"
+```
+
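+For example, a sketch that launches with larger, dark-themed text (the values are illustrative, and it's assumed here that the `ThemeOption` enum is exposed on the `ImmersiveReader` namespace in the same way as `CookiePolicy`):
+
+```typescript
+const options = {
+    displayOptions: {
+        textSize: 36,
+        increaseSpacing: true,
+        fontFamily: 'Sitka',
+        themeOption: ImmersiveReader.ThemeOption.Dark  // assumption: enum exposed on the namespace
+    }
+};
+
+ImmersiveReader.launchAsync(YOUR_TOKEN, YOUR_SUBDOMAIN, YOUR_DATA, options);
+```
+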
+<br>
+
+## CookiePolicy Options
+
+```typescript
+enum CookiePolicy { Disable, Enable }
+```
+
+**The settings listed below are for informational purposes only**. The Immersive Reader stores its settings, or user preferences, in cookies. This *cookiePolicy* option **disables** the use of cookies by default to follow EU Cookie Compliance laws. If you want to re-enable cookies and restore the default functionality for Immersive Reader user preferences, your website or application will need proper consent from the user to enable cookies. Then, to re-enable cookies in the Immersive Reader, you must explicitly set the *cookiePolicy* option to *CookiePolicy.Enable* when launching the Immersive Reader. The table below describes what settings the Immersive Reader stores in its cookie when the *cookiePolicy* option is enabled.
+
+#### Settings Parameters
+
+| Setting | Type | Description |
+| - | - | -- |
+| textSize | Number | Sets the chosen text size. |
+| fontFamily | String | Sets the chosen font ("Calibri", "ComicSans", or "Sitka"). |
+| textSpacing | Number | Sets whether text spacing is toggled on or off. |
+| formattingEnabled | Boolean | Sets whether HTML formatting is toggled on or off. |
+| theme | String | Sets the chosen theme (for example, "Light" or "Dark"). |
+| syllabificationEnabled | Boolean | Sets whether syllabification is toggled on or off. |
+| nounHighlightingEnabled | Boolean | Sets whether noun-highlighting is toggled on or off. |
+| nounHighlightingColor | String | Sets the chosen noun-highlighting color. |
+| verbHighlightingEnabled | Boolean | Sets whether verb-highlighting is toggled on or off. |
+| verbHighlightingColor | String | Sets the chosen verb-highlighting color. |
+| adjectiveHighlightingEnabled | Boolean | Sets whether adjective-highlighting is toggled on or off. |
+| adjectiveHighlightingColor | String | Sets the chosen adjective-highlighting color. |
+| adverbHighlightingEnabled | Boolean | Sets whether adverb-highlighting is toggled on or off. |
+| adverbHighlightingColor | String | Sets the chosen adverb-highlighting color. |
+| pictureDictionaryEnabled | Boolean | Sets whether Picture Dictionary is toggled on or off. |
+| posLabelsEnabled | Boolean | Sets whether the superscript text label of each highlighted Part of Speech is toggled on or off. |
+
+<br>
+
+## Supported Languages
+
+The translation feature of Immersive Reader supports many languages. For more information, see [Language Support](./language-support.md).
+
+<br>
+
+## HTML support
+
+When formatting is enabled, the following content will be rendered as HTML in the Immersive Reader.
+
+| HTML | Supported Content |
+| | -- |
+| Font Styles | Bold, Italic, Underline, Code, Strikethrough, Superscript, Subscript |
+| Unordered Lists | Disc, Circle, Square |
+| Ordered Lists | Decimal, Upper-Alpha, Lower-Alpha, Upper-Roman, Lower-Roman |
+
+Unsupported tags will be rendered comparably. Images and tables are currently not supported.
+
+<br>
+
+## Browser support
+
+Use the most recent versions of the following browsers for the best experience with the Immersive Reader.
+
+* Microsoft Edge
+* Internet Explorer 11
+* Google Chrome
+* Mozilla Firefox
+* Apple Safari
+
+<br>
+
+## Next steps
+
+* Explore the [Immersive Reader SDK on GitHub](https://github.com/microsoft/immersive-reader-sdk)
+* [Quickstart: Create a web app that launches the Immersive Reader (C#)](./quickstarts/client-libraries.md?pivots=programming-language-csharp)
ai-services Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/immersive-reader/release-notes.md
+
+ Title: "Immersive Reader SDK Release Notes"
+
+description: Learn more about what's new in the Immersive Reader JavaScript SDK.
++++++++ Last updated : 11/15/2021+++
+# Immersive Reader JavaScript SDK Release Notes
+
+## Version 1.3.0
+
+This release contains new features, security vulnerability fixes, and updates to code samples.
+
+#### New Features
+
+* Added the capability for the Immersive Reader iframe to request microphone permissions for Reading Coach
+
+#### Improvements
+
+* Update code samples to use v1.3.0
+* Update code samples to demonstrate the usage of latest options from v1.2.0
+
+## Version 1.2.0
+
+This release contains new features, security vulnerability fixes, bug fixes, updates to code samples, and configuration options.
+
+#### New Features
+
+* Add option to set the theme to light or dark
+* Add option to set the parent node where the iframe/webview container is placed
+* Add option to disable the Grammar experience
+* Add option to disable the Translation experience
+* Add option to disable Language Detection
+
+#### Improvements
+
+* Add title and aria modal attributes to the iframe
+* Set isLoading to false when exiting
+* Update code samples to use v1.2.0
+* Adds React code sample
+* Adds Ember code sample
+* Adds Azure function code sample
+* Adds C# code sample demonstrating how to call the Azure Function for authentication
+* Adds Android Kotlin code sample demonstrating how to call the Azure Function for authentication
+* Updates the Swift code sample to be Objective C compliant
+* Updates Advanced C# code sample to demonstrate the usage of new options: parent node, disableGrammar, disableTranslation, and disableLanguageDetection
+
+#### Fixes
+
+* Fixes multiple security vulnerabilities by upgrading TypeScript packages
+* Fixes bug where renderButton rendered a duplicate icon and label in the button
+
+## Version 1.1.0
+
+This release contains new features, security vulnerability fixes, bug fixes, updates to code samples, and configuration options.
+
+#### New Features
+
+* Enable saving and loading user preferences across different browsers and devices
+* Enable configuring default display options
+* Add option to set the translation language, enable word translation, and enable document translation when launching Immersive Reader
+* Add support for configuring Read Aloud via options
+* Add ability to disable first run experience
+* Add ImmersiveReaderView for UWP
+
+#### Improvements
+
+* Update the Android code sample HTML to work with the latest SDK
+* Update launch response to return the number of characters processed
+* Update code samples to use v1.1.0
+* Do not allow launchAsync to be called when already loading
+* Check for invalid content by ignoring messages where the data is not a string
+* Wrap call to window in an if clause to check browser support of Promise
+
+#### Fixes
+
+* Fix dependabot by removing yarn.lock from gitignore
+* Fix security vulnerability by upgrading pug to v3.0.0 in quickstart-nodejs code sample
+* Fix multiple security vulnerabilities by upgrading Jest and TypeScript packages
+* Fix a security vulnerability by upgrading Microsoft.IdentityModel.Clients.ActiveDirectory to v5.2.0
+
+<br>
+
+## Version 1.0.0
+
+This release contains breaking changes, new features, code sample improvements, and bug fixes.
+
+#### Breaking Changes
+
+* Require Azure AD token and subdomain, and deprecates tokens used in previous versions.
+* Set CookiePolicy to disabled. Retention of user preferences is disabled by default. The Reader launches with default settings every time, unless the CookiePolicy is set to enabled.
+
+#### New Features
+
+* Add support to enable or disable cookies
+* Add Android Kotlin quick start code sample
+* Add Android Java quick start code sample
+* Add Node quick start code sample
+
+#### Improvements
+
+* Update Node.js advanced README.md
+* Change Python code sample from advanced to quick start
+* Move iOS Swift code sample into js/samples
+* Update code samples to use v1.0.0
+
+#### Fixes
+
+* Fix for Node.js advanced code sample
+* Add missing files for advanced-csharp-multiple-resources
+* Remove en-us from hyperlinks
+
+<br>
+
+## Version 0.0.3
+
+This release contains new features, improvements to code samples, security vulnerability fixes, and bug fixes.
+
+#### New Features
+
+* Add iOS Swift code sample
+* Add C# advanced code sample demonstrating use of multiple resources
+* Add support to disable the full screen toggle feature
+* Add support to hide the Immersive Reader application exit button
+* Add a callback function that may be used by the host application upon exiting the Immersive Reader
+* Update code samples to use Azure Active Directory Authentication
+
+#### Improvements
+
+* Update C# advanced code sample to include Word document
+* Update code samples to use v0.0.3
+
+#### Fixes
+
+* Upgrade lodash to version 4.17.14 to fix security vulnerability
+* Update C# MSAL library to fix security vulnerability
+* Upgrade mixin-deep to version 1.3.2 to fix security vulnerability
+* Upgrade jest, webpack and webpack-cli which were using vulnerable versions of set-value and mixin-deep to fix security vulnerability
+
+<br>
+
+## Version 0.0.2
+
+This release contains new features, improvements to code samples, security vulnerability fixes, and bug fixes.
+
+#### New Features
+
+* Add Python advanced code sample
+* Add Java quick start code sample
+* Add simple code sample
+
+#### Improvements
+
+* Rename resourceName to cogSvcsSubdomain
+* Move secrets out of code and use environment variables
+* Update code samples to use v0.0.2
+
+#### Fixes
+
+* Fix Immersive Reader button accessibility bugs
+* Fix broken scrolling
+* Upgrade handlebars package to version 4.1.2 to fix security vulnerability
+* Fixes bugs in SDK unit tests
+* Fixes JavaScript Internet Explorer 11 compatibility bugs
+* Updates SDK urls
+
+<br>
+
+## Version 0.0.1
+
+The initial release of the Immersive Reader JavaScript SDK.
+
+* Add Immersive Reader JavaScript SDK
+* Add support to specify the UI language
+* Add a timeout to determine when the launchAsync function should fail with a timeout error
+* Add support to specify the z-index of the Immersive Reader iframe
+* Add support to use a webview tag instead of an iframe, for compatibility with Chrome Apps
+* Add SDK unit tests
+* Add Node.js advanced code sample
+* Add C# advanced code sample
+* Add C# quick start code sample
+* Add package configuration, Yarn and other build files
+* Add git configuration files
+* Add README.md files to code samples and SDK
+* Add MIT License
+* Add Contributor instructions
+* Add static icon button SVG assets
+
+## Next steps
+
+Get started with Immersive Reader:
+
+* Read the [Immersive Reader client library Reference](./reference.md)
+* Explore the [Immersive Reader client library on GitHub](https://github.com/microsoft/immersive-reader-sdk)
ai-services Security How To Update Role Assignment https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/immersive-reader/security-how-to-update-role-assignment.md
+
+ Title: "Security Advisory: Update Role Assignment for Azure Active Directory authentication permissions"
+
+description: This article will show you how to update the role assignment on existing Immersive Reader resources due to a security bug discovered in November 2021
++++++++ Last updated : 01/06/2022+++
+# Security Advisory: Update Role Assignment for Azure Active Directory authentication permissions
+
+A security bug has been discovered with Immersive Reader Azure Active Directory (Azure AD) authentication configuration. We are advising that you change the permissions on your Immersive Reader resources as described below.
+
+## Background
+
+A security bug was discovered that relates to Azure AD authentication for Immersive Reader. When initially creating your Immersive Reader resources and configuring them for Azure AD authentication, it is necessary to grant permissions for the Azure AD application identity to access your Immersive Reader resource. This is known as a Role Assignment. The Azure role that was previously used for permissions was the [Azure AI services User](../../role-based-access-control/built-in-roles.md#cognitive-services-user) role.
+
+During a security audit, it was discovered that this `Azure AI services User` role has permissions to [List Keys](/rest/api/cognitiveservices/accountmanagement/accounts/list-keys). This is slightly concerning because Immersive Reader integrations involve the use of this Azure AD access token in client web apps and browsers, and if the access token were to be stolen by a bad actor or attacker, there is a concern that this access token could be used to `list keys` of your Immersive Reader resource. If an attacker could `list keys` for your resource, then they would obtain the `Subscription Key` for your resource. The `Subscription Key` for your resource is used as an authentication mechanism and is considered a secret. If an attacker had the resource's `Subscription Key`, it would allow them to make valid and authenticated API calls to your Immersive Reader resource endpoint, which could lead to Denial of Service due to the increased usage and throttling on your endpoint. It would also allow unauthorized use of your Immersive Reader resource, which would lead to increased charges on your bill.
+
+In practice however, this attack or exploit is not likely to occur or may not even be possible. For Immersive Reader scenarios, customers obtain Azure AD access tokens with an audience of `https://cognitiveservices.azure.com`. In order to successfully `list keys` for your resource, the Azure AD access token would need to have an audience of `https://management.azure.com`. Generally speaking, this is not too much of a concern, since the access tokens used for Immersive Reader scenarios would not work to `list keys`, as they do not have the required audience. In order to change the audience on the access token, an attacker would have to hijack the token acquisition code and change the audience before the call is made to Azure AD to acquire the token. Again, this is not likely to be exploited because, as an Immersive Reader authentication best practice, we advise that customers create Azure AD access tokens on the web application backend, not in the client or browser. In those cases, since the token acquisition happens on the backend service, it's not as likely or perhaps even possible that attacker could compromise that process and change the audience.
+
+The real concern comes when or if any customer were to acquire tokens from Azure AD directly in client code. We strongly advise against this, but since customers are free to implement as they see fit, it is possible that some customers are doing this.
+
+To mitigate the concerns about any possibility of using the Azure AD access token to `list keys`, we have created a new built-in Azure role called `Cognitive Services Immersive Reader User` that does not have the permissions to `list keys`. This new role is not a shared role for the Azure AI services platform like `Azure AI services User` role is. This new role is specific to Immersive Reader and will only allow calls to Immersive Reader APIs.
+
+We are advising that ALL customers migrate to using the new `Cognitive Services Immersive Reader User` role instead of the original `Azure AI services User` role. We have provided a script below that you can run on each of your resources to switch over the role assignment permissions.
+
+This recommendation applies to ALL customers, to ensure that this vulnerability is patched for everyone, no matter what the implementation scenario or likelihood of attack.
+
+If you do NOT do this, nothing will break. The old role will continue to function. The security impact for most customers is minimal. However, it is advised that you migrate to the new role to mitigate the security concerns discussed above. Applying this update is a security advisory recommendation; it is not a mandate.
+
+Any new Immersive Reader resources you create with our script at [How to: Create an Immersive Reader resource](./how-to-create-immersive-reader.md) will automatically use the new role.
++
+## Call to action
+
+If you created and configured an Immersive Reader resource using the instructions at [How to: Create an Immersive Reader resource](./how-to-create-immersive-reader.md) prior to February 2022, it is advised that you perform the operation below to update the role assignment permissions on ALL of your Immersive Reader resources. The operation involves running a script to update the role assignment on a single resource. If you have multiple resources, run this script multiple times, once for each resource.
+
+After you have updated the role using the script below, it is also advised that you rotate the subscription keys on your resource. This is in case your keys have been compromised by the exploit above, and somebody is actually using your resource with subscription key authentication without your consent. Rotating the keys will render the previous keys invalid and deny any further access. For customers using Azure AD authentication, which should be everyone per current Immersive Reader SDK implementation, rotating the keys will have no impact on the Immersive Reader service, since Azure AD access tokens are used for authentication, not the subscription key. Rotating the subscription keys is just another precaution.
+
+You can rotate the subscription keys on the [Azure portal](https://portal.azure.com). Navigate to your resource and then to the `Keys and Endpoint` blade. At the top, there are buttons to `Regenerate Key1` and `Regenerate Key2`.
+++++
+### Use Azure PowerShell environment to update your Immersive Reader resource Role assignment
+
+1. Start by opening the [Azure Cloud Shell](../../cloud-shell/overview.md). Ensure that Cloud Shell is set to PowerShell in the upper-left hand dropdown or by typing `pwsh`.
+
+1. Copy and paste the following code snippet into the shell.
+
+ ```azurepowershell-interactive
+ function Update-ImmersiveReaderRoleAssignment(
+ [Parameter(Mandatory=$true, Position=0)] [String] $SubscriptionName,
+ [Parameter(Mandatory=$true)] [String] $ResourceGroupName,
+ [Parameter(Mandatory=$true)] [String] $ResourceName,
+ [Parameter(Mandatory=$true)] [String] $AADAppIdentifierUri
+ )
+ {
+ $unused = ''
+ if (-not [System.Uri]::TryCreate($AADAppIdentifierUri, [System.UriKind]::Absolute, [ref] $unused)) {
+ throw "Error: AADAppIdentifierUri must be a valid URI"
+ }
+
+ Write-Host "Setting the active subscription to '$SubscriptionName'"
+ $subscriptionExists = Get-AzSubscription -SubscriptionName $SubscriptionName
+ if (-not $subscriptionExists) {
+ throw "Error: Subscription does not exist"
+ }
+ az account set --subscription $SubscriptionName
+
+ # Get the Immersive Reader resource
+ $resourceId = az cognitiveservices account show --resource-group $ResourceGroupName --name $ResourceName --query "id" -o tsv
+ if (-not $resourceId) {
+ throw "Error: Failed to find Immersive Reader resource"
+ }
+
+ # Get the Azure AD application service principal
+ $principalId = az ad sp show --id $AADAppIdentifierUri --query "objectId" -o tsv
+ if (-not $principalId) {
+ throw "Error: Failed to find Azure AD application service principal"
+ }
+
+ $newRoleName = "Cognitive Services Immersive Reader User"
+ $newRoleExists = az role assignment list --assignee $principalId --scope $resourceId --role $newRoleName --query "[].id" -o tsv
+ if ($newRoleExists) {
+ Write-Host "New role assignment for '$newRoleName' role already exists on resource"
+ }
+ else {
+ Write-Host "Creating new role assignment for '$newRoleName' role"
+ $roleCreateResult = az role assignment create --assignee $principalId --scope $resourceId --role $newRoleName
+ if (-not $roleCreateResult) {
+ throw "Error: Failed to add new role assignment"
+ }
+ Write-Host "New role assignment created successfully"
+ }
+
+ $oldRoleName = "Azure AI services User"
+ $oldRoleExists = az role assignment list --assignee $principalId --scope $resourceId --role $oldRoleName --query "[].id" -o tsv
+ if (-not $oldRoleExists) {
+ Write-Host "Old role assignment for '$oldRoleName' role does not exist on resource"
+ }
+ else {
+ Write-Host "Deleting old role assignment for '$oldRoleName' role"
+ az role assignment delete --assignee $principalId --scope $resourceId --role $oldRoleName
+ $oldRoleExists = az role assignment list --assignee $principalId --scope $resourceId --role $oldRoleName --query "[].id" -o tsv
+ if ($oldRoleExists) {
+ throw "Error: Failed to delete old role assignment"
+ }
+ Write-Host "Old role assignment deleted successfully"
+ }
+ }
+ ```
+
+1. Run the function `Update-ImmersiveReaderRoleAssignment`, supplying the '<PARAMETER_VALUES>' placeholders below with your own values as appropriate.
+
+ ```azurepowershell-interactive
+ Update-ImmersiveReaderRoleAssignment -SubscriptionName '<SUBSCRIPTION_NAME>' -ResourceGroupName '<RESOURCE_GROUP_NAME>' -ResourceName '<RESOURCE_NAME>' -AADAppIdentifierUri '<AAD_APP_IDENTIFIER_URI>'
+ ```
+
+ The full command will look something like the following. Here we have put each parameter on its own line for clarity, so you can see the whole command. Do not copy or use this command as-is. Copy and use the command above with your own values. This example has dummy values for the '<PARAMETER_VALUES>' above. Yours will be different, as you will come up with your own names for these values.
+
+ ```Update-ImmersiveReaderRoleAssignment```<br>
+ ``` -SubscriptionName 'MyOrganizationSubscriptionName'```<br>
+ ``` -ResourceGroupName 'MyResourceGroupName'```<br>
+ ``` -ResourceName 'MyOrganizationImmersiveReader'```<br>
+ ``` -AADAppIdentifierUri 'https://MyOrganizationImmersiveReaderAADApp'```<br>
+
+ | Parameter | Comments |
+ | | |
+ | SubscriptionName |The name of your Azure subscription. |
+ | ResourceGroupName |The name of the Resource Group that contains your Immersive Reader resource. |
+ | ResourceName |The name of your Immersive Reader resource. |
+ | AADAppIdentifierUri |The URI for your Azure AD app. |
++
+## Next steps
+
+* View the [Node.js quickstart](./quickstarts/client-libraries.md?pivots=programming-language-nodejs) to see what else you can do with the Immersive Reader SDK using Node.js
+* View the [Android tutorial](./how-to-launch-immersive-reader.md) to see what else you can do with the Immersive Reader SDK using Java or Kotlin for Android
+* View the [iOS tutorial](./how-to-launch-immersive-reader.md) to see what else you can do with the Immersive Reader SDK using Swift for iOS
+* View the [Python tutorial](./how-to-launch-immersive-reader.md) to see what else you can do with the Immersive Reader SDK using Python
+* Explore the [Immersive Reader SDK](https://github.com/microsoft/immersive-reader-sdk) and the [Immersive Reader SDK Reference](./reference.md)
ai-services Tutorial Ios Picture Immersive Reader https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/immersive-reader/tutorial-ios-picture-immersive-reader.md
+
+ Title: "Tutorial: Create an iOS app that takes a photo and launches it in the Immersive Reader (Swift)"
+
+description: In this tutorial, you will build an iOS app from scratch and add the Picture to Immersive Reader functionality.
++++++ Last updated : 01/14/2020+
+#Customer intent: As a developer, I want to integrate two Azure AI services, the Immersive Reader and the Read API into my iOS application so that I can view any text from a photo in the Immersive Reader.
++
+# Tutorial: Create an iOS app that launches the Immersive Reader with content from a photo (Swift)
+
+The [Immersive Reader](https://www.onenote.com/learningtools) is an inclusively designed tool that implements proven techniques to improve reading comprehension.
+
+The [Azure AI Vision Azure AI services Read API](../../ai-services/computer-vision/overview-ocr.md) detects text content in an image using Microsoft's latest recognition models and converts the identified text into a machine-readable character stream.
+
+In this tutorial, you will build an iOS app from scratch and integrate the Read API, and the Immersive Reader by using the Immersive Reader SDK. A full working sample of this tutorial is available [here](https://github.com/microsoft/immersive-reader-sdk/tree/master/js/samples/ios).
+
+If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/cognitive-services/) before you begin.
+
+## Prerequisites
+
+* [Xcode](https://apps.apple.com/us/app/xcode/id497799835?mt=12)
+* An Immersive Reader resource configured for Azure Active Directory authentication. Follow [these instructions](./how-to-create-immersive-reader.md) to get set up. You will need some of the values created here when configuring the sample project properties. Save the output of your session into a text file for future reference.
+* Usage of this sample requires an Azure subscription to the Azure AI Vision service. [Create an Azure AI Vision resource in the Azure portal](https://portal.azure.com/#create/Microsoft.CognitiveServicesComputerVision).
+
+## Create an Xcode project
+
+Create a new project in Xcode.
+
+![New Project](./media/ios/xcode-create-project.png)
+
+Choose **Single View App**.
+
+![New Single View App](./media/ios/xcode-single-view-app.png)
+
+## Get the SDK CocoaPod
+
+The easiest way to use the Immersive Reader SDK is via CocoaPods. To install via Cocoapods:
+
+1. [Install CocoaPods](http://guides.cocoapods.org/using/getting-started.html) - Follow the getting started guide to install Cocoapods.
+
+2. Create a Podfile by running `pod init` in your Xcode project's root directory.
+
+3. Add the CocoaPod to your Podfile by adding `pod 'immersive-reader-sdk', :path => 'https://github.com/microsoft/immersive-reader-sdk/tree/master/iOS/immersive-reader-sdk'`. Your Podfile should look like the following, with your target's name replacing picture-to-immersive-reader-swift:
+
+ ```ruby
+ platform :ios, '9.0'
+
+ target 'picture-to-immersive-reader-swift' do
+ use_frameworks!
+ # Pods for picture-to-immersive-reader-swift
+ pod 'immersive-reader-sdk', :git => 'https://github.com/microsoft/immersive-reader-sdk.git'
+ end
+ ```
+
+4. In the terminal, in the directory of your Xcode project, run the command `pod install` to install the Immersive Reader SDK pod.
+
+5. Add `import immersive_reader_sdk` to all files that need to reference the SDK.
+
+6. Ensure to open the project by opening the `.xcworkspace` file and not the `.xcodeproj` file.
+
+## Acquire an Azure AD authentication token
+
+You need some values from the Azure AD authentication configuration prerequisite step above for this part. Refer back to the text file you saved of that session.
+
+````text
+TenantId => Azure subscription TenantId
+ClientId => Azure AD ApplicationId
+ClientSecret => Azure AD Application Service Principal password
+Subdomain => Immersive Reader resource subdomain (resource 'Name' if the resource was created in the Azure portal, or 'CustomSubDomain' option if the resource was created with Azure CLI PowerShell. Check the Azure portal for the subdomain on the Endpoint in the resource Overview page, for example, 'https://[SUBDOMAIN].cognitiveservices.azure.com/')
+````
+
+In the main project folder, which contains the ViewController.swift file, create a Swift class file called Constants.swift. Replace the class with the following code, adding in your values where applicable. Keep this file as a local file that only exists on your machine and be sure not to commit this file into source control, as it contains secrets that should not be made public. It is recommended that you do not keep secrets in your app. Instead, we recommend using a backend service to obtain the token, where the secrets can be kept outside of the app and off of the device. The backend API endpoint should be secured behind some form of authentication (for example, [OAuth](https://oauth.net/2/)) to prevent unauthorized users from obtaining tokens to use against your Immersive Reader service and billing; that work is beyond the scope of this tutorial.
+
+## Set up the app to run without a storyboard
+
+Open AppDelegate.swift and replace the file with the following code.
+
+```swift
+import UIKit
+
+@UIApplicationMain
+class AppDelegate: UIResponder, UIApplicationDelegate {
+
+ var window: UIWindow?
+
+ var navigationController: UINavigationController?
+
+ func application(_ application: UIApplication, didFinishLaunchingWithOptions launchOptions: [UIApplication.LaunchOptionsKey: Any]?) -> Bool {
+ // Override point for customization after application launch.
+
+ window = UIWindow(frame: UIScreen.main.bounds)
+
+ // Allow the app run without a storyboard
+ if let window = window {
+ let mainViewController = PictureLaunchViewController()
+ navigationController = UINavigationController(rootViewController: mainViewController)
+ window.rootViewController = navigationController
+ window.makeKeyAndVisible()
+ }
+ return true
+ }
+
+ func applicationWillResignActive(_ application: UIApplication) {
+ // Sent when the application is about to move from active to inactive state. This can occur for certain types of temporary interruptions (such as an incoming phone call or SMS message) or when the user quits the application and it begins the transition to the background state.
+ // Use this method to pause ongoing tasks, disable timers, and invalidate graphics rendering callbacks. Games should use this method to pause the game.
+ }
+
+ func applicationDidEnterBackground(_ application: UIApplication) {
+ // Use this method to release shared resources, save user data, invalidate timers, and store enough application state information to restore your application to its current state in case it is terminated later.
+ // If your application supports background execution, this method is called instead of applicationWillTerminate: when the user quits.
+ }
+
+ func applicationWillEnterForeground(_ application: UIApplication) {
+ // Called as part of the transition from the background to the active state; here you can undo many of the changes made on entering the background.
+ }
+
+ func applicationDidBecomeActive(_ application: UIApplication) {
+ // Restart any tasks that were paused (or not yet started) while the application was inactive. If the application was previously in the background, optionally refresh the user interface.
+ }
+
+ func applicationWillTerminate(_ application: UIApplication) {
+ // Called when the application is about to terminate. Save data if appropriate. See also applicationDidEnterBackground:.
+ }
+
+}
+```
+
+## Add functionality for taking and uploading photos
+
+Rename ViewController.swift to PictureLaunchViewController.swift and replace the file with the following code.
+
+```swift
+import UIKit
+import immersive_reader_sdk
+
+class PictureLaunchViewController: UIViewController, UINavigationControllerDelegate, UIImagePickerControllerDelegate {
+
+ private var photoButton: UIButton!
+ private var cameraButton: UIButton!
+ private var titleText: UILabel!
+ private var bodyText: UILabel!
+ private var sampleContent: Content!
+ private var sampleChunk: Chunk!
+ private var sampleOptions: Options!
+ private var imagePicker: UIImagePickerController!
+ private var spinner: UIActivityIndicatorView!
+ private var activityIndicatorBackground: UIView!
+ private var textURL = "vision/v2.0/read/core/asyncBatchAnalyze";
+
+ override func viewDidLoad() {
+ super.viewDidLoad()
+
+ view.backgroundColor = .white
+
+ titleText = UILabel()
+ titleText.text = "Picture to Immersive Reader with OCR"
+ titleText.font = UIFont.boldSystemFont(ofSize: 32)
+ titleText.textAlignment = .center
+ titleText.lineBreakMode = .byWordWrapping
+ titleText.numberOfLines = 0
+ view.addSubview(titleText)
+
+ bodyText = UILabel()
+ bodyText.text = "Capture or upload a photo of handprinted text on a piece of paper, handwriting, typed text, text on a computer screen, writing on a white board and many more, and watch it be presented to you in the Immersive Reader!"
+ bodyText.font = UIFont.systemFont(ofSize: 18)
+ bodyText.lineBreakMode = .byWordWrapping
+ bodyText.numberOfLines = 0
+ let screenSize = self.view.frame.height
+ if screenSize <= 667 {
+ // Font size for smaller iPhones.
+ bodyText.font = bodyText.font.withSize(16)
+
+ } else if screenSize <= 812.0 {
+ // Font size for medium iPhones.
+ bodyText.font = bodyText.font.withSize(18)
+
+ } else if screenSize <= 896 {
+ // Font size for larger iPhones.
+ bodyText.font = bodyText.font.withSize(20)
+
+ } else {
+ // Font size for iPads.
+ bodyText.font = bodyText.font.withSize(26)
+ }
+ view.addSubview(bodyText)
+
+ photoButton = UIButton()
+ photoButton.backgroundColor = .darkGray
+ photoButton.contentEdgeInsets = UIEdgeInsets(top: 10, left: 5, bottom: 10, right: 5)
+ photoButton.layer.cornerRadius = 5
+ photoButton.setTitleColor(.white, for: .normal)
+ photoButton.setTitle("Choose Photo from Library", for: .normal)
+ photoButton.titleLabel?.font = UIFont.systemFont(ofSize: 18, weight: .bold)
+ photoButton.addTarget(self, action: #selector(selectPhotoButton(sender:)), for: .touchUpInside)
+ view.addSubview(photoButton)
+
+ cameraButton = UIButton()
+ cameraButton.backgroundColor = .darkGray
+ cameraButton.contentEdgeInsets = UIEdgeInsets(top: 10, left: 5, bottom: 10, right: 5)
+ cameraButton.layer.cornerRadius = 5
+ cameraButton.setTitleColor(.white, for: .normal)
+ cameraButton.setTitle("Take Photo", for: .normal)
+ cameraButton.titleLabel?.font = UIFont.systemFont(ofSize: 18, weight: .bold)
+ cameraButton.addTarget(self, action: #selector(takePhotoButton(sender:)), for: .touchUpInside)
+ view.addSubview(cameraButton)
+
+ activityIndicatorBackground = UIView()
+ activityIndicatorBackground.backgroundColor = UIColor.black
+ activityIndicatorBackground.alpha = 0
+ view.addSubview(activityIndicatorBackground)
+ view.bringSubviewToFront(_: activityIndicatorBackground)
+
+ spinner = UIActivityIndicatorView(style: .whiteLarge)
+ view.addSubview(spinner)
+
+ let layoutGuide = view.safeAreaLayoutGuide
+
+ titleText.translatesAutoresizingMaskIntoConstraints = false
+ titleText.topAnchor.constraint(equalTo: layoutGuide.topAnchor, constant: 25).isActive = true
+ titleText.leadingAnchor.constraint(equalTo: layoutGuide.leadingAnchor, constant: 20).isActive = true
+ titleText.trailingAnchor.constraint(equalTo: layoutGuide.trailingAnchor, constant: -20).isActive = true
+
+ bodyText.translatesAutoresizingMaskIntoConstraints = false
+ bodyText.topAnchor.constraint(equalTo: titleText.bottomAnchor, constant: 35).isActive = true
+ bodyText.leadingAnchor.constraint(equalTo: layoutGuide.leadingAnchor, constant: 20).isActive = true
+ bodyText.trailingAnchor.constraint(equalTo: layoutGuide.trailingAnchor, constant: -20).isActive = true
+
+ cameraButton.translatesAutoresizingMaskIntoConstraints = false
+ if screenSize > 896 {
+ // Constraints for iPads.
+ cameraButton.heightAnchor.constraint(equalToConstant: 150).isActive = true
+ cameraButton.leadingAnchor.constraint(equalTo: layoutGuide.leadingAnchor, constant: 60).isActive = true
+ cameraButton.trailingAnchor.constraint(equalTo: layoutGuide.trailingAnchor, constant: -60).isActive = true
+ cameraButton.topAnchor.constraint(equalTo: bodyText.bottomAnchor, constant: 150).isActive = true
+ } else {
+ // Constraints for iPhones.
+ cameraButton.heightAnchor.constraint(equalToConstant: 100).isActive = true
+ cameraButton.leadingAnchor.constraint(equalTo: layoutGuide.leadingAnchor, constant: 30).isActive = true
+ cameraButton.trailingAnchor.constraint(equalTo: layoutGuide.trailingAnchor, constant: -30).isActive = true
+ cameraButton.topAnchor.constraint(equalTo: bodyText.bottomAnchor, constant: 100).isActive = true
+ }
+ cameraButton.bottomAnchor.constraint(equalTo: photoButton.topAnchor, constant: -40).isActive = true
+
+ photoButton.translatesAutoresizingMaskIntoConstraints = false
+ if screenSize > 896 {
+ // Constraints for iPads.
+ photoButton.heightAnchor.constraint(equalToConstant: 150).isActive = true
+ photoButton.leadingAnchor.constraint(equalTo: layoutGuide.leadingAnchor, constant: 60).isActive = true
+ photoButton.trailingAnchor.constraint(equalTo: layoutGuide.trailingAnchor, constant: -60).isActive = true
+ } else {
+ // Constraints for iPhones.
+ photoButton.heightAnchor.constraint(equalToConstant: 100).isActive = true
+ photoButton.leadingAnchor.constraint(equalTo: layoutGuide.leadingAnchor, constant: 30).isActive = true
+ photoButton.trailingAnchor.constraint(equalTo: layoutGuide.trailingAnchor, constant: -30).isActive = true
+ }
+
+ spinner.translatesAutoresizingMaskIntoConstraints = false
+ spinner.centerXAnchor.constraint(equalTo: view.centerXAnchor).isActive = true
+ spinner.centerYAnchor.constraint(equalTo: view.centerYAnchor).isActive = true
+
+ activityIndicatorBackground.translatesAutoresizingMaskIntoConstraints = false
+ activityIndicatorBackground.topAnchor.constraint(equalTo: layoutGuide.topAnchor).isActive = true
+ activityIndicatorBackground.bottomAnchor.constraint(equalTo: layoutGuide.bottomAnchor).isActive = true
+ activityIndicatorBackground.leadingAnchor.constraint(equalTo: layoutGuide.leadingAnchor).isActive = true
+ activityIndicatorBackground.trailingAnchor.constraint(equalTo: layoutGuide.trailingAnchor).isActive = true
+
+ // Create content and options.
+ sampleChunk = Chunk(content: bodyText.text!, lang: nil, mimeType: nil)
+ sampleContent = Content( Title: titleText.text!, chunks: [sampleChunk])
+ sampleOptions = Options(uiLang: nil, timeout: nil, uiZIndex: nil)
+ }
+
+ @IBAction func selectPhotoButton(sender: AnyObject) {
+ // Launch the photo picker.
+ imagePicker = UIImagePickerController()
+ imagePicker.delegate = self
+ self.imagePicker.sourceType = .photoLibrary
+ self.imagePicker.allowsEditing = true
+ self.present(self.imagePicker, animated: true, completion: nil)
+ self.photoButton.isEnabled = true
+ }
+
+ @IBAction func takePhotoButton(sender: AnyObject) {
+ if !UIImagePickerController.isSourceTypeAvailable(.camera) {
+ // If there is no camera on the device, disable the button
+ self.cameraButton.backgroundColor = .gray
+ self.cameraButton.isEnabled = true
+
+ } else {
+ // Launch the camera.
+ imagePicker = UIImagePickerController()
+ imagePicker.delegate = self
+ self.imagePicker.sourceType = .camera
+ self.present(self.imagePicker, animated: true, completion: nil)
+ self.cameraButton.isEnabled = true
+ }
+ }
+
+ func imagePickerController(_ picker: UIImagePickerController, didFinishPickingMediaWithInfo info: [UIImagePickerController.InfoKey : Any]) {
+ imagePicker.dismiss(animated: true, completion: nil)
+ photoButton.isEnabled = false
+ cameraButton.isEnabled = false
+ self.spinner.startAnimating()
+ activityIndicatorBackground.alpha = 0.6
+
+ // Retrieve the image.
+ let image = (info[.originalImage] as? UIImage)!
+
+ // Retrieve the byte array from image.
+ let imageByteArray = image.jpegData(compressionQuality: 1.0)
+
+ // Call the getTextFromImage function passing in the image the user takes or chooses.
+ getTextFromImage(subscriptionKey: Constants.computerVisionSubscriptionKey, getTextUrl: Constants.computerVisionEndPoint + textURL, pngImage: imageByteArray!, onSuccess: { cognitiveText in
+ print("cognitive text is: \(cognitiveText)")
+ DispatchQueue.main.async {
+ self.photoButton.isEnabled = true
+ self.cameraButton.isEnabled = true
+ }
+
+ // Create content and options with the text from the image.
+ let sampleImageChunk = Chunk(content: cognitiveText, lang: nil, mimeType: nil)
+ let sampleImageContent = Content( Title: "Text from image", chunks: [sampleImageChunk])
+ let sampleImageOptions = Options(uiLang: nil, timeout: nil, uiZIndex: nil)
+
+ // Callback to get token for Immersive Reader.
+ self.getToken(onSuccess: {cognitiveToken in
+
+ DispatchQueue.main.async {
+
+ launchImmersiveReader(navController: self.navigationController!, token: cognitiveToken, subdomain: Constants.subdomain, content: sampleImageContent, options: sampleImageOptions, onSuccess: {
+ self.spinner.stopAnimating()
+ self.activityIndicatorBackground.alpha = 0
+ self.photoButton.isEnabled = true
+ self.cameraButton.isEnabled = true
+
+ }, onFailure: { error in
+ print("An error occured launching the Immersive Reader: \(error)")
+ self.spinner.stopAnimating()
+ self.activityIndicatorBackground.alpha = 0
+ self.photoButton.isEnabled = true
+ self.cameraButton.isEnabled = true
+
+ })
+ }
+
+ }, onFailure: { error in
+ DispatchQueue.main.async {
+ self.photoButton.isEnabled = true
+ self.cameraButton.isEnabled = true
+
+ }
+ print("An error occured retrieving the token: \(error)")
+ })
+
+ }, onFailure: { error in
+ DispatchQueue.main.async {
+ self.photoButton.isEnabled = true
+ self.cameraButton.isEnabled = true
+ }
+
+ })
+ }
+
+ /// Retrieves the token for the Immersive Reader using Azure Active Directory authentication
+ ///
+ /// - Parameters:
+ /// -onSuccess: A closure that gets called when the token is successfully recieved using Azure Active Directory authentication.
+ /// -theToken: The token for the Immersive Reader recieved using Azure Active Directory authentication.
+ /// -onFailure: A closure that gets called when the token fails to be obtained from the Azure Active Directory Authentication.
+ /// -theError: The error that occured when the token fails to be obtained from the Azure Active Directory Authentication.
+ func getToken(onSuccess: @escaping (_ theToken: String) -> Void, onFailure: @escaping ( _ theError: String) -> Void) {
+
+ let tokenForm = "grant_type=client_credentials&resource=https://cognitiveservices.azure.com/&client_id=" + Constants.clientId + "&client_secret=" + Constants.clientSecret
+ let tokenUrl = "https://login.windows.net/" + Constants.tenantId + "/oauth2/token"
+
+ var responseTokenString: String = "0"
+
+ let url = URL(string: tokenUrl)!
+ var request = URLRequest(url: url)
+ request.httpBody = tokenForm.data(using: .utf8)
+ request.httpMethod = "POST"
+
+ let task = URLSession.shared.dataTask(with: request) { data, response, error in
+ guard let data = data,
+ let response = response as? HTTPURLResponse,
+ // Check for networking errors.
+ error == nil else {
+ print("error", error ?? "Unknown error")
+ onFailure("Error")
+ return
+ }
+
+ // Check for http errors.
+ guard (200 ... 299) ~= response.statusCode else {
+ print("statusCode should be 2xx, but is \(response.statusCode)")
+ print("response = \(response)")
+ onFailure(String(response.statusCode))
+ return
+ }
+
+ let responseString = String(data: data, encoding: .utf8)
+ print("responseString = \(String(describing: responseString!))")
+
+ let jsonResponse = try? JSONSerialization.jsonObject(with: data, options: [])
+ guard let jsonDictonary = jsonResponse as? [String: Any] else {
+ onFailure("Error parsing JSON response.")
+ return
+ }
+ guard let responseToken = jsonDictonary["access_token"] as? String else {
+ onFailure("Error retrieving token from JSON response.")
+ return
+ }
+ responseTokenString = responseToken
+ onSuccess(responseTokenString)
+ }
+
+ task.resume()
+ }
+
+ /// Returns the text string after it has been extracted from an Image input.
+ ///
+ /// - Parameters:
+ /// -subscriptionKey: The Azure subscription key.
+ /// -pngImage: Image data in PNG format.
+ /// - Returns: a string of text representing the
+ func getTextFromImage(subscriptionKey: String, getTextUrl: String, pngImage: Data, onSuccess: @escaping (_ theToken: String) -> Void, onFailure: @escaping ( _ theError: String) -> Void) {
+
+ let url = URL(string: getTextUrl)!
+ var request = URLRequest(url: url)
+ request.setValue(subscriptionKey, forHTTPHeaderField: "Ocp-Apim-Subscription-Key")
+ request.setValue("application/octet-stream", forHTTPHeaderField: "Content-Type")
+
+ // Two REST API calls are required to extract text. The first call is to submit the image for processing, and the next call is to retrieve the text found in the image.
+
+ // Set the body to the image in byte array format.
+ request.httpBody = pngImage
+
+ request.httpMethod = "POST"
+
+ let task = URLSession.shared.dataTask(with: request) { data, response, error in
+ guard let data = data,
+ let response = response as? HTTPURLResponse,
+ // Check for networking errors.
+ error == nil else {
+ print("error", error ?? "Unknown error")
+ onFailure("Error")
+ return
+ }
+
+ // Check for http errors.
+ guard (200 ... 299) ~= response.statusCode else {
+ print("statusCode should be 2xx, but is \(response.statusCode)")
+ print("response = \(response)")
+ onFailure(String(response.statusCode))
+ return
+ }
+
+ let responseString = String(data: data, encoding: .utf8)
+ print("responseString = \(String(describing: responseString!))")
+
+ // Send the second call to the API. The first API call returns operationLocation which stores the URI for the second REST API call.
+ let operationLocation = response.allHeaderFields["Operation-Location"] as? String
+
+ if (operationLocation == nil) {
+ print("Error retrieving operation location")
+ return
+ }
+
+ // Wait 10 seconds for text recognition to be available as suggested by the Text API documentation.
+ print("Text submitted. Waiting 10 seconds to retrieve the recognized text.")
+ sleep(10)
+
+ // HTTP GET request with the operationLocation url to retrieve the text.
+ let getTextUrl = URL(string: operationLocation!)!
+ var getTextRequest = URLRequest(url: getTextUrl)
+ getTextRequest.setValue(subscriptionKey, forHTTPHeaderField: "Ocp-Apim-Subscription-Key")
+ getTextRequest.httpMethod = "GET"
+
+ // Send the GET request to retrieve the text.
+ let taskGetText = URLSession.shared.dataTask(with: getTextRequest) { data, response, error in
+ guard let data = data,
+ let response = response as? HTTPURLResponse,
+ // Check for networking errors.
+ error == nil else {
+ print("error", error ?? "Unknown error")
+ onFailure("Error")
+ return
+ }
+
+ // Check for http errors.
+ guard (200 ... 299) ~= response.statusCode else {
+ print("statusCode should be 2xx, but is \(response.statusCode)")
+ print("response = \(response)")
+ onFailure(String(response.statusCode))
+ return
+ }
+
+ // Decode the JSON data into an object.
+ let customDecoding = try! JSONDecoder().decode(TextApiResponse.self, from: data)
+
+ // Loop through the lines to get all lines of text and concatenate them together.
+ var textFromImage = ""
+ for textLine in customDecoding.recognitionResults[0].lines {
+ textFromImage = textFromImage + textLine.text + " "
+ }
+
+ onSuccess(textFromImage)
+ }
+ taskGetText.resume()
+
+ }
+
+ task.resume()
+ }
+
+ // Structs used for decoding the Text API JSON response.
+ struct TextApiResponse: Codable {
+ let status: String
+ let recognitionResults: [RecognitionResult]
+ }
+
+ struct RecognitionResult: Codable {
+ let page: Int
+ let clockwiseOrientation: Double
+ let width, height: Int
+ let unit: String
+ let lines: [Line]
+ }
+
+ struct Line: Codable {
+ let boundingBox: [Int]
+ let text: String
+ let words: [Word]
+ }
+
+ struct Word: Codable {
+ let boundingBox: [Int]
+ let text: String
+ let confidence: String?
+ }
+
+}
+```
+
+## Build and run the app
+
+Set the archive scheme in Xcode by selecting a simulator or device target.
+![Archive scheme](./media/ios/xcode-archive-scheme.png)<br/>
+![Select Target](./media/ios/xcode-select-target.png)
+
+In Xcode, press Ctrl + R or select the play button to run the project and the app should launch on the specified simulator or device.
+
+In your app, you should see:
+
+![Sample app](./media/ios/picture-to-immersive-reader-ipad-app.png)
+
+Inside the app, take or upload a photo of text by pressing the 'Take Photo' button or 'Choose Photo from Library' button and the Immersive Reader will then launch displaying the text from the photo.
+
+![Immersive Reader](./media/ios/picture-to-immersive-reader-ipad.png)
+
+## Next steps
+
+* Explore the [Immersive Reader SDK Reference](./reference.md)
ai-services Configure Containers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/language-service/concepts/configure-containers.md
+
+ Title: Configure containers - Language service
+
+description: Language service provides each container with a common configuration framework, so that you can easily configure and manage storage, logging and telemetry, and security settings for your containers.
+++++++ Last updated : 11/02/2021+++
+# Configure Language service docker containers
+
+Language service provides each container with a common configuration framework, so that you can easily configure and manage storage, logging and telemetry, and security settings for your containers. This article applies to the following containers:
+
+* sentiment analysis
+* language detection
+* key phrase extraction
+* Text Analytics for health
+
+## Configuration settings
++
+> [!IMPORTANT]
+> The [`ApiKey`](#apikey-configuration-setting), [`Billing`](#billing-configuration-setting), and [`Eula`](#eula-setting) settings are used together, and you must provide valid values for all three of them; otherwise your container won't start.
+
+## ApiKey configuration setting
+
+The `ApiKey` setting specifies the Azure resource key used to track billing information for the container. You must specify a value for the key and it must be a valid key for the _Language_ resource specified for the [`Billing`](#billing-configuration-setting) configuration setting.
+
+## ApplicationInsights setting
++
+## Billing configuration setting
+
+The `Billing` setting specifies the endpoint URI of the _Language_ resource on Azure used to meter billing information for the container. You must specify a value for this configuration setting, and the value must be a valid endpoint URI for a _Language_ resource on Azure. The container reports usage about every 10 to 15 minutes.
+
+|Required| Name | Data type | Description |
+|--||--|-|
+|Yes| `Billing` | String | Billing endpoint URI. |
++
+## Eula setting
++
+## Fluentd settings
++
+## Http proxy credentials settings
++
+## Logging settings
+
+
+## Mount settings
+
+Use bind mounts to read and write data to and from the container. You can specify an input mount or output mount by specifying the `--mount` option in the [docker run](https://docs.docker.com/engine/reference/commandline/run/) command.
+
+The Language service containers don't use input or output mounts to store training or service data.
+
+The exact syntax of the host mount location varies depending on the host operating system. Additionally, the host computer's mount location may not be accessible due to a conflict between permissions used by the docker service account and the host mount location permissions.
+
+|Optional| Name | Data type | Description |
+|-||--|-|
+|Not allowed| `Input` | String | Language service containers do not use this.|
+|Optional| `Output` | String | The target of the output mount. The default value is `/output`. This is the location of the logs. This includes container logs. <br><br>Example:<br>`--mount type=bind,src=c:\output,target=/output`|
+
+## Next steps
+
+* Use more [Azure AI containers](../../cognitive-services-container-support.md)
ai-services Multi Region Deployment https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/language-service/concepts/custom-features/multi-region-deployment.md
+
+ Title: Deploy custom language projects to multiple regions in Azure AI Language
+
+description: Learn about deploying your language projects to multiple regions.
++++++ Last updated : 10/11/2022++++
+# Deploy custom language projects to multiple regions
+
+> [!NOTE]
+> This article applies to the following custom features in Azure AI Language:
+> * [Conversational language understanding](../../conversational-language-understanding/overview.md)
+> * [Custom text classification](../../custom-text-classification/overview.md)
+> * [Custom NER](../../custom-named-entity-recognition/overview.md)
+> * [Orchestration workflow](../../orchestration-workflow/overview.md)
+
+Custom Language service features enable you to deploy your project to more than one region, making it much easier to access your project globally while managing only one instance of your project in one place.
+
+Before you deploy a project, you can assign **deployment resources** in other regions. Each deployment resource is a different Language resource from the one you use to author your project. You deploy to those resources and then target your prediction requests to that resource in their respective regions and your queries are served directly from that region.
+
+When creating a deployment, you can select which of your assigned deployment resources and their corresponding regions you would like to deploy to. The model you deploy is then replicated to each region and accessible with its own endpoint dependent on the deployment resource's custom subdomain.
+
+## Example
+
+Suppose you want to make sure your project, which is used as part of a customer support chatbot, is accessible by customers across the US and India. You would author a project with the name **ContosoSupport** using a _West US 2_ Language resource named **MyWestUS2**. Before deployment, you would assign two deployment resources to your project - **MyEastUS** and **MyCentralIndia** in _East US_ and _Central India_, respectively.
+
+When deploying your project, You would select all three regions for deployment: the original _West US 2_ region and the assigned ones through _East US_ and _Central India_.
+
+You would now have three different endpoint URLs to access your project in all three regions:
+* West US 2: `https://mywestus2.cognitiveservices.azure.com/language/:analyze-conversations`
+* East US: `https://myeastus.cognitiveservices.azure.com/language/:analyze-conversations`
+* Central India: `https://mycentralindia.cognitiveservices.azure.com/language/:analyze-conversations`
+
+The same request body to each of those different URLs serves the exact same response directly from that region.
+
+## Validations and requirements
+
+Assigning deployment resources requires Microsoft Azure Active Directory (Azure AD) authentication. Azure AD is used to confirm you have access to the resources you are interested in assigning to your project for multi-region deployment. In the Language Studio, you can automatically [enable Azure AD authentication](https://aka.ms/rbac-language) by assigning yourself the _Cognitive Services Language Owner_ role to your original resource. To programmatically use Azure AD authentication, learn more from the [Azure AI services documentation](../../../authentication.md?source=docs&tabs=powershell&tryIt=true#authenticate-with-azure-active-directory).
+
+Your project name and resource are used as its main identifiers. Therefore, a Language resource can only have a specific project name in each resource. Any other projects with the same name will not be deployable to that resource.
+
+For example, if a project **ContosoSupport** was created by resource **MyWestUS2** in _West US 2_ and deployed to resource **MyEastUS** in _East US_, the resource **MyEastUS** cannot create a different project called **ContosoSupport** and deploy a project to that region. Similarly, your collaborators cannot then create a project **ContosoSupport** with resource **MyCentralIndia** in _Central India_ and deploy it to either **MyWestUS2** or **MyEastUS**.
+
+You can only swap deployments that are available in the exact same regions, otherwise swapping will fail.
+
+If you remove an assigned resource from your project, all of the project deployments to that resource will then be deleted.
+
+> [!NOTE]
+> Orchestration workflow only:
+>
+> You **cannot** assign deployment resources to orchestration workflow projects with custom question answering or LUIS connections. You subsequently cannot add custom question answering or LUIS connections to projects that have assigned resources.
+>
+> For multi-region deployment to work as expected, the connected CLU projects **must also be deployed** to the same regional resources you've deployed the orchestration workflow project to. Otherwise the orchestration workflow project will attempt to route a request to a deployment in its region that doesn't exist.
+
+Some regions are only available for deployment and not for authoring projects.
+
+## Next steps
+
+Learn how to deploy models for:
+* [Conversational language understanding](../../conversational-language-understanding/how-to/deploy-model.md)
+* [Custom text classification](../../custom-text-classification/how-to/deploy-model.md)
+* [Custom NER](../../custom-named-entity-recognition/how-to/deploy-model.md)
+* [Orchestration workflow](../../orchestration-workflow/how-to/deploy-model.md)
ai-services Project Versioning https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/language-service/concepts/custom-features/project-versioning.md
+
+ Title: Conversational Language Understanding Project Versioning
+
+description: Learn how versioning works in conversational language understanding
++++++ Last updated : 10/10/2022+++++
+# Project versioning
+
+> [!NOTE]
+> This article applies to the following custom features in Azure AI Language:
+> * Conversational language understanding
+> * Custom text classification
+> * Custom NER
+> * Orchestration workflow
+
+Building your project typically happens in increments. You may add, remove, or edit intents, entities, labels and data at each stage. Every time you train, a snapshot of your current project state is taken to produce a model. That model saves the snapshot to be loaded back at any time. Every model acts as its own version of the project.
+
+For example, if your project has 10 intents and/or entities, with 50 training documents or utterances, it can be trained to create a model named **v1**. Afterwards, you might make changes to the project to alter the numbers of training data. The project can be trained again to create a new model named **v2**. If you don't like the changes you've made in **v2** and would like to continue from where you left off in model **v1**, then you would just need to load the model data from **v1** back into the project. Loading a model's data is possible through both the Language Studio and API. Once complete, the project will have the original amount and types of training data.
+
+If the project data is not saved in a trained model, it can be lost. For example, if you loaded model **v1**, your project now has the data that was used to train it. If you then made changes, didn't train, and loaded model **v2**, you would lose those changes as they weren't saved to any specific snapshot.
+
+If you overwrite a model with a new snapshot of data, you won't be able to revert back to any previous state of that model.
+
+You always have the option to locally export the data for every model.
+
+## Data location
+
+The data for your model versions will be saved in different locations, depending on the custom feature you're using.
+
+# [Custom NER](#tab/custom-ner)
+
+In custom named entity recognition, the data being saved to the snapshot is the labels file.
+
+# [Custom text classification](#tab/custom-text-classification)
+
+In custom text classification, the data being saved to the snapshot is the labels file.
+
+# [Orchestration workflow](#tab/orchestration-workflow)
+
+In orchestration workflow, you do not version or store the assets of the connected intents as part of the orchestration snapshot - those are managed separately. The only snapshot being taken is of the connection itself and the intents and utterances that do not have connections, including all the test data.
+
+# [Conversational language understanding](#tab/clu)
+
+In conversational language understanding, the data being saved to the snapshot are the intents and utterances included in the project.
++++
+## Next steps
+Learn how to load or export model data for:
+* [Conversational language understanding](../../conversational-language-understanding/how-to/view-model-evaluation.md#export-model-data)
+* [Custom text classification](../../custom-text-classification/how-to/view-model-evaluation.md)
+* [Custom NER](../../custom-named-entity-recognition/how-to/view-model-evaluation.md)
+* [Orchestration workflow](../../orchestration-workflow/how-to/view-model-evaluation.md)
ai-services Data Limits https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/language-service/concepts/data-limits.md
+
+ Title: Data limits for Language service features
+
+description: Data and service limitations for Azure AI Language features.
+++++++ Last updated : 10/05/2022+++
+# Service limits for Azure AI Language
+
+> [!NOTE]
+> This article only describes the limits for preconfigured features in Azure AI Language:
+> To see the service limits for customizable features, see the following articles:
+> * [Custom classification](../custom-text-classification/service-limits.md)
+> * [Custom NER](../custom-named-entity-recognition/service-limits.md)
+> * [Conversational language understanding](../conversational-language-understanding/service-limits.md)
+> * [Question answering](../question-answering/concepts/limits.md)
+
+Use this article to find the limits for the size, and rates that you can send data to the following features of the language service.
+* [Named Entity Recognition (NER)](../named-entity-recognition/overview.md)
+* [Personally Identifiable Information (PII) detection](../personally-identifiable-information/overview.md)
+* [Key phrase extraction](../key-phrase-extraction/overview.md)
+* [Entity linking](../entity-linking/overview.md)
+* [Text Analytics for health](../text-analytics-for-health/overview.md)
+* [Sentiment analysis and opinion mining](../sentiment-opinion-mining/overview.md)
+* [Language detection](../language-detection/overview.md)
+
+When using features of the Language service, keep the following information in mind:
+
+* Pricing is independent of data or rate limits. Pricing is based on the number of text records you send to the API, and is subject to your Language resource's [pricing details](https://aka.ms/unifiedLanguagePricing).
+ * A text record is measured as 1000 characters.
+* Data and rate limits are based on the number of documents you send to the API. If you need to analyze larger documents than the limit allows, you can break the text into smaller chunks of text before sending them to the API.
+* A document is a single string of text characters.
+
+## Maximum characters per document
+
+The following limit specifies the maximum number of characters that can be in a single document.
+
+| Feature | Value |
+|||
+| Text Analytics for health | 125,000 characters as measured by [StringInfo.LengthInTextElements](/dotnet/api/system.globalization.stringinfo.lengthintextelements). |
+| All other preconfigured features (synchronous) | 5,120 as measured by [StringInfo.LengthInTextElements](/dotnet/api/system.globalization.stringinfo.lengthintextelements). If you need to submit larger documents, consider using the feature asynchronously. |
+| All other preconfigured features ([asynchronous](use-asynchronously.md)) | 125,000 characters across all submitted documents, as measured by [StringInfo.LengthInTextElements](/dotnet/api/system.globalization.stringinfo.lengthintextelements) (maximum of 25 documents). |
+
+If a document exceeds the character limit, the API behaves differently depending on how you're sending requests.
+
+If you're sending requests synchronously:
+* The API doesn't process documents that exceed the maximum size, and returns an invalid document error for it. If an API request has multiple documents, the API continues processing them if they are within the character limit.
+
+If you're sending requests [asynchronously](use-asynchronously.md):
+* The API rejects the entire request and returns a `400 bad request` error if any document within it exceeds the maximum size.
+
+## Maximum request size
+
+The following limit specifies the maximum size of documents contained in the entire request.
+
+| Feature | Value |
+|||
+| All preconfigured features | 1 MB |
+
+## Maximum documents per request
+
+Exceeding the following document limits generates an HTTP 400 error code.
+
+> [!NOTE]
+> When sending asynchronous API requests, you can send a maximum of 25 documents per request.
+
+| Feature | Max Documents Per Request |
+|-|--|
+| Conversation summarization | 1 |
+| Language Detection | 1000 |
+| Sentiment Analysis | 10 |
+| Opinion Mining | 10 |
+| Key Phrase Extraction | 10 |
+| Named Entity Recognition (NER) | 5 |
+| Personally Identifying Information (PII) detection | 5 |
+| Document summarization | 25 |
+| Entity Linking | 5 |
+| Text Analytics for health |25 for the web-based API, 1000 for the container. (125,000 characters in total) |
+
+## Rate limits
+
+Your rate limit varies with your [pricing tier](https://azure.microsoft.com/pricing/details/cognitive-services/text-analytics/). These limits are the same for both versions of the API. These rate limits don't apply to the Text Analytics for health container, which doesn't have a set rate limit.
+
+| Tier | Requests per second | Requests per minute |
+||||
+| S / Multi-service | 1000 | 1000 |
+| S0 / F0 | 100 | 300 |
+
+Requests rates are measured for each feature separately. You can send the maximum number of requests for your pricing tier to each feature, at the same time. For example, if you're in the `S` tier and send 1000 requests at once, you wouldn't be able to send another request for 59 seconds.
+
+## See also
+
+* [What is Azure AI Language](../overview.md)
+* [Pricing details](https://aka.ms/unifiedLanguagePricing)
ai-services Developer Guide https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/language-service/concepts/developer-guide.md
+
+ Title: Use the Language SDK and REST API
+
+description: Learn about how to integrate the Language service SDK and REST API into your applications.
++++++ Last updated : 02/14/2023+++
+# SDK and REST developer guide for the Language service
+
+Use this article to find information on integrating the Language service SDKs and REST API into your applications.
+
+## Development options
+
+The Language service provides support through a REST API, and client libraries in several languages.
+
+# [Client library (Azure SDK)](#tab/language-studio)
+
+## Client libraries (Azure SDK)
+
+The Language service provides three namespaces for using the available features. Depending on which features and programming language you're using, you will need to download one or more of the following packages, and have the following framework/language version support:
+
+|Framework/Language | Minimum supported version |
+|||
+|.NET | .NET Framework 4.6.1 or newer, or .NET (formerly .NET Core) 2.0 or newer. |
+|Java | v8 or later |
+|JavaScript | v14 LTS or later |
+|Python| v3.7 or later |
+
+### Azure.AI.TextAnalytics
+
+>[!NOTE]
+> If you're using custom named entity recognition or custom text classification, you will need to create a project and train a model before using the SDK. The SDK only provides the ability to analyze text using models you create. See the following quickstarts for information on creating a model.
+> * [Custom named entity recognition](../custom-named-entity-recognition/quickstart.md)
+> * [Custom text classification](../custom-text-classification/quickstart.md)
+
+The `Azure.AI.TextAnalytics` namespace enables you to use the following Language features. Use the links below for articles to help you send API requests using the SDK.
+
+* [Custom named entity recognition](../custom-named-entity-recognition/how-to/call-api.md?tabs=client#send-an-entity-recognition-request-to-your-model)
+* [Custom text classification](../custom-text-classification/how-to/call-api.md?tabs=client-libraries#send-a-text-classification-request-to-your-model)
+* [Document summarization](../summarization/quickstart.md)
+* [Entity linking](../entity-linking/quickstart.md)
+* [Key phrase extraction](../key-phrase-extraction/quickstart.md)
+* [Named entity recognition (NER)](../named-entity-recognition/quickstart.md)
+* [Personally Identifying Information (PII) detection](../personally-identifiable-information/quickstart.md)
+* [Sentiment analysis and opinion mining](../sentiment-opinion-mining/quickstart.md)
+* [Text analytics for health](../text-analytics-for-health/quickstart.md)
+
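+As an illustration, a minimal Python sketch that calls two of these features might look like the following. It assumes the `azure-ai-textanalytics` package; the endpoint and key values are placeholders. See the quickstarts above for complete, supported samples.
+
+```python
+# A minimal sketch, assuming the azure-ai-textanalytics package and
+# hypothetical placeholder values for the endpoint and key.
+from azure.core.credentials import AzureKeyCredential
+from azure.ai.textanalytics import TextAnalyticsClient
+
+client = TextAnalyticsClient(
+    endpoint="https://<your-language-resource>.cognitiveservices.azure.com/",
+    credential=AzureKeyCredential("<your-key>"),
+)
+
+documents = ["The rooms were beautiful but the staff was unfriendly."]
+
+# Sentiment analysis with opinion mining enabled.
+sentiment = client.analyze_sentiment(documents, show_opinion_mining=True)[0]
+print(sentiment.sentiment, sentiment.confidence_scores)
+
+# Key phrase extraction on the same input.
+key_phrases = client.extract_key_phrases(documents)[0]
+print(key_phrases.key_phrases)
+```
+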
+As you use these features in your application, use the following documentation and code samples for additional information.
+
+| Language → Latest GA version |Reference documentation |Samples |
+|---|---|---|
+| [C#/.NET → v5.2.0](https://www.nuget.org/packages/Azure.AI.TextAnalytics/5.2.0) | [C# documentation](/dotnet/api/azure.ai.textanalytics) | [C# samples](https://github.com/Azure/azure-sdk-for-net/tree/master/sdk/textanalytics/Azure.AI.TextAnalytics/samples) |
+| [Java → v5.2.0](https://mvnrepository.com/artifact/com.azure/azure-ai-textanalytics/5.2.0) | [Java documentation](/java/api/overview/azure/ai-textanalytics-readme) | [Java Samples](https://github.com/Azure/azure-sdk-for-java/tree/main/sdk/textanalytics/azure-ai-textanalytics/src/samples) |
+| [JavaScript → v1.0.0](https://www.npmjs.com/package/@azure/ai-language-text/v/1.0.0) | [JavaScript documentation](/javascript/api/overview/azure/ai-language-text-readme) | [JavaScript samples](https://github.com/Azure/azure-sdk-for-js/tree/main/sdk/textanalytics/ai-text-analytics/samples/v5) |
+| [Python → v5.2.0](https://pypi.org/project/azure-ai-textanalytics/5.2.0/) | [Python documentation](/python/api/overview/azure/ai-language-conversations-readme) | [Python samples](https://github.com/Azure/azure-sdk-for-python/tree/main/sdk/textanalytics/azure-ai-textanalytics/samples) |
++
+### Azure.AI.Language.Conversations
+
+> [!NOTE]
+> If you're using conversational language understanding or orchestration workflow, you will need to create a project and train a model before using the SDK. The SDK only provides the ability to analyze text using models you create. See the following quickstarts for more information.
+> * [Conversational language understanding](../conversational-language-understanding/quickstart.md)
+> * [Orchestration workflow](../orchestration-workflow/quickstart.md)
+
+The `Azure.AI.Language.Conversations` namespace enables you to use the following Language features. Use the links below for articles to help you send API requests using the SDK.
+
+* [Conversational language understanding](../conversational-language-understanding/how-to/call-api.md?tabs=azure-sdk#send-a-conversational-language-understanding-request)
+* [Orchestration workflow](../orchestration-workflow/how-to/call-api.md?tabs=azure-sdk#send-an-orchestration-workflow-request)
+* [Conversation summarization (Python only)](../summarization/quickstart.md?tabs=conversation-summarization&pivots=programming-language-python)
+* [Personally Identifying Information (PII) detection for conversations](../personally-identifiable-information/how-to-call-for-conversations.md?tabs=client-libraries#examples)
+
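+As an illustration, a minimal Python sketch for conversational language understanding might look like the following. It assumes the `azure-ai-language-conversations` package; the endpoint, key, project name, and deployment name are placeholders for values you create yourself.
+
+```python
+# A minimal sketch, assuming the azure-ai-language-conversations package and a
+# hypothetical conversational language understanding project and deployment.
+from azure.core.credentials import AzureKeyCredential
+from azure.ai.language.conversations import ConversationAnalysisClient
+
+client = ConversationAnalysisClient(
+    endpoint="https://<your-language-resource>.cognitiveservices.azure.com/",
+    credential=AzureKeyCredential("<your-key>"),
+)
+
+result = client.analyze_conversation(
+    task={
+        "kind": "Conversation",
+        "analysisInput": {
+            "conversationItem": {
+                "id": "1",
+                "participantId": "user1",
+                "text": "Book me a flight to Seattle next Friday",
+            }
+        },
+        "parameters": {
+            "projectName": "<your-clu-project>",    # placeholder project name
+            "deploymentName": "<your-deployment>",  # placeholder deployment name
+        },
+    }
+)
+
+prediction = result["result"]["prediction"]
+print(prediction["topIntent"])
+```
+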
+As you use these features in your application, use the following documentation and code samples for additional information.
+
+| Language → Latest GA version | Reference documentation |Samples |
+|---|---|---|
+| [C#/.NET → v1.0.0](https://www.nuget.org/packages/Azure.AI.Language.Conversations/1.0.0) | [C# documentation](/dotnet/api/overview/azure/ai.language.conversations-readme) | [C# samples](https://aka.ms/sdk-sample-conversation-dot-net) |
+| [Python → v1.0.0](https://pypi.org/project/azure-ai-language-conversations/) | [Python documentation](/python/api/overview/azure/ai-language-conversations-readme) | [Python samples](https://aka.ms/sdk-samples-conversation-python) |
+
+### Azure.AI.Language.QuestionAnswering
+
+The `Azure.AI.Language.QuestionAnswering` namespace enables you to use the following Language features:
+
+* [Question answering](../question-answering/quickstart/sdk.md?pivots=programming-language-csharp)
+ * Authoring - Automate common tasks like adding new question answer pairs and working with projects/knowledge bases.
+ * Prediction - Answer questions based on passages of text.
+
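+As an illustration, a minimal Python prediction sketch over raw text (no knowledge base) might look like the following. It assumes the `azure-ai-language-questionanswering` package; the endpoint and key values are placeholders.
+
+```python
+# A minimal sketch, assuming the azure-ai-language-questionanswering package and
+# hypothetical placeholder values for the endpoint and key.
+from azure.core.credentials import AzureKeyCredential
+from azure.ai.language.questionanswering import QuestionAnsweringClient
+
+client = QuestionAnsweringClient(
+    endpoint="https://<your-language-resource>.cognitiveservices.azure.com/",
+    credential=AzureKeyCredential("<your-key>"),
+)
+
+# Answer a question from raw text passages, without a knowledge base.
+output = client.get_answers_from_text(
+    question="How long does it take for the battery to charge?",
+    text_documents=[
+        "Charging the battery fully takes about two hours.",
+        "The device supports fast charging over USB-C.",
+    ],
+)
+
+best = output.answers[0]
+print(best.answer, best.confidence)
+```
+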
+As you use these features in your application, use the following documentation and code samples for additional information.
+
+| Language → Latest GA version |Reference documentation |Samples |
+|---|---|---|
+| [C#/.NET → v1.0.0](https://www.nuget.org/packages/Azure.AI.Language.QuestionAnswering/1.0.0#readme-body-tab) | [C# documentation](/dotnet/api/overview/azure/ai.language.questionanswering-readme) | [C# samples](https://github.com/Azure/azure-sdk-for-net/tree/main/sdk/cognitivelanguage/Azure.AI.Language.QuestionAnswering) |
+| [Python → v1.0.0](https://pypi.org/project/azure-ai-language-questionanswering/1.0.0/) | [Python documentation](/python/api/overview/azure/ai-language-questionanswering-readme) | [Python samples](https://github.com/Azure/azure-sdk-for-python/tree/main/sdk/cognitivelanguage/azure-ai-language-questionanswering) |
+
+# [REST API](#tab/rest-api)
+
+## REST API
+
+The Language service provides multiple API endpoints depending on which feature you wish to use.
+
+### Conversation analysis authoring API
+
+The conversation analysis authoring API enables you to author custom models and create/manage projects for the following features.
+* [Conversational language understanding](../conversational-language-understanding/quickstart.md?pivots=rest-api)
+* [Orchestration workflow](../orchestration-workflow/quickstart.md?pivots=rest-api)
+
+As you use this API in your application, see the [reference documentation](/rest/api/language/2023-04-01/conversational-analysis-authoring) for additional information.
+
+### Conversation analysis runtime API
+
+The conversation analysis runtime API enables you to send requests to custom models you've created for:
+* [Conversational language understanding](../conversational-language-understanding/how-to/call-api.md?tabs=REST-APIs#send-a-conversational-language-understanding-request)
+* [Orchestration workflow](../orchestration-workflow/how-to/call-api.md?tabs=REST-APIs#send-an-orchestration-workflow-request)
+
+It additionally enables you to use the following features, without creating any models:
+* [Conversation summarization](../summarization/quickstart.md?pivots=rest-api&tabs=conversation-summarization)
+* [Personally Identifiable Information (PII) detection for conversations](../personally-identifiable-information/how-to-call-for-conversations.md?tabs=rest-api#examples)
+
+As you use this API in your application, see the [reference documentation](/rest/api/language/2023-04-01/conversation-analysis-runtime) for additional information.
++
+### Text analysis authoring API
+
+The text analysis authoring API enables you to author custom models and create/manage projects for:
+* [Custom named entity recognition](../custom-named-entity-recognition/quickstart.md?pivots=rest-api)
+* [Custom text classification](../custom-text-classification/quickstart.md?pivots=rest-api)
+
+As you use this API in your application, see the [reference documentation](/rest/api/language/2023-04-01/text-analysis-authoring) for additional information.
+
+### Text analysis runtime API
+
+The text analysis runtime API enables you to send requests to custom models you've created for:
+
+* [Custom named entity recognition](../custom-named-entity-recognition/quickstart.md?pivots=rest-api)
+* [Custom text classification](../custom-text-classification/quickstart.md?pivots=rest-api)
+
+It additionally enables you to use the following features, without creating any models:
+
+* [Document summarization](../summarization/quickstart.md?tabs=document-summarization&pivots=rest-api)
+* [Entity linking](../entity-linking/quickstart.md?pivots=rest-api)
+* [Key phrase extraction](../key-phrase-extraction/quickstart.md?pivots=rest-api)
+* [Named entity recognition (NER)](../named-entity-recognition/quickstart.md?pivots=rest-api)
+* [Personally Identifying Information (PII) detection](../personally-identifiable-information/quickstart.md?pivots=rest-api)
+* [Sentiment analysis and opinion mining](../sentiment-opinion-mining/quickstart.md?pivots=rest-api)
+* [Text analytics for health](../text-analytics-for-health/quickstart.md?pivots=rest-api)
+
+As you use this API in your application, see the [reference documentation](/rest/api/language/2023-04-01/text-analysis-runtime/analyze-text) for additional information.
+
+### Question answering APIs
+
+The question answering APIs enable you to use the [question answering](../question-answering/quickstart/sdk.md?pivots=rest) feature.
+
+#### Reference documentation
+
+As you use this API in your application, see the following reference documentation for additional information.
+
+* [Prebuilt API](/rest/api/cognitiveservices/questionanswering/question-answering/get-answers-from-text) - Use the prebuilt runtime API to answer a specified question using text provided by users.
+* [Custom authoring API](/rest/api/cognitiveservices/questionanswering/question-answering-projects) - Create a knowledge base to answer questions.
+* [Custom runtime API](/rest/api/cognitiveservices/questionanswering/question-answering/get-answers) - Query a knowledge base to generate an answer.
++++
+## See also
+
+[Azure AI Language overview](../overview.md)
ai-services Encryption Data At Rest https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/language-service/concepts/encryption-data-at-rest.md
+
+ Title: Language service encryption of data at rest
+description: Learn how the Language service encrypts your data when it's persisted to the cloud.
+++++ Last updated : 08/08/2022+
+#Customer intent: As a user of the Language service, I want to learn how encryption at rest works.
++
+# Language service encryption of data at rest
+
+The Language service automatically encrypts your data when it is persisted to the cloud. The Language service encryption protects your data and helps you meet your organizational security and compliance commitments.
+
+## About Azure AI services encryption
+
+Data is encrypted and decrypted using [FIPS 140-2](https://en.wikipedia.org/wiki/FIPS_140-2) compliant [256-bit AES](https://en.wikipedia.org/wiki/Advanced_Encryption_Standard) encryption. Encryption and decryption are transparent, meaning encryption and access are managed for you. Your data is secure by default and you don't need to modify your code or applications to take advantage of encryption.
+
+## About encryption key management
+
+By default, your subscription uses Microsoft-managed encryption keys. You can also manage your subscription with your own keys, called customer-managed keys (CMK). CMK offers greater flexibility to create, rotate, disable, and revoke access controls. You can also audit the encryption keys used to protect your data.
+
+## Customer-managed keys with Azure Key Vault
+
+There is also an option to manage your subscription with your own keys. Customer-managed keys (CMK), also known as Bring your own key (BYOK), offer greater flexibility to create, rotate, disable, and revoke access controls. You can also audit the encryption keys used to protect your data.
+
+You must use Azure Key Vault to store your customer-managed keys. You can either create your own keys and store them in a key vault, or you can use the Azure Key Vault APIs to generate keys. The Azure AI services resource and the key vault must be in the same region and in the same Azure Active Directory (Azure AD) tenant, but they can be in different subscriptions. For more information about Azure Key Vault, see [What is Azure Key Vault?](../../../key-vault/general/overview.md).
+
+### Customer-managed keys for Language service
+
+To request the ability to use customer-managed keys, fill out and submit the [Language service Customer-Managed Key Request Form](https://aka.ms/cogsvc-cmk). It will take approximately 3-5 business days to hear back on the status of your request. Depending on demand, you may be placed in a queue and approved as space becomes available. Once approved to use CMK with the Language service, you'll need to create a new Language resource from the Azure portal.
++
+### Enable customer-managed keys
+
+A new Azure AI services resource is always encrypted using Microsoft-managed keys. It's not possible to enable customer-managed keys at the time that the resource is created. Customer-managed keys are stored in Azure Key Vault, and the key vault must be provisioned with access policies that grant key permissions to the managed identity that is associated with the Azure AI services resource. The managed identity is available only after the resource is created using the Pricing Tier for CMK.
+
+To learn how to use customer-managed keys with Azure Key Vault for Azure AI services encryption, see:
+
+- [Configure customer-managed keys with Key Vault for Azure AI services encryption from the Azure portal](../../encryption/cognitive-services-encryption-keys-portal.md)
+
+Enabling customer managed keys will also enable a system assigned managed identity, a feature of Azure AD. Once the system assigned managed identity is enabled, this resource will be registered with Azure Active Directory. After being registered, the managed identity will be given access to the Key Vault selected during customer managed key setup. You can learn more about [Managed Identities](../../../active-directory/managed-identities-azure-resources/overview.md).
+
+> [!IMPORTANT]
+> If you disable system assigned managed identities, access to the key vault will be removed and any data encrypted with the customer keys will no longer be accessible. Any features that depend on this data will stop working.
+
+> [!IMPORTANT]
+> Managed identities do not currently support cross-directory scenarios. When you configure customer-managed keys in the Azure portal, a managed identity is automatically assigned under the covers. If you subsequently move the subscription, resource group, or resource from one Azure AD directory to another, the managed identity associated with the resource is not transferred to the new tenant, so customer-managed keys may no longer work. For more information, see **Transferring a subscription between Azure AD directories** in [FAQs and known issues with managed identities for Azure resources](../../../active-directory/managed-identities-azure-resources/known-issues.md#transferring-a-subscription-between-azure-ad-directories).
+
+### Store customer-managed keys in Azure Key Vault
+
+To enable customer-managed keys, you must use an Azure Key Vault to store your keys. You must enable both the **Soft Delete** and **Do Not Purge** properties on the key vault.
+
+Only RSA keys of size 2048 are supported with Azure AI services encryption. For more information about keys, see **Key Vault keys** in [About Azure Key Vault keys, secrets and certificates](../../../key-vault/general/about-keys-secrets-certificates.md).
+
+### Rotate customer-managed keys
+
+You can rotate a customer-managed key in Azure Key Vault according to your compliance policies. When the key is rotated, you must update the Azure AI services resource to use the new key URI. To learn how to update the resource to use a new version of the key in the Azure portal, see the section titled **Update the key version** in [Configure customer-managed keys for Azure AI services by using the Azure portal](../../encryption/cognitive-services-encryption-keys-portal.md).
+
+Rotating the key does not trigger re-encryption of data in the resource. There is no further action required from the user.
+
+### Revoke access to customer-managed keys
+
+To revoke access to customer-managed keys, use PowerShell or Azure CLI. For more information, see [Azure Key Vault PowerShell](/powershell/module/az.keyvault//) or [Azure Key Vault CLI](/cli/azure/keyvault). Revoking access effectively blocks access to all data in the Azure AI services resource, as the encryption key is inaccessible by Azure AI services.
+
+## Next steps
+
+* [Language service Customer-Managed Key Request Form](https://aka.ms/cogsvc-cmk)
+* [Learn more about Azure Key Vault](../../../key-vault/general/overview.md)
ai-services Language Support https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/language-service/concepts/language-support.md
+
+ Title: Language support for language features
+
+description: This article explains which natural languages are supported by the different features of Azure AI Language.
++++++ Last updated : 03/09/2023+++
+# Language support for Language features
+
+Use this article to learn about the languages currently supported by different features.
+
+> [!NOTE]
+> Some of the languages listed below are only supported in some [model versions](../concepts/model-lifecycle.md#choose-the-model-version-used-on-your-data). See the linked feature-level language support article for details.
++
+| Language | Language code | [Custom text classification](../custom-text-classification/language-support.md) | [Custom named entity recognition(NER)](../custom-named-entity-recognition/language-support.md) | [Conversational language understanding](../conversational-language-understanding/language-support.md) | [Entity linking](../entity-linking/language-support.md) | [Language detection](../language-detection/language-support.md) | [Key phrase extraction](../key-phrase-extraction/language-support.md) | [Named entity recognition(NER)](../named-entity-recognition/language-support.md) | [Orchestration workflow](../orchestration-workflow/language-support.md) | [Personally Identifiable Information (PII)](../personally-identifiable-information/language-support.md?tabs=documents) | [Conversation PII](../personally-identifiable-information/language-support.md?tabs=conversations) | [Question answering](../question-answering/language-support.md) | [Sentiment analysis](../sentiment-opinion-mining/language-support.md#sentiment-analysis-language-support) | [Opinion mining](../sentiment-opinion-mining/language-support.md#opinion-mining-language-support) | [Text Analytics for health](../text-analytics-for-health/language-support.md) | [Summarization](../summarization/language-support.md?tabs=document-summarization) | [Conversation summarization](../summarization/language-support.md?tabs=conversation-summarization) |
+|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|
+| Afrikaans | `af` | &check; | &check; | &check; | | &check; | &check; | | | | | | &check; | &check; | | | |
+| Albanian | `sq` | &check; | &check; | &check; | | &check; | &check; | | | | | | &check; | &check; | | | |
+| Amharic | `am` | &check; | &check; | &check; | | &check; | &check; | | | | | | &check; | &check; | | | |
+| Arabic | `ar` | &check; | &check; | &check; | | &check; | &check; | &check; | | &check; | | &check; | &check; | &check; | | | |
+| Armenian | `hy` | &check; | &check; | &check; | | &check; | &check; | | | | | &check; | &check; | &check; | | | |
+| Assamese | `as` | &check; | &check; | &check; | | &check; | &check; | | | | | | &check; | &check; | | | |
+| Azerbaijani | `az` | &check; | &check; | &check; | | &check; | &check; | | | | | | &check; | &check; | | | |
+| Basque | `eu` | &check; | &check; | &check; | | &check; | &check; | | | | | &check; | &check; | &check; | | | |
+| Belarusian | `be` | &check; | &check; | &check; | | &check; | &check; | | | | | | &check; | &check; | | | |
+| Bengali | `bn` | &check; | &check; | &check; | | &check; | &check; | | | | | | &check; | &check; | | | |
+| Bosnian | `bs` | &check; | &check; | &check; | | &check; | &check; | | | | | | &check; | &check; | | | |
+| Breton | `br` | &check; | &check; | &check; | | &check; | &check; | | | | | | &check; | &check; | | | |
+| Bulgarian | `bg` | &check; | &check; | &check; | | &check; | &check; | | | | | &check; | &check; | &check; | | | |
+| Burmese | `my` | &check; | &check; | &check; | | &check; | &check; | | | | | | &check; | &check; | | | |
+| Catalan | `ca` | &check; | &check; | &check; | | &check; | &check; | | | | | &check; | &check; | &check; | | | |
+| Central Khmer | `km` | | | | | &check; | | | | | | | | | | | |
+| Chinese (Simplified) | `zh-hans` | &check; | &check; | &check; | | &check; | &check; | &check; | | &check; | | &check; | &check; | &check; | | &check; | |
+| Chinese (Traditional) | `zh-hant` | &check; | &check; | &check; | | &check; | &check; | &check; | | &check; | | &check; | &check; | &check; | | | |
+| Corsican | `co` | | | | | &check; | | | | | | | | | | | |
+| Croatian | `hr` | &check; | &check; | &check; | | &check; | &check; | | | | | &check; | &check; | &check; | | | |
+| Czech | `cs` | &check; | &check; | &check; | | &check; | &check; | &check; | | &check; | | &check; | &check; | &check; | | | |
+| Danish | `da` | &check; | &check; | &check; | | &check; | &check; | &check; | | &check; | | &check; | &check; | &check; | | | |
+| Dari | `prs` | | | | | &check; | | | | | | | | | | | |
+| Divehi | `dv` | | | | | &check; | | | | | | | | | | | |
+| Dutch | `nl` | &check; | &check; | &check; | | &check; | &check; | &check; | | &check; | | &check; | &check; | &check; | | | |
+| English (UK) | `en-gb` | &check; | &check; | &check; | | &check; | | | | | | | | | | | |
+| English (US) | `en-us` | &check; | &check; | &check; | &check; | &check; | &check; | &check; | &check; | &check; | &check; | &check; | &check; | &check; | &check; | &check; | &check; |
+| Esperanto | `eo` | &check; | &check; | &check; | | &check; | &check; | | | | | | &check; | &check; | | | |
+| Estonian | `et` | &check; | &check; | &check; | | &check; | &check; | | | | | &check; | &check; | &check; | | | |
+| Fijian | `fj` | | | | | &check; | | | | | | | | | | | |
+| Filipino | `tl` | &check; | &check; | &check; | | &check; | &check; | | | | | | &check; | &check; | | | |
+| Finnish | `fi` | &check; | &check; | &check; | | &check; | &check; | &check; | | &check; | | &check; | &check; | &check; | | | |
+| French | `fr` | &check; | &check; | &check; | | &check; | &check; | &check; | &check; | &check; | | &check; | &check; | &check; | &check; | &check; | |
+| Galician | `gl` | &check; | &check; | &check; | | &check; | &check; | | | | | &check; | &check; | &check; | | | |
+| Georgian | `ka` | &check; | &check; | &check; | | &check; | &check; | | | | | | &check; | &check; | | | |
+| German | `de` | &check; | &check; | &check; | | &check; | &check; | &check; | &check; | &check; | | &check; | &check; | &check; | &check; | &check; | |
+| Greek | `el` | &check; | &check; | &check; | | &check; | &check; | | | | | &check; | &check; | &check; | | | |
+| Gujarati | `gu` | &check; | &check; | &check; | | &check; | &check; | | | | | &check; | &check; | &check; | | | |
+| Haitian | `ht` | | | | | &check; | | | | | | | | | | | |
+| Hausa | `ha` | &check; | &check; | &check; | | &check; | &check; | | | | | | &check; | &check; | | | |
+| Hebrew | `he` | &check; | &check; | &check; | | &check; | &check; | &check; | | &check; | | &check; | &check; | &check; | &check; | | |
+| Hindi | `hi` | &check; | &check; | &check; | | &check; | &check; | &check; | | &check; | | &check; | &check; | &check; | | | |
+| Hmong Daw | `mww` | | | | | &check; | | | | | | | | | | | |
+| Hungarian | `hu` | &check; | &check; | &check; | | &check; | &check; | &check; | | &check; | | &check; | &check; | &check; | | | |
+| Icelandic | `is` | | | | | &check; | | | | | | &check; | | | | | |
+| Igbo | `ig` | | | | | &check; | | | | | | | | | | | |
+| Indonesian | `id` | &check; | &check; | &check; | | &check; | &check; | | | | | &check; | &check; | &check; | | | |
+| Inuktitut | `iu` | | | | | &check; | | | | | | | | | | | |
+| Irish | `ga` | &check; | &check; | &check; | | &check; | &check; | | | | | &check; | &check; | &check; | | | |
+| Italian | `it` | &check; | &check; | &check; | | &check; | &check; | &check; | &check; | &check; | | &check; | &check; | &check; | &check; | &check; | |
+| Japanese | `ja` | &check; | &check; | &check; | | &check; | &check; | &check; | | &check; | | &check; | &check; | &check; | | &check; | |
+| Javanese | `jv` | &check; | &check; | &check; | | &check; | &check; | | | | | | &check; | &check; | | | |
+| Kannada | `kn` | &check; | &check; | &check; | | &check; | &check; | | | | | &check; | &check; | &check; | | | |
+| Kazakh | `kk` | &check; | &check; | &check; | | &check; | &check; | | | | | | &check; | &check; | | | |
+| Khmer | `km` | &check; | &check; | &check; | | &check; | &check; | | | | | | &check; | &check; | | | |
+| Kinyarwanda | `rw` | | | | | &check; | | | | | | | | | | | |
+| Korean | `ko` | &check; | &check; | &check; | | &check; | &check; | &check; | | &check; | | &check; | &check; | &check; | | &check; | |
+| Kurdish (Kurmanji) | `ku` | &check; | &check; | &check; | | &check; | &check; | | | | | | &check; | &check; | | | |
+| Kyrgyz | `ky` | &check; | &check; | &check; | | &check; | &check; | | | | | | &check; | &check; | | | |
+| Lao | `lo` | &check; | &check; | &check; | | &check; | &check; | | | | | | &check; | &check; | | | |
+| Latin | `la` | &check; | &check; | &check; | | &check; | &check; | | | | | | &check; | &check; | | | |
+| Latvian | `lv` | &check; | &check; | &check; | | &check; | &check; | | | | | &check; | &check; | &check; | | | |
+| Lithuanian | `lt` | &check; | &check; | &check; | | &check; | &check; | | | | | &check; | &check; | &check; | | | |
+| Luxembourgish | `lb` | | | | | &check; | | | | | | | | | | | |
+| Macedonian | `mk` | &check; | &check; | &check; | | &check; | &check; | | | | | | &check; | &check; | | | |
+| Malagasy | `mg` | &check; | &check; | &check; | | &check; | &check; | | | | | | &check; | &check; | | | |
+| Malay | `ms` | &check; | &check; | &check; | | &check; | &check; | | | | | &check; | &check; | &check; | | | |
+| Malayalam | `ml` | &check; | &check; | &check; | | &check; | &check; | | | | | &check; | &check; | &check; | | | |
+| Maltese | `mt` | | | | | &check; | | | | | | | | | | | |
+| Maori | `mi` | | | | | &check; | | | | | | | | | | | |
+| Marathi | `mr` | &check; | &check; | &check; | | &check; | &check; | | | | | | &check; | &check; | | | |
+| Mongolian | `mn` | &check; | &check; | &check; | | &check; | &check; | | | | | | &check; | &check; | | | |
+| Nepali | `ne` | &check; | &check; | &check; | | &check; | &check; | | | | | | &check; | &check; | | | |
+| Norwegian (Bokmal) | `nb` | &check; | &check; | &check; | | &check; | &check; | &check; | | &check; | | &check; | | | | | |
+| Norwegian | `no` | | | | | &check; | | | | | | | &check; | &check; | | | |
+| Norwegian Nynorsk | `nn` | | | | | &check; | | | | | | | | | | | |
+| Oriya | `or` | &check; | &check; | &check; | | &check; | &check; | | | | | | &check; | &check; | | | |
+| Oromo | `om` | | | | | | &check; | | | | | | &check; | &check; | | | |
+| Pashto | `ps` | &check; | &check; | &check; | | &check; | &check; | | | | | | &check; | &check; | | | |
+| Persian (Farsi) | `fa` | &check; | &check; | &check; | | &check; | &check; | | | | | | &check; | &check; | | | |
+| Polish | `pl` | &check; | &check; | &check; | | &check; | &check; | &check; | | &check; | | &check; | &check; | &check; | | | |
+| Portuguese (Brazil) | `pt-br` | &check; | &check; | &check; | | &check; | &check; | &check; | &check; | &check; | | &check; | &check; | &check; | &check; | &check; | |
+| Portuguese (Portugal) | `pt-pt` | &check; | &check; | &check; | | &check; | &check; | &check; | | &check; | | | &check; | &check; | | &check; | |
+| Punjabi | `pa` | &check; | &check; | &check; | | &check; | &check; | | | | | &check; | &check; | &check; | | | |
+| Queretaro Otomi | `otq` | | | | | &check; | | | | | | | | | | | |
+| Romanian | `ro` | &check; | &check; | &check; | | &check; | &check; | | | | | &check; | &check; | &check; | | | |
+| Russian | `ru` | &check; | &check; | &check; | | &check; | &check; | &check; | | &check; | | &check; | &check; | &check; | | | |
+| Samoan | `sm` | | | | | &check; | | | | | | | | | | | |
+| Sanskrit | `sa` | &check; | &check; | &check; | | &check; | &check; | | | | | | &check; | &check; | | | |
+| Scottish Gaelic | `gd` | &check; | &check; | &check; | | &check; | &check; | | | | | | &check; | &check; | | | |
+| Serbian | `sr` | &check; | &check; | &check; | | &check; | &check; | | | | | &check; | | | | | |
+| Shona | `sn` | | | | | &check; | | | | | | | | | | | |
+| Sindhi | `sd` | &check; | &check; | &check; | | &check; | &check; | | | | | | &check; | &check; | | | |
+| Sinhala | `si` | &check; | &check; | &check; | | &check; | &check; | | | | | | &check; | &check; | | | |
+| Slovak | `sk` | &check; | &check; | &check; | | &check; | &check; | | | | | &check; | &check; | &check; | | | |
+| Slovenian | `sl` | &check; | &check; | &check; | | &check; | &check; | | | | | &check; | &check; | &check; | | | |
+| Somali | `so` | &check; | &check; | &check; | | &check; | &check; | | | | | | &check; | &check; | | | |
+| Spanish | `es` | &check; | &check; | &check; | &check; | &check; | &check; | &check; | &check; | &check; | | &check; | &check; | &check; | &check; | | |
+| Sundanese | `su` | &check; | &check; | &check; | | &check; | &check; | | | | | | &check; | &check; | | | |
+| Swahili | `sw` | &check; | &check; | &check; | | &check; | &check; | | | | | | &check; | &check; | | | |
+| Swedish | `sv` | &check; | &check; | &check; | | &check; | &check; | &check; | | &check; | | &check; | &check; | &check; | | | |
+| Tahitian | `ty` | | | | | &check; | | | | | | | | | | | |
+| Tajik | `tg` | | | | | &check; | | | | | | | | | | | |
+| Tamil | `ta` | &check; | &check; | &check; | | &check; | &check; | | | | | &check; | &check; | &check; | | | |
+| Tatar | `tt` | | | | | &check; | | | | | | | | | | | |
+| Telugu | `te` | &check; | &check; | &check; | | &check; | &check; | | | | | &check; | &check; | &check; | | | |
+| Thai | `th` | &check; | &check; | &check; | | &check; | &check; | | | | | &check; | &check; | &check; | | | |
+| Tibetan | `bo` | | | | | &check; | | | | | | | | | | | |
+| Tigrinya | `ti` | | | | | &check; | | | | | | | | | | | |
+| Tongan | `to` | | | | | &check; | | | | | | | | | | | |
+| Turkish | `tr` | &check; | &check; | &check; | | &check; | &check; | &check; | | &check; | | &check; | &check; | &check; | | | |
+| Turkmen | `tk` | | | | | &check; | | | | | | | | | | | |
+| Ukrainian | `uk` | &check; | &check; | &check; | | &check; | &check; | | | | | &check; | &check; | &check; | | | |
+| Urdu | `ur` | &check; | &check; | &check; | | &check; | &check; | | | | | &check; | &check; | &check; | | | |
+| Uyghur | `ug` | &check; | &check; | &check; | | &check; | &check; | | | | | | &check; | &check; | | | |
+| Uzbek | `uz` | &check; | &check; | &check; | | &check; | &check; | | | | | | &check; | &check; | | | |
+| Vietnamese | `vi` | &check; | &check; | &check; | | &check; | &check; | | | | | &check; | &check; | &check; | | | |
+| Welsh | `cy` | &check; | &check; | &check; | | &check; | &check; | | | | | | &check; | &check; | | | |
+| Western Frisian | `fy` | &check; | &check; | &check; | | &check; | &check; | | | | | | &check; | &check; | | | |
+| Xhosa | `xh` | &check; | &check; | &check; | | &check; | &check; | | | | | | &check; | &check; | | | |
+| Yiddish | `yi` | &check; | &check; | &check; | | &check; | &check; | | | | | | &check; | &check; | | | |
+| Yoruba | `yo` | | | | | &check; | | | | | | | | | | | |
+| Yucatec Maya | `yua` | | | | | &check; | | | | | | | | | | | |
+| Zulu | `zu` | &check; | &check; | &check; | | &check; | | | | | | | | | | | |
+
+## See also
+
+See the following service-level language support articles for information on model version support for each language:
+* [Custom text classification](../custom-text-classification/language-support.md)
+* [Custom named entity recognition(NER)](../custom-named-entity-recognition/language-support.md)
+* [Conversational language understanding](../conversational-language-understanding/language-support.md)
+* [Entity linking](../entity-linking/language-support.md)
+* [Language detection](../language-detection/language-support.md)
+* [Key phrase extraction](../key-phrase-extraction/language-support.md)
+* [Named entity recognition(NER)](../named-entity-recognition/language-support.md)
+* [Orchestration workflow](../orchestration-workflow/language-support.md)
+* [Personally Identifiable Information (PII)](../personally-identifiable-information/language-support.md?tabs=documents)
+* [Conversation PII](../personally-identifiable-information/language-support.md?tabs=conversations)
+* [Question answering](../question-answering/language-support.md)
+* [Sentiment analysis](../sentiment-opinion-mining/language-support.md#sentiment-analysis-language-support)
+* [Opinion mining](../sentiment-opinion-mining/language-support.md#opinion-mining-language-support)
+* [Text Analytics for health](../text-analytics-for-health/language-support.md)
+* [Summarization](../summarization/language-support.md?tabs=document-summarization)
+* [Conversation summarization](../summarization/language-support.md?tabs=conversation-summarization)
ai-services Migrate Language Service Latest https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/language-service/concepts/migrate-language-service-latest.md
+
+ Title: Migrate to the latest version of Azure AI Language
+
+description: Learn how to move your Text Analytics applications to use the latest version of the Language service.
++++++ Last updated : 08/08/2022++++
+# Migrate to the latest version of Azure AI Language
+
+> [!TIP]
+> Just getting started with Azure AI Language? See the [overview article](../overview.md) for details on the service, available features, and links to quickstarts for information on the current version of the API.
+
+If your applications are still using the Text Analytics API or client library (before stable v5.1.0), this article will help you upgrade them to use the latest version of the [Azure AI Language](../overview.md) features.
+
+## Unified Language endpoint (REST API)
+
+This section applies to applications that use the older `/text/analytics/...` endpoint format for REST API calls. For example:
+
+```http
+https://<your-custom-subdomain>.cognitiveservices.azure.com/text/analytics/<version>/<feature>
+```
+
+If your application uses the above endpoint format, the REST API endpoint for the following Language service features has changed:
+
+* [Entity linking](../entity-linking/quickstart.md?pivots=rest-api)
+* [Key phrase extraction](../key-phrase-extraction/quickstart.md?pivots=rest-api)
+* [Language detection](../language-detection/quickstart.md?pivots=rest-api)
+* [Named entity recognition (NER)](../named-entity-recognition/quickstart.md?pivots=rest-api)
+* [Personally Identifying Information (PII) detection](../personally-identifiable-information/quickstart.md?pivots=rest-api)
+* [Sentiment analysis and opinion mining](../sentiment-opinion-mining/quickstart.md?pivots=rest-api)
+* [Text analytics for health](../text-analytics-for-health/quickstart.md?pivots=rest-api)
+
+The Language service now provides a unified endpoint for sending REST API requests to these features. If your application uses the REST API, update its request endpoint to use the current endpoint:
+
+```http
+https://<your-language-resource-endpoint>/language/:analyze-text?api-version=2022-05-01
+```
+
+Additionally, the format of the JSON request body has changed. You'll need to update the request structure that your application sends to the API, for example the following entity recognition JSON body:
+
+```json
+{
+ "kind": "EntityRecognition",
+ "parameters": {
+ "modelVersion": "latest"
+ },
+ "analysisInput":{
+ "documents":[
+ {
+ "id":"1",
+ "language": "en",
+ "text": "I had a wonderful trip to Seattle last week."
+ }
+ ]
+ }
+}
+```
+
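+As an illustration, a minimal Python sketch that sends the request body above to the unified endpoint using the `requests` package might look like this; the endpoint and key values are placeholders.
+
+```python
+# A minimal sketch that posts the entity recognition body shown above to the
+# unified endpoint. The endpoint and key values are hypothetical placeholders.
+import requests
+
+endpoint = "https://<your-language-resource-endpoint>"
+key = "<your-key>"
+
+url = f"{endpoint}/language/:analyze-text?api-version=2022-05-01"
+headers = {
+    "Ocp-Apim-Subscription-Key": key,
+    "Content-Type": "application/json",
+}
+body = {
+    "kind": "EntityRecognition",
+    "parameters": {"modelVersion": "latest"},
+    "analysisInput": {
+        "documents": [
+            {
+                "id": "1",
+                "language": "en",
+                "text": "I had a wonderful trip to Seattle last week.",
+            }
+        ]
+    },
+}
+
+response = requests.post(url, headers=headers, json=body)
+print(response.json())
+```
+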
+Use the quickstarts linked above to see current example REST API calls for the feature(s) you're using, and the associated API output.
+
+## Client libraries
+
+To use the latest version of the client library, you will need to download the latest software package in the `Azure.AI.TextAnalytics` namespace. See the quickstart articles linked above for example code and instructions for using the client library in your preferred language.
+
+<!--[!INCLUDE [SDK target versions](../includes/sdk-target-versions.md)]-->
++
+## Version 2.1 functionality changes
+
+If you're migrating an application from v2.1 of the API, there are several changes to feature functionality you should be aware of.
+
+### Sentiment analysis v2.1
+
+[Sentiment Analysis](../sentiment-opinion-mining/quickstart.md) in version 2.1 returns sentiment scores between 0 and 1 for each document sent to the API, with scores closer to 1 indicating more positive sentiment. The current version of this feature returns sentiment labels (such as "positive" or "negative") for both the sentences and the document as a whole, and their associated confidence scores.
+
+### NER, PII, and entity linking v2.1
+
+In version 2.1, the Text Analytics API used one endpoint for Named Entity Recognition (NER) and entity linking. The current version of this feature provides expanded named entity detection, and has separate endpoints for [NER](../named-entity-recognition/quickstart.md?pivots=rest-api) and [entity linking](../entity-linking/quickstart.md?pivots=rest-api) requests. Additionally, you can use another feature offered in the Language service that lets you [detect personal (PII) and health (PHI) information](../personally-identifiable-information/overview.md).
+
+You will also need to update your application to use the [entity categories](../named-entity-recognition/concepts/named-entity-categories.md) returned in the [API's response](../named-entity-recognition/how-to-call.md).
+
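+As an illustration, a minimal Python sketch showing the now-separate NER, entity linking, and PII requests might look like the following. It assumes the `azure-ai-textanalytics` package; the endpoint and key values are placeholders.
+
+```python
+# A minimal sketch, assuming the azure-ai-textanalytics package and hypothetical
+# placeholder values; NER, entity linking, and PII detection are separate calls.
+from azure.core.credentials import AzureKeyCredential
+from azure.ai.textanalytics import TextAnalyticsClient
+
+client = TextAnalyticsClient(
+    endpoint="https://<your-language-resource>.cognitiveservices.azure.com/",
+    credential=AzureKeyCredential("<your-key>"),
+)
+
+documents = ["Contoso employee Jane Doe visited the Space Needle on May 1."]
+
+entities = client.recognize_entities(documents)[0]
+print([(e.text, e.category) for e in entities.entities])
+
+linked = client.recognize_linked_entities(documents)[0]
+print([(e.name, e.url) for e in linked.entities])
+
+pii = client.recognize_pii_entities(documents)[0]
+print(pii.redacted_text)
+```
+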
+### Version 2.1 entity categories
+
+The following table lists the entity categories returned for NER v2.1.
+
+| Category | Description |
+|---|---|
+| Person | Names of people. |
+|Location | Natural and human-made landmarks, structures, geographical features, and geopolitical entities |
+|Organization | Companies, political groups, musical bands, sport clubs, government bodies, and public organizations. Nationalities and religions are not included in this entity type. |
+| PhoneNumber | Phone numbers (US and EU phone numbers only). |
+| Email | Email addresses. |
+| URL | URLs to websites. |
+| IP | Network IP addresses. |
+| DateTime | Dates and times of day.|
+| Date | Calendar dates. |
+| Time | Times of day. |
+| DateRange | Date ranges. |
+| TimeRange | Time ranges. |
+| Duration | Durations. |
+| Set | Set, repeated times. |
+| Quantity | Numbers and numeric quantities. |
+| Number | Numbers. |
+| Percentage | Percentages.|
+| Ordinal | Ordinal numbers. |
+| Age | Ages. |
+| Currency | Currencies. |
+| Dimension | Dimensions and measurements. |
+| Temperature | Temperatures. |
+
+### Language detection v2.1
+
+The [language detection](../language-detection/quickstart.md) feature output has changed in the current version. The JSON response will contain `ConfidenceScore` instead of `score`. The current version also only returns one language for each document.
+
+### Key phrase extraction v2.1
+
+Other than the endpoint and request format, the key phrase extraction feature's functionality hasn't changed.
+
+## See also
+
+* [What is Azure AI Language?](../overview.md)
+* [Language service developer guide](developer-guide.md)
+* See the following reference documentation for information on previous API versions.
+ * [Version 2.1](https://westcentralus.dev.cognitive.microsoft.com/docs/services/TextAnalytics-v2-1/operations/56f30ceeeda5650db055a3c9)
+ * [Version 3.0](https://westus.dev.cognitive.microsoft.com/docs/services/TextAnalytics-v3-0/operations/Sentiment)
+ * [Version 3.1](https://westcentralus.dev.cognitive.microsoft.com/docs/services/TextAnalytics-v3-1/operations/Sentiment)
+* Use the following quickstart guides to see examples for the current version of these features.
+ * [Entity linking](../entity-linking/quickstart.md)
+ * [Key phrase extraction](../key-phrase-extraction/quickstart.md)
+ * [Named entity recognition (NER)](../named-entity-recognition/quickstart.md)
+ * [Language detection](../language-detection/quickstart.md)
+ * [Personally Identifying Information (PII) detection](../personally-identifiable-information/quickstart.md)
+ * [Sentiment analysis and opinion mining](../sentiment-opinion-mining/quickstart.md)
+ * [Text analytics for health](../text-analytics-for-health/quickstart.md)
+
ai-services Migrate https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/language-service/concepts/migrate.md
+
+ Title: "Migrate to Azure AI Language from: LUIS, QnA Maker, and Text Analytics"
+
+description: Use this article to learn if you need to migrate your applications from LUIS, QnA Maker, and Text Analytics.
+++++ Last updated : 09/29/2022++++
+# Migrating to Azure AI Language
+
+On November 2nd 2021, Azure AI Language was released into public preview. This language service unifies the Text Analytics, QnA Maker, and LUIS service offerings, and provides several new features as well.
+
+## Do I need to migrate to the language service if I am using Text Analytics?
+
+Text Analytics has been incorporated into the language service, and its features are still available. If you were using Text Analytics, your applications should continue to work without breaking changes. You can also see the [Text Analytics migration guide](migrate-language-service-latest.md), if you need to update an older application.
+
+Consider using one of the available quickstart articles to see the latest information on service endpoints, and API calls.
+
+## How do I migrate to the language service if I am using LUIS?
+
+If you're using Language Understanding (LUIS), you can [import your LUIS JSON file](../conversational-language-understanding/how-to/migrate-from-luis.md) to the new Conversational language understanding feature.
+
+## How do I migrate to the language service if I am using QnA Maker?
+
+If you're using QnA Maker, see the [migration guide](../question-answering/how-to/migrate-qnamaker.md) for information on migrating knowledge bases from QnA Maker to question answering.
+
+## See also
+
+* [Azure AI Language overview](../overview.md)
+* [Conversational language understanding overview](../conversational-language-understanding/overview.md)
+* [Question answering overview](../question-answering/overview.md)
ai-services Model Lifecycle https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/language-service/concepts/model-lifecycle.md
+
+ Title: Model Lifecycle of Language service models
+
+description: This article describes the timelines for models and model versions used by Language service features.
+++++++ Last updated : 11/29/2022+++
+# Model lifecycle
+
+Language service features utilize AI models. We update the language service with new model versions to improve accuracy, support, and quality. As models become older, they are retired. Use this article for information on that process, and what you can expect for your applications.
+
+## Prebuilt features
++
+Our standard (not customized) language service features are built on AI models that we call pre-trained models.
+
+We regularly update the language service with new model versions to improve model accuracy, support, and quality.
+
+By default, all API requests will use the latest Generally Available (GA) model.
+
+#### Choose the model-version used on your data
+
+We strongly recommend using the `latest` model version to utilize the latest and highest quality models. As our models improve, it's possible that some of your model results may change. Model versions may be deprecated, so we don't recommend pinning a specific version in your implementation.
+
+Preview models used for preview features do not maintain a minimum retirement period and may be deprecated at any time.
+
+By default, API and SDK requests will use the latest Generally Available model. You can use an optional parameter to select the version of the model to be used (not recommended).
+
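+For example, in the Python client library the optional `model_version` keyword selects the model version for a request. The sketch below assumes the `azure-ai-textanalytics` package; the endpoint and key values are placeholders.
+
+```python
+# A minimal sketch, assuming the azure-ai-textanalytics package and hypothetical
+# placeholder values. The model_version keyword pins a specific model version;
+# omitting it uses the latest Generally Available model.
+from azure.core.credentials import AzureKeyCredential
+from azure.ai.textanalytics import TextAnalyticsClient
+
+client = TextAnalyticsClient(
+    endpoint="https://<your-language-resource>.cognitiveservices.azure.com/",
+    credential=AzureKeyCredential("<your-key>"),
+)
+
+documents = ["The concert was fantastic."]
+
+# Not recommended: pinning an explicit model version.
+pinned = client.analyze_sentiment(documents, model_version="2022-11-01")
+
+# Recommended: use the latest GA model (the default).
+latest = client.analyze_sentiment(documents)
+
+print(pinned[0].sentiment, latest[0].sentiment)
+```
+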
+> [!NOTE]
+> * If you are using a model version that is not listed in the table, then it was subject to the expiration policy.
+> * Abstractive document and conversation summarization do not provide model versions other than the latest available.
+
+Use the table below to find which model versions are supported by each feature:
+
+| Feature | Supported versions |
+|--|--|
+| Sentiment Analysis and opinion mining | `2021-10-01`, `2022-06-01`,`2022-10-01`,`2022-11-01*` |
+| Language Detection | `2021-11-20`, `2022-10-01*` |
+| Entity Linking | `2021-06-01*` |
+| Named Entity Recognition (NER) | `2021-06-01*`, `2022-10-01-preview`, `2023-02-01-preview**` |
+| Personally Identifiable Information (PII) detection | `2021-01-15*`, `2023-01-01-preview**` |
+| PII detection for conversations (Preview) | `2022-05-15-preview**` |
+| Question answering | `2021-10-01*` |
+| Text Analytics for health | `2021-05-15`, `2022-03-01*`, `2022-08-15-preview`, `2023-01-01-preview**` |
+| Key phrase extraction | `2021-06-01`, `2022-07-01`,`2022-10-01*` |
+| Document summarization - extractive only (preview) | `2022-08-31-preview**` |
+
+\* Latest Generally Available (GA) model version
+
+\*\* Latest preview version
++
+## Custom features
+
+### Expiration timeline
+
+As new training configs and new functionality become available, older and less accurate configs are retired. See the following timelines for config expiration:
+
+New configs are released every few months, so any publicly available training config expires **six months** after its release. If you've assigned a trained model to a deployment, the deployment expires **twelve months** after the training config expiration. If your models are about to expire, you can retrain and redeploy them with the latest training configuration version.
+
+After a training config version expires, API calls made with that config version will return an error. By default, training requests use the latest available training config version. To change the config version, use `trainingConfigVersion` when submitting a training job and assign the version you want.
+
+> [!Tip]
+> It's recommended to use the latest supported config version.
+
+You can train and deploy a custom AI model from the date of training config version release, up until the **Training config expiration** date. After this date, you'll have to use another supported training config version for submitting any training or deployment jobs.
+
+Deployment expiration is the date after which your deployed model becomes unavailable for prediction.
+
+Use the table below to find which training config versions are supported by each feature:
+
+| Feature | Supported Training config versions | Training config expiration | Deployment expiration |
+|---|---|---|---|
+| Custom text classification | `2022-05-01` | `2023-05-01` | `2024-04-30` |
+| Conversational language understanding | `2022-05-01` | `2022-10-28` | `2023-10-28` |
+| Conversational language understanding | `2022-09-01` | `2023-02-28` | `2024-02-28` |
+| Custom named entity recognition | `2022-05-01` | `2023-05-01` | `2024-04-30` |
+| Orchestration workflow | `2022-05-01` | `2023-05-01` | `2024-04-30` |
++
+## API versions
+
+When you're making API calls to the following features, you need to specify the `API-VERSION` you want to use to complete your request. It's recommended to use the latest available API version.
+
+If you're using [Language Studio](https://aka.ms/languageStudio) for your projects, you'll use the latest API version available. Other API versions are only available through the REST APIs and client libraries.
+
+Use the following table to find which API versions are supported by each feature:
+
+| Feature | Supported versions | Latest Generally Available version | Latest preview version |
+|---|---|---|---|
+| Custom text classification | `2022-05-01`, `2022-10-01-preview`, `2023-04-01` | `2022-05-01` | `2022-10-01-preview` |
+| Conversational language understanding | `2022-05-01`, `2022-10-01-preview`, `2023-04-01` | `2023-04-01` | `2022-10-01-preview` |
+| Custom named entity recognition | `2022-05-01`, `2022-10-01-preview`, `2023-04-01` | `2023-04-01` | `2022-10-01-preview` |
+| Orchestration workflow | `2022-05-01`, `2022-10-01-preview`, `2023-04-01` | `2023-04-01` | `2022-10-01-preview` |
+
+## Next steps
+
+[Azure AI Language overview](../overview.md)
ai-services Multilingual Emoji Support https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/language-service/concepts/multilingual-emoji-support.md
+
+ Title: Multilingual and emoji support in Azure AI Language
+
+description: Learn about offsets caused by multilingual and emoji encodings in Language service features.
++++++ Last updated : 11/02/2021++++
+# Multilingual and emoji support in Language service features
+
+Multilingual and emoji support has led to Unicode encodings that use more than one [code point](https://wikipedia.org/wiki/Code_point) to represent a single displayed character, called a grapheme. For example, emojis like 🌷 and 👍 may use several characters to compose the shape with additional characters for visual attributes, such as skin tone. Similarly, the Hindi word `अनुच्छेद` is encoded as five letters and three combining marks.
+
+Because of the different lengths of possible multilingual and emoji encodings, Language service features may return offsets in the response.
+
+## Offsets in the API response
+
+Whenever offsets are returned in the API response, remember:
+
+* Elements in the response may be specific to the endpoint that was called.
+* HTTP POST/GET payloads are encoded in [UTF-8](https://www.w3schools.com/charsets/ref_html_utf8.asp), which may or may not be the default character encoding on your client-side compiler or operating system.
+* Offsets refer to grapheme counts based on the [Unicode 8.0.0](https://unicode.org/versions/Unicode8.0.0) standard, not character counts.
+
+## Extracting substrings from text with offsets
+
+Offsets can cause problems when using character-based substring methods, for example the .NET [substring()](/dotnet/api/system.string.substring) method. One problem is that an offset may cause a substring method to end in the middle of a multi-character grapheme encoding instead of at its end.
+
+In .NET, consider using the [StringInfo](/dotnet/api/system.globalization.stringinfo) class, which enables you to work with a string as a series of textual elements, rather than individual character objects. You can also look for grapheme splitter libraries in your preferred software environment.
+
+The Language service features return these textual elements as well, for convenience.
+
+Endpoints that return an offset will support the `stringIndexType` parameter. This parameter adjusts the `offset` and `length` attributes in the API output to match the requested string iteration scheme. Currently, we support three types:
+
+- `textElement_v8` (default): iterates over graphemes as defined by the [Unicode 8.0.0](https://unicode.org/versions/Unicode8.0.0) standard
+- `unicodeCodePoint`: iterates over [Unicode Code Points](http://www.unicode.org/versions/Unicode13.0.0/ch02.pdf#G25564), the default scheme for Python 3
+- `utf16CodeUnit`: iterates over [UTF-16 Code Units](https://unicode.org/faq/utf_bom.html#UTF16), the default scheme for JavaScript, Java, and .NET
+
+If the `stringIndexType` requested matches the programming environment of choice, substring extraction can be done using standard substring or slice methods.
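+
+For example, the following Python 3 sketch shows how the counts differ between schemes, and how an offset returned with `stringIndexType=unicodeCodePoint` lines up with Python slicing. The offset and length values shown are hypothetical examples of what the API might return for the emoji.
+
+```python
+# A minimal sketch (Python 3) showing why offsets depend on the indexing scheme.
+thumbs = "👍🏻"  # one displayed grapheme: thumbs-up plus a skin-tone modifier
+
+print(len(thumbs))                           # 2 -> unicodeCodePoint counts (Python 3 default)
+print(len(thumbs.encode("utf-16-le")) // 2)  # 4 -> utf16CodeUnit counts (JavaScript, Java, .NET)
+
+# An offset/length pair returned with stringIndexType=unicodeCodePoint can be
+# used directly with Python slicing (hypothetical values for the emoji below).
+text = "Great trip 👍🏻 to Seattle"
+offset, length = 11, 2
+print(text[offset:offset + length])          # 👍🏻
+```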
+
+## See also
+
+* [Language service overview](../overview.md)
ai-services Previous Updates https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/language-service/concepts/previous-updates.md
+
+ Title: Previous language service updates
+
+description: An archive of previous Azure AI Language updates.
++++++ Last updated : 06/23/2022++++
+# Previous updates for Azure AI Language
+
+This article contains a list of previously recorded updates for Azure AI Language. For more current service updates, see [What's new](../whats-new.md).
+
+## October 2021
+
+* Quality improvements for the extractive summarization feature in model-version `2021-08-01`.
+
+## September 2021
+
+* Starting with version `3.0.017010001-onprem-amd64`, the Text Analytics for health container can now be called using the client library.
+
+## July 2021
+
+* General availability for text analytics for health containers and API.
+* General availability for opinion mining.
+* General availability for PII extraction and redaction.
+* General availability for asynchronous operation.
+
+## June 2021
+
+### General API updates
+
+* New model-version `2021-06-01` for key phrase extraction based on transformers. It provides:
+ * Support for 10 languages (Latin and CJK).
+ * Improved key phrase extraction.
+* The `2021-06-01` model version for Named Entity Recognition (NER), which provides:
+ * Improved AI quality and expanded language support for the *Skill* entity category.
+ * Added Spanish, French, German, Italian, and Portuguese language support for the *Skill* entity category.
+
+### Text Analytics for health updates
+
+* A new model version `2021-05-15` for the `/health` endpoint and on-premises container, which provides:
+ * 5 new entity types: `ALLERGEN`, `CONDITION_SCALE`, `COURSE`, `EXPRESSION`, and `MUTATION_TYPE`
+ * 14 new relation types
+ * Expanded assertion detection for the new entity types
+ * Linking support for the `ALLERGEN` entity type
+* A new image for the Text Analytics for health container with tag `3.0.016230002-onprem-amd64` and model version `2021-05-15`. This container is available for download from Microsoft Container Registry.
+
+## May 2021
+
+* Custom question answering (previously QnA Maker) can now be accessed using a Text Analytics resource.
+* Preview API release, including:
+ * Asynchronous API now supports sentiment analysis and opinion mining.
+ * A new query parameter, `LoggingOptOut`, is now available for customers who wish to opt out of logging input text for incident reports.
+* Text analytics for health and asynchronous operations are now available in all regions.
+
+## March 2021
+
+* Changes in the opinion mining JSON response body:
+ * `aspects` is now `targets` and `opinions` is now `assessments`.
+* Changes in the JSON response body of the hosted web API of text analytics for health:
+ * The `isNegated` boolean name of a detected entity object for negation is deprecated and replaced by assertion detection.
+ * A new property called `role` is now part of the extracted relation between an attribute and an entity as well as the relation between entities. This adds specificity to the detected relation type.
+* Entity linking is now available as an asynchronous task.
+* A new `pii-categories` parameter for the PII feature.
+ * This parameter lets you specify select PII entities, as well as those not supported by default for the input language.
+* Updated client libraries, which include asynchronous and text analytics for health operations.
+
+* A new model version `2021-03-01` for text analytics for health API and on-premises container which provides:
+ * A rename of the `Gene` entity type to `GeneOrProtein`.
+ * A new `Date` entity type.
+ * Assertion detection which replaces negation detection.
+ * A new preferred `name` property for linked entities that is normalized from various ontologies and coding systems.
+* A new text analytics for health container image with tag `3.0.015490002-onprem-amd64` and the new model-version `2021-03-01` has been released to the container preview repository.
+ * This container image will no longer be available for download from `containerpreview.azurecr.io` after April 26th, 2021.
+* **Processed Text Records** is now available as a metric in the **Monitoring** section for your text analytics resource in the Azure portal.
+
+## February 2021
+
+* The `2021-01-15` model version for the PII feature, which provides:
+ * Expanded support for 9 new languages
+ * Improved AI quality
+* The S0 through S4 pricing tiers are being retired on March 8th, 2021.
+* The language detection container is now generally available.
+
+## January 2021
+
+* The `2021-01-15` model version for Named Entity Recognition (NER), which provides
+ * Expanded language support.
+ * Improved AI quality of general entity categories for all supported languages.
+* The `2021-01-05` model version for language detection, which provides additional language support.
+
+## November 2020
+
+* Portuguese (Brazil) `pt-BR` is now supported in sentiment analysis, starting with model version `2020-04-01`. It adds to the existing `pt-PT` support for Portuguese.
+* Updated client libraries, which include asynchronous and text analytics for health operations.
+
+## October 2020
+
+* Hindi support for sentiment analysis, starting with model version `2020-04-01`.
+* Model version `2020-09-01` for language detection, which adds additional language support and accuracy improvements.
+
+## September 2020
+
+* PII now includes the new `redactedText` property in the response JSON where detected PII entities in the input text are replaced by an `*` for each character of those entities.
+* Entity linking endpoint now includes the `bingID` property in the response JSON for linked entities.
+* The following updates are specific to the September release of the text analytics for health container only.
+ * A new container image with tag `1.1.013530001-amd64-preview` with the new model-version `2020-09-03` has been released to the container preview repository.
+ * This model version provides improvements in entity recognition, abbreviation detection, and latency enhancements.
+
+## August 2020
+
+* Model version `2020-07-01` for key phrase extraction, PII detection, and language detection. This update adds:
+ * Additional government and country/region specific entity categories for Named Entity Recognition.
+ * Norwegian and Turkish support in Sentiment Analysis.
+* An HTTP 400 error will now be returned for API requests that exceed the published data limits.
+* Endpoints that return an offset now support the optional `stringIndexType` parameter, which adjusts the returned `offset` and `length` values to match a supported string index scheme.
+
+The following updates are specific to the August release of the Text Analytics for health container only.
+
+* New model-version for Text Analytics for health: `2020-07-24`
+
+The following properties in the JSON response have changed:
+
+* `type` has been renamed to `category`
+* `score` has been renamed to `confidenceScore`
+* Entities in the `category` field of the JSON output are now in pascal case. The following entities have been renamed:
+ * `EXAMINATION_RELATION` has been renamed to `RelationalOperator`.
+ * `EXAMINATION_UNIT` has been renamed to `MeasurementUnit`.
+ * `EXAMINATION_VALUE` has been renamed to `MeasurementValue`.
+ * `ROUTE_OR_MODE` has been renamed `MedicationRoute`.
+ * The relational entity `ROUTE_OR_MODE_OF_MEDICATION` has been renamed to `RouteOfMedication`.
+
+The following entities have been added:
+
+* Named Entity Recognition
+ * `AdministrativeEvent`
+ * `CareEnvironment`
+ * `HealthcareProfession`
+ * `MedicationForm`
+
+* Relation extraction
+ * `DirectionOfCondition`
+ * `DirectionOfExamination`
+ * `DirectionOfTreatment`
+
+## May 2020
+
+* Model version `2020-04-01`:
+ * Updated language support for sentiment analysis
+ * New "Address" entity category in Named Entity Recognition (NER)
+ * New subcategories in NER:
+ * Location - Geographical
+ * Location - Structural
+ * Organization - Stock Exchange
+ * Organization - Medical
+ * Organization - Sports
+ * Event - Cultural
+ * Event - Natural
+ * Event - Sports
+
+* The following properties in the JSON response have been added:
+ * `SentenceText` in sentiment analysis
+ * `Warnings` for each document
+
+* The names of the following properties in the JSON response have been changed, where applicable:
+
+* `score` has been renamed to `confidenceScore`
+ * `confidenceScore` has two decimal points of precision.
+* `type` has been renamed to `category`
+* `subtype` has been renamed to `subcategory`
+
+* New sentiment analysis feature - opinion mining
+* New personal information (`PII`) domain filter for protected health information (`PHI`).
+
+## February 2020
+
+Additional entity types are now available in the Named Entity Recognition (NER). This update introduces model version `2020-02-01`, which includes:
+
+* Recognition of the following general entity types (English only):
+ * PersonType
+ * Product
+ * Event
+ * Geopolitical Entity (GPE) as a subtype under Location
+ * Skill
+
+* Recognition of the following personal information entity types (English only):
+ * Person
+ * Organization
+ * Age as a subtype under Quantity
+ * Date as a subtype under DateTime
+ * Email
+ * Phone Number (US only)
+ * URL
+ * IP Address
+
+## October 2019
+
+* Introduction of PII feature
+* Model version `2019-10-01`, which includes:
+ * Named entity recognition:
+ * Expanded detection and categorization of entities found in text.
+ * Recognition of the following new entity types:
+ * Phone number
+ * IP address
+ * Sentiment analysis:
+ * Significant improvements in the accuracy and detail of the API's text categorization and scoring.
+ * Automatic labeling for different sentiments in text.
+ * Sentiment analysis and output on a document and sentence level.
+
+ This model version supports: English (`en`), Japanese (`ja`), Chinese Simplified (`zh-Hans`), Chinese Traditional (`zh-Hant`), French (`fr`), Italian (`it`), Spanish (`es`), Dutch (`nl`), Portuguese (`pt`), and German (`de`).
+
+## Next steps
+
+See [What's new](../whats-new.md) for current service updates.
ai-services Role Based Access Control https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/language-service/concepts/role-based-access-control.md
+
+ Title: Role-based access control for the Language service
+
+description: Learn how to use Azure RBAC for managing individual access to Azure resources.
++++++ Last updated : 10/31/2022++++
+# Language role-based access control
+
+Azure AI Language supports Azure role-based access control (Azure RBAC), an authorization system for managing individual access to Azure resources. Using Azure RBAC, you assign different team members different levels of permissions for your project's authoring resources. See the [Azure RBAC documentation](../../../role-based-access-control/index.yml) for more information.
+
+## Enable Azure Active Directory authentication
+
+To use Azure RBAC, you must enable Azure Active Directory authentication. You can [create a new resource with a custom subdomain](../../authentication.md#create-a-resource-with-a-custom-subdomain) or [create a custom subdomain for your existing resource](../../cognitive-services-custom-subdomains.md#how-does-this-impact-existing-resources).
+
+## Add role assignment to Language resource
+
+Azure RBAC can be assigned to a Language resource. To grant access to an Azure resource, you add a role assignment.
+1. In the [Azure portal](https://portal.azure.com/), select **All services**.
+1. Select **Azure AI services**, and navigate to your specific Language resource.
+ > [!NOTE]
+    > You can also set up Azure RBAC for whole resource groups, subscriptions, or management groups. To do so, select the desired scope level and then navigate to the desired item. For example, select **Resource groups** and then navigate to a specific resource group.
+
+1. Select **Access control (IAM)** on the left navigation pane.
+1. Select **Add**, then select **Add role assignment**.
+1. On the **Role** tab on the next screen, select a role you want to add.
+1. On the **Members** tab, select a user, group, service principal, or managed identity.
+1. On the **Review + assign** tab, select **Review + assign** to assign the role.
+
+Within a few minutes, the target will be assigned the selected role at the selected scope. For help with these steps, see [Assign Azure roles using the Azure portal](../../../role-based-access-control/role-assignments-portal.md).
+
+## Language role types
+
+Use the following table to determine access needs for your Language projects.
+
+These custom roles only apply to Language resources.
+> [!NOTE]
+> * All prebuilt capabilities are accessible to all roles.
+> * *Owner* and *Contributor* roles take priority over the custom Language roles.
+> * Azure Active Directory (Azure AD) is only used for the custom Language roles.
+> * If you are assigned as a *Contributor* on Azure, your role is shown as *Owner* in the Language Studio portal.
++
+### Azure AI Language reader
+
+A user who should only be validating and reviewing the Language apps, typically a tester who ensures the application is performing well before deploying the project. They may want to review the application's assets to notify the app developers of any changes that need to be made, but they don't have direct access to make them. Readers have access to view the evaluation results.
++
+ :::column span="":::
+ **Capabilities**
+ :::column-end:::
+ :::column span="":::
+ **API Access**
+ :::column-end:::
+ :::column span="":::
+ * Read
+ * Test
+ :::column-end:::
+ :::column span="":::
+ All GET APIs under:
+ * [Language authoring conversational language understanding APIs](/rest/api/language/2023-04-01/conversational-analysis-authoring)
+ * [Language authoring text analysis APIs](/rest/api/language/2023-04-01/text-analysis-authoring)
+ * [Question answering projects](/rest/api/cognitiveservices/questionanswering/question-answering-projects)
+ Only `TriggerExportProjectJob` POST operation under:
+ * [Language authoring conversational language understanding export API](/rest/api/language/2023-04-01/text-analysis-authoring/export)
+ * [Language authoring text analysis export API](/rest/api/language/2023-04-01/text-analysis-authoring/export)
+ Only Export POST operation under:
+ * [Question Answering Projects](/rest/api/cognitiveservices/questionanswering/question-answering-projects/export)
+ All the Batch Testing Web APIs
+    * [Language Runtime CLU APIs](/rest/api/language/2023-04-01/conversation-analysis-runtime)
+    * [Language Runtime Text Analysis APIs](/rest/api/language/2023-04-01/text-analysis-runtime/analyze-text)
+ :::column-end:::
+
+### Azure AI Language writer
+
+A user who is responsible for building and modifying an application, as a collaborator in a larger team. The collaborator can modify the Language apps in any way, train those changes, and validate/test those changes in the portal. However, this user shouldn't have access to deploy this application to the runtime, as they may accidentally reflect their changes in production. They also shouldn't be able to delete the application or alter its prediction resources and endpoint settings (assigning or unassigning prediction resources, making the endpoint public). This restricts this role from altering an application currently being used in production. They may also create new applications under this resource, but with the restrictions mentioned.
+
+ :::column span="":::
+ **Capabilities**
+ :::column-end:::
+ :::column span="":::
+ **API Access**
+ :::column-end:::
+ :::column span="":::
+ * All functionalities under Azure AI Language Reader.
+ * Ability to:
+ * Train
+ * Write
+ :::column-end:::
+ :::column span="":::
+ * All APIs under Language reader
+ * All POST, PUT and PATCH APIs under:
+ * [Language conversational language understanding APIs](/rest/api/language/2023-04-01/conversational-analysis-authoring)
+ * [Language text analysis APIs](/rest/api/language/2023-04-01/text-analysis-authoring)
+ * [question answering projects](/rest/api/cognitiveservices/questionanswering/question-answering-projects)
+ Except for
+ * Delete deployment
+ * Delete trained model
+ * Delete Project
+ * Deploy Model
+ :::column-end:::
+
+### Cognitive Services Language Owner
+
+> [!NOTE]
+> If you are assigned as an *Owner* and *Language Owner*, you will be shown as *Cognitive Services Language Owner* in the Language Studio portal.
++
+These users are the gatekeepers for the Language applications in production environments. They should have full access to any of the underlying functions and thus can view everything in the application, and they have direct access to edit any changes for both authoring and runtime environments.
+
+ :::column span="":::
+ **Functionality**
+ :::column-end:::
+ :::column span="":::
+ **API Access**
+ :::column-end:::
+ :::column span="":::
+ * All functionalities under Azure AI Language Writer
+ * Deploy
+ * Delete
+ :::column-end:::
+ :::column span="":::
+ All APIs available under:
+ * [Language authoring conversational language understanding APIs](/rest/api/language/2023-04-01/conversational-analysis-authoring)
+ * [Language authoring text analysis APIs](/rest/api/language/2023-04-01/text-analysis-authoring)
+ * [question answering projects](/rest/api/cognitiveservices/questionanswering/question-answering-projects)
+
+ :::column-end:::
ai-services Use Asynchronously https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/language-service/concepts/use-asynchronously.md
+
+ Title: "How to: Use Language service features asynchronously"
+
+description: Learn how to send Language service API requests asynchronously.
++++++ Last updated : 10/31/2022+++
+# How to use Language service features asynchronously
+
+The Language service enables you to send API requests asynchronously, using either the REST API or client library. You can also include multiple different Language service features in your request, to be performed on your data at the same time.
+
+Currently, the following features are available to be used asynchronously:
+* Entity linking
+* Document summarization
+* Conversation summarization
+* Key phrase extraction
+* Language detection
+* Named Entity Recognition (NER)
+* Customer content detection
+* Sentiment analysis and opinion mining
+* Text Analytics for health
+* Personally Identifiable Information (PII)
+
+When you send asynchronous requests, you will incur charges based on the number of text records you include in your request, for each feature used. For example, if you send a text record for sentiment analysis and NER, it's counted as sending two text records, and you will be charged for both according to your [pricing tier](https://azure.microsoft.com/pricing/details/cognitive-services/language-service/).
+
+## Submit an asynchronous job using the REST API
+
+To submit an asynchronous job, review the [reference documentation](/rest/api/language/2023-04-01/text-analysis-runtime/submit-job) for the JSON body you'll send in your request.
+1. Add your documents to the `analysisInput` object.
+1. In the `tasks` object, include the operations you want performed on your data. For example, if you wanted to perform sentiment analysis, you would include the `SentimentAnalysisLROTask` object.
+1. You can optionally:
+ 1. Choose a specific [version of the model](model-lifecycle.md) used on your data.
+ 1. Include additional Language service features in the `tasks` object, to be performed on your data at the same time.
+
+Once you've created the JSON body for your request, add your key to the `Ocp-Apim-Subscription-Key` header. Then send your API request to the job creation endpoint. For example:
+
+```http
+POST https://your-endpoint.cognitiveservices.azure.com/language/analyze-text/jobs?api-version=2022-05-01
+```
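+
+For illustration, a request body that runs sentiment analysis on a single document might look like the following sketch. The `displayName` value, document text, and task name are hypothetical placeholders; check the reference documentation linked above for the full schema.
+
+```json
+{
+  "displayName": "Example asynchronous job",
+  "analysisInput": {
+    "documents": [
+      { "id": "1", "language": "en", "text": "The rooms were beautiful and the staff was helpful." }
+    ]
+  },
+  "tasks": [
+    {
+      "kind": "SentimentAnalysisLROTask",
+      "taskName": "Sentiment analysis example",
+      "parameters": { "modelVersion": "latest" }
+    }
+  ]
+}
+```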
+
+A successful call will return a 202 response code. The `operation-location` value in the response header is the URL you'll use to retrieve the API results. The value will look similar to the following URL:
+
+```http
+GET {Endpoint}/language/analyze-text/jobs/12345678-1234-1234-1234-12345678?api-version=2022-05-01
+```
+
+To [get the status and retrieve the results](/rest/api/language/2023-04-01/text-analysis-runtime/job-status) of the request, send a GET request to the URL you received in the `operation-location` header from the previous API response. Remember to include your key in the `Ocp-Apim-Subscription-Key` header. The response will include the results of your API call.
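+
+A truncated status response might look like the following sketch. The property names shown here (`jobId`, `status`, `expirationDateTime`, and the `tasks` summary) are assumptions based on the job status schema and should be confirmed against the reference documentation; the per-task results under `items` are omitted.
+
+```json
+{
+  "jobId": "12345678-1234-1234-1234-12345678",
+  "status": "succeeded",
+  "expirationDateTime": "2023-01-02T12:00:00Z",
+  "tasks": {
+    "completed": 1,
+    "failed": 0,
+    "inProgress": 0,
+    "total": 1,
+    "items": [ "..." ]
+  }
+}
+```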
+
+## Send asynchronous API requests using the client library
+
+First, make sure you have the client library installed for your language of choice. For steps on installing the client library, see the quickstart article for the feature you want to use.
+
+Afterwards, use the client object to send asynchronous calls to the API. The method calls to use will vary depending on your language. Use the available samples and reference documentation to help you get started.
+
+* [C#](/dotnet/api/overview/azure/ai.textanalytics-readme?preserve-view=true&view=azure-dotnet#async-examples)
+* [Java](/java/api/overview/azure/ai-textanalytics-readme?preserve-view=true&view=azure-java-preview#analyze-multiple-actions)
+* [JavaScript](/javascript/api/overview/azure/ai-text-analytics-readme?preserve-view=true&view=azure-node-preview#analyze-actions)
+* [Python](/python/api/overview/azure/ai-textanalytics-readme?preserve-view=true&view=azure-python-preview#multiple-analysis)
+
+## Result availability
+
+When using this feature asynchronously, the API results are available for 24 hours from the time the request was ingested, as indicated in the response. After this time period, the results are purged and are no longer available for retrieval.
+
+## Automatic language detection
+
+Starting in version `2022-07-01-preview` of the REST API, you can request automatic [language detection](../language-detection/overview.md) on your documents. By setting the `language` parameter to `auto`, the detected language code of the text will be returned as a language value in the response. This language detection will not incur extra charges to your Language resource.
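+
+For example, a document entry in the `analysisInput.documents` array of your request might set the `language` field to `auto` as shown in the following sketch (the `id` and text values here are hypothetical):
+
+```json
+{ "id": "1", "language": "auto", "text": "Bonjour tout le monde." }
+```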
+
+## Data limits
+
+> [!NOTE]
+> * If you need to analyze larger documents than the limit allows, you can break the text into smaller chunks of text before sending them to the API.
+> * A document is a single string of text characters.
+
+You can send up to 125,000 characters across all documents contained in the asynchronous request, as measured by [StringInfo.LengthInTextElements](/dotnet/api/system.globalization.stringinfo.lengthintextelements). This character limit is higher than the limit for synchronous requests, to enable higher throughput.
+
+If any document within the request exceeds the character limit, the API rejects the entire request and returns a `400 bad request` error.
+
+## See also
+
+* [Azure AI Language overview](../overview.md)
+* [Multilingual and emoji support](../concepts/multilingual-emoji-support.md)
+* [What's new](../whats-new.md)
ai-services Best Practices https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/language-service/conversational-language-understanding/concepts/best-practices.md
+
+ Title: Conversational language understanding best practices
+
+description: Apply best practices when using conversational language understanding
++++++ Last updated : 10/11/2022++++
+# Best practices for conversational language understanding
+
+Use the following guidelines to create the best possible projects in conversational language understanding.
+
+## Choose a consistent schema
+
+Schema is the definition of your intents and entities. There are different approaches you could take when defining what you should create as an intent versus an entity. There are some questions you need to ask yourself:
+
+- What actions or queries am I trying to capture from my user?
+- What pieces of information are relevant in each action?
+
+You can typically think of actions and queries as _intents_, and the information required to fulfill those queries as _entities_.
+
+For example, assume you want your customers to cancel subscriptions for various products that you offer through your chatbot. You can create a _Cancel_ intent with various examples like _"Cancel the Contoso service"_, or _"stop charging me for the Fabrikam subscription"_. The user's intent here is to _cancel_; the _Contoso service_ or _Fabrikam subscription_ is the subscription they would like to cancel. Therefore, you can create an entity for _subscriptions_. You can then model your entire project to capture actions as intents and use entities to fill in those actions. This allows you to cancel anything you define as an entity, such as other products. You can then have intents for signing up, renewing, upgrading, etc. that all make use of the _subscriptions_ and other entities.
+
+The above schema design makes it easy for you to extend existing capabilities (canceling, upgrading, signing up) to new targets by creating a new entity.
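+
+As an illustration, the intents and entities for this design might be declared as in the following excerpt of a project import file (the intent and entity names are hypothetical; see the data formats documentation for the full schema):
+
+```json
+{
+  "intents": [
+    { "category": "Cancel" },
+    { "category": "Upgrade" },
+    { "category": "SignUp" }
+  ],
+  "entities": [
+    { "category": "Subscription" }
+  ]
+}
+```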
+
+Another approach is to model the _information_ as intents and _actions_ as entities. Let's take the same example, allowing your customers to cancel subscriptions through your chatbot. You can create an intent for each subscription available, such as _Contoso_ with utterances like _"cancel Contoso"_, _"stop charging me for contoso services"_, _"Cancel the Contoso subscription"_. You would then create an entity to capture the action, _cancel_. You can define different entities for each action or consolidate actions as one entity with a list component to differentiate between actions with different keys.
+
+This schema design makes it easy for you to extend new actions to existing targets by adding new action entities or entity components.
+
+Make sure to avoid trying to funnel all the concepts into intents. For example, don't try to create a _Cancel Contoso_ intent that only serves that one specific action. Intents and entities should work together to capture all the required information from the customer.
+
+You also want to avoid mixing different schema designs. Don't build half of your application with actions as intents and the other half with information as intents. Ensure your schema is consistent to get the best possible results.
++++++++
ai-services Data Formats https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/language-service/conversational-language-understanding/concepts/data-formats.md
+
+ Title: conversational language understanding data formats
+
+description: Learn about the data formats accepted by conversational language understanding.
++++++ Last updated : 06/20/2023++++
+# Data formats accepted by conversational language understanding
++
+If you're uploading your data into CLU, it has to follow a specific format. Use this article to learn more about accepted data formats.
+
+## Import project file format
+
+If you're [importing a project](../how-to/create-project.md#import-project) into CLU, the uploaded file has to be in the following format.
+
+```json
+{
+ "projectFileVersion": "2022-10-01-preview",
+ "stringIndexType": "Utf16CodeUnit",
+ "metadata": {
+ "projectKind": "Conversation",
+ "projectName": "{PROJECT-NAME}",
+ "multilingual": true,
+ "description": "DESCRIPTION",
+ "language": "{LANGUAGE-CODE}",
+ "settings": {
+ "confidenceThreshold": 0
+ }
+ },
+ "assets": {
+ "projectKind": "Conversation",
+ "intents": [
+ {
+ "category": "intent1"
+ }
+ ],
+ "entities": [
+ {
+ "category": "entity1",
+ "compositionSetting": "{COMPOSITION-SETTING}",
+ "list": {
+ "sublists": [
+ {
+ "listKey": "list1",
+ "synonyms": [
+ {
+ "language": "{LANGUAGE-CODE}",
+ "values": [
+ "{VALUES-FOR-LIST}"
+ ]
+ }
+ ]
+ }
+ ]
+ },
+ "prebuilts": [
+ {
+ "category": "{PREBUILT-COMPONENTS}"
+ }
+ ],
+ "regex": {
+ "expressions": [
+ {
+ "regexKey": "regex1",
+ "language": "{LANGUAGE-CODE}",
+ "regexPattern": "{REGEX-PATTERN}"
+ }
+ ]
+ },
+ "requiredComponents": [
+ "{REQUIRED-COMPONENTS}"
+ ]
+ }
+ ],
+ "utterances": [
+ {
+ "text": "utterance1",
+ "intent": "intent1",
+ "language": "{LANGUAGE-CODE}",
+ "dataset": "{DATASET}",
+ "entities": [
+ {
+ "category": "ENTITY1",
+ "offset": 6,
+ "length": 4
+ }
+ ]
+ }
+ ]
+ }
+}
+
+```
+
+|Key |Placeholder |Value | Example |
+|||-|--|
+| `api-version` | `{API-VERSION}` | The [version](../../concepts/model-lifecycle.md#api-versions) of the API you are calling. | `2023-04-01` |
+|`confidenceThreshold`|`{CONFIDENCE-THRESHOLD}`|This is the threshold score below which the intent will be predicted as [none intent](none-intent.md). Values are from `0` to `1`|`0.7`|
+| `projectName` | `{PROJECT-NAME}` | The name of your project. This value is case-sensitive. | `EmailApp` |
+| `multilingual` | `true`| A boolean value that enables you to have utterances in multiple languages in your dataset. When your model is deployed, you can query the model in any supported language (not necessarily included in your training documents). See [Language support](../language-support.md#multi-lingual-option) for more information about supported language codes. | `true`|
+|`sublists`|`[]`|Array containing sublists. Each sublist is a key and its associated values.|`[]`|
+|`compositionSetting`|`{COMPOSITION-SETTING}`|Rule that defines how to manage multiple components in your entity. Options are `combineComponents` or `separateComponents`. |`combineComponents`|
+|`synonyms`|`[]`|Array containing all the synonyms|synonym|
+| `language` | `{LANGUAGE-CODE}` | A string specifying the language code for the utterances, synonyms, and regular expressions used in your project. If your project is a multilingual project, choose the [language code](../language-support.md) of the majority of the utterances. |`en-us`|
+| `intents` | `[]` | Array containing all the intents you have in the project. These are the intents that will be classified from your utterances.| `[]` |
+| `entities` | `[]` | Array containing all the entities in your project. These are the entities that will be extracted from your utterances. Every entity can have additional optional components defined with them: list, prebuilt, or regex. | `[]` |
+| `dataset` | `{DATASET}` | The dataset split that this utterance is assigned to when the data is split before training. Learn more about [data splitting](../how-to/train-model.md#data-splitting). Possible values for this field are `Train` and `Test`. |`Train`|
+| `category` | ` ` | The type of entity associated with the span of text specified. | `Entity1`|
+| `offset` | ` ` | The inclusive character position of the start of the entity. |`5`|
+| `length` | ` ` | The character length of the entity. |`5`|
+| `listKey`| ` ` | A normalized value for the list of synonyms to map back to in prediction. | `Microsoft` |
+| `values`| `{VALUES-FOR-LIST}` | A list of comma separated strings that will be matched exactly for extraction and map to the list key. | `"msft", "microsoft", "MS"` |
+| `regexKey`| `{REGEX-PATTERN}` | A normalized value for the regular expression to map back to in prediction. | `ProductPattern1` |
+| `regexPattern`| `{REGEX-PATTERN}` | A regular expression. | `^pre` |
+| `prebuilts`| `{PREBUILT-COMPONENTS}` | The prebuilt components that can extract common types. You can find the list of prebuilts you can add [here](../prebuilt-component-reference.md). | `Quantity.Number` |
+| `requiredComponents` | `{REQUIRED-COMPONENTS}` | A setting that specifies a requirement that a specific component be present to return the entity. You can learn more [here](./entity-components.md#required-components). The possible values are `learned`, `regex`, `list`, or `prebuilts` |`"learned", "prebuilt"`|
++++
+## Utterance file format
+
+CLU offers the option to upload your utterances directly to the project rather than typing them in one by one. You can find this option on the [data labeling](../how-to/tag-utterances.md) page for your project.
+
+```json
+[
+ {
+ "text": "{Utterance-Text}",
+ "language": "{LANGUAGE-CODE}",
+ "dataset": "{DATASET}",
+ "intent": "{intent}",
+ "entities": [
+ {
+ "category": "{entity}",
+ "offset": 19,
+ "length": 10
+ }
+ ]
+ },
+ {
+ "text": "{Utterance-Text}",
+ "language": "{LANGUAGE-CODE}",
+ "dataset": "{DATASET}",
+ "intent": "{intent}",
+ "entities": [
+ {
+ "category": "{entity}",
+ "offset": 20,
+ "length": 10
+ },
+ {
+ "category": "{entity}",
+ "offset": 31,
+ "length": 5
+ }
+ ]
+ }
+]
+
+```
+
+|Key |Placeholder |Value | Example |
+|||-|--|
+|`text`|`{Utterance-Text}`|Your utterance text|Testing|
+| `language` | `{LANGUAGE-CODE}` | A string specifying the language code for the utterances used in your project. If your project is a multilingual project, choose the language code of the majority of the utterances. See [Language support](../language-support.md) for more information about supported language codes. |`en-us`|
+| `dataset` | `{DATASET}` | The dataset split that this utterance is assigned to when the data is split before training. Learn more about [data splitting](../how-to/train-model.md#data-splitting). Possible values for this field are `Train` and `Test`. |`Train`|
+|`intent`|`{intent}`|The assigned intent| intent1|
+|`entity`|`{entity}`|Entity to be extracted| entity1|
+| `category` | ` ` | The type of entity associated with the span of text specified. | `Entity1`|
+| `offset` | ` ` | The inclusive character position of the start of the text. |`0`|
+| `length` | ` ` | The character length of the entity, in UTF-16 code units. |`10`|
++
+## Next steps
+
+* You can import your labeled data into your project directly. See [import project](../how-to/create-project.md#import-project) for more information.
+* See the [how-to article](../how-to/tag-utterances.md) for more information about labeling your data. When you're done labeling your data, you can [train your model](../how-to/train-model.md).
ai-services Entity Components https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/language-service/conversational-language-understanding/concepts/entity-components.md
+
+ Title: Entity components in Conversational Language Understanding
+
+description: Learn how Conversational Language Understanding extracts entities from text
++++++ Last updated : 10/11/2022++++
+# Entity components
+
+In Conversational Language Understanding, entities are relevant pieces of information that are extracted from your utterances. An entity can be extracted by different methods. They can be learned through context, matched from a list, or detected by a prebuilt recognized entity. Every entity in your project is composed of one or more of these methods, which are defined as your entity's components. When an entity is defined by more than one component, their predictions can overlap. You can determine the behavior of an entity prediction when its components overlap by using a fixed set of options in the **Entity options**.
+
+## Component types
+
+An entity component determines a way you can extract the entity. An entity can contain one component, which would determine the only method that would be used to extract the entity, or multiple components to expand the ways in which the entity is defined and extracted.
+
+### Learned component
+
+The learned component uses the entity tags you label your utterances with to train a machine learned model. The model learns to predict where the entity is, based on the context within the utterance. Your labels provide examples of where the entity is expected to be present in an utterance, based on the meaning of the words around it and the words that were labeled. This component is only defined if you add labels by tagging utterances for the entity. If you don't tag any utterances with the entity, it won't have a learned component.
++
+### List component
+
+The list component represents a fixed, closed set of related words along with their synonyms. The component performs an exact text match against the list of values you provide as synonyms. Each synonym belongs to a "list key", which can be used as the normalized, standard value for the synonym that will return in the output if the list component is matched. List keys are **not** used for matching.
+
+In multilingual projects, you can specify a different set of synonyms for each language. While using the prediction API, you can specify the language in the input request, which will only match the synonyms associated with that language.
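+
+For illustration, a list component with per-language synonyms can be expressed as in the following excerpt of the project import file format described in the data formats documentation (the entity name, list key, and synonym values are hypothetical):
+
+```json
+{
+  "category": "Software",
+  "list": {
+    "sublists": [
+      {
+        "listKey": "Proseware OS",
+        "synonyms": [
+          { "language": "en-us", "values": [ "proseware os", "proseware operating system" ] },
+          { "language": "es-es", "values": [ "sistema operativo proseware" ] }
+        ]
+      }
+    ]
+  }
+}
+```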
+++
+### Prebuilt component
+
+The prebuilt component allows you to select from a library of common types such as numbers, datetimes, and names. When added, a prebuilt component is automatically detected. You can have up to five prebuilt components per entity. See [the list of supported prebuilt components](../prebuilt-component-reference.md) for more information.
+++
+### Regex component
+
+The regex component matches regular expressions to capture consistent patterns. When added, any text that matches the regular expression will be extracted. You can have multiple regular expressions within the same entity, each with a different key identifier. A matched expression will return the key as part of the prediction response.
+
+In multilingual projects, you can specify a different expression for each language. While using the prediction API, you can specify the language in the input request, which will only match the regular expression associated with that language.
+++
+## Entity options
+
+When multiple components are defined for an entity, their predictions may overlap. When an overlap occurs, each entity's final prediction is determined by one of the following options.
+
+### Combine components
+
+Combine components as one entity when they overlap by taking the union of all the components.
+
+Use this to combine all components when they overlap. When components are combined, you get all the extra information that's tied to a list or prebuilt component when they are present.
+
+#### Example
+
+Suppose you have an entity called Software that has a list component, which contains "Proseware OS" as an entry. In your utterance data, you have "I want to buy Proseware OS 9" with "Proseware OS 9" tagged as Software:
++
+By using combine components, the entity will return with the full context as "Proseware OS 9" along with the key from the list component:
++
+Suppose you had the same utterance but only "OS 9" was predicted by the learned component:
++
+With combine components, the entity will still return as "Proseware OS 9" with the key from the list component:
+++
+### Do not combine components
+
+Each overlapping component will return as a separate instance of the entity. Apply your own logic after prediction with this option.
+
+#### Example
+
+Suppose you have an entity called Software that has a list component, which contains "Proseware Desktop" as an entry. In your utterance data, you have "I want to buy Proseware Desktop Pro" with "Proseware Desktop Pro" tagged as Software:
++
+When you do not combine components, the entity will return twice:
++
+### Required components
+
+An entity can sometimes be defined by multiple components but requires one or more of them to be present. Every component can be set as **required**, which means the entity will **not** be returned if that component wasn't present. For example, if you have an entity with a list component and a required learned component, it is guaranteed that any returned entity includes a learned component; if it doesn't, the entity will not be returned.
+
+Required components are most frequently used with learned components, as they can restrict the other component types to a specific context, which is commonly associated to **roles**. You can also require all components to make sure that every component is present for an entity.
+
+In the Language Studio, every component in an entity has a toggle next to it that allows you to set it as required.
+
+#### Example
+
+Suppose you have an entity called **Ticket Quantity** that attempts to extract the number of tickets you want to reserve for flights, for utterances such as _"Book **two** tickets tomorrow to Cairo"_.
+
+Typically, you would add a prebuilt component for _Quantity.Number_ that already extracts all numbers. However if your entity was only defined with the prebuilt, it would also extract other numbers as part of the **Ticket Quantity** entity, such as _"Book **two** tickets tomorrow to Cairo at **3** PM"_.
+
+To resolve this, you would label a learned component in your training data for all the numbers that are meant to be **Ticket Quantity**. The entity now has 2 components, the prebuilt that knows all numbers, and the learned one that predicts where the Ticket Quantity is in a sentence. If you require the learned component, you make sure that Ticket Quantity only returns when the learned component predicts it in the right context. If you also require the prebuilt component, you can then guarantee that the returned Ticket Quantity entity is both a number and in the correct position.
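+
+A sketch of such an entity definition in the import file format might look like the following. The entity name mirrors this example, and the `requiredComponents` values are assumptions; check the accepted values in the data formats documentation for your API version.
+
+```json
+{
+  "category": "TicketQuantity",
+  "prebuilts": [
+    { "category": "Quantity.Number" }
+  ],
+  "requiredComponents": [ "learned", "prebuilts" ]
+}
+```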
++
+## How to use components and options
+
+Components give you the flexibility to define your entity in more than one way. When you combine components, you make sure that each component is represented and you reduce the number of entities returned in your predictions.
+
+A common practice is to extend a prebuilt component with a list of values that the prebuilt might not support. For example, if you have an **Organization** entity, which has a _General.Organization_ prebuilt component added to it, the entity may not predict all the organizations specific to your domain. You can use a list component to extend the values of the Organization entity, thereby extending the prebuilt with your own organizations.
+
+Other times you may be interested in extracting an entity through context, such as a **Product** in a retail project. You would label the learned component of the product to learn _where_ a product is based on its position within the sentence. You may also have a list of products that you already know beforehand that you'd like to always extract. Combining both components in one entity allows you to get both options for the entity.
+
+When you do not combine components, you allow every component to act as an independent entity extractor. One way of using this option is to separate the entities extracted by a list component from the ones extracted through the learned or prebuilt components, so you can handle and treat them differently.
+
+> [!NOTE]
+> Previously during the public preview of the service, there were 4 available options: **Longest overlap**, **Exact overlap**, **Union overlap**, and **Return all separately**. **Longest overlap** and **exact overlap** are deprecated and will only be supported for projects that previously had those options selected. **Union overlap** has been renamed to **Combine components**, while **Return all separately** has been renamed to **Do not combine components**.
++
+## Next steps
+
+[Supported prebuilt components](../prebuilt-component-reference.md)
ai-services Evaluation Metrics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/language-service/conversational-language-understanding/concepts/evaluation-metrics.md
+
+ Title: Conversational Language Understanding evaluation metrics
+
+description: Learn about evaluation metrics in Conversational Language Understanding
++++++ Last updated : 05/13/2022++++
+# Evaluation metrics for conversational language understanding models
+
+Your [dataset is split](../how-to/train-model.md#data-splitting) into two parts: a set for training, and a set for testing. The training set is used to train the model, while the testing set is used to test the model after training and calculate its performance and evaluation. The testing set isn't introduced to the model through the training process, to make sure that the model is tested on new data.
+
+Model evaluation is triggered automatically after training is completed successfully. The evaluation process starts by using the trained model to predict user defined intents and entities for utterances in the test set, and compares them with the provided tags (which establishes a baseline of truth). The results are returned so you can review the model's performance. For evaluation, conversational language understanding uses the following metrics:
+
+* **Precision**: Measures how precise/accurate your model is. It's the ratio between the correctly identified positives (true positives) and all identified positives. The precision metric reveals how many of the predicted classes are correctly labeled.
+
+ `Precision = #True_Positive / (#True_Positive + #False_Positive)`
+
+* **Recall**: Measures the model's ability to predict actual positive classes. It's the ratio between the predicted true positives and what was actually tagged. The recall metric reveals how many of the actual classes are correctly predicted.
+
+ `Recall = #True_Positive / (#True_Positive + #False_Negatives)`
+
+* **F1 score**: The F1 score is a function of Precision and Recall. It's needed when you seek a balance between Precision and Recall.
+
+ `F1 Score = 2 * Precision * Recall / (Precision + Recall)`
++
+Precision, recall, and F1 score are calculated for:
+* Each entity separately (entity-level evaluation)
+* Each intent separately (intent-level evaluation)
+* For the model collectively (model-level evaluation).
+
+The definitions of precision, recall, and F1 score are the same for entity-level, intent-level, and model-level evaluations. However, the counts for *True Positives*, *False Positives*, and *False Negatives* can differ. For example, consider the following text.
+
+### Example
+
+* Make a response with thank you very much.
+* reply with saying yes.
+* Check my email please.
+* email to cynthia that dinner last week was splendid.
+* send email to mike
+
+These are the intents used: *Reply*, *sendEmail*, *readEmail*. These are the entities: *contactName*, *message*.
+
+The model could make the following predictions:
+
+| Utterance | Predicted intent | Actual intent |Predicted entity| Actual entity|
+|--|--|--|--|--|
+|Make a response with thank you very much|Reply|Reply|`thank you very much` as `message` |`thank you very much` as `message` |
+|reply with saying yes| sendEmail|Reply|--|`yes` as `message`|
+|Check my email please|readEmail|readEmail|--|--|
+|email to cynthia that dinner last week was splendid|Reply|sendEmail|`dinner last week was splendid` as `message`| `cynthia` as `contactName`, `dinner last week was splendid` as `message`|
+|send email to mike|sendEmail|sendEmail|`mike` as `message`|`mike` as `contactName`|
++
+### Intent level evaluation for *Reply* intent
+
+| Key | Count | Explanation |
+|--|--|--|
+| True Positive | 1 | Utterance 1 was correctly predicted as *Reply*. |
+| False Positive | 1 |Utterance 4 was mistakenly predicted as *Reply*. |
+| False Negative | 1 | Utterance 2 was mistakenly predicted as *sendEmail*. |
+
+**Precision** = `#True_Positive / (#True_Positive + #False_Positive) = 1 / (1 + 1) = 0.5`
+
+**Recall** = `#True_Positive / (#True_Positive + #False_Negatives) = 1 / (1 + 1) = 0.5`
+
+**F1 Score** = `2 * Precision * Recall / (Precision + Recall) = (2 * 0.5 * 0.5) / (0.5 + 0.5) = 0.5 `
++
+### Intent level evaluation for *sendEmail* intent
+
+| Key | Count | Explanation |
+|--|--|--|
+| True Positive | 1 | Utterance 5 was correctly predicted as *sendEmail* |
+| False Positive | 1 |Utterance 2 was mistakenly predicted as *sendEmail*. |
+| False Negative | 1 | Utterance 4 was mistakenly predicted as *Reply*. |
+
+**Precision** = `#True_Positive / (#True_Positive + #False_Positive) = 1 / (1 + 1) = 0.5`
+
+**Recall** = `#True_Positive / (#True_Positive + #False_Negatives) = 1 / (1 + 1) = 0.5`
+
+**F1 Score** = `2 * Precision * Recall / (Precision + Recall) = (2 * 0.5 * 0.5) / (0.5 + 0.5) = 0.5 `
+
+### Intent level evaluation for *readEmail* intent
+
+| Key | Count | Explanation |
+|--|--|--|
+| True Positive | 1 | Utterance 3 was correctly predicted as *readEmail*. |
+| False Positive | 0 |--|
+| False Negative | 0 |--|
+
+**Precision** = `#True_Positive / (#True_Positive + #False_Positive) = 1 / (1 + 0) = 1`
+
+**Recall** = `#True_Positive / (#True_Positive + #False_Negatives) = 1 / (1 + 0) = 1`
+
+**F1 Score** = `2 * Precision * Recall / (Precision + Recall) = (2 * 1 * 1) / (1 + 1) = 1`
+
+### Entity level evaluation for *contactName* entity
+
+| Key | Count | Explanation |
+|--|--|--|
+| True Positive | 1 | `cynthia` was correctly predicted as `contactName` in utterance 4|
+| False Positive | 0 |--|
+| False Negative | 1 | `mike` was mistakenly predicted as `message` in utterance 5 |
+
+**Precision** = `#True_Positive / (#True_Positive + #False_Positive) = 1 / (1 + 0) = 1`
+
+**Recall** = `#True_Positive / (#True_Positive + #False_Negatives) = 1 / (1 + 1) = 0.5`
+
+**F1 Score** = `2 * Precision * Recall / (Precision + Recall) = (2 * 1 * 0.5) / (1 + 0.5) = 0.67`
+
+### Entity level evaluation for *message* entity
+
+| Key | Count | Explanation |
+|--|--|--|
+| True Positive | 2 |`thank you very much` was correctly predicted as `message` in utterance 1 and `dinner last week was splendid` was correctly predicted as `message` in utterance 4 |
+| False Positive | 1 |`mike` was mistakenly predicted as `message` in utterance 5 |
+| False Negative | 1 | `yes` was not predicted as `message` in utterance 2 |
+
+**Precision** = `#True_Positive / (#True_Positive + #False_Positive) = 2 / (2 + 1) = 0.67`
+
+**Recall** = `#True_Positive / (#True_Positive + #False_Negatives) = 2 / (2 + 1) = 0.67`
+
+**F1 Score** = `2 * Precision * Recall / (Precision + Recall) = (2 * 0.67 * 0.67) / (0.67 + 0.67) = 0.67`
++
+### Model-level evaluation for the collective model
+
+| Key | Count | Explanation |
+|--|--|--|
+| True Positive | 6 | Sum of TP for all intents and entities |
+| False Positive | 3| Sum of FP for all intents and entities |
+| False Negative | 4 | Sum of FN for all intents and entities |
+
+**Precision** = `#True_Positive / (#True_Positive + #False_Positive) = 6 / (6 + 3) = 0.67`
+
+**Recall** = `#True_Positive / (#True_Positive + #False_Negatives) = 6 / (6 + 4) = 0.60`
+
+**F1 Score** = `2 * Precision * Recall / (Precision + Recall) = (2 * 0.67 * 0.60) / (0.67 + 0.60) = 0.63`
+
+## Confusion matrix
+
+A confusion matrix is an N x N matrix used for model performance evaluation, where N is the number of entities or intents.
+The matrix compares the expected labels with the ones predicted by the model.
+This gives a holistic view of how well the model is performing and what kinds of errors it is making.
+
+You can use the confusion matrix to identify intents or entities that are too close to each other and often get mistaken (ambiguity). In this case, consider merging these intents or entities together. If that isn't possible, consider adding more tagged examples of both intents or entities to help the model differentiate between them.
+
+The highlighted diagonal in the image below is the correctly predicted entities, where the predicted tag is the same as the actual tag.
++
+You can calculate the intent-level or entity-level and model-level evaluation metrics from the confusion matrix:
+
+* The values in the diagonal are the *true positive* values of each intent or entity.
+* The sum of the values in the intent or entities rows (excluding the diagonal) is the *false positive* of the model.
+* The sum of the values in the intent or entities columns (excluding the diagonal) is the *false negative* of the model.
+
+Similarly,
+
+* The *true positive* of the model is the sum of *true positives* for all intents or entities.
+* The *false positive* of the model is the sum of *false positives* for all intents or entities.
+* The *false negative* of the model is the sum of *false negatives* for all intents or entities.
++
+## Guidance
+
+After you train your model, you'll see some guidance and recommendations on how to improve it. It's recommended to have a model covering every point in the guidance section.
+
+* Training set has enough data: When an intent or entity has fewer than 15 labeled instances in the training data, it can lead to lower accuracy due to the model not being adequately trained on that intent. In this case, consider adding more labeled data in the training set. You should only consider adding more labeled data to your entity if your entity has a learned component. If your entity is defined only by list, prebuilt, and regex components, then this recommendation is not applicable.
+
+* All intents or entities are present in test set: When the testing data lacks labeled instances for an intent or entity, the model evaluation is less comprehensive due to untested scenarios. Consider having test data for every intent and entity in your model to ensure everything is being tested.
+
+* Unclear distinction between intents or entities: When data is similar for different intents or entities, it can lead to lower accuracy because they may be frequently misclassified as each other. Review the following intents and entities and consider merging them if they're similar. Otherwise, add more examples to better distinguish them from each other. You can check the *confusion matrix* tab for more guidance. If you are seeing two entities constantly being predicted for the same spans because they share the same list, prebuilt, or regex components, then make sure to add a **learned** component for each entity and make it **required**. Learn more about [entity components](./entity-components.md).
+
+## Next steps
+
+[Train a model in Language Studio](../how-to/train-model.md)
ai-services Multiple Languages https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/language-service/conversational-language-understanding/concepts/multiple-languages.md
+
+ Title: Multilingual projects
+
+description: Learn how to make use of multilingual projects in conversational language understanding
++++++ Last updated : 01/10/2022++++
+# Multilingual projects
+
+Conversational language understanding makes it easy for you to extend your project to several languages at once. When you enable multiple languages in projects, you can add language-specific utterances and synonyms to your project, and get multilingual predictions for your intents and entities.
+
+## Multilingual intent and learned entity components
+
+When you enable multiple languages in a project, you can train the project primarily in one language and immediately get predictions in others.
+
+For example, you can train your project entirely with English utterances, and query it in: French, German, Mandarin, Japanese, Korean, and others. Conversational language understanding makes it easy for you to scale your projects to multiple languages by using multilingual technology to train your models.
+
+Whenever you identify that a particular language is not performing as well as other languages, you can add utterances for that language in your project. In the [tag utterances](../how-to/tag-utterances.md) page in Language Studio, you can select the language of the utterance you're adding. When you introduce examples for that language to the model, it is introduced to more of the syntax of that language, and learns to predict it better.
+
+You aren't expected to add the same number of utterances for every language. You should build the majority of your project in one language, and only add a few utterances in languages you observe aren't performing well. If you create a project that is primarily in English, and start testing it in French, German, and Spanish, you might observe that German doesn't perform as well as the other two languages. In that case, consider adding 5% of your original English examples in German, train a new model, and test in German again. You should see better results for German queries. The more utterances you add, the more likely the results are to improve.
+
+When you add data in another language, you shouldn't expect it to negatively affect other languages.
+
+## List and prebuilt components in multiple languages
+
+Projects with multiple languages enabled will allow you to specify synonyms **per language** for every list key. Depending on the language you query your project with, you will only get matches for the list component with synonyms of that language. When you query your project, you can specify the language in the request body:
+
+```json
+"query": "{query}"
+"language": "{language code}"
+```
+
+If you do not provide a language, it will fall back to the default language of your project. See the [language support](../language-support.md) article for a list of different language codes.
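+
+For reference, a complete runtime request that sets the query language might look like the following sketch, assuming the conversation analysis request schema (`kind`, `analysisInput.conversationItem`, and `parameters`); the project and deployment names are placeholders:
+
+```json
+{
+  "kind": "Conversation",
+  "analysisInput": {
+    "conversationItem": {
+      "id": "1",
+      "participantId": "1",
+      "text": "Cancel the Contoso subscription",
+      "language": "en"
+    }
+  },
+  "parameters": {
+    "projectName": "{project-name}",
+    "deploymentName": "{deployment-name}"
+  }
+}
+```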
+
+Prebuilt components are similar: you should only expect to get predictions for prebuilt components that are available in specific languages. The request's language again determines which components the service attempts to predict. See the [prebuilt components](../prebuilt-component-reference.md) reference article for the language support of each prebuilt component.
+
+## Next steps
+
+* [Tag utterances](../how-to/tag-utterances.md)
+* [Train a model](../how-to/train-model.md)
ai-services None Intent https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/language-service/conversational-language-understanding/concepts/none-intent.md
+
+ Title: Conversational Language Understanding None Intent
+
+description: Learn about the default None intent in conversational language understanding
++++++ Last updated : 05/13/2022+++++
+# None intent
+
+Every project in conversational language understanding includes a default **None** intent. The None intent is a required intent and can't be deleted or renamed. The intent is meant to categorize any utterances that do not belong to any of your other custom intents.
+
+An utterance can be predicted as the None intent if the top scoring intent's score is **lower** than the None score threshold. It can also be predicted if the utterance is similar to examples added to the None intent.
+
+## None score threshold
+
+You can go to the **project settings** of any project and set the **None score threshold**. The threshold is a decimal score from **0.0** to **1.0**.
+
+If, for any query or utterance, the highest scoring intent's score is **lower** than the threshold score, the top intent will be automatically replaced with the None intent. The scores of all the other intents remain unchanged.
+
+The score should be set according to your own observations of prediction scores, as they may vary by project. A higher threshold score forces the utterances to be more similar to the examples you have in your training data.
+
+When you export a project's JSON file, the None score threshold is defined in the _**"settings"**_ parameter of the JSON as the _**"confidenceThreshold"**_, which accepts a decimal value between 0.0 and 1.0.
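+
+For example, an exported project with a None score threshold of 0.7 contains a settings object similar to the following excerpt (other fields omitted):
+
+```json
+{
+  "metadata": {
+    "settings": {
+      "confidenceThreshold": 0.7
+    }
+  }
+}
+```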
+
+> [!NOTE]
+> During model evaluation of your test set, the None score threshold is not applied.
+
+## Adding examples to the None intent
+
+The None intent is also treated like any other intent in your project. If there are utterances that you want predicted as None, consider adding similar examples to your training data. For example, if you would like to categorize utterances that are not important to your project as None, such as greetings, yes and no answers, or responses to questions such as providing a number, then add those utterances to your None intent.
+
+You should also consider adding false positive examples to the None intent. For example, in a flight booking project it is likely that the utterance "I want to buy a book" could be confused with a Book Flight intent. Adding "I want to buy a book" or "I love reading books" as None training utterances helps alter the predictions of those types of utterances towards the None intent instead of Book Flight.
+
+## Next steps
+
+[Conversational language understanding overview](../overview.md)
ai-services Faq https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/language-service/conversational-language-understanding/faq.md
+
+ Title: Frequently Asked Questions
+
+description: Use this article to quickly get the answers to FAQ about conversational language understanding
++++++ Last updated : 09/29/2022++++
+# Frequently asked questions for conversational language understanding
+
+Use this article to quickly get the answers to common questions about conversational language understanding.
+
+## How do I create a project?
+
+See the [quickstart](./quickstart.md) to quickly create your first project, or the [how-to article](./how-to/create-project.md) for more details.
++
+## Can I use more than one conversational language understanding project together?
+
+Yes, using orchestration workflow. See the [orchestration workflow documentation](../orchestration-workflow/overview.md) for more information.
+
+## What is the difference between LUIS and conversational language understanding?
+
+Conversational language understanding is the next generation of LUIS.
+
+## Training is taking a long time, is this expected?
+
+For conversation projects, long training times are expected. Based on the number of examples you have, your training times may vary from 5 minutes to 1 hour or more.
+
+## How do I use entity components?
+
+See the [entity components](./concepts/entity-components.md) article.
+
+## Which languages are supported in this feature?
+
+See the [language support](./language-support.md) article.
+
+## How do I get more accurate results for my project?
+
+Take a look at the [recommended guidelines](./how-to/build-schema.md#guidelines-and-recommendations) for information on improving accuracy.
+
+## How do I get predictions in different languages?
+
+When you train and deploy a conversation project in any language, you can immediately try querying it in [multiple languages](./concepts/multiple-languages.md). You may get varied results for different languages. To improve the accuracy of any language, add utterances to your project in that language to introduce the trained model to more syntax of that language.
+
+## How many intents, entities, utterances can I add to a project?
+
+See the [service limits](./service-limits.md) article.
+
+## Can I label the same word as 2 different entities?
+
+Unlike LUIS, you cannot label the same text as 2 different entities. Learned components across different entities are mutually exclusive, and only one learned span is predicted for each set of characters.
+
+## Can I import a LUIS JSON file into conversational language understanding?
+
+Yes, you can [import any LUIS application](./how-to/migrate-from-luis.md) JSON file from the latest version in the service.
+
+## Can I import a LUIS `.LU` file into conversational language understanding?
+
+No, the service only supports JSON format. You can go to LUIS, import the `.LU` file and export it as a JSON file.
+
+## Can I use conversational language understanding with custom question answering?
+
+Yes, you can use [orchestration workflow](../orchestration-workflow/overview.md) to orchestrate between different conversational language understanding and [question answering](../question-answering/overview.md) projects. Start by creating orchestration workflow projects, then connect your conversational language understanding and custom question answering projects. To perform this action, make sure that your projects are under the same Language resource.
+
+## How do I handle out of scope or domain utterances that aren't relevant to my intents?
+
+Add any out of scope utterances to the [none intent](./concepts/none-intent.md).
+
+## How do I control the none intent?
+
+You can control the None intent threshold from the UI through the project settings, by changing the None score threshold value. The value can be between 0.0 and 1.0. You can also change this threshold from the APIs by changing the *confidenceThreshold* value in the settings object. Learn more about the [None intent](./concepts/none-intent.md#none-score-threshold).
+
+## Is there any SDK support?
+
+Yes, only for predictions, and samples are available for [Python](https://aka.ms/sdk-samples-conversation-python) and [C#](https://aka.ms/sdk-sample-conversation-dot-net). There is currently no authoring support for the SDK.
+
+## What are the training modes?
++
+|Training mode | Description | Language availability | Pricing |
+|||||
+|Standard training | Faster training times for quicker model iteration. | Can only train projects in English. | Included in your [pricing tier](https://azure.microsoft.com/pricing/details/cognitive-services/language-service/). |
+|Advanced training | Slower training times using fine-tuned neural network transformer models. | Can train [multilingual projects](language-support.md#multi-lingual-option). | May incur [additional charges](https://azure.microsoft.com/pricing/details/cognitive-services/language-service/). |
+
+See [training modes](how-to/train-model.md#training-modes) for more information.
+
+## Are there APIs for this feature?
+
+Yes, all the APIs are available.
+* [Authoring APIs](https://aka.ms/clu-authoring-apis)
+* [Prediction API](/rest/api/language/2023-04-01/conversation-analysis-runtime/analyze-conversation)
+
+## Next steps
+
+[Conversational language understanding overview](overview.md)
ai-services Glossary https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/language-service/conversational-language-understanding/glossary.md
+
+ Title: Definitions used in conversational language understanding
+
+description: Learn about definitions used in conversational language understanding.
++++++ Last updated : 05/13/2022++++
+# Terms and definitions used in conversational language understanding
+
+Use this article to learn about some of the definitions and terms you may encounter when using conversational language understanding.
+
+## Entity
+Entities are words in utterances that describe information used to fulfill or identify an intent. If your entity is complex and you would like your model to identify specific parts, you can break your model into subentities. For example, you might want your model to predict an address, but also the subentities of street, city, state, and zipcode.
+
+## F1 score
+The F1 score is a function of Precision and Recall. It's needed when you seek a balance between [precision](#precision) and [recall](#recall).
+
+## Intent
+An intent represents a task or action the user wants to perform. It's a purpose or goal expressed in a user's input, such as booking a flight, or paying a bill.
+
+## List entity
+A list entity represents a fixed, closed set of related words along with their synonyms. List entities are exact matches, unlike machine-learned entities.
+
+The entity is predicted when any word from the list appears in the utterance. For example, if you have a list entity called "size" with the words "small, medium, large" in the list, then the size entity is predicted for all utterances where the words "small", "medium", or "large" are used, regardless of the context.
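+
+To make that concrete, the following is a hedged sketch of how such a match might surface as an entity in a prediction response for the hypothetical utterance "I'd like a large coffee". The property names (for example `extraInformation` and `ListKey`) reflect the CLU runtime response format but may vary by API version, and the "size" entity is illustrative.
+
+```json
+{
+  "category": "size",
+  "text": "large",
+  "offset": 11,
+  "length": 5,
+  "confidenceScore": 1,
+  "extraInformation": [
+    { "extraInformationKind": "ListKey", "key": "large" }
+  ]
+}
+```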
+
+## Model
+A model is an object that's trained to do a certain task, in this case conversation understanding tasks. Models are trained by providing labeled data to learn from so they can later be used to understand utterances.
+
+* **Model evaluation** is the process that happens right after training to determine how well your model performs.
+* **Deployment** is the process of assigning your model to a deployment to make it available for use via the [prediction API](https://aka.ms/ct-runtime-swagger).
+
+## Overfitting
+
+Overfitting happens when the model is fixated on the specific examples and is not able to generalize well.
+
+## Precision
+Measures how precise/accurate your model is. It's the ratio between the correctly identified positives (true positives) and all identified positives. The precision metric reveals how many of the predicted classes are correctly labeled.
+
+## Project
+A project is a work area for building your custom ML models based on your data. Your project can only be accessed by you and others who have access to the Azure resource being used.
+
+## Recall
+Measures the model's ability to predict actual positive classes. It's the ratio between the predicted true positives and what was actually tagged. The recall metric reveals how many of the actual positive classes are correctly predicted.
+
+## Regular expression
+A regular expression entity represents a regular expression. Regular expression entities are exact matches.
+
+## Schema
+Schema is defined as the combination of intents and entities within your project. Schema design is a crucial part of your project's success. When creating a schema, you should think about which intents and entities should be included in your project.
+
+## Training data
+Training data is the set of information that is needed to train a model.
+
+## Utterance
+
+An utterance is user input that is short text representative of a sentence in a conversation. It is a natural language phrase such as "book 2 tickets to Seattle next Tuesday". Example utterances are added to train the model, and the model predicts on new utterances at runtime.
++
+## Next steps
+
+* [Data and service limits](service-limits.md).
+* [Conversation language understanding overview](../overview.md).
ai-services Build Schema https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/language-service/conversational-language-understanding/how-to/build-schema.md
+
+ Title: How to build a Conversational Language Understanding project schema
+
+description: Use this article to start building a Conversational Language Understanding project schema
++++++ Last updated : 05/13/2022++++
+# How to build your project schema
+
+In conversational language understanding projects, the *schema* is defined as the combination of intents and entities within your project. Schema design is a crucial part of your project's success. When creating a schema, you should think about which intents and entities should be included in your project.
+
+## Guidelines and recommendations
+
+Consider the following guidelines when picking intents for your project:
+
+ 1. Create distinct, separable intents. An intent is best described as an action the user wants to perform. Think of the project you're building and identify all the different actions your users may take when interacting with your project. Sending, calling, and canceling are all actions that are best represented as different intents. "Canceling an order" and "canceling an appointment" are very similar, with the distinction being *what* they are canceling. Those two actions should be represented under the same intent, *Cancel*.
+
+ 2. Create entities to extract relevant pieces of information within your text. The entities should be used to capture the relevant information needed to fulfill your user's action. For example, *order* or *appointment* could be different things a user is trying to cancel, and you should create an entity to capture that piece of information.
+
+You can *"send"* a *message*, *"send"* an *email*, or *"send"* a package. Creating an intent to capture each of those requirements will not scale over time, and you should use entities to identify *what* the user was sending. The combination of intents and entities should determine your conversation flow.
+
+For example, consider a company where the bot developers have identified the three most common actions their users take when using a calendar:
+
+* Set up new meetings
+* Respond to meeting requests
+* Cancel meetings
+
+They might create an intent to represent each of these actions. They might also include entities to help complete these actions, such as:
+
+* Meeting attendees
+* Date
+* Meeting durations
+
+## Add intents
+
+To build a project schema within [Language Studio](https://aka.ms/languageStudio):
+
+1. Select **Schema definition** from the left side menu.
+
+2. From the top pivots, you can change the view to be **Intents** or **Entities**.
+
+3. To create an intent, select **Add** from the top menu. You will be prompted to enter a name before the intent is created.
+
+4. Repeat the above step to create all the intents to capture all the actions that you think the user will want to perform while using the project.
+
+ :::image type="content" source="../media/build-schema-page.png" alt-text="A screenshot showing the schema creation page for conversation projects in Language Studio." lightbox="../media/build-schema-page.png":::
+
+5. When you select the intent, you will be directed to the [Data labeling](tag-utterances.md) page, with a filter set for the intent you selected. You can add examples for intents and label them with entities.
+
+## Add entities
+
+1. Move to **Entities** pivot from the top of the page.
+
+2. To add an entity, select **Add** from the top menu. You will be prompted to enter a name before the entity is created.
+
+3. After creating an entity, you'll be routed to the entity details page where you can define the composition settings for this entity.
+
+4. Every entity can be defined by multiple components: learned, list or prebuilt. A learned component is added to all your entities once you label them in your utterances.
+
+ :::image type="content" source="../media/entity-details.png" alt-text="A screenshot showing the entity details page for conversation projects in Language Studio." lightbox="../media/entity-details.png":::
+
+5. You can add a [list](../concepts/entity-components.md#list-component) or [prebuilt](../concepts/entity-components.md#prebuilt-component) component to each entity.
+
+### Add prebuilt component
+
+To add a **prebuilt** component, select **Add new prebuilt** and from the drop-down menu, select the prebuilt type you want to add to this entity.
+
+ <!--:::image type="content" source="../media/add-prebuilt-component.png" alt-text="A screenshot showing a prebuilt-component in Language Studio." lightbox="../media/add-prebuilt-component.png":::-->
+
+### Add list component
+
+To add a **list** component, select **Add new list**. You can add multiple lists to each entity.
+
+1. To create a new list, in the *Enter value* text box enter the normalized value that will be returned when any of the synonym values is extracted.
+
+2. From the *language* drop-down menu, select the language of the synonyms list, start typing your synonyms, and hit enter after each one. It is recommended to have synonym lists in multiple languages.
+
+ <!--:::image type="content" source="../media/add-list-component.png" alt-text="A screenshot showing a list component in Language Studio." lightbox="../media/add-list-component.png":::-->
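+
+If you work with the project JSON instead of Language Studio, a list component is represented on the entity itself. The following is an abbreviated, hedged sketch based on the CLU project data format; the `Size` entity, its list keys, and the synonyms are hypothetical, and the exact field names may differ between API versions.
+
+```json
+{
+  "category": "Size",
+  "list": {
+    "sublists": [
+      {
+        "listKey": "small",
+        "synonyms": [
+          { "language": "en-us", "values": [ "small", "tiny", "little" ] }
+        ]
+      },
+      {
+        "listKey": "large",
+        "synonyms": [
+          { "language": "en-us", "values": [ "large", "big", "huge" ] }
+        ]
+      }
+    ]
+  }
+}
+```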
+
+### Define entity options
+
+Change to the **Entity options** pivot in the entity details page. When multiple components are defined for an entity, their predictions may overlap. When an overlap occurs, each entity's final prediction is determined based on the [entity option](../concepts/entity-components.md#entity-options) you select in this step. Select the one that you want to apply to this entity and select the **Save** button at the top.
+
+ <!--:::image type="content" source="../media/entity-options.png" alt-text="A screenshot showing an entity option in Language Studio." lightbox="../media/entity-options.png":::-->
++
+After you create your entities, you can come back and edit them. You can **Edit entity components** or **delete** them by selecting this option from the top menu.
+
+## Next Steps
+
+* [Add utterances and label your data](tag-utterances.md)
ai-services Call Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/language-service/conversational-language-understanding/how-to/call-api.md
+
+ Title: Send prediction requests to a conversational language understanding deployment
+
+description: Learn about sending prediction requests for conversational language understanding.
++++++ Last updated : 06/28/2022++++
+# Send prediction requests to a deployment
+
+After the deployment is added successfully, you can query the deployment for intent and entity predictions from your utterance based on the model you assigned to the deployment.
+You can query the deployment programmatically through the [prediction API](https://aka.ms/ct-runtime-swagger) or through the client libraries (Azure SDK).
+
+## Test deployed model
+
+You can use Language Studio to submit an utterance, get predictions and visualize the results.
++++
+## Send a conversational language understanding request
+
+# [Language Studio](#tab/language-studio)
++
+# [REST APIs](#tab/REST-APIs)
+
+First you will need to get your resource key and endpoint:
++
+### Query your model
++
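+As a quick reference, the following is a hedged sketch of a typical prediction request body sent to the analyze-conversations runtime endpoint (a `POST` to `{ENDPOINT}/language/:analyze-conversations?api-version={API-VERSION}` with your key in the `Ocp-Apim-Subscription-Key` header). The utterance text, project name, and deployment name are placeholders, and the exact shape may vary by API version, so verify it against the prediction API reference.
+
+```json
+{
+  "kind": "Conversation",
+  "analysisInput": {
+    "conversationItem": {
+      "id": "1",
+      "participantId": "1",
+      "text": "Book 2 tickets to Seattle next Tuesday"
+    }
+  },
+  "parameters": {
+    "projectName": "{PROJECT-NAME}",
+    "deploymentName": "{DEPLOYMENT-NAME}",
+    "stringIndexType": "TextElement_V8"
+  }
+}
+```
+
+The response returns the top intent, the full list of intents with confidence scores, and any extracted entities.
+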
+# [Client libraries (Azure SDK)](#tab/azure-sdk)
+
+### Use the client libraries (Azure SDK)
+
+You can also use the client libraries provided by the Azure SDK to send requests to your model.
+
+> [!NOTE]
+> The client library for conversational language understanding is only available for:
+> * .NET
+> * Python
+
+1. Go to your resource overview page in the [Azure portal](https://portal.azure.com/#home)
+
+2. From the menu on the left side, select **Keys and Endpoint**. Use the endpoint for the API requests, and you will need the key for the `Ocp-Apim-Subscription-Key` header.
+
+ :::image type="content" source="../../custom-text-classification/media/get-endpoint-azure.png" alt-text="A screenshot showing a key and endpoint in the Azure portal." lightbox="../../custom-text-classification/media/get-endpoint-azure.png":::
++
+3. Download and install the client library package for your language of choice:
+
+ |Language |Package version |
+ |||
+ |.NET | [1.0.0](https://www.nuget.org/packages/Azure.AI.Language.Conversations/1.0.0) |
+ |Python | [1.0.0](https://pypi.org/project/azure-ai-language-conversations/1.0.0) |
+
+4. After you've installed the client library, use the following samples on GitHub to start calling the API.
+
+ * [C#](https://github.com/Azure/azure-sdk-for-net/tree/Azure.AI.Language.Conversations_1.0.0/sdk/cognitivelanguage/Azure.AI.Language.Conversations)
+ * [Python](https://github.com/Azure/azure-sdk-for-python/tree/azure-ai-language-conversations_1.0.0/sdk/cognitivelanguage/azure-ai-language-conversations)
+
+5. See the following reference documentation for more information:
+
+ * [C#](/dotnet/api/azure.ai.language.conversations)
+ * [Python](/python/api/azure-ai-language-conversations/azure.ai.language.conversations.aio)
+
++
+## Next steps
+
+* [Conversational language understanding overview](../overview.md)
ai-services Create Project https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/language-service/conversational-language-understanding/how-to/create-project.md
+
+ Title: How to create projects in Conversational Language Understanding
+
+description: Use this article to learn how to create projects in Conversational Language Understanding.
++++++ Last updated : 09/29/2022++++
+# How to create a CLU project
+
+Use this article to learn how to set up these requirements and create a project.
++
+## Prerequisites
+
+Before you start using CLU, you will need several things:
+
+* An Azure subscription - [Create one for free](https://azure.microsoft.com/free/cognitive-services).
+* An Azure AI Language resource
+
+### Create a Language resource
+
+Before you start using CLU, you will need an Azure AI Language resource.
+
+> [!NOTE]
+> * You need to have an **owner** role assigned on the resource group to create a Language resource.
++++
+## Sign in to Language Studio
++
+## Create a conversation project
+
+Once you have a Language resource created, create a Conversational Language Understanding project.
+
+### [Language Studio](#tab/language-studio)
++
+### [REST APIs](#tab/rest-api)
++++
+## Import project
+
+### [Language Studio](#tab/language-studio)
+
+You can export a Conversational Language Understanding project as a JSON file at any time by going to the conversation projects page, selecting a project, and from the top menu, clicking on **Export**.
++
+That project can be reimported as a new project. If you import a project with the exact same name, it replaces the project's data with the newly imported project's data.
+
+If you have an existing LUIS application, you can _import_ the LUIS application JSON to Conversational Language Understanding directly, and it will create a Conversation project with all the pieces that are currently available: Intents, ML entities, and utterances. See [the LUIS migration article](../how-to/migrate-from-luis.md) for more information.
+
+To import a project, select the arrow button next to **Create a new project** and select **Import**, then select the LUIS or Conversational Language Understanding JSON file.
++
+### [REST APIs](#tab/rest-api)
+
+You can import a CLU JSON into the service.
++++
+## Export project
+
+### [Language Studio](#tab/Language-Studio)
+
+You can export a Conversational Language Understanding project as a JSON file at any time by going to the conversation projects page, selecting a project, and pressing **Export**.
+
+### [REST APIs](#tab/rest-apis)
+
+You can export a Conversational Language Understanding project as a JSON file at any time.
++++
+## Get CLU project details
+
+### [Language Studio](#tab/language-studio)
++
+### [REST APIs](#tab/rest-api)
++++
+## Delete project
+
+### [Language Studio](#tab/language-studio)
++
+### [REST APIs](#tab/rest-api)
+
+When you don't need your project anymore, you can delete your project using the APIs.
++++
+## Next Steps
+
+[Build schema](./build-schema.md)
+
ai-services Deploy Model https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/language-service/conversational-language-understanding/how-to/deploy-model.md
+
+ Title: How to deploy a model for conversational language understanding
+
+description: Use this article to learn how to deploy models for conversational language understanding.
++++++ Last updated : 10/12/2022++++
+# Deploy a model
+
+Once you are satisfied with how your model performs, it's ready to be deployed so that you can query it for predictions from utterances. Deploying a model makes it available for use through the [prediction API](/rest/api/language/2023-04-01/conversation-analysis-runtime/analyze-conversation).
+
+## Prerequisites
+
+* A successfully [created project](create-project.md)
+* [Labeled utterances](tag-utterances.md) and successfully [trained model](train-model.md)
+* Reviewed the [model performance](view-model-evaluation.md) to determine how your model is performing.
+
+See [project development lifecycle](../overview.md#project-development-lifecycle) for more information.
+
+## Deploy model
+
+After you have reviewed the model's performance and decided it's fit to be used in your environment, you need to assign it to a deployment to be able to query it. Assigning the model to a deployment makes it available for use through the [prediction API](/rest/api/language/2023-04-01/conversation-analysis-runtime/analyze-conversation). It is recommended to create a deployment named `production` to which you assign the best model you have built so far and use it in your system. You can create another deployment called `staging` to which you can assign the model you're currently working on to be able to test it. You can have a maximum of 10 deployments in your project.
+
+# [Language Studio](#tab/language-studio)
+
+
+# [REST APIs](#tab/rest-api)
+
+### Submit deployment job
++
+### Get deployment job status
++++
+## Swap deployments
+
+After you are done testing a model assigned to one deployment, you might want to assign it to another deployment. Swapping deployments involves:
+* Taking the model assigned to the first deployment, and assigning it to the second deployment.
+* Taking the model assigned to the second deployment, and assigning it to the first deployment.
+
+This can be used to swap your `production` and `staging` deployments when you want to take the model assigned to `staging` and assign it to `production`.
+
+# [Language Studio](#tab/language-studio)
++
+# [REST APIs](#tab/rest-api)
++++
+## Delete deployment
+
+# [Language Studio](#tab/language-studio)
++
+# [REST APIs](#tab/rest-api)
++++
+## Assign deployment resources
+
+You can [deploy your project to multiple regions](../../concepts/custom-features/multi-region-deployment.md) by assigning different Language resources that exist in different regions.
+
+# [Language Studio](#tab/language-studio)
++
+# [REST APIs](#tab/rest-api)
++++
+## Unassign deployment resources
+
+When unassigning or removing a deployment resource from a project, you will also delete all the deployments that have been deployed to that resource's region.
+
+# [Language Studio](#tab/language-studio)
++
+# [REST APIs](#tab/rest-api)
++++
+## Next steps
+
+* Use [prediction API to query your model](call-api.md)
ai-services Fail Over https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/language-service/conversational-language-understanding/how-to/fail-over.md
+
+ Title: Back up and recover your conversational language understanding models
+
+description: Learn how to save and recover your conversational language understanding models.
++++++ Last updated : 05/16/2022++++
+# Back up and recover your conversational language understanding models
+
+When you create a Language resource in the Azure portal, you specify a region for it to be created in. From then on, your resource and all of the operations related to it take place in the specified Azure region. It's rare, but not impossible, to encounter a network issue that affects an entire region. If your solution needs to always be available, you should design it to fail over into another region. This requires two Azure AI Language resources in different regions and the ability to sync your CLU models across regions.
+
+If your app or business depends on the use of a CLU model, we recommend that you create a replica of your project in another supported region, so that if a regional outage occurs, you can access your model in the fail-over region where you replicated your project.
+
+Replicating a project means that you export your project metadata and assets and import them into a new project. This only makes a copy of your project settings, intents, entities and utterances. You still need to [train](../how-to/train-model.md) and [deploy](../how-to/deploy-model.md) the models to be available for use with [runtime APIs](https://aka.ms/clu-apis).
++
+In this article, you will learn how to use the export and import APIs to replicate your project from one resource to another that exists in a different supported geographical region, guidance on keeping your projects in sync, and the changes needed to your runtime consumption.
+
+## Prerequisites
+
+* Two Azure AI Language resources, each in a different Azure region.
+
+## Get your resource keys and endpoints
+
+Use the following steps to get the keys and endpoint of your primary and secondary resources. These will be used in the following steps.
++
+> [!TIP]
+> Keep a note of keys and endpoints for both primary and secondary resources. Use these values to replace the following placeholders:
+`{PRIMARY-ENDPOINT}`, `{PRIMARY-RESOURCE-KEY}`, `{SECONDARY-ENDPOINT}` and `{SECONDARY-RESOURCE-KEY}`.
+> Also take note of your project name, your model name and your deployment name. Use these values to replace the following placeholders: `{PROJECT-NAME}`, `{MODEL-NAME}` and `{DEPLOYMENT-NAME}`.
+
+## Export your primary project assets
+
+Start by exporting the project assets from the project in your primary resource.
+
+### Submit export job
+
+Replace the placeholders in the following request with your `{PRIMARY-ENDPOINT}` and `{PRIMARY-RESOURCE-KEY}` that you obtained in the first step.
++
+### Get export job status
+
+Replace the placeholders in the following request with your `{PRIMARY-ENDPOINT}` and `{PRIMARY-RESOURCE-KEY}` that you obtained in the first step.
++
+Copy the response body as you will use it as the body for the next import job.
+
+## Import to a new project
+
+Now go ahead and import the exported project assets in your new project in the secondary region so you can replicate it.
+
+### Submit import job
+
+Replace the placeholders in the following request with your `{SECONDARY-ENDPOINT}` and `{SECONDARY-RESOURCE-KEY}` that you obtained in the first step.
++
+### Get import job status
+
+Replace the placeholders in the following request with your `{SECONDARY-ENDPOINT}` and `{SECONDARY-RESOURCE-KEY}` that you obtained in the first step.
+++
+## Train your model
+
+After importing your project, you have only copied the project's assets and metadata. You still need to train your model, which will incur usage on your account.
+
+### Submit training job
+
+Replace the placeholders in the following request with your `{SECONDARY-ENDPOINT}` and `{SECONDARY-RESOURCE-KEY}` that you obtained in the first step.
+++
+### Get Training Status
+
+Replace the placeholders in the following request with your `{SECONDARY-ENDPOINT}` and `{SECONDARY-RESOURCE-KEY}` that you obtained in the first step.
++
+## Deploy your model
+
+This is the step where you make your trained model available for consumption via the [runtime prediction API](https://aka.ms/ct-runtime-swagger).
+
+> [!TIP]
+> Use the same deployment name as your primary project for easier maintenance and minimal changes to your system to handle redirecting your traffic.
+
+### Submit deployment job
+
+Replace the placeholders in the following request with your `{SECONDARY-ENDPOINT}` and `{SECONDARY-RESOURCE-KEY}` that you obtained in the first step.
++
+### Get the deployment status
+
+Replace the placeholders in the following request with your `{SECONDARY-ENDPOINT}` and `{SECONDARY-RESOURCE-KEY}` that you obtained in the first step.
++
+## Changes in calling the runtime
+
+Within your system, at the step where you call the [runtime API](https://aka.ms/clu-apis), check the response code returned from the submit task API. If you observe a **consistent** failure in submitting the request, this could indicate an outage in your primary region. A single failure doesn't mean an outage; it may be a transient issue. Retry submitting the job through the secondary resource you have created. For the second request, use your `{YOUR-SECONDARY-ENDPOINT}` and secondary key. If you have followed the steps above, `{PROJECT-NAME}` and `{DEPLOYMENT-NAME}` are the same, so no changes are required to the request body.
+
+In case you revert to using your secondary resource, you will observe a slight increase in latency because of the difference in regions where your model is deployed.
+
+## Check if your projects are out of sync
+
+Maintaining the freshness of both projects is an important part of the process. You need to frequently check if any updates were made to your primary project so that you can move them over to your secondary project. This way, if your primary region fails and you move to the secondary region, you should expect similar model performance since it already contains the latest updates. Setting the frequency of checking if your projects are in sync is an important choice; we recommend that you do this check daily in order to guarantee the freshness of data in your secondary model.
+
+### Get project details
+
+Use the following URL to get your project details; one of the keys returned in the body indicates the last modified date of the project.
+Repeat the following step twice, once for your primary project and once for your secondary project, and compare the timestamps returned for both of them to check if they are out of sync.
++
+Repeat the same steps for your replicated project using `{SECONDARY-ENDPOINT}` and `{SECONDARY-RESOURCE-KEY}`. Compare the returned `lastModifiedDateTime` from both projects. If your primary project was modified sooner than your secondary one, you need to repeat the steps of [exporting](#export-your-primary-project-assets), [importing](#import-to-a-new-project), [training](#train-your-model) and [deploying](#deploy-your-model) your model.
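+
+For reference, the relevant part of the project details response looks roughly like the following sketch. The values are illustrative and the exact field set may vary by API version; `lastModifiedDateTime` is the key to compare between the two resources.
+
+```json
+{
+  "createdDateTime": "2023-05-01T10:00:00Z",
+  "lastModifiedDateTime": "2023-07-10T14:30:00Z",
+  "lastTrainedDateTime": "2023-07-10T15:00:00Z",
+  "lastDeployedDateTime": "2023-07-10T15:30:00Z",
+  "projectKind": "Conversation",
+  "projectName": "FlightBooking",
+  "language": "en-us"
+}
+```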
+
+## Next steps
+
+In this article, you have learned how to use the export and import APIs to replicate your project to a secondary Language resource in another region. Next, explore the API reference docs to see what else you can do with authoring APIs.
+
+* [Authoring REST API reference ](https://aka.ms/clu-authoring-apis)
+
+* [Runtime prediction REST API reference ](https://aka.ms/clu-apis)
+
ai-services Migrate From Luis https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/language-service/conversational-language-understanding/how-to/migrate-from-luis.md
+
+ Title: Conversational Language Understanding backwards compatibility
+
+description: Learn about backwards compatibility between LUIS and Conversational Language Understanding
++++++ Last updated : 09/08/2022++++
+# Migrate from Language Understanding (LUIS) to conversational language understanding (CLU)
+
+[Conversational language understanding (CLU)](../overview.md) is a cloud-based AI offering in Azure AI Language. It's the newest generation of [Language Understanding (LUIS)](../../../luis/what-is-luis.md) and offers backwards compatibility with previously created LUIS applications. CLU employs state-of-the-art machine learning intelligence to allow users to build a custom natural language understanding model for predicting intents and entities in conversational utterances.
+
+CLU offers the following advantages over LUIS:
+
+- Improved accuracy with state-of-the-art machine learning models for better intent classification and entity extraction. LUIS required more examples to generalize certain concepts in intents and entities, while CLU's more advanced machine learning reduces the burden on customers by requiring significantly less data.
+- Multilingual support for model learning and training. Train projects in one language and immediately predict intents and entities across 96 languages.
+- Ease of integration with different CLU and [custom question answering](../../question-answering/overview.md) projects using [orchestration workflow](../../orchestration-workflow/overview.md).
+- The ability to add testing data within the experience using Language Studio and APIs for model performance evaluation prior to deployment.
+
+To get started, you can [create a new project](../quickstart.md?pivots=language-studio#create-a-conversational-language-understanding-project) or [migrate your LUIS application](#migrate-your-luis-applications).
+
+## Comparison between LUIS and CLU
+
+The following table presents a side-by-side comparison between the features of LUIS and CLU. It also highlights the changes to your LUIS application after migrating to CLU. Select the linked concept to learn more about the changes.
+
+|LUIS features | CLU features | Post migration |
+|::|:-:|:--:|
+|Machine-learned and Structured ML entities| Learned [entity components](#how-are-entities-different-in-clu) |Machine-learned entities without subentities will be transferred as CLU entities. Structured ML entities will only transfer leaf nodes (lowest level subentities that do not have their own subentities) as entities in CLU. The name of the entity in CLU will be the name of the subentity concatenated with the parent. For example, _Order.Size_|
+|List, regex, and prebuilt entities| List, regex, and prebuilt [entity components](#how-are-entities-different-in-clu) | List, regex, and prebuilt entities will be transferred as entities in CLU with a populated entity component based on the entity type.|
+|`Pattern.Any` entities| Not currently available | `Pattern.Any` entities will be removed.|
+|Single culture for each application|[Multilingual models](#how-is-conversational-language-understanding-multilingual) enable multiple languages for each project. |The primary language of your project will be set as your LUIS application culture. Your project can be trained to extend to different languages.|
+|Entity roles |[Roles](#how-are-entity-roles-transferred-to-clu) are no longer needed. | Entity roles will be transferred as entities.|
+|Settings for: normalize punctuation, normalize diacritics, normalize word form, use all training data |[Settings](#how-is-the-accuracy-of-clu-better-than-luis) are no longer needed. |Settings will not be transferred. |
+|Patterns and phrase list features|[Patterns and Phrase list features](#how-is-the-accuracy-of-clu-better-than-luis) are no longer needed. |Patterns and phrase list features will not be transferred. |
+|Entity features| Entity components| List or prebuilt entities added as features to an entity will be transferred as added components to that entity. [Entity features](#how-do-entity-features-get-transferred-in-clu) will not be transferred for intents. |
+|Intents and utterances| Intents and utterances |All intents and utterances will be transferred. Utterances will be labeled with their transferred entities. |
+|Application GUIDs |Project names| A project will be created for each migrating application with the application name. Any special characters in the application names will be removed in CLU.|
+|Versioning| Every time you train, a model is created and acts as a version of your [project](#how-do-i-manage-versions-in-clu). | A project will be created for the selected application version. |
+|Evaluation using batch testing |Evaluation using testing sets | [Adding your testing dataset](../how-to/tag-utterances.md#how-to-label-your-utterances) will be required.|
+|Role-Based Access Control (RBAC) for LUIS resources |Role-Based Access Control (RBAC) available for Language resources |Language resource RBAC must be [manually added after migration](../../concepts/role-based-access-control.md). |
+|Single training mode| Standard and advanced [training modes](#how-are-the-training-times-different-in-clu-how-is-standard-training-different-from-advanced-training) | Training will be required after application migration. |
+|Two publishing slots and version publishing |Ten deployment slots with custom naming | Deployment will be required after the application's migration and training. |
+|LUIS authoring APIs and SDK support in .NET, Python, Java, and Node.js |[CLU Authoring REST APIs](https://aka.ms/clu-authoring-apis). | For more information, see the [quickstart article](../quickstart.md?pivots=rest-api) for information on the CLU authoring APIs. [Refactoring](#do-i-have-to-refactor-my-code-if-i-migrate-my-applications-from-luis-to-clu) will be necessary to use the CLU authoring APIs. |
+|LUIS Runtime APIs and SDK support in .NET, Python, Java, and Node.js |[CLU Runtime APIs](https://aka.ms/clu-runtime-api). CLU Runtime SDK support for [.NET](/dotnet/api/overview/azure/ai.language.conversations-readme) and [Python](/python/api/overview/azure/ai-language-conversations-readme?view=azure-python-preview&preserve-view=true). | See [how to call the API](../how-to/call-api.md#use-the-client-libraries-azure-sdk) for more information. [Refactoring](#do-i-have-to-refactor-my-code-if-i-migrate-my-applications-from-luis-to-clu) will be necessary to use the CLU runtime API response. |
+
+## Migrate your LUIS applications
+
+Use the following steps to migrate your LUIS application using either the LUIS portal or REST API.
+
+# [LUIS portal](#tab/luis-portal)
+
+## Migrate your LUIS applications using the LUIS portal
+
+Follow these steps to begin migration using the [LUIS Portal](https://www.luis.ai/):
+
+1. After logging into the LUIS portal, click the button on the banner at the top of the screen to launch the migration wizard. The migration will only copy your selected LUIS applications to CLU.
+
+ :::image type="content" source="../media/backwards-compatibility/banner.svg" alt-text="A screenshot showing the migration banner in the LUIS portal." lightbox="../media/backwards-compatibility/banner.svg":::
++
+ The migration overview tab provides a brief explanation of conversational language understanding and its benefits. Press Next to proceed.
+
+ :::image type="content" source="../media/backwards-compatibility/migration-overview.svg" alt-text="A screenshot showing the migration overview window." lightbox="../media/backwards-compatibility/migration-overview.svg":::
+
+1. Determine the Language resource that you wish to migrate your LUIS application to. If you have already created your Language resource, select your Azure subscription followed by your Language resource, and then select **Next**. If you don't have a Language resource, click the link to create a new Language resource. Afterwards, select the resource and select **Next**.
+
+ :::image type="content" source="../media/backwards-compatibility/select-resource.svg" alt-text="A screenshot showing the resource selection window." lightbox="../media/backwards-compatibility/select-resource.svg":::
+
+1. Select all your LUIS applications that you want to migrate, and specify each of their versions. Select **Next**. After selecting your application and version, you will be prompted with a message informing you of any features that won't be carried over from your LUIS application.
+
+ > [!NOTE]
+ > Special characters are not supported by conversational language understanding. Any special characters in your selected LUIS application names will be removed in your new migrated applications.
+ :::image type="content" source="../media/backwards-compatibility/select-applications.svg" alt-text="A screenshot showing the application selection window." lightbox="../media/backwards-compatibility/select-applications.svg":::
+
+1. Review your Language resource and LUIS applications selections. Select **Finish** to migrate your applications.
+
+1. A popup window will let you track the migration status of your applications. Applications that have not started migrating will have a status of **Not started**. Applications that have begun migrating will have a status of **In progress**, and once they have finished migrating their status will be **Succeeded**. A **Failed** application means that you must repeat the migration process. Once the migration has completed for all applications, select **Done**.
+
+ :::image type="content" source="../media/backwards-compatibility/migration-progress.svg" alt-text="A screenshot showing the application migration progress window." lightbox="../media/backwards-compatibility/migration-progress.svg":::
+
+1. After your applications have migrated, you can perform the following steps:
+
+ * [Train your model](../how-to/train-model.md?tabs=language-studio)
+ * [Deploy your model](../how-to/deploy-model.md?tabs=language-studio)
+ * [Call your deployed model](../how-to/call-api.md?tabs=language-studio)
+
+# [REST API](#tab/rest-api)
+
+## Migrate your LUIS applications using REST APIs
+
+Follow these steps to begin migration programmatically using the CLU Authoring REST APIs:
+
+1. Export your LUIS application in JSON format. You can use the [LUIS Portal](https://www.luis.ai/) to export your applications, or the [LUIS programmatic APIs](https://westus.dev.cognitive.microsoft.com/docs/services/luis-programmatic-apis-v3-0-preview/operations/5890b47c39e2bb052c5b9c40).
+
+1. Submit a POST request using the following URL, headers, and JSON body to import LUIS application into your CLU project. CLU does not support names with special characters so remove any special characters from the project name.
+
+ ### Request URL
+ ```rest
+ {ENDPOINT}/language/authoring/analyze-conversations/projects/{PROJECT-NAME}/:import?api-version={API-VERSION}&format=luis
+ ```
+
+ |Placeholder |Value | Example |
+ ||||
+ |`{ENDPOINT}` | The endpoint for authenticating your API request. | `https://<your-custom-subdomain>.cognitiveservices.azure.com` |
+ |`{PROJECT-NAME}` | The name for your project. This value is case sensitive. | `myProject` |
+ |`{API-VERSION}` | The [version](../../concepts/model-lifecycle.md#api-versions) of the API you are calling. | `2023-04-01` |
+
+ ### Headers
+
+ Use the following header to authenticate your request.
+
+ |Key|Value|
+ |--|--|
+ |`Ocp-Apim-Subscription-Key`| The key to your resource. Used for authenticating your API requests.|
+
+ ### JSON body
+
+ Use the exported LUIS JSON data as your body.
+
+1. After your applications have migrated, you can perform the following steps:
+
+ * [Train your model](../how-to/train-model.md?tabs=language-studio)
+ * [Deploy your model](../how-to/deploy-model.md?tabs=language-studio)
+ * [Call your deployed model](../how-to/call-api.md?tabs=language-studio)
+++
+## Frequently asked questions
+
+### Which LUIS JSON version is supported by CLU?
+
+CLU supports the model JSON version 7.0.0. If the JSON format is older, it would need to be imported into LUIS first, then exported from LUIS with the most recent version.
+
+### How are entities different in CLU?
+
+In CLU, a single entity can have multiple entity components, which are different methods for extraction. Those components are then combined together using rules you can define. The available components are:
+- Learned: Equivalent to ML entities in LUIS, labels are used to train a machine-learned model to predict an entity based on the content and context of the provided labels.
+- List: Just like list entities in LUIS, list components exactly match a set of synonyms and map them back to a normalized value called a **list key**.
+- Prebuilt: Prebuilt components allow you to define an entity with the prebuilt extractors for common types available in both LUIS and CLU.
+- Regex: Regex components use regular expressions to capture custom defined patterns, exactly like regex entities in LUIS.
+
+Entities in LUIS will be transferred over as entities of the same name in CLU with the equivalent components transferred.
+
+After migrating, your structured machine-learned leaf nodes and bottom-level subentities will be transferred to the new CLU model while all the parent entities and higher-level entities will be ignored. The name of the entity will be the bottom-level entity's name concatenated with its parent entity.
+
+#### Example:
+
+LUIS entity:
+
+* Pizza Order
+ * Topping
+ * Size
+
+Migrated LUIS entity in CLU:
+
+* Pizza Order.Topping
+* Pizza Order.Size
+
+You also cannot label two different entities in CLU for the same span of characters. Learned components in CLU are mutually exclusive and do not provide overlapping predictions (this applies to learned components only). When migrating your LUIS application, overlapping entity labels preserved only the longest label and ignored any others.
+
+For more information on entity components, see [Entity components](../concepts/entity-components.md).
+
+### How are entity roles transferred to CLU?
+
+Your roles will be transferred as distinct entities along with their labeled utterances. Each role's entity type will determine which entity component will be populated. For example, a list entity role will be transferred as an entity with the same name as the role, with a populated list component.
+
+### How do entity features get transferred in CLU?
+
+Entities used as features for intents will not be transferred. Entities used as features for other entities will populate the relevant component of the entity. For example, if a list entity named _SizeList_ was used as a feature to a machine-learned entity named _Size_, then the _Size_ entity will be transferred to CLU with the list values from _SizeList_ added to its list component. The same is applied for prebuilt and regex entities.
+
+### How are entity confidence scores different in CLU?
+
+Any extracted entity has a 100% confidence score and therefore entity confidence scores should not be used to make decisions between entities.
+
+### How is conversational language understanding multilingual?
+
+Conversational language understanding projects accept utterances in different languages. Furthermore, you can train your model in one language and extend it to predict in other languages.
+
+#### Example:
+
+Training utterance (English): *How are you?*
+
+Labeled intent: Greeting
+
+Runtime utterance (French): *Comment ça va?*
+
+Predicted intent: Greeting
+
+### How is the accuracy of CLU better than LUIS?
+
+CLU uses state-of-the-art models to enhance machine learning performance for intent classification and entity extraction.
+
+These models are insensitive to minor variations, removing the need for the following settings: _Normalize punctuation_, _normalize diacritics_, _normalize word form_, and _use all training data_.
+
+Additionally, the new models do not support phrase list features as they no longer require supplementary information from the user to provide semantically similar words for better accuracy. Patterns were also used to provide improved intent classification using rule-based matching techniques that are not necessary in the new model paradigm. The question below explains this in more detail.
+
+### What do I do if the features I am using in LUIS are no longer present?
+
+There are several features that were present in LUIS that are no longer available in CLU. These include feature engineering, patterns and pattern.any entities, and structured entities. If you had dependencies on these features in LUIS, use the following guidance:
+
+- **Patterns**: Patterns were added in LUIS to assist intent classification by defining regular expression template utterances. This included the ability to define pattern-only intents (without utterance examples). CLU is capable of generalizing by leveraging the state-of-the-art models. You can provide a few utterances that match a specific pattern to the intent in CLU, and it will likely classify the different patterns as the top intent without the need for the pattern template utterance. This simplifies the requirement to formulate these patterns, which was limited in LUIS, and provides a better intent classification experience.
+
+- **Phrase list features**: Phrase list features mainly served to assist intent classification by highlighting the key elements/features to use. This is no longer required since the deep models used in CLU already possess the ability to identify the elements that are inherent in the language. In turn, removing these features will have no effect on the classification ability of the model.
+
+- **Structured entities**: The ability to define structured entities was mainly to enable multilevel parsing of utterances. With the different possibilities of the sub-entities, LUIS needed all the different combinations of entities to be defined and presented to the model as examples. In CLU, these structured entities are no longer supported, since overlapping learned components are not supported. There are a few possible approaches to handling these structured extractions:
+ - **Non-ambiguous extractions**: In most cases the detection of the leaf entities is enough to understand the required items within a full span. For example, structured entity such as _Trip_ that fully spanned a source and destination (_London to New York_ or _Home to work_) can be identified with the individual spans predicted for source and destination. Their presence as individual predictions would inform you of the _Trip_ entity.
+ - **Ambiguous extractions**: When the boundaries of different sub-entities are not very clear. To illustrate, take the example "I want to order a pepperoni pizza and an extra cheese vegetarian pizza". While the different pizza types as well as the topping modifications can be extracted, extracting them without context leaves a degree of ambiguity about where the extra cheese is added. In this case, the extent of the span is context based and requires ML to determine it. For ambiguous extractions you can use one of the following approaches:
+
+1. Combine sub-entities into different entity components within the same entity.
+
+#### Example:
+
+LUIS Implementation:
+
+* Pizza Order (entity)
+ * Size (subentity)
+ * Quantity (subentity)
+
+CLU Implementation:
+
+* Pizza Order (entity)
+ * Size (list entity component: small, medium, large)
+ * Quantity (prebuilt entity component: number)
+
+In CLU, you would label the entire span for _Pizza Order_ inclusive of the size and quantity, which would return the pizza order with a list key for size and a number value for quantity in the same entity object; a sketch of such a response entity appears after this list.
+
+2. For more complex problems where entities contain several levels of depth, you can create a project for each level of depth in the entity structure. This gives you the option to:
+- Pass the utterance to each project.
+- Combine the analyses of each project in the stage that follows CLU.
+
+For a detailed example on this concept, check out the pizza sample projects available on [GitHub](https://aka.ms/clu-pizza).
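+
+Building on the pizza example above, the following is a hedged sketch of how a single _Pizza Order_ entity prediction might carry both the size list key and the quantity number for the hypothetical utterance "I want to order 2 large pepperoni pizzas". Property names such as `extraInformation` and `resolutions` reflect the CLU runtime response format described later in this article, but may appear slightly differently (for example, `resolution`) depending on the API version.
+
+```json
+{
+  "category": "Pizza Order",
+  "text": "2 large pepperoni pizzas",
+  "offset": 16,
+  "length": 24,
+  "confidenceScore": 1,
+  "extraInformation": [
+    { "extraInformationKind": "ListKey", "key": "large" }
+  ],
+  "resolutions": [
+    { "resolutionKind": "NumberResolution", "numberKind": "Integer", "value": 2 }
+  ]
+}
+```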
+
+### How do I manage versions in CLU?
+
+CLU saves the data assets used to train your model. You can export a model's assets or load them back into the project at any point. So models act as different versions of your project.
+
+You can export your CLU projects using [Language Studio](https://language.cognitive.azure.com/home) or [programmatically](../how-to/fail-over.md#export-your-primary-project-assets) and store different versions of the assets locally.
+
+### Why is CLU classification different from LUIS? How does None classification work?
+
+CLU presents a different approach to training models by using multi-classification as opposed to binary classification. As a result, the interpretation of scores is different and also differs across training options. While you are likely to achieve better results, you have to observe the difference in scores and determine a new threshold for accepting intent predictions. You can easily add a confidence score threshold for the [None intent](../concepts/none-intent.md) in your project settings. This will return *None* as the top intent if the top intent did not exceed the confidence score threshold provided.
+
+### Do I need more data for CLU models than LUIS?
+
+The new CLU models have better semantic understanding of language than in LUIS, and in turn help make models generalize with a significant reduction of data. While you shouldn't aim to reduce the amount of data that you have, you should expect better performance and resilience to variations and synonyms in CLU compared to LUIS.
+
+### If I don't migrate my LUIS apps, will they be deleted?
+
+Your existing LUIS applications will be available until October 1, 2025. After that time you will no longer be able to use those applications, the service endpoints will no longer function, and the applications will be permanently deleted.
+
+### Are .LU files supported on CLU?
+
+Only JSON format is supported by CLU. You can import your .LU files to LUIS and export them in JSON format, or you can follow the migration steps above for your application.
+
+### What are the service limits of CLU?
+
+See the [service limits](../service-limits.md) article for more information.
+
+### Do I have to refactor my code if I migrate my applications from LUIS to CLU?
+
+The API objects of CLU applications are different from LUIS and therefore code refactoring will be necessary.
+
+If you are using the LUIS [programmatic](https://westus.dev.cognitive.microsoft.com/docs/services/luis-programmatic-apis-v3-0-preview/operations/5890b47c39e2bb052c5b9c40) and [runtime](https://westus.dev.cognitive.microsoft.com/docs/services/luis-endpoint-api-v3-0/operations/5cb0a9459a1fe8fa44c28dd8) APIs, you can replace them with their equivalent APIs.
+
+[CLU authoring APIs](https://aka.ms/clu-authoring-apis): Instead of LUIS's specific CRUD APIs for individual actions such as _add utterance_, _delete entity_, and _rename intent_, CLU offers an [import API](/rest/api/language/2023-04-01/conversational-analysis-authoring/import) that replaces the full content of a project using the same name. If your service used LUIS programmatic APIs to provide a platform for other customers, you must consider this new design paradigm. All other APIs such as: _listing projects_, _training_, _deploying_, and _deleting_ are available. APIs for actions such as _importing_ and _deploying_ are asynchronous operations instead of synchronous as they were in LUIS.
+
+[CLU runtime APIs](https://aka.ms/clu-runtime-api): The new API request and response includes many of the same parameters such as: _query_, _prediction_, _top intent_, _intents_, _entities_, and their values. The CLU response object offers a more straightforward approach. Entity predictions are provided as they are within the utterance text, and any additional information such as resolution or list keys are provided in extra parameters called `extraInformation` and `resolution`.
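+
+For orientation, the following is an abbreviated, hedged sketch of what a CLU runtime response can look like. The intent names, scores, and entity values are illustrative, and the exact set of fields may vary by API version; verify against the runtime API reference before refactoring your response handling.
+
+```json
+{
+  "kind": "ConversationResult",
+  "result": {
+    "query": "Book 2 tickets to Seattle next Tuesday",
+    "prediction": {
+      "projectKind": "Conversation",
+      "topIntent": "BookFlight",
+      "intents": [
+        { "category": "BookFlight", "confidenceScore": 0.96 },
+        { "category": "None", "confidenceScore": 0.02 }
+      ],
+      "entities": [
+        { "category": "Destination", "text": "Seattle", "offset": 18, "length": 7, "confidenceScore": 1 }
+      ]
+    }
+  }
+}
+```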
+
+You can use the [.NET](https://github.com/Azure/azure-sdk-for-net/tree/Azure.AI.Language.Conversations_1.0.0-beta.3/sdk/cognitivelanguage/Azure.AI.Language.Conversations/samples/) or [Python](https://github.com/Azure/azure-sdk-for-python/blob/azure-ai-language-conversations_1.1.0b1/sdk/cognitivelanguage/azure-ai-language-conversations/samples/README.md) CLU runtime SDK to replace the LUIS runtime SDK. There is currently no authoring SDK available for CLU.
+
+### How are the training times different in CLU? How is standard training different from advanced training?
+
+CLU offers standard training, which trains and learns in English and is comparable to the training time of LUIS. It also offers advanced training, which takes a considerably longer duration as it extends the training to all other [supported languages](../language-support.md). The train API will continue to be an asynchronous process, and you will need to assess the change in the DevOps process you employ for your solution.
+
+### How has the experience changed in CLU compared to LUIS? How is the development lifecycle different?
+
+In LUIS you would Build-Train-Test-Publish, whereas in CLU you Build-Train-Evaluate-Deploy-Test.
+
+1. **Build**: In CLU, you can define your intents, entities, and utterances before you train. CLU additionally offers you the ability to specify _test data_ as you build your application to be used for model evaluation. Evaluation assesses how well your model is performing on your test data and provides you with precision, recall, and F1 metrics.
+2. **Train**: You create a model with a name each time you train. You can overwrite an already trained model. You can specify either _standard_ or _advanced_ training, and determine whether to use your test data for evaluation or to hold out a percentage of your training data from training and use it as testing data. After training is complete, you can evaluate how well your model performs on that held-out data.
+3. **Deploy**: After training is complete and you have a model with a name, it can be deployed for predictions. A deployment is also named and has an assigned model. You could have multiple deployments for the same model. A deployment can be overwritten with a different model, or you can swap models with other deployments in the project.
+4. **Test**: Once deployment is complete, you can use it for predictions through the deployment endpoint. You can also test it in the studio in the Test deployment page.
+
+This process is in contrast to LUIS, where the application ID was attached to everything, and you deployed a version of the application in either the staging or production slots.
+
+This will influence the DevOps processes you use.
+
+### Does CLU have container support?
+
+No, you cannot export CLU to containers.
+
+### How will my LUIS applications be named in CLU after migration?
+
+Any special characters in the LUIS application name are removed. If the resulting name is longer than 50 characters, the extra characters are removed. If the name is empty after special characters are removed (for example, if the LUIS application name was `@@`), the new name is _untitled_. If a conversational language understanding project with the same name already exists, the migrated LUIS application is appended with `_1` for the first duplicate, and the number increases by 1 for each additional duplicate. If the new name is already 50 characters long and needs a number appended, the last 1 or 2 characters are removed so the name plus the number still fits within the 50-character limit.
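+
+As a rough sketch only (not the migration tool's actual implementation), the renaming logic described above could be approximated as follows; the exact set of characters treated as "special" is an assumption based on the CLU naming limits:
+
+```python
+import re
+
+def migrated_project_name(luis_name: str, existing_names: set) -> str:
+    # Assumption: characters other than letters, digits, '_', '.', and '-' are removed.
+    name = re.sub(r"[^A-Za-z0-9_.-]", "", luis_name)[:50]
+    if not name:
+        name = "untitled"
+    candidate, n = name, 0
+    while candidate in existing_names:
+        n += 1
+        suffix = f"_{n}"
+        # Trim the end so the numbered name still fits within the 50-character limit.
+        candidate = name[: 50 - len(suffix)] + suffix
+    return candidate
+```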
+
+## Migration from LUIS Q&A
+
+If you have any questions that were unanswered in this article, consider leaving your questions at our [Microsoft Q&A thread](https://aka.ms/luis-migration-qna-thread).
+
+## Next steps
+* [Quickstart: create a CLU project](../quickstart.md)
+* [CLU language support](../language-support.md)
+* [CLU FAQ](../faq.md)
ai-services Tag Utterances https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/language-service/conversational-language-understanding/how-to/tag-utterances.md
+
+ Title: How to tag utterances in Conversational Language Understanding
+
+description: Use this article to tag your utterances in Conversational Language Understanding projects
++++++ Last updated : 06/03/2022++++
+# Label your utterances in Language Studio
+
+Once you have [built a schema](build-schema.md) for your project, you should add training utterances to your project. The utterances should be similar to what your users will use when interacting with the project. When you add an utterance, you have to assign which intent it belongs to. After the utterance is added, label the words within your utterance that you want to extract as entities.
+
+Data labeling is a crucial step in the development lifecycle; this data is used in the next step when training your model so that your model can learn from the labeled data. If you already have labeled utterances, you can [import them into your project](create-project.md#import-project) directly, but you need to make sure that your data follows the [accepted data format](../concepts/data-formats.md). See [create project](create-project.md#import-project) to learn more about importing labeled data into your project. Labeled data informs the model how to interpret text, and is used for training and evaluation.
+
+## Prerequisites
+
+Before you can label your data, you need:
+
+* A successfully [created project](create-project.md).
+
+See the [project development lifecycle](../overview.md#project-development-lifecycle) for more information.
+
+## Data labeling guidelines
+
+After [building your schema](build-schema.md) and [creating your project](create-project.md), you will need to label your data. Labeling your data is important so your model knows which words and sentences will be associated with the intents and entities in your project. You will want to spend time labeling your utterances - introducing and refining the data that will be used in training your models.
+
+As you add utterances and label them, keep in mind:
+
+* The machine learning models generalize based on the labeled examples you provide it; the more examples you provide, the more data points the model has to make better generalizations.
+
+* The precision, consistency, and completeness of your labeled data are key factors in determining model performance.
+
+    * **Label precisely**: Always label each intent and entity with its correct type. Only include what you want classified and extracted; avoid unnecessary data in your labels.
+ * **Label consistently**: The same entity should have the same label across all the utterances.
+ * **Label completely**: Provide varied utterances for every intent. Label all the instances of the entity in all your utterances.
+
+* For [Multilingual projects](../language-support.md#multi-lingual-option), adding utterances in other languages increases the model's performance in these languages, but avoid duplicating your data across all the languages you would like to support. For example, to improve a calendar bot's performance with users, a developer might add examples mostly in English, and a few in Spanish or French as well. They might add utterances such as:
+
+ * "_Set a meeting with **Matt** and **Kevin** **tomorrow** at **12 PM**._" (English)
+ * "_Reply as **tentative** to the **weekly update** meeting._" (English)
+ * "_Cancelar mi **pr├│xima** reuni├│n_." (Spanish)
+
+## How to label your utterances
+
+Use the following steps to label your utterances:
+
+1. Go to your project page in [Language Studio](https://aka.ms/languageStudio).
+
+2. From the left side menu, select **Data labeling**. On this page, you can start adding your utterances and labeling them. You can also upload your utterances directly by selecting **Upload utterance file** from the top menu; make sure they follow the [accepted format](../concepts/data-formats.md#utterance-file-format).
+
+3. From the top pivots, you can change the view to be **training set** or **testing set**. Learn more about [training and testing sets](train-model.md#data-splitting) and how they're used for model training and evaluation.
+
+ :::image type="content" source="../media/tag-utterances.png" alt-text="A screenshot of the page for tagging utterances in Language Studio." lightbox="../media/tag-utterances.png":::
+
+ > [!TIP]
+ > If you are planning on using **Automatically split the testing set from training data** splitting, add all your utterances to the training set.
+
+
+4. From the **Select intent** dropdown menu, select one of the intents and the language of the utterance (for multilingual projects), then type the utterance itself. Press the Enter key in the utterance's text box to add the utterance.
+
+5. You have two options to label entities in an utterance:
+
+ |Option |Description |
+ |||
+ |Label using a brush | Select the brush icon next to an entity in the right pane, then highlight the text in the utterance you want to label. |
+ |Label using inline menu | Highlight the word you want to label as an entity, and a menu will appear. Select the entity you want to label these words with. |
+
+6. In the right side pane, under the **Labels** pivot, you can find all the entity types in your project and the count of labeled instances of each.
+
+7. Under the **Distribution** pivot you can view the distribution across training and testing sets. You have the following options for viewing:
+    * *Total instances per labeled entity* where you can view the count of all labeled instances of a specific entity.
+    * *Unique utterances per labeled entity* where each utterance is counted if it contains at least one labeled instance of this entity.
+    * *Utterances per intent* where you can view the count of utterances per intent.
+++
+ > [!NOTE]
+ > List and prebuilt components are not shown in the data labeling page, and all labels here only apply to the **learned component**.
+
+To remove a label:
+ 1. From within your utterance, select the entity you want to remove a label from.
+ 2. Scroll through the menu that appears, and select **Remove label**.
+
+To delete an entity:
+ 1. Select the entity you want to edit in the right side pane.
+ 2. Select the three dots next to the entity, and select the option you want from the drop-down menu.
+
+## Suggest utterances with Azure OpenAI
+
+In CLU, you can use Azure OpenAI to suggest utterances to add to your project using GPT models. You first need to get access and create a resource in Azure OpenAI. You'll then need to create a deployment for the GPT models. Follow the prerequisite steps [here](../../../openai/how-to/create-resource.md).
+
+Before you get started, note that the suggest utterances feature is only available if your Language resource is in one of the following regions:
+* East US
+* South Central US
+* West Europe
+
+In the Data Labeling page:
+
+1. Select the **Suggest utterances** button. A pane will open up on the right side prompting you to select your Azure OpenAI resource and deployment.
+2. After you select an Azure OpenAI resource, select **Connect** to give your Language resource direct access to your Azure OpenAI resource. This assigns your Language resource the `Cognitive Services User` role on your Azure OpenAI resource, which allows your current Language resource to access Azure OpenAI's service. If the connection fails, follow these [steps](#add-required-configurations-to-azure-openai-resource) to add the right role to your Azure OpenAI resource manually.
+3. Once the resource is connected, select the deployment. The recommended model for the Azure OpenAI deployment is `text-davinci-002`.
+4. Select the intent you'd like to get suggestions for. Make sure the intent you have selected has at least 5 saved utterances to be enabled for utterance suggestions. The suggestions provided by Azure OpenAI are based on the **most recent utterances** you've added for that intent.
+5. Select **Generate utterances**. Once complete, the suggested utterances will show up with a dotted line around them, with the note *Generated by AI*. Those suggestions need to be accepted or rejected. Accepting a suggestion simply adds it to your project, as if you had added it yourself. Rejecting it deletes the suggestion entirely. Only accepted utterances will be part of your project and used for training or testing. You can accept or reject by selecting the green check or red cancel button beside each utterance. You can also use the `Accept all` and `Reject all` buttons in the toolbar.
++
+Using this feature entails a charge to your Azure OpenAI resource for approximately the same number of tokens as the suggested utterances it generates. Details for Azure OpenAI's pricing can be found [here](https://azure.microsoft.com/pricing/details/cognitive-services/openai-service/).
+
+### Add required configurations to Azure OpenAI resource
+
+If connecting your Language resource to an Azure OpenAI resource fails, follow these steps:
+
+Enable identity management for your Language resource using the following options:
+
+### [Azure portal](#tab/portal)
+
+Your Language resource must have identity management enabled. To enable it using the [Azure portal](https://portal.azure.com/):
+
+1. Go to your Language resource
+2. From the left-hand menu, under the **Resource Management** section, select **Identity**
+3. From the **System assigned** tab, make sure to set **Status** to **On**
+
+### [Language Studio](#tab/studio)
+
+Your Language resource must have identity management enabled. To enable it using [Language Studio](https://aka.ms/languageStudio):
+
+1. Select the settings icon in the top right corner of the screen
+2. Select **Resources**
+3. Select the check box **Managed Identity** for your Language resource.
+++
+After enabling managed identity, assign the `Azure AI services User` role on your Azure OpenAI resource to the managed identity of your Language resource.
+
+ 1. Go to the [Azure portal](https://portal.azure.com/) and navigate to your Azure OpenAI resource.
+ 2. Select the Access Control (IAM) tab on the left.
+ 3. Select Add > Add role assignment.
+ 4. Select "Job function roles" and click Next.
+ 5. Select `Azure AI services User` from the list of roles and click Next.
+ 6. Select Assign access to "Managed identity" and select "Select members".
+ 7. Under "Managed identity" select "Language".
+ 8. Search for your resource and select it. Then choose the **Select** button to complete the process.
+ 9. Review the details and select Review + Assign.
++
+After a few minutes, refresh the Language Studio and you will be able to successfully connect to Azure OpenAI.
+
+## Next Steps
+* [Train Model](./train-model.md)
ai-services Train Model https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/language-service/conversational-language-understanding/how-to/train-model.md
+
+ Title: How to train and evaluate models in Conversational Language Understanding
+
+description: Use this article to train a model and view its evaluation details to make improvements.
++++++ Last updated : 05/12/2022++++
+# Train your conversational language understanding model
+
+After you have completed [labeling your utterances](tag-utterances.md), you can start training a model. Training is the process where the model learns from your [labeled utterances](tag-utterances.md). <!--After training is completed, you will be able to [view model performance](view-model-evaluation.md).-->
+
+To train a model, start a training job. Only successfully completed jobs create a model. Training jobs expire after seven days; after this time, you will no longer be able to retrieve the job details. If your training job completed successfully and a model was created, it won't be affected by the job expiring. You can only have one training job running at a time, and you can't start other jobs in the same project.
+
+The training times can be anywhere from a few seconds when dealing with simple projects, up to a couple of hours when you reach the [maximum limit](../service-limits.md) of utterances.
+
+Model evaluation is triggered automatically after training is completed successfully. The evaluation process starts by using the trained model to run predictions on the utterances in the testing set, and compares the predicted results with the provided labels (which establishes a baseline of truth). <!--The results are returned so you can review the [model's performance](view-model-evaluation.md).-->
+
+## Prerequisites
+
+* A successfully [created project](create-project.md) with a configured Azure blob storage account
+* [Labeled utterances](tag-utterances.md)
+
+<!--See the [project development lifecycle](../overview.md#project-development-lifecycle) for more information.-->
+
+## Data splitting
+
+Before you start the training process, labeled utterances in your project are divided into a training set and a testing set. Each one of them serves a different function.
+The **training set** is used in training the model; this is the set from which the model learns the labeled utterances.
+The **testing set** is a blind set that isn't introduced to the model during training but only during evaluation.
+
+After the model is trained successfully, the model can be used to make predictions from the utterances in the testing set. These predictions are used to calculate [evaluation metrics](../concepts/evaluation-metrics.md).
+It is recommended to make sure that all your intents and entities are adequately represented in both the training and testing sets.
+
+Conversational language understanding supports two methods for data splitting:
+
+* **Automatically splitting the testing set from training data**: The system will split your tagged data between the training and testing sets, according to the percentages you choose. The recommended percentage split is 80% for training and 20% for testing.
+
+ > [!NOTE]
+ > If you choose the **Automatically splitting the testing set from training data** option, only the data assigned to training set will be split according to the percentages provided.
+
+* **Use a manual split of training and testing data**: This method enables users to define which utterances should belong to which set. This step is only enabled if you have added utterances to your testing set during [labeling](tag-utterances.md).
+
+## Training modes
+
+CLU supports two modes for training your models:
+
+* **Standard training** uses fast machine learning algorithms to train your models relatively quickly. This is currently only available for **English** and is disabled for any project that doesn't use English (US) or English (UK) as its primary language. This training option is free of charge. Standard training allows you to add utterances and test them quickly at no cost. The evaluation scores shown should guide you on where to make changes in your project and add more utterances. Once you've iterated a few times and made incremental improvements, you can consider using advanced training to train another version of your model.
+
+* **Advanced training** uses the latest in machine learning technology to customize models with your data. This is expected to show better performance scores for your models and will enable you to use the [multilingual capabilities](../language-support.md#multi-lingual-option) of CLU as well. Advanced training is priced differently. See the [pricing information](https://azure.microsoft.com/pricing/details/cognitive-services/language-service) for details.
+
+Use the evaluation scores to guide your decisions. There might be times when a specific example is predicted incorrectly with advanced training but correctly with standard training. However, if the overall evaluation results are better with advanced training, it is recommended to use it for your final model. If that isn't the case and you are not looking to use any multilingual capabilities, you can continue to use the model trained with standard mode.
+
+> [!Note]
+> You should expect to see a difference in behavior in intent confidence scores between the training modes, as each algorithm calibrates its scores differently.
+
+## Train model
+
+# [Language Studio](#tab/language-studio)
++
+# [REST APIs](#tab/rest-api)
+
+### Start training job
++
+### Get training job status
+
+Training could take some time depending on the size of your training data and the complexity of your schema. You can use the following request to keep polling the status of the training job until it is successfully completed.
++++
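+
+As a minimal sketch (assuming you captured the job URL from the `operation-location` header returned when the training job was submitted), polling from Python could look like this:
+
+```python
+import time
+
+import requests
+
+# Placeholders: your Language resource key and the job URL returned in the
+# 'operation-location' response header of the train request.
+headers = {"Ocp-Apim-Subscription-Key": "<your-key>"}
+job_url = "<operation-location-url>"
+
+while True:
+    job = requests.get(job_url, headers=headers).json()
+    status = job.get("status")
+    if status in ("succeeded", "failed", "cancelled"):
+        break
+    time.sleep(15)  # wait before polling again
+
+print(status)
+```
+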
+### Cancel training job
+
+# [Language Studio](#tab/language-studio)
++
+# [REST APIs](#tab/rest-api)
+++++
+## Next steps
+
+* [Model evaluation metrics](../concepts/evaluation-metrics.md)
+<!--* [Deploy and query the model](./deploy-model.md)-->
ai-services View Model Evaluation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/language-service/conversational-language-understanding/how-to/view-model-evaluation.md
+
+ Title: How to view conversational language understanding models details
+description: Use this article to learn about viewing the details for a conversational language understanding model.
+++++++ Last updated : 05/16/2022++++
+# View conversational language understanding model details
+
+After model training is completed, you can view your model details and see how well it performs against the test set.
+
+> [!NOTE]
+> Using the **Automatically split the testing set from training data** option may result in a different model evaluation result every time you [train a new model](train-model.md), as the test set is selected randomly from your utterances. To make sure that the evaluation is calculated on the same test set every time you train a model, make sure to use the **Use a manual split of training and testing data** option when starting a training job, and define your **Testing set** when you [add your utterances](tag-utterances.md).
+
+## Prerequisites
+
+Before viewing a model's evaluation, you need:
+
+* A successfully [created project](create-project.md).
+* A successfully [trained model](train-model.md).
+
+See the [project development lifecycle](../overview.md#project-development-lifecycle) for more information.
+
+## Model details
+
+### [Language studio](#tab/Language-studio)
++
+### [REST APIs](#tab/REST-APIs)
++++
+## Load or export model data
+
+### [Language studio](#tab/Language-studio)
+++
+### [REST APIs](#tab/REST-APIs)
++++
+## Delete model
+
+### [Language studio](#tab/Language-studio)
+++
+### [REST APIs](#tab/REST-APIs)
++++++
+## Next steps
+
+* As you review how your model performs, learn about the [evaluation metrics](../concepts/evaluation-metrics.md) that are used.
+* If you're happy with your model performance, you can [deploy your model](deploy-model.md).
ai-services Language Support https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/language-service/conversational-language-understanding/language-support.md
+
+ Title: Conversational language understanding language support
+
+description: This article explains which natural languages are supported by the conversational language understanding feature of Azure AI Language.
++++++ Last updated : 05/12/2022++++
+# Language support for conversational language understanding
+
+Use this article to learn about the languages currently supported by the conversational language understanding (CLU) feature.
+
+## Multi-lingual option
+
+> [!TIP]
+> See [How to train a model](how-to/train-model.md#training-modes) for information on which training mode you should use for multilingual projects.
+
+With conversational language understanding, you can train a model in one language and use it to predict intents and entities from utterances in another language. This feature is powerful because it helps save time and effort. Instead of building separate projects for every language, you can handle a multilingual dataset in one project. Your dataset doesn't have to be entirely in the same language, but you should enable the multi-lingual option for your project while creating it, or later in project settings. If you notice your model performing poorly in certain languages during the evaluation process, consider adding more data in these languages to your training set.
+
+You can train your project entirely with English utterances, and query it in French, German, Mandarin, Japanese, Korean, and others. Conversational language understanding makes it easy for you to scale your projects to multiple languages by using multilingual technology to train your models.
+
+Whenever you identify that a particular language is not performing as well as other languages, you can add utterances for that language in your project. In the tag utterances page in Language Studio, you can select the language of the utterance you're adding. When you introduce examples for that language to the model, it is introduced to more of the syntax of that language, and learns to predict it better.
+
+You aren't expected to add the same number of utterances for every language. You should build the majority of your project in one language, and only add a few utterances in languages you observe aren't performing well. If you create a project that is primarily in English, and start testing it in French, German, and Spanish, you might observe that German doesn't perform as well as the other two languages. In that case, consider adding 5% of your original English examples in German, train a new model and test in German again. You should see better results for German queries. The more utterances you add, the more likely the results are going to get better.
+
+When you add data in another language, you shouldn't expect it to negatively affect other languages.
+
+### List and prebuilt components in multiple languages
+
+Projects with multiple languages enabled will allow you to specify synonyms **per language** for every list key. Depending on the language you query your project with, you will only get matches for the list component with synonyms of that language. When you query your project, you can specify the language in the request body:
+
+```json
+"query": "{query}"
+"language": "{language code}"
+```
+
+If you do not provide a language, it will fall back to the default language of your project.
+
+Prebuilt components are similar, in that you should expect to get predictions only for prebuilt components that are available in specific languages. The request's language again determines which components the service attempts to predict. <!--See the [prebuilt components](../prebuilt-component-reference.md) reference article for the language support of each prebuilt component.-->
+++
+## Languages supported by conversational language understanding
+
+Conversational language understanding supports utterances in the following languages:
+
+| Language | Language code |
+| | |
+| Afrikaans | `af` |
+| Amharic | `am` |
+| Arabic | `ar` |
+| Assamese | `as` |
+| Azerbaijani | `az` |
+| Belarusian | `be` |
+| Bulgarian | `bg` |
+| Bengali | `bn` |
+| Breton | `br` |
+| Bosnian | `bs` |
+| Catalan | `ca` |
+| Czech | `cs` |
+| Welsh | `cy` |
+| Danish | `da` |
+| German | `de` |
+| Greek | `el` |
+| English (US) | `en-us` |
+| English (UK) | `en-gb` |
+| Esperanto | `eo` |
+| Spanish | `es` |
+| Estonian | `et` |
+| Basque | `eu` |
+| Persian (Farsi) | `fa` |
+| Finnish | `fi` |
+| French | `fr` |
+| Western Frisian | `fy` |
+| Irish | `ga` |
+| Scottish Gaelic | `gd` |
+| Galician | `gl` |
+| Gujarati | `gu` |
+| Hausa | `ha` |
+| Hebrew | `he` |
+| Hindi | `hi` |
+| Croatian | `hr` |
+| Hungarian | `hu` |
+| Armenian | `hy` |
+| Indonesian | `id` |
+| Italian | `it` |
+| Japanese | `ja` |
+| Javanese | `jv` |
+| Georgian | `ka` |
+| Kazakh | `kk` |
+| Khmer | `km` |
+| Kannada | `kn` |
+| Korean | `ko` |
+| Kurdish (Kurmanji) | `ku` |
+| Kyrgyz | `ky` |
+| Latin | `la` |
+| Lao | `lo` |
+| Lithuanian | `lt` |
+| Latvian | `lv` |
+| Malagasy | `mg` |
+| Macedonian | `mk` |
+| Malayalam | `ml` |
+| Mongolian | `mn` |
+| Marathi | `mr` |
+| Malay | `ms` |
+| Burmese | `my` |
+| Nepali | `ne` |
+| Dutch | `nl` |
+| Norwegian (Bokmal) | `nb` |
+| Oriya | `or` |
+| Punjabi | `pa` |
+| Polish | `pl` |
+| Pashto | `ps` |
+| Portuguese (Brazil) | `pt-br` |
+| Portuguese (Portugal) | `pt-pt` |
+| Romanian | `ro` |
+| Russian | `ru` |
+| Sanskrit | `sa` |
+| Sindhi | `sd` |
+| Sinhala | `si` |
+| Slovak | `sk` |
+| Slovenian | `sl` |
+| Somali | `so` |
+| Albanian | `sq` |
+| Serbian | `sr` |
+| Sundanese | `su` |
+| Swedish | `sv` |
+| Swahili | `sw` |
+| Tamil | `ta` |
+| Telugu | `te` |
+| Thai | `th` |
+| Filipino | `tl` |
+| Turkish | `tr` |
+| Uyghur | `ug` |
+| Ukrainian | `uk` |
+| Urdu | `ur` |
+| Uzbek | `uz` |
+| Vietnamese | `vi` |
+| Xhosa | `xh` |
+| Yiddish | `yi` |
+| Chinese (Simplified) | `zh-hans` |
+| Chinese (Traditional) | `zh-hant` |
+| Zulu | `zu` |
+++
+## Next steps
+
+* [Conversational language understanding overview](overview.md)
+* [Service limits](service-limits.md)
ai-services Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/language-service/conversational-language-understanding/overview.md
+
+ Title: Conversational Language Understanding - Azure AI services
+
+description: Customize an AI model to predict the intentions of utterances, and extract important information from them.
++++++ Last updated : 10/26/2022++++
+# What is conversational language understanding?
+
+Conversational language understanding is one of the custom features offered by [Azure AI Language](../overview.md). It is a cloud-based API service that applies machine-learning intelligence to enable you to build a natural language understanding component for use in an end-to-end conversational application.
+
+Conversational language understanding (CLU) enables users to build custom natural language understanding models to predict the overall intention of an incoming utterance and extract important information from it. CLU only provides the intelligence to understand the input text for the client application and doesn't perform any actions. By creating a CLU project, developers can iteratively label utterances, train and evaluate model performance before making it available for consumption. The quality of the labeled data greatly impacts model performance. To simplify building and customizing your model, the service offers a custom web portal that can be accessed through the [Language studio](https://aka.ms/languageStudio). You can easily get started with the service by following the steps in this [quickstart](quickstart.md).
+
+This documentation contains the following article types:
+
+* [Quickstarts](quickstart.md) are getting-started instructions to guide you through making requests to the service.
+* [Concepts](concepts/evaluation-metrics.md) provide explanations of the service functionality and features.
+* [How-to guides](how-to/create-project.md) contain instructions for using the service in more specific or customized ways.
++
+## Example usage scenarios
+
+CLU can be used in multiple scenarios across a variety of industries. Some examples are:
+
+### End-to-end conversational bot
+
+Use CLU to build and train a custom natural language understanding model based on a specific domain and the expected users' utterances. Integrate it with any end-to-end conversational bot so that it can process and analyze incoming text in real time to identify the intention of the text and extract important information from it. Have the bot perform the desired action based on the intention and extracted information. An example would be a customized retail bot for online shopping or food ordering.
+
+### Human assistant bots
+
+One example of a human assistant bot is to help staff improve customer engagements by triaging customer queries and assigning them to the appropriate support engineer. Another example would be a human resources bot in an enterprise that allows employees to communicate in natural language and receive guidance based on the query.
+
+### Command and control application
+
+When you integrate a client application with a speech to text component, users can speak a command in natural language for CLU to process, identify intent, and extract information from the text for the client application to perform an action. This use case has many applications, such as to stop, play, forward, and rewind a song or turn lights on or off.
+
+### Enterprise chat bot
+
+In a large corporation, an enterprise chat bot may handle a variety of employee affairs. It might handle frequently asked questions served by a custom question answering knowledge base, a calendar specific skill served by conversational language understanding, and an interview feedback skill served by LUIS. Use Orchestration workflow to connect all these skills together and appropriately route the incoming requests to the correct service.
++
+## Project development lifecycle
+
+Creating a CLU project typically involves several different steps.
++
+Follow these steps to get the most out of your model:
+
+1. **Define your schema**: Know your data and define the actions and relevant information that needs to be recognized from user's input utterances. In this step you create the [intents](glossary.md#intent) that you want to assign to user's utterances, and the relevant [entities](glossary.md#entity) you want extracted.
+
+2. **Label your data**: The quality of data labeling is a key factor in determining model performance.
+
+3. **Train the model**: Your model starts learning from your labeled data.
+
+4. **View the model's performance**: View the evaluation details for your model to determine how well it performs when introduced to new data.
+
+5. **Improve the model**: After reviewing the model's performance, you can then learn how you can improve the model.
+
+6. **Deploy the model**: Deploying a model makes it available for use via the [Runtime API](https://aka.ms/clu-apis).
+
+7. **Predict intents and entities**: Use your custom model to predict intents and entities from user's utterances.
+
+## Reference documentation and code samples
+
+As you use CLU, see the following reference documentation and samples for Azure AI Language:
+
+|Development option / language |Reference documentation |Samples |
+||||
+|REST APIs (Authoring) | [REST API documentation](https://aka.ms/clu-authoring-apis) | |
+|REST APIs (Runtime) | [REST API documentation](https://aka.ms/clu-apis) | |
+|C# (Runtime) | [C# documentation](/dotnet/api/overview/azure/ai.language.conversations-readme) | [C# samples](https://github.com/Azure/azure-sdk-for-net/tree/main/sdk/cognitivelanguage/Azure.AI.Language.Conversations/samples) |
+|Python (Runtime)| [Python documentation](/python/api/overview/azure/ai-language-conversations-readme?view=azure-python-preview&preserve-view=true) | [Python samples](https://github.com/Azure/azure-sdk-for-python/tree/main/sdk/cognitivelanguage/azure-ai-language-conversations/samples) |
+
+## Responsible AI
+
+An AI system includes not only the technology, but also the people who will use it, the people who will be affected by it, and the environment in which it's deployed. Read the transparency note for CLU to learn about responsible AI use and deployment in your systems. You can also see the following articles for more information:
++
+## Next steps
+
+* Use the [quickstart article](quickstart.md) to start using conversational language understanding.
+
+* As you go through the project development lifecycle, review the [glossary](glossary.md) to learn more about the terms used throughout the documentation for this feature.
+
+* Remember to view the [service limits](service-limits.md) for information such as [regional availability](service-limits.md#regional-availability).
+
ai-services Prebuilt Component Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/language-service/conversational-language-understanding/prebuilt-component-reference.md
+
+ Title: Supported prebuilt entity components
+
+description: Learn about which entities can be detected automatically in Conversational Language Understanding
++++++ Last updated : 11/02/2021++++
+# Supported prebuilt entity components
+
+Conversational Language Understanding allows you to add prebuilt components to entities. Prebuilt components automatically predict common types from utterances. See the [entity components](concepts/entity-components.md) article for information on components.
+
+## Reference
+
+The following prebuilt components are available in Conversational Language Understanding.
+
+| Type | Description | Supported languages |
+| | | |
+| Quantity.Age | Age of a person or thing. For example: "30 years old", "9 months old" | English, Chinese, French, Italian, German, Brazilian Portuguese, Spanish |
+| Quantity.Number | A cardinal number in numeric or text form. For example: "Thirty", "23", "14.5", "Two and a half" | English, Chinese, French, Italian, German, Brazilian Portuguese, Spanish |
+| Quantity.Percentage | A percentage using the symbol % or the word "percent". For example: "10%", "5.6 percent" | English, Chinese, French, Italian, German, Brazilian Portuguese, Spanish |
+| Quantity.Ordinal | An ordinal number in numeric or text form. For example: "first", "second", "last", "10th" | English, Chinese, French, Italian, German, Brazilian Portuguese, Spanish |
+| Quantity.Dimension | Special dimensions such as length, distance, volume, area, and speed. For example: "two miles", "650 square kilometers", "35 km/h" | English, Chinese, French, Italian, German, Brazilian Portuguese, Spanish |
+| Quantity.Temperature | A temperature in Celsius or Fahrenheit. For example: "32F", "34 degrees celsius", "2 deg C" | English, Chinese, French, Italian, German, Brazilian Portuguese, Spanish |
+| Quantity.Currency | Monetary amounts including currency. For example "1000.00 US dollars", "£20.00", "$67.5B" | English, Chinese, French, Italian, German, Brazilian Portuguese, Spanish |
+| Quantity.NumberRange | A numeric interval. For example: "between 25 and 35" | English, Chinese, French, Italian, German, Brazilian Portuguese, Spanish |
+| Datetime | Dates and times. For example: "June 23, 1976", "7 AM", "6:49 PM", "Tomorrow at 7 PM", "Next Week" | English, Chinese, French, Italian, German, Brazilian Portuguese, Spanish |
+| Person.Name | The name of an individual. For example: "Joe", "Ann" | English, Chinese, French, Italian, German, Brazilian Portuguese, Spanish |
+| Email | Email Addresses. For example: "user@contoso.com", "user_name@contoso.com", "user.name@contoso.com" | English, Chinese, French, Italian, German, Brazilian Portuguese, Spanish |
+| Phone Number | US Phone Numbers. For example: "123-456-7890", "+1 123 456 7890", "(123)456-7890" | English, Chinese, French, Italian, German, Brazilian Portuguese, Spanish |
+| URL | Website URLs and Links. | English, Chinese, French, Italian, German, Brazilian Portuguese, Spanish |
+| General.Organization | Companies and corporations. For example: "Microsoft" | English, Chinese, French, Italian, German, Brazilian Portuguese, Spanish |
+| Geography.Location | The name of a location. For example: "Tokyo" | English, Chinese, French, Italian, German, Brazilian Portuguese, Spanish |
+| IP Address | An IP address. For example: "192.168.0.4" | English, Chinese, French, Italian, German, Brazilian Portuguese, Spanish |
++
+## Prebuilt components in multilingual projects
+
+In multilingual conversation projects, you can enable any of the prebuilt components. The component is only predicted if the language of the query is supported by the prebuilt entity. The language is either specified in the request or defaults to the primary language of the application if not provided.
+
+## Next steps
+
+[Entity components](concepts/entity-components.md)
+
ai-services Quickstart https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/language-service/conversational-language-understanding/quickstart.md
+
+ Title: Quickstart - create a conversational language understanding project
+
+description: Quickly start building an AI model to extract information and predict the intentions of text-based utterances.
++++++ Last updated : 03/14/2023++
+zone_pivot_groups: usage-custom-language-features
++
+# Quickstart: Conversational language understanding
+
+Use this article to get started with conversational language understanding using Language Studio and the REST API. Follow these steps to try out an example.
+++++++
+## Next steps
+
+* [Learn about entity components](concepts/entity-components.md)
ai-services Service Limits https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/language-service/conversational-language-understanding/service-limits.md
+
+ Title: Conversational Language Understanding limits
+
+description: Learn about the data, region, and throughput limits for Conversational Language Understanding
++++++ Last updated : 10/12/2022++++
+# Conversational language understanding limits
+
+Use this article to learn about the data and service limits when using conversational language understanding.
+
+## Language resource limits
+
+* Your Language resource must be one of the following [pricing tiers](https://azure.microsoft.com/pricing/details/cognitive-services/language-service/):
+
+ |Tier|Description|Limit|
+ |--|--|--|
+ |F0|Free tier|You are only allowed **one** F0 Language resource **per subscription**.|
+ |S |Paid tier|You can have up to 100 Language resources in the S tier per region.|
++
+See [pricing](https://azure.microsoft.com/pricing/details/cognitive-services/language-service/) for more information.
+
+* You can have up to **500** projects per resource.
+
+* Project names have to be unique within the same resource across all custom features.
+
+### Regional availability
+
+Conversational language understanding is only available in some Azure regions. Some regions are available for **both authoring and prediction**, while other regions are **prediction only**. Language resources in authoring regions allow you to create, edit, train, and deploy your projects. Language resources in prediction regions allow you to get [predictions from a deployment](../concepts/custom-features/multi-region-deployment.md).
+
+| Region | Authoring | Prediction |
+|--|--|-|
+| Australia East | ✓ | ✓ |
+| Brazil South | | ✓ |
+| Canada Central | | ✓ |
+| Central India | ✓ | ✓ |
+| Central US | | ✓ |
+| China East 2 | ✓ | ✓ |
+| China North 2 | | ✓ |
+| East Asia | | ✓ |
+| East US | ✓ | ✓ |
+| East US 2 | ✓ | ✓ |
+| France Central | | ✓ |
+| Japan East | | ✓ |
+| Japan West | | ✓ |
+| Jio India West | | ✓ |
+| Korea Central | | ✓ |
+| North Central US | | ✓ |
+| North Europe | ✓ | ✓ |
+| Norway East | | ✓ |
+| Qatar Central | | ✓ |
+| South Africa North | | ✓ |
+| South Central US | ✓ | ✓ |
+| Southeast Asia | | ✓ |
+| Sweden Central | | ✓ |
+| Switzerland North | ✓ | ✓ |
+| UAE North | | ✓ |
+| UK South | ✓ | ✓ |
+| West Central US | | ✓ |
+| West Europe | ✓ | ✓ |
+| West US | | ✓ |
+| West US 2 | ✓ | ✓ |
+| West US 3 | ✓ | ✓ |
+
+## API limits
+
+|Item|Request type| Maximum limit|
+|:-|:-|:-|
+|Authoring API|POST|10 per minute|
+|Authoring API|GET|100 per minute|
+|Prediction API|GET/POST|1,000 per minute|
+
+## Quota limits
+
+|Pricing tier |Item |Limit |
+| | | |
+|F|Training time| 1 hour per month |
+|S|Training time| Unlimited, Pay as you go |
+|F|Prediction Calls| 5,000 request per month |
+|S|Prediction Calls| Unlimited, Pay as you go |
+
+## Data limits
+
+The following limits are observed for conversational language understanding.
+
+|Item|Lower Limit| Upper Limit |
+| | | |
+|Number of utterances per project | 1 | 25,000|
+|Utterance length in characters (authoring) | 1 | 500 |
+|Utterance length in characters (prediction) | 1 | 1000 |
+|Number of intents per project | 1 | 500|
+|Number of entities per project | 0 | 350|
+|Number of list synonyms per entity| 0 | 20,000 |
+|Number of list synonyms per project| 0 | 2,000,000 |
+|Number of prebuilt components per entity| 0 | 7 |
+|Number of regular expressions per project| 0 | 20 |
+|Number of trained models per project| 0 | 10 |
+|Number of deployments per project| 0 | 10 |
+
+## Naming limits
+
+| Item | Limits |
+|--|--|
+| Project name | You can only use letters `(a-z, A-Z)`, numbers `(0-9)`, and symbols `_ . -`, with no spaces. Maximum allowed length is 50 characters. |
+| Model name | You can only use letters `(a-z, A-Z)`, numbers `(0-9)` and symbols `_ . -`. Maximum allowed length is 50 characters. |
+| Deployment name | You can only use letters `(a-z, A-Z)`, numbers `(0-9)` and symbols `_ . -`. Maximum allowed length is 50 characters. |
+| Intent name| You can only use letters `(a-z, A-Z)`, numbers `(0-9)` and all symbols except ":", `$ & % * ( ) + ~ # / ?`. Maximum allowed length is 50 characters.|
+| Entity name| You can only use letters `(a-z, A-Z)`, numbers `(0-9)` and all symbols except ":", `$ & % * ( ) + ~ # / ?`. Maximum allowed length is 50 characters.|
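+
+As an illustrative sketch only (not an official validator), the project, model, and deployment name rules above translate roughly into a regular expression check like this:
+
+```python
+import re
+
+# Letters, numbers, and the symbols _ . - only, with no spaces, 1-50 characters.
+NAME_PATTERN = re.compile(r"^[A-Za-z0-9_.-]{1,50}$")
+
+def is_valid_name(name: str) -> bool:
+    return bool(NAME_PATTERN.match(name))
+
+print(is_valid_name("flight-booking_v1.0"))  # True
+print(is_valid_name("flight booking"))       # False: contains a space
+```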
+
+## Next steps
+
+* [Conversational language understanding overview](overview.md)
ai-services Bot Framework https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/language-service/conversational-language-understanding/tutorials/bot-framework.md
+
+ Title: Add natural language understanding to your bot in Bot Framework SDK using conversational language understanding
+description: Learn how to train a bot to understand natural language.
+keywords: conversational language understanding, bot framework, bot, language understanding, nlu
+++++++ Last updated : 05/25/2022++
+# Integrate conversational language understanding with Bot Framework
+
+A dialog is the interaction that occurs between user queries and an application. Dialog management is the process that defines the automatic behavior that should occur for different customer interactions. While conversational language understanding can classify intents and extract information through entities, the [Bot Framework SDK](/azure/bot-service/bot-service-overview) allows you to configure the applied logic for the responses returned from it.
+
+This tutorial explains how to integrate your own conversational language understanding (CLU) project for flight booking with the Bot Framework SDK. The project includes three intents: **Book Flight**, **Get Weather**, and **None**.
++
+## Prerequisites
+
+- Create a [Language resource](https://portal.azure.com/#create/Microsoft.CognitiveServicesTextAnalytics) in the Azure portal to get your key and endpoint. After it deploys, select **Go to resource**.
+ - You will need the key and endpoint from the resource you create to connect your bot to the API. You'll paste your key and endpoint into the code below later in the tutorial.
+- Download the **CoreBotWithCLU** [sample](https://aka.ms/clu-botframework-overview).
+ - Clone the entire samples repository to get access to this solution.
+
+## Import a project in conversational language understanding
+
+1. Download the [FlightBooking.json](https://aka.ms/clu-botframework-json) file in the **Core Bot with CLU** sample, in the _Cognitive Models_ folder.
+2. Sign into the [Language Studio](https://language.cognitive.azure.com/) and select your Language resource.
+3. Navigate to [Conversational Language Understanding](https://language.cognitive.azure.com/clu/projects) and select the service. This will route you to the projects page. Select the Import button next to the Create New Project button. Import the FlightBooking.json file with the project name **FlightBooking**. This will automatically import the CLU project with all the intents, entities, and utterances.
+
+    :::image type="content" source="../media/import.png" alt-text="A screenshot showing where to import a JSON file." lightbox="../media/import.png":::
+
+4. Once the project is loaded, select **Training jobs** on the left. Select **Start a training job**, provide the model name **v1**, and select **Train**. All other settings such as **Standard Training** and the evaluation settings can be left as is.
+
+    :::image type="content" source="../media/train-model.png" alt-text="A screenshot of the training page in CLU." lightbox="../media/train-model.png":::
+
+5. Once training is complete, select **Deploying a model** on the left. Select **Add deployment**, create a new deployment with the name **Testing**, and assign model **v1** to the deployment.
+
+    :::image type="content" source="../media/deploy-model-tutorial.png" alt-text="A screenshot of the deployment page within the deploy model screen in CLU." lightbox="../media/deploy-model-tutorial.png":::
+
+## Update the settings file
+
+Now that your CLU project is deployed and ready, update the settings that will connect to the deployment.
+
+In the **Core Bot** sample, update your [appsettings.json](https://aka.ms/clu-botframework-settings) with the appropriate values.
+
+- The _CluProjectName_ is **FlightBooking**.
+- The _CluDeploymentName_ is **Testing**.
+- The _CluAPIKey_ can be either of the keys in the **Keys and Endpoint** section for your Language resource in the [Azure portal](https://portal.azure.com). You can also copy your key from the Project Settings tab in CLU.
+- The _CluAPIHostName_ is the endpoint found in the **Keys and Endpoint** section for your Language resource in the Azure portal. Note the format should be ```<Language_Resource_Name>.cognitiveservices.azure.com``` without `https://`.
+
+```json
+{
+ "MicrosoftAppId": "",
+ "MicrosoftAppPassword": "",
+ "CluProjectName": "",
+ "CluDeploymentName": "",
+ "CluAPIKey": "",
+ "CluAPIHostName": ""
+}
+```
+
+## Identify integration points
+
+In the Core Bot sample, you can check out the **FlightBookingRecognizer.cs** file. Here is where the CLU API call to the deployed endpoint is made to retrieve the CLU prediction for intents and entities.
+
+```csharp
+ public FlightBookingRecognizer(IConfiguration configuration)
+ {
+ var cluIsConfigured = !string.IsNullOrEmpty(configuration["CluProjectName"]) && !string.IsNullOrEmpty(configuration["CluDeploymentName"]) && !string.IsNullOrEmpty(configuration["CluAPIKey"]) && !string.IsNullOrEmpty(configuration["CluAPIHostName"]);
+ if (cluIsConfigured)
+ {
+ var cluApplication = new CluApplication(
+ configuration["CluProjectName"],
+ configuration["CluDeploymentName"],
+ configuration["CluAPIKey"],
+ "https://" + configuration["CluAPIHostName"]);
+ // Set the recognizer options depending on which endpoint version you want to use.
+ var recognizerOptions = new CluOptions(cluApplication)
+ {
+ Language = "en"
+ };
+
+ _recognizer = new CluRecognizer(recognizerOptions);
+ }
+```
++
+Under the Dialogs folder, find the **MainDialog** which uses the following to make a CLU prediction.
+
+```csharp
+ var cluResult = await _cluRecognizer.RecognizeAsync<FlightBooking>(stepContext.Context, cancellationToken);
+```
+
+The logic that determines what to do with the CLU result follows it.
+
+```csharp
+ switch (cluResult.TopIntent().intent)
+ {
+ case FlightBooking.Intent.BookFlight:
+ // Initialize BookingDetails with any entities we may have found in the response.
+ var bookingDetails = new BookingDetails()
+ {
+ Destination = cluResult.Entities.toCity,
+ Origin = cluResult.Entities.fromCity,
+ TravelDate = cluResult.Entities.flightDate,
+ };
+
+ // Run the BookingDialog giving it whatever details we have from the CLU call, it will fill out the remainder.
+ return await stepContext.BeginDialogAsync(nameof(BookingDialog), bookingDetails, cancellationToken);
+
+ case FlightBooking.Intent.GetWeather:
+ // We haven't implemented the GetWeatherDialog so we just display a TODO message.
+ var getWeatherMessageText = "TODO: get weather flow here";
+ var getWeatherMessage = MessageFactory.Text(getWeatherMessageText, getWeatherMessageText, InputHints.IgnoringInput);
+ await stepContext.Context.SendActivityAsync(getWeatherMessage, cancellationToken);
+ break;
+
+ default:
+ // Catch all for unhandled intents
+ var didntUnderstandMessageText = $"Sorry, I didn't get that. Please try asking in a different way (intent was {cluResult.TopIntent().intent})";
+ var didntUnderstandMessage = MessageFactory.Text(didntUnderstandMessageText, didntUnderstandMessageText, InputHints.IgnoringInput);
+ await stepContext.Context.SendActivityAsync(didntUnderstandMessage, cancellationToken);
+ break;
+ }
+```
+
+## Run the bot locally
+
+Run the sample locally on your machine **OR** run the bot from a terminal or from Visual Studio:
+
+### Run the bot from a terminal
+
+From a terminal, navigate to the `cognitive-service-language-samples/CoreBotWithCLU` folder.
+
+Then run the following command
+
+```bash
+# run the bot
+dotnet run
+```
+
+### Run the bot from Visual Studio
+
+1. Launch Visual Studio
+1. From the top navigation menu, select **File**, **Open**, then **Project/Solution**
+1. Navigate to the `cognitive-service-language-samples/CoreBotWithCLU` folder
+1. Select the `CoreBotCLU.csproj` file
+1. Press `F5` to run the project
++
+## Testing the bot using Bot Framework Emulator
+
+[Bot Framework Emulator](https://github.com/microsoft/botframework-emulator) is a desktop application that allows bot developers to test and debug their bots on localhost or running remotely through a tunnel.
+
+- Install the [latest Bot Framework Emulator](https://github.com/Microsoft/BotFramework-Emulator/releases).
+
+## Connect to the bot using Bot Framework Emulator
+
+1. Launch Bot Framework Emulator
+1. Select **File**, then **Open Bot**
+1. Enter a Bot URL of `http://localhost:3978/api/messages` and press Connect and wait for it to load
+1. You can now query for different examples such as "Travel from Cairo to Paris" and observe the results
+
+If the top intent returned from CLU resolves to "_Book flight_", your bot will ask additional questions until it has enough information stored to create a travel booking. At that point, it will return this booking information back to your user.
+
+## Next steps
+
+Learn more about the [Bot Framework SDK](/azure/bot-service/bot-service-overview).
++
ai-services Data Formats https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/language-service/custom-named-entity-recognition/concepts/data-formats.md
+
+ Title: Custom NER data formats
+
+description: Learn about the data formats accepted by custom NER.
++++++ Last updated : 10/17/2022++++
+# Accepted custom NER data formats
+
+If you are trying to [import your data](../how-to/create-project.md#import-project) into custom NER, it has to follow a specific format. If you don't have data to import, you can [create your project](../how-to/create-project.md) and use Language Studio to [label your documents](../how-to/tag-data.md).
+
+## Labels file format
+
+Your Labels file should be in the `json` format below to be used in [importing](../how-to/create-project.md#import-project) your labels into a project.
+
+```json
+{
+ "projectFileVersion": "2022-05-01",
+ "stringIndexType": "Utf16CodeUnit",
+ "metadata": {
+ "projectKind": "CustomEntityRecognition",
+ "storageInputContainerName": "{CONTAINER-NAME}",
+ "projectName": "{PROJECT-NAME}",
+ "multilingual": false,
+ "description": "Project-description",
+ "language": "en-us",
+ "settings": {}
+ },
+ "assets": {
+ "projectKind": "CustomEntityRecognition",
+ "entities": [
+ {
+ "category": "Entity1"
+ },
+ {
+ "category": "Entity2"
+ }
+ ],
+ "documents": [
+ {
+ "location": "{DOCUMENT-NAME}",
+ "language": "{LANGUAGE-CODE}",
+ "dataset": "{DATASET}",
+ "entities": [
+ {
+ "regionOffset": 0,
+ "regionLength": 500,
+ "labels": [
+ {
+ "category": "Entity1",
+ "offset": 25,
+ "length": 10
+ },
+ {
+ "category": "Entity2",
+ "offset": 120,
+ "length": 8
+ }
+ ]
+ }
+ ]
+ },
+ {
+ "location": "{DOCUMENT-NAME}",
+ "language": "{LANGUAGE-CODE}",
+ "dataset": "{DATASET}",
+ "entities": [
+ {
+ "regionOffset": 0,
+ "regionLength": 100,
+ "labels": [
+ {
+ "category": "Entity2",
+ "offset": 20,
+ "length": 5
+ }
+ ]
+ }
+ ]
+ }
+ ]
+ }
+}
+
+```
+
+|Key |Placeholder |Value | Example |
+|||-|--|
+| `multilingual` | `true`| A boolean value that enables you to have documents in multiple languages in your dataset and when your model is deployed you can query the model in any supported language (not necessarily included in your training documents). See [language support](../language-support.md#multi-lingual-option) to learn more about multilingual support. | `true`|
+|`projectName`|`{PROJECT-NAME}`|Project name|`myproject`|
+| `storageInputContainerName`|`{CONTAINER-NAME}`|Container name|`mycontainer`|
+| `entities` | | Array containing all the entity types you have in the project. These are the entity types that will be extracted from your documents. | |
+| `documents` | | Array containing all the documents in your project and list of the entities labeled within each document. | [] |
+| `location` | `{DOCUMENT-NAME}` | The location of the documents in the storage container. Since all the documents are in the root of the container this should be the document name.|`doc1.txt`|
+| `dataset` | `{DATASET}` | The dataset this document is assigned to when the data is split before training. Learn more about data splitting [here](../how-to/train-model.md#data-splitting). Possible values for this field are `Train` and `Test`. |`Train`|
+| `regionOffset` | | The inclusive character position of the start of the text. |`0`|
+| `regionLength` | | The length of the bounding box in terms of UTF16 characters. Training only considers the data in this region. |`500`|
+| `category` | | The type of entity associated with the span of text specified. | `Entity1`|
+| `offset` | | The start position for the entity text. | `25`|
+| `length` | | The length of the entity in terms of UTF16 characters. | `20`|
+| `language` | `{LANGUAGE-CODE}` | A string specifying the language code for the document used in your project. If your project is a multilingual project, choose the language code of the majority of the documents. See [Language support](../language-support.md) for more information about supported language codes. |`en-us`|
+++
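+
+Because `offset`, `length`, `regionOffset`, and `regionLength` are counted in UTF-16 code units, characters outside the Basic Multilingual Plane (such as many emoji) count as two units. The following is a small sketch, not part of the service, for computing label positions from plain Python strings:
+
+```python
+def utf16_units(s: str) -> int:
+    # Each UTF-16 code unit is 2 bytes in the utf-16-le encoding.
+    return len(s.encode("utf-16-le")) // 2
+
+document = "The first party of this contract is John Smith."
+entity_text = "John Smith"
+
+start = document.index(entity_text)        # character index in Python
+offset = utf16_units(document[:start])     # label offset in UTF-16 code units
+length = utf16_units(entity_text)          # label length in UTF-16 code units
+
+print(offset, length)
+```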
+## Next steps
+* You can import your labeled data into your project directly. Learn how to [import project](../how-to/create-project.md#import-project)
+* See the [how-to article](../how-to/tag-data.md) for more information about labeling your data. When you're done labeling your data, you can [train your model](../how-to/train-model.md).
ai-services Evaluation Metrics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/language-service/custom-named-entity-recognition/concepts/evaluation-metrics.md
+
+ Title: Custom NER evaluation metrics
+
+description: Learn about evaluation metrics in Custom Named Entity Recognition (NER)
++++++ Last updated : 08/08/2022++++
+# Evaluation metrics for custom named entity recognition models
+
+Your [dataset is split](../how-to/train-model.md#data-splitting) into two parts: a set for training and a set for testing. The training set is used to train the model, while the testing set is used to test the model after training and calculate its performance. The testing set isn't introduced to the model during training, to make sure that the model is evaluated on new data.
+
+Model evaluation is triggered automatically after training is completed successfully. The evaluation process starts by using the trained model to predict user defined entities for documents in the test set, and compares them with the provided data tags (which establishes a baseline of truth). The results are returned so you can review the model's performance. For evaluation, custom NER uses the following metrics:
+
+* **Precision**: Measures how precise/accurate your model is. It is the ratio between the correctly identified positives (true positives) and all identified positives. The precision metric reveals how many of the predicted entities are correctly labeled.
+
+ `Precision = #True_Positive / (#True_Positive + #False_Positive)`
+
+* **Recall**: Measures the model's ability to predict actual positive classes. It is the ratio between the predicted true positives and what was actually tagged. The recall metric reveals how many of the labeled entities are correctly predicted.
+
+ `Recall = #True_Positive / (#True_Positive + #False_Negatives)`
+
+* **F1 score**: The F1 score is a function of Precision and Recall. It's needed when you seek a balance between Precision and Recall.
+
+ `F1 Score = 2 * Precision * Recall / (Precision + Recall)` <br>
+
+>[!NOTE]
+> Precision, recall and F1 score are calculated for each entity separately (*entity-level* evaluation) and for the model collectively (*model-level* evaluation).
+
+## Model-level and entity-level evaluation metrics
+
+Precision, recall, and F1 score are calculated for each entity separately (entity-level evaluation) and for the model collectively (model-level evaluation).
+
+The definitions of precision, recall, and F1 score are the same for both entity-level and model-level evaluations. However, the counts for *True Positives*, *False Positives*, and *False Negatives* can differ. For example, consider the following text.
+
+### Example
+
+*The first party of this contract is John Smith, resident of 5678 Main Rd., City of Frederick, state of Nebraska. And the second party is Forrest Ray, resident of 123-345 Integer Rd., City of Corona, state of New Mexico. There is also Fannie Thomas resident of 7890 River Road, city of Colorado Springs, State of Colorado.*
+
+The model extracting entities from this text could have the following predictions:
+
+| Entity | Predicted as | Actual type |
+|--|--|--|
+| John Smith | Person | Person |
+| Frederick | Person | City |
+| Forrest | City | Person |
+| Fannie Thomas | Person | Person |
+| Colorado Springs | City | City |
+
+### Entity-level evaluation for the *person* entity
+
+The model would have the following entity-level evaluation for the *person* entity:
+
+| Key | Count | Explanation |
+|--|--|--|
+| True Positive | 2 | *John Smith* and *Fannie Thomas* were correctly predicted as *person*. |
+| False Positive | 1 | *Frederick* was incorrectly predicted as *person* while it should have been *city*. |
+| False Negative | 1 | *Forrest* was incorrectly predicted as *city* while it should have been *person*. |
+
+* **Precision**: `#True_Positive / (#True_Positive + #False_Positive)` = `2 / (2 + 1) = 0.67`
+* **Recall**: `#True_Positive / (#True_Positive + #False_Negatives)` = `2 / (2 + 1) = 0.67`
+* **F1 Score**: `2 * Precision * Recall / (Precision + Recall)` = `(2 * 0.67 * 0.67) / (0.67 + 0.67) = 0.67`
+
+### Entity-level evaluation for the *city* entity
+
+The model would have the following entity-level evaluation for the *city* entity:
+
+| Key | Count | Explanation |
+|--|--|--|
+| True Positive | 1 | *Colorado Springs* was correctly predicted as *city*. |
+| False Positive | 1 | *Forrest* was incorrectly predicted as *city* while it should have been *person*. |
+| False Negative | 1 | *Frederick* was incorrectly predicted as *person* while it should have been *city*. |
+
+* **Precision** = `#True_Positive / (#True_Positive + #False_Positive)` = `1 / (1 + 1) = 0.5`
+* **Recall** = `#True_Positive / (#True_Positive + #False_Negatives)` = `1 / (1 + 1) = 0.5`
+* **F1 Score** = `2 * Precision * Recall / (Precision + Recall)` = `(2 * 0.5 * 0.5) / (0.5 + 0.5) = 0.5`
+
+### Model-level evaluation for the collective model
+
+The model in its entirety would have the following model-level evaluation:
+
+| Key | Count | Explanation |
+|--|--|--|
+| True Positive | 3 | *John Smith* and *Fannie Thomas* were correctly predicted as *person*. *Colorado Springs* was correctly predicted as *city*. This is the sum of true positives for all entities. |
+| False Positive | 2 | *Forrest* was incorrectly predicted as *city* while it should have been *person*. *Frederick* was incorrectly predicted as *person* while it should have been *city*. This is the sum of false positives for all entities. |
+| False Negative | 2 | *Forrest* was incorrectly predicted as *city* while it should have been *person*. *Frederick* was incorrectly predicted as *person* while it should have been *city*. This is the sum of false negatives for all entities. |
+
+* **Precision** = `#True_Positive / (#True_Positive + #False_Positive)` = `3 / (3 + 2) = 0.6`
+* **Recall** = `#True_Positive / (#True_Positive + #False_Negatives)` = `3 / (3 + 2) = 0.6`
+* **F1 Score** = `2 * Precision * Recall / (Precision + Recall)` = `(2 * 0.6 * 0.6) / (0.6 + 0.6) = 0.6`
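+
+As a sanity check, the formulas above can be reproduced with a few lines of code. The following minimal sketch recomputes the entity-level and model-level scores from the counts in this example:
+
+```python
+def precision_recall_f1(tp, fp, fn):
+    """Compute precision, recall, and F1 score from raw counts."""
+    precision = tp / (tp + fp) if tp + fp else 0.0
+    recall = tp / (tp + fn) if tp + fn else 0.0
+    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
+    return precision, recall, f1
+
+# (true positives, false positives, false negatives) from the tables above.
+counts = {
+    "person (entity-level)": (2, 1, 1),
+    "city (entity-level)": (1, 1, 1),
+    "model (model-level)": (3, 2, 2),
+}
+
+for name, (tp, fp, fn) in counts.items():
+    p, r, f1 = precision_recall_f1(tp, fp, fn)
+    print(f"{name}: precision={p:.2f}, recall={r:.2f}, F1={f1:.2f}")
+# person: 0.67 for all three metrics; city: 0.50; model: 0.60
+```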
+
+## Interpreting entity-level evaluation metrics
+
+So what does it actually mean to have high precision or high recall for a certain entity?
+
+| Recall | Precision | Interpretation |
+|--|--|--|
+| High | High | This entity is handled well by the model. |
+| Low | High | The model cannot always extract this entity, but when it does it is with high confidence. |
+| High | Low | The model extracts this entity well. However, the predictions come with low confidence, because other entity types are sometimes incorrectly extracted as this one. |
+| Low | Low | This entity type is poorly handled by the model, because it is not usually extracted. When it is, it is not with high confidence. |
+
+## Guidance
+
+After you train your model, you'll see guidance and recommendations on how to improve it. It's recommended that your model cover all the points in the guidance section.
+
+* Training set has enough data: When an entity type has fewer than 15 labeled instances in the training data, it can lead to lower accuracy due to the model not being adequately trained on these cases. In this case, consider adding more labeled data in the training set. You can check the *data distribution* tab for more guidance.
+
+* All entity types are present in test set: When the testing data lacks labeled instances for an entity type, the model's test performance may become less comprehensive due to untested scenarios. You can check the *test set data distribution* tab for more guidance.
+
+* Entity types are balanced within training and test sets: When sampling bias causes an inaccurate representation of an entity type's frequency, it can lead to lower accuracy due to the model expecting that entity type to occur too often or too little. You can check the *data distribution* tab for more guidance.
+
+* Entity types are evenly distributed between training and test sets: When the mix of entity types doesn't match between training and test sets, it can lead to lower testing accuracy due to the model being trained differently from how it's being tested. You can check the *data distribution* tab for more guidance.
+
+* Unclear distinction between entity types in training set: When the training data is similar for multiple entity types, it can lead to lower accuracy because the entity types may be frequently misclassified as each other. Review the following entity types and consider merging them if they're similar. Otherwise, add more examples to better distinguish them from each other. You can check the *confusion matrix* tab for more guidance.
++
+## Confusion matrix
+
+A confusion matrix is an N x N matrix used for model performance evaluation, where N is the number of entities.
+The matrix compares the expected labels with the ones predicted by the model.
+This gives a holistic view of how well the model is performing and what kinds of errors it is making.
+
+You can use the confusion matrix to identify entities that are too close to each other and often get mistaken for each other (ambiguity). In this case, consider merging these entity types together. If that isn't possible, consider adding more tagged examples of both entities to help the model differentiate between them.
+
+The highlighted diagonal in the image below shows the correctly predicted entities, where the predicted tag is the same as the actual tag.
++
+You can calculate the entity-level and model-level evaluation metrics from the confusion matrix:
+
+* The values in the diagonal are the *true positive* values of each entity.
+* The sum of the values in an entity's row (excluding the diagonal) is the *false positive* count of that entity.
+* The sum of the values in an entity's column (excluding the diagonal) is the *false negative* count of that entity.
+
+Similarly,
+
+* The *true positive* of the model is the sum of *true positives* for all entities.
+* The *false positive* of the model is the sum of *false positives* for all entities.
+* The *false negative* of the model is the sum of *false negatives* for all entities.
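+
+The following minimal sketch works through those calculations for the *person*/*city* example used earlier in this article, assuming a matrix whose rows are the predicted entity types and whose columns are the actual types (so that the row and column sums match the description above):
+
+```python
+# Rows = predicted entity type, columns = actual entity type (assumed orientation).
+entities = ["person", "city"]
+confusion = [
+    [2, 1],  # predicted person: 2 actually person, 1 actually city (Frederick)
+    [1, 1],  # predicted city: 1 actually person (Forrest), 1 actually city
+]
+
+for i, entity in enumerate(entities):
+    tp = confusion[i][i]
+    fp = sum(confusion[i][j] for j in range(len(entities)) if j != i)  # row, excluding the diagonal
+    fn = sum(confusion[j][i] for j in range(len(entities)) if j != i)  # column, excluding the diagonal
+    print(f"{entity}: TP={tp}, FP={fp}, FN={fn}")
+
+# Model-level counts are the sums across all entities.
+model_tp = sum(confusion[i][i] for i in range(len(entities)))
+model_fp = sum(confusion[i][j] for i in range(len(entities)) for j in range(len(entities)) if i != j)
+model_fn = model_fp  # each off-diagonal cell is a false positive for one entity and a false negative for another
+print(f"model: TP={model_tp}, FP={model_fp}, FN={model_fn}")
+```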
+
+## Next steps
+
+* [View a model's performance in Language Studio](../how-to/view-model-evaluation.md)
+* [Train a model](../how-to/train-model.md)
ai-services Fail Over https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/language-service/custom-named-entity-recognition/fail-over.md
+
+ Title: Back up and recover your custom Named Entity Recognition (NER) models
+
+description: Learn how to save and recover your custom NER models.
++++++ Last updated : 04/25/2022++++
+# Back up and recover your custom NER models
+
+When you create a Language resource, you specify a region for it to be created in. From then on, your resource and all of the operations related to it take place in the specified Azure server region. It's rare, but not impossible, to encounter a network issue that affects an entire region. If your solution needs to always be available, then you should design it to fail over into another region. This requires two Azure AI Language resources in different regions and synchronizing custom models across them.
+
+If your app or business depends on the use of a custom NER model, we recommend that you create a replica of your project in an additional supported region. If a regional outage occurs, you can then access your model in the other fail-over region where you replicated your project.
+
+Replicating a project means that you export your project metadata and assets, and import them into a new project. This only makes a copy of your project settings and tagged data. You still need to [train](./how-to/train-model.md) and [deploy](how-to/deploy-model.md) the models to be available for use with [prediction APIs](https://aka.ms/ct-runtime-swagger).
+
+In this article, you'll learn how to use the export and import APIs to replicate your project from one resource to another in a different supported geographical region, along with guidance on keeping your projects in sync and the changes needed to your runtime consumption.
+
+## Prerequisites
+
+* Two Azure AI Language resources in different Azure regions. [Create your resources](./how-to/create-project.md#create-a-language-resource) and connect them to an Azure storage account. It's recommended that you connect each of your Language resources to different storage accounts. Each storage account should be located in the same respective regions that your separate Language resources are in. You can follow the [quickstart](./quickstart.md?pivots=rest-api#create-a-new-azure-ai-language-resource-and-azure-storage-account) to create an additional Language resource and storage account.
++
+## Get your resource keys and endpoint
+
+Use the following steps to get the keys and endpoint of your primary and secondary resources. These will be used in the following steps.
++
+> [!TIP]
+> Keep a note of keys and endpoints for both primary and secondary resources as well as the primary and secondary container names. Use these values to replace the following placeholders:
+`{PRIMARY-ENDPOINT}`, `{PRIMARY-RESOURCE-KEY}`, `{PRIMARY-CONTAINER-NAME}`, `{SECONDARY-ENDPOINT}`, `{SECONDARY-RESOURCE-KEY}`, and `{SECONDARY-CONTAINER-NAME}`.
+> Also take note of your project name, your model name and your deployment name. Use these values to replace the following placeholders: `{PROJECT-NAME}`, `{MODEL-NAME}` and `{DEPLOYMENT-NAME}`.
+
+## Export your primary project assets
+
+Start by exporting the project assets from the project in your primary resource.
+
+### Submit export job
+
+Replace the placeholders in the following request with your `{PRIMARY-ENDPOINT}` and `{PRIMARY-RESOURCE-KEY}` that you obtained in the first step.
++
+### Get export job status
+
+Replace the placeholders in the following request with your `{PRIMARY-ENDPOINT}` and `{PRIMARY-RESOURCE-KEY}` that you obtained in the first step.
+++
+Copy the response body as you will use it as the body for the next import job.
+
+## Import to a new project
+
+Now go ahead and import the exported project assets in your new project in the secondary region so you can replicate it.
+
+### Submit import job
+
+Replace the placeholders in the following request with your `{SECONDARY-ENDPOINT}`, `{SECONDARY-RESOURCE-KEY}`, and `{SECONDARY-CONTAINER-NAME}` that you obtained in the first step.
++
+### Get import job status
+
+Replace the placeholders in the following request with your `{SECONDARY-ENDPOINT}` and `{SECONDARY-RESOURCE-KEY}` that you obtained in the first step.
+++
+## Train your model
+
+After importing your project, you've only copied the project's metadata and assets. You still need to train your model, which will incur usage on your account.
+
+### Submit training job
+
+Replace the placeholders in the following request with your `{SECONDARY-ENDPOINT}` and `{SECONDARY-RESOURCE-KEY}` that you obtained in the first step.
+++
+### Get training status
+
+Replace the placeholders in the following request with your `{SECONDARY-ENDPOINT}` and `{SECONDARY-RESOURCE-KEY}` that you obtained in the first step.
++
+## Deploy your model
+
+This is the step where you make your trained model available for consumption via the [runtime prediction API](https://aka.ms/ct-runtime-swagger).
+
+> [!TIP]
+> Use the same deployment name as your primary project for easier maintenance and minimal changes to your system to handle redirecting your traffic.
+
+### Submit deployment job
+
+Replace the placeholders in the following request with your `{SECONDARY-ENDPOINT}` and `{SECONDARY-RESOURCE-KEY}` that you obtained in the first step.
++
+### Get the deployment status
+
+Replace the placeholders in the following request with your `{SECONDARY-ENDPOINT}` and `{SECONDARY-RESOURCE-KEY}` that you obtained in the first step.
++
+## Changes in calling the runtime
+
+Within your system, at the step where you call the [runtime prediction API](https://aka.ms/ct-runtime-swagger), check the response code returned from the submit task API. If you observe **consistent** failures when submitting the request, this could indicate an outage in your primary region. A single failure doesn't mean an outage; it may be a transient issue. Retry submitting the job through the secondary resource you have created. For the second request, use your `{SECONDARY-ENDPOINT}` and `{SECONDARY-RESOURCE-KEY}`. If you have followed the steps above, `{PROJECT-NAME}` and `{DEPLOYMENT-NAME}` are the same, so no changes are required to the request body.
+
+If you revert to using your secondary resource, you will observe a slight increase in latency because of the difference in regions where your model is deployed.
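+
+The following is a minimal sketch of that failover logic in Python with the `requests` library. The prediction path, request body, and retry threshold are placeholders and assumptions; take the actual prediction request format from the runtime API reference linked above.
+
+```python
+import requests
+
+PRIMARY = {"endpoint": "<PRIMARY-ENDPOINT>", "key": "<PRIMARY-RESOURCE-KEY>"}
+SECONDARY = {"endpoint": "<SECONDARY-ENDPOINT>", "key": "<SECONDARY-RESOURCE-KEY>"}
+CONSECUTIVE_FAILURE_THRESHOLD = 3  # assumption: treat repeated failures as a possible regional outage
+
+
+def submit_prediction(resource, path, body):
+    """Submit the runtime prediction request to the given Language resource."""
+    return requests.post(
+        f"{resource['endpoint']}{path}",  # same path and body for both resources, because the
+        headers={"Ocp-Apim-Subscription-Key": resource["key"]},  # project and deployment names match
+        json=body,
+        timeout=30,
+    )
+
+
+def submit_with_failover(path, body):
+    """Try the primary region first; fall back to the secondary region on consistent failures."""
+    failures = 0
+    while failures < CONSECUTIVE_FAILURE_THRESHOLD:
+        try:
+            response = submit_prediction(PRIMARY, path, body)
+            if response.status_code < 500:
+                return response  # success, or a client-side error that a retry won't fix
+        except requests.RequestException:
+            pass  # network errors also count as failures
+        failures += 1
+    # Consistent failures against the primary region: retry through the secondary resource.
+    return submit_prediction(SECONDARY, path, body)
+```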
+
+## Check if your projects are out of sync
+
+Maintaining the freshness of both projects is an important part of the process. You need to frequently check whether any updates were made to your primary project so that you can move them over to your secondary project. This way, if your primary region fails and you move to the secondary region, you should expect similar model performance, since it already contains the latest updates. Setting the frequency of checking if your projects are in sync is an important choice. We recommend that you do this check daily in order to guarantee the freshness of data in your secondary model.
+
+### Get project details
+
+Use the following URL to get your project details; one of the keys returned in the response body indicates the last modified date of the project.
+Repeat the following step twice, once for your primary project and once for your secondary project, and compare the timestamps returned for both to check whether they're out of sync.
+
+ [!INCLUDE [get project details](./includes/rest-api/get-project-details.md)]
++
+Repeat the same steps for your replicated project using `{SECONDARY-ENDPOINT}` and `{SECONDARY-RESOURCE-KEY}`. Compare the returned `lastModifiedDateTime` from both projects. If your primary project was modified sooner than your secondary one, you need to repeat the steps of [exporting](#export-your-primary-project-assets), [importing](#import-to-a-new-project), [training](#train-your-model) and [deploying](#deploy-your-model).
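+
+The comparison itself can be scripted. The following is a minimal sketch that only compares the `lastModifiedDateTime` values; the example payloads stand in for the two project details responses described above:
+
+```python
+from datetime import datetime, timezone
+
+def parse_timestamp(value: str) -> datetime:
+    """Parse an ISO 8601 lastModifiedDateTime value such as '2022-05-01T12:30:00Z'."""
+    return datetime.fromisoformat(value.replace("Z", "+00:00")).astimezone(timezone.utc)
+
+# Example payloads; in practice these come from the two "get project details" responses.
+primary_details = {"lastModifiedDateTime": "2022-05-02T09:15:00Z"}
+secondary_details = {"lastModifiedDateTime": "2022-05-01T12:30:00Z"}
+
+if parse_timestamp(primary_details["lastModifiedDateTime"]) > parse_timestamp(secondary_details["lastModifiedDateTime"]):
+    # The primary project has newer changes: repeat export, import, train, and deploy on the secondary resource.
+    print("Projects are out of sync; replicate the primary project again.")
+else:
+    print("The secondary project is up to date.")
+```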
++
+## Next steps
+
+In this article, you have learned how to use the export and import APIs to replicate your project to a secondary Language resource in another region. Next, explore the API reference docs to see what else you can do with authoring APIs.
+
+* [Authoring REST API reference](https://aka.ms/ct-authoring-swagger)
+
+* [Runtime prediction REST API reference](https://aka.ms/ct-runtime-swagger)
ai-services Faq https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/language-service/custom-named-entity-recognition/faq.md
+
+ Title: Custom Named Entity Recognition (NER) FAQ
+
+description: Learn about Frequently asked questions when using custom Named Entity Recognition.
++++++ Last updated : 08/08/2022+++++
+# Frequently asked questions for Custom Named Entity Recognition
+
+Find answers to commonly asked questions about concepts and scenarios related to custom NER in Azure AI Language.
+
+## How do I get started with the service?
+
+See the [quickstart](./quickstart.md) to quickly create your first project, or view [how to create projects](how-to/create-project.md) for more detailed information.
+
+## What are the service limits?
+
+See the [service limits article](service-limits.md) for more information.
+
+## How many tagged files are needed?
+
+Generally, diverse and representative [tagged data](how-to/tag-data.md) leads to better results, given that the tagging is done precisely, consistently, and completely. There is no set number of tagged instances that will make every model perform well. Performance is highly dependent on your schema and its ambiguity. Ambiguous entity types need more tags. Performance also depends on the quality of your tagging. The recommended number of tagged instances per entity is 50.
+
+## Training is taking a long time, is this expected?
+
+The training process can take a long time. As a rough estimate, the expected training time for files with a combined length of 12,800,000 chars is 6 hours.
+
+## How do I build my custom model programmatically?
++
+You can use the [REST APIs](https://westus.dev.cognitive.microsoft.com/docs/services/language-authoring-clu-apis-2022-03-01-preview/operations/Projects_TriggerImportProjectJob) to build your custom models. Follow this [quickstart](quickstart.md?pivots=rest-api) to get started with creating a project and creating a model through APIs for examples of how to call the Authoring API.
+
+When you're ready to start [using your model to make predictions](#how-do-i-use-my-trained-model-for-predictions), you can use the REST API, or the client library.
+
+## What is the recommended CI/CD process?
+
+You can train multiple models on the same dataset within the same project. After you have trained your model successfully, you can [view its performance](how-to/view-model-evaluation.md). You can [deploy and test](quickstart.md#deploy-your-model) your model within [Language studio](https://aka.ms/languageStudio). You can add or remove labels from your data, train a **new** model, and test it as well. View [service limits](service-limits.md) to learn about the maximum number of trained models in the same project. When you [train a model](how-to/train-model.md), you can determine how your dataset is split into training and testing sets. If your data is split randomly into training and testing sets, there is no guarantee that the model evaluation uses the same test set each time, so the results are not comparable. It's recommended that you develop your own test set and use it to evaluate both models so you can measure improvement.
+
+## Does a low or high model score guarantee bad or good performance in production?
+
+Model evaluation may not always be comprehensive. This depends on:
+* **Test set size**: if the test set is too small, the scores are not representative of the model's actual performance. Also, if a specific entity type is missing or under-represented in your test set, it will affect model performance.
+* **Data diversity**: if your data only covers a few scenarios or examples of the text you expect in production, your model will not be exposed to all possible scenarios and might perform poorly on the scenarios it hasn't been trained on.
+* **Data representation**: if the dataset used to train the model is not representative of the data that would be introduced to the model in production, model performance will be affected greatly.
+
+See the [data selection and schema design](how-to/design-schema.md) article for more information.
+
+## How do I improve model performance?
+
+* View the model [confusion matrix](how-to/view-model-evaluation.md). If you notice that a certain entity type is frequently not predicted correctly, consider adding more tagged instances for this class. If you notice that two entity types are frequently predicted as each other, this means the schema is ambiguous, and you should consider merging them both into one entity type for better performance.
+
+* [Review test set predictions](how-to/view-model-evaluation.md). If one of the entity types has a lot more tagged instances than the others, your model may be biased towards this type. Add more data to the other entity types or remove examples from the dominating type.
+
+* Learn more about [data selection and schema design](how-to/design-schema.md).
+
+* [Review your test set](how-to/view-model-evaluation.md) to see predicted and tagged entities side-by-side so you can get a better idea of your model performance, and decide if any changes in the schema or the tags are necessary.
+
+## Why do I get different results when I retrain my model?
+
+* When you [train your model](how-to/train-model.md), you can determine if you want your data to be split randomly into train and test sets. If you do, there is no guarantee that the model evaluation is performed on the same test set each time, so results are not comparable.
+
+* If you're retraining the same model, your test set will be the same, but you might notice a slight change in the predictions made by the model. This is because the trained model is not robust enough, which is a factor of how representative and distinct your data is, and the quality of your tagged data.
+
+## How do I get predictions in different languages?
+
+First, you need to enable the multilingual option when [creating your project](how-to/create-project.md) or you can enable it later from the project settings page. After you train and deploy your model, you can start querying it in [multiple languages](language-support.md#multi-lingual-option). You may get varied results for different languages. To improve the accuracy of any language, add more tagged instances to your project in that language to introduce the trained model to more syntax of that language.
+
+## I trained my model, but I can't test it
+
+You need to [deploy your model](quickstart.md#deploy-your-model) before you can test it.
+
+## How do I use my trained model for predictions?
+
+After deploying your model, you [call the prediction API](how-to/call-api.md), using either the [REST API](how-to/call-api.md?tabs=rest-api) or [client libraries](how-to/call-api.md?tabs=client).
+
+## Data privacy and security
+
+Custom NER is a data processor for General Data Protection Regulation (GDPR) purposes. In compliance with GDPR policies, Custom NER users have full control to view, export, or delete any user content either through the [Language Studio](https://aka.ms/languageStudio) or programmatically by using [REST APIs](https://westus.dev.cognitive.microsoft.com/docs/services/language-authoring-clu-apis-2022-03-01-preview/operations/Projects_TriggerImportProjectJob).
+
+Your data is only stored in your Azure Storage account. Custom NER only has access to read from it during training.
+
+## How do I clone my project?
+
+To clone your project you need to use the export API to export the project assets, and then import them into a new project. See the [REST API](https://westus.dev.cognitive.microsoft.com/docs/services/language-authoring-clu-apis-2022-03-01-preview/operations/Projects_TriggerImportProjectJob) reference for both operations.
+
+## Next steps
+
+* [Custom NER overview](overview.md)
+* [Quickstart](quickstart.md)
ai-services Glossary https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/language-service/custom-named-entity-recognition/glossary.md
+
+ Title: Definitions and terms used for Custom Named Entity Recognition (NER)
+
+description: Definitions and terms you may encounter when building AI models using Custom Named Entity Recognition
++++++ Last updated : 05/06/2022++++
+# Custom named entity recognition definitions and terms
+
+Use this article to learn about some of the definitions and terms you may encounter when using custom NER.
+
+## Entity
+
+An entity is a span of text that indicates a certain type of information. The text span can consist of one or more words. In the scope of custom NER, entities represent the information that the user wants to extract from the text. Developers label the needed entities within their data before passing it to the model for training. For example: "Invoice number", "Start date", "Shipment number", "Birthplace", "Origin city", "Supplier name", or "Client address".
+
+For example, in the sentence "*John borrowed 25,000 USD from Fred.*" the entities might be:
+
+| Entity name/type | Entity |
+|--|--|
+| Borrower Name | *John* |
+| Lender Name | *Fred* |
+| Loan Amount | *25,000 USD* |
+
+## F1 score
+The F1 score is a function of Precision and Recall. It's needed when you seek a balance between [precision](#precision) and [recall](#recall).
+
+## Model
+
+A model is an object that is trained to do a certain task, in this case custom entity recognition. Models are trained by providing labeled data to learn from so they can later be used for recognition tasks.
+
+* **Model training** is the process of teaching your model what to extract based on your labeled data.
+* **Model evaluation** is the process that happens right after training to determine how well your model performs.
+* **Deployment** is the process of assigning your model to a deployment to make it available for use via the [prediction API](https://aka.ms/ct-runtime-swagger).
+
+## Precision
+Measures how precise/accurate your model is. It's the ratio between the correctly identified positives (true positives) and all identified positives. The precision metric reveals how many of the predicted classes are correctly labeled.
+
+## Project
+
+A project is a work area for building your custom ML models based on your data. Your project can only be accessed by you and others who have access to the Azure resource being used.
+As a prerequisite to creating a custom entity extraction project, you have to connect your resource to a storage account with your dataset when you [create a new project](how-to/create-project.md). Your project automatically includes all the `.txt` files available in your container.
+
+Within your project you can do the following actions:
+
+* **Label your data**: The process of labeling your data so that when you train your model it learns what you want to extract.
+* **Build and train your model**: The core step of your project, where your model starts learning from your labeled data.
+* **View model evaluation details**: Review your model's performance to decide whether there is room for improvement, or whether you're satisfied with the results.
+* **Deployment**: After you have reviewed the model's performance and decided it can be used in your environment, you need to assign it to a deployment to use it. Assigning the model to a deployment makes it available for use through the [prediction API](https://aka.ms/ct-runtime-swagger).
+* **Test model**: After deploying your model, test your deployment in [Language Studio](https://aka.ms/LanguageStudio) to see how it would perform in production.
+
+## Recall
+Measures the model's ability to predict actual positive classes. It's the ratio between the predicted true positives and what was actually tagged. The recall metric reveals how many of the tagged classes are correctly predicted.
+
+## Next steps
+
+* [Data and service limits](service-limits.md).
+* [Custom NER overview](../overview.md).
ai-services Call Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/language-service/custom-named-entity-recognition/how-to/call-api.md
+
+ Title: Send a Named Entity Recognition (NER) request to your custom model
+description: Learn how to send requests for custom NER.
+++++++ Last updated : 05/11/2023+
+ms.devlang: csharp, python
+++
+# Query your custom model
+
+After the deployment is added successfully, you can query the deployment to extract entities from your text based on the model you assigned to the deployment.
+You can query the deployment programmatically using the [Prediction API](https://aka.ms/ct-runtime-api) or through the client libraries (Azure SDK).
+
+## Test deployed model
+
+You can use Language Studio to submit the custom entity recognition task and visualize the results.
++++
+## Send an entity recognition request to your model
+
+# [Language Studio](#tab/language-studio)
++
+# [REST API](#tab/rest-api)
+
+First you need to get your resource key and endpoint:
+++
+### Submit a custom NER task
++
+### Get task results
++
+# [Client libraries (Azure SDK)](#tab/client)
+
+First you need to get your resource key and endpoint:
++
+3. Download and install the client library package for your language of choice:
+
+ |Language |Package version |
+ |||
+ |.NET | [5.2.0-beta.3](https://www.nuget.org/packages/Azure.AI.TextAnalytics/5.2.0-beta.3) |
+ |Java | [5.2.0-beta.3](https://mvnrepository.com/artifact/com.azure/azure-ai-textanalytics/5.2.0-beta.3) |
+ |JavaScript | [6.0.0-beta.1](https://www.npmjs.com/package/@azure/ai-text-analytics/v/6.0.0-beta.1) |
+ |Python | [5.2.0b4](https://pypi.org/project/azure-ai-textanalytics/5.2.0b4/) |
+
+4. After you've installed the client library, use the following samples on GitHub to start calling the API.
+
+ * [C#](https://github.com/Azure/azure-sdk-for-net/blob/main/sdk/textanalytics/Azure.AI.TextAnalytics/samples/Sample8_RecognizeCustomEntities.md)
+ * [Java](https://github.com/Azure/azure-sdk-for-java/blob/main/sdk/textanalytics/azure-ai-textanalytics/src/samples/java/com/azure/ai/textanalytics/lro/RecognizeCustomEntities.java)
+ * [JavaScript](https://github.com/Azure/azure-sdk-for-js/blob/%40azure/ai-text-analytics_6.0.0-beta.1/sdk/textanalytics/ai-text-analytics/samples/v5/javascript/customText.js)
+ * [Python](https://github.com/Azure/azure-sdk-for-python/blob/main/sdk/textanalytics/azure-ai-textanalytics/samples/sample_recognize_custom_entities.py)
+
+5. See the following reference documentation for more information on the client, and return object:
+
+ * [C#](/dotnet/api/azure.ai.textanalytics?view=azure-dotnet-preview&preserve-view=true)
+ * [Java](/java/api/overview/azure/ai-textanalytics-readme?view=azure-java-preview&preserve-view=true)
+ * [JavaScript](/javascript/api/overview/azure/ai-text-analytics-readme?view=azure-node-preview&preserve-view=true)
+ * [Python](/python/api/azure-ai-textanalytics/azure.ai.textanalytics?view=azure-python-preview&preserve-view=true)
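+
+For example, the following minimal Python sketch is based on the `azure-ai-textanalytics` package listed above. Method names and result shapes differ between preview versions (older previews expose custom entity recognition through the analyze-actions APIs), so treat the linked samples and reference documentation as authoritative; the endpoint, key, project, and deployment values are placeholders.
+
+```python
+from azure.core.credentials import AzureKeyCredential
+from azure.ai.textanalytics import TextAnalyticsClient
+
+# Placeholders: use your Language resource endpoint and key, and your project and deployment names.
+client = TextAnalyticsClient("<YOUR-ENDPOINT>", AzureKeyCredential("<YOUR-KEY>"))
+
+documents = ["The loan agreement was signed by John Smith on behalf of Contoso Ltd."]
+
+poller = client.begin_recognize_custom_entities(
+    documents,
+    project_name="<PROJECT-NAME>",
+    deployment_name="<DEPLOYMENT-NAME>",
+)
+
+for result in poller.result():
+    if result.is_error:
+        print(f"Error {result.error.code}: {result.error.message}")
+        continue
+    for entity in result.entities:
+        # Each recognized entity carries the extracted text, its category, and a confidence score.
+        print(f"{entity.text} -> {entity.category} ({entity.confidence_score:.2f})")
+```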
+
++
+## Next steps
+
+* [Frequently asked questions](../faq.md)
ai-services Create Project https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/language-service/custom-named-entity-recognition/how-to/create-project.md
+
+ Title: Create custom NER projects and use Azure resources
+
+description: Learn how to create and manage projects and Azure resources for custom NER.
++++++ Last updated : 06/03/2022++++
+# How to create custom NER project
+
+Use this article to learn how to set up the requirements for starting with custom NER and create a project.
+
+## Prerequisites
+
+Before you start using custom NER, you will need:
+
+* An Azure subscription - [Create one for free](https://azure.microsoft.com/free/cognitive-services).
+
+## Create a Language resource
+
+Before you start using custom NER, you will need an Azure AI Language resource. It is recommended to create your Language resource and connect a storage account to it in the Azure portal. Creating a resource in the Azure portal lets you create an Azure storage account at the same time, with all of the required permissions pre-configured. You can also read further in the article to learn how to use a pre-existing resource, and configure it to work with custom named entity recognition.
+
+You'll also need an Azure storage account where you'll upload your `.txt` documents that will be used to train a model to extract entities.
+
+> [!NOTE]
+> * You need to have an **owner** role assigned on the resource group to create a Language resource.
+> * If you will connect a pre-existing storage account, you should have an owner role assigned to it.
+
+## Create Language resource and connect storage account
+
+You can create a resource in the following ways:
+
+* The Azure portal
+* Language Studio
+* PowerShell
+
+> [!Note]
+> You shouldn't move the storage account to a different resource group or subscription once it's linked with the Language resource.
+++++
+> [!NOTE]
+> * The process of connecting a storage account to your Language resource is irreversible; it cannot be disconnected later.
+> * You can only connect your language resource to one storage account.
+
+## Using a pre-existing Language resource
++
+## Create a custom named entity recognition project
+
+Once your resource and storage container are configured, create a new custom NER project. A project is a work area for building your custom AI models based on your data. Your project can only be accessed by you and others who have access to the Azure resource being used. If you have labeled data, you can use it to get started by [importing a project](#import-project).
+
+### [Language Studio](#tab/language-studio)
++
+### [REST APIs](#tab/rest-api)
++++
+## Import project
+
+If you have already labeled data, you can use it to get started with the service. Make sure that your labeled data follows the [accepted data formats](../concepts/data-formats.md).
+
+### [Language Studio](#tab/language-studio)
++
+### [REST APIs](#tab/rest-api)
++++
+## Get project details
+
+### [Language Studio](#tab/language-studio)
++
+### [REST APIs](#tab/rest-api)
++++
+## Delete project
+
+### [Language Studio](#tab/language-studio)
++
+### [REST APIs](#tab/rest-api)
++++
+## Next steps
+
+* You should have an idea of the [project schema](design-schema.md) you will use to label your data.
+
+* After your project is created, you can start [labeling your data](tag-data.md), which will inform your entity extraction model how to interpret text, and is used for training and evaluation.
ai-services Deploy Model https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/language-service/custom-named-entity-recognition/how-to/deploy-model.md
+
+ Title: How to deploy a custom NER model
+
+description: Learn how to deploy a model for custom NER.
++++++ Last updated : 03/23/2023++++
+# Deploy a model and extract entities from text using the runtime API
+
+Once you are satisfied with how your model performs, it is ready to be deployed and used to recognize entities in text. Deploying a model makes it available for use through the [prediction API](https://aka.ms/ct-runtime-swagger).
+
+## Prerequisites
+
+* A successfully [created project](create-project.md) with a configured Azure storage account.
+* Text data that has [been uploaded](design-schema.md#data-preparation) to your storage account.
+* [Labeled data](tag-data.md) and successfully [trained model](train-model.md)
+* Reviewed the [model evaluation details](view-model-evaluation.md) to determine how your model is performing.
+
+See [project development lifecycle](../overview.md#project-development-lifecycle) for more information.
+
+## Deploy model
+
+After you've reviewed your model's performance and decided it can be used in your environment, you need to assign it to a deployment. Assigning the model to a deployment makes it available for use through the [prediction API](https://aka.ms/ct-runtime-swagger). It is recommended to create a deployment named *production* to which you assign the best model you have built so far and use it in your system. You can create another deployment called *staging* to which you can assign the model you're currently working on to be able to test it. You can have a maximum of 10 deployments in your project.
+
+# [Language Studio](#tab/language-studio)
+
+
+# [REST APIs](#tab/rest-api)
+
+### Submit deployment job
++
+### Get deployment job status
++++
+## Swap deployments
+
+After you are done testing a model assigned to one deployment and you want to assign this model to another deployment, you can swap these two deployments. Swapping deployments involves taking the model assigned to the first deployment and assigning it to the second deployment, then taking the model assigned to the second deployment and assigning it to the first deployment. You can use this process to swap your *production* and *staging* deployments when you want to take the model assigned to *staging* and assign it to *production*.
+
+# [Language Studio](#tab/language-studio)
++
+# [REST APIs](#tab/rest-api)
+++++
+## Delete deployment
+
+# [Language Studio](#tab/language-studio)
++
+# [REST APIs](#tab/rest-api)
++++
+## Assign deployment resources
+
+You can [deploy your project to multiple regions](../../concepts/custom-features/multi-region-deployment.md) by assigning different Language resources that exist in different regions.
+
+# [Language Studio](#tab/language-studio)
++
+# [REST APIs](#tab/rest-api)
++++
+## Unassign deployment resources
+
+When unassigning or removing a deployment resource from a project, you will also delete all the deployments that have been deployed to that resource's region.
+
+# [Language Studio](#tab/language-studio)
++
+# [REST APIs](#tab/rest-api)
++++
+## Next steps
+
+After you have a deployment, you can use it to [extract entities](call-api.md) from text.
ai-services Design Schema https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/language-service/custom-named-entity-recognition/how-to/design-schema.md
+
+ Title: Preparing data and designing a schema for custom NER
+
+description: Learn about how to select and prepare data, to be successful in creating custom NER projects.
++++++ Last updated : 05/09/2022++++
+# How to prepare data and define a schema for custom NER
+
+In order to create a custom NER model, you will need quality data to train it. This article covers how you should select and prepare your data, along with defining a schema. Defining the schema is the first step in [project development lifecycle](../overview.md#project-development-lifecycle), and it defines the entity types/categories that you need your model to extract from the text at runtime.
+
+## Schema design
+
+The schema defines the entity types/categories that you need your model to extract from text at runtime.
+
+* Review documents in your dataset to be familiar with their format and structure.
+
+* Identify the [entities](../glossary.md#entity) you want to extract from the data.
+
+ For example, if you are extracting entities from support emails, you might need to extract "Customer name", "Product name", "Request date", and "Contact information".
+
+* Avoid ambiguity between entity types.
+
+    **Ambiguity** happens when the entity types you select are similar to each other. The more ambiguous your schema is, the more labeled data you will need to differentiate between the entity types.
+
+    For example, if you are extracting data from a legal contract, to extract "Name of first party" and "Name of second party" you will need to add more examples to overcome ambiguity, since the names of both parties look similar. Avoiding ambiguity saves time and effort, and yields better results.
+
+* Avoid complex entities. Complex entities can be difficult to pick out precisely from text; consider breaking them down into multiple entities.
+
+    For example, extracting "Address" would be challenging if it's not broken down into smaller entities. There are so many variations of how addresses appear that it would take a large number of labeled entities to teach the model to extract an address as a whole, without breaking it down. However, if you replace "Address" with "Street Name", "PO Box", "City", "State" and "Zip", the model will require fewer labels per entity.
+
+## Data selection
+
+The quality of data you train your model with affects model performance greatly.
+
+* Use real-life data that reflects your domain's problem space to effectively train your model. You can use synthetic data to accelerate the initial model training process, but it will likely differ from your real-life data and make your model less effective when used.
+
+* Balance your data distribution as much as possible without deviating far from the distribution in real life. For example, if you are training your model to extract entities from legal documents that may come in many different formats and languages, you should provide examples that exemplify the diversity you would expect to see in real life.
+
+* Use diverse data whenever possible to avoid overfitting your model. Less diversity in training data may lead to your model learning spurious correlations that may not exist in real-life data.
+
+* Avoid duplicate documents in your data. Duplicate data has a negative effect on the training process, model metrics, and model performance.
+
+* Consider where your data comes from. If you are collecting data from one person, department, or part of your scenario, you are likely missing diversity that may be important for your model to learn about.
+
+> [!NOTE]
+> If your documents are in multiple languages, select the **enable multi-lingual** option during [project creation](../quickstart.md) and set the **language** option to the language of the majority of your documents.
+
+## Data preparation
+
+As a prerequisite for creating a project, your training data needs to be uploaded to a blob container in your storage account. You can create and upload training documents from Azure directly, or by using the Azure Storage Explorer tool. Using the Azure Storage Explorer tool allows you to upload more data quickly.
+
+* [Create and upload documents from Azure](../../../../storage/blobs/storage-quickstart-blobs-portal.md#create-a-container)
+* [Create and upload documents using Azure Storage Explorer](../../../../vs-azure-tools-storage-explorer-blobs.md)
+
+You can only use `.txt` documents. If your data is in another format, you can use the [CLUtils parse command](https://github.com/microsoft/CognitiveServicesLanguageUtilities/blob/main/CustomTextAnalytics.CLUtils/Solution/CogSLanguageUtilities.ViewLayer.CliCommands/Commands/ParseCommand/README.md) to change your document format.
+
+You can upload an annotated dataset, or you can upload an unannotated one and [label your data](../how-to/tag-data.md) in Language studio.
+
+## Test set
+
+When defining the testing set, make sure to include example documents that are not present in the training set. Defining the testing set is an important step in calculating the [model performance](view-model-evaluation.md#model-details). Also, make sure that the testing set includes documents that represent all entities used in your project.
+
+## Next steps
+
+If you haven't already, create a custom NER project. If it's your first time using custom NER, consider following the [quickstart](../quickstart.md) to create an example project. You can also see the [how-to article](../how-to/create-project.md) for more details on what you need to create a project.
ai-services Tag Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/language-service/custom-named-entity-recognition/how-to/tag-data.md
+
+ Title: How to label your data for Custom Named Entity Recognition (NER)
+
+description: Learn how to label your data for use with Custom Named Entity Recognition (NER).
++++++ Last updated : 05/24/2022++++
+# Label your data in Language Studio
+
+Before training your model you need to label your documents with the custom entities you want to extract. Data labeling is a crucial step in the development lifecycle. In this step, you create the entity types you want to extract from your data and label these entities within your documents. This data is used in the next step when training your model, so that your model can learn from the labeled data. If you already have labeled data, you can directly [import](create-project.md#import-project) it into your project, but you need to make sure that your data follows the [accepted data format](../concepts/data-formats.md). See [create project](create-project.md#import-project) to learn more about importing labeled data into your project.
+
+Before creating a custom NER model, you need to have labeled data first. If your data isn't labeled already, you can label it in the [Language Studio](https://aka.ms/languageStudio). Labeled data informs the model how to interpret text, and is used for training and evaluation.
+
+## Prerequisites
+
+Before you can label your data, you need:
+
+* A successfully [created project](create-project.md) with a configured Azure blob storage account
+* Text data that [has been uploaded](design-schema.md#data-preparation) to your storage account.
+
+See the [project development lifecycle](../overview.md#project-development-lifecycle) for more information.
+
+## Data labeling guidelines
+
+After [preparing your data, designing your schema](design-schema.md) and [creating your project](create-project.md), you will need to label your data. Labeling your data is important so your model knows which words will be associated with the entity types you need to extract. When you label your data in [Language Studio](https://aka.ms/languageStudio) (or import labeled data), these labels will be stored in the JSON document in your storage container that you have connected to this project.
+
+As you label your data, keep in mind:
+
+* In general, more labeled data leads to better results, provided the data is labeled accurately.
+
+* The precision, consistency, and completeness of your labeled data are key factors in determining model performance.
+
+    * **Label precisely**: Always label each entity with its correct type. Only include what you want extracted; avoid unnecessary data in your labels.
+    * **Label consistently**: The same entity should have the same label across all the documents.
+    * **Label completely**: Label all the instances of the entity in all your documents. You can use the [auto-labeling feature](use-autolabeling.md) to ensure complete labeling.
+
+ > [!NOTE]
+ > There is no fixed number of labels that can guarantee your model will perform the best. Model performance is dependent on possible ambiguity in your [schema](design-schema.md), and the quality of your labeled data. Nevertheless, we recommend having around 50 labeled instances per entity type.
+
+## Label your data
+
+Use the following steps to label your data:
+
+1. Go to your project page in [Language Studio](https://aka.ms/languageStudio).
+
+2. From the left side menu, select **Data labeling**. You can find a list of all documents in your storage container.
+
+ <!--:::image type="content" source="../media/tagging-files-view.png" alt-text="A screenshot showing the Language Studio screen for labeling data." lightbox="../media/tagging-files-view.png":::-->
+
+ >[!TIP]
+ > You can use the filters in top menu to view the unlabeled documents so that you can start labeling them.
+ > You can also use the filters to view the documents that are labeled with a specific entity type.
+
+3. Change to a single document view from the left side in the top menu or select a specific document to start labeling. You can find a list of all `.txt` documents available in your project to the left. You can use the **Back** and **Next** button from the bottom of the page to navigate through your documents.
+
+ > [!NOTE]
+ > If you enabled multiple languages for your project, you will find a **Language** dropdown in the top menu, which lets you select the language of each document.
+
+4. In the right side pane, **Add entity type** to your project so you can start labeling your data with them.
+
+ <!--:::image type="content" source="../media/tag-1.png" alt-text="A screenshot showing complete data labeling." lightbox="../media/tag-1.png":::-->
+
+6. You have two options to label your document:
+
+ |Option |Description |
+ |||
+ |Label using a brush | Select the brush icon next to an entity type in the right pane, then highlight the text in the document you want to annotate with this entity type. |
+ |Label using a menu | Highlight the word you want to label as an entity, and a menu will appear. Select the entity type you want to assign for this entity. |
+
+ The below screenshot shows labeling using a brush.
+
+ :::image type="content" source="../media/tag-options.png" alt-text="A screenshot showing the labeling options offered in Custom NER." lightbox="../media/tag-options.png":::
+
+6. In the right side pane under the **Labels** pivot you can find all the entity types in your project and the count of labeled instances per each.
+
+6. In the bottom section of the right side pane you can add the current document you are viewing to the training set or the testing set. By default all the documents are added to your training set. Learn more about [training and testing sets](train-model.md#data-splitting) and how they are used for model training and evaluation.
+
+ > [!TIP]
+ > If you are planning on using **Automatic** data splitting, use the default option of assigning all the documents into your training set.
+
+7. Under the **Distribution** pivot you can view the distribution across training and testing sets. You have two options for viewing:
+ * *Total instances* where you can view count of all labeled instances of a specific entity type.
+ * *documents with at least one label* where each document is counted if it contains at least one labeled instance of this entity.
+
+7. When you're labeling, your changes are synced periodically. If they haven't been saved yet, you'll see a warning at the top of your page. If you want to save manually, select the **Save labels** button at the bottom of the page.
+
+## Remove labels
+
+To remove a label:
+
+1. Select the entity you want to remove a label from.
+2. Scroll through the menu that appears, and select **Remove label**.
+
+## Delete entities
+
+To delete an entity, select the delete icon next to the entity you want to remove. Deleting an entity will remove all its labeled instances from your dataset.
+
+## Next steps
+
+After you've labeled your data, you can begin [training a model](train-model.md) that will learn based on your data.
ai-services Train Model https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/language-service/custom-named-entity-recognition/how-to/train-model.md
+
+ Title: How to train your Custom Named Entity Recognition (NER) model
+
+description: Learn about how to train your model for Custom Named Entity Recognition (NER).
++++++ Last updated : 05/06/2022++++
+# Train your custom named entity recognition model
+
+Training is the process where the model learns from your [labeled data](tag-data.md). After training is completed, you'll be able to view the [model's performance](view-model-evaluation.md) to determine if you need to improve your model.
+
+To train a model, you start a training job and only successfully completed jobs create a model. Training jobs expire after seven days, which means you won't be able to retrieve the job details after this time. If your training job completed successfully and a model was created, the model won't be affected. You can only have one training job running at a time, and you can't start other jobs in the same project.
+
+Training can take anywhere from a few minutes when dealing with a few documents, up to several hours, depending on the dataset size and the complexity of your schema.
++
+## Prerequisites
+
+* A successfully [created project](create-project.md) with a configured Azure blob storage account
+* Text data that [has been uploaded](design-schema.md#data-preparation) to your storage account.
+* [Labeled data](tag-data.md)
+
+See the [project development lifecycle](../overview.md#project-development-lifecycle) for more information.
+
+## Data splitting
+
+Before you start the training process, labeled documents in your project are divided into a training set and a testing set. Each one of them serves a different function.
+The **training set** is used in training the model; this is the set from which the model learns the labeled entities and what spans of text are to be extracted as entities.
+The **testing set** is a blind set that isn't introduced to the model during training, but only during evaluation.
+After model training is completed successfully, the model is used to make predictions on the documents in the testing set, and based on these predictions the [evaluation metrics](../concepts/evaluation-metrics.md) are calculated.
+It's recommended to make sure that all your entities are adequately represented in both the training and testing sets.
+
+Custom NER supports two methods for data splitting:
+
+* **Automatically splitting the testing set from training data**: The system splits your labeled data between the training and testing sets, according to the percentages you choose. The recommended percentage split is 80% for training and 20% for testing. A minimal sketch of this kind of random split appears after this list.
+
+ > [!NOTE]
+ > If you choose the **Automatically splitting the testing set from training data** option, only the data assigned to training set will be split according to the percentages provided.
+
+* **Use a manual split of training and testing data**: This method enables users to define which labeled documents should belong to which set. This step is only enabled if you have added documents to your testing set during [data labeling](tag-data.md).
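+
+For illustration only, the automatic option behaves conceptually like a random split of your labeled documents. The following minimal sketch shows an 80/20 split; it isn't the service's actual implementation:
+
+```python
+import random
+
+def split_documents(documents, train_fraction=0.8, seed=42):
+    """Randomly split labeled documents into a training set and a testing set."""
+    shuffled = documents[:]  # copy so the original order is untouched
+    random.Random(seed).shuffle(shuffled)
+    cutoff = int(len(shuffled) * train_fraction)
+    return shuffled[:cutoff], shuffled[cutoff:]
+
+docs = [f"doc{i}.txt" for i in range(1, 11)]
+train_set, test_set = split_documents(docs)
+print(len(train_set), "documents for training")        # 8
+print(len(test_set), "documents held out for testing")  # 2
+```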
+
+## Train model
+
+# [Language studio](#tab/Language-studio)
++
+# [REST APIs](#tab/REST-APIs)
+
+### Start training job
++
+### Get training job status
+
+Training could take some time depending on the size of your training data and the complexity of your schema. You can use the following request to keep polling the status of the training job until it's successfully completed.
+
+ [!INCLUDE [get training model status](../includes/rest-api/get-training-status.md)]
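+
+As a rough illustration only, a shell polling loop for the training job status might look like the following sketch. The endpoint path, `api-version`, and job ID shown here are assumptions based on the authoring API conventions; use the exact URL returned for your training job (for example, in the `operation-location` response header).
+
+```bash
+# Hypothetical polling sketch; endpoint path, api-version, and job ID are assumptions.
+ENDPOINT="https://<your-custom-subdomain>.cognitiveservices.azure.com"
+PROJECT_NAME="myProject"
+JOB_ID="<training-job-id>"
+API_KEY="<your-key>"
+
+while true; do
+  RESPONSE=$(curl -s -X GET \
+    "$ENDPOINT/language/authoring/analyze-text/projects/$PROJECT_NAME/train/jobs/$JOB_ID?api-version=2022-05-01" \
+    -H "Ocp-Apim-Subscription-Key: $API_KEY")
+  echo "$RESPONSE" | grep -o '"status": *"[^"]*"'    # for example: "status": "running"
+  case "$RESPONSE" in
+    *succeeded*|*failed*|*cancelled*) break ;;       # stop on a terminal state
+  esac
+  sleep 30
+done
+```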
+++
+### Cancel training job
+
+# [Language Studio](#tab/language-studio)
++
+# [REST APIs](#tab/rest-api)
++++
+## Next steps
+
+After training is completed, you'll be able to view [model performance](view-model-evaluation.md) to optionally improve your model if needed. Once you're satisfied with your model, you can deploy it, making it available to use for [extracting entities](call-api.md) from text.
ai-services Use Autolabeling https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/language-service/custom-named-entity-recognition/how-to/use-autolabeling.md
+
+ Title: How to use autolabeling in custom named entity recognition
+
+description: Learn how to use autolabeling in custom named entity recognition.
+++++++ Last updated : 03/20/2023+++
+# How to use autolabeling for Custom Named Entity Recognition
+
+The [labeling process](tag-data.md) is an important part of preparing your dataset. Because this process requires both time and effort, you can use the autolabeling feature to automatically label your entities. You can start autolabeling jobs based on a model you've previously trained or by using GPT models. With autolabeling based on a model you've previously trained, you can label a few of your documents, train a model, and then create an autolabeling job to produce entity labels for other documents based on that model. With autolabeling with GPT, you can immediately trigger an autolabeling job without any prior model training. This feature can save you the time and effort of manually labeling your entities.
+
+## Prerequisites
+
+### [Autolabel based on a model you've trained](#tab/autolabel-model)
+
+Before you can use autolabeling based on a model you've trained, you need:
+* A successfully [created project](create-project.md) with a configured Azure blob storage account.
+* Text data that [has been uploaded](design-schema.md#data-preparation) to your storage account.
+* [Labeled data](tag-data.md)
+* A [successfully trained model](train-model.md)
++
+### [Autolabel with GPT](#tab/autolabel-gpt)
+Before you can use autolabeling with GPT, you need:
+* A successfully [created project](create-project.md) with a configured Azure blob storage account.
+* Text data that [has been uploaded](design-schema.md#data-preparation) to your storage account.
+* Entity names that are meaningful. The GPT models label entities in your documents based on the name of the entity you've provided.
+* [Labeled data](tag-data.md) isn't required.
+* An Azure OpenAI [resource and deployment](../../../openai/how-to/create-resource.md).
+++
+## Trigger an autolabeling job
+
+### [Autolabel based on a model you've trained](#tab/autolabel-model)
+
+When you trigger an autolabeling job based on a model you've trained, there's a limit of 5,000 text records per month, per resource. The same limit applies across all projects within the same resource.
+
+> [!TIP]
+> A text record is calculated as the ceiling of (Number of characters in a document / 1,000). For example, if a document has 8921 characters, the number of text records is:
+>
+> `ceil(8921/1000) = ceil(8.921)`, which is 9 text records.
+
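+As a quick illustration of the calculation above, the following sketch estimates the number of text records for a local file (the file name is only an example):
+
+```bash
+# One text record covers up to 1,000 characters; the count is rounded up (ceiling).
+chars=$(wc -m < document.txt)       # e.g. 8921 characters
+echo $(( (chars + 999) / 1000 ))    # integer ceiling division -> 9 text records
+```
+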
+1. From the left navigation menu, select **Data labeling**.
+2. Select the **Autolabel** button under the Activity pane to the right of the page.
++
+ :::image type="content" source="../media/trigger-autotag.png" alt-text="A screenshot showing how to trigger an autotag job." lightbox="../media/trigger-autotag.png":::
+
+3. Choose **Autolabel based on a model you've trained** and select **Next**.
+
+ :::image type="content" source="../media/choose-models.png" alt-text="A screenshot showing model choice for auto labeling." lightbox="../media/choose-models.png":::
+
+4. Choose a trained model. It's recommended to check the model performance before using it for autolabeling.
+
+ :::image type="content" source="../media/choose-model-trained.png" alt-text="A screenshot showing how to choose trained model for autotagging." lightbox="../media/choose-model-trained.png":::
+
+5. Choose the entities you want to be included in the autolabeling job. By default, all entities are selected. You can see the total labels, precision and recall of each entity. It's recommended to include entities that perform well to ensure the quality of the automatically labeled entities.
+
+ :::image type="content" source="../media/choose-entities.png" alt-text="A screenshot showing which entities to be included in autotag job." lightbox="../media/choose-entities.png":::
+
+6. Choose the documents you want to be automatically labeled. The number of text records of each document is displayed. When you select one or more documents, you should see the number of text records selected. It's recommended to choose the unlabeled documents from the filter.
+
+ > [!NOTE]
+ > * If an entity was automatically labeled, but has a user defined label, only the user defined label is used and visible.
+ > * You can view the documents by clicking on the document name.
+
+ :::image type="content" source="../media/choose-files.png" alt-text="A screenshot showing which documents to be included in the autotag job." lightbox="../media/choose-files.png":::
+
+7. Select **Autolabel** to trigger the autolabeling job.
+You should see the model used, number of documents included in the autolabeling job, number of text records and entities to be automatically labeled. Autolabeling jobs can take anywhere from a few seconds to a few minutes, depending on the number of documents you included.
+
+ :::image type="content" source="../media/review-autotag.png" alt-text="A screenshot showing the review screen for an autotag job." lightbox="../media/review-autotag.png":::
+
+### [Autolabel with GPT](#tab/autolabel-gpt)
+
+When you trigger an autolabeling job with GPT, you're charged to your Azure OpenAI resource as per your consumption. You're charged based on an estimate of the number of tokens in each document being autolabeled. Refer to the [Azure OpenAI pricing page](https://azure.microsoft.com/pricing/details/cognitive-services/openai-service/) for a detailed breakdown of pricing per token for the different models.
+
+1. From the left navigation menu, select **Data labeling**.
+2. Select the **Autolabel** button under the Activity pane to the right of the page.
+
+ :::image type="content" source="../media/trigger-autotag.png" alt-text="A screenshot showing how to trigger an autotag job from the activity pane." lightbox="../media/trigger-autotag.png":::
+
+3. Choose **Autolabel with GPT** and select **Next**.
+
+ :::image type="content" source="../media/choose-models.png" alt-text="A screenshot showing model choice for auto labeling." lightbox="../media/choose-models.png":::
+
+4. Choose your Azure OpenAI resource and deployment. You must [create an Azure OpenAI resource and deploy a model](../../../openai/how-to/create-resource.md) in order to proceed.
+
+ :::image type="content" source="../media/autotag-choose-open-ai.png" alt-text="A screenshot showing how to choose OpenAI resource and deployments" lightbox="../media/autotag-choose-open-ai.png":::
+
+5. Choose the entities you want to be included in the autolabeling job. By default, all entities are selected. Having descriptive names for labels and including examples for each label is recommended to achieve good quality labeling with GPT.
+
+ :::image type="content" source="../media/choose-entities.png" alt-text="A screenshot showing which entities to be included in autotag job." lightbox="../media/choose-entities.png":::
+
+6. Choose the documents you want to be automatically labeled. It's recommended to choose the unlabeled documents from the filter.
+
+ > [!NOTE]
+ > * If an entity was automatically labeled, but has a user defined label, only the user defined label is used and visible.
+ > * You can view the documents by clicking on the document name.
+
+ :::image type="content" source="../media/choose-files.png" alt-text="A screenshot showing which documents to be included in the autotag job." lightbox="../media/choose-files.png":::
+
+7. Select **Start job** to trigger the autolabeling job.
+You should be directed to the autolabeling page displaying the autolabeling jobs initiated. Autolabeling jobs can take anywhere from a few seconds to a few minutes, depending on the number of documents you included.
+
+ :::image type="content" source="../media/review-autotag.png" alt-text="A screenshot showing the review screen for an autotag job." lightbox="../media/review-autotag.png":::
++++
+## Review the auto labeled documents
+
+When the autolabeling job is complete, you can see the output documents in the **Data labeling** page of Language Studio. Select **Review documents with autolabels** to view the documents with the **Auto labeled** filter applied.
++
+Entities that have been automatically labeled appear with a dotted line. These entities have two selectors (a checkmark and an "X") that allow you to accept or reject the automatic label.
+
+Once an entity is accepted, the dotted line changes to a solid one, and the label is included in any further model training, becoming a user defined label.
+
+Alternatively, you can accept or reject all automatically labeled entities within the document, using **Accept all** or **Reject all** in the top right corner of the screen.
+
+After you accept or reject the labeled entities, select **Save labels** to apply the changes.
+
+> [!NOTE]
+> * We recommend validating automatically labeled entities before accepting them.
+> * All labels that were not accepted are deleted when you train your model.
++
+## Next steps
+
+* Learn more about [labeling your data](tag-data.md).
ai-services Use Containers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/language-service/custom-named-entity-recognition/how-to/use-containers.md
+
+ Title: Use Docker containers for Custom Named Entity Recognition on-premises
+
+description: Learn how to use Docker containers for Custom Named Entity Recognition on-premises.
++++++ Last updated : 05/08/2023++
+keywords: on-premises, Docker, container, natural language processing
++
+# Install and run Custom Named Entity Recognition containers
++
+Containers enable you to host the Custom Named Entity Recognition API on your own infrastructure using your own trained model. If you have security or data governance requirements that can't be fulfilled by calling Custom Named Entity Recognition remotely, then containers might be a good option.
+
+> [!NOTE]
+> * The free account is limited to 5,000 text records per month and only the **Free** and **Standard** [pricing tiers](https://azure.microsoft.com/pricing/details/cognitive-services/text-analytics) are valid for containers. For more information on transaction request rates, see [Data and service limits](../../concepts/data-limits.md).
++
+## Prerequisites
+
+* If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/cognitive-services/).
+* [Docker](https://docs.docker.com/) installed on a host computer. Docker must be configured to allow the containers to connect with and send billing data to Azure.
+ * On Windows, Docker must also be configured to support Linux containers.
+ * You should have a basic understanding of [Docker concepts](https://docs.docker.com/get-started/overview/).
+* A <a href="https://portal.azure.com/#create/Microsoft.CognitiveServicesTextAnalytics" title="Create a Language resource" target="_blank">Language resource </a> with the free (F0) or standard (S) [pricing tier](https://azure.microsoft.com/pricing/details/cognitive-services/text-analytics/).
+* A [trained and deployed Custom Named Entity Recognition model](../quickstart.md)
++
+## Host computer requirements and recommendations
++
+The following table describes the minimum and recommended specifications for Custom Named Entity Recognition containers. Each CPU core must be at least 2.6 gigahertz (GHz) or faster. The allowable Transactions Per Second (TPS) are also listed.
+
+| | Minimum host specs | Recommended host specs | Minimum TPS | Maximum TPS|
+|||-|--|--|
+| **Custom Named Entity Recognition** | 1 core, 2 GB memory | 1 core, 4 GB memory |15 | 30|
+
+CPU core and memory correspond to the `--cpus` and `--memory` settings, which are used as part of the `docker run` command.
+
+## Export your Custom Named Entity Recognition model
+
+Before you proceed with running the Docker image, you need to export your own trained model to expose it to your container. Use the following command to extract your model, and replace the placeholders below with your own values:
+
+| Placeholder | Value | Format or example |
+|-|-||
+| **{API_KEY}** | The key for your Custom Named Entity Recognition resource. You can find it on your resource's **Key and endpoint** page, on the Azure portal. |`xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx`|
+| **{ENDPOINT_URI}** | The endpoint for accessing the Custom Named Entity Recognition API. You can find it on your resource's **Key and endpoint** page, on the Azure portal. | `https://<your-custom-subdomain>.cognitiveservices.azure.com` |
+| **{PROJECT_NAME}** | The name of the project containing the model that you want to export. You can find it on your projects tab in the Language Studio portal. |`myProject`|
+| **{TRAINED_MODEL_NAME}** | The name of the trained model you want to export. You can find your trained models on your model evaluation tab under your project in the Language Studio portal. |`myTrainedModel`|
+
+```bash
+curl --location --request PUT '{ENDPOINT_URI}/language/authoring/analyze-text/projects/{PROJECT_NAME}/exported-models/{TRAINED_MODEL_NAME}?api-version=2023-04-15-preview' \
+--header 'Ocp-Apim-Subscription-Key: {API_KEY}' \
+--header 'Content-Type: application/json' \
+--data-raw '{
+    "TrainedmodelLabel": "{TRAINED_MODEL_NAME}"
+}'
+```
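+
+As a hedged sketch only (this exact read-back call is an assumption, not documented here), you can verify that the exported model resource exists by issuing a GET against the same path:
+
+```bash
+# Hypothetical check; assumes the exported model can be read back at the same resource path.
+curl --location --request GET '{ENDPOINT_URI}/language/authoring/analyze-text/projects/{PROJECT_NAME}/exported-models/{TRAINED_MODEL_NAME}?api-version=2023-04-15-preview' \
+--header 'Ocp-Apim-Subscription-Key: {API_KEY}'
+```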
+
+## Get the container image with `docker pull`
+
+The Custom Named Entity Recognition container image can be found on the `mcr.microsoft.com` container registry syndicate. It resides within the `azure-cognitive-services/textanalytics/` repository and is named `customner`. The fully qualified container image name is `mcr.microsoft.com/azure-cognitive-services/textanalytics/customner`.
+
+To use the latest version of the container, you can use the `latest` tag. You can also find a full list of [tags on the MCR](https://mcr.microsoft.com/product/azure-cognitive-services/textanalytics/customner/about).
+
+Use the [`docker pull`](https://docs.docker.com/engine/reference/commandline/pull/) command to download a container image from Microsoft Container Registry.
+
+```
+docker pull mcr.microsoft.com/azure-cognitive-services/textanalytics/customner:latest
+```
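+
+You can confirm that the image downloaded successfully by listing your local images:
+
+```bash
+# Lists local copies of the custom NER image; the tag you pulled should appear in the output.
+docker images mcr.microsoft.com/azure-cognitive-services/textanalytics/customner
+```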
++
+## Run the container with `docker run`
+
+Once the container is on the host computer, use the [docker run](https://docs.docker.com/engine/reference/commandline/run/) command to run the containers. The container will continue to run until you stop it.
+
+> [!IMPORTANT]
+> * The docker commands in the following sections use the back slash, `\`, as a line continuation character. Replace or remove this based on your host operating system's requirements.
+> * The `Eula`, `Billing`, and `ApiKey` options must be specified to run the container; otherwise, the container won't start. For more information, see [Billing](#billing).
+
+To run the *Custom Named Entity Recognition* container, execute the following `docker run` command. Replace the placeholders below with your own values:
+
+| Placeholder | Value | Format or example |
+|-|-||
+| **{API_KEY}** | The key for your Custom Named Entity Recognition resource. You can find it on your resource's **Key and endpoint** page, on the Azure portal. |`xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx`|
+| **{ENDPOINT_URI}** | The endpoint for accessing the Custom Named Entity Recognition API. You can find it on your resource's **Key and endpoint** page, on the Azure portal. | `https://<your-custom-subdomain>.cognitiveservices.azure.com` |
+| **{PROJECT_NAME}** | The name of the project containing the model that you want to export. You can find it on your projects tab in the Language Studio portal. |`myProject`|
+| **{LOCAL_PATH}** | The path where the exported model from the previous step will be downloaded. You can choose any path you like. |`C:/custom-ner-model`|
+| **{TRAINED_MODEL_NAME}** | The name of the trained model you want to export. You can find your trained models on your model evaluation tab under your project in the Language Studio portal. |`myTrainedModel`|
++
+```bash
+docker run --rm -it -p5000:5000 --memory 4g --cpus 1 \
+-v {LOCAL_PATH}:/modelPath \
+mcr.microsoft.com/azure-cognitive-services/textanalytics/customner:latest \
+EULA=accept \
+BILLING={ENDPOINT_URI} \
+APIKEY={API_KEY} \
+projectName={PROJECT_NAME} \
+exportedModelName={TRAINED_MODEL_NAME}
+```
+
+This command:
+
+* Runs a *Custom Named Entity Recognition* container and downloads your exported model to the local path specified.
+* Allocates one CPU core and 4 gigabytes (GB) of memory.
+* Exposes TCP port 5000 and allocates a pseudo-TTY for the container.
+* Automatically removes the container after it exits. The container image is still available on the host computer.
++
+## Query the container's prediction endpoint
+
+The container provides REST-based query prediction endpoint APIs.
+
+Use the host, `http://localhost:5000`, for container APIs.
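+
+As a hedged sketch only, a prediction request to the container follows the same general shape as the cloud custom NER runtime API. The path, `api-version`, and parameter names below are assumptions, so confirm the exact contract from the API documentation the container exposes at `http://localhost:5000`.
+
+```bash
+# Liveness and readiness checks exposed by Azure AI containers.
+curl http://localhost:5000/ready
+curl http://localhost:5000/status
+
+# Hypothetical prediction request; path, api-version, and parameter names are assumptions.
+curl -X POST "http://localhost:5000/language/analyze-text/jobs?api-version=2022-05-01" \
+  -H "Content-Type: application/json" \
+  -d '{
+    "analysisInput": {
+      "documents": [
+        { "id": "1", "language": "en", "text": "The loan agreement was signed on May 5, 2022." }
+      ]
+    },
+    "tasks": [
+      {
+        "kind": "CustomEntityRecognition",
+        "parameters": { "projectName": "{PROJECT_NAME}", "deploymentName": "{TRAINED_MODEL_NAME}" }
+      }
+    ]
+  }'
+```
+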
+++
+## Stop the container
++
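+
+The included instructions cover stopping the container. As a quick reference, because the `docker run` example above runs interactively (`-it`), you can typically stop it with **Ctrl+C**, or from another shell with standard Docker commands:
+
+```bash
+docker ps                    # find the running container's ID
+docker stop <container-id>   # stop it gracefully
+```
+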
+## Troubleshooting
+
+If you run the container with an output [mount](../../concepts/configure-containers.md#mount-settings) and logging enabled, the container generates log files that are helpful to troubleshoot issues that happen while starting or running the container.
++
+## Billing
+
+The Custom Named Entity Recognition containers send billing information to Azure, using a _Custom Named Entity Recognition_ resource on your Azure account.
++
+## Summary
+
+In this article, you learned concepts and workflow for downloading, installing, and running Custom Named Entity Recognition containers. In summary:
+
+* Custom Named Entity Recognition provides Linux containers for Docker.
+* Container images are downloaded from the Microsoft Container Registry (MCR).
+* Container images run in Docker.
+* You can use either the REST API or SDK to call operations in Custom Named Entity Recognition containers by specifying the host URI of the container.
+* You must specify billing information when instantiating a container.
+
+> [!IMPORTANT]
+> Azure AI containers are not licensed to run without being connected to Azure for metering. Customers need to enable the containers to communicate billing information with the metering service at all times. Azure AI containers do not send customer data (e.g. text that is being analyzed) to Microsoft.
+
+## Next steps
+
+* See [Configure containers](../../concepts/configure-containers.md) for configuration settings.
ai-services View Model Evaluation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/language-service/custom-named-entity-recognition/how-to/view-model-evaluation.md
+
+ Title: Evaluate a Custom Named Entity Recognition (NER) model
+
+description: Learn how to evaluate and score your Custom Named Entity Recognition (NER) model
++++++ Last updated : 02/28/2023+++++
+# View the custom NER model's evaluation and details
+
+After your model has finished training, you can view the model performance and see the extracted entities for the documents in the test set.
+
+> [!NOTE]
+> Using the **Automatically split the testing set from training data** option may result in different model evaluation results every time you [train a new model](train-model.md), because the test set is selected randomly from the data. To make sure that the evaluation is calculated on the same test set every time you train a model, make sure to use the **Use a manual split of training and testing data** option when starting a training job and define your **Test** documents when [labeling data](tag-data.md).
+
+## Prerequisites
+
+Before viewing model evaluation, you need:
+
+* A successfully [created project](create-project.md) with a configured Azure blob storage account.
+* Text data that [has been uploaded](design-schema.md#data-preparation) to your storage account.
+* [Labeled data](tag-data.md)
+* A [successfully trained model](train-model.md)
+
+See the [project development lifecycle](../overview.md#project-development-lifecycle) for more information.
+
+## Model details
+
+### [Language studio](#tab/language-studio)
++
+### [REST APIs](#tab/rest-api)
++++
+## Load or export model data
+
+### [Language studio](#tab/Language-studio)
+++
+### [REST APIs](#tab/REST-APIs)
++++
+## Delete model
+
+### [Language studio](#tab/language-studio)
++
+### [REST APIs](#tab/rest-api)
++++
+## Next steps
+
+* [Deploy your model](deploy-model.md)
+* Learn about the [metrics used in evaluation](../concepts/evaluation-metrics.md).
ai-services Language Support https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/language-service/custom-named-entity-recognition/language-support.md
+
+ Title: Language and region support for custom named entity recognition
+
+description: Learn about the languages and regions supported by custom named entity recognition.
++++++ Last updated : 05/06/2022++++
+# Language support for custom named entity recognition
+
+Use this article to learn about the languages currently supported by the custom named entity recognition feature.
+
+## Multi-lingual option
+
+With custom NER, you can train a model in one language and use it to extract entities from documents in another language. This feature is powerful because it helps you save time and effort. Instead of building separate projects for every language, you can handle a multi-lingual dataset in one project. Your dataset doesn't have to be entirely in the same language, but you do need to enable the multi-lingual option for your project, either when creating it or later in the project settings. If you notice your model performing poorly in certain languages during the evaluation process, consider adding more data in these languages to your training set.
++
+You can train your project entirely with English documents, and query it in French, German, Mandarin, Japanese, Korean, and other languages. Custom named entity recognition makes it easy for you to scale your projects to multiple languages by using multilingual technology to train your models.
+
+Whenever you identify that a particular language is not performing as well as other languages, you can add more documents for that language in your project. In the [data labeling](how-to/tag-data.md) page in Language Studio, you can select the language of the document you're adding. When you introduce more documents for that language to the model, it is introduced to more of the syntax of that language, and learns to predict it better.
+
+You aren't expected to add the same number of documents for every language. You should build the majority of your project in one language, and only add a few documents in languages you observe aren't performing well. If you create a project that is primarily in English and start testing it in French, German, and Spanish, you might observe that German doesn't perform as well as the other two languages. In that case, consider adding 5% of your original English documents in German, training a new model, and testing in German again. You should see better results for German queries. The more labeled documents you add, the more likely the results are to improve.
+
+When you add data in another language, you shouldn't expect it to negatively affect other languages.
+
+## Language support
+
+Custom NER supports `.txt` files in the following languages:
+
+| Language | Language code |
+| | |
+| Afrikaans | `af` |
+| Amharic | `am` |
+| Arabic | `ar` |
+| Assamese | `as` |
+| Azerbaijani | `az` |
+| Belarusian | `be` |
+| Bulgarian | `bg` |
+| Bengali | `bn` |
+| Breton | `br` |
+| Bosnian | `bs` |
+| Catalan | `ca` |
+| Czech | `cs` |
+| Welsh | `cy` |
+| Danish | `da` |
+| German | `de` |
+| Greek | `el` |
+| English (US) | `en-us` |
+| Esperanto | `eo` |
+| Spanish | `es` |
+| Estonian | `et` |
+| Basque | `eu` |
+| Persian (Farsi) | `fa` |
+| Finnish | `fi` |
+| French | `fr` |
+| Western Frisian | `fy` |
+| Irish | `ga` |
+| Scottish Gaelic | `gd` |
+| Galician | `gl` |
+| Gujarati | `gu` |
+| Hausa | `ha` |
+| Hebrew | `he` |
+| Hindi | `hi` |
+| Croatian | `hr` |
+| Hungarian | `hu` |
+| Armenian | `hy` |
+| Indonesian | `id` |
+| Italian | `it` |
+| Japanese | `ja` |
+| Javanese | `jv` |
+| Georgian | `ka` |
+| Kazakh | `kk` |
+| Khmer | `km` |
+| Kannada | `kn` |
+| Korean | `ko` |
+| Kurdish (Kurmanji) | `ku` |
+| Kyrgyz | `ky` |
+| Latin | `la` |
+| Lao | `lo` |
+| Lithuanian | `lt` |
+| Latvian | `lv` |
+| Malagasy | `mg` |
+| Macedonian | `mk` |
+| Malayalam | `ml` |
+| Mongolian | `mn` |
+| Marathi | `mr` |
+| Malay | `ms` |
+| Burmese | `my` |
+| Nepali | `ne` |
+| Dutch | `nl` |
+| Norwegian (Bokmal) | `nb` |
+| Oriya | `or` |
+| Punjabi | `pa` |
+| Polish | `pl` |
+| Pashto | `ps` |
+| Portuguese (Brazil) | `pt-br` |
+| Portuguese (Portugal) | `pt-pt` |
+| Romanian | `ro` |
+| Russian | `ru` |
+| Sanskrit | `sa` |
+| Sindhi | `sd` |
+| Sinhala | `si` |
+| Slovak | `sk` |
+| Slovenian | `sl` |
+| Somali | `so` |
+| Albanian | `sq` |
+| Serbian | `sr` |
+| Sundanese | `su` |
+| Swedish | `sv` |
+| Swahili | `sw` |
+| Tamil | `ta` |
+| Telugu | `te` |
+| Thai | `th` |
+| Filipino | `tl` |
+| Turkish | `tr` |
+| Uyghur | `ug` |
+| Ukrainian | `uk` |
+| Urdu | `ur` |
+| Uzbek | `uz` |
+| Vietnamese | `vi` |
+| Xhosa | `xh` |
+| Yiddish | `yi` |
+| Chinese (Simplified) | `zh-hans` |
+| Zulu | `zu` |
+
+## Next steps
+
+* [Custom NER overview](overview.md)
+* [Service limits](service-limits.md)
ai-services Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/language-service/custom-named-entity-recognition/overview.md
+
+ Title: Custom named entity recognition - Azure AI services
+
+description: Customize an AI model to label and extract information from documents using Azure AI services.
++++++ Last updated : 02/22/2023++++
+# What is custom named entity recognition?
+
+Custom NER is one of the custom features offered by [Azure AI Language](../overview.md). It is a cloud-based API service that applies machine-learning intelligence to enable you to build custom models for custom named entity recognition tasks.
+
+Custom NER enables users to build custom AI models to extract domain-specific entities from unstructured text, such as contracts or financial documents. By creating a Custom NER project, developers can iteratively label data, train, evaluate, and improve model performance before making it available for consumption. The quality of the labeled data greatly impacts model performance. To simplify building and customizing your model, the service offers a custom web portal that can be accessed through the [Language studio](https://aka.ms/languageStudio). You can easily get started with the service by following the steps in this [quickstart](quickstart.md).
+
+This documentation contains the following article types:
+
+* [Quickstarts](quickstart.md) are getting-started instructions to guide you through making requests to the service.
+* [Concepts](concepts/evaluation-metrics.md) provide explanations of the service functionality and features.
+* [How-to guides](how-to/tag-data.md) contain instructions for using the service in more specific or customized ways.
+
+## Example usage scenarios
+
+Custom named entity recognition can be used in multiple scenarios across a variety of industries:
+
+### Information extraction
+
+Many financial and legal organizations extract and normalize data from thousands of complex, unstructured text sources on a daily basis. Such sources include bank statements, legal agreements, or bank forms. For example, manually extracting mortgage application data may take human reviewers several days. Automating these steps by building a custom NER model simplifies the process and saves cost, time, and effort.
+
+### Knowledge mining to enhance/enrich semantic search
+
+Search is foundational to any app that surfaces text content to users. Common scenarios include catalog or document search, retail product search, or knowledge mining for data science. Many enterprises across various industries want to build a rich search experience over private, heterogeneous content, which includes both structured and unstructured documents. As a part of their pipeline, developers can use custom NER for extracting entities from the text that are relevant to their industry. These entities can be used to enrich the indexing of the file for a more customized search experience.
+
+### Audit and compliance
+
+Instead of manually reviewing significantly long text files to audit and apply policies, IT departments in financial or legal enterprises can use custom NER to build automated solutions. These solutions can be helpful to enforce compliance policies, and set up necessary business rules based on knowledge mining pipelines that process structured and unstructured content.
+
+## Project development lifecycle
+
+Using custom NER typically involves several different steps.
++
+1. **Define your schema**: Know your data and identify the [entities](glossary.md#entity) you want extracted. Avoid ambiguity.
+
+2. **Label your data**: Labeling data is a key factor in determining model performance. Label precisely, consistently and completely.
+ 1. **Label precisely**: Always label each entity with its correct type. Only include what you want extracted, and avoid unnecessary data in your labels.
+ 2. **Label consistently**: The same entity should have the same label across all the files.
+ 3. **Label completely**: Label all the instances of the entity in all your files.
+
+3. **Train the model**: Your model starts learning from your labeled data.
+
+4. **View the model's performance**: After training is completed, view the model's evaluation details, its performance and guidance on how to improve it.
+
+5. **Deploy the model**: Deploying a model makes it available for use via the [Analyze API](https://aka.ms/ct-runtime-swagger).
+
+6. **Extract entities**: Use your custom models for entity extraction tasks.
+
+## Reference documentation and code samples
+
+As you use custom NER, see the following reference documentation and samples for Azure AI Language:
+
+|Development option / language |Reference documentation |Samples |
+||||
+|REST APIs (Authoring) | [REST API documentation](https://aka.ms/ct-authoring-swagger) | |
+|REST APIs (Runtime) | [REST API documentation](https://aka.ms/ct-runtime-swagger) | |
+|C# (Runtime) | [C# documentation](/dotnet/api/azure.ai.textanalytics?view=azure-dotnet-preview&preserve-view=true) | [C# samples](https://github.com/Azure/azure-sdk-for-net/blob/main/sdk/textanalytics/Azure.AI.TextAnalytics/samples/Sample8_RecognizeCustomEntities.md) |
+| Java (Runtime) | [Java documentation](/java/api/overview/azure/ai-textanalytics-readme?view=azure-java-preview&preserve-view=true) | [Java Samples](https://github.com/Azure/azure-sdk-for-java/blob/main/sdk/textanalytics/azure-ai-textanalytics/src/samples/java/com/azure/ai/textanalytics/lro/RecognizeCustomEntities.java) |
+|JavaScript (Runtime) | [JavaScript documentation](/javascript/api/overview/azure/ai-text-analytics-readme?view=azure-node-preview&preserve-view=true) | [JavaScript samples](https://github.com/Azure/azure-sdk-for-js/blob/%40azure/ai-text-analytics_6.0.0-beta.1/sdk/textanalytics/ai-text-analytics/samples/v5/javascript/customText.js) |
+|Python (Runtime) | [Python documentation](/python/api/azure-ai-textanalytics/azure.ai.textanalytics?view=azure-python-preview&preserve-view=true) | [Python samples](https://github.com/Azure/azure-sdk-for-python/blob/main/sdk/textanalytics/azure-ai-textanalytics/samples/sample_recognize_custom_entities.py) |
+
+## Responsible AI
+
+An AI system includes not only the technology, but also the people who will use it, the people who will be affected by it, and the environment in which it is deployed. Read the [transparency note for custom NER](/legal/cognitive-services/language-service/cner-transparency-note?context=/azure/ai-services/language-service/context/context) to learn about responsible AI use and deployment in your systems. You can also see the following articles for more information:
++
+## Next steps
+
+* Use the [quickstart article](quickstart.md) to start using custom named entity recognition.
+
+* As you go through the project development lifecycle, review the [glossary](glossary.md) to learn more about the terms used throughout the documentation for this feature.
+
+* Remember to view the [service limits](service-limits.md) for information such as [regional availability](service-limits.md#regional-availability).
ai-services Quickstart https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/language-service/custom-named-entity-recognition/quickstart.md
+
+ Title: Quickstart - Custom named entity recognition (NER)
+
+description: Quickly start building an AI model to categorize and extract information from unstructured text.
++++++ Last updated : 01/25/2023++
+zone_pivot_groups: usage-custom-language-features
++
+# Quickstart: Custom named entity recognition
+
+Use this article to get started with creating a custom NER project where you can train custom models for custom entity recognition. A model is artificial intelligence software that's trained to do a certain task. For this system, the models extract named entities and are trained by learning from tagged data.
+
+In this article, we use Language Studio to demonstrate key concepts of custom Named Entity Recognition (NER). As an example, we'll build a custom NER model to extract relevant entities from loan agreements, such as the:
+* Date of the agreement
+* Borrower's name, address, city and state
+* Lender's name, address, city and state
+* Loan and interest amounts
+++++++
+## Next steps
+
+After you've created an entity extraction model, you can:
+
+* [Use the runtime API to extract entities](how-to/call-api.md)
+
+When you start to create your own custom NER projects, use the how-to articles to learn more about tagging, training and consuming your model in greater detail:
+
+* [Data selection and schema design](how-to/design-schema.md)
+* [Tag data](how-to/tag-data.md)
+* [Train a model](how-to/train-model.md)
+* [Model evaluation](how-to/view-model-evaluation.md)
+
ai-services Service Limits https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/language-service/custom-named-entity-recognition/service-limits.md
+
+ Title: Custom Named Entity Recognition (NER) service limits
+
+description: Learn about the data and service limits when using Custom Named Entity Recognition (NER).
++++++ Last updated : 05/06/2022++++
+# Custom named entity recognition service limits
+
+Use this article to learn about the data and service limits when using custom NER.
+
+## Language resource limits
+
+* Your Language resource has to be created in one of the [supported regions](#regional-availability).
+
+* Your resource must be one of the supported pricing tiers:
+
+ |Tier|Description|Limit|
+ |--|--|--|
+ |F0 |Free tier|You are only allowed one F0 tier Language resource per subscription.|
+ |S |Paid tier|You can have unlimited Language S tier resources per subscription. |
+
+
+* You can only connect one storage account per resource, and this process is irreversible. If you connect a storage account to your resource, you can't unlink it later. Learn more about [connecting a storage account](how-to/create-project.md#create-language-resource-and-connect-storage-account).
+
+* You can have up to 500 projects per resource.
+
+* Project names have to be unique within the same resource across all custom features.
+
+## Regional availability
+
+Custom named entity recognition is only available in some Azure regions. Some regions are available for **both authoring and prediction**, while other regions are **prediction only**. Language resources in authoring regions allow you to create, edit, train, and deploy your projects. Language resources in prediction regions allow you to get [predictions from a deployment](../concepts/custom-features/multi-region-deployment.md).
+
+| Region | Authoring | Prediction |
+|--|--|-|
+| Australia East | ✓ | ✓ |
+| Brazil South | | ✓ |
+| Canada Central | | ✓ |
+| Central India | ✓ | ✓ |
+| Central US | | ✓ |
+| East Asia | | ✓ |
+| East US | ✓ | ✓ |
+| East US 2 | ✓ | ✓ |
+| France Central | | ✓ |
+| Japan East | | ✓ |
+| Japan West | | ✓ |
+| Jio India West | | ✓ |
+| Korea Central | | ✓ |
+| North Central US | | ✓ |
+| North Europe | ✓ | ✓ |
+| Norway East | | ✓ |
+| Qatar Central | | ✓ |
+| South Africa North | | ✓ |
+| South Central US | ✓ | ✓ |
+| Southeast Asia | | ✓ |
+| Sweden Central | | ✓ |
+| Switzerland North | ✓ | ✓ |
+| UAE North | | ✓ |
+| UK South | ✓ | ✓ |
+| West Central US | | ✓ |
+| West Europe | ✓ | ✓ |
+| West US | | ✓ |
+| West US 2 | ✓ | ✓ |
+| West US 3 | ✓ | ✓ |
++
+## API limits
+
+|Item|Request type| Maximum limit|
+|:-|:-|:-|
+|Authoring API|POST|10 per minute|
+|Authoring API|GET|100 per minute|
+|Prediction API|GET/POST|1,000 per minute|
+|Document size|--|125,000 characters. You can send up to 25 documents as long as they collectively do not exceed 125,000 characters|
+
+> [!TIP]
+> If you need to send larger files than the limit allows, you can break the text into smaller chunks of text before sending them to the API. You can use the [chunk command from CLUtils](https://github.com/microsoft/CognitiveServicesLanguageUtilities/blob/main/CustomTextAnalytics.CLUtils/Solution/CogSLanguageUtilities.ViewLayer.CliCommands/Commands/ChunkCommand/README.md) for this process.
+
+## Quota limits
+
+|Pricing tier |Item |Limit |
+| | | |
+|F|Training time| 1 hour per month |
+|S|Training time| Unlimited, Pay as you go |
+|F|Prediction Calls| 5,000 text records per month |
+|S|Prediction Calls| Unlimited, Pay as you go |
+
+## Document limits
+
+* You can only use `.txt` files. If your data is in another format, you can use the [CLUtils parse command](https://github.com/microsoft/CognitiveServicesLanguageUtilities/blob/main/CustomTextAnalytics.CLUtils/Solution/CogSLanguageUtilities.ViewLayer.CliCommands/Commands/ParseCommand/README.md) to open your document and extract the text.
+
+* All files uploaded in your container must contain data. Empty files are not allowed for training.
+
+* All files should be available at the root of your container.
+
+## Data limits
+
+The following limits are observed for custom named entity recognition.
+
+|Item|Lower Limit| Upper Limit |
+| | | |
+|Documents count | 10 | 100,000 |
+|Document length in characters | 1 | 128,000 characters; approximately 28,000 words or 56 pages. |
+|Count of entity types | 1 | 200 |
+|Entity length in characters | 1 | 500 |
+|Count of trained models per project| 0 | 10 |
+|Count of deployments per project| 0 | 10 |
+
+## Naming limits
+
+| Item | Limits |
+|--|--|
+| Project name | You can only use letters `(a-z, A-Z)`, numbers `(0-9)`, and symbols `_ . -`, with no spaces. Maximum allowed length is 50 characters. |
+| Model name | You can only use letters `(a-z, A-Z)`, numbers `(0-9)` and symbols `_ . -`. Maximum allowed length is 50 characters. |
+| Deployment name | You can only use letters `(a-z, A-Z)`, numbers `(0-9)` and symbols `_ . -`. Maximum allowed length is 50 characters. |
+| Entity name| You can only use letters `(a-z, A-Z)`, numbers `(0-9)` and all symbols except ":", `$ & % * ( ) + ~ # / ?`. Maximum allowed length is 50 characters.|
+| Document name | You can only use letters `(a-z, A-Z)`, and numbers `(0-9)` with no spaces. |
++
+## Next steps
+
+* [Custom NER overview](overview.md)
ai-services Data Formats https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/language-service/custom-text-analytics-for-health/concepts/data-formats.md
+
+ Title: Custom Text Analytics for health data formats
+
+description: Learn about the data formats accepted by custom text analytics for health.
++++++ Last updated : 04/14/2023++++
+# Accepted data formats in custom text analytics for health
+
+Use this article to learn about formatting your data to be imported into custom text analytics for health.
+
+If you are trying to [import your data](../how-to/create-project.md#import-project) into custom Text Analytics for health, it has to follow a specific format. If you don't have data to import, you can [create your project](../how-to/create-project.md) and use the Language Studio to [label your documents](../how-to/label-data.md).
+
+Your labels file should be in the `json` format below so it can be used when importing your labels into a project.
+
+```json
+{
+ "projectFileVersion": "{API-VERSION}",
+ "stringIndexType": "Utf16CodeUnit",
+ "metadata": {
+ "projectName": "{PROJECT-NAME}",
+ "projectKind": "CustomHealthcare",
+ "description": "Trying out custom Text Analytics for health",
+ "language": "{LANGUAGE-CODE}",
+ "multilingual": true,
+ "storageInputContainerName": "{CONTAINER-NAME}",
+ "settings": {}
+ },
+ "assets": {
+ "projectKind": "CustomHealthcare",
+ "entities": [
+ {
+ "category": "Entity1",
+ "compositionSetting": "{COMPOSITION-SETTING}",
+ "list": {
+ "sublists": [
+ {
+ "listKey": "One",
+ "synonyms": [
+ {
+ "language": "en",
+ "values": [
+ "EntityNumberOne",
+ "FirstEntity"
+ ]
+ }
+ ]
+ }
+ ]
+ }
+ },
+ {
+ "category": "Entity2"
+ },
+ {
+ "category": "MedicationName",
+ "list": {
+ "sublists": [
+ {
+ "listKey": "research drugs",
+ "synonyms": [
+ {
+ "language": "en",
+ "values": [
+ "rdrug a",
+ "rdrug b"
+ ]
+ }
+ ]
+
+ }
+ ]
+ },
+ "prebuilts": "MedicationName"
+ }
+ ],
+ "documents": [
+ {
+ "location": "{DOCUMENT-NAME}",
+ "language": "{LANGUAGE-CODE}",
+ "dataset": "{DATASET}",
+ "entities": [
+ {
+ "regionOffset": 0,
+ "regionLength": 500,
+ "labels": [
+ {
+ "category": "Entity1",
+ "offset": 25,
+ "length": 10
+ },
+ {
+ "category": "Entity2",
+ "offset": 120,
+ "length": 8
+ }
+ ]
+ }
+ ]
+ },
+ {
+ "location": "{DOCUMENT-NAME}",
+ "language": "{LANGUAGE-CODE}",
+ "dataset": "{DATASET}",
+ "entities": [
+ {
+ "regionOffset": 0,
+ "regionLength": 100,
+ "labels": [
+ {
+ "category": "Entity2",
+ "offset": 20,
+ "length": 5
+ }
+ ]
+ }
+ ]
+ }
+ ]
+ }
+}
+
+```
+
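+Before importing, you can optionally run a quick local check that the file parses as valid JSON, for example:
+
+```bash
+# Fails with a parse error if the labels file isn't valid JSON (the file name is only an example).
+python -m json.tool labels.json > /dev/null && echo "labels.json is valid JSON"
+```
+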
+|Key |Placeholder |Value | Example |
+|||-|--|
+| `multilingual` | `true`| A boolean value that enables you to have documents in multiple languages in your dataset. When your model is deployed, you can query the model in any supported language (not necessarily included in your training documents). See [language support](../language-support.md) to learn more about multilingual support. | `true`|
+|`projectName`|`{PROJECT-NAME}`|Project name|`myproject`|
+| `storageInputContainerName` |`{CONTAINER-NAME}`|Container name|`mycontainer`|
+| `entities` | | Array containing all the entity types you have in the project. These are the entity types that are extracted from your documents. | |
+| `category` | | The name of the entity type, which can be user defined for new entity definitions, or predefined for prebuilt entities. For more information, see the entity naming rules below.| |
+|`compositionSetting`|`{COMPOSITION-SETTING}`|Rule that defines how to manage multiple components in your entity. Options are `combineComponents` or `separateComponents`. |`combineComponents`|
+| `list` | | Array containing all the sublists you have in the project for a specific entity. Lists can be added to prebuilt entities or new entities with learned components.| |
+|`sublists`|`[]`|Array containing sublists. Each sublist is a key and its associated values.|`[]`|
+| `listKey`| `One` | A normalized value for the list of synonyms to map back to in prediction. | `One` |
+|`synonyms`|`[]`|Array containing all the synonyms|synonym|
+| `language` | `{LANGUAGE-CODE}` | A string specifying the language code for the synonym in your sublist. If your project is a multilingual project and you want to support your list of synonyms for all the languages in your project, you have to explicitly add your synonyms to each language. See [Language support](../language-support.md) for more information about supported language codes. |`en`|
+| `values`| `"EntityNumberOne"`, `"FirstEntity"` | A list of comma-separated strings that will be matched exactly for extraction and map to the list key. | `"EntityNumberOne"`, `"FirstEntity"` |
+| `prebuilts` | `MedicationName` | The name of the prebuilt component populating the prebuilt entity. [Prebuilt entities](../../text-analytics-for-health/concepts/health-entity-categories.md) are automatically loaded into your project by default but you can extend them with list components in your labels file. | `MedicationName` |
+| `documents` | | Array containing all the documents in your project and list of the entities labeled within each document. | [] |
+| `location` | `{DOCUMENT-NAME}` | The location of the documents in the storage container. Since all the documents are in the root of the container this should be the document name.|`doc1.txt`|
+| `dataset` | `{DATASET}` | The dataset to which this document is assigned when the data is split before training. Learn more about data splitting [here](../how-to/train-model.md#data-splitting). Possible values for this field are `Train` and `Test`. |`Train`|
+| `regionOffset` | | The inclusive character position of the start of the text. |`0`|
+| `regionLength` | | The length of the bounding box in terms of UTF16 characters. Training only considers the data in this region. |`500`|
+| `category` | | The type of entity associated with the span of text specified. | `Entity1`|
+| `offset` | | The start position for the entity text. | `25`|
+| `length` | | The length of the entity in terms of UTF16 characters. | `20`|
+| `language` | `{LANGUAGE-CODE}` | A string specifying the language code for the document used in your project. If your project is a multilingual project, choose the language code of the majority of the documents. See [Language support](../language-support.md) for more information about supported language codes. |`en`|
+
+## Entity naming rules
+
+1. [Prebuilt entity names](../../text-analytics-for-health/concepts/health-entity-categories.md) are predefined. They must be populated with a prebuilt component and it must match the entity name.
+2. New user defined entities (entities with learned components or labeled text) can't use prebuilt entity names.
+3. New user defined entities can't be populated with prebuilt components, as prebuilt components must match their associated entity names and have no labeled data assigned to them in the documents array.
+++
+## Next steps
+* You can import your labeled data into your project directly. Learn how to [import project](../how-to/create-project.md#import-project)
+* See the [how-to article](../how-to/label-data.md) more information about labeling your data.
+* When you're done labeling your data, you can [train your model](../how-to/train-model.md).
ai-services Entity Components https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/language-service/custom-text-analytics-for-health/concepts/entity-components.md
+
+ Title: Entity components in custom Text Analytics for health
+
+description: Learn how custom Text Analytics for health extracts entities from text
++++++ Last updated : 04/14/2023++++
+# Entity components in custom text analytics for health
+
+In custom Text Analytics for health, entities are relevant pieces of information that are extracted from your unstructured input text. An entity can be extracted by different methods. They can be learned through context, matched from a list, or detected by a prebuilt recognized entity. Every entity in your project is composed of one or more of these methods, which are defined as your entity's components. When an entity is defined by more than one component, their predictions can overlap. You can determine the behavior of an entity prediction when its components overlap by using a fixed set of options in the **Entity options**.
+
+## Component types
+
+An entity component determines a way you can extract the entity. An entity can contain one component, which would determine the only method that would be used to extract the entity, or multiple components to expand the ways in which the entity is defined and extracted.
+
+The [Text Analytics for health entities](../../text-analytics-for-health/concepts/health-entity-categories.md) are automatically loaded into your project as entities with prebuilt components. You can define list components for entities with prebuilt components but you can't add learned components. Similarly, you can create new entities with learned and list components, but you can't populate them with additional prebuilt components.
+
+### Learned component
+
+The learned component uses the entity tags you label your text with to train a machine learned model. The model learns to predict where the entity is, based on the context within the text. Your labels provide examples of where the entity is expected to be present in text, based on the meaning of the words around it and the words that were labeled. This component is only defined if you add labels to your data for the entity. If you don't label any data, the entity won't have a learned component.
+
+The Text Analytics for health entities, which by default have prebuilt components, can't be extended with learned components, meaning they don't require or accept further labeling to function.
++
+### List component
+
+The list component represents a fixed, closed set of related words along with their synonyms. The component performs an exact text match against the list of values you provide as synonyms. Each synonym belongs to a "list key", which can be used as the normalized, standard value for the synonym and is returned in the output if the list component is matched. List keys are **not** used for matching.
+
+In multilingual projects, you can specify a different set of synonyms for each language. While using the prediction API, you can specify the language in the input request, which will only match the synonyms associated to that language.
+++
+### Prebuilt component
+
+The [Text Analytics for health entities](../../text-analytics-for-health/concepts/health-entity-categories.md) are automatically loaded into your project as entities with prebuilt components. You can define list components for entities with prebuilt components but you cannot add learned components. Similarly, you can create new entities with learned and list components, but you cannot populate them with additional prebuilt components. Entities with prebuilt components are pretrained and can extract information relating to their categories without any labels.
+++
+## Entity options
+
+When multiple components are defined for an entity, their predictions may overlap. When an overlap occurs, each entity's final prediction is determined by one of the following options.
+
+### Combine components
+
+Combine components as one entity when they overlap by taking the union of all the components.
+
+Use this to combine all components when they overlap. When components are combined, you get all the extra information that's tied to a list or prebuilt component when they are present.
+
+#### Example
+
+Suppose you have an entity called Software that has a list component, which contains "Proseware OS" as an entry. In your input data, you have "I want to buy Proseware OS 9" with "Proseware OS 9" tagged as Software:
++
+By using combine components, the entity will return with the full context as "Proseware OS 9" along with the key from the list component:
++
+Suppose you had the same utterance but only "OS 9" was predicted by the learned component:
++
+With combine components, the entity will still return as "Proseware OS 9" with the key from the list component:
+++
+### Don't combine components
+
+Each overlapping component will return as a separate instance of the entity. Apply your own logic after prediction with this option.
+
+#### Example
+
+Suppose you have an entity called Software that has a list component, which contains "Proseware Desktop" as an entry. In your labeled data, you have "I want to buy Proseware Desktop Pro" with "Proseware Desktop Pro" labeled as Software:
++
+When you do not combine components, the entity will return twice:
+++
+## How to use components and options
+
+Components give you the flexibility to define your entity in more than one way. When you combine components, you make sure that each component is represented and you reduce the number of entities returned in your predictions.
+
+A common practice is to extend a prebuilt component with a list of values that the prebuilt might not support. For example, if you have a **Medication Name** entity, which has a `Medication.Name` prebuilt component added to it, the entity may not predict all the medication names specific to your domain. You can use a list component to extend the values of the Medication Name entity, thereby extending the prebuilt component with your own medication names.
+
+Other times you may be interested in extracting an entity through context, such as a **medical device**. You would label data for the learned component of the medical device entity so the model learns _where_ a medical device appears, based on its position within the sentence. You may also have a list of medical devices that you already know beforehand and that you'd like to always extract. Combining both components in one entity allows you to get both options for the entity.
+
+When you don't combine components, you allow every component to act as an independent entity extractor. One way of using this option is to separate the entities extracted from a list from the ones extracted through the learned or prebuilt components, so you can handle and treat them differently.
++
+## Next steps
+
+* [Entities with prebuilt components](../../text-analytics-for-health/concepts/health-entity-categories.md)
ai-services Evaluation Metrics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/language-service/custom-text-analytics-for-health/concepts/evaluation-metrics.md
+
+ Title: Custom text analytics for health evaluation metrics
+
+description: Learn about evaluation metrics in custom Text Analytics for health
++++++ Last updated : 04/14/2023++++
+# Evaluation metrics for custom Text Analytics for health models
+
+Your [dataset is split](../how-to/train-model.md#data-splitting) into two parts: a set for training and a set for testing. The training set is used to train the model, while the testing set is used as a test for the model after training to calculate the model performance and evaluation. The testing set isn't introduced to the model through the training process, to make sure that the model is tested on new data.
+
+Model evaluation is triggered automatically after training is completed successfully. The evaluation process starts by using the trained model to predict user defined entities for documents in the test set, and compares them with the provided data labels (which establishes a baseline of truth). The results are returned so you can review the model's performance. User defined entities are **included** in the evaluation, factoring in learned and list components; Text Analytics for health prebuilt entities are **not** factored into the model evaluation. For evaluation, custom Text Analytics for health uses the following metrics:
+
+* **Precision**: Measures how precise/accurate your model is. It is the ratio between the correctly identified positives (true positives) and all identified positives. The precision metric reveals how many of the predicted entities are correctly labeled.
+
+ `Precision = #True_Positive / (#True_Positive + #False_Positive)`
+
+* **Recall**: Measures the model's ability to predict actual positive classes. It is the ratio between the predicted true positives and what was actually labeled. The recall metric reveals how many of the labeled entities are correctly predicted.
+
+ `Recall = #True_Positive / (#True_Positive + #False_Negatives)`
+
+* **F1 score**: The F1 score is a function of Precision and Recall. It's needed when you seek a balance between Precision and Recall.
+
+ `F1 Score = 2 * Precision * Recall / (Precision + Recall)` <br>
+
+>[!NOTE]
+> Precision, recall and F1 score are calculated for each entity separately (*entity-level* evaluation) and for the model collectively (*model-level* evaluation).
+
+## Model-level and entity-level evaluation metrics
+
+Precision, recall, and F1 score are calculated for each entity separately (entity-level evaluation) and for the model collectively (model-level evaluation).
+
+The definitions of precision, recall, and F1 score are the same for both entity-level and model-level evaluations. However, the counts for *true positives*, *false positives*, and *false negatives* can differ. For example, consider the following text.
+
+### Example
+
+*The first party of this contract is John Smith, resident of 5678 Main Rd., City of Frederick, state of Nebraska. And the second party is Forrest Ray, resident of 123-345 Integer Rd., City of Corona, state of New Mexico. There is also Fannie Thomas resident of 7890 River Road, city of Colorado Springs, State of Colorado.*
+
+The model extracting entities from this text could have the following predictions:
+
+| Entity | Predicted as | Actual type |
+|--|--|--|
+| John Smith | Person | Person |
+| Frederick | Person | City |
+| Forrest | City | Person |
+| Fannie Thomas | Person | Person |
+| Colorado Springs | City | City |
+
+### Entity-level evaluation for the *person* entity
+
+The model would have the following entity-level evaluation, for the *person* entity:
+
+| Key | Count | Explanation |
+|--|--|--|
+| True Positive | 2 | *John Smith* and *Fannie Thomas* were correctly predicted as *person*. |
+| False Positive | 1 | *Frederick* was incorrectly predicted as *person* while it should have been *city*. |
+| False Negative | 1 | *Forrest* was incorrectly predicted as *city* while it should have been *person*. |
+
+* **Precision**: `#True_Positive / (#True_Positive + #False_Positive)` = `2 / (2 + 1) = 0.67`
+* **Recall**: `#True_Positive / (#True_Positive + #False_Negatives)` = `2 / (2 + 1) = 0.67`
+* **F1 Score**: `2 * Precision * Recall / (Precision + Recall)` = `(2 * 0.67 * 0.67) / (0.67 + 0.67) = 0.67`
+
+### Entity-level evaluation for the *city* entity
+
+The model would have the following entity-level evaluation, for the *city* entity:
+
+| Key | Count | Explanation |
+|--|--|--|
+| True Positive | 1 | *Colorado Springs* was correctly predicted as *city*. |
+| False Positive | 1 | *Forrest* was incorrectly predicted as *city* while it should have been *person*. |
+| False Negative | 1 | *Frederick* was incorrectly predicted as *person* while it should have been *city*. |
+
+* **Precision** = `#True_Positive / (#True_Positive + #False_Positive)` = `1 / (1 + 1) = 0.5`
+* **Recall** = `#True_Positive / (#True_Positive + #False_Negatives)` = `1 / (1 + 1) = 0.5`
+* **F1 Score** = `2 * Precision * Recall / (Precision + Recall)` = `(2 * 0.5 * 0.5) / (0.5 + 0.5) = 0.5`
+
+### Model-level evaluation for the collective model
+
+The model would have the following evaluation in its entirety:
+
+| Key | Count | Explanation |
+|--|--|--|
+| True Positive | 3 | *John Smith* and *Fannie Thomas* were correctly predicted as *person*. *Colorado Springs* was correctly predicted as *city*. This is the sum of true positives for all entities. |
+| False Positive | 2 | *Forrest* was incorrectly predicted as *city* while it should have been *person*. *Frederick* was incorrectly predicted as *person* while it should have been *city*. This is the sum of false positives for all entities. |
+| False Negative | 2 | *Forrest* was incorrectly predicted as *city* while it should have been *person*. *Frederick* was incorrectly predicted as *person* while it should have been *city*. This is the sum of false negatives for all entities. |
+
+* **Precision** = `#True_Positive / (#True_Positive + #False_Positive)` = `3 / (3 + 2) = 0.6`
+* **Recall** = `#True_Positive / (#True_Positive + #False_Negatives)` = `3 / (3 + 2) = 0.6`
+* **F1 Score** = `2 * Precision * Recall / (Precision + Recall)` = `(2 * 0.6 * 0.6) / (0.6 + 0.6) = 0.6`
+
+## Interpreting entity-level evaluation metrics
+
+So what does it actually mean to have high precision or high recall for a certain entity?
+
+| Recall | Precision | Interpretation |
+|--|--|--|
+| High | High | This entity is handled well by the model. |
+| Low | High | The model can't always extract this entity, but when it does, it's extracted with high confidence. |
+| High | Low | The model extracts this entity well. However, it's with low confidence because other entity types are sometimes incorrectly extracted as this one. |
+| Low | Low | This entity type is poorly handled by the model, because it is not usually extracted. When it is, it is not with high confidence. |
+
+## Guidance
+
+After you train your model, you'll see guidance and recommendations on how to improve it. It's recommended to have a model that covers all points in the guidance section.
+
+* Training set has enough data: When an entity type has fewer than 15 labeled instances in the training data, it can lead to lower accuracy due to the model not being adequately trained on these cases. In this case, consider adding more labeled data in the training set. You can check the *data distribution* tab for more guidance.
+
+* All entity types are present in test set: When the testing data lacks labeled instances for an entity type, the model's test performance may become less comprehensive due to untested scenarios. You can check the *test set data distribution* tab for more guidance.
+
+* Entity types are balanced within training and test sets: When sampling bias causes an inaccurate representation of an entity type's frequency, it can lead to lower accuracy due to the model expecting that entity type to occur too often or too little. You can check the *data distribution* tab for more guidance.
+
+* Entity types are evenly distributed between training and test sets: When the mix of entity types doesn't match between training and test sets, it can lead to lower testing accuracy due to the model being trained differently from how it's being tested. You can check the *data distribution* tab for more guidance.
+
+* Unclear distinction between entity types in training set: When the training data is similar for multiple entity types, it can lead to lower accuracy because the entity types may be frequently misclassified as each other. Review the following entity types and consider merging them if they're similar. Otherwise, add more examples to better distinguish them from each other. You can check the *confusion matrix* tab for more guidance.
++
+## Confusion matrix
+
+A Confusion matrix is an N x N matrix used for model performance evaluation, where N is the number of entities.
+The matrix compares the expected labels with the ones predicted by the model.
+This gives a holistic view of how well the model is performing and what kinds of errors it is making.
+
+You can use the Confusion matrix to identify entities that are too close to each other and often get mistaken (ambiguity). In this case consider merging these entity types together. If that isn't possible, consider adding more tagged examples of both entities to help the model differentiate between them.
+
+The highlighted diagonal in the image below is the correctly predicted entities, where the predicted tag is the same as the actual tag.
++
+You can calculate the entity-level and model-level evaluation metrics from the confusion matrix:
+
+* The values in the diagonal are the *true positive* values of each entity.
+* The sum of the values in an entity's row (excluding the diagonal) is the *false positive* count for that entity.
+* The sum of the values in an entity's column (excluding the diagonal) is the *false negative* count for that entity.
+
+Similarly,
+
+* The *true positive* of the model is the sum of *true positives* for all entities.
+* The *false positive* of the model is the sum of *false positives* for all entities.
+* The *false negative* of the model is the sum of *false negatives* for all entities.
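+
+As a minimal sketch of these relationships (illustrative only, not service output), the following Python snippet derives per-entity and model-level counts from a small confusion matrix, assuming, as in the bullets above, that rows correspond to the predicted entity and columns to the actual entity:
+
+```python
+import numpy as np
+
+# Toy confusion matrix for two entities (person, city), following the
+# convention above: rows = predicted entity, columns = actual entity.
+entities = ["person", "city"]
+cm = np.array([
+    [2, 1],   # predicted person: 2 actually person, 1 actually city
+    [1, 1],   # predicted city:   1 actually person, 1 actually city
+])
+
+true_positives = np.diag(cm)                        # per-entity TP
+false_positives = cm.sum(axis=1) - true_positives   # row sums minus diagonal
+false_negatives = cm.sum(axis=0) - true_positives   # column sums minus diagonal
+
+for i, entity in enumerate(entities):
+    print(entity, "TP:", true_positives[i],
+          "FP:", false_positives[i], "FN:", false_negatives[i])
+
+# Model-level counts are the sums across entities.
+print("model TP:", true_positives.sum(),
+      "FP:", false_positives.sum(), "FN:", false_negatives.sum())
+```
+
+With this toy matrix, the per-entity and model-level counts match the worked example earlier in this article.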
+
+## Next steps
+
+* [Custom text analytics for health overview](../overview.md)
+* [View a model's performance in Language Studio](../how-to/view-model-evaluation.md)
+* [Train a model](../how-to/train-model.md)
ai-services Call Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/language-service/custom-text-analytics-for-health/how-to/call-api.md
+
+ Title: Send a custom Text Analytics for health request to your custom model
+description: Learn how to send a request for custom text analytics for health.
+++++++ Last updated : 04/14/2023+
+ms.devlang: REST API
+++
+# Send queries to your custom Text Analytics for health model
+
+After the deployment is added successfully, you can query the deployment to extract entities from your text based on the model you assigned to the deployment.
+You can query the deployment programmatically using the [Prediction API](https://aka.ms/ct-runtime-api).
+
+## Test deployed model
+
+You can use Language Studio to submit the custom Text Analytics for health task and visualize the results.
++++
+## Send a custom text analytics for health request to your model
+
+# [Language Studio](#tab/language-studio)
+++
+# [REST API](#tab/rest-api)
+
+First you will need to get your resource key and endpoint:
+++
+### Submit a custom Text Analytics for health task
++
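+The exact request shape is defined in the Prediction API reference linked above. As a rough, hedged sketch only, a submission might look like the following Python call; the `CustomHealthcare` task kind, the `api-version` value, and all placeholder names are assumptions to verify against the API reference:
+
+```python
+import requests
+
+endpoint = "https://<your-language-resource>.cognitiveservices.azure.com"  # placeholder
+key = "<your-resource-key>"  # placeholder
+
+# Assumed request shape; confirm the task kind and api-version in the
+# Prediction API reference before using this in your system.
+body = {
+    "displayName": "Extracting custom health entities",
+    "analysisInput": {
+        "documents": [
+            {"id": "1", "language": "en", "text": "Patient was discharged from Contoso Clinic."}
+        ]
+    },
+    "tasks": [
+        {
+            "kind": "CustomHealthcare",                # assumed task kind
+            "taskName": "Custom TA4H Task 1",
+            "parameters": {
+                "projectName": "<your-project-name>",
+                "deploymentName": "<your-deployment-name>"
+            }
+        }
+    ],
+}
+
+response = requests.post(
+    f"{endpoint}/language/analyze-text/jobs",
+    params={"api-version": "2022-10-01-preview"},      # assumed api-version
+    headers={"Ocp-Apim-Subscription-Key": key},
+    json=body,
+)
+# The job URL for polling results is typically returned in the
+# "operation-location" response header.
+print(response.status_code, response.headers.get("operation-location"))
+```
+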
+### Get task results
+++++
+## Next steps
+
+* [Custom text analytics for health](../overview.md)
ai-services Create Project https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/language-service/custom-text-analytics-for-health/how-to/create-project.md
+
+ Title: Using Azure resources in custom Text Analytics for health
+
+description: Learn about the steps for using Azure resources with custom text analytics for health.
++++++ Last updated : 04/14/2023++++
+# How to create custom Text Analytics for health project
+
+Use this article to learn how to set up the requirements for starting with custom text analytics for health and create a project.
+
+## Prerequisites
+
+Before you start using custom text analytics for health, you need:
+
+* An Azure subscription - [Create one for free](https://azure.microsoft.com/free/cognitive-services).
+
+## Create a Language resource
+
+Before you start using custom text analytics for health, you'll need an Azure AI Language resource. It's recommended to create your Language resource and connect a storage account to it in the Azure portal. Creating a resource in the Azure portal lets you create an Azure storage account at the same time, with all of the required permissions preconfigured. You can also read further in the article to learn how to use a pre-existing resource, and configure it to work with custom text analytics for health.
+
+You'll also need an Azure storage account where you'll upload the `.txt` documents that will be used to train a model to extract entities.
+
+> [!NOTE]
+> * You need to have an **owner** role assigned on the resource group to create a Language resource.
+> * If you will connect a pre-existing storage account, you should have an owner role assigned to it.
+
+## Create Language resource and connect storage account
+
+You can create a resource in the following ways:
+
+* The Azure portal
+* Language Studio
+* PowerShell
+
+> [!Note]
+> You shouldn't move the storage account to a different resource group or subscription once it's linked with the Language resource.
+++++
+> [!NOTE]
+> * The process of connecting a storage account to your Language resource is irreversible; it cannot be disconnected later.
+> * You can only connect your language resource to one storage account.
+
+## Using a pre-existing Language resource
++
+## Create a custom Text Analytics for health project
+
+Once your resource and storage container are configured, create a new custom text analytics for health project. A project is a work area for building your custom AI models based on your data. Your project can only be accessed by you and others who have access to the Azure resource being used. If you have labeled data, you can use it to get started by [importing a project](#import-project).
+
+### [Language Studio](#tab/language-studio)
++
+### [REST APIs](#tab/rest-api)
++++
+## Import project
+
+If you have already labeled data, you can use it to get started with the service. Make sure that your labeled data follows the [accepted data formats](../concepts/data-formats.md).
+
+### [Language Studio](#tab/language-studio)
++
+### [REST APIs](#tab/rest-api)
++++
+## Get project details
+
+### [Language Studio](#tab/language-studio)
++
+### [REST APIs](#tab/rest-api)
++++
+## Delete project
+
+### [Language Studio](#tab/language-studio)
++
+### [REST APIs](#tab/rest-api)
++++
+## Next steps
+
+* You should have an idea of the [project schema](design-schema.md) you will use to label your data.
+
+* After you define your schema, you can start [labeling your data](label-data.md), which will be used for model training, evaluation, and finally making predictions.
ai-services Deploy Model https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/language-service/custom-text-analytics-for-health/how-to/deploy-model.md
+
+ Title: Deploy a custom Text Analytics for health model
+
+description: Learn about deploying a model for custom Text Analytics for health.
++++++ Last updated : 04/14/2023++++
+# Deploy a custom text analytics for health model
+
+Once you're satisfied with how your model performs, it's ready to be deployed and used to recognize entities in text. Deploying a model makes it available for use through the [prediction API](https://aka.ms/ct-runtime-swagger).
+
+## Prerequisites
+
+* A successfully [created project](create-project.md) with a configured Azure storage account.
+* Text data that has [been uploaded](design-schema.md#data-preparation) to your storage account.
+* [Labeled data](label-data.md) and a successfully [trained model](train-model.md).
+* Reviewed the [model evaluation details](view-model-evaluation.md) to determine how your model is performing.
+
+For more information, see [project development lifecycle](../overview.md#project-development-lifecycle).
+
+## Deploy model
+
+After you've reviewed your model's performance and decided it can be used in your environment, you need to assign it to a deployment. Assigning the model to a deployment makes it available for use through the [prediction API](https://aka.ms/ct-runtime-swagger). It is recommended to create a deployment named *production* to which you assign the best model you have built so far and use it in your system. You can create another deployment called *staging* to which you can assign the model you're currently working on to be able to test it. You can have a maximum of 10 deployments in your project.
+
+# [Language Studio](#tab/language-studio)
+
+
+# [REST APIs](#tab/rest-api)
+
+### Submit deployment job
++
+### Get deployment job status
++++
+## Swap deployments
+
+After you're done testing a model assigned to one deployment and you want to assign it to another deployment, you can swap these two deployments. Swapping deployments takes the model assigned to the first deployment and assigns it to the second deployment, and takes the model assigned to the second deployment and assigns it to the first. You can use this process to swap your *production* and *staging* deployments when you want to take the model assigned to *staging* and assign it to *production*.
+
+# [Language Studio](#tab/language-studio)
++
+# [REST APIs](#tab/rest-api)
+++++
+## Delete deployment
+
+# [Language Studio](#tab/language-studio)
++
+# [REST APIs](#tab/rest-api)
++++
+## Assign deployment resources
+
+You can [deploy your project to multiple regions](../../concepts/custom-features/multi-region-deployment.md) by assigning different Language resources that exist in different regions.
+
+# [Language Studio](#tab/language-studio)
++
+# [REST APIs](#tab/rest-api)
++++
+## Unassign deployment resources
+
+When unassigning or removing a deployment resource from a project, you will also delete all the deployments that have been deployed to that resource's region.
+
+# [Language Studio](#tab/language-studio)
++
+# [REST APIs](#tab/rest-api)
++++
+## Next steps
+
+After you have a deployment, you can use it to [extract entities](call-api.md) from text.
ai-services Design Schema https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/language-service/custom-text-analytics-for-health/how-to/design-schema.md
+
+ Title: Preparing data and designing a schema for custom Text Analytics for health
+
+description: Learn about how to select and prepare data, to be successful in creating custom TA4H projects.
++++++ Last updated : 04/14/2023++++
+# How to prepare data and define a schema for custom Text Analytics for health
+
+To create a custom TA4H model, you need quality data to train it. This article covers how to select and prepare your data, along with defining a schema. Defining the schema is the first step in the [project development lifecycle](../overview.md#project-development-lifecycle), and it entails defining the entity types or categories that you need your model to extract from the text at runtime.
+
+## Schema design
+
+Custom Text Analytics for health allows you to extend and customize the Text Analytics for health entity map. The first step of the process is building your schema, which allows you to define the new entity types or categories that you need your model to extract from text in addition to the Text Analytics for health existing entities at runtime.
+
+* Review documents in your dataset to be familiar with their format and structure.
+
+* Identify the entities you want to extract from the data.
+
+ For example, if you are extracting entities from support emails, you might need to extract "Customer name", "Product name", "Request date", and "Contact information".
+
+* Avoid entity type ambiguity.
+
+    **Ambiguity** happens when the entity types you select are similar to each other. The more ambiguous your schema, the more labeled data you will need to differentiate between entity types.
+
+    For example, if you are extracting data from a legal contract, to extract "Name of first party" and "Name of second party" you will need to add more examples to overcome ambiguity, since the names of both parties look similar. Avoiding ambiguity saves time and effort and yields better results.
+
+* Avoid complex entities. Complex entities can be difficult to pick out precisely from text; consider breaking them down into multiple entities.
+
+    For example, extracting "Address" would be challenging if it's not broken down into smaller entities. There are so many variations of how addresses appear that it would take a large number of labeled instances to teach the model to extract an address, as a whole, without breaking it down. However, if you replace "Address" with "Street Name", "PO Box", "City", "State" and "Zip", the model will require fewer labels per entity.
++
+## Add entities
+
+To add entities to your project:
+
+1. Move to **Entities** pivot from the top of the page.
+
+2. [Text Analytics for health entities](../../text-analytics-for-health/concepts/health-entity-categories.md) are automatically loaded into your project. To add additional entity categories, select **Add** from the top menu. You'll be prompted to type in a name to finish creating the entity.
+
+3. After creating an entity, you'll be routed to the entity details page where you can define the composition settings for this entity.
+
+4. Entities are defined by [entity components](../concepts/entity-components.md): learned, list or prebuilt. Text Analytics for health entities are by default populated with the prebuilt component and cannot have learned components. Your newly defined entities can be populated with the learned component once you add labels for them in your data but cannot be populated with the prebuilt component.
+
+5. You can add a [list](../concepts/entity-components.md#list-component) component to any of your entities.
+
+
+### Add list component
+
+To add a **list** component, select **Add new list**. You can add multiple lists to each entity.
+
+1. To create a new list, in the *Enter value* text box, enter the normalized value that will be returned when any of the synonym values is extracted.
+
+2. For multilingual projects, from the *language* drop-down menu, select the language of the synonym list, then start typing your synonyms and press Enter after each one. It's recommended to have synonym lists in multiple languages.
+
+ <!--:::image type="content" source="../media/add-list-component.png" alt-text="A screenshot showing a list component in Language Studio." lightbox="../media/add-list-component.png":::-->
+
+### Define entity options
+
+Change to the **Entity options** pivot in the entity details page. When multiple components are defined for an entity, their predictions may overlap. When an overlap occurs, each entity's final prediction is determined based on the [entity option](../concepts/entity-components.md#entity-options) you select in this step. Select the one that you want to apply to this entity and select the **Save** button at the top.
+
+ <!--:::image type="content" source="../media/entity-options.png" alt-text="A screenshot showing an entity option in Language Studio." lightbox="../media/entity-options.png":::-->
++
+After you create your entities, you can come back and edit them. You can **Edit entity components** or **delete** them by selecting this option from the top menu.
++
+## Data selection
+
+The quality of data you train your model with affects model performance greatly.
+
+* Use real-life data that reflects your domain's problem space to effectively train your model. You can use synthetic data to accelerate the initial model training process, but it will likely differ from your real-life data and make your model less effective when used.
+
+* Balance your data distribution as much as possible without deviating far from the distribution in real life. For example, if you are training your model to extract entities from legal documents that may come in many different formats and languages, you should provide examples that reflect the diversity you would expect to see in real life.
+
+* Use diverse data whenever possible to avoid overfitting your model. Less diversity in training data may lead to your model learning spurious correlations that may not exist in real-life data.
+
+* Avoid duplicate documents in your data. Duplicate data has a negative effect on the training process, model metrics, and model performance.
+
+* Consider where your data comes from. If you are collecting data from one person, department, or part of your scenario, you are likely missing diversity that may be important for your model to learn about.
+
+> [!NOTE]
+> If your documents are in multiple languages, select the **enable multi-lingual** option during [project creation](../quickstart.md) and set the **language** option to the language of the majority of your documents.
+
+## Data preparation
+
+As a prerequisite for creating a project, your training data needs to be uploaded to a blob container in your storage account. You can create and upload training documents from Azure directly, or by using the Azure Storage Explorer tool, which allows you to upload more data quickly.
+
+* [Create and upload documents from Azure](../../../../storage/blobs/storage-quickstart-blobs-portal.md#create-a-container)
+* [Create and upload documents using Azure Storage Explorer](../../../../vs-azure-tools-storage-explorer-blobs.md)
+
+You can only use `.txt` documents. If your data is in another format, you can use the [CLUtils parse command](https://github.com/microsoft/CognitiveServicesLanguageUtilities/blob/main/CustomTextAnalytics.CLUtils/Solution/CogSLanguageUtilities.ViewLayer.CliCommands/Commands/ParseCommand/README.md) to convert your documents.
+
+You can upload an annotated dataset, or you can upload an unannotated one and [label your data](../how-to/label-data.md) in Language studio.
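+
+If you prefer to script the upload, a minimal sketch using the `azure-storage-blob` Python package might look like the following; the container name, local folder, and connection string are placeholders you'd replace with your own values:
+
+```python
+from pathlib import Path
+from azure.storage.blob import BlobServiceClient
+
+# Placeholders: replace with your storage account connection string and
+# the container connected to your Language resource.
+connection_string = "<your-storage-connection-string>"
+container_name = "<your-container-name>"
+
+service = BlobServiceClient.from_connection_string(connection_string)
+container = service.get_container_client(container_name)
+
+# Upload every .txt training document from a local folder.
+for path in Path("training-documents").glob("*.txt"):
+    with open(path, "rb") as data:
+        container.upload_blob(name=path.name, data=data, overwrite=True)
+        print(f"Uploaded {path.name}")
+```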
+
+## Test set
+
+When defining the testing set, make sure to include example documents that are not present in the training set. Defining the testing set is an important step to calculate the [model performance](view-model-evaluation.md#model-details). Also, make sure that the testing set includes documents that represent all entities used in your project.
+
+## Next steps
+
+If you haven't already, create a custom Text Analytics for health project. If it's your first time using custom Text Analytics for health, consider following the [quickstart](../quickstart.md) to create an example project. You can also see the [how-to article](../how-to/create-project.md) for more details on what you need to create a project.
ai-services Fail Over https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/language-service/custom-text-analytics-for-health/how-to/fail-over.md
+
+ Title: Back up and recover your custom Text Analytics for health models
+
+description: Learn how to save and recover your custom Text Analytics for health models.
++++++ Last updated : 04/14/2023++++
+# Back up and recover your custom Text Analytics for health models
+
+When you create a Language resource, you specify a region for it to be created in. From then on, your resource and all of the operations related to it take place in the specified Azure server region. It's rare, but not impossible, to encounter a network issue that affects an entire region. If your solution needs to always be available, then you should design it to fail over into another region. This requires two Azure AI Language resources in different regions and synchronizing custom models across them.
+
+If your app or business depends on the use of a custom Text Analytics for health model, we recommend that you create a replica of your project in an additional supported region. If a regional outage occurs, you can then access your model in the other fail-over region where you replicated your project.
+
+Replicating a project means that you export your project metadata and assets, and import them into a new project. This only makes a copy of your project settings and tagged data. You still need to [train](./train-model.md) and [deploy](./deploy-model.md) the models to be available for use with [prediction APIs](https://aka.ms/ct-runtime-swagger).
+
+In this article, you'll learn how to use the export and import APIs to replicate your project from one resource to another in a different supported geographical region, along with guidance on keeping your projects in sync and the changes needed to your runtime consumption.
+
+## Prerequisites
+
+* Two Azure AI Language resources in different Azure regions. [Create your resources](./create-project.md#create-a-language-resource) and connect them to an Azure storage account. It's recommended that you connect each of your Language resources to different storage accounts. Each storage account should be located in the same respective regions that your separate Language resources are in. You can follow the [quickstart](../quickstart.md?pivots=rest-api#create-a-new-azure-ai-language-resource-and-azure-storage-account) to create an additional Language resource and storage account.
++
+## Get your resource keys and endpoint
+
+Use the following steps to get the keys and endpoint of your primary and secondary resources. These will be used in the following steps.
++
+> [!TIP]
+> Keep a note of keys and endpoints for both primary and secondary resources as well as the primary and secondary container names. Use these values to replace the following placeholders:
+`{PRIMARY-ENDPOINT}`, `{PRIMARY-RESOURCE-KEY}`, `{PRIMARY-CONTAINER-NAME}`, `{SECONDARY-ENDPOINT}`, `{SECONDARY-RESOURCE-KEY}`, and `{SECONDARY-CONTAINER-NAME}`.
+> Also take note of your project name, your model name and your deployment name. Use these values to replace the following placeholders: `{PROJECT-NAME}`, `{MODEL-NAME}` and `{DEPLOYMENT-NAME}`.
+
+## Export your primary project assets
+
+Start by exporting the project assets from the project in your primary resource.
+
+### Submit export job
+
+Replace the placeholders in the following request with your `{PRIMARY-ENDPOINT}` and `{PRIMARY-RESOURCE-KEY}` that you obtained in the first step.
++
+### Get export job status
+
+Replace the placeholders in the following request with your `{PRIMARY-ENDPOINT}` and `{PRIMARY-RESOURCE-KEY}` that you obtained in the first step.
+++
+Copy the response body as you will use it as the body for the next import job.
+
+## Import to a new project
+
+Now go ahead and import the exported project assets in your new project in the secondary region so you can replicate it.
+
+### Submit import job
+
+Replace the placeholders in the following request with your `{SECONDARY-ENDPOINT}`, `{SECONDARY-RESOURCE-KEY}`, and `{SECONDARY-CONTAINER-NAME}` that you obtained in the first step.
++
+### Get import job status
+
+Replace the placeholders in the following request with your `{SECONDARY-ENDPOINT}` and `{SECONDARY-RESOURCE-KEY}` that you obtained in the first step.
+++
+## Train your model
+
+After importing your project, you've only copied the project's assets and metadata. You still need to train your model, which will incur usage on your account.
+
+### Submit training job
+
+Replace the placeholders in the following request with your `{SECONDARY-ENDPOINT}` and `{SECONDARY-RESOURCE-KEY}` that you obtained in the first step.
+++
+### Get training status
+
+Replace the placeholders in the following request with your `{SECONDARY-ENDPOINT}` and `{SECONDARY-RESOURCE-KEY}` that you obtained in the first step.
++
+## Deploy your model
+
+This is the step where you make your trained model available for consumption via the [runtime prediction API](https://aka.ms/ct-runtime-swagger).
+
+> [!TIP]
+> Use the same deployment name as your primary project for easier maintenance and minimal changes to your system to handle redirecting your traffic.
+
+### Submit deployment job
+
+Replace the placeholders in the following request with your `{SECONDARY-ENDPOINT}` and `{SECONDARY-RESOURCE-KEY}` that you obtained in the first step.
++
+### Get the deployment status
+
+Replace the placeholders in the following request with your `{SECONDARY-ENDPOINT}` and `{SECONDARY-RESOURCE-KEY}` that you obtained in the first step.
++
+## Changes in calling the runtime
+
+Within your system, at the step where you call the [runtime prediction API](https://aka.ms/ct-runtime-swagger), check the response code returned from the submit task API. If you observe a **consistent** failure in submitting the request, this could indicate an outage in your primary region. A single failure doesn't mean an outage; it may be a transient issue. Retry submitting the job through the secondary resource you created. For the second request, use your `{SECONDARY-ENDPOINT}` and `{SECONDARY-RESOURCE-KEY}`. If you followed the steps above, `{PROJECT-NAME}` and `{DEPLOYMENT-NAME}` are the same, so no changes are required to the request body.
+
+If you revert to using your secondary resource, you'll observe a slight increase in latency because of the difference in regions where your model is deployed.
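+
+As a hedged illustration of this failover pattern (not prescribed by the service), a client-side sketch in Python might look like the following; the endpoint path, api-version, and request body are placeholders to align with the runtime prediction API reference:
+
+```python
+import requests
+
+PRIMARY = {"endpoint": "<PRIMARY-ENDPOINT>", "key": "<PRIMARY-RESOURCE-KEY>"}
+SECONDARY = {"endpoint": "<SECONDARY-ENDPOINT>", "key": "<SECONDARY-RESOURCE-KEY>"}
+
+def submit_job(resource: dict, body: dict) -> requests.Response:
+    # Path and api-version are illustrative; confirm them in the
+    # runtime prediction API reference.
+    return requests.post(
+        f"{resource['endpoint']}/language/analyze-text/jobs",
+        params={"api-version": "2022-10-01-preview"},
+        headers={"Ocp-Apim-Subscription-Key": resource["key"]},
+        json=body,
+        timeout=30,
+    )
+
+def submit_with_failover(body: dict) -> requests.Response:
+    # The same project and deployment names are used in both regions,
+    # so the request body doesn't change between resources.
+    response = submit_job(PRIMARY, body)
+    if response.status_code >= 500 or response.status_code in (408, 429):
+        # Repeated failures in the primary region: retry on the secondary.
+        response = submit_job(SECONDARY, body)
+    return response
+```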
+
+## Check if your projects are out of sync
+
+Maintaining the freshness of both projects is an important part of the process. You need to frequently check if any updates were made to your primary project so that you move them over to your secondary project. This way if your primary region fails and you move into the secondary region you should expect similar model performance since it already contains the latest updates. Setting the frequency of checking if your projects are in sync is an important choice. We recommend that you do this check daily in order to guarantee the freshness of data in your secondary model.
+
+### Get project details
+
+Use the following URL to get your project details; one of the keys returned in the body indicates the last modified date of the project.
+Repeat the following step twice, once for your primary project and once for your secondary project, then compare the timestamps returned for both of them to check if they're out of sync.
+
+ [!INCLUDE [get project details](../includes/rest-api/get-project-details.md)]
++
+Repeat the same steps for your replicated project using `{SECONDARY-ENDPOINT}` and `{SECONDARY-RESOURCE-KEY}`. Compare the returned `lastModifiedDateTime` from both projects. If your primary project was modified sooner than your secondary one, you need to repeat the steps of [exporting](#export-your-primary-project-assets), [importing](#import-to-a-new-project), [training](#train-your-model) and [deploying](#deploy-your-model).
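+
+A minimal sketch of this comparison could look like the following Python snippet; the authoring route and api-version are assumptions to verify against the authoring API reference:
+
+```python
+import requests
+
+def last_modified(endpoint: str, key: str, project_name: str) -> str:
+    # Assumed authoring route; confirm against the authoring API reference.
+    response = requests.get(
+        f"{endpoint}/language/authoring/analyze-text/projects/{project_name}",
+        params={"api-version": "2022-10-01-preview"},
+        headers={"Ocp-Apim-Subscription-Key": key},
+    )
+    response.raise_for_status()
+    return response.json()["lastModifiedDateTime"]
+
+primary = last_modified("<PRIMARY-ENDPOINT>", "<PRIMARY-RESOURCE-KEY>", "<PROJECT-NAME>")
+secondary = last_modified("<SECONDARY-ENDPOINT>", "<SECONDARY-RESOURCE-KEY>", "<PROJECT-NAME>")
+
+# ISO 8601 timestamps in the same format compare lexicographically.
+if primary > secondary:
+    print("Projects are out of sync: export, import, train, and deploy again.")
+else:
+    print("Projects are in sync.")
+```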
++
+## Next steps
+
+In this article, you have learned how to use the export and import APIs to replicate your project to a secondary Language resource in other region. Next, explore the API reference docs to see what else you can do with authoring APIs.
+
+* [Authoring REST API reference](https://aka.ms/ct-authoring-swagger)
+
+* [Runtime prediction REST API reference](https://aka.ms/ct-runtime-swagger)
ai-services Label Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/language-service/custom-text-analytics-for-health/how-to/label-data.md
+
+ Title: How to label your data for custom Text Analytics for health
+
+description: Learn how to label your data for use with custom Text Analytics for health.
++++++ Last updated : 04/14/2023++++
+# Label your data using the Language Studio
+
+Data labeling is a crucial step in development lifecycle. In this step, you label your documents with the new entities you defined in your schema to populate their learned components. This data will be used in the next step when training your model so that your model can learn from the labeled data to know which entities to extract. If you already have labeled data, you can directly [import](create-project.md#import-project) it into your project, but you need to make sure that your data follows the [accepted data format](../concepts/data-formats.md). See [create project](create-project.md#import-project) to learn more about importing labeled data into your project. If your data isn't labeled already, you can label it in the [Language Studio](https://aka.ms/languageStudio).
+
+## Prerequisites
+
+Before you can label your data, you need:
+
+* A successfully [created project](create-project.md) with a configured Azure blob storage account
+* Text data that [has been uploaded](design-schema.md#data-preparation) to your storage account.
+
+See the [project development lifecycle](../overview.md#project-development-lifecycle) for more information.
+
+## Data labeling guidelines
+
+After preparing your data, designing your schema and creating your project, you will need to label your data. Labeling your data is important so your model knows which words will be associated with the entity types you need to extract. When you label your data in [Language Studio](https://aka.ms/languageStudio) (or import labeled data), these labels are stored in the JSON document in your storage container that you have connected to this project.
+
+As you label your data, keep in mind:
+
+* You can't add labels for Text Analytics for health entities as they're pretrained prebuilt entities. You can only add labels to new entity categories that you defined during schema definition.
+
+    If you want to improve the recall for a prebuilt entity, you can extend it by adding a list component while you're [defining your schema](design-schema.md).
+
+* In general, more labeled data leads to better results, provided the data is labeled accurately.
+
+* The precision, consistency and completeness of your labeled data are key factors to determining model performance.
+
+ * **Label precisely**: Label each entity to its right type always. Only include what you want extracted, avoid unnecessary data in your labels.
+ * **Label consistently**: The same entity should have the same label across all the documents.
+ * **Label completely**: Label all the instances of the entity in all your documents.
+
+ > [!NOTE]
+ > There is no fixed number of labels that can guarantee your model will perform the best. Model performance is dependent on possible ambiguity in your schema, and the quality of your labeled data. Nevertheless, we recommend having around 50 labeled instances per entity type.
+
+## Label your data
+
+Use the following steps to label your data:
+
+1. Go to your project page in [Language Studio](https://aka.ms/languageStudio).
+
+2. From the left side menu, select **Data labeling**. You can find a list of all documents in your storage container.
+
+ <!--:::image type="content" source="../media/tagging-files-view.png" alt-text="A screenshot showing the Language Studio screen for labeling data." lightbox="../media/tagging-files-view.png":::-->
+
+ >[!TIP]
+ > You can use the filters in top menu to view the unlabeled documents so that you can start labeling them.
+ > You can also use the filters to view the documents that are labeled with a specific entity type.
+
+3. Change to a single document view from the left side in the top menu or select a specific document to start labeling. You can find a list of all `.txt` documents available in your project to the left. You can use the **Back** and **Next** button from the bottom of the page to navigate through your documents.
+
+ > [!NOTE]
+ > If you enabled multiple languages for your project, you will find a **Language** dropdown in the top menu, which lets you select the language of each document. Hebrew is not supported with multi-lingual projects.
+
+4. In the right side pane, you can use the **Add entity type** button to add additional entities to your project that you missed during schema definition.
+
+ <!--:::image type="content" source="../media/tag-1.png" alt-text="A screenshot showing complete data labeling." lightbox="../media/tag-1.png":::-->
+
+5. You have two options to label your document:
+
+ |Option |Description |
+ |||
+ |Label using a brush | Select the brush icon next to an entity type in the right pane, then highlight the text in the document you want to annotate with this entity type. |
+ |Label using a menu | Highlight the word you want to label as an entity, and a menu will appear. Select the entity type you want to assign for this entity. |
+
+ The below screenshot shows labeling using a brush.
+
+ :::image type="content" source="../media/tag-options.png" alt-text="A screenshot showing the labeling options offered in Custom NER." lightbox="../media/tag-options.png":::
+
+6. In the right side pane under the **Labels** pivot you can find all the entity types in your project and the count of labeled instances per each. The prebuilt entities will be shown for reference but you will not be able to label for these prebuilt entities as they are pretrained.
+
+7. In the bottom section of the right side pane you can add the current document you are viewing to the training set or the testing set. By default all the documents are added to your training set. See [training and testing sets](train-model.md#data-splitting) for information on how they are used for model training and evaluation.
+
+ > [!TIP]
+ > If you are planning on using **Automatic** data splitting, use the default option of assigning all the documents into your training set.
+
+7. Under the **Distribution** pivot you can view the distribution across training and testing sets. You have two options for viewing:
+ * *Total instances* where you can view count of all labeled instances of a specific entity type.
+ * *Documents with at least one label* where each document is counted if it contains at least one labeled instance of this entity.
+
+7. When you're labeling, your changes are synced periodically; if they haven't been saved yet, you'll see a warning at the top of your page. If you want to save manually, select the **Save labels** button at the bottom of the page.
+
+## Remove labels
+
+To remove a label:
+
+1. Select the entity you want to remove a label from.
+2. Scroll through the menu that appears, and select **Remove label**.
+
+## Delete entities
+
+You cannot delete any of the Text Analytics for health pretrained entities because they have a prebuilt component. You are only permitted to delete newly defined entity categories. To delete an entity, select the delete icon next to the entity you want to remove. Deleting an entity removes all its labeled instances from your dataset.
+
+## Next steps
+
+After you've labeled your data, you can begin [training a model](train-model.md) that will learn based on your data.
ai-services Train Model https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/language-service/custom-text-analytics-for-health/how-to/train-model.md
+
+ Title: How to train your custom Text Analytics for health model
+
+description: Learn about how to train your model for custom Text Analytics for health.
++++++ Last updated : 04/14/2023++++
+# Train your custom Text Analytics for health model
+
+Training is the process where the model learns from your [labeled data](label-data.md). After training is completed, you'll be able to view the [model's performance](view-model-evaluation.md) to determine if you need to improve your model.
+
+To train a model, you start a training job and only successfully completed jobs create a model. Training jobs expire after seven days, which means you won't be able to retrieve the job details after this time. If your training job completed successfully and a model was created, the model won't be affected. You can only have one training job running at a time, and you can't start other jobs in the same project.
+
+Training times can range from a few minutes when dealing with a few documents, up to several hours, depending on the dataset size and the complexity of your schema.
++
+## Prerequisites
+
+* A successfully [created project](create-project.md) with a configured Azure blob storage account
+* Text data that [has been uploaded](design-schema.md#data-preparation) to your storage account.
+* [Labeled data](label-data.md)
+
+See the [project development lifecycle](../overview.md#project-development-lifecycle) for more information.
+
+## Data splitting
+
+Before you start the training process, labeled documents in your project are divided into a training set and a testing set. Each one of them serves a different function.
+The **training set** is used in training the model; this is the set from which the model learns the labeled entities and what spans of text are to be extracted as entities.
+The **testing set** is a blind set that isn't introduced to the model during training, only during evaluation.
+After model training is completed successfully, the model is used to make predictions on the documents in the testing set, and based on these predictions the [evaluation metrics](../concepts/evaluation-metrics.md) are calculated. Model training and evaluation are only for newly defined entities with learned components; therefore, Text Analytics for health entities are excluded from model training and evaluation because they are entities with prebuilt components. It's recommended to make sure that all your labeled entities are adequately represented in both the training and testing sets.
+
+Custom Text Analytics for health supports two methods for data splitting:
+
+* **Automatically splitting the testing set from training data**: The system splits your labeled data between the training and testing sets, according to the percentages you choose. The recommended percentage split is 80% for training and 20% for testing (see the sketch after this list).
+
+ > [!NOTE]
+ > If you choose the **Automatically splitting the testing set from training data** option, only the data assigned to training set will be split according to the percentages provided.
+
+* **Use a manual split of training and testing data**: This method enables users to define which labeled documents should belong to which set. This step is only enabled if you have added documents to your testing set during [data labeling](label-data.md).
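+
+To make the 80/20 split concrete, here is a small illustrative Python sketch of a random document split; it only mirrors the idea behind the automatic option and isn't something you need to run against the service:
+
+```python
+import random
+
+def split_documents(documents, train_ratio=0.8, seed=42):
+    """Randomly split labeled documents into training and testing sets."""
+    shuffled = documents[:]
+    random.Random(seed).shuffle(shuffled)
+    cutoff = int(len(shuffled) * train_ratio)
+    return shuffled[:cutoff], shuffled[cutoff:]
+
+docs = [f"note_{i}.txt" for i in range(100)]
+train_set, test_set = split_documents(docs)
+print(len(train_set), "training documents,", len(test_set), "testing documents")
+# 80 training documents, 20 testing documents
+```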
+
+## Train model
+
+# [Language studio](#tab/Language-studio)
++
+# [REST APIs](#tab/REST-APIs)
+
+### Start training job
++
+### Get training job status
+
+Training could take some time depending on the size of your training data and the complexity of your schema. You can use the following request to keep polling the status of the training job until it's successfully completed.
+
+ [!INCLUDE [get training model status](../includes/rest-api/get-training-status.md)]
+++
+### Cancel training job
+
+# [Language Studio](#tab/language-studio)
++
+# [REST APIs](#tab/rest-api)
++++
+## Next steps
+
+After training is completed, you'll be able to view the [model's performance](view-model-evaluation.md) to optionally improve your model if needed. Once you're satisfied with your model, you can deploy it, making it available to use for [extracting entities](call-api.md) from text.
ai-services View Model Evaluation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/language-service/custom-text-analytics-for-health/how-to/view-model-evaluation.md
+
+ Title: Evaluate a Custom Text Analytics for health model
+
+description: Learn how to evaluate and score your Custom Text Analytics for health model
++++++ Last updated : 04/14/2023+++++
+# View a custom text analytics for health model's evaluation and details
+
+After your model has finished training, you can view the model performance and see the extracted entities for the documents in the test set.
+
+> [!NOTE]
+> Using the **Automatically split the testing set from training data** option may result in a different model evaluation result every time you train a new model, as the test set is selected randomly from the data. To make sure that the evaluation is calculated on the same test set every time you train a model, make sure to use the **Use a manual split of training and testing data** option when starting a training job and define your **Test** documents when [labeling data](label-data.md).
+
+## Prerequisites
+
+Before viewing model evaluation, you need:
+
+* A successfully [created project](create-project.md) with a configured Azure blob storage account.
+* Text data that [has been uploaded](design-schema.md#data-preparation) to your storage account.
+* [Labeled data](label-data.md)
+* A [successfully trained model](train-model.md)
++
+## Model details
+
+There are several metrics you can use to evaluate your model. See the [performance metrics](../concepts/evaluation-metrics.md) article for more information on the model details described in this article.
+
+### [Language studio](#tab/language-studio)
++
+### [REST APIs](#tab/rest-api)
++++
+## Load or export model data
+
+### [Language studio](#tab/Language-studio)
+++
+### [REST APIs](#tab/REST-APIs)
++++
+## Delete model
+
+### [Language studio](#tab/language-studio)
++
+### [REST APIs](#tab/rest-api)
++++
+## Next steps
+
+* [Deploy your model](deploy-model.md)
+* Learn about the [metrics used in evaluation](../concepts/evaluation-metrics.md).
ai-services Language Support https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/language-service/custom-text-analytics-for-health/language-support.md
+
+ Title: Language and region support for custom Text Analytics for health
+
+description: Learn about the languages and regions supported by custom Text Analytics for health
++++++ Last updated : 04/14/2023++++
+# Language support for custom text analytics for health
+
+Use this article to learn about the languages currently supported by custom Text Analytics for health.
+
+## Multilingual option
+
+With custom Text Analytics for health, you can train a model in one language and use it to extract entities from documents in other languages. This feature saves you the trouble of building separate projects for each language by letting you combine your datasets in a single project, making it easy to scale your projects to multiple languages. You can train your project entirely with English documents, and query it in French, German, Italian, and others. You can enable the multilingual option as part of the project creation process or later through the project settings.
+
+You aren't expected to add the same number of documents for every language. You should build the majority of your project in one language, and only add a few documents in languages you observe aren't performing well. If you create a project that is primarily in English, and start testing it in French, German, and Spanish, you might observe that German doesn't perform as well as the other two languages. In that case, consider adding 5% of your original English documents in German, train a new model and test in German again. In the [data labeling](how-to/label-data.md) page in Language Studio, you can select the language of the document you're adding. You should see better results for German queries. The more labeled documents you add, the more likely the results are going to get better. When you add data in another language, you shouldn't expect it to negatively affect other languages.
+
+Hebrew is not supported in multilingual projects. If the primary language of the project is Hebrew, you will not be able to add training data in other languages, or query the model with other languages. Similarly, if the primary language of the project is not Hebrew, you will not be able to add training data in Hebrew, or query the model in Hebrew.
+
+## Language support
+
+Custom Text Analytics for health supports `.txt` files in the following languages:
+
+| Language | Language code |
+| | |
+| English | `en` |
+| French | `fr` |
+| German | `de` |
+| Spanish | `es` |
+| Italian | `it` |
+| Portuguese (Portugal) | `pt-pt` |
+| Hebrew | `he` |
++
+## Next steps
+
+* [Custom Text Analytics for health overview](overview.md)
+* [Service limits](reference/service-limits.md)
ai-services Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/language-service/custom-text-analytics-for-health/overview.md
+
+ Title: Custom Text Analytics for health - Azure AI services
+
+description: Customize an AI model to label and extract healthcare information from documents using Azure AI services.
++++++ Last updated : 04/14/2023++++
+# What is custom Text Analytics for health?
+
+Custom Text Analytics for health is one of the custom features offered by [Azure AI Language](../overview.md). It is a cloud-based API service that applies machine-learning intelligence to enable you to build custom models on top of [Text Analytics for health](../text-analytics-for-health/overview.md) for custom healthcare entity recognition tasks.
+
+Custom Text Analytics for health enables users to build custom AI models to extract healthcare specific entities from unstructured text, such as clinical notes and reports. By creating a custom Text Analytics for health project, developers can iteratively define new vocabulary, label data, train, evaluate, and improve model performance before making it available for consumption. The quality of the labeled data greatly impacts model performance. To simplify building and customizing your model, the service offers a web portal that can be accessed through the [Language studio](https://aka.ms/languageStudio). You can easily get started with the service by following the steps in this [quickstart](quickstart.md).
+
+This documentation contains the following article types:
+
+* [Quickstarts](quickstart.md) are getting-started instructions to guide you through making requests to the service.
+* [Concepts](concepts/evaluation-metrics.md) provide explanations of the service functionality and features.
+* [How-to guides](how-to/label-data.md) contain instructions for using the service in more specific or customized ways.
+
+## Example usage scenarios
+
+Similarly to Text Analytics for health, custom Text Analytics for health can be used in multiple [scenarios](../text-analytics-for-health/overview.md#example-use-cases) across a variety of healthcare industries. However, the main usage of this feature is to provide a layer of customization on top of Text Analytics for health to extend its existing entity map.
++
+## Project development lifecycle
+
+Using custom Text Analytics for health typically involves several different steps.
++
+* **Define your schema**: Know your data and define the new entities you want extracted on top of the existing Text Analytics for health entity map. Avoid ambiguity.
+
+* **Label your data**: Labeling data is a key factor in determining model performance. Label precisely, consistently and completely.
+ * **Label precisely**: Label each entity to its right type always. Only include what you want extracted, avoid unnecessary data in your labels.
+ * **Label consistently**: The same entity should have the same label across all the files.
+ * **Label completely**: Label all the instances of the entity in all your files.
+
+* **Train the model**: Your model starts learning from your labeled data.
+
+* **View the model's performance**: After training is completed, view the model's evaluation details, its performance and guidance on how to improve it.
+
+* **Deploy the model**: Deploying a model makes it available for use via an API.
+
+* **Extract entities**: Use your custom models for entity extraction tasks.
+
+## Reference documentation and code samples
+
+As you use custom Text Analytics for health, see the following reference documentation for Azure AI Language:
+
+|APIs| Reference documentation|
+||||
+|REST APIs (Authoring) | [REST API documentation](/rest/api/language/2022-10-01-preview/text-analysis-authoring) |
+|REST APIs (Runtime) | [REST API documentation](/rest/api/language/2022-10-01-preview/text-analysis-runtime/submit-job) |
++
+## Responsible AI
+
+An AI system includes not only the technology, but also the people who will use it, the people who will be affected by it, and the environment in which it is deployed. Read the [transparency note for Text Analytics for health](/legal/cognitive-services/language-service/transparency-note-health?context=/azure/ai-services/language-service/context/context) to learn about responsible AI use and deployment in your systems. You can also see the following articles for more information:
+++
+## Next steps
+
+* Use the [quickstart article](quickstart.md) to start using custom Text Analytics for health.
+
+* As you go through the project development lifecycle, review the glossary to learn more about the terms used throughout the documentation for this feature.
+
+* Remember to view the [service limits](reference/service-limits.md) for information such as [regional availability](reference/service-limits.md#regional-availability).
ai-services Quickstart https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/language-service/custom-text-analytics-for-health/quickstart.md
+
+ Title: Quickstart - Custom Text Analytics for health (Custom TA4H)
+
+description: Quickly start building an AI model to categorize and extract information from healthcare unstructured text.
++++++ Last updated : 04/14/2023++
+zone_pivot_groups: usage-custom-language-features
++
+# Quickstart: custom Text Analytics for health
+
+Use this article to get started with creating a custom Text Analytics for health project where you can train custom models on top of Text Analytics for health for custom entity recognition. A model is artificial intelligence software that's trained to do a certain task. For this system, the models extract healthcare related named entities and are trained by learning from labeled data.
+
+In this article, we use Language Studio to demonstrate key concepts of custom Text Analytics for health. As an example, we'll build a custom Text Analytics for health model to extract the Facility or treatment location from short discharge notes.
+++++++
+## Next steps
+
+* [Text analytics for health overview](./overview.md)
+
+After you've created an entity extraction model, you can:
+
+* [Use the runtime API to extract entities](how-to/call-api.md)
+
+When you start to create your own custom Text Analytics for health projects, use the how-to articles to learn more about data labeling, training and consuming your model in greater detail:
+
+* [Data selection and schema design](how-to/design-schema.md)
+* [Tag data](how-to/label-data.md)
+* [Train a model](how-to/train-model.md)
+* [Model evaluation](how-to/view-model-evaluation.md)
+
ai-services Glossary https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/language-service/custom-text-analytics-for-health/reference/glossary.md
+
+ Title: Definitions used in custom Text Analytics for health
+
+description: Learn about definitions used in custom Text Analytics for health
++++++ Last updated : 04/14/2023++++
+# Terms and definitions used in custom Text Analytics for health
+
+Use this article to learn about some of the definitions and terms you may encounter when using custom Text Analytics for health.
+
+## Entity
+Entities are words in input data that describe information relating to a specific category or concept. If your entity is complex and you would like your model to identify specific parts, you can break your entity into subentities. For example, you might want your model to predict an address, but also the subentities of street, city, state, and zipcode.
+
+## F1 score
+The F1 score is a function of Precision and Recall. It's needed when you seek a balance between [precision](#precision) and [recall](#recall).
+
+## Prebuilt entity component
+
+Prebuilt entity components represent pretrained entity components that belong to the [Text Analytics for health entity map](../../text-analytics-for-health/concepts/health-entity-categories.md). These entities are automatically loaded into your project as entities with prebuilt components. You can define list components for entities with prebuilt components but you cannot add learned components. Similarly, you can create new entities with learned and list components, but you cannot populate them with additional prebuilt components.
++
+## Learned entity component
+
+The learned entity component uses the entity tags you label your text with to train a machine learned model. The model learns to predict where the entity is, based on the context within the text. Your labels provide examples of where the entity is expected to be present in text, based on the meaning of the words around it and the words that were labeled. This component is only defined if you add labels by labeling your data for the entity. If you do not label any data with the entity, it will not have a learned component. Learned components cannot be added to entities with prebuilt components.
+
+## List entity component
+A list entity component represents a fixed, closed set of related words along with their synonyms. List entities are exact matches, unlike machine learned entities.
+
+The entity is predicted if a word from the list is included in the input. For example, if you have a list entity called "clinics" and you have the words "clinic a, clinic b, clinic c" in the list, then the clinics entity is predicted for all instances of the input data where "clinic a", "clinic b", or "clinic c" are used, regardless of the context. List components can be added to all entities regardless of whether they are prebuilt or newly defined.
+
+## Model
+A model is an object that's trained to do a certain task. In this case, custom Text Analytics for health models perform all the features of Text Analytics for health in addition to custom entity extraction for the user's defined entities. Models are trained by providing labeled data to learn from so they can later be used to understand context from the input text.
+
+* **Model evaluation** is the process that happens right after training to determine how well your model performs.
+* **Deployment** is the process of assigning your model to a deployment to make it available for use via the [prediction API](https://aka.ms/ct-runtime-swagger).
+
+## Overfitting
+
+Overfitting happens when the model is fixated on the specific examples and is not able to generalize well.
+
+## Precision
+Measures how precise/accurate your model is. It's the ratio between the correctly identified positives (true positives) and all identified positives. The precision metric reveals how many of the predicted entities are correctly labeled.
+
+## Project
+A project is a work area for building your custom ML models based on your data. Your project can only be accessed by you and others who have access to the Azure resource being used.
+
+## Recall
+Measures the model's ability to predict actual positive entities. It's the ratio between the predicted true positives and what was actually labeled. The recall metric reveals how many of the predicted entities are correct.
++
+## Schema
+Schema is defined as the combination of entities within your project. Schema design is a crucial part of your project's success. When creating a schema, you want to think about which new entities you should add to your project to extend the existing [Text Analytics for health entity map](../../text-analytics-for-health/concepts/health-entity-categories.md), and which new vocabulary you should add to the prebuilt entities using list components to enhance their recall. For example, you might add a new entity for patient name, or extend the prebuilt entity "Medication Name" with a new research drug (for example, research drug A).
+
+## Training data
+Training data is the set of information that is needed to train a model.
++
+## Next steps
+
+* [Data and service limits](service-limits.md).
+
ai-services Service Limits https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/language-service/custom-text-analytics-for-health/reference/service-limits.md
+
+ Title: Custom Text Analytics for health service limits
+
+description: Learn about the data and service limits when using Custom Text Analytics for health.
++++++ Last updated : 04/14/2023++++
+# Custom Text Analytics for health service limits
+
+Use this article to learn about the data and service limits when using custom Text Analytics for health.
+
+## Language resource limits
+
+* Your Language resource has to be created in one of the [supported regions](#regional-availability).
+
+* Your resource must be one of the supported pricing tiers:
+
+ |Tier|Description|Limit|
+ |--|--|--|
+ |S |Paid tier|You can have unlimited Language S tier resources per subscription. |
+
+
+* You can only connect one storage account per resource. This process is irreversible. If you connect a storage account to your resource, you cannot unlink it later. Learn more about [connecting a storage account](../how-to/create-project.md#create-language-resource-and-connect-storage-account)
+
+* You can have up to 500 projects per resource.
+
+* Project names have to be unique within the same resource across all custom features.
+
+## Regional availability
+
+Custom Text Analytics for health is only available in some Azure regions since it is a preview service. Some regions may be available for **both authoring and prediction**, while other regions may be for **prediction only**. Language resources in authoring regions allow you to create, edit, train, and deploy your projects. Language resources in prediction regions allow you to get predictions from a deployment.
+
+| Region | Authoring | Prediction |
+|--|--|-|
+| East US | ✓ | ✓ |
+| UK South | ✓ | ✓ |
+| North Europe | ✓ | ✓ |
+
+## API limits
+
+|Item|Request type| Maximum limit|
+|:-|:-|:-|
+|Authoring API|POST|10 per minute|
+|Authoring API|GET|100 per minute|
+|Prediction API|GET/POST|1,000 per minute|
+|Document size|--|125,000 characters. You can send up to 20 documents as long as they collectively do not exceed 125,000 characters|
+
+> [!TIP]
+> If you need to send larger files than the limit allows, you can break the text into smaller chunks of text before sending them to the API. You can use the [chunk command from CLUtils](https://github.com/microsoft/CognitiveServicesLanguageUtilities/blob/main/CustomTextAnalytics.CLUtils/Solution/CogSLanguageUtilities.ViewLayer.CliCommands/Commands/ChunkCommand/README.md) for this process.
+
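+If you prefer to chunk the text yourself, a minimal Python sketch such as the following keeps each piece under the documented limit; the 125,000-character value comes from the table above.
+
+```python
+def chunk_text(text: str, max_chars: int = 125_000) -> list[str]:
+    """Split text into consecutive chunks of at most max_chars characters."""
+    return [text[i:i + max_chars] for i in range(0, len(text), max_chars)]
+
+# Example: a long document becomes a list of API-sized pieces.
+long_document = "..." * 100_000
+pieces = chunk_text(long_document)
+print(len(pieces), max(len(p) for p in pieces))
+```
+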
+## Quota limits
+
+|Pricing tier |Item |Limit |
+|--|--|--|
+|S|Training time| Unlimited, free |
+|S|Prediction Calls| 5,000 text records for free per language resource|
+
+## Document limits
+
+* You can only use `.txt` files. If your data is in another format, you can use the [CLUtils parse command](https://github.com/microsoft/CognitiveServicesLanguageUtilities/blob/main/CustomTextAnalytics.CLUtils/Solution/CogSLanguageUtilities.ViewLayer.CliCommands/Commands/ParseCommand/README.md) to open your document and extract the text.
+
+* All files uploaded in your container must contain data. Empty files are not allowed for training.
+
+* All files should be available at the root of your container.
+
+## Data limits
+
+The following limits are observed for authoring.
+
+|Item|Lower Limit| Upper Limit |
+|--|--|--|
+|Documents count | 10 | 100,000 |
+|Document length in characters | 1 | 128,000 characters; approximately 28,000 words or 56 pages. |
+|Count of entity types | 1 | 200 |
+|Entity length in characters | 1 | 500 |
+|Count of trained models per project| 0 | 10 |
+|Count of deployments per project| 0 | 10 |
+
+## Naming limits
+
+| Item | Limits |
+|--|--|
+| Project name | You can only use letters `(a-z, A-Z)`, numbers `(0-9)`, and symbols `_ . -`, with no spaces. Maximum allowed length is 50 characters. |
+| Model name | You can only use letters `(a-z, A-Z)`, numbers `(0-9)` and symbols `_ . -`. Maximum allowed length is 50 characters. |
+| Deployment name | You can only use letters `(a-z, A-Z)`, numbers `(0-9)` and symbols `_ . -`. Maximum allowed length is 50 characters. |
+| Entity name| You can only use letters `(a-z, A-Z)`, numbers `(0-9)` and all symbols except ":", `$ & % * ( ) + ~ # / ?`. Maximum allowed length is 50 characters. See the supported [data format](../concepts/data-formats.md#entity-naming-rules) for more information on entity names when importing a labels file. |
+| Document name | You can only use letters `(a-z, A-Z)`, and numbers `(0-9)` with no spaces. |
++
+## Next steps
+
+* [Custom text analytics for health overview](../overview.md)
ai-services Data Formats https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/language-service/custom-text-classification/concepts/data-formats.md
+
+ Title: Custom text classification data formats
+
+description: Learn about the data formats accepted by custom text classification.
++++++ Last updated : 05/24/2022++++
+# Accepted data formats
+
+If you're trying to import your data into custom text classification, it has to follow a specific format. If you don't have data to import you can [create your project](../how-to/create-project.md) and use Language Studio to [label your documents](../how-to/tag-data.md).
+
+## Labels file format
+
+Your labels file should be in the `json` format below. This will enable you to [import](../how-to/create-project.md#import-a-custom-text-classification-project) your labels into a project. A sketch that generates this file programmatically is shown after the format tabs.
+
+# [Multi label classification](#tab/multi-classification)
+
+```json
+{
+ "projectFileVersion": "2022-05-01",
+ "stringIndexType": "Utf16CodeUnit",
+ "metadata": {
+ "projectKind": "CustomMultiLabelClassification",
+ "storageInputContainerName": "{CONTAINER-NAME}",
+ "projectName": "{PROJECT-NAME}",
+ "multilingual": false,
+ "description": "Project-description",
+ "language": "en-us"
+ },
+ "assets": {
+ "projectKind": "CustomMultiLabelClassification",
+ "classes": [
+ {
+ "category": "Class1"
+ },
+ {
+ "category": "Class2"
+ }
+ ],
+ "documents": [
+ {
+ "location": "{DOCUMENT-NAME}",
+ "language": "{LANGUAGE-CODE}",
+ "dataset": "{DATASET}",
+ "classes": [
+ {
+ "category": "Class1"
+ },
+ {
+ "category": "Class2"
+ }
+ ]
+ }
+ ]
+ }
+}
+```
+
+|Key |Placeholder |Value | Example |
+|--|--|--|--|
+| multilingual | `true`| A boolean value that enables you to have documents in multiple languages in your dataset and when your model is deployed you can query the model in any supported language (not necessarily included in your training documents). See [language support](../language-support.md#multi-lingual-option) to learn more about multilingual support. | `true`|
+|projectName|`{PROJECT-NAME}`|Project name|myproject|
+| storageInputContainerName|`{CONTAINER-NAME}`|Container name|`mycontainer`|
+| classes | [] | Array containing all the classes you have in the project. These are the classes you want to classify your documents into.| [] |
+| documents | [] | Array containing all the documents in your project and the classes labeled for this document. | [] |
+| location | `{DOCUMENT-NAME}` | The location of the documents in the storage container. Since all the documents are in the root of the container, this value should be the document name.|`doc1.txt`|
+| dataset | `{DATASET}` | The dataset to which this file will be assigned when the data is split before training. See [How to train a model](../how-to/train-model.md#data-splitting) for more information. Possible values for this field are `Train` and `Test`. |`Train`|
++
+# [Single label classification](#tab/single-classification)
+
+```json
+{
+
+ "projectFileVersion": "2022-05-01",
+ "stringIndexType": "Utf16CodeUnit",
+ "metadata": {
+ "projectKind": "CustomSingleLabelClassification",
+ "storageInputContainerName": "{CONTAINER-NAME}",
+ "settings": {},
+ "projectName": "{PROJECT-NAME}",
+ "multilingual": false,
+ "description": "Project-description",
+ "language": "en-us"
+ },
+ "assets": {
+ "projectKind": "CustomSingleLabelClassification",
+ "classes": [
+ {
+ "category": "Class1"
+ },
+ {
+ "category": "Class2"
+ }
+ ],
+ "documents": [
+ {
+ "location": "{DOCUMENT-NAME}",
+ "language": "{LANGUAGE-CODE}",
+ "dataset": "{DATASET}",
+ "class": {
+ "category": "Class2"
+ }
+ },
+ {
+ "location": "{DOCUMENT-NAME}",
+ "language": "{LANGUAGE-CODE}",
+ "dataset": "{DATASET}",
+ "class": {
+ "category": "Class1"
+ }
+ }
+ ]
+ }
+}
+```
+|Key |Placeholder |Value | Example |
+|--|--|--|--|
+|projectName|`{PROJECT-NAME}`|Project name|myproject|
+| storageInputContainerName|`{CONTAINER-NAME}`|Container name|`mycontainer`|
+| multilingual | `true`| A boolean value that enables you to have documents in multiple languages in your dataset and when your model is deployed you can query the model in any supported language (not necessarily included in your training documents). See [language support](../language-support.md#multi-lingual-option) to learn more about multilingual support. | `true`|
+| classes | [] | Array containing all the classes you have in the project. These are the classes you want to classify your documents into.| [] |
+| documents | [] | Array containing all the documents in your project and which class this document belongs to. | [] |
+| location | `{DOCUMENT-NAME}` | The location of the documents in the storage container. Since all the documents are in the root of the container, this value should be the document name.|`doc1.txt`|
+| dataset | `{DATASET}` | The dataset to which this file will be assigned when the data is split before training. See [How to train a model](../how-to/train-model.md#data-splitting) for more information. Possible values for this field are `Train` and `Test`. |`Train`|
++++
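+
+If you generate your labels file programmatically, the following Python sketch writes a multi label classification labels file in the format shown above. The document names, classes, and project values are hypothetical placeholders.
+
+```python
+import json
+
+classes = ["Class1", "Class2"]
+documents = [
+    {"location": "doc1.txt", "language": "en-us", "dataset": "Train", "classes": ["Class1", "Class2"]},
+    {"location": "doc2.txt", "language": "en-us", "dataset": "Test", "classes": ["Class2"]},
+]
+
+labels = {
+    "projectFileVersion": "2022-05-01",
+    "stringIndexType": "Utf16CodeUnit",
+    "metadata": {
+        "projectKind": "CustomMultiLabelClassification",
+        "storageInputContainerName": "{CONTAINER-NAME}",
+        "projectName": "{PROJECT-NAME}",
+        "multilingual": False,
+        "description": "Project-description",
+        "language": "en-us",
+    },
+    "assets": {
+        "projectKind": "CustomMultiLabelClassification",
+        "classes": [{"category": c} for c in classes],
+        "documents": [
+            {
+                "location": d["location"],
+                "language": d["language"],
+                "dataset": d["dataset"],
+                "classes": [{"category": c} for c in d["classes"]],
+            }
+            for d in documents
+        ],
+    },
+}
+
+# Write the labels file that you can then import into your project.
+with open("labels.json", "w", encoding="utf-8") as f:
+    json.dump(labels, f, indent=2)
+```
+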
+## Next steps
+
+* You can import your labeled data into your project directly. See [How to create a project](../how-to/create-project.md#import-a-custom-text-classification-project) to learn more about importing projects.
+* See the [how-to article](../how-to/tag-data.md) more information about labeling your data. When you're done labeling your data, you can [train your model](../how-to/train-model.md).
ai-services Evaluation Metrics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/language-service/custom-text-classification/concepts/evaluation-metrics.md
+
+ Title: Custom text classification evaluation metrics
+
+description: Learn about evaluation metrics in custom text classification.
++++++ Last updated : 08/08/2022++++
+# Evaluation metrics
+
+Your [dataset is split](../how-to/train-model.md#data-splitting) into two parts: a set for training, and a set for testing. The training set is used to train the model, while the testing set is used as a test for the model after training to calculate its performance and evaluation. The testing set isn't introduced to the model through the training process, to make sure that the model is tested on new data.
+
+Model evaluation is triggered automatically after training is completed successfully. The evaluation process starts by using the trained model to predict user defined classes for documents in the test set, and compares them with the provided data tags (which establishes a baseline of truth). The results are returned so you can review the model's performance. For evaluation, custom text classification uses the following metrics:
+
+* **Precision**: Measures how precise/accurate your model is. It's the ratio between the correctly identified positives (true positives) and all identified positives. The precision metric reveals how many of the predicted classes are correctly labeled.
+
+ `Precision = #True_Positive / (#True_Positive + #False_Positive)`
+
+* **Recall**: Measures the model's ability to predict actual positive classes. It's the ratio between the predicted true positives and what was actually tagged. The recall metric reveals how many of the predicted classes are correct.
+
+ `Recall = #True_Positive / (#True_Positive + #False_Negatives)`
+
+* **F1 score**: The F1 score is a function of Precision and Recall. It's needed when you seek a balance between Precision and Recall.
+
+ `F1 Score = 2 * Precision * Recall / (Precision + Recall)` <br>
+
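+The following Python sketch computes these metrics from true positive (TP), false positive (FP), and false negative (FN) counts; the sample counts match the *action* class in the worked example below.
+
+```python
+def precision(tp: int, fp: int) -> float:
+    return tp / (tp + fp)
+
+def recall(tp: int, fn: int) -> float:
+    return tp / (tp + fn)
+
+def f1_score(p: float, r: float) -> float:
+    return 2 * p * r / (p + r)
+
+# Counts for the *action* class in the example dataset below: TP=1, FP=1, FN=1.
+p, r = precision(1, 1), recall(1, 1)
+print(p, r, f1_score(p, r))  # 0.5 0.5 0.5
+```
+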
+>[!NOTE]
+> Precision, recall and F1 score are calculated for each class separately (*class-level* evaluation) and for the model collectively (*model-level* evaluation).
+
+## Model-level and Class-level evaluation metrics
+
+The definitions of precision, recall, and evaluation are the same for both class-level and model-level evaluations. However, the count of *True Positive*, *False Positive*, and *False Negative* differ as shown in the following example.
+
+The below sections use the following example dataset:
+
+| Document | Actual classes | Predicted classes |
+|--|--|--|
+| 1 | action, comedy | comedy|
+| 2 | action | action |
+| 3 | romance | romance |
+| 4 | romance, comedy | romance |
+| 5 | comedy | action |
+
+### Class-level evaluation for the *action* class
+
+| Key | Count | Explanation |
+|--|--|--|
+| True Positive | 1 | Document 2 was correctly classified as *action*. |
+| False Positive | 1 | Document 5 was mistakenly classified as *action*. |
+| False Negative | 1 | Document 1 was not classified as *action*, though it should have been. |
+
+**Precision** = `#True_Positive / (#True_Positive + #False_Positive) = 1 / (1 + 1) = 0.5`
+
+**Recall** = `#True_Positive / (#True_Positive + #False_Negatives) = 1 / (1 + 1) = 0.5`
+
+**F1 Score** = `2 * Precision * Recall / (Precision + Recall) = (2 * 0.5 * 0.5) / (0.5 + 0.5) = 0.5`
+
+### Class-level evaluation for the *comedy* class
+
+| Key | Count | Explanation |
+|--|--|--|
+| True positive | 1 | Document 1 was correctly classified as *comedy*. |
+| False positive | 0 | No documents were mistakenly classified as *comedy*. |
+| False negative | 2 | Documents 4 and 5 were not classified as *comedy*, though they should have been. |
+
+**Precision** = `#True_Positive / (#True_Positive + #False_Positive) = 1 / (1 + 0) = 1`
+
+**Recall** = `#True_Positive / (#True_Positive + #False_Negatives) = 1 / (1 + 2) = 0.33`
+
+**F1 Score** = `2 * Precision * Recall / (Precision + Recall) = (2 * 1 * 0.33) / (1 + 0.33) = 0.50`
+
+### Model-level evaluation for the collective model
+
+| Key | Count | Explanation |
+|--|--|--|
+| True Positive | 4 | Documents 1, 2, 3 and 4 were given correct classes at prediction. |
+| False Positive | 1 | Document 5 was given a wrong class at prediction. |
+| False Negative | 2 | Documents 1 and 4 were not given all their correct classes at prediction. |
+
+**Precision** = `#True_Positive / (#True_Positive + #False_Positive) = 4 / (4 + 1) = 0.8`
+
+**Recall** = `#True_Positive / (#True_Positive + #False_Negatives) = 4 / (4 + 2) = 0.67`
+
+**F1 Score** = `2 * Precision * Recall / (Precision + Recall) = (2 * 0.8 * 0.67) / (0.8 + 0.67) = 0.73`
+
+> [!NOTE]
+> For single-label classification models, the counts of false negatives and false positives are always equal. Custom single-label classification models always predict one class for each document. If the prediction is not correct, the FP count of the predicted class increases by one and the FN count of the actual class increases by one, so the overall counts of FP and FN for the model will always be equal. This is not the case for multi-label classification, because failing to predict one of the classes of a document is counted as a false negative.
+
+## Interpreting class-level evaluation metrics
+
+So what does it actually mean to have a high precision or a high recall for a certain class?
+
+| Recall | Precision | Interpretation |
+|--|--|--|
+| High | High | This class is perfectly handled by the model. |
+| Low | High | The model can't always predict this class, but when it does, it does so with high confidence. This may be because this class is underrepresented in the dataset, so consider balancing your data distribution.|
+| High | Low | The model predicts this class well, but with low confidence. This may be because this class is overrepresented in the dataset, so consider balancing your data distribution. |
+| Low | Low | This class is poorly handled by the model: it is not usually predicted, and when it is, it is not predicted with high confidence. |
+
+Custom text classification models are expected to experience both false negatives and false positives. You need to consider how each will affect the overall system, and carefully think through scenarios where the model will ignore correct predictions, and recognize incorrect predictions. Depending on your scenario, either *precision* or *recall* could be more suitable for evaluating your model's performance.
+
+For example, if your scenario involves processing technical support tickets, predicting the wrong class could cause it to be forwarded to the wrong department/team. In this example, you should consider making your system more sensitive to false positives, and precision would be a more relevant metric for evaluation.
+
+As another example, if your scenario involves categorizing email as "*important*" or "*spam*", an incorrect prediction could cause you to miss a useful email if it's labeled "*spam*". However, if a spam email is labeled *important* you can disregard it. In this example, you should consider making your system more sensitive to false negatives, and recall would be a more relevant metric for evaluation.
+
+If you want to optimize for general purpose scenarios or when precision and recall are both important, you can utilize the F1 score. Evaluation scores are subjective depending on your scenario and acceptance criteria. There is no absolute metric that works for every scenario.
+
+## Guidance
+
+After you train your model, you'll see guidance and recommendations on how to improve it. It's recommended to have a model that covers all the points in the guidance section.
+
+* Training set has enough data: When a class type has fewer than 15 labeled instances in the training data, it can lead to lower accuracy due to the model not being adequately trained on these cases.
+
+* All class types are present in test set: When the testing data lacks labeled instances for a class type, the model's test performance may become less comprehensive due to untested scenarios.
+
+* Class types are balanced within training and test sets: When sampling bias causes an inaccurate representation of a class type's frequency, it can lead to lower accuracy due to the model expecting that class type to occur too often or too little.
+
+* Class types are evenly distributed between training and test sets: When the mix of class types doesn't match between training and test sets, it can lead to lower testing accuracy due to the model being trained differently from how it's being tested.
+
+* Class types in training set are clearly distinct: When the training data is similar for multiple class types, it can lead to lower accuracy because the class types may be frequently misclassified as each other.
+
+## Confusion matrix
+
+> [!Important]
+> Confusion matrix is not available for multi-label classification projects.
+
+A confusion matrix is an N x N matrix used for model performance evaluation, where N is the number of classes.
+The matrix compares the expected labels with the ones predicted by the model.
+This gives a holistic view of how well the model is performing and what kinds of errors it is making.
+
+You can use the Confusion matrix to identify classes that are too close to each other and often get mistaken (ambiguity). In this case consider merging these classes together. If that isn't possible, consider labeling more documents with both classes to help the model differentiate between them.
+
+All correct predictions are located in the diagonal of the table, so it is easy to visually inspect the table for prediction errors, as they will be represented by values outside the diagonal.
++
+You can calculate the class-level and model-level evaluation metrics from the confusion matrix:
+
+* The values in the diagonal are the *True Positive* values of each class.
+* The sum of the values in the class rows (excluding the diagonal) is the *false positive* of the class.
+* The sum of the values in the class columns (excluding the diagonal) is the *false negative* of the class.
+
+Similarly,
+
+* The *true positive* of the model is the sum of *true positives* for all classes.
+* The *false positive* of the model is the sum of *false positives* for all classes.
+* The *false negative* of the model is the sum of *false negatives* for all classes.
++
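+
+The following Python sketch applies these relationships to a confusion matrix. The matrix values are hypothetical, and the orientation (rows as predicted classes, columns as actual classes) is an assumption chosen to match the row and column statements above.
+
+```python
+import numpy as np
+
+# Hypothetical 3-class confusion matrix: rows = predicted class, columns = actual class.
+cm = np.array([
+    [10, 2, 0],
+    [1, 12, 3],
+    [0, 1, 9],
+])
+
+tp_per_class = np.diag(cm)
+fp_per_class = cm.sum(axis=1) - tp_per_class  # row sums, excluding the diagonal
+fn_per_class = cm.sum(axis=0) - tp_per_class  # column sums, excluding the diagonal
+
+# Model-level counts are the sums over all classes.
+tp, fp, fn = tp_per_class.sum(), fp_per_class.sum(), fn_per_class.sum()
+print(tp / (tp + fp), tp / (tp + fn))  # model-level precision and recall
+```
+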
+## Next steps
+
+* [View a model's performance in Language Studio](../how-to/view-model-evaluation.md)
+* [Train a model](../how-to/train-model.md)
ai-services Fail Over https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/language-service/custom-text-classification/fail-over.md
+
+ Title: Back up and recover your custom text classification models
+
+description: Learn how to save and recover your custom text classification models.
++++++ Last updated : 04/22/2022++++
+# Back up and recover your custom text classification models
+
+When you create a Language resource, you specify a region for it to be created in. From then on, your resource and all of the operations related to it take place in the specified Azure server region. It's rare, but not impossible, to encounter a network issue that hits an entire region. If your solution needs to always be available, you should design it to fail over to another region. This requires two Azure AI Language resources in different regions and the ability to sync custom models across regions.
+
+If your app or business depends on the use of a custom text classification model, we recommend that you create a replica of your project in another supported region, so that if a regional outage occurs, you can access your model in the fail-over region where you replicated your project.
+
+Replicating a project means that you export your project metadata and assets and import them into a new project. This only makes a copy of your project settings and tagged data. You still need to [train](./how-to/train-model.md) and [deploy](how-to/deploy-model.md) the models to be available for use with [prediction APIs](https://aka.ms/ct-runtime-swagger).
+
+In this article, you will learn how to use the export and import APIs to replicate your project from one resource to another in a different supported geographical region, along with guidance on keeping your projects in sync and the changes needed to your runtime consumption.
+
+## Prerequisites
+
+* Two Azure AI Language resources in different Azure regions. [Create a Language resource](./how-to/create-project.md#create-a-language-resource) and connect them to an Azure storage account. It's recommended that you connect both of your Language resources to the same storage account, though this might introduce slightly higher latency when importing your project and training a model.
+
+## Get your resource keys endpoint
+
+Use the following steps to get the keys and endpoint of your primary and secondary resources. These will be used in the following steps.
++
+> [!TIP]
+> Keep a note of keys and endpoints for both primary and secondary resources. Use these values to replace the following placeholders:
+`{PRIMARY-ENDPOINT}`, `{PRIMARY-RESOURCE-KEY}`, `{SECONDARY-ENDPOINT}` and `{SECONDARY-RESOURCE-KEY}`.
+> Also take note of your project name, your model name and your deployment name. Use these values to replace the following placeholders: `{PROJECT-NAME}`, `{MODEL-NAME}` and `{DEPLOYMENT-NAME}`.
+
+## Export your primary project assets
+
+Start by exporting the project assets from the project in your primary resource.
+
+### Submit export job
+
+Replace the placeholders in the following request with your `{PRIMARY-ENDPOINT}` and `{PRIMARY-RESOURCE-KEY}` that you obtained in the first step.
++
+### Get export job status
+
+Replace the placeholders in the following request with your `{PRIMARY-ENDPOINT}` and `{PRIMARY-RESOURCE-KEY}` that you obtained in the first step.
++
+Copy the response body as you will use it as the body for the next import job.
+
+## Import to a new project
+
+Now go ahead and import the exported project assets in your new project in the secondary region so you can replicate it.
+
+### Submit import job
+
+Replace the placeholders in the following request with your `{SECONDARY-ENDPOINT}` and `{SECONDARY-RESOURCE-KEY}` that you obtained in the first step.
++
+### Get import job status
+
+Replace the placeholders in the following request with your `{SECONDARY-ENDPOINT}` and `{SECONDARY-RESOURCE-KEY}` that you obtained in the first step.
+++
+## Train your model
+
+After importing your project, you have only copied the project's metadata and assets. You still need to train your model, which will incur usage on your account.
+
+### Submit training job
+
+Replace the placeholders in the following request with your `{SECONDARY-ENDPOINT}` and `{SECONDARY-RESOURCE-KEY}` that you obtained in the first step.
+++
+### Get Training Status
+
+Replace the placeholders in the following request with your `{SECONDARY-ENDPOINT}` and `{SECONDARY-RESOURCE-KEY}` that you obtained in the first step.
++
+## Deploy your model
+
+This is the step where you make your trained model available for consumption via the [runtime prediction API](https://aka.ms/ct-runtime-swagger).
+
+> [!TIP]
+> Use the same deployment name as your primary project for easier maintenance and minimal changes to your system to handle redirecting your traffic.
+
+### Submit deployment job
+
+Replace the placeholders in the following request with your `{SECONDARY-ENDPOINT}` and `{SECONDARY-RESOURCE-KEY}` that you obtained in the first step.
++
+### Get the deployment status
+
+Replace the placeholders in the following request with your `{SECONDARY-ENDPOINT}` and `{SECONDARY-RESOURCE-KEY}` that you obtained in the first step.
++
+## Changes in calling the runtime
+
+Within your system, at the step where you call the [runtime prediction API](https://aka.ms/ct-runtime-swagger), check the response code returned from the submit task API. If you observe a **consistent** failure in submitting the request, this could indicate an outage in your primary region. A single failure doesn't mean an outage; it may be a transient issue. Retry submitting the job through the secondary resource you have created, as shown in the sketch below. For the second request, use your `{SECONDARY-ENDPOINT}` and `{SECONDARY-RESOURCE-KEY}`. If you have followed the steps above, `{PROJECT-NAME}` and `{DEPLOYMENT-NAME}` are the same, so no changes are required to the request body.
+
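+The following is a minimal sketch of that retry logic using Python and the `requests` package. The job body follows the documented analyze-text job format, but the API version and the example document are assumptions; match them to your own deployment.
+
+```python
+import requests
+
+PRIMARY = {"endpoint": "{PRIMARY-ENDPOINT}", "key": "{PRIMARY-RESOURCE-KEY}"}
+SECONDARY = {"endpoint": "{SECONDARY-ENDPOINT}", "key": "{SECONDARY-RESOURCE-KEY}"}
+
+def submit_job(resource, body):
+    # Submit an analyze-text job. The API version here is an assumption;
+    # use the version documented for your resource.
+    url = f"{resource['endpoint']}/language/analyze-text/jobs?api-version=2022-05-01"
+    return requests.post(url, json=body, headers={"Ocp-Apim-Subscription-Key": resource["key"]})
+
+body = {
+    "displayName": "Classify documents",
+    "analysisInput": {"documents": [{"id": "1", "language": "en-us", "text": "Example text to classify."}]},
+    "tasks": [{
+        "kind": "CustomMultiLabelClassification",
+        "taskName": "classification",
+        "parameters": {"projectName": "{PROJECT-NAME}", "deploymentName": "{DEPLOYMENT-NAME}"}
+    }]
+}
+
+response = submit_job(PRIMARY, body)
+if not response.ok:
+    # A single failure may be transient; in production, only treat consistent
+    # failures as an outage before switching to the secondary resource.
+    response = submit_job(SECONDARY, body)
+print(response.status_code, response.headers.get("operation-location"))
+```
+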
+If you revert to using your secondary resource, you will observe a slight increase in latency because of the difference in the regions where your model is deployed.
+
+## Check if your projects are out of sync
+
+Maintaining the freshness of both projects is an important part of the process. You need to frequently check whether any updates were made to your primary project so that you can move them over to your secondary project. This way, if your primary region fails and you move to the secondary region, you should expect similar model performance since it already contains the latest updates. Setting the frequency of checking whether your projects are in sync is an important choice. We recommend that you do this check daily to guarantee the freshness of data in your secondary model.
+
+### Get project details
+
+Use the following URL to get your project details. One of the keys returned in the body indicates the last modified date of the project.
+Repeat the following step twice, once for your primary project and once for your secondary project, and compare the timestamps returned for both to check whether they are out of sync.
++
+Repeat the same steps for your replicated project using `{SECONDARY-ENDPOINT}` and `{SECONDARY-RESOURCE-KEY}`. Compare the returned `lastModifiedDateTime` from both projects, as in the sketch below. If your primary project was modified more recently than your secondary one, you need to repeat the steps of [exporting](#export-your-primary-project-assets), [importing](#import-to-a-new-project), [training](#train-your-model) and [deploying](#deploy-your-model) your model.
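+
+The following Python sketch compares the two timestamps. The authoring route and API version are assumptions; match them to the project details request shown above.
+
+```python
+import requests
+from datetime import datetime
+
+def last_modified(endpoint, key, project_name):
+    # Get project details and parse the lastModifiedDateTime key from the response.
+    url = f"{endpoint}/language/authoring/analyze-text/projects/{project_name}?api-version=2022-05-01"
+    details = requests.get(url, headers={"Ocp-Apim-Subscription-Key": key}).json()
+    return datetime.fromisoformat(details["lastModifiedDateTime"].replace("Z", "+00:00"))
+
+primary = last_modified("{PRIMARY-ENDPOINT}", "{PRIMARY-RESOURCE-KEY}", "{PROJECT-NAME}")
+secondary = last_modified("{SECONDARY-ENDPOINT}", "{SECONDARY-RESOURCE-KEY}", "{PROJECT-NAME}")
+
+if primary > secondary:
+    print("Projects are out of sync: re-export, import, train, and deploy.")
+```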
++
+## Next steps
+
+In this article, you have learned how to use the export and import APIs to replicate your project to a secondary Language resource in another region. Next, explore the API reference docs to see what else you can do with authoring APIs.
+
+* [Authoring REST API reference](https://aka.ms/ct-authoring-swagger)
+* [Runtime prediction REST API reference](https://aka.ms/ct-runtime-swagger)
ai-services Faq https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/language-service/custom-text-classification/faq.md
+
+ Title: Custom text classification FAQ
+
+description: Learn about Frequently asked questions when using the custom text classification API.
++++++ Last updated : 04/22/2022+++++
+# Frequently asked questions
+
+Find answers to commonly asked questions about concepts and scenarios related to custom text classification in Azure AI Language.
+
+## How do I get started with the service?
+
+See the [quickstart](./quickstart.md) to quickly create your first project, or view [how to create projects](how-to/create-project.md) for more details.
+
+## What are the service limits?
+
+See the [service limits article](service-limits.md).
+
+## Which languages are supported in this feature?
+
+See the [language support](./language-support.md) article.
+
+## How many tagged files are needed?
+
+Generally, diverse and representative [tagged data](how-to/tag-data.md) leads to better results, given that the tagging is done precisely, consistently and completely. There's no set number of tagged classes that will make every model perform well. Performance is highly dependent on your schema and the ambiguity of your schema. Ambiguous classes need more tags. Performance also depends on the quality of your tagging. The recommended number of tagged instances per class is 50.
+
+## Training is taking a long time, is this expected?
+
+The training process can take some time. As a rough estimate, the expected training time for files with a combined length of 12,800,000 chars is 6 hours.
+
+## How do I build my custom model programmatically?
+
+You can use the [REST APIs](https://westus.dev.cognitive.microsoft.com/docs/services/language-authoring-clu-apis-2022-03-01-preview/operations/Projects_TriggerImportProjectJob) to build your custom models. Follow this [quickstart](quickstart.md?pivots=rest-api) to get started with creating a project and creating a model through APIs for examples of how to call the Authoring API.
+
+When you're ready to start [using your model to make predictions](#how-do-i-use-my-trained-model-to-make-predictions), you can use the REST API, or the client library.
+
+## What is the recommended CI/CD process?
+
+You can train multiple models on the same dataset within the same project. After you have trained your model successfully, you can [view its evaluation](how-to/view-model-evaluation.md). You can [deploy and test](quickstart.md#deploy-your-model) your model within [Language studio](https://aka.ms/languageStudio). You can add or remove tags from your data and train a **new** model and test it as well. View [service limits](service-limits.md) to learn about the maximum number of trained models within the same project. When you [tag your data](how-to/tag-data.md#label-your-data), you can determine how your dataset is split into training and testing sets.
+
+## Does a low or high model score guarantee bad or good performance in production?
+
+Model evaluation may not always be comprehensive, depending on:
+* If the **test set** is too small, the good/bad scores are not representative of the model's actual performance. Also, if a specific class is missing or under-represented in your test set, it will affect model performance.
+* **Data diversity**: if your data only covers a few scenarios/examples of the text you expect in production, your model will not be exposed to all possible scenarios and might perform poorly on the scenarios it hasn't been trained on.
+* **Data representation**: if the dataset used to train the model is not representative of the data that will be introduced to the model in production, model performance will be affected greatly.
+
+See the [data selection and schema design](how-to/design-schema.md) article for more information.
+
+## How do I improve model performance?
+
+* View the model's [confusion matrix](how-to/view-model-evaluation.md). If you notice that a certain class is frequently classified incorrectly, consider adding more tagged instances for this class. If you notice that two classes are frequently classified as each other, the schema is ambiguous; consider merging them both into one class for better performance.
+
+* [Examine data distribution](concepts/evaluation-metrics.md). If one of the classes has many more tagged instances than the others, your model may be biased towards this class. Add more data to the other classes, or remove most of the examples from the dominating class.
+
+* Review the [data selection and schema design](how-to/design-schema.md) article for more information.
+
+* [Review your test set](how-to/view-model-evaluation.md) to see predicted and tagged classes side-by-side so you can get a better idea of your model performance, and decide if any changes in the schema or the tags are necessary.
+
+## When I retrain my model I get different results, why is this?
+
+* When you [tag your data](how-to/tag-data.md#label-your-data) you can determine how your dataset is split into training and testing sets. You can also have your data split randomly into training and testing sets. In that case, there is no guarantee that the reflected model evaluation is on the same test set, so results are not comparable.
+
+* If you are retraining the same model, your test set will be the same, but you might notice a slight change in predictions made by the model. This is because the trained model is not robust enough, which is a factor of how representative and distinct your data is, and the quality of your tagged data.
+
+## How do I get predictions in different languages?
+
+First, you need to enable the multilingual option when [creating your project](how-to/create-project.md) or you can enable it later from the project settings page. After you train and deploy your model, you can start querying it in multiple languages. You may get varied results for different languages. To improve the accuracy of any language, add more tagged instances to your project in that language to introduce the trained model to more syntax of that language. See [language support](language-support.md#multi-lingual-option) for more information.
+
+## I trained my model, but I can't test it
+
+You need to [deploy your model](quickstart.md#deploy-your-model) before you can test it.
+
+## How do I use my trained model to make predictions?
+
+After deploying your model, you [call the prediction API](how-to/call-api.md), using either the [REST API](how-to/call-api.md?tabs=rest-api) or [client libraries](how-to/call-api.md?tabs=client).
+
+## Data privacy and security
+
+Custom text classification is a data processor for General Data Protection Regulation (GDPR) purposes. In compliance with GDPR policies, custom text classification users have full control to view, export, or delete any user content either through the [Language Studio](https://aka.ms/languageStudio) or programmatically by using [REST APIs](https://westus.dev.cognitive.microsoft.com/docs/services/language-authoring-clu-apis-2022-03-01-preview/operations/Projects_TriggerImportProjectJob).
+
+Your data is only stored in your Azure Storage account. Custom text classification only has access to read from it during training.
+
+## How do I clone my project?
+
+To clone your project you need to use the export API to export the project assets and then import them into a new project. See [REST APIs](https://westus.dev.cognitive.microsoft.com/docs/services/language-authoring-clu-apis-2022-03-01-preview/operations/Projects_TriggerImportProjectJob) reference for both operations.
+
+## Next steps
+
+* [Custom text classification overview](overview.md)
+* [Quickstart](quickstart.md)
ai-services Glossary https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/language-service/custom-text-classification/glossary.md
+
+ Title: Definitions used in custom text classification
+
+description: Learn about definitions used in custom text classification.
++++++ Last updated : 04/14/2022++++
+# Terms and definitions used in custom text classification
+
+Use this article to learn about some of the definitions and terms you may encounter when using custom text classification.
+
+## Class
+
+A class is a user-defined category that indicates the overall classification of the text. Developers label their data with their classes before they pass it to the model for training.
+
+## F1 score
+The F1 score is a function of Precision and Recall. It's needed when you seek a balance between [precision](#precision) and [recall](#recall).
+
+## Model
+
+A model is an object that's trained to do a certain task, in this case text classification tasks. Models are trained by providing labeled data to learn from so they can later be used for classification tasks.
+
+* **Model training** is the process of teaching your model how to classify documents based on your labeled data.
+* **Model evaluation** is the process that happens right after training to determine how well your model performs.
+* **Deployment** is the process of assigning your model to a deployment to make it available for use via the [prediction API](https://aka.ms/ct-runtime-swagger).
+
+## Precision
+Measures how precise/accurate your model is. It's the ratio between the correctly identified positives (true positives) and all identified positives. The precision metric reveals how many of the predicted classes are correctly labeled.
+
+## Project
+
+A project is a work area for building your custom ML models based on your data. Your project can only be accessed by you and others who have access to the Azure resource being used.
+As a prerequisite to creating a custom text classification project, you have to connect your resource to a storage account with your dataset when you [create a new project](how-to/create-project.md). Your project automatically includes all the `.txt` files available in your container.
+
+Within your project you can do the following:
+
+* **Label your data**: The process of labeling your data so that when you train your model it learns what you want to extract.
+* **Build and train your model**: The core step of your project, where your model starts learning from your labeled data.
+* **View model evaluation details**: Review your model performance to decide if there is room for improvement, or you are satisfied with the results.
+* **Deployment**: After you have reviewed model performance and decided it's fit to be used in your environment, you need to assign it to a deployment to be able to query it. Assigning the model to a deployment makes it available for use through the [prediction API](https://aka.ms/ct-runtime-swagger).
+* **Test model**: After deploying your model, you can use this operation in [Language Studio](https://aka.ms/LanguageStudio) to try out your deployment and see how it would perform in production.
+
+### Project types
+
+Custom text classification supports two types of projects:
+
+* **Single label classification** - you can assign a single class for each document in your dataset. For example, a movie script could only be classified as "Romance" or "Comedy".
+* **Multi label classification** - you can assign multiple classes for each document in your dataset. For example, a movie script could be classified as "Comedy", "Romance", or both.
+
+## Recall
+Measures the model's ability to predict actual positive classes. It's the ratio between the predicted true positives and what was actually tagged. The recall metric reveals how many of the predicted classes are correct.
++
+## Next steps
+
+* [Data and service limits](service-limits.md).
+* [Custom text classification overview](../overview.md).
ai-services Call Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/language-service/custom-text-classification/how-to/call-api.md
+
+ Title: Send a text classification request to your custom model
+description: Learn how to send requests for custom text classification.
+++++++ Last updated : 03/23/2023+
+ms.devlang: csharp, python
+++
+# Send text classification requests to your model
+
+After you've successfully deployed a model, you can query the deployment to classify text based on the model you assigned to the deployment.
+You can query the deployment programmatically through the [Prediction API](https://aka.ms/ct-runtime-api) or through the client libraries (Azure SDK).
+
+## Test deployed model
+
+You can use Language Studio to submit the custom text classification task and visualize the results.
+++++
+## Send a text classification request to your model
+
+> [!TIP]
+> You can [test your model in Language Studio](../quickstart.md?pivots=language-studio#test-your-model) by sending sample text to classify it.
+
+# [Language Studio](#tab/language-studio)
+++
+# [REST API](#tab/rest-api)
+
+First you need to get your resource key and endpoint:
++++
+### Submit a custom text classification task
+++
+### Get task results
++
+# [Client libraries (Azure SDK)](#tab/client-libraries)
+
+First you'll need to get your resource key and endpoint:
++
+3. Download and install the client library package for your language of choice:
+
+ |Language |Package version |
+ |||
+ |.NET | [5.2.0-beta.3](https://www.nuget.org/packages/Azure.AI.TextAnalytics/5.2.0-beta.3) |
+ |Java | [5.2.0-beta.3](https://mvnrepository.com/artifact/com.azure/azure-ai-textanalytics/5.2.0-beta.3) |
+ |JavaScript | [6.0.0-beta.1](https://www.npmjs.com/package/@azure/ai-text-analytics/v/6.0.0-beta.1) |
+ |Python | [5.2.0b4](https://pypi.org/project/azure-ai-textanalytics/5.2.0b4/) |
+
+4. After you've installed the client library, use the following samples on GitHub to start calling the API.
+
+ Single label classification:
+ * [C#](https://github.com/Azure/azure-sdk-for-net/blob/main/sdk/textanalytics/Azure.AI.TextAnalytics/samples/Sample9_SingleLabelClassify.md)
+ * [Java](https://github.com/Azure/azure-sdk-for-java/blob/main/sdk/textanalytics/azure-ai-textanalytics/src/samples/java/com/azure/ai/textanalytics/lro/SingleLabelClassifyDocument.java)
+ * [JavaScript](https://github.com/Azure/azure-sdk-for-js/blob/%40azure/ai-text-analytics_6.0.0-beta.1/sdk/textanalytics/ai-text-analytics/samples/v5/javascript/customText.js)
+ * [Python](https://github.com/Azure/azure-sdk-for-python/blob/main/sdk/textanalytics/azure-ai-textanalytics/samples/sample_single_label_classify.py)
+
+ Multi label classification:
+ * [C#](https://github.com/Azure/azure-sdk-for-net/blob/main/sdk/textanalytics/Azure.AI.TextAnalytics/samples/Sample10_MultiLabelClassify.md)
+ * [Java](https://github.com/Azure/azure-sdk-for-java/blob/main/sdk/textanalytics/azure-ai-textanalytics/src/samples/java/com/azure/ai/textanalytics/lro/MultiLabelClassifyDocument.java)
+ * [JavaScript](https://github.com/Azure/azure-sdk-for-js/blob/%40azure/ai-text-analytics_6.0.0-beta.1/sdk/textanalytics/ai-text-analytics/samples/v5/javascript/customText.js)
+ * [Python](https://github.com/Azure/azure-sdk-for-python/blob/main/sdk/textanalytics/azure-ai-textanalytics/samples/sample_multi_label_classify.py)
+
+5. See the following reference documentation for more information on the client, and return object:
+
+ * [C#](/dotnet/api/azure.ai.textanalytics?view=azure-dotnet-preview&preserve-view=true)
+ * [Java](/java/api/overview/azure/ai-textanalytics-readme?view=azure-java-preview&preserve-view=true)
+ * [JavaScript](/javascript/api/overview/azure/ai-text-analytics-readme?view=azure-node-preview&preserve-view=true)
+ * [Python](/python/api/azure-ai-textanalytics/azure.ai.textanalytics?view=azure-python-preview&preserve-view=true)
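+
+The following is a minimal Python sketch based on the single label classification sample linked above. It assumes the preview `azure-ai-textanalytics` package listed in the table (5.2.0b4 or later) and a deployed single label classification model; the endpoint, key, project, and deployment names are placeholders.
+
+```python
+from azure.ai.textanalytics import TextAnalyticsClient
+from azure.core.credentials import AzureKeyCredential
+
+client = TextAnalyticsClient(
+    endpoint="{ENDPOINT}",
+    credential=AzureKeyCredential("{API-KEY}"),
+)
+
+documents = ["The film was a heartwarming comedy about two old friends."]
+
+# Start the custom single label classification operation and wait for results.
+poller = client.begin_single_label_classify(
+    documents,
+    project_name="{PROJECT-NAME}",
+    deployment_name="{DEPLOYMENT-NAME}",
+)
+
+for result in poller.result():
+    if not result.is_error:
+        classification = result.classifications[0]
+        print(f"{classification.category} ({classification.confidence_score})")
+```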
++
+## Next steps
+
+* [Custom text classification overview](../overview.md)
ai-services Create Project https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/language-service/custom-text-classification/how-to/create-project.md
+
+ Title: How to create custom text classification projects
+
+description: Learn about the steps for using Azure resources with custom text classification.
++++++ Last updated : 06/03/2022++++
+# How to create custom text classification project
+
+Use this article to learn how to set up the requirements for starting with custom text classification and create a project.
+
+## Prerequisites
+
+Before you start using custom text classification, you will need:
+
+* An Azure subscription - [Create one for free](https://azure.microsoft.com/free/cognitive-services).
+
+## Create a Language resource
+
+Before you start using custom text classification, you will need an Azure AI Language resource. It is recommended to create your Language resource and connect a storage account to it in the Azure portal. Creating a resource in the Azure portal lets you create an Azure storage account at the same time, with all of the required permissions pre-configured. You can also read further in the article to learn how to use a pre-existing resource, and configure it to work with custom text classification.
+
+You also will need an Azure storage account where you will upload your `.txt` documents that will be used to train a model to classify text.
+
+> [!NOTE]
+> * You need to have an **owner** role assigned on the resource group to create a Language resource.
+> * If you will connect a pre-existing storage account, you should have an **owner** role assigned to it.
+
+## Create Language resource and connect storage account
++
+> [!Note]
+> You shouldn't move the storage account to a different resource group or subscription once it's linked with the Language resource.
+
+### [Using the Azure portal](#tab/azure-portal)
++
+### [Using Language Studio](#tab/language-studio)
++
+### [Using Azure PowerShell](#tab/azure-powershell)
++++
+> [!NOTE]
+> * The process of connecting a storage account to your Language resource is irreversible, it cannot be disconnected later.
+> * You can only connect your language resource to one storage account.
+
+## Using a pre-existing Language resource
+++
+## Create a custom text classification project
+
+Once your resource and storage container are configured, create a new custom text classification project. A project is a work area for building your custom AI models based on your data. Your project can only be accessed by you and others who have access to the Azure resource being used. If you have labeled data, you can [import it](#import-a-custom-text-classification-project) to get started.
+
+### [Language Studio](#tab/studio)
+++
+### [REST APIs](#tab/apis)
++++
+## Import a custom text classification project
+
+If you have already labeled data, you can use it to get started with the service. Make sure that your labeled data follows the [accepted data formats](../concepts/data-formats.md).
+
+### [Language Studio](#tab/studio)
++
+### [REST APIs](#tab/apis)
++++
+## Get project details
+
+### [Language Studio](#tab/studio)
++
+### [REST APIs](#tab/apis)
++++
+## Delete project
+
+### [Language Studio](#tab/studio)
++
+### [REST APIs](#tab/apis)
++++
+## Next steps
+
+* You should have an idea of the [project schema](design-schema.md) you will use to label your data.
+
+* After your project is created, you can start [labeling your data](tag-data.md), which will inform your text classification model how to interpret text, and is used for training and evaluation.
ai-services Deploy Model https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/language-service/custom-text-classification/how-to/deploy-model.md
+
+ Title: How to deploy a custom text classification model
+
+description: Learn how to deploy a model for custom text classification.
++++++ Last updated : 03/23/2023++++
+# Deploy a model and classify text using the runtime API
+
+Once you are satisfied with how your model performs, it is ready to be deployed and used to classify text. Deploying a model makes it available for use through the [prediction API](https://aka.ms/ct-runtime-swagger).
+
+## Prerequisites
+
+* [A custom text classification project](create-project.md) with a configured Azure storage account.
+* Text data that has [been uploaded](design-schema.md#data-preparation) to your storage account.
+* [Labeled data](tag-data.md) and a successfully [trained model](train-model.md).
+* Reviewed the [model evaluation details](view-model-evaluation.md) to determine how your model is performing.
+
+See the [project development lifecycle](../overview.md#project-development-lifecycle) for more information.
+
+## Deploy model
+
+After you have reviewed your model's performance and decided it can be used in your environment, you need to assign it to a deployment to be able to query it. Assigning the model to a deployment makes it available for use through the [prediction API](https://aka.ms/ct-runtime-swagger). It is recommended to create a deployment named `production` to which you assign the best model you have built so far and use it in your system. You can create another deployment called `staging` to which you can assign the model you're currently working on to be able to test it. You can have a maximum of 10 deployments in your project.
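+
+As an illustration, the following is a minimal Python sketch of the deployment request made through the REST API. The route and API version are assumptions based on the authoring API; use the exact values shown in the REST APIs tab below for your resource.
+
+```python
+import requests
+
+endpoint = "{ENDPOINT}"
+key = "{API-KEY}"
+project = "{PROJECT-NAME}"
+deployment = "production"
+
+# Assign a trained model to the deployment; the call returns 202 and an
+# operation-location header you can poll for the deployment job status.
+url = f"{endpoint}/language/authoring/analyze-text/projects/{project}/deployments/{deployment}?api-version=2022-05-01"
+response = requests.put(
+    url,
+    json={"trainedModelLabel": "{MODEL-NAME}"},
+    headers={"Ocp-Apim-Subscription-Key": key},
+)
+print(response.status_code, response.headers.get("operation-location"))
+```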
+
+# [Language Studio](#tab/language-studio)
+
+
+# [REST APIs](#tab/rest-api)
+
+### Submit deployment job
++
+### Get deployment job status
++++
+## Swap deployments
+
+You can swap deployments after you've tested a model assigned to one deployment and want to assign it to another. Swapping deployments involves taking the model assigned to the first deployment and assigning it to the second deployment, and taking the model assigned to the second deployment and assigning it to the first. You can use this to swap your `production` and `staging` deployments when you want to take the model assigned to `staging` and assign it to `production`.
+
+# [Language Studio](#tab/language-studio)
++
+# [REST APIs](#tab/rest-api)
+++++
+## Delete deployment
+
+# [Language Studio](#tab/language-studio)
++
+# [REST APIs](#tab/rest-api)
++++
+## Assign deployment resources
+
+You can [deploy your project to multiple regions](../../concepts/custom-features/multi-region-deployment.md) by assigning different Language resources that exist in different regions.
+
+# [Language Studio](#tab/language-studio)
++
+# [REST APIs](#tab/rest-api)
++++
+## Unassign deployment resources
+
+When you unassign or remove a deployment resource from a project, you will also delete all the deployments that have been deployed to that resource's region.
+
+# [Language Studio](#tab/language-studio)
++
+# [REST APIs](#tab/rest-api)
++++
+## Next steps
+
+* Use [prediction API to query your model](call-api.md)
ai-services Design Schema https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/language-service/custom-text-classification/how-to/design-schema.md
+
+ Title: How to prepare data and define a custom classification schema
+
+description: Learn about data selection, preparation, and creating a schema for custom text classification projects.
++++++ Last updated : 05/05/2022++++
+# How to prepare data and define a text classification schema
+
+In order to create a custom text classification model, you will need quality data to train it. This article covers how you should select and prepare your data, along with defining a schema. Defining the schema is the first step in the [project development lifecycle](../overview.md#project-development-lifecycle), and it defines the classes that you need your model to classify your text into at runtime.
+
+## Schema design
+
+The schema defines the classes that you need your model to classify your text into at runtime.
+
+* **Review and identify**: Review documents in your dataset to be familiar with their structure and content, then identify how you want to classify your data.
+
+ For example, if you are classifying support tickets, you might need the following classes: *login issue*, *hardware issue*, *connectivity issue*, and *new equipment request*.
+
+* **Avoid ambiguity in classes**: Ambiguity arises when the classes you specify share similar meaning to one another. The more ambiguous your schema is, the more labeled data you may need to differentiate between different classes.
+
+ For example, if you are classifying food recipes, they may be similar to an extent. To differentiate between *dessert recipe* and *main dish recipe*, you may need to label more examples to help your model distinguish between the two classes. Avoiding ambiguity saves time and yields better results.
+
+* **Out of scope data**: When using your model in production, consider adding an *out of scope* class to your schema if you expect documents that don't belong to any of your classes. Then add a few documents to your dataset to be labeled as *out of scope*. The model can learn to recognize irrelevant documents, and predict their labels accordingly.
++
+## Data selection
+
+The quality of data you train your model with affects model performance greatly.
+
+* Use real-life data that reflects your domain's problem space to effectively train your model. You can use synthetic data to accelerate the initial model training process, but it will likely differ from your real-life data and make your model less effective when used.
+
+* Balance your data distribution as much as possible without deviating far from the distribution in real-life.
+
+* Use diverse data whenever possible to avoid overfitting your model. Less diversity in training data may lead to your model learning spurious correlations that may not exist in real-life data.
+
+* Avoid duplicate documents in your data. Duplicate data has a negative effect on the training process, model metrics, and model performance.
+
+* Consider where your data comes from. If you are collecting data from one person, department, or part of your scenario, you are likely missing diversity that may be important for your model to learn about.
+
+> [!NOTE]
+> If your documents are in multiple languages, select the **multiple languages** option during [project creation](../quickstart.md) and set the **language** option to the language of the majority of your documents.
+
+## Data preparation
+
+As a prerequisite for creating a custom text classification project, your training data needs to be uploaded to a blob container in your storage account. You can create and upload training documents from Azure directly, or by using the Azure Storage Explorer tool. Using the Azure Storage Explorer tool allows you to upload more data quickly.
+
+* [Create and upload documents from Azure](../../../../storage/blobs/storage-quickstart-blobs-portal.md#create-a-container)
+* [Create and upload documents using Azure Storage Explorer](../../../../vs-azure-tools-storage-explorer-blobs.md)
+
+You can only use `.txt` documents for custom text classification. If your data is in another format, you can use the [CLUtils parse command](https://github.com/microsoft/CognitiveServicesLanguageUtilities/blob/main/CustomTextAnalytics.CLUtils/Solution/CogSLanguageUtilities.ViewLayer.CliCommands/Commands/ParseCommand/README.md) to change your file format.
+
+You can upload an annotated dataset, or you can upload an unannotated one and [label your data](../how-to/tag-data.md) in Language Studio.
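+
+If you're scripting the upload instead of using the portal or Storage Explorer, the following sketch shows one way to push a folder of `.txt` files to a blob container with the `azure-storage-blob` Python package. The connection string, container name, and local folder are placeholders.
+
+```python
+from pathlib import Path
+from azure.storage.blob import BlobServiceClient  # pip install azure-storage-blob
+
+# Placeholders: your storage account's connection string and the container
+# you plan to connect to your custom text classification project.
+CONNECTION_STRING = "<your-storage-connection-string>"
+CONTAINER_NAME = "<your-container-name>"
+
+service = BlobServiceClient.from_connection_string(CONNECTION_STRING)
+container = service.get_container_client(CONTAINER_NAME)
+
+# Upload every .txt document in a local folder to the root of the container.
+for path in Path("training-data").glob("*.txt"):
+    with open(path, "rb") as data:
+        container.upload_blob(name=path.name, data=data, overwrite=True)
+```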
+
+## Test set
+
+When defining the testing set, make sure to include example documents that are not present in the training set. Defining the testing set is an important step in calculating the [model performance](view-model-evaluation.md#model-details). Also, make sure that the testing set includes documents that represent all classes used in your project.
+
+## Next steps
+
+If you haven't already, create a custom text classification project. If it's your first time using custom text classification, consider following the [quickstart](../quickstart.md) to create an example project. You can also see the [project requirements](../how-to/create-project.md) for more details on what you need to create a project.
ai-services Tag Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/language-service/custom-text-classification/how-to/tag-data.md
+
+ Title: How to label your data for custom classification - Azure AI services
+
+description: Learn about how to label your data for use with the custom text classification.
++++++ Last updated : 11/10/2022++++
+# Label text data for training your model
+
+Before training your model, you need to label your documents with the classes you want to categorize them into. Data labeling is a crucial step in the development lifecycle; in this step you can create the classes you want to categorize your data into and label your documents with these classes. This data is used in the next step when training your model, so that your model can learn from the labeled data. If you already have labeled data, you can directly [import](create-project.md) it into your project, but you need to make sure that your data follows the [accepted data format](../concepts/data-formats.md).
+
+Before creating a custom text classification model, you need to have labeled data first. If your data isn't labeled already, you can label it in the [Language Studio](https://aka.ms/languageStudio). Labeled data informs the model how to interpret text, and is used for training and evaluation.
+
+## Prerequisites
+
+Before you can label data, you need:
+
+* [A successfully created project](create-project.md) with a configured Azure blob storage account,
+* Documents containing text data that have [been uploaded](design-schema.md#data-preparation) to your storage account.
+
+See the [project development lifecycle](../overview.md#project-development-lifecycle) for more information.
+
+## Data labeling guidelines
+
+After [preparing your data, designing your schema](design-schema.md) and [creating your project](create-project.md), you will need to label your data. Labeling your data is important so your model knows which documents will be associated with the classes you need. When you label your data in [Language Studio](https://aka.ms/languageStudio) (or import labeled data), these labels will be stored in the JSON file in your storage container that you've connected to this project.
+
+As you label your data, keep in mind:
+
+* In general, more labeled data leads to better results, provided the data is labeled accurately.
+
+* There is no fixed number of labels that can guarantee your model will perform the best. Model performance depends on possible ambiguity in your [schema](design-schema.md) and the quality of your labeled data. Nevertheless, we recommend 50 labeled documents per class.
+
+## Label your data
+
+Use the following steps to label your data:
+
+1. Go to your project page in [Language Studio](https://aka.ms/languageStudio).
+
+2. From the left side menu, select **Data labeling**. You can find a list of all documents in your storage container.
+
+ >[!TIP]
+ > You can use the filters in top menu to view the unlabeled files so that you can start labeling them.
+ > You can also use the filters to view the documents that are labeled with a specific class.
+
+3. Change to a single file view from the left side in the top menu, or select a specific file to start labeling. You can find a list of all `.txt` files available in your project to the left. You can use the **Back** and **Next** buttons at the bottom of the page to navigate through your documents.
+
+ > [!NOTE]
+ > If you enabled multiple languages for your project, you will find a **Language** dropdown in the top menu, which lets you select the language of each document.
++
+4. In the right side pane, select **Add class** to add classes to your project so you can start labeling your data with them.
+
+5. Start labeling your files.
+
+ # [Multi label classification](#tab/multi-classification)
+
+ **Multi label classification**: your file can be labeled with multiple classes. You can do so by selecting all applicable check boxes next to the classes you want to label this document with.
+
+ :::image type="content" source="../media/multiple.png" alt-text="A screenshot showing the multiple label classification tag page." lightbox="../media/multiple.png":::
+
+ # [Single label classification](#tab/single-classification)
+
+ **Single label classification**: your file can only be labeled with one class; you can do so by selecting one of the buttons next to the class you want to label the document with.
+
+ :::image type="content" source="../media/single.png" alt-text="A screenshot showing the single label classification tag page" lightbox="../media/single.png":::
+
+
+
+ You can also use the [auto labeling feature](use-autolabeling.md) to ensure complete labeling.
+
+6. In the right side pane, under the **Labels** pivot, you can find all the classes in your project and the count of labeled instances for each.
+
+7. In the bottom section of the right side pane you can add the current file you are viewing to the training set or the testing set. By default all the documents are added to your training set. Learn more about [training and testing sets](train-model.md#data-splitting) and how they are used for model training and evaluation.
+
+ > [!TIP]
+ > If you are planning on using **Automatic** data splitting, use the default option of assigning all the documents into your training set.
+
+8. Under the **Distribution** pivot you can view the distribution across training and testing sets. You have two options for viewing:
+ * *Total instances*, where you can view the count of all labeled instances of a specific class.
+ * *Documents with at least one label*, where each document is counted if it contains at least one labeled instance of this class.
+
+9. While you're labeling, your changes are synced periodically. If they haven't been saved yet, you'll find a warning at the top of your page. If you want to save manually, select the **Save labels** button at the bottom of the page.
+
+## Remove labels
+
+If you want to remove a label, uncheck the button next to the class.
+
+## Delete classes
+
+To delete a class, select the delete icon next to the class you want to remove. Deleting a class will remove all its labeled instances from your dataset.
+
+## Next steps
+
+After you've labeled your data, you can begin [training a model](train-model.md) that will learn based on your data.
ai-services Train Model https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/language-service/custom-text-classification/how-to/train-model.md
+
+ Title: How to train your custom text classification model - Azure AI services
+
+description: Learn about how to train your model for custom text classification.
++++++ Last updated : 08/08/2022++++
+# How to train a custom text classification model
+
+Training is the process where the model learns from your [labeled data](tag-data.md). After training is completed, you will be able to [view the model's performance](view-model-evaluation.md) to determine if you need to improve your model.
+
+To train a model, start a training job. Only successfully completed jobs create a usable model. Training jobs expire after seven days. After this period, you won't be able to retrieve the job details. If your training job completed successfully and a model was created, it won't be affected by the job expiration. You can only have one training job running at a time, and you can't start other jobs in the same project.
+
+Training times can be anywhere from a few minutes when dealing with a few documents, up to several hours, depending on the dataset size and the complexity of your schema.
+++
+## Prerequisites
+
+Before you train your model, you need:
+
+* [A successfully created project](create-project.md) with a configured Azure blob storage account,
+* Text data that has [been uploaded](design-schema.md#data-preparation) to your storage account.
+* [Labeled data](tag-data.md)
+
+See the [project development lifecycle](../overview.md#project-development-lifecycle) for more information.
+
+## Data splitting
+
+Before you start the training process, labeled documents in your project are divided into a training set and a testing set. Each one of them serves a different function.
+The **training set** is used in training the model; this is the set from which the model learns the class or classes assigned to each document.
+The **testing set** is a blind set that isn't introduced to the model during training, but only during evaluation.
+After the model is trained successfully, it is used to make predictions on the documents in the testing set. Based on these predictions, the model's [evaluation metrics](../concepts/evaluation-metrics.md) are calculated.
+We recommend making sure that all your classes are adequately represented in both the training and testing sets. A local illustration of this kind of split is sketched after the list below.
+
+Custom text classification supports two methods for data splitting:
+
+* **Automatically splitting the testing set from training data**: The system will split your labeled data between the training and testing sets, according to the percentages you choose. The system will attempt to have a representation of all classes in your training set. The recommended percentage split is 80% for training and 20% for testing.
+
+ > [!NOTE]
+ > If you choose the **Automatically splitting the testing set from training data** option, only the data assigned to the training set will be split according to the percentages provided.
+
+* **Use a manual split of training and testing data**: This method enables users to define which labeled documents should belong to which set. This step is only enabled if you have added documents to your testing set during [data labeling](tag-data.md).
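+
+The following sketch is a local illustration of the automatic option's behavior, not a call to the service: it shuffles hypothetical labeled documents per class and reserves roughly 80% for training and 20% for testing, so every class stays represented in both sets.
+
+```python
+import random
+from collections import defaultdict
+
+# Hypothetical labeled documents: (document name, class label) pairs.
+labeled_docs = [("doc1.txt", "Comedy"), ("doc2.txt", "Romance"), ("doc3.txt", "Comedy"),
+                ("doc4.txt", "Romance"), ("doc5.txt", "Comedy")]
+
+def split_per_class(docs, train_ratio=0.8, seed=42):
+    """Split documents into training/testing sets while keeping each class represented."""
+    by_class = defaultdict(list)
+    for name, label in docs:
+        by_class[label].append(name)
+
+    rng = random.Random(seed)
+    train, test = [], []
+    for label, names in by_class.items():
+        rng.shuffle(names)
+        cut = max(1, int(len(names) * train_ratio))  # keep at least one training document per class
+        train.extend(names[:cut])
+        test.extend(names[cut:])
+    return train, test
+
+training_set, testing_set = split_per_class(labeled_docs)
+```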
+
+## Train model
+
+# [Language studio](#tab/Language-studio)
++
+# [REST APIs](#tab/REST-APIs)
+
+### Start training job
++
+### Get training job status
+
+Training could take some time depending on the size of your training data and the complexity of your schema. You can use the following request to keep polling the status of the training job until it is successfully completed.
+
+ [!INCLUDE [get training model status](../includes/rest-api/get-training-status.md)]
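+
+As a rough sketch of that polling loop in Python: this assumes the training request returned a job status URL (for example, via an `operation-location` response header) and that the job payload exposes a `status` field with values such as `succeeded` or `failed`; confirm both against the REST reference.
+
+```python
+import time
+import requests
+
+KEY = "<your-resource-key>"  # placeholder
+
+def wait_for_job(job_status_url: str, poll_seconds: int = 30) -> dict:
+    """Poll an authoring job URL until it reaches a terminal state (assumed field names and values)."""
+    while True:
+        resp = requests.get(job_status_url, headers={"Ocp-Apim-Subscription-Key": KEY})
+        resp.raise_for_status()
+        job = resp.json()
+        status = job.get("status", "").lower()  # assumed field name
+        if status in ("succeeded", "failed", "cancelled"):
+            return job
+        time.sleep(poll_seconds)
+
+# job_status_url would come from the training request's operation-location header.
+# result = wait_for_job(job_status_url)
+```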
+++
+### Cancel training job
+
+# [Language Studio](#tab/language-studio)
++
+# [REST APIs](#tab/rest-api)
++++
+## Next steps
+
+After training is completed, you will be able to [view the model's performance](view-model-evaluation.md) to optionally improve your model if needed. Once you're satisfied with your model, you can deploy it, making it available to use for [classifying text](call-api.md).
ai-services Use Autolabeling https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/language-service/custom-text-classification/how-to/use-autolabeling.md
+
+ Title: How to use autolabeling in custom text classification
+
+description: Learn how to use autolabeling in custom text classification.
+++++++ Last updated : 3/15/2023+++
+# How to use autolabeling for Custom Text Classification
+
+The [labeling process](tag-data.md) is an important part of preparing your dataset. Since this process requires a lot of time and effort, you can use the autolabeling feature to automatically label your documents with the classes you want to categorize them into. You can currently start autolabeling jobs using GPT models, where you may immediately trigger an autolabeling job without any prior model training. This feature can save you the time and effort of manually labeling your documents.
+
+## Prerequisites
+
+Before you can use autolabeling with GPT, you need:
+* A successfully [created project](create-project.md) with a configured Azure blob storage account.
+* Text data that [has been uploaded](design-schema.md#data-preparation) to your storage account.
+* Class names that are meaningful. The GPT models label documents based on the names of the classes you've provided.
+* [Labeled data](tag-data.md) isn't required.
+* An Azure OpenAI [resource and deployment](../../../openai/how-to/create-resource.md).
+++
+## Trigger an autolabeling job
+
+When you trigger an autolabeling job with GPT, you're charged to your Azure OpenAI resource as per your consumption. You're charged based on an estimate of the number of tokens in each document being autolabeled. Refer to the [Azure OpenAI pricing page](https://azure.microsoft.com/pricing/details/cognitive-services/openai-service/) for a detailed breakdown of pricing per token for different models.
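+
+To get a rough sense of the token consumption, and therefore the cost, before starting a job, you can estimate token counts locally. This sketch uses the `tiktoken` package with the `cl100k_base` encoding; treating that encoding as representative of your GPT deployment's tokenizer is an assumption, so use the numbers only as an estimate.
+
+```python
+import tiktoken  # pip install tiktoken
+
+# Assumption: cl100k_base approximates the tokenizer used by your GPT deployment.
+encoding = tiktoken.get_encoding("cl100k_base")
+
+# Hypothetical documents awaiting autolabeling.
+documents = {
+    "ticket-001.txt": "My laptop won't connect to the office VPN since the last update.",
+    "ticket-002.txt": "Requesting a second monitor for the new hire starting Monday.",
+}
+
+total_tokens = 0
+for name, text in documents.items():
+    tokens = len(encoding.encode(text))
+    total_tokens += tokens
+    print(f"{name}: ~{tokens} tokens")
+
+print(f"Estimated total: ~{total_tokens} tokens to be autolabeled")
+```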
+
+1. From the left navigation menu, select **Data labeling**.
+2. Select the **Autolabel** button under the Activity pane to the right of the page.
+
+ :::image type="content" source="../media/trigger-autotag.png" alt-text="A screenshot showing how to trigger an autotag job from the activity pane." lightbox="../media/trigger-autotag.png":::
+
+4. Choose **Autolabel with GPT** and select **Next**.
+
+ :::image type="content" source="../media/choose-models.png" alt-text="A screenshot showing model choice for auto labeling." lightbox="../media/choose-models.png":::
+
+5. Choose your Azure OpenAI resource and deployment. You must [create an Azure OpenAI resource and deploy a model](../../../openai/how-to/create-resource.md) in order to proceed.
+
+ :::image type="content" source="../media/autotag-choose-open-ai.png" alt-text="A screenshot showing how to choose OpenAI resource and deployments" lightbox="../media/autotag-choose-open-ai.png":::
+
+6. Select the classes you want to be included in the autolabeling job. By default, all classes are selected. Having descriptive names for classes, and including examples for each class is recommended to achieve good quality labeling with GPT.
+
+ :::image type="content" source="../media/choose-classes.png" alt-text="A screenshot showing which labels to be included in autotag job." lightbox="../media/choose-classes.png":::
+
+7. Choose the documents you want to be automatically labeled. It's recommended to choose the unlabeled documents from the filter.
+
+ > [!NOTE]
+ > * If a document was automatically labeled, but this label was already user defined, only the user defined label is used.
+ > * You can view the documents by clicking on the document name.
+
+ :::image type="content" source="../media/choose-files.png" alt-text="A screenshot showing which documents to be included in the autotag job." lightbox="../media/choose-files.png":::
+
+8. Select **Start job** to trigger the autolabeling job.
+You should be directed to the autolabeling page displaying the autolabeling jobs initiated. Autolabeling jobs can take anywhere from a few seconds to a few minutes, depending on the number of documents you included.
+
+ :::image type="content" source="../media/review-autotag.png" alt-text="A screenshot showing the review screen for an autotag job." lightbox="../media/review-autotag.png":::
++++
+## Review the auto labeled documents
+
+When the autolabeling job is complete, you can see the output documents in the **Data labeling** page of Language Studio. Select **Review documents with autolabels** to view the documents with the **Auto labeled** filter applied.
++
+Documents that have been automatically classified have suggested labels in the activity pane highlighted in purple. Each suggested label has two selectors (a checkmark and a cancel icon) that allow you to accept or reject the automatic label.
+
+Once a label is accepted, the purple color changes to the default blue one, and the label is included in any further model training, becoming a user-defined label.
+
+After you accept or reject the labels for the autolabeled documents, select **Save labels** to apply the changes.
+
+> [!NOTE]
+> * We recommend validating automatically labeled documents before accepting them.
+> * All labels that were not accepted are deleted when you train your model.
++
+## Next steps
+
+* Learn more about [labeling your data](tag-data.md).
ai-services View Model Evaluation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/language-service/custom-text-classification/how-to/view-model-evaluation.md
+
+ Title: View a custom text classification model evaluation - Azure AI services
+
+description: Learn how to view the evaluation scores for a custom text classification model
++++++ Last updated : 10/12/2022++++
+# View your text classification model's evaluation and details
+
+After your model has finished training, you can view the model performance and see the predicted classes for the documents in the test set.
+
+> [!NOTE]
+> Using the **Automatically split the testing set from training data** option may result in different model evaluation results every time you [train a new model](train-model.md), as the test set is selected randomly from the data. To make sure that the evaluation is calculated on the same test set every time you train a model, make sure to use the **Use a manual split of training and testing data** option when starting a training job and define your **Test** documents when [labeling data](tag-data.md).
+
+## Prerequisites
+
+Before viewing model evaluation you need:
+
+* [A custom text classification project](create-project.md) with a configured Azure blob storage account.
+* Text data that has [been uploaded](design-schema.md#data-preparation) to your storage account.
+* [Labeled data](tag-data.md)
+* A successfully [trained model](train-model.md)
+
+See the [project development lifecycle](../overview.md#project-development-lifecycle) for more information.
+
+## Model details
+
+### [Language studio](#tab/language-studio)
++
+### [REST APIs](#tab/rest-api)
+
+### Single label classification
+
+### Multi label classification
++++
+## Load or export model data
+
+### [Language studio](#tab/Language-studio)
+++
+### [REST APIs](#tab/REST-APIs)
++++
+## Delete model
+
+### [Language studio](#tab/language-studio)
+++
+### [REST APIs](#tab/rest-api)
+++++
+## Next steps
+
+As you review how your model performs, learn about the [evaluation metrics](../concepts/evaluation-metrics.md) that are used. Once you know whether your model performance needs to improve, you can begin improving the model.
ai-services Language Support https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/language-service/custom-text-classification/language-support.md
+
+ Title: Language support in custom text classification
+
+description: Learn about which languages are supported by custom text classification.
++++++ Last updated : 05/06/2022++++
+# Language support for custom text classification
+
+Use this article to learn about the languages currently supported by the custom text classification feature.
+
+## Multi-lingual option
+
+With custom text classification, you can train a model in one language and use it to classify documents in another language. This feature is useful because it helps save time and effort. Instead of building separate projects for every language, you can handle a multi-lingual dataset in one project. Your dataset doesn't have to be entirely in the same language, but you should enable the multi-lingual option for your project while creating it, or later in project settings. If you notice your model performing poorly in certain languages during the evaluation process, consider adding more data in these languages to your training set.
+
+You can train your project entirely with English documents, and query it in French, German, Mandarin, Japanese, Korean, and others. Custom text classification
+makes it easy for you to scale your projects to multiple languages by using multilingual technology to train your models.
+
+Whenever you identify that a particular language is not performing as well as other languages, you can add more documents for that language in your project. In the [data labeling](how-to/tag-data.md) page in Language Studio, you can select the language of the document you're adding. When you introduce more documents for that language to the model, it is introduced to more of the syntax of that language, and learns to predict it better.
+
+You aren't expected to add the same number of documents for every language. You should build the majority of your project in one language, and only add a few documents in languages you observe aren't performing well. If you create a project that is primarily in English, and start testing it in French, German, and Spanish, you might observe that German doesn't perform as well as the other two languages. In that case, consider adding 5% of your original English documents in German, train a new model and test in German again. You should see better results for German queries. The more labeled documents you add, the more likely the results are going to get better.
+
+When you add data in another language, you shouldn't expect it to negatively affect other languages.
+
+## Languages supported by custom text classification
+
+Custom text classification supports `.txt` files in the following languages:
+
+| Language | Language Code |
+| | |
+| Afrikaans | `af` |
+| Amharic | `am` |
+| Arabic | `ar` |
+| Assamese | `as` |
+| Azerbaijani | `az` |
+| Belarusian | `be` |
+| Bulgarian | `bg` |
+| Bengali | `bn` |
+| Breton | `br` |
+| Bosnian | `bs` |
+| Catalan | `ca` |
+| Czech | `cs` |
+| Welsh | `cy` |
+| Danish | `da` |
+| German | `de` |
+| Greek | `el` |
+| English (US) | `en-us` |
+| Esperanto | `eo` |
+| Spanish | `es` |
+| Estonian | `et` |
+| Basque | `eu` |
+| Persian (Farsi) | `fa` |
+| Finnish | `fi` |
+| French | `fr` |
+| Western Frisian | `fy` |
+| Irish | `ga` |
+| Scottish Gaelic | `gd` |
+| Galician | `gl` |
+| Gujarati | `gu` |
+| Hausa | `ha` |
+| Hebrew | `he` |
+| Hindi | `hi` |
+| Croatian | `hr` |
+| Hungarian | `hu` |
+| Armenian | `hy` |
+| Indonesian | `id` |
+| Italian | `it` |
+| Japanese | `ja` |
+| Javanese | `jv` |
+| Georgian | `ka` |
+| Kazakh | `kk` |
+| Khmer | `km` |
+| Kannada | `kn` |
+| Korean | `ko` |
+| Kurdish (Kurmanji) | `ku` |
+| Kyrgyz | `ky` |
+| Latin | `la` |
+| Lao | `lo` |
+| Lithuanian | `lt` |
+| Latvian | `lv` |
+| Malagasy | `mg` |
+| Macedonian | `mk` |
+| Malayalam | `ml` |
+| Mongolian | `mn` |
+| Marathi | `mr` |
+| Malay | `ms` |
+| Burmese | `my` |
+| Nepali | `ne` |
+| Dutch | `nl` |
+| Norwegian (Bokmal) | `nb` |
+| Oriya | `or` |
+| Punjabi | `pa` |
+| Polish | `pl` |
+| Pashto | `ps` |
+| Portuguese (Brazil) | `pt-br` |
+| Portuguese (Portugal) | `pt-pt` |
+| Romanian | `ro` |
+| Russian | `ru` |
+| Sanskrit | `sa` |
+| Sindhi | `sd` |
+| Sinhala | `si` |
+| Slovak | `sk` |
+| Slovenian | `sl` |
+| Somali | `so` |
+| Albanian | `sq` |
+| Serbian | `sr` |
+| Sundanese | `su` |
+| Swedish | `sv` |
+| Swahili | `sw` |
+| Tamil | `ta` |
+| Telugu | `te` |
+| Thai | `th` |
+| Filipino | `tl` |
+| Turkish | `tr` |
+| Uyghur | `ug` |
+| Ukrainian | `uk` |
+| Urdu | `ur` |
+| Uzbek | `uz` |
+| Vietnamese | `vi` |
+| Xhosa | `xh` |
+| Yiddish | `yi` |
+| Chinese (Simplified) | `zh-hans` |
+| Zulu | `zu` |
+
+## Next steps
+
+* [Custom text classification overview](overview.md)
+* [Service limits](service-limits.md)
ai-services Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/language-service/custom-text-classification/overview.md
+
+ Title: Custom text classification - Azure AI services
+
+description: Customize an AI model to classify documents and other content using Azure AI services.
++++++ Last updated : 06/17/2022++++
+# What is custom text classification?
+
+Custom text classification is one of the custom features offered by [Azure AI Language](../overview.md). It is a cloud-based API service that applies machine-learning intelligence to enable you to build custom models for text classification tasks.
+
+Custom text classification enables users to build custom AI models to classify text into custom classes pre-defined by the user. By creating a custom text classification project, developers can iteratively label data, train, evaluate, and improve model performance before making it available for consumption. The quality of the labeled data greatly impacts model performance. To simplify building and customizing your model, the service offers a custom web portal that can be accessed through the [Language studio](https://aka.ms/languageStudio). You can easily get started with the service by following the steps in this [quickstart](quickstart.md).
+
+Custom text classification supports two types of projects:
+
+* **Single label classification** - you can assign a single class for each document in your dataset. For example, a movie script could only be classified as "Romance" or "Comedy".
+* **Multi label classification** - you can assign multiple classes for each document in your dataset. For example, a movie script could be classified as "Comedy" or "Romance" and "Comedy".
+
+This documentation contains the following article types:
+
+* [Quickstarts](quickstart.md) are getting-started instructions to guide you through making requests to the service.
+* [Concepts](concepts/evaluation-metrics.md) provide explanations of the service functionality and features.
+* [How-to guides](how-to/tag-data.md) contain instructions for using the service in more specific or customized ways.
+
+## Example usage scenarios
+
+Custom text classification can be used in multiple scenarios across a variety of industries:
+
+### Automatic emails or ticket triage
+
+Support centers of all types receive a high volume of emails or tickets containing unstructured, freeform text and attachments. Timely review, acknowledgment, and routing to subject matter experts within internal teams is critical. Email triage at this scale requires people to review and route to the right departments, which takes time and resources. Custom text classification can be used to analyze incoming text, and triage and categorize the content to be automatically routed to the relevant departments for further action.
+
+### Knowledge mining to enhance/enrich semantic search
+
+Search is foundational to any app that surfaces text content to users. Common scenarios include catalog or document searches, retail product searches, or knowledge mining for data science. Many enterprises across various industries are seeking to build a rich search experience over private, heterogeneous content, which includes both structured and unstructured documents. As a part of their pipeline, developers can use custom text classification to categorize their text into classes that are relevant to their industry. The predicted classes can be used to enrich the indexing of the file for a more customized search experience.
+
+## Project development lifecycle
+
+Creating a custom text classification project typically involves several different steps.
++
+Follow these steps to get the most out of your model:
+
+1. **Define your schema**: Know your data and identify the [classes](glossary.md#class) you want to differentiate between, to avoid ambiguity.
+
+2. **Label your data**: The quality of data labeling is a key factor in determining model performance. Documents that belong to the same class should always be labeled with the same class. If you have a document that can fall into two classes, use **Multi label classification** projects. Avoid class ambiguity; make sure that your classes are clearly separable from each other, especially with single label classification projects.
+
+3. **Train the model**: Your model starts learning from your labeled data.
+
+4. **View the model's performance**: View the evaluation details for your model to determine how well it performs when introduced to new data.
+
+5. **Deploy the model**: Deploying a model makes it available for use via the [Analyze API](https://aka.ms/ct-runtime-swagger).
+
+6. **Classify text**: Use your custom model for custom text classification tasks.
+
+## Reference documentation and code samples
+
+As you use custom text classification, see the following reference documentation and samples for Azure AI Language:
+
+|Development option / language |Reference documentation |Samples |
+||||
+|REST APIs (Authoring) | [REST API documentation](https://aka.ms/ct-authoring-swagger) | |
+|REST APIs (Runtime) | [REST API documentation](https://aka.ms/ct-runtime-swagger) | |
+|C# (Runtime) | [C# documentation](/dotnet/api/azure.ai.textanalytics?view=azure-dotnet-preview&preserve-view=true) | [C# samples - Single label classification](https://github.com/Azure/azure-sdk-for-net/blob/main/sdk/textanalytics/Azure.AI.TextAnalytics/samples/Sample9_SingleLabelClassify.md) [C# samples - Multi label classification](https://github.com/Azure/azure-sdk-for-net/blob/main/sdk/textanalytics/Azure.AI.TextAnalytics/samples/Sample10_MultiLabelClassify.md) |
+| Java (Runtime) | [Java documentation](/java/api/overview/azure/ai-textanalytics-readme?view=azure-java-preview&preserve-view=true) | [Java Samples - Single label classification](https://github.com/Azure/azure-sdk-for-java/blob/main/sdk/textanalytics/azure-ai-textanalytics/src/samples/java/com/azure/ai/textanalytics/lro/SingleLabelClassifyDocument.java) [Java Samples - Multi label classification](https://github.com/Azure/azure-sdk-for-java/blob/main/sdk/textanalytics/azure-ai-textanalytics/src/samples/java/com/azure/ai/textanalytics/lro/MultiLabelClassifyDocument.java) |
+|JavaScript (Runtime) | [JavaScript documentation](/javascript/api/overview/azure/ai-text-analytics-readme?view=azure-node-preview&preserve-view=true) | [JavaScript samples - Single label classification](https://github.com/Azure/azure-sdk-for-js/blob/%40azure/ai-text-analytics_6.0.0-beta.1/sdk/textanalytics/ai-text-analytics/samples/v5/javascript/customText.js) [JavaScript samples - Multi label classification](https://github.com/Azure/azure-sdk-for-js/blob/%40azure/ai-text-analytics_6.0.0-beta.1/sdk/textanalytics/ai-text-analytics/samples/v5/javascript/customText.js) |
+|Python (Runtime)| [Python documentation](/python/api/azure-ai-textanalytics/azure.ai.textanalytics?view=azure-python-preview&preserve-view=true) | [Python samples - Single label classification](https://github.com/Azure/azure-sdk-for-python/blob/main/sdk/textanalytics/azure-ai-textanalytics/samples/sample_single_label_classify.py) [Python samples - Multi label classification](https://github.com/Azure/azure-sdk-for-python/blob/main/sdk/textanalytics/azure-ai-textanalytics/samples/sample_multi_label_classify.py) |
+
+## Responsible AI
+
+An AI system includes not only the technology, but also the people who will use it, the people who will be affected by it, and the environment in which it is deployed. Read the [transparency note for custom text classification](/legal/cognitive-services/language-service/ctc-transparency-note?context=/azure/ai-services/language-service/context/context) to learn about responsible AI use and deployment in your systems. You can also see the following articles for more information:
++
+## Next steps
+
+* Use the [quickstart article](quickstart.md) to start using custom text classification.
+
+* As you go through the project development lifecycle, review the [glossary](glossary.md) to learn more about the terms used throughout the documentation for this feature.
+
+* Remember to view the [service limits](service-limits.md) for information such as regional availability.
ai-services Quickstart https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/language-service/custom-text-classification/quickstart.md
+
+ Title: Quickstart - Custom text classification
+
+description: Quickly start building an AI model to identify and apply labels (classify) unstructured text.
++++++ Last updated : 01/25/2023++
+zone_pivot_groups: usage-custom-language-features
++
+# Quickstart: Custom text classification
+
+Use this article to get started with creating a custom text classification project where you can train custom models for text classification. A model is artificial intelligence software that's trained to do a certain task. For this system, the models classify text, and are trained by learning from tagged data.
+
+Custom text classification supports two types of projects:
+
+* **Single label classification** - you can assign a single class for each document in your dataset. For example, a movie script could only be classified as "Romance" or "Comedy".
+* **Multi label classification** - you can assign multiple classes for each document in your dataset. For example, a movie script could be classified as "Comedy" or "Romance" and "Comedy".
+
+In this quickstart, you can use the provided sample datasets to build a multi label classification project, where you can classify movie scripts into one or more categories, or you can use the single label classification dataset, where you can classify abstracts of scientific papers into one of the defined domains.
++++++++
+## Next steps
+
+After you've created a custom text classification model, you can:
+* [Use the runtime API to classify text](how-to/call-api.md)
+
+When you start to create your own custom text classification projects, use the how-to articles to learn more about developing your model in greater detail:
+
+* [Data selection and schema design](how-to/design-schema.md)
+* [Tag data](how-to/tag-data.md)
+* [Train a model](how-to/train-model.md)
+* [View model evaluation](how-to/view-model-evaluation.md)
ai-services Service Limits https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/language-service/custom-text-classification/service-limits.md
+
+ Title: Custom text classification limits
+
+description: Learn about the data and rate limits when using custom text classification.
+++ Last updated : 01/25/2022+++++++
+# Custom text classification limits
+
+Use this article to learn about the data and service limits when using custom text classification.
+
+## Language resource limits
+
+* Your Language resource has to be created in one of the supported regions and pricing tiers listed below.
+
+* You can only connect 1 storage account per resource. This process is irreversible. If you connect a storage account to your resource, you cannot unlink it later. Learn more about [connecting a storage account](how-to/create-project.md#create-language-resource-and-connect-storage-account)
+
+* You can have up to 500 projects per resource.
+
+* Project names have to be unique within the same resource across all custom features.
+
+## Pricing tiers
+
+Custom text classification is available with the following pricing tiers:
+
+|Tier|Description|Limit|
+|--|--|--|
+|F0 |Free tier|You are only allowed one F0 tier Language resource per subscription.|
+|S |Paid tier|You can have unlimited Language S tier resources per subscription. |
++
+See [pricing](https://azure.microsoft.com/pricing/details/cognitive-services/language-service/) for more information.
+
+## Regional availability
+
+Custom text classification is only available in some Azure regions. Some regions are available for **both authoring and prediction**, while other regions are **prediction only**. Language resources in authoring regions allow you to create, edit, train, and deploy your projects. Language resources in prediction regions allow you to get [predictions from a deployment](../concepts/custom-features/multi-region-deployment.md).
+
+| Region | Authoring | Prediction |
+|--|--|-|
+| Australia East | ✓ | ✓ |
+| Brazil South | | ✓ |
+| Canada Central | | ✓ |
+| Central India | ✓ | ✓ |
+| Central US | | ✓ |
+| East Asia | | ✓ |
+| East US | ✓ | ✓ |
+| East US 2 | ✓ | ✓ |
+| France Central | | ✓ |
+| Japan East | | ✓ |
+| Japan West | | ✓ |
+| Jio India West | | ✓ |
+| Korea Central | | ✓ |
+| North Central US | | ✓ |
+| North Europe | ✓ | ✓ |
+| Norway East | | ✓ |
+| Qatar Central | | ✓ |
+| South Africa North | | ✓ |
+| South Central US | ✓ | ✓ |
+| Southeast Asia | | ✓ |
+| Sweden Central | | ✓ |
+| Switzerland North | ✓ | ✓ |
+| UAE North | | ✓ |
+| UK South | ✓ | ✓ |
+| West Central US | | ✓ |
+| West Europe | ✓ | ✓ |
+| West US | | ✓ |
+| West US 2 | ✓ | ✓ |
+| West US 3 | ✓ | ✓ |
+
+## API limits
+
+|Item|Request type| Maximum limit|
+|:-|:-|:-|
+|Authoring API|POST|10 per minute|
+|Authoring API|GET|100 per minute|
+|Prediction API|GET/POST|1,000 per minute|
+|Document size|--|125,000 characters. You can send up to 25 documents as long as they collectively do not exceed 125,000 characters|
+
+> [!TIP]
+> If you need to send larger files than the limit allows, you can break the text into smaller chunks of text before sending them to the API. You can use the [chunk command from CLUtils](https://github.com/microsoft/CognitiveServicesLanguageUtilities/blob/main/CustomTextAnalytics.CLUtils/Solution/CogSLanguageUtilities.ViewLayer.CliCommands/Commands/ChunkCommand/README.md) for this process.
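+
+If you'd rather chunk the text yourself instead of using CLUtils, the following sketch splits a long document into pieces under the 125,000-character limit from the table above, preferring paragraph boundaries. The limit constant and splitting strategy are illustrative choices, not part of the service.
+
+```python
+MAX_CHARS = 125_000  # document size limit from the API limits table above
+
+def chunk_text(text: str, max_chars: int = MAX_CHARS) -> list[str]:
+    """Split text into chunks under max_chars, preferring paragraph boundaries."""
+    chunks, current = [], ""
+    for paragraph in text.split("\n\n"):
+        # A single paragraph longer than the limit is split by hard slicing.
+        while len(paragraph) > max_chars:
+            chunks.append(paragraph[:max_chars])
+            paragraph = paragraph[max_chars:]
+        if current and len(current) + len(paragraph) + 2 > max_chars:
+            chunks.append(current)
+            current = paragraph
+        else:
+            current = f"{current}\n\n{paragraph}" if current else paragraph
+    if current:
+        chunks.append(current)
+    return chunks
+```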
+
+## Quota limits
+
+|Pricing tier |Item |Limit |
+| | | |
+|F|Training time| 1 hour per month|
+|S|Training time| Unlimited, Pay as you go |
+|F|Prediction Calls| 5,000 text records per month |
+|S|Prediction Calls| Unlimited, Pay as you go |
+
+## Document limits
+
+* You can only use `.txt` files. If your data is in another format, you can use the [CLUtils parse command](https://github.com/microsoft/CognitiveServicesLanguageUtilities/blob/main/CustomTextAnalytics.CLUtils/Solution/CogSLanguageUtilities.ViewLayer.CliCommands/Commands/ParseCommand/README.md) to open your document and extract the text.
+
+* All files uploaded in your container must contain data. Empty files are not allowed for training.
+
+* All files should be available at the root of your container.
+
+## Data limits
+
+The following limits are observed for custom text classification.
+
+|Item|Lower Limit| Upper Limit |
+| | | |
+|Documents count | 10 | 100,000 |
+|Document length in characters | 1 | 128,000 characters; approximately 28,000 words or 56 pages. |
+|Count of classes | 1 | 200 |
+|Count of trained models per project| 0 | 10 |
+|Count of deployments per project| 0 | 10 |
+
+## Naming limits
+
+| Item | Limits |
+|--|--|
+| Project name | You can only use letters `(a-z, A-Z)`, numbers `(0-9)`, and symbols `_ . -`, with no spaces. Maximum allowed length is 50 characters. |
+| Model name | You can only use letters `(a-z, A-Z)`, numbers `(0-9)` and symbols `_ . -`. Maximum allowed length is 50 characters. |
+| Deployment name | You can only use letters `(a-z, A-Z)`, numbers `(0-9)` and symbols `_ . -`. Maximum allowed length is 50 characters. |
+| Class name| You can only use letters `(a-z, A-Z)`, numbers `(0-9)` and all symbols except ":", `$ & % * ( ) + ~ # / ?`. Maximum allowed length is 50 characters.|
+| Document name | You can only use letters `(a-z, A-Z)`, and numbers `(0-9)` with no spaces. |
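+
+If you create projects programmatically, the limits in this table can be checked locally before you send a request. The following sketch encodes the project, model, and deployment name rules as regular expressions; the class and document name rules can be added in the same way.
+
+```python
+import re
+
+# Patterns derived from the naming limits table above.
+NAME_RULES = {
+    # Project, model, and deployment names: letters, numbers, and _ . - with no spaces, up to 50 characters.
+    "project": re.compile(r"^[A-Za-z0-9_.\-]{1,50}$"),
+    "model": re.compile(r"^[A-Za-z0-9_.\-]{1,50}$"),
+    "deployment": re.compile(r"^[A-Za-z0-9_.\-]{1,50}$"),
+}
+
+def is_valid_name(kind: str, name: str) -> bool:
+    return bool(NAME_RULES[kind].fullmatch(name))
+
+print(is_valid_name("project", "ticket-triage_v1"))  # True
+print(is_valid_name("project", "ticket triage"))     # False: spaces aren't allowed
+```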
+
+## Next steps
+
+* [Custom text classification overview](overview.md)
ai-services Triage Email https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/language-service/custom-text-classification/tutorials/triage-email.md
+
+ Title: Triage incoming emails with Power Automate
+
+description: Learn how to use custom text classification to categorize and triage incoming emails with Power Automate
++++++ Last updated : 01/27/2023+++
+# Tutorial: Triage incoming emails with power automate
+
+In this tutorial you will categorize and triage incoming email using custom text classification. Using this [Power Automate](/power-automate/getting-started) flow, when a new email is received, its contents will have a classification applied, and depending on the result, a message will be sent to a designated channel on [Microsoft Teams](https://www.microsoft.com/microsoft-teams).
++
+## Prerequisites
+
+* Azure subscription - [Create one for free](https://azure.microsoft.com/free/cognitive-services)
+* <a href="https://portal.azure.com/#create/Microsoft.CognitiveServicesTextAnalytics" title="Create a Language resource" target="_blank">A Language resource </a>
+ * A trained [custom text classification](../overview.md) model.
+ * You will need the key and endpoint from your Language resource to authenticate your Power Automate flow.
+* A successfully created and deployed [single text classification custom model](../quickstart.md)
++
+## Create a Power Automate flow
+
+1. [Sign in to power automate](https://make.powerautomate.com/)
+
+2. From the left side menu, select **My flows** and create an **Automated cloud flow**.
+
+ :::image type="content" source="../media/create-flow.png" alt-text="A screenshot of the flow creation screen." lightbox="../media/create-flow.png":::
+
+3. Name your flow `EmailTriage`. Below **Choose your flow's triggers**, search for *email* and select **When a new email arrives**. Then select **Create**.
+
+ :::image type="content" source="../media/email-flow.png" alt-text="A screenshot of the email flow triggers." lightbox="../media/email-flow.png":::
+
+4. Add the right connection to your email account. This connection will be used to access the email content.
+
+5. To add a Language service connector, search for *Azure AI Language*.
+
+ :::image type="content" source="../media/language-connector.png" alt-text="A screenshot of available Azure AI Language service connectors." lightbox="../media/language-connector.png":::
+
+6. Search for *CustomSingleLabelClassification*.
+
+ :::image type="content" source="../media/single-classification.png" alt-text="A screenshot of Classification connector." lightbox="../media/single-classification.png":::
+
+7. Start by adding the right connection to your connector. This connection will be used to access the classification project.
+
+8. In the documents ID field, add **1**.
+
+9. In the documents text field, add **body** from **dynamic content**.
+
+10. Fill in the project name and deployment name of your deployed custom text classification model.
+
+ :::image type="content" source="../media/classification.png" alt-text="A screenshot project details." lightbox="../media/classification.png":::
+
+11. Add a condition to send a Microsoft Teams message to the right team by:
+ 1. Select **results** from **dynamic content**, and add the condition. For this tutorial, we are looking for `Computer_science` related emails. In the **Yes** condition, choose your desired option to notify a team channel. In the **No** condition, you can add additional conditions to perform alternative actions.
+
+ :::image type="content" source="../media/email-triage.png" alt-text="A screenshot of email flow." lightbox="../media/email-triage.png":::
++
+## Next steps
+
+* [Use the Language service with Power Automate](../../tutorials/power-automate.md)
+* [Available Language service connectors](/connectors/cognitiveservicestextanalytics)
ai-services Azure Machine Learning Labeling https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/language-service/custom/azure-machine-learning-labeling.md
+
+ Title: Use Azure Machine Learning labeling in Language Studio
+description: Learn how to label your data in Azure Machine Learning, and import it for use in the Language service.
+++++++ Last updated : 04/17/2023+++
+# Use Azure Machine Learning labeling in Language Studio
+
+Labeling data is an important part of preparing your dataset. Using the labeling experience in Azure Machine Learning, you can experience easier collaboration, more flexibility, and the ability to [outsource labeling tasks](/azure/machine-learning/how-to-outsource-data-labeling) to external labeling vendors from the [Azure Market Place](https://azuremarketplace.microsoft.com/marketplace/consulting-services?search=AzureMLVend). You can use Azure Machine Learning labeling for:
+* [custom text classification](../custom-text-classification/overview.md)
+* [custom named entity recognition](../custom-named-entity-recognition/overview.md)
+
+## Prerequisites
+
+Before you can connect your labeling project to Azure Machine Learning, you need:
+* A successfully created Language Studio project with a configured Azure blob storage account.
+* Text data that has been uploaded to your storage account.
+* At least:
+ * One entity label for custom named entity recognition, or
+ * Two class labels for custom text classification projects.
+* An [Azure Machine Learning workspace](/azure/machine-learning/how-to-manage-workspace) that has been [connected](/azure/machine-learning/v1/how-to-connect-data-ui?tabs=credential#create-datastores) to the same Azure blob storage account that your Language Studio account is using.
+
+## Limitations
+
+* Connecting your labeling project to Azure Machine Learning is a one-to-one connection. If you disconnect your project, you will not be able to connect your project back to the same Azure Machine Learning project
+* You can't label in the Language Studio and Azure Machine Learning simultaneously. The labeling experience is enabled in one studio at a time.
+* The testing and training files in the labeling experience you switch away from will be ignored when training your model.
+* Only Azure Machine Learning's JSONL file format can be imported into Language Studio.
+* Projects with the multi-lingual option enabled can't be connected to Azure Machine Learning, and not all languages are supported.
+ * Language support is provided by the Azure Machine Learning [TextDNNLanguages Class](/python/api/azureml-automl-core/azureml.automl.core.constants.textdnnlanguages?view=azure-ml-py&preserve-view=true&branch=main#azureml-automl-core-constants-textdnnlanguages-supported).
+* The Azure Machine Learning workspace you're connecting to must be assigned to the same Azure Storage account that Language Studio is connected to. Be sure that the Azure Machine Learning workspace has the storage blob data reader permission on the storage account. The workspace needs to have been linked to the storage account during the creation process in the [Azure portal](https://ms.portal.azure.com/#create/Microsoft.MachineLearningServices).
+* Switching between the two labeling experiences isn't instantaneous. It may take time to successfully complete the operation.
+
+## Import your Azure Machine Learning labels into Language Studio
+
+Language Studio supports the JSONL file format used by Azure Machine Learning. If you've been labeling data on Azure Machine Learning, you can import your up-to-date labels into a new custom project to utilize the features of both studios.
+
+1. Start by creating a new project for custom text classification or custom named entity recognition.
+
+ 1. In the **Create a project** screen that appears, follow the prompts to connect your storage account, and enter the basic information about your project. Be sure that the Azure resource you're using doesn't have another storage account already connected.
+
+ 1. In the **Choose container** section, choose the option indicating that you already have a correctly formatted file. Then select your most recent Azure Machine Learning labels file.
+
+ :::image type="content" source="./media/select-label-file.png" alt-text="A screenshot showing the selection for a label file in Language Studio." lightbox="./media/select-label-file.png":::
+
+## Connect to Azure Machine Learning
+
+Before you connect to Azure Machine Learning, you need an Azure Machine Learning account with a pricing plan that can accommodate the compute needs of your project. See the [prerequisites section](#prerequisites) to make sure that you have successfully completed all the requirements to start connecting your Language Studio project to Azure Machine Learning.
+
+1. Use the [Azure portal](https://portal.azure.com/) to navigate to the Azure Blob Storage account connected to your language resource.
+2. Ensure that the *Storage Blob Data Contributor* role is assigned to your AML workspace within the role assignments for your Azure Blob Storage account.
+3. Navigate to your project in [Language Studio](https://language.azure.com/). From the left navigation menu of your project, select **Data labeling**.
+4. Select **use Azure Machine Learning to label** in either the **Data labeling** description, or under the **Activity pane**.
+
+ :::image type="content" source="./media/azure-machine-learning-selection.png" alt-text="A screenshot showing the location of the Azure Machine Learning link." lightbox="./media/azure-machine-learning-selection.png":::
+
+1. Select **Connect to Azure Machine Learning** to start the connection process.
+
+ :::image type="content" source="./media/activity-pane.png" alt-text="A screenshot showing the Azure Machine Learning connection button in Language Studio." lightbox="./media/activity-pane.png":::
+
+1. In the window that appears, follow the prompts. Select the Azure Machine Learning workspace you've created previously under the same Azure subscription. Enter a name for the new Azure Machine Learning project that will be created to enable labeling in Azure Machine Learning.
+
+ >[!TIP]
+ > Make sure your workspace is linked to the same Azure Blob Storage account and Language resource before continuing. You can create a new workspace and link to your storage account through the [Azure portal](https://ms.portal.azure.com/#create/Microsoft.MachineLearningServices). Ensure that the storage account is properly linked to the workspace.
+
+1. (Optional) Turn on the vendor labeling toggle to use labeling vendor companies. Before choosing the vendor labeling companies, contact the vendor labeling companies on the [Azure Marketplace](https://azuremarketplace.microsoft.com/marketplace/consulting-services?search=AzureMLVend) to finalize a contract with them. For more information about working with vendor companies, see [How to outsource data labeling](/azure/machine-learning/how-to-outsource-data-labeling).
+
+ You can also leave labeling instructions for the human labelers that will help you in the labeling process. These instructions can help them understand the task by leaving clear definitions of the labels and including examples for better results.
+
+1. Review the settings for your connection to Azure Machine Learning and make changes if needed.
+
+ > [!IMPORTANT]
+ > Finalizing the connection **is permanent**. Attempting to disconnect your established connection at any point in time will permanently disable your Language Studio project from connecting to the same Azure Machine Learning project.
+
+1. After the connection has been initiated, your ability to label data in Language Studio will be disabled for a few minutes to prepare the new connection.
+
+## Switch to labeling with Azure Machine Learning from Language Studio
+
+Once the connection has been established, you can switch to Azure Machine Learning through the **Activity pane** in Language Studio at any time.
++
+When you switch, your ability to label data in Language Studio will be disabled, and you will be able to label data in Azure Machine Learning. You can switch back to labeling in Language Studio at any time through Azure Machine Learning.
+
+For information on how to label the text, see [Azure Machine Learning how to label](/azure/machine-learning/how-to-label-data#label-text). For information about managing and tracking the text labeling project, see [Azure Machine Learning set up and manage a text labeling project](/azure/machine-learning/how-to-create-text-labeling-projects).
+
+## Train your model using labels from Azure Machine Learning
+
+When you switch to labeling using Azure Machine Learning, you can still train, evaluate, and deploy your model in Language Studio. To train your model using updated labels from Azure Machine Learning:
+
+1. Select **Training jobs** from the navigation menu on the left of the Language studio screen for your project.
+
+1. Select **Import latest labels from Azure Machine Learning** from the **Choose label origin** section in the training page. This synchronizes the labels from Azure Machine Learning before starting the training job.
+
+ :::image type="content" source="./media/azure-machine-learning-label-origin.png" alt-text="A screenshot showing the selector for using labels from Azure Machine Learning." lightbox="./media/azure-machine-learning-label-origin.png":::
+
+## Switch to labeling with Language Studio from Azure Machine Learning
+
+After you've switched to labeling with Azure Machine Learning, you can switch back to labeling with your Language Studio project at any time.
+
+> [!NOTE]
+> * Only users with the [correct roles](/azure/machine-learning/how-to-add-users) in Azure Machine Learning have the ability to switch labeling.
+> * When you switch to using Language Studio, labeling on Azure Machine Learning will be disabled.
+
+To switch back to labeling with Language Studio:
+
+1. Navigate to your project in Azure Machine Learning and select **Data labeling** from the left navigation menu.
+1. Select the **Language Studio** tab and select **Switch to Language Studio**.
+
+ :::image type="content" source="./media/azure-machine-learning-studio.png" alt-text="A screenshot showing the selector for using labels from Language Studio." lightbox="./media/azure-machine-learning-studio.png":::
+
+1. The process takes a few minutes to complete, and your ability to label in Azure Machine Learning is disabled until labeling is switched back from Language Studio.
+
+## Disconnecting from Azure Machine Learning
+
+Disconnecting your project from Azure Machine Learning is permanent and can't be undone. You will no longer be able to access your labels in Azure Machine Learning, and you won't be able to reconnect the Azure Machine Learning project to any Language Studio project in the future. To disconnect from Azure Machine Learning:
+
+1. Ensure that any updated labels you want to maintain are synchronized with Azure Machine Learning by switching the labeling experience back to the Language Studio.
+1. Select **Project settings** from the navigation menu on the left in Language Studio.
+1. Select the **Disconnect from Azure Machine Learning** button from the **Manage Azure Machine Learning connections** section.
+
+## Next steps
+Learn more about labeling your data for [Custom Text Classification](../custom-text-classification/how-to/tag-data.md) and [Custom Named Entity Recognition](../custom-named-entity-recognition/how-to/tag-data.md).
+
ai-services Call Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/language-service/entity-linking/how-to/call-api.md
+
+ Title: How to call the entity linking API
+
+description: Learn how to identify and link entities found in text with the entity linking API.
++++++ Last updated : 11/02/2021++++
+# How to use entity linking
+
+The entity linking feature can be used to identify and disambiguate the identity of an entity found in text (for example, determining whether an occurrence of the word "*Mars*" refers to the planet, or to the Roman god of war). It will return the entities in the text with links to [Wikipedia](https://www.wikipedia.org/) as a knowledge base.
++
+## Development options
++
+## Determine how to process the data (optional)
+
+### Specify the entity linking model
+
+By default, entity linking will use the latest available AI model on your text. You can also configure your API requests to use a specific [model version](../../concepts/model-lifecycle.md).
+
+### Input languages
+
+When you submit documents to be processed by entity linking, you can specify which of [the supported languages](../language-support.md) they're written in. If you don't specify a language, entity linking will default to English. Due to [multilingual and emoji support](../../concepts/multilingual-emoji-support.md), the response may contain text offsets.
+
+## Submitting data
+
+Entity linking produces a higher-quality result when you give it smaller amounts of text to work on. This is the opposite of some other features, such as key phrase extraction, which performs better on larger blocks of text. To get the best results from both operations, consider restructuring the inputs accordingly.
+
+To send an API request, you will need a Language resource endpoint and key.
+
+> [!NOTE]
+> You can find the key and endpoint for your Language resource on the Azure portal. They will be located on the resource's **Key and endpoint** page, under **resource management**.
+
+Analysis is performed upon receipt of the request. Using entity linking synchronously is stateless. No data is stored in your account, and results are returned immediately in the response.
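+
+The following snippet is a minimal sketch (not an official sample) of a synchronous request made with Python's `requests` library. The `/text/analytics/v3.1/entities/linking` route, the placeholder endpoint and key, and the response fields shown are assumptions for illustration; see the [quickstart article](../quickstart.md) for the supported client libraries and current API versions.
+
+```python
+# Minimal sketch of a synchronous entity linking request (assumed v3.1 REST route).
+import requests
+
+endpoint = "https://<your-custom-subdomain>.cognitiveservices.azure.com"  # your Language resource endpoint
+key = "YOUR_API_KEY_HERE"                                                 # your Language resource key
+
+body = {
+    "documents": [
+        {"id": "1", "language": "en", "text": "Mars is the fourth planet from the Sun."}
+    ]
+}
+
+response = requests.post(
+    f"{endpoint}/text/analytics/v3.1/entities/linking",
+    headers={"Ocp-Apim-Subscription-Key": key},
+    json=body,
+)
+response.raise_for_status()
+
+# Print each linked entity and its Wikipedia URL (assumed response fields).
+for document in response.json()["documents"]:
+    for entity in document["entities"]:
+        print(entity["name"], entity["url"])
+```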
++
+### Getting entity linking results
+
+You can stream the results to an application, or save the output to a file on the local system.
+
+## Service and data limits
++
+## See also
+
+* [Entity linking overview](../overview.md)
ai-services Language Support https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/language-service/entity-linking/language-support.md
+
+ Title: Language support for entity linking
+
+description: A list of natural languages supported by the entity linking API
++++++ Last updated : 11/02/2021++++
+# Entity linking language support
+
+> [!NOTE]
+> Languages are added as new model versions are released for specific features. The current model version for Entity Linking is `2020-02-01`.
+
+| Language | Language code | v3 support | Starting with v3 model version: | Notes |
+|:|:-:|:-:|:--:|:--:|
+| English | `en` | ✓ | 2019-10-01 | |
+| Spanish | `es` | ✓ | 2019-10-01 | |
+
+## Next steps
+
+[Entity linking overview](overview.md)
ai-services Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/language-service/entity-linking/overview.md
+
+ Title: What is entity linking in Azure AI Language?
+
+description: An overview of entity linking in Azure AI services, which helps you extract entities from text, and provides links to an online knowledge base.
++++++ Last updated : 01/10/2023++++
+# What is entity linking in Azure AI Language?
+
+Entity linking is one of the features offered by [Azure AI Language](../overview.md), a collection of machine learning and AI algorithms in the cloud for developing intelligent applications that involve written language. Entity linking identifies and disambiguates the identity of entities found in text. For example, in the sentence "*We went to Seattle last week.*", the word "*Seattle*" would be identified, with a link to more information on Wikipedia.
+
+This documentation contains the following types of articles:
+
+* [**Quickstarts**](quickstart.md) are getting-started instructions to guide you through making requests to the service.
+* [**How-to guides**](how-to/call-api.md) contain instructions for using the service in more specific ways.
+
+## Get started with entity linking
+++
+## Responsible AI
+
+An AI system includes not only the technology, but also the people who will use it, the people who will be affected by it, and the environment in which it is deployed. Read the [transparency note for entity linking](/legal/cognitive-services/language-service/transparency-note?context=/azure/ai-services/language-service/context/context) to learn about responsible AI use and deployment in your systems. You can also see the following articles for more information:
++
+## Next steps
+
+There are two ways to get started using the entity linking feature:
+* [Language Studio](../language-studio.md), which is a web-based platform that enables you to try several Azure AI Language features without needing to write code.
+* The [quickstart article](quickstart.md) for instructions on making requests to the service using the REST API and client library SDK.
ai-services Quickstart https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/language-service/entity-linking/quickstart.md
+
+ Title: "Quickstart: Entity linking using the client library and REST API"
+
+description: 'Use this quickstart to perform Entity Linking, using C#, Python, Java, JavaScript, and the REST API.'
++++++ Last updated : 02/17/2023+
+ms.devlang: csharp, java, javascript, python
+
+keywords: text mining, entity linking
+zone_pivot_groups: programming-languages-text-analytics
++
+# Quickstart: Entity Linking using the client library and REST API
+++++++++++++++
ai-services Call Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/language-service/key-phrase-extraction/how-to/call-api.md
+
+ Title: How to call the Key Phrase Extraction API
+
+description: How to extract key phrases by using the Key Phrase Extraction API.
++++++ Last updated : 01/10/2023++++
+# How to use key phrase extraction
+
+The key phrase extraction feature can evaluate unstructured text, and for each document, return a list of key phrases.
+
+This feature is useful if you need to quickly identify the main points in a collection of documents. For example, given input text "*The food was delicious and the staff was wonderful*", the service returns the main topics: "*food*" and "*wonderful staff*".
+
+> [!TIP]
+> If you want to start using this feature, you can follow the [quickstart article](../quickstart.md) to get started. You can also make example requests using [Language Studio](../../language-studio.md) without needing to write code.
+
+## Development options
++
+## Determine how to process the data (optional)
+
+### Specify the key phrase extraction model
+
+By default, key phrase extraction will use the latest available AI model on your text. You can also configure your API requests to use a specific [model version](../../concepts/model-lifecycle.md).
+
+### Input languages
+
+When you submit documents to be processed by key phrase extraction, you can specify which of [the supported languages](../language-support.md) they're written in. If you don't specify a language, key phrase extraction will default to English. The API may return offsets in the response to support different [multilingual and emoji encodings](../../concepts/multilingual-emoji-support.md).
+
+## Submitting data
+
+Key phrase extraction works best when you give it larger amounts of text to work on. This is the opposite of sentiment analysis, which performs better on smaller amounts of text. To get the best results from both operations, consider restructuring the inputs accordingly.
+
+To send an API request, you will need your Language resource endpoint and key.
+
+> [!NOTE]
+> You can find the key and endpoint for your Language resource on the Azure portal. They will be located on the resource's **Key and endpoint** page, under **resource management**.
+
+Analysis is performed upon receipt of the request. Using the key phrase extraction feature synchronously is stateless. No data is stored in your account, and results are returned immediately in the response.
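+
+As a minimal sketch (not an official sample), the following Python snippet sends one document to the key phrase endpoint with the `requests` library. The `/text/analytics/v3.1/keyPhrases` route, the placeholder endpoint and key, and the response fields are assumptions for illustration; see the [quickstart article](../quickstart.md) for the supported client libraries and current API versions.
+
+```python
+# Minimal sketch of a synchronous key phrase extraction request (assumed v3.1 REST route).
+import requests
+
+endpoint = "https://<your-custom-subdomain>.cognitiveservices.azure.com"  # your Language resource endpoint
+key = "YOUR_API_KEY_HERE"                                                 # your Language resource key
+
+body = {
+    "documents": [
+        {"id": "1", "language": "en", "text": "The food was delicious and the staff was wonderful."}
+    ]
+}
+
+response = requests.post(
+    f"{endpoint}/text/analytics/v3.1/keyPhrases",
+    headers={"Ocp-Apim-Subscription-Key": key},
+    json=body,
+)
+response.raise_for_status()
+
+# Each returned document contains a keyPhrases list (assumed response shape).
+print(response.json()["documents"][0]["keyPhrases"])
+```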
+++
+## Getting key phrase extraction results
+
+When you receive results from the API, the order of the returned key phrases is determined internally, by the model. You can stream the results to an application, or save the output to a file on the local system.
+
+## Service and data limits
++
+## Next steps
+
+[Key Phrase Extraction overview](../overview.md)
ai-services Use Containers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/language-service/key-phrase-extraction/how-to/use-containers.md
+
+ Title: Use Docker containers for Key Phrase Extraction on-premises
+
+description: Learn how to use Docker containers for Key Phrase Extraction on-premises.
++++++ Last updated : 04/11/2023++
+keywords: on-premises, Docker, container, natural language processing
++
+# Install and run Key Phrase Extraction containers
++
+Containers enable you to host the Key Phrase Extraction API on your own infrastructure. If you have security or data governance requirements that can't be fulfilled by calling Key Phrase Extraction remotely, then containers might be a good option.
++
+> [!NOTE]
+> * The free account is limited to 5,000 text records per month and only the **Free** and **Standard** [pricing tiers](https://azure.microsoft.com/pricing/details/cognitive-services/text-analytics) are valid for containers. For more information on transaction request rates, see [Data and service limits](../../concepts/data-limits.md).
+
+Containers enable you to run the Key Phrase Extraction APIs in your own environment and are great for your specific security and data governance requirements. The Key Phrase Extraction containers provide advanced natural language processing over raw text, and include three main functions: sentiment analysis, Key Phrase Extraction, and language detection.
+
+## Prerequisites
+
+* If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/cognitive-services/).
+* [Docker](https://docs.docker.com/) installed on a host computer. Docker must be configured to allow the containers to connect with and send billing data to Azure.
+ * On Windows, Docker must also be configured to support Linux containers.
+ * You should have a basic understanding of [Docker concepts](https://docs.docker.com/get-started/overview/).
+* A <a href="https://portal.azure.com/#create/Microsoft.CognitiveServicesTextAnalytics" title="Create a Language resource" target="_blank">Language resource </a> with the free (F0) or standard (S) [pricing tier](https://azure.microsoft.com/pricing/details/cognitive-services/text-analytics/).
++
+## Host computer requirements and recommendations
++
+The following table describes the minimum and recommended specifications for the available Key Phrase Extraction containers. Each CPU core must be at least 2.6 gigahertz (GHz) or faster. The allowable Transactions Per Second (TPS) are also listed.
+
+| | Minimum host specs | Recommended host specs | Minimum TPS | Maximum TPS|
+|||-|--|--|
+| **Key Phrase Extraction** | 1 core, 2GB memory | 1 core, 4GB memory |15 | 30|
+
+CPU core and memory correspond to the `--cpus` and `--memory` settings, which are used as part of the `docker run` command.
+
+## Get the container image with `docker pull`
+
+The key phrase extraction container image can be found on the `mcr.microsoft.com` container registry syndicate. It resides within the `azure-cognitive-services/textanalytics/` repository and is named `keyphrase`. The fully qualified container image name is `mcr.microsoft.com/azure-cognitive-services/textanalytics/keyphrase`.
+
+To use the latest version of the container, you can use the `latest` tag. You can also find a full list of [tags on the MCR](https://mcr.microsoft.com/product/azure-cognitive-services/textanalytics/keyphrase/about).
+
+Use the [`docker pull`](https://docs.docker.com/engine/reference/commandline/pull/) command to download a container image from Microsoft Container Registry.
+
+```
+docker pull mcr.microsoft.com/azure-cognitive-services/textanalytics/keyphrase:latest
+```
++
+## Run the container with `docker run`
+
+Once the container is on the host computer, use the [docker run](https://docs.docker.com/engine/reference/commandline/run/) command to run the containers. The container will continue to run until you stop it.
+
+> [!IMPORTANT]
+> * The docker commands in the following sections use the backslash, `\`, as a line continuation character. Replace or remove this based on your host operating system's requirements.
+> * The `Eula`, `Billing`, and `ApiKey` options must be specified to run the container; otherwise, the container won't start. For more information, see [Billing](#billing).
+> * The sentiment analysis and language detection containers use v3 of the API, and are generally available. The Key Phrase Extraction container uses v2 of the API, and is in preview.
+
+To run the *Key Phrase Extraction* container, execute the following `docker run` command. Replace the placeholders below with your own values:
+
+| Placeholder | Value | Format or example |
+|-|-||
+| **{API_KEY}** | The key for your Key Phrase Extraction resource. You can find it on your resource's **Key and endpoint** page, on the Azure portal. |`xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx`|
+| **{ENDPOINT_URI}** | The endpoint for accessing the Key Phrase Extraction API. You can find it on your resource's **Key and endpoint** page, on the Azure portal. | `https://<your-custom-subdomain>.cognitiveservices.azure.com` |
++
+```bash
+docker run --rm -it -p 5000:5000 --memory 4g --cpus 1 \
+mcr.microsoft.com/azure-cognitive-services/textanalytics/keyphrase \
+Eula=accept \
+Billing={ENDPOINT_URI} \
+ApiKey={API_KEY}
+```
+
+This command:
+
+* Runs a *Key Phrase Extraction* container from the container image
+* Allocates one CPU core and 4 gigabytes (GB) of memory
+* Exposes TCP port 5000 and allocates a pseudo-TTY for the container
+* Automatically removes the container after it exits. The container image is still available on the host computer.
++
+## Query the container's prediction endpoint
+
+The container provides REST-based query prediction endpoint APIs.
+
+Use the host, `http://localhost:5000`, for container APIs.
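+
+As a rough sketch, the following Python snippet queries the local container with the `requests` library. It assumes the container exposes the same `keyPhrases` route as the cloud API and that the prediction call itself doesn't need a subscription key header, because billing is handled by the `ApiKey` value passed to `docker run`; adjust the route to match your container's API version.
+
+```python
+# Minimal sketch of querying the local Key Phrase Extraction container (assumed route).
+import requests
+
+body = {
+    "documents": [
+        {"id": "1", "language": "en", "text": "The food was delicious and the staff was wonderful."}
+    ]
+}
+
+response = requests.post("http://localhost:5000/text/analytics/v3.1/keyPhrases", json=body)
+response.raise_for_status()
+print(response.json()["documents"][0]["keyPhrases"])
+```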
+
+<!-- ## Validate container is running -->
+++
+## Run the container disconnected from the internet
++
+## Stop the container
++
+## Troubleshooting
+
+If you run the container with an output [mount](../../concepts/configure-containers.md#mount-settings) and logging enabled, the container generates log files that are helpful to troubleshoot issues that happen while starting or running the container.
++
+## Billing
+
+The Key Phrase Extraction containers send billing information to Azure, using a _Key Phrase Extraction_ resource on your Azure account.
+
+
+## Summary
+
+In this article, you learned concepts and workflow for downloading, installing, and running Key Phrase Extraction containers. In summary:
+
+* Key Phrase Extraction provides Linux containers for Docker.
+* Container images are downloaded from the Microsoft Container Registry (MCR).
+* Container images run in Docker.
+* You can use either the REST API or SDK to call operations in Key Phrase Extraction containers by specifying the host URI of the container.
+* You must specify billing information when instantiating a container.
+
+> [!IMPORTANT]
+> Azure AI containers are not licensed to run without being connected to Azure for metering. Customers need to enable the containers to communicate billing information with the metering service at all times. Azure AI containers do not send customer data (e.g. text that is being analyzed) to Microsoft.
+
+## Next steps
+
+* See [Configure containers](../../concepts/configure-containers.md) for configuration settings.
ai-services Language Support https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/language-service/key-phrase-extraction/language-support.md
+
+ Title: Language support for Key Phrase Extraction
+
+description: Use this article to find the natural languages supported by Key Phrase Extraction.
++++++ Last updated : 07/28/2022++++
+# Language support for Key Phrase Extraction
+
+Use this article to find the natural languages supported by Key Phrase Extraction.
+
+## Supported languages
+
+> [!NOTE]
+> Languages are added as new [model versions](how-to/call-api.md#specify-the-key-phrase-extraction-model) are released for specific features. The current model version for Key Phrase Extraction is `2022-07-01`.
+
+Total supported language codes: 94
+
+| Language | Language code | Starting with model version | Notes |
+|--||-|--|
+| Afrikaans      |     `af`  |                2020-07-01                 |                    |
+| Albanian     |     `sq`  |                2022-10-01                 |                    |
+| Amharic     |     `am`  |                2022-10-01                 |                    |
+| Arabic    |     `ar`  |                2022-10-01                 |                    |
+| Armenian    |     `hy`  |                2022-10-01                 |                    |
+| Assamese    |     `as`  |                2022-10-01                 |                    |
+| Azerbaijani    |     `az`  |                2022-10-01                 |                    |
+| Basque    |     `eu`  |                2022-10-01                 |                    |
+| Belarusian |     `be`  |                2022-10-01                 |                    |
+| Bengali     |     `bn`  |                2022-10-01                 |                    |
+| Bosnian    |     `bs`  |                2022-10-01                 |                    |
+| Breton    |     `br`  |                2022-10-01                 |                    |
+| Bulgarian      |     `bg`  |                2020-07-01                 |                    |
+| Burmese    |     `my`  |                2022-10-01                 |                    |
+| Catalan    |     `ca`  |                2020-07-01                 |                    |
+| Chinese-Simplified    |     `zh-hans` |                2021-06-01                 |                    |
+| Chinese-Traditional |     `zh-hant` |                2022-10-01                 |                    |
+| Croatian | `hr` | 2020-07-01 | |
+| Czech    |     `cs`  |                2022-10-01                 |                    |
+| Danish | `da` | 2019-10-01 | |
+| Dutch                 |     `nl`      |                2019-10-01                 |                    |
+| English               |     `en`      |                2019-10-01                 |                    |
+| Esperanto    |     `eo`  |                2022-10-01                 |                    |
+| Estonian              |     `et`      |                2020-07-01                 |                    |
+| Filipino    |     `fil`  |                2022-10-01                 |                    |
+| Finnish               |     `fi`      |                2019-10-01                 |                    |
+| French                |     `fr`      |                2019-10-01                 |                    |
+| Galician    |     `gl`  |                2022-10-01                 |                    |
+| Georgian    |     `ka`  |                2022-10-01                 |                    |
+| German                |     `de`      |                2019-10-01                 |                    |
+| Greek    |     `el`  |                2020-07-01                 |                    |
+| Gujarati    |     `gu`  |                2022-10-01                 |                    |
+| Hausa      |     `ha`  |                2022-10-01                 |                    |
+| Hebrew    |     `he`  |                2022-10-01                 |                    |
+| Hindi      |     `hi`  |                2022-10-01                 |                    |
+| Hungarian    |     `hu`  |                2020-07-01                 |                    |
+| Indonesian            |     `id`      |                2020-07-01                 |                    |
+| Irish            |     `ga`      |                2022-10-01                 |                    |
+| Italian               |     `it`      |                2019-10-01                 |                    |
+| Japanese              |     `ja`      |                2019-10-01                 |                    |
+| Javanese            |     `jv`      |                2022-10-01                 |                    |
+| Kannada            |     `kn`      |                2022-10-01                 |                    |
+| Kazakh            |     `kk`      |                2022-10-01                 |                    |
+| Khmer            |     `km`      |                2022-10-01                 |                    |
+| Korean                |     `ko`      |                2019-10-01                 |                    |
+| Kurdish (Kurmanji)   |     `ku`      |                2022-10-01                 |                    |
+| Kyrgyz            |     `ky`      |                2022-10-01                 |                    |
+| Lao            |     `lo`      |                2022-10-01                 |                    |
+| Latin            |     `la`      |                2022-10-01                 |                    |
+| Latvian               |     `lv`      |                2020-07-01                 |                    |
+| Lithuanian            |     `lt`      |                2022-10-01                 |                    |
+| Macedonian            |     `mk`      |                2022-10-01                 |                    |
+| Malagasy            |     `mg`      |                2022-10-01                 |                    |
+| Malay            |     `ms`      |                2022-10-01                 |                    |
+| Malayalam            |     `ml`      |                2022-10-01                 |                    |
+| Marathi            |     `mr`      |                2022-10-01                 |                    |
+| Mongolian            |     `mn`      |                2022-10-01                 |                    |
+| Nepali            |     `ne`      |                2022-10-01                 |                    |
+| Norwegian (Bokmål)    |     `no`      |                2020-07-01                 | `nb` also accepted |
+| Oriya            |     `or`      |                2022-10-01                 |                    |
+| Oromo            |     `om`      |                2022-10-01                 |                    |
+| Pashto            |     `ps`      |                2022-10-01                 |                    |
+| Persian (Farsi)       |     `fa`      |                2022-10-01                 |                    |
+| Polish                |     `pl`      |                2019-10-01                 |                    |
+| Portuguese (Brazil)   |    `pt-BR`    |                2019-10-01                 |                    |
+| Portuguese (Portugal) |    `pt-PT`    |                2019-10-01                 | `pt` also accepted |
+| Punjabi            |     `pa`      |                2022-10-01                 |                    |
+| Romanian              |     `ro`      |                2020-07-01                 |                    |
+| Russian               |     `ru`      |                2019-10-01                 |                    |
+| Sanskrit            |     `sa`      |                2022-10-01                 |                    |
+| Scottish Gaelic       |     `gd`      |                2022-10-01                 |                    |
+| Serbian            |     `sr`      |                2022-10-01                 |                    |
+| Sindhi            |     `sd`      |                2022-10-01                 |                    |
+| Sinhala            |     `si`      |                2022-10-01                 |                    |
+| Slovak                |     `sk`      |                2020-07-01                 |                    |
+| Slovenian             |     `sl`      |                2020-07-01                 |                    |
+| Somali            |     `so`      |                2022-10-01                 |                    |
+| Spanish               |     `es`      |                2019-10-01                 |                    |
+| Sundanese            |     `su`      |                2022-10-01                 |                    |
+| Swahili            |     `sw`      |                2022-10-01                 |                    |
+| Swedish               |     `sv`      |                2019-10-01                 |                    |
+| Tamil            |     `ta`      |                2022-10-01                 |                    |
+| Telugu           |     `te`      |                2022-10-01                 |                    |
+| Thai            |     `th`      |                2022-10-01                 |                    |
+| Turkish              |     `tr`      |                2020-07-01                 |                    |
+| Ukrainian           |     `uk`      |                2022-10-01                 |                    |
+| Urdu            |     `ur`      |                2022-10-01                 |                    |
+| Uyghur            |     `ug`      |                2022-10-01                 |                    |
+| Uzbek            |     `uz`      |                2022-10-01                 |                    |
+| Vietnamese            |     `vi`      |                2022-10-01                 |                    |
+| Welsh            |     `cy`      |                2022-10-01                 |                    |
+| Western Frisian       |     `fy`      |                2022-10-01                 |                    |
+| Xhosa            |     `xh`      |                2022-10-01                 |                    |
+| Yiddish            |     `yi`      |                2022-10-01                 |                    |
+
+## Next steps
+
+* [How to call the API](how-to/call-api.md) for more information.
+* [Quickstart: Use the key phrase extraction client library and REST API](quickstart.md)
ai-services Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/language-service/key-phrase-extraction/overview.md
+
+ Title: What is key phrase extraction in Azure AI Language?
+
+description: An overview of key phrase extraction in Azure AI services, which helps you identify main concepts in unstructured text
++++++ Last updated : 01/10/2023++++
+# What is key phrase extraction in Azure AI Language?
+
+Key phrase extraction is one of the features offered by [Azure AI Language](../overview.md), a collection of machine learning and AI algorithms in the cloud for developing intelligent applications that involve written language. Use key phrase extraction to quickly identify the main concepts in text. For example, in the text "*The food was delicious and the staff were wonderful.*", key phrase extraction will return the main topics: "*food*" and "*wonderful staff*".
+
+This documentation contains the following types of articles:
+
+* [**Quickstarts**](quickstart.md) are getting-started instructions to guide you through making requests to the service.
+* [**How-to guides**](how-to/call-api.md) contain instructions for using the service in more specific or customized ways.
+++
+## Get started with key phrase extraction
+++
+## Responsible AI
+
+An AI system includes not only the technology, but also the people who will use it, the people who will be affected by it, and the environment in which it is deployed. Read the [transparency note for key phrase extraction](/legal/cognitive-services/language-service/transparency-note-key-phrase-extraction?context=/azure/ai-services/language-service/context/context) to learn about responsible AI use and deployment in your systems. You can also see the following articles for more information:
++
+## Next steps
+
+There are two ways to get started using the key phrase extraction feature:
+* [Language Studio](../language-studio.md), which is a web-based platform that enables you to try several Azure AI Language features without needing to write code.
+* The [quickstart article](quickstart.md) for instructions on making requests to the service using the REST API and client library SDK.
ai-services Quickstart https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/language-service/key-phrase-extraction/quickstart.md
+
+ Title: "Quickstart: Use the Key Phrase Extraction client library"
+
+description: Use this quickstart to start using the Key Phrase Extraction API.
++++++ Last updated : 02/17/2023+
+ms.devlang: csharp, java, javascript, python
+
+keywords: text mining, key phrase
+zone_pivot_groups: programming-languages-text-analytics
++
+# Quickstart: Using the Key Phrase Extraction client library and REST API
++++++++++++++++
+## Clean up resources
+
+If you want to clean up and remove an Azure AI services subscription, you can delete the resource or resource group. Deleting the resource group also deletes any other resources associated with it.
+
+* [Portal](../../multi-service-resource.md?pivots=azportal#clean-up-resources)
+* [Azure CLI](../../multi-service-resource.md?pivots=azcli#clean-up-resources)
+++
+## Next steps
+
+* [Key phrase extraction overview](overview.md)
ai-services Integrate Power Bi https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/language-service/key-phrase-extraction/tutorials/integrate-power-bi.md
+
+ Title: 'Tutorial: Integrate Power BI with key phrase extraction'
+
+description: Learn how to use the key phrase extraction feature to get text stored in Power BI.
++++++ Last updated : 09/28/2022++++
+# Tutorial: Extract key phrases from text stored in Power BI
+
+Microsoft Power BI Desktop is a free application that lets you connect to, transform, and visualize your data. Key phrase extraction, one of the features of Azure AI Language, provides natural language processing. Given raw unstructured text, it can extract the most important phrases, analyze sentiment, and identify well-known entities such as brands. Together, these tools can help you quickly see what your customers are talking about and how they feel about it.
+
+In this tutorial, you'll learn how to:
+
+> [!div class="checklist"]
+> * Use Power BI Desktop to import and transform data
+> * Create a custom function in Power BI Desktop
+> * Integrate Power BI Desktop with the Key Phrase Extraction feature of Azure AI Language
+> * Use Key Phrase Extraction to get the most important phrases from customer feedback
+> * Create a word cloud from customer feedback
+
+## Prerequisites
+
+- Microsoft Power BI Desktop. [Download at no charge](https://powerbi.microsoft.com/get-started/).
+- A Microsoft Azure account. [Create a free account](https://azure.microsoft.com/free/cognitive-services/) or [sign in](https://portal.azure.com/).
+- A Language resource. If you don't have one, you can [create one](https://portal.azure.com/#create/Microsoft.CognitiveServicesTextAnalytics).
+- The [Language resource key](../../../multi-service-resource.md?pivots=azportal#get-the-keys-for-your-resource) that was generated for you during sign-up.
+- Customer comments. You can [use our example data](https://github.com/Azure-Samples/cognitive-services-sample-data-files/blob/master/language-service/tutorials/comments.csv) or your own data. This tutorial assumes you're using our example data.
+
+## Load customer data
+
+To get started, open Power BI Desktop and load the comma-separated value (CSV) file that you downloaded as part of the [prerequisites](#prerequisites). This file represents a day's worth of hypothetical activity in a fictional small company's support forum.
+
+> [!NOTE]
+> Power BI can use data from a wide variety of web-based sources, such as SQL databases. See the [Power Query documentation](/power-query/connectors/) for more information.
+
+In the main Power BI Desktop window, select the **Home** ribbon. In the **External data** group of the ribbon, open the **Get Data** drop-down menu and select **Text/CSV**.
+
+![The Get Data button](../media/tutorials/power-bi/get-data-button.png)
+
+The Open dialog appears. Navigate to your Downloads folder, or to the folder where you downloaded the CSV file. Select the name of the file, then the **Open** button. The CSV import dialog appears.
+
+![The CSV Import dialog](../media/tutorials/power-bi/csv-import.png)
+
+The CSV import dialog lets you verify that Power BI Desktop has correctly detected the character set, delimiter, header rows, and column types. This information is all correct, so select **Load**.
+
+To see the loaded data, click the **Data View** button on the left edge of the Power BI workspace. A table opens that contains the data, as in Microsoft Excel.
+
+![The initial view of the imported data](../media/tutorials/power-bi/initial-data-view.png)
+
+## Prepare the data
+
+You may need to transform your data in Power BI Desktop before it's ready to be processed by Key Phrase Extraction.
+
+The sample data contains a `subject` column and a `comment` column. With the Merge Columns function in Power BI Desktop, you can extract key phrases from the data in both these columns, rather than just the `comment` column.
+
+In Power BI Desktop, select the **Home** ribbon. In the **External data** group, select **Edit Queries**.
+
+![The External Data group in Home ribbon](../media/tutorials/power-bi/edit-queries.png)
+
+Select `FabrikamComments` in the **Queries** list at the left side of the window if it isn't already selected.
+
+Now select both the `subject` and `comment` columns in the table. You may need to scroll horizontally to see these columns. First click the `subject` column header, then hold down the Control key and click the `comment` column header.
+
+![Selecting fields to be merged](../media/tutorials/power-bi/select-columns.png)
+
+Select the **Transform** ribbon. In the **Text Columns** group of the ribbon, select **Merge Columns**. The Merge Columns dialog appears.
+
+![Merging fields using the Merge Columns dialog](../media/tutorials/power-bi/merge-columns.png)
+
+In the Merge Columns dialog, choose `Tab` as the separator, then select **OK.**
+
+You might also consider filtering out blank messages using the Remove Empty filter, or removing unprintable characters using the Clean transformation. If your data contains a column like the `spamscore` column in the sample file, you can skip "spam" comments using a Number Filter.
+
+## Understand the API
+
+[Key Phrase Extraction](https://westus.dev.cognitive.microsoft.com/docs/services/TextAnalytics-V3-1/operations/KeyPhrases) can process up to a thousand text documents per HTTP request. Power BI prefers to deal with records one at a time, so in this tutorial your calls to the API will include only a single document each. The Key Phrases API requires the following fields for each document being processed.
+
+| Field | Description |
+| - | - |
+| `id` | A unique identifier for this document within the request. The response also contains this field. That way, if you process more than one document, you can easily associate the extracted key phrases with the document they came from. In this tutorial, because you're processing only one document per request, you can hard-code the value of `id` to be the same for each request.|
+| `text` | The text to be processed. The value of this field comes from the `Merged` column you created in the [previous section](#prepare-the-data), which contains the combined subject line and comment text. The Key Phrases API requires this data be no longer than about 5,120 characters.|
+| `language` | The code for the natural language the document is written in. All the messages in the sample data are in English, so you can hard-code the value `en` for this field.|
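+
+For reference, here's a sketch of the single-document request body those fields produce, shown as a Python dictionary for illustration (the custom function in the next section builds the same JSON in Power Query M):
+
+```python
+# Sketch of the request body described in the table above.
+request_body = {
+    "documents": [
+        {
+            "id": "0",         # hard-coded, because each request contains a single document
+            "language": "en",  # hard-coded, because the sample comments are in English
+            "text": "<combined subject line and comment text from the Merged column>",
+        }
+    ]
+}
+```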
+
+## Create a custom function
+
+Now you're ready to create the custom function that will integrate Power BI and Key Phrase Extraction. The function receives the text to be processed as a parameter. It converts data to and from the required JSON format and makes the HTTP request to the Key Phrases API. The function then parses the response from the API and returns a string that contains a comma-separated list of the extracted key phrases.
+
+> [!NOTE]
+> Power BI Desktop custom functions are written in the [Power Query M formula language](/powerquery-m/power-query-m-reference), or just "M" for short. M is a functional programming language based on [F#](/dotnet/fsharp/). You don't need to be a programmer to finish this tutorial, though; the required code is included below.
+
+In Power BI Desktop, make sure you're still in the Query Editor window. If you aren't, select the **Home** ribbon, and in the **External data** group, select **Edit Queries**.
+
+Now, in the **Home** ribbon, in the **New Query** group, open the **New Source** drop-down menu and select **Blank Query**.
+
+A new query, initially named `Query1`, appears in the Queries list. Double-click this entry and name it `KeyPhrases`.
+
+Now, in the **Home** ribbon, in the **Query** group, select **Advanced Editor** to open the Advanced Editor window. Delete the code that's already in that window and paste in the following code.
+
+> [!NOTE]
+> Replace the example endpoint below (containing `<your-custom-subdomain>`) with the endpoint generated for your Language resource. You can find this endpoint by signing in to the [Azure portal](https://azure.microsoft.com/features/azure-portal/), navigating to your resource, and selecting **Key and endpoint**.
++
+```fsharp
+// Returns key phrases from the text in a comma-separated list
+(text) => let
+ apikey = "YOUR_API_KEY_HERE",
+ endpoint = "https://<your-custom-subdomain>.cognitiveservices.azure.com/text/analytics" & "/v3.0/keyPhrases",
+ jsontext = Text.FromBinary(Json.FromValue(Text.Start(Text.Trim(text), 5000))),
+ jsonbody = "{ documents: [ { language: ""en"", id: ""0"", text: " & jsontext & " } ] }",
+ bytesbody = Text.ToBinary(jsonbody),
+ headers = [#"Ocp-Apim-Subscription-Key" = apikey],
+ bytesresp = Web.Contents(endpoint, [Headers=headers, Content=bytesbody]),
+ jsonresp = Json.Document(bytesresp),
+ keyphrases = Text.Lower(Text.Combine(jsonresp[documents]{0}[keyPhrases], ", "))
+in keyphrases
+```
+
+Replace `YOUR_API_KEY_HERE` with your Language resource key. You can also find this key by signing in to the [Azure portal](https://azure.microsoft.com/features/azure-portal/), navigating to your Language resource, and selecting the **Key and endpoint** page. Be sure to leave the quotation marks before and after the key. Then select **Done.**
+
+## Use the custom function
+
+Now you can use the custom function to extract the key phrases from each of the customer comments and store them in a new column in the table.
+
+In Power BI Desktop, in the Query Editor window, switch back to the `FabrikamComments` query. Select the **Add Column** ribbon. In the **General** group, select **Invoke Custom Function**.
+
+![Invoke Custom Function button](../media/tutorials/power-bi/invoke-custom-function-button.png)<br><br>
+
+The Invoke Custom Function dialog appears. In **New column name**, enter `keyphrases`. In **Function query**, select the custom function you created, `KeyPhrases`.
+
+A new field appears in the dialog, **text (optional)**. This field is asking which column we want to use to provide values for the `text` parameter of the Key Phrases API. (Remember that you already hard-coded the values for the `language` and `id` parameters.) Select `Merged` (the column you created [previously](#prepare-the-data) by merging the subject and message fields) from the drop-down menu.
+
+![Invoking a custom function](../media/tutorials/power-bi/invoke-custom-function.png)
+
+Finally, select **OK.**
+
+If everything is ready, Power BI calls your custom function once for each row in the table. It sends the queries to the Key Phrases API and adds a new column to the table to store the results. But before that happens, you may need to specify authentication and privacy settings.
+
+## Authentication and privacy
+
+After you close the Invoke Custom Function dialog, a banner may appear asking you to specify how to connect to the Key Phrases API.
+
+![credentials banner](../media/tutorials/power-bi/credentials-banner.png)
+
+Select **Edit Credentials,** make sure `Anonymous` is selected in the dialog, then select **Connect.**
+
+> [!NOTE]
+> You select `Anonymous` because Key Phrase Extraction authenticates requests using your access key, so Power BI does not need to provide credentials for the HTTP request itself.
+
+> [!div class="mx-imgBorder"]
+> ![setting authentication to anonymous](../media/tutorials/power-bi/access-web-content.png)
+
+If you see the Edit Credentials banner even after choosing anonymous access, you may have forgotten to paste your Language resource key into the code in the `KeyPhrases` [custom function](#create-a-custom-function).
+
+Next, a banner may appear asking you to provide information about your data sources' privacy.
+
+![privacy banner](../media/tutorials/power-bi/privacy-banner.png)
+
+Select **Continue** and choose `Public` for each of the data sources in the dialog. Then select **Save.**
+
+![setting data source privacy](../media/tutorials/power-bi/privacy-dialog.png)
+
+## Create the word cloud
+
+Once you have dealt with any banners that appear, select **Close & Apply** in the Home ribbon to close the Query Editor.
+
+Power BI Desktop takes a moment to make the necessary HTTP requests. For each row in the table, the new `keyphrases` column contains the key phrases detected in the text by the Key Phrases API.
+
+Now you'll use this column to generate a word cloud. To get started, click the **Report** button in the main Power BI Desktop window, to the left of the workspace.
+
+> [!NOTE]
+> Why use extracted key phrases to generate a word cloud, rather than the full text of every comment? The key phrases provide us with the *important* words from our customer comments, not just the *most common* words. Also, word sizing in the resulting cloud isn't skewed by the frequent use of a word in a relatively small number of comments.
+
+If you don't already have the Word Cloud custom visual installed, install it. In the Visualizations panel to the right of the workspace, click the three dots (**...**) and choose **Import From Market**. If the Word Cloud visual isn't among the displayed visualization tools in the list, you can search for "cloud" and click the **Add** button next to the Word Cloud visual. Power BI installs the Word Cloud visual and lets you know that it installed successfully.
+
+![adding a custom visual](../media/tutorials/power-bi/add-custom-visuals.png)<br><br>
+
+First, click the Word Cloud icon in the Visualizations panel.
+
+![Word Cloud icon in visualizations panel](../media/tutorials/power-bi/visualizations-panel.png)
+
+A new report appears in the workspace. Drag the `keyphrases` field from the Fields panel to the Category field in the Visualizations panel. The word cloud appears inside the report.
+
+Now switch to the Format page of the Visualizations panel. In the Stop Words category, turn on **Default Stop Words** to eliminate short, common words like "of" from the cloud. (Because we're visualizing key phrases rather than full comments, they might not contain many stop words to begin with.)
+
+![activating default stop words](../media/tutorials/power-bi/default-stop-words.png)
+
+Down a little further in this panel, turn off **Rotate Text** and **Title**.
+
+![activate focus mode](../media/tutorials/power-bi/word-cloud-focus-mode.png)
+
+Select the Focus Mode tool in the report to get a better look at our word cloud. The tool expands the word cloud to fill the entire workspace, as shown below.
+
+![A Word Cloud](../media/tutorials/power-bi/word-cloud.png)
+
+## Using other features
+
+Azure AI Language also provides sentiment analysis and language detection. The language detection in particular is useful if your customer feedback isn't all in English.
+
+Both of these other APIs are similar to the Key Phrases API. That means you can integrate them with Power BI Desktop using custom functions that are nearly identical to the one you created in this tutorial. Just create a blank query and paste the appropriate code below into the Advanced Editor, as you did earlier. (Don't forget your access key!) Then, as before, use the function to add a new column to the table.
+
+The Sentiment Analysis function below returns a label indicating how positive the sentiment expressed in the text is.
+
+```fsharp
+// Returns the sentiment label of the text, for example, positive, negative or mixed.
+(text) => let
+ apikey = "YOUR_API_KEY_HERE",
+ endpoint = "https://<your-custom-subdomain>.cognitiveservices.azure.com" & "/text/analytics/v3.1/sentiment",
+ jsontext = Text.FromBinary(Json.FromValue(Text.Start(Text.Trim(text), 5000))),
+ jsonbody = "{ documents: [ { language: ""en"", id: ""0"", text: " & jsontext & " } ] }",
+ bytesbody = Text.ToBinary(jsonbody),
+ headers = [#"Ocp-Apim-Subscription-Key" = apikey],
+ bytesresp = Web.Contents(endpoint, [Headers=headers, Content=bytesbody]),
+ jsonresp = Json.Document(bytesresp),
+ sentiment = jsonresp[documents]{0}[sentiment]
+ in sentiment
+```
+
+Here are two versions of a Language Detection function. The first returns the ISO language code (for example, `en` for English), while the second returns the "friendly" name (for example, `English`). You may notice that only the last line of the body differs between the two versions.
+
+```fsharp
+// Returns the two-letter language code (for example, 'en' for English) of the text
+(text) => let
+ apikey = "YOUR_API_KEY_HERE",
+ endpoint = "https://<your-custom-subdomain>.cognitiveservices.azure.com" & "/text/analytics/v3.1/languages",
+ jsontext = Text.FromBinary(Json.FromValue(Text.Start(Text.Trim(text), 5000))),
+ jsonbody = "{ documents: [ { id: ""0"", text: " & jsontext & " } ] }",
+ bytesbody = Text.ToBinary(jsonbody),
+ headers = [#"Ocp-Apim-Subscription-Key" = apikey],
+ bytesresp = Web.Contents(endpoint, [Headers=headers, Content=bytesbody]),
+ jsonresp = Json.Document(bytesresp),
+ language = jsonresp[documents]{0}[detectedLanguage][iso6391Name] in language
+```
+```fsharp
+// Returns the name (for example, 'English') of the language in which the text is written
+(text) => let
+ apikey = "YOUR_API_KEY_HERE",
+ endpoint = "https://<your-custom-subdomain>.cognitiveservices.azure.com" & "/text/analytics/v3.1/languages",
+ jsontext = Text.FromBinary(Json.FromValue(Text.Start(Text.Trim(text), 5000))),
+ jsonbody = "{ documents: [ { id: ""0"", text: " & jsontext & " } ] }",
+ bytesbody = Text.ToBinary(jsonbody),
+ headers = [#"Ocp-Apim-Subscription-Key" = apikey],
+ bytesresp = Web.Contents(endpoint, [Headers=headers, Content=bytesbody]),
+ jsonresp = Json.Document(bytesresp),
+ language = jsonresp[documents]{0}[detectedLanguage][name] in language
+```
+
+Finally, here's a variant of the Key Phrases function already presented that returns the phrases as a list object, rather than as a single string of comma-separated phrases.
+
+> [!NOTE]
+> Returning a single string simplified our word cloud example. A list, on the other hand, is a more flexible format for working with the returned phrases in Power BI. You can manipulate list objects in Power BI Desktop using the Structured Column group in the Query Editor's Transform ribbon.
+
+```fsharp
+// Returns key phrases from the text as a list object
+(text) => let
+ apikey = "YOUR_API_KEY_HERE",
+ endpoint = "https://<your-custom-subdomain>.cognitiveservices.azure.com" & "/text/analytics/v3.1/keyPhrases",
+ jsontext = Text.FromBinary(Json.FromValue(Text.Start(Text.Trim(text), 5000))),
+ jsonbody = "{ documents: [ { language: ""en"", id: ""0"", text: " & jsontext & " } ] }",
+ bytesbody = Text.ToBinary(jsonbody),
+ headers = [#"Ocp-Apim-Subscription-Key" = apikey],
+ bytesresp = Web.Contents(endpoint, [Headers=headers, Content=bytesbody]),
+ jsonresp = Json.Document(bytesresp),
+ keyphrases = jsonresp[documents]{0}[keyPhrases]
+in keyphrases
+```
+
+## Next steps
+
+Learn more about Azure AI Language, the Power Query M formula language, or Power BI.
+
+* [Azure AI Language overview](../../overview.md)
+* [Power Query M reference](/powerquery-m/power-query-m-reference)
+* [Power BI documentation](https://powerbi.microsoft.com/documentation/powerbi-landing-page/)
ai-services Call Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/language-service/language-detection/how-to/call-api.md
+
+ Title: How to perform language detection
+
+description: This article will show you how to detect the language of written text using language detection.
++++++ Last updated : 03/01/2022++++
+# How to use language detection
+
+The Language Detection feature can evaluate text, and return a language identifier that indicates the language a document was written in.
+
+Language detection is useful for content stores that collect arbitrary text, where language is unknown. You can parse the results of this analysis to determine which language is used in the input document. The response also returns a score between 0 and 1 that reflects the confidence of the model.
+
+The Language Detection feature can detect a wide range of languages, variants, dialects, and some regional or cultural languages.
+
+## Development options
++
+## Determine how to process the data (optional)
+
+### Specify the language detection model
+
+By default, language detection will use the latest available AI model on your text. You can also configure your API requests to use a specific [model version](../../concepts/model-lifecycle.md).
+
+### Input languages
+
+When you submit documents to be evaluated, language detection will attempt to determine if the text was written in any of [the supported languages](../language-support.md).
+
+If you have content expressed in a less frequently used language, you can try the Language Detection feature to see if it returns a code. The response for languages that can't be detected is `unknown`.
+
+## Submitting data
+
+> [!TIP]
+> You can use a [Docker container](use-containers.md) for language detection, so you can use the API on-premises.
+
+Analysis is performed upon receipt of the request. Using the language detection feature synchronously is stateless. No data is stored in your account, and results are returned immediately in the response.
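+
+The following Python snippet is a minimal sketch (not an official sample) of a synchronous request made with the `requests` library. The `/text/analytics/v3.1/languages` route, the placeholder endpoint and key, and the response fields are assumptions for illustration; adjust them to the API version and client library you use.
+
+```python
+# Minimal sketch of a synchronous language detection request (assumed v3.1 REST route).
+import requests
+
+endpoint = "https://<your-custom-subdomain>.cognitiveservices.azure.com"  # your Language resource endpoint
+key = "YOUR_API_KEY_HERE"                                                 # your Language resource key
+
+body = {
+    "documents": [
+        {"id": "1", "text": "Ce document est rédigé en français."}
+    ]
+}
+
+response = requests.post(
+    f"{endpoint}/text/analytics/v3.1/languages",
+    headers={"Ocp-Apim-Subscription-Key": key},
+    json=body,
+)
+response.raise_for_status()
+
+# Each document gets a detectedLanguage with a name, ISO 639-1 code, and confidence score.
+detected = response.json()["documents"][0]["detectedLanguage"]
+print(detected["name"], detected["iso6391Name"], detected["confidenceScore"])
+```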
+++
+## Getting language detection results
+
+When you get results from language detection, you can stream the results to an application or save the output to a file on the local system.
+
+Language detection will return one predominant language for each document you submit, along with its [ISO 639-1](https://www.iso.org/standard/22109.html) name, a human-readable name, and a confidence score. A positive score of 1 indicates the highest possible confidence level of the analysis.
+
+### Ambiguous content
+
+In some cases it may be hard to disambiguate languages based on the input. You can use the `countryHint` parameter to specify an [ISO 3166-1 alpha-2](https://en.wikipedia.org/wiki/ISO_3166-1_alpha-2) country/region code. By default, the API uses "US" as the country hint. To remove this behavior, reset the parameter to an empty string: `countryHint = ""`.
+
+For example, "communication" is common to both English and French. With limited context, the response will be based on the "US" country/region hint. If the text is known to originate from France, you can provide "fr" as the hint.
+
+**Input**
+
+```json
+{
+ "documents": [
+ {
+ "id": "1",
+ "text": "communication"
+ },
+ {
+ "id": "2",
+ "text": "communication",
+ "countryHint": "fr"
+ }
+ ]
+}
+```
+
+The language detection model now has additional context to make a better judgment:
+
+**Output**
+
+```json
+{
+ "documents":[
+ {
+ "detectedLanguage":{
+ "confidenceScore":0.62,
+ "iso6391Name":"en",
+ "name":"English"
+ },
+ "id":"1",
+ "warnings":[
+
+ ]
+ },
+ {
+ "detectedLanguage":{
+ "confidenceScore":1.0,
+ "iso6391Name":"fr",
+ "name":"French"
+ },
+ "id":"2",
+ "warnings":[
+
+ ]
+ }
+ ],
+ "errors":[
+
+ ],
+ "modelVersion":"2022-10-01"
+}
+```
+
+If the analyzer can't parse the input, it returns `(Unknown)`. An example is if you submit a text string that consists solely of numbers.
+
+```json
+{
+ "documents": [
+ {
+ "id": "1",
+ "detectedLanguage": {
+ "name": "(Unknown)",
+ "iso6391Name": "(Unknown)",
+ "confidenceScore": 0.0
+ },
+ "warnings": []
+ }
+ ],
+ "errors": [],
+ "modelVersion": "2021-01-05"
+}
+```
+
+### Mixed-language content
+
+Mixed-language content within the same document returns the language with the largest representation in the content, but with a lower positive rating. The rating reflects the marginal strength of the assessment. In the following example, input is a blend of English, Spanish, and French. The analyzer counts characters in each segment to determine the predominant language.
+
+**Input**
+
+```json
+{
+ "documents": [
+ {
+ "id": "1",
+ "text": "Hello, I would like to take a class at your University. ¿Se ofrecen clases en español? Es mi primera lengua y más fácil para escribir. Que diriez-vous des cours en français?"
+ }
+ ]
+}
+```
+
+**Output**
+
+The resulting output consists of the predominant language, with a score of less than 1.0, which indicates a weaker level of confidence.
+
+```json
+{
+ "documents": [
+ {
+ "id": "1",
+ "detectedLanguage": {
+ "name": "Spanish",
+ "iso6391Name": "es",
+ "confidenceScore": 0.88
+ },
+ "warnings": []
+ }
+ ],
+ "errors": [],
+ "modelVersion": "2021-01-05"
+}
+```
+
+## Service and data limits
++
+## See also
+
+* [Language detection overview](../overview.md)
ai-services Use Containers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/language-service/language-detection/how-to/use-containers.md
+
+ Title: Use language detection Docker containers on-premises
+
+description: Use Docker containers for the Language Detection API to determine the language of written text, on-premises.
++++++ Last updated : 04/11/2023++
+keywords: on-premises, Docker, container
++
+# Use language detection Docker containers on-premises
+
+Containers enable you to host the Language Detection API on your own infrastructure. If you have security or data governance requirements that can't be fulfilled by calling Language Detection remotely, then containers might be a good option.
+
+## Prerequisites
+
+* If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/cognitive-services/).
+* [Docker](https://docs.docker.com/) installed on a host computer. Docker must be configured to allow the containers to connect with and send billing data to Azure.
+ * On Windows, Docker must also be configured to support Linux containers.
+ * You should have a basic understanding of [Docker concepts](https://docs.docker.com/get-started/overview/).
+* A <a href="https://portal.azure.com/#create/Microsoft.CognitiveServicesTextAnalytics" title="Create a Language resource" target="_blank">Language resource </a> with the free (F0) or standard (S) [pricing tier](https://azure.microsoft.com/pricing/details/cognitive-services/text-analytics/).
++
+## Host computer requirements and recommendations
++
+The following table describes the minimum and recommended specifications for the language detection container. Each CPU core must be at least 2.6 gigahertz (GHz) or faster. The allowable Transactions Per Second (TPS) are also listed.
+
+| | Minimum host specs | Recommended host specs | Minimum TPS | Maximum TPS |
+|--|--|--|--|--|
+| **Language detection** | 1 core, 2 GB memory | 1 core, 4 GB memory | 15 | 30 |
+
+CPU core and memory correspond to the `--cpus` and `--memory` settings, which are used as part of the `docker run` command.
+
+## Get the container image with `docker pull`
+
+The Language Detection container image can be found on the `mcr.microsoft.com` container registry syndicate. It resides within the `azure-cognitive-services/textanalytics/` repository and is named `language`. The fully qualified container image name is `mcr.microsoft.com/azure-cognitive-services/textanalytics/language`.
+
+To use the latest version of the container, you can use the `latest` tag. You can also find a full list of [tags on the MCR](https://mcr.microsoft.com/product/azure-cognitive-services/textanalytics/language/tags).
+
+Use the [`docker pull`](https://docs.docker.com/engine/reference/commandline/pull/) command to download a container image from the Microsoft Container Registry.
+
+```bash
+docker pull mcr.microsoft.com/azure-cognitive-services/textanalytics/language:latest
+```
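+
+If you want to pin to a specific image version instead of `latest`, append one of the tags from the MCR tag list; `<tag>` below is a placeholder.
+
+```bash
+# Sketch: pull a pinned image version. Replace <tag> with a tag from the MCR tag list.
+docker pull mcr.microsoft.com/azure-cognitive-services/textanalytics/language:<tag>
+```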
++
+## Run the container with `docker run`
+
+Once the container is on the host computer, use the [docker run](https://docs.docker.com/engine/reference/commandline/run/) command to run the container. The container will continue to run until you stop it.
+
+> [!IMPORTANT]
+> * The docker commands in the following sections use the backslash, `\`, as a line continuation character. Replace or remove this based on your host operating system's requirements.
+> * The `Eula`, `Billing`, and `ApiKey` options must be specified to run the container; otherwise, the container won't start. For more information, see [Billing](#billing).
+
+To run the *Language Detection* container, execute the following `docker run` command. Replace the placeholders below with your own values:
+
+| Placeholder | Value | Format or example |
+|---|---|---|
+| **{API_KEY}** | The key for your Language resource. You can find it on your resource's **Key and endpoint** page, on the Azure portal. |`xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx`|
+| **{ENDPOINT_URI}** | The endpoint for accessing the language detection API. You can find it on your resource's **Key and endpoint** page, on the Azure portal. | `https://<your-custom-subdomain>.cognitiveservices.azure.com` |
++
+```bash
+docker run --rm -it -p 5000:5000 --memory 4g --cpus 1 \
+mcr.microsoft.com/azure-cognitive-services/textanalytics/language \
+Eula=accept \
+Billing={ENDPOINT_URI} \
+ApiKey={API_KEY}
+```
+
+This command:
+
+* Runs a *Language Detection* container from the container image
+* Allocates one CPU core and 4 gigabytes (GB) of memory
+* Exposes TCP port 5000 and allocates a pseudo-TTY for the container
+* Automatically removes the container after it exits. The container image is still available on the host computer.
++
+## Query the container's prediction endpoint
+
+The container provides REST-based query prediction endpoint APIs.
+
+Use the host, `http://localhost:5000`, for container APIs.
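+
+For example, a language detection request against the local container might look like the following sketch. The `/language/:analyze-text` route and `api-version` value mirror the cloud endpoint and are assumptions here; check the container's own API description page for the exact routes it exposes.
+
+```bash
+# Sketch: query the locally hosted container. No subscription key is needed for local queries;
+# the route and api-version are assumptions based on the cloud endpoint.
+curl -X POST "http://localhost:5000/language/:analyze-text?api-version=2023-04-01" \
+  -H "Content-Type: application/json" \
+  -d '{
+    "kind": "LanguageDetection",
+    "analysisInput": {
+      "documents": [ { "id": "1", "text": "Bonjour tout le monde" } ]
+    }
+  }'
+```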
+
+<!-- ## Validate container is running -->
++
+## Run the container disconnected from the internet
++
+## Stop the container
++
+## Troubleshooting
+
+If you run the container with an output [mount](../../concepts/configure-containers.md#mount-settings) and logging enabled, the container generates log files that are helpful to troubleshoot issues that happen while starting or running the container.
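+
+As a hedged sketch, a `docker run` command with an output mount and disk logging enabled might look like the following. The host path and logging setting names are illustrative assumptions; confirm the exact options in the [Configure containers](../../concepts/configure-containers.md) article.
+
+```bash
+# Sketch: run the container with an output mount and disk logging so log files are written
+# to the host. The host path and Logging settings are illustrative assumptions.
+docker run --rm -it -p 5000:5000 --memory 4g --cpus 1 \
+  --mount type=bind,src=/home/azureuser/output,target=/output \
+  mcr.microsoft.com/azure-cognitive-services/textanalytics/language \
+  Eula=accept \
+  Billing={ENDPOINT_URI} \
+  ApiKey={API_KEY} \
+  Logging:Disk:Format=json
+```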
++
+## Billing
+
+The language detection containers send billing information to Azure, using a _Language_ resource on your Azure account.
++
+For more information about these options, see [Configure containers](../../concepts/configure-containers.md).
+
+## Summary
+
+In this article, you learned concepts and workflow for downloading, installing, and running language detection containers. In summary:
+
+* Language detection provides Linux containers for Docker
+* Container images are downloaded from the Microsoft Container Registry (MCR).
+* Container images run in Docker.
+* You must specify billing information when instantiating a container.
+
+> [!IMPORTANT]
+> This container is not licensed to run without being connected to Azure for metering. Customers need to enable the containers to communicate billing information with the metering service at all times. Azure AI containers do not send customer data (e.g. text that is being analyzed) to Microsoft.
+
+## Next steps
+
+* See [Configure containers](../../concepts/configure-containers.md) for configuration settings.
ai-services Language Support https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/language-service/language-detection/language-support.md
+
+ Title: Language Detection language support
+
+description: This article explains which natural languages are supported by the Language Detection API.
++++++ Last updated : 11/02/2021++++
+# Language support for Language Detection
+
+Use this article to learn which natural languages are supported by Language Detection.
++
+> [!NOTE]
+> Languages are added as new [model versions](how-to/call-api.md#specify-the-language-detection-model) are released. The current model version for Language Detection is `2022-10-01`.
+
+The Language Detection feature can detect a wide range of languages, variants, dialects, and some regional/cultural languages, and return detected languages with their name and code. The returned language code parameters conform to the [BCP-47](https://tools.ietf.org/html/bcp47) standard, with most of them conforming to [ISO-639-1](https://www.iso.org/iso-639-language-codes.html) identifiers.
+
+If you have content expressed in a less frequently used language, you can try Language Detection to see if it returns a code. The response for languages that cannot be detected is `unknown`.
+
+## Languages supported by Language Detection
+
+| Language | Language Code | Starting with model version: |
+||||
+| Afrikaans | `af` | |
+| Albanian | `sq` | |
+| Amharic | `am` | 2021-01-05 |
+| Arabic | `ar` | |
+| Armenian | `hy` | |
+| Assamese | `as` | 2021-01-05 |
+| Azerbaijani | `az` | 2021-01-05 |
+| Bashkir | `ba` | 2022-10-01 |
+| Basque | `eu` | |
+| Belarusian | `be` | |
+| Bengali | `bn` | |
+| Bosnian | `bs` | 2020-09-01 |
+| Bulgarian | `bg` | |
+| Burmese | `my` | |
+| Catalan | `ca` | |
+| Central Khmer | `km` | |
+| Chinese | `zh` | |
+| Chinese Simplified | `zh_chs` | |
+| Chinese Traditional | `zh_cht` | |
+| Chuvash | `cv` | 2022-10-01 |
+| Corsican | `co` | 2021-01-05 |
+| Croatian | `hr` | |
+| Czech | `cs` | |
+| Danish | `da` | |
+| Dari | `prs` | 2020-09-01 |
+| Divehi | `dv` | |
+| Dutch | `nl` | |
+| English | `en` | |
+| Esperanto | `eo` | |
+| Estonian | `et` | |
+| Faroese | `fo` | 2022-10-01 |
+| Fijian | `fj` | 2020-09-01 |
+| Finnish | `fi` | |
+| French | `fr` | |
+| Galician | `gl` | |
+| Georgian | `ka` | |
+| German | `de` | |
+| Greek | `el` | |
+| Gujarati | `gu` | |
+| Haitian | `ht` | |
+| Hausa | `ha` | 2021-01-05 |
+| Hebrew | `he` | |
+| Hindi | `hi` | |
+| Hmong Daw | `mww` | 2020-09-01 |
+| Hungarian | `hu` | |
+| Icelandic | `is` | |
+| Igbo | `ig` | 2021-01-05 |
+| Indonesian | `id` | |
+| Inuktitut | `iu` | |
+| Irish | `ga` | |
+| Italian | `it` | |
+| Japanese | `ja` | |
+| Javanese | `jv` | 2021-01-05 |
+| Kannada | `kn` | |
+| Kazakh | `kk` | 2020-09-01 |
+| Kinyarwanda | `rw` | 2021-01-05 |
+| Kirghiz | `ky` | 2022-10-01 |
+| Korean | `ko` | |
+| Kurdish | `ku` | |
+| Lao | `lo` | |
+| Latin | `la` | |
+| Latvian | `lv` | |
+| Lithuanian | `lt` | |
+| Luxembourgish | `lb` | 2021-01-05 |
+| Macedonian | `mk` | |
+| Malagasy | `mg` | 2020-09-01 |
+| Malay | `ms` | |
+| Malayalam | `ml` | |
+| Maltese | `mt` | |
+| Maori | `mi` | 2020-09-01 |
+| Marathi | `mr` | 2020-09-01 |
+| Mongolian | `mn` | 2021-01-05 |
+| Nepali | `ne` | 2021-01-05 |
+| Norwegian | `no` | |
+| Norwegian Nynorsk | `nn` | |
+| Odia | `or` | |
+| Pashto | `ps` | |
+| Persian | `fa` | |
+| Polish | `pl` | |
+| Portuguese | `pt` | |
+| Punjabi | `pa` | |
+| Queretaro Otomi | `otq` | 2020-09-01 |
+| Romanian | `ro` | |
+| Russian | `ru` | |
+| Samoan | `sm` | 2020-09-01 |
+| Serbian | `sr` | |
+| Shona | `sn` | 2021-01-05 |
+| Sindhi | `sd` | 2021-01-05 |
+| Sinhala | `si` | |
+| Slovak | `sk` | |
+| Slovenian | `sl` | |
+| Somali | `so` | |
+| Spanish | `es` | |
+| Sundanese | `su` | 2021-01-05 |
+| Swahili | `sw` | |
+| Swedish | `sv` | |
+| Tagalog | `tl` | |
+| Tahitian | `ty` | 2020-09-01 |
+| Tajik | `tg` | 2021-01-05 |
+| Tamil | `ta` | |
+| Tatar | `tt` | 2021-01-05 |
+| Telugu | `te` | |
+| Thai | `th` | |
+| Tibetan | `bo` | 2021-01-05 |
+| Tigrinya | `ti` | 2021-01-05 |
+| Tongan | `to` | 2020-09-01 |
+| Turkish | `tr` | 2021-01-05 |
+| Turkmen | `tk` | 2021-01-05 |
+| Upper Sorbian | `hsb` | 2022-10-01 |
+| Uyghur | `ug` | 2022-10-01 |
+| Ukrainian | `uk` | |
+| Urdu | `ur` | |
+| Uzbek | `uz` | |
+| Vietnamese | `vi` | |
+| Welsh | `cy` | |
+| Xhosa | `xh` | 2021-01-05 |
+| Yiddish | `yi` | |
+| Yoruba | `yo` | 2021-01-05 |
+| Yucatec Maya | `yua` | |
+| Zulu | `zu` | 2021-01-05 |
+
+## Romanized Indic Languages supported by Language Detection
+
+| Language | Language Code | Starting with model version: |
+||||
+| Assamese | `as` | 2022-10-01 |
+| Bengali | `bn` | 2022-10-01 |
+| Gujarati | `gu` | 2022-10-01 |
+| Hindi | `hi` | 2022-10-01 |
+| Kannada | `kn` | 2022-10-01 |
+| Malayalam | `ml` | 2022-10-01 |
+| Marathi | `mr` | 2022-10-01 |
+| Odia | `or` | 2022-10-01 |
+| Punjabi | `pa` | 2022-10-01 |
+| Tamil | `ta` | 2022-10-01 |
+| Telugu | `te` | 2022-10-01 |
+| Urdu | `ur` | 2022-10-01 |
+
+## Next steps
+
+[Language detection overview](overview.md)
ai-services Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/language-service/language-detection/overview.md
+
+ Title: What is language detection in Azure AI Language?
+
+description: An overview of language detection in Azure AI services, which helps you detect the language that text is written in by returning language codes.
++++++ Last updated : 07/27/2022++++
+# What is language detection in Azure AI Language?
+
+Language detection is one of the features offered by [Azure AI Language](../overview.md), a collection of machine learning and AI algorithms in the cloud for developing intelligent applications that involve written language. Language detection can detect the language a document is written in, and returns a language code for a wide range of languages, variants, dialects, and some regional/cultural languages.
+
+This documentation contains the following types of articles:
+
+* [**Quickstarts**](quickstart.md) are getting-started instructions to guide you through making requests to the service.
+* [**How-to guides**](how-to/call-api.md) contain instructions for using the service in more specific or customized ways.
+++
+## Get started with language detection
++
+## Responsible AI
+
+An AI system includes not only the technology, but also the people who will use it, the people who will be affected by it, and the environment in which it is deployed. Read the [transparency note for language detection](/legal/cognitive-services/language-service/transparency-note-language-detection?context=/azure/ai-services/language-service/context/context) to learn about responsible AI use and deployment in your systems. You can also see the following articles for more information:
++
+## Next steps
+
+There are two ways to get started using the language detection feature:
+* [Language Studio](../language-studio.md), which is a web-based platform that enables you to try several Azure AI Language features without needing to write code.
+* The [quickstart article](quickstart.md) for instructions on making requests to the service using the REST API and client library SDK.
ai-services Quickstart https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/language-service/language-detection/quickstart.md
+
+ Title: "Quickstart: Use the Language Detection client library"
+
+description: Use this quickstart to start using Language Detection.
++++++ Last updated : 02/17/2023+
+ms.devlang: csharp, java, javascript, python
+
+keywords: text mining, language detection
+zone_pivot_groups: programming-languages-text-analytics
++
+# Quickstart: using the Language Detection client library and REST API
++++++++++++++++
+## Clean up resources
+
+If you want to clean up and remove an Azure AI services subscription, you can delete the resource or resource group. Deleting the resource group also deletes any other resources associated with it.
+
+* [Portal](../../multi-service-resource.md?pivots=azportal#clean-up-resources)
+* [Azure CLI](../../multi-service-resource.md?pivots=azcli#clean-up-resources)
+++
+## Next steps
+
+* [Language detection overview](overview.md)
ai-services Language Studio https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/language-service/language-studio.md
+
+ Title: "Quickstart: Get started with Language Studio"
+
+description: Use this article to learn about Language Studio, and testing features of Azure AI Language
+++++ Last updated : 01/03/2023++++
+# Quickstart: Get started with Language Studio
+
+[Language Studio](https://aka.ms/languageStudio) is a set of UI-based tools that lets you explore, build, and integrate features from Azure AI Language into your applications.
+
+Language Studio provides you with a platform to try several service features, and see what they return in a visual manner. It also provides you with an easy-to-use experience to create custom projects and models to work on your data. Using the Studio, you can get started without needing to write code, and then use the available client libraries and REST APIs in your application.
+
+## Try Language Studio before signing up
+
+Language Studio lets you try available features without needing to create an Azure account or an Azure resource. From the main page of the studio, select one of the listed categories to see [available features](overview.md#available-features) you can try.
++
+Once you choose a feature, you'll be able to send several text examples to the service, and see example output.
++
+## Use Language Studio with your own text
+
+When you're ready to use Language Studio features on your own text data, you will need an Azure AI Language resource for authentication and [billing](https://aka.ms/unifiedLanguagePricing). You can also use this resource to call the REST APIs and client libraries programmatically. Follow these steps to get started.
+
+> [!IMPORTANT]
+> The setup process and requirements for custom features are different. If you're using one of the following custom features, we recommend using the quickstart articles linked below to get started more easily.
+> * [Conversational Language Understanding](./conversational-language-understanding/quickstart.md)
+> * [Custom Text Classification](./custom-text-classification/quickstart.md)
+> * [Custom Named Entity Recognition (NER)](./custom-named-entity-recognition/quickstart.md)
+> * [Orchestration workflow](./orchestration-workflow/quickstart.md)
+
+1. Create an Azure Subscription. You can [create one for free](https://azure.microsoft.com/free/ai/).
+
+2. [Log into Language Studio](https://aka.ms/languageStudio). If it's your first time logging in, you'll see a window appear that lets you choose a language resource.
+
+ :::image type="content" source="./media/language-resource-small.png" alt-text="A screenshot showing the resource selection screen in Language Studio." lightbox="./media/language-resource.png":::
+
+3. Select **Create a new language resource**. Then enter information for your new resource, such as a name, location and resource group.
+
+
+ > [!TIP]
+ > * When selecting a location for your Azure resource, choose one that's closest to you for lower latency.
+ > * We recommend turning the **Managed Identity** option **on**, to authenticate your requests across Azure.
+ > * If you use the free pricing tier, you can keep using the Language service even after your Azure free trial or service credit expires.
+
+ :::image type="content" source="./media/create-new-resource-small.png" alt-text="A screenshot showing the resource creation screen in Language Studio." lightbox="./media/create-new-resource.png":::
+
+4. Select **Done**. Your resource will be created, and you will be able to use the different features offered by the Language service with your own text.
++
+### Valid text formats for conversation features
+
+> [!NOTE]
+> This section applies to the following features:
+> * [PII detection for conversation](./personally-identifiable-information/overview.md)
+> * [Conversation summarization](./summarization/overview.md?tabs=conversation-summarization)
+
+If you're sending conversational text to supported features in Language Studio, be aware of the following input requirements:
+* The text you send must be a conversational dialog between two or more participants.
+* Each line must start with the name of the participant, followed by a `:`, and followed by what they say.
+* Each participant must be on a new line. If multiple participants' utterances are on the same line, it will be processed as one line of the conversation.
+
+See the following example for how you should structure conversational text you want to send.
+
+*Agent: Hello, you're chatting with Rene. How may I help you?*
+
+*Customer: Hi, I tried to set up wifi connection for Smart Brew 300 espresso machine, but it didn't work.*
+
+*Agent: I'm sorry to hear that. Let's see what we can do to fix this issue.*
+
+Note that the names of the two participants in the conversation (*Agent* and *Customer*) begin each line, and that there is only one participant per line of dialog.
++
+## Clean up resources
+
+If you want to clean up and remove an Azure AI services subscription, you can delete the resource or resource group. Deleting the resource group also deletes any other resources associated with it.
+
+* [Portal](../multi-service-resource.md?pivots=azportal#clean-up-resources)
+* [Azure CLI](../multi-service-resource.md?pivots=azcli#clean-up-resources)
+
+> [!TIP]
+> In Language Studio, you can find your resource's details (such as its name and pricing tier) and switch resources by doing the following:
+> 1. Select the **Settings** icon in the top-right corner of the Language Studio screen.
+> 2. Select **Resources**.
+>
+> You can't delete your resource from Language Studio.
+
+## Next steps
+
+* Go to the [Language Studio](https://aka.ms/languageStudio) to begin using features offered by the service.
+* For more information and documentation on the features offered, see the [Azure AI Language overview](overview.md).
ai-services Entity Metadata https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/language-service/named-entity-recognition/concepts/entity-metadata.md
+
+ Title: Entity Metadata provided by Named Entity Recognition
+
+description: Learn about entity metadata in the NER feature.
++++++ Last updated : 06/13/2023+++
+# Entity Metadata
+
+The Entity Metadata object captures optional additional information about detected entities, providing resolutions specifically for numeric and temporal entities. This attribute is populated only when there's supplementary data available, enhancing the comprehensiveness of the detected entities. The Metadata component encompasses resolutions designed for both numeric and temporal entities. It's important to handle cases where the Metadata attribute may be empty or absent, as its presence isn't guaranteed for every entity.
+
+Currently, metadata components handle resolutions to a standard format for an entity. Entities can be expressed in various forms and resolutions provide standard predictable formats for common quantifiable types. For example, "eighty" and "80" should both resolve to the integer `80`.
+
+You can use NER resolutions to implement actions or retrieve further information. For example, your service can extract datetime entities to extract dates and times that are provided to a meeting scheduling system.
+
+> [!NOTE]
+> Entity metadata is only supported starting from **_api-version=2023-04-15-preview_**. For older API versions, see the [Entity Resolutions article](./entity-resolutions.md).
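+
+As an illustration, a request sketch that targets that preview version (so the `metadata` object is included for supported entities) might look like the following; the resource name and key are placeholders.
+
+```bash
+# Sketch: call NER with the preview api-version so entity metadata is returned.
+# <your-resource> and <your-key> are placeholders.
+curl -X POST "https://<your-resource>.cognitiveservices.azure.com/language/:analyze-text?api-version=2023-04-15-preview" \
+  -H "Ocp-Apim-Subscription-Key: <your-key>" \
+  -H "Content-Type: application/json" \
+  -d '{
+    "kind": "EntityRecognition",
+    "analysisInput": {
+      "documents": [ { "id": "1", "language": "en", "text": "The meeting is on May 3rd at noon." } ]
+    }
+  }'
+```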
+
+This article documents the resolution objects returned for each entity category or subcategory under the metadata object.
+
+## Numeric Entities
+
+### Age
+
+Examples: "10 years old", "23 months old", "sixty Y.O."
+
+```json
+"metadata": {
+ "unit": "Year",
+ "value": 10
+ }
+```
+
+Possible values for "unit":
+- Year
+- Month
+- Week
+- Day
++
+### Currency
+
+Examples: "30 Egyptian pounds", "77 USD"
+
+```json
+"metadata": {
+ "unit": "Egyptian pound",
+ "ISO4217": "EGP",
+ "value": 30
+ }
+```
+
+Possible values for "unit" and "ISO4217":
+- [ISO 4217 reference](https://docs.1010data.com/1010dataReferenceManual/DataTypesAndFormats/currencyUnitCodes.html).
+
+## Datetime/Temporal entities
+
+Datetime includes several different subtypes that return different response objects.
+
+### Date
+
+Specific days.
+
+Examples: "January 1 1995", "12 april", "7th of October 2022", "tomorrow"
+
+```json
+"metadata": {
+ "dateValues": [
+ {
+ "timex": "1995-01-01",
+ "value": "1995-01-01"
+ }
+ ]
+ }
+```
+
+Whenever an ambiguous date is provided, you're offered different options for your resolution. For example, "12 April" could refer to any year. Resolution provides this year and the next as options. The `timex` value `XXXX` indicates no year was specified in the query.
+
+```json
+"metadata": {
+ "dateValues": [
+ {
+ "timex": "XXXX-04-12",
+ "value": "2022-04-12"
+ },
+ {
+ "timex": "XXXX-04-12",
+ "value": "2023-04-12"
+ }
+ ]
+ }
+```
+
+Ambiguity can occur even for a given day of the week. For example, saying "Monday" could refer to last Monday or this Monday. Once again the `timex` value indicates no year or month was specified, and uses a day of the week identifier (W) to indicate the first day of the week.
+
+```json
+"metadata" :{
+ "dateValues": [
+ {
+ "timex": "XXXX-WXX-1",
+ "value": "2022-10-03"
+ },
+ {
+ "timex": "XXXX-WXX-1",
+ "value": "2022-10-10"
+ }
+ ]
+ }
+```
++
+### Time
+
+Specific times.
+
+Examples: "9:39:33 AM", "seven AM", "20:03"
+
+```json
+"metadata": {
+ "timex": "T09:39:33",
+ "value": "09:39:33"
+ }
+```
+
+### Datetime
+
+Specific date and time combinations.
+
+Examples: "6 PM tomorrow", "8 PM on January 3rd", "Nov 1 19:30"
+
+```json
+"metadata": {
+ "timex": "2022-10-07T18",
+ "value": "2022-10-07 18:00:00"
+ }
+```
+
+Similar to dates, you can have ambiguous datetime entities. For example, "May 3rd noon" could refer to any year. Resolution provides this year and the next as options. The `timex` value **XXXX** indicates no year was specified.
+
+```json
+"metadata": {
+ "dateValues": [
+ {
+ "timex": "XXXX-05-03T12",
+ "value": "2022-05-03 12:00:00"
+ },
+ {
+ "timex": "XXXX-05-03T12",
+ "value": "2023-05-03 12:00:00"
+ }
+ ]
+ }
+```
+
+### Datetime ranges
+
+A datetime range is a period with a beginning and end date, time, or datetime.
+
+Examples: "from january 3rd 6 AM to april 25th 8 PM 2022", "between Monday to Thursday", "June", "the weekend"
+
+The "duration" parameter indicates the time passed in seconds (S), minutes (M), hours (H), or days (D). This parameter is only returned when an explicit start and end datetime are in the query. "Next week" would only return with "begin" and "end" parameters for the week.
+
+```json
+"metadata": {
+ "duration": "PT2702H",
+ "begin": "2022-01-03 06:00:00",
+ "end": "2022-04-25 20:00:00"
+ }
+```
+
+### Set
+
+A set is a recurring datetime period. Sets don't resolve to exact values, as they don't indicate an exact datetime.
+
+Examples: "every Monday at 6 PM", "every Thursday", "every weekend"
+
+For "every Monday at 6 PM", the `timex` value indicates no specified year with the starting **XXXX**, then every Monday through **WXX-1** to determine first day of every week, and finally **T18** to indicate 6 PM.
+
+```json
+"metadata": {
+ "timex": "XXXX-WXX-1T18",
+ "value": "not resolved"
+ }
+```
+
+## Dimensions
+
+Examples: "24 km/hr", "44 square meters", "sixty six kilobytes"
+
+```json
+"metadata": {
+ "unit": "KilometersPerHour",
+ "value": 24
+ }
+```
+
+Possible values for the "unit" field:
+
+- **For Measurements**:
+ - SquareKilometer
+ - SquareHectometer
+ - SquareDecameter
+ - SquareMeter
+ - SquareDecimeter
+ - SquareCentimeter
+ - SquareMillimeter
+ - SquareInch
+ - SquareFoot
+ - SquareMile
+ - SquareYard
+ - Acre
+
+- **For Information**:
+ - Bit
+ - Kilobit
+ - Megabit
+ - Gigabit
+ - Terabit
+ - Petabit
+ - Byte
+ - Kilobyte
+ - Megabyte
+ - Gigabyte
+ - Terabyte
+ - Petabyte
+
+- **For Length, width, height**:
+ - Kilometer
+ - Hectometer
+ - Decameter
+ - Meter
+ - Decimeter
+ - Centimeter
+ - Millimeter
+ - Micrometer
+ - Nanometer
+ - Picometer
+ - Mile
+ - Yard
+ - Inch
+ - Foot
+ - Light year
+ - Pt
+
+- **For Speed**:
+ - MetersPerSecond
+ - KilometersPerHour
+ - KilometersPerMinute
+ - KilometersPerSecond
+ - MilesPerHour
+ - Knot
+ - FootPerSecond
+ - FootPerMinute
+ - YardsPerMinute
+ - YardsPerSecond
+ - MetersPerMillisecond
+ - CentimetersPerMillisecond
+ - KilometersPerMillisecond
+
+- **For Volume**:
+ - CubicMeter
+ - CubicCentimeter
+ - CubicMillimiter
+ - Hectoliter
+ - Decaliter
+ - Liter
+ - Deciliter
+ - Centiliter
+ - Milliliter
+ - CubicYard
+ - CubicInch
+ - CubicFoot
+ - CubicMile
+ - FluidOunce
+ - Teaspoon
+ - Tablespoon
+ - Pint
+ - Quart
+ - Cup
+ - Gill
+ - Pinch
+ - FluidDram
+ - Barrel
+ - Minim
+ - Cord
+ - Peck
+ - Bushel
+ - Hogshead
+
+- **For Weight**:
+ - Kilogram
+ - Gram
+ - Milligram
+ - Microgram
+ - Gallon
+ - MetricTon
+ - Ton
+ - Pound
+ - Ounce
+ - Grain
+ - Pennyweight
+ - LongTonBritish
+ - ShortTonUS
+ - ShortHundredweightUS
+ - Stone
+ - Dram
++
+## Ordinal
+
+Examples: "3rd", "first", "last"
+
+```json
+"metadata": {
+ "offset": "3",
+ "relativeTo": "Start",
+ "value": "3"
+ }
+```
+
+Possible values for "relativeTo":
+- Start
+- End
+
+## Temperature
+
+Examples: "88 deg fahrenheit", "twenty three degrees celsius"
+
+```json
+"metadata": {
+ "unit": "Fahrenheit",
+ "value": 88
+ }
+```
+
+Possible values for "unit":
+- Celsius
+- Fahrenheit
+- Kelvin
+- Rankine
ai-services Entity Resolutions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/language-service/named-entity-recognition/concepts/entity-resolutions.md
+
+ Title: Entity resolutions provided by Named Entity Recognition
+
+description: Learn about entity resolutions in the NER feature.
++++++ Last updated : 10/12/2022++++
+# Resolve entities to standard formats
+
+A resolution is a standard format for an entity. Entities can be expressed in various forms and resolutions provide standard predictable formats for common quantifiable types. For example, "eighty" and "80" should both resolve to the integer `80`.
+
+You can use NER resolutions to implement actions or retrieve further information. For example, your service can extract datetime entities to extract dates and times that will be provided to a meeting scheduling system.
+
+> [!NOTE]
+> Entity resolution responses are only supported starting from **_api-version=2022-10-01-preview_** and **_"modelVersion": "2022-10-01-preview"_**.
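+
+As an illustration, a request sketch that uses those preview versions so that `resolutions` objects are returned might look like the following; the resource name and key are placeholders.
+
+```bash
+# Sketch: call NER with the preview api-version and model version that return resolutions.
+# <your-resource> and <your-key> are placeholders.
+curl -X POST "https://<your-resource>.cognitiveservices.azure.com/language/:analyze-text?api-version=2022-10-01-preview" \
+  -H "Ocp-Apim-Subscription-Key: <your-key>" \
+  -H "Content-Type: application/json" \
+  -d '{
+    "kind": "EntityRecognition",
+    "parameters": { "modelVersion": "2022-10-01-preview" },
+    "analysisInput": {
+      "documents": [ { "id": "1", "language": "en", "text": "The bill was 30 Egyptian pounds." } ]
+    }
+  }'
+```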
+
+This article documents the resolution objects returned for each entity category or subcategory.
+
+## Age
+
+Examples: "10 years old", "23 months old", "sixty Y.O."
+
+```json
+"resolutions": [
+ {
+ "resolutionKind": "AgeResolution",
+ "unit": "Year",
+ "value": 10
+ }
+ ]
+```
+
+Possible values for "unit":
+- Year
+- Month
+- Week
+- Day
++
+## Currency
+
+Examples: "30 Egyptian pounds", "77 USD"
+
+```json
+"resolutions": [
+ {
+ "resolutionKind": "CurrencyResolution",
+ "unit": "Egyptian pound",
+ "ISO4217": "EGP",
+ "value": 30
+ }
+ ]
+```
+
+Possible values for "unit" and "ISO4217":
+- [ISO 4217 reference](https://docs.1010data.com/1010dataReferenceManual/DataTypesAndFormats/currencyUnitCodes.html).
+
+## Datetime
+
+Datetime includes several different subtypes that return different response objects.
+
+### Date
+
+Specific days.
+
+Examples: "January 1 1995", "12 april", "7th of October 2022", "tomorrow"
+
+```json
+"resolutions": [
+ {
+ "resolutionKind": "DateTimeResolution",
+ "dateTimeSubKind": "Date",
+ "timex": "1995-01-01",
+ "value": "1995-01-01"
+ }
+ ]
+```
+
+Whenever an ambiguous date is provided, you're offered different options for your resolution. For example, "12 April" could refer to any year. Resolution provides this year and the next as options. The `timex` value `XXXX` indicates no year was specified in the query.
+
+```json
+"resolutions": [
+ {
+ "resolutionKind": "DateTimeResolution",
+ "dateTimeSubKind": "Date",
+ "timex": "XXXX-04-12",
+ "value": "2022-04-12"
+ },
+ {
+ "resolutionKind": "DateTimeResolution",
+ "dateTimeSubKind": "Date",
+ "timex": "XXXX-04-12",
+ "value": "2023-04-12"
+ }
+ ]
+```
+
+Ambiguity can occur even for a given day of the week. For example, saying "Monday" could refer to last Monday or this Monday. Once again the `timex` value indicates no year or month was specified, and uses a day of the week identifier (W) to indicate the first day of the week.
+
+```json
+"resolutions": [
+ {
+ "resolutionKind": "DateTimeResolution",
+ "dateTimeSubKind": "Date",
+ "timex": "XXXX-WXX-1",
+ "value": "2022-10-03"
+ },
+ {
+ "resolutionKind": "DateTimeResolution",
+ "dateTimeSubKind": "Date",
+ "timex": "XXXX-WXX-1",
+ "value": "2022-10-10"
+ }
+ ]
+```
++
+### Time
+
+Specific times.
+
+Examples: "9:39:33 AM", "seven AM", "20:03"
+
+```json
+"resolutions": [
+ {
+ "resolutionKind": "DateTimeResolution",
+ "dateTimeSubKind": "Time",
+ "timex": "T09:39:33",
+ "value": "09:39:33"
+ }
+ ]
+```
+
+### Datetime
+
+Specific date and time combinations.
+
+Examples: "6 PM tomorrow", "8 PM on January 3rd", "Nov 1 19:30"
+
+```json
+"resolutions": [
+ {
+ "resolutionKind": "DateTimeResolution",
+ "dateTimeSubKind": "DateTime",
+ "timex": "2022-10-07T18",
+ "value": "2022-10-07 18:00:00"
+ }
+ ]
+```
+
+Similar to dates, you can have ambiguous datetime entities. For example, "May 3rd noon" could refer to any year. Resolution provides this year and the next as options. The `timex` value **XXXX** indicates no year was specified.
+
+```json
+"resolutions": [
+ {
+ "resolutionKind": "DateTimeResolution",
+ "dateTimeSubKind": "DateTime",
+ "timex": "XXXX-05-03T12",
+ "value": "2022-05-03 12:00:00"
+ },
+ {
+ "resolutionKind": "DateTimeResolution",
+ "dateTimeSubKind": "DateTime",
+ "timex": "XXXX-05-03T12",
+ "value": "2023-05-03 12:00:00"
+ }
+ ]
+```
+
+### Datetime ranges
+
+A datetime range is a period with a beginning and end date, time, or datetime.
+
+Examples: "from january 3rd 6 AM to april 25th 8 PM 2022", "between Monday to Thursday", "June", "the weekend"
+
+The "duration" parameter indicates the time passed in seconds (S), minutes (M), hours (H), or days (D). This parameter is only returned when an explicit start and end datetime are in the query. "Next week" would only return with "begin" and "end" parameters for the week.
+
+```json
+"resolutions": [
+ {
+ "resolutionKind": "TemporalSpanResolution",
+ "duration": "PT2702H",
+ "begin": "2022-01-03 06:00:00",
+ "end": "2022-04-25 20:00:00"
+ }
+ ]
+```
+
+### Set
+
+A set is a recurring datetime period. Sets don't resolve to exact values, as they don't indicate an exact datetime.
+
+Examples: "every Monday at 6 PM", "every Thursday", "every weekend"
+
+For "every Monday at 6 PM", the `timex` value indicates no specified year with the starting **XXXX**, then every Monday through **WXX-1** to determine first day of every week, and finally **T18** to indicate 6 PM.
+
+```json
+"resolutions": [
+ {
+ "resolutionKind": "DateTimeResolution",
+ "dateTimeSubKind": "Set",
+ "timex": "XXXX-WXX-1T18",
+ "value": "not resolved"
+ }
+ ]
+```
+
+## Dimensions
+
+Examples: "24 km/hr", "44 square meters", "sixty six kilobytes"
+
+```json
+"resolutions": [
+ {
+ "resolutionKind": "SpeedResolution",
+ "unit": "KilometersPerHour",
+ "value": 24
+ }
+ ]
+```
+
+Possible values for "resolutionKind" and their "unit" values:
+
+- **AreaResolution**:
+ - SquareKilometer
+ - SquareHectometer
+ - SquareDecameter
+ - SquareMeter
+ - SquareDecimeter
+ - SquareCentimeter
+ - SquareMillimeter
+ - SquareInch
+ - SquareFoot
+ - SquareMile
+ - SquareYard
+ - Acre
+
+- **InformationResolution**:
+ - Bit
+ - Kilobit
+ - Megabit
+ - Gigabit
+ - Terabit
+ - Petabit
+ - Byte
+ - Kilobyte
+ - Megabyte
+ - Gigabyte
+ - Terabyte
+ - Petabyte
+
+- **LengthResolution**:
+ - Kilometer
+ - Hectometer
+ - Decameter
+ - Meter
+ - Decimeter
+ - Centimeter
+ - Millimeter
+ - Micrometer
+ - Nanometer
+ - Picometer
+ - Mile
+ - Yard
+ - Inch
+ - Foot
+ - Light year
+ - Pt
+
+- **SpeedResolution**:
+ - MetersPerSecond
+ - KilometersPerHour
+ - KilometersPerMinute
+ - KilometersPerSecond
+ - MilesPerHour
+ - Knot
+ - FootPerSecond
+ - FootPerMinute
+ - YardsPerMinute
+ - YardsPerSecond
+ - MetersPerMillisecond
+ - CentimetersPerMillisecond
+ - KilometersPerMillisecond
+
+- **VolumeResolution**:
+ - CubicMeter
+ - CubicCentimeter
+ - CubicMillimiter
+ - Hectoliter
+ - Decaliter
+ - Liter
+ - Deciliter
+ - Centiliter
+ - Milliliter
+ - CubicYard
+ - CubicInch
+ - CubicFoot
+ - CubicMile
+ - FluidOunce
+ - Teaspoon
+ - Tablespoon
+ - Pint
+ - Quart
+ - Cup
+ - Gill
+ - Pinch
+ - FluidDram
+ - Barrel
+ - Minim
+ - Cord
+ - Peck
+ - Bushel
+ - Hogshead
+
+- **WeightResolution**:
+ - Kilogram
+ - Gram
+ - Milligram
+ - Microgram
+ - Gallon
+ - MetricTon
+ - Ton
+ - Pound
+ - Ounce
+ - Grain
+ - Pennyweight
+ - LongTonBritish
+ - ShortTonUS
+ - ShortHundredweightUS
+ - Stone
+ - Dram
++
+## Number
+
+Examples: "27", "one hundred and three", "38.5", "2/3", "33%"
+
+```json
+"resolutions": [
+ {
+ "resolutionKind": "NumberResolution",
+ "numberKind": "Integer",
+ "value": 27
+ }
+ ]
+```
+
+Possible values for "numberKind":
+- Integer
+- Decimal
+- Fraction
+- Power
+- Percent
++
+## Ordinal
+
+Examples: "3rd", "first", "last"
+
+```json
+"resolutions": [
+ {
+ "resolutionKind": "OrdinalResolution",
+ "offset": "3",
+ "relativeTo": "Start",
+ "value": "3"
+ }
+ ]
+```
+
+Possible values for "relativeTo":
+- Start
+- End
+
+## Temperature
+
+Examples: "88 deg fahrenheit", "twenty three degrees celsius"
+
+```json
+"resolutions": [
+ {
+ "resolutionKind": "TemperatureResolution",
+ "unit": "Fahrenheit",
+ "value": 88
+ }
+ ]
+```
+
+Possible values for "unit":
+- Celsius
+- Fahrenheit
+- Kelvin
+- Rankine
++++
ai-services Ga Preview Mapping https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/language-service/named-entity-recognition/concepts/ga-preview-mapping.md
+
+ Title: Preview API overview
+
+description: Learn about the NER preview API.
++++++ Last updated : 06/14/2023++++
+# Preview API changes
+
+Use this article to get an overview of the new API changes, starting from the `2023-04-15-preview` version. This API change mainly introduces two new concepts (`entity types` and `entity tags`) that replace the `category` and `subcategory` fields in the current Generally Available API.
+
+## Entity types
+Entity types represent the lowest (or finest) level of granularity at which the entity is detected, and can be considered the base class of the detected entity.
+
+## Entity tags
+Entity tags further identify an entity: a detected entity is tagged with its entity type, plus additional tags that differentiate it. The entity tags list can be thought of as including categories, subcategories, sub-subcategories, and so on.
+
+## Changes from generally available API to preview API
+The changes introduce better flexibility for named entity recognition, including:
+* More granular entity recognition through the introduction of the tags list, where an entity can be tagged by more than one entity tag.
+* Overlapping entities, where an entity can be recognized as more than one entity type; if so, the entity is returned twice. If an entity is recognized as belonging to two entity tags under the same entity type, both entity tags are returned in the tags list.
+* Filtering entities using entity tags. You can learn more about this in [this article](../how-to-call.md#select-which-entities-to-be-returned-preview-api-only).
+* Metadata objects, which contain additional information about the entity but currently only act as a wrapper for the existing entity resolution feature. You can learn more about this new feature in the [entity metadata article](entity-metadata.md).
+
+## Generally available to preview API entity mappings
+You can see a comparison between the structure of the entity categories/types in the [Supported Named Entity Recognition (NER) entity categories and entity types article](./named-entity-categories.md). Below is a table describing the mappings between the results you would expect to see from the Generally Available API and the Preview API.
+
+| Type | Tags |
+|-|-|
+| Date | Temporal, Date |
+| DateRange | Temporal, DateRange |
+| DateTime | Temporal, DateTime |
+| DateTimeRange | Temporal, DateTimeRange |
+| Duration | Temporal, Duration |
+| SetTemporal | Temporal, SetTemporal |
+| Time | Temporal, Time |
+| TimeRange | Temporal, TimeRange |
+| City | GPE, Location, City |
+| State | GPE, Location, State |
+| CountryRegion | GPE, Location, CountryRegion |
+| Continent | GPE, Location, Continent |
+| GPE | Location, GPE |
+| Location | Location |
+| Airport | Structural, Location |
+| Structural | Location, Structural |
+| Geological | Location, Geological |
+| Age | Numeric, Age |
+| Currency | Numeric, Currency |
+| Number | Numeric, Number |
+| NumberRange | Numeric, NumberRange |
+| Percentage | Numeric, Percentage |
+| Ordinal | Numeric, Ordinal |
+| Temperature | Numeric, Dimension, Temperature |
+| Speed | Numeric, Dimension, Speed |
+| Weight | Numeric, Dimension, Weight |
+| Height | Numeric, Dimension, Height |
+| Length | Numeric, Dimension, Length |
+| Volume | Numeric, Dimension, Volume |
+| Area | Numeric, Dimension, Area |
+| Information | Numeric, Dimension, Information |
+| Address | Address |
+| Person | Person |
+| PersonType | PersonType |
+| Organization | Organization |
+| Product | Product |
+| ComputingProduct | Product, ComputingProduct |
+| IP | IP |
+| Email | Email |
+| URL | URL |
+| Skill | Skill |
+| Event | Event |
+| CulturalEvent | Event, CulturalEvent |
+| SportsEvent | Event, SportsEvent |
+| NaturalEvent | Event, NaturalEvent |
+
ai-services Named Entity Categories https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/language-service/named-entity-recognition/concepts/named-entity-categories.md
+
+ Title: Entity categories recognized by Named Entity Recognition in Azure AI Language
+
+description: Learn about the entities the NER feature can recognize from unstructured text.
++++++ Last updated : 11/02/2021++++
+# Supported Named Entity Recognition (NER) entity categories and entity types
+
+Use this article to find the entity categories that can be returned by [Named Entity Recognition](../how-to-call.md) (NER). NER runs a predictive model to identify and categorize named entities from an input document.
+
+> [!NOTE]
+> * Starting from API version 2023-04-15-preview, the category and subcategory fields are replaced with entity types and tags to introduce better flexibility.
+
+# [Generally Available API](#tab/ga-api)
+
+## Category: Person
+
+This category contains the following entity:
+
+ :::column span="":::
+ **Entity**
+
+ Person
+
+ :::column-end:::
+ :::column span="2":::
+ **Details**
+
+ Names of people.
+
+ :::column-end:::
+ :::column span="2":::
+ **Supported document languages**
+
+    `ar`, `cs`, `da`, `nl`, `en`, `fi`, `fr`, `de`, `he`, <br> `hu`, `it`, `ja`, `ko`, `no`, `pl`, `pt-br`, `pt-pt`, `ru`, `es`, `sv`, `tr`
+
+ :::column-end:::
+
+## Category: PersonType
+
+This category contains the following entity:
++
+ :::column span="":::
+ **Entity**
+
+ PersonType
+
+ :::column-end:::
+ :::column span="2":::
+ **Details**
+
+ Job types or roles held by a person
+
+ :::column-end:::
+ :::column span="2":::
+ **Supported document languages**
+
+ `en`, `es`, `fr`, `de`, `it`, `zh-hans`, `ja`, `ko`, `pt-pt`, `pt-br`
+
+ :::column-end:::
+
+## Category: Location
+
+This category contains the following entity:
+
+ :::column span="":::
+ **Entity**
+
+ Location
+
+ :::column-end:::
+ :::column span="2":::
+ **Details**
+
+ Natural and human-made landmarks, structures, geographical features, and geopolitical entities.
+
+ :::column-end:::
+ :::column span="2":::
+ **Supported document languages**
+
+ `ar`, `cs`, `da`, `nl`, `en`, `fi`, `fr`, `de`, `he`, `hu`, `it`, `ja`, `ko`, `no`, `pl`, `pt-br`, `pt-pt`, `ru`, `es`, `sv`, `tr`
+
+ :::column-end:::
+
+#### Subcategories
+
+The entity in this category can have the following subcategories.
+
+ :::column span="":::
+ **Entity subcategory**
+
+ Geopolitical Entity (GPE)
+
+ :::column-end:::
+ :::column span="2":::
+ **Details**
+
+ Cities, countries/regions, states.
+
+ :::column-end:::
+ :::column span="2":::
+ **Supported document languages**
+
+ `en`, `es`, `fr`, `de`, `it`, `zh-hans`, `ja`, `ko`, `pt-pt`, `pt-br`
+
+ :::column-end:::
+ :::column span="":::
+
+ Structural
+
+ :::column-end:::
+ :::column span="2":::
+
+ Manmade structures.
+
+ :::column-end:::
+ :::column span="2":::
+
+ `en`
+
+ :::column-end:::
+ :::column span="":::
+
+ Geographical
+
+ :::column-end:::
+ :::column span="2":::
+
+ Geographic and natural features such as rivers, oceans, and deserts.
+
+ :::column-end:::
+ :::column span="2":::
+
+ `en`
+
+ :::column-end:::
+
+## Category: Organization
+
+This category contains the following entity:
+
+ :::column span="":::
+ **Entity**
+
+ Organization
+
+ :::column-end:::
+ :::column span="2":::
+ **Details**
+
+ Companies, political groups, musical bands, sport clubs, government bodies, and public organizations. Nationalities and religions are not included in this entity type.
+
+ :::column-end:::
+ :::column span="2":::
+ **Supported document languages**
+
+ `ar`, `cs`, `da`, `nl`, `en`, `fi`, `fr`, `de`, `he`, `hu`, `it`, `ja`, `ko`, `no`, `pl`, `pt-br`, `pt-pt`, `ru`, `es`, `sv`, `tr`
+
+ :::column-end:::
+
+#### Subcategories
+
+The entity in this category can have the following subcategories.
+
+ :::column span="":::
+ **Entity subcategory**
+
+ Medical
+
+ :::column-end:::
+ :::column span="2":::
+ **Details**
+
+ Medical companies and groups.
+
+ :::column-end:::
+ :::column span="2":::
+ **Supported document languages**
+
+ `en`
+
+ :::column-end:::
+ :::column span="":::
+
+ Stock exchange
+
+ :::column-end:::
+ :::column span="2":::
+
+ Stock exchange groups.
+
+ :::column-end:::
+ :::column span="2":::
+
+ `en`
+
+ :::column-end:::
+ :::column span="":::
+
+ Sports
+
+ :::column-end:::
+ :::column span="2":::
+
+ Sports-related organizations.
+
+ :::column-end:::
+ :::column span="2":::
+
+ `en`
+
+ :::column-end:::
+
+## Category: Event
+
+This category contains the following entity:
+
+ :::column span="":::
+ **Entity**
+
+ Event
+
+ :::column-end:::
+ :::column span="2":::
+ **Details**
+
+ Historical, social, and naturally occurring events.
+
+ :::column-end:::
+ :::column span="2":::
+ **Supported document languages**
+
+ `en`, `es`, `fr`, `de`, `it`, `zh-hans`, `ja`, `ko`, `pt-pt` and `pt-br`
+
+ :::column-end:::
+
+#### Subcategories
+
+The entity in this category can have the following subcategories.
+
+ :::column span="":::
+ **Entity subcategory**
+
+ Cultural
+
+ :::column-end:::
+ :::column span="2":::
+ **Details**
+
+ Cultural events and holidays.
+
+ :::column-end:::
+ :::column span="2":::
+ **Supported document languages**
+
+ `en`
+
+ :::column-end:::
+ :::column span="":::
+
+ Natural
+
+ :::column-end:::
+ :::column span="2":::
+
+ Naturally occurring events.
+
+ :::column-end:::
+ :::column span="2":::
+
+ `en`
+
+ :::column-end:::
+ :::column span="":::
+
+ Sports
+
+ :::column-end:::
+ :::column span="2":::
+
+ Sporting events.
+
+ :::column-end:::
+ :::column span="2":::
+
+ `en`
+
+ :::column-end:::
+
+## Category: Product
+
+This category contains the following entity:
+
+ :::column span="":::
+ **Entity**
+
+ Product
+
+ :::column-end:::
+ :::column span="2":::
+ **Details**
+
+ Physical objects of various categories.
+
+ :::column-end:::
+ :::column span="2":::
+ **Supported document languages**
+
+ `en`, `es`, `fr`, `de`, `it`, `zh-hans`, `ja`, `ko`, `pt-pt`, `pt-br`
+
+ :::column-end:::
++
+#### Subcategories
+
+The entity in this category can have the following subcategories.
+
+ :::column span="":::
+ **Entity subcategory**
+
+ Computing products
+ :::column-end:::
+ :::column span="2":::
+ **Details**
+
+ Computing products.
+
+ :::column-end:::
+ :::column span="2":::
+ **Supported document languages**
+
+ `en`
+
+ :::column-end:::
+
+## Category: Skill
+
+This category contains the following entity:
+
+ :::column span="":::
+ **Entity**
+
+ Skill
+
+ :::column-end:::
+ :::column span="2":::
+ **Details**
+
+ A capability, skill, or expertise.
+
+ :::column-end:::
+ :::column span="2":::
+ **Supported document languages**
+
+ `en` , `es`, `fr`, `de`, `it`, `pt-pt`, `pt-br`
+
+ :::column-end:::
+
+## Category: Address
+
+This category contains the following entity:
+
+ :::column span="":::
+ **Entity**
+
+ Address
+
+ :::column-end:::
+ :::column span="2":::
+ **Details**
+
+ Full mailing address.
+
+ :::column-end:::
+ :::column span="2":::
+ **Supported document languages**
+
+ `en`, `es`, `fr`, `de`, `it`, `zh-hans`, `ja`, `ko`, `pt-pt`, `pt-br`
+
+ :::column-end:::
+
+## Category: PhoneNumber
+
+This category contains the following entity:
+
+ :::column span="":::
+ **Entity**
+
+ PhoneNumber
+
+ :::column-end:::
+ :::column span="2":::
+ **Details**
+
+ Phone numbers (US and EU phone numbers only).
+
+ :::column-end:::
+ :::column span="2":::
+ **Supported document languages**
+
+    `en`, `es`, `fr`, `de`, `it`, `zh-hans`, `ja`, `ko`, `pt-pt`, `pt-br`
+
+ :::column-end:::
+
+## Category: Email
+
+This category contains the following entity:
+
+ :::column span="":::
+ **Entity**
+
+ Email
+
+ :::column-end:::
+ :::column span="2":::
+ **Details**
+
+ Email addresses.
+
+ :::column-end:::
+ :::column span="2":::
+ **Supported document languages**
+
+ `en`, `es`, `fr`, `de`, `it`, `zh-hans`, `ja`, `ko`, `pt-pt`, `pt-br`
+
+ :::column-end:::
+
+## Category: URL
+
+This category contains the following entity:
+
+ :::column span="":::
+ **Entity**
+
+ URL
+
+ :::column-end:::
+ :::column span="2":::
+ **Details**
+
+ URLs to websites.
+
+ :::column-end:::
+ :::column span="2":::
+ **Supported document languages**
+
+ `en`, `es`, `fr`, `de`, `it`, `zh-hans`, `ja`, `ko`, `pt-pt`, `pt-br`
+
+ :::column-end:::
+
+## Category: IP
+
+This category contains the following entity:
+
+ :::column span="":::
+ **Entity**
+
+ IP
+
+ :::column-end:::
+ :::column span="2":::
+ **Details**
+
+ network IP addresses.
+
+ :::column-end:::
+ :::column span="2":::
+ **Supported document languages**
+
+ `en`, `es`, `fr`, `de`, `it`, `zh-hans`, `ja`, `ko`, `pt-pt`, `pt-br`
+
+ :::column-end:::
+
+## Category: DateTime
+
+This category contains the following entities:
+
+ :::column span="":::
+ **Entity**
+
+ DateTime
+
+ :::column-end:::
+ :::column span="2":::
+ **Details**
+
+ Dates and times of day.
+
+ :::column-end:::
+ :::column span="2":::
+ **Supported document languages**
+
+ `en`, `es`, `fr`, `de`, `it`, `zh-hans`, `ja`, `ko`, `pt-pt`, `pt-br`
+
+ :::column-end:::
+
+Entities in this category can have the following subcategories
+
+#### Subcategories
+
+The entity in this category can have the following subcategories.
+
+ :::column span="":::
+ **Entity subcategory**
+
+ Date
+
+ :::column-end:::
+ :::column span="2":::
+ **Details**
+
+    Calendar dates.
+
+ :::column-end:::
+ :::column span="2":::
+ **Supported document languages**
+
+ `en`, `es`, `fr`, `de`, `it`, `zh-hans`, `pt-pt`, `pt-br`
+
+ :::column-end:::
+ :::column span="":::
+
+ Time
+
+ :::column-end:::
+ :::column span="2":::
+
+ Times of day.
+
+ :::column-end:::
+ :::column span="2":::
+
+ `en`, `es`, `fr`, `de`, `it`, `zh-hans`, `pt-pt`, `pt-br`
+
+ :::column-end:::
+ :::column span="":::
+
+ DateRange
+
+ :::column-end:::
+ :::column span="2":::
+
+ Date ranges.
+
+ :::column-end:::
+ :::column span="2":::
+
+ `en`, `es`, `fr`, `de`, `it`, `zh-hans`, `pt-pt`, `pt-br`
+
+ :::column-end:::
+ :::column span="":::
+
+ TimeRange
+
+ :::column-end:::
+ :::column span="2":::
+
+ Time ranges.
+
+ :::column-end:::
+ :::column span="2":::
+
+ `en`, `es`, `fr`, `de`, `it`, `zh-hans`, `pt-pt`, `pt-br`
+
+ :::column-end:::
+ :::column span="":::
+
+ Duration
+
+ :::column-end:::
+ :::column span="2":::
+
+ Durations.
+
+ :::column-end:::
+ :::column span="2":::
+
+ `en`, `es`, `fr`, `de`, `it`, `zh-hans`, `pt-pt`, `pt-br`
+
+ :::column-end:::
+ :::column span="":::
+
+ Set
+
+ :::column-end:::
+ :::column span="2":::
+
+ Set, repeated times.
+
+ :::column-end:::
+ :::column span="2":::
+
+ `en`, `es`, `fr`, `de`, `it`, `zh-hans`, `pt-pt`, `pt-br`
+
+ :::column-end:::
+
+## Category: Quantity
+
+This category contains the following entities:
+
+ :::column span="":::
+ **Entity**
+
+ Quantity
+
+ :::column-end:::
+ :::column span="2":::
+ **Details**
+
+ Numbers and numeric quantities.
+
+ :::column-end:::
+ :::column span="2":::
+ **Supported document languages**
+
+ `en`, `es`, `fr`, `de`, `it`, `zh-hans`, `ja`, `ko`, `pt-pt`, `pt-br`
+
+ :::column-end:::
+
+#### Subcategories
+
+The entity in this category can have the following subcategories.
+
+ :::column span="":::
+ **Entity subcategory**
+
+ Number
+
+ :::column-end:::
+ :::column span="2":::
+ **Details**
+
+ Numbers.
+
+ :::column-end:::
+ :::column span="2":::
+ **Supported document languages**
+
+ `en`, `es`, `fr`, `de`, `it`, `zh-hans`, `pt-pt`, `pt-br`
+
+ :::column-end:::
+ :::column span="":::
+ Percentage
+
+ :::column-end:::
+ :::column span="2":::
+
+ Percentages
+
+ :::column-end:::
+ :::column span="2":::
+
+ `en`, `es`, `fr`, `de`, `it`, `zh-hans`, `pt-pt`, `pt-br`
+
+ :::column-end:::
+ :::column span="":::
+ Ordinal numbers
+
+ :::column-end:::
+ :::column span="2":::
+
+ Ordinal numbers.
+
+ :::column-end:::
+ :::column span="2":::
+
+ `en`, `es`, `fr`, `de`, `it`, `zh-hans`, `pt-pt`, `pt-br`
+
+ :::column-end:::
+ :::column span="":::
+ Age
+
+ :::column-end:::
+ :::column span="2":::
+
+ Ages.
+
+ :::column-end:::
+ :::column span="2":::
+
+ `en`, `es`, `fr`, `de`, `it`, `zh-hans`, `pt-pt`, `pt-br`
+
+ :::column-end:::
+ :::column span="":::
+ Currency
+
+ :::column-end:::
+ :::column span="2":::
+
+ Currencies
+
+ :::column-end:::
+ :::column span="2":::
+
+ `en`, `es`, `fr`, `de`, `it`, `zh-hans`, `pt-pt`, `pt-br`
+
+ :::column-end:::
+ :::column span="":::
+ Dimensions
+
+ :::column-end:::
+ :::column span="2":::
+
+ Dimensions and measurements.
+
+ :::column-end:::
+ :::column span="2":::
+
+ `en`, `es`, `fr`, `de`, `it`, `zh-hans`, `pt-pt`, `pt-br`
+
+ :::column-end:::
+ :::column span="":::
+ Temperature
+
+ :::column-end:::
+ :::column span="2":::
+
+ Temperatures.
+
+ :::column-end:::
+ :::column span="2":::
+
+ `en`, `es`, `fr`, `de`, `it`, `zh-hans`, `pt-pt`, `pt-br`
+
+ :::column-end:::
+
+# [Preview API](#tab/preview-api)
+
+## Supported Named Entity Recognition (NER) entity categories
+
+Use this article to find the entity types and the additional tags that can be returned by [Named Entity Recognition](../how-to-call.md) (NER). NER runs a predictive model to identify and categorize named entities from an input document.
+
+### Type: Address
+
+Specific street-level mentions of locations: house/building numbers, streets, avenues, highways, intersections referenced by name.
+
+### Type: Numeric
+
+Numeric values.
+
+This entity type could be tagged by the following entity tags:
+
+#### Age
+
+**Description:** Ages
+
+#### Currency
+
+**Description:** Currencies
+
+#### Number
+
+**Description:** Numbers without a unit
+
+#### NumberRange
+
+**Description:** Range of numbers
+
+#### Percentage
+
+**Description:** Percentages
+
+#### Ordinal
+
+**Description:** Ordinal Numbers
+
+#### Temperature
+
+**Description:** Temperatures
+
+#### Dimension
+
+**Description:** Dimensions or measurements
+
+This entity tag also supports tagging the entity type with the following tags:
+
+|Entity tag |Details |
+|--|-|
+|Length |Length of an object|
+|Weight |Weight of an object|
+|Height |Height of an object|
+|Speed |Speed of an object |
+|Area |Area of an object |
+|Volume |Volume of an object|
+|Information|Unit of measure for digital information|
+
+## Type: Temporal
+
+Dates and times of day
+
+This entity type could be tagged by the following entity tags:
+
+#### Date
+
+**Description:** Calendar dates
+
+#### Time
+
+**Description:** Times of day
+
+#### DateTime
+
+**Description:** Calendar dates with time
+
+#### DateRange
+
+**Description:** Date range
+
+#### TimeRange
+
+**Description:** Time range
+
+#### DateTimeRange
+
+**Description:** Date Time range
+
+#### Duration
+
+**Description:** Durations
+
+#### SetTemporal
+
+**Description:** Set, repeated times
+
+## Type: Event
+
+Events with a timed period
+
+This entity type could be tagged by the following entity tags:
+
+#### SocialEvent
+
+**Description:** Social events
+
+#### CulturalEvent
+
+**Description:** Cultural events
+
+#### NaturalEvent
+
+**Description:** Natural events
+
+## Type: Location
+
+Particular point or place in physical space
+
+This entity type could be tagged by the following entity tags:
+#### GPE
+
+**Description:** Geopolitical entity
+
+This entity tag also supports tagging the entity type with the following tags:
+
+|Entity tag |Details |
+|-|-|
+|City |Cities |
+|State |States |
+|CountryRegion|Countries/Regions |
+|Continent |Continents |
+
+#### Structural
+
+**Description:** Manmade structures
+
+This entity tag also supports tagging the entity type with the following tags:
+
+|Entity tag |Details |
+|-|-|
+|Airport |Airports |
+
+#### Geological
+
+**Description:** Geographic and natural features
+
+This entity tag also supports tagging the entity type with the following tags:
+
+|Entity tag |Details |
+|-|-|
+|River |Rivers |
+|Ocean |Oceans |
+|Desert |Deserts |
+
+## Type: Organization
+
+Corporations, agencies, and other groups of people defined by some established organizational structure
+
+This entity type could be tagged by the following entity tags:
+
+#### MedicalOrganization
+
+**Description:** Medical companies and groups
+
+#### StockExchange
+
+**Description:** Stock exchange groups
+
+#### SportsOrganization
+
+**Description:** Sports-related organizations
+
+## Type: Person
+
+Names of individuals
+
+## Type: PersonType
+
+Human roles classified by group membership
+
+## Type: Email
+
+Email addresses
+
+## Type: URL
+
+URLs to websites
+
+## Type: IP
+
+Network IP addresses
+
+## Type: PhoneNumber
+
+Phone numbers
+
+## Type: Product
+
+Commercial, consumable objects
+
+This entity type could be tagged by the following entity tags:
+
+#### ComputingProduct
+
+**Description:** Computing products
+
+## Type: Skill
+
+Capabilities, skills, or expertise
+++
+## Next steps
+
+* [NER overview](../overview.md)
ai-services How To Call https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/language-service/named-entity-recognition/how-to-call.md
+
+ Title: How to perform Named Entity Recognition (NER)
+
+description: This article will show you how to extract named entities from text.
++++++ Last updated : 01/10/2023+++++
+# How to use Named Entity Recognition (NER)
+
+The NER feature can evaluate unstructured text, and extract named entities from text in several predefined categories, for example: person, location, event, product, and organization.
+
+## Development options
++
+## Determine how to process the data (optional)
+
+### Specify the NER model
+
+By default, this feature uses the latest available AI model on your text. You can also configure your API requests to use a specific [model version](../concepts/model-lifecycle.md).
++
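+For example, if you call the text analysis REST endpoint directly, you can pin a version with the optional `modelVersion` parameter. The following is a minimal sketch of such a request body; the sample document and the `latest` value are placeholders, and you should check the model lifecycle documentation for the versions available to you.
+
+```json
+{
+    "kind": "EntityRecognition",
+    "parameters": {
+        "modelVersion": "latest"
+    },
+    "analysisInput": {
+        "documents": [
+            {
+                "id": "1",
+                "language": "en",
+                "text": "Contoso opened a new office in Seattle last week."
+            }
+        ]
+    }
+}
+```
+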
+### Input languages
+
+When you submit documents to be processed, you can specify which of [the supported languages](language-support.md) they're written in. If you don't specify a language, entity recognition defaults to English. The API may return offsets in the response to support different [multilingual and emoji encodings](../concepts/multilingual-emoji-support.md).
+
+## Submitting data
+
+Analysis is performed upon receipt of the request. Using the NER feature synchronously is stateless. No data is stored in your account, and results are returned immediately in the response.
++
+The API attempts to detect the [defined entity categories](concepts/named-entity-categories.md) for a given document language.
+
+## Getting NER results
+
+When you get results from NER, you can stream the results to an application or save the output to a file on the local system. The API response includes [recognized entities](concepts/named-entity-categories.md), along with their categories, subcategories, and confidence scores.
+
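+As an illustration, a trimmed response from the synchronous API resembles the following sketch. The document, offsets, and scores are made up, and the exact field names may vary by API version; it is shown only to indicate where the category, subcategory, and confidence score appear.
+
+```json
+{
+    "kind": "EntityRecognitionResults",
+    "results": {
+        "documents": [
+            {
+                "id": "1",
+                "entities": [
+                    {
+                        "text": "Seattle",
+                        "category": "Location",
+                        "subcategory": "GPE",
+                        "offset": 40,
+                        "length": 7,
+                        "confidenceScore": 0.99
+                    }
+                ],
+                "warnings": []
+            }
+        ],
+        "errors": [],
+        "modelVersion": "latest"
+    }
+}
+```
+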
+## Select which entities to be returned (Preview API only)
+
+Starting with **API version 2023-04-15-preview**, the API attempts to detect the [defined entity types and tags](concepts/named-entity-categories.md) for a given document language. The entity types and tags replace the categories and subcategories structure used by the older models, providing more flexibility. You can also specify which entities are detected and returned by using the optional `includeList` and `excludeList` parameters with the appropriate entity types. The following example detects only `Location`. You can specify one or more [entity types](concepts/named-entity-categories.md) to be returned. Given the types and tags hierarchy introduced in this version, you have the flexibility to filter on different granularity levels as follows:
+
+**Input:**
+
+> [!NOTE]
+> In this example, it returns only the **Location** entity type.
+
+```json
+{
+    "kind": "EntityRecognition",
+    "parameters":
+    {
+        "includeList": [
+            "Location"
+        ]
+    },
+    "analysisInput":
+    {
+        "documents":
+        [
+            {
+                "id":"1",
+                "language": "en",
+                "text": "We went to Contoso foodplace located at downtown Seattle last week for a dinner party, and we adore the spot! They provide marvelous food and they have a great menu. The chief cook happens to be the owner (I think his name is John Doe) and he is super nice, coming out of the kitchen and greeted us all. We enjoyed very much dining in the place! The pasta I ordered was tender and juicy, and the place was impeccably clean. You can even pre-order from their online menu at www.contosofoodplace.com, call 112-555-0176 or send email to order@contosofoodplace.com! The only complaint I have is the food didn't come fast enough. Overall I highly recommend it!"
+            }
+        ]
+    }
+}
+
+```
+
+The above example returns entities falling under the `Location` entity type, such as the `GPE`, `Structural`, and `Geological` tagged entities, as [outlined by entity types and tags](concepts/named-entity-categories.md). We could also further filter the returned entities by using one of the entity tags for the `Location` entity type, such as filtering over the `GPE` tag only as outlined:
+
+```json
+
+    "parameters":
+    {
+        "includeList": [
+            "GPE"
+        ]
+    }
+
+```
+
+This method returns only the `Location` entities falling under the `GPE` tag and ignores any other entity falling under the `Location` type that is tagged with a different entity tag, such as `Structural` or `Geological` tagged `Location` entities. We could also drill down further on our results by using the `excludeList` parameter. `GPE` tagged entities can be tagged with the following tags: `City`, `State`, `CountryRegion`, `Continent`. We could, for example, exclude the `Continent` and `CountryRegion` tags for our example:
+
+```json
+
+    "parameters":
+    {
+        "includeList": [
+            "GPE"
+        ],
+        "excludeList": [
+            "Continent",
+            "CountryRegion"
+        ]
+    }
+
+```
+
+Using these parameters, we can successfully filter on only `Location` entity types, since the `GPE` entity tag included in the `includeList` parameter falls under the `Location` type. We then filter on only geopolitical entities and exclude any entities tagged with the `Continent` or `CountryRegion` tags.
+
+## Service and data limits
++
+## Next steps
+
+[Named Entity Recognition overview](overview.md)
ai-services Language Support https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/language-service/named-entity-recognition/language-support.md
+
+ Title: Named Entity Recognition (NER) language support
+
+description: This article explains which natural languages are supported by the NER feature of Azure AI Language.
++++++ Last updated : 06/27/2022++++
+# Named Entity Recognition (NER) language support
+
+Use this article to learn which natural languages are supported by the NER feature of Azure AI Language.
+
+> [!NOTE]
+> * Languages are added as new [model versions](how-to-call.md#specify-the-ner-model) are released.
+> * The language support below is for model version `2023-02-01-preview` for the Generally Available API.
+> * You can additionally find the language support for the Preview API in the second tab.
+
+## NER language support
+
+# [Generally Available API](#tab/ga-api)
+
+|Language|Language Code|Supports resolution|Notes|
+|:-|:-|:-|:-|
+|Afrikaans|`af`| | |
+|Albanian|`sq`| | |
+|Amharic|`am`| | |
+|Arabic|`ar`| | |
+|Armenian|`hy`| | |
+|Assamese|`as`| | |
+|Azerbaijani|`az`| | |
+|Basque|`eu`| | |
+|Bengali|`bn`| | |
+|Bosnian|`bs`| | |
+|Bulgarian|`bg`| | |
+|Burmese|`my`| | |
+|Catalan|`ca`| | |
+|Chinese (Simplified)|`zh-Hans`|✓ |`zh` also accepted|
+|Chinese (Traditional)|`zh-Hant`| | |
+|Croatian|`hr`| | |
+|Czech|`cs`| | |
+|Danish|`da`| | |
+|Dutch|`nl`|✓ | |
+|English|`en`|✓ | |
+|Estonian|`et`| | |
+|Finnish|`fi`| | |
+|French|`fr`|✓ | |
+|Galician|`gl`| | |
+|Georgian|`ka`| | |
+|German|`de`|✓ | |
+|Greek|`el`| | |
+|Gujarati|`gu`| | |
+|Hebrew|`he`| | |
+|Hindi|`hi`|✓ | |
+|Hungarian|`hu`| | |
+|Indonesian|`id`| | |
+|Irish|`ga`| | |
+|Italian|`it`|✓ | |
+|Japanese|`ja`|✓ | |
+|Kannada|`kn`| | |
+|Kazakh|`kk`| | |
+|Khmer|`km`| | |
+|Korean|`ko`| | |
+|Kurdish (Kurmanji)|`ku`| | |
+|Kyrgyz|`ky`| | |
+|Lao|`lo`| | |
+|Latvian|`lv`| | |
+|Lithuanian|`lt`| | |
+|Macedonian|`mk`| | |
+|Malagasy|`mg`| | |
+|Malay|`ms`| | |
+|Malayalam|`ml`| | |
+|Marathi|`mr`| | |
+|Mongolian|`mn`| | |
+|Nepali|`ne`| | |
+|Norwegian (Bokmal)|`no`| |`nb` also accepted|
+|Odia|`or`| | |
+|Pashto|`ps`| | |
+|Persian|`fa`| | |
+|Polish|`pl`| | |
+|Portuguese (Brazil)|`pt-BR`|✓ | |
+|Portuguese (Portugal)|`pt-PT`| |`pt` also accepted|
+|Punjabi|`pa`| | |
+|Romanian|`ro`| | |
+|Russian|`ru`| | |
+|Serbian|`sr`| | |
+|Slovak|`sk`| | |
+|Slovenian|`sl`| | |
+|Somali|`so`| | |
+|Spanish|`es`|✓ | |
+|Swahili|`sw`| | |
+|Swazi|`ss`| | |
+|Swedish|`sv`| | |
+|Tamil|`ta`| | |
+|Telugu|`te`| | |
+|Thai|`th`| | |
+|Turkish|`tr`|✓ | |
+|Ukrainian|`uk`| | |
+|Urdu|`ur`| | |
+|Uyghur|`ug`| | |
+|Uzbek|`uz`| | |
+|Vietnamese|`vi`| | |
+|Welsh|`cy`| | |
+
+# [Preview API](#tab/preview-api)
+
+|Language|Language Code|Supports metadata|Notes|
+|:-|:-|:-|:-|
+|Afrikaans|`af`|✓||
+|Albanian|`sq`|✓||
+|Amharic|`am`|✓||
+|Arabic|`ar`|✓||
+|Armenian|`hy`|✓||
+|Assamese|`as`|✓||
+|Azerbaijani|`az`|✓||
+|Basque|`eu`|✓||
+|Belarusian (new)|`be`|✓||
+|Bengali|`bn`|✓||
+|Bosnian|`bs`|✓||
+|Breton (new)|`br`|✓||
+|Bulgarian|`bg`|✓||
+|Burmese|`my`|✓||
+|Catalan|`ca`|✓||
+|Chinese (Simplified)|`zh-Hans`|✓|`zh` also accepted|
+|Chinese (Traditional)|`zh-Hant`|✓||
+|Croatian|`hr`|✓||
+|Czech|`cs`|✓||
+|Danish|`da`|✓||
+|Dutch|`nl`|✓||
+|English|`en`|✓||
+|Esperanto (new)|`eo`|✓||
+|Estonian|`et`|✓||
+|Filipino|`fil`|✓||
+|Finnish|`fi`|✓||
+|French|`fr`|✓||
+|Galician|`gl`|✓||
+|Georgian|`ka`|✓||
+|German|`de`|✓||
+|Greek|`el`|✓||
+|Gujarati|`gu`|✓||
+|Hausa (new)|`ha`|✓||
+|Hebrew|`he`|✓||
+|Hindi|`hi`|✓||
+|Hungarian|`hu`|✓||
+|Indonesian|`id`|✓||
+|Irish|`ga`|✓||
+|Italian|`it`|✓||
+|Japanese|`ja`|✓||
+|Javanese (new)|`jv`|✓||
+|Kannada|`kn`|✓||
+|Kazakh|`kk`|✓||
+|Khmer|`km`|✓||
+|Korean|`ko`|✓||
+|Kurdish (Kurmanji)|`ku`|✓||
+|Kyrgyz|`ky`|✓||
+|Lao|`lo`|✓||
+|Latin (new)|`la`|✓||
+|Latvian|`lv`|✓||
+|Lithuanian|`lt`|✓||
+|Macedonian|`mk`|✓||
+|Malagasy|`mg`|✓||
+|Malay|`ms`|✓||
+|Malayalam|`ml`|✓||
+|Marathi|`mr`|✓||
+|Mongolian|`mn`|✓||
+|Nepali|`ne`|✓||
+|Norwegian|`no`|✓|`nb` also accepted|
+|Odia|`or`|✓||
+|Oromo (new)|`om`|✓||
+|Pashto|`ps`|✓||
+|Persian|`fa`|✓||
+|Polish|`pl`|✓||
+|Portuguese (Brazil)|`pt-BR`|✓||
+|Portuguese (Portugal)|`pt-PT`|✓|`pt` also accepted|
+|Punjabi|`pa`|✓||
+|Romanian|`ro`|✓||
+|Russian|`ru`|✓||
+|Sanskrit (new)|`sa`|✓||
+|Scottish Gaelic (new)|`gd`|✓||
+|Serbian|`sr`|✓||
+|Sindhi (new)|`sd`|✓||
+|Sinhala (new)|`si`|✓||
+|Slovak|`sk`|✓||
+|Slovenian|`sl`|✓||
+|Somali|`so`|✓||
+|Spanish|`es`|✓||
+|Sundanese (new)|`su`|✓||
+|Swahili|`sw`|✓||
+|Swedish|`sv`|✓||
+|Tamil|`ta`|✓||
+|Telugu|`te`|✓||
+|Thai|`th`|✓||
+|Turkish|`tr`|✓||
+|Ukrainian|`uk`|✓||
+|Urdu|`ur`|✓||
+|Uyghur|`ug`|✓||
+|Uzbek|`uz`|✓||
+|Vietnamese|`vi`|✓||
+|Welsh|`cy`|✓||
+|Western Frisian (new)|`fy`|✓||
+|Xhosa (new)|`xh`|✓||
+|Yiddish (new)|`yi`|✓||
+++
+## Next steps
+
+[NER feature overview](overview.md)
ai-services Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/language-service/named-entity-recognition/overview.md
+
+ Title: What is the Named Entity Recognition (NER) feature in Azure AI Language?
+
+description: An overview of the Named Entity Recognition feature in Azure AI services, which helps you extract categories of entities in text.
++++++ Last updated : 06/15/2022++++
+# What is Named Entity Recognition (NER) in Azure AI Language?
+
+Named Entity Recognition (NER) is one of the features offered by [Azure AI Language](../overview.md), a collection of machine learning and AI algorithms in the cloud for developing intelligent applications that involve written language. The NER feature can identify and categorize entities in unstructured text. For example: people, places, organizations, and quantities.
+
+* [**Quickstarts**](quickstart.md) are getting-started instructions to guide you through making requests to the service.
+* [**How-to guides**](how-to-call.md) contain instructions for using the service in more specific or customized ways.
+* The [**conceptual articles**](concepts/named-entity-categories.md) provide in-depth explanations of the service's functionality and features.
+++
+## Get started with named entity recognition
+++
+## Responsible AI
+
+An AI system includes not only the technology, but also the people who will use it, the people who will be affected by it, and the environment in which it is deployed. Read the [transparency note for NER](/legal/cognitive-services/language-service/transparency-note-named-entity-recognition?context=/azure/ai-services/language-service/context/context) to learn about responsible AI use and deployment in your systems. You can also see the following articles for more information:
++
+## Scenarios
+
+* Enhance search capabilities and search indexing - Customers can build knowledge graphs based on entities detected in documents, and use those entities as tags to enhance document search.
+* Automate business processes - For example, when reviewing insurance claims, recognized entities like name and location could be highlighted to facilitate the review. Or a support ticket could be generated with a customer's name and company automatically from an email.
+* Customer analysis - Identify the most popular topics that customers bring up in reviews, emails, and calls, and track how those topics trend over time.
+
+## Next steps
+
+There are two ways to get started using the Named Entity Recognition (NER) feature:
+* [Language Studio](../language-studio.md), which is a web-based platform that enables you to try several Azure AI Language features without needing to write code.
+* The [quickstart article](quickstart.md) for instructions on making requests to the service using the REST API and client library SDK.
ai-services Quickstart https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/language-service/named-entity-recognition/quickstart.md
+
+ Title: "Quickstart: Use the NER client library"
+
+description: Use this quickstart to start using the Named Entity Recognition (NER) API.
++++++ Last updated : 02/17/2023+
+ms.devlang: csharp, java, javascript, python
+
+keywords: text mining, key phrase
+zone_pivot_groups: programming-languages-text-analytics
++
+# Quickstart: Detecting named entities (NER)
++++++++++++++++
+## Clean up resources
+
+If you want to clean up and remove an Azure AI services subscription, you can delete the resource or resource group. Deleting the resource group also deletes any other resources associated with it.
+
+* [Portal](../../multi-service-resource.md?pivots=azportal#clean-up-resources)
+* [Azure CLI](../../multi-service-resource.md?pivots=azcli#clean-up-resources)
+++
+## Next steps
+
+* [NER overview](overview.md)
ai-services Extract Excel Information https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/language-service/named-entity-recognition/tutorials/extract-excel-information.md
+
+ Title: Extract information in Excel using Power Automate
+
+description: Learn how to Extract Excel text without having to write code, using Named Entity Recognition and Power Automate.
++++++ Last updated : 11/21/2022++++
+# Extract information in Excel using Named Entity Recognition (NER) and Power Automate
+
+In this tutorial, you'll create a Power Automate flow to extract text in an Excel spreadsheet without having to write code.
+
+This flow will take a spreadsheet of issues reported about an apartment complex, and classify them into two categories: plumbing and other. It will also extract the names and phone numbers of the tenants who sent them. Lastly, the flow will append this information to the Excel sheet.
+
+In this tutorial, you'll learn how to:
+
+> [!div class="checklist"]
+> * Use Power Automate to create a flow
+> * Upload Excel data from OneDrive for Business
+> * Extract text from Excel, and send it for Named Entity Recognition (NER)
+> * Use the information from the API to update an Excel sheet.
+
+## Prerequisites
+
+- A Microsoft Azure account. [Create a free account](https://azure.microsoft.com/free/cognitive-services/) or [sign in](https://portal.azure.com/).
+- A Language resource. If you don't have one, you can [create one in the Azure portal](https://portal.azure.com/#create/Microsoft.CognitiveServicesTextAnalytics) and use the free tier to complete this tutorial.
+- The [key and endpoint](../../../multi-service-resource.md?pivots=azportal#get-the-keys-for-your-resource) that was generated for you during sign-up.
+- A spreadsheet containing tenant issues. Example data for this tutorial is [available on GitHub](https://github.com/Azure-Samples/cognitive-services-sample-data-files/blob/master/TextAnalytics/sample-data/ReportedIssues.xlsx).
+- Microsoft 365, with [OneDrive for business](https://www.microsoft.com/microsoft-365/onedrive/onedrive-for-business).
+
+## Add the Excel file to OneDrive for Business
+
+Download the example Excel file from [GitHub](https://github.com/Azure-Samples/cognitive-services-sample-data-files/blob/master/TextAnalytics/sample-data/ReportedIssues.xlsx). This file must be stored in your OneDrive for Business account.
++
+The issues are reported in raw text. We will use the NER feature to extract each person's name and phone number. Then the flow will look for the word "plumbing" in the description to categorize the issues.
+
+## Create a new Power Automate workflow
+
+Go to the [Power Automate site](https://make.powerautomate.com/), and log in. Then select **Create** and **Scheduled flow**.
++
+On the **Build a scheduled cloud flow** page, initialize your flow with the following fields:
+
+|Field |Value |
+|||
+|**Flow name** | **Scheduled Review** or another name. |
+|**Starting** | Enter the current date and time. |
+|**Repeat every** | **1 hour** |
+
+## Add variables to the flow
+
+Create variables representing the information that will be added to the Excel file. Select **New Step** and search for **Initialize variable**. Do this four times, to create four variables.
+++
+Add the following information to the variables you created. They represent the columns of the Excel file. If any variables are collapsed, you can select them to expand them.
+
+| Action |Name | Type | Value |
+|||||
+| Initialize variable | var_person | String | Person |
+| Initialize variable 2 | var_phone | String | Phone Number |
+| Initialize variable 3 | var_plumbing | String | plumbing |
+| Initialize variable 4 | var_other | String | other |
++
+## Read the excel file
+
+Select **New Step** and type **Excel**, then select **List rows present in a table** from the list of actions.
++
+Add the Excel file to the flow by filling in the fields in this action. This tutorial requires the file to have been uploaded to OneDrive for Business.
++
+Select **New Step** and add an **Apply to each** action.
++
+Select **Select an output from previous step**. In the Dynamic content box that appears, select **value**.
++
+## Send a request for entity recognition
+
+If you haven't already, you need to create a [Language resource](https://portal.azure.com/#create/Microsoft.CognitiveServicesTextAnalytics) in the Azure portal.
+
+### Create a Language service connection
+
+In the **Apply to each**, select **Add an action**. Go to your Language resource's **key and endpoint** page in the Azure portal, and get the key and endpoint for your Language resource.
+
+In your flow, enter the following information to create a new Language connection.
+
+> [!NOTE]
+> If you have already created a Language connection and want to change your connection details, select the ellipsis in the top-right corner, and select **+ Add new connection**.
+
+| Field | Value |
+|--|-|
+| Connection Name | A name for the connection to your Language resource. For example, `TAforPowerAutomate`. |
+| Account key | The key for your Language resource. |
+| Site URL | The endpoint for your Language resource. |
+++
+## Extract the excel content
+
+After the connection is created, search for **Text Analytics** and select **Named Entity Recognition**. This will extract information from the description column of the issue.
++
+Select the **Text** field and select **Description** from the Dynamic content window that appears. Enter `en` for Language, and a unique name as the document ID (you might need to select **Show advanced options**).
+++
+Within the **Apply to each**, select **Add an action** and create another **Apply to each** action. Select inside the text box and select **documents** in the Dynamic Content window that appears.
+++
+## Extract the person name
+
+Next, we will find the person entity type in the NER output. Within the **Apply to each 2**, select **Add an action**, and create another **Apply to each** action. Select inside the text box and select **Entities** in the Dynamic Content window that appears.
+++
+Within the newly created **Apply to each 3** action, select **Add an action**, and add a **Condition** control.
+++
+In the Condition window, select the first text box. In the Dynamic content window, search for **Category** and select it.
+++
+Make sure the second box is set to **is equal to**. Then select the third box, and search for `var_person` in the Dynamic content window.
+++
+In the **If yes** condition, type in Excel then select **Update a Row**.
++
+Enter the Excel information, and update the **Key Column**, **Key Value** and **PersonName** fields. This will append the name detected by the API to the Excel sheet.
++
+## Get the phone number
+
+Minimize the **Apply to each 3** action by clicking on the name. Then add another **Apply to each** action to **Apply to each 2**, like before. It will be named **Apply to each 4**. Select the text box, and add **entities** as the output for this action.
++
+Within **Apply to each 4**, add a **Condition** control. It will be named **Condition 2**. In the first text box, search for, and add **categories** from the Dynamic content window. Be sure the center box is set to **is equal to**. Then, in the right text box, enter `var_phone`.
++
+In the **If yes** condition, add an **Update a row** action. Then enter the information like we did above, for the phone numbers column of the Excel sheet. This will append the phone number detected by the API to the Excel sheet.
++
+## Get the plumbing issues
+
+Minimize **Apply to each 4** by clicking on the name. Then create another **Apply to each** in the parent action. Select the text box, and add **Entities** as the output for this action from the Dynamic content window.
++
+Next, the flow will check if the issue description from the Excel table row contains the word "plumbing". If yes, it will add "plumbing" in the IssueType column. If not, we will enter "other."
+
+Inside the **Apply to each 4** action, add a **Condition** Control. It will be named **Condition 3**. In the first text box, search for, and add **Description** from the Excel file, using the Dynamic content window. Be sure the center box says **contains**. Then, in the right text box, find and select `var_plumbing`.
++
+In the **If yes** condition, select **Add an action**, and select **Update a row**. Then enter the information like before. In the IssueType column, select `var_plumbing`. This will apply a "plumbing" label to the row.
+
+In the **If no** condition, select **Add an action**, and select **Update a row**. Then enter the information like before. In the IssueType column, select `var_other`. This will apply an "other" label to the row.
++
+## Test the workflow
+
+In the top-right corner of the screen, select **Save**, then **Test**. Under **Test Flow**, select **manually**. Then select **Test**, and **Run flow**.
+
+The Excel file will get updated in your OneDrive account. It will look similar to the following.
++
+## Next steps
++
ai-services Data Formats https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/language-service/orchestration-workflow/concepts/data-formats.md
+
+ Title: Orchestration workflow data formats
+
+description: Learn about the data formats accepted by orchestration workflow.
++++++ Last updated : 05/19/2022++++
+# Data formats accepted by orchestration workflow
+
+Your model expects the data it learns from to be in a specific format. When you tag your data in Language Studio, it gets converted to the JSON format described in this article. You can also manually tag your files.
++
+## JSON file format
+
+If you upload a tags file, it should follow this format.
+
+```json
+{
+ "projectFileVersion": "{API-VERSION}",
+ "stringIndexType": "Utf16CodeUnit",
+ "metadata": {
+ "projectKind": "Orchestration",
+ "projectName": "{PROJECT-NAME}",
+ "multilingual": false,
+ "description": "This is a description",
+ "language": "{LANGUAGE-CODE}"
+ },
+ "assets": {
+ "projectKind": "Orchestration",
+ "intents": [
+ {
+ "category": "{INTENT1}",
+ "orchestration": {
+ "targetProjectKind": "Luis|Conversation|QuestionAnswering",
+ "luisOrchestration": {
+ "appId": "{APP-ID}",
+ "appVersion": "0.1",
+ "slotName": "production"
+ },
+ "conversationOrchestration": {
+ "projectName": "{PROJECT-NAME}",
+ "deploymentName": "{DEPLOYMENT-NAME}"
+ },
+ "questionAnsweringOrchestration": {
+ "projectName": "{PROJECT-NAME}"
+ }
+ }
+ }
+ ],
+ "utterances": [
+ {
+ "text": "utterance 1",
+ "language": "{LANGUAGE-CODE}",
+ "dataset": "{DATASET}",
+ "intent": "intent1"
+ }
+ ]
+ }
+}
+```
+
+|Key |Placeholder |Value | Example |
+|||-|--|
+| `api-version` | `{API-VERSION}` | The version of the API you are calling. The value referenced here is for the latest released [model version](../../concepts/model-lifecycle.md#choose-the-model-version-used-on-your-data). | `2022-03-01-preview` |
+|`confidenceThreshold`|`{CONFIDENCE-THRESHOLD}`|This is the threshold score below which the intent will be predicted as [none intent](none-intent.md)|`0.7`|
+| `projectName` | `{PROJECT-NAME}` | The name of your project. This value is case-sensitive. | `EmailApp` |
+| `multilingual` | `false`| Orchestration doesn't support the multilingual feature | `false`|
+| `language` | `{LANGUAGE-CODE}` | A string specifying the language code for the utterances used in your project. See [Language support](../language-support.md) for more information about supported language codes. |`en-us`|
+| `intents` | `[]` | Array containing all the intent types you have in the project. These are the intents used in the orchestration project.| `[]` |
++
+## Utterance format
+
+```json
+[
+ {
+ "intent": "intent1",
+ "language": "{LANGUAGE-CODE}",
+ "text": "{Utterance-Text}",
+ },
+ {
+ "intent": "intent2",
+ "language": "{LANGUAGE-CODE}",
+ "text": "{Utterance-Text}",
+ }
+]
+
+```
+++
+## Next steps
+* You can import your labeled data into your project directly. Learn how to [import project](../how-to/create-project.md)
+* See the [how-to article](../how-to/tag-utterances.md) more information about labeling your data. When you're done labeling your data, you can [train your model](../how-to/train-model.md).
ai-services Evaluation Metrics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/language-service/orchestration-workflow/concepts/evaluation-metrics.md
+
+ Title: Orchestration workflow model evaluation metrics
+
+description: Learn about evaluation metrics in orchestration workflow
++++++ Last updated : 05/19/2022++++
+# Evaluation metrics for orchestration workflow models
+
+Your dataset is split into two parts: a set for training, and a set for testing. The training set is used to train the model, while the testing set is used as a test for the model after training, to calculate the model performance and evaluation. The testing set isn't introduced to the model through the training process, to make sure that the model is tested on new data. <!--See [data splitting](../how-to/train-model.md#data-splitting) for more information-->
+
+Model evaluation is triggered automatically after training is completed successfully. The evaluation process starts by using the trained model to predict user defined intents for utterances in the test set, and compares them with the provided tags (which establishes a baseline of truth). The results are returned so you can review the model's performance. For evaluation, orchestration workflow uses the following metrics:
+
+* **Precision**: Measures how precise/accurate your model is. It's the ratio between the correctly identified positives (true positives) and all identified positives. The precision metric reveals how many of the predicted classes are correctly labeled.
+
+ `Precision = #True_Positive / (#True_Positive + #False_Positive)`
+
+* **Recall**: Measures the model's ability to predict actual positive classes. It's the ratio between the predicted true positives and what was actually tagged. The recall metric reveals how many of the predicted classes are correct.
+
+ `Recall = #True_Positive / (#True_Positive + #False_Negatives)`
+
+* **F1 score**: The F1 score is a function of Precision and Recall. It's needed when you seek a balance between Precision and Recall.
+
+ `F1 Score = 2 * Precision * Recall / (Precision + Recall)`
++
+Precision, recall, and F1 score are calculated for:
+* Each intent separately (intent-level evaluation)
+* For the model collectively (model-level evaluation).
+
+The definitions of precision, recall, and evaluation are the same for intent-level and model-level evaluations. However, the counts for *True Positives*, *False Positives*, and *False Negatives* can differ. For example, consider the following text.
+
+### Example
+
+* Make a response with thank you very much
+* Call my friend
+* Hello
+* Good morning
+
+These are the intents used: *CLUEmail* and *Greeting*
+
+The model could make the following predictions:
+
+| Utterance | Predicted intent | Actual intent |
+|--|--|--|
+|Make a response with thank you very much|CLUEmail|CLUEmail|
+|Call my friend|Greeting|CLUEmail|
+|Hello|CLUEmail|Greeting|
+|Good morning| Greeting|Greeting|
+
+### Intent level evaluation for *CLUEmail* intent
+
+| Key | Count | Explanation |
+|--|--|--|
+| True Positive | 1 | Utterance 1 was correctly predicted as *CLUEmail*. |
+| False Positive | 1 |Utterance 3 was mistakenly predicted as *CLUEmail*. |
+| False Negative | 1 | Utterance 2 was mistakenly predicted as *Greeting*. |
+
+**Precision** = `#True_Positive / (#True_Positive + #False_Positive) = 1 / (1 + 1) = 0.5`
+
+**Recall** = `#True_Positive / (#True_Positive + #False_Negatives) = 1 / (1 + 1) = 0.5`
+
+**F1 Score** = `2 * Precision * Recall / (Precision + Recall) = (2 * 0.5 * 0.5) / (0.5 + 0.5) = 0.5`
+
+### Intent level evaluation for *Greeting* intent
+
+| Key | Count | Explanation |
+|--|--|--|
+| True Positive | 1 | Utterance 4 was correctly predicted as *Greeting*. |
+| False Positive | 1 |Utterance 2 was mistakenly predicted as *Greeting*. |
+| False Negative | 1 | Utterance 3 was mistakenly predicted as *CLUEmail*. |
+
+**Precision** = `#True_Positive / (#True_Positive + #False_Positive) = 1 / (1 + 1) = 0.5`
+
+**Recall** = `#True_Positive / (#True_Positive + #False_Negatives) = 1 / (1 + 1) = 0.5`
+
+**F1 Score** = `2 * Precision * Recall / (Precision + Recall) = (2 * 0.5 * 0.5) / (0.5 + 0.5) = 0.5`
++
+### Model-level evaluation for the collective model
+
+| Key | Count | Explanation |
+|--|--|--|
+| True Positive | 2 | Sum of TP for all intents |
+| False Positive | 2| Sum of FP for all intents |
+| False Negative | 2 | Sum of FN for all intents |
+
+**Precision** = `#True_Positive / (#True_Positive + #False_Positive) = 2 / (2 + 2) = 0.5`
+
+**Recall** = `#True_Positive / (#True_Positive + #False_Negatives) = 2 / (2 + 2) = 0.5`
+
+**F1 Score** = `2 * Precision * Recall / (Precision + Recall) = (2 * 0.5 * 0.5) / (0.5 + 0.5) = 0.5`
++
+## Confusion matrix
+
+A Confusion matrix is an N x N matrix used for model performance evaluation, where N is the number of intents.
+The matrix compares the actual tags with the tags predicted by the model.
+This gives a holistic view of how well the model is performing and what kinds of errors it is making.
+
+You can use the Confusion matrix to identify intents that are too close to each other and often get mistaken (ambiguity). In this case consider merging these intents together. If that isn't possible, consider adding more tagged examples of both intents to help the model differentiate between them.
+
+You can calculate the model-level evaluation metrics from the confusion matrix:
+
+* The *true positive* of the model is the sum of *true positives* for all intents.
+* The *false positive* of the model is the sum of *false positives* for all intents.
+* The *false negative* of the model is the sum of *false negatives* for all intents.
+
+## Next steps
+
+[Train a model in Language Studio](../how-to/train-model.md)
ai-services Fail Over https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/language-service/orchestration-workflow/concepts/fail-over.md
+
+ Title: Save and recover orchestration workflow models
+
+description: Learn how to save and recover your orchestration workflow models.
++++++ Last updated : 05/19/2022++++
+# Back up and recover your orchestration workflow models
+
+When you create a Language resource in the Azure portal, you specify a region for it to be created in. From then on, your resource and all of the operations related to it take place in the specified Azure server region. It's rare, but not impossible, to encounter a network issue that hits an entire region. If your solution needs to always be available, then you should design it to fail over to another region. This requires two Azure AI Language resources in different regions and the ability to sync your orchestration workflow models across regions.
+
+If your app or business depends on the use of an orchestration workflow model, we recommend that you create a replica of your project into another supported region. So that if a regional outage occurs, you can then access your model in the other fail-over region where you replicated your project.
+
+Replicating a project means that you export your project metadata and assets and import them into a new project. This only makes a copy of your project settings, intents and utterances. You still need to [train](../how-to/train-model.md) and [deploy](../how-to/deploy-model.md) the models before you can [query them](../how-to/call-api.md) with the [runtime APIs](https://aka.ms/clu-apis).
++
+In this article, you will learn how to use the export and import APIs to replicate your project from one resource to another in a different supported geographical region, along with guidance on keeping your projects in sync and the changes needed to your runtime consumption.
+
+## Prerequisites
+
+* Two Azure AI Language resources, each in a different supported Azure region.
+
+## Get your resource keys and endpoint
+
+Use the following steps to get the keys and endpoint of your primary and secondary resources. These will be used in the following steps.
++
+> [!TIP]
+> Keep a note of keys and endpoints for both primary and secondary resources. Use these values to replace the following placeholders:
+`{PRIMARY-ENDPOINT}`, `{PRIMARY-RESOURCE-KEY}`, `{SECONDARY-ENDPOINT}` and `{SECONDARY-RESOURCE-KEY}`.
+> Also take note of your project name, your model name and your deployment name. Use these values to replace the following placeholders: `{PROJECT-NAME}`, `{MODEL-NAME}` and `{DEPLOYMENT-NAME}`.
+
+## Export your primary project assets
+
+Start by exporting the project assets from the project in your primary resource.
+
+### Submit export job
+
+Replace the placeholders in the following request with your `{PRIMARY-ENDPOINT}` and `{PRIMARY-RESOURCE-KEY}` that you obtained in the first step.
++
+### Get export job status
+
+Replace the placeholders in the following request with your `{PRIMARY-ENDPOINT}` and `{PRIMARY-RESOURCE-KEY}` that you obtained in the first step.
++
+Copy the response body as you will use it as the body for the next import job.
+
+## Import to a new project
+
+Now go ahead and import the exported project assets in your new project in the secondary region so you can replicate it.
+
+### Submit import job
+
+Replace the placeholders in the following request with your `{SECONDARY-ENDPOINT}` and `{SECONDARY-RESOURCE-KEY}` that you obtained in the first step.
++
+### Get import job status
+
+Replace the placeholders in the following request with your `{SECONDARY-ENDPOINT}` and `{SECONDARY-RESOURCE-KEY}` that you obtained in the first step.
+++
+## Train your model
+
+After importing your project, you have only copied the project's metadata and assets. You still need to train your model, which will incur usage on your account.
+
+### Submit training job
+
+Replace the placeholders in the following request with your `{SECONDARY-ENDPOINT}` and `{SECONDARY-RESOURCE-KEY}` that you obtained in the first step.
+++
+### Get Training Status
+
+Replace the placeholders in the following request with your `{SECONDARY-ENDPOINT}` and `{SECONDARY-RESOURCE-KEY}` that you obtained in the first step.
++
+## Deploy your model
+
+This is the step where you make your trained model available for consumption via the [runtime prediction API](https://aka.ms/ct-runtime-swagger).
+
+> [!TIP]
+> Use the same deployment name as your primary project for easier maintenance and minimal changes to your system to handle redirecting your traffic.
+
+### Submit deployment job
+
+Replace the placeholders in the following request with your `{SECONDARY-ENDPOINT}` and `{SECONDARY-RESOURCE-KEY}` that you obtained in the first step.
++
+### Get the deployment status
+
+Replace the placeholders in the following request with your `{SECONDARY-ENDPOINT}` and `{SECONDARY-RESOURCE-KEY}` that you obtained in the first step.
++
+## Changes in calling the runtime
+
+Within your system, at the step where you call the [runtime API](https://aka.ms/clu-apis), check the response code returned from the submit task API. If you observe a **consistent** failure in submitting the request, this could indicate an outage in your primary region. A single failure doesn't mean an outage; it may be a transient issue. Retry submitting the job through the secondary resource you have created. For the second request, use your `{YOUR-SECONDARY-ENDPOINT}` and secondary key. If you have followed the steps above, `{PROJECT-NAME}` and `{DEPLOYMENT-NAME}` would be the same, so no changes are required to the request body.
+
+If you revert to using your secondary resource, you will observe a slight increase in latency because of the difference in regions where your model is deployed.
+
+## Check if your projects are out of sync
+
+Maintaining the freshness of both projects is an important part of the process. You need to frequently check whether any updates were made to your primary project so that you can move them over to your secondary project. This way, if your primary region fails and you move to the secondary region, you should expect similar model performance since it already contains the latest updates. Setting the frequency of checking if your projects are in sync is an important choice; we recommend that you do this check daily in order to guarantee the freshness of data in your secondary model.
+
+### Get project details
+
+Use the following URL to get your project details; one of the keys returned in the body indicates the last modified date of the project.
+Repeat the following step twice, once for your primary project and once for your secondary project, and compare the timestamps returned for both to check if they are out of sync.
++
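+For reference, a project details response resembles the following sketch. The values are placeholders and the exact set of fields may differ by API version; `lastModifiedDateTime` is the key to compare between the two projects.
+
+```json
+{
+    "createdDateTime": "2022-04-18T13:53:03Z",
+    "lastModifiedDateTime": "2022-04-18T13:53:03Z",
+    "lastTrainedDateTime": "2022-04-18T14:14:28Z",
+    "lastDeployedDateTime": "2022-04-18T14:49:01Z",
+    "projectKind": "Orchestration",
+    "projectName": "{PROJECT-NAME}",
+    "multilingual": false,
+    "description": "This is a description",
+    "language": "{LANGUAGE-CODE}"
+}
+```
+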
+Repeat the same steps for your replicated project using `{SECONDARY-ENDPOINT}` and `{SECONDARY-RESOURCE-KEY}`. Compare the returned `lastModifiedDateTime` from both projects. If your primary project was modified more recently than your secondary one, you need to repeat the steps of [exporting](#export-your-primary-project-assets), [importing](#import-to-a-new-project), [training](#train-your-model) and [deploying](#deploy-your-model) your model.
+
+## Next steps
+
+In this article, you have learned how to use the export and import APIs to replicate your project to a secondary Language resource in other region. Next, explore the API reference docs to see what else you can do with authoring APIs.
+
+* [Authoring REST API reference](https://aka.ms/clu-authoring-apis)
+
+* [Runtime prediction REST API reference](https://aka.ms/clu-apis)
ai-services None Intent https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/language-service/orchestration-workflow/concepts/none-intent.md
+
+ Title: Orchestration workflow none intent
+
+description: Learn about the default None intent in orchestration workflow.
++++++ Last updated : 06/03/2022+++++
+# The "None" intent in orchestration workflow
+
+Every project in orchestration workflow includes a default None intent. The None intent is a required intent and can't be deleted or renamed. The intent is meant to categorize any utterances that do not belong to any of your other custom intents.
+
+An utterance can be predicted as the None intent if the top scoring intent's score is **lower** than the None score threshold. It can also be predicted if the utterance is similar to examples added to the None intent.
+
+## None score threshold
+
+You can go to the **project settings** of any project and set the **None score threshold**. The threshold is a decimal score from **0.0** to **1.0**.
+
+If, for any query or utterance, the highest scoring intent ends up **lower** than the threshold score, the top intent is automatically replaced with the None intent. The scores of all the other intents remain unchanged.
+
+The score should be set according to your own observations of prediction scores, as they may vary by project. A higher threshold score forces the utterances to be more similar to the examples you have in your training data.
+
+When you export a project's JSON file, the None score threshold is defined in the _**"settings"**_ parameter of the JSON as the _**"confidenceThreshold"**_, which accepts a decimal value between 0.0 and 1.0.
+
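+For example, the relevant fragment of an exported project might look like the following sketch (other keys are omitted, and the exact nesting can differ by API version):
+
+```json
+{
+    "settings": {
+        "confidenceThreshold": 0.5
+    }
+}
+```
+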
+The default score threshold for orchestration workflow projects is set to **0.5** when you create a new project in Language Studio.
+
+> [!NOTE]
+> During model evaluation of your test set, the None score threshold is not applied.
+
+## Adding examples to the None intent
+
+The None intent is also treated like any other intent in your project. If there are utterances that you want predicted as None, consider adding similar examples to them in your training data. For example, if you would like to categorize utterances that are not important to your project as None, then add those utterances to the None intent.
+
+## Next steps
+
+[Orchestration workflow overview](../overview.md)
ai-services Faq https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/language-service/orchestration-workflow/faq.md
+
+ Title: Frequently Asked Questions for orchestration projects
+
+description: Use this article to quickly get the answers to FAQ about orchestration projects
++++++ Last updated : 06/21/2022++++
+# Frequently asked questions for orchestration workflows
+
+Use this article to quickly get the answers to common questions about orchestration workflows.
+
+## How do I create a project?
+
+See the [quickstart](./quickstart.md) to quickly create your first project, or the [how-to article](./how-to/create-project.md) for more details.
+
+## How do I connect other service applications in orchestration workflow projects?
+
+See [How to create projects and build schemas](./how-to/create-project.md) for information on connecting another project as an intent.
+
+## Which LUIS applications can I connect to in orchestration workflow projects?
+
+LUIS applications that use the Language resource as their authoring resource will be available for connection. You can only connect to LUIS applications that are owned by the same resource. This option will only be available for resources in West Europe, as it's the only common available region between LUIS and CLU.
+
+## Which question answering project can I connect to in orchestration workflow projects?
+
+Question answering projects that use the Language resource will be available for connection. You can only connect to question answering projects that are in the same Language resource.
+
+## Training is taking a long time, is this expected?
+
+For orchestration projects, long training times are expected. Based on the number of examples you have, training times may vary from 5 minutes to 1 hour or more.
+
+## Can I add entities to orchestration workflow projects?
+
+No. Orchestration projects are only enabled for intents that can be connected to other projects for routing.
+
+<!--
+## Which languages are supported in this feature?
+
+See the [language support](./language-support.md) article.
+-->
+## How do I get more accurate results for my project?
+
+See [evaluation metrics](./concepts/evaluation-metrics.md) for information on how models are evaluated, and metrics you can use to improve accuracy.
+<!--
+## How many intents, and utterances can I add to a project?
+
+See the [service limits](./service-limits.md) article.
+-->
+## Can I label the same word as 2 different entities?
+
+Unlike LUIS, you cannot label the same text as 2 different entities. Learned components across different entities are mutually exclusive, and only one learned span is predicted for each set of characters.
+
+## Is there any SDK support?
+
+Yes, only for predictions, and [samples are available](https://aka.ms/cluSampleCode). There is currently no authoring support for the SDK.
+
+## Are there APIs for this feature?
+
+Yes, all the APIs are available.
+* [Authoring APIs](https://aka.ms/clu-authoring-apis)
+* [Prediction API](/rest/api/language/2023-04-01/conversation-analysis-runtime/analyze-conversation)
+
+## Next steps
+
+[Orchestration workflow overview](overview.md)
ai-services Glossary https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/language-service/orchestration-workflow/glossary.md
+
+ Title: Definitions used in orchestration workflow
+
+description: Learn about definitions used in orchestration workflow.
++++++ Last updated : 05/19/2022++++
+# Terms and definitions used in orchestration workflow
+Use this article to learn about some of the definitions and terms you may encounter when using orchestration workflow.
+
+## F1 score
+The F1 score is a function of Precision and Recall. It's needed when you seek a balance between [precision](#precision) and [recall](#recall).
+
+## Intent
+An intent represents a task or action the user wants to perform. It is a purpose or goal expressed in a user's input, such as booking a flight, or paying a bill.
+
+## Model
+A model is an object that's trained to do a certain task, in this case conversation understanding tasks. Models are trained by providing labeled data to learn from so they can later be used to understand utterances.
+
+* **Model evaluation** is the process that happens right after training to determine how well your model performs.
+* **Deployment** is the process of assigning your model to a deployment to make it available for use via the [prediction API](https://aka.ms/ct-runtime-swagger).
+
+## Overfitting
+Overfitting happens when the model is fixated on the specific examples and is not able to generalize well.
+
+## Precision
+Measures how precise/accurate your model is. It's the ratio between the correctly identified positives (true positives) and all identified positives. The precision metric reveals how many of the predicted classes are correctly labeled.
+
+## Project
+A project is a work area for building your custom ML models based on your data. Your project can only be accessed by you and others who have access to the Azure resource being used.
+
+## Recall
+Measures the model's ability to predict actual positive classes. It's the ratio between the predicted true positives and what was actually tagged. The recall metric reveals how many of the predicted classes are correct.
+
+## Schema
+Schema is defined as the combination of intents within your project. Schema design is a crucial part of your project's success. When creating a schema, you want to think about which intents should be included in your project.
+
+## Training data
+Training data is the set of information that is needed to train a model.
+
+## Utterance
+
+An utterance is user input: short text representative of a sentence in a conversation. It is a natural language phrase such as "book 2 tickets to Seattle next Tuesday". Example utterances are added to train the model, and the model predicts on new utterances at runtime.
++
+## Next steps
+
+* [Data and service limits](service-limits.md).
+* [Orchestration workflow overview](../overview.md).
ai-services Build Schema https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/language-service/orchestration-workflow/how-to/build-schema.md
+
+ Title: How to build an orchestration project schema
+description: Learn how to define intents for your orchestration workflow project.
+++++++ Last updated : 05/20/2022++++
+# How to build your project schema for orchestration workflow
+
+In orchestration workflow projects, the *schema* is defined as the combination of intents within your project. Schema design is a crucial part of your project's success. When creating a schema, you want to think about which intents should be included in your project.
+
+## Guidelines and recommendations
+
+Consider the following guidelines and recommendations for your project:
+
+* Build orchestration projects when you need to manage the NLU for a multi-faceted virtual assistant or chatbot, where the intents, entities, and utterances would begin to be far more difficult to maintain over time in one project.
+* Orchestrate between different domains. A domain is a collection of intents and entities that serve the same purpose, such as Email commands vs. Restaurant commands.
+* If there is an overlap of similar intents between domains, create the common intents in a separate domain and remove them from the others for the best accuracy.
+* For intents that are general across domains, such as "Greeting", "Confirm", "Reject", you can either add them in a separate domain or as direct intents in the Orchestration project.
+* Orchestrate to Custom question answering knowledge base when a domain has FAQ type questions with static answers. Ensure that the vocabulary and language used to ask questions is distinctive from the one used in the other Conversational Language Understanding projects and LUIS applications.
+* If an utterance is being misclassified and routed to an incorrect intent, then add similar utterances to the intent to influence its results. If the intent is connected to a project, then add utterances to the connected project itself. After you retrain your orchestration project, the new utterances in the connected project will influence predictions.
+* Add test data to your orchestration projects to validate there isn't confusion between linked projects and other intents.
++
+## Add intents
+
+To build a project schema within [Language Studio](https://aka.ms/languageStudio):
+
+1. Select **Schema definition** from the left side menu.
+
+2. To create an intent, select **Add** from the top menu. You will be prompted to type in a name for the intent.
+
+3. To connect your intent to other existing projects, select **Yes, I want to connect it to an existing project** option. You can alternatively create a non-connected intent by selecting the **No, I don't want to connect to a project** option.
+
+4. If you choose to create a connected intent, choose the service you are connecting to from **Connected service**, then choose the **project name**. You can connect your intent to only one project.
+
+ :::image type="content" source="../media/build-schema-page.png" alt-text="A screenshot showing the schema creation page in Language Studio." lightbox="../media/build-schema-page.png":::
+
+> [!TIP]
+> Use connected intents to connect to other projects (conversational language understanding, LUIS, and question answering)
+
+5. Select **Add intent** to add your intent.
+
+## Next steps
+
+* [Add utterances](tag-utterances.md)
+
ai-services Call Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/language-service/orchestration-workflow/how-to/call-api.md
+
+ Title: How to send requests to orchestration workflow
+
+description: Learn about sending requests for orchestration workflow.
++++++ Last updated : 06/28/2022+
+ms.devlang: csharp, python
+++
+# Query deployment for intent predictions
+
+After the deployment is added successfully, you can query the deployment for intent and entity predictions from your utterance based on the model you assigned to the deployment.
+You can query the deployment programmatically through the [Prediction API](https://aka.ms/ct-runtime-swagger) or through the [Client libraries (Azure SDK)](#send-an-orchestration-workflow-request).
+
+## Test deployed model
+
+You can use Language Studio to submit an utterance, get predictions and visualize the results.
++++
+## Send an orchestration workflow request
+
+# [Language Studio](#tab/language-studio)
++
+# [REST APIs](#tab/REST-APIs)
+
+First you will need to get your resource key and endpoint:
++
+### Query your model
++
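+As an illustration, a prediction request body for an orchestration deployment typically resembles the following sketch. The project name, deployment name, and utterance are placeholders, and the exact schema depends on the API version you call; check the prediction API reference for the authoritative format.
+
+```json
+{
+    "kind": "Conversation",
+    "analysisInput": {
+        "conversationItem": {
+            "id": "1",
+            "participantId": "1",
+            "text": "Send an email to Carol about tomorrow's demo"
+        }
+    },
+    "parameters": {
+        "projectName": "{PROJECT-NAME}",
+        "deploymentName": "{DEPLOYMENT-NAME}",
+        "stringIndexType": "TextElement_V8"
+    }
+}
+```
+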
+# [Client libraries (Azure SDK)](#tab/azure-sdk)
+
+First you will need to get your resource key and endpoint:
++
+### Use the client libraries (Azure SDK)
+
+You can also use the client libraries provided by the Azure SDK to send requests to your model.
+
+> [!NOTE]
+> The client library for conversational language understanding is only available for:
+> * .NET
+> * Python
+
+1. Go to your resource overview page in the [Azure portal](https://portal.azure.com/#home)
+
+2. From the menu on the left side, select **Keys and Endpoint**. Use the endpoint for the API requests, and you will need the key for the `Ocp-Apim-Subscription-Key` header.
+
+ :::image type="content" source="../../custom-text-classification/media/get-endpoint-azure.png" alt-text="Screenshot showing how to get the Azure endpoint." lightbox="../../custom-text-classification/media/get-endpoint-azure.png":::
+
+3. Download and install the client library package for your language of choice:
+
+ |Language |Package version |
+ |||
+ |.NET | [1.0.0](https://www.nuget.org/packages/Azure.AI.Language.Conversations/1.0.0) |
+ |Python | [1.0.0](https://pypi.org/project/azure-ai-language-conversations/1.0.0) |
+
+4. After you've installed the client library, use the following samples on GitHub to start calling the API.
+
+ * [C#](https://github.com/Azure/azure-sdk-for-net/tree/Azure.AI.Language.Conversations_1.0.0/sdk/cognitivelanguage/Azure.AI.Language.Conversations)
+ * [Python](https://github.com/Azure/azure-sdk-for-python/tree/azure-ai-language-conversations_1.0.0/sdk/cognitivelanguage/azure-ai-language-conversations)
+
+5. See the following reference documentation for more information:
+
+ * [C#](/dotnet/api/azure.ai.language.conversations)
+ * [Python](/python/api/azure-ai-language-conversations/azure.ai.language.conversations.aio)
+
++
+## Next steps
+
+* [Orchestration workflow overview](../overview.md)
ai-services Create Project https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/language-service/orchestration-workflow/how-to/create-project.md
+
+ Title: Create orchestration workflow projects and use Azure resources
+
+description: Use this article to learn how to create projects in orchestration workflow
++++++ Last updated : 03/23/2023++++
+# How to create projects in orchestration workflow
+
+Orchestration workflow allows you to create projects that connect your applications to:
+* Custom Language Understanding
+* Question Answering
+* LUIS
+
+## Prerequisites
+
+Before you start using orchestration workflow, you will need several things:
+
+* An Azure subscription - [Create one for free](https://azure.microsoft.com/free/cognitive-services).
+* An Azure AI Language resource
+
+### Create a Language resource
+
+Before you start using orchestration workflow, you will need an Azure AI Language resource.
+
+> [!NOTE]
+> * You need to have an **owner** role assigned on the resource group to create a Language resource.
+> * If you are planning to use question answering, you have to enable question answering in resource creation
+++
+## Sign in to Language Studio
+
+To create a new intent, select the *+Add* button and start by giving your intent a **name**. You will see two options: to connect to a project or not. You can connect to LUIS, question answering, or conversational language understanding projects, or choose the **no** option.
++
+## Create an orchestration workflow project
+
+Once you have a Language resource created, create an orchestration workflow project.
+
+### [Language Studio](#tab/language-studio)
++
+### [REST APIs](#tab/rest-api)
+++
+## Import an orchestration workflow project
+
+### [Language Studio](#tab/language-studio)
+
+You can export an orchestration workflow project as a JSON file at any time by going to the orchestration workflow projects page, selecting a project, and from the top menu, clicking on **Export**.
+
+That project can be reimported as a new project. If you import a project with the exact same name, it replaces the project's data with the newly imported project's data.
+
+To import a project, select the arrow button next to **Create a new project** and select **Import**, then select the JSON file.
++
+### [REST APIs](#tab/rest-api)
+
+You can import an orchestration workflow project JSON file into the service.
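+
+A hedged sketch of that import call follows. It assumes the conversational authoring `:import` route, the `2023-04-01` API version, and an exported-project JSON shape; real projects include connected-intent configuration and more metadata, so check the data formats article and REST reference for the exact schema.
+
+```python
+import requests
+
+# Assumed placeholders for your resource, key, and project.
+endpoint = "https://<your-resource>.cognitiveservices.azure.com"
+key = "<your-key>"
+project_name = "MyOrchestrationProject"
+
+# A minimal, illustrative exported-project shape (field names may differ slightly).
+project_json = {
+    "projectFileVersion": "2023-04-01",
+    "stringIndexType": "Utf16CodeUnit",
+    "metadata": {
+        "projectKind": "Orchestration",
+        "projectName": project_name,
+        "language": "en-us",
+    },
+    "assets": {
+        "projectKind": "Orchestration",
+        "intents": [{"category": "EmailIntent"}],
+        "utterances": [
+            {"text": "Read the email from Matt", "intent": "EmailIntent", "language": "en-us"}
+        ],
+    },
+}
+
+url = (
+    f"{endpoint}/language/authoring/analyze-conversations/"
+    f"projects/{project_name}/:import?api-version=2023-04-01"
+)
+response = requests.post(
+    url,
+    headers={"Ocp-Apim-Subscription-Key": key},
+    json=project_json,
+)
+# Import runs as an asynchronous job; its status URL is returned in this header.
+print(response.status_code, response.headers.get("operation-location"))
+```
+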
++++
+## Export project
+
+### [Language Studio](#tab/language-studio)
+
+You can export an orchestration workflow project as a JSON file at any time by going to the orchestration workflow projects page, selecting a project, and pressing **Export**.
+
+### [REST APIs](#tab/rest-api)
+
+You can export an orchestration workflow project as a JSON file at any time.
+++
+## Get orchestration project details
+
+### [Language Studio](#tab/language-studio)
++
+### [REST APIs](#tab/rest-api)
++++
+## Delete project
+
+### [Language Studio](#tab/language-studio)
++
+### [REST APIs](#tab/rest-api)
+
+When you don't need your project anymore, you can delete your project using the APIs.
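+
+A minimal sketch of that delete call, assuming the conversational authoring project route and the `2023-04-01` API version; confirm both against the REST API reference.
+
+```python
+import requests
+
+# Assumed placeholders.
+endpoint = "https://<your-resource>.cognitiveservices.azure.com"
+key = "<your-key>"
+project_name = "MyOrchestrationProject"
+
+url = (
+    f"{endpoint}/language/authoring/analyze-conversations/"
+    f"projects/{project_name}?api-version=2023-04-01"
+)
+# Permanently delete the project and its deployments.
+response = requests.delete(url, headers={"Ocp-Apim-Subscription-Key": key})
+print(response.status_code)
+```
+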
+++++++
+## Next Steps
+
+[Build schema](./build-schema.md)
ai-services Deploy Model https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/language-service/orchestration-workflow/how-to/deploy-model.md
+
+ Title: How to deploy an orchestration workflow project
+
+description: Learn about deploying orchestration workflow projects.
++++++ Last updated : 10/12/2022++++
+# Deploy an orchestration workflow model
+
+Once you're satisfied with how your model performs, it's ready to be deployed so that you can query it for predictions from utterances. Deploying a model makes it available for use through the [prediction API](https://aka.ms/ct-runtime-swagger).
+
+## Prerequisites
+
+* A successfully [created project](create-project.md)
+* [Labeled utterances](tag-utterances.md) and successfully [trained model](train-model.md)
+<!--* Reviewed the [model evaluation details](view-model-evaluation.md) to determine how your model is performing.-->
+
+See [project development lifecycle](../overview.md#project-development-lifecycle) for more information.
+
+## Deploy model
+
+After you have reviewed the model's performance and decided it's fit to be used in your environment, you need to assign it to a deployment to be able to query it. Assigning the model to a deployment makes it available for use through the [prediction API](https://aka.ms/clu-apis). It is recommended to create a deployment named `production` to which you assign the best model you have built so far and use it in your system. You can create another deployment called `staging` to which you can assign the model you're currently working on, so you can test it. You can have a maximum of 10 deployments in your project.
+
+# [Language Studio](#tab/language-studio)
+
+
+# [REST APIs](#tab/rest-api)
+
+### Submit deployment job
++
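+
+A hedged sketch of submitting a deployment job follows. It assumes the conversational authoring deployments route, the `2023-04-01` API version, and a `trainedModelLabel` body field; verify the request shape against the REST API reference.
+
+```python
+import requests
+
+# Assumed placeholders.
+endpoint = "https://<your-resource>.cognitiveservices.azure.com"
+key = "<your-key>"
+project_name = "MyOrchestrationProject"
+deployment_name = "production"
+
+url = (
+    f"{endpoint}/language/authoring/analyze-conversations/projects/"
+    f"{project_name}/deployments/{deployment_name}?api-version=2023-04-01"
+)
+# Assign a trained model (by the label you gave it when training) to this deployment.
+response = requests.put(
+    url,
+    headers={"Ocp-Apim-Subscription-Key": key},
+    json={"trainedModelLabel": "v1"},
+)
+# Deployment runs as an asynchronous job; poll the URL in this header for its status.
+print(response.status_code, response.headers.get("operation-location"))
+```
+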
+### Get deployment job status
++++
+## Swap deployments
+
+After you are done testing a model assigned to one deployment, you might want to assign it to another deployment. Swapping deployments involves:
+* Taking the model assigned to the first deployment and assigning it to the second deployment.
+* Taking the model assigned to the second deployment and assigning it to the first deployment.
+
+This can be used to swap your `production` and `staging` deployments when you want to take the model assigned to `staging` and assign it to `production`.
+
+# [Language Studio](#tab/language-studio)
++
+# [REST APIs](#tab/rest-api)
++++
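+
+When using the REST APIs, swapping is a single call against both deployment names. The sketch below assumes a `:swap` route on the deployments collection and the `2023-04-01` API version; the exact route name and body fields should be confirmed in the authoring REST reference.
+
+```python
+import requests
+
+# Assumed placeholders.
+endpoint = "https://<your-resource>.cognitiveservices.azure.com"
+key = "<your-key>"
+project_name = "MyOrchestrationProject"
+
+# Assumed route name; check the authoring REST reference for your API version.
+url = (
+    f"{endpoint}/language/authoring/analyze-conversations/projects/"
+    f"{project_name}/deployments/:swap?api-version=2023-04-01"
+)
+body = {
+    "firstDeploymentName": "production",
+    "secondDeploymentName": "staging",
+}
+response = requests.post(url, headers={"Ocp-Apim-Subscription-Key": key}, json=body)
+print(response.status_code, response.headers.get("operation-location"))
+```
+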
+## Delete deployment
+
+# [Language Studio](#tab/language-studio)
++
+# [REST APIs](#tab/rest-api)
++++
+## Assign deployment resources
+
+You can [deploy your project to multiple regions](../../concepts/custom-features/multi-region-deployment.md) by assigning different Language resources that exist in different regions.
+
+# [Language Studio](#tab/language-studio)
++
+# [REST APIs](#tab/rest-api)
++++
+## Unassign deployment resources
+
+When you unassign or remove a deployment resource from a project, you also delete all the deployments that were deployed to that resource's region.
+
+# [Language Studio](#tab/language-studio)
++
+# [REST APIs](#tab/rest-api)
++++
+## Next steps
+
+Use [prediction API to query your model](call-api.md)
ai-services Tag Utterances https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/language-service/orchestration-workflow/how-to/tag-utterances.md
+
+ Title: How to tag utterances in an orchestration workflow project
+
+description: Use this article to tag utterances
++++++ Last updated : 05/20/2022++++
+# Add utterances in Language Studio
+
+Once you have [built a schema](build-schema.md), you should add training and testing utterances to your project. The utterances should be similar to what your users will use when interacting with the project. When you add an utterance, you have to assign which intent it belongs to.
+
+Adding utterances is a crucial step in the project development lifecycle; this data is used in the next step when training your model so that your model can learn from the added data. If you already have utterances, you can [import them into your project](create-project.md#import-an-orchestration-workflow-project) directly, but you need to make sure that your data follows the [accepted data format](../concepts/data-formats.md). Labeled data informs the model how to interpret text, and is used for training and evaluation.
+
+## Prerequisites
+
+* A successfully [created project](create-project.md).
+
+See the [project development lifecycle](../overview.md#project-development-lifecycle) for more information.
++
+## How to add utterances
+
+Use the following steps to add utterances:
+
+1. Go to your project page in [Language Studio](https://aka.ms/languageStudio).
+
+2. From the left side menu, select **Add utterances**.
+
+3. From the top pivots, you can change the view to be **training set** or **testing set**. Learn more about [training and testing sets](train-model.md#data-splitting) and how they're used for model training and evaluation.
+
+4. From the **Select intent** dropdown menu, select one of the intents. Type in your utterance, and press the Enter key in the utterance's text box to add the utterance. You can also upload your utterances directly by selecting **Upload utterance file** from the top menu; make sure the file follows the [accepted format](../concepts/data-formats.md#utterance-format) (a sketch of this format appears near the end of this article).
+
+ > [!Note]
+    > If you plan to use the **Automatically split the testing set from training data** option, add all your utterances to the training set.
+ > You can add training utterances to **non-connected** intents only.
+
+ :::image type="content" source="../media/tag-utterances.png" alt-text="A screenshot of the page for tagging utterances in Language Studio." lightbox="../media/tag-utterances.png":::
+
+5. Under **Distribution** you can view the distribution across training and testing sets. You can **view utterances per intent**:
+
+* Utterance per non-connected intent
+* Utterances per connected intent
+
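+
+For reference, the sketch below shows roughly what labeled utterances look like in a project's `assets` section when importing or exporting data. The field names are illustrative assumptions, not the exact schema; always check the [accepted data format](../concepts/data-formats.md#utterance-format).
+
+```python
+# Illustrative only: approximate shape of labeled utterances in a project's assets.
+# Field names are assumptions; see the data formats article for the exact schema.
+utterances = [
+    {
+        "text": "What's the weather like tomorrow?",
+        "intent": "GetWeather",   # must be a non-connected intent
+        "language": "en-us",
+        "dataset": "Train",
+    },
+    {
+        "text": "Hello, how are you?",
+        "intent": "Greeting",
+        "language": "en-us",
+        "dataset": "Test",        # used only when you choose a manual split
+    },
+]
+```
+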
+## Next Steps
+* [Train Model](./train-model.md)
ai-services Train Model https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/language-service/orchestration-workflow/how-to/train-model.md
+
+ Title: How to train and evaluate models in orchestration workflow
+description: Learn how to train a model for orchestration workflow projects.
+++++++ Last updated : 05/20/2022++++
+# Train your orchestration workflow model
+
+Training is the process where the model learns from your [labeled utterances](tag-utterances.md). After training is completed, you will be able to [view model performance](view-model-evaluation.md).
+
+To train a model, start a training job. Only successfully completed jobs create a model. Training jobs expire after seven days; after this time, you will no longer be able to retrieve the job details. If your training job completed successfully and a model was created, it won't be affected by the job expiring. You can only have one training job running at a time, and you can't start other jobs in the same project.
+
+The training times can be anywhere from a few seconds when dealing with simple projects, up to a couple of hours when you reach the [maximum limit](../service-limits.md) of utterances.
+
+Model evaluation is triggered automatically after training is completed successfully. The evaluation process starts by using the trained model to run predictions on the utterances in the testing set, and compares the predicted results with the provided labels (which establishes a baseline of truth). The results are returned so you can review the [model's performance](view-model-evaluation.md).
+
+## Prerequisites
+
+* A successfully [created project](create-project.md) with a configured Azure blob storage account
+
+See the [project development lifecycle](../overview.md#project-development-lifecycle) for more information.
+
+## Data splitting
+
+Before you start the training process, labeled utterances in your project are divided into a training set and a testing set. Each one of them serves a different function.
+The **training set** is used in training the model; this is the set from which the model learns the labeled utterances.
+The **testing set** is a blind set that isn't introduced to the model during training but only during evaluation.
+
+After the model is trained successfully, the model can be used to make predictions from the utterances in the testing set. These predictions are used to calculate [evaluation metrics](../concepts/evaluation-metrics.md).
+
+It is recommended to make sure that all your intents are adequately represented in both the training and testing sets.
+
+Orchestration workflow supports two methods for data splitting:
+
+* **Automatically splitting the testing set from training data**: The system will split your tagged data between the training and testing sets, according to the percentages you choose. The recommended percentage split is 80% for training and 20% for testing.
+
+ > [!NOTE]
+    > If you choose the **Automatically split the testing set from training data** option, only the data assigned to the training set is split according to the percentages provided.
+
+* **Use a manual split of training and testing data**: This method enables users to define which utterances should belong to which set. This step is only enabled if you have added utterances to your testing set during [labeling](tag-utterances.md).
+
+> [!Note]
+> You can only add utterances to the training dataset for non-connected intents.
++
+## Train model
+
+### Start training job
+
+#### [Language Studio](#tab/language-studio)
++
+#### [REST APIs](#tab/rest-api)
++++
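+
+A hedged sketch of starting a training job over REST follows. It assumes the conversational authoring `:train` route and the `2023-04-01` API version, and shows one way the automatic 80/20 split described in the data splitting section might be expressed in the request body; the field names are assumptions to verify against the REST API reference.
+
+```python
+import requests
+
+# Assumed placeholders.
+endpoint = "https://<your-resource>.cognitiveservices.azure.com"
+key = "<your-key>"
+project_name = "MyOrchestrationProject"
+
+url = (
+    f"{endpoint}/language/authoring/analyze-conversations/projects/"
+    f"{project_name}/:train?api-version=2023-04-01"
+)
+body = {
+    "modelLabel": "v1",
+    "trainingMode": "standard",
+    # Automatic split: let the service carve a test set out of the training data.
+    "evaluationOptions": {
+        "kind": "percentage",
+        "trainingSplitPercentage": 80,
+        "testingSplitPercentage": 20,
+    },
+}
+response = requests.post(url, headers={"Ocp-Apim-Subscription-Key": key}, json=body)
+# Training is an asynchronous job; the status URL is returned in this header.
+print(response.status_code, response.headers.get("operation-location"))
+```
+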
+### Get training job status
+
+#### [Language Studio](#tab/language-studio)
+
+Select the training job ID from the list. A side pane will appear where you can check the **Training progress**, **Job status**, and other details for this job.
+
+<!--:::image type="content" source="../../../media/train-pane.png" alt-text="A screenshot showing the training job details." lightbox="../../../media/train-pane.png":::-->
++
+#### [REST APIs](#tab/rest-api)
+
+Training can take some time depending on the size of your training data and the complexity of your schema. You can use the following request to keep polling the status of the training job until it successfully completes.
+++
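+
+A minimal polling sketch is shown below. It assumes the job URL is the `operation-location` header returned when the training job was submitted, and that the response carries a top-level `status` field with terminal values such as `succeeded`, `failed`, or `cancelled`; check the response shape in the REST reference.
+
+```python
+import time
+import requests
+
+# Assumed placeholders: the job URL comes from the operation-location header
+# returned when the training job was submitted.
+key = "<your-key>"
+job_url = "<operation-location-url-from-the-train-request>"
+
+while True:
+    job = requests.get(job_url, headers={"Ocp-Apim-Subscription-Key": key}).json()
+    status = job.get("status")
+    print("Training job status:", status)
+    if status in ("succeeded", "failed", "cancelled"):
+        break
+    time.sleep(10)  # poll every 10 seconds until the job reaches a terminal state
+```
+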
+### Cancel training job
+
+# [Language Studio](#tab/language-studio)
++
+# [REST APIs](#tab/rest-api)
++++
+## Next steps
+* [Model evaluation metrics concepts](../concepts/evaluation-metrics.md)
+* How to [deploy a model](./deploy-model.md)
ai-services View Model Evaluation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/language-service/orchestration-workflow/how-to/view-model-evaluation.md
+
+ Title: How to view orchestration workflow models details
+description: Learn how to view details for your model and evaluate its performance.
+++++++ Last updated : 10/12/2022++++
+# View orchestration workflow model details
+
+After model training is completed, you can view your model details and see how well it performs against the test set. Observing how well your model performed is called evaluation. The test set consists of data that wasn't introduced to the model during the training process.
+
+> [!NOTE]
+> Using the **Automatically split the testing set from training data** option may result in different model evaluation results every time you [train a new model](train-model.md), because the test set is selected randomly from your utterances. To make sure that the evaluation is calculated on the same test set every time you train a model, use the **Use a manual split of training and testing data** option when starting a training job and define your **Testing set** when you [add your utterances](tag-utterances.md).
++
+## Prerequisites
+
+Before viewing a model's evaluation, you need:
+
+* [An orchestration workflow project](create-project.md).
+* A successfully [trained model](train-model.md)
+
+See the [project development lifecycle](../overview.md#project-development-lifecycle) for more information.
+
+## Model details
+
+### [Language studio](#tab/Language-studio)
+
+In the **view model details** page, you'll be able to see all your models, with their current training status, and the date they were last trained.
++
+### [REST APIs](#tab/REST-APIs)
++++
+## Load or export model data
+
+### [Language studio](#tab/Language-studio)
+++
+### [REST APIs](#tab/REST-APIs)
++++
+## Delete model
+
+### [Language studio](#tab/Language-studio)
+++
+### [REST APIs](#tab/REST-APIs)
++++++
+## Next steps
+
+* As you review how your model performs, learn about the [evaluation metrics](../concepts/evaluation-metrics.md) that are used.
+* If you're happy with your model performance, you can [deploy your model](deploy-model.md)
ai-services Language Support https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/language-service/orchestration-workflow/language-support.md
+
+ Title: Language support for orchestration workflow
+
+description: Learn about the languages supported by orchestration workflow.
++++++ Last updated : 05/17/2022++++
+# Language support for orchestration workflow projects
+
+Use this article to learn about the languages currently supported by orchestration workflow projects.
+
+## Multilingual options
+
+Orchestration workflow projects do not support the multi-lingual option.
++
+## Language support
+
+Orchestration workflow projects support the following languages:
+
+| Language | Language code |
+| | |
+| German | `de` |
+| English | `en-us` |
+| Spanish | `es` |
+| French | `fr` |
+| Italian | `it` |
+| Portuguese (Brazil) | `pt-br` |
++
+## Next steps
+
+* [Orchestration workflow overview](overview.md)
+* [Service limits](service-limits.md)
ai-services Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/language-service/orchestration-workflow/overview.md
+
+ Title: Orchestration workflows - Azure AI services
+
+description: Customize an AI model to connect your Conversational Language Understanding, question answering and LUIS applications.
++++++ Last updated : 08/10/2022++++
+# What is orchestration workflow?
++
+Orchestration workflow is one of the features offered by [Azure AI Language](../overview.md). It is a cloud-based API service that applies machine-learning intelligence to enable you to build orchestration models to connect [Conversational Language Understanding (CLU)](../conversational-language-understanding/overview.md), [Question Answering](../question-answering/overview.md) projects and [LUIS](../../luis/what-is-luis.md) applications.
+By creating an orchestration workflow, developers can iteratively tag utterances, train and evaluate model performance before making it available for consumption.
+To simplify building and customizing your model, the service offers a custom web portal that can be accessed through the [Language studio](https://aka.ms/languageStudio). You can easily get started with the service by following the steps in this [quickstart](quickstart.md).
++
+This documentation contains the following article types:
+
+* [Quickstarts](quickstart.md) are getting-started instructions to guide you through making requests to the service.
+* [Concepts](concepts/evaluation-metrics.md) provide explanations of the service functionality and features.
+* [How-to guides](how-to/create-project.md) contain instructions for using the service in more specific or customized ways.
++
+## Example usage scenarios
+
+Orchestration workflow can be used in multiple scenarios across a variety of industries. Some examples are:
+
+### Enterprise chat bot
+
+In a large corporation, an enterprise chat bot may handle a variety of employee affairs. It may be able to handle frequently asked questions served by a custom question answering knowledge base, a calendar specific skill served by conversational language understanding, and an interview feedback skill served by LUIS. The bot needs to be able to appropriately route incoming requests to the correct service. Orchestration workflow allows you to connect those skills to one project that handles the routing of incoming requests appropriately to power the enterprise bot.
+
+## Project development lifecycle
+
+Creating an orchestration workflow project typically involves several different steps.
++
+Follow these steps to get the most out of your model:
+
+1. **Define your schema**: Know your data and define the actions and relevant information that needs to be recognized from user's input utterances. Create the [intents](glossary.md#intent) that you want to assign to user's utterances and the projects you want to connect to your orchestration project.
+
+2. **Label your data**: The quality of data tagging is a key factor in determining model performance.
+
+3. **Train a model**: Your model starts learning from your tagged data.
+
+4. **View the model's performance**: View the evaluation details for your model to determine how well it performs when introduced to new data.
+
+5. **Improve the model**: After reviewing the model's performance, you can then learn how you can improve the model.
+
+6. **Deploy the model**: Deploying a model makes it available for use via the [prediction API](/rest/api/language/2023-04-01/conversation-analysis-runtime/analyze-conversation).
+
+7. **Predict intents**: Use your custom model to predict intents from user's utterances.
+
+## Reference documentation and code samples
+
+As you use orchestration workflow, see the following reference documentation and samples for Azure AI Language:
+
+|Development option / language |Reference documentation |Samples |
+||||
+|REST APIs (Authoring) | [REST API documentation](https://aka.ms/clu-authoring-apis) | |
+|REST APIs (Runtime) | [REST API documentation](/rest/api/language/2023-04-01/conversation-analysis-runtime/analyze-conversation) | |
+|C# (Runtime) | [C# documentation](/dotnet/api/overview/azure/ai.language.conversations-readme) | [C# samples](https://github.com/Azure/azure-sdk-for-net/tree/main/sdk/cognitivelanguage/Azure.AI.Language.Conversations/samples) |
+|Python (Runtime)| [Python documentation](/python/api/overview/azure/ai-language-conversations-readme?view=azure-python-preview&preserve-view=true) | [Python samples](https://github.com/Azure/azure-sdk-for-python/tree/main/sdk/cognitivelanguage/azure-ai-language-conversations/samples) |
+
+## Responsible AI
+
+An AI system includes not only the technology, but also the people who will use it, the people who will be affected by it, and the environment in which it is deployed. Read the transparency note for CLU and orchestration workflow to learn about responsible AI use and deployment in your systems. You can also see the following articles for more information:
++
+## Next steps
+
+* Use the [quickstart article](quickstart.md) to start using orchestration workflow.
+
+* As you go through the project development lifecycle, review the [glossary](glossary.md) to learn more about the terms used throughout the documentation for this feature.
+
+* Remember to view the [service limits](service-limits.md) for information such as regional availability.
ai-services Quickstart https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/language-service/orchestration-workflow/quickstart.md
+
+ Title: Quickstart - Orchestration workflow
+
+description: Quickly start creating an AI model to connect your Conversational Language Understanding, question answering and LUIS applications.
++++++ Last updated : 02/28/2023++
+zone_pivot_groups: usage-custom-language-features
++
+# Quickstart: Orchestration workflow
+
+Use this article to get started with Orchestration workflow projects using Language Studio and the REST API. Follow these steps to try out an example.
+++++++
+## Next steps
+
+* [Learn about orchestration workflows](overview.md)
ai-services Service Limits https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/language-service/orchestration-workflow/service-limits.md
+
+ Title: Orchestration workflow limits
+
+description: Learn about the data, region, and throughput limits for Orchestration workflow
++++++ Last updated : 10/12/2022++++
+# Orchestration workflow limits
+
+Use this article to learn about the data and service limits when using orchestration workflow.
+
+## Language resource limits
+
+* Your Language resource has to be created in one of the [supported regions](#regional-availability).
+
+* Pricing tiers
+
+ |Tier|Description|Limit|
+ |--|--|--|
+ |F0 |Free tier|You are only allowed one Language resource with the F0 tier per subscription.|
+ |S |Paid tier|You can have up to 100 Language resources in the S tier per region.|
++
+See [pricing](https://azure.microsoft.com/pricing/details/cognitive-services/language-service/) for more information.
+
+* You can have up to **500** projects per resource.
+
+* Project names have to be unique within the same resource across all custom features.
+
+## Regional availability
+
+Orchestration workflow is only available in some Azure regions. Some regions are available for **both authoring and prediction**, while other regions are **prediction only**. Language resources in authoring regions allow you to create, edit, train, and deploy your projects. Language resources in prediction regions allow you to get [predictions from a deployment](../concepts/custom-features/multi-region-deployment.md).
+
+| Region | Authoring | Prediction |
+|--|--|-|
+| Australia East | ✓ | ✓ |
+| Brazil South | | ✓ |
+| Canada Central | | ✓ |
+| Central India | ✓ | ✓ |
+| Central US | | ✓ |
+| China East 2 | ✓ | ✓ |
+| China North 2 | | ✓ |
+| East Asia | | ✓ |
+| East US | ✓ | ✓ |
+| East US 2 | ✓ | ✓ |
+| France Central | | ✓ |
+| Japan East | | ✓ |
+| Japan West | | ✓ |
+| Jio India West | | ✓ |
+| Korea Central | | ✓ |
+| North Central US | | ✓ |
+| North Europe | ✓ | ✓ |
+| Norway East | | ✓ |
+| Qatar Central | | ✓ |
+| South Africa North | | ✓ |
+| South Central US | ✓ | ✓ |
+| Southeast Asia | | ✓ |
+| Sweden Central | | ✓ |
+| Switzerland North | ✓ | ✓ |
+| UAE North | | ✓ |
+| UK South | ✓ | ✓ |
+| West Central US | | ✓ |
+| West Europe | ✓ | ✓ |
+| West US | | ✓ |
+| West US 2 | ✓ | ✓ |
+| West US 3 | ✓ | ✓ |
+
+## API limits
+
+|Item|Request type| Maximum limit|
+|:-|:-|:-|
+|Authoring API|POST|10 per minute|
+|Authoring API|GET|100 per minute|
+|Prediction API|GET/POST|1,000 per minute|
+
+## Quota limits
+
+|Pricing tier |Item |Limit |
+| | | |
+|F|Training time| 1 hour per month|
+|S|Training time| Unlimited, Pay as you go |
+|F|Prediction Calls| 5,000 requests per month |
+|S|Prediction Calls| Unlimited, Pay as you go |
+
+## Data limits
+
+The following limits are observed for orchestration workflow.
+
+|Item|Lower Limit| Upper Limit |
+| | | |
+|Count of utterances per project | 1 | 15,000|
+|Utterance length in characters | 1 | 500 |
+|Count of intents per project | 1 | 500|
+|Count of trained models per project| 0 | 10 |
+|Count of deployments per project| 0 | 10 |
+
+## Naming limits
+
+| Attribute | Limits |
+|--|--|
+| Project name | You can only use letters `(a-z, A-Z)`, numbers `(0-9)`, and symbols `_ . -`, with no spaces. Maximum allowed length is 50 characters. |
+| Model name | You can only use letters `(a-z, A-Z)`, numbers `(0-9)` and symbols `_ . -`. Maximum allowed length is 50 characters. |
+| Deployment name | You can only use letters `(a-z, A-Z)`, numbers `(0-9)` and symbols `_ . -`. Maximum allowed length is 50 characters. |
+| Intent name| You can only use letters `(a-z, A-Z)`, numbers `(0-9)` and all symbols except ":", `$ & % * ( ) + ~ # / ?`. Maximum allowed length is 50 characters.|
++
+## Next steps
+
+* [Orchestration workflow overview](overview.md)
ai-services Connect Services https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/language-service/orchestration-workflow/tutorials/connect-services.md
+
+ Title: Integrate custom question answering and conversational language understanding with orchestration workflow
+description: Learn how to connect different projects with orchestration workflow.
+keywords: conversational language understanding, bot framework, bot, language understanding, nlu
+++++++ Last updated : 05/25/2022++
+# Connect different services with Orchestration workflow
+
+Orchestration workflow is a feature that allows you to connect different projects from LUIS, conversational language understanding, and custom question answering in one project. You can then use this project for predictions under one endpoint. The orchestration project predicts which project should be called, automatically routes the request to that project, and returns its response.
+
+In this tutorial, you will learn how to connect a custom question answering knowledge base with a conversational language understanding project. You will then call the project using the .NET SDK sample for orchestration.
+
+This tutorial will include creating a **chit chat** knowledge base and **email commands** project. Chit chat will deal with common niceties and greetings with static responses. Email commands will predict among a few simple actions for an email assistant. The tutorial will then teach you to call the Orchestrator using the SDK in a .NET environment using a sample solution.
++
+## Prerequisites
+
+- Create a [Language resource](https://portal.azure.com/#create/Microsoft.CognitiveServicesTextAnalytics) **and select the custom question answering feature** in the Azure portal to get your key and endpoint. After it deploys, select **Go to resource**.
+ - You will need the key and endpoint from the resource you create to connect your bot to the API. You'll paste your key and endpoint into the code below later in the tutorial. Copy them from the **Keys and Endpoint** tab in your resource.
+ - When you enable custom question answering, you must select an Azure search resource to connect to.
+ - Make sure the region of your resource is supported by [conversational language understanding](../../conversational-language-understanding/service-limits.md#regional-availability).
+- Download the **OrchestrationWorkflowSample** [sample](https://aka.ms/orchestration-sample).
+
+## Create a custom question answering knowledge base
+
+1. Sign into the [Language Studio](https://language.cognitive.azure.com/) and select your Language resource.
+2. Find and select the [Custom question answering](https://language.cognitive.azure.com/questionAnswering/projects/) card in the homepage.
+3. Select **Create new project** and add the name **chitchat** with the language _English_ before clicking on **Create project**.
+4. When the project loads, select **Add source** and select _Chit chat_. Select the professional personality for chit chat before confirming.
+
+ :::image type="content" source="../media/chit-chat.png" alt-text="A screenshot of the chit chat popup." lightbox="../media/chit-chat.png":::
+
+5. Go to **Deploy knowledge base** from the left navigation menu and select **Deploy** and confirm the popup that shows up.
+
+You are now done with deploying your knowledge base for chit chat. You can explore the type of questions and answers to expect in the **Edit knowledge base** page.
+
+## Create a conversational language understanding project
+
+1. In Language Studio, go to the [Conversational language understanding](https://language.cognitive.azure.com/clu/projects) service.
+2. Download the `EmailProject.json` sample file [here](https://aka.ms/clu-sample-json).
+3. Select the **Import** button. Browse to the `EmailProject.json` file you downloaded and select **Done**.
+
+    :::image type="content" source="../media/import-export.png" alt-text="A screenshot showing where to import a JSON file." lightbox="../media/import-export.png":::
+
+4. Once the project is loaded, select **Training jobs** on the left. Select **Start a training job**, provide the model name **v1**, and select **Train**.
+
+ :::image type="content" source="../media/train-model.png" alt-text="A screenshot of the training page." lightbox="../media/train-model.png":::
+
+5. Once training is complete, go to **Deploying a model** on the left. Select **Add deployment** and create a new deployment with the name **Testing**, and assign model **v1** to the deployment.
+
+ :::image type="content" source="../media/deploy-model-tutorial.png" alt-text="A screenshot showing the model deployment page." lightbox="../media/deploy-model-tutorial.png":::
+
+You are now done with deploying a conversational language understanding project for email commands. You can explore the different commands in the **Data labeling** page.
+
+## Create an Orchestration workflow project
+
+1. In Language Studio, go to the [Orchestration workflow](https://language.cognitive.azure.com/orchestration/projects) service.
+2. Select **Create new project**. Use the name **Orchestrator** and the language _English_ before clicking next then done.
+3. Once the project is created, select **Add** in the **Schema definition** page.
+4. Select _Yes, I want to connect it to an existing project_. Add the intent name **EmailIntent** and select **Conversational Language Understanding** as the connected service. Select the recently created **EmailProject** project for the project name before clicking on **Add Intent**.
++
+5. Add another intent but now select **Question Answering** as the service and select **chitchat** as the project name.
+6. Similar to conversational language understanding, go to **Training jobs** and start a new training job with the name **v1**, then select **Train**.
+7. Once training is complete, go to **Deploying a model** on the left. Select **Add deployment** and create a new deployment with the name **Testing**, assign model **v1** to the deployment, and select **Next**.
+8. On the next page, select the deployment name **Testing** for the **EmailIntent**. This tells the orchestrator to call the **Testing** deployment in **EmailProject** when it routes to it. Custom question answering projects only have one deployment by default.
++
+Now your orchestration project is ready to be used. Any incoming request will be routed to either **EmailIntent** and the **EmailProject** in conversational language understanding or **ChitChatIntent** and the **chitchat** knowledge base.
+
+## Call the orchestration project with the Conversations SDK
+
+1. In the downloaded sample, open OrchestrationWorkflowSample.sln in Visual Studio.
+
+2. In the OrchestrationWorkflowSample solution, make sure to install all the required packages. In Visual Studio, go to _Tools_, _NuGet Package Manager_ and select _Package Manager Console_ and run the following command.
+
+```powershell
+dotnet add package Azure.AI.Language.Conversations
+```
+Alternatively, you can search for "Azure.AI.Language.Conversations" in the NuGet package manager and install the latest release.
+
+3. In `Program.cs`, replace `{api-key}` and the `{endpoint}` variables. Use the key and endpoint for the Language resource you created earlier. You can find them in the **Keys and Endpoint** tab in your Language resource in Azure.
+
+```csharp
+Uri endpoint = new Uri("{endpoint}");
+AzureKeyCredential credential = new AzureKeyCredential("{api-key}");
+```
+
+4. Set the project and deployment parameters to **Orchestrator** and **Testing** as shown below, if they are not set already.
+
+```csharp
+string projectName = "Orchestrator";
+string deploymentName = "Testing";
+```
+
+5. Run the project or press F5 in Visual Studio.
+6. Input a query such as "read the email from matt" or "hello how are you". You'll now observe different responses for each, a conversational language understanding **EmailProject** response from the first query, and the answer from the **chitchat** knowledge base for the second query.
+
+**Conversational Language Understanding**:
+
+**Custom Question Answering**:
+
+You can now connect other projects to your orchestrator and begin building complex architectures with various different projects.
+
+## Next steps
+
+- Learn more about [conversational language understanding](./../../conversational-language-understanding/overview.md).
+- Learn more about [custom question answering](./../../question-answering/overview.md).
++
ai-services Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/language-service/overview.md
+
+ Title: What is Azure AI Language
+
+description: Learn how to integrate AI into your applications that can extract information and understand written language.
++++++ Last updated : 04/14/2023++++
+# What is Azure AI Language?
++
+Azure AI Language is a cloud-based service that provides Natural Language Processing (NLP) features for understanding and analyzing text. Use this service to help build intelligent applications using the web-based Language Studio, REST APIs, and client libraries.
+
+## Available features
+
+This Language service unifies the following previously available Azure AI services:
+
+The Language service also provides several new features as well, which can either be:
+
+* Preconfigured, which means the AI models that the feature uses are not customizable. You just send your data, and use the feature's output in your applications.
+* Customizable, which means you'll train an AI model using our tools to fit your data specifically.
+
+> [!TIP]
+> Unsure which feature to use? See [Which Language service feature should I use?](#which-language-service-feature-should-i-use) to help you decide.
+
+[**Language Studio**](./language-studio.md) enables you to use the below service features without needing to write code.
+
+### Named Entity Recognition (NER)
+
+ :::column span="":::
+ :::image type="content" source="media/studio-examples/named-entity-recognition.png" alt-text="A screenshot of a named entity recognition example." lightbox="media/studio-examples/named-entity-recognition.png":::
+ :::column-end:::
+ :::column span="":::
+ [Named entity recognition](./named-entity-recognition/overview.md) is a preconfigured feature that categorizes entities (words or phrases) in unstructured text across several predefined category groups. For example: people, events, places, dates, [and more](./named-entity-recognition/concepts/named-entity-categories.md).
+
+ :::column-end:::
+
+### Personally identifying (PII) and health (PHI) information detection
+
+ :::column span="":::
+ :::image type="content" source="media/studio-examples/personal-information-detection.png" alt-text="A screenshot of a PII detection example." lightbox="media/studio-examples/personal-information-detection.png":::
+ :::column-end:::
+ :::column span="":::
+ [PII detection](./personally-identifiable-information/overview.md) is a preconfigured feature that identifies, categorizes, and redacts sensitive information in both [unstructured text documents](./personally-identifiable-information/how-to-call.md), and [conversation transcripts](./personally-identifiable-information/how-to-call-for-conversations.md). For example: phone numbers, email addresses, forms of identification, [and more](./personally-identifiable-information/concepts/entity-categories.md).
+
+ :::column-end:::
+
+### Language detection
+
+ :::column span="":::
+ :::image type="content" source="media/studio-examples/language-detection.png" alt-text="A screenshot of a language detection example." lightbox="media/studio-examples/language-detection.png":::
+ :::column-end:::
+ :::column span="":::
+ [Language detection](./language-detection/overview.md) is a preconfigured feature that can detect the language a document is written in, and returns a language code for a wide range of languages, variants, dialects, and some regional/cultural languages.
+
+ :::column-end:::
+
+### Sentiment Analysis and opinion mining
+
+ :::column span="":::
+ :::image type="content" source="media/studio-examples/sentiment-analysis-example.png" alt-text="A screenshot of a sentiment analysis example." lightbox="media/studio-examples/sentiment-analysis-example.png":::
+ :::column-end:::
+ :::column span="":::
+ [Sentiment analysis and opinion mining](./sentiment-opinion-mining/overview.md) are preconfigured features that help you find out what people think of your brand or topic by mining text for clues about positive or negative sentiment, and can associate them with specific aspects of the text.
+
+ :::column-end:::
+
+### Summarization
+
+ :::column span="":::
+ :::image type="content" source="media/studio-examples/summarization-example.png" alt-text="A screenshot of a summarization example." lightbox="media/studio-examples/summarization-example.png":::
+ :::column-end:::
+ :::column span="":::
+ [Summarization](./summarization/overview.md) is a preconfigured feature that uses extractive text summarization to produce a summary of documents and conversation transcriptions. It extracts sentences that collectively represent the most important or relevant information within the original content.
+ :::column-end:::
+
+### Key phrase extraction
+
+ :::column span="":::
+ :::image type="content" source="media/studio-examples/key-phrases.png" alt-text="A screenshot of a key phrase extraction example." lightbox="media/studio-examples/key-phrases.png":::
+ :::column-end:::
+ :::column span="":::
+ [Key phrase extraction](./key-phrase-extraction/overview.md) is a preconfigured feature that evaluates and returns the main concepts in unstructured text, and returns them as a list.
+ :::column-end:::
+
+### Entity linking
+
+ :::column span="":::
+ :::image type="content" source="media/studio-examples/entity-linking.png" alt-text="A screenshot of an entity linking example." lightbox="media/studio-examples/entity-linking.png":::
+ :::column-end:::
+ :::column span="":::
+ [Entity linking](./entity-linking/overview.md) is a preconfigured feature that disambiguates the identity of entities (words or phrases) found in unstructured text and returns links to Wikipedia.
+ :::column-end:::
+
+### Text analytics for health
+
+ :::column span="":::
+ :::image type="content" source="text-analytics-for-health/media/call-api/health-named-entity-recognition.png" alt-text="A screenshot of a text analytics for health example." lightbox="text-analytics-for-health/media/call-api/health-named-entity-recognition.png":::
+ :::column-end:::
+ :::column span="":::
+ [Text analytics for health](./text-analytics-for-health/overview.md) is a preconfigured feature that extracts and labels relevant medical information from unstructured texts such as doctor's notes, discharge summaries, clinical documents, and electronic health records.
+ :::column-end:::
+
+### Custom text classification
+
+ :::column span="":::
+ :::image type="content" source="media/studio-examples/single-classification.png" alt-text="A screenshot of a custom text classification example." lightbox="media/studio-examples/single-classification.png":::
+ :::column-end:::
+ :::column span="":::
+ [Custom text classification](./custom-text-classification/overview.md) enables you to build custom AI models to classify unstructured text documents into custom classes you define.
+ :::column-end:::
+
+### Custom Named Entity Recognition (Custom NER)
++
+ :::column span="":::
+ :::image type="content" source="media/studio-examples/custom-named-entity-recognition.png" alt-text="A screenshot of a custom NER example." lightbox="media/studio-examples/custom-named-entity-recognition.png":::
+ :::column-end:::
+ :::column span="":::
+ [Custom NER](custom-named-entity-recognition/overview.md) enables you to build custom AI models to extract custom entity categories (labels for words or phrases), using unstructured text that you provide.
+ :::column-end:::
++
+### Conversational language understanding
+
+ :::column span="":::
+ :::image type="content" source="media/studio-examples/conversational-language-understanding.png" alt-text="A screenshot of a conversational language understanding example." lightbox="media/studio-examples/conversational-language-understanding.png":::
+ :::column-end:::
+ :::column span="":::
+ [Conversational language understanding (CLU)](./conversational-language-understanding/overview.md) enables users to build custom natural language understanding models to predict the overall intention of an incoming utterance and extract important information from it.
+ :::column-end:::
+
+### Orchestration workflow
+
+ :::column span="":::
+ :::image type="content" source="media/studio-examples/orchestration-workflow.png" alt-text="A screenshot of an orchestration workflow example." lightbox="media/studio-examples/orchestration-workflow.png":::
+ :::column-end:::
+ :::column span="":::
+    [Orchestration workflow](./orchestration-workflow/overview.md) is a custom feature that enables you to connect [Conversational Language Understanding (CLU)](./conversational-language-understanding/overview.md), [question answering](./question-answering/overview.md), and [LUIS](../LUIS/what-is-luis.md) applications.
+
+ :::column-end:::
+
+### Question answering
+
+ :::column span="":::
+ :::image type="content" source="media/studio-examples/question-answering.png" alt-text="A screenshot of a question answering example." lightbox="media/studio-examples/question-answering.png":::
+ :::column-end:::
+ :::column span="":::
+ [Question answering](./question-answering/overview.md) is a custom feature that finds the most appropriate answer for inputs from your users, and is commonly used to build conversational client applications, such as social media applications, chat bots, and speech-enabled desktop applications.
+
+ :::column-end:::
+
+### Custom text analytics for health
+
+ :::column span="":::
+ :::image type="content" source="text-analytics-for-health/media/call-api/health-named-entity-recognition.png" alt-text="A screenshot of a custom text analytics for health example." lightbox="text-analytics-for-health/media/call-api/health-named-entity-recognition.png":::
+ :::column-end:::
+ :::column span="":::
+    [Custom text analytics for health](./custom-text-analytics-for-health/overview.md) is a custom feature that extracts healthcare-specific entities from unstructured text, using a model you create.
+ :::column-end:::
+
+## Which Language service feature should I use?
+
+This section will help you decide which Language service feature you should use for your application:
+
+|What do you want to do? |Document format |Your best solution | Is this solution customizable?* |
+|||||
+| Detect and/or redact sensitive information such as PII and PHI. | Unstructured text, <br> transcribed conversations | [PII detection](./personally-identifiable-information/overview.md) | |
+| Extract categories of information without creating a custom model. | Unstructured text | The [preconfigured NER feature](./named-entity-recognition/overview.md) | |
+| Extract categories of information using a model specific to your data. | Unstructured text | [Custom NER](./custom-named-entity-recognition/overview.md) | ✓ |
+|Extract main topics and important phrases. | Unstructured text | [Key phrase extraction](./key-phrase-extraction/overview.md) | |
+| Determine the sentiment and opinions expressed in text. | Unstructured text | [Sentiment analysis and opinion mining](./sentiment-opinion-mining/overview.md) | |
+| Summarize long chunks of text or conversations. | Unstructured text, <br> transcribed conversations. | [Summarization](./summarization/overview.md) | |
+| Disambiguate entities and get links to Wikipedia. | Unstructured text | [Entity linking](./entity-linking/overview.md) | |
+| Classify documents into one or more categories. | Unstructured text | [Custom text classification](./custom-text-classification/overview.md) | ✓ |
+| Extract medical information from clinical/medical documents, without building a model. | Unstructured text | [Text analytics for health](./text-analytics-for-health/overview.md) | |
+| Extract medical information from clinical/medical documents using a model that's trained on your data. | Unstructured text | [Custom text analytics for health](./custom-text-analytics-for-health/overview.md) | |
+| Build a conversational application that responds to user inputs. | Unstructured user inputs | [Question answering](./question-answering/overview.md) | ✓ |
+| Detect the language that a text was written in. | Unstructured text | [Language detection](./language-detection/overview.md) | |
+| Predict the intention of user inputs and extract information from them. | Unstructured user inputs | [Conversational language understanding](./conversational-language-understanding/overview.md) | ✓ |
+| Connect apps from conversational language understanding, LUIS, and question answering. | Unstructured user inputs | [Orchestration workflow](./orchestration-workflow/overview.md) | ✓ |
+
+\* If a feature is customizable, you can train an AI model using our tools to fit your data specifically. Otherwise a feature is preconfigured, meaning the AI models it uses cannot be changed. You just send your data, and use the feature's output in your applications.
+
+## Migrate from Text Analytics, QnA Maker, or Language Understanding (LUIS)
+
+Azure AI Language unifies three individual language services in Azure AI services - Text Analytics, QnA Maker, and Language Understanding (LUIS). If you have been using these three services, you can easily migrate to the new Azure AI Language. For instructions see [Migrating to Azure AI Language](concepts/migrate.md).
+
+## Tutorials
+
+After you've had a chance to get started with the Language service, try our tutorials that show you how to solve various scenarios.
+
+* [Extract key phrases from text stored in Power BI](key-phrase-extraction/tutorials/integrate-power-bi.md)
+* [Use Power Automate to sort information in Microsoft Excel](named-entity-recognition/tutorials/extract-excel-information.md)
+* [Use Flask to translate text, analyze sentiment, and synthesize speech](/training/modules/python-flask-build-ai-web-app/)
+* [Use Azure AI services in canvas apps](/powerapps/maker/canvas-apps/cognitive-services-api?context=/azure/ai-services/language-service/context/context)
+* [Create a FAQ Bot](question-answering/tutorials/bot-service.md)
+
+## Additional code samples
+
+You can find more code samples on GitHub for the following languages:
+
+* [C#](https://github.com/Azure/azure-sdk-for-net/tree/main/sdk/textanalytics/Azure.AI.TextAnalytics/samples)
+* [Java](https://github.com/Azure/azure-sdk-for-java/tree/main/sdk/textanalytics/azure-ai-textanalytics/src/samples)
+* [JavaScript](https://github.com/Azure/azure-sdk-for-js/tree/main/sdk/textanalytics/ai-text-analytics/samples)
+* [Python](https://github.com/Azure/azure-sdk-for-python/tree/main/sdk/textanalytics/azure-ai-textanalytics/samples)
+
+## Deploy on premises using Docker containers
+Use Language service containers to deploy API features on-premises. These Docker containers enable you to bring the service closer to your data for compliance, security, or other operational reasons. The Language service offers the following containers:
+
+* [Sentiment analysis](sentiment-opinion-mining/how-to/use-containers.md)
+* [Language detection](language-detection/how-to/use-containers.md)
+* [Key phrase extraction](key-phrase-extraction/how-to/use-containers.md)
+* [Text Analytics for health](text-analytics-for-health/how-to/use-containers.md)
++
+## Responsible AI
+
+An AI system includes not only the technology, but also the people who will use it, the people who will be affected by it, and the environment in which it is deployed. Read the following articles to learn about responsible AI use and deployment in your systems:
+
+* [Transparency note for the Language service](/legal/cognitive-services/text-analytics/transparency-note)
+* [Integration and responsible use](/legal/cognitive-services/text-analytics/guidance-integration-responsible-use)
+* [Data, privacy, and security](/legal/cognitive-services/text-analytics/data-privacy)
ai-services Conversations Entity Categories https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/language-service/personally-identifiable-information/concepts/conversations-entity-categories.md
+
+ Title: Entity categories recognized by Conversational Personally Identifiable Information (detection) in Azure AI Language
+
+description: Learn about the entities the Conversational PII feature (preview) can recognize from conversation inputs.
++++++ Last updated : 05/15/2022++++
+# Supported customer content (PII) entity categories in conversations
+
+Use this article to find the entity categories that can be returned by the [conversational PII detection feature](../how-to-call-for-conversations.md). This feature runs a predictive model to identify, categorize, and redact sensitive information from an input conversation.
+
+The PII preview feature includes the ability to detect personal (`PII`) information from conversations.
+
+## Entity categories
+
+The following entity categories are returned when you send API requests to the conversational PII feature.
+
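+As a hedged sketch of where these categories fit, the request below assumes the asynchronous conversation analysis route with a `ConversationalPIITask` and a `piiCategories` parameter; the API version and exact parameter names may differ from those quoted elsewhere in this article, so confirm the request shape in the how-to article and REST reference.
+
+```python
+import requests
+
+# Assumed placeholders; verify the API version and parameter names against
+# the conversational PII how-to article and REST reference.
+endpoint = "https://<your-resource>.cognitiveservices.azure.com"
+key = "<your-key>"
+api_version = "<api-version>"
+
+url = f"{endpoint}/language/analyze-conversations/jobs?api-version={api_version}"
+body = {
+    "displayName": "Redact PII from a transcript",
+    "analysisInput": {
+        "conversations": [
+            {
+                "id": "1",
+                "language": "en",
+                "modality": "text",
+                "conversationItems": [
+                    {"id": "1", "participantId": "agent", "text": "Can I get your phone number?"},
+                    {"id": "2", "participantId": "customer", "text": "Sure, it's 555-012-3456."},
+                ],
+            }
+        ]
+    },
+    "tasks": [
+        {
+            "kind": "ConversationalPIITask",
+            "taskName": "pii-task",
+            # Limit detection to the categories described in this article.
+            "parameters": {"piiCategories": ["Name", "PhoneNumber", "Email"]},
+        }
+    ],
+}
+response = requests.post(url, headers={"Ocp-Apim-Subscription-Key": key}, json=body)
+# This is an asynchronous job; poll the URL in this header for the redacted results.
+print(response.status_code, response.headers.get("operation-location"))
+```
+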
+## Category: Name
+
+This category contains the following entity:
+
+ :::column span="":::
+ **Entity**
+
+ Name
+
+ :::column-end:::
+ :::column span="2":::
+ **Details**
+
+    Any first, middle, last, or full name is considered PII, regardless of whether it is the speaker's name, the agent's name, someone else's name, or a different version of the speaker's full name (Chris vs. Christopher).
+
+ To get this entity category, add `Name` to the `pii-categories` parameter. `Name` will be returned in the API response if detected.
+
+ :::column-end:::
+
+ :::column span="":::
+ **Supported document languages**
+
+ `en`
+ :::column-end:::
+
+## Category: PhoneNumber
+
+This category contains the following entity:
+
+ :::column span="":::
+ **Entity**
+
+ PhoneNumber
+
+ :::column-end:::
+ :::column span="2":::
+ **Details**
+
+    All telephone numbers (including toll-free numbers or numbers that may be easily found or considered public knowledge) are considered PII.
+
+ To get this entity category, add `PhoneNumber` to the `pii-categories` parameter. `PhoneNumber` will be returned in the API response if detected.
+
+ :::column-end:::
+
+ :::column span="":::
+ **Supported document languages**
+
+ `en`
+
+ :::column-end:::
++
+## Category: Address
+
+This category contains the following entity:
+
+ :::column span="":::
+ **Entity**
+
+ Address
+
+ :::column-end:::
+ :::column span="2":::
+ **Details**
+
+ Complete or partial addresses are considered PII. All addresses regardless of what residence or institution the address belongs to (such as: personal residence, business, medical center, government agency, etc.) are covered under this category.
+ Note:
+ * If information is limited to City & State only, it will not be considered PII.
+    * If information contains a street, zip code, or house number, all of the information is considered Address PII, including the city and state.
+
+ To get this entity category, add `Address` to the `pii-categories` parameter. `Address` will be returned in the API response if detected.
+
+ :::column-end:::
+
+ :::column span="":::
+ **Supported document languages**
+
+ `en`
+
+ :::column-end:::
++
+## Category: Email
+
+This category contains the following entity:
+
+ :::column span="":::
+ **Entity**
+
+ Email
+
+ :::column-end:::
+ :::column span="2":::
+ **Details**
+
+ All email addresses are considered PII.
+
+ To get this entity category, add `Email` to the `pii-categories` parameter. `Email` will be returned in the API response if detected.
+
+ :::column-end:::
+ :::column span="":::
+ **Supported document languages**
+
+ `en`
+
+ :::column-end:::
+
+## Category: NumericIdentifier
+
+This category contains the following entities:
+
+ :::column span="":::
+ **Entity**
+
+ NumericIdentifier
+
+ :::column-end:::
+ :::column span="2":::
+ **Details**
+
+ Any numeric or alphanumeric identifier that could contain any PII information.
+ Examples:
+ * Case Number
+ * Member Number
+ * Ticket number
+ * Bank account number
+ * Installation ID
+ * IP Addresses
+ * Product Keys
+ * Serial Numbers (1:1 relationship with a specific item/product)
+ * Shipping tracking numbers, etc.
+
+ To get this entity category, add `NumericIdentifier` to the `pii-categories` parameter. `NumericIdentifier` will be returned in the API response if detected.
+
+ :::column-end:::
+ :::column span="2":::
+ **Supported document languages**
+
+ `en`
+
+ :::column-end:::
+
+## Category: Credit card
+
+This category contains the following entity:
+
+ :::column span="":::
+ **Entity**
+
+ Credit card
+
+ :::column-end:::
+ :::column span="2":::
+ **Details**
+
+    Any credit card number, any security code on the back, or the expiration date is considered PII.
+
+ To get this entity category, add `CreditCard` to the `pii-categories` parameter. `CreditCard` will be returned in the API response if detected.
+
+ :::column-end:::
+ :::column span="2":::
+ **Supported document languages**
+
+ `en`
+
+ :::column-end:::
+
+## Next steps
+
+[How to detect PII in conversations](../how-to-call-for-conversations.md)
ai-services Entity Categories https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/language-service/personally-identifiable-information/concepts/entity-categories.md
+
+ Title: Entity categories recognized by Personally Identifiable Information (detection) in Azure AI Language
+
+description: Learn about the entities the PII feature can recognize from unstructured text.
++++++ Last updated : 11/15/2021++++
+# Supported Personally Identifiable Information (PII) entity categories
+
+Use this article to find the entity categories that can be returned by the [PII detection feature](../how-to-call.md). This feature runs a predictive model to identify, categorize, and redact sensitive information from an input document.
+
+The PII feature includes the ability to detect personal (`PII`) and health (`PHI`) information.
+
+## Entity categories
+
+> [!NOTE]
+> To detect protected health information (PHI), use the `domain=phi` parameter and model version `2020-04-01` or later.
++
+The following entity categories are returned when you send API requests to the PII feature.
+
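+As a hedged sketch of how the `piiCategories` and `domain` parameters mentioned in this article fit into a request, the example below assumes the synchronous `:analyze-text` route with the `PiiEntityRecognition` task kind and the `2023-04-01` API version; verify the exact request shape in the PII how-to article.
+
+```python
+import requests
+
+# Assumed placeholders; verify the API version and parameter names against
+# the PII how-to article before relying on this request shape.
+endpoint = "https://<your-resource>.cognitiveservices.azure.com"
+key = "<your-key>"
+
+url = f"{endpoint}/language/:analyze-text?api-version=2023-04-01"
+body = {
+    "kind": "PiiEntityRecognition",
+    "parameters": {
+        "modelVersion": "latest",
+        "domain": "phi",  # include protected health information entities
+        "piiCategories": ["Person", "PhoneNumber", "Email"],  # restrict to chosen categories
+    },
+    "analysisInput": {
+        "documents": [
+            {"id": "1", "language": "en", "text": "Call Dr. Contoso at 555-012-3456."}
+        ]
+    },
+}
+response = requests.post(url, headers={"Ocp-Apim-Subscription-Key": key}, json=body)
+print(response.json())
+```
+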
+## Category: Person
+
+This category contains the following entity:
+
+ :::column span="":::
+ **Entity**
+
+ Person
+
+ :::column-end:::
+ :::column span="2":::
+ **Details**
+
+ Names of people. Returned as both PII and PHI.
+
+ To get this entity category, add `Person` to the `piiCategories` parameter. `Person` will be returned in the API response if detected.
+
+ :::column-end:::
+
+ :::column span="":::
+ **Supported document languages**
+
+ `en`, `es`, `fr`, `de`, `it`, `zh-hans`, `ja`, `ko`, `pt-pt`, `pt-br`
+
+ :::column-end:::
+
+## Category: PersonType
+
+This category contains the following entity:
++
+ :::column span="":::
+ **Entity**
+
+ PersonType
+
+ :::column-end:::
+ :::column span="2":::
+ **Details**
+
+ Job types or roles held by a person.
+
+ To get this entity category, add `PersonType` to the `piiCategories` parameter. `PersonType` will be returned in the API response if detected.
+
+ :::column-end:::
+
+ :::column span="":::
+ **Supported document languages**
+
+ `en`, `es`, `fr`, `de`, `it`, `zh-hans`, `ja`, `ko`, `pt-pt`, `pt-br`
+
+ :::column-end:::
+
+## Category: PhoneNumber
+
+This category contains the following entity:
+
+ :::column span="":::
+ **Entity**
+
+ PhoneNumber
+
+ :::column-end:::
+ :::column span="2":::
+ **Details**
+
+ Phone numbers (US and EU phone numbers only). Returned as both PII and PHI.
+
+ To get this entity category, add `PhoneNumber` to the `piiCategories` parameter. `PhoneNumber` will be returned in the API response if detected.
+
+ :::column-end:::
+
+ :::column span="":::
+ **Supported document languages**
+
+    `en`, `es`, `fr`, `de`, `it`, `zh-hans`, `ja`, `ko`, `pt-pt`, `pt-br`
+
+ :::column-end:::
+++
+## Category: Organization
+
+This category contains the following entity:
+
+ :::column span="":::
+ **Entity**
+
+ Organization
+
+ :::column-end:::
+ :::column span="2":::
+ **Details**
+
+ Companies, political groups, musical bands, sports clubs, government bodies, and public organizations. Nationalities and religions are not included in this entity type. Returned as both PII and PHI.
+
+ To get this entity category, add `Organization` to the `piiCategories` parameter. `Organization` will be returned in the API response if detected.
+
+ :::column-end:::
+
+ :::column span="":::
+ **Supported document languages**
+
+ `en`, `es`, `fr`, `de`, `it`, `zh-hans`, `ja`, `ko`, `pt-pt`, `pt-br`
+
+ :::column-end:::
++
+#### Subcategories
+
+The entity in this category can have the following subcategories.
+
+ :::column span="":::
+ **Entity subcategory**
+
+ Medical
+
+ :::column-end:::
+ :::column span="2":::
+ **Details**
+
+ Medical companies and groups.
+
+ To get this entity category, add `OrganizationMedical` to the `piiCategories` parameter. `OrganizationMedical` will be returned in the API response if detected.
+
+ :::column-end:::
+
+ :::column span="":::
+ **Supported document languages**
+
+ `en`
+
+ :::column-end:::
+
+ :::column span="":::
+
+ Stock exchange
+
+ :::column-end:::
+ :::column span="2":::
+
+ Stock exchange groups.
+
+ To get this entity category, add `OrganizationStockExchange` to the `piiCategories` parameter. `OrganizationStockExchange` will be returned in the API response if detected.
+
+ :::column-end:::
+
+ :::column span="":::
+
+ `en`
+
+ :::column-end:::
+
+ :::column span="":::
+
+ Sports
+
+ :::column-end:::
+ :::column span="2":::
+
+ Sports-related organizations.
+
+ To get this entity category, add `OrganizationSports` to the `piiCategories` parameter. `OrganizationSports` will be returned in the API response if detected.
+
+ :::column-end:::
+
+ :::column span="":::
+
+ `en`
+
+ :::column-end:::
+++
+## Category: Address
+
+This category contains the following entity:
+
+ :::column span="":::
+ **Entity**
+
+ Address
+
+ :::column-end:::
+ :::column span="2":::
+ **Details**
+
+ Full mailing address. Returned as both PII and PHI.
+
+ To get this entity category, add `Address` to the `piiCategories` parameter. `Address` will be returned in the API response if detected.
+
+ :::column-end:::
+
+ :::column span="":::
+ **Supported document languages**
+
+ `en`, `es`, `fr`, `de`, `it`, `zh-hans`, `ja`, `ko`, `pt-pt`, `pt-br`
+
+ :::column-end:::
++
+## Category: Email
+
+This category contains the following entity:
+
+ :::column span="":::
+ **Entity**
+
+ Email
+
+ :::column-end:::
+ :::column span="2":::
+ **Details**
+
+ Email addresses. Returned as both PII and PHI.
+
+ To get this entity category, add `Email` to the `piiCategories` parameter. `Email` will be returned in the API response if detected.
+
+ :::column-end:::
+ :::column span="":::
+ **Supported document languages**
+
+ `en`, `es`, `fr`, `de`, `it`, `zh-hans`, `ja`, `ko`, `pt-pt`, `pt-br`
+
+ :::column-end:::
++
+## Category: URL
+
+This category contains the following entity:
+
+ :::column span="":::
+ **Entity**
+
+ URL
+
+ :::column-end:::
+ :::column span="2":::
+ **Details**
+
+ URLs to websites. Returned as both PII and PHI.
+
+ To get this entity category, add `URL` to the `piiCategories` parameter. `URL` will be returned in the API response if detected.
+
+ :::column-end:::
+
+ :::column span="":::
+ **Supported document languages**
+
+ `en`, `es`, `fr`, `de`, `it`, `zh-hans`, `ja`, `ko`, `pt-pt`, `pt-br`
+
+ :::column-end:::
++
+## Category: IP Address
+
+This category contains the following entity:
+
+ :::column span="":::
+ **Entity**
+
+ IPAddress
+
+ :::column-end:::
+ :::column span="2":::
+ **Details**
+
+ Network IP addresses. Returned as both PII and PHI.
+
+ To get this entity category, add `IPAddress` to the `piiCategories` parameter. `IPAddress` will be returned in the API response if detected.
+
+ :::column-end:::
+
+ :::column span="":::
+ **Supported document languages**
+
+ `en`, `es`, `fr`, `de`, `it`, `zh-hans`, `ja`, `ko`, `pt-pt`, `pt-br`
+
+ :::column-end:::
+
+## Category: DateTime
+
+This category contains the following entities:
+
+ :::column span="":::
+ **Entity**
+
+ DateTime
+
+ :::column-end:::
+ :::column span="2":::
+ **Details**
+
+ Dates and times of day.
+
+ To get this entity category, add `DateTime` to the `piiCategories` parameter. `DateTime` will be returned in the API response if detected.
+
+ :::column-end:::
+ :::column span="":::
+ **Supported document languages**
+
+ `en`, `es`, `fr`, `de`, `it`, `zh-hans`, `ja`, `ko`, `pt-pt`, `pt-br`
+
+ :::column-end:::
+
+#### Subcategories
+
+The entity in this category can have the following subcategories.
+
+ :::column span="":::
+ **Entity subcategory**
+
+ Date
+
+ :::column-end:::
+ :::column span="2":::
+ **Details**
+
+ Calendar dates. Returned as both PII and PHI.
+
+ To get this entity category, add `Date` to the `piiCategories` parameter. `Date` will be returned in the API response if detected.
+
+ :::column-end:::
+ :::column span="2":::
+ **Supported document languages**
+
+ `en`, `es`, `fr`, `de`, `it`, `zh-hans`, `ja`, `ko`, `pt-pt`, `pt-br`
+
+ :::column-end:::
++
+## Category: Age
+
+This category contains the following entities:
+
+ :::column span="":::
+ **Entity**
+
+ Age
+
+ :::column-end:::
+ :::column span="2":::
+ **Details**
+
+ Ages.
+
+ To get this entity category, add `Age` to the `piiCategories` parameter. `Age` will be returned in the API response if detected.
+
+ :::column-end:::
+ :::column span="2":::
+ **Supported document languages**
+
+ `en`, `es`, `fr`, `de`, `it`, `zh-hans`, `ja`, `ko`, `pt-pt`, `pt-br`
+
+ :::column-end:::
+
+### Azure information
+
+These entity categories include identifiable Azure information like authentication information and connection strings. Not returned as PHI.
+
+ :::column span="":::
+ **Entity**
+
+ Azure DocumentDB Auth Key
+
+ :::column-end:::
+ :::column span="2":::
+ **Details**
+
+ Authorization key for an Azure Cosmos DB server.
+
+ To get this entity category, add `AzureDocumentDBAuthKey` to the `piiCategories` parameter. `AzureDocumentDBAuthKey` will be returned in the API response if detected.
+
+ :::column-end:::
+ :::column span="":::
+ **Supported document languages**
+
+ `en`
+
+ :::column-end:::
+ :::column span="":::
+
+ Azure IAAS Database Connection String and Azure SQL Connection String.
+
+
+ :::column-end:::
+ :::column span="2":::
+
+ Connection string for an Azure infrastructure as a service (IaaS) database, and SQL connection string.
+
+ To get this entity category, add `AzureIAASDatabaseConnectionAndSQLString` to the `piiCategories` parameter. `AzureIAASDatabaseConnectionAndSQLString` will be returned in the API response if detected.
+
+ :::column-end:::
+ :::column span="":::
+
+ `en`
+
+ :::column-end:::
+ :::column span="":::
+
+ Azure IoT Connection String
+
+ :::column-end:::
+ :::column span="2":::
+
+ Connection string for Azure IoT.
+
+ To get this entity category, add `AzureIoTConnectionString` to the `piiCategories` parameter. `AzureIoTConnectionString` will be returned in the API response if detected.
+
+ :::column-end:::
+ :::column span="":::
+
+ `en`
+
+ :::column-end:::
+ :::column span="":::
+
+ Azure Publish Setting Password
+
+ :::column-end:::
+ :::column span="2":::
+
+ Password for Azure publish settings.
+
+ To get this entity category, add `AzurePublishSettingPassword` to the `piiCategories` parameter. `AzurePublishSettingPassword` will be returned in the API response if detected.
+
+ :::column-end:::
+ :::column span="":::
+
+ `en`
+
+ :::column-end:::
+ :::column span="":::
+
+ Azure Redis Cache Connection String
+
+ :::column-end:::
+ :::column span="2":::
+
+ Connection string for a Redis cache.
+
+ To get this entity category, add `AzureRedisCacheString` to the `piiCategories` parameter. `AzureRedisCacheString` will be returned in the API response if detected.
+
+ :::column-end:::
+ :::column span="":::
+
+ `en`
+
+ :::column-end:::
+ :::column span="":::
+
+ Azure SAS
+
+ :::column-end:::
+ :::column span="2":::
+
+ Connection string for Azure software as a service (SaaS).
+
+ To get this entity category, add `AzureSAS` to the `piiCategories` parameter. `AzureSAS` will be returned in the API response if detected.
+
+ :::column-end:::
+ :::column span="":::
+
+ `en`
+
+ :::column-end:::
+ :::column span="":::
+
+ Azure Service Bus Connection String
+
+ :::column-end:::
+ :::column span="2":::
+
+ Connection string for an Azure service bus.
+
+ To get this entity category, add `AzureServiceBusString` to the `piiCategories` parameter. `AzureServiceBusString` will be returned in the API response if detected.
+
+ :::column-end:::
+ :::column span="":::
+
+ `en`
+
+ :::column-end:::
+ :::column span="":::
+
+ Azure Storage Account Key
+
+ :::column-end:::
+ :::column span="2":::
+
+ Account key for an Azure storage account.
+
+ To get this entity category, add `AzureStorageAccountKey` to the `piiCategories` parameter. `AzureStorageAccountKey` will be returned in the API response if detected.
+
+ :::column-end:::
+ :::column span="":::
+
+ `en`
+
+ :::column-end:::
+ :::column span="":::
+
+ Azure Storage Account Key (Generic)
+
+ :::column-end:::
+ :::column span="2":::
+
+ Generic account key for an Azure storage account.
+
+ To get this entity category, add `AzureStorageAccountGeneric` to the `piiCategories` parameter. `AzureStorageAccountGeneric` will be returned in the API response if detected.
+
+ :::column-end:::
+ :::column span="":::
+
+ `en`
+
+ :::column-end:::
+ :::column span="":::
+
+ SQL Server Connection String
+
+ :::column-end:::
+ :::column span="2":::
+
+ Connection string for a computer running SQL Server.
+
+ To get this entity category, add `SQLServerConnectionString` to the `piiCategories` parameter. `SQLServerConnectionString` will be returned in the API response if detected.
+
+ :::column-end:::
+ :::column span="":::
+
+ `en`
+
+ :::column-end:::
+
+### Identification
+++
+## Next steps
+
+* [NER overview](../overview.md)
ai-services How To Call For Conversations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/language-service/personally-identifiable-information/how-to-call-for-conversations.md
+
+ Title: How to detect Personally Identifiable Information (PII) in conversations.
+
+description: This article will show you how to extract PII from chat and spoken transcripts and redact identifiable information.
++++++ Last updated : 01/31/2023+++++
+# How to detect and redact Personally Identifying Information (PII) in conversations
+
+The Conversational PII feature can evaluate conversations to extract sensitive information (PII) from the content across several pre-defined categories, and redact it. This API operates on both transcribed text (referred to as transcripts) and chats.
+For transcripts, the API also enables redaction of audio segments that contain PII, by providing the audio timing information for those segments.
+
+## Determine how to process the data (optional)
+
+### Specify the PII detection model
+
+By default, this feature will use the latest available AI model on your input. You can also configure your API requests to use a specific [model version](../concepts/model-lifecycle.md).
+
+### Language support
+
+Currently, the conversational PII preview API only supports the English language.
+
+### Region support
+
+Currently the conversational PII preview API supports all Azure regions supported by the Language service.
+
+## Submitting data
+
+> [!NOTE]
+> See the [Language Studio](../language-studio.md#valid-text-formats-for-conversation-features) article for information on formatting conversational text to submit using Language Studio.
+
+You can submit the input to the API as a list of conversation items. Analysis is performed upon receipt of the request. Because the API is asynchronous, there may be a delay between sending an API request and receiving the results. For information on the size and number of requests you can send per minute and second, see the data limits below.
+
+When using the async feature, the API results are available for 24 hours from the time the request was ingested; this is indicated in the response. After this time period, the results are purged and are no longer available for retrieval.
+
+When you submit data to conversational PII, you can send one conversation (chat or spoken) per request.
+
+The API will attempt to detect all the [defined entity categories](concepts/conversations-entity-categories.md) for a given conversation input. If you want to specify which entities will be detected and returned, use the optional `piiCategories` parameter with the appropriate entity categories.
+
+For spoken transcripts, the entities detected are returned based on the `redactionSource` parameter value provided. Currently, the supported values for `redactionSource` are `text`, `lexical`, `itn`, and `maskedItn` (which map to the Speech to text REST API's `display`\\`displayText`, `lexical`, `itn`, and `maskedItn` formats respectively). Additionally, for spoken transcript input, this API also provides audio timing information to enable audio redaction. To use the audio redaction feature, set the optional `includeAudioRedaction` flag to `true`. The audio redaction is performed based on the lexical input format.
+
+> [!NOTE]
+> Conversation PII now supports 40,000 characters as document size.
++
+## Getting PII results
+
+When you get results from PII detection, you can stream the results to an application or save the output to a file on the local system. The API response will include [recognized entities](concepts/conversations-entity-categories.md), including their categories and subcategories, and confidence scores. The text string with the PII entities redacted will also be returned.
+
+## Examples
+
+# [Client libraries (Azure SDK)](#tab/client-libraries)
+
+1. Go to your resource overview page in the [Azure portal](https://portal.azure.com/#home)
+
+2. From the menu on the left side, select **Keys and Endpoint**. You'll need one of the keys and the endpoint to authenticate your API requests.
+
+3. Download and install the client library package for your language of choice:
+
+ |Language |Package version |
+ |||
+ |.NET | [1.0.0](https://www.nuget.org/packages/Azure.AI.Language.Conversations/1.0.0) |
+ |Python | [1.1.0b2](https://pypi.org/project/azure-ai-language-conversations/1.1.0b2) |
+
+4. See the following reference documentation for more information on the client, and return object:
+
+ * [C#](/dotnet/api/azure.ai.language.conversations)
+ * [Python](/python/api/azure-ai-language-conversations/azure.ai.language.conversations.aio)
+
+# [REST API](#tab/rest-api)
+
+## Submit transcripts using speech to text
+
+Use the following example if you have conversations transcribed using the Speech service's [speech to text](../../Speech-Service/speech-to-text.md) feature:
+
+```bash
+curl -i -X POST https://your-language-endpoint-here/language/analyze-conversations/jobs?api-version=2022-05-15-preview \
+-H "Content-Type: application/json" \
+-H "Ocp-Apim-Subscription-Key: your-key-here" \
+-d \
+'
+{
+ "displayName": "Analyze conversations from xxx",
+ "analysisInput": {
+ "conversations": [
+ {
+ "id": "23611680-c4eb-4705-adef-4aa1c17507b5",
+ "language": "en",
+ "modality": "transcript",
+ "conversationItems": [
+ {
+ "participantId": "agent_1",
+ "id": "8074caf7-97e8-4492-ace3-d284821adacd",
+ "text": "Good morning.",
+ "lexical": "good morning",
+ "itn": "good morning",
+ "maskedItn": "good morning",
+ "audioTimings": [
+ {
+ "word": "good",
+ "offset": 11700000,
+ "duration": 2100000
+ },
+ {
+ "word": "morning",
+ "offset": 13900000,
+ "duration": 3100000
+ }
+ ]
+ },
+ {
+ "participantId": "agent_1",
+ "id": "0d67d52b-693f-4e34-9881-754a14eec887",
+ "text": "Can I have your name?",
+ "lexical": "can i have your name",
+ "itn": "can i have your name",
+ "maskedItn": "can i have your name",
+ "audioTimings": [
+ {
+ "word": "can",
+ "offset": 44200000,
+ "duration": 2200000
+ },
+ {
+ "word": "i",
+ "offset": 46500000,
+ "duration": 800000
+ },
+ {
+ "word": "have",
+ "offset": 47400000,
+ "duration": 1500000
+ },
+ {
+ "word": "your",
+ "offset": 49000000,
+ "duration": 1500000
+ },
+ {
+ "word": "name",
+ "offset": 50600000,
+ "duration": 2100000
+ }
+ ]
+ },
+ {
+ "participantId": "customer_1",
+ "id": "08684a7a-5433-4658-a3f1-c6114fcfed51",
+ "text": "Sure that is John Doe.",
+ "lexical": "sure that is john doe",
+ "itn": "sure that is john doe",
+ "maskedItn": "sure that is john doe",
+ "audioTimings": [
+ {
+ "word": "sure",
+ "offset": 5400000,
+ "duration": 6300000
+ },
+ {
+ "word": "that",
+ "offset": 13600000,
+ "duration": 2300000
+ },
+ {
+ "word": "is",
+ "offset": 16000000,
+ "duration": 1300000
+ },
+ {
+ "word": "john",
+ "offset": 17400000,
+ "duration": 2500000
+ },
+ {
+ "word": "doe",
+ "offset": 20000000,
+ "duration": 2700000
+ }
+ ]
+ }
+ ]
+ }
+ ]
+ },
+ "tasks": [
+ {
+ "taskName": "analyze 1",
+ "kind": "ConversationalPIITask",
+ "parameters": {
+ "modelVersion": "2022-05-15-preview",
+ "redactionSource": "text",
+ "includeAudioRedaction": true,
+ "piiCategories": [
+ "all"
+ ]
+ }
+ }
+ ]
+}
+'
+```
+
+## Submit text chats
+
+Use the following example if you have conversations that originated in text. For example, conversations through a text-based chat client.
+
+```bash
+curl -i -X POST https://your-language-endpoint-here/language/analyze-conversations/jobs?api-version=2022-05-15-preview \
+-H "Content-Type: application/json" \
+-H "Ocp-Apim-Subscription-Key: your-key-here" \
+-d \
+'
+{
+ "displayName": "Analyze conversations from xxx",
+ "analysisInput": {
+ "conversations": [
+ {
+ "id": "23611680-c4eb-4705-adef-4aa1c17507b5",
+ "language": "en",
+ "modality": "text",
+ "conversationItems": [
+ {
+ "participantId": "agent_1",
+ "id": "8074caf7-97e8-4492-ace3-d284821adacd",
+ "text": "Good morning."
+ },
+ {
+ "participantId": "agent_1",
+ "id": "0d67d52b-693f-4e34-9881-754a14eec887",
+ "text": "Can I have your name?"
+ },
+ {
+ "participantId": "customer_1",
+ "id": "08684a7a-5433-4658-a3f1-c6114fcfed51",
+ "text": "Sure that is John Doe."
+ }
+ ]
+ }
+ ]
+ },
+ "tasks": [
+ {
+ "taskName": "analyze 1",
+ "kind": "ConversationalPIITask",
+ "parameters": {
+ "modelVersion": "2022-05-15-preview"
+ }
+ }
+ ]
+}
+'
+```
++
+## Get the result
+
+Get the `operation-location` from the response header. The value will look similar to the following URL:
+
+```rest
+https://your-language-endpoint/language/analyze-conversations/jobs/12345678-1234-1234-1234-12345678
+```
+
+To get the results of the request, use the following cURL command. Be sure to replace `my-job-id` with the ID value you received from the previous `operation-location` response header:
+
+```bash
+curl -X GET https://your-language-endpoint/language/analyze-conversations/jobs/my-job-id \
+-H "Content-Type: application/json" \
+-H "Ocp-Apim-Subscription-Key: your-key-here"
+```
+++
+## Service and data limits
++
ai-services How To Call https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/language-service/personally-identifiable-information/how-to-call.md
+
+ Title: How to detect Personally Identifiable Information (PII)
+
+description: This article will show you how to extract PII and health information (PHI) from text and detect identifiable information.
++++++ Last updated : 07/27/2022+++++
+# How to detect and redact Personally Identifying Information (PII)
+
+The PII feature can evaluate unstructured text, extract and redact sensitive information (PII) and health information (PHI) in text across several [pre-defined categories](concepts/entity-categories.md).
++
+## Development options
++
+## Determine how to process the data (optional)
+
+### Specify the PII detection model
+
+By default, this feature will use the latest available AI model on your text. You can also configure your API requests to use a specific [model version](../concepts/model-lifecycle.md).
+
+### Input languages
+
+When you submit documents to be processed, you can specify which of [the supported languages](language-support.md) they're written in. If you don't specify a language, extraction will default to English. The API may return offsets in the response to support different [multilingual and emoji encodings](../concepts/multilingual-emoji-support.md).
+
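+As a minimal sketch, the `language` field is set per document in the request body. The endpoint and overall request shape are the same as in the example later in this article; the document text below is illustrative only:
+
+```bash
+{
+    "kind": "PiiEntityRecognition",
+    "parameters": {
+        "modelVersion": "latest"
+    },
+    "analysisInput": {
+        "documents": [
+            {
+                "id": "1",
+                "language": "es",
+                "text": "Mi nombre es Juan Pérez y mi teléfono es 555-0176."
+            },
+            {
+                "id": "2",
+                "language": "en",
+                "text": "My name is John Doe and my phone number is 555-0176."
+            }
+        ]
+    }
+}
+```
+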
+## Submitting data
+
+Analysis is performed upon receipt of the request. Using the PII detection feature synchronously is stateless. No data is stored in your account, and results are returned immediately in the response.
++
+## Select which entities to be returned
+
+The API will attempt to detect the [defined entity categories](concepts/entity-categories.md) for a given document language. If you want to specify which entities will be detected and returned, use the optional `piiCategories` parameter with the appropriate entity categories. This parameter can also let you detect entities that aren't enabled by default for your document language. The following example would detect only `Person`. You can specify one or more [entity types](concepts/entity-categories.md) to be returned.
+
+> [!TIP]
+> If you don't include `default` when specifying entity categories, the API will only return the entity categories you specify.
+
+**Input:**
+
+> [!NOTE]
+> In this example, the API returns only the **Person** entity type:
+
+`https://<your-language-resource-endpoint>/language/:analyze-text?api-version=2022-05-01`
+
+```bash
+{
+    "kind": "PiiEntityRecognition",
+    "parameters": {
+        "modelVersion": "latest",
+        "piiCategories": [
+            "Person"
+        ]
+    },
+    "analysisInput": {
+        "documents": [
+            {
+                "id": "1",
+                "language": "en",
+                "text": "We went to Contoso foodplace located at downtown Seattle last week for a dinner party, and we adore the spot! They provide marvelous food and they have a great menu. The chief cook happens to be the owner (I think his name is John Doe) and he is super nice, coming out of the kitchen and greeted us all. We enjoyed very much dining in the place! The pasta I ordered was tender and juicy, and the place was impeccably clean. You can even pre-order from their online menu at www.contosofoodplace.com, call 112-555-0176 or send email to order@contosofoodplace.com! The only complaint I have is the food didn't come fast enough. Overall I highly recommend it!"
+            }
+        ]
+    }
+}
+
+```
+
+**Output:**
+
+```bash
+
+{
+    "kind": "PiiEntityRecognitionResults",
+    "results": {
+        "documents": [
+            {
+                "redactedText": "We went to Contoso foodplace located at downtown Seattle last week for a dinner party, and we adore the spot! They provide marvelous food and they have a great menu. The chief cook happens to be the owner (I think his name is ********) and he is super nice, coming out of the kitchen and greeted us all. We enjoyed very much dining in the place! The pasta I ordered was tender and juicy, and the place was impeccably clean. You can even pre-order from their online menu at www.contosofoodplace.com, call 112-555-0176 or send email to order@contosofoodplace.com! The only complaint I have is the food didn't come fast enough. Overall I highly recommend it!",
+                "id": "1",
+                "entities": [
+                    {
+                        "text": "John Doe",
+                        "category": "Person",
+                        "offset": 226,
+                        "length": 8,
+                        "confidenceScore": 0.98
+                    }
+                ],
+                "warnings": []
+            }
+        ],
+        "errors": [],
+        "modelVersion": "2021-01-15"
+    }
+}
+```
+
+## Getting PII results
+
+When you get results from PII detection, you can stream the results to an application or save the output to a file on the local system. The API response will include [recognized entities](concepts/entity-categories.md), including their categories and subcategories, and confidence scores. The text string with the PII entities redacted will also be returned.
+
+## Service and data limits
++
+## Next steps
+
+[Named Entity Recognition overview](overview.md)
ai-services Language Support https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/language-service/personally-identifiable-information/language-support.md
+
+ Title: Personally Identifiable Information (PII) detection language support
+
+description: This article explains which natural languages are supported by the PII detection feature of Azure AI Language.
++++++ Last updated : 08/02/2022++++
+# Personally Identifiable Information (PII) detection language support
+
+Use this article to learn which natural languages are supported by the PII and conversation PII (preview) features of Azure AI Language.
+
+> [!NOTE]
+> * Languages are added as new [model versions](how-to-call.md#specify-the-pii-detection-model) are released.
+
+# [PII for documents](#tab/documents)
+
+## PII language support
+
+| Language | Language code | Starting with model version | Notes |
+||-|--||
+|Arabic |`ar` | 2023-01-01-preview | |
+|Chinese-Simplified |`zh-hans` |2021-01-15 |`zh` also accepted|
+|Chinese-Traditional |`zh-hant` | 2023-01-01-preview | |
+|Czech |`cs` | 2023-01-01-preview | |
+|Danish |`da` |2023-01-01-preview | |
+|Dutch |`nl` |2023-01-01-preview | |
+|English |`en` |2020-07-01 | |
+|Finnish |`fi` | 2023-01-01-preview | |
+|French |`fr` |2021-01-15 | |
+|German |`de` | 2021-01-15 | |
+|Hebrew |`he` | 2023-01-01-preview | |
+|Hindi |`hi` |2023-01-01-preview | |
+|Hungarian |`hu` | 2023-01-01-preview | |
+|Italian |`it` |2021-01-15 | |
+|Japanese |`ja` | 2021-01-15 | |
+|Korean |`ko` | 2021-01-15 | |
+|Norwegian (Bokmål) |`no` | 2023-01-01-preview |`nb` also accepted|
+|Polish |`pl` | 2023-01-01-preview | |
+|Portuguese (Brazil) |`pt-BR` |2021-01-15 | |
+|Portuguese (Portugal)|`pt-PT` | 2021-01-15 |`pt` also accepted|
+|Russian |`ru` | 2023-01-01-preview | |
+|Spanish |`es` |2020-04-01 | |
+|Swedish |`sv` | 2023-01-01-preview | |
+|Turkish |`tr` |2023-01-01-preview | |
++
+# [PII for conversations (preview)](#tab/conversations)
+
+## PII language support
+
+| Language | Language code | Starting with model version | Notes |
+|:-|:-:|:-:|::|
+| English | `en` | 2022-05-15-preview | |
+| French | `fr` | XXXX-XX-XX-preview | |
+| German | `de` | XXXX-XX-XX-preview | |
+| Spanish | `es` | XXXX-XX-XX-preview | |
+++
+## Next steps
+
+[PII feature overview](overview.md)
ai-services Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/language-service/personally-identifiable-information/overview.md
+
+ Title: What is the Personally Identifying Information (PII) detection feature in Azure AI Language?
+
+description: An overview of the PII detection feature in Azure AI services, which helps you extract entities and sensitive information (PII) in text.
++++++ Last updated : 01/10/2023++++
+# What is Personally Identifiable Information (PII) detection in Azure AI Language?
+
+PII detection is one of the features offered by [Azure AI Language](../overview.md), a collection of machine learning and AI algorithms in the cloud for developing intelligent applications that involve written language. The PII detection feature can **identify, categorize, and redact** sensitive information in unstructured text. For example: phone numbers, email addresses, and forms of identification. The method for utilizing PII in conversations is different from other use cases, and articles for this use have been separated.
+
+* [**Quickstarts**](quickstart.md) are getting-started instructions to guide you through making requests to the service.
+* [**How-to guides**](how-to-call.md) contain instructions for using the service in more specific or customized ways.
+* The [**conceptual articles**](concepts/entity-categories.md) provide in-depth explanations of the service's functionality and features.
+
+PII detection comes in two forms:
+* [PII](how-to-call.md) - works on unstructured text.
+* [Conversation PII (preview)](how-to-call-for-conversations.md) - a model tailored to work on conversation transcripts.
+++
+## Get started with PII detection
+++++
+## Responsible AI
+
+An AI system includes not only the technology, but also the people who will use it, the people who will be affected by it, and the environment in which it's deployed. Read the [transparency note for PII](/legal/cognitive-services/language-service/transparency-note-personally-identifiable-information?context=/azure/ai-services/language-service/context/context) to learn about responsible AI use and deployment in your systems. You can also see the following articles for more information:
++
+## Example scenarios
+
+* **Apply sensitivity labels** - For example, based on the results from the PII service, a public sensitivity label might be applied to documents where no PII entities are detected. For documents where US addresses and phone numbers are recognized, a confidential label might be applied. A highly confidential label might be used for documents where bank routing numbers are recognized.
+* **Redact some categories of personal information from documents that get wider circulation** - For example, if customer contact records are accessible to first line support representatives, the company may want to redact the customer's personal information other than their name from the version of the customer history, to preserve the customer's privacy.
+* **Redact personal information in order to reduce unconscious bias** - For example, during a company's resume review process, they may want to block name, address, and phone number to help reduce unconscious gender or other biases.
+* **Replace personal information in source data for machine learning to reduce unfairness** - For example, if you want to remove names that might reveal gender when training a machine learning model, you could use the service to identify them and replace them with generic placeholders for model training.
+* **Remove personal information from call center transcription** - For example, if you want to remove names or other PII data that come up between the agent and the customer in a call center scenario, you could use the service to identify and remove them.
+* **Data cleaning for data science** - PII detection can be used to prepare data for data scientists and engineers to train their machine learning models, redacting it to make sure that customer data isn't exposed.
+++
+## Next steps
+
+There are two ways to get started using the PII detection feature:
+* [Language Studio](../language-studio.md), which is a web-based platform that enables you to try several Language service features without needing to write code.
+* The [quickstart article](quickstart.md) for instructions on making requests to the service using the REST API and client library SDK.
ai-services Quickstart https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/language-service/personally-identifiable-information/quickstart.md
+
+ Title: "Quickstart: Detect Personally Identifying Information (PII) in text"
+
+description: Use this quickstart to start using the PII detection API.
++++++ Last updated : 02/17/2023+
+ms.devlang: csharp, java, javascript, python
+
+zone_pivot_groups: programming-languages-text-analytics
++
+# Quickstart: Detect Personally Identifiable Information (PII)
+
+> [!NOTE]
+> This quickstart only covers PII detection in documents. To learn more about detecting PII in conversations, see [How to detect and redact PII in conversations](how-to-call-for-conversations.md).
++++++++++++++++
+## Clean up resources
+
+If you want to clean up and remove an Azure AI services subscription, you can delete the resource or resource group. Deleting the resource group also deletes any other resources associated with it.
+
+* [Portal](../../multi-service-resource.md?pivots=azportal#clean-up-resources)
+* [Azure CLI](../../multi-service-resource.md?pivots=azcli#clean-up-resources)
+++
+## Next steps
+
+* [Overview](overview.md)
ai-services Azure Resources https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/language-service/question-answering/concepts/azure-resources.md
+
+ Title: Azure resources - question answering
+description: Question answering uses several Azure resources, each with a different purpose. Understanding how they are used individually allows you to plan for and select the correct pricing tier or know when to change your pricing tier. Understanding how they are used in combination allows you to find and fix problems when they occur.
+++++ Last updated : 08/08/2022+++
+# Azure resources for question answering
+
+Question answering uses several Azure resources, each with a different purpose. Understanding how they are used individually allows you to plan for and select the correct pricing tier or know when to change your pricing tier. Understanding how resources are used _in combination_ allows you to find and fix problems when they occur.
+
+## Resource planning
+
+> [!TIP]
+> "Knowledge base" and "project" are equivalent terms in question answering and can be used interchangeably.
+
+When you first develop a project, in the prototype phase, it is common to have a single resource for both testing and production.
+
+When you move into the development phase of the project, you should consider:
+
+* How many languages will your project hold?
+* How many regions do you need your project to be available in?
+* How many documents will your system hold in each domain?
+
+## Pricing tier considerations
+
+Typically there are three parameters you need to consider:
+
+* **The throughput you need**:
+
+ * The throughput for question answering is currently capped at 10 text records per second for both management APIs and prediction APIs.
+
+ * This should also influence your Azure **Cognitive Search** SKU selection; see more details [here](../../../../search/search-sku-tier.md). Additionally, you may need to adjust Cognitive Search [capacity](../../../../search/search-capacity-planning.md) with replicas.
+
+* **Size and the number of projects**: Choose the appropriate [Azure search SKU](https://azure.microsoft.com/pricing/details/search/) for your scenario. Typically, you decide the number of projects you need based on the number of different subject domains. One subject domain (for a single language) should be in one project.
+
+ With custom question answering, you have a choice to set up your language resource in a single language or multiple languages. You can make this selection when you create your first project in the [Language Studio](https://language.azure.com/).
+
+ > [!IMPORTANT]
+ > You can publish N-1 projects of a single language or N/2 projects of different languages in a particular tier, where N is the maximum number of indexes allowed in the tier. Also check the maximum size and the number of documents allowed per tier.
+
+ For example, if your tier has 15 allowed indexes, you can publish 14 projects of the same language (one index per published project). The 15th index is used for all the projects for authoring and testing. If you choose to have projects in different languages, then you can only publish seven projects.
+
+* **Number of documents as sources**: There are no limits to the number of documents you can add as sources in question answering.
+
+The following table gives you some high-level guidelines.
+
+| |Azure Cognitive Search | Limitations |
+| -- | | -- |
+| **Experimentation** |Free Tier | Publish up to 2 KBs, 50 MB size |
+| **Dev/Test Environment** |Basic | Publish up to 14 KBs, 2 GB size |
+| **Production Environment** |Standard | Publish up to 49 KBs, 25 GB size |
+
+## Recommended settings
++
+The throughput for question answering is currently capped at 10 text records per second for both management APIs and prediction APIs. To target 10 text records per second for your service, we recommend the S1 (one instance) SKU of Azure Cognitive Search.
++
+## Keys in question answering
+
+Your custom question answering feature deals with two kinds of keys: **authoring keys** and **Azure Cognitive Search keys** used to access the service in the customer's subscription.
+
+Use these keys when making requests to the service through APIs.
+
+|Name|Location|Purpose|
+|--|--|--|
+|Authoring/Subscription key|[Azure portal](https://azure.microsoft.com/free/cognitive-services/)|These keys are used to access the Language service APIs. These APIs let you edit the questions and answers in your project, and publish your project. These keys are created when you create a new resource.<br><br>Find these keys on the **Azure AI services** resource on the **Keys and Endpoint** page.|
+|Azure Cognitive Search Admin Key|[Azure portal](../../../../search/search-security-api-keys.md)|These keys are used to communicate with the Azure Cognitive Search service deployed in the user's Azure subscription. When you associate an Azure Cognitive Search resource with the custom question answering feature, the admin key is automatically passed to question answering.<br><br>You can find these keys on the **Azure Cognitive Search** resource on the **Keys** page.|
+
+### Find authoring keys in the Azure portal
+
+You can view and reset your authoring keys from the Azure portal, where you added the custom question answering feature in your language resource.
+
+1. Go to the language resource in the Azure portal and select the resource that has the *Azure AI services* type:
+
+ > [!div class="mx-imgBorder"]
+ > ![Screenshot of question answering resource list.](../../../qnamaker/media/qnamaker-how-to-setup-service/resources-created-question-answering.png)
+
+2. Go to **Keys and Endpoint**:
+
+ > [!div class="mx-imgBorder"]
+ > ![Screenshot of subscription key.](../../../qnamaker/media/qnamaker-how-to-key-management/custom-qna-keys-and-endpoint.png)
+
+### Management service region
+
+In custom question answering, both the management and the prediction services are colocated in the same region.
+
+## Resource purposes
+
+Each Azure resource created with the custom question answering feature has a specific purpose:
+
+* Language resource (Also referred to as a Text Analytics resource depending on the context of where you are evaluating the resource.)
+* Cognitive Search resource
+
+### Language resource
+
+The language resource with the custom question answering feature provides access to the authoring and publishing APIs, hosts the ranking runtime, and provides telemetry.
+
+### Azure Cognitive Search resource
+
+The [Cognitive Search](../../../../search/index.yml) resource is used to:
+
+* Store the question and answer pairs
+* Provide the initial ranking (ranker #1) of the question and answer pairs at runtime
+
+#### Index usage
+
+You can publish N-1 projects of a single language or N/2 projects of different languages in a particular tier, where N is the maximum number of indexes allowed in the Azure Cognitive Search tier. Also check the maximum size and the number of documents allowed per tier.
+
+For example, if your tier has 15 allowed indexes, you can publish 14 projects of the same language (one index per published project). The 15th index is used for all the projects for authoring and testing. If you choose to have projects in different languages, then you can only publish seven projects.
+
+#### Language usage
+
+With custom question answering, you have a choice to set up your service for projects in a single language or multiple languages. You make this choice during the creation of the first project in your language resource.
+
+## Next steps
+
+* Learn about the question answering [projects](../How-To/manage-knowledge-base.md)
ai-services Best Practices https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/language-service/question-answering/concepts/best-practices.md
+
+ Title: Best practices - question answering
+description: Use these best practices to improve your project and provide better results to your application/chat bot's end users.
+++++ Last updated : 06/03/2022+++
+# Question answering best practices
+
+Use these best practices to improve your project and provide better results to your client application or chat bot's end users.
+
+## Extraction
+
+Question answering is continually improving the algorithms that extract question answer pairs from content and expanding the list of supported file and HTML formats. In general, FAQ pages should be stand-alone and not combined with other information. Product manuals should have clear headings and preferably an index page.
+
+## Creating good questions and answers
+
+We've used the following list of question and answer pairs as a representation of a project to highlight best practices when authoring projects for question answering.
+
+| Question | Answer |
+|-|-|
+| I want to buy a car |There are three options to buy a car.|
+| I want to purchase software license |Software license can be purchased online at no cost.|
+| What is the price of Microsoft stock? | $200. |
+| How to buy Microsoft Services | Microsoft services can be bought online.|
+| Want to sell car | Please send car pics and document.|
+| How to get access to identification card? | Apply via company portal to get identification card.|
+
+### When should you add alternate questions to question and answer pairs?
+
+Question answering employs a transformer-based ranker that takes care of user queries that are semantically similar to the question in the project. For example, consider the following question answer pair:
+
+*Question: What is the price of Microsoft Stock?*
+*Answer: $200.*
+
+The service can return the expected response for semantically similar queries such as:
+
+"How much is Microsoft stock worth?"
+"How much is Microsoft share value?"
+"How much does a Microsoft share cost?"
+"What is the market value of a Microsoft stock?"
+"What is the market value of a Microsoft share?"
+
+However, it's important to understand that the confidence score with which the system returns the correct response will vary based on the input query and how different it is from the original question answer pair.
+
+There are certain scenarios that require the customer to add an alternate question. When it's already verified that the correct answer isn't returned for a particular query despite being present in the project, we advise adding that query as an alternate question to the intended question answer pair.
+
+### How many alternate questions per question answer pair is optimal?
+
+Users can add as many alternate questions as they want, but only the first 5 are considered for core ranking. However, the rest are useful for exact match scenarios. It's also recommended to keep distinct alternate questions with different intents at the top for better relevance and score.
+
+Semantic understanding in question answering should be able to take care of similar alternate questions.
+
+The return on investment will start diminishing once you exceed 10 questions. Even if you're adding more than 10 alternate questions, try to make the initial 10 questions as semantically dissimilar as possible so that all kinds of intents for the answer are captured by these 10 questions. For the project at the beginning of this section, in question answer pair #1, adding alternate questions such as "How can I buy a car" and "I wanna buy a car" isn't required, whereas adding alternate questions such as "How to purchase a car" and "What are the options of buying a vehicle" can be useful.
+
+### When to add synonyms to a project?
+
+Question answering provides the flexibility to use synonyms at the project level, unlike QnA Maker where synonyms are shared across projects for the entire service.
+
+For better relevance, you need to provide a list of acronyms that the end user intends to use interchangeably. The following is a list of acceptable acronyms:
+
+`MSFT` - Microsoft
+`ID` - Identification
+`ETA` - Estimated time of Arrival
+
+Other than acronyms, if you think your words are similar in the context of a particular domain and generic language models won't consider them similar, it's better to add them as synonyms. For instance, if an auto company producing a car model X receives queries such as "my car's audio isn't working" and the project has questions on "fixing audio for car X", then we need to add 'X' and 'car' as synonyms.
+
+The transformer-based model already takes care of most of the common synonym cases, for example: `Purchase - Buy`, `Sell - Auction`, `Price - Value`. For another example, consider the following question answer pair: Q: "What is the price of Microsoft Stock?" A: "$200".
+
+If we receive user queries like "Microsoft stock value", "Microsoft share value", "Microsoft stock worth", "Microsoft share worth", "stock value", etc., you should be able to get the correct answer even though these queries have words like "share", "value", and "worth", which aren't originally present in the project.
+
+Special characters are not allowed in synonyms.
+
+### How are lowercase/uppercase characters treated?
+
+Question answering takes casing into account, but it's intelligent enough to understand when it's to be ignored. You shouldn't see any perceivable difference due to wrong casing.
+
+### How are question answer pairs prioritized for multi-turn questions?
+
+When a project has hierarchical relationships (either added manually or via extraction) and the previous response was an answer related to other question answer pairs, for the next query we give slight preference to all the children question answer pairs, sibling question answer pairs, and grandchildren question answer pairs in that order. Along with any query, the [Question Answering REST API](/rest/api/cognitiveservices/questionanswering/question-answering/get-answers) expects a `context` object with the property `previousQnAId`, which denotes the last top answer. Based on this previous `QnAID`, all the related `QnAs` are boosted.
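+
+As an illustration, here's a hedged sketch of a follow-up query that passes the previous top answer's ID in the `context` object. The endpoint, project name, question text, and ID values are placeholders, and the exact property names should be verified against the REST API reference linked above:
+
+```bash
+curl -X POST "https://<your-language-resource-endpoint>/language/:query-knowledgebases?projectName=<your-project>&deploymentName=production&api-version=2021-10-01" \
+-H "Content-Type: application/json" \
+-H "Ocp-Apim-Subscription-Key: <your-key-here>" \
+-d '
+{
+    "question": "How do I cancel it?",
+    "top": 3,
+    "context": {
+        "previousQnAId": 42,
+        "previousUserQuery": "How do I set up a direct debit?"
+    }
+}
+'
+```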
+
+### How are accents treated?
+
+Accents are supported for all major European languages. If the query has an incorrect accent, the confidence score might be slightly different, but the service still returns the relevant answer and takes care of minor errors by leveraging fuzzy search.
+
+### How is punctuation in a user query treated?
+
+Punctuation is ignored in a user query before sending it to the ranking stack. Ideally it shouldn’t impact the relevance scores. Punctuation that is ignored is as follows: `,?:;\"'(){}[]-+。./!*؟`
+
+## Chit-Chat
+
+Add chit-chat to your bot, to make your bot more conversational and engaging, with low effort. You can easily add chit-chat data sources from pre-defined personalities when creating your project, and change them at any time. Learn how to [add chit-chat to your KB](../How-To/chit-chat.md).
+
+Chit-chat is supported in [many languages](../how-to/chit-chat.md#language-support).
+
+### Choosing a personality
+
+Chit-chat is supported for several predefined personalities:
+
+|Personality |Question answering dataset file |
+||--|
+|Professional |[qna_chitchat_professional.tsv](https://qnamakerstore.blob.core.windows.net/qnamakerdata/editorial/qna_chitchat_professional.tsv) |
+|Friendly |[qna_chitchat_friendly.tsv](https://qnamakerstore.blob.core.windows.net/qnamakerdata/editorial/qna_chitchat_friendly.tsv) |
+|Witty |[qna_chitchat_witty.tsv](https://qnamakerstore.blob.core.windows.net/qnamakerdata/editorial/qna_chitchat_witty.tsv) |
+|Caring |[qna_chitchat_caring.tsv](https://qnamakerstore.blob.core.windows.net/qnamakerdata/editorial/qna_chitchat_caring.tsv) |
+|Enthusiastic |[qna_chitchat_enthusiastic.tsv](https://qnamakerstore.blob.core.windows.net/qnamakerdata/editorial/qna_chitchat_enthusiastic.tsv) |
+
+The responses range from formal to informal and irreverent. You should select the personality that is closest aligned with the tone you want for your bot. You can view the [datasets](https://github.com/Microsoft/BotBuilder-PersonalityChat/tree/master/CSharp/Datasets), and choose one that serves as a base for your bot, and then customize the responses.
+
+### Edit bot-specific questions
+
+There are some bot-specific questions that are part of the chit-chat data set, and have been filled in with generic answers. Change these answers to best reflect your bot details.
+
+We recommend making the following chit-chat question answer pairs more specific:
+
+* Who are you?
+* What can you do?
+* How old are you?
+* Who created you?
+* Hello
+
+### Adding custom chit-chat with a metadata tag
+
+If you add your own chit-chat question answer pairs, make sure to add metadata so these answers are returned. The metadata name/value pair is `editorial:chitchat`.
+
+## Searching for answers
+
+The question answering REST API uses both the questions and the answers to search for the best answer to a user's query.
+
+### Searching questions only when the answer isn't relevant
+
+Use the [`RankerType=QuestionOnly`](#choosing-ranker-type) if you don't want to search answers.
+
+An example of this is when the project is a catalog of acronyms as questions with their full form as the answer. The value of the answer won't help to search for the appropriate answer.
+
+## Ranking/Scoring
+
+Make sure you're making the best use of the supported ranking features. Doing so will improve the likelihood that a given user query is answered with an appropriate response.
+
+### Choosing a threshold
+
+The default [confidence score](confidence-score.md) that is used as a threshold is 0; however, you can [change the threshold](confidence-score.md#set-threshold) for your project based on your needs. Since every project is different, you should test and choose the threshold that is best suited for your project.
+
+### Choosing Ranker type
+
+By default, question answering searches through questions and answers. If you want to search through questions only, to generate an answer, use the `RankerType=QuestionOnly` in the POST body of the REST API request.
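+
+As a hedged sketch (the endpoint, project name, and question are placeholders), the ranker type is set in the request body like this:
+
+```bash
+curl -X POST "https://<your-language-resource-endpoint>/language/:query-knowledgebases?projectName=<your-project>&deploymentName=production&api-version=2021-10-01" \
+-H "Content-Type: application/json" \
+-H "Ocp-Apim-Subscription-Key: <your-key-here>" \
+-d '
+{
+    "question": "ETA",
+    "top": 3,
+    "rankerType": "QuestionOnly"
+}
+'
+```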
+
+### Add alternate questions
+
+Add alternate questions to improve the likelihood of a match with a user query. Alternate questions are useful when there are multiple ways in which the same question may be asked. This can include changes in the sentence structure and word-style.
+
+|Original query|Alternate queries|Change|
+|--|--|--|
+|Is parking available?|Do you have a car park?|sentence structure|
+|Hi|Yo<br>Hey there|word-style or slang|
+
+### Use metadata tags to filter questions and answers
+
+Metadata adds the ability for a client application to know it shouldn't take all answers, but instead narrow down the results of a user query based on metadata tags. The project answer can differ based on the metadata tag, even if the query is the same. For example, *"where is parking located"* can have a different answer if the location of the restaurant branch is different - that is, the metadata is *Location: Seattle* versus *Location: Redmond*.
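+
+As a hedged sketch of such a filtered query (the filter property names follow the question answering REST API as best understood here and should be verified against the current API reference; the endpoint, project, tag, and question values are placeholders):
+
+```bash
+curl -X POST "https://<your-language-resource-endpoint>/language/:query-knowledgebases?projectName=<your-project>&deploymentName=production&api-version=2021-10-01" \
+-H "Content-Type: application/json" \
+-H "Ocp-Apim-Subscription-Key: <your-key-here>" \
+-d '
+{
+    "question": "where is parking located",
+    "top": 3,
+    "filters": {
+        "metadataFilter": {
+            "metadata": [
+                {
+                    "key": "Location",
+                    "value": "Seattle"
+                }
+            ],
+            "logicalOperation": "AND"
+        }
+    }
+}
+'
+```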
+
+### Use synonyms
+
+While there's some support for synonyms in the English language, use case-insensitive [word alterations](../tutorials/adding-synonyms.md) to add synonyms to keywords that take different forms.
+
+|Original word|Synonyms|
+|--|--|
+|buy|purchase<br>net-banking<br>net banking|
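+
+As a rough sketch, and assuming the synonyms endpoint and payload described in the [word alterations](../tutorials/adding-synonyms.md) tutorial (verify the exact route and body shape there; the endpoint, project name, and key are placeholders), the alterations above could be registered like this:
+
+```bash
+curl -X PUT "https://<your-language-resource-endpoint>/language/query-knowledgebases/projects/<your-project>/synonyms?api-version=2021-10-01" \
+-H "Content-Type: application/json" \
+-H "Ocp-Apim-Subscription-Key: <your-key-here>" \
+-d '
+{
+    "value": [
+        {
+            "alterations": [
+                "buy",
+                "purchase",
+                "net-banking",
+                "net banking"
+            ]
+        }
+    ]
+}
+'
+```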
+++
+### Use distinct words to differentiate questions
+
+The ranking algorithm, which matches a user query with a question in the project, works best if each question addresses a different need. Repetition of the same word set between questions reduces the likelihood that the right answer is chosen for a given user query with those words.
+
+For example, you might have two separate question answer pairs with the following questions:
+
+|Questions|
+|--|
+|where is the parking *location*|
+|where is the ATM *location*|
+
+Since these two questions are phrased with very similar words, this similarity could cause very similar scores for many user queries that are phrased like *"where is the `<x>` location"*. Instead, try to clearly differentiate with queries like *"where is the parking lot"* and *"where is the ATM"*, by avoiding words like "location" that could be in many questions in your project.
+
+## Collaborate
+
+Question answering allows users to collaborate on a project. Users need access to the associated Azure resource group in order to access the projects. Some organizations may want to outsource the project editing and maintenance, and still be able to protect access to their Azure resources. This editor-approver model is done by setting up two identical language resources with identical question answering projects in different subscriptions and selecting one for the edit-testing cycle. Once testing is finished, the project contents are exported and transferred with an [import-export](../how-to/migrate-knowledge-base.md) process to the language resource of the approver that will finally deploy the project and update the endpoint.
+
+## Active learning
+
+[Active learning](../tutorials/active-learning.md) does the best job of suggesting alternative questions when it has a wide range of quality and quantity of user-based queries. It's important to allow client applications' user queries to participate in the active learning feedback loop without censorship. Once questions are suggested in Language Studio, you can review and accept or reject those suggestions.
+
+## Next steps
++
ai-services Confidence Score https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/language-service/question-answering/concepts/confidence-score.md
+
+ Title: Confidence score - question answering
+
+description: When a user query is matched against a knowledge base, question answering returns relevant answers, along with a confidence score.
+++++++ Last updated : 11/02/2021+++
+# Confidence score
+
+When a user query is matched against a project (also known as a knowledge base), question answering returns relevant answers, along with a confidence score. This score indicates the confidence that the answer is the right match for the given user query.
+
+The confidence score is a number between 0 and 100. A score of 100 is likely an exact match, while a score of 0 means that no matching answer was found. The higher the score, the greater the confidence in the answer. For a given query, there could be multiple answers returned. In that case, the answers are returned in order of decreasing confidence score.
+
+The following table indicates the typical confidence associated with a given score.
+
+|Score value|Score meaning|
+|--|--|
+|90 - 100|A near exact match of the user query and a KB question|
+|> 70|High confidence - typically a good answer that completely answers the user's query|
+|50 - 70|Medium confidence - typically a fairly good answer that should answer the main intent of the user query|
+|30 - 50|Low confidence - typically a related answer that partially answers the user's intent|
+|< 30|Very low confidence - typically does not answer the user's query, but has some matching words or phrases|
+|0|No match, so the answer is not returned.|
+
+## Choose a score threshold
+
+The table above shows the range of scores that can occur when querying with question answering. However, since every project is different and has different types of words, intents, and goals, we recommend you test and choose the threshold that works best for you. By default, the threshold is set to `0`, so that all possible answers are returned. The recommended threshold that should work for most projects is **50**.
+
+When choosing your threshold, keep in mind the balance between **Accuracy** and **Coverage**, and adjust your threshold based on your requirements.
+
+- If **Accuracy** (or precision) is more important for your scenario, then increase your threshold. This way, every time you return an answer, it will be a much more CONFIDENT case, and much more likely to be the answer users are looking for. In this case, you might end up leaving more questions unanswered.
+
+- If **Coverage** (or recall) is more important, and you want to answer as many questions as possible (even if there is only a partial relation to the user's question), then LOWER the threshold. This means there could be more cases where the answer does not answer the user's actual query, but gives some other somewhat related answer.
+
+## Set threshold
+
+Set the threshold score as a property of the [REST API JSON body](../quickstart/sdk.md). This means you set it for each call to the REST API.
+
+## Improve confidence scores
+
+To improve the confidence score of a particular response to a user query, you can add the user query to the project as an alternate question on that response. You can also use case-insensitive [synonyms](../tutorials/adding-synonyms.md) to add synonyms to keywords in your project.
+
+## Similar confidence scores
+
+When multiple responses have a similar confidence score, it is likely that the query was too generic and therefore matched with equal likelihood with multiple answers. Try to structure your QnAs better so that every QnA entity has a distinct intent.
+
+## Confidence score differences between test and production
+
+The confidence score of an answer may change negligibly between the test and deployed version of the project even if the content is the same. This is because the content of the test and the deployed project are located in different Azure Cognitive Search indexes.
+
+The test index holds all the question and answer pairs of your project. When querying the test index, the query applies to the entire index then results are restricted to the partition for that specific project. If the test query results are negatively impacting your ability to validate the project, you can:
+* Organize your project using one of the following:
+ * One resource restricted to one project: restrict your single language resource (and the resulting Azure Cognitive Search test index) to a project.
+ * Two resources - one for test, one for production: have two language resources, using one for testing (with its own test and production indexes) and one for production (also having its own test and production indexes)
+* Always use the same parameters when querying both your test and production projects.
+
+When you deploy a project, the question and answer contents of your project moves from the test index to a production index in Azure search.
+
+If you have a project in different regions, each region uses its own Azure Cognitive Search index. Because different indexes are used, the scores will not be exactly the same.
+
+## No match found
+
+When the ranker finds no good match, a confidence score of 0.0 or "None" is returned along with the default response. You can change the [default response](../how-to/change-default-answer.md).
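+
+Programmatically, the default answer is part of the project's `settings`, so a hedged sketch of changing it follows the same PATCH pattern used to create a project with the authoring REST API, assuming the service accepts a partial update containing only the `settings` object:
+
+```bash
+# Sketch: update only the default answer returned when no match is found.
+curl -X PATCH -H "Ocp-Apim-Subscription-Key: {API-KEY}" -H "Content-Type: application/json" -d '{
+    "settings": {
+      "defaultAnswer": "Sorry, I could not find an answer to that question."
+    }
+  }' 'https://{ENDPOINT}.api.cognitive.microsoft.com/language/query-knowledgebases/projects/{PROJECT-NAME}?api-version=2021-10-01'
+```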
+
+## Next steps
+
ai-services Limits https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/language-service/question-answering/concepts/limits.md
+
+ Title: Limits and boundaries - question answering
+description: Question answering has meta-limits for parts of the knowledge base and service. It is important to keep your knowledge base within those limits in order to test and publish.
+++++ Last updated : 11/02/2021++
+# Project limits and boundaries
+
+Question answering limits provided below are a combination of the [Azure Cognitive Search pricing tier limits](../../../../search/search-limits-quotas-capacity.md) and question answering limits. Both sets of limits affect how many projects you can create per resource and how large each project can grow.
+
+## Projects
+
+The maximum number of projects is based on [Azure Cognitive Search tier limits](../../../../search/search-limits-quotas-capacity.md).
+
+Choose the appropriate [Azure search SKU](https://azure.microsoft.com/pricing/details/search/) for your scenario. Typically, you decide the number of projects you need based on the number of different subject domains. One subject domain (for a single language) should be in one project.
+
+With custom question answering, you have a choice to set up your language resource in a single language or multiple languages. You can make this selection when you create your first project in the [Language Studio](https://language.azure.com/).
+
+ > [!IMPORTANT]
+ > You can publish N-1 projects of a single language or N/2 projects of different languages in a particular tier, where N is the maximum indexes allowed in the tier. Also check the maximum size and the number of documents allowed per tier.
+
+For example, if your tier has 15 allowed indexes, you can publish 14 projects of the same language (one index per published project). The 15th index is used for all the projects for authoring and testing. If you choose to have projects in different languages, then you can only publish seven projects.
++
+## Extraction limits
+
+### File naming constraints
+
+File names may not include the following characters:
+
+|Do not use character|
+|--|
+|Single quote `'`|
+|Double quote `"`|
+
+### Maximum file size
+
+|Format|Max file size (MB)|
+|--|--|
+|`.docx`|10|
+|`.pdf`|25|
+|`.tsv`|10|
+|`.txt`|10|
+|`.xlsx`|3|
+
+### Maximum number of files
+
+> [!NOTE]
+> Question answering currently has no limits on the number of sources that can be added. Throughput is currently capped at 10 text records per second for both management APIs and prediction APIs.
+
+### Maximum number of deep-links from URL
+
+The maximum number of deep-links that can be crawled for extraction of question answer pairs from a URL page is **20**.
+
+## Metadata limits
+
+Metadata is presented as a text-based `key:value` pair, such as `product:windows 10`. It is stored and compared in lowercase. The maximum number of metadata fields is based on your **[Azure Cognitive Search tier limits](../../../../search/search-limits-quotas-capacity.md)**.
+
+If you choose to have projects in multiple languages in a single language resource, there is a dedicated test index per project, so the limit is applied per project in the language service.
+
+|**Azure Cognitive Search tier** | **Free** | **Basic** |**S1** | **S2**| **S3** |**S3 HD**|
+|--|--|--|--|--|--|--|
+|Maximum metadata fields per language service (per project)|1,000|100*|1,000|1,000|1,000|1,000|
+
+If you don't choose the option to have projects with multiple different languages, then the limits are applied across all projects in the language service.
+
+|**Azure Cognitive Search tier** | **Free** | **Basic** |**S1** | **S2**| **S3** |**S3 HD**|
+|--|--|--|--|--|--|--|
+|Maximum metadata fields per Language service (across all projects)|1,000|100*|1,000|1,000|1,000|1,000|
+
+### By name and value
+
+The length and acceptable characters for metadata name and value are listed in the following table.
+
+|Item|Allowed chars|Regex pattern match|Max chars|
+|--|--|--|--|
+|Name (key)|Allows<br>Alphanumeric (letters and digits)<br>`_` (underscore)<br> Must not contain spaces.|`^[a-zA-Z0-9_]+$`|100|
+|Value|Allows everything except<br>`:` (colon)<br>`\|` (vertical pipe)<br>Only one value allowed.|`^[^:\|]+$`|500|
+|||||
+
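+As an illustration only, the constraints above can be checked locally before calling the service. This Bash sketch applies the documented regular expressions and length limits; the variable names and sample values are hypothetical:
+
+```bash
+# Validate a metadata key and value against the documented constraints.
+key="product_line"
+value="windows 10"
+
+key_pattern='^[a-zA-Z0-9_]+$'     # letters, digits, underscore; no spaces
+value_pattern='^[^:|]+$'          # anything except colon and vertical pipe
+
+[[ "$key" =~ $key_pattern ]] && (( ${#key} <= 100 )) && echo "key OK" || echo "key invalid"
+[[ "$value" =~ $value_pattern ]] && (( ${#value} <= 500 )) && echo "value OK" || echo "value invalid"
+```
+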
+## Project content limits
+Overall limits on the content in the project:
+* Length of answer text: 25,000 characters
+* Length of question text: 1,000 characters
+* Length of metadata key text: 100 characters
+* Length of metadata value text: 500 characters
+* Supported characters for metadata name: Alphabets, digits, and `_`
+* Supported characters for metadata value: All except `:` and `|`
+* Length of file name: 200
+* Supported file formats: ".tsv", ".pdf", ".txt", ".docx", ".xlsx".
+* Maximum number of alternate questions: 300
+* Maximum number of question-answer pairs: Depends on the **[Azure Cognitive Search tier](../../../../search/search-limits-quotas-capacity.md#document-limits)** chosen. A question and answer pair maps to a document on Azure Cognitive Search index.
+* URL/HTML page: 1 million characters
+
+## Create project call limits
+
+These represent the limits for each create project action; that is, selecting *Create new project* or calling the REST API to create a project.
+
+* Recommended maximum number of alternate questions per answer: 300
+* Maximum number of URLs: 10
+* Maximum number of files: 10
+* Maximum number of QnAs permitted per call: 1000
+
+## Update project call limits
+
+These represent the limits for each update action; that is, selecting *Save* or calling the REST API with an update request.
+* Length of each source name: 300
+* Recommended maximum number of alternate questions added or deleted: 300
+* Maximum number of metadata fields added or deleted: 10
+* Maximum number of URLs that can be refreshed: 5
+* Maximum number of QnAs permitted per call: 1000
+
+## Add unstructured file limits
+
+> [!NOTE]
+> * If you need to use larger files than the limit allows, you can break the file into smaller files before sending them to the API.
+
+These represent the limits when unstructured files are used to *Create new project* or call the REST API to create a project:
+* Length of file: We will extract the first 32,000 characters
+* Maximum three responses per file.
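+
+For example, a large text file can be split locally before upload. This is an illustrative sketch using GNU `split`; byte counts approximate character counts for plain ASCII text:
+
+```bash
+# Split large.txt into pieces of at most ~30,000 bytes without breaking lines,
+# keeping each piece safely under the 32,000-character extraction limit.
+split -C 30000 --additional-suffix=.txt large.txt part_
+ls part_*.txt
+```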
+
+## Prebuilt question answering limits
+
+> [!NOTE]
+> * If you need to use larger documents than the limit allows, you can break the text into smaller chunks of text before sending them to the API.
+> * A document is a single string of text characters.
+
+These represent the limits when the REST API is used to answer a question based on text you provide, without having to create a project:
+* Number of documents: 5
+* Maximum size of a single document: 5,120 characters
+* Maximum three responses per document.
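+
+A hedged sketch of a prebuilt (project-less) request is shown below, assuming the 2021-10-01 `:query-text` endpoint and property names; consult the REST API reference for the exact contract.
+
+```bash
+# Illustrative sketch: ask a question against raw text records, no project required.
+curl -X POST -H "Ocp-Apim-Subscription-Key: {API-KEY}" -H "Content-Type: application/json" -d '{
+    "question": "How long does it take to charge the battery?",
+    "records": [
+      { "id": "doc1", "text": "Power and charging. It takes two to four hours to charge the battery fully from an empty state." }
+    ],
+    "language": "en"
+  }' 'https://{ENDPOINT}.api.cognitive.microsoft.com/language/:query-text?api-version=2021-10-01'
+```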
ai-services Plan https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/language-service/question-answering/concepts/plan.md
+
+ Title: Plan your app - question answering
+description: Learn how to plan your question answering app. Understand how question answering works and interacts with other Azure services and some project concepts.
+++++ Last updated : 06/03/2022++
+# Plan your question answering app
+
+To plan your question answering app, you need to understand how question answering works and interacts with other Azure services. You should also have a solid grasp of project concepts.
+
+## Azure resources
+
+Each [Azure resource](azure-resources.md#resource-purposes) created with question answering has a specific purpose, along with its own limits and [pricing tier](azure-resources.md#pricing-tier-considerations). It's important to understand the function of these resources so that you can factor that knowledge into your planning process.
+
+| Resource | Purpose |
+|--|--|
+| [Language resource](azure-resources.md) | Authoring, query prediction endpoint, and telemetry|
+| [Cognitive Search](azure-resources.md#azure-cognitive-search-resource) resource | Data storage and search |
+
+### Resource planning
+
+Question answering throughput is currently capped at 10 text records per second for both management APIs and prediction APIs. To target 10 text records per second for your service, we recommend the S1 (one instance) SKU of Azure Cognitive Search.
+
+### Language resource
+
+A single language resource with the custom question answering feature enabled can host more than one project. The number of projects is determined by the Cognitive Search pricing tier's quantity of supported indexes. Learn more about the [relationship of indexes to projects](azure-resources.md#index-usage).
+
+### Project size and throughput
+
+When you build a real app, plan sufficient resources for the size of your project and for your expected query prediction requests.
+
+A project size is controlled by the:
+* [Cognitive Search resource](../../../../search/search-limits-quotas-capacity.md) pricing tier limits
+* [Question answering limits](./limits.md)
+
+The project query prediction request is controlled by the web app plan and web app. Refer to [recommended settings](azure-resources.md#recommended-settings) to plan your pricing tier.
+
+### Understand the impact of resource selection
+
+Proper resource selection means your project answers query predictions successfully.
+
+If your project isn't functioning properly, it's typically an issue of improper resource management.
+
+Improper resource selection requires investigation to determine which [resource needs to change](azure-resources.md#pricing-tier-considerations).
+
+## Project
+
+A project is directly tied to its language resource. It holds the question and answer (QnA) pairs that are used to answer query prediction requests.
+
+### Language considerations
+
+You can now have projects in different languages within the same language resource where the custom question answering feature is enabled. When you create the first project, you can choose whether you want to use the resource for projects in a single language that will apply to all subsequent projects or make a language selection each time a project is created.
+
+### Ingest data sources
+
+Question answering also supports unstructured content. You can upload a file that has unstructured content.
+
+Currently we do not support URLs for unstructured content.
+
+The ingestion process converts supported content types to markdown. All further editing of the *answer* is done with markdown. After you create a project, you can edit QnA pairs in Language Studio with rich text authoring.
+
+### Data format considerations
+
+Because the final format of a QnA pair is markdown, it's important to understand markdown support.
+
+### Bot personality
+
+Add a bot personality to your project with [chit-chat](../how-to/chit-chat.md). This personality comes through with answers provided in a certain conversational tone, such as *professional* or *friendly*. This chit-chat is provided as a conversational set that you have full control to add to, edit, and remove.
+
+A bot personality is recommended if your bot connects to your project. You can choose to use chit-chat in your project even if you also connect to other services, but you should review how the bot service interacts to know if that is the correct architectural design for your use.
+
+### Conversation flow with a project
+
+Conversation flow usually begins with a salutation from a user, such as `Hi` or `Hello`. Your project can answer with a general answer, such as `Hi, how can I help you`, and it can also provide a selection of follow-up prompts to continue the conversation.
+
+You should design your conversational flow with a loop in mind so that a user knows how to use your bot and isn't abandoned by the bot in the conversation. [Follow-up prompts](../tutorials/guided-conversations.md) provide linking between QnA pairs, which allow for the conversational flow.
+
+### Authoring with collaborators
+
+Collaborators may be other developers who share the full development stack of the project application or may be limited to just authoring the project.
+
+Project authoring supports several role-based access permissions that you apply in the Azure portal to limit the scope of a collaborator's abilities.
+
+## Integration with client applications
+
+Integration with client applications is accomplished by sending a query to the prediction runtime endpoint. A query is sent to your specific project with an SDK or REST-based request to your question answering web app endpoint.
+
+To authenticate a client request correctly, the client application must send the correct credentials and project ID. If you're using an Azure AI Bot Service, configure these settings as part of the bot configuration in the Azure portal.
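+
+As an illustration of what the client sends, a minimal request includes the resource key as a header and the project and deployment names in the URL. This sketch assumes the question answering runtime endpoint and uses placeholder values throughout:
+
+```bash
+# Sketch: the credentials (key) and project identification a client must supply.
+curl -X POST -H "Ocp-Apim-Subscription-Key: {API-KEY}" -H "Content-Type: application/json" -d '{
+    "question": "Hi"
+  }' 'https://{ENDPOINT}.api.cognitive.microsoft.com/language/:query-knowledgebases?projectName={PROJECT-NAME}&deploymentName=production&api-version=2021-10-01'
+```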
+
+### Conversation flow in a client application
+
+Conversation flow in a client application, such as an Azure bot, may require functionality before and after interacting with the project.
+
+Does your client application support conversation flow, either by providing alternate means to handle follow-up prompts or by including chit-chat? If so, design these early and make sure the client application query is handled correctly by another service or when sent to your project.
+
+### Active learning from a client application
+
+Question answering uses _active learning_ to improve your project by suggesting alternate questions to an answer. The client application is responsible for a part of this [active learning](../tutorials/active-learning.md). Through conversational prompts, the client application can determine that the project returned an answer that's not useful to the user, and it can determine a better answer. The client application needs to send that information back to the project to improve the prediction quality.
+
+### Providing a default answer
+
+If your project doesn't find an answer, it returns the _default answer_. This answer is configurable on the **Settings** page.
+
+This default answer is different from the Azure bot default answer. You configure the default answer for your Azure bot in the Azure portal as part of configuration settings. It's returned when the score threshold isn't met.
+
+## Prediction
+
+The prediction is the response from your project, and it includes more information than just the answer. To get a query prediction response, use the question answering API.
+
+### Prediction score fluctuations
+
+A score can change based on several factors:
+
+* Number of answers you requested in response with the `top` property
+* Variety of available alternate questions
+* Filtering for metadata
+* Query sent to `test` or `production` project.
+
+### Analytics with Azure Monitor
+
+In question answering, telemetry is offered through the [Azure Monitor service](../../../../azure-monitor/index.yml). Use our [top queries](../how-to/analytics.md) to understand your metrics.
+
+## Development lifecycle
+
+The development lifecycle of a project is ongoing: editing, testing, and publishing your project.
+
+### Project development of question answer pairs
+
+Your QnA pairs should be designed and developed based on your client application usage.
+
+Each pair can contain:
+* Metadata - filterable when querying to allow you to tag your QnA pairs with additional information about the source, content, format, and purpose of your data.
+* Follow-up prompts - helps to determine a path through your project so the user arrives at the correct answer.
+* Alternate questions - important to allow search to match to your answer from different forms of the question. [Active learning suggestions](../tutorials/active-learning.md) turn into alternate questions.
+
+### DevOps development
+
+Developing a project to insert into a DevOps pipeline requires that the project is isolated during batch testing.
+
+A project shares the Cognitive Search index with all other projects on the language resource. While the project is isolated by partition, sharing the index can cause a difference in the score when compared to the published project.
+
+To have the _same score_ on the `test` and `production` projects, isolate a language resource to a single project. In this architecture, the resource only needs to live as long as the isolated batch test.
+
+## Next steps
+
+* [Azure resources](./azure-resources.md)
ai-services Precise Answering https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/language-service/question-answering/concepts/precise-answering.md
+
+ Title: Precise answering using answer span detection - question answering
+description: Understand Precise answering feature available in question answering.
+++++ Last updated : 11/02/2021+++
+# Precise answering
+
+The precise answering feature allows you to get the precise short answer from the best candidate answer passage present in the project for any user query. This feature uses a deep learning model at runtime, which understands the intent of the user query and detects the precise short answer from the answer passage, if a short answer is present as a fact in the answer passage.
+
+This feature is beneficial for both content developers and end users. Content developers don't need to manually curate specific question answer pairs for every fact present in the project, and end users don't need to look through the whole answer passage returned from the service to find the actual fact that answers their query.
+
+## Precise answering via the portal
+
+In the [Language Studio portal](https://aka.ms/languageStudio), when you open the test pane, you see an option to **Include short answer response** at the top, above **Show advanced options**.
+
+When you enter a query in the test pane, you see a short answer along with the answer passage, if a short answer is present in the answer passage.
+
+>[!div class="mx-imgBorder"]
+>[![Screenshot of test pane with short answer checked and a question containing a short answer response.](../media/precise-answering/short-answer.png)](../media/precise-answering/short-answer.png#lightbox)
+
+You can clear the **Include short answer response** option if you want to see only the **Long answer** passage in the test pane.
+
+The service also returns the confidence score of the precise answer as an **Answer-span confidence score**, which you can check by selecting the **Inspect** option and then selecting **Additional information**.
+
+>[!div class="mx-imgBorder"]
+>[![Screenshot of inspect pane with answer-span confidence score displayed.](../media/precise-answering/answer-confidence-score.png)](../media/precise-answering/answer-confidence-score.png#lightbox)
+
+## Deploying a bot
+
+When you publish a bot, you get the precise answering experience enabled by default in your application, where you see the short answer along with the answer passage. Refer to the REST API reference to see how to use the precise answer (called AnswerSpan) in the response. Users have the flexibility to choose other experiences by updating the template through the Bot app service.
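+
+As a hedged sketch, a REST request can opt into the short answer by including an answer-span option in the body; the property names below (`answerSpanRequest`, `enable`) are assumptions based on the 2021-10-01 API and should be verified against the REST reference. The response then carries an `answerSpan` object alongside the full answer.
+
+```bash
+# Sketch: request both the long answer and the precise (short) answer.
+curl -X POST -H "Ocp-Apim-Subscription-Key: {API-KEY}" -H "Content-Type: application/json" -d '{
+    "question": "How many hours of battery life does the device have?",
+    "top": 1,
+    "answerSpanRequest": {
+      "enable": true
+    }
+  }' 'https://{ENDPOINT}.api.cognitive.microsoft.com/language/:query-knowledgebases?projectName={PROJECT-NAME}&deploymentName=production&api-version=2021-10-01'
+```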
ai-services Project Development Lifecycle https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/language-service/question-answering/concepts/project-development-lifecycle.md
+
+ Title: Project lifecycle - question answering
+description: Question answering learns best in an iterative cycle of model changes, utterance examples, deployment, and gathering data from endpoint queries.
+++++ Last updated : 06/03/2022++
+# Question answering project lifecycle
+
+Question answering learns best in an iterative cycle of model changes, utterance examples, deployment, and gathering data from endpoint queries.
+
+## Creating a project
+
+Question answering projects provide a best-match answer to a user query based on the content of the project. Creating a project is a one-time action to set up a content repository of questions, answers, and associated metadata. A project can be created by crawling pre-existing content such as the following sources:
+
+- FAQ pages
+- Product manuals
+- Q-A pairs
+
+Learn how to [create a project](../how-to/create-test-deploy.md).
+
+## Testing and updating your project
+
+The project is ready for testing once it is populated with content, either editorially or through automatic extraction. Interactive testing can be done in Language Studio, in the custom question answering menu, through the **Test** panel. You enter common user queries and then verify that the correct responses are returned with a sufficient confidence score.
+
+* **To fix low confidence scores**: add alternate questions.
+* **When a query incorrectly returns the [default response](../How-to/change-default-answer.md)**: add new answers to the correct question.
+
+This tight loop of test-update continues until you are satisfied with the results.
+
+## Deploy your project
+
+Once you are done testing the project, you can deploy it to production. Deployment pushes the latest version of the tested project to a dedicated Azure Cognitive Search index representing the **published** project. It also creates an endpoint that can be called in your application or chat bot.
+
+Due to the deployment action, any further changes made to the test version of the project leave the published version unaffected. The published version can be live in a production application.
+
+Each of these versions of the project can be targeted for testing separately.
+
+## Monitor usage
+
+To be able to log the chat logs of your service and get additional analytics, you would need to enable [Azure Monitor Diagnostic Logs](../how-to/analytics.md) after you create your language resource.
+
+Based on what you learn from your analytics, make appropriate updates to your project.
+
+## Version control for data in your project
+
+Version control for data is provided through the import/export features on the project page in the question answering section of Language Studio.
+
+You can back up a project by exporting it in either `.tsv` or `.xls` format. Once exported, include this file as part of your regular source control check-ins.
+
+When you need to go back to a specific version, you need to import that file from your local system. An exported file **must** only be used via import on the project page. It can't be used as a file or URL document data source. Importing replaces the questions and answers currently in the project with the contents of the imported file.
+
+## Test and production project
+
+A project is the repository of questions and answer sets created, maintained, and used through question answering. Each language resource can hold multiple projects.
+
+A project has two states: *test* and *published*.
+
+### Test project
+
+The *test project* is the version currently edited and saved. The test version has been tested for accuracy and completeness of responses. Changes made to the test project don't affect the end user of your application or chat bot. The test project is known as `test` in the HTTP request. The `test` project is available in Language Studio's interactive **Test** pane.
+
+### Production project
+
+The *published project* is the version that's used in your chat bot or application. Publishing a project puts the content of its test version into its published version. The published project is the version that the application uses through the endpoint. Make sure that the content is correct and well tested. The published project is known as `prod` in the HTTP request.
+
+## Next steps
+
ai-services Analytics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/language-service/question-answering/how-to/analytics.md
+
+ Title: Analytics on projects - custom question answering
+
+description: Custom question answering uses Azure diagnostic logging to store the telemetry data and chat logs
++++
+displayName: chat history, history, chat logs, logs
+++ Last updated : 11/02/2021+++
+# Get analytics for your project
+
+Custom question answering uses Azure diagnostic logging to store the telemetry data and chat logs. Follow the steps below to run sample queries to get analytics on the usage of your custom question answering project.
+
+1. [Enable diagnostics logging](../../../diagnostic-logging.md) for your language resource with custom question answering enabled.
+
+2. In the previous step, select **Trace** in addition to **Audit, RequestResponse and AllMetrics** for logging.
+
+ ![Enable trace logging in custom question answering](../media/analytics/qnamaker-v2-enable-trace-logging.png)
+
+## Kusto queries
+
+### Chat log
+
+```kusto
+// All QnA Traffic
+AzureDiagnostics
+| where ResourceProvider == "MICROSOFT.COGNITIVESERVICES"
+| where OperationName=="CustomQuestionAnswering QueryKnowledgebases" // This OperationName is valid for custom question answering enabled resources
+| extend answer_ = tostring(parse_json(properties_s).answer)
+| extend question_ = tostring(parse_json(properties_s).question)
+| extend score_ = tostring(parse_json(properties_s).score)
+| extend kbId_ = tostring(parse_json(properties_s).kbId)
+| project question_, answer_, score_, kbId_
+```
+
+### Traffic count per project and user in a time period
+
+```kusto
+// Traffic count per KB and user in a time period
+let startDate = todatetime('2019-01-01');
+let endDate = todatetime('2020-12-31');
+AzureDiagnostics
+| where ResourceProvider == "MICROSOFT.COGNITIVESERVICES"
+| where OperationName=="CustomQuestionAnswering QueryKnowledgebases" // This OperationName is valid for custom question answering enabled resources
+| where TimeGenerated <= endDate and TimeGenerated >=startDate
+| extend kbId_ = tostring(parse_json(properties_s).kbId)
+| extend userId_ = tostring(parse_json(properties_s).userId)
+| summarize ChatCount=count() by bin(TimeGenerated, 1d), kbId_, userId_
+```
+
+### Latency of GenerateAnswer API
+
+```kusto
+// Latency of GenerateAnswer
+AzureDiagnostics
+| where ResourceProvider == "MICROSOFT.COGNITIVESERVICES"
+| where OperationName=="Generate Answer"
+| project TimeGenerated, DurationMs
+| render timechart
+```
+
+### Average latency of all operations
+
+```kusto
+// Average Latency of all operations
+AzureDiagnostics
+| where ResourceProvider == "MICROSOFT.COGNITIVESERVICES"
+| project DurationMs, OperationName
+| summarize count(), avg(DurationMs) by OperationName
+| render barchart
+```
+
+### Unanswered questions
+
+```kusto
+// All unanswered questions
+AzureDiagnostics
+| where ResourceProvider == "MICROSOFT.COGNITIVESERVICES"
+| where OperationName=="CustomQuestionAnswering QueryKnowledgebases" // This OperationName is valid for custom question answering enabled resources
+| extend answer_ = tostring(parse_json(properties_s).answer)
+| extend question_ = tostring(parse_json(properties_s).question)
+| extend score_ = tostring(parse_json(properties_s).score)
+| extend kbId_ = tostring(parse_json(properties_s).kbId)
+| where score_ == 0
+| project question_, answer_, score_, kbId_
+```
+
+### Prebuilt question answering inference calls
+
+```kusto
+// Show logs from AzureDiagnostics table
+// Lists the latest logs in AzureDiagnostics table, sorted by time (latest first).
+AzureDiagnostics
+| where OperationName == "CustomQuestionAnswering QueryText"
+| extend answer_ = tostring(parse_json(properties_s).answer)
+| extend question_ = tostring(parse_json(properties_s).question)
+| extend score_ = tostring(parse_json(properties_s).score)
+| extend requestid = tostring(parse_json(properties_s)["apim-request-id"])
+| project TimeGenerated, requestid, question_, answer_, score_
+```
+
+## Next steps
++
ai-services Authoring https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/language-service/question-answering/how-to/authoring.md
+
+ Title: Authoring API - question answering
+
+description: Use the question answering Authoring API to automate common tasks like adding new question answer pairs, and creating, and publishing projects.
+++++ Last updated : 11/23/2021++
+# Authoring API
+
+The question answering Authoring API is used to automate common tasks like adding new question answer pairs, as well as creating, publishing, and maintaining projects.
+
+> [!NOTE]
+> Authoring functionality is available via the REST API and [Authoring SDK (preview)](/dotnet/api/overview/azure/ai.language.questionanswering-readme). This article provides examples of using the REST API with cURL. For full documentation of all parameters and functionality available consult the [REST API reference content](/rest/api/cognitiveservices/questionanswering/question-answering-projects).
+
+## Prerequisites
+
+* The current version of [cURL](https://curl.haxx.se/). Several command-line switches are used in this article, which are noted in the [cURL documentation](https://curl.haxx.se/docs/manpage.html).
+* The commands in this article are designed to be executed in a Bash shell. These commands will not always work in a Windows command prompt or in PowerShell without modification. If you do not have a Bash shell installed locally, you can use the [Azure Cloud Shell's bash environment](../../../../cloud-shell/overview.md).
+
+## Create a project
+
+To create a project programmatically:
+
+|Variable name | Value |
+|--|-|
+| `ENDPOINT` | This value can be found in the **Keys & Endpoint** section when examining your resource from the Azure portal. Alternatively, you can find the value in **Language Studio** > **question answering** > **Deploy project** > **Get prediction URL**. An example endpoint is: `https://southcentralus.api.cognitive.microsoft.com/`. If the prior example was your endpoint in the code sample below, you would only need to add the region specific portion of `southcentral` as the rest of the endpoint path is already present.|
+| `API-KEY` | This value can be found in the **Keys & Endpoint** section when examining your resource from the Azure portal. You can use either Key1 or Key2. Always having two valid keys allows for secure key rotation with zero downtime. Alternatively, you can find the value in **Language Studio** > **question answering** > **Deploy project** > **Get prediction URL**. The key value is part of the sample request.|
+| `NEW-PROJECT-NAME` | The name for your new question answering project.|
+
+You can also adjust additional values like the project language, the default answer given when no answer can be found that meets or exceeds the confidence threshold, and whether this language resource will support multiple languages.
+
+### Example query
+
+```bash
+curl -X PATCH -H "Ocp-Apim-Subscription-Key: {API-KEY}" -H "Content-Type: application/json" -d '{
+ "description": "proj1 is a test project.",
+ "language": "en",
+ "settings": {
+ "defaultAnswer": "No good match found for your question in the project."
+ },
+ "multilingualResource": true
+  }' 'https://{ENDPOINT}.api.cognitive.microsoft.com/language/query-knowledgebases/projects/{NEW-PROJECT-NAME}?api-version=2021-10-01'
+```
+
+### Example response
+
+```json
+{
+ "200": {
+ "headers": {},
+ "body": {
+ "projectName": "proj1",
+ "description": "proj1 is a test project.",
+ "language": "en",
+ "settings": {
+ "defaultAnswer": "No good match found for your question in the project."
+ },
+ "multilingualResource": true,
+ "createdDateTime": "2021-05-01T15:13:22Z",
+ "lastModifiedDateTime": "2021-05-01T15:13:22Z",
+ "lastDeployedDateTime": "2021-05-01T15:13:22Z"
+ }
+ }
+}
+```
+
+## Delete Project
+
+To delete a project programmatically:
+
+|Variable name | Value |
+|--|-|
+| `ENDPOINT` | This value can be found in the **Keys & Endpoint** section when examining your resource from the Azure portal. Alternatively you can find the value in **Language Studio** > **question answering** > **Deploy project** > **Get prediction URL**. An example endpoint is: `https://southcentralus.api.cognitive.microsoft.com/`. If the prior example was your endpoint in the code sample below, you would only need to add the region specific portion of `southcentral` as the rest of the endpoint path is already present.|
+| `API-KEY` | This value can be found in the **Keys & Endpoint** section when examining your resource from the Azure portal. You can use either Key1 or Key2. Always having two valid keys allows for secure key rotation with zero downtime. Alternatively you can find the value in **Language Studio** > **question answering** > **Deploy project** > **Get prediction URL**. The key value is part of the sample request.|
+| `PROJECT-NAME` | The name of project you would like to delete.|
+
+### Example query
+
+```bash
+curl -X DELETE -H "Ocp-Apim-Subscription-Key: {API-KEY}" -H "Content-Type: application/json" -i 'https://{ENDPOINT}.api.cognitive.microsoft.com/language/query-knowledgebases/projects/{PROJECT-NAME}?api-version=2021-10-01'
+```
+
+A successful call to delete a project results in an `Operation-Location` header being returned, which can be used to check the status of the delete project job. In most of our examples, we haven't needed to look at the response headers and thus haven't been displaying them. To retrieve the response headers, our curl command uses `-i`. Without this parameter prior to the endpoint address, the response to this command would appear empty as if no response occurred.
+
+### Example response
+
+```bash
+HTTP/2 202
+content-length: 0
+operation-location: https://southcentralus.api.cognitive.microsoft.com:443/language/query-knowledgebases/projects/sample-proj1/deletion-jobs/{JOB-ID-GUID}
+x-envoy-upstream-service-time: 324
+apim-request-id:
+strict-transport-security: max-age=31536000; includeSubDomains; preload
+x-content-type-options: nosniff
+date: Tue, 23 Nov 2021 20:56:18 GMT
+```
+
+If the project was already deleted or could not be found, you would receive a message like:
+
+```json
+{
+ "error": {
+ "code": "ProjectNotFound",
+ "message": "The specified project was not found.",
+ "details": [
+ {
+ "code": "ProjectNotFound",
+ "message": "{GUID}"
+ }
+ ]
+ }
+}
+```
+
+### Get project deletion status
+
+To check on the status of your delete project request:
+
+|Variable name | Value |
+|--|-|
+| `ENDPOINT` | This value can be found in the **Keys & Endpoint** section when examining your resource from the Azure portal. Alternatively, you can find the value in **Language Studio** > **question answering** > **Deploy project** > **Get prediction URL**. An example endpoint is: `https://southcentralus.api.cognitive.microsoft.com/`. If this was your endpoint in the code sample below, you would only need to add the region specific portion of `southcentral` as the rest of the endpoint path is already present.|
+| `API-KEY` | This value can be found in the **Keys & Endpoint** section when examining your resource from the Azure portal. You can use either Key1 or Key2. Always having two valid keys allows for secure key rotation with zero downtime. Alternatively you can find the value in **Language Studio** > **question answering** > **Deploy project** > **Get prediction URL**. The key value is part of the sample request.|
+| `PROJECT-NAME` | The name of project you would like to check on the deployment status for.|
+| `JOB-ID` | When you delete a project programmatically, a `JOB-ID` is generated as part of the `operation-location` response header to the deletion request. The `JOB-ID` is the guid at the end of the `operation-location`. For example: `operation-location: https://southcentralus.api.cognitive.microsoft.com:443/language/query-knowledgebases/projects/sample-proj1/deletion-jobs/{THIS GUID IS YOUR JOB ID}` |
+
+### Example query
+
+```bash
+curl -X GET -H "Ocp-Apim-Subscription-Key: {API-KEY}" -H "Content-Type: application/json" 'https://{ENDPOINT}.api.cognitive.microsoft.com/language/query-knowledgebases/projects/deletion-jobs/{JOB-ID}?api-version=2021-10-01'
+```
+
+### Example response
+
+```json
+{
+ "createdDateTime": "2021-11-23T20:56:18+00:00",
+ "expirationDateTime": "2021-11-24T02:56:18+00:00",
+ "jobId": "GUID",
+ "lastUpdatedDateTime": "2021-11-23T20:56:18+00:00",
+ "status": "succeeded"
+}
+```
+
+## Get project settings
+
+To retrieve information about a given project, update the following values in the query below:
+
+|Variable name | Value |
+|--|-|
+| `ENDPOINT` | This value can be found in the **Keys & Endpoint** section when examining your resource from the Azure portal. Alternatively you can find the value in **Language Studio** > **question answering** > **Deploy project** > **Get prediction URL**. An example endpoint is: `https://southcentralus.api.cognitive.microsoft.com/`. If this was your endpoint in the code sample below, you would only need to add the region specific portion of `southcentral` as the rest of the endpoint path is already present.|
+| `API-KEY` | This value can be found in the **Keys & Endpoint** section when examining your resource from the Azure portal. You can use either Key1 or Key2. Always having two valid keys allows for secure key rotation with zero downtime. Alternatively you can find the value in **Language Studio** > **question answering** > **Deploy project** > **Get prediction URL**. The key value is part of the sample request.|
+| `PROJECT-NAME` | The name of project you would like to retrieve information about.|
+
+### Example query
+
+```bash
+
+curl -X GET -H "Ocp-Apim-Subscription-Key: {API-KEY}" -H "Content-Type: application/json" 'https://{ENDPOINT}.api.cognitive.microsoft.com/language/query-knowledgebases/projects/{PROJECT-NAME}?api-version=2021-10-01'
+```
+
+### Example response
+
+```json
+ {
+ "200": {
+ "headers": {},
+ "body": {
+ "projectName": "proj1",
+ "description": "proj1 is a test project.",
+ "language": "en",
+ "settings": {
+ "defaultAnswer": "No good match found for your question in the project."
+ },
+ "createdDateTime": "2021-05-01T15:13:22Z",
+ "lastModifiedDateTime": "2021-05-01T15:13:22Z",
+ "lastDeployedDateTime": "2021-05-01T15:13:22Z"
+ }
+ }
+ }
+```
+
+## Get question answer pairs
+
+To retrieve question answer pairs and related information for a given project, update the following values in the query below:
+
+|Variable name | Value |
+|--|-|
+| `ENDPOINT` | This value can be found in the **Keys & Endpoint** section when examining your resource from the Azure portal. Alternatively you can find the value in **Language Studio** > **question answering** > **Deploy project** > **Get prediction URL**. An example endpoint is: `https://southcentralus.api.cognitive.microsoft.com/`. If this was your endpoint in the code sample below, you would only need to add the region specific portion of `southcentral` as the rest of the endpoint path is already present.|
+| `API-KEY` | This value can be found in the **Keys & Endpoint** section when examining your resource from the Azure portal. You can use either Key1 or Key2. Always having two valid keys allows for secure key rotation with zero downtime. Alternatively you can find the value in **Language Studio** > **question answering** > **Deploy project** > **Get prediction URL**. The key value is part of the sample request.|
+| `PROJECT-NAME` | The name of project you would like to retrieve all the question answer pairs for.|
+
+### Example query
+
+```bash
+curl -X GET -H "Ocp-Apim-Subscription-Key: {API-KEY}" -H "Content-Type: application/json" 'https://{ENDPOINT}.api.cognitive.microsoft.com/language/query-knowledgebases/projects/{PROJECT-NAME}/qnas?api-version=2021-10-01'
+
+```
+
+### Example response
+
+```json
+{
+ "200": {
+ "headers": {},
+ "body": {
+ "value": [
+ {
+ "id": 1,
+ "answer": "ans1",
+ "source": "source1",
+ "questions": [
+ "question 1.1",
+ "question 1.2"
+ ],
+ "metadata": {
+ "k1": "v1",
+ "k2": "v2"
+ },
+ "dialog": {
+ "isContextOnly": false,
+ "prompts": [
+ {
+ "displayOrder": 1,
+ "qnaId": 11,
+ "displayText": "prompt 1.1"
+ },
+ {
+ "displayOrder": 2,
+ "qnaId": 21,
+ "displayText": "prompt 1.2"
+ }
+ ]
+ },
+ "lastUpdatedDateTime": "2021-05-01T17:21:14Z"
+ },
+ {
+ "id": 2,
+ "answer": "ans2",
+ "source": "source2",
+ "questions": [
+ "question 2.1",
+ "question 2.2"
+ ],
+ "lastUpdatedDateTime": "2021-05-01T17:21:14Z"
+ }
+ ]
+ }
+ }
+ }
+```
+
+## Get sources
+
+To retrieve the sources and related information for a given project, update the following values in the query below:
+
+|Variable name | Value |
+|--|-|
+| `ENDPOINT` | This value can be found in the **Keys & Endpoint** section when examining your resource from the Azure portal. Alternatively you can find the value in **Language Studio** > **question answering** > **Deploy project** > **Get prediction URL**. An example endpoint is: `https://southcentralus.api.cognitive.microsoft.com/`. If this was your endpoint in the code sample below, you would only need to add the region specific portion of `southcentral` as the rest of the endpoint path is already present.|
+| `API-KEY` | This value can be found in the **Keys & Endpoint** section when examining your resource from the Azure portal. You can use either Key1 or Key2. Always having two valid keys allows for secure key rotation with zero downtime. Alternatively you can find the value in **Language Studio** > **question answering** > **Deploy project** > **Get prediction URL**. The key value is part of the sample request.|
+| `PROJECT-NAME` | The name of project you would like to retrieve all the source information for.|
+
+### Example query
+
+```bash
+curl -X GET -H "Ocp-Apim-Subscription-Key: {API-KEY}" -H "Content-Type: application/json" 'https://{ENDPOINT}.api.cognitive.microsoft.com/language/query-knowledgebases/projects/{PROJECT_NAME}/sources?api-version=2021-10-01'
+```
+
+### Example response
+
+```json
+{
+ "200": {
+ "headers": {},
+ "body": {
+ "value": [
+ {
+ "displayName": "source1",
+ "sourceUri": "https://learn.microsoft.com/azure/ai-services/qnamaker/overview/overview",
+ "sourceKind": "url",
+ "lastUpdatedDateTime": "2021-05-01T15:13:22Z"
+ },
+ {
+ "displayName": "source2",
+ "sourceUri": "https://download.microsoft.com/download/2/9/B/29B20383-302C-4517-A006-B0186F04BE28/surface-pro-4-user-guide-EN.pdf",
+ "sourceKind": "file",
+ "contentStructureKind": "unstructured",
+ "lastUpdatedDateTime": "2021-05-01T15:13:22Z"
+ }
+ ]
+ }
+ }
+ }
+```
+
+## Get synonyms
+
+To retrieve synonyms and related information for a given project, update the following values in the query below:
+
+|Variable name | Value |
+|--|-|
+| `ENDPOINT` | This value can be found in the **Keys & Endpoint** section when examining your resource from the Azure portal. Alternatively you can find the value in **Language Studio** > **question answering** > **Deploy project** > **Get prediction URL**. An example endpoint is: `https://southcentralus.api.cognitive.microsoft.com/`. If this was your endpoint in the code sample below, you would only need to add the region specific portion of `southcentral` as the rest of the endpoint path is already present.|
+| `API-KEY` | This value can be found in the **Keys & Endpoint** section when examining your resource from the Azure portal. You can use either Key1 or Key2. Always having two valid keys allows for secure key rotation with zero downtime. Alternatively you can find the value in **Language Studio** > **question answering** > **Deploy project** > **Get prediction URL**. The key value is part of the sample request.|
+| `PROJECT-NAME` | The name of project you would like to retrieve synonym information for.|
+
+### Example query
+
+```bash
+
+curl -X GET -H "Ocp-Apim-Subscription-Key: {API-KEY}" -H "Content-Type: application/json" 'https://{ENDPOINT}.api.cognitive.microsoft.com/language/query-knowledgebases/projects/{PROJECT-NAME}/synonyms?api-version=2021-10-01'
+```
+
+### Example response
+
+```json
+ {
+ "200": {
+ "headers": {},
+ "body": {
+ "value": [
+ {
+ "alterations": [
+ "qnamaker",
+ "qna maker"
+ ]
+ },
+ {
+ "alterations": [
+ "botframework",
+ "bot framework"
+ ]
+ }
+ ]
+ }
+ }
+ }
+```
+
+## Deploy project
+
+To deploy a project to production, update the following values in the query below:
+
+|Variable name | Value |
+|--|-|
+| `ENDPOINT` | This value can be found in the **Keys & Endpoint** section when examining your resource from the Azure portal. Alternatively you can find the value in **Language Studio** > **question answering** > **Deploy project** > **Get prediction URL**. An example endpoint is: `https://southcentralus.api.cognitive.microsoft.com/`. If this was your endpoint in the code sample below, you would only need to add the region specific portion of `southcentral` as the rest of the endpoint path is already present.|
+| `API-KEY` | This value can be found in the **Keys & Endpoint** section when examining your resource from the Azure portal. You can use either Key1 or Key2. Always having two valid keys allows for secure key rotation with zero downtime. Alternatively you can find the value in **Language Studio** > **question answering** > **Deploy project** > **Get prediction URL**. The key value is part of the sample request.|
+| `PROJECT-NAME` | The name of project you would like to deploy to production.|
+
+### Example query
+
+```bash
+curl -X PUT -H "Ocp-Apim-Subscription-Key: {API-KEY}" -H "Content-Type: application/json" -d '' -i 'https://{ENDPOINT}.api.cognitive.microsoft.com/language/query-knowledgebases/projects/{PROJECT-NAME}/deployments/production?api-version=2021-10-01'
+```
+
+A successful call to deploy a project results in an `Operation-Location` header being returned, which can be used to check the status of the deployment job. In most of our examples, we haven't needed to look at the response headers and thus haven't been displaying them. To retrieve the response headers, our curl command uses `-i`. Without this parameter prior to the endpoint address, the response to this command would appear empty as if no response occurred.
+
+### Example response
+
+```bash
+HTTP/2 202
+content-length: 0
+operation-location: https://southcentralus.api.cognitive.microsoft.com:443/language/query-knowledgebases/projects/sample-proj1/deployments/production/jobs/{JOB-ID-GUID}
+x-envoy-upstream-service-time: 31
+apim-request-id:
+strict-transport-security: max-age=31536000; includeSubDomains; preload
+x-content-type-options: nosniff
+date: Tue, 23 Nov 2021 20:35:00 GMT
+```
+
+### Get project deployment status
+
+|Variable name | Value |
+|--|-|
+| `ENDPOINT` | This value can be found in the **Keys & Endpoint** section when examining your resource from the Azure portal. Alternatively you can find the value in **Language Studio** > **question answering** > **Deploy project** > **Get prediction URL**. An example endpoint is: `https://southcentralus.api.cognitive.microsoft.com/`. If this was your endpoint in the code sample below, you would only need to add the region specific portion of `southcentral` as the rest of the endpoint path is already present.|
+| `API-KEY` | This value can be found in the **Keys & Endpoint** section when examining your resource from the Azure portal. You can use either Key1 or Key2. Always having two valid keys allows for secure key rotation with zero downtime. Alternatively you can find the value in **Language Studio** > **question answering** > **Deploy project** > **Get prediction URL**. The key value is part of the sample request.|
+| `PROJECT-NAME` | The name of project you would like to check on the deployment status for.|
+| `JOB-ID` | When you deploy a project programmatically, a `JOB-ID` is generated as part of the `operation-location` response header to the deployment request. The `JOB-ID` is the guid at the end of the `operation-location`. For example: `operation-location: https://southcentralus.api.cognitive.microsoft.com:443/language/query-knowledgebases/projects/sample-proj1/deployments/production/jobs/{THIS GUID IS YOUR JOB ID}` |
+
+### Example query
+
+```bash
+curl -X GET -H "Ocp-Apim-Subscription-Key: {API-KEY}" -H "Content-Type: application/json" -d '' 'https://{ENDPOINT}.api.cognitive.microsoft.com/language/query-knowledgebases/projects/{PROJECT-NAME}/deployments/production/jobs/{JOB-ID}?api-version=2021-10-01'
+```
+
+### Example response
+
+```json
+ {
+ "200": {
+ "headers": {},
+ "body": {
+ "errors": [],
+ "createdDateTime": "2021-05-01T17:21:14Z",
+ "expirationDateTime": "2021-05-01T17:21:14Z",
+ "jobId": "{JOB-ID-GUID}",
+ "lastUpdatedDateTime": "2021-05-01T17:21:14Z",
+ "status": "succeeded"
+ }
+ }
+ }
+```
+
+## Export project metadata and assets
+
+|Variable name | Value |
+|--|-|
+| `ENDPOINT` | This value can be found in the **Keys & Endpoint** section when examining your resource from the Azure portal. Alternatively you can find the value in **Language Studio** > **question answering** > **Deploy project** > **Get prediction URL**. An example endpoint is: `https://southcentralus.api.cognitive.microsoft.com/`. If this was your endpoint in the code sample below, you would only need to add the region specific portion of `southcentral` as the rest of the endpoint path is already present.|
+| `API-KEY` | This value can be found in the **Keys & Endpoint** section when examining your resource from the Azure portal. You can use either Key1 or Key2. Always having two valid keys allows for secure key rotation with zero downtime. Alternatively you can find the value in **Language Studio** > **question answering** > **Deploy project** > **Get prediction URL**. The key value is part of the sample request.|
+| `PROJECT-NAME` | The name of project you would like to export.|
+
+### Example query
+
+```bash
+curl -X POST -H "Ocp-Apim-Subscription-Key: {API-KEY}" -H "Content-Type: application/json" -d '{"exportAssetTypes": ["qnas","synonyms"]}' -i 'https://{ENDPOINT}.api.cognitive.microsoft.com/language/query-knowledgebases/projects/{PROJECT-NAME}/:export?api-version=2021-10-01&format=tsv'
+```
+
+### Example response
+
+```bash
+HTTP/2 202
+content-length: 0
+operation-location: https://southcentralus.api.cognitive.microsoft.com:443/language/query-knowledgebases/projects/Sample-project/export/jobs/{JOB-ID_GUID}
+x-envoy-upstream-service-time: 214
+apim-request-id:
+strict-transport-security: max-age=31536000; includeSubDomains; preload
+x-content-type-options: nosniff
+date: Tue, 23 Nov 2021 21:24:03 GMT
+```
+
+### Check export status
+
+|Variable name | Value |
+|--|-|
+| `ENDPOINT` | This value can be found in the **Keys & Endpoint** section when examining your resource from the Azure portal. Alternatively you can find the value in **Language Studio** > **question answering** > **Deploy project** > **Get prediction URL**. An example endpoint is: `https://southcentralus.api.cognitive.microsoft.com/`. If this was your endpoint in the code sample below, you would only need to add the region specific portion of `southcentral` as the rest of the endpoint path is already present.|
+| `API-KEY` | This value can be found in the **Keys & Endpoint** section when examining your resource from the Azure portal. You can use either Key1 or Key2. Always having two valid keys allows for secure key rotation with zero downtime. Alternatively you can find the value in **Language Studio** > **question answering** > **Deploy project** > **Get prediction URL**. The key value is part of the sample request.|
+| `PROJECT-NAME` | The name of project you would like to check on the export status for.|
+| `JOB-ID` | When you export a project programmatically, a `JOB-ID` is generated as part of the `operation-location` response header to the export request. The `JOB-ID` is the guid at the end of the `operation-location`. For example: `operation-location: https://southcentralus.api.cognitive.microsoft.com/language/query-knowledgebases/projects/sample-proj1/export/jobs/{THIS GUID IS YOUR JOB ID}` |
+
+### Example query
+
+```bash
+curl -X GET -H "Ocp-Apim-Subscription-Key: {API-KEY}" -H "Content-Type: application/json" -d '' 'https://{ENDPOINT}.api.cognitive.microsoft.com/language/query-knowledgebases/projects/sample-proj1/export/jobs/{JOB-ID}?api-version=2021-10-01'
+```
+
+### Example response
+
+```json
+{
+ "createdDateTime": "2021-11-23T21:24:03+00:00",
+ "expirationDateTime": "2021-11-24T03:24:03+00:00",
+ "jobId": "JOB-ID-GUID",
+ "lastUpdatedDateTime": "2021-11-23T21:24:08+00:00",
+ "status": "succeeded",
+ "resultUrl": "https://southcentralus.api.cognitive.microsoft.com:443/language/query-knowledgebases/projects/sample-proj1/export/jobs/{JOB-ID_GUID}/result"
+}
+```
+
+If you try to access the resultUrl directly, you will get a 404 error. You must append `?api-version=2021-10-01` to the path to make it accessible by an authenticated request: `https://southcentralus.api.cognitive.microsoft.com:443/language/query-knowledgebases/projects/sample-proj1/export/jobs/{JOB-ID_GUID}/result?api-version=2021-10-01`
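+
+For example, the completed export can then be downloaded with an authenticated GET against that URL:
+
+```bash
+# Download the export result to a local file.
+curl -X GET -H "Ocp-Apim-Subscription-Key: {API-KEY}" -o project-export.tsv 'https://southcentralus.api.cognitive.microsoft.com:443/language/query-knowledgebases/projects/sample-proj1/export/jobs/{JOB-ID_GUID}/result?api-version=2021-10-01'
+```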
+
+## Import project
+
+|Variable name | Value |
+|--|-|
+| `ENDPOINT` | This value can be found in the **Keys & Endpoint** section when examining your resource from the Azure portal. Alternatively you can find the value in **Language Studio** > **question answering** > **Deploy project** > **Get prediction URL**. An example endpoint is: `https://southcentralus.api.cognitive.microsoft.com/`. If this was your endpoint in the code sample below, you would only need to add the region specific portion of `southcentral` as the rest of the endpoint path is already present.|
+| `API-KEY` | This value can be found in the **Keys & Endpoint** section when examining your resource from the Azure portal. You can use either Key1 or Key2. Always having two valid keys allows for secure key rotation with zero downtime. Alternatively you can find the value in **Language Studio** > **question answering** > **Deploy project** > **Get prediction URL**. The key value is part of the sample request.|
+| `PROJECT-NAME` | The name of project you would like to be the destination for the import.|
+| `FILE-URI-PATH` | When you export a project programmatically, and then check the status the export a `resultUrl` is generated as part of the response. For example: `"resultUrl": "https://southcentralus.api.cognitive.microsoft.com:443/language/query-knowledgebases/projects/sample-proj1/export/jobs/{JOB-ID_GUID}/result"` you can use the resultUrl with the API version appended as a source file to import a project from: `https://southcentralus.api.cognitive.microsoft.com:443/language/query-knowledgebases/projects/sample-proj1/export/jobs/{JOB-ID_GUID}/result?api-version=2021-10-01`.|
+
+### Example query
+
+```bash
+curl -X POST -H "Ocp-Apim-Subscription-Key: {API-KEY}" -H "Content-Type: application/json" -d '{
+ "fileUri": "FILE-URI-PATH"
+ }' -i 'https://{ENDPOINT}.api.cognitive.microsoft.com/language/query-knowledgebases/projects/{PROJECT-NAME}/:import?api-version=2021-10-01&format=tsv'
+```
+
+A successful call to import a project results in an `Operation-Location` header being returned, which can be used to check the status of the import job. In many of our examples, we haven't needed to look at the response headers and thus haven't been displaying them. To retrieve the response headers our curl command uses `-i`. Without this additional parameter prior to the endpoint address, the response to this command would appear empty as if no response occurred.
+
+### Example response
+
+```bash
+HTTP/2 202
+content-length: 0
+operation-location: https://southcentralus.api.cognitive.microsoft.com:443/language/query-knowledgebases/projects/sample-proj1/import/jobs/{JOB-ID-GUID}
+x-envoy-upstream-service-time: 417
+apim-request-id:
+strict-transport-security: max-age=31536000; includeSubDomains; preload
+x-content-type-options: nosniff
+date: Wed, 24 Nov 2021 00:35:11 GMT
+```
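+
+If you want to capture only the `operation-location` value from a response like the one above, one option (a sketch using standard shell tools, not a feature of the API itself) is to filter the headers returned by `-i`:
+
+```bash
+# Hypothetical helper: repeat the import request and print only the operation-location header.
+curl -s -i -X POST -H "Ocp-Apim-Subscription-Key: {API-KEY}" -H "Content-Type: application/json" -d '{
+      "fileUri": "FILE-URI-PATH"
+  }' 'https://{ENDPOINT}.api.cognitive.microsoft.com/language/query-knowledgebases/projects/{PROJECT-NAME}/:import?api-version=2021-10-01&format=tsv' \
+  | grep -i '^operation-location:'
+```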
+
+### Check import status
+
+|Variable name | Value |
+|--|-|
+| `ENDPOINT` | This value can be found in the **Keys & Endpoint** section when examining your resource from the Azure portal. Alternatively you can find the value in **Language Studio** > **question answering** > **Deploy project** > **Get prediction URL**. An example endpoint is: `https://southcentralus.api.cognitive.microsoft.com/`. If this was your endpoint in the code sample below, you would only need to add the region specific portion of `southcentral` as the rest of the endpoint path is already present.|
+| `API-KEY` | This value can be found in the **Keys & Endpoint** section when examining your resource from the Azure portal. You can use either Key1 or Key2. Always having two valid keys allows for secure key rotation with zero downtime. Alternatively you can find the value in **Language Studio** > **question answering** > **Deploy project** > **Get prediction URL**. The key value is part of the sample request.|
+| `PROJECT-NAME` | The name of project you would like to be the destination for the import.|
+| `JOB-ID` | When you import a project programmatically, a `JOB-ID` is generated as part of the `operation-location` response header to the export request. The `JOB-ID` is the GUID at the end of the `operation-location`. For example: `operation-location: https://southcentralus.api.cognitive.microsoft.com/language/query-knowledgebases/projects/sample-proj1/import/jobs/{THIS GUID IS YOUR JOB ID}` |
+
+### Example query
+
+```bash
+curl -X GET -H "Ocp-Apim-Subscription-Key: {API-KEY}" -H "Content-Type: application/json" -d '' 'https://{ENDPOINT}.api.cognitive.microsoft.com/language/query-knowledgebases/projects/{PROJECT-NAME}/import/jobs/{JOB-ID}?api-version=2021-10-01'
+```
+
+### Example query response
+
+```json
+{
+ "errors": [],
+ "createdDateTime": "2021-05-01T17:21:14Z",
+ "expirationDateTime": "2021-05-01T17:21:14Z",
+ "jobId": "JOB-ID-GUID",
+ "lastUpdatedDateTime": "2021-05-01T17:21:14Z",
+ "status": "succeeded"
+}
+```
+
+## List deployments
+
+|Variable name | Value |
+|--|-|
+| `ENDPOINT` | This value can be found in the **Keys & Endpoint** section when examining your resource from the Azure portal. Alternatively you can find the value in **Language Studio** > **question answering** > **Deploy project** > **Get prediction URL**. An example endpoint is: `https://southcentralus.api.cognitive.microsoft.com/`. If this was your endpoint in the code sample below, you would only need to add the region specific portion of `southcentral` as the rest of the endpoint path is already present.|
+| `API-KEY` | This value can be found in the **Keys & Endpoint** section when examining your resource from the Azure portal. You can use either Key1 or Key2. Always having two valid keys allows for secure key rotation with zero downtime. Alternatively you can find the value in **Language Studio** > **question answering** > **Deploy project** > **Get prediction URL**. The key value is part of the sample request.|
+| `PROJECT-NAME` | The name of project you would like to generate a deployment list for.|
+
+### Example query
+
+```bash
+curl -X GET -H "Ocp-Apim-Subscription-Key: {API-KEY}" -H "Content-Type: application/json" -d '' 'https://{ENDPOINT}.api.cognitive.microsoft.com/language/query-knowledgebases/projects/{PROJECT-NAME}/deployments?api-version=2021-10-01'
+```
+
+### Example response
+
+```json
+[
+ {
+ "deploymentName": "production",
+ "lastDeployedDateTime": "2021-10-26T15:12:02Z"
+ }
+]
+```
+
+## List projects
+
+Retrieve a list of all question answering projects your account has access to.
+
+|Variable name | Value |
+|--|-|
+| `ENDPOINT` | This value can be found in the **Keys & Endpoint** section when examining your resource from the Azure portal. Alternatively you can find the value in **Language Studio** > **question answering** > **Deploy project** > **Get prediction URL**. An example endpoint is: `https://southcentralus.api.cognitive.microsoft.com/`. If this was your endpoint in the code sample below, you would only need to add the region specific portion of `southcentral` as the rest of the endpoint path is already present.|
+| `API-KEY` | This value can be found in the **Keys & Endpoint** section when examining your resource from the Azure portal. You can use either Key1 or Key2. Always having two valid keys allows for secure key rotation with zero downtime. Alternatively you can find the value in **Language Studio** > **question answering** > **Deploy project** > **Get prediction URL**. The key value is part of the sample request.|
+
+### Example query
+
+```bash
+curl -X GET -H "Ocp-Apim-Subscription-Key: {API-KEY}" -H "Content-Type: application/json" -d '' 'https://{ENDPOINT}.api.cognitive.microsoft.com/language/query-knowledgebases/projects?api-version=2021-10-01'
+```
+
+### Example response
+
+```json
+{
+ "value": [
+ {
+ "projectName": "Sample-project",
+ "description": "My first question answering project",
+ "language": "en",
+ "multilingualResource": false,
+ "createdDateTime": "2021-10-07T04:51:15Z",
+ "lastModifiedDateTime": "2021-10-27T00:42:01Z",
+ "lastDeployedDateTime": "2021-11-24T01:34:18Z",
+ "settings": {
+ "defaultAnswer": "No good match found in KB"
+ }
+ }
+ ]
+}
+```
+
+## Update sources
+
+In this example, we add a new source to an existing project. You can also replace or delete existing sources with this request, depending on the operations you pass in the query body (a hypothetical delete sketch follows the example response below).
+
+|Variable name | Value |
+|--|-|
+| `ENDPOINT` | This value can be found in the **Keys & Endpoint** section when examining your resource from the Azure portal. Alternatively you can find the value in **Language Studio** > **question answering** > **Deploy project** > **Get prediction URL**. An example endpoint is: `https://southcentralus.api.cognitive.microsoft.com/`. If this was your endpoint in the code sample below, you would only need to add the region specific portion of `southcentral` as the rest of the endpoint path is already present.|
+| `API-KEY` | This value can be found in the **Keys & Endpoint** section when examining your resource from the Azure portal. You can use either Key1 or Key2. Always having two valid keys allows for secure key rotation with zero downtime. Alternatively you can find the value in **Language Studio** > **question answering** > **Deploy project** > **Get prediction URL**. The key value is part of the sample request.|
+| `PROJECT-NAME` | The name of project where you would like to update sources.|
+|`METHOD`| PATCH |
+
+### Example query
+
+```bash
+curl -X PATCH -H "Ocp-Apim-Subscription-Key: {API-KEY}" -H "Content-Type: application/json" -d '[
+ {
+ "op": "add",
+ "value": {
+ "displayName": "source5",
+ "sourceKind": "url",
+ "sourceUri": "https://download.microsoft.com/download/7/B/1/7B10C82E-F520-4080-8516-5CF0D803EEE0/surface-book-user-guide-EN.pdf",
+ "sourceContentStructureKind": "semistructured"
+ }
+ }
+]' -i 'https://{ENDPOINT}.api.cognitive.microsoft.com/language/query-knowledgebases/projects/{PROJECT-NAME}/sources?api-version=2021-10-01'
+```
+
+A successful call to update a source returns an `Operation-Location` header, which can be used to check the status of the update job. Many of the examples in this article don't need to inspect the response headers and so don't display them. To retrieve the response headers, the curl command uses `-i`. Without this parameter before the endpoint address, the response to this command would appear empty, as if no response occurred.
+
+### Example response
+
+```bash
+HTTP/2 202
+content-length: 0
+operation-location: https://southcentralus.api.cognitive.microsoft.com:443/language/query-knowledgebases/projects/Sample-project/sources/jobs/{JOB_ID_GUID}
+x-envoy-upstream-service-time: 412
+apim-request-id: dda23d2b-f110-4645-8bce-1a6f8d504b33
+strict-transport-security: max-age=31536000; includeSubDomains; preload
+x-content-type-options: nosniff
+date: Wed, 24 Nov 2021 02:47:53 GMT
+```
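+
+The example above uses the `add` operation. As a rough sketch only, a `delete` operation might look like the following; this payload shape is an assumption (it mirrors the `add` example) and isn't documented in this article, so verify it against the Authoring API reference before using it:
+
+```bash
+# Hypothetical sketch: the "delete" operation body is assumed to mirror the "add" example above.
+curl -X PATCH -H "Ocp-Apim-Subscription-Key: {API-KEY}" -H "Content-Type: application/json" -d '[
+  {
+    "op": "delete",
+    "value": {
+      "displayName": "source5",
+      "sourceKind": "url",
+      "sourceUri": "https://download.microsoft.com/download/7/B/1/7B10C82E-F520-4080-8516-5CF0D803EEE0/surface-book-user-guide-EN.pdf"
+    }
+  }
+]' -i 'https://{ENDPOINT}.api.cognitive.microsoft.com/language/query-knowledgebases/projects/{PROJECT-NAME}/sources?api-version=2021-10-01'
+```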
+
+### Get update source status
+
+|Variable name | Value |
+|--|-|
+| `ENDPOINT` | This value can be found in the **Keys & Endpoint** section when examining your resource from the Azure portal. Alternatively, you can find the value in **Language Studio** > **question answering** > **Deploy project** > **Get prediction URL**. An example endpoint is: `https://southcentralus.api.cognitive.microsoft.com/`. If this was your endpoint in the code sample below, you would only need to add the region specific portion of `southcentral` as the rest of the endpoint path is already present.|
+| `API-KEY` | This value can be found in the **Keys & Endpoint** section when examining your resource from the Azure portal. You can use either Key1 or Key2. Always having two valid keys allows for secure key rotation with zero downtime. Alternatively you can find the value in **Language Studio** > **question answering** > **Deploy project** > **Get prediction URL**. The key value is part of the sample request.|
+| `PROJECT-NAME` | The name of project you would like to be the destination for the import.|
+| `JOB-ID` | When you update a source programmatically, a `JOB-ID` is generated as part of the `operation-location` response header to the update source request. The `JOB-ID` is the GUID at the end of the `operation-location`. For example: `operation-location: https://southcentralus.api.cognitive.microsoft.com/language/query-knowledgebases/projects/sample-proj1/sources/jobs/{THIS GUID IS YOUR JOB ID}` |
+
+### Example query
+
+```bash
+curl -X GET -H "Ocp-Apim-Subscription-Key: {API-KEY}" -H "Content-Type: application/json" -d '' 'https://{ENDPOINT}.api.cognitive.microsoft.com/language/query-knowledgebases/projects/{PROJECT-NAME}/sources/jobs/{JOB-ID}?api-version=2021-10-01'
+```
+
+### Example response
+
+```json
+{
+ "createdDateTime": "2021-11-24T02:47:53+00:00",
+ "expirationDateTime": "2021-11-24T08:47:53+00:00",
+ "jobId": "{JOB-ID-GUID}",
+ "lastUpdatedDateTime": "2021-11-24T02:47:56+00:00",
+ "status": "succeeded",
+ "resultUrl": "/knowledgebases/Sample-project"
+}
+```
+
+## Update question and answer pairs
+
+In this example, we add a question answer pair to an existing source. You can also modify or delete existing question answer pairs with this request, depending on the operation you pass in the query body (a hypothetical delete sketch follows the example response below). If you don't have a source named `source5`, this example query fails. You can adjust the source value in the body of the query to a source that exists for your target project.
+
+|Variable name | Value |
+|--|-|
+| `ENDPOINT` | This value can be found in the **Keys & Endpoint** section when examining your resource from the Azure portal. Alternatively you can find the value in **Language Studio** > **question answering** > **Deploy project** > **Get prediction URL**. An example endpoint is: `https://southcentralus.api.cognitive.microsoft.com/`. If this was your endpoint in the code sample below, you would only need to add the region specific portion of `southcentral` as the rest of the endpoint path is already present.|
+| `API-KEY` | This value can be found in the **Keys & Endpoint** section when examining your resource from the Azure portal. You can use either Key1 or Key2. Always having two valid keys allows for secure key rotation with zero downtime. Alternatively you can find the value in **Language Studio** > **question answering** > **Deploy project** > **Get prediction URL**. The key value is part of the sample request.|
+| `PROJECT-NAME` | The name of project you would like to be the destination for the import.|
+
+```bash
+curl -X PATCH -H "Ocp-Apim-Subscription-Key: {API-KEY}" -H "Content-Type: application/json" -d '[
+ {
+ "op": "add",
+ "value":{
+ "id": 1,
+ "answer": "The latest question answering docs are on https://learn.microsoft.com",
+ "source": "source5",
+ "questions": [
+ "Where do I find docs for question answering?"
+ ],
+ "metadata": {},
+ "dialog": {
+ "isContextOnly": false,
+ "prompts": []
+ }
+ }
+ }
+]' -i 'https://{ENDPOINT}.api.cognitive.microsoft.com/language/query-knowledgebases/projects/{PROJECT-NAME}/qnas?api-version=2021-10-01'
+```
+
+A successful call to update a question answer pair returns an `Operation-Location` header, which can be used to check the status of the update job. Many of the examples in this article don't need to inspect the response headers and so don't display them. To retrieve the response headers, the curl command uses `-i`. Without this parameter before the endpoint address, the response to this command would appear empty, as if no response occurred.
+
+### Example response
+
+```bash
+HTTP/2 202
+content-length: 0
+operation-location: https://southcentralus.api.cognitive.microsoft.com:443/language/query-knowledgebases/projects/Sample-project/qnas/jobs/{JOB-ID-GUID}
+x-envoy-upstream-service-time: 507
+apim-request-id:
+strict-transport-security: max-age=31536000; includeSubDomains; preload
+x-content-type-options: nosniff
+date: Wed, 24 Nov 2021 03:16:01 GMT
+```
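+
+The example above uses the `add` operation. As a rough sketch only, deleting an existing pair might look like the following; the payload shape is an assumption and isn't documented in this article, so verify it against the Authoring API reference before using it:
+
+```bash
+# Hypothetical sketch: assumes a pair with id 1 exists and that "delete" accepts an id in the value object.
+curl -X PATCH -H "Ocp-Apim-Subscription-Key: {API-KEY}" -H "Content-Type: application/json" -d '[
+  {
+    "op": "delete",
+    "value": {
+      "id": 1
+    }
+  }
+]' -i 'https://{ENDPOINT}.api.cognitive.microsoft.com/language/query-knowledgebases/projects/{PROJECT-NAME}/qnas?api-version=2021-10-01'
+```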
+
+### Get update question answer pairs status
+
+|Variable name | Value |
+|--|-|
+| `ENDPOINT` | This value can be found in the **Keys & Endpoint** section when examining your resource from the Azure portal. Alternatively you can find the value in **Language Studio** > **question answering** > **Deploy project** > **Get prediction URL**. An example endpoint is: `https://southcentralus.api.cognitive.microsoft.com/`. If this was your endpoint in the code sample below, you would only need to add the region specific portion of `southcentral` as the rest of the endpoint path is already present.|
+| `API-KEY` | This value can be found in the **Keys & Endpoint** section when examining your resource from the Azure portal. You can use either Key1 or Key2. Always having two valid keys allows for secure key rotation with zero downtime. Alternatively you can find the value in **Language Studio** > **question answering** > **Deploy project** > **Get prediction URL**. The key value is part of the sample request.|
+| `PROJECT-NAME` | The name of project you would like to be the destination for the question answer pairs updates.|
+| `JOB-ID` | When you update a question answer pair programmatically, a `JOB-ID` is generated as part of the `operation-location` response header to the update request. The `JOB-ID` is the GUID at the end of the `operation-location`. For example: `operation-location: https://southcentralus.api.cognitive.microsoft.com/language/query-knowledgebases/projects/sample-proj1/qnas/jobs/{THIS GUID IS YOUR JOB ID}` |
+
+### Example query
+
+```bash
+curl -X GET -H "Ocp-Apim-Subscription-Key: {API-KEY}" -H "Content-Type: application/json" -d '' 'https://{ENDPOINT}.api.cognitive.microsoft.com/language/query-knowledgebases/projects/{PROJECT-NAME}/qnas/jobs/{JOB-ID}?api-version=2021-10-01'
+```
+
+### Example response
+
+```json
+{
+  "createdDateTime": "2021-11-24T03:16:01+00:00",
+  "expirationDateTime": "2021-11-24T09:16:01+00:00",
+  "jobId": "{JOB-ID-GUID}",
+  "lastUpdatedDateTime": "2021-11-24T03:16:06+00:00",
+  "status": "succeeded",
+  "resultUrl": "/knowledgebases/Sample-project"
+}
+```
+
+## Update synonyms
+
+|Variable name | Value |
+|--|-|
+| `ENDPOINT` | This value can be found in the **Keys & Endpoint** section when examining your resource from the Azure portal. Alternatively you can find the value in **Language Studio** > **question answering** > **Deploy project** > **Get prediction URL**. An example endpoint is: `https://southcentralus.api.cognitive.microsoft.com/`. If this was your endpoint in the code sample below, you would only need to add the region specific portion of `southcentral` as the rest of the endpoint path is already present.|
+| `API-KEY` | This value can be found in the **Keys & Endpoint** section when examining your resource from the Azure portal. You can use either Key1 or Key2. Always having two valid keys allows for secure key rotation with zero downtime. Alternatively you can find the value in **Language Studio** > **question answering** > **Deploy project** > **Get prediction URL**. The key value is part of the sample request.|
+| `PROJECT-NAME` | The name of the project to which you would like to add synonyms.|
+
+### Example query
+
+```bash
+curl -X PUT -H "Ocp-Apim-Subscription-Key: {API-KEY}" -H "Content-Type: application/json" -d '{
+"value": [
+ {
+ "alterations": [
+ "qnamaker",
+ "qna maker"
+ ]
+ },
+ {
+ "alterations": [
+ "botframework",
+ "bot framework"
+ ]
+ }
+ ]
+}' -i 'https://{ENDPOINT}.api.cognitive.microsoft.com/language/query-knowledgebases/projects/{PROJECT-NAME}/synonyms?api-version=2021-10-01'
+```
+
+### Example response
+
+```bash
+HTTP/2 200
+content-length: 17
+content-type: application/json; charset=utf-8
+x-envoy-upstream-service-time: 39
+apim-request-id: 5deb2692-dac8-43a8-82fe-36476e407ef6
+strict-transport-security: max-age=31536000; includeSubDomains; preload
+x-content-type-options: nosniff
+date: Wed, 24 Nov 2021 03:59:09 GMT
+
+{
+ "value": []
+}
+```
+
+## Update active learning feedback
+
+|Variable name | Value |
+|--|-|
+| `ENDPOINT` | This value can be found in the **Keys & Endpoint** section when examining your resource from the Azure portal. Alternatively you can find the value in **Language Studio** > **question answering** > **Deploy project** > **Get prediction URL**. An example endpoint is: `https://southcentralus.api.cognitive.microsoft.com/`. If this was your endpoint in the code sample below, you would only need to add the region specific portion of `southcentral` as the rest of the endpoint path is already present.|
+| `API-KEY` | This value can be found in the **Keys & Endpoint** section when examining your resource from the Azure portal. You can use either Key1 or Key2. Always having two valid keys allows for secure key rotation with zero downtime. Alternatively you can find the value in **Language Studio** > **question answering** > **Deploy project** > **Get prediction URL**. The key value is part of the sample request.|
+| `PROJECT-NAME` | The name of project you would like to be the destination for the active learning feedback updates.|
+
+### Example query
+
+```bash
+curl -X POST -H "Ocp-Apim-Subscription-Key: {API-KEY}" -H "Content-Type: application/json" -d '{
+"records": [
+ {
+ "userId": "user1",
+ "userQuestion": "hi",
+ "qnaId": 1
+ },
+ {
+ "userId": "user1",
+ "userQuestion": "hello",
+ "qnaId": 2
+ }
+ ]
+}' -i 'https://{ENDPOINT}.api.cognitive.microsoft.com/language/query-knowledgebases/projects/{PROJECT-NAME}/feedback?api-version=2021-10-01'
+```
+
+### Example response
+
+```bash
+HTTP/2 204
+x-envoy-upstream-service-time: 37
+apim-request-id: 92225e03-e83f-4c7f-b35a-223b1b0f29dd
+strict-transport-security: max-age=31536000; includeSubDomains; preload
+x-content-type-options: nosniff
+date: Wed, 24 Nov 2021 04:02:56 GMT
+```
ai-services Best Practices https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/language-service/question-answering/how-to/best-practices.md
+
+ Title: Project best practices
+description: Best practices for Question Answering
+++++
+recommendations: false
Last updated : 06/03/2022++
+# Project best practices
+
+The following list of QnA pairs represents a sample project and is used to highlight best practices when authoring in custom question answering.
+
+|Question |Answer |
+|-|-|
+|I want to buy a car. |There are three options for buying a car. |
+|I want to purchase software license. |Software licenses can be purchased online at no cost. |
+|How to get access to WPA? |WPA can be accessed via the company portal. |
+|What is the price of Microsoft stock?|$200. |
+|How do I buy Microsoft Services? |Microsoft services can be bought online. |
+|I want to sell car. |Please send car pictures and documents. |
+|How do I get an identification card? |Apply via company portal to get an identification card.|
+|How do I use WPA? |WPA is easy to use with the provided manual. |
+|What is the utility of WPA? |WPA provides a secure way to access company resources. |
+
+## When should you add alternate questions to a QnA?
+
+- Question answering employs a transformer-based ranker that takes care of user queries that are semantically similar to questions in the project. For example, consider the following question answer pair:
+
+  **Question: "What is the price of Microsoft Stock?"**
+
+  **Answer: "$200".**
+
+ The service can return expected responses for semantically similar queries such as:
+
+ "How much is Microsoft stock worth?"
+
+ "How much is Microsoft's share value?"
+
+ "How much does a Microsoft share cost?"
+
+ "What is the market value of Microsoft stock?"
+
+ "What is the market value of a Microsoft share?"
+
+ However, please note that the confidence score with which the system returns the correct response will vary based on the input query and how different it is from the original question answer pair.
+
+- There are certain scenarios which require the customer to add an alternate question. When a query does not return the correct answer despite it being present in the project, we advise adding that query as an alternate question to the intended QnA pair.
+
+## How many alternate questions per QnA is optimal?
+
+- Users can add up to 10 alternate questions, depending on their scenario. Alternate questions beyond the first 10 aren't considered by our core ranker, but they are evaluated in the other processing layers, resulting in better output overall. All the alternate questions are considered in the preprocessing step to look for an exact match.
+
+- Semantic understanding in question answering should be able to take care of similar alternate questions.
+
+- The return on investment starts diminishing once you exceed 10 alternate questions. Even if you're adding more than 10 alternate questions, try to make the initial 10 as semantically dissimilar as possible, so that all intents for the answer are captured by these 10 questions. For the project above, in QnA #1, alternate questions such as "How can I buy a car?" and "I wanna buy a car." aren't required, whereas alternate questions such as "How to purchase a car." and "What are the options for buying a vehicle?" can be useful.
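+
+As a concrete illustration, in the authoring API alternate questions are simply additional entries in the `questions` array of a QnA pair. The following is a minimal sketch; the field values are illustrative and follow the `qnas` update example elsewhere in this document:
+
+```json
+{
+    "id": 1,
+    "answer": "There are three options for buying a car.",
+    "source": "Editorial",
+    "questions": [
+        "I want to buy a car.",
+        "How to purchase a car.",
+        "What are the options for buying a vehicle?"
+    ],
+    "metadata": {}
+}
+```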
+
+## When to add synonyms to a project
+
+- Question answering provides the flexibility to use synonyms at the project level, unlike QnA Maker where synonyms are shared across projects for the entire service.
+
+- For better relevance, the customer needs to provide a list of acronyms that the end user intends to use interchangeably. For instance, the following is a list of acceptable acronyms:
+
+  MSFT – Microsoft
+
+  ID – Identification
+
+  ETA – Estimated time of Arrival
+
+- Apart from acronyms, if you think your words are similar in the context of a particular domain and generic language models won't consider them similar, it's better to add them as synonyms. For instance, if an auto company producing a car model X receives queries such as "my car's audio isn't working" and the project has questions on "fixing audio for car X", then we need to add "X" and "car" as synonyms.
+
+- The transformer-based model already takes care of most common synonym cases, such as Purchase – Buy, Sell – Auction, and Price – Value. For example, consider the following QnA pair: Q: "What is the price of Microsoft Stock?" A: "$200".
+
+If users send queries like "Microsoft stock value", "Microsoft share value", "Microsoft stock worth", "Microsoft share worth", or "stock value", they should still get the correct answer, even though these queries contain words like share, value, and worth that aren't originally present in the knowledge base.
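+
+For reference, synonyms are supplied as lists of interchangeable `alterations` (the same shape used by the Update synonyms example earlier in this document). A hypothetical entry for the car example above might look like this:
+
+```json
+{
+    "value": [
+        {
+            "alterations": [
+                "X",
+                "car"
+            ]
+        }
+    ]
+}
+```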
+
+## How are lowercase/uppercase characters treated?
+
+Question answering takes casing into account, but it's intelligent enough to understand when casing should be ignored. You shouldn't see any perceivable difference due to incorrect casing.
+
+## How are QnAs prioritized for multi-turn questions?
+
+When a project has hierarchical relationships (either added manually or via extraction) and the previous response was an answer related to other QnAs, the next query gives slight preference to all the child QnAs, sibling QnAs, and grandchild QnAs, in that order. Along with any query, the [Question Answering API](/rest/api/cognitiveservices/questionanswering/question-answering/get-answers) expects a `context` object with the property `previousQnAId`, which denotes the last top answer. Based on this previous QnA ID, all the related QnAs are boosted.
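+
+A minimal sketch of the relevant part of the query body follows; the values are illustrative and the property names follow the description above (confirm the exact casing and the full request route against the API reference):
+
+```json
+{
+    "question": "How do I use WPA?",
+    "context": {
+        "previousQnAId": 1
+    }
+}
+```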
+
+## How are accents treated?
+
+Accents are supported for all major European languages. If the query has an incorrect accent, confidence score might be slightly different, but the service still returns the relevant answer and takes care of minor errors by leveraging fuzzy search.
+
+## How is punctuation in a user query treated?
+
+Punctuation is ignored in the user query before it's sent to the ranking stack, so it shouldn't impact relevance scores. The following punctuation characters are ignored: `,?:;\"'(){}[]-+。./!*؟`
+
+## Next steps
+
ai-services Change Default Answer https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/language-service/question-answering/how-to/change-default-answer.md
+
+ Title: Get default answer - custom question answering
+description: The default answer is returned when there is no match to the question. You may want to change the default answer from the standard default answer in custom question answering.
+++ Last updated : 11/02/2021+++++
+# Change default answer for question answering
+
+The default answer for a project is meant to be returned when an answer is not found. If you are using a client application, such as the [Azure AI Bot Service](/azure/bot-service/bot-builder-howto-qna), it may also have a separate default answer, indicating no answer met the score threshold.
+
+## Default answer
++
+|Default answer|Description of answer|
+|--|--|
+|KB answer when no answer is determined|`No good match found in KB.` - When the question answering API finds no matching answer to the question it displays a default text response. In Custom question answering, you can set this text in the **Settings** of your project. |
+
+### Client application integration
+
+For a client application, such as a bot with the [Azure AI Bot Service](/azure/bot-service/bot-builder-howto-qna), you can choose from the following scenarios:
+
+* Use your project's setting
+* Use different text in the client application to distinguish when an answer is returned but doesn't meet the score threshold. This text can either be static text stored in code, or can be stored in the client application's settings list.
+
+## Change default answer in Language Studio
+
+The project default answer is returned when no answer is returned from question answering.
+
+1. Sign in to the [Language Studio](https://language.azure.com). Go to Custom question answering and select your project from the list.
+1. Select **Settings** from the left navigation bar.
+1. Change the value of **Default answer when no answer found** > Select **Save**.
+
+> [!div class="mx-imgBorder"]
+> ![Screenshot of project settings with red box around the default answer](../media/change-default-answer/settings.png)
+
+## Next steps
+
+* [Create a project](manage-knowledge-base.md)
ai-services Chit Chat https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/language-service/question-answering/how-to/chit-chat.md
+
+ Title: Adding chitchat to a custom question answering project
+
+description: Adding personal chitchat to your bot makes it more conversational and engaging when you create a project. Custom question answering allows you to easily add a pre-populated set of the top chitchat, into your projects.
+++++++ Last updated : 11/02/2021+++
+# Use chitchat with a project
+
+Adding chitchat to your bot makes it more conversational and engaging. The chitchat feature in custom question answering allows you to easily add a pre-populated set of the top chitchat, into your project. This can be a starting point for your bot's personality, and it will save you the time and cost of writing them from scratch.
+
+This dataset has about 100 scenarios of chitchat in the voice of multiple personas, like Professional, Friendly and Witty. Choose the persona that most closely resembles your bot's voice. Given a user query, question answering tries to match it with the closest known chitchat question and answer.
+
+Some examples of the different personalities are below. You can see all the personality [datasets](https://github.com/microsoft/botframework-cli/blob/main/packages/qnamaker/docs/chit-chat-dataset.md) along with details of the personalities.
+
+For the user query of `When is your birthday?`, each personality has a styled response:
+
+<!-- added quotes so acrolinx doesn't score these sentences -->
+|Personality|Example|
+|--|--|
+|Professional|Age doesn't really apply to me.|
+|Friendly|I don't really have an age.|
+|Witty|I'm age-free.|
+|Caring|I don't have an age.|
+|Enthusiastic|I'm a bot, so I don't have an age.|
+||
+
+## Language support
+
+Chitchat data sets are supported in the following languages:
+
+|Language|
+|--|
+|Chinese|
+|English|
+|French|
+|German|
+|Italian|
+|Japanese|
+|Korean|
+|Portuguese|
+|Spanish|
+
+## Add chitchat source
+After you create your project, you can add sources from URLs and files, as well as chitchat, from the **Manage sources** pane.
+
+> [!div class="mx-imgBorder"]
+> ![Add source chitchat](../media/chit-chat/add-source.png)
+
+Choose the personality that you want as your chitchat base.
+
+> [!div class="mx-imgBorder"]
+> ![Menu of different chitchat personalities](../media/chit-chat/personality.png)
+
+## Edit your chitchat questions and answers
+
+When you edit your project, you will see a new source for chitchat, based on the personality you selected. You can now add altered questions or edit the responses, just like with any other source.
+
+> [!div class="mx-imgBorder"]
+> ![Edit chitchat question pairs](../media/chit-chat/edit-chit-chat.png)
+
+To turn the views for context and metadata on and off, select **Show columns** in the toolbar.
+
+## Add more chitchat questions and answers
+
+You can add a new chitchat question pair that is not in the predefined data set. Ensure that you are not duplicating a question pair that is already covered in the chitchat set. When you add any new chitchat question pair, it gets added to your **Editorial** source. To ensure the ranker understands that this is chitchat, add the metadata key/value pair "Editorial: chitchat", as seen in the following image:
++
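+
+For reference, a chitchat pair tagged this way might look like the following in the authoring API. This is a sketch only; the pair content, the id, and the exact metadata key casing are illustrative:
+
+```json
+{
+    "id": 101,
+    "answer": "I don't really have an age.",
+    "source": "Editorial",
+    "questions": [
+        "When is your birthday?"
+    ],
+    "metadata": {
+        "editorial": "chitchat"
+    }
+}
+```
+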
+## Delete chitchat from your project
+
+Select the **manage sources** pane, and choose your chitchat source. Your specific chitchat source is listed as a tsv file, with the selected personality name. Select **Delete** from the toolbar.
+
+> [!div class="mx-imgBorder"]
+> ![Delete chitchat source](../media/chit-chat/delete-chit-chat.png)
+
+## Next steps
++
ai-services Configure Resources https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/language-service/question-answering/how-to/configure-resources.md
+
+ Title: Configure Question Answering service
+description: This document outlines advanced configurations for custom question answering enabled resources.
+++++ Last updated : 11/02/2021+++
+# Configure custom question answering enabled resources
+
+You can configure question answering to use a different Cognitive Search resource.
+
+## Change Cognitive Search resource
+
+> [!WARNING]
+> If you change the Azure Search service associated with your language resource, you will lose access to all the projects already present in it. Make sure you export the existing projects before you change the Azure Search service.
+
+If you create a language resource and its dependencies (such as Search) through the Azure portal, a Search service is created for you and linked to the language resource. After these resources are created, you can update the Search resource in the **Features** tab.
+
+1. Go to your language resource in the Azure portal.
+
+2. Select **Features** and select the Azure Cognitive Search service you want to link with your language resource.
+
+ > [!NOTE]
+ > Your Language resource will retain your Azure Cognitive Search keys. If you update your search resource (for example, regenerating your keys), you will need to select **Update Azure Cognitive Search keys for the current search service**.
+
+ > [!div class="mx-imgBorder"]
+ > ![Add QnA to TA](../media/configure-resources/update-custom-feature.png)
+
+3. Select **Save**.
+
+## Next steps
+
+* [Encrypt data at rest](./encrypt-data-at-rest.md)
ai-services Create Test Deploy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/language-service/question-answering/how-to/create-test-deploy.md
+
+ Title: Create, test, and deploy your question answering project
+description: You can create a question answering project from your own content, such as FAQs or product manuals. This article includes an example of creating a question answering project from a simple FAQ webpage, to answer questions.
+++++ Last updated : 11/02/2021+++
+# Create, test, and deploy a custom question answering project
+
+You can create a question answering project from your own content, such as FAQs or product manuals. This article includes an example of creating a question answering project from a product manual, to answer questions.
+
+## Prerequisites
+
+> [!div class="checklist"]
+> * If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/cognitive-services/) before you begin.
+> * A [language resource](https://aka.ms/create-language-resource) with the custom question answering feature enabled.
+
+## Create your first question answering project
+
+1. Sign in to the [Language Studio](https://language.azure.com/) with your Azure credentials.
+
+2. Scroll down to the **Understand questions and conversational language** section and select **Open custom question answering**.
+
+ > [!div class="mx-imgBorder"]
+ > ![Open custom question answering](../media/create-test-deploy/open-custom-question-answering.png)
+
+3. If your resource is not yet connected to Azure Search select **Connect to Azure Search**. This will open a new browser tab to **Features** pane of your resource in the Azure portal.
+
+ > [!div class="mx-imgBorder"]
+ > ![Connect to Azure Search](../media/create-test-deploy/connect-to-azure-search.png)
+
+4. Select **Enable custom question answering**, choose the Azure Search resource to link to, and then select **Apply**.
+
+ > [!div class="mx-imgBorder"]
+ > ![Enable custom question answering](../media/create-test-deploy/enable-custom-question-answering.png)
+
+5. Return to the Language Studio tab. You may need to refresh this page for it to register the change to your resource. Select **Create new project**.
+
+6. Choose the option **I want to set the language for all projects created in this resource** > select **English** > Select **Next**.
+
+7. Enter a project name of **Sample-project**, a description of **My first question answering project**, and leave the default answer with a setting of **No answer found**.
+
+8. Review your choices and select **Create project**
+
+9. From the **Manage sources** page select **Add source** > **URLS**.
+
+10. Select **Add url** enter the following values and then select **Add all**:
+
+ |URL Name|URL Value|
+ |--||
+ |Surface Book User Guide |https://download.microsoft.com/download/7/B/1/7B10C82E-F520-4080-8516-5CF0D803EEE0/surface-book-user-guide-EN.pdf |
+
+    The extraction process takes a few moments to read the document and identify questions and answers. Question answering determines whether the underlying content is structured or unstructured.
+
+ After successfully adding the source, you can then edit the source contents to add more custom question answer sets.
+
+## Test your project
+
+1. Select the link to your source, this will open the edit project page.
+
+2. Select **Test** from the menu bar > Enter the question **How do I set up my Surface Book?**. An answer is generated based on the question answer pairs that were automatically identified and extracted from your source URL:
+
+ > [!div class="mx-imgBorder"]
+ > ![Test question chat interface](../media/create-test-deploy/test-question.png)
+
+ If you check the box for **include short answer response** you will also see a precise answer, if available, along with the answer passage in the test pane when you ask a question.
+
+3. Select **Inspect** to examine the response in more detail. The test window is used to test your changes to your project before deploying your project.
+
+ > [!div class="mx-imgBorder"]
+ > ![See the confidence interval](../media/create-test-deploy/inspect-test.png)
+
+ From the **Inspect** interface, you can see the level of confidence that this response will answer the question and directly edit a given question and answer response pair.
+
+## Deploy your project
+
+1. Select the Deploy project icon to enter the deploy project menu.
+
+ > [!div class="mx-imgBorder"]
+ > ![Deploy project](../media/create-test-deploy/deploy-knowledge-base.png)
+
+ When you deploy a project, the contents of your project move from the `test` index to a `prod` index in Azure Search.
+
+2. Select **Deploy** > and then when prompted select **Deploy** again.
+
+ > [!div class="mx-imgBorder"]
+ > ![Successful deployment](../media/create-test-deploy/successful-deployment.png)
+
+   Your project is now successfully deployed. You can use the endpoint to answer questions in your own custom application or in a bot.
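+
+As an example of calling the deployed endpoint, a prediction request might look like the following sketch. The route and body shown here are assumptions; copy the exact URL and key from **Get prediction URL** in Language Studio:
+
+```bash
+# Hypothetical sketch: replace the URL with the one shown under "Get prediction URL" in Language Studio.
+curl -X POST -H "Ocp-Apim-Subscription-Key: {API-KEY}" -H "Content-Type: application/json" -d '{
+    "question": "How do I set up my Surface Book?"
+  }' 'https://{ENDPOINT}.api.cognitive.microsoft.com/language/:query-knowledgebases?projectName=Sample-project&deploymentName=production&api-version=2021-10-01'
+```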
+
+## Clean up resources
+
+If you will not continue to test custom question answering, you can delete the associated resource.
+
+## Next steps
++
ai-services Encrypt Data At Rest https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/language-service/question-answering/how-to/encrypt-data-at-rest.md
+
+ Title: Custom question answering encryption of data at rest
+
+description: Microsoft offers Microsoft-managed encryption keys, and also lets you manage your Azure AI services subscriptions with your own keys, called customer-managed keys (CMK). This article covers data encryption at rest for custom question answering, and how to enable and manage CMK.
+++++ Last updated : 06/03/2022++++
+# Custom question answering encryption of data at rest
+
+Question answering automatically encrypts your data when it is persisted to the cloud, helping to meet your organizational security and compliance goals.
+
+## About encryption key management
+
+By default, your subscription uses Microsoft-managed encryption keys. There is also the option to manage your resource with your own keys called customer-managed keys (CMK). CMK offers greater flexibility to create, rotate, disable, and revoke access controls. You can also audit the encryption keys used to protect your data. If CMK is configured for your subscription, double encryption is provided, which offers a second layer of protection, while allowing you to control the encryption key through your Azure Key Vault.
+
+Question answering uses the CMK support from Azure Cognitive Search, and associates the provided CMK to encrypt the data stored in the Azure Search index. Please follow the steps listed in [this article](../../../../search/search-security-manage-encryption-keys.md) to configure Key Vault access for the Azure Search service.
+
+> [!IMPORTANT]
+> Your Azure Search service resource must have been created after January 2019 and cannot be in the free (shared) tier. There is no support to configure customer-managed keys in the Azure portal.
+
+## Enable customer-managed keys
+
+Follow these steps to enable CMKs:
+
+1. Go to the **Encryption** tab of your language resource with custom question answering enabled.
+2. Select the **Customer Managed Keys** option. Provide the details of your [customer-managed keys](../../../../storage/common/customer-managed-keys-configure-key-vault.md?tabs=portal) and select **Save**.
+
+> [!div class="mx-imgBorder"]
+> ![Question Answering CMK](../media/encrypt-data-at-rest/question-answering-cmk.png)
+
+3. On a successful save, the CMK will be used to encrypt the data stored in the Azure Search Index.
+
+> [!IMPORTANT]
+> It is recommended to set your CMK in a fresh Azure Cognitive Search service before any projects are created. If you set CMK in a language resource with existing projects, you might lose access to them. Read more about [working with encrypted content](../../../../search/search-security-manage-encryption-keys.md#work-with-encrypted-content) in Azure Cognitive search.
+
+> [!NOTE]
+> To request the ability to use customer-managed keys, fill out and submit the [Azure AI services Customer-Managed Key Request Form](https://aka.ms/cogsvc-cmk).
+
+## Regional availability
+
+Customer-managed keys are available in all Azure Search regions.
+
+## Encryption of data in transit
+
+Language Studio runs in the user's browser. Every action triggers a direct call to the respective Azure AI services API. Hence, question answering is compliant for data in transit.
+
+## Next steps
+
+* [Encryption in Azure Search using CMKs in Azure Key Vault](../../../../search/search-security-manage-encryption-keys.md)
+* [Data encryption at rest](../../../../security/fundamentals/encryption-atrest.md)
+* [Learn more about Azure Key Vault](../../../../key-vault/general/overview.md)
ai-services Export Import Refresh https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/language-service/question-answering/how-to/export-import-refresh.md
+
+ Title: Export/import/refresh | question answering projects
+description: Learn about backing up your question answering projects and question answer pairs
+++++
+recommendations: false
Last updated : 01/25/2022+
+# Export-import-refresh in question answering
+
+You may want to create a copy of your question answering project or related question and answer pairs for several reasons:
+
+* To implement a backup and restore process
+* To integrate with your CI/CD pipeline
+* To move your data to different regions
+
+## Prerequisites
+
+* If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/cognitive-services/) before you begin.
+* A [language resource](https://aka.ms/create-language-resource) with the custom question answering feature enabled. Remember your Azure Active Directory ID, Subscription, language resource name you selected when you created the resource.
+
+## Export a project
+
+1. Sign in to the [Language Studio](https://language.azure.com/) with your Azure credentials.
+
+2. Scroll down to the **Answer questions** section and select **Open custom question answering**.
+
+3. Select the project you wish to export > Select **Export** > You'll have the option to export as an **Excel** or **TSV** file.
+
+4. You'll be prompted to save your exported file locally as a zip file.
+
+### Export a project programmatically
+
+To automate the export process, use the [export functionality of the authoring API](./authoring.md#export-project-metadata-and-assets)
+
+## Import a project
+
+1. Sign in to the [Language Studio](https://language.azure.com/) with your Azure credentials.
+
+2. Scroll down to the **Answer questions** section and select **Open custom question answering**.
+
+3. Select **Import** and specify the file type you selected for the export process. Either **Excel**, or **TSV**.
+
+4. Select Choose File and browse to the local zipped copy of your project that you exported previously.
+
+5. Provide a unique name for the project you're importing.
+
+6. Remember that a project that has only been imported still needs to be deployed/published if you want it to be live.
+
+### Import a project programmatically
+
+To automate the import process, use the [import functionality of the authoring API](./authoring.md#import-project)
+
+## Refresh source url
+
+1. Sign in to the [Language Studio](https://language.azure.com/) with your Azure credentials.
+
+2. Scroll down to the **Answer questions** section and select **Open custom question answering**.
+
+3. Select the project that contains the source you want to refresh > select manage sources.
+
+4. We recommend having a backup of your project/question answer pairs prior to running each refresh so that you can always roll back if needed.
+
+5. Select a URL-based source to refresh > Select **Refresh URL**. Only one URL can be refreshed at a time.
+
+### Refresh a URL programmatically
+
+To automate the URL refresh process, use the [update sources functionality of the authoring API](./authoring.md#update-sources)
+
+The update sources example in the [Authoring API docs](./authoring.md#update-sources) shows the syntax for adding a new URL-based source. An example query for an update would be as follows:
+
+|Variable name | Value |
+|--|-|
+| `ENDPOINT` | This value can be found in the **Keys & Endpoint** section when examining your resource from the Azure portal. Alternatively you can find the value in **Language Studio** > **question answering** > **Deploy project** > **Get prediction URL**. An example endpoint is: `https://southcentralus.api.cognitive.microsoft.com/`. If this was your endpoint in the following code sample, you would only need to add the region-specific portion of `southcentral` as the rest of the endpoint path is already present.|
+| `API-KEY` | This value can be found in the **Keys & Endpoint** section when examining your resource from the Azure portal. You can use either Key1 or Key2. Always having two valid keys allows for secure key rotation with zero downtime. Alternatively you can find the value in **Language Studio** > **question answering** > **Deploy project** > **Get prediction URL**. The key value is part of the sample request.|
+| `PROJECT-NAME` | The name of project where you would like to update sources.|
+
+```bash
+curl -X PATCH -H "Ocp-Apim-Subscription-Key: {API-KEY}" -H "Content-Type: application/json" -d '[
+ {
+ "op": "replace",
+ "value": {
+ "displayName": "source5",
+ "sourceKind": "url",
+      "sourceUri": "https://download.microsoft.com/download/7/B/1/7B10C82E-F520-4080-8516-5CF0D803EEE0/surface-book-user-guide-EN.pdf",
+ "refresh": "true"
+ }
+ }
+]' -i 'https://{ENDPOINT}.api.cognitive.microsoft.com/language/query-knowledgebases/projects/{PROJECT-NAME}/sources?api-version=2021-10-01'
+```
+
+## Export questions and answers
+
+It's also possible to export a specific set of question and answer pairs rather than the entire question answering project.
+
+1. Sign in to the [Language Studio](https://language.azure.com/) with your Azure credentials.
+
+2. Scroll down to the **Answer questions** section and select **Open custom question answering**.
+
+3. Select the project that contains the question and answer pairs you want to export.
+
+4. Select **Edit project**.
+
+5. To the right of **Show columns** is an ellipsis (`...`) button > Select the `...` > a dropdown reveals the option to export/import questions and answers.
+
+ Depending on the size of your web browser, you may experience the UI differently. Smaller browsers will see two separate ellipsis buttons.
+
+ > [!div class="mx-imgBorder"]
+ > ![Screenshot of selecting multiple UI ellipsis buttons to get to import/export question and answer pair option](../media/export-import-refresh/export-questions.png)
+
+## Import questions and answers
+
+It's also possible to import a specific set of question and answer pairs rather than an entire question answering project.
+
+1. Sign in to the [Language Studio](https://language.azure.com/) with your Azure credentials.
+
+2. Scroll down to the **Answer questions** section and select **Open custom question answering**.
+
+3. Select the project where you want to import the question and answer pairs.
+
+4. Select **Edit project**.
+
+5. To the right of **Show columns** is an ellipsis (`...`) button > Select the `...` > a dropdown reveals the option to export/import questions and answers.
+
+ Depending on the size of your web browser, you may experience the UI differently. Smaller browsers will see two separate ellipsis buttons.
+
+ > [!div class="mx-imgBorder"]
+ > ![Screenshot of selecting multiple UI ellipsis buttons to get to import/export question and answer pair option](../media/export-import-refresh/export-questions.png)
+
+## Next steps
+
+* [Learn how to use the Authoring API](./authoring.md)
ai-services Manage Knowledge Base https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/language-service/question-answering/how-to/manage-knowledge-base.md
+
+ Title: Manage projects - question answering
+description: Custom question answering allows you to manage projects by providing access to the project settings and content.
+++++ Last updated : 06/03/2022+++
+# Create and manage project settings
+
+Question answering allows you to manage your projects by providing access to the project settings and data sources. If you haven't created a question answering project before we recommend starting with the [getting started article](create-test-deploy.md).
+
+## Prerequisites
+
+> * If you don't have an Azure subscription, [create a free account](https://azure.microsoft.com/free/cognitive-services/) before you begin.
+> * A [Language resource](https://aka.ms/create-language-resource) with the custom question answering feature enabled in the Azure portal. Remember your Azure Active Directory ID, Subscription, and language resource name you selected when you created the resource.
+
+## Create a project
+
+1. Sign in to the [Language Studio](https://language.azure.com/) portal with your Azure credentials.
+
+2. Open the [question answering](https://language.azure.com/languageStudio/questionAnswering/projects) page.
+
+3. Select **create new project**.
+
+4. If you are creating the first project associated with your language resource, you have the option of creating future projects with multiple languages for the same resource. If you choose to explicitly set the language to a single language in your first project, you will not be able to modify this setting later and all subsequent projects for that resource will use the language selected during the creation of your first project.
+
+ > [!div class="mx-imgBorder"]
+ > ![Screenshot of language selection UI.](../media/manage-knowledge-base/choose-language-option.png)
+
+5. Enter basic project settings:
+
+ |Setting| Value|
+ |-||
+ |**Name** | Enter your unique project name here|
+ |**Description** | Enter a description for your project |
+ |**Source language** | Whether or not this value is greyed out, is dependent on the selection that was made when the first project associated with the language resource was created. |
+    |**Default answer** | The default answer the system sends if no answer is found for the question. You can change this at any time in Project settings. |
+
+## Manage projects
+
+From the main question answering page in Language Studio you can:
+
+- Create projects
+- Delete projects
+- Export existing projects for backup or to migrate to other language resources
+- Import projects. (The expected file format is a `.zip` file containing a project that was exported in Excel or TSV format.)
+- Projects can be ordered by either **Last modified** or **Last published** date.
+
+## Manage sources
+
+1. Select **Manage sources** in the left navigation bar.
+
+1. There are three types of sources: **URLS**, **Files**, and **Chitchat**
+
+ |Goal|Action|
+ |--|--|
+ |Add Source|You can add new sources and FAQ content to your project by selecting **Add source** > and choosing **URLs**, **Files**, or **Chitchat**|
+ |Delete Source|You can delete existing sources by selecting to the left of the source, which will cause a blue circle with a checkmark to appear > select the trash can icon. |
+ |Mark content as unstructured|If you want to mark the uploaded file content as unstructured select **Unstructured content** from the dropdown when adding the source.|
+ |Auto-detect| Allow question and answering to attempt to determine if content is structured versus unstructured.|
+
+## Manage large projects
+
+From the **Edit project page** you can:
+
+* **Search project**: You can search the project by typing in the text box at the top of question answer panel. Hit enter to search on the question, answer, or metadata content.
+
+* **Pagination**: Quickly move through data sources to manage large projects. Page numbers appear at the bottom of the UI and are sometimes off screen.
+
+## Delete project
+
+Deleting a project is a permanent operation. It can't be undone. Before deleting a project, you should export the project from the main question answering page within Language Studio.
+
+If you share your project with collaborators and then later delete it, everyone loses access to the project.
+
+## Next steps
+
+* [Configure resources](./configure-resources.md)
ai-services Migrate Knowledge Base https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/language-service/question-answering/how-to/migrate-knowledge-base.md
+
+ Title: Move projects - custom question answering
+description: Moving a custom question answering project requires exporting a project from one resource, and then importing into another.
+++++ Last updated : 1/11/2023++
+# Move projects and question answer pairs
+
+> [!NOTE]
+
+> This article deals with the process to export and move projects and sources from one Language resource to another.
+
+You may want to create copies of your projects or sources for several reasons:
+
+* To implement a backup and restore process
+* Integrate with your CI/CD pipeline
+* When you wish to move your data to different regions
+
+## Prerequisites
+
+* If you don't have an Azure subscription, [create a free account](https://azure.microsoft.com/free/cognitive-services/) before you begin.
+* A [language resource](https://aka.ms/create-language-resource) with the custom question answering feature enabled in the Azure portal. Remember your Azure Active Directory ID, Subscription, and the Language resource name you selected when you created the resource.
+
+## Export a project
+
+Exporting a project allows you to back up all the question answer sources that are contained within a single project.
+
+1. Sign in to the [Language Studio](https://language.azure.com/).
+1. Select the Language resource you want to move a project from.
+1. Go to the Custom Question Answering service. On the **Projects** page, you have the option to export in one of two formats, Excel or TSV. This determines the contents of the file. The file itself is exported as a .zip containing the contents of your project.
+2. You can export only one project at a time.
+
+## Import a project
+
+1. Select the Language resource, which will be the destination for your previously exported project.
+1. Go to Custom Question Answering service. On the **Projects** page, select **Import** and choose the format used when you selected export. Then browse to the local .zip file containing your exported project. Enter a name for your newly imported project and select **Done**.
+
+## Export sources
+
+1. Select the language resource you want to move an individual question answer source from.
+1. Select the project that contains the question and answer source you wish to export.
+1. On the Edit project page, select the ellipsis (`...`) icon to the right of **Enable rich text** in the toolbar. You have the option to export in either Excel or TSV.
+
+## Import question and answers
+
+1. Select the language resource, which will be the destination for your previously exported question and answer source.
+1. Select the project where you want to import a question and answer source.
+1. On the Edit project page, select the ellipsis (`...`) icon to the right of **Enable rich text** in the toolbar. You have the option to import either an Excel or TSV file.
+1. Browse to the local location of the file with the **Choose File** option and select **Done**.
+
+<!-- TODO: Replace Link-->
+### Test
+
+**Test** the question answer source by selecting the **Test** option from the toolbar on the **Edit project** page, which launches the test panel. Learn how to [test your project](../../../qnamaker/How-To/test-knowledge-base.md).
+
+### Deploy
+
+<!-- TODO: Replace Link-->
+**Deploy** the project and create a chat bot. Learn how to [deploy your project](../../../qnamaker/Quickstarts/create-publish-knowledge-base.md#publish-the-knowledge-base).
+
+## Chat logs
+
+There is no way to move chat logs with projects. If diagnostic logs are enabled, chat logs are stored in the associated Azure Monitor resource.
+
+## Next steps
+
+<!-- TODO: Replace Link-->
+
ai-services Migrate Qnamaker To Question Answering https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/language-service/question-answering/how-to/migrate-qnamaker-to-question-answering.md
+
+ Title: Migrate from QnA Maker to Question Answering
+description: Details on features, requirements, and examples for migrating from QnA Maker to Question Answering
++++
+ms.
+ Last updated : 08/08/2022++
+# Migrate from QnA Maker to Question Answering
+
+**Purpose of this document:** This article provides the information you need to successfully migrate applications that use QnA Maker to Question Answering, including clarity on the following:
+
+ - Comparison of features across QnA Maker and Question Answering
+ - Pricing
+ - Simplified Provisioning and Development Experience
+ - Migration phases
+ - Common migration scenarios
+ - Migration steps
+
+**Intended Audience:** Existing QnA Maker customers
+
+> [!IMPORTANT]
+> Question answering, a feature of Azure AI Language, was introduced in November 2021 with several new capabilities including enhanced relevance using a deep learning ranker, precise answers, and end-to-end region support. Each question answering project is equivalent to a knowledge base in QnA Maker. Resource-level settings such as role-based access control (RBAC) are not migrated to the new resource. These resource-level settings must be reconfigured for the language resource after migration:
+>
+> - Automatic RBAC to Language project (not resource)
+> - Automatic enabling of analytics.
+
+You will also need to [re-enable analytics](analytics.md) for the language resource.
+
+## Comparison of features
+
+In addition to a new set of features, Question Answering provides many technical improvements to common features.
+
+|Feature|QnA Maker|Question Answering|Details|
+|-|-|-|-|
+|State-of-the-art transformer-based models|➖|✔️|Turing-based models that enable search of QnA content at web scale.|
+|Prebuilt capability|➖|✔️|Use the power of question answering without having to ingest content or manage resources.|
+|Precise answering|➖|✔️|Question answering supports precise answering with the help of state-of-the-art models.|
+|Smart URL refresh|➖|✔️|Question answering provides a means to refresh ingested content from public sources with a single click.|
+|Q&A over knowledge base (hierarchical extraction)|✔️|✔️| |
+|Active learning|✔️|✔️|Question answering has an improved active learning model.|
+|Alternate questions|✔️|✔️|The improved models in question answering reduce the need to add alternate questions.|
+|Synonyms|✔️|✔️| |
+|Metadata|✔️|✔️| |
+|Question generation (private preview)|➖|✔️|This new feature allows generation of questions over text.|
+|Support for unstructured documents|➖|✔️|Users can now ingest unstructured documents as input sources and query the content for responses.|
+|.NET SDK|✔️|✔️| |
+|API|✔️|✔️| |
+|Unified authoring experience|➖|✔️|A single authoring experience across all Azure AI Language features.|
+|Multi-region support|➖|✔️| |
+
+## Pricing
+
+When you're evaluating a migration to Question Answering, consider the following:
+
+|Component |QnA Maker|Question Answering|Details |
+|-|-|-|-|
+|QnA Maker service cost |✔️ |➖ |The fixed cost per resource per month. Only applicable for QnA Maker. |
+|Question Answering service cost|➖ |✔️ |The Question Answering cost according to the pay-as-you-go model. Only applicable for Question Answering.|
+|Azure Search cost |✔️ |✔️ |Applicable for both QnA Maker and Question Answering. |
+|App Service cost |✔️ |➖ |Only applicable for QnA Maker. This is the biggest cost savings for users moving to Question Answering. |
+
+- Users may select a higher tier with higher capacity, which affects the overall price they pay. It doesn't affect the price of the language component of Custom Question Answering.
+
+- "Text records" in Question Answering refers to the query submitted by the user to the runtime; it is a unit of measure common to all features within the Language service. A longer query may count as more than one text record.
+
+**Example price estimations**
+
+|Usage |Number of resources in QnA Maker|Number of app services in QnA Maker (Tier)|Monthly inference calls in QnA Maker|Search Partitions x search replica (Tier)|Relative cost in Question Answering |
+|-|-|-|-|-|-|
+|High |5 |5(P1) |8M |9x3(S2) |More expensive |
+|High |100 |100(P1) |6M |9x3(S2) |Less expensive |
+|Medium|10 |10(S1) |800K |4x3(S1) |Less expensive |
+|Low |4 |4(B1) |100K |3x3(S1) |Less expensive |
+
+**Summary:** Customers should save costs across the most common configurations, as seen in the relative cost column.
+
+Here you can find the pricing details for [Question Answering](https://azure.microsoft.com/pricing/details/cognitive-services/language-service/) and [QnA Maker](https://azure.microsoft.com/pricing/details/cognitive-services/qna-maker/).
+
+The [Azure pricing calculator](https://azure.microsoft.com/pricing/calculator/) can provide even more detail.
+
+## Simplified Provisioning and Development Experience
+
+With the Language service, QnA Maker customers now benefit from a single service that provides Text Analytics, LUIS and Question Answering as features of the language resource. The Language service provides:
+
+- One Language resource to access all of the above capabilities
+- A single pane of authoring experience across capabilities
+- A unified set of APIs across all the capabilities
+- A cohesive, simpler, and more powerful product
+
+Learn how to get started in [Language Studio](../../language-studio.md)
+
+## Migration Phases
+
+If you or your organization have applications in development or production that use QnA Maker, you should update them to use Question Answering as soon as possible. See the following links for available APIs, SDKs, Bot SDKs and code samples.
+
+Following are the broad migration phases to consider:
+
+![A chart showing the phases of a successful migration](../media/migrate-qnamaker-to-question-answering/migration-phases.png)
+
+The following additional links can help you:
+- [Authoring portal](https://language.cognitive.azure.com/home)
+- [API](authoring.md)
+- [SDK](/dotnet/api/microsoft.azure.cognitiveservices.knowledge.qnamaker)
+- Bot SDK: For bots to use custom question answering, use the [Bot.Builder.AI.QnA](https://www.nuget.org/packages/Microsoft.Bot.Builder.AI.QnA/) SDK. We recommend that customers continue to use this for their bot integrations. Here are some sample usages in the bot's code: [Sample 1](https://github.com/microsoft/BotBuilder-Samples/tree/main/samples/csharp_dotnetcore/48.customQABot-all-features) [Sample 2](https://github.com/microsoft/BotBuilder-Samples/tree/main/samples/csharp_dotnetcore/12.customQABot)
+
+## Common migration scenarios
+
+This topic compares two hypothetical scenarios when migrating from QnA Maker to Question Answering. These scenarios can help you to determine the right set of migration steps to execute for the given scenario.
+
+> [!NOTE]
+> An attempt has been made to ensure these scenarios are representative of real customer migrations; however, individual customer scenarios will of course differ. Also, this article doesn't include pricing details. Visit the [pricing](https://azure.microsoft.com/pricing/details/cognitive-services/language-service/) page for more information.
+
+> [!IMPORTANT]
+> Each question answering project is equivalent to a knowledge base in QnA Maker. Resource level settings such as Role-based access control (RBAC) are not migrated to the new resource. These resource level settings would have to be reconfigured for the language resource post migration. You will also need to [re-enable analytics](analytics.md) for the language resource.
+
+### Migration scenario 1: No custom authoring portal
+
+In the first migration scenario, the customer uses qnamaker.ai as the authoring portal and they want to migrate their QnA Maker knowledge bases to Custom Question Answering.
+
+[Migrate your project from QnA Maker to Question Answering](migrate-qnamaker.md)
+
+Once migrated to Question Answering:
+
+- The resource level settings need to be reconfigured for the language resource.
+- Customer validations should start on the migrated knowledge bases for:
+ - Size validation.
+ - Number of QnA pairs in all knowledge bases to match pre- and post-migration.
+ - Answers for sample questions pre- and post-migration.
+ - Response time for questions answered in v1 vs. v2.
+ - Retaining of prompts.
+- Customers need to establish new thresholds for their knowledge bases in custom question answering, as the confidence score mapping is different when compared to QnA Maker.
+- Customers can use the batch testing tool post-migration to test the newly created project in custom question answering.
+
+Old QnA Maker resources need to be manually deleted.
+
+Here are some [detailed steps](migrate-qnamaker.md) on migration scenario 1.
+
+### Migration scenario 2
+
+In this migration scenario, the customer may have created their own authoring frontend leveraging the QnA Maker authoring APIs or QnA Maker SDKs.
+
+They should perform the following steps to migrate the SDKs:
+
+This [SDK Migration Guide](https://github.com/Azure/azure-sdk-for-net/blob/Azure.AI.Language.QuestionAnswering_1.1.0-beta.1/sdk/cognitivelanguage/Azure.AI.Language.QuestionAnswering/MigrationGuide.md) is intended to assist in the migration to the new Question Answering client library, [Azure.AI.Language.QuestionAnswering](https://www.nuget.org/packages/Azure.AI.Language.QuestionAnswering), from the old one, [Microsoft.Azure.CognitiveServices.Knowledge.QnAMaker](https://www.nuget.org/packages/Microsoft.Azure.CognitiveServices.Knowledge.QnAMaker). It will focus on side-by-side comparisons for similar operations between the two packages.
+
+They should also perform the steps required to migrate their knowledge bases to a new project within the language resource.
+
+Once migrated to Question Answering:
+- The resource level settings need to be reconfigured for the language resource
+- Customer validations should start on the migrated knowledge bases for:
+ - Size validation
+ - Number of QnA pairs in all KBs to match pre and post migration
+ - Confidence score mapping
+ - Answers for sample questions in pre and post migration
+ - Response time for Questions answered in v1 vs v2
+ - Retaining of prompts
+ - Batch testing pre and post migration
+- Old QnA Maker resources need to be manually deleted.
+
+Additionally, for customers who have to migrate and upgrade their bot, the upgraded bot code is published as a NuGet package.
+
+Here you can find some code samples: [Sample 1](https://github.com/microsoft/BotBuilder-Samples/tree/main/samples/csharp_dotnetcore/48.customQABot-all-features) [Sample 2](https://github.com/microsoft/BotBuilder-Samples/tree/main/samples/csharp_dotnetcore/12.customQABot)
+
+Here are [detailed steps on migration scenario 2](https://github.com/Azure/azure-sdk-for-net/blob/Azure.AI.Language.QuestionAnswering_1.1.0-beta.1/sdk/cognitivelanguage/Azure.AI.Language.QuestionAnswering/MigrationGuide.md)
+
+Learn more about the [pre-built API](../../../QnAMaker/How-To/using-prebuilt-api.md)
+
+Learn more about the [Question Answering Get Answers REST API](/rest/api/cognitiveservices/questionanswering/question-answering/get-answers)
+
+## Migration steps
+
+Some of these steps are needed depending on the customer's existing architecture. Review the migration phases given above for more clarity on which steps you need for migration.
+
+![A chart showing the steps of a successful migration](../media/migrate-qnamaker-to-question-answering/migration-steps.png)
ai-services Migrate Qnamaker https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/language-service/question-answering/how-to/migrate-qnamaker.md
+
+ Title: Migrate QnA Maker knowledge bases to custom question answering
+description: Migrate your legacy QnAMaker knowledge bases to custom question answering to take advantage of the latest features.
+++++ Last updated : 01/23/2022+++
+# Migrate from QnA Maker to custom question answering
+
+Custom question answering, a feature of Azure AI Language, was introduced in May 2021 with several new capabilities including enhanced relevance using a deep learning ranker, precise answers, and end-to-end region support. Each custom question answering project is equivalent to a knowledge base in QnA Maker. You can easily migrate knowledge bases from a QnA Maker resource to custom question answering projects within a [language resource](https://aka.ms/create-language-resource). You can also choose to migrate knowledge bases from multiple QnA Maker resources to a specific language resource.
+
+To successfully migrate knowledge bases, **the account performing the migration needs contributor access to the selected QnA Maker and language resource**. When a knowledge base is migrated, the following details are copied to the new custom question answering project:
+
+- QnA pairs, including active learning suggestions.
+- Synonyms and the default answer from the QnA Maker resource.
+- The knowledge base name, which is copied to the project description field.
+
+Resource-level settings such as role-based access control (RBAC) are not migrated to the new resource. These resource-level settings must be reconfigured for the language resource after migration. You will also need to [re-enable analytics](analytics.md) for the language resource.
+
+## Steps to migrate SDKs
+
+This [SDK Migration Guide](https://github.com/Azure/azure-sdk-for-net/blob/Azure.AI.Language.QuestionAnswering_1.1.0-beta.1/sdk/cognitivelanguage/Azure.AI.Language.QuestionAnswering/MigrationGuide.md) is intended to assist in the migration to the new Question Answering client library, [Azure.AI.Language.QuestionAnswering](https://www.nuget.org/packages/Azure.AI.Language.QuestionAnswering), from the old one, [Microsoft.Azure.CognitiveServices.Knowledge.QnAMaker](https://www.nuget.org/packages/Microsoft.Azure.CognitiveServices.Knowledge.QnAMaker). It will focus on side-by-side comparisons for similar operations between the two packages.
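+
+If you work in Python, the equivalent client library is [azure-ai-language-questionanswering](https://pypi.org/project/azure-ai-language-questionanswering/), and the same migration pattern applies. The following is a minimal sketch of a runtime query with the new library; the endpoint, key, project name, and deployment name are placeholder values you would replace with your own.
+
+```python
+# Minimal sketch: query a project migrated from QnA Maker with the new
+# azure-ai-language-questionanswering client library. The endpoint, key,
+# project name, and deployment name below are placeholders.
+from azure.core.credentials import AzureKeyCredential
+from azure.ai.language.questionanswering import QuestionAnsweringClient
+
+endpoint = "https://<your-language-resource>.cognitiveservices.azure.com"
+credential = AzureKeyCredential("<your-language-resource-key>")
+client = QuestionAnsweringClient(endpoint, credential)
+
+# Query the deployed project (the migrated knowledge base).
+output = client.get_answers(
+    question="How long does it take to charge a Surface?",
+    project_name="<your-project-name>",
+    deployment_name="production",
+)
+
+for answer in output.answers:
+    print(f"{answer.confidence:.2f}: {answer.answer}")
+```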
+
+## Steps to migrate knowledge bases
+
+You can follow the steps below to migrate knowledge bases:
+
+1. Create a [language resource](https://aka.ms/create-language-resource) with custom question answering enabled in advance. When you create the language resource in the Azure portal, you will see the option to enable custom question answering. When you select that option and proceed, you will be asked for Azure Search details to save the knowledge bases.
+
+2. If you want to add knowledge bases in multiple languages to your language resource, visit [Language Studio](https://language.azure.com/) to create your first custom question answering project and select the first option as shown below. Language settings for the language resource can be specified only when creating a project. If you want to migrate existing knowledge bases in a single language to the language resource, you can skip this step.
+
+ > [!div class="mx-imgBorder"]
+ > ![Screenshot of choose language UI screen](../media/migrate-qnamaker/choose-language.png)
+
+3. Visit [https://www.qnamaker.ai](https://www.qnamaker.ai) and select **Start Migration** in the migration note on the knowledge base page. A dialog box will open to initiate the migration.
+
+ :::image type="content" source="../media/migrate-qnamaker/start-migration.png" alt-text="Start Migration button that appears in a banner on qnamaker.ai" lightbox="../media/migrate-qnamaker/start-migration.png":::
+
+4. Fill in the details required to initiate migration. The tenant will be auto-selected. You can choose to switch the tenant.
+
+ > [!div class="mx-imgBorder"]
+ > ![Migrate QnAMaker with red selection box around the tenant selection option](../media/migrate-qnamaker/tenant-selection.png)
+
+5. Select the QnA Maker resource, which contains the knowledge bases to be migrated.
+
+ > [!div class="mx-imgBorder"]
+ > ![Migrate QnAMaker with red selection box around the QnAMaker resource selection option](../media/migrate-qnamaker/select-resource.png)
+
+6. Select the language resource to which you want to migrate the knowledge bases. You will only be able to see those language resources that have custom question answering enabled. The language setting for the language resource is displayed in the options. You won't be able to migrate knowledge bases in multiple languages from QnA Maker resources to a language resource if its language setting is not specified.
+
+ > [!div class="mx-imgBorder"]
+ > ![Migrate QnAMaker with red selection box around the language resource option currently selected resource contains the information that language is unspecified](../media/migrate-qnamaker/language-setting.png)
+
+ If you want to migrate knowledge bases in multiple languages to the language resource, you must enable the multiple language setting when creating the first custom question answering project for the language resource. You can do so by following the instructions in step #2. **If the language setting for the language resource is not specified, it is assigned the language of the selected QnA Maker resource**.
+
+7. Select all the knowledge bases that you wish to migrate > select **Next**.
+
+ > [!div class="mx-imgBorder"]
+ > ![Migrate QnAMaker with red selection box around the knowledge base selection option with a drop-down displaying three knowledge base names](../media/migrate-qnamaker/select-knowledge-bases.png)
+
+8. You can review the knowledge bases you plan to migrate. There could be some validation errors in project names as we follow stricter validation rules for custom question answering projects. To resolve these errors occurring due to invalid characters, select the checkbox (in red) and select **Next**. This is a one-click method to replace the problematic characters in the name with the accepted characters. If there's a duplicate, a new unique project name is generated by the system.
+
+ > [!CAUTION]
+ > If you migrate a knowledge base with the same name as a project that already exists in the target language resource, **the content of the project will be overridden** by the content of the selected knowledge base.
+
+ > [!div class="mx-imgBorder"]
+ > ![Screenshot of an error message starting project names can't contain special characters](../media/migrate-qnamaker/migration-kb-name-validation.png)
+
+9. After resolving the validation errors, select **Start migration**
+
+ > [!div class="mx-imgBorder"]
+ > ![Screenshot with special characters removed](../media/migrate-qnamaker/migration-kb-name-validation-success.png)
+
+10. It will take a few minutes for the migration to occur. Do not cancel the migration while it is in progress. You can navigate to the migrated projects within the [Language Studio](https://language.azure.com/) post migration.
+
+ > [!div class="mx-imgBorder"]
+ > ![Screenshot of successfully migrated knowledge bases with information that you can publish by using Language Studio](../media/migrate-qnamaker/migration-success.png)
+
+ If any knowledge bases fail to migrate to custom question answering projects, an error will be displayed. The most common migration errors occur when:
+
+ - Your source and target resources are invalid.
+ - You are trying to migrate an empty knowledge base (KB).
+ - You have reached the limit for an Azure Search instance linked to your target resources.
+
+ > [!div class="mx-imgBorder"]
+ > ![Screenshot of a failed migration with an example error](../media/migrate-qnamaker/migration-errors.png)
+
+ Once you resolve these errors, you can rerun the migration.
+
+11. The migration will only copy the test instances of your knowledge bases. Once your migration is complete, you will need to manually deploy the knowledge bases to copy the test index to the production index.
+
+## Next steps
+
+- Learn how to re-enable analytics with [Azure Monitor diagnostic logs](analytics.md).
ai-services Network Isolation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/language-service/question-answering/how-to/network-isolation.md
+
+ Title: Network isolation and Private Link - question answering
+description: Users can restrict public access to question answering resources.
+++++ Last updated : 11/02/2021+++
+# Network isolation and private endpoints
+
+The steps below describe how to restrict public access to question answering resources as well as how to enable Azure Private Link. Protect an Azure AI services resource from public access by [configuring the virtual network](../../../cognitive-services-virtual-networks.md?tabs=portal).
+
+## Private Endpoints
+
+Azure Private Endpoint is a network interface that connects you privately and securely to a service powered by Azure Private Link. Question answering supports creating private endpoints to the Azure Cognitive Search service.
+
+Private endpoints are provided by [Azure Private Link](../../../../private-link/private-link-overview.md), as a separate service. For more information about costs, see the [pricing page.](https://azure.microsoft.com/pricing/details/private-link/)
+
+## Steps to enable private endpoint
+
+1. Assign the *Contributor* role to the language resource (depending on the context this may appear as a Text Analytics resource) in the Azure Search Service instance. This operation requires *Owner* access to the subscription. Go to the Identity tab in the service resource to get the identity.
+
+> [!div class="mx-imgBorder"]
+> ![Text Analytics Identity](../../../QnAMaker/media/qnamaker-reference-private-endpoints/private-endpoints-identity.png)
+
+2. Add the above identity as *Contributor* by going to the Azure Search Service IAM tab.
+
+![Managed service IAM](../../../QnAMaker/media/qnamaker-reference-private-endpoints/private-endpoint-access-control.png)
+
+3. Select *Add role assignments*, add the identity, and select *Save*.
+
+![Managed role assignment](../../../QnAMaker/media/qnamaker-reference-private-endpoints/private-endpoint-role-assignment.png)
+
+4. Now, go to the *Networking* tab in the Azure Search Service instance and switch Endpoint connectivity data from *Public* to *Private*. This operation is a long-running process and can take up to 30 minutes to complete.
+
+![Managed Azure search networking](../../../QnAMaker/media/qnamaker-reference-private-endpoints/private-endpoint-networking.png)
+
+5. Go to the *Networking* tab of the language resource and, under *Allow access from*, select the *Selected Networks and private endpoints* option, and then select *Save*.
+
+> [!div class="mx-imgBorder"]
+> ![Text Analytics networking](../../../QnAMaker/media/qnamaker-reference-private-endpoints/private-endpoint-networking-custom-qna.png)
+
+This will establish a private endpoint connection between language resource and Azure Cognitive Search service instance. You can verify the Private endpoint connection on the *Networking* tab of the Azure Cognitive Search service instance. Once the whole operation is completed, you are good to use your language resource with question answering enabled.
+
+![Managed Networking Service](../../../QnAMaker/media/qnamaker-reference-private-endpoints/private-endpoint-networking-3.png)
+
+## Support details
+ * We don't support changes to Azure Cognitive Search service once you enable private access to your language resources. If you change the Azure Cognitive Search service via 'Features' tab after you have enabled private access, the language resource will become unusable.
+
+ * After establishing Private Endpoint Connection, if you switch Azure Cognitive Search Service Networking to 'Public', you won't be able to use the language resource. Azure Search Service Networking needs to be 'Private' for the Private Endpoint Connection to work.
+
+## Restrict access to Cognitive Search resource
+
+Follow the steps below to restrict public access to question answering language resources. Protect an Azure AI services resource from public access by [configuring the virtual network](../../../cognitive-services-virtual-networks.md?tabs=portal).
+
+After restricting access to an Azure AI services resource based on VNet, to browse projects in Language Studio from your on-premises network or your local browser:
+- Grant access to [on-premises network](../../../cognitive-services-virtual-networks.md?tabs=portal#configuring-access-from-on-premises-networks).
+- Grant access to your [local browser/machine](../../../cognitive-services-virtual-networks.md?tabs=portal#managing-ip-network-rules).
+- Add the **public IP address of the machine under the Firewall** section of the **Networking** tab. By default `portal.azure.com` shows the current browsing machine's public IP (select this entry) and then select **Save**.
+
+ > [!div class="mx-imgBorder"]
+ > [ ![Screenshot of firewall and virtual networks configuration UI]( ../../../qnamaker/media/network-isolation/firewall.png) ]( ../../../qnamaker/media/network-isolation/firewall.png#lightbox)
ai-services Prebuilt https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/language-service/question-answering/how-to/prebuilt.md
+
+ Title: Prebuilt API - question answering
+
+description: Use the question answering Prebuilt API to ask and receive answers to questions without having to create a project.
+++++ Last updated : 11/03/2021++
+# Prebuilt API
+
+The question answering **prebuilt API** provides the capability to answer questions based on a passage of text without having to create projects, maintain question and answer pairs, or incur costs for underutilized infrastructure. This functionality is provided as an API and can be used to meet question answering needs without having to learn the details of question answering.
+
+Given a user query and a block of text or a passage, the API returns an answer and a precise answer (if available).
+
+## Example API usage
+
+Imagine that you have one or more blocks of text from which you would like to get answers for a given question. Normally you would have had to create as many sources as the number of blocks of text. However, now with the prebuilt API you can query the blocks of text without having to define content sources in a project.
+
+Some other scenarios where this API can be used are:
+
+* You are developing an ebook reader app for end users, which allows them to highlight text, enter a question and find answers over a highlighted passage of text.
+* A browser extension that allows users to ask a question over the content being currently displayed on the browser page.
+* A health bot that takes queries from users and provides answers based on the medical content that the bot identifies as most relevant to the user query.
+
+Below is an example of a sample request:
+
+## Sample request
+
+```
+POST https://{Unique-to-your-endpoint}.api.cognitive.microsoft.com/language/:query-text
+```
+
+### Sample query over a single block of text
+
+Request Body
+
+```json
+{
+ "parameters": {
+ "Endpoint": "{Endpoint}",
+ "Ocp-Apim-Subscription-Key": "{API key}",
+ "Content-Type": "application/json",
+ "api-version": "2021-10-01",
+ "stringIndexType": "TextElements_v8",
+ "textQueryOptions": {
+ "question": "how long it takes to charge surface?",
+ "records": [
+ {
+ "id": "1",
+ "text": "Power and charging. It takes two to four hours to charge the Surface Pro 4 battery fully from an empty state. It can take longer if youΓÇÖre using your Surface for power-intensive activities like gaming or video streaming while youΓÇÖre charging it."
+ },
+ {
+ "id": "2",
+ "text": "You can use the USB port on your Surface Pro 4 power supply to charge other devices, like a phone, while your Surface charges. The USB port on the power supply is only for charging, not for data transfer. If you want to use a USB device, plug it into the USB port on your Surface."
+ }
+ ],
+ "language": "en"
+ }
+ }
+}
+```
+
+## Sample response
+
+In the above request body, we query over a single block of text. A sample response received for the above query is shown below:
+
+```json
+{
+"responses": {
+ "200": {
+ "headers": {},
+ "body": {
+ "answers": [
+ {
+ "answer": "Power and charging. It takes two to four hours to charge the Surface Pro 4 battery fully from an empty state. It can take longer if youΓÇÖre using your Surface for power-intensive activities like gaming or video streaming while youΓÇÖre charging it.",
+ "confidenceScore": 0.93,
+ "id": "1",
+ "answerSpan": {
+ "text": "two to four hours",
+ "confidenceScore": 0,
+ "offset": 28,
+ "length": 45
+ },
+ "offset": 0,
+ "length": 224
+ },
+ {
+ "answer": "It takes two to four hours to charge the Surface Pro 4 battery fully from an empty state. It can take longer if youΓÇÖre using your Surface for power-intensive activities like gaming or video streaming while youΓÇÖre charging it.",
+ "confidenceScore": 0.92,
+ "id": "1",
+ "answerSpan": {
+ "text": "two to four hours",
+ "confidenceScore": 0,
+ "offset": 8,
+ "length": 25
+ },
+ "offset": 20,
+ "length": 224
+ },
+ {
+ "answer": "It can take longer if youΓÇÖre using your Surface for power-intensive activities like gaming or video streaming while youΓÇÖre charging it.",
+ "confidenceScore": 0.05,
+ "id": "1",
+ "answerSpan": null,
+ "offset": 110,
+ "length": 244
+ }
+ ]
+ }
+ }
+ }
+}
+```
+
+Multiple answers are received as part of the API response. Each answer has a confidence score that indicates the overall relevance of the answer. The answer span represents whether a potential short answer was also detected. You can use the confidence score to determine which answers to return in response to the query.
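+
+If you prefer to call the prebuilt API from code rather than raw REST, the following is a minimal sketch using the Python client library, assuming the azure-ai-language-questionanswering package is installed. The endpoint and key values are placeholders, and attribute names such as `confidence` and `short_answer` follow that library's object model.
+
+```python
+# Minimal sketch: query passages of text with the prebuilt (text) API using
+# the azure-ai-language-questionanswering client library. Endpoint and key
+# are placeholders for your language resource values.
+from azure.core.credentials import AzureKeyCredential
+from azure.ai.language.questionanswering import QuestionAnsweringClient
+
+client = QuestionAnsweringClient(
+    "https://<your-language-resource>.cognitiveservices.azure.com",
+    AzureKeyCredential("<your-language-resource-key>"),
+)
+
+documents = [
+    "Power and charging. It takes two to four hours to charge the Surface Pro 4 "
+    "battery fully from an empty state.",
+    "You can use the USB port on your Surface Pro 4 power supply to charge other "
+    "devices, like a phone, while your Surface charges.",
+]
+
+# get_answers_from_text queries the passages directly; no project is required.
+output = client.get_answers_from_text(
+    question="How long does it take to charge a Surface?",
+    text_documents=documents,
+    language="en",
+)
+
+for answer in output.answers:
+    short = answer.short_answer.text if answer.short_answer else None
+    print(f"{answer.confidence:.2f} | short answer: {short} | {answer.answer}")
+```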
+
+## Prebuilt API limits
+
+### API call limits
+If you need to use larger documents than the limit allows, you can break the text into smaller chunks before sending them to the API, as shown in the sketch after the limits below. In this context, a document is a single string of text characters.
+
+These numbers represent the **per individual API call limits**:
+
+* Number of documents: 5.
+* Maximum size of a single document: 5,120 characters.
+* Maximum three responses per document.
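+
+For illustration, here's one way you might split a long document into chunks that stay under the per-document character limit before sending them to the API. This is only a sketch; splitting on sentence boundaries is an assumption about what works well for typical content, not a requirement of the API.
+
+```python
+# Sketch: break a long document into chunks under the 5,120-character limit,
+# then batch them five at a time to respect the per-call document limit.
+MAX_CHARS = 5120
+
+def chunk_document(text, max_chars=MAX_CHARS):
+    chunks, current = [], ""
+    for sentence in text.replace("\n", " ").split(". "):
+        sentence = sentence.strip()
+        if not sentence:
+            continue
+        candidate = f"{current} {sentence}".strip()
+        if len(candidate) <= max_chars:
+            current = candidate
+        else:
+            if current:
+                chunks.append(current)
+            # A single sentence longer than the limit is hard-split.
+            while len(sentence) > max_chars:
+                chunks.append(sentence[:max_chars])
+                sentence = sentence[max_chars:]
+            current = sentence
+    if current:
+        chunks.append(current)
+    return chunks
+
+long_text = "It takes two to four hours to charge the battery fully. " * 300
+chunks = chunk_document(long_text)
+batches = [chunks[i:i + 5] for i in range(0, len(chunks), 5)]
+print(f"{len(chunks)} chunks across {len(batches)} API calls")
+```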
+
+### Language codes supported
+The following language codes are supported by the prebuilt API. These language codes are in accordance with the [ISO 639-1 codes standard](https://en.wikipedia.org/wiki/List_of_ISO_639-1_codes).
+
+Language code|Language
+-|-
+af|Afrikaans
+am|Amharic
+ar|Arabic
+as|Assamese
+az|Azerbaijani
+ba|Bashkir
+be|Belarusian
+bg|Bulgarian
+bn|Bengali
+ca|Catalan, Valencian
+ckb|Central Kurdish
+cs|Czech
+cy|Welsh
+da|Danish
+de|German
+el|Greek, Modern (1453–)
+en|English
+eo|Esperanto
+es|Spanish, Castilian
+et|Estonian
+eu|Basque
+fa|Persian
+fi|Finnish
+fr|French
+ga|Irish
+gl|Galician
+gu|Gujarati
+he|Hebrew
+hi|Hindi
+hr|Croatian
+hu|Hungarian
+hy|Armenian
+id|Indonesian
+is|Icelandic
+it|Italian
+ja|Japanese
+ka|Georgian
+kk|Kazakh
+km|Central Khmer
+kn|Kannada
+ko|Korean
+ky|Kirghiz, Kyrgyz
+la|Latin
+lo|Lao
+lt|Lithuanian
+lv|Latvian
+mk|Macedonian
+ml|Malayalam
+mn|Mongolian
+mr|Marathi
+ms|Malay
+mt|Maltese
+my|Burmese
+ne|Nepali
+nl|Dutch, Flemish
+nn|Norwegian Nynorsk
+no|Norwegian
+or|Oriya
+pa|Punjabi, Panjabi
+pl|Polish
+ps|Pashto, Pushto
+pt|Portuguese
+ro|Romanian, Moldavian, Moldovan
+ru|Russian
+sa|Sanskrit
+sd|Sindhi
+si|Sinhala, Sinhalese
+sk|Slovak
+sl|Slovenian
+sq|Albanian
+sr|Serbian
+sv|Swedish
+sw|Swahili
+ta|Tamil
+te|Telugu
+tg|Tajik
+th|Thai
+tl|Tagalog
+tr|Turkish
+tt|Tatar
+ug|Uighur, Uyghur
+uk|Ukrainian
+ur|Urdu
+uz|Uzbek
+vi|Vietnamese
+yi|Yiddish
+zh|Chinese
+
+## Prebuilt API reference
+
+Visit the [full prebuilt API samples](https://github.com/Azure/azure-rest-api-specs/blob/main/specification/cognitiveservices/data-plane/Language/stable/2021-10-01/examples/questionanswering/SuccessfulQueryText.json) documentation to understand the input and output parameters required for calling the API.
ai-services Smart Url Refresh https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/language-service/question-answering/how-to/smart-url-refresh.md
+
+ Title: Smart URL refresh - question answering
+
+description: Use the question answering smart URL refresh feature to keep your project up to date.
+++++ Last updated : 02/28/2022++
+# Use smart URL refresh with a project
+
+Custom question answering gives you the ability to refresh your source contents by getting the latest content from a source URL and updating the corresponding project with one click. The service will ingest content from the URL and either create, merge, or delete question-and-answer pairs in the project.
+
+This functionality is provided to support scenarios where the content in the source URL changes frequently, such as the FAQ page of a product that's updated often. The service will refresh the source and update the project to the latest content while retaining any manual edits made previously.
+
+> [!NOTE]
+> This feature is only applicable to URL sources, and they must be refreshed individually, not in bulk.
+
+> [!IMPORTANT]
+> This feature is only available in the `2021-10-01` version of the Language API.
+
+## How it works
+
+If you have a project with a URL source that has changed, you can trigger a smart URL refresh to keep your project up to date. The service will scan the URL for updated content and generate QnA pairs. It will add any new QnA pairs to your project and also delete any pairs that have disappeared from the source (with exceptions&mdash;see below). It also merges old and new QnA pairs in some situations (see below).
+
+> [!IMPORTANT]
+> Because smart URL refresh can involve deleting old content from your project, you may want to [create a backup](./export-import-refresh.md) of your project before you do any refresh operations.
+
+You can trigger a URL refresh in Language Studio by opening your project, selecting the source in the **Manage sources** list, and selecting **Refresh URL**.
++
+You can also trigger a refresh programmatically using the REST API. See the **[Update Sources](/rest/api/cognitiveservices/questionanswering/question-answering-projects/update-sources)** reference documentation for parameters and a sample request.
+
+## Smart refresh behavior
+
+When the user refreshes content using this feature, the project's QnA pairs may be updated in the following ways:
+
+### Delete old pair
+
+If the content of the URL is updated so that an existing QnA pair from the old content of the URL is no longer found in the source, that pair is deleted from the refreshed project. For example, if a QnA pair Q1A1 existed in the old project, but after refreshing, there's no A1 answer generated by the newly refreshed source, then the pair Q1A1 is considered outdated and is dropped from the project altogether.
+
+However, if the old QnA pairs have been manually edited in the authoring portal, they won't be deleted.
+
+### Add new pair
+
+If the content of the URL is updated in such a way that a new QnA pair exists which didn't exist in the old project, then the pair is added to the project. For example, if the service finds that a new answer A2 can be generated, then the QnA pair Q2A2 is inserted into the project.
+
+### Merge pairs
+
+If the answer of a new QnA pair matches the answer of an old QnA pair, the two pairs are merged. The new pair's question is added as an alternate question to the old QnA pair. For example, consider Q3A3 exists in the old source. When you refresh the source, a new QnA pair Q3'A3 is introduced. In that case, the two QnA pairs are merged: Q3' is added to Q3 as an alternate question.
+
+If the old QnA pair has a metadata value, that data is retained and persisted in the newly merged pair.
+
+If the old QnA pair has follow-up prompts associated with it, then the following scenarios may arise:
+* If the prompt attached to the old pair is from the source being refreshed, then it's deleted, and the prompt of the new pair (if any exists) is appended to the newly merged QnA pair.
+* If the prompt attached to the old pair is from a different source, then it's maintained as-is and the prompt from the new question (if any exists) is appended to the newly merged QnA pair.
++
+#### Merge example
+See the following example of a merge operation with differing questions and prompts:
+
+|Source iteration|Question |Answer |Prompts |
+||||--|
+|old |"What is the new HR policy?" | "You may have to choose among the following options:" | P1, P2 |
+|new |"What is the new payroll policy?" | "You may have to choose among the following options:" | P3, P4 |
+
+The prompts P1 and P2 come from the original source and are different from prompts P3 and P4 of the new QnA pair. They both have the same answer, `You may have to choose among the following options:`, but it leads to different prompts. In this case, the resulting QnA pair would look like this:
+
+|Question |Answer |Prompts |
+|||--|
+|"What is the new HR policy?" </br>(alternate question: "What is the new payroll policy?") | "You may have to choose among the following options:" | P3, P4 |
+
+#### Duplicate answers scenario
+
+When the original source has two or more QnA pairs with the same answer (as in, Q1A1 and Q2A1), the merge behavior may be more complex.
+
+If these two QnA pairs have individual prompts attached to them (for example, Q1A1+P1 and Q2A1+P2), and the refreshed source content has a new QnA pair generated with the same answer A1 and a new prompt P3 (Q1'A1+P3), then the new question will be added as an alternate question to the original pairs (as described above). But all of the original attached prompts will be overwritten by the new prompt. So the final pair set will look like this:
+
+|Question |Answer |Prompts |
+|||--|
+|Q1 </br>(alternate question: Q1') | A1 | P3 |
+|Q2 </br>(alternate question: Q1') | A1 | P3 |
+
+## Next steps
+
+* [Question answering quickstart](../quickstart/sdk.md?pivots=studio)
+* [Update Sources API reference](/rest/api/cognitiveservices/questionanswering/question-answering-projects/update-sources)
ai-services Troubleshooting https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/language-service/question-answering/how-to/troubleshooting.md
+
+ Title: Troubleshooting - question answering
+description: The curated list of the most frequently asked questions regarding question answering will help you adopt the feature faster and with better results.
+++++ Last updated : 11/02/2021++
+# Troubleshooting for question answering
+
+The curated list of the most frequently asked questions regarding question answering will help you adopt the feature faster and with better results.
+
+## Manage predictions
+
+<details>
+<summary><b>How can I improve the throughput performance for query predictions?</b></summary>
+
+**Answer**:
+Throughput performance issues indicate you need to scale up your Cognitive Search. Consider adding a replica to your Cognitive Search to improve performance.
+
+Learn more about [pricing tiers](../Concepts/azure-resources.md).
+</details>
+
+## Manage your project
+
+<details>
+<summary><b>Why is my URL(s)/file(s) not extracting question-answer pairs?</b></summary>
+
+**Answer**:
+It's possible that question answering can't auto-extract some question-and-answer (QnA) content from valid FAQ URLs. In such cases, you can paste the QnA content in a .txt file and see if the tool can ingest it. Alternately, you can editorially add content to your project through the [Language Studio portal](https://language.azure.com).
+
+</details>
+
+<details>
+<summary><b>How large a project can I create?</b></summary>
+
+**Answer**:
+The size of the project depends on the SKU of Azure Search you choose when creating the question answering service. Read [here](../concepts/azure-resources.md) for more details.
+
+</details>
+
+<details>
+<summary><b>How do I share a project with others?</b></summary>
+
+**Answer**:
+Sharing works at the level of the language resource, that is, all projects associated with a language resource can be shared.
+</details>
+
+<details>
+<summary><b>Can you share a project with a contributor that is not in the same Azure Active Directory tenant, to modify a project?</b></summary>
+
+**Answer**:
+Sharing is based on Azure role-based access control (Azure RBAC). If you can share _any_ resource in Azure with another user, you can also share question answering.
+
+</details>
+
+<details>
+<summary><b>Can you assign read/write rights to 5 different users so each of them can access only 1 question answering project?</b></summary>
+
+**Answer**:
+You can share an entire language resource, not individual projects.
+
+</details>
+
+<details>
+<summary><b>The updates that I made to my project are not reflected in production. Why not?</b></summary>
+
+**Answer**:
+Every edit operation, whether in a table update, test, or setting, needs to be saved before it can be deployed. Be sure to select **Save** after making changes and then re-deploy your project for those changes to be reflected in production.
+
+</details>
+
+<details>
+<summary><b>Does the project support rich data or multimedia?</b></summary>
+
+**Answer**:
+
+#### Multimedia auto-extraction for files and URLs
+
+* URLs - limited HTML-to-Markdown conversion capability.
+* Files - not supported
+
+#### Answer text in markdown
+
+Once QnA pairs are in the project, you can edit an answer's markdown text to include links to media available from public URLs.
+
+</details>
+
+<details>
+<summary><b>Does question answering support non-English languages?</b></summary>
+
+**Answer**:
+See more details about [supported languages](../language-support.md).
+
+If you have content from multiple languages, be sure to create a separate project for each language.
+
+</details>
+
+## Manage service
+
+<details>
+<summary><b>I deleted my existing Search service. How can I fix this?</b></summary>
+
+**Answer**:
+If you delete an Azure Cognitive Search index, the operation is final and the index cannot be recovered.
+
+</details>
+
+<details>
+<summary><b>I deleted my `testkbv2` index in my Search service. How can I fix this?</b></summary>
+
+**Answer**:
+In case you deleted the `testkbv2` index in your Search service, you can restore the data from the last published KB. Use the recovery tool [RestoreTestKBIndex](https://github.com/pchoudhari/QnAMakerBackupRestore/tree/master/RestoreTestKBFromProd) available on GitHub.
+
+</details>
+
+<details>
+<summary><b>Can I use the same Azure Cognitive Search resource for projects using multiple languages?</b></summary>
+
+**Answer**:
+To use multiple languages and multiple projects, the user has to create a project for each language, and the first project created for the language resource has to select the option **I want to select the language when I create a project in this resource**. This will create a separate Azure search service per language.
+
+</details>
+
+## Integrate with other services including Bots
+
+<details>
+<summary><b>Do I need to use Bot Framework in order to use question answering?</b></summary>
+
+**Answer**:
+No, you do not need to use the [Bot Framework](https://github.com/Microsoft/botbuilder-dotnet) with question answering. However, question answering is offered as one of several templates in [Azure AI Bot Service](/azure/bot-service/). Bot Service enables rapid intelligent bot development through Microsoft Bot Framework, and it runs in a serverless environment.
+
+</details>
+
+<details>
+<summary><b>How can I create a new bot with question answering?</b></summary>
+
+**Answer**:
+Follow the instructions in [this](../tutorials/bot-service.md) documentation to create your Bot with Azure AI Bot Service.
+
+</details>
+
+<details>
+<summary><b>How do I use a different project with an existing Azure AI Bot Service?</b></summary>
+
+**Answer**:
+You need to have the following information about your project:
+
+* Project ID.
+* Project's published endpoint custom subdomain name, known as `host`, found on **Settings** page after you publish.
+* Project's published endpoint key - found on **Settings** page after you publish.
+
+With this information, go to your bot's app service in the Azure portal. Under **Settings -> Configuration -> Application settings**, change those values.
+
+The project's endpoint key is labeled `QnAAuthkey` in the ABS service.
+
+</details>
+
+<details>
+<summary><b>Can two or more client applications share a project?</b></summary>
+
+**Answer**:
+Yes, the project can be queried from any number of clients.
+
+</details>
+
+<details>
+<summary><b>How do I embed question answering in my website?</b></summary>
+
+**Answer**:
+Follow these steps to embed the question answering service as a web-chat control in your website:
+
+1. Create your FAQ bot by following the instructions [here](../tutorials/bot-service.md).
+2. Enable the web chat by following the steps [here](../tutorials/bot-service.md#integrate-the-bot-with-channels).
+
+</details>
+
+## Data storage
+
+<details>
+<summary><b>What data is stored and where is it stored?</b></summary>
+
+**Answer**:
+
+When you created your language resource for question answering, you selected an Azure region. Your projects and log files are stored in this region.
+
+</details>
ai-services Language Support https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/language-service/question-answering/language-support.md
+
+ Title: Language support - custom question answering
+
+description: A list of culture, natural languages supported by custom question answering for your project. Do not mix languages in the same project.
++++
+recommendations: false
+++ Last updated : 11/02/2021+++
+# Language support for custom question answering and projects
+
+This article describes the language support options for custom question answering enabled resources and projects.
+
+In custom question answering, you can either select the language each time you add a new project to a resource, allowing multiple language support, or select a language that will apply to all future projects for a resource.
+
+## Supporting multiple languages in one custom question answering enabled resource
+
+> [!div class="mx-imgBorder"]
+> ![Multi-lingual project selection](./media/language-support/choose-language.png)
+
+* When you are creating the first project in your service, you get the choice to pick the language each time you create a new project. Select this option to create projects belonging to different languages within one service.
+* The language setting option cannot be modified for the service once the first project is created.
+* If you enable multiple languages for the project, then instead of having one test index for the service, you will have one test index per project.
+
+## Supporting multiple languages in one project
+
+If you need to support a project system that includes several languages, you can:
+
+* Use the [Translator service](../../translator/translator-overview.md) to translate a question into a single language before sending the question to your project, as shown in the sketch after this list. This allows you to focus on the quality of a single language and the quality of the alternate questions and answers.
+* Create a custom question answering enabled language resource, and a project inside that resource, for every language. This allows you to manage separate alternate questions and answer text that is more nuanced for each language. This gives you much more flexibility but requires a much higher maintenance cost when the questions or answers change across all languages.
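+
+As an illustration of the first approach, the following sketch translates an incoming question to English with the Translator REST API before querying a single-language (English) project. The keys, region, endpoint, and project and deployment names are placeholders, and the query step assumes the azure-ai-language-questionanswering client library is installed.
+
+```python
+# Sketch: translate a user's question to the project's language with the
+# Translator REST API, then query a single-language project.
+# Keys, region, endpoint, and project/deployment names are placeholders.
+import requests
+from azure.core.credentials import AzureKeyCredential
+from azure.ai.language.questionanswering import QuestionAnsweringClient
+
+def translate_to_english(question):
+    response = requests.post(
+        "https://api.cognitive.microsofttranslator.com/translate",
+        params={"api-version": "3.0", "to": "en"},
+        headers={
+            "Ocp-Apim-Subscription-Key": "<translator-key>",
+            "Ocp-Apim-Subscription-Region": "<translator-region>",
+            "Content-Type": "application/json",
+        },
+        json=[{"text": question}],
+    )
+    response.raise_for_status()
+    return response.json()[0]["translations"][0]["text"]
+
+client = QuestionAnsweringClient(
+    "https://<your-language-resource>.cognitiveservices.azure.com",
+    AzureKeyCredential("<language-resource-key>"),
+)
+
+# Example question in Spanish, translated to English before the query.
+translated = translate_to_english("¿Cuánto tarda en cargarse la Surface?")
+output = client.get_answers(
+    question=translated,
+    project_name="<english-project-name>",
+    deployment_name="production",
+)
+print(output.answers[0].answer if output.answers else "No answer found")
+```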
+
+## Single language per resource
+
+If you **select the option to set the language used by all projects associated with the resource**, consider the following:
+* A language resource, and all its projects, will support one language only.
+* The language is explicitly set when the first project of the service is created.
+* The language can't be changed for any other projects associated with the resource.
+* The language is used by the Cognitive Search service (ranker #1) and Custom question answering (ranker #2) to generate the best answer to a query.
+
+## Languages supported
+
+The following list contains the languages supported for a question answering resource.
+
+| Language |
+|--|
+| Arabic |
+| Armenian |
+| Bangla |
+| Basque |
+| Bulgarian |
+| Catalan |
+| Chinese_Simplified |
+| Chinese_Traditional |
+| Croatian |
+| Czech |
+| Danish |
+| Dutch |
+| English |
+| Estonian |
+| Finnish |
+| French |
+| Galician |
+| German |
+| Greek |
+| Gujarati |
+| Hebrew |
+| Hindi |
+| Hungarian |
+| Icelandic |
+| Indonesian |
+| Irish |
+| Italian |
+| Japanese |
+| Kannada |
+| Korean |
+| Latvian |
+| Lithuanian |
+| Malayalam |
+| Malay |
+| Norwegian |
+| Polish |
+| Portuguese |
+| Punjabi |
+| Romanian |
+| Russian |
+| Serbian_Cyrillic |
+| Serbian_Latin |
+| Slovak |
+| Slovenian |
+| Spanish |
+| Swedish |
+| Tamil |
+| Telugu |
+| Thai |
+| Turkish |
+| Ukrainian |
+| Urdu |
+| Vietnamese |
+
+## Query matching and relevance
+Custom question answering depends on [Azure Cognitive Search language analyzers](/rest/api/searchservice/language-support) for providing results.
+
+While the Azure Cognitive Search capabilities are on par for supported languages, question answering has an additional ranker that sits above the Azure search results. In this ranker model, we use some special semantic and word-based features in the following languages.
+
+|Languages with additional ranker|
+|--|
+|Chinese|
+|Czech|
+|Dutch|
+|English|
+|French|
+|German|
+|Hungarian|
+|Italian|
+|Japanese|
+|Korean|
+|Polish|
+|Portuguese|
+|Spanish|
+|Swedish|
+
+This additional ranking is an internal capability of the custom question answering ranker.
+
+## Next steps
++
ai-services Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/language-service/question-answering/overview.md
+
+ Title: What is question answering?
+description: Question answering is a cloud-based Natural Language Processing (NLP) service that easily creates a natural conversational layer over your data. It can be used to find the most appropriate answer for any given natural language input, from your custom project.
++++
+recommendations: false
+ Last updated : 06/03/2022
+keywords: "qna maker, low code chat bots, multi-turn conversations"
+++
+# What is question answering?
+
+Question answering provides cloud-based Natural Language Processing (NLP) that allows you to create a natural conversational layer over your data. It is used to find appropriate answers from customer input or from a project.
+
+Question answering is commonly used to build conversational client applications, which include social media applications, chat bots, and speech-enabled desktop applications. This offering includes features like enhanced relevance using a deep learning ranker, precise answers, and end-to-end region support.
+
+Question answering comprises two capabilities:
+
+* Custom question answering: Users can customize different aspects, such as editing question and answer pairs extracted from the content source, defining synonyms and metadata, and accepting question suggestions.
+* Prebuilt question answering: Users can get a response by querying a text passage without needing to manage knowledge bases.
+
+This documentation contains the following article types:
+
+* The [quickstarts](./quickstart/sdk.md) are step-by-step instructions that let you make calls to the service and get results in a short period of time.
+* The [how-to guides](./how-to/manage-knowledge-base.md) contain instructions for using the service in more specific or customized ways.
+* The [conceptual articles](./concepts/precise-answering.md) provide in-depth explanations of the service's functionality and features.
+* [**Tutorials**](./tutorials/bot-service.md) are longer guides that show you how to use the service as a component in broader business solutions.
+
+## When to use question answering
+
+* **When you have static information** - Use question answering when you have static information in your project. This project is custom to your needs, which you've built with documents such as PDFs and URLs.
+* **When you want to provide the same answer to a request, question, or command** - when different users submit the same question, the same answer is returned.
+* **When you want to filter static information based on meta-information** - add [metadata](./tutorials/multiple-domains.md) tags to provide additional filtering options relevant to your client application's users and the information. Common metadata information includes [chit-chat](./how-to/chit-chat.md), content type or format, content purpose, and content freshness. <!--TODO: Fix Link-->
+* **When you want to manage a bot conversation that includes static information** - your project takes a user's conversational text or command and answers it. If the answer is part of a pre-determined conversation flow, represented in your project with [multi-turn context](./tutorials/guided-conversations.md), the bot can easily provide this flow.
+
+## What is a project?
+
+Question answering [imports your content](./how-to/manage-knowledge-base.md) into a project full of question and answer pairs. The import process extracts information about the relationship between the parts of your structured and semi-structured content to imply relationships between the question and answer pairs. You can edit these question and answer pairs or add new pairs.
+
+The content of the question and answer pair includes:
+* All the alternate forms of the question
+* Metadata tags used to filter answer choices during the search
+* Follow-up prompts to continue the search refinement
+
+After you publish your project, a client application sends a user's question to your endpoint. Your question answering service processes the question and responds with the best answer.
+
+## Create a chat bot programmatically
+
+Once a question answering project is published, a client application sends a question to your project endpoint and receives the results as a JSON response. A common client application for question answering is a chat bot.
+
+![Ask a bot a question and get answer from project content](../../qnamaker/media/qnamaker-overview-learnabout/bot-chat-with-qnamaker.png)
+
+|Step|Action|
+|:--|:--|
+|1|The client application sends the user's _question_ (text in their own words), "How do I programmatically update my project?" to your project endpoint.|
+|2|Question answering uses the trained project to provide the correct answer and any follow-up prompts that can be used to refine the search for the best answer. Question answering returns a JSON-formatted response.|
+|3|The client application uses the JSON response to make decisions about how to continue the conversation. These decisions can include showing the top answer and presenting more choices to refine the search for the best answer. |
+|||
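+
+To make the exchange above concrete, the following is a minimal client-side sketch of steps 1 through 3: it sends the user's question to the project's prediction endpoint over REST and reads the answers and follow-up prompts from the JSON response. The endpoint, key, project name, and deployment name are placeholders.
+
+```python
+# Sketch of the client-side exchange: send the user's question to the project
+# endpoint and inspect the JSON response, including any follow-up prompts.
+# Endpoint, key, project name, and deployment name are placeholders.
+import requests
+
+endpoint = "https://<your-language-resource>.cognitiveservices.azure.com"
+url = f"{endpoint}/language/:query-knowledgebases"
+params = {
+    "projectName": "<your-project-name>",
+    "deploymentName": "production",
+    "api-version": "2021-10-01",
+}
+headers = {
+    "Ocp-Apim-Subscription-Key": "<your-language-resource-key>",
+    "Content-Type": "application/json",
+}
+body = {"question": "How do I programmatically update my project?", "top": 3}
+
+response = requests.post(url, params=params, headers=headers, json=body)
+response.raise_for_status()
+
+for answer in response.json().get("answers", []):
+    print(f"{answer['confidenceScore']:.2f}: {answer['answer']}")
+    # Follow-up prompts let the client offer choices that refine the search.
+    for prompt in (answer.get("dialog") or {}).get("prompts", []):
+        print("  prompt:", prompt.get("displayText"))
+```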
+
+## Build low code chat bots
+
+The [Language Studio](https://language.cognitive.azure.com/) portal provides the complete project authoring experience. You can import documents, in their current form, to your project. These documents (such as an FAQ, product manual, spreadsheet, or web page) are converted into question and answer pairs. Each pair is analyzed for follow-up prompts and connected to other pairs. The final _markdown_ format supports rich presentation including images and links.
+
+Once your project is edited, publish the project to a working [Azure Web App bot](https://azure.microsoft.com/services/bot-service/) without writing any code. Test your bot in the [Azure portal](https://portal.azure.com) or download it and continue development.
+
+## High quality responses with layered ranking
+
+The question answering system uses a layered ranking approach. The data is stored in Azure search, which also serves as the first ranking layer. The top results from Azure search are then passed through question answering's NLP re-ranking model to produce the final results and confidence score.
+
+## Multi-turn conversations
+
+Question answering provides multi-turn prompts and active learning to help you improve your basic question and answer pairs.
+
+**Multi-turn prompts** give you the opportunity to connect question and answer pairs. This connection allows the client application to provide a top answer and to present more questions that refine the search for a final answer.
+
+After the project receives questions from users at the published endpoint, question answering applies **active learning** to these real-world questions to suggest changes to your project to improve the quality.
+
+## Development lifecycle
+
+Question answering provides authoring, training, and publishing along with collaboration permissions to integrate into the full development life cycle.
+
+> [!div class="mx-imgBorder"]
+> ![Conceptual image of development cycle](../../qnamaker/media/qnamaker-overview-learnabout/development-cycle.png)
+
+## Complete a quickstart
+
+We offer quickstarts in the most popular programming languages, each designed to teach you basic design patterns and get you running code in less than 10 minutes.
+
+* [Get started with the question answering client library](./quickstart/sdk.md)
+
+## Next steps
+Question answering provides everything you need to build, manage, and deploy your custom project.
+
ai-services Sdk https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/language-service/question-answering/quickstart/sdk.md
+
+ Title: "Quickstart: Use SDK to create and manage project - custom question answering"
+description: This quickstart shows you how to create and manage your project using custom question answering.
+++ Last updated : 06/06/2022++
+recommendations: false
+ms.devlang: csharp, python
+
+zone_pivot_groups: custom-qna-quickstart
++
+# Quickstart: question answering
+
+> [!NOTE]
+> Are you looking to migrate your workloads from QnA Maker? See our [migration guide](../how-to/migrate-qnamaker-to-question-answering.md) for information on feature comparisons and migration steps.
+
+Get started with the custom question answering client library. Follow these steps to install the package and try out the example code for basic tasks.
+++++
+## Clean up resources
+
+If you want to clean up and remove an Azure AI services subscription, you can delete the resource or resource group. Deleting the resource group also deletes any other resources associated with it.
+
+* [Portal](../../../multi-service-resource.md?pivots=azportal#clean-up-resources)
+* [Azure CLI](../../../multi-service-resource.md?pivots=azcli#clean-up-resources)
+++
+## Explore the REST API
+
+To learn about automating your question answering pipeline, consult the REST API documentation. Currently, authoring functionality is available only through the REST API (a minimal request sketch follows this list):
+
+* [Authoring API reference](/rest/api/cognitiveservices/questionanswering/question-answering-projects)
+* [Authoring API cURL examples](../how-to/authoring.md)
+* [Runtime API reference](/rest/api/cognitiveservices/questionanswering/question-answering)
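+
+As a quick orientation before you dive into those references, here is a hedged sketch of listing the question answering projects on a Language resource through the authoring REST API. The endpoint and key are placeholders, and the path, `api-version`, and response field names below are assumptions that you should confirm against the Authoring API reference and cURL examples linked above.
+
+```python
+import requests
+
+# Placeholder values for your Language resource (assumptions, not real values).
+ENDPOINT = "https://<your-language-resource>.cognitiveservices.azure.com"
+KEY = "<your-language-resource-key>"
+
+# Assumed authoring route for listing projects; verify the path and api-version in the reference above.
+response = requests.get(
+    f"{ENDPOINT}/language/query-knowledgebases/projects",
+    params={"api-version": "2021-10-01"},
+    headers={"Ocp-Apim-Subscription-Key": KEY},
+)
+response.raise_for_status()
+for project in response.json().get("value", []):
+    # Field names are read defensively because the exact schema depends on the API version.
+    print(project.get("projectName"), project.get("lastModifiedDateTime"))
+```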
+
+## Next steps
+
+* [Tutorial: Create an FAQ bot](../tutorials/bot-service.md)
ai-services Document Format Guidelines https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/language-service/question-answering/reference/document-format-guidelines.md
+
+ Title: Import document format guidelines - question answering
+description: Use these guidelines for importing documents to get the best results for your content with question answering.
+++++ Last updated : 01/23/2022++
+# Format guidelines for question answering
+
+Review these formatting guidelines to get the best results for your content.
+
+## Formatting considerations
+
+After importing a file or URL, question answering converts and stores your content in the [markdown format](https://en.wikipedia.org/wiki/Markdown). The conversion process adds new lines in the text, such as `\n\n`. Knowledge of the markdown format helps you understand the converted content and manage your project content.
+
+If you add or edit your content directly in your project, use **markdown formatting** to create rich text content or change the markdown format content that is already in the answer. Question answering supports much of the markdown format to bring rich text capabilities to your content. However, the client application, such as a chat bot, may not support the same set of markdown formats. It is important to test how the client application displays answers.
+
+## Basic document formatting
+
+Question answering identifies sections and subsections and relationships in the file based on visual clues like:
+
+* font size
+* font style
+* numbering
+* colors
+
+> [!NOTE]
+> We don't support extraction of images from uploaded documents currently.
+
+### Product manuals
+
+A manual is typically guidance material that accompanies a product. It helps the user to set up, use, maintain, and troubleshoot the product. When question answering processes a manual, it extracts the headings and subheadings as questions and the subsequent content as answers. See an example [here](https://download.microsoft.com/download/2/9/B/29B20383-302C-4517-A006-B0186F04BE28/surface-pro-4-user-guide-EN.pdf).
+
+Below is an example of a manual with an index page and hierarchical content:
+
+> [!div class="mx-imgBorder"]
+> ![Product Manual example for a project](../../../qnamaker/media/qnamaker-concepts-datasources/product-manual.png)
+
+> [!NOTE]
+> Extraction works best on manuals that have a table of contents and/or an index page, and a clear structure with hierarchical headings.
+
+### Brochures, guidelines, papers, and other files
+
+Many other types of documents can also be processed to generate question answer pairs, provided they have a clear structure and layout. These include brochures, guidelines, reports, white papers, scientific papers, policies, and books. See an example [here](https://qnamakerstore.blob.core.windows.net/qnamakerdata/docs/Manage%20Azure%20Blob%20Storage.docx).
+
+Below is an example of a semi-structured doc, without an index:
+
+> [!div class="mx-imgBorder"]
+> ![Azure Blob storage semi-structured Doc](../../../qnamaker/media/qnamaker-concepts-datasources/semi-structured-doc.png)
+
+### Unstructured document support
+
+Custom question answering now supports unstructured documents. A document is considered unstructured when its content is not organized in a well-defined hierarchical manner, lacks a set structure, or is free flowing.
+
+Below is an example of an unstructured PDF document:
+
+> [!div class="mx-imgBorder"]
+> ![Unstructured document example for a project](../../../qnamaker/media/qnamaker-concepts-datasources/unstructured-qna-pdf.png)
+
+> [!NOTE]
+> QnA pairs are not extracted in the "Edit sources" tab for unstructured sources.
+
+> [!IMPORTANT]
+> Support for unstructured file/content is available only in question answering.
+
+### Structured question answering document
+
+The format for structured question-answer content in DOC files is alternating lines: one question per line, followed by its answer on the next line, as shown below:
+
+```text
+Question1
+
+Answer1
+
+Question2
+
+Answer2
+```
+
+Below is an example of a structured question answering word document:
+
+> [!div class="mx-imgBorder"]
+> ![Structured question answering document example for a project](../../../qnamaker/media/qnamaker-concepts-datasources/structured-qna-doc.png)
+
+### Structured *TXT*, *TSV* and *XLS* Files
+
+Structured *.txt*, *.tsv*, or *.xls* files can also be uploaded to question answering to create or augment a project. The content can be plain text, or can be in RTF or HTML. Question answer pairs have an optional metadata field that can be used to group them into categories.
+
+| Question | Answer | Metadata (1 key: 1 value) |
+|--||-|
+| Question1 | Answer1 | <code>Key1:Value1 &#124; Key2:Value2</code> |
+| Question2 | Answer2 | `Key:Value` |
+
+Any additional columns in the source file are ignored.
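+
+If you generate these files programmatically, the following sketch shows one way to produce a TSV in the layout of the table above. The file name, questions, answers, and metadata values are made-up illustrations.
+
+```python
+import csv
+
+# Hypothetical rows that follow the Question / Answer / Metadata layout shown in the table above.
+rows = [
+    ("How do I pair my earbuds?", "Open the case and press the pairing button.", "product:surface_earbuds"),
+    ("How do I charge my pen?", "Attach the pen to the magnetic charger.", "product:surface_pen | type:howto"),
+]
+
+with open("structured-qna.tsv", "w", newline="", encoding="utf-8") as handle:
+    writer = csv.writer(handle, delimiter="\t")
+    writer.writerow(["Question", "Answer", "Metadata"])  # header row matching the table above
+    writer.writerows(rows)
+```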
+
+### Structured data format through import
+
+Importing a project replaces the content of the existing project. Import requires a structured .tsv file that contains data source information. This information helps group the question answer pairs and attribute them to a particular data source. Question answer pairs have an optional metadata field that can be used to group them into categories. The import format needs to match the exported knowledge base format.
+
+| Question | Answer | Source| Metadata (1 key: 1 value) | QnaId |
+|--||-|||
+| Question1 | Answer1 | Url1 | <code>Key1:Value1 &#124; Key2:Value2</code> | QnaId 1 |
+| Question2 | Answer2 | Editorial| `Key:Value` | QnaId 2 |
+
+
+### Multi-turn document formatting
+
+* Use headings and subheadings to denote hierarchy. For example, you can use h1 to denote the parent question answer pair and h2 to denote the question answer pair that should be taken as a prompt. Use smaller heading sizes to denote subsequent hierarchy. Do not use style, color, or some other mechanism to imply structure in your document; question answering will not extract the multi-turn prompts from it.
+* The first character of a heading must be capitalized.
+* Do not end a heading with a question mark, `?`.
+
+**Sample documents**:<br>[Surface Pro (docx)](https://github.com/Azure-Samples/cognitive-services-sample-data-files/blob/master/qna-maker/data-source-formats/multi-turn.docx)<br>[Contoso Benefits (docx)](https://github.com/Azure-Samples/cognitive-services-sample-data-files/blob/master/qna-maker/data-source-formats/Multiturn-ContosoBenefits.docx)<br>[Contoso Benefits (pdf)](https://github.com/Azure-Samples/cognitive-services-sample-data-files/blob/master/qna-maker/data-source-formats/Multiturn-ContosoBenefits.pdf)
+
+## FAQ URLs
+
+Question answering can support FAQ web pages in three different forms:
+
+* Plain FAQ pages
+* FAQ pages with links
+* FAQ pages with a Topics Homepage
+
+### Plain FAQ pages
+
+This is the most common type of FAQ page, in which the answers immediately follow the questions in the same page.
+
+### FAQ pages with links
+
+In this type of FAQ page, questions are aggregated together and are linked to answers that are either in different sections of the same page, or in different pages.
+
+Below is an example of an FAQ page with links in sections that are on the same page:
+
+> [!div class="mx-imgBorder"]
+> ![Section Link FAQ page example for a project](../../../qnamaker/media/qnamaker-concepts-datasources/sectionlink-faq.png)
+
+### Parent Topics page links to child answers pages
+
+This type of FAQ has a Topics page where each topic is linked to a corresponding set of questions and answers on a different page. Question answering crawls all the linked pages to extract the corresponding questions and answers.
+
+Below is an example of a Topics page with links to FAQ sections in different pages.
+
+> [!div class="mx-imgBorder"]
+> ![Deep link FAQ page example for a project](../../../qnamaker/media/qnamaker-concepts-datasources/topics-faq.png)
+
+### Support URLs
+
+Question answering can process semi-structured support web pages, such as web articles that describe how to perform a given task, how to diagnose and resolve a given problem, and what the best practices for a given process are. Extraction works best on content that has a clear structure with hierarchical headings.
+
+> [!NOTE]
+> Extraction for support articles is a new feature and is in early stages. It works best for simple pages that are well structured and do not contain complex headers or footers.
+
+## Import and export project
+
+**TSV and XLS files** from exported projects can only be used by importing the files from the **Settings** page in Language Studio. They cannot be used as data sources during project creation or from the **+ Add file** or **+ Add URL** feature on the **Settings** page.
+
+When you import a project through these **TSV and XLS files**, the question answer pairs are added to the editorial source, not to the sources from which the questions and answers were extracted in the exported project.
+
+## Next steps
+
+* [Tutorial: Create an FAQ bot](../tutorials/bot-service.md)
ai-services Markdown Format https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/language-service/question-answering/reference/markdown-format.md
+
+ Title: Markdown format - question answering
+description: Following is the list of markdown formats that you can use in your answer text.
+++++ Last updated : 01/21/2022++
+# Markdown format supported in answer text
+
+Question answering stores answer text as markdown. There are many flavors of markdown. In order to make sure the answer text is returned and displayed correctly, use this reference.
+
+Use the **[CommonMark](https://commonmark.org/help/tutorial/index.html)** tutorial to validate your markdown. The tutorial has a **Try it** feature for quick copy/paste validation.
+
+## When to use rich-text editing versus markdown
+
+Rich-text editing of answers allows you, as the author, to use a formatting toolbar to quickly select and format text.
+
+Markdown is a better tool when you need to autogenerate content to create projects to be imported as part of a CI/CD pipeline or for batch testing.
+
+## Supported markdown format
+
+Following is the list of markdown formats that you can use in your answer text.
+
+|Purpose|Format|Example markdown|
+|--|--|--|
+|A new line between 2 sentences.|`\n\n`|`How can I create a bot with \n\n question answering?`|
+|Headers from h1 to h6; the number of `#` denotes which header. One `#` is an h1.|`\n# text \n## text \n### text \n#### text \n##### text` |`## Creating a bot \n ...text.... \n### Important news\n ...text... \n### Related Information\n ....text...`<br><br>`\n# my h1 \n## my h2\n### my h3 \n#### my h4 \n##### my h5`|
+|Italics |`*text*`|`How do I create a bot with *question answering*?`|
+|Strong (bold)|`**text**`|`How do I create a bot with **question answering**?`|
+|URL for link|`[text](https://www.my.com)`|`How do I create a bot with [question answering](https://language.cognitive.azure.com/)?`|
+|*URL for public image|`![text](https://www.my.com/image.png)`|`How can I create a bot with ![question answering](path-to-your-image.png)`|
+|Strikethrough|`~~text~~`|`some ~~questions~~ questions need to be asked`|
+|Bold and italics|`***text***`|`How can I create a ***question answering*** bot?`|
+|Bold URL for link|`[**text**](https://www.my.com)`|`How do I create a bot with [**question answering**](https://language.cognitive.azure.com/)?`|
+|Italics URL for link|`[*text*](https://www.my.com)`|`How do I create a bot with [*question answering*](https://language.cognitive.azure.com/)?`|
+|Escape markdown symbols|`\*text\*`|`How do I create a bot with \*question answering\*?`|
+|Ordered list|`\n 1. item1 \n 1. item2`|`This is an ordered list: \n 1. List item 1 \n 1. List item 2`<br>The preceding example uses automatic numbering built into markdown.<br>`This is an ordered list: \n 1. List item 1 \n 2. List item 2`<br>The preceding example uses explicit numbering.|
+|Unordered list|`\n * item1 \n * item2`<br>or<br>`\n - item1 \n - item2`|`This is an unordered list: \n * List item 1 \n * List item 2`|
+|Nested lists|`\n * Parent1 \n\t * Child1 \n\t * Child2 \n * Parent2`<br><br>`\n * Parent1 \n\t 1. Child1 \n\t * Child2 \n 1. Parent2`<br><br>You can nest ordered and unordered lists together. The tab, `\t`, indicates the indentation level of the child element.|`This is an unordered list: \n * List item 1 \n\t * Child1 \n\t * Child2 \n * List item 2`<br><br>`This is an ordered nested list: \n 1. Parent1 \n\t 1. Child1 \n\t 1. Child2 \n 1. Parent2`|
+
+* Question answering doesn't process the image in any way. It is the client application's role to render the image.
+
+If you want to add content by using the update/replace project APIs and the content or file contains HTML tags, you can preserve the HTML in your file by ensuring that the opening and closing tags are converted to the encoded format.
+
+| Preserve HTML | Representation in the API request | Representation in KB |
+|--||-|
+| Yes | \&lt;br\&gt; | &lt;br&gt; |
+| Yes | \&lt;h3\&gt;header\&lt;/h3\&gt; | &lt;h3&gt;header&lt;/h3&gt; |
+
+Additionally, `CR LF(\r\n)` is converted to `\n` in the knowledge base. `LF(\n)` is kept as is. If you want to escape an escape sequence like `\t` or `\n`, you can use a backslash, for example: '\\\\r\\\\n' and '\\\\t'.
+
+## Next steps
+
+* [Import a project](../how-to/migrate-knowledge-base.md)
ai-services Active Learning https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/language-service/question-answering/tutorials/active-learning.md
+
+ Title: Enrich your project with active learning
+description: In this tutorial, learn how to enrich your question answering projects with active learning
+++++ Last updated : 11/02/2021+++
+# Enrich your project with active learning
+
+In this tutorial, you learn how to:
+
+<!-- green checkmark -->
+> [!div class="checklist"]
+> * Download an active learning test file
+> * Import the test file to your existing project
+> * Accept/reject active learning suggestions
+> * Add alternate questions
+
+This tutorial shows you how to enhance your question answering project with active learning. You may notice that customers ask questions that are not covered in your project, and these questions are often variations of existing questions, paraphrased differently.
+
+These variations, when added as alternate questions to the relevant question answer pair, help optimize the project to answer real-world user queries. You can manually add alternate questions to question answer pairs through the editor. You can also use the active learning feature to generate active learning suggestions based on user queries. The active learning feature, however, requires that the project receives regular user traffic to generate suggestions.
+
+## Enable active learning
+
+Active learning is turned on by default for question answering enabled resources.
+
+To try out active learning suggestions, you can import the following file as a new project: [SampleActiveLearning.tsv](https://github.com/Azure-Samples/cognitive-services-sample-data-files/blob/master/qna-maker/knowledge-bases/SampleActiveLearning.tsv).
+
+## Download file
+
+Run the following command from the command prompt to download a local copy of the `SampleActiveLearning.tsv` file.
+
+```cmd
+curl "https://github.com/Azure-Samples/cognitive-services-sample-data-files/blob/master/qna-maker/knowledge-bases/SampleActiveLearning.tsv" --output SampleActiveLearning.tsv
+```
+
+## Import file
+
+From the edit project pane for your project, select the `...` (ellipsis) icon from the menu > **Import questions and answers** > **Import as TSV**. Then select **Choose file** to browse to the copy of `SampleActiveLearning.tsv` that you downloaded to your computer in the previous step, and then select **Done**.
+
+> [!div class="mx-imgBorder"]
+> [ ![Screenshot of edit project menu bar with import as TSV option displayed.]( ../media/active-learning/import-questions.png) ]( ../media/active-learning/import-questions.png#lightbox)
+
+## View and add/reject active learning suggestions
+
+Once the import of the test file is complete, active learning suggestions can be viewed on the review suggestions pane:
+
+> [!div class="mx-imgBorder"]
+> [ ![Screenshot with review suggestions page displayed.]( ../media/active-learning/review-suggestions.png) ]( ../media/active-learning/review-suggestions.png#lightbox)
+
+> [!NOTE]
+> Active learning suggestions are not real time. There is an approximate delay of 30 minutes before the suggestions appear on this pane. This delay helps balance the high cost of real-time updates to the index against service performance.
+
+We can now either accept these suggestions or reject them using the options on the menu bar to **Accept all suggestions** or **Reject all suggestions**.
+
+Alternatively, to accept or reject individual suggestions, select the checkmark (accept) symbol or trash can (reject) symbol that appears next to individual questions in the **Review suggestions** page.
+
+> [!div class="mx-imgBorder"]
+> [ ![Screenshot with option to accept or reject highlighted in red.]( ../media/active-learning/accept-reject.png) ]( ../media/active-learning/accept-reject.png#lightbox)
+
+## Add alternate questions
+
+While active learning automatically suggests alternate questions based on the user queries hitting the project, we can also add variations of a question to question answer pairs on the edit project page by selecting **Add alternate phrase**.
+
+By adding alternate questions along with active learning, we further enrich the project with variations of a question that help provide consistent answers to user queries.
+
+> [!NOTE]
+> When alternate questions have many stop words, they might negatively impact the accuracy of responses. So, if the only difference between alternate questions is in the stop words, these alternate questions are not required.
+> To examine the list of stop words, consult the [stop words article](https://github.com/Azure-Samples/azure-search-sample-dat).
+
+## Next steps
++
ai-services Adding Synonyms https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/language-service/question-answering/tutorials/adding-synonyms.md
+
+ Title: Improve the quality of responses with synonyms
+description: In this tutorial, learn how to improve response with synonyms and alternate words
+++++ Last updated : 11/02/2021+++
+# Improve quality of response with synonyms
+
+In this tutorial, you learn how to:
+
+> [!div class="checklist"]
+> * Add synonyms to improve the quality of your responses
+> * Evaluate the response quality via the inspect option of the Test pane
+
+This tutorial shows you how to improve the quality of your responses by using synonyms. Let's assume that users are not getting an accurate response to their queries when they use alternate forms, synonyms, or acronyms of a word. To address this, you can use the [Authoring API](../how-to/authoring.md) to add synonyms for keywords.
+
+## Add synonyms using Authoring API
+
+Let's add the following words and their alterations to improve the results:
+
+|Word | Alterations|
+|--|--|
+| fix problems | `troubleshoot`, `diagnostic`|
+| whiteboard | `white board`, `white canvas` |
+| bluetooth | `blue tooth`, `BT` |
+
+```json
+{
+ "synonyms": [
+ {
+ "alterations": [
+ "fix problems",
+ "troubleshoot",
+            "diagnostic"
+ ]
+ },
+ {
+ "alterations": [
+ "whiteboard",
+ "white board",
+ "white canvas"
+ ]
+ },
+ {
+ "alterations": [
+ "bluetooth",
+ "blue tooth",
+ "BT"
+ ]
+ }
+ ]
+}
+
+```
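+
+One hedged way to send this payload to your project is through the synonyms operation of the [Authoring API](../how-to/authoring.md). The sketch below is an illustration only: the endpoint, key, and project name are placeholders, and the route, HTTP verb, and `api-version` are assumptions to confirm against the Authoring API documentation.
+
+```python
+import requests
+
+# Placeholders for your own Language resource and project (assumptions, not real values).
+ENDPOINT = "https://<your-language-resource>.cognitiveservices.azure.com"
+KEY = "<your-language-resource-key>"
+PROJECT = "<your-project-name>"
+
+# Same alterations as the JSON payload above.
+payload = {
+    "synonyms": [
+        {"alterations": ["fix problems", "troubleshoot", "diagnostic"]},
+        {"alterations": ["whiteboard", "white board", "white canvas"]},
+        {"alterations": ["bluetooth", "blue tooth", "BT"]},
+    ]
+}
+
+# Assumed synonyms route of the Authoring API; verify the path, verb, and api-version
+# against the Authoring API article before relying on this call.
+response = requests.put(
+    f"{ENDPOINT}/language/query-knowledgebases/projects/{PROJECT}/synonyms",
+    params={"api-version": "2021-10-01"},
+    headers={"Ocp-Apim-Subscription-Key": KEY},
+    json=payload,
+)
+print(response.status_code)
+```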
+
+For the question and answer pair "Fix problems with Surface Pen", we compare the response for a query made using its synonym "troubleshoot".
+
+## Response before addition of synonym
+
+> [!div class="mx-imgBorder"]
+> [ ![Screenshot with confidence score of .74 highlighted in red]( ../media/adding-synonyms/score.png) ]( ../media/adding-synonyms/score.png#lightbox)
+
+## Response after addition of synonym
+
+> [!div class="mx-imgBorder"]
+> [ ![Screenshot with a confidence score of .97 highlighted in red]( ../media/adding-synonyms/score-improvement.png) ]( ../media/adding-synonyms/score-improvement.png#lightbox)
+
+As you can see, when `troubleshoot` was not added as a synonym, we got a low confidence response to the query "How to troubleshoot your surface pen". However, after we added `troubleshoot` as a synonym to "fix problems", we received the correct response to the query with a higher confidence score. Once these synonyms were added, the relevance of the results improved, thereby improving the user experience.
+
+> [!IMPORTANT]
+> Synonyms are case insensitive. Synonyms also might not work as expected if you add stop words as synonyms. The list of stop words can be found here: [List of stop words](https://github.com/Azure-Samples/azure-search-sample-dat).
+> For instance, if you add the abbreviation **IT** for Information technology, the system might not be able to recognize Information Technology because **IT** is a stop word and is filtered when a query is processed.
+
+## Notes
+* Synonyms can be added in any order. The ordering is not considered in any computational logic.
+* Synonyms can be added only to a project that has at least one question and answer pair.
+* If synonym words overlap between two sets of alterations, the results may be unexpected; using overlapping sets is not recommended.
+* Special characters are not allowed in synonyms. Hyphenated words like "COVID-19" are treated the same as "COVID 19", and a space can be used as a term separator. Following is the list of special characters **not allowed**:
+
+|Special character | Symbol|
+|--|--|
+|Comma | ,|
+|Question mark | ?|
+|Colon| :|
+|Semicolon| ;|
+|Double quotation mark| \"|
+|Single quotation mark| \'|
+|Open parenthesis|(|
+|Close parenthesis|)|
+|Open brace|{|
+|Close brace|}|
+|Open bracket|[|
+|Close bracket|]|
+|Hyphen/dash|-|
+|Plus sign|+|
+|Period|.|
+|Forward slash|/|
+|Exclamation mark|!|
+|Asterisk|\*|
+|Underscore|\_|
+|At sign|@|
+|Hash|#|
++
+## Next steps
+
ai-services Bot Service https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/language-service/question-answering/tutorials/bot-service.md
+
+ Title: "Tutorial: Create an FAQ bot with question answering and Azure AI Bot Service"
+description: In this tutorial, create a no code FAQ Bot with question answering and Azure AI Bot Service.
+++++ Last updated : 11/02/2021+++
+# Tutorial: Create an FAQ bot
+
+Create an FAQ bot with custom question answering and Azure [Bot Service](https://azure.microsoft.com/services/bot-service/) with no code.
+
+In this tutorial, you learn how to:
+
+<!-- green checkmark -->
+> [!div class="checklist"]
+> * Link a question answering project to an Azure AI Bot Service
+> * Deploy a Bot
+> * Chat with the Bot in web chat
+> * Enable the Bot in supported channels
+
+## Create and publish a project
+
+Follow the [getting started article](../how-to/create-test-deploy.md). Once the project has been successfully deployed, you will be ready to start this article.
+
+## Create a bot
+
+After deploying your project, you can create a bot from the **Deploy project** page:
+
+* You can quickly create several bots that all point to the same project, with different regions or pricing plans for the individual bots.
+
+* When you make changes to the project and redeploy, you don't need to take further action with the bot. It's already configured to work with the project, and works with all future changes to the project. Every time you publish a project, all the bots connected to it are automatically updated.
+
+1. In Language Studio, on the question answering **Deploy project** page, select the **Create a bot** button.
+
+ > [!div class="mx-imgBorder"]
+ > ![Screenshot of UI with option to create a bot in Azure.](../media/bot-service/create-bot-in-azure.png)
+
+1. A new browser tab opens for the Azure portal, with the Azure AI Bot Service's creation page. Configure the Azure AI Bot Service and hit the **Create** button.
+
+ |Setting |Value|
+ |-||
+ | Bot handle| Unique identifier for your bot. This value needs to be distinct from your App name |
+ | Subscription | Select your subscription |
+ | Resource group | Select an existing resource group or create a new one |
+ | Location | Select your desired location |
+ | Pricing tier | Choose pricing tier |
+ |App name | App service name for your bot |
+ |SDK language | C# or Node.js. Once the bot is created, you can download the code to your local development environment and continue the development process. |
+ | Language Resource Key | This key is automatically populated from the deployed question answering project |
+ | App service plan/Location | This value is automatically populated; do not change it |
+
+1. After the bot is created, open the **Bot service** resource.
+1. Under **Settings**, select **Test in Web Chat**.
+
+ > [!div class="mx-imgBorder"]
+ > ![Screenshot of Azure AI Bot Service UI button that reads "Test web chat".](../media/bot-service/test-in-web-chat.png)
+
+1. At the chat prompt of **Type your message**, enter:
+
+ `How do I setup my surface book?`
+
+ The chat bot responds with an answer from your project.
+
+ > [!div class="mx-imgBorder"]
+ > ![Screenshot of bot test chat response containing a question and bot generated answer.](../media/bot-service/bot-chat.png)
+
+## Integrate the bot with channels
+
+Select **Channels** in the Bot service resource that you have created. You can activate the Bot in additional [supported channels](/azure/bot-service/bot-service-manage-channels).
+
+ >[!div class="mx-imgBorder"]
+ >![Screenshot of integration with teams with product icons.](../media/bot-service/channels.png)
+
+## Clean up resources
+
+If you're not going to continue to use this application, delete the associated question answering and bot service resources.
+
+## Next steps
+
+Advance to the next article to learn how to customize your FAQ bot with multi-turn prompts.
ai-services Guided Conversations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/language-service/question-answering/tutorials/guided-conversations.md
+
+ Title: Add guided conversations with multi-turn prompts
+description: In this tutorial, learn how to make guided conversations with multi-turn prompts.
+++++ Last updated : 11/02/2021+++
+# Add guided conversations with multi-turn prompts
+
+In this tutorial, you learn how to:
+
+> [!div class="checklist"]
+> * Add new question and answer pairs to your existing project
+> * Add follow-up prompts to create guided conversations
+> * Test your multi-turn prompts
+
+## Prerequisites
+
+ In this tutorial, we use [Surface Pen FAQ](https://support.microsoft.com/surface/how-to-use-your-surface-pen-8a403519-cd1f-15b2-c9df-faa5aa924e98) to create a project.
+
+If you have never created a question answering project before, we recommend starting with the [getting started](../how-to/create-test-deploy.md) article, which takes you step-by-step through the process.
+
+## View question answer pair context
+
+For this example, let's assume that users are asking for additional details about the Surface Pen product, particularly how to troubleshoot their Surface Pen, but they are not getting the correct answers. So, we add more prompts to support additional scenarios and guide the users to the correct answers using multi-turn prompts.
+
+Multi-turn prompts that are associated with question and answer pairs can be viewed by selecting **Show columns** > **Context**. By default, this should already be enabled on the **Edit project** page in the Language Studio question answering interface.
+
+> [!div class="mx-imgBorder"]
+> [ ![Screenshot of show columns UI with context highlighted in red]( ../media/guided-conversations/context.png) ]( ../media/guided-conversations/context.png#lightbox)
+
+This displays the context tree where all follow-up prompts linked to a QnA pair are shown:
+
+> [!div class="mx-imgBorder"]
+> [ ![Screenshot of Surface Pen FAQ page with answers to common questions]( ../media/guided-conversations/surface-source.png) ]( ../media/guided-conversations/surface-source.png#lightbox)
+
+## Add question pair with follow-up prompts
+
+To help users solve issues with their Surface Pen, we add follow-up prompts:
+
+- Add a new question pair with two follow-up prompts
+- Add a follow-up prompt to one of the newly added prompts
+
+1. Add a new QnA pair with two follow-up prompts, **Check compatibility** and **Check Pen Settings**.
+Using the editor, we add a new QnA pair with a follow-up prompt by selecting **Add question pair**.
+
+ > [!div class="mx-imgBorder"]
+ > ![Screenshot of UI with "Add question pair" highlighted in a red box]( ../media/guided-conversations/add-question.png)
+
+ A new row in **Editorial** is created where we enter the question answer pair as shown below:
+
+ |Field|Value|
+ |--|-|
+ |Questions | Fix problems with Surface |
+ |Answers and prompts | Here are some things to try first if your Surface Pen won't write, open apps, or connect to Bluetooth.|
+
+2. We then add a follow-up prompt to the newly created question pair by choosing **Add follow-up prompts**. Fill in the details for the prompt as shown:
+
+ > [!div class="mx-imgBorder"]
+ > [ ![Screenshot of UI with "add a follow-up prompt" highlighted in a red box]( ../media/guided-conversations/add-prompts.png) ]( ../media/guided-conversations/add-prompts.png#lightbox)
+
+    We provide **Check Compatibility** as the "Display text" for the prompt and try to link it to a QnA pair. Since no related QnA pair is available to link to the prompt when we search "Check your Surface Pen Compatibility", we create a new question pair by selecting **Create link to new pair** and then **Done**. Then select **Save changes**.
+
+ > [!div class="mx-imgBorder"]
+ > [ ![Screenshot of a question and answer about checking your surface pen compatibility]( ../media/guided-conversations/compatability-check.png) ]( ../media/guided-conversations/compatability-check.png#lightbox)
+
+3. Similarly, we add another prompt, **Check Pen Settings**, to help the user troubleshoot the Surface Pen, and add a question pair to it.
+
+ > [!div class="mx-imgBorder"]
+ > [ ![Screenshot of a question and answer about checking your surface pen settings]( ../media/guided-conversations/pen-settings.png) ]( ../media/guided-conversations/check-pen-settings.png#lightbox)
+
+4. Add another follow-up prompt to the newly created prompt. We now add "Replace Pen tips" as a follow-up prompt to the previously created prompt "Check Pen Settings".
+
+ > [!div class="mx-imgBorder"]
+ > [ ![Screenshot of a question and answer about creating a follow-up prompt for replacing pen tips]( ../media/guided-conversations/replace-pen.png) ]( ../media/guided-conversations/replace-pen.png#lightbox)
+
+5. Finally, save the changes and test these prompts in the **Test** pane:
+
+ > [!div class="mx-imgBorder"]
+ > [ ![Screenshot of question answer UI with save changes highlighted in a red box]( ../media/guided-conversations/save-changes.png) ]( ../media/guided-conversations/save-changes.png#lightbox)
+
+    For the user query **Issues with Surface Pen**, the system returns an answer and presents the newly added prompts to the user. The user then selects one of the prompts, **Check Pen Settings**, and the related answer is returned along with another prompt, **Replace Pen Tips**, which, when selected, provides the user with more information. In this way, multi-turn prompts help guide the user to the desired answer.
+
+ > [!div class="mx-imgBorder"]
+ > [ ![Screenshot of chat test UI]( ../media/guided-conversations/test.png) ]( ../media/guided-conversations/test.png#lightbox)
+
+## Next steps
+
ai-services Multiple Domains https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/language-service/question-answering/tutorials/multiple-domains.md
+
+ Title: "Tutorial: Create a FAQ bot for multiple categories with Azure AI Bot Service"
+description: In this tutorial, create a no code FAQ Bot for production use cases with question answering and Azure AI Bot Service.
+++++ Last updated : 11/02/2021+++
+# Add multiple categories to your FAQ bot
+
+In this tutorial, you learn how to:
+
+> [!div class="checklist"]
+> * Create a project and tag question answer pairs into distinct categories with metadata
+> * Create a separate project for each domain
+> * Create a separate language resource for each domain
+
+When building an FAQ bot, you may encounter use cases that require you to address queries across multiple domains. Let's say the marketing team at Microsoft wants to build a customer support bot that answers common user queries on multiple Surface products. For simplicity, we will use two FAQ URLs, [Surface Pen](https://support.microsoft.com/surface/how-to-use-your-surface-pen-8a403519-cd1f-15b2-c9df-faa5aa924e98) and [Surface Earbuds](https://support.microsoft.com/surface/use-surface-earbuds-aea108c3-9344-0f11-e5f5-6fc9f57b21f9), to create the project.
+
+## Create project with domain specific metadata
+
+The content authors can use documents to extract question answer pairs or add custom question answer pairs to the project. To group these questions and answers into specific domains or categories, you can add metadata.
+
+For the bot on Surface products, you can take the following steps to create a bot that answers queries for both product types:
+
+1. Add the following FAQ URLs as sources by selecting **Add source** > **URLs**, and then **Add all** once you have added each of the URLs below:
+
+ [Surface Pen FAQ](https://support.microsoft.com/surface/how-to-use-your-surface-pen-8a403519-cd1f-15b2-c9df-faa5aa924e98)<br>[Surface Earbuds FAQ](https://support.microsoft.com/surface/use-surface-earbuds-aea108c3-9344-0f11-e5f5-6fc9f57b21f9)
+
+ >[!div class="mx-imgBorder"]
+ >[![Screenshot of add URL UI.](../media/multiple-domains/add-url.png)](../media/multiple-domains/add-url.png#lightbox)
+
+2. In this project, we have question answer pairs on two products, and we would like to distinguish between them so that we can search for responses among the question answer pairs for a given product. To do this, we update the metadata field for the question answer pairs.
+
+    As you can see in the example below, we have added metadata with **product** as the key and **surface_pen** or **surface_earbuds** as the value wherever applicable. You can extend this example to extract data on multiple products and add a different value for each product.
+
+ >[!div class="mx-imgBorder"]
+ >[![Screenshot of metadata example.](../media/multiple-domains/product-metadata.png)](../media/multiple-domains/product-metadata.png#lightbox)
+
+3. Now, in order to restrict the system to search for the response within a particular product, you need to pass that product as a filter in the question answering REST API.
+
+ The REST API prediction URL can be retrieved from the Deploy project pane:
+
+ >[!div class="mx-imgBorder"]
+ >[![Screenshot of the Deploy project page with the prediction URL displayed.](../media/multiple-domains/prediction-url.png)](../media/multiple-domains/prediction-url.png#lightbox)
+
+    In the JSON body for the API call, we pass *surface_pen* as the value for the metadata *product*, so the system only looks for the response among the QnA pairs with the same metadata. A hedged sketch of sending this filtered request from a client application appears after this list.
+
+ ```json
+    {
+        "question": "What is the price?",
+        "top": 3,
+        "answerSpanRequest": {
+            "enable": true,
+            "confidenceScoreThreshold": 0.3,
+            "topAnswersWithSpan": 1
+        },
+        "filters": {
+            "metadataFilter": {
+                "metadata": [
+                    {
+                        "key": "product",
+                        "value": "surface_pen"
+                    }
+                ]
+            }
+        }
+    }
+ ```
+
+    You can obtain the metadata value based on user input in the following ways:
+
+    * Explicitly take the domain as input from the user through the bot client. For instance, as shown below, you can take the product category as input from the user when the conversation is initiated.
+
+ ![Take metadata input](../media/multiple-domains/explicit-metadata-input.png)
+
+    * Implicitly identify the domain based on bot context. For instance, if the previous question was about a particular Surface product, the client can save it as context. If the user doesn't specify the product in the next query, you can pass the bot context as metadata to the Generate Answer API.
+
+ ![Pass context](../media/multiple-domains/extract-metadata-from-context.png)
+
+    * Extract an entity from the user query to identify the domain to be used for the metadata filter. You can use other Azure AI services such as [Named Entity Recognition (NER)](../../named-entity-recognition/overview.md) and [conversational language understanding](../../conversational-language-understanding/overview.md) for entity extraction.
+
+ ![Extract metadata from query](../media/multiple-domains/extract-metadata-from-query.png)
+
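+As mentioned above, here is a hedged sketch of sending the filtered request from a client. The prediction URL and key are placeholders that stand in for the values from the **Deploy project** page, and the helper function and product values are illustrations only.
+
+```python
+import requests
+
+# Placeholders - use the prediction URL and key from the Deploy project page (assumed values).
+PREDICTION_URL = (
+    "https://<your-language-resource>.cognitiveservices.azure.com/language/:query-knowledgebases"
+    "?projectName=<your-project-name>&deploymentName=production&api-version=2021-10-01"
+)
+KEY = "<your-language-resource-key>"
+
+def ask(question, product=None):
+    """Query the deployed project, optionally restricting the search to one product's metadata."""
+    body = {"question": question, "top": 3}
+    if product:
+        # Same filter shape as the JSON body above: only QnA pairs tagged with this product are searched.
+        body["filters"] = {"metadataFilter": {"metadata": [{"key": "product", "value": product}]}}
+    response = requests.post(PREDICTION_URL, headers={"Ocp-Apim-Subscription-Key": KEY}, json=body)
+    return response.json()
+
+# The product value can come from explicit user input, saved bot context, or entity extraction,
+# as described in the options above.
+print(ask("What is the price?", product="surface_pen"))
+```
+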
+### How large can our projects be?
+
+You can add up to 50,000 question answer pairs to a single project. If your data exceeds 50,000 question answer pairs, you should consider splitting the project.
+
+## Create a separate project for each domain
+
+You can also create a separate project for each domain and maintain the projects separately. All APIs require the user to pass the project name to make any update to the project or to fetch an answer to the user's question.
+
+When the service receives the user question, you need to pass the `projectName` in the REST API endpoint shown below to fetch a response from the relevant project. You can locate the URL on the **Deploy project** page under **Get prediction URL**:
+
+`https://southcentralus.api.cognitive.microsoft.com/language/:query-knowledgebases?projectName=Test-Project-English&api-version=2021-10-01&deploymentName=production`
+
+## Create a separate language resource for each domain
+
+Let's say the marketing team at Microsoft wants to build a customer support bot that answers user queries on Surface and Xbox products. They plan to assign distinct teams to access projects on Surface and Xbox. In this case, we advise creating two question answering resources: one for Surface and another for Xbox. You can, however, define distinct roles for users accessing the same resource.
ai-services Multiple Languages https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/language-service/question-answering/tutorials/multiple-languages.md
+
+ Title: Create projects in multiple languages - question answering
+description: In this tutorial, you will learn how to create projects in multiple languages.
+++++ Last updated : 11/02/2021+++
+# Create projects in multiple languages
+
+In this tutorial, you learn how to:
+
+<!-- green checkmark -->
+> [!div class="checklist"]
+> * Create a project that supports English
+> * Create a project that supports German
+
+This tutorial walks through the process of creating projects in multiple languages. We use the [Surface Pen FAQ](https://support.microsoft.com/surface/how-to-use-your-surface-pen-8a403519-cd1f-15b2-c9df-faa5aa924e98) URL to create projects in German and English. We then deploy the projects and use the question answering REST API to query and get answers to FAQs in the desired language.
+
+## Create project in German
+
+To create projects in more than one language, the multiple language setting must be enabled when the first project associated with the language resource is created.
+
+> [!div class="mx-imgBorder"]
+> [ ![Screenshot of UI for create project with I want to select the language when I create a project in this resource selected.]( ../media/multiple-languages/multiple-languages.png) ](../media/multiple-languages/multiple-languages.png#lightbox)
+
+1. From the [Language Studio](https://aka.ms/languageStudio) home page, select **Open custom question answering**. Then select **Create new project** > **I want to select the language when I create a project in this resource** > **Next**.
+
+2. Fill out the **Enter basic information** page and select **Next** > **Create project**.
+
+ |Setting| Value|
+ ||-|
+ |Name | Unique name for your project|
+ |Description | Unique description to help identify the project |
+ |Source language | For this tutorial, select German |
+ |Default answer | Default answer when no answer is returned |
+
+ > [!div class="mx-imgBorder"]
+    > [ ![Screenshot of the create project UI with the German language selected.]( ../media/multiple-languages/choose-german.png) ](../media/multiple-languages/choose-german.png#lightbox)
+
+3. **Add source** > **URLs** > **Add url** > **Add all**.
+
+ |Setting| Value |
+ |-||
+ | Url Name | Surface Pen German |
+ | URL | https://support.microsoft.com/de-de/surface/how-to-use-your-surface-pen-8a403519-cd1f-15b2-c9df-faa5aa924e98 |
+ | Classify file structure | Auto-detect |
+
+ Question answering reads the document and extracts question answer pairs from the source URL to create the project in the German language. If you select the link to the source, the project page opens where we can edit the contents.
+
+ > [!div class="mx-imgBorder"]
+ > [ ![Screenshot of UI with German questions and answers](../media/multiple-languages/german-language.png) ]( ../media/multiple-languages/german-language.png#lightbox)
+
+## Create project in English
+
+We now repeat the steps above, but this time select English and provide an English URL as a source.
+
+1. From the [Language Studio](https://aka.ms/languageStudio) open the question answering page > **Create new project**.
+
+2. Fill out the **Enter basic information** page and select **Next** > **Create project**.
+
+ |Setting| Value|
+ ||-|
+ |Name | Unique name for your project|
+ |Description | Unique description to help identify the project |
+ |Source language | For this tutorial, select English |
+ |Default answer | Default answer when no answer is returned |
+
+3. **Add source** > **URLs** > **Add url** > **Add all**.
+
+ |Setting| Value |
+ |--|--|
+    | Url Name | Surface Pen English |
+ | URL | https://support.microsoft.com/en-us/surface/how-to-use-your-surface-pen-8a403519-cd1f-15b2-c9df-faa5aa924e98 |
+ | Classify file structure | Auto-detect |
+
+## Deploy and query project
+
+We are now ready to deploy the two projects and query them in the desired language using the question answering REST API. Once a project is deployed, the following page is shown, which provides the details needed to query the project.
+
+> [!div class="mx-imgBorder"]
+> [ ![Screenshot of UI with English questions and answers](../media/multiple-languages/get-prediction-url.png) ](../media/multiple-languages/get-prediction-url.png#lightbox)
+
+The language for the incoming user query can be detected with the [Language Detection API](../../language-detection/how-to/call-api.md) and the user can call the appropriate endpoint and project depending on the detected language.
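+
+A hedged sketch of that routing pattern follows. The project names, the language-to-project mapping, and the analyze-text request shape are assumptions for illustration; confirm the detection call against the Language Detection article linked above, and replace the endpoint and key placeholders with your own resource values.
+
+```python
+import requests
+
+ENDPOINT = "https://<your-language-resource>.cognitiveservices.azure.com"  # placeholder
+KEY = "<your-language-resource-key>"                                       # placeholder
+HEADERS = {"Ocp-Apim-Subscription-Key": KEY}
+
+# Hypothetical mapping from detected language code to the deployed project for that language.
+PROJECTS = {"de": "<german-project-name>", "en": "<english-project-name>"}
+
+def detect_language(text):
+    """Detect the language of the query. The analyze-text request shape is an assumption;
+    verify it against the Language Detection how-to linked above."""
+    body = {"kind": "LanguageDetection", "analysisInput": {"documents": [{"id": "1", "text": text}]}}
+    response = requests.post(
+        f"{ENDPOINT}/language/:analyze-text",
+        params={"api-version": "2022-05-01"},
+        headers=HEADERS,
+        json=body,
+    )
+    return response.json()["results"]["documents"][0]["detectedLanguage"]["iso6391Name"]
+
+def answer(question):
+    """Route the question to the deployed project that matches its detected language."""
+    project = PROJECTS.get(detect_language(question), PROJECTS["en"])
+    response = requests.post(
+        f"{ENDPOINT}/language/:query-knowledgebases",
+        params={"projectName": project, "deploymentName": "production", "api-version": "2021-10-01"},
+        headers=HEADERS,
+        json={"question": question, "top": 1},
+    )
+    return response.json()
+
+print(answer("Wie verwende ich meinen Surface Pen?"))
+```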
ai-services Power Virtual Agents https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/language-service/question-answering/tutorials/power-virtual-agents.md
+
+ Title: "Tutorial: Add your Question Answering project to Power Virtual Agents"
+description: In this tutorial, you will learn how to add your Question Answering project to Power Virtual Agents.
+++++ Last updated : 10/11/2022+++
+# Add your Question Answering project to Power Virtual Agents
+
+Create and extend a [Power Virtual Agents](https://powervirtualagents.microsoft.com/) bot to provide answers from your project.
+
+> [!NOTE]
+> The integration demonstrated in this tutorial is in preview and is not intended for deployment to production environments.
+
+In this tutorial, you learn how to:
+> [!div class="checklist"]
+> * Create a Power Virtual Agents bot
+> * Create a system fallback topic
+> * Add Question Answering as an action to a topic as a Power Automate flow
+> * Create a Power Automate solution
+> * Add a Power Automate flow to your solution
+> * Publish Power Virtual Agents
+> * Test Power Virtual Agents, and receive an answer from your Question Answering project
+
+> [!NOTE]
+> The QnA Maker service is being retired on the 31st of March, 2025. A newer version of the question and answering capability is now available as part of [Azure AI Language](../../index.yml). For question answering capabilities within the Language Service, see [question answering](../overview.md). Starting 1st October, 2022 you won't be able to create new QnA Maker resources. For information on migrating existing QnA Maker knowledge bases to question answering, consult the [migration guide](../how-to/migrate-qnamaker.md).
+
+## Create and publish a project
+1. Follow the [quickstart](../quickstart/sdk.md?pivots=studio) to create and deploy a Question Answering project.
+2. After deploying your project from Language Studio, select **Get Prediction URL**.
+3. Get your Site URL from the hostname of the Prediction URL, and your Account key, which is the Ocp-Apim-Subscription-Key.
+
+> [!div class="mx-imgBorder"]
+> [ ![Screenshot of how to obtain the prediction URL and subscription key displayed.]( ../media/power-virtual-agents/get-prediction-url.png) ]( ../media/power-virtual-agents/get-prediction-url.png#lightbox)
+
+4. Create a Custom Question Answering connector: Follow the [connector documentation](/connectors/languagequestionansw/) to create a connection to Question Answering.
+5. Use this tutorial to create a Bot with Power Virtual Agents instead of creating a bot from Language Studio.
+
+## Create a bot in Power Virtual Agents
+[Power Virtual Agents](https://powervirtualagents.microsoft.com/) allows teams to create powerful bots by using a guided, no-code graphical interface. You don't need data scientists or developers.
+
+Create a bot by following the steps in [Create and delete Power Virtual Agents bots](/power-virtual-agents/authoring-first-bot).
+
+## Create the system fallback topic
+In Power Virtual Agents, you create a bot with a series of topics (subject areas), in order to answer user questions by performing actions.
+
+Although the bot can connect to your project from any topic, this tutorial uses the system fallback topic. The fallback topic is used when the bot can't find an answer. The bot passes the user's text to the Question Answering Query knowledgebase API, receives the answer from your project, and displays it to the user as a message.
+
+Create a fallback topic by following the steps in [Configure the system fallback topic in Power Virtual Agents](/power-virtual-agents/authoring-system-fallback-topic).
+
+## Use the authoring canvas to add an action
+Use the Power Virtual Agents authoring canvas to connect the fallback topic to your project. The topic starts with the unrecognized user text. Add an action that passes that text to Question Answering, and then shows the answer as a message. The last step of displaying an answer is handled as a [separate step](../../../QnAMaker/Tutorials/integrate-with-power-virtual-assistant-fallback-topic.md#add-your-solutions-flow-to-power-virtual-agents), later in this tutorial.
+
+This section creates the fallback topic conversation flow.
+
+The new fallback action might already have conversation flow elements. Delete the **Escalate** item by selecting the **Options** menu.
+
+> [!div class="mx-imgBorder"]
+> [ ![Screenshot of how to delete existing conversation flow elements.]( ../media/power-virtual-agents/delete-action.png) ]( ../media/power-virtual-agents/delete-action.png#lightbox)
+
+Below the *Message* node, select the (**+**) icon, then select **Call an action**.
+
+> [!div class="mx-imgBorder"]
+> [ ![Screenshot of how to select the Call an action feature.]( ../media/power-virtual-agents/trigger-action-for-power-automate.png) ]( ../media/power-virtual-agents/trigger-action-for-power-automate.png#lightbox)
+
+Select **Create a flow**. This takes you to the Power Automate portal.
+
+> [!div class="mx-imgBorder"]
+> [ ![Screenshot of where to find the Create a flow action.]( ../media/power-virtual-agents/create-flow.png) ]( ../media/power-virtual-agents/create-flow.png#lightbox)
+
+Power Automate opens a new template as shown below.
+
+> [!div class="mx-imgBorder"]
+> [ ![Screenshot of the default Power Automate template.]( ../media/power-virtual-agents/power-automate-actions.png) ]( ../media/power-virtual-agents/power-automate-actions.png#lightbox)
+**Do not use the template shown above.**
+
+Instead, follow the steps below to create a Power Automate flow. This flow:
+- Takes the incoming user text as a question, and sends it to Question Answering.
+- Returns the top response back to your bot.
+
+Select **Create** in the left panel, then select **OK** to leave the page.
+
+> [!div class="mx-imgBorder"]
+> [ ![Screenshot of the Create action in the left panel and a confirmation message for navigating away from the page.]( ../media/power-virtual-agents/power-automate-create-new.png) ]( ../media/power-virtual-agents/power-automate-create-new.png#lightbox)
+
+Select "Instant Cloud flow"
+
+> [!div class="mx-imgBorder"]
+> [ ![Screenshot of the Instant cloud flow selection box.]( ../media/power-virtual-agents/create-instant-cloud-flow.png) ]( ../media/power-virtual-agents/create-instant-cloud-flow.png#lightbox)
+
+For testing this connector, you can select "When Power Virtual Agents calls a flow" and select **Create**.
+
+> [!div class="mx-imgBorder"]
+> [ ![Screenshot of the When Power Virtual Agents calls a flow selection in the Choose how to trigger this flow list.]( ../media/power-virtual-agents/create-trigger.png) ]( ../media/power-virtual-agents/create-trigger.png#lightbox)
+
+Select "New Step" and search for "Power Virtual Agents". Choose "Add an input" and select text. Next, provide the keyword and the value.
+
+> [!div class="mx-imgBorder"]
+> [ ![Screenshot of the Add an input option.]( ../media/power-virtual-agents/flow-step-1.png) ]( ../media/power-virtual-agents/flow-step-1.png#lightbox)
+
+Select "New Step" and search "Language - Question Answering" and choose "Generate answer from Project" from the three actions.
+
+> [!div class="mx-imgBorder"]
+> [ ![Screenshot of the Generate answer from Project selection in the Action list.]( ../media/power-virtual-agents/flow-step-2.png) ]( ../media/power-virtual-agents/flow-step-2.png#lightbox)
+
+This option answers the specified question using your project. Type in the project name, deployment name, and API version, and select the question from the previous step.
+
+> [!div class="mx-imgBorder"]
+> [ ![Screenshot of fields for the Generate answer from Project action.]( ../media/power-virtual-agents/flow-step-3.png) ]( ../media/power-virtual-agents/flow-step-3.png#lightbox)
+
+Select "New Step" and search for "Initialize variable". Choose a name for your variable, and select the "String" type.
+
+> [!div class="mx-imgBorder"]
+> [ ![Screenshot of the Initialize variable action fields.]( ../media/power-virtual-agents/flow-step-4.png) ]( ../media/power-virtual-agents/flow-step-4.png#lightbox)
+
+Select "New Step" again, and search for "Apply to each", then select the output from the previous steps and add an action of "Set variable" and select the connector action.
+
+> [!div class="mx-imgBorder"]
+> [ ![Screenshot of the Set variable action within the Apply to each step.]( ../media/power-virtual-agents/flow-step-5.png) ]( ../media/power-virtual-agents/flow-step-5.png#lightbox)
+
+Select "New Step" and search for "Return value(s) to Power Virtual Agents" and type in a keyword, then choose the previous variable name in the answer.
+
+> [!div class="mx-imgBorder"]
+> [ ![Screenshot of the Return value(s) to Power Virtual Agents step, containing the previous variable.]( ../media/power-virtual-agents/flow-step-6.png) ]( ../media/power-virtual-agents/flow-step-6.png#lightbox)
+
+The list of completed steps should look like this.
+
+> [!div class="mx-imgBorder"]
+> [ ![Screenshot of the full list of completed steps within Power Virtual Agents.]( ../media/power-virtual-agents/flow-step-7.png) ]( ../media/power-virtual-agents/flow-step-7.png#lightbox)
+
+Select **Save** to save the flow.
+
+## Create a solution and add the flow
+
+For the bot to find and connect to the flow, the flow must be included in a Power Automate solution.
+
+1. While still in the Power Automate portal, select Solutions from the left-side navigation.
+2. Select **+ New solution**.
+3. Enter a display name. The list of solutions includes every solution in your organization or school. Choose a naming convention that helps you filter to just your solutions. For example, you might prefix your email to your solution name: jondoe-power-virtual-agent-question-answering-fallback.
+4. Select your publisher from the list of choices.
+5. Accept the default values for the name and version.
+6. Select **Create** to finish the process.
+
+**Add your flow to the solution**
+
+1. In the list of solutions, select the solution you just created. It should be at the top of the list. If it isn't, search by your email name, which is part of the solution name.
+2. In the solution, select **+ Add existing**, and then select Flow from the list.
+3. Find your flow from the **Outside solutions** list, and then select Add to finish the process. If there are many flows, look at the **Modified** column to find the most recent flow.
+
+## Add your solution's flow to Power Virtual Agents
+
+1. Return to the browser tab with your bot in Power Virtual Agents. The authoring canvas should still be open.
+2. To insert a new step in the flow, above the **Message** action box, select the plus (+) icon. Then select **Call an action**.
+3. From the **Flow** pop-up window, select the new flow named **Generate answers using Question Answering Project...**. The new action appears in the flow.
+
+> [!div class="mx-imgBorder"]
+> [ ![Screenshot of the Call an action using the Generate answers using Question Answering Project flow.]( ../media/power-virtual-agents/flow-step-8.png) ]( ../media/power-virtual-agents/flow-step-8.png#lightbox)
+
+4. To correctly set the input variable to the Question Answering action, select **Select a variable**, then select **bot.UnrecognizedTriggerPhrase**.
+
+> [!div class="mx-imgBorder"]
+> [ ![Screenshot of the selected bot.UnrecognizedTriggerPhrase variable within the action call.]( ../media/power-virtual-agents/flow-step-9.png) ]( ../media/power-virtual-agents/flow-step-9.png#lightbox)
+
+5. To correctly set the output variable to the Question Answering action, in the **Message** action, select **UnrecognizedTriggerPhrase**, then select the icon to insert a variable, {x}, then select **FinalAnswer**.
+6. From the context toolbar, select **Save**, to save the authoring canvas details for the topic.
+
+Here's what the final bot canvas looks like:
+
+> [!div class="mx-imgBorder"]
+> [ ![Screenshot of the completed bot canvas.]( ../media/power-virtual-agents/flow-step-10.png) ]( ../media/power-virtual-agents/flow-step-10.png#lightbox)
+
+## Test the bot
+
+As you design your bot in Power Virtual Agents, you can use the [Test bot pane](/power-virtual-agents/authoring-test-bot) to see how the bot leads a customer through the bot conversation.
+
+1. In the test pane, toggle **Track between topics**. This allows you to watch the progression between topics, as well as within a single topic.
+2. Test the bot by entering the user text in the following order. The authoring canvas reports the successful steps with a green check mark.
+
+|**Question order**|**Test questions** |**Purpose** |
+||-|--|
+|1 |Hello |Begin conversation |
+|2 |Store hours |Sample topic. This is configured for you without any additional work on your part. |
+|3 |Yes |In reply to "Did that answer your question?" |
+|4 |Excellent |In reply to "Please rate your experience." |
+|5 |Yes |In reply to "Can I help with anything else?" |
+|6 |How can I improve the throughput performance for query predictions?|This question triggers the fallback action, which sends the text to your project to answer. Then the answer is shown. The green check marks indicate success for each individual action.|
+
+> [!div class="mx-imgBorder"]
+> [ ![Screenshot of the completed test bot running alongside the tutorial flow.]( ../media/power-virtual-agents/flow-step-11.png) ]( ../media/power-virtual-agents/flow-step-11.png#lightbox)
+
+## Publish your bot
+
+To make the bot available to all members of your organization, you need to publish it.
+
+Publish your bot by following the steps in [Publish your bot](/power-virtual-agents/publication-fundamentals-publish-channels).
+
+## Share your bot
+
+To make your bot available to others, you first need to publish it to a channel. For this tutorial we'll use the demo website.
+
+Configure the demo website by following the steps in [Configure a chatbot for a live or demo website](/power-virtual-agents/publication-connect-bot-to-web-channels).
+
+Then you can share your website URL with your school or organization members.
+
+## Clean up resources
+
+When you're done with the project, remove the resources you created for your question answering project in the Azure portal.
+
+## See also
+
+* [Tutorial: Create an FAQ bot](../tutorials/bot-service.md)
ai-services Call Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/language-service/sentiment-opinion-mining/how-to/call-api.md
+
+ Title: How to perform sentiment analysis and opinion mining
+
+description: This article will show you how to detect sentiment, and mine for opinions in text.
++++++ Last updated : 07/27/2022++++
+# How to: Use Sentiment analysis and Opinion Mining
+
+Sentiment analysis and opinion mining are two ways of detecting positive and negative sentiment. Using sentiment analysis, you can get sentiment labels (such as "negative", "neutral" and "positive") and confidence scores at the sentence and document-level. Opinion Mining provides granular information about the opinions related to words (such as the attributes of products or services) in the text.
+
+## Sentiment Analysis
+
+Sentiment Analysis applies sentiment labels to text, which are returned at a sentence and document level, with a confidence score for each.
+
+The labels are *positive*, *negative*, and *neutral*. At the document level, the *mixed* sentiment label can also be returned. The sentiment of the document is determined as follows:
+
+| Sentence sentiment | Returned document label |
+|--|-|
+| At least one `positive` sentence is in the document. The rest of the sentences are `neutral`. | `positive` |
+| At least one `negative` sentence is in the document. The rest of the sentences are `neutral`. | `negative` |
+| At least one `negative` sentence and at least one `positive` sentence are in the document. | `mixed` |
+| All sentences in the document are `neutral`. | `neutral` |
+
+Confidence scores range from 0 to 1. Scores closer to 1 indicate a higher confidence in the label's classification, while lower scores indicate lower confidence. For each document or each sentence, the predicted scores associated with the labels (positive, negative, and neutral) add up to 1. For more information, see the [Responsible AI transparency note](/legal/cognitive-services/text-analytics/transparency-note?context=/azure/ai-services/text-analytics/context/context).
+
+## Opinion Mining
+
+Opinion Mining is a feature of Sentiment Analysis. Also known as Aspect-based Sentiment Analysis in Natural Language Processing (NLP), this feature provides more granular information about the opinions related to attributes of products or services in text. The API surfaces opinions as a target (noun or verb) and an assessment (adjective).
+
+For example, if a customer leaves feedback about a hotel such as "The room was great, but the staff was unfriendly.", Opinion Mining will locate targets (aspects) in the text, and their associated assessments (opinions) and sentiments. By itself, Sentiment Analysis might only report a negative sentiment.
++
+If you're using the REST API, to get Opinion Mining in your results, you must include the `opinionMining=true` flag in a request for sentiment analysis. The Opinion Mining results will be included in the sentiment analysis response. Opinion mining is an extension of Sentiment Analysis and is included in your current [pricing tier](https://azure.microsoft.com/pricing/details/cognitive-services/text-analytics/).
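+
+For example, a minimal Python sketch of such a request might look like the following. It uses the v3.1 REST path shown later in this article; the endpoint, key, and document text are placeholders you would replace with your own values.
+
+```python
+# Minimal sketch: call the Sentiment Analysis REST API with opinion mining enabled.
+# The endpoint and key are placeholders for your own Language resource values.
+import requests
+
+endpoint = "https://<your-resource-name>.cognitiveservices.azure.com"
+key = "<your-language-resource-key>"
+
+url = f"{endpoint}/text/analytics/v3.1/sentiment"
+params = {"opinionMining": "true"}  # include the flag to get opinion mining results
+headers = {"Ocp-Apim-Subscription-Key": key, "Content-Type": "application/json"}
+body = {
+    "documents": [
+        {
+            "id": "1",
+            "language": "en",
+            "text": "The room was great, but the staff was unfriendly."
+        }
+    ]
+}
+
+response = requests.post(url, params=params, headers=headers, json=body)
+print(response.json())
+```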
+
+## Development options
++
+## Determine how to process the data (optional)
+
+### Specify the sentiment analysis model
+
+By default, sentiment analysis will use the latest available AI model on your text. You can also configure your API requests to use a specific [model version](../../concepts/model-lifecycle.md).
+
+<!--### Using a preview model version
+
+To use a preview model version in your API calls, you must specify the model version using the model version parameter. For example, if you were sending a request using Python:
+
+```python
+result = text_analytics_client.analyze_sentiment(documents, show_opinion_mining=True, model_version="2021-10-01-preview")
+```
+
+or if you were using the REST API:
+
+```rest
+https://your-resource-name.cognitiveservices.azure.com/text/analytics/v3.1/sentiment?opinionMining=true&model-version=2021-10-01-preview
+```
+
+See the reference documentation for more information.
+* [REST API](https://westus2.dev.cognitive.microsoft.com/docs/services/TextAnalytics-v3-1/operations/Sentiment)
+
+* [.NET](/dotnet/api/azure.ai.textanalytics.analyzesentimentaction#properties)
+* [Python](/python/api/azure-ai-textanalytics/azure.ai.textanalytics.textanalyticsclient#analyze-sentiment-documents-kwargs-)
+* [Java](/java/api/com.azure.ai.textanalytics.models.analyzesentimentoptions.setmodelversion#com_azure_ai_textanalytics_models_AnalyzeSentimentOptions_setModelVersion_java_lang_String_)
+* [JavaScript](/javascript/api/@azure/ai-text-analytics/analyzesentimentoptions)
+-->
+
+### Input languages
+
+When you submit documents to be processed by sentiment analysis, you can specify which of [the supported languages](../language-support.md) they're written in. If you don't specify a language, sentiment analysis will default to English. The API may return offsets in the response to support different [multilingual and emoji encodings](../../concepts/multilingual-emoji-support.md).
+
+## Submitting data
+
+Sentiment analysis and opinion mining produce a higher-quality result when you give them smaller amounts of text to work on. This is the opposite of some other features, like key phrase extraction, which performs better on larger blocks of text.
+
+To send an API request, you'll need your Language resource endpoint and key.
+
+> [!NOTE]
+> You can find the key and endpoint for your Language resource on the Azure portal. They will be located on the resource's **Key and endpoint** page, under **resource management**.
+
+Analysis is performed upon receipt of the request. Using the sentiment analysis and opinion mining features synchronously is stateless. No data is stored in your account, and results are returned immediately in the response.
++
+## Getting sentiment analysis and opinion mining results
+
+When you receive results from the API, you can stream them to an application, or save the output to a file on the local system.
+
+Sentiment analysis returns a sentiment label and confidence score for the entire document, and each sentence within it. Scores closer to 1 indicate a higher confidence in the label's classification, while lower scores indicate lower confidence. A document can have multiple sentences, and the confidence scores within each document or sentence add up to 1.
+
+Opinion Mining will locate targets (nouns or verbs) in the text, and their associated assessment (adjective). For example, the sentence "*The restaurant had great food and our server was friendly*" has two targets: *food* and *server*. Each target has an assessment. For example, the assessment for *food* would be *great*, and the assessment for *server* would be *friendly*.
+
+The API returns opinions as a target (noun or verb) and an assessment (adjective).
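+
+As an illustration, the following sketch walks the targets and assessments returned by the service, assuming you use the `azure-ai-textanalytics` Python package (version 5.1 or later); the endpoint and key are placeholders.
+
+```python
+# Illustrative sketch: iterate over opinion mining results with the Python SDK.
+from azure.core.credentials import AzureKeyCredential
+from azure.ai.textanalytics import TextAnalyticsClient
+
+client = TextAnalyticsClient(
+    endpoint="https://<your-resource-name>.cognitiveservices.azure.com",
+    credential=AzureKeyCredential("<your-language-resource-key>"),
+)
+
+documents = ["The restaurant had great food and our server was friendly"]
+results = client.analyze_sentiment(documents, show_opinion_mining=True)
+
+for doc in results:
+    if doc.is_error:
+        continue
+    print(doc.sentiment, doc.confidence_scores)
+    for sentence in doc.sentences:
+        for opinion in sentence.mined_opinions:
+            assessments = ", ".join(a.text for a in opinion.assessments)
+            # Expected output for this example: "food" -> "great", "server" -> "friendly"
+            print(f"  target: {opinion.target.text} ({opinion.target.sentiment}) -> {assessments}")
+```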
+
+## Service and data limits
++
+## See also
+
+* [Sentiment analysis and opinion mining overview](../overview.md)
ai-services Use Containers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/language-service/sentiment-opinion-mining/how-to/use-containers.md
+
+ Title: Install and run Docker containers for Sentiment Analysis
+
+description: Use the Docker containers for the Sentiment Analysis API to perform natural language processing such as sentiment analysis, on-premises.
++++++ Last updated : 04/11/2023++
+keywords: on-premises, Docker, container, sentiment analysis, natural language processing
++
+# Install and run Sentiment Analysis containers
+
+Containers enable you to host the Sentiment Analysis API on your own infrastructure. If you have security or data governance requirements that can't be fulfilled by calling Sentiment Analysis remotely, then containers might be a good option.
+
+If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/cognitive-services/) before you begin.
+
+## Prerequisites
+
+You must meet the following prerequisites before using Sentiment Analysis containers.
+
+* [Docker](https://docs.docker.com/) installed on a host computer. Docker must be configured to allow the containers to connect with and send billing data to Azure.
+ * On Windows, Docker must also be configured to support Linux containers.
+ * You should have a basic understanding of [Docker concepts](https://docs.docker.com/get-started/overview/).
+* A <a href="https://portal.azure.com/#create/Microsoft.CognitiveServicesTextAnalytics" title="Create a Language resource" target="_blank">Language resource </a> with the free (F0) or standard (S) [pricing tier](https://azure.microsoft.com/pricing/details/cognitive-services/text-analytics/).
++
+## Host computer requirements and recommendations
++
+The following table describes the minimum and recommended specifications for the available container. Each CPU core must be at least 2.6 gigahertz (GHz) or faster. The allowable Transactions Per Second (TPS) are also listed.
+
+| | Minimum host specs | Recommended host specs | Minimum TPS | Maximum TPS|
+|||-|--|--|
+| **Sentiment Analysis** | 1 core, 2GB memory | 4 cores, 8GB memory |15 | 30|
+
+CPU core and memory correspond to the `--cpus` and `--memory` settings, which are used as part of the `docker run` command.
+
+## Get the container image with `docker pull`
+
+The Sentiment Analysis container image can be found on the `mcr.microsoft.com` container registry syndicate. It resides within the `azure-cognitive-services/textanalytics/` repository and is named `sentiment`. The fully qualified container image name is `mcr.microsoft.com/azure-cognitive-services/textanalytics/sentiment`.
+
+To use the latest version of the container, you can use the `latest` tag. You can also find a full list of [tags on the MCR](https://mcr.microsoft.com/product/azure-cognitive-services/textanalytics/sentiment/tags).
+
+The sentiment analysis v3 container is available in several languages. To download the English container, use the command below.
+
+```bash
+docker pull mcr.microsoft.com/azure-cognitive-services/textanalytics/sentiment:3.0-en
+```
+
+To download the container for another language, replace `3.0-en` with one of the image tags below.
+
+| Sentiment Analysis Container | Image tag |
+|--|--|
+| Chinese-Simplified | `3.0-zh-hans` |
+| Chinese-Traditional | `3.0-zh-hant` |
+| Dutch | `3.0-nl` |
+| English | `3.0-en` |
+| French | `3.0-fr` |
+| German | `3.0-de` |
+| Hindi | `3.0-hi` |
+| Italian | `3.0-it` |
+| Japanese | `3.0-ja` |
+| Korean | `3.0-ko` |
+| Norwegian (Bokmål) | `3.0-no` |
+| Portuguese (Brazil) | `3.0-pt-BR` |
+| Portuguese (Portugal) | `3.0-pt-PT` |
+| Spanish | `3.0-es` |
+| Turkish | `3.0-tr` |
++
+## Run the container with `docker run`
+
+Once the container is on the host computer, use the [docker run](https://docs.docker.com/engine/reference/commandline/run/) command to run the containers. The container will continue to run until you stop it.
+
+> [!IMPORTANT]
+> * The docker commands in the following sections use the backslash, `\`, as a line continuation character. Replace or remove this based on your host operating system's requirements.
+> * The `Eula`, `Billing`, and `ApiKey` options must be specified to run the container; otherwise, the container won't start. For more information, see [Billing](#billing).
+
+To run the Sentiment Analysis container, execute the following `docker run` command. Replace the placeholders below with your own values:
+
+| Placeholder | Value | Format or example |
+|-|-||
+| **{API_KEY}** | The key for your Language resource. You can find it on your resource's **Key and endpoint** page, on the Azure portal. |`xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx`|
+| **{ENDPOINT_URI}** | The endpoint for accessing the API. You can find it on your resource's **Key and endpoint** page, on the Azure portal. | `https://<your-custom-subdomain>.cognitiveservices.azure.com` |
+| **{IMAGE_TAG}** | The image tag representing the language of the container you want to run. Make sure this matches the `docker pull` command you used. | `3.0-en` |
+
+```bash
+docker run --rm -it -p 5000:5000 --memory 8g --cpus 1 \
+mcr.microsoft.com/azure-cognitive-services/textanalytics/sentiment:{IMAGE_TAG} \
+Eula=accept \
+Billing={ENDPOINT_URI} \
+ApiKey={API_KEY}
+```
+
+This command:
+
+* Runs a *Sentiment Analysis* container from the container image
+* Allocates one CPU core and 8 gigabytes (GB) of memory
+* Exposes TCP port 5000 and allocates a pseudo-TTY for the container
+* Automatically removes the container after it exits. The container image is still available on the host computer.
++
+## Query the container's prediction endpoint
+
+The container provides REST-based query prediction endpoint APIs.
+
+Use the host, `http://localhost:5000`, for container APIs.
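+
+For example, a request against a locally running container might look like the following Python sketch. The `/text/analytics/v3.0/sentiment` route is an assumption based on the v3.0 container images used above; adjust it to match the API version of the container you pulled.
+
+```python
+# Minimal sketch: query a locally running Sentiment Analysis container.
+# Assumes the v3.0 container exposes the Text Analytics v3.0 sentiment route on port 5000.
+import requests
+
+url = "http://localhost:5000/text/analytics/v3.0/sentiment"
+body = {
+    "documents": [
+        {"id": "1", "language": "en", "text": "The room was great, but the staff was unfriendly."}
+    ]
+}
+
+response = requests.post(url, json=body)
+print(response.json())
+```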
+
+<!-- ## Validate container is running -->
++
+## Run the container disconnected from the internet
++
+## Stop the container
++
+## Troubleshooting
+
+If you run the container with an output [mount](../../concepts/configure-containers.md#mount-settings) and logging enabled, the container generates log files that are helpful to troubleshoot issues that happen while starting or running the container.
++
+## Billing
+
+The Sentiment Analysis containers send billing information to Azure, using a _Language_ resource on your Azure account.
++
+For more information about these options, see [Configure containers](../../concepts/configure-containers.md).
+
+## Summary
+
+In this article, you learned concepts and workflow for downloading, installing, and running Sentiment Analysis containers. In summary:
+
+* Sentiment Analysis provides Linux containers for Docker
+* Container images are downloaded from the Microsoft Container Registry (MCR).
+* Container images run in Docker.
+* You must specify billing information when instantiating a container.
+
+> [!IMPORTANT]
+> Azure AI containers are not licensed to run without being connected to Azure for metering. Customers need to enable the containers to communicate billing information with the metering service at all times. Azure AI containers do not send customer data (e.g. text that is being analyzed) to Microsoft.
+
+## Next steps
+
+* See [Configure containers](../../concepts/configure-containers.md) for configuration settings.
ai-services Language Support https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/language-service/sentiment-opinion-mining/language-support.md
+
+ Title: Sentiment Analysis and Opinion Mining language support
+
+description: This article explains which languages are supported by the Sentiment Analysis and Opinion Mining features of Azure AI Language.
++++++ Last updated : 10/31/2022++++
+# Sentiment Analysis and Opinion Mining language support
+
+Use this article to learn which languages are supported by Sentiment Analysis and Opinion Mining.
+
+> [!NOTE]
+> Languages are added as new [model versions](../concepts/model-lifecycle.md) are released.
+
+## Sentiment Analysis language support
+
+Total supported language codes: 94
+
+| Language | Language code | Starting with model version | Notes |
+|-|-|-|-|
+| Afrikaans | `af` | 2022-10-01 | |
+| Albanian | `sq` | 2022-10-01 | |
+| Amharic | `am` | 2022-10-01 | |
+| Arabic | `ar` | 2022-06-01 | |
+| Armenian | `hy` | 2022-10-01 | |
+| Assamese | `as` | 2022-10-01 | |
+| Azerbaijani | `az` | 2022-10-01 | |
+| Basque | `eu` | 2022-10-01 | |
+| Belarusian (new) | `be` | 2022-10-01 | |
+| Bengali | `bn` | 2022-10-01 | |
+| Bosnian | `bs` | 2022-10-01 | |
+| Breton (new) | `br` | 2022-10-01 | |
+| Bulgarian | `bg` | 2022-10-01 | |
+| Burmese | `my` | 2022-10-01 | |
+| Catalan | `ca` | 2022-10-01 | |
+| Chinese (Simplified) | `zh-hans` | 2019-10-01 | `zh` also accepted |
+| Chinese (Traditional) | `zh-hant` | 2019-10-01 | |
+| Croatian | `hr` | 2022-10-01 | |
+| Czech | `cs` | 2022-10-01 | |
+| Danish | `da` | 2022-06-01 | |
+| Dutch | `nl` | 2019-10-01 | |
+| English | `en` | 2019-10-01 | |
+| Esperanto (new) | `eo` | 2022-10-01 | |
+| Estonian | `et` | 2022-10-01 | |
+| Filipino | `fil` | 2022-10-01 | |
+| Finnish | `fi` | 2022-06-01 | |
+| French | `fr` | 2019-10-01 | |
+| Galician | `gl` | 2022-10-01 | |
+| Georgian | `ka` | 2022-10-01 | |
+| German | `de` | 2019-10-01 | |
+| Greek | `el` | 2022-06-01 | |
+| Gujarati | `gu` | 2022-10-01 | |
+| Hausa (new) | `ha` | 2022-10-01 | |
+| Hebrew | `he` | 2022-10-01 | |
+| Hindi | `hi` | 2020-04-01 | |
+| Hungarian | `hu` | 2022-10-01 | |
+| Indonesian | `id` | 2022-10-01 | |
+| Irish | `ga` | 2022-10-01 | |
+| Italian | `it` | 2019-10-01 | |
+| Japanese | `ja` | 2019-10-01 | |
+| Javanese (new) | `jv` | 2022-10-01 | |
+| Kannada | `kn` | 2022-10-01 | |
+| Kazakh | `kk` | 2022-10-01 | |
+| Khmer | `km` | 2022-10-01 | |
+| Korean | `ko` | 2019-10-01 | |
+| Kurdish (Kurmanji) | `ku` | 2022-10-01 | |
+| Kyrgyz | `ky` | 2022-10-01 | |
+| Lao | `lo` | 2022-10-01 | |
+| Latin (new) | `la` | 2022-10-01 | |
+| Latvian | `lv` | 2022-10-01 | |
+| Lithuanian | `lt` | 2022-10-01 | |
+| Macedonian | `mk` | 2022-10-01 | |
+| Malagasy | `mg` | 2022-10-01 | |
+| Malay | `ms` | 2022-10-01 | |
+| Malayalam | `ml` | 2022-10-01 | |
+| Marathi | `mr` | 2022-10-01 | |
+| Mongolian | `mn` | 2022-10-01 | |
+| Nepali | `ne` | 2022-10-01 | |
+| Norwegian | `no` | 2019-10-01 | |
+| Odia | `or` | 2022-10-01 | |
+| Oromo (new) | `om` | 2022-10-01 | |
+| Pashto | `ps` | 2022-10-01 | |
+| Persian | `fa` | 2022-10-01 | |
+| Polish | `pl` | 2022-06-01 | |
+| Portuguese (Portugal) | `pt-PT` | 2019-10-01 | `pt` also accepted |
+| Portuguese (Brazil) | `pt-BR` | 2019-10-01 | |
+| Punjabi | `pa` | 2022-10-01 | |
+| Romanian | `ro` | 2022-10-01 | |
+| Russian | `ru` | 2022-06-01 | |
+| Sanskrit (new) | `sa` | 2022-10-01 | |
+| Scottish Gaelic (new) | `gd` | 2022-10-01 | |
+| Serbian | `sr` | 2022-10-01 | |
+| Sindhi (new) | `sd` | 2022-10-01 | |
+| Sinhala (new) | `si` | 2022-10-01 | |
+| Slovak | `sk` | 2022-10-01 | |
+| Slovenian | `sl` | 2022-10-01 | |
+| Somali | `so` | 2022-10-01 | |
+| Spanish | `es` | 2019-10-01 | |
+| Sundanese (new) | `su` | 2022-10-01 | |
+| Swahili | `sw` | 2022-10-01 | |
+| Swedish | `sv` | 2022-06-01 | |
+| Tamil | `ta` | 2022-10-01 | |
+| Telugu | `te` | 2022-10-01 | |
+| Thai | `th` | 2022-10-01 | |
+| Turkish | `tr` | 2022-10-01 | |
+| Ukrainian | `uk` | 2022-10-01 | |
+| Urdu | `ur` | 2022-10-01 | |
+| Uyghur | `ug` | 2022-10-01 | |
+| Uzbek | `uz` | 2022-10-01 | |
+| Vietnamese | `vi` | 2022-10-01 | |
+| Welsh | `cy` | 2022-10-01 | |
+| Western Frisian (new) | `fy` | 2022-10-01 | |
+| Xhosa (new) | `xh` | 2022-10-01 | |
+| Yiddish (new) | `yi` | 2022-10-01 | |
+
+## Opinion Mining language support
+
+Total supported language codes: 94
+
+| Language | Language code | Starting with model version | Notes |
+|-|-|-|-|
+| Afrikaans (new) | `af` | 2022-11-01 | |
+| Albanian (new) | `sq` | 2022-11-01 | |
+| Amharic (new) | `am` | 2022-11-01 | |
+| Arabic | `ar` | 2022-11-01 | |
+| Armenian (new) | `hy` | 2022-11-01 | |
+| Assamese (new) | `as` | 2022-11-01 | |
+| Azerbaijani (new) | `az` | 2022-11-01 | |
+| Basque (new) | `eu` | 2022-11-01 | |
+| Belarusian (new) | `be` | 2022-11-01 | |
+| Bengali | `bn` | 2022-11-01 | |
+| Bosnian (new) | `bs` | 2022-11-01 | |
+| Breton (new) | `br` | 2022-11-01 | |
+| Bulgarian (new) | `bg` | 2022-11-01 | |
+| Burmese (new) | `my` | 2022-11-01 | |
+| Catalan (new) | `ca` | 2022-11-01 | |
+| Chinese (Simplified) | `zh-hans` | 2022-11-01 | `zh` also accepted |
+| Chinese (Traditional) (new) | `zh-hant` | 2022-11-01 | |
+| Croatian (new) | `hr` | 2022-11-01 | |
+| Czech (new) | `cs` | 2022-11-01 | |
+| Danish | `da` | 2022-11-01 | |
+| Dutch | `nl` | 2022-11-01 | |
+| English | `en` | 2020-04-01 | |
+| Esperanto (new) | `eo` | 2022-11-01 | |
+| Estonian (new) | `et` | 2022-11-01 | |
+| Filipino (new) | `fil` | 2022-11-01 | |
+| Finnish | `fi` | 2022-11-01 | |
+| French | `fr` | 2021-10-01 | |
+| Galician (new) | `gl` | 2022-11-01 | |
+| Georgian (new) | `ka` | 2022-11-01 | |
+| German | `de` | 2021-10-01 | |
+| Greek | `el` | 2022-11-01 | |
+| Gujarati (new) | `gu` | 2022-11-01 | |
+| Hausa (new) | `ha` | 2022-11-01 | |
+| Hebrew (new) | `he` | 2022-11-01 | |
+| Hindi | `hi` | 2022-11-01 | |
+| Hungarian | `hu` | 2022-11-01 | |
+| Indonesian | `id` | 2022-11-01 | |
+| Irish (new) | `ga` | 2022-11-01 | |
+| Italian | `it` | 2021-10-01 | |
+| Japanese | `ja` | 2022-11-01 | |
+| Javanese (new) | `jv` | 2022-11-01 | |
+| Kannada (new) | `kn` | 2022-11-01 | |
+| Kazakh (new) | `kk` | 2022-11-01 | |
+| Khmer (new) | `km` | 2022-11-01 | |
+| Korean | `ko` | 2022-11-01 | |
+| Kurdish (Kurmanji) | `ku` | 2022-11-01 | |
+| Kyrgyz (new) | `ky` | 2022-11-01 | |
+| Lao (new) | `lo` | 2022-11-01 | |
+| Latin (new) | `la` | 2022-11-01 | |
+| Latvian (new) | `lv` | 2022-11-01 | |
+| Lithuanian (new) | `lt` | 2022-11-01 | |
+| Macedonian (new) | `mk` | 2022-11-01 | |
+| Malagasy (new) | `mg` | 2022-11-01 | |
+| Malay (new) | `ms` | 2022-11-01 | |
+| Malayalam (new) | `ml` | 2022-11-01 | |
+| Marathi | `mr` | 2022-11-01 | |
+| Mongolian (new) | `mn` | 2022-11-01 | |
+| Nepali (new) | `ne` | 2022-11-01 | |
+| Norwegian | `no` | 2022-11-01 | |
+| Oriya (new) | `or` | 2022-11-01 | |
+| Oromo (new) | `om` | 2022-11-01 | |
+| Pashto (new) | `ps` | 2022-11-01 | |
+| Persian (Farsi) (new) | `fa` | 2022-11-01 | |
+| Polish | `pl` | 2022-11-01 | |
+| Portuguese (Portugal) | `pt-PT` | 2021-10-01 | `pt` also accepted |
+| Portuguese (Brazil) | `pt-BR` | 2021-10-01 | |
+| Punjabi (new) | `pa` | 2022-11-01 | |
+| Romanian (new) | `ro` | 2022-11-01 | |
+| Russian | `ru` | 2022-11-01 | |
+| Sanskrit (new) | `sa` | 2022-11-01 | |
+| Scottish Gaelic (new) | `gd` | 2022-11-01 | |
+| Serbian (new) | `sr` | 2022-11-01 | |
+| Sindhi (new) | `sd` | 2022-11-01 | |
+| Sinhala (new) | `si` | 2022-11-01 | |
+| Slovak (new) | `sk` | 2022-11-01 | |
+| Slovenian (new) | `sl` | 2022-11-01 | |
+| Somali (new) | `so` | 2022-11-01 | |
+| Spanish | `es` | 2021-10-01 | |
+| Sundanese (new) | `su` | 2022-11-01 | |
+| Swahili (new) | `sw` | 2022-11-01 | |
+| Swedish | `sv` | 2022-11-01 | |
+| Tamil | `ta` | 2022-11-01 | |
+| Telugu | `te` | 2022-11-01 | |
+| Thai (new) | `th` | 2022-11-01 | |
+| Turkish | `tr` | 2022-11-01 | |
+| Ukrainian (new) | `uk` | 2022-11-01 | |
+| Urdu (new) | `ur` | 2022-11-01 | |
+| Uyghur (new) | `ug` | 2022-11-01 | |
+| Uzbek (new) | `uz` | 2022-11-01 | |
+| Vietnamese (new) | `vi` | 2022-11-01 | |
+| Welsh (new) | `cy` | 2022-11-01 | |
+| Western Frisian (new) | `fy` | 2022-11-01 | |
+| Xhosa (new) | `xh` | 2022-11-01 | |
+| Yiddish (new) | `yi` | 2022-11-01 | |
++
+## Next steps
+
+* See [how to call the API](how-to/call-api.md#specify-the-sentiment-analysis-model) for more information.
+* [Quickstart: Use the Sentiment Analysis client library and REST API](quickstart.md)
ai-services Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/language-service/sentiment-opinion-mining/overview.md
+
+ Title: What is sentiment analysis and opinion mining in Azure AI Language?
+
+description: An overview of the sentiment analysis feature in Azure AI services, which helps you find out what people think of a topic by mining text for clues.
++++++ Last updated : 01/12/2023++++
+# What is sentiment analysis and opinion mining in Azure AI Language?
+
+Sentiment analysis and opinion mining are features offered by [Azure AI Language](../overview.md), a collection of machine learning and AI algorithms in the cloud for developing intelligent applications that involve written language. These features help you find out what people think of your brand or topic by mining text for clues about positive or negative sentiment, and can associate them with specific aspects of the text.
+
+Both sentiment analysis and opinion mining work with a variety of [written languages](./language-support.md).
+
+## Sentiment analysis
+
+The sentiment analysis feature provides sentiment labels (such as "negative", "neutral", and "positive") based on the highest confidence score found by the service at a sentence and document level. This feature also returns confidence scores between 0 and 1 for each document and the sentences within it, for positive, neutral, and negative sentiment.
+
+## Opinion mining
+
+Opinion mining is a feature of sentiment analysis. Also known as aspect-based sentiment analysis in Natural Language Processing (NLP), this feature provides more granular information about the opinions related to words (such as the attributes of products or services) in text.
++
+## Get started with sentiment analysis
+++
+## Responsible AI
+
+An AI system includes not only the technology, but also the people who use it, the people who will be affected by it, and the environment in which it's deployed. Read the [transparency note for sentiment analysis](/legal/cognitive-services/language-service/transparency-note-sentiment-analysis?context=/azure/ai-services/language-service/context/context) to learn about responsible AI use and deployment in your systems. You can also see the following articles for more information:
++
+## Next steps
+
+There are two ways to get started using the sentiment analysis feature:
+* [Language Studio](../language-studio.md), which is a web-based platform that enables you to try several Language service features without needing to write code.
+* The [quickstart article](quickstart.md) for instructions on making requests to the service using the REST API and client library SDK.
ai-services Quickstart https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/language-service/sentiment-opinion-mining/quickstart.md
+
+ Title: "Quickstart: Use the Sentiment Analysis client library and REST API"
+
+description: Use this quickstart to start using the Sentiment Analysis API.
++++++ Last updated : 02/17/2023+
+ms.devlang: csharp, java, javascript, python
+
+keywords: text mining, key phrase
+zone_pivot_groups: programming-languages-text-analytics
++
+# Quickstart: Sentiment analysis and opinion mining
+++++++++++++++
ai-services Data Formats https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/language-service/summarization/custom/how-to/data-formats.md
+
+ Title: Prepare data for custom summarization
+
+description: Learn about how to select and prepare data, to be successful in creating custom summarization projects.
++++++ Last updated : 06/01/2022++++
+# Format data for custom Summarization
+
+This page contains information about how to select and prepare data in order to be successful in creating custom summarization projects.
+
+> [!NOTE]
+> Throughout this document, we refer to a summary of a document as a "label".
+
+## Custom summarization document sample format
+
+In the abstractive document summarization scenario, each document (whether it has a provided label or not) is expected to be provided in a plain .txt file. The file contains one or more lines. If multiple lines are provided, each is assumed to be a paragraph of the document. The following is an example document with three paragraphs.
+
+*At Microsoft, we have been on a quest to advance AI beyond existing techniques, by taking a more holistic, human-centric approach to learning and understanding. As Chief Technology Officer of Azure AI services, I have been working with a team of amazing scientists and engineers to turn this quest into a reality.*
+
+*In my role, I enjoy a unique perspective in viewing the relationship among three attributes of human cognition: monolingual text (X), audio or visual sensory signals, (Y) and multilingual (Z). At the intersection of all three, there's magic (what we call XYZ-code as illustrated in Figure 1): a joint representation to create more powerful AI that can speak, hear, see, and understand humans better. We believe XYZ-code will enable us to fulfill our long-term vision: cross-domain transfer learning, spanning modalities and languages.*
+
+*The goal is to have pre-trained models that can jointly learn representations to support a broad range of downstream AI tasks, much in the way humans do today. Over the past five years, we have achieved human performance on benchmarks in conversational speech recognition, machine translation, conversational question answering, machine reading comprehension, and image captioning. These five breakthroughs provided us with strong signals toward our more ambitious aspiration to produce a leap in AI capabilities, achieving multi-sensory and multilingual learning that is closer in line with how humans learn and understand. I believe the joint XYZ-code is a foundational component of this aspiration, if grounded with external knowledge sources in the downstream AI tasks.*
+
+## Custom summarization conversation sample format
+
+In the abstractive conversation summarization scenario, each conversation (whether it has a provided label or not) is expected to be provided in a plain .txt file. Each conversation turn must be provided on a single line formatted as Speaker + ": " + text (that is, the speaker and text are separated by a colon followed by a space). The following is an example conversation of three turns between two speakers (Agent and Customer).
+
+Agent: Hello, how can I help you?
+
+Customer: How do I upgrade office? I have been getting error messages all day.
+
+Agent: Please press the upgrade button, then sign in and follow the instructions.
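+
+If your conversations already exist as structured data, a short script can write them out in this format. The following minimal Python sketch is an illustration only; the file name and turns are examples.
+
+```python
+# Write a conversation to the expected plain-text format: one "Speaker: text" turn per line.
+turns = [
+    ("Agent", "Hello, how can I help you?"),
+    ("Customer", "How do I upgrade office? I have been getting error messages all day."),
+    ("Agent", "Please press the upgrade button, then sign in and follow the instructions."),
+]
+
+with open("conversation1.txt", "w", encoding="utf-8") as f:
+    for speaker, text in turns:
+        f.write(f"{speaker}: {text}\n")
+```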
++
+## Custom summarization document and sample mapping JSON format
+
+In both document and conversation summarization scenarios, a set of documents and corresponding labels can be provided in a single JSON file that references individual document/conversation and summary files.
+
+<!-- The JSON file is expected to contain the following fields:
+
+```json
+projectFileVersion": TODO,
+"stringIndexType": TODO,
+"metadata": {
+ "projectKind": TODO,
+ "storageInputContainerName": TODO,
+ "projectName": a string project name,
+ "multilingual": TODO,
+ "description": a string project description,
+ "language": TODO:
+},
+"assets": {
+ "projectKind": TODO,
+ "documents": a list of document-label pairs, each is defined with three fields:
+ [
+ {
+ "summaryLocation": a string path to the summary txt file,
+ "location": a string path to the document txt file,
+ "language": TODO
+ }
+ ]
+}
+```
+-->
+
+The following is an example mapping file for the abstractive document summarization scenario with three documents and corresponding labels.
+
+```json
+{
+ "projectFileVersion": "2022-10-01-preview",
+ "stringIndexType": "Utf16CodeUnit",
+ "metadata": {
+ "projectKind": "CustomAbstractiveSummarization",
+ "storageInputContainerName": "abstractivesummarization",
+ "projectName": "sample_custom_summarization",
+ "multilingual": false,
+ "description": "Creating a custom summarization model",
+ "language": "en-us"
+  },
+ "assets": {
+ "projectKind": "CustomAbstractiveSummarization",
+ "documents": [
+ {
+ "summaryLocation": "doc1_summary.txt",
+ "location": "doc1.txt",
+ "language": "en-us"
+ },
+ {
+ "summaryLocation": "doc2_summary.txt",
+ "location": "doc2.txt",
+ "language": "en-us"
+ },
+ {
+ "summaryLocation": "doc3_summary.txt",
+ "location": "doc3.txt",
+ "language": "en-us"
+ }
+ ]
+ }
+}
+```
+
+## Next steps
+
+[Get started with custom summarization](../../custom/quickstart.md)
ai-services Deploy Model https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/language-service/summarization/custom/how-to/deploy-model.md
+
+ Title: Deploy a custom summarization model
+
+description: Learn about deploying a model for Custom summarization.
++++++ Last updated : 06/02/2023+++
+# Deploy a custom summarization model
+
+Once you're satisfied with how your model performs, it's ready to be deployed and used to summarize text documents. Deploying a model makes it available for use through the [prediction API](https://aka.ms/ct-runtime-swagger).
+
+<!--## Prerequisites
+
+* A successfully [created project](create-project.md) with a configured Azure storage account.
+* Text data that has [been uploaded](design-schema.md#data-preparation) to your storage account.
+* [Labeled data](label-data.md) and a successfully [trained model](train-model.md).
+* Reviewed the [model evaluation details](view-model-evaluation.md) to determine how your model is performing.
+
+For more information, see [project development lifecycle](../overview.md#project-development-lifecycle).-->
+
+## Deploy model
+
+After you've reviewed your model's performance and decided it can be used in your environment, you need to assign it to a deployment. Assigning the model to a deployment makes it available for use through the [prediction API](https://aka.ms/ct-runtime-swagger). It is recommended to create a deployment named *production* to which you assign the best model you have built so far and use it in your system. You can create another deployment called *staging* to which you can assign the model you're currently working on to be able to test it. You can have a maximum of 10 deployments in your project.
+
+
+## Swap deployments
+
+After you're done testing a model assigned to one deployment, you can swap it with the model assigned to another deployment. Swapping deployments involves taking the model assigned to the first deployment and assigning it to the second deployment, and taking the model assigned to the second deployment and assigning it to the first. You can use this process to swap your *production* and *staging* deployments when you want to take the model assigned to *staging* and assign it to *production*.
++
+## Delete deployment
++
+## Assign deployment resources
+
+You can [deploy your project to multiple regions](../../../concepts/custom-features/multi-region-deployment.md) by assigning different Language resources that exist in different regions.
++
+## Unassign deployment resources
+
+When unassigning or removing a deployment resource from a project, you will also delete all the deployments that have been deployed to that resource's region.
++
+## Next steps
+
+Check out the [summarization feature overview](../../overview.md).
ai-services Test Evaluate https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/language-service/summarization/custom/how-to/test-evaluate.md
+
+ Title: Test and evaluate models in custom summarization
+
+description: Learn about how to test and evaluate custom summarization models.
++++++ Last updated : 06/01/2022+++
+# Test and evaluate your custom summarization models
+
+As you create your custom summarization model, you want to ensure that you end up with a quality model. You need to test and evaluate your custom summarization model to verify that it performs well.
+
+## Guidance on split test and training sets
+
+An important stage of creating a customized summarization model is validating that the created model is satisfactory in terms of quality and generates summaries as expected. That validation process has to be performed with a separate set of examples (called test examples) than the examples used for training. There are three important guidelines we recommend following when splitting the available data into training and testing:
+
+- **Size**: To establish enough confidence about the model's quality, the test set should be of a reasonable size. Testing the model on just a handful of examples can give a misleading evaluation outcome. We recommend evaluating on hundreds of examples. When a large number of documents/conversations is available, we recommend reserving at least 10% of them for testing (see the sketch after this list).
+- **No Overlap**: It's crucial to make sure that the same document isn't used for training and testing at the same time. Testing should be performed on documents that were never used for training at any stage, otherwise the quality of the model will be highly overestimated.
+- **Diversity**: The test set should cover as many input characteristics as possible. For example, it's always better to include documents of different lengths, topics, and styles when applicable. Similarly, for conversation summarization, it's a good idea to include conversations with different numbers of turns and speakers.
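+
+The following minimal Python sketch shows one way to reserve roughly 10% of your documents for testing while guaranteeing no overlap with the training set. The file names are hypothetical placeholders.
+
+```python
+# Reserve ~10% of documents for testing, with no overlap with the training set.
+import random
+
+documents = [f"doc{i}.txt" for i in range(1, 501)]  # all available labeled documents (placeholder names)
+
+random.seed(42)                      # make the split reproducible
+random.shuffle(documents)
+test_size = max(1, len(documents) // 10)
+
+test_set = documents[:test_size]
+train_set = documents[test_size:]
+
+assert not set(test_set) & set(train_set)  # no document is used for both training and testing
+print(len(train_set), "training documents,", len(test_set), "test documents")
+```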
+
+## Guidance to evaluate a custom summarization model
+
+When evaluating a custom model, we recommend using both automatic and manual evaluation together. Automatic evaluation helps quickly judge the quality of summaries produced for the entire test set, hence covering a wide range of input variations. However, automatic evaluation gives an approximation of the quality and isn't enough by itself to establish confidence in the model quality. So, we also recommend inspecting the summaries produced for as many test documents as possible.
+
+### Automatic evaluation
+
+Currently, we use a metric called ROUGE (Recall-Oriented Understudy for Gisting Evaluation). This technique includes measures for automatically determining the quality of a summary by comparing it to ideal summaries created by humans. The measures count the number of overlapping units, like n-grams, word sequences, and word pairs, between the computer-generated summary being evaluated and the ideal summaries. To learn more about ROUGE, see the [ROUGE Wikipedia entry](https://en.wikipedia.org/wiki/ROUGE_(metric)) and the [paper on the ROUGE package](https://aclanthology.org/W04-1013.pdf).
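+
+If you want to compute ROUGE scores yourself on your test set, you can use an open-source implementation. The following sketch uses the third-party `rouge-score` Python package (`pip install rouge-score`); it's an illustration only and isn't part of the custom summarization service.
+
+```python
+# Illustration only: compute ROUGE locally with the third-party rouge-score package.
+from rouge_score import rouge_scorer
+
+reference = "Agent helps customer set up the wifi connection for the espresso machine."
+candidate = "The agent helped the customer connect the espresso machine to wifi."
+
+scorer = rouge_scorer.RougeScorer(["rouge1", "rouge2", "rougeL"], use_stemmer=True)
+scores = scorer.score(reference, candidate)
+
+for name, score in scores.items():
+    print(f"{name}: precision={score.precision:.2f} recall={score.recall:.2f} f1={score.fmeasure:.2f}")
+```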
+
+### Manual evaluation
+
+When you manually inspect the quality of a summary, there are general qualities of a summary that we recommend checking for besides any desired expectations that the custom model was trained to adhere to such as style, format, or length. The general qualities we recommend checking are:
+
+- **Fluency**: The summary should have no formatting problems, capitalization errors or ungrammatical sentences.
+- **Coherence**: The summary should be well-structured and well-organized. The summary shouldn't just be a heap of related information, but should build from sentence to sentence into a coherent body of information about a topic.
+- **Coverage**: The summary should cover all important information in the document/conversation.
+- **Relevance**: The summary should include only important information from the source document/conversation without redundancies.
+- **Hallucinations**: The summary shouldn't contain incorrect information that isn't supported by the source document/conversation.
+
+To learn more about summarization evaluation, see the [MIT Press article on SummEval](https://direct.mit.edu/tacl/article/doi/10.1162/tacl_a_00373/100686/SummEval-Re-evaluating-Summarization-Evaluation).
ai-services Quickstart https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/language-service/summarization/custom/quickstart.md
+
+ Title: Quickstart - Custom summarization (preview)
+
+description: Quickly start building an AI model to summarize text.
++++++ Last updated : 05/26/2023+++
+<! zone_pivot_groups: usage-custom-language-features >
+# Quickstart: custom summarization (preview)
+
+Use this article to get started with creating a custom Summarization project where you can train custom models on top of Summarization. A model is artificial intelligence software that's trained to do a certain task. For this system, the models summarize text and are trained by learning from imported data.
+
+In this article, we use Language Studio to demonstrate key concepts of custom summarization. As an example, we'll build a custom summarization model to extract the Facility or treatment location from short discharge notes.
+
+<! >::: zone pivot="language-studio" >
++
+<!::: zone-end
++++
+## Next steps
+
+* [Summarization overview](../overview.md)
ai-services Conversation Summarization https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/language-service/summarization/how-to/conversation-summarization.md
+
+ Title: Summarize text with the conversation summarization API
+
+description: This article will show you how to summarize chat logs with the conversation summarization API.
++++++ Last updated : 01/31/2023++++
+# How to use conversation summarization
++
+## Conversation summarization types
+
+- Chapter title and narrative (general conversation) are designed to summarize a conversation into chapter titles, and a summarization of the conversation's contents. This summarization type works on conversations with any number of parties.
+
+- Issue and resolution (call center focused) is designed to summarize text chat logs between customers and customer-service agents. This feature is capable of providing both issues and resolutions present in these logs, which occur between two parties.
++
+The AI models used by the API are provided by the service; you just have to send content for analysis.
+
+## Features
+
+The conversation summarization API uses natural language processing techniques to summarize conversations into shorter summaries per request. Conversation summarization can summarize for issues and resolutions discussed in a two-party conversation or summarize a long conversation into chapters and a short narrative for each chapter.
+
+There's another feature in Azure AI Language named [document summarization](../overview.md?tabs=document-summarization) that is more suitable to summarize documents into concise summaries. When you're deciding between document summarization and conversation summarization, consider the following points:
+* Input genre: Conversation summarization can operate on both chat text and speech transcripts, which have speakers and their utterances. Document summarization operates on text.
+* Purpose of summarization: for example, conversation issue and resolution summarization returns a reason and the resolution for a chat between a customer and a customer service agent.
+
+## Submitting data
+
+> [!NOTE]
+> See the [Language Studio](../../language-studio.md#valid-text-formats-for-conversation-features) article for information on formatting conversational text to submit using Language Studio.
+
+You submit documents to the API as strings of text. Analysis is performed upon receipt of the request. Because the API is [asynchronous](../../concepts/use-asynchronously.md), there may be a delay between sending an API request and receiving the results. For information on the size and number of requests you can send per minute and second, see the data limits below.
+
+When you use this feature, the API results are available for 24 hours from the time the request was ingested, as indicated in the response. After this time period, the results are purged and are no longer available for retrieval.
+
+When you submit data to conversation summarization, we recommend sending one chat log per request, for better latency.
+
+### Get summaries from text chats
+
+You can use conversation issue and resolution summarization to get summaries as you need. To see an example using text chats, see the [quickstart article](../quickstart.md).
+
+### Get summaries from speech transcriptions
+
+Conversation issue and resolution summarization also enables you to get summaries from speech transcripts by using the [Speech service's speech to text feature](../../../Speech-Service/call-center-overview.md). The following example shows a short conversation that you might include in your API requests.
+
+```json
+"conversations":[
+ {
+ "id":"abcdefgh-1234-1234-1234-1234abcdefgh",
+ "language":"en",
+ "modality":"transcript",
+ "conversationItems":[
+ {
+ "modality":"transcript",
+ "participantId":"speaker",
+ "id":"12345678-abcd-efgh-1234-abcd123456",
+ "content":{
+ "text":"Hi.",
+ "lexical":"hi",
+ "itn":"hi",
+ "maskedItn":"hi",
+ "audioTimings":[
+ {
+ "word":"hi",
+ "offset":4500000,
+ "duration":2800000
+ }
+ ]
+ }
+ }
+ ]
+ }
+]
+```
+
+### Get chapter titles
+
+Conversation chapter title summarization lets you get chapter titles from input conversations. A guided example scenario is provided below:
+
+1. Copy the command below into a text editor. The BASH example uses the `\` line continuation character. If your console or terminal uses a different line continuation character, use that character.
+
+```bash
+curl -i -X POST https://<your-language-resource-endpoint>/language/analyze-conversations/jobs?api-version=2022-10-01-preview \
+-H "Content-Type: application/json" \
+-H "Ocp-Apim-Subscription-Key: <your-language-resource-key>" \
+-d \
+'
+{
+ "displayName": "Conversation Task Example",
+ "analysisInput": {
+ "conversations": [
+ {
+ "conversationItems": [
+ {
+                        "text": "Hello, you’re chatting with Rene. How may I help you?",
+ "id": "1",
+ "role": "Agent",
+ "participantId": "Agent_1"
+ },
+ {
+                        "text": "Hi, I tried to set up wifi connection for Smart Brew 300 espresso machine, but it didn’t work.",
+ "id": "2",
+ "role": "Customer",
+ "participantId": "Customer_1"
+ },
+ {
+                        "text": "I’m sorry to hear that. Let’s see what we can do to fix this issue. Could you please try the following steps for me? First, could you push the wifi connection button, hold for 3 seconds, then let me know if the power light is slowly blinking on and off every second?",
+ "id": "3",
+ "role": "Agent",
+ "participantId": "Agent_1"
+ },
+ {
+ "text": "Yes, I pushed the wifi connection button, and now the power light is slowly blinking.",
+ "id": "4",
+ "role": "Customer",
+ "participantId": "Customer_1"
+ },
+ {
+ "text": "Great. Thank you! Now, please check in your Contoso Coffee app. Does it prompt to ask you to connect with the machine? ",
+ "id": "5",
+ "role": "Agent",
+ "participantId": "Agent_1"
+ },
+ {
+ "text": "No. Nothing happened.",
+ "id": "6",
+ "role": "Customer",
+ "participantId": "Customer_1"
+ },
+ {
+                        "text": "I’m very sorry to hear that. Let me see if there’s another way to fix the issue. Please hold on for a minute.",
+ "id": "7",
+ "role": "Agent",
+ "participantId": "Agent_1"
+ }
+ ],
+ "modality": "text",
+ "id": "conversation1",
+ "language": "en"
+ }
+ ]
+ },
+ "tasks": [
+ {
+ "taskName": "Conversation Task 1",
+ "kind": "ConversationalSummarizationTask",
+ "parameters": {
+ "summaryAspects": [
+ "chapterTitle"
+ ]
+ }
+ }
+ ]
+}
+'
+```
+
+2. Make the following changes in the command where needed:
+- Replace the value `your-language-resource-key` with your key.
+- Replace the first part of the request URL `your-language-resource-endpoint` with your endpoint URL.
+
+3. Open a command prompt window (for example: BASH).
+
+4. Paste the command from the text editor into the command prompt window, then run the command.
+
+5. Get the `operation-location` from the response header. The value will look similar to the following URL:
+
+```http
+https://<your-language-resource-endpoint>/language/analyze-conversations/jobs/12345678-1234-1234-1234-12345678?api-version=2022-10-01-preview
+```
+
+6. To get the results of the request, use the following cURL command. Be sure to replace `<my-job-id>` with the GUID value you received from the previous `operation-location` response header:
+
+```curl
+curl -X GET https://<your-language-resource-endpoint>/language/analyze-conversations/jobs/<my-job-id>?api-version=2022-10-01-preview \
+-H "Content-Type: application/json" \
+-H "Ocp-Apim-Subscription-Key: <your-language-resource-key>"
+```
+
+Example chapter title summarization JSON response:
+
+```json
+{
+ "jobId": "d874a98c-bf31-4ac5-8b94-5c236f786754",
+ "lastUpdatedDateTime": "2022-09-29T17:36:42Z",
+ "createdDateTime": "2022-09-29T17:36:39Z",
+ "expirationDateTime": "2022-09-30T17:36:39Z",
+ "status": "succeeded",
+ "errors": [],
+ "displayName": "Conversation Task Example",
+ "tasks": {
+ "completed": 1,
+ "failed": 0,
+ "inProgress": 0,
+ "total": 1,
+ "items": [
+ {
+ "kind": "conversationalSummarizationResults",
+ "taskName": "Conversation Task 1",
+ "lastUpdateDateTime": "2022-09-29T17:36:42.895694Z",
+ "status": "succeeded",
+ "results": {
+ "conversations": [
+ {
+ "summaries": [
+ {
+ "aspect": "chapterTitle",
+ "text": "Smart Brew 300 Espresso Machine WiFi Connection",
+ "contexts": [
+ { "conversationItemId": "1", "offset": 0, "length": 53 },
+ { "conversationItemId": "2", "offset": 0, "length": 94 },
+ { "conversationItemId": "3", "offset": 0, "length": 266 },
+ { "conversationItemId": "4", "offset": 0, "length": 85 },
+ { "conversationItemId": "5", "offset": 0, "length": 119 },
+ { "conversationItemId": "6", "offset": 0, "length": 21 },
+ { "conversationItemId": "7", "offset": 0, "length": 109 }
+ ]
+ }
+ ],
+ "id": "conversation1",
+ "warnings": []
+ }
+ ],
+ "errors": [],
+ "modelVersion": "latest"
+ }
+ }
+ ]
+ }
+}
+```
+For long conversations, the model might segment the conversation into multiple cohesive parts and summarize each segment. Each summary also has a lengthy `contexts` field, which indicates the range of the input conversation that the summary was generated from.
+
+### Get narrative summarization
+
+Conversation summarization also lets you get narrative summaries from input conversations. A guided example scenario is provided below:
+
+1. Copy the command below into a text editor. The BASH example uses the `\` line continuation character. If your console or terminal uses a different line continuation character, use that character.
+
+```bash
+curl -i -X POST https://<your-language-resource-endpoint>/language/analyze-conversations/jobs?api-version=2022-10-01-preview \
+-H "Content-Type: application/json" \
+-H "Ocp-Apim-Subscription-Key: <your-language-resource-key>" \
+-d \
+'
+{
+ "displayName": "Conversation Task Example",
+ "analysisInput": {
+ "conversations": [
+ {
+ "conversationItems": [
+ {
+                        "text": "Hello, you’re chatting with Rene. How may I help you?",
+ "id": "1",
+ "role": "Agent",
+ "participantId": "Agent_1"
+ },
+ {
+                        "text": "Hi, I tried to set up wifi connection for Smart Brew 300 espresso machine, but it didn’t work.",
+ "id": "2",
+ "role": "Customer",
+ "participantId": "Customer_1"
+ },
+ {
+                        "text": "I’m sorry to hear that. Let’s see what we can do to fix this issue. Could you please try the following steps for me? First, could you push the wifi connection button, hold for 3 seconds, then let me know if the power light is slowly blinking on and off every second?",
+ "id": "3",
+ "role": "Agent",
+ "participantId": "Agent_1"
+ },
+ {
+ "text": "Yes, I pushed the wifi connection button, and now the power light is slowly blinking.",
+ "id": "4",
+ "role": "Customer",
+ "participantId": "Customer_1"
+ },
+ {
+ "text": "Great. Thank you! Now, please check in your Contoso Coffee app. Does it prompt to ask you to connect with the machine? ",
+ "id": "5",
+ "role": "Agent",
+ "participantId": "Agent_1"
+ },
+ {
+ "text": "No. Nothing happened.",
+ "id": "6",
+ "role": "Customer",
+ "participantId": "Customer_1"
+ },
+ {
+                        "text": "I’m very sorry to hear that. Let me see if there’s another way to fix the issue. Please hold on for a minute.",
+ "id": "7",
+ "role": "Agent",
+ "participantId": "Agent_1"
+ }
+ ],
+ "modality": "text",
+ "id": "conversation1",
+ "language": "en"
+ }
+ ]
+ },
+ "tasks": [
+ {
+ "taskName": "Conversation Task 1",
+ "kind": "ConversationalSummarizationTask",
+ "parameters": {
+ "summaryAspects": [
+ "narrative"
+ ]
+ }
+ }
+ ]
+}
+'
+```
+
+2. Make the following changes in the command where needed:
+- Replace the value `your-language-resource-key` with your key.
+- Replace the first part of the request URL `your-language-resource-endpoint` with your endpoint URL.
+
+3. Open a command prompt window (for example: BASH).
+
+4. Paste the command from the text editor into the command prompt window, then run the command.
+
+5. Get the `operation-location` from the response header. The value will look similar to the following URL:
+
+```http
+https://<your-language-resource-endpoint>/language/analyze-conversations/jobs/12345678-1234-1234-1234-12345678?api-version=2022-10-01-preview
+```
+
+6. To get the results of a request, use the following cURL command. Be sure to replace `<my-job-id>` with the GUID value you received from the previous `operation-location` response header:
+
+```curl
+curl -X GET https://<your-language-resource-endpoint>/language/analyze-conversations/jobs/<my-job-id>?api-version=2022-10-01-preview \
+-H "Content-Type: application/json" \
+-H "Ocp-Apim-Subscription-Key: <your-language-resource-key>"
+```
+
+Example narrative summarization JSON response:
+
+```json
+{
+ "jobId": "d874a98c-bf31-4ac5-8b94-5c236f786754",
+ "lastUpdatedDateTime": "2022-09-29T17:36:42Z",
+ "createdDateTime": "2022-09-29T17:36:39Z",
+ "expirationDateTime": "2022-09-30T17:36:39Z",
+ "status": "succeeded",
+ "errors": [],
+ "displayName": "Conversation Task Example",
+ "tasks": {
+ "completed": 1,
+ "failed": 0,
+ "inProgress": 0,
+ "total": 1,
+ "items": [
+ {
+ "kind": "conversationalSummarizationResults",
+ "taskName": "Conversation Task 1",
+ "lastUpdateDateTime": "2022-09-29T17:36:42.895694Z",
+ "status": "succeeded",
+ "results": {
+ "conversations": [
+ {
+ "summaries": [
+ {
+ "aspect": "narrative",
+ "text": "Agent_1 helps customer to set up wifi connection for Smart Brew 300 espresso machine.",
+ "contexts": [
+ { "conversationItemId": "1", "offset": 0, "length": 53 },
+ { "conversationItemId": "2", "offset": 0, "length": 94 },
+ { "conversationItemId": "3", "offset": 0, "length": 266 },
+ { "conversationItemId": "4", "offset": 0, "length": 85 },
+ { "conversationItemId": "5", "offset": 0, "length": 119 },
+ { "conversationItemId": "6", "offset": 0, "length": 21 },
+ { "conversationItemId": "7", "offset": 0, "length": 109 }
+ ]
+ }
+ ],
+ "id": "conversation1",
+ "warnings": []
+ }
+ ],
+ "errors": [],
+ "modelVersion": "latest"
+ }
+ }
+ ]
+ }
+}
+```
+
+For long conversations, the model might segment the conversation into multiple cohesive parts and summarize each segment. Each summary also includes a `contexts` field that indicates which range of the input conversation it was generated from.
+
+## Getting conversation issue and resolution summarization results
+
+The following text is an example of content you might submit for conversation issue and resolution summarization. This is only an example; the API can accept much longer input text. See [data limits](../../concepts/data-limits.md) for more information.
+
+**Agent**: "*Hello, how can I help you*?"
+
+**Customer**: "*How can I upgrade my Contoso subscription? I've been trying the entire day.*"
+
+**Agent**: "*Press the upgrade button please. Then sign in and follow the instructions.*"
+
+Summarization is performed upon receipt of the request by creating a job for the API backend. If the job succeeds, the output of the API is returned. The output is available for retrieval for 24 hours; after this time, it's purged. Due to multilingual and emoji support, the response may contain text offsets. See [how to process offsets](../../concepts/multilingual-emoji-support.md) for more information.
+
+In the above example, the API might return the following summarized sentences:
+
+|Summarized text | Aspect |
+||-|
+| "Customer wants to upgrade their subscription. Customer doesn't know how." | issue |
+| "Customer needs to press upgrade button, and sign in." | resolution |
+
+## See also
+
+* [Summarization overview](../overview.md)
ai-services Document Summarization https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/language-service/summarization/how-to/document-summarization.md
+
+ Title: Summarize text with the extractive summarization API
+
+description: This article will show you how to summarize text with the extractive summarization API.
++++++ Last updated : 09/26/2022++++
+# How to use document summarization
+
+Document summarization is designed to shorten content that users consider too long to read. Both extractive and abstractive summarization condense articles, papers, or documents to key sentences.
+
+**Extractive summarization**: Produces a summary by extracting sentences that collectively represent the most important or relevant information within the original content.
+
+**Abstractive summarization**: Produces a summary by generating summarized sentences from the document that capture the main idea.
+
+The AI models used by the API are provided by the service; you just have to send content for analysis.
+
+## Features
+
+> [!TIP]
+> If you want to start using these features, you can follow the [quickstart article](../quickstart.md) to get started. You can also make example requests using [Language Studio](../../language-studio.md) without needing to write code.
+
+The document summarization API uses natural language processing techniques to locate key sentences in an unstructured text document. These sentences collectively convey the main idea of the document.
+
+Document summarization returns a rank score as part of the system response, along with the extracted sentences and their position in the original documents. A rank score indicates how relevant a sentence is to the main idea of the document. The model gives each sentence a score between 0 and 1 (inclusive) and returns the highest-scored sentences per request. For example, if you request a three-sentence summary, the service returns the three highest-scored sentences.
+
+There is another feature in Azure AI Language, [key phrase extraction](./../../key-phrase-extraction/how-to/call-api.md), that can extract key information. When deciding between key phrase extraction and extractive summarization, consider the following:
+* Key phrase extraction returns phrases while extractive summarization returns sentences.
+* Extractive summarization returns sentences together with a rank score, and the top-ranked sentences are returned per request.
+* Extractive summarization also returns the following positional information:
+ * Offset: The start position of each extracted sentence.
+ * Length: The length of each extracted sentence.
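+
+To make the positional information concrete, the following sketch shows roughly how a single extracted sentence might appear in the response. The field names (`text`, `rankScore`, `offset`, `length`) and the values are illustrative approximations based on the description above, not the authoritative response schema.
+
+```json
+{
+  "sentences": [
+    {
+      "text": "At Microsoft, we have been on a quest to advance AI beyond existing techniques.",
+      "rankScore": 0.95,
+      "offset": 0,
+      "length": 80
+    }
+  ],
+  "id": "1",
+  "warnings": []
+}
+```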
++
+## Determine how to process the data (optional)
+
+### Submitting data
+
+You submit documents to the API as strings of text. Analysis is performed upon receipt of the request. Because the API is [asynchronous](../../concepts/use-asynchronously.md), there may be a delay between sending an API request, and receiving the results.
+
+When using this feature, the API results are available for 24 hours from the time the request was ingested, as indicated in the response. After this time period, the results are purged and are no longer available for retrieval.
+
+### Getting document summarization results
+
+When you get results from document summarization, you can stream the results to an application or save the output to a file on the local system.
+
+The following is an example of content you might submit for summarization, which is extracted from the Microsoft blog article [A holistic representation toward integrative AI](https://www.microsoft.com/research/blog/a-holistic-representation-toward-integrative-ai/). This article is only an example; the API can accept much longer input text. See the data limits section for more information.
+
+*"At Microsoft, we have been on a quest to advance AI beyond existing techniques, by taking a more holistic, human-centric approach to learning and understanding. As Chief Technology Officer of Azure AI services, I have been working with a team of amazing scientists and engineers to turn this quest into a reality. In my role, I enjoy a unique perspective in viewing the relationship among three attributes of human cognition: monolingual text (X), audio or visual sensory signals, (Y) and multilingual (Z). At the intersection of all three, there's magic—what we call XYZ-code as illustrated in Figure 1—a joint representation to create more powerful AI that can speak, hear, see, and understand humans better. We believe XYZ-code will enable us to fulfill our long-term vision: cross-domain transfer learning, spanning modalities and languages. The goal is to have pre-trained models that can jointly learn representations to support a broad range of downstream AI tasks, much in the way humans do today. Over the past five years, we have achieved human performance on benchmarks in conversational speech recognition, machine translation, conversational question answering, machine reading comprehension, and image captioning. These five breakthroughs provided us with strong signals toward our more ambitious aspiration to produce a leap in AI capabilities, achieving multi-sensory and multilingual learning that is closer in line with how humans learn and understand. I believe the joint XYZ-code is a foundational component of this aspiration, if grounded with external knowledge sources in the downstream AI tasks."*
+
+The document summarization API request is processed upon receipt by creating a job for the API backend. If the job succeeds, the output of the API is returned. The output is available for retrieval for 24 hours; after this time, it's purged. Due to multilingual and emoji support, the response may contain text offsets. See [how to process offsets](../../concepts/multilingual-emoji-support.md) for more information.
+
+Using the above example, the API might return the following summarized sentences:
+
+**Extractive summarization**:
+- "At Microsoft, we have been on a quest to advance AI beyond existing techniques, by taking a more holistic, human-centric approach to learning and understanding."
+- "We believe XYZ-code will enable us to fulfill our long-term vision: cross-domain transfer learning, spanning modalities and languages."
+- "The goal is to have pre-trained models that can jointly learn representations to support a broad range of downstream AI tasks, much in the way humans do today."
+
+**Abstractive summarization**:
+- "Microsoft is taking a more holistic, human-centric approach to learning and understanding. We believe XYZ-code will enable us to fulfill our long-term vision: cross-domain transfer learning, spanning modalities and languages. Over the past five years, we have achieved human performance on benchmarks in."
+
+### Try document extractive summarization
+
+You can use document extractive summarization to get summaries of articles, papers, or documents. To see an example, see the [quickstart article](../quickstart.md).
+
+You can use the `sentenceCount` parameter to specify how many sentences will be returned, with `3` being the default. The range is from 1 to 20.
+
+You can also use the `sortby` parameter to specify in what order the extracted sentences will be returned - either `Offset` or `Rank`, with `Offset` being the default.
+
+|parameter value |Description |
+|||
+|Rank | Order sentences according to their relevance to the input document, as decided by the service. |
+|Offset | Keeps the original order in which the sentences appear in the input document. |
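+
+For reference, the sketch below shows how these parameters might be combined in a request body. The `ExtractiveSummarization` task kind and the `sortBy` spelling are assumptions patterned after the abstractive example later in this article; see the [quickstart article](../quickstart.md) for the exact request format.
+
+```json
+{
+  "displayName": "Document Extractive Summarization Task Example",
+  "analysisInput": {
+    "documents": [
+      {
+        "id": "1",
+        "language": "en",
+        "text": "At Microsoft, we have been on a quest to advance AI beyond existing techniques, by taking a more holistic, human-centric approach to learning and understanding."
+      }
+    ]
+  },
+  "tasks": [
+    {
+      "kind": "ExtractiveSummarization",
+      "taskName": "Document Extractive Summarization Task 1",
+      "parameters": {
+        "sentenceCount": 3,
+        "sortBy": "Rank"
+      }
+    }
+  ]
+}
+```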
+
+### Try document abstractive summarization
+
+[Reference documentation](https://go.microsoft.com/fwlink/?linkid=2211684)
+
+The following example will get you started with document abstractive summarization:
+
+1. Copy the command below into a text editor. The BASH example uses the `\` line continuation character. If your console or terminal uses a different line continuation character, use that character instead.
+
+```bash
+curl -i -X POST https://<your-language-resource-endpoint>/language/analyze-text/jobs?api-version=2022-10-01-preview \
+-H "Content-Type: application/json" \
+-H "Ocp-Apim-Subscription-Key: <your-language-resource-key>" \
+-d \
+'
+{
+ "displayName": "Document Abstractive Summarization Task Example",
+ "analysisInput": {
+ "documents": [
+ {
+ "id": "1",
+ "language": "en",
+        "text": "At Microsoft, we have been on a quest to advance AI beyond existing techniques, by taking a more holistic, human-centric approach to learning and understanding. As Chief Technology Officer of Azure AI services, I have been working with a team of amazing scientists and engineers to turn this quest into a reality. In my role, I enjoy a unique perspective in viewing the relationship among three attributes of human cognition: monolingual text (X), audio or visual sensory signals, (Y) and multilingual (Z). At the intersection of all three, there's magic—what we call XYZ-code as illustrated in Figure 1—a joint representation to create more powerful AI that can speak, hear, see, and understand humans better. We believe XYZ-code will enable us to fulfill our long-term vision: cross-domain transfer learning, spanning modalities and languages. The goal is to have pre-trained models that can jointly learn representations to support a broad range of downstream AI tasks, much in the way humans do today. Over the past five years, we have achieved human performance on benchmarks in conversational speech recognition, machine translation, conversational question answering, machine reading comprehension, and image captioning. These five breakthroughs provided us with strong signals toward our more ambitious aspiration to produce a leap in AI capabilities, achieving multi-sensory and multilingual learning that is closer in line with how humans learn and understand. I believe the joint XYZ-code is a foundational component of this aspiration, if grounded with external knowledge sources in the downstream AI tasks."
+ }
+ ]
+ },
+ "tasks": [
+ {
+ "kind": "AbstractiveSummarization",
+ "taskName": "Document Abstractive Summarization Task 1",
+ "parameters": {
+ "sentenceCount": 1
+ }
+ }
+ ]
+}
+'
+```
+
+If you don't specify `sentenceCount`, the model determines the summary length. Note that `sentenceCount` is an approximation of the number of sentences in the output summary, with a range of 1 to 20.
+
+2. Make the following changes in the command where needed:
+- Replace the value `your-language-resource-key` with your key.
+- Replace the first part of the request URL `your-language-resource-endpoint` with your endpoint URL.
+
+3. Open a command prompt window (for example: BASH).
+
+4. Paste the command from the text editor into the command prompt window, then run the command.
+
+5. Get the `operation-location` from the response header. The value will look similar to the following URL:
+
+```http
+https://<your-language-resource-endpoint>/language/analyze-text/jobs/12345678-1234-1234-1234-12345678?api-version=2022-10-01-preview
+```
+
+6. To get the results of the request, use the following cURL command. Be sure to replace `<my-job-id>` with the GUID value you received from the previous `operation-location` response header:
+
+```bash
+curl -X GET https://<your-language-resource-endpoint>/language/analyze-text/jobs/<my-job-id>?api-version=2022-10-01-preview \
+-H "Content-Type: application/json" \
+-H "Ocp-Apim-Subscription-Key: <your-language-resource-key>"
+```
+++
+### Abstractive document summarization example JSON response
+
+```json
+{
+ "jobId": "cd6418fe-db86-4350-aec1-f0d7c91442a6",
+ "lastUpdateDateTime": "2022-09-08T16:45:14Z",
+ "createdDateTime": "2022-09-08T16:44:53Z",
+ "expirationDateTime": "2022-09-09T16:44:53Z",
+ "status": "succeeded",
+ "errors": [],
+ "displayName": "Document Abstractive Summarization Task Example",
+ "tasks": {
+ "completed": 1,
+ "failed": 0,
+ "inProgress": 0,
+ "total": 1,
+ "items": [
+ {
+ "kind": "AbstractiveSummarizationLROResults",
+ "taskName": "Document Abstractive Summarization Task 1",
+ "lastUpdateDateTime": "2022-09-08T16:45:14.0717206Z",
+ "status": "succeeded",
+ "results": {
+ "documents": [
+ {
+ "summaries": [
+ {
+ "text": "Microsoft is taking a more holistic, human-centric approach to AI. We've developed a joint representation to create more powerful AI that can speak, hear, see, and understand humans better. We've achieved human performance on benchmarks in conversational speech recognition, machine translation, ...... and image captions.",
+ "contexts": [
+ {
+ "offset": 0,
+ "length": 247
+ }
+ ]
+ }
+ ],
+ "id": "1"
+ }
+ ],
+ "errors": [],
+ "modelVersion": "latest"
+ }
+ }
+ ]
+ }
+}
+```
+
+|parameter |Description |
+|||
+|`-X POST <endpoint>` | Specifies your endpoint for accessing the API. |
+|`-H Content-Type: application/json` | The content type for sending JSON data. |
+|`-H "Ocp-Apim-Subscription-Key: <key>"` | Specifies the key for accessing the API. |
+|`-d <documents>` | The JSON containing the documents you want to send. |
+
+The following cURL commands are executed from a BASH shell. Edit these commands with your own resource name, resource key, and JSON values.
++
+## Service and data limits
++
+## See also
+
+* [Summarization overview](../overview.md)
ai-services Language Support https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/language-service/summarization/language-support.md
+
+ Title: Summarization language support
+
+description: Learn about which languages are supported by document summarization.
++++++ Last updated : 09/28/2022++++
+# Summarization language support
+
+Use this article to learn which natural languages are supported by document and conversation summarization.
+
+# [Document summarization](#tab/document-summarization)
+
+## Languages supported by extractive and abstractive document summarization
+
+| Language | Language code | Notes |
+|--|||
+| Chinese-Simplified | `zh-hans` | `zh` also accepted |
+| English | `en` | |
+| French | `fr` | |
+| German | `de` | |
+| Italian | `it` | |
+| Japanese | `ja` | |
+| Korean | `ko` | |
+| Spanish | `es` | |
+| Portuguese | `pt` | |
+
+# [Conversation summarization](#tab/conversation-summarization)
+
+## Languages supported by conversation summarization
+
+Conversation summarization supports the following languages:
+
+| Language | Language code | Notes |
+|--|||
+| English | `en` | |
+++
+## Languages supported by custom summarization
+
+Custom summarization supports the following languages:
+
+| Language | Language code | Notes |
+|--|||
+| English | `en` | |
+
+## Next steps
+
+* [Summarization overview](overview.md)
ai-services Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/language-service/summarization/overview.md
+
+ Title: What is document and conversation summarization (preview)?
+
+description: Learn about summarizing text.
++++++ Last updated : 01/12/2023++++
+# What is document and conversation summarization?
++
+Summarization is one of the features offered by [Azure AI Language](../overview.md), a collection of machine learning and AI algorithms in the cloud for developing intelligent applications that involve written language. Use this article to learn more about this feature, and how to use it in your applications.
+
+Custom Summarization enables users to build custom AI models to summarize unstructured text, such as contracts or novels. By creating a Custom Summarization project, developers can iteratively label data, train, evaluate, and improve model performance before making it available for consumption. The quality of the labeled data greatly impacts model performance. To simplify building and customizing your model, the service offers a custom web portal that can be accessed through the [Language studio](https://aka.ms/languageStudio). You can easily get started with the service by following the steps in this [quickstart](custom/quickstart.md).
+
+# [Document summarization](#tab/document-summarization)
+
+This documentation contains the following article types:
+
+* **[Quickstarts](quickstart.md?pivots=rest-api&tabs=document-summarization)** are getting-started instructions to guide you through making requests to the service.
+* **[How-to guides](how-to/document-summarization.md)** contain instructions for using the service in more specific or customized ways.
+
+Document summarization uses natural language processing techniques to generate a summary for documents. There are two general approaches to automatic summarization, both of which are supported by the API: extractive and abstractive.
+
+Extractive summarization extracts sentences that collectively represent the most important or relevant information within the original content. Abstractive summarization generates a summary with concise, coherent sentences or words that are not simply extracted sentences from the original document. These features are designed to shorten content that could be considered too long to read.
+
+## Key features
+
+There are two types of document summarization this API provides:
+
+* **Extractive summarization**: Produces a summary by extracting salient sentences within the document.
+    * Multiple extracted sentences: These sentences collectively convey the main idea of the document. They're original sentences extracted from the input document's content.
+ * Rank score: The rank score indicates how relevant a sentence is to a document's main topic. Document summarization ranks extracted sentences, and you can determine whether they're returned in the order they appear, or according to their rank.
+    * Multiple returned sentences: Determine the maximum number of sentences to be returned. For example, if you request a three-sentence summary, extractive summarization returns the three highest-scored sentences.
+ * Positional information: The start position and length of extracted sentences.
+* **Abstractive summarization**: Generates a summary that may not use the same words as those in the document, but captures the main idea.
+ * Summary texts: Abstractive summarization returns a summary for each contextual input range within the document. A long document may be segmented so multiple groups of summary texts may be returned with their contextual input range.
+ * Contextual input range: The range within the input document that was used to generate the summary text.
+
+As an example, consider the following paragraph of text:
+
+*"At Microsoft, we have been on a quest to advance AI beyond existing techniques, by taking a more holistic, human-centric approach to learning and understanding. As Chief Technology Officer of Azure AI services, I have been working with a team of amazing scientists and engineers to turn this quest into a reality. In my role, I enjoy a unique perspective in viewing the relationship among three attributes of human cognition: monolingual text (X), audio or visual sensory signals, (Y) and multilingual (Z). At the intersection of all three, there's magic—what we call XYZ-code as illustrated in Figure 1—a joint representation to create more powerful AI that can speak, hear, see, and understand humans better. We believe XYZ-code will enable us to fulfill our long-term vision: cross-domain transfer learning, spanning modalities and languages. The goal is to have pre-trained models that can jointly learn representations to support a broad range of downstream AI tasks, much in the way humans do today. Over the past five years, we have achieved human performance on benchmarks in conversational speech recognition, machine translation, conversational question answering, machine reading comprehension, and image captioning. These five breakthroughs provided us with strong signals toward our more ambitious aspiration to produce a leap in AI capabilities, achieving multi-sensory and multilingual learning that is closer in line with how humans learn and understand. I believe the joint XYZ-code is a foundational component of this aspiration, if grounded with external knowledge sources in the downstream AI tasks."*
+
+The document summarization API request is processed upon receipt by creating a job for the API backend. If the job succeeds, the output of the API is returned. The output is available for retrieval for 24 hours; after this time, it's purged. Due to multilingual and emoji support, the response may contain text offsets. See [how to process offsets](../concepts/multilingual-emoji-support.md) for more information.
+
+Using the above example, the API might return the following summarized sentences:
+
+**Extractive summarization**:
+- "At Microsoft, we have been on a quest to advance AI beyond existing techniques, by taking a more holistic, human-centric approach to learning and understanding."
+- "We believe XYZ-code will enable us to fulfill our long-term vision: cross-domain transfer learning, spanning modalities and languages."
+- "The goal is to have pre-trained models that can jointly learn representations to support a broad range of downstream AI tasks, much in the way humans do today."
+
+**Abstractive summarization**:
+- "Microsoft is taking a more holistic, human-centric approach to learning and understanding. We believe XYZ-code will enable us to fulfill our long-term vision: cross-domain transfer learning, spanning modalities and languages. Over the past five years, we have achieved human performance on benchmarks in."
+
+# [Conversation summarization](#tab/conversation-summarization)
+
+> [!IMPORTANT]
+> Conversation summarization is only available in English.
+
+This documentation contains the following article types:
+
+* **[Quickstarts](quickstart.md?pivots=rest-api&tabs=conversation-summarization)** are getting-started instructions to guide you through making requests to the service.
+* **[How-to guides](how-to/conversation-summarization.md)** contain instructions for using the service in more specific or customized ways.
+
+## Key features
+
+Conversation summarization supports the following features:
+
+* **Issue/resolution summarization**: A call center specific feature that gives a summary of issues and resolutions in conversations between customer-service agents and your customers.
+* **Chapter title summarization**: Gives suggested chapter titles of the input conversation.
+* **Narrative summarization**: Gives call notes, meeting notes or chat summaries of the input conversation.
+
+## When to use issue and resolution summarization
+
+* When there are aspects of an “issue” and “resolution”, such as:
+ * The reason for a service chat/call (the issue).
+    * The resolution for the issue.
+* You only want a summary that focuses on related information about issues and resolutions.
+* When there are two participants in the conversation, and you want to summarize what each had said.
+
+As an example, consider the following example conversation:
+
+**Agent**: "*Hello, you're chatting with Rene. How may I help you?*"
+
+**Customer**: "*Hi, I tried to set up wifi connection for Smart Brew 300 espresso machine, but it didn't work.*"
+
+**Agent**: "*I'm sorry to hear that. Let's see what we can do to fix this issue. Could you push the wifi connection button, hold for 3 seconds, then let me know if the power light is slowly blinking?*"
+
+**Customer**: "*Yes, I pushed the wifi connection button, and now the power light is slowly blinking.*"
+
+**Agent**: "*Great. Thank you! Now, please check in your Contoso Coffee app. Does it prompt to ask you to connect with the machine?*"
+
+**Customer**: "*No. Nothing happened.*"
+
+**Agent**: "*I see. Thanks. Let's try if a factory reset can solve the issue. Could you please press and hold the center button for 5 seconds to start the factory reset.*"
+
+**Customer**: *"I've tried the factory reset and followed the above steps again, but it still didn't work."*
+
+**Agent**: "*I'm very sorry to hear that. Let me see if there's another way to fix the issue. Please hold on for a minute.*"
+
+The conversation summarization feature would simplify the text into the following:
+
+|Example summary | Format | Conversation aspect |
+||-|-|
+| Customer wants to use the wifi connection on their Smart Brew 300. But it didn't work. | One or two sentences | issue |
+| Checked if the power light is blinking slowly. Checked the Contoso coffee app. It had no prompt. Tried to do a factory reset. | One or more sentences, generated from multiple lines of the transcript. | resolution |
+++
+## Get started with summarization
+++
+## Input requirements and service limits
+
+# [Document summarization](#tab/document-summarization)
+
+* Summarization takes raw unstructured text for analysis. See [Data and service limits](../concepts/data-limits.md) in the how-to guide for more information.
+* Summarization works with a variety of written languages. See [language support](language-support.md?tabs=document-summarization) for more information.
+
+# [Conversation summarization](#tab/conversation-summarization)
+
+* Conversation summarization takes structured text for analysis. See the [data and service limits](../concepts/data-limits.md) for more information.
+* Conversation summarization accepts text in English. See [language support](language-support.md?tabs=conversation-summarization) for more information.
+++
+## Reference documentation and code samples
+
+As you use document summarization in your applications, see the following reference documentation and samples for Azure AI Language:
+
+|Development option / language |Reference documentation |Samples |
+||||
+|REST API | [REST API documentation](https://go.microsoft.com/fwlink/?linkid=2211684) | |
+|C# | [C# documentation](/dotnet/api/azure.ai.textanalytics?view=azure-dotnet-preview&preserve-view=true) | [C# samples](https://github.com/Azure/azure-sdk-for-net/tree/master/sdk/textanalytics/Azure.AI.TextAnalytics/samples) |
+| Java | [Java documentation](/java/api/overview/azure/ai-textanalytics-readme?view=azure-java-preview&preserve-view=true) | [Java Samples](https://github.com/Azure/azure-sdk-for-java/tree/main/sdk/textanalytics/azure-ai-textanalytics/src/samples) |
+|JavaScript | [JavaScript documentation](/javascript/api/overview/azure/ai-text-analytics-readme?view=azure-node-preview&preserve-view=true) | [JavaScript samples](https://github.com/Azure/azure-sdk-for-js/tree/main/sdk/textanalytics/ai-text-analytics/samples/v5) |
+|Python | [Python documentation](/python/api/overview/azure/ai-textanalytics-readme?view=azure-python-preview&preserve-view=true) | [Python samples](https://github.com/Azure/azure-sdk-for-python/tree/main/sdk/textanalytics/azure-ai-textanalytics/samples) |
+
+## Responsible AI
+
+An AI system includes not only the technology, but also the people who will use it, the people who will be affected by it, and the environment in which it's deployed. Read the [transparency note for summarization](/legal/cognitive-services/language-service/transparency-note-extractive-summarization?context=/azure/ai-services/language-service/context/context) to learn about responsible AI use and deployment in your systems. You can also see the following articles for more information:
+
+* [Transparency note for Azure AI Language](/legal/cognitive-services/language-service/transparency-note?context=/azure/ai-services/language-service/context/context)
+* [Integration and responsible use](/legal/cognitive-services/language-service/guidance-integration-responsible-use-summarization?context=/azure/ai-services/language-service/context/context)
+* [Characteristics and limitations of summarization](/legal/cognitive-services/language-service/characteristics-and-limitations-summarization?context=/azure/ai-services/language-service/context/context)
+* [Data, privacy, and security](/legal/cognitive-services/language-service/data-privacy?context=/azure/ai-services/language-service/context/context)
ai-services Quickstart https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/language-service/summarization/quickstart.md
+
+ Title: "Quickstart: Use Document Summarization"
+
+description: Use this quickstart to start using Document Summarization.
++++++ Last updated : 02/17/2023+
+ms.devlang: csharp, java, javascript, python
+
+zone_pivot_groups: programming-languages-text-analytics
++
+# Quickstart: using document summarization and conversation summarization
+++++++++++++++++
+## Clean up resources
+
+If you want to clean up and remove an Azure AI services subscription, you can delete the resource or resource group. Deleting the resource group also deletes any other resources associated with it.
+
+* [Portal](../../multi-service-resource.md?pivots=azportal#clean-up-resources)
+* [Azure CLI](../../multi-service-resource.md?pivots=azcli#clean-up-resources)
+++
+## Next steps
+
+* [How to call document summarization](./how-to/document-summarization.md)
+* [How to call conversation summarization](./how-to/conversation-summarization.md)
ai-services Region Support https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/language-service/summarization/region-support.md
+
+ Title: Summarization region support
+
+description: Learn about which regions are supported by document summarization.
+++++++ Last updated : 06/11/2023+++
+# Regional availability
+
+Some summarization features are only available in limited regions. More regions will be added to this list as they become available.
+
+## Regional availability table
+
+|Region|Document abstractive summarization|Conversation issue and resolution summarization|Conversation narrative summarization with chapters|Custom summarization|
+|--|--|--|--|--|
+|North Europe|&#9989;|&#9989;|&#9989;|&#10060;|
+|East US|&#9989;|&#9989;|&#9989;|&#9989;|
+|UK South|&#9989;|&#9989;|&#9989;|&#10060;|
+|Southeast Asia|&#9989;|&#9989;|&#9989;|&#10060;|
+
+## Next steps
+
+* [Summarization overview](overview.md)
ai-services Assertion Detection https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/language-service/text-analytics-for-health/concepts/assertion-detection.md
+
+ Title: Assertion detection in Text Analytics for health
+
+description: Learn about assertion detection.
++++++ Last updated : 01/04/2023++++
+# Assertion detection
+
+The meaning of medical content is highly affected by modifiers, such as negative or conditional assertions, which can have critical implications if misrepresented. Text Analytics for health supports four categories of assertion detection for entities in the text:
+
+* Certainty
+* Conditional
+* Association
+* Temporal
+
+## Assertion output
+
+Text Analytics for health returns assertion modifiers, which are informative attributes assigned to medical concepts that provide a deeper understanding of the concepts' context within the text. These modifiers are divided into four categories, each focusing on a different aspect and containing a set of mutually exclusive values. Only one value per category is assigned to each entity. The most common value for each category is the Default value. The service's output response contains only assertion modifiers that are different from the default value. In other words, if no assertion is returned, the implied assertion is the default value.
+
+**CERTAINTY** - provides information regarding the presence (present vs. absent) of the concept and how certain the text is regarding its presence (definite vs. possible).
+* **Positive** [Default]: the concept exists or has happened.
+* **Negative**: the concept does not exist now or never happened.
+* **Positive_Possible**: the concept likely exists but there is some uncertainty.
+* **Negative_Possible**: the concept's existence is unlikely but there is some uncertainty.
+* **Neutral_Possible**: the concept may or may not exist without a tendency to either side.
+
+An example of assertion detection is shown below where a negated entity is returned with a negative value for the certainty category:
+
+```json
+{
+ "offset": 381,
+ "length": 3,
+ "text": "SOB",
+ "category": "SymptomOrSign",
+ "confidenceScore": 0.98,
+ "assertion": {
+ "certainty": "negative"
+ },
+ "name": "Dyspnea",
+ "links": [
+ {
+ "dataSource": "UMLS",
+ "id": "C0013404"
+ },
+ {
+ "dataSource": "AOD",
+ "id": "0000005442"
+ },
+ ...
+}
+```
+
+**CONDITIONALITY** - provides information regarding whether the existence of a concept depends on certain conditions.
+* **None** [Default]: the concept is a fact and not hypothetical and does not depend on certain conditions.
+* **Hypothetical**: the concept may develop or occur in the future.
+* **Conditional**: the concept exists or occurs only under certain conditions.
+
+**ASSOCIATION** - describes whether the concept is associated with the subject of the text or someone else.
+* **Subject** [Default]: the concept is associated with the subject of the text, usually the patient.
+* **Other**: the concept is associated with someone who is not the subject of the text.
+
+**TEMPORAL** - provides additional temporal information for a concept detailing whether it is an occurrence related to the past, present, or future.
+* **Current** [Default]: the concept is related to conditions/events that belong to the current encounter. For example, medical symptoms that have brought the patient to seek medical attention (e.g., “started having headaches 5 days prior to their arrival to the ER”). This includes newly made diagnoses, symptoms experienced during or leading to this encounter, treatments and examinations done within the encounter.
+* **Past**: the concept is related to conditions, examinations, treatments, medication events that are mentioned as something that existed or happened prior to the current encounter, as might be indicated by hints like s/p, recently, ago, previously, in childhood, at age X. For example, diagnoses that were given in the past, treatments that were done, past examinations and their results, past admissions, etc. Medical background is considered as PAST.
+* **Future**: the concept is related to conditions/events that are planned/scheduled/suspected to happen in the future, e.g., will be obtained, will undergo, is scheduled in two weeks from now.
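+
+As a rough illustration, an entity that carries several non-default modifiers might include an `assertion` object like the sketch below. The property names `conditionality`, `association`, and `temporality` are assumptions patterned after the `certainty` example above; check the API reference for the authoritative schema.
+
+```json
+"assertion": {
+  "conditionality": "hypothetical",
+  "association": "other",
+  "temporality": "past"
+}
+```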
+++
+## Next steps
+
+[How to call the Text Analytics for health](../how-to/call-api.md)
ai-services Health Entity Categories https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/language-service/text-analytics-for-health/concepts/health-entity-categories.md
+
+ Title: Entity categories recognized by Text Analytics for health
+
+description: Learn about categories recognized by Text Analytics for health
++++++ Last updated : 01/04/2023++++
+# Supported Text Analytics for health entity categories
+
+Text Analytics for health processes and extracts insights from unstructured medical data. The service detects and surfaces medical concepts, assigns assertions to concepts, infers semantic relations between concepts and links them to common medical ontologies.
+
+Text Analytics for health detects medical concepts that fall under the following categories.
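+
+In the API response, each detected concept is returned as an entity whose category field holds one of the values described below. The following is an illustrative sketch only; the Pascal-case JSON form of the category names (for example, `MedicationName`) and the field values shown are assumptions.
+
+```json
+{
+  "offset": 25,
+  "length": 9,
+  "text": "ibuprofen",
+  "category": "MedicationName",
+  "confidenceScore": 0.99
+}
+```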
+
+## Anatomy
+
+### Entities
+
+**BODY_STRUCTURE** - Body systems, anatomic locations or regions, and body sites. For example, arm, knee, abdomen, nose, liver, head, respiratory system, lymphocytes.
++
+## Demographics
+
+### Entities
+
+**AGE** - All age terms and phrases, including ones for patients, family members, and others. For example, 40-year-old, 51 yo, 3 months old, adult, infant, elderly, young, minor, middle-aged.
+
+**ETHNICITY** - Phrases that indicate the ethnicity of the subject. For example, African American or Asian.
+++
+**GENDER** - Terms that disclose the gender of the subject. For example, male, female, woman, gentleman, lady.
++
+## Examinations
+
+### Entities
+
+**EXAMINATION_NAME** - Diagnostic procedures and tests, including vital signs and body measurements. For example, MRI, ECG, HIV test, hemoglobin, platelets count, scale systems such as *Bristol stool scale*.
++
+## External Influence
+
+### Entities
+
+**ALLERGEN** - An antigen triggering an allergic reaction. For example, cats, peanuts.
+++
+## General attributes
+
+### Entities
+
+**COURSE** - Description of a change in another entity over time, such as condition progression (for example: improvement, worsening, resolution, remission), a course of treatment or medication (for example: increase in medication dosage).
++
+**DATE** - Full date relating to a medical condition, examination, treatment, medication, or administrative event.
++
+**DIRECTION** - Directional terms that may relate to a body structure, medical condition, examination, or treatment, such as: left, lateral, upper, posterior.
++
+**FREQUENCY** - Describes how often a medical condition, examination, treatment, or medication occurred, occurs, or should occur.
+++
+**TIME** - Temporal terms relating to the beginning and/or length (duration) of a medical condition, examination, treatment, medication, or administrative event.
+
+**MEASUREMENT_UNIT** - The unit of measurement related to an examination or a medical condition measurement.
+
+**MEASUREMENT_VALUE** - The value related to an examination or a medical condition measurement.
++
+**RELATIONAL_OPERATOR** - Phrases that express the quantitative relation between an entity and some additional information.
++
+## Genomics
+
+### Entities
+
+**VARIANT** - All mentions of gene variations and mutations. For example, `c.524C>T`, `(MTRR):r.1462_1557del96`
+
+**GENE_OR_PROTEIN** - All mentions of names and symbols of human genes as well as chromosomes and parts of chromosomes and proteins. For example, MTRR, F2.
+
+**MUTATION_TYPE** - Description of the mutation, including its type, effect, and location. For example, trisomy, germline mutation, loss of function.
++
+**EXPRESSION** - Gene expression level. For example, positive for-, negative for-, overexpressed, detected in high/low levels, elevated.
+++
+## Healthcare
+
+### Entities
+
+**ADMINISTRATIVE_EVENT** - Events that relate to the healthcare system but of an administrative/semi-administrative nature. For example, registration, admission, trial, study entry, transfer, discharge, hospitalization, hospital stay.
+
+**CARE_ENVIRONMENT** - An environment or location where patients are given care. For example, emergency room, physician's office, cardio unit, hospice, hospital.
++
+**HEALTHCARE_PROFESSION** - A licensed or non-licensed healthcare practitioner. For example, dentist, pathologist, neurologist, radiologist, pharmacist, nutritionist, physical therapist, chiropractor.
++
+## Medical condition
+
+### Entities
+
+**DIAGNOSIS** - Disease, syndrome, poisoning. For example, breast cancer, Alzheimer's, HTN, CHF, spinal cord injury.
+
+**SYMPTOM_OR_SIGN** - Subjective or objective evidence of disease or other diagnoses. For example, chest pain, headache, dizziness, rash, SOB, abdomen was soft, good bowel sounds, well nourished.
++
+**CONDITION_QUALIFIER** - Qualitative terms that are used to describe a medical condition. All the following subcategories are considered qualifiers:
+
+* Time-related expressions: terms that describe the time dimension qualitatively, such as sudden, acute, chronic, longstanding.
+* Quality expressions: terms that describe the “nature” of the medical condition, such as burning, sharp.
+* Severity expressions: severe, mild, a bit, uncontrolled.
+* Extensivity expressions: local, focal, diffuse.
++
+**CONDITION_SCALE** - Qualitative terms that characterize the condition by a scale, which is a finite ordered list of values.
++
+## Medication
+
+### Entities
+
+**MEDICATION_CLASS** - A set of medications that have a similar mechanism of action, a related mode of action, a similar chemical structure, and/or are used to treat the same disease. For example, ACE inhibitor, opioid, antibiotics, pain relievers.
++
+**MEDICATION_NAME** - Medication mentions, including copyrighted brand names and non-brand names. For example, Ibuprofen.
+
+**DOSAGE** - Amount of medication ordered. For example, Infuse Sodium Chloride solution *1000 mL*.
+
+**MEDICATION_FORM** - The form of the medication. For example, solution, pill, capsule, tablet, patch, gel, paste, foam, spray, drops, cream, syrup.
++
+**MEDICATION_ROUTE** - The administration method of medication. For example, oral, topical, inhaled.
++
+## Social
+
+### Entities
+
+**FAMILY_RELATION** - Mentions of family relatives of the subject. For example, father, daughter, siblings, parents.
++
+**EMPLOYMENT** - Mentions of employment status including specific profession, such as unemployed, retired, firefighter, student.
++
+**LIVING_STATUS** - Mentions of the housing situation, including homeless, living with parents, living alone, living with others.
++
+**SUBSTANCE_USE** - Mentions of use of legal or illegal drugs, tobacco or alcohol. For example, smoking, drinking, or heroin use.
++
+**SUBSTANCE_USE_AMOUNT** - Mentions of specific amounts of substance use. For example, a pack (of cigarettes) or a few glasses (of wine).
+++
+## Treatment
+
+### Entities
+
+**TREATMENT_NAME** - Therapeutic procedures. For example, knee replacement surgery, bone marrow transplant, TAVI, diet.
++++
+## Next steps
+
+* [How to call the Text Analytics for health](../how-to/call-api.md)
ai-services Relation Extraction https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/language-service/text-analytics-for-health/concepts/relation-extraction.md
+
+ Title: Relation extraction in Text Analytics for health
+
+description: Learn about relation extraction
++++++ Last updated : 01/04/2023++++
+# Relation extraction
+
+Text Analytics for health features relation extraction, which is used to identify meaningful connections between concepts, or entities, mentioned in the text. For example, a "time of condition" relation is found by associating a condition name with a time. Another example is a "dosage of medication" relation, which is found by relating an extracted medication to its extracted dosage. The following example shows how relations are expressed in the JSON output.
+
+> [!NOTE]
+> * Relations referring to CONDITION may refer to either the DIAGNOSIS entity type or the SYMPTOM_OR_SIGN entity type.
+> * Relations referring to MEDICATION may refer to either the MEDICATION_NAME entity type or the MEDICATION_CLASS entity type.
+> * Relations referring to TIME may refer to either the TIME entity type or the DATE entity type.
+
+Relation extraction output contains URI references and assigned roles of the entities of the relation type. For example, in the following JSON:
+
+```json
+ "relations": [
+ {
+ "relationType": "DosageOfMedication",
+ "entities": [
+ {
+ "ref": "#/results/documents/0/entities/0",
+ "role": "Dosage"
+ },
+ {
+ "ref": "#/results/documents/0/entities/1",
+ "role": "Medication"
+ }
+ ]
+ },
+ {
+ "relationType": "RouteOfMedication",
+ "entities": [
+ {
+ "ref": "#/results/documents/0/entities/1",
+ "role": "Medication"
+ },
+ {
+ "ref": "#/results/documents/0/entities/2",
+ "role": "Route"
+ }
+ ]
+...
+]
+```
+
+## Recognized relations
+
+The following list presents all the recognized relations by the Text Analytics for health API.
+
+**ABBREVIATION**
+
+**AMOUNT_OF_SUBSTANCE_USE**
+
+**BODY_SITE_OF_CONDITION**
+
+**BODY_SITE_OF_TREATMENT**
+
+**COURSE_OF_CONDITION**
+
+**COURSE_OF_EXAMINATION**
+
+**COURSE_OF_MEDICATION**
+
+**COURSE_OF_TREATMENT**
+
+**DIRECTION_OF_BODY_STRUCTURE**
+
+**DIRECTION_OF_CONDITION**
+
+**DIRECTION_OF_EXAMINATION**
+
+**DIRECTION_OF_TREATMENT**
+
+**DOSAGE_OF_MEDICATION**
+
+**EXAMINATION_FINDS_CONDITION**
+
+**EXPRESSION_OF_GENE**
+
+**EXPRESSION_OF_VARIANT**
+
+**FORM_OF_MEDICATION**
+
+**FREQUENCY_OF_CONDITION**
+
+**FREQUENCY_OF_MEDICATION**
+
+**FREQUENCY_OF_SUBSTANCE_USE**
+
+**FREQUENCY_OF_TREATMENT**
+
+**MUTATION_TYPE_OF_GENE**
+
+**MUTATION_TYPE_OF_VARIANT**
+
+**QUALIFIER_OF_CONDITION**
+
+**RELATION_OF_EXAMINATION**
+
+**ROUTE_OF_MEDICATION**
+
+**SCALE_OF_CONDITION**
+
+**TIME_OF_CONDITION**
+
+**TIME_OF_EVENT**
+
+**TIME_OF_EXAMINATION**
+
+**TIME_OF_MEDICATION**
+
+**TIME_OF_TREATMENT**
+
+**UNIT_OF_CONDITION**
+
+**UNIT_OF_EXAMINATION**
+
+**VALUE_OF_CONDITION**
+
+**VALUE_OF_EXAMINATION**
+
+**VARIANT_OF_GENE**
+
+## Next steps
+
+* [How to call the Text Analytics for health](../how-to/call-api.md)
ai-services Call Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/language-service/text-analytics-for-health/how-to/call-api.md
+
+ Title: How to call Text Analytics for health
+
+description: Learn how to extract and label medical information from unstructured clinical text with Text Analytics for health.
++++++ Last updated : 01/04/2023++++
+# How to use Text Analytics for health
++
+Text Analytics for health can be used to extract and label relevant medical information from unstructured texts such as doctors' notes, discharge summaries, clinical documents, and electronic health records. The service performs [named entity recognition](../concepts/health-entity-categories.md), [relation extraction](../concepts/relation-extraction.md), [entity linking](https://www.nlm.nih.gov/research/umls/sourcereleasedocs/index.html), and [assertion detection](../concepts/assertion-detection.md) to uncover insights from the input text. For information on the returned confidence scores, see the [transparency note](/legal/cognitive-services/text-analytics/transparency-note#general-guidelines-to-understand-and-improve-performance?context=/azure/ai-services/text-analytics/context/context).
+
+> [!TIP]
+> If you want to test out the feature without writing any code, use the [Language Studio](../../language-studio.md).
+
+There are two ways to call the service:
+
+* A [Docker container](use-containers.md) (synchronous)
+* Using the web-based API and client libraries (asynchronous)
+
+## Development options
++++
+## Specify the Text Analytics for health model
+
+By default, Text Analytics for health uses the "2022-03-01" model version on your text. You can also configure your API requests to use a specific model version. The model you specify is used to perform operations provided by Text Analytics for health. Extraction of social determinants of health entities along with their assertions and relationships (**only in English**) is supported with the latest preview model version "2023-04-15-preview".
+
+| Supported Versions | Status |
+|--|--|
+| `2023-04-15-preview` | Preview |
+| `2023-04-01` | Generally available |
+| `2023-01-01-preview` | Preview |
+| `2022-08-15-preview` | Preview |
+| `2022-03-01` | Generally available |
+
+## Specify the Text Analytics for health API version
+
+When making a Text Analytics for health API call, you must specify an API version. The latest generally available API version is "2023-04-01", which supports relationship confidence scores in the results. The latest preview API version is "2023-04-15-preview", which adds support for [temporal assertions](../concepts/assertion-detection.md).
+
+| Supported Versions | Status |
+|--|--|
+| `2023-04-15-preview`| Preview |
+| `2023-04-01`| Generally available |
+| `2022-10-01-preview` | Preview |
+| `2022-05-01` | Generally available |
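+
+To illustrate where these versions go, the sketch below combines an API version on the URL with a model version in the task parameters. The `analyze-text/jobs` route, the `Healthcare` task kind, and the `modelVersion` parameter name are assumptions patterned after the Language REST API request format used elsewhere in this documentation; the input sentence is only sample text.
+
+```bash
+curl -i -X POST "https://<your-language-resource-endpoint>/language/analyze-text/jobs?api-version=2023-04-01" \
+-H "Content-Type: application/json" \
+-H "Ocp-Apim-Subscription-Key: <your-language-resource-key>" \
+-d \
+'
+{
+  "analysisInput": {
+    "documents": [
+      { "id": "1", "language": "en", "text": "Patient denies SOB. Prescribed 100 mg ibuprofen, taken twice daily." }
+    ]
+  },
+  "tasks": [
+    {
+      "kind": "Healthcare",
+      "taskName": "Health Task 1",
+      "parameters": { "modelVersion": "2023-04-01" }
+    }
+  ]
+}
+'
+```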
++
+### Text Analytics for health container
+
+The [Text Analytics for health container](use-containers.md) uses separate model versioning from the REST API and client libraries. Only one model version is available per container image.
+
+| Endpoint | Container Image Tag | Model version |
+||--||
+| `/entities/health` | `3.0.59413252-onprem-amd64` (latest) | `2022-03-01` |
+| `/entities/health` | `3.0.59413252-latin-onprem-amd64` (latin) | `2022-08-15-preview` |
+| `/entities/health` | `3.0.59413252-semitic-onprem-amd64` (semitic) | `2022-08-15-preview` |
+| `/entities/health` | `3.0.016230002-onprem-amd64` | `2021-05-15` |
+| `/entities/health` | `3.0.015370001-onprem-amd64` | `2021-03-01` |
+| `/entities/health` | `1.1.013530001-amd64-preview` | `2020-09-03` |
+| `/entities/health` | `1.1.013150001-amd64-preview` | `2020-07-24` |
+| `/domains/health` | `1.1.012640001-amd64-preview` | `2020-05-08` |
+| `/domains/health` | `1.1.012420001-amd64-preview` | `2020-05-08` |
+| `/domains/health` | `1.1.012070001-amd64-preview` | `2020-04-16` |
+
+### Input languages
+
+Text Analytics for health supports English in addition to multiple languages that are currently in preview. You can use the hosted API or deploy the API in a container, as detailed [under Text Analytics for health languages support](../language-support.md).
+
+## Submitting data
+
+To send an API request, you will need your Language resource endpoint and key.
+
+> [!NOTE]
+> You can find the key and endpoint for your Language resource on the Azure portal. They will be located on the resource's **Key and endpoint** page, under **resource management**.
+
+Analysis is performed upon receipt of the request. If you send a request using the REST API or client library, the results will be returned asynchronously. If you're using the Docker container, they will be returned synchronously.
+++
+## Submitting a Fast Healthcare Interoperability Resources (FHIR) request
+
+Fast Healthcare Interoperability Resources (FHIR) is the health industry communication standard developed by the Health Level Seven International (HL7) organization. The standard defines the data formats (resources) and API structure for exchanging electronic healthcare data. To receive your result using the **FHIR** structure, you must send the FHIR version in the API request body.
+
+| Parameter Name | Type | Value |
+|--|--|--|
+| fhirVersion | string | `4.0.1` |
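+
+As a sketch, the `fhirVersion` parameter might sit inside the task parameters of the request body, next to the model version. The exact placement and the surrounding task shape are assumptions based on the request format used elsewhere in this documentation.
+
+```json
+"tasks": [
+  {
+    "kind": "Healthcare",
+    "taskName": "Health Task 1",
+    "parameters": {
+      "modelVersion": "2023-04-01",
+      "fhirVersion": "4.0.1"
+    }
+  }
+]
+```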
+++
+## Getting results from the feature
+
+Depending on your API request and the data you submit to Text Analytics for health, you will get:
+++
+## Service and data limits
++
+## See also
+
+* [Text Analytics for health overview](../overview.md)
+* [Text Analytics for health entity categories](../concepts/health-entity-categories.md)
ai-services Configure Containers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/language-service/text-analytics-for-health/how-to/configure-containers.md
+
+ Title: Configure Text Analytics for health containers
+
+description: Text Analytics for health containers uses a common configuration framework, so that you can easily configure and manage storage, logging and telemetry, and security settings for your containers.
++++++ Last updated : 11/02/2021++++
+# Configure Text Analytics for health docker containers
+
+Text Analytics for health provides each container with a common configuration framework, so that you can easily configure and manage storage, logging and telemetry, and security settings for your containers. Several [example docker run commands](use-containers.md#run-the-container-with-docker-run) are also available.
+
+## Configuration settings
++
+> [!IMPORTANT]
+> The [`ApiKey`](#apikey-configuration-setting), [`Billing`](#billing-configuration-setting), and [`Eula`](#eula-setting) settings are used together, and you must provide valid values for all three of them; otherwise your container won't start. For more information about using these configuration settings to instantiate a container, see [Billing](use-containers.md#billing).
+
+## ApiKey configuration setting
+
+The `ApiKey` setting specifies the Azure resource key used to track billing information for the container. You must specify a value for the ApiKey and the value must be a valid key for the _Language_ resource specified for the [`Billing`](#billing-configuration-setting) configuration setting.
+
+This setting can be found in the following place:
+
+* Azure portal: **Language** resource management, under **Keys and endpoint**
+
+## ApplicationInsights setting
++
+## Billing configuration setting
+
+The `Billing` setting specifies the endpoint URI of the _Language_ resource on Azure used to meter billing information for the container. You must specify a value for this configuration setting, and the value must be a valid endpoint URI for a _Language_ resource on Azure. The container reports usage about every 10 to 15 minutes.
+
+This setting can be found in the following place:
+
+* Azure portal: **Language** Overview, labeled `Endpoint`
+
+|Required| Name | Data type | Description |
+|--||--|-|
+|Yes| `Billing` | String | Billing endpoint URI. For more information on obtaining the billing URI, see [gather required parameters](use-containers.md#gather-required-parameters). For more information and a complete list of regional endpoints, see [Custom subdomain names for Azure AI services](../../../cognitive-services-custom-subdomains.md). |
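+
+For illustration only, a minimal `docker run` sketch that supplies the three required settings (`ApiKey`, `Billing`, and `Eula`) might look like the following. The image tag, host port, and resource placeholders are assumptions, and additional settings may be required; see the [example docker run commands](use-containers.md#run-the-container-with-docker-run) for complete, supported commands.
+
+```bash
+docker run --rm -it -p 5000:5000 --cpus 6 --memory 12g \
+mcr.microsoft.com/azure-cognitive-services/textanalytics/healthcare:<tag-name> \
+Eula=accept \
+Billing=<your-language-resource-endpoint> \
+ApiKey=<your-language-resource-key>
+```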
+
+## Eula setting
++
+## Fluentd settings
++
+## Http proxy credentials settings
++
+## Logging settings
+
+
+## Mount settings
+
+Use bind mounts to read and write data to and from the container. You can specify an input mount or output mount by specifying the `--mount` option in the [docker run](https://docs.docker.com/engine/reference/commandline/run/) command.
+
+Text Analytics for health containers don't use input or output mounts to store training or service data.
+
+The exact syntax of the host mount location varies depending on the host operating system. Additionally, the [host computer](use-containers.md#host-computer-requirements-and-recommendations)'s mount location may not be accessible due to a conflict between permissions used by the docker service account and the host mount location permissions.
+
+|Optional| Name | Data type | Description |
+|-||--|-|
+|Not allowed| `Input` | String | Text Analytics for health containers do not use this.|
+|Optional| `Output` | String | The target of the output mount. The default value is `/output`. This is the location of the logs. This includes container logs. <br><br>Example:<br>`--mount type=bind,src=c:\output,target=/output`|
+
+## Next steps
+
+* Review [How to install and run containers](use-containers.md)
+* Use more [Azure AI containers](../../../cognitive-services-container-support.md)
ai-services Use Containers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/language-service/text-analytics-for-health/how-to/use-containers.md
+
+ Title: How to use Text Analytics for health containers
+
+description: Learn how to extract and label medical information on premises using Text Analytics for health Docker container.
++++++ Last updated : 01/18/2023++
+ms.devlang: azurecli
++
+# Use Text Analytics for health containers
+
+Containers enable you to host the Text Analytics for health API on your own infrastructure. If you have security or data governance requirements that can't be fulfilled by calling Text Analytics for health remotely, then containers might be a good option.
+
+If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/cognitive-services/) before you begin.
+
+## Prerequisites
+
+You must meet the following prerequisites before using Text Analytics for health containers. If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/cognitive-services/) before you begin.
+
+* [Docker](https://docs.docker.com/) installed on a host computer. Docker must be configured to allow the containers to connect with and send billing data to Azure.
+ * On Windows, Docker must also be configured to support Linux containers.
+ * You should have a basic understanding of [Docker concepts](https://docs.docker.com/get-started/overview/).
+* A <a href="https://portal.azure.com/#create/Microsoft.CognitiveServicesTextAnalytics" title="Create a Language resource" target="_blank">Language resource </a> with the free (F0) or standard (S) [pricing tier](https://azure.microsoft.com/pricing/details/cognitive-services/text-analytics/).
++
+## Host computer requirements and recommendations
++
+The following table describes the minimum and recommended specifications for the Text Analytics for health containers. Each CPU core must be at least 2.6 gigahertz (GHz) or faster. The allowable Transactions Per Second (TPS) are also listed.
+
+| | Minimum host specs | Recommended host specs | Minimum TPS | Maximum TPS|
+|||-|--|--|
+| **1 document/request** | 4 core, 12GB memory | 6 core, 12GB memory |15 | 30|
+| **10 documents/request** | 6 core, 16GB memory | 8 core, 20GB memory |15 | 30|
+
+CPU core and memory correspond to the `--cpus` and `--memory` settings, which are used as part of the `docker run` command.
+
+## Get the container image with `docker pull`
+
+The Text Analytics for health container image can be found on the `mcr.microsoft.com` container registry syndicate. It resides within the `azure-cognitive-services/textanalytics/` repository and is named `healthcare`. The fully qualified container image name is `mcr.microsoft.com/azure-cognitive-services/textanalytics/healthcare`.
+
+To use the latest version of the container, you can use the `latest` tag. You can also find a full list of [tags on the MCR](https://mcr.microsoft.com/product/azure-cognitive-services/textanalytics/healthcare/tags).
+
+Use the [`docker pull`](https://docs.docker.com/engine/reference/commandline/pull/) command to download this container image from the Microsoft public container registry. You can find the featured tags on the [Docker Hub page](https://hub.docker.com/_/microsoft-azure-cognitive-services-textanalytics-healthcare).
+
+```
+docker pull mcr.microsoft.com/azure-cognitive-services/textanalytics/healthcare:<tag-name>
+```
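+
+For example, to pull the image tagged `latest`:
+
+```bash
+docker pull mcr.microsoft.com/azure-cognitive-services/textanalytics/healthcare:latest
+```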
++
+## Run the container with `docker run`
+
+Once the container is on the host computer, use the [docker run](https://docs.docker.com/engine/reference/commandline/run/) command to run the containers. The container will continue to run until you stop it.
+
+> [!IMPORTANT]
+> * The docker commands in the following sections use the backslash, `\`, as a line continuation character. Replace or remove it based on your host operating system's requirements.
+> * The `Eula`, `Billing`, and `ApiKey` options must be specified to run the container; otherwise, the container won't start. For more information, see [Billing](#billing).
+> * The [responsible AI](/legal/cognitive-services/text-analytics/transparency-note-health) (RAI) acknowledgment must also be present with a value of `accept`.
+> * The sentiment analysis and language detection containers use v3 of the API, and are generally available. The key phrase extraction container uses v2 of the API, and is in preview.
+
+There are multiple ways you can install and run the Text Analytics for health container.
+
+- Use the Azure portal to create a Language resource, and use Docker to get your container.
+- Use an Azure VM with Docker to run the container. Refer to [Docker on Azure](../../../../docker/index.yml).
+- Use the following PowerShell and Azure CLI scripts to automate resource deployment and container configuration.
+
+When you use the Text Analytics for health container, the data contained in your API requests and responses is not visible to Microsoft, and is not used for training the model applied to your data.
+
+### Run the container locally
+
+To run the container in your own environment after downloading the container image, execute the following `docker run` command. Replace the placeholders below with your own values:
+
+| Placeholder | Value | Format or example |
+|-|-||
+| **{API_KEY}** | The key for your Language resource. You can find it on your resource's **Key and endpoint** page, on the Azure portal. |`xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx`|
+| **{ENDPOINT_URI}** | The endpoint for accessing the API. You can find it on your resource's **Key and endpoint** page, on the Azure portal. | `https://<your-custom-subdomain>.cognitiveservices.azure.com` |
+
+```bash
+docker run --rm -it -p 5000:5000 --cpus 6 --memory 12g \
+mcr.microsoft.com/azure-cognitive-services/textanalytics/healthcare:<tag-name> \
+Eula=accept \
+rai_terms=accept \
+Billing={ENDPOINT_URI} \
+ApiKey={API_KEY}
+```
+
+This command:
+
+- Runs the Text Analytics for health container from the container image
+- Allocates 6 CPU cores and 12 gigabytes (GB) of memory
+- Exposes TCP port 5000 and allocates a pseudo-TTY for the container
+- Accepts the end user license agreement (EULA) and responsible AI (RAI) terms
+- Automatically removes the container after it exits. The container image is still available on the host computer.
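+
+After the container starts, you can check from the host that it's ready to accept requests. The following is a sketch that assumes the container exposes the standard Azure AI container `/ready` and `/status` endpoints on the mapped port:
+
+```bash
+# /ready reports whether the container is ready to accept analysis requests.
+curl -s http://localhost:5000/ready
+# /status additionally checks that the supplied billing settings are accepted,
+# without submitting any text for analysis.
+curl -s http://localhost:5000/status
+```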
+
+### Demo UI to visualize output
+
+The container provides REST-based query prediction endpoint APIs. We have also provided a visualization tool in the container that is accessible by appending `/demo` to the endpoint of the container. For example:
+
+```
+http://<serverURL>:5000/demo
+```
+
+Use the example cURL request below to submit a query to the container you have deployed, replacing the `serverURL` variable with the appropriate value.
+
+```bash
+curl -X POST 'http://<serverURL>:5000/text/analytics/v3.1/entities/health' --header 'Content-Type: application/json' --header 'accept: application/json' --data-binary @example.json
+
+```
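+
+The `example.json` file referenced above isn't included in the image. The following sketch creates a minimal request body you could use; the document text and identifier are illustrative only:
+
+```bash
+# Create a minimal example.json request body for the health entities endpoint.
+cat > example.json <<'EOF'
+{
+  "documents": [
+    {
+      "language": "en",
+      "id": "1",
+      "text": "Patient was prescribed 100 mg of ibuprofen twice daily."
+    }
+  ]
+}
+EOF
+```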
+
+### Install the container using Azure Web App for Containers
+
+Azure [Web App for Containers](https://azure.microsoft.com/services/app-service/containers/) is an Azure resource dedicated to running containers in the cloud. It brings out-of-the-box capabilities such as autoscaling, support for Docker containers and Docker Compose, HTTPS support, and much more.
+
+> [!NOTE]
+> Using Azure Web App, you automatically get a domain in the form of `<appservice_name>.azurewebsites.net`.
+
+Run this PowerShell script using the Azure CLI to create a Web App for Containers, using your subscription and the container image over HTTPS. Wait for the script to complete (approximately 25-30 minutes) before submitting the first request.
+
+```azurecli
+$subscription_name = "" # THe name of the subscription you want you resource to be created on.
+$resource_group_name = "" # The name of the resource group you want the AppServicePlan
+ # and AppSerivce to be attached to.
+$resources_location = "" # This is the location you wish the AppServicePlan to be deployed to.
+ # You can use the "az account list-locations -o table" command to
+ # get the list of available locations and location code names.
+$appservice_plan_name = "" # This is the AppServicePlan name you wish to have.
+$appservice_name = "" # This is the AppService resource name you wish to have.
+$TEXT_ANALYTICS_RESOURCE_API_KEY = "" # This should be taken from the Language resource.
+$TEXT_ANALYTICS_RESOURCE_API_ENDPOINT = "" # This should be taken from the Language resource.
+$DOCKER_IMAGE_NAME = "mcr.microsoft.com/azure-cognitive-services/textanalytics/healthcare:latest"
+
+az login
+az account set -s $subscription_name
+az appservice plan create -n $appservice_plan_name -g $resource_group_name --is-linux -l $resources_location --sku P3V2
+az webapp create -g $resource_group_name -p $appservice_plan_name -n $appservice_name -i $DOCKER_IMAGE_NAME
+az webapp config appsettings set -g $resource_group_name -n $appservice_name --settings Eula=accept rai_terms=accept Billing=$TEXT_ANALYTICS_RESOURCE_API_ENDPOINT ApiKey=$TEXT_ANALYTICS_RESOURCE_API_KEY
+
+# Once deployment complete, the resource should be available at: https://<appservice_name>.azurewebsites.net
+```
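+
+Once the web app is running, you can send it a test request. The following is a sketch that assumes the app routes HTTPS traffic to the container and that the container exposes the v3.1 health entities path used elsewhere in this article; `example.json` is a hypothetical request body like the one shown earlier:
+
+```bash
+# Replace <appservice_name> with the value used in the deployment script.
+curl -X POST "https://<appservice_name>.azurewebsites.net/text/analytics/v3.1/entities/health" \
+  --header "Content-Type: application/json" \
+  --data-binary @example.json
+```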
+
+### Install the container using Azure Container Instance
+
+You can also use an Azure Container Instance (ACI) to make deployment easier. ACI is a resource that allows you to run Docker containers on-demand in a managed, serverless Azure environment.
+
+See [How to use Azure Container Instances](../../../containers/azure-container-instance-recipe.md) for steps on deploying an ACI resource using the Azure portal. You can also use the below PowerShell script using Azure CLI, which will create an ACI on your subscription using the container image. Wait for the script to complete (approximately 25-30 minutes) before submitting the first request. Due to the limit on the maximum number of CPUs per ACI resource, do not select this option if you expect to submit more than 5 large documents (approximately 5000 characters each) per request.
+See the [ACI regional support](../../../../container-instances/container-instances-region-availability.md) article for availability information.
+
+> [!NOTE]
+> Azure Container Instances don't include HTTPS support for the built-in domains. If you need HTTPS, you'll need to manually configure it, including creating a certificate and registering a domain. You can find instructions to do this with NGINX below.
+
+```azurecli
+$subscription_name = "" # The name of the subscription you want you resource to be created on.
+$resource_group_name = "" # The name of the resource group you want the AppServicePlan
+ # and AppService to be attached to.
+$resources_location = "" # This is the location you wish the web app to be deployed to.
+ # You can use the "az account list-locations -o table" command to
+ # Get the list of available locations and location code names.
+$azure_container_instance_name = "" # This is the AzureContainerInstance name you wish to have.
+$TEXT_ANALYTICS_RESOURCE_API_KEY = "" # This should be taken from the Language resource.
+$TEXT_ANALYTICS_RESOURCE_API_ENDPOINT = "" # This should be taken from the Language resource.
+$DNS_LABEL = "" # This is the DNS label name you wish your ACI will have
+$DOCKER_IMAGE_NAME = "mcr.microsoft.com/azure-cognitive-services/textanalytics/healthcare:latest"
+
+az login
+az account set -s $subscription_name
+az container create --resource-group $resource_group_name --name $azure_container_instance_name --image $DOCKER_IMAGE_NAME --cpu 4 --memory 12 --port 5000 --dns-name-label $DNS_LABEL --environment-variables Eula=accept rai_terms=accept Billing=$TEXT_ANALYTICS_RESOURCE_API_ENDPOINT ApiKey=$TEXT_ANALYTICS_RESOURCE_API_KEY
+
+# Once deployment complete, the resource should be available at: http://<unique_dns_label>.<resource_group_region>.azurecontainer.io:5000
+```
+
+### Secure ACI connectivity
+
+By default, there is no security provided when using the container API with ACI. This is because containers typically run as part of a pod, which is protected from the outside by a network bridge. You can, however, front the container with a public-facing component, keeping the container endpoint private. The following examples use [NGINX](https://www.nginx.com) as an ingress gateway to support HTTPS/SSL and client-certificate authentication.
+
+> [!NOTE]
+> NGINX is an open-source, high-performance HTTP server and proxy. An NGINX container can be used to terminate a TLS connection for a single container. More complex NGINX ingress-based TLS termination solutions are also possible.
+
+#### Set up NGINX as an ingress gateway
+
+NGINX uses [configuration files](https://docs.nginx.com/nginx/admin-guide/basic-functionality/managing-configuration-files/) to enable features at runtime. In order to enable TLS termination for another service, you must specify an SSL certificate to terminate the TLS connection and a `proxy_pass` directive to specify an address for the service. A sample is provided below.
++
+> [!NOTE]
+> `ssl_certificate` expects a path to be specified within the NGINX container's local filesystem. The address specified for `proxy_pass` must be available from within the NGINX container's network.
+
+The NGINX container loads all of the `.conf` files that are mounted under `/etc/nginx/conf.d/` into the HTTP configuration path.
+
+```nginx
+server {
+ listen 80;
+ return 301 https://$host$request_uri;
+}
+server {
+ listen 443 ssl;
+ # replace with .crt and .key paths
+ ssl_certificate /cert/Local.crt;
+ ssl_certificate_key /cert/Local.key;
+
+ location / {
+ proxy_pass http://cognitive-service:5000;
+ proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
+ proxy_set_header X-Real-IP $remote_addr;
+ }
+}
+```
+
+#### Example Docker compose file
+
+The below example shows how a [docker compose](https://docs.docker.com/compose/reference/overview) file can be created to deploy NGINX and health containers:
+
+```yaml
+version: "3.7"
+services:
+  cognitive-service:
+    image: {IMAGE_ID}
+    ports:
+      - 5000:5000
+    environment:
+      - eula=accept
+      - rai_terms=accept
+      - billing={ENDPOINT_URI}
+      - apikey={API_KEY}
+    volumes:
+      # replace with path to logs folder
+      - <path-to-logs-folder>:/output
+  nginx:
+    image: nginx
+    ports:
+      - 443:443
+    volumes:
+      # replace with paths for certs and conf folders
+      - <path-to-certs-folder>:/cert
+      - <path-to-conf-folder>:/etc/nginx/conf.d/
+```
+
+To initiate this Docker compose file, execute the following command from a console at the root level of the file:
+
+```bash
+docker-compose up
+```
+
+For more information, see NGINX's documentation on [NGINX SSL Termination](https://docs.nginx.com/nginx/admin-guide/security-controls/terminating-ssl-http/).
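+
+With the compose file above running, requests can be sent over HTTPS to the NGINX front end instead of directly to the container. The following is a sketch that assumes a self-signed test certificate (hence `-k`) and the v3.1 health entities path used earlier in this article:
+
+```bash
+# -k skips certificate validation; only appropriate for self-signed test certificates.
+curl -k -X POST "https://localhost/text/analytics/v3.1/entities/health" \
+  --header "Content-Type: application/json" \
+  --data-binary @example.json
+```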
++
+## Query the container's prediction endpoint
+
+The container provides REST-based query prediction endpoint APIs.
+
+Use the host, `http://localhost:5000`, for container APIs.
+++
+### Structure the API request for the container
+
+You can use Postman or the example cURL request below to submit a query to the container you deployed, replacing the `serverURL` variable with the appropriate value. Note that the version of the API in the URL for the container is different from that of the hosted API.
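+
+For example, the following request is a sketch that assumes the container exposes the v3.1 health entities path shown earlier in this article; check the container's home page at `http://localhost:5000` for the exact API version it hosts:
+
+```bash
+# Submit a request body (for example, the example.json shown earlier) to the local container.
+curl -X POST "http://localhost:5000/text/analytics/v3.1/entities/health" \
+  --header "Content-Type: application/json" \
+  --data-binary @example.json
+```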
+++
+## Run the container with client library support
+
+Starting with container version `3.0.017010001-onprem-amd64` (or if you use the `latest` container), you can run the Text Analytics for health container using the [client library](../quickstart.md). To do so, add the following parameter to the `docker run` command:
+
+`enablelro=true`
+
+Afterwards, when you authenticate the client object, use the endpoint that your container is running on:
+
+`http://localhost:5000`
+
+For example, if you're using C# you would use the following code:
+
+```csharp
+var client = new TextAnalyticsClient(new Uri("http://localhost:5000"), new AzureKeyCredential("your-text-analytics-key"));
+```
+
+## Stop the container
++
+## Troubleshooting
+
+If you run the container with an output [mount](configure-containers.md#mount-settings) and logging enabled, the container generates log files that are helpful to troubleshoot issues that happen while starting or running the container.
++
+## Billing
+
+Text Analytics for health containers send billing information to Azure, using a _Language_ resource on your Azure account.
++
+## Summary
+
+In this article, you learned concepts and workflow for downloading, installing, and running Text Analytics for health containers. In summary:
+
+* Text Analytics for health provides a Linux container for Docker
+* Container images are downloaded from the Microsoft Container Registry (MCR).
+* Container images run in Docker.
+* You can use either the REST API or SDK to call operations in Text Analytics for health containers by specifying the host URI of the container.
+* You must specify billing information when instantiating a container.
+
+> [!IMPORTANT]
+> Azure AI containers are not licensed to run without being connected to Azure for metering. Customers need to enable the containers to communicate billing information with the metering service at all times. Azure AI containers do not send customer data (e.g. text that is being analyzed) to Microsoft.
+
+## Next steps
+
+* See [Configure containers](configure-containers.md) for configuration settings.
ai-services Language Support https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/language-service/text-analytics-for-health/language-support.md
+
+ Title: Text Analytics for health language support
+
+description: "This article explains which natural languages are supported by the Text Analytics for health."
++++++ Last updated : 01/04/2023++++
+# Language support for Text Analytics for health
+
+Use this article to learn which natural languages are supported by Text Analytics for health and its Docker container.
+
+## Hosted API Service
+
+The hosted API service supports the English language with model version 2022-03-01. English, Spanish, French, German, Italian, Portuguese, and Hebrew are additionally supported with model version 2022-08-15-preview.
+
+When structuring the API request, the relevant language tags must be added for these languages:
+
+```
+English – "en"
+Spanish – "es"
+French – "fr"
+German – "de"
+Italian – "it"
+Portuguese – "pt"
+Hebrew – "he"
+```
+
+The following is an example of a JSON request body for a Spanish document, using model version 2022-08-15-preview:
+
+```json
+
+{
+ "analysisInput": {
+ "documents": [
+ {
+ "text": "El médico prescrió 200 mg de ibuprofeno.",
+ "language": "es",
+ "id": "1"
+ }
+ ]
+ },
+ "tasks": [
+ {
+ "taskName": "analyze 1",
+ "kind": "Healthcare",
+ "parameters":
+ {
+ "modelVersion": "2022-08-15-preview"
+ }
+ }
+ ]
+}
+```
+
+## Docker container
+
+The Docker container supports the English language with model version 2022-03-01.
+Additional languages are also supported when using a Docker container to deploy the API: Spanish, French, German, Italian, Portuguese, and Hebrew. This functionality is currently in preview with model version 2022-08-15-preview.
+Full details for deploying the service in a container can be found [here](../text-analytics-for-health/how-to/use-containers.md).
+
+In order to download the new container images from the Microsoft public container registry, use the following [docker pull](https://docs.docker.com/engine/reference/commandline/pull/) command.
+
+For English, Spanish, Italian, French, German and Portuguese:
+
+```
+docker pull mcr.microsoft.com/azure-cognitive-services/textanalytics/healthcare:latin
+```
+
+For Hebrew:
+
+```
+docker pull mcr.microsoft.com/azure-cognitive-services/textanalytics/healthcare:semitic
+```
++
+When structuring the API request, the relevant language tags must be added for these languages:
+
+```
+English – "en"
+Spanish – "es"
+French – "fr"
+German – "de"
+Italian – "it"
+Portuguese – "pt"
+Hebrew – "he"
+```
+
+The following is an example of a JSON request body attached to the Language request's POST body, for a Spanish document:
+
+```json
+
+{
+ "analysisInput": {
+ "documents": [
+ {
+ "text": "El médico prescrió 200 mg de ibuprofeno.",
+ "language": "es",
+ "id": "1"
+ }
+ ]
+ },
+ "tasks": [
+ {
+ "taskName": "analyze 1",
+ "kind": "Healthcare"
+ }
+ ]
+}
+```
+## Details of the supported model versions for each language
++
+| Language Code | Model Version | Featured Tag | Specific Tag |
+|:--|:-:|:-:|:-:|
+| `en` | 2022-03-01 | latest | 3.0.59413252-onprem-amd64 |
+| `en`, `es`, `it`, `fr`, `de`, `pt` | 2022-08-15-preview | latin | 3.0.60903415-latin-onprem-amd64 |
+| `he` | 2022-08-15-preview | semitic | 3.0.60903415-semitic-onprem-amd64 |
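+
+For example, based on the table above, pulling the image that pins the Latin-language preview model by its specific tag would look like this:
+
+```bash
+docker pull mcr.microsoft.com/azure-cognitive-services/textanalytics/healthcare:3.0.60903415-latin-onprem-amd64
+```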
++++++
+## See also
+
+[Text Analytics for health overview](overview.md)
ai-services Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/language-service/text-analytics-for-health/overview.md
+
+ Title: What is the Text Analytics for health in Azure AI Language?
+
+description: An overview of Text Analytics for health in Azure AI services, which helps you extract medical information from unstructured text, like clinical documents.
++++++ Last updated : 01/06/2023++++
+# What is Text Analytics for health?
++
+Text Analytics for health is one of the prebuilt features offered by [Azure AI Language](../overview.md). It is a cloud-based API service that applies machine-learning intelligence to extract and label relevant medical information from a variety of unstructured texts such as doctor's notes, discharge summaries, clinical documents, and electronic health records.
+
+This documentation contains the following types of articles:
+* The [**quickstart article**](quickstart.md) provides a short tutorial that guides you through making your first request to the service.
+* The [**how-to guides**](how-to/call-api.md) contain detailed instructions on how to make calls to the service using the hosted API or using the on-premises Docker container.
+* The [**conceptual articles**](concepts/health-entity-categories.md) provide in-depth information on each of the service's features, named entity recognition, relation extraction, entity linking, and assertion detection.
+
+## Text Analytics for health features
+
+Text Analytics for health performs four key functions: named entity recognition, relation extraction, entity linking, and assertion detection, all with a single API call.
++
+Text Analytics for health can receive unstructured text in English as part of its generally available offering. Additional languages such as German, French, Italian, Spanish, Portuguese, and Hebrew are currently supported in preview.
+
+Additionally, Text Analytics for health can return the processed output using the Fast Healthcare Interoperability Resources (FHIR) structure which enables the service's integration with other electronic health systems.
+++
+> [!VIDEO https://learn.microsoft.com/Shows/AI-Show/Introducing-Text-Analytics-for-Health/player]
+
+## Usage scenarios
+
+Text Analytics for health can be used in multiple scenarios across a variety of industries.
+Some common customer motivations for using Text Analytics for health include:
+* Assisting and automating the processing of medical documents by proper medical coding to ensure accurate care and billing.
+* Increasing the efficiency of analyzing healthcare data to help drive the success of value-based care models similar to Medicare.
+* Minimizing healthcare provider effort by automating the aggregation of key patient data for trend and pattern monitoring.
+* Facilitating and supporting the adoption of HL7 standards for improved exchange, integration, sharing, retrieval, and delivery of electronic health information in all healthcare services.
+
+### Example use cases
+
+|Use case|Description|
+|--|--|
+|Extract insights and statistics|Identify medical entities such as symptoms, medications, diagnosis from clinical and research documents in order to extract insights and statistics for different patient cohorts.|
+|Develop predictive models using historic data|Power solutions for planning, decision support, risk analysis and more, based on prediction models created from historic data.|
+|Annotate and curate medical information|Support solutions for clinical data annotation and curation such as automating clinical coding and digitizing manually created data.|
+|Review and report medical information|Support solutions for reporting and flagging possible errors in medical information resulting from reviewal processes such as quality assurance.|
+|Assist with decision support|Enable solutions that provide humans with assistive information relating to patients' medical information for faster and more reliable decisions.|
+
+## Get started with Text Analytics for health
+++
+## Input requirements and service limits
+
+Text Analytics for health is designed to receive unstructured text for analysis. For more information, see [data and service limits](../concepts/data-limits.md).
+
+Text Analytics for health works with a variety of input languages. For more information, see [language support](language-support.md).
+++
+## Responsible use of AI
+
+An AI system includes the technology, the people who will use it, the people who will be affected by it, and the environment in which it is deployed. Read the [transparency note for Text Analytics for health](/legal/cognitive-services/language-service/transparency-note-health?context=/azure/ai-services/language-service/context/context) to learn about responsible AI use and deployment in your systems. You can also refer to the following articles for more information:
+
ai-services Quickstart https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/language-service/text-analytics-for-health/quickstart.md
+
+ Title: "Quickstart: Use the Text Analytics for health REST API and client library"
+
+description: Use this quickstart to start using Text Analytics for health.
++++++ Last updated : 02/17/2023+
+ms.devlang: csharp, java, javascript, python
+
+keywords: text mining, health, text analytics for health
+zone_pivot_groups: programming-languages-text-analytics
++
+# Quickstart: Using Text Analytics for health client library and REST API
+
+This article contains Text Analytics for health quickstarts that help you use the supported client libraries (C#, Java, NodeJS, and Python) as well as the REST API.
++++++++++++++++
+## Clean up resources
+
+If you want to clean up and remove an Azure AI services subscription, you can delete the resource or resource group. Deleting the resource group also deletes any other resources associated with it.
+
+* [Portal](../../multi-service-resource.md?pivots=azportal#clean-up-resources)
+* [Azure CLI](../../multi-service-resource.md?pivots=azcli#clean-up-resources)
+++
+## Next steps
+
+* [How to call the hosted API](./how-to/call-api.md)
+* [How to use the service with Docker containers](./how-to/use-containers.md)
ai-services Power Automate https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/language-service/tutorials/power-automate.md
+
+ Title: Use Language service in Power Automate
+
+description: Learn how to use Azure AI Language in Power Automate, without writing code.
++++++ Last updated : 03/02/2023++++
+# Use the Language service in Power Automate
+
+You can use [Power Automate](/power-automate/getting-started) flows to automate repetitive tasks and bring efficiency to your organization. Using Azure AI Language, you can automate tasks like:
+* Send incoming emails to different departments based on their contents.
+* Analyze the sentiment of new tweets.
+* Extract entities from incoming documents.
+* Summarize meetings.
+* Remove personal data from files before saving them.
+
+In this tutorial, you'll create a Power Automate flow to extract entities found in text, using [Named entity recognition](../named-entity-recognition/overview.md).
+
+## Prerequisites
+* Azure subscription - [Create one for free](https://azure.microsoft.com/free/cognitive-services)
+* Once you have your Azure subscription, <a href="https://portal.azure.com/#create/Microsoft.CognitiveServicesTextAnalytics" title="Create a Language resource" target="_blank">create a Language resource </a> in the Azure portal to get your key and endpoint. After it deploys, select **Go to resource**.
+ * You will need the key and endpoint from the resource you create to connect your flow to the API later in this tutorial.
+ * You can use the free pricing tier (`Free F0`) to try the service, and upgrade later to a paid tier for production.
+* Optional for this tutorial: A trained model is required if you're using a custom capability such as [custom NER](../custom-named-entity-recognition/overview.md), [custom text classification](../custom-text-classification/overview.md), or [conversational language understanding](../conversational-language-understanding/overview.md).
+
+## Create a Power Automate flow
+
+For this tutorial, you will create a flow that extracts named entities from text.
+
+1. [Sign in to Power Automate](https://make.powerautomate.com/).
+
+1. From the left side menu, select **My flows**. Then select **New flow** > **Automated cloud flow**.
+
+ :::image type="content" source="../media/create-flow.png" alt-text="A screenshot of the menu for creating an automated cloud flow." lightbox="../media/create-flow.png":::
+
+1. Enter a name for your flow such as `LanguageFlow`. Then select **Skip** to continue without choosing a trigger.
+
+ :::image type="content" source="../media/language-flow.png" alt-text="A screenshot of automated cloud flow screen." lightbox="../media/language-flow.png":::
+
+1. Under **Triggers** select **Manually trigger a flow**.
+
+ :::image type="content" source="../media/trigger-flow.png" alt-text="A screenshot of how to manually trigger a flow." lightbox="../media/trigger-flow.png":::
+
+1. Select **+ New step** to begin adding a Language service connector.
+
+1. Under **Choose an operation** search for **Azure AI Language**. Then select **Azure AI Language**. This will narrow down the list of actions to only those that are available for Language.
+
+ :::image type="content" source="../media/language-connector.png" alt-text="A screenshot of An Azure AI Language connector." lightbox="../media/language-connector.png":::
+
+1. Under **Actions** search for **Named Entity Recognition**, and select the connector.
+
+ :::image type="content" source="../media/entity-connector.png" alt-text="A screenshot of a named entity recognition connector." lightbox="../media/entity-connector.png":::
+
+1. Get the endpoint and key for your Language resource, which will be used for authentication. You can find your key and endpoint by navigating to your resource in the [Azure portal](https://portal.azure.com), and selecting **Keys and Endpoint** from the left side menu.
+
+ :::image type="content" source="../media/azure-portal-resource-credentials.png" alt-text="A screenshot of A language resource key and endpoint in the Azure portal." lightbox="../media/azure-portal-resource-credentials.png":::
+
+1. Once you have your key and endpoint, add it to the connector in Power Automate.
+
+ :::image type="content" source="../media/language-auth.png" alt-text="A screenshot of adding the language key and endpoint to the Power Automate flow." lightbox="../media/language-auth.png":::
+
+1. Add your data to the connector.
+
+ > [!NOTE]
+ > You will need the deployment name and project name if you're using a custom language capability.
+
+1. From the top navigation menu, save the flow and select **Test the flow**. In the window that appears, select **Test**.
+
+1. After the flow runs, you will see the response in the **outputs** field.
+
+ :::image type="content" source="../media/response-connector.png" alt-text="A screenshot of flow response." lightbox="../media/response-connector.png":::
+
+## Next steps
+
+* [Triage incoming emails with custom text classification](../custom-text-classification/tutorials/triage-email.md)
+* [Available Language service connectors](/connectors/cognitiveservicestextanalytics)
++
ai-services Use Kubernetes Service https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/language-service/tutorials/use-kubernetes-service.md
+
+ Title: Deploy a key phrase extraction container to Azure Kubernetes Service
+
+description: Deploy a key phrase extraction container image to Azure Kubernetes Service, and test it in a web browser.
++++++ Last updated : 05/27/2022++++
+# Deploy a key phrase extraction container to Azure Kubernetes Service
+
+Learn how to deploy a [key phrase extraction Docker container](../key-phrase-extraction/how-to/use-containers.md) image to Azure Kubernetes Service (AKS). This procedure shows how to create a Language resource, how to associate a container image, and how to exercise this orchestration of the two from a browser. Using containers can shift your attention away from managing infrastructure to instead focusing on application development. While this article uses the key phrase extraction container as an example, you can use this process for other containers offered by Azure AI Language.
+
+## Prerequisites
+
+This procedure requires several tools that must be installed and run locally. Don't use Azure Cloud Shell. You need the following:
+
+* An Azure subscription. If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/cognitive-services) before you begin.
+* A text editor, for example, [Visual Studio Code](https://code.visualstudio.com/download).
+* The [Azure CLI](/cli/azure/install-azure-cli) installed.
+* The [Kubernetes CLI](https://kubernetes.io/docs/tasks/tools/install-kubectl/) installed.
+* An Azure resource with the correct pricing tier. Not all pricing tiers work with this container:
+ * **Azure AI Language** resource with F0 or standard pricing tiers only.
+ * **Azure AI services** resource with the S0 pricing tier.
++++
+## Deploy the Key Phrase Extraction container to an AKS cluster
+
+1. Open the Azure CLI, and sign in to Azure.
+
+ ```azurecli
+ az login
+ ```
+
+1. Sign in to the AKS cluster. Replace `your-cluster-name` and `your-resource-group` with the appropriate values.
+
+ ```azurecli
+ az aks get-credentials -n your-cluster-name -g your-resource-group
+ ```
+
+ After this command runs, it reports a message similar to the following:
+
+ ```output
+ Merged "your-cluster-name" as current context in /home/username/.kube/config
+ ```
+
+ > [!WARNING]
+ > If you have multiple subscriptions available to you on your Azure account and the `az aks get-credentials` command returns with an error, a common problem is that you're using the wrong subscription. Set the context of your Azure CLI session to use the same subscription that you created the resources with and try again.
+ > ```azurecli
+ > az account set -s subscription-id
+ > ```
+
+1. Open the text editor of choice. This example uses Visual Studio Code.
+
+ ```console
+ code .
+ ```
+
+1. Within the text editor, create a new file named *keyphrase.yaml*, and paste the following YAML into it. Be sure to replace `billing/value` and `apikey/value` with your own information.
+
+ ```yaml
+ apiVersion: apps/v1
+ kind: Deployment
+ metadata:
+   name: keyphrase
+ spec:
+   selector:
+     matchLabels:
+       app: keyphrase-app
+   template:
+     metadata:
+       labels:
+         app: keyphrase-app
+     spec:
+       containers:
+       - name: keyphrase
+         image: mcr.microsoft.com/azure-cognitive-services/keyphrase
+         ports:
+         - containerPort: 5000
+         resources:
+           requests:
+             memory: 2Gi
+             cpu: 1
+           limits:
+             memory: 4Gi
+             cpu: 1
+         env:
+         - name: EULA
+           value: "accept"
+         - name: billing
+           value: # {ENDPOINT_URI}
+         - name: apikey
+           value: # {API_KEY}
+ ---
+ apiVersion: v1
+ kind: Service
+ metadata:
+   name: keyphrase
+ spec:
+   type: LoadBalancer
+   ports:
+   - port: 5000
+   selector:
+     app: keyphrase-app
+ ```
+
+ > [!IMPORTANT]
+ > Remember to remove the key from your code when you're done, and never post it publicly. For production, use a secure way of storing and accessing your credentials like [Azure Key Vault](../../../key-vault/general/overview.md). See the Azure AI services [security](../../security-features.md) article for more information.
+
+1. Save the file, and close the text editor.
+1. Run the Kubernetes `apply` command with the *keyphrase.yaml* file as its target:
+
+ ```console
+ kubectl apply -f keyphrase.yaml
+ ```
+
+ After the command successfully applies the deployment configuration, a message appears similar to the following output:
+
+ ```output
+ deployment.apps "keyphrase" created
+ service "keyphrase" created
+ ```
+1. Verify that the pod was deployed:
+
+ ```console
+ kubectl get pods
+ ```
+
+ The output for the running status of the pod:
+
+ ```output
+ NAME READY STATUS RESTARTS AGE
+ keyphrase-5c9ccdf575-mf6k5 1/1 Running 0 1m
+ ```
+
+1. Verify that the service is available, and get the IP address.
+
+ ```console
+ kubectl get services
+ ```
+
+ The output for the running status of the *keyphrase* service in the pod:
+
+ ```output
+ NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
+ kubernetes ClusterIP 10.0.0.1 <none> 443/TCP 2m
+ keyphrase LoadBalancer 10.0.100.64 168.61.156.180 5000:31234/TCP 2m
+ ```
+
+## Verify the Key Phrase Extraction container instance
+
+1. Select the **Overview** tab, and copy the IP address.
+1. Open a new browser tab, and enter the IP address. For example, enter `http://<IP-address>:5000` (such as `http://55.55.55.55:5000`). The container's home page is displayed, which lets you know the container is running.
+
+ ![View the container home page to verify that it's running](../media/swagger-container-documentation.png)
+
+1. Select the **Service API Description** link to go to the container's Swagger page.
+
+1. Choose any of the **POST** APIs, and select **Try it out**. The parameters that are displayed include this example input:
+
+ ```json
+ {
+ "documents": [
+ {
+ "id": "1",
+ "text": "Hello world"
+ },
+ {
+ "id": "2",
+ "text": "Bonjour tout le monde"
+ },
+ {
+ "id": "3",
+ "text": "La carretera estaba atascada. Había mucho tráfico el día de ayer."
+ },
+ {
+ "id": "4",
+ "text": ":) :( :D"
+ }
+ ]
+ }
+ ```
+
+1. Replace the input with the following JSON content:
+
+ ```json
+ {
+ "documents": [
+ {
+ "language": "en",
+ "id": "7",
+ "text": "I was fortunate to attend the KubeCon Conference in Barcelona, it is one of the best conferences I have ever attended. Great people, great sessions and I thoroughly enjoyed it!"
+ }
+ ]
+ }
+ ```
+
+1. Set **showStats** to `true`.
+
+1. Select **Execute** to extract key phrases from the text.
+
+ The JSON response that's returned includes the key phrases extracted from the updated text input:
+
+ ```json
+ {
+ "documents": [
+ {
+ "id": "7",
+ "keyPhrases": [
+ "Great people",
+ "great sessions",
+ "KubeCon Conference",
+ "Barcelona",
+ "best conferences"
+ ],
+ "statistics": {
+ "charactersCount": 176,
+ "transactionsCount": 1
+ }
+ }
+ ],
+ "errors": [],
+ "statistics": {
+ "documentsCount": 1,
+ "validDocumentsCount": 1,
+ "erroneousDocumentsCount": 0,
+ "transactionsCount": 1
+ }
+ }
+ ```
+
+We can now correlate the document `id` of the response payload's JSON data to the original request payload document `id`. The resulting document has a `keyPhrases` array, which contains the list of key phrases that have been extracted from the corresponding input document. Additionally, there are various statistics such as `charactersCount` and `transactionsCount` for each resulting document.
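+
+You can also send the same request from the command line instead of the Swagger page. The following is a sketch that assumes the external IP address shown earlier and that the container exposes the v3.1 key phrases path:
+
+```bash
+# Replace the IP address with the EXTERNAL-IP value returned by kubectl get services.
+curl -X POST "http://168.61.156.180:5000/text/analytics/v3.1/keyPhrases" \
+  --header "Content-Type: application/json" \
+  --data '{"documents":[{"language":"en","id":"7","text":"I was fortunate to attend the KubeCon Conference in Barcelona."}]}'
+```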
+
+## Next steps
+
+* Use more [Azure AI containers](../../cognitive-services-container-support.md)
+* [Key phrase extraction overview](../overview.md)
ai-services Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/language-service/whats-new.md
+
+ Title: What's new in Azure AI Language?
+
+description: Find out about new releases and features for the Azure AI Language.
++++++ Last updated : 04/14/2023++++
+# What's new in Azure AI Language?
+
+Azure AI Language is updated on an ongoing basis. To stay up-to-date with recent developments, this article provides you with information about new releases and features.
+
+## May 2023
+
+* [Custom Named Entity Recognition (NER) Docker containers](./custom-named-entity-recognition/how-to/use-containers.md) are now available for on-premises deployment.
+
+## April 2023
+
+* [Custom Text analytics for health](./custom-text-analytics-for-health/overview.md) is available in public preview, enabling you to build custom AI models to extract healthcare-specific entities from unstructured text.
+* You can now use Azure OpenAI to automatically label or generate data during authoring. Learn more with the links below.
+ * Auto-label your documents in [Custom text classification](./custom-text-classification/how-to/use-autolabeling.md) or [Custom named entity recognition](./custom-named-entity-recognition/how-to/use-autolabeling.md).
+ * Generate suggested utterances in [Conversational language understanding](./conversational-language-understanding/how-to/tag-utterances.md#suggest-utterances-with-azure-openai).
+* The latest model version (2022-10-01) for Language Detection now supports 6 more international languages and 12 Romanized Indic languages.
+
+## March 2023
+
+* New model version ('2023-01-01-preview') for Personally Identifiable Information (PII) detection with quality updates and new [language support](./personally-identifiable-information/language-support.md)
+
+* New versions of the text analysis client library are available in preview:
+
+ ### [C#](#tab/csharp)
+
+ [**Package (NuGet)**](https://www.nuget.org/packages/Azure.AI.TextAnalytics/5.3.0-beta.2)
+
+ [**Changelog/Release History**](https://github.com/Azure/azure-sdk-for-net/blob/Azure.AI.TextAnalytics_5.3.0-beta.2/sdk/textanalytics/Azure.AI.TextAnalytics/CHANGELOG.md)
+
+ [**ReadMe**](https://github.com/Azure/azure-sdk-for-net/blob/Azure.AI.TextAnalytics_5.3.0-beta.2/sdk/textanalytics/Azure.AI.TextAnalytics/README.md)
+
+ [**Samples**](https://github.com/Azure/azure-sdk-for-net/blob/Azure.AI.TextAnalytics_5.3.0-beta.2/sdk/textanalytics/Azure.AI.TextAnalytics/samples/README.md)
+
+ ### [Java](#tab/java)
+
+ [**Package (Maven)**](https://mvnrepository.com/artifact/com.azure/azure-ai-textanalytics/5.3.0-beta.2)
+
+ [**Changelog/Release History**](https://github.com/Azure/azure-sdk-for-jav#530-beta2-2023-03-07)
+
+ [**ReadMe**](https://github.com/Azure/azure-sdk-for-jav)
+
+ [**Samples**](https://github.com/Azure/azure-sdk-for-java/tree/main/sdk/textanalytics/azure-ai-textanalytics/src/samples)
+
+ ### [JavaScript](#tab/javascript)
+
+ [**Package (npm)**](https://www.npmjs.com/package/@azure/ai-language-text/v/1.1.0-beta.2)
+
+ [**Changelog/Release History**](https://github.com/Azure/azure-sdk-for-js/blob/main/sdk/cognitivelanguage/ai-language-text/CHANGELOG.md)
+
+ [**ReadMe**](https://github.com/Azure/azure-sdk-for-js/tree/main/sdk/cognitivelanguage/ai-language-text)
+
+ [**Samples**](https://github.com/Azure/azure-sdk-for-js/tree/main/sdk/cognitivelanguage/ai-language-text/samples/v1-beta)
+
+ ### [Python](#tab/python)
+
+ [**Package (PyPi)**](https://pypi.org/project/azure-ai-textanalytics/5.3.0b2/)
+
+ [**Changelog/Release History**](https://github.com/Azure/azure-sdk-for-python/blob/azure-ai-textanalytics_5.3.0b2/sdk/textanalytics/azure-ai-textanalytics/CHANGELOG.md)
+
+ [**ReadMe**](https://github.com/Azure/azure-sdk-for-python/blob/azure-ai-textanalytics_5.3.0b2/sdk/textanalytics/azure-ai-textanalytics/README.md)
+
+ [**Samples**](https://github.com/Azure/azure-sdk-for-python/tree/azure-ai-textanalytics_5.3.0b2/sdk/textanalytics/azure-ai-textanalytics/samples)
+
+
+
+## February 2023
+
+* Conversational language understanding and orchestration workflow is now available in the following regions in the sovereign cloud for China:
+ * China East 2 (Authoring and Prediction)
+ * China North 2 (Prediction)
+* New model evaluation updates for Conversational language understanding and Orchestration workflow.
+* New model version ('2023-01-01-preview') for Text Analytics for health featuring new [entity categories](./text-analytics-for-health/concepts/health-entity-categories.md) for social determinants of health.
+* New model version ('2023-02-01-preview') for named entity recognition features improved accuracy and more [language support](./named-entity-recognition/language-support.md) with up to 79 languages.
+
+## December 2022
+
+* New version (v5.2.0-beta.1) of the text analysis client library is available in preview for C#/.NET:
+ * [**Package (NuGet)**](https://www.nuget.org/packages/Azure.AI.TextAnalytics/5.3.0-beta.1)
+ * [**Changelog/Release History**](https://github.com/Azure/azure-sdk-for-net/blob/main/sdk/textanalytics/Azure.AI.TextAnalytics/CHANGELOG.md)
+ * [**ReadMe**](https://github.com/Azure/azure-sdk-for-net/blob/main/sdk/textanalytics/Azure.AI.TextAnalytics/README.md)
+ * [**Samples**](https://github.com/Azure/azure-sdk-for-net/tree/main/sdk/textanalytics/Azure.AI.TextAnalytics/samples)
+ * New model version (`2022-10-01`) released for Language Detection. The new model version comes with improvements in language detection quality on short texts.
+
+## November 2022
+
+* Expanded language support for:
+ * [Opinion mining](./sentiment-opinion-mining/language-support.md)
+* Conversational PII now supports up to 40,000 characters as document size.
+* New versions of the text analysis client library are available in preview:
+
+ * Java
+ * [**Package (Maven)**](https://mvnrepository.com/artifact/com.azure/azure-ai-textanalytics/5.3.0-beta.1)
+ * [**Changelog/Release History**](https://github.com/Azure/azure-sdk-for-jav)
+ * [**ReadMe**](https://github.com/Azure/azure-sdk-for-jav)
+ * [**Samples**](https://github.com/Azure/azure-sdk-for-java/tree/azure-ai-textanalytics_5.3.0-beta.1/sdk/textanalytics/azure-ai-textanalytics/src/samples)
+
+ * JavaScript
+ * [**Package (npm)**](https://www.npmjs.com/package/@azure/ai-language-text/v/1.1.0-beta.1)
+ * [**Changelog/Release History**](https://github.com/Azure/azure-sdk-for-js/blob/main/sdk/cognitivelanguage/ai-language-text/CHANGELOG.md)
+ * [**ReadMe**](https://github.com/Azure/azure-sdk-for-js/blob/main/sdk/cognitivelanguage/ai-language-text/README.md)
+ * [**Samples**](https://github.com/Azure/azure-sdk-for-js/tree/main/sdk/cognitivelanguage/ai-language-text/samples/v1)
+
+ * Python
+ * [**Package (PyPi)**](https://pypi.org/project/azure-ai-textanalytics/5.3.0b1/)
+ * [**Changelog/Release History**](https://github.com/Azure/azure-sdk-for-python/blob/azure-ai-textanalytics_5.3.0b1/sdk/textanalytics/azure-ai-textanalytics/CHANGELOG.md)
+ * [**ReadMe**](https://github.com/Azure/azure-sdk-for-python/blob/azure-ai-textanalytics_5.3.0b1/sdk/textanalytics/azure-ai-textanalytics/README.md)
+ * [**Samples**](https://github.com/Azure/azure-sdk-for-python/tree/azure-ai-textanalytics_5.3.0b1/sdk/textanalytics/azure-ai-textanalytics/samples)
+
+## October 2022
+
+* The summarization feature now has the following capabilities:
+ * [Document summarization](./summarization/overview.md):
+ * Abstractive summarization, which generates a summary of a document that may not use the same words as those in the document, but captures the main idea.
+ * [Conversation summarization](./summarization/overview.md?tabs=conversation-summarization)
+ * Chapter title summarization, which returns suggested chapter titles of input conversations.
+ * Narrative summarization, which returns call notes, meeting notes or chat summaries of input conversations.
+* Expanded language support for:
+ * [Sentiment analysis](./sentiment-opinion-mining/language-support.md)
+ * [Key phrase extraction](./key-phrase-extraction/language-support.md)
+ * [Named entity recognition](./named-entity-recognition/language-support.md)
+ * [Text Analytics for health](./text-analytics-for-health/language-support.md)
+* [Multi-region deployment](./concepts/custom-features/multi-region-deployment.md) and [project asset versioning](./concepts/custom-features/project-versioning.md) for:
+ * [Conversational language understanding](./conversational-language-understanding/overview.md)
+ * [Orchestration workflow](./orchestration-workflow/overview.md)
+ * [Custom text classification](./custom-text-classification/overview.md)
+ * [Custom named entity recognition](./custom-named-entity-recognition/overview.md)
+* [Regular expressions](./conversational-language-understanding/concepts/entity-components.md#regex-component) in conversational language understanding and [required components](./conversational-language-understanding/concepts/entity-components.md#required-components), offering an additional ability to influence entity predictions.
+* [Entity resolution](./named-entity-recognition/concepts/entity-resolutions.md) in named entity recognition
+* New region support for:
+ * [Conversational language understanding](./conversational-language-understanding/service-limits.md#regional-availability)
+ * [Orchestration workflow](./orchestration-workflow/service-limits.md#regional-availability)
+ * [Custom text classification](./custom-text-classification/service-limits.md#regional-availability)
+ * [Custom named entity recognition](./custom-named-entity-recognition/service-limits.md#regional-availability)
+* Document type as an input supported for [Text Analytics for health](./text-analytics-for-health/how-to/call-api.md) FHIR requests
+
+## September 2022
+
+* [Conversational language understanding](./conversational-language-understanding/overview.md) is available in the following regions:
+ * Central India
+ * Switzerland North
+ * West US 2
+* Text Analytics for Health now [supports additional languages](./text-analytics-for-health/language-support.md) in preview: Spanish, French, German Italian, Portuguese and Hebrew. These languages are available when using a docker container to deploy the API service.
+* The Azure.AI.TextAnalytics client library v5.2.0 is generally available and ready for use in production applications. For more information on Language service client libraries, see the [**Developer overview**](./concepts/developer-guide.md).
+ * Java
+ * [**Package (Maven)**](https://mvnrepository.com/artifact/com.azure/azure-ai-textanalytics/5.2.0)
+ * [**Changelog/Release History**](https://github.com/Azure/azure-sdk-for-jav)
+ * [**ReadMe**](https://github.com/Azure/azure-sdk-for-jav)
+ * [**Samples**](https://github.com/Azure/azure-sdk-for-java/tree/main/sdk/textanalytics/azure-ai-textanalytics/src/samples)
+ * Python
+ * [**Package (PyPi)**](https://pypi.org/project/azure-ai-textanalytics/5.2.0/)
+ * [**Changelog/Release History**](https://github.com/Azure/azure-sdk-for-python/blob/main/sdk/textanalytics/azure-ai-textanalytics/CHANGELOG.md)
+ * [**ReadMe**](https://github.com/Azure/azure-sdk-for-python/blob/main/sdk/textanalytics/azure-ai-textanalytics/README.md)
+ * [**Samples**](https://github.com/Azure/azure-sdk-for-python/tree/main/sdk/textanalytics/azure-ai-textanalytics/samples)
+ * C#/.NET
+ * [**Package (NuGet)**](https://www.nuget.org/packages/Azure.AI.TextAnalytics/5.2.0)
+ * [**Changelog/Release History**](https://github.com/Azure/azure-sdk-for-net/blob/main/sdk/textanalytics/Azure.AI.TextAnalytics/CHANGELOG.md)
+ * [**ReadMe**](https://github.com/Azure/azure-sdk-for-net/blob/main/sdk/textanalytics/Azure.AI.TextAnalytics/README.md)
+ * [**Samples**](https://github.com/Azure/azure-sdk-for-net/tree/main/sdk/textanalytics/Azure.AI.TextAnalytics/samples)
+
+## August 2022
+
+* [Role-based access control](./concepts/role-based-access-control.md) for the Language service.
+
+## July 2022
+
+* New AI models for [sentiment analysis](./sentiment-opinion-mining/overview.md) and [key phrase extraction](./key-phrase-extraction/overview.md) based on [z-code models](https://www.microsoft.com/research/project/project-zcode/), providing:
+ * Performance and quality improvements for the following 11 [languages](./sentiment-opinion-mining/language-support.md) supported by sentiment analysis: `ar`, `da`, `el`, `fi`, `hi`, `nl`, `no`, `pl`, `ru`, `sv`, `tr`
+ * Performance and quality improvements for the following 20 [languages](./key-phrase-extraction/language-support.md) supported by key phrase extraction: `af`, `bg`, `ca`, `hr`, `da`, `nl`, `et`, `fi`, `el`, `hu`, `id`, `lv`, `no`, `pl`, `ro`, `ru`, `sk`, `sl`, `sv`, `tr`
+
+* Conversational PII is now available in all Azure regions supported by the Language service.
+
+* A new version of the Language API (`2022-07-01-preview`) has been released. It provides:
+ * [Automatic language detection](./concepts/use-asynchronously.md#automatic-language-detection) for asynchronous tasks.
+ * Text Analytics for health confidence scores are now returned in relations.
+
+ To use this version in your REST API calls, use the following URL:
+
+ ```http
+ <your-language-resource-endpoint>/language/:analyze-text?api-version=2022-07-01-preview
+ ```
+
+## June 2022
+* v1.0 client libraries for [conversational language understanding](./conversational-language-understanding/how-to/call-api.md?tabs=azure-sdk#send-a-conversational-language-understanding-request) and [orchestration workflow](./orchestration-workflow/how-to/call-api.md?tabs=azure-sdk#send-an-orchestration-workflow-request) are Generally Available for the following languages:
+ * [C#](https://github.com/Azure/azure-sdk-for-net/tree/Azure.AI.Language.Conversations_1.0.0/sdk/cognitivelanguage/Azure.AI.Language.Conversations)
+ * [Python](https://github.com/Azure/azure-sdk-for-python/tree/azure-ai-language-conversations_1.0.0/sdk/cognitivelanguage/azure-ai-language-conversations)
+* v1.1.0b1 client library for [conversation summarization](summarization/quickstart.md?tabs=conversation-summarization&pivots=programming-language-python) is available as a preview for:
+ * [Python](https://github.com/Azure/azure-sdk-for-python/blob/azure-ai-language-conversations_1.1.0b1/sdk/cognitivelanguage/azure-ai-language-conversations/samples/README.md)
+* There is a new endpoint URL and request format for making REST API calls to prebuilt Language service features. See the following quickstart guides and reference documentation for information on structuring your API calls. All text analytics 3.2-preview.2 API users can begin migrating their workloads to this new endpoint.
+ * [Entity linking](./entity-linking/quickstart.md?pivots=rest-api)
+ * [Language detection](./language-detection/quickstart.md?pivots=rest-api)
+ * [Key phrase extraction](./key-phrase-extraction/quickstart.md?pivots=rest-api)
+ * [Named entity recognition](./named-entity-recognition/quickstart.md?pivots=rest-api)
+ * [PII detection](./personally-identifiable-information/quickstart.md?pivots=rest-api)
+ * [Sentiment analysis and opinion mining](./sentiment-opinion-mining/quickstart.md?pivots=rest-api)
+ * [Text analytics for health](./text-analytics-for-health/quickstart.md?pivots=rest-api)
++
+## May 2022
+
+* PII detection for conversations.
+* Rebranded Text Summarization to Document summarization.
+* Conversation summarization is now available in public preview.
+
+* The following features are now Generally Available (GA):
+ * Custom text classification
+ * Custom Named Entity Recognition (NER)
+ * Conversational language understanding
+ * Orchestration workflow
+
+* The following updates for custom text classification, custom Named Entity Recognition (NER), conversational language understanding, and orchestration workflow:
+ * Data splitting controls.
+ * Ability to cancel training jobs.
+ * Custom deployments can be named. You can have up to 10 deployments.
+ * Ability to swap deployments.
+ * Auto labeling (preview) for custom named entity recognition
+ * Enterprise readiness support
+ * Training modes for conversational language understanding
+ * Updated service limits
+ * Ability to use free (F0) tier for Language resources
+ * Expanded regional availability
+ * Updated model life cycle to add training configuration versions
+++
+## April 2022
+
+* Fast Healthcare Interoperability Resources (FHIR) support is available in the [Language REST API preview](text-analytics-for-health/quickstart.md?pivots=rest-api&tabs=language) for Text Analytics for health.
+
+## March 2022
+
+* Expanded language support for:
+ * [Custom text classification](custom-text-classification/language-support.md)
+ * [Custom Named Entity Recognition (NER)](custom-named-entity-recognition/language-support.md)
+ * [Conversational language understanding](conversational-language-understanding/language-support.md)
+
+## February 2022
+
+* Model improvements for the latest model version of [text summarization](summarization/overview.md).
+
+* Model 2021-10-01 is Generally Available (GA) for [Sentiment Analysis and Opinion Mining](sentiment-opinion-mining/overview.md), featuring enhanced modeling for emojis and better accuracy across all supported languages.
+
+* [Question Answering](question-answering/overview.md): Active learning v2 incorporates better clustering logic, which improves the accuracy of suggestions. It also considers user actions when suggestions are accepted or rejected, to avoid duplicate suggestions and improve query suggestions.
+
+## December 2021
+
+* The version 3.1-preview.x REST endpoints and 5.1.0-beta.x client library have been retired. Upgrade to the Generally Available version of the API (v3.1). If you're using the client libraries, use package version 5.1.0 or higher. See the [migration guide](./concepts/migrate-language-service-latest.md) for details.
+
+## November 2021
+
+* Based on ongoing customer feedback, we have increased the character limit per document for Text Analytics for health from 5,120 to 30,720.
+
+* Azure AI Language release, with support for:
+
+ * [Question Answering (now Generally Available)](question-answering/overview.md)
+ * [Sentiment Analysis and opinion mining](sentiment-opinion-mining/overview.md)
+ * [Key Phrase Extraction](key-phrase-extraction/overview.md)
+ * [Named Entity Recognition (NER), Personally Identifying Information (PII)](named-entity-recognition/overview.md)
+ * [Language Detection](language-detection/overview.md)
+ * [Text Analytics for health](text-analytics-for-health/overview.md)
+ * [Text summarization preview](summarization/overview.md)
+ * [Custom Named Entity Recognition (Custom NER) preview](custom-named-entity-recognition/overview.md)
+ * [Custom Text Classification preview](custom-text-classification/overview.md)
+ * [Conversational Language Understanding preview](conversational-language-understanding/overview.md)
+
+* Preview model version `2021-10-01-preview` for [Sentiment Analysis and Opinion mining](sentiment-opinion-mining/overview.md), which provides:
+
+ * Improved prediction quality.
+ * [Additional language support](sentiment-opinion-mining/language-support.md?tabs=sentiment-analysis) for the opinion mining feature.
+ * For more information, see the [project z-code site](https://www.microsoft.com/research/project/project-zcode/).
+ * To use this [model version](sentiment-opinion-mining/how-to/call-api.md#specify-the-sentiment-analysis-model), you must specify it in your API calls by using the model version parameter, as in the client-library sketch after this list.
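+
+As a quick illustration, here's a minimal client-library sketch that pins this preview model version. It assumes the `azure-ai-textanalytics` package (version 5.1.0 or later) and placeholder endpoint and key values; the linked how-to guide remains the authoritative reference.
+
+```python
+from azure.core.credentials import AzureKeyCredential
+from azure.ai.textanalytics import TextAnalyticsClient
+
+# Placeholders: substitute your own Language resource endpoint and key.
+client = TextAnalyticsClient(
+    endpoint="https://<your-language-resource>.cognitiveservices.azure.com",
+    credential=AzureKeyCredential("<your-key>"),
+)
+
+# Pin the preview model version instead of using the service default.
+results = client.analyze_sentiment(
+    ["Great atmosphere, but the wait was far too long."],
+    model_version="2021-10-01-preview",
+    show_opinion_mining=True,
+)
+for doc in results:
+    if not doc.is_error:
+        print(doc.sentiment, doc.confidence_scores)
+```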
+
+* SDK support for sending requests to custom models:
+
+ * Custom Named Entity Recognition
+ * Custom text classification
+ * Custom language understanding
+
+## Next steps
+
+* See the [previous updates](./concepts/previous-updates.md) article for service updates not listed here.
ai-services Language Support https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/language-support.md
+
+ Title: Language support
+
+description: Azure AI services enable you to build applications that see, hear, speak with, and understand your users.
+++++ Last updated : 07/18/2023++++
+# Natural language support for Azure AI services
+
+Azure AI services enable you to build applications that see, hear, articulate, and understand users. Our burgeoning language support capabilities enable users to communicate with your application in natural ways and help facilitate global outreach. Use the links in the tables to view language support and availability by service.
+
+## Language supported services
+
+The following table provides links to language support reference articles by supported service.
+
+| Azure AI Language support | Description |
+| | |
+|![Content Moderator icon](medi) (retired) | Detect potentially offensive or unwanted content. |
+|![Document Intelligence icon](medi) | Turn documents into usable data at a fraction of the time and cost. |
+|![Immersive Reader icon](medi) | Help users read and comprehend text. |
+|![Language icon](medi) | Build apps with industry-leading natural language understanding capabilities. |
+|![Language Understanding icon](medi) (retired) | Understand natural language in your apps. |
+|![QnA Maker icon](medi) (retired) | Distill information into easy-to-navigate questions and answers. |
+|![Speech icon](medi)| Configure speech-to-text, text-to-speech, translation, and speaker recognition applications. |
+|![Translator icon](medi) | Translate more than 100 languages and dialects including those deemed at-risk and endangered. |
+|![Video Indexer icon](medi#guidelines-and-limitations) | Extract actionable insights from your videos. |
+|![Vision icon](medi) | Analyze content in images and videos. |
+
+## Language independent services
+
+These Azure AI services are language agnostic and don't have limitations based on human language.
+
+| Azure AI service | Description |
+| | |
+|![Anomaly Detector icon](media/service-icons/anomaly-detector.svg)</br>[Anomaly Detector](./Anomaly-Detector/index.yml) | Identify potential problems early on. |
+|![Custom Vision icon](media/service-icons/custom-vision.svg)</br>[Custom Vision](./custom-vision-service/index.yml) |Customize image recognition to fit your business. |
+|![Face icon](medi) | Detect and identify people and emotions in images. |
+|![Personalizer icon](media/service-icons/personalizer.svg)</br>[Personalizer](./personalizer/index.yml) | Create rich, personalized experiences for users. |
+
+## See also
+
+* [What are Azure AI services?](./what-are-ai-services.md)
+* [Create an account](multi-service-resource.md?pivots=azportal)
ai-services Manage Resources https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/manage-resources.md
+
+ Title: Recover deleted Azure AI services resource
+
+description: This article provides instructions on how to recover an already-deleted Azure AI services resource.
+++++ Last updated : 07/05/2023+++
+# Recover deleted Azure AI services resources
+
+This article provides instructions on how to recover an Azure AI services resource that is already deleted. The article also provides instructions on how to purge a deleted resource.
+
+> [!NOTE]
+> The instructions in this article are applicable to both a multi-service resource and a single-service resource. A multi-service resource enables access to multiple Azure AI services using a single key and endpoint. On the other hand, a single-service resource enables access to just that specific Azure AI service for which the resource was created.
+
+## Prerequisites
+
+* The resource to be recovered must have been deleted within the past 48 hours.
+* The resource to be recovered must not have been purged already. A purged resource cannot be recovered.
+* Before you attempt to recover a deleted resource, make sure that the resource group for that account exists. If the resource group was deleted, you must recreate it. Recovering a resource group is not possible. For more information, see [Manage resource groups](../azure-resource-manager/management/manage-resource-groups-portal.md).
+* If the deleted resource used customer-managed keys with Azure Key Vault and the key vault has also been deleted, then you must restore the key vault before you restore the Azure AI services resource. For more information, see [Azure Key Vault recovery management](../key-vault/general/key-vault-recovery.md).
+* If the deleted resource used a customer-managed storage and storage account has also been deleted, you must restore the storage account before you restore the Azure AI services resource. For instructions, see [Recover a deleted storage account](../storage/common/storage-account-recover.md).
+
+Your subscription must have `Microsoft.CognitiveServices/locations/resourceGroups/deletedAccounts/delete` permissions to purge resources, such as [Azure AI services Contributor](../role-based-access-control/built-in-roles.md#cognitive-services-contributor) or [Contributor](../role-based-access-control/built-in-roles.md#contributor).
+
+## Recover a deleted resource
+
+To recover a deleted Azure AI services resource, use the following commands. Where applicable, replace:
+
+* `{subscriptionID}` with your Azure subscription ID
+* `{resourceGroup}` with your resource group
+* `{resourceName}` with your resource name
+* `{location}` with the location of your resource
++
+# [Azure portal](#tab/azure-portal)
+
+If you need to recover a deleted resource, navigate to the hub of the Azure AI services API type and select "Manage deleted resources" from the menu. For example, if you would like to recover an "Anomaly detector" resource, search for "Anomaly detector" in the search bar and select the service. Then select **Manage deleted resources**.
+
+Select the subscription in the dropdown list to locate the deleted resource you would like to recover. Select one or more of the deleted resources and select **Recover**.
++
+> [!NOTE]
+> It can take a couple of minutes for your deleted resource(s) to recover and show up in the list of the resources. Select the **Refresh** button in the menu to update the list of resources.
+
+# [Rest API](#tab/rest-api)
+
+Use the following `PUT` command:
+
+```rest-api
+https://management.azure.com/subscriptions/{subscriptionID}/resourceGroups/{resourceGroup}/providers/Microsoft.CognitiveServices/accounts/{resourceName}?Api-Version=2021-04-30
+```
+
+In the request body, use the following JSON format:
+
+```json
+{
+ "location": "{location}",
+ "properties": {
+ "restore": true
+ }
+}
+```
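+
+The same call can also be scripted. The following is a minimal sketch rather than an official sample; it assumes the `azure-identity` and `requests` Python packages and uses the placeholder values listed earlier.
+
+```python
+import requests
+from azure.identity import DefaultAzureCredential
+
+# Placeholders: substitute your own values.
+subscription_id = "{subscriptionID}"
+resource_group = "{resourceGroup}"
+resource_name = "{resourceName}"
+location = "{location}"
+
+# Acquire an ARM token for management.azure.com.
+token = DefaultAzureCredential().get_token("https://management.azure.com/.default").token
+
+url = (
+    f"https://management.azure.com/subscriptions/{subscription_id}"
+    f"/resourceGroups/{resource_group}/providers/Microsoft.CognitiveServices"
+    f"/accounts/{resource_name}"
+)
+response = requests.put(
+    url,
+    params={"api-version": "2021-04-30"},
+    headers={"Authorization": f"Bearer {token}"},
+    json={"location": location, "properties": {"restore": True}},
+)
+print(response.status_code, response.text)
+```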
+
+# [PowerShell](#tab/powershell)
+
+Use the following command to restore the resource:
+
+```powershell
+New-AzResource -Location {location} -Properties @{restore=$true} -ResourceId /subscriptions/{subscriptionID}/resourceGroups/{resourceGroup}/providers/Microsoft.CognitiveServices/accounts/{resourceName} -ApiVersion 2021-04-30
+```
+
+If you need to find the name of your deleted resources, you can get a list of deleted resource names with the following command:
+
+```powershell
+Get-AzResource -ResourceId /subscriptions/{subscriptionId}/providers/Microsoft.CognitiveServices/deletedAccounts -ApiVersion 2021-04-30
+```
+
+# [Azure CLI](#tab/azure-cli)
+
+```azurecli-interactive
+az resource create --subscription {subscriptionID} -g {resourceGroup} -n {resourceName} --location {location} --namespace Microsoft.CognitiveServices --resource-type accounts --properties "{\"restore\": true}"
+```
+++
+## Purge a deleted resource
+
+Once you delete a resource, you won't be able to create another one with the same name for 48 hours. To create a resource with the same name, you will need to purge the deleted resource.
+
+To purge a deleted Azure AI services resource, use the following commands. Where applicable, replace:
+
+* `{subscriptionID}` with your Azure subscription ID
+* `{resourceGroup}` with your resource group
+* `{resourceName}` with your resource name
+* `{location}` with the location of your resource
+
+> [!NOTE]
+> Once a resource is purged, it is permanently deleted and cannot be restored. You will lose all data and keys associated with the resource.
++
+# [Azure portal](#tab/azure-portal)
+
+If you need to purge a deleted resource, the steps are similar to recovering a deleted resource.
+
+Navigate to the hub of the Azure AI services API type of your deleted resource. For example, if you would like to purge an "Anomaly detector" resource, search for "Anomaly detector" in the search bar and select the service. Then select **Manage deleted resources** from the menu.
+
+Select the subscription in the dropdown list to locate the deleted resource you would like to purge.
+Select one or more deleted resources and select **Purge**.
+Purging will permanently delete an Azure AI services resource.
+++
+# [Rest API](#tab/rest-api)
+
+Use the following `DELETE` command:
+
+```rest-api
+https://management.azure.com/subscriptions/{subscriptionID}/providers/Microsoft.CognitiveServices/locations/{location}/resourceGroups/{resourceGroup}/deletedAccounts/{resourceName}?Api-Version=2021-04-30
+```
+
+# [PowerShell](#tab/powershell)
+
+```powershell
+Remove-AzResource -ResourceId /subscriptions/{subscriptionID}/providers/Microsoft.CognitiveServices/locations/{location}/resourceGroups/{resourceGroup}/deletedAccounts/{resourceName} -ApiVersion 2021-04-30
+```
+
+# [Azure CLI](#tab/azure-cli)
+
+```azurecli-interactive
+az resource delete --ids /subscriptions/{subscriptionId}/providers/Microsoft.CognitiveServices/locations/{location}/resourceGroups/{resourceGroup}/deletedAccounts/{resourceName}
+```
++++
+## See also
+* [Create a multi-service resource](multi-service-resource.md)
+* [Create a new resource using an ARM template](create-account-resource-manager-template.md)
ai-services Cost Management https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/metrics-advisor/cost-management.md
+
+ Title: Cost management with Azure AI Metrics Advisor
+
+description: Learn about cost management and pricing for Azure AI Metrics Advisor
+++++ Last updated : 09/06/2022+++
+# Azure AI Metrics Advisor cost management
+
+Azure AI Metrics Advisor monitors the performance of your organization's growth engines, including sales revenue and manufacturing operations. Quickly identify and fix problems through a powerful combination of monitoring in near-real time, adapting models to your scenario, and offering granular analysis with diagnostics and alerting. You will only be charged for the time series that are analyzed by the service. There's no up-front commitment or minimum fee.
+
+> [!NOTE]
+> This article discusses how pricing is calculated to assist you with planning and cost management when using Azure AI Metrics Advisor. The prices in this article do not reflect actual prices and are for example purposes only. For the latest pricing information please refer to the [official pricing page for Metrics Advisor](https://azure.microsoft.com/pricing/details/metrics-advisor/).
+
+## Key points about cost management and pricing
+
+- You will be charged for the number of **distinct [time series](glossary.md#time-series)** analyzed during a month. A time series counts toward this number even if only a single data point in it is analyzed.
+- The number of distinct time series is **irrespective** of its [granularity](glossary.md#granularity). An hourly time series and a daily time series will be charged at the same price.
+- The number of distinct time series is **highly related** to the data schema (the choice of timestamp, dimensions, and measures) during onboarding. Don't choose a **timestamp or any ID-like column** as a [dimension](glossary.md#dimension); doing so causes dimension explosion and introduces unexpected cost (see the sketch after this list).
+- You will be charged based on the tiered pricing structure listed below. The first day of the next month starts a new statistics window.
+- The more time series you onboard to the service for analysis, the lower the price you pay for each time series.
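+
+The following sketch illustrates the dimension-explosion point above. The cardinalities are hypothetical; the number of distinct time series is simply the product of the number of measures and the number of distinct values of each dimension.
+
+```python
+from math import prod
+
+measures = 1                          # e.g. revenue
+good_dimensions = [100, 10]           # 100 product categories x 10 regions
+bad_dimensions = [100, 10, 50_000]    # same, plus a user-ID column chosen as a dimension
+
+print(measures * prod(good_dimensions))   # 1,000 time series
+print(measures * prod(bad_dimensions))    # 50,000,000 time series -> unexpected cost
+```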
+
+**Again, keep in mind that the prices below are for example purposes only**. For the latest pricing information, consult the [official pricing page for Metrics Advisor](https://azure.microsoft.com/pricing/details/metrics-advisor/).
+
+| Analyzed time series per month | $ per time series |
+|--|--|
+| Free: first 25 time series | $- |
+| 26 time series - 1k time series | $0.75 |
+| 1k time series - 5k time series | $0.50 |
+| 5k time series - 20k time series | $0.25|
+| 20k time series - 50k time series| $0.10|
+| >50k time series | $0.05 |
++
+To help you get a basic understanding of Metrics Advisor and start exploring the service, there's an included free amount that allows you to analyze up to 25 time series.
+
+## Pricing examples
+
+### Example 1
+<!-- introduce statistic window-->
+
+In month 1, a customer onboards a data feed with 25 time series during the first week. In the second week, they onboard another data feed with 30 time series. In the third week, they delete the 30 time series that were onboarded during the second week. A total of **55** distinct time series are analyzed in month 1. The customer is charged for **30** of them (excluding the 25 time series in the free tier), which falls under tier 1. The monthly cost is: 30 * $0.75 = **$22.5**.
+
+| Volume tier | $ per time series | $ per month |
+| | -- | -- |
+| First 30 (55-25) time series | $0.75 | $22.5 |
+| **Total = 30 time series** | | **$22.5 per month** |
+
+In month 2, the customer hasn't onboarded or deleted any time series, so 25 time series are analyzed in month 2. Because these fall within the free tier, no cost is incurred.
+
+### Example 2
+<!-- introduce how time series is calculated-->
+
+A business planner needs to track the company's revenue as an indicator of business health. The revenue usually follows a week-over-week pattern, so the customer onboards the metric into Metrics Advisor to analyze it for anomalies. Metrics Advisor learns the pattern from historical data and performs detection on subsequent data points. A sudden drop might be detected as an anomaly, which may indicate an underlying issue, such as a service outage or a promotional offer not working as expected. An unexpected spike might also be detected as an anomaly, which may indicate a highly successful marketing campaign or a significant customer win.
+
+The metric is analyzed across **100 product categories** and **10 regions**, so the number of distinct time series being analyzed is calculated as:
+
+```
+1(Revenue) * 100 product categories * 10 regions = 1,000 analyzed time series
+```
+
+Based on the tiered pricing model described above, 1,000 analyzed time series per month is charged at (1,000 - 25) * $0.75 = **$731.25**.
+
+| Volume tier | $ per time series | $ per month |
+| | -- | -- |
+| First 975 (1,000-25) time series | $0.75 | $731.25 |
+| **Total = 975 time series** | | **$731.25 per month** |
+
+### Example 3
+<!-- introduce cost for multiple metrics and -->
+
+After validating detection results on the revenue metric, the customer would like to onboard two more metrics for analysis: cost, and DAU (daily active users) of their website. They would also like to add a new dimension with **20 channels**. Within the month, 10 of the 100 product categories are discontinued after the first week and are not analyzed further. In addition, 10 new product categories are introduced in the third week of the month, and the corresponding time series are analyzed for half of the month. The number of distinct time series being analyzed is then calculated as:
+
+```
+3 (revenue, cost, and DAU) * 110 product categories * 10 regions * 20 channels = 66,000 analyzed time series
+```
+
+Based on the tiered pricing model described above, 66,000 analyzed time series per month fall into tier 5 and are charged **$10,281.25**, as broken down in the following table and reproduced in the sketch after it.
+
+| Volume tier | $ per time series | $ per month |
+| | -- | -- |
+| First 975 (1,000-25) time series | $0.75 | $731.25 |
+| Next 4,000 time series | $0.50 | $2,000 |
+| Next 15,000 time series | $0.25 | $3,750 |
+| Next 30,000 time series | $0.10 | $3,000 |
+| Next 16,000 time series | $0.05 | $800 |
+| **Total = 65,975 time series** | | **$10,281.25 per month** |
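+
+The tiered totals in these examples can be reproduced with a small script. This is only a sketch using the example prices from this article, not real rates.
+
+```python
+# (tier size in time series, example price per series)
+TIERS = [
+    (25, 0.00),            # free tier
+    (975, 0.75),           # 26 - 1k
+    (4_000, 0.50),         # 1k - 5k
+    (15_000, 0.25),        # 5k - 20k
+    (30_000, 0.10),        # 20k - 50k
+    (float("inf"), 0.05),  # > 50k
+]
+
+def monthly_cost(analyzed_series: int) -> float:
+    remaining, total = analyzed_series, 0.0
+    for size, price in TIERS:
+        billed = min(remaining, size)
+        total += billed * price
+        remaining -= billed
+        if remaining <= 0:
+            break
+    return total
+
+print(monthly_cost(55))       # example 1: 22.5
+print(monthly_cost(1_000))    # example 2: 731.25
+print(monthly_cost(66_000))   # example 3: 10281.25
+```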
+
+## Next steps
+
+- [Manage your data feeds](how-tos/manage-data-feeds.md)
+- [Configurations for different data sources](data-feeds-from-different-sources.md)
+- [Configure metrics and fine tune detection configuration](how-tos/configure-metrics.md)
+
ai-services Data Feeds From Different Sources https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/metrics-advisor/data-feeds-from-different-sources.md
+
+ Title: Connect different data sources to Metrics Advisor
+
+description: Add different data feeds to Metrics Advisor
++++++ Last updated : 05/26/2021++++
+# How-to: Connect different data sources
+
+Use this article to find the settings and requirements for connecting different types of data sources to Azure AI Metrics Advisor. To learn about using your data with Metrics Advisor, see [Onboard your data](how-tos/onboard-your-data.md).
+
+## Supported authentication types
+
+| Authentication types | Description |
+| |-|
+|**Basic** | You need to provide basic parameters for accessing data sources. For example, you can use a connection string or a password. Data feed admins can view these credentials. |
+| **Azure managed identity** | [Managed identities](../../active-directory/managed-identities-azure-resources/overview.md) for Azure resources is a feature of Azure Active Directory (Azure AD). It provides Azure services with an automatically managed identity in Azure AD. You can use the identity to authenticate to any service that supports Azure AD authentication.|
+| **Azure SQL connection string**| Store your Azure SQL connection string as a credential entity in Metrics Advisor, and use it directly each time you import metrics data. Only admins of the credential entity can view these credentials, but authorized viewers can create data feeds without needing to know details for the credentials. |
+| **Azure Data Lake Storage Gen2 shared key**| Store your data lake account key as a credential entity in Metrics Advisor, and use it directly each time you import metrics data. Only admins of the credential entity can view these credentials, but authorized viewers can create data feeds without needing to know details for the credentials.|
+| **Service principal**| Store your [service principal](../../active-directory/develop/app-objects-and-service-principals.md) as a credential entity in Metrics Advisor, and use it directly each time you import metrics data. Only admins of the credential entity can view the credentials, but authorized viewers can create data feeds without needing to know details for the credentials.|
+| **Service principal from key vault**|Store your [service principal in a key vault](/azure-stack/user/azure-stack-key-vault-store-credentials) as a credential entity in Metrics Advisor, and use it directly each time you import metrics data. Only admins of a credential entity can view the credentials, but viewers can create data feeds without needing to know details for the credentials. |
++
+## Data sources and corresponding authentication types
+
+| Data sources | Authentication types |
+|-| |
+|[Application Insights](#appinsights) | Basic |
+|[Azure Blob Storage (JSON)](#blob) | Basic<br>Managed identity |
+|[Azure Cosmos DB (SQL)](#cosmosdb) | Basic |
+|[Azure Data Explorer (Kusto)](#kusto) | Basic<br>Managed identity<br>Service principal<br>Service principal from key vault |
+|[Azure Data Lake Storage Gen2](#adl) | Basic<br>Data Lake Storage Gen2 shared key<br>Service principal<br>Service principal from key vault |
+|[Azure Event Hubs](#eventhubs) | Basic |
+|[Azure Monitor Logs](#log) | Basic<br>Service principal<br>Service principal from key vault |
+|[Azure SQL Database / SQL Server](#sql) | Basic<br>Managed identity<br>Service principal<br>Service principal from key vault<br>Azure SQL connection string |
+|[Azure Table Storage](#table) | Basic |
+|[InfluxDB (InfluxQL)](#influxdb) | Basic |
+|[MongoDB](#mongodb) | Basic |
+|[MySQL](#mysql) | Basic |
+|[PostgreSQL](#pgsql) | Basic|
+
+The following sections specify the parameters required for all authentication types within different data source scenarios.
+
+## <span id="appinsights">Application Insights</span>
+
+* **Application ID**: This is used to identify this application when you're using the Application Insights API. To get the application ID, follow these steps:
+
+ 1. From your Application Insights resource, select **API Access**.
+
+ ![Screenshot that shows how to get the application ID from your Application Insights resource.](media/portal-app-insights-app-id.png)
+
+ 2. Copy the application ID generated into the **Application ID** field in Metrics Advisor.
+
+* **API key**: API keys are used by applications outside the browser to access this resource. To get the API key, follow these steps:
+
+ 1. From the Application Insights resource, select **API Access**.
+
+ 2. Select **Create API key**.
+
+ 3. Enter a short description, select the **Read telemetry** option, and select **Generate key**.
+
+ ![Screenshot that shows how to get the API key in the Azure portal.](media/portal-app-insights-app-id-api-key.png)
+
+ > [!IMPORTANT]
+ > Copy and save this API key. It will never be shown to you again. If you lose this key, you have to create a new one.
+
+ 4. Copy the API key to the **API key** field in Metrics Advisor.
+
+* **Query**: Application Insights logs are built on Azure Data Explorer, and Azure Monitor log queries use a version of the same Kusto query language. The [Kusto query language documentation](/azure/data-explorer/kusto/query) should be your primary resource for writing a query against Application Insights.
+
+ Sample query:
+
+ ``` Kusto
+ [TableName] | where [TimestampColumn] >= datetime(@IntervalStart) and [TimestampColumn] < datetime(@IntervalEnd);
+ ```
+ You can also refer to the [Tutorial: Write a valid query](tutorials/write-a-valid-query.md) for more specific examples.
+
+## <span id="blob">Azure Blob Storage (JSON)</span>
+
+* **Connection string**: There are two authentication types for Azure Blob Storage (JSON):
+
+ * **Basic**: See [Configure Azure Storage connection strings](../../storage/common/storage-configure-connection-string.md#configure-a-connection-string-for-an-azure-storage-account) for information on retrieving this string. Also, you can visit the Azure portal for your Azure Blob Storage resource, and find the connection string directly in **Settings** > **Access keys**.
+
+ * **Managed identity**: Managed identities for Azure resources can authorize access to blob and queue data. The feature uses Azure AD credentials from applications running in Azure virtual machines (VMs), function apps, virtual machine scale sets, and other services.
+
+ You can create a managed identity in the Azure portal for your Azure Blob Storage resource. In **Access Control (IAM)**, select **Role assignments**, and then select **Add**. A suggested role type is: **Storage Blob Data Reader**. For more details, refer to [Use managed identity to access Azure Storage](../../active-directory/managed-identities-azure-resources/tutorial-vm-windows-access-storage.md#grant-access-1).
+
+ ![Screenshot that shows a managed identity blob.](media/managed-identity-blob.png)
+
+
+* **Container**: Metrics Advisor expects time series data to be stored as blob files (one blob per timestamp), under a single container. This is the container name field.
+
+* **Blob template**: Metrics Advisor uses a path to find the JSON file in Blob Storage. This is an example of a blob file template, which is used to find the JSON file in Blob Storage: `%Y/%m/FileName_%Y-%m-%d-%h-%M.json`. `%Y/%m` is the path, and if you have `%d` in your path, you can add it after `%m`. If your JSON file is named by date, you can also use `%Y-%m-%d-%h-%M.json`.
+
+ The following parameters are supported:
+
+ * `%Y` is the year, formatted as `yyyy`.
+ * `%m` is the month, formatted as `MM`.
+ * `%d` is the day, formatted as `dd`.
+ * `%h` is the hour, formatted as `HH`.
+ * `%M` is the minute, formatted as `mm`.
+
+ For example, in the following dataset, the blob template should be `%Y/%m/%d/00/JsonFormatV2.json`. A short token-substitution sketch follows the screenshot.
+
+ ![Screenshot that shows the blob template.](media/blob-template.png)
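+
+The following is a small sketch of how a timestamp resolves against a blob template, using Python's `strftime` as an assumed equivalent for the tokens listed above.
+
+```python
+from datetime import datetime, timezone
+
+def resolve_blob_path(template: str, ts: datetime) -> str:
+    # Map the documented tokens onto strftime directives (%h becomes %H).
+    for token, directive in [("%Y", "%Y"), ("%m", "%m"), ("%d", "%d"), ("%h", "%H"), ("%M", "%M")]:
+        template = template.replace(token, ts.strftime(directive))
+    return template
+
+ts = datetime(2018, 1, 1, 0, 0, tzinfo=timezone.utc)
+print(resolve_blob_path("%Y/%m/%d/00/JsonFormatV2.json", ts))
+# -> 2018/01/01/00/JsonFormatV2.json
+```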
+
+
+* **JSON format version**: Defines the data schema in the JSON files. Metrics Advisor supports the following versions. You can choose one to fill in the field:
+
+ * **v1**
+
+ Only the metrics *Name* and *Value* are accepted. For example:
+
+ ```json
+ {"count":11, "revenue":1.23}
+ ```
+
+ * **v2**
+
+ The metrics *Dimensions* and *timestamp* are also accepted. For example:
+
+ ```json
+ [
+ {"date": "2018-01-01T00:00:00Z", "market":"en-us", "count":11, "revenue":1.23},
+ {"date": "2018-01-01T00:00:00Z", "market":"zh-cn", "count":22, "revenue":4.56}
+ ]
+ ```
+
+ Only one timestamp is allowed per JSON file.
+
+## <span id="cosmosdb">Azure Cosmos DB (SQL)</span>
+
+* **Connection string**: The connection string to access your Azure Cosmos DB instance. This can be found in the Azure Cosmos DB resource in the Azure portal, in **Keys**. For more information, see [Secure access to data in Azure Cosmos DB](../../cosmos-db/secure-access-to-data.md).
+* **Database**: The database to query against. In the Azure portal, under **Containers**, go to **Browse** to find the database.
+* **Collection ID**: The collection ID to query against. In the Azure portal, under **Containers**, go to **Browse** to find the collection ID.
+* **SQL query**: A SQL query to get and formulate data into multi-dimensional time series data. You can use the `@IntervalStart` and `@IntervalEnd` variables in your query. They should be formatted as follows: `yyyy-MM-ddTHH:mm:ssZ`.
+
+ Sample query:
+
+ ```SQL
+ SELECT [TimestampColumn], [DimensionColumn], [MetricColumn] FROM [TableName] WHERE [TimestampColumn] >= @IntervalStart and [TimestampColumn] < @IntervalEnd
+ ```
+
+ For more information, refer to the [tutorial on writing a valid query](tutorials/write-a-valid-query.md).
+
+## <span id="kusto">Azure Data Explorer (Kusto)</span>
+
+* **Connection string**: There are four authentication types for Azure Data Explorer (Kusto): basic, service principal, service principal from key vault, and managed identity. The data source in the connection string should be in the URI format (starts with "https"). You can find the URI in the Azure portal.
+
+ * **Basic**: Metrics Advisor supports accessing Azure Data Explorer (Kusto) by using Azure AD application authentication. You need to create and register an Azure AD application, and then authorize it to access an Azure Data Explorer database. For more information, see [Create an Azure AD app registration in Azure Data Explorer](/azure/data-explorer/provision-azure-ad-app). Here's an example of a connection string:
+
+ ```
+ Data Source=<URI Server>;Initial Catalog=<Database>;AAD Federated Security=True;Application Client ID=<Application Client ID>;Application Key=<Application Key>;Authority ID=<Tenant ID>
+ ```
+
+ * **Service principal**: A service principal is a concrete instance created from the application object. The service principal inherits certain properties from that application object. The service principal object defines what the app can actually do in the specific tenant, who can access the app, and what resources the app can access. To use a service principal in Metrics Advisor:
+
+ 1. Create the Azure AD application registration. For more information, see [Create an Azure AD app registration in Azure Data Explorer](/azure/data-explorer/provision-azure-ad-app).
+
+ 1. Manage Azure Data Explorer database permissions. For more information, see [Manage Azure Data Explorer database permissions](/azure/data-explorer/manage-database-permissions).
+
+ 1. Create a credential entity in Metrics Advisor. See how to [create a credential entity](how-tos/credential-entity.md) in Metrics Advisor, so that you can choose that entity when you're adding a data feed for the service principal authentication type.
+
+ Here's an example of a connection string:
+
+ ```
+ Data Source=<URI Server>;Initial Catalog=<Database>
+ ```
+
+ * **Service principal from key vault**: Azure Key Vault helps to safeguard cryptographic keys and secret values that cloud apps and services use. By using Key Vault, you can encrypt keys and secret values. You should create a service principal first, and then store the service principal inside Key Vault. For the detailed procedure, see [Create a credential entity for service principal from Key Vault](how-tos/credential-entity.md#sp-from-kv). Here's an example of a connection string:
+ ```
+ Data Source=<URI Server>;Initial Catalog=<Database>
+ ```
+
+ * **Managed identity**: Managed identity for Azure resources can authorize access to blob and queue data. Managed identity uses Azure AD credentials from applications running in Azure virtual machines, function apps, virtual machine scale sets, and other services. By using managed identity for Azure resources and Azure AD authentication, you can avoid storing credentials with your applications that run in the cloud. Learn how to [authorize with a managed identity](../../storage/blobs/authorize-managed-identity.md#enable-managed-identities-on-a-vm).
+
+ You can create a managed identity in the Azure portal for your Azure Data Explorer (Kusto). Select **Permissions** > **Add**. The suggested role type is: **admin / viewer**.
+
+ ![Screenshot that shows managed identity for Kusto.](media/managed-identity-kusto.png)
+
+ Here's an example of a connection string:
+ ```
+ Data Source=<URI Server>;Initial Catalog=<Database>
+ ```
+
+
+* **Query**: To get and formulate data into multi-dimensional time series data, see [Kusto Query Language](/azure/data-explorer/kusto/query). You can use the `@IntervalStart` and `@IntervalEnd` variables in your query. They should be formatted as follows: `yyyy-MM-ddTHH:mm:ssZ`.
+
+ Sample query:
+
+ ```kusto
+ [TableName] | where [TimestampColumn] >= datetime(@IntervalStart) and [TimestampColumn] < datetime(@IntervalEnd);
+ ```
+
+ For more information, refer to the [tutorial on writing a valid query](tutorials/write-a-valid-query.md).
+
+## <span id="adl">Azure Data Lake Storage Gen2</span>
+
+* **Account Name**: The authentication types for Azure Data Lake Storage Gen2 are basic, Azure Data Lake Storage Gen2 shared key, service principal, and service principal from Key Vault.
+
+ * **Basic**: The **Account Name** of your Azure Data Lake Storage Gen2. You can find this in your Azure storage account (Azure Data Lake Storage Gen2) resource, in **Access keys**.
+
+ * **Azure Data Lake Storage Gen2 shared key**: First, you specify the account key to access your Azure Data Lake Storage Gen2 (this is the same as the account key in the basic authentication type). You can find this in your Azure storage account (Azure Data Lake Storage Gen2) resource, in **Access keys**. Then, you [create a credential entity](how-tos/credential-entity.md) for the Azure Data Lake Storage Gen2 shared key type, and fill in the account key.
+
+ The account name is the same as the basic authentication type.
+
+ * **Service principal**: A *service principal* is a concrete instance created from the application object, and it inherits certain properties from that application object. A service principal is created in each tenant where the application is used, and it references the globally unique app object. The service principal object defines what the app can actually do in the specific tenant, who can access the app, and what resources the app can access.
+
+ The account name is the same as the basic authentication type.
+
+ **Step 1:** Create and register an Azure AD application, and then authorize it to access the database. For more information, see [Create an Azure AD app registration](/azure/data-explorer/provision-azure-ad-app).
+
+ **Step 2:** Assign roles.
+
+ 1. In the Azure portal, go to the **Storage accounts** service.
+
+ 2. Select the Azure Data Lake Storage Gen2 account to use with this application registration.
+
+ 3. Select **Access Control (IAM)**.
+
+ 4. Select **+ Add**, and select **Add role assignment** from the menu.
+
+ 5. Set the **Select** field to the Azure AD application name, and set the role to **Storage Blob Data Contributor**. Then select **Save**.
+
+ ![Screenshot that shows the steps to assign roles.](media/datafeeds/adls-gen-2-app-reg-assign-roles.png)
+
+ **Step 3:** [Create a credential entity](how-tos/credential-entity.md) in Metrics Advisor, so that you can choose that entity when you're adding a data feed for the service principal authentication type.
+
+ * **Service principal from Key Vault**: Key Vault helps to safeguard cryptographic keys and secret values that cloud apps and services use. By using Key Vault, you can encrypt keys and secret values. Create a service principal first, and then store the service principal inside a key vault. For more details, see [Create a credential entity for service principal from Key Vault](how-tos/credential-entity.md#sp-from-kv). The account name is the same as the basic authentication type.
+
+* **Account Key** (only necessary for the basic authentication type): Specify the account key to access your Azure Data Lake Storage Gen2. You can find this in your Azure storage account (Azure Data Lake Storage Gen2) resource, in **Access keys**.
+
+* **File System Name (Container)**: For Metrics Advisor, you store your time series data as blob files (one blob per timestamp), under a single container. This is the container name field. You can find this in your Azure storage account (Azure Data Lake Storage Gen2) instance. In **Data Lake Storage**, select **Containers**, and then you see the container name.
+
+* **Directory Template**: This is the directory template of the blob file. The following parameters are supported:
+
+ * `%Y` is the year, formatted as `yyyy`.
+ * `%m` is the month, formatted as `MM`.
+ * `%d` is the day, formatted as `dd`.
+ * `%h` is the hour, formatted as `HH`.
+ * `%M` is the minute, formatted as `mm`.
+
+ Query sample for a daily metric: `%Y/%m/%d`.
+
+ Query sample for an hourly metric: `%Y/%m/%d/%h`.
+
+* **File Template**:
+ Metrics Advisor uses a path to find the JSON file in Blob Storage. The following is an example of a blob file template, which is used to find the JSON file in Blob Storage: `%Y/%m/FileName_%Y-%m-%d-%h-%M.json`. `%Y/%m` is the path, and if you have `%d` in your path, you can add it after `%m`.
+
+ The following parameters are supported:
+
+ * `%Y` is the year, formatted as `yyyy`.
+ * `%m` is the month, formatted as `MM`.
+ * `%d` is the day, formatted as `dd`.
+ * `%h` is the hour, formatted as `HH`.
+ * `%M` is the minute, formatted as `mm`.
+
+ Metrics Advisor supports the data schema in the JSON files, as in the following example:
+
+ ```json
+ [
+ {"date": "2018-01-01T00:00:00Z", "market":"en-us", "count":11, "revenue":1.23},
+ {"date": "2018-01-01T00:00:00Z", "market":"zh-cn", "count":22, "revenue":4.56}
+ ]
+ ```
+
+## <span id="eventhubs">Azure Event Hubs</span>
+
+* **Limitations**: Be aware of the following limitations with integration.
+
+ * Metrics Advisor integration with Event Hubs doesn't currently support more than three active data feeds in one Metrics Advisor instance in public preview.
+ * Metrics Advisor will always start consuming messages from the latest offset, including when reactivating a paused data feed.
+
+ * Messages during the data feed pause period will be lost.
+ * The data feed ingestion start time is set to the current Coordinated Universal Time timestamp automatically, when the data feed is created. This time is only for reference purposes.
+
+ * Only one data feed can be used per consumer group. To reuse a consumer group from another deleted data feed, you need to wait at least ten minutes after deletion.
+ * The connection string and consumer group can't be modified after the data feed is created.
+ * For Event Hubs messages, only JSON is supported, and the JSON values can't be a nested JSON object. The top-level element can be a JSON object or a JSON array.
+
+ Valid messages are as follows:
+
+ Single JSON object:
+
+ ```json
+ {
+ "metric_1": 234,
+ "metric_2": 344,
+ "dimension_1": "name_1",
+ "dimension_2": "name_2"
+ }
+ ```
+
+ JSON array:
+
+ ```json
+ [
+ {
+ "timestamp": "2020-12-12T12:00:00", "temperature": 12.4,
+ "location": "outdoor"
+ },
+ {
+ "timestamp": "2020-12-12T12:00:00", "temperature": 24.8,
+ "location": "indoor"
+ }
+ ]
+ ```
++
+* **Connection String**: Go to the instance of Event Hubs. Then add a new policy or choose an existing shared access policy. Copy the connection string in the pop-up panel.
+ ![Screenshot of Event Hubs.](media/datafeeds/entities-eventhubs.jpg)
+
+ ![Screenshot of shared access policies.](media/datafeeds/shared-access-policies.jpg)
+
+ Here's an example of a connection string:
+ ```
+ Endpoint=<Server>;SharedAccessKeyName=<SharedAccessKeyName>;SharedAccessKey=<SharedAccess Key>;EntityPath=<EntityPath>
+ ```
+
+* **Consumer Group**: A [consumer group](../../event-hubs/event-hubs-features.md#consumer-groups) is a view (state, position, or offset) of an entire event hub.
+You find this on the **Consumer Groups** menu of an instance of Azure Event Hubs. A consumer group can only serve one data feed. Create a new consumer group for each data feed.
+* **Timestamp** (optional): Metrics Advisor uses the Event Hubs timestamp as the event timestamp, if the user data source doesn't contain a timestamp field. The timestamp field is optional. If no timestamp column is chosen, the service uses the enqueued time as the timestamp.
+
+ The timestamp field must match one of these two formats:
+
+ * `YYYY-MM-DDTHH:MM:SSZ`
+ * The number of seconds or milliseconds from the epoch of `1970-01-01T00:00:00Z`.
+
+ The timestamp is left-aligned to the granularity. For example, if the timestamp is `2019-01-01T00:03:00Z` and the granularity is 5 minutes, Metrics Advisor aligns the timestamp to `2019-01-01T00:00:00Z`. If the event timestamp is `2019-01-01T00:10:00Z`, Metrics Advisor uses the timestamp directly, without any alignment.
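+
+A minimal sketch of that alignment rule, using the granularity and timestamps from the example above:
+
+```python
+from datetime import datetime, timedelta
+
+def left_align(ts: datetime, granularity: timedelta) -> datetime:
+    # Floor the timestamp down to the nearest granularity boundary.
+    seconds = int((ts - datetime(1970, 1, 1)).total_seconds())
+    step = int(granularity.total_seconds())
+    return datetime(1970, 1, 1) + timedelta(seconds=seconds - seconds % step)
+
+print(left_align(datetime(2019, 1, 1, 0, 3), timedelta(minutes=5)))
+# -> 2019-01-01 00:00:00
+print(left_align(datetime(2019, 1, 1, 0, 10), timedelta(minutes=5)))
+# -> 2019-01-01 00:10:00 (already aligned, so used as-is)
+```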
++
+## <span id="log">Azure Monitor Logs</span>
+
+Azure Monitor Logs has the following authentication types: basic, service principal, and service principal from Key Vault.
+* **Basic**: You need to fill in **Tenant ID**, **Client ID**, **Client Secret**, and **Workspace ID**.
+ To get **Tenant ID**, **Client ID**, and **Client Secret**, see [Register app or web API](../../active-directory/develop/quickstart-register-app.md). You can find **Workspace ID** in the Azure portal.
+
+ ![Screenshot that shows where to find the Workspace ID in the Azure portal.](media/workspace-id.png)
+
+* **Service principal**: A service principal is a concrete instance created from the application object, and it inherits certain properties from that application object. A service principal is created in each tenant where the application is used, and it references the globally unique app object. The service principal object defines what the app can actually do in the specific tenant, who can access the app, and what resources the app can access.
+
+ **Step 1:** Create and register an Azure AD application, and then authorize it to access a database. For more information, see [Create an Azure AD app registration](/azure/data-explorer/provision-azure-ad-app).
+
+ **Step 2:** Assign roles.
+ 1. In the Azure portal, go to the **Storage accounts** service.
+ 2. Select **Access Control (IAM)**.
+ 3. Select **+ Add**, and then select **Add role assignment** from the menu.
+ 4. Set the **Select** field to the Azure AD application name, and set the role to **Storage Blob Data Contributor**. Then select **Save**.
+
+ ![Screenshot that shows how to assign roles.](media/datafeeds/adls-gen-2-app-reg-assign-roles.png)
+
+
+ **Step 3:** [Create a credential entity](how-tos/credential-entity.md) in Metrics Advisor, so that you can choose that entity when you're adding a data feed for the service principal authentication type.
+
+* **Service principal from Key Vault**: Key Vault helps to safeguard cryptographic keys and secret values that cloud apps and services use. By using Key Vault, you can encrypt keys and secret values. Create a service principal first, and then store the service principal inside a key vault. For more details, see [Create a credential entity for service principal from Key Vault](how-tos/credential-entity.md#sp-from-kv).
+
+* **Query**: Specify the query. For more information, see [Log queries in Azure Monitor](../../azure-monitor/logs/log-query-overview.md).
+
+ Sample query:
+
+ ``` Kusto
+ [TableName]
+ | where [TimestampColumn] >= datetime(@IntervalStart) and [TimestampColumn] < datetime(@IntervalEnd)
+ | summarize [count_per_dimension]=count() by [Dimension]
+ ```
+
+ For more information, refer to the [tutorial on writing a valid query](tutorials/write-a-valid-query.md).
+
+## <span id="sql">Azure SQL Database | SQL Server</span>
+
+* **Connection String**: The authentication types for Azure SQL Database and SQL Server are basic, managed identity, Azure SQL connection string, service principal, and service principal from key vault.
+
+ * **Basic**: Metrics Advisor accepts an [ADO.NET style connection string](/dotnet/framework/data/adonet/connection-string-syntax) for a SQL Server data source.
+ Here's an example of a connection string:
+
+ ```
+ Data Source=<Server>;Initial Catalog=<db-name>;User ID=<user-name>;Password=<password>
+ ```
+
+ * <span id='jump'>**Managed identity**</span>: Managed identity for Azure resources can authorize access to blob and queue data. It does so by using Azure AD credentials from applications running in Azure virtual machines, function apps, virtual machine scale sets, and other services. By using managed identity for Azure resources and Azure AD authentication, you can avoid storing credentials with your applications that run in the cloud. To [enable your managed identity](../../active-directory/managed-identities-azure-resources/tutorial-windows-vm-access-sql.md), follow these steps:
+ 1. Enabling a system-assigned managed identity is a one-click experience. In the Azure portal, for your Metrics Advisor workspace, go to **Settings** > **Identity** > **System assigned**. Then set the status as **on**.
+
+ ![Screenshot that shows how to set the status as on.](media/datafeeds/set-identity-status.png)
+
+ 1. Enable Azure AD authentication. In the Azure portal, for your data source, go to **Settings** > **Active Directory admin**. Select **Set admin**, and select an **Azure AD user account** to be made an administrator of the server. Then, choose **Select**.
+
+ ![Screenshot that shows how to set the admin.](media/datafeeds/set-admin.png)
+
+ 1. Enable managed identity in Metrics Advisor. You can edit a query in the database management tool or in the Azure portal.
+
+ **Management tool**: In your database management tool, select **Active Directory - Universal with MFA support** in the authentication field. In the **User name** field, enter the name of the Azure AD account that you set as the server administrator in step 2. For example, this might be `test@contoso.com`.
+
+ ![Screenshot that shows how to set connection details.](media/datafeeds/connection-details.png)
+
+ **Azure portal**: In your SQL database, select **Query editor**, and sign in with the admin account.
+ ![Screenshot that shows how to edit your query in the Azure portal.](media/datafeeds/query-editor.png)
+
+ Then in the query window, run the following (note that this is the same for the management tool method):
+
+ ```
+ CREATE USER [MI Name] FROM EXTERNAL PROVIDER
+ ALTER ROLE db_datareader ADD MEMBER [MI Name]
+ ```
+
+ > [!NOTE]
+ > The `MI Name` is the managed identity name in Metrics Advisor (for service principal, it should be replaced with the service principal name). For more information, see [Authorize with a managed identity](../../storage/blobs/authorize-managed-identity.md#enable-managed-identities-on-a-vm).
+
+ Here's an example of a connection string:
+
+ ```
+ Data Source=<Server>;Initial Catalog=<Database>
+ ```
+
+ * **Azure SQL connection string**:
+
+
+ Here's an example of a connection string:
+
+ ```
+ Data Source=<Server>;Initial Catalog=<Database>;User ID=<user-name>;Password=<password>
+ ```
+
+
+ * **Service principal**: A service principal is a concrete instance created from the application object, and it inherits certain properties from that application object. A service principal is created in each tenant where the application is used, and it references the globally unique app object. The service principal object defines what the app can actually do in the specific tenant, who can access the app, and what resources the app can access.
+
+ **Step 1:** Create and register an Azure AD application, and then authorize it to access a database. For more information, see [Create an Azure AD app registration](/azure/data-explorer/provision-azure-ad-app).
+
+ **Step 2:** Follow the steps documented previously, in [managed identity in SQL Server](#jump).
+
+ **Step 3:** [Create a credential entity](how-tos/credential-entity.md) in Metrics Advisor, so that you can choose that entity when you're adding a data feed for the service principal authentication type.
+
+ Here's an example of a connection string:
+
+ ```
+ Data Source=<Server>;Initial Catalog=<Database>
+ ```
+
+ * **Service principal from Key Vault**: Key Vault helps to safeguard cryptographic keys and secret values that cloud apps and services use. By using Key Vault, you can encrypt keys and secret values. Create a service principal first, and then store the service principal inside a key vault. For more details, see [Create a credential entity for service principal from Key Vault](how-tos/credential-entity.md#sp-from-kv). You can also find your connection string in your Azure SQL Server resource, in **Settings** > **Connection strings**.
+
+ Here's an example of a connection string:
+
+ ```
+ Data Source=<Server>;Initial Catalog=<Database>
+ ```
+
+* **Query**: Use a SQL query to get and formulate data into multi-dimensional time series data. You can use `@IntervalStart` and `@IntervalEnd` in your query to help with getting an expected metrics value in an interval. They should be formatted as follows: `yyyy-MM-ddTHH:mm:ssZ`.
++
+ Sample query:
+
+ ```SQL
+ SELECT [TimestampColumn], [DimensionColumn], [MetricColumn] FROM [TableName] WHERE [TimestampColumn] >= @IntervalStart and [TimestampColumn] < @IntervalEnd
+ ```
+
+## <span id="table">Azure Table Storage</span>
+
+* **Connection String**: Create a shared access signature (SAS) URL, and fill it in here. The most straightforward way to generate a SAS URL is by using the Azure portal. First, under **Settings**, go to the storage account you want to access. Then select **Shared access signature**. Select the **Table** and **Object** checkboxes, and then select **Generate SAS and connection string**. In the Metrics Advisor workspace, copy and paste the **Table service SAS URL** into the text box.
+
+ ![Screenshot that shows how to generate the shared access signature in Azure Table Storage.](media/azure-table-generate-sas.png)
+
+* **Table Name**: Specify a table to query against. You can find this in your Azure storage account instance. In the **Table Service** section, select **Tables**.
+
+* **Query**: You can use `@IntervalStart` and `@IntervalEnd` in your query to help with getting an expected metrics value in an interval. They should be formatted as follows: `yyyy-MM-ddTHH:mm:ssZ`.
+
+ Sample query:
+
+ ``` mssql
+ PartitionKey ge '@IntervalStart' and PartitionKey lt '@IntervalEnd'
+ ```
+
+ For more information, see the [tutorial on writing a valid query](tutorials/write-a-valid-query.md).
++
+## <span id="influxdb">InfluxDB (InfluxQL)</span>
+
+* **Connection String**: The connection string to access InfluxDB.
+* **Database**: The database to query against.
+* **Query**: A query to get and formulate data into multi-dimensional time series data for ingestion.
+
+ Sample query:
+
+ ``` SQL
+ SELECT [TimestampColumn], [DimensionColumn], [MetricColumn] FROM [TableName] WHERE [TimestampColumn] >= @IntervalStart and [TimestampColumn] < @IntervalEnd
+ ```
+
+For more information, refer to the [tutorial on writing a valid query](tutorials/write-a-valid-query.md).
+
+* **User name**: This is optional for authentication.
+* **Password**: This is optional for authentication.
+
+## <span id="mongodb">MongoDB</span>
+
+* **Connection String**: The connection string to access MongoDB.
+* **Database**: The database to query against.
+* **Query**: A command to get and formulate data into multi-dimensional time series data for ingestion. Verify the command on [db.runCommand()](https://docs.mongodb.com/manual/reference/method/db.runCommand/).
+
+ Sample query:
+
+ ``` MongoDB
+ {"find": "[TableName]","filter": { [Timestamp]: { $gte: ISODate(@IntervalStart) , $lt: ISODate(@IntervalEnd) }},"singleBatch": true}
+ ```
+
+
+## <span id="mysql">MySQL</span>
+
+* **Connection String**: The connection string to access MySQL DB.
+* **Query**: A query to get and formulate data into multi-dimensional time series data for ingestion.
+
+ Sample query:
+
+ ``` SQL
+ SELECT [TimestampColumn], [DimensionColumn], [MetricColumn] FROM [TableName] WHERE [TimestampColumn] >= @IntervalStart and [TimestampColumn]< @IntervalEnd
+ ```
+
+ For more information, refer to the [tutorial on writing a valid query](tutorials/write-a-valid-query.md).
+
+## <span id="pgsql">PostgreSQL</span>
+
+* **Connection String**: The connection string to access PostgreSQL DB.
+* **Query**: A query to get and formulate data into multi-dimensional time series data for ingestion.
+
+ Sample query:
+
+ ``` SQL
+ SELECT [TimestampColumn], [DimensionColumn], [MetricColumn] FROM [TableName] WHERE [TimestampColumn] >= @IntervalStart and [TimestampColumn] < @IntervalEnd
+ ```
+ For more information, refer to the [tutorial on writing a valid query](tutorials/write-a-valid-query.md).
+
+## Next steps
+
+* While you're waiting for your metric data to be ingested into the system, read about [how to manage data feed configurations](how-tos/manage-data-feeds.md).
+* When your metric data is ingested, you can [configure metrics and fine tune detection configuration](how-tos/configure-metrics.md).
ai-services Encryption https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/metrics-advisor/encryption.md
+
+ Title: Metrics Advisor service encryption
+
+description: Metrics Advisor service encryption of data at rest.
++++++ Last updated : 07/02/2021+
+#Customer intent: As a user of the Metrics Advisor service, I want to learn how encryption at rest works.
++
+# Metrics Advisor service encryption of data at rest
+
+Metrics Advisor service automatically encrypts your data when it is persisted to the cloud. The Metrics Advisor service encryption protects your data and helps you to meet your organizational security and compliance commitments.
++
+Metrics Advisor supports customer-managed keys (CMK) and double encryption by using BYOS (bring your own storage).
+
+## Steps to create a Metrics Advisor with BYOS
+
+### Step 1. Create an Azure Database for PostgreSQL and set admin
+
+- Create an Azure Database for PostgreSQL
+
+ Sign in to the Azure portal and create an Azure Database for PostgreSQL resource. A couple of things to note:
+
+ 1. Select the **'Single Server'** deployment option.
+ 2. When choosing 'Datasource', specify **'None'**.
+ 3. For 'Location', make sure to create the resource in the **same location** as the Metrics Advisor resource.
+ 4. 'Version' should be set to **11**.
+ 5. 'Compute + storage' should be a 'Memory Optimized' SKU with at least **32 vCores**.
+
+ ![Create an Azure Database for PostgreSQL](media/cmk-create.png)
+
+- Set Active Directory Admin for newly created PG
+
+ After you successfully create your Azure Database for PostgreSQL, go to the resource page of the newly created resource. Select the 'Active Directory admin' tab and set yourself as the admin.
++
+### Step 2. Create a Metrics Advisor resource and enable Managed Identity
+
+- Create a Metrics Advisor resource in the Azure portal
+
+ Go to the Azure portal again and search for 'Metrics Advisor'. When creating the Metrics Advisor resource, remember the following:
+
+ 1. Choose the **same region** as the Azure Database for PostgreSQL you created.
+ 2. Mark 'Bring your own storage' as **'Yes'** and select the Azure Database for PostgreSQL you just created in the dropdown list.
+
+- Enable the Managed Identity for Metrics Advisor
+
+ After creating the Metrics Advisor resource, select 'Identity' and set 'Status' to **'On'** to enable Managed Identity.
+
+- Get Application ID of Managed Identity
+
+ Go to Azure Active Directory, and select 'Enterprise applications'. Change 'Application type' to **'Managed Identity'**, copy the resource name of your Metrics Advisor resource, and search for it. You can then view the 'Application ID' in the query result; copy it.
+
+### Step 3. Grant Metrics Advisor access permission to your Azure Database for PostgreSQL
+
+- Grant **'Owner'** role for the Managed Identity on your Azure Database for PostgreSQL
+
+- Set firewall rules
+
+ 1. Set 'Allow access to Azure services' to 'Yes'.
+ 2. Add your client IP address so you can sign in to Azure Database for PostgreSQL.
+
+- Get the access token for your account with the resource type 'https://ossrdbms-aad.database.windows.net'. The access token is the password you use to sign in to Azure Database for PostgreSQL with your account. An example using the `az` client:
+
+ ```
+ az login
+ az account get-access-token --resource https://ossrdbms-aad.database.windows.net
+ ```
+
+- After getting the token, use it to log in to your Azure Database for PostgreSQL. Replace 'servername' with the server name shown on the 'Overview' page of your Azure Database for PostgreSQL.
+
+ ```
+ export PGPASSWORD=<access-token>
+ psql -h <servername> -U <adminaccount@servername> -d postgres
+ ```
+
+- After logging in, execute the following commands to grant Metrics Advisor access permission to Azure Database for PostgreSQL. Replace 'appid' with the Application ID you copied in Step 2.
+
+ ```
+ SET aad_validate_oids_in_tenant = off;
+ CREATE ROLE metricsadvisor WITH LOGIN PASSWORD '<appid>' IN ROLE azure_ad_user;
+ ALTER ROLE metricsadvisor CREATEDB;
+ GRANT azure_pg_admin TO metricsadvisor;
+ ```
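+
+The role assignment and firewall rules in this step can also be scripted. A hedged Azure CLI sketch with placeholder names:
+
+```
+# Sketch only: grant the 'Owner' role to the managed identity on the PostgreSQL server
+PG_ID=$(az postgres server show --resource-group <resource-group> --name <pg-server-name> --query id -o tsv)
+az role assignment create --assignee <appid> --role Owner --scope "$PG_ID"
+
+# Allow access from Azure services (the 0.0.0.0 rule) and from your client IP address
+az postgres server firewall-rule create --resource-group <resource-group> --server-name <pg-server-name> \
+  --name AllowAllAzureIps --start-ip-address 0.0.0.0 --end-ip-address 0.0.0.0
+az postgres server firewall-rule create --resource-group <resource-group> --server-name <pg-server-name> \
+  --name AllowClientIp --start-ip-address <your-client-ip> --end-ip-address <your-client-ip>
+```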
+
+By completing all the above steps, you've successfully created a Metrics Advisor resource with CMK support. Wait a couple of minutes until your Metrics Advisor is accessible.
+
+## Next steps
+
+* [Metrics Advisor Service Customer-Managed Key Request Form](https://aka.ms/cogsvc-cmk)
+* [Learn more about Azure Key Vault](../../key-vault/general/overview.md)
ai-services Glossary https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/metrics-advisor/glossary.md
+
+ Title: Metrics Advisor glossary
+
+description: Key ideas and concepts for the Metrics Advisor service
++++++ Last updated : 09/14/2020+++
+# Metrics Advisor glossary of common vocabulary and concepts
+
+This document explains the technical terms used in Metrics Advisor. Use this article to learn about common concepts and objects you might encounter when using the service.
+
+## Data feed
+
+> [!NOTE]
+> Multiple metrics can share the same data source, and even the same data feed.
+
+A data feed is what Metrics Advisor ingests from your data source, such as Azure Cosmos DB or a SQL server. A data feed contains rows of:
+* timestamps
+* zero or more dimensions
+* one or more measures.
+
+## Interval
+Metrics need to be monitored at a certain granularity according to business requirements. For example, business Key Performance Indicators (KPIs) are monitored at daily granularity. However, service performance metrics are often monitored at minute/hourly granularity. So the frequency of collecting metric data from sources differs.
+
+Metrics Advisor continuously grabs metric data at each time interval; **the interval is equal to the granularity of the metrics.** Each time, Metrics Advisor runs the query you have written and ingests data for that specific interval. Based on this data ingestion mechanism, the query script **should not return all metric data that exists in the database, but needs to limit the result to a single interval.**
+
+<!-- ![What is interval](media/tutorial/what-is-interval.png) -->
+
+## Metric
+
+A metric is a quantifiable measure that is used to monitor and assess the status of a specific business process. It can be a combination of multiple time series values divided into dimensions. For example a *web health* metric might contain dimensions for *user count* and the *en-us market*.
+
+## Dimension
+
+A dimension is one or more categorical values. The combination of those values identifies a particular univariate time series, for example: country/region, language, tenant, and so on.
+
+## Multi-dimensional metric
+
+What is a multi-dimension metric? Let's use two examples.
+
+**Revenue of a business**
+
+Suppose you have data for the revenue of your business. Your time series data might look something like this:
+
+| Timestamp | Category | Market | Revenue |
+| -|-|--|-- |
+| 2020-6-1 | Food | US | 1000 |
+| 2020-6-1 | Apparel | US | 2000 |
+| 2020-6-2 | Food | UK | 800 |
+| ... | ... |... | ... |
+
+In this example, *Category* and *Market* are dimensions. *Revenue* is the Key Performance Indicator (KPI) which could be sliced into different categories and/or markets, and could also be aggregated. For example, the revenue of *food* for all markets.
+
+**Error counts for a complex application**
+
+Suppose you have data for the number of errors logged in an application. Your time series data might look something like this:
+
+| Timestamp | Application component | Region | Error count |
+| -|-|--|-- |
+| 2020-6-1 | Employee database | WEST EU | 9000 |
+| 2020-6-1 | Message queue | EAST US | 1000 |
+| 2020-6-2 | Message queue | EAST US | 8000|
+| ... | ... | ... | ...|
+
+In this example, *Application component* and *Region* are dimensions. *Error count* is the KPI, which could be sliced by different application components and/or regions, and could also be aggregated. For example, the error count of *Message queue* in all regions.
+
+## Measure
+
+A measure is a fundamental or unit-specific term and a quantifiable value of the metric.
+
+## Time series
+
+A time series is a series of data points indexed (or listed or graphed) in chronological order. Most commonly, a time series is a sequence taken at successive, equally spaced points in time. It is a sequence of discrete-time data.
+
+In Metrics Advisor, values of one metric on a specific dimension combination are called one series.
+
+## Granularity
+
+Granularity indicates how frequently data points are generated at the data source. For example, daily or hourly.
+
+## Ingest data since (UTC)
+
+Ingest data since (UTC) is the time that you want Metrics Advisor to begin ingesting data from your data source. Your data source must have data at the specified ingestion start time.
+
+## Confidence boundaries
+
+> [!NOTE]
+> Confidence boundaries are not the only measurement used to find anomalies. It's possible for data points outside of this boundary to be flagged as normal by the detection model.
+
+In Metrics Advisor, confidence boundaries represent the sensitivity of the algorithm used, and are used to filter out overly sensitive anomalies. On the web portal, confidence bounds appear as a transparent blue band. All the points within the band are treated as normal points.
+
+Metrics Advisor provides tools to adjust the sensitivity of the algorithms used. See [How to: Configure metrics and fine tune detection configuration](how-tos/configure-metrics.md) for more information.
+
+![Confidence bounds](media/confidence-bounds.png)
++
+## Hook
+
+Metrics Advisor lets you create and subscribe to real-time alerts. These alerts are sent over the internet, [using a hook](how-tos/alerts.md).
+
+## Anomaly incident
+
+After a detection configuration is applied to metrics, incidents are generated whenever any series within it has an anomaly. In large data sets this can be overwhelming, so Metrics Advisor groups series of anomalies within a metric into an incident. The service will also evaluate the severity and provide tools for [diagnosing an incident](how-tos/diagnose-an-incident.md).
+
+### Diagnostic tree
+
+In Metrics Advisor, you can apply anomaly detection on metrics; Metrics Advisor then automatically monitors all time series of all dimension combinations. Whenever there is any anomaly detected, Metrics Advisor aggregates anomalies into incidents.
+After an incident occurs, Metrics Advisor will provide a diagnostic tree with a hierarchy of contributing anomalies, and identify the ones with the biggest impact. Each incident has a root cause anomaly, which is the top node of the tree.
+
+### Anomaly grouping
+
+Metrics Advisor provides the capability to find related time series with similar patterns. It can also provide deeper insights into the impact on other dimensions, and correlate the anomalies.
+
+### Time series comparison
+
+You can pick multiple time series to compare trends in a single visualization. This provides a clear and insightful way to view and compare related series.
+
+## Detection configuration
+
+>[!Note]
+>Detection configurations are only applied within an individual metric.
+
+On the Metrics Advisor web portal, a detection configuration (such as sensitivity, auto snooze, and direction) is listed on the left panel when viewing a metric. Parameters can be tuned and applied to all series within this metric.
+
+A detection configuration is required for every time series, and determines whether a point in the time series is an anomaly. Metrics Advisor will set up a default configuration for the whole metric when you first onboard data.
+
+You can additionally refine the configuration by applying tuning parameters on a group of series, or a specific one. Only one configuration will be applied to a time series:
+* Configurations applied to a specific series will overwrite configurations for a group
+* Configurations for a group will overwrite configurations applied to the whole metric.
+
+Metrics Advisor provides several [detection methods](how-tos/configure-metrics.md#anomaly-detection-methods), and you can combine them using logical operators.
+
+### Smart detection
+
+Anomaly detection using multiple machine learning algorithms.
+
+**Sensitivity**: A numerical value to adjust the tolerance of the anomaly detection. Visually, the higher the value, the narrower the upper and lower boundaries around the time series.
+
+### Hard threshold
+
+Values outside of upper or lower bounds are anomalies.
+
+**Min**: The lower bound
+
+**Max**: The upper bound
+
+### Change threshold
+
+Use the previous point value to determine if this point is an anomaly.
+
+**Change percentage**: Compared to the previous point, the current point is an anomaly if the percentage of change is more than this parameter.
+
+**Change over points**: How many points to look back.
+
+### Common parameters
+
+**Direction**: A point is an anomaly only when the deviation occurs in the direction *up*, *down*, or *both*.
+
+**Not valid anomaly until**: A data point is only an anomaly if a specified percentage of previous points are also anomalies.
+
+## Alert settings
+
+Alert settings determine which anomalies should trigger an alert. You can set multiple alerts with different settings. For example, you could create an alert for anomalies with lower business impact, and another for more important alerts.
+
+You can also create an alert across metrics. For example, an alert that only gets triggered if two specified metrics have anomalies.
+
+### Alert scope
+
+Alert scope refers to the scope that the alert applies to. There are four options:
+
+**Anomalies of all series**: Alerts will be triggered for anomalies in all series within the metric.
+
+**Anomalies in series group**: Alerts will only be triggered for anomalies in specific dimensions of the series group. The number of specified dimensions should be smaller than the total number of dimensions.
+
+**Anomalies in favorite series**: Alerts will only be triggered for anomalies that are added as favorites. You can choose a group of series as a favorite for each detecting config.
+
+**Anomalies in top N of all series**: Alerts will only be triggered for anomalies in the top N series. You can set parameters to specify the number of timestamps to take into account, and how many anomalies must be in them to send the alert.
+
+### Severity
+
+Severity is a grade that Metrics Advisor uses to describe the severity of an incident, including *High*, *Medium*, and *Low*.
+
+Currently, Metrics Advisor uses the following factors to measure the alert severity:
+1. The value proportion and the quantity proportion of anomalies in the metric.
+1. Confidence of anomalies.
+1. Your favorite settings also contribute to the severity.
+
+### Auto snooze
+
+Some anomalies are transient issues, especially for small granularity metrics. You can *snooze* an alert for a specific number of time points. If anomalies are found within that specified number of points, no alert will be triggered. The behavior of auto snooze can be set on either metric level or series level.
+
+## Data feed settings
+
+### Ingestion Time Offset
+
+By default, data is ingested according to the granularity (such as *daily*). By using a positive integer, you can delay ingestion of the data by the specified value. Using a negative number, you can advance the ingestion by the specified value.
+
+### Max Ingestion per Minute
+
+Set this parameter if your data source supports limited concurrency. Otherwise leave the default settings.
+
+### Stop retry after
+
+If data ingestion has failed, Metrics Advisor will retry automatically after a period of time. The beginning of the period is the time when the first data ingestion occurred. The length of the retry period is defined according to the granularity. If you use the default value (`-1`), the retry period will be determined according to the granularity:
+
+| Granularity | Stop Retry After |
+| : | : |
+| Daily, Custom (>= 1 Day), Weekly, Monthly, Yearly | 7 days |
+| Hourly, Custom (< 1 Day) | 72 hours |
+
+### Min retry interval
+
+You can specify the minimum interval when retrying to pull data from the source. If you use the default value (`-1`), the retry interval will be determined according to the granularity:
+
+| Granularity | Minimum Retry Interval |
+| : | : |
+| Daily, Custom (>= 1 Day), Weekly, Monthly | 30 minutes |
+| Hourly, Custom (< 1 Day) | 10 minutes |
+| Yearly | 1 day |
+
+### Grace period
+
+> [!Note]
+> The grace period begins at the regular ingestion time, plus specified ingestion time offset.
+
+A grace period is a period of time where Metrics Advisor will continue fetching data from the data source, but won't fire any alerts. If no data was ingested after the grace period, a *Data feed not available* alert will be triggered.
+
+### Snooze alerts in
+
+When this option is set to zero, each timestamp with *Not Available* will trigger an alert. When set to a value other than zero, the specified number of *Not available* alerts will be snoozed if no data was fetched.
+
+## Data feed permissions
+
+There are two roles to manage data feed permissions: *Administrator*, and *Viewer*.
+
+* An *Administrator* has full control of the data feed and metrics within it. They can activate, pause, delete the data feed, and make updates to feeds and configurations. An *Administrator* is typically the owner of the metrics.
+
+* A *Viewer* is able to view the data feed or metrics, but is not able to make changes.
+
+## Next steps
+- [Metrics Advisor overview](overview.md)
+- [Use the web portal](quickstarts/web-portal.md)
ai-services Alerts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/metrics-advisor/how-tos/alerts.md
+
+ Title: Configure Metrics Advisor alerts
+
+description: How to configure your Metrics Advisor alerts using hooks for email, web and Azure DevOps.
++++++ Last updated : 09/14/2020+++
+# How-to: Configure alerts and get notifications using a hook
+
+After an anomaly is detected by Metrics Advisor, an alert notification will be triggered based on alert settings, using a hook. An alert setting can be used with multiple detection configurations, and various parameters are available to customize your alert rule.
+
+## Create a hook
+
+Metrics Advisor supports four different types of hooks: email, Teams, webhook, and Azure DevOps. You can choose the one that works for your specific scenario.
+
+### Email hook
+
+> [!Note]
+> Metrics Advisor resource administrators need to configure the Email settings, and input **SMTP related information** into Metrics Advisor before anomaly alerts can be sent. The resource group admin or subscription admin needs to assign at least one *Azure AI Metrics Advisor Administrator* role in the Access control tab of the Metrics Advisor resource. [Learn more about e-mail settings configuration](../faq.yml#how-to-set-up-email-settings-and-enable-alerting-by-email-).
++
+An email hook is the channel for anomaly alerts to be sent to email addresses specified in the **Email to** section. Two types of alert emails will be sent: **Data feed not available** alerts, and **Incident reports**, which contain one or multiple anomalies.
+
+To create an email hook, the following parameters are available:
+
+|Parameter |Description |
+|||
+| Name | Name of the email hook |
+| Email to| Email addresses to send alerts to|
+| External link | Optional field, which enables a customized redirect, such as for troubleshooting notes. |
+| Customized anomaly alert title | Title template supports `${severity}`, `${alertSettingName}`, `${datafeedName}`, `${metricName}`, `${detectConfigName}`, `${timestamp}`, `${topDimension}`, `${incidentCount}`, `${anomalyCount}` |
+
+After you select **OK**, an email hook will be created. You can use it in any alert settings to receive anomaly alerts. Refer to the tutorial of [enable anomaly notification in Metrics Advisor](../tutorials/enable-anomaly-notification.md#send-notifications-with-azure-logic-apps-teams-and-smtp) for detailed steps.
+
+### Teams hook
+
+A Teams hook is the channel for anomaly alerts to be sent to a channel in Microsoft Teams. A Teams hook is implemented through an "Incoming webhook" connector. You need to first create an "Incoming webhook" connector in your target Teams channel and copy its URL, then pivot back to your Metrics Advisor workspace.
+
+Select "Hooks" tab in left navigation bar, and select "Create hook" button at top right of the page. Choose hook type of "Teams", following parameters are provided:
+
+|Parameter |Description |
+|||
+| Name | Name of the Teams hook |
+| Connector URL | The URL you just copied from the "Incoming webhook" connector created in the target Teams channel. |
+
+After you select **OK**, a Teams hook will be created. You can use it in any alert settings to send anomaly alerts to the target Teams channel. Refer to the tutorial of [enable anomaly notification in Metrics Advisor](../tutorials/enable-anomaly-notification.md#send-notifications-with-azure-logic-apps-teams-and-smtp) for detailed steps. A quick test of the connector URL is sketched below.
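+
+Before saving the hook, you can optionally verify that the connector URL works by posting a plain test message to it. This is a generic Teams incoming-webhook test rather than a Metrics Advisor call, and the URL is a placeholder:
+
+```
+# Sketch only: send a test message to the Teams "Incoming webhook" connector
+curl -H "Content-Type: application/json" \
+  -d '{"text": "Test message for the Metrics Advisor Teams hook"}' \
+  "<incoming-webhook-connector-url>"
+```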
+
+### Web hook
+
+A web hook is another notification channel that uses an endpoint provided by the customer. Any anomaly detected on the time series will be notified through the web hook. There are several steps to enable a web hook as an alert notification channel within Metrics Advisor.
+
+**Step 1.** Enable Managed Identity in your Metrics Advisor resource
+
+A system assigned managed identity is restricted to one per resource and is tied to the lifecycle of this resource. You can grant permissions to the managed identity by using Azure role-based access control (Azure RBAC). The managed identity is authenticated with Azure AD, so you don't have to store any credentials in code.
+
+Go to your Metrics Advisor resource in the Azure portal, select "Identity", and turn the status to "On" to enable Managed Identity.
+
+**Step 2.** Create a web hook in the Metrics Advisor workspace
+
+Log in to your workspace and select the "Hooks" tab, then select the "Create hook" button.
++
+To create a web hook, you will need to add the following information:
+
+|Parameter |Description |
+|||
+|Endpoint | The API address to be called when an alert is triggered. **MUST be Https**. |
+|Username / Password | For authenticating to the API address. Leave this blank if authentication isn't needed. |
+|Header | Custom headers in the API call. |
+|Certificate identifier in Azure Key Vault| If access to the endpoint needs to be authenticated by a certificate, store the certificate in Azure Key Vault and input its identifier here. |
+
+> [!Note]
+> When a web hook is created or modified, the endpoint will be called as a test with **an empty request body**. Your API needs to return a 200 HTTP code to successfully pass the validation.
++
+- Request method is **POST**
+- Timeout 30s
+- Retries on 5xx errors, ignores other errors. 301/302 redirect requests won't be followed.
+- Request body:
+```
+{
+"value": [{
+ "hookId": "b0f27e91-28cf-4aa2-aa66-ac0275df14dd",
+ "alertType": "Anomaly",
+ "alertInfo": {
+ "anomalyAlertingConfigurationId": "1bc6052e-9a2a-430b-9cbd-80cd07a78c64",
+ "alertId": "172536dbc00",
+ "timestamp": "2020-05-27T00:00:00Z",
+ "createdTime": "2020-05-29T10:04:45.590Z",
+ "modifiedTime": "2020-05-29T10:04:45.590Z"
+ },
+ "callBackUrl": "https://kensho2-api.azurewebsites.net/alert/anomaly/configurations/1bc6052e-9a2a-430b-9cbd-80cd07a78c64/alerts/172536dbc00/incidents"
+}]
+}
+```
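+
+To check that your endpoint will pass the validation call described above (an empty request body that must return HTTP 200), you can simulate it with a plain POST. The endpoint URL is a placeholder for your own API:
+
+```
+# Sketch only: simulate the validation call Metrics Advisor makes when the hook is created or modified
+curl -i -X POST "https://<your-webhook-endpoint>" \
+  -H "Content-Type: application/json" \
+  -d '{}'
+# Expect an HTTP 200 response; otherwise the hook validation will fail
+```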
+
+**Step 3 (optional).** Store your certificate in Azure Key Vault and get its identifier
+As mentioned, if access to the endpoint needs to be authenticated by a certificate, the certificate should be stored in Azure Key Vault.
+
+- Check [Set and retrieve a certificate from Azure Key Vault using the Azure portal](../../../key-vault/certificates/quick-create-portal.md)
+- Select the certificate you've added; you can then copy the "Certificate identifier".
+- Then select "Access policies" and "Add access policy". Grant "get" permission for "Key permissions", "Secret permissions", and "Certificate permissions". Select the principal as the name of your Metrics Advisor resource. Select the "Add" and "Save" buttons on the "Access policies" page. (A CLI sketch follows this list.)
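+
+The same access policy can be granted with the Azure CLI. A sketch, assuming the managed identity from Step 1 is enabled and using placeholder names:
+
+```
+# Sketch only: grant 'get' on keys, secrets, and certificates to the Metrics Advisor managed identity
+PRINCIPAL_ID=$(az cognitiveservices account show \
+  --name <metrics-advisor-name> --resource-group <resource-group> \
+  --query identity.principalId -o tsv)
+az keyvault set-policy --name <key-vault-name> \
+  --object-id "$PRINCIPAL_ID" \
+  --key-permissions get --secret-permissions get --certificate-permissions get
+```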
+
+**Step 4.** Receive anomaly notification
+When a notification is pushed through a web hook, you can fetch incident data by calling the "callBackUrl" in the webhook request. Details for this API:
+
+- [/alert/anomaly/configurations/{configurationId}/alerts/{alertId}/incidents](https://westus2.dev.cognitive.microsoft.com/docs/services/MetricsAdvisor/operations/getIncidentsFromAlertByAnomalyAlertingConfiguration)
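+
+As a sketch, fetching the incident data can look like the following call. The callBackUrl comes from the webhook request body, and the authentication headers shown (subscription key and API key) are assumptions to confirm against the API reference linked above:
+
+```
+# Sketch only: fetch incidents for the alert using the callBackUrl from the webhook request body
+curl -s "<callBackUrl-from-webhook-payload>" \
+  -H "Ocp-Apim-Subscription-Key: <metrics-advisor-subscription-key>" \
+  -H "x-api-key: <metrics-advisor-api-key>"
+```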
+
+By using a web hook and Azure Logic Apps, it's possible to send email notifications **without an SMTP server configured**. Refer to the tutorial of [enable anomaly notification in Metrics Advisor](../tutorials/enable-anomaly-notification.md#send-notifications-with-azure-logic-apps-teams-and-smtp) for detailed steps.
+
+### Azure DevOps
+
+Metrics Advisor also supports automatically creating a work item in Azure DevOps to track issues/bugs when any anomaly is detected. All alerts can be sent through Azure DevOps hooks.
+
+To create an Azure DevOps hook, you will need to add the following information:
+
+|Parameter |Description |
+|||
+| Name | A name for the hook |
+| Organization | The organization that your DevOps belongs to |
+| Project | The specific project in DevOps. |
+| Access Token | A token for authenticating to DevOps. |
+
+> [!Note]
+> You need to grant write permissions if you want Metrics Advisor to create work items based on anomaly alerts.
+> After creating hooks, you can use them in any of your alert settings. Manage your hooks in the **hook settings** page.
+
+## Add or edit alert settings
+
+Go to the metrics detail page and find the **Alert settings** section in the bottom-left corner. It lists all alert settings that apply to the selected detection configuration. When a new detection configuration is created, there's no alert setting, and no alerts will be sent.
+You can use the **add**, **edit** and **delete** icons to modify alert settings.
++
+Select the **add** or **edit** buttons to get a window to add or edit your alert settings.
++
+**Alert setting name**: The name of the alert setting. It will be displayed in the alert email title.
+
+**Hooks**: The list of hooks to send alerts to.
+
+The section marked in the screenshot above shows the settings for one detection configuration. You can set different alert settings for different detection configurations. Choose the target configuration using the third drop-down list in this window.
+
+### Filter settings
+
+The following are filter settings for one detection configuration.
+
+**Alert For** has four options for filtering anomalies:
+
+* **Anomalies in all series**: All anomalies will be included in the alert.
+* **Anomalies in the series group**: Filter series by dimension values. Set specific values for some dimensions. Anomalies will only be included in the alert when the series matches the specified value.
+* **Anomalies in favorite series**: Only the series marked as favorite will be included in the alert.
+* **Anomalies in top N of all series**: This filter is for cases where you only care about series whose values are in the top N. Metrics Advisor will look back over previous timestamps, and check if values of the series at these timestamps are in the top N. If the "in top N" count is larger than the specified number, the anomaly will be included in an alert.
+
+**Filter anomaly options** provide an extra filter with the following options:
+
+- **Severity**: The anomaly will only be included when the anomaly severity is within the specified range.
+- **Snooze**: Stop alerts temporarily for anomalies in the next N points (period), when triggered in an alert.
+ - **snooze type**: When set to **Series**, a triggered anomaly will only snooze its series. For **Metric**, one triggered anomaly will snooze all the series in this metric.
+ - **snooze number**: the number of points (period) to snooze.
+ - **reset for non-successive**: When selected, a triggered anomaly will only snooze the next n successive anomalies. If one of the following data points isn't an anomaly, the snooze will be reset from that point. When unselected, one triggered anomaly will snooze the next n points (period), even if the following data points aren't anomalies.
+- **value** (optional): Filter by value. Only anomalies whose point values meet the condition will be included. If you use the corresponding value of another metric, the dimension names of the two metrics should be consistent.
+
+Anomalies not filtered out will be sent in an alert.
+
+### Add cross-metric settings
+
+Select **+ Add cross-metric settings** in the alert settings page to add another section.
+
+The **Operator** selector sets the logical relationship between the sections, and determines whether they send an alert.
++
+|Operator |Description |
+|||
+|AND | Only send an alert if a series matches each alert section, and all data points are anomalies. If the metrics have different dimension names, an alert will never be triggered. |
+|OR | Send the alert if at least one section contains anomalies. |
++
+## Next steps
+
+- [Adjust anomaly detection using feedback](anomaly-feedback.md)
+- [Diagnose an incident](diagnose-an-incident.md).
+- [Configure metrics and fine tune detection configuration](configure-metrics.md)
ai-services Anomaly Feedback https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/metrics-advisor/how-tos/anomaly-feedback.md
+
+ Title: Provide anomaly feedback to the Metrics Advisor service
+
+description: Learn how to send feedback on anomalies found by your Metrics Advisor instance, and tune the results.
+++++ Last updated : 11/24/2020+++
+# Provide anomaly feedback
+
+User feedback is one of the most important methods to discover defects within the anomaly detection system. Here we provide a way for users to mark incorrect detection results directly on a time series, and apply the feedback immediately. In this way, a user can teach the anomaly detection system how to do anomaly detection for a specific time series through active interactions.
+
+> [!NOTE]
+> Currently feedback will only affect anomaly detection results by **Smart detection** but not **Hard threshold** and **Change threshold**.
+
+## How to give time series feedback
+
+You can provide feedback from the metric detail page on any series. Just select any point, and you will see the feedback dialog below. It shows you the dimensions of the series you've chosen. You can reselect dimension values, or even remove some of them to get a batch of time series data. After choosing a time series, select the **Add** button to add the feedback; there are four kinds of feedback you can give. To append multiple feedback items, select the **Save** button once you complete your annotations.
+++
+### Mark the anomaly point type
+
+As shown in the image below, the feedback dialog will fill the timestamp of your chosen point automatically, though you can edit this value. You then select whether you want to identify this item as an `Anomaly`, `NotAnomaly`, or `AutoDetect`.
++
+The selection will apply your feedback to the future anomaly detection processing of the same series. The processed points will not be recalculated. That means if you marked an `Anomaly` as `NotAnomaly`, we will suppress similar anomalies in the future, and if you marked a `NotAnomaly` point as `Anomaly`, we will tend to detect similar points as `Anomaly` in the future. If `AutoDetect` is chosen, any previous feedback on the same point will be ignored in the future.
+
+## Provide feedback for multiple continuous points
+
+If you would like to give anomaly feedback for multiple continuous points at the same time, select the group of points you want to annotate. You will see the chosen time-range automatically filled when you provide anomaly feedback.
++
+To view if an individual point is affected by your anomaly feedback, when browsing a time series, select a single point. If its anomaly detection result has been changed by feedback, the tooltip will show **Affected by feedback: true**. If it shows **Affected by feedback: false**, this means an anomaly feedback calculation was performed for this point, but the anomaly detection result should not be changed.
++
+There are some situations where we do not suggest giving feedback:
+
+- The anomaly is caused by a holiday. It's suggested to use a preset event to solve this kind of false alarm, as it will be more precise.
+- The anomaly is caused by a known data source change. For example, an upstream system change happened at that time. In this situation, an anomaly alert is expected, since our system didn't know what caused the value change or when similar value changes will happen again. Thus, we don't suggest annotating this kind of issue as `NotAnomaly`.
+
+## Change points
+
+Sometimes the trend change of data will affect anomaly detection results. When a decision is made as to whether a point is an anomaly or not, the latest window of history data will be taken into consideration. When your time series has a trend change, you can mark the exact change point; this will help our anomaly detector in future analysis.
+
+As the figure below shows, you could select `ChangePoint` for the feedback Type, and select `ChangePoint`, `NotChangePoint`, or `AutoDetect` from the pull-down list.
++
+> [!NOTE]
+> If your data keeps changing, you will only need to mark one point as a `ChangePoint`, so if you mark a time range, we will fill the last point's timestamp automatically. In this case, your annotation will only affect anomaly detection results after 12 points.
+
+## Seasonality
+
+For seasonal data, when we perform anomaly detection, one step is to estimate the period (seasonality) of the time series, and apply it to the anomaly detection phase. Sometimes, it's hard to identify a precise period, and the period may also change. An incorrectly defined period may have side effects on your anomaly detection results. You can find the current period in a tooltip named `Min Period`. `Window` is a recommended window size to detect the last point within the window. `Window` usually is \(`Min Period` &times; 4 \) + 1.
++
+You can provide feedback for period to fix this kind of anomaly detection error. As the figure shows, you can set a period value. The unit `interval` means one granularity. Here zero intervals means the data is non-seasonal. You could also select `AutoDetect` if you want to cancel previous feedback and let the pipeline detect period automatically.
+
+> [!NOTE]
+> When setting a period, you don't need to assign a timestamp or time range; the period will affect future anomaly detection on the whole time series from the moment you give feedback.
+++
+## Provide comment feedback
+
+You can also add comments to annotate and provide context to your data. To add comments, select a time range and add the text for your comment.
++
+## Time series batch feedback
+
+As previously described, the feedback modal allows you to reselect or remove dimension values, to get a batch of time series defined by a dimension filter. You can also open this modal by clicking the "+" button for Feedback from the left panel, and select dimensions and dimension values.
+++
+## How to view feedback history
+
+There are two ways to view feedback history. You can select the feedback history button from the left panel, and will see a feedback list modal. It lists all the feedback you've given before either for single series or dimension filters.
++
+Another way to view feedback history is from a series. You will see several buttons on the upper right corner of each series. Select the show feedback button, and the line will switch from showing anomaly points to showing feedback entries. The green flag represents a change point, and the blue points are other feedback points. You could also select them, and will get a feedback list modal that lists the details of the feedback given for this point.
+++
+> [!NOTE]
+> Anyone who has access to the metric is permitted to give feedback, so you may see feedback given by other datafeed owners. If you edit the same point as someone else, your feedback will overwrite the previous feedback entry.
+
+## Next steps
+- [Diagnose an incident](diagnose-an-incident.md).
+- [Configure metrics and fine tune detection configuration](configure-metrics.md)
+- [Configure alerts and get notifications using a hook](../how-tos/alerts.md)
ai-services Configure Metrics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/metrics-advisor/how-tos/configure-metrics.md
+
+ Title: Configure your Metrics Advisor instance using the web portal
+
+description: How to configure your Metrics Advisor instance and fine-tune the anomaly detection results.
+++++ Last updated : 05/12/2022+++
+# Configure metrics and fine tune detection configuration
+
+Use this article to start configuring your Metrics Advisor instance using the web portal and fine-tune the anomaly detection results.
+
+## Metrics
+
+To browse the metrics for a specific data feed, go to the **Data feeds** page and select one of the feeds. This will display a list of metrics associated with it.
++
+Select one of the metric names to see its details. In this view, you can switch to another metric in the same data feed using the drop-down list in the top right corner of the screen.
+
+When you first view a metric's details, you can load a time series by letting Metrics Advisor choose one for you, or by specifying values to be included for each dimension.
+
+You can also select time ranges, and change the layout of the page.
+
+> [!NOTE]
+> - The start time is inclusive.
+> - The end time is exclusive.
+
+You can select the **Incidents** tab to view anomalies, and find a link to the [Incident hub](diagnose-an-incident.md).
+
+## Tune the detection configuration
+
+A metric can apply one or more detection configurations. There's a default configuration for each metric, which you can edit or add to, according to your monitoring needs.
+
+### Detection configuration auto-tuning based on anomaly preference
+
+Detection configuration auto-tuning is a new feature released in Metrics Advisor, to help address the following scenarios:
+
+- Depending on the use case, certain types of anomalies may be of more significant interest. Sometimes you may be interested in sudden spikes or dips, but in other cases, spikes/dips or transient anomalies aren't critical. Previously it was hard to distinguish the configuration for different anomaly types. The new auto-tuning feature makes this distinction between anomaly types possible. As of now, there are five supported anomaly patterns:
+ * Spike
+ * Dip
+ * Increase
+ * Decrease
+ * Steady
+
+- Sometimes, there may be many dimensions within one metric, which will split the metric into hundreds, thousands, or even more time series to be monitored. However, often some of these dimensions aren't equally important. Take revenue as an example: the numbers for small regions or a niche product category might be quite small and therefore not that stable, but at the same time not necessarily critical. The new auto-tuning feature has made it possible to fine tune configuration based on series value range.
+
+This reduces the effort of repeatedly fine tuning your configuration, and also reduces alert noise.
+
+> [!NOTE]
+> The auto-tuning feature is only applied on the 'Smart detection' method.
+
+#### Prerequisite for triggering auto-tuning
+
+After the metrics are onboarded to Metrics Advisor, the system will try to perform statistics on the metrics to categorize **anomaly pattern** types and **series value** distribution. By providing this functionality, you can further fine tune the configuration based on your specific preferences. At the beginning, it will show a status of **Initializing**.
++
+#### Choose to enable auto-tuning on anomaly pattern and series value
+
+The feature enables you to tune the detection configuration from two perspectives: **anomaly pattern** and **series value**. Based on your specific use case, you can choose which one to enable, or enable both.
+
+- For the **anomaly pattern** option, the system will list out different anomaly patterns that were observed with the metric. You can choose which ones you're interested in and select them; the unselected patterns will have their sensitivity **reduced** by default.
+
+- For the **series value** option, your selection will depend on your specific use case. You'll have to decide if you want to use a higher sensitivity for series with higher values, and decrease sensitivity on low value ones, or vice versa. Then check the checkbox.
++
+#### Tune the configuration for selected anomaly patterns
+
+If specific anomaly patterns are chosen, the next step is to fine tune the configuration for each. There's a global **sensitivity** that is applied for all series. For each anomaly pattern, you can tune the **adjustment**, which is based on the global **sensitivity**.
+
+You must tune each anomaly pattern that has been chosen individually.
++
+#### Tune the configuration for each series value group
+
+After the system generates statistics on all time series within the metric, several series value groups are created automatically. As described above, you can fine tune the **adjustment** for each series value group according to your specific business needs.
+
+There will be a default adjustment configured to get the best detection results, but it can be further tuned.
++
+#### Set up alert rules
+
+Even once the detection configuration on capturing valid anomalies is tuned, it's still important to input **alert rules** to make sure the final alert rules can meet eventual business needs. There are a number of rules that can be set, like **filter rules** or **snooze continuous alert rules**.
++
+After configuring all the settings described in the section above, the system will orchestrate them together and automatically detect anomalies based on your inputted preferences. The goal is to get the best configuration that works for each metric, which can be achieved much easier through use of the new **auto-tuning** capability.
+
+### Tune the configuration for all series in current metric
+
+This configuration will be applied to all the series in this metric, except for ones with a separate configuration. A metric-level configuration is applied by default when data is onboarded, and is shown on the left panel. Users can directly edit the metric-level configuration on the metric page.
+
+There are additional parameters like **Direction**, and **Valid anomaly** that can be used to further tune the configuration. You can combine different detection methods as well.
++
+### Tune the configuration for a specific series or group
+
+Select **Advanced configuration** below the metric level configuration options to see the group level configuration. You can add a configuration for an individual series, or a group of series, by clicking the **+** icon in this window. The parameters are similar to the metric-level configuration parameters, but you may need to specify at least one dimension value for a group-level configuration to identify a group of series. And specify all dimension values for a series-level configuration to identify a specific series.
+
+This configuration will be applied to the group of series or specific series instead of the metric level configuration. After setting the conditions for this group, save it.
++
+### Anomaly detection methods
+
+Metrics Advisor offers multiple anomaly detection methods: **Hard threshold, Smart detection, Change threshold**. You can use one or combine them using logical operators by clicking the **'+'** button.
+
+**Hard threshold**
+
+ Hard threshold is a basic method for anomaly detection. You can set an upper and/or lower bound to determine the expected value range. Any point that falls outside the boundary will be identified as an anomaly.
+
+**Smart detection**
+
+Smart detection is powered by machine learning that learns patterns from historical data, and uses them for future detection. When using this method, the **Sensitivity** is the most important parameter for tuning the detection results. You can drag it to a smaller or larger value to affect the visualization on the right side of the page. Choose one that fits your data and save it.
++
+In smart detection mode, the sensitivity and boundary version parameters are used to fine-tune the anomaly detection result.
+
+Sensitivity can affect the width of the expected value range of each point. When increased, the expected value range will be tighter, and more anomalies will be reported:
++
+When the sensitivity is turned down, the expected value range will be wider, and fewer anomalies will be reported:
++
+**Change threshold**
+
+Change threshold is normally used when metric data generally stays around a certain range. The threshold is set according to **Change percentage**. The **Change threshold** mode is able to detect anomalies in the following scenarios:
+
+* Your data is normally stable and smooth. You want to be notified when there are fluctuations.
+* Your data is normally unstable and fluctuates a lot. You want to be notified when it becomes too stable or flat.
+
+Use the following steps to use this mode:
+
+1. Select **Change threshold** as your anomaly detection method when you set the anomaly detection configurations for your metrics or time series.
+
+ :::image type="content" source="../media/metrics/change-threshold.png" alt-text="change threshold":::
+
+2. Select the **out of the range** or **in the range** parameter based on your scenario.
+
+ If you want to detect fluctuations, select **out of the range**. For example, with the settings below, any data point that changes over 10% compared to the previous one will be detected as an outlier.
+ :::image type="content" source="../media/metrics/out-of-the-range.png" alt-text="out of range parameter":::
+
+ If you want to detect flat lines in your data, select **in the range**. For example, with the settings below, any data point that changes within 0.01% compared to the previous one will be detected as an outlier. Because the threshold is so small (0.01%), it detects flat lines in the data as outliers.
+
+ :::image type="content" source="../media/metrics/in-the-range.png" alt-text="In range parameter":::
+
+3. Set the percentage of change that will count as an anomaly, and which previously captured data points will be used for comparison. This comparison is always between the current data point, and a single data point N points before it.
+
+ **Direction** is only valid if you're using the **out of the range** mode:
+
+ * **Up** configures detection to only detect anomalies when (current data point) - (comparison data point) > **+** threshold percentage.
+ * **Down** configures detection to only detect anomalies when (current data point) - (comparison data point) < **-** threshold percentage.
+++
+## Preset events
+
+Sometimes, expected events and occurrences (such as holidays) can generate anomalous data. Using preset events, you can add flags to the anomaly detection output, during specified times. This feature should be configured after your data feed is onboarded. Each metric can only have one preset event configuration.
+
+> [!Note]
+> Preset event configuration will take holidays into consideration during anomaly detection, and may change your results. It will be applied to the data points ingested after you save the configuration.
+
+Select the **Configure Preset Event** button next to the metrics drop-down list on each metric details page.
++
+In the window that appears, configure the options according to your usage. Make sure **Enable holiday event** is selected to use the configuration.
+
+The **Holiday event** section helps you suppress unnecessary anomalies detected during holidays. There are two options for the **Strategy** option that you can apply:
+
+* **Suppress holiday**: Suppresses all anomalies and alerts in anomaly detection results during holiday period.
+* **Holiday as weekend**: Calculates the average expected values of several corresponding weekends before the holiday, and bases the anomaly status off of these values.
+
+There are several other values you can configure:
+
+|Option |Description |
+|||
+|**Choose one dimension as country** | Choose a dimension that contains country information. For example, a country code. |
+|**Country code mapping** | The mapping between a standard [country code](https://wikipedia.org/wiki/ISO_3166-1_alpha-2), and chosen dimension's country data. |
+|**Holiday options** | Whether to take into account all holidays, only PTO (Paid Time Off) holidays, or only Non-PTO holidays. |
+|**Days to expand** | The impacted days before and after a holiday. |
+
+The **Cycle event** section can be used in some scenarios to help reduce unnecessary alerts by using cyclic patterns in the data. For example:
+
+- Metrics that have multiple patterns or cycles, such as both a weekly and monthly pattern.
+- Metrics that don't have a clear pattern, but the data is comparable Year over Year (YoY), Month over Month (MoM), Week Over Week (WoW), or Day Over Day (DoD).
+
+Not all options are selectable for every granularity. The available options per granularity are below (✔ for available, X for unavailable):
+
+| Granularity | YoY | MoM | WoW | DoD |
+|:-|:-|:-|:-|:-|
+| Yearly | X | X | X | X |
+| Monthly | X | X | X | X |
+| Weekly | ✔ | X | X | X |
+| Daily | ✔ | ✔ | ✔ | X |
+| Hourly | ✔ | ✔ | ✔ | ✔ |
+| Minutely | X | X | X | X |
+| Secondly | X | X | X | X |
+| Custom* | ✔ | ✔ | ✔ | ✔ |
+
+When using a custom granularity in seconds, these options are only available if the granularity is longer than one hour and less than one day.
+
+Cycle event is used to reduce anomalies if they follow a cyclic pattern, but it will report an anomaly if multiple data points don't follow the pattern. **Strict mode** is used to enable anomaly reporting if even one data point doesn't follow the pattern.
++
+## View recent incidents
+
+Metrics Advisor detects anomalies on all your time series data as they're ingested. However, not all anomalies need to be escalated, because they might not have a significant impact. Aggregation will be performed on anomalies to group related ones into incidents. You can view these incidents from the **Incident** tab in metrics details page.
+
+Select an incident to go to the **Incidents analysis** page where you can see more details about it. Select **Manage incidents in new Incident hub**, to find the [Incident hub](diagnose-an-incident.md) page where you can find all incidents under the specific metric.
+
+## Subscribe anomalies for notification
+
+If you'd like to get notified whenever an anomaly is detected, you can subscribe to alerts for the metric using a hook. For more information, see [configure alerts and get notifications using a hook](alerts.md).
+
+## Next steps
+- [Configure alerts and get notifications using a hook](alerts.md)
+- [Adjust anomaly detection using feedback](anomaly-feedback.md)
+- [Diagnose an incident](diagnose-an-incident.md).
+
ai-services Credential Entity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/metrics-advisor/how-tos/credential-entity.md
+
+ Title: Create a credential entity
+
+description: How to create a credential entity to manage your credentials securely.
+++++ Last updated : 06/22/2021+++
+# How-to: Create a credential entity
+
+When onboarding a data feed, you should select an authentication type. Some authentication types, like *Azure SQL Connection String* and *Service Principal*, need a credential entity to store credential-related information so that your credentials are managed securely. This article explains how to create a credential entity for different credential types in Metrics Advisor.
+
+
+## Basic procedure: Create a credential entity
+
+You can create a **credential entity** to store credential-related information, and use it for authenticating to your data sources. You can share the credential entity with others and enable them to connect to your data sources without sharing the real credentials. It can be created in the 'Adding data feed' tab or the 'Credential entity' tab. After creating a credential entity for a specific authentication type, you can simply choose that credential entity when adding a new data feed, which is helpful when creating multiple data feeds. The general procedure of creating and using a credential entity is shown below:
+
+1. Select '+' to create a new credential entity in the 'Adding data feed' tab (you can also create one in the 'Credential entity' tab).
+
+ ![create credential entity](../media/create-credential-entity.png)
+
+2. Set the credential entity name, description (if needed), credential type (equivalent to the *authentication type*), and other settings.
+
+ ![set credential entity](../media/set-credential-entity.png)
+
+3. After creating a credential entity, you can choose it when specifying authentication type.
+
+ ![choose credential entity](../media/choose-credential-entity.png)
+
+There are **four credential types** in Metrics Advisor: Azure SQL Connection String, Azure Data Lake Storage Gen2 Shared Key Entity, Service Principal, and Service Principal from Key Vault. For the settings of each credential type, see the following instructions.
+
+## Azure SQL Connection String
+
+You should set the **Name** and **Connection String**, then select 'create'.
+
+![set credential entity for sql connection string](../media/credential-entity/credential-entity-sql-connection-string.png)
+
+## Azure Data Lake Storage Gen2 Shared Key Entity
+
+You should set the **Name** and **Account Key**, then select 'create'. The account key can be found in the **Access keys** setting of your Azure Storage Account (Azure Data Lake Storage Gen2) resource.
+
+![set credential entity for data lake](../media/credential-entity/credential-entity-data-lake.png)
+
+## Service principal
+
+To create a service principal for your data source, you can follow the detailed instructions in [Connect different data sources](../data-feeds-from-different-sources.md). After creating a service principal, you need to fill in the following configurations in the credential entity.
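+
+As a quick sketch, a service principal can also be created from the Azure CLI. The display name below is a placeholder; the output contains the `appId` (Client ID), `password` (Client Secret), and `tenant` (Tenant ID) values used in the configuration below:
+
+```
+# Sketch only: create a service principal and note the appId, password, and tenant from the output
+az ad sp create-for-rbac --name "<metrics-advisor-datafeed-sp>"
+```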
+
+![sp credential entity](../media/credential-entity/credential-entity-service-principal.png)
+
+* **Name:** Set a name for your service principal credential entity.
+* **Tenant ID & Client ID:** After creating a service principal in Azure portal, you can find `Tenant ID` and `Client ID` in **Overview**.
+
+ ![sp client ID and tenant ID](../media/credential-entity/sp-client-tenant-id.png)
+
+* **Client Secret:** After creating a service principal in the Azure portal, you should go to **Certificates & Secrets** to create a new client secret, and the **value** should be used as the `Client Secret` in the credential entity. (Note: The value only appears once, so it's better to store it somewhere safe.)
++
+ ![sp Client secret value](../media/credential-entity/sp-secret-value.png)
+
+## <span id="sp-from-kv">Service principal from Key Vault</span>
+
+There are several steps to create a service principal from key vault.
+
+**Step 1. Create a Service Principal and grant it access to your database.** You can follow the detailed instructions in [Connect different data sources](../data-feeds-from-different-sources.md), in the section on creating a service principal for each data source.
+
+After creating a service principal in Azure portal, you can find `Tenant ID` and `Client ID` in **Overview**. The **Directory (tenant) ID** should be `Tenant ID` in credential entity configurations.
+
+![sp client ID and tenant ID](../media/credential-entity/sp-client-tenant-id.png)
+
+**Step 2. Create a new client secret.** You should go to **Certificates & Secrets** to create a new client secret, and the **value** will be used in next steps. (Note: The value only appears once, so it's better to store it somewhere.)
+
+![sp Client secret value](../media/credential-entity/sp-secret-value.png)
+
+**Step 3. Create a key vault.** In [Azure portal](https://portal.azure.com/#home), select **Key vaults** to create one.
+
+![create a key vault in azure portal](../media/credential-entity/create-key-vault.png)
+
+After creating a key vault, the **Vault URI** is the `Key Vault Endpoint` in MA (Metrics Advisor) credential entity.
+
+![key vault endpoint](../media/credential-entity/key-vault-endpoint.png)
+
+**Step 4. Create secrets for Key Vault.** In Azure portal for key vault, generate two secrets in **Settings->Secrets**.
+The first is for the `Service Principal Client Id`, the other is for the `Service Principal Client Secret`; both of their names will be used in the credential entity configuration. (A CLI sketch for creating the secrets follows this list.)
+
+![generate secrets](../media/credential-entity/generate-secrets.png)
+
+* **Service Principal Client ID:** Set a `Name` for this secret; the name will be used in the credential entity configuration, and the value should be your Service Principal `Client ID` from **Step 1**.
+
+ ![secret1: sp client id](../media/credential-entity/secret-1-sp-client-id.png)
+
+* **Service Principal Client Secret:** Set a `Name` for this secret; the name will be used in the credential entity configuration, and the value should be your Service Principal `Client Secret Value` from **Step 2**.
+
+ ![secret2: sp client secret](../media/credential-entity/secret-2-sp-secret-value.png)
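+
+The two secrets can also be created with the Azure CLI. A sketch with placeholder names; use whatever secret names you prefer and note them for the credential entity configuration:
+
+```
+# Sketch only: store the first service principal's client ID and client secret as Key Vault secrets
+az keyvault secret set --vault-name <key-vault-name> \
+  --name <sp-client-id-secret-name> --value "<service-principal-client-id>"
+az keyvault secret set --vault-name <key-vault-name> \
+  --name <sp-client-secret-secret-name> --value "<service-principal-client-secret-value>"
+```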
+
+At this point, the *client ID* and *client secret* of the service principal are stored in Key Vault. Next, you need to create another service principal that is used to access the key vault. Therefore, you should **create two service principals**: one whose client ID and client secret are stored in a key vault, and another that is used to access that key vault.
+
+**Step 5. Create a service principal to access the key vault.**
+
+1. Go to [Azure portal AAD (Azure Active Directory)](https://portal.azure.com/?trace=diagnostics&feature.customportal=false#blade/Microsoft_AAD_IAM/ActiveDirectoryMenuBlade/Overview) and create a new registration.
+
+ ![create a new registration](../media/credential-entity/create-registration.png)
+
+ After creating the service principal, the **Application (client) ID** in Overview will be the `Key Vault Client ID` in credential entity configuration.
+
+2. In **Manage->Certificates & Secrets**, create a client secret by selecting 'New client secret'. Then you should **copy down the value**, because it appears only once. The value is `Key Vault Client Secret` in credential entity configuration.
+
+ ![add client secret](../media/credential-entity/add-client-secret.png)
+
+**Step 6. Grant the service principal access to Key Vault.** Go to the key vault resource you created. In **Settings->Access policies**, select 'Add Access Policy' to connect the key vault with the second service principal created in **Step 5**, and then select 'Save'.
+
+![grant sp to key vault](../media/credential-entity/grant-sp-to-kv.png)
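+
+This access policy can also be granted from the CLI. A sketch using the Application (client) ID of the second service principal from Step 5:
+
+```
+# Sketch only: allow the second service principal to read secrets from the key vault
+az keyvault set-policy --name <key-vault-name> \
+  --spn <second-service-principal-client-id> \
+  --secret-permissions get
+```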
++
+## Configurations conclusion
+To conclude, the credential entity configurations in Metrics Advisor for *Service Principal from Key Vault*, and the way to get them, are shown in the table below:
+
+| Configuration | How to get |
+|-| |
+| Key Vault Endpoint | **Step 3:** Vault URI of key vault. |
+| Tenant ID | **Step 1:** Directory (tenant) ID of your first service principal. |
+| Key Vault Client ID | **Step 5:** The Application (client) ID of your second service principal. |
+| Key Vault Client Secret | **Step 5:** The client secret value of your second service principal. |
+| Service Principal Client ID Name | **Step 4:** The secret name you set for Client ID. |
+| Service Principal Client Secret Name | **Step 4:** The secret name you set for Client Secret Value. |
++
+## Next steps
+
+- [Onboard your data](onboard-your-data.md)
+- [Connect different data sources](../data-feeds-from-different-sources.md)
ai-services Diagnose An Incident https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/metrics-advisor/how-tos/diagnose-an-incident.md
+
+ Title: Diagnose an incident using Metrics Advisor
+
+description: Learn how to diagnose an incident using Metrics Advisor, and get detailed views of anomalies in your data.
+++++ Last updated : 04/15/2021+++
+# Diagnose an incident using Metrics Advisor
+
+## What is an incident?
+
+When anomalies are detected on multiple time series within one metric at a particular timestamp, Metrics Advisor automatically groups the anomalies that **share the same root cause** into one incident. An incident usually indicates a real issue; Metrics Advisor performs analysis on top of it and provides automatic root cause analysis insights.
+
+This significantly reduces the effort of viewing each individual anomaly and helps you quickly find the most important contributing factor to an issue.
+
+An alert generated by Metrics Advisor may contain multiple incidents and each incident may contain multiple anomalies captured on different time series at the same timestamp.
+
+## Paths to diagnose an incident
+
+- **Diagnose from an alert notification**
+
+  If you've configured an email or Teams hook and applied at least one alerting configuration, you'll receive alert notifications escalating incidents that are analyzed by Metrics Advisor. Each notification contains an incident list and a brief description. For each incident there's a **"Diagnose"** button; selecting it directs you to the incident detail page to view diagnostic insights.
+
+ :::image type="content" source="../media/diagnostics/alert-notification.png" alt-text="Diagnose from an alert notification":::
+
+- **Diagnose from an incident in "Incident hub"**
+
+  There's a central place in Metrics Advisor that gathers all incidents that have been captured and makes it easy to track any ongoing issues. Selecting the **Incident Hub** tab in the left navigation bar lists all incidents within the selected metrics. Within the incident list, select one to view detailed diagnostic insights.
+
+ :::image type="content" source="../media/diagnostics/incident-list.png" alt-text="Diagnose from an incident in Incident hub":::
+
+- **Diagnose from an incident listed in metrics page**
+
+ Within the metrics detail page, there's a tab named **Incidents** which lists the latest incidents captured for this metric. The list can be filtered by the severity of the incidents or the dimension value of the metrics.
+
+ Selecting one incident in the list will direct you to the incident detail page to view diagnostic insights.
+
+ :::image type="content" source="../media/diagnostics/incident-in-metrics.png" alt-text="Diagnose from an incident listed in metrics page":::
+
+## Typical diagnostic flow
+
+After being directed to the incident detail page, you can take advantage of the insights automatically analyzed by Metrics Advisor to quickly locate the root cause of an issue, or use the analysis tools to further evaluate the issue's impact. There are three sections in the incident detail page, which correspond to three major steps for diagnosing an incident.
+
+### Step 1. Check summary of current incident
+
+The first section lists a summary of the current incident, including basic information, actions & tracings, and an analyzed root cause.
+
+- Basic information includes the "top impacted series" with a diagram, "impact start & end time", "incident severity" and "total anomalies included". By reading this, you can get a basic understanding of an ongoing issue and the impact of it.
+- Actions & tracings: this section is used to facilitate team collaboration on an ongoing incident. Sometimes one incident may require cross-team effort to analyze and resolve. Everyone who has permission to view the incident can add an action or a tracing event.
+
+ For example, after diagnosing the incident and root cause is identified, an engineer can add a tracing item with type of "customized" and input the root cause in the comment section. Leave the status as "Active". Then other teammates can share the same info and know there's someone working on the fix. You can also add an "Azure DevOps" item to track the incident with a specific task or bug.
++
+- Analyzed root cause is an automatically analyzed result. Metrics Advisor analyzes all anomalies captured on time series within one metric, with different dimension values, at the same timestamp. It then performs correlation and clustering to group related anomalies together and generates root cause advice.
++
+For metrics with multiple dimensions, it's common for multiple anomalies to be detected at the same time. However, those anomalies may share the same root cause. Instead of analyzing all anomalies one by one, leveraging **Analyzed root cause** is usually the most efficient way to diagnose the current incident.
++
+### Step 2. View cross-dimension diagnostic insights
+
+After getting basic info and automatic analysis insights, you can get more detailed info on abnormal status on other dimensions within the same metric in a holistic way using the **"Diagnostic tree"**.
+
+For metrics with multiple dimensions, Metrics Advisor categorizes the time series into a hierarchy, which is named the **Diagnostic tree**. For example, a "revenue" metric is monitored by two dimensions: "region" and "category". In addition to the concrete dimension values, there needs to be an **aggregated** dimension value, like **"SUM"**. The time series of "region" = **"SUM"** and "category" = **"SUM"** is then categorized as the root node within the tree. Whenever there's an anomaly captured at the **"SUM"** dimension, it can be drilled down and analyzed to locate which specific dimension value has contributed the most to the parent node anomaly. Select each node to expand it and see detailed information.
++
+- To enable an "aggregated" dimension value in your metrics
+
+  Metrics Advisor supports performing "Roll-up" on dimensions to calculate an "aggregated" dimension value. The diagnostic tree supports diagnosing on the **"SUM", "AVG", "MAX", "MIN", and "COUNT"** aggregations. To enable an "aggregated" dimension value, enable the "Roll-up" function during data onboarding. Make sure your metric is **mathematically computable** and that the aggregated dimension has real business value.
+
+ :::image type="content" source="../media/diagnostics/automatic-roll-up.png" alt-text="Roll-up settings":::
+
+- If there's no "aggregated" dimension value in your metrics
+
+  If there's no "aggregated" dimension value in your metrics and the "Roll-up" function was not enabled during data onboarding, no metric value is calculated for the "aggregated" dimension. It shows up as a gray node in the tree and can be expanded to view its child nodes.
+
+#### Legend of diagnostic tree
+
+There are three kinds of nodes in the diagnostic tree:
+- **Blue node**: corresponds to a time series with a real metric value.
+- **Gray node**: corresponds to a virtual time series with no metric value; it's a logical node.
+- **Red node**: corresponds to the top impacted time series of the current incident.
+
+For each node, the abnormal status is described by the color of the node border:
+- **Red border** means there's an anomaly captured on the time series corresponding to the incident timestamp.
+- **Non-red border** means there's no anomaly captured on the time series corresponding to the incident timestamp.
+
+#### Display mode
+
+There are two display modes for a diagnostic tree: only show anomaly series or show major proportions.
+
+- **Only show anomaly series mode** enables you to focus on the current anomalies captured on different series and diagnose the root cause of the top impacted series.
+- **Show major proportions** enables you to check the abnormal status of the major proportions of the top impacted series. In this mode, the tree shows both series with anomalies detected and series with no anomalies, but it focuses more on important series.
+
+#### Analyze options
+
+- **Show delta ratio**
+
+  "Delta ratio" is the percentage of current node delta compared to parent node delta. Here's the formula:
+
+ (real value of current node - expected value of current node) / (real value of parent node - expected value of parent node) * 100%
+
+ This is used to analyze the major contribution of parent node delta.
+
+- **Show value proportion**
+
+  "Value proportion" is the percentage of current node value compared to parent node value. Here's the formula:
+
+ (real value of current node / real value of parent node) * 100%
+
+  This is used to evaluate the proportion of the current node within the whole. A short worked example of both ratios follows this list.
+
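+A quick worked example of the two ratios, with made-up numbers, is sketched below.
+
+```python
+# Hypothetical real and expected values for one node and its parent at the incident timestamp.
+node_real, node_expected = 80.0, 100.0
+parent_real, parent_expected = 700.0, 1000.0
+
+# Delta ratio: the share of the parent's delta contributed by this node.
+delta_ratio = (node_real - node_expected) / (parent_real - parent_expected) * 100  # 6.67%
+
+# Value proportion: the share of the parent's real value held by this node.
+value_proportion = node_real / parent_real * 100  # 11.43%
+
+print(f"Delta ratio: {delta_ratio:.2f}%, value proportion: {value_proportion:.2f}%")
+```
+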
+By using the "Diagnostic tree", customers can locate the root cause of the current incident in a specific dimension. This significantly reduces the effort of viewing each individual anomaly or pivoting through different dimensions to find the major anomaly contribution.
+
+### Step 3. View cross-metrics diagnostic insights using "Metrics graph"
+
+Sometimes it's hard to analyze an issue by checking the abnormal status of one single metric, and you need to correlate multiple metrics together. Customers are able to configure a **Metrics graph**, which indicates the relationships between metrics. Refer to [How to build a metrics graph](metrics-graph.md) to get started.
+
+#### Check anomaly status on root cause dimension within "Metrics graph"
+
+By using the above cross-dimension diagnostic result, the root cause is limited to a specific dimension value. Then use the "Metrics graph" and filter by the analyzed root cause dimension to check anomaly status on other metrics.
+
+For example, suppose there's an incident captured on the "revenue" metric. The top impacted series is the global region with "region" = "SUM". By using cross-dimension diagnostics, the root cause has been located to "region" = "Karachi". There's a pre-configured metrics graph including the metrics "revenue", "cost", "DAU", "PLT (page load time)" and "CHR (cache hit rate)".
+
+Metrics Advisor will automatically filter the metrics graph by the root cause dimension of "region" = "Karachi" and display the anomaly status of each metric. By analyzing the relation between metrics and their anomaly status, customers can gain further insight into the final root cause.
++
+#### Auto related anomalies
+
+By applying the root cause dimension filter on the metrics graph, anomalies on each metric at the timestamp of the current incident are automatically related. Those anomalies should be related to the identified root cause of the current incident.
++
+## Next steps
+
+- [Adjust anomaly detection using feedback](anomaly-feedback.md)
+- [Configure metrics and fine tune detection configuration](configure-metrics.md)
ai-services Further Analysis https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/metrics-advisor/how-tos/further-analysis.md
+
+ Title: Further analyze an incident and evaluate impact
+
+description: Learn how to leverage analysis tools to further analyze an incident.
+++++ Last updated : 04/15/2021+++
+# Further analyze an incident and evaluate impact
+
+## Metrics drill down by dimensions
+
+When you're viewing incident information, you may need to get more detailed information, for example, for different dimensions, and timestamps. If your data has one or more dimensions, you can use the drill down function to get a more detailed view.
+
+To use the drill down function, select the **Metric drilling** tab in the **Incident hub**.
++
+The **Dimensions** setting is a list of dimensions for an incident; you can select other available dimension values for each one to change the series being analyzed. The **Timestamp** setting lets you view the current incident at different moments in time.
+
+### Select drilling options and choose a dimension
+
+There are two types of drill down options: **Drill down** and **Horizontal comparison**.
+
+> [!Note]
+> - For drill down, you can explore the data from different dimension values, except the currently selected dimensions.
+> - For horizontal comparison, you can explore the data from different dimension values, except the all-up dimensions.
++
+### Value comparison for different dimension values
+
+The second section of the drill down tab is a table with comparisons for different dimension values. It includes the value, baseline value, difference value, delta value and whether it is an anomaly.
++
+### Value and expected value comparisons for different dimension value
+
+The third section of the drill down tab is a histogram with the values and expected values for different dimension values. The histogram is sorted by the difference between value and expected value, so you can easily find the unexpected value with the biggest impact. For example, in the above picture, we can see that, except for the all-up value, **US7** contributes the most to the anomaly.
++
+### Raw value visualization
+The last part of the drill down tab is a line chart of the raw values. With this chart, you don't need to navigate to the metric page to view details.
++
+## Compare time series
+
+Sometimes when an anomaly is detected on a specific time series, it's helpful to compare it with multiple other series in a single visualization.
+Select the **Compare tools** tab, and then select the blue **+ Add** button.
++
+Select a series from your data feed. You can choose the same granularity or a different one. Select the target dimensions and load the series trend, then click **Ok** to compare it with a previous series. The series will be put together in one visualization. You can continue to add more series for comparison and get further insights. Select the drop down menu at the top of the **Compare tools** tab to compare the time series data over a time-shifted period.
+
+> [!Warning]
+> To make a comparison, time series data analysis may require shifts in data points so the granularity of your data must support it. For example, if your data is weekly and you use the **Day over day** comparison, you will get no results. In this example, you would use the **Month over month** comparison instead.
+
+After selecting a time-shifted comparison, you can select whether you want to compare the data values, the delta values, or the percentage delta.
+
+## View similar anomalies using Time Series Clustering
+
+When viewing an incident, you can use the **Similar time-series-clustering** tab to see the various series associated with it. Series in one group are summarized together. From the above picture, we can see that there are at least two series groups. This feature is only available if the following requirements are met:
+
+- Metrics must have one or more dimensions or dimension values.
+- The series within one metric must have a similar trend.
+
+Available dimensions are listed at the top of the tab, and you can make a selection to specify the series.
+
ai-services Manage Data Feeds https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/metrics-advisor/how-tos/manage-data-feeds.md
+
+ Title: Manage data feeds in Metrics Advisor
+
+description: Learn how to manage data feeds that you've added to Metrics Advisor.
+++++ Last updated : 10/25/2022+++
+# How to: Manage your data feeds
+
+This article guides you through managing the data feeds you've onboarded to Metrics Advisor.
+
+## Edit a data feed
+
+> [!NOTE]
+> The following details cannot be changed after a data feed has been created.
+> * Data feed ID
+> * Created Time
+> * Dimension
+> * Source Type
+> * Granularity
+
+Only the administrator of a data feed is allowed to make changes to it.
+
+On the data feed list page, you can **pause, reactivate, or delete** a data feed:
+
+* **Pause/Reactivate**: Select the **Pause/Play** button to pause/reactivate a data feed.
+
+* **Delete**: Select the **Delete** button to delete a data feed.
+
+If you change the ingestion start time, you need to verify the schema again. You can change it by clicking **Edit** in the data feed detail page.
+
+## Backfill your data feed
+
+Select the **Backfill** button to trigger an immediate ingestion on a timestamp, to fix a failed ingestion or override the existing data.
+- The start time is inclusive.
+- The end time is exclusive.
+- Anomaly detection is re-triggered on selected range only.
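+
+A backfill can also be triggered programmatically. The following is a minimal sketch that assumes the `azure-ai-metricsadvisor` Python package; the endpoint, keys, data feed ID, and time range are placeholders.
+
+```python
+from datetime import datetime
+from azure.ai.metricsadvisor import MetricsAdvisorKeyCredential, MetricsAdvisorAdministrationClient
+
+credential = MetricsAdvisorKeyCredential("<subscription-key>", "<api-key>")
+admin_client = MetricsAdvisorAdministrationClient(
+    "https://<resource-name>.cognitiveservices.azure.com", credential
+)
+
+# Re-ingest (backfill) the selected range: start time inclusive, end time exclusive.
+admin_client.refresh_data_feed_ingestion(
+    "<data-feed-id>",
+    datetime(2023, 7, 1),
+    datetime(2023, 7, 3),
+)
+```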
++
+## Manage permission of a data feed
+
+Azure operations can be divided into two categories - control plane and data plane. You use the control plane to manage resources in your subscription. You use the data plane to use capabilities exposed by your instance of a resource type.
+Metrics Advisor requires at least a 'Reader' role to use its capabilities; this role cannot perform edit or delete actions on the resource itself.
+
+Within Metrics Advisor, there are other fine-grained roles that enable permission control on specific entities, like data feeds, hooks, and credentials. There are two types of roles:
+
+- **Administrator**: Has full permissions to manage a data feed, hook, credentials, etc. including modify and delete.
+- **Viewer**: Has access to a read-only view of the data feed, hook, credentials, etc.
+
+## Advanced settings
+
+There are several optional advanced settings when creating a new data feed; they can also be modified in the data feed detail page.
+
+### Ingestion options
+
+* **Ingestion time offset**: By default, data is ingested according to the specified granularity. For example, a metric with a *daily* timestamp will be ingested one day after its timestamp. You can use the offset to delay the time of ingestion with a *positive* number, or advance it with a *negative* number.
+
+* **Max concurrency**: Set this parameter if your data source supports limited concurrency. Otherwise leave at the default setting.
+
+* **Stop retry after**: If data ingestion has failed, it will be retried automatically within a period. The beginning of the period is the time when the first data ingestion happened. The length of the period is defined according to the granularity. If you leave the default value (-1), the value is determined according to the granularity as below.
+
+ | Granularity | Stop Retry After |
+  | :---------- | :--------------- |
+ | Daily, Custom (>= 1 Day), Weekly, Monthly, Yearly | 7 days |
+ | Hourly, Custom (< 1 Day) | 72 hours |
+
+* **Min retry interval**: You can specify the minimum interval for retrying to pull data from the source. If you leave the default value (-1), the retry interval is determined according to the granularity as below.
+
+ | Granularity | Minimum Retry Interval |
+  | :---------- | :--------------------- |
+ | Daily, Custom (>= 1 Day), Weekly, Monthly | 30 minutes |
+ | Hourly, Custom (< 1 Day) | 10 minutes |
+ | Yearly | 1 day |
+
+### Fill gap when detecting:
+
+> [!NOTE]
+> This setting won't affect your data source and will not affect the data charts displayed on the portal. The auto-filling only occurs during anomaly detection.
+
+Sometimes series are not continuous. When there are missing data points, Metrics Advisor will use the specified value to fill them before anomaly detection to improve accuracy.
+The options are:
+
+* Using the value from the previous actual data point. This is used by default.
+* Using a specific value.
+
+### Action link template:
+
+Action link templates are used to predefine actionable HTTP URLs, which consist of the placeholders `%datafeed`, `%metric`, `%timestamp`, `%detect_config`, and `%tagset`. You can use the template to redirect from an anomaly or an incident to a specific URL to drill down.
++
+Once you've filled in the action link, click **Go to action link** in the incident list's action option or in the diagnostic tree's right-click menu. The placeholders in the action link template are replaced with the corresponding values of the anomaly or incident, as illustrated in the examples and the sketch below.
+
+| Placeholder | Examples | Comment |
+| - | -- | - |
+| `%datafeed` | - | Data feed ID |
+| `%metric` | - | Metric ID |
+| `%detect_config` | - | Detect config ID |
+| `%timestamp` | - | Timestamp of an anomaly or end time of a persistent incident |
+| `%tagset` | `%tagset`, <br> `[%tagset.get("Dim1")]`, <br> `[ %tagset.get("Dim1", "filterVal")]` | Dimension values of an anomaly or top anomaly of an incident. <br> The `filterVal` is used to filter out matching values within the square brackets. |
+
+Examples:
+
+* If the action link template is `https://action-link/metric/%metric?detectConfigId=%detect_config`:
+ * The action link `https://action-link/metric/1234?detectConfigId=2345` would go to anomalies or incidents under metric `1234` and detect config `2345`.
+
+* If the action link template is `https://action-link?[Dim1=%tagset.get('Dim1','')&][Dim2=%tagset.get('Dim2','')]`:
+ * The action link would be `https://action-link?Dim1=Val1&Dim2=Val2` when the anomaly is `{ "Dim1": "Val1", "Dim2": "Val2" }`.
+ * The action link would be `https://action-link?Dim2=Val2` when the anomaly is `{ "Dim1": "", "Dim2": "Val2" }`, since `[Dim1=***&]` is skipped for the dimension value empty string.
+
+* If the action link template is `https://action-link?filter=[Name/Dim1 eq '%tagset.get('Dim1','')' and ][Name/Dim2 eq '%tagset.get('Dim2','')']`:
+ * The action link would be `https://action-link?filter=Name/Dim1 eq 'Val1' and Name/Dim2 eq 'Val2'` when the anomaly is `{ "Dim1": "Val1", "Dim2": "Val2" }`,
+ * The action link would be `https://action-link?filter=Name/Dim2 eq 'Val2'` when anomaly is `{ "Dim1": "", "Dim2": "Val2" }` since `[Name/Dim1 eq '***' and ]` is skipped for the dimension value empty string.
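+
+The skip-when-empty behavior in the examples above can be sketched as follows. This is an illustration of the substitution rule only, not the service's actual template engine; the dimension names and values are hypothetical.
+
+```python
+def expand_part(template_part: str, dim: str, tagset: dict) -> str:
+    """Substitute %tagset.get for one bracketed part, or drop the part when the value is empty."""
+    value = tagset.get(dim, "")
+    return "" if value == "" else template_part.replace(f"%tagset.get('{dim}','')", value)
+
+parts = [("Dim1", "Dim1=%tagset.get('Dim1','')&"), ("Dim2", "Dim2=%tagset.get('Dim2','')")]
+
+anomaly = {"Dim1": "", "Dim2": "Val2"}  # hypothetical anomaly dimensions
+link = "https://action-link?" + "".join(expand_part(part, dim, anomaly) for dim, part in parts)
+print(link)  # https://action-link?Dim2=Val2  (the Dim1 part is skipped because its value is empty)
+```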
+
+### "Data feed not available" alert settings
+
+A data feed is considered not available if no data is ingested from the source within the grace period, starting from the time the data feed begins ingestion. An alert is triggered in this case.
+
+To configure an alert, you need to [create a hook](alerts.md#create-a-hook) first. Alerts will be sent through the hook configured.
+
+* **Grace period**: The Grace period setting is used to determine when to send an alert if no data points are ingested. The reference point is the time of first ingestion. If an ingestion fails, Metrics Advisor will keep trying at a regular interval specified by the granularity. If it continues to fail past the grace period, an alert will be sent.
+
+* **Auto snooze**: When this option is set to zero, each timestamp with *Not Available* triggers an alert. When a value other than zero is specified, that number of consecutive *Not Available* timestamps after the first one do not trigger additional alerts.
+
+## Next steps
+- [Configure metrics and fine tune detection configuration](configure-metrics.md)
+- [Adjust anomaly detection using feedback](anomaly-feedback.md)
+- [Diagnose an incident](diagnose-an-incident.md).
ai-services Metrics Graph https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/metrics-advisor/how-tos/metrics-graph.md
+
+ Title: Metrics Advisor metrics graph
+
+description: How to configure your Metrics graph and visualize related anomalies in your data.
+++++ Last updated : 09/08/2020+++
+# How-to: Build a metrics graph to analyze related metrics
+
+Each time series in Metrics Advisor is monitored separately by a model that learns from historical data to predict future trends. Anomalies will be detected if any data point falls out of the historical pattern. In some cases, however, several metrics may relate to each other, and anomalies need to be analyzed across multiple metrics. **Metrics graph** is just the tool that helps with this.
+
+For example, if you have several metrics that monitor your business from different perspectives, anomaly detection is applied to each of them separately. However, in real business cases, anomalies detected on multiple metrics may be related to each other, and discovering those relations and analyzing the root cause based on them is helpful when addressing real issues. The metrics graph helps automatically correlate anomalies detected on related metrics to accelerate the troubleshooting process.
+
+## Select a metric to put the first node to the graph
+
+Select the **Metrics graph** tab in the navigation bar. The first step for building a metrics graph is to put a node onto the graph. Select a data feed and a metric at the top of the page. A node will appear in the bottom panel.
++
+## Add a node/relation on existing node
+
+Next, you need to add another node and specify a relation to one or more existing nodes. Right-click an existing node; a context menu appears with several options.
+
+Select **Add relation**, and you will be able to choose another metric and specify the relation type between the two nodes. You can also apply specific dimension filters.
++
+After repeating the above steps, you will have a metrics graph describing the relations between all related metrics.
+
+There are other actions you can take on the graph:
+1. Delete a node
+2. Go to metrics
+3. Go to Incident Hub
+4. Expand
+5. Delete relation
+
+## Legend of metrics graph
+
+Each node on the graph represents a metric. There are four kinds of nodes in the metrics graph:
+
+- **Green node**: A metric whose current incident severity is low.
+- **Orange node**: A metric whose current incident severity is medium.
+- **Red node**: A metric whose current incident severity is high.
+- **Blue node**: A metric that has no anomaly severity.
++
+## View related metrics anomaly status in incident hub
+
+When the metrics graph is built, whenever an anomaly is detected on metrics within the graph, you'll be able to view the related anomaly statuses and get a high-level view of the incident.
+
+Select an incident within the graph and scroll down to **cross metrics analysis**, below the diagnostic information.
++
+## Next steps
+
+- [Adjust anomaly detection using feedback](anomaly-feedback.md)
+- [Diagnose an incident](diagnose-an-incident.md).
+- [Configure metrics and fine tune detection configuration](configure-metrics.md)
ai-services Onboard Your Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/metrics-advisor/how-tos/onboard-your-data.md
+
+ Title: Onboard your data feed to Metrics Advisor
+
+description: How to get started with onboarding your data feeds to Metrics Advisor.
++++++ Last updated : 04/20/2021+++
+# How-to: Onboard your metric data to Metrics Advisor
+
+Use this article to learn about onboarding your data to Metrics Advisor.
+
+## Data schema requirements and configuration
+
+If you are not sure about some of the terms, refer to [Glossary](../glossary.md).
+
+## Avoid loading partial data
+
+Partial data is caused by inconsistencies between the data stored in Metrics Advisor and the data source. This can happen when the data source is updated after Metrics Advisor has finished pulling data. Metrics Advisor only pulls data from a given data source once.
+
+For example, suppose a metric has been onboarded to Metrics Advisor for monitoring. Metrics Advisor successfully grabs metric data at timestamp A and performs anomaly detection on it. However, if the metric data of that particular timestamp A is refreshed after the data has been ingested, the new data value won't be retrieved.
+
+You can try to [backfill](manage-data-feeds.md#backfill-your-data-feed) historical data (described later) to mitigate inconsistencies, but this won't trigger new anomaly alerts if alerts for those time points have already been triggered. This process may add additional workload to the system, and is not automatic.
+
+To avoid loading partial data, we recommend two approaches:
+
+* Generate data in one transaction:
+
+ Ensure the metric values for all dimension combinations at the same timestamp are stored to the data source in one transaction. In the above example, wait until data from all data sources is ready, and then load it into Metrics Advisor in one transaction. Metrics Advisor can poll the data feed regularly until data is successfully (or partially) retrieved.
+
+* Delay data ingestion by setting a proper value for the **Ingestion time offset** parameter:
+
+ Set the **Ingestion time offset** parameter for your data feed to delay the ingestion until the data is fully prepared. This can be useful for some data sources which don't support transactions such as Azure Table Storage. See [advanced settings](manage-data-feeds.md#advanced-settings) for details.
+
+## Start by adding a data feed
+
+After signing into your Metrics Advisor portal and choosing your workspace, click **Get started**. Then, on the main page of the workspace, click **Add data feed** from the left menu.
+
+### Add connection settings
+
+#### 1. Basic settings
+Next you'll input a set of parameters to connect your time-series data source.
+* **Source Type**: The type of data source where your time series data is stored.
+* **Granularity**: The interval between consecutive data points in your time series data. Currently Metrics Advisor supports: Yearly, Monthly, Weekly, Daily, Hourly, per minute, and Custom. The lowest interval the customization option supports is 60 seconds.
+ * **Seconds**: The number of seconds when *granularityName* is set to *Customize*.
+* **Ingest data since (UTC)**: The baseline start time for data ingestion. `startOffsetInSeconds` is often used to add an offset to help with data consistency.
+
+#### 2. Specify connection string
+Next, you'll need to specify the connection information for the data source. For details on the other fields and connecting different types of data sources, see [How-to: Connect different data sources](../data-feeds-from-different-sources.md).
+
+#### 3. Specify query for a single timestamp
+<!-- Next, you'll need to specify a query to convert the data into the required schema, see [how to write a valid query](../tutorials/write-a-valid-query.md) for more information. -->
+
+For details of different types of data sources, see [How-to: Connect different data sources](../data-feeds-from-different-sources.md).
+
+### Load data
+
+After the connection string and query string are entered, select **Load data**. During this operation, Metrics Advisor checks the connection and the permission to load data, checks that the necessary parameters (@IntervalStart and @IntervalEnd) are used in the query, and checks the column names from the data source.
+
+If there's an error at this step:
+1. First check if the connection string is valid.
+2. Then check if there are sufficient permissions and that the ingestion worker IP address is granted access.
+3. Then check if required parameters (@IntervalStart and @IntervalEnd) are used in your query.
+
+### Schema configuration
+
+Once the data schema is loaded, select the appropriate fields.
+
+If the timestamp of a data point is omitted, Metrics Advisor will use the timestamp when the data point is ingested instead. For each data feed, you can specify at most one column as a timestamp. If you get a message that a column cannot be specified as a timestamp, check your query or data source, and whether there are multiple timestamps in the query result - not only in the preview data. When performing data ingestion, Metrics Advisor can only consume one chunk (for example one day, one hour - according to the granularity) of time-series data from the given source each time.
+
+|Selection |Description |Notes |
+||||
+| **Display Name** | Name to be displayed in your workspace instead of the original column name. | Optional.|
+|**Timestamp** | The timestamp of a data point. If omitted, Metrics Advisor will use the timestamp when the data point is ingested instead. For each data feed, you can specify at most one column as timestamp. | Optional. Should be specified with at most one column. If you get a **column cannot be specified as Timestamp** error, check your query or data source for duplicate timestamps. |
+|**Measure** | The numeric values in the data feed. For each data feed, you can specify multiple measures but at least one column should be selected as measure. | Should be specified with at least one column. |
+|**Dimension** | Categorical values. A combination of different values identifies a particular single-dimension time series, for example: country/region, language, tenant. You can select zero or more columns as dimensions. Note: be cautious when selecting a non-string column as a dimension. | Optional. |
+|**Ignore** | Ignore the selected column. | Optional. For data sources that support using a query to get data, there is no 'Ignore' option. |
+
+If you want to ignore columns, we recommend updating your query or data source to exclude those columns. You can also ignore columns using **Ignore columns** and then **Ignore** on the specific columns. If a column should be a dimension and is mistakenly set as *Ignored*, Metrics Advisor may end up ingesting partial data. For example, assume the data from your query is as below:
+
+| Row ID | Timestamp | Country/Region | Language | Income |
+| | | | | |
+| 1 | 2019/11/10 | China | ZH-CN | 10000 |
+| 2 | 2019/11/10 | China | EN-US | 1000 |
+| 3 | 2019/11/10 | US | ZH-CN | 12000 |
+| 4 | 2019/11/11 | US | EN-US | 23000 |
+| ... | ...| ... | ... | ... |
+
+If *Country* is a dimension and *Language* is set as *Ignored*, then the first and second rows will have the same dimensions for a timestamp. Metrics Advisor will arbitrarily use one value from the two rows. Metrics Advisor will not aggregate the rows in this case.
+
+After configuring the schema, select **Verify schema**. Within this operation, Metrics Advisor performs the following checks:
+- Whether the timestamp of the queried data falls into one single interval.
+- Whether there are duplicate values returned for the same dimension combination within one metric interval.
+
+### Automatic roll up settings
+
+> [!IMPORTANT]
+> If you'd like to enable root cause analysis and other diagnostic capabilities, the **Automatic roll up settings** need to be configured.
+> Once enabled, the automatic roll-up settings cannot be changed.
+
+Metrics Advisor can automatically perform aggregation (for example SUM, MAX, MIN) on each dimension during ingestion, and then build a hierarchy that is used in root cause analysis and other diagnostic features.
+
+Consider the following scenarios:
+
+* *"I do not need to include the roll-up analysis for my data."*
+
+ You do not need to use the Metrics Advisor roll-up.
+
+* *"My data has already rolled up and the dimension value is represented by: NULL or Empty (Default), NULL only, Others."*
+
+ This option means Metrics Advisor doesn't need to roll up the data because the rows are already summed. For example, if you select *NULL only*, then the second data row in the below example will be seen as an aggregation of all countries and language *EN-US*; the fourth data row which has an empty value for *Country* however will be seen as an ordinary row which might indicate incomplete data.
+
+ | Country/Region | Language | Income |
+ ||-|--|
+ | China | ZH-CN | 10000 |
+ | (NULL) | EN-US | 999999 |
+ | US | EN-US | 12000 |
+ | | EN-US | 5000 |
+
+* *"I need Metrics Advisor to roll up my data by calculating Sum/Max/Min/Avg/Count and represent it by {some string}."*
+
+ Some data sources such as Azure Cosmos DB or Azure Blob Storage do not support certain calculations like *group by* or *cube*. Metrics Advisor provides the roll up option to automatically generate a data cube during ingestion.
+ This option means you need Metrics Advisor to calculate the roll-up using the algorithm you've selected and use the specified string to represent the roll-up in Metrics Advisor. This won't change any data in your data source.
+ For example, suppose you have a set of time series which stands for Sales metrics with the dimension (Country, Region). For a given timestamp, it might look like the following:
++
+ | Country | Region | Sales |
+ |||-|
+ | Canada | Alberta | 100 |
+ | Canada | British Columbia | 500 |
+ | United States | Montana | 100 |
++
+ After enabling Auto Roll Up with *Sum*, Metrics Advisor will calculate the dimension combinations, and sum the metrics during data ingestion. The result might be:
+
+ | Country | Region | Sales |
+ | | | - |
+ | Canada | Alberta | 100 |
+ | NULL | Alberta | 100 |
+ | Canada | British Columbia | 500 |
+ | NULL | British Columbia | 500 |
+ | United States | Montana | 100 |
+ | NULL | Montana | 100 |
+ | NULL | NULL | 700 |
+ | Canada | NULL | 600 |
+ | United States | NULL | 100 |
+
+ `(Country=Canada, Region=NULL, Sales=600)` means the sum of Sales in Canada (all regions) is 600.
+
+ The following is the transformation in SQL language.
+
+ ```mssql
+ SELECT
+ dimension_1,
+ dimension_2,
+ ...
+ dimension_n,
+ sum (metrics_1) AS metrics_1,
+ sum (metrics_2) AS metrics_2,
+ ...
+ sum (metrics_n) AS metrics_n
+ FROM
+ each_timestamp_data
+ GROUP BY
+ CUBE (dimension_1, dimension_2, ..., dimension_n);
+ ```
+
+ Consider the following before using the Auto roll up feature:
+
+ * If you want to use *SUM* to aggregate your data, make sure your metrics are additive in each dimension. Here are some examples of *non-additive* metrics:
+ - Fraction-based metrics. This includes ratio, percentage, etc. For example, you should not add the unemployment rate of each state to calculate the unemployment rate of the entire country/region.
+    - Overlap in dimension. For example, you should not add the number of people in each sport to calculate the number of people who like sports, because there is overlap between them; one person can like multiple sports.
+ * To ensure the health of the whole system, the size of cube is limited. Currently, the limit is **100,000**. If your data exceeds that limit, ingestion will fail for that timestamp.
+
+## Advanced settings
+
+There are several advanced settings to enable data ingested in a customized way, such as specifying ingestion offset, or concurrency. For more information, see the [advanced settings](manage-data-feeds.md#advanced-settings) section in the data feed management article.
+
+## Specify a name for the data feed and check the ingestion progress
+
+Give a custom name for the data feed, which will be displayed in your workspace. Then select **Submit**. In the data feed details page, you can use the ingestion progress bar to view status information.
+++
+To check ingestion failure details:
+
+1. Select **Show Details**.
+2. Select **Status** then choose **Failed** or **Error**.
+3. Hover over a failed ingestion, and view the details message that appears.
++
+A *failed* status indicates the ingestion for this data source will be retried later.
+An *Error* status indicates Metrics Advisor won't retry for the data source. To reload data, you need to trigger a backfill/reload manually.
+
+You can also refresh the progress of an ingestion by selecting **Refresh Progress**. After data ingestion completes, you're free to click into metrics and check anomaly detection results.
+
+## Next steps
+- [Manage your data feeds](manage-data-feeds.md)
+- [Configurations for different data sources](../data-feeds-from-different-sources.md)
+- [Configure metrics and fine tune detection configuration](configure-metrics.md)
ai-services Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/metrics-advisor/overview.md
+
+ Title: What is the Azure AI Metrics Advisor service?
+
+description: What is Metrics Advisor?
+++++ Last updated : 07/06/2021+++
+# What is Azure AI Metrics Advisor?
++
+Metrics Advisor is a part of [Azure AI services](../../ai-services/what-are-ai-services.md) that uses AI to perform data monitoring and anomaly detection in time series data. The service automates the process of applying models to your data, and provides a set of APIs and a web-based workspace for data ingestion, anomaly detection, and diagnostics - without needing to know machine learning. Developers can build AIOps, predictive maintenance, and business monitoring applications on top of the service. Use Metrics Advisor to:
+
+* Analyze multi-dimensional data from multiple data sources
+* Identify and correlate anomalies
+* Configure and fine-tune the anomaly detection model used on your data
+* Diagnose anomalies and help with root cause analysis
++
+This documentation contains the following types of articles:
+* The [quickstarts](./Quickstarts/web-portal.md) are step-by-step instructions that let you make calls to the service and get results in a short period of time.
+* The [how-to guides](./how-tos/onboard-your-data.md) contain instructions for using the service in more specific or customized ways.
+* The [conceptual articles](glossary.md) provide in-depth explanations of the service's functionality and features.
+
+## Connect to a variety of data sources
+
+Metrics Advisor can connect to, and [ingest multi-dimensional metric](how-tos/onboard-your-data.md) data from, many data stores, including SQL Server, Azure Blob Storage, MongoDB, and more.
+
+## Easy-to-use and customizable anomaly detection
+
+* Metrics Advisor automatically selects the best model for your data, without needing to know any machine learning.
+* Automatically monitor every time series within [multi-dimensional metrics](glossary.md#multi-dimensional-metric).
+* Use [parameter tuning](how-tos/configure-metrics.md) and [interactive feedback](how-tos/anomaly-feedback.md) to customize the model applied on your data, and future anomaly detection results.
+
+## Real-time notification through multiple channels
+
+Whenever anomalies are detected, Metrics Advisor is able to [send real-time notifications](how-tos/alerts.md) through multiple channels using hooks, such as email hooks, web hooks, Teams hooks, and Azure DevOps hooks. Flexible alert configuration lets you customize when and where to send a notification.
+
+## Smart diagnostic insights by analyzing anomalies
+
+### Analyze root cause into specific dimension
+
+Metrics Advisor combines anomalies detected on the same multi-dimensional metric into a diagnostic tree to help you analyze the root cause down to a specific dimension. Automated insights are also available by analyzing the greatest contribution of each dimension.
+
+### Cross-metrics analysis using Metrics graph
+
+A [Metrics graph](./how-tos/metrics-graph.md) indicates the relation between metrics. Cross-metrics analysis can be enabled to help you check the abnormal status among all related metrics in a holistic view, and eventually locate the final root cause.
+
+Refer to [how to diagnose an incident](./how-tos/diagnose-an-incident.md) for more detail.
+
+## Typical workflow
+
+The workflow is simple: after onboarding your data, you can fine-tune the anomaly detection, and create configurations to fit your scenario.
+
+1. [Create an Azure resource](https://go.microsoft.com/fwlink/?linkid=2142156) for Metrics Advisor.
+2. Build your first monitor using the web portal.
+ 1. [Onboard your data](./how-tos/onboard-your-data.md)
+ 2. [Fine-tune anomaly detection configuration](./how-tos/configure-metrics.md)
+ 3. [Subscribe anomalies for notification](./how-tos/alerts.md)
+ 4. [View diagnostic insights](./how-tos/diagnose-an-incident.md)
+3. Use the REST API to customize your instance.
+
+## Video
+* [Introducing Metrics Advisor](https://www.youtube.com/watch?v=0Y26cJqZMIM)
+* [New to Azure AI services](https://www.youtube.com/watch?v=7tCLJHdBZgM)
+
+## Data retention & limitation:
+
+Metrics Advisor keeps at most **10,000** time intervals ([what is an interval?](tutorials/write-a-valid-query.md#what-is-an-interval)) counting forward from the current timestamp, regardless of whether data is available or not. Data that falls out of the window is deleted. The data retention maps to the following number of days for different metric granularities:
+
+| Granularity(min) | Retention(day) |
+|---|---|
+| 1 | 6.94 |
+| 5 | 34.72|
+| 15 | 104.1|
+| 60(=hourly) | 416.67 |
+| 1440(=daily)|10000.00|
+
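+The retention values in the table above follow directly from the 10,000-interval window, as the following sketch shows.
+
+```python
+def retention_days(granularity_minutes: int, max_intervals: int = 10_000) -> float:
+    """Retention window expressed in days for a given granularity."""
+    return max_intervals * granularity_minutes / (60 * 24)
+
+for minutes in (1, 5, 15, 60, 1440):
+    print(f"{minutes:>5} min granularity -> {retention_days(minutes):.2f} days")
+# 1 -> 6.94, 5 -> 34.72, 15 -> 104.17, 60 -> 416.67, 1440 -> 10000.00
+```
+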
+There are also further limitations. Refer to [FAQ](faq.yml#what-are-the-data-retention-and-limitations-of-metrics-advisor-) for more details.
+
+## Use cases for Metrics Advisor
+
+* [Protect your organization's growth by using Azure AI Metrics Advisor](https://techcommunity.microsoft.com/t5/azure-ai/protect-your-organization-s-growth-by-using-azure-metrics/ba-p/2564682)
+* [Supply chain anomaly detection and root cause analysis with Azure Metric Advisor](https://techcommunity.microsoft.com/t5/azure-ai/supply-chain-anomaly-detection-and-root-cause-analysis-with/ba-p/2871920)
+* [Customer support: How Azure AI Metrics Advisor can help improve customer satisfaction](https://techcommunity.microsoft.com/t5/azure-ai-blog/customer-support-how-azure-metrics-advisor-can-help-improve/ba-p/3038907)
+* [Detecting methane leaks using Azure AI Metrics Advisor](https://techcommunity.microsoft.com/t5/ai-cognitive-services-blog/detecting-methane-leaks-using-azure-metrics-advisor/ba-p/3254005)
+* [AIOps with Azure AI Metrics Advisor - OpenDataScience.com](https://opendatascience.com/aiops-with-azure-metrics-advisor/)
+
+## Next steps
+
+* Explore a quickstart: [Monitor your first metric on web](quickstarts/web-portal.md).
+* Explore a quickstart: [Use the REST APIs to customize your solution](./quickstarts/rest-api-and-client-library.md).
ai-services Rest Api And Client Library https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/metrics-advisor/quickstarts/rest-api-and-client-library.md
+
+ Title: Metrics Advisor client libraries REST API
+
+description: Use this quickstart to connect your applications to the Metrics Advisor API from Azure AI services.
+++++ Last updated : 11/07/2022+
+zone_pivot_groups: programming-languages-metrics-monitor
+++
+# Quickstart: Use the client libraries or REST APIs to customize your solution
+
+Get started with the Metrics Advisor REST API or client libraries. Follow these steps to install the package and try out the example code for basic tasks.
+
+Use Metrics Advisor to perform the following tasks (see the minimal connection sketch after this list):
+
+* Add a data feed from a data source
+* Check ingestion status
+* Configure detection and alerts
+* Query the anomaly detection results
+* Diagnose anomalies
+++++++++++++++++
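+
+The language-specific samples in this quickstart walk through these tasks end to end. As a minimal illustration, the following sketch, which assumes the `azure-ai-metricsadvisor` Python package, connects to a workspace and lists its data feeds; the endpoint and keys are placeholders.
+
+```python
+from azure.ai.metricsadvisor import MetricsAdvisorKeyCredential, MetricsAdvisorAdministrationClient
+
+# The subscription key comes from the Azure resource; the API key comes from the Metrics Advisor workspace.
+credential = MetricsAdvisorKeyCredential("<subscription-key>", "<api-key>")
+admin_client = MetricsAdvisorAdministrationClient(
+    "https://<resource-name>.cognitiveservices.azure.com", credential
+)
+
+# List the data feeds that have been onboarded to the workspace.
+for data_feed in admin_client.list_data_feeds():
+    print(data_feed.id, data_feed.name)
+```
+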
+## Clean up resources
+
+If you want to clean up and remove an Azure AI services subscription, you can delete the resource or resource group. Deleting the resource group also deletes any other resources associated with it.
+
+* [Portal](../../../ai-services/multi-service-resource.md?pivots=azportal#clean-up-resources)
+* [Azure CLI](../../../ai-services/multi-service-resource.md?pivots=azcli#clean-up-resources)
+
+## Next steps
+
+- [Use the web portal](web-portal.md)
+- [Onboard your data feeds](../how-tos/onboard-your-data.md)
+ - [Manage data feeds](../how-tos/manage-data-feeds.md)
+ - [Configurations for different data sources](../data-feeds-from-different-sources.md)
+- [Configure metrics and fine tune detection configuration](../how-tos/configure-metrics.md)
+- [Adjust anomaly detection using feedback](../how-tos/anomaly-feedback.md)
ai-services Web Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/metrics-advisor/quickstarts/web-portal.md
+
+ Title: 'Quickstart: Metrics Advisor web portal'
+
+description: Learn how to start using the Metrics Advisor web portal.
+++ Last updated : 11/07/2022++++++
+# Quickstart: Monitor your first metric by using the web portal
+
+When you provision an instance of Azure AI Metrics Advisor, you can use the APIs and web-based workspace to interact with the service. The web-based workspace can be used as a straightforward way to quickly get started with the service. It also provides a visual way to configure settings, customize your model, and perform root cause analysis.
+
+## Prerequisites
+
+* An Azure subscription. [Create one for free](https://azure.microsoft.com/free/cognitive-services).
+* When you have your Azure subscription, <a href="https://go.microsoft.com/fwlink/?linkid=2142156" title="Create a Metrics Advisor resource" target="_blank">create a Metrics Advisor resource </a> in the Azure portal to deploy your instance of Metrics Advisor.
+
+
+> [!TIP]
+> * It can take 10 to 30 minutes for your Metrics Advisor resource to deploy. Select **Go to resource** after it successfully deploys.
+> * If you want to use the REST API to interact with the service, you need the key and endpoint from the resource you create. You can find them on the **Keys and endpoints** tab in the created resource.
++
+This document uses a SQL database as an example for creating your first monitor.
+
+## Sign in to your workspace
+
+After your resource is created, sign in to the [Metrics Advisor portal](https://go.microsoft.com/fwlink/?linkid=2143774) with your Active Directory account. From the landing page, select your **Directory**, **Subscription**, and **Workspace** that you just created, and then select **Get started**. To use time series data, select **Add data feed** from the left menu.
+
+
+Currently you can create one Metrics Advisor resource at each available region. You can switch workspaces in the Metrics Advisor portal at any time.
++
+## Time series data
+
+Metrics Advisor provides connectors for different data sources, such as Azure SQL Database, Azure Data Explorer, and Azure Table Storage. The steps for connecting data are similar for different connectors, although some configuration parameters might vary. For more information, see [Connect different data sources](../data-feeds-from-different-sources.md).
+
+This quickstart uses a SQL database as an example. You can also ingest your own data by following the same steps.
++
+### Data schema requirements and configuration
++
+### Configure connection settings and query
+
+[Add the data feeds](../how-tos/onboard-your-data.md) by connecting to your time series data source. Start by selecting the following parameters:
+
+* **Source Type**: The type of data source where your time series data is stored.
+* **Granularity**: The interval between consecutive data points in your time series data (for example, yearly, monthly, or daily). The shortest interval supported is 60 seconds.
+* **Ingest data since (UTC)**: The start time for the first timestamp to be ingested.
++++
+### Load data
+
+After you input the connection and query strings, select **Load data**. Metrics Advisor checks the connection and permission to load data, checks the necessary parameters used in the query, and checks the column name from the data source.
+
+If there's an error at this step:
+1. Check if the connection string is valid.
+1. Confirm that there's sufficient permissions and that the ingestion worker IP address is granted access.
+1. Check if the required parameters (`@IntervalStart` and `@IntervalEnd`) are used in your query.
+
+### Schema configuration
+
+After the data is loaded by running the query, select the appropriate fields.
++
+|Selection |Description |Notes |
+||||
+|**Timestamp** | The timestamp of a data point. If the timestamp is omitted, Metrics Advisor uses the timestamp when the data point is ingested instead. For each data feed, you can specify at most one column as timestamp. | Optional. Should be specified with at most one column. |
+|**Measure** | The numeric values in the data feed. For each data feed, you can specify multiple measures, but at least one column should be selected as measure. | Should be specified with at least one column. |
+|**Dimension** | Categorical values. A combination of different values identifies a particular single-dimension time series. Examples include country/region, language, and tenant. You can select none, or an arbitrary number of columns as dimensions. If you're selecting a non-string column as dimension, be cautious with dimension explosion. | Optional. |
+|**Ignore** | Ignore the selected column. | Optional. For data sources that support using a query to get data, there's no ignore option. |
+++
+After configuring the schema, select **Verify schema**. Metrics Advisor performs the following checks:
+- Whether the timestamp of the queried data falls into one single interval.
+- Whether there are duplicate values returned for the same dimension combination within one metric interval.
+
+### Automatic roll-up settings
+
+> [!IMPORTANT]
+> If you want to enable root cause analysis and other diagnostic capabilities, configure the automatic roll-up settings. After you enable the analysis, you can't change the automatic roll-up settings.
+
+Metrics Advisor can automatically perform aggregation on each dimension during ingestion. Then the service builds a hierarchy that you can use in root cause analysis and other diagnostic features. For more information, see [Automatic roll-up settings](../how-tos/onboard-your-data.md#automatic-roll-up-settings).
+
+Give a custom name for the data feed, which will be shown in your workspace. Select **Submit**.
+
+## Tune detection configuration
+
+After the data feed is added, Metrics Advisor attempts to ingest metric data from the specified start date. It will take some time for data to be fully ingested, and you can view the ingestion status by selecting **Ingestion progress** at the top of the data feed page. If data is ingested, Metrics Advisor will apply detection, and continue to monitor the source for new data.
+
+When detection is applied, select one of the metrics listed in the data feed to find the **Metric detail page**. Here, you can:
+- View visualizations of all time series' slices under this metric.
+- Update detection configuration to meet expected results.
+- Set up notification for detected anomalies.
++
+## View diagnostic insights
+
+After tuning the detection configuration, you should find that detected anomalies reflect actual anomalies in your data. Metrics Advisor performs analysis on multidimensional metrics to locate the root cause to a specific dimension. The service also performs cross-metrics analysis by using the metrics graph feature.
+
+To view diagnostic insights, select the red dots on time series visualizations. These red dots represent detected anomalies. A window will appear with a link to the incident analysis page.
++
+On the incident analysis page, you see a group of related anomalies and diagnostic insights. The following sections cover the major steps to diagnose an incident.
+
+### Check the summary of the current incident
+
+You can find the summary at the top of the incident analysis page. This summary includes basic information, actions and tracings, and an analyzed root cause. Basic information includes the top impacted series with a diagram, the impact start and end time, the severity, and the total anomalies included.
+
+The analyzed root cause is an automatically analyzed result. Metrics Advisor analyzes all anomalies that are captured on time series within one metric, with different dimension values, at the same timestamp. The service then performs correlation, clusters related anomalies together, and generates advice about a root cause.
++
+Based on these, you can already get a straightforward view of the current abnormal status, the impact of the incident, and the most likely root cause. You can then take immediate action to resolve the incident.
+
+### View cross-dimension diagnostic insights
+
+You can also get more detailed info on abnormal status on other dimensions within the same metric in a holistic way, by using the diagnostic tree feature.
+
+For metrics with multiple dimensions, Metrics Advisor categorizes the time series into a hierarchy (called a diagnostic tree). For example, a revenue metric is monitored by two dimensions: region and category. You need to have an aggregated dimension value, such as `SUM`. Then, the time series of `region = SUM` and `category = SUM` is categorized as the root node within the tree. Whenever there's an anomaly captured at the `SUM` dimension, you can analyze it to locate which specific dimension value has contributed the most to the parent node anomaly. Select each node to expand it for detailed information.
++
+### View cross-metrics diagnostic insights
+
+Sometimes, it's hard to analyze an issue by checking the abnormal status of a single metric, and you need to correlate multiple metrics together. To do this, configure a metrics graph, which indicates the relationships between metrics.
+
+By using the cross-dimension diagnostic result described in the previous section, you can identify that the root cause is limited to a specific dimension value. Then use a metrics graph to filter by the analyzed root cause dimension, to check the anomaly status on other metrics.
++
+You can also pivot across more diagnostic insights by using additional features. These features help you drill down on dimensions of anomalies, view similar anomalies, and compare across metrics. For more information, see [Diagnose an incident](../how-tos/diagnose-an-incident.md).
+
+## Get notified when new anomalies are found
+
+If you want to get alerted when an anomaly is detected in your data, you can create a subscription for one or more of your metrics. Metrics Advisor uses hooks to send alerts. Three types of hooks are supported: email hook, web hook, and Azure DevOps. We'll use web hook as an example.
+
+### Create a web hook
+
+In Metrics Advisor, you can use a web hook to surface an anomaly programmatically. The service calls a user-provided API when an alert is triggered. For more information, see [Create a hook](../how-tos/alerts.md#create-a-hook).
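+
+As an illustration only, the following minimal sketch (assuming the Flask library; the endpoint path is arbitrary, and the payload fields are assumptions based on the sample alert payload used in these tutorials) shows what such a user-provided API could look like:
+
+```python
+# A minimal sketch of a user-provided webhook receiver, assuming Flask.
+from flask import Flask, request, jsonify
+
+app = Flask(__name__)
+
+@app.route("/metrics-advisor/alert", methods=["POST"])
+def receive_alert():
+    payload = request.get_json(silent=True) or {}
+    # Each alert item typically carries hookId, alertType, alertInfo, and a
+    # callBackUrl that can be queried for the full alert details.
+    for item in payload.get("value", []):
+        alert_id = item.get("alertInfo", {}).get("alertId")
+        print("Alert received:", alert_id)
+    return jsonify({"status": "received"}), 200
+
+if __name__ == "__main__":
+    app.run(port=5000)
+```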
+
+### Configure alert settings
+
+After you create a hook, an alert setting determines how and which alert notifications should be sent. You can set multiple alert settings for each metric. Two important settings are **Alert for**, which specifies the anomalies to be included, and **Filter anomaly options**, which defines which anomalies to include in the alert. For more information, see [Add or edit alert settings](../how-tos/alerts.md#add-or-edit-alert-settings).
++
+## Next steps
+
+- [Add your metric data to Metrics Advisor](../how-tos/onboard-your-data.md)
+ - [Manage your data feeds](../how-tos/manage-data-feeds.md)
+ - [Connect different data sources](../data-feeds-from-different-sources.md)
+- [Use the client libraries or REST APIs to customize your solution](./rest-api-and-client-library.md)
+- [Configure metrics and fine-tune detection configuration](../how-tos/configure-metrics.md)
ai-services Enable Anomaly Notification https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/metrics-advisor/tutorials/enable-anomaly-notification.md
+
+ Title: Metrics Advisor anomaly notification e-mails with Azure Logic Apps
+description: Learn how to automate sending e-mail alerts in response to Metrics Advisor anomalies
+++++ Last updated : 05/20/2021++
+# Tutorial: Enable anomaly notification in Metrics Advisor
+
++
+
+In this tutorial, you learn how to:
+
+> [!div class="checklist"]
+> * Create a hook in Metrics Advisor
+> * Send Notifications with Azure Logic Apps
+> * Send Notifications to Microsoft Teams
+> * Send Notifications via SMTP server
+
+
+## Prerequisites
+### Create a Metrics Advisor resource
+
+To explore capabilities of Metrics Advisor, you may need to <a href="https://go.microsoft.com/fwlink/?linkid=2142156" title="Create a Metrics Advisor resource" target="_blank">create a Metrics Advisor resource </a> in the Azure portal to deploy your Metrics Advisor instance.
+
+### Create a hook in Metrics Advisor
+A hook in Metrics Advisor is a bridge that enables customers to subscribe to metric anomalies and send notifications through different channels. There are four types of hooks in Metrics Advisor:
+
+- Email hook
+- Webhook
+- Teams hook
+- Azure DevOps hook
+
+Each hook type corresponds to a specific channel through which anomaly notifications are sent.
+
+
+## Send notifications with Azure Logic Apps, Teams, and SMTP
+
+#### [Logic Apps](#tab/logic)
+
+### Send email notification by using Azure Logic Apps
+
+<!-- Introduction paragraph -->
+There are two common options for sending email notifications that are supported in Metrics Advisor. One is to use webhooks and Azure Logic Apps to send email alerts; the other is to set up an SMTP server and use it to send email alerts directly. This section focuses on the first option, which is easier for customers who don't have an available SMTP server.
+
+**Step 1.** Create a webhook in Metrics Advisor
+
+A webhook is the entry point for all the information available from the Metrics Advisor service, and calls a user-provided API when an alert is triggered. All alerts can be sent through a webhook.
+
+Select the **Hooks** tab in your Metrics Advisor workspace, and select the **Create hook** button. Choose a hook type of **web hook**. Fill in the required parameters and select **OK**. For detailed steps, refer to [create a webhook](../how-tos/alerts.md#web-hook).
+
+There's one extra parameter, **Endpoint**, that needs to be filled in. You can complete it after finishing Step 3 below.
++
+**Step 2.** Create a Consumption logic app resource
+
+In the [Azure portal](https://portal.azure.com), create a Consumption logic app resource with a blank workflow by following the instructions in [Create an example Consumption logic app workflow](../../../logic-apps/quickstart-create-example-consumption-workflow.md). When the workflow designer opens, return to this tutorial.
++
+**Step 3.** Add a trigger of **When an HTTP request is received**
+
+- Azure Logic Apps uses various triggers to start defined workflows. For this use case, it uses the trigger named **When an HTTP request is received**.
+
+- In the dialog for **When an HTTP request is received**, select **Use sample payload to generate schema**.
+
+ ![Screenshot that shows the When an HTTP request dialog box and the Use sample payload to generate schema option selected.](../media/tutorial/logic-apps-generate-schema.png)
+
+ Copy the following sample JSON into the textbox and select **Done**.
+
+ ```json
+ {
+ "properties": {
+ "value": {
+ "items": {
+ "properties": {
+ "alertInfo": {
+ "properties": {
+ "alertId": {
+ "type": "string"
+ },
+ "anomalyAlertingConfigurationId": {
+ "type": "string"
+ },
+ "createdTime": {
+ "type": "string"
+ },
+ "modifiedTime": {
+ "type": "string"
+ },
+ "timestamp": {
+ "type": "string"
+ }
+ },
+ "type": "object"
+ },
+ "alertType": {
+ "type": "string"
+ },
+ "callBackUrl": {
+ "type": "string"
+ },
+ "hookId": {
+ "type": "string"
+ }
+ },
+ "required": [
+ "hookId",
+ "alertType",
+ "alertInfo",
+ "callBackUrl"
+ ],
+ "type": "object"
+ },
+ "type": "array"
+ }
+ },
+ "type": "object"
+ }
+ ```
+
+- Choose 'POST' as the method and select **Save**. You can now see the URL of your HTTP request trigger. Select the copy icon to copy it, then paste it into the **Endpoint** field from Step 1.
+
+ ![Screenshot that highlights the copy icon to copy the URL of your HTTP request trigger.](../media/tutorial/logic-apps-copy-url.png)
+
+**Step 4.** Add a next step using 'HTTP' action
+
+Signals pushed through the webhook contain only limited information, such as the timestamp, alert ID, and configuration ID. Detailed information needs to be queried by using the callback URL provided in the signal. This step queries the detailed alert info. (A sketch of the equivalent HTTP call appears after the following list.)
+
+- Choose 'GET' as the method.
+- In 'URI', select 'callBackURL' from the 'Dynamic content' list.
+- In 'Headers', enter a key of 'Content-Type' with a value of 'application/json'.
+- In 'Headers', enter a key of 'x-api-key'. To get the value, select the **'API keys'** tab in your Metrics Advisor workspace. This step ensures that the workflow has sufficient permissions for API calls.
+
+ ![Screenshot that highlights the api-keys](../media/tutorial/logic-apps-api-key.png)
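+
+For reference, the equivalent call outside of Logic Apps can be sketched as follows (a sample only, assuming the Python `requests` library; the placeholder values are ones you supply):
+
+```python
+# Query detailed alert info from the callback URL carried in the webhook signal.
+import requests
+
+call_back_url = "<callBackUrl from the alert signal>"
+headers = {
+    "Content-Type": "application/json",
+    "x-api-key": "<API key from the 'API keys' tab in your Metrics Advisor workspace>",
+}
+
+response = requests.get(call_back_url, headers=headers)
+response.raise_for_status()
+alert_details = response.json()  # detailed alert info used in the later steps
+print(alert_details)
+```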
+
+**Step 5.** Add a next step to 'parse JSON'
+
+You need to parse the response of the API for easier formatting of email content.
+
+> [!NOTE]
+> This tutorial only shares a quick example; the final email format needs to be designed further.
+
+- In 'Content', select 'Body' from the 'Dynamic content' list.
+- Select **Use sample payload to generate schema**. Copy the following sample JSON into the textbox and select **Done**.
+
+```json
+{
+ "properties": {
+ "@@nextLink": {},
+ "value": {
+ "items": {
+ "properties": {
+ "properties": {
+ "properties": {
+ "IncidentSeverity": {
+ "type": "string"
+ },
+ "IncidentStatus": {
+ "type": "string"
+ }
+ },
+ "type": "object"
+ },
+ "rootNode": {
+ "properties": {
+ "createdTime": {
+ "type": "string"
+ },
+ "detectConfigGuid": {
+ "type": "string"
+ },
+ "dimensions": {
+ "properties": {
+ },
+ "type": "object"
+ },
+ "metricGuid": {
+ "type": "string"
+ },
+ "modifiedTime": {
+ "type": "string"
+ },
+ "properties": {
+ "properties": {
+ "AnomalySeverity": {
+ "type": "string"
+ },
+ "ExpectedValue": {}
+ },
+ "type": "object"
+ },
+ "seriesId": {
+ "type": "string"
+ },
+ "timestamp": {
+ "type": "string"
+ },
+ "value": {
+ "type": "number"
+ }
+ },
+ "type": "object"
+ }
+ },
+ "required": [
+ "rootNode",
+ "properties"
+ ],
+ "type": "object"
+ },
+ "type": "array"
+ }
+ },
+ "type": "object"
+}
+```
+
+**Step 6.** Add a next step to 'create HTML table'
+
+A large amount of information is returned from the API call; however, depending on your scenario, not all of it may be useful. Choose the items that you care about and want included in the alert email.
+
+Below is an example of an HTML table that chooses 'timestamp', 'metricGUID' and 'dimension' to be included in the alert email.
+
+![Screenshot of html table example](../media/tutorial/logic-apps-html-table.png)
+
+**Step 7.** Add the final step to 'send an email'
+
+There are several options for sending email, including both Microsoft-hosted and third-party offerings. You may need a tenant or account for the option you choose. For example, when you choose 'Office 365 Outlook' as the server, a sign-in process is prompted to establish the connection and authorization. An API connection is then created so that the email server can be used to send the alert.
+
+Fill in the content that you'd like to include in 'Body', fill in the email subject in 'Subject', and fill in an email address in 'To'.
+
+![Screenshot of send an email](../media/tutorial/logic-apps-send-email.png)
+
+#### [Teams Channel](#tab/teams)
+
+### Send anomaly notification through a Microsoft Teams channel
+This section walks through sending anomaly notifications through a Microsoft Teams channel. This helps in scenarios where team members are collaborating on analyzing anomalies that Metrics Advisor detects. The workflow is easy to configure and doesn't have many prerequisites.
+
+**Step 1.** Add an 'Incoming Webhook' connector to your Teams channel
+
+- Navigate to the Teams channel that you'd like to send notifications to, and select '•••' (More options).
+- In the dropdown list, select 'Connectors'. In the new dialog, search for 'Incoming Webhook' and select 'Add'.
+
+ ![Screenshot to create an incoming webhook](../media/tutorial/add-webhook.png)
+
+- If you can't see the 'Connectors' option, contact your Teams group owners. Select 'Manage team', then select the 'Settings' tab at the top and check whether the 'Allow members to create, update and remove connectors' setting is enabled.
+
+ ![Screenshot to check teams settings](../media/tutorial/teams-settings.png)
+
+- Input a name for the connector; you can also upload an image to use as its avatar. Select 'Create', and the Incoming Webhook connector is added to your channel. A URL is generated at the bottom of the dialog. **Be sure to select 'Copy'**, then select 'Done'. (An optional test of the copied URL is sketched after the image below.)
+
+ ![Screenshot to copy URL](../media/tutorial/webhook-url.png)
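+
+- Optionally, before wiring the connector into Metrics Advisor, you can verify that it works by posting a test message to the copied URL. The following is a sketch only, assuming the Python `requests` library:
+
+  ```python
+  # Post a simple test message to the Incoming Webhook URL copied above.
+  import requests
+
+  webhook_url = "<URL copied from the Incoming Webhook connector>"
+  test_message = {"text": "Test message for the Metrics Advisor Teams hook."}
+
+  response = requests.post(webhook_url, json=test_message)
+  response.raise_for_status()  # success means the channel received the message
+  ```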
+
+**Step 2.** Create a new 'Teams hook' in Metrics Advisor
+
+- Select 'Hooks' tab in left navigation bar, and select the 'Create hook' button at top right of the page.
+- Choose hook type of 'Teams', then input a name and paste the URL that you copied from the above step.
+- Select 'Save'.
+
+ ![Screenshot to create a Teams hook](../media/tutorial/teams-hook.png)
+
+**Step 3.** Apply the Teams hook to an alert configuration
+
+Go and select one of the data feeds that you have onboarded. Select a metric within the feed and open the metric detail page. You can create an 'alerting configuration' to subscribe to detected anomalies and be notified through a Teams channel.
+
+Select the '+' button, choose the hook that you created, fill in the other fields, and select 'Save'. The Teams hook is now applied to an alert configuration, and any new anomalies will be notified through the Teams channel.
+
+![Screenshot that applies a Teams hook to an alert configuration](../media/tutorial/teams-hook-in-alert.png)
++
+#### [SMTP E-mail](#tab/smtp)
+
+### Send email notification by configuring an SMTP server
+
+This section shows how to use an SMTP server to send email notifications for detected anomalies. Make sure you have a usable SMTP server and sufficient permissions to get parameters like the account name and password.
+
+**Step 1.** Assign your account as the 'Cognitive Service Metrics Advisor Administrator' role
+
+- A user with subscription administrator or resource group administrator privileges needs to navigate to the Metrics Advisor resource that was created in the Azure portal, and select the Access control (IAM) tab.
+- Select 'Add role assignments'.
+- Pick the 'Azure AI Metrics Advisor Administrator' role, and select your account as shown in the image below.
+- Select the 'Save' button. You're now added as an administrator of the Metrics Advisor resource. All the above actions need to be performed by a subscription administrator or resource group administrator. It might take up to one minute for the permissions to propagate.
+
+![Screenshot that shows how to assign admin role to a specific role](../media/tutorial/access-control.png)
+
+**Step 2.** Configure SMTP server in Metrics Advisor workspace
+
+After you've completed the above steps and have been successfully added as an administrator of the Metrics Advisor resource, wait several minutes for the permissions to propagate. Then sign in to your Metrics Advisor workspace. You should be able to see a new tab named 'Email setting' on the left navigation panel. Select it to continue configuration.
+
+Parameters to be filled out:
+
+- SMTP server name (**required**): Fill in the name of your SMTP server provider. Most server names are written in the form "smtp.domain.com" or "mail.domain.com". For example, for Office 365 it should be set to 'smtp.office365.com'.
+- SMTP server port (**required**): Port 587 is the default port for SMTP submission on the modern web. While other ports can be used for submission, you should start with port 587 and only use a different port if circumstances dictate (for example, your host blocks port 587).
+- Email sender(s) (**required**): This is the actual email account that sends the emails. You may need to fill in the account name and password of the sender. You can set a quota threshold for the maximum number of alert emails to be sent within one minute for one account. You can set multiple senders if there's a possibility of a large volume of alerts being sent within one minute, but at least one account must be set.
+- Send on behalf of (optional): If you have multiple senders configured but want alert emails to appear to be sent from one account, you can use this field to align them. Note that you may need to grant the senders permission to send emails on behalf of that account.
+- Default CC (optional): Sets a default email address that will be cc'd in all email alerts.
+
+Below is an example of a configured SMTP server:
+
+![Screenshot that shows an example of a configured SMTP server](../media/tutorial/email-setting.png)
+
+**Step 3.** Create an email hook in Metrics Advisor
+
+After successfully configuring an SMTP server, you're set to create an 'email hook' in the 'Hooks' tab in Metrics Advisor. For more about creating an 'email hook', refer to [article on alerts](../how-tos/alerts.md#email-hook) and follow the steps to completion.
+
+**Step 4.** Apply the email hook to an alert configuration
+
+ Go and select one of the data feeds that you onboarded. Select a metric within the feed and open the metric detail page. You can create an 'alerting configuration' to subscribe to detected anomalies and have notifications sent through email.
+
+Select the '+' button, choose the hook that you created, fill in the other fields, and select 'Save'. You have now successfully set up an email hook with a custom alert configuration, and any new anomalies will be escalated through the hook by using the SMTP server.
+
+![Screenshot that applies an email hook to an alert configuration](../media/tutorial/apply-hook.png)
+++
+## Next steps
+
+Advance to the next article to learn how to write a valid query for data ingestion.
+> [!div class="nextstepaction"]
+> [Write a valid query](write-a-valid-query.md)
+
ai-services Write A Valid Query https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/metrics-advisor/tutorials/write-a-valid-query.md
+
+ Title: Write a query for Metrics Advisor data ingestion
+description: Learn how to onboard your data to Metrics Advisor.
+++++ Last updated : 05/20/2021 ++
+
+# Tutorial: Write a valid query to onboard metrics data
+
++
+
+In this tutorial, you learn how to:
+
+> [!div class="checklist"]
+> * Write a valid data onboarding query
+> * Avoid common errors during onboarding
+
+
+## Prerequisites
+
+### Create a Metrics Advisor resource
+
+To explore capabilities of Metrics Advisor, you may need to <a href="https://go.microsoft.com/fwlink/?linkid=2142156" title="Create a Metrics Advisor resource" target="_blank">create a Metrics Advisor resource </a> in the Azure portal to deploy your Metrics Advisor instance.
+
+
+## Data schema requirements
+++
+## <span id="ingestion-work">How does data ingestion work in Metrics Advisor?</span>
+
+When onboarding your metrics to Metrics Advisor, there are generally two ways:
+- Pre-aggregate your metrics into the expected schema and store data into certain files. Fill in the path template during onboarding, and Metrics Advisor will continuously grab new files from the path and perform detection on the metrics. This is a common practice for a data source like Azure Data Lake and Azure Blob Storage.
+- If you're ingesting data from sources like Azure SQL Server, Azure Data Explorer, or other sources that support using a query script, you need to make sure that you're properly constructing the query. This article teaches you how to write a valid query to onboard metric data as expected.
++
+### What is an interval?
+
+Metrics need to be monitored at a certain granularity according to business requirements. For example, business Key Performance Indicators (KPIs) are monitored at daily granularity. However, service performance metrics are often monitored at minute or hourly granularity. So the frequency of collecting metric data from sources differs.
+
+Metrics Advisor continuously grabs metrics data at each time interval; **the interval is equal to the granularity of the metrics.** Each time Metrics Advisor runs the query you have written, it ingests data for that specific interval. Based on this data ingestion mechanism, the query script **should not return all metric data that exists in the database, but needs to limit the result to a single interval.**
+
+![Illustration that describes what is an interval](../media/tutorial/what-is-interval.png)
+
+## How to write a valid query?
+### <span id="use-parameters"> Use @IntervalStart and @IntervalEnd to limit query results</span>
+
+To help achieve this, two parameters are provided to use within the query: **@IntervalStart** and **@IntervalEnd**.
+
+Each time the query runs, @IntervalStart and @IntervalEnd are automatically updated to the latest interval timestamp, and the corresponding metrics data is retrieved. @IntervalEnd is always assigned as @IntervalStart plus one granularity unit.
+
+Here's an example of proper use of these two parameters with Azure SQL Server:
+
+```SQL
+SELECT [timestampColumnName] AS timestamp, [dimensionColumnName], [metricColumnName] FROM [sampleTable] WHERE [timestampColumnName] >= @IntervalStart and [timestampColumnName] < @IntervalEnd;
+```
+
+When you write the query script this way, the timestamps of the metrics in each query result fall within the same interval. Metrics Advisor automatically aligns the timestamps with the metrics' granularity.
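+
+To make the mechanics concrete, here's an illustration (a sketch only, not service code) of how the two parameters advance for a daily-granularity metric; each ingestion run covers exactly one interval:
+
+```python
+# Illustration of how @IntervalStart and @IntervalEnd advance per ingestion run.
+from datetime import datetime, timedelta
+
+granularity = timedelta(days=1)           # assume a daily-granularity metric
+interval_start = datetime(2023, 7, 1)     # example starting interval
+
+for _ in range(3):
+    # @IntervalEnd is always @IntervalStart plus one granularity unit.
+    interval_end = interval_start + granularity
+    print(f"@IntervalStart = {interval_start:%Y-%m-%dT%H:%M:%SZ}, "
+          f"@IntervalEnd = {interval_end:%Y-%m-%dT%H:%M:%SZ}")
+    interval_start = interval_end         # the next run moves to the next interval
+```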
+
+### <span id="use-aggregation"> Use aggregation functions to aggregate metrics</span>
+
+It's common for customer data sources to contain many columns; however, not all of them make sense to monitor or to include as a dimension. Customers can use aggregation functions to aggregate metrics and only include meaningful columns as dimensions.
+
+Below is an example where there are more than 10 columns in a customer's data source, but only a few of them are meaningful and need to be included and aggregated into a metric to be monitored.
+
+| TS | Market | Device OS | Category | ... | Measure1 | Measure2 | Measure3 |
+| -|--|--|-|--|-|-|-|
+| 2020-09-18T12:23:22Z | New York | iOS | Sunglasses | ...| 43242 | 322 | 54546|
+| 2020-09-18T12:27:34Z | Beijing | Android | Bags | ...| 3333 | 126 | 67677 |
+| ...
+
+If a customer would like to monitor **'Measure1'** at **hourly granularity** and choose **'Market'** and **'Category'** as dimensions, below are examples of how to properly make use of the aggregation functions to achieve this:
+
+- SQL sample:
+
+ ```sql
+ SELECT dateadd(hour, datediff(hour, 0, TS),0) as NewTS
+ ,Market
+ ,Category
+ ,sum(Measure1) as M1
+ FROM [dbo].[SampleTable] where TS >= @IntervalStart and TS < @IntervalEnd
+ group by Market, Category, dateadd(hour, datediff(hour, 0, TS),0)
+ ```
+- Azure Data Explorer sample:
+
+ ```kusto
+ SampleTable
+ | where TS >= @IntervalStart and TS < @IntervalEnd
+ | summarize M1 = sum(Measure1) by Market, Category, NewTS = startofhour(TS)
+ ```
+
+> [!Note]
+> In the above case, the customer would like to monitor metrics at an hourly granularity, but the raw timestamp (TS) isn't aligned to the hour. Within the aggregation statement, **the timestamp must be processed** to align it to the hour and generate a new timestamp column named 'NewTS'.
++
+## Common errors during onboarding
+
+- **Error:** Multiple timestamp values are found in query results
+
+ This is a common error that occurs if you haven't limited query results to one interval. For example, if you're monitoring a metric at a daily granularity, you'll get this error if your query returns results like this:
+
+ ![Screenshot that shows multiple timestamp values returned](../media/tutorial/multiple-timestamps.png)
+
+ There are multiple timestamp values, and they're not in the same metrics interval (one day). Check [How does data ingestion work in Metrics Advisor?](#ingestion-work) and understand that Metrics Advisor grabs metrics data at each metrics interval. Then make sure to use **@IntervalStart** and **@IntervalEnd** in your query to limit results within one interval. Check [Use @IntervalStart and @IntervalEnd to limit query results](#use-parameters) for detailed guidance and samples.
++
+- **Error:** Duplicate metric values are found on the same dimension combination within one metric interval
+
+ Within one interval, Metrics Advisor expects only one metric value for the same dimension combination. For example, if you're monitoring a metric at a daily granularity, you'll get this error if your query returns results like this:
+
+ ![Screenshot that shows duplicate values returned](../media/tutorial/duplicate-values.png)
+
+ Refer to [Use aggregation functions to aggregate metrics](#use-aggregation) for detailed guidance and samples.
+
+
+## Next steps
+
+Advance to the next article to learn how to enable anomaly notifications.
+> [!div class="nextstepaction"]
+> [Enable anomaly notifications](enable-anomaly-notification.md)
+
ai-services Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/metrics-advisor/whats-new.md
+
+ Title: Metrics Advisor what's new
+
+description: Learn about what is new with Metrics Advisor
+++++ Last updated : 12/16/2022+++
+# Metrics Advisor: what's new in the docs
+
+Welcome! This page covers what's new in the Metrics Advisor docs. Check back every month for information on service changes, doc additions, and updates.
+
+## December 2022
+
+**Metrics Advisor new portal experience** has been released in preview! [Metrics Advisor Studio](https://metricsadvisor.appliedai.azure.com/) is designed to align the look and feel with the Azure AI services studio experiences with better usability and significant accessibility improvements. Both the classic portal and the new portal will be available for a period of time, until we officially retire the classic portal.
+
+We **strongly encourage** you to try out the new experience and share your feedback through the smiley face on the top banner.
+
+## May 2022
+
+**Detection configuration auto-tuning** has been released. This feature enables you to customize the service to better surface and personalize anomalies. Instead of the traditional way of setting configurations for each time series or a group of time series, a guided experience captures your detection preferences, such as the level of sensitivity and the types of anomaly patterns. This allows you to tailor the model to your own needs on the back end. Those preferences can then be applied to all the time series you're monitoring, which lets you reduce configuration costs while achieving better detection results.
+
+Check out [this article](how-tos/configure-metrics.md#tune-the-detection-configuration) to learn how to take advantage of the new feature.
+
+## SDK updates
+
+If you want to learn about the latest updates to Metrics Advisor client SDKs see:
+
+* [.NET SDK change log](https://github.com/Azure/azure-sdk-for-net/blob/master/sdk/metricsadvisor/Azure.AI.MetricsAdvisor/CHANGELOG.md)
+* [Java SDK change log](https://github.com/Azure/azure-sdk-for-jav)
+* [Python SDK change log](https://github.com/Azure/azure-sdk-for-python/blob/master/sdk/metricsadvisor/azure-ai-metricsadvisor/CHANGELOG.md)
+* [JavaScript SDK change log](https://github.com/Azure/azure-sdk-for-js/blob/master/sdk/metricsadvisor/ai-metrics-advisor/CHANGELOG.md)
+
+## June 2021
+
+### New articles
+
+* [Tutorial: Write a valid query to onboard metrics data](tutorials/write-a-valid-query.md)
+* [Tutorial: Enable anomaly notification in Metrics Advisor](tutorials/enable-anomaly-notification.md)
+
+### Updated articles
+
+* [Updated metrics onboarding flow](how-tos/onboard-your-data.md)
+* [Enriched guidance when adding data feeds from different sources](data-feeds-from-different-sources.md)
+* [Updated new notification channel using Microsoft Teams](how-tos/alerts.md#teams-hook)
+* [Updated incident diagnostic experience](how-tos/diagnose-an-incident.md)
+
+## October 2020
+
+### New articles
+
+* [Quickstarts for Metrics Advisor client SDKs for .NET, Java, Python, and JavaScript](quickstarts/rest-api-and-client-library.md)
+
+### Updated articles
+
+* [Update on how Metric Advisor builds an incident tree for multi-dimensional metrics](./faq.yml)
ai-services Multi Service Resource https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/multi-service-resource.md
+
+ Title: Create a multi-service resource for Azure AI services
+
+description: Create and manage a multi-service resource for Azure AI services
+
+keywords: Azure AI services, cognitive
++++ Last updated : 7/18/2023+
+zone_pivot_groups: programming-languages-portal-cli-sdk
++
+# Quickstart: Create a multi-service resource for Azure AI services
+
+Use this quickstart to create and manage a multi-service resource for Azure AI services. A multi-service resource allows you to access multiple Azure AI services with a single key and endpoint. It also consolidates billing from the services you use.
+
+You can access Azure AI services through two different kinds of resource: a multi-service resource or a single-service resource.
+
+* Multi-service resource:
+ * Access multiple Azure AI services with a single key and endpoint.
+ * Consolidates billing from the services you use.
+* Single-service resource:
+ * Access a single Azure AI service with a unique key and endpoint for each service created.
+ * Most Azure AI services offer a free tier that you can use to try them out.
+
+Azure AI services are represented by Azure [resources](../azure-resource-manager/management/manage-resources-portal.md) that you create under your Azure subscription. After you create a resource, you can use the keys and endpoint generated to authenticate your applications.
+++++++++++++++++++
+## Next steps
+
+* Explore [Azure AI services](./what-are-ai-services.md) and choose a service to get started.
ai-services Chatgpt Quickstart https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/openai/chatgpt-quickstart.md
+
+ Title: 'Quickstart - Get started using GPT-35-Turbo and GPT-4 with Azure OpenAI Service'
+
+description: Walkthrough on how to get started with GPT-35-Turbo and GPT-4 on Azure OpenAI Service.
++++++++ Last updated : 05/23/2023
+zone_pivot_groups: openai-quickstart-new
+recommendations: false
++
+# Quickstart: Get started using GPT-35-Turbo and GPT-4 with Azure OpenAI Service
+
+Use this article to get started using Azure OpenAI.
++++++++++++++++++
ai-services Abuse Monitoring https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/openai/concepts/abuse-monitoring.md
+
+ Title: Azure OpenAI Service abuse monitoring
+
+description: Learn about the abuse monitoring capabilities of Azure OpenAI Service
+++++ Last updated : 06/16/2023++
+keywords:
++
+# Abuse Monitoring
+
+Azure OpenAI Service detects and mitigates instances of recurring content and/or behaviors that suggest use of the service in a manner that may violate the [Code of Conduct](/legal/cognitive-services/openai/code-of-conduct?context=/azure/ai-services/openai/context/context) or other applicable product terms. Details on how data is handled can be found on the [Data, Privacy and Security page](/legal/cognitive-services/openai/data-privacy?context=/azure/ai-services/openai/context/context).
+
+## Components of abuse monitoring
+
+There are several components to abuse monitoring:
+
+- **Content Classification**: Classifier models detect harmful language and/or images in user prompts (inputs) and completions (outputs). The system looks for categories of harms as defined in the [Content Requirements](/legal/cognitive-services/openai/code-of-conduct?context=/azure/ai-services/openai/context/context), and assigns severity levels as described in more detail on the [Content Filtering page](content-filter.md).
+
+- **Abuse Pattern Capture**: Azure OpenAI Service's abuse monitoring looks at customer usage patterns and employs algorithms and heuristics to detect indicators of potential abuse. Detected patterns consider, for example, the frequency and severity at which harmful content is detected in a customer's prompts and completions.
+
+- **Human Review and Decision**: When prompts and/or completions are flagged through content classification and abuse pattern capture as described above, authorized Microsoft employees may assess the flagged content, and either confirm or correct the classification or determination based on predefined guidelines and policies. Data can be accessed for human review <u>only</u> by authorized Microsoft employees via Secure Access Workstations (SAWs) with Just-In-Time (JIT) request approval granted by team managers. For Azure OpenAI Service resources deployed in the European Economic Area, the authorized Microsoft employees are located in the European Economic Area.
+
+- **Notification and Action**: When a threshold of abusive behavior has been confirmed based on the preceding three steps, the customer is informed of the determination by email. Except in cases of severe or recurring abuse, customers typically are given an opportunity to explain or remediate the abusive behavior, and to implement mechanisms to prevent its recurrence. Failure to address the behavior, or recurring or severe abuse, may result in suspension or termination of the customer's access to Azure OpenAI resources and/or capabilities.
+
+## Next steps
+
+- Learn more about the [underlying models that power Azure OpenAI](../concepts/models.md).
+- Learn more about understanding and mitigating risks associated with your application: [Overview of Responsible AI practices for Azure OpenAI models](/legal/cognitive-services/openai/overview?context=/azure/ai-services/openai/context/context).
+- Learn more about how data is processed in connection with content filtering and abuse monitoring: [Data, privacy, and security for Azure OpenAI Service](/legal/cognitive-services/openai/data-privacy?context=/azure/ai-services/openai/context/context#preventing-abuse-and-harmful-content-generation).
ai-services Advanced Prompt Engineering https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/openai/concepts/advanced-prompt-engineering.md
+
+ Title: Prompt engineering techniques with Azure OpenAI
+
+description: Learn about the options for how to use prompt engineering with GPT-3, GPT-35-Turbo, and GPT-4 models
+++++ Last updated : 04/20/2023+
+keywords: ChatGPT, GPT-4, prompt engineering, meta prompts, chain of thought
+zone_pivot_groups: openai-prompt
++
+# Prompt engineering techniques
+
+This guide will walk you through some advanced techniques in prompt design and prompt engineering. If you're new to prompt engineering, we recommend starting with our [introduction to prompt engineering guide](prompt-engineering.md).
+
+While the principles of prompt engineering can be generalized across many different model types, certain models expect a specialized prompt structure. For Azure OpenAI GPT models, there are currently two distinct APIs where prompt engineering comes into play:
+
+- Chat Completion API.
+- Completion API.
+
+Each API requires input data to be formatted differently, which in turn impacts overall prompt design. The **Chat Completion API** supports the GPT-35-Turbo and GPT-4 models. These models are designed to take input formatted in a [specific chat-like transcript](../how-to/chatgpt.md) stored inside an array of dictionaries.
+
+The **Completion API** supports the older GPT-3 models and has much more flexible input requirements in that it takes a string of text with no specific format rules. Technically the GPT-35-Turbo models can be used with either API, but we strongly recommend using the Chat Completion API for these models. To learn more, consult our [in-depth guide on using these APIs](../how-to/chatgpt.md).
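+
+The following minimal sketch illustrates the difference in input formats, assuming the openai Python library's Azure support (in preview) and placeholder deployment names and API version:
+
+```python
+# Contrast of the two prompt formats. Deployment names and the API version
+# shown here are placeholders/assumptions; substitute your own values.
+import os
+import openai
+
+openai.api_type = "azure"
+openai.api_base = os.getenv("AZURE_OPENAI_ENDPOINT")
+openai.api_version = "2023-05-15"
+openai.api_key = os.getenv("AZURE_OPENAI_KEY")
+
+# Chat Completion API: the prompt is an array of role/content dictionaries.
+chat_response = openai.ChatCompletion.create(
+    engine="<your GPT-35-Turbo or GPT-4 deployment name>",
+    messages=[
+        {"role": "system", "content": "You are a helpful assistant."},
+        {"role": "user", "content": "Summarize prompt engineering in one sentence."},
+    ],
+)
+
+# Completion API: the prompt is a single free-form text string.
+completion_response = openai.Completion.create(
+    engine="<your GPT-3 series deployment name>",
+    prompt="Summarize prompt engineering in one sentence.",
+)
+```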
+
+The techniques in this guide will teach you strategies for increasing the accuracy and grounding of responses you generate with a Large Language Model (LLM). It is, however, important to remember that even when you use prompt engineering effectively, you still need to validate the responses the models generate. Just because a carefully crafted prompt worked well for a particular scenario doesn't necessarily mean it will generalize more broadly to other use cases. Understanding the [limitations of LLMs](/legal/cognitive-services/openai/transparency-note?context=/azure/ai-services/openai/context/context#limitations) is just as important as understanding how to leverage their strengths.
++++++
ai-services Content Filter https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/openai/concepts/content-filter.md
+
+ Title: Azure OpenAI Service content filtering
+
+description: Learn about the content filtering capabilities of Azure OpenAI in Azure AI services
+++++ Last updated : 06/08/2023++
+keywords:
++
+# Content filtering
+
+Azure OpenAI Service includes a content filtering system that works alongside core models. This system works by running both the prompt and completion through an ensemble of classification models aimed at detecting and preventing the output of harmful content. The content filtering system detects and takes action on specific categories of potentially harmful content in both input prompts and output completions. Variations in API configurations and application design may affect completions and thus filtering behavior. The content filtering system supports the following languages: English, German, Japanese, Spanish, French, Italian, Portuguese, and Chinese. It might not be able to detect inappropriate content in languages that it has not been trained or tested to process.
+
+In addition to the content filtering system, the Azure OpenAI Service performs monitoring to detect content and/or behaviors that suggest use of the service in a manner that may violate applicable product terms. For more information about understanding and mitigating risks associated with your application, see the [Transparency Note for Azure OpenAI](/legal/cognitive-services/openai/transparency-note?tabs=text). For more information about how data is processed in connection with content filtering and abuse monitoring, see [Data, privacy, and security for Azure OpenAI Service](/legal/cognitive-services/openai/data-privacy?context=/azure/ai-services/openai/context/context#preventing-abuse-and-harmful-content-generation).
+
+The following sections provide information about the content filtering categories, the filtering severity levels and their configurability, and API scenarios to be considered in application design and implementation.
+
+## Content filtering categories
+
+The content filtering system integrated in the Azure OpenAI Service contains neural multi-class classification models aimed at detecting and filtering harmful content; the models cover four categories (hate, sexual, violence, and self-harm) across four severity levels (safe, low, medium, and high). Content detected at the 'safe' severity level is labeled in annotations but is not subject to filtering and is not configurable.
+
+### Categories
+
+|Category|Description|
+|--|--|
+| Hate |The hate category describes language attacks or uses that include pejorative or discriminatory language with reference to a person or identity group on the basis of certain differentiating attributes of these groups including but not limited to race, ethnicity, nationality, gender identity and expression, sexual orientation, religion, immigration status, ability status, personal appearance, and body size. |
+| Sexual | The sexual category describes language related to anatomical organs and genitals, romantic relationships, acts portrayed in erotic or affectionate terms, physical sexual acts, including those portrayed as an assault or a forced sexual violent act against one's will, prostitution, pornography, and abuse. |
+| Violence | The violence category describes language related to physical actions intended to hurt, injure, damage, or kill someone or something; describes weapons, etc. |
+| Self-Harm | The self-harm category describes language related to physical actions intended to purposely hurt, injure, or damage one's body, or kill oneself.|
+
+### Severity levels
+
+|Category|Description|
+|--|--|
+|Safe | Content may be related to violence, self-harm, sexual, or hate categories but the terms are used in general, journalistic, scientific, medical, and similar professional contexts, which are appropriate for most audiences. |
+|Low | Content that expresses prejudiced, judgmental, or opinionated views, includes offensive use of language, stereotyping, use cases exploring a fictional world (for example, gaming, literature) and depictions at low intensity.|
+| Medium | Content that uses offensive, insulting, mocking, intimidating, or demeaning language towards specific identity groups, includes depictions of seeking and executing harmful instructions, fantasies, glorification, promotion of harm at medium intensity. |
+|High | Content that displays explicit and severe harmful instructions, actions, damage, or abuse; includes endorsement, glorification, or promotion of severe harmful acts, extreme or illegal forms of harm, radicalization, or non-consensual power exchange or abuse.|
+
+## Configurability (preview)
+
+The default content filtering configuration is set to filter at the medium severity threshold for all four content harm categories for both prompts and completions. That means that content that is detected at severity level medium or high is filtered, while content detected at severity level low is not filtered by the content filters. The configurability feature is available in preview and allows customers to adjust the settings, separately for prompts and completions, to filter content for each content category at different severity levels as described in the table below:
+
+| Severity filtered | Configurable for prompts | Configurable for completions | Descriptions |
+|-|--||--|
+| Low, medium, high | Yes | Yes | Strictest filtering configuration. Content detected at severity levels low, medium and high is filtered.|
+| Medium, high | Yes | Yes | Default setting. Content detected at severity level low is not filtered, content at medium and high is filtered.|
+| High | If approved<sup>\*</sup>| If approved<sup>\*</sup> | Content detected at severity levels low and medium is not filtered. Only content at severity level high is filtered. Requires approval<sup>\*</sup>.|
+| No filters | If approved<sup>\*</sup>| If approved<sup>\*</sup>| No content is filtered regardless of severity level detected. Requires approval<sup>\*</sup>.|
+
+<sup>\*</sup> Only customers who have been approved for modified content filtering have full content filtering control, including configuring content filters at severity level high only or turning content filters off. Apply for modified content filters via this form: [Azure OpenAI Limited Access Review: Modified Content Filters and Abuse Monitoring (microsoft.com)](https://customervoice.microsoft.com/Pages/ResponsePage.aspx?id=v4j5cvGGr0GRqy180BHbR7en2Ais5pxKtso_Pz4b1_xURE01NDY1OUhBRzQ3MkQxMUhZSE1ZUlJKTiQlQCN0PWcu)
+
+Content filtering configurations are created within a Resource in Azure AI Studio, and can be associated with Deployments. [Learn more about configurability here](../how-to/content-filters.md).
+
+ :::image type="content" source="../media/content-filters/configuration.png" alt-text="Screenshot of the content filter configuration UI" lightbox="../media/content-filters/configuration.png":::
+
+## Scenario details
+
+When the content filtering system detects harmful content, you'll receive either an error on the API call if the prompt was deemed inappropriate or the `finish_reason` on the response will be `content_filter` to signify that some of the completion was filtered. When building your application or system, you'll want to account for these scenarios where the content returned by the Completions API is filtered, which may result in content that is incomplete. How you act on this information will be application specific. The behavior can be summarized in the following points:
+
+- Prompts that are classified at a filtered category and severity level will return an HTTP 400 error.
+- Non-streaming completions calls won't return any content when the content is filtered. The `finish_reason` value will be set to content_filter. In rare cases with longer responses, a partial result can be returned. In these cases, the `finish_reason` will be updated.
+- For streaming completions calls, segments will be returned back to the user as they're completed. The service will continue streaming until either reaching a stop token, length, or when content that is classified at a filtered category and severity level is detected.
+
+### Scenario: You send a non-streaming completions call asking for multiple outputs; no content is classified at a filtered category and severity level
+
+The table below outlines the various ways content filtering can appear:
+
+| **HTTP response code** | **Response behavior** |
+|--|--|
+| 200 | In the cases when all generation passes the filters as configured, no content moderation details are added to the response. The `finish_reason` for each generation will be either stop or length. |
+
+**Example request payload:**
+
+```json
+{
+ "prompt":"Text example",
+ "n": 3,
+ "stream": false
+}
+```
+
+**Example response JSON:**
+
+```json
+{
+ "id": "example-id",
+ "object": "text_completion",
+ "created": 1653666286,
+ "model": "davinci",
+ "choices": [
+ {
+ "text": "Response generated text",
+ "index": 0,
+ "finish_reason": "stop",
+ "logprobs": null
+ }
+ ]
+}
+
+```
+
+### Scenario: Your API call asks for multiple responses (N>1) and at least 1 of the responses is filtered
+
+| **HTTP Response Code** | **Response behavior** |
+|--|--|
+| 200 | The generations that were filtered will have a `finish_reason` value of `content_filter`. |
+
+**Example request payload:**
+
+```json
+{
+ "prompt":"Text example",
+ "n": 3,
+ "stream": false
+}
+```
+
+**Example response JSON:**
+
+```json
+{
+ "id": "example",
+ "object": "text_completion",
+ "created": 1653666831,
+ "model": "ada",
+ "choices": [
+ {
+ "text": "returned text 1",
+ "index": 0,
+ "finish_reason": "length",
+ "logprobs": null
+ },
+ {
+ "text": "returned text 2",
+ "index": 1,
+ "finish_reason": "content_filter",
+ "logprobs": null
+ }
+ ]
+}
+```
+
+### Scenario: An inappropriate input prompt is sent to the completions API (either for streaming or non-streaming)
+
+| **HTTP Response Code** | **Response behavior** |
+|--|--|
+|400 |The API call will fail when the prompt triggers a content filter as configured. Modify the prompt and try again.|
+
+**Example request payload:**
+
+```json
+{
+ "prompt":"Content that triggered the filtering model"
+}
+```
+
+**Example response JSON:**
+
+```json
+"error": {
+ "message": "The response was filtered",
+ "type": null,
+ "param": "prompt",
+ "code": "content_filter",
+ "status": 400
+}
+```
+
+### Scenario: You make a streaming completions call; no output content is classified at a filtered category and severity level
+
+| **HTTP Response Code** | **Response behavior** |
+|--|--|
+|200|In this case, the call will stream back with the full generation and `finish_reason` will be either 'length' or 'stop' for each generated response.|
+
+**Example request payload:**
+
+```json
+{
+ "prompt":"Text example",
+ "n": 3,
+ "stream": true
+}
+```
+
+**Example response JSON:**
+
+```json
+{
+ "id": "cmpl-example",
+ "object": "text_completion",
+ "created": 1653670914,
+ "model": "ada",
+ "choices": [
+ {
+ "text": "last part of generation",
+ "index": 2,
+ "finish_reason": "stop",
+ "logprobs": null
+ }
+ ]
+}
+```
+
+### Scenario: You make a streaming completions call asking for multiple completions and at least a portion of the output content is filtered
+
+| **HTTP Response Code** | **Response behavior** |
+|--|--|
+| 200 | For a given generation index, the last chunk of the generation will include a non-null `finish_reason` value. The value will be `content_filter` when the generation was filtered.|
+
+**Example request payload:**
+
+```json
+{
+ "prompt":"Text example",
+ "n": 3,
+ "stream": true
+}
+```
+
+**Example response JSON:**
+
+```json
+ {
+ "id": "cmpl-example",
+ "object": "text_completion",
+ "created": 1653670515,
+ "model": "ada",
+ "choices": [
+ {
+ "text": "Last part of generated text streamed back",
+ "index": 2,
+ "finish_reason": "content_filter",
+ "logprobs": null
+ }
+ ]
+}
+```
+
+### Scenario: Content filtering system doesn't run on the completion
+
+| **HTTP Response Code** | **Response behavior** |
+|--|--|
+| 200 | If the content filtering system is down or otherwise unable to complete the operation in time, your request will still complete without content filtering. You can determine that the filtering wasn't applied by looking for an error message in the `content_filter_result` object.|
+
+**Example request payload:**
+
+```json
+{
+ "prompt":"Text example",
+ "n": 1,
+ "stream": false
+}
+```
+
+**Example response JSON:**
+
+```json
+{
+ "id": "cmpl-example",
+ "object": "text_completion",
+ "created": 1652294703,
+ "model": "ada",
+ "choices": [
+ {
+ "text": "generated text",
+ "index": 0,
+ "finish_reason": "length",
+ "logprobs": null,
+ "content_filter_result": {
+ "error": {
+ "code": "content_filter_error",
+ "message": "The contents are not filtered"
+ }
+ }
+ }
+ ]
+}
+```
+
+## Annotations (preview)
+
+When annotations are enabled as shown in the code snippet below, the following information is returned via the API: content filtering category (hate, sexual, violence, self-harm); within each content filtering category, the severity level (safe, low, medium or high); filtering status (true or false).
+
+Annotations are currently in preview for Completions and Chat Completions (GPT models); the following code snippet shows how to use annotations in preview:
+
+```python
+# Note: The openai-python library support for Azure OpenAI is in preview.
+# os.getenv() for the endpoint and key assumes that you are using environment variables.
+
+import os
+import openai
+openai.api_type = "azure"
+openai.api_base = os.getenv("AZURE_OPENAI_ENDPOINT")
+openai.api_version = "2023-06-01-preview" # API version required to test out Annotations preview
+openai.api_key = os.getenv("AZURE_OPENAI_KEY")
+
+response = openai.Completion.create(
+ engine="text-davinci-003", # engine = "deployment_name".
+ prompt="{Example prompt where a severity level of low is detected}"
+ # Content that is detected at severity level medium or high is filtered,
+ # while content detected at severity level low isn't filtered by the content filters.
+)
+
+print(response)
+
+```
+
+### Output
+
+```json
+{
+ "choices": [
+ {
+ "content_filter_results": {
+ "hate": {
+ "filtered": false,
+ "severity": "safe"
+ },
+ "self_harm": {
+ "filtered": false,
+ "severity": "safe"
+ },
+ "sexual": {
+ "filtered": false,
+ "severity": "safe"
+ },
+ "violence": {
+ "filtered": false,
+ "severity": "low"
+ }
+ },
+ "finish_reason": "length",
+ "index": 0,
+ "logprobs": null,
+ "text": {"\")(\"Example model response will be returned\").}"
+ }
+ ],
+ "created": 1685727831,
+ "id": "cmpl-7N36VZAVBMJtxycrmiHZ12aK76a6v",
+ "model": "text-davinci-003",
+ "object": "text_completion",
+ "prompt_annotations": [
+ {
+ "content_filter_results": {
+ "hate": {
+ "filtered": false,
+ "severity": "safe"
+ },
+ "self_harm": {
+ "filtered": false,
+ "severity": "safe"
+ },
+ "sexual": {
+ "filtered": false,
+ "severity": "safe"
+ },
+ "violence": {
+ "filtered": false,
+ "severity": "safe"
+ }
+ },
+ "prompt_index": 0
+ }
+ ],
+ "usage": {
+ "completion_tokens": 16,
+ "prompt_tokens": 5,
+ "total_tokens": 21
+ }
+}
+```
+
+The following code snippet shows how to retrieve annotations when content was filtered:
+
+```python
+# Note: The openai-python library support for Azure OpenAI is in preview.
+# os.getenv() for the endpoint and key assumes that you are using environment variables.
+
+import os
+import openai
+openai.api_type = "azure"
+openai.api_base = os.getenv("AZURE_OPENAI_ENDPOINT")
+openai.api_version = "2023-06-01-preview" # API version required to test out Annotations preview
+openai.api_key = os.getenv("AZURE_OPENAI_KEY")
+
+try:
+ response = openai.Completion.create(
+ prompt="<PROMPT>",
+ engine="<MODEL_DEPLOYMENT_NAME>",
+ )
+ print(response)
+
+except openai.error.InvalidRequestError as e:
+ if e.error.code == "content_filter" and e.error.innererror:
+ content_filter_result = e.error.innererror.content_filter_result
+ # print the formatted JSON
+ print(content_filter_result)
+
+ # or access the individual categories and details
+ for category, details in content_filter_result.items():
+ print(f"{category}:\n filtered={details['filtered']}\n severity={details['severity']}")
+
+```
+
+For details on the inference REST API endpoints for Azure OpenAI and how to create chat and completions calls, see the [Azure OpenAI Service REST API reference](../reference.md). Annotations are returned for all scenarios when using `2023-06-01-preview`.
+
+### Example scenario: An input prompt containing content that is classified at a filtered category and severity level is sent to the completions API
+
+```json
+{
+ "error": {
+ "message": "The response was filtered due to the prompt triggering Azure Content management policy.
+ Please modify your prompt and retry. To learn more about our content filtering policies
+ please read our documentation: https://go.microsoft.com/fwlink/?linkid=21298766",
+ "type": null,
+ "param": "prompt",
+ "code": "content_filter",
+ "status": 400,
+ "innererror": {
+ "code": "ResponsibleAIPolicyViolation",
+ "content_filter_result": {
+ "hate": {
+ "filtered": true,
+ "severity": "high"
+ },
+ "self-harm": {
+ "filtered": true,
+ "severity": "high"
+ },
+ "sexual": {
+ "filtered": false,
+ "severity": "safe"
+ },
+ "violence": {
+ "filtered":true,
+ "severity": "medium"
+ }
+ }
+ }
+ }
+}
+```
+
+## Best practices
+
+As part of your application design, consider the following best practices to deliver a positive experience with your application while minimizing potential harms:
+
+- Decide how you want to handle scenarios where your users send prompts containing content that is classified at a filtered category and severity level or otherwise misuse your application.
+- Check the `finish_reason` to see if a completion is filtered.
+- Check that there's no error object in the `content_filter_result` (indicating that content filters didn't run).
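+
+A minimal sketch of both checks, using the same preview openai Python library as the annotation examples above (the deployment name and prompt are placeholders):
+
+```python
+# Check each returned choice for filtering, and for the case where the
+# content filtering system didn't run.
+import os
+import openai
+
+openai.api_type = "azure"
+openai.api_base = os.getenv("AZURE_OPENAI_ENDPOINT")
+openai.api_version = "2023-06-01-preview"
+openai.api_key = os.getenv("AZURE_OPENAI_KEY")
+
+response = openai.Completion.create(
+    engine="<your deployment name>",
+    prompt="Example prompt",
+    n=2,
+)
+
+for choice in response["choices"]:
+    if choice["finish_reason"] == "content_filter":
+        # Part or all of this generation was removed by the content filters.
+        print(f"Choice {choice['index']} was filtered.")
+    error = choice.get("content_filter_result", {}).get("error")
+    if error:
+        # The content filtering system didn't run on this completion.
+        print(f"Content filtering wasn't applied: {error['message']}")
+```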
+
+## Next steps
+
+- Learn more about the [underlying models that power Azure OpenAI](../concepts/models.md).
+- Apply for modified content filters via [this form](https://customervoice.microsoft.com/Pages/ResponsePage.aspx?id=v4j5cvGGr0GRqy180BHbR7en2Ais5pxKtso_Pz4b1_xURE01NDY1OUhBRzQ3MkQxMUhZSE1ZUlJKTiQlQCN0PWcu).
+- Azure OpenAI content filtering is powered by [Azure AI Content Safety](https://azure.microsoft.com/products/cognitive-services/ai-content-safety).
+- Learn more about understanding and mitigating risks associated with your application: [Overview of Responsible AI practices for Azure OpenAI models](/legal/cognitive-services/openai/overview?context=/azure/ai-services/openai/context/context).
+- Learn more about how data is processed in connection with content filtering and abuse monitoring: [Data, privacy, and security for Azure OpenAI Service](/legal/cognitive-services/openai/data-privacy?context=/azure/ai-services/openai/context/context#preventing-abuse-and-harmful-content-generation).
ai-services Legacy Models https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/openai/concepts/legacy-models.md
+
+ Title: Azure OpenAI Service legacy models
+
+description: Learn about the legacy models in Azure OpenAI.
+++ Last updated : 07/06/2023++++
+recommendations: false
+keywords:
++
+# Azure OpenAI Service legacy models
+
+Azure OpenAI Service offers a variety of models for different use cases. The following models are not available for new deployments beginning July 6, 2023. Deployments created prior to July 6, 2023 remain available to customers until July 5, 2024. We recommend customers migrate to the replacement models prior to the July 5, 2024 retirement.
+
+## GPT-3.5
+
+The impacted GPT-3.5 models are the following. The replacement for the GPT-3.5 models is GPT-3.5 Turbo Instruct when that model becomes available.
+
+- `text-davinci-002`
+- `text-davinci-003`
+- `code-davinci-002`
+
+## GPT-3
+
+The impacted GPT-3 models are the following. The replacement for the GPT-3 models is GPT-3.5 Turbo Instruct when that model becomes available.
+
+- `text-ada-001`
+- `text-babbage-001`
+- `text-curie-001`
+- `text-davinci-001`
+- `code-cushman-001`
+
+## Embedding models
+
+The embedding models below will be retired effective July 5, 2024. Customers should migrate to `text-embedding-ada-002` (version 2).
+
+- [Similarity](#similarity-embedding)
+- [Text search](#text-search-embedding)
+- [Code search](#code-search-embedding)
+
+Each family includes models across a range of capability. The following table indicates the length of the numerical vector returned by the service, based on model capability:
+
+| Base Model | Model(s) | Dimensions |
+||||
+| Ada | | 1024 |
+| Babbage | | 2048 |
+| Curie | | 4096 |
+| Davinci | | 12288 |
++
+### Similarity embedding
+
+These models are good at capturing semantic similarity between two or more pieces of text.
+
+| Use cases | Models |
+|||
+| Clustering, regression, anomaly detection, visualization | `text-similarity-ada-001` <br> `text-similarity-babbage-001` <br> `text-similarity-curie-001` <br> `text-similarity-davinci-001` <br>|
+
+### Text search embedding
+
+These models help measure whether long documents are relevant to a short search query. There are two input types supported by this family: `doc`, for embedding the documents to be retrieved, and `query`, for embedding the search query.
+
+| Use cases | Models |
+|||
+| Search, context relevance, information retrieval | `text-search-ada-doc-001` <br> `text-search-ada-query-001` <br> `text-search-babbage-doc-001` <br> `text-search-babbage-query-001` <br> `text-search-curie-doc-001` <br> `text-search-curie-query-001` <br> `text-search-davinci-doc-001` <br> `text-search-davinci-query-001` <br> |
+
+### Code search embedding
+
+Similar to text search embedding models, there are two input types supported by this family: `code`, for embedding code snippets to be retrieved, and `text`, for embedding natural language search queries.
+
+| Use cases | Models |
+|||
+| Code search and relevance | `code-search-ada-code-001` <br> `code-search-ada-text-001` <br> `code-search-babbage-code-001` <br> `code-search-babbage-text-001` |
+
+## Model summary table and region availability
+
+Region availability is for customers with deployments of the models prior to July 6, 2023.
+
+### GPT-3.5 models
+
+| Model ID | Base model Regions | Fine-Tuning Regions | Max Request (tokens) | Training Data (up to) |
+| | | - | -- | - |
+| text-davinci-002 | East US, South Central US, West Europe | N/A | 4,097 | Jun 2021 |
+| text-davinci-003 | East US, West Europe | N/A | 4,097 | Jun 2021 |
+| code-davinci-002 | East US, West Europe | N/A | 8,001 | Jun 2021 |
+
+### GPT-3 models
++
+| Model ID | Base model Regions | Fine-Tuning Regions | Max Request (tokens) | Training Data (up to) |
+| | | - | -- | - |
+| ada | N/A | N/A | 2,049 | Oct 2019|
+| text-ada-001 | East US, South Central US, West Europe | N/A | 2,049 | Oct 2019|
+| babbage | N/A | N/A | 2,049 | Oct 2019 |
+| text-babbage-001 | East US, South Central US, West Europe | N/A | 2,049 | Oct 2019 |
+| curie | N/A | N/A | 2,049 | Oct 2019 |
+| text-curie-001 | East US, South Central US, West Europe | N/A | 2,049 | Oct 2019 |
+| davinci | N/A | N/A | 2,049 | Oct 2019|
+| text-davinci-001 | South Central US, West Europe | N/A | | |
++
+### Codex models
+
+| Model ID | Base model Regions | Fine-Tuning Regions | Max Request (tokens) | Training Data (up to) |
+| | | | | |
+| code-cushman-001 | South Central US, West Europe | N/A | 2,048 | |
+
+### Embedding models
+
+| Model ID | Base model Regions | Fine-Tuning Regions | Max Request (tokens) | Training Data (up to) |
+| | | | | |
+| text-similarity-ada-001| East US, South Central US, West Europe | N/A | 2,046 | Aug 2020 |
+| text-similarity-babbage-001 | South Central US, West Europe | N/A | 2,046 | Aug 2020 |
+| text-similarity-curie-001 | East US, South Central US, West Europe | N/A | 2,046 | Aug 2020 |
+| text-similarity-davinci-001 | South Central US, West Europe | N/A | 2,046 | Aug 2020 |
+| text-search-ada-doc-001 | South Central US, West Europe | N/A | 2,046 | Aug 2020 |
+| text-search-ada-query-001 | South Central US, West Europe | N/A | 2,046 | Aug 2020 |
+| text-search-babbage-doc-001 | South Central US, West Europe | N/A | 2,046 | Aug 2020 |
+| text-search-babbage-query-001 | South Central US, West Europe | N/A | 2,046 | Aug 2020 |
+| text-search-curie-doc-001 | South Central US, West Europe | N/A | 2,046 | Aug 2020 |
+| text-search-curie-query-001 | South Central US, West Europe | N/A | 2,046 | Aug 2020 |
+| text-search-davinci-doc-001 | South Central US, West Europe | N/A | 2,046 | Aug 2020 |
+| text-search-davinci-query-001 | South Central US, West Europe | N/A | 2,046 | Aug 2020 |
+| code-search-ada-code-001 | South Central US, West Europe | N/A | 2,046 | Aug 2020 |
+| code-search-ada-text-001 | South Central US, West Europe | N/A | 2,046 | Aug 2020 |
+| code-search-babbage-code-001 | South Central US, West Europe | N/A | 2,046 | Aug 2020 |
+| code-search-babbage-text-001 | South Central US, West Europe | N/A | 2,046 | Aug 2020 |
ai-services Models https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/openai/concepts/models.md
+
+ Title: Azure OpenAI Service models
+
+description: Learn about the different model capabilities that are available with Azure OpenAI.
+++ Last updated : 07/12/2023++++
+recommendations: false
+keywords:
++
+# Azure OpenAI Service models
+
+Azure OpenAI Service is powered by a diverse set of models with different capabilities and price points. Model availability varies by region. For GPT-3 and other models retiring in July 2024, see [Azure OpenAI Service legacy models](./legacy-models.md).
+
+| Models | Description |
+|--|--|
+| [GPT-4](#gpt-4) | A set of models that improve on GPT-3.5 and can understand as well as generate natural language and code. |
+| [GPT-3.5](#gpt-35) | A set of models that improve on GPT-3 and can understand as well as generate natural language and code. |
+| [Embeddings](#embeddings-models) | A set of models that can convert text into numerical vector form to facilitate text similarity. |
+| [DALL-E](#dall-e-models-preview) (Preview) | A series of models in preview that can generate original images from natural language. |
+
+## GPT-4
+
+GPT-4 can solve difficult problems with greater accuracy than any of OpenAI's previous models. Like GPT-3.5 Turbo, GPT-4 is optimized for chat and works well for traditional completions tasks. Use the Chat Completions API to use GPT-4. To learn more about how to interact with GPT-4 and the Chat Completions API, check out our [in-depth how-to](../how-to/chatgpt.md).
+
+Due to high demand, access to this model series is currently only available by request. To request access, existing Azure OpenAI customers can [apply by filling out this form](https://aka.ms/oai/get-gpt4).
+
+- `gpt-4`
+- `gpt-4-32k`
+
+The `gpt-4` model supports 8192 max input tokens and the `gpt-4-32k` model supports up to 32,768 tokens.
+
+## GPT-3.5
+
+GPT-3.5 models can understand and generate natural language or code. The most capable and cost effective model in the GPT-3.5 family is GPT-3.5 Turbo, which has been optimized for chat and works well for traditional completions tasks as well. We recommend using GPT-3.5 Turbo over [legacy GPT-3.5 and GPT-3 models](./legacy-models.md).
+
+- `gpt-35-turbo`
+- `gpt-35-turbo-16k`
+
+The `gpt-35-turbo` model supports 4096 max input tokens and the `gpt-35-turbo-16k` model supports up to 16,384 tokens.
+
+Like GPT-4, use the Chat Completions API to use GPT-3.5 Turbo. To learn more about how to interact with GPT-3.5 Turbo and the Chat Completions API, check out our [in-depth how-to](../how-to/chatgpt.md).
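+
+As a minimal sketch, a Chat Completions call with the pre-1.0 `openai` Python package might look like the following; the deployment name `gpt-35-turbo` and the environment variable names are placeholders for your own values.
+
+```python
+import os
+import openai
+
+# Azure OpenAI connection details (environment variable names are placeholders)
+openai.api_type = "azure"
+openai.api_base = os.getenv("AZURE_OPENAI_ENDPOINT")  # e.g. https://your-resource.openai.azure.com/
+openai.api_key = os.getenv("AZURE_OPENAI_KEY")
+openai.api_version = "2023-05-15"
+
+response = openai.ChatCompletion.create(
+    engine="gpt-35-turbo",  # your deployment name, which may differ from the model name
+    messages=[
+        {"role": "system", "content": "You are a helpful assistant."},
+        {"role": "user", "content": "In one sentence, what is GPT-3.5 Turbo good at?"},
+    ],
+)
+
+print(response["choices"][0]["message"]["content"])
+```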
+
+## Embeddings models
+
+> [!IMPORTANT]
+> We strongly recommend using `text-embedding-ada-002 (Version 2)`. This model/version provides parity with OpenAI's `text-embedding-ada-002`. To learn more about the improvements offered by this model, please refer to [OpenAI's blog post](https://openai.com/blog/new-and-improved-embedding-model). Even if you are currently using Version 1 you should migrate to Version 2 to take advantage of the latest weights/updated token limit. Version 1 and Version 2 are not interchangeable, so document embedding and document search must be done using the same version of the model.
++
+Currently, we offer three families of Embeddings models for different functionalities. The following table indicates the length of the numerical vector returned by the service, based on model capability:
+
+| Base Model | Model(s) | Dimensions |
+||||
+| Ada | models ending in -001 (Version 1) | 1024 |
+| Ada | text-embedding-ada-002 (Version 2) | 1536 |
+
+## DALL-E (Preview)
+
+The DALL-E models, currently in preview, generate images from text prompts that the user provides.
++
+## Model summary table and region availability
+
+> [!IMPORTANT]
+> South Central US is temporarily unavailable for creating new resources due to high demand.
+
+### GPT-4 models
+
+These models can only be used with the Chat Completion API.
+
+| Model ID | Base model Regions | Fine-Tuning Regions | Max Request (tokens) | Training Data (up to) |
+| | | | | |
+| `gpt-4` <sup>1,</sup><sup>2</sup> (0314) | East US, France Central | N/A | 8,192 | September 2021 |
+| `gpt-4-32k` <sup>1,</sup><sup>2</sup> (0314) | East US, France Central | N/A | 32,768 | September 2021 |
+| `gpt-4` <sup>1</sup> (0613) | East US, France Central | N/A | 8,192 | September 2021 |
+| `gpt-4-32k` <sup>1</sup> (0613) | East US, France Central | N/A | 32,768 | September 2021 |
+
+<sup>1</sup> The model is [only available by request](https://aka.ms/oai/get-gpt4).<br>
+<sup>2</sup> Version `0314` of gpt-4 and gpt-4-32k will be retired on January 4, 2024. See [model updates](#model-updates) for model upgrade behavior.
+
+### GPT-3.5 models
+
+GPT-3.5 Turbo is used with the Chat Completions API. GPT-3.5 Turbo (0301) can also be used with the Completions API. GPT-3.5 Turbo (0613) only supports the Chat Completions API.
+
+| Model ID | Base model Regions | Fine-Tuning Regions | Max Request (tokens) | Training Data (up to) |
+| | | - | -- | - |
+| `gpt-35-turbo`<sup>1</sup> (0301) | East US, France Central, South Central US, UK South, West Europe | N/A | 4,096 | Sep 2021 |
+| `gpt-35-turbo` (0613) | East US, France Central, UK South | N/A | 4,096 | Sep 2021 |
+| `gpt-35-turbo-16k` (0613) | East US, France Central, UK South | N/A | 16,384 | Sep 2021 |
+
+<sup>1</sup> Version `0301` of gpt-35-turbo will be retired on January 4, 2024. See [model updates](#model-updates) for model upgrade behavior.
++
+### Embeddings models
+
+These models can only be used with Embedding API requests.
+
+> [!NOTE]
+> We strongly recommend using `text-embedding-ada-002 (Version 2)`. This model/version provides parity with OpenAI's `text-embedding-ada-002`. To learn more about the improvements offered by this model, please refer to [OpenAI's blog post](https://openai.com/blog/new-and-improved-embedding-model). Even if you are currently using Version 1 you should migrate to Version 2 to take advantage of the latest weights/updated token limit. Version 1 and Version 2 are not interchangeable, so document embedding and document search must be done using the same version of the model.
+
+| Model ID | Base model Regions | Fine-Tuning Regions | Max Request (tokens) | Training Data (up to) |
+| | | | | |
+| text-embedding-ada-002 (version 2) | East US, South Central US, West Europe | N/A |8,191 | Sep 2021 |
+| text-embedding-ada-002 (version 1) | East US, South Central US, West Europe | N/A |2,046 | Sep 2021 |
+
+### DALL-E models (Preview)
+
+| Model ID | Base model Regions | Fine-Tuning Regions | Max Request (characters) | Training Data (up to) |
+| | | | | |
+| dalle2 | East US | N/A | 1000 | N/A |
+
+## Working with models
+
+### Finding what models are available
+
+You can get a list of models that are available for both inference and fine-tuning by your Azure OpenAI resource by using the [Models List API](/rest/api/cognitiveservices/azureopenaistable/models/list).
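+
+As a rough sketch, the same list can be retrieved with a plain HTTP GET against your resource; the API version and environment variable names below are assumptions, so confirm them against the REST reference.
+
+```python
+import os
+import requests
+
+endpoint = os.getenv("AZURE_OPENAI_ENDPOINT")  # e.g. https://your-resource.openai.azure.com
+api_key = os.getenv("AZURE_OPENAI_KEY")
+
+# Call the data-plane Models List operation (the api-version value is an assumption)
+response = requests.get(
+    f"{endpoint}/openai/models",
+    headers={"api-key": api_key},
+    params={"api-version": "2023-05-15"},
+)
+response.raise_for_status()
+
+for model in response.json().get("data", []):
+    # Each entry describes a model and whether it supports inference and/or fine-tuning
+    print(model.get("id"), model.get("capabilities"))
+```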
+
+### Model updates
+
+Azure OpenAI now supports automatic updates for select model deployments. On models where automatic update support is available, a model version drop-down will be visible in Azure OpenAI Studio under **Create new deployment** and **Edit deployment**:
++
+### Auto update to default
+
+When **Auto-update to default** is selected, your model deployment is automatically updated within two weeks of a new version being released.
+
+If you are still in the early testing phases for completion and chat completion based models, we recommend deploying models with **auto-update to default** set whenever it is available.
+
+### Specific model version
+
+As your use of Azure OpenAI evolves and you start to build and integrate with applications, you will likely want to manually control model updates so that you can first test and validate that model performance remains consistent for your use case prior to upgrading.
+
+When you select a specific model version for a deployment, this version remains selected until you either choose to manually update it or the model reaches its retirement date. When the retirement date is reached, the model auto-upgrades to the default version at the time of retirement.
+
+### GPT-35-Turbo 0301 and GPT-4 0314 retirement
+
+The `gpt-35-turbo` (`0301`) model and both `gpt-4` (`0314`) models will be retired on January 4, 2024. Upon retirement, deployments are automatically upgraded to the default version at the time of retirement. If you would like your deployment to stop accepting completion requests rather than upgrading, you will be able to set the model upgrade option to expire through the API. We will publish guidelines on this by September 1.
+
+### Viewing deprecation dates
+
+For currently deployed models, from Azure OpenAI Studio select **Deployments**:
++
+To view deprecation/expiration dates for all available models in a given region, from Azure OpenAI Studio select **Models** > **Column options** > select **Deprecation fine tune** and **Deprecation inference**:
++
+### Update & deploy models via the API
+
+```http
+PUT https://management.azure.com/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.CognitiveServices/accounts/{accountName}/deployments/{deploymentName}?api-version=2023-05-01
+```
+
+**Path parameters**
+
+| Parameter | Type | Required? | Description |
+|--|--|--|--|
+| ```accountName``` | string | Required | The name of your Azure OpenAI Resource. |
+| ```deploymentName``` | string | Required | The deployment name you chose when you deployed an existing model or the name you would like a new model deployment to have. |
+| ```resourceGroupName``` | string | Required | The name of the associated resource group for this model deployment. |
+| ```subscriptionId``` | string | Required | Subscription ID for the associated subscription. |
+| ```api-version``` | string | Required |The API version to use for this operation. This follows the YYYY-MM-DD format. |
+
+**Supported versions**
+
+- `2023-05-01` [Swagger spec](https://github.com/Azure/azure-rest-api-specs/blob/1e71ad94aeb8843559d59d863c895770560d7c93/specification/cognitiveservices/resource-manager/Microsoft.CognitiveServices/stable/2023-05-01/cognitiveservices.json)
+
+**Request body**
+
+This is only a subset of the available request body parameters. For the full list of the parameters, you can refer to the [REST API spec](https://github.com/Azure/azure-rest-api-specs/blob/1e71ad94aeb8843559d59d863c895770560d7c93/specification/cognitiveservices/resource-manager/Microsoft.CognitiveServices/stable/2023-05-01/cognitiveservices.json).
+
+|Parameter|Type| Description |
+|--|--|--|
+|versionUpgradeOption | String | Deployment model version upgrade options:<br>`OnceNewDefaultVersionAvailable`<br>`OnceCurrentVersionExpired`<br>`NoAutoUpgrade`|
+|capacity|integer|This represents the amount of [quota](../how-to/quota.md) you are assigning to this deployment. A value of 1 equals 1,000 Tokens per Minute (TPM).|
+
+#### Example request
+
+```Bash
+curl -X PUT "https://management.azure.com/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/resource-group-temp/providers/Microsoft.CognitiveServices/accounts/docs-openai-test-001/deployments/text-embedding-ada-002-test-1?api-version=2023-05-01" \
+ -H "Content-Type: application/json" \
+ -H 'Authorization: Bearer YOUR_AUTH_TOKEN' \
+ -d '{"sku":{"name":"Standard","capacity":1},"properties": {"model": {"format": "OpenAI","name": "text-embedding-ada-002","version": "2"},"versionUpgradeOption":"OnceCurrentVersionExpired"}}'
+```
+
+> [!NOTE]
+> There are multiple ways to generate an authorization token. The easiest method for initial testing is to launch the Cloud Shell from https://portal.azure.com. Then run [`az account get-access-token`](/cli/azure/account?view=azure-cli-latest#az-account-get-access-token&preserve-view=true). You can use this token as your temporary authorization token for API testing.
+
+#### Example response
+
+```json
+{
+ "id": "/subscriptions/{subscription-id}/resourceGroups/resource-group-temp/providers/Microsoft.CognitiveServices/accounts/docs-openai-test-001/deployments/text-embedding-ada-002-test-1",
+ "type": "Microsoft.CognitiveServices/accounts/deployments",
+ "name": "text-embedding-ada-002-test-1",
+ "sku": {
+ "name": "Standard",
+ "capacity": 1
+ },
+ "properties": {
+ "model": {
+ "format": "OpenAI",
+ "name": "text-embedding-ada-002",
+ "version": "2"
+ },
+ "versionUpgradeOption": "OnceCurrentVersionExpired",
+ "capabilities": {
+ "embeddings": "true",
+ "embeddingsMaxInputs": "1"
+ },
+ "provisioningState": "Succeeded",
+ "ratelimits": [
+ {
+ "key": "request",
+ "renewalPeriod": 10,
+ "count": 2
+ },
+ {
+ "key": "token",
+ "renewalPeriod": 60,
+ "count": 1000
+ }
+ ]
+ },
+ "systemData": {
+ "createdBy": "docs@contoso.com",
+ "createdByType": "User",
+ "createdAt": "2023-06-13T00:12:38.885937Z",
+ "lastModifiedBy": "docs@contoso.com",
+ "lastModifiedByType": "User",
+ "lastModifiedAt": "2023-06-13T02:41:04.8410965Z"
+ },
+ "etag": "\"{GUID}\""
+}
+```
+++++++
+## Next steps
+
+- [Learn more about Azure OpenAI](../overview.md)
+- [Learn more about fine-tuning Azure OpenAI models](../how-to/fine-tuning.md)
ai-services Prompt Engineering https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/openai/concepts/prompt-engineering.md
+
+ Title: Azure OpenAI Service | Introduction to Prompt engineering
+
+description: Learn how to use prompt engineering to optimize your work with Azure OpenAI Service.
+++ Last updated : 03/21/2023++++
+recommendations: false
+keywords:
++
+# Introduction to prompt engineering
+
+GPT-3, GPT-3.5, and GPT-4 models from OpenAI are prompt-based. With prompt-based models, the user interacts with the model by entering a text prompt, to which the model responds with a text completion. This completion is the model's continuation of the input text.
+
+While these models are extremely powerful, their behavior is also very sensitive to the prompt. This makes prompt construction an important skill to develop.
+
+Prompt construction can be difficult. In practice, the prompt acts to configure the model weights to complete the desired task, but it's more of an art than a science, often requiring experience and intuition to craft a successful prompt. The goal of this article is to help get you started with this learning process. It attempts to capture general concepts and patterns that apply to all GPT models. However, it's important to understand that each model behaves differently, so the learnings may not apply equally to all models.
+
+## Basics
+
+This section covers the basic concepts and elements of GPT prompts.
+
+Text prompts are how users interact with GPT models. As with all generative language models, GPT models attempt to produce the next series of words that are most likely to follow from the previous text. It's as if we're saying *"What is the first thing that comes to your mind when I say `<prompt>`?"*. The examples below demonstrate this behavior. Given the first words of famous content, the model is able to accurately continue the text.
+
+| Prompt |Completion |
+||--|
+| Four score and seven years ago our | fathers brought forth on this continent, a new nation, conceived in Liberty, and dedicated to the proposition that all men are created equal. […]|
+| "Call me Ishmael." | "Some years ago—never mind how long precisely—having little o no money in my purse, and nothing particular to interest me on shore, I thought I would sail about a little and see the watery part of the world." […]|
+
+As you develop more complex prompts, it's helpful to keep this fundamental behavior in mind. Regardless of the prompt that is provided, the model is simply responding with what it determines is most likely (given its training data and training targets). If you ask a question in your prompt, the model isn't following a separate "Q&A" code path, but rather it appears to answer the question because an answer is the most likely sort of response for the given question as input.
+
+### Prompt components
+
+While the Completion API doesn't differentiate between different parts of the prompt, it can still be useful for learning and discussion to identify the underlying prompt components. With the [Chat Completion API](../how-to/chatgpt.md) there are distinct sections of the prompt that are sent to the API in the form of an array of dictionaries with associated roles: system, user, and assistant. This guidance focuses more generally on how to think about prompt construction rather than providing prescriptive guidance that is specific to one API over another.
+
+It's also important to understand that while there could be other valid ways to dissect prompts, the goal of this breakdown is to provide a relatively simple way to think about prompt construction. With the Completion API, all the components are optional, but at least one must be present and most prompts include more than one component. There can be some grey area between components as well. The order presented below roughly corresponds to how commonly each component is used, from most to least.
+
+#### Instructions
+
+Instructions are likely the most commonly used prompt component. Instructions are simply that: instructions to the model on what to do. While simple in concept, they can be complex in practice. The following table illustrates this by showing both simple and more complex instructions, in two different forms.
+
+|Instruction Type | Prompt | Completion |
+|--||--|
| Simple instructions | Write an introduction for a weekly newsletter. | Welcome to our weekly newsletter. This week we have a special focus on productivity. We'll be sharing tips and tricks on how to be productive in all areas of your life. We hope you enjoy and please share with your friends! |
+| Complex instructions | Write an introduction for a weekly newsletter to my company named Contoso. It should mention the great all hands meeting we had last week and thank the team for all their hard work over the tough past few months. | Welcome to Contoso's weekly newsletter! We had a great all hands meeting last week and I want to thank the team for all their hard work over the tough past few months. We're looking forward to a bright future and I'm confident that with our team's dedication and commitment, we will achieve great things. Thank you for being a part of our team and I look forward to working with you all in the coming weeks and months. Sincerely, [Your name]. |
+| Complex Instructions | Write an introduction for a weekly newsletter, including the following: <br>- My company is named contoso <br>- An exuberant greeting <br>- Thanking the team for their hard work over a couple tough months <br>- Positive outlook for the coming quarter<br>- Signed by the SLT | Hello Contoso team! I wanted to take a moment to thank you for all your hard work over the past few months. It's been a tough couple of months, but you've all pulled together and done an amazing job. I'm really proud of you all. Looking ahead to the next quarter, I'm feeling really positive. I think we're going to achieve some great things. Thanks again, The Senior Leadership Team.|
+
+### Primary Content
+
+Primary content refers to some sort of text that is being processed or transformed by the model. Primary content is typically used in conjunction with instructions. A simple example would be language translation. In the example below, the English text would be considered the primary content while "Translate to French:" is an instruction.
+
+| Prompt | Completion|
+|--|--|
+| Can you please tell me how to get to the museum?<br>Translate to French: |`Pouvez-vous s'il vous plaît me dire comment aller au musée?` |
+
+Primary content can also be much longer. In the following example, the primary content is the introduction section from the [Wikipedia entry on time complexity](https://en.wikipedia.org/wiki/Time_complexity), which is almost 300 words long. This content has been abbreviated in our table for ease of display.
+
+| Prompt | Completion|
+|--|--|
+| In computer science, the time complexity is the computational complexity that describes the amount of computer time it takes to run an algorithm. Time complexity is commonly estimated by [...]<br>Summarize the previous text in a few sentences:| Time complexity is a measure of how long an algorithm takes to run. It is typically expressed as a function of the size of the input. The most common measure is worst-case time complexity, which is the maximum amount of time required for inputs of a given size.|
+| In computer science, the time complexity is the computational complexity that describes the amount of computer time it takes to run an algorithm. Time complexity is commonly estimated by [...]<br>`TL;DR` (Too long didn't read) | The time complexity of an algorithm is the amount of time it takes to run an algorithm. Time complexity is commonly estimated by counting the number of elementary operations performed by the algorithm, supposing that each elementary operation takes a fixed amount of time to perform. The time complexity is generally expressed as a function of the size of the input.|
+
+GPT models can also handle primary content that is structured. In the example below, a TSV about beers in Belgium (abbreviated for display in this article) is passed to the model as part of the prompt. It is capable of properly interpreting the content and answering questions about the data.
+
+| Prompt | Completion |
+|--||
+| `Belgian Beer Brands` <br>`Beer name Beer style color ABV Brewery`<br>`"Chimay Gold: ""Doree""" trappist pale ale gold 4.80% Chimay Brewery (official trappist)`<br>`"Chimay Blue: ""Grande Reserve""" trappist dark ale dark 9.00%`<br>[…]<br>Referencing the table above, how many beers are less than 6% ABV? | There are 10 beers in the table that are less than 6% ABV. |
+
+### Examples
+
+Successful prompts often rely on the practice of "one-shot" or "few-shot" learning. This refers to the inclusion of one or more examples of the desired behavior of the model, typically by including input and output pairs. This is not learning in the sense that the model is permanently changed, but rather that the examples better condition the model to respond as desired for only the current inference. The use of prompts with no examples is sometimes referred to as "zero-shot" learning. Note that with the Chat Completion API few-shot learning examples are typically added to the messages array in the form of example user/assistant interactions after the initial system message.
+
+| Learning Type| Prompt| Completion|
+|- |-|--|
+| Zero-shot | Headline: Coach confident injury won't derail Warriors<br>Topic:| The coach is confident that the injury won't derail the Warriors' season. The team is still focused on their goals and that they will continue to work hard to achieve them.|
+| Few-shot | Headline: Twins' Correa to use opt-out, test free agency<br>Topic: Baseball<br>Headline: Qatar World Cup to have zones for sobering up<br>Topic: Soccer<br>Headline: Yates: Fantasy football intel for Week 6<br>Topic: Football<br>Headline: Coach confident injury won't derail Warriors<br>Topic: | Basketball |
+
+The example above illustrates the utility of few-shot learning. Without the examples, the model seems to be guessing at the desired behavior, while the examples cleanly show the model how to operate. This also demonstrates the power of the model: it can infer the category of label that is wanted, even without a 'basketball' label in the examples.
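+
+A hedged sketch of the few-shot headline example above, expressed as Chat Completions messages with the pre-1.0 `openai` Python package, is shown below; the deployment name and environment variable names are placeholders.
+
+```python
+import os
+import openai
+
+openai.api_type = "azure"
+openai.api_base = os.getenv("AZURE_OPENAI_ENDPOINT")
+openai.api_key = os.getenv("AZURE_OPENAI_KEY")
+openai.api_version = "2023-05-15"
+
+# Few-shot examples become example user/assistant turns after the system message
+messages = [
+    {"role": "system", "content": "You label news headlines with a one-word sports topic."},
+    {"role": "user", "content": "Headline: Twins' Correa to use opt-out, test free agency"},
+    {"role": "assistant", "content": "Baseball"},
+    {"role": "user", "content": "Headline: Qatar World Cup to have zones for sobering up"},
+    {"role": "assistant", "content": "Soccer"},
+    {"role": "user", "content": "Headline: Coach confident injury won't derail Warriors"},
+]
+
+response = openai.ChatCompletion.create(
+    engine="gpt-35-turbo",  # placeholder deployment name
+    messages=messages,
+)
+print(response["choices"][0]["message"]["content"])  # expected to be close to "Basketball"
+```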
+
+### Cue
+
+Cues act as the "jumpstart" for the output of the model, helping to direct the model to the desired output. It is often a prefix that the model can build onto. In the example below, we show the cue directing the story to include key elements. Cues are often used along with instructions, but not always. In the examples below, the model is being used to summarize a long announcement email. A cue is used in one case to help focus the output of the model and to suggest a specific output format (bullet points) in the other.
+
+In the context of the Chat Completion API instructions take the form of the system message, and few-shot examples are indicated by providing a series of example user/assistant interactions to help prime the model.
+
+| Number of Cues|Prompt | Completion|
+|-|-|--|
+| 0 | Performance reviews are a core component of our Performance & Development approach at Contoso. […] <br> Summarize the above email message: | The purpose of this email is to inform employees of recent changes to the performance review experience, which is part of Contoso's Performance & Development approach. The changes include an improved structure, upgraded in-tool user experience, and focused prompts. Additionally, there is now an optional Conversation Starters section for employees to flag topics they'd like to discuss with their managers. |
+| 1 | Performance reviews are a core component of our Performance & Development approach at Contoso. […] <br> Summarize the above email message:<br> The key takeaway of the message is | that Contoso is improving the performance review experience by making it more focused on core priorities, expectations and impact. Additionally, they are upgrading the in-tool user experience with tips, resources and guidance. Finally, they are adding guidance on how to activate company-wide core priorities. |
+|2 | Performance reviews are a core component of our Performance & Development approach at Contoso. […] <br> Summarize the above email message: <br>Key Points:<br>• | Contoso is introducing changes to the performance review experience, based on feedback from employees.<br>• The changes include a new focus on core priorities, expectations, and impact; an upgraded user experience; and new guidance on company-wide core priorities.<br>• Performance reviews are a time to reflect on past performance and learning as well as to plan for future priorities and growth opportunities.<br>• Employees and managers should write comments in advance of the performance review meeting.<br>• There should be at least two performance reviews per year.<br>• The time investment for each review should remain about the same. |
+
+### Supporting content
+
+Supporting content is information that the model can utilize to influence the output in some way. It differs from primary content in that it is not the main target of the task, but it is typically used along with primary content. Common examples include contextual information such as the current date, the name of the user, user preferences, and so on. The example below uses supporting content to help organize a set of planned workshops for the user. Without the supporting content (important topics), the model simply lists off the workshops (truncated for display); when informed of my important topics, the model is able to accurately group the meetings.
+
+| Supporting Content| Prompt | Completion|
+|--|-|--|
+| Without | Workshop list:<br>Title Description<br> Planning Workshop Kickoff Please join us to kick off our workshop! Hear from our leaders about cross group priorities and learnings for the year ahead. The invitation to this event has already been sent to everyone in […]<br>Summarize the workshops above, grouped by topic: | Workshop list: <br> Planning Workshop Kickoff <br> -Planning Workshop Wrap-up<br>- Core MeetUp<br>- Customer Driven Perspectives<br>-Impact and Key Focus Areas<br>-Best practices<br>-Accelerating app adoption […] |
+| With | Workshop list: <br>Title Description<br>Planning Workshop Kickoff Please join us to kick off our workshop! Hear from our leaders about cross group priorities and learnings for the year ahead. The invitation to this event has already been sent to everyone in […] <br>My Important Topics: prompt engineering, search, GPT models<br>Summarize the workshops above, grouped by my important topics:| 1. Prompt Engineering: <br>- New prompt engineering tips<br>- The art of prompt engineering<br>2. Search:<br>-Intro to vector search with embedding<br>3. GPT Models:<br>- Intro to GPT-4<br>- GPT-35-Turbo in-depth.|
+
+## Best practices
+
+- **Be Specific**. Leave as little to interpretation as possible. Restrict the operational space.
+- **Be Descriptive**. Use analogies.
+- **Double Down**. Sometimes you may need to repeat yourself to the model. Give instructions before and after your primary content, use an instruction and a cue, etc.
+- **Order Matters**. The order in which you present information to the model may impact the output. Whether you put instructions before your content (“summarize the following…”) or after (“summarize the above…”) can make a difference in output. Even the order of few-shot examples can matter. This is referred to as recency bias.
+- **Give the model an "out"**. It can sometimes be helpful to give the model an alternative path if it is unable to complete the assigned task. For example, when asking a question over a piece of text you might include something like "respond with 'not found' if the answer is not present." This can help the model avoid generating false responses.
+
+## Space efficiency
+
+While the input size increases with each new generation of GPT models, there will continue to be scenarios that provide more data than the model can handle. GPT models break words into "tokens". While common multi-syllable words are often a single token, less common words are broken into syllables. Tokens can sometimes be counter-intuitive, as shown by the example below which demonstrates token boundaries for different date formats. In this case, spelling out the entire month is more space efficient than a fully numeric date. The current range of token support goes from 2,000 tokens with earlier GPT-3 models up to 32,768 tokens with the 32k version of the latest GPT-4 model.
++
+Given this limited space, it is important to use it as efficiently as possible.
+- Tables – As shown in the examples in the previous section, GPT models can understand tabular formatted data quite easily. This can be a space efficient way to include data, rather than preceding every field with its name (such as with JSON).
+- White Space – Consecutive whitespaces are treated as separate tokens, which can be an easy way to waste space. Spaces preceding a word, on the other hand, are typically treated as part of the same token as the word. Carefully watch your usage of whitespace, and don't use punctuation when a space alone will do. A short token-counting sketch follows this list.
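+
+The following sketch uses the open-source `tiktoken` package with the `cl100k_base` encoding (used by the GPT-3.5 Turbo and GPT-4 models) to inspect how a few strings tokenize; the exact counts are illustrative and depend on the encoding.
+
+```python
+import tiktoken
+
+# cl100k_base is the encoding used by gpt-35-turbo and gpt-4
+encoding = tiktoken.get_encoding("cl100k_base")
+
+samples = [
+    "May 4th, 2023",       # spelled-out month
+    "05/04/2023",          # fully numeric date
+    "one  extra  space",   # consecutive whitespace wastes tokens
+]
+
+for text in samples:
+    tokens = encoding.encode(text)
+    pieces = [encoding.decode([t]) for t in tokens]
+    print(f"{text!r}: {len(tokens)} tokens -> {pieces}")
+```
+
+If a prompt runs close to the model's limit, trimming redundant whitespace or presenting structured data as a table can free up room.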
+
+## Next steps
+
+[Learn more about Azure OpenAI](../overview.md)
ai-services Red Teaming https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/openai/concepts/red-teaming.md
+
+ Title: Introduction to red teaming large language models (LLMs)
+
+description: Learn about how red teaming and adversarial testing is an essential practice in the responsible development of systems and features using large language models (LLMs)
+++ Last updated : 05/18/2023++++
+recommendations: false
+keywords:
++
+# Introduction to red teaming large language models (LLMs)
+
+The term *red teaming* has historically described systematic adversarial attacks for testing security vulnerabilities. With the rise of LLMs, the term has extended beyond traditional cybersecurity and evolved in common usage to describe many kinds of probing, testing, and attacking of AI systems. With LLMs, both benign and adversarial usage can produce potentially harmful outputs, which can take many forms, including harmful content such as hate speech, incitement or glorification of violence, or sexual content.
+
+**Red teaming is an essential practice in the responsible development of systems and features using LLMs**. While not a replacement for systematic [measurement and mitigation](/legal/cognitive-services/openai/overview?context=/azure/ai-services/openai/context/context) work, red teamers help to uncover and identify harms and, in turn, enable measurement strategies to validate the effectiveness of mitigations.
+
+Microsoft has conducted red teaming exercises and implemented safety systems (including [content filters](content-filter.md) and other [mitigation strategies](prompt-engineering.md)) for its Azure OpenAI Service models (see this [Responsible AI Overview](/legal/cognitive-services/openai/overview?context=/azure/ai-services/openai/context/context)). However, the context of your LLM application will be unique and you also should conduct red teaming to:
+
+- Test the LLM base model and determine whether there are gaps in the existing safety systems, given the context of your application system.
+- Identify and mitigate shortcomings in the existing default filters or mitigation strategies.
+- Provide feedback on failures so we can make improvements.
+
+Here is how you can get started in your process of red teaming LLMs. Advance planning is critical to a productive red teaming exercise.
+
+## Getting started
+
+### Managing your red team
+
+**Assemble a diverse group of red teamers.**
+
+LLM red teamers should be a mix of people with diverse social and professional backgrounds, demographic groups, and interdisciplinary expertise that fits the deployment context of your AI system. For example, if you're designing a chatbot to help health care providers, medical experts can help identify risks in that domain.
+
+**Recruit red teamers with both benign and adversarial mindsets.**
+
+Having red teamers with an adversarial mindset and security-testing experience is essential for understanding security risks, but red teamers who are ordinary users of your application system and haven't been involved in its development can bring valuable perspectives on harms that regular users might encounter.
+
+**Remember that handling potentially harmful content can be mentally taxing.**
+
+You will need to take care of your red teamers, not only by limiting the amount of time they spend on an assignment, but also by letting them know they can opt out at any time. Also, avoid burnout by switching red teamers' assignments to different focus areas.
+
+### Planning your red teaming
+
+#### Where to test
+
+Because a system is developed using an LLM base model, you may need to test at several different layers:
+
+- The LLM base model with its [safety system](./content-filter.md) in place to identify any gaps that may need to be addressed in the context of your application system. (Testing is usually through an API endpoint.)
+- Your application system. (Testing is usually through a UI.)
+- Both the LLM base model and your application system before and after mitigations are in place.
+
+#### How to test
+
+Consider conducting iterative red teaming in at least two phases:
+
+1. Open-ended red teaming, where red teamers are encouraged to discover a variety of harms. This can help you develop a taxonomy of harms to guide further testing. Note that developing a taxonomy of undesired LLM outputs for your application system is crucial to being able to measure the success of specific mitigation efforts.
+2. Guided red teaming, where red teamers are assigned to focus on specific harms listed in the taxonomy while staying alert for any new harms that may emerge. Red teamers can also be instructed to focus testing on specific features of a system for surfacing potential harms.
+
+Be sure to:
+
+- Provide your red teamers with clear instructions for what harms or system features they will be testing.
+- Give your red teamers a place for recording their findings. For example, this could be a simple spreadsheet specifying the types of data that red teamers should provide, including basics such as:
+ - The type of harm that was surfaced.
+ - The input prompt that triggered the output.
+ - An excerpt from the problematic output.
+ - Comments about why the red teamer considered the output problematic.
+- Maximize the effort of responsible AI red teamers who have expertise for testing specific types of harms or undesired outputs. For example, have security subject matter experts focus on jailbreaks, metaprompt extraction, and content related to aiding cyberattacks.
+
+### Reporting red teaming findings
+
+You will want to summarize and report red teaming top findings at regular intervals to key stakeholders, including teams involved in the measurement and mitigation of LLM failures so that the findings can inform critical decision making and prioritizations.
+
+## Next steps
+
+[Learn about other mitigation strategies like prompt engineering](./prompt-engineering.md)
ai-services System Message https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/openai/concepts/system-message.md
+
+ Title: System message framework and template recommendations for Large Language Models (LLMs)
+
+description: Learn how to construct system messages, also known as metaprompts, to guide an AI system's behavior.
+++ Last updated : 05/19/2023++++
+recommendations: false
+keywords:
++
+# System message framework and template recommendations for Large Language Models (LLMs)
+
+This article provides a recommended framework and example templates to help write an effective system message, sometimes referred to as a metaprompt or [system prompt](advanced-prompt-engineering.md?pivots=programming-language-completions#meta-prompts), that can be used to guide an AI system's behavior and improve system performance. If you're new to prompt engineering, we recommend starting with our [introduction to prompt engineering](prompt-engineering.md) and [prompt engineering techniques guidance](advanced-prompt-engineering.md).
+
+This guide provides system message recommendations and resources that, along with other prompt engineering techniques, can help increase the accuracy and grounding of responses you generate with a Large Language Model (LLM). However, it is important to remember that even when using these templates and guidance, you still need to validate the responses the models generate. Just because a carefully crafted system message worked well for a particular scenario doesn't necessarily mean it will work more broadly across other scenarios. Understanding the [limitations of LLMs](/legal/cognitive-services/openai/transparency-note?context=/azure/ai-services/openai/context/context#limitations) and the [mechanisms for evaluating and mitigating those limitations](/legal/cognitive-services/openai/overview?context=/azure/ai-services/openai/context/context) is just as important as understanding how to leverage their strengths.
+
+The LLM system message framework described here covers four concepts:
+
+- Define the model's profile, capabilities, and limitations for your scenario
+- Define the model's output format
+- Provide example(s) to demonstrate the intended behavior of the model
+- Provide additional behavioral guardrails
+
+## Define the model's profile, capabilities, and limitations for your scenario
+
+- **Define the specific task(s)** you would like the model to complete. Describe who the users of the model will be, what inputs they will provide to the model, and what you expect the model to do with the inputs.
+
+- **Define how the model should complete the tasks**, including any additional tools (like APIs, code, plug-ins) the model can use. If it doesn't use additional tools, it can rely on its own parametric knowledge.
+
+- **Define the scope and limitations** of the model's performance. Provide clear instructions on how the model should respond when faced with any limitations. For example, define how the model should respond if prompted on subjects or for uses that are off topic or otherwise outside of what you want the system to do.
+
+- **Define the posture and tone** the model should exhibit in its responses.
+
+## Define the model's output format
+
+When using the system message to define the model's desired output format in your scenario, consider and include the following types of information (a brief sketch follows this list):
+
+- **Define the language and syntax** of the output format. If you want the output to be machine parse-able, you may want the output to be in formats like JSON or XML.
+
+- **Define any styling or formatting** preferences for better user or machine readability. For example, you may want relevant parts of the response to be bolded or citations to be in a specific format.
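+
+As an illustrative sketch only (the wording is an example, not a prescribed template), a system message that follows this framework could be placed at the start of the Chat Completions `messages` array like this:
+
+```python
+# Illustrative only: a system message following the framework, placed first in the messages array
+system_message = (
+    "You are a support assistant for Contoso's billing FAQ.\n"            # profile and task
+    "Only answer questions about Contoso billing; for anything else, "
+    "reply with 'I can only help with billing questions.'\n"              # scope and limitations
+    "Keep a friendly, concise tone.\n"                                    # posture and tone
+    "Respond as JSON with the keys 'answer' and 'source'."                # output format
+)
+
+messages = [
+    {"role": "system", "content": system_message},
+    {"role": "user", "content": "How do I update my payment method?"},
+]
+
+print(messages)
+```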
+
+## Provide example(s) to demonstrate the intended behavior of the model
+
+When using the system message to demonstrate the intended behavior of the model in your scenario, it is helpful to provide specific examples. When providing examples, consider the following:
+
+- Describe difficult use cases where the prompt is ambiguous or complicated, to give the model additional visibility into how to approach such cases.
+- Show the potential "inner monologue" and chain-of-thought reasoning to better inform the model on the steps it should take to achieve the desired outcomes.
+
+## Define additional behavioral guardrails
+
+When defining additional safety and behavioral guardrails, it's helpful to first identify and prioritize [the harms](/legal/cognitive-services/openai/overview?context=/azure/ai-services/openai/context/context) you'd like to address. Depending on the application, the sensitivity and severity of certain harms could be more important than others.
+
+## Next steps
+
+- Learn more about [Azure OpenAI](../overview.md)
+- Learn more about [deploying Azure OpenAI responsibly](/legal/cognitive-services/openai/overview?context=/azure/ai-services/openai/context/context)
+- For more examples, check out the [Azure OpenAI Samples GitHub repository](https://github.com/Azure-Samples/openai)
ai-services Understand Embeddings https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/openai/concepts/understand-embeddings.md
+
+ Title: Azure OpenAI Service embeddings
+
+description: Learn more about Azure OpenAI embeddings API for document search and cosine similarity
+++++ Last updated : 03/22/2023++
+recommendations: false
+++
+# Understanding embeddings in Azure OpenAI Service
+
+An embedding is a special format of data representation that can be easily utilized by machine learning models and algorithms. The embedding is an information dense representation of the semantic meaning of a piece of text. Each embedding is a vector of floating-point numbers, such that the distance between two embeddings in the vector space is correlated with semantic similarity between two inputs in the original format. For example, if two texts are similar, then their vector representations should also be similar.
+
+## Embedding models
+
+Different Azure OpenAI embedding models are specifically created to be good at a particular task. **Similarity embeddings** are good at capturing semantic similarity between two or more pieces of text. **Text search embeddings** help measure whether long documents are relevant to a short query. **Code search embeddings** are useful for embedding code snippets and embedding natural language search queries.
+
+Embeddings make it easier to do machine learning on large inputs representing words by capturing the semantic similarities in a vector space. Therefore, we can use embeddings to determine if two text chunks are semantically related or similar, and provide a score to assess similarity.
+
+## Cosine similarity
+
+Azure OpenAI embeddings rely on cosine similarity to compute similarity between documents and a query.
+
+From a mathematical perspective, cosine similarity measures the cosine of the angle between two vectors projected in a multi-dimensional space. This is beneficial because even if two documents are far apart by Euclidean distance because of size, they could still have a smaller angle between them and therefore higher cosine similarity. For more information about cosine similarity equations, see [this article on Wikipedia](https://en.wikipedia.org/wiki/Cosine_similarity).
+
+An alternative method of identifying similar documents is to count the number of common words between documents. Unfortunately, this approach doesn't scale since an expansion in document size is likely to lead to a greater number of common words detected even among completely disparate topics. For this reason, cosine similarity can offer a more effective alternative.
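+
+As a minimal sketch, the following Python generates embeddings with the pre-1.0 `openai` package and computes cosine similarity with NumPy; the deployment name `text-embedding-ada-002` and the environment variable names are placeholders for your own values.
+
+```python
+import os
+import numpy as np
+import openai
+
+openai.api_type = "azure"
+openai.api_base = os.getenv("AZURE_OPENAI_ENDPOINT")
+openai.api_key = os.getenv("AZURE_OPENAI_KEY")
+openai.api_version = "2023-05-15"
+
+def embed(text: str) -> np.ndarray:
+    # "text-embedding-ada-002" here is a placeholder deployment name
+    response = openai.Embedding.create(engine="text-embedding-ada-002", input=text)
+    return np.array(response["data"][0]["embedding"])
+
+def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
+    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))
+
+document = embed("Azure OpenAI provides REST API access to powerful language models.")
+query = embed("How do I call language models on Azure?")
+unrelated = embed("The quickest route from Lisbon to Porto by train.")
+
+print("query vs document:", cosine_similarity(query, document))
+print("query vs unrelated:", cosine_similarity(query, unrelated))
+```
+
+In practice you would compare a query embedding against many stored document embeddings and rank the documents by this score.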
+
+## Next steps
+
+Learn more about using Azure OpenAI and embeddings to perform document search with our [embeddings tutorial](../tutorials/embeddings.md).
ai-services Use Your Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/openai/concepts/use-your-data.md
+
+ Title: 'Using your data with Azure OpenAI Service'
+
+description: Use this article to learn about using your data for better text generation in Azure OpenAI.
+++++++ Last updated : 07/12/2023
+recommendations: false
++
+# Azure OpenAI on your data (preview)
+
+Azure OpenAI on your data enables you to run supported chat models such as GPT-35-Turbo and GPT-4 on your data without needing to train or fine-tune models. Running models on your data enables you to chat on top of, and analyze your data with greater accuracy and speed. By doing so, you can unlock valuable insights that can help you make better business decisions, identify trends and patterns, and optimize your operations. One of the key benefits of Azure OpenAI on your data is its ability to tailor the content of conversational AI.
+
+To get started, [connect your data source](../use-your-data-quickstart.md) using [Azure OpenAI Studio](https://oai.azure.com/) and start asking questions and chatting on your data.
+
+Because the model has access to, and can reference specific sources to support its responses, answers are not only based on its pretrained knowledge but also on the latest information available in the designated data source. This grounding data also helps the model avoid generating responses based on outdated or incorrect information.
+
+> [!NOTE]
+> To get started, you need to already have been approved for [Azure OpenAI access](../overview.md#how-do-i-get-access-to-azure-openai) and have an [Azure OpenAI Service resource](../how-to/create-resource.md) with either the gpt-35-turbo or the gpt-4 models deployed.
+
+## What is Azure OpenAI on your data
+
+Azure OpenAI on your data works with OpenAI's powerful GPT-35-Turbo and GPT-4 language models, enabling them to provide responses based on your data. You can access Azure OpenAI on your data using a REST API or the web-based interface in the [Azure OpenAI Studio](https://oai.azure.com/) to create a solution that connects to your data to enable an enhanced chat experience.
+
+One of the key features of Azure OpenAI on your data is its ability to retrieve and utilize data in a way that enhances the model's output. Azure OpenAI on your data, together with Azure Cognitive Search, determines what data to retrieve from the designated data source based on the user input and provided conversation history. This data is then augmented and resubmitted as a prompt to the OpenAI model, with retrieved information being appended to the original prompt. Although retrieved data is being appended to the prompt, the resulting input is still processed by the model like any other prompt. Once the data has been retrieved and the prompt has been submitted to the model, the model uses this information to provide a completion. See the [Data, privacy, and security for Azure OpenAI Service](/legal/cognitive-services/openai/data-privacy?context=/azure/ai-services/openai/context/context) article for more information.
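+
+As one hedged illustration of the REST path, the sketch below posts a chat request to the preview extensions endpoint with an Azure Cognitive Search data source attached. The endpoint shape, field names (`dataSources`, `AzureCognitiveSearch`, `indexName`), API version, and all placeholder values are assumptions based on the preview at the time of writing, so confirm them against the [quickstart](../use-your-data-quickstart.md) before use.
+
+```python
+import os
+import requests
+
+# All names and values below are placeholders/assumptions; confirm the request shape in the quickstart
+aoai_endpoint = os.getenv("AZURE_OPENAI_ENDPOINT")   # e.g. https://your-resource.openai.azure.com
+aoai_key = os.getenv("AZURE_OPENAI_KEY")
+deployment = "gpt-35-turbo"                          # your chat model deployment name
+
+url = (
+    f"{aoai_endpoint}/openai/deployments/{deployment}"
+    "/extensions/chat/completions?api-version=2023-06-01-preview"
+)
+
+body = {
+    # Attach the Azure Cognitive Search index that grounds the responses
+    "dataSources": [
+        {
+            "type": "AzureCognitiveSearch",
+            "parameters": {
+                "endpoint": os.getenv("SEARCH_ENDPOINT"),  # https://your-search.search.windows.net
+                "key": os.getenv("SEARCH_KEY"),
+                "indexName": "your-index-name",
+            },
+        }
+    ],
+    "messages": [
+        {"role": "user", "content": "What do the earnings call transcripts say about revenue?"}
+    ],
+}
+
+response = requests.post(url, headers={"api-key": aoai_key}, json=body)
+response.raise_for_status()
+
+# The extensions response shape differs from the standard Chat Completions response;
+# inspect it (it includes the assistant reply plus tool/citation content)
+print(response.json())
+```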
+
+## Data source options
+
+Azure OpenAI on your data uses an [Azure Cognitive Search](/azure/search/search-what-is-azure-search) index to determine what data to retrieve based on user inputs and provided conversation history. We recommend using Azure OpenAI Studio to create your index from a blob storage or local files. See the [quickstart article](../use-your-data-quickstart.md?pivots=programming-language-studio) for more information.
+
+## Ingesting your data into Azure Cognitive Search
+
+For documents and datasets with long text, you should use the available [data preparation script](https://github.com/microsoft/sample-app-aoai-chatGPT/tree/main/scripts) to ingest the data into cognitive search. The script chunks the data so that the responses you get from the service are more accurate. This script also supports scanned PDF files and images, and ingests the data using [Document Intelligence](../../../ai-services/document-intelligence/overview.md).
++
+## Data formats and file types
+
+Azure OpenAI on your data supports the following file types:
+
+* `.txt`
+* `.md`
+* `.html`
+* Microsoft Word files
+* Microsoft PowerPoint files
+* PDF
+
+There are some caveats about document structure and how it might affect the quality of responses from the model:
+
+* The model provides the best citation titles from markdown (`.md`) files.
+
+* If a document is a PDF file, the text contents are extracted as a preprocessing step (unless you're connecting your own Azure Cognitive Search index). If your document contains images, graphs, or other visual content, the model's response quality depends on the quality of the text that can be extracted from them.
+
+* If you're converting data from an unsupported format into a supported format, make sure the conversion:
+
+ * Doesn't lead to significant data loss.
+ * Doesn't add unexpected noise to your data.
+
+ This will impact the quality of Azure Cognitive Search and the model response.
+
+## Virtual network support & private link support
+
+Azure OpenAI on your data does not currently support private endpoints.
+
+## Azure Role-based access controls (Azure RBAC)
+
+To add a new data source to your Azure OpenAI resource, you need the following Azure RBAC roles.
++
+|Azure RBAC role |Needed when |
+|||
+|[Cognitive Services OpenAI Contributor](/azure/role-based-access-control/built-in-roles#cognitive-services-openai-contributor) | You want to use Azure OpenAI on your data. |
+|[Search Index Data Contributor](/azure/role-based-access-control/built-in-roles#search-index-data-contributor) | You have an existing Azure Cognitive Search index that you want to use, instead of creating a new one. |
+|[Storage Blob Data Contributor](/azure/role-based-access-control/built-in-roles#storage-blob-data-contributor) | You have an existing Blob storage container that you want to use, instead of creating a new one. |
+
+## Recommended settings
+
+Use the following sections to help you configure Azure OpenAI on your data for optimal results.
+
+### System message
+
+Give the model instructions about how it should behave and any context it should reference when generating a response. You can describe the assistant's personality, what it should and shouldn't answer, and how to format responses. There's no token limit for the system message, but it will be included with every API call and counted against the overall token limit. The system message will be truncated if it's greater than 200 tokens.
+
+For example, if you're creating a chatbot where the data consists of transcriptions of quarterly financial earnings calls, you might use the following system message:
+
+*"You are a financial chatbot useful for answering questions from financial reports. You are given excerpts from the earnings call. Please answer the questions by parsing through all dialogue."*
+
+This system message can help improve the quality of the response by specifying the domain (in this case finance) and mentioning that the data consists of call transcriptions. It helps set the necessary context for the model to respond appropriately.
+
+> [!NOTE]
+> The system message is only guidance. The model might not adhere to every instruction specified because it has been primed with certain behaviors, such as objectivity and avoiding controversial statements. Unexpected behavior may occur if the system message contradicts these behaviors.
+
+### Maximum response
+
+Set a limit on the number of tokens per model response. The upper limit for Azure OpenAI on Your Data is 1500. This is equivalent to setting the `max_tokens` parameter in the API.
+
+### Limit responses to your data
+
+This option encourages the model to respond using your data only, and is selected by default. If you unselect this option, the model may more readily apply its internal knowledge to respond. Determine the correct selection based on your use case and scenario.
+
+### Semantic search
+
+> [!IMPORTANT]
+> * Semantic search is subject to [additional pricing](/azure/search/semantic-search-overview#availability-and-pricing)
+> * Currently Azure OpenAI on your data supports semantic search for English data only. Only enable semantic search if both your documents and use case are in English.
+
+If [semantic search](/azure/search/semantic-search-overview) is enabled for your Azure Cognitive Search service, you're likely to get better retrieval of your data, which can improve response and citation quality.
+
+### Index field mapping
+
+If you're using your own index, you will be prompted in the Azure OpenAI Studio to define which fields you want to map for answering questions when you add your data source. You can provide multiple fields for *Content data*, and should include all fields that have text pertaining to your use case.
++
+In this example, the fields mapped to **Content data** and **Title** provide information to the model to answer questions. **Title** is also used as the title of citation text. The field mapped to **File name** generates the citation names in the response.
+
+Mapping these fields correctly helps improve the model's response and citation quality.
+
+### Interacting with the model
+
+Use the following practices for best results when chatting with the model.
+
+**Conversation history**
+
+* Before starting a new conversation (or asking a question that is not related to the previous ones), clear the chat history.
+* You can expect different responses to the same question between the first conversational turn and subsequent turns, because the conversation history changes the current state of the model. If you receive incorrect answers, report them as a quality bug.
+
+**Model response**
+
+* If you are not satisfied with the model response for a specific question, try either making the question more specific or more generic to see how the model responds, and reframe your question accordingly.
+
+* [Chain-of-thought prompting](advanced-prompt-engineering.md?pivots=programming-language-chat-completions#chain-of-thought-prompting) has been shown to be effective in getting the model to produce desired outputs for complex questions/tasks.
+
+**Question length**
+
+Avoid asking long questions and break them down into multiple questions if possible. The GPT models have limits on the number of tokens they can accept. The following all count toward the token limit: the user question, the system message, the retrieved search documents (chunks), internal prompts, the conversation history (if any), and the response. If the question exceeds the token limit, it will be truncated.
+
+**Multi-lingual support**
+
+* Azure OpenAI on your data supports queries that are in the same language as the documents. For example, if your data is in Japanese, then queries need to be in Japanese too.
+
+* Currently Azure OpenAI on your data supports [semantic search](/azure/search/semantic-search-overview) for English data only. Don't enable semantic search if your data is in other languages.
+
+* We recommend using a system message to inform the model that your data is in another language. For example:
+
+ *"You are an AI assistant that helps people find information. You retrieve Japanese documents, and you should read them carefully in Japanese and answer in Japanese."*
+
+* If you have documents in multiple languages, we recommend building a new index for each language and connecting them separately to Azure OpenAI.
+
+### Using the web app
+
+You can use the available web app to interact with your model using a graphical user interface, which you can deploy using either [Azure OpenAI studio](../use-your-data-quickstart.md?pivots=programming-language-studio#deploy-a-web-app) or a [manual deployment](https://github.com/microsoft/sample-app-aoai-chatGPT).
+
+![A screenshot of the web app interface.](../media/use-your-data/web-app.png)
+
+You can also customize the app's frontend and backend logic. For example, you could change the icon that appears in the center of the app by updating `/frontend/src/assets/Azure.svg` and then redeploying the app [using the Azure CLI](https://github.com/microsoft/sample-app-aoai-chatGPT#deploy-with-the-azure-cli). See the source code for the web app, and more information [on GitHub](https://github.com/microsoft/sample-app-aoai-chatGPT).
+
+When customizing the app, we recommend:
+
+- Resetting the chat session (clear chat) if the user changes any settings. Notify the user that their chat history will be lost.
+
+- Clearly communicating the impact on the user experience that each setting you implement will have.
+
+- Updating the app settings for each of your deployed apps to use the new keys whenever you rotate API keys for your Azure OpenAI or Azure Cognitive Search resource.
+
+- Pulling changes from the `main` branch for the web app's source code frequently to ensure you have the latest bug fixes and improvements.
+
+### Using the API
+
+Consider setting the following parameters even if they are optional for using the API.
++
+|Parameter |Recommendation |
+|||
+|`fieldsMapping` | Explicitly set the title and content fields of your index. This affects the search retrieval quality of Azure Cognitive Search, which in turn affects the overall response and citation quality. |
+|`roleInformation` | Corresponds to the "System Message" in the Azure OpenAI Studio. See the [System message](#system-message) section above for recommendations. |
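+
+If you're calling the API directly, these parameters are set inside the `parameters` object of each `dataSources` entry. The following is a minimal Python sketch, not an official sample: the endpoint path, the preview API version, the field names inside `fieldsMapping`, and all resource names and keys are placeholder assumptions that you should verify against your own deployment and index schema.
+
+```python
+import requests
+
+# Placeholder values: replace with your own resource, deployment, search service, and keys.
+url = (
+    "https://YOUR_RESOURCE_NAME.openai.azure.com/openai/deployments/"
+    "YOUR_DEPLOYMENT_NAME/extensions/chat/completions?api-version=2023-06-01-preview"
+)
+
+body = {
+    "max_tokens": 1500,  # upper limit for Azure OpenAI on your data
+    "dataSources": [
+        {
+            "type": "AzureCognitiveSearch",
+            "parameters": {
+                "endpoint": "https://YOUR_SEARCH_SERVICE.search.windows.net",
+                "key": "YOUR_SEARCH_KEY",
+                "indexName": "YOUR_SEARCH_INDEX",
+                # Illustrative field names; map them to your own index schema.
+                "fieldsMapping": {
+                    "titleField": "title",
+                    "contentFields": ["content"],
+                    "filepathField": "filepath",
+                },
+                # Plays the same role as the system message in Azure OpenAI Studio.
+                "roleInformation": "You are a financial chatbot that answers questions from earnings call transcripts.",
+            },
+        }
+    ],
+    "messages": [{"role": "user", "content": "What was revenue growth last quarter?"}],
+}
+
+response = requests.post(url, headers={"api-key": "YOUR_API_KEY"}, json=body)
+print(response.json())
+```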
+
+#### Streaming data
+
+You can send a streaming request using the `stream` parameter, allowing data to be sent and received incrementally, without waiting for the entire API response. This can improve performance and user experience, especially for large or dynamic data.
+
+```json
+{
+ "stream": true,
+ "dataSources": [
+ {
+ "type": "AzureCognitiveSearch",
+ "parameters": {
+ "endpoint": "'$SearchEndpoint'",
+ "key": "'$SearchKey'",
+ "indexName": "'$SearchIndex'"
+ }
+ }
+ ],
+ "messages": [
+ {
+ "role": "user",
+ "content": "What are the differences between Azure Machine Learning and Azure AI services?"
+ }
+ ]
+}
+```
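+
+The service streams the response as incremental events. A minimal Python sketch for consuming the stream is shown below; the endpoint path, API version, resource names, and the assumption that each event arrives as a `data:`-prefixed line (ending with `data: [DONE]`) are placeholders or assumptions to verify against your own deployment.
+
+```python
+import json
+import requests
+
+url = (
+    "https://YOUR_RESOURCE_NAME.openai.azure.com/openai/deployments/"
+    "YOUR_DEPLOYMENT_NAME/extensions/chat/completions?api-version=2023-06-01-preview"
+)
+
+body = {
+    "stream": True,
+    "dataSources": [
+        {
+            "type": "AzureCognitiveSearch",
+            "parameters": {
+                "endpoint": "https://YOUR_SEARCH_SERVICE.search.windows.net",
+                "key": "YOUR_SEARCH_KEY",
+                "indexName": "YOUR_SEARCH_INDEX",
+            },
+        }
+    ],
+    "messages": [
+        {
+            "role": "user",
+            "content": "What are the differences between Azure Machine Learning and Azure AI services?",
+        }
+    ],
+}
+
+with requests.post(url, headers={"api-key": "YOUR_API_KEY"}, json=body, stream=True) as resp:
+    for line in resp.iter_lines():
+        if not line or not line.startswith(b"data: "):
+            continue
+        chunk = line[len(b"data: "):]
+        if chunk == b"[DONE]":
+            break
+        # The exact shape of each event depends on the API version; print it raw here.
+        print(json.loads(chunk))
+```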
+
+#### Conversation history for better results
+
+When chatting with a model, providing a history of the chat will help the model return higher quality results.
+
+```json
+{
+ "dataSources": [
+ {
+ "type": "AzureCognitiveSearch",
+ "parameters": {
+ "endpoint": "'$SearchEndpoint'",
+ "key": "'$SearchKey'",
+ "indexName": "'$SearchIndex'"
+ }
+ }
+ ],
+ "messages": [
+ {
+ "role": "user",
+ "content": "What are the differences between Azure Machine Learning and Azure AI services?"
+ },
+ {
+ "role": "tool",
+      "content": "{\"citations\": [{\"content\": \" Title: Azure AI services and Machine Learning\\n ...\"}]}"
+ },
+ {
+ "role": "assistant",
+ "content": " \nAzure Machine Learning is a product and service tailored for data scientists to build, train, and deploy machine learning models [doc1]..."
+ },
+ {
+ "role": "user",
+ "content": "How do I use Azure machine learning?"
+ }
+ ]
+}
+```
++
+## Next steps
+* [Get started using your data with Azure OpenAI](../use-your-data-quickstart.md)
+
+* [Introduction to prompt engineering](./prompt-engineering.md)
++
ai-services Dall E Quickstart https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/openai/dall-e-quickstart.md
+
+ Title: 'Quickstart - Generate an image using Azure OpenAI Service'
+
+description: Walkthrough on how to get started with Azure OpenAI and make your first image generation call.
++++++++ Last updated : 04/04/2023
+zone_pivot_groups: openai-quickstart-dall-e
++
+# Quickstart: Get started generating images using Azure OpenAI Service
++++++++++
ai-services Encrypt Data At Rest https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/openai/encrypt-data-at-rest.md
+
+ Title: Azure OpenAI Service encryption of data at rest
+description: Learn how Azure OpenAI encrypts your data when it's persisted to the cloud.
++++++ Last updated : 11/14/2022+++
+# Azure OpenAI Service encryption of data at rest
+
+Azure OpenAI automatically encrypts your data when it's persisted to the cloud. The encryption protects your data and helps you meet your organizational security and compliance commitments. This article covers how Azure OpenAI handles encryption of data at rest, specifically training data and fine-tuned models. For information on how data provided by you to the service is processed, used, and stored, consult the [data, privacy, and security article](/legal/cognitive-services/openai/data-privacy?context=/azure/ai-services/openai/context/context).
+
+## About Azure AI services encryption
+
+Azure OpenAI is part of Azure AI services. Azure AI services data is encrypted and decrypted using [FIPS 140-2](https://en.wikipedia.org/wiki/FIPS_140-2) compliant [256-bit AES](https://en.wikipedia.org/wiki/Advanced_Encryption_Standard) encryption. Encryption and decryption are transparent, meaning encryption and access are managed for you. Your data is secure by default and you don't need to modify your code or applications to take advantage of encryption.
+
+## About encryption key management
+
+By default, your subscription uses Microsoft-managed encryption keys. There's also the option to manage your subscription with your own keys called customer-managed keys (CMK). CMK offers greater flexibility to create, rotate, disable, and revoke access controls. You can also audit the encryption keys used to protect your data.
+
+## Customer-managed keys with Azure Key Vault
+
+Customer-managed keys (CMK), also known as Bring your own key (BYOK), offer greater flexibility to create, rotate, disable, and revoke access controls. You can also audit the encryption keys used to protect your data.
+
+You must use Azure Key Vault to store your customer-managed keys. You can either create your own keys and store them in a key vault, or you can use the Azure Key Vault APIs to generate keys. The Azure AI services resource and the key vault must be in the same region and in the same Azure Active Directory (Azure AD) tenant, but they can be in different subscriptions. For more information about Azure Key Vault, see [What is Azure Key Vault?](../../key-vault/general/overview.md).
+
+To request the ability to use customer-managed keys, fill out and submit the [Azure AI services Customer-Managed Key Request Form](https://aka.ms/cogsvc-cmk). It will take approximately 3-5 business days to hear back on the status of your request.
+
+To enable customer-managed keys, you must also enable both the **Soft Delete** and **Do Not Purge** properties on the key vault.
+
+Only RSA keys of size 2048 are supported with Azure AI services encryption. For more information about keys, see **Key Vault keys** in [About Azure Key Vault keys, secrets and certificates](../../key-vault/general/about-keys-secrets-certificates.md).
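+
+If you prefer to create the key programmatically rather than in the portal, the following sketch uses the `azure-keyvault-keys` Python SDK to create a 2048-bit RSA key. The vault URL and key name are placeholders, and the identity running the code is assumed to have key-creation permissions on the vault.
+
+```python
+from azure.identity import DefaultAzureCredential
+from azure.keyvault.keys import KeyClient
+
+# Placeholder vault URL and key name.
+client = KeyClient(
+    vault_url="https://YOUR_KEY_VAULT_NAME.vault.azure.net",
+    credential=DefaultAzureCredential(),
+)
+
+# Azure AI services encryption supports only 2048-bit RSA keys.
+key = client.create_rsa_key("openai-cmk", size=2048)
+print(key.id)  # the key identifier (URI) to use when configuring encryption
+```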
+
+## Enable customer-managed keys for your resource
+
+To enable customer-managed keys in the Azure portal, follow these steps:
+
+1. Go to your Azure AI services resource.
+1. On the left, select **Encryption**.
+1. Under **Encryption type**, select **Customer Managed Keys**, as shown in the following screenshot.
+
+> [!div class="mx-imgBorder"]
+> ![Screenshot of create a resource user experience](./media/encryption/encryption.png)
+
+## Specify a key
+
+After you enable customer-managed keys, you can specify a key to associate with the Azure AI services resource.
+
+### Specify a key as a URI
+
+To specify a key as a URI, follow these steps:
+
+1. In the Azure portal, go to your key vault.
+1. Under **Settings**, select **Keys**.
+1. Select the desired key, and then select the key to view its versions. Select a key version to view the settings for that version.
+1. Copy the **Key Identifier** value, which provides the URI.
+
+ :::image type="content" source="../media/cognitive-services-encryption/key-uri-portal.png" alt-text="Screenshot of the Azure portal page for a key version. The Key Identifier box contains a placeholder for a key URI.":::
+
+1. Go back to your Azure AI services resource, and then select **Encryption**.
+1. Under **Encryption key**, select **Enter key URI**.
+1. Paste the URI that you copied into the **Key URI** box.
+
+ :::image type="content" source="../media/cognitive-services-encryption/ssecmk2.png" alt-text="Screenshot of the Encryption page for an Azure AI services resource. The Enter key URI option is selected, and the Key URI box contains a value.":::
+
+1. Under **Subscription**, select the subscription that contains the key vault.
+1. Save your changes.
+
+### Specify a key from a key vault
+
+To specify a key from a key vault, first make sure that you have a key vault that contains a key. Then follow these steps:
+
+1. Go to your Azure AI services resource, and then select **Encryption**.
+1. Under **Encryption key**, select **Select from Key Vault**.
+1. Select the key vault that contains the key that you want to use.
+1. Select the key that you want to use.
+
+ :::image type="content" source="../media/cognitive-services-encryption/ssecmk3.png" alt-text="Screenshot of the Select key from Azure Key Vault page in the Azure portal. The Subscription, Key vault, Key, and Version boxes contain values.":::
+
+1. Save your changes.
+
+## Update the key version
+
+When you create a new version of a key, update the Azure AI services resource to use the new version. Follow these steps:
+
+1. Go to your Azure AI services resource, and then select **Encryption**.
+1. Enter the URI for the new key version. Alternately, you can select the key vault and then select the key again to update the version.
+1. Save your changes.
+
+## Use a different key
+
+To change the key that you use for encryption, follow these steps:
+
+1. Go to your Azure AI services resource, and then select **Encryption**.
+1. Enter the URI for the new key. Alternately, you can select the key vault and then select a new key.
+1. Save your changes.
+
+## Rotate customer-managed keys
+
+You can rotate a customer-managed key in Key Vault according to your compliance policies. When the key is rotated, you must update the Azure AI services resource to use the new key URI. To learn how to update the resource to use a new version of the key in the Azure portal, see [Update the key version](#update-the-key-version).
+
+Rotating the key doesn't trigger re-encryption of data in the resource. No further action is required from the user.
+
+## Revoke a customer-managed key
+
+You can revoke a customer-managed encryption key by changing the access policy, by changing the permissions on the key vault, or by deleting the key.
+
+To change the access policy of the managed identity that your resource uses, run the [az-keyvault-delete-policy](/cli/azure/keyvault#az-keyvault-delete-policy) command:
+
+```azurecli
+az keyvault delete-policy \
+ --resource-group <resource-group-name> \
+ --name <key-vault-name> \
+ --object-id <object-id-of-the-managed-identity>
+```
+
+To delete the individual versions of a key, run the [az-keyvault-key-delete](/cli/azure/keyvault/key#az-keyvault-key-delete) command. This operation requires the *keys/delete* permission.
+
+```azurecli
+az keyvault key delete \
+ --vault-name <key-vault-name> \
+ --name <key-name>
+```
+
+> [!IMPORTANT]
+> Revoking access to an active customer-managed key while CMK is still enabled will prevent downloading of training data and results files, fine-tuning new models, and deploying fine-tuned models. However, previously deployed fine-tuned models will continue to operate and serve traffic until those deployments are deleted.
+
+### Delete training, validation, and training results data
+
+ The Files API allows customers to upload their training data for the purpose of fine-tuning a model. This data is stored in Azure Storage, within the same region as the resource, and logically isolated by their Azure subscription and API credentials. Uploaded files can be deleted by the user via the [DELETE API operation](./how-to/fine-tuning.md?pivots=programming-language-python#delete-your-training-files).
+
+### Delete fine-tuned models and deployments
+
+The Fine-tunes API allows customers to create their own fine-tuned versions of the OpenAI models, based on the training data that they've uploaded to the service via the Files APIs. The trained fine-tuned models are stored in Azure Storage in the same region, encrypted at rest (either with Microsoft-managed keys or customer-managed keys) and logically isolated by their Azure subscription and API credentials. Fine-tuned models and deployments can be deleted by the user by calling the [DELETE API operation](./how-to/fine-tuning.md?pivots=programming-language-python#delete-your-model-deployment).
+
+## Disable customer-managed keys
+
+When you disable customer-managed keys, your Azure AI services resource is then encrypted with Microsoft-managed keys. To disable customer-managed keys, follow these steps:
+
+1. Go to your Azure AI services resource, and then select **Encryption**.
+1. Select **Microsoft Managed Keys** > **Save**.
+
+When you previously enabled customer-managed keys, a system-assigned managed identity (a feature of Azure AD) was also enabled. Once the system-assigned managed identity is enabled, the resource is registered with Azure Active Directory. After being registered, the managed identity is given access to the key vault selected during customer-managed key setup. You can learn more about [Managed Identities](../../active-directory/managed-identities-azure-resources/overview.md).
+
+> [!IMPORTANT]
+> If you disable system-assigned managed identities, access to the key vault will be removed and any data encrypted with the customer keys will no longer be accessible. Any features that depend on this data will stop working.
+
+> [!IMPORTANT]
+> Managed identities do not currently support cross-directory scenarios. When you configure customer-managed keys in the Azure portal, a managed identity is automatically assigned under the covers. If you subsequently move the subscription, resource group, or resource from one Azure AD directory to another, the managed identity associated with the resource is not transferred to the new tenant, so customer-managed keys may no longer work. For more information, see **Transferring a subscription between Azure AD directories** in [FAQs and known issues with managed identities for Azure resources](../../active-directory/managed-identities-azure-resources/known-issues.md#transferring-a-subscription-between-azure-ad-directories).
+
+## Next steps
+
+* [Language service Customer-Managed Key Request Form](https://aka.ms/cogsvc-cmk)
+* [Learn more about Azure Key Vault](../../key-vault/general/overview.md)
ai-services Business Continuity Disaster Recovery https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/openai/how-to/business-continuity-disaster-recovery.md
+
+ Title: 'Business Continuity and Disaster Recovery (BCDR) with Azure OpenAI Service'
+
+description: Considerations for implementing Business Continuity and Disaster Recovery (BCDR) with Azure OpenAI
+++++ Last updated : 6/21/2023++
+recommendations: false
+keywords:
+++
+# Business Continuity and Disaster Recovery (BCDR) considerations with Azure OpenAI Service
+
+Azure OpenAI is available in multiple regions. Because subscription keys are region-bound, when a customer acquires a key, they select the region in which their deployments will reside. From then on, all operations stay associated with that Azure region.
+
+It's rare, but not impossible, to encounter a network issue that hits an entire region. If your service needs to always be available, then you should design it to either fail over to another region or split the workload between two or more regions. Both approaches require at least two Azure OpenAI resources in different regions. This article provides general recommendations for how to implement Business Continuity and Disaster Recovery (BCDR) for your Azure OpenAI applications.
+
+## Best practices
+
+Today, customers call the endpoint provided during deployment for both deployment and inference operations. These operations are stateless, so no data is lost if a region becomes unavailable.
+
+If a region is non-operational, customers must take steps to ensure service continuity.
+
+## Business continuity
+
+The following set of instructions applies to both customers using default endpoints and those using custom endpoints.
+
+### Default endpoint recovery
+
+If you're using a default endpoint, you should configure your client code to monitor errors, and if the errors persist, be prepared to redirect to another region of your choice where you have an Azure OpenAI subscription.
+
+Follow these steps to configure your client to monitor errors:
+
+1. Use the [models page](../concepts/models.md) to identify the list of available regions for Azure OpenAI.
+
+2. Select a primary region and one or more secondary/backup regions from the list.
+
+3. Create Azure OpenAI resources for each region selected.
+
+4. For the primary region and any backup regions, your code will need to know:
+
+ a. Base URI for the resource
+
+ b. Regional access key or Azure Active Directory access
+
+5. Configure your code so that you monitor connectivity errors (typically connection timeouts and service unavailability errors).
+
+ a. Because networks can yield transient errors, retry after a single connectivity failure.
+
+ b. If errors persist, redirect traffic to the backup resource in the region you've created.
+
+## BCDR requires custom code
+
+Recovery from regional failures for this usage type can be performed instantaneously and at a very low cost. However, this requires custom development of the failover functionality on the client side of your application. A minimal sketch of such logic follows.
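+
+The following Python sketch illustrates one way to implement this retry-then-failover pattern with the OpenAI Python library. The resource names, keys, and deployment name are placeholders, and it assumes the same deployment name exists in both your primary and backup regions.
+
+```python
+import openai
+from openai.error import APIConnectionError, ServiceUnavailableError, Timeout
+
+# Placeholder resources: one Azure OpenAI resource per region, created ahead of time.
+REGIONS = [
+    {"base": "https://PRIMARY_RESOURCE_NAME.openai.azure.com", "key": "PRIMARY_KEY"},
+    {"base": "https://BACKUP_RESOURCE_NAME.openai.azure.com", "key": "BACKUP_KEY"},
+]
+
+def complete_with_failover(prompt, retries_per_region=1):
+    openai.api_type = "azure"
+    openai.api_version = "2023-05-15"
+    for region in REGIONS:
+        openai.api_base = region["base"]
+        openai.api_key = region["key"]
+        for _ in range(retries_per_region + 1):
+            try:
+                return openai.Completion.create(
+                    engine="YOUR_DEPLOYMENT_NAME", prompt=prompt, max_tokens=100
+                )
+            except (APIConnectionError, ServiceUnavailableError, Timeout):
+                continue  # transient error: retry, then fall through to the backup region
+    raise RuntimeError("All configured regions are unavailable.")
+
+print(complete_with_failover("Write a tagline for an ice cream shop."))
+```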
ai-services Chatgpt https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/openai/how-to/chatgpt.md
+
+ Title: How to work with the GPT-35-Turbo and GPT-4 models
+
+description: Learn about the options for how to use the GPT-35-Turbo and GPT-4 models
++++++ Last updated : 05/15/2023+
+keywords: ChatGPT
+zone_pivot_groups: openai-chat
++
+# Learn how to work with the GPT-35-Turbo and GPT-4 models
+
+The GPT-35-Turbo and GPT-4 models are language models that are optimized for conversational interfaces. The models behave differently than the older GPT-3 models. Previous models were text-in and text-out, meaning they accepted a prompt string and returned a completion to append to the prompt. However, the GPT-35-Turbo and GPT-4 models are conversation-in and message-out. The models expect input formatted in a specific chat-like transcript format, and return a completion that represents a model-written message in the chat. While this format was designed specifically for multi-turn conversations, you'll find it can work well for non-chat scenarios too.
+
+In Azure OpenAI there are two different options for interacting with these types of models:
+
+- Chat Completion API.
+- Completion API with Chat Markup Language (ChatML).
+
+The Chat Completion API is a new dedicated API for interacting with the GPT-35-Turbo and GPT-4 models. This API is the preferred method for accessing these models. **It is also the only way to access the new GPT-4 models**.
+
+ChatML uses the same [completion API](../reference.md#completions) that you use for other models like text-davinci-002, but it requires a unique token-based prompt format known as Chat Markup Language (ChatML). This provides lower-level access than the dedicated Chat Completion API, but it also requires additional input validation, only supports gpt-35-turbo models, and **the underlying format is more likely to change over time**.
+
+This article walks you through getting started with the GPT-35-Turbo and GPT-4 models. It's important to use the techniques described here to get the best results. If you try to interact with the models the same way you did with the older model series, the models will often be verbose and provide less useful responses.
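+
+As a point of reference, a minimal Chat Completion API call with the OpenAI Python library looks like the following sketch. The resource name, key, and deployment name are placeholders; replace them with your own values.
+
+```python
+import openai
+
+openai.api_type = "azure"
+openai.api_base = "https://YOUR_RESOURCE_NAME.openai.azure.com"
+openai.api_version = "2023-05-15"
+openai.api_key = "YOUR_API_KEY"
+
+response = openai.ChatCompletion.create(
+    engine="YOUR_DEPLOYMENT_NAME",  # a gpt-35-turbo or gpt-4 deployment
+    messages=[
+        {"role": "system", "content": "You are a helpful assistant."},
+        {"role": "user", "content": "Does Azure OpenAI support customer managed keys?"},
+    ],
+)
+print(response["choices"][0]["message"]["content"])
+```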
++++++
ai-services Completions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/openai/how-to/completions.md
+
+ Title: 'How to generate text with Azure OpenAI Service'
+
+description: Learn how to generate or manipulate text, including code with Azure OpenAI
+++++ Last updated : 06/24/2022++
+recommendations: false
+keywords:
+++
+# Learn how to generate or manipulate text
+
+The completions endpoint can be used for a wide variety of tasks. It provides a simple but powerful text-in, text-out interface to any of our [models](../concepts/models.md). You input some text as a prompt, and the model will generate a text completion that attempts to match whatever context or pattern you gave it. For example, if you give the API the prompt, "As Descartes said, I think, therefore", it will return the completion " I am" with high probability.
+
+The best way to start exploring completions is through our playground in [Azure OpenAI Studio](https://oai.azure.com). It's a simple text box where you can submit a prompt to generate a completion. You can start with a simple example like the following:
+
+`write a tagline for an ice cream shop`
+
+Once you submit, you'll see something like the following generated:
+
+``` console
+write a tagline for an ice cream shop
+we serve up smiles with every scoop!
+```
+
+The actual completion results you see may differ because the API is stochastic by default. In other words, you might get a slightly different completion every time you call it, even if your prompt stays the same. You can control this behavior with the temperature setting.
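+
+Outside the playground, the same prompt can be sent through the completions endpoint. The following is a minimal sketch using the OpenAI Python library; the resource name, key, and deployment name are placeholders.
+
+```python
+import openai
+
+openai.api_type = "azure"
+openai.api_base = "https://YOUR_RESOURCE_NAME.openai.azure.com"
+openai.api_version = "2023-05-15"
+openai.api_key = "YOUR_API_KEY"
+
+response = openai.Completion.create(
+    engine="YOUR_DEPLOYMENT_NAME",  # e.g. a text-davinci-003 deployment
+    prompt="write a tagline for an ice cream shop",
+    temperature=0.7,  # lower values make the completion more deterministic
+    max_tokens=60,
+)
+print(response["choices"][0]["text"])
+```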
+
+This simple, "text in, text out" interface means you can "program" the model by providing instructions or just a few examples of what you'd like it to do. Its success generally depends on the complexity of the task and quality of your prompt. A general rule is to think about how you would write a word problem for a middle school student to solve. A well-written prompt provides enough information for the model to know what you want and how it should respond.
+
+> [!NOTE]
+> Keep in mind that the models' training data cuts off in October 2019, so they may not have knowledge of current events. We plan to add more continuous training in the future.
+
+## Prompt design
+
+### Basics
+
+OpenAI's models can do everything from generating original stories to performing complex text analysis. Because they can do so many things, you have to be explicit in showing what you want. Showing, not just telling, is often the secret to a good prompt.
+
+The models try to predict what you want from the prompt. If you send the words "Give me a list of cat breeds," the model wouldn't automatically assume that you're asking for a list of cat breeds. You could as easily be asking the model to continue a conversation where the first words are "Give me a list of cat breeds" and the next ones are "and I'll tell you which ones I like." If the model only assumed that you wanted a list of cats, it wouldn't be as good at content creation, classification, or other tasks.
+
+There are three basic guidelines to creating prompts:
+
+**Show and tell.** Make it clear what you want either through instructions, examples, or a combination of the two. If you want the model to rank a list of items in alphabetical order or to classify a paragraph by sentiment, show it that's what you want.
+
+**Provide quality data.** If you're trying to build a classifier or get the model to follow a pattern, make sure that there are enough examples. Be sure to proofread your examples. The model is usually smart enough to see through basic spelling mistakes and still give you a response, but it might also assume the mistakes are intentional, which can affect the response.
+
+**Check your settings.** The temperature and top_p settings control how deterministic the model is in generating a response. If you're asking it for a response where there's only one right answer, then you'd want to set these settings to lower values. If you're looking for a response that's not obvious, then you might want to set them to higher values. The number one mistake people make with these settings is assuming that they're "cleverness" or "creativity" controls.
+
+### Troubleshooting
+
+If you're having trouble getting the API to perform as expected, follow this checklist:
+
+1. Is it clear what the intended generation should be?
+2. Are there enough examples?
+3. Did you check your examples for mistakes? (The API won't tell you directly)
+4. Are you using temp and top_p correctly?
+
+## Classification
+
+To create a text classifier with the API, we provide a description of the task and a few examples. In this demonstration we show the API how to classify the sentiment of Tweets.
+
+```console
+This is a tweet sentiment classifier
+
+Tweet: "I loved the new Batman movie!"
+Sentiment: Positive
+
+Tweet: "I hate it when my phone battery dies."
+Sentiment: Negative
+
+Tweet: "My day has been 👍"
+Sentiment: Positive
+
+Tweet: "This is the link to the article"
+Sentiment: Neutral
+
+Tweet: "This new music video blew my mind"
+Sentiment:
+```
+
+It's worth paying attention to several features in this example:
+
+**1. Use plain language to describe your inputs and outputs**
+We use plain language for the input "Tweet" and the expected output "Sentiment." For best practices, start with plain language descriptions. While you can often use shorthand or keys to indicate the input and output, when building your prompt it's best to start by being as descriptive as possible and then work backwards, removing extra words as long as the prompt's performance stays consistent.
+
+**2. Show the API how to respond to any case**
+In this example we provide multiple outcomes "Positive", "Negative" and "Neutral." A neutral outcome is important because there will be many cases where even a human would have a hard time determining if something is positive or negative and situations where it's neither.
+
+**3. You can use text and emoji**
+The classifier is a mix of text and emoji 👍. The API reads emoji and can even convert expressions to and from them.
+
+**4. You need fewer examples for familiar tasks**
+For this classifier we only provided a handful of examples. This is because the API already has an understanding of sentiment and the concept of a tweet. If you're building a classifier for something the API might not be familiar with, it might be necessary to provide more examples.
+
+### Improving the classifier's efficiency
+
+Now that we have a grasp of how to build a classifier, let's take that example and make it even more efficient so that we can use it to get multiple results back from one API call.
+
+```
+This is a tweet sentiment classifier
+
+Tweet: "I loved the new Batman movie!"
+Sentiment: Positive
+
+Tweet: "I hate it when my phone battery dies"
+Sentiment: Negative
+
+Tweet: "My day has been 👍"
+Sentiment: Positive
+
+Tweet: "This is the link to the article"
+Sentiment: Neutral
+
+Tweet text
+1. "I loved the new Batman movie!"
+2. "I hate it when my phone battery dies"
+3. "My day has been 👍"
+4. "This is the link to the article"
+5. "This new music video blew my mind"
+
+Tweet sentiment ratings:
+1: Positive
+2: Negative
+3: Positive
+4: Neutral
+5: Positive
+
+Tweet text
+1. "I can't stand homework"
+2. "This sucks. I'm bored 😠"
+3. "I can't wait for Halloween!!!"
+4. "My cat is adorable ❤️❤️"
+5. "I hate chocolate"
+
+Tweet sentiment ratings:
+1.
+```
+
+After showing the API how tweets are classified by sentiment, we provide it a list of tweets and then a list of sentiment ratings with the same number index. The API is able to pick up from the first example how a tweet is supposed to be classified. In the second example, it sees how to apply this to a list of tweets. This allows the API to rate five (and even more) tweets in just one API call.
+
+It's important to note that when you ask the API to create lists or evaluate text you need to pay extra attention to your probability settings (Top P or Temperature) to avoid drift.
+
+1. Make sure your probability setting is calibrated correctly by running multiple tests.
+
+2. Don't make your list too long or the API is likely to drift.
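+
+For example, the following sketch sends a batched classifier prompt with `temperature` set to 0 so that the ratings stay deterministic across runs. The resource name, key, and deployment name are placeholders.
+
+```python
+import openai
+
+openai.api_type = "azure"
+openai.api_base = "https://YOUR_RESOURCE_NAME.openai.azure.com"
+openai.api_version = "2023-05-15"
+openai.api_key = "YOUR_API_KEY"
+
+prompt = """This is a tweet sentiment classifier
+
+Tweet: "I loved the new Batman movie!"
+Sentiment: Positive
+
+Tweet: "I hate it when my phone battery dies"
+Sentiment: Negative
+
+Tweet text
+1. "I can't stand homework"
+2. "I can't wait for Halloween!!!"
+
+Tweet sentiment ratings:
+1."""
+
+response = openai.Completion.create(
+    engine="YOUR_DEPLOYMENT_NAME",
+    prompt=prompt,
+    temperature=0,  # keep ratings deterministic to avoid drift
+    max_tokens=20,
+)
+print(response["choices"][0]["text"])
+```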
+++
+## Generation
+
+One of the most powerful yet simplest tasks you can accomplish with the API is generating new ideas or versions of input. You can give the API a list of a few story ideas and it will try to add to that list. We've seen it create business plans, character descriptions and marketing slogans just by providing it a handful of examples. In this demonstration we'll use the API to create more examples for how to use virtual reality in the classroom:
+
+```
+Ideas involving education and virtual reality
+
+1. Virtual Mars
+Students get to explore Mars via virtual reality and go on missions to collect and catalog what they see.
+
+2.
+```
+
+All we had to do in this example was provide the API with a description of what the list is about and one example. We then prompted the API with the number `2.`, indicating that it's a continuation of the list.
+
+Although this is a very simple prompt, there are several details worth noting:
+
+**1. We explained the intent of the list**<br>
+Just like with the classifier, we tell the API up front what the list is about. This helps it focus on completing the list and not trying to guess what the pattern is behind it.
+
+**2. Our example sets the pattern for the rest of the list**<br>
+Because we provided a one-sentence description, the API is going to try to follow that pattern for the rest of the items it adds to the list. If we want a more verbose response, we need to set that up from the start.
+
+**3. We prompt the API by adding an incomplete entry**<br>
+When the API sees `2.` and the prompt abruptly ends, the first thing it tries to do is figure out what should come after it. Since we already had an example with number one and gave the list a title, the most obvious response is to continue adding items to the list.
+
+**Advanced generation techniques**<br>
+You can improve the quality of the responses by making a longer more diverse list in your prompt. One way to do that is to start off with one example, let the API generate more and select the ones that you like best and add them to the list. A few more high-quality variations can dramatically improve the quality of the responses.
+++
+## Conversation
+
+The API is extremely adept at carrying on conversations with humans and even with itself. With just a few lines of instruction, we've seen the API perform as a customer service chatbot that intelligently answers questions without ever getting flustered, or as a wise-cracking conversation partner that makes jokes and puns. The key is to tell the API how it should behave and then provide a few examples.
+
+Here's an example of the API playing the role of an AI answering questions:
+
+```
+The following is a conversation with an AI assistant. The assistant is helpful, creative, clever, and very friendly.
+
+Human: Hello, who are you?
+AI: I am an AI created by OpenAI. How can I help you today?
+Human:
+```
+
+This is all it takes to create a chatbot capable of carrying on a conversation. But underneath its simplicity there are several things going on that are worth paying attention to:
+
+**1. We tell the API the intent but we also tell it how to behave**
+Just like the other prompts, we cue the API into what the example represents, but we also add another key detail: we give it explicit instructions on how to interact, with the phrase "The assistant is helpful, creative, clever, and very friendly."
+
+Without that instruction the API might stray and mimic the human it's interacting with and become sarcastic or some other behavior we want to avoid.
+
+**2. We give the API an identity**
+At the start we have the API respond as an AI that was created by OpenAI. While the API has no intrinsic identity, this helps it respond in a way that's as close to the truth as possible. You can use identity in other ways to create other kinds of chatbots. If you tell the API to respond as a woman who works as a research scientist in biology, you'll get intelligent and thoughtful comments from the API similar to what you'd expect from someone with that background.
+
+In this example we create a chatbot that is a bit sarcastic and reluctantly answers questions:
+
+```
+Marv is a chatbot that reluctantly answers questions.
+
+###
+User: How many pounds are in a kilogram?
+Marv: This again? There are 2.2 pounds in a kilogram. Please make a note of this.
+###
+User: What does HTML stand for?
+Marv: Was Google too busy? Hypertext Markup Language. The T is for try to ask better questions in the future.
+###
+User: When did the first airplane fly?
+Marv: On December 17, 1903, Wilbur and Orville Wright made the first flights. I wish they'd come and take me away.
+###
+User: Who was the first man in space?
+Marv:
+```
+
+To create an amusing and somewhat helpful chatbot we provide a few examples of questions and answers showing the API how to reply. All it takes is just a few sarcastic responses and the API is able to pick up the pattern and provide an endless number of snarky responses.
+++
+## Transformation
+
+The API is a language model that is familiar with a variety of ways that words and characters can be used to express information. This ranges from natural language text to code and languages other than English. The API is also able to understand content on a level that allows it to summarize, convert and express it in different ways.
+
+### Translation
+
+In this example we show the API how to convert from English to French:
+
+```
+English: I do not speak French.
+French: Je ne parle pas français.
+English: See you later!
+French: À tout à l'heure!
+English: Where is a good restaurant?
+French: Où est un bon restaurant?
+English: What rooms do you have available?
+French: Quelles chambres avez-vous de disponible?
+English:
+```
+
+This example works because the API already has a grasp of French, so there's no need to try to teach it this language. Instead, we just need to provide enough examples that the API understands it's converting from one language to another.
+
+If you want to translate from English to a language the API is unfamiliar with, you'd need to provide it with more examples and a fine-tuned model to do it fluently.
+
+### Conversion
+
+In this example we convert the name of a movie into emoji. This shows the API's adaptability in picking up patterns and working with other characters.
+
+```
+Back to Future: 👨👴🚗🕒
+Batman: 🤵🦇
+Transformers: 🚗🤖
+Wonder Woman: 👸🏻👸🏼👸🏽👸🏾👸🏿
+Spider-Man: 🕸🕷🕸🕸🕷🕸
+Winnie the Pooh: 🐻🐼🐻
+The Godfather: 👨👩👧🕵🏻‍♂️👲💥
+Game of Thrones: 🏹🗡🗡🏹
+Spider-Man:
+```
+
+## Summarization
+
+The API is able to grasp the context of text and rephrase it in different ways. In this example, the API takes a block of text and creates an explanation a child would understand. This illustrates that the API has a deep grasp of language.
+
+```
+My ten-year-old asked me what this passage means:
+"""
+A neutron star is the collapsed core of a massive supergiant star, which had a total mass of between 10 and 25 solar masses, possibly more if the star was especially metal-rich.[1] Neutron stars are the smallest and densest stellar objects, excluding black holes and hypothetical white holes, quark stars, and strange stars.[2] Neutron stars have a radius on the order of 10 kilometres (6.2 mi) and a mass of about 1.4 solar masses.[3] They result from the supernova explosion of a massive star, combined with gravitational collapse, that compresses the core past white dwarf star density to that of atomic nuclei.
+"""
+
+I rephrased it for him, in plain language a ten-year-old can understand:
+"""
+```
+
+In this example we place whatever we want summarized between the triple quotes. It's worth noting that we explain both before and after the text to be summarized what our intent is and who the target audience is for the summary. This is to keep the API from drifting after it processes a large block of text.
+
+## Completion
+
+While all prompts result in completions, it can be helpful to think of text completion as its own task in instances where you want the API to pick up where you left off. For example, if given this prompt, the API will continue the train of thought about vertical farming. You can lower the temperature setting to keep the API more focused on the intent of the prompt or increase it to let it go off on a tangent.
+
+```
+Vertical farming provides a novel solution for producing food locally, reducing transportation costs and
+```
+
+This next prompt shows how you can use completion to help write React components. We send some code to the API, and it's able to continue the rest because it has an understanding of the React library. We recommend using models from our Codex series for tasks that involve understanding or generating code. Currently, we support two Codex models: `code-davinci-002` and `code-cushman-001`. For more information about Codex models, see the [Codex models](../concepts/legacy-models.md#codex-models) section in [Models](../concepts/models.md).
+
+```
+import React from 'react';
+const HeaderComponent = () => (
+```
+++
+## Factual responses
+
+The API has a lot of knowledge that it's learned from the data it was trained on. It also has the ability to provide responses that sound very real but are in fact made up. There are two ways to limit the likelihood of the API making up an answer.
+
+**1. Provide a ground truth for the API**
+If you provide the API with a body of text to answer questions about (like a Wikipedia entry) it will be less likely to confabulate a response.
+
+**2. Use a low probability and show the API how to say "I don't know"**
+If the API understands that saying "I don't know" (or some variation) is appropriate when it's less certain about a response, it will be less inclined to make up answers.
+
+In this example we give the API examples of questions and answers it knows and then examples of things it wouldn't know and provide question marks. We also set the probability to zero so the API is more likely to respond with a "?" if there's any doubt.
+
+```
+Q: Who is Batman?
+A: Batman is a fictional comic book character.
+
+Q: What is torsalplexity?
+A: ?
+
+Q: What is Devz9?
+A: ?
+
+Q: Who is George Lucas?
+A: George Lucas is an American film director and producer famous for creating Star Wars.
+
+Q: What is the capital of California?
+A: Sacramento.
+
+Q: What orbits the Earth?
+A: The Moon.
+
+Q: Who is Fred Rickerson?
+A: ?
+
+Q: What is an atom?
+A: An atom is a tiny particle that makes up everything.
+
+Q: Who is Alvan Muntz?
+A: ?
+
+Q: What is Kozar-09?
+A: ?
+
+Q: How many moons does Mars have?
+A: Two, Phobos and Deimos.
+
+Q:
+```
+## Working with code
+
+The Codex model series is a descendant of OpenAI's base GPT-3 series that's been trained on both natural language and billions of lines of code. It's most capable in Python and proficient in over a dozen languages including C#, JavaScript, Go, Perl, PHP, Ruby, Swift, TypeScript, SQL, and even Shell.
+
+Learn more about generating code completions with the [working with code guide](./work-with-code.md).
+
+## Next steps
+
+* Learn [how to work with code (Codex)](./work-with-code.md).
+* Learn more about the [underlying models that power Azure OpenAI](../concepts/models.md).
ai-services Content Filters https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/openai/how-to/content-filters.md
+
+ Title: 'How to use content filters (preview) with Azure OpenAI Service'
+
+description: Learn how to use content filters (preview) with Azure OpenAI Service
+++++ Last updated : 6/5/2023++
+recommendations: false
+keywords:
++
+# How to configure content filters with Azure OpenAI Service
+
+> [!NOTE]
+> All customers have the ability to modify the content filters to be stricter (for example, to filter content at lower severity levels than the default). Approval is required for full content filtering control, including (i) configuring content filters at severity level high only, or (ii) turning the content filters off. Managed customers only may apply for full content filtering control via this form: [Azure OpenAI Limited Access Review: Modified Content Filters and Abuse Monitoring (microsoft.com)](https://customervoice.microsoft.com/Pages/ResponsePage.aspx?id=v4j5cvGGr0GRqy180BHbR7en2Ais5pxKtso_Pz4b1_xURE01NDY1OUhBRzQ3MkQxMUhZSE1ZUlJKTiQlQCN0PWcu).
+
+The content filtering system integrated into Azure OpenAI Service runs alongside the core models and uses an ensemble of multi-class classification models to detect four categories of harmful content (violence, hate, sexual, and self-harm) at four severity levels respectively (safe, low, medium, and high). The default content filtering configuration is set to filter at the medium severity threshold for all four content harms categories for both prompts and completions. That means that content that is detected at severity level medium or high is filtered, while content detected at severity level low or safe is not filtered by the content filters. Learn more about content categories, severity levels, and the behavior of the content filtering system [here](../concepts/content-filter.md).
+
+Content filters can be configured at resource level. Once a new configuration is created, it can be associated with one or more deployments. For more information about model deployment, see the [resource deployment guide](create-resource.md).
+
+The configurability feature is available in preview and allows customers to adjust the settings, separately for prompts and completions, to filter content for each content category at different severity levels as described in the table below. Content detected at the 'safe' severity level is labeled in annotations but is not subject to filtering and is not configurable.
+
+| Severity filtered | Configurable for prompts | Configurable for completions | Descriptions |
+|-|--||--|
+| Low, medium, high | Yes | Yes | Strictest filtering configuration. Content detected at severity levels low, medium and high is filtered.|
+| Medium, high | Yes | Yes | Default setting. Content detected at severity level low is not filtered, content at medium and high is filtered.|
+| High | If approved<sup>\*</sup>| If approved<sup>\*</sup> | Content detected at severity levels low and medium is not filtered. Only content at severity level high is filtered. Requires approval<sup>\*</sup>.|
+| No filters | If approved<sup>\*</sup>| If approved<sup>\*</sup>| No content is filtered regardless of severity level detected. Requires approval<sup>\*</sup>.|
+
+<sup>\*</sup> Only approved customers have full content filtering control, including configuring content filters at severity level high only or turning the content filters off. Managed customers only can apply for full content filtering control via this form: [Azure OpenAI Limited Access Review: Modified Content Filters and Abuse Monitoring (microsoft.com)](https://customervoice.microsoft.com/Pages/ResponsePage.aspx?id=v4j5cvGGr0GRqy180BHbR7en2Ais5pxKtso_Pz4b1_xURE01NDY1OUhBRzQ3MkQxMUhZSE1ZUlJKTiQlQCN0PWcu)
+
+## Configuring content filters via Azure OpenAI Studio (preview)
+
+The following steps show how to set up a customized content filtering configuration for your resource.
+
+1. Go to Azure OpenAI Studio and navigate to the Content Filters tab (in the bottom left navigation, as designated by the red box below).
+
+ :::image type="content" source="../media/content-filters/studio.png" alt-text="Screenshot of the AI Studio UI with Content Filters highlighted" lightbox="../media/content-filters/studio.png":::
+
+2. Create a new customized content filtering configuration.
+
+ :::image type="content" source="../media/content-filters/create-filter.png" alt-text="Screenshot of the content filtering configuration UI with create selected" lightbox="../media/content-filters/create-filter.png":::
+
+ This leads to the following configuration view, where you can choose a name for the custom content filtering configuration.
+
+ :::image type="content" source="../media/content-filters/filter-view.png" alt-text="Screenshot of the content filtering configuration UI" lightbox="../media/content-filters/filter-view.png":::
+
+3. This is the view of the default content filtering configuration, where content is filtered at medium and high severity levels for all categories. You can modify the content filtering severity level for both prompts and completions separately (configuration for prompts is in the left column and configuration for completions is in the right column, as designated with the blue boxes below) for each of the four content categories (content categories are listed on the left side of the screen, as designated with the green box below). There are three severity levels for each category that are partially or fully configurable: Low, medium, and high (labeled at the top of each column, as designated with the red box below).
+
+ :::image type="content" source="../media/content-filters/severity-level.png" alt-text="Screenshot of the content filtering configuration UI with user prompts and model completions highlighted" lightbox="../media/content-filters/severity-level.png":::
+
+4. If you determine that your application or usage scenario requires stricter filtering for some or all content categories, you can configure the settings, separately for prompts and completions, to filter at more severity levels than the default setting. An example is shown in the image below, where the filtering level for user prompts is set to the strictest configuration for hate and sexual, with low severity content filtered along with content classified as medium and high severity (outlined in the red box below). In the example, the filtering levels for model completions are set at the strictest configuration for all content categories (blue box below). With this modified filtering configuration in place, low, medium, and high severity content will be filtered for the hate and sexual categories in user prompts; medium and high severity content will be filtered for the self-harm and violence categories in user prompts; and low, medium, and high severity content will be filtered for all content categories in model completions.
+
+ :::image type="content" source="../media/content-filters/settings.png" alt-text="Screenshot of the content filtering configuration with low, medium, high, highlighted." lightbox="../media/content-filters/settings.png":::
+
+5. If your use case was approved for modified content filters as outlined above, you will receive full control over content filtering configurations. With full control, you can choose to turn filtering off, or filter only at severity level high, while accepting low and medium severity content. In the image below, filtering for the categories of self-harm and violence is turned off for user prompts (red box below), while default configurations are retained for other categories for user prompts. For model completions, only high severity content is filtered for the category self-harm (blue box below), and filtering is turned off for violence (green box below), while default configurations are retained for other categories.
+
+ :::image type="content" source="../media/content-filters/off.png" alt-text="Screenshot of the content filtering configuration with self harm and violence set to off." lightbox="../media/content-filters/off.png":::
+
+ You can create multiple content filtering configurations as per your requirements.
+
+ :::image type="content" source="../media/content-filters/multiple.png" alt-text="Screenshot of the content filtering configuration with multiple content filters configured." lightbox="../media/content-filters/multiple.png":::
+
+6. Next, to make a custom content filtering configuration operational, assign a configuration to one or more deployments in your resource. To do this, go to the **Deployments** tab and select **Edit deployment** (outlined near the top of the screen in a red box below).
+
+ :::image type="content" source="../media/content-filters/edit-deployment.png" alt-text="Screenshot of the content filtering configuration with edit deployment highlighted." lightbox="../media/content-filters/edit-deployment.png":::
+
+7. Go to advanced options (outlined in the blue box below) and select the content filter configuration suitable for that deployment from the **Content Filter** dropdown (outlined near the bottom of the dialog box in the red box below).
+
+ :::image type="content" source="../media/content-filters/advanced.png" alt-text="Screenshot of edit deployment configuration with advanced options selected." lightbox="../media/content-filters/select-filter.png":::
+
+8. Select **Save and close** to apply the selected configuration to the deployment.
+
+ :::image type="content" source="../media/content-filters/select-filter.png" alt-text="Screenshot of edit deployment configuration with content filter selected." lightbox="../media/content-filters/select-filter.png":::
+
+9. You can also edit and delete a content filter configuration if required. To do this, navigate to the content filters tab and select the desired action (options outlined near the top of the screen in the red box below). You can edit/delete only one filtering configuration at a time.
+
+ :::image type="content" source="../media/content-filters/delete.png" alt-text="Screenshot of content filter configuration with edit and delete highlighted." lightbox="../media/content-filters/delete.png":::
+
+ > [!NOTE]
+ > Before deleting a content filtering configuration, you will need to unassign it from any deployment in the Deployments tab.
+
+## Best practices
+
+We recommend informing your content filtering configuration decisions through an iterative identification (for example, red team testing, stress-testing, and analysis) and measurement process to address the potential harms that are relevant for a specific model, application, and deployment scenario. After implementing mitigations such as content filtering, repeat measurement to test effectiveness. Recommendations and best practices for Responsible AI for Azure OpenAI, grounded in the [Microsoft Responsible AI Standard](https://aka.ms/RAI), can be found in the [Responsible AI Overview for Azure OpenAI](/legal/cognitive-services/openai/overview?context=/azure/ai-services/openai/context/context).
+
+## Next steps
+
+- Learn more about Responsible AI practices for Azure OpenAI: [Overview of Responsible AI practices for Azure OpenAI models](/legal/cognitive-services/openai/overview?context=/azure/ai-services/openai/context/context).
+- Read more about [content filtering categories and severity levels](../concepts/content-filter.md) with Azure OpenAI Service.
+- Learn more about red teaming from our: [Introduction to red teaming large language models (LLMs) article](../concepts/red-teaming.md).
ai-services Create Resource https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/openai/how-to/create-resource.md
+
+ Title: 'How-to - Create a resource and deploy a model using Azure OpenAI Service'
+
+description: Walkthrough on how to get started with Azure OpenAI and make your first resource and deploy your first model.
++++++ Last updated : 02/02/2023
+zone_pivot_groups: openai-create-resource
++
+recommendations: false
++
+# Create a resource and deploy a model using Azure OpenAI
+
+Use this article to get started with Azure OpenAI with step-by-step instructions to create a resource and deploy a model. While the steps for resource creation and model deployment can be completed in a few minutes, the actual deployment process itself can take more than an hour. You can create your resource, start your deployment, and then check back in on your deployment later rather than actively waiting for the deployment to complete.
+++++++
+## Next steps
+
+* Now that you have a resource and your first model deployed, get started making API calls and generating text with our [quickstarts](../quickstart.md).
+* Learn more about the [underlying models that power Azure OpenAI](../concepts/models.md).
ai-services Embeddings https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/openai/how-to/embeddings.md
+
+ Title: 'How to generate embeddings with Azure OpenAI Service'
+
+description: Learn how to generate embeddings with Azure OpenAI
+++++ Last updated : 5/9/2023++
+recommendations: false
+keywords:
++
+# Learn how to generate embeddings with Azure OpenAI
+
+An embedding is a special format of data representation that can be easily utilized by machine learning models and algorithms. The embedding is an information-dense representation of the semantic meaning of a piece of text. Each embedding is a vector of floating point numbers, such that the distance between two embeddings in the vector space is correlated with the semantic similarity between the two inputs in the original format. For example, if two texts are similar, then their vector representations should also be similar.
+
+## How to get embeddings
+
+To obtain an embedding vector for a piece of text, we make a request to the embeddings endpoint as shown in the following code snippets:
+
+# [console](#tab/console)
+```console
+curl https://YOUR_RESOURCE_NAME.openai.azure.com/openai/deployments/YOUR_DEPLOYMENT_NAME/embeddings?api-version=2023-05-15 \
+ -H 'Content-Type: application/json' \
+ -H 'api-key: YOUR_API_KEY' \
+ -d '{"input": "Sample Document goes here"}'
+```
+
+# [python](#tab/python)
+```python
+import openai
+
+openai.api_type = "azure"
+openai.api_key = YOUR_API_KEY
+openai.api_base = "https://YOUR_RESOURCE_NAME.openai.azure.com"
+openai.api_version = "2023-05-15"
+
+response = openai.Embedding.create(
+ input="Your text string goes here",
+ engine="YOUR_DEPLOYMENT_NAME"
+)
+embeddings = response['data'][0]['embedding']
+print(embeddings)
+```
+
+# [C#](#tab/csharp)
+```csharp
+using Azure;
+using Azure.AI.OpenAI;
+
+Uri oaiEndpoint = new ("https://YOUR_RESOURCE_NAME.openai.azure.com");
+string oaiKey = "YOUR_API_KEY";
+
+AzureKeyCredential credentials = new (oaiKey);
+
+OpenAIClient openAIClient = new (oaiEndpoint, credentials);
+
+EmbeddingsOptions embeddingOptions = new ("Your text string goes here");
+
+var returnValue = openAIClient.GetEmbeddings("YOUR_DEPLOYMENT_NAME", embeddingOptions);
+
+foreach (float item in returnValue.Value.Data[0].Embedding)
+{
+ Console.WriteLine(item);
+}
+```
+++
+## Best practices
+
+### Verify inputs don't exceed the maximum length
+
+The maximum length of input text for our embedding models is 2048 tokens (equivalent to around 2-3 pages of text). You should verify that your inputs don't exceed this limit before making a request.
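+
+For example, a minimal sketch of checking token counts locally before sending a request. This assumes the `tiktoken` package is installed and that it recognizes your embedding model name (the model name below is a placeholder); it isn't part of the Azure OpenAI API itself.
+
+```python
+# Rough pre-flight check: count tokens locally before calling the embeddings endpoint.
+# Assumes `pip install tiktoken`; the model name is a placeholder.
+import tiktoken
+
+MAX_INPUT_TOKENS = 2048
+
+def is_within_limit(text: str, model: str = "text-embedding-ada-002") -> bool:
+    try:
+        encoding = tiktoken.encoding_for_model(model)
+    except KeyError:
+        # Fall back to a general-purpose encoding if the model isn't recognized.
+        encoding = tiktoken.get_encoding("cl100k_base")
+    return len(encoding.encode(text)) <= MAX_INPUT_TOKENS
+
+print(is_within_limit("Sample Document goes here"))
+```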
+
+### Choose the best model for your task
+
+For the search models, you can obtain embeddings in two ways. The `<search_model>-doc` model is used for longer pieces of text (to be searched over) and the `<search_model>-query` model is used for shorter pieces of text, typically queries or class labels in zero shot classification. You can read more about all of the Embeddings models in our [Models](../concepts/models.md) guide.
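+
+For example, once you have a query embedding and one or more document embeddings (as lists of floats), you can rank documents by cosine similarity. A minimal, dependency-free sketch with tiny made-up vectors (real embeddings have hundreds of dimensions):
+
+```python
+# Rank documents by cosine similarity to a query embedding.
+import math
+
+def cosine_similarity(a, b):
+    dot = sum(x * y for x, y in zip(a, b))
+    norm_a = math.sqrt(sum(x * x for x in a))
+    norm_b = math.sqrt(sum(x * x for x in b))
+    return dot / (norm_a * norm_b)
+
+query_embedding = [0.10, 0.30, 0.50]
+doc_embeddings = {"doc1": [0.11, 0.29, 0.52], "doc2": [0.90, -0.20, 0.05]}
+
+ranked = sorted(doc_embeddings.items(),
+                key=lambda kv: cosine_similarity(query_embedding, kv[1]),
+                reverse=True)
+print(ranked)  # doc1 should rank above doc2
+```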
+
+### Replace newlines with a single space
+
+Unless you're embedding code, we suggest replacing newlines (\n) in your input with a single space, as we have observed inferior results when newlines are present.
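+
+A one-line sketch of that normalization before sending text to the embeddings endpoint:
+
+```python
+# Replace newlines with a single space before embedding (skip this for code inputs).
+text = "First line\nSecond line\r\nThird line"
+normalized = text.replace("\r\n", " ").replace("\n", " ")
+print(normalized)  # "First line Second line Third line"
+```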
+
+## Limitations & risks
+
+Our embedding models may be unreliable or pose social risks in certain cases, and may cause harm in the absence of mitigations. Review our Responsible AI content for more information on how to approach their use responsibly.
+
+## Next steps
+
+* Learn more about using Azure OpenAI and embeddings to perform document search with our [embeddings tutorial](../tutorials/embeddings.md).
+* Learn more about the [underlying models that power Azure OpenAI](../concepts/models.md).
ai-services Fine Tuning https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/openai/how-to/fine-tuning.md
+
+ Title: 'How to customize a model with Azure OpenAI Service'
+
+description: Learn how to create your own customized model with Azure OpenAI
++++++ Last updated : 04/05/2023++
+zone_pivot_groups: openai-fine-tuning
+keywords:
+
+# Learn how to customize a model for your application
+
+Azure OpenAI Service lets you tailor our models to your personal datasets using a process known as *fine-tuning*. This customization step will let you get more out of the service by providing:
+
+- Higher quality results than what you can get just from prompt design
+- The ability to train on more examples than can fit into a prompt
+- Lower-latency requests
+
+A customized model improves on the few-shot learning approach by training the model's weights on your specific prompts and structure. The customized model lets you achieve better results on a wider number of tasks without needing to provide examples in your prompt. The result is less text sent and fewer tokens processed on every API call, saving cost and improving request latency.
+
+> [!NOTE]
+> There is a breaking change in the `create` fine-tunes command in the latest 12-01-2022 GA API. For the latest command syntax, consult the [reference documentation](/rest/api/cognitiveservices/azureopenaistable/fine-tunes/create).
+++++++++
ai-services Integrate Synapseml https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/openai/how-to/integrate-synapseml.md
+
+ Title: 'How-to - Use Azure OpenAI Service with large datasets'
+
+description: Walkthrough on how to integrate Azure OpenAI with SynapseML and Apache Spark to apply large language models at a distributed scale.
++++++ Last updated : 08/04/2022++
+recommendations: false
++
+# Use Azure OpenAI with large datasets
+
+Azure OpenAI can be used to solve a large number of natural language tasks by prompting the completion API. To make it easier to scale your prompting workflows from a few examples to large datasets of examples, we have integrated the Azure OpenAI Service with the distributed machine learning library [SynapseML](https://www.microsoft.com/research/blog/synapseml-a-simple-multilingual-and-massively-parallel-machine-learning-library/). This integration makes it easy to use the [Apache Spark](https://spark.apache.org/) distributed computing framework to process millions of prompts with Azure OpenAI. This tutorial shows how to apply large language models at a distributed scale using Azure OpenAI and Azure Synapse Analytics.
+
+## Prerequisites
+
+- An Azure subscription - <a href="https://azure.microsoft.com/free/cognitive-services" target="_blank">Create one for free</a>
+- Access granted to Azure OpenAI in the desired Azure subscription
+
+ Currently, access to this service is granted only by application. You can apply for access to Azure OpenAI by completing the form at <a href="https://aka.ms/oai/access" target="_blank">https://aka.ms/oai/access</a>. If you have a problem gaining access, open an issue on this repo to contact us.
+- An Azure OpenAI resource - [create a resource](create-resource.md?pivots=web-portal#create-a-resource)
+- An Apache Spark cluster with SynapseML installed - create a serverless Apache Spark pool [here](../../../synapse-analytics/get-started-analyze-spark.md#create-a-serverless-apache-spark-pool)
+
+We recommend [creating a Synapse workspace](../../../synapse-analytics/get-started-create-workspace.md), but an Azure Databricks, HDInsight, or Spark on Kubernetes cluster, or even a Python environment with the `pyspark` package, will also work.
+
+## Import this guide as a notebook
+
+The next step is to add this code into your Spark cluster. You can either create a notebook in your Spark platform and copy the code into this notebook to run the demo, or download the notebook and import it into Synapse Analytics.
+
+1. [Download this demo as a notebook](https://github.com/microsoft/SynapseML/blob/master/notebooks/features/cognitive_services/CognitiveServices%20-%20OpenAI.ipynb) (select Raw, then save the file)
+1. Import the notebook [into the Synapse Workspace](../../../synapse-analytics/spark/apache-spark-development-using-notebooks.md#create-a-notebook) or, if using Databricks, [into the Databricks Workspace](/azure/databricks/notebooks/notebooks-manage#create-a-notebook)
+1. Install SynapseML on your cluster. See the installation instructions for Synapse at the bottom of [the SynapseML website](https://microsoft.github.io/SynapseML/). This requires pasting another cell at the top of the notebook you imported
+1. Connect your notebook to a cluster and follow along, editing and running the cells below.
+
+## Fill in your service information
+
+Next, edit the cell in the notebook to point to your service. In particular, set the `resource_name`, `deployment_name`, `location`, and `key` variables to the corresponding values for your Azure OpenAI resource.
+
+> [!IMPORTANT]
+> Remember to remove the key from your code when you're done, and never post it publicly. For production, use a secure way of storing and accessing your credentials like [Azure Key Vault](../../../key-vault/general/overview.md). See the Azure AI services [security](../../security-features.md) article for more information.
+
+```python
+import os
+
+# Replace the following values with your Azure OpenAI resource information
+resource_name = "RESOURCE_NAME" # The name of your Azure OpenAI resource.
+deployment_name = "DEPLOYMENT_NAME" # The name of your Azure OpenAI deployment.
+location = "RESOURCE_LOCATION" # The location or region ID for your resource.
+key = "RESOURCE_API_KEY" # The key for your resource.
+
+assert key is not None and resource_name is not None
+```
+
+## Create a dataset of prompts
+
+Next, create a dataframe consisting of a series of rows, with one prompt per row.
+
+You can also load data directly from Azure Data Lake Storage (ADLS) or other databases. For more information about loading and preparing Spark dataframes, see the [Apache Spark data loading guide](https://spark.apache.org/docs/latest/sql-data-sources.html).
+
+```python
+df = spark.createDataFrame(
+ [
+ ("Hello my name is",),
+ ("The best code is code that's",),
+ ("SynapseML is ",),
+ ]
+).toDF("prompt")
+```
+
+## Create the OpenAICompletion Apache Spark client
+
+To apply the OpenAI Completion service to the dataframe that you just created, create an `OpenAICompletion` object that serves as a distributed client. Parameters of the service can be set either with a single value, or by a column of the dataframe with the appropriate setters on the `OpenAICompletion` object. Here, we're setting `maxTokens` to 200. A token is around four characters, and this limit applies to the sum of the prompt and the result. We're also setting the `promptCol` parameter with the name of the prompt column in the dataframe.
+
+```python
+from synapse.ml.cognitive import OpenAICompletion
+
+completion = (
+ OpenAICompletion()
+ .setSubscriptionKey(key)
+ .setDeploymentName(deployment_name)
+ .setUrl("https://{}.openai.azure.com/".format(resource_name))
+ .setMaxTokens(200)
+ .setPromptCol("prompt")
+ .setErrorCol("error")
+ .setOutputCol("completions")
+)
+```
+
+## Transform the dataframe with the OpenAICompletion client
+
+Now that you have the dataframe and the completion client, you can transform your input dataset and add a column called `completions` with all of the information the service adds. We'll select out just the text for simplicity.
+
+```python
+from pyspark.sql.functions import col
+
+completed_df = completion.transform(df).cache()
+display(completed_df.select(
+ col("prompt"), col("error"), col("completions.choices.text").getItem(0).alias("text")))
+```
+
+Your output should look something like the following example; note that the completion text can vary.
+
+| **prompt** | **error** | **text** |
+|---|---|---|
+| Hello my name is | undefined | Makaveli I'm eighteen years old and I want to<br>be a rapper when I grow up I love writing and making music I'm from Los<br>Angeles, CA |
+| The best code is code that's | undefined | understandable This is a subjective statement,<br>and there is no definitive answer. |
+| SynapseML is | undefined | A machine learning algorithm that is able to learn how to predict the future outcome of events. |
+
+## Other usage examples
+
+### Improve throughput with request batching
+
+The example above makes several requests to the service, one for each prompt. To complete multiple prompts in a single request, use batch mode. First, create a dataframe with a list of prompts per row. Then, in the `OpenAICompletion` object, specify the `batchPrompt` column instead of the `prompt` column.
+
+> [!NOTE]
+> There is currently a limit of 20 prompts in a single request and a limit of 2048 "tokens", or approximately 1500 words.
+
+```python
+batch_df = spark.createDataFrame(
+ [
+ (["The time has come", "Pleased to", "Today stocks", "Here's to"],),
+ (["The only thing", "Ask not what", "Every litter", "I am"],),
+ ]
+).toDF("batchPrompt")
+```
+
+Next, create the `OpenAICompletion` object. Rather than setting the prompt column, set the batchPrompt column, which expects a column of type `Array[String]`.
+
+```python
+batch_completion = (
+ OpenAICompletion()
+ .setSubscriptionKey(key)
+ .setDeploymentName(deployment_name)
+ .setUrl("https://{}.openai.azure.com/".format(resource_name))
+ .setMaxTokens(200)
+ .setBatchPromptCol("batchPrompt")
+ .setErrorCol("error")
+ .setOutputCol("completions")
+)
+```
+
+In the call to transform, a request will then be made per row. Because there are multiple prompts in a single row, each request will be sent with all prompts in that row. The results will contain a row for each row in the request.
+
+```python
+completed_batch_df = batch_completion.transform(batch_df).cache()
+display(completed_batch_df)
+```
+
+> [!NOTE]
+> There is currently a limit of 20 prompts in a single request and a limit of 2048 "tokens", or approximately 1500 words.
+
+### Using an automatic mini-batcher
+
+If your data is in column format, you can transpose it to row format using SynapseML's `FixedMiniBatchTransformer`.
+
+```python
+from pyspark.sql.types import StringType
+from synapse.ml.stages import FixedMiniBatchTransformer
+from synapse.ml.core.spark import FluentAPI
+
+completed_autobatch_df = (df
+ .coalesce(1) # Force a single partition so that our little 4-row dataframe makes a batch of size 4; you can remove this step for large datasets
+ .mlTransform(FixedMiniBatchTransformer(batchSize=4))
+ .withColumnRenamed("prompt", "batchPrompt")
+ .mlTransform(batch_completion))
+
+display(completed_autobatch_df)
+```
+
+### Prompt engineering for translation
+
+Azure OpenAI can solve many different natural language tasks through [prompt engineering](completions.md). Here, we show an example of prompting for language translation:
+
+```python
+translate_df = spark.createDataFrame(
+ [
+ ("Japanese: Ookina hako \nEnglish: Big box \nJapanese: Midori tako\nEnglish:",),
+ ("French: Quelle heure est-il à Montréal? \nEnglish: What time is it in Montreal? \nFrench: Où est le poulet? \nEnglish:",),
+ ]
+).toDF("prompt")
+
+display(completion.transform(translate_df))
+```
+
+### Prompt for question answering
+
+Here, we prompt the GPT-3 model for general-knowledge question answering:
+
+```python
+qa_df = spark.createDataFrame(
+ [
+ (
+ "Q: Where is the Grand Canyon?\nA: The Grand Canyon is in Arizona.\n\nQ: What is the weight of the Burj Khalifa in kilograms?\nA:",
+ )
+ ]
+).toDF("prompt")
+
+display(completion.transform(qa_df))
+```
ai-services Manage Costs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/openai/how-to/manage-costs.md
+
+ Title: Plan to manage costs for Azure OpenAI Service
+description: Learn how to plan for and manage costs for Azure OpenAI by using cost analysis in the Azure portal.
++++++ Last updated : 04/05/2023+++
+# Plan to manage costs for Azure OpenAI Service
+
+This article describes how you plan for and manage costs for Azure OpenAI Service. Before you deploy the service, you can use the Azure pricing calculator to estimate costs for Azure OpenAI. Later, as you deploy Azure resources, review the estimated costs. After you've started using Azure OpenAI resources, use Cost Management features to set budgets and monitor costs. You can also review forecasted costs and identify spending trends to identify areas where you might want to act. Costs for Azure OpenAI Service are only a portion of the monthly costs in your Azure bill. Although this article explains how to plan for and manage costs for Azure OpenAI, you're billed for all Azure services and resources used in your Azure subscription, including the third-party services.
+
+## Prerequisites
+
+Cost analysis in Cost Management supports most Azure account types, but not all of them. To view the full list of supported account types, see [Understand Cost Management data](../../../cost-management-billing/costs/understand-cost-mgt-data.md?WT.mc_id=costmanagementcontent_docsacmhorizontal_-inproduct-learn). To view cost data, you need at least read access for an Azure account. For information about assigning access to Azure Cost Management data, see [Assign access to data](../../../cost-management/assign-access-acm-data.md?WT.mc_id=costmanagementcontent_docsacmhorizontal_-inproduct-learn).
+
+## Estimate costs before using Azure OpenAI
+
+Use the [Azure pricing calculator](https://azure.microsoft.com/pricing/calculator/) to estimate the costs of using Azure OpenAI.
+
+## Understand the full billing model for Azure OpenAI Service
+
+Azure OpenAI Service runs on Azure infrastructure that accrues costs when you deploy new resources. It's important to understand that additional infrastructure costs might accrue.
+
+### How you're charged for Azure OpenAI Service
+
+### Base series and Codex series models
+
+Azure OpenAI base series and Codex series models are charged per 1,000 tokens. Costs vary depending on which model series you choose: Ada, Babbage, Curie, Davinci, or Code-Cushman.
+
+Our models understand and process text by breaking it down into tokens. For reference, each token is roughly four characters for typical English text.
+
+Token costs apply to both input and output. For example, suppose you have a 1,000 token JavaScript code sample that you ask an Azure OpenAI model to convert to Python. You would be charged approximately 1,000 tokens for the initial input request sent, and 1,000 more tokens for the output received in response, for a total of 2,000 tokens.
+
+In practice, for this type of completion call the token input/output wouldn't be perfectly 1:1. A conversion from one programming language to another could result in a longer or shorter output depending on many different factors, including the value assigned to the `max_tokens` parameter.
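+
+As a back-of-the-envelope illustration only (the per-1,000-token rate below is a placeholder, not an actual price; use the Azure pricing calculator for current rates):
+
+```python
+# Rough cost illustration for the JavaScript-to-Python example above.
+PRICE_PER_1K_TOKENS = 0.002  # placeholder rate -- look up the real price for your model and region
+
+input_tokens = 1000   # the JavaScript sample sent in the prompt
+output_tokens = 1000  # the Python translation returned by the model
+
+total_tokens = input_tokens + output_tokens
+estimated_cost = (total_tokens / 1000) * PRICE_PER_1K_TOKENS
+print(f"{total_tokens} tokens at a placeholder rate of ${PRICE_PER_1K_TOKENS}/1K tokens ~= ${estimated_cost:.4f}")
+```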
+
+### Base Series and Codex series fine-tuned models
+
+Azure OpenAI fine-tuned models are charged based on three factors:
+
+- Training hours
+- Hosting hours
+- Inference per 1,000 tokens
+
+The hosting hours cost is important to be aware of because, once a fine-tuned model is deployed, it continues to incur an hourly cost regardless of whether you're actively using it. Monitor fine-tuned model costs closely.
++
+### Other costs that might accrue with Azure OpenAI Service
+
+Keep in mind that enabling capabilities like sending data to Azure Monitor Logs, alerting, etc. incurs additional costs for those services. These costs are visible under those other services and at the subscription level, but aren't visible when scoped just to your Azure OpenAI resource.
+
+### Using Azure Prepayment with Azure OpenAI Service
+
+You can pay for Azure OpenAI Service charges with your Azure Prepayment credit. However, you can't use Azure Prepayment credit to pay for charges for third party products and services including those from the Azure Marketplace.
+
+## Monitor costs
+
+As you use Azure resources with Azure OpenAI, you incur costs. Azure resource usage unit costs vary by time intervals (seconds, minutes, hours, and days) or by unit usage (bytes, megabytes, and so on). As soon as Azure OpenAI use starts, costs can be incurred and you can see the costs in [cost analysis](../../../cost-management/quick-acm-cost-analysis.md?WT.mc_id=costmanagementcontent_docsacmhorizontal_-inproduct-learn).
+
+When you use cost analysis, you view Azure OpenAI costs in graphs and tables for different time intervals. Some examples are by day, current and prior month, and year. You also view costs against budgets and forecasted costs. Switching to longer views over time can help you identify spending trends. And you see where overspending might have occurred. If you've created budgets, you can also easily see where they're exceeded.
+
+To view Azure OpenAI costs in cost analysis:
+
+1. Sign in to the Azure portal.
+2. Select one of your Azure OpenAI resources.
+3. Under **Resource Management**, select **Cost analysis**.
+4. By default, cost analysis is scoped to the individual Azure OpenAI resource.
++
+To understand the breakdown of what makes up that cost, it can help to modify **Group by** to **Meter** and, in this case, to switch the chart type to **Line**. You can now see that for this particular resource, the source of the costs is three different model series, with **Text-Davinci Tokens** representing the bulk of the costs.
++
+It's important to understand scope when evaluating costs associated with Azure OpenAI. If your resources are part of the same resource group you can scope Cost Analysis at that level to understand the effect on costs. If your resources are spread across multiple resource groups you can scope to the subscription level.
+
+However, when scoped at a higher level, you often need to add additional filters to zero in on Azure OpenAI usage. When scoped at the subscription level, you see a number of other resources that you may not care about in the context of Azure OpenAI cost management. When scoping at the subscription level, we recommend navigating to the full **Cost analysis tool** under the **Cost Management** service. Search for **"Cost Management"** in the top Azure search bar to navigate to the full service experience, which includes more options like creating budgets.
++
+If you try to add a filter by service, you'll find that Azure OpenAI isn't in the list. This is because Azure OpenAI shares the **Cognitive Services** service-level filter with a subset of Azure AI services. If you want to see all Azure OpenAI resources across a subscription without any other types of Azure AI services resources, scope instead to **Service tier: Azure OpenAI**:
++
+## Create budgets
+
+You can create [budgets](../../../cost-management/tutorial-acm-create-budgets.md?WT.mc_id=costmanagementcontent_docsacmhorizontal_-inproduct-learn) to manage costs and create [alerts](../../../cost-management-billing/costs/cost-mgt-alerts-monitor-usage-spending.md?WT.mc_id=costmanagementcontent_docsacmhorizontal_-inproduct-learn) that automatically notify stakeholders of spending anomalies and overspending risks. Alerts are based on spending compared to budget and cost thresholds. Budgets and alerts are created for Azure subscriptions and resource groups, so they're useful as part of an overall cost monitoring strategy.
+
+Budgets can be created with filters for specific resources or services in Azure if you want more granularity present in your monitoring. Filters help ensure that you don't accidentally create new resources that cost you additional money. For more information about the filter options available when you create a budget, see [Group and filter options](../../../cost-management-billing/costs/group-filter.md?WT.mc_id=costmanagementcontent_docsacmhorizontal_-inproduct-learn).
+
+> [!IMPORTANT]
+> While OpenAI has an option for hard limits that will prevent you from going over your budget, Azure OpenAI does not currently provide this functionality. You are able to kick off automation from action groups as part of your budget notifications to take more advanced actions, but this requires additional custom development on your part.
+
+## Export cost data
+
+You can also [export your cost data](../../../cost-management-billing/costs/tutorial-export-acm-data.md?WT.mc_id=costmanagementcontent_docsacmhorizontal_-inproduct-learn) to a storage account. This is helpful when you or others need to do additional data analysis on costs. For example, a finance team can analyze the data using Excel or Power BI. You can export your costs on a daily, weekly, or monthly schedule and set a custom date range. Exporting cost data is the recommended way to retrieve cost datasets.
+
+## Next steps
+
+- Learn [how to optimize your cloud investment with Azure Cost Management](../../../cost-management-billing/costs/cost-mgt-best-practices.md?WT.mc_id=costmanagementcontent_docsacmhorizontal_-inproduct-learn).
+- Learn more about managing costs with [cost analysis](../../../cost-management-billing/costs/quick-acm-cost-analysis.md?WT.mc_id=costmanagementcontent_docsacmhorizontal_-inproduct-learn).
+- Learn about how to [prevent unexpected costs](../../../cost-management-billing/understand/analyze-unexpected-charges.md?WT.mc_id=costmanagementcontent_docsacmhorizontal_-inproduct-learn).
+- Take the [Cost Management](/training/paths/control-spending-manage-bills?WT.mc_id=costmanagementcontent_docsacmhorizontal_-inproduct-learn) guided learning course.
ai-services Managed Identity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/openai/how-to/managed-identity.md
+
+ Title: How to configure Azure OpenAI Service with managed identities
+
+description: Provides guidance on how to set managed identity with Azure Active Directory
+++ Last updated : 06/24/2022++
+recommendations: false
+++
+# How to configure Azure OpenAI Service with managed identities
+
+More complex security scenarios require Azure role-based access control (Azure RBAC). This document covers how to authenticate to your OpenAI resource using Azure Active Directory (Azure AD).
+
+In the following sections, you'll use the Azure CLI to assign roles, and obtain a bearer token to call the OpenAI resource. If you get stuck, links are provided in each section with all available options for each command in Azure Cloud Shell/Azure CLI.
+
+## Prerequisites
+
+- An Azure subscription - <a href="https://azure.microsoft.com/free/cognitive-services" target="_blank">Create one for free</a>
+- Access granted to the Azure OpenAI service in the desired Azure subscription
+
+ Currently, access to this service is granted only by application. You can apply for access to Azure OpenAI by completing the form at <a href="https://aka.ms/oai/access" target="_blank">https://aka.ms/oai/access</a>. If you have a problem gaining access, open an issue on this repo to contact us.
+- Azure CLI - [Installation Guide](/cli/azure/install-azure-cli)
+- The following Python libraries: os, requests, json
+
+## Sign in to the Azure CLI
+
+To sign in to the Azure CLI, run the following command and complete the sign-in. You may need to sign in again if your session has been idle for too long.
+
+```azurecli
+az login
+```
+
+## Assign yourself to the Cognitive Services User role
+
+Assigning yourself the "Cognitive Services User" role allows you to use your account for access to the specific Azure AI services resource.
+
+1. Get your user information
+
+ ```azurecli
+ export user=$(az account show -o json | jq -r .user.name)
+ ```
+
+2. Assign yourself the "Cognitive Services User" role.
+
+ ```azurecli
+ export resourceId=$(az group show -g $myResourceGroupName -o json | jq -r .id)
+ az role assignment create --role "Cognitive Services User" --assignee $user --scope $resourceId
+ ```
+
+ > [!NOTE]
+ > The role assignment change can take ~5 minutes to become effective.
+
+3. Acquire an Azure AD access token. Access tokens expire after one hour, at which point you'll need to acquire another one.
+
+ ```azurecli
+ export accessToken=$(az account get-access-token --resource https://cognitiveservices.azure.com -o json | jq -r .accessToken)
+ ```
+
+4. Make an API call. Use the access token to authorize your API call by setting the `Authorization` header value.
+
+ ```bash
+ curl ${endpoint%/}/openai/deployments/YOUR_DEPLOYMENT_NAME/completions?api-version=2023-05-15 \
+ -H "Content-Type: application/json" \
+ -H "Authorization: Bearer $accessToken" \
+ -d '{ "prompt": "Once upon a time" }'
+ ```
+
+## Authorize access to managed identities
+
+OpenAI supports Azure Active Directory (Azure AD) authentication with [managed identities for Azure resources](../../../active-directory/managed-identities-azure-resources/overview.md). Managed identities for Azure resources can authorize access to Azure AI services resources using Azure AD credentials from applications running in Azure virtual machines (VMs), function apps, virtual machine scale sets, and other services. By using managed identities for Azure resources together with Azure AD authentication, you can avoid storing credentials with your applications that run in the cloud.
+
+## Enable managed identities on a VM
+
+Before you can use managed identities for Azure resources to authorize access to Azure AI services resources from your VM, you must enable managed identities for Azure resources on the VM. To learn how to enable managed identities for Azure Resources, see:
+
+- [Azure portal](../../../active-directory/managed-identities-azure-resources/qs-configure-portal-windows-vm.md)
+- [Azure PowerShell](../../../active-directory/managed-identities-azure-resources/qs-configure-powershell-windows-vm.md)
+- [Azure CLI](../../../active-directory/managed-identities-azure-resources/qs-configure-cli-windows-vm.md)
+- [Azure Resource Manager template](../../../active-directory/managed-identities-azure-resources/qs-configure-template-windows-vm.md)
+- [Azure Resource Manager client libraries](../../../active-directory/managed-identities-azure-resources/qs-configure-sdk-windows-vm.md)
+
+For more information about managed identities, see [Managed identities for Azure resources](../../../active-directory/managed-identities-azure-resources/overview.md).
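+
+Once a managed identity is enabled on the resource that hosts your code, a minimal sketch of calling Azure OpenAI with it might look like the following. This assumes the `azure-identity` and `requests` Python packages; the resource name, deployment name, and API version are placeholders.
+
+```python
+# Call Azure OpenAI using the Azure AD token of a managed identity.
+import requests
+from azure.identity import DefaultAzureCredential
+
+credential = DefaultAzureCredential()  # picks up the managed identity when running in Azure
+token = credential.get_token("https://cognitiveservices.azure.com/.default")
+
+endpoint = "https://YOUR_RESOURCE_NAME.openai.azure.com"
+deployment = "YOUR_DEPLOYMENT_NAME"
+url = f"{endpoint}/openai/deployments/{deployment}/completions?api-version=2023-05-15"
+
+response = requests.post(
+    url,
+    headers={
+        "Authorization": f"Bearer {token.token}",
+        "Content-Type": "application/json",
+    },
+    json={"prompt": "Once upon a time", "max_tokens": 50},
+)
+print(response.json())
+```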
ai-services Monitoring https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/openai/how-to/monitoring.md
+
+ Title: Monitoring Azure OpenAI Service
+description: Start here to learn how to monitor Azure OpenAI Service
++++++ Last updated : 03/15/2023++
+# Monitoring Azure OpenAI Service
+
+When you have critical applications and business processes relying on Azure resources, you want to monitor those resources for their availability, performance, and operation.
+
+This article describes the monitoring data generated by Azure OpenAI Service. Azure OpenAI is part of Azure AI services, which uses [Azure Monitor](../../../azure-monitor/overview.md). If you're unfamiliar with the features of Azure Monitor common to all Azure services that use it, read [Monitoring Azure resources with Azure Monitor](../../../azure-monitor/essentials/monitor-azure-resource.md).
+
+## Monitoring data
+
+Azure OpenAI collects the same kinds of monitoring data as other Azure resources that are described in [Monitoring data from Azure resources](/azure/azure-monitor/essentials/monitor-azure-resource#monitoring-data-from-azure-resources).
+
+## Collection and routing
+
+Platform metrics and the Activity log are collected and stored automatically, but can be routed to other locations by using a diagnostic setting.
+
+Resource Logs aren't collected and stored until you create a diagnostic setting and route them to one or more locations.
+
+See [Create diagnostic setting to collect platform logs and metrics in Azure](/azure/azure-monitor/platform/diagnostic-settings) for the detailed process for creating a diagnostic setting using the Azure portal, CLI, or PowerShell. When you create a diagnostic setting, you specify which categories of logs to collect.
+
+Keep in mind that using diagnostic settings and sending data to Azure Monitor Logs has additional costs associated with it. To understand more, consult the [Azure Monitor cost calculation guide](/azure/azure-monitor/logs/cost-logs).
+
+The metrics and logs you can collect are discussed in the following sections.
+
+## Analyzing metrics
+
+You can analyze metrics for *Azure OpenAI* by opening **Metrics**, which can be found under the **Monitoring** section when viewing your Azure OpenAI resource in the Azure portal. See [Getting started with Azure Metrics Explorer](../../../azure-monitor/essentials/metrics-getting-started.md) for details on using this tool.
+
+Azure OpenAI has commonality with a subset of Azure AI services. For a list of all platform metrics collected for Azure OpenAI and similar Azure AI services, see [Microsoft.CognitiveServices/accounts](../../../azure-monitor/essentials/metrics-supported.md#microsoftcognitiveservicesaccounts) supported metrics.
+
+For the current subset of metrics available in Azure OpenAI:
+
+### Azure OpenAI Metrics
+
+|Metric|Exportable via Diagnostic Settings?|Metric Display Name|Unit|Aggregation Type|Description|Dimensions|
+|---|---|---|---|---|---|---|
+|BlockedCalls |Yes |Blocked Calls |Count |Total |Number of calls that exceeded rate or quota limit. |ApiName, OperationName, Region, RatelimitKey |
+|ClientErrors |Yes |Client Errors |Count |Total |Number of calls with client side error (HTTP response code 4xx). |ApiName, OperationName, Region, RatelimitKey |
+|DataIn |Yes |Data In |Bytes |Total |Size of incoming data in bytes. |ApiName, OperationName, Region |
+|DataOut |Yes |Data Out |Bytes |Total |Size of outgoing data in bytes. |ApiName, OperationName, Region |
+|FineTunedTrainingHours |Yes |Processed FineTuned Training Hours |Count |Total |Number of Training Hours Processed on an OpenAI FineTuned Model |ApiName, ModelDeploymentName, FeatureName, UsageChannel, Region |
+|Latency |Yes |Latency |MilliSeconds |Average |Latency in milliseconds. |ApiName, OperationName, Region, RatelimitKey |
+|Ratelimit |Yes |Ratelimit |Count |Total |The current ratelimit of the ratelimit key. |Region, RatelimitKey |
+|ServerErrors |Yes |Server Errors |Count |Total |Number of calls with service internal error (HTTP response code 5xx). |ApiName, OperationName, Region, RatelimitKey |
+|SuccessfulCalls |Yes |Successful Calls |Count |Total |Number of successful calls. |ApiName, OperationName, Region, RatelimitKey |
+|TokenTransaction |Yes |Processed Inference Tokens |Count |Total |Number of Inference Tokens Processed on an OpenAI Model |ApiName, ModelDeploymentName, FeatureName, UsageChannel, Region |
+|TotalCalls |Yes |Total Calls |Count |Total |Total number of calls. |ApiName, OperationName, Region, RatelimitKey |
+|TotalErrors |Yes |Total Errors |Count |Total |Total number of calls with error response (HTTP response code 4xx or 5xx). |ApiName, OperationName, Region, RatelimitKey |
+
+## Analyzing logs
+
+Data in Azure Monitor Logs is stored in tables where each table has its own set of unique properties.
+
+All resource logs in Azure Monitor have the same fields followed by service-specific fields. The common schema is outlined in [Azure Monitor resource log schema](../../../azure-monitor/essentials/resource-logs-schema.md).
+
+The [Activity log](/azure/azure-monitor/essentials/activity-log) is a type of platform log in Azure that provides insight into subscription-level events. You can view it independently or route it to Azure Monitor Logs, where you can do much more complex queries using Log Analytics.
+
+For a list of the types of resource logs available for Azure OpenAI and similar Azure AI services, see [Microsoft.CognitiveServices](/azure/role-based-access-control/resource-provider-operations#microsoftcognitiveservices) resource provider operations.
+
+### Kusto queries
+
+> [!IMPORTANT]
+> When you select **Logs** from the Azure OpenAI menu, Log Analytics is opened with the query scope set to the current Azure OpenAI resource. This means that log queries will only include data from that resource. If you want to run a query that includes data from other resources or data from other Azure services, select **Logs** from the **Azure Monitor** menu. See [Log query scope and time range in Azure Monitor Log Analytics](../../../azure-monitor/logs/scope.md) for details.
+
+Once you have deployed a model and sent some completion calls through the playground, a useful query to start exploring the type of information available for your Azure OpenAI resource is as follows:
+
+```kusto
+AzureDiagnostics
+| take 100
+| project TimeGenerated, _ResourceId, Category, OperationName, DurationMs, ResultSignature, properties_s
+```
+
+Here we return a sample of 100 entries and display a subset of the available columns of data in the logs. The results are as follows:
++
+If you wish to see all available columns of data, you can remove the scoping that is provided by the `| project` line:
+
+```kusto
+AzureDiagnostics
+| take 100
+```
+
+You can also select the arrow next to the table name to view all available columns and associated data types.
+
+To examine AzureMetrics run:
+
+```kusto
+AzureMetrics
+| take 100
+| project TimeGenerated, MetricName, Total, Count, TimeGrain, UnitName
+```
++
+## Alerts
+
+Azure Monitor alerts proactively notify you when important conditions are found in your monitoring data. They allow you to identify and address issues in your system before your customers notice them. You can set alerts on [metrics](/azure/azure-monitor/alerts/alerts-metric-overview), [logs](/azure/azure-monitor/alerts/alerts-unified-log), and the [activity log](/azure/azure-monitor/alerts/activity-log-alerts). Different types of alerts have different benefits and drawbacks.
+
+Every organization's alerting needs are going to vary, and will also evolve over time. Generally, all alerts should be actionable, with a specific intended response if the alert occurs. If there's no action for someone to take, then it might be something you want to capture in a report, but not in an alert. Some use cases may require alerting anytime certain error conditions exist. But in many environments, sending an alert might only be warranted when errors exceed a certain threshold for a period of time.
+
+Errors below certain thresholds can often be evaluated through regular analysis of data in Azure Monitor Logs. As you analyze your log data over time, you may also find that a certain condition not occurring for a long enough period of time might be valuable to track with alerts. Sometimes the absence of an event in a log is just as important a signal as an error.
+
+Depending on what type of application you're developing in conjunction with your use of Azure OpenAI, [Azure Monitor Application Insights](../../../azure-monitor/overview.md) may offer additional monitoring benefits at the application layer.
++
+## Next steps
+
+- See [Monitoring Azure resources with Azure Monitor](../../../azure-monitor/essentials/monitor-azure-resource.md) for details on monitoring Azure resources.
+- Read [Understand log searches in Azure Monitor logs](../../../azure-monitor/logs/log-query-overview.md).
ai-services Prepare Dataset https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/openai/how-to/prepare-dataset.md
+
+ Title: 'How to prepare a dataset for custom model training'
+
+description: Learn how to prepare your dataset for fine-tuning
+++++ Last updated : 06/24/2022++
+recommendations: false
+keywords:
++
+# Learn how to prepare your dataset for fine-tuning
+
+The first step of customizing your model is to prepare a high quality dataset. To do this, you'll need a set of training examples, each composed of a single input prompt and the associated desired output ('completion'). This format is notably different than using models during inference in the following ways:
+
+- Provide only a single prompt, rather than a few examples.
+- You don't need to provide detailed instructions as part of the prompt.
+- Each prompt should end with a fixed separator to inform the model when the prompt ends and the completion begins. A simple separator that generally works well is `\n\n###\n\n`. The separator shouldn't appear elsewhere in any prompt.
+- Each completion should start with a whitespace due to our tokenization, which tokenizes most words with a preceding whitespace.
+- Each completion should end with a fixed stop sequence to inform the model when the completion ends. A stop sequence could be `\n`, `###`, or any other token that doesn't appear in any completion.
+- For inference, you should format your prompts in the same way as you did when creating the training dataset, including the same separator. Also specify the same stop sequence to properly truncate the completion.
+- The dataset cannot exceed 100 MB in total file size.
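+
+A minimal sketch of writing training examples as JSONL while applying these conventions (the example prompts, completions, and file name are placeholders):
+
+```python
+# Write fine-tuning examples to a JSONL file, applying the separator,
+# leading-whitespace, and stop-sequence conventions described above.
+import json
+
+SEPARATOR = "\n\n###\n\n"
+STOP_SEQUENCE = " END"
+
+examples = [
+    ("Company: Contoso\nProduct: widgets\nAd: Widgets for everyone!", "yes"),
+    ("Company: Fabrikam\nProduct: -\nAd: Straight teeth in weeks!", "no"),
+]
+
+with open("training_data.jsonl", "w", encoding="utf-8") as f:
+    for prompt_text, completion_text in examples:
+        record = {
+            "prompt": prompt_text + SEPARATOR,
+            # Completions start with a space and end with the stop sequence.
+            "completion": " " + completion_text + STOP_SEQUENCE,
+        }
+        f.write(json.dumps(record) + "\n")
+```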
+
+## Best practices
+
+Customization performs better with high-quality examples: generally, the more examples you have, the better the model performs. We recommend that you provide at least a few hundred high-quality examples to achieve a model that performs better than using well-designed prompts with a base model. From there, performance tends to increase linearly with every doubling of the number of examples. Increasing the number of examples is usually the best and most reliable way of improving performance.
+
+If you're fine-tuning on a pre-existing dataset rather than writing prompts from scratch, be sure to manually review your data for offensive or inaccurate content if possible, or review as many random samples of the dataset as possible if it's large.
+
+## Specific guidelines
+
+Fine-tuning can solve various problems, and the optimal way to use it may depend on your specific use case. Below, we've listed the most common use cases for fine-tuning and corresponding guidelines.
+
+### Classification
+
+Classifiers are the easiest models to get started with. For classification problems we suggest using **ada**, which generally tends to perform only very slightly worse than more capable models once fine-tuned, while being significantly faster. In classification problems, each prompt in the dataset should be classified into one of the predefined classes. For this type of problem, we recommend:
+
+- Use a separator at the end of the prompt, for example, `\n\n###\n\n`. Remember to also append this separator when you eventually make requests to your model.
+- Choose classes that map to a single token. At inference time, specify `max_tokens=1` since you only need the first token for classification.
+- Ensure that the prompt + completion doesn't exceed 2048 tokens, including the separator
+- Aim for at least 100 examples per class
+- To get class log probabilities, you can specify `logprobs=5` (for five classes) when using your model
+- Ensure that the dataset used for fine-tuning is very similar in structure and type of task as what the model will be used for
+
+#### Case study: Is the model making untrue statements?
+
+Let's say you'd like to ensure that the text of the ads on your website mentions the correct product and company. In other words, you want to ensure the model isn't making things up. You may want to fine-tune a classifier which filters out incorrect ads.
+
+The dataset might look something like the following:
+
+```json
+{"prompt":"Company: BHFF insurance\nProduct: allround insurance\nAd:One stop shop for all your insurance needs!\nSupported:", "completion":" yes"}
+{"prompt":"Company: Loft conversion specialists\nProduct: -\nAd:Straight teeth in weeks!\nSupported:", "completion":" no"}
+```
+
+In the example above, we used a structured input containing the name of the company, the product, and the associated ad. As a separator we used `\nSupported:` which clearly separated the prompt from the completion. With a sufficient number of examples, the separator you choose doesn't make much of a difference (usually less than 0.4%) as long as it doesn't appear within the prompt or the completion.
+
+For this use case we fine-tuned an ada model since it is faster and cheaper, and the performance is comparable to larger models because it's a classification task.
+
+Now we can query our model by making a Completion request.
+
+```console
+curl https://YOUR_RESOURCE_NAME.openai.azure.com/openai/deployments/YOUR_DEPLOYMENT_NAME/completions?api-version=2023-05-15 \
+ -H 'Content-Type: application/json' \
+ -H 'api-key: YOUR_API_KEY' \
+ -d '{
+ "prompt": "Company: Reliable accountants Ltd\nProduct: Personal Tax help\nAd:Best advice in town!\nSupported:",
+ "max_tokens": 1
+ }'
+```
+
+Which will return either `yes` or `no`.
+
+#### Case study: Sentiment analysis
+
+Let's say you'd like to get the degree to which a particular tweet is positive or negative. The dataset might look something like the following:
+
+```json
+{"prompt":"Overjoyed with the new iPhone! ->", "completion":" positive"}
+{"prompt":"@contoso_basketball disappoint for a third straight night. ->", "completion":" negative"}
+```
+
+Once the model is fine-tuned, you can get back the log probabilities for the first completion token by setting `logprobs=2` on the completion request. The higher the probability for the positive class, the higher the relative sentiment.
+
+Now we can query our model by making a Completion request.
+
+```console
+curl https://YOUR_RESOURCE_NAME.openai.azure.com/openai/deployments/YOUR_DEPLOYMENT_NAME/completions?api-version=2023-05-15 \
+ -H 'Content-Type: application/json' \
+ -H 'api-key: YOUR_API_KEY' \
+ -d '{
+ "prompt": "Excited to share my latest blog post! ->",
+ "max_tokens": 1,
+ "logprobs": 2
+ }'
+```
+
+Which will return:
+
+```json
+{
+ "object": "text_completion",
+ "created": 1589498378,
+ "model": "YOUR_FINE_TUNED_MODEL_NAME",
+ "choices": [
+ {
+ "logprobs": {
+ "text_offset": [
+ 19
+ ],
+ "token_logprobs": [
+ -0.03597255
+ ],
+ "tokens": [
+ " positive"
+ ],
+ "top_logprobs": [
+ {
+ " negative": -4.9785037,
+ " positive": -0.03597255
+ }
+ ]
+ },
+
+ "text": " positive",
+ "index": 0,
+ "finish_reason": "length"
+ }
+ ]
+}
+```
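+
+As a sketch of how you might turn the `top_logprobs` in a response like the one above into a relative sentiment score (a hypothetical post-processing step, not part of the API):
+
+```python
+# Convert the log probabilities of the " positive" and " negative" tokens
+# into a normalized positive-sentiment score between 0 and 1.
+import math
+
+top_logprobs = {" negative": -4.9785037, " positive": -0.03597255}
+
+p_positive = math.exp(top_logprobs[" positive"])
+p_negative = math.exp(top_logprobs[" negative"])
+positive_score = p_positive / (p_positive + p_negative)
+print(f"Relative positive sentiment: {positive_score:.3f}")
+```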
+
+#### Case study: Categorization for Email triage
+
+Let's say you'd like to categorize incoming email into one of a large number of predefined categories. For classification into a large number of categories, we recommend you convert those categories into numbers, which will work well with up to approximately 500 categories. We've observed that adding a space before the number sometimes slightly helps the performance, due to tokenization. You may want to structure your training data as follows:
+
+```json
+{
+ "prompt":"Subject: <email_subject>\nFrom:<customer_name>\nDate:<date>\nContent:<email_body>\n\n###\n\n", "completion":" <numerical_category>"
+}
+```
+
+For example:
+
+```json
+{
+ "prompt":"Subject: Update my address\nFrom:Joe Doe\nTo:support@ourcompany.com\nDate:2021-06-03\nContent:Hi,\nI would like to update my billing address to match my delivery address.\n\nPlease let me know once done.\n\nThanks,\nJoe\n\n###\n\n",
+ "completion":" 4"
+}
+```
+
+In the example above, we used an incoming email capped at 2043 tokens as input. (This allows for a four-token separator and a one-token completion, summing up to 2048.) As a separator we used `\n\n###\n\n`, and we removed any occurrence of `###` within the email.
+
+### Conditional generation
+
+Conditional generation is a problem where the content needs to be generated given some kind of input. This includes paraphrasing, summarizing, entity extraction, product description writing given specifications, chatbots and many others. For this type of problem we recommend:
+
+- Use a separator at the end of the prompt, for example, `\n\n###\n\n`. Remember to also append this separator when you eventually make requests to your model.
+- Use an ending token at the end of the completion, for example, `END`.
+- Remember to add the ending token as a stop sequence during inference, for example, `stop=[" END"]`.
+- Aim for at least ~500 examples.
+- Ensure that the prompt + completion doesn't exceed 2048 tokens, including the separator.
+- Ensure the examples are of high quality and follow the same desired format.
+- Ensure that the dataset used for fine-tuning is similar in structure and type of task as what the model will be used for.
+- Using a lower learning rate and only 1-2 epochs tends to work better for these use cases.
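+
+At inference time, a minimal sketch of a completion call that appends the training-time separator and passes the ending token as a stop sequence (placeholder names, using the same `openai` Python setup shown in the embeddings how-to):
+
+```python
+# Query a fine-tuned model for conditional generation, stopping at the " END" token.
+import openai
+
+openai.api_type = "azure"
+openai.api_key = "YOUR_API_KEY"
+openai.api_base = "https://YOUR_RESOURCE_NAME.openai.azure.com"
+openai.api_version = "2023-05-15"
+
+SEPARATOR = "\n\n###\n\n"
+
+response = openai.Completion.create(
+    engine="YOUR_FINE_TUNED_DEPLOYMENT_NAME",
+    prompt="<Product Name>\n<Wikipedia description>" + SEPARATOR,
+    max_tokens=100,
+    stop=[" END"],
+)
+print(response["choices"][0]["text"])
+```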
+
+#### Case study: Write an engaging ad based on a Wikipedia article
+
+This is a generative use case so you would want to ensure that the samples you provide are of the highest quality, as the fine-tuned model will try to imitate the style (and mistakes) of the given examples. A good starting point is around 500 examples. A sample dataset might look like this:
+
+```json
+{
+ "prompt":"<Product Name>\n<Wikipedia description>\n\n###\n\n",
+ "completion":" <engaging ad> END"
+}
+```
+
+For example:
+
+```json
+{
+ "prompt":"Samsung Galaxy Feel\nThe Samsung Galaxy Feel is an Android smartphone developed by Samsung Electronics exclusively for the Japanese market. The phone was released in June 2017 and was sold by NTT Docomo. It runs on Android 7.0 (Nougat), has a 4.7 inch display, and a 3000 mAh battery.\nSoftware\nSamsung Galaxy Feel runs on Android 7.0 (Nougat), but can be later updated to Android 8.0 (Oreo).\nHardware\nSamsung Galaxy Feel has a 4.7 inch Super AMOLED HD display, 16 MP back facing and 5 MP front facing cameras. It has a 3000 mAh battery, a 1.6 GHz Octa-Core ARM Cortex-A53 CPU, and an ARM Mali-T830 MP1 700 MHz GPU. It comes with 32GB of internal storage, expandable to 256GB via microSD. Aside from its software and hardware specifications, Samsung also introduced a unique a hole in the phone's shell to accommodate the Japanese perceived penchant for personalizing their mobile phones. The Galaxy Feel's battery was also touted as a major selling point since the market favors handsets with longer battery life. The device is also waterproof and supports 1seg digital broadcasts using an antenna that is sold separately.\n\n###\n\n",
+ "completion":"Looking for a smartphone that can do it all? Look no further than Samsung Galaxy Feel! With a slim and sleek design, our latest smartphone features high-quality picture and video capabilities, as well as an award winning battery life. END"
+}
+```
+
+Here we used a multiline separator, as Wikipedia articles contain multiple paragraphs and headings. We also used a simple end token, to ensure that the model knows when the completion should finish.
+
+#### Case study: Entity extraction
+
+This is similar to a language transformation task. To improve the performance, it's best to either sort different extracted entities alphabetically or in the same order as they appear in the original text. This helps the model to keep track of all the entities which need to be generated in order. The dataset could look as follows:
+
+```json
+{
+ "prompt":"<any text, for example news article>\n\n###\n\n",
+ "completion":" <list of entities, separated by a newline> END"
+}
+```
+
+For example:
+
+```json
+{
+ "prompt":"Portugal will be removed from the UK's green travel list from Tuesday, amid rising coronavirus cases and concern over a \"Nepal mutation of the so-called Indian variant\". It will join the amber list, meaning holidaymakers should not visit and returnees must isolate for 10 days...\n\n###\n\n",
+ "completion":" Portugal\nUK\nNepal mutation\nIndian variant END"
+}
+```
+
+A multi-line separator works best, as the text will likely contain multiple lines. Ideally there will be a high diversity of the types of input prompts (news articles, Wikipedia pages, tweets, legal documents), reflecting the texts likely to be encountered when extracting entities.
+
+#### Case study: Customer support chatbot
+
+A chatbot will normally contain relevant context about the conversation (order details), a summary of the conversation so far, and the most recent messages. For this use case, the same past conversation can generate multiple rows in the dataset, each time with a slightly different context, for every agent generation as a completion. This use case requires a few thousand examples, as it likely deals with different types of requests and customer issues. To ensure the performance is of high quality, we recommend vetting the conversation samples to ensure the quality of agent messages. The summary can be generated with a separate fine-tuned text transformation model. The dataset could look as follows:
+
+```json
+{"prompt":"Summary: <summary of the interaction so far>\n\nSpecific information:<for example order details in natural language>\n\n###\n\nCustomer: <message1>\nAgent: <response1>\nCustomer: <message2>\nAgent:", "completion":" <response2>\n"}
+{"prompt":"Summary: <summary of the interaction so far>\n\nSpecific information:<for example order details in natural language>\n\n###\n\nCustomer: <message1>\nAgent: <response1>\nCustomer: <message2>\nAgent: <response2>\nCustomer: <message3>\nAgent:", "completion":" <response3>\n"}
+```
+
+Here we purposefully separated different types of input information, but maintained Customer Agent dialog in the same format between a prompt and a completion. All the completions should only be by the agent, and we can use `\n` as a stop sequence when doing inference.
+
+#### Case study: Product description based on a technical list of properties
+
+Here it's important to convert the input data into natural language, which will likely lead to superior performance. For example, the following format:
+
+```json
+{
+ "prompt":"Item=handbag, Color=army_green, price=$99, size=S->",
+ "completion":"This stylish small green handbag will add a unique touch to your look, without costing you a fortune."
+}
+```
+
+Won't work as well as:
+
+```json
+{
+ "prompt":"Item is a handbag. Colour is army green. Price is midrange. Size is small.->",
+ "completion":"This stylish small green handbag will add a unique touch to your look, without costing you a fortune."
+}
+```
+
+For high performance, ensure that the completions are based on the description provided. If external content is often consulted, then adding such content in an automated way would improve the performance. If the description is based on images, it may help to use an algorithm to extract a textual description of the image. Since completions are only one sentence long, we can use `.` as the stop sequence during inference.
+
+### Open ended generation
+
+For this type of problem we recommend:
+
+- Leave the prompt empty.
+- No need for any separators.
+- You'll normally want a large number of examples, at least a few thousand.
+- Ensure the examples cover the intended domain or the desired tone of voice.
+
+#### Case study: Maintaining company voice
+
+Many companies have a large amount of high quality content generated in a specific voice. Ideally all generations from our API should follow that voice for the different use cases. Here we can use the trick of leaving the prompt empty, and feeding in all the documents which are good examples of the company voice. A fine-tuned model can be used to solve many different use cases with similar prompts to the ones used for base models, but the outputs are going to follow the company voice much more closely than previously.
+
+```json
+{"prompt":"", "completion":" <company voice textual content>"}
+{"prompt":"", "completion":" <company voice textual content2>"}
+```
+
+A similar technique could be used for creating a virtual character with a particular personality, style of speech and topics the character talks about.
+
+Generative tasks have a potential to leak training data when requesting completions from the model, so extra care needs to be taken that this is addressed appropriately. For example, personal or sensitive company information should be replaced by generic information or not included in fine-tuning in the first place.
+
+## Next steps
+
+* Fine tune your model with our [How-to guide](fine-tuning.md)
+* Learn more about the [underlying models that power Azure OpenAI Service](../concepts/models.md)
ai-services Quota https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/openai/how-to/quota.md
+
+ Title: Manage Azure OpenAI Service quota
+
+description: Learn how to use Azure OpenAI to control your deployments rate limits.
++++++ Last updated : 06/07/2023+++
+# Manage Azure OpenAI Service quota
+
+Quota provides the flexibility to actively manage the allocation of rate limits across the deployments within your subscription. This article walks through the process of managing your Azure OpenAI quota.
+
+## Introduction to quota
+
+Azure OpenAI's quota feature enables assignment of rate limits to your deployments, up to a global limit called your "quota." Quota is assigned to your subscription on a per-region, per-model basis in units of **Tokens-per-Minute (TPM)**. When you onboard a subscription to Azure OpenAI, you'll receive default quota for most available models. Then, you'll assign TPM to each deployment as it is created, and the available quota for that model will be reduced by that amount. You can continue to create deployments and assign them TPM until you reach your quota limit. Once that happens, you can only create new deployments of that model by reducing the TPM assigned to other deployments of the same model (thus freeing TPM for use), or by requesting and being approved for a model quota increase in the desired region.
+
+> [!NOTE]
+> With a quota of 240,000 TPM for GPT-35-Turbo in East US, a customer can create a single deployment of 240K TPM, 2 deployments of 120K TPM each, or any number of deployments in one or multiple Azure OpenAI resources as long as their TPM adds up to less than 240K total in that region.
+
+When a deployment is created, the assigned TPM will directly map to the tokens-per-minute rate limit enforced on its inferencing requests. A **Requests-Per-Minute (RPM)** rate limit will also be enforced whose value is set proportionally to the TPM assignment using the following ratio:
+
+6 RPM per 1000 TPM.
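+
+For example, a deployment assigned 120,000 TPM is also subject to an RPM rate limit of 720 requests per minute (120 × 6).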
+
+The flexibility to distribute TPM globally within a subscription and region has allowed Azure OpenAI Service to loosen other restrictions:
+
+- The maximum resources per region are increased to 30.
+- The limit on creating no more than one deployment of the same model in a resource has been removed.
+
+## Assign quota
+
+When you create a model deployment, you have the option to assign Tokens-Per-Minute (TPM) to that deployment. TPM can be modified in increments of 1,000, and will map to the TPM and RPM rate limits enforced on your deployment, as discussed above.
+
+To create a new deployment from within Azure AI Studio, under **Management**, select **Deployments** > **Create new deployment**.
+
+The option to set the TPM is under the **Advanced options** drop-down.
+
+Post-deployment, you can adjust your TPM allocation by selecting **Edit deployment** under **Management** > **Deployments** in Azure AI Studio. You can also modify this selection within the new quota management experience under **Management** > **Quotas**.
+
+> [!IMPORTANT]
+> Quotas and limits are subject to change; for the most up-to-date information, consult our [quotas and limits article](../quotas-limits.md).
+
+## Model specific settings
+
+Different model deployments, also called model classes, have unique max TPM values that you're now able to control. **This represents the maximum amount of TPM that can be allocated to that type of model deployment in a given region.** While each model type represents its own unique model class, the max TPM value is currently only different for certain model classes:
+
+- GPT-4
+- GPT-4-32K
+- Text-Davinci-003
+
+All other model classes have a common max TPM value.
+
+> [!NOTE]
+> Quota Tokens-Per-Minute (TPM) allocation is not related to the max input token limit of a model. Model input token limits are defined in the [models table](../concepts/models.md) and are not impacted by changes made to TPM.
+
+## View and request quota
+
+For an all-up view of your quota allocations across deployments in a given region, select **Management** > **Quota** in Azure AI Studio:
+
+- **Quota Name**: There's one quota value per region for each model type. The quota covers all versions of that model. The quota name can be expanded in the UI to show the deployments that are using the quota.
+- **Deployment**: Model deployments divided by model class.
+- **Usage/Limit**: For the quota name, this shows how much quota is used by deployments and the total quota approved for this subscription and region. This amount of quota used is also represented in the bar graph.
+- **Request Quota**: The icon in this field navigates to a form where requests to increase quota can be submitted.
+
+## Migrating existing deployments
+
+As part of the transition to the new quota system and TPM based allocation, all existing Azure OpenAI model deployments have been automatically migrated to use quota. In cases where the existing TPM/RPM allocation exceeds the default values due to previous custom rate-limit increases, equivalent TPM were assigned to the impacted deployments.
+
+## Understanding rate limits
+
+Assigning TPM to a deployment sets the Tokens-Per-Minute (TPM) and Requests-Per-Minute (RPM) rate limits for the deployment, as described above. TPM rate limits are based on the maximum number of tokens that are estimated to be processed by a request at the time the request is received. It isn't the same as the token count used for billing, which is computed after all processing is completed.
+
+As each request is received, Azure OpenAI computes an estimated max processed-token count that includes the following:
+
+- Prompt text and count
+- The max_tokens parameter setting
+- The best_of parameter setting
+
+As requests come into the deployment endpoint, the estimated max-processed-token count is added to a running token count of all requests that is reset each minute. If at any time during that minute, the TPM rate limit value is reached, then further requests will receive a 429 response code until the counter resets.
+
+RPM rate limits are based on the number of requests received over time. The rate limit expects that requests be evenly distributed over a one-minute period. If this average flow isn't maintained, then requests may receive a 429 response even though the limit isn't met when measured over the course of a minute. To implement this behavior, Azure OpenAI Service evaluates the rate of incoming requests over a small period of time, typically 1 or 10 seconds. If the number of requests received during that time exceeds what would be expected at the set RPM limit, then new requests will receive a 429 response code until the next evaluation period. For example, if Azure OpenAI is monitoring request rate on 1-second intervals, then rate limiting will occur for a 600-RPM deployment if more than 10 requests are received during each 1-second period (600 requests per minute = 10 requests per second).
+
+### Rate limit best practices
+
+To minimize issues related to rate limits, it's a good idea to use the following techniques:
+
+- Set `max_tokens` and `best_of` to the minimum values that serve the needs of your scenario. For example, don't set a large `max_tokens` value if you expect your responses to be small.
+- Use quota management to increase TPM on deployments with high traffic, and to reduce TPM on deployments with limited needs.
+- Implement retry logic in your application (see the sketch after this list).
+- Avoid sharp changes in the workload. Increase the workload gradually.
+- Test different load increase patterns.
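+
+As a minimal sketch of retry logic (not an official pattern; the deployment name, backoff values, and helper name below are illustrative assumptions), an application can back off and retry when it receives a 429 response:
+
+```python
+import time
+
+import openai
+from openai.error import RateLimitError
+
+def create_completion_with_retry(prompt, max_retries=5):
+    """Call a completion deployment, backing off and retrying on 429 (rate limit) errors."""
+    delay = 1  # seconds; doubled after every 429 response
+    for attempt in range(max_retries):
+        try:
+            return openai.Completion.create(
+                deployment_id="YOUR_DEPLOYMENT_NAME",  # placeholder
+                prompt=prompt,
+                max_tokens=100,
+            )
+        except RateLimitError:
+            if attempt == max_retries - 1:
+                raise
+            time.sleep(delay)
+            delay *= 2
+```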
+
+## Next steps
+
+- To review quota defaults for Azure OpenAI, consult the [quotas & limits article](../quotas-limits.md)
ai-services Switching Endpoints https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/openai/how-to/switching-endpoints.md
+
+ Title: How to switch between OpenAI and Azure OpenAI Service endpoints with Python
+
+description: Learn about the changes you need to make to your code to swap back and forth between OpenAI and Azure OpenAI endpoints.
+Last updated: 05/24/2023
+# How to switch between OpenAI and Azure OpenAI endpoints with Python
+
+While OpenAI and Azure OpenAI Service rely on a [common Python client library](https://github.com/openai/openai-python), there are small changes you need to make to your code in order to swap back and forth between endpoints. This article walks you through the common changes and differences you'll experience when working across OpenAI and Azure OpenAI.
+
+> [!NOTE]
+> This library is maintained by OpenAI and is currently in preview. Refer to the [release history](https://github.com/openai/openai-python/releases) or the [version.py commit history](https://github.com/openai/openai-python/commits/main/openai/version.py) to track the latest updates to the library.
+
+## Authentication
+
+We recommend using environment variables. If you haven't done this before, our [Python quickstarts](../quickstart.md) walk you through this configuration.
+
+### API key
+
+<table>
+<tr>
+<td> OpenAI </td> <td> Azure OpenAI </td>
+</tr>
+<tr>
+<td>
+
+```python
+import openai
+
+openai.api_key = "sk-..."
+openai.organization = "..."
+```
+
+</td>
+<td>
+
+```python
+import openai
+
+openai.api_type = "azure"
+openai.api_key = "..."
+openai.api_base = "https://example-endpoint.openai.azure.com"
+openai.api_version = "2023-05-15" # subject to change
+```
+
+</td>
+</tr>
+</table>
+
+### Azure Active Directory authentication
+
+<table>
+<tr>
+<td> OpenAI </td> <td> Azure OpenAI </td>
+</tr>
+<tr>
+<td>
+
+```python
+import openai
+
+openai.api_key = "sk-..."
+openai.organization = "..."
+```
+
+</td>
+<td>
+
+```python
+import openai
+from azure.identity import DefaultAzureCredential
+
+credential = DefaultAzureCredential()
+token = credential.get_token("https://cognitiveservices.azure.com/.default")
+
+openai.api_type = "azuread"
+openai.api_key = token.token
+openai.api_base = "https://example-endpoint.openai.azure.com"
+openai.api_version = "2023-05-15" # subject to change
+```
+
+</td>
+</tr>
+</table>
+
+## Keyword argument for model
+
+OpenAI uses the `model` keyword argument to specify what model to use. Azure OpenAI has the concept of [deployments](create-resource.md?pivots=web-portal#deploy-a-model) and uses the `deployment_id` keyword argument to describe which model deployment to use. Azure OpenAI also supports the use of `engine` interchangeably with `deployment_id`.
+
+For OpenAI, `engine` still works in most instances, but it's deprecated and `model` is preferred.
+
+<table>
+<tr>
+<td> OpenAI </td> <td> Azure OpenAI </td>
+</tr>
+<tr>
+<td>
+
+```python
+completion = openai.Completion.create(
+ prompt="<prompt>",
+ model="text-davinci-003"
+)
+
+chat_completion = openai.ChatCompletion.create(
+ messages="<messages>",
+ model="gpt-4"
+)
+
+embedding = openai.Embedding.create(
+ input="<input>",
+ model="text-embedding-ada-002"
+)
+```
+
+</td>
+<td>
+
+```python
+completion = openai.Completion.create(
+ prompt="<prompt>",
+ deployment_id="text-davinci-003"
+ #engine="text-davinci-003"
+)
+
+chat_completion = openai.ChatCompletion.create(
+ messages="<messages>",
+ deployment_id="gpt-4"
+ #engine="gpt-4"
+
+)
+
+embedding = openai.Embedding.create(
+ input="<input>",
+ deployment_id="text-embedding-ada-002"
+ #engine="text-embedding-ada-002"
+)
+```
+
+</td>
+</tr>
+</table>
+
+## Azure OpenAI embeddings doesn't support multiple inputs
+
+Many examples show passing multiple inputs into the embeddings API. For Azure OpenAI, you must currently pass a single text input per call.
+
+<table>
+<tr>
+<td> OpenAI </td> <td> Azure OpenAI </td>
+</tr>
+<tr>
+<td>
+
+```python
+inputs = ["A", "B", "C"]
+
+embedding = openai.Embedding.create(
+ input=inputs,
+ model="text-embedding-ada-002"
+)
++
+```
+
+</td>
+<td>
+
+```python
+inputs = ["A", "B", "C"]
+
+for text in inputs:
+ embedding = openai.Embedding.create(
+ input=text,
+ deployment_id="text-embedding-ada-002"
+ #engine="text-embedding-ada-002"
+ )
+```
+
+</td>
+</tr>
+</table>
+
+## Next steps
+
+* Learn more about how to work with GPT-35-Turbo and the GPT-4 models with [our how-to guide](../how-to/chatgpt.md).
+* For more examples, check out the [Azure OpenAI Samples GitHub repository](https://aka.ms/AOAICodeSamples)
ai-services Work With Code https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/openai/how-to/work-with-code.md
+
+ Title: 'How to use the Codex models to work with code'
+
+description: Learn how to use the Codex models on Azure OpenAI to handle a variety of coding tasks
+Last updated: 06/24/2022
+keywords:
++
+# Codex models and Azure OpenAI Service
+
+The Codex model series is a descendant of our GPT-3 series that's been trained on both natural language and billions of lines of code. It's most capable in Python and proficient in over a dozen languages including C#, JavaScript, Go, Perl, PHP, Ruby, Swift, TypeScript, SQL, and even Shell.
+
+You can use Codex for a variety of tasks including:
+
+- Turn comments into code
+- Complete your next line or function in context
+- Bring knowledge to you, such as finding a useful library or API call for an application
+- Add comments
+- Rewrite code for efficiency
+
+## How to use the Codex models
+
+Here are a few examples of using Codex that can be tested in [Azure OpenAI Studio's](https://oai.azure.com) playground with a deployment of a Codex series model, such as `code-davinci-002`.
+
+### Saying "Hello" (Python)
+
+```python
+"""
+Ask the user for their name and say "Hello"
+"""
+```
+
+### Create random names (Python)
+
+```python
+"""
+1. Create a list of first names
+2. Create a list of last names
+3. Combine them randomly into a list of 100 full names
+"""
+```
+
+### Create a MySQL query (Python)
+
+```python
+"""
+Table customers, columns = [CustomerId, FirstName, LastName, Company, Address, City, State, Country, PostalCode, Phone, Fax, Email, SupportRepId]
+Create a MySQL query for all customers in Texas named Jane
+"""
+query =
+```
+
+### Explaining code (JavaScript)
+
+```javascript
+// Function 1
+var fullNames = [];
+for (var i = 0; i < 50; i++) {
+ fullNames.push(names[Math.floor(Math.random() * names.length)]
+ + " " + lastNames[Math.floor(Math.random() * lastNames.length)]);
+}
+
+// What does Function 1 do?
+```
+
+## Best practices
+
+### Start with a comment, data or code
+
+You can experiment with one of the Codex models in our playground (styling instructions as comments when needed).
+
+To get Codex to create a useful completion, it's helpful to think about what information a programmer would need to perform a task. This could simply be a clear comment or the data needed to write a useful function, like the names of variables or what class a function handles.
+
+In this example we tell Codex what to call the function and what task it's going to perform.
+
+```python
+# Create a function called 'nameImporter' to add a first and last name to the database
+```
+
+This approach scales even to the point where you can provide Codex with a comment and an example of a database schema to get it to write useful query requests for various databases. Here's an example where we provide the columns and table names for the query.
+
+```python
+# Table albums, columns = [AlbumId, Title, ArtistId]
+# Table artists, columns = [ArtistId, Name]
+# Table media_types, columns = [MediaTypeId, Name]
+# Table playlists, columns = [PlaylistId, Name]
+# Table playlist_track, columns = [PlaylistId, TrackId]
+# Table tracks, columns = [TrackId, Name, AlbumId, MediaTypeId, GenreId, Composer, Milliseconds, Bytes, UnitPrice]
+
+# Create a query for all albums with more than 10 tracks
+```
+
+When you show Codex the database schema, it's able to make an informed guess about how to format a query.
+
+### Specify the programming language
+
+Codex understands dozens of different programming languages. Many share similar conventions for comments, functions and other programming syntax. By specifying the language and version in a comment, Codex is better able to provide a completion for what you want. That said, Codex is fairly flexible with style and syntax. Here's an example for R and Python.
+
+```r
+# R language
+# Calculate the mean distance between an array of points
+```
+
+```python
+# Python 3
+# Calculate the mean distance between an array of points
+```
+
+### Prompt Codex with what you want it to do
+
+If you want Codex to create a webpage, placing the first line of code in an HTML document (`<!DOCTYPE html>`) after your comment tells Codex what it should do next. The same method works for creating a function from a comment (following the comment with a new line starting with func or def).
+
+```html
+<!-- Create a web page with the title 'Kat Katman attorney at paw' -->
+<!DOCTYPE html>
+```
+
+Placing `<!DOCTYPE html>` after our comment makes it very clear to Codex what we want it to do.
+
+Or, if we want to write a function, we could start the prompt as follows, and Codex will understand what it needs to do next.
+
+```python
+# Create a function to count to 100
+
+def counter
+```
+
+### Specifying libraries will help Codex understand what you want
+
+Codex is aware of a large number of libraries, APIs and modules. By telling Codex which ones to use, either from a comment or importing them into your code, Codex will make suggestions based upon them instead of alternatives.
+
+```html
+<!-- Use A-Frame version 1.2.0 to create a 3D website -->
+<!-- https://aframe.io/releases/1.2.0/aframe.min.js -->
+```
+
+By specifying the version, you can make sure Codex uses the most current library.
+
+> [!NOTE]
+> Codex can suggest helpful libraries and APIs, but always be sure to do your own research to make sure that they're safe for your application.
+
+### Comment style can affect code quality
+
+With some languages, the style of comments can improve the quality of the output. For example, when working with Python, in some cases using doc strings (comments wrapped in triple quotes) can give higher quality results than using the pound (`#`) symbol.
+
+```python
+"""
+Create an array of users and email addresses
+"""
+```
+
+### Comments inside of functions can be helpful
+
+Recommended coding standards usually suggest placing the description of a function inside the function. Using this format helps Codex more clearly understand what you want the function to do.
+
+```python
+def getUserBalance(id):
+ """
+    Look up the user in the database 'UserData' and return their current account balance.
+ """
+```
+
+### Provide examples for more precise results
+
+If you have a particular style or format you need Codex to use, providing examples or demonstrating it in the first part of the request will help Codex more accurately match what you need.
+
+```python
+"""
+Create a list of random animals and species
+"""
+animals = [ {"name": "Chomper", "species": "Hamster"}, {"name":
+```
+
+### Lower temperatures give more precise results
+
+Setting the API temperature to 0, or close to zero (such as 0.1 or 0.2) tends to give better results in most cases. Unlike GPT-3 models, where a higher temperature can provide useful creative and random results, higher temperatures with Codex models may give you really random or erratic responses.
+
+In cases where you need Codex to provide different potential results, start at zero and then increment upwards by 0.1 until you find suitable variation.
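+
+As a minimal sketch (the deployment name is a placeholder), you can step the temperature up from zero until the completions show enough variation:
+
+```python
+import openai
+
+prompt = "# Create a function called 'nameImporter' to add a first and last name to the database\n"
+
+# Try increasingly higher temperatures until the completions vary enough.
+for temperature in (0.0, 0.1, 0.2):
+    response = openai.Completion.create(
+        deployment_id="code-davinci-002",  # placeholder deployment name
+        prompt=prompt,
+        temperature=temperature,
+        max_tokens=64,
+    )
+    print(f"--- temperature={temperature} ---")
+    print(response["choices"][0]["text"])
+```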
+
+### Organize tasks into functions
+
+We can get Codex to write functions by specifying what the function should do in as precise terms as possible in a comment. For example, by writing the following comment, Codex creates a JavaScript timer function that's triggered when a user presses a button:
+
+A simple JavaScript timer
+
+```javascript
+// Create a timer that creates an alert in 10 seconds
+```
+
+### Creating example data
+
+Testing applications often requires using example data. Because Codex is a language model that understands how to comprehend and write natural language, you can ask Codex to create data like arrays of made up names, products and other variables. For example, here we ask Codex to create an array of weather temperatures.
+
+```javascript
+/* Create an array of weather temperatures for San Francisco */
+```
+
+Asking Codex to perform this task will produce a table like this:
+
+```javascript
+var weather = [
+ { month: 'January', high: 58, low: 48 },
+ { month: 'February', high: 61, low: 50 },
+ { month: 'March', high: 64, low: 53 },
+ { month: 'April', high: 67, low: 55 },
+ { month: 'May', high: 70, low: 58 },
+ { month: 'June', high: 73, low: 61 },
+ { month: 'July', high: 76, low: 63 },
+ { month: 'August', high: 77, low: 64 },
+ { month: 'September', high: 76, low: 63 },
+ { month: 'October', high: 73, low: 61 },
+ { month: 'November', high: 68, low: 57 },
+ { month: 'December', high: 64, low: 54 }
+];
+```
+
+### Compound functions and small applications
+
+We can provide Codex with a comment consisting of a complex request, like creating a random name generator or performing tasks with user input, and Codex can generate the rest, provided there are enough tokens.
+
+```javascript
+/*
+Create a list of animals
+Create a list of cities
+Use the lists to generate stories about what I saw at the zoo in each city
+*/
+```
+
+### Limit completion size for more precise results or lower latency
+
+Requesting longer completions in Codex can lead to imprecise answers and repetition. Limit the size of the query by reducing max_tokens and setting stop tokens. For instance, add `\n` as a stop sequence to limit completions to one line of code. Smaller completions also incur less latency.
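+
+As a minimal sketch (the deployment name is a placeholder), both settings can be passed on the completion call:
+
+```python
+import openai
+
+response = openai.Completion.create(
+    deployment_id="code-davinci-002",  # placeholder deployment name
+    prompt="# Python 3\n# Calculate the mean distance between an array of points\n",
+    temperature=0,
+    max_tokens=64,   # keep completions short for more precise results and lower latency
+    stop=["\n"],     # stop at the end of the first generated line of code
+)
+print(response["choices"][0]["text"])
+```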
+
+### Use streaming to reduce latency
+
+Large Codex queries can take tens of seconds to complete. To build applications that require lower latency, such as coding assistants that perform autocompletion, consider using streaming. Responses will be returned before the model finishes generating the entire completion. Applications that need only part of a completion can reduce latency by cutting off a completion either programmatically or by using creative values for `stop`.
+
+Users can combine streaming with duplication to reduce latency by requesting more than one solution from the API, and using the first response returned. Do this by setting `n > 1`. This approach consumes more token quota, so use carefully (for example, by using reasonable settings for `max_tokens` and `stop`).
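+
+A minimal streaming sketch (the deployment name is a placeholder) prints tokens as they arrive instead of waiting for the full completion:
+
+```python
+import openai
+
+stream = openai.Completion.create(
+    deployment_id="code-davinci-002",  # placeholder deployment name
+    prompt="# Create a function to count to 100\n\ndef counter",
+    temperature=0,
+    max_tokens=100,
+    stream=True,  # return partial completions as server-sent events
+)
+for chunk in stream:
+    print(chunk["choices"][0]["text"], end="", flush=True)
+```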
+
+### Use Codex to explain code
+
+Codex's ability to create and understand code allows us to use it to perform tasks like explaining what the code in a file does. One way to accomplish this is by putting a comment after a function that starts with "This function" or "This application is." Codex will usually interpret this as the start of an explanation and complete the rest of the text.
+
+```javascript
+/* Explain what the previous function is doing: It
+```
+
+### Explaining an SQL query
+
+In this example, we use Codex to explain in a human readable format what an SQL query is doing.
+
+```sql
+SELECT DISTINCT department.name
+FROM department
+JOIN employee ON department.id = employee.department_id
+JOIN salary_payments ON employee.id = salary_payments.employee_id
+WHERE salary_payments.date BETWEEN '2020-06-01' AND '2020-06-30'
+GROUP BY department.name
+HAVING COUNT(employee.id) > 10;
+-- Explanation of the above query in human readable format
+--
+```
+
+### Writing unit tests
+
+Creating a unit test can be accomplished in Python simply by adding the comment "Unit test" and starting a function.
+
+```python
+# Python 3
+def sum_numbers(a, b):
+ return a + b
+
+# Unit test
+def
+```
+
+### Checking code for errors
+
+By using examples, you can show Codex how to identify errors in code. In some cases no examples are required; however, demonstrating the level of detail to provide in a description can help Codex understand what to look for and how to explain it. (A check by Codex for errors shouldn't replace careful review by the user.)
+
+```javascript
+/* Explain why the previous function doesn't work. */
+```
+
+### Using source data to write database functions
+
+Just as a human programmer would benefit from understanding the database structure and the column names, Codex can use this data to help you write accurate query requests. In this example, we insert the schema for a database and tell Codex what to query the database for.
+
+```python
+# Table albums, columns = [AlbumId, Title, ArtistId]
+# Table artists, columns = [ArtistId, Name]
+# Table media_types, columns = [MediaTypeId, Name]
+# Table playlists, columns = [PlaylistId, Name]
+# Table playlist_track, columns = [PlaylistId, TrackId]
+# Table tracks, columns = [TrackId, Name, AlbumId, MediaTypeId, GenreId, Composer, Milliseconds, Bytes, UnitPrice]
+
+# Create a query for all albums with more than 10 tracks
+```
+
+### Converting between languages
+
+You can get Codex to convert from one language to another by following a simple format where you list the language of the code you want to convert in a comment, followed by the code and then a comment with the language you want it translated into.
+
+```python
+# Convert this from Python to R
+# Python version
+
+[ Python code ]
+
+# End
+
+# R version
+```
+
+### Rewriting code for a library or framework
+
+If you want Codex to make a function more efficient, you can provide it with the code to rewrite followed by an instruction on what format to use.
+
+```javascript
+// Rewrite this as a React component
+var input = document.createElement('input');
+input.setAttribute('type', 'text');
+document.body.appendChild(input);
+var button = document.createElement('button');
+button.innerHTML = 'Say Hello';
+document.body.appendChild(button);
+button.onclick = function() {
+ var name = input.value;
+ var hello = document.createElement('div');
+ hello.innerHTML = 'Hello ' + name;
+ document.body.appendChild(hello);
+};
+
+// React version:
+```
+
+## Next steps
+
+Learn more about the [underlying models that power Azure OpenAI](../concepts/models.md).
ai-services Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/openai/overview.md
+
+ Title: What is Azure OpenAI Service?
+
+description: Apply advanced language models to a variety of use cases with Azure OpenAI
+Last updated: 07/06/2023
+recommendations: false
+keywords:
++
+# What is Azure OpenAI Service?
+
+Azure OpenAI Service provides REST API access to OpenAI's powerful language models including the GPT-4, GPT-35-Turbo, and Embeddings model series. In addition, the new GPT-4 and gpt-35-turbo model series have now reached general availability. These models can be easily adapted to your specific task including but not limited to content generation, summarization, semantic search, and natural language to code translation. Users can access the service through REST APIs, Python SDK, or our web-based interface in the Azure OpenAI Studio.
+
+### Features overview
+
+| Feature | Azure OpenAI |
+| | |
+| Models available | **GPT-4 series** <br>**GPT-35-Turbo series**<br> Embeddings series <br> Learn more in our [Models](./concepts/models.md) page.|
+| Fine-tuning | Ada <br> Babbage <br> Curie <br> Cushman <br> Davinci <br>**Fine-tuning is currently unavailable to new customers**.|
+| Price | [Available here](https://azure.microsoft.com/pricing/details/cognitive-services/openai-service/) |
+| Virtual network support & private link support | Yes, unless using [Azure OpenAI on your data](./concepts/use-your-data.md). |
+| Managed Identity| Yes, via Azure Active Directory |
+| UI experience | **Azure portal** for account & resource management, <br> **Azure OpenAI Service Studio** for model exploration and fine tuning |
+| Model regional availability | [Model availability](./concepts/models.md) |
+| Content filtering | Prompts and completions are evaluated against our content policy with automated systems. High severity content will be filtered. |
+
+## Responsible AI
+
+At Microsoft, we're committed to the advancement of AI driven by principles that put people first. Generative models such as the ones available in Azure OpenAI have significant potential benefits, but without careful design and thoughtful mitigations, such models have the potential to generate incorrect or even harmful content. Microsoft has made significant investments to help guard against abuse and unintended harm, which includes requiring applicants to show well-defined use cases, incorporating Microsoft's <a href="https://www.microsoft.com/ai/responsible-ai?activetab=pivot1:primaryr6" target="_blank">principles for responsible AI use</a>, building content filters to support customers, and providing responsible AI implementation guidance to onboarded customers.
+
+## How do I get access to Azure OpenAI?
+
+
+Access is currently limited as we navigate high demand, upcoming product improvements, and <a href="https://www.microsoft.com/ai/responsible-ai?activetab=pivot1:primaryr6" target="_blank">Microsoft's commitment to responsible AI</a>. For now, we're working with customers with an existing partnership with Microsoft, lower risk use cases, and those committed to incorporating mitigations.
+
+More specific information is included in the application form. We appreciate your patience as we work to responsibly enable broader access to Azure OpenAI.
+
+Apply here for access:
+
+<a href="https://aka.ms/oaiapply" target="_blank">Apply now</a>
+
+## Comparing Azure OpenAI and OpenAI
+
+Azure OpenAI Service gives customers advanced language AI with OpenAI GPT-4, GPT-3, Codex, and DALL-E models with the security and enterprise promise of Azure. Azure OpenAI co-develops the APIs with OpenAI, ensuring compatibility and a smooth transition from one to the other.
+
+With Azure OpenAI, customers get the security capabilities of Microsoft Azure while running the same models as OpenAI. Azure OpenAI offers private networking, regional availability, and responsible AI content filtering.
+
+## Key concepts
+
+### Prompts & completions
+
+The completions endpoint is the core component of the API service. This API provides access to the model's text-in, text-out interface. Users simply need to provide an input **prompt** containing the English text command, and the model will generate a text **completion**.
+
+Here's an example of a simple prompt and completion:
+
+>**Prompt**:
+ ```
+ """
+ count to 5 in a for loop
+ """
+ ```
+>
+>**Completion**:
+ ```
+ for i in range(1, 6):
+ print(i)
+ ```
+
+### Tokens
+
+Azure OpenAI processes text by breaking it down into tokens. Tokens can be words or just chunks of characters. For example, the word "hamburger" gets broken up into the tokens "ham", "bur" and "ger", while a short and common word like "pear" is a single token. Many tokens start with a whitespace, for example " hello" and " bye".
+
+The total number of tokens processed in a given request depends on the length of your input, output and request parameters. The quantity of tokens being processed will also affect your response latency and throughput for the models.
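+
+As an illustrative sketch only (assuming the open-source `tiktoken` package and its `cl100k_base` encoding, neither of which is prescribed by this article), you can inspect how a string is split into tokens:
+
+```python
+import tiktoken  # pip install tiktoken
+
+encoding = tiktoken.get_encoding("cl100k_base")
+tokens = encoding.encode("hamburger")
+print(len(tokens), tokens)  # number of tokens and their IDs
+```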
+
+### Resources
+
+Azure OpenAI is a new product offering on Azure. You can get started with Azure OpenAI the same way as any other Azure product where you [create a resource](how-to/create-resource.md), or instance of the service, in your Azure Subscription. You can read more about Azure's [resource management design](../../azure-resource-manager/management/overview.md).
+
+### Deployments
+
+Once you create an Azure OpenAI Resource, you must deploy a model before you can start making API calls and generating text. This action can be done using the Deployment APIs. These APIs allow you to specify the model you wish to use.
+
+### Prompt engineering
+
+GPT-3, GPT-3.5, and GPT-4 models from OpenAI are prompt-based. With prompt-based models, the user interacts with the model by entering a text prompt, to which the model responds with a text completion. This completion is the model's continuation of the input text.
+
+While these models are extremely powerful, their behavior is also very sensitive to the prompt. This makes [prompt engineering](./concepts/prompt-engineering.md) an important skill to develop.
+
+Prompt construction can be difficult. In practice, the prompt conditions the model to complete the desired task, but it's more of an art than a science, often requiring experience and intuition to craft a successful prompt.
+
+### Models
+
+The service provides users access to several different models. Each model provides a different capability and price point.
+
+GPT-4 models are the latest available models. Due to high demand, access to this model series is currently available only by request. To request access, existing Azure OpenAI customers can [apply by filling out this form](https://aka.ms/oai/get-gpt4).
+
+The DALL-E models, currently in preview, generate images from text prompts that the user provides.
+
+Learn more about each model on our [models concept page](./concepts/models.md).
+
+## Next steps
+
+Learn more about the [underlying models that power Azure OpenAI](./concepts/models.md).
ai-services Quickstart https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/openai/quickstart.md
+
+ Title: 'Quickstart - Deploy a model and generate text using Azure OpenAI Service'
+
+description: Walkthrough on how to get started with Azure OpenAI and make your first completions call.
+Last updated: 05/23/2023
+zone_pivot_groups: openai-quickstart-new
+recommendations: false
++
+# Quickstart: Get started generating text using Azure OpenAI Service
+
+Use this article to get started making your first calls to Azure OpenAI.
ai-services Quotas Limits https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/openai/quotas-limits.md
+
+ Title: Azure OpenAI Service quotas and limits
+
+description: Quick reference, detailed description, and best practices on the quotas and limits for the OpenAI service in Azure AI services.
+Last updated: 06/08/2023
+# Azure OpenAI Service quotas and limits
+
+This article contains a quick reference and a detailed description of the quotas and limits for Azure OpenAI in Azure AI services.
+
+## Quotas and limits reference
+
+The following sections provide you with a quick guide to the default quotas and limits that apply to Azure OpenAI:
+
+| Limit Name | Limit Value |
+|--|--|
+| OpenAI resources per region per Azure subscription | 30 |
+| Default quota per model and region (in tokens-per-minute)<sup>1</sup> |Text-Davinci-003: 120 K <br> GPT-4: 20 K <br> GPT-4-32K: 60 K <br> All others: 240 K |
+| Default DALL-E quota limits | 2 concurrent requests |
+| Maximum prompt tokens per request | Varies per model. For more information, see [Azure OpenAI Service models](./concepts/models.md)|
+| Max fine-tuned model deployments | 2 |
+| Total number of training jobs per resource | 100 |
+| Max simultaneous running training jobs per resource | 1 |
+| Max training jobs queued | 20 |
+| Max Files per resource | 30 |
+| Total size of all files per resource | 1 GB |
+| Max training job time (job will fail if exceeded) | 720 hours |
+| Max training job size (tokens in training file) x (# of epochs) | 2 Billion |
+
+<sup>1</sup> Default quota limits are subject to change.
+
+### General best practices to remain within rate limits
+
+To minimize issues related to rate limits, it's a good idea to use the following techniques:
+
+- Implement retry logic in your application.
+- Avoid sharp changes in the workload. Increase the workload gradually.
+- Test different load increase patterns.
+- Increase the quota assigned to your deployment. Move quota from another deployment, if necessary.
+
+### How to request increases to the default quotas and limits
+
+Quota increase requests can be submitted from the [Quotas](./how-to/quota.md) page of Azure OpenAI Studio. Please note that due to overwhelming demand, we are not currently approving new quota increase requests. Your request will be queued until it can be filled at a later time.
+
+For other rate limits, please [submit a service request](../cognitive-services-support-options.md?context=/azure/ai-services/openai/context/context).
+
+## Next steps
+
+Explore how to [manage quota](./how-to/quota.md) for your Azure OpenAI deployments.
+Learn more about the [underlying models that power Azure OpenAI](./concepts/models.md).
ai-services Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/openai/reference.md
+
+ Title: Azure OpenAI Service REST API reference
+
+description: Learn how to use Azure OpenAI's REST API. In this article, you'll learn about authorization options, how to structure a request and receive a response.
+Last updated: 05/15/2023
+recommendations: false
+++
+# Azure OpenAI Service REST API reference
+
+This article provides details on the inference REST API endpoints for Azure OpenAI.
+
+## Authentication
+
+Azure OpenAI provides two methods for authentication. You can use either API Keys or Azure Active Directory.
+
+- **API Key authentication**: For this type of authentication, all API requests must include the API Key in the ```api-key``` HTTP header. The [Quickstart](./quickstart.md) provides guidance for how to make calls with this type of authentication.
+
+- **Azure Active Directory authentication**: You can authenticate an API call using an Azure Active Directory token. Authentication tokens are included in a request as the ```Authorization``` header. The token provided must be preceded by ```Bearer```, for example ```Bearer YOUR_AUTH_TOKEN```. You can read our how-to guide on [authenticating with Azure Active Directory](./how-to/managed-identity.md).
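+
+As an illustrative sketch only (the resource name, deployment name, and token are placeholders), an Azure Active Directory token is passed in the ```Authorization``` header like this:
+
+```console
+curl https://YOUR_RESOURCE_NAME.openai.azure.com/openai/deployments/YOUR_DEPLOYMENT_NAME/completions?api-version=2023-05-15 \
+  -H "Content-Type: application/json" \
+  -H "Authorization: Bearer YOUR_AUTH_TOKEN" \
+  -d '{"prompt": "Once upon a time", "max_tokens": 5}'
+```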
+
+### REST API versioning
+
+The service APIs are versioned using the ```api-version``` query parameter. All versions follow the YYYY-MM-DD date structure. For example:
+
+```http
+POST https://YOUR_RESOURCE_NAME.openai.azure.com/openai/deployments/YOUR_DEPLOYMENT_NAME/completions?api-version=2023-05-15
+```
+
+## Completions
+
+With the Completions operation, the model will generate one or more predicted completions based on a provided prompt. The service can also return the probabilities of alternative tokens at each position.
+
+**Create a completion**
+
+```http
+POST https://{your-resource-name}.openai.azure.com/openai/deployments/{deployment-id}/completions?api-version={api-version}
+```
+
+**Path parameters**
+
+| Parameter | Type | Required? | Description |
+|--|--|--|--|
+| ```your-resource-name``` | string | Required | The name of your Azure OpenAI Resource. |
+| ```deployment-id``` | string | Required | The deployment name you chose when you deployed the model. |
+| ```api-version``` | string | Required |The API version to use for this operation. This follows the YYYY-MM-DD format. |
+
+**Supported versions**
+
+- `2023-03-15-preview` [Swagger spec](https://github.com/Azure/azure-rest-api-specs/blob/main/specification/cognitiveservices/data-plane/AzureOpenAI/inference/preview/2023-03-15-preview/inference.json)
+- `2022-12-01` [Swagger spec](https://github.com/Azure/azure-rest-api-specs/blob/main/specification/cognitiveservices/data-plane/AzureOpenAI/inference/stable/2022-12-01/inference.json)
+- `2023-05-15` [Swagger spec](https://github.com/Azure/azure-rest-api-specs/blob/main/specification/cognitiveservices/data-plane/AzureOpenAI/inference/stable/2023-05-15/inference.json)
+- `2023-06-01-preview` [Swagger spec](https://github.com/Azure/azure-rest-api-specs/blob/main/specification/cognitiveservices/data-plane/AzureOpenAI/inference/preview/2023-06-01-preview/inference.json)
+
+**Request body**
+
+| Parameter | Type | Required? | Default | Description |
+|--|--|--|--|--|
+| ```prompt``` | string or array | Optional | ```<\|endoftext\|>``` | The prompt(s) to generate completions for, encoded as a string, or array of strings. Note that ```<\|endoftext\|>``` is the document separator that the model sees during training, so if a prompt isn't specified the model will generate as if from the beginning of a new document. |
+| ```max_tokens``` | integer | Optional | 16 | The maximum number of tokens to generate in the completion. The token count of your prompt plus max_tokens can't exceed the model's context length. Most models have a context length of 2048 tokens (except for the newest models, which support 4096). |
+| ```temperature``` | number | Optional | 1 | What sampling temperature to use, between 0 and 2. Higher values means the model will take more risks. Try 0.9 for more creative applications, and 0 (`argmax sampling`) for ones with a well-defined answer. We generally recommend altering this or top_p but not both. |
+| ```top_p``` | number | Optional | 1 | An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered. We generally recommend altering this or temperature but not both. |
+| ```logit_bias``` | map | Optional | null | Modify the likelihood of specified tokens appearing in the completion. Accepts a json object that maps tokens (specified by their token ID in the GPT tokenizer) to an associated bias value from -100 to 100. You can use this tokenizer tool (which works for both GPT-2 and GPT-3) to convert text to token IDs. Mathematically, the bias is added to the logits generated by the model prior to sampling. The exact effect will vary per model, but values between -1 and 1 should decrease or increase likelihood of selection; values like -100 or 100 should result in a ban or exclusive selection of the relevant token. As an example, you can pass {"50256": -100} to prevent the <\|endoftext\|> token from being generated. |
+| ```user``` | string | Optional | | A unique identifier representing your end-user, which can help monitoring and detecting abuse |
+| ```n``` | integer | Optional | 1 | How many completions to generate for each prompt. Note: Because this parameter generates many completions, it can quickly consume your token quota. Use carefully and ensure that you have reasonable settings for max_tokens and stop. |
+| ```stream``` | boolean | Optional | False | Whether to stream back partial progress. If set, tokens will be sent as data-only server-sent events as they become available, with the stream terminated by a data: [DONE] message.|
+| ```logprobs``` | integer | Optional | null | Include the log probabilities of the `logprobs` most likely tokens, as well as the chosen tokens. For example, if `logprobs` is 10, the API will return a list of the 10 most likely tokens. The API will always return the logprob of the sampled token, so there may be up to `logprobs`+1 elements in the response. This parameter cannot be used with `gpt-35-turbo`. |
+| ```suffix```| string | Optional | null | The suffix that comes after a completion of inserted text. |
+| ```echo``` | boolean | Optional | False | Echo back the prompt in addition to the completion. This parameter cannot be used with `gpt-35-turbo`. |
+| ```stop``` | string or array | Optional | null | Up to four sequences where the API will stop generating further tokens. The returned text won't contain the stop sequence. |
+| ```presence_penalty``` | number | Optional | 0 | Number between -2.0 and 2.0. Positive values penalize new tokens based on whether they appear in the text so far, increasing the model's likelihood to talk about new topics. |
+| ```frequency_penalty``` | number | Optional | 0 | Number between -2.0 and 2.0. Positive values penalize new tokens based on their existing frequency in the text so far, decreasing the model's likelihood to repeat the same line verbatim. |
+| ```best_of``` | integer | Optional | 1 | Generates best_of completions server-side and returns the "best" (the one with the lowest log probability per token). Results can't be streamed. When used with n, best_of controls the number of candidate completions and n specifies how many to return – best_of must be greater than n. Note: Because this parameter generates many completions, it can quickly consume your token quota. Use carefully and ensure that you have reasonable settings for max_tokens and stop. This parameter cannot be used with `gpt-35-turbo`. |
+
+#### Example request
+
+```console
+curl https://YOUR_RESOURCE_NAME.openai.azure.com/openai/deployments/YOUR_DEPLOYMENT_NAME/completions?api-version=2023-05-15\
+ -H "Content-Type: application/json" \
+ -H "api-key: YOUR_API_KEY" \
+ -d "{
+ \"prompt\": \"Once upon a time\",
+ \"max_tokens\": 5
+}"
+```
+
+#### Example response
+
+```json
+{
+ "id": "cmpl-4kGh7iXtjW4lc9eGhff6Hp8C7btdQ",
+ "object": "text_completion",
+ "created": 1646932609,
+ "model": "ada",
+ "choices": [
+ {
+ "text": ", a dark line crossed",
+ "index": 0,
+ "logprobs": null,
+ "finish_reason": "length"
+ }
+ ]
+}
+```
+
+In the example response, `finish_reason` equals `length` because the completion reached the five-token `max_tokens` limit. If `finish_reason` equals `content_filter`, consult our [content filtering guide](./concepts/content-filter.md) to understand why this is occurring.
+
+## Embeddings
+Get a vector representation of a given input that can be easily consumed by machine learning models and other algorithms.
+
+> [!NOTE]
+> We currently do not support batching of embeddings into a single API call. If you receive the error `InvalidRequestError: Too many inputs. The max number of inputs is 1. We hope to increase the number of inputs per request soon.`, this typically occurs when an array of inputs is passed as a batch rather than a single string. The string can be up to 8191 tokens in length when using the text-embedding-ada-002 (Version 2) model.
+
+**Create an embedding**
+
+```http
+POST https://{your-resource-name}.openai.azure.com/openai/deployments/{deployment-id}/embeddings?api-version={api-version}
+```
+
+**Path parameters**
+
+| Parameter | Type | Required? | Description |
+|--|--|--|--|
+| ```your-resource-name``` | string | Required | The name of your Azure OpenAI Resource. |
+| ```deployment-id``` | string | Required | The name of your model deployment. You're required to first deploy a model before you can make calls |
+| ```api-version``` | string | Required |The API version to use for this operation. This follows the YYYY-MM-DD format. |
+
+**Supported versions**
+
+- `2023-03-15-preview` [Swagger spec](https://github.com/Azure/azure-rest-api-specs/blob/main/specification/cognitiveservices/data-plane/AzureOpenAI/inference/preview/2023-03-15-preview/inference.json)
+- `2022-12-01` [Swagger spec](https://github.com/Azure/azure-rest-api-specs/blob/main/specification/cognitiveservices/data-plane/AzureOpenAI/inference/stable/2022-12-01/inference.json)
+- `2023-05-15` [Swagger spec](https://github.com/Azure/azure-rest-api-specs/blob/main/specification/cognitiveservices/data-plane/AzureOpenAI/inference/stable/2023-05-15/inference.json)
+
+**Request body**
+
+| Parameter | Type | Required? | Default | Description |
+|--|--|--|--|--|
+| ```input```| string | Yes | N/A | Input text to get embeddings for, encoded as a string. The number of input tokens varies depending on what [model you are using](./concepts/models.md). <br> Unless you're embedding code, we suggest replacing newlines (\n) in your input with a single space, as we have observed inferior results when newlines are present.|
+| ```user``` | string | No | Null | A unique identifier representing your end-user. This will help Azure OpenAI monitor and detect abuse. **Do not pass PII identifiers; instead, use pseudonymized values such as GUIDs** |
+
+#### Example request
+
+```console
+curl https://YOUR_RESOURCE_NAME.openai.azure.com/openai/deployments/YOUR_DEPLOYMENT_NAME/embeddings?api-version=2023-05-15 \
+ -H "Content-Type: application/json" \
+ -H "api-key: YOUR_API_KEY" \
+ -d "{\"input\": \"The food was delicious and the waiter...\"}"
+```
+
+#### Example response
+
+```json
+{
+ "object": "list",
+ "data": [
+ {
+ "object": "embedding",
+ "embedding": [
+ 0.018990106880664825,
+ -0.0073809814639389515,
+ .... (1024 floats total for ada)
+ 0.021276434883475304,
+ ],
+ "index": 0
+ }
+ ],
+ "model": "text-similarity-babbage:001"
+}
+```
+
+## Chat completions
+
+Create completions for chat messages with the GPT-35-Turbo and GPT-4 models.
+
+**Create chat completions**
+
+```http
+POST https://{your-resource-name}.openai.azure.com/openai/deployments/{deployment-id}/chat/completions?api-version={api-version}
+```
+
+**Path parameters**
+
+| Parameter | Type | Required? | Description |
+|--|--|--|--|
+| ```your-resource-name``` | string | Required | The name of your Azure OpenAI Resource. |
+| ```deployment-id``` | string | Required | The name of your model deployment. You're required to first deploy a model before you can make calls |
+| ```api-version``` | string | Required |The API version to use for this operation. This follows the YYYY-MM-DD format. |
+
+**Supported versions**
+
+- `2023-03-15-preview` [Swagger spec](https://github.com/Azure/azure-rest-api-specs/blob/main/specification/cognitiveservices/data-plane/AzureOpenAI/inference/preview/2023-03-15-preview/inference.json)
+- `2023-05-15` [Swagger spec](https://github.com/Azure/azure-rest-api-specs/blob/main/specification/cognitiveservices/data-plane/AzureOpenAI/inference/stable/2023-05-15/inference.json)
+- `2023-06-01-preview` [Swagger spec](https://github.com/Azure/azure-rest-api-specs/blob/main/specification/cognitiveservices/data-plane/AzureOpenAI/inference/preview/2023-06-01-preview/inference.json)
+
+#### Example request
+
+```console
+curl https://YOUR_RESOURCE_NAME.openai.azure.com/openai/deployments/YOUR_DEPLOYMENT_NAME/chat/completions?api-version=2023-05-15 \
+ -H "Content-Type: application/json" \
+ -H "api-key: YOUR_API_KEY" \
+ -d '{"messages":[{"role": "system", "content": "You are a helpful assistant."},{"role": "user", "content": "Does Azure OpenAI support customer managed keys?"},{"role": "assistant", "content": "Yes, customer managed keys are supported by Azure OpenAI."},{"role": "user", "content": "Do other Azure AI services support this too?"}]}'
+
+```
+
+#### Example response
+
+```console
+{"id":"chatcmpl-6v7mkQj980V1yBec6ETrKPRqFjNw9",
+"object":"chat.completion","created":1679072642,
+"model":"gpt-35-turbo",
+"usage":{"prompt_tokens":58,
+"completion_tokens":68,
+"total_tokens":126},
+"choices":[{"message":{"role":"assistant",
+"content":"Yes, other Azure AI services also support customer managed keys. Azure AI services offer multiple options for customers to manage keys, such as using Azure Key Vault, customer-managed keys in Azure Key Vault or customer-managed keys through Azure Storage service. This helps customers ensure that their data is secure and access to their services is controlled."},"finish_reason":"stop","index":0}]}
+```
+
+In the example response, `finish_reason` equals `stop`. If `finish_reason` equals `content_filter`, consult our [content filtering guide](./concepts/content-filter.md) to understand why this is occurring.
+
+Output formatting has been adjusted for ease of reading; the actual output is a single block of text without line breaks.
+
+| Parameter | Type | Required? | Default | Description |
+|--|--|--|--|--|
+| ```messages``` | array | Required | | The messages to generate chat completions for, in the chat format. |
+| ```temperature```| number | Optional | 1 | What sampling temperature to use, between 0 and 2. Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic. We generally recommend altering this or `top_p` but not both. |
+| ```n``` | integer | Optional | 1 | How many chat completion choices to generate for each input message. |
+| ```stream``` | boolean | Optional | false | If set, partial message deltas will be sent, like in ChatGPT. Tokens will be sent as data-only server-sent events as they become available, with the stream terminated by a `data: [DONE]` message. |
+| ```stop``` | string or array | Optional | null | Up to 4 sequences where the API will stop generating further tokens.|
+| ```max_tokens``` | integer | Optional | inf | The maximum number of tokens allowed for the generated answer. By default, the number of tokens the model can return will be (4096 - prompt tokens).|
+| ```presence_penalty``` | number | Optional | 0 | Number between -2.0 and 2.0. Positive values penalize new tokens based on whether they appear in the text so far, increasing the model's likelihood to talk about new topics.|
+| ```frequency_penalty``` | number | Optional | 0 | Number between -2.0 and 2.0. Positive values penalize new tokens based on their existing frequency in the text so far, decreasing the model's likelihood to repeat the same line verbatim.|
+| ```logit_bias``` | object | Optional | null | Modify the likelihood of specified tokens appearing in the completion. Accepts a json object that maps tokens (specified by their token ID in the tokenizer) to an associated bias value from -100 to 100. Mathematically, the bias is added to the logits generated by the model prior to sampling. The exact effect will vary per model, but values between -1 and 1 should decrease or increase likelihood of selection; values like -100 or 100 should result in a ban or exclusive selection of the relevant token.|
+| ```user``` | string | Optional | | A unique identifier representing your end-user, which can help Azure OpenAI to monitor and detect abuse.|
+
+## Completions extensions
+
+Extensions for chat completions, for example Azure OpenAI on your data.
+
+**Use chat completions extensions**
+
+```http
+POST {your-resource-name}/openai/deployments/{deployment-id}/extensions/chat/completions?api-version={api-version}
+```
+
+**Path parameters**
+
+| Parameter | Type | Required? | Description |
+|--|--|--|--|
+| ```your-resource-name``` | string | Required | The name of your Azure OpenAI Resource. |
+| ```deployment-id``` | string | Required | The name of your model deployment. You're required to first deploy a model before you can make calls |
+| ```api-version``` | string | Required |The API version to use for this operation. This follows the YYYY-MM-DD format. |
+
+**Supported versions**
+- `2023-06-01-preview` [Swagger spec](https://github.com/Azure/azure-rest-api-specs/blob/main/specification/cognitiveservices/data-plane/AzureOpenAI/inference/preview/2023-06-01-preview/inference.json)
+
+#### Example request
+
+```Console
+curl -i -X POST YOUR_RESOURCE_NAME/openai/deployments/YOUR_DEPLOYMENT_NAME/extensions/chat/completions?api-version=2023-06-01-preview \
+-H "Content-Type: application/json" \
+-H "api-key: YOUR_API_KEY" \
+-H "chatgpt_url: YOUR_RESOURCE_URL" \
+-H "chatgpt_key: YOUR_API_KEY" \
+-d \
+'
+{
+ "dataSources": [
+ {
+ "type": "AzureCognitiveSearch",
+ "parameters": {
+ "endpoint": "'YOUR_AZURE_COGNITIVE_SEARCH_ENDPOINT'",
+ "key": "'YOUR_AZURE_COGNITIVE_SEARCH_KEY'",
+ "indexName": "'YOUR_AZURE_COGNITIVE_SEARCH_INDEX_NAME'"
+ }
+ }
+ ],
+ "messages": [
+ {
+ "role": "user",
+ "content": "What are the differences between Azure Machine Learning and Azure AI services?"
+ }
+ ]
+}
+'
+```
+
+#### Example response
+
+```json
+{
+ "id": "12345678-1a2b-3c4e5f-a123-12345678abcd",
+ "model": "",
+ "created": 1684304924,
+ "object": "chat.completion",
+ "choices": [
+ {
+ "index": 0,
+ "messages": [
+ {
+ "role": "tool",
+ "content": "{\"citations\": [{\"content\": \"\\nAzure AI services are cloud-based artificial intelligence (AI) services...\", \"id\": null, \"title\": \"What is Azure AI services\", \"filepath\": null, \"url\": null, \"metadata\": {\"chunking\": \"orignal document size=250. Scores=0.4314117431640625 and 1.72564697265625.Org Highlight count=4.\"}, \"chunk_id\": \"0\"}], \"intent\": \"[\\\"Learn about Azure AI services.\\\"]\"}",
+ "end_turn": false
+ },
+ {
+ "role": "assistant",
+ "content": " \nAzure AI services are cloud-based artificial intelligence (AI) services that help developers build cognitive intelligence into applications without having direct AI or data science skills or knowledge. [doc1]. Azure Machine Learning is a cloud service for accelerating and managing the machine learning project lifecycle. [doc1].",
+ "end_turn": true
+ }
+ ]
+ }
+ ]
+}
+```
+
+| Parameters | Type | Required? | Default | Description |
+|--|--|--|--|--|
+| `messages` | array | Required | null | The messages to generate chat completions for, in the chat format. |
+| `dataSources` | array | Required | | The data sources to be used for the Azure OpenAI on your data feature. |
+| `temperature` | number | Optional | 0 | What sampling temperature to use, between 0 and 2. Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic. We generally recommend altering this or `top_p` but not both. |
+| `top_p` | number | Optional | 1 |An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with `top_p` probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered. We generally recommend altering this or temperature but not both.|
+| `stream` | boolean | Optional | false | If set, partial message deltas will be sent, like in ChatGPT. Tokens will be sent as data-only server-sent events as they become available, with the stream terminated by a message `"messages": [{"delta": {"content": "[DONE]"}, "index": 2, "end_turn": true}]` |
+| `stop` | string or array | Optional | null | Up to 2 sequences where the API will stop generating further tokens. |
+| `max_tokens` | integer | Optional | 1000 | The maximum number of tokens allowed for the generated answer. By default, the number of tokens the model can return is `4096 - prompt_tokens`. |
+
+The following parameters can be used inside the `parameters` field of `dataSources`.
+
+| Parameters | Type | Required? | Default | Description |
+|--|--|--|--|--|
+| `type` | string | Required | null | The data source to be used for the Azure OpenAI on your data feature. For Azure Cognitive Search, the value is `AzureCognitiveSearch`. |
+| `endpoint` | string | Required | null | The data source endpoint. |
+| `key` | string | Required | null | One of the Azure Cognitive Search admin keys for your service. |
+| `indexName` | string | Required | null | The search index to be used. |
+| `fieldsMapping` | dictionary | Optional | null | Index data column mapping. |
+| `inScope` | boolean | Optional | true | If set, this value will limit responses specific to the grounding data content. |
+| `topNDocuments` | number | Optional | 5 | Number of documents that need to be fetched for document augmentation. |
+| `queryType` | string | Optional | simple | Indicates which query option will be used for Azure Cognitive Search. |
+| `semanticConfiguration` | string | Optional | null | The semantic search configuration. Only available when `queryType` is set to `semantic`. |
+| `roleInformation` | string | Optional | null | Gives the model instructions about how it should behave and the context it should reference when generating a response. Corresponds to the "System Message" in Azure OpenAI Studio. <!--See [Using your data](./concepts/use-your-data.md#system-message) for more information.--> There's a 100 token limit, which counts towards the overall token limit.|
+
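+For reference, here's a minimal sketch of the same request made from Python with the `requests` library. The resource name, deployment name, and Cognitive Search values are placeholders, and the `chatgpt_url` header is assumed to point at the deployment's standard chat completions endpoint.
+
+```python
+import os
+import requests
+
+# Placeholder values - replace with your own resource, deployment, and search details.
+resource_name = "YOUR_RESOURCE_NAME"
+deployment_id = "YOUR_DEPLOYMENT_NAME"
+api_key = os.getenv("AZURE_OPENAI_API_KEY")
+
+base = f"https://{resource_name}.openai.azure.com/openai/deployments/{deployment_id}"
+url = f"{base}/extensions/chat/completions?api-version=2023-06-01-preview"
+
+headers = {
+    "Content-Type": "application/json",
+    "api-key": api_key,
+    # The extensions endpoint also expects the chat completions URL and key.
+    "chatgpt_url": f"{base}/chat/completions?api-version=2023-06-01-preview",
+    "chatgpt_key": api_key,
+}
+
+body = {
+    "dataSources": [
+        {
+            "type": "AzureCognitiveSearch",
+            "parameters": {
+                "endpoint": "YOUR_AZURE_COGNITIVE_SEARCH_ENDPOINT",
+                "key": "YOUR_AZURE_COGNITIVE_SEARCH_KEY",
+                "indexName": "YOUR_AZURE_COGNITIVE_SEARCH_INDEX_NAME",
+            },
+        }
+    ],
+    "messages": [
+        {
+            "role": "user",
+            "content": "What are the differences between Azure Machine Learning and Azure AI services?",
+        }
+    ],
+}
+
+response = requests.post(url, headers=headers, json=body)
+print(response.json())
+```
+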
+## Image generation
+
+### Request a generated image
+
+Generate a batch of images from a text caption. Image generation is currently only available with `api-version=2023-06-01-preview`.
+
+```http
+POST https://{your-resource-name}.openai.azure.com/openai/images/generations:submit?api-version={api-version}
+```
+
+**Path parameters**
+
+| Parameter | Type | Required? | Description |
+|--|--|--|--|
+| ```your-resource-name``` | string | Required | The name of your Azure OpenAI Resource. |
+| ```api-version``` | string | Required |The API version to use for this operation. This follows the YYYY-MM-DD format. |
+
+**Supported versions**
+
+- `2023-06-01-preview`
+
+**Request body**
+
+| Parameter | Type | Required? | Default | Description |
+|--|--|--|--|--|
+| ```prompt``` | string | Required | | A text description of the desired image(s). The maximum length is 1000 characters. |
+| ```n``` | integer | Optional | 1 | The number of images to generate. Must be between 1 and 5. |
+| ```size``` | string | Optional | 1024x1024 | The size of the generated images. Must be one of `256x256`, `512x512`, or `1024x1024`. |
+
+#### Example request
+
+```console
+curl -X POST https://YOUR_RESOURCE_NAME.openai.azure.com/openai/images/generations:submit?api-version=2023-06-01-preview \
+ -H "Content-Type: application/json" \
+ -H "api-key: YOUR_API_KEY" \
+ -d '{
+"prompt": "An avocado chair",
+"size": "512x512",
+"n": 3
+}'
+```
+
+#### Example response
+
+The operation returns a `202` status code and a `GenerateImagesResponse` JSON object containing the ID and status of the operation.
+
+```json
+{
+ "id": "f508bcf2-e651-4b4b-85a7-58ad77981ffa",
+ "status": "notRunning"
+}
+```
+
+### Get a generated image result
++
+Use this API to retrieve the results of an image generation operation. Image generation is currently only available with `api-version=2023-06-01-preview`.
+
+```http
+GET https://{your-resource-name}.openai.azure.com/openai/operations/images/{operation-id}?api-version={api-version}
+```
++
+**Path parameters**
+
+| Parameter | Type | Required? | Description |
+|--|--|--|--|
+| ```your-resource-name``` | string | Required | The name of your Azure OpenAI Resource. |
+| ```operation-id``` | string | Required | The GUID that identifies the original image generation request. |
+
+**Supported versions**
+
+- `2023-06-01-preview`
+
+#### Example request
+
+```console
+curl -X GET "https://{your-resource-name}.openai.azure.com/openai/operations/images/{operation-id}?api-version=2023-06-01-preview"
+-H "Content-Type: application/json"
+-H "Api-Key: {api key}"
+```
+
+#### Example response
+
+Upon success the operation returns a `200` status code and an `OperationResponse` JSON object. The `status` field can be `"notRunning"` (task is queued but hasn't started yet), `"running"`, `"succeeded"`, `"canceled"` (task has timed out), `"failed"`, or `"deleted"`. A `succeeded` status indicates that the generated image is available for download at the given URL. If multiple images were generated, their URLs are all returned in the `result.data` field.
+
+```json
+{
+ "created": 1685064331,
+ "expires": 1685150737,
+ "id": "4b755937-3173-4b49-bf3f-da6702a3971a",
+ "result": {
+ "data": [
+ {
+ "url": "<URL_TO_IMAGE>"
+ },
+ {
+ "url": "<URL_TO_NEXT_IMAGE>"
+ },
+ ...
+ ]
+ },
+ "status": "succeeded"
+}
+```
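+
+If you're calling the API programmatically, you typically submit the generation request and then poll this operations endpoint until the status reaches a terminal state. The following Python sketch shows that flow; the resource name, key, and prompt are placeholders.
+
+```python
+import os
+import time
+import requests
+
+# Placeholder values - replace with your own resource name and key.
+resource_name = "YOUR_RESOURCE_NAME"
+api_key = os.getenv("AZURE_OPENAI_API_KEY")
+api_version = "2023-06-01-preview"
+base = f"https://{resource_name}.openai.azure.com/openai"
+headers = {"Content-Type": "application/json", "api-key": api_key}
+
+# Submit the generation request; the response body contains the operation ID.
+submit = requests.post(
+    f"{base}/images/generations:submit?api-version={api_version}",
+    headers=headers,
+    json={"prompt": "An avocado chair", "size": "512x512", "n": 1},
+)
+operation_id = submit.json()["id"]
+
+# Poll the operation until it reaches a terminal state.
+while True:
+    result = requests.get(
+        f"{base}/operations/images/{operation_id}?api-version={api_version}",
+        headers=headers,
+    ).json()
+    if result["status"] in ("succeeded", "failed", "canceled", "deleted"):
+        break
+    time.sleep(2)
+
+if result["status"] == "succeeded":
+    for item in result["result"]["data"]:
+        print(item["url"])
+```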
+
+### Delete a generated image from the server
+
+You can use the operation ID returned by the request to delete the corresponding image from the Azure server. Generated images are automatically deleted after 24 hours by default, but you can trigger the deletion earlier if you want to.
+
+```http
+DELETE https://{your-resource-name}.openai.azure.com/openai/operations/images/{operation-id}?api-version={api-version}
+```
+
+**Path parameters**
+
+| Parameter | Type | Required? | Description |
+|--|--|--|--|
+| ```your-resource-name``` | string | Required | The name of your Azure OpenAI Resource. |
+| ```operation-id``` | string | Required | The GUID that identifies the original image generation request. |
+
+**Supported versions**
+
+- `2023-06-01-preview`
+
+#### Example request
+
+```console
+curl -X DELETE "https://{your-resource-name}.openai.azure.com/openai/operations/images/{operation-id}?api-version=2023-06-01-preview"
+-H "Content-Type: application/json"
+-H "Api-Key: {api key}"
+```
+
+#### Response
+
+The operation returns a `204` status code if successful. This API only succeeds if the operation is in an end state (not `running`).
+
+## Management APIs
+
+Azure OpenAI is deployed as a part of the Azure AI services. All Azure AI services rely on the same set of management APIs for creation, update and delete operations. The management APIs are also used for deploying models within an OpenAI resource.
+
+[**Management APIs reference documentation**](/rest/api/cognitiveservices/)
+
+## Next steps
+
+Learn about [managing deployments, models, and fine-tuning with the REST API](/rest/api/cognitiveservices/azureopenaistable/deployments/create).
+Learn more about the [underlying models that power Azure OpenAI](./concepts/models.md).
ai-services Embeddings https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/openai/tutorials/embeddings.md
+
+ Title: Azure OpenAI Service embeddings tutorial
+
+description: Learn how to use Azure OpenAI's embeddings API for document search with the BillSum dataset
+++++ Last updated : 06/14/2023++
+recommendations: false
++++
+# Tutorial: Explore Azure OpenAI Service embeddings and document search
+
+This tutorial will walk you through using the Azure OpenAI [embeddings](../concepts/understand-embeddings.md) API to perform **document search** where you'll query a knowledge base to find the most relevant document.
+
+In this tutorial, you learn how to:
+
+> [!div class="checklist"]
+> * Install Azure OpenAI and other dependent Python libraries.
+> * Download the BillSum dataset and prepare it for analysis.
+> * Create environment variables for your resources endpoint and API key.
+> * Use the **text-embedding-ada-002 (Version 2)** model.
+> * Use [cosine similarity](../concepts/understand-embeddings.md) to rank search results.
+
+> [!Important]
+> We strongly recommend using `text-embedding-ada-002 (Version 2)`. This model/version provides parity with OpenAI's `text-embedding-ada-002`. To learn more about the improvements offered by this model, please refer to [OpenAI's blog post](https://openai.com/blog/new-and-improved-embedding-model). Even if you are currently using Version 1 you should migrate to Version 2 to take advantage of the latest weights/updated token limit. Version 1 and Version 2 are not interchangeable, so document embedding and document search must be done using the same version of the model.
+
+## Prerequisites
+
+* An Azure subscription - [Create one for free](https://azure.microsoft.com/free/cognitive-services?azure-portal=true)
+* Access granted to Azure OpenAI in the desired Azure subscription
+ Currently, access to this service is granted only by application. You can apply for access to Azure OpenAI by completing the form at <a href="https://aka.ms/oai/access" target="_blank">https://aka.ms/oai/access</a>. Open an issue on this repo to contact us if you have an issue.
+* <a href="https://www.python.org/" target="_blank">Python 3.7.1 or later version</a>
+* The following Python libraries: openai, num2words, matplotlib, plotly, scipy, scikit-learn, pandas, tiktoken.
+* [Jupyter Notebooks](https://jupyter.org/)
+* An Azure OpenAI resource with the **text-embedding-ada-002 (Version 2)** model deployed. This model is currently only available in [certain regions](../concepts/models.md#model-summary-table-and-region-availability). If you don't have a resource the process of creating one is documented in our [resource deployment guide](../how-to/create-resource.md).
+
+## Set up
+
+### Python libraries
+
+If you haven't already, you need to install the following libraries:
+
+```cmd
+pip install openai num2words matplotlib plotly scipy scikit-learn pandas tiktoken
+```
+
+<!--Alternatively, you can use our [requirements.txt file](https://github.com/Azure-Samples/Azure-OpenAI-Docs-Samples/blob/main/Samples/Tutorials/Embeddings/requirements.txt).-->
+
+### Download the BillSum dataset
+
+BillSum is a dataset of United States Congressional and California state bills. For illustration purposes, we'll look only at the US bills. The corpus consists of bills from the 103rd-115th (1993-2018) sessions of Congress. The data was split into 18,949 train bills and 3,269 test bills. The BillSum corpus focuses on mid-length legislation from 5,000 to 20,000 characters in length. More information on the project and the original academic paper from which this dataset is derived can be found on the [BillSum project's GitHub repository](https://github.com/FiscalNote/BillSum).
+
+This tutorial uses the `bill_sum_data.csv` file that can be downloaded from our [GitHub sample data](https://github.com/Azure-Samples/Azure-OpenAI-Docs-Samples/blob/main/Samples/Tutorials/Embeddings/data/bill_sum_data.csv).
+
+You can also download the sample data by running the following command on your local machine:
+
+```cmd
+curl "https://raw.githubusercontent.com/Azure-Samples/Azure-OpenAI-Docs-Samples/main/Samples/Tutorials/Embeddings/data/bill_sum_data.csv" --output bill_sum_data.csv
+```
+
+### Retrieve key and endpoint
+
+To successfully make a call against Azure OpenAI, you'll need an **endpoint** and a **key**.
+
+|Variable name | Value |
+|--|-|
+| `ENDPOINT` | This value can be found in the **Keys & Endpoint** section when examining your resource from the Azure portal. Alternatively, you can find the value in **Azure OpenAI Studio** > **Playground** > **Code View**. An example endpoint is: `https://docs-test-001.openai.azure.com`.|
+| `API-KEY` | This value can be found in the **Keys & Endpoint** section when examining your resource from the Azure portal. You can use either `KEY1` or `KEY2`.|
+
+Go to your resource in the Azure portal. The **Endpoint and Keys** can be found in the **Resource Management** section. Copy your endpoint and access key as you'll need both for authenticating your API calls. You can use either `KEY1` or `KEY2`. Always having two keys allows you to securely rotate and regenerate keys without causing a service disruption.
+
+Create and assign persistent environment variables for your key and endpoint.
+
+### Environment variables
+
+# [Command Line](#tab/command-line)
+
+```CMD
+setx AZURE_OPENAI_API_KEY "REPLACE_WITH_YOUR_KEY_VALUE_HERE"
+```
+
+```CMD
+setx AZURE_OPENAI_ENDPOINT "REPLACE_WITH_YOUR_ENDPOINT_HERE"
+```
+
+# [PowerShell](#tab/powershell)
+
+```powershell
+[System.Environment]::SetEnvironmentVariable('AZURE_OPENAI_API_KEY', 'REPLACE_WITH_YOUR_KEY_VALUE_HERE', 'User')
+```
+
+```powershell
+[System.Environment]::SetEnvironmentVariable('AZURE_OPENAI_ENDPOINT', 'REPLACE_WITH_YOUR_ENDPOINT_HERE', 'User')
+```
+
+# [Bash](#tab/bash)
+
+```Bash
+echo export AZURE_OPENAI_API_KEY="REPLACE_WITH_YOUR_KEY_VALUE_HERE" >> /etc/environment
+echo export AZURE_OPENAI_ENDPOINT="REPLACE_WITH_YOUR_ENDPOINT_HERE" >> /etc/environment
+
+source /etc/environment
+```
+++
+After setting the environment variables, you may need to close and reopen Jupyter Notebooks (or whatever IDE you're using) for the environment variables to be accessible. While we strongly recommend using Jupyter Notebooks, if you can't for some reason, modify any code that returns a pandas DataFrame by using `print(dataframe_name)` rather than just calling `dataframe_name` directly, as is often done at the end of a code block.
+
+Run the following code in your preferred Python IDE:
+
+<!--If you wish to view the Jupyter notebook that corresponds to this tutorial you can download the tutorial from our [samples repo](https://github.com/Azure-Samples/Azure-OpenAI-Docs-Samples/blob/main/Samples/Tutorials/Embeddings/embedding_billsum.ipynb).-->
+
+## Import libraries and list models
+
+```python
+import openai
+import os
+import re
+import requests
+import sys
+from num2words import num2words
+import pandas as pd
+import numpy as np
+from openai.embeddings_utils import get_embedding, cosine_similarity
+import tiktoken
+
+API_KEY = os.getenv("AZURE_OPENAI_API_KEY")
+RESOURCE_ENDPOINT = os.getenv("AZURE_OPENAI_ENDPOINT")
+
+openai.api_type = "azure"
+openai.api_key = API_KEY
+openai.api_base = RESOURCE_ENDPOINT
+openai.api_version = "2022-12-01"
+
+url = openai.api_base + "/openai/deployments?api-version=2022-12-01"
+
+r = requests.get(url, headers={"api-key": API_KEY})
+
+print(r.text)
+```
+
+```output
+{
+ "data": [
+ {
+ "scale_settings": {
+ "scale_type": "standard"
+ },
+ "model": "text-embedding-ada-002",
+ "owner": "organization-owner",
+ "id": "text-embedding-ada-002",
+ "status": "succeeded",
+ "created_at": 1657572678,
+ "updated_at": 1657572678,
+ "object": "deployment"
+ },
+ {
+ "scale_settings": {
+ "scale_type": "standard"
+ },
+ "model": "code-cushman-001",
+ "owner": "organization-owner",
+ "id": "code-cushman-001",
+ "status": "succeeded",
+ "created_at": 1657572712,
+ "updated_at": 1657572712,
+ "object": "deployment"
+ },
+ {
+ "scale_settings": {
+ "scale_type": "standard"
+ },
+ "model": "text-search-curie-doc-001",
+ "owner": "organization-owner",
+ "id": "text-search-curie-doc-001",
+ "status": "succeeded",
+ "created_at": 1668620345,
+ "updated_at": 1668620345,
+ "object": "deployment"
+ },
+ {
+ "scale_settings": {
+ "scale_type": "standard"
+ },
+ "model": "text-search-curie-query-001",
+ "owner": "organization-owner",
+ "id": "text-search-curie-query-001",
+ "status": "succeeded",
+ "created_at": 1669048765,
+ "updated_at": 1669048765,
+ "object": "deployment"
+ }
+ ],
+ "object": "list"
+}
+```
+
+The output of this command will vary based on the number and type of models you've deployed. In this case, we need to confirm that we have an entry for **text-embedding-ada-002**. If you find that you're missing this model, you'll need to [deploy the model](../how-to/create-resource.md#deploy-a-model) to your resource before proceeding.
+
+Now we need to read our csv file and create a pandas DataFrame. After the initial DataFrame is created, we can view the contents of the table by running `df`.
+
+```python
+df=pd.read_csv(os.path.join(os.getcwd(),'bill_sum_data.csv')) # This assumes that you have placed the bill_sum_data.csv in the same directory you are running Jupyter Notebooks
+df
+```
+
+**Output:**
++
+The initial table has more columns than we need, so we'll create a new, smaller DataFrame called `df_bills` that contains only the columns for `text`, `summary`, and `title`.
+
+```python
+df_bills = df[['text', 'summary', 'title']]
+df_bills
+```
+
+**Output:**
++
+Next we'll perform some light data cleaning by removing redundant whitespace and cleaning up the punctuation to prepare the data for tokenization.
+
+```python
+pd.options.mode.chained_assignment = None #https://pandas.pydata.org/pandas-docs/stable/user_guide/indexing.html#evaluation-order-matters
+
+# s is the input text
+def normalize_text(s, sep_token = " \n "):
+    s = re.sub(r'\s+', ' ', s).strip()  # collapse all runs of whitespace into single spaces
+    s = re.sub(r". ,", "", s)           # remove any character followed by " ,"
+    s = s.replace("..", ".")            # clean up doubled and spaced periods
+    s = s.replace(". .", ".")
+    s = s.replace("\n", "")
+ s = s.strip()
+
+ return s
+
+df_bills['text']= df_bills["text"].apply(lambda x : normalize_text(x))
+```
+
+Now we need to remove any bills that are too long for the token limit (8192 tokens).
+
+```python
+tokenizer = tiktoken.get_encoding("cl100k_base")
+df_bills['n_tokens'] = df_bills["text"].apply(lambda x: len(tokenizer.encode(x)))
+df_bills = df_bills[df_bills.n_tokens<8192]
+len(df_bills)
+```
+
+```output
+20
+```
+
+>[!NOTE]
+>In this case all bills are under the embedding model input token limit, but you can use the technique above to remove entries that would otherwise cause embedding to fail. When faced with content that exceeds the embedding limit, you can also chunk the content into smaller pieces and then embed those one at a time.
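+
+As a sketch of that chunking approach, the helper below splits a long text into pieces that each fit under the token limit, reusing the `tokenizer` and `get_embedding` objects already defined in this tutorial. The `long_text` variable in the commented usage is hypothetical.
+
+```python
+# Minimal chunking sketch: split text into pieces that fit under the embedding
+# model's input token limit, then embed each piece separately.
+def chunk_text(text, max_tokens=8192):
+    tokens = tokenizer.encode(text)
+    return [
+        tokenizer.decode(tokens[i : i + max_tokens])
+        for i in range(0, len(tokens), max_tokens)
+    ]
+
+# Example usage (hypothetical long_text variable):
+# chunks = chunk_text(long_text)
+# chunk_embeddings = [get_embedding(c, engine="text-embedding-ada-002") for c in chunks]
+```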
+
+We'll once again examine **df_bills**.
+
+```python
+df_bills
+```
+
+**Output:**
++
+To understand the `n_tokens` column a little more, as well as how text is ultimately tokenized, it can be helpful to run the following code:
+
+```python
+sample_encode = tokenizer.encode(df_bills.text[0])
+decode = tokenizer.decode_tokens_bytes(sample_encode)
+decode
+```
+
+For our docs we're intentionally truncating the output, but running this command in your environment will return the full text from index zero tokenized into chunks. You can see that in some cases an entire word is represented with a single token whereas in others parts of words are split across multiple tokens.
+
+```output
+[b'SECTION',
+ b' ',
+ b'1',
+ b'.',
+ b' SHORT',
+ b' TITLE',
+ b'.',
+ b' This',
+ b' Act',
+ b' may',
+ b' be',
+ b' cited',
+ b' as',
+ b' the',
+ b' ``',
+ b'National',
+ b' Science',
+ b' Education',
+ b' Tax',
+ b' In',
+ b'cent',
+ b'ive',
+ b' for',
+ b' Businesses',
+ b' Act',
+ b' of',
+ b' ',
+ b'200',
+ b'7',
+ b"''.",
+ b' SEC',
+ b'.',
+ b' ',
+ b'2',
+ b'.',
+ b' C',
+ b'RED',
+ b'ITS',
+ b' FOR',
+ b' CERT',
+ b'AIN',
+ b' CONTRIBUT',
+ b'IONS',
+ b' BEN',
+ b'EF',
+ b'IT',
+ b'ING',
+ b' SC',
+```
+
+If you then check the length of the `decode` variable, you'll find it matches the first number in the n_tokens column.
+
+```python
+len(decode)
+```
+
+```output
+1466
+```
+
+Now that we understand more about how tokenization works, we can move on to embedding. It's important to note that we haven't actually tokenized the documents yet. The `n_tokens` column is simply a way of making sure none of the data we pass to the model for tokenization and embedding exceeds the input token limit of 8,192. When we pass the documents to the embeddings model, it will break the documents into tokens similar (though not necessarily identical) to the examples above and then convert the tokens to a series of floating point numbers that will be accessible via vector search. These embeddings can be stored locally or in an Azure database. As a result, each bill will have its own corresponding embedding vector in the new `ada_v2` column on the right side of the DataFrame.
+
+```python
+df_bills['ada_v2'] = df_bills["text"].apply(lambda x : get_embedding(x, engine = 'text-embedding-ada-002')) # engine should be set to the deployment name you chose when you deployed the text-embedding-ada-002 (Version 2) model
+```
+
+```python
+df_bills
+```
+
+**Output:**
++
+As we run the search code block below, we'll embed the search query *"Can I get information on cable company tax revenue?"* with the same **text-embedding-ada-002 (Version 2)** model. Next we'll find the closest bill embedding to the newly embedded text from our query ranked by [cosine similarity](../concepts/understand-embeddings.md).
+
+```python
+# search through the bills for the documents most relevant to the user query
+def search_docs(df, user_query, top_n=3, to_print=True):
+ embedding = get_embedding(
+ user_query,
+ engine="text-embedding-ada-002" # engine should be set to the deployment name you chose when you deployed the text-embedding-ada-002 (Version 2) model
+ )
+ df["similarities"] = df.ada_v2.apply(lambda x: cosine_similarity(x, embedding))
+
+ res = (
+ df.sort_values("similarities", ascending=False)
+ .head(top_n)
+ )
+ if to_print:
+ display(res)
+ return res
+
+res = search_docs(df_bills, "Can I get information on cable company tax revenue?", top_n=4)
+```
+
+**Output**:
++
+Finally, we'll show the top result from document search based on the user query against the entire knowledge base. This returns the top result: the "Taxpayer's Right to View Act of 1993". This document has a cosine similarity score of 0.76 between the query and the document:
+
+```python
+res["summary"][9]
+```
+
+```output
+"Taxpayer's Right to View Act of 1993 - Amends the Communications Act of 1934 to prohibit a cable operator from assessing separate charges for any video programming of a sporting, theatrical, or other entertainment event if that event is performed at a facility constructed, renovated, or maintained with tax revenues or by an organization that receives public financial support. Authorizes the Federal Communications Commission and local franchising authorities to make determinations concerning the applicability of such prohibition. Sets forth conditions under which a facility is considered to have been constructed, maintained, or renovated with tax revenues. Considers events performed by nonprofit or public organizations that receive tax subsidies to be subject to this Act if the event is sponsored by, or includes the participation of a team that is part of, a tax exempt organization."
+```
+
+Using this approach, you can use embeddings as a search mechanism across documents in a knowledge base. The user can then take the top search result and use it for their downstream task, which prompted their initial query.
+
+## Clean up resources
+
+If you created an Azure OpenAI resource solely for completing this tutorial and want to clean it up, first delete your deployed models, and then delete the resource or its associated resource group if it's dedicated to your test resource. Deleting the resource group also deletes any other resources associated with it.
+
+- [Portal](../../multi-service-resource.md?pivots=azportal#clean-up-resources)
+- [Azure CLI](../../multi-service-resource.md?pivots=azcli#clean-up-resources)
+
+## Next steps
+
+Learn more about Azure OpenAI's models:
+> [!div class="nextstepaction"]
+> [Azure OpenAI Service models](../concepts/models.md)
ai-services Use Your Data Quickstart https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/openai/use-your-data-quickstart.md
+
+ Title: 'Use your own data with Azure OpenAI service'
+
+description: Use this article to import and use your data in Azure OpenAI.
+++++++ Last updated : 07/12/2023
+recommendations: false
+zone_pivot_groups: openai-use-your-data
++
+# Quickstart: Chat with Azure OpenAI models using your own data
+
+In this quickstart you can use your own data with Azure OpenAI models. Using Azure OpenAI's models on your data can provide you with a powerful conversational AI platform that enables faster and more accurate communication.
+
+## Prerequisites
+
+- An Azure subscription - <a href="https://azure.microsoft.com/free/cognitive-services" target="_blank">Create one for free</a>.
+- Access granted to Azure OpenAI in the desired Azure subscription.
+
+ Azure OpenAI requires registration and is currently only available to approved enterprise customers and partners. [See Limited access to Azure OpenAI Service](/legal/cognitive-services/openai/limited-access?context=/azure/ai-services/openai/context/context) for more information. You can apply for access to Azure OpenAI by completing the form at <a href="https://aka.ms/oai/access" target="_blank">https://aka.ms/oai/access</a>. Open an issue on this repo to contact us if you have an issue.
+
+- An Azure OpenAI resource with a chat model deployed (for example, GPT-3 or GPT-4). For more information about model deployment, see the [resource deployment guide](./how-to/create-resource.md).
+- Be sure that you are assigned at least the [Cognitive Services OpenAI Contributor](/azure/role-based-access-control/built-in-roles#cognitive-services-openai-contributor) role for the Azure OpenAI resource.
++
+> [!div class="nextstepaction"]
+> [I ran into an issue with the prerequisites.](https://microsoft.qualtrics.com/jfe/form/SV_0Cl5zkG3CnDjq6O?PLanguage=OVERVIEW&Pillar=AOAI&Product=ownData&Page=quickstart&Section=Prerequisites)
++++++++
+## Clean up resources
+
+If you want to clean up and remove an OpenAI or Azure Cognitive Search resource, you can delete the resource or resource group. Deleting the resource group also deletes any other resources associated with it.
+
+- [Azure AI services resources](../multi-service-resource.md?pivots=azportal#clean-up-resources)
+- [Azure Cognitive Search resources](/azure/search/search-get-started-portal#clean-up-resources)
+- [Azure app service resources](/azure/app-service/quickstart-dotnetcore?pivots=development-environment-vs#clean-up-resources)
+
+## Next steps
+- Learn more about [using your data in Azure OpenAI Service](./concepts/use-your-data.md)
+- [Chat app sample code on GitHub](https://github.com/microsoft/sample-app-aoai-chatGPT/tree/main).
ai-services Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/openai/whats-new.md
+
+ Title: What's new in Azure OpenAI Service?
+
+description: Learn about the latest news and features updates for Azure OpenAI
++++++ Last updated : 06/12/2023
+recommendations: false
+keywords:
++
+# What's new in Azure OpenAI Service
+
+## June 2023
+
+### Use Azure OpenAI on your own data (preview)
+
+- [Azure OpenAI on your data](./concepts/use-your-data.md) is now available in preview, enabling you to chat with OpenAI models such as GPT-35-Turbo and GPT-4 and receive responses based on your data.
+
+### New versions of gpt-35-turbo and gpt-4 models
+
+- gpt-35-turbo (version 0613)
+- gpt-35-turbo-16k (version 0613)
+- gpt-4 (version 0613)
+- gpt-4-32k (version 0613)
+
+### UK South
+
+- Azure OpenAI is now available in the UK South region. Check the [models page](concepts/models.md) for the latest information on model availability in each region.
+
+### Content filtering & annotations (Preview)
+
+- How to [configure content filters](how-to/content-filters.md) with Azure OpenAI Service.
+- [Enable annotations](concepts/content-filter.md) to view content filtering category and severity information as part of your GPT based Completion and Chat Completion calls.
+
+### Quota
+
+- Quota provides the flexibility to actively [manage the allocation of rate limits across the deployments](how-to/quota.md) within your subscription.
+
+## May 2023
+
+### Java & JavaScript SDK support
+
+- NEW Azure OpenAI preview SDKs offering support for [JavaScript](quickstart.md?tabs=command-line&pivots=programming-language-javascript) and [Java](quickstart.md?tabs=command-line&pivots=programming-language-java).
+
+### Azure OpenAI Chat Completion General Availability (GA)
+
+- General availability support for:
+ - Chat Completion API version `2023-05-15`.
+ - GPT-35-Turbo models.
+  - GPT-4 model series. Due to high demand, access to this model series is currently available only by request. To request access, existing Azure OpenAI customers can [apply by filling out this form](https://aka.ms/oai/get-gpt4).
+
+If you are currently using the `2023-03-15-preview` API, we recommend migrating to the GA `2023-05-15` API. If you are currently using API version `2022-12-01`, this API remains GA, but it does not include the latest Chat Completion capabilities.
+
+> [!IMPORTANT]
+> Using the current versions of the GPT-35-Turbo models with the completion endpoint remains in preview.
+
+### France Central
+
+- Azure OpenAI is now available in the France Central region. Check the [models page](concepts/models.md) for the latest information on model availability in each region.
+
+## April 2023
+
+- **DALL-E 2 public preview**. Azure OpenAI Service now supports image generation APIs powered by OpenAI's DALL-E 2 model. Get AI-generated images based on the descriptive text you provide. To learn more, check out the [quickstart](./dall-e-quickstart.md). To request access, existing Azure OpenAI customers can [apply by filling out this form](https://aka.ms/oai/access).
+
+- **Inactive deployments of customized models will now be deleted after 15 days; models will remain available for redeployment.** If a customized (fine-tuned) model is deployed for more than fifteen (15) days during which no completions or chat completions calls are made to it, the deployment will automatically be deleted (and no further hosting charges will be incurred for that deployment). The underlying customized model will remain available and can be redeployed at any time. To learn more, check out the [how-to article](./how-to/fine-tuning.md?pivots=programming-language-studio#deploy-a-customized-model).
++
+## March 2023
+
+- **GPT-4 series models are now available in preview on Azure OpenAI**. To request access, existing Azure OpenAI customers can [apply by filling out this form](https://aka.ms/oai/get-gpt4). These models are currently available in the East US and South Central US regions.
+
+- **New Chat Completion API for GPT-35-Turbo and GPT-4 models released in preview on 3/21**. To learn more, check out the [updated quickstarts](./quickstart.md) and [how-to article](./how-to/chatgpt.md).
+
+- **GPT-35-Turbo preview**. To learn more, check out the [how-to article](./how-to/chatgpt.md).
+
+- Increased training limits for fine-tuning: The max training job size (tokens in training file) x (# of epochs) is 2 billion tokens for all models. We've also increased the maximum training job duration from 120 to 720 hours.
+- Adding additional use cases to your existing access. Previously, the process for adding new use cases required customers to reapply to the service. Now, we're releasing a new process that allows you to quickly add new use cases to your use of the service. This process follows the established Limited Access process within Azure AI services. [Existing customers can attest to any and all new use cases here](https://customervoice.microsoft.com/Pages/ResponsePage.aspx?id=v4j5cvGGr0GRqy180BHbR7en2Ais5pxKtso_Pz4b1_xUM003VEJPRjRSOTZBRVZBV1E5N1lWMk1XUyQlQCN0PWcu). Note that this is required any time you would like to use the service for a new use case you didn't originally apply for.
+
+## February 2023
+
+### New Features
+
+- .NET SDK(inference) [preview release](https://www.nuget.org/packages/Azure.AI.OpenAI/1.0.0-beta.3) | [Samples](https://github.com/Azure/azure-sdk-for-net/tree/main/sdk/openai/Azure.AI.OpenAI/tests/Samples)
+- [Terraform SDK update](https://registry.terraform.io/providers/hashicorp/azurerm/3.37.0/docs/resources/cognitive_deployment) to support Azure OpenAI management operations.
+- Inserting text at the end of a completion is now supported with the `suffix` parameter.
+
+### Updates
+
+- Content filtering is on by default.
+
+New articles on:
+
+- [Monitoring an Azure OpenAI Service](./how-to/monitoring.md)
+- [Plan and manage costs for Azure OpenAI](./how-to/manage-costs.md)
+
+New training course:
+
+- [Intro to Azure OpenAI](/training/modules/explore-azure-openai/)
++
+## January 2023
+
+### New Features
+
+* **Service GA**. Azure OpenAI Service is now generally available.
+
+* **New models**: Addition of the latest text model, text-davinci-003 (East US, West Europe), text-ada-embeddings-002 (East US, South Central US, West Europe)
++
+## December 2022
+
+### New features
+
+* **The latest models from OpenAI.** Azure OpenAI provides access to all the latest models, including the GPT-3.5 series.
+
+* **New API version (2022-12-01).** This update includes several requested enhancements, including token usage information in the API response, improved error messages for files, alignment with OpenAI on fine-tuning creation data structure, and support for the suffix parameter to allow custom naming of fine-tuned jobs.
+
+* **Higher request per second limits.** 50 for non-Davinci models. 20 for Davinci models.
+
+* **Faster fine-tune deployments.** Deploy Ada and Curie fine-tuned models in under 10 minutes.
+
+* **Higher training limits:** 40M training tokens for Ada, Babbage, and Curie. 10M for Davinci.
+
+* **Process for requesting modifications to the abuse & misuse data logging & human review.** Today, the service logs request/response data for the purposes of abuse and misuse detection to ensure that these powerful models aren't abused. However, many customers have strict data privacy and security requirements that require greater control over their data. To support these use cases, we're releasing a new process for customers to modify the content filtering policies or turn off the abuse logging for low-risk use cases. This process follows the established Limited Access process within Azure AI services and [existing OpenAI customers can apply here](https://aka.ms/oai/modifiedaccess).
+
+* **Customer managed key (CMK) encryption.** CMK provides customers greater control over managing their data in Azure OpenAI by providing their own encryption keys used for storing training data and customized models. Customer-managed keys (CMK), also known as bring your own key (BYOK), offer greater flexibility to create, rotate, disable, and revoke access controls. You can also audit the encryption keys used to protect your data. [Learn more from our encryption at rest documentation](encrypt-data-at-rest.md).
+
+* **Lockbox support**
+
+* **SOC-2 compliance**
+
+* **Logging and diagnostics** through Azure Resource Health, Cost Analysis, and Metrics & Diagnostic settings.
+
+* **Studio improvements.** Numerous usability improvements to the Studio workflow including Azure AD role support to control who in the team has access to create fine-tuned models and deploy.
+
+### Changes (breaking)
+
+The **Fine-tuning** create API request has been updated to match OpenAI's schema.
+
+**Preview API versions:**
+
+```json
+{
+  "training_file": "file-XGinujblHPwGLSztz8cPS8XY",
+  "hyperparams": {
+    "batch_size": 4,
+    "learning_rate_multiplier": 0.1,
+    "n_epochs": 4,
+    "prompt_loss_weight": 0.1
+  }
+}
+```
+
+**API version 2022-12-01:**
+
+```json
+{
+  "training_file": "file-XGinujblHPwGLSztz8cPS8XY",
+  "batch_size": 4,
+  "learning_rate_multiplier": 0.1,
+  "n_epochs": 4,
+  "prompt_loss_weight": 0.1
+}
+```
+
+**Content filtering is temporarily off** by default. Azure content moderation works differently than OpenAI. Azure OpenAI runs content filters during the generation call to detect harmful or abusive content and filters it from the response. [Learn more](./concepts/content-filter.md)
+
+The content filtering models will be re-enabled in Q1 2023 and be on by default.
+
+**Customer actions**
+
+* [Contact Azure Support](https://portal.azure.com/#view/Microsoft_Azure_Support/HelpAndSupportBlade/~/overview) if you would like these turned on for your subscription.
+* [Apply for filtering modifications](https://aka.ms/oai/modifiedaccess) if you would like to have them remain off. (This option will be for low-risk use cases only.)
+
+## Next steps
+
+Learn more about the [underlying models that power Azure OpenAI](./concepts/models.md).
ai-services Concept Active Inactive Events https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/personalizer/concept-active-inactive-events.md
+
+ Title: Active and inactive events - Personalizer
+description: This article discusses the use of active and inactive events within the Personalizer service.
++
+ms.
+++ Last updated : 02/20/2020++
+# Defer event activation
+
+Deferred activation of events allows you to create personalized websites or mailing campaigns, considering that the user may never actually see the page or open the email.
+In these scenarios, the application might need to call Rank before it even knows whether the result will be used or displayed to the user at all. If the content is never shown to the user, Personalizer shouldn't assume the default Reward (typically zero) and learn from that event.
+Deferred Activation allows you to use the results of a Rank call at one point in time, and decide if the Event should be learned from later on, or elsewhere in your code.
+
+## Typical scenarios for deferred activation
+
+Deferring activation of events is useful in the following example scenarios:
+
+* You are pre-rendering a personalized web page for a user, but the user may never get to see it because some business logic may override the action choice of Personalizer.
+* You are personalizing content "below the fold" in a web page, and it is highly possible the content will never be seen by the user.
+* You are personalizing marketing emails, and you need to avoid training from emails that were never opened by users.
+* You personalized a dynamic media channel, and your users may stop playing the channel before it gets to the songs or videos selected by Personalizer.
+
+In general terms, these scenarios happen when:
+
+* You're pre-rendering UI that the user might or might not get to see due to UI or time constraints.
+* Your application is doing predictive personalization in which you make Rank calls before you know if you will use the output.
+
+## How to defer activation, and later activate, events
+
+To defer activation for an event, call [Rank](https://westus2.dev.cognitive.microsoft.com/docs/services/personalizer-api/operations/Rank) with `deferActivation = True` in the request body.
+
+As soon as you know your users were shown the personalized content or media, and expecting a Reward is reasonable, you must activate that event. To do so, call the [Activate API](https://westus2.dev.cognitive.microsoft.com/docs/services/personalizer-api/operations/Activate) with the eventId.
++
+The [Activate API](https://westus2.dev.cognitive.microsoft.com/docs/services/personalizer-api/operations/Activate) call for that eventId must be received before the Reward Wait Time window expires.
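+
+A minimal end-to-end sketch of this flow in Python is shown below. It assumes the v1.0 Personalizer REST endpoints; the endpoint, key, feature names, and action IDs are placeholders, so consult the Rank and Activate API references linked above for the full request schema.
+
+```python
+import os
+import uuid
+import requests
+
+# Minimal sketch, assuming the v1.0 Personalizer REST endpoints and that the
+# resource endpoint and key are available as environment variables.
+endpoint = os.getenv("PERSONALIZER_ENDPOINT")  # e.g. https://<resource>.cognitiveservices.azure.com
+api_key = os.getenv("PERSONALIZER_API_KEY")
+headers = {"Ocp-Apim-Subscription-Key": api_key, "Content-Type": "application/json"}
+
+event_id = str(uuid.uuid4())
+
+# 1. Call Rank with deferActivation=true while pre-rendering content.
+rank_request = {
+    "eventId": event_id,
+    "deferActivation": True,
+    "contextFeatures": [{"timeOfDay": "morning", "device": "mobile"}],
+    "actions": [
+        {"id": "article-a", "features": [{"topic": "sports"}]},
+        {"id": "article-b", "features": [{"topic": "politics"}]},
+    ],
+}
+rank_response = requests.post(
+    f"{endpoint}/personalizer/v1.0/rank", headers=headers, json=rank_request
+).json()
+chosen_action = rank_response["rewardActionId"]
+
+# 2. Later, once the user actually sees the personalized content, activate the event.
+requests.post(f"{endpoint}/personalizer/v1.0/events/{event_id}/activate", headers=headers)
+
+# 3. Send the reward before the Reward Wait Time window expires.
+requests.post(
+    f"{endpoint}/personalizer/v1.0/events/{event_id}/reward",
+    headers=headers,
+    json={"value": 1.0},
+)
+```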
+
+### Behavior with deferred activation
+
+Personalizer will learn from events and rewards as follows:
+* If you call Rank with `deferActivation = True`, and *don't* call the `Activate` API for that eventId, and call Reward, Personalizer will not learn from the event.
+* If you call Rank with `deferActivation = True`, and *do* call the `Activate` API for that eventId, and call Reward, Personalizer will learn from the event with the specified Reward score.
+* If you call Rank with `deferActivation = True`, and *do* call the `Activate` API for that eventId, but omit calling Reward, Personalizer will learn from the event with the Default Reward score set in configuration.
+
+## Next steps
+* Learn how to configure [default rewards](how-to-settings.md#configure-rewards-for-the-feedback-loop).
+* Learn [how to determine reward score and what data to consider](concept-rewards.md).
ai-services Concept Active Learning https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/personalizer/concept-active-learning.md
+
+ Title: Learning policy - Personalizer
+description: Learning settings determine the *hyperparameters* of the model training. Two models of the same data that are trained on different learning settings will end up different.
++
+ms.
+++ Last updated : 02/20/2020++
+# Learning policy and settings
+
+Learning settings determine the *hyperparameters* of the model training. Two models of the same data that are trained on different learning settings will end up different.
+
+[Learning policy and settings](how-to-settings.md#configure-rewards-for-the-feedback-loop) are set on your Personalizer resource in the Azure portal.
+
+## Import and export learning policies
+
+You can import and export learning-policy files from the Azure portal. Use this method to save existing policies, test them, replace them, and archive them in your source code control as artifacts for future reference and audit.
+
+Learn [how to](how-to-manage-model.md#import-a-new-learning-policy) import and export a learning policy in the Azure portal for your Personalizer resource.
+
+## Understand learning policy settings
+
+The settings in the learning policy aren't intended to be changed. Change settings only if you understand how they affect Personalizer. Without this knowledge, you could cause problems, including invalidating Personalizer models.
+
+Personalizer uses [vowpalwabbit](https://github.com/VowpalWabbit) to train and score the events. Refer to the [vowpalwabbit documentation](https://vowpalwabbit.org/docs/vowpal_wabbit/python/9.6.0/command_line_args.html) on how to edit the learning settings using vowpalwabbit. Once you have the correct command line arguments, save the command to a file with the following format (replace the arguments property value with the desired command) and upload the file to import learning settings in the **Model and Learning Settings** pane in the Azure portal for your Personalizer resource.
+
+The following `.json` is an example of a learning policy.
+
+```json
+{
+ "name": "new learning settings",
+ "arguments": " --cb_explore_adf --epsilon 0.2 --power_t 0 -l 0.001 --cb_type mtr -q ::"
+}
+```
+
+## Compare learning policies
+
+You can compare how different learning policies perform against past data in Personalizer logs by doing [offline evaluations](concepts-offline-evaluation.md).
+
+[Upload your own learning policies](how-to-manage-model.md) to compare them with the current learning policy.
+
+## Optimize learning policies
+
+Personalizer can create an optimized learning policy in an [offline evaluation](how-to-offline-evaluation.md). An optimized learning policy that has better rewards in an offline evaluation will yield better results when it's used online in Personalizer.
+
+After you optimize a learning policy, you can apply it directly to Personalizer so it immediately replaces the current policy. Or you can save the optimized policy for further evaluation and later decide whether to discard, save, or apply it.
+
+## Next steps
+
+* Learn [active and inactive events](concept-active-inactive-events.md).
ai-services Concept Apprentice Mode https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/personalizer/concept-apprentice-mode.md
+
+ Title: Apprentice mode - Personalizer
+description: Learn how to use apprentice mode to gain confidence in a model without changing any code.
++
+ms.
+++ Last updated : 07/26/2022++
+# Use Apprentice mode to train Personalizer without affecting your existing application
+
+When you deploy a new Personalizer resource, it's initialized with an untrained (blank) model. That is, it hasn't learned from any data and therefore won't perform well in practice. This is known as the "cold start" problem and is resolved over time by training the model with real data from your production environment. **Apprentice mode** is a learning behavior that helps mitigate the "cold start" problem, and allows you to gain confidence in the model _before_ it makes decisions in production, all without requiring any code change.
+
+
+## What is Apprentice mode?
+
+Similar to how an apprentice can learn a craft by observing an expert, Apprentice mode enables Personalizer to learn by observing the decisions made by your application's current logic. The Personalizer model trains by mimicking the same decision output as the application. With each Rank API call, Personalizer can learn without impacting the existing logic and outcomes. Metrics, available from the Azure portal and the API, help you understand the performance as the model learns. Specifically, how well Personalizer is matching your existing logic (also known as the baseline policy).
+
+Once Personalizer is able to reasonably match your existing logic 60-80% of the time, you can change the behavior from Apprentice mode to *Online mode*. At that time, Personalizer returns the best actions in the Rank API as determined by the underlying model and can learn how to make better decisions than your baseline policy.
+
+## Why use Apprentice mode?
+
+Apprentice mode provides a way for your model to mimic your existing decision logic before it makes online decisions used by your application. This helps to mitigate the aforementioned cold start problem and provides you with more trust in the Personalizer service and reassurance that the data sent to Personalizer is valuable for training the model. This is done without risking or affecting your online traffic and customer experiences.
+
+The two main reasons to use Apprentice mode are:
+
+* Mitigating **Cold Starts**: Apprentice mode helps mitigate the cost of a training a "new" model in production by learning without the need to make uninformed decisions. The model learns to mimic your existing application logic.
+* **Validating Action and Context Features**: Context and Action features may be inadequate, inaccurate, or suboptimally engineered. If there are too few, too many, incorrect, noisy, or malformed features, Personalizer will have difficulty training a well-performing model. Conducting a [feature evaluation](how-to-feature-evaluation.md) while in Apprentice mode enables you to discover how effective the features are at training Personalizer and can identify areas for improving feature quality.
+
+## When should you use Apprentice mode?
+
+Use Apprentice mode to train Personalizer to improve its effectiveness through the following scenarios while leaving the experience of your users unaffected by Personalizer:
+
+* You're implementing Personalizer in a new scenario.
+* You've made major changes to the Context or Action features.
+
+However, Apprentice mode isn't an effective way of measuring the impact Personalizer can have on improving the average reward or your business KPIs. It can only evaluate how well the service is learning your existing logic given the current data you're providing. To measure how effective Personalizer is at choosing the best possible action for each Rank call, Personalizer must be in Online mode, or you can use [Offline evaluations](concepts-offline-evaluation.md) over a period of time when Personalizer was in Online mode.
+
+## Who should use Apprentice mode?
+
+Apprentice mode is useful for developers, data scientists, and business decision makers:
+
+* **Developers** can use Apprentice mode to ensure the Rank and Reward APIs are implemented correctly in the application, and that features being sent to Personalizer are free from errors and common mistakes. Learn more about creating good [Context and Action features](concepts-features.md).
+
+* **Data scientists** can use Apprentice mode to validate that the features are effective at training the Personalizer models. That is, the features contain useful information that allow Personalizer to learn the existing decision logic.
+
+* **Business Decision Makers** can use Apprentice mode to assess the potential of Personalizer to improve results (that is, rewards) compared to existing business logic. Specifically, whether or not Personalizer can learn from the provided data before going into Online mode. This allows them to make an informed decision about impacting user experience, where real revenue and user satisfaction are at stake.
+
+## Comparing Behaviors - Apprentice mode and Online mode
+
+Learning when in Apprentice mode differs from Online mode in the following ways.
+
+|Area|Apprentice mode|Online mode|
+|--|--|--|
+|Impact on User Experience| The users' experience and business metrics won't change. Personalizer is trained by observing the **baseline actions** of your current application logic, without affecting them. | Your users' experience may change as the decision is made by Personalizer and not your baseline action.|
+|Learning speed|Personalizer will learn more slowly when in Apprentice mode compared to learning in Online mode. Apprentice mode can only learn by observing the rewards obtained by your default action without [exploration](concepts-exploration.md), which limits how much Personalizer can learn.|Learns faster because it can both _exploit_ the best action from the current model and _explore_ other actions for potentially better results.|
+|Learning effectiveness "Ceiling"|Personalizer can only approximate, and never exceed, the performance of your application's current logic (the total average reward achieved by the baseline action). It's unlikely that Personalizer will achieve 100% match with your current application's logic, and is recommended that once 60%-80% matching is achieved, Personalizer should be switched to Online mode.|Personalizer should exceed the performance of your baseline application logic. If Personalizer's performance stalls over time, you can conduct on [offline evaluation](concepts-offline-evaluation.md) and [feature evaluation](how-to-feature-evaluation.md) to pursue additional improvements. |
+|Rank API return value for rewardActionId| The _rewardActionId_ will always be the Id of the default action. That is, the action you send as the first action in the Rank API request JSON. In other words, the Rank API does nothing visible for your application during Apprentice mode. |The _rewardActionId_ will be one of the Ids provided in the Rank API call as determined by the Personalizer model.|
+|Evaluations|Personalizer keeps a comparison of the reward totals received by your current application logic, and the reward totals Personalizer would be getting if it was in Online mode at that point. This comparison is available to view in the _Monitor_ blade of your Personalizer resource in the Azure portal.|Evaluate Personalizer's effectiveness by running [Offline evaluations](concepts-offline-evaluation.md), which let you compare the total rewards Personalizer has achieved against the potential rewards of the application's baseline.|
+
+Note that Personalizer is unlikely to achieve a 100% performance match with the application's baseline logic, and it will never exceed it. Performance matching of 60%-80% should be sufficient to switch Personalizer to Online mode, where Personalizer can learn better decisions and exceed the performance of your application's baseline logic.
+
+## Limitations of Apprentice Mode
+Apprentice Mode trains Personalizer model by attempting to imitate your existing application's baseline logic, using the Context and Action features present in the Rank calls. The following factors will affect Apprentice mode's ability to learn.
+
+### Scenarios where Apprentice Mode May Not be Appropriate:
+
+#### Editorially chosen Content:
+In some scenarios such as news or entertainment, the baseline item could be manually assigned by an editorial team. This means humans are using their knowledge about the broader world, and understanding of what may be appealing content, to choose specific articles or media out of a pool and flag them as "preferred" or "hero" articles. These editors aren't an algorithm, and the factors they consider can be subjective and possibly unrelated to the Context or Action features. In this case, Apprentice mode may have difficulty predicting the baseline action. In these situations you can:
+
+* Test Personalizer in Online Mode: Consider putting Personalizer in Online Mode for time or in an A/B test if you have the infrastructure, and then run an Offline Evaluation to assess the difference between your application's baseline logic and Personalizer.
+* Add editorial considerations and recommendations as features: Ask your editors what factors influence their choices, and see if you can add those as features in your context and action. For example, editors in a media company may highlight content when a certain celebrity is often in the news: This knowledge could be added as a Context feature.
+
+### Factors that will improve and accelerate Apprentice Mode
+If apprentice mode is learning and attaining a matching performance above zero, but performance is improving slowly (not getting to 60% to 80% matched rewards within two weeks), it's possible that there's too little data being sent to Personalizer. The following steps may help facilitate faster learning:
+
+1. Adding differentiating features: You can do a visual inspection of the actions in a Rank call and their features. Does the baseline action have features that are differentiated from other actions? If they look mostly the same, add more features that will increase the diversity of the feature values.
+2. Reducing Actions per Event: Personalizer will use the "% of Rank calls to use for exploration" setting to discover preferences and trends. When a Rank call has more actions, the chance of any particular Action being chosen for exploration becomes lower. Reducing the number of actions sent in each Rank call to a smaller number (under 10) can be a temporary adjustment that may indicate whether or not Apprentice Mode has sufficient data to learn.
+
+## Using Apprentice mode to train with historical data
+
+If you have a significant amount of historical data that you would like to use to train Personalizer, you can use Apprentice mode to replay the data through Personalizer.
+
+Set up the Personalizer in Apprentice Mode and create a script that calls Rank with the actions and context features from the historical data. Call the Reward API based on your calculations of the records in this data. You may need approximately 50,000 historical events to see Personalizer attain a 60-80% match with your application's baseline logic. You may be able to achieve satisfactory results with fewer or more events.
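+
+The replay script can follow a pattern like the following sketch, which assumes the v1.0 Personalizer REST endpoints and that each historical record has already been mapped to context features, actions, and a computed reward. All names and values shown are placeholders.
+
+```python
+import os
+import uuid
+import requests
+
+# Minimal replay sketch, assuming the v1.0 Personalizer REST endpoints and that
+# historical_events is a list of dicts with pre-computed context, actions, and reward.
+endpoint = os.getenv("PERSONALIZER_ENDPOINT")
+api_key = os.getenv("PERSONALIZER_API_KEY")
+headers = {"Ocp-Apim-Subscription-Key": api_key, "Content-Type": "application/json"}
+
+historical_events = [
+    {
+        "context": [{"timeOfDay": "evening", "device": "desktop"}],
+        "actions": [
+            {"id": "item-1", "features": [{"category": "news"}]},
+            {"id": "item-2", "features": [{"category": "sports"}]},
+        ],
+        "reward": 1.0,  # computed from the historical record
+    },
+    # ... typically tens of thousands of records
+]
+
+for record in historical_events:
+    event_id = str(uuid.uuid4())
+    requests.post(
+        f"{endpoint}/personalizer/v1.0/rank",
+        headers=headers,
+        json={
+            "eventId": event_id,
+            "contextFeatures": record["context"],
+            "actions": record["actions"],
+        },
+    )
+    requests.post(
+        f"{endpoint}/personalizer/v1.0/events/{event_id}/reward",
+        headers=headers,
+        json={"value": record["reward"]},
+    )
+```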
+
+When training from historical data, it's recommended that the data sent in the training set (the context and action features, their layout in the JSON used for Rank requests, and the calculation of reward) matches the data (features and calculation of reward) available from your existing application.
+
+Offline and historical data tends to be more incomplete and noisier and can differ in format from your in-production (or online) scenario. While training from historical data is possible, the results from doing so may be inconclusive and aren't necessarily a good predictor of how well Personalizer will learn in Online mode, especially if the features vary between the historical data and the current scenario.
+
+## Using Apprentice Mode versus A/B Tests
+
+It's only useful to do A/B tests of Personalizer treatments once it has been validated and is learning in Online mode, since in Apprentice mode, only the baseline action is used, and the existing logic is learned. This essentially means Personalizer is returning the action of the "control" arm of your A/B test, hence an A/B test in Apprentice mode has no value.
+
+Once you have a use case using Personalizer and learning online, A/B experiments can allow you to create controlled cohorts and conduct comparisons of results that may be more complex than the signals used for rewards. An example question an A/B test could answer is: "In a retail website, Personalizer optimizes a layout and gets more users to _check out_ earlier, but does this reduce total revenue per transaction?"
+
+## Next steps
+
+* Learn about [active and inactive events](concept-active-inactive-events.md)
ai-services Concept Auto Optimization https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/personalizer/concept-auto-optimization.md
+
+ Title: Auto-optimize - Personalizer
+description: This article provides a conceptual overview of the auto-optimize feature for Azure AI Personalizer service.
++
+ms.
+++ Last updated : 03/08/2021++
+# Personalizer Auto-Optimize (Preview)
++
+## Introduction
+Personalizer automatic optimization saves you the manual effort of keeping a Personalizer loop at its best machine learning performance, by automatically searching for improved Learning Settings used to train your models and applying them. Personalizer has strict criteria for applying new Learning Settings to ensure improvements are unlikely to introduce loss in rewards.
+
+Personalizer Auto-Optimize is in Public Preview and features, approaches and processes will change based on user feedback.
+
+## When to use Auto-Optimize
+In most cases, the best option is to have Auto-Optimize turned on. Auto-Optimize is *on* by default for new Personalizer loops.
+
+Auto-optimize may help in the following situations:
+* You build applications that are used by many tenants, and each gets their own Personalizer loop(s); for example, if you host multiple e-commerce sites. Auto-Optimize allows you to avoid the manual effort you'd need to tune learning settings for large numbers of Personalizer loops.
+* You have deployed Personalizer and validated that it is working well, getting good rewards, and you have made sure there are no bugs or problems in your features.
+
+> [!NOTE]
+> Auto-Optimize will periodically overwrite Personalizer Learning Settings. If your use case or industry requires audit and archive of models and settings, or if you need backups of previous settings, you can use the Personalizer API to retrieve Learning Settings, or download them via the Azure portal.
+
+## How to enable and disable Auto-Optimize
+To enable Auto-Optimize, use the toggle switch in the "Model and Learning Settings" blade in the Azure portal.
+
+Alternatively, you can activate Auto-Optimize using the Personalizer `/configurations/service` API.
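+
+As a rough sketch, toggling the setting through the REST API could look like the following Python snippet. The configuration property name used here (`isAutoOptimizationEnabled`) and the API version are assumptions; confirm both against the Personalizer API reference before using this.
+
+```python
+# Sketch only: toggling Auto-Optimize through the service configuration API.
+# The property name "isAutoOptimizationEnabled" is an assumption; verify it in
+# the Personalizer service configuration API reference.
+import requests
+
+ENDPOINT = "https://<your-resource>.cognitiveservices.azure.com"  # placeholder
+HEADERS = {"Ocp-Apim-Subscription-Key": "<your-key>", "Content-Type": "application/json"}
+CONFIG_URL = f"{ENDPOINT}/personalizer/v1.0/configurations/service"
+
+config = requests.get(CONFIG_URL, headers=HEADERS).json()  # read current settings
+config["isAutoOptimizationEnabled"] = True                 # assumed property name
+requests.put(CONFIG_URL, headers=HEADERS, json=config)     # write settings back
+```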
+
+To disable Auto-Optimize, turn off the toggle.
+
+## Auto-Optimize reports
+
+In the Model and Learning Settings blade you can see the history of auto-optimize runs and the action taken on each.
+
+The table shows:
+* When each Auto-Optimize run happened.
+* What data window was included.
+* The reward performance of the online, baseline, and best-found Learning Settings.
+* The action taken: whether the Learning Settings were updated or not.
+
+Reward performance of the different Learning Settings in each Auto-Optimize history row is shown in absolute numbers, and as percentages relative to baseline performance.
+
+**Example**: If your baseline average reward is estimated to be 0.20, and the online Personalizer behavior is achieving 0.30, these will be shown as 100% and 150% respectively. If the auto optimization found learning settings capable of achieving 0.40 average reward, it will be shown as 200% (0.40 is 200% of 0.20). Assuming the confidence margins allow for it, the new settings would be applied, and then these would drive Personalizer as the Online settings until the next run.
+
+A history of up to 24 previous Auto-Optimize runs is kept for your analysis. You can drill into the details and reports of the Offline Evaluation behind each run. The reports also contain any Learning Settings that appear in this history, which you can download or apply.
+
+## How it works
+Personalizer is constantly training the AI models it uses based on rewards. This training is done following some *Learning Settings*, which contain hyper-parameters and other values used in the training process. These learning settings can be "tuned" to your specific Personalizer instance.
+
+Personalizer also has the ability to perform *Offline Evaluations*. Offline Evaluations look at past data and can produce a statistical estimation of the average reward that Personalizer's different algorithms and models could have attained. During this process, Personalizer will also search for better Learning Settings, estimating their performance (how many rewards they would have gotten) over that past time period.
+
+#### Auto-Optimize frequency
+Auto-Optimize runs periodically and performs the optimization based on past data:
+* If your application has sent more than approximately 20 MB of data to Personalizer in the last two weeks, it will use the last two weeks of data.
+* If your application has sent less than this amount, Personalizer will add data from previous days until there is enough data to optimize, or it reaches the earliest data stored (up to the Data Retention number of days).
+
+The exact times and days when Auto-Optimize runs are determined by the Personalizer service, and will fluctuate over time.
+
+#### Criteria for updating learning settings
+
+Personalizer uses these reward estimations to decide whether to replace the current Learning Settings with new ones. Each estimation is a distribution curve, with upper and lower 95% confidence bounds. Personalizer will only apply new Learning Settings if:
+ * They showed higher average rewards in the evaluation period, AND
+ * The lower bound of their 95% confidence interval is *higher* than the lower bound of the 95% confidence interval of the online Learning Settings.
+These criteria, which maximize the reward improvement while trying to eliminate the probability of losing future rewards, are managed by Personalizer and draw from research in [Seldonian algorithms](https://aisafety.cs.umass.edu/overview.html) and AI safety.
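+
+The following Python fragment is only a conceptual illustration of this comparison (it is not the service's internal logic), and the estimate structure is hypothetical:
+
+```python
+# Conceptual illustration: new Learning Settings are applied only if their
+# estimated average reward is higher AND their 95% lower confidence bound beats
+# the online settings' 95% lower confidence bound.
+def should_apply(candidate, online):
+    # candidate / online are dicts like {"mean": 0.40, "lower_95": 0.33}
+    return (candidate["mean"] > online["mean"]
+            and candidate["lower_95"] > online["lower_95"])
+
+print(should_apply({"mean": 0.40, "lower_95": 0.33},
+                   {"mean": 0.30, "lower_95": 0.28}))  # True -> settings updated
+```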
+
+#### Limitations of Auto-Optimize
+
+Personalizer Auto-Optimize relies on an evaluation of a past period to estimate performance in the future. Due to external factors in the world, in your application, and in your users, these estimations and predictions about Personalizer's models, made for the past period, may not be representative of the future.
+
+Automatic Optimization Preview is unavailable for Personalizer loops that have enabled the Multi-Slot personalization API Preview functionality.
+
+## Next steps
+
+* [Offline evaluations](concepts-offline-evaluation.md)
+* [Learning Policy and Settings](concept-active-learning.md)
+* [How To Analyze Personalizer with an Offline Evaluation](how-to-offline-evaluation.md)
+* [Research in AI Safety](https://aisafety.cs.umass.edu/overview.html)
ai-services Concept Multi Slot Personalization https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/personalizer/concept-multi-slot-personalization.md
+
+ Title: Multi-slot personalization
+description: Learn where and when to use single-slot and multi-slot personalization with the Personalizer Rank and Reward APIs.
+++++++ Last updated : 05/24/2021+++
+# Multi-slot personalization (Preview)
+
+Multi-slot personalization (Preview) allows you to target content in web layouts, carousels, and lists where more than one action (such as a product or piece of content) is shown to your users. With Personalizer multi-slot APIs, you can have the AI models in Personalizer learn what user contexts and products drive certain behaviors, considering and learning from the placement in your user interface. For example, Personalizer may learn that certain products or content drive more clicks when shown in a sidebar or a footer than as the main highlight on a page.
+
+In this article, you'll learn why multi-slot personalization improves results, how to enable it, and when to use it. This article assumes that you are familiar with the Personalizer APIs like `Rank` and `Reward`, and have a conceptual understanding of how to use them in your application. If you aren't familiar with Personalizer and how it works, review the following before you continue:
+
+* [What is Personalizer](what-is-personalizer.md)
+* [How Personalizer Works](how-personalizer-works.md)
+
+> [!IMPORTANT]
+> Multi-slot personalization is in Public Preview. Features, approaches and processes will change based on user feedback. Enabling the multi-slot preview will disable other Personalizer functionality in your loop permanently. Multi-slot personalization cannot be turned off once it is enabled for a Personalizer loop. Read this document and consider the impact before configuring a Personalizer loop for multi-slot personalization.
+
+<!--
+We encourage you to provide feedback on multi-slot personalization APIs via ...
+-->
+
+## When to use Multi-Slot personalization
+
+Whenever you display products and/or content to users, you may want to show more than one item to your customers. For example:
+
+* **Website layouts for home pages**: Many tiles and page areas are dedicated to highlighting content in boxes, banners, and sidebars of different shapes and sizes. Multi-slot personalization will learn how the characteristics of this layout affect customers' choices and actions.
+* **Carousels**: Carousels of dynamically changing content need a handful of items to cycle. Multi-slot personalization can learn how sequence and even display duration affects clicks and engagement.
+* **Related products/content and embedded references**: It is common to engage users by embedding or interspersing references to additional content and products in banners, side-bars, vignettes, and footer boxes. Multi-slot personalization can help you allocate your references where they are most likely to drive more use.
+* **Search results or lists**: If your application has search functionality where you provide results as lists or tiles, you can use multi-slot personalization to choose which items to highlight at the top, considering more metadata than traditional rankers.
+* **Dynamic channels and playlists**: Multi-slot personalization can help determine a short sequence for a list of videos or songs to play next in a dynamic channel.
+
+Multi-slot personalization allows you to declare the "slots" in the user interface that actions need to be chosen for. It also allows you to provide more information about the slots that Personalizer can use to improve product placement, such as: Is it a big box or a small box? Does it display a caption? Is it in a footer or a sidebar?
++
+## How to use Multi-slot personalization
+
+1. [Enable multi-slot Personalization](#enable-multi-slot-personalization)
+1. [Create JSON object for Rank request](#create-json-object-for-a-rank-request)
+1. [Call the Rank API defining slots and baseline actions](#use-the-response-of-the-rank-api)
+1. [Call the Rewards APIs](#call-the-reward-api)
++
+### Enable multi-slot personalization
+
+See [Differences between single-slot and multi-slot Personalization](#differences-between-single-slot-and-multi-slot-personalization) below to understand and decide whether multi-slot personalization is useful to you. *Multi-slot personalization is a preview feature*: We encourage you to create a new Personalizer loop if you want to test the multi-slot personalization APIs, as enabling multi-slot is non-reversible and will affect a Personalizer loop running in production.
+
+Once you have decided to convert a loop to multi-slot personalization, you must follow these steps once for this Personalizer loop:
++
+### Create JSON object for a Rank request
+Using multi-slot personalization requires an API that differs slightly from the single-slot personalization API.
+
+You declare the slots available to assign actions to in each Rank call request, in the slots object:
+
+* **Array of slots**: You need to declare an array of slots. Slots are *ordered*: the position of each slot in the array matters. We strongly recommend ordering your slot definitions based on how many rewards/clicks/conversions each slot tends to get, starting with the one that gets the most. For example, you'd put a large home page "hero" box for a website as slot 1, rather than a small footer. All other things being equal, Personalizer will assign actions with more chances of getting rewards earlier in the sequence.
+* **Slot ID**: You need to give a slotId to each slot - a string unique for all other slots in this Rank call.
+* **Slot features**: You should provide additional metadata that describes each slot and further distinguishes it from other slots. These are called *features*. When determining slot features, you must follow the same guidelines recommended for the features of context and actions (see [Features for context and actions]()). Typical slot features help identify the size, position, or visual characteristics of a user interface element. For example, `position: "top-left"`, `size: "big"`, `animated: "no"`, `sidebar: "true"`, or `sequence: "1"`.
+* **Baseline actions**: You need to specify the baseline action ID for each slot. That is, the ID of the Action that would be shown in that slot if Personalizer didn't exist. This is required to train Personalizer in [Apprentice Mode]() and to have a meaningful number when doing [Offline Evaluations]().
+* **Have enough actions**: Make sure you call Rank with more Actions than slots, so that Personalizer can assign at least one action to each slot. Personalizer won't repeat action recommendations across slots: The rank response will assign each action at most to one slot.
+
+It is OK if you add or remove slots over time, add and change their features, or re-order the array: Personalizer will adapt and keep training based on the new information.
+
+Here is an example `slots` object with some example features. Most of the `slots` object will be stable from request to request (since UIs tend to change slowly), but you must make sure to assign the appropriate baselineAction IDs in each Rank call.
++
+```json
+"slots": [ 
+    { 
+      "id": "BigHighlight", 
+      "features": [ 
+         { 
+           "size": "Large", 
+           "position": "Left-Middle" 
+         }
+ ],
+ "baselineAction": "BlackBoot_4656"
+    }, 
+
+    { 
+      "id": "Sidebar1", 
+      "features": [ 
+         { 
+           "size": "Small", 
+ "position": "Right-Top" 
+         } 
+ ],
+ "baselineAction": "TrekkingShoe_1122"  
+    }
+  ]
+
+```
++
+### Use the Response of the Rank API
+
+A multi-slot Rank response from the request above may look like the following:
+
+```json
+{
+  "slots": [
+    {
+      "id": "BigHighlight",
+      "rewardActionId": "WhiteSneaker_8181"
+    },
+    {
+      "id": "SideBar1",
+      "rewardActionId": "BlackBoot_4656"
+    }
+  ],
+  "eventId": "123456D0-BFEE-4598-8196-C57383D38E10"
+}
+```
+
+Take the rewardActionId for each slot, and use it to render your user interface appropriately.
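+
+For example, a small sketch of consuming this response in Python might look like the following; `render_in_slot` is a hypothetical function in your application.
+
+```python
+# Sketch: map each slot's rewardActionId from the Rank response to your UI.
+def apply_rank_response(rank_response, render_in_slot):
+    for slot in rank_response["slots"]:
+        render_in_slot(slot_id=slot["id"], action_id=slot["rewardActionId"])
+
+response = {
+    "slots": [
+        {"id": "BigHighlight", "rewardActionId": "WhiteSneaker_8181"},
+        {"id": "SideBar1", "rewardActionId": "BlackBoot_4656"}
+    ],
+    "eventId": "123456D0-BFEE-4598-8196-C57383D38E10"
+}
+apply_rank_response(response,
+                    render_in_slot=lambda slot_id, action_id: print(slot_id, "->", action_id))
+```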
+
+### Call the Reward API
+
+Personalizer learns how to choose actions that will maximize the reward obtained. Your application will observe user behavior and compute a "reward score" for Personalizer based on the observed reaction. For example, if the user clicked on the action in the slot with `"slotId": "SideBar1"`, you will send a reward of "1" to Personalizer to provide positive reinforcement for that action choice.
+
+The reward API specifies the eventId for the reward in the URL:
+
+```http
+https://{endpoint}/personalizer/v1.0/events/{eventId}/reward
+```
+
+For example, the reward for the event above, with ID `123456D0-BFEE-4598-8196-C57383D38E10`, will be sent to `https://{endpoint}/personalizer/v1.0/events/123456D0-BFEE-4598-8196-C57383D38E10/reward`:
+
+```json
+{
+  "reward": [
+    {
+      "slotId": "BigHighlight",
+      "value": 0.2
+    },
+    {
+      "slotId": "SideBar1",
+      "value": 1.0
+    }
+  ]
+}
+```
+
+You don't need to provide all reward scores in just one call of the Reward API. You can call the Reward API multiple times, each with the appropriate eventId and slotIds.
+If no reward score is received for a slot in an event, Personalizer will implicitly assign the Default Reward configured for the loop (typically 0).
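+
+As an illustration, sending per-slot rewards from Python might look like the following sketch; it uses the reward URL format and body shown above, and the endpoint and key are placeholders for your resource.
+
+```python
+# Sketch: sending per-slot rewards for one multi-slot event.
+import requests
+
+ENDPOINT = "https://<your-resource>.cognitiveservices.azure.com"  # placeholder
+HEADERS = {"Ocp-Apim-Subscription-Key": "<your-key>", "Content-Type": "application/json"}
+
+def send_slot_rewards(event_id, slot_rewards):
+    body = {"reward": [{"slotId": s, "value": v} for s, v in slot_rewards.items()]}
+    requests.post(f"{ENDPOINT}/personalizer/v1.0/events/{event_id}/reward",
+                  headers=HEADERS, json=body)
+
+send_slot_rewards("123456D0-BFEE-4598-8196-C57383D38E10",
+                  {"BigHighlight": 0.2, "SideBar1": 1.0})
+```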
++
+## Differences between single-slot and multi-slot personalization
+
+There are differences in how you use the Rank and Reward APIs with single and multi-slot personalization:
+
+| Description | Single-slot Personalization | Multi-slot Personalization |
+||-|-|
+| Rank API call request elements | You send a Context object and a list of Actions | You send Context, a list of Actions, and an ordered list of Slots |
+| Rank request specifying baseline | Personalizer will take the first Action in the action list as the baseline action (The item your application would have chosen if Personalizer didn't exist).|You must specify the baseline ActionID that would have been used in each Slot.|
+| Rank API call response | Your application highlights the action indicated in the rewardActionId field | The response includes a different rewardActionId for each Slot that was specified in the request. Your application will show those rewardActionId actions in each slot. |
+| Reward API call | You call the Reward API with a reward score, which you calculate from how the users interacted with rewardActionId for this particular eventId. For example, if the user clicked on it, you send a reward of 1. | You specify the Reward for each slot, given how well the action with rewardActionId elicited desired user behavior. This can be sent in one or multiple Reward API calls with the same eventId. |
++
+### Impact of enabling multi-slot for a Personalizer loop
+
+Additionally, when you enable multi-slot, consider the following:
+
+| Description | Single-slot Personalization | Multi-slot Personalization |
+||-|-|
+| Inactive events and activation | When calling the activate API, Personalizer will activate the event, expecting a Reward score or assigning the configured Default Reward if Reward Wait Time is exceeded. | Personalizer activates and expects rewards for all slots that were specified in the eventId |
+| Apprentice Mode | Personalizer Rank API always returns the baseline action, and trains internal models by imitating the baseline action.| Personalizer Rank API returns the baseline action for each slot specified in the baselineAction field. Personalizer will train internal models by imitating those baseline actions. |
+| Learning speed | Only learns from the one highlighted action | Can learn from interactions with any slot. This typically means more user behaviors that can yield rewards, which would result in faster learning for Personalizer. |
+| Offline Evaluations |Compares the performance of Personalizer against the baseline and optimized learning settings, based on which Action would have been chosen.| (Preview Limitation) Only evaluates performance of the first slot in the array. For more accurate evaluations, we recommend making sure the slot with the most rewards is the first one in your array. |
+| Automatic Optimization (Preview) | Your Personalizer loop can periodically perform Offline Evaluations in the background and optimize Learning Settings without administrative intervention | (Preview Limitation) Automatic Optimization is disabled for Personalizer loops that have multi-slot APIs enabled. |
+
+## Next steps
+* [How to use multi-slot personalization](how-to-multi-slot.md)
+* [Sample: Jupyter notebook running a multi-slot personalization loop](https://github.com/Azure-Samples/cognitive-services-quickstart-code/tree/master/python/Personalizer/azure-notebook)
ai-services Concept Rewards https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/personalizer/concept-rewards.md
+
+ Title: Reward score - Personalizer
+description: The reward score indicates how well the personalization choice, RewardActionID, resulted for the user. The value of the reward score is determined by your business logic, based on observations of user behavior. Personalizer trains its machine learning models by evaluating the rewards.
++
+ms.
++ Last updated : 02/20/2020+++
+# Reward scores indicate success of personalization
+
+The reward score indicates how well the personalization choice, [RewardActionID](/rest/api/personalizer/1.0/rank/rank#response), resulted for the user. The value of the reward score is determined by your business logic, based on observations of user behavior.
+
+Personalizer trains its machine learning models by evaluating the rewards.
+
+Learn [how to](how-to-settings.md#configure-rewards-for-the-feedback-loop) configure the default reward score in the Azure portal for your Personalizer resource.
+
+## Use Reward API to send reward score to Personalizer
+
+Rewards are sent to Personalizer by the [Reward API](/rest/api/personalizer/1.0/events/reward). Typically, a reward is a number from 0 to 1. A negative reward, with the value of -1, is possible in certain scenarios and should only be used if you are experienced with reinforcement learning (RL). Personalizer trains the model to achieve the highest possible sum of rewards over time.
+
+Rewards are sent after the user behavior has happened, which could be days later. The maximum amount of time that Personalizer will wait for a reward, before considering the event to have no reward or assigning a default reward, is configured with the [Reward Wait Time](#reward-wait-time) in the Azure portal.
+
+If the reward score for an event hasn't been received within the **Reward Wait Time**, then the **Default Reward** will be applied. Typically, the **[Default Reward](how-to-settings.md#configure-reward-settings-for-the-feedback-loop-based-on-use-case)** is configured to be zero.
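+
+For illustration, a minimal reward call from Python could look like the following sketch (assuming the `requests` package; the endpoint, key, and event ID are placeholders):
+
+```python
+# Sketch: send a single reward score (typically 0 to 1) for a Rank event.
+import requests
+
+ENDPOINT = "https://<your-resource>.cognitiveservices.azure.com"  # placeholder
+HEADERS = {"Ocp-Apim-Subscription-Key": "<your-key>", "Content-Type": "application/json"}
+
+def send_reward(event_id: str, score: float) -> None:
+    requests.post(f"{ENDPOINT}/personalizer/v1.0/events/{event_id}/reward",
+                  headers=HEADERS, json={"value": score})
+
+send_reward("<event-id-from-rank-response>", 1.0)
+```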
++
+## Behaviors and data to consider for rewards
+
+Consider these signals and behaviors for the context of the reward score:
+
+* Direct user input for suggestions when options are involved ("Do you mean X?").
+* Session length.
+* Time between sessions.
+* Sentiment analysis of the user's interactions.
+* Direct questions and mini surveys where the bot asks the user for feedback about usefulness, accuracy.
+* Response to alerts, or delay to response to alerts.
+
+## Composing reward scores
+
+A Reward score must be computed in your business logic. The score can be represented as:
+
+* A single number sent once
+* A score sent immediately (such as 0.8) and an additional score sent later (typically 0.2).
+
+## Default Rewards
+
+If no reward is received within the [Reward Wait Time](#reward-wait-time), measured from the time of the Rank call, Personalizer implicitly applies the **Default Reward** to that Rank event.
+
+## Building up rewards with multiple factors
+
+For effective personalization, you can build up the reward score based on multiple factors.
+
+For example, you could apply these rules for personalizing a list of video content:
+
+|User behavior|Partial score value|
+|--|--|
+|The user clicked on the top item.|+0.5 reward|
+|The user opened the actual content of that item.|+0.3 reward|
+|The user watched 5 minutes of the content or 30%, whichever is longer.|+0.2 reward|
+|||
+
+You can then send the total reward to the API.
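+
+A minimal sketch of that composition, using the partial scores from the table above (the behavior flags are hypothetical signals your application would observe):
+
+```python
+# Sketch: compose a single reward score from several observed behaviors.
+def compute_reward(clicked_top_item, opened_content, watched_enough):
+    score = 0.0
+    if clicked_top_item:
+        score += 0.5
+    if opened_content:
+        score += 0.3
+    if watched_enough:  # watched 5 minutes or 30%, whichever is longer
+        score += 0.2
+    return score
+
+print(compute_reward(True, True, False))  # 0.8, sent as the total reward
+```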
+
+## Calling the Reward API multiple times
+
+You can also call the Reward API using the same event ID, sending different reward scores. When Personalizer gets those rewards, it determines the final reward for that event by aggregating them as specified in the Personalizer configuration.
+
+Aggregation values:
+
+* **First**: Takes the first reward score received for the event, and discards the rest.
+* **Sum**: Takes all reward scores collected for the eventId, and adds them together.
+
+All rewards for an event, which are received after the **Reward Wait Time**, are discarded and do not affect the training of models.
+
+By adding up reward scores, your final reward may be outside the expected score range. This won't make the service fail.
+
+## Best Practices for calculating reward score
+
+* **Consider true indicators of successful personalization**: It is easy to think in terms of clicks, but a good reward is based on what you want your users to *achieve* instead of what you want people to *do*. For example, rewarding on clicks may lead to selecting content that is prone to clickbait.
+
+* **Use a reward score for how good the personalization worked**: Personalizing a movie suggestion would hopefully result in the user watching the movie and giving it a high rating. Since the movie rating probably depends on many things (the quality of the acting, the mood of the user), it is not a good reward signal for how well *the personalization* worked. The user watching the first few minutes of the movie, however, may be a better signal of personalization effectiveness and sending a reward of 1 after 5 minutes will be a better signal.
+
+* **Rewards only apply to RewardActionID**: Personalizer applies the rewards to understand the efficacy of the action specified in RewardActionID. If you choose to display other actions and the user selects them, the reward should be zero.
+
+* **Consider unintended consequences**: Create reward functions that lead to responsible outcomes with [ethics and responsible use](responsible-use-cases.md).
+
+* **Use Incremental Rewards**: Adding partial rewards for smaller user behaviors helps Personalizer achieve better rewards. This incremental reward allows the algorithm to know it's getting closer to engaging the user in the final desired behavior.
+ * If you are showing a list of movies and the user hovers over the first one for a while to see more information, you can determine that some user engagement happened. The behavior can count for a reward score of 0.1.
+ * If the user opened the page and then exited, the reward score can be 0.2.
+
+## Reward wait time
+
+Personalizer will correlate the information of a Rank call with the rewards sent in Reward calls to train the model; these may arrive at different times. Personalizer waits for the reward score for a defined, limited time, starting when the corresponding Rank call occurred. This is done even if the Rank call was made using [deferred activation](concept-active-inactive-events.md).
+
+If the **Reward Wait Time** expires and there has been no reward information, a default reward is applied to that event for training. You can select a reward wait time of 10 minutes, 4 hours, 12 hours, or 24 hours. If your scenario requires longer reward wait times (for example, for marketing email campaigns), we are offering a private preview of longer wait times. Open a support ticket in the Azure portal to get in contact with the team and see if you qualify and whether it can be offered to you.
+
+## Best practices for reward wait time
+
+Follow these recommendations for better results.
+
+* Make the Reward Wait Time as short as you can, while leaving enough time to get user feedback.
+
+* Don't choose a duration that is shorter than the time needed to get feedback. For example, if some of your rewards come in after a user has watched 1 minute of a video, the experiment length should be at least double that.
+
+## Next steps
+
+* [Reinforcement learning](concepts-reinforcement-learning.md)
+* [Try the Rank API](https://westus2.dev.cognitive.microsoft.com/docs/services/personalizer-api/operations/Rank/console)
+* [Try the Reward API](https://westus2.dev.cognitive.microsoft.com/docs/services/personalizer-api/operations/Reward)
ai-services Concepts Exploration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/personalizer/concepts-exploration.md
+
+ Title: Exploration - Personalizer
+
+description: With exploration, Personalizer is able to continuously deliver good results, even as user behavior changes. Choosing an exploration setting is a business decision about the proportion of user interactions to explore with, in order to improve the model.
++
+ms.
+++ Last updated : 08/28/2022++
+# Exploration
+
+With exploration, Personalizer is able to continuously deliver good results, even as user behavior changes.
+
+When Personalizer receives a Rank call, it returns a RewardActionID that either:
+* Uses known relevance to match the most probable user behavior based on the current machine learning model.
+* Uses exploration, which does not match the action that has the highest probability in the rank.
+
+Personalizer currently uses an algorithm called *epsilon greedy* to explore.
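+
+The following is a conceptual illustration of epsilon greedy only, not Personalizer's internal implementation: with probability epsilon a random action is explored, otherwise the action the current model scores highest is exploited.
+
+```python
+# Conceptual epsilon-greedy illustration (not Personalizer's internal code).
+import random
+
+def epsilon_greedy(actions, model_scores, epsilon=0.2):
+    if random.random() < epsilon:
+        return random.choice(actions)                   # explore
+    return max(actions, key=lambda a: model_scores[a])  # exploit
+
+scores = {"article-a": 0.7, "article-b": 0.4, "article-c": 0.1}
+print(epsilon_greedy(list(scores), scores, epsilon=0.2))
+```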
+
+## Choosing an exploration setting
+
+You configure the percentage of traffic to use for exploration in the Azure portal's **Configuration** page for Personalizer. This setting determines the percentage of Rank calls that perform exploration.
+
+Personalizer determines whether to explore or use the model's most probable action on each rank call. This is different than the behavior in some A/B frameworks that lock a treatment on specific user IDs.
+
+## Best practices for choosing an exploration setting
+
+Choosing an exploration setting is a business decision about the proportion of user interactions to explore with, in order to improve the model.
+
+A setting of zero will negate many of the benefits of Personalizer. With this setting, Personalizer doesn't use any user interactions to discover better actions. This leads to model stagnation, drift, and ultimately lower performance.
+
+A setting that is too high will negate the benefits of learning from user behavior. Setting it to 100% implies a constant randomization, and any learned behavior from users would not influence the outcome.
+
+It is important not to change the application behavior based on whether Personalizer is exploring or using the learned best action. This would lead to learning biases that would ultimately decrease the potential performance.
+
+## Next steps
+
+[Reinforcement learning](concepts-reinforcement-learning.md)
ai-services Concepts Features https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/personalizer/concepts-features.md
+
+ Title: "Features: Action and Context - Personalizer"
+
+description: Personalizer uses features, information about actions and context, to make better ranking suggestions. Features can be generic, or specific to an item.
++
+ms.
+++ Last updated : 12/28/2022++
+# Context and actions
+
+Personalizer works by learning what your application should show to users in a given context. Context and actions are the two most important pieces of information that you pass into Personalizer. The **context** represents the information you have about the current user or the state of your system, and the **actions** are the options to be chosen from.
+
+## Context
+
+Information for the _context_ depends on each application and use case, but it typically may include information such as:
+
+* Demographic and profile information about your user.
+* Information extracted from HTTP headers such as user agent, or derived from HTTP information such as reverse geographic lookups based on IP addresses.
+* Information about the current time, such as day of the week, weekend or not, morning or afternoon, holiday season or not, etc.
+* Information extracted from mobile applications, such as location, movement, or battery level.
+* Historical aggregates of the behavior of users - such as what are the movie genres this user has viewed the most.
+* Information about the state of the system.
+
+Your application is responsible for loading the information about the context from the relevant databases, sensors, and systems you may have. If your context information doesn't change, you can add logic in your application to cache this information, before sending it to the Rank API.
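+
+As a sketch, caching a slowly changing context lookup might look like the following; `load_user_profile` is a hypothetical, relatively expensive lookup in your application.
+
+```python
+# Sketch: cache slowly changing context lookups before building Rank requests.
+from functools import lru_cache
+
+@lru_cache(maxsize=10_000)
+def load_user_profile(user_id: str) -> tuple:
+    # e.g., query your CRM or database; returned as a tuple so it is hashable/cacheable
+    return (("profileType", "AnonymousUser"), ("Location", "New York, USA"))
+
+def build_context(user_id: str) -> list:
+    profile = dict(load_user_profile(user_id))
+    return [{"user": profile}, {"device": {"mobile": True}}]
+```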
+
+## Actions
+
+Actions represent a list of options.
+
+Don't send in more than 50 actions when Ranking actions. They may be the same 50 actions every time, or they may change. For example, if you have a product catalog of 10,000 items for an e-commerce application, you may use a recommendation or filtering engine to determine the top 40 a customer may like, and use Personalizer to find the one that will generate the most reward for the current context.
+
+### Examples of actions
+
+The actions you send to the Rank API will depend on what you are trying to personalize.
+
+Here are some examples:
+
+|Purpose|Action|
+|--|--|
+|Personalize which article is highlighted on a news website.|Each action is a potential news article.|
+|Optimize ad placement on a website.|Each action will be a layout or rules to create a layout for the ads (for example, on the top, on the right, small images, large images).|
+|Display personalized ranking of recommended items on a shopping website.|Each action is a specific product.|
+|Suggest user interface elements such as filters to apply to a specific photo.|Each action may be a different filter.|
+|Choose a chat bot's response to clarify user intent or suggest an action.|Each action is an option of how to interpret the response.|
+|Choose what to show at the top of a list of search results|Each action is one of the top few search results.|
+
+### Load actions from the client application
+
+Features from actions may typically come from content management systems, catalogs, and recommender systems. Your application is responsible for loading the information about the actions from the relevant databases and systems you have. If your actions don't change or getting them loaded every time has an unnecessary impact on performance, you can add logic in your application to cache this information.
+
+### Prevent actions from being ranked
+
+In some cases, there are actions that you don't want to display to users. The best way to prevent an action from being ranked is by adding it to the [Excluded Actions](/dotnet/api/microsoft.azure.cognitiveservices.personalizer.models.rankrequest.excludedactions) list, or not passing it to the Rank Request.
+
+In some cases, you might not want events to be trained on by default. In other words, you only want to train events when a specific condition is met. For example, the personalized part of your webpage is below the fold (users have to scroll before interacting with the personalized content). In this case, you'll render the entire page, but only want an event to be trained on when the user scrolls and has a chance to interact with the personalized content. For these cases, you should [defer event activation](concept-active-inactive-events.md) to avoid assigning a default reward to (and training on) events that the end user didn't have a chance to interact with.
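+
+A minimal sketch of this pattern is shown below, assuming the `deferActivation` flag on the Rank request and the event Activate endpoint; verify both against the API reference for your API version.
+
+```python
+# Sketch: rank with deferred activation, then activate the event only when the
+# user actually scrolls the personalized content into view.
+import requests
+
+ENDPOINT = "https://<your-resource>.cognitiveservices.azure.com"  # placeholder
+HEADERS = {"Ocp-Apim-Subscription-Key": "<your-key>", "Content-Type": "application/json"}
+
+rank = requests.post(f"{ENDPOINT}/personalizer/v1.0/rank", headers=HEADERS, json={
+    "contextFeatures": [{"timeOfDay": "noon"}],
+    "actions": [
+        {"id": "article-a", "features": [{"topic": "sports"}]},
+        {"id": "article-b", "features": [{"topic": "cooking"}]}
+    ],
+    "deferActivation": True  # don't train on (or default-reward) this event yet
+}).json()
+
+def on_content_scrolled_into_view(event_id):
+    # Activate so the event becomes eligible for a reward (or the default reward).
+    requests.post(f"{ENDPOINT}/personalizer/v1.0/events/{event_id}/activate",
+                  headers=HEADERS)
+```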
+
+## Features
+
+Both the **context** and possible **actions** are described using **features**. The features represent all information you think is important for the decision making process to maximize rewards. A good starting point is to imagine you're tasked with selecting the best action at each timestamp and ask yourself: "What information do I need to make an informed decision? What information do I have available to describe the context and each possible action?" Features can be generic, or specific to an item.
+
+Personalizer does not prescribe, limit, or fix what features you can send for actions and context:
+
+* Over time, you may add and remove features about context and actions. Personalizer continues to learn from available information.
+* For categorical features, there's no need to pre-define the possible values.
+* For numeric features, there's no need to pre-define ranges.
+* Feature names starting with an underscore `_` will be ignored.
+* The list of features can be large (hundreds), but we recommend starting with a concise feature set and expanding as necessary.
+* **action** features may or may not have any correlation with **context** features.
+* Features that aren't available should be omitted from the request. If the value of a specific feature is not available for a given request, omit the feature for this request.
+* Avoid sending features with a null value. A null value will be processed as a string with a value of "null", which is undesired.
+
+It's ok and natural for features to change over time. However, keep in mind that Personalizer's machine learning model adapts based on the features it sees. If you send a request containing all new features, Personalizer's model won't be able to use past events to select the best action for the current event. Having a 'stable' feature set (with recurring features) will help the performance of Personalizer's machine learning algorithms.
+
+### Context features
+* Some context features may only be available part of the time. For example, if a user is logged into the online grocery store website, the context will contain features describing purchase history. These features won't be available for a guest user.
+* There must be at least one context feature. Personalizer does not support an empty context.
+* If the context features are identical for every request, Personalizer will choose the globally best action.
+
+### Action features
+* Not all actions need to contain the same features. For example, in the online grocery store scenario, microwavable popcorn will have a "cooking time" feature, while a cucumber won't.
+* Features for a certain action ID may be available one day, but later on become unavailable.
+
+Examples:
+
+The following are good examples for action features. These will depend a lot on each application.
+
+* Features with characteristics of the actions. For example, is it a movie or a tv series?
+* Features about how users may have interacted with this action in the past. For example, this movie is mostly seen by people in demographics A or B, it's typically played no more than one time.
+* Features about the characteristics of how the user *sees* the actions. For example, does the poster for the movie shown in the thumbnail include faces, cars, or landscapes?
+
+## Supported feature types
+
+Personalizer supports features of string, numeric, and boolean types. It's likely that your application will mostly use string features, with a few exceptions.
+
+### How feature types affect machine learning in Personalizer
+
+* **Strings**: For string types, every key-value (feature name, feature value) combination is treated as a One-Hot feature (for example, category:"Produce" and category:"Meat" would internally be represented as different features in the machine learning model).
+* **Numeric**: Only use numeric values when the number is a magnitude that should proportionally affect the personalization result. This is very scenario dependent. Features that are based on numeric units but where the meaning isn't linear - such as Age, Temperature, or Person Height - are best encoded as categorical strings (see the sketch after this list). For example, Age could be encoded as "Age":"0-5", "Age":"6-10", etc. Height could be bucketed as "Height": "<5'0", "Height": "5'0-5'4", "Height": "5'5-5'11", "Height": "6'0-6'4", "Height": ">6'4".
+* **Boolean**
+* **Arrays**: Only numeric arrays are supported.
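+
+The following sketch shows the kind of bucketing suggested above for non-linear numeric values such as Age; the exact bucket boundaries are illustrative only.
+
+```python
+# Sketch: encode a non-linear numeric attribute as a categorical string bucket.
+def age_bucket(age: int) -> str:
+    if age <= 5:
+        return "0-5"
+    if age <= 10:
+        return "6-10"
+    if age <= 17:
+        return "11-17"
+    return "18+"
+
+context_features = [{"user": {"Age": age_bucket(34)}}]  # {"Age": "18+"}
+```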
+
+## Feature engineering
+
+* Use categorical and string types for features that are not a magnitude.
+* Make sure there are enough features to drive personalization. The more precisely targeted the content needs to be, the more features are needed.
+* There are features of diverse *densities*. A feature is *dense* if many items are grouped in a few buckets. For example, thousands of videos can be classified as "Long" (over 5 min long) and "Short" (under 5 min long). This is a *very dense* feature. On the other hand, the same thousands of items can have an attribute called "Title", which will almost never have the same value from one item to another. This is a very non-dense or *sparse* feature.
+
+Having features of high density helps Personalizer extrapolate learning from one item to another. But if there are only a few features and they are too dense, Personalizer will try to precisely target content with only a few buckets to choose from.
+
+### Common issues with feature design and formatting
+
+* **Sending features with high cardinality.** Features that have unique values that are not likely to repeat over many events. For example, PII specific to one individual (such as name, phone number, credit card number, IP address) shouldn't be used with Personalizer.
+* **Sending user IDs** With large numbers of users, it's unlikely that this information is relevant to Personalizer learning to maximize the average reward score. Sending user IDs (even if non-PII) will likely add more noise to the model and is not recommended.
+* **Sending unique values that will rarely occur more than a few times**. It's recommended to bucket your features to a higher level-of-detail. For example, having features such as `"Context.TimeStamp.Day":"Monday"` or `"Context.TimeStamp.Hour":13` can be useful as there are only 7 and 24 unique values, respectively. However, `"Context.TimeStamp":"1985-04-12T23:20:50.52Z"` is very precise and has an extremely large number of unique values, which makes it very difficult for Personalizer to learn from it.
+
+### Improve feature sets
+
+Analyze the user behavior by running a [Feature Evaluation Job](how-to-feature-evaluation.md). This allows you to look at past data to see what features are heavily contributing to positive rewards versus those that are contributing less. You can see what features are helping, and it will be up to you and your application to find better features to send to Personalizer to improve results even further.
+
+### Expand feature sets with artificial intelligence and Azure AI services
+
+Artificial Intelligence and ready-to-run Azure AI services can be a very powerful addition to Personalizer.
+
+By preprocessing your items using artificial intelligence services, you can automatically extract information that is likely to be relevant for personalization.
+
+For example:
+
+* You can run a movie file via [Video Indexer](https://azure.microsoft.com/services/media-services/video-indexer/) to extract scene elements, text, sentiment, and many other attributes. These attributes can then be made more dense to reflect characteristics that the original item metadata didn't have.
+* Images can be run through object detection, faces through sentiment, etc.
+* Information in text can be augmented by extracting entities, sentiment, and expanding entities with Bing knowledge graph.
+
+You can use several other [Azure AI services](https://www.microsoft.com/cognitive-services), like
+
+* [Entity Linking](../language-service/entity-linking/overview.md)
+* [Language service](../language-service/index.yml)
+* [Face](../computer-vision/overview-identity.md)
+* [Vision](../computer-vision/overview.md)
+
+### Use embeddings as features
+
+Embeddings from various Machine Learning models have proven to be effective features for Personalizer:
+
+* Embeddings from Large Language Models
+* Embeddings from Azure AI Vision Models
+
+## Namespaces
+
+Optionally, features can be organized using namespaces (relevant for both context and action features). Namespaces can be used to group features by topic, by source, or any other grouping that makes sense in your application. You determine if namespaces are used and what they should be. Namespaces organize features into distinct sets, and disambiguate features with similar names. You can think of namespaces as a 'prefix' that is added to feature names. Namespaces should not be nested.
+
+The following are examples of feature namespaces used by applications:
+
+* User_Profile_from_CRM
+* Time
+* Mobile_Device_Info
+* http_user_agent
+* VideoResolution
+* DeviceInfo
+* Weather
+* Product_Recommendation_Ratings
+* current_time
+* NewsArticle_TextAnalytics
+
+### Namespace naming conventions and guidelines
+
+* Namespaces should not be nested.
+* Namespaces must start with unique ASCII characters (we recommend using namespace names that are UTF-8 based). Currently, namespaces with the same first characters could result in collisions, so it's strongly recommended to have your namespaces start with characters that are distinct from each other.
+* Namespaces are case sensitive. For example `user` and `User` will be considered different namespaces.
+* Feature names can be repeated across namespaces, and will be treated as separate features
+* The following characters cannot be used: codes < 32 (not printable), 32 (space), 58 (colon), 124 (pipe), and 126–140.
+* All namespaces starting with an underscore `_` will be ignored.
+
+## JSON examples
+
+### Actions
+When calling Rank, you will send multiple actions to choose from:
+
+JSON objects can include nested JSON objects and simple property/values. An array can be included only if the array items are numbers.
+
+```json
+{
+ "actions": [
+ {
+ "id": "pasta",
+ "features": [
+ {
+ "taste": "salty",
+ "spiceLevel": "medium",
+ "grams": [400,800]
+ },
+ {
+ "nutritionLevel": 5,
+ "cuisine": "italian"
+ }
+ ]
+ },
+ {
+ "id": "ice cream",
+ "features": [
+ {
+ "taste": "sweet",
+ "spiceLevel": "none",
+ "grams": [150, 300, 450]
+ },
+ {
+ "nutritionalLevel": 2
+ }
+ ]
+ },
+ {
+ "id": "juice",
+ "features": [
+ {
+ "taste": "sweet",
+ "spiceLevel": "none",
+ "grams": [300, 600, 900]
+ },
+ {
+ "nutritionLevel": 5
+ },
+ {
+ "drink": true
+ }
+ ]
+ },
+ {
+ "id": "salad",
+ "features": [
+ {
+ "taste": "salty",
+ "spiceLevel": "low",
+ "grams": [300, 600]
+ },
+ {
+ "nutritionLevel": 8
+ }
+ ]
+ }
+ ]
+}
+```
+
+### Context
+
+Context is expressed as a JSON object that is sent to the Rank API:
+
+JSON objects can include nested JSON objects and simple property/values. An array can be included only if the array items are numbers.
+
+```JSON
+{
+ "contextFeatures": [
+ {
+ "state": {
+ "timeOfDay": "noon",
+ "weather": "sunny"
+ }
+ },
+ {
+ "device": {
+ "mobile":true,
+ "Windows":true,
+ "screensize": [1680,1050]
+ }
+ }
+ ]
+}
+```
+
+### Namespaces
+
+In the following JSON, `user`, `environment`, `device`, and `activity` are namespaces.
+
+> [!Note]
+> We strongly recommend using names for feature namespaces that are UTF-8 based and start with different letters. For example, `user`, `environment`, `device`, and `activity` start with `u`, `e`, `d`, and `a`. Currently having namespaces with same first characters could result in collisions.
+
+```JSON
+{
+ "contextFeatures": [
+ {
+ "user": {
+ "profileType":"AnonymousUser",
+ "Location": "New York, USA"
+ }
+ },
+ {
+ "environment": {
+ "monthOfYear": "8",
+ "timeOfDay": "Afternoon",
+ "weather": "Sunny"
+ }
+ },
+ {
+ "device": {
+ "mobile":true,
+ "Windows":true
+ }
+ },
+ {
+ "activity" : {
+ "itemsInCart": "3-5",
+ "cartValue": "250-300",
+ "appliedCoupon": true
+ }
+ }
+ ]
+}
+```
+
+## Next steps
+
+[Reinforcement learning](concepts-reinforcement-learning.md)
ai-services Concepts Offline Evaluation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/personalizer/concepts-offline-evaluation.md
+
+ Title: Use the Offline Evaluation method - Personalizer
+
+description: This article will explain how to use offline evaluation to measure effectiveness of your app and analyze your learning loop.
++
+ms.
+++ Last updated : 02/20/2020++
+# Offline evaluation
+
+Offline evaluation is a method that allows you to test and assess the effectiveness of the Personalizer Service without changing your code or affecting user experience. Offline evaluation uses past data, sent from your application to the Rank and Reward APIs, to compare how different ranks have performed.
+
+Offline evaluation is performed on a date range. The range can finish as late as the current time. The beginning of the range can't be more than the number of days specified for [data retention](how-to-settings.md).
+
+Offline evaluation can help you answer the following questions:
+
+* How effective are Personalizer ranks for successful personalization?
+ * What are the average rewards achieved by the Personalizer online machine learning policy?
+ * How does Personalizer compare to the effectiveness of what the application would have done by default?
+ * What would have been the comparative effectiveness of a random choice for Personalization?
+ * What would have been the comparative effectiveness of different learning policies specified manually?
+* Which features of the context are contributing more or less to successful personalization?
+* Which features of the actions are contributing more or less to successful personalization?
+
+In addition, Offline Evaluation can be used to discover more optimized learning policies that Personalizer can use to improve results in the future.
+
+Offline evaluations do not provide guidance as to the percentage of events to use for exploration.
+
+## Prerequisites for offline evaluation
+
+The following are important considerations for the representative offline evaluation:
+
+* Have enough data. The recommended minimum is at least 50,000 events.
+* Collect data from periods with representative user behavior and traffic.
+
+## Discovering the optimized learning policy
+
+Personalizer can use the offline evaluation process to discover a more optimal learning policy automatically.
+
+After performing the offline evaluation, you can see the comparative effectiveness of Personalizer with that new policy compared to the current online policy. You can then apply that learning policy to make it effective immediately in Personalizer, by downloading it and uploading it in the Models and Policy panel. You can also download it for future analysis or use.
+
+Current policies included in the evaluation:
+
+| Learning settings | Purpose|
+|--|--|
+|**Online Policy**| The current Learning Policy used in Personalizer |
+|**Baseline**|The application's default (as determined by the first Action sent in Rank calls)|
+|**Random Policy**|An imaginary Rank behavior that always returns random choice of Actions from the supplied ones.|
+|**Custom Policies**|Additional Learning Policies uploaded when starting the evaluation.|
+|**Optimized Policy**|If the evaluation was started with the option to discover an optimized policy, it will also be compared, and you will be able to download it or make it the online learning policy, replacing the current one.|
+
+## Understanding the relevance of offline evaluation results
+
+When you run an offline evaluation, it's very important to analyze the _confidence bounds_ of the results. If they are wide, it means your application hasn't received enough data for the reward estimates to be precise or significant. As the system accumulates more data, and you run offline evaluations over longer periods, the confidence intervals become narrower.
+
+## How offline evaluations are done
+
+Offline Evaluations are done using a method called **Counterfactual Evaluation**.
+
+Personalizer is built on the assumption that users' behavior (and thus rewards) is impossible to predict retrospectively (Personalizer can't know what would have happened if the user had been shown something different from what they did see), and that it can only learn from measured rewards.
+
+This is the conceptual process used for evaluations:
+
+```
+[For a given _learning policy_, such as the online learning policy, uploaded learning policies, or optimized candidate policies]:
+{
+ Initialize a virtual instance of Personalizer with that policy and a blank model;
+
+ [For every chronological event in the logs]
+ {
+ - Perform a Rank call
+
+ - Compare the reward of the results against the logged user behavior.
+ - If they match, train the model on the observed reward in the logs.
+ - If they don't match, then what the user would have done is unknown, so the event is discarded and not used for training or measurement.
+
+ }
+
+ Add up the rewards and statistics that were predicted, do some aggregation to aid visualizations, and save the results.
+}
+```
+
+The offline evaluation only uses observed user behavior. This process discards large volumes of data, especially if your application does Rank calls with large numbers of actions.
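+
+A heavily simplified, concrete version of that process is sketched below; the `logged_events` record format and the `policy` callable are hypothetical, and the real evaluation also retrains a virtual model as it goes.
+
+```python
+# Sketch: counterfactual evaluation keeps only events where the candidate policy
+# agrees with the logged action; everything else is discarded.
+def counterfactual_evaluation(logged_events, policy):
+    matched, total_reward = 0, 0.0
+    for event in logged_events:
+        chosen = policy(event["context"], event["actions"])
+        if chosen == event["logged_action"]:
+            matched += 1
+            total_reward += event["logged_reward"]  # usable for measurement/training
+        # otherwise the user's reaction to `chosen` is unknown, so the event is dropped
+    return total_reward / matched if matched else None
+```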
++
+## Evaluation of features
+
+Offline evaluations can provide information about how much the specific features of actions or context are contributing to higher rewards. The information is computed using the evaluation against the given time period and data, and may vary with time.
+
+We recommend looking at feature evaluations and asking:
+
+* What other, additional, features could your application or system provide along the lines of those that are more effective?
+* What features can be removed due to low effectiveness? Low effectiveness features add _noise_ into the machine learning.
+* Are there any features that are accidentally included? Examples of these are: user identifiable information, duplicate IDs, etc.
+* Are there any undesirable features that shouldn't be used to personalize due to regulatory or responsible use considerations? Are there features that could proxy (that is, closely mirror or correlate with) undesirable features?
++
+## Next steps
+
+[Configure Personalizer](how-to-settings.md)
+[Run Offline Evaluations](how-to-offline-evaluation.md)
+Understand [How Personalizer Works](how-personalizer-works.md)
ai-services Concepts Reinforcement Learning https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/personalizer/concepts-reinforcement-learning.md
+
+ Title: Reinforcement Learning - Personalizer
+
+description: Personalizer uses information about actions and current context to make better ranking suggestions. The information about these actions and context are attributes or properties that are referred to as features.
++
+ms.
+++ Last updated : 05/07/2019+
+# What is Reinforcement Learning?
+
+Reinforcement Learning is an approach to machine learning that learns behaviors by getting feedback from its use.
+
+Reinforcement Learning works by:
+
+* Providing an opportunity or degree of freedom to enact a behavior - such as making decisions or choices.
+* Providing contextual information about the environment and choices.
+* Providing feedback about how well the behavior achieves a certain goal.
+
+While there are many subtypes and styles of reinforcement learning, this is how the concept works in Personalizer:
+
+* Your application provides the opportunity to show one piece of content from a list of alternatives.
+* Your application provides information about each alternative and the context of the user.
+* Your application computes a _reward score_.
+
+Unlike some approaches to reinforcement learning, Personalizer doesn't require a simulation to work in. Its learning algorithms are designed to react to an outside world (versus control it) and learn from each data point with an understanding that it's a unique opportunity that cost time and money to create, and that there's a non-zero regret (loss of possible reward) if suboptimal performance happens.
+
+## What type of reinforcement learning algorithms does Personalizer use?
+
+The current version of Personalizer uses **contextual bandits**, an approach to reinforcement learning that is framed around making decisions or choices between discrete actions, in a given context.
+
+The _decision memory_, the model that has been trained to capture the best possible decision, given a context, uses a set of linear models. These have repeatedly shown business results and are a proven approach, partially because they can learn from the real world very rapidly without needing multi-pass training, and partially because they can complement supervised learning models and deep neural network models.
+
+The explore / best action traffic allocation is made randomly following the percentage set for exploration, and the default algorithm for exploration is epsilon-greedy.
+
+### History of Contextual Bandits
+
+John Langford coined the name Contextual Bandits (Langford and Zhang [2007]) to describe a tractable subset of reinforcement learning and has worked on a half-dozen papers improving our understanding of how to learn in this paradigm:
+
+* Beygelzimer et al. [2011]
+* Dudík et al. [2011a, b]
+* Agarwal et al. [2014, 2012]
+* Beygelzimer and Langford [2009]
+* Li et al. [2010]
+
+John has also given several tutorials previously on topics such as Joint Prediction (ICML 2015), Contextual Bandit Theory (NIPS 2013), Active Learning (ICML 2009), and Sample Complexity Bounds (ICML 2003)
+
+## What machine learning frameworks does Personalizer use?
+
+Personalizer currently uses [Vowpal Wabbit](https://github.com/VowpalWabbit/vowpal_wabbit/wiki) as the foundation for the machine learning. This framework allows for maximum throughput and lowest latency when making personalization ranks and training the model with all events.
+
+## References
+
+* [Making Contextual Decisions with Low Technical Debt](https://arxiv.org/abs/1606.03966)
+* [A Reductions Approach to Fair Classification](https://arxiv.org/abs/1803.02453)
+* [Efficient Contextual Bandits in Non-stationary Worlds](https://arxiv.org/abs/1708.01799)
+* [Residual Loss Prediction: Reinforcement: learning With No Incremental Feedback](https://openreview.net/pdf?id=HJNMYceCW)
+* [Mapping Instructions and Visual Observations to Actions with Reinforcement Learning](https://arxiv.org/abs/1704.08795)
+* [Learning to Search Better Than Your Teacher](https://arxiv.org/abs/1502.02206)
+
+## Next steps
+
+[Offline evaluation](concepts-offline-evaluation.md)
ai-services Concepts Scalability Performance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/personalizer/concepts-scalability-performance.md
+
+ Title: Scalability and Performance - Personalizer
+
+description: "High-performance and high-traffic websites and applications have two main factors to consider with Personalizer for scalability and performance: latency and training throughput."
++
+ms.
+++ Last updated : 10/24/2019+
+# Scalability and Performance
+
+High-performance and high-traffic websites and applications have two main factors to consider with Personalizer for scalability and performance:
+
+* Keeping low latency when making Rank API calls
+* Making sure training throughput keeps up with event input
+
+Personalizer can return a rank rapidly, with most of the call duration dedicated to communication through the REST API. Azure automatically scales the service to respond to requests quickly.
+
+## Low-latency scenarios
+
+Some applications require low latencies when returning a rank. Low latencies are necessary:
+
+* To keep the user from waiting a noticeable amount of time before displaying ranked content.
+* To help a server that is experiencing extreme traffic avoid tying up scarce compute time and network connections.
++
+## Scalability and training throughput
+
+Personalizer works by updating a model that is retrained based on messages sent asynchronously by Personalizer after Rank and Reward API calls. These messages are sent to an Azure Event Hub for the application.
+
+It's unlikely that most applications will reach the maximum joining and training throughput of Personalizer. While reaching this maximum won't slow down the application, it would imply that Event Hub queues are filling internally faster than they can be drained.
+
+## How to estimate your throughput requirements
+
+* Estimate the average number of bytes per ranking event by adding the lengths of the context and action JSON documents.
+* Divide 20 MB/sec by this estimated average event size in bytes.
+
+For example, if your average payload has 500 features and each is an estimated 20 characters, then each event is approximately 10 KB. With these estimates, 20,000,000 / 10,000 = 2,000 events/sec, which is about 173 million events/day.
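+
+As a rough illustration, the arithmetic above can be scripted. The feature count and per-feature size below are assumptions for the sake of the example, not service limits:
+
+```python
+# Back-of-the-envelope throughput estimate for Personalizer ranking events (illustrative only).
+AVG_FEATURES_PER_EVENT = 500            # assumption: average number of features per Rank call
+AVG_BYTES_PER_FEATURE = 20              # assumption: average serialized size of one feature
+MAX_INGEST_BYTES_PER_SEC = 20_000_000   # the 20 MB/sec figure from this article
+
+bytes_per_event = AVG_FEATURES_PER_EVENT * AVG_BYTES_PER_FEATURE     # ~10 KB per event
+events_per_sec = MAX_INGEST_BYTES_PER_SEC / bytes_per_event          # ~2,000 events/sec
+events_per_day = events_per_sec * 60 * 60 * 24                       # ~173 million events/day
+
+print(f"{bytes_per_event} B/event, {events_per_sec:.0f} events/sec, {events_per_day:,.0f} events/day")
+```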
+
+If you're reaching these limits, please contact our support team for architecture advice.
+
+## Next steps
+
+[Create and configure Personalizer](how-to-settings.md).
ai-services Encrypt Data At Rest https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/personalizer/encrypt-data-at-rest.md
+
+ Title: Data-at-rest encryption in Personalizer
+
+description: Learn about the keys that you use for data-at-rest encryption in Personalizer. See how to use Azure Key Vault to configure customer-managed keys.
+++++ Last updated : 06/02/2022++
+#Customer intent: As a user of the Personalizer service, I want to learn how encryption at rest works.
++
+# Encryption of data at rest in Personalizer
+
+Personalizer is a service in Azure AI services that uses a machine learning model to provide apps with user-tailored content. When Personalizer persists data to the cloud, it encrypts that data. This encryption protects your data and helps you meet organizational security and compliance commitments.
++
+> [!IMPORTANT]
+> Customer-managed keys are only available with the E0 pricing tier. To request the ability to use customer-managed keys, fill out and submit the [Personalizer Service Customer-Managed Key Request Form](https://aka.ms/cogsvc-cmk). It takes approximately 3-5 business days to hear back about the status of your request. If demand is high, you might be placed in a queue and approved when space becomes available.
+>
+> After you're approved to use customer-managed keys with Personalizer, create a new Personalizer resource and select E0 as the pricing tier. After you've created that resource, you can use Azure Key Vault to set up your managed identity.
++
+## Next steps
+
+* [Personalizer Service Customer-Managed Key Request Form](https://aka.ms/cogsvc-cmk)
+* [Learn more about Azure Key Vault](../../key-vault/general/overview.md)
ai-services How Personalizer Works https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/personalizer/how-personalizer-works.md
+
+ Title: How Personalizer Works - Personalizer
+description: The Personalizer _loop_ uses machine learning to build the model that predicts the top action for your content. The model is trained exclusively on your data that you sent to it with the Rank and Reward calls.
++
+ms.
+++ Last updated : 02/18/2020++
+# How Personalizer works
+
+The Personalizer resource, your _learning loop_, uses machine learning to build the model that predicts the top action for your content. The model is trained exclusively on the data that you send to it with the **Rank** and **Reward** calls. Each loop is completely independent of the others.
+
+## Rank and Reward APIs impact the model
+
+You send _actions with features_ and _context features_ to the Rank API. The **Rank** API decides to use either:
+
+* _Exploit_: Use the current model to decide the best action based on past data.
+* _Explore_: Select a different action instead of the top action. You [configure this percentage](how-to-settings.md#configure-exploration-to-allow-the-learning-loop-to-adapt) for your Personalizer resource in the Azure portal.
+
+You determine the reward score and send that score to the Reward API. The **Reward** API:
+
+* Collects data to train the model by recording the features and reward scores of each rank call.
+* Uses that data to update the model based on the configuration specified in the _Learning Policy_.
+
+## Your system calling Personalizer
+
+The following image shows the architectural flow of calling the Rank and Reward calls:
+
+![alt text](./media/how-personalizer-works/personalization-how-it-works.png "How Personalization Works")
+
+1. You send _actions with features_ and _context features_ to the Rank API.
+
+ * Personalizer decides whether to exploit the current model or explore new choices for the model.
+ * The ranking result is sent to EventHub.
+1. The top rank is returned to your system as _reward action ID_.
+ Your system presents that content and determines a reward score based on your own business rules.
+1. Your system returns the reward score to the learning loop.
+ * When Personalizer receives the reward, the reward is sent to EventHub.
+ * The rank and reward are correlated.
+ * The AI model is updated based on the correlation results.
+ * The inference engine is updated with the new model.
+
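+The numbered flow above can be sketched with direct REST calls. This is a minimal, hedged example: the endpoint, key, actions, and reward value are placeholders, and the `v1.0` paths shown should be checked against the Rank and Reward API reference for your resource's API version.
+
+```python
+# Minimal sketch of the Rank -> display -> Reward loop (placeholders throughout).
+import uuid
+import requests
+
+ENDPOINT = "https://<your-resource-name>.cognitiveservices.azure.com"   # placeholder
+KEY = "<your-resource-key>"                                             # placeholder
+HEADERS = {"Ocp-Apim-Subscription-Key": KEY, "Content-Type": "application/json"}
+
+event_id = str(uuid.uuid4())
+rank_request = {
+    "eventId": event_id,
+    "contextFeatures": [{"timeOfDay": "morning", "device": "mobile"}],   # example context
+    "actions": [
+        {"id": "pasta", "features": [{"cuisine": "italian"}]},
+        {"id": "salad", "features": [{"cuisine": "mediterranean"}]},
+    ],
+}
+
+# 1. Ask Personalizer which action to show (it may exploit the model or explore).
+rank_response = requests.post(
+    f"{ENDPOINT}/personalizer/v1.0/rank", headers=HEADERS, json=rank_request
+).json()
+chosen_action = rank_response["rewardActionId"]
+
+# 2. Display the chosen action, observe the outcome, and compute a reward score in [0, 1].
+reward_score = 1.0   # example outcome; replace with your own business rules
+
+# 3. Return the reward so the learning loop can correlate it with the Rank event.
+requests.post(
+    f"{ENDPOINT}/personalizer/v1.0/events/{event_id}/reward",
+    headers=HEADERS,
+    json={"value": reward_score},
+)
+```
+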
+## Personalizer retrains your model
+
+Personalizer retrains your model based on the **Model update frequency** setting on your Personalizer resource in the Azure portal.
+
+Personalizer uses all the data currently retained, based on the **Data retention** setting in number of days on your Personalizer resource in the Azure portal.
+
+## Research behind Personalizer
+
+Personalizer is based on cutting-edge science and research in the area of [Reinforcement Learning](concepts-reinforcement-learning.md) including papers, research activities, and ongoing areas of exploration in Microsoft Research.
+
+## Next steps
+
+Learn about [top scenarios](where-can-you-use-personalizer.md) for Personalizer
ai-services How To Create Resource https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/personalizer/how-to-create-resource.md
+
+ Title: Create Personalizer resource
+description: In this article, learn how to create a personalizer resource in the Azure portal for each feedback loop.
++
+ms.
+++ Last updated : 03/26/2020 +++
+# Create a Personalizer resource
+
+A Personalizer resource is the same thing as a Personalizer learning loop. A single resource, or learning loop, is created for each subject domain or content area you have. Do not use multiple content areas in the same loop because this will confuse the learning loop and provide poor predictions.
+
+If you want Personalizer to select the best content for more than one content area of a web page, use a different learning loop for each.
++
+## Create a resource in the Azure portal
+
+Create a Personalizer resource for each feedback loop.
+
+1. Sign in to [Azure portal](https://portal.azure.com/#create/Microsoft.CognitiveServicesPersonalizer). The previous link takes you to the **Create** page for the Personalizer service.
+1. Enter your service name, select a subscription, location, pricing tier, and resource group.
+
+ > [!div class="mx-imgBorder"]
+ > ![Use Azure portal to create Personalizer resource, also called a learning loop.](./media/how-to-create-resource/how-to-create-personalizer-resource-learning-loop.png)
+
+1. Select **Create** to create the resource.
+
+1. After your resource has deployed, select the **Go to Resource** button to go to your Personalizer resource.
+
+1. Select the **Quick start** page for your resource, then copy the values for your endpoint and key. You need both the resource endpoint and key to use the Rank and Reward APIs.
+
+1. Select the **Configuration** page for the new resource to [configure the learning loop](how-to-settings.md).
+
+## Create a resource with the Azure CLI
+
+1. Sign in to the Azure CLI with the following command:
+
+ ```azurecli-interactive
+ az login
+ ```
+
+1. Create a resource group, a logical grouping to manage all Azure resources you intend to use with the Personalizer resource.
++
+ ```azurecli-interactive
+ az group create \
+ --name your-personalizer-resource-group \
+ --location westus2
+ ```
+
+1. Create a new Personalizer resource, _learning loop_, with the following command for an existing resource group.
+
+ ```azurecli-interactive
+ az cognitiveservices account create \
+ --name your-personalizer-learning-loop \
+ --resource-group your-personalizer-resource-group \
+ --kind Personalizer \
+ --sku F0 \
+ --location westus2 \
+ --yes
+ ```
+
+ This returns a JSON object, which includes your **resource endpoint**.
+
+1. Use the following Azure CLI command to get your **resource key**.
+
+ ```azurecli-interactive
+ az cognitiveservices account keys list \
+ --name your-personalizer-learning-loop \
+ --resource-group your-personalizer-resource-group
+ ```
+
+ You need both the resource endpoint and key to use the Rank and Reward APIs.
+
+## Next steps
+
+* [Configure](how-to-settings.md) Personalizer learning loop
ai-services How To Feature Evaluation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/personalizer/how-to-feature-evaluation.md
+
+ Title: Personalizer feature evaluations
+
+description: When you run a Feature Evaluation in your Personalizer resource from the Azure portal, Personalizer creates a report containing Feature Scores, a measure of how influential each feature was to the model during the evaluation period.
++
+ms.
+++ Last updated : 09/22/2022++
+# Evaluate feature importances
+
+You can assess how important each feature was to Personalizer's machine learning model by conducting a _feature evaluation_ on your historical log data. Feature evaluations are useful to:
+
+* Understand which features are most or least important to the model.
+* Brainstorm extra features that may be beneficial to learning, by deriving inspiration from what features are currently important in the model.
+* Identify potentially unimportant or non-useful features that should be considered for further analysis or removal.
+* Troubleshoot common problems and errors that may occur when designing features and sending them to Personalizer. For example, using GUIDs, timestamps, or other features that are generally _sparse_ may be problematic. Learn more about [improving features](concepts-features.md).
+
+## What is a feature evaluation?
+
+Feature evaluations are conducted by training and running a copy of your current model configuration on historically collected log data in a specified time period. Features are ignored one at a time to measure the difference in model performance with and without each feature. Because the feature evaluations are performed on historical data, there's no guarantee that these patterns will be observed in future data. However, these insights may still be relevant to future data if your logged data has captured sufficient variability or non-stationary properties of your data. Your current model's performance isn't affected by running a feature evaluation.
+
+A _feature importance_ score is a measure of the relative impact of the feature on the reward over the evaluation period. Feature importance scores are a number between 0 (least important) and 100 (most important) and are shown in the feature evaluation. Since the evaluation is run over a specific time period, the feature importances can change as additional data is sent to Personalizer and as your users, scenarios, and data change over time.
+
+## Creating a feature evaluation
+
+To obtain feature importance scores, you must create a feature evaluation over a period of logged data to generate a report containing the feature importance scores. This report is viewable in the Azure portal. To create a feature evaluation:
+
+1. Go to the [Azure portal](https://portal.azure.com) website
+1. Select your Personalizer resource
+1. Select the _Monitor_ section from the side navigation pane
+1. Select the _Features_ tab
+1. Select "Create report" and a new screen should appear
+1. Choose a name for your report
+1. Choose _start_ and _end_ times for your evaluation period
+1. Select "Create report"
+
+![Screenshot that shows how to create a Feature Evaluation in your Personalizer resource by clicking on "Monitor" blade, the "Feature" tab, then "Create a report".](media/feature-evaluation/create-report.png)
++
+![Screenshot that shows in the creation window and how to fill in the fields for your report including the name, start date, and end date.](media/feature-evaluation/create-report-window.png)
+
+Next, your report name should appear in the reports table below. Creating a feature evaluation is a long-running process, where the time to completion depends on the volume of data sent to Personalizer during the evaluation period. While the report is being generated, the _Status_ column will indicate "Running" for your evaluation, and will update to "Succeeded" once completed. Check back periodically to see if your evaluation has finished.
+
+You can run multiple feature evaluations over various periods of time that your Personalizer resource has log data. Make sure that your [data retention period](how-to-settings.md#data-retention) is set sufficiently long to enable you to perform evaluations over older data.
+
+## Interpreting feature importance scores
+
+### Features with a high importance score
+
+Features with higher importance scores were more influential to the model during the evaluation period as compared to the other features. Important features can provide inspiration for designing additional features to be included in the model. For example, if you see the context features "IsWeekend" or "IsWeekday" have high importance for grocery shopping, it may be the case that holidays or long-weekends may also be important factors, so you may want to consider adding features that capture this information.
+
+### Features with a low importance score
+
+Features with low importance scores are good candidates for further analysis. Not all low-scoring features are necessarily _bad_ or not useful, as low scores can occur for one or more of several reasons. The list below can help you get started with analyzing why your features may have low scores:
+
+* The feature was rarely observed in the data during the evaluation period.
+  * If the number of occurrences of this feature is low in comparison to other features, this may indicate that the feature wasn't present often enough for the model to determine whether it's valuable.
+* The feature values didn't have a lot of diversity or variation.
+  * If the number of unique values for this feature is lower than you would expect, this may indicate that the feature didn't vary much during the evaluation period and won't provide significant insight.
+
+* The feature values were too noisy (random), or too distinct, and provided little value.
+ * Check the _Number of unique values_ in your feature evaluation. If the number of unique values for this feature is higher than you expected, or high in comparison to other features, this may indicate that the feature was too noisy during the evaluation period.
+* There's a data or formatting issue.
+ * Check to make sure the features are formatted and sent to Personalizer in the way you expect.
+* The feature may not be valuable to model learning and performance if the feature score is low and the reasons above do not apply.
+ * Consider removing the feature as it's not helping your model maximize the average reward.
+
+Removing features with low importance scores can help speed up model training by reducing the amount of data needed to learn. It can also potentially improve the performance of the model. However, this isn't guaranteed and further analysis may be needed. [Learn more about designing context and action features.](concepts-features.md)
+
+### Common issues and steps to improve features
+
+- **Sending features with high cardinality.** Features with high cardinality are those that have many distinct values that are not likely to repeat over many events. For example, personal information specific to one individual (such as name, phone number, credit card number, IP address) shouldn't be used with Personalizer.
+
+- **Sending user IDs.** With large numbers of users, it's unlikely that this information is relevant to Personalizer learning to maximize the average reward score. Sending user IDs (even if not personal information) will likely add more noise to the model and isn't recommended.
+
+- **Features are too sparse. Values are distinct and rarely occur more than a few times**. Precise timestamps down to the second can be very sparse. They can be made denser (and therefore, more effective) by grouping times into "morning", "midday", or "afternoon", for example.
+
+Location information also typically benefits from creating broader classifications. For example, a latitude-longitude coordinate such as Lat: 47.67402° N, Long: 122.12154° W is too precise and forces the model to learn latitude and longitude as distinct dimensions. When you're trying to personalize based on location information, it helps to group location information into larger sectors. An easy way to do that is to choose an appropriate rounding precision for the lat-long numbers, and combine latitude and longitude into "areas" by making them one string. For example, a good way to represent Lat: 47.67402° N, Long: 122.12154° W in regions approximately a few kilometers wide would be "location":"47.7, -122.1".
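+
+For instance, one hypothetical way to turn a precise coordinate into a coarser "area" feature is to round before concatenating (a sketch, not a prescribed method):
+
+```python
+def location_feature(lat: float, lon: float, precision: int = 1) -> dict:
+    """Group precise coordinates into a coarser 'area' string feature (illustrative only)."""
+    return {"location": f"{round(lat, precision)},{round(lon, precision)}"}
+
+# 47.67402 N, 122.12154 W becomes a roughly 10 km-wide bucket instead of a unique point.
+print(location_feature(47.67402, -122.12154))   # {'location': '47.7,-122.1'}
+```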
+
+- **Expand feature sets with extrapolated information**
+You can also get more features by thinking of unexplored attributes that can be derived from information you already have. For example, in a fictitious movie list personalization, is it possible that a weekend vs weekday elicits different behavior from users? Time could be expanded to have a "weekend" or "weekday" attribute. Do national/regional cultural holidays drive attention to certain movie types? For example, a "Halloween" attribute is useful in places where it's relevant. Is it possible that rainy weather has significant impact on the choice of a movie for many people? With time and place, a weather service could provide that information and you can add it as an extra feature.
++
+## Next steps
+
+[Analyze policy performances with an offline evaluation](how-to-offline-evaluation.md) with Personalizer.
+
ai-services How To Inference Explainability https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/personalizer/how-to-inference-explainability.md
+
+ Title: "How-to: Use Inference Explainability"
+
+description: Personalizer can return feature scores in each Rank call to provide insight on what features are important to the model's decision.
++
+ms.
+++ Last updated : 09/20/2022++
+# Inference Explainability
+Personalizer can help you understand which features of a chosen action are the most and least influential to the model during inference. When enabled, inference explainability includes feature scores from the underlying model in the Rank API response, so your application receives this information at the time of inference.
+
+Feature scores empower you to better understand the relationship between features and the decisions made by Personalizer. They can be used to provide insight to your end-users into why a particular recommendation was made, or to analyze whether your model is exhibiting bias toward or against certain contextual settings, users, and actions.
+
+## How do I enable inference explainability?
+
+Setting the service configuration flag `IsInferenceExplainabilityEnabled` in your service configuration enables Personalizer to include feature values and weights in the Rank API response. To update your current service configuration, use the [Service Configuration – Update API](/rest/api/personalizer/1.1preview1/service-configuration/update?tabs=HTTP). In the JSON request body, include your current service configuration and add the additional entry: `"IsInferenceExplainabilityEnabled": true`. If you don't know your current service configuration, you can obtain it from the [Service Configuration – Get API](/rest/api/personalizer/1.1preview1/service-configuration/get?tabs=HTTP).
+
+```JSON
+{
+ "rewardWaitTime": "PT10M",
+ "defaultReward": 0,
+ "rewardAggregation": "earliest",
+ "explorationPercentage": 0.2,
+ "modelExportFrequency": "PT5M",
+ "logMirrorEnabled": true,
+ "logMirrorSasUri": "https://testblob.blob.core.windows.net/container?se=2020-08-13T00%3A00Z&sp=rwl&spr=https&sv=2018-11-09&sr=c&sig=signature",
+ "logRetentionDays": 7,
+ "lastConfigurationEditDate": "0001-01-01T00:00:00Z",
+ "learningMode": "Online",
+ "isAutoOptimizationEnabled": true,
+ "autoOptimizationFrequency": "P7D",
+ "autoOptimizationStartDate": "2019-01-19T00:00:00Z",
+"isInferenceExplainabilityEnabled": true
+}
+```
+
+> [!NOTE]
+> Enabling inference explainability will significantly increase the latency of calls to the Rank API. We recommend experimenting with this capability and measuring the latency in your scenario to see if it satisfies your application's latency requirements.
++
+## How to interpret feature scores?
+Enabling inference explainability will add a collection to the JSON response from the Rank API called *inferenceExplanation*. This contains a list of feature names and values that were submitted in the Rank request, along with feature scores learned by Personalizer's underlying model. The feature scores provide you with insight on how influential each feature was in the model choosing the action.
+
+```JSON
+
+{
+ "ranking": [
+ {
+ "id": "EntertainmentArticle",
+ "probability": 0.8
+ },
+ {
+ "id": "SportsArticle",
+ "probability": 0.15
+ },
+ {
+ "id": "NewsArticle",
+ "probability": 0.05
+ }
+ ],
+ "eventId": "75269AD0-BFEE-4598-8196-C57383D38E10",
+ "rewardActionId": "EntertainmentArticle",
+ "inferenceExplanation": [
+ {
+      "id": "EntertainmentArticle",
+ "features": [
+ {
+ "name": "user.profileType",
+ "score": 3.0
+ },
+ {
+ "name": "user.latLong",
+ "score": -4.3
+ },
+ {
+ "name": "user.profileType^user.latLong",
+ "score" : 12.1
+        }
+      ]
+    }
+  ]
+}
+```
+
+In the example above, three action IDs are returned in the _ranking_ collection along with their respective probability scores. The action with the largest probability is the _best action_ as determined by the model trained on data sent to the Personalizer APIs, which in this case is `"id": "EntertainmentArticle"`. The action ID can be seen again in the _inferenceExplanation_ collection, along with the feature names and scores determined by the model for that action and the features and values sent to the Rank API.
+
+Recall that Personalizer will either return the _best action_ or an _exploratory action_ chosen by the exploration policy. The best action is the one that the model has determined has the highest probability of maximizing the average reward, whereas exploratory actions are chosen among the set of all possible actions provided in the Rank API call. Actions taken during exploration do not leverage the feature scores in determining which action to take, therefore **feature scores for exploratory actions should not be used to gain an understanding of why the action was taken.** [You can learn more about exploration here](concepts-exploration.md).
+
+For the best actions returned by Personalizer, the feature scores can provide general insight where:
+* Larger positive scores provide more support for the model choosing this action.
+* Larger negative scores provide more support for the model not choosing this action.
+* Scores close to zero have a small effect on the decision to choose this action.
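+
+For example, given a parsed Rank response shaped like the sample above, you could sort the explanation for the chosen action to surface its most and least supportive features. This is a sketch; the field names follow the sample response in this article:
+
+```python
+# Sort feature scores for the chosen action, most supportive first (sketch only).
+def top_feature_scores(rank_response: dict) -> list:
+    chosen = rank_response["rewardActionId"]
+    for entry in rank_response.get("inferenceExplanation", []):
+        if entry["id"] == chosen:
+            return sorted(
+                ((f["name"], f["score"]) for f in entry["features"]),
+                key=lambda pair: pair[1],
+                reverse=True,
+            )
+    return []
+```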
+
+## Important considerations for Inference Explainability
+* **Increased latency.** Enabling _Inference Explainability_ will significantly increase the latency of Rank API calls due to processing of the feature information. Run experiments and measure the latency in your scenario to see if it satisfies your application's latency requirements.
+
+* **Correlated Features.** Features that are highly correlated with each other can reduce the utility of feature scores. For example, suppose Feature A is highly correlated with Feature B. It may be that Feature A's score is a large positive value while Feature B's score is a large negative value. In this case, the two features may effectively cancel each other out and have little to no impact on the model. While Personalizer is very robust to highly correlated features, when using _Inference Explainability_, ensure that features sent to Personalizer are not highly correlated.
+* **Default exploration only.** Currently, Inference Explainability supports only the default exploration algorithm.
++
+## Next steps
+
+[Reinforcement learning](concepts-reinforcement-learning.md)
ai-services How To Learning Behavior https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/personalizer/how-to-learning-behavior.md
+
+ Title: Configure learning behavior
+description: Apprentice mode gives you confidence in the Personalizer service and its machine learning capabilities, and provides metrics showing that the service is sent information that can be learned from, without risking online traffic.
++
+ms.
+++ Last updated : 07/26/2022++
+# Configure the Personalizer learning behavior
+
+[Apprentice mode](concept-apprentice-mode.md) gives you trust and confidence in the Personalizer service and its machine learning capabilities, and provides assurance that the service is sent information that can be learned from, without risking online traffic.
+
+## Configure Apprentice mode
+
+1. Sign in to the [Azure portal](https://portal.azure.com), for your Personalizer resource.
+
+2. On the **Setup** page, on the **Model settings** tab, select **Apprentice mode** then select **Save**.
+
+> [!div class="mx-imgBorder"]
+> ![Screenshot of configuring apprentice mode learning behavior in Azure portal](media/settings/configure-learning-behavior-azure-portal.png)
+
+## Changes to the existing application
+
+Your existing application shouldn't change how it currently selects actions to display or how the application determines the value, or **reward**, of that action. The only change to the application might be the order of the actions sent to the Personalizer Rank API. The action your application currently displays is sent as the _first action_ in the action list. The [Rank API](https://westus2.dev.cognitive.microsoft.com/docs/services/personalizer-api/operations/Rank) uses this first action to train your Personalizer model.
+
+### Configure your application to call the Rank API
+
+In order to add Personalizer to your application, you need to call the Rank and Reward APIs.
+
+1. Add the [Rank API](https://westus2.dev.cognitive.microsoft.com/docs/services/personalizer-api/operations/Rank) call after the point in your existing application logic where you determine the list of actions and their features. The first action in the actions list needs to be the action selected by your existing logic.
+
+1. Configure your code to display the action associated with the Rank API response's **Reward Action ID**.
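+
+A minimal sketch of the ordering described in step 1 above, assuming your existing logic has already picked a default action (names here are illustrative):
+
+```python
+# In Apprentice mode, the action your current logic would show must be first in the list.
+def build_actions_for_rank(all_actions: list, default_action_id: str) -> list:
+    """Return the actions with the existing logic's choice moved to the front (sketch only)."""
+    default = [a for a in all_actions if a["id"] == default_action_id]
+    others = [a for a in all_actions if a["id"] != default_action_id]
+    return default + others
+```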
+
+### Configure your application to call Reward API
+
+> [!NOTE]
+> Reward API calls do not affect training while in Apprentice mode. The service learns by matching your application's current logic, or default actions. However, implementing Reward calls at this stage helps ensure a smooth transition to Online mode later on with a simple switch in the Azure portal. Additionally, the rewards will be logged, enabling you to analyze how well the current logic is performing and how much reward is being received.
+
+1. Use your existing business logic to calculate the **reward** of the displayed action. The value needs to be in the range from 0 to 1. Send this reward to Personalizer using the [Reward API](https://westus2.dev.cognitive.microsoft.com/docs/services/personalizer-api/operations/Reward). The reward value isn't expected immediately and can be delayed over a time period, depending on your business logic.
+
+1. If you don't return the reward within the configured **Reward wait time**, the default reward will be logged instead.
+
+## Evaluate Apprentice mode
+
+In the Azure portal, on the **Monitor** page for your Personalizer resource, review the **Matching performance**.
+
+> [!div class="mx-imgBorder"]
+> ![Screenshot of reviewing evaluation of apprentice mode learning behavior in Azure portal](media/settings/evaluate-apprentice-mode.png)
+
+Apprentice mode provides the following **evaluation metrics**:
+* **Baseline – average reward**: Average rewards of the application's default (baseline).
+* **Personalizer – average reward**: Average of total rewards Personalizer would potentially have reached.
+* **Reward achievement ratio over most recent 1000 events**: Ratio of Baseline and Personalizer reward, normalized over the most recent 1000 events.
+
+## Switch behavior to Online mode
+
+When you determine that Personalizer's reward achievement ratio reaches a rolling average of roughly 75-85%, the model is ready to switch to Online mode.
+
+In the Azure portal for your Personalizer resource, on the **Setup** page, on the **Model settings** tab, select **Online mode**, then select **Save**.
+
+You do not need to make any changes to the Rank and Reward API calls.
+
+## Next steps
+
+* [Manage model and learning settings](how-to-manage-model.md)
ai-services How To Manage Model https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/personalizer/how-to-manage-model.md
+
+ Title: Manage model and learning settings - Personalizer
+description: The machine-learned model and learning settings can be exported for backup in your own source control system.
++
+ms.
+++ Last updated : 02/20/2020++
+# How to manage model and learning settings
+
+The machine-learned model and learning settings can be exported for backup in your own source control system.
+
+## Export the Personalizer model
+
+From the **Model and learning settings** page in the **Resource Management** section, review the model creation and last updated dates, and export the current model. You can use the Azure portal or the Personalizer APIs to export a model file for archival purposes.
+
+![Export current Personalizer model](media/settings/export-current-personalizer-model.png)
+
+## Clear data for your learning loop
+
+1. In the Azure portal, for your Personalizer resource, on the **Model and learning settings** page, select **Clear data**.
+1. In order to clear all data, and reset the learning loop to the original state, select all three check boxes.
+
+ ![In Azure portal, clear data from Personalizer resource.](./media/settings/clear-data-from-personalizer-resource.png)
+
+ |Value|Purpose|
+ |--|--|
+ |Logged personalization and reward data.|This logging data is used in offline evaluations. Clear the data if you're resetting your resource.|
+    |Reset the Personalizer model.|This model changes on every retraining. This frequency of training is specified in the **Model update frequency** setting on the **Configuration** page. |
+ |Set the learning policy to default.|If you've changed the learning policy as part of an offline evaluation, this resets to the original learning policy.|
+
+1. Select **Clear selected data** to begin the clearing process. Status is reported in Azure notifications, in the top-right navigation.
+
+## Import a new learning policy
+
+The [learning policy](concept-active-learning.md#understand-learning-policy-settings) settings determine the _hyperparameters_ of the model training. Perform an [offline evaluation](how-to-offline-evaluation.md) to find a new learning policy.
+
+1. Open the [Azure portal](https://portal.azure.com), and select your Personalizer resource.
+1. Select **Model and learning settings** in the **Resource Management** section.
+1. For **Import learning settings**, select the file you created with the JSON format specified above, then select the **Upload** button.
+
+ Wait for the notification that the learning policy was uploaded successfully.
+
+## Export a learning policy
+
+1. Open the [Azure portal](https://portal.azure.com), and select your Personalizer resource.
+1. Select **Model and learning settings** in the **Resource Management** section.
+1. For **Import learning settings**, select the **Export learning settings** button. This saves the `json` file to your local computer.
+
+## Next steps
+
+[Analyze your learning loop with an offline evaluation](how-to-offline-evaluation.md)
ai-services How To Multi Slot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/personalizer/how-to-multi-slot.md
+
+ Title: How to use multi-slot with Personalizer
+description: Learn how to use multi-slot with Personalizer to improve content recommendations provided by the service.
+++++++ Last updated : 05/24/2021
+zone_pivot_groups: programming-languages-set-six
+ms.devlang: csharp, javascript, python
+++
+# Get started with multi-slot for Azure AI Personalizer
+
+Multi-slot personalization (Preview) allows you to target content in web layouts, carousels, and lists where more than one action (such as a product or piece of content) is shown to your users. With Personalizer multi-slot APIs, you can have the AI models in Personalizer learn what user contexts and products drive certain behaviors, considering and learning from the placement in your user interface. For example, Personalizer may learn that certain products or content drive more clicks as a sidebar or a footer than as a main highlight on a page.
+
+In this guide, you'll learn how to use the Personalizer multi-slot APIs.
++++++++++
+## Next steps
+
+* [Learn more about multi-slot](concept-multi-slot-personalization.md)
ai-services How To Offline Evaluation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/personalizer/how-to-offline-evaluation.md
+
+ Title: How to perform offline evaluation - Personalizer
+
+description: This article will show you how to use offline evaluation to measure effectiveness of your app and analyze your learning loop.
++
+ms.
+++ Last updated : 02/20/2020++
+# Analyze your learning loop with an offline evaluation
+
+Learn how to create an offline evaluation and interpret the results.
+
+Offline Evaluations allow you to measure how effective Personalizer is compared to your application's default behavior over a period of logged (historical) data, and assess how well other model configuration settings may perform for your model.
+
+When you create an offline evaluation, the _Optimization discovery_ option will run offline evaluations over a variety of learning policy values to find one that may improve the performance of your model. You can also provide additional policies to assess in the offline evaluation.
+
+Read about [Offline Evaluations](concepts-offline-evaluation.md) to learn more.
+
+## Prerequisites
+
+* A configured Personalizer resource
+* The Personalizer resource must have a representative amount of logged data - as a ballpark figure, we recommend at least 50,000 events in its logs for meaningful evaluation results. Optionally, you may also have previously exported _learning policy_ files that you wish to test and compare in this evaluation.
+
+## Run an offline evaluation
+
+1. In the [Azure portal](https://azure.microsoft.com/free/cognitive-services), locate your Personalizer resource.
+1. In the Azure portal, go to the **Evaluations** section and select **Create Evaluation**.
+ ![In the Azure portal, go to the **Evaluations** section and select **Create Evaluation**.](./media/offline-evaluation/create-new-offline-evaluation.png)
+1. Fill out the options in the _Create an evaluation_ window:
+
+ * An evaluation name.
+ * Start and end date - these are dates that specify the range of data to use in the evaluation. This data must be present in the logs, as specified in the [Data Retention](how-to-settings.md) value.
+ * Set _Optimization discovery_ to **yes**, if you wish Personalizer to attempt to find more optimal learning policies.
+    * Add learning settings - upload a learning policy file if you wish to evaluate a custom or previously exported policy.
+
+ > [!div class="mx-imgBorder"]
+ > ![Choose offline evaluation settings](./media/offline-evaluation/create-an-evaluation-form.png)
+
+1. Start the Evaluation by selecting **Start evaluation**.
+
+## Review the evaluation results
+
+Evaluations can take a long time to run, depending on the amount of data to process, number of learning policies to compare, and whether an optimization was requested.
+
+1. Once completed, you can select the evaluation from the list of evaluations, then select **Compare the score of your application with other potential learning settings**. Select this feature when you want to see how your current learning policy performs compared to a new policy.
+
+1. Next, review the performance of the [learning policies](concepts-offline-evaluation.md#discovering-the-optimized-learning-policy).
+
+ > [!div class="mx-imgBorder"]
+ > [![Review evaluation results](./media/offline-evaluation/evaluation-results.png)](./media/offline-evaluation/evaluation-results.png#lightbox)
+
+You'll see various learning policies on the chart, along with their estimated average reward, confidence intervals, and options to download or apply a specific policy.
+- "Online" - Personalizer's current policy
+- "Baseline1" - Your application's baseline policy
+- "BaselineRand" - A policy of taking actions at random
+- "Inter-len#" or "Hyper#" - Policies created by Optimization discovery.
+
+Select **Apply** to apply the policy that improves the model best for your data.
++
+## Next steps
+
+* Learn more about [how offline evaluations work](concepts-offline-evaluation.md).
ai-services How To Settings https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/personalizer/how-to-settings.md
+
+ Title: Configure Personalizer
+description: Service configuration includes how the service treats rewards, how often the service explores, how often the model is retrained, and how much data is stored.
++
+ms.
+++ Last updated : 04/29/2020++
+# Configure Personalizer learning loop
+
+Service configuration includes how the service treats rewards, how often the service explores, how often the model is retrained, and how much data is stored.
+
+Configure the learning loop on the **Configuration** page, in the Azure portal for that Personalizer resource.
+
+<a name="configure-service-settings-in-the-azure-portal"></a>
+<a name="configure-reward-settings-for-the-feedback-loop-based-on-use-case"></a>
+
+## Planning configuration changes
+
+Because some configuration changes [reset your model](#settings-that-include-resetting-the-model), you should plan your configuration changes.
+
+If you plan to use [Apprentice mode](concept-apprentice-mode.md), make sure to review your Personalizer configuration before switching to Apprentice mode.
+
+<a name="clear-data-for-your-learning-loop"></a>
+
+## Settings that include resetting the model
+
+The following actions trigger a retraining of the model using the data available from the last 2 days.
+
+* Reward
+* Exploration
+
+To [clear](how-to-manage-model.md) all your data, use the **Model and learning settings** page.
+
+## Configure rewards for the feedback loop
+
+Configure the service for your learning loop's use of rewards. Changes to the following values will reset the current Personalizer model and retrain it with the last 2 days of data.
+
+> [!div class="mx-imgBorder"]
+> ![Configure the reward values for the feedback loop](media/settings/configure-model-reward-settings.png)
+
+|Value|Purpose|
+|--|--|
+|Reward wait time|Sets the length of time during which Personalizer will collect reward values for a Rank call, starting from the moment the Rank call happens. This value is set by asking: "How long should Personalizer wait for reward calls?" Any reward arriving after this window will be logged but not used for learning.|
+|Default reward|If no reward call is received by Personalizer during the Reward Wait Time window associated to a Rank call, Personalizer will assign the Default Reward. By default, and in most scenarios, the Default Reward is zero (0).|
+|Reward aggregation|If multiple rewards are received for the same Rank API call, this aggregation method is used: **sum** or **earliest**. Earliest picks the earliest score received and discards the rest. This is useful if you want a unique reward among possibly duplicate calls. |
+
+After changing these values, make sure to select **Save**.
+
+## Configure exploration to allow the learning loop to adapt
+
+Personalizer is able to discover new patterns and adapt to user behavior changes over time by exploring alternatives instead of using the trained model's prediction. The **Exploration** value determines what percentage of Rank calls are answered with exploration.
+
+Changes to this value will reset the current Personalizer model and retrain it with the last 2 days of data.
+
+![The exploration value determines what percentage of Rank calls are answered with exploration](media/settings/configure-exploration-setting.png)
+
+After changing this value, make sure to select **Save**.
+
+<a name="model-update-frequency"></a>
+
+## Configure model update frequency for model training
+
+The **Model update frequency** sets how often the model is trained.
+
+|Frequency setting|Purpose|
+|--|--|
+|1 minute|One-minute update frequencies are useful when **debugging** an application's code using Personalizer, doing demos, or interactively testing machine learning aspects.|
+|15 minutes|High model update frequencies are useful for situations where you want to **closely track changes** in user behaviors. Examples include sites that run on live news, viral content, or live product bidding. You could use a 15-minute frequency in these scenarios. |
+|1 hour|For most use cases, a lower update frequency is effective.|
+
+![Model update frequency sets how often a new Personalizer model is retrained.](media/settings/configure-model-update-frequency-settings-15-minutes.png)
+
+After changing this value, make sure to select **Save**.
+
+## Data retention
+
+**Data retention period** sets how many days Personalizer keeps data logs. Past data logs are required to perform [offline evaluations](concepts-offline-evaluation.md), which are used to measure the effectiveness of Personalizer and optimize Learning Policy.
+
+After changing this value, make sure to select **Save**.
+++
+## Next steps
+
+[Learn how to manage your model](how-to-manage-model.md)
ai-services How To Thick Client https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/personalizer/how-to-thick-client.md
+
+ Title: How to use local inference with the Personalizer SDK
+description: Learn how to use local inference to improve latency.
+++++++ Last updated : 09/06/2022++
+# Get started with the local inference SDK for Azure AI Personalizer
+
+The Personalizer local inference SDK (Preview) downloads the Personalizer model locally, and thus significantly reduces the latency of Rank calls by eliminating network calls. Every minute the client will download the most recent model in the background and use it for inference.
+
+In this guide, you'll learn how to use the Personalizer local inference SDK.
+
ai-services Quickstart Personalizer Sdk https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/personalizer/quickstart-personalizer-sdk.md
+
+ Title: "Quickstart: Create and use learning loop with client SDK - Personalizer"
+description: This quickstart shows you how to create and manage a Personalizer learning loop using the client library.
++
+ms.
+++ Last updated : 02/02/2023
+ms.devlang: csharp, javascript, python
+
+keywords: personalizer, Azure AI Personalizer, machine learning
+zone_pivot_groups: programming-languages-set-six
++
+# Quickstart: Personalizer client library
+
+Get started with the Azure AI Personalizer client libraries to set up a basic learning loop. A learning loop is a system of decisions and feedback: an application requests a decision ranking from the service, then it uses the top-ranked choice and calculates a reward score from the outcome. It returns the reward score to the service. Over time, Personalizer uses AI algorithms to make better decisions for any given context. Follow these steps to set up a sample application.
+
+## Example scenario
+
+In this quickstart, a grocery e-retailer wants to increase revenue by showing relevant and personalized products to each customer on its website. On the main page, there's a "Featured Product" section that displays a prepared meal product to prospective customers. The e-retailer would like to determine how to show the right product to the right customer in order to maximize the likelihood of a purchase.
+
+The Personalizer service solves this problem in an automated, scalable, and adaptable way using reinforcement learning. You'll learn how to create _actions_ and their features, _context features_, and _reward scores_. You'll use the Personalizer client library to make calls to the [Rank and Reward APIs](what-is-personalizer.md#rank-and-reward-apis).
++++
+## Download the trained model
+
+If you'd like to download a Personalizer model that has been trained on 5,000 events from the example above, visit the [Personalizer Samples repository](https://github.com/Azure-Samples/cognitive-services-personalizer-samples/tree/master/quickstarts) and download the _Personalizer_QuickStart_Model.zip_ file. Then go to your Personalizer resource in the Azure portal, go to the **Setup** page and the **Import/export** tab, and import the file.
+
+## Clean up resources
+
+To clean up your Azure AI services subscription, you can delete the resource or delete the resource group, which will delete any associated resources.
+
+* [Portal](../multi-service-resource.md?pivots=azportal#clean-up-resources)
+* [Azure CLI](../multi-service-resource.md?pivots=azcli#clean-up-resources)
+
+## Next steps
+
+> [!div class="nextstepaction"]
+> [How Personalizer works](how-personalizer-works.md)
+> [Where to use Personalizer](where-can-you-use-personalizer.md)
ai-services Responsible Characteristics And Limitations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/personalizer/responsible-characteristics-and-limitations.md
+
+ Title: Characteristics and limitations of Personalizer
+
+description: Characteristics and limitations of Personalizer
+++++ Last updated : 05/23/2022++++
+# Characteristics and limitations of Personalizer
+
+Azure AI Personalizer can work in many scenarios. To understand where you can apply Personalizer, make sure the requirements of your scenario meet the [expectations for Personalizer to work](where-can-you-use-personalizer.md#expectations-required-to-use-personalizer). To understand whether Personalizer should be used and how to integrate it into your applications, see [Use Cases for Personalizer](responsible-use-cases.md). You'll find criteria and guidance on choosing use cases, designing features, and reward functions for your uses of Personalizer.
+
+Before you read this article, it's helpful to understand some background information about [how Personalizer works](how-personalizer-works.md).
++
+## Select features for Personalizer
+
+Personalizing content depends on having useful information about the content and the user. For some applications and industries, some user features can be directly or indirectly considered discriminatory and potentially illegal. See the [Personalizer integration and responsible use guidelines](responsible-guidance-integration.md) on assessing features to use with Personalizer.
++
+## Computing rewards for Personalizer
+
+Personalizer learns to improve action choices based on the reward score provided by your application business logic.
+A well-built reward score will act as a short-term proxy to a business goal that's tied to an organization's mission.
+For example, rewarding on clicks will make Personalizer seek clicks at the expense of everything else, even if what's clicked is distracting to the user or not tied to a business outcome.
+In contrast, a news site might want to set rewards tied to something more meaningful than clicks, such as "Did the user spend enough time to read the content?" or "Did the user click relevant articles or references?" With Personalizer, it's easy to tie metrics closely to rewards. However, you will need to be careful not to confound short-term user engagement with desired outcomes.
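+
+As a purely hypothetical illustration, a news site could blend a click signal with dwell time instead of rewarding clicks alone (the weights and cap here are arbitrary assumptions):
+
+```python
+def reward_score(clicked: bool, seconds_on_article: float) -> float:
+    """Hypothetical reward: a click matters, but sustained reading matters more."""
+    if not clicked:
+        return 0.0
+    read_component = min(seconds_on_article / 120.0, 1.0)   # cap credit at 2 minutes of reading
+    return 0.3 + 0.7 * read_component                       # always within [0, 1]
+```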
++
+## Unintended consequences from reward scores
+
+Even if built with the best intentions, reward scores might create unexpected consequences or unintended results because of how Personalizer ranks content.
+
+Consider the following examples:
+
+- Rewarding video content personalization on the percentage of the video length watched will probably tend to rank shorter videos higher than longer videos.
+- Rewarding social media shares, without sentiment analysis of how it's shared or the content itself, might lead to ranking offensive, unmoderated, or inflammatory content. This type of content tends to incite a lot of engagement but is often damaging.
+- Rewarding the action on user interface elements that users don't expect to change might interfere with the usability and predictability of the user interface. For example, buttons that change location or purpose without warning might make it harder for certain groups of users to stay productive.
+
+Implement these best practices:
+
+- Run offline experiments with your system by using different reward approaches to understand impact and side effects.
+- Evaluate your reward functions, and ask yourself how a naïve person might alter its interpretation, which may result in unintentional or undesirable outcomes.
+- Archive information and assets, such as models, learning policies, and other data, that Personalizer uses to function, so that results can be reproducible.
++
+## General guidelines to understand and improve performance
+
+Because Personalizer is based on Reinforcement Learning and learns from rewards to make better choices over time, performance isn't measured in traditional supervised learning terms used in classifiers, such as precision and recall. The performance of Personalizer is directly measured as the sum of reward scores it receives from your application via the Reward API.
+
+When you use Personalizer, the product user interface in the Azure portal provides performance information so you can monitor and act on it. The performance can be seen in the following ways:
+
+- If Personalizer is in Online Learning mode, you can perform [offline evaluations](concepts-offline-evaluation.md).
+- If Personalizer is in [Apprentice mode](concept-apprentice-mode.md), you can see the performance metrics (events imitated and rewards imitated) in the Evaluation pane in the Azure portal.
+
+We recommend you perform frequent offline evaluations to maintain oversight. This task will help you monitor trends and ensure effectiveness. For example, you could decide to temporarily put Personalizer in Apprentice Mode if reward performance has a dip.
+
+### Personalizer performance estimates shown in Offline Evaluations: Limitations
+
+We define the "performance" of Personalizer as the total rewards it obtains during use. Personalizer performance estimates shown in Offline Evaluations are computed instead of measured. It is important to understand the limitations of these estimates:
+
+- The estimates are based on past data, so future performance may vary as the world and your users change.
+- The estimates for baseline performance are computed probabilistically. For this reason, the confidence band for the baseline average reward is important. The estimate will get more precise with more events. If you use a smaller number of actions in each Rank call, the performance estimate may increase in confidence, as there is a higher probability that Personalizer may choose any one of them (including the baseline action) for every event.
+- Personalizer constantly trains a model in near real time to improve the actions chosen for each event, and as a result, it will affect the total rewards obtained. The model performance will vary over time, depending on the recent past training data.
+- Exploration and action choice are stochastic processes guided by the Personalizer model. The random numbers used for these stochastic processes are seeded from the Event Id. To ensure reproducibility of explore-exploit and other stochastic processes, use the same Event Id.
+- Online performance may be capped by [exploration](concepts-exploration.md). Lowering exploration settings will limit how much information is harvested to stay on top of changing trends and usage patterns, so the balance depends on each use case. Some use cases merit starting off with higher exploration settings and reducing them over time (e.g., start with 30% and reduce to 10%).
++
+### Check existing models that might accidentally bias Personalizer
+
+Existing recommendations, customer segmentation, and propensity model outputs can be used by your application as inputs to Personalizer. Personalizer learns to disregard features that don't contribute to rewards. Review and evaluate any propensity models to determine if they're good at predicting rewards and contain strong biases that might generate harm as a side effect. For example, look for recommendations that might be based on harmful stereotypes. Consider using tools such as [FairLearn](https://fairlearn.org/) to facilitate the process.
++
+## Proactive assessments during your project lifecycle
+
+Consider creating methods for team members, users, and business owners to report concerns regarding responsible use and a process that prioritizes their resolution. Consider treating tasks for responsible use just like other crosscutting tasks in the application lifecycle, such as tasks related to user experience, security, or DevOps. Tasks related to responsible use and their requirements shouldn't be afterthoughts. Responsible use should be discussed and implemented throughout the application lifecycle.
++
+## Next steps
+
+- [Responsible use and integration](responsible-guidance-integration.md)
+- [Offline evaluations](concepts-offline-evaluation.md)
+- [Features for context and actions](concepts-features.md)
ai-services Responsible Data And Privacy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/personalizer/responsible-data-and-privacy.md
+
+ Title: Data and privacy for Personalizer
+
+description: Data and privacy for Personalizer
+++++ Last updated : 05/23/2022+++
+# Data and privacy for Personalizer
+
+This article provides information about what data Azure AI Personalizer uses to work, how it processes that data, and how you can control that data. It assumes basic familiarity with [what Personalizer is](what-is-personalizer.md) and [how Personalizer works](how-personalizer-works.md). Specific terms are defined in [Personalizer terminology](terminology.md).
++
+## What data does Personalizer process?
+
+Personalizer processes the following types of data:
+- **Context features and Action features**: Your application sends information about users, and the products or content to personalize, in aggregated form. This data is sent to Personalizer in each Rank API call in arguments for Context and Actions. You decide what to send to the API and how to aggregate it. The data is expressed as attributes or features. You provide information about your users, such as their device and their environment, as Context features. You shouldn't send features specific to a user like a phone number or email or User IDs. Action features include information about your content and product, such as movie genre or product price. For more information, see [Features for Actions and Context](concepts-features.md).
+- **Reward information**: A reward score (a number between 0 and 1) indicates how well the user interaction resulting from the personalization choice mapped to a business goal. For example, an event might get a reward of "1" if a recommended article was clicked on. For more information, see [Rewards](concept-rewards.md).
+
+To understand more about what information you typically use with Personalizer, see [Features are information about Actions and Context](concepts-features.md).
+
+> [!TIP]
+> You decide which features to use, how to aggregate them, and where the information comes from when you call the Personalizer Rank API in your application. You also determine how to create reward scores.
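+
+For example, here's a minimal sketch of the kind of client-side business logic that might turn a user interaction into a reward score. The interaction fields and thresholds are hypothetical; Personalizer only ever receives the final number.
+
+```python
+# Hypothetical business logic mapping a user interaction to a reward score
+# between 0 and 1. How you compute the score is entirely up to your application.
+def compute_reward(clicked, purchased):
+    if purchased:
+        return 1.0   # strongest signal that the choice met the business goal
+    if clicked:
+        return 0.5   # partial success
+    return 0.0       # no engagement
+
+print(compute_reward(clicked=True, purchased=False))  # 0.5
+```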
+
+## How does Personalizer process data?
+
+The following diagram illustrates how your data is processed.
+
+![Diagram that shows how Personalizer processes data.](media/how-personalizer-works/personalization-how-it-works.png)
+
+Personalizer processes data as follows:
+
+1. Personalizer receives data each time the application calls the Rank API for a personalization event. The data is sent via the arguments for the Context and Actions.
+
+2. Personalizer uses the information in the Context and Actions, its internal AI models, and service configuration to return the rank response for the ID of the action to use. The contents of the Context and Actions are stored for no more than 48 hours in transient caches with the EventID used or generated in the Rank API.
+3. The application then calls the Reward API with one or more reward scores. This information is also stored in transient caches and matched with the Actions and Context information.
+4. After the rank and reward information for events is correlated, it's removed from transient caches and placed in more permanent storage. It remains in permanent storage until the number of days specified in the Data Retention setting has gone by, at which time the information is deleted. If you choose not to specify a number of days in the Data Retention setting, this data will be saved as long as the Personalizer Azure Resource is not deleted or until you choose to Clear Data via the UI or APIs. You can change the Data Retention setting at any time.
+5. Personalizer continuously trains internal Personalizer AI models specific to this Personalizer loop by using the data in the permanent storage and machine learning configuration parameters in [Learning settings](concept-active-learning.md).
+6. Personalizer creates [offline evaluations](concepts-offline-evaluation.md) either automatically or on demand.
+Offline evaluations contain a report of rewards obtained by Personalizer models during a past time period. An offline evaluation embeds the models active at the time of their creation, and the learning settings used to create them, as well as a historical aggregate of average reward per event for that time window. Evaluations also include [feature importance](how-to-feature-evaluation.md), which is a list of features observed in the time period, and their relative importance in the model.
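+
+As an illustration of steps 1 through 3, here's a minimal Python sketch of one Rank call and its matching Reward call. It uses the v1.0 REST routes that also appear in the notebook tutorial later in this update; the resource name, key, features, actions, and reward value are placeholders.
+
+```python
+import uuid
+import requests
+
+# Placeholder resource values - replace with your own Personalizer resource.
+base_url = "https://<your-resource-name>.cognitiveservices.azure.com/"
+headers = {"Ocp-Apim-Subscription-Key": "<your-resource-key>",
+           "Content-Type": "application/json"}
+
+event_id = uuid.uuid4().hex
+
+# Steps 1-2: send Context and Actions, receive the ID of the action to use.
+rank_body = {
+    "eventId": event_id,
+    "contextFeatures": [{"timeofday": "Morning", "weather": "Sunny"}],
+    "actions": [
+        {"id": "Latte", "features": [{"type": "hot", "roast": "dark"}]},
+        {"id": "Cold brew", "features": [{"type": "cold", "roast": "light"}]},
+    ],
+}
+rank_response = requests.post(base_url + "personalizer/v1.0/rank",
+                              headers=headers, json=rank_body)
+chosen_action = rank_response.json()["rewardActionId"]
+print("Show this action to the user:", chosen_action)
+
+# Step 3: report how well the chosen action worked, as a score from 0 to 1.
+requests.post(base_url + "personalizer/v1.0/events/" + event_id + "/reward",
+              headers=headers, json={"value": 1.0})
+```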
++
+### Independence of Personalizer loops
+
+Each Personalizer loop is separate and independent from others, as follows:
+
+- **No external data augmentation**: Each Personalizer loop only uses the data supplied to it by you via Rank and Reward API calls to train models. Personalizer doesn't use any additional information from any origin, such as other Personalizer loops in your own Azure subscription, Microsoft, third-party sources or subprocessors.
+- **No data, model, or information sharing**: A Personalizer loop won't share information about events, features, and models with any other Personalizer loop in your subscription, Microsoft, third parties or subprocessors.
++
+## How is data retained and what customer controls are available?
+
+Personalizer retains different types of data in different ways and provides the following controls for each.
++
+### Personalizer rank and reward data
+
+Personalizer stores the features about Actions and Context sent via rank and reward calls for the number of days specified in configuration under Data Retention.
+To control this data retention, you can:
+
+1. Specify the number of days to retain log storage in the [Azure portal for the Personalizer resource](how-to-settings.md) under **Configuration** > **Data Retention**, or via the API (a minimal sketch follows at the end of this section). The default **Data Retention** setting is seven days. Personalizer deletes all Rank and Reward data older than this number of days automatically.
+
+2. Clear data for logged personalization and reward data in the Azure portal under **Model and learning settings** > **Clear data** > **Logged personalization and reward data** or via the API.
+
+3. Delete the Personalizer loop from your subscription in the Azure portal or via Azure resource management APIs.
+
+You can't access past data from Rank and Reward API calls in the Personalizer resource directly. If you want to see all the data that's being saved, configure log mirroring to create a copy of this data on an Azure Blob Storage resource you've created and are responsible for managing.
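+
+Here's a minimal sketch of reading and updating the **Data Retention** setting over REST. It assumes the v1.0 service configuration endpoint (the same one the notebook tutorial reads later in this update) also accepts an update with the `logRetentionDays` field; the resource values are placeholders.
+
+```python
+import requests
+
+# Placeholder resource values.
+base_url = "https://<your-resource-name>.cognitiveservices.azure.com/"
+config_url = base_url + "personalizer/v1.0/configurations/service"
+headers = {"Ocp-Apim-Subscription-Key": "<your-resource-key>",
+           "Content-Type": "application/json"}
+
+# Read the current configuration, which includes logRetentionDays.
+config = requests.get(config_url, headers=headers).json()
+print(config)
+
+# Assumption: the same endpoint accepts a PUT with the updated settings.
+config["logRetentionDays"] = 7   # keep Rank and Reward data for seven days
+requests.put(config_url, headers=headers, json=config)
+```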
++
+### Personalizer transient cache
+
+Personalizer stores partial data about an event separate from rank and reward calls in transient caches. Events are automatically purged from the transient cache 48 hours from the time the event occurred.
+
+To delete transient data, you can:
+
+1. Clear data for logged personalization and reward data in the Azure portal under **Model and learning settings** > **Clear data** or via the API.
+
+2. Delete the Personalizer loop from your subscription in the Azure portal or via Azure resource management APIs.
++
+### Personalizer models and learning settings
+
+A Personalizer loop trains models with data from Rank and Reward API calls, driven by the hyperparameters and configuration specified in **Model and learning settings** in the Azure portal. Models are volatile. They're constantly changing and being trained on additional data in near real time. Personalizer doesn't automatically save older models and keeps overwriting them with the latest models. For more information, see ([How to manage models and learning settings](how-to-manage-model.md)). To clear models and learning settings:
+
+1. Reset them in the Azure portal under **Model and learning settings** > **Clear data** or via the API.
+
+2. Delete the Personalizer loop from your subscription in the Azure portal or via Azure resource management APIs.
++
+### Personalizer evaluation reports
+
+Personalizer also retains the information generated in [offline evaluations](concepts-offline-evaluation.md) for reports.
+
+To delete offline evaluation reports, you can:
+
+1. Go to the Personalizer loop in the Azure portal. Go to **Evaluations** and delete the relevant evaluation.
+
+2. Delete evaluations via the Evaluations API (a minimal sketch follows this list).
+
+3. Delete the Personalizer loop from your subscription in the Azure portal or via Azure resource management APIs.
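+
+Here's a minimal sketch of deleting evaluation reports over REST. The evaluation routes shown here follow the v1.0 URL pattern of the other endpoints in this update and should be treated as an assumption; the resource values are placeholders.
+
+```python
+import requests
+
+# Placeholder resource values.
+base_url = "https://<your-resource-name>.cognitiveservices.azure.com/"
+headers = {"Ocp-Apim-Subscription-Key": "<your-resource-key>"}
+
+# Assumption: evaluations are listed and deleted under /personalizer/v1.0/evaluations.
+evaluations = requests.get(base_url + "personalizer/v1.0/evaluations",
+                           headers=headers).json()
+
+for evaluation in evaluations:
+    requests.delete(base_url + "personalizer/v1.0/evaluations/" + evaluation["id"],
+                    headers=headers)
+```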
++
+### Further storage considerations
+
+- **Customer managed keys**: Customers can configure the service to encrypt data at rest with their own managed keys. This second layer of encryption is on top of Microsoft's own encryption.
+- **Geography**: In all cases, the incoming data, models, and evaluations are processed and stored in the same geography where the Personalizer resource was created.
+
+Also see:
+
+- [How to manage model and learning settings](how-to-manage-model.md)
+- [Configure Personalizer learning loop](how-to-settings.md)
++
+## Next steps
+
+- [See Responsible use guidelines for Personalizer](responsible-use-cases.md).
+
+To learn more about Microsoft's privacy and security commitments, see the [Microsoft Trust Center](https://www.microsoft.com/trust-center).
ai-services Responsible Guidance Integration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/personalizer/responsible-guidance-integration.md
+
+ Title: Guidance for integration and responsible use of Personalizer
+
+description: Guidance for integration and responsible use of Personalizer
+++++ Last updated : 05/23/2022++++
+# Guidance for integration and responsible use of Personalizer
+
+Microsoft works to help customers responsibly develop and deploy solutions by using Azure AI Personalizer. Our principled approach upholds personal agency and dignity by considering the AI system's:
+
+- Fairness, reliability, and safety.
+- Privacy and security.
+- Inclusiveness.
+- Transparency.
+- Human accountability.
+
+These considerations reflect our commitment to developing responsible AI.
++
+## General guidelines for integration and responsible use principles
+
+When you get ready to integrate and responsibly use AI-powered products or features, the following activities will help to set you up for success:
+
+- Understand what it can do. Fully assess the potential of Personalizer to understand its capabilities and limitations. Understand how it will perform in your particular scenario and context by thoroughly testing it with real-life conditions and data.
+
+- **Respect an individual's right to privacy**. Only collect data and information from individuals for lawful and justifiable purposes. Only use data and information that you have consent to use for this purpose.
+
+- **Obtain legal review**. Obtain appropriate legal advice to review Personalizer and how you are using it in your solution, particularly if you will use it in sensitive or high-risk applications. Understand what restrictions you might need to work within and your responsibility to resolve any issues that might come up in the future.
+
+- **Have a human in the loop**. Include human oversight as a consistent pattern area to explore. Ensure constant human oversight of the AI-powered product or feature. Maintain the role of humans in decision making. Make sure you can have real-time human intervention in the solution to prevent harm and manage situations when the AI system doesn't perform as expected.
+
+- **Build trust with affected stakeholders**. Communicate the expected benefits and potential risks to affected stakeholders. Help people understand why the data is needed and how the use of the data will lead to their benefit. Describe data handling in an understandable way.
+
+- **Create a customer feedback loop**. Provide a feedback channel that allows users and individuals to report issues with the service after it's deployed. After you've deployed an AI-powered product or feature, it requires ongoing monitoring and improvement. Be ready to implement any feedback and suggestions for improvement. Establish channels to collect questions and concerns from affected stakeholders. People who might be directly or indirectly affected by the system include employees, visitors, and the general public.
+
+- **Feedback**: Seek feedback from a diverse sampling of the community during the development and evaluation process (for example, historically marginalized groups, people with disabilities, and service workers). For more information, see Community jury.
+
+- **User Study**: Any consent or disclosure recommendations should be framed in a user study. Evaluate the first and continuous-use experience with a representative sample of the community to validate that the design choices lead to effective disclosure. Conduct user research with 10-20 community members (affected stakeholders) to evaluate their comprehension of the information and to determine if their expectations are met.
+
+- **Transparency & Explainability:** Consider enabling and using Personalizer's [inference explainability](./concepts-features.md?branch=main) capability to better understand which features play a significant role in Personalizer's decision choice in each Rank call. This capability empowers you to provide your users with transparency regarding how their data played a role in producing the recommended best action. For example, you can give your users a button labeled "Why These Suggestions?" that shows which top features played a role in producing the Personalizer results. This information can also be used to better understand what data attributes about your users, contexts, and actions are working in favor of Personalizer's choice of best action, which are working against it, and which may have little or no effect. This capability can also provide insights about your user segments and help you identify and address potential biases.
+
+- **Adversarial use**: Consider establishing a process to detect and act on malicious manipulation. There are actors that will take advantage of machine learning and AI systems' ability to learn from their environment. With coordinated attacks, they can artificially fake patterns of behavior that shift the data and AI models toward their goals. If your use of Personalizer could influence important choices, make sure you have the appropriate means to detect and mitigate these types of attacks in place.
+
+- **Opt out**: Consider providing a control for users to opt out of receiving personalized recommendations. For these users, the Personalizer Rank API will not be called from your application. Instead, your application can use an alternative mechanism for deciding what action is taken. For example, by opting out of personalized recommendations and choosing the default or baseline action, the user would experience the action that would be taken without Personalizer's recommendation. Alternatively, your application can use recommendations based on aggregate or population-based measures (e.g., now trending, top 10 most popular, etc.).
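+
+As an illustration of the opt-out pattern described above, here's a minimal sketch in which the application only calls Personalizer for users who haven't opted out. The function names and the fallback logic are hypothetical.
+
+```python
+# Hypothetical application logic: respect a user's choice to opt out of
+# personalization by skipping the Rank call entirely.
+def call_personalizer_rank(actions):
+    # Placeholder for the real Rank API call.
+    return actions[0]
+
+def choose_action(user_opted_out, actions):
+    if user_opted_out:
+        # Baseline behavior: no Rank call, no data sent to Personalizer.
+        return actions[0]   # for example, the default or most popular item
+    return call_personalizer_rank(actions)
+
+print(choose_action(user_opted_out=True, actions=["Top 10 most popular", "Personalized pick"]))
+```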
++
+## Your responsibility
+
+All guidelines for responsible implementation build on the foundation that developers and businesses using Personalizer are responsible and accountable for the effects of using these algorithms in society. If you're developing an application that your organization will deploy, you should recognize your role and responsibility for its operation and how it affects people. If you're designing an application to be deployed by a third party, come to a shared understanding of who is ultimately responsible for the behavior of the application. Make sure to document that understanding.
++
+## Questions and feedback
+
+Microsoft is continuously upgrading tools and documents to help you act on these responsibilities. Our team invites you to [provide feedback to Microsoft](mailto:cogsvcs-RL-feedback@microsoft.com?subject%3DPersonalizer%20Responsible%20Use%20Feedback&body%3D%5BPlease%20share%20any%20question%2C%20idea%20or%20concern%5D) if you believe other tools, product features, and documents would help you implement these guidelines for using Personalizer.
++
+## Recommended reading
+- See Microsoft's six principles for the responsible development of AI published in the January 2018 book, [The Future Computed](https://news.microsoft.com/futurecomputed/).
++
+## Next steps
+
+Understand how the Personalizer API receives features: [Features: Action and Context](concepts-features.md)
ai-services Responsible Use Cases https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/personalizer/responsible-use-cases.md
+
+ Title: Transparency note for Personalizer
+
+description: Transparency Note for Personalizer
+++++ Last updated : 05/23/2022+++
+# Use cases for Personalizer
+
+## What is a Transparency Note?
+
+An AI system includes not only the technology, but also the people who will use it, the people who will be affected by it, and the environment in which it is deployed. Creating a system that is fit for its intended purpose requires an understanding of how the technology works, its capabilities and limitations, and how to achieve the best performance.
+
+Microsoft provides *Transparency Notes* to help you understand how our AI technology works. This includes the choices system owners can make that influence system performance and behavior, and the importance of thinking about the whole system, including the technology, the people, and the environment. You can use Transparency Notes when developing or deploying your own system, or share them with the people who will use or be affected by your system.
+
+Transparency Notes are part of a broader effort at Microsoft to put our AI principles into practice. To find out more, see [Microsoft AI Principles](https://www.microsoft.com/ai/responsible-ai).
+
+## Introduction to Personalizer
+
+Azure AI Personalizer is a cloud-based service that helps your applications choose the best content item to show your users. You can use Personalizer to determine what product to suggest to shoppers or to figure out the optimal position for an advertisement. After the content is shown to the user, your application monitors the user's reaction and reports a reward score back to Personalizer. The reward score is used to continuously improve the machine learning model using reinforcement learning. This enhances the ability of Personalizer to select the best content item in subsequent interactions based on the contextual information it receives for each.
+
+For more information, see:
+
+- [What is Personalizer?](what-is-personalizer.md)
+- [Where can you use Personalizer](where-can-you-use-personalizer.md)
+- [How Personalizer works](how-personalizer-works.md)
+
+## Key terms
+
+|Term| Definition|
+|:--|:-|
+|**Learning Loop** | You create a Personalizer resource, called a learning loop, for every part of your application that can benefit from personalization. If you have more than one experience to personalize, create a loop for each. |
+|**Online mode** | The default [learning behavior](terminology.md#learning-behavior) for Personalizer, where your learning loop uses machine learning to build the model that predicts the **top action** for your content. |
+|**Apprentice mode** | A [learning behavior](terminology.md#learning-behavior) that helps warm-start a Personalizer model to train without impacting the application's outcomes and actions. |
+|**Rewards**| A measure of how the user responded to the Rank API's returned reward action ID, as a score between 0 and 1. The 0 to 1 value is set by your business logic, based on how the choice helped achieve your business goals of personalization. The learning loop doesn't store this reward as individual user history. |
+|**Exploration**| The Personalizer service is exploring when, instead of returning the best action, it chooses a different action for the user. The Personalizer service avoids drift, stagnation, and can adapt to ongoing user behavior by exploring. |
+
+For more information, and additional key terms, please refer to the [Personalizer Terminology](terminology.md) and [conceptual documentation](how-personalizer-works.md).
+
+## Example use cases
+
+Some common customer motivations for using Personalizer are to:
+
+- **User engagement**: Capture user interest by choosing content to increase clickthrough, or to prioritize the next best action to improve average revenue. Other mechanisms to increase user engagement might include selecting videos or music in a dynamic channel or playlist.
+- **Content optimization**: Images can be optimized for a product (such as selecting a movie poster from a set of options) to optimize clickthrough, or the UI layout, colors, images, and blurbs can be optimized on a web page to increase conversion and purchase.
+- **Maximize conversions using discounts and coupons**: To get the best balance of margin and conversion choose which discounts the application will provide to users, or decide which product to highlight from the results of a recommendation engine to maximize conversion.
+- **Maximize positive behavior change**: Select which wellness tip question to send in a notification, messaging, or SMS push to maximize positive behavior change.
+- **Increase productivity** in customer service and technical support by highlighting the most relevant next best actions or the appropriate content when users are looking for documents, manuals, or database items.
+
+## Considerations when choosing a use case
+
+- Using a service that learns to personalize content and user interfaces is useful. However, it can also be misapplied if the personalization creates harmful side effects in the real world. Consider how personalization also helps your users achieve their goals.
+- Consider what the negative consequences in the real world might be if Personalizer isn't suggesting particular items because the system is trained with a bias to the behavior patterns of the majority of the system users.
+- Consider situations where the exploration behavior of Personalizer might cause harm.
+- Carefully consider personalizing choices that are consequential or irreversible, and that should not be determined by short-term signals and rewards.
+- Don't provide actions to Personalizer that shouldn't be chosen. For example, inappropriate movies should be filtered out of the actions to personalize if making a recommendation for an anonymous or underage user.
+
+Here are some scenarios where the above guidance will play a role in whether, and how, to apply Personalizer:
+
+- Avoid using Personalizer for ranking offers on specific loan, financial, and insurance products, where personalization features are regulated, based on data the individuals don't know about, can't obtain, or can't dispute, and where choices need years and information "beyond the click" to truly assess how good the recommendations were for the business and the users.
+- Carefully consider personalizing highlights of school courses and education institutions where recommendations without enough exploration might propagate biases and reduce users' awareness of other options.
+- Avoid using Personalizer to synthesize content algorithmically with the goal of influencing opinions in democracy and civic participation, as it is consequential in the long term, and can be manipulative if the user's goal for the visit is to be informed, not influenced.
++
+## Next steps
+
+* [Characteristics and limitations for Personalizer](responsible-characteristics-and-limitations.md)
+* [Where can you use Personalizer?](where-can-you-use-personalizer.md)
ai-services Terminology https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/personalizer/terminology.md
+
+ Title: Terminology - Personalizer
+description: Personalizer uses terminology from reinforcement learning. These terms are used in the Azure portal and the APIs.
++
+ms.
+++ Last updated : 09/16/2022+
+# Personalizer terminology
+
+Personalizer uses terminology from reinforcement learning. These terms are used in the Azure portal and the APIs.
+
+## Conceptual terminology
+
+* **Learning Loop**: You create a Personalizer resource, called a _learning loop_, for every part of your application that can benefit from personalization. If you have more than one experience to personalize, create a loop for each.
+
+* **Model**: A Personalizer model captures all data learned about user behavior, getting training data from the combination of the arguments you send to Rank and Reward calls, and with a training behavior determined by the Learning Policy.
+
+* **Online mode**: The default [learning behavior](#learning-behavior) for Personalizer, where your learning loop uses machine learning to build the model that predicts the **top action** for your content.
+
+* **Apprentice mode**: A [learning behavior](#learning-behavior) that helps warm-start a Personalizer model to train without impacting the application's outcomes and actions.
+
+## Learning behavior
+
+* **Online mode**: Return the best action. Your model will respond to Rank calls with the best action and will use Reward calls to learn and improve its selections over time.
+* **[Apprentice mode](concept-apprentice-mode.md)**: Learn as an apprentice. Your model will learn by observing the behavior of your existing system. Rank calls will always return the application's **default action** (baseline).
+
+## Personalizer configuration
+
+Personalizer is configured from the [Azure portal](https://portal.azure.com).
+
+* **Rewards**: Configure the default values for reward wait time, default reward, and reward aggregation policy.
+
+* **Exploration**: Configure the percentage of Rank calls to use for exploration.
+
+* **Model update frequency**: How often the model is retrained.
+
+* **Data retention**: How many days' worth of data to store. This can impact offline evaluations, which are used to improve your learning loop.
+
+## Use Rank and Reward APIs
+
+* **Rank**: Given the actions with features and the context features, use explore or exploit to return the top action (content item).
+
+ * **Actions**: Actions are the content items, such as products or promotions, to choose from. Personalizer chooses the top action (returned reward action ID) to show to your users via the Rank API.
+
+ * **Context**: To provide a more accurate rank, provide information about your context, for example:
+ * Your user.
+ * The device they are on.
+ * The current time.
+ * Other data about the current situation.
+ * Historical data about the user or context.
+
+ Your specific application may have different context information.
+
+ * **[Features](concepts-features.md)**: A unit of information about a content item or a user context. Make sure to only use features that are aggregated. Do not use specific times, user IDs, or other non-aggregated data as features.
+
+ * An _action feature_ is metadata about the content.
+ * A _context feature_ is metadata about the context in which the content is presented.
+
+* **Exploration**: The Personalizer service is exploring when, instead of returning the best action, it chooses a different action for the user. The Personalizer service avoids drift, stagnation, and can adapt to ongoing user behavior by exploring.
+
+* **Learned Best Action**: The Personalizer service uses the current model to decide the best action based on past data.
+
+* **Experiment Duration**: The amount of time the Personalizer service waits for a reward, starting from the moment the Rank call happened for that event.
+
+* **Inactive Events**: An inactive event is one where you called Rank, but you're not sure the user will ever see the result, due to client application decisions. Inactive events allow you to create and store personalization results, then decide to discard them later without impacting the machine learning model (a minimal sketch follows this list).
++
+* **Reward**: A measure of how the user responded to the Rank API's returned reward action ID, as a score between 0 to 1. The 0 to 1 value is set by your business logic, based on how the choice helped achieve your business goals of personalization. The learning loop doesn't store this reward as individual user history.
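+
+Here's a minimal sketch of an inactive event: the Rank call is made with `deferActivation` set to true, and the event is activated and rewarded only once the application knows the user actually saw the result. The activation route shown here is an assumption based on the v1.0 event URL pattern used elsewhere in this update; the resource values are placeholders.
+
+```python
+import uuid
+import requests
+
+# Placeholder resource values.
+base_url = "https://<your-resource-name>.cognitiveservices.azure.com/"
+headers = {"Ocp-Apim-Subscription-Key": "<your-resource-key>",
+           "Content-Type": "application/json"}
+
+event_id = uuid.uuid4().hex
+
+# Rank call with deferActivation: the event stays inactive until activated.
+rank_body = {
+    "eventId": event_id,
+    "deferActivation": True,
+    "contextFeatures": [{"timeofday": "Evening"}],
+    "actions": [
+        {"id": "GreenTea", "features": [{"type": "hot"}]},
+        {"id": "Rooibos", "features": [{"type": "hot"}]},
+    ],
+}
+requests.post(base_url + "personalizer/v1.0/rank", headers=headers, json=rank_body)
+
+# Later, if the user actually sees the result, activate the event and send a reward.
+# Assumption: activation uses POST /personalizer/v1.0/events/{eventId}/activate.
+requests.post(base_url + "personalizer/v1.0/events/" + event_id + "/activate",
+              headers=headers)
+requests.post(base_url + "personalizer/v1.0/events/" + event_id + "/reward",
+              headers=headers, json={"value": 1.0})
+```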
+
+## Evaluations
+
+### Offline evaluations
+
+* **Evaluation**: An offline evaluation determines the best learning policy for your loop based on your application's data.
+
+* **Learning Policy**: How Personalizer trains a model on every event is determined by parameters that affect how the machine learning algorithm works. A new learning loop starts with a default **Learning Policy**, which can yield moderate performance. When running [Evaluations](concepts-offline-evaluation.md), Personalizer creates new learning policies specifically optimized to the use cases of your loop. Personalizer will perform significantly better with policies optimized for each specific loop, generated during Evaluation. The learning policy is named _learning settings_ on the **Model and learning settings** page for the Personalizer resource in the Azure portal.
+
+### Apprentice mode evaluations
+
+Apprentice mode provides the following **evaluation metrics**:
+* **Baseline – average reward**: Average rewards of the application's default (baseline).
+* **Personalizer – average reward**: Average of total rewards Personalizer would potentially have reached.
+* **Average rolling reward**: Ratio of Baseline and Personalizer reward – normalized over the most recent 1000 events.
+
+## Next steps
+
+* Learn about [ethics and responsible use](responsible-use-cases.md)
ai-services Tutorial Use Azure Notebook Generate Loop Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/personalizer/tutorial-use-azure-notebook-generate-loop-data.md
+
+ Title: "Tutorial: Azure Notebook - Personalizer"
+
+description: This tutorial simulates a Personalizer loop _system_ in an Azure Notebook, which suggests which type of coffee a customer should order. The users and their preferences are stored in a user dataset. Information about the coffee is also available and stored in a coffee dataset.
++
+ms.
+++ Last updated : 04/27/2020+
+#Customer intent: As a Python developer, I want to use Personalizer in an Azure Notebook so that I can understand the end-to-end lifecycle of a Personalizer loop.
++
+# Tutorial: Use Personalizer in Azure Notebook
+
+This tutorial runs a Personalizer loop in an Azure Notebook, demonstrating the end-to-end life cycle of a Personalizer loop.
+
+The loop suggests which type of coffee a customer should order. The users and their preferences are stored in a user dataset. Information about the coffee is stored in a coffee dataset.
+
+## Users and coffee
+
+The notebook, simulating user interaction with a website, selects a random user, time of day, and type of weather from the dataset. A summary of the user information is:
+
+|Customers - context features|Times of Day|Types of weather|
+|--|--|--|
+|Alice<br>Bob<br>Cathy<br>Dave|Morning<br>Afternoon<br>Evening|Sunny<br>Rainy<br>Snowy|
+
+To help Personalizer learn over time, the _system_ also knows details about the coffee selection for each person.
+
+|Coffee - action features|Types of temperature|Places of origin|Types of roast|Organic|
+|--|--|--|--|--|
+|Cappucino|Hot|Kenya|Dark|Organic|
+|Cold brew|Cold|Brazil|Light|Organic|
+|Iced mocha|Cold|Ethiopia|Light|Not organic|
+|Latte|Hot|Brazil|Dark|Not organic|
+
+The **purpose** of the Personalizer loop is to find the best match between users and coffee as often as possible.
+
+The code for this tutorial is available in the [Personalizer Samples GitHub repository](https://github.com/Azure-Samples/cognitive-services-personalizer-samples/tree/master/samples/azurenotebook).
+
+## How the simulation works
+
+At the beginning of the running system, Personalizer's suggestions are only successful about 20% to 30% of the time. This success is indicated by the reward sent back to the Personalizer Reward API, with a score of 1. After some Rank and Reward calls, the system improves.
+
+After the initial requests, run an offline evaluation. This allows Personalizer to review the data and suggest a better learning policy. Apply the new learning policy and run the notebook again with 20% of the previous request count. The loop will perform better with the new learning policy.
+
+## Rank and reward calls
+
+For each of the few thousand calls to the Personalizer service, the Azure Notebook sends the **Rank** request to the REST API:
+
+* A unique ID for the Rank/Request event
+* Context features - A random choice of the user, weather, and time of day - simulating a user on a website or mobile device
+* Actions with Features - _All_ the coffee data - from which Personalizer makes a suggestion
+
+The system receives the request, then compares that prediction with the user's known choice for the same time of day and weather. If the known choice is the same as the predicted choice, the **Reward** of 1 is sent back to Personalizer. Otherwise the reward sent back is 0.
+
+> [!Note]
+> This is a simulation so the algorithm for the reward is simple. In a real-world scenario, the algorithm should use business logic, possibly with weights for various aspects of the customer's experience, to determine the reward score.
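+
+For instance, here's a hedged sketch of what such weighted business logic might look like, where a click, a purchase, and time spent on the page each contribute to the final score. All fields and weights here are hypothetical.
+
+```python
+# Hypothetical weighted reward: combine several aspects of the customer's
+# experience into a single score, clamped to the 0-1 range Personalizer expects.
+def weighted_reward(clicked, purchased, seconds_on_page):
+    score = 0.0
+    if clicked:
+        score += 0.3
+    if purchased:
+        score += 0.6
+    score += min(seconds_on_page / 300.0, 1.0) * 0.1   # up to 0.1 for dwell time
+    return min(score, 1.0)
+
+print(weighted_reward(clicked=True, purchased=False, seconds_on_page=120))  # 0.34
+```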
++
+## Prerequisites
+
+* An [Azure Notebook](https://notebooks.azure.com/) account.
+* An [Azure AI Personalizer resource](https://portal.azure.com/#create/Microsoft.CognitiveServicesPersonalizer).
+ * If you have already used the Personalizer resource, make sure to [clear the data](how-to-settings.md#clear-data-for-your-learning-loop) in the Azure portal for the resource.
+* Upload all the files for [this sample](https://github.com/Azure-Samples/cognitive-services-personalizer-samples/tree/master/samples/azurenotebook) into an Azure Notebook project.
+
+File descriptions:
+
+* [Personalizer.ipynb](https://github.com/Azure-Samples/cognitive-services-personalizer-samples/blob/master/samples/azurenotebook/Personalizer.ipynb) is the Jupyter notebook for this tutorial.
+* [User dataset](https://github.com/Azure-Samples/cognitive-services-personalizer-samples/blob/master/samples/azurenotebook/users.json) is stored in a JSON object.
+* [Coffee dataset](https://github.com/Azure-Samples/cognitive-services-personalizer-samples/blob/master/samples/azurenotebook/coffee.json) is stored in a JSON object.
+* [Example Request JSON](https://github.com/Azure-Samples/cognitive-services-personalizer-samples/blob/master/samples/azurenotebook/example-rankrequest.json) is the expected format for a POST request to the Rank API.
+
+## Configure Personalizer resource
+
+In the Azure portal, configure your [Personalizer resource](https://portal.azure.com/#create/Microsoft.CognitiveServicesPersonalizer) with the **update model frequency** set to 15 seconds and a **reward wait time** of 10 minutes. These values are found on the **[Configuration](how-to-settings.md#configure-service-settings-in-the-azure-portal)** page.
+
+|Setting|Value|
+|--|--|
+|update model frequency|15 seconds|
+|reward wait time|10 minutes|
+
+These values have a very short duration in order to show changes in this tutorial. These values shouldn't be used in a production scenario without validating they achieve your goal with your Personalizer loop.
+
+## Set up the Azure Notebook
+
+1. Change the Kernel to `Python 3.6`.
+1. Open the `Personalizer.ipynb` file.
+
+## Run Notebook cells
+
+Run each executable cell and wait for it to return. You know it is done when the brackets next to the cell display a number instead of a `*`. The following sections explain what each cell does programmatically and what to expect for the output.
+
+### Include the Python modules
+
+Include the required Python modules. The cell has no output.
+
+```python
+# datetime is used by the currentDateTime helper defined later in the notebook
+import datetime
+import json
+import matplotlib.pyplot as plt
+import random
+import requests
+import time
+import uuid
+```
+
+### Set Personalizer resource key and name
+
+From the Azure portal, find your key and endpoint on the **Quickstart** page of your Personalizer resource. Change the value of `<your-resource-name>` to your Personalizer resource's name. Change the value of `<your-resource-key>` to your Personalizer key.
+
+```python
+# Replace 'personalization_base_url' and 'resource_key' with your valid endpoint values.
+personalization_base_url = "https://<your-resource-name>.cognitiveservices.azure.com/"
+resource_key = "<your-resource-key>"
+```
+
+### Print current date and time
+Use this function to note the start and end times of the iterative function, `iterations`.
+
+These cells have no output. The function does output the current date and time when called.
+
+```python
+# Print out current datetime
+def currentDateTime():
+ currentDT = datetime.datetime.now()
+ print (str(currentDT))
+```
+
+### Get the last model update time
+
+When the function, `get_last_updated`, is called, the function prints out the last modified date and time that the model was updated.
+
+These cells have no output. The function does output the last model training date when called.
+
+The function uses a GET REST API to [get model properties](https://westus2.dev.cognitive.microsoft.com/docs/services/personalizer-api/operations/GetModelProperties).
+
+```python
+# initialize variable for model's last modified date
+modelLastModified = ""
+```
+
+```python
+def get_last_updated(currentModifiedDate):
+
+ print('--checking model')
+
+ # get model properties
+ response = requests.get(personalization_model_properties_url, headers = headers, params = None)
+
+ print(response)
+ print(response.json())
+
+ # get lastModifiedTime
+ lastModifiedTime = json.dumps(response.json()["lastModifiedTime"])
+
+ if (currentModifiedDate != lastModifiedTime):
+ currentModifiedDate = lastModifiedTime
+ print(f'--model updated: {lastModifiedTime}')
+```
+
+### Get policy and service configuration
+
+Validate the state of the service with these two REST calls.
+
+These cells have no output. The function does output the service values when called.
+
+```python
+def get_service_settings():
+
+ print('--checking service settings')
+
+ # get learning policy
+ response = requests.get(personalization_model_policy_url, headers = headers, params = None)
+
+ print(response)
+ print(response.json())
+
+ # get service settings
+ response = requests.get(personalization_service_configuration_url, headers = headers, params = None)
+
+ print(response)
+ print(response.json())
+```
+
+### Construct URLs and read JSON data files
+
+This cell
+
+* builds the URLs used in REST calls
+* sets the security header using your Personalizer resource key
+* sets the random seed for the Rank event ID
+* reads in the JSON data files
+* calls the `get_last_updated` function
+* calls the `get_service_settings` function - the learning policy body is abbreviated in the example output
+
+The cell has output from the call to `get_last_updated` and `get_service_settings` functions.
+
+```python
+# build URLs
+personalization_rank_url = personalization_base_url + "personalizer/v1.0/rank"
+personalization_reward_url = personalization_base_url + "personalizer/v1.0/events/" #add "{eventId}/reward"
+personalization_model_properties_url = personalization_base_url + "personalizer/v1.0/model/properties"
+personalization_model_policy_url = personalization_base_url + "personalizer/v1.0/configurations/policy"
+personalization_service_configuration_url = personalization_base_url + "personalizer/v1.0/configurations/service"
+
+headers = {'Ocp-Apim-Subscription-Key' : resource_key, 'Content-Type': 'application/json'}
+
+# context
+users = "users.json"
+
+# action features
+coffee = "coffee.json"
+
+# empty JSON for Rank request
+requestpath = "example-rankrequest.json"
+
+# initialize random
+random.seed(time.time())
+
+userpref = None
+rankactionsjsonobj = None
+actionfeaturesobj = None
+
+with open(users) as handle:
+ userpref = json.loads(handle.read())
+
+with open(coffee) as handle:
+ actionfeaturesobj = json.loads(handle.read())
+
+with open(requestpath) as handle:
+ rankactionsjsonobj = json.loads(handle.read())
+
+get_last_updated(modelLastModified)
+get_service_settings()
+
+print(f'User count {len(userpref)}')
+print(f'Coffee count {len(actionfeaturesobj)}')
+```
+
+Verify that the output's `rewardWaitTime` is set to 10 minutes and `modelExportFrequency` is set to 15 seconds.
+
+```console
+--checking model
+<Response [200]>
+{'creationTime': '0001-01-01T00:00:00+00:00', 'lastModifiedTime': '0001-01-01T00:00:00+00:00'}
+--model updated: "0001-01-01T00:00:00+00:00"
+--checking service settings
+<Response [200]>
+{...learning policy...}
+<Response [200]>
+{'rewardWaitTime': '00:10:00', 'defaultReward': 0.0, 'rewardAggregation': 'earliest', 'explorationPercentage': 0.2, 'modelExportFrequency': '00:00:15', 'logRetentionDays': -1}
+User count 4
+Coffee count 4
+```
+
+### Troubleshooting the first REST call
+
+This previous cell is the first cell that calls out to Personalizer. Make sure the REST status code in the output is `<Response [200]>`. If you get an error, such as 404, but you are sure your resource key and name are correct, reload the notebook.
+
+Make sure the counts of coffee and users are both 4. If you get an error, check that you uploaded all three JSON files.
+
+### Set up metric chart in Azure portal
+
+Later in this tutorial, the long-running process of 10,000 requests is visible from the browser with an updating text box. It may be easier to see this information in a chart, or as a total sum, when the long-running process ends. To view this information, use the metrics provided with the resource. You can create the chart now that you have completed a request to the service, then refresh the chart periodically while the long-running process runs.
+
+1. In the Azure portal, select your Personalizer resource.
+1. In the resource navigation, select **Metrics** underneath Monitoring.
+1. In the chart, select **Add metric**.
+1. The resource and metric namespace are already set. You only need to select the metric of **successful calls** and the aggregation of **sum**.
+1. Change the time filter to the last 4 hours.
+
+ ![Set up metric chart in Azure portal, adding metric for successful calls for the last 4 hours.](./media/tutorial-azure-notebook/metric-chart-setting.png)
+
+ You should see three successful calls in the chart.
+
+### Generate a unique event ID
+
+This function generates a unique ID for each rank call. The ID is used to identify the rank and reward call information. This value could come from a business process such as a web view ID or transaction ID.
+
+The cell has no output. The function does output the unique ID when called.
+
+```python
+def add_event_id(rankjsonobj):
+ eventid = uuid.uuid4().hex
+ rankjsonobj["eventId"] = eventid
+ return eventid
+```
+
+### Get random user, weather, and time of day
+
+This function selects a unique user, weather, and time of day, then adds those items to the JSON object to send to the Rank request.
+
+The cell has no output. When the function is called, it returns the random user's name, random weather, and random time of day.
+
+The list of 4 users and their preferences - only some preferences are shown for brevity:
+
+```json
+{
+ "Alice": {
+ "Sunny": {
+ "Morning": "Cold brew",
+ "Afternoon": "Iced mocha",
+ "Evening": "Cold brew"
+ }...
+ },
+ "Bob": {
+ "Sunny": {
+ "Morning": "Cappucino",
+ "Afternoon": "Iced mocha",
+ "Evening": "Cold brew"
+ }...
+ },
+ "Cathy": {
+ "Sunny": {
+ "Morning": "Latte",
+ "Afternoon": "Cold brew",
+ "Evening": "Cappucino"
+ }...
+ },
+ "Dave": {
+ "Sunny": {
+ "Morning": "Iced mocha",
+ "Afternoon": "Iced mocha",
+ "Evening": "Iced mocha"
+ }...
+ }
+}
+```
+
+```python
+def add_random_user_and_contextfeatures(namesoption, weatheropt, timeofdayopt, rankjsonobj):
+ name = namesoption[random.randint(0,3)]
+ weather = weatheropt[random.randint(0,2)]
+ timeofday = timeofdayopt[random.randint(0,2)]
+ rankjsonobj['contextFeatures'] = [{'timeofday': timeofday, 'weather': weather, 'name': name}]
+ return [name, weather, timeofday]
+```
++
+### Add all coffee data
+
+This function adds the entire list of coffee to the JSON object to send to the Rank request.
+
+The cell has no output. The function does change the `rankjsonobj` when called.
++
+The example of a single coffee's features is:
+
+```json
+{
+    "id": "Cappucino",
+    "features": [
+        {
+            "type": "hot",
+            "origin": "kenya",
+            "organic": "yes",
+            "roast": "dark"
+        }
+    ]
+}
+```
+
+```python
+def add_action_features(rankjsonobj):
+ rankjsonobj["actions"] = actionfeaturesobj
+```
+
+### Compare prediction with known user preference
+
+This function is called after the Rank API is called, for each iteration.
+
+This function compares the user's preference for coffee, based on weather and time of day, with the Personalizer's suggestion for the user for those filters. If the suggestion matches, a score of 1 is returned, otherwise the score is 0. The cell has no output. The function does output the score when called.
+
+```python
+def get_reward_from_simulated_data(name, weather, timeofday, prediction):
+ if(userpref[name][weather][timeofday] == str(prediction)):
+ return 1
+ return 0
+```
+
+### Loop through calls to Rank and Reward
+
+The next cell is the _main_ work of the Notebook, getting a random user, getting the coffee list, sending both to the Rank API. Comparing the prediction with the user's known preferences, then sending the reward back to the Personalizer service.
+
+The loop runs for `num_requests` times. Personalizer needs a few thousand calls to Rank and Reward to create a model.
+
+An example of the JSON sent to the Rank API follows. The list of coffee is not complete, for brevity. You can see the entire JSON for coffee in `coffee.json`.
+
+JSON sent to the Rank API:
+
+```json
+{
+ 'contextFeatures':[
+ {
+ 'timeofday':'Evening',
+ 'weather':'Snowy',
+ 'name':'Alice'
+ }
+ ],
+ 'actions':[
+ {
+ 'id':'Cappucino',
+ 'features':[
+ {
+ 'type':'hot',
+ 'origin':'kenya',
+ 'organic':'yes',
+ 'roast':'dark'
+ }
+ ]
+ }
+ ...rest of coffee list
+ ],
+ 'excludedActions':[
+
+ ],
+ 'eventId':'b5c4ef3e8c434f358382b04be8963f62',
+ 'deferActivation':False
+}
+```
+
+JSON response from the Rank API:
+
+```json
+{
+ 'ranking': [
+ {'id': 'Latte', 'probability': 0.85 },
+ {'id': 'Iced mocha', 'probability': 0.05 },
+ {'id': 'Cappucino', 'probability': 0.05 },
+ {'id': 'Cold brew', 'probability': 0.05 }
+ ],
+ 'eventId': '5001bcfe3bb542a1a238e6d18d57f2d2',
+ 'rewardActionId': 'Latte'
+}
+```
+
+Finally, each loop shows the random selection of user, weather, time of day, and determined reward. The reward of 1 indicates the Personalizer resource selected the correct coffee type for the given user, weather, and time of day.
+
+```console
+1 Alice Rainy Morning Latte 1
+```
+
+The function uses:
+
+* Rank: a POST REST API to [get rank](https://westus2.dev.cognitive.microsoft.com/docs/services/personalizer-api/operations/Rank).
+* Reward: a POST REST API to [report reward](https://westus2.dev.cognitive.microsoft.com/docs/services/personalizer-api/operations/Reward).
+
+```python
+def iterations(n, modelCheck, jsonFormat):
+
+ i = 1
+
+ # default reward value - assumes failed prediction
+ reward = 0
+
+ # Print out dateTime
+ currentDateTime()
+
+ # collect results to aggregate in graph
+ total = 0
+ rewards = []
+ count = []
+
+ # default list of user, weather, time of day
+ namesopt = ['Alice', 'Bob', 'Cathy', 'Dave']
+ weatheropt = ['Sunny', 'Rainy', 'Snowy']
+ timeofdayopt = ['Morning', 'Afternoon', 'Evening']
++
+ while(i <= n):
+
+ # create unique id to associate with an event
+ eventid = add_event_id(jsonFormat)
+
+ # generate a random sample
+ [name, weather, timeofday] = add_random_user_and_contextfeatures(namesopt, weatheropt, timeofdayopt, jsonFormat)
+
+ # add action features to rank
+ add_action_features(jsonFormat)
+
+ # show JSON to send to Rank
+ print('To: ', jsonFormat)
+
+ # choose an action - get prediction from Personalizer
+ response = requests.post(personalization_rank_url, headers = headers, params = None, json = jsonFormat)
+
+ # show Rank prediction
+ print ('From: ',response.json())
+
+ # compare personalization service recommendation with the simulated data to generate a reward value
+ prediction = json.dumps(response.json()["rewardActionId"]).replace('"','')
+ reward = get_reward_from_simulated_data(name, weather, timeofday, prediction)
+
+        # show result for iteration
+        print(f'  {i} {name} {weather} {timeofday} {prediction} {reward}')
+
+ # send the reward to the service
+ response = requests.post(personalization_reward_url + eventid + "/reward", headers = headers, params= None, json = { "value" : reward })
+
+        # keep a running total of correct predictions
+        total = total + reward
+
+        # every N iterations, get the last updated model date and time
+        if(i % modelCheck == 0):
+
+ print("**** 10% of loop found")
+
+ get_last_updated(modelLastModified)
+
+ # aggregate so chart is easier to read
+ if(i % 10 == 0):
+ rewards.append( total)
+ count.append(i)
+ total = 0
+
+ i = i + 1
+
+ # Print out dateTime
+ currentDateTime()
+
+ return [count, rewards]
+```
+
+## Run for 10,000 iterations
+Run the Personalizer loop for 10,000 iterations. This is a long running event. Do not close the browser running the notebook. Refresh the metrics chart in the Azure portal periodically to see the total calls to the service. When you have around 20,000 calls, a rank and reward call for each iteration of the loop, the iterations are done.
+
+```python
+# max iterations
+num_requests = 10000
+
+# check last mod date N% of time - currently 10%
+lastModCheck = int(num_requests * .10)
+
+jsonTemplate = rankactionsjsonobj
+
+# main iterations
+[count, rewards] = iterations(num_requests, lastModCheck, jsonTemplate)
+```
+++
+## Chart results to see improvement
+
+Create a chart from the `count` and `rewards`.
+
+```python
+def createChart(x, y):
+ plt.plot(x, y)
+ plt.xlabel("Batch of rank events")
+ plt.ylabel("Correct recommendations per batch")
+ plt.show()
+```
+
+## Run chart for 10,000 rank requests
+
+Run the `createChart` function.
+
+```python
+createChart(count,rewards)
+```
+
+## Reading the chart
+
+This chart shows the success of the model for the current default learning policy.
+
+![This chart shows the success of the current learning policy for the duration of the test.](./media/tutorial-azure-notebook/azure-notebook-chart-results.png)
++
+The ideal target is that, by the end of the test, the loop averages a success rate that is close to 100 percent minus the exploration percentage.
+
+`100-20=80`
+
+This exploration value is found in the Azure portal, for the Personalizer resource, on the **Configuration** page.
+
+In order to find a better learning policy, based on your data to the Rank API, run an [offline evaluation](how-to-offline-evaluation.md) in the portal for your Personalizer loop.
+
+## Run an offline evaluation
+
+1. In the Azure portal, open the Personalizer resource's **Evaluations** page.
+1. Select **Create Evaluation**.
+1. Enter the required data of evaluation name, and date range for the loop evaluation. The date range should include only the days you are focusing on for your evaluation.
+ ![In the Azure portal, open the Personalizer resource's Evaluations page. Select Create Evaluation. Enter the evaluation name and date range.](./media/tutorial-azure-notebook/create-offline-evaluation.png)
+
+ The purpose of running this offline evaluation is to determine if there is a better learning policy for the features and actions used in this loop. To find that better learning policy, make sure **Optimization Discovery** is turned on.
+
+1. Select **OK** to begin the evaluation.
+1. This **Evaluations** page lists the new evaluation and its current status. Depending on how much data you have, this evaluation can take some time. You can come back to this page after a few minutes to see the results.
+1. When the evaluation is completed, select the evaluation then select **Comparison of different learning policies**. This shows the available learning policies and how they would behave with the data.
+1. Select the top-most learning policy in the table and select **Apply**. This applies the _best_ learning policy to your model and retrains.
+
+## Change update model frequency to 5 minutes
+
+1. In the Azure portal, still on the Personalizer resource, select the **Configuration** page.
+1. Change the **model update frequency** and **reward wait time** to 5 minutes and select **Save**.
+
+Learn more about the [reward wait time](concept-rewards.md#reward-wait-time) and [model update frequency](how-to-settings.md#model-update-frequency).
+
+```python
+#Verify new learning policy and times
+get_service_settings()
+```
+
+Verify that the output's `rewardWaitTime` and `modelExportFrequency` are both set to 5 minutes.
+```console
+--checking model
+<Response [200]>
+{'creationTime': '0001-01-01T00:00:00+00:00', 'lastModifiedTime': '0001-01-01T00:00:00+00:00'}
+--model updated: "0001-01-01T00:00:00+00:00"
+--checking service settings
+<Response [200]>
+{...learning policy...}
+<Response [200]>
+{'rewardWaitTime': '00:05:00', 'defaultReward': 0.0, 'rewardAggregation': 'earliest', 'explorationPercentage': 0.2, 'modelExportFrequency': '00:05:00', 'logRetentionDays': -1}
+User count 4
+Coffee count 4
+```
+
+## Validate new learning policy
+
+Return to the Azure Notebooks file and continue by running the same loop, but for only 2,000 iterations. Refresh the metrics chart in the Azure portal periodically to see the total calls to the service. When you have around 4,000 calls, a rank and reward call for each iteration of the loop, the iterations are done.
+
+```python
+# max iterations
+num_requests = 2000
+
+# check last mod date N% of time - currently 10%
+lastModCheck2 = int(num_requests * .10)
+
+jsonTemplate2 = rankactionsjsonobj
+
+# main iterations
+[count2, rewards2] = iterations(num_requests, lastModCheck2, jsonTemplate2)
+```
+
+## Run chart for 2,000 rank requests
+
+Run the `createChart` function.
+
+```python
+createChart(count2,rewards2)
+```
+
+## Review the second chart
+
+The second chart should show a visible increase in Rank predictions aligning with user preferences.
+
+![The second chart should show a visible increase in Rank predictions aligning with user preferences.](./media/tutorial-azure-notebook/azure-notebook-chart-results-happy-graph.png)
+
+## Clean up resources
+
+If you do not intend to continue the tutorial series, clean up the following resources:
+
+* Delete your Azure Notebook project.
+* Delete your Personalizer resource.
+
+## Next steps
+
+The [Jupyter notebook and data files](https://github.com/Azure-Samples/cognitive-services-personalizer-samples/tree/master/samples/azurenotebook) used in this sample are available on the GitHub repo for Personalizer.
ai-services Tutorial Use Personalizer Chat Bot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/personalizer/tutorial-use-personalizer-chat-bot.md
+
+ Title: Use Personalizer in chat bot - Personalizer
+description: Customize a C# .NET chat bot with a Personalizer loop to provide the correct content to a user based on actions (with features) and context features.
++
+ms.
+++ Last updated : 05/17/2021
+ms.devlang: csharp
+++
+# Tutorial: Use Personalizer in .NET chat bot
+
+Use a C# .NET chat bot with a Personalizer loop to provide the correct content to a user. This chat bot suggests a specific coffee or tea to a user. The user can accept or reject that suggestion. This gives Personalizer information to help make the next suggestion more appropriate.
+
+**In this tutorial, you learn how to:**
+
+<!-- green checkmark -->
+> [!div class="checklist"]
+> * Set up Azure resources
+> * Configure and run bot
+> * Interact with bot using Bot Framework Emulator
+> * Understand where and how the bot uses Personalizer
++
+## How does the chat bot work?
+
+A chat bot is typically a back-and-forth conversation with a user. This specific chat bot uses Personalizer to select the best action (coffee or tea) to offer the user. Personalizer uses reinforcement learning to make that selection.
+
+The chat bot needs to manage turns in conversation. The chat bot uses [Bot Framework](https://github.com/microsoft/botframework-sdk) to manage the bot architecture and conversation, and uses [Azure AI Language Understanding](../LUIS/index.yml) (LUIS) to understand the intent of the natural language from the user.
+
+The chat bot is a web site with a specific route available to answer requests, `http://localhost:3978/api/messages`. You can use the Bot Framework Emulator to visually interact with the running chat bot while you are developing a bot locally.
+
+### User interactions with the bot
+
+This is a simple chat bot that allows you to enter text queries.
+
+|User enters text|Bot responds with text|Description of action bot takes to determine response text|
+|--|--|--|
+|No text entered - bot begins the conversation.|`This is a simple chatbot example that illustrates how to use Personalizer. The bot learns what coffee or tea order is preferred by customers given some context information (such as weather, temperature, and day of the week) and information about the user.`<br>`To use the bot, just follow the prompts. To try out a new imaginary context, type "Reset" and a new one will be randomly generated.`<br>`Welcome to the coffee bot, please tell me if you want to see the menu or get a coffee or tea suggestion for today. Once I've given you a suggestion, you can reply with 'like' or 'don't like'. It's Tuesday today and the weather is Snowy.`|The bot begins the conversation with instructional text and lets you know what the context is: `Tuesday`, `Snowy`.|
+|`Show menu`|`Here is our menu: Coffee: Cappuccino Espresso Latte Macchiato Mocha Tea: GreenTea Rooibos`|Determine intent of query using LUIS, then display menu choices of coffee and tea items.|
+|`What do you suggest`|`How about Latte?`|Determine intent of query using LUIS, then call **Rank API**, and display top choice as a question `How about {response.RewardActionId}?`. Also displays JSON call and response for illustration purposes.|
+|`I like it`|`That's great! I'll keep learning your preferences over time.`<br>`Would you like to get a new suggestion or reset the simulated context to a new day?`|Determine intent of query using LUIS, then call **Reward API** with reward of `1`, displays JSON call and response for illustration purposes.|
+|`I don't like it`|`Oh well, maybe I'll guess better next time.`<br>`Would you like to get a new suggestion or reset the simulated context to a new day?`|Determine intent of query using LUIS, then call **Reward API** with reward of `0`, displays JSON call and response for illustration purposes.|
+|`Reset`|Returns instructional text.|Determine intent of query using LUIS, then displays the instructional text and resets the context.|
++
+### Personalizer in this bot
+
+This chat bot uses Personalizer to select the top action (specific coffee or tea), based on a list of _actions_ (some type of content) and context features.
+
+The bot sends the list of actions, along with context features, to the Personalizer loop. Personalizer returns the single best action to your bot, which your bot displays.
+
+In this tutorial, the **actions** are types of coffee and tea:
+
+|Coffee|Tea|
+|--|--|
+|Cappuccino<br>Espresso<br>Latte<br>Mocha|GreenTea<br>Rooibos|
+
+**Rank API:** To help Personalizer learn about your actions, the bot sends the following with each Rank API request:
+
+* Actions _with features_
+* Context features
+
+A **feature** of the model is information about the action or context that can be aggregated (grouped) across members of your chat bot user base. A feature _isn't_ individually specific (such as a user ID) or highly specific (such as an exact time of day).
+
+Features are used to align actions to the current context in the model. The model is a representation of Personalizer's past knowledge about actions, context, and their features that allows it to make educated decisions.
+
+The model, including features, is updated on a schedule based on your **Model update frequency** setting in the Azure portal.
+
+Features should be selected with the same planning and design that you would apply to any schema or model in your technical architecture. The feature values can be set with business logic or third-party systems.
+
+> [!CAUTION]
+> Features in this application are for demonstration and may not necessarily be the best features to use in your application for your use case.
+
+#### Action features
+
+Each action (content item) has features to help distinguish the coffee or tea item.
+
+The features aren't configured as part of the loop configuration in the Azure portal. Instead they are sent as a JSON object with each Rank API call. This allows flexibility for the actions and their features to grow, change, and shrink over time, which allows Personalizer to follow trends.
+
+Features for coffee and tea include:
+
+* Origin location of coffee bean such as Kenya and Brazil
+* Is the coffee or tea organic?
+* Light or dark roast of coffee
+
+While the coffee has three features in the preceding list, tea only has one. Only pass features to Personalizer that make sense to the action. Don't pass in an empty value for a feature if it doesn't apply to the action.
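+
+As a minimal sketch (in the style of the `ChooseRankAsync` method shown later in this article, which builds the same lists from enums with randomized values), actions with features could look like the following. The specific feature values here are illustrative only.
+
+```csharp
+// Sketch only: one coffee action and one tea action with illustrative feature values.
+// RankableAction is the SDK type used throughout this tutorial.
+var actions = new List<RankableAction>
+{
+    new RankableAction
+    {
+        Id = "Cappuccino",
+        Features = new List<object>
+        {
+            new { BeansOrigin = "Kenya" },
+            new { Organic = "yes" },
+            new { Roast = "dark" },
+        },
+    },
+    new RankableAction
+    {
+        Id = "GreenTea",
+        // Tea gets only the feature that applies to it - no bean origin or roast.
+        Features = new List<object>
+        {
+            new { Organic = "yes" },
+        },
+    },
+};
+```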
+
+#### Context features
+
+Context features help Personalizer understand the context of the environment such as the display device, the user, the location, and other features that are relevant to your use case.
+
+The context for this chat bot includes:
+
+* Type of weather (snowy, rainy, sunny)
+* Day of the week
+
+Selection of features is randomized in this chat bot. In a real bot, use real data for your context features.
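+
+As a rough sketch, using the same anonymous-object pattern the sample uses elsewhere, the two context features for one Rank call could look like this (in the sample, `RLContextManager` generates these values randomly):
+
+```csharp
+// Sketch only: context features for a single Rank call.
+IList<object> contextFeatures = new List<object>
+{
+    new { weather = "Snowy" },
+    new { dayofweek = "Tuesday" },
+};
+```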
+
+### Design considerations for this bot
+
+There are a few cautions to note about this conversation:
+* **Bot interaction**: The conversation is very simple because it is demonstrating Rank and Reward in a simple use case. It is not demonstrating the full functionality of the Bot Framework SDK or of the Emulator.
+* **Personalizer**: The features are selected randomly to simulate usage. Do not randomize features in a production Personalizer scenario.
+* **Language Understanding (LUIS)**: The few example utterances of the LUIS model are meant for this sample only. Do not use so few example utterances in your production LUIS application.
++
+## Install required software
+- [Visual Studio 2019](https://visualstudio.microsoft.com/downloads/). The downloadable samples repository includes instructions if you would prefer to use the .NET Core CLI.
+- [Microsoft Bot Framework Emulator](https://aka.ms/botframeworkemulator) is a desktop application that allows bot developers to test and debug their bots on localhost or running remotely through a tunnel.
+
+## Download the sample code of the chat bot
+
+The chat bot is available in the Personalizer samples repository. Clone or [download](https://github.com/Azure-Samples/cognitive-services-personalizer-samples/archive/master.zip) the repository, then open the sample in the `/samples/ChatbotExample` directory with Visual Studio 2019.
+
+To clone the repository, use the following Git command in a Bash shell (terminal).
+
+```bash
+git clone https://github.com/Azure-Samples/cognitive-services-personalizer-samples.git
+```
+
+## Create and configure Personalizer and LUIS resources
+
+### Create Azure resources
+
+To use this chat bot, you need to create Azure resources for Personalizer and Language Understanding (LUIS).
+
+* [Create LUIS resources](../luis/luis-how-to-azure-subscription.md). Create both an authoring and prediction resource.
+* [Create Personalizer resource](how-to-create-resource.md) then copy the key and endpoint from the Azure portal. You will need to set these values in the `appsettings.json` file of the .NET project.
+
+### Create LUIS app
+
+If you are new to LUIS, you need to [sign in](https://www.luis.ai) and immediately migrate your account. You don't need to create new resources; instead, select the resources you created in the previous section of this tutorial.
+
+1. To create a new LUIS application, in the [LUIS portal](https://www.luis.ai), select your subscription and authoring resource.
+1. Then, still on the same page, select **+ New app for conversation**, then **Import as JSON**.
+1. In the pop-up dialog, select **Choose file** then select the `/samples/ChatbotExample/CognitiveModels/coffeebot.json` file. Enter the name `Personalizer Coffee bot`.
+1. Select the **Train** button in the top-right navigation of the LUIS portal.
+1. Select the **Publish** button to publish the app to the **Production slot** for the prediction runtime.
+1. Select **Manage**, then **Settings**. Copy the value of the **App ID**. You will need to set this value in the `appsettings.json` file of the .NET project.
+1. Still in the **Manage** section, select **Azure Resources**. This displays the associated resources on the app.
+1. Select **Add prediction resource**. In the pop-up dialog, select your subscription, and the prediction resource created in a previous section of this tutorial, then select **Done**.
+1. Copy the values of the **Primary key** and **Endpoint URL**. You will need to set these values in the `appsettings.json` file of the .NET project.
+
+### Configure bot with appsettings.json file
+
+1. Open the chat bot solution file, `ChatbotSamples.sln`, with Visual Studio 2019.
+1. Open `appsettings.json` in the root directory of the project.
+1. Set all five settings copied in the previous section of this tutorial.
+
+ ```json
+ {
+ "PersonalizerChatbot": {
+ "LuisAppId": "",
+ "LuisAPIKey": "",
+ "LuisServiceEndpoint": "",
+ "PersonalizerServiceEndpoint": "",
+ "PersonalizerAPIKey": ""
+ }
+ }
+ ```
+
+### Build and run the bot
+
+Once you have configured the `appsettings.json`, you are ready to build and run the chat bot. When you do, a browser opens to the running website, `http://localhost:3978`.
++
+Keep the website running; the rest of the tutorial explains what the bot is doing so that you can interact with the bot as you follow along.
++
+## Set up the Bot Framework Emulator
+
+1. Open the Bot Framework Emulator, and select **Open Bot**.
+
+ :::image type="content" source="media/tutorial-chat-bot/bot-emulator-startup.png" alt-text="Screenshot of Bot Framework Emulator startup screen.":::
++
+1. Configure the bot with the following **bot URL** then select **Connect**:
+
+ `http://localhost:3978/api/messages`
+
+ :::image type="content" source="media/tutorial-chat-bot/bot-emulator-open-bot-settings.png" alt-text="Screenshot of Bot Framework Emulator open bot settings.":::
+
+ The emulator connects to the chat bot and displays the instructional text, along with logging and debug information helpful for local development.
+
+ :::image type="content" source="media/tutorial-chat-bot/bot-emulator-bot-conversation-first-turn.png" alt-text="Screenshot of Bot Framework Emulator in first turn of conversation.":::
+
+## Use the bot in the Bot Framework Emulator
+
+1. Ask to see the menu by entering `I would like to see the menu`. The chat bot displays the items.
+1. Let the bot suggest an item by entering `Please suggest a drink for me.`
+ The emulator displays the Rank request and response in the chat window, so you can see the full JSON. And the bot makes a suggestion, something like `How about Latte?`
+1. Accept the suggestion, which means you accept Personalizer's top-ranked selection, by entering `I like it.`
+ The emulator displays the Reward request with reward score 1 and response in the chat window, so you can see the full JSON. And the bot responds with `That's great! I'll keep learning your preferences over time.` and `Would you like to get a new suggestion or reset the simulated context to a new day?`
+
+ If you respond with `no` to the selection, the reward score of 0 is sent to Personalizer.
++
+## Understand the .NET code using Personalizer
+
+The .NET solution is a simple bot framework chat bot. The code related to Personalizer is in the following folders:
+* `/samples/ChatbotExample/Bots`
+ * `PersonalizerChatbot.cs` file for the interaction between bot and Personalizer
+* `/samples/ChatbotExample/ReinforcementLearning` - manages the actions and features for the Personalizer model
+* `/samples/ChatbotExample/Model` - files for the Personalizer actions and features, and for the LUIS intents
+
+### PersonalizerChatbot.cs - working with Personalizer
+
+The `PersonalizerChatbot` class is derived from the `Microsoft.Bot.Builder.ActivityHandler`. It has three properties and methods to manage the conversation flow.
+
+> [!CAUTION]
+> Do not copy code from this tutorial. Use the sample code in the [Personalizer samples repository](https://github.com/Azure-Samples/cognitive-services-personalizer-samples).
+
+```csharp
+public class PersonalizerChatbot : ActivityHandler
+{
+
+ private readonly LuisRecognizer _luisRecognizer;
+ private readonly PersonalizerClient _personalizerClient;
+
+ private readonly RLContextManager _rlFeaturesManager;
+
+ public PersonalizerChatbot(LuisRecognizer luisRecognizer, RLContextManager rlContextManager, PersonalizerClient personalizerClient)
+ {
+ _luisRecognizer = luisRecognizer;
+ _rlFeaturesManager = rlContextManager;
+ _personalizerClient = personalizerClient;
+ }
+
+ public override async Task OnTurnAsync(ITurnContext turnContext, CancellationToken cancellationToken = default(CancellationToken))
+ {
+ await base.OnTurnAsync(turnContext, cancellationToken);
+
+ if (turnContext.Activity.Type == ActivityTypes.Message)
+ {
+ // Check LUIS model
+ var recognizerResult = await _luisRecognizer.RecognizeAsync(turnContext, cancellationToken);
+ var topIntent = recognizerResult?.GetTopScoringIntent();
+ if (topIntent != null && topIntent.HasValue && topIntent.Value.intent != "None")
+ {
+ Intents intent = (Intents)Enum.Parse(typeof(Intents), topIntent.Value.intent);
+ switch (intent)
+ {
+ case Intents.ShowMenu:
+ await turnContext.SendActivityAsync($"Here is our menu: \n Coffee: {CoffeesMethods.DisplayCoffees()}\n Tea: {TeaMethods.DisplayTeas()}", cancellationToken: cancellationToken);
+ break;
+ case Intents.ChooseRank:
+ // Here we generate the event ID for this Rank.
+ var response = await ChooseRankAsync(turnContext, _rlFeaturesManager.GenerateEventId(), cancellationToken);
+ _rlFeaturesManager.CurrentPreference = response.Ranking;
+ await turnContext.SendActivityAsync($"How about {response.RewardActionId}?", cancellationToken: cancellationToken);
+ break;
+ case Intents.RewardLike:
+ if (!string.IsNullOrEmpty(_rlFeaturesManager.CurrentEventId))
+ {
+ await RewardAsync(turnContext, _rlFeaturesManager.CurrentEventId, 1, cancellationToken);
+ await turnContext.SendActivityAsync($"That's great! I'll keep learning your preferences over time.", cancellationToken: cancellationToken);
+ await SendByebyeMessageAsync(turnContext, cancellationToken);
+ }
+ else
+ {
+ await turnContext.SendActivityAsync($"Not sure what you like. Did you ask for a suggestion?", cancellationToken: cancellationToken);
+ }
+
+ break;
+ case Intents.RewardDislike:
+ if (!string.IsNullOrEmpty(_rlFeaturesManager.CurrentEventId))
+ {
+ await RewardAsync(turnContext, _rlFeaturesManager.CurrentEventId, 0, cancellationToken);
+ await turnContext.SendActivityAsync($"Oh well, maybe I'll guess better next time.", cancellationToken: cancellationToken);
+ await SendByebyeMessageAsync(turnContext, cancellationToken);
+ }
+ else
+ {
+ await turnContext.SendActivityAsync($"Not sure what you dislike. Did you ask for a suggestion?", cancellationToken: cancellationToken);
+ }
+
+ break;
+ case Intents.Reset:
+ _rlFeaturesManager.GenerateRLFeatures();
+ await SendResetMessageAsync(turnContext, cancellationToken);
+ break;
+ default:
+ break;
+ }
+ }
+ else
+ {
+ var msg = @"Could not match your message with any of the following LUIS intents:
+ 'ShowMenu'
+ 'ChooseRank'
+ 'RewardLike'
+ 'RewardDislike'.
+ Try typing 'Show me the menu','What do you suggest','I like it','I don't like it'.";
+ await turnContext.SendActivityAsync(msg);
+ }
+ }
+ else if (turnContext.Activity.Type == ActivityTypes.ConversationUpdate)
+ {
+ // Generate a new weekday and weather condition
+ // These will act as the context features when we call rank with Personalizer
+ _rlFeaturesManager.GenerateRLFeatures();
+
+ // Send a welcome message to the user and tell them what actions they may perform to use this bot
+ await SendWelcomeMessageAsync(turnContext, cancellationToken);
+ }
+ else
+ {
+ await turnContext.SendActivityAsync($"{turnContext.Activity.Type} event detected", cancellationToken: cancellationToken);
+ }
+ }
+
+ // code removed for brevity, full sample code available for download
+ private async Task SendWelcomeMessageAsync(ITurnContext turnContext, CancellationToken cancellationToken)
+ private async Task SendResetMessageAsync(ITurnContext turnContext, CancellationToken cancellationToken)
+ private async Task SendByebyeMessageAsync(ITurnContext turnContext, CancellationToken cancellationToken)
+ private async Task<RankResponse> ChooseRankAsync(ITurnContext turnContext, string eventId, CancellationToken cancellationToken)
+ private async Task RewardAsync(ITurnContext turnContext, string eventId, double reward, CancellationToken cancellationToken)
+}
+```
+
+The methods prefixed with `Send` manage conversation with the bot and LUIS. The methods `ChooseRankAsync` and `RewardAsync` interact with Personalizer.
+
+#### Calling Rank API and display results
+
+The method `ChooseRankAsync` builds the JSON data to send to the Personalizer Rank API by collecting the actions with features and the context features.
+
+```csharp
+private async Task<RankResponse> ChooseRankAsync(ITurnContext turnContext, string eventId, CancellationToken cancellationToken)
+{
+ IList<object> contextFeature = new List<object>
+ {
+ new { weather = _rlFeaturesManager.RLFeatures.Weather.ToString() },
+ new { dayofweek = _rlFeaturesManager.RLFeatures.DayOfWeek.ToString() },
+ };
+
+ Random rand = new Random(DateTime.UtcNow.Millisecond);
+ IList<RankableAction> actions = new List<RankableAction>();
+ var coffees = Enum.GetValues(typeof(Coffees));
+ var beansOrigin = Enum.GetValues(typeof(CoffeeBeansOrigin));
+ var organic = Enum.GetValues(typeof(Organic));
+ var roast = Enum.GetValues(typeof(CoffeeRoast));
+ var teas = Enum.GetValues(typeof(Teas));
+
+ foreach (var coffee in coffees)
+ {
+ actions.Add(new RankableAction
+ {
+ Id = coffee.ToString(),
+ Features =
+ new List<object>()
+ {
+ new { BeansOrigin = beansOrigin.GetValue(rand.Next(0, beansOrigin.Length)).ToString() },
+ new { Organic = organic.GetValue(rand.Next(0, organic.Length)).ToString() },
+ new { Roast = roast.GetValue(rand.Next(0, roast.Length)).ToString() },
+ },
+ });
+ }
+
+ foreach (var tea in teas)
+ {
+ actions.Add(new RankableAction
+ {
+ Id = tea.ToString(),
+ Features =
+ new List<object>()
+ {
+ new { Organic = organic.GetValue(rand.Next(0, organic.Length)).ToString() },
+ },
+ });
+ }
+
+ // Sending a rank request to Personalizer
+ // Here we are asking Personalizer to decide which drink the user is most likely to want
+ // based on the current context features (weather, day of the week generated in RLContextManager)
+ // and the features of the drinks themselves
+ var request = new RankRequest(actions, contextFeature, null, eventId);
+ await turnContext.SendActivityAsync(
+ "===== DEBUG MESSAGE CALL TO RANK =====\n" +
+ "This is what is getting sent to Rank:\n" +
+ $"{JsonConvert.SerializeObject(request, Formatting.Indented)}\n",
+ cancellationToken: cancellationToken);
+ var response = await _personalizerClient.RankAsync(request, cancellationToken);
+ await turnContext.SendActivityAsync(
+ $"===== DEBUG MESSAGE RETURN FROM RANK =====\n" +
+ "This is what Rank returned:\n" +
+ $"{JsonConvert.SerializeObject(response, Formatting.Indented)}\n",
+ cancellationToken: cancellationToken);
+ return response;
+}
+```
+
+#### Calling Reward API and display results
+
+The method `RewardAsync` builds the JSON data to send to the Personalizer Reward API. The reward score is determined from the LUIS intent identified in the user text and is passed in from the `OnTurnAsync` method.
+
+```csharp
+private async Task RewardAsync(ITurnContext turnContext, string eventId, double reward, CancellationToken cancellationToken)
+{
+ await turnContext.SendActivityAsync(
+ "===== DEBUG MESSAGE CALL REWARD =====\n" +
+ "Calling Reward:\n" +
+ $"eventId = {eventId}, reward = {reward}\n",
+ cancellationToken: cancellationToken);
+
+ // Sending a reward request to Personalizer
+ // Here we are responding to the drink ranking Personalizer provided us
+ // If the user liked the highest ranked drink, we give a high reward (1)
+ // If they did not, we give a low reward (0)
+ await _personalizerClient.RewardAsync(eventId, new RewardRequest(reward), cancellationToken);
+}
+```
+
+## Design considerations for a bot
+
+This sample is meant to demonstrate a simple end-to-end solution of Personalizer in a bot. Your use case may be more complex.
+
+If you intend to use Personalizer in a production bot, plan for:
+* Real-time access to Personalizer _every time_ you need a ranked selection. The Rank API can't be batched or cached. The reward call can be delayed or offloaded to a separate process (see the sketch after this list); if you don't return a reward within the configured reward wait time, a default reward value is set for the event.
+* Use-case based calculation of the reward: This example showed two rewards of zero and one, with no range between and no negative value for a score. Your system may need more granular scoring.
+* Bot channels: This sample uses a single channel but if you intend to use more than one channel, or variations of bots on a single channel, that may need to be considered as part of the context features of the Personalizer model.
+
+## Clean up resources
+
+When you are done with this tutorial, clean up the following resources:
+
+* Delete your sample project directory.
+* Delete your Personalizer and LUIS resource in the Azure portal.
+
+## Next steps
+* [How Personalizer works](how-personalizer-works.md)
+* [Features](concepts-features.md): learn concepts about features using with actions and context
+* [Rewards](concept-rewards.md): learn about calculating rewards
ai-services Tutorial Use Personalizer Web App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/personalizer/tutorial-use-personalizer-web-app.md
+
+ Title: Use web app - Personalizer
+description: Customize a C# .NET web app with a Personalizer loop to provide the correct content to a user based on actions (with features) and context features.
++
+ms.
+++ Last updated : 06/10/2020
+ms.devlang: csharp
++
+# Tutorial: Add Personalizer to a .NET web app
+
+Customize a C# .NET web app with a Personalizer loop to provide the correct content to a user based on actions (with features) and context features.
+
+**In this tutorial, you learn how to:**
+
+<!-- green checkmark -->
+> [!div class="checklist"]
+> * Set up Personalizer key and endpoint
+> * Collect features
+> * Call Rank and Reward APIs
+> * Display top action, designated as _rewardActionId_
+++
+## Select the best content for a web app
+
+A web app should use Personalizer when there is a list of _actions_ (some type of content) on the web page that needs to be personalized to a single top item (rewardActionId) to display. Examples of action lists include news articles, button placement locations, and word choices for product names.
+
+You send the list of actions, along with context features, to the Personalizer loop. Personalizer selects the single best action, then your web app displays that action.
+
+In this tutorial, the actions are types of food:
+
+* pasta
+* ice cream
+* juice
+* salad
+* popcorn
+* coffee
+* soup
+
+To help Personalizer learn about your actions, send both _actions with features_ and _context features_ with each Rank API request.
+
+A **feature** of the model is information about the action or context that can be aggregated (grouped) across members of your web app user base. A feature _isn't_ individually specific (such as a user ID) or highly specific (such as an exact time of day).
+
+### Actions with features
+
+Each action (content item) has features to help distinguish the food item.
+
+The features aren't configured as part of the loop configuration in the Azure portal. Instead they are sent as a JSON object with each Rank API call. This allows flexibility for the actions and their features to grow, change, and shrink over time, which allows Personalizer to follow trends.
+
+```csharp
+ /// <summary>
+ /// Creates personalizer actions feature list.
+ /// </summary>
+ /// <returns>List of actions for personalizer.</returns>
+ private IList<RankableAction> GetActions()
+ {
+ IList<RankableAction> actions = new List<RankableAction>
+ {
+ new RankableAction
+ {
+ Id = "pasta",
+ Features =
+ new List<object>() { new { taste = "savory", spiceLevel = "medium" }, new { nutritionLevel = 5, cuisine = "italian" } }
+ },
+
+ new RankableAction
+ {
+ Id = "ice cream",
+ Features =
+ new List<object>() { new { taste = "sweet", spiceLevel = "none" }, new { nutritionalLevel = 2 } }
+ },
+
+ new RankableAction
+ {
+ Id = "juice",
+ Features =
+ new List<object>() { new { taste = "sweet", spiceLevel = "none" }, new { nutritionLevel = 5 }, new { drink = true } }
+ },
+
+ new RankableAction
+ {
+ Id = "salad",
+ Features =
+ new List<object>() { new { taste = "sour", spiceLevel = "low" }, new { nutritionLevel = 8 } }
+ },
+
+ new RankableAction
+ {
+ Id = "popcorn",
+ Features =
+ new List<object>() { new { taste = "salty", spiceLevel = "none" }, new { nutritionLevel = 3 } }
+ },
+
+ new RankableAction
+ {
+ Id = "coffee",
+ Features =
+ new List<object>() { new { taste = "bitter", spiceLevel = "none" }, new { nutritionLevel = 3 }, new { drink = true } }
+ },
+
+ new RankableAction
+ {
+ Id = "soup",
+ Features =
+ new List<object>() { new { taste = "sour", spiceLevel = "high" }, new { nutritionLevel = 7} }
+ }
+ };
+
+ return actions;
+ }
+```
++
+## Context features
+
+Context features help Personalizer understand the context of the actions. The context for this sample application includes:
+
+* time of day - morning, afternoon, evening, night
+* user's preference for taste - salty, sweet, bitter, sour, or savory
+* browser's context - user agent, geographical location, referrer
+
+```csharp
+/// <summary>
+/// Get users time of the day context.
+/// </summary>
+/// <returns>Time of day feature selected by the user.</returns>
+private string GetUsersTimeOfDay()
+{
+ Random rnd = new Random();
+ string[] timeOfDayFeatures = new string[] { "morning", "noon", "afternoon", "evening", "night", "midnight" };
+ int timeIndex = rnd.Next(timeOfDayFeatures.Length);
+ return timeOfDayFeatures[timeIndex];
+}
+
+/// <summary>
+/// Gets user food preference.
+/// </summary>
+/// <returns>Food taste feature selected by the user.</returns>
+private string GetUsersTastePreference()
+{
+ Random rnd = new Random();
+ string[] tasteFeatures = new string[] { "salty", "bitter", "sour", "savory", "sweet" };
+ int tasteIndex = rnd.Next(tasteFeatures.Length);
+ return tasteFeatures[tasteIndex];
+}
+```
+
+## How does the web app use Personalizer?
+
+The web app uses Personalizer to select the best action from the list of food choices. It does this by sending the following information with each Rank API call:
+* **actions** with their features such as `taste` and `spiceLevel`
+* **context** features such as `time` of day, the user's `taste` preference, and the browser's user agent information
+* **actions to exclude** such as juice
+* **eventId**, which is different for each call to Rank API.
+
+## Personalizer model features in a web app
+
+Personalizer needs features for the actions (content) and the current context (user and environment). Features are used to align actions to the current context in the model. The model is a representation of Personalizer's past knowledge about actions, context, and their features that allows it to make educated decisions.
+
+The model, including features, is updated on a schedule based on your **Model update frequency** setting in the Azure portal.
+
+> [!CAUTION]
+> Features in this application are meant to illustrate features and feature values, but they are not necessarily the best features to use in a web app.
+
+### Plan for features and their values
+
+Features should be selected with the same planning and design that you would apply to any schema or model in your technical architecture. The feature values can be set with business logic or third-party systems. Feature values should not be so highly specific that they don't apply across a group or class of features.
+
+### Generalize feature values
+
+#### Generalize into categories
+
+This app uses `time` as a feature but groups time into categories such as `morning`, `afternoon`, `evening`, and `night`. That is an example of using the information of time but not in a highly specific way, such as `10:05:01 UTC+2`.
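+
+For example, a hypothetical helper (the sample itself picks the value randomly in `GetUsersTimeOfDay`) could bucket the current hour into one of those categories before sending it as a context feature:
+
+```csharp
+/// <summary>
+/// Sketch only: generalize an exact timestamp into a coarse time-of-day category.
+/// </summary>
+private string GetTimeOfDayCategory(DateTime localTime)
+{
+    int hour = localTime.Hour;
+    if (hour >= 5 && hour < 12) return "morning";
+    if (hour >= 12 && hour < 17) return "afternoon";
+    if (hour >= 17 && hour < 21) return "evening";
+    return "night";
+}
+```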
+
+#### Generalize into parts
+
+This app uses the HTTP Request features from the browser. This starts with a very specific string with all the data, for example:
+
+```http
+Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/530.99 (KHTML, like Gecko) Chrome/80.0.3900.140 Safari/537.36
+```
+
+The **HttpRequestFeatures** class library generalizes this string into a **userAgentInfo** object with individual values. Any values that are too specific are set to an empty string. When the context features for the request are sent, it has the following JSON format:
+
+```JSON
+{
+ "httpRequestFeatures": {
+ "_synthetic": false,
+ "OUserAgent": {
+ "_ua": "",
+ "_DeviceBrand": "",
+ "_DeviceFamily": "Other",
+ "_DeviceIsSpider": false,
+ "_DeviceModel": "",
+ "_OSFamily": "Windows",
+ "_OSMajor": "10",
+ "DeviceType": "Desktop"
+ }
+ }
+}
+```
++
+## Use the sample web app
+
+The sample browser-based web app (all code is provided) needs the following applications installed to run the app.
+
+Install the following software:
+
+* [.NET Core 2.1](https://dotnet.microsoft.com/download/dotnet-core/2.1) - the sample back-end server uses .NET core
+* [Node.js](https://nodejs.org/) - the client/front end depends on this application
+* [Visual Studio 2019](https://visualstudio.microsoft.com/vs/), or [.NET Core CLI](/dotnet/core/tools/) - use either the developer environment of Visual Studio 2019 or the .NET Core CLI to build and run the app
+
+### Set up the sample
+1. Clone the Azure AI Personalizer Samples repo.
+
+ ```bash
+ git clone https://github.com/Azure-Samples/cognitive-services-personalizer-samples.git
+ ```
+
+1. Navigate to _samples/HttpRequestFeatures_ to open the solution, `HttpRequestFeaturesExample.sln`.
+
+ If requested, allow Visual Studio to update the .NET package for Personalizer.
+
+### Set up Azure AI Personalizer Service
+
+1. [Create a Personalizer resource](https://portal.azure.com/#create/Microsoft.CognitiveServicesPersonalizer) in the Azure portal.
+
+1. In the Azure portal, find the `Endpoint` and either `Key1` or `Key2` (either will work) in the **Keys and Endpoints** tab. These are your `PersonalizerServiceEndpoint` and your `PersonalizerApiKey`.
+1. Fill in the `PersonalizerServiceEndpoint` in **appsettings.json**.
+1. Configure the `PersonalizerApiKey` as an [app secret](/aspnet/core/security/app-secrets) in one of the following ways:
+
+ * If you are using the .NET Core CLI, you can use the `dotnet user-secrets set "PersonalizerApiKey" "<API Key>"` command.
+ * If you are using Visual Studio, you can right-click the project and select the **Manage User Secrets** menu option to configure the Personalizer keys. By doing this, Visual Studio will open a `secrets.json` file where you can add the keys as follows:
+
+ ```JSON
+ {
+ "PersonalizerApiKey": "<your personalizer key here>",
+ }
+ ```
+
+## Run the sample
+
+Build and run HttpRequestFeaturesExample with one of the following methods:
+
+* Visual Studio 2019: Press **F5**
+* .NET Core CLI: `dotnet build` then `dotnet run`
+
+Through a web browser, you can send a Rank request and a Reward request and see their responses, as well as the http request features extracted from your environment.
+
+> [!div class="mx-imgBorder"]
+> ![Screenshot shows an example of the Http Request Feature in a web browser.](./media/tutorial-web-app/web-app-single-page.png)
+
+## Demonstrate the Personalizer loop
+
+1. Select the **Generate new Rank Request** button to create a new JSON object for the Rank API call. This creates the actions (with features) and context features and displays the values so you can see what the JSON looks like.
+
+ For your own future application, generation of actions and features may happen on the client, on the server, a mix of the two, or with calls to other services.
+
+1. Select **Send Rank Request** to send the JSON object to the server. The server calls the Personalizer Rank API. The server receives the response and returns the top ranked action to the client to display.
+
+1. Set the reward value, then select the **Send Reward Request** button. If you don't change the reward value, the client application always sends the value of `1` to Personalizer.
+
+ > [!div class="mx-imgBorder"]
+ > ![Screenshot shows the Reward Request section.](./media/tutorial-web-app/reward-score-api-call.png)
+
+ For your own future application, generation of the reward score may happen after collecting information from the user's behavior on the client, along with business logic on the server.
+
+## Understand the sample web app
+
+The sample web app has a **C# .NET** server, which manages the collection of features and sending and receiving HTTP calls to your Personalizer endpoint.
+
+The sample web app uses a **knockout front-end client application** to capture features and process user interface actions such as clicking on buttons, and sending data to the .NET server.
+
+The following sections explain the parts of the server and client that a developer needs to understand to use Personalizer.
+
+## Rank API: Client application sends context to server
+
+The client application collects the user's browser _user agent_.
+
+> [!div class="mx-imgBorder"]
+> ![Build and run the HTTPRequestFeaturesExample project. A browser window opens to display the single page application.](./media/tutorial-web-app/user-agent.png)
+
+## Rank API: Server application calls Personalizer
+
+This is a typical .NET web app with a client application; much of the boilerplate code is provided for you. Any code not specific to Personalizer is removed from the following code snippets so you can focus on the Personalizer-specific code.
+
+### Create Personalizer client
+
+In the server's **Startup.cs**, the Personalizer endpoint and key are used to create the Personalizer client. The client application doesn't need to communicate with Personalizer in this app, instead relying on the server to make those SDK calls.
+
+The web server's .NET start-up code is:
+
+```csharp
+using Microsoft.Azure.CognitiveServices.Personalizer;
+// ... other using statements removed for brevity
+
+namespace HttpRequestFeaturesExample
+{
+ public class Startup
+ {
+ public Startup(IConfiguration configuration)
+ {
+ Configuration = configuration;
+ }
+
+ public IConfiguration Configuration { get; }
+
+ // This method gets called by the runtime. Use this method to add services to the container.
+ public void ConfigureServices(IServiceCollection services)
+ {
+ string personalizerApiKey = Configuration.GetSection("PersonalizerApiKey").Value;
+ string personalizerEndpoint = Configuration.GetSection("PersonalizerConfiguration:ServiceEndpoint").Value;
+ if (string.IsNullOrEmpty(personalizerEndpoint) || string.IsNullOrEmpty(personalizerApiKey))
+ {
+ throw new ArgumentException("Missing Azure AI Personalizer endpoint and/or api key.");
+ }
+ services.AddSingleton(client =>
+ {
+ return new PersonalizerClient(new ApiKeyServiceClientCredentials(personalizerApiKey))
+ {
+ Endpoint = personalizerEndpoint
+ };
+ });
+
+ services.AddMvc();
+ }
+
+ // ... code removed for brevity
+ }
+}
+```
+
+### Select best action
+
+In the server's **PersonalizerController.cs**, the **GenerateRank** server API summarizes the preparation for the Rank API call:
+
+* Create new `eventId` for the Rank call
+* Get the list of actions
+* Get the list of features from the user and create context features
+* Optionally, set any excluded actions
+* Call Rank API, return results to client
+
+```csharp
+/// <summary>
+/// Creates a RankRequest with user time of day, HTTP request features,
+/// and taste as the context and several different foods as the actions
+/// </summary>
+/// <returns>RankRequest with user info</returns>
+[HttpGet("GenerateRank")]
+public RankRequest GenerateRank()
+{
+ string eventId = Guid.NewGuid().ToString();
+
+ // Get the actions list to choose from personalizer with their features.
+ IList<RankableAction> actions = GetActions();
+
+ // Get context information from the user.
+ HttpRequestFeatures httpRequestFeatures = GetHttpRequestFeaturesFromRequest(Request);
+ string timeOfDayFeature = GetUsersTimeOfDay();
+ string tasteFeature = GetUsersTastePreference();
+
+ // Create current context from user specified data.
+ IList<object> currentContext = new List<object>() {
+ new { time = timeOfDayFeature },
+ new { taste = tasteFeature },
+ new { httpRequestFeatures }
+ };
+
+ // Exclude an action for personalizer ranking. This action will be held at its current position.
+ IList<string> excludeActions = new List<string> { "juice" };
+
+ // Rank the actions
+ return new RankRequest(actions, currentContext, excludeActions, eventId);
+}
+```
+
+The JSON sent to Personalizer, containing both actions (with features) and the current context features, looks like:
+
+```json
+{
+ "contextFeatures": [
+ {
+ "time": "morning"
+ },
+ {
+ "taste": "savory"
+ },
+ {
+ "httpRequestFeatures": {
+ "_synthetic": false,
+ "MRefer": {
+ "referer": "http://localhost:51840/"
+ },
+ "OUserAgent": {
+ "_ua": "",
+ "_DeviceBrand": "",
+ "_DeviceFamily": "Other",
+ "_DeviceIsSpider": false,
+ "_DeviceModel": "",
+ "_OSFamily": "Windows",
+ "_OSMajor": "10",
+ "DeviceType": "Desktop"
+ }
+ }
+ }
+ ],
+ "actions": [
+ {
+ "id": "pasta",
+ "features": [
+ {
+ "taste": "savory",
+ "spiceLevel": "medium"
+ },
+ {
+ "nutritionLevel": 5,
+ "cuisine": "italian"
+ }
+ ]
+ },
+ {
+ "id": "ice cream",
+ "features": [
+ {
+ "taste": "sweet",
+ "spiceLevel": "none"
+ },
+ {
+ "nutritionalLevel": 2
+ }
+ ]
+ },
+ {
+ "id": "juice",
+ "features": [
+ {
+ "taste": "sweet",
+ "spiceLevel": "none"
+ },
+ {
+ "nutritionLevel": 5
+ },
+ {
+ "drink": true
+ }
+ ]
+ },
+ {
+ "id": "salad",
+ "features": [
+ {
+ "taste": "sour",
+ "spiceLevel": "low"
+ },
+ {
+ "nutritionLevel": 8
+ }
+ ]
+ },
+ {
+ "id": "popcorn",
+ "features": [
+ {
+ "taste": "salty",
+ "spiceLevel": "none"
+ },
+ {
+ "nutritionLevel": 3
+ }
+ ]
+ },
+ {
+ "id": "coffee",
+ "features": [
+ {
+ "taste": "bitter",
+ "spiceLevel": "none"
+ },
+ {
+ "nutritionLevel": 3
+ },
+ {
+ "drink": true
+ }
+ ]
+ },
+ {
+ "id": "soup",
+ "features": [
+ {
+ "taste": "sour",
+ "spiceLevel": "high"
+ },
+ {
+ "nutritionLevel": 7
+ }
+ ]
+ }
+ ],
+ "excludedActions": [
+ "juice"
+ ],
+ "eventId": "82ac52da-4077-4c7d-b14e-190530578e75",
+ "deferActivation": null
+}
+```
+
+### Return Personalizer rewardActionId to client
+
+The Rank API returns the selected best action **rewardActionId** to the server.
+
+Display the action returned in **rewardActionId**.
+
+```json
+{
+ "ranking": [
+ {
+ "id": "popcorn",
+ "probability": 0.833333254
+ },
+ {
+ "id": "salad",
+ "probability": 0.03333333
+ },
+ {
+ "id": "juice",
+ "probability": 0
+ },
+ {
+ "id": "soup",
+ "probability": 0.03333333
+ },
+ {
+ "id": "coffee",
+ "probability": 0.03333333
+ },
+ {
+ "id": "pasta",
+ "probability": 0.03333333
+ },
+ {
+ "id": "ice cream",
+ "probability": 0.03333333
+ }
+ ],
+ "eventId": "82ac52da-4077-4c7d-b14e-190530578e75",
+ "rewardActionId": "popcorn"
+}
+```
+
+### Client displays the rewardActionId action
+
+In this tutorial, the `rewardActionId` value is displayed.
+
+In your own future application, that may be some exact text, a button, or a section of the web page highlighted. The list is returned for any post-analysis of scores, not an ordering of the content. Only the `rewardActionId` content should be displayed.
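+
+A minimal sketch of that last point (variable names here are hypothetical): look up the action whose `Id` matches `rewardActionId` in the list you sent, and render only that item.
+
+```csharp
+// Sketch only: render just the item Personalizer selected, not the whole ranking.
+// 'actions' is the list sent in the RankRequest; 'response' is the RankResponse.
+// Requires System.Linq.
+RankableAction itemToDisplay = actions.First(a => a.Id == response.RewardActionId);
+Console.WriteLine($"Showing: {itemToDisplay.Id}");
+```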
+
+## Reward API: collect information for reward
+
+The [reward score](concept-rewards.md) should be carefully planned, just as the features are planned. The reward score typically should be a value from 0 to 1. The value _can_ be calculated partially in the client application, based on user behaviors, and partially on the server, based on business logic and goals.
+
+If the server doesn't call the Reward API within the **Reward wait time** configured in the Azure portal for your Personalizer resource, then the **Default reward** (also configured in the Azure portal) is used for that event.
+
+In this sample application, you can select a value to see how the reward impacts the selections.
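+
+As an illustration of splitting that calculation (hypothetical logic, not part of this sample), the client might report raw behavior signals and the server might convert them into a score before calling the Reward API:
+
+```csharp
+// Sketch only: hypothetical server-side reward calculation.
+// 'clickedSuggestedItem' and 'completedPurchase' would be reported by the client.
+private double CalculateReward(bool clickedSuggestedItem, bool completedPurchase)
+{
+    if (!clickedSuggestedItem)
+    {
+        return 0.0; // the user ignored the suggestion
+    }
+
+    return completedPurchase ? 1.0 : 0.5; // partial credit for a click without a purchase
+}
+```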
+
+## Additional ways to learn from this sample
+
+The sample uses several time-based events configured in the Azure portal for your Personalizer resource. Play with those values then return to this sample web app to see how the changes impact the Rank and Reward calls:
+
+* Reward wait time
+* Model update frequency
+
+Additional settings to play with include:
+* Default reward
+* Exploration percentage
++
+## Clean up resources
+
+When you are done with this tutorial, clean up the following resources:
+
+* Delete your sample project directory.
+* Delete your Personalizer resource. Think of a Personalizer resource as dedicated to its actions and context; only reuse the resource if you are still using foods as the action subject domain.
++
+## Next steps
+* [How Personalizer works](how-personalizer-works.md)
+* [Features](concepts-features.md): learn concepts about features using with actions and context
+* [Rewards](concept-rewards.md): learn about calculating rewards
ai-services What Is Personalizer https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/personalizer/what-is-personalizer.md
+
+ Title: What is Personalizer?
+description: Personalizer is a cloud-based service that allows you to choose the best experience to show to your users, learning from their real-time behavior.
++
+ms.
+++ Last updated : 11/17/2022+
+keywords: personalizer, Azure AI Personalizer, machine learning
++
+# What is Personalizer?
++
+Azure AI Personalizer is an AI service that helps your applications make smarter decisions at scale using **reinforcement learning**. Personalizer processes information about the state of your application, scenario, and/or users (*contexts*), and a set of possible decisions and related attributes (*actions*) to determine the best decision to make. Feedback from your application (*rewards*) is sent to Personalizer to learn how to improve its decision-making ability in near-real time.
+
+Personalizer can determine the best actions to take in a variety of scenarios:
+* E-commerce: What product should be shown to customers to maximize the likelihood of a purchase?
+* Content recommendation: What article should be shown to increase the click-through rate?
+* Content design: Where should an advertisement be placed to optimize user engagement on a website?
+* Communication: When and how should a notification be sent to maximize the chance of a response?
+
+To get started with Personalizer, follow the [**quickstart guide**](quickstart-personalizer-sdk.md), or try Personalizer in your browser with this [interactive demo](https://personalizerdevdemo.azurewebsites.net/).
+
+This documentation contains the following types of articles:
+
+* [**Quickstarts**](quickstart-personalizer-sdk.md) provide step-by-step instructions to guide you through setup and sample code to start making API requests to the service.
+* [**How-to guides**](how-to-settings.md) contain instructions for using Personalizer features and advanced capabilities.
+* [**Code samples**](https://github.com/Azure-Samples/cognitive-services-personalizer-samples) demonstrate how to use Personalizer and help you to easily interface your application with the service.
+* [**Tutorials**](tutorial-use-personalizer-web-app.md) are longer walk-throughs implementing Personalizer as a part of a broader business solution.
+* [**Concepts**](how-personalizer-works.md) provide further detail on Personalizer features, capabilities, and fundamentals.
++
+## How does Personalizer work?
+
+Personalizer uses reinforcement learning to select the best *action* for a given *context* across all users in order to maximize an average *reward*.
+* **Context**: Information that describes the state of your application, scenario, or user that may be relevant to making a decision.
+ * Example: The location, device type, age, and favorite topics of users visiting a web site.
+* **Actions**: A discrete set of items that can be chosen, along with attributes describing each item.
+ * Example: A set of news articles and the topics that are discussed in each article.
+* **Reward**: A numerical score between 0 and 1 that indicates whether the decision was *bad* (0) or *good* (1).
+ * Example: A "1" indicates that a user clicked on the suggested article, whereas a "0" indicates the user did not.
+
+### Rank and Reward APIs
+
+Personalizer lets you take advantage of the power and flexibility of reinforcement learning using just two primary APIs.
+
+The **Rank** [API](https://westus2.dev.cognitive.microsoft.com/docs/services/personalizer-api/operations/Rank) is called by your application each time there's a decision to be made. The application sends a JSON containing a set of actions, features that describe each action, and features that describe the current context. Each Rank API call is known as an **event** and noted with a unique _event ID_. Personalizer then returns the ID of the best action that maximizes the total average reward as determined by the underlying model.
+
+The **Reward** [API](https://westus2.dev.cognitive.microsoft.com/docs/services/personalizer-api/operations/Reward) is called by your application whenever there's feedback that can help Personalizer learn if the action ID returned in the *Rank* call provided value. For example, if a user clicked on the suggested news article, or completed the purchase of a suggested product. A call to the Reward API can be made in real time (just after the Rank call is made) or delayed to better fit the needs of the scenario. The reward score is determined by your business metrics and objectives, and can be generated by an algorithm or rules in your application. The score is a real-valued number between 0 and 1.
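+
+Conceptually, the two calls fit together as in the following minimal sketch, written with the .NET SDK used in the tutorials in this documentation. The endpoint, key, action IDs, and feature values are placeholders, not a prescribed schema.
+
+```csharp
+using System;
+using System.Collections.Generic;
+using System.Threading.Tasks;
+using Microsoft.Azure.CognitiveServices.Personalizer;
+using Microsoft.Azure.CognitiveServices.Personalizer.Models;
+
+public static class PersonalizerSketch
+{
+    // Sketch only: one Rank call followed by one Reward call.
+    public static async Task RunOnceAsync()
+    {
+        var client = new PersonalizerClient(new ApiKeyServiceClientCredentials("<your-key>"))
+        {
+            Endpoint = "https://<your-resource>.cognitiveservices.azure.com/"
+        };
+
+        var actions = new List<RankableAction>
+        {
+            new RankableAction { Id = "article-politics", Features = new List<object> { new { topic = "politics" } } },
+            new RankableAction { Id = "article-sports", Features = new List<object> { new { topic = "sports" } } },
+        };
+        var context = new List<object> { new { country = "USA", month = "October" } };
+
+        // Each Rank call is an event, identified by a unique event ID.
+        string eventId = Guid.NewGuid().ToString();
+        RankResponse response = await client.RankAsync(new RankRequest(actions, context, null, eventId));
+
+        // Display response.RewardActionId, observe how the user responds,
+        // then report how well the decision worked (1 = good, 0 = bad).
+        await client.RewardAsync(eventId, new RewardRequest(1));
+    }
+}
+```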
+
+### Learning modes
+
+* **[Apprentice mode](concept-apprentice-mode.md)** Similar to how an apprentice learns a craft from observing an expert, Apprentice mode enables Personalizer to learn by observing your application's current decision logic. This helps to mitigate the so-called "cold start" problem with a new untrained model, and allows you to validate the action and context features that are sent to Personalizer. In Apprentice mode, each call to the Rank API returns the _baseline action_ or _default action_ that is the action that the application would have taken without using Personalizer. This is sent by your application to Personalizer in the Rank API as the first item in the set of possible actions.
+
+* **Online mode** Personalizer will return the best action, given the context, as determined by the underlying RL model and explores other possible actions that may improve performance. Personalizer learns from feedback provided in calls to the Reward API.
+
+Note that Personalizer uses collective information across all users to learn the best actions based on the current context. The service does not:
+* Persist and manage user profile information. Unique user IDs should not be sent to Personalizer.
+* Log individual users' preferences or historical data.
++
+## Example scenarios
+
+Here are a few examples where Personalizer can be used to select the best content to render for a user.
+
+|Content type|Actions {features}|Context features|Returned Reward Action ID<br>(display this content)|
+|--|--|--|--|
+|News articles|a. `The president...`, {national, politics, [text]}<br>b. `Premier League ...` {global, sports, [text, image, video]}<br> c. `Hurricane in the ...` {regional, weather, [text,image]}|Country='USA',<br>Recent_Topics=('politics', 'business'),<br>Month='October'<br>|a `The president...`|
+|Movies|1. `Star Wars` {1977, [action, adventure, fantasy], George Lucas}<br>2. `Hoop Dreams` {1994, [documentary, sports], Steve James}<br>3. `Casablanca` {1942, [romance, drama, war], Michael Curtiz}|Device='smart TV',<br>Screen_Size='large',<br>Favorite_Genre='classics'<br>|3. `Casablanca`|
+|E-commerce Products|i. `Product A` {3 kg, $$$$, deliver in 1 day}<br>ii. `Product B` {20 kg, $$, deliver in 7 days}<br>iii. `Product C` {3 kg, $$$, deliver in 2 days}| Device='iPhone',<br>Spending_Tier='low',<br>Month='June'|ii. `Product B`|
++
+## Scenario requirements
+
+Use Personalizer when your scenario has:
+
+* A limited set of actions or items to select from in each personalization event. We recommend no more than ~50 actions in each Rank API call. If you have a larger set of possible actions, we suggest [using a recommendation engine](where-can-you-use-personalizer.md#how-to-use-personalizer-with-a-recommendation-solution) or another mechanism to reduce the list of actions prior to calling the Rank API.
+* Information describing the actions (_action features_).
+* Information describing the current context (_contextual features_).
+* Sufficient data volume to enable Personalizer to learn. In general, we recommend a minimum of ~1,000 events per day to enable Personalizer to learn effectively. If Personalizer doesn't receive sufficient data, the service takes longer to determine the best actions.
+
+## Responsible use of AI
+
+At Microsoft, we're committed to the advancement of AI driven by principles that put people first. AI models such as the ones available in the Personalizer service have significant potential benefits,
+but without careful design and thoughtful mitigations, such models have the potential to generate incorrect or even harmful content. Microsoft has made significant investments to help guard against abuse and unintended harm, incorporating [Microsoft's principles for responsible AI use](https://www.microsoft.com/ai/responsible-ai), building content filters to support customers, and providing responsible AI implementation guidance to onboarded customers. See the [Responsible AI docs for Personalizer](responsible-use-cases.md).
+
+## Integrate Personalizer into an application
+
+1. [Design](concepts-features.md) and plan the **_actions_**, and **_context_**. Determine how to interpret feedback as a **_reward_** score.
+1. Each [Personalizer Resource](how-to-settings.md) you create is defined as one _Learning Loop_. The loop will receive both the Rank and Reward calls for that content or user experience and train an underlying RL model. There are multiple resource types to choose from:
+
+ |Resource type| Purpose|
+ |--|--|
+ |[Apprentice mode](concept-apprentice-mode.md) - `E0`| Train Personalizer to mimic your current decision-making logic without impacting your existing application, before using _Online mode_ to learn better policies in a production environment.|
+ |_Online mode_ - Standard, `S0`| Personalizer uses RL to determine best actions in production.|
+ |_Online mode_ - Free, `F0`| Try Personalizer in a limited non-production environment.|
+
+1. Add Personalizer to your application, website, or system:
+ 1. Add a **Rank** call to Personalizer in your application, website, or system to determine the best action.
+ 1. Use the best action, as specified as a _reward action ID_ in your scenario.
+ 1. Apply _business logic_ to user behavior or feedback data to determine the **reward** score. For example:
+
+ |Behavior|Calculated reward score|
+ |--|--|
+ |User selected a news article suggested by Personalizer |**1**|
+ |User selected a news article _not_ suggested by Personalizer |**0**|
+ |User hesitated to select a news article, scrolled around indecisively, and ultimately selected the news article suggested by Personalizer |**0.5**|
+
+ 1. Add a **Reward** call sending a reward score between 0 and 1
+ * Immediately after feedback is received.
+ * Or sometime later in scenarios where delayed feedback is expected.
+ 1. Evaluate your loop with an [offline evaluation](concepts-offline-evaluation.md) after a period of time when Personalizer has received significant data to make online decisions. An offline evaluation allows you to test and assess the effectiveness of the Personalizer Service without code changes or user impact.
++
+## Next steps
+
+> [!div class="nextstepaction"]
+> [Personalizer quickstart](quickstart-personalizer-sdk.md)
+
+* [How Personalizer works](how-personalizer-works.md)
+* [What is Reinforcement Learning?](concepts-reinforcement-learning.md)
ai-services Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/personalizer/whats-new.md
+
+ Title: What's new - Personalizer
+
+description: This article contains news about Personalizer.
++
+ms.
+++ Last updated : 05/28/2021+
+# What's new in Personalizer
+
+Learn what's new in Azure AI Personalizer. These items may include release notes, videos, blog posts, and other types of information. Bookmark this page to keep up-to-date with the service.
+
+## Release notes
+
+### September 2022
+* Personalizer Inference Explainability is now available as a Public Preview. Enabling inference explainability returns feature scores on every Rank API call, providing insight into how influential each feature is to the actions chosen by your Personalizer model. [Learn more about Inference Explainability](how-to-inference-explainability.md).
+* Personalizer SDK now available in [Java](https://search.maven.org/artifact/com.azure/azure-ai-personalizer/1.0.0-beta.1/jar) and [Javascript](https://www.npmjs.com/package/@azure-rest/ai-personalizer).
+
+### April 2022
+* Local inference SDK (Preview): Personalizer now supports near-realtime (sub-10ms) inference without the need to wait for network API calls. Your Personalizer models can be used locally for lightning fast Rank calls using the [C# SDK (Preview)](https://www.nuget.org/packages/Azure.AI.Personalizer/2.0.0-beta.2), empowering your applications to personalize quickly and efficiently. Your model continues to train in Azure while your local model is seamlessly updated.
+
+### May 2021 - //Build conference
+
+* Auto-Optimize (Preview): You can configure a Personalizer loop that you are using to continuously improve over time with less work. Personalizer will automatically run offline evaluations, discover better machine learning settings, and apply them. To learn more, see [Personalizer Auto-Optimize (Preview)](concept-auto-optimization.md).
+* Multi-slot personalization (Preview): If you have tiled layouts, carousels, and/or sidebars, it is easier to use Personalizer for each place that you recommend products or content on the same page. Personalizer can now take a list of slots in the Rank API, assign an action to each, and learn from the reward scores you send for each slot. To learn more, see [Multi-slot personalization (Preview)](concept-multi-slot-personalization.md).
+* Personalizer is now available in more regions.
+* Updated code samples (GitHub) and documentation. Use the links below to view updated samples:
+ * [C#/.NET](https://github.com/Azure-Samples/cognitive-services-quickstart-code/tree/master/dotnet/Personalizer)
+ * [JavaScript](https://github.com/Azure-Samples/cognitive-services-quickstart-code/tree/master/javascript/Personalizer)
+ * [Python samples](https://github.com/Azure-Samples/cognitive-services-quickstart-code/tree/master/python/Personalizer)
+
+### July 2020
+
+* New tutorial - [using Personalizer in a chat bot](tutorial-use-personalizer-chat-bot.md)
+
+### June 2020
+
+* New tutorial - [using Personalizer in a web app](tutorial-use-personalizer-web-app.md)
+
+### May 2020 - //Build conference
+
+The following are available in **Public Preview**:
+
+ * [Apprentice mode](concept-apprentice-mode.md) as a learning behavior.
+
+### March 2020
+
+* TLS 1.2 is now enforced for all HTTP requests to this service. For more information, see [Azure AI services security](../security-features.md).
+
+### November 2019 - Ignite conference
+
+* Personalizer is generally available (GA)
+* Azure Notebooks [tutorial](tutorial-use-azure-notebook-generate-loop-data.md) with entire lifecycle
+
+### May 2019 - //Build conference
+
+The following preview features were released at the Build 2019 Conference:
+
+* [Rank and reward learning loop](what-is-personalizer.md)
+
+## Videos
+
+### 2019 Build videos
+
+* [Deliver the Right Experiences & Content like Xbox with Azure AI Personalizer](https://azure.microsoft.com/resources/videos/build-2019-deliver-the-right-experiences-and-content-with-cognitive-services-personalizer/)
+
+## Service updates
+
+[Azure update announcements for Azure AI services](https://azure.microsoft.com/updates/?product=cognitive-services)
+
+## Next steps
+
+* [Quickstart: Create a feedback loop in C#](./quickstart-personalizer-sdk.md?pivots=programming-language-csharp%253fpivots%253dprogramming-language-csharp)
+* [Use the interactive demo](https://personalizerdevdemo.azurewebsites.net/)
ai-services Where Can You Use Personalizer https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/personalizer/where-can-you-use-personalizer.md
+
+ Title: Where and how to use - Personalizer
+description: Personalizer can be applied in any situation where your application can select the right item, action, or product to display - in order to make the experience better, achieve better business results, or improve productivity.
++
+ms.
+++ Last updated : 02/18/2020+
+# Where and how to use Personalizer
+
+Use Personalizer in any situation where your application needs to select the correct action (content) to display - in order to make the experience better, achieve better business results, or improve productivity.
+
+Personalizer uses reinforcement learning to select which action (content) to show the user. The selection can vary drastically depending on the quantity, quality, and distribution of data sent to the service.
+
+## Example use cases for Personalizer
+
+* **Intent clarification & disambiguation**: help your users have a better experience when their intent is not clear by providing an option that is personalized.
+* **Default suggestions** for menus & options: have the bot suggest the most likely item in a personalized way as a first step, instead of presenting an impersonal menu or list of alternatives.
+* **Bot traits & tone**: for bots that can vary tone, verbosity, and writing style, consider varying these traits.
+* **Notification & alert content**: decide what text to use for alerts in order to engage users more.
+* **Notification & alert timing**: have personalized learning of when to send notifications to users to engage them more.
++
+## Expectations required to use Personalizer
+
+You can apply Personalizer in situations where you meet or can implement the following guidelines.
+
+|Guideline|Explanation|
+|--|--|
+|Business goal|You have a business or usability goal for your application.|
+|Content|You have a place in your application where making a contextual decision of what to show to users will improve that goal.|
+|Content quantity|You have fewer than 50 actions to rank per call.|
+|Aggregate data|The best choice can and should be learned from collective user behavior and total reward score.|
+|Ethical use|The use of machine learning for personalization follows [responsible use guidelines](responsible-use-cases.md) and the choices you make.|
+|Best single option|The contextual decision can be expressed as ranking the best option (action) from a limited set of choices.|
+|Scored result|How well the ranked choice worked for your application can be determined by measuring some aspect of user behavior, and expressing it in a _[reward score](concept-rewards.md)_.|
+|Relevant timing|The reward score doesn't bring in too many confounding or external factors. The experiment duration is short enough that the reward score can be computed while it's still relevant.|
+|Sufficient context features|You can express the context for the rank as a list of at least 5 [features](concepts-features.md) that you think would help make the right choice, and that doesn't include user-specific identifiable information.|
+|Sufficient action features|You have information about each content choice, _action_, as a list of at least 5 [features](concepts-features.md) that you think will help Personalizer make the right choice.|
+|Daily data|There are enough events to stay on top of optimal personalization if the problem drifts over time (such as preferences in news or fashion). Personalizer will adapt to continuous change in the real world, but results won't be optimal if there aren't enough events and data to learn from to discover and settle on new patterns. You should choose a use case that happens often enough. Consider looking for use cases that happen at least 500 times per day.|
+|Historical data|Your application can retain data for long enough to accumulate a history of at least 100,000 interactions. This allows Personalizer to collect enough data to perform offline evaluations and policy optimization.|
+
+**Don't use Personalizer** where the personalized behavior isn't something that can be discovered across all users. For example, using Personalizer to suggest a first pizza order from a list of 20 possible menu items is useful, but choosing which contact to call from a user's contact list for help with childcare (such as "Grandma") isn't something that can be personalized across your user base.
+
+## How to use Personalizer in a web application
+
+Adding a learning loop to a web application includes the following steps (a minimal sketch of the API calls follows the list):
+
+1. Determine which experience to personalize, what actions and features you have, what context features to use, and what reward you'll set.
+1. Add a reference to the Personalizer SDK in your application.
+1. Call the Rank API when you're ready to personalize.
+1. Store the eventId. You send a reward with the Reward API later.
+1. Call Activate for the event once you're sure the user has seen your personalized page.
+1. Wait for user selection of ranked content.
+1. Call the Reward API to specify how well the output of the Rank API did.
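+
+The following is a minimal sketch of this flow against the Personalizer REST endpoints, using Python and the `requests` library. The resource name, key, features, and action IDs are placeholders, and the request shapes shown here assume the v1.0 REST API; adapt them to the SDK and API version you actually use.
+
+```python
+import uuid
+import requests
+
+# Placeholder endpoint and key - replace with your Personalizer resource values.
+ENDPOINT = "https://<your-personalizer-resource>.cognitiveservices.azure.com"
+HEADERS = {"Ocp-Apim-Subscription-Key": "<your-key>"}
+
+# 1. Rank: send context features and candidate actions; Personalizer returns the action to show.
+event_id = str(uuid.uuid4())
+rank_body = {
+    "eventId": event_id,
+    "contextFeatures": [{"timeOfDay": "morning", "device": "mobile"}],
+    "actions": [
+        {"id": "article-sports", "features": [{"topic": "sports"}]},
+        {"id": "article-cooking", "features": [{"topic": "cooking"}]},
+    ],
+    "deferActivation": True,  # don't learn from this event until Activate is called
+}
+rank_response = requests.post(
+    f"{ENDPOINT}/personalizer/v1.0/rank", headers=HEADERS, json=rank_body
+).json()
+action_to_show = rank_response["rewardActionId"]
+
+# 2. Activate: call once you're sure the user has seen the personalized content.
+requests.post(f"{ENDPOINT}/personalizer/v1.0/events/{event_id}/activate", headers=HEADERS)
+
+# 3. Reward: report how well the chosen action worked (for example, 1.0 for a click).
+requests.post(
+    f"{ENDPOINT}/personalizer/v1.0/events/{event_id}/reward",
+    headers=HEADERS,
+    json={"value": 1.0},
+)
+```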
+
+## How to use Personalizer with a chat bot
+
+In this example, you will see how to use Personalization to make a default suggestion instead of sending the user down a series of menus or choices every time.
+
+* Get the [code](https://github.com/Azure-Samples/cognitive-services-personalizer-samples/tree/master/samples/ChatbotExample) for this sample.
+* Set up your bot solution. Make sure to publish your LUIS application.
+* Manage Rank and Reward API calls for bot.
+ * Add code to manage LUIS intent processing. If **None** is returned as the top intent, or the top intent's score is below your business logic threshold, send the intents list to Personalizer to rank the intents.
+ * Show intent list to user as selectable links with the first intent being the top-ranked intent from Rank API response.
+ * Capture the user's selection and send this in the Reward API call.
+
+### Recommended bot patterns
+
+* Make Personalizer Rank API calls every time a disambiguation is needed, as opposed to caching results for each user. The result of disambiguating intent may change over time for one person, and allowing the Rank API to explore variances will accelerate overall learning.
+* Choose an interaction that is common with many users so that you have enough data to personalize. For example, introductory questions may be better fits than smaller clarifications deep in the conversation graph that only a few users may get to.
+* Use Rank API calls to enable "first suggestion is right" conversations, where the user gets asked "Would you like X?" or "Did you mean X?" and the user can just confirm; as opposed to giving options to the user where they must choose from a menu. For example, User:"I'd like to order a coffee" Bot:"Would you like a double espresso?". This way the reward signal is also strong as it pertains directly to the one suggestion.
+
+## How to use Personalizer with a recommendation solution
+
+Many companies use recommendation engines, marketing and campaigning tools, audience segmentation and clustering, collaborative filtering, and other means to recommend products from a large catalog to customers.
+
+The [Microsoft Recommenders GitHub repository](https://github.com/Microsoft/Recommenders) provides examples and best practices for building recommendation systems, provided as Jupyter notebooks. It provides working examples for preparing data, building models, evaluating, tuning, and operationalizing recommendation engines for many common approaches, including xDeepFM, SAR, ALS, RBM, and DKN.
+
+Personalizer can work with a recommendation engine when it's present.
+
+* Recommendation engines take large amounts of items (for example, 500,000) and recommend a subset (such as the top 20) from hundreds or thousands of options.
+* Personalizer takes a small number of actions with lots of information about them and ranks them in real time for a given rich context, while most recommendation engines only use a few attributes about users, products and their interactions.
+* Personalizer is designed to autonomously explore user preferences all the time, which will yield better results where content is changing rapidly, such as news, live events, live community content, content with daily updates, or seasonal content.
+
+A common use is to take the output of a recommendation engine (for example, the top 20 products for a certain customer) and use that as the input actions for Personalizer.
+
+## Adding content safeguards to your application
+
+If your application allows for large variances in content shown to users, and some of that content may be unsafe or inappropriate for some users, you should plan ahead to make sure that the right safeguards are in place to prevent your users from seeing unacceptable content. The best pattern to implement safeguards is:
+ * Obtain the list of actions to rank.
+ * Filter out the ones that are not viable for the audience.
+ * Only rank these viable actions.
+ * Display the top ranked action to the user.
+
+In some architectures, the above sequence may be hard to implement. In that case, there is an alternative approach: implement safeguards after ranking, but make a provision so that actions falling outside the safeguard are not used to train the Personalizer model (a sketch of this approach follows the steps below).
+
+* Obtain the list of actions to rank, with learning deactivated.
+* Rank actions.
+* Check if the top action is viable.
+ * If the top action is viable, activate learning for this rank, then show it to the user.
+ * If the top action is not viable, do not activate learning for this ranking, and decide through your own logic or alternative approaches what to show to the user. Even if you use the second-best ranked option, do not activate learning for this ranking.
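+
+A minimal sketch of this alternative pattern, using Python and the assumed v1.0 REST endpoints; the resource name, key, action IDs, and `is_viable` check are placeholders for your own values and safeguard logic.
+
+```python
+import uuid
+import requests
+
+ENDPOINT = "https://<your-personalizer-resource>.cognitiveservices.azure.com"
+HEADERS = {"Ocp-Apim-Subscription-Key": "<your-key>"}
+
+def is_viable(action_id: str) -> bool:
+    """Placeholder for your own content-safeguard check."""
+    return action_id != "article-mature-content"
+
+event_id = str(uuid.uuid4())
+rank_body = {
+    "eventId": event_id,
+    "contextFeatures": [{"userSegment": "family"}],
+    "actions": [
+        {"id": "article-cartoons", "features": [{"audience": "all"}]},
+        {"id": "article-mature-content", "features": [{"audience": "adult"}]},
+    ],
+    "deferActivation": True,  # rank with learning deactivated
+}
+top_action = requests.post(
+    f"{ENDPOINT}/personalizer/v1.0/rank", headers=HEADERS, json=rank_body
+).json()["rewardActionId"]
+
+if is_viable(top_action):
+    # Activate learning for this rank, then show the top action to the user.
+    requests.post(f"{ENDPOINT}/personalizer/v1.0/events/{event_id}/activate", headers=HEADERS)
+    action_to_show = top_action
+else:
+    # Don't activate learning; decide what to show through your own logic instead.
+    action_to_show = "article-cartoons"
+```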
++
+## Next steps
+
+[Ethics & responsible use](responsible-use-cases.md).
ai-services Plan Manage Costs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/plan-manage-costs.md
+
+ Title: Plan to manage costs for Azure AI services
+description: Learn how to plan for and manage costs for Azure AI services by using cost analysis in the Azure portal.
+++++ Last updated : 11/03/2021+++
+# Plan and manage costs for Azure AI services
+
+This article describes how to plan for and manage costs for Azure AI services. First, use the Azure pricing calculator to estimate Azure AI services costs before you add any resources for the service. Next, as you add Azure resources, review the estimated costs. After you've started using Azure AI services resources (for example, Speech, Azure AI Vision, LUIS, Language service, or Translator), use Cost Management features to set budgets and monitor costs. You can also review forecasted costs and identify spending trends to find areas where you might want to act. Costs for Azure AI services are only a portion of the monthly costs in your Azure bill. Although this article explains how to plan for and manage costs for Azure AI services, you're billed for all Azure services and resources used in your Azure subscription, including third-party services.
+
+## Prerequisites
+
+Cost analysis in Cost Management supports most Azure account types, but not all of them. To view the full list of supported account types, see [Understand Cost Management data](../cost-management-billing/costs/understand-cost-mgt-data.md?WT.mc_id=costmanagementcontent_docsacmhorizontal_-inproduct-learn). To view cost data, you need at least read access for an Azure account. For information about assigning access to Azure Cost Management data, see [Assign access to data](../cost-management-billing/costs/assign-access-acm-data.md?WT.mc_id=costmanagementcontent_docsacmhorizontal_-inproduct-learn).
+
+<!--Note for Azure service writer: If you have other prerequisites for your service, insert them here -->
+
+## Estimate costs before using Azure AI services
+
+Use the [Azure pricing calculator](https://azure.microsoft.com/pricing/calculator/) to estimate costs before you add Azure AI services.
++
+As you add new resources to your workspace, return to this calculator and add the same resource here to update your cost estimates.
+
+For more information, see [Azure AI services pricing](https://azure.microsoft.com/pricing/details/cognitive-services/).
+
+## Understand the full billing model for Azure AI services
+
+Azure AI services runs on Azure infrastructure that [accrues costs](https://azure.microsoft.com/pricing/details/cognitive-services/) when you deploy a new resource. It's important to understand that additional infrastructure might accrue costs, and you need to manage that cost when you make changes to deployed resources.
+
+When you create or use Azure AI services resources, you might get charged based on the services that you use. There are two billing models available for Azure AI services: pay-as-you-go and commitment tiers.
+
+## Pay-as-you-go
+
+With Pay-As-You-Go pricing, you are billed according to the Azure AI services offering you use, based on its billing information.
+
+| Service | Instance(s) | Billing information |
+|--|--|--|
+| **Vision** | | |
+| [Azure AI Vision](https://azure.microsoft.com/pricing/details/cognitive-services/computer-vision/) | Free, Standard (S1) | Billed by the number of transactions. Price per transaction varies per feature (Read, OCR, Spatial Analysis). For full details, see [Pricing](https://azure.microsoft.com/pricing/details/cognitive-services/computer-vision/). |
+| [Custom Vision](https://azure.microsoft.com/pricing/details/cognitive-services/custom-vision-service/) | Free, Standard | <li>Predictions are billed by the number of transactions.</li><li>Training is billed by compute hour(s).</li><li>Image storage is billed by number of images (up to 6 MB per image).</li>|
+| [Face](https://azure.microsoft.com/pricing/details/cognitive-services/face-api/) | Free, Standard | Billed by the number of transactions. |
+| **Speech** | | |
+| [Speech service](https://azure.microsoft.com/pricing/details/cognitive-services/speech-services/) | Free, Standard | Billing varies by feature (speech-to-text, text to speech, speech translation, speaker recognition). Primarily, billing is by transaction count or character count. For full details, see [Pricing](https://azure.microsoft.com/pricing/details/cognitive-services/speech-services/). |
+| **Language** | | |
+| [Language Understanding (LUIS)](https://azure.microsoft.com/pricing/details/cognitive-services/language-understanding-intelligent-services/) | Free Authoring, Free Prediction, Standard | Billed by number of transactions. Price per transaction varies by feature (speech requests, text requests). For full details, see [Pricing](https://azure.microsoft.com/pricing/details/cognitive-services/language-understanding-intelligent-services/). |
+| [QnA Maker](https://azure.microsoft.com/pricing/details/cognitive-services/qna-maker/) | Free, Standard | Subscription fee billed monthly. For full details, see [Pricing](https://azure.microsoft.com/pricing/details/cognitive-services/qna-maker/). |
+| [Language service](https://azure.microsoft.com/pricing/details/cognitive-services/text-analytics/) | Free, Standard | Billed by number of text records. |
+| [Translator](https://azure.microsoft.com/pricing/details/cognitive-services/translator/) | Free, Pay-as-you-go (S1), Volume discount (S2, S3, S4, C2, C3, C4, D3) | Pricing varies by meter and feature. For full details, see [Pricing](https://azure.microsoft.com/pricing/details/cognitive-services/translator/). <li>Text translation is billed by number of characters translated.</li><li>Document translation is billed by characters translated.</li><li>Custom translation is billed by characters of source and target training data.</li> |
+| **Decision** | | |
+| [Anomaly Detector](https://azure.microsoft.com/pricing/details/cognitive-services/anomaly-detector/) | Free, Standard | Billed by the number of transactions. |
+| [Content Moderator](https://azure.microsoft.com/pricing/details/cognitive-services/content-moderator/) | Free, Standard | Billed by the number of transactions. |
+| [Personalizer](https://azure.microsoft.com/pricing/details/cognitive-services/personalizer/) | Free, Standard (S0) | Billed by transactions per month. There are storage and transaction quotas. For full details, see [Pricing](https://azure.microsoft.com/pricing/details/cognitive-services/personalizer/). |
+
+## Commitment tier
+
+In addition to the pay-as-you-go model, Azure AI services has commitment tiers, which let you commit to using several service features for a fixed fee, enabling you to have a predictable total cost based on the needs of your workload.
+
+With commitment tier pricing, you are billed according to the plan you choose. See [Quickstart: purchase commitment tier pricing](commitment-tier.md) for information on available services, how to sign up, and considerations when purchasing a plan.
+
+> [!NOTE]
+> If you use the resource above the quota provided by the commitment plan, you will be charged for the additional usage as per the overage amount mentioned in the Azure portal when you purchase a commitment plan. For more information, see [Azure AI services pricing](https://azure.microsoft.com/pricing/details/cognitive-services/).
++++
+### Costs that typically accrue with Azure AI services
+
+Typically, after you deploy an Azure resource, costs are determined by your pricing tier and the API calls you make to your endpoint. If the service you're using has a commitment tier, going over the allotted calls in your tier may incur an overage charge.
+
+Extra costs may accrue when using the following services:
+
+#### QnA Maker
+
+When you create resources for QnA Maker, resources for other Azure services may also be created. They include:
+
+- [Azure App Service (for the runtime)](https://azure.microsoft.com/pricing/details/app-service/)
+- [Azure Cognitive Search (for the data)](https://azure.microsoft.com/pricing/details/search/)
+
+### Costs that might accrue after resource deletion
+
+#### QnA Maker
+
+After you delete QnA Maker resources, the following resources might continue to exist. They continue to accrue costs until you delete them.
+
+- [Azure App Service (for the runtime)](https://azure.microsoft.com/pricing/details/app-service/)
+- [Azure Cognitive Search (for the data)](https://azure.microsoft.com/pricing/details/search/)
+
+### Using Azure Prepayment credit with Azure AI services
+
+You can pay for Azure AI services charges with your Azure Prepayment (previously called monetary commitment) credit. However, you can't use Azure Prepayment credit to pay for charges for third-party products and services including those from the Azure Marketplace.
+
+## Monitor costs
+
+<!-- Note to Azure service writer: Modify the following as needed for your service. Replace example screenshots with ones taken for your service. If you need assistance capturing screenshots, ask banders for help. -->
+
+As you use Azure resources with Azure AI services, you incur costs. Azure resource usage unit costs vary by time intervals (seconds, minutes, hours, and days) or by unit usage (bytes, megabytes, and so on). As soon as use of an Azure AI services resource starts, costs are incurred and you can see the costs in [cost analysis](../cost-management-billing/costs/quick-acm-cost-analysis.md?WT.mc_id=costmanagementcontent_docsacmhorizontal_-inproduct-learn).
+
+When you use cost analysis, you view Azure AI services costs in graphs and tables for different time intervals. Some examples are by day, current and prior month, and year. You also view costs against budgets and forecasted costs. Switching to longer views over time can help you identify spending trends. And you see where overspending might have occurred. If you've created budgets, you can also easily see where they're exceeded.
+
+To view Azure AI services costs in cost analysis:
+
+1. Sign in to the Azure portal.
+2. Open the scope in the Azure portal and select **Cost analysis** in the menu. For example, go to **Subscriptions**, select a subscription from the list, and then select **Cost analysis** in the menu. Select **Scope** to switch to a different scope in cost analysis.
+3. By default, costs for services are shown in the first donut chart. Select the area in the chart labeled Azure AI services.
+
+Actual monthly costs are shown when you initially open cost analysis. Here's an example showing all monthly usage costs.
++
+- To narrow costs for a single service, like Azure AI services, select **Add filter** and then select **Service name**. Then, select **Azure AI services**.
+
+Here's an example showing costs for just Azure AI services.
++
+In the preceding example, you see the current cost for the service. Costs by Azure regions (locations) and Azure AI services costs by resource group are also shown. From here, you can explore costs on your own.
+
+## Create budgets
+
+You can create [budgets](../cost-management-billing/costs/tutorial-acm-create-budgets.md?WT.mc_id=costmanagementcontent_docsacmhorizontal_-inproduct-learn) to manage costs and create [alerts](../cost-management-billing/costs/cost-mgt-alerts-monitor-usage-spending.md?WT.mc_id=costmanagementcontent_docsacmhorizontal_-inproduct-learn) that automatically notify stakeholders of spending anomalies and overspending risks. Alerts are based on spending compared to budget and cost thresholds. Budgets and alerts are created for Azure subscriptions and resource groups, so they're useful as part of an overall cost monitoring strategy.
+
+Budgets can be created with filters for specific resources or services in Azure if you want more granularity present in your monitoring. Filters help ensure that you don't accidentally create new resources that cost you more money. For more about the filter options when you create a budget, see [Group and filter options](../cost-management-billing/costs/group-filter.md?WT.mc_id=costmanagementcontent_docsacmhorizontal_-inproduct-learn).
+
+## Export cost data
+
+You can also [export your cost data](../cost-management-billing/costs/tutorial-export-acm-data.md?WT.mc_id=costmanagementcontent_docsacmhorizontal_-inproduct-learn) to a storage account. This is helpful when you or others need to do more data analysis for costs. For example, finance teams can analyze the data using Excel or Power BI. You can export your costs on a daily, weekly, or monthly schedule and set a custom date range. Exporting cost data is the recommended way to retrieve cost datasets.
+
+## Next steps
+
+- Learn [how to optimize your cloud investment with Azure Cost Management](../cost-management-billing/costs/cost-mgt-best-practices.md?WT.mc_id=costmanagementcontent_docsacmhorizontal_-inproduct-learn).
+- Learn more about managing costs with [cost analysis](../cost-management-billing/costs/quick-acm-cost-analysis.md?WT.mc_id=costmanagementcontent_docsacmhorizontal_-inproduct-learn).
+- Learn about how to [prevent unexpected costs](../cost-management-billing/cost-management-billing-overview.md?WT.mc_id=costmanagementcontent_docsacmhorizontal_-inproduct-learn).
+- Take the [Cost Management](/training/paths/control-spending-manage-bills?WT.mc_id=costmanagementcontent_docsacmhorizontal_-inproduct-learn) guided learning course.
ai-services Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/policy-reference.md
+
+ Title: Built-in policy definitions for Azure AI services
+description: Lists Azure Policy built-in policy definitions for Azure AI services. These built-in policy definitions provide common approaches to managing your Azure resources.
Last updated : 07/18/2023++++++
+# Azure Policy built-in policy definitions for Azure AI services
+
+This page is an index of [Azure Policy](../governance/policy/overview.md) built-in policy
+definitions for Azure AI services. For additional Azure Policy built-ins for other services,
+see [Azure Policy built-in definitions](../governance/policy/samples/built-in-policies.md).
+
+The name of each built-in policy definition links to the policy definition in the Azure portal. Use
+the link in the **Version** column to view the source on the
+[Azure Policy GitHub repo](https://github.com/Azure/azure-policy).
+
+## Azure AI services
++
+## Next steps
+
+- See the built-ins on the [Azure Policy GitHub repo](https://github.com/Azure/azure-policy).
+- Review the [Azure Policy definition structure](../governance/policy/concepts/definition-structure.md).
+- Review [Understanding policy effects](../governance/policy/concepts/effects.md).
ai-services Azure Resources https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/qnamaker/Concepts/azure-resources.md
+
+ Title: Azure resources - QnA Maker
+description: QnA Maker uses several Azure sources, each with a different purpose. Understanding how they are used individually allows you to plan for and select the correct pricing tier or know when to change your pricing tier. Understanding how they are used in combination allows you to find and fix problems when they occur.
++++++ Last updated : 02/02/2022+++
+# Azure resources for QnA Maker
+
+QnA Maker uses several Azure sources, each with a different purpose. Understanding how they are used individually allows you to plan for and select the correct pricing tier or know when to change your pricing tier. Understanding how they are used _in combination_ allows you to find and fix problems when they occur.
++
+## Resource planning
+
+When you first develop a QnA Maker knowledge base, in the prototype phase, it is common to have a single QnA Maker resource for both testing and production.
+
+When you move into the development phase of the project, you should consider:
+
+* How many languages will your knowledge base system hold?
+* How many regions does your knowledge base need to be available in?
+* How many documents in each domain will your system hold?
+
+Plan to have a single QnA Maker resource hold all knowledge bases that have the same language, region, and subject domain combination.
+
+## Pricing tier considerations
+
+Typically there are three parameters you need to consider:
+
+* **The throughput you need from the service**:
+ * Select the appropriate [App Plan](https://azure.microsoft.com/pricing/details/app-service/plans/) for your App service based on your needs. You can [scale up](../../../app-service/manage-scale-up.md) or down the App.
+ * This should also influence your Azure **Cognitive Search** SKU selection; see more details [here](../../../search/search-sku-tier.md). Additionally, you may need to adjust Cognitive Search [capacity](../../../search/search-capacity-planning.md) with replicas.
+
+* **Size and the number of knowledge bases**: Choose the appropriate [Azure search SKU](https://azure.microsoft.com/pricing/details/search/) for your scenario. Typically, you decide the number of knowledge bases you need based on the number of different subject domains. One subject domain (for a single language) should be in one knowledge base.
+
+Your Azure Search service resource must have been created after January 2019 and cannot be in the free (shared) tier. There is no support to configure customer-managed keys in the Azure portal.
+
+> [!IMPORTANT]
+> You can publish N-1 knowledge bases in a particular tier, where N is the maximum number of indexes allowed in the tier. Also check the maximum size and the number of documents allowed per tier.
+
+For example, if your tier has 15 allowed indexes, you can publish 14 knowledge bases (one index per published knowledge base). The fifteenth index is used for all the knowledge bases for authoring and testing.
+
+* **Number of documents as sources**: The free SKU of the QnA Maker management service limits the number of documents you can manage via the portal and the APIs to 3 (of 1 MB size each). The standard SKU has no limits to the number of documents you can manage. See more details [here](https://aka.ms/qnamaker-pricing).
+
+The following table gives you some high-level guidelines.
+
+| | QnA Maker Management | App Service | Azure Cognitive Search | Limitations |
+| -- | -- | -- | -- | -- |
+| **Experimentation** | Free SKU | Free Tier | Free Tier | Publish Up to 2 KBs, 50 MB size |
+| **Dev/Test Environment** | Standard SKU | Shared | Basic | Publish Up to 14 KBs, 2 GB size |
+| **Production Environment** | Standard SKU | Basic | Standard | Publish Up to 49 KBs, 25 GB size |
+
+## Recommended Settings
+
+|Target QPS | App Service | Azure Cognitive Search |
+| -- | -- | -- |
+| 3 | S1, one Replica | S1, one Replica |
+| 50 | S3, 10 Replicas | S1, 12 Replicas |
+| 80 | S3, 10 Replicas | S3, 12 Replicas |
+| 100 | P3V2, 10 Replicas | S3, 12 Replicas, 3 Partitions |
+| 200 to 250 | P3V2, 20 Replicas | S3, 12 Replicas, 3 Partitions |
+
+## When to change a pricing tier
+
+|Upgrade|Reason|
+|--|--|
+|[Upgrade](../How-to/set-up-qnamaker-service-azure.md#upgrade-qna-maker-sku) QnA Maker management SKU|You want to have more QnA pairs or document sources in your knowledge base.|
+|[Upgrade](../How-to/set-up-qnamaker-service-azure.md#upgrade-app-service) App Service SKU and check Cognitive Search tier and [create Cognitive Search replicas](../../../search/search-capacity-planning.md)|Your knowledge base needs to serve more requests from your client app, such as a chat bot.|
+|[Upgrade](../How-to/set-up-qnamaker-service-azure.md#upgrade-the-azure-cognitive-search-service) Azure Cognitive Search service|You plan to have many knowledge bases.|
+
+Get the latest runtime updates by [updating your App Service in the Azure portal](../how-to/configure-QnA-Maker-resources.md#get-the-latest-runtime-updates).
+
+## Keys in QnA Maker
+
+Your QnA Maker service deals with two kinds of keys: **authoring keys** and **query endpoint keys** used with the runtime hosted in the App service.
+
+Use these keys when making requests to the service through APIs.
+
+![Key management](../media/authoring-key.png)
+
+|Name|Location|Purpose|
+|--|--|--|
+|Authoring/Subscription key|[Azure portal](https://azure.microsoft.com/free/cognitive-services/)|These keys are used to access the [QnA Maker management service APIs](/rest/api/cognitiveservices/qnamaker4.0/knowledgebase). These APIs let you edit the questions and answers in your knowledge base, and publish your knowledge base. These keys are created when you create a new QnA Maker service.<br><br>Find these keys on the **Azure AI services** resource on the **Keys and Endpoint** page.|
+|Query endpoint key|[QnA Maker portal](https://www.qnamaker.ai)|These keys are used to query the published knowledge base endpoint to get a response for a user question. You typically use this query endpoint in your chat bot or in the client application code that connects to the QnA Maker service. These keys are created when you publish your QnA Maker knowledge base.<br><br>Find these keys in the **Service settings** page. Find this page from the user's menu in the upper right of the page on the drop-down menu.|
+
+### Find authoring keys in the Azure portal
+
+You can view and reset your authoring keys from the Azure portal, where you created the QnA Maker resource.
+
+1. Go to the QnA Maker resource in the Azure portal and select the resource that has the _Azure AI services_ type:
+
+ ![QnA Maker resource list](../media/qnamaker-how-to-key-management/qnamaker-resource-list.png)
+
+2. Go to **Keys and Endpoint**:
+
+ ![QnA Maker managed (Preview) Subscription key](../media/qnamaker-how-to-key-management/subscription-key-v2.png)
+
+### Find query endpoint keys in the QnA Maker portal
+
+The endpoint is in the same region as the resource because the endpoint keys are used to make a call to the knowledge base.
+
+Endpoint keys can be managed from the [QnA Maker portal](https://qnamaker.ai).
+
+1. Sign in to the [QnA Maker portal](https://qnamaker.ai), go to your profile, and then select **Service settings**:
+
+ ![Endpoint key](../media/qnamaker-how-to-key-management/Endpoint-keys.png)
+
+2. View or reset your keys:
+
+ > [!div class="mx-imgBorder"]
+ > ![Endpoint key manager](../media/qnamaker-how-to-key-management/Endpoint-keys1.png)
+
+ >[!NOTE]
+ >Refresh your keys if you think they've been compromised. This may require corresponding changes to your client application or bot code.
+
+## Management service region
+
+The management service of QnA Maker is used only for the QnA Maker portal and for initial data processing. This service is available only in the **West US** region. No customer data is stored in this West US service.
+
+## Resource naming considerations
+
+The resource name for the QnA Maker resource, such as `qna-westus-f0-b`, is also used to name the other resources.
+
+The Azure portal create window allows you to create a QnA Maker resource and select the pricing tiers for the other resources.
+
+> [!div class="mx-imgBorder"]
+> ![Screenshot of Azure portal for QnA Maker resource creation](../media/concept-azure-resource/create-blade.png)
+
+After the resources are created, they have the same name, except for the optional Application Insights resource, which postpends characters to the name.
+
+> [!div class="mx-imgBorder"]
+> ![Screenshot of Azure portal resource listing](../media/concept-azure-resource/all-azure-resources-created-qnamaker.png)
+
+> [!TIP]
+> Create a new resource group when you create a QnA Maker resource. That allows you to see all resources associated with the QnA Maker resource when searching by resource group.
+
+> [!TIP]
+> Use a naming convention to indicate pricing tiers within the name of the resource or the resource group. When you receive errors from creating a new knowledge base, or adding new documents, the Cognitive Search pricing tier limit is a common issue.
+
+## Resource purposes
+
+Each Azure resource created with QnA Maker has a specific purpose:
+
+* QnA Maker resource
+* Cognitive Search resource
+* App Service
+* App Plan Service
+* Application Insights Service
+
+### QnA Maker resource
+
+The QnA Maker resource provides access to the authoring and publishing APIs.
+
+#### QnA Maker resource configuration settings
+
+When you create a new knowledge base in the [QnA Maker portal](https://qnamaker.ai), the **Language** setting is the only setting that is applied at the resource level. You select the language when you create the first knowledge base for the resource.
+
+### Cognitive Search resource
+
+The [Cognitive Search](../../../search/index.yml) resource is used to:
+
+* Store the QnA pairs
+* Provide the initial ranking (ranker #1) of the QnA pairs at runtime
+
+#### Index usage
+
+The resource keeps one index to act as the test index and the remaining indexes correlate to one published knowledge base each.
+
+A resource priced to hold 15 indexes, will hold 14 published knowledge bases, and one index is used for testing all the knowledge bases. This test index is partitioned by knowledge base so that a query using the interactive test pane will use the test index but only return results from the specific partition associated with the specific knowledge base.
+
+#### Language usage
+
+The first knowledge base created in the QnA Maker resource is used to determine the _single_ language set for the Cognitive Search resource and all its indexes. You can only have _one language set_ for a QnA Maker service.
+
+#### Using a single Cognitive Search service
+
+If you create a QnA service and its dependencies (such as Search) through the portal, a Search service is created for you and linked to the QnA Maker service. After these resources are created, you can update the App Service setting to use a previously existing Search service and remove the one you just created.
+
+Learn [how to configure](../how-to/configure-QnA-Maker-resources.md#configure-qna-maker-to-use-different-cognitive-search-resource) QnA Maker to use a different Azure AI service resource than the one created as part of the QnA Maker resource creation process.
+
+### App service and App service plan
+
+The [App service](../../../app-service/index.yml) is used by your client application to access the published knowledge bases via the runtime endpoint. App service includes the natural language processing (NLP) based second ranking layer (ranker #2) of the QnA pairs at runtime. The second ranking applies intelligent filters that can include metadata and follow-up prompts.
+
+All published knowledge bases use the same URL endpoint; to query a specific published knowledge base, specify its **knowledge base ID** within the route.
+
+`{RuntimeEndpoint}/qnamaker/knowledgebases/{kbId}/generateAnswer`
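+
+For example, a client can call this endpoint with Python and the `requests` library, as sketched below. The host name, knowledge base ID, endpoint key, and question are placeholders; copy the real values from the QnA Maker portal's **Service settings** page and your publish details.
+
+```python
+import requests
+
+# Placeholder values - replace with your published knowledge base details.
+runtime_endpoint = "https://<your-app-service>.azurewebsites.net"
+kb_id = "<your-knowledge-base-id>"
+endpoint_key = "<your-endpoint-key>"
+
+headers = {
+    "Authorization": f"EndpointKey {endpoint_key}",
+    "Content-Type": "application/json",
+}
+body = {"question": "How do I reset my password?", "top": 3}
+
+response = requests.post(
+    f"{runtime_endpoint}/qnamaker/knowledgebases/{kb_id}/generateAnswer",
+    headers=headers,
+    json=body,
+)
+for answer in response.json()["answers"]:
+    print(answer["score"], answer["answer"])
+```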
+
+### Application Insights
+
+[Application Insights](../../../azure-monitor/app/app-insights-overview.md) is used to collect chat logs and telemetry. Review the common [Kusto queries](../how-to/get-analytics-knowledge-base.md) for information about your service.
+
+### Share services with QnA Maker
+
+QnA Maker creates several Azure resources. To reduce management and benefit from cost sharing, use the following table to understand what you can and can't share:
+
+|Service|Share|Reason|
+|--|--|--|
+|Azure AI services|X|Not possible by design|
+|App Service plan|✔️|Fixed disk space is allocated for an App Service plan. If other apps sharing the same App Service plan use significant disk space, the QnA Maker App Service instance will encounter problems.|
+|App Service|X|Not possible by design|
+|Application Insights|✔️|Can be shared|
+|Search service|✔️|1. `testkb` is a reserved name for the QnA Maker service; it can't be used by others.<br>2. The synonym map named `synonym-map` is reserved for the QnA Maker service.<br>3. The number of published knowledge bases is limited by the Search service tier. If there are free indexes available, other services can use them.|
+
+## Next steps
+
+* Learn about the QnA Maker [knowledge base](../how-to/manage-knowledge-bases.md)
+* Understand a [knowledge base life cycle](development-lifecycle-knowledge-base.md)
+* Review service and knowledge base [limits](../limits.md)
ai-services Best Practices https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/qnamaker/Concepts/best-practices.md
+
+ Title: Best practices - QnA Maker
+description: Use these best practices to improve your knowledge base and provide better results to your application/chat bot's end users.
++++++ Last updated : 11/19/2021+++
+# Best practices of a QnA Maker knowledge base
+
+The [knowledge base development lifecycle](../concepts/development-lifecycle-knowledge-base.md) guides you on how to manage your KB from beginning to end. Use these best practices to improve your knowledge base and provide better results to your client application or chat bot's end users.
++
+## Extraction
+
+The QnA Maker service is continually improving the algorithms that extract QnAs from content and expanding the list of supported file and HTML formats. Follow the [guidelines](../concepts/data-sources-and-content.md) for data extraction based on your document type.
+
+In general, FAQ pages should be stand-alone and not combined with other information. Product manuals should have clear headings and preferably an index page.
+
+### Configuring multi-turn
+
+[Create your knowledge base](../how-to/multi-turn.md#create-a-multi-turn-conversation-from-a-documents-structure) with multi-turn extraction enabled. If your knowledge base does or should support question hierarchy, this hierarchy can be extracted from the document or created after the document is extracted.
+
+## Creating good questions and answers
+
+### Good questions
+
+The best questions are simple. Consider the key word or phrase for each question then create a simple question for that key word or phrase.
+
+Add as many alternate questions as you need but keep the alterations simple. Adding more words or phrasings that are not part of the main goal of the question does not help QnA Maker find a match.
++
+### Add relevant alternative questions
+
+Your user may enter questions with either a conversational style of text, `How do I add a toner cartridge to my printer?` or a keyword search such as `toner cartridge`. The knowledge base should have both styles of questions in order to correctly return the best answer. If you aren't sure what keywords a customer is entering, use Application Insights data to analyze queries.
+
+### Good answers
+
+The best answers are simple answers but not too simple. Do not use answers such as `yes` and `no`. If your answer should link to other sources or provide a rich experience with media and links, use [metadata tagging](../how-to/edit-knowledge-base.md#add-metadata) to distinguish between answers, then [submit the query](../how-to/metadata-generateanswer-usage.md#generateanswer-request-configuration) with metadata tags in the `strictFilters` property to get the correct answer version.
+
+|Answer|Follow-up prompts|
+|--|--|
+|Power down the Surface laptop with the power button on the keyboard.|* Key-combinations to sleep, shut down, and restart.<br>* How to hard-boot a Surface laptop<br>* How to change the BIOS for a Surface laptop<br>* Differences between sleep, shut down and restart|
+|Customer service is available via phone, Skype, and text message 24 hours a day.|* Contact information for sales.<br> * Office and store locations and hours for an in-person visit.<br> * Accessories for a Surface laptop.|
+
+## Chit-Chat
+Add chit-chat to your bot to make it more conversational and engaging, with low effort. You can easily add chit-chat data sets from pre-defined personalities when creating your KB, and change them at any time. Learn how to [add chit-chat to your KB](../how-to/chit-chat-knowledge-base.md).
+
+Chit-chat is supported in [many languages](../how-to/chit-chat-knowledge-base.md#language-support).
+
+### Choosing a personality
+Chit-chat is supported for several predefined personalities:
+
+|Personality |QnA Maker Dataset file |
+||--|
+|Professional |[qna_chitchat_professional.tsv](https://qnamakerstore.blob.core.windows.net/qnamakerdata/editorial/qna_chitchat_professional.tsv) |
+|Friendly |[qna_chitchat_friendly.tsv](https://qnamakerstore.blob.core.windows.net/qnamakerdata/editorial/qna_chitchat_friendly.tsv) |
+|Witty |[qna_chitchat_witty.tsv](https://qnamakerstore.blob.core.windows.net/qnamakerdata/editorial/qna_chitchat_witty.tsv) |
+|Caring |[qna_chitchat_caring.tsv](https://qnamakerstore.blob.core.windows.net/qnamakerdata/editorial/qna_chitchat_caring.tsv) |
+|Enthusiastic |[qna_chitchat_enthusiastic.tsv](https://qnamakerstore.blob.core.windows.net/qnamakerdata/editorial/qna_chitchat_enthusiastic.tsv) |
+
+The responses range from formal to informal and irreverent. Select the personality that is closest aligned with the tone you want for your bot. You can view the [datasets](https://github.com/Microsoft/BotBuilder-PersonalityChat/tree/master/CSharp/Datasets), and choose one that serves as a base for your bot, and then customize the responses.
+
+### Edit bot-specific questions
+There are some bot-specific questions that are part of the chit-chat data set, and have been filled in with generic answers. Change these answers to best reflect your bot details.
+
+We recommend making the following chit-chat QnAs more specific:
+
+* Who are you?
+* What can you do?
+* How old are you?
+* Who created you?
+* Hello
+
+### Adding custom chit-chat with a metadata tag
+
+If you add your own chit-chat QnA pairs, make sure to add metadata so these answers are returned. The metadata name/value pair is `editorial:chitchat`.
+
+## Searching for answers
+
+The GenerateAnswer API uses both the questions and the answers to search for the best answer to a user's query.
+
+### Searching questions only when answer is not relevant
+
+Use the [`RankerType=QuestionOnly`](#choosing-ranker-type) if you don't want to search answers.
+
+An example of this is when the knowledge base is a catalog of acronyms as questions, with their full forms as the answers. In that case, the answer text won't help to search for the appropriate answer.
+
+## Ranking/Scoring
+Make sure you are making the best use of the ranking features QnA Maker supports. Doing so will improve the likelihood that a given user query is answered with an appropriate response.
+
+### Choosing a threshold
+
+The default [confidence score](confidence-score.md) that is used as a threshold is 0, however you can [change the threshold](confidence-score.md#set-threshold) for your KB based on your needs. Since every KB is different, you should test and choose the threshold that is best suited for your KB.
+
+### Choosing Ranker type
+By default, QnA Maker searches through questions and answers. If you want to search through questions only, to generate an answer, use the `RankerType=QuestionOnly` in the POST body of the GenerateAnswer request.
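+
+For example, a GenerateAnswer request body that searches questions only might look like the following sketch (the question text is illustrative):
+
+```python
+# Illustrative GenerateAnswer request body; RankerType is the property of interest here.
+generate_answer_body = {
+    "question": "WFH",
+    "top": 3,
+    "RankerType": "QuestionOnly",  # match against question text only, not answer text
+}
+```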
+
+### Add alternate questions
+[Alternate questions](../how-to/edit-knowledge-base.md) improve the likelihood of a match with a user query. Alternate questions are useful when there are multiple ways in which the same question may be asked. This can include changes in the sentence structure and word-style.
+
+|Original query|Alternate queries|Change|
+|--|--|--|
+|Is parking available?|Do you have car park?|sentence structure|
+|Hi|Yo<br>Hey there!|word-style or slang|
+
+<a name="use-metadata-filters"></a>
+
+### Use metadata tags to filter questions and answers
+
+[Metadata](../how-to/edit-knowledge-base.md) adds the ability for a client application to know it should not take all answers but instead to narrow down the results of a user query based on metadata tags. The knowledge base answer can differ based on the metadata tag, even if the query is the same. For example, *"where is parking located"* can have a different answer if the location of the restaurant branch is different - that is, the metadata is *Location: Seattle* versus *Location: Redmond*.
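+
+For example, a GenerateAnswer request body that uses `strictFilters` to scope answers to the Seattle branch might look like the following sketch (the metadata name and value are illustrative):
+
+```python
+# Illustrative GenerateAnswer request body; strictFilters narrows results by metadata tag.
+generate_answer_body = {
+    "question": "where is parking located",
+    "top": 1,
+    "strictFilters": [
+        {"name": "Location", "value": "Seattle"},
+    ],
+}
+```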
+
+### Use synonyms
+
+While there is some support for synonyms in the English language, use case-insensitive word alterations via the [Alterations API](/rest/api/cognitiveservices/qnamaker/alterations/replace) to add synonyms to keywords that take different forms. Synonyms are added at the QnA Maker service-level and **shared by all knowledge bases in the service**.
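+
+The following sketch shows a word alterations call with Python and the `requests` library, assuming the v4.0 Alterations endpoint; the resource name, authoring key, and word pairs are placeholders, and the exact route and payload shape may vary by API version.
+
+```python
+import requests
+
+# Placeholder resource name and authoring key.
+authoring_endpoint = "https://<your-qnamaker-resource>.cognitiveservices.azure.com"
+headers = {"Ocp-Apim-Subscription-Key": "<your-authoring-key>"}
+
+# Each inner list groups word forms that should be treated as equivalent.
+body = {
+    "wordAlterations": [
+        {"alterations": ["botframework", "bot framework"]},
+        {"alterations": ["webchat", "web chat"]},
+    ]
+}
+
+response = requests.put(
+    f"{authoring_endpoint}/qnamaker/v4.0/alterations", headers=headers, json=body
+)
+response.raise_for_status()  # a success status code means the alterations were replaced
+```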
+
+### Use distinct words to differentiate questions
+QnA Maker's ranking algorithm, which matches a user query with a question in the knowledge base, works best if each question addresses a different need. Repetition of the same word set between questions reduces the likelihood that the right answer is chosen for a given user query with those words.
+
+For example, you might have two separate QnAs with the following questions:
+
+|QnAs|
+|--|
+|where is the parking *location*|
+|where is the ATM *location*|
+
+Since these two QnAs are phrased with very similar words, this similarity could cause very similar scores for many user queries that are phrased like *"where is the `<x>` location"*. Instead, try to clearly differentiate with queries like *"where is the parking lot"* and *"where is the ATM"*, by avoiding words like "location" that could be in many questions in your KB.
+
+## Collaborate
+QnA Maker allows users to collaborate on a knowledge base. Users need access to the Azure AI QnA Maker resource group in order to access the knowledge bases. Some organizations may want to outsource the knowledge base editing and maintenance, and still be able to protect access to their Azure resources. This editor-approver model is done by setting up two identical [QnA Maker services](../How-to/set-up-qnamaker-service-azure.md) in different subscriptions and selecting one for the edit-testing cycle. Once testing is finished, the knowledge base contents are transferred with an [import-export](../tutorials/export-knowledge-base.md) process to the approver's QnA Maker service, which then publishes the knowledge base and updates the endpoint.
+
+## Active learning
+
+[Active learning](../How-to/use-active-learning.md) does the best job of suggesting alternative questions when it has a wide range of quality and quantity of user-based queries. It is important to allow client-applications' user queries to participate in the active learning feedback loop without censorship. Once questions are suggested in the QnA Maker portal, you can **[filter by suggestions](../how-to/improve-knowledge-base.md)** then review and accept or reject those suggestions.
+
+## Next steps
+
+> [!div class="nextstepaction"]
+> [Edit a knowledge base](../How-to/edit-knowledge-base.md)
ai-services Confidence Score https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/qnamaker/Concepts/confidence-score.md
+
+ Title: Confidence score - QnA Maker
+
+description: When a user query is matched against a knowledge base, QnA Maker returns relevant answers, along with a confidence score.
+++++++ Last updated : 01/27/2020+++
+# The confidence score of an answer
+When a user query is matched against a knowledge base, QnA Maker returns relevant answers, along with a confidence score. This score indicates the confidence that the answer is the right match for the given user query.
+
+The confidence score is a number between 0 and 100. A score of 100 is likely an exact match, while a score of 0 means that no matching answer was found. The higher the score, the greater the confidence in the answer. For a given query, multiple answers could be returned. In that case, the answers are returned in order of decreasing confidence score.
+
+In the example below, you can see one QnA entity with two questions.
++
+![Sample QnA pair](../media/qnamaker-concepts-confidencescore/ranker-example-qna.png)
+
+For the above example, you can expect scores like the sample score range below for different types of user queries:
++
+![Ranker score range](../media/qnamaker-concepts-confidencescore/ranker-score-range.png)
++
+The following table indicates typical confidence associated for a given score.
+
+|Score Value|Score Meaning|Example Query|
+|--|--|--|
+|90 - 100|A near exact match of user query and a KB question|"My changes aren't updated in KB after publish"|
+|> 70|High confidence - typically a good answer that completely answers the user's query|"I published my KB but it's not updated"|
+|50 - 70|Medium confidence - typically a fairly good answer that should answer the main intent of the user query|"Should I save my updates before I publish my KB?"|
+|30 - 50|Low confidence - typically a related answer that partially answers the user's intent|"What does the save and train do?"|
+|< 30|Very low confidence - typically does not answer the user's query, but has some matching words or phrases|"Where can I add synonyms to my KB"|
+|0|No match, so the answer is not returned.|"How much does the service cost"|
+
+## Choose a score threshold
+The table above shows the scores that are expected on most KBs. However, since every KB is different and has different types of words, intents, and goals, we recommend you test and choose the threshold that works best for you. By default, the threshold is set to 0 so that all possible answers are returned. The recommended threshold that should work for most KBs is **50**.
+
+When choosing your threshold, keep in mind the balance between Accuracy and Coverage, and tweak your threshold based on your requirements.
+
+- If **Accuracy** (or precision) is more important for your scenario, then increase your threshold. This way, every time you return an answer, it will be a much more CONFIDENT case, and much more likely to be the answer users are looking for. In this case, you might end up leaving more questions unanswered. *For example:* if you make the threshold **70**, you might miss some ambiguous examples like "what is save and train?".
+
+- If **Coverage** (or recall) is more important, and you want to answer as many questions as possible even if there is only a partial relation to the user's question, then LOWER the threshold. This means there could be more cases where the answer does not answer the user's actual query, but gives some other somewhat related answer. *For example:* if you make the threshold **30**, you might give answers for queries like "Where can I edit my KB?"
+
+> [!NOTE]
+> Newer versions of QnA Maker include improvements to scoring logic, and could affect your threshold. Any time you update the service, make sure to test and tweak the threshold if necessary. You can check your QnA Service version [here](https://www.qnamaker.ai/UserSettings), and see how to get the latest updates [here](../how-to/configure-QnA-Maker-resources.md#get-the-latest-runtime-updates).
+
+## Set threshold
+
+Set the threshold score as a property of the [GenerateAnswer API JSON body](../how-to/metadata-generateanswer-usage.md#generateanswer-request-configuration). This means you set it for each call to GenerateAnswer.
+
+From the bot framework, set the score as part of the options object with [C#](../how-to/metadata-generateanswer-usage.md?#use-qna-maker-with-a-bot-in-c) or [Node.js](../how-to/metadata-generateanswer-usage.md?#use-qna-maker-with-a-bot-in-nodejs).
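+
+For example, a GenerateAnswer request body that applies a threshold of 50 might look like the following sketch (the question text and property casing are illustrative):
+
+```python
+# Illustrative GenerateAnswer request body; answers scoring below scoreThreshold are not returned.
+generate_answer_body = {
+    "question": "How do I publish my knowledge base?",
+    "top": 3,
+    "scoreThreshold": 50,
+}
+```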
+
+## Improve confidence scores
+To improve the confidence score of a particular response to a user query, you can add the user query to the knowledge base as an alternate question on that response. You can also use case-insensitive [word alterations](/rest/api/cognitiveservices/qnamaker/alterations/replace) to add synonyms to keywords in your KB.
++
+## Similar confidence scores
+When multiple responses have a similar confidence score, it is likely that the query was too generic and therefore matched multiple answers with similar likelihood. Try to structure your QnAs better so that every QnA entity has a distinct intent.
++
+## Confidence score differences between test and production
+The confidence score of an answer may change negligibly between the test and published version of the knowledge base even if the content is the same. This is because the content of the test and the published knowledge base are located in different Azure Cognitive Search indexes.
+
+The test index holds all the QnA pairs of your knowledge bases. When querying the test index, the query applies to the entire index then results are restricted to the partition for that specific knowledge base. If the test query results are negatively impacting your ability to validate the knowledge base, you can:
+* organize your knowledge base using one of the following:
+ * 1 resource restricted to 1 KB: restrict your single QnA resource (and the resulting Azure Cognitive Search test index) to a single knowledge base.
+ * 2 resources - 1 for test, 1 for production: have two QnA Maker resources, using one for testing (with its own test and production indexes) and one for production (also having its own test and production indexes)
+* and, always use the same parameters, such as **[top](../how-to/improve-knowledge-base.md#use-the-top-property-in-the-generateanswer-request-to-get-several-matching-answers)** when querying both your test and production knowledge base
+
+When you publish a knowledge base, the question and answer contents of your knowledge base moves from the test index to a production index in Azure search. See how the [publish](../quickstarts/create-publish-knowledge-base.md#publish-the-knowledge-base) operation works.
+
+If you have a knowledge base in different regions, each region uses its own Azure Cognitive Search index. Because different indexes are used, the scores will not be exactly the same.
++
+## No match found
+When no good match is found by the ranker, a confidence score of 0.0 or "None" is returned and the default response is "No good match found in the KB". You can override this [default response](../how-to/metadata-generateanswer-usage.md) in the bot or application code calling the endpoint. Alternatively, you can also set the override response in Azure, which changes the default for all knowledge bases deployed in a particular QnA Maker service.
+
+## Next steps
+> [!div class="nextstepaction"]
+> [Best practices](./best-practices.md)
ai-services Data Sources And Content https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/qnamaker/Concepts/data-sources-and-content.md
+
+ Title: Data sources and content types - QnA Maker
+description: Learn how to import question and answer pairs from data sources and supported content types, which include many standard structured documents such as PDF, DOCX, and TXT - QnA Maker.
++++++ Last updated : 01/11/2022++
+# Importing from data sources
+
+A knowledge base consists of question and answer pairs brought in by public URLs and files.
++
+## Data source locations
+
+Content is brought into a knowledge base from a data source. Data source locations are **public URLs or files**, which do not require authentication.
+
+[SharePoint files](../how-to/add-sharepoint-datasources.md), secured with authentication, are the exception. SharePoint resources must be files, not web pages.
+
+QnA Maker supports public URLs that end with the .ASPX web extension and are not secured with authentication.
+
+## Chit-chat content
+
+The chit-chat content set is offered as a complete content data source in several languages and conversational styles. This can be a starting point for your bot's personality, and it will save you the time and cost of writing them from scratch. Learn [how to add chit-chat content](../how-to/chit-chat-knowledge-base.md) to your knowledge base.
+
+## Structured data format through import
+
+Importing a knowledge base replaces the content of the existing knowledge base. Import requires a structured `.tsv` file that contains questions and answers. This information helps QnA Maker group the question-answer pairs and attribute them to a particular data source.
+
+| Question | Answer | Source| Metadata (1 key: 1 value) |
+|--|--|--|--|
+| Question1 | Answer1 | Url1 | <code>Key1:Value1 &#124; Key2:Value2</code> |
+| Question2 | Answer2 | Editorial| `Key:Value` |
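+If you build the import file programmatically, a small sketch like the following writes the tab-separated layout shown above (the rows are illustrative; for the exact header row, export an existing knowledge base as described later in this article):
+
+```python
+import csv
+
+# Illustrative rows; match the header row to an exported knowledge base.
+rows = [
+    ["Question", "Answer", "Source", "Metadata"],
+    ["How do I reset my password?", "Use the account portal.", "Editorial", "Category:Account"],
+]
+
+# Write a tab-separated file that can be imported from the Settings page.
+with open("knowledge-base.tsv", "w", newline="", encoding="utf-8") as handle:
+    writer = csv.writer(handle, delimiter="\t")
+    writer.writerows(rows)
+```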
+
+## Structured multi-turn format through import
+
+You can create multi-turn conversations in a `.tsv` file format. The format lets you create multi-turn conversations by analyzing previous chat logs (with other processes, not QnA Maker) and then creating the `.tsv` file through automation. Import the file to replace the existing knowledge base.
+
+> [!div class="mx-imgBorder"]
+> ![Conceptual model of 3 levels of multi-turn question](../media/qnamaker-concepts-knowledgebase/nested-multi-turn.png)
+
+The column specific to multi-turn in a multi-turn `.tsv` is **Prompts**. An example `.tsv`, shown in Excel, shows the information to include to define the multi-turn children:
+
+```JSON
+[
+ {"displayOrder":0,"qnaId":2,"displayText":"Level 2 Question A"},
+ {"displayOrder":0,"qnaId":3,"displayText":"Level 2 - Question B"}
+]
+```
+
+The **displayOrder** is numeric and the **displayText** is text that shouldn't include markdown.
+
+> [!div class="mx-imgBorder"]
+> ![Multi-turn question example as shown in Excel](../media/qnamaker-concepts-knowledgebase/multi-turn-tsv-columns-excel-example.png)
+
+## Export as example
+
+If you are unsure how to represent your QnA pair in the `.tsv` file:
+* Use this [downloadable example from GitHub](https://github.com/Azure-Samples/cognitive-services-sample-data-files/blob/master/qna-maker/data-source-formats/Structured-multi-turn-format.xlsx?raw=true)
+* Or create the pair in the QnA Maker portal, save, then export the knowledge base for an example of how to represent the pair.
+
+## Unstructured data format
+
+You can also create a knowledge base from unstructured content imported via a file. Currently, this functionality is available only through document upload, for documents in any of the supported file formats.
+
+> [!IMPORTANT]
+> The support for unstructured content via file upload is available only in [question answering](../../language-service/question-answering/overview.md).
+
+## Content types of documents you can add to a knowledge base
+Content types include many standard structured documents such as PDF, DOC, and TXT.
+
+### File and URL data types
+
+The table below summarizes the types of content and file formats that are supported by QnA Maker.
+
+|Source Type|Content Type| Examples|
+|--|--|--|
+|URL|FAQs<br> (Flat, with sections or with a topics homepage)<br>Support pages <br> (Single page how-to articles, troubleshooting articles etc.)|[Plain FAQ](../troubleshooting.md), <br>[FAQ with links](https://www.microsoft.com/microsoft-365/microsoft-365-for-home-and-school-faq),<br> [FAQ with topics homepage](https://www.microsoft.com/Licensing/servicecenter/Help/Faq.aspx)<br>[Support article](./best-practices.md)|
+|PDF / DOC|FAQs,<br> Product Manual,<br> Brochures,<br> Paper,<br> Flyer Policy,<br> Support guide,<br> Structured QnA,<br> etc.|**Without Multi-turn**<br>[Structured QnA.docx](https://github.com/Azure-Samples/cognitive-services-sample-data-files/blob/master/qna-maker/data-source-formats/structured.docx),<br> [Sample Product Manual.pdf](https://github.com/Azure-Samples/cognitive-services-sample-data-files/blob/master/qna-maker/data-source-formats/product-manual.pdf),<br> [Sample semi-structured.docx](https://github.com/Azure-Samples/cognitive-services-sample-data-files/blob/master/qna-maker/data-source-formats/semi-structured.docx),<br> [Sample white paper.pdf](https://github.com/Azure-Samples/cognitive-services-sample-data-files/blob/master/qna-maker/data-source-formats/white-paper.pdf),<br> [Unstructured blog.pdf](https://github.com/Azure-Samples/cognitive-services-sample-data-files/blob/master/qna-maker/data-source-formats/Introducing-surface-laptop-4-and-new-access.pdf),<br> [Unstructured white paper.pdf](https://github.com/Azure-Samples/cognitive-services-sample-data-files/blob/master/qna-maker/data-source-formats/sample-unstructured-paper.pdf)<br><br>**Multi-turn**:<br>[Surface Pro (docx)](https://github.com/Azure-Samples/cognitive-services-sample-data-files/blob/master/qna-maker/data-source-formats/multi-turn.docx)<br>[Contoso Benefits (docx)](https://github.com/Azure-Samples/cognitive-services-sample-data-files/blob/master/qna-maker/data-source-formats/Multiturn-ContosoBenefits.docx)<br>[Contoso Benefits (pdf)](https://github.com/Azure-Samples/cognitive-services-sample-data-files/blob/master/qna-maker/data-source-formats/Multiturn-ContosoBenefits.pdf)|
+|*Excel|Structured QnA file<br> (including RTF, HTML support)|**Without Multi-turn**:<br>[Sample QnA FAQ.xls](https://github.com/Azure-Samples/cognitive-services-sample-data-files/blob/master/qna-maker/data-source-formats/QnA%20Maker%20Sample%20FAQ.xlsx)<br><br>**Multi-turn**:<br>[Structured simple FAQ.xls](https://github.com/Azure-Samples/cognitive-services-sample-data-files/blob/master/qna-maker/data-source-formats/Structured-multi-turn-format.xlsx)<br>[Surface laptop FAQ.xls](https://github.com/Azure-Samples/cognitive-services-sample-data-files/blob/master/qna-maker/data-source-formats/Multiturn-Surface-Pro.xlsx)|
+|*TXT/TSV|Structured QnA file|[Sample chit-chat.tsv](https://github.com/Azure-Samples/cognitive-services-sample-data-files/blob/master/qna-maker/data-source-formats/Scenario_Responses_Friendly.tsv)|
+
+If you need authentication for your data source, consider the following methods to get that content into QnA Maker:
+
+* Download the file manually and import it into QnA Maker
+* Add the file from an authenticated [SharePoint location](../how-to/add-sharepoint-datasources.md)
+
+### URL content
+
+Two types of documents can be imported via **URL** in QnA Maker:
+
+* FAQ URLs
+* Support URLs
+
+Each type indicates an expected format.
+
+### File-based content
+
+You can add files to a knowledge base from a public source, or your local file system, in the [QnA Maker portal](https://www.qnamaker.ai).
+
+### Content format guidelines
+
+Learn more about the [format guidelines](../reference-document-format-guidelines.md) for the different files.
+
+## Next steps
+
+Learn how to [edit QnAs](../how-to/edit-knowledge-base.md).
ai-services Development Lifecycle Knowledge Base https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/qnamaker/Concepts/development-lifecycle-knowledge-base.md
+
+ Title: Lifecycle of knowledge base - QnA Maker
+description: QnA Maker learns best in an iterative cycle of model changes, utterance examples, publishing, and gathering data from endpoint queries.
++++++ Last updated : 09/01/2020++
+# Knowledge base lifecycle in QnA Maker
+QnA Maker learns best in an iterative cycle of model changes, utterance examples, publishing, and gathering data from endpoint queries.
+
+![Authoring cycle](../media/qnamaker-concepts-lifecycle/kb-lifecycle.png)
++
+## Creating a QnA Maker knowledge base
+A QnA Maker knowledge base (KB) endpoint provides a best-match answer to a user query based on the content of the KB. Creating a knowledge base is a one-time action to set up a content repository of questions, answers, and associated metadata. A KB can be created by crawling pre-existing content such as the following sources:
+
+- FAQ pages
+- Product manuals
+- Q-A pairs
+
+Learn how to [create a knowledge base](../quickstarts/create-publish-knowledge-base.md).
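+As a rough sketch of how an automated create request can be shaped (assuming the QnA Maker v4.0 authoring endpoint; the resource name, key, and source URL are placeholders), the request body lists the sources to crawl:
+
+```python
+import requests
+
+# Placeholder QnA Maker authoring endpoint and key.
+endpoint = "https://<your-qnamaker-resource>.cognitiveservices.azure.com"
+authoring_key = "<your-authoring-key>"
+
+# Knowledge base definition: sources to crawl plus any editorial QnA pairs.
+create_payload = {
+    "name": "Product FAQ",
+    "urls": ["https://www.contoso.com/faq"],
+    "files": [],
+    "qnaList": [],
+}
+
+response = requests.post(
+    f"{endpoint}/qnamaker/v4.0/knowledgebases/create",
+    headers={"Ocp-Apim-Subscription-Key": authoring_key},
+    json=create_payload,
+)
+response.raise_for_status()
+print(response.json())  # Returns an operation you can poll for completion.
+```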
+
+## Testing and updating the knowledge base
+
+The knowledge base is ready for testing once it is populated with content, either editorially or through automatic extraction. Interactive testing can be done in the QnA Maker portal, through the **Test** panel. You enter common user queries. Then you verify that the responses are returned with both the correct answer and a sufficient confidence score.
++
+* **To fix low confidence scores**: add alternate questions.
+* **When a query incorrectly returns the [default response](../How-to/change-default-answer.md)**: add new answers to the correct question.
+
+This tight loop of test-update continues until you are satisfied with the results. Learn how to [test your knowledge base](../how-to/test-knowledge-base.md).
+
+For large KBs, use automated testing with the [generateAnswer API](../how-to/metadata-generateanswer-usage.md#get-answer-predictions-with-the-generateanswer-api) and the `isTest` body property, which queries the `test` knowledge base instead of the published knowledge base.
+
+```json
+{
+ "question": "example question",
+ "top": 3,
+ "userId": "Default",
+ "isTest": true
+}
+```
+
+## Publish the knowledge base
+Once you are done testing the knowledge base, you can publish it. Publish pushes the latest version of the tested knowledge base to a dedicated Azure Cognitive Search index representing the **published** knowledge base. It also creates an endpoint that can be called in your application or chat bot.
+
+After you publish, any further changes made to the test version of the knowledge base leave the published version unaffected. The published version might be live in a production application.
+
+Each of these knowledge bases can be targeted for testing separately. Using the APIs, you can target the test version of the knowledge base with the `isTest` body property in the generateAnswer call.
+
+Learn how to [publish your knowledge base](../quickstarts/create-publish-knowledge-base.md#publish-the-knowledge-base).
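+If you publish with the REST API rather than the portal, a minimal sketch (assuming the v4.0 publish operation; the resource name, key, and knowledge base ID are placeholders) looks like this:
+
+```python
+import requests
+
+endpoint = "https://<your-qnamaker-resource>.cognitiveservices.azure.com"
+authoring_key = "<your-authoring-key>"
+kb_id = "<knowledge-base-id>"
+
+# Publishing moves the test index content into the production index.
+response = requests.post(
+    f"{endpoint}/qnamaker/v4.0/knowledgebases/{kb_id}",
+    headers={"Ocp-Apim-Subscription-Key": authoring_key},
+)
+response.raise_for_status()  # Expects a success status with no content.
+```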
+
+## Monitor usage
+To log the chat history for your service, you need to enable Application Insights when you [create your QnA Maker service](../how-to/set-up-qnamaker-service-azure.md).
+
+You can get various analytics of your service usage. Learn more about how to use application insights to get [analytics for your QnA Maker service](../how-to/get-analytics-knowledge-base.md).
+
+Based on what you learn from your analytics, make appropriate [updates to your knowledge base](../how-to/edit-knowledge-base.md).
+
+## Version control for data in your knowledge base
+
+Version control for data is provided through the import/export features on the **Settings** page in the QnA Maker portal.
+
+You can back up a knowledge base by exporting the knowledge base, in either `.tsv` or `.xls` format. Once exported, include this file as part of your regular source control check-ins.
+
+When you need to go back to a specific version, you need to import that file from your local system. An exported knowledge base **must** only be used via import on the **Settings** page. It can't be used as a file or URL document data source. This will replace questions and answers currently in the knowledge base with the contents of the imported file.
+
+## Test and production knowledge base
+A knowledge base is the repository of questions and answer sets created, maintained, and used through QnA Maker. Each QnA Maker resource can hold multiple knowledge bases.
+
+A knowledge base has two states: *test* and *published*.
+
+### Test knowledge base
+
+The *test knowledge base* is the version currently edited and saved. The test version has been tested for accuracy and for completeness of responses. Changes made to the test knowledge base don't affect the end user of your application or chat bot. The test knowledge base is known as `test` in the HTTP request. The `test` knowledge base is available through the QnA Maker portal's interactive **Test** pane.
+
+### Production knowledge base
+
+The *published knowledge base* is the version that's used in your chat bot or application. Publishing a knowledge base puts the content of its test version into its published version. The published knowledge base is the version that the application uses through the endpoint. Make sure that the content is correct and well tested. The published knowledge base is known as `prod` in the HTTP request.
++
+## Next steps
+
+> [!div class="nextstepaction"]
+> [Active learning suggestions](../index.yml)
ai-services Plan https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/qnamaker/Concepts/plan.md
+
+ Title: Plan your app - QnA Maker
+description: Learn how to plan your QnA Maker app. Understand how QnA Maker works and interacts with other Azure services and some knowledge base concepts.
++++++ Last updated : 11/02/2021+++
+# Plan your QnA Maker app
+
+To plan your QnA Maker app, you need to understand how QnA Maker works and interacts with other Azure services. You should also have a solid grasp of knowledge base concepts.
++
+## Azure resources
+
+Each [Azure resource](azure-resources.md#resource-purposes) created with QnA Maker has a specific purpose. Each resource has its own purpose, limits, and [pricing tier](azure-resources.md#pricing-tier-considerations). It's important to understand the function of these resources so that you can use that knowledge in your planning process.
+
+| Resource | Purpose |
+|--|--|
+| [QnA Maker](azure-resources.md#qna-maker-resource) resource | Authoring and query prediction |
+| [Cognitive Search](azure-resources.md#cognitive-search-resource) resource | Data storage and search |
+| [App Service and App Service plan](azure-resources.md#app-service-and-app-service-plan) resources | Query prediction endpoint |
+| [Application Insights](azure-resources.md#application-insights) resource | Query prediction telemetry |
+
+### Resource planning
+
+The free tier, `F0`, of each resource works and can provide both the authoring and query prediction experience. You can use this tier to learn authoring and query prediction. When you move to a production or live scenario, reevaluate your resource selection.
+
+### Knowledge base size and throughput
+
+When you build a real app, plan sufficient resources for the size of your knowledge base and for your expected query prediction requests.
+
+A knowledge base's size is controlled by the:
+* [Cognitive Search resource](../../../search/search-limits-quotas-capacity.md) pricing tier limits
+* [QnA Maker limits](../limits.md)
+
+The knowledge base query prediction request is controlled by the web app plan and web app. Refer to [recommended settings](azure-resources.md#recommended-settings) to plan your pricing tier.
+
+### Resource sharing
+
+If you already have some of these resources in use, you may consider sharing resources. See which resources [can be shared](azure-resources.md#share-services-with-qna-maker) with the understanding that resource sharing is an advanced scenario.
+
+All knowledge bases created in the same QnA Maker resource share the same **test** query prediction endpoint.
+
+### Understand the impact of resource selection
+
+Proper resource selection means your knowledge base answers query predictions successfully.
+
+If your knowledge base isn't functioning properly, it's typically an issue of improper resource management.
+
+Improper resource selection requires investigation to determine which [resource needs to change](azure-resources.md#when-to-change-a-pricing-tier).
+
+## Knowledge bases
+
+A knowledge base is directly tied to its QnA Maker resource. It holds the question and answer (QnA) pairs that are used to answer query prediction requests.
+
+### Language considerations
+
+The first knowledge base created on your QnA Maker resource sets the language for the resource. You can only have one language for a QnA Maker resource.
+
+You can structure your QnA Maker resources by language or you can use [Translator](../../translator/translator-overview.md) to change a query from another language into the knowledge base's language before sending the query to the query prediction endpoint.
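+For example, here's a hedged sketch of translating an incoming query before it's sent to the query prediction endpoint (this assumes the Translator v3 REST API; the key and region values are placeholders):
+
+```python
+import requests
+
+translator_key = "<translator-key>"
+translator_region = "<translator-region>"
+
+def to_kb_language(text: str, target: str = "en") -> str:
+    """Translate a user query into the knowledge base's language."""
+    response = requests.post(
+        "https://api.cognitive.microsofttranslator.com/translate",
+        params={"api-version": "3.0", "to": target},
+        headers={
+            "Ocp-Apim-Subscription-Key": translator_key,
+            "Ocp-Apim-Subscription-Region": translator_region,
+        },
+        json=[{"Text": text}],
+    )
+    response.raise_for_status()
+    return response.json()[0]["translations"][0]["text"]
+
+# Translate the query, then send the result to the prediction endpoint.
+print(to_kb_language("¿Cómo agrego un colaborador?"))
+```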
+
+### Ingest data sources
+
+You can use one of the following ingested [data sources](../concepts/data-sources-and-content.md) to create a knowledge base:
+
+* Public URL
+* Private SharePoint URL
+* File
+
+The ingestion process converts [supported content types](../reference-document-format-guidelines.md) to markdown. All further editing of the *answer* is done with markdown. After you create a knowledge base, you can edit [QnA pairs](question-answer-set.md) in the QnA Maker portal with [rich text authoring](../how-to/edit-knowledge-base.md#rich-text-editing-for-answer).
+
+### Data format considerations
+
+Because the final format of a QnA pair is markdown, it's important to understand [markdown support](../reference-markdown-format.md).
+
+Linked images must be available from a public URL to be displayed in the test pane of the QnA Maker portal or in a client application. QnA Maker doesn't provide authentication for content, including images.
+
+### Bot personality
+
+Add a bot personality to your knowledge base with [chit-chat](../how-to/chit-chat-knowledge-base.md). This personality comes through with answers provided in a certain conversational tone such as *professional* and *friendly*. This chit-chat is provided as a conversational set, which you have total control to add, edit, and remove.
+
+A bot personality is recommended if your bot connects to your knowledge base. You can choose to use chit-chat in your knowledge base even if you also connect to other services, but you should review how the bot service interacts to know if that is the correct architectural design for your use.
+
+### Conversation flow with a knowledge base
+
+Conversation flow usually begins with a salutation from a user, such as `Hi` or `Hello`. Your knowledge base can answer with a general answer, such as `Hi, how can I help you`, and it can also provide a selection of follow-up prompts to continue the conversation.
+
+You should design your conversational flow with a loop in mind so that a user knows how to use your bot and isn't abandoned by the bot in the conversation. [Follow-up prompts](../how-to/multi-turn.md) provide linking between QnA pairs, which allow for the conversational flow.
+
+### Authoring with collaborators
+
+Collaborators may be other developers who share the full development stack of the knowledge base application or may be limited to just authoring the knowledge base.
+
+Knowledge base authoring supports several role-based access permissions you apply in the Azure portal to limit the scope of a collaborator's abilities.
+
+## Integration with client applications
+
+Integration with client applications is accomplished by sending a query to the prediction runtime endpoint. A query is sent to your specific knowledge base with an SDK or REST-based request to your QnA Maker's web app endpoint.
+
+To authenticate a client request correctly, the client application must send the correct credentials and knowledge base ID. If you're using an Azure AI Bot Service, configure these settings as part of the bot configuration in the Azure portal.
+
+### Conversation flow in a client application
+
+Conversation flow in a client application, such as an Azure bot, may require functionality before and after interacting with the knowledge base.
+
+Does your client application support conversation flow, either by providing alternate means to handle follow-up prompts or by including chit-chat? If so, design these early and make sure the client application query is handled correctly by another service or when sent to your knowledge base.
+
+### Dispatch between QnA Maker and Language Understanding (LUIS)
+
+A client application may provide several features, only one of which is answered by a knowledge base. Other features still need to understand the conversational text and extract meaning from it.
+
+A common client application architecture is to use both QnA Maker and [Language Understanding (LUIS)](../../LUIS/what-is-luis.md) together. LUIS provides the text classification and extraction for any query, including to other services. QnA Maker provides answers from your knowledge base.
+
+In such a [shared architecture](../choose-natural-language-processing-service.md) scenario, the dispatching between the two services is accomplished by the [Dispatch](https://github.com/Microsoft/botbuilder-tools/tree/master/packages/Dispatch) tool from Bot Framework.
+
+### Active learning from a client application
+
+QnA Maker uses _active learning_ to improve your knowledge base by suggesting alternate questions to an answer. The client application is responsible for a part of this [active learning](../how-to/use-active-learning.md). Through conversational prompts, the client application can determine that the knowledge base returned an answer that's not useful to the user, and it can determine a better answer. The client application needs to send that information back to the knowledge base to improve the prediction quality.
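+A rough sketch of that feedback call (assuming the train endpoint exposed by the prediction runtime; the host, endpoint key, and QnA ID are placeholders):
+
+```python
+import requests
+
+runtime_host = "https://<your-app-name>.azurewebsites.net"
+kb_id = "<knowledge-base-id>"
+endpoint_key = "<endpoint-key>"
+
+# Tell the knowledge base which QnA pair actually answered the user's question.
+feedback = {
+    "feedbackRecords": [
+        {
+            "userId": "user-123",
+            "userQuestion": "How can I share my app with a coworker?",
+            "qnaId": 1,
+        }
+    ]
+}
+
+response = requests.post(
+    f"{runtime_host}/qnamaker/knowledgebases/{kb_id}/train",
+    headers={"Authorization": f"EndpointKey {endpoint_key}"},
+    json=feedback,
+)
+response.raise_for_status()
+```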
+
+### Providing a default answer
+
+If your knowledge base doesn't find an answer, it returns the _default answer_. This answer is configurable on the **Settings** page in the QnA Maker portal or in the [APIs](/rest/api/cognitiveservices/qnamaker/knowledgebase/update#request-body).
+
+This default answer is different from the Azure bot default answer. You configure the default answer for your Azure bot in the Azure portal as part of configuration settings. It's returned when the score threshold isn't met.
+
+## Prediction
+
+The prediction is the response from your knowledge base, and it includes more information than just the answer. To get a query prediction response, use the [GenerateAnswer API](query-knowledge-base.md).
+
+### Prediction score fluctuations
+
+A score can change based on several factors:
+
+* Number of answers you requested in response to [GenerateAnswer](query-knowledge-base.md) with `top` property
+* Variety of available alternate questions
+* Filtering for metadata
+* Query sent to `test` or `production` knowledge base
+
+There's a [two-phase answer ranking](query-knowledge-base.md#how-qna-maker-processes-a-user-query-to-select-the-best-answer):
+- Cognitive Search - first rank. Set the number of _answers allowed_ high enough that the best answers are returned by Cognitive Search and then passed into the QnA Maker ranker.
+- QnA Maker - second rank. Apply featurization and machine learning to determine best answer.
+
+### Service updates
+
+Apply the [latest runtime updates](../how-to/configure-QnA-Maker-resources.md#get-the-latest-runtime-updates) to automatically manage service updates.
+
+### Scaling, throughput, and resiliency
+
+Scaling, throughput, and resiliency are determined by the [Azure resources](../how-to/set-up-qnamaker-service-azure.md), their pricing tiers, and any surrounding architecture such as [Traffic manager](../how-to/configure-QnA-Maker-resources.md#business-continuity-with-traffic-manager).
+
+### Analytics with Application Insights
+
+All queries to your knowledge base are stored in Application Insights. Use our [top queries](../how-to/get-analytics-knowledge-base.md) to understand your metrics.
+
+## Development lifecycle
+
+The [development lifecycle](development-lifecycle-knowledge-base.md) of a knowledge base is ongoing: editing, testing, and publishing your knowledge base.
+
+### Knowledge base development of QnA Maker pairs
+
+Your QnA pairs should be designed and developed based on your client application usage.
+
+Each pair can contain:
+* Metadata - filterable when querying to allow you to tag your QnA pairs with additional information about the source, content, format, and purpose of your data.
+* Follow-up prompts - helps to determine a path through your knowledge base so the user arrives at the correct answer.
+* Alternate questions - important to allow search to match to your answer from different forms of the question. [Active learning suggestions](../how-to/use-active-learning.md) turn into alternate questions.
+
+### DevOps development
+
+Developing a knowledge base to insert into a DevOps pipeline requires that the knowledge base is isolated during batch testing.
+
+A knowledge base shares the Cognitive Search index with all other knowledge bases on the QnA Maker resource. While the knowledge base is isolated by partition, sharing the index can cause a difference in the score when compared to the published knowledge base.
+
+To have the _same score_ on the `test` and `production` knowledge bases, isolate a QnA Maker resource to a single knowledge base. In this architecture, the resource only needs to live as long as the isolated batch test.
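+As an illustration, a batch-test sketch (the host, endpoint key, knowledge base ID, and questions are placeholders) can send the same questions to both versions by toggling `isTest` and compare the top scores:
+
+```python
+import requests
+
+runtime_host = "https://<your-app-name>.azurewebsites.net"
+kb_id = "<knowledge-base-id>"
+endpoint_key = "<endpoint-key>"
+
+def top_score(question: str, is_test: bool) -> float:
+    """Return the top answer's score from the test or production knowledge base."""
+    response = requests.post(
+        f"{runtime_host}/qnamaker/knowledgebases/{kb_id}/generateAnswer",
+        headers={"Authorization": f"EndpointKey {endpoint_key}"},
+        json={"question": question, "top": 3, "isTest": is_test},
+    )
+    return response.json()["answers"][0]["score"]
+
+# Compare scores from the test and production indexes for the same questions.
+for question in ["How do I add a collaborator?", "How do I reset my password?"]:
+    print(question, top_score(question, True), top_score(question, False))
+```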
+
+## Next steps
+
+* [Azure resources](../how-to/set-up-qnamaker-service-azure.md)
+* [Question and answer pairs](question-answer-set.md)
ai-services Query Knowledge Base https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/qnamaker/Concepts/query-knowledge-base.md
+
+ Title: Query the knowledge base - QnA Maker
+description: A knowledge base must be published. Once published, the knowledge base is queried at the runtime prediction endpoint using the generateAnswer API.
++++ Last updated : 11/02/2021+++
+# Query the knowledge base for answers
+
+A knowledge base must be published. Once published, the knowledge base is queried at the runtime prediction endpoint using the generateAnswer API. The query includes the question text, and other settings, to help QnA Maker select the best possible match to an answer.
++
+## How QnA Maker processes a user query to select the best answer
+
+The trained and [published](../quickstarts/create-publish-knowledge-base.md#publish-the-knowledge-base) QnA Maker knowledge base receives a user query, from a bot or other client application, at the [GenerateAnswer API](../how-to/metadata-generateanswer-usage.md). The following diagram illustrates the process when the user query is received.
+
+![The ranking model process for a user query](../media/qnamaker-concepts-knowledgebase/ranker-v1.png)
+
+### Ranker process
+
+The process is explained in the following table.
+
+|Step|Purpose|
+|--|--|
+|1|The client application sends the user query to the [GenerateAnswer API](../how-to/metadata-generateanswer-usage.md).|
+|2|QnA Maker preprocesses the user query with language detection, spellers, and word breakers.|
+|3|This preprocessing alters the user query to get the best search results.|
+|4|This altered query is sent to an Azure Cognitive Search Index, which receives the `top` number of results. If the correct answer isn't in these results, increase the value of `top` slightly. Generally, a value of 10 for `top` works in 90% of queries. Azure search filters [stop words](https://github.com/Azure-Samples/azure-search-sample-dat) in this step.|
+|5|QnA Maker uses syntactic and semantic based featurization to determine the similarity between the user query and the fetched QnA results.|
+|6|The machine-learned ranker model uses the different features, from step 5, to determine the confidence scores and the new ranking order.|
+|7|The new results are returned to the client application in ranked order.|
+|||
+
+Features used include but aren't limited to word-level semantics, term-level importance in a corpus, and deep learned semantic models to determine similarity and relevance between two text strings.
+
+## HTTP request and response with endpoint
+
+When you publish your knowledge base, the service creates a REST-based HTTP endpoint that can be integrated into your application, commonly a chat bot.
+
+### The user query request to generate an answer
+
+A user query is the question that the end user asks of the knowledge base, such as `How do I add a collaborator to my app?`. The query is often in a natural language format or a few keywords that represent the question, such as `help with collaborators`. The query is sent to your knowledge base from an HTTP request in your client application.
+
+```json
+{
+ "question": "How do I add a collaborator to my app?",
+ "top": 6,
+ "isTest": true,
+ "scoreThreshold": 20,
+ "strictFilters": [
+ {
+ "name": "QuestionType",
+ "value": "Support"
+ }],
+ "userId": "sd53lsY="
+}
+```
+
+You control the response by setting properties such as [scoreThreshold](./confidence-score.md#choose-a-score-threshold), [top](../how-to/improve-knowledge-base.md#use-the-top-property-in-the-generateanswer-request-to-get-several-matching-answers), and [strictFilters](../how-to/query-knowledge-base-with-metadata.md).
+
+Use [conversation context](../how-to/query-knowledge-base-with-metadata.md) with [multi-turn functionality](../how-to/multi-turn.md) to keep the conversation going to refine the questions and answers, to find the correct and final answer.
+
+### The response from a call to generate an answer
+
+The HTTP response is the answer retrieved from the knowledge base, based on the best match for a given user query. The response includes the answer and the prediction score. If you asked for more than one top answer with the `top` property, you get more than one top answer, each with a score.
+
+```json
+{
+ "answers": [
+ {
+ "questions": [
+ "How do I add a collaborator to my app?",
+ "What access control is provided for the app?",
+ "How do I find user management and security?"
+ ],
+ "answer": "Use the Azure portal to add a collaborator using Access Control (IAM)",
+ "score": 100,
+ "id": 1,
+ "source": "Editorial",
+ "metadata": [
+ {
+ "name": "QuestionType",
+ "value": "Support"
+ },
+ {
+ "name": "ToolDependency",
+ "value": "Azure Portal"
+ }
+ ]
+ }
+ ]
+}
+```
+
+## Next steps
+
+> [!div class="nextstepaction"]
+> [Confidence score](./confidence-score.md)
ai-services Question Answer Set https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/qnamaker/Concepts/question-answer-set.md
+
+ Title: Design knowledge base - QnA Maker concepts
+description: Learn how to design a knowledge base - QnA Maker.
++++++ Last updated : 01/27/2020++
+# Question and answer pair concepts
+
+A knowledge base consists of question and answer (QnA) pairs. Each pair has one answer, and the pair contains all the information associated with that _answer_. An answer can loosely resemble a database row or a data structure instance.
++
+## Question and answer pairs
+
+The **required** settings in a question-and-answer (QnA) pair are:
+
+* a **question** - the text of a user query, used by QnA Maker's machine learning to match user questions that are worded differently but have the same answer
+* the **answer** - the pair's answer is the response that's returned when a user query is matched with the associated question
+
+Each pair is represented by an **ID**.
+
+The **optional** settings for a pair include:
+
+* **Alternate forms of the question** - this helps QnA Maker return the correct answer for a wider variety of question phrasings
+* **Metadata**: Metadata are tags associated with a QnA pair and are represented as key-value pairs. Metadata tags are used to filter QnA pairs and limit the set over which query matching is performed.
+* **Multi-turn prompts**, used to continue a multi-turn conversation
+
+![QnA Maker knowledge bases](../media/qnamaker-concepts-knowledgebase/knowledgebase.png)
+
+## Editorially add to knowledge base
+
+If you do not have pre-existing content to populate the knowledge base, you can add QnA pairs editorially in the QnA Maker portal. Learn how to update your knowledge base [here](../how-to/edit-knowledge-base.md).
+
+## Editing your knowledge base locally
+
+Once a knowledge base is created, it is recommended that you make edits to the knowledge base text in the [QnA Maker portal](https://qnamaker.ai), rather than exporting and reimporting through local files. However, there may be times that you need to edit a knowledge base locally.
+
+Export the knowledge base from the **Settings** page, then edit the knowledge base with Microsoft Excel. If you choose to use another application to edit your exported file, the application may introduce syntax errors because it is not fully TSV compliant. Microsoft Excel's TSV files generally don't introduce any formatting errors.
+
+Once you are done with your edits, reimport the TSV file from the **Settings** page. This will completely replace the current knowledge base with the imported knowledge base.
+
+## Next steps
+
+> [!div class="nextstepaction"]
+> [Knowledge base lifecycle in QnA Maker](./development-lifecycle-knowledge-base.md)
ai-services Role Based Access Control https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/qnamaker/Concepts/role-based-access-control.md
+
+ Title: Collaborate with others - QnA Maker
+description: Learn how to collaborate with other authors and editors using Azure role-based access control.
++++++ Last updated : 05/15/2020++
+# Collaborate with other authors and editors
+
+Collaborate with other authors and editors using Azure role-based access control (Azure RBAC) placed on your QnA Maker resource.
++
+## Access is provided on the QnA Maker resource
+
+All permissions are controlled by the permissions placed on the QnA Maker resource. These permissions align to read, write, publish, and full access. You can allow collaboration among multiple users by [updating RBAC access](../how-to/manage-qna-maker-app.md) for the QnA Maker resource.
+
+This Azure RBAC feature includes:
+* Azure Active Directory (AAD) is 100% backward compatible with key-based authentication for owners and contributors. Customers can use either key-based authentication or Azure RBAC-based authentication in their requests.
+* Quickly add authors and editors to all knowledge bases in the resource because control is at the resource level, not at the knowledge base level.
+
+> [!NOTE]
+> Make sure to add a custom subdomain for the resource. A [custom subdomain](../../cognitive-services-custom-subdomains.md) should be present by default, but if not, add it.
+
+## Access is provided by a defined role
++
+## Authentication flow
+
+The following diagram shows the flow, from the author's perspective, for signing into the QnA Maker portal and using the authoring APIs.
+
+> [!div class="mx-imgBorder"]
+> ![The following diagram shows the flow, from the author's perspective, for signing into the QnA Maker portal and using the authoring APIs.](../media/qnamaker-how-to-collaborate-knowledge-base/rbac-flow-from-portal-to-service.png)
+
+|Steps|Description|
+|--|--|
+|1|The portal acquires a token for the QnA Maker resource.|
+|2|The portal calls the appropriate QnA Maker authoring API (APIM), passing the token instead of keys.|
+|3|The QnA Maker API validates the token.|
+|4|The QnA Maker API calls the QnAMaker service.|
+
+If you intend to call the [authoring APIs](../index.yml), learn more about how to set up authentication.
+
+## Authenticate by QnA Maker portal
+
+If you author and collaborate using the QnA Maker portal, after you add the appropriate role to the resource for a collaborator, the QnA Maker portal manages all the access permissions.
+
+## Authenticate by QnA Maker APIs and SDKs
+
+If you author and collaborate using the APIs, either through REST or the SDKs, you need to [create a service principal](../../authentication.md#assign-a-role-to-a-service-principal) to manage the authentication.
+
+## Next step
+
+* Design a knowledge base for languages and for client applications
ai-services Add Sharepoint Datasources https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/qnamaker/How-To/add-sharepoint-datasources.md
+
+ Title: SharePoint files - QnA Maker
+description: Add secured SharePoint data sources to your knowledge base to enrich the knowledge base with questions and answers that may be secured with Active Directory.
++++++ Last updated : 01/25/2022++
+# Add a secured SharePoint data source to your knowledge base
+
+Add secured cloud-based SharePoint data sources to your knowledge base to enrich the knowledge base with questions and answers that may be secured with Active Directory.
+
+When you add a secured SharePoint document to your knowledge base, as the QnA Maker manager, you must request Active Directory permission for QnA Maker. Once this permission is given from the Active Directory manager to QnA Maker for access to SharePoint, it doesn't have to be given again. Each subsequent document addition to the knowledge base will not need authorization if it is in the same SharePoint resource.
+
+If the QnA Maker knowledge base manager is not the Active Directory manager, you will need to communicate with the Active Directory manager to finish this process.
+
+## Prerequisites
+
+* Cloud-based SharePoint - QnA Maker uses Microsoft Graph for permissions. If your SharePoint is on-premises, you won't be able to extract from SharePoint because Microsoft Graph won't be able to determine permissions.
+* URL format - QnA Maker only supports SharePoint URLs that are generated for sharing and are of the format `https://\*.sharepoint.com`
+
+## Add supported file types to knowledge base
+
+You can add all QnA Maker-supported [file types](../concepts/data-sources-and-content.md#file-and-url-data-types) from a SharePoint site to your knowledge base. You may have to grant [permissions](#permissions) if the file resource is secured.
+
+1. From the library on the SharePoint site, select the file's ellipsis menu, `...`.
+1. Copy the file's URL.
+
+ ![Get the SharePoint file URL by selecting the file's ellipsis menu then copying the URL.](../media/add-sharepoint-datasources/get-sharepoint-file-url.png)
+
+1. In the QnA Maker portal, on the **Settings** page, add the URL to the knowledge base.
+
+### Images with SharePoint files
+
+If files include images, those images are not extracted. You can add an image, from the QnA Maker portal, after the file is extracted into QnA pairs.
+
+Add the image with the following markdown syntax:
+
+```markdown
+![Explanation or description of image](URL of public image)
+```
+
+The text in the square brackets, `[]`, explains the image. The URL in the parentheses, `()`, is the direct link to the image.
+
+When you test the QnA pair in the interactive test panel in the QnA Maker portal, the image is displayed instead of the markdown text. This validates that the image can be publicly retrieved by your client application.
+
+## Permissions
+
+Granting permissions happens when a secured file from a server running SharePoint is added to a knowledge base. Depending on how the SharePoint is set up and the permissions of the person adding the file, this could require:
+
+* no additional steps - the person adding the file has all the permissions needed.
+* steps by both [knowledge base manager](#knowledge-base-manager-add-sharepoint-data-source-in-qna-maker-portal) and [Active Directory manager](#active-directory-manager-grant-file-read-access-to-qna-maker).
+
+See the steps listed below.
+
+### Knowledge base manager: add SharePoint data source in QnA Maker portal
+
+When the **QnA Maker manager** adds a secured SharePoint document to a knowledge base, the knowledge base manager initiates a request for permission that the Active Directory manager needs to complete.
+
+The request begins with a pop-up to authenticate to an Active Directory account.
+
+![Authenticate User Account](../media/add-sharepoint-datasources/authenticate-user-account.png)
+
+Once the QnA Maker manager selects the account, the Azure Active Directory administrator will receive a notice that they need to allow the QnA Maker app (not the QnA Maker manager) access to the SharePoint resource. The Azure Active Directory manager will need to do this for every SharePoint resource, but not every document in that resource.
+
+### Active Directory manager: grant file read access to QnA Maker
+
+The Active Directory manager (not the QnA Maker manager) needs to grant access to QnA Maker to access the SharePoint resource by selecting [this link](https://login.microsoftonline.com/common/oauth2/v2.0/authorize?response_type=id_token&scope=Files.Read%20Files.Read.All%20Sites.Read.All%20User.Read%20User.ReadBasic.All%20profile%20openid%20email&client_id=c2c11949-e9bb-4035-bda8-59542eb907a6&redirect_uri=https%3A%2F%2Fwww.qnamaker.ai%3A%2FCreate&state=68) to authorize the QnA Maker Portal SharePoint enterprise app to have file read permissions.
+
+![Azure Active Directory manager grants permission interactively](../media/add-sharepoint-datasources/aad-manager-grants-permission-interactively.png)
+
+<!--
+The Active Directory manager must grant QnA Maker access either by application name, `QnAMakerPortalSharePoint`, or by application ID, `c2c11949-e9bb-4035-bda8-59542eb907a6`.
+-->
+<!--
+### Grant access from the interactive pop-up window
+
+The Active Directory manager will get a pop-up window requesting permissions to the `QnAMakerPortalSharePoint` app. The pop-up window includes the QnA Maker Manager email address that initiated the request, an `App Info` link to learn more about **QnAMakerPortalSharePoint**, and a list of permissions requested. Select **Accept** to provide those permissions.
+
+![Azure Active Directory manager grants permission interactively](../media/add-sharepoint-datasources/aad-manager-grants-permission-interactively.png)
+-->
+<!--
+
+### Grant access from the App Registrations list
+
+1. The Active Directory manager signs in to the Azure portal and opens **[App registrations list](https://portal.azure.com/#blade/Microsoft_AAD_IAM/ApplicationsListBlade)**.
+
+1. Search for and select the **QnAMakerPortalSharePoint** app. Change the second filter box from **My apps** to **All apps**. The app information will open on the right side.
+
+ ![Select QnA Maker app in App registrations list](../media/add-sharepoint-datasources/select-qna-maker-app-in-app-registrations.png)
+
+1. Select **Settings**.
+
+ [![Select Settings in the right-side blade](../media/add-sharepoint-datasources/select-settings-for-qna-maker-app-registration.png)](../media/add-sharepoint-datasources/select-settings-for-qna-maker-app-registration.png#lightbox)
+
+1. Under **API access**, select **Required permissions**.
+
+ ![Select 'Settings', then under 'API access', select 'Required permission'](../media/add-sharepoint-datasources/select-required-permissions-in-settings-blade.png)
+
+1. Do not change any settings in the **Enable Access** window. Select **Grant Permission**.
+
+ [![Under 'Grant Permission', select 'Yes'](../media/add-sharepoint-datasources/grant-app-required-permissions.png)](../media/add-sharepoint-datasources/grant-app-required-permissions.png#lightbox)
+
+1. Select **YES** in the pop-up confirmation windows.
+
+ ![Grant required permissions](../media/add-sharepoint-datasources/grant-required-permissions.png)
+-->
+### Grant access from the Azure Active Directory admin center
+
+1. The Active Directory manager signs in to the Azure portal and opens **[Enterprise applications](https://aad.portal.azure.com/#blade/Microsoft_AAD_IAM/StartboardApplicationsMenuBlade/AllApps)**.
+
+1. Search for `QnAMakerPortalSharePoint`, then select the QnA Maker app.
+
+ [![Search for QnAMakerPortalSharePoint in Enterprise apps list](../media/add-sharepoint-datasources/search-enterprise-apps-for-qna-maker.png)](../media/add-sharepoint-datasources/search-enterprise-apps-for-qna-maker.png#lightbox)
+
+1. Under **Security**, go to **Permissions**. Select **Grant admin consent for Organization**.
+
+ [![Select authenticated user for Active Directory Admin](../media/add-sharepoint-datasources/grant-aad-permissions-to-enterprise-app.png)](../media/add-sharepoint-datasources/grant-aad-permissions-to-enterprise-app.png#lightbox)
+
+1. Select a Sign-On account with permissions to grant permissions for the Active Directory.
++++
+## Add SharePoint data source with APIs
+
+There is a workaround to add the latest SharePoint content via the API by using Azure Blob Storage. The steps are below:
+1. Download the SharePoint files locally. The user calling the API needs to have access to SharePoint.
+1. Upload them to Azure Blob Storage. This creates secure shared access by [using a SAS token.](../../../storage/common/storage-sas-overview.md#how-a-shared-access-signature-works)
+1. Pass the blob URL generated with the SAS token to the QnA Maker API. To allow the question and answer extraction from the files, you need to add the suffix file type as '&ext=pdf' or '&ext=doc' at the end of the URL before passing it to the QnA Maker API. A sketch of steps 2 and 3 follows this list.
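+Here's a minimal sketch of steps 2 and 3 (assuming the `azure-storage-blob` Python package; the storage account, container, and blob names are placeholders):
+
+```python
+from datetime import datetime, timedelta
+from azure.storage.blob import BlobSasPermissions, generate_blob_sas
+
+account_name = "<storage-account>"
+account_key = "<storage-account-key>"
+container = "<container-name>"
+blob_name = "faq.pdf"
+
+# Create a short-lived, read-only SAS token for the uploaded file.
+sas_token = generate_blob_sas(
+    account_name=account_name,
+    container_name=container,
+    blob_name=blob_name,
+    account_key=account_key,
+    permission=BlobSasPermissions(read=True),
+    expiry=datetime.utcnow() + timedelta(hours=1),
+)
+
+# Append the file-type suffix so the extraction knows the content type.
+file_uri = (
+    f"https://{account_name}.blob.core.windows.net/{container}/{blob_name}"
+    f"?{sas_token}&ext=pdf"
+)
+print(file_uri)
+```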
++
+<!--
+## Get SharePoint File URI
+
+Use the following steps to transform the SharePoint URL into a sharing token.
+
+1. Encode the URL using [base64](https://en.wikipedia.org/wiki/Base64).
+
+1. Convert the base64-encoded result to an unpadded base64url format with the following character changes.
+
+ * Remove the equal character, `=` from the end of the value.
+ * Replace `/` with `_`.
+ * Replace `+` with `-`.
+ * Append `u!` to be beginning of the string.
+
+1. Sign in to Graph explorer and run the following query, where `sharedURL` is ...:
+
+ ```
+ https://graph.microsoft.com/v1.0/shares/<sharedURL>/driveitem
+ ```
+
+ Get the **@microsoft.graph.downloadUrl** and use this as `fileuri` in the QnA Maker APIs.
+
+### Add or update a SharePoint File URI to your knowledge base
+
+Use the **@microsoft.graph.downloadUrl** from the previous section as the `fileuri` in the QnA Maker API for [adding a knowledge base](/rest/api/cognitiveservices/qnamaker4.0/knowledgebase) or [updating a knowledge base](/rest/api/cognitiveservices/qnamaker/knowledgebase/update). The following fields are mandatory: name, fileuri, filename, source.
+
+```
+{
+ "name": "Knowledge base name",
+ "files": [
+ {
+ "fileUri": "<@microsoft.graph.downloadURL>",
+ "fileName": "filename.xlsx",
+ "source": "<SharePoint link>"
+ }
+ ],
+ "urls": [],
+ "users": [],
+ "hostUrl": "",
+ "qnaList": []
+}
+```
+++
+## Remove QnA Maker app from SharePoint authorization
+
+1. Use the steps in the previous section to find the Qna Maker app in the Active Directory admin center.
+1. When you select the **QnAMakerPortalSharePoint**, select **Overview**.
+1. Select **Delete** to remove permissions.
+
+-->
+
+## Next steps
+
+> [!div class="nextstepaction"]
+> [Collaborate on your knowledge base](../concepts/data-sources-and-content.md)
ai-services Change Default Answer https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/qnamaker/How-To/change-default-answer.md
+
+ Title: Get default answer - QnA Maker
+description: The default answer is returned when there is no match to the question. You may want to change the default answer from the standard default answer.
++++++ Last updated : 11/09/2020+++
+# Change default answer for a QnA Maker resource
+
+The default answer for a knowledge base is meant to be returned when an answer is not found. If you are using a client application, such as the [Azure AI Bot Service](/azure/bot-service/bot-builder-howto-qna), it may also have a separate default answer, indicating no answer met the score threshold.
++
+## Types of default answer
+
+There are two types of default answer in your knowledge base. It is important to understand how and when each is returned from a prediction query:
+
+|Types of default answers|Description of answer|
+|--|--|
+|KB answer when no answer is determined|`No good match found in KB.` - When the [GenerateAnswer API](/rest/api/cognitiveservices/qnamakerruntime/runtime/generateanswer) finds no matching answer to the question, the `DefaultAnswer` setting of the App service is returned. All knowledge bases in the same QnA Maker resource share the same default answer text.<br>You can manage the setting in the Azure portal, via the App service, or with the REST APIs for [getting](/rest/api/appservice/webapps/listapplicationsettings) or [updating](/rest/api/appservice/webapps/updateapplicationsettings) the setting.|
+|Follow-up prompt instruction text|When using a follow-up prompt in a conversation flow, you may not need an answer in the QnA pair because you want the user to select from the follow-up prompts. In this case, set specific text by setting the default answer text, which is returned with each prediction for follow-up prompts. The text is meant to display as instructional text to the selection of follow-up prompts. An example for this default answer text is `Please select from the following choices`. This configuration is explained in the next few sections of this document. You can also set it as part of the knowledge base definition through `defaultAnswerUsedForExtraction` using the [REST API](/rest/api/cognitiveservices/qnamaker/knowledgebase/create).|
+
+### Client application integration
+
+For a client application, such as a bot with the **Azure AI Bot Service**, you can choose from the common following scenarios:
+
+* Use the knowledge base's setting
+* Use different text in the client application to distinguish when an answer is returned but doesn't meet the score threshold. This text can either be static text stored in code, or can be stored in the client application's settings list.
+
+## Set follow-up prompt's default answer when you create knowledge base
+
+When you create a new knowledge base, the default answer text is one of the settings. If you choose not to set it during the creation process, you can change it later with the following procedure.
+
+## Change follow-up prompt's default answer in QnA Maker portal
+
+The knowledge base default answer is returned when no answer is returned from the QnA Maker service.
+
+1. Sign in to the [QnA Maker portal](https://www.qnamaker.ai/) and select your knowledge base from the list.
+1. Select **Settings** from the navigation bar.
+1. Change the value of **Default answer text** in the **Manage knowledge base** section.
+
+ :::image type="content" source="../media/qnamaker-concepts-confidencescore/change-default-answer.png" alt-text="Screenshot of QnA Maker portal, Settings page, with default answer textbox highlighted.":::
+
+1. Select **Save and train** to save the change.
+
+## Next steps
+
+* [Create a knowledge base](../How-to/manage-knowledge-bases.md)
ai-services Chit Chat Knowledge Base https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/qnamaker/How-To/chit-chat-knowledge-base.md
+
+ Title: Adding chit-chat to a QnA Maker knowledge base
+
+description: Adding personal chit-chat to your bot makes it more conversational and engaging when you create a KB. QnA Maker allows you to easily add a pre-populated set of the top chit-chat, into your KB.
+++++++ Last updated : 08/25/2021+++
+# Add Chit-chat to a knowledge base
+
+Adding chit-chat to your bot makes it more conversational and engaging. The chit-chat feature in QnA Maker allows you to easily add a pre-populated set of the top chit-chat into your knowledge base (KB). This can be a starting point for your bot's personality, and it will save you the time and cost of writing it from scratch.
++
+This dataset has about 100 scenarios of chit-chat in the voice of multiple personas, like Professional, Friendly and Witty. Choose the persona that most closely resembles your bot's voice. Given a user query, QnA Maker tries to match it with the closest known chit-chat QnA.
+
+Some examples of the different personalities are below. You can see all the personality [datasets](https://github.com/microsoft/botframework-cli/blob/main/packages/qnamaker/docs/chit-chat-dataset.md) along with details of the personalities.
+
+For the user query of `When is your birthday?`, each personality has a styled response:
+
+<!-- added quotes so acrolinx doesn't score these sentences -->
+|Personality|Example|
+|--|--|
+|Professional|Age doesn't really apply to me.|
+|Friendly|I don't really have an age.|
+|Witty|I'm age-free.|
+|Caring|I don't have an age.|
+|Enthusiastic|I'm a bot, so I don't have an age.|
+||
++
+## Language support
+
+Chit-chat data sets are supported in the following languages:
+
+|Language|
+|--|
+|Chinese|
+|English|
+|French|
+|German|
+|Italian|
+|Japanese|
+|Korean|
+|Portuguese|
+|Spanish|
++
+## Add chit-chat during KB creation
+During knowledge base creation, after adding your source URLs and files, there is an option for adding chit-chat. Choose the personality that you want as your chit-chat base. If you do not want to add chit-chat, or if you already have chit-chat support in your data sources, choose **None**.
+
+## Add Chit-chat to an existing KB
+Select your KB, and navigate to the **Settings** page. There is a link to all the chit-chat datasets in the appropriate **.tsv** format. Download the personality you want, then upload it as a file source. Make sure not to edit the format or the metadata when you download and upload the file.
+
+![Add chit-chat to existing KB](../media/qnamaker-how-to-chit-chat/add-chit-chat-dataset.png)
+
+## Edit your chit-chat questions and answers
+When you edit your KB, you will see a new source for chit-chat, based on the personality you selected. You can now add altered questions or edit the responses, just like with any other source.
+
+![Edit chit-chat QnAs](../media/qnamaker-how-to-chit-chat/edit-chit-chat.png)
+
+To view the metadata, select **View Options** in the toolbar, then select **Show metadata**.
+
+## Add additional chit-chat questions and answers
+You can add a new chit-chat QnA pair that is not in the predefined data set. Ensure that you are not duplicating a QnA pair that is already covered in the chit-chat set. When you add any new chit-chat QnA, it gets added to your **Editorial** source. To ensure the ranker understands that this is chit-chat, add the metadata key/value pair "Editorial: chitchat", as seen in the following image:
++
+## Delete chit-chat from an existing KB
+Select your KB, and navigate to the **Settings** page. Your specific chit-chat source is listed as a file, with the selected personality name. You can delete this as a source file.
+
+![Delete chit-chat from KB](../media/qnamaker-how-to-chit-chat/delete-chit-chat.png)
+
+## Next steps
+
+> [!div class="nextstepaction"]
+> [Import a knowledge base](../tutorials/export-knowledge-base.md)
+
+## See also
+
+[QnA Maker overview](../overview/overview.md)
ai-services Configure Qna Maker Resources https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/qnamaker/How-To/configure-qna-maker-resources.md
+
+ Title: Configure QnA Maker service - QnA Maker
+description: This document outlines advanced configurations for your QnA Maker resources.
++++++ Last updated : 08/25/2021+++
+# Configure QnA Maker resources
+
+The user can configure QnA Maker to use a different Cognitive search resource. They can also configure App service settings if they are using QnA Maker GA.
++
+## Configure QnA Maker to use different Cognitive Search resource
+
+> [!NOTE]
+> If you change the Azure Search service associated with QnA Maker, you will lose access to all the knowledge bases already present in it. Make sure you export the existing knowledge bases before you change the Azure Search service.
+
+If you create a QnA service and its dependencies (such as Search) through the portal, a Search service is created for you and linked to the QnA Maker service. After these resources are created, you can update the App Service setting to use a previously existing Search service and remove the one you just created.
+
+QnA Maker's **App Service** resource uses the Cognitive Search resource. In order to change the Cognitive Search resource used by QnA Maker, you need to change the setting in the Azure portal.
+
+1. Get the **Admin key** and **Name** of the Cognitive Search resource you want QnA Maker to use.
+
+1. Sign in to the [Azure portal](https://portal.azure.com) and find the **App Service** associated with your QnA Maker resource. Both will have the same name.
+
+1. Select **Settings**, then **Configuration**. This will display all existing settings for the QnA Maker's App Service.
+
+ > [!div class="mx-imgBorder"]
+ > ![Screenshot of Azure portal showing App Service configuration settings](../media/qnamaker-how-to-upgrade-qnamaker/change-search-service-app-service-configuration.png)
+
+1. Change the values for the following keys:
+
+ * **AzureSearchAdminKey**
+ * **AzureSearchName**
+
+1. To use the new settings, you need to restart the App Service. Select **Overview**, then select **Restart**.
+
+ > [!div class="mx-imgBorder"]
+ > ![Screenshot of Azure portal restarting App Service after configuration settings change](../media/qnamaker-how-to-upgrade-qnamaker/screenshot-azure-portal-restart-app-service.png)
+
+If you create a QnA service through Azure Resource Manager templates, you can create all resources and control the App Service creation to use an existing Search service.
+
+Learn more about how to configure the App Service [Application settings](../../../app-service/configure-common.md#configure-app-settings).
+
+## Get the latest runtime updates
+
+The QnAMaker runtime is part of the Azure App Service instance that's deployed when you [create a QnAMaker service](./set-up-qnamaker-service-azure.md) in the Azure portal. The runtime is updated periodically. The QnA Maker App Service instance has been in auto-update mode since the April 2019 site extension release (version 5+). Auto-update is designed to ensure zero downtime during upgrades.
+
+You can check your current version at https://www.qnamaker.ai/UserSettings. If your version is older than version 5.x, you must restart App Service to apply the latest updates:
+
+1. Go to your QnAMaker service (resource group) in the [Azure portal](https://portal.azure.com).
+
+ > [!div class="mx-imgBorder"]
+ > ![QnAMaker Azure resource group](../media/qnamaker-how-to-troubleshoot/qnamaker-azure-resourcegroup.png)
+
+1. Select the App Service instance and open the **Overview** section.
+
+ > [!div class="mx-imgBorder"]
+ > ![QnAMaker App Service instance](../media/qnamaker-how-to-troubleshoot/qnamaker-azure-appservice.png)
++
+1. Restart App Service. The update process should finish in a couple of seconds. Any dependent applications or bots that use this QnAMaker service will be unavailable to end users during this restart period.
+
+ ![Restart of the QnAMaker App Service instance](../media/qnamaker-how-to-upgrade-qnamaker/qnamaker-appservice-restart.png)
+
+## Configure App service idle setting to avoid timeout
+
+The App Service that serves the QnA Maker prediction runtime for a published knowledge base has an idle timeout configuration that, by default, unloads the app when the service is idle. For QnA Maker, this means the GenerateAnswer API of your prediction runtime occasionally times out after periods of no traffic.
+
+To keep the prediction endpoint app loaded even when there is no traffic, set the idle setting to **Always on**.
+
+1. Sign in to the [Azure portal](https://portal.azure.com).
+1. Search for and select your QnA Maker resource's App Service. It has the same name as the QnA Maker resource, but its **type** is App Service.
+1. Find **Settings** then select **Configuration**.
+1. On the Configuration pane, select **General settings**, then find **Always on**, and select **On** as the value.
+
+ > [!div class="mx-imgBorder"]
+ > ![On the Configuration pane, select **General settings**, then find **Always on**, and select **On** as the value.](../media/qnamaker-how-to-upgrade-qnamaker/configure-app-service-idle-timeout.png)
+
+1. Select **Save** to save the configuration.
+1. You are asked if you want to restart the app to use the new setting. Select **Continue**.
+
+Learn more about how to configure the App Service [General settings](../../../app-service/configure-common.md#configure-general-settings).
+
+## Business continuity with traffic manager
+
+The primary objective of the business continuity plan is to create a resilient knowledge base endpoint that ensures no downtime for the bot or the application consuming it.
+
+> [!div class="mx-imgBorder"]
+> ![QnA Maker bcp plan](../media/qnamaker-how-to-bcp-plan/qnamaker-bcp-plan.png)
+
+The high-level idea as represented above is as follows:
+
+1. Set up two parallel [QnA Maker services](set-up-qnamaker-service-azure.md) in [Azure paired regions](../../../availability-zones/cross-region-replication-azure.md).
+
+1. [Back up](../../../app-service/manage-backup.md) your primary QnA Maker App Service and [restore](../../../app-service/manage-backup.md) it in the secondary setup. This ensures that both setups work with the same hostname and keys.
+
+1. Keep the primary and secondary Azure Search indexes in sync. Use the GitHub sample [here](https://github.com/pchoudhari/QnAMakerBackupRestore) to see how to back up and restore Azure Search indexes.
+
+1. Back up the Application Insights using [continuous export](/previous-versions/azure/azure-monitor/app/export-telemetry).
+
+1. Once the primary and secondary stacks have been set up, use [traffic manager](../../../traffic-manager/traffic-manager-overview.md) to configure the two endpoints and set up a routing method.
+
+1. Create a Transport Layer Security (TLS) certificate, previously known as a Secure Sockets Layer (SSL) certificate, for your Traffic Manager endpoint. [Bind the TLS/SSL certificate](../../../app-service/configure-ssl-bindings.md) in your App Services.
+
+1. Finally, use the traffic manager endpoint in your Bot or App.
ai-services Edit Knowledge Base https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/qnamaker/How-To/edit-knowledge-base.md
+
+ Title: Edit a knowledge base - QnA Maker
+description: QnA Maker allows you to manage the content of your knowledge base by providing an easy-to-use editing experience.
++++++ Last updated : 11/19/2021+++
+# Edit QnA pairs in your knowledge base
+
+QnA Maker allows you to manage the content of your knowledge base by providing an easy-to-use editing experience.
+
+QnA pairs are added from a data source, such as a file or URL, or added as an editorial source. An editorial source indicates the QnA pair was added manually in the QnA portal. All QnA pairs are available for editing.
++
+<a name="add-an-editorial-qna-set"></a>
+
+## Question and answer pairs
+
+A knowledge base consists of question and answer (QnA) pairs. Each pair has one answer and a pair contains all the information associated with that _answer_. An answer can loosely resemble a database row or a data structure instance. The **required** settings in a question-and-answer (QnA) pair are:
+
+* a **question** - the text of a user query, used by QnA Maker's machine learning to match user questions that are worded differently but have the same answer
+* the **answer** - the pair's answer is the response that's returned when a user query is matched with the associated question
+
+Each pair is represented by an **ID**.
+
+The **optional** settings for a pair include:
+
+* **Alternate forms of the question** - this helps QnA Maker return the correct answer for a wider variety of question phrasings
+* **Metadata**: Metadata are tags associated with a QnA pair and are represented as key-value pairs. Metadata tags are used to filter QnA pairs and limit the set over which query matching is performed.
+* **Multi-turn prompts**, used to continue a multi-turn conversation
+
+![QnA Maker knowledge bases](../media/qnamaker-concepts-knowledgebase/knowledgebase.png)
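+
+To make these required and optional fields concrete, here is a minimal, illustrative C# sketch of how a client application might model a QnA pair locally. The class and property names are descriptive choices for this example, not QnA Maker's actual data contract.
+
+```csharp
+using System.Collections.Generic;
+
+// Illustrative only: a simplified local model of the fields that make up a QnA pair.
+public class QnaPairModel
+{
+    public int Id { get; set; }        // assigned by QnA Maker when the pair is created
+    public string Answer { get; set; } // required: the response returned to the user
+
+    // Required: the primary question, plus any alternate forms of the question.
+    public List<string> Questions { get; set; } = new List<string>();
+
+    // Optional: key/value tags used to filter QnA pairs at query time.
+    public Dictionary<string, string> Metadata { get; set; } = new Dictionary<string, string>();
+
+    // Optional: display text of follow-up prompts used for multi-turn conversations.
+    public List<string> FollowUpPrompts { get; set; } = new List<string>();
+}
+```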
+
+## Add an editorial QnA pair
+
+If you do not have pre-existing content to populate the knowledge base, you can add QnA pairs editorially in the QnA Maker portal.
+
+1. Sign in to the [QnA portal](https://www.qnamaker.ai/), then select the knowledge base to add the QnA pair to.
+1. On the **EDIT** page of the knowledge base, select **Add QnA pair** to add a new QnA pair.
+
+ > [!div class="mx-imgBorder"]
+ > ![Add QnA pair](../media/qnamaker-how-to-edit-kb/add-qnapair.png)
+
+1. In the new QnA pair row, add the required question and answer fields. The other fields are optional. All fields can be changed at any time.
+
+1. Optionally, add **[alternate phrasing](../quickstarts/add-question-metadata-portal.md#add-additional-alternatively-phrased-questions)**. Alternate phrasing is any form of the question that is significantly different from the original question but should provide the same answer.
+
+ When your knowledge base is published, and you have [active learning](use-active-learning.md) turned on, QnA Maker collects alternate phrasing choices for you to accept. These choices are selected in order to increase the prediction accuracy.
+
+1. Optionally, add **[metadata](../quickstarts/add-question-metadata-portal.md#add-metadata-to-filter-the-answers)**. To view metadata, select **View options** in the context menu. Metadata provides filters to the answers that the client application, such as a chat bot, provides.
+
+1. Optionally, add **[follow-up prompts](multi-turn.md)**. Follow-up prompts provide additional conversation paths to the client application to present to the user.
+
+1. Select **Save and train** to see predictions including the new QnA pair.
+
+## Rich-text editing for answer
+
+Rich-text editing of your answer text gives you markdown styling from a simple toolbar.
+
+1. Select the text area for an answer; the rich-text editor toolbar displays on the QnA pair row.
+
+ > [!div class="mx-imgBorder"]
+ > ![Screenshot of the rich-text editor with the question and answer of a QnA pair row.](../media/qnamaker-how-to-edit-kb/rich-text-control-qna-pair-row.png)
+
+ Any text already in the answer displays correctly as your user will see it from a bot.
+
+1. Edit the text. Select formatting features from the rich-text editing toolbar or use the toggle feature to switch to markdown syntax.
+
+ > [!div class="mx-imgBorder"]
+ > ![Use the rich-text editor to write and format text and save as markdown.](../media/qnamaker-how-to-edit-kb/rich-text-display-image.png)
+
+ |Rich-text editor features|Keyboard shortcut|
+ |--|--|
+ |Toggle between rich-text editor and markdown. `</>`|CTRL+M|
+    |Bold. **B**|CTRL+B|
+ |Italics, indicated with an italicized **_I_**|CTRL+I|
+ |Unordered list||
+ |Ordered list||
+ |Paragraph style||
+ |Image - add an image available from a public URL.|CTRL+G|
+ |Add link to publicly available URL.|CTRL+K|
+ |Emoticon - add from a selection of emoticons.|CTRL+E|
+ |Advanced menu - undo|CTRL+Z|
+ |Advanced menu - redo|CTRL+Y|
+
+1. Add an image to the answer using the Image icon in the rich-text toolbar. The in-place editor needs the publicly accessible image URL and the alternate text for the image.
++
+ > [!div class="mx-imgBorder"]
+ > ![Screenshot shows the in-place editor with the publicly accessible image URL and alternate text for the image entered.](../media/qnamaker-how-to-edit-kb/add-image-url-alternate-text.png)
+
+1. Add a link to a URL by either selecting the text in the answer, then selecting the Link icon in the toolbar or by selecting the Link icon in the toolbar then entering new text and the URL.
+
+ > [!div class="mx-imgBorder"]
+ > ![Use the rich-text editor add a publicly accessible image and its ALT text.](../media/qnamaker-how-to-edit-kb/add-link-to-answer-rich-text-editor.png)
+
+## Edit a QnA pair
+
+Any field in any QnA pair can be edited, regardless of the original data source. Some fields may not be visible due to your current **View Options** settings, found in the context tool bar.
+
+## Delete a QnA pair
+
+To delete a QnA, click the **delete** icon on the far right of the QnA row. This is a permanent operation. It can't be undone. Consider exporting your KB from the **Publish** page before deleting pairs.
+
+![Delete QnA pair](../media/qnamaker-how-to-edit-kb/delete-qnapair.png)
+
+## Find the QnA pair ID
+
+If you need to find the QnA pair ID, you can find it in two places:
+
+* Hover on the delete icon on the QnA pair row you are interested in. The hover text includes the QnA pair ID.
+* Export the knowledge base. Each QnA pair in the knowledge base includes the QnA pair ID.
+
+## Add alternate questions
+
+Add alternate questions to an existing QnA pair to improve the likelihood of a match to a user query.
+
+![Add Alternate Questions](../media/qnamaker-how-to-edit-kb/add-alternate-question.png)
+
+## Linking QnA pairs
+
+Linking QnA pairs is provided with [follow-up prompts](multi-turn.md). This is a logical connection between QnA pairs, managed at the knowledge base level. You can edit follow-up prompts in the QnA Maker portal.
+
+You can't link QnA pairs in the answer's metadata.
+
+## Add metadata
+
+Add metadata pairs by first selecting **View options**, then selecting **Show metadata**. This displays the metadata column. Next, select the **+** sign to add a metadata pair. This pair consists of one key and one value.
+
+Learn more about metadata in the QnA Maker portal quickstart for metadata:
+* [Authoring - add metadata to QnA pair](../quickstarts/add-question-metadata-portal.md#add-metadata-to-filter-the-answers)
+* [Query prediction - filter answers by metadata](../quickstarts/get-answer-from-knowledge-base-using-url-tool.md)
+
+## Save changes to the QnA pairs
+
+Periodically select **Save and train** after making edits to avoid losing changes.
+
+![Add Metadata](../media/qnamaker-how-to-edit-kb/add-metadata.png)
+
+## When to use rich-text editing versus markdown
+
+[Rich-text editing](#add-an-editorial-qna-set) of answers allows you, as the author, to use a formatting toolbar to quickly select and format text.
+
+[Markdown](../reference-markdown-format.md) is a better tool when you need to autogenerate content to create knowledge bases to be imported as part of a CI/CD pipeline or for [batch testing](../index.yml).
+
+## Editing your knowledge base locally
+
+Once a knowledge base is created, it is recommended that you make edits to the knowledge base text in the [QnA Maker portal](https://qnamaker.ai), rather than exporting and reimporting through local files. However, there may be times that you need to edit a knowledge base locally.
+
+Export the knowledge base from the **Settings** page, then edit the knowledge base with Microsoft Excel. If you choose to use another application to edit your exported file, the application may introduce syntax errors because it is not fully TSV compliant. Microsoft Excel's TSV files generally don't introduce any formatting errors.
+
+Once you are done with your edits, reimport the TSV file from the **Settings** page. This will completely replace the current knowledge base with the imported knowledge base.
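+
+If you do edit the exported file outside of Excel, a quick structural check before you reimport can catch the most common problem: rows with an inconsistent number of tab-separated columns. The following C# snippet is only an illustrative sketch; the file name is a placeholder for your own exported file.
+
+```csharp
+using System;
+using System.IO;
+using System.Linq;
+
+public static class TsvSanityCheck
+{
+    // Report any row whose tab-separated column count differs from the header row.
+    public static void Run(string path = "ExportedKnowledgeBase.tsv") // placeholder path
+    {
+        var lines = File.ReadAllLines(path);
+        if (lines.Length == 0)
+        {
+            Console.WriteLine("File is empty.");
+            return;
+        }
+
+        int expectedColumns = lines[0].Split('\t').Length;
+        var badRows = lines
+            .Select((line, index) => (index, columns: line.Split('\t').Length))
+            .Where(row => row.columns != expectedColumns)
+            .ToList();
+
+        Console.WriteLine(badRows.Count == 0
+            ? $"All {lines.Length} rows have {expectedColumns} columns."
+            : $"{badRows.Count} row(s) have an unexpected column count, starting at row {badRows[0].index + 1}.");
+    }
+}
+```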
+
+## Next steps
+
+> [!div class="nextstepaction"]
+> [Collaborate on a knowledge base](../index.yml)
+
+* [Manage Azure resources used by QnA Maker](set-up-qnamaker-service-azure.md)
ai-services Get Analytics Knowledge Base https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/qnamaker/How-To/get-analytics-knowledge-base.md
+
+ Title: Analytics on knowledgebase - QnA Maker
+
+description: QnA Maker stores all chat logs and other telemetry, if you have enabled App Insights during the creation of your QnA Maker service. Run the sample queries to get your chat logs from App Insights.
++++
+displayName: chat history, history, chat logs, logs
+++ Last updated : 08/25/2021+++
+# Get analytics on your knowledge base
+
+QnA Maker stores all chat logs and other telemetry, if you have enabled Application Insights during the [creation of your QnA Maker service](./set-up-qnamaker-service-azure.md). Run the sample queries to get your chat logs from Application Insights.
++
+1. Go to your Application Insights resource.
+
+ ![Select your application insights resource](../media/qnamaker-how-to-analytics-kb/resources-created.png)
+
+2. Select **Log (Analytics)**. A new window opens where you can query QnA Maker telemetry.
+
+3. Paste in the following query and run it.
+
+ ```kusto
+ requests
+ | where url endswith "generateAnswer"
+ | project timestamp, id, url, resultCode, duration, performanceBucket
+ | parse kind = regex url with *"(?i)knowledgebases/"KbId"/generateAnswer"
+ | join kind= inner (
+ traces | extend id = operation_ParentId
+ ) on id
+ | where message == "QnAMaker GenerateAnswer"
+ | extend question = tostring(customDimensions['Question'])
+ | extend answer = tostring(customDimensions['Answer'])
+ | extend score = tostring(customDimensions['Score'])
+ | project timestamp, resultCode, duration, id, question, answer, score, performanceBucket,KbId
+ ```
+
+ Select **Run** to run the query.
+
+ [![Run query to determine questions, answers, and score from users](../media/qnamaker-how-to-analytics-kb/run-query.png)](../media/qnamaker-how-to-analytics-kb/run-query.png#lightbox)
+
+## Run queries for other analytics on your QnA Maker knowledge base
+
+### Total 90-day traffic
+
+```kusto
+//Total Traffic
+requests
+| where url endswith "generateAnswer" and name startswith "POST"
+| parse kind = regex url with *"(?i)knowledgebases/"KbId"/generateAnswer"
+| summarize ChatCount=count() by bin(timestamp, 1d), KbId
+```
+
+### Total question traffic in a given time period
+
+```kusto
+//Total Question Traffic in a given time period
+let startDate = todatetime('2019-01-01');
+let endDate = todatetime('2020-12-31');
+requests
+| where timestamp <= endDate and timestamp >=startDate
+| where url endswith "generateAnswer" and name startswith "POST"
+| parse kind = regex url with *"(?i)knowledgebases/"KbId"/generateAnswer"
+| summarize ChatCount=count() by KbId
+```
+
+### User traffic
+
+```kusto
+//User Traffic
+requests
+| where url endswith "generateAnswer"
+| project timestamp, id, url, resultCode, duration
+| parse kind = regex url with *"(?i)knowledgebases/"KbId"/generateAnswer"
+| join kind= inner (
+traces | extend id = operation_ParentId
+) on id
+| extend UserId = tostring(customDimensions['UserId'])
+| summarize ChatCount=count() by bin(timestamp, 1d), UserId, KbId
+```
+
+### Latency distribution of questions
+
+```kusto
+//Latency distribution of questions
+requests
+| where url endswith "generateAnswer" and name startswith "POST"
+| parse kind = regex url with *"(?i)knowledgebases/"KbId"/generateAnswer"
+| project timestamp, id, name, resultCode, performanceBucket, KbId
+| summarize count() by performanceBucket, KbId
+```
+
+### Unanswered questions
+
+```kusto
+// Unanswered questions
+requests
+| where url endswith "generateAnswer"
+| project timestamp, id, url
+| parse kind = regex url with *"(?i)knowledgebases/"KbId"/generateAnswer"
+| join kind= inner (
+traces | extend id = operation_ParentId
+) on id
+| extend question = tostring(customDimensions['Question'])
+| extend answer = tostring(customDimensions['Answer'])
+| extend score = tostring(customDimensions['Score'])
+| where score == "0" and message == "QnAMaker GenerateAnswer"
+| project timestamp, KbId, question, answer, score
+| order by timestamp desc
+```
+
+> [!NOTE]
+> If you can't retrieve the logs by using Application Insights, confirm the Application Insights settings on the App Service resource.
+> Open the App Service resource and go to **Application Insights**. Check whether Application Insights is enabled or disabled. If it's disabled, enable it and then apply the change.
+
+## Next steps
+
+> [!div class="nextstepaction"]
+> [Choose capacity](./improve-knowledge-base.md)
ai-services Improve Knowledge Base https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/qnamaker/How-To/improve-knowledge-base.md
+
+ Title: Active Learning suggested questions - QnA Maker
+description: Improve the quality of your knowledge base with active learning. Review, accept or reject, add without removing or changing existing questions.
+++ Last updated : 01/11/2022++
+ms.devlang: csharp, javascript
++++
+# Accept active learning suggested questions in the knowledge base
++
+<a name="accept-an-active-learning-suggestion-in-the-knowledge-base"></a>
+
+Active learning alters the knowledge base or search service only after you approve a suggestion and then save and train. If you approve a suggestion, it's added as an alternate question.
+
+## Turn on active learning
+
+In order to see suggested questions, you must [turn on active learning](../how-to/use-active-learning.md#turn-on-active-learning-for-alternate-questions) for your QnA Maker resource.
+
+## View suggested questions
+
+1. In order to see the suggested questions, on the **Edit** knowledge base page, select **View Options**, then select **Show active learning suggestions**. This option will be disabled if there are no suggestions present for any of the question and answer pairs.
+
+ [![On the Edit section of the portal, select Show Suggestions in order to see the active learning's new question alternatives.](../media/improve-knowledge-base/show-suggestions-button.png)](../media/improve-knowledge-base/show-suggestions-button.png#lightbox)
+
+1. Filter the knowledge base with question and answer pairs to show only suggestions by selecting **Filter by Suggestions**.
+
+ [![Use the Filter by suggestions toggle to view only the active learning's suggested question alternatives.](../media/improve-knowledge-base/filter-by-suggestions.png)](../media/improve-knowledge-base/filter-by-suggestions.png#lightbox)
+
+1. Each QnA pair displays its suggested question alternatives with a check mark, `✔`, to accept the question, or an `x` to reject the suggestion. Select the check mark to add the question.
+
+ [![Select or reject active learning's suggested question alternatives by selecting the green check mark or red delete mark.](../media/improve-knowledge-base/accept-active-learning-suggestions-small.png)](../media/improve-knowledge-base/accept-active-learning-suggestions.png#lightbox)
+
+ You can add or delete _all suggestions_ by selecting **Add all** or **Reject all** in the contextual toolbar.
+
+1. Select **Save and Train** to save the changes to the knowledge base.
+
+1. Select **Publish** to allow the changes to be available from the [GenerateAnswer API](metadata-generateanswer-usage.md#generateanswer-request-configuration).
+
+    Every 30 minutes, when 5 or more similar queries are clustered, QnA Maker suggests alternate questions for you to accept or reject.
++
+<a name="#score-proximity-between-knowledge-base-questions"></a>
+
+## Active learning suggestions are saved in the exported knowledge base
+
+When your app has active learning enabled, and you export the app, the `SuggestedQuestions` column in the tsv file retains the active learning data.
+
+The `SuggestedQuestions` column is a JSON object that contains information about implicit (`autosuggested`) and explicit (`usersuggested`) feedback. An example of this JSON object for a single user-submitted question of `help` is:
+
+```JSON
+[
+ {
+ "clusterHead": "help",
+ "totalAutoSuggestedCount": 1,
+ "totalUserSuggestedCount": 0,
+ "alternateQuestionList": [
+ {
+ "question": "help",
+ "autoSuggestedCount": 1,
+ "userSuggestedCount": 0
+ }
+ ]
+ }
+]
+```
+
+When you reimport this app, the active learning continues to collect information and recommend suggestions for your knowledge base.
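+
+If you process exported knowledge base files programmatically, the `SuggestedQuestions` column can be deserialized with classes that mirror the JSON shown above. This is an illustrative sketch; the class names are choices for this example, not an official SDK contract.
+
+```csharp
+using System.Collections.Generic;
+using Newtonsoft.Json;
+
+public class SuggestedQuestionCluster
+{
+    [JsonProperty("clusterHead")]
+    public string ClusterHead { get; set; }
+
+    [JsonProperty("totalAutoSuggestedCount")]
+    public int TotalAutoSuggestedCount { get; set; }
+
+    [JsonProperty("totalUserSuggestedCount")]
+    public int TotalUserSuggestedCount { get; set; }
+
+    [JsonProperty("alternateQuestionList")]
+    public List<AlternateQuestion> AlternateQuestionList { get; set; }
+}
+
+public class AlternateQuestion
+{
+    [JsonProperty("question")]
+    public string Question { get; set; }
+
+    [JsonProperty("autoSuggestedCount")]
+    public int AutoSuggestedCount { get; set; }
+
+    [JsonProperty("userSuggestedCount")]
+    public int UserSuggestedCount { get; set; }
+}
+
+// Usage: deserialize the column value from one exported row.
+// var clusters = JsonConvert.DeserializeObject<List<SuggestedQuestionCluster>>(columnValue);
+```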
++
+### Architectural flow for using GenerateAnswer and Train APIs from a bot
+
+A bot or other client application should use the following architectural flow to use active learning:
+
+1. Bot [gets the answer from the knowledge base](#use-the-top-property-in-the-generateanswer-request-to-get-several-matching-answers) with the GenerateAnswer API, using the `top` property to get a number of answers.
+
+2. Bot determines explicit feedback:
+ * Using your own [custom business logic](#use-the-score-property-along-with-business-logic-to-get-list-of-answers-to-show-user), filter out low scores.
+ * In the bot or client-application, display list of possible answers to the user and get user's selected answer.
+3. Bot [sends selected answer back to QnA Maker](#bot-framework-sample-code) with the [Train API](#train-api).
+
+### Use the top property in the GenerateAnswer request to get several matching answers
+
+When submitting a question to QnA Maker for an answer, the `top` property of the JSON body sets the number of answers to return.
+
+```json
+{
+ "question": "wi-fi",
+ "isTest": false,
+ "top": 3
+}
+```
+
+### Use the score property along with business logic to get list of answers to show user
+
+When the client application (such as a chat bot) receives the response, the top 3 answers are returned. Use the `score` property to analyze the proximity between scores. This proximity range is determined by your own business logic.
+
+```json
+{
+ "answers": [
+ {
+ "questions": [
+ "Wi-Fi Direct Status Indicator"
+ ],
+ "answer": "**Wi-Fi Direct Status Indicator**\n\nStatus bar icons indicate your current Wi-Fi Direct connection status: \n\nWhen your device is connected to another device using Wi-Fi Direct, '$ \n\n+ *+ ' Wi-Fi Direct is displayed in the Status bar.",
+ "score": 74.21,
+ "id": 607,
+ "source": "Bugbash KB.pdf",
+ "metadata": []
+ },
+ {
+ "questions": [
+ "Wi-Fi - Connections"
+ ],
+ "answer": "**Wi-Fi**\n\nWi-Fi is a term used for certain types of Wireless Local Area Networks (WLAN). Wi-Fi communication requires access to a wireless Access Point (AP).",
+ "score": 74.15,
+ "id": 599,
+ "source": "Bugbash KB.pdf",
+ "metadata": []
+ },
+ {
+ "questions": [
+ "Turn Wi-Fi On or Off"
+ ],
+ "answer": "**Turn Wi-Fi On or Off**\n\nTurning Wi-Fi on makes your device able to discover and connect to compatible in-range wireless APs. \n\n1. From a Home screen, tap ::: Apps > e Settings .\n2. Tap Connections > Wi-Fi , and then tap On/Off to turn Wi-Fi on or off.",
+ "score": 69.99,
+ "id": 600,
+ "source": "Bugbash KB.pdf",
+ "metadata": []
+ }
+ ]
+}
+```
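+
+The following C# snippet is a minimal, illustrative sketch of one such business-logic rule, and is not part of the Azure Bot sample: it keeps only the answers whose scores fall within a fixed proximity band of the top score, so the client can decide whether to show a single answer or a disambiguation list. The threshold and band values are placeholder choices.
+
+```csharp
+using System.Collections.Generic;
+using System.Linq;
+
+public static class AnswerSelector
+{
+    // Keep answers whose score is within 'proximityBand' points of the best score.
+    // If more than one answer survives, the client should ask the user to pick one.
+    public static List<(string Question, double Score)> FilterByProximity(
+        IEnumerable<(string Question, double Score)> answers,
+        double minimumScore = 50,   // placeholder: drop low-confidence answers
+        double proximityBand = 5)   // placeholder: how close counts as a "near tie"
+    {
+        var candidates = answers
+            .Where(a => a.Score >= minimumScore)
+            .OrderByDescending(a => a.Score)
+            .ToList();
+
+        if (candidates.Count == 0)
+        {
+            return candidates;
+        }
+
+        double topScore = candidates[0].Score;
+        return candidates
+            .Where(a => topScore - a.Score <= proximityBand)
+            .ToList();
+    }
+}
+```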
+
+## Client application follow-up when questions have similar scores
+
+Your client application displays the questions with an option for the user to select _the single question_ that most represents their intention.
+
+Once the user selects one of the existing questions, the client application sends the user's choice as feedback using the QnA Maker Train API. This feedback completes the active learning feedback loop.
+
+## Train API
+
+Active learning feedback is sent to QnA Maker with the Train API POST request. The API signature is:
+
+```http
+POST https://<QnA-Maker-resource-name>.azurewebsites.net/qnamaker/knowledgebases/<knowledge-base-ID>/train
+Authorization: EndpointKey <endpoint-key>
+Content-Type: application/json
+{"feedbackRecords": [{"userId": "1","userQuestion": "<question-text>","qnaId": 1}]}
+```
+
+|HTTP request property|Name|Type|Purpose|
+|--|--|--|--|
+|URL route parameter|Knowledge base ID|string|The GUID for your knowledge base.|
+|Custom subdomain|QnAMaker resource name|string|The resource name is used as the custom subdomain for your QnA Maker. This is available on the Settings page after you publish the knowledge base. It is listed as the `host`.|
+|Header|Content-Type|string|The media type of the body sent to the API. Default value is: `application/json`|
+|Header|Authorization|string|Your endpoint key (EndpointKey xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx).|
+|Post Body|JSON object|JSON|The training feedback|
+
+The JSON body has several settings:
+
+|JSON body property|Type|Purpose|
+|--|--|--|
+|`feedbackRecords`|array|List of feedback.|
+|`userId`|string|The user ID of the person accepting the suggested questions. The user ID format is up to you. For example, an email address can be a valid user ID in your architecture. Optional.|
+|`userQuestion`|string|Exact text of the user's query. Required.|
+|`qnaID`|number|ID of question, found in the [GenerateAnswer response](metadata-generateanswer-usage.md#generateanswer-response-properties). |
+
+An example JSON body looks like:
+
+```json
+{
+ "feedbackRecords": [
+ {
+ "userId": "1",
+ "userQuestion": "<question-text>",
+ "qnaId": 1
+ }
+ ]
+}
+```
+
+A successful response returns a status of 204 and no JSON response body.
+
+### Batch many feedback records into a single call
+
+In the client-side application, such as a bot, you can store the data, then send many records in a single JSON body in the `feedbackRecords` array.
+
+An example JSON body looks like:
+
+```json
+{
+ "feedbackRecords": [
+ {
+ "userId": "1",
+ "userQuestion": "How do I ...",
+ "qnaId": 1
+ },
+ {
+ "userId": "2",
+ "userQuestion": "Where is ...",
+ "qnaId": 40
+ },
+ {
+ "userId": "3",
+ "userQuestion": "When do I ...",
+ "qnaId": 33
+ }
+ ]
+}
+```
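+
+As a rough sketch of how that batched body could be produced in C#, the following example serializes an anonymous object with the same shape. The user IDs, questions, and QnA IDs are placeholder values.
+
+```csharp
+using Newtonsoft.Json;
+
+public static class FeedbackBatchBuilder
+{
+    // Build the batched feedback body shown above and return it as a JSON string
+    // that can be sent as the POST body of the Train API request.
+    public static string Build()
+    {
+        var batchBody = new
+        {
+            feedbackRecords = new[]
+            {
+                new { userId = "1", userQuestion = "How do I ...", qnaId = 1 },
+                new { userId = "2", userQuestion = "Where is ...", qnaId = 40 },
+                new { userId = "3", userQuestion = "When do I ...", qnaId = 33 }
+            }
+        };
+
+        return JsonConvert.SerializeObject(batchBody);
+    }
+}
+```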
+++
+<a name="active-learning-is-saved-in-the-exported-apps-tsv-file"></a>
+
+## Bot framework sample code
+
+Your Bot Framework code needs to call the Train API if the user's query should be used for active learning. There are two pieces of code to write:
+
+* Determine if query should be used for active learning
+* Send query back to the QnA Maker Train API for active learning
+
+In the [Azure Bot sample](https://github.com/microsoft/BotBuilder-Samples), both of these activities have been programmed.
+
+### Example C# code for Train API with Bot Framework 4.x
+
+The following code illustrates how to send information back to QnA Maker with the Train API.
+
+```csharp
+public class FeedbackRecords
+{
+    /// <summary>
+ /// List of feedback records
+ /// </summary>
+ [JsonProperty("feedbackRecords")]
+ public FeedbackRecord[] Records { get; set; }
+}
+
+/// <summary>
+/// Active learning feedback record
+/// </summary>
+public class FeedbackRecord
+{
+ /// <summary>
+ /// User id
+ /// </summary>
+ public string UserId { get; set; }
+
+ /// <summary>
+ /// User question
+ /// </summary>
+ public string UserQuestion { get; set; }
+
+ /// <summary>
+ /// QnA Id
+ /// </summary>
+ public int QnaId { get; set; }
+}
+
+/// <summary>
+/// Method to call REST-based QnAMaker Train API for Active Learning
+/// </summary>
+/// <param name="endpoint">Endpoint URI of the runtime</param>
+/// <param name="FeedbackRecords">Feedback records train API</param>
+/// <param name="kbId">Knowledgebase Id</param>
+/// <param name="key">Endpoint key</param>
+/// <param name="cancellationToken"> Cancellation token</param>
+public async static void CallTrain(string endpoint, FeedbackRecords feedbackRecords, string kbId, string key, CancellationToken cancellationToken)
+{
+ var uri = endpoint + "/knowledgebases/" + kbId + "/train/";
+
+ using (var client = new HttpClient())
+ {
+ using (var request = new HttpRequestMessage())
+ {
+ request.Method = HttpMethod.Post;
+ request.RequestUri = new Uri(uri);
+ request.Content = new StringContent(JsonConvert.SerializeObject(feedbackRecords), Encoding.UTF8, "application/json");
+ request.Headers.Add("Authorization", "EndpointKey " + key);
+
+ var response = await client.SendAsync(request, cancellationToken);
+ await response.Content.ReadAsStringAsync();
+ }
+ }
+}
+```
+
+### Example Node.js code for Train API with Bot Framework 4.x
+
+The following code illustrates how to send information back to QnA Maker with the Train API.
+
+```javascript
+async callTrain(stepContext){
+
+ var trainResponses = stepContext.values[this.qnaData];
+ var currentQuery = stepContext.values[this.currentQuery];
+
+ if(trainResponses.length > 1){
+ var reply = stepContext.context.activity.text;
+ var qnaResults = trainResponses.filter(r => r.questions[0] == reply);
+
+ if(qnaResults.length > 0){
+
+ stepContext.values[this.qnaData] = qnaResults;
+
+ var feedbackRecords = {
+ FeedbackRecords:[
+ {
+ UserId:stepContext.context.activity.id,
+ UserQuestion: currentQuery,
+ QnaId: qnaResults[0].id
+ }
+ ]
+ };
+
+ // Call Active Learning Train API
+ this.activeLearningHelper.callTrain(this.qnaMaker.endpoint.host, feedbackRecords, this.qnaMaker.endpoint.knowledgeBaseId, this.qnaMaker.endpoint.endpointKey);
+
+ return await stepContext.next(qnaResults);
+ }
+ else{
+
+ return await stepContext.endDialog();
+ }
+ }
+
+ return await stepContext.next(stepContext.result);
+}
+```
+
+## Best practices
+
+For best practices when using active learning, see [Best practices](../concepts/best-practices.md#active-learning).
+
+## Next steps
+
+> [!div class="nextstepaction"]
+> [Use metadata with GenerateAnswer API](metadata-generateanswer-usage.md)
ai-services Manage Knowledge Bases https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/qnamaker/How-To/manage-knowledge-bases.md
+
+ Title: Manage knowledge bases - QnA Maker
+description: QnA Maker allows you to manage your knowledge bases by providing access to the knowledge base settings and content.
++++++ Last updated : 03/18/2020+++
+# Create knowledge base and manage settings
+
+QnA Maker allows you to manage your knowledge bases by providing access to the knowledge base settings and data sources.
++
+## Prerequisites
+
+> * If you don't have an Azure subscription, [create a free account](https://azure.microsoft.com/free/cognitive-services/) before you begin.
+> * A [QnA Maker resource](https://portal.azure.com/#create/Microsoft.CognitiveServicesQnAMaker) created in the Azure portal. Remember your Azure Active Directory ID, Subscription, QnA resource name you selected when you created the resource.
+
+## Create a knowledge base
+
+1. Sign in to the [QnAMaker.ai](https://QnAMaker.ai) portal with your Azure credentials.
+
+1. In the QnA Maker portal, select **Create a knowledge base**.
+
+1. On the **Create** page, skip **Step 1** if you already have your QnA Maker resource.
+
+ If you haven't created the resource yet, select **Stable** and **Create a QnA service**. You are directed to the [Azure portal](https://portal.azure.com/#create/Microsoft.CognitiveServicesQnAMaker) to set up a QnA Maker service in your subscription. Remember your Azure Active Directory ID, Subscription, QnA resource name you selected when you created the resource.
+
+ When you are done creating the resource in the Azure portal, return to the QnA Maker portal, refresh the browser page, and continue to **Step 2**.
+
+1. In **Step 2**, select your Active Directory, subscription, service (resource), and the language for all knowledge bases created in the service.
+
+ ![Screenshot of selecting a QnA Maker service knowledge base](../media/qnamaker-quickstart-kb/qnaservice-selection.png)
+
+1. In **Step 3**, name your knowledge base `My Sample QnA KB`.
+
+1. In **Step 4**, configure the settings with the following table:
+
+ |Setting|Value|
+ |--|--|
+ |**Enable multi-turn extraction from URLs, .pdf or .docx files.**|Checked|
+ |**Default answer text**| `Quickstart - default answer not found.`|
+ |**+ Add URL**|`https://azure.microsoft.com/en-us/support/faq/`|
+ |**Chit-chat**|Select **Professional**|
++
+1. In **Step 5**, select **Create your KB**.
+
+ The extraction process takes a few moments to read the document and identify questions and answers.
+
+ After QnA Maker successfully creates the knowledge base, the **Knowledge base** page opens. You can edit the contents of the knowledge base on this page.
+
+## Edit knowledge base
+
+1. Select **My knowledge bases** in the top navigation bar.
+
+    You can see all the knowledge bases you created or that are shared with you, sorted in descending order of the **last modified** date.
+
+ ![My Knowledge Bases](../media/qnamaker-how-to-edit-kb/my-kbs.png)
+
+1. Select a particular knowledge base to make edits to it.
+
+1. Select **Settings**. The following list contains fields you can change.
+
+ |Goal|Action|
+ |--|--|
+    |Add URL|You can add new URLs to add new FAQ content to the knowledge base by selecting **Manage knowledge base** > **+ Add URL**.|
+    |Delete URL|You can delete existing URLs by selecting the delete (trash can) icon.|
+    |Refresh content|If you want your knowledge base to crawl the latest content of existing URLs, select the **Refresh** checkbox. This action updates the knowledge base with the latest URL content once. It doesn't set a regular schedule of updates.|
+    |Add file|You can add a supported file document to a knowledge base by selecting **Manage knowledge base**, then selecting **+ Add File**.|
+    |Import|You can also import any existing knowledge base by selecting the **Import Knowledge base** button.|
+    |Update|Updating the knowledge base depends on the **management pricing tier** used while creating the QnA Maker service associated with your knowledge base. You can also update the management tier from the Azure portal if necessary.|
+
+ <br/>
+ 1. Once you are done making changes to the knowledge base, select **Save and train** in the top-right corner of the page in order to persist the changes.
+
+ ![Save and Train](../media/qnamaker-how-to-edit-kb/save-and-train.png)
+
+ >[!CAUTION]
+ >If you leave the page before selecting **Save and train**, all changes will be lost.
+
+## Manage large knowledge bases
+
+* **Data source groups**: The QnAs are grouped by the data source from which they were extracted. You can expand or collapse the data source.
+
+ ![Use the QnA Maker data source bar to collapse and expand data source questions and answers](../media/qnamaker-how-to-edit-kb/data-source-grouping.png)
+
+* **Search knowledge base**: You can search the knowledge base by typing in the text box at the top of the Knowledge Base table. Select enter to search on the question, answer, or metadata content. Select the X icon to remove the search filter.
+
+ ![Use the QnA Maker search box above the questions and answers to reduce the view to only filter-matching items](../media/qnamaker-how-to-edit-kb/search-paginate-group.png)
+
+* **Pagination**: Quickly move through data sources to manage large knowledge bases
+
+ ![Use the QnA Maker pagination features above the questions and answers to move through pages of questions and answers](../media/qnamaker-how-to-edit-kb/pagination.png)
+
+## Delete knowledge bases
+
+Deleting a knowledge base (KB) is a permanent operation. It can't be undone. Before deleting a knowledge base, you should export the knowledge base from the **Settings** page of the QnA Maker portal.
+
+If you [share your knowledge base with collaborators](collaborate-knowledge-base.md), then delete it, everyone loses access to the KB.
ai-services Manage Qna Maker App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/qnamaker/How-To/manage-qna-maker-app.md
+
+ Title: Manage QnA Maker App - QnA Maker
+description: QnA Maker allows multiple people to collaborate on a knowledge base. QnA Maker offers a capability to improve the quality of your knowledge base with active learning. One can review, accept or reject, and add without removing or changing existing questions.
++++++ Last updated : 11/09/2020+++
+# Manage QnA Maker app
+
+QnA Maker allows you to collaborate with different authors and content editors by offering a capability to restrict collaborator access based on the collaborator's role.
+Learn more about [QnA Maker collaborator authentication concepts](../concepts/role-based-access-control.md).
++
+## Add Azure role-based access control (Azure RBAC)
+
+QnA Maker allows multiple people to collaborate on all knowledge bases in the same QnA Maker resource. This feature is provided with [Azure role-based access control (Azure RBAC)](../../../role-based-access-control/role-assignments-portal.md).
+
+## Access at the cognitive resource level
+
+You cannot share a particular knowledge base in a QnA Maker service. If you want more granular access control, consider distributing your knowledge bases across different QnA Maker resources, then add roles to each resource.
+
+## Add a role to a resource
+
+### Add a user account to the cognitive resource
+
+You should apply RBAC controls to the QnA Maker resource.
+
+The following steps use the collaborator role, but any of the roles can be added using these steps.
+
+1. Sign in to the [Azure](https://portal.azure.com/) portal, and go to your cognitive resource.
+
+ ![QnA Maker resource list](../media/qnamaker-how-to-collaborate-knowledge-base/qnamaker-resource-list.png)
+
+1. Go to the **Access Control (IAM)** tab.
+
+ ![QnA Maker IAM](../media/qnamaker-how-to-collaborate-knowledge-base/qnamaker-iam.png)
+
+1. Select **Add**.
+
+ ![QnA Maker IAM add](../media/qnamaker-how-to-collaborate-knowledge-base/qnamaker-iam-add.png)
+
+1. Select a role from the following list:
+
+ |Role|
+ |--|
+ |Owner|
+ |Contributor|
+ |Azure AI QnA Maker Reader|
+ |Azure AI QnA Maker Editor|
+ |Azure AI services User|
+
+ :::image type="content" source="../media/qnamaker-how-to-collaborate-knowledge-base/qnamaker-add-role-iam.png" alt-text="QnA Maker IAM add role.":::
+
+1. Enter the user's email address and press **Save**.
+
+ ![QnA Maker IAM add email](../media/qnamaker-how-to-collaborate-knowledge-base/qnamaker-iam-add-email.png)
+
+### View QnA Maker knowledge bases
+
+When the person you shared your QnA Maker service with logs into the [QnA Maker portal](https://qnamaker.ai), they can see all the knowledge bases in that service based on their role.
+
+When they select a knowledge base, their current role on that QnA Maker resource is visible next to the knowledge base name.
++
+## Next steps
+
+> [!div class="nextstepaction"]
+> [Create a knowledge base](./manage-knowledge-bases.md)
ai-services Metadata Generateanswer Usage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/qnamaker/How-To/metadata-generateanswer-usage.md
+
+ Title: Metadata with GenerateAnswer API - QnA Maker
+
+description: QnA Maker lets you add metadata, in the form of key/value pairs, to your question/answer pairs. You can filter results to user queries, and store additional information that can be used in follow-up conversations.
+++++++ Last updated : 11/09/2020+++
+# Get an answer with the GenerateAnswer API
+
+To get the predicted answer to a user's question, use the GenerateAnswer API. When you publish a knowledge base, you can see information about how to use this API on the **Publish** page. You can also configure the API to filter answers based on metadata tags, and test the knowledge base from the endpoint with the test query string parameter.
++
+<a name="generateanswer-api"></a>
+
+## Get answer predictions with the GenerateAnswer API
+
+You use the [GenerateAnswer API](/rest/api/cognitiveservices/qnamakerruntime/runtime/generateanswer) in your bot or application to query your knowledge base with a user question, to get the best match from the question and answer pairs.
+
+> [!NOTE]
+> This documentation does not apply to the latest release. To learn about using the latest question answering APIs consult the [question answering quickstart guide](../../language-service/question-answering/quickstart/sdk.md).
+
+<a name="generateanswer-endpoint"></a>
+
+## Publish to get GenerateAnswer endpoint
+
+After you publish your knowledge base, either from the [QnA Maker portal](https://www.qnamaker.ai), or by using the [API](/rest/api/cognitiveservices/qnamaker/knowledgebase/publish), you can get the details of your GenerateAnswer endpoint.
+
+To get your endpoint details:
+1. Sign in to [https://www.qnamaker.ai](https://www.qnamaker.ai).
+1. In **My knowledge bases**, select **View Code** for your knowledge base.
+ ![Screenshot of My knowledge bases](../media/qnamaker-how-to-metadata-usage/my-knowledge-bases.png)
+1. Get your GenerateAnswer endpoint details.
+
+ ![Screenshot of endpoint details](../media/qnamaker-how-to-metadata-usage/view-code.png)
+
+You can also get your endpoint details from the **Settings** tab of your knowledge base.
+
+<a name="generateanswer-request"></a>
+
+## GenerateAnswer request configuration
+
+You call GenerateAnswer with an HTTP POST request. For sample code that shows how to call GenerateAnswer, see the [quickstarts](../quickstarts/quickstart-sdk.md#generate-an-answer-from-the-knowledge-base).
+
+The POST request uses:
+
+* Required [URI parameters](/rest/api/cognitiveservices/qnamakerruntime/runtime/train#uri-parameters)
+* Required header property, `Authorization`, for security
+* Required [body properties](/rest/api/cognitiveservices/qnamakerruntime/runtime/train#feedbackrecorddto).
+
+The GenerateAnswer URL has the following format:
+
+```
+https://{QnA-Maker-endpoint}/knowledgebases/{knowledge-base-ID}/generateAnswer
+```
+
+Remember to set the HTTP header property of `Authorization` with a value of the string `EndpointKey` with a trailing space then the endpoint key found on the **Settings** page.
+
+An example JSON body looks like:
+
+```json
+{
+ "question": "qna maker and luis",
+ "top": 6,
+ "isTest": true,
+ "scoreThreshold": 30,
+ "rankerType": "" // values: QuestionOnly
+ "strictFilters": [
+ {
+ "name": "category",
+ "value": "api"
+ }],
+ "userId": "sd53lsY="
+}
+```
+
+Learn more about [rankerType](../concepts/best-practices.md#choosing-ranker-type).
+
+The previous JSON requested only answers with a score of 30% or higher.
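+
+For reference, the following C# snippet is a minimal, illustrative sketch of posting that body to the GenerateAnswer endpoint with `HttpClient`. The endpoint host, knowledge base ID, and endpoint key are placeholders that you replace with the values from your knowledge base's **Settings** page.
+
+```csharp
+using System.Net.Http;
+using System.Text;
+using System.Threading.Tasks;
+using Newtonsoft.Json;
+
+public static class GenerateAnswerSample
+{
+    // Placeholder values - replace them with the details from your Settings page.
+    private const string Endpoint = "https://<QnA-Maker-resource-name>.azurewebsites.net/qnamaker";
+    private const string KnowledgeBaseId = "<knowledge-base-ID>";
+    private const string EndpointKey = "<endpoint-key>";
+
+    public static async Task<string> GetAnswerAsync(string question)
+    {
+        var uri = $"{Endpoint}/knowledgebases/{KnowledgeBaseId}/generateAnswer";
+        var body = JsonConvert.SerializeObject(new { question, top = 3, scoreThreshold = 30 });
+
+        using (var client = new HttpClient())
+        using (var request = new HttpRequestMessage(HttpMethod.Post, uri))
+        {
+            // The Authorization header value is the literal string "EndpointKey", a space, then your key.
+            request.Headers.Add("Authorization", "EndpointKey " + EndpointKey);
+            request.Content = new StringContent(body, Encoding.UTF8, "application/json");
+
+            var response = await client.SendAsync(request);
+            response.EnsureSuccessStatusCode();
+            return await response.Content.ReadAsStringAsync(); // raw JSON answer payload
+        }
+    }
+}
+```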
+
+<a name="generateanswer-response"></a>
+
+## GenerateAnswer response properties
+
+The [response](/rest/api/cognitiveservices/qnamakerruntime/runtime/generateanswer#successful-query) is a JSON object including all the information you need to display the answer and the next turn in the conversation, if available.
+
+```json
+{
+ "answers": [
+ {
+ "score": 38.54820341616869,
+ "Id": 20,
+ "answer": "There is no direct integration of LUIS with QnA Maker. But, in your bot code, you can use LUIS and QnA Maker together. [View a sample bot](https://github.com/Microsoft/BotBuilder-CognitiveServices/tree/master/Node/samples/QnAMaker/QnAWithLUIS)",
+ "source": "Custom Editorial",
+ "questions": [
+ "How can I integrate LUIS with QnA Maker?"
+ ],
+ "metadata": [
+ {
+ "name": "category",
+ "value": "api"
+ }
+ ]
+ }
+ ]
+}
+```
+
+The previous JSON responded with an answer with a score of 38.5%.
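+
+If you call the endpoint directly rather than through the Bot Framework, the response can be deserialized into simple classes that mirror the JSON above. The following is an illustrative sketch; the class and property names are choices for this example, not an official SDK contract.
+
+```csharp
+using System.Collections.Generic;
+using Newtonsoft.Json;
+
+public class GenerateAnswerResponse
+{
+    [JsonProperty("answers")]
+    public List<QnaAnswer> Answers { get; set; }
+}
+
+public class QnaAnswer
+{
+    [JsonProperty("score")]
+    public double Score { get; set; }
+
+    [JsonProperty("id")]
+    public int Id { get; set; }
+
+    [JsonProperty("answer")]
+    public string Answer { get; set; }
+
+    [JsonProperty("source")]
+    public string Source { get; set; }
+
+    [JsonProperty("questions")]
+    public List<string> Questions { get; set; }
+
+    [JsonProperty("metadata")]
+    public List<MetadataItem> Metadata { get; set; }
+}
+
+public class MetadataItem
+{
+    [JsonProperty("name")]
+    public string Name { get; set; }
+
+    [JsonProperty("value")]
+    public string Value { get; set; }
+}
+
+// Usage: var result = JsonConvert.DeserializeObject<GenerateAnswerResponse>(jsonString);
+```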
+
+## Match questions only, by text
+
+By default, QnA Maker searches through questions and answers. If you want to search through questions only, to generate an answer, use the `RankerType=QuestionOnly` in the POST body of the GenerateAnswer request.
+
+You can search through the published KB by using `isTest=false`, or through the test KB by using `isTest=true`.
+
+```json
+{
+ "question": "Hi",
+ "top": 30,
+ "isTest": true,
+ "RankerType":"QuestionOnly"
+}
+
+```
+## Use QnA Maker with a bot in C#
+
+The bot framework provides access to the QnA Maker's properties with the [getAnswer API](/dotnet/api/microsoft.bot.builder.ai.qna.qnamaker.getanswersasync#Microsoft_Bot_Builder_AI_QnA_QnAMaker_GetAnswersAsync_Microsoft_Bot_Builder_ITurnContext_Microsoft_Bot_Builder_AI_QnA_QnAMakerOptions_System_Collections_Generic_Dictionary_System_String_System_String__System_Collections_Generic_Dictionary_System_String_System_Double__):
+
+```csharp
+using Microsoft.Bot.Builder.AI.QnA;
+var metadata = new Microsoft.Bot.Builder.AI.QnA.Metadata();
+var qnaOptions = new QnAMakerOptions();
+
+metadata.Name = Constants.MetadataName.Intent;
+metadata.Value = topIntent;
+qnaOptions.StrictFilters = new Microsoft.Bot.Builder.AI.QnA.Metadata[] { metadata };
+qnaOptions.Top = Constants.DefaultTop;
+qnaOptions.ScoreThreshold = 0.3F;
+var response = await _services.QnAServices[QnAMakerKey].GetAnswersAsync(turnContext, qnaOptions);
+```
+
+The previous code requested only answers with a score of 30% or higher.
+
+## Use QnA Maker with a bot in Node.js
+
+The bot framework provides access to the QnA Maker's properties with the [getAnswer API](/javascript/api/botbuilder-ai/qnamaker?preserve-view=true&view=botbuilder-ts-latest#generateanswer-stringundefined--number--number-):
+
+```javascript
+const { QnAMaker } = require('botbuilder-ai');
+this.qnaMaker = new QnAMaker(endpoint);
+
+// Default QnAMakerOptions
+var qnaMakerOptions = {
+ ScoreThreshold: 0.30,
+ Top: 3
+};
+var qnaResults = await this.qnaMaker.getAnswers(stepContext.context, qnaMakerOptions);
+```
+
+The previous code requested only answers with a score of 30% or higher.
+
+## Get precise answers with GenerateAnswer API
+
+The precise answer feature is offered only with the QnA Maker managed version.
+
+## Common HTTP errors
+
+|Code|Explanation|
+|:--|--|
+|2xx|Success|
+|400|The request's parameters are incorrect, meaning the required parameters are missing, malformed, or too large|
+|400|The request's body is incorrect, meaning the JSON is missing, malformed, or too large|
+|401|Invalid key|
+|403|Forbidden - you do not have correct permissions|
+|404|KB doesn't exist|
+|410|This API is deprecated and is no longer available|
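+
+As an illustrative sketch only, a client that calls GenerateAnswer directly could surface these codes with logic along the following lines. The messages are example text, not responses defined by the service.
+
+```csharp
+using System.Net;
+using System.Net.Http;
+
+public static class QnaErrorHelper
+{
+    // Map common GenerateAnswer status codes to a short diagnostic hint.
+    public static string Describe(HttpResponseMessage response)
+    {
+        if (response.IsSuccessStatusCode)
+        {
+            return "Success.";
+        }
+
+        switch (response.StatusCode)
+        {
+            case HttpStatusCode.BadRequest:
+                return "Check the request parameters and the JSON body.";
+            case HttpStatusCode.Unauthorized:
+                return "Check the endpoint key in the Authorization header.";
+            case HttpStatusCode.Forbidden:
+                return "The caller doesn't have the required permissions.";
+            case HttpStatusCode.NotFound:
+                return "The knowledge base ID wasn't found.";
+            case HttpStatusCode.Gone:
+                return "This API is deprecated and no longer available.";
+            default:
+                return $"Unexpected status code: {(int)response.StatusCode}.";
+        }
+    }
+}
+```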
+
+## Next steps
+
+The **Publish** page also provides information to [generate an answer](../quickstarts/get-answer-from-knowledge-base-using-url-tool.md) with Postman or cURL.
+
+> [!div class="nextstepaction"]
+> [Get analytics on your knowledge base](../how-to/get-analytics-knowledge-base.md)
+> [!div class="nextstepaction"]
+> [Confidence score](../concepts/confidence-score.md)
ai-services Multi Turn https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/qnamaker/How-To/multi-turn.md
+
+ Title: Multi-turn conversations - QnA Maker
+description: Use prompts and context to manage the multiple turns, known as multi-turn, for your bot from one question to another. Multi-turn is the ability to have a back-and-forth conversation where the previous question's context influences the next question and answer.
+++++ Last updated : 11/18/2021++
+# Use follow-up prompts to create multiple turns of a conversation
+
+Use follow-up prompts and context to manage the multiple turns, known as _multi-turn_, for your bot from one question to another.
+
+To see how multi-turn works, view the following demonstration video:
+
+[![Multi-turn conversation in QnA Maker](../media/conversational-context/youtube-video.png)](https://aka.ms/multiturnexample)
++
+## What is a multi-turn conversation?
+
+Some questions can't be answered in a single turn. When you design your client application (chat bot) conversations, a user might ask a question that needs to be filtered or refined to determine the correct answer. You make this flow through the questions possible by presenting the user with *follow-up prompts*.
+
+When a user asks a question, QnA Maker returns the answer _and_ any follow-up prompts. This response allows you to present the follow-up questions as choices.
+
+> [!CAUTION]
+> Multi-turn prompts are not extracted from FAQ documents. If you need multi-turn extraction, remove the question marks that designate the QnA pairs as FAQs.
+
+## Example multi-turn conversation with chat bot
+
+With multi-turn, a chat bot manages a conversation with a user to determine the final answer, as shown in the following image:
+
+![A multi-turn dialog with prompts that guide a user through a conversation](../media/conversational-context/conversation-in-bot.png)
+
+In the preceding image, a user has started a conversation by entering **My account**. The knowledge base has three linked question-and-answer pairs. To refine the answer, the user selects one of the three choices in the knowledge base. The question (#1), has three follow-up prompts, which are presented in the chat bot as three options (#2).
+
+When the user selects an option (#3), the next list of refining options (#4) is presented. This sequence continues (#5) until the user determines the correct, final answer (#6).
+
+### Use multi-turn in a bot
+
+After publishing your KB, you can select the **Create Bot** button to deploy your QnA Maker bot to Azure AI Bot Service. The prompts will appear in the chat clients that you have enabled for your bot.
+
+## Create a multi-turn conversation from a document's structure
+
+When you create a knowledge base, the **Populate your KB** section displays an **Enable multi-turn extraction from URLs, .pdf or .docx files** check box.
+
+![Check box for enabling multi-turn extraction](../media/conversational-context/enable-multi-turn.png)
+
+When you select this option, QnA Maker extracts the hierarchy present in the document structure. The hierarchy is converted into follow-up prompts, and the root of the hierarchy serves as the parent QnA. In some documents, the root of the hierarchy doesn't have content that could serve as an answer. You can provide the 'Default Answer Text' to be used as a substitute answer text when extracting such hierarchies.
+
+Multi-turn structure can be inferred only from URLs, PDF files, or DOCX files. For an example of structure, view an image of a [Microsoft Surface user manual PDF file](https://github.com/Azure-Samples/cognitive-services-sample-data-files/blob/master/qna-maker/data-source-formats/product-manual.pdf).
++
+### Building your own multi-turn document
+
+If you are creating a multi-turn document, keep in mind the following guidelines:
+
+* Use headings and subheadings to denote hierarchy. For example, use an H1 to denote the parent QnA and an H2 to denote the QnA that should be taken as a prompt. Use smaller heading sizes to denote subsequent hierarchy. Don't use style, color, or some other mechanism to imply structure in your document; QnA Maker won't extract the multi-turn prompts.
+
+* The first character of a heading must be capitalized.
+
+* Do not end a heading with a question mark, `?`.
+
+* You can use the [sample document](https://github.com/Azure-Samples/cognitive-services-sample-data-files/blob/master/qna-maker/data-source-formats/multi-turn.docx) as an example to create your own multi-turn document.
+
+### Adding files to a multi-turn KB
+
+When you add a hierarchical document, QnA Maker determines follow-up prompts from the structure to create conversational flow.
+
+1. In QnA Maker, select an existing knowledge base, which was created with **Enable multi-turn extraction from URLs, .pdf or .docx files** enabled.
+1. Go to the **Settings** page, select the file or URL to add.
+1. **Save and train** the knowledge base.
+
+> [!Caution]
+> Support for using an exported TSV or XLS multi-turn knowledge base file as a data source for a new or empty knowledge base isn't supported. You need to **Import** that file type, from the **Settings** page of the QnA Maker portal, in order to add exported multi-turn prompts to a knowledge base.
+
+## Create knowledge base with multi-turn prompts with the Create API
+
+You can create a knowledge base with multi-turn prompts by using the [QnA Maker Create API](/rest/api/cognitiveservices/qnamaker/knowledgebase/create). The prompts are added in the `context` property's `prompts` array.
+
+## Show questions and answers with context
+
+Reduce the displayed question-and-answer pairs to only those pairs with contextual conversations.
+
+Select **View options**, and then select **Show context**. The list displays question-and-answer pairs that contain follow-up prompts.
+
+![Filter question-and-answer pairs by contextual conversations](../media/conversational-context/filter-question-and-answers-by-context.png)
+
+The multi-turn context is displayed in the first column.
++
+In the preceding image, **#1** indicates bold text in the column, which signifies the current question. The parent question is the top item in the row. Any questions below it are the linked question-and-answer pairs. These items are selectable, so that you can immediately go to the other context items.
+
+## Add an existing question-and-answer pair as a follow-up prompt
+
+The original question, **My account**, has follow-up prompts, such as **Accounts and signing in**.
+
+![The "Accounts and signing in" answers and follow-up prompts](../media/conversational-context/detected-and-linked-follow-up-prompts.png)
+
+Add a follow-up prompt to an existing question-and-answer pair that isn't currently linked. Because the question isn't linked to any question-and-answer pair, the current view setting needs to be changed.
+
+1. To link an existing question-and-answer pair as a follow-up prompt, select the row for the question-and-answer pair. For the Surface manual, search for **Sign out** to reduce the list.
+1. In the row for **Sign out**, in the **Answer** column, select **Add follow-up prompt**.
+1. In the fields in the **Follow-up prompt** pop-up window, enter the following values:
+
+ |Field|Value|
+ |--|--|
+ |Display text|Enter **Turn off the device**. This is custom text to display in the follow-up prompt.|
+ |Context-only| Select this check box. An answer is returned only if the question specifies context.|
+ |Link to answer|Enter **Use the sign-in screen** to find the existing question-and-answer pair.|
+
+1. One match is returned. Select this answer as the follow-up, and then select **Save**.
+
+ ![The "Follow-up prompt" page](../media/conversational-context/search-follow-up-prompt-for-existing-answer.png)
+
+1. After you've added the follow-up prompt, select **Save and train** in the top navigation.
+
+### Edit the display text
+
+When a follow-up prompt is created, and an existing question-and-answer pair is entered as the **Link to Answer**, you can enter new **Display text**. This text doesn't replace the existing question, and it doesn't add a new alternate question. It is separate from those values.
+
+1. To edit the display text, search for and select the question in the **Context** field.
+1. In the row for that question, select the follow-up prompt in the answer column.
+1. Select the display text you want to edit, and then select **Edit**.
+
+ ![The Edit command for the display text](../media/conversational-context/edit-existing-display-text.png)
+
+1. In the **Follow-up prompt** pop-up window, change the existing display text.
+1. When you're done editing the display text, select **Save**.
+1. In the top navigation bar, select **Save and train**.
+
+## Add a new question-and-answer pair as a follow-up prompt
+
+When you add a new question-and-answer pair to the knowledge base, each pair should be linked to an existing question as a follow-up prompt.
+
+1. On the knowledge base toolbar, search for and select the existing question-and-answer pair for **Accounts and signing in**.
+
+1. In the **Answer** column for this question, select **Add follow-up prompt**.
+1. Under **Follow-up prompt (PREVIEW)**, create a new follow-up prompt by entering the following values:
+
+ |Field|Value|
+ |--|--|
+ |Display text|*Create a Windows Account*. The custom text to display in the follow-up prompt.|
+ |Context-only|Select this check box. This answer is returned only if the question specifies context.|
+ |Link to answer|Enter the following text as the answer:<br>*[Create](https://account.microsoft.com/) a Windows account with a new or existing email account*.<br>When you save and train the database, this text will be converted. |
+ |||
+
+ ![Create a new prompt question and answer](../media/conversational-context/create-child-prompt-from-parent.png)
+
+1. Select **Create new**, and then select **Save**.
+
+ This action creates a new question-and-answer pair and links the selected question as a follow-up prompt. The **Context** column, for both questions, indicates a follow-up prompt relationship.
+
+1. Select **View options**, and then select [**Show context (PREVIEW)**](#show-questions-and-answers-with-context).
+
+ The new question shows how it's linked.
+
+ ![Create a new follow-up prompt](../media/conversational-context/new-qna-follow-up-prompt.png)
+
+ The parent question displays a new question as one of its choices.
+
+ :::image type="content" source="../media/conversational-context/child-prompt-created.png" alt-text="Screenshot shows the Context column, for both questions, indicates a follow-up prompt relationship." lightbox="../media/conversational-context/child-prompt-created.png":::
+
+1. After you've added the follow-up prompt, select **Save and train** in the top navigation bar.
+
+<a name="enable-multi-turn-during-testing-of-follow-up-prompts"></a>
+
+## View multi-turn during testing of follow-up prompts
+
+When you test the question with follow-up prompts in the **Test** pane, the response includes the follow-up prompts.
+
+![The response includes the follow-up prompts](../media/conversational-context/test-pane-with-question-having-follow-up-prompts.png)
+
+## A JSON request to return an initial answer and follow-up prompts
+
+Use the empty `context` object to request the answer to the user's question and include follow-up prompts.
+
+```JSON
+{
+ "question": "accounts and signing in",
+ "top": 10,
+ "userId": "Default",
+ "isTest": false,
+ "context": {}
+}
+```
+
+## A JSON response to return an initial answer and follow-up prompts
+
+The preceding section requested an answer and any follow-up prompts to **Accounts and signing in**. The response includes the prompt information, which is located at `answers[0].context`, and the text to display to the user.
+
+```JSON
+{
+ "answers": [
+ {
+ "questions": [
+ "Accounts and signing in"
+ ],
+ "answer": "**Accounts and signing in**\n\nWhen you set up your Surface, an account is set up for you. You can create additional accounts later for family and friends, so each person using your Surface can set it up just the way he or she likes. For more info, see All about accounts on Surface.com. \n\nThere are several ways to sign in to your Surface Pro 4: ",
+ "score": 100.0,
+ "id": 15,
+ "source": "product-manual.pdf",
+ "metadata": [],
+ "context": {
+ "isContextOnly": true,
+ "prompts": [
+ {
+ "displayOrder": 0,
+ "qnaId": 16,
+ "qna": null,
+ "displayText": "Use the sign-in screen"
+ }
+ ]
+ }
+ },
+ {
+ "questions": [
+ "Sign out"
+ ],
+ "answer": "**Sign out**\n\nHere's how to sign out: \n\n Go to Start, and right-click your name. Then select Sign out. ",
+ "score": 38.01,
+ "id": 18,
+ "source": "product-manual.pdf",
+ "metadata": [],
+ "context": {
+ "isContextOnly": true,
+ "prompts": [
+ {
+ "displayOrder": 0,
+ "qnaId": 16,
+ "qna": null,
+ "displayText": "Turn off the device"
+ }
+ ]
+ }
+ },
+ {
+ "questions": [
+ "Use the sign-in screen"
+ ],
+ "answer": "**Use the sign-in screen**\n\n1. \n\nTurn on or wake your Surface by pressing the power button. \n\n2. \n\nSwipe up on the screen or tap a key on the keyboard. \n\n3. \n\nIf you see your account name and account picture, enter your password and select the right arrow or press Enter on your keyboard. \n\n4. \n\nIf you see a different account name, select your own account from the list at the left. Then enter your password and select the right arrow or press Enter on your keyboard. ",
+ "score": 27.53,
+ "id": 16,
+ "source": "product-manual.pdf",
+ "metadata": [],
+ "context": {
+ "isContextOnly": true,
+ "prompts": []
+ }
+ }
+ ]
+}
+```
+
+The `prompts` array provides text in the `displayText` property and the `qnaId` value. You can show these answers as the next displayed choices in the conversation flow and then send the selected `qnaId` back to QnA Maker in the following request.
+
+<!--
+
+The `promptsToDelete` array provides the ...
+
+-->
+
+## A JSON request to return a non-initial answer and follow-up prompts
+
+Fill the `context` object to include the previous context.
+
+In the following JSON request, the current question is *Use Windows Hello to sign in* and the previous question was *accounts and signing in*.
+
+```JSON
+{
+ "question": "Use Windows Hello to sign in",
+ "top": 10,
+ "userId": "Default",
+ "isTest": false,
+ "qnaId": 17,
+ "context": {
+ "previousQnAId": 15,
+ "previousUserQuery": "accounts and signing in"
+ }
+}
+```
+
+## A JSON response to return a non-initial answer and follow-up prompts
+
+The QnA Maker _GenerateAnswer_ JSON response includes the follow-up prompts in the `context` property of the first item in the `answers` object:
+
+```JSON
+{
+ "answers": [
+ {
+ "questions": [
+ "Use Windows Hello to sign in"
+ ],
+ "answer": "**Use Windows Hello to sign in**\n\nSince Surface Pro 4 has an infrared (IR) camera, you can set up Windows Hello to sign in just by looking at the screen. \n\nIf you have the Surface Pro 4 Type Cover with Fingerprint ID (sold separately), you can set up your Surface sign you in with a touch. \n\nFor more info, see What is Windows Hello? on Windows.com. ",
+ "score": 100.0,
+ "id": 17,
+ "source": "product-manual.pdf",
+ "metadata": [],
+ "context": {
+ "isContextOnly": true,
+ "prompts": []
+ }
+ },
+ {
+ "questions": [
+ "Meet Surface Pro 4"
+ ],
+ "answer": "**Meet Surface Pro 4**\n\nGet acquainted with the features built in to your Surface Pro 4. \n\nHere's a quick overview of Surface Pro 4 features: \n\n\n\n\n\n\n\nPower button \n\n\n\n\n\nPress the power button to turn your Surface Pro 4 on. You can also use the power button to put it to sleep and wake it when you're ready to start working again. \n\n\n\n\n\n\n\nTouchscreen \n\n\n\n\n\nUse the 12.3" display, with its 3:2 aspect ratio and 2736 x 1824 resolution, to watch HD movies, browse the web, and use your favorite apps. \n\nThe new Surface G5 touch processor provides up to twice the touch accuracy of Surface Pro 3 and lets you use your fingers to select items, zoom in, and move things around. For more info, see Surface touchscreen on Surface.com. \n\n\n\n\n\n\n\nSurface Pen \n\n\n\n\n\nEnjoy a natural writing experience with a pen that feels like an actual pen. Use Surface Pen to launch Cortana in Windows or open OneNote and quickly jot down notes or take screenshots. \n\nSee Using Surface Pen (Surface Pro 4 version) on Surface.com for more info. \n\n\n\n\n\n\n\nKickstand \n\n\n\n\n\nFlip out the kickstand and work or play comfortably at your desk, on the couch, or while giving a hands-free presentation. \n\n\n\n\n\n\n\nWi-Fi and Bluetooth&reg; \n\n\n\n\n\nSurface Pro 4 supports standard Wi-Fi protocols (802.11a/b/g/n/ac) and Bluetooth 4.0. Connect to a wireless network and use Bluetooth devices like mice, printers, and headsets. \n\nFor more info, see Add a Bluetooth device and Connect Surface to a wireless network on Surface.com. \n\n\n\n\n\n\n\nCameras \n\n\n\n\n\nSurface Pro 4 has two cameras for taking photos and recording video: an 8-megapixel rear-facing camera with autofocus and a 5-megapixel, high-resolution, front-facing camera. Both cameras record video in 1080p, with a 16:9 aspect ratio. Privacy lights are located on the right side of both cameras. \n\nSurface Pro 4 also has an infrared (IR) face-detection camera so you can sign in to Windows without typing a password. For more info, see Windows Hello on Surface.com. \n\nFor more camera info, see Take photos and videos with Surface and Using autofocus on Surface 3, Surface Pro 4, and Surface Book on Surface.com. \n\n\n\n\n\n\n\nMicrophones \n\n\n\n\n\nSurface Pro 4 has both a front and a back microphone. Use the front microphone for calls and recordings. Its noise-canceling feature is optimized for use with Skype and Cortana. \n\n\n\n\n\n\n\nStereo speakers \n\n\n\n\n\nStereo front speakers provide an immersive music and movie playback experience. To learn more, see Surface sound, volume, and audio accessories on Surface.com. \n\n\n\n\n",
+ "score": 21.92,
+ "id": 3,
+ "source": "product-manual.pdf",
+ "metadata": [],
+ "context": {
+ "isContextOnly": true,
+ "prompts": [
+ {
+ "displayOrder": 0,
+ "qnaId": 4,
+ "qna": null,
+ "displayText": "Ports and connectors"
+ }
+ ]
+ }
+ },
+ {
+ "questions": [
+ "Use the sign-in screen"
+ ],
+ "answer": "**Use the sign-in screen**\n\n1. \n\nTurn on or wake your Surface by pressing the power button. \n\n2. \n\nSwipe up on the screen or tap a key on the keyboard. \n\n3. \n\nIf you see your account name and account picture, enter your password and select the right arrow or press Enter on your keyboard. \n\n4. \n\nIf you see a different account name, select your own account from the list at the left. Then enter your password and select the right arrow or press Enter on your keyboard. ",
+ "score": 19.04,
+ "id": 16,
+ "source": "product-manual.pdf",
+ "metadata": [],
+ "context": {
+ "isContextOnly": true,
+ "prompts": []
+ }
+ }
+ ]
+}
+```
+
+## Query the knowledge base with the QnA Maker ID
+
+If you are building a custom application, the response to the initial question returns any follow-up prompts and their associated `qnaId` values. Now that you have the ID, you can pass it in the follow-up prompt's request body. If the request body contains the `qnaId` and the `context` object (which contains the previous QnA Maker properties), then GenerateAnswer returns the exact question by ID, instead of using the ranking algorithm to find the answer by the question text.
+
+## Display order is supported in the Update API
+
+The [display text and display order](/rest/api/cognitiveservices/qnamaker/knowledgebase/update#promptdto), returned in the JSON response, is supported for editing by the [Update API](/rest/api/cognitiveservices/qnamaker/knowledgebase/update).
+
+## Add or delete multi-turn prompts with the Update API
+
+You can add or delete multi-turn prompts by using the [QnA Maker Update API](/rest/api/cognitiveservices/qnamaker/knowledgebase/update). The prompts are added in the `context` property's `promptsToAdd` array and deleted with the `promptsToDelete` array.
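+
+As a hedged sketch, the following Update API request body adds one prompt to the QnA pair with ID 15 and deletes the prompt that points to QnA pair 16. The wrapper property names (`update`, `qnaList`) and the prompt fields are assumptions drawn from the Update API reference and the `PromptDTO` shape shown earlier; the IDs are placeholders.
+
+```json
+{
+  "update": {
+    "qnaList": [
+      {
+        "id": 15,
+        "context": {
+          "promptsToAdd": [
+            { "displayOrder": 1, "displayText": "Turn off the device", "qnaId": 18 }
+          ],
+          "promptsToDelete": [ 16 ]
+        }
+      }
+    ]
+  }
+}
+```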
+
+## Export knowledge base for version control
+
+QnA Maker supports version control by including multi-turn conversation steps in the exported file.
+
+## Next steps
+
+* Learn more about contextual conversations from this [dialog sample](https://github.com/microsoft/BotBuilder-Samples/tree/main/archive/samples/csharp_dotnetcore/11.qnamaker) or learn more about [conceptual bot design for multi-turn conversations](/azure/bot-service/bot-builder-conversations).
ai-services Network Isolation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/qnamaker/How-To/network-isolation.md
+
+ Title: Network isolation
+description: Users can restrict public access to QnA Maker resources.
++++++ Last updated : 11/02/2021+++
+# Recommended settings for network isolation
+
+Follow the steps below to restrict public access to QnA Maker resources. Protect an Azure AI services resource from public access by [configuring the virtual network](../../cognitive-services-virtual-networks.md?tabs=portal).
++
+## Restrict access to App Service (QnA runtime)
+
+You can use the `CognitiveServicesManagement` service tag in the inbound rules of the App Service or ASE (App Service Environment) network security group to restrict inbound access. Check out more information about service tags in the [virtual network service tags article](../../../virtual-network/service-tags-overview.md).
+
+### Regular App Service
+
+1. Open the Cloud Shell (PowerShell) from the Azure portal.
+2. Run the following command in the PowerShell window at the bottom of the page:
+
+```ps
+Add-AzWebAppAccessRestrictionRule -ResourceGroupName "<resource group name>" -WebAppName "<app service name>" -Name "Azure AI services Tag" -Priority 100 -Action Allow -ServiceTag "CognitiveServicesManagement"
+```
+3. Verify the added access rule is present in the **Access Restrictions** section of the **Networking** tab:
+
+ > [!div class="mx-imgBorder"]
+ > [ ![Screenshot of access restriction rule]( ../media/network-isolation/access-restrictions.png) ]( ../media/network-isolation/access-restrictions.png#lightbox)
+
+4. To access the **Test pane** on the https://qnamaker.ai portal, add the **Public IP address of the machine** from where you want to access the portal. From the **Access Restrictions** page select **Add Rule**, and allow access to your client IP.
+
+ > [!div class="mx-imgBorder"]
+ > [ ![Screenshot of access restriction rule with the addition of public IP address]( ../media/network-isolation/public-address.png) ]( ../media/network-isolation/public-address.png#lightbox)
+
+### Outbound access from App Service
+
+The QnA Maker App Service requires outbound access to the following endpoints. Make sure they're added to the allow list if there are any restrictions on outbound traffic.
+- https://qnamakerstore.blob.core.windows.net
+- https://qnamaker-data.trafficmanager.net
+- https://qnamakerconfigprovider.trafficmanager.net
++
+### Configure App Service Environment to host QnA Maker App Service
+
+The App Service Environment (ASE) can be used to host the QnA Maker App Service instance. Follow the steps below:
+
+1. Create a [new Azure Cognitive Search Resource](https://portal.azure.com/#create/Microsoft.Search).
+2. Create an external ASE with App Service.
+ - Follow this [App Service quickstart](../../../app-service/environment/create-external-ase.md#create-an-ase-and-an-app-service-plan-together) for instructions. This process can take up to 1-2 hours.
+ - Finally, you will have an App Service endpoint that appears similar to: `https://<app service name>.<ASE name>.p.azurewebsites.net`.
+ - Example: `https://mywebsite.myase.p.azurewebsites.net`
+3. Add the following App service configurations:
+
+ | Name | Value |
+ |:|:-|
+ | PrimaryEndpointKey | `<app service name>-PrimaryEndpointKey` |
+ | AzureSearchName | `<Azure Cognitive Search Resource Name from step #1>` |
+ | AzureSearchAdminKey | `<Azure Cognitive Search Resource admin Key from step #1>`|
+ | QNAMAKER_EXTENSION_VERSION | `latest` |
+ | DefaultAnswer | `no answer found` |
+
+4. Add CORS origin "*" on the App Service to allow access to https://qnamaker.ai portal Test pane. **CORS** is located under the API header in the App Service pane.
+
+ > [!div class="mx-imgBorder"]
+ > [ ![Screenshot of CORS interface within App Service UI]( ../media/network-isolation/cross-orgin-resource-sharing.png) ]( ../media/network-isolation/cross-orgin-resource-sharing.png#lightbox)
+
+5. Create a QnA Maker Azure AI services instance (Microsoft.CognitiveServices/accounts) using Azure Resource Manager. The QnA Maker endpoint should be set to the App Service endpoint created above (`https://mywebsite.myase.p.azurewebsites.net`). Here is a [sample Azure Resource Manager template you can use for reference](https://github.com/pchoudhari/QnAMakerBackupRestore/tree/master/QnAMakerASEArmTemplate).
+
+### Related questions
+
+#### Can QnA Maker be deployed to an internal ASE?
+
+The main reason for using an external ASE is so the QnA Maker service backend (authoring APIs) can reach the App Service via the internet. However, you can still protect it by adding an inbound access restriction that allows only connections from addresses associated with the `CognitiveServicesManagement` service tag.
+
+If you still want to use an internal ASE, you need to expose that specific QnA Maker app in the ASE on a public domain via the app gateway DNS TLS/SSL cert. For more information, see this [article on Enterprise deployment of App Services](/azure/architecture/reference-architectures/enterprise-integration/ase-standard-deployment).
+
+## Restrict access to Cognitive Search resource
+
+The Cognitive Search instance can be isolated via a private endpoint after the QnA Maker resources have been created. Use the following steps to lock down access:
+
+1. Create a new [virtual network (VNet)](https://portal.azure.com/#create/Microsoft.VirtualNetwork-ARM) or use the existing VNet of the ASE (App Service Environment).
+2. Open the VNet resource, then under the **Subnets** tab, create two subnets: one for the App Service (**appservicesubnet**) and another (**searchservicesubnet**) for the Cognitive Search resource, without delegation.
+
+ > [!div class="mx-imgBorder"]
+ > [ ![Screenshot of virtual networks subnets UI interface]( ../media/network-isolation/subnets.png) ]( ../media/network-isolation/subnets.png#lightbox)
+
+3. In the **Networking** tab of the Cognitive Search service instance, switch endpoint connectivity from public to private. This operation is a long-running process and **can take up to 30 minutes** to complete.
+
+ > [!div class="mx-imgBorder"]
+ > [ ![Screenshot of networking UI with public/private toggle button]( ../media/network-isolation/private.png) ]( ../media/network-isolation/private.png#lightbox)
+
+4. Once the Search resource is switched to private, select **Add private endpoint**.
+ - **Basics tab**: make sure you are creating your endpoint in the same region as the Search resource.
+ - **Resource tab**: select the required Search resource of type `Microsoft.Search/searchServices`.
+
+ > [!div class="mx-imgBorder"]
+ > [ ![Screenshot of create a private endpoint UI window]( ../media/network-isolation/private-endpoint.png) ]( ../media/network-isolation/private-endpoint.png#lightbox)
+
+ - **Configuration tab**: use the VNet and subnet (**searchservicesubnet**) created in step 2. After that, in the **Private DNS integration** section, select the corresponding subscription and create a new private DNS zone called **privatelink.search.windows.net**.
+
+ > [!div class="mx-imgBorder"]
+ > [ ![Screenshot of create private endpoint UI window with subnet field populated]( ../media/network-isolation/subnet.png) ]( ../media/network-isolation/subnet.png#lightbox)
+
+5. Enable VNET integration for the regular App Service. You can skip this step for ASE, as that already has access to the VNET.
+ - Go to App Service **Networking** section, and open **VNet Integration**.
+ - Link to the dedicated App Service subnet (**appservicesubnet**) in the VNet created in step 2.
+
+ > [!div class="mx-imgBorder"]
+ > [ ![Screenshot of VNET integration UI]( ../media/network-isolation/integration.png) ]( ../media/network-isolation/integration.png#lightbox)
+
+[Create Private endpoints](../reference-private-endpoint.md) to the Azure Search resource.
+
+Follow the steps below to restrict public access to QnA Maker resources. Protect an Azure AI services resource from public access by [configuring the virtual network](../../cognitive-services-virtual-networks.md?tabs=portal).
+
+After you restrict access to the Azure AI services resource based on VNet, do the following to browse knowledge bases on the https://qnamaker.ai portal from your on-premises network or your local browser:
+- Grant access to [on-premises network](../../cognitive-services-virtual-networks.md?tabs=portal#configuring-access-from-on-premises-networks).
+- Grant access to your [local browser/machine](../../cognitive-services-virtual-networks.md?tabs=portal#managing-ip-network-rules).
+- Add the **public IP address of the machine** under the **Firewall** section of the **Networking** tab. By default, `portal.azure.com` shows the current browsing machine's public IP; select this entry, and then select **Save**.
+
+ > [!div class="mx-imgBorder"]
+ > [ ![Screenshot of firewall and virtual networks configuration UI]( ../media/network-isolation/firewall.png) ]( ../media/network-isolation/firewall.png#lightbox)
ai-services Query Knowledge Base With Metadata https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/qnamaker/How-To/query-knowledge-base-with-metadata.md
+
+ Title: Contextually filter by using metadata
+
+description: QnA Maker filters QnA pairs by metadata.
+++++++ Last updated : 11/09/2020+++
+# Filter Responses with Metadata
+
+QnA Maker lets you add metadata, in the form of key and value pairs, to your pairs of questions and answers. You can then use this information to filter results to user queries, and to store additional information that can be used in follow-up conversations.
++
+<a name="qna-entity"></a>
+
+## Store questions and answers with a QnA entity
+
+It's important to understand how QnA Maker stores the question and answer data. The following illustration shows a QnA entity:
+
+![Illustration of a QnA entity](../media/qnamaker-how-to-metadata-usage/qna-entity.png)
+
+Each QnA entity has a unique and persistent ID. You can use the ID to make updates to a particular QnA entity.
+
+## Use metadata to filter answers by custom metadata tags
+
+Adding metadata allows you to filter the answers by these metadata tags. Add the metadata column from the **View Options** menu. Add metadata to your knowledge base by selecting the metadata **+** icon to add a metadata pair. This pair consists of one key and one value.
+
+![Screenshot of adding metadata](../media/qnamaker-how-to-metadata-usage/add-metadata.png)
+
+<a name="filter-results-with-strictfilters-for-metadata-tags"></a>
+
+## Filter results with strictFilters for metadata tags
+
+Consider the user question "When does this hotel close?", where the intent is implied for the restaurant "Paradise."
+
+Because results are required only for the restaurant "Paradise", you can set a filter in the GenerateAnswer call on the metadata "Restaurant Name". The following example shows this:
+
+```json
+{
+ "question": "When does this hotel close?",
+ "top": 1,
+ "strictFilters": [ { "name": "restaurant", "value": "paradise"}]
+}
+```
+
+## Filter by source
+
+If you have multiple content sources in your knowledge base and want to limit the results to a particular set of sources, use the reserved keyword `source_name_metadata`, as shown below.
+
+```json
+"strictFilters": [
+ {
+ "name": "category",
+ "value": "api"
+ },
+ {
+ "name": "source_name_metadata",
+ "value": "boby_brown_docx"
+ },
+ {
+ "name": "source_name_metadata",
+ "value": "chitchat.tsv"
+ }
+]
+```
+++
+### Logical AND by default
+
+To combine several metadata filters in the query, add the additional metadata filters to the array of the `strictFilters` property. By default, the values are logically combined (AND). A logical AND requires all filters to match the QnA pairs in order for a pair to be returned in the answer.
+
+This is equivalent to using the `strictFiltersCompoundOperationType` property with the value of `AND`.
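+
+For example, the following request body (a sketch that reuses the metadata values shown elsewhere in this article) returns an answer only when a QnA pair carries both the `restaurant` and `location` metadata values:
+
+```json
+{
+  "question": "When does this hotel close?",
+  "top": 1,
+  "strictFilters": [
+    { "name": "restaurant", "value": "paradise" },
+    { "name": "location", "value": "secunderabad" }
+  ],
+  "strictFiltersCompoundOperationType": "AND"
+}
+```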
+
+### Logical OR using strictFiltersCompoundOperationType property
+
+When combining several metadata filters, if you are only concerned with one or some of the filters matching, use the `strictFiltersCompoundOperationType` property with the value of `OR`.
+
+This allows your knowledge base to return answers when any filter matches but won't return answers that have no metadata.
+
+```json
+{
+ "question": "When do facilities in this hotel close?",
+ "top": 1,
+ "strictFilters": [
+ { "name": "type","value": "restaurant"},
+ { "name": "type", "value": "bar"},
+ { "name": "type", "value": "poolbar"}
+ ],
+ "strictFiltersCompoundOperationType": "OR"
+}
+```
+
+### Metadata examples in quickstarts
+
+Learn more about metadata in the QnA Maker portal quickstart for metadata:
+* [Authoring - add metadata to QnA pair](../quickstarts/add-question-metadata-portal.md#add-metadata-to-filter-the-answers)
+* [Query prediction - filter answers by metadata](../quickstarts/get-answer-from-knowledge-base-using-url-tool.md)
+
+<a name="keep-context"></a>
+
+## Use question and answer results to keep conversation context
+
+The response to the GenerateAnswer contains the corresponding metadata information of the matched question and answer pair. You can use this information in your client application to store the context of the previous conversation for use in later conversations.
+
+```json
+{
+ "answers": [
+ {
+ "questions": [
+ "What is the closing time?"
+ ],
+ "answer": "10.30 PM",
+ "score": 100,
+ "id": 1,
+ "source": "Editorial",
+ "metadata": [
+ {
+ "name": "restaurant",
+ "value": "paradise"
+ },
+ {
+ "name": "location",
+ "value": "secunderabad"
+ }
+ ]
+ }
+ ]
+}
+```
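+
+For example, a client application could store the returned metadata and reuse it as strict filters on the next GenerateAnswer call, so that later questions stay scoped to the same restaurant and location. The following is a sketch; the question text is a placeholder.
+
+```json
+{
+  "question": "Do you have parking?",
+  "top": 1,
+  "strictFilters": [
+    { "name": "restaurant", "value": "paradise" },
+    { "name": "location", "value": "secunderabad" }
+  ]
+}
+```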
+
+## Next steps
+
+> [!div class="nextstepaction"]
+> [Analyze your knowledgebase](../How-to/get-analytics-knowledge-base.md)
ai-services Set Up Qnamaker Service Azure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/qnamaker/How-To/set-up-qnamaker-service-azure.md
+
+ Title: Set up a QnA Maker service - QnA Maker
+description: Before you can create any QnA Maker knowledge bases, you must first set up a QnA Maker service in Azure. Anyone with authorization to create new resources in a subscription can set up a QnA Maker service.
++++++ Last updated : 09/14/2021++
+# Manage QnA Maker resources
+
+Before you can create any QnA Maker knowledge bases, you must first set up a QnA Maker service in Azure. Anyone with authorization to create new resources in a subscription can set up a QnA Maker service. If you are trying the Custom question answering feature, you need to create a Language resource and add the Custom question answering feature to it.
++
+A solid understanding of the following concepts is helpful before creating your resource:
+
+* [QnA Maker resources](../concepts/azure-resources.md)
+* [Authoring and publishing keys](../concepts/azure-resources.md#keys-in-qna-maker)
+
+## Create a new QnA Maker service
+
+This procedure creates the Azure resources needed to manage the knowledge base content. After you complete these steps, you'll find the _subscription_ keys on the **Keys** page for the resource in the Azure portal.
+
+1. Sign in to the Azure portal and [create a QnA Maker](https://portal.azure.com/#create/Microsoft.CognitiveServicesQnAMaker) resource.
+
+1. Select **Create** after you read the terms and conditions:
+
+ ![Create a new QnA Maker service](../media/qnamaker-how-to-setup-service/create-new-resource-button.png)
+
+1. In **QnA Maker**, select the appropriate tiers and regions:
+
+ ![Create a new QnA Maker service - pricing tier and regions](../media/qnamaker-how-to-setup-service/enter-qnamaker-info.png)
+
+ * In the **Name** field, enter a unique name to identify this QnA Maker service. This name also identifies the QnA Maker endpoint that your knowledge bases will be associated with.
+ * Choose the **Subscription** under which the QnA Maker resource will be deployed.
+ * Select the **Pricing tier** for the QnA Maker management services (portal and management APIs). See [more details about SKU pricing](https://aka.ms/qnamaker-pricing).
+ * Create a new **Resource group** (recommended) or use an existing one in which to deploy this QnA Maker resource. QnA Maker creates several Azure resources. When you create a resource group to hold these resources, you can easily find, manage, and delete these resources by the resource group name.
+ * Select a **Resource group location**.
+ * Choose the **Search pricing tier** of the Azure Cognitive Search service. If the Free tier option is unavailable (appears dimmed), it means you already have a free service deployed through your subscription. In that case, you'll need to start with the Basic tier. See [Azure Cognitive Search pricing details](https://azure.microsoft.com/pricing/details/search/).
+ * Choose the **Search location** where you want Azure Cognitive Search indexes to be deployed. Restrictions on where customer data must be stored will help determine the location you choose for Azure Cognitive Search.
+ * In the **App name** field, enter a name for your Azure App Service instance.
+ * App Service defaults to the standard (S1) tier. You can change the plan after creation. Learn more about [App Service pricing](https://azure.microsoft.com/pricing/details/app-service/).
+ * Choose the **Website location** where App Service will be deployed.
+
+ > [!NOTE]
+ > The **Search Location** can differ from the **Website Location**.
+
+ * Choose whether or not you want to enable **Application Insights**. If **Application Insights** is enabled, QnA Maker collects telemetry on traffic, chat logs, and errors.
+ * Choose the **App insights location** where the Application Insights resource will be deployed.
+ * For cost savings measures, you can [share](configure-QnA-Maker-resources.md#configure-qna-maker-to-use-different-cognitive-search-resource) some but not all Azure resources created for QnA Maker.
+
+1. After all the fields are validated, select **Create**. The process can take a few minutes to complete.
+
+1. After deployment is completed, you'll see the following resources created in your subscription:
+
+ ![Resource created a new QnA Maker service](../media/qnamaker-how-to-setup-service/resources-created.png)
+
+ The resource with the _Azure AI services_ type has your _subscription_ keys.
+
+## Upgrade Azure resources
+
+### Upgrade QnA Maker SKU
+
+When you want to have more questions and answers in your knowledge base, beyond your current tier, upgrade your QnA Maker service pricing tier.
+
+To upgrade the QnA Maker management SKU:
+
+1. Go to your QnA Maker resource in the Azure portal, and select **Pricing tier**.
+
+ ![QnA Maker resource](../media/qnamaker-how-to-upgrade-qnamaker/qnamaker-resource.png)
+
+1. Choose the appropriate SKU and press **Select**.
+
+ ![QnA Maker pricing](../media/qnamaker-how-to-upgrade-qnamaker/qnamaker-pricing-page.png)
+
+### Upgrade App Service
+
+When your knowledge base needs to serve more requests from your client app, upgrade your App Service pricing tier.
+
+You can [scale up](../../../app-service/manage-scale-up.md) or scale out App Service.
+
+Go to the App Service resource in the Azure portal, and select the **Scale up** or **Scale out** option as required.
+
+![QnA Maker App Service scale](../media/qnamaker-how-to-upgrade-qnamaker/qnamaker-appservice-scale.png)
+
+### Upgrade the Azure Cognitive Search service
+
+If you plan to have many knowledge bases, upgrade your Azure Cognitive Search service pricing tier.
+
+Currently, you can't perform an in-place upgrade of the Azure search SKU. However, you can create a new Azure search resource with the desired SKU, restore the data to the new resource, and then link it to the QnA Maker stack. To do this, follow these steps:
+
+1. Create a new Azure search resource in the Azure portal, and select the desired SKU.
+
+ ![QnA Maker Azure search resource](../media/qnamaker-how-to-upgrade-qnamaker/qnamaker-azuresearch-new.png)
+
+1. Restore the indexes from your original Azure search resource to the new one. See the [backup restore sample code](https://github.com/pchoudhari/QnAMakerBackupRestore).
+
+1. After the data is restored, go to your new Azure search resource, select **Keys**, and write down the **Name** and the **Admin key**:
+
+ ![QnA Maker Azure search keys](../media/qnamaker-how-to-upgrade-qnamaker/qnamaker-azuresearch-keys.png)
+
+1. To link the new Azure search resource to the QnA Maker stack, go to the QnA Maker App Service instance.
+
+ ![QnA Maker App Service instance](../media/qnamaker-how-to-upgrade-qnamaker/qnamaker-resource-list-appservice.png)
+
+1. Select **Application settings** and modify the settings in the **AzureSearchName** and **AzureSearchAdminKey** fields from step 3.
+
+ ![QnA Maker App Service setting](../media/qnamaker-how-to-upgrade-qnamaker/qnamaker-appservice-settings.png)
+
+1. Restart the App Service instance.
+
+ ![Restart of the QnA Maker App Service instance](../media/qnamaker-how-to-upgrade-qnamaker/qnamaker-appservice-restart.png)
+
+### Inactivity policy for free Search resources
+
+If you are not using a QnA Maker resource, you should remove all of its resources. If you created a free Search resource and don't remove unused resources, your knowledge base will stop working.
+
+Free Search resources are deleted after 90 days without receiving an API call.
+
+## Delete Azure resources
+
+If you delete any of the Azure resources used for your knowledge bases, the knowledge bases will no longer function. Before deleting any resources, make sure you export your knowledge bases from the **Settings** page.
+
+## Next steps
+
+Learn more about the [App service](../../../app-service/index.yml) and [Search service](../../../search/index.yml).
+
+> [!div class="nextstepaction"]
+> [Learn how to author with others](../index.yml)
ai-services Test Knowledge Base https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/qnamaker/How-To/test-knowledge-base.md
+
+ Title: How to test a knowledge base - QnA Maker
+description: Testing your QnA Maker knowledge base is an important part of an iterative process to improve the accuracy of the responses being returned. You can test the knowledge base through an enhanced chat interface that also allows you to make edits.
++++++ Last updated : 11/02/2021++
+# Test your knowledge base in QnA Maker
+
+Testing your QnA Maker knowledge base is an important part of an iterative process to improve the accuracy of the responses being returned. You can test the knowledge base through an enhanced chat interface that also allows you to make edits.
++
+## Interactively test in QnA Maker portal
+
+1. Access your knowledge base by selecting its name on the **My knowledge bases** page.
+1. To access the Test slide-out panel, select **Test** in your application's top panel.
+1. Enter a query in the text box and select Enter.
+1. The best-matched answer from the knowledge base is returned as the response.
+
+### Clear test panel
+
+To clear all the entered test queries and their results from the test console, select **Start over** at the upper-left corner of the Test panel.
+
+### Close test panel
+
+To close the Test panel, select the **Test** button again. While the Test panel is open, you cannot edit the Knowledge Base contents.
+
+### Inspect score
+
+You inspect details of the test result in the Inspect panel.
+
+1. With the Test slide-out panel open, select **Inspect** for more details on that response.
+
+ ![Inspect responses](../media/qnamaker-how-to-test-knowledge-bases/inspect.png)
+
+2. The Inspection panel appears. The panel includes the top scoring intent as well as any identified entities. The panel shows the result of the selected utterance.
+
+### Correct the top scoring answer
+
+If the top scoring answer is incorrect, select the correct answer from the list and select **Save and Train**.
+
+![Correct the top scoring answer](../media/qnamaker-how-to-test-knowledge-bases/choose-answer.png)
+
+### Add alternate questions
+
+You can add alternate forms of a question to a given answer. Type the alternate questions in the text box and select enter to add them. Select **Save and Train** to store the updates.
+
+![Add alternate questions](../media/qnamaker-how-to-test-knowledge-bases/add-alternate-question.png)
+
+### Add a new answer
+
+You can add a new answer if any of the existing answers that were matched are incorrect or the answer does not exist in the knowledge base (no good match found in the KB).
+
+At the bottom of the answers list, use the text box to enter a new answer and press enter to add it.
+
+Select **Save and Train** to persist this answer. A new question-answer pair has now been added to your knowledge base.
+
+> [!NOTE]
+> All edits to your knowledge base only get saved when you press the **Save and Train** button.
+
+### Test the published knowledge base
+
+You can test the published version of the knowledge base in the test pane. Once you have published the KB, select the **Published KB** box and send a query to get results from the published KB.
+
+![Test against a published KB](../media/qnamaker-how-to-test-knowledge-bases/test-against-published-knowledge-base.png)
+
+## Batch test with tool
+
+Use the batch testing tool when you want to:
+
+* determine top answer and score for a set of questions
+* validate expected answer for set of questions
+
+### Prerequisites
+
+* Azure subscription - [create one for free](https://azure.microsoft.com/free/cognitive-services/)
+* Either [create a QnA Maker service](../quickstarts/create-publish-knowledge-base.md) or use an existing service, which uses the English language.
+* Download the [multi-turn sample `.docx` file](https://github.com/Azure-Samples/cognitive-services-sample-data-files/blob/master/qna-maker/data-source-formats/multi-turn.docx)
+* Download the [batch testing tool](https://aka.ms/qnamakerbatchtestingtool), extract the executable file from the `.zip` file.
+
+### Sign into QnA Maker portal
+
+[Sign in](https://www.qnamaker.ai/) to the QnA Maker portal.
+
+### Create a new knowledge base from the multi-turn sample.docx file
+
+1. Select **Create a knowledge base** from the tool bar.
+1. Skip **Step 1** because you should already have a QnA Maker resource, and move on to **Step 2** to select your existing resource information:
+ * Azure Active Directory ID
+ * Azure Subscription Name
+ * Azure QnA Service Name
+ * Language - the English language
+1. Enter the name `Multi-turn batch test quickstart` as the name of your knowledge base.
+
+1. In **Step 4**, configure the settings with the following table:
+
+ |Setting|Value|
+ |--|--|
+ |**Enable multi-turn extraction from URLs, .pdf or .docx files.**|Checked|
+ |**Default answer text**| `Batch test - default answer not found.`|
+ |**+ Add File**|Select the downloaded `.docx` file listing in the prerequisites.|
+ |**Chit-chat**|Select **Professional**|
+
+1. In **Step 5**, select **Create your KB**.
+
+ When the creation process finishes, the portal displays the editable knowledge base.
+
+### Save, train, and publish knowledge base
+
+1. Select **Save and train** from the toolbar to save the knowledge base.
+1. Select **Publish** from the toolbar then select **Publish** again to publish the knowledge base. Publishing makes the knowledge base available for queries from a public URL endpoint. When publishing is complete, save the host URL and endpoint key information shown on the **Publish** page.
+
+ |Required data| Example|
+ |--|--|
+ |Published Host|`https://YOUR-RESOURCE-NAME.azurewebsites.net`|
+ |Published Key|`XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX` (32 character string shown after `Endpoint` )|
+ |App ID|`xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx` (36 character string shown as part of `POST`) |
+
+### Create batch test file with question IDs
+
+In order to use the batch test tool, create a file named `batch-test-data-1.tsv` with a text editor. The file should be in UTF-8 format and it needs to have the following columns separated by a tab.
+
+|TSV input file fields|Notes|Example|
+|--|--|--|
+|Knowledge base ID|Your knowledge base ID found on the Publish page. Test several knowledge bases in the same service at one time in a single file by using different knowledge base IDs in a single file.|`xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx` (36 character string shown as part of `POST`) |
+|Question|The question text a user would enter. 1,000 character max.|`How do I sign out?`|
+|Metadata tags|optional|`topic:power` uses the `key:value` format|
+|Top parameter|optional|`25`|
+|Expected answer ID|optional|`13`|
+
+For this knowledge base, add three rows of just the two required columns to the file. The first column is your knowledge base ID and the second column should be the following list of questions:
+
+|Column 2 - questions|
+|--|
+|`Use Windows Hello to sign in`|
+|`Charge your Surface Pro 4`|
+|`Get to know Windows 10`|
+
+These questions are the exact wording from the knowledge base and should return 100 as the confidence score.
+
+Next, add a few questions, similar to these questions but not exactly the same on three more rows, using the same knowledge base ID:
+
+|Column 2 - questions|
+|--|
+|`What is Windows Hello?`|
+|`How do I charge the laptop?`|
+|`What features are in Windows 10?`|
+
+> [!CAUTION]
+> Make sure that each column is separated by a tab delimiter only. Leading or trailing spaces are added to the column data and will cause the program to throw exceptions when the type or size is incorrect.
+
+The batch test file, when opened in Excel, looks like the following image. The knowledge base ID has been replaced with `xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx` for security. For your own batch test, make sure the column displays your knowledge base ID.
+
+> [!div class="mx-imgBorder"]
+> ![Input first version of .tsv file from batch test](../media/batch-test/batch-test-1-input.png)
+
+### Test the batch file
+
+Run the batch testing program using the following CLI format at the command line.
+
+Replace `YOUR-RESOURCE-NAME` and `ENDPOINT-KEY` with your own values for service name and endpoint key. These values are found on the **Settings** page in the QnA Maker portal.
+
+```console
+batchtesting.exe batch-test-data-1.tsv https://YOUR-RESOURCE-NAME.azurewebsites.net ENDPOINT-KEY out.tsv
+```
+The test completes and generates the `out.tsv` file:
+
+> [!div class="mx-imgBorder"]
+> ![Output first version of .tsv file from batch test](../media/batch-test/batch-test-1-output.png)
+
+The knowledge base ID has been replaced with `xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx` for security. For your own batch test, the column displays your knowledge base ID.
+
+The confidence score in the fourth column of the test output shows that the top three questions returned a score of 100 as expected, because each question is exactly the same as it appears in the knowledge base. The last three questions, with new wording, do not return 100 as the confidence score. To increase the score, both for the test and for your users, you need to add more alternate questions to the knowledge base.
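+
+One way to add those alternate questions outside of the portal is with the Update API. The following is a hedged sketch: the `update.qnaList` wrapper and the `questions.add` array are assumptions drawn from the Update API reference, and the IDs are placeholders that you would replace with the IDs from your exported knowledge base.
+
+```json
+{
+  "update": {
+    "qnaList": [
+      {
+        "id": 17,
+        "questions": {
+          "add": [ "What is Windows Hello?" ],
+          "delete": []
+        }
+      },
+      {
+        "id": 16,
+        "questions": {
+          "add": [ "How do I charge the laptop?" ],
+          "delete": []
+        }
+      }
+    ]
+  }
+}
+```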
+
+### Testing with the optional fields
+
+Once you understand the format and process, you can generate a test file to run against your knowledge base from a source of data such as chat logs.
+
+Because the data source and process are automated, the test file can be run many times with different settings to determine the correct values.
+
+For example, if you have a chat log and you want to determine which chat log text applies to which metadata fields, create a test file and set the metadata fields for every row. Run the test, then review the rows that match the metadata. Generally, the matches should be positive, but you should review the results for false positives. A false positive is a row that matches the metadata but based on the text, it shouldn't match.
+
+### Using optional fields in the input batch test file
+
+Use the following chart to understand how to find the field values for optional data.
+
+|Column number|Optional column|Data location|
+|--|--|--|
+|3|metadata|Export existing knowledge base for existing `key:value` pairs.|
+|4|top|Default value of `25` is recommended.|
+|5|Question and answer set ID|Export existing knowledge base for ID values. Also notice the IDs were returned in the output file.|
+
+### Add metadata to the knowledge base
+
+1. In the QnA portal, on the **Edit** page, add metadata of `topic:power` to the following questions:
+
+ |Questions|
+ |--|
+ |Charge your Surface Pro 4|
+ |Check the battery level|
+
+ Two QnA pairs have the metadata set.
+
+ > [!TIP]
+ > To see the metadata and QnA IDs of each set, export the knowledge base. Select the **Settings** page, then select **Export** as a `.xls` file. Find this downloaded file and open with Excel reviewing for metadata and ID.
+
+1. Select **Save and train**, then select the **Publish** page, then select the **Publish** button. These actions make the change available to the batch test. Download the knowledge base from the **Settings** page.
+
+ The downloaded file has the correct format for the metadata and the correct question and answer set ID. Use these fields in the next section.
+
+ > [!div class="mx-imgBorder"]
+ > ![Exported knowledge base with metadata](../media/batch-test/exported-knowledge-base-with-metadata.png)
+
+### Create a second batch test
+
+There are two main scenarios for batch testing:
+* **Process chat log files** - Determine the top answer for a previously unseen question. The most common situation is when you need to process a log file of queries, such as a chat bot's user questions. Create a batch test file with only the required columns. The test returns the top answer for each question. That doesn't mean the top answer is the correct answer. Once you complete this test, move on to the validation test.
+* **Validation test** - Validate the expected answer. This test requires that all the questions and matching expected answers in the batch test have been validated. This may require some manual effort.
+
+The following procedure assumes the scenario is to process chat logs.
+
+1. Create a new batch test file to include optional data, `batch-test-data-2.tsv`. Add the six rows from the original batch test input file, then add the metadata, top, and QnA pair ID for each row.
+
+ To simulate the automated process of checking new text from chat logs against the knowledge base, set the metadata for each column to the same value: `topic:power`.
+
+ > [!div class="mx-imgBorder"]
+ > ![Input second version of .tsv file from batch test](../media/batch-test/batch-test-2-input.png)
+
+1. Run the test again, changing the input and output file names to indicate it is the second test.
+
+ > [!div class="mx-imgBorder"]
+ > ![Output second version of .tsv file from batch test](../media/batch-test/batch-test-2-output.png)
+
+### Test results and an automated test system
+
+This test output file can be parsed as part of an automated continuous test pipeline.
+
+This specific test output should be read as follows: each row was filtered with metadata, and the rows that didn't match the metadata in the knowledge base returned the default answer ("no good match found in kb"). The rows that did match returned the QnA ID and score.
+
+All rows returned the label of incorrect because no row matched the answer ID expected.
+
+You should be able to see with these results that you can take a chat log and use the text as the query of each row. Without knowing anything about the data, the results tell you a lot about the data that you can then use moving forward:
+
+* metadata
+* QnA ID
+* score
+
+Was filtering with metadata a good idea for the test? Yes and no. The test system should create test files for each metadata pair as well as a test with no metadata pairs.
+
+### Clean up resources
+
+If you aren't going to continue testing the knowledge base, delete the batch file tool and the test files.
+
+If you're not going to continue to use this knowledge base, delete the knowledge base with the following steps:
+
+1. In the QnA Maker portal, select **My Knowledge bases** from the top menu.
+1. In the list of knowledge bases, select the **Delete** icon on the row of the knowledge base of this quickstart.
+
+[Reference documentation about the tool](../reference-tsv-format-batch-testing.md) includes:
+
+* the command-line example of the tool
+* the format for TSV input and outfile files
+
+## Next steps
+
+> [!div class="nextstepaction"]
+> [Publish a knowledge base](../quickstarts/create-publish-knowledge-base.md)
ai-services Use Active Learning https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/qnamaker/How-To/use-active-learning.md
+
+ Title: Use active learning with knowledge base - QnA Maker
+description: Learn to improve the quality of your knowledge base with active learning. Review, accept or reject, add without removing or changing existing questions.
++++++ Last updated : 11/02/2021+++
+# Active learning
+
+The _Active learning suggestions_ feature allows you to improve the quality of your knowledge base by suggesting alternative questions, based on user-submissions, to your question and answer pair. You review those suggestions, either adding them to existing questions or rejecting them.
+
+Your knowledge base doesn't change automatically. In order for any change to take effect, you must accept the suggestions. These suggestions add questions but don't change or remove existing questions.
++
+## What is active learning?
+
+QnA Maker learns new question variations with implicit and explicit feedback.
+
+* [Implicit feedback](#how-qna-makers-implicit-feedback-works) - The ranker understands when a user question has multiple answers with scores that are very close and considers this as feedback. You don't need to do anything for this to happen.
+* [Explicit feedback](#how-you-give-explicit-feedback-with-the-train-api) - When multiple answers with little variation in scores are returned from the knowledge base, the client application asks the user which question is the correct question. The user's explicit feedback is sent to QnA Maker with the [Train API](../how-to/improve-knowledge-base.md#train-api).
+
+Both methods provide the ranker with similar queries that are clustered.
+
+## How active learning works
+
+Active learning is triggered based on the scores of the top few answers returned by QnA Maker. If the score differences between QnA pairs that match the query lie within a small range, then the query is considered a possible suggestion (as an alternate question) for each of the possible QnA pairs. Once you accept the suggested question for a specific QnA pair, it is rejected for the other pairs. You need to remember to save and train, after accepting suggestions.
+
+Active learning gives the best possible suggestions in cases where the endpoints are getting a reasonable quantity and variety of usage queries. When five or more similar queries are clustered, every 30 minutes, QnA Maker suggests the user-based questions to the knowledge base designer to accept or reject. All the suggestions are clustered together by similarity and top suggestions for alternate questions are displayed based on the frequency of the particular queries by end users.
+
+Once questions are suggested in the QnA Maker portal, you need to review and accept or reject those suggestions. There isn't an API to manage suggestions.
+
+## How QnA Maker's implicit feedback works
+
+QnA Maker's implicit feedback uses an algorithm to determine score proximity then makes active learning suggestions. The algorithm to determine proximity is not a simple calculation. The ranges in the following example are not meant to be fixed but should be used as a guide to understand the effect of the algorithm only.
+
+When a question's score is highly confident, such as 80%, the range of scores that are considered for active learning are wide, approximately within 10%. As the confidence score decreases, such as 40%, the range of scores decreases as well, approximately within 4%.
+
+In the following JSON response from a query to QnA Maker's GenerateAnswer, the scores for A1, A2, and A3 are close and would be considered for suggestions.
+
+```json
+{
+ "activeLearningEnabled": true,
+ "answers": [
+ {
+ "questions": [
+ "Q1"
+ ],
+ "answer": "A1",
+ "score": 80,
+ "id": 15,
+ "source": "Editorial",
+ "metadata": [
+ {
+ "name": "topic",
+ "value": "value"
+ }
+ ]
+ },
+ {
+ "questions": [
+ "Q2"
+ ],
+ "answer": "A2",
+ "score": 78,
+ "id": 16,
+ "source": "Editorial",
+ "metadata": [
+ {
+ "name": "topic",
+ "value": "value"
+ }
+ ]
+ },
+ {
+ "questions": [
+ "Q3"
+ ],
+ "answer": "A3",
+ "score": 75,
+ "id": 17,
+ "source": "Editorial",
+ "metadata": [
+ {
+ "name": "topic",
+ "value": "value"
+ }
+ ]
+ },
+ {
+ "questions": [
+ "Q4"
+ ],
+ "answer": "A4",
+ "score": 50,
+ "id": 18,
+ "source": "Editorial",
+ "metadata": [
+ {
+ "name": "topic",
+ "value": "value"
+ }
+ ]
+ }
+ ]
+}
+```
+
+QnA Maker won't know which answer is the best answer. Use the QnA Maker portal's list of suggestions to select the best answer and train again.
++
+## How you give explicit feedback with the Train API
+
+QnA Maker needs explicit feedback about which of the answers was the best answer. How the best answer is determined, is up to you and can include:
+
+* User feedback, selecting one of the answers.
+* Business logic, such as determining an acceptable score range.
+* A combination of both user feedback and business logic.
+
+Use the [Train API](/rest/api/cognitiveservices/qnamaker4.0/runtime/train) to send the correct answer to QnA Maker, after the user selects it.
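+
+A minimal sketch of the Train API request body is shown below. It sends one feedback record that pairs the user's original query with the QnA ID of the answer the user selected; the values are placeholders, and the property names follow the Train API reference.
+
+```json
+{
+  "feedbackRecords": [
+    {
+      "userId": "Default",
+      "userQuestion": "How do I sign out of my account?",
+      "qnaId": 18
+    }
+  ]
+}
+```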
+
+## Upgrade runtime version to use active learning
+
+Active Learning is supported in runtime version 4.4.0 and above. If your knowledge base was created on an earlier version, [upgrade your runtime](configure-QnA-Maker-resources.md#get-the-latest-runtime-updates) to use this feature.
+
+## Turn on active learning for alternate questions
+
+Active learning is off by default. Turn it on to see suggested questions. After you turn on active learning, you need to send information from the client app to QnA Maker. For more information, see [Architectural flow for using GenerateAnswer and Train APIs from a bot](improve-knowledge-base.md#architectural-flow-for-using-generateanswer-and-train-apis-from-a-bot).
+
+1. Select **Publish** to publish the knowledge base. Active learning queries are collected from the GenerateAnswer API prediction endpoint only. The queries to the Test pane in the QnA Maker portal do not affect active learning.
+
+1. To turn active learning on in the QnA Maker portal, go to the top-right corner, select your **Name**, go to [**Service Settings**](https://www.qnamaker.ai/UserSettings).
+
+ ![Turn on active learning's suggested question alternatives from the Service settings page. Select your user name in the top-right menu, then select Service Settings.](../media/improve-knowledge-base/Endpoint-Keys.png)
++
+1. Find the QnA Maker service then toggle **Active Learning**.
+
+ > [!div class="mx-imgBorder"]
+ > [![On the Service settings page, toggle on Active Learning feature. If you are not able to toggle the feature, you may need to upgrade your service.](../media/improve-knowledge-base/turn-active-learning-on-at-service-setting.png)](../media/improve-knowledge-base/turn-active-learning-on-at-service-setting.png#lightbox)
+
+ > [!Note]
+ > The exact version on the preceding image is shown as an example only. Your version may be different.
+
+ Once **Active Learning** is enabled, the knowledge base suggests new questions at regular intervals based on user-submitted questions. You can disable **Active Learning** by toggling the setting again.
+
+## Review suggested alternate questions
+
+[Review alternate suggested questions](improve-knowledge-base.md) on the **Edit** page of each knowledge base.
+
+## Next steps
+
+> [!div class="nextstepaction"]
+> [Create a knowledge base](./manage-knowledge-bases.md)
ai-services Using Prebuilt Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/qnamaker/How-To/using-prebuilt-api.md
+
+ Title: QnA Maker Prebuilt API
+
+description: Use the Prebuilt API to get answers over text
+++++++ Last updated : 05/05/2021++
+# Prebuilt question answering
+
+Prebuilt question answering provides users the capability to answer questions over a passage of text without having to create knowledge bases, maintain question and answer pairs, or incur the cost of underutilized infrastructure. This functionality is provided as an API and can be used to meet question-and-answering needs without having to learn the details of QnA Maker or provision additional storage.
++
+> [!NOTE]
+> This documentation does not apply to the latest release. To learn about using the Prebuilt API with the latest release consult the [question answering prebuilt API article](../../language-service/question-answering/how-to/prebuilt.md).
+
+Given a user query and a block of text or passage, the API returns an answer and a precise answer (if available).
+
+<a name="qna-entity"></a>
++
+## Example usage of Prebuilt question answering
+
+Imagine that you have one or more blocks of text from which you would like to get answers for a given question. Conventionally, you would have had to create as many content sources as the number of blocks of text. However, with Prebuilt question answering, you can query the blocks of text without having to define content sources in a knowledge base.
+
+Some other scenarios where the Prebuilt API can be used are:
+
+* You are developing an ebook reader app for end users that allows them to highlight text, enter a question, and find answers over the highlighted text
+* A browser extension that allows users to ask a question over the content being currently displayed on the browser page
+* A health bot that takes queries from users and provides answers based on the medical content that the bot identifies as most relevant to the user query
+
+Below is a sample request:
+
+## Sample Request
+```
+POST https://{Endpoint}/qnamaker/v5.0-preview.2/generateanswer
+```
+
+### Sample Query over a single block of text
+
+Request Body
+
+```json
+{
+ "question": "How long it takes to charge surface pro 4?",
+ "documents": [
+ {
+ "text": "### The basics #### Power and charging It takes two to four hours to charge the Surface Pro 4 battery fully from an empty state. It can take longer if youΓÇÖre using your Surface for power-intensive activities like gaming or video streaming while youΓÇÖre charging it. You can use the USB port on your Surface Pro 4 power supply to charge other devices, like a phone, while your Surface charges.",
+ "id": "doc1"
+ }
+ ],
+ "Language": "en"
+}
+```
+## Sample Response
+
+In the above request body, we query over a single block of text. A sample response received for the above query is shown below:
+
+```json
+{
+ "answers": [
+ {
+ "answer": "### The basics #### Power and charging It takes two to four hours to charge the Surface Pro 4 battery fully from an empty state. It can take longer if youΓÇÖre using your Surface for power-intensive activities like gaming or video streaming while youΓÇÖre charging it. You can use the USB port on your Surface Pro 4 power supply to charge other devices, like a phone, while your Surface charges.",
+ "answerSpan": {
+ "text": "two to four hours",
+ "score": 0.0,
+ "startIndex": 47,
+ "endIndex": 64
+ },
+ "score": 0.9599020481109619,
+ "id": "doc1",
+ "answerStartIndex": 0,
+ "answerEndIndex": 390
+ },
+ {
+ "answer": "It can take longer if youΓÇÖre using your Surface for power-intensive activities like gaming or video streaming while youΓÇÖre charging it. You can use the USB port on your Surface Pro 4 power supply to charge other devices, like a phone, while your Surface charges.",
+ "score": 0.06749606877565384,
+ "id": "doc1",
+ "answerStartIndex": 129,
+ "answerEndIndex": 390
+ },
+ {
+ "answer": "You can use the USB port on your Surface Pro 4 power supply to charge other devices, like a phone, while your Surface charges.",
+ "score": 0.011389964260160923,
+ "id": "doc1",
+ "answerStartIndex": 265,
+ "answerEndIndex": 390
+ }
+ ]
+}
+```
+We see that multiple answers are received as part of the API response. Each answer has a confidence score that indicates the overall relevance of the answer. You can use this confidence score to decide which answers to display for the query.
+
+## Prebuilt API Limits
+
+Visit the [Prebuilt API Limits](../limits.md#prebuilt-question-answering-limits) documentation.
+
+## Prebuilt API reference
+Visit the [Prebuilt API reference](/rest/api/cognitiveservices-qnamaker/qnamaker5.0preview2/prebuilt/generateanswer) documentation to understand the input and output parameters required for calling the API.
ai-services Language Support https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/qnamaker/Overview/language-support.md
+
+ Title: Language support - QnA Maker
+
+description: A list of the cultures and natural languages supported by QnA Maker for your knowledge base. Do not mix languages in the same knowledge base.
+++++++ Last updated : 11/09/2019++
+# Language support for a QnA Maker resource and knowledge bases
+
+This article describes the language support options for QnA Maker resources and knowledge bases.
++
+Language for the service is selected when you create the first knowledge base in the resource. All additional knowledge bases in the resource must be in the same language.
+
+The language determines the relevance of the results QnA Maker provides in response to user queries. The QnA Maker resource, and all the knowledge bases inside that resource, support a single language. The single language is necessary to provide the best answer results for a query.
+
+## Single language per resource
+
+Consider the following:
+
+* A QnA Maker service, and all its knowledge bases, support one language only.
+* The language is explicitly set when the first knowledge base of the service is created.
+* The language is determined from the files and URLs added when the knowledge base is created.
+* The language can't be changed for any other knowledge bases in the service.
+* The language is used by the Cognitive Search service (ranker #1) and the QnA Maker service (ranker #2) to generate the best answer to a query.
+
+## Supporting multiple languages in one QnA Maker resource
+
+This functionality is not supported in our current Generally Available (GA) stable release. Check out [question answering](../../language-service/question-answering/overview.md) to test out this functionality.
+
+## Supporting multiple languages in one knowledge base
+
+If you need to support a knowledge base system that includes several languages, you can:
+
+* Use the [Translator service](../../translator/translator-overview.md) to translate a question into a single language before sending the question to your knowledge base (see the sketch after this list). This allows you to focus on the quality of a single language and the quality of the alternate questions and answers.
+* Create a QnA Maker resource, and a knowledge base inside that resource, for every language. This allows you to manage separate alternate questions and answer text that is more nuanced for each language. This gives you much more flexibility but requires a much higher maintenance cost when the questions or answers change across all languages.
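+
+As a minimal sketch of the first approach, a client could call the Translator REST API (`POST https://api.cognitive.microsofttranslator.com/translate?api-version=3.0&to=en`, with your Translator key in the `Ocp-Apim-Subscription-Key` header) before querying the knowledge base. The question text below is a placeholder for illustration:
+
+```json
+[
+  { "Text": "¿Cuál es el tamaño máximo de una base de conocimientos?" }
+]
+```
+
+The translated text returned in the response (`translations[0].text`) is then sent to the knowledge base as the question.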
++
+## Languages supported
+
+The following list contains the languages supported for a QnA Maker resource.
+
+| Language |
+|--|
+| Arabic |
+| Armenian |
+| Bangla |
+| Basque |
+| Bulgarian |
+| Catalan |
+| Chinese_Simplified |
+| Chinese_Traditional |
+| Croatian |
+| Czech |
+| Danish |
+| Dutch |
+| English |
+| Estonian |
+| Finnish |
+| French |
+| Galician |
+| German |
+| Greek |
+| Gujarati |
+| Hebrew |
+| Hindi |
+| Hungarian |
+| Icelandic |
+| Indonesian |
+| Irish |
+| Italian |
+| Japanese |
+| Kannada |
+| Korean |
+| Latvian |
+| Lithuanian |
+| Malayalam |
+| Malay |
+| Norwegian |
+| Polish |
+| Portuguese |
+| Punjabi |
+| Romanian |
+| Russian |
+| Serbian_Cyrillic |
+| Serbian_Latin |
+| Slovak |
+| Slovenian |
+| Spanish |
+| Swedish |
+| Tamil |
+| Telugu |
+| Thai |
+| Turkish |
+| Ukrainian |
+| Urdu |
+| Vietnamese |
+
+## Query matching and relevance
+QnA Maker depends on [Azure Cognitive Search language analyzers](/rest/api/searchservice/language-support) for providing results.
+
+While the Azure Cognitive Search capabilities are on par for supported languages, QnA Maker has an additional ranker that sits above the Azure search results. In this ranker model, we use some special semantic and word-based features in the following languages.
+
+|Languages with additional ranker|
+|--|
+|Chinese|
+|Czech|
+|Dutch|
+|English|
+|French|
+|German|
+|Hungarian|
+|Italian|
+|Japanese|
+|Korean|
+|Polish|
+|Portuguese|
+|Spanish|
+|Swedish|
+
+This additional ranking is an internal working of QnA Maker's ranker.
+
+## Next steps
+
+> [!div class="nextstepaction"]
+> [Language selection](../index.yml)
ai-services Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/qnamaker/Overview/overview.md
+
+ Title: What is QnA Maker service?
+description: QnA Maker is a cloud-based NLP service that easily creates a natural conversational layer over your data. It can be used to find the most appropriate answer for any given natural language input, from your custom knowledge base (KB) of information.
++++++ Last updated : 11/19/2021
+keywords: "qna maker, low code chat bots, multi-turn conversations"
+++
+# What is QnA Maker?
+++
+QnA Maker is a cloud-based Natural Language Processing (NLP) service that allows you to create a natural conversational layer over your data. It is used to find the most appropriate answer for any input from your custom knowledge base (KB) of information.
+
+QnA Maker is commonly used to build conversational client applications, which include social media applications, chat bots, and speech-enabled desktop applications.
+
+QnA Maker doesn't store customer data. All customer data (question and answer pairs, and chat logs) is stored in the region where the customer deploys the dependent service instances. For more details on dependent services, see [here](../concepts/plan.md?tabs=v1).
+
+This documentation contains the following article types:
+
+* The [quickstarts](../quickstarts/create-publish-knowledge-base.md) are step-by-step instructions that let you make calls to the service and get results in a short period of time.
+* The [how-to guides](../how-to/set-up-qnamaker-service-azure.md) contain instructions for using the service in more specific or customized ways.
+* The [conceptual articles](../concepts/plan.md) provide in-depth explanations of the service's functionality and features.
+* The [tutorials](../tutorials/create-faq-bot-with-azure-bot-service.md) are longer guides that show you how to use the service as a component in broader business solutions.
+
+## When to use QnA Maker
+
+* **When you have static information** - Use QnA Maker when you have static information in your knowledge base of answers. This knowledge base is custom to your needs, which you've built with documents such as [PDFs and URLs](../concepts/data-sources-and-content.md).
+* **When you want to provide the same answer to a request, question, or command** - when different users submit the same question, the same answer is returned.
+* **When you want to filter static information based on meta-information** - add [metadata](../how-to/metadata-generateanswer-usage.md) tags to provide additional filtering options relevant to your client application's users and the information. Common metadata information includes [chit-chat](../how-to/chit-chat-knowledge-base.md), content type or format, content purpose, and content freshness.
+* **When you want to manage a bot conversation that includes static information** - your knowledge base takes a user's conversational text or command and answers it. If the answer is part of a pre-determined conversation flow, represented in your knowledge base with [multi-turn context](../how-to/multi-turn.md), the bot can easily provide this flow.
+
+## What is a knowledge base?
+
+QnA Maker [imports your content](../concepts/plan.md) into a knowledge base of question and answer pairs. The import process extracts information about the relationship between the parts of your structured and semi-structured content to imply relationships between the question and answer pairs. You can edit these question and answer pairs or add new pairs.
+
+The content of the question and answer pair includes:
+* All the alternate forms of the question
+* Metadata tags used to filter answer choices during the search
+* Follow-up prompts to continue the search refinement
+
+![Example question and answer with metadata](../media/qnamaker-overview-learnabout/example-question-and-answer-with-metadata.png)
+
+After you publish your knowledge base, a client application sends a user's question to your endpoint. Your QnA Maker service processes the question and responds with the best answer.
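+
+As a sketch of that call, the client POSTs to the published endpoint (for example, `https://YOUR-RESOURCE-NAME.azurewebsites.net/qnamaker/knowledgebases/YOUR-KB-ID/generateAnswer`) with the endpoint key in the `Authorization: EndpointKey ...` header and a body like the following. The host name, knowledge base ID, and question are placeholders for illustration:
+
+```json
+{
+  "question": "How do I programmatically update my knowledge base?",
+  "top": 3
+}
+```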
+
+## Create a chat bot programmatically
+
+Once a QnA Maker knowledge base is published, a client application sends a question to your knowledge base endpoint and receives the results as a JSON response. A common client application for QnA Maker is a chat bot.
+
+![Ask a bot a question and get answer from knowledge base content](../media/qnamaker-overview-learnabout/bot-chat-with-qnamaker.png)
+
+|Step|Action|
+|:--|:--|
+|1|The client application sends the user's _question_ (text in their own words), "How do I programmatically update my Knowledge Base?" to your knowledge base endpoint.|
+|2|QnA Maker uses the trained knowledge base to provide the correct answer and any follow-up prompts that can be used to refine the search for the best answer. QnA Maker returns a JSON-formatted response (a sample response follows this table).|
+|3|The client application uses the JSON response to make decisions about how to continue the conversation. These decisions can include showing the top answer and presenting more choices to refine the search for the best answer. |
+|||
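+
+The following is a sketch of what such a JSON response can look like. The answer text, score, source, and metadata values are placeholders, and the exact set of fields depends on your service version:
+
+```json
+{
+  "answers": [
+    {
+      "questions": [ "How do I programmatically update my Knowledge Base?" ],
+      "answer": "You can use the QnA Maker REST APIs to update your knowledge base.",
+      "score": 82.5,
+      "id": 1,
+      "source": "Editorial",
+      "metadata": [ { "name": "category", "value": "api" } ],
+      "context": { "prompts": [] }
+    }
+  ]
+}
+```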
+
+## Build low code chat bots
+
+The QnA Maker portal provides the complete knowledge base authoring experience. You can import documents, in their current form, to your knowledge base. These documents (such as an FAQ, product manual, spreadsheet, or web page) are converted into question and answer pairs. Each pair is analyzed for follow-up prompts and connected to other pairs. The final _markdown_ format supports rich presentation including images and links.
+
+## High quality responses with layered ranking
+
+QnA Maker's system is a layered ranking approach. The data is stored in Azure search, which also serves as the first ranking layer. The top results from Azure search are then passed through QnA Maker's NLP re-ranking model to produce the final results and confidence score.
+
+## Multi-turn conversations
+
+QnA Maker provides multi-turn prompts and active learning to help you improve your basic question and answer pairs.
+
+**Multi-turn prompts** give you the opportunity to connect question and answer pairs. This connection allows the client application to provide a top answer and provides more questions to refine the search for a final answer.
+
+After the knowledge base receives questions from users at the published endpoint, QnA Maker applies **active learning** to these real-world questions to suggest changes to your knowledge base to improve the quality.
+
+## Development lifecycle
+
+QnA Maker provides authoring, training, and publishing along with collaboration permissions to integrate into the full development life cycle.
+
+> [!div class="mx-imgBorder"]
+> ![Conceptual image of development cycle](../media/qnamaker-overview-learnabout/development-cycle.png)
++
+## Complete a quickstart
+
+We offer quickstarts in the most popular programming languages, each designed to teach you basic design patterns and have you running code in less than 10 minutes. See the following list for the quickstart for each feature.
+
+* [Get started with QnA Maker client library](../quickstarts/quickstart-sdk.md)
+* [Get started with QnA Maker portal](../quickstarts/create-publish-knowledge-base.md)
+
+## Next steps
+QnA Maker provides everything you need to build, manage, and deploy your custom knowledge base.
+
+> [!div class="nextstepaction"]
+> [Review the latest changes](../whats-new.md)
ai-services Add Question Metadata Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/qnamaker/Quickstarts/add-question-metadata-portal.md
+
+ Title: "Add questions and answer in QnA Maker portal"
+description: This article shows how to add question and answer pairs with metadata so your users can find the right answer to their question.
++++++ Last updated : 05/26/2020+++
+# Add questions and answer with QnA Maker portal
+
+Once a knowledge base is created, add question and answer (QnA) pairs with metadata to filter the answer. The questions in the following table are about Azure service limits, but each pertains to a different Azure service.
++
+<a name="qna-table"></a>
+
+|Pair|Questions|Answer|Metadata|
+|--|--|--|--|
+|#1|`How large a knowledge base can I create?`<br><br>`What is the max size of a knowledge base?`<br><br>`How many GB of data can a knowledge base hold?` |`The size of the knowledge base depends on the SKU of Azure search you choose when creating the QnA Maker service. Read [here](../concepts/azure-resources.md) for more details.`|`service=qna_maker`<br>`link_in_answer=true`|
+|#2|`How many knowledge bases can I have for my QnA Maker service?`<br><br>`I selected an Azure Cognitive Search tier that holds 15 knowledge bases, but I can only create 14 - what is going on?`<br><br>`What is the connection between the number of knowledge bases in my QnA Maker service and the Azure Cognitive Search service size?` |`Each knowledge base uses 1 index, and all the knowledge bases share a test index. You can have N-1 knowledge bases where N is the number of indexes your Azure Cognitive Search tier supports.`|`service=search`<br>`link_in_answer=false`|
+
+Once metadata is added to a QnA pair, the client application can:
+
+* Request answers that only match certain metadata.
+* Receive all answers but post-process the answers depending on the metadata for each answer.
+
+## Prerequisites
+
+* Complete the [previous quickstart](./create-publish-knowledge-base.md)
+
+## Sign in to the QnA Maker portal
+
+1. Sign in to the [QnA Maker portal](https://www.qnamaker.ai).
+
+1. Select your existing knowledge base from the [previous quickstart](./create-publish-knowledge-base.md).
+
+## Add additional alternatively-phrased questions
+
+The current knowledge base has the QnA Maker troubleshooting QnA pairs. These pairs were created when the URL was added to the knowledge base during the creation process.
+
+When this URL was imported, only one question with one answer was created. In this procedure, add additional questions.
+
+1. From the **Edit** page, use the search textbox above the question and answer pairs to find the question `How large a knowledge base can I create?`
+
+1. In the **Question** column, select **+ Add alternative phrasing**, and then add each new phrasing provided in the following table.
+
+ |Alternative phrasing|
+ |--|
+ |`What is the max size of a knowledge base?`|
+ |`How many GB of data can a knowledge base hold?`|
+
+1. Select **Save and train** to retrain the knowledge base.
+
+1. Select **Test**, then enter a question that is close to one of the new alternative phrasings but isn't exactly the same wording:
+
+ `What GB size can a knowledge base be?`
+
+ The correct answer is returned in markdown format:
+
+ `The size of the knowledge base depends on the SKU of Azure search you choose when creating the QnA Maker service. Read [here](../concepts/azure-resources.md) for more details.`
+
+    If you select **Inspect** under the returned answer, you can see that more answers matched the question, but not with the same high level of confidence.
+
+    Do not add every possible combination of alternative phrasing. When you turn on QnA Maker's [active learning](../how-to/improve-knowledge-base.md), it finds the alternative phrasings that best help your knowledge base meet your users' needs.
+
+1. Select **Test** again to close the test window.
+
+## Add metadata to filter the answers
+
+Adding metadata to a question and answer pair allows your client application to request filtered answers. This filter is applied before the [first and second rankers](../concepts/query-knowledge-base.md#ranker-process) are applied.
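+
+For example, after the metadata in this section is added, a client could restrict results to pairs tagged `service` = `search` by including a `strictFilters` array in the GenerateAnswer request body, as in the following sketch (the question text and `top` value are illustrative):
+
+```json
+{
+  "question": "How many knowledge bases can I have?",
+  "top": 3,
+  "strictFilters": [
+    { "name": "service", "value": "search" }
+  ]
+}
+```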
+
+1. Add the second question and answer pair, without the metadata, from the [first table in this quickstart](#qna-table), then continue with the following steps.
+
+1. Select **View options**, then select **Show metadata**.
+
+1. For the QnA pair you just added, select **Add metadata tags**, then add the name of `service` and the value of `search`. It looks like this: `service:search`.
+
+1. Add another metadata tag with name of `link_in_answer` and value of `false`. It looks like this: `link_in_answer:false`.
+
+1. Search for the first question in the table, `How large a knowledge base can I create?`.
+
+1. Add metadata pairs for the same two metadata tags:
+
+ `link_in_answer` : `true`<br>
+ `service`: `qna_maker`
+
+ You now have two questions with the same metadata tags with different values.
+
+1. Select **Save and train** to retrain the knowledge base.
+
+1. Select **Publish** in the top menu to go to the publish page.
+1. Select the **Publish** button to publish the current knowledge base to the endpoint.
+1. After the knowledge base is published, continue to the next quickstart to learn how to generate an answer from your knowledge base.
+
+## What did you accomplish?
+
+You edited your knowledge base to support more questions and provided name/value pairs to support filtering during the search for the top answer or postprocessing, after the answer or answers are returned.
+
+## Clean up resources
+
+If you are not continuing to the next quickstart, delete the QnA Maker and Bot framework resources in the Azure portal.
+
+## Next steps
+
+> [!div class="nextstepaction"]
+> [Get answer with Postman or cURL](get-answer-from-knowledge-base-using-url-tool.md)
ai-services Create Publish Knowledge Base https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/qnamaker/Quickstarts/create-publish-knowledge-base.md
+
+ Title: "Quickstart: Create, train, and publish knowledge base - QnA Maker"
+description: You can create a QnA Maker knowledge base (KB) from your own content, such as FAQs or product manuals. This article includes an example of creating a QnA Maker knowledge base from a simple FAQ webpage, to answer questions.
++++++ Last updated : 11/02/2021+++
+# Quickstart: Create, train, and publish your QnA Maker knowledge base
++
+You can create a QnA Maker knowledge base (KB) from your own content, such as FAQs or product manuals. This article includes an example of creating a QnA Maker knowledge base from a simple FAQ webpage, to answer questions.
+
+## Prerequisites
+
+> [!div class="checklist"]
+> * If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/cognitive-services/) before you begin.
+> * A [QnA Maker resource](https://portal.azure.com/#create/Microsoft.CognitiveServicesQnAMaker) created in the Azure portal. Remember the Azure Active Directory ID, subscription, and QnA Maker resource name you selected when you created the resource.
+
+## Create your first QnA Maker knowledge base
+
+1. Sign in to the [QnAMaker.ai](https://QnAMaker.ai) portal with your Azure credentials.
+
+2. In the QnA Maker portal, select **Create a knowledge base**.
+
+3. On the **Create** page, skip **Step 1** if you already have your QnA Maker resource.
+
+    If you haven't created the service yet, select **Stable** and **Create a QnA service**. You are directed to the [Azure portal](https://portal.azure.com/#create/Microsoft.CognitiveServicesQnAMaker) to set up a QnA Maker service in your subscription. Remember the Azure Active Directory ID, subscription, and QnA resource name you selected when you created the resource.
+
+    When you are done creating the resource in the Azure portal, return to the QnA Maker portal, refresh the browser page, and continue to **Step 2**.
+
+4. In **Step 2**, select your Active Directory, subscription, service (resource), and the language for all knowledge bases created in the service.
+
+ :::image type="content" source="../media/qnamaker-create-publish-knowledge-base/qnaservice-selection.png" alt-text="Screenshot of selecting a QnA Maker service knowledge base":::
+
+5. In **Step 3**, name your knowledge base **My Sample QnA KB**.
+
+6. In **Step 4**, configure the settings as shown in the following table:
+
+ |Setting|Value|
+ |--|--|
+ |**Enable multi-turn extraction from URLs, .pdf or .docx files.**|Checked|
+ |**Multi-turn default text**| Select an option|
+ |**+ Add URL**|`https://www.microsoft.com/download/faq.aspx`|
+ |**Chit-chat**|Select **Professional**|
+
+7. In **Step 5**, select **Create your KB**.
+
+ The extraction process takes a few moments to read the document and identify questions and answers.
+
+ After QnA Maker successfully creates the knowledge base, the **Knowledge base** page opens. You can edit the contents of the knowledge base on this page.
+
+## Add a new question and answer set
+
+1. In the QnA Maker portal, on the **Edit** page, select **+ Add QnA pair** from the context toolbar.
+1. Add the following question:
+
+ `How many Azure services are used by a knowledge base?`
+
+1. Add the answer formatted with _markdown_:
+
+ ` * Azure AI QnA Maker service\n* Azure Cognitive Search\n* Azure web app\n* Azure app plan`
+
+ :::image type="content" source="../media/qnamaker-create-publish-knowledge-base/add-question-and-answer.png" alt-text="Add the question as text and the answer formatted with markdown.":::
+
+ The markdown symbol, `*`, is used for bullet points. The `\n` is used for a new line.
+
+ The **Edit** page shows the markdown. When you use the **Test** panel later, you will see the markdown displayed properly.
+
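+    When rendered, for example in the **Test** panel, the answer above appears as a bulleted list along these lines (shown here as plain text for illustration):
+
+    ```
+    * Azure AI QnA Maker service
+    * Azure Cognitive Search
+    * Azure web app
+    * Azure app plan
+    ```
+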
+## Save and train
+
+In the upper right, select **Save and train** to save your edits and train QnA Maker. Edits aren't kept unless they're saved.
+
+## Test the knowledge base
+
+1. In the QnA Maker portal, in the upper right, select **Test** to test that the changes you made took effect.
+2. Enter an example user query in the textbox.
+
+ `I want to know the difference between 32 bit and 64 bit Windows`
+
+ :::image type="content" source="../media/qnamaker-create-publish-knowledge-base/query-dialogue.png" alt-text="Enter an example user query in the textbox.":::
+
+3. Select **Inspect** to examine the response in more detail. The test window is used to test your changes to the knowledge base before publishing your knowledge base.
+
+4. Select **Test** again to close the **Test** panel.
+
+## Publish the knowledge base
+
+When you publish a knowledge base, the contents of your knowledge base move from the `test` index to a `prod` index in Azure search.
+
+![Screenshot of moving the contents of your knowledge base](../media/qnamaker-how-to-publish-kb/publish-prod-test.png)
+
+1. In the QnA Maker portal, select **Publish**. Then to confirm, select **Publish** on the page.
+
+ The QnA Maker service is now successfully published. You can use the endpoint in your application or bot code.
+
+ ![Screenshot of successful publishing](../media/qnamaker-create-publish-knowledge-base/publish-knowledge-base-to-endpoint.png)
+
+## Create a bot
+
+After publishing, you can create a bot from the **Publish** page:
+
+* You can create several bots quickly, all pointing to the same knowledge base, with different regions or pricing plans for the individual bots.
+* If you want only one bot for the knowledge base, use the **View all your bots on the Azure portal** link to view a list of your current bots.
+
+When you make changes to the knowledge base and republish, you don't need to take further action with the bot. It's already configured to work with the knowledge base, and works with all future changes to the knowledge base. Every time you publish a knowledge base, all the bots connected to it are automatically updated.
+
+1. In the QnA Maker portal, on the **Publish** page, select **Create bot**. This button appears only after you've published the knowledge base.
+
+ ![Screenshot of creating a bot](../media/qnamaker-create-publish-knowledge-base/create-bot-from-published-knowledge-base-page.png)
+
+1. A new browser tab opens for the Azure portal, with the Azure AI Bot Service's creation page. Configure the Azure AI Bot Service. The bot and QnA Maker can share the web app service plan, but can't share the web app. This means the **app name** for the bot must be different from the app name for the QnA Maker service.
+
+ * **Do**
+ * Change bot handle - if it is not unique.
+ * Select SDK Language. Once the bot is created, you can download the code to your local development environment and continue the development process.
+ * **Don't**
+ * Change the following settings in the Azure portal when creating the bot. They are pre-populated for your existing knowledge base:
+ * QnA Auth Key
+ * App service plan and location
++
+1. After the bot is created, open the **Bot service** resource.
+1. Under **Bot Management**, select **Test in Web Chat**.
+1. At the chat prompt of **Type your message**, enter:
+
+ `Azure services?`
+
+ The chat bot responds with an answer from your knowledge base.
+
+ :::image type="content" source="../media/qnamaker-create-publish-knowledge-base/test-web-chat.png" alt-text="Enter a user query into the test web chat.":::
+
+## What did you accomplish?
+
+You created a new knowledge base, added a public URL to the knowledge base, added your own QnA pair, trained, tested, and published the knowledge base.
+
+After publishing the knowledge base, you created a bot, and tested the bot.
+
+This was all accomplished in a few minutes without having to write any code or clean the content.
+
+## Clean up resources
+
+If you are not continuing to the next quickstart, delete the QnA Maker and Bot framework resources in the Azure portal.
+
+## Next steps
+
+> [!div class="nextstepaction"]
+> [Add questions with metadata](add-question-metadata-portal.md)
+
+For more information:
+
+* [Markdown format in answers](../reference-markdown-format.md)
+* QnA Maker [data sources](../concepts/data-sources-and-content.md).
ai-services Get Answer From Knowledge Base Using Url Tool https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/qnamaker/Quickstarts/get-answer-from-knowledge-base-using-url-tool.md
+
+ Title: Use URL tool to get answer from knowledge base - QnA Maker
+
+description: This article walks you through getting an answer from your knowledge base using a URL test tool such as cURL or Postman.
++++++
+zone_pivot_groups: URL-test-interface
+ Last updated : 07/16/2020+++
+# Get an answer from a QnA Maker knowledge base
++
+> [!NOTE]
+> This documentation does not apply to the latest release. To learn about using the latest question answering APIs consult the [question answering authoring guide](../../language-service/question-answering/how-to/authoring.md).
++++++++
+## Next steps
+
+> [!div class="nextstepaction"]
+> [Test knowledge base with batch file](../how-to/test-knowledge-base.md#batch-test-with-tool)
+
+Learn more about metadata:
+* [Authoring - add metadata to QnA pair](../how-to/edit-knowledge-base.md#add-metadata)
+* [Query prediction - filter answers by metadata](../how-to/query-knowledge-base-with-metadata.md)
ai-services Quickstart Sdk https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/qnamaker/Quickstarts/quickstart-sdk.md
+
+ Title: "Quickstart: Use SDK to create and manage knowledge base - QnA Maker"
+description: This quickstart shows you how to create and manage your knowledge base using the client SDK.
+++++ Last updated : 01/26/2022
+ms.devlang: csharp, java, javascript, python
+
+zone_pivot_groups: qnamaker-quickstart
++
+# Quickstart: QnA Maker client library
+
+Get started with the QnA Maker client library. Follow these steps to install the package and try out the example code for basic tasks.
+++++++
+## Clean up resources
+
+If you want to clean up and remove an Azure AI services subscription, you can delete the resource or resource group. Deleting the resource group also deletes any other resources associated with it.
+
+* [Portal](../../multi-service-resource.md?pivots=azportal#clean-up-resources)
+* [Azure CLI](../../multi-service-resource.md?pivots=azcli#clean-up-resources)
+
+## Next steps
+
+> [!div class="nextstepaction"]
+>[Tutorial: Test your knowledge base with a batch file](../how-to/test-knowledge-base.md#batch-test-with-tool)
+
+* [What is the QnA Maker API?](../overview/overview.md)
+* [Edit a knowledge base](../how-to/edit-knowledge-base.md)
+* [Get usage analytics](../how-to/get-analytics-knowledge-base.md)
ai-services Create Faq Bot With Azure Bot Service https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/qnamaker/Tutorials/create-faq-bot-with-azure-bot-service.md
+
+ Title: "Tutorial: Create an FAQ bot with Azure AI Bot Service"
+description: In this tutorial, create a no code FAQ Bot with QnA Maker and Azure AI Bot Service.
++++++ Last updated : 08/31/2020+++
+# Tutorial: Create an FAQ bot with Azure AI Bot Service
+Create an FAQ Bot with QnA Maker and Azure [Bot Service](https://azure.microsoft.com/services/bot-service/) with no code.
+
+In this tutorial, you learn how to:
+
+<!-- green checkmark -->
+> [!div class="checklist"]
+> * Link a QnA Maker knowledge base to an Azure AI Bot Service
+> * Deploy the Bot
+> * Chat with the Bot in web chat
+> * Light up the Bot in the supported channels
++
+## Create and publish a knowledge base
+
+Follow the [quickstart](../quickstarts/create-publish-knowledge-base.md) to create a knowledge base. Once the knowledge base has been successfully published, you'll see the following page.
+
+> [!div class="mx-imgBorder"]
+> ![Screenshot of successful publishing](../media/qnamaker-create-publish-knowledge-base/publish-knowledge-base-to-endpoint.png)
++
+## Create a bot
+
+After publishing, you can create a bot from the **Publish** page:
+
+* You can create several bots quickly, all pointing to the same knowledge base, with different regions or pricing plans for the individual bots.
+* If you want only one bot for the knowledge base, use the **View all your bots on the Azure portal** link to view a list of your current bots.
+
+When you make changes to the knowledge base and republish, you don't need to take further action with the bot. It's already configured to work with the knowledge base, and works with all future changes to the knowledge base. Every time you publish a knowledge base, all the bots connected to it are automatically updated.
+
+1. In the QnA Maker portal, on the **Publish** page, select **Create bot**. This button appears only after you've published the knowledge base.
+
+ > [!div class="mx-imgBorder"]
+ > ![Screenshot of creating a bot](../media/qnamaker-create-publish-knowledge-base/create-bot-from-published-knowledge-base-page.png)
+
+1. A new browser tab opens for the Azure portal, with the Azure AI Bot Service's creation page. Configure the Azure AI Bot Service. The bot and QnA Maker can share the web app service plan, but can't share the web app. This means the **app name** for the bot must be different from the app name for the QnA Maker service.
+
+ * **Do**
+ * Change bot handle - if it is not unique.
+ * Select SDK Language. Once the bot is created, you can download the code to your local development environment and continue the development process.
+ * **Don't**
+ * Change the following settings in the Azure portal when creating the bot. They are pre-populated for your existing knowledge base:
+ * QnA Auth Key
+ * App service plan and location
++
+1. After the bot is created, open the **Bot service** resource.
+1. Under **Bot Management**, select **Test in Web Chat**.
+1. At the chat prompt of **Type your message**, enter:
+
+ `Azure services?`
+
+ The chat bot responds with an answer from your knowledge base.
+
+ > [!div class="mx-imgBorder"]
+ > ![Screenshot of bot returning a response](../media/qnamaker-create-publish-knowledge-base/test-web-chat.png)
++
+1. Light up the Bot in additional [supported channels](/azure/bot-service/bot-service-manage-channels).
+
+## Integrate the bot with channels
+
+Select **Channels** in the Bot service resource that you have created. You can light up the Bot in additional [supported channels](/azure/bot-service/bot-service-manage-channels).
+
+ >[!div class="mx-imgBorder"]
+ >![Screenshot of integration with teams](../media/qnamaker-tutorial-updates/connect-with-teams.png)
ai-services Export Knowledge Base https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/qnamaker/Tutorials/export-knowledge-base.md
+
+ Title: Export knowledge bases - QnA Maker
+description: Copying a knowledge base requires exporting it from one knowledge base and then importing it into another.
++++++ Last updated : 11/09/2020++
+# Move a knowledge base using export-import
+
+You may want to create a copy of your knowledge base for several reasons:
+
+* To copy a knowledge base from QnA Maker GA to Custom question answering
+* To implement a backup and restore process
+* To integrate with your CI/CD pipeline
+* To move your data to different regions
++
+## Prerequisites
+
+> * If you don't have an Azure subscription, [create a free account](https://azure.microsoft.com/free/cognitive-services/) before you begin.
+> * A [QnA Maker resource](https://portal.azure.com/#create/Microsoft.CognitiveServicesQnAMaker) created in the Azure portal. Remember your Azure Active Directory ID, Subscription, QnA resource name you selected when you created the resource.
+> * Set up a new [QnA Maker service](../how-to/set-up-qnamaker-service-azure.md)
+
+## Export a knowledge base
+1. Sign in to [QnA Maker portal](https://qnamaker.ai).
+1. Select the knowledge base you want to move.
+
+1. On the **Settings** page, you have the options to export **QnAs**, **Synonyms**, or **Knowledge Base Replica**. You can choose to download the data in .tsv/.xlsx.
+
+    1. **QnAs**: When exporting QnAs, all QnA pairs (with questions, answers, metadata, follow-up prompts, and the data source names) are downloaded. The QnA IDs that are exported with the questions and answers can be used to update a specific QnA pair using the [update API](/rest/api/cognitiveservices/qnamaker/knowledgebase/update) (a minimal example follows this list). The QnA ID for a specific QnA pair remains unchanged across multiple export operations.
+    2. **Synonyms**: You can export synonyms that have been added to the knowledge base.
+    3. **Knowledge Base Replica**: If you want to download the entire knowledge base with synonyms and other settings, choose this option.
+
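+As a minimal sketch (assuming an exported QnA ID of 1), the body of an update API request that changes only the answer text of that pair might look like the following. Consult the linked update API reference for the full request schema:
+
+```json
+{
+  "update": {
+    "qnaList": [
+      {
+        "id": 1,
+        "answer": "Updated answer text for this QnA pair."
+      }
+    ]
+  }
+}
+```
+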
+## Import a knowledge base
+1. Select **Create a knowledge base** from the top menu of the qnamaker.ai portal and then create an _empty_ knowledge base by not adding any URLs or files. Set the name of your choice for the new knowledge base and then click **Create your KB**.
+
+1. In this new knowledge base, open the **Settings** tab, and under _Import knowledge base_ select one of the following options: **QnAs**, **Synonyms**, or **Knowledge Base Replica**.
+
+    1. **QnAs**: This option imports all QnA pairs. **The QnA pairs created in the new knowledge base have the same QnA ID as in the exported file**. You can refer to [SampleQnAs.xlsx](https://aka.ms/qnamaker-sampleqnas) and [SampleQnAs.tsv](https://aka.ms/qnamaker-sampleqnastsv) to import QnAs.
+    2. **Synonyms**: This option can be used to import synonyms to the knowledge base. You can refer to [SampleSynonyms.xlsx](https://aka.ms/qnamaker-samplesynonyms) and [SampleSynonyms.tsv](https://aka.ms/qnamaker-samplesynonymstsv) to import synonyms.
+    3. **Knowledge Base Replica**: This option can be used to import a KB replica with QnAs, synonyms, and settings. You can refer to [KBReplicaSampleExcel](https://aka.ms/qnamaker-samplereplica) and [KBReplicaSampleTSV](https://aka.ms/qnamaker-samplereplicatsv) for more details. If you also want to add unstructured content to the replica, refer to [CustomQnAKBReplicaSample](https://aka.ms/qnamaker-samplev2replica).
+
+    Either QnAs or unstructured content is required when importing a replica. Unstructured documents are only valid for Custom question answering.
+    The Synonyms file is not mandatory when importing a replica.
+    The Settings file is mandatory when importing a replica.
+
+ |Settings|Update permitted when importing to QnA Maker KB?|Update permitted when importing to Custom question answering KB?|
+ |:--|--|--|
+ |DefaultAnswerForKB|No|Yes|
+ |EnableActiveLearning (True/False)|Yes|No|
+ |EnableMultiTurnExtraction (True/False)|Yes|Yes|
+ |DefaultAnswerforMultiturn|Yes|Yes|
+ |Language|No|No|
+
+1. **Test** the new knowledge base using the Test panel. Learn how to [test your knowledge base](../how-to/test-knowledge-base.md).
+
+1. **Publish** the knowledge base and create a chat bot. Learn how to [publish your knowledge base](../quickstarts/create-publish-knowledge-base.md#publish-the-knowledge-base).
+
+ > [!div class="mx-imgBorder"]
+ > ![Migrate knowledge base](../media/qnamaker-how-to-migrate-kb/import-export-kb.png)
+
+## Programmatically export a knowledge base from QnA Maker
+
+The export/import process is programmatically available using the following REST APIs (a sample download request follows the lists):
+
+**Export**
+
+* [Download knowledge base API](/rest/api/cognitiveservices/qnamaker4.0/knowledgebase/download)
+
+**Import**
+
+* [Replace API (reload with same knowledge base ID)](/rest/api/cognitiveservices/qnamaker4.0/knowledgebase/replace)
+* [Create API (load with new knowledge base ID)](/rest/api/cognitiveservices/qnamaker4.0/knowledgebase/create)
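+
+As a sketch, a download (export) call is a GET request against your QnA Maker resource's authoring endpoint. The host name, knowledge base ID, and key below are placeholders, and `Prod` can be replaced with `Test` to download the test environment:
+
+```
+GET https://YOUR-RESOURCE-NAME.cognitiveservices.azure.com/qnamaker/v4.0/knowledgebases/YOUR-KB-ID/Prod/qna
+Ocp-Apim-Subscription-Key: YOUR-AUTHORING-KEY
+```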
+
+## Chat logs
+
+There is no way to export chat logs, since the new knowledge base uses Application Insights for storing chat logs.
+
+## Next steps
+
+> [!div class="nextstepaction"]
+> [Edit a knowledge base](../how-to/edit-knowledge-base.md)
ai-services Integrate With Power Virtual Assistant Fallback Topic https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/qnamaker/Tutorials/integrate-with-power-virtual-assistant-fallback-topic.md
+
+ Title: "Tutorial: Integrate with Power Virtual Agents - QnA Maker"
+description: In this tutorial, create and extend a Power Virtual Agents bot to provide answers from your QnA Maker knowledge base.
++++++ Last updated : 11/09/2020++
+# Tutorial: Add your knowledge base to Power Virtual Agents
+
+Create and extend a [Power Virtual Agents](https://powervirtualagents.microsoft.com/) bot to provide answers from your knowledge base.
+
+> [!NOTE]
+> The integration demonstrated in this tutorial is in preview and is not intended for deployment to production environments.
+
+In this tutorial, you learn how to:
+
+> [!div class="checklist"]
+>
+> * Create a Power Virtual Agents bot
+> * Create a system fallback topic
+> * Add QnA Maker as an action to a topic as a Power Automate flow
+> * Create a Power Automate solution
+> * Add a Power Automate flow to your solution
+> * Publish Power Virtual Agents
+> * Test Power Virtual Agents, and receive an answer from your QnA Maker knowledge base
++
+## Create and publish a knowledge base
+
+1. Follow the [quickstart](../quickstarts/create-publish-knowledge-base.md) to create a knowledge base. Don't complete the last section, about creating a bot. Instead, use this tutorial to create a bot with Power Virtual Agents.
+
+ > [!div class="mx-imgBorder"]
+ > ![Screenshot of published knowledge base settings](../media/how-to-integrate-power-virtual-agent/published-knowledge-base-settings.png)
+
+ Enter your published knowledge base settings found on the **Settings** page in the [QnA Maker](https://www.qnamaker.ai/) portal. You will need this information for the [Power Automate step](#create-a-power-automate-flow-to-connect-to-your-knowledge-base) to configure your QnA Maker `GenerateAnswer` connection.
+
+1. In the QnA Maker portal, on the **Settings** page, find the endpoint key, endpoint host, and knowledge base ID.
+
+## Create bot in Power Virtual Agents
+
+[Power Virtual Agents](https://powervirtualagents.microsoft.com/) allows teams to create powerful bots by using a guided, no-code graphical interface. You don't need data scientists or developers.
+
+Create a bot by following the steps in [Create and delete Power Virtual Agents bots](/power-virtual-agents/authoring-first-bot).
+
+## Create the system fallback topic
+
+In Power Virtual Agents, you create a bot with a series of topics (subject areas), in order to answer user questions by performing actions.
+
+Although the bot can connect to your knowledge base from any topic, this tutorial uses the *system fallback* topic. The fallback topic is used when the bot can't find an answer. The bot passes the user's text to QnA Maker's `GenerateAnswer` API, receives the answer from your knowledge base, and displays it to the user as a message.
+
+Create a fallback topic by following the steps in [Configure the system fallback topic in Power Virtual Agents](/power-virtual-agents/authoring-system-fallback-topic).
+
+## Use the authoring canvas to add an action
+
+Use the Power Virtual Agents authoring canvas to connect the fallback topic to your knowledge base. The topic starts with the unrecognized user text. Add an action that passes that text to QnA Maker, and then shows the answer as a message. The last step of displaying an answer is handled as a [separate step](#add-your-solutions-flow-to-power-virtual-agents), later in this tutorial.
+
+This section creates the fallback topic conversation flow.
+
+1. The new fallback action might already have conversation flow elements. Delete the **Escalate** item by selecting the **Options** menu.
+
+ :::image type="content" source="../media/how-to-integrate-power-virtual-agent/delete-escalate-action-using-option-menu.png" alt-text="Partial screenshot of conversation flow, with delete option highlighted.":::
+
+1. Above the **Message** node, select the plus (**+**) icon, and then select **Call an action**.
+
+ :::image type="content" source="../media/how-to-integrate-power-virtual-agent/create-new-item-call-an-action.png" alt-text="Partial Screenshot of Call an action.":::
+
+1. Select **Create a flow**. The process takes you to the Power Automate portal.
+
+ > [!div class="mx-imgBorder"]
+ > ![Screenshot of Create a flow](../media/how-to-integrate-power-virtual-agent/create-a-flow.png)
+
+ Power Automate opens to a new template. You won't use this new template.
+
+ :::image type="content" source="../media/how-to-integrate-power-virtual-agent/power-automate-flow-initial-template.png" alt-text="Partial Screenshot of Power Automate with new flow template.":::
+
+## Create a Power Automate flow to connect to your knowledge base
+
+> [!NOTE]
+> Currently the Power Automate template does not support QnA Maker managed (Preview) endpoints. To add a QnA Maker managed (Preview) knowledge base to Power Automate skip this step and manually add the endpoints to it.
+
+The following procedure creates a Power Automate flow that:
+
+* Takes the incoming user text, and sends it to QnA Maker.
+* Returns the top response back to your bot.
+
+1. In **Power Automate**, select **Templates** from the left navigation. If you are asked if you want to leave the browser page, accept Leave.
+
+1. On the templates page, search for the template **Generate answer using QnA Maker** then select the template. This template has all the steps to call QnA Maker with your knowledge base settings and return the top answer.
+
+1. On the new screen for the QnA Maker flow, select **Continue**.
+
+ :::image type="content" source="../media/how-to-integrate-power-virtual-agent/power-automate-qna-flow-template-continue.png" alt-text="Partial Screenshot of QnA Maker template flow with Continue button highlighted.":::
+
+1. Select the **Generate Answer** action box, and fill in your QnA Maker settings from a previous section titled [Create and publish a knowledge base](#create-and-publish-a-knowledge-base). Your **Service Host** in the following image refers to your knowledge base host **Host** and is in the format of `https://YOUR-RESOURCE-NAME.azurewebsites.net/qnamaker`.
+
+ :::image type="content" source="../media/how-to-integrate-power-virtual-agent/power-virtual-agent-fill-in-generate-answer-settings.png" alt-text="Partial Screenshot of QnA Maker template flow with Generate answer (Preview) highlighted.":::
+
+1. Select **Save** to save the flow.
+
+## Create a solution and add the flow
+
+For the bot to find and connect to the flow, the flow must be included in a Power Automate solution.
+
+1. While still in the Power Automate portal, select **Solutions** from the left-side navigation.
+
+1. Select **+ New solution**.
+
+1. Enter a display name. The list of solutions includes every solution in your organization or school. Choose a naming convention that helps you filter to just your solutions. For example, you might prefix your email to your solution name: `jondoe-power-virtual-agent-qnamaker-fallback`.
+
+1. Select your publisher from the list of choices.
+
+1. Accept the default values for the name and version.
+
+1. Select **Create** to finish the process.
+
+## Add your flow to the solution
+
+1. In the list of solutions, select the solution you just created. It should be at the top of the list. If it isn't, search by your email name, which is part of the solution name.
+
+1. In the solution, select **+ Add existing**, and then select **Flow** from the list.
+
+1. Find your flow from the **Outside solutions** list, and then select **Add** to finish the process. If there are many flows, look at the **Modified** column to find the most recent flow.
+
+## Add your solution's flow to Power Virtual Agents
+
+1. Return to the browser tab with your bot in Power Virtual Agents. The authoring canvas should still be open.
+
+1. To insert a new step in the flow, above the **Message** action box, select the plus (**+**) icon. Then select **Call an action**.
+
+1. From the **Flow** pop-up window, select the new flow named **Generate answers using QnA Maker knowledge base...**. The new action appears in the flow.
+
+ :::image type="content" source="../media/how-to-integrate-power-virtual-agent/power-virtual-agent-flow-after-adding-action.png" alt-text="Partial Screenshot of Power Virtual Agent topic conversation canvas after adding QnA Maker flow.":::
+
+1. To correctly set the input variable to the QnA Maker action, select **Select a variable**, then select **bot.UnrecognizedTriggerPhrase**.
+
+ :::image type="content" source="../media/how-to-integrate-power-virtual-agent/power-virtual-agent-selection-action-input.png" alt-text="Partial Screenshot of Power Virtual Agent topic conversation canvas selecting input variable.":::
+
+1. To correctly set the output variable to the QnA Maker action, in the **Message** action, select **UnrecognizedTriggerPhrase**, then select the icon to insert a variable, `{x}`, then select **FinalAnswer**.
+
+1. From the context toolbar, select **Save**, to save the authoring canvas details for the topic.
+
+Here's what the final bot canvas looks like.
+
+> [!div class="mx-imgBorder"]
+> ![Screenshot shows the final agent canvas with Trigger Phrases, Action, and then Message sections.](../media/how-to-integrate-power-virtual-agent/power-virtual-agent-topic-authoring-canvas-full-flow.png)
+
+## Test the bot
+As you design your bot in Power Virtual Agents, you can use [the Test bot pane](/power-virtual-agents/authoring-test-bot) to see how the bot leads a customer through the bot conversation.
+
+1. In the test pane, toggle **Track between topics**. This allows you to watch the progression between topics, as well as within a single topic.
+
+1. Test the bot by entering the user text in the following order. The authoring canvas reports the successful steps with a green check mark.
+
+ |Question order|Test questions|Purpose|
+ |--|--|--|
+ |1|Hello|Begin conversation|
+ |2|Store hours|Sample topic. This is configured for you without any additional work on your part.|
+ |3|Yes|In reply to `Did that answer your question?`|
+ |4|Excellent|In reply to `Please rate your experience.`|
+ |5|Yes|In reply to `Can I help with anything else?`|
+    |6|How can I improve the throughput performance for query predictions?|This question triggers the fallback action, which sends the text to your knowledge base to answer. Then the answer is shown. The green check marks for the individual actions indicate success for each action.|
+
+ :::image type="content" source="../media/how-to-integrate-power-virtual-agent/power-virtual-agent-test-tracked.png" alt-text="Screenshot of chat bot with canvas indicating green checkmarks for successful actions.":::
+
+## Publish your bot
+
+To make the bot available to all members of your school or organization, you need to *publish* it.
+
+Publish your bot by following the steps in [Publish your bot](/power-virtual-agents/publication-fundamentals-publish-channels).
+
+## Share your bot
+
+To make your bot available to others, you first need to publish it to a channel. For this tutorial we'll use the demo website.
+
+Configure the demo website by following the steps in [Configure a chatbot for a live or demo website](/power-virtual-agents/publication-connect-bot-to-web-channels).
+
+Then you can share your website URL with your school or organization members.
+
+## Clean up resources
+
+When you are done with the knowledge base, remove the QnA Maker resources in the Azure portal.
+
+## Next step
+
+[Get analytics on your knowledge base](../how-to/get-analytics-knowledge-base.md)
+
+Learn more about:
+
+* [Power Virtual Agents](/power-virtual-agents/)
+* [Power Automate](/power-automate/)
+* [QnA Maker connector](https://us.flow.microsoft.com/connectors/shared_cognitiveservicesqnamaker/qna-maker/) and the [settings for the connector](/connectors/cognitiveservicesqnamaker/)
ai-services Choose Natural Language Processing Service https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/qnamaker/choose-natural-language-processing-service.md
+
+ Title: Use NLP with LUIS for chat bots
+description: Learn when to use Language Understanding and when to use QnA Maker, and understand how they complement each other.
++++++ Last updated : 04/16/2020++
+# Use Azure AI services with natural language processing (NLP) to enrich bot conversations
+++
+## Next steps
+
+* Learn [how to manage Azure resources](How-To/set-up-qnamaker-service-azure.md)
ai-services Encrypt Data At Rest https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/qnamaker/encrypt-data-at-rest.md
+
+ Title: QnA Maker encryption of data at rest
+
+description: Microsoft offers Microsoft-managed encryption keys, and also lets you manage your Azure AI services subscriptions with your own keys, called customer-managed keys (CMK). This article covers data encryption at rest for QnA Maker, and how to enable and manage CMK.
+++++ Last updated : 09/09/2021+
+#Customer intent: As a user of the QnA Maker service, I want to learn how encryption at rest works.
+++
+# QnA Maker encryption of data at rest
+
+QnA Maker automatically encrypts your data when it is persisted to the cloud, helping to meet your organizational security and compliance goals.
++
+## About encryption key management
+
+By default, your subscription uses Microsoft-managed encryption keys. There is also the option to manage your subscription with your own keys called customer-managed keys (CMK). CMK offers greater flexibility to create, rotate, disable, and revoke access controls. You can also audit the encryption keys used to protect your data. If CMK is configured for your subscription, double encryption is provided, which offers a second layer of protection, while allowing you to control the encryption key through your Azure Key Vault.
+
+QnA Maker uses CMK support from Azure Cognitive Search. Configure [CMK in Azure Cognitive Search using Azure Key Vault](../../search/search-security-manage-encryption-keys.md). This Azure Cognitive Search instance should be associated with the QnA Maker service to make it CMK-enabled.
++
+> [!IMPORTANT]
+> Your Azure Search service resource must have been created after January 2019 and cannot be in the free (shared) tier. There is no support to configure customer-managed keys in the Azure portal.
+
+## Enable customer-managed keys
+
+The QnA Maker service uses CMK from the Azure Search service. Follow these steps to enable CMKs:
+
+1. Create a new Azure Search instance and enable the prerequisites mentioned in the [customer-managed key prerequisites for Azure Cognitive Search](../../search/search-security-manage-encryption-keys.md#prerequisites).
+
+ ![View Encryption settings 1](../media/cognitive-services-encryption/qna-encryption-1.png)
+
+2. When you create a QnA Maker resource, it's automatically associated with an Azure Search instance. This instance cannot be used with CMK. To use CMK, associate the Azure Search instance you created in step 1 by updating the `AzureSearchAdminKey` and `AzureSearchName` settings in your QnA Maker resource.
+
+ ![View Encryption settings 2](../media/cognitive-services-encryption/qna-encryption-2.png)
+
+3. Next, create a new application setting:
+ * **Name**: Set to `CustomerManagedEncryptionKeyUrl`
+ * **Value**: Use the value that you got in Step 1 when creating your Azure Search instance.
+
+ ![View Encryption settings 3](../media/cognitive-services-encryption/qna-encryption-3.png)
+
+4. When finished, restart the runtime. Now your QnA Maker service is CMK-enabled.
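+
+If you prefer to script steps 2 through 4, the same settings can be applied with the Azure CLI. The following is a minimal sketch; the resource group, App Service name, and setting values are placeholders you must replace with your own.
+
+```console
+# Point the QnA Maker App Service at the CMK-enabled Azure Search instance and set the key URL.
+az webapp config appsettings set \
+  --resource-group <your-resource-group> \
+  --name <your-qnamaker-app-service> \
+  --settings AzureSearchName=<new-search-service-name> \
+             AzureSearchAdminKey=<new-search-admin-key> \
+             CustomerManagedEncryptionKeyUrl=<key-vault-key-url>
+
+# Restart the runtime so the new settings take effect.
+az webapp restart --resource-group <your-resource-group> --name <your-qnamaker-app-service>
+```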
+
+## Regional availability
+
+Customer-managed keys are available in all Azure Search regions.
+
+## Encryption of data in transit
+
+The QnA Maker portal runs in the user's browser. Every action triggers a direct call to the respective Azure AI services API, so QnA Maker is compliant for data in transit.
+However, because the QnA Maker portal service is hosted in the West US region, it may not be ideal for non-US customers.
+
+## Next steps
+
+* [Encryption in Azure Search using CMKs in Azure Key Vault](../../search/search-security-manage-encryption-keys.md)
+* [Data encryption at rest](../../security/fundamentals/encryption-atrest.md)
+* [Learn more about Azure Key Vault](../../key-vault/general/overview.md)
ai-services Limits https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/qnamaker/limits.md
+
+ Title: Limits and boundaries - QnA Maker
+description: QnA Maker has meta-limits for parts of the knowledge base and service. It is important to keep your knowledge base within those limits in order to test and publish.
++++++ Last updated : 11/02/2021+++
+# QnA Maker knowledge base limits and boundaries
+
+The QnA Maker limits provided below are a combination of the [Azure Cognitive Search pricing tier limits](../../search/search-limits-quotas-capacity.md) and the [QnA Maker pricing tier limits](https://azure.microsoft.com/pricing/details/cognitive-services/qna-maker/). You need to know both sets of limits to understand how many knowledge bases you can create per resource and how large each knowledge base can grow.
+
+## Knowledge bases
+
+The maximum number of knowledge bases is based on [Azure Cognitive Search tier limits](../../search/search-limits-quotas-capacity.md).
+
+|**Azure Cognitive Search tier** | **Free** | **Basic** |**S1** | **S2**| **S3** |**S3 HD**|
+|--|--|--|--|--|--|--|
+|Maximum number of published knowledge bases allowed|2|14|49|199|199|2,999|
+
+ For example, if your tier has 15 allowed indexes, you can publish 14 knowledge bases (one index per published knowledge base). The 15th index, `testkb`, is used for all the knowledge bases for authoring and testing.
+
+## Extraction Limits
+
+### File naming constraints
+
+File names may not include the following characters:
+
+|Do not use character|
+|--|
+|Single quote `'`|
+|Double quote `"`|
+
+### Maximum file size
+
+|Format|Max file size (MB)|
+|--|--|
+|`.docx`|10|
+|`.pdf`|25|
+|`.tsv`|10|
+|`.txt`|10|
+|`.xlsx`|3|
+
+### Maximum number of files
+
+The maximum number of files that can be extracted and maximum file size is based on your **[QnA Maker pricing tier limits](https://azure.microsoft.com/pricing/details/cognitive-services/qna-maker/)**.
+
+### Maximum number of deep-links from URL
+
+The maximum number of deep-links that can be crawled for extraction of QnAs from a URL page is **20**.
+
+## Metadata Limits
+
+Metadata is presented as a text-based key:value pair, such as `product:windows 10`. It is stored and compared in lowercase. The maximum number of metadata fields is based on your **[Azure Cognitive Search tier limits](../../search/search-limits-quotas-capacity.md)**.
+
+For the GA version, since the test index is shared across all KBs, the limit is applied across all KBs in the QnA Maker service.
+
+|**Azure Cognitive Search tier** | **Free** | **Basic** |**S1** | **S2**| **S3** |**S3 HD**|
+|--|--|--|--|--|--|--|
+|Maximum metadata fields per QnA Maker service (across all KBs)|1,000|100*|1,000|1,000|1,000|1,000|
+
+### By name and value
+
+The length and acceptable characters for metadata name and value are listed in the following table.
+
+|Item|Allowed chars|Regex pattern match|Max chars|
+|--|--|--|--|
+|Name (key)|Allows<br>Alphanumeric (letters and digits)<br>`_` (underscore)<br> Must not contain spaces.|`^[a-zA-Z0-9_]+$`|100|
+|Value|Allows everything except<br>`:` (colon)<br>`\|` (vertical pipe)<br>Only one value allowed.|`^[^:\|]+$`|500|
+|||||
+
+## Knowledge Base content limits
+Overall limits on the content in the knowledge base:
+* Length of answer text: 25,000 characters
+* Length of question text: 1,000 characters
+* Length of metadata key text: 100 characters
+* Length of metadata value text: 500 characters
+* Supported characters for metadata name: Alphabets, digits, and `_`
+* Supported characters for metadata value: All except `:` and `|`
+* Length of file name: 200
+* Supported file formats: ".tsv", ".pdf", ".txt", ".docx", ".xlsx".
+* Maximum number of alternate questions: 300
+* Maximum number of question-answer pairs: Depends on the **[Azure Cognitive Search tier](../../search/search-limits-quotas-capacity.md#document-limits)** chosen. A question and answer pair maps to a document on Azure Cognitive Search index.
+* URL/HTML page: 1 million characters
+
+## Create Knowledge base call limits
+These represent the limits for each create knowledge base action; that is, clicking *Create KB* or calling the CreateKnowledgeBase API.
+* Recommended maximum number of alternate questions per answer: 300
+* Maximum number of URLs: 10
+* Maximum number of files: 10
+* Maximum number of QnAs permitted per call: 1000
+
+## Update Knowledge base call limits
+These represent the limits for each update action; that is, clicking *Save and train* or calling the UpdateKnowledgeBase API.
+* Length of each source name: 300
+* Recommended maximum number of alternate questions added or deleted: 300
+* Maximum number of metadata fields added or deleted: 10
+* Maximum number of URLs that can be refreshed: 5
+* Maximum number of QnAs permitted per call: 1000
+
+## Add unstructured file limits
+
+> [!NOTE]
+> * If you need to use larger files than the limit allows, you can break the file into smaller files before sending them to the API.
+
+These represent the limits when unstructured files are used to *Create KB* or call the CreateKnowledgeBase API:
+* Length of file: We will extract the first 32,000 characters
+* Maximum three responses per file.
+
+## Prebuilt question answering limits
+
+> [!NOTE]
+> * If you need to use larger documents than the limit allows, you can break the text into smaller chunks of text before sending them to the API.
+> * A document is a single string of text characters.
+
+These represent the limits when Prebuilt API is used to *Generate response* or call the GenerateAnswer API:
+* Number of documents: 5
+* Maximum size of a single document: 5,120 characters
+* Maximum three responses per document.
+
+> [!IMPORTANT]
+> Support for unstructured files/content is available only in question answering.
+
+## Alterations limits
+[Alterations](/rest/api/cognitiveservices/qnamaker/alterations/replace) do not allow these special characters: ',', '?', ':', ';', '\"', '\'', '(', ')', '{', '}', '[', ']', '-', '+', '.', '/', '!', '*', '-', '_', '@', '#'
+
+## Next steps
+
+Learn when and how to change [service pricing tiers](How-To/set-up-qnamaker-service-azure.md#upgrade-qna-maker-sku).
ai-services Reference App Service https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/qnamaker/reference-app-service.md
+
+ Title: Service configuration - QnA Maker
+description: Understand how and where to configure resources.
++++++ Last updated : 11/02/2021+++
+# Service configuration
+
+Each version of QnA Maker uses a different set of Azure resources (services). This article describes the supported customizations for these services.
+
+## App Service
+
+QnA Maker uses the App Service to provide the query runtime used by the [generateAnswer API](/rest/api/cognitiveservices/qnamaker4.0/runtime/generateanswer).
+
+These settings are available in the Azure portal for the App Service. To find them, select **Settings**, then **Configuration**.
+
+You can set an individual setting either through the Application Settings list, or modify several settings by selecting **Advanced edit**.
+
+|Resource|Setting|
+|--|--|
+|AzureSearchAdminKey|Cognitive Search - used for QnA pair storage and Ranker #1|
+|AzureSearchName|Cognitive Search - used for QnA pair storage and Ranker #1|
+|DefaultAnswer|Text of answer when no match is found|
+|UserAppInsightsAppId|Chat log and telemetry|
+|UserAppInsightsKey|Chat log and telemetry|
+|UserAppInsightsName|Chat log and telemetry|
+|QNAMAKER_EXTENSION_VERSION|Always set to _latest_. This setting will initialize the QnAMaker Site Extension in the App Service.|
+
+You need to **restart** the service from the **Overview** page of the Azure portal, once you are done making changes.
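+
+As a convenience, you can also review the settings and restart the App Service from the Azure CLI. This is a sketch with placeholder names, not a required step:
+
+```console
+# List the current application settings for the QnA Maker App Service.
+az webapp config appsettings list --resource-group <your-resource-group> --name <your-qnamaker-app-service> --output table
+
+# Restart the App Service after you change any setting.
+az webapp restart --resource-group <your-resource-group> --name <your-qnamaker-app-service>
+```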
+
+## QnA Maker Service
+
+The QnA Maker service provides configuration so that multiple users can collaborate on a single QnA Maker service and all its knowledge bases.
+
+Learn [how to add collaborators](./index.yml) to your service.
+
+## Change Azure Cognitive Search
+
+Learn [how to change the Cognitive Search service](./how-to/configure-QnA-Maker-resources.md#configure-qna-maker-to-use-different-cognitive-search-resource) linked to your QnA Maker service.
+
+## Change default answer
+
+Learn [how to change the text of your default answers](How-To/change-default-answer.md).
+
+## Telemetry
+
+Application Insights is used for monitoring telemetry with QnA Maker GA. There are no configuration settings specific to QnA Maker.
+
+## App Service Plan
+
+App Service Plan has no configuration settings specific to QnA Maker.
+
+## Next steps
+
+Learn more about [formats](reference-document-format-guidelines.md) for documents and URLs you want to import into a knowledge base.
ai-services Reference Document Format Guidelines https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/qnamaker/reference-document-format-guidelines.md
+
+ Title: Import document format guidelines - QnA Maker
+description: Use these guidelines for importing documents to get the best results for your content.
++++++ Last updated : 04/06/2020+++
+# Format guidelines for imported documents and URLs
+
+Review these formatting guidelines to get the best results for your content.
+
+## Formatting considerations
+
+After importing a file or URL, QnA Maker converts and stores your content in the [markdown format](https://en.wikipedia.org/wiki/Markdown). The conversion process adds new lines in the text, such as `\n\n`. Knowledge of the markdown format helps you understand the converted content and manage your knowledge base content.
+
+If you add or edit your content directly in your knowledge base, use **markdown formatting** to create rich text content or to change the markdown-format content that is already in the answer. QnA Maker supports much of the markdown format to bring rich text capabilities to your content. However, the client application, such as a chat bot, may not support the same set of markdown formats. It is important to test the client application's display of answers.
+
+See a full list of [content types and examples](./concepts/data-sources-and-content.md#content-types-of-documents-you-can-add-to-a-knowledge-base).
+
+## Basic document formatting
+
+QnA Maker identifies sections and subsections and relationships in the file based on visual clues like:
+
+* font size
+* font style
+* numbering
+* colors
+
+> [!NOTE]
+> We don't support extraction of images from uploaded documents currently.
+
+### Product manuals
+
+A manual is typically guidance material that accompanies a product. It helps the user to set up, use, maintain, and troubleshoot the product. When QnA Maker processes a manual, it extracts the headings and subheadings as questions and the subsequent content as answers. See an example [here](https://download.microsoft.com/download/2/9/B/29B20383-302C-4517-A006-B0186F04BE28/surface-pro-4-user-guide-EN.pdf).
+
+Below is an example of a manual with an index page and hierarchical content:
+
+> [!div class="mx-imgBorder"]
+> ![Product Manual example for a knowledge base](./media/qnamaker-concepts-datasources/product-manual.png)
+
+> [!NOTE]
+> Extraction works best on manuals that have a table of contents and/or an index page, and a clear structure with hierarchical headings.
+
+### Brochures, guidelines, papers, and other files
+
+Many other types of documents can also be processed to generate QA pairs, provided they have a clear structure and layout. These include: Brochures, guidelines, reports, white papers, scientific papers, policies, books, etc. See an example [here](https://qnamakerstore.blob.core.windows.net/qnamakerdata/docs/Manage%20Azure%20Blob%20Storage.docx).
+
+Below is an example of a semi-structured doc, without an index:
+
+> [!div class="mx-imgBorder"]
+> ![Azure Blob storage semi-structured Doc](./media/qnamaker-concepts-datasources/semi-structured-doc.png)
+
+### Unstructured document support
+
+Custom question answering now supports unstructured documents. A document that does not have its content organized in a well-defined hierarchical manner, is missing a set structure, or has free-flowing content can be considered an unstructured document.
+
+Below is an example of an unstructured PDF document:
+
+> [!div class="mx-imgBorder"]
+> ![Unstructured document example for a knowledge base](./media/qnamaker-concepts-datasources/unstructured-qna-pdf.png)
+
+ Currently this functionality is available only via document upload and only for PDF and DOC file formats.
+
+> [!IMPORTANT]
+> Support for unstructured file/content is available only in question answering.
+
+### Structured QnA Document
+
+The format for structured question-answer pairs in DOC files is alternating questions and answers, one question per line followed by its answer on the next line, as shown below:
+
+```text
+Question1
+
+Answer1
+
+Question2
+
+Answer2
+```
+
+Below is an example of a structured QnA word document:
+
+> [!div class="mx-imgBorder"]
+> ![Structured QnA document example for a knowledge base](./media/qnamaker-concepts-datasources/structured-qna-doc.png)
+
+### Structured *TXT*, *TSV* and *XLS* Files
+
+QnAs in the form of structured *.txt*, *.tsv* or *.xls* files can also be uploaded to QnA Maker to create or augment a knowledge base. These can either be plain text, or can have content in RTF or HTML. [QnA pairs](./how-to/edit-knowledge-base.md#question-and-answer-pairs) have an optional metadata field that can be used to group QnA pairs into categories.
+
+| Question | Answer | Metadata (1 key: 1 value) |
+|--|--|--|
+| Question1 | Answer1 | <code>Key1:Value1 &#124; Key2:Value2</code> |
+| Question2 | Answer2 | `Key:Value` |
+
+Any additional columns in the source file are ignored.
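+
+For illustration, a minimal structured *.tsv* file matching the table above could look like the following sketch. The questions, answers, and metadata values are made up for this example, and the literal string `<TAB>` stands for a real tab character separating the columns:
+
+```text
+How do I reset my password?<TAB>Select Forgot password on the sign-in page.<TAB>category:account
+What is the cheapest plan?<TAB>The Basic plan is the cheapest.<TAB>product:windows 10|language:english
+```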
+
+#### Example of structured Excel file
+
+Below is an example of a structured QnA *.xls* file, with HTML content:
+
+> [!div class="mx-imgBorder"]
+> ![Structured QnA excel example for a knowledge base](./media/qnamaker-concepts-datasources/structured-qna-xls.png)
+
+#### Example of alternate questions for single answer in Excel file
+
+Below is an example of a structured QnA *.xls* file, with several alternate questions for a single answer:
+
+> [!div class="mx-imgBorder"]
+> ![Example of alternate questions for single answer in Excel file](./media/qnamaker-concepts-datasources/xls-alternate-question-example.png)
+
+After the file is imported, the question-and-answer pair is in the knowledge base as shown below:
+
+> [!div class="mx-imgBorder"]
+> ![Screenshot of alternate questions for single answer imported into knowledge base](./media/qnamaker-concepts-datasources/xls-alternate-question-example-after-import.png)
+
+### Structured data format through import
+
+Importing a knowledge base replaces the content of the existing knowledge base. Import requires a structured .tsv file that contains data source information. This information helps QnA Maker group the question-answer pairs and attribute them to a particular data source. [QnA pairs](./how-to/edit-knowledge-base.md#question-and-answer-pairs) have an optional metadata field that can be used to group QnA pairs into categories.
+
+| Question | Answer | Source| Metadata (1 key: 1 value) |
+|--|--|--|--|
+| Question1 | Answer1 | Url1 | <code>Key1:Value1 &#124; Key2:Value2</code> |
+| Question2 | Answer2 | Editorial| `Key:Value` |
+
+<a href="#formatting-considerations"></a>
+
+### Multi-turn document formatting
+
+* Use headings and subheadings to denote hierarchy. For example, you can use h1 to denote the parent QnA and h2 to denote the QnA that should be taken as a prompt. Use smaller heading sizes to denote subsequent hierarchy. Don't use style, color, or some other mechanism to imply structure in your document; QnA Maker will not extract the multi-turn prompts.
+* The first character of a heading must be capitalized.
+* Do not end a heading with a question mark, `?`.
+
+**Sample documents**:<br>[Surface Pro (docx)](https://github.com/Azure-Samples/cognitive-services-sample-data-files/blob/master/qna-maker/data-source-formats/multi-turn.docx)<br>[Contoso Benefits (docx)](https://github.com/Azure-Samples/cognitive-services-sample-data-files/blob/master/qna-maker/data-source-formats/Multiturn-ContosoBenefits.docx)<br>[Contoso Benefits (pdf)](https://github.com/Azure-Samples/cognitive-services-sample-data-files/blob/master/qna-maker/data-source-formats/Multiturn-ContosoBenefits.pdf)
+
+## FAQ URLs
+
+QnA Maker can support FAQ web pages in 3 different forms:
+
+* Plain FAQ pages
+* FAQ pages with links
+* FAQ pages with a Topics Homepage
+
+### Plain FAQ pages
+
+This is the most common type of FAQ page, in which the answers immediately follow the questions in the same page.
+
+Below is an example of a plain FAQ page:
+
+> [!div class="mx-imgBorder"]
+> ![Plain FAQ page example for a knowledge base](./media/qnamaker-concepts-datasources/plain-faq.png)
++
+### FAQ pages with links
+
+In this type of FAQ page, questions are aggregated together and are linked to answers that are either in different sections of the same page, or in different pages.
+
+Below is an example of an FAQ page with links in sections that are on the same page:
+
+> [!div class="mx-imgBorder"]
+> ![Section Link FAQ page example for a knowledge base](./media/qnamaker-concepts-datasources/sectionlink-faq.png)
++
+### Parent Topics page links to child answers pages
+
+This type of FAQ has a Topics page where each topic is linked to a corresponding set of questions and answers on a different page. QnA Maker crawls all the linked pages to extract the corresponding questions & answers.
+
+Below is an example of a Topics page with links to FAQ sections in different pages.
+
+> [!div class="mx-imgBorder"]
+> ![Deep link FAQ page example for a knowledge base](./media/qnamaker-concepts-datasources/topics-faq.png)
+
+### Support URLs
+
+QnA Maker can process semi-structured support web pages, such as web articles that describe how to perform a given task, how to diagnose and resolve a given problem, and the best practices for a given process. Extraction works best on content that has a clear structure with hierarchical headings.
+
+> [!NOTE]
+> Extraction for support articles is a new feature and is in its early stages. It works best for simple pages that are well structured and do not contain complex headers/footers.
+
+> [!div class="mx-imgBorder"]
+> ![QnA Maker supports extraction from semi-structured web pages where a clear structure is presented with hierarchical headings](./media/qnamaker-concepts-datasources/support-web-pages-with-heirarchical-structure.png)
+
+## Import and export knowledge base
+
+**TSV and XLS files**, from exported knowledge bases, can only be used by importing the files from the **Settings** page in the QnA Maker portal. They can't be used as data sources during knowledge base creation or from the **+ Add file** or **+ Add URL** feature on the **Settings** page.
+
+When you import the Knowledge base through these **TSV and XLS files**, the QnA pairs get added to the editorial source and not the sources from which the QnAs were extracted in the exported Knowledge Base.
++
+## Next steps
+
+See a full list of [content types and examples](./concepts/data-sources-and-content.md#content-types-of-documents-you-can-add-to-a-knowledge-base)
ai-services Reference Markdown Format https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/qnamaker/reference-markdown-format.md
+
+ Title: Markdown format - QnA Maker
+description: Following is the list of markdown formats that you can use in QnA Maker's answer text.
++++++ Last updated : 03/19/2020++
+# Markdown format supported in QnA Maker answer text
+
+QnA Maker stores answer text as markdown. There are many flavors of markdown. In order to make sure the answer text is returned and displayed correctly, use this reference.
+
+Use the **[CommonMark](https://commonmark.org/help/tutorial/index.html)** tutorial to validate your Markdown. The tutorial has a **Try it** feature for quick copy/paste validation.
+
+## When to use rich-text editing versus markdown
+
+[Rich-text editing](How-To/edit-knowledge-base.md#add-an-editorial-qna-set) of answers allows you, as the author, to use a formatting toolbar to quickly select and format text.
+
+Markdown is a better tool when you need to autogenerate content to create knowledge bases to be imported as part of a CI/CD pipeline or for [batch testing](./index.yml).
+
+## Supported markdown format
+
+Following is the list of markdown formats that you can use in QnA Maker's answer text.
+
+|Purpose|Format|Example markdown|Rendering<br>as displayed in Chat bot|
+|--|--|--|--|
+|A new line between 2 sentences.|`\n\n`|`How can I create a bot with \n\n QnA Maker?`|![format new line between two sentences](./media/qnamaker-concepts-datasources/format-newline.png)|
+|Headers from h1 to h6, the number of `#` denotes which header. 1 `#` is the h1.|`\n# text \n## text \n### text \n####text \n#####text` |`## Creating a bot \n ...text.... \n### Important news\n ...text... \n### Related Information\n ....text...`<br><br>`\n# my h1 \n## my h2\n### my h3 \n#### my h4 \n##### my h5`|![format with markdown headers](./media/qnamaker-concepts-datasources/format-headers.png)<br>![format with markdown headers H1 to H5](./media/qnamaker-concepts-datasources/format-h1-h5.png)|
+|Italics |`*text*`|`How do I create a bot with *QnA Maker*?`|![format with italics](./media/qnamaker-concepts-datasources/format-italics.png)|
+|Strong (bold)|`**text**`|`How do I create a bot with **QnA Maker**?`|![format with strong marking for bold](./media/qnamaker-concepts-datasources/format-strong.png)|
+|URL for link|`[text](https://www.my.com)`|`How do I create a bot with [QnA Maker](https://www.qnamaker.ai)?`|![format for URL (hyperlink)](./media/qnamaker-concepts-datasources/format-url.png)|
+|*URL for public image|`![text](https://www.my.com/image.png)`|`How can I create a bot with ![QnAMaker](https://review.learn.microsoft.com/azure/ai-services/qnamaker/media/qnamaker-how-to-key-management/qnamaker-resource-list.png)`|![format for public image URL](./media/qnamaker-concepts-datasources/format-image-url.png)|
+|Strikethrough|`~~text~~`|`some ~~questoins~~ questions need to be asked`|![format for strikethrough](./media/qnamaker-concepts-datasources/format-strikethrough.png)|
+|Bold and italics|`***text***`|`How can I create a ***QnA Maker*** bot?`|![format for bold and italics](./media/qnamaker-concepts-datasources/format-bold-italics.png)|
+|Bold URL for link|`[**text**](https://www.my.com)`|`How do I create a bot with [**QnA Maker**](https://www.qnamaker.ai)?`|![format for bold URL](./media/qnamaker-concepts-datasources/format-bold-url.png)|
+|Italics URL for link|`[*text*](https://www.my.com)`|`How do I create a bot with [*QnA Maker*](https://www.qnamaker.ai)?`|![format for italics URL](./media/qnamaker-concepts-datasources/format-url-italics.png)|
+|Escape markdown symbols|`\*text\*`|`How do I create a bot with \*QnA Maker\*?`|![Format for escape markdown symbols.](./media/qnamaker-concepts-datasources/format-escape-markdown-symbols.png)|
+|Ordered list|`\n 1. item1 \n 1. item2`|`This is an ordered list: \n 1. List item 1 \n 1. List item 2`<br>The preceding example uses automatic numbering built into markdown.<br>`This is an ordered list: \n 1. List item 1 \n 2. List item 2`<br>The preceding example uses explicit numbering.|![format for ordered list](./media/qnamaker-concepts-datasources/format-ordered-list.png)|
+|Unordered list|`\n * item1 \n * item2`<br>or<br>`\n - item1 \n - item2`|`This is an unordered list: \n * List item 1 \n * List item 2`|![format for unordered list](./media/qnamaker-concepts-datasources/format-unordered-list.png)|
+|Nested lists|`\n * Parent1 \n\t * Child1 \n\t * Child2 \n * Parent2`<br><br>`\n * Parent1 \n\t 1. Child1 \n\t * Child2 \n 1. Parent2`<br><br>You can nest ordered and unordered lists together. The tab, `\t`, indicates the indentation level of the child element.|`This is an unordered list: \n * List item 1 \n\t * Child1 \n\t * Child2 \n * List item 2`<br><br>`This is an ordered nested list: \n 1. Parent1 \n\t 1. Child1 \n\t 1. Child2 \n 1. Parent2`|![format for nested unordered list](./media/qnamaker-concepts-datasources/format-nested-unordered-list.png)<br>![format for nested ordered list](./media/qnamaker-concepts-datasources/format-nested-ordered-list.png)|
+
+*QnA Maker doesn't process the image in any way. It is the client application's role to render the image.
+
+If you want to add content using the update/replace knowledge base APIs and the content/file contains HTML tags, you can preserve the HTML in your file by ensuring that the opening and closing tags are converted to the encoded format.
+
+| Preserve HTML | Representation in the API request | Representation in KB |
+|--|--|--|
+| Yes | \&lt;br\&gt; | &lt;br&gt; |
+| Yes | \&lt;h3\&gt;header\&lt;/h3\&gt; | &lt;h3&gt;header&lt;/h3&gt; |
+
+Additionally, CR LF (`\r\n`) is converted to `\n` in the KB. LF (`\n`) is kept as is. If you want to escape any escape sequence like a `\t` or `\n`, you can use a backslash, for example: '\\\\r\\\\n' and '\\\\t'.
+
+## Next steps
+
+Review batch testing [file formats](reference-tsv-format-batch-testing.md).
ai-services Reference Private Endpoint https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/qnamaker/reference-private-endpoint.md
+
+ Title: How to set up a Private Endpoint - QnA Maker
+description: Understand Private Endpoint creation available in QnA Maker managed.
++++++ Last updated : 01/12/2021++
+# Private Endpoints
+
+Azure Private Endpoint is a network interface that connects you privately and securely to a service powered by Azure Private Link. Custom question answering now supports creating private endpoints to the Azure Cognitive Search service.
+
+Private endpoints are provided by [Azure Private Link](../../private-link/private-link-overview.md), as a separate service. For more information about costs, see the [pricing page.](https://azure.microsoft.com/pricing/details/private-link/)
+
+## Prerequisites
+> [!div class="checklist"]
+> * If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/cognitive-services/) before you begin.
+> * A [Text Analytics resource](https://portal.azure.com/?quickstart=true#create/Microsoft.CognitiveServicesTextAnalytics) (with the Custom question answering feature) created in the Azure portal. Remember your Azure Active Directory ID, subscription, and the Text Analytics resource name you selected when you created the resource.
+
+## Steps to enable private endpoint
+1. Assign the *Contributor* role to the Text Analytics service in the Azure Search Service instance. This operation requires *Owner* access to the subscription. Go to the **Identity** tab in the service resource to get the identity.
+
+> [!div class="mx-imgBorder"]
+> ![Text Analytics Identity](../qnamaker/media/qnamaker-reference-private-endpoints/private-endpoints-identity.png)
+
+1. Add the above identity as *Contributor* by going to the Azure Search Service **Access control (IAM)** tab. A CLI sketch of this role assignment appears after these steps.
+
+![Managed service IAM](../qnamaker/media/qnamaker-reference-private-endpoints/private-endpoint-access-control.png)
+
+1. Select **Add role assignments**, add the identity and then select **Save**.
+
+![Managed role assignment](../qnamaker/media/qnamaker-reference-private-endpoints/private-endpoint-role-assignment.png)
+
+1. Now, go to the **Networking** tab in the Azure Search Service instance and switch Endpoint connectivity data from *Public* to *Private*. This operation is long-running and can take up to 30 minutes to complete.
+
+![Managed Azure search networking](../qnamaker/media/qnamaker-reference-private-endpoints/private-endpoint-networking.png)
+
+1. Go to the **Networking** tab of Text Analytics service and under **Allow access from**, select the **Selected Networks and private endpoints** option.
+1. Select **Save**
+
+> [!div class="mx-imgBorder"]
+> ![Text Analytics networking](../qnamaker/media/qnamaker-reference-private-endpoints/private-endpoint-networking-custom-qna.png)
+
+This will establish a private endpoint connection between the Text Analytics service and the Azure Cognitive Search service instance. You can verify the private endpoint connection on the **Networking** tab of the Azure Cognitive Search service instance. Once the whole operation is complete, you are ready to use your Text Analytics service.
+
+![Managed Networking Service](../qnamaker/media/qnamaker-reference-private-endpoints/private-endpoint-networking-3.png)
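+
+The role assignment in the first steps can also be scripted. The following Azure CLI sketch assumes you already know the Text Analytics resource's managed identity (principal) ID and the full resource ID of the Azure Cognitive Search instance; both values are placeholders here:
+
+```console
+# Grant the Text Analytics identity Contributor access on the search service.
+az role assignment create \
+  --assignee <text-analytics-principal-id> \
+  --role "Contributor" \
+  --scope <azure-cognitive-search-resource-id>
+```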
++
+## Support details
+ * We don't support changes to the Azure Cognitive Search service once you enable private access to your Text Analytics service. If you change the Azure Cognitive Search service via the 'Features' tab after you have enabled private access, the Text Analytics service will become unusable.
+ * After establishing the private endpoint connection, if you switch the Azure Cognitive Search service networking to 'Public', you won't be able to use the Text Analytics service. Azure Search service networking needs to be 'Private' for the private endpoint connection to work.
ai-services Reference Tsv Format Batch Testing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/qnamaker/reference-tsv-format-batch-testing.md
+
+ Title: Batch test TSV format - QnA Maker
+
+description: Understand the TSV format for batch testing
+++++++ Last updated : 10/24/2019++
+# Batch testing TSV format
+
+Batch testing is available from [source code](https://github.com/Azure-Samples/cognitive-services-qnamaker-csharp/tree/master/documentation-samples/batchtesting) or as a [downloadable zipped executable](https://aka.ms/qna_btzip). The format of the command to run the batch test is:
+
+```console
+batchtesting.exe input.tsv https://YOUR-HOST.azurewebsites.net ENDPOINT-KEY out.tsv
+```
+
+|Param|Expected Value|
+|--|--|
+|1|name of tsv file formatted with [TSV input fields](#tsv-input-fields)|
+|2|URI for endpoint, with YOUR-HOST from the Publish page of the QnA Maker portal.|
+|3|ENDPOINT-KEY, found on Publish page of the QnA Maker portal.|
+|4|name of tsv file created by batch test for results.|
+
+Use the following information to understand and implement the TSV format for batch testing.
+
+## TSV input fields
+
+|TSV input file fields|Notes|
+|--|--|
+|KBID|Your KB ID found on the Publish page.|
+|Question|The question a user would enter.|
+|Metadata tags|optional|
+|Top parameter|optional|
+|Expected answer ID|optional|
+
+![Input format for TSV file for batch testing.](media/batch-test/input-tsv-format-batch-test.png)
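+
+For reference, a minimal `input.tsv` sketch is shown below. It assumes the columns appear in the order listed in the preceding table, separated by tab characters (written here as the literal string `<TAB>`); the metadata, top, and expected answer ID columns are optional:
+
+```text
+<your-kb-id><TAB>How do I reset my password?
+<your-kb-id><TAB>What is the cheapest plan?<TAB>product:windows 10<TAB>3<TAB><expected-answer-id>
+```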
+
+## TSV output fields
+
+|TSV Output file parameters|Notes|
+|--|--|
+|KBID|Your KB ID found on the Publish page.|
+|Question|The question as entered from the input file.|
+|Answer|Top answer from your knowledge base.|
+|Answer ID|Answer ID|
+|Score|Prediction score for answer. |
+|Metadata tags|associated with returned answer|
+|Expected answer ID|optional (only when expected answer ID is given)|
+|Judgment label|optional, values could be: correct or incorrect (only when expected answer is given)|
ai-services Troubleshooting https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/qnamaker/troubleshooting.md
+
+ Title: Troubleshooting - QnA Maker
+description: The curated list of the most frequently asked questions regarding the QnA Maker service will help you adopt the service faster and with better results.
++++++ Last updated : 11/02/2021++
+# Troubleshooting for QnA Maker
+
+The curated list of the most frequently asked questions regarding the QnA Maker service will help you adopt the service faster and with better results.
++
+<a name="how-to-get-the-qnamaker-service-hostname"></a>
+
+## Manage predictions
+
+<details>
+<summary><b>How can I improve the throughput performance for query predictions?</b></summary>
+
+**Answer**:
+Throughput performance issues indicate that you need to scale up both your App Service and your Cognitive Search resource. Consider adding a replica to Cognitive Search to improve performance.
+
+Learn more about [pricing tiers](Concepts/azure-resources.md).
+</details>
+
+<details>
+<summary><b>How to get the QnAMaker service endpoint</b></summary>
+
+**Answer**:
+The QnA Maker service endpoint is useful for debugging purposes when you contact QnA Maker Support or UserVoice. The endpoint is a URL in this form: `https://your-resource-name.azurewebsites.net`.
+
+1. Go to your QnAMaker service (resource group) in the [Azure portal](https://portal.azure.com)
+
+ ![QnAMaker Azure resource group in Azure portal](./media/qnamaker-how-to-troubleshoot/qnamaker-azure-resourcegroup.png)
+
+1. Select the App Service associated with the QnA Maker resource. Typically, the names are the same.
+
+ ![Select QnAMaker App Service](./media/qnamaker-how-to-troubleshoot/qnamaker-azure-appservice.png)
+
+1. The endpoint URL is available in the **Overview** section.
+
+ ![QnAMaker endpoint](./media/qnamaker-how-to-troubleshoot/qnamaker-azure-gethostname.png)
+
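+Once you have the endpoint URL, a prediction call against it generally looks like the following sketch. The knowledge base ID and endpoint key come from the **Publish** page of the QnA Maker portal, and the question text is only an example:
+
+```console
+# Query the published knowledge base through the generateAnswer endpoint.
+curl -X POST "https://your-resource-name.azurewebsites.net/qnamaker/knowledgebases/<your-kb-id>/generateAnswer" \
+  -H "Authorization: EndpointKey <your-endpoint-key>" \
+  -H "Content-Type: application/json" \
+  -d '{"question": "How do I reset my password?"}'
+```
+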
+</details>
+
+## Manage the knowledge base
+
+<details>
+<summary><b>I accidentally deleted a part of my QnA Maker, what should I do?</b></summary>
+
+**Answer**:
+Do not delete any of the Azure services created along with the QnA Maker resource, such as Search or Web App. These are necessary for QnA Maker to work; if you delete one, QnA Maker will stop working correctly.
+
+All deletes are permanent, including question and answer pairs, files, URLs, custom questions and answers, knowledge bases, or Azure resources. Make sure you export your knowledge base from the **Settings** page before deleting any part of your knowledge base.
+
+</details>
+
+<details>
+<summary><b>Why is my URL(s)/file(s) not extracting question-answer pairs?</b></summary>
+
+**Answer**:
+It's possible that QnA Maker can't auto-extract some question-and-answer (QnA) content from valid FAQ URLs. In such cases, you can paste the QnA content in a .txt file and see if the tool can ingest it. Alternatively, you can editorially add content to your knowledge base through the [QnA Maker portal](https://qnamaker.ai).
+
+</details>
+
+<details>
+<summary><b>How large a knowledge base can I create?</b></summary>
+
+**Answer**:
+The size of the knowledge base depends on the SKU of Azure search you choose when creating the QnA Maker service. Read [here](./concepts/azure-resources.md) for more details.
+
+</details>
+
+<details>
+<summary><b>Why can't I see anything in the drop-down when I try to create a new knowledge base?</b></summary>
+
+**Answer**:
+You haven't created any QnA Maker services in Azure yet. Read [here](./how-to/set-up-qnamaker-service-azure.md) to learn how to do that.
+
+</details>
+
+<details>
+<summary><b>How do I share a knowledge base with others?</b></summary>
+
+**Answer**:
+Sharing works at the level of a QnA Maker service; that is, all knowledge bases in the service are shared. Read [here](./index.yml) about how to collaborate on a knowledge base.
+
+</details>
+
+<details>
+<summary><b>Can you share a knowledge base with a contributor that is not in the same AAD tenant, to modify a knowledge base?</b></summary>
+
+**Answer**:
+Sharing is based on Azure role-based access control. If you can share _any_ resource in Azure with another user, you can also share QnA Maker.
+
+</details>
+
+<details>
+<summary><b>If you have an App Service plan with 5 QnA Maker knowledge bases, can you assign read/write rights to 5 different users so each of them can access only 1 knowledge base?</b></summary>
+
+**Answer**:
+You can share an entire QnAMaker service, not individual knowledge bases.
+
+</details>
+
+<details>
+<summary><b>How can I change the default message when no good match is found?</b></summary>
+
+**Answer**:
+The default message is part of the settings in your App service.
+- Go to your App service resource in the Azure portal
+
+![qnamaker appservice](./media/qnamaker-faq/qnamaker-resource-list-appservice.png)
+- Select the **Settings** option
+
+![qnamaker appservice settings](./media/qnamaker-faq/qnamaker-appservice-settings.png)
+- Change the value of the **DefaultAnswer** setting
+- Restart your App service
+
+![qnamaker appservice restart](./media/qnamaker-faq/qnamaker-appservice-restart.png)
+
+</details>
+
+<details>
+<summary><b>Why is my SharePoint link not getting extracted?</b></summary>
+
+**Answer**:
+See [Data source locations](./concepts/data-sources-and-content.md#data-source-locations) for more information.
+
+</details>
+
+<details>
+<summary><b>The updates that I made to my knowledge base are not reflected on publish. Why not?</b></summary>
+
+**Answer**:
+Every edit operation, whether in a table update, test, or setting, needs to be saved before it can be published. Be sure to select the **Save and train** button after every edit operation.
+
+</details>
+
+<details>
+<summary><b>Does the knowledge base support rich data or multimedia?</b></summary>
+
+**Answer**:
+
+#### Multimedia auto-extraction for files and URLs
+
+* URLs - limited HTML-to-Markdown conversion capability.
+* Files - not supported
+
+#### Answer text in markdown
+Once QnA pairs are in the knowledge base, you can edit an answer's markdown text to include links to media available from public URLs.
++
+</details>
+
+<details>
+<summary><b>Does QnA Maker support non-English languages?</b></summary>
+
+**Answer**:
+See more details about [supported languages](./overview/language-support.md).
+
+If you have content from multiple languages, be sure to create a separate service for each language.
+
+</details>
+
+## Manage service
+
+<details>
+<summary><b>When should I restart my app service?</b></summary>
+
+**Answer**:
+Restart your app service when the caution icon is next to the version value for the knowledge base in the **Endpoint keys** table on the **User Settings** [page](https://www.qnamaker.ai/UserSettings).
+
+</details>
+
+<details>
+<summary><b>I deleted my existing Search service. How can I fix this?</b></summary>
+
+**Answer**:
+If you delete an Azure Cognitive Search index, the operation is final and the index cannot be recovered.
+
+</details>
+
+<details>
+<summary><b>I deleted my `testkb` index in my Search service. How can I fix this?</b></summary>
+
+**Answer**:
+If you deleted the `testkb` index in your Search service, you can restore the data from the last published KB by using the recovery tool [RestoreTestKBIndex](https://github.com/pchoudhari/QnAMakerBackupRestore/tree/master/RestoreTestKBFromProd) available on GitHub.
+
+</details>
+
+<details>
+<summary><b>I am receiving the following error: Please check if QnA Maker App service's CORS settings allow https://www.qnamaker.ai or if there are any organization specific network restrictions. How can I resolve this?</b></summary>
+
+**Answer**:
+In the API section of the App service pane, update the CORS setting to * or "https://www.qnamaker.ai". If this doesn't resolve the issue, check for any organization-specific restrictions.
+
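+If you prefer the Azure CLI over the portal, a sketch of the same change (with placeholder resource names) is:
+
+```console
+# Allow the QnA Maker portal origin on the App Service.
+az webapp cors add --resource-group <your-resource-group> --name <your-qnamaker-app-service> --allowed-origins "https://www.qnamaker.ai"
+
+# Or allow all origins (less restrictive).
+az webapp cors add --resource-group <your-resource-group> --name <your-qnamaker-app-service> --allowed-origins "*"
+```
+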
+</details>
+
+<details>
+<summary><b>When should I refresh my endpoint keys?</b></summary>
+
+**Answer**:
+Refresh your endpoint keys if you suspect that they have been compromised.
+
+</details>
+
+<details>
+<summary><b>Can I use the same Azure Cognitive Search resource for knowledge bases using multiple languages?</b></summary>
+
+**Answer**:
+To use multiple languages and multiple knowledge bases, you have to create a QnA Maker resource for each language. This creates a separate Azure search service per language. Mixing knowledge bases in different languages in a single Azure search service results in degraded relevance of results.
+
+</details>
+
+<details>
+<summary><b>How can I change the name of the Azure Cognitive Search resource used by QnA Maker?</b></summary>
+
+**Answer**:
+The name of the Azure Cognitive Search resource is the QnA Maker resource name with some random letters appended at the end. This makes it hard to distinguish between multiple Search resources for QnA Maker. Create a separate search service (naming it the way you would like to) and connect it to your QnA Maker service. The steps are similar to the steps you need to follow to [upgrade an Azure search](How-To/set-up-qnamaker-service-azure.md#upgrade-the-azure-cognitive-search-service).
+
+</details>
+
+<details>
+<summary><b>When QnA Maker returns `Runtime core is not initialized,` how do I fix it?</b></summary>
+
+**Answer**:
+The disk space for your app service might be full. Steps to fix your disk space:
+
+1. In the [Azure portal](https://portal.azure.com), select your QnA Maker's App service, then stop the service.
+1. While still on the App service, select **Development Tools**, then **Advanced Tools**, then **Go**. This opens a new browser window.
+1. Select **Debug console**, then **CMD** to open a command-line tool.
+1. Navigate to the _site/wwwroot/Data/QnAMaker/_ directory.
+1. Remove all the folders whose name begins with `rd`.
+
+ **Do not delete** the following:
+
+ * KbIdToRankerMappings.txt file
+ * EndpointSettings.json file
+ * EndpointKeys folder
+
+1. Start the App service.
+1. Access your knowledge base to verify it works now.
+
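+If you prefer to run the cleanup from the Kudu **CMD** console instead of clicking through folders, the following is a rough sketch. It assumes the default `D:\home` layout of a Windows App Service; confirm the folder names before deleting anything, and keep the files and folder listed above.
+
+```console
+cd D:\home\site\wwwroot\Data\QnAMaker
+
+:: Remove only the folders whose names begin with "rd"; files in this directory are left untouched.
+for /d %d in (rd*) do rmdir /s /q "%d"
+```
+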
+</details>
+<details>
+<summary><b>Why is my Application Insights not working?</b></summary>
+
+**Answer**:
+Cross-check and update the following settings to fix the issue:
+
+1. In App Service -> Settings group -> Configuration section -> Application settings, make sure the "UserAppInsightsKey" parameter is configured properly and set to the GUID shown as the "Instrumentation Key" on the Overview tab of the respective Application Insights resource.
+
+1. In App Service -> Settings group -> "Application Insights" section -> Make sure app insights is enabled and connected to respective application insights resource.
+
+</details>
+
+<details>
+<summary><b>My Application Insights is enabled but why is it not working properly?</b></summary>
+
+**Answer**:
+Follow these steps:
+
+1. Copy the value of the "APPINSIGHTS_INSTRUMENTATIONKEY" setting into the "UserAppInsightsKey" setting, overriding any value that is already present there.
+
+1. If the 'UserAppInsightsKey' key does not exist in the app settings, add a new key with that name and copy the value into it.
+
+1. Save the settings. This will automatically restart the app service and should resolve the issue.
+
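+A rough Azure CLI equivalent of these steps, with placeholder resource names, might look like this:
+
+```console
+# Read the current instrumentation key.
+az webapp config appsettings list --resource-group <your-resource-group> --name <your-qnamaker-app-service> \
+  --query "[?name=='APPINSIGHTS_INSTRUMENTATIONKEY'].value" --output tsv
+
+# Copy that value into UserAppInsightsKey (changing app settings also restarts the app).
+az webapp config appsettings set --resource-group <your-resource-group> --name <your-qnamaker-app-service> \
+  --settings UserAppInsightsKey=<value-from-previous-command>
+```
+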
+</details>
+
+## Integrate with other services including Bots
+
+<details>
+<summary><b>Do I need to use Bot Framework in order to use QnA Maker?</b></summary>
+
+**Answer**:
+No, you do not need to use the [Bot Framework](https://github.com/Microsoft/botbuilder-dotnet) with QnA Maker. However, QnA Maker is offered as one of several templates in [Azure AI Bot Service](/azure/bot-service/). Bot Service enables rapid intelligent bot development through Microsoft Bot Framework, and it runs in a serverless environment.
+
+</details>
+
+<details>
+<summary><b>How can I create a new bot with QnA Maker?</b></summary>
+
+**Answer**:
+Follow the instructions in [this](./quickstarts/create-publish-knowledge-base.md) documentation to create your Bot with Azure AI Bot Service.
+
+</details>
+
+<details>
+<summary><b>How do I use a different knowledge base with an existing Azure AI Bot Service?</b></summary>
+
+**Answer**:
+You need to have the following information about your knowledge base:
+
+* Knowledge base ID.
+* Knowledge base's published endpoint custom subdomain name, known as `host`, found on **Settings** page after you publish.
+* Knowledge base's published endpoint key - found on **Settings** page after you publish.
+
+With this information, go to your bot's app service in the Azure portal. Under **Settings -> Configuration -> Application settings**, change those values.
+
+The knowledge base's endpoint key is labeled `QnAAuthkey` in the ABS service.
+
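+A sketch of updating those values from the Azure CLI follows. Only `QnAAuthkey` is named in this article; the knowledge base ID and host setting names (`QnAKnowledgebaseId`, `QnAEndpointHostName`) are assumptions based on the common bot template and may differ in your bot, so verify them under **Application settings** first.
+
+```console
+# Point the bot's app service at a different published knowledge base.
+az webapp config appsettings set \
+  --resource-group <your-resource-group> \
+  --name <your-bot-app-service> \
+  --settings QnAAuthkey=<endpoint-key> \
+             QnAKnowledgebaseId=<knowledge-base-id> \
+             QnAEndpointHostName=https://<host>.azurewebsites.net/qnamaker
+```
+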
+</details>
+
+<details>
+<summary><b>Can two or more client applications share a knowledge base?</b></summary>
+
+**Answer**:
+Yes, the knowledge base can be queried from any number of clients. If the response from the knowledge base appears to be slow or times out, consider upgrading the service tier for the app service associated with the knowledge base.
+
+</details>
+
+<details>
+<summary><b>How do I embed the QnA Maker service in my website?</b></summary>
+
+**Answer**:
+Follow these steps to embed the QnA Maker service as a web-chat control in your website:
+
+1. Create your FAQ bot by following the instructions [here](./quickstarts/create-publish-knowledge-base.md).
+2. Enable the web chat by following the steps [here](/azure/bot-service/bot-service-channel-connect-webchat)
+
+</details>
+
+## Data storage
+
+<details>
+<summary><b>What data is stored and where is it stored?</b></summary>
+
+**Answer**:
+
+When you created your QnA Maker service, you selected an Azure region. Your knowledge bases and log files are stored in this region.
+
+</details>
ai-services Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/qnamaker/whats-new.md
+
+ Title: What's new in QnA Maker service?
+
+description: This article contains news about QnA Maker.
+++++++ Last updated : 07/16/2020+++
+# What's new in QnA Maker
+
+Learn what's new in the service. These items may be release notes, videos, blog posts, and other types of information. Bookmark this page to keep up to date with the service.
++
+## Release notes
+
+Learn what's new with QnA Maker.
+
+### November 2021
+* [Question answering](../language-service/question-answering/overview.md) is now [generally available](https://techcommunity.microsoft.com/t5/ai-cognitive-services-blog/question-answering-feature-is-generally-available/ba-p/2899497) as a feature within [Azure AI Language](https://portal.azure.com/?quickstart=true#create/Microsoft.CognitiveServicesTextAnalytics).
+* Question answering is powered by **state-of-the-art transformer models** and [Turing](https://turing.microsoft.com/) Natural Language models.
+* The erstwhile QnA Maker product will be [retired](https://azure.microsoft.com/updates/azure-qna-maker-will-be-retired-on-31-march-2025/) on 31st March 2025, and no new QnA Maker resources will be created beginning 1st October 2022.
+* All existing QnA Maker customers are strongly advised to [migrate](../language-service/question-answering/how-to/migrate-qnamaker.md) their QnA Maker knowledge bases to Question answering as soon as possible to continue experiencing the best of QnA capabilities offered by Azure.
+
+### May 2021
+
+* QnA Maker managed has been re-introduced as Custom question answering feature in [Language resource](https://portal.azure.com/?quickstart=true#create/Microsoft.CognitiveServicesTextAnalytics).
+* Custom question answering supports unstructured documents.
+* [Prebuilt API](how-to/using-prebuilt-api.md) has been introduced to generate answers for user queries from document text passed via the API.
+
+### November 2020
+
+* New version of QnA Maker launched in free Public Preview. Read more [here](https://techcommunity.microsoft.com/t5/azure-ai/introducing-qna-maker-managed-now-in-public-preview/ba-p/1845575).
+
+> [!VIDEO https://learn.microsoft.com/Shows/AI-Show/Introducing-QnA-managed-Now-in-Public-Preview/player]
+* Simplified resource creation
+* End to End region support
+* Deep learnt ranking model
+* Machine Reading Comprehension for precise answers
+
+### July 2020
+
+* [Metadata: `OR` logical combination of multiple metadata pairs](how-to/query-knowledge-base-with-metadata.md#logical-or-using-strictfilterscompoundoperationtype-property)
+* [Steps](how-to/network-isolation.md) to configure Cognitive Search endpoints to be private, but still accessible to QnA Maker.
+* Free Cognitive Search resources are removed after [90 days of inactivity](how-to/set-up-qnamaker-service-azure.md#inactivity-policy-for-free-search-resources).
+
+### June 2020
+
+* Updated [Power Virtual Agent](tutorials/integrate-with-power-virtual-assistant-fallback-topic.md) tutorial for faster and easier steps
+
+### May 2020
+
+* [Azure role-based access control (Azure RBAC)](concepts/role-based-access-control.md)
+* [Rich-text editing](how-to/edit-knowledge-base.md#rich-text-editing-for-answer) for answers
+
+### March 2020
+
+* TLS 1.2 is now enforced for all HTTP requests to this service. For more information, see Azure AI services [security](../security-features.md).
+
+### February 2020
+
+* [NPM package](https://www.npmjs.com/package/@azure/cognitiveservices-qnamaker) with GenerateAnswer API
+
+### November 2019
+
+* [US Government cloud support](../../azure-government/compare-azure-government-global-azure.md#guidance-for-developers) for QnA Maker
+* [Multi-turn](./how-to/multi-turn.md) feature in GA
+* [Chit-chat support](./how-to/chit-chat-knowledge-base.md#language-support) available in tier-1 languages
+
+### October 2019
+
+* Explicitly setting the language for all knowledge bases in the QnA Maker service.
+
+### September 2019
+
+* Import and export with XLS file format.
+
+### June 2019
+
+* Improved [ranker model](concepts/query-knowledge-base.md#ranker-process) for French, Italian, German, Spanish, Portuguese
+
+### April 2019
+
+* Support website content extraction
+* [SharePoint document](how-to/add-sharepoint-datasources.md) support from authenticated access
+
+### March 2019
+
+* [Active learning](how-to/improve-knowledge-base.md) provides suggestions for new question alternatives based on real user questions
+* Improved natural language processing (NLP) [ranker](concepts/query-knowledge-base.md#ranker-process) model for English
+
+> [!div class="nextstepaction"]
+> [Create a QnA Maker service](how-to/set-up-qnamaker-service-azure.md)
+
+## Azure AI services updates
+
+[Azure update announcements for Azure AI services](https://azure.microsoft.com/updates/?product=cognitive-services)
ai-services Responsible Use Of Ai Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/responsible-use-of-ai-overview.md
+
+ Title: Overview of Responsible use of AI
+
+description: Azure AI services provides information and guidelines on how to responsibly use our AI services in applications. Below are the links to articles that provide this guidance for the different services within the Azure AI services suite.
+++++ Last updated : 1/10/2022+++
+# Responsible use of AI with Azure AI services
+
+Azure AI services provides information and guidelines on how to responsibly use artificial intelligence in applications. Below are the links to articles that provide this guidance for the different services within the Azure AI services suite.
+
+## Anomaly Detector
+
+* [Transparency note and use cases](/legal/cognitive-services/anomaly-detector/ad-transparency-note?context=/azure/ai-services/anomaly-detector/context/context)
+* [Data, privacy, and security](/legal/cognitive-services/anomaly-detector/data-privacy-security?context=/azure/ai-services/anomaly-detector/context/context)
+
+## Azure AI Vision - OCR
+
+* [Transparency note and use cases](/legal/cognitive-services/computer-vision/ocr-transparency-note?context=/azure/ai-services/computer-vision/context/context)
+* [Characteristics and limitations](/legal/cognitive-services/computer-vision/ocr-characteristics-and-limitations?context=/azure/ai-services/computer-vision/context/context)
+* [Integration and responsible use](/legal/cognitive-services/computer-vision/ocr-guidance-integration-responsible-use?context=/azure/ai-services/computer-vision/context/context)
+* [Data, privacy, and security](/legal/cognitive-services/computer-vision/ocr-data-privacy-security?context=/azure/ai-services/computer-vision/context/context)
+
+## Azure AI Vision - Image Analysis
+
+* [Transparency note](/legal/cognitive-services/computer-vision/imageanalysis-transparency-note?context=/azure/ai-services/computer-vision/context/context)
+* [Characteristics and limitations](/legal/cognitive-services/computer-vision/imageanalysis-characteristics-and-limitations?context=/azure/ai-services/computer-vision/context/context)
+* [Integration and responsible use](/legal/cognitive-services/computer-vision/imageanalysis-guidance-for-integration?context=/azure/ai-services/computer-vision/context/context)
+* [Data, privacy, and security](/legal/cognitive-services/computer-vision/imageanalysis-data-privacy-security?context=/azure/ai-services/computer-vision/context/context)
+* [Limited Access features](/legal/cognitive-services/computer-vision/limited-access?context=/azure/ai-services/computer-vision/context/context)
+
+## Azure AI Vision - Face
+
+* [Transparency note and use cases](/legal/cognitive-services/face/transparency-note?context=/azure/ai-services/computer-vision/context/context)
+* [Characteristics and limitations](/legal/cognitive-services/face/characteristics-and-limitations?context=/azure/ai-services/computer-vision/context/context)
+* [Integration and responsible use](/legal/cognitive-services/face/guidance-integration-responsible-use?context=/azure/ai-services/computer-vision/context/context)
+* [Data privacy and security](/legal/cognitive-services/face/data-privacy-security?context=/azure/ai-services/computer-vision/context/context)
+* [Limited Access features](/legal/cognitive-services/computer-vision/limited-access-identity?context=/azure/ai-services/computer-vision/context/context)
+
+## Azure AI Vision - Spatial Analysis
+
+* [Transparency note and use cases](/legal/cognitive-services/computer-vision/transparency-note-spatial-analysis?context=/azure/ai-services/computer-vision/context/context)
+* [Characteristics and limitations](/legal/cognitive-services/computer-vision/accuracy-and-limitations?context=/azure/ai-services/computer-vision/context/context)
+* [Responsible use in AI deployment](/legal/cognitive-services/computer-vision/responsible-use-deployment?context=/azure/ai-services/computer-vision/context/context)
+* [Disclosure design guidelines](/legal/cognitive-services/computer-vision/disclosure-design?context=/azure/ai-services/computer-vision/context/context)
+* [Research insights](/legal/cognitive-services/computer-vision/research-insights?context=/azure/ai-services/computer-vision/context/context)
+* [Data, privacy, and security](/legal/cognitive-services/computer-vision/compliance-privacy-security-2?context=/azure/ai-services/computer-vision/context/context)
+
+## Custom Vision
+
+* [Transparency note and use cases](/legal/cognitive-services/custom-vision/custom-vision-cvs-transparency-note?context=/azure/ai-services/custom-vision-service/context/context)
+* [Characteristics and limitations](/legal/cognitive-services/custom-vision/custom-vision-cvs-characteristics-and-limitations?context=/azure/ai-services/custom-vision-service/context/context)
+* [Integration and responsible use](/legal/cognitive-services/custom-vision/custom-vision-cvs-guidance-integration-responsible-use?context=/azure/ai-services/custom-vision-service/context/context)
+* [Data, privacy, and security](/legal/cognitive-services/custom-vision/custom-vision-cvs-data-privacy-security?context=/azure/ai-services/custom-vision-service/context/context)
+
+## Language service
+
+* [Transparency note](/legal/cognitive-services/language-service/transparency-note?context=/azure/ai-services/language-service/context/context)
+* [Integration and responsible use](/legal/cognitive-services/language-service/guidance-integration-responsible-use?context=/azure/ai-services/language-service/context/context)
+* [Data, privacy, and security](/legal/cognitive-services/language-service/data-privacy?context=/azure/ai-services/language-service/context/context)
+
+## Language - Custom text classification
+
+* [Transparency note](/legal/cognitive-services/language-service/ctc-transparency-note?context=/azure/ai-services/language-service/context/context)
+* [Integration and responsible use](/legal/cognitive-services/language-service/ctc-guidance-integration-responsible-use?context=/azure/ai-services/language-service/context/context)
+* [Characteristics and limitations](/legal/cognitive-services/language-service/ctc-characteristics-and-limitations?context=/azure/ai-services/language-service/context/context)
+* [Data, privacy, and security](/legal/cognitive-services/language-service/ctc-data-privacy-security?context=/azure/ai-services/language-service/context/context)
+
+## Language - Named entity recognition
+
+* [Transparency note](/legal/cognitive-services/language-service/transparency-note-named-entity-recognition?context=/azure/ai-services/language-service/context/context)
+* [Integration and responsible use](/legal/cognitive-services/language-service/guidance-integration-responsible-use?context=/azure/ai-services/language-service/context/context)
+* [Data, privacy, and security](/legal/cognitive-services/language-service/data-privacy?context=/azure/ai-services/language-service/context/context)
+
+## Language - Custom named entity recognition
+
+* [Transparency note](/legal/cognitive-services/language-service/cner-transparency-note?context=/azure/ai-services/language-service/context/context)
+* [Integration and responsible use](/legal/cognitive-services/language-service/cner-guidance-integration-responsible-use?context=/azure/ai-services/language-service/context/context)
+* [Characteristics and limitations](/legal/cognitive-services/language-service/cner-characteristics-and-limitations?context=/azure/ai-services/language-service/context/context)
+* [Data, privacy, and security](/legal/cognitive-services/language-service/cner-data-privacy-security?context=/azure/ai-services/language-service/context/context)
+
+## Language - Entity linking
+
+* [Integration and responsible use](/legal/cognitive-services/language-service/guidance-integration-responsible-use?context=/azure/ai-services/language-service/context/context)
+* [Data, privacy, and security](/legal/cognitive-services/language-service/data-privacy?context=/azure/ai-services/language-service/context/context)
+
+## Language - Language detection
+
+* [Transparency note](/legal/cognitive-services/language-service/transparency-note-language-detection?context=/azure/ai-services/language-service/context/context)
+* [Integration and responsible use](/legal/cognitive-services/language-service/guidance-integration-responsible-use?context=/azure/ai-services/language-service/context/context)
+* [Data, privacy, and security](/legal/cognitive-services/language-service/data-privacy?context=/azure/ai-services/language-service/context/context)
+
+## Language - Key phrase extraction
+
+* [Transparency note](/legal/cognitive-services/language-service/transparency-note-key-phrase-extraction?context=/azure/ai-services/language-service/context/context)
+* [Integration and responsible use](/legal/cognitive-services/language-service/guidance-integration-responsible-use?context=/azure/ai-services/language-service/context/context)
+* [Data, privacy, and security](/legal/cognitive-services/language-service/data-privacy?context=/azure/ai-services/language-service/context/context)
+
+## Language - Personally identifiable information detection
+
+* [Transparency note](/legal/cognitive-services/language-service/transparency-note-personally-identifiable-information?context=/azure/ai-services/language-service/context/context)
+* [Integration and responsible use](/legal/cognitive-services/language-service/guidance-integration-responsible-use?context=/azure/ai-services/language-service/context/context)
+* [Data, privacy, and security](/legal/cognitive-services/language-service/data-privacy?context=/azure/ai-services/language-service/context/context)
+
+## Language - Question Answering
+
+* [Transparency note](/legal/cognitive-services/language-service/transparency-note-question-answering?context=/azure/ai-services/language-service/context/context)
+* [Integration and responsible use](/legal/cognitive-services/language-service/guidance-integration-responsible-use-question-answering?context=/azure/ai-services/language-service/context/context)
+* [Data, privacy, and security](/legal/cognitive-services/language-service/data-privacy-security-question-answering?context=/azure/ai-services/language-service/context/context)
+
+## Language - Sentiment Analysis and opinion mining
+
+* [Transparency note](/legal/cognitive-services/language-service/transparency-note-sentiment-analysis?context=/azure/ai-services/language-service/context/context)
+* [Integration and responsible use](/legal/cognitive-services/language-service/guidance-integration-responsible-use?context=/azure/ai-services/language-service/context/context)
+* [Data, privacy, and security](/legal/cognitive-services/language-service/data-privacy?context=/azure/ai-services/language-service/context/context)
+
+## Language - Text Analytics for health
+
+* [Transparency note](/legal/cognitive-services/language-service/transparency-note-health?context=/azure/ai-services/language-service/context/context)
+* [Integration and responsible use](/legal/cognitive-services/language-service/guidance-integration-responsible-use?context=/azure/ai-services/language-service/context/context)
+* [Data, privacy, and security](/legal/cognitive-services/language-service/data-privacy?context=/azure/ai-services/language-service/context/context)
+
+## Language - Summarization
+
+* [Transparency note](/legal/cognitive-services/language-service/transparency-note-extractive-summarization?context=/azure/ai-services/language-service/context/context)
+* [Integration and responsible use](/legal/cognitive-services/language-service/guidance-integration-responsible-use-summarization?context=/azure/ai-services/language-service/context/context)
+* [Characteristics and limitations](/legal/cognitive-services/language-service/characteristics-and-limitations-summarization?context=/azure/ai-services/language-service/context/context)
+* [Data, privacy, and security](/legal/cognitive-services/language-service/data-privacy?context=/azure/ai-services/language-service/context/context)
+
+## Language Understanding
+
+* [Transparency note and use cases](/legal/cognitive-services/luis/luis-transparency-note?context=/azure/ai-services/LUIS/context/context)
+* [Characteristics and limitations](/legal/cognitive-services/luis/characteristics-and-limitations?context=/azure/ai-services/LUIS/context/context)
+* [Integration and responsible use](/legal/cognitive-services/luis/guidance-integration-responsible-use?context=/azure/ai-services/LUIS/context/context)
+* [Data, privacy, and security](/legal/cognitive-services/luis/data-privacy-security?context=/azure/ai-services/LUIS/context/context)
+
+## Azure OpenAI Service
+
+* [Transparency note](/legal/cognitive-services/openai/transparency-note?context=/azure/ai-services/openai/context/context)
+* [Limited access](/legal/cognitive-services/openai/limited-access?context=/azure/ai-services/openai/context/context)
+* [Code of conduct](/legal/cognitive-services/openai/code-of-conduct?context=/azure/ai-services/openai/context/context)
+* [Data, privacy, and security](/legal/cognitive-services/openai/data-privacy?context=/azure/ai-services/openai/context/context)
+
+## Personalizer
+
+* [Transparency note and use cases](./personalizer/responsible-use-cases.md)
+* [Characteristics and limitations](./personalizer/responsible-characteristics-and-limitations.md)
+* [Integration and responsible use](./personalizer/responsible-guidance-integration.md)
+* [Data and privacy](./personalizer/responsible-data-and-privacy.md)
+
+## QnA Maker
+
+* [Transparency note and use cases](/legal/cognitive-services/qnamaker/transparency-note-qnamaker?context=/azure/ai-services/qnamaker/context/context)
+* [Characteristics and limitations](/legal/cognitive-services/qnamaker/characteristics-and-limitations-qnamaker?context=/azure/ai-services/qnamaker/context/context)
+* [Integration and responsible use](/legal/cognitive-services/qnamaker/guidance-integration-responsible-use-qnamaker?context=/azure/ai-services/qnamaker/context/context)
+* [Data, privacy, and security](/legal/cognitive-services/qnamaker/data-privacy-security-qnamaker?context=/azure/ai-services/qnamaker/context/context)
+
+## Speech - Pronunciation Assessment
+
+* [Transparency note and use cases](/legal/cognitive-services/speech-service/pronunciation-assessment/transparency-note-pronunciation-assessment?context=/azure/ai-services/speech-service/context/context)
+* [Characteristics and limitations](/legal/cognitive-services/speech-service/pronunciation-assessment/characteristics-and-limitations-pronunciation-assessment?context=/azure/ai-services/speech-service/context/context)
+
+## Speech - Speaker Recognition
+
+* [Transparency note and use cases](/legal/cognitive-services/speech-service/speaker-recognition/transparency-note-speaker-recognition?context=/azure/ai-services/speech-service/context/context)
+* [Characteristics and limitations](/legal/cognitive-services/speech-service/speaker-recognition/characteristics-and-limitations-speaker-recognition?context=/azure/ai-services/speech-service/context/context)
+* [Limited access](/legal/cognitive-services/speech-service/speaker-recognition/limited-access-speaker-recognition?context=/azure/ai-services/speech-service/context/context)
+* [General guidelines](/legal/cognitive-services/speech-service/speaker-recognition/guidance-integration-responsible-use-speaker-recognition?context=/azure/ai-services/speech-service/context/context)
+* [Data, privacy, and security](/legal/cognitive-services/speech-service/speaker-recognition/data-privacy-speaker-recognition?context=/azure/ai-services/speech-service/context/context)
+
+## Speech - Custom Neural Voice
+
+* [Transparency note and use cases](/legal/cognitive-services/speech-service/custom-neural-voice/transparency-note-custom-neural-voice?context=/azure/ai-services/speech-service/context/context)
+* [Characteristics and limitations](/legal/cognitive-services/speech-service/custom-neural-voice/characteristics-and-limitations-custom-neural-voice?context=/azure/ai-services/speech-service/context/context)
+* [Limited access](/legal/cognitive-services/speech-service/custom-neural-voice/limited-access-custom-neural-voice?context=/azure/ai-services/speech-service/context/context)
+* [Responsible deployment of synthetic speech](/legal/cognitive-services/speech-service/custom-neural-voice/concepts-guidelines-responsible-deployment-synthetic?context=/azure/ai-services/speech-service/context/context)
+* [Disclosure of voice talent](/legal/cognitive-services/speech-service/disclosure-voice-talent?context=/azure/ai-services/speech-service/context/context)
+* [Disclosure of design guidelines](/legal/cognitive-services/speech-service/custom-neural-voice/concepts-disclosure-guidelines?context=/azure/ai-services/speech-service/context/context)
+* [Disclosure of design patterns](/legal/cognitive-services/speech-service/custom-neural-voice/concepts-disclosure-patterns?context=/azure/ai-services/speech-service/context/context)
+* [Code of conduct](/legal/cognitive-services/speech-service/tts-code-of-conduct?context=/azure/ai-services/speech-service/context/context)
+* [Data, privacy, and security](/legal/cognitive-services/speech-service/custom-neural-voice/data-privacy-security-custom-neural-voice?context=/azure/ai-services/speech-service/context/context)
+
+## Speech - Speech to text
+
+* [Transparency note and use cases](/legal/cognitive-services/speech-service/speech-to-text/transparency-note?context=/azure/ai-services/speech-service/context/context)
+* [Characteristics and limitations](/legal/cognitive-services/speech-service/speech-to-text/characteristics-and-limitations?context=/azure/ai-services/speech-service/context/context)
+* [Integration and responsible use](/legal/cognitive-services/speech-service/speech-to-text/guidance-integration-responsible-use?context=/azure/ai-services/speech-service/context/context)
+* [Data, privacy, and security](/legal/cognitive-services/speech-service/speech-to-text/data-privacy-security?context=/azure/ai-services/speech-service/context/context)
ai-services Rotate Keys https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/rotate-keys.md
+
+ Title: Rotate keys in Azure AI services
+
+description: "Learn how to rotate API keys for better security, without interrupting service"
+Last updated : 11/08/2022
+# Rotate subscription keys in Azure AI services
+
+Each Azure AI services resource has two API keys to enable secret rotation. This is a security precaution that lets you regularly change the keys that can access your service, protecting the privacy of your resource if a key gets leaked.
+
+## How to rotate keys
+
+Keys can be rotated using the following procedure:
+
+1. If you're using both keys in production, change your code so that only one key is in use. In this guide, assume it's key 1.
+
+ This is a necessary step because once a key is regenerated, the older version of that key will stop working immediately. This would cause clients using the older key to get 401 access denied errors.
+1. Once only key 1 is in use, you can regenerate key 2. Go to your resource's page in the Azure portal, select the **Keys and Endpoint** tab, and select the **Regenerate Key 2** button at the top of the page.
+1. Next, update your code to use the newly generated key 2.
+
+ Before you proceed, it helps to have logs or monitoring in place so you can confirm that all users of the key have switched from key 1 to key 2.
+1. Now you can regenerate the key 1 using the same process.
+1. Finally, update your code to use the new key 1. (To make these code updates simple configuration changes, see the sketch after these steps.)
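+
+One way to keep the code-update steps above configuration-only is to read the active key from outside your code. The following is a minimal C# sketch of that idea; the environment variable name `AI_SERVICES_KEY` is an arbitrary example, not a name the service requires.
+
+```csharp
+using System;
+using System.Net.Http;
+
+// Minimal sketch: read the currently active key from an environment variable
+// (AI_SERVICES_KEY is an arbitrary example name). Switching from key 1 to key 2
+// then only requires updating the environment variable and restarting the app.
+string activeKey = Environment.GetEnvironmentVariable("AI_SERVICES_KEY")
+    ?? throw new InvalidOperationException("Set AI_SERVICES_KEY before starting the app.");
+
+// Azure AI services endpoints expect the key in the Ocp-Apim-Subscription-Key header.
+using var http = new HttpClient();
+http.DefaultRequestHeaders.Add("Ocp-Apim-Subscription-Key", activeKey);
+```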
+
+## See also
+
+* [What are Azure AI services?](./what-are-ai-services.md)
+* [Azure AI services security features](./security-features.md)
ai-services Security Controls Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/security-controls-policy.md
+
+ Title: Azure Policy Regulatory Compliance controls for Azure AI services
+description: Lists Azure Policy Regulatory Compliance controls available for Azure AI services. These built-in policy definitions provide common approaches to managing the compliance of your Azure resources.
Last updated : 07/06/2023
+# Azure Policy Regulatory Compliance controls for Azure AI services
+
+[Regulatory Compliance in Azure Policy](../governance/policy/concepts/regulatory-compliance.md)
+provides Microsoft created and managed initiative definitions, known as _built-ins_, for the
+**compliance domains** and **security controls** related to different compliance standards. This
+page lists the **compliance domains** and **security controls** for Azure AI services. You can
+assign the built-ins for a **security control** individually to help make your Azure resources
+compliant with the specific standard.
+++
+## Next steps
+
+- Learn more about [Azure Policy Regulatory Compliance](../governance/policy/concepts/regulatory-compliance.md).
+- See the built-ins on the [Azure Policy GitHub repo](https://github.com/Azure/azure-policy).
ai-services Security Features https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/security-features.md
+
+ Title: Azure AI services security
+
+description: Learn about the security considerations for Azure AI services usage.
+Last updated : 12/02/2022
+# Azure AI services security
+
+Security should be a top priority in the development of all applications, and with the growth of AI-enabled applications, it's even more important. This article outlines various security features available for Azure AI services. Each feature addresses a specific liability, so multiple features can be used in the same workflow.
+
+For a comprehensive list of Azure service security recommendations, see the [Azure AI services security baseline](/security/benchmark/azure/baselines/cognitive-services-security-baseline?toc=%2Fazure%2Fcognitive-services%2FTOC.json) article.
+
+## Security features
+
+|Feature | Description |
+|:---|:---|
+| [Transport Layer Security (TLS)](/dotnet/framework/network-programming/tls) | All of the Azure AI services endpoints exposed over HTTP enforce the TLS 1.2 protocol. With an enforced security protocol, consumers attempting to call an Azure AI services endpoint should follow these guidelines: </br>- The client operating system (OS) needs to support TLS 1.2.</br>- The language (and platform) used to make the HTTP call needs to specify TLS 1.2 as part of the request. Depending on the language and platform, specifying TLS is done either implicitly or explicitly.</br>- For .NET users, consider the [Transport Layer Security best practices](/dotnet/framework/network-programming/tls). A minimal sketch for legacy .NET Framework apps follows this table. |
+| [Authentication options](./authentication.md)| Authentication is the act of verifying a user's identity. Authorization, by contrast, is the specification of access rights and privileges to resources for a given identity. An identity is a collection of information about a <a href="https://en.wikipedia.org/wiki/Principal_(computer_security)" target="_blank">principal</a>, and a principal can be either an individual user or a service.</br></br>By default, you authenticate your own calls to Azure AI services using the subscription keys provided; this is the simplest method but not the most secure. The most secure authentication method is to use managed roles in Azure Active Directory. To learn about this and other authentication options, see [Authenticate requests to Azure AI services](./authentication.md). |
+| [Key rotation](./authentication.md)| Each Azure AI services resource has two API keys to enable secret rotation. This is a security precaution that lets you regularly change the keys that can access your service, protecting the privacy of your service in the event that a key gets leaked. To learn about this and other authentication options, see [Rotate keys](./rotate-keys.md). |
+| [Environment variables](cognitive-services-environment-variables.md) | Environment variables are name-value pairs that are stored within a specific development environment. You can store your credentials in this way as a more secure alternative to using hardcoded values in your code. However, if your environment is compromised, the environment variables are compromised as well, so this is not the most secure approach.</br></br> For instructions on how to use environment variables in your code, see the [Environment variables guide](cognitive-services-environment-variables.md). |
+| [Customer-managed keys (CMK)](./encryption/cognitive-services-encryption-keys-portal.md) | This feature is for services that store customer data at rest (longer than 48 hours). While this data is already double-encrypted on Azure servers, users can get extra security by adding another layer of encryption, with keys they manage themselves. You can link your service to Azure Key Vault and manage your data encryption keys there. </br></br>You need special approval to get the E0 SKU for your service, which enables CMK. Within 3-5 business days after you submit the [request form](https://aka.ms/cogsvc-cmk), you'll get an update on the status of your request. Depending on demand, you may be placed in a queue and approved as space becomes available. Once you're approved for using the E0 SKU, you'll need to create a new resource from the Azure portal and select E0 as the Pricing Tier. You won't be able to upgrade from F0 to the new E0 SKU. </br></br>Only some services can use CMK; look for your service on the [Customer-managed keys](./encryption/cognitive-services-encryption-keys-portal.md) page.|
+| [Virtual networks](./cognitive-services-virtual-networks.md) | Virtual networks allow you to specify which endpoints can make API calls to your resource. The Azure service will reject API calls from devices outside of your network. You can set a formula-based definition of the allowed network, or you can define an exhaustive list of endpoints to allow. This is another layer of security that can be used in combination with others. |
+| [Data loss prevention](./cognitive-services-data-loss-prevention.md) | The data loss prevention feature lets an administrator decide what types of URIs their Azure resource can take as inputs (for those API calls that take URIs as input). This can be done to prevent the possible exfiltration of sensitive company data: If a company stores sensitive information (such as a customer's private data) in URL parameters, a bad actor inside that company could submit the sensitive URLs to an Azure service, which surfaces that data outside the company. Data loss prevention lets you configure the service to reject certain URI forms on arrival.|
+| [Customer Lockbox](../security/fundamentals/customer-lockbox-overview.md) |The Customer Lockbox feature provides an interface for customers to review and approve or reject data access requests. It's used in cases where a Microsoft engineer needs to access customer data during a support request. For information on how Customer Lockbox requests are initiated, tracked, and stored for later reviews and audits, see the [Customer Lockbox guide](../security/fundamentals/customer-lockbox-overview.md).</br></br>Customer Lockbox is available for the following
+| [Bring your own storage (BYOS)](./speech-service/speech-encryption-of-data-at-rest.md)| The Speech service doesn't currently support Customer Lockbox. However, you can arrange for your service-specific data to be stored in your own storage resource using bring-your-own-storage (BYOS). BYOS allows you to achieve similar data controls to Customer Lockbox. Keep in mind that Speech service data stays and is processed in the Azure region where the Speech resource was created. This applies to any data at rest and data in transit. For customization features like Custom Speech and Custom Voice, all customer data is transferred, stored, and processed in the same region where the Speech service resource and BYOS resource (if used) reside. </br></br>To use BYOS with Speech, follow the [Speech encryption of data at rest](./speech-service/speech-encryption-of-data-at-rest.md) guide.</br></br> Microsoft does not use customer data to improve its Speech models. Additionally, if endpoint logging is disabled and no customizations are used, then no customer data is stored by Speech. |
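+
+As noted in the TLS row, a legacy .NET Framework app (earlier than 4.7) might need to opt in to TLS 1.2 explicitly before calling an Azure AI services endpoint. The following is only a minimal sketch of that opt-in; on .NET Framework 4.7+ and modern .NET, prefer the operating system defaults described in the linked best practices.
+
+```csharp
+using System.Net;
+
+// Minimal sketch for legacy .NET Framework apps only: allow TLS 1.2 in addition
+// to any protocols that are already enabled. Newer runtimes negotiate TLS 1.2
+// (or later) by default and generally shouldn't set this explicitly.
+ServicePointManager.SecurityProtocol |= SecurityProtocolType.Tls12;
+```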
+
+## Next steps
+
+* Explore [Azure AI services](./what-are-ai-services.md) and choose a service to get started.
ai-services Audio Processing Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/audio-processing-overview.md
+
+ Title: Audio processing - Speech service
+
+description: An overview of audio processing and capabilities of the Microsoft Audio Stack.
+Last updated : 09/07/2022
+# Audio processing
+
+The Microsoft Audio Stack is a set of enhancements optimized for speech processing scenarios such as keyword recognition and speech recognition. It consists of various enhancements (components) that operate on the input audio signal:
+
+* **Noise suppression** - Reduce the level of background noise.
+* **Beamforming** - Localize the origin of sound and optimize the audio signal using multiple microphones.
+* **Dereverberation** - Reduce the reflections of sound from surfaces in the environment.
+* **Acoustic echo cancellation** - Suppress audio being played out of the device while microphone input is active.
+* **Automatic gain control** - Dynamically adjust the person's voice level to account for soft speakers, long distances, or non-calibrated microphones.
+
+[ ![Block diagram of Microsoft Audio Stack's enhancements.](media/audio-processing/mas-block-diagram.png) ](media/audio-processing/mas-block-diagram.png#lightbox)
+
+Different scenarios and use-cases can require different optimizations that influence the behavior of the audio processing stack. For example, in telecommunications scenarios such as telephone calls, it is acceptable to have minor distortions in the audio signal after processing has been applied. This is because humans can continue to understand the speech with high accuracy. However, it is unacceptable and disruptive for a person to hear their own voice in an echo. This contrasts with speech processing scenarios, where distorted audio can adversely impact a machine-learned speech recognition model's accuracy, but it is acceptable to have minor levels of echo residual.
+
+Processing is performed fully locally where the Speech SDK is being used. No audio data is streamed to Microsoft's cloud services for processing by the Microsoft Audio Stack. The only exception to this is for the Conversation Transcription Service, where raw audio is sent to Microsoft's cloud services for processing.
+
+The Microsoft Audio Stack also powers a wide range of Microsoft products:
+* **Windows** - Microsoft Audio Stack is the default speech processing pipeline when using the Speech audio category.
+* **Microsoft Teams Displays and Microsoft Teams Rooms devices** - Microsoft Teams Displays and Teams Rooms devices use the Microsoft Audio Stack to enable high quality hands-free, voice-based experiences with Cortana.
+
+## Speech SDK integration
+
+The Speech SDK integrates Microsoft Audio Stack (MAS), allowing any application or product to use its audio processing capabilities on input audio. Some of the key Microsoft Audio Stack features available via the Speech SDK include:
+* **Real-time microphone input & file input** - Microsoft Audio Stack processing can be applied to real-time microphone input, streams, and file-based input.
+* **Selection of enhancements** - To allow for full control of your scenario, the SDK allows you to disable individual enhancements like dereverberation, noise suppression, automatic gain control, and acoustic echo cancellation. For example, if your scenario does not include rendering output audio that needs to be suppressed from the input audio, you have the option to disable acoustic echo cancellation.
+* **Custom microphone geometries** - The SDK allows you to provide your own custom microphone geometry information, in addition to supporting preset geometries like linear two-mic, linear four-mic, and circular 7-mic arrays (see more information on supported preset geometries at [Microphone array recommendations](speech-sdk-microphone.md#microphone-geometry)).
+* **Beamforming angles** - Specific beamforming angles can be provided to optimize audio input originating from a predetermined location, relative to the microphones.
+
+## Minimum requirements to use Microsoft Audio Stack
+
+Microsoft Audio Stack can be used by any product or application that can meet the following requirements:
+* **Raw audio** - Microsoft Audio Stack requires raw (unprocessed) audio as input to yield the best results. Providing audio that is already processed limits the audio stack's ability to perform enhancements at high quality.
+* **Microphone geometries** - Geometry information about each microphone on the device is required to correctly perform all enhancements offered by the Microsoft Audio Stack. Information includes the number of microphones, their physical arrangement, and coordinates. Up to 16 input microphone channels are supported.
+* **Loopback or reference audio** - An audio channel that represents the audio being played out of the device is required to perform acoustic echo cancellation.
+* **Input format** - Microsoft Audio Stack supports downsampling for sample rates that are integral multiples of 16 kHz. A minimum sampling rate of 16 kHz is required. Additionally, the following formats are supported: 32-bit IEEE little endian float, 32-bit little endian signed int, 24-bit little endian signed int, 16-bit little endian signed int, and 8-bit signed int. (A minimal sample-rate check is sketched after this list.)
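+
+As referenced in the input format item, here's a minimal sketch of the sample-rate requirement; the helper name is an arbitrary example, not part of any SDK.
+
+```csharp
+using System;
+
+// Minimal sketch: MAS needs at least 16 kHz and supports downsampling only for
+// integral multiples of 16 kHz (16000, 32000, 48000, ...).
+static bool IsSupportedSampleRate(int sampleRateHz) =>
+    sampleRateHz >= 16000 && sampleRateHz % 16000 == 0;
+
+Console.WriteLine(IsSupportedSampleRate(16000)); // True
+Console.WriteLine(IsSupportedSampleRate(48000)); // True
+Console.WriteLine(IsSupportedSampleRate(44100)); // False
+```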
+
+## Next steps
+[Use the Speech SDK for audio processing](audio-processing-speech-sdk.md)
ai-services Audio Processing Speech Sdk https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/audio-processing-speech-sdk.md
+
+ Title: Use the Microsoft Audio Stack (MAS) - Speech service
+
+description: An overview of the features, capabilities, and restrictions for audio processing using the Speech Software Development Kit (SDK).
+Last updated : 09/16/2022
+ms.devlang: cpp, csharp, java
+++
+# Use the Microsoft Audio Stack (MAS)
+
+The Speech SDK integrates Microsoft Audio Stack (MAS), allowing any application or product to use its audio processing capabilities on input audio. See the [Audio processing](audio-processing-overview.md) documentation for an overview.
+
+In this article, you learn how to use the Microsoft Audio Stack (MAS) with the Speech SDK.
+
+## Default options
+
+This sample shows how to use MAS with all default enhancement options on input from the device's default microphone.
+
+### [C#](#tab/csharp)
+
+```csharp
+var speechConfig = SpeechConfig.FromSubscription("YourSubscriptionKey", "YourServiceRegion");
+
+var audioProcessingOptions = AudioProcessingOptions.Create(AudioProcessingConstants.AUDIO_INPUT_PROCESSING_ENABLE_DEFAULT);
+var audioInput = AudioConfig.FromDefaultMicrophoneInput(audioProcessingOptions);
+
+var recognizer = new SpeechRecognizer(speechConfig, audioInput);
+```
+
+### [C++](#tab/cpp)
+
+```cpp
+auto speechConfig = SpeechConfig::FromSubscription("YourSubscriptionKey", "YourServiceRegion");
+
+auto audioProcessingOptions = AudioProcessingOptions::Create(AUDIO_INPUT_PROCESSING_ENABLE_DEFAULT);
+auto audioInput = AudioConfig::FromDefaultMicrophoneInput(audioProcessingOptions);
+
+auto recognizer = SpeechRecognizer::FromConfig(speechConfig, audioInput);
+```
+
+### [Java](#tab/java)
+
+```java
+SpeechConfig speechConfig = SpeechConfig.fromSubscription("YourSubscriptionKey", "YourServiceRegion");
+
+AudioProcessingOptions audioProcessingOptions = AudioProcessingOptions.create(AudioProcessingConstants.AUDIO_INPUT_PROCESSING_ENABLE_DEFAULT);
+AudioConfig audioInput = AudioConfig.fromDefaultMicrophoneInput(audioProcessingOptions);
+
+SpeechRecognizer recognizer = new SpeechRecognizer(speechConfig, audioInput);
+```
++
+## Preset microphone geometry
+
+This sample shows how to use MAS with a predefined microphone geometry on a specified audio input device. In this example:
+* **Enhancement options** - The default enhancements are applied on the input audio stream.
+* **Preset geometry** - The preset geometry represents a linear 2-microphone array.
+* **Audio input device** - The audio input device ID is `hw:0,1`. For more information on how to select an audio input device, see [How to: Select an audio input device with the Speech SDK](how-to-select-audio-input-devices.md).
+
+### [C#](#tab/csharp)
+
+```csharp
+var speechConfig = SpeechConfig.FromSubscription("YourSubscriptionKey", "YourServiceRegion");
+
+var audioProcessingOptions = AudioProcessingOptions.Create(AudioProcessingConstants.AUDIO_INPUT_PROCESSING_ENABLE_DEFAULT, PresetMicrophoneArrayGeometry.Linear2);
+var audioInput = AudioConfig.FromMicrophoneInput("hw:0,1", audioProcessingOptions);
+
+var recognizer = new SpeechRecognizer(speechConfig, audioInput);
+```
+
+### [C++](#tab/cpp)
+
+```cpp
+auto speechConfig = SpeechConfig::FromSubscription("YourSubscriptionKey", "YourServiceRegion");
+
+auto audioProcessingOptions = AudioProcessingOptions::Create(AUDIO_INPUT_PROCESSING_ENABLE_DEFAULT, PresetMicrophoneArrayGeometry::Linear2);
+auto audioInput = AudioConfig::FromMicrophoneInput("hw:0,1", audioProcessingOptions);
+
+auto recognizer = SpeechRecognizer::FromConfig(speechConfig, audioInput);
+```
+
+### [Java](#tab/java)
+
+```java
+SpeechConfig speechConfig = SpeechConfig.fromSubscription("YourSubscriptionKey", "YourServiceRegion");
+
+AudioProcessingOptions audioProcessingOptions = AudioProcessingOptions.create(AudioProcessingConstants.AUDIO_INPUT_PROCESSING_ENABLE_DEFAULT, PresetMicrophoneArrayGeometry.Linear2);
+AudioConfig audioInput = AudioConfig.fromMicrophoneInput("hw:0,1", audioProcessingOptions);
+
+SpeechRecognizer recognizer = new SpeechRecognizer(speechConfig, audioInput);
+```
++
+## Custom microphone geometry
+
+This sample shows how to use MAS with a custom microphone geometry on a specified audio input device. In this example:
+* **Enhancement options** - The default enhancements will be applied on the input audio stream.
+* **Custom geometry** - A custom microphone geometry for a 7-microphone array is provided via the microphone coordinates. The units for coordinates are millimeters.
+* **Audio input** - The audio input is from a file, where the audio within the file is expected from an audio input device corresponding to the custom geometry specified.
+
+### [C#](#tab/csharp)
+
+```csharp
+var speechConfig = SpeechConfig.FromSubscription("YourSubscriptionKey", "YourServiceRegion");
+
+MicrophoneCoordinates[] microphoneCoordinates = new MicrophoneCoordinates[7]
+{
+ new MicrophoneCoordinates(0, 0, 0),
+ new MicrophoneCoordinates(40, 0, 0),
+ new MicrophoneCoordinates(20, -35, 0),
+ new MicrophoneCoordinates(-20, -35, 0),
+ new MicrophoneCoordinates(-40, 0, 0),
+ new MicrophoneCoordinates(-20, 35, 0),
+ new MicrophoneCoordinates(20, 35, 0)
+};
+var microphoneArrayGeometry = new MicrophoneArrayGeometry(MicrophoneArrayType.Planar, microphoneCoordinates);
+var audioProcessingOptions = AudioProcessingOptions.Create(AudioProcessingConstants.AUDIO_INPUT_PROCESSING_ENABLE_DEFAULT, microphoneArrayGeometry, SpeakerReferenceChannel.LastChannel);
+var audioInput = AudioConfig.FromWavFileInput("katiesteve.wav", audioProcessingOptions);
+
+var recognizer = new SpeechRecognizer(speechConfig, audioInput);
+```
+
+### [C++](#tab/cpp)
+
+```cpp
+auto speechConfig = SpeechConfig::FromSubscription("YourSubscriptionKey", "YourServiceRegion");
+
+MicrophoneArrayGeometry microphoneArrayGeometry
+{
+ MicrophoneArrayType::Planar,
+ { { 0, 0, 0 }, { 40, 0, 0 }, { 20, -35, 0 }, { -20, -35, 0 }, { -40, 0, 0 }, { -20, 35, 0 }, { 20, 35, 0 } }
+};
+auto audioProcessingOptions = AudioProcessingOptions::Create(AUDIO_INPUT_PROCESSING_ENABLE_DEFAULT, microphoneArrayGeometry, SpeakerReferenceChannel::LastChannel);
+auto audioInput = AudioConfig::FromWavFileInput("katiesteve.wav", audioProcessingOptions);
+
+auto recognizer = SpeechRecognizer::FromConfig(speechConfig, audioInput);
+```
+
+### [Java](#tab/java)
+
+```java
+SpeechConfig speechConfig = SpeechConfig.fromSubscription("YourSubscriptionKey", "YourServiceRegion");
+
+MicrophoneCoordinates[] microphoneCoordinates = new MicrophoneCoordinates[7];
+microphoneCoordinates[0] = new MicrophoneCoordinates(0, 0, 0);
+microphoneCoordinates[1] = new MicrophoneCoordinates(40, 0, 0);
+microphoneCoordinates[2] = new MicrophoneCoordinates(20, -35, 0);
+microphoneCoordinates[3] = new MicrophoneCoordinates(-20, -35, 0);
+microphoneCoordinates[4] = new MicrophoneCoordinates(-40, 0, 0);
+microphoneCoordinates[5] = new MicrophoneCoordinates(-20, 35, 0);
+microphoneCoordinates[6] = new MicrophoneCoordinates(20, 35, 0);
+MicrophoneArrayGeometry microphoneArrayGeometry = new MicrophoneArrayGeometry(MicrophoneArrayType.Planar, microphoneCoordinates);
+AudioProcessingOptions audioProcessingOptions = AudioProcessingOptions.create(AudioProcessingConstants.AUDIO_INPUT_PROCESSING_ENABLE_DEFAULT, microphoneArrayGeometry, SpeakerReferenceChannel.LastChannel);
+AudioConfig audioInput = AudioConfig.fromWavFileInput("katiesteve.wav", audioProcessingOptions);
+
+SpeechRecognizer recognizer = new SpeechRecognizer(speechConfig, audioInput);
+```
++
+## Select enhancements
+
+This sample shows how to use MAS with a custom set of enhancements on the input audio. By default, all enhancements are enabled but there are options to disable dereverberation, noise suppression, automatic gain control, and echo cancellation individually by using `AudioProcessingOptions`.
+
+In this example:
+* **Enhancement options** - Echo cancellation and noise suppression will be disabled, while all other enhancements remain enabled.
+* **Audio input device** - The audio input device is the default microphone of the device.
+
+### [C#](#tab/csharp)
+
+```csharp
+var speechConfig = SpeechConfig.FromSubscription("YourSubscriptionKey", "YourServiceRegion");
+
+var audioProcessingOptions = AudioProcessingOptions.Create(AudioProcessingConstants.AUDIO_INPUT_PROCESSING_DISABLE_ECHO_CANCELLATION | AudioProcessingConstants.AUDIO_INPUT_PROCESSING_DISABLE_NOISE_SUPPRESSION | AudioProcessingConstants.AUDIO_INPUT_PROCESSING_ENABLE_DEFAULT);
+var audioInput = AudioConfig.FromDefaultMicrophoneInput(audioProcessingOptions);
+
+var recognizer = new SpeechRecognizer(speechConfig, audioInput);
+```
+
+### [C++](#tab/cpp)
+
+```cpp
+auto speechConfig = SpeechConfig::FromSubscription("YourSubscriptionKey", "YourServiceRegion");
+
+auto audioProcessingOptions = AudioProcessingOptions::Create(AUDIO_INPUT_PROCESSING_DISABLE_ECHO_CANCELLATION | AUDIO_INPUT_PROCESSING_DISABLE_NOISE_SUPPRESSION | AUDIO_INPUT_PROCESSING_ENABLE_DEFAULT);
+auto audioInput = AudioConfig::FromDefaultMicrophoneInput(audioProcessingOptions);
+
+auto recognizer = SpeechRecognizer::FromConfig(speechConfig, audioInput);
+```
+
+### [Java](#tab/java)
+
+```java
+SpeechConfig speechConfig = SpeechConfig.fromSubscription("YourSubscriptionKey", "YourServiceRegion");
+
+AudioProcessingOptions audioProcessingOptions = AudioProcessingOptions.create(AudioProcessingConstants.AUDIO_INPUT_PROCESSING_DISABLE_ECHO_CANCELLATION | AudioProcessingConstants.AUDIO_INPUT_PROCESSING_DISABLE_NOISE_SUPPRESSION | AudioProcessingConstants.AUDIO_INPUT_PROCESSING_ENABLE_DEFAULT);
+AudioConfig audioInput = AudioConfig.fromDefaultMicrophoneInput(audioProcessingOptions);
+
+SpeechRecognizer recognizer = new SpeechRecognizer(speechConfig, audioInput);
+```
++
+## Specify beamforming angles
+
+This sample shows how to use MAS with a custom microphone geometry and beamforming angles on a specified audio input device. In this example:
+* **Enhancement options** - The default enhancements will be applied on the input audio stream.
+* **Custom geometry** - A custom microphone geometry for a 4-microphone array is provided by specifying the microphone coordinates. The units for coordinates are millimeters.
+* **Beamforming angles** - Beamforming angles are specified to optimize for audio originating in that range. The units for angles are degrees. In the sample code below, the start angle is set to 70 degrees and the end angle is set to 110 degrees.
+* **Audio input** - The audio input is from a push stream, where the audio within the stream is expected from an audio input device corresponding to the custom geometry specified.
+
+### [C#](#tab/csharp)
+
+```csharp
+var speechConfig = SpeechConfig.FromSubscription("YourSubscriptionKey", "YourServiceRegion");
+
+MicrophoneCoordinates[] microphoneCoordinates = new MicrophoneCoordinates[4]
+{
+ new MicrophoneCoordinates(-60, 0, 0),
+ new MicrophoneCoordinates(-20, 0, 0),
+ new MicrophoneCoordinates(20, 0, 0),
+ new MicrophoneCoordinates(60, 0, 0)
+};
+var microphoneArrayGeometry = new MicrophoneArrayGeometry(MicrophoneArrayType.Linear, 70, 110, microphoneCoordinates);
+var audioProcessingOptions = AudioProcessingOptions.Create(AudioProcessingConstants.AUDIO_INPUT_PROCESSING_ENABLE_DEFAULT, microphoneArrayGeometry, SpeakerReferenceChannel.LastChannel);
+var pushStream = AudioInputStream.CreatePushStream();
+var audioInput = AudioConfig.FromStreamInput(pushStream, audioProcessingOptions);
+
+var recognizer = new SpeechRecognizer(speechConfig, audioInput);
+```
+
+### [C++](#tab/cpp)
+
+```cpp
+auto speechConfig = SpeechConfig::FromSubscription("YourSubscriptionKey", "YourServiceRegion");
+
+MicrophoneArrayGeometry microphoneArrayGeometry
+{
+ MicrophoneArrayType::Linear,
+ 70,
+ 110,
+ { { -60, 0, 0 }, { -20, 0, 0 }, { 20, 0, 0 }, { 60, 0, 0 } }
+};
+auto audioProcessingOptions = AudioProcessingOptions::Create(AUDIO_INPUT_PROCESSING_ENABLE_DEFAULT, microphoneArrayGeometry, SpeakerReferenceChannel::LastChannel);
+auto pushStream = AudioInputStream::CreatePushStream();
+auto audioInput = AudioConfig::FromStreamInput(pushStream, audioProcessingOptions);
+
+auto recognizer = SpeechRecognizer::FromConfig(speechConfig, audioInput);
+```
+
+### [Java](#tab/java)
+
+```java
+SpeechConfig speechConfig = SpeechConfig.fromSubscription("YourSubscriptionKey", "YourServiceRegion");
+
+MicrophoneCoordinates[] microphoneCoordinates = new MicrophoneCoordinates[4];
+microphoneCoordinates[0] = new MicrophoneCoordinates(-60, 0, 0);
+microphoneCoordinates[1] = new MicrophoneCoordinates(-20, 0, 0);
+microphoneCoordinates[2] = new MicrophoneCoordinates(20, 0, 0);
+microphoneCoordinates[3] = new MicrophoneCoordinates(60, 0, 0);
+MicrophoneArrayGeometry microphoneArrayGeometry = new MicrophoneArrayGeometry(MicrophoneArrayType.Linear, 70, 110, microphoneCoordinates);
+AudioProcessingOptions audioProcessingOptions = AudioProcessingOptions.create(AudioProcessingConstants.AUDIO_INPUT_PROCESSING_ENABLE_DEFAULT, microphoneArrayGeometry, SpeakerReferenceChannel.LastChannel);
+PushAudioInputStream pushStream = AudioInputStream.createPushStream();
+AudioConfig audioInput = AudioConfig.fromStreamInput(pushStream, audioProcessingOptions);
+
+SpeechRecognizer recognizer = new SpeechRecognizer(speechConfig, audioInput);
+```
++
+## Reference channel for echo cancellation
+
+Microsoft Audio Stack requires the reference channel (also known as loopback channel) to perform echo cancellation. The source of the reference channel varies by platform:
+* **Windows** - The reference channel is automatically gathered by the Speech SDK if the `SpeakerReferenceChannel::LastChannel` option is provided when creating `AudioProcessingOptions`.
+* **Linux** - ALSA (Advanced Linux Sound Architecture) must be configured to provide the reference audio stream as the last channel for the audio input device used. ALSA is configured in addition to providing the `SpeakerReferenceChannel::LastChannel` option when creating `AudioProcessingOptions`.
+
+## Language and platform support
+
+| Language | Platform | Reference docs |
+|--|--|--|
+| C++ | Windows, Linux | [C++ docs](/cpp/cognitive-services/speech/) |
+| C# | Windows, Linux | [C# docs](/dotnet/api/microsoft.cognitiveservices.speech) |
+| Java | Windows, Linux | [Java docs](/java/api/com.microsoft.cognitiveservices.speech) |
+
+## Next steps
+[Set up the development environment](quickstarts/setup-platform.md)
ai-services Batch Synthesis https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/batch-synthesis.md
+
+ Title: Batch synthesis API (Preview) for text to speech - Speech service
+
+description: Learn how to use the batch synthesis API for asynchronous synthesis of long-form text to speech.
+Last updated : 11/16/2022
+# Batch synthesis API (Preview) for text to speech
+
+The Batch synthesis API (Preview) can synthesize a large volume of text input (long and short) asynchronously. Publishers and audio content platforms can create long audio content in a batch. For example: audio books, news articles, and documents. The batch synthesis API can create synthesized audio longer than 10 minutes.
+
+> [!IMPORTANT]
+> The Batch synthesis API is currently in public preview. Once it's generally available, the Long Audio API will be deprecated. For more information, see [Migrate to batch synthesis API](migrate-to-batch-synthesis.md).
+
+The batch synthesis API is asynchronous and doesn't return synthesized audio in real-time. You submit text files to be synthesized, poll for the status, and download the audio output when the status indicates success. The text inputs must be plain text or [Speech Synthesis Markup Language (SSML)](speech-synthesis-markup.md) text.
+
+This diagram provides a high-level overview of the workflow.
+
+![Diagram of the Batch Synthesis API workflow.](media/long-audio-api/long-audio-api-workflow.png)
+
+> [!TIP]
+> You can also use the [Speech SDK](speech-sdk.md) to create synthesized audio longer than 10 minutes by iterating over the text and synthesizing it in chunks. For a C# example, see [GitHub](https://github.com/Azure-Samples/cognitive-services-speech-sdk/blob/master/samples/csharp/sharedcontent/console/speech_synthesis_samples.cs).
+
+You can use the following REST API operations for batch synthesis:
+
+| Operation | Method | REST API call |
+| --- | --- | --- |
+| Create batch synthesis | `POST` | texttospeech/3.1-preview1/batchsynthesis |
+| Get batch synthesis | `GET` | texttospeech/3.1-preview1/batchsynthesis/{id} |
+| List batch synthesis | `GET` | texttospeech/3.1-preview1/batchsynthesis |
+| Delete batch synthesis | `DELETE` | texttospeech/3.1-preview1/batchsynthesis/{id} |
+
+For code samples, see [GitHub](https://github.com/Azure-Samples/cognitive-services-speech-sdk/tree/master/samples/batch-synthesis).
+
+## Create batch synthesis
+
+To submit a batch synthesis request, construct the HTTP POST request body according to the following instructions:
+
+- Set the required `textType` property.
+- If the `textType` property is set to "PlainText", then you must also set the `voice` property in the `synthesisConfig`. In the example below, the `textType` is set to "SSML", so the `synthesisConfig.voice` property isn't set.
+- Set the required `displayName` property. Choose a name that you can refer to later. The display name doesn't have to be unique.
+- Optionally you can set the `description`, `timeToLive`, and other properties. For more information, see [batch synthesis properties](#batch-synthesis-properties).
+
+> [!NOTE]
+> The maximum JSON payload size that will be accepted is 500 kilobytes. Each Speech resource can have up to 200 batch synthesis jobs that are running concurrently.
+
+Make an HTTP POST request using the URI as shown in the following example. Replace `YourSpeechKey` with your Speech resource key, replace `YourSpeechRegion` with your Speech resource region, and set the request body properties as previously described.
+
+```azurecli-interactive
+curl -v -X POST -H "Ocp-Apim-Subscription-Key: YourSpeechKey" -H "Content-Type: application/json" -d '{
+ "displayName": "batch synthesis sample",
+ "description": "my ssml test",
+ "textType": "SSML",
+ "inputs": [
+ {
+ "text": "<speak version='\''1.0'\'' xml:lang='\''en-US'\''>
+ <voice xml:lang='\''en-US'\'' xml:gender='\''Female'\'' name='\''en-US-JennyNeural'\''>
+ The rainbow has seven colors.
+ </voice>
+ </speak>"
+ }
+ ],
+ "properties": {
+ "outputFormat": "riff-24khz-16bit-mono-pcm",
+ "wordBoundaryEnabled": false,
+ "sentenceBoundaryEnabled": false,
+ "concatenateResult": false,
+ "decompressOutputFiles": false
+ }
+}' "https://YourSpeechRegion.customvoice.api.speech.microsoft.com/api/texttospeech/3.1-preview1/batchsynthesis"
+```
+
+You should receive a response body in the following format:
+
+```json
+{
+ "textType": "SSML",
+ "synthesisConfig": {},
+ "customVoices": {},
+ "properties": {
+ "timeToLive": "P31D",
+ "outputFormat": "riff-24khz-16bit-mono-pcm",
+ "concatenateResult": false,
+ "decompressOutputFiles": false,
+ "wordBoundaryEnabled": false,
+ "sentenceBoundaryEnabled": false
+ },
+ "lastActionDateTime": "2022-11-16T15:07:04.121Z",
+ "status": "NotStarted",
+ "id": "1e2e0fe8-e403-417c-a382-b55eb2ea943d",
+ "createdDateTime": "2022-11-16T15:07:04.121Z",
+ "displayName": "batch synthesis sample",
+ "description": "my ssml test"
+}
+```
+
+The `status` property should progress from `NotStarted`, to `Running`, and finally to `Succeeded` or `Failed`. You can call the [GET batch synthesis API](#get-batch-synthesis) periodically until the returned status is `Succeeded` or `Failed`.
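+
+Here's a minimal C# polling sketch that uses the GET operation described in the next section; the region, key, and synthesis ID values are the same kind of placeholders used in the curl examples.
+
+```csharp
+using System;
+using System.Net.Http;
+using System.Text.Json;
+using System.Threading.Tasks;
+
+// Placeholders, as in the curl examples in this article.
+string region = "YourSpeechRegion";
+string key = "YourSpeechKey";
+string synthesisId = "YourSynthesisId";
+
+using var http = new HttpClient();
+http.DefaultRequestHeaders.Add("Ocp-Apim-Subscription-Key", key);
+
+string status;
+do
+{
+    // Poll at a modest interval; batch jobs are asynchronous and can take a while.
+    await Task.Delay(TimeSpan.FromSeconds(30));
+    string json = await http.GetStringAsync(
+        $"https://{region}.customvoice.api.speech.microsoft.com/api/texttospeech/3.1-preview1/batchsynthesis/{synthesisId}");
+    status = JsonDocument.Parse(json).RootElement.GetProperty("status").GetString();
+    Console.WriteLine($"Batch synthesis status: {status}");
+} while (status != "Succeeded" && status != "Failed");
+```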
+
+## Get batch synthesis
+
+To get the status of the batch synthesis job, make an HTTP GET request using the URI as shown in the following example. Replace `YourSynthesisId` with your batch synthesis ID, replace `YourSpeechKey` with your Speech resource key, and replace `YourSpeechRegion` with your Speech resource region.
+
+```azurecli-interactive
+curl -v -X GET "https://YourSpeechRegion.customvoice.api.speech.microsoft.com/api/texttospeech/3.1-preview1/batchsynthesis/YourSynthesisId" -H "Ocp-Apim-Subscription-Key: YourSpeechKey"
+```
+
+You should receive a response body in the following format:
+
+```json
+{
+ "textType": "SSML",
+ "synthesisConfig": {},
+ "customVoices": {},
+ "properties": {
+ "audioSize": 100000,
+ "durationInTicks": 31250000,
+ "succeededAudioCount": 1,
+ "failedAudioCount": 0,
+ "duration": "PT3.125S",
+ "billingDetails": {
+ "customNeural": 0,
+ "neural": 33
+ },
+ "timeToLive": "P31D",
+ "outputFormat": "riff-24khz-16bit-mono-pcm",
+ "concatenateResult": false,
+ "decompressOutputFiles": false,
+ "wordBoundaryEnabled": false,
+ "sentenceBoundaryEnabled": false
+ },
+ "outputs": {
+ "result": "https://cvoiceprodeus.blob.core.windows.net/batch-synthesis-output/41b83de2-380d-45dc-91af-722b68cfdc8e/results.zip?SAS_Token"
+ },
+ "lastActionDateTime": "2022-11-05T14:00:32.523Z",
+ "status": "Succeeded",
+ "id": "41b83de2-380d-45dc-91af-722b68cfdc8e",
+ "createdDateTime": "2022-11-05T14:00:31.523Z",
+ "displayName": "batch synthesis sample",
+ "description": "my test"
+ }
+```
+
+From `outputs.result`, you can download a ZIP file that contains the audio (such as `0001.wav`), summary, and debug details. For more information, see [batch synthesis results](#batch-synthesis-results).
+
+## List batch synthesis
+
+To list all batch synthesis jobs for the Speech resource, make an HTTP GET request using the URI as shown in the following example. Replace `YourSpeechKey` with your Speech resource key and replace `YourSpeechRegion` with your Speech resource region. Optionally, you can set the `skip` and `top` (page size) query parameters in the URL. The default value for `skip` is 0 and the default value for `top` is 100.
+
+```azurecli-interactive
+curl -v -X GET "https://YourSpeechRegion.customvoice.api.speech.microsoft.com/api/texttospeech/3.1-preview1/batchsynthesis?skip=0&top=2" -H "Ocp-Apim-Subscription-Key: YourSpeechKey"
+```
+
+You should receive a response body in the following format:
+
+```json
+{
+ "values": [
+ {
+ "textType": "SSML",
+ "synthesisConfig": {},
+ "customVoices": {},
+ "properties": {
+ "audioSize": 100000,
+ "durationInTicks": 31250000,
+ "succeededAudioCount": 1,
+ "failedAudioCount": 0,
+ "duration": "PT3.125S",
+ "billingDetails": {
+ "customNeural": 0,
+ "neural": 33
+ },
+ "timeToLive": "P31D",
+ "outputFormat": "riff-24khz-16bit-mono-pcm",
+ "concatenateResult": false,
+ "decompressOutputFiles": false,
+ "wordBoundaryEnabled": false,
+ "sentenceBoundaryEnabled": false
+ },
+ "outputs": {
+ "result": "https://cvoiceprodeus.blob.core.windows.net/batch-synthesis-output/41b83de2-380d-45dc-91af-722b68cfdc8e/results.zip?SAS_Token"
+ },
+ "lastActionDateTime": "2022-11-05T14:00:32.523Z",
+ "status": "Succeeded",
+ "id": "41b83de2-380d-45dc-91af-722b68cfdc8e",
+ "createdDateTime": "2022-11-05T14:00:31.523Z",
+ "displayName": "batch synthesis sample",
+ "description": "my test"
+ },
+ {
+ "textType": "PlainText",
+ "synthesisConfig": {
+ "voice": "en-US-JennyNeural",
+ "style": "chat",
+ "rate": "+30.00%",
+ "pitch": "x-high",
+ "volume": "80"
+ },
+ "customVoices": {},
+ "properties": {
+ "audioSize": 79384,
+ "durationInTicks": 24800000,
+ "succeededAudioCount": 1,
+ "failedAudioCount": 0,
+ "duration": "PT2.48S",
+ "billingDetails": {
+ "customNeural": 0,
+ "neural": 33
+ },
+ "timeToLive": "P31D",
+ "outputFormat": "riff-24khz-16bit-mono-pcm",
+ "concatenateResult": false,
+ "decompressOutputFiles": false,
+ "wordBoundaryEnabled": false,
+ "sentenceBoundaryEnabled": false
+ },
+ "outputs": {
+ "result": "https://cvoiceprodeus.blob.core.windows.net/batch-synthesis-output/38e249bf-2607-4236-930b-82f6724048d8/results.zip?SAS_Token"
+ },
+ "lastActionDateTime": "2022-11-05T18:52:23.210Z",
+ "status": "Succeeded",
+ "id": "38e249bf-2607-4236-930b-82f6724048d8",
+ "createdDateTime": "2022-11-05T18:52:22.807Z",
+ "displayName": "batch synthesis sample",
+ "description": "my test"
+ }
+ ],
+ // The next page link of the list of batch synthesis.
+ "@nextLink": "https://{region}.customvoice.api.speech.microsoft.com/api/texttospeech/3.1-preview1/batchsynthesis?skip=0&top=2"
+}
+```
+
+From `outputs.result`, you can download a ZIP file that contains the audio (such as `0001.wav`), summary, and debug details. For more information, see [batch synthesis results](#batch-synthesis-results).
+
+The `values` property in the JSON response lists your synthesis requests. The list is paginated, with a maximum page size of 100. The `"@nextLink"` property is provided as needed to get the next page of the paginated list.
+
+## Delete batch synthesis
+
+Delete the batch synthesis job history after you've retrieved the audio output results. The Speech service keeps each synthesis history for up to 31 days, or the duration of the request's `timeToLive` property, whichever comes sooner. The date and time of automatic deletion (for synthesis jobs with a status of "Succeeded" or "Failed") is equal to the `lastActionDateTime` + `timeToLive` properties.
+
+To delete a batch synthesis job, make an HTTP DELETE request using the URI as shown in the following example. Replace `YourSynthesisId` with your batch synthesis ID, replace `YourSpeechKey` with your Speech resource key, and replace `YourSpeechRegion` with your Speech resource region.
+
+```azurecli-interactive
+curl -v -X DELETE "https://YourSpeechRegion.customvoice.api.speech.microsoft.com/api/texttospeech/3.1-preview1/batchsynthesis/YourSynthesisId" -H "Ocp-Apim-Subscription-Key: YourSpeechKey"
+```
+
+You should receive an `HTTP/1.1 204 No Content` response if the delete request was successful.
+
+## Batch synthesis results
+
+After you [get a batch synthesis job](#get-batch-synthesis) with `status` of "Succeeded", you can download the audio output results. Use the URL from the `outputs.result` property of the [get batch synthesis](#get-batch-synthesis) response.
+
+To get the batch synthesis results file, make an HTTP GET request using the URI as shown in the following example. Replace `YourOutputsResultUrl` with the URL from the `outputs.result` property of the [get batch synthesis](#get-batch-synthesis) response. Replace `YourSpeechKey` with your Speech resource key.
+
+```azurecli-interactive
+curl -v -X GET "YourOutputsResultUrl" -H "Ocp-Apim-Subscription-Key: YourSpeechKey" > results.zip
+```
+
+The results are in a ZIP file that contains the audio (such as `0001.wav`), summary, and debug details. The numbered prefix of each filename (shown below as `[nnnn]`) is in the same order as the text inputs used when you created the batch synthesis.
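+
+If you prefer to download and unpack the results from code, here's a minimal C# sketch that performs the same download as the curl example above and then extracts the archive. `YourOutputsResultUrl` and `YourSpeechKey` are placeholders, as in the curl example.
+
+```csharp
+using System;
+using System.IO;
+using System.IO.Compression;
+using System.Net.Http;
+
+string resultUrl = "YourOutputsResultUrl"; // from the outputs.result property
+string key = "YourSpeechKey";
+
+using var http = new HttpClient();
+http.DefaultRequestHeaders.Add("Ocp-Apim-Subscription-Key", key);
+
+// Download the results ZIP and extract it to a local folder.
+byte[] zipBytes = await http.GetByteArrayAsync(resultUrl);
+await File.WriteAllBytesAsync("results.zip", zipBytes);
+ZipFile.ExtractToDirectory("results.zip", "results");
+
+// The extracted folder contains the numbered audio files, summary.json, and debug details.
+foreach (string wav in Directory.GetFiles("results", "*.wav"))
+{
+    Console.WriteLine($"Synthesized audio: {wav}");
+}
+```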
+
+> [!NOTE]
+> The `[nnnn].debug.json` file contains the synthesis result ID and other information that might help with troubleshooting. The properties that it contains might change, so you shouldn't take any dependencies on the JSON format.
+
+The summary file contains the synthesis results for each text input. Here's an example `summary.json` file:
+
+```json
+{
+ "jobID": "41b83de2-380d-45dc-91af-722b68cfdc8e",
+ "status": "Succeeded",
+ "results": [
+ {
+ "texts": [
+ "<speak version='1.0' xml:lang='en-US'>\n\t\t\t\t<voice xml:lang='en-US' xml:gender='Female' name='en-US-JennyNeural'>\n\t\t\t\t\tThe rainbow has seven colors.\n\t\t\t\t</voice>\n\t\t\t</speak>"
+ ],
+ "status": "Succeeded",
+ "billingDetails": {
+ "CustomNeural": "0",
+ "Neural": "33"
+ },
+ "audioFileName": "0001.wav",
+ "properties": {
+ "audioSize": "100000",
+ "duration": "PT3.1S",
+ "durationInTicks": "31250000"
+ }
+ }
+ ]
+}
+```
+
+If sentence boundary data was requested (`"sentenceBoundaryEnabled": true`), then a corresponding `[nnnn].sentence.json` file will be included in the results. Likewise, if word boundary data was requested (`"wordBoundaryEnabled": true`), then a corresponding `[nnnn].word.json` file will be included in the results.
+
+Here's an example word data file with both audio offset and duration in milliseconds:
+
+```json
+[
+ {
+ "Text": "the",
+ "AudioOffset": 38,
+ "Duration": 153
+ },
+ {
+ "Text": "rainbow",
+ "AudioOffset": 201,
+ "Duration": 326
+ },
+ {
+ "Text": "has",
+ "AudioOffset": 567,
+ "Duration": 96
+ },
+ {
+ "Text": "seven",
+ "AudioOffset": 673,
+ "Duration": 96
+ },
+ {
+ "Text": "colors",
+ "AudioOffset": 778,
+ "Duration": 451
+ }
+]
+```
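+
+If you want to consume the word boundary data programmatically, here's a minimal C# sketch that reads a word data file like the one above; the file name `0001.word.json` follows the `[nnnn].word.json` naming described earlier.
+
+```csharp
+using System;
+using System.IO;
+using System.Text.Json;
+
+// Minimal sketch: print each word with its audio offset and duration (both in milliseconds).
+using JsonDocument doc = JsonDocument.Parse(File.ReadAllText("0001.word.json"));
+foreach (JsonElement word in doc.RootElement.EnumerateArray())
+{
+    string text = word.GetProperty("Text").GetString();
+    int offsetMs = word.GetProperty("AudioOffset").GetInt32();
+    int durationMs = word.GetProperty("Duration").GetInt32();
+    Console.WriteLine($"{text}: offset {offsetMs} ms, duration {durationMs} ms");
+}
+```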
+
+## Batch synthesis properties
+
+Batch synthesis properties are described in the following table.
+
+| Property | Description |
+|-|-|
+|`createdDateTime`|The date and time when the batch synthesis job was created.<br/><br/>This property is read-only.|
+|`customProperties`|A custom set of optional batch synthesis configuration settings.<br/><br/>This property is stored for your convenience to associate the synthesis jobs that you created with the synthesis jobs that you get or list. This property is stored, but isn't used by the Speech service.<br/><br/>You can specify up to 10 custom properties as key and value pairs. The maximum allowed key length is 64 characters, and the maximum allowed value length is 256 characters.|
+|`customVoices`|The map of a custom voice name and its deployment ID.<br/><br/>For example: `"customVoices": {"your-custom-voice-name": "502ac834-6537-4bc3-9fd6-140114daa66d"}`<br/><br/>You can use the voice name in your `synthesisConfig.voice` (when the `textType` is set to `"PlainText"`) or within the SSML text of `inputs` (when the `textType` is set to `"SSML"`).<br/><br/>This property is required to use a custom voice. If you try to use a custom voice that isn't defined here, the service returns an error.|
+|`description`|The description of the batch synthesis.<br/><br/>This property is optional.|
+|`displayName`|The name of the batch synthesis. Choose a name that you can refer to later. The display name doesn't have to be unique.<br/><br/>This property is required.|
+|`id`|The batch synthesis job ID.<br/><br/>This property is read-only.|
+|`inputs`|The plain text or SSML to be synthesized.<br/><br/>When the `textType` is set to `"PlainText"`, provide plain text as shown here: `"inputs": [{"text": "The rainbow has seven colors."}]`. When the `textType` is set to `"SSML"`, provide text in the [Speech Synthesis Markup Language (SSML)](speech-synthesis-markup.md) as shown here: `"inputs": [{"text": "<speak version='\''1.0'\'' xml:lang='\''en-US'\''><voice xml:lang='\''en-US'\'' xml:gender='\''Female'\'' name='\''en-US-JennyNeural'\''>The rainbow has seven colors.</voice></speak>"}]`.<br/><br/>Include up to 1,000 text objects if you want multiple audio output files. Here's example input text that should be synthesized to two audio output files: `"inputs": [{"text": "synthesize this to a file"},{"text": "synthesize this to another file"}]`. However, if the `properties.concatenateResult` property is set to `true`, then each synthesized result will be written to the same audio output file.<br/><br/>You don't need separate text inputs for new paragraphs. Within any of the (up to 1,000) text inputs, you can specify new paragraphs using the "\r\n" (newline) string. Here's example input text with two paragraphs that should be synthesized to the same audio output file: `"inputs": [{"text": "synthesize this to a file\r\nsynthesize this to another paragraph in the same file"}]`<br/><br/>There are no paragraph limits, but keep in mind that the maximum JSON payload size (including all text inputs and other properties) that will be accepted is 500 kilobytes.<br/><br/>This property is required when you create a new batch synthesis job. This property isn't included in the response when you get the synthesis job.|
+|`lastActionDateTime`|The most recent date and time when the `status` property value changed.<br/><br/>This property is read-only.|
+|`outputs.result`|The location of the batch synthesis result files with audio output and logs.<br/><br/>This property is read-only.|
+|`properties`|A defined set of optional batch synthesis configuration settings.|
+|`properties.audioSize`|The audio output size in bytes.<br/><br/>This property is read-only.|
+|`properties.billingDetails`|The number of words that were processed and billed by `customNeural` versus `neural` (prebuilt) voices.<br/><br/>This property is read-only.|
+|`properties.concatenateResult`|Determines whether to concatenate the result. This optional `bool` value ("true" or "false") is "false" by default.|
+|`properties.decompressOutputFiles`|Determines whether to unzip the synthesis result files in the destination container. This property can only be set when the `destinationContainerUrl` property is set or BYOS (Bring Your Own Storage) is configured for the Speech resource. This optional `bool` value ("true" or "false") is "false" by default.|
+|`properties.destinationContainerUrl`|The batch synthesis results can be stored in a writable Azure container. If you don't specify a container URI with [shared access signatures (SAS)](../../storage/common/storage-sas-overview.md) token, the Speech service stores the results in a container managed by Microsoft. SAS with stored access policies isn't supported. When the synthesis job is deleted, the result data is also deleted.<br/><br/>This optional property isn't included in the response when you get the synthesis job.|
+|`properties.duration`|The audio output duration. The value is an ISO 8601 encoded duration.<br/><br/>This property is read-only.|
+|`properties.durationInTicks`|The audio output duration in ticks.<br/><br/>This property is read-only.|
+|`properties.failedAudioCount`|The count of batch synthesis inputs that failed to produce audio output.<br/><br/>This property is read-only.|
+|`properties.outputFormat`|The audio output format.<br/><br/>For information about the accepted values, see [audio output formats](rest-text-to-speech.md#audio-outputs). The default output format is `riff-24khz-16bit-mono-pcm`.|
+|`properties.sentenceBoundaryEnabled`|Determines whether to generate sentence boundary data. This optional `bool` value ("true" or "false") is "false" by default.<br/><br/>If sentence boundary data is requested, then a corresponding `[nnnn].sentence.json` file will be included in the results data ZIP file.|
+|`properties.succeededAudioCount`|The count of batch synthesis inputs that succeeded in producing audio output.<br/><br/>This property is read-only.|
+|`properties.timeToLive`|The duration, measured from when the synthesis job is created, after which the synthesis results are automatically deleted. The value is an ISO 8601 encoded duration. For example, specify `PT12H` for 12 hours. This optional setting is `P31D` (31 days) by default. The maximum time to live is 31 days. The date and time of automatic deletion (for synthesis jobs with a status of "Succeeded" or "Failed") is equal to the `lastActionDateTime` + `timeToLive` properties.<br/><br/>Alternatively, you can call the [delete](#delete-batch-synthesis) synthesis method to remove the job sooner.|
+|`properties.wordBoundaryEnabled`|Determines whether to generate word boundary data. This optional `bool` value ("true" or "false") is "false" by default.<br/><br/>If word boundary data is requested, then a corresponding `[nnnn].word.json` file will be included in the results data ZIP file.|
+|`status`|The batch synthesis processing status.<br/><br/>The status should progress from "NotStarted" to "Running", and finally to either "Succeeded" or "Failed".<br/><br/>This property is read-only.|
+|`synthesisConfig`|The configuration settings to use for batch synthesis of plain text.<br/><br/>This property is only applicable when `textType` is set to `"PlainText"`.|
+|`synthesisConfig.pitch`|The pitch of the audio output.<br/><br/>For information about the accepted values, see the [adjust prosody](speech-synthesis-markup-voice.md#adjust-prosody) table in the Speech Synthesis Markup Language (SSML) documentation. Invalid values are ignored.<br/><br/>This optional property is only applicable when `textType` is set to `"PlainText"`.|
+|`synthesisConfig.rate`|The rate of the audio output.<br/><br/>For information about the accepted values, see the [adjust prosody](speech-synthesis-markup-voice.md#adjust-prosody) table in the Speech Synthesis Markup Language (SSML) documentation. Invalid values are ignored.<br/><br/>This optional property is only applicable when `textType` is set to `"PlainText"`.|
+|`synthesisConfig.style`|For some voices, you can adjust the speaking style to express different emotions like cheerfulness, empathy, and calm. You can optimize the voice for different scenarios like customer service, newscast, and voice assistant.<br/><br/>For information about the available styles per voice, see [voice styles and roles](language-support.md?tabs=tts#voice-styles-and-roles).<br/><br/>This optional property is only applicable when `textType` is set to `"PlainText"`.|
+|`synthesisConfig.voice`|The voice that speaks the audio output.<br/><br/>For information about the available prebuilt neural voices, see [language and voice support](language-support.md?tabs=tts). To use a custom voice, you must specify a valid custom voice and deployment ID mapping in the `customVoices` property.<br/><br/>This property is required when `textType` is set to `"PlainText"`.|
+|`synthesisConfig.volume`|The volume of the audio output.<br/><br/>For information about the accepted values, see the [adjust prosody](speech-synthesis-markup-voice.md#adjust-prosody) table in the Speech Synthesis Markup Language (SSML) documentation. Invalid values are ignored.<br/><br/>This optional property is only applicable when `textType` is set to `"PlainText"`.|
+|`textType`|Indicates whether the `inputs` text property should be plain text or SSML. The possible case-insensitive values are "PlainText" and "SSML". When the `textType` is set to `"PlainText"`, you must also set the `synthesisConfig` voice property.<br/><br/>This property is required.|
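+
+For reference, here's a minimal sketch of a create batch synthesis request body that combines several of these properties, assuming plain text input and a prebuilt voice. The display name, description, and input text are placeholder values; adjust them for your scenario.
+
+```json
+{
+  "displayName": "batch synthesis sample",
+  "description": "Example synthesis of plain text",
+  "textType": "PlainText",
+  "synthesisConfig": {
+    "voice": "en-US-JennyNeural"
+  },
+  "inputs": [
+    {
+      "text": "The rainbow has seven colors."
+    }
+  ],
+  "properties": {
+    "outputFormat": "riff-24khz-16bit-mono-pcm",
+    "wordBoundaryEnabled": true,
+    "sentenceBoundaryEnabled": false,
+    "concatenateResult": false,
+    "timeToLive": "PT12H"
+  }
+}
+```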
+
+## HTTP status codes
+
+This section details the HTTP response codes and messages from the batch synthesis API.
+
+### HTTP 200 OK
+
+HTTP 200 OK indicates that the request was successful.
+
+### HTTP 201 Created
+
+HTTP 201 Created indicates that the create batch synthesis request (via HTTP POST) was successful.
+
+### HTTP 204 error
+
+An HTTP 204 error indicates that the request was successful, but the resource doesn't exist. For example:
+- You tried to get or delete a synthesis job that doesn't exist.
+- You successfully deleted a synthesis job.
+
+### HTTP 400 error
+
+Here are examples that can result in the 400 error:
+- The `outputFormat` is unsupported or invalid. Provide a valid format value, or leave `outputFormat` empty to use the default setting.
+- The number of requested text inputs exceeded the limit of 1,000.
+- The `top` query parameter exceeded the limit of 100.
+- You tried to use an invalid deployment ID or a custom voice that isn't successfully deployed. Make sure the Speech resource has access to the custom voice, and the custom voice is successfully deployed. You must also ensure that the mapping of `{"your-custom-voice-name": "your-deployment-ID"}` is correct in your batch synthesis request.
+- You tried to delete a batch synthesis job that hasn't started or hasn't completed running. You can only delete batch synthesis jobs that have a status of "Succeeded" or "Failed".
+- You tried to use a *F0* Speech resource, but the region only supports the *Standard* Speech resource pricing tier.
+- You tried to create a new batch synthesis job that would exceed the limit of 200 active jobs. Each Speech resource can have up to 200 batch synthesis jobs that don't have a status of "Succeeded" or "Failed".
+
+### HTTP 404 error
+
+The specified entity can't be found. Make sure the synthesis ID is correct.
+
+### HTTP 429 error
+
+There are too many recent requests. Each client application can submit up to 50 requests per 5 seconds for each Speech resource. Reduce the number of requests per second.
+
+You can check the rate limit and quota remaining via the HTTP headers as shown in the following example:
+
+```http
+X-RateLimit-Limit: 50
+X-RateLimit-Remaining: 49
+X-RateLimit-Reset: 2022-11-11T01:49:43Z
+```
+
+### HTTP 500 error
+
+HTTP 500 Internal Server Error indicates that the request failed. The response body contains the error message.
+
+### HTTP error example
+
+Here's an example request that results in an HTTP 400 error, because the `top` query parameter is set to a value greater than 100.
+
+```console
+curl -v -X GET "https://YourSpeechRegion.customvoice.api.speech.microsoft.com/api/texttospeech/3.1-preview1/batchsynthesis?skip=0&top=200" -H "Ocp-Apim-Subscription-Key: YourSpeechKey"
+```
+
+In this case, the response headers will include `HTTP/1.1 400 Bad Request`.
+
+The response body will resemble the following JSON example:
+
+```json
+{
+ "code": "InvalidRequest",
+ "message": "The top parameter should not be greater than 100.",
+ "innerError": {
+ "code": "InvalidParameter",
+ "message": "The top parameter should not be greater than 100."
+ }
+}
+```
+
+## Next steps
+
+- [Speech Synthesis Markup Language (SSML)](speech-synthesis-markup.md)
+- [Text to speech quickstart](get-started-text-to-speech.md)
+- [Migrate to batch synthesis](migrate-to-batch-synthesis.md)
ai-services Batch Transcription Audio Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/batch-transcription-audio-data.md
+
+ Title: Locate audio files for batch transcription - Speech service
+
+description: Batch transcription is used to transcribe a large amount of audio in storage. You should provide multiple files per request or point to an Azure Blob Storage container with the audio files to transcribe.
+++++++ Last updated : 10/21/2022
+ms.devlang: csharp
+++
+# Locate audio files for batch transcription
+
+Batch transcription is used to transcribe a large amount of audio in storage. Batch transcription can access audio files from inside or outside of Azure.
+
+When source audio files are stored outside of Azure, they can be accessed via a public URI (such as "https://crbn.us/hello.wav"). Files should be directly accessible; URIs that require authentication or that invoke interactive scripts before the file can be accessed aren't supported.
+
+Audio files that are stored in Azure Blob storage can be accessed via one of two methods:
+- [Trusted Azure services security mechanism](#trusted-azure-services-security-mechanism)
+- [Shared access signature (SAS)](#sas-url-for-batch-transcription) URI.
+
+You can specify one or multiple audio files when creating a transcription. We recommend that you provide multiple files per request or point to an Azure Blob storage container with the audio files to transcribe. The batch transcription service can handle a large number of submitted transcriptions. The service transcribes the files concurrently, which reduces the turnaround time.
+
+## Supported audio formats
+
+The batch transcription API supports the following formats:
+
+| Format | Codec | Bits per sample | Sample rate |
+|--|--|--|--|
+| WAV | PCM | 16-bit | 8 kHz or 16 kHz, mono or stereo |
+| MP3 | PCM | 16-bit | 8 kHz or 16 kHz, mono or stereo |
+| OGG | OPUS | 16-bit | 8 kHz or 16 kHz, mono or stereo |
+
+For stereo audio streams, the left and right channels are split during the transcription. A JSON result file is created for each input audio file. To create an ordered final transcript, use the timestamps that are generated per utterance.
+
+## Azure Blob Storage upload
+
+When audio files are located in an [Azure Blob Storage](../../storage/blobs/storage-blobs-overview.md) account, you can request transcription of individual audio files or an entire Azure Blob Storage container. You can also [write transcription results](batch-transcription-create.md#destination-container-url) to a Blob container.
+
+> [!NOTE]
+> For blob and container limits, see [batch transcription quotas and limits](speech-services-quotas-and-limits.md#batch-transcription).
+
+# [Azure portal](#tab/portal)
+
+Follow these steps to create a storage account and upload wav files from your local directory to a new container.
+
+1. Go to the [Azure portal](https://portal.azure.com/) and sign in to your Azure account.
+1. <a href="https://portal.azure.com/#create/Microsoft.StorageAccount-ARM" title="Create a Storage account resource" target="_blank">Create a Storage account resource</a> in the Azure portal. Use the same subscription and resource group as your Speech resource.
+1. Select the Storage account.
+1. In the **Data storage** group in the left pane, select **Containers**.
+1. Select **+ Container**.
+1. Enter a name for the new container and select **Create**.
+1. Select the new container.
+1. Select **Upload**.
+1. Choose the files to upload and select **Upload**.
+
+# [Azure CLI](#tab/azure-cli)
+
+Follow these steps to create a storage account and upload wav files from your local directory to a new container.
+
+1. Set the `RESOURCE_GROUP` environment variable to the name of an existing resource group where the new storage account will be created. Use the same subscription and resource group as your Speech resource.
+
+ ```azurecli-interactive
+ set RESOURCE_GROUP=<your existing resource group name>
+ ```
+
+1. Set the `AZURE_STORAGE_ACCOUNT` environment variable to the name of a storage account that you want to create.
+
+ ```azurecli-interactive
+ set AZURE_STORAGE_ACCOUNT=<choose new storage account name>
+ ```
+
+1. Create a new storage account with the [`az storage account create`](/cli/azure/storage/account#az-storage-account-create) command. Replace `eastus` with the region of your resource group.
+
+ ```azurecli-interactive
+ az storage account create -n %AZURE_STORAGE_ACCOUNT% -g %RESOURCE_GROUP% -l eastus
+ ```
+
+ > [!TIP]
+    > When you are finished with batch transcriptions and want to delete your storage account, use the [`az storage account delete`](/cli/azure/storage/account#az-storage-account-delete) command.
+
+1. Get your new storage account keys with the [`az storage account keys list`](/cli/azure/storage/account#az-storage-account-keys-list) command.
+
+ ```azurecli-interactive
+ az storage account keys list -g %RESOURCE_GROUP% -n %AZURE_STORAGE_ACCOUNT%
+ ```
+
+1. Set the `AZURE_STORAGE_KEY` environment variable to one of the key values retrieved in the previous step.
+
+ ```azurecli-interactive
+ set AZURE_STORAGE_KEY=<your storage account key>
+ ```
+
+ > [!IMPORTANT]
+ > The remaining steps use the `AZURE_STORAGE_ACCOUNT` and `AZURE_STORAGE_KEY` environment variables. If you didn't set the environment variables, you can pass the values as parameters to the commands. See the [az storage container create](/cli/azure/storage/) documentation for more information.
+
+1. Create a container with the [`az storage container create`](/cli/azure/storage/container#az-storage-container-create) command. Replace `<mycontainer>` with a name for your container.
+
+ ```azurecli-interactive
+ az storage container create -n <mycontainer>
+ ```
+
+1. The following [`az storage blob upload-batch`](/cli/azure/storage/blob#az-storage-blob-upload-batch) command uploads all .wav files from the current local directory. Replace `<mycontainer>` with a name for your container. Optionally you can modify the command to upload files from a different directory.
+
+ ```azurecli-interactive
+ az storage blob upload-batch -d <mycontainer> -s . --pattern *.wav
+ ```
+++
+## Trusted Azure services security mechanism
+
+This section explains how to set up and limit access to your batch transcription source audio files in an Azure Storage account using the [trusted Azure services security mechanism](../../storage/common/storage-network-security.md#trusted-access-based-on-a-managed-identity).
+
+> [!NOTE]
+> With the trusted Azure services security mechanism, you need to use [Azure Blob storage](../../storage/blobs/storage-blobs-overview.md) to store audio files. Usage of [Azure Files](../../storage/files/storage-files-introduction.md) is not supported.
+
+If you perform all actions in this section, your Storage account will be in the following configuration:
+- Access to all external network traffic is prohibited.
+- Access to Storage account using Storage account key is prohibited.
+- Access to Storage account blob storage using [shared access signatures (SAS)](../../storage/common/storage-sas-overview.md) is prohibited.
+- Access to the selected Speech resource is allowed using the resource [system assigned managed identity](../../active-directory/managed-identities-azure-resources/overview.md).
+
+In effect, your Storage account becomes completely locked down and can only be used to transcribe audio files that were already present when the new configuration was applied. Treat this configuration as a baseline for securing your audio data, and customize it according to your needs.
+
+For example, you may allow traffic from selected public IP addresses and Azure Virtual networks. You may also set up access to your Storage account using [private endpoints](../../storage/common/storage-private-endpoints.md) (see also [this tutorial](../../private-link/tutorial-private-endpoint-storage-portal.md)), re-enable access using the Storage account key, allow access to other Azure trusted services, and so on.
+
+> [!NOTE]
+> Using [private endpoints for Speech](speech-services-private-link.md) isn't required to secure the storage account. You can use a private endpoint for batch transcription API requests, while separately accessing the source audio files from a secure storage account, or the other way around.
+
+By following the steps below, you'll severely restrict access to the storage account. Then you'll assign the minimum permissions required for the Speech resource managed identity to access the Storage account.
+
+### Enable system assigned managed identity for the Speech resource
+
+Follow these steps to enable system assigned managed identity for the Speech resource that you will use for batch transcription.
+
+1. Go to the [Azure portal](https://portal.azure.com/) and sign in to your Azure account.
+1. Select the Speech resource.
+1. In the **Resource Management** group in the left pane, select **Identity**.
+1. On the **System assigned** tab, select **On** for the status.
+
+ > [!IMPORTANT]
+ > User assigned managed identity won't meet requirements for the batch transcription storage account scenario. Be sure to enable system assigned managed identity.
+
+1. Select **Save**.
+
+Now the managed identity for your Speech resource can be granted access to your storage account.
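+
+If you prefer the Azure CLI, you can enable the system assigned managed identity with a command along the following lines. This is a sketch that assumes the `az cognitiveservices account identity assign` command is available in your Azure CLI version; the resource name and resource group are placeholders.
+
+```azurecli-interactive
+az cognitiveservices account identity assign --name <your-speech-resource-name> --resource-group <your-resource-group-name>
+```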
+
+### Restrict access to the storage account
+
+Follow these steps to restrict access to the storage account.
+
+> [!IMPORTANT]
+> Upload audio files to a Blob container before locking down the storage account access.
+
+1. Go to the [Azure portal](https://portal.azure.com/) and sign in to your Azure account.
+1. Select the Storage account.
+1. In the **Settings** group in the left pane, select **Configuration**.
+1. Select **Disabled** for **Allow Blob public access**.
+1. Select **Disabled** for **Allow storage account key access**.
+1. Select **Save**.
+
+For more information, see [Prevent anonymous public read access to containers and blobs](../../storage/blobs/anonymous-read-access-prevent.md) and [Prevent Shared Key authorization for an Azure Storage account](../../storage/common/shared-key-authorization-prevent.md).
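+
+If you prefer the Azure CLI, a sketch of the equivalent settings uses `az storage account update` with placeholder names; the flags shown here disable public blob access and Shared Key authorization.
+
+```azurecli-interactive
+az storage account update --name <your-storage-account-name> --resource-group <your-resource-group-name> --allow-blob-public-access false --allow-shared-key-access false
+```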
+
+### Configure Azure Storage firewall
+
+Having restricted access to the Storage account, you need to grant access to specific managed identities. Follow these steps to add access for the Speech resource.
+
+1. Go to the [Azure portal](https://portal.azure.com/) and sign in to your Azure account.
+1. Select the Storage account.
+1. In the **Security + networking** group in the left pane, select **Networking**.
+1. In the **Firewalls and virtual networks** tab, select **Enabled from selected virtual networks and IP addresses**.
+1. Deselect all check boxes.
+1. Make sure **Microsoft network routing** is selected.
+1. Under the **Resource instances** section, select **Microsoft.CognitiveServices/accounts** as the resource type and select your Speech resource as the instance name.
+1. Select **Save**.
+
+ > [!NOTE]
+ > It may take up to 5 min for the network changes to propagate.
+
+Although network access is now permitted, the Speech resource can't yet access the data in the Storage account. You need to assign a specific access role to the Speech resource managed identity.
+
+### Assign resource access role
+
+Follow these steps to assign the **Storage Blob Data Reader** role to the managed identity of your Speech resource.
+
+> [!IMPORTANT]
+> You need to be assigned the *Owner* role on the Storage account or a higher scope (such as the subscription) to perform the operations in the next steps. This is because only the *Owner* role can assign roles to others. For details, see [Azure built-in roles](../../role-based-access-control/built-in-roles.md).
+
+1. Go to the [Azure portal](https://portal.azure.com/) and sign in to your Azure account.
+1. Select the Storage account.
+1. Select **Access Control (IAM)** menu in the left pane.
+1. Select **Add role assignment** in the **Grant access to this resource** tile.
+1. Select **Storage Blob Data Reader** under **Role** and then select **Next**.
+1. Select **Managed identity** under **Members** > **Assign access to**.
+1. Assign the managed identity of your Speech resource and then select **Review + assign**.
+
+ :::image type="content" source="media/storage/storage-identity-access-management-role.png" alt-text="Screenshot of the managed role assignment review.":::
+
+1. After confirming the settings, select **Review + assign**.
+
+Now the Speech resource managed identity has access to the Storage account and can access the audio files for batch transcription.
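+
+If you prefer the Azure CLI, a role assignment along the following lines achieves the same result. This is a sketch with placeholder values: you need the object (principal) ID of the Speech resource's system assigned managed identity and the full resource ID of the Storage account.
+
+```azurecli-interactive
+az role assignment create --assignee-object-id <speech-managed-identity-object-id> --assignee-principal-type ServicePrincipal --role "Storage Blob Data Reader" --scope /subscriptions/<subscription-id>/resourceGroups/<resource-group-name>/providers/Microsoft.Storage/storageAccounts/<storage-account-name>
+```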
+
+With system assigned managed identity, you'll use a plain Storage Account URL (no SAS or other additions) when you [create a batch transcription](batch-transcription-create.md) request. For example:
+
+```json
+{
+ "contentContainerUrl": "https://<storage_account_name>.blob.core.windows.net/<container_name>"
+}
+```
+
+Alternatively, you can specify individual files in the container. For example:
+
+```json
+{
+ "contentUrls": [
+ "https://<storage_account_name>.blob.core.windows.net/<container_name>/<file_name_1>",
+ "https://<storage_account_name>.blob.core.windows.net/<container_name>/<file_name_2>"
+ ]
+}
+```
+
+## SAS URL for batch transcription
+
+A shared access signature (SAS) is a URI that grants restricted access to an Azure Storage container. Use it when you want to grant access to your batch transcription files for a specific time range without sharing your storage account key.
+
+> [!TIP]
+> If the container with batch transcription source files should only be accessed by your Speech resource, use the [trusted Azure services security mechanism](#trusted-azure-services-security-mechanism) instead.
+
+# [Azure portal](#tab/portal)
+
+Follow these steps to generate a SAS URL that you can use for batch transcriptions.
+
+1. Complete the steps in [Azure Blob Storage upload](#azure-blob-storage-upload) to create a Storage account and upload audio files to a new container.
+1. Select the new container.
+1. In the **Settings** group in the left pane, select **Shared access tokens**.
+1. Select **+ Container**.
+1. Select **Read** and **List** for **Permissions**.
+
+ :::image type="content" source="media/storage/storage-container-shared-access-signature.png" alt-text="Screenshot of the container SAS URI permissions.":::
+
+1. Enter the start and expiry times for the SAS URI, or leave the defaults.
+1. Select **Generate SAS token and URL**.
+
+# [Azure CLI](#tab/azure-cli)
+
+Follow these steps to generate a SAS URL that you can use for batch transcriptions.
+
+1. Complete the steps in [Azure Blob Storage upload](#azure-blob-storage-upload) to create a Storage account and upload audio files to a new container.
+1. Generate a SAS URL with read (r) and list (l) permissions for the container with the [`az storage container generate-sas`](/cli/azure/storage/container#az-storage-container-generate-sas) command. Choose a new expiry date and replace `<mycontainer>` with the name of your container.
+
+ ```azurecli-interactive
+ az storage container generate-sas -n <mycontainer> --expiry 2022-10-10 --permissions rl --https-only
+ ```
+
+The previous command returns a SAS token. Append the SAS token to your container blob URL to create a SAS URL. For example: `https://<storage_account_name>.blob.core.windows.net/<container_name>?SAS_TOKEN`.
+++
+You will use the SAS URL when you [create a batch transcription](batch-transcription-create.md) request. For example:
+
+```json
+{
+ "contentContainerUrl": "https://<storage_account_name>.blob.core.windows.net/<container_name>?SAS_TOKEN"
+}
+```
+
+Alternatively, you can specify individual files in the container. You must generate and use a different SAS URL with read (r) permissions for each file. For example:
+
+```json
+{
+ "contentUrls": [
+ "https://<storage_account_name>.blob.core.windows.net/<container_name>/<file_name_1>?SAS_TOKEN_1",
+ "https://<storage_account_name>.blob.core.windows.net/<container_name>/<file_name_2>?SAS_TOKEN_2"
+ ]
+}
+```
+
+## Next steps
+
+- [Batch transcription overview](batch-transcription.md)
+- [Create a batch transcription](batch-transcription-create.md)
+- [Get batch transcription results](batch-transcription-get.md)
+- [See batch transcription code samples at GitHub](https://github.com/Azure-Samples/cognitive-services-speech-sdk/tree/master/samples/batch/)
ai-services Batch Transcription Create https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/batch-transcription-create.md
+
+ Title: Create a batch transcription - Speech service
+
+description: With batch transcriptions, you submit the audio, and then retrieve transcription results asynchronously.
+++++++ Last updated : 11/29/2022
+zone_pivot_groups: speech-cli-rest
+++
+# Create a batch transcription
+
+With batch transcriptions, you submit the [audio data](batch-transcription-audio-data.md), and then retrieve transcription results asynchronously. The service transcribes the audio data and stores the results in a storage container. You can then [retrieve the results](batch-transcription-get.md) from the storage container.
+
+> [!NOTE]
+> To use batch transcription, you need to use a standard (S0) Speech resource. Free resources (F0) aren't supported. For more information, see [pricing and limits](https://azure.microsoft.com/pricing/details/cognitive-services/speech-services/).
+
+## Create a transcription job
++
+To create a transcription, use the [Transcriptions_Create](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/operations/Transcriptions_Create) operation of the [Speech to text REST API](rest-speech-to-text.md#transcriptions). Construct the request body according to the following instructions:
+
+- You must set either the `contentContainerUrl` or `contentUrls` property. For more information about Azure blob storage for batch transcription, see [Locate audio files for batch transcription](batch-transcription-audio-data.md).
+- Set the required `locale` property. This should match the expected locale of the audio data to transcribe. The locale can't be changed later.
+- Set the required `displayName` property. Choose a transcription name that you can refer to later. The transcription name doesn't have to be unique and can be changed later.
+- Optionally you can set the `wordLevelTimestampsEnabled` property to `true` to enable word-level timestamps in the transcription results. The default value is `false`.
+- Optionally you can set the `languageIdentification` property. Language identification is used to identify languages spoken in audio when compared against a list of [supported languages](language-support.md?tabs=language-identification). If you set the `languageIdentification` property, then you must also set `languageIdentification.candidateLocales` with candidate locales.
+
+For more information, see [request configuration options](#request-configuration-options).
+
+Make an HTTP POST request using the URI as shown in the following [Transcriptions_Create](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/operations/Transcriptions_Create) example. Replace `YourSubscriptionKey` with your Speech resource key, replace `YourServiceRegion` with your Speech resource region, and set the request body properties as previously described.
+
+```azurecli-interactive
+curl -v -X POST -H "Ocp-Apim-Subscription-Key: YourSubscriptionKey" -H "Content-Type: application/json" -d '{
+ "contentUrls": [
+ "https://crbn.us/hello.wav",
+ "https://crbn.us/whatstheweatherlike.wav"
+ ],
+ "locale": "en-US",
+ "displayName": "My Transcription",
+ "model": null,
+ "properties": {
+ "wordLevelTimestampsEnabled": true,
+ "languageIdentification": {
+ "candidateLocales": [
+ "en-US", "de-DE", "es-ES"
+      ]
+    }
+  }
+}' "https://YourServiceRegion.api.cognitive.microsoft.com/speechtotext/v3.1/transcriptions"
+```
+
+You should receive a response body in the following format:
+
+```json
+{
+ "self": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.1/transcriptions/db474955-ab85-4c6c-ba6e-3bfe63d041ba",
+ "model": {
+ "self": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.1/models/base/13fb305e-09ad-4bce-b3a1-938c9124dda3"
+ },
+ "links": {
+ "files": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.1/transcriptions/db474955-ab85-4c6c-ba6e-3bfe63d041ba/files"
+ },
+ "properties": {
+ "diarizationEnabled": false,
+ "wordLevelTimestampsEnabled": true,
+ "channels": [
+ 0,
+ 1
+ ],
+ "punctuationMode": "DictatedAndAutomatic",
+ "profanityFilterMode": "Masked",
+ "languageIdentification": {
+ "candidateLocales": [
+ "en-US",
+ "de-DE",
+ "es-ES"
+ ]
+ }
+ },
+ "lastActionDateTime": "2022-10-21T14:18:06Z",
+ "status": "NotStarted",
+ "createdDateTime": "2022-10-21T14:18:06Z",
+ "locale": "en-US",
+ "displayName": "My Transcription"
+}
+```
+
+The top-level `self` property in the response body is the transcription's URI. Use this URI to [get](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/operations/Transcriptions_Get) details such as the URI of the transcriptions and transcription report files. You also use this URI to [update](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/operations/Transcriptions_Update) or [delete](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/operations/Transcriptions_Delete) a transcription.
+
+You can query the status of your transcriptions with the [Transcriptions_Get](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/operations/Transcriptions_Get) operation.
+
+To delete transcriptions from the service regularly after you retrieve the results, call [Transcriptions_Delete](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/operations/Transcriptions_Delete). Alternatively, set the `timeToLive` property to ensure the eventual deletion of the results.
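+
+For example, here's a sketch of a delete request; it follows the same URI pattern as the other transcription operations. Replace `YourTranscriptionId`, `YourSubscriptionKey`, and `YourServiceRegion` with your values.
+
+```azurecli-interactive
+curl -v -X DELETE "https://YourServiceRegion.api.cognitive.microsoft.com/speechtotext/v3.1/transcriptions/YourTranscriptionId" -H "Ocp-Apim-Subscription-Key: YourSubscriptionKey"
+```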
+++
+To create a transcription, use the `spx batch transcription create` command. Construct the request parameters according to the following instructions:
+
+- Set the required `content` parameter. You can specify either a semi-colon delimited list of individual files, or the URL for an entire container. For more information about Azure blob storage for batch transcription, see [Locate audio files for batch transcription](batch-transcription-audio-data.md).
+- Set the required `language` property. This should match the expected locale of the audio data to transcribe. The locale can't be changed later. The Speech CLI `language` parameter corresponds to the `locale` property in the JSON request and response.
+- Set the required `name` property. Choose a transcription name that you can refer to later. The transcription name doesn't have to be unique and can be changed later. The Speech CLI `name` parameter corresponds to the `displayName` property in the JSON request and response.
+
+Here's an example Speech CLI command that creates a transcription job:
+
+```azurecli-interactive
+spx batch transcription create --name "My Transcription" --language "en-US" --content https://crbn.us/hello.wav;https://crbn.us/whatstheweatherlike.wav
+```
+
+You should receive a response body in the following format:
+
+```json
+{
+ "self": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.1/transcriptions/7f4232d5-9873-47a7-a6f7-4a3f00d00dc0",
+ "model": {
+ "self": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.1/models/base/13fb305e-09ad-4bce-b3a1-938c9124dda3"
+ },
+ "links": {
+ "files": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.1/transcriptions/7f4232d5-9873-47a7-a6f7-4a3f00d00dc0/files"
+ },
+ "properties": {
+ "diarizationEnabled": false,
+ "wordLevelTimestampsEnabled": false,
+ "channels": [
+ 0,
+ 1
+ ],
+ "punctuationMode": "DictatedAndAutomatic",
+ "profanityFilterMode": "Masked"
+ },
+ "lastActionDateTime": "2022-10-21T14:21:59Z",
+ "status": "NotStarted",
+ "createdDateTime": "2022-10-21T14:21:59Z",
+ "locale": "en-US",
+ "displayName": "My Transcription",
+ "description": ""
+}
+```
+
+The top-level `self` property in the response body is the transcription's URI. Use this URI to get details such as the URI of the transcriptions and transcription report files. You also use this URI to update or delete a transcription.
+
+For Speech CLI help with transcriptions, run the following command:
+
+```azurecli-interactive
+spx help batch transcription
+```
++
+## Request configuration options
++
+Here are some property options that you can use to configure a transcription when you call the [Transcriptions_Create](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/operations/Transcriptions_Create) operation.
+
+| Property | Description |
+|-|-|
+|`channels`|An array of channel numbers to process. Channels `0` and `1` are transcribed by default. |
+|`contentContainerUrl`| You can submit individual audio files, or a whole storage container.<br/><br/>You must specify the audio data location via either the `contentContainerUrl` or `contentUrls` property. For more information about Azure blob storage for batch transcription, see [Locate audio files for batch transcription](batch-transcription-audio-data.md).<br/><br/>This property won't be returned in the response.|
+|`contentUrls`| You can submit individual audio files, or a whole storage container.<br/><br/>You must specify the audio data location via either the `contentContainerUrl` or `contentUrls` property. For more information, see [Locate audio files for batch transcription](batch-transcription-audio-data.md).<br/><br/>This property won't be returned in the response.|
+|`destinationContainerUrl`|The result can be stored in an Azure container. If you don't specify a container, the Speech service stores the results in a container managed by Microsoft. When the transcription job is deleted, the transcription result data is also deleted. For more information such as the supported security scenarios, see [Destination container URL](#destination-container-url).|
+|`diarization`|Indicates that diarization analysis should be carried out on the input, which is expected to be a mono channel that contains multiple voices. Specify the minimum and maximum number of people who might be speaking. You must also set the `diarizationEnabled` property to `true`. The [transcription file](batch-transcription-get.md#transcription-result-file) will contain a `speaker` entry for each transcribed phrase.<br/><br/>You need to use this property when you expect three or more speakers. For two speakers, setting the `diarizationEnabled` property to `true` is enough. See an example of the property usage in the [Transcriptions_Create](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/operations/Transcriptions_Create) operation description.<br/><br/>Diarization is the process of separating speakers in audio data. The batch pipeline can recognize and separate multiple speakers on mono channel recordings. The maximum number of speakers for diarization must be less than 36 and greater than or equal to the `minSpeakers` property (see [example](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/operations/Transcriptions_Create)). The feature isn't available with stereo recordings.<br/><br/>When this property is selected, source audio length can't exceed 240 minutes per file.<br/><br/>**Note**: This property is only available with Speech to text REST API version 3.1.|
+|`diarizationEnabled`|Specifies that diarization analysis should be carried out on the input, which is expected to be a mono channel that contains two voices. The default value is `false`.<br/><br/>For three or more voices you also need to use property `diarization` (only with Speech to text REST API version 3.1).<br/><br/>When this property is selected, source audio length can't exceed 240 minutes per file.|
+|`displayName`|The name of the batch transcription. Choose a name that you can refer to later. The display name doesn't have to be unique.<br/><br/>This property is required.|
+|`languageIdentification`|Language identification is used to identify languages spoken in audio when compared against a list of [supported languages](language-support.md?tabs=language-identification).<br/><br/>If you set the `languageIdentification` property, then you must also set its enclosed `candidateLocales` property.|
+|`languageIdentification.candidateLocales`|The candidate locales for language identification such as `"properties": { "languageIdentification": { "candidateLocales": ["en-US", "de-DE", "es-ES"]}}`. A minimum of 2 and a maximum of 10 candidate locales, including the main locale for the transcription, is supported.|
+|`locale`|The locale of the batch transcription. This should match the expected locale of the audio data to transcribe. The locale can't be changed later.<br/><br/>This property is required.|
+|`model`|You can set the `model` property to use a specific base model or [Custom Speech](how-to-custom-speech-train-model.md) model. If you don't specify the `model`, the default base model for the locale is used. For more information, see [Using custom models](#using-custom-models).|
+|`profanityFilterMode`|Specifies how to handle profanity in recognition results. Accepted values are `None` to disable profanity filtering, `Masked` to replace profanity with asterisks, `Removed` to remove all profanity from the result, or `Tags` to add profanity tags. The default value is `Masked`. |
+|`punctuationMode`|Specifies how to handle punctuation in recognition results. Accepted values are `None` to disable punctuation, `Dictated` to imply explicit (spoken) punctuation, `Automatic` to let the decoder deal with punctuation, or `DictatedAndAutomatic` to use dictated and automatic punctuation. The default value is `DictatedAndAutomatic`.|
+|`timeToLive`|The duration, measured from when the transcription job is created, after which the transcription results are automatically deleted. The value is an ISO 8601 encoded duration. For example, specify `PT12H` for 12 hours. As an alternative, you can call [DeleteTranscription](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0/operations/DeleteTranscription) regularly after you retrieve the transcription results.|
+|`wordLevelTimestampsEnabled`|Specifies if word level timestamps should be included in the output. The default value is `false`.|
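+
+For reference, here's a sketch of a request body that combines several of these options, assuming the configuration options are set within the `properties` object as in the earlier create example. The content URL and property values are placeholders; substitute your own.
+
+```json
+{
+  "contentUrls": [
+    "https://crbn.us/hello.wav"
+  ],
+  "locale": "en-US",
+  "displayName": "My Transcription",
+  "properties": {
+    "wordLevelTimestampsEnabled": true,
+    "punctuationMode": "DictatedAndAutomatic",
+    "profanityFilterMode": "Masked",
+    "timeToLive": "PT12H"
+  }
+}
+```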
++++
+For Speech CLI help with transcription configuration options, run the following command:
+
+```azurecli-interactive
+spx help batch transcription create advanced
+```
++
+## Using custom models
+
+Batch transcription uses the default base model for the locale that you specify. You don't need to set any properties to use the default base model.
+
+Optionally, you can modify the previous [create transcription example](#create-a-transcription-job) by setting the `model` property to use a specific base model or [Custom Speech](how-to-custom-speech-train-model.md) model.
++
+```azurecli-interactive
+curl -v -X POST -H "Ocp-Apim-Subscription-Key: YourSubscriptionKey" -H "Content-Type: application/json" -d '{
+ "contentUrls": [
+ "https://crbn.us/hello.wav",
+ "https://crbn.us/whatstheweatherlike.wav"
+ ],
+ "locale": "en-US",
+ "displayName": "My Transcription",
+ "model": {
+ "self": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.1/models/base/1aae1070-7972-47e9-a977-87e3b05c457d"
+ },
+ "properties": {
+    "wordLevelTimestampsEnabled": true
+  }
+}' "https://YourServiceRegion.api.cognitive.microsoft.com/speechtotext/v3.1/transcriptions"
+```
+++
+```azurecli-interactive
+spx batch transcription create --name "My Transcription" --language "en-US" --content https://crbn.us/hello.wav;https://crbn.us/whatstheweatherlike.wav --model "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.1/models/base/1aae1070-7972-47e9-a977-87e3b05c457d"
+```
++
+To use a Custom Speech model for batch transcription, you need the model's URI. You can retrieve the model location when you create or get a model. The top-level `self` property in the response body is the model's URI. For an example, see the JSON response example in the [Create a model](how-to-custom-speech-train-model.md?pivots=rest-api#create-a-model) guide.
+
+> [!TIP]
+> A [hosted deployment endpoint](how-to-custom-speech-deploy-model.md) isn't required to use custom speech with the batch transcription service. You can conserve resources if the [custom speech model](how-to-custom-speech-train-model.md) is only used for batch transcription.
+
+Batch transcription requests for expired models fail with a 4xx error. Set the `model` property to a base model or custom model that hasn't yet expired, or omit the `model` property to always use the latest base model. For more information, see [Choose a model](how-to-custom-speech-create-project.md#choose-your-model) and [Custom Speech model lifecycle](how-to-custom-speech-model-and-endpoint-lifecycle.md).
++
+## Destination container URL
+
+The transcription result can be stored in an Azure container. If you don't specify a container, the Speech service stores the results in a container managed by Microsoft. In that case, when the transcription job is deleted, the transcription result data is also deleted.
+
+You can store the results of a batch transcription to a writable Azure Blob storage container using the `destinationContainerUrl` option in the [batch transcription creation request](#create-a-transcription-job). Note, however, that this option uses only an [ad hoc SAS](batch-transcription-audio-data.md#sas-url-for-batch-transcription) URI and doesn't support the [Trusted Azure services security mechanism](batch-transcription-audio-data.md#trusted-azure-services-security-mechanism). The Storage account resource of the destination container must allow all external traffic.
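+
+For example, here's a sketch of a create request that writes the results to your own container, assuming the `destinationContainerUrl` option is set within the `properties` object like the other configuration options. The container URL with its ad hoc SAS token is a placeholder; replace it, along with `YourSubscriptionKey` and `YourServiceRegion`, with your own values.
+
+```azurecli-interactive
+curl -v -X POST -H "Ocp-Apim-Subscription-Key: YourSubscriptionKey" -H "Content-Type: application/json" -d '{
+  "contentUrls": [
+    "https://crbn.us/hello.wav"
+  ],
+  "locale": "en-US",
+  "displayName": "My Transcription",
+  "properties": {
+    "destinationContainerUrl": "https://<storage_account_name>.blob.core.windows.net/<container_name>?SAS_TOKEN"
+  }
+}' "https://YourServiceRegion.api.cognitive.microsoft.com/speechtotext/v3.1/transcriptions"
+```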
+
+The [Trusted Azure services security mechanism](batch-transcription-audio-data.md#trusted-azure-services-security-mechanism) isn't supported for storing transcription results from a Speech resource. If you want to store the transcription results in an Azure Blob storage container via the [Trusted Azure services security mechanism](batch-transcription-audio-data.md#trusted-azure-services-security-mechanism), consider using [Bring-your-own-storage (BYOS)](speech-encryption-of-data-at-rest.md#bring-your-own-storage-byos-for-customization-and-logging). You can secure access to the BYOS-associated Storage account exactly as described in the [Trusted Azure services security mechanism](batch-transcription-audio-data.md#trusted-azure-services-security-mechanism) guide, except that the BYOS Speech resource needs the **Storage Blob Data Contributor** role assignment. The results of batch transcription performed by the BYOS Speech resource are automatically stored in the **TranscriptionData** folder of the **customspeech-artifacts** blob container.
+
+## Next steps
+
+- [Batch transcription overview](batch-transcription.md)
+- [Locate audio files for batch transcription](batch-transcription-audio-data.md)
+- [Get batch transcription results](batch-transcription-get.md)
+- [See batch transcription code samples at GitHub](https://github.com/Azure-Samples/cognitive-services-speech-sdk/tree/master/samples/batch/)
ai-services Batch Transcription Get https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/batch-transcription-get.md
+
+ Title: Get batch transcription results - Speech service
+
+description: With batch transcription, the Speech service transcribes the audio data and stores the results in a storage container. You can then retrieve the results from the storage container.
+++++++ Last updated : 11/29/2022
+zone_pivot_groups: speech-cli-rest
+++
+# Get batch transcription results
+
+To get transcription results, first check the [status](#get-transcription-status) of the transcription job. If the job is completed, you can [retrieve](#get-batch-transcription-results) the transcriptions and transcription report.
+
+## Get transcription status
++
+To get the status of the transcription job, call the [Transcriptions_Get](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/operations/Transcriptions_Get) operation of the [Speech to text REST API](rest-speech-to-text.md).
+
+Make an HTTP GET request using the URI as shown in the following example. Replace `YourTranscriptionId` with your transcription ID, replace `YourSubscriptionKey` with your Speech resource key, and replace `YourServiceRegion` with your Speech resource region.
+
+```azurecli-interactive
+curl -v -X GET "https://YourServiceRegion.api.cognitive.microsoft.com/speechtotext/v3.1/transcriptions/YourTranscriptionId" -H "Ocp-Apim-Subscription-Key: YourSubscriptionKey"
+```
+
+You should receive a response body in the following format:
+
+```json
+{
+ "self": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.1/transcriptions/637d9333-6559-47a6-b8de-c7d732c1ddf3",
+ "model": {
+ "self": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.1/models/base/aaa321e9-5a4e-4db1-88a2-f251bbe7b555"
+ },
+ "links": {
+ "files": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.1/transcriptions/637d9333-6559-47a6-b8de-c7d732c1ddf3/files"
+ },
+ "properties": {
+ "diarizationEnabled": false,
+ "wordLevelTimestampsEnabled": false,
+ "displayFormWordLevelTimestampsEnabled": true,
+ "channels": [
+ 0,
+ 1
+ ],
+ "punctuationMode": "DictatedAndAutomatic",
+ "profanityFilterMode": "Masked",
+ "duration": "PT3S",
+ "languageIdentification": {
+ "candidateLocales": [
+ "en-US",
+ "de-DE",
+ "es-ES"
+ ]
+ }
+ },
+ "lastActionDateTime": "2022-09-10T18:39:09Z",
+ "status": "Succeeded",
+ "createdDateTime": "2022-09-10T18:39:07Z",
+ "locale": "en-US",
+ "displayName": "My Transcription"
+}
+```
+
+The `status` property indicates the current status of the transcriptions. The transcriptions and transcription report will be available when the transcription status is `Succeeded`.
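+
+If you want to wait for completion from a script, a simple polling loop works. The following is a sketch that assumes a bash-like shell with the `jq` tool installed; replace the placeholders with your values.
+
+```azurecli-interactive
+# Poll the transcription status every 30 seconds until it reaches a terminal state.
+while true; do
+  status=$(curl -s -X GET "https://YourServiceRegion.api.cognitive.microsoft.com/speechtotext/v3.1/transcriptions/YourTranscriptionId" \
+    -H "Ocp-Apim-Subscription-Key: YourSubscriptionKey" | jq -r .status)
+  echo "Transcription status: $status"
+  if [ "$status" = "Succeeded" ] || [ "$status" = "Failed" ]; then
+    break
+  fi
+  sleep 30
+done
+```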
++++
+To get the status of the transcription job, use the `spx batch transcription status` command. Construct the request parameters according to the following instructions:
+
+- Set the `transcription` parameter to the ID of the transcription that you want to get.
+
+Here's an example Speech CLI command to get the transcription status:
+
+```azurecli-interactive
+spx batch transcription status --api-version v3.1 --transcription YourTranscriptionId
+```
+
+You should receive a response body in the following format:
+
+```json
+{
+ "self": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.1/transcriptions/637d9333-6559-47a6-b8de-c7d732c1ddf3",
+ "model": {
+ "self": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.1/models/base/aaa321e9-5a4e-4db1-88a2-f251bbe7b555"
+ },
+ "links": {
+ "files": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.1/transcriptions/637d9333-6559-47a6-b8de-c7d732c1ddf3/files"
+ },
+ "properties": {
+ "diarizationEnabled": false,
+ "wordLevelTimestampsEnabled": false,
+ "displayFormWordLevelTimestampsEnabled": true,
+ "channels": [
+ 0,
+ 1
+ ],
+ "punctuationMode": "DictatedAndAutomatic",
+ "profanityFilterMode": "Masked",
+ "duration": "PT3S"
+ },
+ "lastActionDateTime": "2022-09-10T18:39:09Z",
+ "status": "Succeeded",
+ "createdDateTime": "2022-09-10T18:39:07Z",
+ "locale": "en-US",
+ "displayName": "My Transcription"
+}
+```
+
+The `status` property indicates the current status of the transcriptions. The transcriptions and transcription report will be available when the transcription status is `Succeeded`.
+
+For Speech CLI help with transcriptions, run the following command:
+
+```azurecli-interactive
+spx help batch transcription
+```
++
+## Get transcription results
++
+The [Transcriptions_ListFiles](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/operations/Transcriptions_ListFiles) operation returns a list of result files for a transcription. A [transcription report](#transcription-report-file) file is provided for each submitted batch transcription job. In addition, one [transcription](#transcription-result-file) file (the end result) is provided for each successfully transcribed audio file.
+
+Make an HTTP GET request using the "files" URI from the previous response body. Replace `YourTranscriptionId` with your transcription ID, replace `YourSubscriptionKey` with your Speech resource key, and replace `YourServiceRegion` with your Speech resource region.
+
+```azurecli-interactive
+curl -v -X GET "https://YourServiceRegion.api.cognitive.microsoft.com/speechtotext/v3.1/transcriptions/YourTranscriptionId/files" -H "Ocp-Apim-Subscription-Key: YourSubscriptionKey"
+```
+
+You should receive a response body in the following format:
+
+```json
+{
+ "values": [
+ {
+ "self": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.1/transcriptions/637d9333-6559-47a6-b8de-c7d732c1ddf3/files/2dd180a1-434e-4368-a1ac-37350700284f",
+ "name": "contenturl_0.json",
+ "kind": "Transcription",
+ "properties": {
+ "size": 3407
+ },
+ "createdDateTime": "2022-09-10T18:39:09Z",
+ "links": {
+ "contentUrl": "https://spsvcprodeus.blob.core.windows.net/bestor-c6e3ae79-1b48-41bf-92ff-940bea3e5c2d/TranscriptionData/637d9333-6559-47a6-b8de-c7d732c1ddf3_0_0.json?sv=2021-08-06&st=2022-09-10T18%3A36%3A01Z&se=2022-09-11T06%3A41%3A01Z&sr=b&sp=rl&sig=AobsqO9DH9CIOuGC5ifFH3QpkQay6PjHiWn5G87FcIg%3D"
+ }
+ },
+ {
+ "self": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.1/transcriptions/637d9333-6559-47a6-b8de-c7d732c1ddf3/files/c027c6a9-2436-4303-b64b-e98e3c9fc2e3",
+ "name": "contenturl_1.json",
+ "kind": "Transcription",
+ "properties": {
+ "size": 8233
+ },
+ "createdDateTime": "2022-09-10T18:39:09Z",
+ "links": {
+ "contentUrl": "https://spsvcprodeus.blob.core.windows.net/bestor-c6e3ae79-1b48-41bf-92ff-940bea3e5c2d/TranscriptionData/637d9333-6559-47a6-b8de-c7d732c1ddf3_1_0.json?sv=2021-08-06&st=2022-09-10T18%3A36%3A01Z&se=2022-09-11T06%3A41%3A01Z&sr=b&sp=rl&sig=wO3VxbhLK4PhT3rwLpJXBYHYQi5EQqyl%2Fp1lgjNvfh0%3D"
+ }
+ },
+ {
+ "self": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.1/transcriptions/637d9333-6559-47a6-b8de-c7d732c1ddf3/files/faea9a41-c95c-4d91-96ff-e39225def642",
+ "name": "report.json",
+ "kind": "TranscriptionReport",
+ "properties": {
+ "size": 279
+ },
+ "createdDateTime": "2022-09-10T18:39:09Z",
+ "links": {
+ "contentUrl": "https://spsvcprodeus.blob.core.windows.net/bestor-c6e3ae79-1b48-41bf-92ff-940bea3e5c2d/TranscriptionData/637d9333-6559-47a6-b8de-c7d732c1ddf3_report.json?sv=2021-08-06&st=2022-09-10T18%3A36%3A01Z&se=2022-09-11T06%3A41%3A01Z&sr=b&sp=rl&sig=gk1k%2Ft5qa1TpmM45tPommx%2F2%2Bc%2FUUfsYTX5FoSa1u%2FY%3D"
+ }
+ }
+ ]
+}
+```
+
+The locations of the transcription and transcription report files, along with more details, are returned in the response body. The `contentUrl` property contains the URL to the [transcription](#transcription-result-file) (`"kind": "Transcription"`) or [transcription report](#transcription-report-file) (`"kind": "TranscriptionReport"`) file.
+
+If you didn't specify a container in the `destinationContainerUrl` property of the transcription request, the results are stored in a container managed by Microsoft. When the transcription job is deleted, the transcription result data is also deleted.
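+
+To download an individual file, you can make a GET request against its `contentUrl`, as in the following sketch. The URL already includes a SAS token, so no subscription key header is needed; replace `YourContentUrl` with the value from the response.
+
+```azurecli-interactive
+curl -v -X GET "YourContentUrl" > contenturl_0.json
+```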
+++
+The `spx batch transcription list` command returns a list of result files for a transcription. A [transcription report](#transcription-report-file) file is provided for each submitted batch transcription job. In addition, one [transcription](#transcription-result-file) file (the end result) is provided for each successfully transcribed audio file.
+
+- Set the required `files` flag.
+- Set the required `transcription` parameter to the ID of the transcription whose result files you want to get.
+
+Here's an example Speech CLI command that gets a list of result files for a transcription:
+
+```azurecli-interactive
+spx batch transcription list --api-version v3.1 --files --transcription YourTranscriptionId
+```
+
+You should receive a response body in the following format:
+
+```json
+{
+ "values": [
+ {
+ "self": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.1/transcriptions/637d9333-6559-47a6-b8de-c7d732c1ddf3/files/2dd180a1-434e-4368-a1ac-37350700284f",
+ "name": "contenturl_0.json",
+ "kind": "Transcription",
+ "properties": {
+ "size": 3407
+ },
+ "createdDateTime": "2022-09-10T18:39:09Z",
+ "links": {
+ "contentUrl": "https://spsvcprodeus.blob.core.windows.net/bestor-c6e3ae79-1b48-41bf-92ff-940bea3e5c2d/TranscriptionData/637d9333-6559-47a6-b8de-c7d732c1ddf3_0_0.json?sv=2021-08-06&st=2022-09-10T18%3A36%3A01Z&se=2022-09-11T06%3A41%3A01Z&sr=b&sp=rl&sig=AobsqO9DH9CIOuGC5ifFH3QpkQay6PjHiWn5G87FcIg%3D"
+ }
+ },
+ {
+ "self": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.1/transcriptions/637d9333-6559-47a6-b8de-c7d732c1ddf3/files/c027c6a9-2436-4303-b64b-e98e3c9fc2e3",
+ "name": "contenturl_1.json",
+ "kind": "Transcription",
+ "properties": {
+ "size": 8233
+ },
+ "createdDateTime": "2022-09-10T18:39:09Z",
+ "links": {
+ "contentUrl": "https://spsvcprodeus.blob.core.windows.net/bestor-c6e3ae79-1b48-41bf-92ff-940bea3e5c2d/TranscriptionData/637d9333-6559-47a6-b8de-c7d732c1ddf3_1_0.json?sv=2021-08-06&st=2022-09-10T18%3A36%3A01Z&se=2022-09-11T06%3A41%3A01Z&sr=b&sp=rl&sig=wO3VxbhLK4PhT3rwLpJXBYHYQi5EQqyl%2Fp1lgjNvfh0%3D"
+ }
+ },
+ {
+ "self": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.1/transcriptions/637d9333-6559-47a6-b8de-c7d732c1ddf3/files/faea9a41-c95c-4d91-96ff-e39225def642",
+ "name": "report.json",
+ "kind": "TranscriptionReport",
+ "properties": {
+ "size": 279
+ },
+ "createdDateTime": "2022-09-10T18:39:09Z",
+ "links": {
+ "contentUrl": "https://spsvcprodeus.blob.core.windows.net/bestor-c6e3ae79-1b48-41bf-92ff-940bea3e5c2d/TranscriptionData/637d9333-6559-47a6-b8de-c7d732c1ddf3_report.json?sv=2021-08-06&st=2022-09-10T18%3A36%3A01Z&se=2022-09-11T06%3A41%3A01Z&sr=b&sp=rl&sig=gk1k%2Ft5qa1TpmM45tPommx%2F2%2Bc%2FUUfsYTX5FoSa1u%2FY%3D"
+ }
+ }
+ ]
+}
+```
+
+The location of each transcription file and the transcription report file, along with more details, is returned in the response body. The `contentUrl` property contains the URL to the [transcription](#transcription-result-file) (`"kind": "Transcription"`) or [transcription report](#transcription-report-file) (`"kind": "TranscriptionReport"`) file.
+
+By default, the results are stored in a container managed by Microsoft. When the transcription job is deleted, the transcription result data is also deleted.
++
+### Transcription report file
+
+One transcription report file is provided for each submitted batch transcription job.
+
+The contents of each transcription report file are formatted as JSON, as shown in this example.
+
+```json
+{
+ "successfulTranscriptionsCount": 2,
+ "failedTranscriptionsCount": 0,
+ "details": [
+ {
+ "source": "https://crbn.us/hello.wav",
+ "status": "Succeeded"
+ },
+ {
+ "source": "https://crbn.us/whatstheweatherlike.wav",
+ "status": "Succeeded"
+ }
+ ]
+}
+```
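+
+When a job includes many audio files, it can help to scan the report programmatically for failures. The following C# sketch prints the counts and the status of each source file; it assumes the report was downloaded beforehand to a local file named `report.json`, which is an assumption for illustration.
+
+```csharp
+using System;
+using System.IO;
+using System.Text.Json;
+
+class ReadTranscriptionReport
+{
+    static void Main()
+    {
+        // Assumption: the transcription report was downloaded to report.json beforehand.
+        using var report = JsonDocument.Parse(File.ReadAllText("report.json"));
+        var root = report.RootElement;
+
+        Console.WriteLine($"Succeeded: {root.GetProperty("successfulTranscriptionsCount").GetInt32()}");
+        Console.WriteLine($"Failed: {root.GetProperty("failedTranscriptionsCount").GetInt32()}");
+
+        // List each source audio file and its status.
+        foreach (var detail in root.GetProperty("details").EnumerateArray())
+        {
+            Console.WriteLine($"{detail.GetProperty("source").GetString()}: {detail.GetProperty("status").GetString()}");
+        }
+    }
+}
+```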
+
+### Transcription result file
+
+One transcription result file is provided for each successfully transcribed audio file.
+
+The contents of each transcription result file are formatted as JSON, as shown in this example.
+
+```json
+{
+ "source": "...",
+ "timestamp": "2023-07-10T14:28:16Z",
+ "durationInTicks": 25800000,
+ "duration": "PT2.58S",
+ "combinedRecognizedPhrases": [
+ {
+ "channel": 0,
+ "lexical": "hello world",
+ "itn": "hello world",
+ "maskedITN": "hello world",
+ "display": "Hello world."
+ }
+ ],
+ "recognizedPhrases": [
+ {
+ "recognitionStatus": "Success",
+ "channel": 0,
+ "offset": "PT0.76S",
+ "duration": "PT1.32S",
+ "offsetInTicks": 7600000.0,
+ "durationInTicks": 13200000.0,
+ "nBest": [
+ {
+ "confidence": 0.5643338,
+ "lexical": "hello world",
+ "itn": "hello world",
+ "maskedITN": "hello world",
+ "display": "Hello world.",
+ "displayWords": [
+ {
+ "displayText": "Hello",
+ "offset": "PT0.76S",
+ "duration": "PT0.76S",
+ "offsetInTicks": 7600000.0,
+ "durationInTicks": 7600000.0
+ },
+ {
+ "displayText": "world.",
+ "offset": "PT1.52S",
+ "duration": "PT0.56S",
+ "offsetInTicks": 15200000.0,
+ "durationInTicks": 5600000.0
+ }
+ ]
+ },
+ {
+ "confidence": 0.1769063,
+ "lexical": "helloworld",
+ "itn": "helloworld",
+ "maskedITN": "helloworld",
+ "display": "helloworld"
+ },
+ {
+ "confidence": 0.49964225,
+ "lexical": "hello worlds",
+ "itn": "hello worlds",
+ "maskedITN": "hello worlds",
+ "display": "hello worlds"
+ },
+ {
+ "confidence": 0.4995761,
+ "lexical": "hello worm",
+ "itn": "hello worm",
+ "maskedITN": "hello worm",
+ "display": "hello worm"
+ },
+ {
+ "confidence": 0.49418187,
+ "lexical": "hello word",
+ "itn": "hello word",
+ "maskedITN": "hello word",
+ "display": "hello word"
+ }
+ ]
+ }
+ ]
+}
+```
+
+Depending in part on the request parameters set when you created the transcription job, the transcription file can contain the following result properties.
+
+|Property|Description|
+|--|--|
+|`channel`|The channel number of the results. For stereo audio streams, the left and right channels are split during the transcription. A JSON result file is created for each input audio file.|
+|`combinedRecognizedPhrases`|The concatenated results of all phrases for the channel.|
+|`confidence`|The confidence value for the recognition.|
+|`display`|The display form of the recognized text. Added punctuation and capitalization are included.|
+|`displayWords`|The timestamps for each word of the transcription. The `displayFormWordLevelTimestampsEnabled` request property must be set to `true`, otherwise this property is not present.<br/><br/>**Note**: This property is only available with Speech to text REST API version 3.1.|
+|`duration`|The audio duration. The value is an ISO 8601 encoded duration.|
+|`durationInTicks`|The audio duration in ticks (1 tick is 100 nanoseconds).|
+|`itn`|The inverse text normalized (ITN) form of the recognized text. Abbreviations such as "Doctor Smith" to "Dr Smith", phone numbers, and other transformations are applied.|
+|`lexical`|The actual words recognized.|
+|`locale`|The locale identified from the input audio. The `languageIdentification` request property must be set, otherwise this property is not present.<br/><br/>**Note**: This property is only available with Speech to text REST API version 3.1.|
+|`maskedITN`|The ITN form with profanity masking applied.|
+|`nBest`|A list of possible transcriptions for the current phrase with confidences.|
+|`offset`|The offset in audio of this phrase. The value is an ISO 8601 encoded duration.|
+|`offsetInTicks`|The offset in audio of this phrase in ticks (1 tick is 100 nanoseconds).|
+|`recognitionStatus`|The recognition state. For example: "Success" or "Failure".|
+|`recognizedPhrases`|The list of results for each phrase.|
+|`source`|The URL that was provided as the input audio source. The source corresponds to the `contentUrls` or `contentContainerUrl` request property. The `source` property is the only way to confirm the audio input for a transcription.|
+|`speaker`|The identified speaker. The `diarization` and `diarizationEnabled` request properties must be set, otherwise this property is not present.|
+|`timestamp`|The creation date and time of the transcription. The value is an ISO 8601 encoded timestamp.|
+|`words`|A list of results with lexical text for each word of the phrase. The `wordLevelTimestampsEnabled` request property must be set to `true`, otherwise this property is not present.|
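+
+As an example of consuming these properties, the following C# sketch reads a downloaded transcription result file, prints the combined display text per channel, and prints word-level timestamps when `displayWords` is present. The local file name is an assumption for illustration.
+
+```csharp
+using System;
+using System.IO;
+using System.Text.Json;
+
+class ReadTranscriptionResult
+{
+    static void Main()
+    {
+        // Assumption: a transcription result file was downloaded to transcription-result.json.
+        using var doc = JsonDocument.Parse(File.ReadAllText("transcription-result.json"));
+        var root = doc.RootElement;
+
+        // Print the combined display text for each audio channel.
+        foreach (var combined in root.GetProperty("combinedRecognizedPhrases").EnumerateArray())
+        {
+            Console.WriteLine($"Channel {combined.GetProperty("channel").GetInt32()}: {combined.GetProperty("display").GetString()}");
+        }
+
+        // Print word-level timestamps if displayFormWordLevelTimestampsEnabled was set on the request.
+        foreach (var phrase in root.GetProperty("recognizedPhrases").EnumerateArray())
+        {
+            var best = phrase.GetProperty("nBest")[0];
+            if (best.TryGetProperty("displayWords", out var words))
+            {
+                foreach (var word in words.EnumerateArray())
+                {
+                    Console.WriteLine($"{word.GetProperty("displayText").GetString()} at {word.GetProperty("offset").GetString()}");
+                }
+            }
+        }
+    }
+}
+```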
++
+## Next steps
+
+- [Batch transcription overview](batch-transcription.md)
+- [Locate audio files for batch transcription](batch-transcription-audio-data.md)
+- [Create a batch transcription](batch-transcription-create.md)
+- [See batch transcription code samples at GitHub](https://github.com/Azure-Samples/cognitive-services-speech-sdk/tree/master/samples/batch/)
ai-services Batch Transcription https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/batch-transcription.md
+
+ Title: Batch transcription overview - Speech service
+
+description: Batch transcription is ideal if you want to transcribe a large quantity of audio in storage, such as Azure blobs. Then you can asynchronously retrieve transcriptions.
+++++++ Last updated : 03/15/2023
+ms.devlang: csharp
+++
+# What is batch transcription?
+
+Batch transcription is used to transcribe a large amount of audio data in storage. Both the [Speech to text REST API](rest-speech-to-text.md#transcriptions) and [Speech CLI](spx-basics.md) support batch transcription.
+
+You should provide multiple files per request or point to an Azure Blob Storage container with the audio files to transcribe. The batch transcription service can handle a large number of submitted transcriptions. The service transcribes the files concurrently, which reduces the turnaround time.
+
+## How does it work?
+
+With batch transcriptions, you submit the audio data, and then retrieve transcription results asynchronously. The service transcribes the audio data and stores the results in a storage container. You can then retrieve the results from the storage container.
+
+> [!TIP]
+> For a low or no-code solution, you can use the [Batch Speech to text Connector](/connectors/cognitiveservicesspe/) in Power Platform applications such as Power Automate, Power Apps, and Logic Apps. See the [Power Automate batch transcription](power-automate-batch-transcription.md) guide to get started.
+
+To use the batch transcription REST API:
+
+1. [Locate audio files for batch transcription](batch-transcription-audio-data.md) - You can upload your own data or use existing audio files via public URI or [shared access signature (SAS)](../../storage/common/storage-sas-overview.md) URI.
+1. [Create a batch transcription](batch-transcription-create.md) - Submit the transcription job with parameters such as the audio files, the transcription language, and the transcription model.
+1. [Get batch transcription results](batch-transcription-get.md) - Check transcription status and retrieve transcription results asynchronously.
+
+Batch transcription jobs are scheduled on a best-effort basis. You can't estimate when a job will change into the running state, but it should happen within minutes under normal system load. When the job is in the running state, the transcription occurs faster than the audio runtime playback speed.
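+
+For example, step 2 (creating a transcription) is a single REST call to the `/speechtotext/v3.1/transcriptions` endpoint. Here's a minimal C# sketch of that call, assuming an HTTP client rather than the Speech SDK; the key, region, and request values are placeholders, and the sample audio URL is the one used elsewhere in this documentation. See the linked articles for the full workflow of monitoring the job and retrieving results.
+
+```csharp
+using System;
+using System.Net.Http;
+using System.Text;
+using System.Threading.Tasks;
+
+class CreateBatchTranscription
+{
+    static async Task Main()
+    {
+        var key = "YourSubscriptionKey";      // Placeholder: Speech resource key.
+        var region = "YourServiceRegion";     // Placeholder: Speech resource region.
+
+        using var client = new HttpClient();
+        client.DefaultRequestHeaders.Add("Ocp-Apim-Subscription-Key", key);
+
+        // Minimal request body: one publicly accessible (or SAS) audio URL, a locale, and a display name.
+        var body = @"{
+            ""contentUrls"": [ ""https://crbn.us/hello.wav"" ],
+            ""locale"": ""en-US"",
+            ""displayName"": ""My batch transcription""
+        }";
+
+        var response = await client.PostAsync(
+            $"https://{region}.api.cognitive.microsoft.com/speechtotext/v3.1/transcriptions",
+            new StringContent(body, Encoding.UTF8, "application/json"));
+
+        // The response includes a self URL that you poll until the status is Succeeded,
+        // and then you request its /files to get the transcription results.
+        Console.WriteLine(await response.Content.ReadAsStringAsync());
+    }
+}
+```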
+
+## Next steps
+
+- [Locate audio files for batch transcription](batch-transcription-audio-data.md)
+- [Create a batch transcription](batch-transcription-create.md)
+- [Get batch transcription results](batch-transcription-get.md)
+- [See batch transcription code samples at GitHub](https://github.com/Azure-Samples/cognitive-services-speech-sdk/tree/master/samples/batch/)
ai-services Call Center Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/call-center-overview.md
+
+ Title: Azure AI services for Call Center Overview
+
+description: Azure AI services for Language and Speech can help you realize partial or full automation of telephony-based customer interactions, and provide accessibility across multiple channels.
++++++ Last updated : 09/18/2022++
+# Call Center Overview
+
+Azure AI services for Language and Speech can help you realize partial or full automation of telephony-based customer interactions, and provide accessibility across multiple channels. With the Language and Speech services, you can further analyze call center transcriptions, extract and redact conversation personally identifiable information (PII), summarize the transcription, and detect the sentiment.
+
+Some example scenarios for the implementation of Azure AI services in call and contact centers are:
+- Virtual agents: Conversational AI-based telephony-integrated voicebots and voice-enabled chatbots
+- Agent-assist: Real-time transcription and analysis of a call to improve the customer experience by providing insights and suggesting actions to agents
+- Post-call analytics: Analysis of customer conversations after the call to create insights that improve understanding, support continuous improvement of call handling, and optimize quality assurance and compliance control, along with other insight-driven optimizations.
+
+> [!TIP]
+> Try the [Language Studio](https://language.cognitive.azure.com) or [Speech Studio](https://aka.ms/speechstudio/callcenter) for a demonstration on how to use the Language and Speech services to analyze call center conversations.
+>
+> To deploy a call center transcription solution to Azure with a no-code approach, try the [Ingestion Client](./ingestion-client.md).
+
+## Azure AI services features for call centers
+
+A holistic call center implementation typically incorporates technologies from the Language and Speech services.
+
+Audio data used in call centers is typically generated through landlines, mobile phones, and radios. It's often narrowband, in the range of 8 kHz, which can create challenges when you're converting speech to text. The Speech service recognition models are trained to ensure that you can get high-quality transcriptions, however you choose to capture the audio.
+
+Once you've transcribed your audio with the Speech service, you can use the Language service to perform analytics on your call center data, such as sentiment analysis, summarization of the reason for customer calls and how they were resolved, and extraction and redaction of conversation PII.
+
+### Speech service
+
+The Speech service offers the following features that can be used for call center use cases:
+
+- [Real-time speech to text](./how-to-recognize-speech.md): Recognize and transcribe audio in real-time from multiple inputs. For example, with virtual agents or agent-assist, you can continuously recognize audio input and control how to process results based on multiple events.
+- [Batch speech to text](./batch-transcription.md): Transcribe large amounts of audio files asynchronously, including speaker diarization; this mode is typically used in post-call analytics scenarios. Diarization is the process of recognizing and separating speakers in mono channel audio data.
+- [Text to speech](./text-to-speech.md): Text to speech enables your applications, tools, or devices to convert text into humanlike synthesized speech.
+- [Speaker identification](./speaker-recognition-overview.md): Helps you determine an unknown speaker's identity within a group of enrolled speakers and is typically used for call center customer verification scenarios or fraud detection.
+- [Language Identification](./language-identification.md): Identify languages spoken in audio and can be used in real-time and post-call analysis for insights or to control the environment (such as output language of a virtual agent).
+
+The Speech service works well with prebuilt models. However, you might want to further customize and tune the experience for your product or environment. Typical examples for Speech customization include:
+
+| Speech customization | Description |
+| -- | -- |
+| [Custom Speech](./custom-speech-overview.md) | A speech to text feature used to evaluate and improve the speech recognition accuracy of use-case specific entities (such as alpha-numeric customer, case, and contract IDs, license plates, and names). You can also train a custom model with your own product names and industry terminology. |
+| [Custom Neural Voice](./custom-neural-voice.md) | A text to speech feature that lets you create a one-of-a-kind, customized, synthetic voice for your applications. |
+
+### Language service
+
+The Language service offers the following features that can be used for call center use cases:
+
+- [Personally Identifiable Information (PII) extraction and redaction](../language-service/personally-identifiable-information/how-to-call-for-conversations.md): Identify, categorize, and redact sensitive information in conversation transcription.
+- [Conversation summarization](../language-service/summarization/overview.md?tabs=conversation-summarization): Summarize in abstract text what each conversation participant said about the issues and resolutions. For example, a call center can group product issues that have a high volume.
+- [Sentiment analysis and opinion mining](../language-service/sentiment-opinion-mining/overview.md): Analyze transcriptions and associate positive, neutral, or negative sentiment at the utterance and conversation-level.
+
+While the Language service works well with prebuilt models, you might want to further customize and tune models to extract more information from your data. Typical examples for Language customization include:
+
+| Language customization | Description |
+| -- | -- |
+| [Custom NER (named entity recognition)](../language-service/custom-named-entity-recognition/overview.md) | Improve the detection and extraction of entities in transcriptions. |
+| [Custom text classification](../language-service/custom-text-classification/overview.md) | Classify and label transcribed utterances with either single or multiple classifications. |
+
+You can find an overview of all Language service features and customization options [here](../language-service/overview.md#available-features).
+
+## Next steps
+
+* [Post-call transcription and analytics quickstart](./call-center-quickstart.md)
+* [Try out the Language Studio](https://language.cognitive.azure.com)
+* [Try out the Speech Studio](https://aka.ms/speechstudio/callcenter)
ai-services Call Center Quickstart https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/call-center-quickstart.md
+
+ Title: "Post-call transcription and analytics quickstart - Speech service"
+
+description: In this quickstart, you perform sentiment analysis and conversation summarization of call center transcriptions.
++++++ Last updated : 09/20/2022+
+ms.devlang: csharp
++
+# Quickstart: Post-call transcription and analytics
++
+## Next steps
+
+> [!div class="nextstepaction"]
+> [Try the Ingestion Client](ingestion-client.md)
ai-services Call Center Telephony Integration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/call-center-telephony-integration.md
+
+ Title: Call Center Telephony Integration - Speech service
+
+description: A common scenario for speech to text is transcribing large volumes of telephony data that come from various systems, such as interactive voice response (IVR) in real-time. This requires an integration with the Telephony System used.
++++++ Last updated : 08/10/2022+++
+# Telephony Integration
+
+To support real-time scenarios, like Virtual Agent and Agent Assist in call centers, an integration with the call center's telephony system is required.
+
+Typically, integration with the Speech service is handled by a telephony client connected to the customer's SIP/RTP processor, for example, to a Session Border Controller (SBC).
+
+Usually the telephony client handles the incoming audio stream from the SIP/RTP processor, converts it to PCM, and connects the streams by using continuous recognition. It also triages the processing of the results, for example, analyzing speech transcripts for Agent Assist or connecting with a dialog processing engine (for example, Azure Bot Framework or Power Virtual Agents) for Virtual Agent.
+
+For easier integration the Speech service also supports "ALAW in WAV container" and "MULAW in WAV container" for audio streaming.
+
+To build this integration we recommend using the [Speech SDK](./speech-sdk.md).
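+
+For illustration, here's a minimal C# sketch of feeding a MULAW audio stream from a telephony client into continuous recognition through a Speech SDK push stream. The key, region, and the source of the audio bytes are placeholders; how you receive chunks from the SIP/RTP processor depends on your telephony stack.
+
+```csharp
+using System;
+using System.Threading.Tasks;
+using Microsoft.CognitiveServices.Speech;
+using Microsoft.CognitiveServices.Speech.Audio;
+
+class TelephonyStreamRecognition
+{
+    static async Task Main()
+    {
+        var speechConfig = SpeechConfig.FromSubscription("YourSubscriptionKey", "YourServiceRegion");
+
+        // Declare the compressed container format delivered by the telephony client.
+        var format = AudioStreamFormat.GetCompressedFormat(AudioStreamContainerFormat.MULAW);
+        using var pushStream = AudioInputStream.CreatePushStream(format);
+        using var audioConfig = AudioConfig.FromStreamInput(pushStream);
+        using var recognizer = new SpeechRecognizer(speechConfig, audioConfig);
+
+        recognizer.Recognized += (s, e) =>
+        {
+            if (e.Result.Reason == ResultReason.RecognizedSpeech)
+            {
+                // Hand the transcript off to Agent Assist analysis or a dialog engine here.
+                Console.WriteLine($"RECOGNIZED: {e.Result.Text}");
+            }
+        };
+
+        await recognizer.StartContinuousRecognitionAsync();
+
+        // Placeholder: write audio chunks received from the SIP/RTP processor as they arrive.
+        // pushStream.Write(audioChunkBytes);
+
+        Console.ReadLine();
+        await recognizer.StopContinuousRecognitionAsync();
+    }
+}
+```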
++
+> [!TIP]
+> For guidance on reducing Text to speech latency check out the **[How to lower speech synthesis latency](./how-to-lower-speech-synthesis-latency.md?pivots=programming-language-csharp)** guide.
+>
+> In addition, consider implementing a text to speech cache to store all synthesized audio and playback from the cache in case a string has previously been synthesized.
+
+## Next steps
+
+* [Learn about Speech SDK](./speech-sdk.md)
+* [How to lower speech synthesis latency](./how-to-lower-speech-synthesis-latency.md)
ai-services Captioning Concepts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/captioning-concepts.md
+
+ Title: Captioning with speech to text - Speech service
+
+description: An overview of key concepts for captioning with speech to text.
+++++++ Last updated : 06/02/2022+
+zone_pivot_groups: programming-languages-speech-sdk-cli
++
+# Captioning with speech to text
+
+In this guide, you learn how to create captions with speech to text. Captioning is the process of converting the audio content of a television broadcast, webcast, film, video, live event, or other production into text, and then displaying the text on a screen, monitor, or other visual display system.
+
+Concepts include how to synchronize captions with your input audio, apply profanity filters, get partial results, apply customizations, and identify spoken languages for multilingual scenarios. This guide covers captioning for speech, but doesn't include speaker ID or sound effects such as bells ringing.
+
+Here are some common captioning scenarios:
+- Online courses and instructional videos
+- Sporting events
+- Voice and video calls
+
+The following are aspects to consider when using captioning:
+* Let your audience know that captions are generated by an automated service.
+* Center captions horizontally on the screen, in a large and prominent font.
+* Consider whether to use partial results, when to start displaying captions, and how many words to show at a time.
+* Learn about captioning protocols such as [SMPTE-TT](https://ieeexplore.ieee.org/document/7291854).
+* Consider output formats such as SRT (SubRip Text) and WebVTT (Web Video Text Tracks). These can be loaded onto most video players such as VLC, automatically adding the captions on to your video.
+
+> [!TIP]
+> Try the [Speech Studio](https://aka.ms/speechstudio/captioning) and choose a sample video clip to see real-time or offline processed captioning results.
+>
+> Try the [Azure AI Video Indexer](../../azure-video-indexer/video-indexer-overview.md) as a demonstration of how you can get captions for videos that you upload.
+
+Captioning can accompany real-time or pre-recorded speech. Whether you're showing captions in real-time or with a recording, you can use the [Speech SDK](speech-sdk.md) or [Speech CLI](spx-overview.md) to recognize speech and get transcriptions. You can also use the [Batch transcription API](batch-transcription.md) for pre-recorded video.
+
+## Caption output format
+
+The Speech service supports output formats such as SRT (SubRip Text) and WebVTT (Web Video Text Tracks). These can be loaded onto most video players such as VLC, automatically adding the captions on to your video.
+
+> [!TIP]
+> The Speech service provides [profanity filter](display-text-format.md#profanity-filter) options. You can specify whether to mask, remove, or show profanity.
+
+The [SRT](https://docs.fileformat.com/video/srt/) (SubRip Text) timespan output format is `hh:mm:ss,fff`.
+
+```srt
+1
+00:00:00,180 --> 00:00:03,230
+Welcome to applied Mathematics course 201.
+```
+
+The [WebVTT](https://www.w3.org/TR/webvtt1/#introduction) (Web Video Text Tracks) timespan output format is `hh:mm:ss.fff`.
+
+```
+WEBVTT
+
+00:00:00.180 --> 00:00:03.230
+Welcome to applied Mathematics course 201.
+{
+ "ResultId": "8e89437b4b9349088a933f8db4ccc263",
+ "Duration": "00:00:03.0500000"
+}
+```
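+
+The only difference between the two timespan formats is the milliseconds separator: a comma for SRT and a period for WebVTT. As a rough sketch, here's how you might format an offset and duration reported in 100-nanosecond ticks into both formats in C#; the tick values match the cue shown in the preceding examples.
+
+```csharp
+using System;
+
+class CaptionTimestamps
+{
+    // Offsets and durations from the Speech service are in ticks (1 tick = 100 nanoseconds).
+    static string ToSrt(long ticks) => TimeSpan.FromTicks(ticks).ToString(@"hh\:mm\:ss\,fff");
+    static string ToWebVtt(long ticks) => TimeSpan.FromTicks(ticks).ToString(@"hh\:mm\:ss\.fff");
+
+    static void Main()
+    {
+        long offsetInTicks = 1800000;     // 0.18 seconds
+        long durationInTicks = 30500000;  // 3.05 seconds
+
+        // Caption cue from offset to offset + duration.
+        Console.WriteLine($"{ToSrt(offsetInTicks)} --> {ToSrt(offsetInTicks + durationInTicks)}");
+        Console.WriteLine($"{ToWebVtt(offsetInTicks)} --> {ToWebVtt(offsetInTicks + durationInTicks)}");
+    }
+}
+```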
+
+## Input audio to the Speech service
+
+For real-time captioning, use a microphone or audio input stream instead of file input. For examples of how to recognize speech from a microphone, see the [Speech to text quickstart](get-started-speech-to-text.md) and [How to recognize speech](how-to-recognize-speech.md) documentation. For more information about streaming, see [How to use the audio input stream](how-to-use-audio-input-streams.md).
+
+For captioning of a prerecording, send file input to the Speech service. For more information, see [How to use compressed input audio](how-to-use-codec-compressed-audio-input-streams.md).
+
+## Caption and speech synchronization
+
+You'll want to synchronize captions with the audio track, whether it's done in real-time or with a prerecording.
+
+The Speech service returns the offset and duration of the recognized speech.
++
+For more information, see [Get speech recognition results](get-speech-recognition-results.md).
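+
+For example, with the Speech SDK in C#, each recognition result exposes an offset and duration that you can use to time caption cues against the audio track. This is a minimal sketch assuming the setup from the speech to text quickstart; the key and region are placeholders.
+
+```csharp
+using System;
+using System.Threading.Tasks;
+using Microsoft.CognitiveServices.Speech;
+
+class CaptionTiming
+{
+    static async Task Main()
+    {
+        var speechConfig = SpeechConfig.FromSubscription("YourSubscriptionKey", "YourServiceRegion");
+        using var recognizer = new SpeechRecognizer(speechConfig);
+
+        // Recognize a single utterance from the default microphone.
+        var result = await recognizer.RecognizeOnceAsync();
+        if (result.Reason == ResultReason.RecognizedSpeech)
+        {
+            // OffsetInTicks and Duration position the caption cue on the audio timeline.
+            var start = TimeSpan.FromTicks(result.OffsetInTicks);
+            var end = start + result.Duration;
+            Console.WriteLine($"{start} --> {end}: {result.Text}");
+        }
+    }
+}
+```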
+
+## Get partial results
+
+Consider when to start displaying captions, and how many words to show at a time. Speech recognition results are subject to change while an utterance is still being recognized. Partial results are returned with each `Recognizing` event. As each word is processed, the Speech service re-evaluates an utterance in the new context and again returns the best result. The new result isn't guaranteed to be the same as the previous result. The complete and final transcription of an utterance is returned with the `Recognized` event.
+
+> [!NOTE]
+> Punctuation of partial results is not available.
+
+For captioning of prerecorded speech or wherever latency isn't a concern, you could wait for the complete transcription of each utterance before displaying any words. Given the final offset and duration of each word in an utterance, you know when to show subsequent words at pace with the soundtrack.
+
+Real-time captioning presents tradeoffs with respect to latency versus accuracy. You could show the text from each `Recognizing` event as soon as possible. However, if you can accept some latency, you can improve the accuracy of the caption by displaying the text from the `Recognized` event. There's also some middle ground, which is referred to as "stable partial results".
+
+You can request that the Speech service return fewer `Recognizing` events that are more accurate. This is done by setting the `SpeechServiceResponse_StablePartialResultThreshold` property to a value between `0` and `2147483647`. The value that you set is the number of times a word has to be recognized before the Speech service returns a `Recognizing` event. For example, if you set the `SpeechServiceResponse_StablePartialResultThreshold` property value to `5`, the Speech service will affirm recognition of a word at least five times before returning the partial results to you with a `Recognizing` event.
+
+```csharp
+speechConfig.SetProperty(PropertyId.SpeechServiceResponse_StablePartialResultThreshold, 5);
+```
+```cpp
+speechConfig->SetProperty(PropertyId::SpeechServiceResponse_StablePartialResultThreshold, 5);
+```
+```go
+speechConfig.SetProperty(common.SpeechServiceResponseStablePartialResultThreshold, 5)
+```
+```java
+speechConfig.setProperty(PropertyId.SpeechServiceResponse_StablePartialResultThreshold, 5);
+```
+```javascript
+speechConfig.setProperty(sdk.PropertyId.SpeechServiceResponse_StablePartialResultThreshold, 5);
+```
+```objective-c
+[self.speechConfig setPropertyTo:5 byId:SPXSpeechServiceResponseStablePartialResultThreshold];
+```
+```swift
+self.speechConfig!.setPropertyTo(5, by: SPXPropertyId.speechServiceResponseStablePartialResultThreshold)
+```
+```python
+speech_config.set_property(property_id = speechsdk.PropertyId.SpeechServiceResponse_StablePartialResultThreshold, value = 5)
+```
+```console
+spx recognize --file caption.this.mp4 --format any --property SpeechServiceResponse_StablePartialResultThreshold=5 --output vtt file - --output srt file -
+```
+
+Requesting more stable partial results will reduce the "flickering" or changing text, but it can increase latency as you wait for higher confidence results.
+
+### Stable partial threshold example
+In the following recognition sequence without setting a stable partial threshold, "math" is recognized as a word, but the final text is "mathematics". At another point, "course 2" is recognized, but the final text is "course 201".
+
+```console
+RECOGNIZING: Text=welcome to
+RECOGNIZING: Text=welcome to applied math
+RECOGNIZING: Text=welcome to applied mathematics
+RECOGNIZING: Text=welcome to applied mathematics course 2
+RECOGNIZING: Text=welcome to applied mathematics course 201
+RECOGNIZED: Text=Welcome to applied Mathematics course 201.
+```
+
+In the previous example, the transcriptions were additive and no text was retracted. But at other times you might find that the partial results were inaccurate. In either case, the unstable partial results can be perceived as "flickering" when displayed.
+
+For this example, if the stable partial result threshold is set to `5`, no words are altered or backtracked.
+
+```console
+RECOGNIZING: Text=welcome to
+RECOGNIZING: Text=welcome to applied
+RECOGNIZING: Text=welcome to applied mathematics
+RECOGNIZED: Text=Welcome to applied Mathematics course 201.
+```
+
+## Language identification
+
+If the language in the audio could change, use continuous [language identification](language-identification.md). Language identification is used to identify languages spoken in audio when compared against a list of [supported languages](language-support.md?tabs=language-identification). You provide up to 10 candidate languages, at least one of which is expected to be in the audio. The Speech service returns the most likely language in the audio.
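+
+For example, with the Speech SDK in C#, you might configure continuous language identification roughly like this. This is a sketch assuming a recent SDK version; the key, region, audio file, and candidate languages are placeholders.
+
+```csharp
+using System;
+using System.Threading.Tasks;
+using Microsoft.CognitiveServices.Speech;
+using Microsoft.CognitiveServices.Speech.Audio;
+
+class ContinuousLanguageId
+{
+    static async Task Main()
+    {
+        var speechConfig = SpeechConfig.FromSubscription("YourSubscriptionKey", "YourServiceRegion");
+
+        // Ask for continuous language identification instead of the default at-start mode.
+        speechConfig.SetProperty(PropertyId.SpeechServiceConnection_LanguageIdMode, "Continuous");
+
+        // Candidate languages: up to 10, at least one of which should be in the audio.
+        var autoDetectConfig = AutoDetectSourceLanguageConfig.FromLanguages(new[] { "en-US", "es-ES" });
+
+        using var audioConfig = AudioConfig.FromWavFileInput("YourAudioFile.wav");
+        using var recognizer = new SpeechRecognizer(speechConfig, autoDetectConfig, audioConfig);
+
+        recognizer.Recognized += (s, e) =>
+        {
+            if (e.Result.Reason == ResultReason.RecognizedSpeech)
+            {
+                // Report the detected language alongside the recognized text.
+                var detected = AutoDetectSourceLanguageResult.FromResult(e.Result).Language;
+                Console.WriteLine($"[{detected}] {e.Result.Text}");
+            }
+        };
+
+        await recognizer.StartContinuousRecognitionAsync();
+        await Task.Delay(TimeSpan.FromSeconds(30));  // Let recognition run for a while.
+        await recognizer.StopContinuousRecognitionAsync();
+    }
+}
+```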
+
+## Customizations to improve accuracy
+
+A [phrase list](improve-accuracy-phrase-list.md) is a list of words or phrases that you provide right before starting speech recognition. Adding a phrase to a phrase list increases its importance, thus making it more likely to be recognized.
+
+Examples of phrases include:
+* Names
+* Geographical locations
+* Homonyms
+* Words or acronyms unique to your industry or organization
+
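+For example, with the Speech SDK in C#, you can add phrases to an existing recognizer with a phrase list grammar. This is a minimal sketch; the recognizer is assumed to exist already, and the phrases are placeholders.
+
+```csharp
+// Assumption: recognizer is an existing SpeechRecognizer from the Speech SDK.
+var phraseList = PhraseListGrammar.FromRecognizer(recognizer);
+
+// Phrases added here are weighted more heavily during recognition.
+phraseList.AddPhrase("Contoso");
+phraseList.AddPhrase("Jessie");
+phraseList.AddPhrase("Rehaan");
+```
+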
+There are some situations where [training a custom model](custom-speech-overview.md) is likely the best option to improve accuracy. For example, if you're captioning orthodontics lectures, you might want to train a custom model with the corresponding domain data.
+
+## Next steps
+
+* [Captioning quickstart](captioning-quickstart.md)
+* [Get speech recognition results](get-speech-recognition-results.md)
ai-services Captioning Quickstart https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/captioning-quickstart.md
+
+ Title: "Create captions with speech to text quickstart - Speech service"
+
+description: In this quickstart, you convert speech to text as captions.
++++++ Last updated : 04/23/2022+
+ms.devlang: cpp, csharp
+
+zone_pivot_groups: programming-languages-speech-sdk-cli
++
+# Quickstart: Create captions with speech to text
++++++++++
+## Next steps
+
+> [!div class="nextstepaction"]
+> [Learn more about speech recognition](how-to-recognize-speech.md)
ai-services Conversation Transcription https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/conversation-transcription.md
+
+ Title: Conversation transcription overview - Speech service
+
+description: You use the conversation transcription feature for meetings. It combines recognition, speaker ID, and diarization to provide transcription of any conversation.
++++++ Last updated : 01/23/2022++++
+# What is conversation transcription?
+
+Conversation transcription is a [speech to text](speech-to-text.md) solution that provides real-time or asynchronous transcription of any conversation. This feature, which is currently in preview, combines speech recognition, speaker identification, and sentence attribution to determine who said what, and when, in a conversation.
+
+> [!NOTE]
+> Multi-device conversation access is a preview feature.
+
+## Key features
+
+You might find the following features of conversation transcription useful:
+
+- **Timestamps:** Each speaker utterance has a timestamp, so that you can easily find when a phrase was said.
+- **Readable transcripts:** Transcripts have formatting and punctuation added automatically to ensure the text closely matches what was being said.
+- **User profiles:** User profiles are generated by collecting user voice samples and sending them to signature generation.
+- **Speaker identification:** Speakers are identified by using user profiles, and a _speaker identifier_ is assigned to each.
+- **Multi-speaker diarization:** Determine who said what by synthesizing the audio stream with each speaker identifier.
+- **Real-time transcription:** Provide live transcripts of who is saying what, and when, while the conversation is happening.
+- **Asynchronous transcription:** Provide transcripts with higher accuracy by using a multichannel audio stream.
+
+> [!NOTE]
+> Although conversation transcription doesn't put a limit on the number of speakers in the room, it's optimized for 2-10 speakers per session.
+
+## Get started
+
+See the real-time conversation transcription [quickstart](how-to-use-conversation-transcription.md) to get started.
+
+## Use cases
+
+To make meetings inclusive for everyone, such as participants who are deaf and hard of hearing, it's important to have transcription in real-time. Conversation transcription in real-time mode takes meeting audio and determines who is saying what, allowing all meeting participants to follow the transcript and participate in the meeting, without a delay.
+
+Meeting participants can focus on the meeting and leave note-taking to conversation transcription. Participants can actively engage in the meeting and quickly follow up on next steps, using the transcript instead of taking notes and potentially missing something during the meeting.
+
+## How it works
+
+The following diagram shows a high-level overview of how the feature works.
+
+![Diagram that shows the relationships among different pieces of the conversation transcription solution.](media/scenarios/conversation-transcription-service.png)
+
+## Expected inputs
+
+Conversation transcription uses two types of inputs:
+
+- **Multi-channel audio stream:** For specification and design details, see [Microphone array recommendations](./speech-sdk-microphone.md).
+- **User voice samples:** Conversation transcription needs user profiles in advance of the conversation for speaker identification. Collect audio recordings from each user, and then send the recordings to the [signature generation service](https://aka.ms/cts/signaturegenservice) to validate the audio and generate user profiles.
+
+> [!NOTE]
+> Single channel audio configuration for conversation transcription is currently only available in private preview.
+
+User voice samples for voice signatures are required for speaker identification. Speakers who don't have voice samples are recognized as *unidentified*. Unidentified speakers can still be differentiated when the `DifferentiateGuestSpeakers` property is enabled (see the following example). The transcription output then shows speakers as, for example, *Guest_0* and *Guest_1*, instead of recognizing them as pre-enrolled specific speaker names.
+
+```csharp
+config.SetProperty("DifferentiateGuestSpeakers", "true");
+```
+
+## Real-time vs. asynchronous
+
+The following sections provide more detail about transcription modes you can choose.
+
+### Real-time
+
+Audio data is processed live to return the speaker identifier and transcript. Select this mode if your transcription solution requirement is to provide conversation participants a live transcript view of their ongoing conversation. For example, building an application to make meetings more accessible to participants with hearing loss or deafness is an ideal use case for real-time transcription. A rough sketch of this mode with the Speech SDK follows.
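+
+The following C# sketch uses the `Microsoft.CognitiveServices.Speech.Transcription` types shown in the [quickstart](how-to-use-conversation-transcription.md); the key, region, and audio file are placeholders, and the setup (including voice signatures for pre-enrolled speakers) follows that article rather than this sketch.
+
+```csharp
+using System;
+using System.Threading.Tasks;
+using Microsoft.CognitiveServices.Speech;
+using Microsoft.CognitiveServices.Speech.Audio;
+using Microsoft.CognitiveServices.Speech.Transcription;
+
+class RealTimeConversationTranscription
+{
+    static async Task Main()
+    {
+        var config = SpeechConfig.FromSubscription("YourSubscriptionKey", "YourServiceRegion");
+
+        // Properties used by the conversation transcription quickstart.
+        config.SetProperty("ConversationTranscriptionInRoomAndOnline", "true");
+        config.SetProperty("DifferentiateGuestSpeakers", "true");
+
+        using var audioInput = AudioConfig.FromWavFileInput("YourMultiChannelAudio.wav");
+        using var conversation = await Conversation.CreateConversationAsync(config, Guid.NewGuid().ToString());
+        using var transcriber = new ConversationTranscriber(audioInput);
+        await transcriber.JoinConversationAsync(conversation);
+
+        // Each transcribed phrase is attributed to a speaker identifier as it arrives.
+        transcriber.Transcribed += (s, e) =>
+            Console.WriteLine($"{e.Result.UserId}: {e.Result.Text}");
+
+        await transcriber.StartTranscribingAsync();
+        await Task.Delay(TimeSpan.FromSeconds(30));  // Transcribe for a while.
+        await transcriber.StopTranscribingAsync();
+    }
+}
+```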
+
+### Asynchronous
+
+Audio data is batch processed to return the speaker identifier and transcript. Select this mode if your transcription solution requirement is to provide higher accuracy, without the live transcript view. For example, if you want to build an application to allow meeting participants to easily catch up on missed meetings, then use the asynchronous transcription mode to get high-accuracy transcription results.
+
+### Real-time plus asynchronous
+
+Audio data is processed live to return the speaker identifier and transcript, and, in addition, requests a high-accuracy transcript through asynchronous processing. Select this mode if your application has a need for real-time transcription, and also requires a higher accuracy transcript for use after the conversation or meeting occurred.
+
+## Language support
+
+Currently, conversation transcription supports [all speech to text languages](language-support.md?tabs=stt) in the following regions: `centralus`, `eastasia`, `eastus`, `westeurope`.
+
+## Next steps
+
+> [!div class="nextstepaction"]
+> [Transcribe conversations in real-time](how-to-use-conversation-transcription.md)
ai-services Custom Commands Encryption Of Data At Rest https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/custom-commands-encryption-of-data-at-rest.md
+
+ Title: Custom Commands service encryption of data at rest
+
+description: Custom Commands encryption of data at rest.
++++++ Last updated : 07/05/2020++++
+# Custom Commands encryption of data at rest
++
+Custom Commands automatically encrypts your data when it's persisted to the cloud. This encryption protects your data and helps you meet your organizational security and compliance commitments.
+
+> [!NOTE]
+> Custom Commands service doesn't automatically enable encryption for the LUIS resources associated with your application. If needed, you must enable encryption for your LUIS resource from [here](../luis/encrypt-data-at-rest.md).
+
+## About Azure AI services encryption
+Data is encrypted and decrypted using [FIPS 140-2](https://en.wikipedia.org/wiki/FIPS_140-2) compliant [256-bit AES](https://en.wikipedia.org/wiki/Advanced_Encryption_Standard) encryption. Encryption and decryption are transparent, meaning encryption and access are managed for you. Your data is secure by default and you don't need to modify your code or applications to take advantage of encryption.
+
+## About encryption key management
+
+When you use Custom Commands, the Speech service stores the following data in the cloud:
+* Configuration JSON behind the Custom Commands application
+* LUIS authoring and prediction key
+
+By default, your subscription uses Microsoft-managed encryption keys. However, you can also manage your subscription with your own encryption keys. Customer-managed keys (CMK), also known as bring your own key (BYOK), offer greater flexibility to create, rotate, disable, and revoke access controls. You can also audit the encryption keys used to protect your data.
++
+> [!IMPORTANT]
+> Customer-managed keys are only available for resources created after June 27, 2020. To use CMK with the Speech service, you will need to create a new Speech resource. Once the resource is created, you can use Azure Key Vault to set up your managed identity.
+
+To request the ability to use customer-managed keys, fill out and submit the Customer-Managed Key Request Form. It will take approximately 3-5 business days to hear back on the status of your request. Depending on demand, you may be placed in a queue and approved as space becomes available. Once approved for using CMK with the Speech service, you'll need to create a new Speech resource from the Azure portal.
+ > [!NOTE]
+ > **Customer-managed keys (CMK) are supported only for Custom Commands.**
+ >
+ > **Custom Speech and Custom Voice still support only Bring Your Own Storage (BYOS).** [Learn more](speech-encryption-of-data-at-rest.md)
+ >
+ > If you're using the given Speech resource to access these services, compliance needs must be met by explicitly configuring BYOS.
++
+## Customer-managed keys with Azure Key Vault
+
+You must use Azure Key Vault to store customer-managed keys. You can either create your own keys and store them in a key vault, or you can use the Azure Key Vault APIs to generate keys. The Speech resource and the key vault must be in the same region and in the same Azure Active Directory (Azure AD) tenant, but they can be in different subscriptions. For more information about Azure Key Vault, see [What is Azure Key Vault?](../../key-vault/general/overview.md).
+
+When a new Speech resource is created and used to provision a Custom Commands application, data is always encrypted by using Microsoft-managed keys. It's not possible to enable customer-managed keys at the time that the resource is created. Customer-managed keys are stored in Azure Key Vault, and the key vault must be provisioned with access policies that grant key permissions to the managed identity that is associated with the Azure AI services resource. The managed identity is available only after the resource is created using the pricing tier required for CMK.
+
+Enabling customer managed keys will also enable a system assigned [managed identity](../../active-directory/managed-identities-azure-resources/overview.md), a feature of Azure AD. Once the system assigned managed identity is enabled, this resource will be registered with Azure Active Directory. After being registered, the managed identity will be given access to the Key Vault selected during customer managed key setup.
+
+> [!IMPORTANT]
+> If you disable system assigned managed identities, access to the key vault will be removed and any data encrypted with the customer keys will no longer be accessible. Any features that depend on this data will stop working.
+
+> [!IMPORTANT]
+> Managed identities do not currently support cross-directory scenarios. When you configure customer-managed keys in the Azure portal, a managed identity is automatically assigned under the covers. If you subsequently move the subscription, resource group, or resource from one Azure AD directory to another, the managed identity associated with the resource is not transferred to the new tenant, so customer-managed keys may no longer work. For more information, see **Transferring a subscription between Azure AD directories** in [FAQs and known issues with managed identities for Azure resources](../../active-directory/managed-identities-azure-resources/known-issues.md#transferring-a-subscription-between-azure-ad-directories).
+
+## Configure Azure Key Vault
+
+Using customer-managed keys requires that two properties be set in the key vault, **Soft Delete** and **Do Not Purge**. These properties are not enabled by default, but can be enabled using either PowerShell or Azure CLI on a new or existing key vault.
+
+> [!IMPORTANT]
+> If you do not have the **Soft Delete** and **Do Not Purge** properties enabled and you delete your key, you won't be able to recover the data in your Azure AI services resource.
+
+To learn how to enable these properties on an existing key vault, see the sections titled **Enabling soft-delete** and **Enabling Purge Protection** in one of the following articles:
+
+- [How to use soft-delete with PowerShell](../../key-vault/general/key-vault-recovery.md).
+- [How to use soft-delete with CLI](../../key-vault/general/key-vault-recovery.md).
+
+Only RSA keys of size 2048 are supported with Azure Storage encryption. For more information about keys, see **Key Vault keys** in [About Azure Key Vault keys, secrets and certificates](../../key-vault/general/about-keys-secrets-certificates.md).
+
+## Enable customer-managed keys for your Speech resource
+
+To enable customer-managed keys in the Azure portal, follow these steps:
+
+1. Navigate to your Speech resource.
+1. On the **Settings** blade for your Speech resource, select **Encryption**. Select the **Customer Managed Keys** option, as shown in the following figure.
+
+ ![Screenshot showing how to select Customer Managed Keys](media/custom-commands/select-cmk.png)
+
+## Specify a key
+
+After you enable customer-managed keys, you'll have the opportunity to specify a key to associate with the Azure AI services resource.
+
+### Specify a key as a URI
+
+To specify a key as a URI, follow these steps:
+
+1. To locate the key URI in the Azure portal, navigate to your key vault, and select the **Keys** setting. Select the desired key, then click the key to view its versions. Select a key version to view the settings for that version.
+1. Copy the value of the **Key Identifier** field, which provides the URI.
+
+ ![Screenshot showing key vault key URI](../media/cognitive-services-encryption/key-uri-portal.png)
+
+1. In the **Encryption** settings for your Speech service, choose the **Enter key URI** option.
+1. Paste the URI that you copied into the **Key URI** field.
+
+1. Specify the subscription that contains the key vault.
+1. Save your changes.
+
+### Specify a key from a key vault
+
+To specify a key from a key vault, first make sure that you have a key vault that contains a key. To specify a key from a key vault, follow these steps:
+
+1. Choose the **Select from Key Vault** option.
+1. Select the key vault containing the key you want to use.
+1. Select the key from the key vault.
+
+ ![Screenshot showing customer-managed key option](media/custom-commands/configure-cmk-fromKV.png)
+
+1. Save your changes.
+
+## Update the key version
+
+When you create a new version of a key, update the Speech resource to use the new version. Follow these steps:
+
+1. Navigate to your Speech resource and display the **Encryption** settings.
+1. Enter the URI for the new key version. Alternately, you can select the key vault and the key again to update the version.
+1. Save your changes.
+
+## Use a different key
+
+To change the key used for encryption, follow these steps:
+
+1. Navigate to your Speech resource and display the **Encryption** settings.
+1. Enter the URI for the new key. Alternately, you can select the key vault and choose a new key.
+1. Save your changes.
+
+## Rotate customer-managed keys
+
+You can rotate a customer-managed key in Azure Key Vault according to your compliance policies. When the key is rotated, you must update the Speech resource to use the new key URI. To learn how to update the resource to use a new version of the key in the Azure portal, see [Update the key version](#update-the-key-version).
+
+Rotating the key does not trigger re-encryption of data in the resource. There is no further action required from the user.
+
+## Revoke access to customer-managed keys
+
+To revoke access to customer-managed keys, use PowerShell or Azure CLI. For more information, see [Azure Key Vault PowerShell](/powershell/module/az.keyvault//) or [Azure Key Vault CLI](/cli/azure/keyvault). Revoking access effectively blocks access to all data in the Azure AI services resource, as the encryption key is inaccessible by Azure AI services.
+
+## Disable customer-managed keys
+
+When you disable customer-managed keys, your Speech resource is then encrypted with Microsoft-managed keys. To disable customer-managed keys, follow these steps:
+
+1. Navigate to your Speech resource and display the **Encryption** settings.
+1. Deselect the checkbox next to the **Use your own key** setting.
+
+## Next steps
+
+* [Speech Customer-Managed Key Request Form](https://aka.ms/cogsvc-cmk)
+* [Learn more about Azure Key Vault](../../key-vault/general/overview.md)
+* [What are managed identities](../../active-directory/managed-identities-azure-resources/overview.md)
ai-services Custom Commands References https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/custom-commands-references.md
+
+ Title: 'Custom Commands concepts and definitions - Speech service'
+
+description: In this article, you learn about concepts and definitions for Custom Commands applications.
++++++ Last updated : 06/18/2020++++
+# Custom Commands concepts and definitions
++
+This article serves as a reference for concepts and definitions for Custom Commands applications.
+
+## Commands configuration
+Commands are the basic building blocks of a Custom Commands application. A command is a set of configurations required to complete a specific task defined by a user.
+
+### Example sentences
+Example utterances are the set of examples the user can say to trigger a particular command. You need to provide only a sample of utterances and not an exhaustive list.
+
+### Parameters
+Parameters are information required by the commands to complete a task. In complex scenarios, parameters can also be used to define conditions that trigger custom actions.
+
+### Completion rules
+Completion rules are a series of rules to be executed after the command is ready to be fulfilled, for example, when all the conditions of the rules are satisfied.
+
+### Interaction rules
+Interaction rules are additional rules to handle more specific or complex situations. You can add additional validations or configure advanced features such as confirmations or a one-step correction. You can also build your own custom interaction rules.
+
+## Parameters configuration
+
+Parameters are information required by commands to complete a task. In complex scenarios, parameters can also be used to define conditions that trigger custom actions.
+
+### Name
+A parameter is identified by the name property. You should always give a descriptive name to a parameter. A parameter can be referred across different sections, for example, when you construct conditions, speech responses, or other actions.
+
+### Required
+This check box indicates whether a value for this parameter is required for command fulfillment or completion. You must configure responses to prompt the user to provide a value if a parameter is marked as required.
+
+Note that, if you configured a **required parameter** to have a **Default value**, the system will still explicitly prompt for the parameter's value.
+
+### Type
+Custom Commands supports the following parameter types:
+
+* Age
+* Currency
+* DateTime
+* Dimension
+* Email
+* Geography
+* Number
+* Ordinal
+* Percentage
+* PersonName
+* PhoneNumber
+* String
+* Temperature
+* Url
+
+Every locale supports the "String" parameter type, but availability of all other types differs by locale. Custom Commands uses LUIS's prebuilt entity resolution, so the availability of a parameter type in a locale depends on LUIS's prebuilt entity support in that locale. You can find [more details on LUIS's prebuilt entity support per locale](../luis/luis-reference-prebuilt-entities.md). Custom LUIS entities (such as machine learned entities) are currently not supported.
+
+Some parameter types like Number, String and DateTime support default value configuration, which you can configure from the portal.
+
+### Configuration
+Configuration is a parameter property defined only for the type String. The following values are supported:
+
+* **None**.
+* **Accept full input**: When enabled, a parameter accepts any input utterance. This option is useful when the user needs a parameter with the full utterance. An example is postal addresses.
+* **Accept predefined input values from an external catalog**: This value is used to configure a parameter that can assume a wide variety of values. An example is a sales catalog. In this case, the catalog is hosted on an external web endpoint and can be configured independently.
+* **Accept predefined input values from internal catalog**: This value is used to configure a parameter that can assume a few values. In this case, values must be configured in the Speech Studio.
++
+### Validation
+Validations are constructs applicable to certain parameter types that let you configure constraints on a parameter's value. Currently, Custom Commands supports validations on the following parameter types:
+
+* DateTime
+* Number
+
+## Rules configuration
+A rule in Custom Commands is defined by a set of *conditions* that, when met, execute a set of *actions*. Rules also let you configure *post-execution state* and *expectations* for the next turn.
+
+### Types
+Custom Commands supports the following rule categories:
+
+* **Completion rules**: These rules must be executed upon command fulfillment. All the rules configured in this section for which the conditions are true will be executed.
+* **Interaction rules**: These rules can be used to configure additional custom validations, confirmations, and a one-step correction, or to accomplish any other custom dialog logic. Interaction rules are evaluated at each turn in the processing and can be used to trigger completion rules.
+
+The different actions configured as part of a rule are executed in the order in which they appear in the authoring portal.
+
+### Conditions
+Conditions are the requirements that must be met for a rule to execute. Rules conditions can be of the following types:
+
+* **Parameter value equals**: The configured parameter's value equals a specific value.
+* **No parameter value**: The configured parameters shouldn't have any value.
+* **Required parameters**: The configured parameter has a value.
+* **All required parameters**: All the parameters that were marked as required have a value.
+* **Updated parameters**: One or more parameter values were updated as a result of processing the current input (utterance or activity).
+* **Confirmation was successful**: The input utterance or activity was a successful confirmation (yes).
+* **Confirmation was denied**: The input utterance or activity was not a successful confirmation (no).
+* **Previous command needs to be updated**: This condition is used in instances when you want to catch a negated confirmation along with an update. Behind the scenes, this condition is configured for when the dialog engine detects a negative confirmation where the intent is the same as the previous turn, and the user has responded with an update.
+
+### Actions
+* **Send speech response**: Send a speech response back to the client.
+* **Update parameter value**: Update the value of a command parameter to a specified value.
+* **Clear parameter value**: Clear the command parameter value.
+* **Call web endpoint**: Make a call to a web endpoint.
+* **Send activity to client**: Send a custom activity to the client.
+
+### Expectations
+Expectations are used to configure hints for the processing of the next user input. The following types are supported:
+
+* **Expecting confirmation from user**: This expectation specifies that the application is expecting a confirmation (yes/no) for the next user input.
+* **Expecting parameter(s) input from user**: This expectation specifies one or more command parameters that the application is expecting from the user input.
+
+### Post-execution state
+The post-execution state is the dialog state after processing the current input (utterance or activity). It's of the following types:
+
+* **Keep current state**: Keep current state only.
+* **Complete the command**: Complete the command and no additional rules of the command will be processed.
+* **Execute completion rules**: Execute all the valid completion rules.
+* **Wait for user's input**: Wait for the next user input.
++
+## Next steps
+
+> [!div class="nextstepaction"]
+> [See samples on GitHub](https://aka.ms/speech/cc-samples)
ai-services Custom Commands https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/custom-commands.md
+
+ Title: Custom Commands overview - Speech service
+
+description: An overview of the features, capabilities, and restrictions for Custom Commands, a solution for creating voice applications.
++++++ Last updated : 03/11/2020++++
+# What is Custom Commands?
++
+Applications such as [Voice assistants](voice-assistants.md) listen to users and take an action in response, often speaking back. They use [speech to text](speech-to-text.md) to transcribe the user's speech, then take action on the natural language understanding of the text. This action frequently includes spoken output from the assistant generated with [text to speech](text-to-speech.md). Devices connect to assistants with the Speech SDK's `DialogServiceConnector` object.
+
+Custom Commands makes it easy to build rich voice commanding apps optimized for voice-first interaction experiences. It provides a unified authoring experience, an automatic hosting model, and relatively lower complexity, helping you focus on building the best solution for your voice commanding scenarios.
+
+Custom Commands is best suited for task completion or command-and-control scenarios such as "Turn on the overhead light" or "Make it 5 degrees warmer". Custom Commands is well suited for Internet of Things (IoT) devices, ambient and headless devices. Examples include solutions for Hospitality, Retail and Automotive industries, where you want voice-controlled experiences for your guests, in-store inventory management or in-car functionality.
+
+If you're interested in building complex conversational apps, you're encouraged to try the Bot Framework using the [Virtual Assistant Solution](/azure/bot-service/bot-builder-enterprise-template-overview). You can add voice to any bot framework bot using Direct Line Speech.
+
+Good candidates for Custom Commands have a fixed vocabulary with well-defined sets of variables. For example, home automation tasks, like controlling a thermostat, are ideal.
+
+ ![Examples of task completion scenarios](media/voice-assistants/task-completion-examples.png "task completion examples")
+
+## Getting started with Custom Commands
+
+Our goal with Custom Commands is to reduce the cognitive load of learning all the different technologies, so that you can focus on building your voice commanding app. The first step is to <a href="https://portal.azure.com/#create/Microsoft.CognitiveServicesSpeechServices" target="_blank">create a Speech resource</a>. You can author your Custom Commands app in Speech Studio and publish it, after which an on-device application can communicate with it by using the Speech SDK.
+
+#### Authoring flow for Custom Commands
+ ![Authoring flow for Custom Commands](media/voice-assistants/custom-commands-flow.png "The Custom Commands authoring flow")
+
+Follow our quickstart to have your first Custom Commands app up and running in less than 10 minutes.
+
+* [Create a voice assistant using Custom Commands](quickstart-custom-commands-application.md)
+
+Once you're done with the quickstart, explore our how-to guides for detailed steps on designing, developing, debugging, deploying, and integrating a Custom Commands application.
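+
+A client device talks to a published Custom Commands application through the Speech SDK's `DialogServiceConnector`. The following C# sketch is a minimal illustration of that connection; the application ID, key, and region values are placeholders for your own.
+
+```csharp
+using System;
+using Microsoft.CognitiveServices.Speech.Audio;
+using Microsoft.CognitiveServices.Speech.Dialog;
+
+// Placeholder values: your published Custom Commands application ID,
+// Speech resource key, and service region.
+var commandsConfig = CustomCommandsConfig.FromSubscription(
+    "YourCustomCommandsAppId", "YourSpeechResourceKey", "YourServiceRegion");
+
+var audioConfig = AudioConfig.FromDefaultMicrophoneInput();
+using var connector = new DialogServiceConnector(commandsConfig, audioConfig);
+
+// Activities (including speech responses) sent by the Custom Commands app arrive here.
+connector.ActivityReceived += (s, e) =>
+    Console.WriteLine($"Activity received (hasAudio={e.HasAudio}): {e.Activity}");
+
+await connector.ConnectAsync();
+await connector.ListenOnceAsync(); // Capture one utterance from the microphone and send it to the app.
+```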
+
+## Building Voice Assistants with Custom Commands
+> [!VIDEO https://www.youtube.com/embed/1zr0umHGFyc]
+
+## Next steps
+
+* [View our Voice Assistants repo on GitHub for samples](https://aka.ms/speech/cc-samples)
+* [Go to the Speech Studio to try out Custom Commands](https://aka.ms/speechstudio/customcommands)
+* [Get the Speech SDK](speech-sdk.md)
ai-services Custom Keyword Basics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/custom-keyword-basics.md
+
+ Title: Create a custom keyword quickstart - Speech service
+
+description: When a user speaks the keyword, your device sends their dictation to the cloud, until the user stops speaking. Customizing your keyword is an effective way to differentiate your device and strengthen your branding.
++++++ Last updated : 11/12/2021+
+ms.devlang: csharp, objective-c, python
+
+zone_pivot_groups: programming-languages-speech-services
++
+# Quickstart: Create a custom keyword
+++++++++++
+## Next steps
+
+> [!div class="nextstepaction"]
+> [Get the Speech SDK](speech-sdk.md)
ai-services Custom Neural Voice Lite https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/custom-neural-voice-lite.md
+
+ Title: Custom Neural Voice Lite - Speech service
+
+description: Use Custom Neural Voice Lite to demo and evaluate Custom Neural Voice before investing in professional recordings to create a higher-quality voice.
++++++ Last updated : 10/27/2022+++
+# Custom Neural Voice Lite (preview)
+
+Speech Studio provides two Custom Neural Voice (CNV) project types: CNV Lite and CNV Pro.
+
+- Custom Neural Voice (CNV) Pro allows you to upload your training data collected through professional recording studios and create a higher-quality voice that is nearly indistinguishable from its human samples. CNV Pro access is limited based on eligibility and usage criteria. Request access on the [intake form](https://aka.ms/customneural).
+- Custom Neural Voice (CNV) Lite is a project type in public preview. You can demo and evaluate Custom Neural Voice before investing in professional recordings to create a higher-quality voice. No application is required. Microsoft restricts and selects the recording and testing samples for use with CNV Lite. You must apply for full access to CNV Pro in order to deploy and use the CNV Lite model for business purposes.
+
+With a CNV Lite project, you record your voice online by reading 20-50 pre-defined scripts provided by Microsoft. After you've recorded at least 20 samples, you can start to train a model. Once the model is trained successfully, you can review the model and check out 20 output samples produced with another set of pre-defined scripts.
+
+See the [supported languages](language-support.md?tabs=tts) for Custom Neural Voice.
+
+## Compare project types
+
+The following table summarizes key differences between the CNV Lite and CNV Pro project types.
+
+|**Items**|**Lite (Preview)**| **Pro**|
+||||
+|Target scenarios |Demonstration or evaluation |Professional scenarios like brand and character voices for chat bots, or audio content reading.|
+|Training data |Record online using Speech Studio |Bring your own data. Recording in a professional studio is recommended. |
+|Scripts for recording |Provided in Speech Studio |Use your own scripts that match the use case scenario. Microsoft provides [example scripts](https://github.com/Azure-Samples/Cognitive-Speech-TTS/tree/master/CustomVoice/script) for reference. |
+|Required data size |20-50 utterances |300-2000 utterances|
+|Training time |Less than one compute hour| Approximately 20-40 compute hours |
+|Voice quality |Moderate quality|High quality |
+|Availability |Anyone can record samples online and train a model for demo and evaluation purposes. Full access to Custom Neural Voice is required if you want to deploy the CNV Lite model for business use. |Data upload isn't restricted, but you can only train and deploy a CNV Pro model after access is approved. CNV Pro access is limited based on eligibility and usage criteria. Request access on the [intake form](https://aka.ms/customneural).|
+|Pricing |Per unit prices apply equally for both the CNV Lite and CNV Pro projects. Check the [pricing details here](https://azure.microsoft.com/pricing/details/cognitive-services/speech-services/). |Per unit prices apply equally for both the CNV Lite and CNV Pro projects. Check the [pricing details here](https://azure.microsoft.com/pricing/details/cognitive-services/speech-services/). |
+
+## Create a Custom Neural Voice Lite project
+
+To create a Custom Neural Voice Lite project, follow these steps:
+
+1. Sign in to the [Speech Studio](https://aka.ms/speechstudio/customvoice).
+1. Select the subscription and Speech resource to work with.
+
+ > [!IMPORTANT]
+ > Custom Neural Voice training is currently only available in some regions. See footnotes in the [regions](regions.md#speech-service) table for more information.
+
+1. Select **Custom Voice** > **Create a project**.
+1. Select **Custom Neural Voice Lite** > **Next**.
+
+ > [!NOTE]
+ > To create a Custom Neural Voice Pro project, see [Create a project for Custom Neural Voice](how-to-custom-voice.md).
+
+1. Follow the instructions provided by the wizard to create your project.
+1. Select the new project by name or select **Go to project**. You'll see these menu items in the left panel: **Record and build**, **Review model**, and **Deploy model**.
+ :::image type="content" source="media/custom-voice/lite/lite-project-get-started.png" alt-text="Screenshot with an overview of the CNV Lite record, train, test, and deploy workflow.":::
+
+The CNV Lite project expires after 90 days unless the [verbal statement](#submit-verbal-statement) recorded by the voice talent is submitted.
+
+## Record and build a CNV Lite model
+
+Record at least 20 voice samples (up to 50) with provided scripts online. Voice samples recorded here will be used to create a synthetic version of your voice.
+
+Here are some tips to help you record your voice samples:
+- Use a good microphone. Increase the clarity of your samples by using a high-quality microphone. Speak about 8 inches away from the microphone to avoid mouth noises.
+- Avoid background noise. Record in a quiet room without background noise or echoing.
+- Relax and speak naturally. Allow yourself to express emotions as you read the sentences.
+- Record in one take. To keep a consistent energy level, record all sentences in one session.
+- Pronounce each word correctly, and speak clearly.
+
+To record and build a CNV Lite model, follow these steps:
+
+1. Select **Custom Voice** > Your project name > **Record and build**.
+1. Select **Get started**.
+1. Read the Voice talent terms of use carefully. Select the checkbox to acknowledge the terms of use.
+1. Select **Accept**.
+1. Press the microphone icon to start the noise check. This noise check will take only a few seconds, and you won't need to speak during it.
+1. If noise was detected, you can select **Check again** to repeat the noise check. If no noise was detected, you can select **Done** to proceed to the next step.
+ :::image type="content" source="media/custom-voice/lite/cnv-record-noise-check.png" alt-text="Screenshot of the noise check results when noise was detected.":::
+1. Review the recording tips and select **Got it**. For the best results, go to a quiet area without background noise before recording your voice samples.
+1. Press the microphone icon to start recording.
+ :::image type="content" source="media/custom-voice/lite/cnv-record-sample.png" alt-text="Screenshot of the record sample dashboard.":::
+1. Press the stop icon to stop recording.
+1. Review quality metrics. After recording each sample, check its quality metric before continuing to the next one.
+1. Record more samples. Although you can create a model with just 20 samples, it's recommended that you record up to 50 to get better quality.
+1. Select **Train model** to start the training process.
+
+The training process takes approximately one compute hour. You can check the progress of the training process in the **Review model** page.
+
+## Review model
+
+To review the CNV Lite model and listen to your own synthetic voice, follow these steps:
+
+1. Select **Custom Voice** > Your project name > **Review model**. Here you can review the voice model name, model language, sample data size, and training progress. The voice name is composed of the word "Neural" appended to your project name.
+1. Select the voice model name to review the model details and listen to the sample text to speech results.
+1. Select the play icon to hear your voice speak each script.
+ :::image type="content" source="media/custom-voice/lite/lite-review-model.png" alt-text="Screenshot of the review sample output dashboard.":::
+
+## Submit verbal statement
+
+A verbal statement recorded by the voice talent is required before you can [deploy the model](#deploy-model) for your business use.
+
+To submit the voice talent verbal statement, follow these steps:
+
+1. Select **Custom Voice** > Your project name > **Deploy model** > **Manage your voice talent**.
+ :::image type="content" source="media/custom-voice/lite/lite-voice-talent-consent.png" alt-text="Screenshot of the record voice talent consent dashboard.":::
+1. Select the model.
+1. Enter the voice talent name and company name.
+1. Read and record the statement. Select the microphone icon to start recording. Select the stop icon to stop recording.
+1. Select **Submit** to submit the statement.
+1. Check the processing status in the script table at the bottom of the dashboard. Once the status is **Succeeded**, you can [deploy the model](#deploy-model).
+
+## Deploy model
+
+To deploy your voice model and use it in your applications, you must get the full access to Custom Neural Voice. Request access on the [intake form](https://aka.ms/customneural). Within approximately 10 business days, you'll receive an email with the approval status. A [verbal statement](#submit-verbal-statement) recorded by the voice talent is also required before you can deploy the model for your business use.
+
+To deploy a CNV Lite model, follow these steps:
+
+1. Select **Custom Voice** > Your project name > **Deploy model** > **Deploy model**.
+1. Select a voice model name and then select **Next**.
+1. Enter a name and description for your endpoint and then select **Next**.
+1. Select the checkbox to agree to the terms of use and then select **Next**.
+1. Select **Deploy** to deploy the model.
+
+From here, you can use the CNV Lite voice model in the same way as you would use a CNV Pro voice model. For example, you can [suspend or resume](how-to-deploy-and-use-endpoint.md) an endpoint after it's created, to limit spend and conserve resources that aren't in use. You can also access the voice in the [Audio Content Creation](how-to-audio-content-creation.md) tool in the [Speech Studio](https://aka.ms/speechstudio/audiocontentcreation).
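+
+For illustration, the following C# sketch shows one way to synthesize speech with a deployed CNV Lite voice by pointing the Speech SDK at the custom endpoint. The key, region, endpoint ID, and voice name values are placeholders; the voice name follows the "project name plus Neural" pattern described earlier.
+
+```csharp
+using Microsoft.CognitiveServices.Speech;
+
+// Placeholder values: your Speech resource key and region, the endpoint ID of the
+// deployed CNV Lite model, and its voice name.
+var speechConfig = SpeechConfig.FromSubscription("YourSpeechResourceKey", "YourServiceRegion");
+speechConfig.EndpointId = "YourCustomVoiceEndpointId";
+speechConfig.SpeechSynthesisVoiceName = "YourProjectNameNeural";
+
+using var synthesizer = new SpeechSynthesizer(speechConfig);
+var result = await synthesizer.SpeakTextAsync("Hello from my custom neural voice.");
+```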
+
+## Next steps
+
+* [Create a CNV Pro project](how-to-custom-voice.md)
+* [Try the text to speech quickstart](get-started-text-to-speech.md)
+* [Learn more about speech synthesis](how-to-speech-synthesis.md)
ai-services Custom Neural Voice https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/custom-neural-voice.md
+
+ Title: Custom Neural Voice overview - Speech service
+
+description: Custom Neural Voice is a text to speech feature that allows you to create a one-of-a-kind, customized, synthetic voice for your applications. You provide your own audio data as a sample.
++++++ Last updated : 03/27/2023+++
+# What is Custom Neural Voice?
+
+Custom Neural Voice (CNV) is a text to speech feature that lets you create a one-of-a-kind, customized, synthetic voice for your applications. With Custom Neural Voice, you can build a highly natural-sounding voice for your brand or characters by providing human speech samples as training data.
+
+> [!IMPORTANT]
+> Custom Neural Voice access is [limited](/legal/cognitive-services/speech-service/custom-neural-voice/limited-access-custom-neural-voice?context=%2fazure%2fcognitive-services%2fspeech-service%2fcontext%2fcontext) based on eligibility and usage criteria. Request access on the [intake form](https://aka.ms/customneural).
+>
+> Access to [Custom Neural Voice (CNV) Lite](custom-neural-voice-lite.md) is available for anyone to demo and evaluate CNV before investing in professional recordings to create a higher-quality voice.
+
+Out of the box, [text to speech](text-to-speech.md) can be used with prebuilt neural voices for each [supported language](language-support.md?tabs=tts). The prebuilt neural voices work very well in most text to speech scenarios if a unique voice isn't required.
+
+Custom Neural Voice is based on the neural text to speech technology and the multilingual, multi-speaker, universal model. You can create synthetic voices that are rich in speaking styles, or adaptable across languages. The realistic and natural sounding voice of Custom Neural Voice can represent brands, personify machines, and allow users to interact with applications conversationally. See the [supported languages](language-support.md?tabs=tts) for Custom Neural Voice.
+
+## How does it work?
+
+To create a custom neural voice, use [Speech Studio](https://aka.ms/speechstudio/customvoice) to upload the recorded audio and corresponding scripts, train the model, and deploy the voice to a custom endpoint.
+
+> [!TIP]
+> Try [Custom Neural Voice (CNV) Lite](custom-neural-voice-lite.md) to demo and evaluate CNV before investing in professional recordings to create a higher-quality voice.
+
+Creating a great custom neural voice requires careful quality control in each step, from voice design and data preparation, to the deployment of the voice model to your system.
+
+Before you get started in Speech Studio, here are some considerations:
+
+- [Design a persona](record-custom-voice-samples.md#choose-your-voice-talent) of the voice that represents your brand by using a persona brief document. This document defines elements such as the features of the voice, and the character behind the voice. This helps to guide the process of creating a custom neural voice model, including defining the scripts, selecting your voice talent, training, and voice tuning.
+- [Select the recording script](record-custom-voice-samples.md#script-selection-criteria) to represent the user scenarios for your voice. For example, you can use the phrases from bot conversations as your recording script if you're creating a customer service bot. Include different sentence types in your scripts, including statements, questions, and exclamations.
+
+Here's an overview of the steps to create a custom neural voice in Speech Studio:
+
+1. [Create a project](how-to-custom-voice.md) to contain your data, voice models, tests, and endpoints. Each project is specific to a country/region and language. If you are going to create multiple voices, it's recommended that you create a project for each voice.
+1. [Set up voice talent](how-to-custom-voice.md). Before you can train a neural voice, you must submit a recording of the voice talent's consent statement. The voice talent statement is a recording of the voice talent reading a statement that they consent to the usage of their speech data to train a custom voice model.
+1. [Prepare training data](how-to-custom-voice-prepare-data.md) in the right [format](how-to-custom-voice-training-data.md). It's a good idea to capture the audio recordings in a professional quality recording studio to achieve a high signal-to-noise ratio. The quality of the voice model depends heavily on your training data. Consistent volume, speaking rate, pitch, and consistency in expressive mannerisms of speech are required.
+1. [Train your voice model](how-to-custom-voice-create-voice.md). Select at least 300 utterances to create a custom neural voice. A series of data quality checks are automatically performed when you upload them. To build high-quality voice models, you should fix any errors and submit again.
+1. [Test your voice](how-to-custom-voice-create-voice.md#test-your-voice-model). Prepare test scripts for your voice model that cover the different use cases for your apps. It's a good idea to use scripts within and outside the training dataset, so you can test the quality more broadly for different content.
+1. [Deploy and use your voice model](how-to-deploy-and-use-endpoint.md) in your apps.
+
+You can tune, adjust, and use your custom voice, similarly as you would use a prebuilt neural voice. Convert text into speech in real-time, or generate audio content offline with text input. You can do this by using the [REST API](./rest-text-to-speech.md), the [Speech SDK](./get-started-text-to-speech.md), or the [Speech Studio](https://speech.microsoft.com/audiocontentcreation).
+
+The style and the characteristics of the trained voice model depend on the style and the quality of the recordings from the voice talent used for training. However, you can make several adjustments by using [SSML (Speech Synthesis Markup Language)](./speech-synthesis-markup.md?tabs=csharp) when you make the API calls to your voice model to generate synthetic speech. SSML is the markup language used to communicate with the text to speech service to convert text into audio. The adjustments you can make include change of pitch, rate, intonation, and pronunciation correction. If the voice model is built with multiple styles, you can also use SSML to switch the styles.
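+
+For illustration, the following C# sketch uses the Speech SDK to synthesize SSML against a deployed custom voice endpoint, adjusting rate and pitch with the `prosody` element. The key, region, endpoint ID, and voice name values are placeholders.
+
+```csharp
+using Microsoft.CognitiveServices.Speech;
+
+var speechConfig = SpeechConfig.FromSubscription("YourSpeechResourceKey", "YourServiceRegion");
+speechConfig.EndpointId = "YourCustomVoiceEndpointId"; // Deployed custom neural voice endpoint.
+
+// Adjust rate and pitch with the SSML prosody element. YourCustomVoiceName is a placeholder.
+string ssml = @"
+<speak version='1.0' xmlns='http://www.w3.org/2001/10/synthesis' xml:lang='en-US'>
+  <voice name='YourCustomVoiceName'>
+    <prosody rate='-10%' pitch='-5%'>
+      Thank you for calling. How can I help you today?
+    </prosody>
+  </voice>
+</speak>";
+
+using var synthesizer = new SpeechSynthesizer(speechConfig);
+var result = await synthesizer.SpeakSsmlAsync(ssml);
+```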
+
+## Components sequence
+
+Custom Neural Voice consists of three major components: the text analyzer, the neural acoustic
+model, and the neural vocoder. To generate natural synthetic speech from text, text is first input into the text analyzer, which provides output in the form of a phoneme sequence. A *phoneme* is a basic unit of sound that distinguishes one word from another in a particular language. A sequence of phonemes defines the pronunciations of the words provided in the text.
+
+Next, the phoneme sequence goes into the neural acoustic model to predict acoustic features that define speech signals. Acoustic features include timbre, speaking style, speed, intonation, and stress patterns. Finally, the neural vocoder converts the acoustic features into audible waves, so that synthetic speech is generated.
+
+![Flowchart that shows the components of Custom Neural Voice.](./media/custom-voice/cnv-intro.png)
+
+Neural text to speech voice models are trained by using deep neural networks based on
+the recording samples of human voices. For more information, see [this Microsoft blog post](https://techcommunity.microsoft.com/t5/azure-ai/neural-text-to-speech-extends-support-to-15-more-languages-with/ba-p/1505911). To learn more about how a neural vocoder is trained, see [this Microsoft blog post](https://techcommunity.microsoft.com/t5/azure-ai/azure-neural-tts-upgraded-with-hifinet-achieving-higher-audio/ba-p/1847860).
+
+## Migrate to Custom Neural Voice
+
+If you're using the old version of Custom Voice (which is scheduled to be retired in February 2024), see [How to migrate to Custom Neural Voice](how-to-migrate-to-custom-neural-voice.md).
+
+## Responsible AI
+
+An AI system includes not only the technology, but also the people who will use it, the people who will be affected by it, and the environment in which it is deployed. Read the transparency notes to learn about responsible AI use and deployment in your systems.
+
+* [Transparency note and use cases for Custom Neural Voice](/legal/cognitive-services/speech-service/custom-neural-voice/transparency-note-custom-neural-voice?context=/azure/ai-services/speech-service/context/context)
+* [Characteristics and limitations for using Custom Neural Voice](/legal/cognitive-services/speech-service/custom-neural-voice/characteristics-and-limitations-custom-neural-voice?context=/azure/ai-services/speech-service/context/context)
+* [Limited access to Custom Neural Voice](/legal/cognitive-services/speech-service/custom-neural-voice/limited-access-custom-neural-voice?context=/azure/ai-services/speech-service/context/context)
+* [Guidelines for responsible deployment of synthetic voice technology](/legal/cognitive-services/speech-service/custom-neural-voice/concepts-guidelines-responsible-deployment-synthetic?context=/azure/ai-services/speech-service/context/context)
+* [Disclosure for voice talent](/legal/cognitive-services/speech-service/disclosure-voice-talent?context=/azure/ai-services/speech-service/context/context)
+* [Disclosure design guidelines](/legal/cognitive-services/speech-service/custom-neural-voice/concepts-disclosure-guidelines?context=/azure/ai-services/speech-service/context/context)
+* [Disclosure design patterns](/legal/cognitive-services/speech-service/custom-neural-voice/concepts-disclosure-patterns?context=/azure/ai-services/speech-service/context/context)
+* [Code of Conduct for Text to speech integrations](/legal/cognitive-services/speech-service/tts-code-of-conduct?context=/azure/ai-services/speech-service/context/context)
+* [Data, privacy, and security for Custom Neural Voice](/legal/cognitive-services/speech-service/custom-neural-voice/data-privacy-security-custom-neural-voice?context=/azure/ai-services/speech-service/context/context)
+
+## Next steps
+
+* [Create a project](how-to-custom-voice.md)
+* [Prepare training data](how-to-custom-voice-prepare-data.md)
+* [Train model](how-to-custom-voice-create-voice.md)
ai-services Custom Speech Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/custom-speech-overview.md
+
+ Title: Custom Speech overview - Speech service
+
+description: Custom Speech is a set of online tools that allows you to evaluate and improve the speech to text accuracy for your applications, tools, and products.
++++++ Last updated : 05/08/2022++++
+# What is Custom Speech?
+
+With Custom Speech, you can evaluate and improve the accuracy of speech recognition for your applications and products. A custom speech model can be used for [real-time speech to text](speech-to-text.md), [speech translation](speech-translation.md), and [batch transcription](batch-transcription.md).
+
+Out of the box, speech recognition utilizes a Universal Language Model as a base model that is trained with Microsoft-owned data and reflects commonly used spoken language. The base model is pre-trained with dialects and phonetics representing a variety of common domains. When you make a speech recognition request, the most recent base model for each [supported language](language-support.md?tabs=stt) is used by default. The base model works very well in most speech recognition scenarios.
+
+A custom model can be used to augment the base model to improve recognition of domain-specific vocabulary specific to the application by providing text data to train the model. It can also be used to improve recognition for the specific audio conditions of the application by providing audio data with reference transcriptions.
+
+## How does it work?
+
+With Custom Speech, you can upload your own data, test and train a custom model, compare accuracy between models, and deploy a model to a custom endpoint.
+
+![Diagram that highlights the components that make up the Custom Speech area of the Speech Studio.](./media/custom-speech/custom-speech-overview.png)
+
+Here's more information about the sequence of steps shown in the previous diagram:
+
+1. [Create a project](how-to-custom-speech-create-project.md) and choose a model. Use a <a href="https://portal.azure.com/#create/Microsoft.CognitiveServicesSpeechServices" title="Create a Speech resource" target="_blank">Speech resource</a> that you create in the Azure portal. If you will train a custom model with audio data, choose a Speech resource region with dedicated hardware for training audio data. See footnotes in the [regions](regions.md#speech-service) table for more information.
+1. [Upload test data](./how-to-custom-speech-upload-data.md). Upload test data to evaluate the speech to text offering for your applications, tools, and products.
+1. [Test recognition quality](how-to-custom-speech-inspect-data.md). Use the [Speech Studio](https://aka.ms/speechstudio/customspeech) to play back uploaded audio and inspect the speech recognition quality of your test data.
+1. [Test model quantitatively](how-to-custom-speech-evaluate-data.md). Evaluate and improve the accuracy of the speech to text model. The Speech service provides a quantitative word error rate (WER), which you can use to determine if additional training is required.
+1. [Train a model](how-to-custom-speech-train-model.md). Provide written transcripts and related text, along with the corresponding audio data. Testing a model before and after training is optional but recommended.
+ > [!NOTE]
+ > You pay for Custom Speech model usage and endpoint hosting, but you are not charged for training a model.
+1. [Deploy a model](how-to-custom-speech-deploy-model.md). Once you're satisfied with the test results, deploy the model to a custom endpoint. With the exception of [batch transcription](batch-transcription.md), you must deploy a custom endpoint to use a Custom Speech model.
+ > [!TIP]
+ > A hosted deployment endpoint isn't required to use Custom Speech with the [Batch transcription API](batch-transcription.md). You can conserve resources if the custom speech model is only used for batch transcription. For more information, see [Speech service pricing](https://azure.microsoft.com/pricing/details/cognitive-services/speech-services/).
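+
+After deployment, an application selects the custom endpoint through the Speech SDK. The following C# sketch is a minimal illustration; the key, region, and endpoint ID values are placeholders.
+
+```csharp
+using System;
+using Microsoft.CognitiveServices.Speech;
+using Microsoft.CognitiveServices.Speech.Audio;
+
+// Placeholder values: your Speech resource key and region, plus the endpoint ID
+// shown for the deployed Custom Speech model.
+var speechConfig = SpeechConfig.FromSubscription("YourSpeechResourceKey", "YourServiceRegion");
+speechConfig.EndpointId = "YourCustomSpeechEndpointId";
+
+using var audioConfig = AudioConfig.FromDefaultMicrophoneInput();
+using var recognizer = new SpeechRecognizer(speechConfig, audioConfig);
+
+var result = await recognizer.RecognizeOnceAsync();
+Console.WriteLine(result.Text);
+```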
+
+## Responsible AI
+
+An AI system includes not only the technology, but also the people who will use it, the people who will be affected by it, and the environment in which it is deployed. Read the transparency notes to learn about responsible AI use and deployment in your systems.
+
+* [Transparency note and use cases](/legal/cognitive-services/speech-service/speech-to-text/transparency-note?context=/azure/cognitive-services/speech-service/context/context)
+* [Characteristics and limitations](/legal/cognitive-services/speech-service/speech-to-text/characteristics-and-limitations?context=/azure/cognitive-services/speech-service/context/context)
+* [Integration and responsible use](/legal/cognitive-services/speech-service/speech-to-text/guidance-integration-responsible-use?context=/azure/cognitive-services/speech-service/context/context)
+* [Data, privacy, and security](/legal/cognitive-services/speech-service/speech-to-text/data-privacy-security?context=/azure/cognitive-services/speech-service/context/context)
+
+## Next steps
+
+* [Create a project](how-to-custom-speech-create-project.md)
+* [Upload test data](./how-to-custom-speech-upload-data.md)
+* [Train a model](how-to-custom-speech-train-model.md)
ai-services Customize Pronunciation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/customize-pronunciation.md
+
+ Title: Structured text phonetic pronunciation data
+description: Use phonemes to customize pronunciation of words in Speech to text.
++++ Last updated : 05/08/2022+++
+# Structured text phonetic pronunciation data
+
+You can specify the phonetic pronunciation of words using the Universal Phone Set (UPS) in a [structured text data](how-to-custom-speech-test-and-train.md#structured-text-data-for-training) file. The UPS is a machine-readable phone set that is based on the International Phonetic Alphabet (IPA). The IPA is a standard used by linguists worldwide.
+
+UPS pronunciations consist of a string of UPS phonemes, each separated by whitespace. UPS phoneme labels are all defined using ASCII character strings.
+
+For steps on implementing UPS, see [Structured text phonetic pronunciation](how-to-custom-speech-test-and-train.md#structured-text-data-for-training). Structured text phonetic pronunciation data is separate from [pronunciation data](how-to-custom-speech-test-and-train.md#pronunciation-data-for-training), and the two can't be used together.
+
+[Structured text phonetic pronunciation data](how-to-custom-speech-test-and-train.md#structured-text-data-for-training) is specified per syllable in a markdown file. Separately, [pronunciation data](how-to-custom-speech-test-and-train.md#pronunciation-data-for-training) is "sounds-like" or spoken-form data that is input as its own file and trains the model what the spoken form sounds like. You can either use a pronunciation data file on its own, or you can add pronunciation within a structured text data file. The Speech service doesn't support training a model with both of those datasets as input.
+
+See the sections in this article for the Universal Phone Set for each locale.
+
+## :::no-loc text="en-US":::
++
+## Next steps
+
+- [Upload your data](how-to-custom-speech-upload-data.md)
+- [Test recognition quality](how-to-custom-speech-inspect-data.md)
+- [Train your model](how-to-custom-speech-train-model.md)
ai-services Devices Sdk Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/devices-sdk-release-notes.md
+
+ Title: Speech Devices SDK release notes
+
+description: The release notes provide a log of updates, enhancements, bug fixes, and changes to the Speech Devices SDK. This article is updated with each release of the Speech Devices SDK.
++++++ Last updated : 02/12/2022+++
+# Release notes: Speech Devices SDK (retired)
+
+The following sections list changes in the most recent releases.
+
+> [!NOTE]
+> The Speech Devices SDK is deprecated. Archived sample code is available on [GitHub](https://github.com/Azure-Samples/Cognitive-Services-Speech-Devices-SDK). All users of the Speech Devices SDK are advised to migrate to using Speech SDK version 1.19.0 or later.
+
+## Speech Devices SDK 1.16.0:
+
+- Fixed [GitHub issue #22](https://github.com/Azure-Samples/Cognitive-Services-Speech-Devices-SDK/issues/22).
+- Updated the [Speech SDK](./speech-sdk.md) component to version 1.16.0. For more information, see its [release notes](./releasenotes.md).
+
+## Speech Devices SDK 1.15.0:
+
+- Upgraded to new Microsoft Audio Stack (MAS) with improved beamforming and noise reduction for speech.
+- Reduced the binary size by as much as 70% depending on target.
+- Support for [Azure Percept Audio](../../azure-percept/overview-azure-percept-audio.md) with [binary release](https://aka.ms/sdsdk-download-APAudio).
+- Updated the [Speech SDK](./speech-sdk.md) component to version 1.15.0. For more information, see its [release notes](./releasenotes.md).
+
+## Speech Devices SDK 1.11.0:
+
+- Support for arbitrary microphone array geometries and setting the working angle through a [configuration file](https://aka.ms/sdsdk-micarray-json).
+- Support for [Urbetter DDK](https://urbetters.com/collections).
+- Released binaries for the [GGEC Speaker](https://aka.ms/sdsdk-download-speaker) used in our [Voice Assistant sample](https://aka.ms/sdsdk-speaker).
+- Released binaries for [Linux ARM32](https://aka.ms/sdsdk-download-linux-arm32) and [Linux ARM 64](https://aka.ms/sdsdk-download-linux-arm64) for Raspberry Pi and similar devices.
+- Updated the [Speech SDK](./speech-sdk.md) component to version 1.11.0. For more information, see its [release notes](./releasenotes.md).
+
+## Speech Devices SDK 1.9.0:
+
+- Initial binaries for [Urbetter DDK](https://aka.ms/sdsdk-download-urbetter) (Linux ARM64) are provided.
+- Roobo v1 now uses Maven for the Speech SDK.
+- Updated the [Speech SDK](./speech-sdk.md) component to version 1.9.0. For more information, see its [release notes](./releasenotes.md).
+
+## Speech Devices SDK 1.7.0:
+
+- Linux ARM is now supported.
+- Initial binaries for [Roobo v2 DDK](https://aka.ms/sdsdk-download-roobov2) are provided (Linux ARM64).
+- Windows users can use `AudioConfig.fromDefaultMicrophoneInput()` or `AudioConfig.fromMicrophoneInput(deviceName)` to specify the microphone to be used.
+- The library size has been optimized.
+- Support for multi-turn recognition using the same speech/intent recognizer object.
+- Fixed an occasional issue where the process would stop responding while stopping recognition.
+- Sample apps now contain a sample participants.properties file to demonstrate the format of the file.
+- Updated the [Speech SDK](./speech-sdk.md) component to version 1.7.0. For more information, see its [release notes](./releasenotes.md).
+
+## Speech Devices SDK 1.6.0:
+
+- Support [Azure Kinect DK](https://azure.microsoft.com/services/kinect-dk/) on Windows and Linux with common sample application.
+- Updated the [Speech SDK](./speech-sdk.md) component to version 1.6.0. For more information, see its [release notes](./releasenotes.md).
+
+## Speech Devices SDK 1.5.1:
+
+- Include [Conversation Transcription](./conversation-transcription.md) in the sample app.
+- Updated the [Speech SDK](./speech-sdk.md) component to version 1.5.1. For more information, see its [release notes](./releasenotes.md).
+
+## Speech Devices SDK 1.5.0: 2019-May release
+
+- Speech Devices SDK is now GA and no longer a gated preview.
+- Updated the [Speech SDK](./speech-sdk.md) component to version 1.5.0. For more information, see its [release notes](./releasenotes.md).
+- New keyword technology brings significant quality improvements; see Breaking Changes.
+- New audio processing pipeline for improved far-field recognition.
+
+**Breaking changes**
+
+- Due to the new keyword technology, all keywords must be re-created at our improved keyword portal. To fully remove old keywords from the device, uninstall the old app.
+ - `adb uninstall com.microsoft.cognitiveservices.speech.samples.sdsdkstarterapp`
+
+## Speech Devices SDK 1.4.0: 2019-Apr release
+
+- Updated the [Speech SDK](./speech-sdk.md) component to version 1.4.0. For more information, see its [release notes](./releasenotes.md).
+
+## Speech Devices SDK 1.3.1: 2019-Mar release
+
+- Updated the [Speech SDK](./speech-sdk.md) component to version 1.3.1. For more information, see its [release notes](./releasenotes.md).
+- Updated keyword handling, see Breaking Changes.
+- Sample application adds choice of language for both speech recognition and translation.
+
+**Breaking changes**
+
+- [Installing a keyword](./custom-keyword-basics.md) has been simplified; it's now part of the app and doesn't need separate installation on the device.
+- The keyword recognition has changed, and two events are supported.
+ - `RecognizingKeyword` indicates that the speech result contains (unverified) keyword text.
+ - `RecognizedKeyword` indicates that keyword recognition completed recognizing the given keyword.
+
+## Speech Devices SDK 1.1.0: 2018-Nov release
+
+- Updated the [Speech SDK](./speech-sdk.md) component to version 1.1.0. For more information, see its [release notes](./releasenotes.md).
+- Far Field Speech recognition accuracy has been improved with our enhanced audio processing algorithm.
+- Sample application added Chinese speech recognition support.
+
+## Speech Devices SDK 1.0.1: 2018-Oct release
+
+- Updated the [Speech SDK](./speech-sdk.md) component to version 1.0.1. For more information, see its [release notes](./releasenotes.md).
+- Speech recognition accuracy is improved with an enhanced audio processing algorithm.
+- Fixed a bug in continuous recognition audio sessions.
+
+**Breaking changes**
+
+- With this release a number of breaking changes are introduced. Please check [this page](https://aka.ms/csspeech/breakingchanges_1_0_0) for details relating to the APIs.
+- The keyword recognition model files are not compatible with Speech Devices SDK 1.0.1. The existing keyword files will be deleted after the new keyword files are written to the device.
+
+## Speech Devices SDK 0.5.0: 2018-Aug release
+
+- Improved the accuracy of speech recognition by fixing a bug in the audio processing code.
+- Updated the [Speech SDK](./speech-sdk.md) component to version 0.5.0.
+
+## Speech Devices SDK 0.2.12733: 2018-May release
+
+The first public preview release of the Speech Devices SDK.
ai-services Direct Line Speech https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/direct-line-speech.md
+
+ Title: Direct Line Speech - Speech service
+
+description: An overview of the features, capabilities, and restrictions for Voice assistants using Direct Line Speech with the Speech Software Development Kit (SDK).
++++++ Last updated : 03/11/2020++++
+# What is Direct Line Speech?
+
+Direct Line Speech is a robust, end-to-end solution for creating a flexible, extensible voice assistant. It's powered by the Bot Framework and its Direct Line Speech channel, which is optimized for voice-in, voice-out interaction with bots.
+
+[Voice assistants](voice-assistants.md) listen to users and take an action in response, often speaking back. They use [speech to text](speech-to-text.md) to transcribe the user's speech, then take action on the natural language understanding of the text. This action frequently includes spoken output from the assistant generated with [text to speech](text-to-speech.md).
+
+Direct Line Speech offers the highest levels of customization and sophistication for voice assistants. It's designed for conversational scenarios that are open-ended, natural, or hybrids of the two with task completion or command-and-control use. This high degree of flexibility comes with greater complexity; for scenarios that are scoped to well-defined tasks with natural language input, consider [Custom Commands](custom-commands.md) for a more streamlined solution experience.
+
+Direct Line Speech supports these locales: `ar-eg`, `ar-sa`, `ca-es`, `da-dk`, `de-de`, `en-au`, `en-ca`, `en-gb`, `en-in`, `en-nz`, `en-us`, `es-es`, `es-mx`, `fi-fi`, `fr-ca`, `fr-fr`, `gu-in`, `hi-in`, `hu-hu`, `it-it`, `ja-jp`, `ko-kr`, `mr-in`, `nb-no`, `nl-nl`, `pl-pl`, `pt-br`, `pt-pt`, `ru-ru`, `sv-se`, `ta-in`, `te-in`, `th-th`, `tr-tr`, `zh-cn`, `zh-hk`, and `zh-tw`.
+
+## Getting started with Direct Line Speech
+
+To create a voice assistant using Direct Line Speech, create a Speech resource and Azure Bot resource in the [Azure portal](https://portal.azure.com). Then [connect the bot](/azure/bot-service/bot-service-channel-connect-directlinespeech) to the Direct Line Speech channel.
+
+ ![Conceptual diagram of the Direct Line Speech orchestration service flow](media/voice-assistants/overview-directlinespeech.png "The Speech Channel flow")
+
+For a complete, step-by-step guide on creating a simple voice assistant using Direct Line Speech, see [the tutorial for speech-enabling your bot with the Speech SDK and the Direct Line Speech channel](tutorial-voice-enable-your-bot-speech-sdk.md).
+
+We also offer quickstarts designed to have you running code and learning the APIs quickly. This table includes a list of voice assistant quickstarts organized by language and platform.
+
+| Quickstart | Platform | API reference |
+||-||
+| C#, UWP | Windows | [Browse](/dotnet/api/microsoft.cognitiveservices.speech) |
+| Java | Windows, macOS, Linux | [Browse](/java/api/com.microsoft.cognitiveservices.speech) |
+| Java | Android | [Browse](/java/api/com.microsoft.cognitiveservices.speech) |
+
+## Sample code
+
+Sample code for creating a voice assistant is available on GitHub. These samples cover the client application for connecting to your assistant in several popular programming languages.
+
+* [Voice assistant samples (SDK)](https://aka.ms/csspeech/samples/#voice-assistants-quickstarts)
+* [Tutorial: Voice enable your assistant with the Speech SDK, C#](tutorial-voice-enable-your-bot-speech-sdk.md)
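+
+As a minimal C# sketch of such a client (the key and region values are placeholders, and the bot is assumed to already be connected to the Direct Line Speech channel):
+
+```csharp
+using System;
+using Microsoft.CognitiveServices.Speech.Audio;
+using Microsoft.CognitiveServices.Speech.Dialog;
+
+// Placeholder key and region for the Speech resource associated with your bot.
+var botConfig = BotFrameworkConfig.FromSubscription("YourSpeechResourceKey", "YourServiceRegion");
+var audioConfig = AudioConfig.FromDefaultMicrophoneInput();
+
+using var connector = new DialogServiceConnector(botConfig, audioConfig);
+
+// Transcriptions of the user's speech and the bot's reply activities arrive as events.
+connector.Recognized += (s, e) => Console.WriteLine($"Recognized: {e.Result.Text}");
+connector.ActivityReceived += (s, e) =>
+    Console.WriteLine($"Activity received (hasAudio={e.HasAudio}): {e.Activity}");
+
+await connector.ConnectAsync();
+await connector.ListenOnceAsync(); // Send one spoken utterance from the microphone to the bot.
+```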
+
+## Customization
+
+Voice assistants built using Speech service can use the full range of customization options available for [speech to text](speech-to-text.md), [text to speech](text-to-speech.md), and [custom keyword selection](./custom-keyword-basics.md).
+
+> [!NOTE]
+> Customization options vary by language/locale (see [Supported languages](./language-support.md?tabs=stt)).
+
+Direct Line Speech and its associated functionality for voice assistants are an ideal supplement to the [Virtual Assistant Solution and Enterprise Template](/azure/bot-service/bot-builder-enterprise-template-overview). Though Direct Line Speech can work with any compatible bot, these resources provide a reusable baseline for high-quality conversational experiences as well as common supporting skills and models to get started quickly.
+
+## Reference docs
+
+* [Speech SDK](./speech-sdk.md)
+* [Azure AI Bot Service](/azure/bot-service/)
+
+## Next steps
+
+* [Get the Speech SDK](speech-sdk.md)
+* [Create and deploy a basic bot](/azure/bot-service/bot-builder-tutorial-basic-deploy)
+* [Get the Virtual Assistant Solution and Enterprise Template](https://github.com/Microsoft/AI)
ai-services Display Text Format https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/display-text-format.md
+
+ Title: Display text formatting with speech to text - Speech service
+
+description: An overview of key concepts for display text formatting with speech to text.
+++++++ Last updated : 09/19/2022+
+zone_pivot_groups: programming-languages-speech-sdk-cli
++
+# Display text formatting with speech to text
+
+Speech to text offers an array of formatting features to ensure that the transcribed text is clear and legible. Below is an overview of these features and how each one is used to improve the overall clarity of the final text output.
+
+## ITN
+
+Inverse Text Normalization (ITN) is a process that converts spoken words into their written form. For example, the spoken word "four" is converted to the written form "4". This process is performed by the speech to text service and isn't configurable. Some of the supported text formats include dates, times, decimals, currencies, addresses, emails, and phone numbers. You can speak naturally, and the service formats text as expected. The following table shows the ITN rules that are applied to the text output.
+
+|Recognized speech|Display text|
+|||
+|`that will cost nine hundred dollars`|`That will cost $900.`|
+|`my phone number is one eight hundred, four five six, eight nine ten`|`My phone number is 1-800-456-8910.`|
+|`the time is six forty five p m`|`The time is 6:45 PM.`|
+|`I live on thirty five lexington avenue`|`I live on 35 Lexington Ave.`|
+|`the answer is six point five`|`The answer is 6.5.`|
+|`send it to support at help dot com`|`Send it to support@help.com.`|
+
+## Capitalization
+
+Speech to text models recognize words that should be capitalized to improve readability, accuracy, and grammar. For example, the Speech service will automatically capitalize proper nouns and words at the beginning of a sentence. Some examples are shown in this table.
+
+|Recognized speech|Display text|
+|||
+|`i got an x l t shirt`|`I got an XL t-shirt.`|
+|`my name is jennifer smith`|`My name is Jennifer Smith.`|
+|`i want to visit new york city`|`I want to visit New York City.`|
+
+## Disfluency removal
+
+When speaking, it's common for someone to stutter, repeat words, and say filler words like "uhm" or "uh". Speech to text can recognize such disfluencies and remove them from the display text. Disfluency removal is great for transcribing live, unscripted speech so that it can be read back cleanly later. Some examples are shown in this table.
+
+|Recognized speech|Display text|
+|||
+|`i uh said that we can go to the uhmm movies`|`I said that we can go to the movies.`|
+|`its its not that big of uhm a deal`|`It's not that big of a deal.`|
+|`umm i think tomorrow should work`|`I think tomorrow should work.`|
+
+## Punctuation
+
+Speech to text automatically punctuates your text to improve clarity. Punctuation is helpful for reading back call or conversation transcriptions. Some examples are shown in this table.
+
+|Recognized speech|Display text|
+|||
+|`how are you`|`How are you?`|
+|`we can go to the mall park or beach`|`We can go to the mall, park, or beach.`|
+
+When you're using speech to text with continuous recognition, you can configure the Speech service to recognize explicit punctuation marks. Then you can speak punctuation aloud in order to make your text more legible. This is especially useful in a situation where you want to use complex punctuation without having to merge it later. Some examples are shown in this table.
+
+|Recognized speech|Display text|
+|||
+|`they entered the room dot dot dot`|`They entered the room...`|
+|`i heart emoji you period`|`I <3 you.`|
+|`the options are apple forward slash banana forward slash orange period`|`The options are apple/banana/orange.`|
+|`are you sure question mark`|`Are you sure?`|
+
+Use the Speech SDK to enable dictation mode when you're using speech to text with continuous recognition. This mode will cause the speech configuration instance to interpret word descriptions of sentence structures such as punctuation.
+
+```csharp
+speechConfig.EnableDictation();
+```
+```cpp
+speechConfig->EnableDictation();
+```
+```go
+speechConfig.EnableDictation()
+```
+```java
+speechConfig.enableDictation();
+```
+```javascript
+speechConfig.enableDictation();
+```
+```objective-c
+[self.speechConfig enableDictation];
+```
+```swift
+self.speechConfig!.enableDictation()
+```
+```python
+speech_config.enable_dictation()
+```
+
+## Profanity filter
+
+You can specify whether to mask, remove, or show profanity in the final transcribed text. Masking replaces profane words with asterisk (*) characters so that you can keep the original sentiment of your text while making it more appropriate for certain situations.
+
+> [!NOTE]
+> Microsoft also reserves the right to mask or remove any word that is deemed inappropriate. Such words will not be returned by the Speech service, whether or not you enabled profanity filtering.
+
+The profanity filter options are:
+- `Masked`: Replaces letters in profane words with asterisk (*) characters. Masked is the default option.
+- `Raw`: Include the profane words verbatim.
+- `Removed`: Removes profane words.
+
+For example, to remove profane words from the speech recognition result, set the profanity filter to `Removed` as shown here:
+
+```csharp
+speechConfig.SetProfanity(ProfanityOption.Removed);
+```
+```cpp
+speechConfig->SetProfanity(ProfanityOption::Removed);
+```
+```go
+speechConfig.SetProfanity(common.Removed)
+```
+```java
+speechConfig.setProfanity(ProfanityOption.Removed);
+```
+```javascript
+speechConfig.setProfanity(sdk.ProfanityOption.Removed);
+```
+```objective-c
+[self.speechConfig setProfanityOptionTo:SPXSpeechConfigProfanityOption.SPXSpeechConfigProfanityOption_ProfanityRemoved];
+```
+```swift
+self.speechConfig!.setProfanityOptionTo(SPXSpeechConfigProfanityOption_ProfanityRemoved)
+```
+```python
+speech_config.set_profanity(speechsdk.ProfanityOption.Removed)
+```
+```console
+spx recognize --file caption.this.mp4 --format any --profanity masked --output vtt file - --output srt file -
+```
+
+The profanity filter is applied to the result `Text` and `MaskedNormalizedForm` properties. It isn't applied to the result `LexicalForm` and `NormalizedForm` properties, nor to the word-level results.
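+
+These forms are surfaced on detailed recognition results. The following C# sketch is a minimal illustration, assuming detailed output format is enabled on the speech configuration; the key and region values are placeholders.
+
+```csharp
+using System;
+using Microsoft.CognitiveServices.Speech;
+
+// Request detailed output so that lexical, normalized (ITN), and masked forms are returned.
+var speechConfig = SpeechConfig.FromSubscription("YourSpeechResourceKey", "YourServiceRegion");
+speechConfig.OutputFormat = OutputFormat.Detailed;
+speechConfig.SetProfanity(ProfanityOption.Masked);
+
+using var recognizer = new SpeechRecognizer(speechConfig);
+var result = await recognizer.RecognizeOnceAsync();
+
+foreach (var detail in result.Best())
+{
+    Console.WriteLine($"Display: {detail.Text}");
+    Console.WriteLine($"Lexical: {detail.LexicalForm}");
+    Console.WriteLine($"ITN:     {detail.NormalizedForm}");
+    Console.WriteLine($"Masked:  {detail.MaskedNormalizedForm}");
+}
+```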
++
+## Next steps
+
+* [Speech to text quickstart](get-started-speech-to-text.md)
+* [Get speech recognition results](get-speech-recognition-results.md)
ai-services Embedded Speech https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/embedded-speech.md
+
+ Title: Embedded Speech - Speech service
+
+description: Embedded Speech is designed for on-device scenarios where cloud connectivity is intermittent or unavailable.
+++++++ Last updated : 10/31/2022+
+zone_pivot_groups: programming-languages-set-thirteen
++
+# Embedded Speech (preview)
+
+Embedded Speech is designed for on-device [speech to text](speech-to-text.md) and [text to speech](text-to-speech.md) scenarios where cloud connectivity is intermittent or unavailable. For example, you can use embedded speech in industrial equipment, a voice enabled air conditioning unit, or a car that might travel out of range. You can also develop hybrid cloud and offline solutions. For scenarios where your devices must be in a secure environment like a bank or government entity, you should first consider [disconnected containers](../containers/disconnected-containers.md).
+
+> [!IMPORTANT]
+> Microsoft limits access to embedded speech. You can apply for access through the Azure AI Speech [embedded speech limited access review](https://aka.ms/csgate-embedded-speech). For more information, see [Limited access for embedded speech](/legal/cognitive-services/speech-service/embedded-speech/limited-access-embedded-speech?context=/azure/ai-services/speech-service/context/context).
+
+## Platform requirements
+
+Embedded speech is included with the Speech SDK (version 1.24.1 and higher) for C#, C++, and Java. Refer to the general [Speech SDK installation requirements](#embedded-speech-sdk-packages) for programming language and target platform specific details.
+
+**Choose your target environment**
+
+# [Android](#tab/android-target)
+
+Requires Android 7.0 (API level 24) or higher on ARM64 (`arm64-v8a`) or ARM32 (`armeabi-v7a`) hardware.
+
+Embedded TTS with neural voices is only supported on ARM64.
+
+# [Linux](#tab/linux-target)
+
+Requires Linux on x64, ARM64, or ARM32 hardware with [supported Linux distributions](quickstarts/setup-platform.md?tabs=linux).
+
+Embedded speech isn't supported on RHEL/CentOS 7.
+
+Embedded TTS with neural voices isn't supported on ARM32.
+
+# [macOS](#tab/macos-target)
+
+Requires macOS 10.14 or newer on x64 or ARM64 hardware.
+
+# [Windows](#tab/windows-target)
+
+Requires Windows 10 or newer on x64 or ARM64 hardware.
+
+The latest [Microsoft Visual C++ Redistributable for Visual Studio 2015-2022](/cpp/windows/latest-supported-vc-redist?view=msvc-170&preserve-view=true) must be installed regardless of the programming language used with the Speech SDK.
+
+The Speech SDK for Java doesn't support Windows on ARM64.
+++
+## Limitations
+
+Embedded speech is only available with C#, C++, and Java SDKs. The other Speech SDKs, Speech CLI, and REST APIs don't support embedded speech.
+
+Embedded speech recognition only supports mono 16 bit, 8-kHz or 16-kHz PCM-encoded WAV audio.
+
+Embedded neural voices only support 24-kHz sample rate.
+
+## Embedded speech SDK packages
++
+For C# embedded applications, install the following Speech SDK for C# packages:
+
+|Package |Description |
+| | |
+|[Microsoft.CognitiveServices.Speech](https://www.nuget.org/packages/Microsoft.CognitiveServices.Speech)|Required to use the Speech SDK|
+| [Microsoft.CognitiveServices.Speech.Extension.Embedded.SR](https://www.nuget.org/packages/Microsoft.CognitiveServices.Speech.Extension.Embedded.SR) | Required for embedded speech recognition |
+| [Microsoft.CognitiveServices.Speech.Extension.Embedded.TTS](https://www.nuget.org/packages/Microsoft.CognitiveServices.Speech.Extension.Embedded.TTS) | Required for embedded speech synthesis |
+| [Microsoft.CognitiveServices.Speech.Extension.ONNX.Runtime](https://www.nuget.org/packages/Microsoft.CognitiveServices.Speech.Extension.ONNX.Runtime) | Required for embedded speech recognition and synthesis |
+| [Microsoft.CognitiveServices.Speech.Extension.Telemetry](https://www.nuget.org/packages/Microsoft.CognitiveServices.Speech.Extension.Telemetry) | Required for embedded speech recognition and synthesis |
+++
+For C++ embedded applications, install the following Speech SDK for C++ packages:
+
+|Package |Description |
+| | |
+|[Microsoft.CognitiveServices.Speech](https://www.nuget.org/packages/Microsoft.CognitiveServices.Speech)|Required to use the Speech SDK|
+| [Microsoft.CognitiveServices.Speech.Extension.Embedded.SR](https://www.nuget.org/packages/Microsoft.CognitiveServices.Speech.Extension.Embedded.SR) | Required for embedded speech recognition |
+| [Microsoft.CognitiveServices.Speech.Extension.Embedded.TTS](https://www.nuget.org/packages/Microsoft.CognitiveServices.Speech.Extension.Embedded.TTS) | Required for embedded speech synthesis |
+| [Microsoft.CognitiveServices.Speech.Extension.ONNX.Runtime](https://www.nuget.org/packages/Microsoft.CognitiveServices.Speech.Extension.ONNX.Runtime) | Required for embedded speech recognition and synthesis |
+| [Microsoft.CognitiveServices.Speech.Extension.Telemetry](https://www.nuget.org/packages/Microsoft.CognitiveServices.Speech.Extension.Telemetry) | Required for embedded speech recognition and synthesis |
++++
+**Choose your target environment**
+
+# [Java Runtime](#tab/jre)
+
+For Java embedded applications, add [client-sdk-embedded](https://mvnrepository.com/artifact/com.microsoft.cognitiveservices.speech/client-sdk-embedded) (`.jar`) as a dependency. This package supports cloud, embedded, and hybrid speech.
+
+> [!IMPORTANT]
+> Don't add [client-sdk](https://mvnrepository.com/artifact/com.microsoft.cognitiveservices.speech/client-sdk) in the same project, since it supports only cloud speech services.
+
+Follow these steps to install the Speech SDK for Java using Apache Maven:
+
+1. Install [Apache Maven](https://maven.apache.org/install.html).
+1. Open a command prompt where you want the new project, and create a new `pom.xml` file.
+1. Copy the following XML content into `pom.xml`:
+ ```xml
+ <project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
+ <modelVersion>4.0.0</modelVersion>
+ <groupId>com.microsoft.cognitiveservices.speech.samples</groupId>
+ <artifactId>quickstart-eclipse</artifactId>
+ <version>1.0.0-SNAPSHOT</version>
+ <build>
+ <sourceDirectory>src</sourceDirectory>
+ <plugins>
+ <plugin>
+ <artifactId>maven-compiler-plugin</artifactId>
+ <version>3.7.0</version>
+ <configuration>
+ <source>1.8</source>
+ <target>1.8</target>
+ </configuration>
+ </plugin>
+ </plugins>
+ </build>
+ <dependencies>
+ <dependency>
+ <groupId>com.microsoft.cognitiveservices.speech</groupId>
+ <artifactId>client-sdk-embedded</artifactId>
+ <version>1.30.0</version>
+ </dependency>
+ </dependencies>
+ </project>
+ ```
+1. Run the following Maven command to install the Speech SDK and dependencies.
+ ```console
+ mvn clean dependency:copy-dependencies
+ ```
+
+# [Android](#tab/android)
+
+For Java embedded applications, add [client-sdk-embedded](https://mvnrepository.com/artifact/com.microsoft.cognitiveservices.speech/client-sdk-embedded) (`.aar`) as a dependency. This package supports cloud, embedded, and hybrid speech.
+
+> [!IMPORTANT]
+> Don't add [client-sdk](https://mvnrepository.com/artifact/com.microsoft.cognitiveservices.speech/client-sdk) in the same project, since it supports only cloud speech services.
+
+Be sure to use the `@aar` suffix when the dependency is specified in `build.gradle`. Here's an example:
+
+```
+dependencies {
+ implementation 'com.microsoft.cognitiveservices.speech:client-sdk-embedded:1.30.0@aar'
+}
+```
++
+## Models and voices
+
+For embedded speech, you'll need to download the speech recognition models for [speech to text](speech-to-text.md) and voices for [text to speech](text-to-speech.md). Instructions will be provided upon successful completion of the [limited access review](https://aka.ms/csgate-embedded-speech) process.
+
+The following [speech to text](speech-to-text.md) models are available: de-DE, en-AU, en-CA, en-GB, en-IE, en-IN, en-NZ, en-US, es-ES, es-MX, fr-CA, fr-FR, hi-IN, it-IT, ja-JP, ko-KR, nl-NL, pt-BR, ru-RU, sv-SE, tr-TR, zh-CN, zh-HK, and zh-TW.
+
+The following [text to speech](text-to-speech.md) locales and voices are available:
+
+| Locale (BCP-47) | Language | Text to speech voices |
+| -- | -- | -- |
+| `de-DE` | German (Germany) | `de-DE-KatjaNeural` (Female)<br/>`de-DE-ConradNeural` (Male)|
+| `en-AU` | English (Australia) | `en-AU-AnnetteNeural` (Female)<br/>`en-AU-WilliamNeural` (Male)|
+| `en-CA` | English (Canada) | `en-CA-ClaraNeural` (Female)<br/>`en-CA-LiamNeural` (Male)|
+| `en-GB` | English (United Kingdom) | `en-GB-LibbyNeural` (Female)<br/>`en-GB-RyanNeural` (Male)|
+| `en-US` | English (United States) | `en-US-AriaNeural` (Female)<br/>`en-US-GuyNeural` (Male)<br/>`en-US-JennyNeural` (Female)|
+| `es-ES` | Spanish (Spain) | `es-ES-ElviraNeural` (Female)<br/>`es-ES-AlvaroNeural` (Male)|
+| `es-MX` | Spanish (Mexico) | `es-MX-DaliaNeural` (Female)<br/>`es-MX-JorgeNeural` (Male)|
+| `fr-CA` | French (Canada) | `fr-CA-SylvieNeural` (Female)<br/>`fr-CA-JeanNeural` (Male)|
+| `fr-FR` | French (France) | `fr-FR-DeniseNeural` (Female)<br/>`fr-FR-HenriNeural` (Male)|
+| `it-IT` | Italian (Italy) | `it-IT-IsabellaNeural` (Female)<br/>`it-IT-DiegoNeural` (Male)|
+| `ja-JP` | Japanese (Japan) | `ja-JP-NanamiNeural` (Female)<br/>`ja-JP-KeitaNeural` (Male)|
+| `ko-KR` | Korean (Korea) | `ko-KR-SunHiNeural` (Female)<br/>`ko-KR-InJoonNeural` (Male)|
+| `pt-BR` | Portuguese (Brazil) | `pt-BR-FranciscaNeural` (Female)<br/>`pt-BR-AntonioNeural` (Male)|
+| `zh-CN` | Chinese (Mandarin, Simplified) | `zh-CN-XiaoxiaoNeural` (Female)<br/>`zh-CN-YunxiNeural` (Male)|
+
+## Embedded speech configuration
+
+For cloud-connected applications, as shown in most Speech SDK samples, you use the `SpeechConfig` object with a Speech resource key and region. For embedded speech, you don't use a Speech resource. Instead of a cloud resource, you use the [models and voices](#models-and-voices) that you downloaded to your local device.
+
+Use the `EmbeddedSpeechConfig` object to set the location of the models or voices. If your application is used for both speech to text and text to speech, you can use the same `EmbeddedSpeechConfig` object to set the location of the models and voices.
++
+```csharp
+// Provide the location of the models and voices.
+List<string> paths = new List<string>();
+paths.Add("C:\\dev\\embedded-speech\\stt-models");
+paths.Add("C:\\dev\\embedded-speech\\tts-voices");
+var embeddedSpeechConfig = EmbeddedSpeechConfig.FromPaths(paths.ToArray());
+
+// For speech to text
+embeddedSpeechConfig.SetSpeechRecognitionModel(
+ "Microsoft Speech Recognizer en-US FP Model V8",
+ Environment.GetEnvironmentVariable("MODEL_KEY"));
+
+// For text to speech
+embeddedSpeechConfig.SetSpeechSynthesisVoice(
+ "Microsoft Server Speech Text to Speech Voice (en-US, JennyNeural)",
+ Environment.GetEnvironmentVariable("VOICE_KEY"));
+embeddedSpeechConfig.SetSpeechSynthesisOutputFormat(SpeechSynthesisOutputFormat.Riff24Khz16BitMonoPcm);
+```
++
+> [!TIP]
+> The `GetEnvironmentVariable` function is defined in the [speech to text quickstart](get-started-speech-to-text.md) and [text to speech quickstart](get-started-text-to-speech.md).
+
+```cpp
+// Provide the location of the models and voices.
+vector<string> paths;
+paths.push_back("C:\\dev\\embedded-speech\\stt-models");
+paths.push_back("C:\\dev\\embedded-speech\\tts-voices");
+auto embeddedSpeechConfig = EmbeddedSpeechConfig::FromPaths(paths);
+
+// For speech to text
+embeddedSpeechConfig->SetSpeechRecognitionModel(
+ "Microsoft Speech Recognizer en-US FP Model V8",
+ GetEnvironmentVariable("MODEL_KEY"));
+
+// For text to speech
+embeddedSpeechConfig->SetSpeechSynthesisVoice(
+ "Microsoft Server Speech Text to Speech Voice (en-US, JennyNeural)",
+ GetEnvironmentVariable("VOICE_KEY"));
+embeddedSpeechConfig->SetSpeechSynthesisOutputFormat(SpeechSynthesisOutputFormat::Riff24Khz16BitMonoPcm);
+```
+++
+```java
+// Provide the location of the models and voices.
+List<String> paths = new ArrayList<>();
+paths.add("C:\\dev\\embedded-speech\\stt-models");
+paths.add("C:\\dev\\embedded-speech\\tts-voices");
+var embeddedSpeechConfig = EmbeddedSpeechConfig.fromPaths(paths);
+
+// For speech to text
+embeddedSpeechConfig.setSpeechRecognitionModel(
+ "Microsoft Speech Recognizer en-US FP Model V8",
+ System.getenv("MODEL_KEY"));
+
+// For text to speech
+embeddedSpeechConfig.setSpeechSynthesisVoice(
+ "Microsoft Server Speech Text to Speech Voice (en-US, JennyNeural)",
+ System.getenv("VOICE_KEY"));
+embeddedSpeechConfig.setSpeechSynthesisOutputFormat(SpeechSynthesisOutputFormat.Riff24Khz16BitMonoPcm);
+```
+++
+## Embedded speech code samples
++
+You can find ready-to-use embedded speech samples on [GitHub](https://aka.ms/embedded-speech-samples). For remarks on projects built from scratch, see the sample-specific documentation:
+
+- [C# (.NET 6.0)](https://aka.ms/embedded-speech-samples-csharp)
+- [C# (.NET MAUI)](https://aka.ms/embedded-speech-samples-csharp-maui)
+- [C# for Unity](https://aka.ms/embedded-speech-samples-csharp-unity)
++
+You can find ready-to-use embedded speech samples on [GitHub](https://aka.ms/embedded-speech-samples). For remarks on projects built from scratch, see the sample-specific documentation:
+- [C++](https://aka.ms/embedded-speech-samples-cpp)
++
+You can find ready-to-use embedded speech samples on [GitHub](https://aka.ms/embedded-speech-samples). For remarks on projects built from scratch, see the sample-specific documentation:
+- [Java (JRE)](https://aka.ms/embedded-speech-samples-java)
+- [Java for Android](https://aka.ms/embedded-speech-samples-java-android)
+
+## Hybrid speech
+
+Hybrid speech with the `HybridSpeechConfig` object uses the cloud speech service by default and embedded speech as a fallback in case cloud connectivity is limited or slow.
+
+With hybrid speech configuration for [speech to text](speech-to-text.md) (recognition models), embedded speech is used when the connection to the cloud service fails after repeated attempts. Recognition may switch back to the cloud service if the connection is later restored.
+
+With hybrid speech configuration for [text to speech](text-to-speech.md) (voices), embedded and cloud synthesis run in parallel, and the result that responds faster is selected. This evaluation is made on each synthesis request.
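+
+Here's a minimal C# sketch of how a hybrid configuration might be put together. It assumes the `HybridSpeechConfig.FromConfigs` factory method, and the `speechKey`, `speechRegion`, model path, model name, and model key shown are placeholders for your own values.
+
+```csharp
+// A minimal sketch: cloud speech is used by default, embedded speech is the fallback.
+// speechKey, speechRegion, the model path, name, and key are placeholders.
+var cloudConfig = SpeechConfig.FromSubscription(speechKey, speechRegion);
+
+var embeddedConfig = EmbeddedSpeechConfig.FromPaths(new[] { "C:\\dev\\embedded-speech\\stt-models" });
+embeddedConfig.SetSpeechRecognitionModel(
+    "Microsoft Speech Recognizer en-US FP Model V8",
+    Environment.GetEnvironmentVariable("MODEL_KEY"));
+
+var hybridConfig = HybridSpeechConfig.FromConfigs(cloudConfig, embeddedConfig);
+```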
+
+## Cloud speech
+
+For cloud speech, you use the `SpeechConfig` object, as shown in the [speech to text quickstart](get-started-speech-to-text.md) and [text to speech quickstart](get-started-text-to-speech.md). To run the quickstarts for embedded speech, replace `SpeechConfig` with `EmbeddedSpeechConfig` or `HybridSpeechConfig`. Most of the other speech recognition and synthesis code is the same, whether you use a cloud, embedded, or hybrid configuration.
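+
+For example, the following minimal sketch reuses the `embeddedSpeechConfig` object from the [embedded speech configuration](#embedded-speech-configuration) section. Only the configuration object differs from the cloud version; the default microphone input shown here is an assumption about your audio source.
+
+```csharp
+// A minimal sketch: the recognition code is the same, only the config type changes.
+using var audioConfig = AudioConfig.FromDefaultMicrophoneInput();
+using var recognizer = new SpeechRecognizer(embeddedSpeechConfig, audioConfig);
+
+var result = await recognizer.RecognizeOnceAsync();
+Console.WriteLine($"Recognized: {result.Text}");
+```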
++
+## Next steps
+
+- [Read about text to speech on devices for disconnected and hybrid scenarios](https://techcommunity.microsoft.com/t5/ai-cognitive-services-blog/azure-neural-tts-now-available-on-devices-for-disconnected-and/ba-p/3716797)
+- [Limited Access to embedded Speech](/legal/cognitive-services/speech-service/embedded-speech/limited-access-embedded-speech?context=/azure/ai-services/speech-service/context/context)
ai-services Gaming Concepts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/gaming-concepts.md
+
+ Title: Game development with Azure AI Speech - Speech service
+
+description: Concepts for game development with Azure AI Speech.
++++++ Last updated : 01/25/2023+++
+# Game development with Azure AI Speech
+
+Azure AI services for Speech can be used to improve various gaming scenarios, both in- and out-of-game.
+
+Here are a few Speech features to consider for flexible and interactive game experiences:
+
+- Bring everyone into the conversation by synthesizing audio from text, or by displaying text from audio.
+- Make the game more accessible for players who are unable to read text in a particular language, including young players who haven't learned to read and write. Players can listen to storylines and instructions in their preferred language.
+- Create game avatars and non-playable characters (NPC) that can initiate or participate in a conversation in-game.
+- Use prebuilt neural voices for highly natural, out-of-the-box speech across a large portfolio of languages and voices.
+- Use custom neural voice to create a voice that stays on-brand with consistent quality and speaking style. You can add emotions, accents, nuances, laughter, and other paralinguistic sounds and expressions.
+- Use game dialogue prototyping to reduce the time and cost of production and get the game to market sooner. You can rapidly swap lines of dialogue and listen to variations in real time to iterate on the game content.
+
+You can use the [Speech SDK](speech-sdk.md) or [Speech CLI](spx-overview.md) for real-time low latency speech to text, text to speech, language identification, and speech translation. You can also use the [Batch transcription API](batch-transcription.md) to transcribe pre-recorded speech to text. To synthesize a large volume of text input (long and short) to speech, use the [Batch synthesis API](batch-synthesis.md).
+
+For information about locale and regional availability, see [Language and voice support](language-support.md) and [Region support](regions.md).
+
+## Text to speech
+
+Help bring everyone into the conversation by converting text messages to audio using [Text to speech](text-to-speech.md) for scenarios such as game dialogue prototyping, greater accessibility, or non-playable character (NPC) voices. Text to speech includes [prebuilt neural voice](language-support.md?tabs=tts#prebuilt-neural-voices) and [custom neural voice](language-support.md?tabs=tts#custom-neural-voice) features. Prebuilt neural voice provides highly natural, out-of-the-box voices across a large portfolio of languages. Custom neural voice is an easy-to-use self-service for creating a highly natural custom voice.
+
+When enabling this functionality in your game, keep in mind the following benefits:
+
+- Voices and languages supported - A large portfolio of [locales and voices](language-support.md?tabs=tts#supported-languages) is supported. You can also [specify multiple languages](speech-synthesis-markup-voice.md#adjust-speaking-languages) for Text to speech output. For [custom neural voice](custom-neural-voice.md), you can [choose to create](how-to-custom-voice-create-voice.md?tabs=neural#choose-a-training-method) different languages from single-language training data.
+- Emotional styles supported - [Emotional tones](language-support.md?tabs=tts#voice-styles-and-roles), such as cheerful, angry, sad, excited, hopeful, friendly, unfriendly, terrified, shouting, and whispering. You can [adjust the speaking style](speech-synthesis-markup-voice.md#speaking-styles-and-roles), style degree, and role at the sentence level.
+- Visemes supported - You can use visemes during real-time synthesizing to control the movement of 2D and 3D avatar models, so that the mouth movements are perfectly matched to synthetic speech. For more information, see [Get facial position with viseme](how-to-speech-synthesis-viseme.md).
+- Fine-tuning Text to speech output with Speech Synthesis Markup Language (SSML) - With SSML, you can customize Text to speech output with richer voice tuning support. For more information, see [Speech Synthesis Markup Language (SSML) overview](speech-synthesis-markup.md).
+- Audio outputs - Each prebuilt neural voice model is available at 24 kHz and high-fidelity 48 kHz. If you select a 48-kHz output format, the high-fidelity 48-kHz voice model is invoked. Sample rates other than 24 kHz and 48 kHz are obtained through upsampling or downsampling during synthesis; for example, 44.1 kHz is downsampled from 48 kHz. Each audio format incorporates a bitrate and encoding type. For more information, see the [supported audio formats](rest-text-to-speech.md?tabs=streaming#audio-outputs). For more information on 48-kHz high-quality voices, see [this introduction blog](https://techcommunity.microsoft.com/t5/ai-cognitive-services-blog/azure-neural-tts-voices-upgraded-to-48khz-with-hifinet2-vocoder/ba-p/3665252).
+
+For an example, see the [Text to speech quickstart](get-started-text-to-speech.md).
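+
+As a small illustration of combining SSML speaking styles with the Speech SDK, the following C# sketch speaks one line of NPC dialogue with a cheerful style. The `speechKey` and `speechRegion` variables and the dialogue text are placeholders.
+
+```csharp
+// A minimal sketch: synthesize one line of NPC dialogue with a cheerful speaking style.
+// speechKey and speechRegion are placeholders for your Speech resource key and region.
+var speechConfig = SpeechConfig.FromSubscription(speechKey, speechRegion);
+using var synthesizer = new SpeechSynthesizer(speechConfig);
+
+string ssml = @"
+<speak version='1.0' xmlns='http://www.w3.org/2001/10/synthesis'
+       xmlns:mstts='https://www.w3.org/2001/mstts' xml:lang='en-US'>
+  <voice name='en-US-JennyNeural'>
+    <mstts:express-as style='cheerful'>Welcome back, adventurer!</mstts:express-as>
+  </voice>
+</speak>";
+
+var result = await synthesizer.SpeakSsmlAsync(ssml);
+```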
+
+## Speech to text
+
+You can use [speech to text](speech-to-text.md) to display text from the spoken audio in your game. For an example, see the [Speech to text quickstart](get-started-speech-to-text.md).
+
+## Language identification
+
+With [language identification](language-identification.md), you can detect the language of the chat string submitted by the player.
+
+## Speech translation
+
+It's not unusual for players in the same game session to natively speak different languages, and they might appreciate receiving both the original message and its translation. You can use [speech translation](speech-translation.md) to translate text between languages so players across the world can communicate with each other in their native language.
+
+For an example, see the [Speech translation quickstart](get-started-speech-translation.md).
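+
+As a rough sketch, the following C# example recognizes English speech from the default microphone and translates it to French. The `speechKey` and `speechRegion` variables are placeholders for your Speech resource key and region.
+
+```csharp
+// A minimal sketch: recognize English speech and translate it to French.
+// speechKey and speechRegion are placeholders for your Speech resource key and region.
+var translationConfig = SpeechTranslationConfig.FromSubscription(speechKey, speechRegion);
+translationConfig.SpeechRecognitionLanguage = "en-US";
+translationConfig.AddTargetLanguage("fr");
+
+using var audioConfig = AudioConfig.FromDefaultMicrophoneInput();
+using var recognizer = new TranslationRecognizer(translationConfig, audioConfig);
+
+var result = await recognizer.RecognizeOnceAsync();
+if (result.Reason == ResultReason.TranslatedSpeech)
+{
+    Console.WriteLine($"Recognized: {result.Text}");
+    Console.WriteLine($"Translated (fr): {result.Translations["fr"]}");
+}
+```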
+
+> [!NOTE]
+> Besides the Speech service, you can also use the [Translator service](../translator/translator-overview.md). To translate text between supported source and target languages in real time, see [Text translation](../translator/text-translation-overview.md).
+
+## Next steps
+
+* [Azure gaming documentation](/gaming/azure/)
+* [Text to speech quickstart](get-started-text-to-speech.md)
+* [Speech to text quickstart](get-started-speech-to-text.md)
+* [Speech translation quickstart](get-started-speech-translation.md)
ai-services Get Speech Recognition Results https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/get-speech-recognition-results.md
+
+ Title: "Get speech recognition results - Speech service"
+
+description: Learn how to get speech recognition results.
++++++ Last updated : 06/13/2022+
+ms.devlang: cpp, csharp, golang, java, javascript, objective-c, python
+
+zone_pivot_groups: programming-languages-speech-sdk-cli
+keywords: speech to text, speech to text software
++
+# Get speech recognition results
++++++++++
+## Next steps
+
+* [Try the speech to text quickstart](get-started-speech-to-text.md)
+* [Improve recognition accuracy with custom speech](custom-speech-overview.md)
+* [Use batch transcription](batch-transcription.md)
ai-services Get Started Intent Recognition Clu https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/get-started-intent-recognition-clu.md
+
+ Title: "Intent recognition with CLU quickstart - Speech service"
+
+description: In this quickstart, you recognize intents from audio data with the Speech service and Language service.
+++++++ Last updated : 02/22/2023+
+zone_pivot_groups: programming-languages-set-thirteen
+keywords: intent recognition
++
+# Quickstart: Recognize intents with Conversational Language Understanding
++++
+## Next steps
+
+> [!div class="nextstepaction"]
+> [Learn more about speech recognition](how-to-recognize-speech.md)
ai-services Get Started Intent Recognition https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/get-started-intent-recognition.md
+
+ Title: "Intent recognition quickstart - Speech service"
+
+description: In this quickstart, you recognize intents from audio data with the Speech service and LUIS.
++++++ Last updated : 02/22/2023+
+ms.devlang: cpp, csharp, java, javascript, python
+
+zone_pivot_groups: programming-languages-speech-services
+keywords: intent recognition
++
+# Quickstart: Recognize intents with the Speech service and LUIS
+
+> [!IMPORTANT]
+> LUIS will be retired on October 1st 2025, and starting April 1st 2023, you won't be able to create new LUIS resources. We recommend [migrating your LUIS applications](../language-service/conversational-language-understanding/how-to/migrate-from-luis.md) to [conversational language understanding](../language-service/conversational-language-understanding/overview.md) to benefit from continued product support and multilingual capabilities.
+>
+> Conversational Language Understanding (CLU) is available for C# and C++ with the [Speech SDK](speech-sdk.md) version 1.25 or later. See the [quickstart](get-started-intent-recognition-clu.md) to recognize intents with the Speech SDK and CLU.
+++++++++++
+## Next steps
+
+> [!div class="nextstepaction"]
+> [See more LUIS samples on GitHub](https://github.com/Azure/pizza_luis_bot)
ai-services Get Started Speaker Recognition https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/get-started-speaker-recognition.md
+
+ Title: "Speaker Recognition quickstart - Speech service"
+
+description: In this quickstart, you use speaker recognition to confirm who is speaking. Learn about common design patterns for working with speaker verification and identification.
++++++ Last updated : 01/08/2022+
+ms.devlang: cpp, csharp, javascript
+
+zone_pivot_groups: programming-languages-speech-services
+keywords: speaker recognition, voice biometry
++
+# Quickstart: Recognize and verify who is speaking
+++++++++++
+## Next steps
+
+> [!div class="nextstepaction"]
+> [See the quickstart samples on GitHub](https://github.com/Azure-Samples/cognitive-services-speech-sdk/tree/master/quickstart)
ai-services Get Started Speech To Text https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/get-started-speech-to-text.md
+
+ Title: "Speech to text quickstart - Speech service"
+
+description: In this quickstart, you convert speech to text with recognition from a microphone.
++++++ Last updated : 09/16/2022+
+ms.devlang: cpp, csharp, golang, java, javascript, objective-c, python
+
+zone_pivot_groups: programming-languages-speech-services
+keywords: speech to text, speech to text software
++
+# Quickstart: Recognize and convert speech to text
+++++++++++
+## Next steps
+
+> [!div class="nextstepaction"]
+> [Learn more about speech recognition](how-to-recognize-speech.md)
ai-services Get Started Speech Translation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/get-started-speech-translation.md
+
+ Title: Speech translation quickstart - Speech service
+
+description: In this quickstart, you translate speech from one language to text in another language.
+++++++ Last updated : 09/16/2022+
+zone_pivot_groups: programming-languages-speech-services
+keywords: speech translation
++
+# Quickstart: Recognize and translate speech to text
++++++++++++
+## Next steps
+
+> [!div class="nextstepaction"]
+> [Learn more about speech translation](how-to-translate-speech.md)
ai-services Get Started Text To Speech https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/get-started-text-to-speech.md
+
+ Title: "Text to speech quickstart - Speech service"
+
+description: In this quickstart, you convert text to speech. Learn about object construction and design patterns, supported audio output formats, and custom configuration options for speech synthesis.
++++++ Last updated : 09/16/2022+
+ms.devlang: cpp, csharp, golang, java, javascript, objective-c, python
+
+zone_pivot_groups: programming-languages-speech-services
+keywords: text to speech
++
+# Quickstart: Convert text to speech
+++++++++++
+## Next steps
+
+> [!div class="nextstepaction"]
+> [Learn more about speech synthesis](how-to-speech-synthesis.md)
ai-services How To Async Conversation Transcription https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/how-to-async-conversation-transcription.md
+
+ Title: Asynchronous Conversation Transcription - Speech service
+
+description: Learn how to use asynchronous Conversation Transcription using the Speech service. Available for Java and C# only.
+++++ Last updated : 11/04/2019
+ms.devlang: csharp, java
+
+zone_pivot_groups: programming-languages-set-twenty-one
++
+# Asynchronous Conversation Transcription
+
+This article demonstrates asynchronous Conversation Transcription using the **RemoteConversationTranscriptionClient** API. If you've configured Conversation Transcription to do asynchronous transcription and have a `conversationId`, you can use the **RemoteConversationTranscriptionClient** API to obtain the transcription associated with that `conversationId`.
+
+## Asynchronous vs. real-time + asynchronous
+
+With asynchronous transcription, you stream the conversation audio but don't need a transcription returned in real time. Instead, after the audio is sent, use the `conversationId` of `Conversation` to query for the status of the asynchronous transcription. When the asynchronous transcription is ready, you get a `RemoteConversationTranscriptionResult`.
+
+With real-time plus asynchronous, you get the transcription in real time, and you can also get the transcription later by querying with the `conversationId` (similar to the asynchronous-only scenario).
+
+Two steps are required to accomplish asynchronous transcription. The first step is to upload the audio, choosing either asynchronous only or real-time plus asynchronous. The second step is to get the transcription results.
++++
+## Next steps
+
+> [!div class="nextstepaction"]
+> [Explore our samples on GitHub](https://aka.ms/csspeech/samples)
ai-services How To Audio Content Creation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/how-to-audio-content-creation.md
+
+ Title: Audio Content Creation - Speech service
+
+description: Audio Content Creation is an online tool that allows you to run Text to speech synthesis without writing any code.
++++++ Last updated : 09/25/2022+++
+# Speech synthesis with the Audio Content Creation tool
+
+You can use the [Audio Content Creation](https://speech.microsoft.com/portal/audiocontentcreation) tool in Speech Studio for Text to speech synthesis without writing any code. You can use the output audio as-is, or as a starting point for further customization.
+
+Build highly natural audio content for a variety of scenarios, such as audiobooks, news broadcasts, video narrations, and chat bots. With Audio Content Creation, you can efficiently fine-tune Text to speech voices and design customized audio experiences.
+
+The tool is based on [Speech Synthesis Markup Language (SSML)](speech-synthesis-markup.md). It allows you to adjust Text to speech output attributes in real-time or batch synthesis, such as voice characters, voice styles, speaking speed, pronunciation, and prosody.
+
+- No-code approach: You can use the Audio Content Creation tool for Text to speech synthesis without writing any code. The output audio might be the final deliverable that you want. For example, you can use the output audio for a podcast or a video narration.
+- Developer-friendly: You can listen to the output audio and adjust the SSML to improve speech synthesis. Then you can use the [Speech SDK](speech-sdk.md) or [Speech CLI](spx-basics.md) to integrate the SSML into your applications, as shown in the sketch after this list. For example, you can use the SSML for building a chat bot.
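+
+For example, here's a minimal C# sketch that takes SSML you tuned or exported from Audio Content Creation and synthesizes it to a WAV file with the Speech SDK. The `speechKey` and `speechRegion` variables and the file names are placeholders.
+
+```csharp
+// A minimal sketch: synthesize SSML tuned in Audio Content Creation to a WAV file.
+// speechKey, speechRegion, and the file names are placeholders.
+var speechConfig = SpeechConfig.FromSubscription(speechKey, speechRegion);
+using var audioConfig = AudioConfig.FromWavFileOutput("narration.wav");
+using var synthesizer = new SpeechSynthesizer(speechConfig, audioConfig);
+
+string ssml = System.IO.File.ReadAllText("narration.xml");
+var result = await synthesizer.SpeakSsmlAsync(ssml);
+```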
+
+You have easy access to a broad portfolio of [languages and voices](language-support.md?tabs=tts). These voices include state-of-the-art prebuilt neural voices and your custom neural voice, if you've built one.
+
+To learn more, view the Audio Content Creation tutorial video [on YouTube](https://youtu.be/ygApYuOOG6w).
+
+## Get started
+
+The Audio Content Creation tool in Speech Studio is free to access, but you'll pay for Speech service usage. To work with the tool, you need to sign in with an Azure account and create a Speech resource. For each Azure account, you have free monthly speech quotas, which include 0.5 million characters for prebuilt neural voices (referred to as *Neural* on the [pricing page](https://aka.ms/speech-pricing)). The monthly allotted amount is usually enough for a small content team of around 3-5 people.
+
+The next sections cover how to create an Azure account and get a Speech resource.
+
+### Step 1: Create an Azure account
+
+To work with Audio Content Creation, you need a [Microsoft account](https://account.microsoft.com/account) and an [Azure account](https://azure.microsoft.com/free/ai/).
+
+[The Azure portal](https://portal.azure.com/) is the centralized place for you to manage your Azure account. You can create the Speech resource, manage the product access, and monitor everything from simple web apps to complex cloud deployments.
+
+### Step 2: Create a Speech resource
+
+After you sign up for the Azure account, you need to create a Speech resource in your Azure account to access Speech services. Create a Speech resource on the [Azure portal](https://portal.azure.com). For more information, see [Create a multi-service resource](../multi-service-resource.md?pivots=azportal).
+
+It takes a few moments to deploy your new Speech resource. After the deployment is complete, you can start using the Audio Content Creation tool.
+
+ > [!NOTE]
+ > If you plan to use neural voices, make sure that you create your resource in [a region that supports neural voices](regions.md#speech-service).
+
+### Step 3: Sign in to Audio Content Creation with your Azure account and Speech resource
+
+1. After you get the Azure account and the Speech resource, sign in to [Speech Studio](https://aka.ms/speechstudio/), and then select **Audio Content Creation**.
+
+1. Select the Azure subscription and the Speech resource you want to work with, and then select **Use resource**.
+
+ The next time you sign in to Audio Content Creation, you're linked directly to the audio work files under the current Speech resource. You can check your Azure subscription details and status in the [Azure portal](https://portal.azure.com/).
+
+ If you don't have an available Speech resource and you're the owner or admin of an Azure subscription, you can create a Speech resource in Speech Studio by selecting **Create a new resource**.
+
+ If you have a user role for a certain Azure subscription, you might not have permissions to create a new Speech resource. To get access, contact your admin.
+
+ To switch your Speech resource at any time, select **Settings** at the top of the page.
+
+ To switch directories, select **Settings** or go to your profile.
+
+## Use the tool
+
+The following diagram displays the process for fine-tuning the Text to speech outputs.
++
+Each step in the preceding diagram is described here:
+
+1. Choose the Speech resource you want to work with.
+
+1. [Create an audio tuning file](#create-an-audio-tuning-file) by using plain text or SSML scripts. Enter or upload your content into Audio Content Creation.
+1. Choose the voice and the language for your script content. Audio Content Creation includes all of the [prebuilt text to speech voices](language-support.md?tabs=tts). You can use prebuilt neural voices or a custom neural voice.
+
+ > [!NOTE]
+ > Gated access is available for Custom Neural Voice, which allows you to create high-definition voices that are similar to natural-sounding speech. For more information, see [Gating process](./text-to-speech.md).
+
+1. Select the content you want to preview, and then select **Play** (triangle icon) to preview the default synthesis output.
+
+ If you make any changes to the text, select the **Stop** icon, and then select **Play** again to regenerate the audio with changed scripts.
+
+ Improve the output by adjusting pronunciation, break, pitch, rate, intonation, voice style, and more. For a complete list of options, see [Speech Synthesis Markup Language](speech-synthesis-markup.md).
+
+ For more information about fine-tuning speech output, view the [How to convert Text to speech using Microsoft Azure AI voices](https://youtu.be/ygApYuOOG6w) video.
+
+1. Save and [export your tuned audio](#export-tuned-audio).
+
+ When you save the tuning track in the system, you can continue to work and iterate on the output. When you're satisfied with the output, you can create an audio creation task with the export feature. You can observe the status of the export task and download the output for use with your apps and products.
+
+## Create an audio tuning file
+
+You can get your content into the Audio Content Creation tool in either of two ways:
+
+* **Option 1**
+
+ 1. Select **New** > **Text file** to create a new audio tuning file.
+
+ 1. Enter or paste your content into the editing window. The allowable number of characters for each file is 20,000 or fewer. If your script contains more than 20,000 characters, you can use Option 2 to automatically split your content into multiple files.
+
+ 1. Select **Save**.
+
+* **Option 2**
+
+ 1. Select **Upload** > **Text file** to import one or more text files. Both plain text and SSML are supported.
+
+ If your script file is more than 20,000 characters, split the content by paragraphs, by characters, or by regular expressions.
+
+ 1. When you upload your text files, make sure that they meet these requirements:
+
+ | Property | Description |
+ |-||
+ | File format | Plain text (.txt)\*<br> SSML text (.txt)\**<br/> Zip files aren't supported. |
+ | Encoding format | UTF-8 |
+ | File name | Each file must have a unique name. Duplicate files aren't supported. |
+ | Text length | Character limit is 20,000. If your files exceed the limit, split them according to the instructions in the tool. |
+ | SSML restrictions | Each SSML file can contain only a single piece of SSML. |
+
+
+ \* **Plain text example**:
+
+ ```txt
+ Welcome to use Audio Content Creation to customize audio output for your products.
+ ```
+
+ \** **SSML text example**:
+
+ ```xml
+ <speak xmlns="http://www.w3.org/2001/10/synthesis" xmlns:mstts="http://www.w3.org/2001/mstts" version="1.0" xml:lang="en-US">
+ <voice name="en-US-JennyNeural">
+ Welcome to use Audio Content Creation <break time="10ms" />to customize audio output for your products.
+ </voice>
+ </speak>
+ ```
+
+## Export tuned audio
+
+After you've reviewed your audio output and are satisfied with your tuning and adjustment, you can export the audio.
+
+1. Select **Export** to create an audio creation task.
+
+ We recommend **Export to Audio library** to easily store, find, and search audio output in the cloud. You can better integrate with your applications through Azure blob storage. You can also download the audio to your local disk directly.
+
+1. Choose the output format for your tuned audio. The **supported audio formats and sample rates** are listed in the following table:
+
+ | Format | 8 kHz sample rate | 16 kHz sample rate | 24 kHz sample rate | 48 kHz sample rate |
+ | | | | | |
+ | wav | riff-8khz-16bit-mono-pcm | riff-16khz-16bit-mono-pcm | riff-24khz-16bit-mono-pcm |riff-48khz-16bit-mono-pcm |
+ | mp3 | N/A | audio-16khz-128kbitrate-mono-mp3 | audio-24khz-160kbitrate-mono-mp3 |audio-48khz-192kbitrate-mono-mp3 |
+
+
+1. To view the status of the task, select the **Task list** tab.
+
+ If the task fails, see the detailed information page for a full report.
+
+1. When the task is complete, your audio is available for download on the **Audio library** pane.
+
+1. Select the file you want to download, and then select **Download**.
+
+ Now you're ready to use your custom tuned audio in your apps or products.
+
+## Configure BYOS and anonymous public read access for blobs
+
+If you lose access permission to your Bring Your Own Storage (BYOS), you won't be able to view, create, edit, or delete files. To resume your access, you need to remove the current storage and reconfigure the BYOS in the [Azure portal](https://portal.azure.com/#allservices). To learn more about how to configure BYOS, see [Mount Azure Storage as a local share in App Service](/azure/app-service/configure-connect-to-azure-storage?pivots=container-linux&tabs=portal).
+
+After configuring the BYOS permission, you need to configure anonymous public read access for the related containers and blobs. Otherwise, blob data isn't available for public access and your lexicon file in the blob is inaccessible. By default, a container's public access setting is disabled. To grant anonymous users read access to a container and its blobs, first set **Allow Blob public access** to **Enabled** to allow public access for the storage account, and then set the public access level of the container (named **acc-public-files**) to **anonymous read access for blobs only**. To learn more about how to configure anonymous public read access, see [Configure anonymous public read access for containers and blobs](/azure/storage/blobs/anonymous-read-access-configure?tabs=portal).
+
+## Add or remove Audio Content Creation users
+
+If more than one user wants to use Audio Content Creation, you can grant them access to the Azure subscription and the Speech resource. If you add users to an Azure subscription, they can access all the resources under the Azure subscription. But if you add users to a Speech resource only, they'll have access only to the Speech resource and not to other resources under this Azure subscription. Users with access to the Speech resource can use the Audio Content Creation tool.
+
+The users you grant access to need to set up a [Microsoft account](https://account.microsoft.com/account). If they don't have a Microsoft account, they can create one in just a few minutes. They can use their existing email and link it to a Microsoft account, or they can create and use an Outlook email address as a Microsoft account.
+
+### Add users to a Speech resource
+
+To add users to a Speech resource so that they can use Audio Content Creation, do the following:
++
+1. In the [Azure portal](https://portal.azure.com/), select **All services**.
+1. Then select **Azure AI services**, and navigate to your specific Speech resource.
+ > [!NOTE]
+ > You can also set up Azure RBAC for whole resource groups, subscriptions, or management groups. Do this by selecting the desired scope level and then navigating to the desired item (for example, selecting **Resource groups** and then clicking through to your wanted resource group).
+1. Select **Access control (IAM)** on the left navigation pane.
+1. Select **Add** -> **Add role assignment**.
+1. On the **Role** tab on the next screen, select a role you want to add (in this case, **Owner**).
+1. On the **Members** tab, enter a user's email address and select the user's name in the directory. The email address must be linked to a Microsoft account that's trusted by Azure Active Directory. Users can easily sign up for a [Microsoft account](https://account.microsoft.com/account) by using their personal email address.
+1. On the **Review + assign** tab, select **Review + assign** to assign the role.
+
+Here is what happens next:
+
+An email invitation is automatically sent to users. They can accept it by selecting **Accept invitation** > **Accept to join Azure** in their email. They're then redirected to the Azure portal. They don't need to take further action in the Azure portal. After a few moments, users are assigned the role at the Speech resource scope, which gives them access to this Speech resource. If users don't receive the invitation email, you can search for their account under **Role assignments** and go into their profile. Look for **Identity** > **Invitation accepted**, and select **(manage)** to resend the email invitation. You can also copy and send the invitation link to them.
+
+Users now visit or refresh the [Audio Content Creation](https://aka.ms/audiocontentcreation) product page, and sign in with their Microsoft account. They select the **Audio Content Creation** block from among all Speech products. They choose the Speech resource in the pop-up window or in the settings at the upper right.
+
+If they can't find the available Speech resource, they can check to ensure that they're in the right directory. To do so, they select the account profile at the upper right and then select **Switch** next to **Current directory**. If there's more than one directory available, it means they have access to multiple directories. They can switch to different directories and go to **Settings** to see whether the right Speech resource is available.
+
+Users who are in the same Speech resource will see each other's work in the Audio Content Creation tool. If you want each individual user to have a unique and private workspace in Audio Content Creation, [create a new Speech resource](#step-2-create-a-speech-resource) for each user and give each user access only to their own Speech resource.
+
+### Remove users from a Speech resource
+
+1. Search for **Azure AI services** in the Azure portal, and select the Speech resource that you want to remove users from.
+1. Select **Access control (IAM)**, and then select the **Role assignments** tab to view all the role assignments for this Speech resource.
+1. Select the users you want to remove, select **Remove**, and then select **OK**.
+
+ :::image type="content" source="media/audio-content-creation/remove-user.png" alt-text="Screenshot of the 'Remove' button on the 'Remove role assignments' pane.":::
+
+### Enable users to grant access to others
+
+If you want to allow a user to grant access to other users, you need to assign them the Owner role for the Speech resource and assign them the **Directory Readers** role in Azure Active Directory.
+1. Add the user as the owner of the Speech resource. For more information, see [Add users to a Speech resource](#add-users-to-a-speech-resource).
+
+ :::image type="content" source="media/audio-content-creation/add-role.png" alt-text="Screenshot showing the 'Owner' role on the 'Add role assignment' pane. ":::
+
+1. In the [Azure portal](https://portal.azure.com/), select the collapsed menu at the upper left, select **Azure Active Directory**, and then select **Users**.
+1. Search for the user's Microsoft account, go to their detail page, and then select **Assigned roles**.
+1. Select **Add assignments** > **Directory Readers**. If the **Add assignments** button is unavailable, it means that you don't have access. Only the global administrator of this directory can add assignments to users.
+
+## Next steps
+
+- [Speech Synthesis Markup Language (SSML)](speech-synthesis-markup.md)
+- [Batch synthesis](batch-synthesis.md)
ai-services How To Configure Azure Ad Auth https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/how-to-configure-azure-ad-auth.md
+
+ Title: How to configure Azure Active Directory Authentication
+
+description: Learn how to authenticate using Azure Active Directory Authentication
++++++ Last updated : 06/18/2021+
+zone_pivot_groups: programming-languages-set-two
+ms.devlang: cpp, csharp, java, python
++
+# Azure Active Directory Authentication with the Speech SDK
+
+When using the Speech SDK to access the Speech service, there are three authentication methods available: service keys, a key-based token, and Azure Active Directory (Azure AD). This article describes how to configure a Speech resource and create a Speech SDK configuration object to use Azure AD for authentication.
+
+This article shows how to use Azure AD authentication with the Speech SDK. You'll learn how to:
+
+> [!div class="checklist"]
+>
+> - Create a Speech resource
+> - Configure the Speech resource for Azure AD authentication
+> - Get an Azure AD access token
+> - Create the appropriate SDK configuration object.
+
+To learn more about Azure AD access tokens, including token lifetime, visit [Access tokens in the Microsoft identity platform](/azure/active-directory/develop/access-tokens).
+
+## Create a Speech resource
+To create a Speech resource in the [Azure portal](https://portal.azure.com), see [Get the keys for your resource](~/articles/ai-services/multi-service-resource.md?pivots=azportal#get-the-keys-for-your-resource).
+
+## Configure the Speech resource for Azure AD authentication
+
+To configure your Speech resource for Azure AD authentication, create a custom domain name and assign roles.
+
+### Create a custom domain name
+
+### Assign roles
+For Azure AD authentication with Speech resources, you need to assign either the *Azure AI Speech Contributor* or *Azure AI Speech User* role.
+
+You can assign roles to the user or application using the [Azure portal](../../role-based-access-control/role-assignments-portal.md) or [PowerShell](../../role-based-access-control/role-assignments-powershell.md).
+
+## Get an Azure AD access token
+To get an Azure AD access token in C#, use the [Azure Identity Client Library](/dotnet/api/overview/azure/identity-readme).
+
+Here's an example of using Azure Identity to get an Azure AD access token from an interactive browser:
+```c#
+TokenRequestContext context = new Azure.Core.TokenRequestContext(new string[] { "https://cognitiveservices.azure.com/.default" });
+InteractiveBrowserCredential browserCredential = new InteractiveBrowserCredential();
+var browserToken = browserCredential.GetToken(context);
+string aadToken = browserToken.Token;
+```
+The token context must be set to "https://cognitiveservices.azure.com/.default".
+
+To get an Azure AD access token in C++, use the [Azure Identity Client Library](https://github.com/Azure/azure-sdk-for-cpp/tree/main/sdk/identity/azure-identity).
+
+Here's an example of using Azure Identity to get an Azure AD access token with your tenant ID, client ID, and client secret credentials:
+```cpp
+const std::string tenantId = "Your Tenant ID";
+const std::string clientId = "Your Client ID";
+const std::string clientSecret = "Your Client Secret";
+const std::string tokenContext = "https://cognitiveservices.azure.com/.default";
+
+Azure::Identity::ClientSecretCredential cred(tenantId,
+ clientId,
+ clientSecret,
+ Azure::Identity::ClientSecretCredentialOptions());
+
+Azure::Core::Credentials::TokenRequestContext context;
+context.Scopes.push_back(tokenContext);
+
+auto token = cred.GetToken(context, Azure::Core::Context());
+```
+The token context must be set to "https://cognitiveservices.azure.com/.default".
++
+To get an Azure AD access token in Java, use the [Azure Identity Client Library](/java/api/overview/azure/identity-readme).
+
+Here's an example of using Azure Identity to get an Azure AD access token from a browser:
+```java
+TokenRequestContext context = new TokenRequestContext();
+context.addScopes("https://cognitiveservices.azure.com/.default");
+
+InteractiveBrowserCredentialBuilder builder = new InteractiveBrowserCredentialBuilder();
+InteractiveBrowserCredential browserCredential = builder.build();
+
+AccessToken browserToken = browserCredential.getToken(context).block();
+String token = browserToken.getToken();
+```
+The token context must be set to "https://cognitiveservices.azure.com/.default".
+
+To get an Azure AD access token in Python, use the [Azure Identity Client Library](/python/api/overview/azure/identity-readme).
+
+Here's an example of using Azure Identity to get an Azure AD access token from an interactive browser:
+```Python
+from azure.identity import InteractiveBrowserCredential
+ibc = InteractiveBrowserCredential()
+aadToken = ibc.get_token("https://cognitiveservices.azure.com/.default")
+```
+
+Find samples that get an Azure AD access token in [Microsoft identity platform code samples](../../active-directory/develop/sample-v2-code.md).
+
+For programming languages where a Microsoft identity platform client library isn't available, you can directly [request an access token](../../active-directory/develop/v2-oauth-ropc.md).
+
+## Get the Speech resource ID
+
+You need your Speech resource ID to make SDK calls using Azure AD authentication.
+
+> [!NOTE]
+> For Intent Recognition use your LUIS Prediction resource ID.
+
+# [Azure portal](#tab/portal)
+
+To get the resource ID in the Azure portal:
+
+1. Go to the [Azure portal](https://portal.azure.com/) and sign in to your Azure account.
+1. Select a Speech resource.
+1. In the **Resource Management** group on the left pane, select **Properties**.
+1. Copy the **Resource ID**
+
+# [PowerShell](#tab/powershell)
+
+To get the resource ID using PowerShell, confirm that you have PowerShell version 7.x or later with the Azure PowerShell module version 5.1.0 or later. To see the versions of these tools, follow these steps:
+
+1. In a PowerShell window, enter:
+
+ `$PSVersionTable`
+
+ Confirm that the `PSVersion` value is 7.x or later. To upgrade PowerShell, follow the instructions at [Installing various versions of PowerShell](/powershell/scripting/install/installing-powershell).
+
+1. In a PowerShell window, enter:
+
+ `Get-Module -ListAvailable Az`
+
+ If nothing appears, or if that version of the Azure PowerShell module is earlier than 5.1.0, follow the instructions at [Install the Azure PowerShell module](/powershell/azure/install-azure-powershell) to upgrade.
+
+Now run `Connect-AzAccount` to create a connection with Azure.
++
+```azurepowershell
+Connect-AzAccount
+$subscriptionId = "Your Azure subscription Id"
+$resourceGroup = "Resource group name where Speech resource is located"
+$speechResourceName = "Your Speech resource name"
+
+# Select the Azure subscription that contains the Speech resource.
+# You can skip this step if your Azure account has only one active subscription.
+Set-AzContext -SubscriptionId $subscriptionId
+
+# Get the Speech resource
+$resource = Get-AzCognitiveServicesAccount -Name $speechResourceName -ResourceGroupName $resourceGroup
+
+# Get the resource ID:
+$resourceId = $resource.Id
+```
+***
+
+## Create the Speech SDK configuration object
+
+With an Azure AD access token, you can now create a Speech SDK configuration object.
+
+The method of providing the token and the method of constructing the corresponding Speech SDK ```Config``` object vary depending on the object you're using.
+
+### SpeechRecognizer, SpeechSynthesizer, IntentRecognizer, ConversationTranscriber
+
+For ```SpeechRecognizer```, ```SpeechSynthesizer```, ```IntentRecognizer```, ```ConversationTranscriber``` objects, build the authorization token from the resource ID and the Azure AD access token and then use it to create a ```SpeechConfig``` object.
+
+```C#
+string resourceId = "Your Resource ID";
+string aadToken = "Your Azure AD access token";
+string region = "Your Speech Region";
+
+// You need to include the "aad#" prefix and the "#" (hash) separator between resource ID and AAD access token.
+var authorizationToken = $"aad#{resourceId}#{aadToken}";
+var speechConfig = SpeechConfig.FromAuthorizationToken(authorizationToken, region);
+```
+
+```C++
+std::string resourceId = "Your Resource ID";
+std::string aadToken = "Your Azure AD access token";
+std::string region = "Your Speech Region";
+
+// You need to include the "aad#" prefix and the "#" (hash) separator between resource ID and AAD access token.
+auto authorizationToken = "aad#" + resourceId + "#" + aadToken;
+auto speechConfig = SpeechConfig::FromAuthorizationToken(authorizationToken, region);
+```
+
+```Java
+String resourceId = "Your Resource ID";
+String region = "Your Region";
+
+// You need to include the "aad#" prefix and the "#" (hash) separator between resource ID and AAD access token.
+String authorizationToken = "aad#" + resourceId + "#" + token;
+SpeechConfig speechConfig = SpeechConfig.fromAuthorizationToken(authorizationToken, region);
+```
+
+```Python
+resourceId = "Your Resource ID"
+region = "Your Region"
+# You need to include the "aad#" prefix and the "#" (hash) separator between resource ID and AAD access token.
+authorizationToken = "aad#" + resourceId + "#" + aadToken.token
+speechConfig = SpeechConfig(auth_token=authorizationToken, region=region)
+```
+
+### TranslationRecognizer
+
+For the ```TranslationRecognizer```, build the authorization token from the resource ID and the Azure AD access token and then use it to create a ```SpeechTranslationConfig``` object.
+
+```C#
+string resourceId = "Your Resource ID";
+string aadToken = "Your Azure AD access token";
+string region = "Your Speech Region";
+
+// You need to include the "aad#" prefix and the "#" (hash) separator between resource ID and AAD access token.
+var authorizationToken = $"aad#{resourceId}#{aadToken}";
+var speechConfig = SpeechTranslationConfig.FromAuthorizationToken(authorizationToken, region);
+```
+
+```cpp
+std::string resourceId = "Your Resource ID";
+std::string aadToken = "Your Azure AD access token";
+std::string region = "Your Speech Region";
+
+// You need to include the "aad#" prefix and the "#" (hash) separator between resource ID and AAD access token.
+auto authorizationToken = "aad#" + resourceId + "#" + aadToken;
+auto speechConfig = SpeechTranslationConfig::FromAuthorizationToken(authorizationToken, region);
+```
+
+```Java
+String resourceId = "Your Resource ID";
+String region = "Your Region";
+
+// You need to include the "aad#" prefix and the "#" (hash) separator between resource ID and AAD access token.
+String authorizationToken = "aad#" + resourceId + "#" + token;
+SpeechTranslationConfig translationConfig = SpeechTranslationConfig.fromAuthorizationToken(authorizationToken, region);
+```
+
+```Python
+resourceId = "Your Resource ID"
+region = "Your Region"
+
+# You need to include the "aad#" prefix and the "#" (hash) separator between resource ID and AAD access token.
+authorizationToken = "aad#" + resourceId + "#" + aadToken.token
+translationConfig = SpeechTranslationConfig(auth_token=authorizationToken, region=region)
+```
+
+### DialogServiceConnector
+
+For the ```DialogServiceConnector``` object, build the authorization token from the resource ID and the Azure AD access token and then use it to create a ```CustomCommandsConfig``` or a ```BotFrameworkConfig``` object.
+
+```C#
+string resourceId = "Your Resource ID";
+string aadToken = "Your Azure AD access token";
+string region = "Your Speech Region";
+string appId = "Your app ID";
+
+// You need to include the "aad#" prefix and the "#" (hash) separator between resource ID and AAD access token.
+var authorizationToken = $"aad#{resourceId}#{aadToken}";
+var customCommandsConfig = CustomCommandsConfig.FromAuthorizationToken(appId, authorizationToken, region);
+```
+
+```cpp
+std::string resourceId = "Your Resource ID";
+std::string aadToken = "Your Azure AD access token";
+std::string region = "Your Speech Region";
+std::string appId = "Your app Id";
+
+// You need to include the "aad#" prefix and the "#" (hash) separator between resource ID and AAD access token.
+auto authorizationToken = "aad#" + resourceId + "#" + aadToken;
+auto customCommandsConfig = CustomCommandsConfig::FromAuthorizationToken(appId, authorizationToken, region);
+```
+
+```Java
+String resourceId = "Your Resource ID";
+String region = "Your Region";
+String appId = "Your AppId";
+
+// You need to include the "aad#" prefix and the "#" (hash) separator between resource ID and AAD access token.
+String authorizationToken = "aad#" + resourceId + "#" + token;
+CustomCommandsConfig dialogServiceConfig = CustomCommandsConfig.fromAuthorizationToken(appId, authorizationToken, region);
+```
+
+The ```DialogServiceConnector``` isn't currently supported in Python.
+
+### VoiceProfileClient
+To use the ```VoiceProfileClient``` with Azure AD authentication, use the custom domain name created above.
+
+```C#
+string customDomainName = "Your Custom Name";
+string hostName = $"https://{customDomainName}.cognitiveservices.azure.com/";
+string resourceId = "Your Resource ID";
+string aadToken = "Your Azure AD access token";
+
+var config = SpeechConfig.FromHost(new Uri(hostName));
+
+// You need to include the "aad#" prefix and the "#" (hash) separator between resource ID and AAD access token.
+var authorizationToken = $"aad#{resourceId}#{aadToken}";
+config.AuthorizationToken = authorizationToken;
+```
+
+```cpp
+std::string customDomainName = "Your Custom Name";
+std::string resourceId = "Your Resource ID";
+std::string aadToken = "Your Azure AD access token";
+
+auto speechConfig = SpeechConfig::FromHost("https://" + customDomainName + ".cognitiveservices.azure.com/");
+
+// You need to include the "aad#" prefix and the "#" (hash) separator between resource ID and AAD access token.
+auto authorizationToken = "aad#" + resourceId + "#" + aadToken;
+speechConfig->SetAuthorizationToken(authorizationToken);
+```
+
+```Java
+String resourceId = "Your Resource ID";
+String aadToken = "Your Azure AD access token";
+String customDomainName = "Your Custom Name";
+String hostName = "https://" + customDomainName + ".cognitiveservices.azure.com/";
+SpeechConfig speechConfig = SpeechConfig.fromHost(new URI(hostName));
+
+// You need to include the "aad#" prefix and the "#" (hash) separator between resource ID and AAD access token.
+String authorizationToken = "aad#" + resourceId + "#" + aadToken;
+
+speechConfig.setAuthorizationToken(authorizationToken);
+```
+
+The ```VoiceProfileClient``` isn't available with the Speech SDK for Python.
+
+> [!NOTE]
+> The ```ConversationTranslator``` doesn't support Azure AD authentication.
ai-services How To Configure Openssl Linux https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/how-to-configure-openssl-linux.md
+
+ Title: How to configure OpenSSL for Linux
+
+description: Learn how to configure OpenSSL for Linux.
+++++++ Last updated : 06/22/2022+
+zone_pivot_groups: programming-languages-set-three
+++
+# Configure OpenSSL for Linux
+
+With the Speech SDK, [OpenSSL](https://www.openssl.org) is dynamically configured to the host-system version.
+
+> [!NOTE]
+> This article is only applicable where the Speech SDK is [supported on Linux](speech-sdk.md#supported-languages).
+
+To ensure connectivity, verify that OpenSSL certificates are installed on your system. Run the following command:
+```bash
+openssl version -d
+```
+
+The output on Ubuntu/Debian based systems should be:
+```
+OPENSSLDIR: "/usr/lib/ssl"
+```
+
+Check whether there's a `certs` subdirectory under OPENSSLDIR. In the example above, it would be `/usr/lib/ssl/certs`.
+
+* If `/usr/lib/ssl/certs` exists and contains many individual certificate files (with a `.crt` or `.pem` extension), no further action is needed.
+
+* If OPENSSLDIR is something other than `/usr/lib/ssl` or there's a single certificate bundle file instead of multiple individual files, you need to set an appropriate SSL environment variable to indicate where the certificates can be found.
+
+## Examples
+Here are some example environment variables to configure per OpenSSL directory.
+
+- OPENSSLDIR is `/opt/ssl`. There's a `certs` subdirectory with many `.crt` or `.pem` files.
+Set the environment variable `SSL_CERT_DIR` to point at `/opt/ssl/certs` before using the Speech SDK. For example:
+```bash
+export SSL_CERT_DIR=/opt/ssl/certs
+```
+
+- OPENSSLDIR is `/etc/pki/tls` (like on RHEL/CentOS based systems). There's a `certs` subdirectory with a certificate bundle file, for example `ca-bundle.crt`.
+Set the environment variable `SSL_CERT_FILE` to point at that file before using the Speech SDK. For example:
+```bash
+export SSL_CERT_FILE=/etc/pki/tls/certs/ca-bundle.crt
+```
+
+## Certificate revocation checks
+
+When the Speech SDK connects to the Speech service, it checks the Transport Layer Security (TLS/SSL) certificate. The Speech SDK verifies that the certificate reported by the remote endpoint is trusted and hasn't been revoked. This verification provides a layer of protection against attacks involving spoofing and other related vectors. The check is accomplished by retrieving a certificate revocation list (CRL) from a certificate authority (CA) used by Azure. A list of Azure CA download locations for updated TLS CRLs can be found in [this document](../../security/fundamentals/tls-certificate-changes.md).
+
+If a destination posing as the Speech service reports a certificate that's been revoked in a retrieved CRL, the SDK will terminate the connection and report an error via a `Canceled` event. The authenticity of a reported certificate can't be checked without an updated CRL. Therefore, the Speech SDK will also treat a failure to download a CRL from an Azure CA location as an error.
+
+> [!WARNING]
+> If your solution uses a proxy or firewall, it should be configured to allow access to all certificate revocation list URLs used by Azure. Note that many of these URLs are outside of the `microsoft.com` domain, so allowing access to `*.microsoft.com` isn't enough. See [this document](../../security/fundamentals/tls-certificate-changes.md) for details. In exceptional cases, you can ignore CRL failures (see [the corresponding section](#bypassing-or-ignoring-crl-failures)), but such a configuration is strongly discouraged, especially for production scenarios.
+
+### Large CRL files (>10 MB)
+
+One cause of CRL-related failures is the use of large CRL files. This class of error is typically only applicable to special environments with extended CA chains. Standard public endpoints shouldn't encounter this class of issue.
+
+The default maximum CRL size used by the Speech SDK (10 MB) can be adjusted per config object. The property key for this adjustment is `CONFIG_MAX_CRL_SIZE_KB` and the value, specified as a string, is by default "10000" (10 MB). For example, when creating a `SpeechRecognizer` object (that manages a connection to the Speech service), you can set this property in its `SpeechConfig`. In the snippet below, the configuration is adjusted to permit a CRL file size up to 15 MB.
++
+```csharp
+config.SetProperty("CONFIG_MAX_CRL_SIZE_KB"", "15000");
+```
+++
+```cpp
+config->SetProperty("CONFIG_MAX_CRL_SIZE_KB"", "15000");
+```
+++
+```java
+config.setProperty("CONFIG_MAX_CRL_SIZE_KB"", "15000");
+```
+++
+```python
+speech_config.set_property_by_name("CONFIG_MAX_CRL_SIZE_KB"", "15000")
+```
+++
+```go
+speechConfig.properties.SetPropertyByString("CONFIG_MAX_CRL_SIZE_KB", "15000")
+```
++
+### Bypassing or ignoring CRL failures
+
+If an environment can't be configured to access an Azure CA location, the Speech SDK will never be able to retrieve an updated CRL. You can configure the SDK either to continue and log download failures or to bypass all CRL checks.
+
+> [!WARNING]
+> CRL checks are a security measure and bypassing them increases susceptibility to attacks. They should not be bypassed without thorough consideration of the security implications and alternative mechanisms for protecting against the attack vectors that CRL checks mitigate.
+
+To continue with the connection when a CRL can't be retrieved, set the property `"OPENSSL_CONTINUE_ON_CRL_DOWNLOAD_FAILURE"` to `"true"`. An attempt will still be made to retrieve a CRL and failures will still be emitted in logs, but connection attempts will be allowed to continue.
++
+```csharp
+config.SetProperty("OPENSSL_CONTINUE_ON_CRL_DOWNLOAD_FAILURE", "true");
+```
+++
+```cpp
+config->SetProperty("OPENSSL_CONTINUE_ON_CRL_DOWNLOAD_FAILURE", "true");
+```
+++
+```java
+config.setProperty("OPENSSL_CONTINUE_ON_CRL_DOWNLOAD_FAILURE", "true");
+```
+++
+```python
+speech_config.set_property_by_name("OPENSSL_CONTINUE_ON_CRL_DOWNLOAD_FAILURE", "true")
+```
+++
+```go
+
+speechConfig.properties.SetPropertyByString("OPENSSL_CONTINUE_ON_CRL_DOWNLOAD_FAILURE", "true")
+```
++
+To turn off certificate revocation checks, set the property `"OPENSSL_DISABLE_CRL_CHECK"` to `"true"`. Then, while connecting to the Speech service, there will be no attempt to check or download a CRL and no automatic verification of a reported TLS/SSL certificate.
++
+```csharp
+config.SetProperty("OPENSSL_DISABLE_CRL_CHECK", "true");
+```
+++
+```cpp
+config->SetProperty("OPENSSL_DISABLE_CRL_CHECK", "true");
+```
+++
+```java
+config.setProperty("OPENSSL_DISABLE_CRL_CHECK", "true");
+```
+++
+```python
+speech_config.set_property_by_name("OPENSSL_DISABLE_CRL_CHECK", "true")
+```
+++
+```go
+speechConfig.properties.SetPropertyByString("OPENSSL_DISABLE_CRL_CHECK", "true")
+```
++
+### CRL caching and performance
+
+By default, the Speech SDK will cache a successfully downloaded CRL on disk to improve the initial latency of future connections. When no cached CRL is present or when the cached CRL is expired, a new list will be downloaded.
+
+Some Linux distributions don't have a `TMP` or `TMPDIR` environment variable defined, so the Speech SDK won't cache downloaded CRLs and will instead download a new CRL for each connection. To improve initial connection performance in this situation, you can [create a `TMPDIR` environment variable and set it to the path of an accessible temporary directory](https://help.ubuntu.com/community/EnvironmentVariables).
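+
+If defining the variable in the shell isn't an option, one workaround is to set `TMPDIR` from application code before the first connection. This is a minimal C# sketch under the assumption that the SDK reads `TMPDIR` from the process environment when it first caches a CRL; `/tmp` is only an example and can be any writable directory.
+
+```csharp
+using System;
+
+// Assumption: the Speech SDK picks up TMPDIR from the process environment when it
+// first caches a downloaded CRL, so set it before creating any SDK objects.
+if (string.IsNullOrEmpty(Environment.GetEnvironmentVariable("TMPDIR")))
+{
+    Environment.SetEnvironmentVariable("TMPDIR", "/tmp"); // any writable directory
+}
+```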
+
+## Next steps
+
+- [Speech SDK overview](speech-sdk.md)
+- [Install the Speech SDK](quickstarts/setup-platform.md)
ai-services How To Configure Rhel Centos 7 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/how-to-configure-rhel-centos-7.md
+
+ Title: How to configure RHEL/CentOS 7 - Speech service
+
+description: Learn how to configure RHEL/CentOS 7 so that the Speech SDK can be used.
++++++ Last updated : 04/01/2022+++
+# Configure RHEL/CentOS 7
+
+To use the Speech SDK on Red Hat Enterprise Linux (RHEL) 7 x64 and CentOS 7 x64, update the C++ compiler (for C++ development) and the shared C++ runtime library on your system.
+
+## Install dependencies
+
+First install all general dependencies:
+
+```bash
+sudo rpm -Uvh https://packages.microsoft.com/config/rhel/7/packages-microsoft-prod.rpm
+
+# Install development tools and libraries
+sudo yum update -y
+sudo yum groupinstall -y "Development tools"
+sudo yum install -y alsa-lib dotnet-sdk-2.1 java-1.8.0-openjdk-devel openssl
+sudo yum install -y gstreamer1 gstreamer1-plugins-base gstreamer1-plugins-good gstreamer1-plugins-bad-free gstreamer1-plugins-ugly-free
+```
+
+## C/C++ compiler and runtime libraries
+
+Install the prerequisite packages with this command:
+
+```bash
+sudo yum install -y gmp-devel mpfr-devel libmpc-devel
+```
+
+Next update the compiler and runtime libraries:
+
+```bash
+# Build GCC 7.5.0 and runtimes and install them under /usr/local
+curl https://ftp.gnu.org/gnu/gcc/gcc-7.5.0/gcc-7.5.0.tar.gz -O
+tar -xf gcc-7.5.0.tar.gz
+mkdir gcc-7.5.0-build && cd gcc-7.5.0-build
+../gcc-7.5.0/configure --enable-languages=c,c++ --disable-bootstrap --disable-multilib --prefix=/usr/local
+make -j$(nproc)
+sudo make install-strip
+```
+
+If the updated compiler and libraries need to be deployed on several machines, you can simply copy them from under `/usr/local` to the other machines. If only the runtime libraries are needed, the files in `/usr/local/lib64` are enough.
+
+## Environment settings
+
+Run the following commands to complete the configuration:
+
+```bash
+# Add updated C/C++ runtimes to the library path
+# (this is required for any development/testing with Speech SDK)
+export LD_LIBRARY_PATH=/usr/local/lib64:$LD_LIBRARY_PATH
+
+# For C++ development only:
+# - add the updated compiler to PATH
+# (note, /usr/local/bin should be already first in PATH on vanilla systems)
+# - add Speech SDK libraries from the Linux tar package to LD_LIBRARY_PATH
+# (note, use the actual path to extracted files!)
+export PATH=/usr/local/bin:$PATH
+hash -r # reset cached paths in the current shell session just in case
+export LD_LIBRARY_PATH=/path/to/extracted/SpeechSDK-Linux-<version>/lib/centos7-x64:$LD_LIBRARY_PATH
+```
+
+> [!NOTE]
+> The Linux .tar package contains specific libraries for RHEL/CentOS 7. These are in `lib/centos7-x64` as shown in the environment setting example for `LD_LIBRARY_PATH` above. Speech SDK libraries in `lib/x64` are for all the other supported Linux x64 distributions (including RHEL/CentOS 8) and don't work on RHEL/CentOS 7.
+
+## Next steps
+
+> [!div class="nextstepaction"]
+> [About the Speech SDK](speech-sdk.md)
ai-services How To Control Connections https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/how-to-control-connections.md
+
+ Title: Service connectivity how-to - Speech SDK
+
+description: Learn how to monitor for connection status and manually connect or disconnect from the Speech service.
++++++ Last updated : 04/12/2021+
+zone_pivot_groups: programming-languages-set-thirteen
+ms.devlang: cpp, csharp, java
+++
+# How to monitor and control service connections with the Speech SDK
+
+`SpeechRecognizer` and other objects in the Speech SDK automatically connect to the Speech service when it's appropriate. Sometimes, you may either want extra control over when connections begin and end or want more information about when the Speech SDK establishes or loses its connection. The supporting `Connection` class provides this capability.
+
+## Retrieve a Connection object
+
+A `Connection` can be obtained from most top-level Speech SDK objects via a static `From...` factory method, for example, `Connection::FromRecognizer(recognizer)` for `SpeechRecognizer`.
++
+```csharp
+var connection = Connection.FromRecognizer(recognizer);
+```
+++
+```cpp
+auto connection = Connection::FromRecognizer(recognizer);
+```
+++
+```java
+Connection connection = Connection.fromRecognizer(recognizer);
+```
++
+## Monitor for connections and disconnections
+
+A `Connection` raises `Connected` and `Disconnected` events when the corresponding status change happens in the Speech SDK's connection to the Speech service. You can listen to these events to know the latest connection state.
++
+```csharp
+connection.Connected += (sender, connectedEventArgs) =>
+{
+ Console.WriteLine($"Successfully connected, sessionId: {connectedEventArgs.SessionId}");
+};
+connection.Disconnected += (sender, disconnectedEventArgs) =>
+{
+ Console.WriteLine($"Disconnected, sessionId: {disconnectedEventArgs.SessionId}");
+};
+```
+++
+```cpp
+connection->Connected += [&](ConnectionEventArgs& args)
+{
+ std::cout << "Successfully connected, sessionId: " << args.SessionId << std::endl;
+};
+connection->Disconnected += [&](ConnectionEventArgs& args)
+{
+ std::cout << "Disconnected, sessionId: " << args.SessionId << std::endl;
+};
+```
+++
+```java
+connection.connected.addEventListener((s, connectionEventArgs) -> {
+ System.out.println("Successfully connected, sessionId: " + connectionEventArgs.getSessionId());
+});
+connection.disconnected.addEventListener((s, connectionEventArgs) -> {
+ System.out.println("Disconnected, sessionId: " + connectionEventArgs.getSessionId());
+});
+```
++
+## Connect and disconnect
+
+`Connection` has explicit methods to start or end a connection to the Speech service. Reasons you may want to control the connection include:
+
+- Preconnecting to the Speech service to allow the first interaction to start as quickly as possible
+- Establishing connection at a specific time in your application's logic to gracefully and predictably handle initial connection failures
+- Disconnecting to clear an idle connection when you don't expect immediate reconnection but also don't want to destroy the object
+
+Some important notes on the behavior when manually modifying connection state:
+
+- Trying to connect when already connected will do nothing. It will not generate an error. Monitor the `Connected` and `Disconnected` events if you want to know the current state of the connection.
+- A failure to connect that originates from a problem that has no involvement with the Speech service--such as attempting to do so from an invalid state--will throw or return an error as appropriate to the programming language. Failures that require network resolution--such as authentication failures--will not throw or return an error but instead generate a `Canceled` event on the top-level object the `Connection` was created from.
+- Manually disconnecting from the Speech service during an ongoing interaction results in a connection error and loss of data for that interaction. Connection errors are surfaced on the appropriate top-level object's `Canceled` event.
++
+```csharp
+try
+{
+ connection.Open(forContinuousRecognition: false);
+}
+catch (ApplicationException ex)
+{
+ Console.WriteLine($"Couldn't pre-connect. Details: {ex.Message}");
+}
+// ... Use the SpeechRecognizer
+connection.Close();
+```
+++
+```cpp
+try
+{
+ connection->Open(false);
+}
+catch (std::runtime_error&)
+{
+ std::cout << "Unable to pre-connect." << std::endl;
+}
+// ... Use the SpeechRecognizer
+connection->Close();
+```
+++
+```java
+try {
+ connection.openConnection(false);
+} catch (Exception ex) {
+ System.out.println("Unable to pre-connect.");
+}
+// ... Use the SpeechRecognizer
+connection.closeConnection();
+```
++
+## Next steps
+
+> [!div class="nextstepaction"]
+> [Explore our samples on GitHub](https://aka.ms/csspeech/samples)
ai-services How To Custom Commands Debug Build Time https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/how-to-custom-commands-debug-build-time.md
+
+ Title: 'Debug errors when authoring a Custom Commands application (Preview)'
+
+description: In this article, you learn how to debug errors when authoring Custom Commands application.
++++++ Last updated : 06/18/2020++++
+# Debug errors when authoring a Custom Commands application
++
+This article describes how to debug errors that you see while building a Custom Commands application.
+
+## Errors when creating an application
+Custom Commands creates an application in [LUIS](https://www.luis.ai/) when you create a Custom Commands application.
+
+[LUIS limits each authoring resource to 500 applications](../luis/luis-limits.md). Creation of the LUIS application can fail if the authoring resource you're using already has 500 applications.
+
+Make sure the selected LUIS authoring resource has fewer than 500 applications. If it doesn't, you can create a new LUIS authoring resource, switch to another one, or try to clean up your LUIS applications.
+
+## Errors when deleting an application
+### Can't delete LUIS application
+When you delete a Custom Commands application, Custom Commands may also try to delete the LUIS application associated with it.
+
+If deletion of the LUIS application fails, go to your [LUIS](https://www.luis.ai/) account and delete it manually.
+
+### TooManyRequests
+When you try to delete a large number of applications all at once, you'll likely see 'TooManyRequests' errors. These errors mean that Azure is throttling your deletion requests.
+
+Refresh your page and try to delete fewer applications.
+
+## Errors when modifying an application
+
+### Can't delete a parameter
+You can't delete a parameter while it's being used.
+Remove any reference to the parameter in speech responses, sample sentences, conditions, and actions, and then try again.
+
+### Can't delete a Web Endpoint
+You can't delete a Web Endpoint while it's being used.
+Remove any **Call Web Endpoint** action that uses the Web Endpoint before you remove the Web Endpoint itself.
+
+## Errors when training an application
+### Built-In intents
+LUIS has built-in Yes/No intents. Sample sentences that contain only "yes" or "no" (or their variations) cause training to fail.
+
+| Keyword | Variations |
+| - | - |
+| Yes | Sure, OK |
+| No | Nope, Not |
+
+### Common sample sentences
+Custom Commands doesn't allow common sample sentences shared among different commands. Training of an application can fail if some sample sentences in one command are already defined in another command.
+
+Make sure you don't have common sample sentences shared among different commands.
+
+For best practices on balancing your sample sentences across different commands, see the [LUIS best practices](../luis/faq.md).
+
+### Empty sample sentences
+You need to have at least one sample sentence for each Command.
+
+### Undefined parameter in sample sentences
+One or more parameters are used in the sample sentences but not defined.
+
+### Training takes too long
+LUIS training is meant to learn quickly with fewer examples. Don't add too many example sentences.
+
+If you have many example sentences that are similar, define a parameter, abstract them into a pattern, and add the pattern to your example sentences.
+
+For example, you can define a parameter {vehicle} for the example sentences below and add only "Book a {vehicle}" to your example sentences.
+
+| Example sentences | Pattern |
+| - | - |
+| Book a car | Book a {vehicle} |
+| Book a flight | Book a {vehicle} |
+| Book a taxi | Book a {vehicle} |
+
+For best practices on LUIS training, see the [LUIS best practices](../luis/faq.md).
+
+## Can't update LUIS key
+### Reassign to E0 authoring resource
+LUIS doesn't support reassigning a LUIS application to an E0 authoring resource.
+
+If you need to change your authoring resource from F0 to E0, or change to a different E0 resource, recreate the application.
+
+To quickly export an existing application and import it into a new one, see [Continuous Deployment with Azure DevOps](./how-to-custom-commands-deploy-cicd.md).
+
+### Save button is disabled
+If you've never assigned a LUIS prediction resource to your application, the **Save** button is disabled when you try to change your authoring resource without adding a prediction resource.
+
+## Next steps
+
+> [!div class="nextstepaction"]
+> [See samples on GitHub](https://aka.ms/speech/cc-samples)
ai-services How To Custom Commands Debug Runtime https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/how-to-custom-commands-debug-runtime.md
+
+ Title: 'Troubleshooting guide for a Custom Commands application at runtime'
+
+description: In this article, you learn how to debug runtime errors in a Custom Commands application.
++++++ Last updated : 06/18/2020++++
+# Troubleshoot a Custom Commands application at runtime
++
+This article describes how to debug errors that you see while running a Custom Commands application.
+
+## Connection failed
+
+If you run your Custom Commands application from a [client application (with the Speech SDK)](./how-to-custom-commands-setup-speech-sdk.md) or the [Windows Voice Assistant Client](./how-to-custom-commands-developer-flow-test.md), you may experience the following connection errors:
+
+| Error code | Details |
+| - | -- |
+| [401](#error-401) | AuthenticationFailure: WebSocket Upgrade failed with an authentication error |
+| [1002](#error-1002) | The server returned status code '404' when status code '101' was expected. |
+
+### Error 401
+- The region specified in the client application doesn't match the region of the Custom Commands application
+
+- The Speech resource key is invalid
+
+ Make sure your Speech resource key is correct.
+
+### Error 1002
+- Your Custom Commands application isn't published
+
+ Publish your application in the portal.
+
+- Your Custom Commands application ID isn't valid
+
+ Make sure your Custom Commands application ID is correct.
+
+- Your Custom Commands application is outside your Speech resource
+
+ Make sure the Custom Commands application was created under your Speech resource.
+
+For more information on troubleshooting connection issues, see [Windows Voice Assistant Client troubleshooting](https://github.com/Azure-Samples/Cognitive-Services-Voice-Assistant/tree/master/clients/csharp-wpf#troubleshooting)
++
+## Dialog is canceled
+
+When your Custom Commands application is running, the dialog is canceled when certain errors occur.
+
+- If you're testing the application in the portal, the cancellation description is displayed directly and an error earcon plays.
+
+- If you're running the application with the [Windows Voice Assistant Client](./how-to-custom-commands-developer-flow-test.md), an error earcon plays. You can find the **Event: CancelledDialog** entry under **Activity Logs**.
+
+- If you're following our client application example [client application (with Speech SDK)](./how-to-custom-commands-setup-speech-sdk.md), an error earcon plays. You can find the **Event: CancelledDialog** entry under **Status**.
+
+- If you're building your own client application, you can design whatever logic you need to handle the CancelledDialog events.
+
+The CancelledDialog event consists of a cancellation code and description, as listed in the following table:
+
+| Cancellation Code | Cancellation Description |
+| - | - |
+| [MaxTurnThresholdReached](#no-progress-was-made-after-the-max-number-of-turns-allowed) | No progress was made after the max number of turns allowed |
+| [RecognizerQuotaExceeded](#recognizer-usage-quota-exceeded) | Recognizer usage quota exceeded |
+| [RecognizerConnectionFailed](#connection-to-the-recognizer-failed) | Connection to the recognizer failed |
+| [RecognizerUnauthorized](#this-application-cant-be-accessed-with-the-current-subscription) | This application can't be accessed with the current subscription |
+| [RecognizerInputExceededAllowedLength](#input-exceeds-the-maximum-supported-length) | Input exceeds the maximum supported length for the recognizer |
+| [RecognizerNotFound](#recognizer-not-found) | Recognizer not found |
+| [RecognizerInvalidQuery](#invalid-query-for-the-recognizer) | Invalid query for the recognizer |
+| [RecognizerError](#recognizer-return-an-error) | Recognizer returns an error |
+
+### No progress was made after the max number of turns allowed
+The dialog is canceled when a required slot isn't successfully updated after a certain number of turns. The built-in maximum is three.
+
+### Recognizer usage quota exceeded
+Language Understanding (LUIS) has limits on resource usage. The "Recognizer usage quota exceeded" error is usually caused by one of the following:
+- Your LUIS authoring resource exceeds its limit
+
+ Add a prediction resource to your Custom Commands application:
+ 1. Go to **Settings** > **LUIS resource**
+ 1. Choose a prediction resource from **Prediction resource**, or select **Create new resource**
+
+- Your LUIS prediction resource exceeds its limit
+
+ An F0 prediction resource has a limit of 10,000 queries per month and 5 queries per second.
+
+For more information on LUIS resource limits, see [Language Understanding resource usage and limits](../luis/luis-limits.md#resource-usage-and-limits)
+
+### Connection to the recognizer failed
+This error usually indicates a transient connection failure to the Language Understanding (LUIS) recognizer. Try again, and the issue should be resolved.
+
+### This application can't be accessed with the current subscription
+Your subscription isn't authorized to access the LUIS application.
+
+### Input exceeds the maximum supported length
+Your input exceeded 500 characters. An input utterance can be at most 500 characters.
+
+### Invalid query for the recognizer
+Your input exceeded 500 characters. An input utterance can be at most 500 characters.
+
+### Recognizer returns an error
+The LUIS recognizer returned an error when trying to recognize your input.
+
+### Recognizer not found
+The recognizer type specified in your Custom Commands dialog model can't be found. Currently, only the [Language Understanding (LUIS) recognizer](https://www.luis.ai/) is supported.
+
+## Other common errors
+### Unexpected response
+Unexpected responses can be caused by multiple things.
+A few checks to start with:
+- Yes/No intents in example sentences
+
+ Yes/No intents aren't currently supported except when used with the confirmation feature, so any Yes/No intents defined in example sentences won't be detected.
+
+- Similar intents and example sentences among commands
+
+ LUIS recognition accuracy can be affected when two commands share similar intents and example sentences. Try to make command functionality and example sentences as distinct as possible.
+
+ For best practices on improving recognition accuracy, see the [LUIS best practices](../luis/faq.md).
+
+- Dialog is canceled
+
+ Check the cancellation reasons in the preceding section.
+
+### Error while rendering the template
+An undefined parameter is used in the speech response.
+
+### Object reference not set to an instance of an object
+There's an empty parameter in the JSON payload defined in the **Send Activity to Client** action.
+
+## Next steps
+
+> [!div class="nextstepaction"]
+> [See samples on GitHub](https://aka.ms/speech/cc-samples)
ai-services How To Custom Commands Deploy Cicd https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/how-to-custom-commands-deploy-cicd.md
+
+ Title: 'Continuous Deployment with Azure DevOps (Preview)'
+
+description: In this article, you learn how to set up continuous deployment for your Custom Commands applications. You create the scripts to support the continuous deployment workflows.
++++++ Last updated : 06/18/2020++++
+# Continuous Deployment with Azure DevOps
++
+In this article, you learn how to set up continuous deployment for your Custom Commands applications. The scripts to support the CI/CD workflow are provided to you.
+
+## Prerequisite
+> [!div class = "checklist"]
+> * A Custom Commands application for development (DEV)
+> * A Custom Commands application for production (PROD)
+> * Sign up for [Azure Pipelines](/azure/devops/pipelines/get-started/pipelines-sign-up)
+
+## Export/Import/Publish
+
+The scripts are hosted at [Voice Assistant - Custom Commands](https://github.com/Azure-Samples/Cognitive-Services-Voice-Assistant/tree/master/custom-commands). Clone the scripts in the bash directory to your repository. Make sure you maintain the same path.
+
+### Set up a pipeline
+
+1. Go to **Azure DevOps - Pipelines** and click "New Pipeline"
+1. In **Connect** section, select the location of your repository where these scripts are located
+1. In **Select** section, select your repository
+1. In **Configure** section, select "Starter pipeline"
+1. Next you'll get an editor with a YAML file, replace the "steps" section with this script.
+
+ ```yaml
+ steps:
+ - task: Bash@3
+ displayName: 'Export source app'
+ inputs:
+ targetType: filePath
+ filePath: ./bash/export.sh
+ arguments: '-r westus2 -s $(SubscriptionKey) -c $(Culture) -a $(SourceAppId) -f ExportedDialogModel.json'
+ workingDirectory: bash
+ failOnStderr: true
+
+ - task: Bash@3
+ displayName: 'Import to target app'
+ inputs:
+ targetType: filePath
+ filePath: ./bash/import.sh
+ arguments: '-r westus2 -s $(SubscriptionKey) -c $(Culture) -a $(TargetAppId) -f ExportedDialogModel.json'
+ workingDirectory: bash
+ failOnStderr: true
+
+ - task: Bash@3
+ displayName: 'Train and Publish target app'
+ inputs:
+ targetType: filePath
+ filePath: './bash/train-and-publish.sh'
+ arguments: '-r westus2 -s $(SubscriptionKey) -c $(Culture) -a $(TargetAppId)'
+ workingDirectory: bash
+ failOnStderr: true
+ ```
+
+1. Note that these scripts assume you're using the region `westus2`. If that's not the case, update the arguments of the tasks accordingly.
+
+ > [!div class="mx-imgBorder"]
+ > ![Screenshot that highlights the region value in the arguments.](media/custom-commands/cicd-new-pipeline-yaml.png)
+
+1. In the "Save and run" button, open the dropdown and click "Save"
+
+### Hook up the pipeline with your application
+
+1. Navigate to the main page of the pipeline.
+1. In the top-right corner dropdown, select **Edit pipeline**. It gets you to a YAML editor.
+1. In the top-right corner next to "Run" button, select **Variables**. Select **New variable**.
+1. Add these variables:
+
+ | Variable | Description |
+ | - | - |
+ | SourceAppId | ID of the DEV application |
+ | TargetAppId | ID of the PROD application |
+ | SubscriptionKey | The key used for both applications |
+ | Culture | Culture of the applications (en-us) |
+
+ > [!div class="mx-imgBorder"]
+ > ![Send Activity payload](media/custom-commands/cicd-edit-pipeline-variables.png)
+
+1. Select "Run" and then select the "Job" running.
+
+ You should see a list of tasks running that contains: "Export source app", "Import to target app" & "Train and Publish target app"
+
+## Deploy from source code
+
+If you want to keep the definition of your application in a repository, we provide scripts for deployments from source code. Because the scripts are written in bash, if you're using Windows you'll need to install the [Windows Subsystem for Linux](/windows/wsl/install-win10).
+
+The scripts are hosted at [Voice Assistant - Custom Commands](https://github.com/Azure-Samples/Cognitive-Services-Voice-Assistant/tree/master/custom-commands). Clone the scripts in the bash directory to your repository. Make sure you maintain the same path.
+
+### Prepare your repository
+
+1. Create a directory for your application. In our example, create one called "apps".
+1. Update the arguments of the bash script below and run it. It exports the dialog model of your application to the file `apps/myapp.json`.
+ ```BASH
+ bash/export.sh -r <region> -s <subscriptionkey> -c en-us -a <appid> -f apps/myapp.json
+ ```
+ | Argument | Description |
+ | - | - |
+ | region | Your Speech resource region. For example: `westus2` |
+ | subscriptionkey | Your Speech resource key. |
+ | appid | The ID of the Custom Commands application you want to export. |
+
+1. Push these changes to your repository.
+
+### Set up a pipeline
+
+1. Go to **Azure DevOps - Pipelines** and click "New Pipeline"
+1. In **Connect** section, select the location of your repository where these scripts are located
+1. In **Select** section, select your repository
+1. In **Configure** section, select "Starter pipeline"
+1. Next you'll get an editor with a YAML file, replace the "steps" section with this script.
+
+ ```yaml
+ steps:
+ - task: Bash@3
+ displayName: 'Import app'
+ inputs:
+ targetType: filePath
+ filePath: ./bash/import.sh
+ arguments: '-r westus2 -s $(SubscriptionKey) -c $(Culture) -a $(TargetAppId) -f ../apps/myapp.json'
+ workingDirectory: bash
+ failOnStderr: true
+
+ - task: Bash@3
+ displayName: 'Train and Publish app'
+ inputs:
+ targetType: filePath
+ filePath: './bash/train-and-publish.sh'
+ arguments: '-r westus2 -s $(SubscriptionKey) -c $(Culture) -a $(TargetAppId)'
+ workingDirectory: bash
+ failOnStderr: true
+ ```
+
+ > [!NOTE]
+ > These scripts assume you're using the region `westus2`. If that's not the case, update the arguments of the tasks accordingly.
+
+1. In the "Save and run" button, open the dropdown and click "Save"
+
+### Hook up the pipeline with your target applications
+
+1. Navigate to the main page of the pipeline.
+1. In the top-right corner dropdown, select **Edit pipeline**. It gets you to a YAML editor.
+1. In the top-right corner next to "Run" button, select **Variables**. Select **New variable**.
+1. Add these variables:
+
+ | Variable | Description |
+ | - | - |
+ | TargetAppId | ID of the PROD application |
+ | SubscriptionKey | The key used for both applications |
+ | Culture | Culture of the applications (en-us) |
+
+1. Select "Run" and then select the "Job" running.
+ You should see a list of tasks running that contains: "Import app" & "Train and Publish app"
+
+## Next steps
+
+> [!div class="nextstepaction"]
+> [See samples on GitHub](https://aka.ms/speech/cc-samples)
ai-services How To Custom Commands Developer Flow Test https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/how-to-custom-commands-developer-flow-test.md
+
+ Title: 'Test your Custom Commands app'
+
+description: In this article, you learn different approaches to testing a custom commands application.
++++++ Last updated : 06/18/2020++++
+# Test your Custom Commands Application
++
+In this article, you learn different approaches to testing a custom commands application.
+
+## Test in the portal
+
+Testing in the portal is the simplest and quickest way to check whether your Custom Commands application works as expected. After the app is successfully trained, select the **Test** button to start testing.
+
+> [!div class="mx-imgBorder"]
+> ![Test in the portal](media/custom-commands/create-basic-test-chat-no-mic.png)
+
+## Test with Windows Voice Assistant Client
+
+The Windows Voice Assistant Client is a Windows Presentation Foundation (WPF) application in C# that makes it easy to test interactions with your bot before creating a custom client application.
+
+The tool can accept a Custom Commands application ID. It lets you test your task completion or command-and-control scenario hosted on the Custom Commands service.
+
+To set up the client, check out [Windows Voice Assistant Client](https://github.com/Azure-Samples/Cognitive-Services-Voice-Assistant/tree/master/clients/csharp-wpf).
+
+> [!div class="mx-imgBorder"]
+> ![WVAC Create profile](media/custom-commands/conversation.png)
+
+## Test programmatically with the Voice Assistant Test Tool
+
+The Voice Assistant Test Tool is a configurable .NET Core C# console application for end-to-end functional regression tests for your Microsoft Voice Assistant.
+
+The tool can run manually as a console command or automated as part of an Azure DevOps CI/CD pipeline to prevent regressions in your bot.
+
+To learn how to set up the tool, see [Voice Assistant Test Tool](https://github.com/Azure-Samples/Cognitive-Services-Voice-Assistant/tree/main/clients/csharp-dotnet-core/voice-assistant-test).
+
+## Test with Speech SDK-enabled client applications
+
+The Speech software development kit (SDK) exposes many of the Speech service capabilities, which allows you to develop speech-enabled applications. It's available in many programming languages on most platforms.
+
+To set up a Universal Windows Platform (UWP) client application with the Speech SDK and integrate it with your Custom Commands application, see:
+- [How to: Integrate with a client application using Speech SDK](./how-to-custom-commands-setup-speech-sdk.md)
+- [How to: Send activity to client application](./how-to-custom-commands-send-activity-to-client.md)
+- [How to: Set up web endpoints](./how-to-custom-commands-setup-web-endpoints.md)
+
+For other programming languages and platforms:
+- [Speech SDK programming languages, platforms, scenario capacities](./speech-sdk.md)
+- [Voice assistant sample codes](https://github.com/Azure-Samples/Cognitive-Services-Voice-Assistant)
+
+## Next steps
+
+> [!div class="nextstepaction"]
+> [See samples on GitHub](https://aka.ms/speech/cc-samples)
ai-services How To Custom Commands Send Activity To Client https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/how-to-custom-commands-send-activity-to-client.md
+
+ Title: 'Send Custom Commands activity to client application'
+
+description: In this article, you learn how to send activity from a Custom Commands application to a client application running the Speech SDK.
++++++ Last updated : 06/18/2020+
+ms.devlang: csharp
+++
+# Send Custom Commands activity to client application
++
+In this article, you learn how to send activity from a Custom Commands application to a client application running the Speech SDK.
+
+You complete the following tasks:
+
+- Define and send a custom JSON payload from your Custom Commands application
+- Receive and visualize the custom JSON payload contents from a C# UWP Speech SDK client application
+
+## Prerequisites
+> [!div class = "checklist"]
+> * [Visual Studio 2019](https://visualstudio.microsoft.com/downloads/) or higher. This guide uses Visual Studio 2019
+> * An Azure AI services Speech resource key and region: Create a Speech resource on the [Azure portal](https://portal.azure.com). For more information, see [Create a multi-service resource](../multi-service-resource.md?pivots=azportal).
+> * A previously [created Custom Commands app](quickstart-custom-commands-application.md)
+> * A Speech SDK enabled client app:
+[How-to: Integrate with a client application using Speech SDK](./how-to-custom-commands-setup-speech-sdk.md)
+
+## Set up Send activity to client
+1. Open the Custom Commands application you previously created
+1. Select **TurnOnOff** command, select **ConfirmationResponse** under completion rule, then select **Add an action**
+1. Under **New Action-Type**, select **Send activity to client**
+1. Copy the JSON below to **Activity content**
+ ```json
+ {
+ "type": "event",
+ "name": "UpdateDeviceState",
+ "value": {
+ "state": "{OnOff}",
+ "device": "{SubjectDevice}"
+ }
+ }
+ ```
+1. Select **Save** to create a new rule with a Send Activity action, **Train** and **Publish** the change
+
+ > [!div class="mx-imgBorder"]
+ > ![Send Activity completion rule](media/custom-commands/send-activity-to-client-completion-rules.png)
+
+## Integrate with client application
+
+In [How-to: Setup client application with Speech SDK (Preview)](./how-to-custom-commands-setup-speech-sdk.md), you created a UWP client application with Speech SDK that handled commands such as `turn on the tv`, `turn off the fan`. With some visuals added, you can see the result of those commands.
+
+To add labeled boxes with text indicating **on** or **off**, add the following XML block of StackPanel to `MainPage.xaml`.
+
+```xml
+<StackPanel Orientation="Vertical" H......>
+......
+</StackPanel>
+<StackPanel Orientation="Horizontal" HorizontalAlignment="Center" Margin="20">
+ <Grid x:Name="Grid_TV" Margin="50, 0" Width="100" Height="100" Background="LightBlue">
+ <StackPanel>
+ <TextBlock Text="TV" Margin="0, 10" TextAlignment="Center"/>
+ <TextBlock x:Name="State_TV" Text="off" TextAlignment="Center"/>
+ </StackPanel>
+ </Grid>
+ <Grid x:Name="Grid_Fan" Margin="50, 0" Width="100" Height="100" Background="LightBlue">
+ <StackPanel>
+ <TextBlock Text="Fan" Margin="0, 10" TextAlignment="Center"/>
+ <TextBlock x:Name="State_Fan" Text="off" TextAlignment="Center"/>
+ </StackPanel>
+ </Grid>
+</StackPanel>
+<MediaElement ....../>
+```
+
+### Add reference libraries
+
+Since you've created a JSON payload, you need to add a reference to the [JSON.NET](https://www.newtonsoft.com/json) library to handle deserialization.
+
+1. Right-click your solution.
+1. Choose **Manage NuGet Packages for Solution**, and select **Browse**
+1. If you already installed **Newtonsoft.Json**, make sure its version is at least 12.0.3. If not, go to **Manage NuGet Packages for Solution - Updates**, and search for **Newtonsoft.Json** to update it. This guide uses version 12.0.3.
+
+ > [!div class="mx-imgBorder"]
+ > ![Send Activity payload](media/custom-commands/send-activity-to-client-json-nuget.png)
+
+1. Also, make sure the NuGet package **Microsoft.NETCore.UniversalWindowsPlatform** is at least version 6.2.10. This guide uses version 6.2.10.
+
+In `MainPage.xaml.cs`, add:
+
+```C#
+using Newtonsoft.Json;
+using Windows.ApplicationModel.Core;
+using Windows.UI.Core;
+```
+
+### Handle the received payload
+
+In `InitializeDialogServiceConnector`, replace the `ActivityReceived` event handler with the following code. The modified `ActivityReceived` event handler extracts the payload from the activity and changes the visual state of the TV or fan accordingly.
+
+```C#
+connector.ActivityReceived += async (sender, activityReceivedEventArgs) =>
+{
+ NotifyUser($"Activity received, hasAudio={activityReceivedEventArgs.HasAudio} activity={activityReceivedEventArgs.Activity}");
+
+ dynamic activity = JsonConvert.DeserializeObject(activityReceivedEventArgs.Activity);
+ var name = activity?.name != null ? activity.name.ToString() : string.Empty;
+
+ if (name.Equals("UpdateDeviceState"))
+ {
+ Debug.WriteLine("Here");
+ var state = activity?.value?.state != null ? activity.value.state.ToString() : string.Empty;
+ var device = activity?.value?.device != null ? activity.value.device.ToString() : string.Empty;
+
+ if (state.Equals("on") || state.Equals("off"))
+ {
+ switch (device)
+ {
+ case "tv":
+ await CoreApplication.MainView.CoreWindow.Dispatcher.RunAsync(
+ CoreDispatcherPriority.Normal, () => { State_TV.Text = state; });
+ break;
+ case "fan":
+ await CoreApplication.MainView.CoreWindow.Dispatcher.RunAsync(
+ CoreDispatcherPriority.Normal, () => { State_Fan.Text = state; });
+ break;
+ default:
+ NotifyUser($"Received request to set unsupported device {device} to {state}");
+ break;
+ }
+ }
+ else {
+ NotifyUser($"Received request to set unsupported state {state}");
+ }
+ }
+
+ if (activityReceivedEventArgs.HasAudio)
+ {
+ SynchronouslyPlayActivityAudio(activityReceivedEventArgs.Audio);
+ }
+};
+```
+
+## Try it out
+
+1. Start the application
+1. Select Enable microphone
+1. Select the Talk button
+1. Say `turn on the tv`
+1. The visual state of the tv should change to "on"
+ > [!div class="mx-imgBorder"]
+ > ![Screenshot that shows that the visual state of the T V is now on.](media/custom-commands/send-activity-to-client-turn-on-tv.png)
+
+## Next steps
+
+> [!div class="nextstepaction"]
+> [How to: set up web endpoints](./how-to-custom-commands-setup-web-endpoints.md)
ai-services How To Custom Commands Setup Speech Sdk https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/how-to-custom-commands-setup-speech-sdk.md
+
+ Title: 'Integrate with a client app using Speech SDK'
+
+description: Learn how to make requests to a published Custom Commands application from the Speech SDK running in a UWP application.
++++++ Last updated : 06/18/2020+
+ms.devlang: csharp
+++
+# Integrate with a client application using Speech SDK
++
+In this article, you learn how to make requests to a published Custom Commands application from the Speech SDK running in a UWP application. To establish a connection to the Custom Commands application, you need to:
+
+- Publish a Custom Commands application and get an application identifier (App ID)
+- Create a Universal Windows Platform (UWP) client app using the Speech SDK to allow you to talk to your Custom Commands application
+
+## Prerequisites
+
+A Custom Commands application is required to complete this article. If you haven't created a Custom Commands application, you can do so following the quickstarts:
+> [!div class = "checklist"]
+> * [Create a Custom Commands application](quickstart-custom-commands-application.md)
+
+You'll also need:
+> [!div class = "checklist"]
+> * [Visual Studio 2019](https://visualstudio.microsoft.com/downloads/) or higher. This guide is based on Visual Studio 2019.
+> * An Azure AI services Speech resource key and region: Create a Speech resource on the [Azure portal](https://portal.azure.com). For more information, see [Create a multi-service resource](../multi-service-resource.md?pivots=azportal).
+> * [Enable your device for development](/windows/uwp/get-started/enable-your-device-for-development)
+
+## Step 1: Publish Custom Commands application
+
+1. Open your previously created Custom Commands application
+1. Go to **Settings**, select **LUIS resource**
+1. If **Prediction resource** isn't assigned, select a query prediction key or create a new one
+
+ A query prediction key is always required before publishing an application. For more information about LUIS resources, see [Create LUIS Resource](../luis/luis-how-to-azure-subscription.md)
+
+1. Go back to editing Commands, Select **Publish**
+
+ > [!div class="mx-imgBorder"]
+ > ![Publish application](media/custom-commands/setup-speech-sdk-publish-application.png)
+
+1. Copy the App ID from the publish notification for later use
+1. Copy the Speech Resource Key for later use
+
+## Step 2: Create a Visual Studio project
+
+Create a Visual Studio project for UWP development and [install the Speech SDK](./quickstarts/setup-platform.md?pivots=programming-language-csharp&tabs=uwp).
+
+## Step 3: Add sample code
+
+In this step, we add the XAML code that defines the user interface of the application, and add the C# code-behind implementation.
+
+### XAML code
+
+Create the application's user interface by adding the XAML code.
+
+1. In **Solution Explorer**, open `MainPage.xaml`
+
+1. In the designer's XAML view, replace the entire contents with the following code snippet:
+
+ ```xml
+ <Page
+ x:Class="helloworld.MainPage"
+ xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
+ xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
+ xmlns:local="using:helloworld"
+ xmlns:d="http://schemas.microsoft.com/expression/blend/2008"
+ xmlns:mc="http://schemas.openxmlformats.org/markup-compatibility/2006"
+ mc:Ignorable="d"
+ Background="{ThemeResource ApplicationPageBackgroundThemeBrush}">
+
+ <Grid>
+ <StackPanel Orientation="Vertical" HorizontalAlignment="Center"
+ Margin="20,50,0,0" VerticalAlignment="Center" Width="800">
+ <Button x:Name="EnableMicrophoneButton" Content="Enable Microphone"
+ Margin="0,10,10,0" Click="EnableMicrophone_ButtonClicked"
+ Height="35"/>
+ <Button x:Name="ListenButton" Content="Talk"
+ Margin="0,10,10,0" Click="ListenButton_ButtonClicked"
+ Height="35"/>
+ <StackPanel x:Name="StatusPanel" Orientation="Vertical"
+ RelativePanel.AlignBottomWithPanel="True"
+ RelativePanel.AlignRightWithPanel="True"
+ RelativePanel.AlignLeftWithPanel="True">
+ <TextBlock x:Name="StatusLabel" Margin="0,10,10,0"
+ TextWrapping="Wrap" Text="Status:" FontSize="20"/>
+ <Border x:Name="StatusBorder" Margin="0,0,0,0">
+ <ScrollViewer VerticalScrollMode="Auto"
+ VerticalScrollBarVisibility="Auto" MaxHeight="200">
+ <!-- Use LiveSetting to enable screen readers to announce
+ the status update. -->
+ <TextBlock
+ x:Name="StatusBlock" FontWeight="Bold"
+ AutomationProperties.LiveSetting="Assertive"
+ MaxWidth="{Binding ElementName=Splitter, Path=ActualWidth}"
+ Margin="10,10,10,20" TextWrapping="Wrap" />
+ </ScrollViewer>
+ </Border>
+ </StackPanel>
+ </StackPanel>
+ <MediaElement x:Name="mediaElement"/>
+ </Grid>
+ </Page>
+ ```
+
+The Design view is updated to show the application's user interface.
+
+### C# code-behind source
+
+Add the code-behind source so that the application works as expected. The code-behind source includes:
+
+- Required `using` statements for the `Speech` and `Speech.Dialog` namespaces
+- A simple implementation to ensure microphone access, wired to a button handler
+- Basic UI helpers to present messages and errors in the application
+- A landing point for the initialization code path that will be populated later
+- A helper to play back text to speech (without streaming support)
+- An empty button handler to start listening that will be populated later
+
+Add the code-behind source as follows:
+
+1. In **Solution Explorer**, open the code-behind source file `MainPage.xaml.cs` (grouped under `MainPage.xaml`)
+
+1. Replace the file's contents with the following code:
+
+ ```csharp
+ using Microsoft.CognitiveServices.Speech;
+ using Microsoft.CognitiveServices.Speech.Audio;
+ using Microsoft.CognitiveServices.Speech.Dialog;
+ using System;
+ using System.IO;
+ using System.Text;
+ using Windows.UI.Xaml;
+ using Windows.UI.Xaml.Controls;
+ using Windows.UI.Xaml.Media;
+
+ namespace helloworld
+ {
+ public sealed partial class MainPage : Page
+ {
+ private DialogServiceConnector connector;
+
+ private enum NotifyType
+ {
+ StatusMessage,
+ ErrorMessage
+ };
+
+ public MainPage()
+ {
+ this.InitializeComponent();
+ }
+
+ private async void EnableMicrophone_ButtonClicked(
+ object sender, RoutedEventArgs e)
+ {
+ bool isMicAvailable = true;
+ try
+ {
+ var mediaCapture = new Windows.Media.Capture.MediaCapture();
+ var settings =
+ new Windows.Media.Capture.MediaCaptureInitializationSettings();
+ settings.StreamingCaptureMode =
+ Windows.Media.Capture.StreamingCaptureMode.Audio;
+ await mediaCapture.InitializeAsync(settings);
+ }
+ catch (Exception)
+ {
+ isMicAvailable = false;
+ }
+ if (!isMicAvailable)
+ {
+ await Windows.System.Launcher.LaunchUriAsync(
+ new Uri("ms-settings:privacy-microphone"));
+ }
+ else
+ {
+ NotifyUser("Microphone was enabled", NotifyType.StatusMessage);
+ }
+ }
+
+ private void NotifyUser(
+ string strMessage, NotifyType type = NotifyType.StatusMessage)
+ {
+ // If called from the UI thread, then update immediately.
+ // Otherwise, schedule a task on the UI thread to perform the update.
+ if (Dispatcher.HasThreadAccess)
+ {
+ UpdateStatus(strMessage, type);
+ }
+ else
+ {
+ var task = Dispatcher.RunAsync(
+ Windows.UI.Core.CoreDispatcherPriority.Normal,
+ () => UpdateStatus(strMessage, type));
+ }
+ }
+
+ private void UpdateStatus(string strMessage, NotifyType type)
+ {
+ switch (type)
+ {
+ case NotifyType.StatusMessage:
+ StatusBorder.Background = new SolidColorBrush(
+ Windows.UI.Colors.Green);
+ break;
+ case NotifyType.ErrorMessage:
+ StatusBorder.Background = new SolidColorBrush(
+ Windows.UI.Colors.Red);
+ break;
+ }
+ StatusBlock.Text += string.IsNullOrEmpty(StatusBlock.Text)
+ ? strMessage : "\n" + strMessage;
+
+ if (!string.IsNullOrEmpty(StatusBlock.Text))
+ {
+ StatusBorder.Visibility = Visibility.Visible;
+ StatusPanel.Visibility = Visibility.Visible;
+ }
+ else
+ {
+ StatusBorder.Visibility = Visibility.Collapsed;
+ StatusPanel.Visibility = Visibility.Collapsed;
+ }
+ // Raise an event if necessary to enable a screen reader
+ // to announce the status update.
+ var peer = Windows.UI.Xaml.Automation.Peers.FrameworkElementAutomationPeer.FromElement(StatusBlock);
+ if (peer != null)
+ {
+ peer.RaiseAutomationEvent(
+ Windows.UI.Xaml.Automation.Peers.AutomationEvents.LiveRegionChanged);
+ }
+ }
+
+ // Waits for and accumulates all audio associated with a given
+ // PullAudioOutputStream and then plays it to the MediaElement. Long spoken
+ // audio will create extra latency and a streaming playback solution
+ // (that plays audio while it continues to be received) should be used --
+ // see the samples for examples of this.
+ private void SynchronouslyPlayActivityAudio(
+ PullAudioOutputStream activityAudio)
+ {
+ var playbackStreamWithHeader = new MemoryStream();
+ playbackStreamWithHeader.Write(Encoding.ASCII.GetBytes("RIFF"), 0, 4); // ChunkID
+ playbackStreamWithHeader.Write(BitConverter.GetBytes(UInt32.MaxValue), 0, 4); // ChunkSize: max
+ playbackStreamWithHeader.Write(Encoding.ASCII.GetBytes("WAVE"), 0, 4); // Format
+ playbackStreamWithHeader.Write(Encoding.ASCII.GetBytes("fmt "), 0, 4); // Subchunk1ID
+ playbackStreamWithHeader.Write(BitConverter.GetBytes(16), 0, 4); // Subchunk1Size: PCM
+ playbackStreamWithHeader.Write(BitConverter.GetBytes(1), 0, 2); // AudioFormat: PCM
+ playbackStreamWithHeader.Write(BitConverter.GetBytes(1), 0, 2); // NumChannels: mono
+ playbackStreamWithHeader.Write(BitConverter.GetBytes(16000), 0, 4); // SampleRate: 16kHz
+ playbackStreamWithHeader.Write(BitConverter.GetBytes(32000), 0, 4); // ByteRate
+ playbackStreamWithHeader.Write(BitConverter.GetBytes(2), 0, 2); // BlockAlign
+ playbackStreamWithHeader.Write(BitConverter.GetBytes(16), 0, 2); // BitsPerSample: 16-bit
+ playbackStreamWithHeader.Write(Encoding.ASCII.GetBytes("data"), 0, 4); // Subchunk2ID
+ playbackStreamWithHeader.Write(BitConverter.GetBytes(UInt32.MaxValue), 0, 4); // Subchunk2Size
+
+ byte[] pullBuffer = new byte[2056];
+
+ uint lastRead = 0;
+ do
+ {
+ lastRead = activityAudio.Read(pullBuffer);
+ playbackStreamWithHeader.Write(pullBuffer, 0, (int)lastRead);
+ }
+ while (lastRead == pullBuffer.Length);
+
+ var task = Dispatcher.RunAsync(
+ Windows.UI.Core.CoreDispatcherPriority.Normal, () =>
+ {
+ mediaElement.SetSource(
+ playbackStreamWithHeader.AsRandomAccessStream(), "audio/wav");
+ mediaElement.Play();
+ });
+ }
+
+ private void InitializeDialogServiceConnector()
+ {
+ // New code will go here
+ }
+
+ private async void ListenButton_ButtonClicked(
+ object sender, RoutedEventArgs e)
+ {
+ // New code will go here
+ }
+ }
+ }
+ ```
+ > [!NOTE]
+ > If you see the error "The type 'Object' is defined in an assembly that is not referenced":
+ > 1. Right-click your solution.
+ > 1. Choose **Manage NuGet Packages for Solution**, and select **Updates**
+ > 1. If you see **Microsoft.NETCore.UniversalWindowsPlatform** in the update list, update it to the newest version
+
+1. Add the following code to the method body of `InitializeDialogServiceConnector`
+
+ ```csharp
+ // This code creates the `DialogServiceConnector` with your resource information.
+ // create a DialogServiceConfig by providing a Custom Commands application id and Speech resource key
+ // The RecoLanguage property is optional (default en-US); note that only en-US is supported in Preview
+ const string speechCommandsApplicationId = "YourApplicationId"; // Your application id
+ const string speechSubscriptionKey = "YourSpeechSubscriptionKey"; // Your Speech resource key
+ const string region = "YourServiceRegion"; // The Speech resource region.
+
+ var speechCommandsConfig = CustomCommandsConfig.FromSubscription(speechCommandsApplicationId, speechSubscriptionKey, region);
+ speechCommandsConfig.SetProperty(PropertyId.SpeechServiceConnection_RecoLanguage, "en-us");
+ connector = new DialogServiceConnector(speechCommandsConfig);
+ ```
+
+1. Replace the strings `YourApplicationId`, `YourSpeechSubscriptionKey`, and `YourServiceRegion` with your own values for your app, speech key, and [region](regions.md)
+
+1. Append the following code snippet to the end of the method body of `InitializeDialogServiceConnector`
+
+ ```csharp
+ //
+ // This code sets up handlers for events relied on by `DialogServiceConnector` to communicate its activities,
+ // speech recognition results, and other information.
+ //
+ // ActivityReceived is the main way your client will receive messages, audio, and events
+ connector.ActivityReceived += (sender, activityReceivedEventArgs) =>
+ {
+ NotifyUser(
+ $"Activity received, hasAudio={activityReceivedEventArgs.HasAudio} activity={activityReceivedEventArgs.Activity}");
+
+ if (activityReceivedEventArgs.HasAudio)
+ {
+ SynchronouslyPlayActivityAudio(activityReceivedEventArgs.Audio);
+ }
+ };
+
+ // Canceled will be signaled when a turn is aborted or experiences an error condition
+ connector.Canceled += (sender, canceledEventArgs) =>
+ {
+ NotifyUser($"Canceled, reason={canceledEventArgs.Reason}");
+ if (canceledEventArgs.Reason == CancellationReason.Error)
+ {
+ NotifyUser(
+ $"Error: code={canceledEventArgs.ErrorCode}, details={canceledEventArgs.ErrorDetails}");
+ }
+ };
+
+ // Recognizing (not 'Recognized') will provide the intermediate recognized text
+ // while an audio stream is being processed
+ connector.Recognizing += (sender, recognitionEventArgs) =>
+ {
+ NotifyUser($"Recognizing! in-progress text={recognitionEventArgs.Result.Text}");
+ };
+
+ // Recognized (not 'Recognizing') will provide the final recognized text
+ // once audio capture is completed
+ connector.Recognized += (sender, recognitionEventArgs) =>
+ {
+ NotifyUser($"Final speech to text result: '{recognitionEventArgs.Result.Text}'");
+ };
+
+ // SessionStarted will notify when audio begins flowing to the service for a turn
+ connector.SessionStarted += (sender, sessionEventArgs) =>
+ {
+ NotifyUser($"Now Listening! Session started, id={sessionEventArgs.SessionId}");
+ };
+
+ // SessionStopped will notify when a turn is complete and
+ // it's safe to begin listening again
+ connector.SessionStopped += (sender, sessionEventArgs) =>
+ {
+ NotifyUser($"Listening complete. Session ended, id={sessionEventArgs.SessionId}");
+ };
+ ```
+
+1. Add the following code snippet to the body of the `ListenButton_ButtonClicked` method in the `MainPage` class
+
+ ```csharp
+ // This code sets up `DialogServiceConnector` to listen, since you already established the configuration and
+ // registered the event handlers.
+ if (connector == null)
+ {
+ InitializeDialogServiceConnector();
+ // Optional step to speed up first interaction: if not called,
+ // connection happens automatically on first use
+ var connectTask = connector.ConnectAsync();
+ }
+
+ try
+ {
+ // Start sending audio
+ await connector.ListenOnceAsync();
+ }
+ catch (Exception ex)
+ {
+ NotifyUser($"Exception: {ex.ToString()}", NotifyType.ErrorMessage);
+ }
+ ```
+
+1. From the menu bar, choose **File** > **Save All** to save your changes
+
+## Try it out
+
+1. From the menu bar, choose **Build** > **Build Solution** to build the application. The code should compile without errors.
+
+1. Choose **Debug** > **Start Debugging** (or press **F5**) to start the application. The **helloworld** window appears.
+
+ ![Sample UWP virtual assistant application in C# - quickstart](media/sdk/qs-voice-assistant-uwp-helloworld-window.png)
+
+1. Select **Enable Microphone**. If the access permission request pops up, select **Yes**.
+
+ ![Microphone access permission request](media/sdk/qs-csharp-uwp-10-access-prompt.png)
+
+1. Select **Talk**, and speak an English phrase or sentence into your device's microphone. Your speech is transmitted to the Direct Line Speech channel and transcribed to text, which appears in the window.
+
+## Next steps
+
+> [!div class="nextstepaction"]
+> [How-to: send activity to client application (Preview)](./how-to-custom-commands-send-activity-to-client.md)
ai-services How To Custom Commands Setup Web Endpoints https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/how-to-custom-commands-setup-web-endpoints.md
+
+ Title: 'Set up web endpoints'
+
+description: Learn how to set up web endpoints for Custom Commands.
++++++ Last updated : 06/18/2020+
+ms.devlang: csharp
+++
+# Set up web endpoints
++
+In this article, you'll learn how to set up web endpoints in a Custom Commands application that allow you to make HTTP requests from a client application. You'll complete the following tasks:
+
+- Set up web endpoints in Custom Commands application
+- Call web endpoints in Custom Commands application
+- Receive the web endpoints response
+- Integrate the web endpoints response into a custom JSON payload, send it, and visualize it from a C# UWP Speech SDK client application
+
+## Prerequisites
+
+> [!div class = "checklist"]
+> * [Visual Studio 2019](https://visualstudio.microsoft.com/downloads/)
+> * An Azure AI services Speech resource key and region: Create a Speech resource on the [Azure portal](https://portal.azure.com). For more information, see [Create a multi-service resource](../multi-service-resource.md?pivots=azportal).
+> * A Custom Commands app (see [Create a voice assistant using Custom Commands](quickstart-custom-commands-application.md))
+> * A Speech SDK enabled client app (see [Integrate with a client application using Speech SDK](how-to-custom-commands-setup-speech-sdk.md))
+
+## Deploy an external web endpoint using Azure Function app
+
+For this tutorial, you need an HTTP endpoint that maintains the state of all the devices you set up in the **TurnOnOff** command of your Custom Commands application.
+
+If you already have a web endpoint you want to call, skip to [Setup web endpoints in Custom Commands](#setup-web-endpoints-in-custom-commands). That section also describes a default hosted web endpoint you can use if you want to skip deploying your own.
+
+### Input format of Azure function
+
+Next, you'll deploy an endpoint using [Azure Functions](../../azure-functions/index.yml).
+The following is the format of a Custom Commands event that is passed to your Azure function. Use this information when you're writing your Azure Function app.
+
+```json
+{
+ "conversationId": "string",
+ "currentCommand": {
+ "name": "string",
+ "parameters": {
+ "SomeParameterName": "string",
+ "SomeOtherParameterName": "string"
+ }
+ },
+ "currentGlobalParameters": {
+ "SomeGlobalParameterName": "string",
+ "SomeOtherGlobalParameterName": "string"
+ }
+}
+```
+
+
+The following table describes the key attributes of this input:
+
+| Attribute | Explanation |
+| --------- | ----------- |
+| **conversationId** | The unique identifier of the conversation. Note this ID can be generated by the client app. |
+| **currentCommand** | The command that's currently active in the conversation. |
+| **name** | The name of the command. The `parameters` attribute is a map with the current values of the parameters. |
+| **currentGlobalParameters** | A map like `parameters`, but used for global parameters. |
++
+For the **DeviceState** Azure function, an example Custom Commands event will look like the following. This will act as an **input** to the function app.
+
+```json
+{
+ "conversationId": "someConversationId",
+ "currentCommand": {
+ "name": "TurnOnOff",
+ "parameters": {
+ "item": "tv",
+ "value": "on"
+ }
+ }
+}
+```
+
+### Azure Function output for a Custom Command app
+
+If output from your Azure Function is consumed by a Custom Commands app, it should appear in the following format. See [Update a command from a web endpoint](./how-to-custom-commands-update-command-from-web-endpoint.md) for details.
+
+```json
+{
+ "updatedCommand": {
+ "name": "SomeCommandName",
+ "updatedParameters": {
+ "SomeParameterName": "SomeParameterValue"
+ },
+ "cancel": false
+ },
+ "updatedGlobalParameters": {
+ "SomeGlobalParameterName": "SomeGlobalParameterValue"
+ }
+}
+```
+
+### Azure Function output for a client application
+
+If output from your Azure Function is consumed by a client application, the output can take whatever form the client application requires.
+
+For our **DeviceState** endpoint, output of your Azure function is consumed by a client application instead of the Custom Commands application. Example output of the Azure function should look like the following:
+
+```json
+{
+ "TV": "on",
+ "Fan": "off"
+}
+```
+
+This output should be written to external storage so that you can maintain the state of the devices. The external storage state will be used in the [Integrate with client application](#integrate-with-client-application) section below.
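+
+Any storage that the function can read and write works. As a rough illustration of the flow described above, the following is a minimal sketch (not the official sample) of a DeviceState-style HTTP-triggered function that keeps device state in Azure Table Storage, keyed by the `app` header so that each caller sees its own devices. The table name, entity shape, authorization level, and connection string setting name are assumptions for illustration only.
+
+```csharp
+using System.Collections.Generic;
+using System.Threading.Tasks;
+using Azure.Data.Tables;
+using Microsoft.AspNetCore.Http;
+using Microsoft.AspNetCore.Mvc;
+using Microsoft.Azure.WebJobs;
+using Microsoft.Azure.WebJobs.Extensions.Http;
+
+public static class DeviceState
+{
+    [FunctionName("DeviceState")]
+    public static async Task<IActionResult> Run(
+        [HttpTrigger(AuthorizationLevel.Anonymous, "get", "post")] HttpRequest req)
+    {
+        // The "app" header distinguishes one caller's device state from another's.
+        string app = req.Headers["app"];
+        if (string.IsNullOrEmpty(app))
+        {
+            return new BadRequestObjectResult("Missing 'app' header.");
+        }
+
+        // The setting name is an assumption; the official sample reads its secret from Connections.json.
+        var table = new TableClient(
+            System.Environment.GetEnvironmentVariable("STORAGE_ACCOUNT_SECRET_CONNECTION_STRING"),
+            "devicestate");
+        await table.CreateIfNotExistsAsync();
+
+        if (req.Method == "POST")
+        {
+            // Custom Commands appends query parameters such as item=tv and value=on.
+            string item = req.Query["item"];
+            string value = req.Query["value"];
+            await table.UpsertEntityAsync(new TableEntity(app, item) { ["State"] = value });
+        }
+
+        // Return the current state of every stored device, for example { "tv": "on", "fan": "off" }.
+        var state = new Dictionary<string, string>();
+        await foreach (TableEntity entity in table.QueryAsync<TableEntity>(e => e.PartitionKey == app))
+        {
+            state[entity.RowKey] = entity.GetString("State");
+        }
+        return new OkObjectResult(state);
+    }
+}
+```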
++
+### Deploy Azure function
+
+We provide a sample you can configure and deploy as an Azure Functions app. To create a storage account for our sample, follow these steps.
+
+1. Create table storage to save device state. In the Azure portal, create a new resource of type **Storage account** named **devicestate**.
+1. Copy the **Connection string** value from **devicestate -> Access keys**. You'll need to add this string secret to the downloaded sample Function App code.
+1. Download sample [Function App code](https://github.com/Azure-Samples/Cognitive-Services-Voice-Assistant/tree/main/custom-commands/quick-start).
+1. Open the downloaded solution in Visual Studio 2019. In **Connections.json**, replace **STORAGE_ACCOUNT_SECRET_CONNECTION_STRING** with the secret from Step 2.
+1. Download the **DeviceStateAzureFunction** code.
+
+To deploy the sample app to Azure Functions, follow these steps.
+
+1. [Deploy](../../azure-functions/index.yml) the Azure Functions app.
+1. Wait for deployment to succeed and go to the deployed resource on the Azure portal.
+1. Select **Functions** in the left pane, and then select **DeviceState**.
+1. In the new window, select **Code + Test** and then select **Get function URL**.
+
+## Setup web endpoints in Custom Commands
+
+Let's hook up the Azure function with the existing Custom Commands application.
+In this section, you'll use an existing default **DeviceState** endpoint. If you created your own web endpoint using Azure Function or otherwise, use that instead of the default `https://webendpointexample.azurewebsites.net/api/DeviceState`.
+
+1. Open the Custom Commands application you previously created.
+1. Go to **Web endpoints**, select **New web endpoint**.
+
+ > [!div class="mx-imgBorder"]
+ > ![New web endpoint](media/custom-commands/setup-web-endpoint-new-endpoint.png)
+
+ | Setting | Suggested value | Description |
+ | ------- | --------------- | ----------- |
+ | Name | UpdateDeviceState | Name for the web endpoint. |
+ | URL | https://webendpointexample.azurewebsites.net/api/DeviceState | The URL of the endpoint you wish your custom command app to talk to. |
+ | Method | POST | The allowed interactions (such as GET, POST) with your endpoint.|
+ | Headers | Key: app, Value: take the first 8 digits of your applicationId | The header parameters to include in the request header.|
+
+ > [!NOTE]
+ > - The example web endpoint was created using [Azure Functions](../../azure-functions/index.yml), and it hooks up with the database that saves the device state of the TV and fan.
+ > - The suggested header is only needed for the example endpoint.
+ > - To make sure the value of the header is unique in our example endpoint, take the first 8 digits of your **applicationId**.
+ > - In the real world, the web endpoint can be the endpoint of the [IoT hub](../../iot-hub/about-iot-hub.md) that manages your devices.
+
+1. Select **Save**.
+
+## Call web endpoints
+
+1. Go to the **TurnOnOff** command, select **ConfirmationResponse** under the completion rule, and then select **Add an action**.
+1. Under **New Action-Type**, select **Call web endpoint**.
+1. In **Edit Action - Endpoints**, select **UpdateDeviceState**, which is the web endpoint we created.
+1. In **Configuration**, enter the following values:
+ > [!div class="mx-imgBorder"]
+ > ![Call web endpoints action parameters](media/custom-commands/setup-web-endpoint-edit-action-parameters.png)
+
+ | Setting | Suggested value | Description |
+ | ------- | --------------- | ----------- |
+ | Endpoints | UpdateDeviceState | The web endpoint you wish to call in this action. |
+ | Query parameters | item={SubjectDevice}&&value={OnOff} | The query parameters to append to the web endpoint URL. |
+ | Body content | N/A | The body content of the request. |
+
+ > [!NOTE]
+ > - The suggested query parameters are only needed for the example endpoint
+
+1. In **On Success - Action to execute**, select **Send speech response**.
+
+ In **Simple editor**, enter `{SubjectDevice} is {OnOff}`.
+
+ > [!div class="mx-imgBorder"]
+ > ![Screenshot that shows the On Success - Action to execute screen.](media/custom-commands/setup-web-endpoint-edit-action-on-success-send-response.png)
+
+ | Setting | Suggested value | Description |
+ | ------- | --------------- | ----------- |
+ | Action to execute | Send speech response | Action to execute if the request to web endpoint succeeds |
+
+ > [!NOTE]
+ > - You can also directly access the fields in the http response by using `{YourWebEndpointName.FieldName}`. For example: `{UpdateDeviceState.TV}`
+
+1. In **On Failure - Action to execute**, select **Send speech response**
+
+ In **Simple editor**, enter `Sorry, {WebEndpointErrorMessage}`.
+
+ > [!div class="mx-imgBorder"]
+ > ![Call web endpoints action On Fail](media/custom-commands/setup-web-endpoint-edit-action-on-fail.png)
+
+ | Setting | Suggested value | Description |
+ | ------- | --------------- | ----------- |
+ | Action to execute | Send speech response | Action to execute if the request to web endpoint fails |
+
+ > [!NOTE]
+ > - `{WebEndpointErrorMessage}` is optional. You are free to remove it if you don't want to expose any error message.
+ > - Within our example endpoint, we send back an HTTP response with detailed error messages for common errors such as missing header parameters.
+
+### Try it out in test portal
+- To see the **On Success** response, save, train, and test.
+ > [!div class="mx-imgBorder"]
+ > ![Screenshot that shows the On Success response.](media/custom-commands/setup-web-endpoint-on-success-response.png)
+- To see the **On Failure** response, remove one of the query parameters, save, retrain, and test.
+ > [!div class="mx-imgBorder"]
+ > ![Call web endpoints action On Fail](media/custom-commands/setup-web-endpoint-on-fail-response.png)
+
+## Integrate with client application
+
+In [Send Custom Commands activity to client application](./how-to-custom-commands-send-activity-to-client.md), you added a **Send activity to client** action. That activity is sent to the client application regardless of whether the **Call web endpoint** action succeeds.
+However, typically you only want to send an activity to the client application when the call to the web endpoint is successful. In this example, that's when the device's state is successfully updated.
+
+1. Delete the **Send activity to client** action you previously added.
+1. Edit call web endpoint:
+ 1. In **Configuration**, make sure **Query Parameters** is `item={SubjectDevice}&&value={OnOff}`
+ 1. In **On Success**, change **Action to execute** to **Send activity to client**
+ 1. Copy the JSON below to the **Activity Content**
+ ```json
+ {
+ "type": "event",
+ "name": "UpdateDeviceState",
+ "value": {
+ "state": "{OnOff}",
+ "device": "{SubjectDevice}"
+ }
+ }
+ ```
+Now you only send activity to the client when the request to the web endpoint is successful.
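+
+If you want the client to react to this activity automatically (in addition to the manual sync shown in the next section), you can handle it in the `ActivityReceived` handler you registered in the earlier article. The following is only a sketch: it assumes the `connector` field from that article, the `State_TV` and `State_Fan` text blocks used by the sync code later in this section, and lowercase `SubjectDevice` values; adjust the names and casing to match your app.
+
+```csharp
+// Sketch: react to the UpdateDeviceState event that's sent on success.
+// You can merge this check into your existing ActivityReceived handler.
+// Uses JsonConvert, CoreApplication, and CoreDispatcherPriority from the existing usings.
+connector.ActivityReceived += async (sender, activityReceivedEventArgs) =>
+{
+    dynamic activity = JsonConvert.DeserializeObject(activityReceivedEventArgs.Activity);
+    if ((string)activity.type == "event" && (string)activity.name == "UpdateDeviceState")
+    {
+        string device = (string)activity.value.device;
+        string state = (string)activity.value.state;
+
+        await CoreApplication.MainView.CoreWindow.Dispatcher.RunAsync(
+            CoreDispatcherPriority.Normal,
+            () =>
+            {
+                if (device == "tv") { State_TV.Text = state; }
+                if (device == "fan") { State_Fan.Text = state; }
+            });
+    }
+};
+```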
+
+### Create visuals for syncing device state
+
+Add the following XML to `MainPage.xaml` above the **EnableMicrophoneButton** block.
+
+```xml
+<Button x:Name="SyncDeviceStateButton" Content="Sync Device State"
+ Margin="0,10,10,0" Click="SyncDeviceState_ButtonClicked"
+ Height="35"/>
+<Button x:Name="EnableMicrophoneButton" ......
+ .........../>
+```
+
+### Sync device state
+
+In `MainPage.xaml.cs`, add the reference `using Windows.Web.Http;` (the code also uses `JsonConvert`, so add `using Newtonsoft.Json;` if it isn't already present). Add the following code to the `MainPage` class. This method sends a GET request to the example endpoint and extracts the current device state for your app. Make sure to change `<your-app-name>` to the value you used in the **header** of the Custom Commands web endpoint.
+
+```csharp
+private async void SyncDeviceState_ButtonClicked(object sender, RoutedEventArgs e)
+{
+ //Create an HTTP client object
+ var httpClient = new HttpClient();
+
+ //Add the 'app' header (the value you configured for the web endpoint) to the GET request.
+ var your_app_name = "<your-app-name>";
+
+ Uri endpoint = new Uri("https://webendpointexample.azurewebsites.net/api/DeviceState");
+ var requestMessage = new HttpRequestMessage(HttpMethod.Get, endpoint);
+ requestMessage.Headers.Add("app", $"{your_app_name}");
+
+ try
+ {
+ //Send the GET request
+ var httpResponse = await httpClient.SendRequestAsync(requestMessage);
+ httpResponse.EnsureSuccessStatusCode();
+ var httpResponseBody = await httpResponse.Content.ReadAsStringAsync();
+ dynamic deviceState = JsonConvert.DeserializeObject(httpResponseBody);
+ var TVState = deviceState.TV.ToString();
+ var FanState = deviceState.Fan.ToString();
+ await CoreApplication.MainView.CoreWindow.Dispatcher.RunAsync(
+ CoreDispatcherPriority.Normal,
+ () =>
+ {
+ State_TV.Text = TVState;
+ State_Fan.Text = FanState;
+ });
+ }
+ catch (Exception ex)
+ {
+ NotifyUser(
+ $"Unable to sync device status: {ex.Message}");
+ }
+}
+```
+
+## Try it out
+
+1. Start the application.
+1. Select **Sync Device State**.\
+If you tested the app with `turn on tv` in the previous section, you'll see the TV shown as **on**.
+ > [!div class="mx-imgBorder"]
+ > ![Sync device state](media/custom-commands/setup-web-endpoint-sync-device-state.png)
+1. Select **Enable microphone**.
+1. Select the **Talk** button.
+1. Say `turn on the fan`. The visual state of the fan should change to **on**.
+ > [!div class="mx-imgBorder"]
+ > ![Turn on fan](media/custom-commands/setup-web-endpoint-turn-on-fan.png)
+
+## Next steps
+
+> [!div class="nextstepaction"]
+> [Update a command from a client app](./how-to-custom-commands-update-command-from-client.md)
ai-services How To Custom Commands Update Command From Client https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/how-to-custom-commands-update-command-from-client.md
+
+ Title: 'Update a command parameter from a client app'
+
+description: Learn how to update a command from a client application.
++++++ Last updated : 10/20/2020++++
+# Update a command from a client app
++
+In this article, you'll learn how to update an ongoing command from a client application.
+
+## Prerequisites
+> [!div class = "checklist"]
+> * A previously [created Custom Commands app](quickstart-custom-commands-application.md)
+
+## Update the state of a command
+
+If your client application requires you to update the state of an ongoing command without voice input, you can send an event to update the command.
+
+To illustrate this scenario, send the following event activity to update the state of an ongoing command (`TurnOnOff`):
+
+```json
+{
+ "type": "event",
+ "name": "RemoteUpdate",
+ "value": {
+ "updatedCommand": {
+ "name": "TurnOnOff",
+ "updatedParameters": {
+ "OnOff": "on"
+ },
+ "cancel": false
+ },
+ "updatedGlobalParameters": {},
+ "processTurn": true
+ }
+}
+```
+
+Let's review the key attributes of this activity:
+
+| Attribute | Explanation |
+| --------- | ----------- |
+| **type** | The activity is of type `"event"`. |
+| **name** | The name of the event needs to be `"RemoteUpdate"`. |
+| **value** | The attribute `"value"` contains the attributes required to update the current command. |
+| **updatedCommand** | The attribute `"updatedCommand"` contains the name of the command. Within that attribute, `"updatedParameters"` is a map with the names of the parameters and their updated values. |
+| **cancel** | If the ongoing command needs to be canceled, set the attribute `"cancel"` to `true`. |
+| **updatedGlobalParameters** | The attribute `"updatedGlobalParameters"` is a map just like `"updatedParameters"`, but it's used for global parameters. |
+| **processTurn** | If the turn needs to be processed after the activity is sent, set the attribute `"processTurn"` to `true`. |
+
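+You can also send this activity from your own client application by using the Speech SDK's `DialogServiceConnector`. The following is a minimal sketch, assuming `connector` is the `DialogServiceConnector` instance your client app already created and connected; `SendActivityAsync` delivers the JSON activity to your Custom Commands application.
+
+```csharp
+// Sketch: send the RemoteUpdate event activity from a client app.
+// Call this from an async method; 'connector' is an initialized, connected DialogServiceConnector.
+string remoteUpdateActivity = @"{
+  ""type"": ""event"",
+  ""name"": ""RemoteUpdate"",
+  ""value"": {
+    ""updatedCommand"": {
+      ""name"": ""TurnOnOff"",
+      ""updatedParameters"": { ""OnOff"": ""on"" },
+      ""cancel"": false
+    },
+    ""updatedGlobalParameters"": {},
+    ""processTurn"": true
+  }
+}";
+
+await connector.SendActivityAsync(remoteUpdateActivity);
+```
+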
+You can test this scenario in the Custom Commands portal:
+
+1. Open the Custom Commands application that you previously created.
+1. Select **Train** and then **Test**.
+1. Send `turn`.
+1. Open the side panel and select **Activity editor**.
+1. Type and send the `RemoteUpdate` event specified in the previous section.
+ > [!div class="mx-imgBorder"]
+ > ![Screenshot that shows the event for a remote command.](media/custom-commands/send-remote-command-activity-no-mic.png)
+
+Note how the value for the parameter `"OnOff"` was set to `"on"` through an activity from the client instead of speech or text.
+
+## Update the catalog of the parameter for a command
+
+When you configure the list of valid options for a parameter, the values for the parameter are defined globally for the application.
+
+In our example, the `SubjectDevice` parameter will have a fixed list of supported values regardless of the conversation.
+
+If you want to add new entries to the parameter's catalog per conversation, you can send the following activity:
+
+```json
+{
+ "type": "event",
+ "name": "RemoteUpdate",
+ "value": {
+ "catalogUpdate": {
+ "commandParameterCatalogs": {
+ "TurnOnOff": [
+ {
+ "name": "SubjectDevice",
+ "values": {
+ "stereo": [
+ "cd player"
+ ]
+ }
+ }
+ ]
+ }
+ },
+ "processTurn": false
+ }
+}
+```
+With this activity, you added an entry for `"stereo"` to the catalog of the parameter `"SubjectDevice"` in the command `"TurnOnOff"`.
+
+> [!div class="mx-imgBorder"]
+> :::image type="content" source="./media/custom-commands/update-catalog-with-remote-activity.png" alt-text="Screenshot that shows a catalog update.":::
+
+Note a couple of things:
+- You need to send this activity only once (ideally, right after you start a connection).
+- After you send this activity, you should wait for the event `ParameterCatalogsUpdated` to be sent back to the client.
+
+## Add more context from the client application
+
+You can set additional context from the client application per conversation that can later be used in your Custom Commands application.
+
+For example, think about the scenario where you want to send the ID and name of the device connected to the Custom Commands application.
+
+To test this scenario, let's create a new command in the current application:
+1. Create a new command called `GetDeviceInfo`.
+1. Add an example sentence of `get device info`.
+1. In the completion rule **Done**, add a **Send speech response** action that contains the attributes of `clientContext`.
+ ![Screenshot that shows a response for sending speech with context.](media/custom-commands/send-speech-response-context.png)
+1. Save, train, and test your application.
+1. In the testing window, send an activity to update the client context.
+
+ ```json
+ {
+ "type": "event",
+ "name": "RemoteUpdate",
+ "value": {
+ "clientContext": {
+ "deviceId": "12345",
+ "deviceName": "My device"
+ },
+ "processTurn": false
+ }
+ }
+ ```
+1. Send the text `get device info`.
+ ![Screenshot that shows an activity for sending client context.](media/custom-commands/send-client-context-activity-no-mic.png)
+
+Note a few things:
+- You need to send this activity only once (ideally, right after you start a connection).
+- You can use complex objects for `clientContext`.
+- You can use `clientContext` in speech responses, for sending activities and for calling web endpoints.
+
+## Next steps
+
+> [!div class="nextstepaction"]
+> [Update a command from a web endpoint](./how-to-custom-commands-update-command-from-web-endpoint.md)
ai-services How To Custom Commands Update Command From Web Endpoint https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/how-to-custom-commands-update-command-from-web-endpoint.md
+
+ Title: 'Update a command from a web endpoint'
+
+description: Learn how to update the state of a command by using a call to a web endpoint.
++++++ Last updated : 10/20/2020++++
+# Update a command from a web endpoint
++
+If your client application requires an update to the state of an ongoing command without voice input, you can use a call to a web endpoint to update the command.
+
+In this article, you'll learn how to update an ongoing command from a web endpoint.
+
+## Prerequisites
+> [!div class = "checklist"]
+> * A previously [created Custom Commands app](quickstart-custom-commands-application.md)
+
+## Create an Azure function
+
+For this example, you'll need an HTTP-triggered [Azure function](../../azure-functions/index.yml) that supports the following input (or a subset of this input):
+
+```JSON
+{
+ "conversationId": "SomeConversationId",
+ "currentCommand": {
+ "name": "SomeCommandName",
+ "parameters": {
+ "SomeParameterName": "SomeParameterValue",
+ "SomeOtherParameterName": "SomeOtherParameterValue"
+ }
+ },
+ "currentGlobalParameters": {
+ "SomeGlobalParameterName": "SomeGlobalParameterValue",
+ "SomeOtherGlobalParameterName": "SomeOtherGlobalParameterValue"
+ }
+}
+```
+
+Let's review the key attributes of this input:
+
+| Attribute | Explanation |
+| --------- | ----------- |
+| **conversationId** | The unique identifier of the conversation. Note that this ID can be generated from the client app. |
+| **currentCommand** | The command that's currently active in the conversation. |
+| **name** | The name of the command. The `parameters` attribute is a map with the current values of the parameters. |
+| **currentGlobalParameters** | A map like `parameters`, but used for global parameters. |
+
+The output of the Azure function needs to support the following format:
+
+```JSON
+{
+ "updatedCommand": {
+ "name": "SomeCommandName",
+ "updatedParameters": {
+ "SomeParameterName": "SomeParameterValue"
+ },
+ "cancel": false
+ },
+ "updatedGlobalParameters": {
+ "SomeGlobalParameterName": "SomeGlobalParameterValue"
+ }
+}
+```
+
+You might recognize this format because it's the same one that you used when [updating a command from the client](./how-to-custom-commands-update-command-from-client.md).
+
+Now, create an Azure function based on Node.js. Copy/paste this code:
+
+```nodejs
+module.exports = async function (context, req) {
+ context.log(req.body);
+ context.res = {
+ body: {
+ updatedCommand: {
+ name: "IncrementCounter",
+ updatedParameters: {
+ Counter: req.body.currentCommand.parameters.Counter + 1
+ }
+ }
+ }
+ };
+}
+```
+
+When you call this Azure function from Custom Commands, you send the current values of the conversation. The function returns the parameters that you want to update, or indicates that the current command should be canceled.
+
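+If you prefer C#, an equivalent HTTP-triggered function might look like the following. This is only a sketch under the same assumptions as the Node.js version (the `IncrementCounter` command and `Counter` parameter defined in the next section), not part of an official sample.
+
+```csharp
+using System.IO;
+using System.Threading.Tasks;
+using Microsoft.AspNetCore.Http;
+using Microsoft.AspNetCore.Mvc;
+using Microsoft.Azure.WebJobs;
+using Microsoft.Azure.WebJobs.Extensions.Http;
+using Newtonsoft.Json.Linq;
+
+public static class IncrementCounter
+{
+    [FunctionName("IncrementCounter")]
+    public static async Task<IActionResult> Run(
+        [HttpTrigger(AuthorizationLevel.Function, "post")] HttpRequest req)
+    {
+        // Read the Custom Commands event and pull out the current Counter value.
+        JObject body = JObject.Parse(await new StreamReader(req.Body).ReadToEndAsync());
+        int counter = (int)body["currentCommand"]["parameters"]["Counter"];
+
+        // Return the updated parameter in the response format that Custom Commands expects.
+        var response = new JObject
+        {
+            ["updatedCommand"] = new JObject
+            {
+                ["name"] = "IncrementCounter",
+                ["updatedParameters"] = new JObject { ["Counter"] = counter + 1 }
+            }
+        };
+        return new OkObjectResult(response);
+    }
+}
+```
+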
+## Update the existing Custom Commands app
+
+Let's hook up the Azure function with the existing Custom Commands app:
+
+1. Add a new command named `IncrementCounter`.
+1. Add just one example sentence with the value `increment`.
+1. Add a new parameter called `Counter` (same name as specified in the Azure function) of type `Number` with a default value of `0`.
+1. Add a new web endpoint called `IncrementEndpoint` with the URL of your Azure function, with **Remote updates** set to **Enabled**.
+ > [!div class="mx-imgBorder"]
+ > :::image type="content" source="./media/custom-commands/set-web-endpoint-with-remote-updates.png" alt-text="Screenshot that shows setting a web endpoint with remote updates.":::
+1. Create a new interaction rule called **IncrementRule** and add a **Call web endpoint** action.
+ > [!div class="mx-imgBorder"]
+ > :::image type="content" source="./media/custom-commands/increment-rule-web-endpoint.png" alt-text="Screenshot that shows the creation of an interaction rule.":::
+1. In the action configuration, select `IncrementEndpoint`. Configure **On success** to **Send speech response** with the value of `Counter`, and configure **On failure** with an error message.
+ > [!div class="mx-imgBorder"]
+ > :::image type="content" source="./media/custom-commands/set-increment-counter-call-endpoint.png" alt-text="Screenshot that shows setting an increment counter for calling a web endpoint.":::
+1. Set the post-execution state of the rule to **Wait for user's input**.
+
+## Test it
+
+1. Save and train your app.
+1. Select **Test**.
+1. Send `increment` a few times (which is the example sentence for the `IncrementCounter` command).
+ > [!div class="mx-imgBorder"]
+ > :::image type="content" source="./media/custom-commands/increment-counter-example-no-mic.png" alt-text="Screenshot that shows an increment counter example.":::
+
+Notice how the Azure function increments the value of the `Counter` parameter on each turn.
+
+## Next steps
+
+> [!div class="nextstepaction"]
+> [Enable a CI/CD process for your Custom Commands application](./how-to-custom-commands-deploy-cicd.md)
ai-services How To Custom Speech Continuous Integration Continuous Deployment https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/how-to-custom-speech-continuous-integration-continuous-deployment.md
+
+ Title: CI/CD for Custom Speech - Speech service
+
+description: Apply DevOps with Custom Speech and CI/CD workflows. Implement an existing DevOps solution for your own project.
++++++ Last updated : 05/08/2022+++
+# CI/CD for Custom Speech
+
+Implement automated training, testing, and release management to enable continuous improvement of Custom Speech models as you apply updates to training and testing data. Through effective implementation of CI/CD workflows, you can ensure that the endpoint for the best-performing Custom Speech model is always available.
+
+[Continuous integration](/devops/develop/what-is-continuous-integration) (CI) is the engineering practice of frequently committing updates in a shared repository, and performing an automated build on it. CI workflows for Custom Speech train a new model from its data sources and perform automated testing on the new model to ensure that it performs better than the previous model.
+
+[Continuous delivery](/devops/deliver/what-is-continuous-delivery) (CD) takes models from the CI process and creates an endpoint for each improved Custom Speech model. CD makes endpoints easily available to be integrated into solutions.
+
+Custom CI/CD solutions are possible, but for a robust, pre-built solution, use the [Speech DevOps template repository](https://github.com/Azure-Samples/Speech-Service-DevOps-Template), which executes CI/CD workflows using GitHub Actions.
+
+## CI/CD workflows for Custom Speech
+
+The purpose of these workflows is to ensure that each Custom Speech model has better recognition accuracy than the previous build. If the updates to the testing and/or training data improve the accuracy, these workflows create a new Custom Speech endpoint.
+
+Git servers such as GitHub and Azure DevOps can run automated workflows when specific Git events happen, such as merges or pull requests. For example, a CI workflow can be triggered when updates to testing data are pushed to the *main* branch. Different Git servers have different tooling, but all of them let you script command-line interface (CLI) commands so that they can execute on a build server.
+
+Along the way, the workflows should name and store data, tests, test files, models, and endpoints such that they can be traced back to the commit or version they came from. It is also helpful to name these assets so that it is easy to see which were created after updating testing data versus training data.
+
+### CI workflow for testing data updates
+
+The principal purpose of the CI/CD workflows is to build a new model using the training data, and to test that model using the testing data to establish whether the [Word Error Rate](how-to-custom-speech-evaluate-data.md#evaluate-word-error-rate) (WER) has improved compared to the previous best-performing model (the "benchmark model"). If the new model performs better, it becomes the new benchmark model against which future models are compared.
+
+The CI workflow for testing data updates should retest the current benchmark model with the updated test data to calculate the revised WER. This ensures that when the WER of a new model is compared to the WER of the benchmark, both models have been tested against the same test data and you're comparing like with like.
+
+This workflow should trigger on updates to testing data and:
+
+- Test the benchmark model against the updated testing data.
+- Store the test output, which contains the WER of the benchmark model, using the updated data.
+- The WER from these tests will become the new benchmark WER that future models must beat.
+- The CD workflow does not execute for updates to testing data.
+
+### CI workflow for training data updates
+
+Updates to training data signify updates to the custom model.
+
+This workflow should trigger on updates to training data and:
+
+- Train a new model with the updated training data.
+- Test the new model against the testing data.
+- Store the test output, which contains the WER.
+- Compare the WER from the new model to the WER from the benchmark model.
+- If the WER does not improve, stop the workflow.
+- If the WER improves, execute the CD workflow to create a Custom Speech endpoint.
+
+### CD workflow
+
+After an update to the training data improves a model's recognition, the CD workflow should automatically execute to create a new endpoint for that model, and make that endpoint available such that it can be used in a solution.
+
+#### Release management
+
+Most teams require a manual review and approval process for deployment to a production environment. For a production deployment, you might want to make sure it happens when key people on the development team are available for support, or during low-traffic periods.
+
+### Tools for Custom Speech workflows
+
+Use the following tools for CI/CD automation workflows for Custom Speech:
+
+- [Azure CLI](/cli/azure/) to create an Azure service principal authentication, query Azure subscriptions, and store test results in Azure Blob.
+- [Azure AI Speech CLI](spx-overview.md) to interact with the Speech service from the command line or an automated workflow.
+
+## DevOps solution for Custom Speech using GitHub Actions
+
+For an already-implemented DevOps solution for Custom Speech, go to the [Speech DevOps template repo](https://github.com/Azure-Samples/Speech-Service-DevOps-Template). Create a copy of the template and begin development of custom models with a robust DevOps system that includes testing, training, and versioning using GitHub Actions. The repository provides sample testing and training data to aid in setup and explain the workflow. After initial setup, replace the sample data with your project data.
+
+The [Speech DevOps template repo](https://github.com/Azure-Samples/Speech-Service-DevOps-Template) provides the infrastructure and detailed guidance to:
+
+- Copy the template repository to your GitHub account, then create Azure resources and a [service principal](../../active-directory/develop/app-objects-and-service-principals.md#service-principal-object) for the GitHub Actions CI/CD workflows.
+- Walk through the "[dev inner loop](/dotnet/architecture/containerized-lifecycle/design-develop-containerized-apps/docker-apps-inner-loop-workflow)." Update training and testing data from a feature branch, test the changes with a temporary development model, and raise a pull request to propose and review the changes.
+- When training data is updated in a pull request to *main*, train models with the GitHub Actions CI workflow.
+- Perform automated accuracy testing to establish a model's [Word Error Rate](how-to-custom-speech-evaluate-data.md#evaluate-word-error-rate) (WER). Store the test results in Azure Blob.
+- Execute the CD workflow to create an endpoint when the WER improves.
+
+## Next steps
+
+- Use the [Speech DevOps template repo](https://github.com/Azure-Samples/Speech-Service-DevOps-Template) to implement DevOps for Custom Speech with GitHub Actions.
ai-services How To Custom Speech Create Project https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/how-to-custom-speech-create-project.md
+
+ Title: Create a Custom Speech project - Speech service
+
+description: Learn about how to create a project for Custom Speech.
++++++ Last updated : 11/29/2022+
+zone_pivot_groups: speech-studio-cli-rest
++
+# Create a Custom Speech project
+
+Custom Speech projects contain models, training and testing datasets, and deployment endpoints. Each project is specific to a [locale](language-support.md?tabs=stt). For example, you might create a project for English in the United States.
+
+## Create a project
++
+To create a Custom Speech project, follow these steps:
+
+1. Sign in to the [Speech Studio](https://aka.ms/speechstudio/customspeech).
+1. Select the subscription and Speech resource to work with.
+
+ > [!IMPORTANT]
+ > If you plan to train a custom model with audio data, choose a Speech resource region that has dedicated hardware for training audio data. See the footnotes in the [regions](regions.md#speech-service) table for more information.
+
+1. Select **Custom speech** > **Create a new project**.
+1. Follow the instructions provided by the wizard to create your project.
+
+Select the new project by name or select **Go to project**. You will see these menu items in the left panel: **Speech datasets**, **Train custom models**, **Test models**, and **Deploy models**.
+++
+To create a project, use the `spx csr project create` command. Construct the request parameters according to the following instructions:
+
+- Set the required `language` parameter. The locale of the project and the contained datasets should be the same. The locale can't be changed later. The Speech CLI `language` parameter corresponds to the `locale` property in the JSON request and response.
+- Set the required `name` parameter. This is the name that will be displayed in the Speech Studio. The Speech CLI `name` parameter corresponds to the `displayName` property in the JSON request and response.
+
+Here's an example Speech CLI command that creates a project:
+
+```azurecli-interactive
+spx csr project create --api-version v3.1 --name "My Project" --description "My Project Description" --language "en-US"
+```
+
+You should receive a response body in the following format:
+
+```json
+{
+ "self": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.1/projects/1cdfa276-0f9d-425b-a942-5f2be93017ed",
+ "links": {
+ "evaluations": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.1/projects/1cdfa276-0f9d-425b-a942-5f2be93017ed/evaluations",
+ "datasets": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.1/projects/1cdfa276-0f9d-425b-a942-5f2be93017ed/datasets",
+ "models": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.1/projects/1cdfa276-0f9d-425b-a942-5f2be93017ed/models",
+ "endpoints": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.1/projects/1cdfa276-0f9d-425b-a942-5f2be93017ed/endpoints",
+ "transcriptions": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.1/projects/1cdfa276-0f9d-425b-a942-5f2be93017ed/transcriptions"
+ },
+ "properties": {
+ "datasetCount": 0,
+ "evaluationCount": 0,
+ "modelCount": 0,
+ "transcriptionCount": 0,
+ "endpointCount": 0
+ },
+ "createdDateTime": "2022-05-17T22:15:18Z",
+ "locale": "en-US",
+ "displayName": "My Project",
+ "description": "My Project Description"
+}
+```
+
+The top-level `self` property in the response body is the project's URI. Use this URI to get details about the project's evaluations, datasets, models, endpoints, and transcriptions. You also use this URI to update or delete a project.
+
+For Speech CLI help with projects, run the following command:
+
+```azurecli-interactive
+spx help csr project
+```
+++
+To create a project, use the [Projects_Create](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/operations/Projects_Create) operation of the [Speech to text REST API](rest-speech-to-text.md). Construct the request body according to the following instructions:
+
+- Set the required `locale` property. This should be the locale of the contained datasets. The locale can't be changed later.
+- Set the required `displayName` property. This is the project name that will be displayed in the Speech Studio.
+
+Make an HTTP POST request using the URI as shown in the following [Projects_Create](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/operations/Projects_Create) example. Replace `YourSubscriptionKey` with your Speech resource key, replace `YourServiceRegion` with your Speech resource region, and set the request body properties as previously described.
+
+```azurecli-interactive
+curl -v -X POST -H "Ocp-Apim-Subscription-Key: YourSubscriptionKey" -H "Content-Type: application/json" -d '{
+ "displayName": "My Project",
+ "description": "My Project Description",
+ "locale": "en-US"
+} ' "https://YourServiceRegion.api.cognitive.microsoft.com/speechtotext/v3.1/projects"
+```
+
+You should receive a response body in the following format:
+
+```json
+{
+ "self": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.1/projects/1cdfa276-0f9d-425b-a942-5f2be93017ed",
+ "links": {
+ "evaluations": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.1/projects/1cdfa276-0f9d-425b-a942-5f2be93017ed/evaluations",
+ "datasets": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.1/projects/1cdfa276-0f9d-425b-a942-5f2be93017ed/datasets",
+ "models": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.1/projects/1cdfa276-0f9d-425b-a942-5f2be93017ed/models",
+ "endpoints": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.1/projects/1cdfa276-0f9d-425b-a942-5f2be93017ed/endpoints",
+ "transcriptions": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.1/projects/1cdfa276-0f9d-425b-a942-5f2be93017ed/transcriptions"
+ },
+ "properties": {
+ "datasetCount": 0,
+ "evaluationCount": 0,
+ "modelCount": 0,
+ "transcriptionCount": 0,
+ "endpointCount": 0
+ },
+ "createdDateTime": "2022-05-17T22:15:18Z",
+ "locale": "en-US",
+ "displayName": "My Project",
+ "description": "My Project Description"
+}
+```
+
+The top-level `self` property in the response body is the project's URI. Use this URI to [get](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/operations/Projects_Get) details about the project's evaluations, datasets, models, endpoints, and transcriptions. You also use this URI to [update](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/operations/Projects_Update) or [delete](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/operations/Projects_Delete) a project.
++
+## Choose your model
+
+There are a few approaches to using Custom Speech models:
+- The base model provides accurate speech recognition out of the box for a range of [scenarios](overview.md#speech-scenarios). Base models are updated periodically to improve accuracy and quality. If you use base models, we recommend using the latest default base model. If a required customization capability is only available with an older model, you can choose an older base model.
+- A custom model augments the base model to include domain-specific vocabulary shared across all areas of the custom domain.
+- Multiple custom models can be used when the custom domain has multiple areas, each with a specific vocabulary.
+
+One recommended way to see if the base model will suffice is to analyze the transcription produced from the base model and compare it with a human-generated transcript for the same audio. You can compare the transcripts and obtain a [word error rate (WER)](how-to-custom-speech-evaluate-data.md#evaluate-word-error-rate) score. If the WER score is high, training a custom model to recognize the incorrectly identified words is recommended.
+
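+For reference, WER is the word-level edit distance (substitutions, deletions, and insertions) between the human-generated transcript and the model's output, divided by the number of words in the human-generated transcript. The following is a minimal sketch of that calculation, useful for reasoning about the scores you see; it isn't the service's implementation.
+
+```csharp
+using System;
+
+public static class WordErrorRate
+{
+    // WER = (substitutions + deletions + insertions) / number of reference words,
+    // computed here as a word-level Levenshtein distance.
+    public static double Compute(string reference, string hypothesis)
+    {
+        string[] r = reference.ToLowerInvariant().Split(new[] { ' ' }, StringSplitOptions.RemoveEmptyEntries);
+        string[] h = hypothesis.ToLowerInvariant().Split(new[] { ' ' }, StringSplitOptions.RemoveEmptyEntries);
+
+        // d[i, j] = minimum edits to turn the first i reference words into the first j hypothesis words.
+        var d = new int[r.Length + 1, h.Length + 1];
+        for (int i = 0; i <= r.Length; i++) d[i, 0] = i;
+        for (int j = 0; j <= h.Length; j++) d[0, j] = j;
+
+        for (int i = 1; i <= r.Length; i++)
+        {
+            for (int j = 1; j <= h.Length; j++)
+            {
+                int substitution = d[i - 1, j - 1] + (r[i - 1] == h[j - 1] ? 0 : 1);
+                int deletion = d[i - 1, j] + 1;
+                int insertion = d[i, j - 1] + 1;
+                d[i, j] = Math.Min(substitution, Math.Min(deletion, insertion));
+            }
+        }
+
+        return (double)d[r.Length, h.Length] / r.Length;
+    }
+}
+
+// Example: Compute("turn on the fan", "turn on a fan") returns 0.25 (one substitution over four reference words).
+```
+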
+Multiple models are recommended if the vocabulary varies across the domain areas. For instance, Olympic commentators report on various events, each associated with its own vernacular. Because each Olympic event vocabulary differs significantly from others, building a custom model specific to an event increases accuracy by limiting the utterance data relative to that particular event. As a result, the model doesn't need to sift through unrelated data to make a match. Regardless, training still requires a decent variety of training data. Include audio from various commentators with different accents, genders, ages, and so on.
+
+## Model stability and lifecycle
+
+A base model or custom model deployed to an endpoint using Custom Speech is fixed until you decide to update it. The speech recognition accuracy and quality will remain consistent, even when a new base model is released. This allows you to lock in the behavior of a specific model until you decide to use a newer model.
+
+Whether you train your own model or use a snapshot of a base model, you can use the model for a limited time. For more information, see [Model and endpoint lifecycle](./how-to-custom-speech-model-and-endpoint-lifecycle.md).
+
+## Next steps
+
+* [Training and testing datasets](./how-to-custom-speech-test-and-train.md)
+* [Test model quantitatively](how-to-custom-speech-evaluate-data.md)
+* [Train a model](how-to-custom-speech-train-model.md)
ai-services How To Custom Speech Deploy Model https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/how-to-custom-speech-deploy-model.md
+
+ Title: Deploy a Custom Speech model - Speech service
+
+description: Learn how to deploy Custom Speech models.
++++++ Last updated : 11/29/2022+
+zone_pivot_groups: speech-studio-cli-rest
++
+# Deploy a Custom Speech model
+
+In this article, you'll learn how to deploy an endpoint for a Custom Speech model. With the exception of [batch transcription](batch-transcription.md), you must deploy a custom endpoint to use a Custom Speech model.
+
+> [!TIP]
+> A hosted deployment endpoint isn't required to use Custom Speech with the [Batch transcription API](batch-transcription.md). You can conserve resources if the [custom speech model](how-to-custom-speech-train-model.md) is only used for batch transcription. For more information, see [Speech service pricing](https://azure.microsoft.com/pricing/details/cognitive-services/speech-services/).
+
+You can deploy an endpoint for a base or custom model, and then [update](#change-model-and-redeploy-endpoint) the endpoint later to use a better trained model.
+
+> [!NOTE]
+> Endpoints used by `F0` Speech resources are deleted after seven days.
+
+## Add a deployment endpoint
++
+To create a custom endpoint, follow these steps:
+
+1. Sign in to the [Speech Studio](https://aka.ms/speechstudio/customspeech).
+1. Select **Custom Speech** > Your project name > **Deploy models**.
+
+ If this is your first endpoint, you'll notice that there are no endpoints listed in the table. After you create an endpoint, you use this page to track each deployed endpoint.
+
+1. Select **Deploy model** to start the new endpoint wizard.
+
+1. On the **New endpoint** page, enter a name and description for your custom endpoint.
+
+1. Select the custom model that you want to associate with the endpoint.
+
+1. Optionally, you can check the box to enable audio and diagnostic [logging](#view-logging-data) of the endpoint's traffic.
+
+ :::image type="content" source="./media/custom-speech/custom-speech-deploy-model.png" alt-text="Screenshot of the New endpoint page that shows the checkbox to enable logging.":::
+
+1. Select **Add** to save and deploy the endpoint.
+
+On the main **Deploy models** page, details about the new endpoint are displayed in a table, such as name, description, status, and expiration date. It can take up to 30 minutes to instantiate a new endpoint that uses your custom models. When the status of the deployment changes to **Succeeded**, the endpoint is ready to use.
+
+> [!IMPORTANT]
+> Take note of the model expiration date. This is the last date that you can use your custom model for speech recognition. For more information, see [Model and endpoint lifecycle](./how-to-custom-speech-model-and-endpoint-lifecycle.md).
+
+Select the endpoint link to view information specific to it, such as the endpoint key, endpoint URL, and sample code.
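+
+For example, once the deployment succeeds, you can point the Speech SDK at the custom endpoint by setting the `EndpointId` property on your `SpeechConfig`. The following is a minimal C# sketch; replace the placeholder key, region, and endpoint ID with your own values.
+
+```csharp
+using System;
+using System.Threading.Tasks;
+using Microsoft.CognitiveServices.Speech;
+using Microsoft.CognitiveServices.Speech.Audio;
+
+class Program
+{
+    static async Task Main()
+    {
+        // Use the custom endpoint instead of the base model.
+        var config = SpeechConfig.FromSubscription("YourSubscriptionKey", "YourServiceRegion");
+        config.EndpointId = "YourEndpointId";
+
+        using var audioConfig = AudioConfig.FromDefaultMicrophoneInput();
+        using var recognizer = new SpeechRecognizer(config, audioConfig);
+
+        var result = await recognizer.RecognizeOnceAsync();
+        Console.WriteLine($"Recognized: {result.Text}");
+    }
+}
+```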
+++
+To create an endpoint and deploy a model, use the `spx csr endpoint create` command. Construct the request parameters according to the following instructions:
+
+- Set the `project` parameter to the ID of an existing project. This is recommended so that you can also view and manage the endpoint in Speech Studio. You can run the `spx csr project list` command to get available projects.
+- Set the required `model` parameter to the ID of the model that you want deployed to the endpoint.
+- Set the required `language` parameter. The endpoint locale must match the locale of the model. The locale can't be changed later. The Speech CLI `language` parameter corresponds to the `locale` property in the JSON request and response.
+- Set the required `name` parameter. This is the name that will be displayed in the Speech Studio. The Speech CLI `name` parameter corresponds to the `displayName` property in the JSON request and response.
+- Optionally, you can set the `logging` parameter. Set this to `enabled` to enable audio and diagnostic [logging](#view-logging-data) of the endpoint's traffic. The default is `false`.
+
+Here's an example Speech CLI command to create an endpoint and deploy a model:
+
+```azurecli-interactive
+spx csr endpoint create --api-version v3.1 --project YourProjectId --model YourModelId --name "My Endpoint" --description "My Endpoint Description" --language "en-US"
+```
+
+You should receive a response body in the following format:
+
+```json
+{
+ "self": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.1/endpoints/98375aaa-40c2-42c4-b65c-f76734fc7790",
+ "model": {
+ "self": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.1/models/base/ae8d1643-53e4-4554-be4c-221dcfb471c5"
+ },
+ "links": {
+ "logs": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.1/endpoints/98375aaa-40c2-42c4-b65c-f76734fc7790/files/logs",
+ "restInteractive": "https://eastus.stt.speech.microsoft.com/speech/recognition/interactive/cognitiveservices/v1?cid=98375aaa-40c2-42c4-b65c-f76734fc7790",
+ "restConversation": "https://eastus.stt.speech.microsoft.com/speech/recognition/conversation/cognitiveservices/v1?cid=98375aaa-40c2-42c4-b65c-f76734fc7790",
+ "restDictation": "https://eastus.stt.speech.microsoft.com/speech/recognition/dictation/cognitiveservices/v1?cid=98375aaa-40c2-42c4-b65c-f76734fc7790",
+ "webSocketInteractive": "wss://eastus.stt.speech.microsoft.com/speech/recognition/interactive/cognitiveservices/v1?cid=98375aaa-40c2-42c4-b65c-f76734fc7790",
+ "webSocketConversation": "wss://eastus.stt.speech.microsoft.com/speech/recognition/conversation/cognitiveservices/v1?cid=98375aaa-40c2-42c4-b65c-f76734fc7790",
+ "webSocketDictation": "wss://eastus.stt.speech.microsoft.com/speech/recognition/dictation/cognitiveservices/v1?cid=98375aaa-40c2-42c4-b65c-f76734fc7790"
+ },
+ "project": {
+ "self": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.1/projects/d40f2eb8-1abf-4f72-9008-a5ae8add82a4"
+ },
+ "properties": {
+ "loggingEnabled": true
+ },
+ "lastActionDateTime": "2022-05-19T15:27:51Z",
+ "status": "NotStarted",
+ "createdDateTime": "2022-05-19T15:27:51Z",
+ "locale": "en-US",
+ "displayName": "My Endpoint",
+ "description": "My Endpoint Description"
+}
+```
+
+The top-level `self` property in the response body is the endpoint's URI. Use this URI to get details about the endpoint's project, model, and logs. You also use this URI to update the endpoint.
+
+For Speech CLI help with endpoints, run the following command:
+
+```azurecli-interactive
+spx help csr endpoint
+```
+++
+To create an endpoint and deploy a model, use the [Endpoints_Create](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/operations/Endpoints_Create) operation of the [Speech to text REST API](rest-speech-to-text.md). Construct the request body according to the following instructions:
+
+- Set the `project` property to the URI of an existing project. This is recommended so that you can also view and manage the endpoint in Speech Studio. You can make a [Projects_List](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/operations/Projects_List) request to get available projects.
+- Set the required `model` property to the URI of the model that you want deployed to the endpoint.
+- Set the required `locale` property. The endpoint locale must match the locale of the model. The locale can't be changed later.
+- Set the required `displayName` property. This is the name that will be displayed in the Speech Studio.
+- Optionally, you can set the `loggingEnabled` property within `properties`. Set this to `true` to enable audio and diagnostic [logging](#view-logging-data) of the endpoint's traffic. The default is `false`.
+
+Make an HTTP POST request using the URI as shown in the following [Endpoints_Create](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/operations/Endpoints_Create) example. Replace `YourSubscriptionKey` with your Speech resource key, replace `YourServiceRegion` with your Speech resource region, and set the request body properties as previously described.
+
+```azurecli-interactive
+curl -v -X POST -H "Ocp-Apim-Subscription-Key: YourSubscriptionKey" -H "Content-Type: application/json" -d '{
+ "project": {
+ "self": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.1/projects/d40f2eb8-1abf-4f72-9008-a5ae8add82a4"
+ },
+ "properties": {
+ "loggingEnabled": true
+ },
+ "displayName": "My Endpoint",
+ "description": "My Endpoint Description",
+ "model": {
+ "self": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.1/models/base/ae8d1643-53e4-4554-be4c-221dcfb471c5"
+ },
+ "locale": "en-US"
+}' "https://YourServiceRegion.api.cognitive.microsoft.com/speechtotext/v3.1/endpoints"
+```
+
+You should receive a response body in the following format:
+
+```json
+{
+ "self": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.1/endpoints/98375aaa-40c2-42c4-b65c-f76734fc7790",
+ "model": {
+ "self": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.1/models/base/ae8d1643-53e4-4554-be4c-221dcfb471c5"
+ },
+ "links": {
+ "logs": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.1/endpoints/98375aaa-40c2-42c4-b65c-f76734fc7790/files/logs",
+ "restInteractive": "https://eastus.stt.speech.microsoft.com/speech/recognition/interactive/cognitiveservices/v1?cid=98375aaa-40c2-42c4-b65c-f76734fc7790",
+ "restConversation": "https://eastus.stt.speech.microsoft.com/speech/recognition/conversation/cognitiveservices/v1?cid=98375aaa-40c2-42c4-b65c-f76734fc7790",
+ "restDictation": "https://eastus.stt.speech.microsoft.com/speech/recognition/dictation/cognitiveservices/v1?cid=98375aaa-40c2-42c4-b65c-f76734fc7790",
+ "webSocketInteractive": "wss://eastus.stt.speech.microsoft.com/speech/recognition/interactive/cognitiveservices/v1?cid=98375aaa-40c2-42c4-b65c-f76734fc7790",
+ "webSocketConversation": "wss://eastus.stt.speech.microsoft.com/speech/recognition/conversation/cognitiveservices/v1?cid=98375aaa-40c2-42c4-b65c-f76734fc7790",
+ "webSocketDictation": "wss://eastus.stt.speech.microsoft.com/speech/recognition/dictation/cognitiveservices/v1?cid=98375aaa-40c2-42c4-b65c-f76734fc7790"
+ },
+ "project": {
+ "self": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.1/projects/d40f2eb8-1abf-4f72-9008-a5ae8add82a4"
+ },
+ "properties": {
+ "loggingEnabled": true
+ },
+ "lastActionDateTime": "2022-05-19T15:27:51Z",
+ "status": "NotStarted",
+ "createdDateTime": "2022-05-19T15:27:51Z",
+ "locale": "en-US",
+ "displayName": "My Endpoint",
+ "description": "My Endpoint Description"
+}
+```
+
+The top-level `self` property in the response body is the endpoint's URI. Use this URI to [get](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/operations/Endpoints_Get) details about the endpoint's project, model, and logs. You also use this URI to [update](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/operations/Endpoints_Update) or [delete](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/operations/Endpoints_Delete) the endpoint.
++
+## Change model and redeploy endpoint
+
+An endpoint can be updated to use another model that was created by the same Speech resource. As previously mentioned, you must update the endpoint's model before the [model expires](./how-to-custom-speech-model-and-endpoint-lifecycle.md).
++
+To use a new model and redeploy the custom endpoint:
+
+1. Sign in to the [Speech Studio](https://aka.ms/speechstudio/customspeech).
+1. Select **Custom Speech** > Your project name > **Deploy models**.
+1. Select the link to an endpoint by name, and then select **Change model**.
+1. Select the new model that you want the endpoint to use.
+1. Select **Done** to save and redeploy the endpoint.
+++
+To redeploy the custom endpoint with a new model, use the `spx csr endpoint update` command. Construct the request parameters according to the following instructions:
+
+- Set the required `endpoint` parameter to the ID of the endpoint that you want deployed.
+- Set the required `model` parameter to the ID of the model that you want deployed to the endpoint.
+
+Here's an example Speech CLI command that redeploys the custom endpoint with a new model:
+
+```azurecli-interactive
+spx csr endpoint update --api-version v3.1 --endpoint YourEndpointId --model YourModelId
+```
+
+You should receive a response body in the following format:
+
+```json
+{
+ "self": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.1/endpoints/98375aaa-40c2-42c4-b65c-f76734fc7790",
+ "model": {
+ "self": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.1/models/1e47c19d-12ca-4ba5-b177-9e04bd72cf98"
+ },
+ "links": {
+ "logs": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.1/endpoints/98375aaa-40c2-42c4-b65c-f76734fc7790/files/logs",
+ "restInteractive": "https://eastus.stt.speech.microsoft.com/speech/recognition/interactive/cognitiveservices/v1?cid=98375aaa-40c2-42c4-b65c-f76734fc7790",
+ "restConversation": "https://eastus.stt.speech.microsoft.com/speech/recognition/conversation/cognitiveservices/v1?cid=98375aaa-40c2-42c4-b65c-f76734fc7790",
+ "restDictation": "https://eastus.stt.speech.microsoft.com/speech/recognition/dictation/cognitiveservices/v1?cid=98375aaa-40c2-42c4-b65c-f76734fc7790",
+ "webSocketInteractive": "wss://eastus.stt.speech.microsoft.com/speech/recognition/interactive/cognitiveservices/v1?cid=98375aaa-40c2-42c4-b65c-f76734fc7790",
+ "webSocketConversation": "wss://eastus.stt.speech.microsoft.com/speech/recognition/conversation/cognitiveservices/v1?cid=98375aaa-40c2-42c4-b65c-f76734fc7790",
+ "webSocketDictation": "wss://eastus.stt.speech.microsoft.com/speech/recognition/dictation/cognitiveservices/v1?cid=98375aaa-40c2-42c4-b65c-f76734fc7790"
+ },
+ "project": {
+ "self": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.1/projects/639d5280-8995-40cc-9329-051fd0fddd46"
+ },
+ "properties": {
+ "loggingEnabled": true
+ },
+ "lastActionDateTime": "2022-05-19T23:01:34Z",
+ "status": "NotStarted",
+ "createdDateTime": "2022-05-19T15:41:27Z",
+ "locale": "en-US",
+ "displayName": "My Endpoint",
+ "description": "My Updated Endpoint Description"
+}
+```
+
+For Speech CLI help with endpoints, run the following command:
+
+```azurecli-interactive
+spx help csr endpoint
+```
+++
+To redeploy the custom endpoint with a new model, use the [Endpoints_Update](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/operations/Endpoints_Update) operation of the [Speech to text REST API](rest-speech-to-text.md). Construct the request body according to the following instructions:
+
+- Set the `model` property to the URI of the model that you want deployed to the endpoint.
+
+Make an HTTP PATCH request using the URI as shown in the following example. Replace `YourSubscriptionKey` with your Speech resource key, replace `YourServiceRegion` with your Speech resource region, replace `YourEndpointId` with your endpoint ID, and set the request body properties as previously described.
+
+```azurecli-interactive
+curl -v -X PATCH -H "Ocp-Apim-Subscription-Key: YourSubscriptionKey" -H "Content-Type: application/json" -d '{
+ "model": {
+ "self": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.1/models/1e47c19d-12ca-4ba5-b177-9e04bd72cf98"
+ }
+}' "https://YourServiceRegion.api.cognitive.microsoft.com/speechtotext/v3.1/endpoints/YourEndpointId"
+```
+
+You should receive a response body in the following format:
+
+```json
+{
+ "self": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.1/endpoints/98375aaa-40c2-42c4-b65c-f76734fc7790",
+ "model": {
+ "self": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.1/models/1e47c19d-12ca-4ba5-b177-9e04bd72cf98"
+ },
+ "links": {
+ "logs": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.1/endpoints/98375aaa-40c2-42c4-b65c-f76734fc7790/files/logs",
+ "restInteractive": "https://eastus.stt.speech.microsoft.com/speech/recognition/interactive/cognitiveservices/v1?cid=98375aaa-40c2-42c4-b65c-f76734fc7790",
+ "restConversation": "https://eastus.stt.speech.microsoft.com/speech/recognition/conversation/cognitiveservices/v1?cid=98375aaa-40c2-42c4-b65c-f76734fc7790",
+ "restDictation": "https://eastus.stt.speech.microsoft.com/speech/recognition/dictation/cognitiveservices/v1?cid=98375aaa-40c2-42c4-b65c-f76734fc7790",
+ "webSocketInteractive": "wss://eastus.stt.speech.microsoft.com/speech/recognition/interactive/cognitiveservices/v1?cid=98375aaa-40c2-42c4-b65c-f76734fc7790",
+ "webSocketConversation": "wss://eastus.stt.speech.microsoft.com/speech/recognition/conversation/cognitiveservices/v1?cid=98375aaa-40c2-42c4-b65c-f76734fc7790",
+ "webSocketDictation": "wss://eastus.stt.speech.microsoft.com/speech/recognition/dictation/cognitiveservices/v1?cid=98375aaa-40c2-42c4-b65c-f76734fc7790"
+ },
+ "project": {
+ "self": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.1/projects/639d5280-8995-40cc-9329-051fd0fddd46"
+ },
+ "properties": {
+ "loggingEnabled": true
+ },
+ "lastActionDateTime": "2022-05-19T23:01:34Z",
+ "status": "NotStarted",
+ "createdDateTime": "2022-05-19T15:41:27Z",
+ "locale": "en-US",
+ "displayName": "My Endpoint",
+ "description": "My Updated Endpoint Description"
+}
+```
++
+The redeployment takes several minutes to complete. In the meantime, your endpoint will use the previous model without interruption of service.
+
+## View logging data
+
+Logging data is available for export if you configured it while creating the endpoint.
++
+To download the endpoint logs:
+
+1. Sign in to the [Speech Studio](https://aka.ms/speechstudio/customspeech).
+1. Select **Custom Speech** > Your project name > **Deploy models**.
+1. Select the link by endpoint name.
+1. Under **Content logging**, select **Download log**.
+++
+To get logs for an endpoint, use the `spx csr endpoint list` command. Construct the request parameters according to the following instructions:
+
+- Set the required `endpoint` parameter to the ID of the endpoint that you want to get logs for.
+
+Here's an example Speech CLI command that gets logs for an endpoint:
+
+```azurecli-interactive
+spx csr endpoint list --api-version v3.1 --endpoint YourEndpointId
+```
+
+The location of each log file, along with more details, is returned in the response body.
+++
+To get logs for an endpoint, start by using the [Endpoints_Get](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/operations/Endpoints_Get) operation of the [Speech to text REST API](rest-speech-to-text.md).
+
+Make an HTTP GET request using the URI as shown in the following example. Replace `YourEndpointId` with your endpoint ID, replace `YourSubscriptionKey` with your Speech resource key, and replace `YourServiceRegion` with your Speech resource region.
+
+```azurecli-interactive
+curl -v -X GET "https://YourServiceRegion.api.cognitive.microsoft.com/speechtotext/v3.1/endpoints/YourEndpointId" -H "Ocp-Apim-Subscription-Key: YourSubscriptionKey"
+```
+
+You should receive a response body in the following format:
+
+```json
+{
+ "self": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.1/endpoints/98375aaa-40c2-42c4-b65c-f76734fc7790",
+ "model": {
+ "self": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.1/models/1e47c19d-12ca-4ba5-b177-9e04bd72cf98"
+ },
+ "links": {
+ "logs": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.1/endpoints/98375aaa-40c2-42c4-b65c-f76734fc7790/files/logs",
+ "restInteractive": "https://eastus.stt.speech.microsoft.com/speech/recognition/interactive/cognitiveservices/v1?cid=98375aaa-40c2-42c4-b65c-f76734fc7790",
+ "restConversation": "https://eastus.stt.speech.microsoft.com/speech/recognition/conversation/cognitiveservices/v1?cid=98375aaa-40c2-42c4-b65c-f76734fc7790",
+ "restDictation": "https://eastus.stt.speech.microsoft.com/speech/recognition/dictation/cognitiveservices/v1?cid=98375aaa-40c2-42c4-b65c-f76734fc7790",
+ "webSocketInteractive": "wss://eastus.stt.speech.microsoft.com/speech/recognition/interactive/cognitiveservices/v1?cid=98375aaa-40c2-42c4-b65c-f76734fc7790",
+ "webSocketConversation": "wss://eastus.stt.speech.microsoft.com/speech/recognition/conversation/cognitiveservices/v1?cid=98375aaa-40c2-42c4-b65c-f76734fc7790",
+ "webSocketDictation": "wss://eastus.stt.speech.microsoft.com/speech/recognition/dictation/cognitiveservices/v1?cid=98375aaa-40c2-42c4-b65c-f76734fc7790"
+ },
+ "project": {
+ "self": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.1/projects/2f78cdb7-58ac-4bd9-9bc6-170e31483b26"
+ },
+ "properties": {
+ "loggingEnabled": true
+ },
+ "lastActionDateTime": "2022-05-19T23:41:05Z",
+ "status": "Succeeded",
+ "createdDateTime": "2022-05-19T23:41:05Z",
+ "locale": "en-US",
+ "displayName": "My Endpoint",
+ "description": "My Updated Endpoint Description"
+}
+```
+
+Make an HTTP GET request using the "logs" URI from the previous response body. Replace `YourEndpointId` with your endpoint ID, replace `YourSubscriptionKey` with your Speech resource key, and replace `YourServiceRegion` with your Speech resource region.
++
+```curl
+curl -v -X GET "https://YourServiceRegion.api.cognitive.microsoft.com/speechtotext/v3.1/endpoints/YourEndpointId/files/logs" -H "Ocp-Apim-Subscription-Key: YourSubscriptionKey"
+```
+
+The location of each log file, along with more details, is returned in the response body.
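+
+Each file entry in that response typically includes a `links.contentUrl` value, which is a pre-authenticated URL that you can download directly. The following is a minimal sketch; `YourLogFileContentUrl` is a placeholder for a URL copied from the response:
+
+```azurecli-interactive
+# Download a single log file by using its contentUrl.
+# The URL already carries a SAS token, so no subscription key header is needed.
+curl -v -X GET "YourLogFileContentUrl" -o endpoint-log-file
+```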
++
+Logging data is available on Microsoft-owned storage for 30 days, after which it will be removed. If your own storage account is linked to the Azure AI services subscription, the logging data won't be automatically deleted.
+
+## Next steps
+
+- [CI/CD for Custom Speech](how-to-custom-speech-continuous-integration-continuous-deployment.md)
+- [Custom Speech model lifecycle](how-to-custom-speech-model-and-endpoint-lifecycle.md)
ai-services How To Custom Speech Evaluate Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/how-to-custom-speech-evaluate-data.md
+
+ Title: Test accuracy of a Custom Speech model - Speech service
+
+description: In this article, you learn how to quantitatively measure and improve the quality of the base speech to text model or your custom model.
++++++ Last updated : 11/29/2022++
+zone_pivot_groups: speech-studio-cli-rest
+show_latex: true
+no-loc: [$$, '\times', '\over']
++
+# Test accuracy of a Custom Speech model
+
+In this article, you learn how to quantitatively measure and improve the accuracy of the base speech to text model or your own custom models. [Audio + human-labeled transcript](how-to-custom-speech-test-and-train.md#audio--human-labeled-transcript-data-for-training-or-testing) data is required to test accuracy. You should provide from 30 minutes to 5 hours of representative audio.
++
+## Create a test
+
+You can test the accuracy of your custom model by creating a test. A test requires a collection of audio files and their corresponding transcriptions. You can compare a custom model's accuracy with a speech to text base model or another custom model. After you [get](#get-test-results) the test results, [evaluate](#evaluate-word-error-rate) the word error rate (WER) of the speech recognition results.
++
+Follow these steps to create a test:
+
+1. Sign in to the [Speech Studio](https://aka.ms/speechstudio/customspeech).
+1. Select **Custom Speech** > Your project name > **Test models**.
+1. Select **Create new test**.
+1. Select **Evaluate accuracy** > **Next**.
+1. Select one audio + human-labeled transcription dataset, and then select **Next**. If there aren't any datasets available, cancel the setup, and then go to the **Speech datasets** menu to [upload datasets](how-to-custom-speech-upload-data.md).
+
+ > [!NOTE]
+ > It's important to select an acoustic dataset that's different from the one you used with your model. This approach can provide a more realistic sense of the model's performance.
+
+1. Select up to two models to evaluate, and then select **Next**.
+1. Enter the test name and description, and then select **Next**.
+1. Review the test details, and then select **Save and close**.
+++
+To create a test, use the `spx csr evaluation create` command. Construct the request parameters according to the following instructions:
+
+- Set the `project` parameter to the ID of an existing project. This is recommended so that you can also view the test in Speech Studio. You can run the `spx csr project list` command to get available projects.
+- Set the required `model1` parameter to the ID of a model that you want to test.
+- Set the required `model2` parameter to the ID of another model that you want to test. If you don't want to compare two models, use the same model for both `model1` and `model2`.
+- Set the required `dataset` parameter to the ID of a dataset that you want to use for the test.
+- Set the `language` parameter; otherwise, the Speech CLI sets "en-US" by default. This should be the locale of the dataset contents. The locale can't be changed later. The Speech CLI `language` parameter corresponds to the `locale` property in the JSON request and response.
+- Set the required `name` parameter. This is the name that will be displayed in the Speech Studio. The Speech CLI `name` parameter corresponds to the `displayName` property in the JSON request and response.
+
+Here's an example Speech CLI command that creates a test:
+
+```azurecli-interactive
+spx csr evaluation create --api-version v3.1 --project 9f8c4cbb-f9a5-4ec1-8bb0-53cfa9221226 --dataset be378d9d-a9d7-4d4a-820a-e0432e8678c7 --model1 ff43e922-e3e6-4bf0-8473-55c08fd68048 --model2 1aae1070-7972-47e9-a977-87e3b05c457d --name "My Evaluation" --description "My Evaluation Description"
+```
+
+You should receive a response body in the following format:
+
+```json
+{
+ "self": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.1/evaluations/8bfe6b05-f093-4ab4-be7d-180374b751ca",
+ "model1": {
+ "self": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.1/models/ff43e922-e3e6-4bf0-8473-55c08fd68048"
+ },
+ "model2": {
+ "self": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.1/models/base/1aae1070-7972-47e9-a977-87e3b05c457d"
+ },
+ "dataset": {
+ "self": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.1/datasets/be378d9d-a9d7-4d4a-820a-e0432e8678c7"
+ },
+ "transcription2": {
+ "self": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.1/transcriptions/6eaf6a15-6076-466a-83d4-a30dba78ca63"
+ },
+ "transcription1": {
+ "self": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.1/transcriptions/0c5b1630-fadf-444d-827f-d6da9c0cf0c3"
+ },
+ "project": {
+ "self": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.1/projects/9f8c4cbb-f9a5-4ec1-8bb0-53cfa9221226"
+ },
+ "links": {
+ "files": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.1/evaluations/8bfe6b05-f093-4ab4-be7d-180374b751ca/files"
+ },
+ "properties": {
+ "wordErrorRate2": -1.0,
+ "wordErrorRate1": -1.0,
+ "sentenceErrorRate2": -1.0,
+ "sentenceCount2": -1,
+ "wordCount2": -1,
+ "correctWordCount2": -1,
+ "wordSubstitutionCount2": -1,
+ "wordDeletionCount2": -1,
+ "wordInsertionCount2": -1,
+ "sentenceErrorRate1": -1.0,
+ "sentenceCount1": -1,
+ "wordCount1": -1,
+ "correctWordCount1": -1,
+ "wordSubstitutionCount1": -1,
+ "wordDeletionCount1": -1,
+ "wordInsertionCount1": -1
+ },
+ "lastActionDateTime": "2022-05-20T16:42:43Z",
+ "status": "NotStarted",
+ "createdDateTime": "2022-05-20T16:42:43Z",
+ "locale": "en-US",
+ "displayName": "My Evaluation",
+ "description": "My Evaluation Description"
+}
+```
+
+The top-level `self` property in the response body is the evaluation's URI. Use this URI to get details about the project and test results. You also use this URI to update or delete the evaluation.
+
+For Speech CLI help with evaluations, run the following command:
+
+```azurecli-interactive
+spx help csr evaluation
+```
+++
+To create a test, use the [Evaluations_Create](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/operations/Evaluations_Create) operation of the [Speech to text REST API](rest-speech-to-text.md). Construct the request body according to the following instructions:
+
+- Set the `project` property to the URI of an existing project. This is recommended so that you can also view the test in Speech Studio. You can make a [Projects_List](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/operations/Projects_List) request to get available projects.
+- Set the `testingKind` property to `Evaluation` within `customProperties`. If you don't specify `Evaluation`, the test is treated as a quality inspection test. Whether the `testingKind` property is set to `Evaluation` or `Inspection`, or not set, you can access the accuracy scores via the API, but not in the Speech Studio.
+- Set the required `model1` property to the URI of a model that you want to test.
+- Set the required `model2` property to the URI of another model that you want to test. If you don't want to compare two models, use the same model for both `model1` and `model2`.
+- Set the required `dataset` property to the URI of a dataset that you want to use for the test.
+- Set the required `locale` property. This should be the locale of the dataset contents. The locale can't be changed later.
+- Set the required `displayName` property. This is the name that will be displayed in the Speech Studio.
+
+Make an HTTP POST request using the URI as shown in the following example. Replace `YourSubscriptionKey` with your Speech resource key, replace `YourServiceRegion` with your Speech resource region, and set the request body properties as previously described.
+
+```azurecli-interactive
+curl -v -X POST -H "Ocp-Apim-Subscription-Key: YourSubscriptionKey" -H "Content-Type: application/json" -d '{
+ "model1": {
+ "self": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.1/models/ff43e922-e3e6-4bf0-8473-55c08fd68048"
+ },
+ "model2": {
+ "self": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.1/models/base/1aae1070-7972-47e9-a977-87e3b05c457d"
+ },
+ "dataset": {
+ "self": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.1/datasets/be378d9d-a9d7-4d4a-820a-e0432e8678c7"
+ },
+ "project": {
+ "self": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.1/projects/9f8c4cbb-f9a5-4ec1-8bb0-53cfa9221226"
+ },
+ "displayName": "My Evaluation",
+ "description": "My Evaluation Description",
+ "customProperties": {
+ "testingKind": "Evaluation"
+ },
+ "locale": "en-US"
+}' "https://YourServiceRegion.api.cognitive.microsoft.com/speechtotext/v3.1/evaluations"
+```
+
+You should receive a response body in the following format:
+
+```json
+{
+ "self": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.1/evaluations/8bfe6b05-f093-4ab4-be7d-180374b751ca",
+ "model1": {
+ "self": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.1/models/ff43e922-e3e6-4bf0-8473-55c08fd68048"
+ },
+ "model2": {
+ "self": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.1/models/base/1aae1070-7972-47e9-a977-87e3b05c457d"
+ },
+ "dataset": {
+ "self": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.1/datasets/be378d9d-a9d7-4d4a-820a-e0432e8678c7"
+ },
+ "transcription2": {
+ "self": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.1/transcriptions/6eaf6a15-6076-466a-83d4-a30dba78ca63"
+ },
+ "transcription1": {
+ "self": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.1/transcriptions/0c5b1630-fadf-444d-827f-d6da9c0cf0c3"
+ },
+ "project": {
+ "self": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.1/projects/9f8c4cbb-f9a5-4ec1-8bb0-53cfa9221226"
+ },
+ "links": {
+ "files": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.1/evaluations/8bfe6b05-f093-4ab4-be7d-180374b751ca/files"
+ },
+ "properties": {
+ "wordErrorRate2": -1.0,
+ "wordErrorRate1": -1.0,
+ "sentenceErrorRate2": -1.0,
+ "sentenceCount2": -1,
+ "wordCount2": -1,
+ "correctWordCount2": -1,
+ "wordSubstitutionCount2": -1,
+ "wordDeletionCount2": -1,
+ "wordInsertionCount2": -1,
+ "sentenceErrorRate1": -1.0,
+ "sentenceCount1": -1,
+ "wordCount1": -1,
+ "correctWordCount1": -1,
+ "wordSubstitutionCount1": -1,
+ "wordDeletionCount1": -1,
+ "wordInsertionCount1": -1
+ },
+ "lastActionDateTime": "2022-05-20T16:42:43Z",
+ "status": "NotStarted",
+ "createdDateTime": "2022-05-20T16:42:43Z",
+ "locale": "en-US",
+ "displayName": "My Evaluation",
+ "description": "My Evaluation Description",
+ "customProperties": {
+ "testingKind": "Evaluation"
+ }
+}
+```
+
+The top-level `self` property in the response body is the evaluation's URI. Use this URI to [get](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/operations/Evaluations_Get) details about the evaluation's project and test results. You also use this URI to [update](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/operations/Evaluations_Update) or [delete](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/operations/Evaluations_Delete) the evaluation.
++
+## Get test results
+
+You should get the test results and [evaluate](#evaluate-word-error-rate) the word error rate (WER) of the speech recognition results.
++
+Follow these steps to get test results:
+
+1. Sign in to the [Speech Studio](https://aka.ms/speechstudio/customspeech).
+1. Select **Custom Speech** > Your project name > **Test models**.
+1. Select the link by test name.
+1. After the test is complete, as indicated by the status set to *Succeeded*, you should see results that include the WER number for each tested model.
+
+This page lists all the utterances in your dataset and the recognition results, alongside the transcription from the submitted dataset. You can toggle various error types, including insertion, deletion, and substitution. By listening to the audio and comparing recognition results in each column, you can decide which model meets your needs and determine where additional training and improvements are required.
+++
+To get test results, use the `spx csr evaluation status` command. Construct the request parameters according to the following instructions:
+
+- Set the required `evaluation` parameter to the ID of the evaluation for which you want to get test results.
+
+Here's an example Speech CLI command that gets test results:
+
+```azurecli-interactive
+spx csr evaluation status --api-version v3.1 --evaluation 8bfe6b05-f093-4ab4-be7d-180374b751ca
+```
+
+The word error rates and more details are returned in the response body.
+
+You should receive a response body in the following format:
+
+```json
+{
+ "self": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.1/evaluations/8bfe6b05-f093-4ab4-be7d-180374b751ca",
+ "model1": {
+ "self": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.1/models/ff43e922-e3e6-4bf0-8473-55c08fd68048"
+ },
+ "model2": {
+ "self": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.1/models/base/1aae1070-7972-47e9-a977-87e3b05c457d"
+ },
+ "dataset": {
+ "self": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.1/datasets/be378d9d-a9d7-4d4a-820a-e0432e8678c7"
+ },
+ "transcription2": {
+ "self": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.1/transcriptions/6eaf6a15-6076-466a-83d4-a30dba78ca63"
+ },
+ "transcription1": {
+ "self": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.1/transcriptions/0c5b1630-fadf-444d-827f-d6da9c0cf0c3"
+ },
+ "project": {
+ "self": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.1/projects/9f8c4cbb-f9a5-4ec1-8bb0-53cfa9221226"
+ },
+ "links": {
+ "files": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.1/evaluations/8bfe6b05-f093-4ab4-be7d-180374b751ca/files"
+ },
+ "properties": {
+ "wordErrorRate2": 4.62,
+ "wordErrorRate1": 4.6,
+ "sentenceErrorRate2": 66.7,
+ "sentenceCount2": 3,
+ "wordCount2": 173,
+ "correctWordCount2": 166,
+ "wordSubstitutionCount2": 7,
+ "wordDeletionCount2": 0,
+ "wordInsertionCount2": 1,
+ "sentenceErrorRate1": 66.7,
+ "sentenceCount1": 3,
+ "wordCount1": 174,
+ "correctWordCount1": 166,
+ "wordSubstitutionCount1": 7,
+ "wordDeletionCount1": 1,
+ "wordInsertionCount1": 0
+ },
+ "lastActionDateTime": "2022-05-20T16:42:56Z",
+ "status": "Succeeded",
+ "createdDateTime": "2022-05-20T16:42:43Z",
+ "locale": "en-US",
+ "displayName": "My Evaluation",
+ "description": "My Evaluation Description",
+ "customProperties": {
+ "testingKind": "Evaluation"
+ }
+}
+```
+
+For Speech CLI help with evaluations, run the following command:
+
+```azurecli-interactive
+spx help csr evaluation
+```
+++
+To get test results, start by using the [Evaluations_Get](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/operations/Evaluations_Get) operation of the [Speech to text REST API](rest-speech-to-text.md).
+
+Make an HTTP GET request using the URI as shown in the following example. Replace `YourEvaluationId` with your evaluation ID, replace `YourSubscriptionKey` with your Speech resource key, and replace `YourServiceRegion` with your Speech resource region.
+
+```azurecli-interactive
+curl -v -X GET "https://YourServiceRegion.api.cognitive.microsoft.com/speechtotext/v3.1/evaluations/YourEvaluationId" -H "Ocp-Apim-Subscription-Key: YourSubscriptionKey"
+```
+
+The word error rates and more details are returned in the response body.
+
+You should receive a response body in the following format:
+
+```json
+{
+ "self": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.1/evaluations/8bfe6b05-f093-4ab4-be7d-180374b751ca",
+ "model1": {
+ "self": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.1/models/ff43e922-e3e6-4bf0-8473-55c08fd68048"
+ },
+ "model2": {
+ "self": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.1/models/base/1aae1070-7972-47e9-a977-87e3b05c457d"
+ },
+ "dataset": {
+ "self": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.1/datasets/be378d9d-a9d7-4d4a-820a-e0432e8678c7"
+ },
+ "transcription2": {
+ "self": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.1/transcriptions/6eaf6a15-6076-466a-83d4-a30dba78ca63"
+ },
+ "transcription1": {
+ "self": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.1/transcriptions/0c5b1630-fadf-444d-827f-d6da9c0cf0c3"
+ },
+ "project": {
+ "self": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.1/projects/9f8c4cbb-f9a5-4ec1-8bb0-53cfa9221226"
+ },
+ "links": {
+ "files": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.1/evaluations/8bfe6b05-f093-4ab4-be7d-180374b751ca/files"
+ },
+ "properties": {
+ "wordErrorRate2": 4.62,
+ "wordErrorRate1": 4.6,
+ "sentenceErrorRate2": 66.7,
+ "sentenceCount2": 3,
+ "wordCount2": 173,
+ "correctWordCount2": 166,
+ "wordSubstitutionCount2": 7,
+ "wordDeletionCount2": 0,
+ "wordInsertionCount2": 1,
+ "sentenceErrorRate1": 66.7,
+ "sentenceCount1": 3,
+ "wordCount1": 174,
+ "correctWordCount1": 166,
+ "wordSubstitutionCount1": 7,
+ "wordDeletionCount1": 1,
+ "wordInsertionCount1": 0
+ },
+ "lastActionDateTime": "2022-05-20T16:42:56Z",
+ "status": "Succeeded",
+ "createdDateTime": "2022-05-20T16:42:43Z",
+ "locale": "en-US",
+ "displayName": "My Evaluation",
+ "description": "My Evaluation Description",
+ "customProperties": {
+ "testingKind": "Evaluation"
+ }
+}
+```
+++
+## Evaluate word error rate
+
+The industry standard for measuring model accuracy is [word error rate (WER)](https://en.wikipedia.org/wiki/Word_error_rate). WER counts the number of incorrect words identified during recognition, and divides the sum by the total number of words provided in the human-labeled transcript (N).
+
+Incorrectly identified words fall into three categories:
+
+* Insertion (I): Words that are incorrectly added in the hypothesis transcript
+* Deletion (D): Words that are undetected in the hypothesis transcript
+* Substitution (S): Words that were substituted between reference and hypothesis
+
+In the Speech Studio, the quotient is multiplied by 100 and shown as a percentage. The Speech CLI and REST API results aren't multiplied by 100.
+
+$$
+WER = {{I+D+S}\over N} \times 100
+$$
+
+Here's an example that shows incorrectly identified words, when compared to the human-labeled transcript:
+
+![Screenshot showing an example of incorrectly identified words.](./media/custom-speech/custom-speech-dis-words.png)
+
+The speech recognition result erred as follows:
+* Insertion (I): Added the word "a"
+* Deletion (D): Deleted the word "are"
+* Substitution (S): Substituted the word "Jones" for "John"
+
+The word error rate from the previous example is 60%.
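+
+Applying the formula to this example, with one insertion, one deletion, one substitution, and a five-word human-labeled transcript (N = 5):
+
+$$
+WER = {{1+1+1}\over 5} \times 100 = 60
+$$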
+
+If you want to replicate WER measurements locally, you can use the sclite tool from the [NIST Scoring Toolkit (SCTK)](https://github.com/usnistgov/SCTK).
+
+## Resolve errors and improve WER
+
+You can use the WER calculation from the machine recognition results to evaluate the quality of the model you're using with your app, tool, or product. A WER of 5-10% is considered good quality, and the model is ready to use. A WER of 20% is acceptable, but you might want to consider additional training. A WER of 30% or more signals poor quality and requires customization and training.
+
+How the errors are distributed is important. When many deletion errors are encountered, it's usually because of weak audio signal strength. To resolve this issue, you need to collect audio data closer to the source. Insertion errors mean that the audio was recorded in a noisy environment and crosstalk might be present, causing recognition issues. Substitution errors are often encountered when an insufficient sample of domain-specific terms has been provided as either human-labeled transcriptions or related text.
+
+By analyzing individual files, you can determine what type of errors exist, and which errors are unique to a specific file. Understanding issues at the file level will help you target improvements.
+
+## Example scenario outcomes
+
+Speech recognition scenarios vary by audio quality and language (vocabulary and speaking style). The following table examines four common scenarios:
+
+| Scenario | Audio quality | Vocabulary | Speaking style |
+|-|-|-|-|
+| Call center | Low, 8&nbsp;kHz, could be two people on one audio channel, could be compressed | Narrow, unique to domain and products | Conversational, loosely structured |
+| Voice assistant, such as Cortana, or a drive-through window | High, 16&nbsp;kHz | Entity-heavy (song titles, products, locations) | Clearly stated words and phrases |
+| Dictation (instant message, notes, search) | High, 16&nbsp;kHz | Varied | Note-taking |
+| Video closed captioning | Varied, including varied microphone use, added music | Varied, from meetings, recited speech, musical lyrics | Read, prepared, or loosely structured |
+
+Different scenarios produce different quality outcomes. The following table examines how content from these four scenarios rates in the [WER](how-to-custom-speech-evaluate-data.md). The table shows which error types are most common in each scenario. The insertion, substitution, and deletion error rates help you determine what kind of data to add to improve the model.
+
+| Scenario | Speech recognition quality | Insertion errors | Deletion errors | Substitution errors |
+| - | - | - | - | - |
+| Call center | Medium<br>(<&nbsp;30%&nbsp;WER) | Low, except when other people talk in the background | Can be high. Call centers can be noisy, and overlapping speakers can confuse the model | Medium. Products and people's names can cause these errors |
+| Voice assistant | High<br>(can be <&nbsp;10%&nbsp;WER) | Low | Low | Medium, due to song titles, product names, or locations |
+| Dictation | High<br>(can be <&nbsp;10%&nbsp;WER) | Low | Low | High |
+| Video closed captioning | Depends on video type (can be <&nbsp;50%&nbsp;WER) | Low | Can be high because of music, noises, microphone quality | Jargon might cause these errors |
++
+## Next steps
+
+* [Train a model](how-to-custom-speech-train-model.md)
+* [Deploy a model](how-to-custom-speech-deploy-model.md)
+* [Use the online transcription editor](how-to-custom-speech-transcription-editor.md)
ai-services How To Custom Speech Human Labeled Transcriptions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/how-to-custom-speech-human-labeled-transcriptions.md
+
+ Title: Human-labeled transcriptions guidelines - Speech service
+
+description: You use human-labeled transcriptions with your audio data to improve speech recognition accuracy. This is especially helpful when words are deleted or incorrectly replaced.
++++++ Last updated : 05/08/2022+++
+# How to create human-labeled transcriptions
+
+Human-labeled transcriptions are word-by-word transcriptions of an audio file. You use human-labeled transcriptions to improve recognition accuracy, especially when words are deleted or incorrectly replaced. This guide can help you create high-quality transcriptions.
+
+A large sample of transcription data is required to improve recognition. We suggest providing between 1 and 20 hours of audio data. The Speech service will use up to 20 hours of audio for training. This guide is broken up by locale, with sections for US English, German, Japanese, and Mandarin Chinese.
+
+The transcriptions for all WAV files are contained in a single plain-text file (.txt or .tsv). Each line of the transcription file contains the name of one of the audio files, followed by the corresponding transcription. The file name and transcription are separated by a tab (`\t`).
+
+For example:
+
+```txt
+speech01.wav speech recognition is awesome
+speech02.wav the quick brown fox jumped all over the place
+speech03.wav the lazy dog was not amused
+```
+
+The transcriptions are text-normalized so the system can process them. However, you must do some important normalizations before you upload the dataset.
+
+Human-labeled transcriptions for languages other than English and Mandarin Chinese must be UTF-8 encoded with a byte-order marker. For transcription requirements in other locales, see the following sections.
+
+## en-US
+
+Human-labeled transcriptions for English audio must be provided as plain text, only using ASCII characters. Avoid the use of Latin-1 or Unicode punctuation characters. These characters are often inadvertently added when copying text from a word-processing application or scraping data from web pages. If these characters are present, make sure to update them with the appropriate ASCII substitution.
+
+Here are a few examples:
+
+| Characters to avoid | Substitution | Notes |
+| - | - | -- |
+| “Hello world” | "Hello world" | The opening and closing quotation marks have been substituted with appropriate ASCII characters. |
+| John’s day | John's day | The apostrophe has been substituted with the appropriate ASCII character. |
+| It was good—no, it was great! | it was good--no, it was great! | The em dash was substituted with two hyphens. |
+
+### Text normalization for US English
+
+Text normalization is the transformation of words into a consistent format used when training a model. Some normalization rules are applied to text automatically. However, we recommend using these guidelines as you prepare your human-labeled transcription data:
+
+- Write out abbreviations in words.
+- Write out non-standard numeric strings in words (such as accounting terms).
+- Non-alphabetic characters or mixed alphanumeric characters should be transcribed as pronounced.
+- Abbreviations that are pronounced as words shouldn't be edited (such as "radar", "laser", "RAM", or "NATO").
+- Write out abbreviations that are pronounced as separate letters with each letter separated by a space.
+- If you use audio, transcribe numbers as words that match the audio (for example, "101" could be pronounced as "one oh one" or "one hundred and one").
+- Avoid repeating characters, words, or groups of words more than three times, such as "yeah yeah yeah yeah". Lines with such repetitions might be dropped by the Speech service.
+
+Here are a few examples of normalization that you should perform on the transcription:
+
+| Original text | Text after normalization (human) |
+| - | - |
+| Dr. Bruce Banner | Doctor Bruce Banner |
+| James Bond, 007 | James Bond, double oh seven |
+| Ke$ha | Kesha |
+| How long is the 2x4 | How long is the two by four |
+| The meeting goes from 1-3pm | The meeting goes from one to three pm |
+| My blood type is O+ | My blood type is O positive |
+| Water is H20 | Water is H 2 O |
+| Play OU812 by Van Halen | Play O U 8 1 2 by Van Halen |
+| UTF-8 with BOM | U T F 8 with BOM |
+| It costs \$3.14 | It costs three fourteen |
+
+The following normalization rules are automatically applied to transcriptions:
+
+- Use lowercase letters.
+- Remove all punctuation except apostrophes within words.
+- Expand numbers into words/spoken form, such as dollar amounts.
+
+Here are a few examples of normalization automatically performed on the transcription:
+
+| Original text | Text after normalization (automatic) |
+| -- | -- |
+| "Holy cow!" said Batman. | holy cow said batman |
+| "What?" said Batman's sidekick, Robin. | what said batman's sidekick robin |
+| Go get -em! | go get em |
+| I'm double-jointed | I'm double jointed |
+| 104 Elm Street | one oh four Elm street |
+| Tune to 102.7 | tune to one oh two point seven |
+| Pi is about 3.14 | pi is about three point one four |
+
+## de-DE
+
+Human-labeled transcriptions for German audio must be UTF-8 encoded with a byte-order marker.
+
+### Text normalization for German
+
+Text normalization is the transformation of words into a consistent format used when training a model. Some normalization rules are applied to text automatically. However, we recommend using these guidelines as you prepare your human-labeled transcription data:
+
+- Write decimal points as "," and not ".".
+- Write time separators as ":" and not "." (for example: 12:00 Uhr).
+- Abbreviations such as "ca." aren't replaced. We recommend that you use the full spoken form.
+- The four main mathematical operators (+, -, \*, and /) are removed. We recommend replacing them with the written form: "plus," "minus," "mal," and "geteilt."
+- Comparison operators are removed (=, <, and >). We recommend replacing them with "gleich," "kleiner als," and "grösser als."
+- Write fractions, such as 3/4, in written form (for example: "drei viertel" instead of 3/4).
+- Replace the "€" symbol with its written form "Euro."
+
+Here are a few examples of normalization that you should perform on the transcription:
+
+| Original text | Text after user normalization | Text after system normalization |
+| - | -- | - |
+| Es ist 12.23 Uhr | Es ist 12:23 Uhr | es ist zwölf uhr drei und zwanzig uhr |
+| {12.45} | {12,45} | zwölf komma vier fünf |
+| 2 + 3 - 4 | 2 plus 3 minus 4 | zwei plus drei minus vier |
+
+The following normalization rules are automatically applied to transcriptions:
+
+- Use lowercase letters for all text.
+- Remove all punctuation, including various types of quotation marks ("test", 'test', "test„, and «test» are OK).
+- Discard rows with any special characters from this set: ¢ ¤ ¥ ¦ § © ª ¬ ® ° ± ² µ × ÿ ج¬.
+- Expand numbers to spoken form, including dollar or Euro amounts.
+- Accept umlauts only for a, o, and u. Others will be replaced by "th" or be discarded.
+
+Here are a few examples of normalization automatically performed on the transcription:
+
+| Original text | Text after normalization |
+| - | - |
+| Frankfurter Ring | frankfurter ring |
+| ¡Eine Frage! | eine frage |
+| Wir, haben | wir haben |
+
+## ja-JP
+
+In Japanese (ja-JP), there's a maximum length of 90 characters for each sentence. Lines with longer sentences will be discarded. To add longer text, insert a period in between.
+
+## zh-CN
+
+Human-labeled transcriptions for Mandarin Chinese audio must be UTF-8 encoded with a byte-order marker. Avoid the use of half-width punctuation characters. These characters can be included inadvertently when you prepare the data in a word-processing program or scrape data from web pages. If these characters are present, make sure to update them with the appropriate full-width substitution.
+
+Here are a few examples:
+
+| Characters to avoid | Substitution | Notes |
+| - | -- | -- |
+| "你好" | "你好" | The opening and closing quotations marks have been substituted with appropriate characters. |
+| 需要什么帮助? | 需要什么帮助?| The question mark has been substituted with appropriate character. |
+
+### Text normalization for Mandarin Chinese
+
+Text normalization is the transformation of words into a consistent format used when training a model. Some normalization rules are applied to text automatically. However, we recommend using these guidelines as you prepare your human-labeled transcription data:
+
+- Write out abbreviations in words.
+- Write out numeric strings in spoken form.
+
+Here are a few examples of normalization that you should perform on the transcription:
+
+| Original text | Text after normalization |
+| - | - |
+| 我今年 21 | 我今年二十一 |
+| 3 号楼 504 | 三号 楼 五 零 四 |
+
+The following normalization rules are automatically applied to transcriptions:
+
+- Remove all punctuation
+- Expand numbers to spoken form
+- Convert full-width letters to half-width letters
+- Use uppercase letters for all English words
+
+Here are some examples of automatic transcription normalization:
+
+| Original text | Text after normalization |
+| - | - |
+| 3.1415 | 三 点 一 四 一 五 |
+| ¥ 3.5 | 三 元 五 角 |
+| w f y z | W F Y Z |
+| 1992 年 8 月 8 日 | 一 九 九 二 年 八 月 八 日 |
+| 你吃饭了吗? | 你 吃饭 了 吗 |
+| 下午 5:00 的航班 | 下午 五点 的 航班 |
+| 我今年 21 岁 | 我 今年 二十 一 岁 |
+
+## Next Steps
+
+- [Test model quantitatively](how-to-custom-speech-evaluate-data.md)
+- [Test recognition quality](how-to-custom-speech-inspect-data.md)
+- [Train your model](how-to-custom-speech-train-model.md)
ai-services How To Custom Speech Inspect Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/how-to-custom-speech-inspect-data.md
+
+ Title: Test recognition quality of a Custom Speech model - Speech service
+
+description: Custom Speech lets you qualitatively inspect the recognition quality of a model. You can play back uploaded audio and determine if the provided recognition result is correct.
++++++ Last updated : 11/29/2022+
+zone_pivot_groups: speech-studio-cli-rest
++
+# Test recognition quality of a Custom Speech model
+
+You can inspect the recognition quality of a Custom Speech model in the [Speech Studio](https://aka.ms/speechstudio/customspeech). You can play back uploaded audio and determine if the provided recognition result is correct. After a test has been successfully created, you can see how a model transcribed the audio dataset, or compare results from two models side by side.
+
+Side-by-side model testing is useful to validate which speech recognition model is best for an application. For an objective measure of accuracy, which requires audio + human-labeled transcript data as input, see [Test model quantitatively](how-to-custom-speech-evaluate-data.md).
++
+## Create a test
++
+Follow these instructions to create a test:
+
+1. Sign in to the [Speech Studio](https://aka.ms/speechstudio/customspeech).
+1. Navigate to **Speech Studio** > **Custom Speech** and select your project name from the list.
+1. Select **Test models** > **Create new test**.
+1. Select **Inspect quality (Audio-only data)** > **Next**.
+1. Choose an audio dataset that you'd like to use for testing, and then select **Next**. If there aren't any datasets available, cancel the setup, and then go to the **Speech datasets** menu to [upload datasets](how-to-custom-speech-upload-data.md).
+
+ :::image type="content" source="media/custom-speech/custom-speech-choose-test-data.png" alt-text="Screenshot of choosing a dataset dialog":::
+
+1. Choose one or two models to evaluate and compare accuracy.
+1. Enter the test name and description, and then select **Next**.
+1. Review your settings, and then select **Save and close**.
+++
+To create a test, use the `spx csr evaluation create` command. Construct the request parameters according to the following instructions:
+
+- Set the `project` parameter to the ID of an existing project. This is recommended so that you can also view the test in Speech Studio. You can run the `spx csr project list` command to get available projects.
+- Set the required `model1` parameter to the ID of a model that you want to test.
+- Set the required `model2` parameter to the ID of another model that you want to test. If you don't want to compare two models, use the same model for both `model1` and `model2`.
+- Set the required `dataset` parameter to the ID of a dataset that you want to use for the test.
+- Set the `language` parameter; otherwise, the Speech CLI sets "en-US" by default. This should be the locale of the dataset contents. The locale can't be changed later. The Speech CLI `language` parameter corresponds to the `locale` property in the JSON request and response.
+- Set the required `name` parameter. This is the name that will be displayed in the Speech Studio. The Speech CLI `name` parameter corresponds to the `displayName` property in the JSON request and response.
+
+Here's an example Speech CLI command that creates a test:
+
+```azurecli-interactive
+spx csr evaluation create --api-version v3.1 --project 9f8c4cbb-f9a5-4ec1-8bb0-53cfa9221226 --dataset be378d9d-a9d7-4d4a-820a-e0432e8678c7 --model1 ff43e922-e3e6-4bf0-8473-55c08fd68048 --model2 1aae1070-7972-47e9-a977-87e3b05c457d --name "My Inspection" --description "My Inspection Description"
+```
+
+You should receive a response body in the following format:
+
+```json
+{
+ "self": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.1/evaluations/8bfe6b05-f093-4ab4-be7d-180374b751ca",
+ "model1": {
+ "self": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.1/models/ff43e922-e3e6-4bf0-8473-55c08fd68048"
+ },
+ "model2": {
+ "self": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.1/models/base/1aae1070-7972-47e9-a977-87e3b05c457d"
+ },
+ "dataset": {
+ "self": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.1/datasets/be378d9d-a9d7-4d4a-820a-e0432e8678c7"
+ },
+ "transcription2": {
+ "self": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.1/transcriptions/6eaf6a15-6076-466a-83d4-a30dba78ca63"
+ },
+ "transcription1": {
+ "self": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.1/transcriptions/0c5b1630-fadf-444d-827f-d6da9c0cf0c3"
+ },
+ "project": {
+ "self": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.1/projects/9f8c4cbb-f9a5-4ec1-8bb0-53cfa9221226"
+ },
+ "links": {
+ "files": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.1/evaluations/8bfe6b05-f093-4ab4-be7d-180374b751ca/files"
+ },
+ "properties": {
+ "wordErrorRate2": -1.0,
+ "wordErrorRate1": -1.0,
+ "sentenceErrorRate2": -1.0,
+ "sentenceCount2": -1,
+ "wordCount2": -1,
+ "correctWordCount2": -1,
+ "wordSubstitutionCount2": -1,
+ "wordDeletionCount2": -1,
+ "wordInsertionCount2": -1,
+ "sentenceErrorRate1": -1.0,
+ "sentenceCount1": -1,
+ "wordCount1": -1,
+ "correctWordCount1": -1,
+ "wordSubstitutionCount1": -1,
+ "wordDeletionCount1": -1,
+ "wordInsertionCount1": -1
+ },
+ "lastActionDateTime": "2022-05-20T16:42:43Z",
+ "status": "NotStarted",
+ "createdDateTime": "2022-05-20T16:42:43Z",
+ "locale": "en-US",
+ "displayName": "My Inspection",
+ "description": "My Inspection Description"
+}
+```
+
+The top-level `self` property in the response body is the evaluation's URI. Use this URI to get details about the project and test results. You also use this URI to update or delete the evaluation.
+
+For Speech CLI help with evaluations, run the following command:
+
+```azurecli-interactive
+spx help csr evaluation
+```
+++
+To create a test, use the [Evaluations_Create](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/operations/Evaluations_Create) operation of the [Speech to text REST API](rest-speech-to-text.md). Construct the request body according to the following instructions:
+
+- Set the `project` property to the URI of an existing project. This is recommended so that you can also view the test in Speech Studio. You can make a [Projects_List](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/operations/Projects_List) request to get available projects.
+- Set the required `model1` property to the URI of a model that you want to test.
+- Set the required `model2` property to the URI of another model that you want to test. If you don't want to compare two models, use the same model for both `model1` and `model2`.
+- Set the required `dataset` property to the URI of a dataset that you want to use for the test.
+- Set the required `locale` property. This should be the locale of the dataset contents. The locale can't be changed later.
+- Set the required `displayName` property. This is the name that will be displayed in the Speech Studio.
+
+Make an HTTP POST request using the URI as shown in the following example. Replace `YourSubscriptionKey` with your Speech resource key, replace `YourServiceRegion` with your Speech resource region, and set the request body properties as previously described.
+
+```azurecli-interactive
+curl -v -X POST -H "Ocp-Apim-Subscription-Key: YourSubscriptionKey" -H "Content-Type: application/json" -d '{
+ "model1": {
+ "self": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.1/models/ff43e922-e3e6-4bf0-8473-55c08fd68048"
+ },
+ "model2": {
+ "self": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.1/models/base/1aae1070-7972-47e9-a977-87e3b05c457d"
+ },
+ "dataset": {
+ "self": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.1/datasets/be378d9d-a9d7-4d4a-820a-e0432e8678c7"
+ },
+ "project": {
+ "self": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.1/projects/9f8c4cbb-f9a5-4ec1-8bb0-53cfa9221226"
+ },
+ "displayName": "My Inspection",
+ "description": "My Inspection Description",
+ "locale": "en-US"
+}' "https://YourServiceRegion.api.cognitive.microsoft.com/speechtotext/v3.1/evaluations"
+```
+
+You should receive a response body in the following format:
+
+```json
+{
+ "self": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.1/evaluations/8bfe6b05-f093-4ab4-be7d-180374b751ca",
+ "model1": {
+ "self": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.1/models/ff43e922-e3e6-4bf0-8473-55c08fd68048"
+ },
+ "model2": {
+ "self": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.1/models/base/1aae1070-7972-47e9-a977-87e3b05c457d"
+ },
+ "dataset": {
+ "self": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.1/datasets/be378d9d-a9d7-4d4a-820a-e0432e8678c7"
+ },
+ "transcription2": {
+ "self": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.1/transcriptions/6eaf6a15-6076-466a-83d4-a30dba78ca63"
+ },
+ "transcription1": {
+ "self": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.1/transcriptions/0c5b1630-fadf-444d-827f-d6da9c0cf0c3"
+ },
+ "project": {
+ "self": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.1/projects/9f8c4cbb-f9a5-4ec1-8bb0-53cfa9221226"
+ },
+ "links": {
+ "files": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.1/evaluations/8bfe6b05-f093-4ab4-be7d-180374b751ca/files"
+ },
+ "properties": {
+ "wordErrorRate2": -1.0,
+ "wordErrorRate1": -1.0,
+ "sentenceErrorRate2": -1.0,
+ "sentenceCount2": -1,
+ "wordCount2": -1,
+ "correctWordCount2": -1,
+ "wordSubstitutionCount2": -1,
+ "wordDeletionCount2": -1,
+ "wordInsertionCount2": -1,
+ "sentenceErrorRate1": -1.0,
+ "sentenceCount1": -1,
+ "wordCount1": -1,
+ "correctWordCount1": -1,
+ "wordSubstitutionCount1": -1,
+ "wordDeletionCount1": -1,
+ "wordInsertionCount1": -1
+ },
+ "lastActionDateTime": "2022-05-20T16:42:43Z",
+ "status": "NotStarted",
+ "createdDateTime": "2022-05-20T16:42:43Z",
+ "locale": "en-US",
+ "displayName": "My Inspection",
+ "description": "My Inspection Description"
+}
+```
+
+The top-level `self` property in the response body is the evaluation's URI. Use this URI to [get](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/operations/Evaluations_Get) details about the evaluation's project and test results. You also use this URI to [update](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/operations/Evaluations_Update) or [delete](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/operations/Evaluations_Delete) the evaluation.
+++
+## Get test results
+
+You should get the test results and [inspect](#compare-transcription-with-audio) each model's transcription results against the audio dataset.
++
+Follow these steps to get test results:
+
+1. Sign in to the [Speech Studio](https://aka.ms/speechstudio/customspeech).
+1. Select **Custom Speech** > Your project name > **Test models**.
+1. Select the link by test name.
+1. After the test is complete, as indicated by the status set to *Succeeded*, you should see results that include the WER number for each tested model.
+
+This page lists all the utterances in your dataset and the recognition results, alongside the transcription from the submitted dataset. You can toggle various error types, including insertion, deletion, and substitution. By listening to the audio and comparing recognition results in each column, you can decide which model meets your needs and determine where additional training and improvements are required.
+++
+To get test results, use the `spx csr evaluation status` command. Construct the request parameters according to the following instructions:
+
+- Set the required `evaluation` parameter to the ID of the evaluation for which you want to get test results.
+
+Here's an example Speech CLI command that gets test results:
+
+```azurecli-interactive
+spx csr evaluation status --api-version v3.1 --evaluation 8bfe6b05-f093-4ab4-be7d-180374b751ca
+```
+
+The models, audio dataset, transcriptions, and more details are returned in the response body.
+
+You should receive a response body in the following format:
+
+```json
+{
+ "self": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.1/evaluations/8bfe6b05-f093-4ab4-be7d-180374b751ca",
+ "model1": {
+ "self": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.1/models/ff43e922-e3e6-4bf0-8473-55c08fd68048"
+ },
+ "model2": {
+ "self": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.1/models/base/1aae1070-7972-47e9-a977-87e3b05c457d"
+ },
+ "dataset": {
+ "self": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.1/datasets/be378d9d-a9d7-4d4a-820a-e0432e8678c7"
+ },
+ "transcription2": {
+ "self": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.1/transcriptions/6eaf6a15-6076-466a-83d4-a30dba78ca63"
+ },
+ "transcription1": {
+ "self": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.1/transcriptions/0c5b1630-fadf-444d-827f-d6da9c0cf0c3"
+ },
+ "project": {
+ "self": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.1/projects/9f8c4cbb-f9a5-4ec1-8bb0-53cfa9221226"
+ },
+ "links": {
+ "files": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.1/evaluations/8bfe6b05-f093-4ab4-be7d-180374b751ca/files"
+ },
+ "properties": {
+ "wordErrorRate2": 4.62,
+ "wordErrorRate1": 4.6,
+ "sentenceErrorRate2": 66.7,
+ "sentenceCount2": 3,
+ "wordCount2": 173,
+ "correctWordCount2": 166,
+ "wordSubstitutionCount2": 7,
+ "wordDeletionCount2": 0,
+ "wordInsertionCount2": 1,
+ "sentenceErrorRate1": 66.7,
+ "sentenceCount1": 3,
+ "wordCount1": 174,
+ "correctWordCount1": 166,
+ "wordSubstitutionCount1": 7,
+ "wordDeletionCount1": 1,
+ "wordInsertionCount1": 0
+ },
+ "lastActionDateTime": "2022-05-20T16:42:56Z",
+ "status": "Succeeded",
+ "createdDateTime": "2022-05-20T16:42:43Z",
+ "locale": "en-US",
+ "displayName": "My Inspection",
+ "description": "My Inspection Description"
+}
+```
+
+For Speech CLI help with evaluations, run the following command:
+
+```azurecli-interactive
+spx help csr evaluation
+```
+++
+To get test results, start by using the [Evaluations_Get](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/operations/Evaluations_Get) operation of the [Speech to text REST API](rest-speech-to-text.md).
+
+Make an HTTP GET request using the URI as shown in the following example. Replace `YourEvaluationId` with your evaluation ID, replace `YourSubscriptionKey` with your Speech resource key, and replace `YourServiceRegion` with your Speech resource region.
+
+```azurecli-interactive
+curl -v -X GET "https://YourServiceRegion.api.cognitive.microsoft.com/speechtotext/v3.1/evaluations/YourEvaluationId" -H "Ocp-Apim-Subscription-Key: YourSubscriptionKey"
+```
+
+The models, audio dataset, transcriptions, and more details are returned in the response body.
+
+You should receive a response body in the following format:
+
+```json
+{
+ "self": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.1/evaluations/8bfe6b05-f093-4ab4-be7d-180374b751ca",
+ "model1": {
+ "self": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.1/models/ff43e922-e3e6-4bf0-8473-55c08fd68048"
+ },
+ "model2": {
+ "self": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.1/models/base/1aae1070-7972-47e9-a977-87e3b05c457d"
+ },
+ "dataset": {
+ "self": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.1/datasets/be378d9d-a9d7-4d4a-820a-e0432e8678c7"
+ },
+ "transcription2": {
+ "self": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.1/transcriptions/6eaf6a15-6076-466a-83d4-a30dba78ca63"
+ },
+ "transcription1": {
+ "self": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.1/transcriptions/0c5b1630-fadf-444d-827f-d6da9c0cf0c3"
+ },
+ "project": {
+ "self": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.1/projects/9f8c4cbb-f9a5-4ec1-8bb0-53cfa9221226"
+ },
+ "links": {
+ "files": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.1/evaluations/8bfe6b05-f093-4ab4-be7d-180374b751ca/files"
+ },
+ "properties": {
+ "wordErrorRate2": 4.62,
+ "wordErrorRate1": 4.6,
+ "sentenceErrorRate2": 66.7,
+ "sentenceCount2": 3,
+ "wordCount2": 173,
+ "correctWordCount2": 166,
+ "wordSubstitutionCount2": 7,
+ "wordDeletionCount2": 0,
+ "wordInsertionCount2": 1,
+ "sentenceErrorRate1": 66.7,
+ "sentenceCount1": 3,
+ "wordCount1": 174,
+ "correctWordCount1": 166,
+ "wordSubstitutionCount1": 7,
+ "wordDeletionCount1": 1,
+ "wordInsertionCount1": 0
+ },
+ "lastActionDateTime": "2022-05-20T16:42:56Z",
+ "status": "Succeeded",
+ "createdDateTime": "2022-05-20T16:42:43Z",
+ "locale": "en-US",
+ "displayName": "My Inspection",
+ "description": "My Inspection Description"
+}
+```
++
+## Compare transcription with audio
+
+You can inspect the transcription output of each tested model against the audio input dataset. If you included two models in the test, you can compare their transcription quality side by side.
++
+To review the quality of transcriptions:
+
+1. Sign in to the [Speech Studio](https://aka.ms/speechstudio/customspeech).
+1. Select **Custom Speech** > Your project name > **Test models**.
+1. Select the link by test name.
+1. Play an audio file while reading the corresponding transcription by a model.
+
+If the test dataset included multiple audio files, you'll see multiple rows in the table. If you included two models in the test, transcriptions are shown in side-by-side columns. Transcription differences between models are shown in blue text.
++++
+The audio test dataset, transcriptions, and models tested are returned in the [test results](#get-test-results). If only one model was tested, the `model1` value will match `model2`, and the `transcription1` value will match `transcription2`.
+
+To review the quality of transcriptions:
+1. Download the audio test dataset, unless you already have a copy.
+1. Download the output transcriptions.
+1. Play an audio file while reading the corresponding transcription by a model.
+
+If you're comparing quality between two models, pay particular attention to differences between each model's transcriptions.
++++
+The audio test dataset, transcriptions, and models tested are returned in the [test results](#get-test-results). If only one model was tested, the `model1` value will match `model2`, and the `transcription1` value will match `transcription2`.
+
+To review the quality of transcriptions:
+1. Download the audio test dataset, unless you already have a copy.
+1. Download the output transcriptions (a sample request for listing the evaluation's files appears after these steps).
+1. Play an audio file while reading the corresponding transcription by a model.
+
+If you're comparing quality between two models, pay particular attention to differences between each model's transcriptions.
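+
+To locate the transcription files for download, you can make a GET request to the `links.files` URI that's returned in the test results. This is a minimal sketch; replace `YourEvaluationId`, `YourSubscriptionKey`, and `YourServiceRegion` with your own values. Each file entry in the response typically includes a link that you can use to download its contents.
+
+```azurecli-interactive
+# List the files (including the output transcriptions) that belong to the evaluation.
+curl -v -X GET "https://YourServiceRegion.api.cognitive.microsoft.com/speechtotext/v3.1/evaluations/YourEvaluationId/files" -H "Ocp-Apim-Subscription-Key: YourSubscriptionKey"
+```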
++
+## Next steps
+
+- [Test model quantitatively](how-to-custom-speech-evaluate-data.md)
+- [Train your model](how-to-custom-speech-train-model.md)
+- [Deploy your model](how-to-custom-speech-deploy-model.md)
+
ai-services How To Custom Speech Model And Endpoint Lifecycle https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/how-to-custom-speech-model-and-endpoint-lifecycle.md
+
+ Title: Model lifecycle of Custom Speech - Speech service
+
+description: Custom Speech provides base models for training and lets you create custom models from your data. This article describes the timelines for models and for endpoints that use these models.
++++++ Last updated : 11/29/2022+
+zone_pivot_groups: speech-studio-cli-rest
++
+# Custom Speech model lifecycle
+
+You can use a Custom Speech model for some time after it's deployed to your custom endpoint. But when new base models are made available, the older models expire. You must periodically recreate and train your custom model from the latest base model to take advantage of the improved accuracy and quality.
+
+Here are some key terms related to the model lifecycle:
+
+* **Training**: Taking a base model and customizing it to your domain/scenario by using text data and/or audio data. In some contexts such as the REST API properties, training is also referred to as **adaptation**.
+* **Transcription**: Using a model and performing speech recognition (decoding audio into text).
+* **Endpoint**: A specific deployment of either a base model or a custom model that only you can access.
+
+> [!NOTE]
+> Endpoints used by `F0` Speech resources are deleted after seven days.
+
+## Expiration timeline
+
+Here are timelines for model adaptation and transcription expiration:
+
+- Training is available for one year after the quarter when the base model was created by Microsoft.
+- Transcription with a base model is available for two years after the quarter when the base model was created by Microsoft.
+- Transcription with a custom model is available for two years after the quarter when you created the custom model.
+
+In this context, quarters end on January 15th, April 15th, July 15th, and October 15th. For example, if Microsoft creates a base model in March, its quarter ends on April 15th: training with that base model is available until April 15th of the following year, and transcription with it is available until April 15th of the year after that.
+
+## What to do when a model expires
+
+When a custom model or base model expires, it is no longer available for transcription. You can change the model that is used by your custom speech endpoint without downtime.
+
+|Transcription route |Expired model result |Recommendation |
+||||
+|Custom endpoint|Speech recognition requests will fall back to the most recent base model for the same [locale](language-support.md?tabs=stt). You will get results, but recognition might not accurately transcribe your domain data. |Update the endpoint's model as described in the [Deploy a Custom Speech model](how-to-custom-speech-deploy-model.md) guide. |
+|Batch transcription |[Batch transcription](batch-transcription.md) requests for expired models fail with a 4xx error. |In each [Transcriptions_Create](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/operations/Transcriptions_Create) REST API request body, set the `model` property to a base model or custom model that hasn't yet expired. Otherwise, omit the `model` property so that the latest base model is always used. See the example request after this table. |
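+
+As a rough sketch of that recommendation, the following [Transcriptions_Create](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/operations/Transcriptions_Create) request pins a specific, unexpired model. The content URL, model ID, key, and region values are placeholders that you replace with your own.
+
+```azurecli-interactive
+curl -v -X POST -H "Ocp-Apim-Subscription-Key: YourSubscriptionKey" -H "Content-Type: application/json" -d '{
+  "contentUrls": [
+    "https://contoso.com/audio/sample1.wav"
+  ],
+  "locale": "en-US",
+  "displayName": "My Transcription",
+  "model": {
+    "self": "https://YourServiceRegion.api.cognitive.microsoft.com/speechtotext/v3.1/models/YourModelId"
+  }
+}' "https://YourServiceRegion.api.cognitive.microsoft.com/speechtotext/v3.1/transcriptions"
+```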
++
+## Get base model expiration dates
++
+The last date that you could use the base model for training was shown when you created the custom model. For more information, see [Train a Custom Speech model](how-to-custom-speech-train-model.md).
+
+Follow these instructions to get the transcription expiration date for a base model:
+
+1. Sign in to the [Speech Studio](https://aka.ms/speechstudio/customspeech).
+1. Select **Custom Speech** > Your project name > **Deploy models**.
+1. The expiration date for the model is shown in the **Expiration** column. This is the last date that you can use the model for transcription.
+
+ :::image type="content" source="media/custom-speech/custom-speech-model-expiration.png" alt-text="Screenshot of the deploy models page that shows the transcription expiration date.":::
++++
+To get the training and transcription expiration dates for a base model, use the `spx csr model status` command. Construct the request parameters according to the following instructions:
+
+- Set the `model` parameter to the URI of the base model that you want to get. You can run the `spx csr list --base` command to get available base models for all locales.
+
+Here's an example Speech CLI command to get the training and transcription expiration dates for a base model:
+
+```azurecli-interactive
+spx csr model status --api-version v3.1 --model https://eastus.api.cognitive.microsoft.com/speechtotext/v3.1/models/base/b0bbc1e0-78d5-468b-9b7c-a5a43b2bb83f
+```
+
+In the response, take note of the date in the `adaptationDateTime` property. This is the last date that you can use the base model for training. Also take note of the date in the `transcriptionDateTime` property. This is the last date that you can use the base model for transcription.
+
+You should receive a response body in the following format:
+
+```json
+{
+ "self": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.1/models/base/1aae1070-7972-47e9-a977-87e3b05c457d",
+ "datasets": [],
+ "links": {
+ "manifest": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.1/models/base/1aae1070-7972-47e9-a977-87e3b05c457d/manifest"
+ },
+ "properties": {
+ "deprecationDates": {
+ "adaptationDateTime": "2023-01-15T00:00:00Z",
+ "transcriptionDateTime": "2024-01-15T00:00:00Z"
+ }
+ },
+ "lastActionDateTime": "2022-05-06T10:52:02Z",
+ "status": "Succeeded",
+ "createdDateTime": "2021-10-13T00:00:00Z",
+ "locale": "en-US",
+ "displayName": "20210831 + Audio file adaptation",
+ "description": "en-US base model"
+}
+```
+
+For Speech CLI help with models, run the following command:
+
+```azurecli-interactive
+spx help csr model
+```
+++
+To get the training and transcription expiration dates for a base model, use the [Models_GetBaseModel](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/operations/Models_GetBaseModel) operation of the [Speech to text REST API](rest-speech-to-text.md). You can make a [Models_ListBaseModels](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/operations/Models_ListBaseModels) request to get available base models for all locales.
+
+Make an HTTP GET request using the model URI as shown in the following example. Replace `BaseModelId` with your model ID, replace `YourSubscriptionKey` with your Speech resource key, and replace `YourServiceRegion` with your Speech resource region.
+
+```azurecli-interactive
+curl -v -X GET "https://YourServiceRegion.api.cognitive.microsoft.com/speechtotext/v3.1/models/base/BaseModelId" -H "Ocp-Apim-Subscription-Key: YourSubscriptionKey"
+```
+
+In the response, take note of the date in the `adaptationDateTime` property. This is the last date that you can use the base model for training. Also take note of the date in the `transcriptionDateTime` property. This is the last date that you can use the base model for transcription.
+
+You should receive a response body in the following format:
+
+```json
+{
+ "self": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.1/models/base/1aae1070-7972-47e9-a977-87e3b05c457d",
+ "datasets": [],
+ "links": {
+ "manifest": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.1/models/base/1aae1070-7972-47e9-a977-87e3b05c457d/manifest"
+ },
+ "properties": {
+ "deprecationDates": {
+ "adaptationDateTime": "2023-01-15T00:00:00Z",
+ "transcriptionDateTime": "2024-01-15T00:00:00Z"
+ }
+ },
+ "lastActionDateTime": "2022-05-06T10:52:02Z",
+ "status": "Succeeded",
+ "createdDateTime": "2021-10-13T00:00:00Z",
+ "locale": "en-US",
+ "displayName": "20210831 + Audio file adaptation",
+ "description": "en-US base model"
+}
+```
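+
+If you don't already know a base model ID, you can list the base models that are available to your Speech resource by using the [Models_ListBaseModels](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/operations/Models_ListBaseModels) operation. Here's a sketch of such a request, with the same key and region placeholders as above:
+
+```azurecli-interactive
+curl -v -X GET "https://YourServiceRegion.api.cognitive.microsoft.com/speechtotext/v3.1/models/base" -H "Ocp-Apim-Subscription-Key: YourSubscriptionKey"
+```
+
+Each base model entry in the paged response should include the same `deprecationDates` properties shown in the preceding example.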
+++
+## Get custom model expiration dates
++
+Follow these instructions to get the transcription expiration date for a custom model:
+
+1. Sign in to the [Speech Studio](https://aka.ms/speechstudio/customspeech).
+1. Select **Custom Speech** > Your project name > **Train custom models**.
+1. The expiration date for the custom model is shown in the **Expiration** column. This is the last date that you can use the custom model for transcription. Base models are not shown on the **Train custom models** page.
+
+ :::image type="content" source="media/custom-speech/custom-speech-custom-model-expiration.png" alt-text="Screenshot of the train custom models page that shows the transcription expiration date.":::
+
+You can also follow these instructions to get the transcription expiration date for a custom model:
+
+1. Sign in to the [Speech Studio](https://aka.ms/speechstudio/customspeech).
+1. Select **Custom Speech** > Your project name > **Deploy models**.
+1. The expiration date for the model is shown in the **Expiration** column. This is the last date that you can use the model for transcription.
+
+ :::image type="content" source="media/custom-speech/custom-speech-model-expiration.png" alt-text="Screenshot of the deploy models page that shows the transcription expiration date.":::
++++
+To get the transcription expiration date for your custom model, use the `spx csr model status` command. Construct the request parameters according to the following instructions:
+
+- Set the `model` parameter to the URI of the model that you want to get. Replace `YourModelId` with your model ID and replace `YourServiceRegion` with your Speech resource region.
+
+Here's an example Speech CLI command to get the transcription expiration date for your custom model:
+
+```azurecli-interactive
+spx csr model status --api-version v3.1 --model https://YourServiceRegion.api.cognitive.microsoft.com/speechtotext/v3.1/models/YourModelId
+```
+
+In the response, take note of the date in the `transcriptionDateTime` property. This is the last date that you can use your custom model for transcription. The `adaptationDateTime` property is not applicable, since custom models are not used to train other custom models.
+
+You should receive a response body in the following format:
+
+```json
+{
+ "self": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.1/models/86c4ebd7-d70d-4f67-9ccc-84609504ffc7",
+ "baseModel": {
+ "self": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.1/models/base/1aae1070-7972-47e9-a977-87e3b05c457d"
+ },
+ "datasets": [
+ {
+ "self": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.1/datasets/69e46263-ab10-4ab4-abbe-62e370104d95"
+ }
+ ],
+ "links": {
+ "manifest": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.1/models/86c4ebd7-d70d-4f67-9ccc-84609504ffc7/manifest",
+ "copyTo": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.1/models/86c4ebd7-d70d-4f67-9ccc-84609504ffc7:copyto"
+ },
+ "project": {
+ "self": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.1/projects/5d25e60a-7f4a-4816-afd9-783bb8daccfc"
+ },
+ "properties": {
+ "deprecationDates": {
+ "adaptationDateTime": "2023-01-15T00:00:00Z",
+ "transcriptionDateTime": "2024-07-15T00:00:00Z"
+ }
+ },
+ "lastActionDateTime": "2022-05-21T13:21:01Z",
+ "status": "Succeeded",
+ "createdDateTime": "2022-05-22T16:37:01Z",
+ "locale": "en-US",
+ "displayName": "My Model",
+ "description": "My Model Description"
+}
+```
+
+For Speech CLI help with models, run the following command:
+
+```azurecli-interactive
+spx help csr model
+```
+++
+To get the transcription expiration date for your custom model, use the [Models_GetCustomModel](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/operations/Models_GetCustomModel) operation of the [Speech to text REST API](rest-speech-to-text.md).
+
+Make an HTTP GET request using the model URI as shown in the following example. Replace `YourModelId` with your model ID, replace `YourSubscriptionKey` with your Speech resource key, and replace `YourServiceRegion` with your Speech resource region.
+
+```azurecli-interactive
+curl -v -X GET "https://YourServiceRegion.api.cognitive.microsoft.com/speechtotext/v3.1/models/YourModelId" -H "Ocp-Apim-Subscription-Key: YourSubscriptionKey"
+```
+
+In the response, take note of the date in the `transcriptionDateTime` property. This is the last date that you can use your custom model for transcription. The `adaptationDateTime` property is not applicable, since custom models are not used to train other custom models.
+
+You should receive a response body in the following format:
+
+```json
+{
+ "self": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.1/models/86c4ebd7-d70d-4f67-9ccc-84609504ffc7",
+ "baseModel": {
+ "self": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.1/models/base/1aae1070-7972-47e9-a977-87e3b05c457d"
+ },
+ "datasets": [
+ {
+ "self": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.1/datasets/69e46263-ab10-4ab4-abbe-62e370104d95"
+ }
+ ],
+ "links": {
+ "manifest": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.1/models/86c4ebd7-d70d-4f67-9ccc-84609504ffc7/manifest",
+ "copyTo": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.1/models/86c4ebd7-d70d-4f67-9ccc-84609504ffc7:copyto"
+ },
+ "project": {
+ "self": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.1/projects/5d25e60a-7f4a-4816-afd9-783bb8daccfc"
+ },
+ "properties": {
+ "deprecationDates": {
+ "adaptationDateTime": "2023-01-15T00:00:00Z",
+ "transcriptionDateTime": "2024-07-15T00:00:00Z"
+ }
+ },
+ "lastActionDateTime": "2022-05-21T13:21:01Z",
+ "status": "Succeeded",
+ "createdDateTime": "2022-05-22T16:37:01Z",
+ "locale": "en-US",
+ "displayName": "My Model",
+ "description": "My Model Description"
+}
+```
++
+## Next steps
+
+- [Train a model](how-to-custom-speech-train-model.md)
+- [CI/CD for Custom Speech](how-to-custom-speech-continuous-integration-continuous-deployment.md)
ai-services How To Custom Speech Test And Train https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/how-to-custom-speech-test-and-train.md
+
+ Title: "Training and testing datasets - Speech service"
+
+description: Learn about types of training and testing data for a Custom Speech project, along with how to use and manage that data.
++++++ Last updated : 10/24/2022++++
+# Training and testing datasets
+
+In a Custom Speech project, you can upload datasets for training, qualitative inspection, and quantitative measurement. This article covers the types of training and testing data that you can use for Custom Speech.
+
+Text and audio that you use to test and train a custom model should include samples from a diverse set of speakers and scenarios that you want your model to recognize. Consider these factors when you're gathering data for custom model testing and training:
+
+* Include text and audio data to cover the kinds of verbal statements that your users will make when they're interacting with your model. For example, a model that raises and lowers the temperature needs training on statements that people might make to request such changes.
+* Include all speech variances that you want your model to recognize. Many factors can vary speech, including accents, dialects, language-mixing, age, gender, voice pitch, stress level, and time of day.
+* Include samples from the different environments where your model will be used, for example, indoors, outdoors, and with road noise.
+* Record audio with hardware devices that the production system will use. If your model must identify speech recorded on devices of varying quality, the audio data that you provide to train your model must also represent these diverse scenarios.
+* Keep the dataset diverse and representative of your project requirements. You can add more data to your model later.
+* Only include data that your model needs to transcribe. Including data that isn't within your custom model's recognition requirements can harm recognition quality overall.
+
+## Data types
+
+The following table lists accepted data types, when each data type should be used, and the recommended quantity. Not every data type is required to create a model. Data requirements will vary depending on whether you're creating a test or training a model.
+
+| Data type | Used for testing | Recommended for testing | Used for training | Recommended for training |
+|--|--|-|-|-|
+| [Audio only](#audio-data-for-training-or-testing) | Yes (visual inspection) | 5+ audio files | Yes (Preview for `en-US`) | 1-20 hours of audio |
+| [Audio + human-labeled transcripts](#audio--human-labeled-transcript-data-for-training-or-testing) | Yes (evaluation of accuracy) | 0.5-5 hours of audio | Yes | 1-20 hours of audio |
+| [Plain text](#plain-text-data-for-training) | No | Not applicable | Yes | 1-200 MB of related text |
+| [Structured text](#structured-text-data-for-training) | No | Not applicable | Yes | Up to 10 classes with up to 4,000 items and up to 50,000 training sentences |
+| [Pronunciation](#pronunciation-data-for-training) | No | Not applicable | Yes | 1 KB to 1 MB of pronunciation text |
+
+Training with plain text or structured text usually finishes within a few minutes.
+
+> [!TIP]
+> Start with plain-text data or structured-text data. This data will improve the recognition of special terms and phrases. Training with text is much faster than training with audio (minutes versus days).
+>
+> Start with small sets of sample data that match the language, acoustics, and hardware where your model will be used. Small datasets of representative data can expose problems before you invest in gathering larger datasets for training. For sample Custom Speech data, see <a href="https://github.com/Azure-Samples/cognitive-services-speech-sdk/tree/master/sampledata/customspeech" target="_target">this GitHub repository</a>.
+
+If you will train a custom model with audio data, choose a Speech resource region with dedicated hardware for training audio data. See footnotes in the [regions](regions.md#speech-service) table for more information. In regions with dedicated hardware for Custom Speech training, the Speech service will use up to 20 hours of your audio training data, and can process about 10 hours of data per day. In other regions, the Speech service uses up to 8 hours of your audio data, and can process about 1 hour of data per day. After the model is trained, you can copy the model to another region as needed with the [Models_CopyTo](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/operations/Models_CopyTo) REST API.
+
+## Consider datasets by scenario
+
+A model that's trained on a subset of scenarios can perform well in only those scenarios. Carefully choose data that represents the full scope of scenarios that you need your custom model to recognize. The following table shows datasets to consider for some speech recognition scenarios:
+
+| Scenario | Plain text data and structured text data | Audio + human-labeled transcripts | New words with pronunciation |
+| | | | |
+| Call center | Marketing documents, website, product reviews related to call center activity | Call center calls transcribed by humans | Terms that have ambiguous pronunciations (see the *Xbox* example in the [Pronunciation data for training](#pronunciation-data-for-training) section) |
+| Voice assistant | Lists of sentences that use various combinations of commands and entities | Recorded voices speaking commands into device, transcribed into text | Names (movies, songs, products) that have unique pronunciations |
+| Dictation | Written input, such as instant messages or emails | Similar to preceding examples | Similar to preceding examples |
+| Video closed captioning | TV show scripts, movies, marketing content, video summaries | Exact transcripts of videos | Similar to preceding examples |
+
+To help determine which dataset to use to address your problems, refer to the following table:
+
+| Use case | Data type |
+| -- | |
+| Improve recognition accuracy on industry-specific vocabulary and grammar, such as medical terminology or IT jargon. | Plain text or structured text data |
+| Define the phonetic and displayed form of a word or term that has nonstandard pronunciation, such as product names or acronyms. | Pronunciation data or phonetic pronunciation in structured text |
+| Improve recognition accuracy on speaking styles, accents, or specific background noises. | Audio + human-labeled transcripts |
+
+## Audio + human-labeled transcript data for training or testing
+
+You can use audio + human-labeled transcript data for both [training](how-to-custom-speech-train-model.md) and [testing](how-to-custom-speech-evaluate-data.md) purposes. You must provide human-labeled transcriptions (word by word) for comparison:
+
+- To improve the acoustic aspects like slight accents, speaking styles, and background noises.
+- To measure the accuracy of Microsoft's speech to text offering when it's processing your audio files.
+
+For a list of base models that support training with audio data, see [Language support](language-support.md?tabs=stt). Even if a base model supports training with audio data, the service might use only part of the audio, but it will still use all of the transcripts.
+
+> [!IMPORTANT]
+> If a base model doesn't support customization with audio data, only the transcription text will be used for training. If you switch to a base model that supports customization with audio data, the training time may increase from several hours to several days. The change in training time would be most noticeable when you switch to a base model in a [region](regions.md#speech-service) without dedicated hardware for training. If the audio data is not required, you should remove it to decrease the training time.
+
+Audio with human-labeled transcripts offers the greatest accuracy improvements if the audio comes from the target use case. Samples must cover the full scope of speech. For example, a call center for a retail store gets the most calls about swimwear and sunglasses during summer months, so samples from summer alone wouldn't represent everything the model needs to recognize. Ensure that your samples include the full scope of speech that you want to detect.
+
+Consider these details:
+
+* Training with audio will bring the most benefits if the audio is also hard to understand for humans. In most cases, you should start training by using only related text.
+* If you use one of the most heavily used languages, such as US English, it's unlikely that you would need to train with audio data. For such languages, the base models already offer very good recognition results in most scenarios, so it's probably enough to train with related text.
+* Custom Speech can capture word context only to reduce substitution errors, not insertion or deletion errors.
+* Avoid samples that include transcription errors, but do include a diversity of audio quality.
+* Avoid sentences that are unrelated to your problem domain. Unrelated sentences can harm your model.
+* When the transcript quality varies, you can duplicate exceptionally good sentences, such as excellent transcriptions that include key phrases, to increase their weight.
+* The Speech service automatically uses the transcripts to improve the recognition of domain-specific words and phrases, as though they were added as related text.
+* It can take several days for a training operation to finish. To improve the speed of training, be sure to create your Speech service subscription in a region that has dedicated hardware for training.
+
+A large training dataset is required to improve recognition. Generally, we recommend that you provide word-by-word transcriptions for 1 to 20 hours of audio. However, even as little as 30 minutes can help improve recognition results. Although creating human-labeled transcription can take time, improvements in recognition will only be as good as the data that you provide. You should upload only high-quality transcripts.
+
+Audio files can have silence at the beginning and end of the recording. If possible, include at least a half-second of silence before and after speech in each sample file. Although audio with low recording volume or disruptive background noise is not helpful, it shouldn't limit or degrade your custom model. Always consider upgrading your microphones and signal processing hardware before gathering audio samples.
+
+> [!IMPORTANT]
+> For more information about the best practices of preparing human-labeled transcripts, see [Human-labeled transcripts with audio](how-to-custom-speech-human-labeled-transcriptions.md).
+
+Custom Speech projects require audio files with these properties:
+
+| Property | Value |
+|--|-|
+| File format | RIFF (WAV) |
+| Sample rate | 8,000 Hz or 16,000 Hz |
+| Channels | 1 (mono) |
+| Maximum length per audio | 2 hours (testing) / 60 s (training)<br/><br/>Training with audio has a maximum audio length of 60 seconds per file. For audio files longer than 60 seconds, only the corresponding transcription files will be used for training. If all audio files are longer than 60 seconds, the training will fail.|
+| Sample format | PCM, 16-bit |
+| Archive format | .zip |
+| Maximum zip size | 2 GB or 10,000 files |
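+
+The archive itself is a regular .zip file. As a rough sketch, assuming the `zip` utility is available and that your audio files sit in an `audio` folder next to a transcription file named `trans.txt` (both names are placeholders for your own layout):
+
+```bash
+# Package the audio files and the transcription file into one archive.
+# Keep each archive under 2 GB; split larger datasets into several archives.
+zip -r my-dataset.zip audio/ trans.txt
+```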
+
+## Plain-text data for training
+
+You can add plain text sentences of related text to improve the recognition of domain-specific words and phrases. Related text sentences can reduce substitution errors related to misrecognition of common words and domain-specific words by showing them in context. Domain-specific words can be uncommon or made-up words, but their pronunciation must be straightforward to be recognized.
+
+Provide domain-related sentences in a single text file. Use text data that's close to the expected spoken utterances. Utterances don't need to be complete or grammatically correct, but they must accurately reflect the spoken input that you expect the model to recognize. When possible, try to have one sentence or keyword on a separate line. To increase the weight of a term, such as a product name, add several sentences that include the term. But don't copy too much, because that could affect the overall recognition rate.
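+
+As an illustration only, a plain-text training file for a hypothetical coffee-ordering scenario might contain lines like the following, with one utterance per line (the product names are made up):
+
+```txt
+I'd like to order a flat white with oat milk
+does the seasonal menu include the maple pecan latte
+can I get the barista blend as a pour over
+add an extra shot to my cortado please
+```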
+
+> [!NOTE]
+> Avoid related text sentences that include noise such as unrecognizable characters or words.
+
+Use this table to ensure that your plain text dataset file is formatted correctly:
+
+| Property | Value |
+|-|-|
+| Text encoding | UTF-8 BOM |
+| Number of utterances per line | 1 |
+| Maximum file size | 200 MB |
+
+You must also adhere to the following restrictions:
+
+* Avoid repeating characters, words, or groups of words more than three times, as in "aaaa," "yeah yeah yeah yeah," or "that's it that's it that's it that's it." The Speech service might drop lines with too many repetitions.
+* Don't use special characters or UTF-8 characters above `U+00A1`.
+* URIs will be rejected.
+* For some languages such as Japanese or Korean, importing large amounts of text data can take a long time or can time out. Consider dividing the dataset into multiple text files with up to 20,000 lines in each.
+
+## Structured-text data for training
+
+> [!NOTE]
+> Structured-text data for training is in public preview.
+
+Use structured text data when your utterances follow a particular pattern, for example, utterances that differ only by the words or phrases from a list. To simplify the creation of training data and to enable better modeling inside the Custom Language model, you can use structured text in Markdown format to define lists of items and the phonetic pronunciation of words. You can then reference these lists inside your training utterances.
+
+Expected utterances often follow a certain pattern. One common pattern is that utterances differ only by words or phrases from a list. Examples of this pattern could be:
+
+* "I have a question about `product`," where `product` is a list of possible products.
+* "Make that `object` `color`," where `object` is a list of geometric shapes and `color` is a list of colors.
+
+For a list of supported base models and locales for training with structured text, see [Language support](language-support.md?tabs=stt). You must use the latest base model for these locales. For locales that don't support training with structured text, the service will take any training sentences that don't reference any classes as part of training with plain-text data.
+
+The structured-text file should have an .md extension. The maximum file size is 200 MB, and the text encoding must be UTF-8 BOM. The syntax of the Markdown is the same as that from the Language Understanding models, in particular list entities and example utterances. For more information about the complete Markdown syntax, see the <a href="/azure/bot-service/file-format/bot-builder-lu-file-format" target="_blank"> Language Understanding Markdown</a>.
+
+Here are key details about the supported Markdown format:
+
+| Property | Description | Limits |
+|-|-|--|
+|`@list`|A list of items that can be referenced in an example sentence.|Maximum of 20 lists. Maximum of 35,000 items per list.|
+|`speech:phoneticlexicon`|A list of phonetic pronunciations according to the [Universal Phone Set](customize-pronunciation.md). Pronunciation is adjusted for each instance where the word appears in a list or training sentence. For example, if you have a word that sounds like "cat" and you want to adjust the pronunciation to "k ae t", you would add `- cat/k ae t` to the `speech:phoneticlexicon` list.|Maximum of 15,000 entries. Maximum of 2 pronunciations per word.|
+|`#ExampleSentences`|A pound symbol (`#`) delimits a section of example sentences. The section heading can only contain letters, digits, and underscores. Example sentences should reflect the range of speech that your model should expect. A training sentence can refer to items under a `@list` by using surrounding left and right curly braces (`{@list name}`). You can refer to multiple lists in the same training sentence, or none at all.|Maximum file size of 200 MB.|
+|`//`|Comments follow a double slash (`//`).|Not applicable|
+
+Here's an example structured text file:
+
+```markdown
+// This is a comment because it follows a double slash (`//`).
+
+// Here are three separate lists of items that can be referenced in an example sentence. See the preceding table for the maximum number of lists.
+@ list food =
+- pizza
+- burger
+- ice cream
+- soda
+
+@ list pet =
+- cat
+- dog
+- fish
+
+@ list sports =
+- soccer
+- tennis
+- cricket
+- basketball
+- baseball
+- football
+
+// List of phonetic pronunciations
+@ speech:phoneticlexicon
+- cat/k ae t
+- fish/f ih sh
+
+// Here are two sections of training sentences.
+#TrainingSentences_Section1
+- you can include sentences without a class reference
+- what {@pet} do you have
+- I like eating {@food} and playing {@sports}
+- my {@pet} likes {@food}
+
+#TrainingSentences_Section2
+- you can include more sentences without a class reference
+- or more sentences that have a class reference like {@pet}
+```
+
+## Pronunciation data for training
+
+Specialized or made-up words might have unique pronunciations. These words can be recognized if they can be broken down into smaller words to pronounce them. For example, to recognize "Xbox", specify the pronunciation "X box". This approach won't increase overall accuracy, but it can improve recognition of this and other keywords.
+
+You can provide a custom pronunciation file to improve recognition. Don't use custom pronunciation files to alter the pronunciation of common words. For a list of languages that support custom pronunciation, see [language support](language-support.md?tabs=stt).
+
+> [!NOTE]
+> You can use a pronunciation file alongside any other training dataset except structured text training data. To use pronunciation data with structured text, it must be within a structured text file.
+
+The spoken form is the phonetic sequence spelled out. It can be composed of letters, words, syllables, or a combination of all three. This table includes some examples:
+
+| Recognized displayed form | Spoken form |
+|--|--|
+| 3CPO | three c p o |
+| CNTK | c n t k |
+| IEEE | i triple e |
+
+You provide pronunciations in a single text file. Include the spoken utterance and a custom pronunciation for each. Each row in the file should begin with the recognized form, then a tab character, and then the space-delimited phonetic sequence.
+
+```tsv
+3CPO three c p o
+CNTK c n t k
+IEEE i triple e
+```
+
+Refer to the following table to ensure that your pronunciation dataset files are valid and correctly formatted.
+
+| Property | Value |
+|-|-|
+| Text encoding | UTF-8 BOM (ANSI is also supported for English) |
+| Number of pronunciations per line | 1 |
+| Maximum file size | 1 MB (1 KB for free tier) |
+
+## Audio data for training or testing
+
+Audio data is optimal for testing the accuracy of Microsoft's baseline speech to text model or a custom model. Keep in mind that audio-only data is used to inspect transcription quality with regard to a specific model's performance. If you want to quantify the accuracy of a model, use [audio + human-labeled transcripts](#audio--human-labeled-transcript-data-for-training-or-testing).
+
+> [!NOTE]
+> Audio-only data for training is available in preview for the `en-US` locale. For other locales, to train with audio data you must also provide [human-labeled transcripts](#audio--human-labeled-transcript-data-for-training-or-testing).
+
+Custom Speech projects require audio files with these properties:
+
+| Property | Value |
+|--|--|
+| File format | RIFF (WAV) |
+| Sample rate | 8,000 Hz or 16,000 Hz |
+| Channels | 1 (mono) |
+| Maximum length per audio | 2 hours |
+| Sample format | PCM, 16-bit |
+| Archive format | .zip |
+| Maximum archive size | 2 GB or 10,000 files |
+
+> [!NOTE]
+> When you're uploading training and testing data, the .zip file size can't exceed 2 GB. If you require more data for training, divide it into several .zip files and upload them separately. Later, you can choose to train from *multiple* datasets. However, you can test from only a *single* dataset.
+
+Use <a href="http://sox.sourceforge.net" target="_blank" rel="noopener">SoX</a> to verify audio properties or convert existing audio to the appropriate formats. Here are some example SoX commands:
+
+| Activity | SoX command |
+||-|
+| Check the audio file format. | `sox --i <filename>` |
+| Convert the audio file to single channel (mono), 16-bit, 16 kHz. | `sox <input> -b 16 -e signed-integer -c 1 -r 16k -t wav <output>.wav` |
+
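+To convert many files at once, a small shell loop can apply the same SoX conversion to each one. This is a sketch that assumes SoX is installed and writes converted copies to an `output` folder (both are assumptions, not service requirements):
+
+```bash
+# Convert every .wav file in the current folder to mono, 16-bit, 16 kHz PCM
+# and write the converted copies to an output folder.
+mkdir -p output
+for f in *.wav; do
+  sox "$f" -b 16 -e signed-integer -c 1 -r 16k -t wav "output/$f"
+done
+```
+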
+## Next steps
+
+- [Upload your data](how-to-custom-speech-upload-data.md)
+- [Test model quantitatively](how-to-custom-speech-evaluate-data.md)
+- [Train a custom model](how-to-custom-speech-train-model.md)
ai-services How To Custom Speech Train Model https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/how-to-custom-speech-train-model.md
+
+ Title: Train a Custom Speech model - Speech service
+
+description: Learn how to train Custom Speech models. Training a speech to text model can improve recognition accuracy for the Microsoft base model or a custom model.
++++++ Last updated : 11/29/2022++
+zone_pivot_groups: speech-studio-cli-rest
++
+# Train a Custom Speech model
+
+In this article, you'll learn how to train a custom model to improve recognition accuracy from the Microsoft base model. The speech recognition accuracy and quality of a Custom Speech model will remain consistent, even when a new base model is released.
+
+> [!NOTE]
+> You pay for Custom Speech model usage and [endpoint hosting](how-to-custom-speech-deploy-model.md), but you are not charged for training a model.
+
+Training a model is typically an iterative process. You will first select a base model that is the starting point for a new model. You train a model with [datasets](./how-to-custom-speech-test-and-train.md) that can include text and audio, and then you test. If the recognition quality or accuracy doesn't meet your requirements, you can create a new model with additional or modified training data, and then test again.
+
+You can use a custom model for a limited time after it's trained. You must periodically recreate and adapt your custom model from the latest base model to take advantage of the improved accuracy and quality. For more information, see [Model and endpoint lifecycle](./how-to-custom-speech-model-and-endpoint-lifecycle.md).
+
+> [!IMPORTANT]
+> If you will train a custom model with audio data, choose a Speech resource region with dedicated hardware for training audio data. After a model is trained, you can [copy it to a Speech resource](#copy-a-model) in another region as needed.
+>
+> In regions with dedicated hardware for Custom Speech training, the Speech service will use up to 20 hours of your audio training data, and can process about 10 hours of data per day. In other regions, the Speech service uses up to 8 hours of your audio data, and can process about 1 hour of data per day. See footnotes in the [regions](regions.md#speech-service) table for more information.
+
+## Create a model
++
+After you've uploaded [training datasets](./how-to-custom-speech-test-and-train.md), follow these instructions to start training your model:
+
+1. Sign in to the [Speech Studio](https://aka.ms/speechstudio/customspeech).
+1. Select **Custom Speech** > Your project name > **Train custom models**.
+1. Select **Train a new model**.
+1. On the **Select a baseline model** page, select a base model, and then select **Next**. If you aren't sure, select the most recent model from the top of the list. The name of the base model corresponds to the date when it was released in YYYYMMDD format. The customization capabilities of the base model are listed in parentheses after the model name in Speech Studio.
+
+ > [!IMPORTANT]
+ > Take note of the **Expiration for adaptation** date. This is the last date that you can use the base model for training. For more information, see [Model and endpoint lifecycle](./how-to-custom-speech-model-and-endpoint-lifecycle.md).
+
+1. On the **Choose data** page, select one or more datasets that you want to use for training. If there aren't any datasets available, cancel the setup, and then go to the **Speech datasets** menu to [upload datasets](how-to-custom-speech-upload-data.md).
+1. Enter a name and description for your custom model, and then select **Next**.
+1. Optionally, check the **Add test in the next step** box. If you skip this step, you can run the same tests later. For more information, see [Test recognition quality](how-to-custom-speech-inspect-data.md) and [Test model quantitatively](how-to-custom-speech-evaluate-data.md).
+1. Select **Save and close** to kick off the build for your custom model.
+1. Return to the **Train custom models** page.
+
+ > [!IMPORTANT]
+ > Take note of the **Expiration** date. This is the last date that you can use your custom model for speech recognition. For more information, see [Model and endpoint lifecycle](./how-to-custom-speech-model-and-endpoint-lifecycle.md).
+++
+To create a model with datasets for training, use the `spx csr model create` command. Construct the request parameters according to the following instructions:
+
+- Set the `project` parameter to the ID of an existing project. This is recommended so that you can also view and manage the model in Speech Studio. You can run the `spx csr project list` command to get available projects.
+- Set the required `dataset` parameter to the ID of a dataset that you want to use for training. To specify multiple datasets, set the `datasets` (plural) parameter and separate the IDs with a semicolon.
+- Set the required `language` parameter. The dataset locale must match the locale of the project. The locale can't be changed later. The Speech CLI `language` parameter corresponds to the `locale` property in the JSON request and response.
+- Set the required `name` parameter. This is the name that will be displayed in the Speech Studio. The Speech CLI `name` parameter corresponds to the `displayName` property in the JSON request and response.
+- Optionally, you can set the `base` property. For example: `--base 1aae1070-7972-47e9-a977-87e3b05c457d`. If you don't specify the `base`, the default base model for the locale is used. The Speech CLI `base` parameter corresponds to the `baseModel` property in the JSON request and response.
+
+Here's an example Speech CLI command that creates a model with datasets for training:
+
+```azurecli-interactive
+spx csr model create --api-version v3.1 --project YourProjectId --name "My Model" --description "My Model Description" --dataset YourDatasetId --language "en-US"
+```
+
+> [!NOTE]
+> In this example, the `base` isn't set, so the default base model for the locale is used. The base model URI is returned in the response.
+
+You should receive a response body in the following format:
+
+```json
+{
+ "self": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.1/models/86c4ebd7-d70d-4f67-9ccc-84609504ffc7",
+ "baseModel": {
+ "self": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.1/models/base/1aae1070-7972-47e9-a977-87e3b05c457d"
+ },
+ "datasets": [
+ {
+ "self": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.1/datasets/69e46263-ab10-4ab4-abbe-62e370104d95"
+ }
+ ],
+ "links": {
+ "manifest": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.1/models/86c4ebd7-d70d-4f67-9ccc-84609504ffc7/manifest",
+ "copyTo": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.1/models/86c4ebd7-d70d-4f67-9ccc-84609504ffc7:copyto"
+ },
+ "project": {
+ "self": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.1/projects/5d25e60a-7f4a-4816-afd9-783bb8daccfc"
+ },
+ "properties": {
+ "deprecationDates": {
+ "adaptationDateTime": "2023-01-15T00:00:00Z",
+ "transcriptionDateTime": "2024-07-15T00:00:00Z"
+ }
+ },
+ "lastActionDateTime": "2022-05-21T13:21:01Z",
+ "status": "NotStarted",
+ "createdDateTime": "2022-05-21T13:21:01Z",
+ "locale": "en-US",
+ "displayName": "My Model",
+ "description": "My Model Description"
+}
+```
+
+> [!IMPORTANT]
+> Take note of the date in the `adaptationDateTime` property. This is the last date that you can use the base model for training. For more information, see [Model and endpoint lifecycle](./how-to-custom-speech-model-and-endpoint-lifecycle.md).
+>
+> Take note of the date in the `transcriptionDateTime` property. This is the last date that you can use your custom model for speech recognition. For more information, see [Model and endpoint lifecycle](./how-to-custom-speech-model-and-endpoint-lifecycle.md).
+
+The top-level `self` property in the response body is the model's URI. Use this URI to get details about the model's project, manifest, and deprecation dates. You also use this URI to update or delete a model.
+
+For Speech CLI help with models, run the following command:
+
+```azurecli-interactive
+spx help csr model
+```
+++
+To create a model with datasets for training, use the [Models_Create](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/operations/Models_Create) operation of the [Speech to text REST API](rest-speech-to-text.md). Construct the request body according to the following instructions:
+
+- Set the `project` property to the URI of an existing project. This is recommended so that you can also view and manage the model in Speech Studio. You can make a [Projects_List](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/operations/Projects_List) request to get available projects.
+- Set the required `datasets` property to the URIs of the datasets that you want to use for training.
+- Set the required `locale` property. The model locale must match the locale of the project and base model. The locale can't be changed later.
+- Set the required `displayName` property. This is the name that will be displayed in the Speech Studio.
+- Optionally, you can set the `baseModel` property. For example: `"baseModel": {"self": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.1/models/base/1aae1070-7972-47e9-a977-87e3b05c457d"}`. If you don't specify the `baseModel`, the default base model for the locale is used.
+
+Make an HTTP POST request using the URI as shown in the following example. Replace `YourSubscriptionKey` with your Speech resource key, replace `YourServiceRegion` with your Speech resource region, and set the request body properties as previously described.
+
+```azurecli-interactive
+curl -v -X POST -H "Ocp-Apim-Subscription-Key: YourSubscriptionKey" -H "Content-Type: application/json" -d '{
+ "project": {
+ "self": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.1/projects/5d25e60a-7f4a-4816-afd9-783bb8daccfc"
+ },
+ "displayName": "My Model",
+ "description": "My Model Description",
+ "baseModel": null,
+ "datasets": [
+ {
+ "self": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.1/datasets/69e46263-ab10-4ab4-abbe-62e370104d95"
+ }
+ ],
+ "locale": "en-US"
+}' "https://YourServiceRegion.api.cognitive.microsoft.com/speechtotext/v3.1/models"
+```
+
+> [!NOTE]
+> In this example, the `baseModel` isn't set, so the default base model for the locale is used. The base model URI is returned in the response.
+
+You should receive a response body in the following format:
+
+```json
+{
+ "self": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.1/models/86c4ebd7-d70d-4f67-9ccc-84609504ffc7",
+ "baseModel": {
+ "self": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.1/models/base/1aae1070-7972-47e9-a977-87e3b05c457d"
+ },
+ "datasets": [
+ {
+ "self": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.1/datasets/69e46263-ab10-4ab4-abbe-62e370104d95"
+ }
+ ],
+ "links": {
+ "manifest": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.1/models/86c4ebd7-d70d-4f67-9ccc-84609504ffc7/manifest",
+ "copyTo": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.1/models/86c4ebd7-d70d-4f67-9ccc-84609504ffc7:copyto"
+ },
+ "project": {
+ "self": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.1/projects/5d25e60a-7f4a-4816-afd9-783bb8daccfc"
+ },
+ "properties": {
+ "deprecationDates": {
+ "adaptationDateTime": "2023-01-15T00:00:00Z",
+ "transcriptionDateTime": "2024-07-15T00:00:00Z"
+ }
+ },
+ "lastActionDateTime": "2022-05-21T13:21:01Z",
+ "status": "NotStarted",
+ "createdDateTime": "2022-05-21T13:21:01Z",
+ "locale": "en-US",
+ "displayName": "My Model",
+ "description": "My Model Description"
+}
+```
+
+> [!IMPORTANT]
+> Take note of the date in the `adaptationDateTime` property. This is the last date that you can use the base model for training. For more information, see [Model and endpoint lifecycle](./how-to-custom-speech-model-and-endpoint-lifecycle.md).
+>
+> Take note of the date in the `transcriptionDateTime` property. This is the last date that you can use your custom model for speech recognition. For more information, see [Model and endpoint lifecycle](./how-to-custom-speech-model-and-endpoint-lifecycle.md).
+
+The top-level `self` property in the response body is the model's URI. Use this URI to [get](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/operations/Models_GetCustomModel) details about the model's project, manifest, and deprecation dates. You also use this URI to [update](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/operations/Models_Update) or [delete](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/operations/Models_Delete) the model.
+++
+## Copy a model
+
+You can copy a model to another project that uses the same locale. For example, after a model is trained with audio data in a [region](regions.md#speech-service) with dedicated hardware for training, you can copy it to a Speech resource in another region as needed.
++
+Follow these instructions to copy a model to a project in another region:
+
+1. Sign in to the [Speech Studio](https://aka.ms/speechstudio/customspeech).
+1. Select **Custom Speech** > Your project name > **Train custom models**.
+1. Select **Copy to**.
+1. On the **Copy speech model** page, select a target region where you want to copy the model.
+    :::image type="content" source="./media/custom-speech/custom-speech-copy-to-zoom.png" alt-text="Screenshot of the copy speech model page in Speech Studio." lightbox="./media/custom-speech/custom-speech-copy-to-full.png":::
+1. Select a Speech resource in the target region, or create a new Speech resource.
+1. Select a project where you want to copy the model, or create a new project.
+1. Select **Copy**.
+
+After the model is successfully copied, you'll be notified and can view it in the target project.
+++
+Copying a model directly to a project in another region is not supported with the Speech CLI. You can copy a model to a project in another region using the [Speech Studio](https://aka.ms/speechstudio/customspeech) or [Speech to text REST API](rest-speech-to-text.md).
+++
+To copy a model to another Speech resource, use the [Models_CopyTo](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/operations/Models_CopyTo) operation of the [Speech to text REST API](rest-speech-to-text.md). Construct the request body according to the following instructions:
+
+- Set the required `targetSubscriptionKey` property to the key of the destination Speech resource.
+
+Make an HTTP POST request using the URI as shown in the following example. Use the region and URI of the model you want to copy from. Replace `YourModelId` with the model ID, replace `YourSubscriptionKey` with your Speech resource key, replace `YourServiceRegion` with your Speech resource region, and set the request body properties as previously described.
+
+```azurecli-interactive
+curl -v -X POST -H "Ocp-Apim-Subscription-Key: YourSubscriptionKey" -H "Content-Type: application/json" -d '{
+ "targetSubscriptionKey": "ModelDestinationSpeechResourceKey"
+} ' "https://YourServiceRegion.api.cognitive.microsoft.com/speechtotext/v3.1/models/YourModelId:copyto"
+```
+
+> [!NOTE]
+> Only the `targetSubscriptionKey` property in the request body has information about the destination Speech resource.
+
+You should receive a response body in the following format:
+
+```json
+{
+ "self": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.1/models/9df35ddb-edf9-4e91-8d1a-576d09aabdae",
+ "baseModel": {
+ "self": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.1/models/base/eb5450a7-3ca2-461a-b2d7-ddbb3ad96540"
+ },
+ "links": {
+ "manifest": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.1/models/9df35ddb-edf9-4e91-8d1a-576d09aabdae/manifest",
+ "copyTo": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.1/models/9df35ddb-edf9-4e91-8d1a-576d09aabdae:copyto"
+ },
+ "properties": {
+ "deprecationDates": {
+ "adaptationDateTime": "2023-01-15T00:00:00Z",
+ "transcriptionDateTime": "2024-07-15T00:00:00Z"
+ }
+ },
+ "lastActionDateTime": "2022-05-22T23:15:27Z",
+ "status": "NotStarted",
+ "createdDateTime": "2022-05-22T23:15:27Z",
+ "locale": "en-US",
+ "displayName": "My Model",
+ "description": "My Model Description",
+ "customProperties": {
+ "PortalAPIVersion": "3",
+ "Purpose": "",
+ "VadKind": "None",
+ "ModelClass": "None",
+ "UsesHalide": "False",
+ "IsDynamicGrammarSupported": "False"
+ }
+}
+```
+++
+## Connect a model
+
+A model might have been copied from one project by using the Speech CLI or REST API without being connected to a project in the destination Speech resource. Connecting a model is a matter of updating the model with a reference to the project.
++
+If you're prompted in Speech Studio, you can connect the copied model by selecting the **Connect** button.
++++
+To connect a model to a project, use the `spx csr model update` command. Construct the request parameters according to the following instructions:
+
+- Set the `project` parameter to the URI of an existing project. This is recommended so that you can also view and manage the model in Speech Studio. You can run the `spx csr project list` command to get available projects.
+- Set the required `model` parameter to the ID of the model that you want to connect to the project.
+
+Here's an example Speech CLI command that connects a model to a project:
+
+```azurecli-interactive
+spx csr model update --api-version v3.1 --model YourModelId --project YourProjectId
+```
+
+You should receive a response body in the following format:
+
+```json
+{
+ "project": {
+ "self": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.1/projects/e6ffdefd-9517-45a9-a89c-7b5028ed0e56"
+  }
+}
+```
+
+For Speech CLI help with models, run the following command:
+
+```azurecli-interactive
+spx help csr model
+```
+++
+To connect a new model to a project of the Speech resource where the model was copied, use the [Models_Update](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/operations/Models_Update) operation of the [Speech to text REST API](rest-speech-to-text.md). Construct the request body according to the following instructions:
+
+- Set the required `project` property to the URI of an existing project. This is recommended so that you can also view and manage the model in Speech Studio. You can make a [Projects_List](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/operations/Projects_List) request to get available projects.
+
+Make an HTTP PATCH request using the URI as shown in the following example. Use the URI of the new model. You can get the new model ID from the `self` property of the [Models_CopyTo](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/operations/Models_CopyTo) response body. Replace `YourModelId` with the new model ID, replace `YourSubscriptionKey` with your Speech resource key, replace `YourServiceRegion` with your Speech resource region, and set the request body properties as previously described.
+
+```azurecli-interactive
+curl -v -X PATCH -H "Ocp-Apim-Subscription-Key: YourSubscriptionKey" -H "Content-Type: application/json" -d '{
+ "project": {
+ "self": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.1/projects/e6ffdefd-9517-45a9-a89c-7b5028ed0e56"
+  }
+}' "https://YourServiceRegion.api.cognitive.microsoft.com/speechtotext/v3.1/models/YourModelId"
+```
+
+You should receive a response body in the following format:
+
+```json
+{
+ "project": {
+ "self": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.1/projects/e6ffdefd-9517-45a9-a89c-7b5028ed0e56"
+  }
+}
+```
+++
+## Next steps
+
+- [Test recognition quality](how-to-custom-speech-inspect-data.md)
+- [Test model quantitatively](how-to-custom-speech-evaluate-data.md)
+- [Deploy a model](how-to-custom-speech-deploy-model.md)
ai-services How To Custom Speech Transcription Editor https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/how-to-custom-speech-transcription-editor.md
+
+ Title: How to use the online transcription editor for Custom Speech - Speech service
+
+description: The online transcription editor allows you to create or edit audio + human-labeled transcriptions for Custom Speech.
++++++ Last updated : 05/08/2022+++
+# How to use the online transcription editor
+
+The online transcription editor allows you to create or edit audio + human-labeled transcriptions for Custom Speech. The main use cases of the editor are as follows:
+
+* You only have audio data, but want to build accurate audio + human-labeled datasets from scratch to use in model training.
+* You already have audio + human-labeled datasets, but there are errors or defects in the transcription. The editor allows you to quickly modify the transcriptions to get the best training accuracy.
+
+The only requirement to use the transcription editor is to have audio data uploaded, with or without corresponding transcriptions.
+
+You can find the **Editor** tab next to the **Training and testing dataset** tab on the main **Speech datasets** page.
++
+Datasets in the **Training and testing dataset** tab can't be updated. You can import a copy of a training or testing dataset to the **Editor** tab, add or edit human-labeled transcriptions to match the audio, and then export the edited dataset to the **Training and testing dataset** tab. Also note that you can't use a dataset that's in the Editor to train or test a model.
+
+## Import datasets to the Editor
+
+To import a dataset to the Editor, follow these steps:
+
+1. Sign in to the [Speech Studio](https://aka.ms/speechstudio/customspeech).
+1. Select **Custom Speech** > Your project name > **Speech datasets** > **Editor**.
+1. Select **Import data**.
+1. Select datasets. You can select audio data only, audio + human-labeled data, or both. For audio-only data, you can use the default models to automatically generate machine transcription after importing to the editor.
+1. Enter a name and description for the new dataset, and then select **Next**.
+1. Review your settings, and then select **Import and close** to kick off the import process. After data has been successfully imported, you can select datasets and start editing.
+
+> [!NOTE]
+> You can also select a dataset from the main **Speech datasets** page and export it to the Editor. Select a dataset, and then select **Export to Editor**.
+
+## Edit transcription to match audio
+
+Once a dataset has been imported to the Editor, you can start editing it. You can add or edit human-labeled transcriptions to match the audio as you hear it. The audio data itself can't be edited.
+
+To edit a dataset's transcription in the Editor, follow these steps:
+
+1. Sign in to the [Speech Studio](https://aka.ms/speechstudio/customspeech).
+1. Select **Custom Speech** > Your project name > **Speech datasets** > **Editor**.
+1. Select the link to a dataset by name.
+1. From the **Audio + text files** table, select the link to an audio file by name.
+1. After you've made edits, select **Save**.
+
+If there are multiple files in the dataset, you can select **Previous** and **Next** to move from file to file. Edit and save changes to each file as you go.
+
+The detail page lists all the segments in each audio file, and you can select the desired utterance. For each utterance, you can play and compare the audio with the corresponding transcription. Edit the transcriptions if you find any insertion, deletion, or substitution errors. For more information about word error types, see [Test model quantitatively](how-to-custom-speech-evaluate-data.md).
+
+## Export datasets from the Editor
+
+Datasets in the Editor can be exported to the **Training and testing dataset** tab, where they can be used to train or test a model.
+
+To export datasets from the Editor, follow these steps:
+
+1. Sign in to the [Speech Studio](https://aka.ms/speechstudio/customspeech).
+1. Select **Custom Speech** > Your project name > **Speech datasets** > **Editor**.
+1. Select the link to a dataset by name.
+1. Select one or more rows from the **Audio + text files** table.
+1. Select **Export** to export all of the selected files as one new dataset.
+
+The files are exported as a new dataset and don't affect or replace other training or testing datasets.
+
+## Next steps
+
+- [Test recognition quality](how-to-custom-speech-inspect-data.md)
+- [Train your model](how-to-custom-speech-train-model.md)
+- [Deploy your model](how-to-custom-speech-deploy-model.md)
+
ai-services How To Custom Speech Upload Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/how-to-custom-speech-upload-data.md
+
+ Title: "Upload training and testing datasets for Custom Speech - Speech service"
+
+description: Learn about how to upload data to test or train a Custom Speech model.
++++++ Last updated : 11/29/2022+
+zone_pivot_groups: speech-studio-cli-rest
++
+# Upload training and testing datasets for Custom Speech
+
+You need audio or text data for testing the accuracy of speech recognition or training your custom models. For information about the data types supported for testing or training your model, see [Training and testing datasets](how-to-custom-speech-test-and-train.md).
+
+> [!TIP]
+> You can also use the [online transcription editor](how-to-custom-speech-transcription-editor.md) to create and refine labeled audio datasets.
+
+## Upload datasets
++
+To upload your own datasets in Speech Studio, follow these steps:
+
+1. Sign in to the [Speech Studio](https://aka.ms/speechstudio/customspeech).
+1. Select **Custom Speech** > Your project name > **Speech datasets** > **Upload data**.
+1. Select the **Training data** or **Testing data** tab.
+1. Select a dataset type, and then select **Next**.
+1. Specify the dataset location, and then select **Next**. You can choose a local file or enter a remote location such as an Azure Blob Storage URL. If you specify a remote location and don't use the trusted Azure services security mechanism (see the next note), the remote location must be a URL that can be retrieved with a simple anonymous GET request, such as a [SAS URL](/azure/storage/common/storage-sas-overview) or a publicly accessible URL. URLs that require extra authorization or expect user interaction aren't supported.
+
+ > [!NOTE]
+ > If you use Azure Blob URL, you can ensure maximum security of your dataset files by using trusted Azure services security mechanism. You will use the same techniques as for Batch transcription and plain Storage Account URLs for your dataset files. See details [here](batch-transcription-audio-data.md#trusted-azure-services-security-mechanism).
+
+1. Enter the dataset name and description, and then select **Next**.
+1. Review your settings, and then select **Save and close**.
+
+After your dataset is uploaded, go to the **Train custom models** page to [train a custom model](how-to-custom-speech-train-model.md).
++++
+To create a dataset and connect it to an existing project, use the `spx csr dataset create` command. Construct the request parameters according to the following instructions:
+
+- Set the `project` parameter to the ID of an existing project. This is recommended so that you can also view and manage the dataset in Speech Studio. You can run the `spx csr project list` command to get available projects.
+- Set the required `kind` parameter. The possible values for dataset kind are Language, Acoustic, Pronunciation, and AudioFiles.
+- Set the required `contentUrl` parameter. This is the location of the dataset. If you don't use the trusted Azure services security mechanism (see the next note), the `contentUrl` value must be a URL that can be retrieved with a simple anonymous GET request, such as a [SAS URL](/azure/storage/common/storage-sas-overview) or a publicly accessible URL. URLs that require extra authorization or expect user interaction aren't supported.
+
+ > [!NOTE]
+ > If you use Azure Blob URL, you can ensure maximum security of your dataset files by using trusted Azure services security mechanism. You will use the same techniques as for Batch transcription and plain Storage Account URLs for your dataset files. See details [here](batch-transcription-audio-data.md#trusted-azure-services-security-mechanism).
+
+- Set the required `language` parameter. The dataset locale must match the locale of the project. The locale can't be changed later. The Speech CLI `language` parameter corresponds to the `locale` property in the JSON request and response.
+- Set the required `name` parameter. This is the name that will be displayed in the Speech Studio. The Speech CLI `name` parameter corresponds to the `displayName` property in the JSON request and response.
+
+Here's an example Speech CLI command that creates a dataset and connects it to an existing project:
+
+```azurecli-interactive
+spx csr dataset create --api-version v3.1 --kind "Acoustic" --name "My Acoustic Dataset" --description "My Acoustic Dataset Description" --project YourProjectId --content YourContentUrl --language "en-US"
+```
+
+You should receive a response body in the following format:
+
+```json
+{
+ "self": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.1/datasets/e0ea620b-e8c3-4a26-acb2-95fd0cbc625c",
+ "kind": "Acoustic",
+ "contentUrl": "https://contoso.com/mydatasetlocation",
+ "links": {
+ "files": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.1/datasets/e0ea620b-e8c3-4a26-acb2-95fd0cbc625c/files"
+ },
+ "project": {
+ "self": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.1/projects/70ccbffc-cafb-4301-aa9f-ef658559d96e"
+ },
+ "properties": {
+ "acceptedLineCount": 0,
+ "rejectedLineCount": 0
+ },
+ "lastActionDateTime": "2022-05-20T14:07:11Z",
+ "status": "NotStarted",
+ "createdDateTime": "2022-05-20T14:07:11Z",
+ "locale": "en-US",
+ "displayName": "My Acoustic Dataset",
+ "description": "My Acoustic Dataset Description"
+}
+```
+
+The top-level `self` property in the response body is the dataset's URI. Use this URI to get details about the dataset's project and files. You also use this URI to update or delete a dataset.
+
+For Speech CLI help with datasets, run the following command:
+
+```azurecli-interactive
+spx help csr dataset
+```
++++
+To create a dataset and connect it to an existing project, use the [Datasets_Create](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/operations/Datasets_Create) operation of the [Speech to text REST API](rest-speech-to-text.md). Construct the request body according to the following instructions:
+
+- Set the `project` property to the URI of an existing project. This is recommended so that you can also view and manage the dataset in Speech Studio. You can make a [Projects_List](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/operations/Projects_List) request to get available projects.
+- Set the required `kind` property. The possible values for dataset kind are Language, Acoustic, Pronunciation, and AudioFiles.
+- Set the required `contentUrl` property. This is the location of the dataset. If you don't use the trusted Azure services security mechanism (see the next note), the `contentUrl` value must be a URL that can be retrieved with a simple anonymous GET request, such as a [SAS URL](/azure/storage/common/storage-sas-overview) or a publicly accessible URL. URLs that require extra authorization or expect user interaction aren't supported.
+
+ > [!NOTE]
+ > If you use Azure Blob URL, you can ensure maximum security of your dataset files by using trusted Azure services security mechanism. You will use the same techniques as for Batch transcription and plain Storage Account URLs for your dataset files. See details [here](batch-transcription-audio-data.md#trusted-azure-services-security-mechanism).
+
+- Set the required `locale` property. The dataset locale must match the locale of the project. The locale can't be changed later.
+- Set the required `displayName` property. This is the name that will be displayed in the Speech Studio.
+
+Make an HTTP POST request using the URI as shown in the following example. Replace `YourSubscriptionKey` with your Speech resource key, replace `YourServiceRegion` with your Speech resource region, and set the request body properties as previously described.
+
+```azurecli-interactive
+curl -v -X POST -H "Ocp-Apim-Subscription-Key: YourSubscriptionKey" -H "Content-Type: application/json" -d '{
+ "kind": "Acoustic",
+ "displayName": "My Acoustic Dataset",
+ "description": "My Acoustic Dataset Description",
+ "project": {
+ "self": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.1/projects/70ccbffc-cafb-4301-aa9f-ef658559d96e"
+ },
+ "contentUrl": "https://contoso.com/mydatasetlocation",
+  "locale": "en-US"
+}' "https://YourServiceRegion.api.cognitive.microsoft.com/speechtotext/v3.1/datasets"
+```
+
+You should receive a response body in the following format:
+
+```json
+{
+ "self": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.1/datasets/e0ea620b-e8c3-4a26-acb2-95fd0cbc625c",
+ "kind": "Acoustic",
+ "contentUrl": "https://contoso.com/mydatasetlocation",
+ "links": {
+ "files": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.1/datasets/e0ea620b-e8c3-4a26-acb2-95fd0cbc625c/files"
+ },
+ "project": {
+ "self": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.1/projects/70ccbffc-cafb-4301-aa9f-ef658559d96e"
+ },
+ "properties": {
+ "acceptedLineCount": 0,
+ "rejectedLineCount": 0
+ },
+ "lastActionDateTime": "2022-05-20T14:07:11Z",
+ "status": "NotStarted",
+ "createdDateTime": "2022-05-20T14:07:11Z",
+ "locale": "en-US",
+ "displayName": "My Acoustic Dataset",
+ "description": "My Acoustic Dataset Description"
+}
+```
+
+The top-level `self` property in the response body is the dataset's URI. Use this URI to [get](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/operations/Datasets_Get) details about the dataset's project and files. You also use this URI to [update](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/operations/Datasets_Update) or [delete](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/operations/Datasets_Delete) the dataset.
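+
+For example, the following request gets the dataset by its URI. The dataset ID shown here is the one from the sample response; substitute the `self` value from your own response. You can check the `status` property in the response to see whether the dataset has finished processing.
+
+```azurecli-interactive
+curl -v -X GET -H "Ocp-Apim-Subscription-Key: YourSubscriptionKey" "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.1/datasets/e0ea620b-e8c3-4a26-acb2-95fd0cbc625c"
+```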
++
+> [!IMPORTANT]
+> Connecting a dataset to a Custom Speech project isn't required to train and test a custom model using the REST API or Speech CLI. But if the dataset is not connected to any project, you can't select it for training or testing in the [Speech Studio](https://aka.ms/speechstudio/customspeech).
+
+## Next steps
+
+* [Test recognition quality](how-to-custom-speech-inspect-data.md)
+* [Test model quantitatively](how-to-custom-speech-evaluate-data.md)
+* [Train a custom model](how-to-custom-speech-train-model.md)
ai-services How To Custom Voice Create Voice https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/how-to-custom-voice-create-voice.md
+
+ Title: Train your custom voice model - Speech service
+
+description: Learn how to train a custom neural voice through the Speech Studio portal.
++++++ Last updated : 11/28/2022++++
+# Train your voice model
+
+In this article, you learn how to train a custom neural voice through the Speech Studio portal.
+
+> [!IMPORTANT]
+> Custom Neural Voice training is currently only available in some regions. After your voice model is trained in a supported region, you can [copy](#copy-your-voice-model-to-another-project) it to a Speech resource in another region as needed. See footnotes in the [regions](regions.md#speech-service) table for more information.
+
+Training duration varies depending on how much data you use. It takes about 40 compute hours on average to train a custom neural voice. Standard subscription (S0) users can train four voices simultaneously. If you reach the limit, wait until at least one of your voice models finishes training, and then try again.
+
+> [!NOTE]
+> Although the total number of hours required per [training method](#choose-a-training-method) will vary, the same unit price applies to each. For more information, see the [Custom Neural training pricing details](https://azure.microsoft.com/pricing/details/cognitive-services/speech-services/).
+
+## Choose a training method
+
+After you validate your data files, you can use them to build your Custom Neural Voice model. When you create a custom neural voice, you can choose to train it with one of the following methods:
+
+- [Neural](?tabs=neural#train-your-custom-neural-voice-model): Create a voice in the same language as your training data by selecting the **Neural** method.
+
+- [Neural - cross lingual](?tabs=crosslingual#train-your-custom-neural-voice-model): Create a secondary language for your voice model to speak a different language from your training data. For example, with the `zh-CN` training data, you can create a voice that speaks `en-US`. The language of the training data and the target language must both be one of the [languages that are supported](language-support.md?tabs=tts) for cross lingual voice training. You don't need to prepare training data in the target language, but your test script must be in the target language.
+
+- [Neural - multi style](?tabs=multistyle#train-your-custom-neural-voice-model): Create a custom neural voice that speaks in multiple styles and emotions, without adding new training data. Multi-style voices are particularly useful for video game characters, conversational chatbots, audiobooks, content readers, and more. To create a multi-style voice, you just need to prepare a set of general training data (at least 300 utterances), and select one or more of the preset target speaking styles. You can also create multiple custom styles by providing style samples (at least 100 utterances per style) as additional training data for the same voice. The supported preset styles vary according to different languages. Refer to [the preset style list for different languages](?tabs=multistyle#available-preset-styles-across-different-languages).
+
+The language of the training data must be one of the [languages that are supported](language-support.md?tabs=tts) for custom neural voice neural, cross-lingual, or multi-style training.
+
+## Train your Custom Neural Voice model
+
+To create a custom neural voice in Speech Studio, follow these steps for one of the following [methods](#choose-a-training-method):
+
+# [Neural](#tab/neural)
+
+1. Sign in to the [Speech Studio](https://aka.ms/speechstudio/customvoice).
+1. Select **Custom Voice** > Your project name > **Train model** > **Train a new model**.
+1. Select **Neural** as the [training method](#choose-a-training-method) for your model and then select **Next**. To use a different training method, see [Neural - cross lingual](?tabs=crosslingual#train-your-custom-neural-voice-model) or [Neural - multi style](?tabs=multistyle#train-your-custom-neural-voice-model).
+ :::image type="content" source="media/custom-voice/cnv-train-neural.png" alt-text="Screenshot that shows how to select neural training.":::
+1. Select a version of the training recipe for your model. The latest version is selected by default. The supported features and training time can vary by version. Normally, the latest version is recommended for the best results. In some cases, you can choose an older version to reduce training time.
+1. Select the data that you want to use for training. Duplicate audio names will be removed from the training. Make sure the data you select doesn't contain the same audio names across multiple .zip files. Only successfully processed datasets can be selected for training. If you don't see your training set in the list, check your data processing status.
+1. Select a speaker file with the voice talent statement that corresponds to the speaker in your training data.
+1. Select **Next**.
+1. Each training generates 100 sample audio files automatically, to help you test the model with a default script. Optionally, you can also check the box next to **Add my own test script** and provide your own test script with up to 100 utterances to test the model at no additional cost. The generated audio files are a combination of the automatic test scripts and custom test scripts. For more information, see [test script requirements](#test-script-requirements).
+1. Enter a **Name** to help you identify the model. Choose a name carefully. The model name will be used as the voice name in your [speech synthesis request](how-to-deploy-and-use-endpoint.md#use-your-custom-voice) via the SDK and SSML input. Only letters, numbers, and a few punctuation characters are allowed. Use different names for different neural voice models.
+1. Optionally, enter the **Description** to help you identify the model. A common use of the description is to record the names of the data that you used to create the model.
+1. Select **Next**.
+1. Review the settings and check the box to accept the terms of use.
+1. Select **Submit** to start training the model.
+
+# [Neural - cross lingual](#tab/crosslingual)
+
+1. Sign in to the [Speech Studio](https://aka.ms/speechstudio/customvoice).
+1. Select **Custom Voice** > Your project name > **Train model** > **Train a new model**.
+1. Select **Neural - cross lingual** as the [training method](#choose-a-training-method) for your model. To use a different training method, see [Neural](?tabs=neural#train-your-custom-neural-voice-model) or [Neural - multi style](?tabs=multistyle#train-your-custom-neural-voice-model).
+ :::image type="content" source="media/custom-voice/cnv-train-neural-cross-lingual.png" alt-text="Screenshot that shows how to select neural cross lingual training.":::
+1. Select the **Target language** that will be the secondary language for your voice model. Only one target language can be selected for a voice model.
+1. Select the data that you want to use for training. Duplicate audio names will be removed from the training. Make sure the data you select doesn't contain the same audio names across multiple .zip files. Only successfully processed datasets can be selected for training. If you don't see your training set in the list, check your data processing status.
+1. Select a speaker file with the voice talent statement that corresponds to the speaker in your training data.
+1. Select **Next**.
+1. Each training generates 100 sample audio files automatically, to help you test the model with a default script. Optionally, you can also check the box next to **Add my own test script** and provide your own test script with up to 100 utterances to test the model at no additional cost. The generated audio files are a combination of the automatic test scripts and custom test scripts. For more information, see [test script requirements](#test-script-requirements).
+1. Enter a **Name** to help you identify the model. Choose a name carefully. The model name will be used as the voice name in your [speech synthesis request](how-to-deploy-and-use-endpoint.md#use-your-custom-voice) via the SDK and SSML input. Only letters, numbers, and a few punctuation characters are allowed. Use different names for different neural voice models.
+1. Optionally, enter the **Description** to help you identify the model. A common use of the description is to record the names of the data that you used to create the model.
+1. Select **Next**.
+1. Review the settings and check the box to accept the terms of use.
+1. Select **Submit** to start training the model.
+
+# [Neural - multi style](#tab/multistyle)
+
+1. Sign in to the [Speech Studio](https://aka.ms/speechstudio/customvoice).
+1. Select **Custom Voice** > Your project name > **Train model** > **Train a new model**.
+1. Select **Neural - multi style** as the [training method](#choose-a-training-method) for your model. To use a different training method, see [Neural](?tabs=neural#train-your-custom-neural-voice-model) or [Neural - cross lingual](?tabs=crosslingual#train-your-custom-neural-voice-model).
+ :::image type="content" source="media/custom-voice/cnv-train-neural-multi-style.png" alt-text="Screenshot that shows how to select neural multi style training.":::
+1. Select one or more preset speaking styles to train.
+1. Select the data that you want to use for training. Duplicate audio names will be removed from the training. Make sure the data you select doesn't contain the same audio names across multiple .zip files. Only successfully processed datasets can be selected for training. If you don't see your training set in the list, check your data processing status.
+1. Select **Next**.
+1. Optionally, you can add additional custom speaking styles. The maximum number of custom styles varies by language: `English (United States)` allows up to 10 custom styles, `Chinese (Mandarin, Simplified)` allows up to 4 custom styles, and `Japanese (Japan)` allows up to 5 custom styles.
+ 1. Select **Add a custom style** and thoughtfully enter a custom style name of your choice. This name will be used by your application within the `style` element of [Speech Synthesis Markup Language (SSML)](speech-synthesis-markup-voice.md#speaking-styles-and-roles). You can also use the custom style name as SSML via the [Audio Content Creation](how-to-audio-content-creation.md) tool in [Speech Studio](https://speech.microsoft.com/portal/audiocontentcreation).
+ 1. Select style samples as training data. The style samples should be all from the same voice talent profile.
+1. Select **Next**.
+1. Select a speaker file with the voice talent statement that corresponds to the speaker in your training data.
+1. Select **Next**.
+1. Each training generates 100 sample audios for the default style and 20 for each preset style automatically, to help you test the model with a default script. Optionally, you can also check the box next to **Add my own test script** and provide your own test script with up to 100 utterances to test the default style at no additional cost. The generated audio files are a combination of the automatic test scripts and custom test scripts. For more information, see [test script requirements](#test-script-requirements).
+1. Enter a **Name** to help you identify the model. Choose a name carefully. The model name will be used as the voice name in your [speech synthesis request](how-to-deploy-and-use-endpoint.md#use-your-custom-voice) via the SDK and SSML input. Only letters, numbers, and a few punctuation characters are allowed. Use different names for different neural voice models.
+1. Optionally, enter the **Description** to help you identify the model. A common use of the description is to record the names of the data that you used to create the model.
+1. Select **Next**.
+1. Review the settings and check the box to accept the terms of use.
+1. Select **Submit** to start training the model.
+
+## Available preset styles across different languages
+
+The following table summarizes the preset styles that are available for each language.
+
+| Speaking style | Language |
+|-|-|
+| angry | English (United States)<br> Chinese (Mandarin, Simplified)(preview)<br> Japanese (Japan)(preview) |
+| calm | Chinese (Mandarin, Simplified)(preview)|
+| chat | Chinese (Mandarin, Simplified)(preview) |
+| cheerful | English (United States) <br> Chinese (Mandarin, Simplified)(preview) <br>Japanese (Japan)(preview)|
+| disgruntled | Chinese (Mandarin, Simplified)(preview) |
+| excited | English (United States) |
+| fearful | Chinese (Mandarin, Simplified)(preview) |
+| friendly | English (United States) |
+| hopeful | English (United States) |
+| sad | English (United States)<br>Chinese (Mandarin, Simplified)(preview)<br>Japanese (Japan)(preview)|
+| shouting | English (United States) |
+| terrified | English (United States) |
+| unfriendly | English (United States)|
+| whispering | English (United States) |
+| serious | Chinese (Mandarin, Simplified)(preview) |
+
+
+
+The **Train model** table displays a new entry that corresponds to this newly created model. The status reflects the process of converting your data to a voice model, as described in this table:
+
+| State | Meaning |
+| -- | - |
+| Processing | Your voice model is being created. |
+| Succeeded | Your voice model has been created and can be deployed. |
+| Failed | Training of your voice model failed. The cause of the failure might be, for example, unseen data problems or network issues. |
+| Canceled | The training for your voice model was canceled. |
+
+While the model status is **Processing**, you can select **Cancel training** to cancel your voice model. You're not charged for this canceled training.
++
+After you finish training the model successfully, you can review the model details and [test the model](#test-your-voice-model).
+
+You can use the [Audio Content Creation](how-to-audio-content-creation.md) tool in [Speech Studio](https://speech.microsoft.com/portal/audiocontentcreation) to create audio and fine-tune your deployed voice. If applicable for your voice, you can also select one of its multiple styles.
+
+### Rename your model
+
+If you want to rename the model you built, you can select **Clone model** to create a clone of the model with a new name in the current project.
++
+Enter the new name on the **Clone voice model** window, then select **Submit**. The text 'Neural' will be automatically added as a suffix to your new model name.
++
+### Test your voice model
+
+After your voice model is successfully built, you can use the generated sample audio files to test it before deploying it for use.
+
+The quality of the voice depends on many factors, such as:
+
+- The size of the training data.
+- The quality of the recording.
+- The accuracy of the transcript file.
+- How well the recorded voice in the training data matches the personality of the designed voice for your intended use case.
+
+Select **DefaultTests** under **Testing** to listen to the sample audios. The default test samples include 100 sample audios generated automatically during training to help you test the model. In addition to these 100 audios provided by default, your own test script (at most 100 utterances) provided during training is also added to the **DefaultTests** set. You're not charged for the testing with **DefaultTests**.
++
+If you want to upload your own test scripts to further test your model, select **Add test scripts** to upload your own test script.
++
+Before uploading a test script, check the [test script requirements](#test-script-requirements). You'll be charged for the additional testing with batch synthesis based on the number of billable characters. See the [pricing page](https://azure.microsoft.com/pricing/details/cognitive-services/speech-services/).
+
+In the **Add test scripts** window, select **Browse for a file** to select your own script, then select **Add** to upload it.
++
+### Test script requirements
+
+The test script must be a .txt file, less than 1 MB. Supported encoding formats include ANSI/ASCII, UTF-8, UTF-8-BOM, UTF-16-LE, or UTF-16-BE.
+
+Unlike the [training transcription files](how-to-custom-voice-training-data.md#transcription-data-for-individual-utterances--matching-transcript), the test script should exclude the utterance ID (filenames of each utterance). Otherwise, these IDs are spoken.
+
+Here's an example set of utterances in one .txt file:
+
+```text
+This is the waistline, and it's falling.
+We have trouble scoring.
+It was Janet Maslin.
+```
+
+Each paragraph of the utterance results in a separate audio. If you want to combine all sentences into one audio, make them a single paragraph.
+
+>[!NOTE]
+> The generated audio files are a combination of the automatic test scripts and custom test scripts.
+
+### Update engine version for your voice model
+
+Azure Text to speech engines are updated from time to time to capture the latest language model that defines the pronunciation of the language. After you've trained your voice, you can apply your voice to the new language model by updating to the latest engine version.
+
+When a new engine is available, you're prompted to update your neural voice model.
++
+Go to the model details page and follow the on-screen instructions to install the latest engine.
++
+Alternatively, select **Install the latest engine** later to update your model to the latest engine version.
++
+You're not charged for engine updates. The previous versions are still kept. You can check all engine versions for the model from the **Engine version** drop-down list, or remove one if you don't need it anymore.
++
+The updated version is automatically set as default. But you can change the default version by selecting a version from the drop-down list and selecting **Set as default**.
++
+If you want to test each engine version of your voice model, you can select a version from the drop-down list, then select **DefaultTests** under **Testing** to listen to the sample audios. If you want to upload your own test scripts to further test your current engine version, first make sure the version is set as default, then follow the [testing steps above](#test-your-voice-model).
+
+Updating the engine will create a new version of the model at no additional cost. After you've updated the engine version for your voice model, you need to deploy the new version to [create a new endpoint](how-to-deploy-and-use-endpoint.md#add-a-deployment-endpoint). You can only deploy the default version.
++
+After you've created a new endpoint, you need to [transfer the traffic to the new endpoint in your product](how-to-deploy-and-use-endpoint.md#switch-to-a-new-voice-model-in-your-product).
+
+For more information, [learn more about the capabilities and limits of this feature, and the best practice to improve your model quality](/legal/cognitive-services/speech-service/custom-neural-voice/characteristics-and-limitations-custom-neural-voice?context=%2fazure%2fcognitive-services%2fspeech-service%2fcontext%2fcontext).
+
+## Copy your voice model to another project
+
+You can copy your voice model to another project for the same region or another region. For example, you can copy a neural voice model that was trained in one region to a project in another region.
+
+> [!NOTE]
+> Custom Neural Voice training is currently only available in some regions. But you can easily copy a neural voice model from those regions to other regions. For more information, see the [regions for Custom Neural Voice](regions.md#speech-service).
+
+To copy your custom neural voice model to another project:
+
+1. On the **Train model** tab, select a voice model that you want to copy, and then select **Copy to project**.
+
+ :::image type="content" source="media/custom-voice/cnv-model-copy.png" alt-text="Screenshot of the copy to project option.":::
+
+1. Select the **Region**, **Speech resource**, and **Project** where you want to copy the model. You must have a Speech resource and project in the target region; otherwise, you need to create them first.
+
+ :::image type="content" source="media/custom-voice/cnv-model-copy-dialog.png" alt-text="Screenshot of the copy voice model dialog.":::
+
+1. Select **Submit** to copy the model.
+1. Select **View model** under the notification message for copy success.
+
+Navigate to the project where you copied the model to [deploy the model copy](how-to-deploy-and-use-endpoint.md).
+
+## Next steps
+
+- [Deploy and use your voice model](how-to-deploy-and-use-endpoint.md)
+- [How to record voice samples](record-custom-voice-samples.md)
+- [Text to speech API reference](rest-text-to-speech.md)
ai-services How To Custom Voice Prepare Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/how-to-custom-voice-prepare-data.md
+
+ Title: "How to prepare data for Custom Voice - Speech service"
+
+description: "Learn how to provide studio recordings and the associated scripts that will be used to train your Custom Neural Voice."
++++++ Last updated : 10/27/2022+++
+# Prepare training data for Custom Neural Voice
+
+When you're ready to create a custom Text to speech voice for your application, the first step is to gather audio recordings and associated scripts to start training the voice model. For details on recording voice samples, see [the tutorial](record-custom-voice-samples.md). The Speech service uses this data to create a unique voice tuned to match the voice in the recordings. After you've trained the voice, you can start synthesizing speech in your applications.
+
+All data you upload must meet the requirements for the data type that you choose. It's important to correctly format your data before it's uploaded, which ensures the data will be accurately processed by the Speech service. To confirm that your data is correctly formatted, see [Training data types](how-to-custom-voice-training-data.md).
+
+> [!NOTE]
+> - Standard subscription (S0) users can upload five data files simultaneously. If you reach the limit, wait until at least one of your data files finishes importing. Then try again.
+> - The maximum number of data files allowed to be imported per subscription is 500 .zip files for standard subscription (S0) users. For more information, see [Speech service quotas and limits](speech-services-quotas-and-limits.md#custom-neural-voice).
+
+## Upload your data
+
+When you're ready to upload your data, go to the **Prepare training data** tab to add your first training set and upload data. A *training set* is a set of audio utterances and their mapping scripts used for training a voice model. You can use a training set to organize your training data. The service checks data readiness for each training set. You can import multiple datasets into a training set.
+
+To upload training data, follow these steps:
+
+1. Sign in to the [Speech Studio](https://aka.ms/speechstudio/customvoice).
+1. Select **Custom Voice** > Your project name > **Prepare training data** > **Upload data**.
+1. In the **Upload data** wizard, choose a [data type](how-to-custom-voice-training-data.md) and then select **Next**.
+1. Select local files from your computer or enter the Azure Blob storage URL to upload data.
+1. Under **Specify the target training set**, select an existing training set or create a new one. If you created a new training set, make sure it's selected in the drop-down list before you continue.
+1. Select **Next**.
+1. Enter a name and description for your data and then select **Next**.
+1. Review the upload details, and select **Submit**.
+
+> [!NOTE]
+> Duplicate IDs are not accepted. Utterances with the same ID will be removed.
+>
+> Duplicate audio names are removed from the training. Make sure the data you select doesn't contain the same audio names within the .zip file or across multiple .zip files. If utterance IDs (either in audio or script files) are duplicates, they're rejected.
+
+Data files are automatically validated when you select **Submit**. Data validation includes a series of checks on the audio files to verify their file format, size, and sampling rate. If there are any errors, fix them and submit again.
+
+After you upload the data, you can check the details in the training set detail view. On the detail page, you can further check the pronunciation issues and the noise level for each of your utterances. The pronunciation score at the sentence level ranges from 0-100. A score below 70 normally indicates a speech error or script mismatch. Utterances with an overall score lower than 70 will be rejected. A heavy accent can reduce your pronunciation score and affect the generated digital voice.
+
+## Resolve data issues online
+
+After upload, you can check the data details of the training set. Before continuing to [train your voice model](how-to-custom-voice-create-voice.md), you should try to resolve any data issues.
+
+You can resolve data issues per utterance in Speech Studio.
+
+1. On the detail page, go to the **Accepted data** or **Rejected data** page. Select individual utterances you want to change, then select **Edit**.
+
+ :::image type="content" source="media/custom-voice/cnv-edit-trainingset.png" alt-text="Screenshot of selecting edit button on the accepted data or rejected data details page.":::
+
+ You can choose which data issues to be displayed based on your criteria.
+
+ :::image type="content" source="media/custom-voice/cnv-issues-display-criteria.png" alt-text="Screenshot of choosing which data issues to be displayed":::
+
+1. The edit window is displayed.
+
+ :::image type="content" source="media/custom-voice/cnv-edit-trainingset-editscript.png" alt-text="Screenshot of displaying Edit transcript and recording file window.":::
+
+1. Update the transcript or recording file according to the issue description in the edit window.
+
+   You can edit the transcript in the text box, and then select **Done**.
+
+ :::image type="content" source="media/custom-voice/cnv-edit-trainingset-scriptedit-done.png" alt-text="Screenshot of selecting Done button on the Edit transcript and recording file window.":::
+
+   If you need to update the recording file, select **Update recording file**, and then upload the fixed recording file (.wav).
+
+ :::image type="content" source="media/custom-voice/cnv-edit-trainingset-upload-recording.png" alt-text="Screenshot that shows how to upload recording file on the Edit transcript and recording file window.":::
+
+1. After you've made changes to your data, select **Analyze data** to check the data quality before using this dataset for training.
+
+   You can't select this training set to train a model until the analysis is complete.
+
+ :::image type="content" source="media/custom-voice/cnv-edit-trainingset-analyze.png" alt-text="Screenshot of selecting Analyze data on Data details page.":::
+
+ You can also delete utterances with issues by selecting them and clicking **Delete**.
+
+### Typical data issues
+
+The issues are divided into three types. Refer to the following tables to check the respective types of errors.
+
+**Auto-rejected**
+
+Data with these errors won't be used for training. Imported data with errors will be ignored, so you don't need to delete them. You can [fix these data errors online](#resolve-data-issues-online) or upload the corrected data again for training.
+
+| Category | Name | Description |
+| | -- | |
+| Script | Invalid separator| You must separate the utterance ID and the script content with a Tab character.|
+| Script | Invalid script ID| The script line ID must be numeric.|
+| Script | Duplicated script|Each line of the script content must be unique. The line is duplicated with {}.|
+| Script | Script too long| The script must be less than 1,000 characters.|
+| Script | No matching audio| The ID of each utterance (each line of the script file) must match the audio ID.|
+| Script | No valid script| No valid script is found in this dataset. Fix the script lines that appear in the detailed issue list.|
+| Audio | No matching script| No audio files match the script ID. The name of the .wav files must match with the IDs in the script file.|
+| Audio | Invalid audio format| The audio format of the .wav files is invalid. Check the .wav file format by using an audio tool like [SoX](http://sox.sourceforge.net/).|
+| Audio | Low sampling rate| The sampling rate of the .wav files can't be lower than 16 kHz.|
+| Audio | Too long audio| Audio duration is longer than 30 seconds. Split the long audio into multiple files. It's a good idea to make utterances shorter than 15 seconds.|
+| Audio | No valid audio| No valid audio is found in this dataset. Check your audio data and upload again.|
+| Mismatch | Low scored utterance| Sentence-level pronunciation score is lower than 70. Review the script and the audio content to make sure they match.|
+
+**Auto-fixed**
+
+The following errors are fixed automatically, but you should review and confirm the fixes are made correctly.
+
+| Category | Name | Description |
+| | -- | |
+| Mismatch |Silence auto fixed |The start silence is detected to be shorter than 100 ms, and has been extended to 100 ms automatically. Download the normalized dataset and review it. |
+| Mismatch |Silence auto fixed | The end silence is detected to be shorter than 100 ms, and has been extended to 100 ms automatically. Download the normalized dataset and review it.|
+| Script | Text auto normalized|Text is automatically normalized for digits, symbols, and abbreviations. Review the script and audio to make sure they match.|
+
+**Manual check required**
+
+Unresolved errors listed in the next table affect the quality of training, but data with these errors won't be excluded during training. For higher-quality training, it's a good idea to fix these errors manually.
+
+| Category | Name | Description |
+| | -- | |
+| Script | Non-normalized text |This script contains symbols. Normalize the symbols to match the audio. For example, normalize */* to *slash*.|
+| Script | Not enough question utterances| At least 10 percent of the total utterances should be question sentences. This helps the voice model properly express a questioning tone.|
+| Script | Not enough exclamation utterances| At least 10 percent of the total utterances should be exclamation sentences. This helps the voice model properly express an excited tone.|
+| Script | No valid end punctuation| Add one of the following at the end of the line: full stop (half-width '.' or full-width '。'), exclamation point (half-width '!' or full-width '!' ), or question mark ( half-width '?' or full-width '?').|
+| Audio| Low sampling rate for neural voice | A sampling rate of 24 kHz or higher is recommended for your .wav files when creating neural voices. If it's lower, it will be automatically raised to 24 kHz.|
+| Volume |Overall volume too low|Volume shouldn't be lower than -18 dB (10 percent of max volume). Control the volume average level within proper range during the sample recording or data preparation.|
+| Volume | Volume overflow| Overflowing volume is detected at {}s. Adjust the recording equipment to avoid the volume overflow at its peak value.|
+| Volume | Start silence issue | The first 100 ms of silence isn't clean. Reduce the recording noise floor level, and leave the first 100 ms at the start silent.|
+| Volume| End silence issue| The last 100 ms of silence isn't clean. Reduce the recording noise floor level, and leave the last 100 ms at the end silent.|
+| Mismatch | Low scored words|Review the script and the audio content to make sure they match, and control the noise floor level. Reduce the length of long silence, or split the audio into multiple utterances if it's too long.|
+| Mismatch | Start silence issue |Extra audio was heard before the first word. Review the script and the audio content to make sure they match, control the noise floor level, and make the first 100 ms silent.|
+| Mismatch | End silence issue| Extra audio was heard after the last word. Review the script and the audio content to make sure they match, control the noise floor level, and make the last 100 ms silent.|
+| Mismatch | Low signal-noise ratio | Audio SNR level is lower than 20 dB. At least 35 dB is recommended.|
+| Mismatch | No score available |Failed to recognize speech content in this audio. Check the audio and the script content to make sure the audio is valid, and matches the script.|
+
+## Next steps
+
+- [Train your voice model](how-to-custom-voice-create-voice.md)
+- [Deploy and use your voice model](how-to-deploy-and-use-endpoint.md)
+- [How to record voice samples](record-custom-voice-samples.md)
ai-services How To Custom Voice Talent https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/how-to-custom-voice-talent.md
+
+ Title: "Set up voice talent for custom neural voice - Speech service"
+
+description: Create a voice talent profile with an audio file recorded by the voice talent, consenting to the usage of their speech data to train a custom voice model.
++++++ Last updated : 10/27/2022+++
+# Set up voice talent for Custom Neural Voice
+
+A voice talent is an individual or target speaker whose voice is recorded and used to create neural voice models.
+
+Before you can train a neural voice, you must submit a recording of the voice talent's consent statement. The voice talent statement is a recording of the voice talent reading a statement that they consent to the usage of their speech data to train a custom voice model. The consent statement is also used to verify that the voice talent is the same person as the speaker in the training data.
+
+> [!TIP]
+> Before you get started in Speech Studio, define your voice [persona and choose the right voice talent](record-custom-voice-samples.md#choose-your-voice-talent).
+
+You can find the verbal consent statement in multiple languages on [GitHub](https://github.com/Azure-Samples/Cognitive-Speech-TTS/blob/master/CustomVoice/script/verbal-statement-all-locales.txt). The language of the verbal statement must be the same as your recording. See also the [disclosure for voice talent](/legal/cognitive-services/speech-service/disclosure-voice-talent?context=/azure/ai-services/speech-service/context/context).
+
+## Add voice talent
+
+To add a voice talent profile and upload their consent statement, follow these steps:
+
+1. Sign in to the [Speech Studio](https://aka.ms/speechstudio/customvoice).
+1. Select **Custom Voice** > Your project name > **Set up voice talent** > **Add voice talent**.
+1. In the **Add new voice talent** wizard, describe the characteristics of the voice you're going to create. The scenarios that you specify here must be consistent with what you provided in the application form.
+1. Select **Next**.
+1. On the **Upload voice talent statement** page, follow the instructions to upload the voice talent statement you've recorded beforehand. Make sure the verbal statement was [recorded](record-custom-voice-samples.md) with the same settings, environment, and speaking style as your training data.
+ :::image type="content" source="media/custom-voice/upload-verbal-statement.png" alt-text="Screenshot of the voice talent statement upload dialog.":::
+1. Enter the voice talent name and company name. The voice talent name must be the name of the person who recorded the consent statement. The company name must match the company name that was spoken in the recorded statement.
+1. Select **Next**.
+1. Review the voice talent and persona details, and select **Submit**.
+
+After the voice talent status is *Succeeded*, you can proceed to [train your custom voice model](how-to-custom-voice-create-voice.md).
+
+## Next steps
+
+- [Prepare data for custom neural voice](how-to-custom-voice-prepare-data.md)
+- [Train your voice model](how-to-custom-voice-create-voice.md)
+- [Deploy and use your voice model](how-to-deploy-and-use-endpoint.md)
ai-services How To Custom Voice Training Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/how-to-custom-voice-training-data.md
+
+ Title: "Training data for Custom Neural Voice - Speech service"
+
+description: "Learn about the data types that you can use to train a Custom Neural Voice."
++++++ Last updated : 10/27/2022+++
+# Training data for Custom Neural Voice
+
+When you're ready to create a custom Text to speech voice for your application, the first step is to gather audio recordings and associated scripts to start training the voice model. The Speech service uses this data to create a unique voice tuned to match the voice in the recordings. After you've trained the voice, you can start synthesizing speech in your applications.
+
+> [!TIP]
+> To create a voice for production use, we recommend you use a professional recording studio and voice talent. For more information, see [record voice samples to create a custom neural voice](record-custom-voice-samples.md).
+
+## Types of training data
+
+A voice training dataset includes audio recordings, and a text file with the associated transcriptions. Each audio file should contain a single utterance (a single sentence or a single turn for a dialog system), and be less than 15 seconds long.
+
+In some cases, you may not have the right dataset ready and will want to test the custom neural voice training with available audio files, short or long, with or without transcripts.
+
+This table lists data types and how each is used to create a custom Text to speech voice model.
+
+| Data type | Description | When to use | Extra processing required |
+| | -- | -- | |
+| [Individual utterances + matching transcript](#individual-utterances--matching-transcript) | A collection (.zip) of audio files (.wav) as individual utterances. Each audio file should be 15 seconds or less in length, paired with a formatted transcript (.txt). | Professional recordings with matching transcripts | Ready for training. |
+| [Long audio + transcript](#long-audio--transcript-preview) | A collection (.zip) of long, unsegmented audio files (.wav or .mp3, longer than 20 seconds, at most 1000 audio files), paired with a collection (.zip) of transcripts that contains all spoken words. | You have audio files and matching transcripts, but they aren't segmented into utterances. | Segmentation (using batch transcription).<br>Audio format transformation wherever required. |
+| [Audio only (Preview)](#audio-only-preview) | A collection (.zip) of audio files (.wav or .mp3, at most 1000 audio files) without a transcript. | You only have audio files available, without transcripts. | Segmentation + transcript generation (using batch transcription).<br>Audio format transformation wherever required.|
+
+Files should be grouped by type into a dataset and uploaded as a zip file. Each dataset can only contain a single data type.
+
+> [!NOTE]
+> The maximum number of datasets allowed to be imported per subscription is 500 zip files for standard subscription (S0) users.
+
+## Individual utterances + matching transcript
+
+You can prepare recordings of individual utterances and the matching transcript in two ways. Either [write a script and have it read by a voice talent](record-custom-voice-samples.md) or use publicly available audio and transcribe it to text. If you do the latter, edit disfluencies from the audio files, such as "um" and other filler sounds, stutters, mumbled words, or mispronunciations.
+
+To produce a good voice model, create the recordings in a quiet room with a high-quality microphone. Consistent volume, speaking rate, speaking pitch, and expressive mannerisms of speech are essential.
+
+For data format examples, refer to the sample training set on [GitHub](https://github.com/Azure-Samples/Cognitive-Speech-TTS/tree/master/CustomVoice/Sample%20Data). The sample training set includes the sample script and the associated audio.
+
+### Audio data for Individual utterances + matching transcript
+
+Each audio file should contain a single utterance (a single sentence or a single turn of a dialog system), less than 15 seconds long. All files must be in the same spoken language. Multi-language custom Text to speech voices aren't supported, except for Chinese-English bilingual voices. Each audio file must have a unique filename with the filename extension .wav.
+
+Follow these guidelines when preparing audio.
+
+| Property | Value |
+| -- | -- |
+| File format | RIFF (.wav), grouped into a .zip file |
+| File name | File name characters supported by Windows OS, with .wav extension.<br>The characters \ / : * ? " < > \| aren't allowed. <br>It can't start or end with a space, and can't start with a dot. <br>No duplicate file names allowed. |
+| Sampling rate | When creating a custom neural voice, 24,000 Hz is required. |
+| Sample format | PCM, at least 16-bit |
+| Audio length | Shorter than 15 seconds |
+| Archive format | .zip |
+| Maximum archive size | 2048 MB |
+
+> [!NOTE]
+> The default sampling rate for a custom neural voice is 24,000 Hz. Audio files with a sampling rate lower than 16,000 Hz will be rejected. If a .zip file contains .wav files with different sample rates, only those equal to or higher than 16,000 Hz will be imported. Audio files with a sampling rate higher than 16,000 Hz and lower than 24,000 Hz will be up-sampled to 24,000 Hz to train a neural voice. It's recommended that you use a sampling rate of 24,000 Hz for your training data.
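+
+If you aren't sure whether your recordings meet these requirements, you can inspect and, if needed, convert them with an audio tool such as [SoX](http://sox.sourceforge.net/). The following is a minimal sketch; the file names are placeholders.
+
+```bash
+# Show the channels, sampling rate, and bit depth of a recording.
+sox --i myvoice.wav
+
+# Convert a recording to 24,000 Hz, 16-bit PCM .wav.
+sox myvoice.wav -r 24000 -b 16 myvoice-24khz.wav
+```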
+
+### Transcription data for Individual utterances + matching transcript
+
+The transcription file is a plain text file. Use these guidelines to prepare your transcriptions.
+
+| Property | Value |
+| -- | -- |
+| File format | Plain text (.txt) |
+| Encoding format | ANSI, ASCII, UTF-8, UTF-8-BOM, UTF-16-LE, or UTF-16-BE. For zh-CN, ANSI and ASCII encoding aren't supported. |
+| # of utterances per line | **One** - Each line of the transcription file should contain the name of one of the audio files, followed by the corresponding transcription. The file name and transcription should be separated by a tab (\t). |
+| Maximum file size | 2048 MB |
+
+Below is an example of how the transcripts are organized utterance by utterance in one .txt file:
+
+```
+0000000001[tab] This is the waistline, and it's falling.
+0000000002[tab] We have trouble scoring.
+0000000003[tab] It was Janet Maslin.
+```
+It's important that the transcripts are 100% accurate transcriptions of the corresponding audio. Errors in the transcripts will introduce quality loss during training.
+
+## Long audio + transcript (Preview)
+
+> [!NOTE]
+> For **Long audio + transcript (Preview)**, only these languages are supported: Chinese (Mandarin, Simplified), English (India), English (United Kingdom), English (United States), French (France), German (Germany), Italian (Italy), Japanese (Japan), Portuguese (Brazil), and Spanish (Mexico).
+
+In some cases, you may not have segmented audio available. The Speech Studio can help you segment long audio files and create transcriptions. The long-audio segmentation service will use the [Batch Transcription API](batch-transcription.md) feature of speech to text.
+
+During the segmentation processing, your audio files and the transcripts are also sent to the Custom Speech service to refine the recognition model so that accuracy can be improved for your data. No data is retained during this process. After the segmentation is done, only the segmented utterances and their mapping transcripts are stored for you to download and use in training.
+
+> [!NOTE]
+> This service will be charged toward your speech to text subscription usage. The long-audio segmentation service is only supported with standard (S0) Speech resources.
+
+### Audio data for Long audio + transcript
+
+Follow these guidelines when preparing audio for segmentation.
+
+| Property | Value |
+| -- | -- |
+| File format | RIFF (.wav) or .mp3, grouped into a .zip file |
+| File name | File name characters supported by Windows OS, with .wav extension. <br>The characters \ / : * ? " < > \| aren't allowed. <br>It can't start or end with a space, and can't start with a dot. <br>No duplicate file names allowed. |
+| Sampling rate | When creating a custom neural voice, 24,000 Hz is required. |
+| Sample format | RIFF (.wav): PCM, at least 16-bit<br>mp3: at least 256 kbps bit rate |
+| Audio length | Longer than 20 seconds |
+| Archive format | .zip |
+| Maximum archive size | 2048 MB, at most 1000 audio files included |
+
+> [!NOTE]
+> The default sampling rate for a custom neural voice is 24,000 Hz. Audio files with a sampling rate lower than 16,000 Hz will be rejected. Your audio files with a sampling rate higher than 16,000 Hz and lower than 24,000 Hz will be up-sampled to 24,000 Hz to train a neural voice. It's recommended that you use a sample rate of 24,000 Hz for your training data.
+
+All audio files should be grouped into a zip file. It's OK to put .wav files and .mp3 files into one audio zip. For example, you can upload a zip file containing an audio file named 'kingstory.wav', 45 seconds long, and another audio file named 'queenstory.mp3', 200 seconds long. All .mp3 files will be transformed into the .wav format after processing.
+
+### Transcription data for Long audio + transcript
+
+Transcripts must be prepared to the specifications listed in this table. Each audio file must be matched with a transcript.
+
+| Property | Value |
+| -- | -- |
+| File format | Plain text (.txt), grouped into a .zip |
+| File name | Use the same name as the matching audio file |
+| Encoding format |ANSI, ASCII, UTF-8, UTF-8-BOM, UTF-16-LE, or UTF-16-BE. For zh-CN, ANSI and ASCII encoding aren't supported. |
+| # of utterances per line | No limit |
+| Maximum file size | 2048 MB |
+
+All transcript files in this data type should be grouped into a zip file. For example, suppose you've uploaded a zip file containing an audio file named 'kingstory.wav', 45 seconds long, and another one named 'queenstory.mp3', 200 seconds long. You'll need to upload another zip file containing two transcripts, one named 'kingstory.txt' and the other 'queenstory.txt'. Within each plain text file, you provide the full correct transcription for the matching audio.
+
+After your dataset is successfully uploaded, we'll help you segment the audio file into utterances based on the transcript provided. You can check the segmented utterances and the matching transcripts by downloading the dataset. Unique IDs will be assigned to the segmented utterances automatically. It's important that you make sure the transcripts you provide are 100% accurate. Errors in the transcripts can reduce the accuracy during the audio segmentation and further introduce quality loss in the training phase that comes later.
+
+## Audio only (Preview)
+
+> [!NOTE]
+> For **Audio only (Preview)**, only these languages are supported: Chinese (Mandarin, Simplified), English (India), English (United Kingdom), English (United States), French (France), German (Germany), Italian (Italy), Japanese (Japan), Portuguese (Brazil), and Spanish (Mexico).
+
+If you don't have transcriptions for your audio recordings, use the **Audio only** option to upload your data. Our system can help you segment and transcribe your audio files. Keep in mind, this service will be charged toward your speech to text subscription usage.
+
+Follow these guidelines when preparing audio.
+
+> [!NOTE]
+> The long-audio segmentation service will leverage the batch transcription feature of speech to text, which only supports standard subscription (S0) users.
+
+| Property | Value |
+| -- | -- |
+| File format | RIFF (.wav) or .mp3, grouped into a .zip file |
+| File name | File name characters supported by Windows OS, with .wav extension. <br>The characters \ / : * ? " < > \| aren't allowed. <br>It can't start or end with a space, and can't start with a dot. <br>No duplicate file names allowed. |
+| Sampling rate | When creating a custom neural voice, 24,000 Hz is required. |
+| Sample format | RIFF (.wav): PCM, at least 16-bit<br>mp3: at least 256 kbps bit rate |
+| Audio length | No limit |
+| Archive format | .zip |
+| Maximum archive size | 2048 MB, at most 1000 audio files included |
+
+> [!NOTE]
+> The default sampling rate for a custom neural voice is 24,000 Hz. Your audio files with a sampling rate higher than 16,000 Hz and lower than 24,000 Hz will be up-sampled to 24,000 Hz to train a neural voice. It's recommended that you use a sample rate of 24,000 Hz for your training data.
+
+All audio files should be grouped into a zip file. Once your dataset is successfully uploaded, we'll help you segment the audio file into utterances based on our speech batch transcription service. Unique IDs will be assigned to the segmented utterances automatically. Matching transcripts will be generated through speech recognition. All .mp3 files will be transformed into the .wav format after processing. You can check the segmented utterances and the matching transcripts by downloading the dataset.
+
+## Next steps
+
+- [Train your voice model](how-to-custom-voice-create-voice.md)
+- [Deploy and use your voice model](how-to-deploy-and-use-endpoint.md)
+- [How to record voice samples](record-custom-voice-samples.md)
ai-services How To Custom Voice https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/how-to-custom-voice.md
+
+ Title: Create a project for Custom Neural Voice - Speech service
+
+description: Learn how to create a Custom Neural Voice project that contains data, models, tests, and endpoints in Speech Studio.
++++++ Last updated : 10/27/2022+++
+# Create a project for Custom Neural Voice
+
+Content for [Custom Neural Voice](https://aka.ms/customvoice), like data, models, tests, and endpoints, is organized into projects in Speech Studio. Each project is specific to a country/region and language, and the gender of the voice you want to create. For example, you might create a project for a female voice for your call center's chat bots that use English in the United States.
+
+> [!TIP]
+> Try [Custom Neural Voice (CNV) Lite](custom-neural-voice-lite.md) to demo and evaluate CNV before investing in professional recordings to create a higher-quality voice.
+
+All it takes to get started are a handful of audio files and the associated transcriptions. See if Custom Neural Voice supports your [language](language-support.md?tabs=tts) and [region](regions.md#speech-service).
+
+## Create a Custom Neural Voice Pro project
+
+To create a Custom Neural Voice Pro project, follow these steps:
+
+1. Sign in to the [Speech Studio](https://aka.ms/speechstudio/customvoice).
+1. Select the subscription and Speech resource to work with.
+
+ > [!IMPORTANT]
+ > Custom Neural Voice training is currently only available in some regions. After your voice model is trained in a supported region, you can copy it to a Speech resource in another region as needed. See footnotes in the [regions](regions.md#speech-service) table for more information.
+
+1. Select **Custom Voice** > **Create a project**.
+1. Select **Custom Neural Voice Pro** > **Next**.
+1. Follow the instructions provided by the wizard to create your project.
+
+Select the new project by name or select **Go to project**. You'll see these menu items in the left panel: **Set up voice talent**, **Prepare training data**, **Train model**, and **Deploy model**.
+
+## Next steps
+
+- [Set up voice talent](how-to-custom-voice-talent.md)
+- [Prepare data for custom neural voice](how-to-custom-voice-prepare-data.md)
+- [Train your voice model](how-to-custom-voice-create-voice.md)
+- [Deploy and use your voice model](how-to-deploy-and-use-endpoint.md)
ai-services How To Deploy And Use Endpoint https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/how-to-deploy-and-use-endpoint.md
+
+ Title: How to deploy and use voice model - Speech service
+
+description: Learn about how to deploy and use a custom neural voice model.
++++++ Last updated : 11/30/2022++
+zone_pivot_groups: programming-languages-set-nineteen
++
+# Deploy and use your voice model
+
+After you've successfully created and [trained](how-to-custom-voice-create-voice.md) your voice model, you deploy it to a custom neural voice endpoint.
+
+Use the Speech Studio to [add a deployment endpoint](#add-a-deployment-endpoint) for your custom neural voice. You can use either the Speech Studio or text to speech REST API to [suspend or resume](#suspend-and-resume-an-endpoint) a custom neural voice endpoint.
+
+> [!NOTE]
+> You can create up to 50 endpoints with a standard (S0) Speech resource, each with its own custom neural voice.
+
+To use your custom neural voice, you must specify the voice model name, use the custom URI directly in an HTTP request, and use the same Speech resource to pass through the authentication of the text to speech service.
+
+## Add a deployment endpoint
+
+To create a custom neural voice endpoint:
+
+1. Sign in to the [Speech Studio](https://aka.ms/speechstudio/customvoice).
+1. Select **Custom Voice** > Your project name > **Deploy model** > **Deploy model**.
+1. Select a voice model that you want to associate with this endpoint.
+1. Enter a **Name** and **Description** for your custom endpoint.
+1. Select **Deploy** to create your endpoint.
+
+After your endpoint is deployed, the endpoint name appears as a link. Select the link to display information specific to your endpoint, such as the endpoint key, endpoint URL, and sample code. When the status of the deployment is **Succeeded**, the endpoint is ready for use.
+
+## Application settings
+
+The application settings that you use as REST API [request parameters](#request-parameters) are available on the **Deploy model** tab in [Speech Studio](https://aka.ms/custom-voice-portal).
++
+* The **Endpoint key** shows the Speech resource key the endpoint is associated with. Use the endpoint key as the value of your `Ocp-Apim-Subscription-Key` request header.
+* The **Endpoint URL** shows your service region. Use the value that precedes `voice.speech.microsoft.com` as your service region request parameter. For example, use `eastus` if the endpoint URL is `https://eastus.voice.speech.microsoft.com/cognitiveservices/v1`.
+* The **Endpoint URL** shows your endpoint ID. Use the value appended to the `?deploymentId=` query parameter as the value of your endpoint ID request parameter.
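+
+For example, the following minimal Python sketch shows one way to pull the service region and endpoint ID out of the **Endpoint URL** string. The URL used here is only illustrative.
+
+```python
+# Minimal sketch: derive the region and endpoint ID request parameters from the
+# Endpoint URL shown on the Deploy model tab. The URL below is an example value.
+from urllib.parse import urlparse, parse_qs
+
+endpoint_url = "https://eastus.voice.speech.microsoft.com/cognitiveservices/v1?deploymentId=YourEndpointId"
+
+parsed = urlparse(endpoint_url)
+service_region = parsed.hostname.split(".")[0]            # "eastus"
+endpoint_id = parse_qs(parsed.query)["deploymentId"][0]   # "YourEndpointId"
+
+print(service_region, endpoint_id)
+```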
+
+## Use your custom voice
+
+The custom endpoint is functionally identical to the standard endpoint that's used for text to speech requests.
+
+One difference is that the `EndpointId` must be specified to use the custom voice via the Speech SDK. You can start with the [text to speech quickstart](get-started-text-to-speech.md) and then update the code with the `EndpointId` and `SpeechSynthesisVoiceName`.
+
+```csharp
+var speechConfig = SpeechConfig.FromSubscription(speechKey, speechRegion);
+speechConfig.SpeechSynthesisVoiceName = "YourCustomVoiceName";
+speechConfig.EndpointId = "YourEndpointId";
+```
+
+```cpp
+auto speechConfig = SpeechConfig::FromSubscription(speechKey, speechRegion);
+speechConfig->SetSpeechSynthesisVoiceName("YourCustomVoiceName");
+speechConfig->SetEndpointId("YourEndpointId");
+```
+
+```java
+SpeechConfig speechConfig = SpeechConfig.fromSubscription(speechKey, speechRegion);
+speechConfig.setSpeechSynthesisVoiceName("YourCustomVoiceName");
+speechConfig.setEndpointId("YourEndpointId");
+```
+
+```ObjectiveC
+SPXSpeechConfiguration *speechConfig = [[SPXSpeechConfiguration alloc] initWithSubscription:speechKey region:speechRegion];
+speechConfig.speechSynthesisVoiceName = @"YourCustomVoiceName";
+speechConfig.EndpointId = @"YourEndpointId";
+```
+
+```Python
+speech_config = speechsdk.SpeechConfig(subscription=os.environ.get('SPEECH_KEY'), region=os.environ.get('SPEECH_REGION'))
+speech_config.endpoint_id = "YourEndpointId"
+speech_config.speech_synthesis_voice_name = "YourCustomVoiceName"
+```
+
+To use a custom neural voice via [Speech Synthesis Markup Language (SSML)](speech-synthesis-markup-voice.md#voice-element), specify the model name as the voice name. This example uses the `YourCustomVoiceName` voice.
+
+```xml
+<speak version="1.0" xmlns="http://www.w3.org/2001/10/synthesis" xml:lang="en-US">
+ <voice name="YourCustomVoiceName">
+ This is the text that is spoken.
+ </voice>
+</speak>
+```
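+
+If you're using the Speech SDK, you can pass that SSML to the synthesizer instead of plain text. The following is a minimal Python sketch; the key and region environment variables, the endpoint ID, and the voice name are placeholders.
+
+```python
+# Minimal sketch: synthesize SSML that references the custom voice by name.
+# SPEECH_KEY, SPEECH_REGION, YourEndpointId, and YourCustomVoiceName are placeholders.
+import os
+import azure.cognitiveservices.speech as speechsdk
+
+speech_config = speechsdk.SpeechConfig(
+    subscription=os.environ.get('SPEECH_KEY'), region=os.environ.get('SPEECH_REGION'))
+speech_config.endpoint_id = "YourEndpointId"
+
+ssml = """<speak version="1.0" xmlns="http://www.w3.org/2001/10/synthesis" xml:lang="en-US">
+  <voice name="YourCustomVoiceName">This is the text that is spoken.</voice>
+</speak>"""
+
+speech_synthesizer = speechsdk.SpeechSynthesizer(speech_config=speech_config)
+result = speech_synthesizer.speak_ssml_async(ssml).get()
+print(result.reason)
+```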
+
+## Switch to a new voice model in your product
+
+Once you've updated your voice model to the latest engine version, or if you want to switch to a new voice in your product, you need to redeploy the new voice model to a new endpoint. Redeploying a new voice model on your existing endpoint isn't supported. After deployment, switch the traffic to the newly created endpoint. We recommend that you transfer the traffic to the new endpoint in a test environment first to ensure that the traffic works well, and then transfer to the new endpoint in the production environment. During the transition, you need to keep the old endpoint. If there are problems with the new endpoint during the transition, you can switch back to your old endpoint. If the traffic has been running well on the new endpoint for about 24 hours (recommended value), you can delete your old endpoint.
+
+> [!NOTE]
+> If your voice name is changed and you are using Speech Synthesis Markup Language (SSML), be sure to use the new voice name in SSML.
+
+## Suspend and resume an endpoint
+
+You can suspend or resume an endpoint, to limit spend and conserve resources that aren't in use. You won't be charged while the endpoint is suspended. When you resume an endpoint, you can continue to use the same endpoint URL in your application to synthesize speech.
+
+You can suspend and resume an endpoint in Speech Studio or via the REST API.
+
+> [!NOTE]
+> The suspend operation will complete almost immediately. The resume operation completes in about the same amount of time as a new deployment.
+
+### Suspend and resume an endpoint in Speech Studio
+
+This section describes how to suspend or resume a custom neural voice endpoint in the Speech Studio portal.
+
+#### Suspend endpoint
+
+1. To suspend and deactivate your endpoint, select **Suspend** from the **Deploy model** tab in [Speech Studio](https://aka.ms/custom-voice-portal).
+
+ :::image type="content" source="media/custom-voice/cnv-endpoint-suspend.png" alt-text="Screenshot of the select suspend endpoint option":::
+
+1. In the dialog box that appears, select **Submit**. After the endpoint is suspended, Speech Studio will show the **Successfully suspended endpoint** notification.
+
+#### Resume endpoint
+
+1. To resume and activate your endpoint, select **Resume** from the **Deploy model** tab in [Speech Studio](https://aka.ms/custom-voice-portal).
+
+ :::image type="content" source="media/custom-voice/cnv-endpoint-resume.png" alt-text="Screenshot of the select resume endpoint option":::
+
+1. In the dialog box that appears, select **Submit**. After you successfully reactivate the endpoint, the status will change from **Suspended** to **Succeeded**.
+
+### Suspend and resume endpoint via REST API
+
+This section will show you how to [get](#get-endpoint), [suspend](#suspend-endpoint), or [resume](#resume-endpoint) a custom neural voice endpoint via REST API.
++
+#### Get endpoint
+
+Get the endpoint by endpoint ID. The operation returns details about an endpoint such as model ID, project ID, and status.
+
+For example, you might want to track the status progression for [suspend](#suspend-endpoint) or [resume](#resume-endpoint) operations. Use the `status` property in the response payload to determine the status of the endpoint.
+
+The possible `status` property values are:
+
+| Status | Description |
+| - | |
+| `NotStarted` | The endpoint hasn't yet been deployed, and it's not available for speech synthesis. |
+| `Running` | The endpoint is in the process of being deployed or resumed, and it's not available for speech synthesis. |
+| `Succeeded` | The endpoint is active and available for speech synthesis. The endpoint has been deployed or the resume operation succeeded. |
+| `Failed` | The endpoint deploy or suspend operation failed. The endpoint can only be viewed or deleted in [Speech Studio](https://aka.ms/custom-voice-portal).|
+| `Disabling` | The endpoint is in the process of being suspended, and it's not available for speech synthesis. |
+| `Disabled` | The endpoint is inactive, and it's not available for speech synthesis. The suspend operation succeeded or the resume operation failed. |
+
+> [!Tip]
+> If the status is `Failed` or `Disabled`, check `properties.error` for a detailed error message. However, there won't be error details if the status is `Disabled` due to a successful suspend operation.
+
+##### Get endpoint example
+
+For information about endpoint ID, region, and Speech resource key parameters, see [request parameters](#request-parameters).
+
+HTTP example:
+
+```HTTP
+GET api/texttospeech/v3.0/endpoints/<YourEndpointId> HTTP/1.1
+Ocp-Apim-Subscription-Key: YourResourceKey
+Host: <YourResourceRegion>.customvoice.api.speech.microsoft.com
+```
+
+cURL example:
+
+```Console
+curl -v -X GET "https://<YourResourceRegion>.customvoice.api.speech.microsoft.com/api/texttospeech/v3.0/endpoints/<YourEndpointId>" -H "Ocp-Apim-Subscription-Key: <YourResourceKey>"
+```
+
+Response header example:
+
+```
+Status code: 200 OK
+```
+
+Response body example:
+
+```json
+{
+ "model": {
+ "id": "a92aa4b5-30f5-40db-820c-d2d57353de44"
+ },
+ "project": {
+ "id": "ffc87aba-9f5f-4bfa-9923-b98186591a79"
+ },
+ "properties": {},
+ "status": "Succeeded",
+ "lastActionDateTime": "2019-01-07T11:36:07Z",
+ "id": "e7ffdf12-17c7-4421-9428-a7235931a653",
+ "createdDateTime": "2019-01-07T11:34:12Z",
+ "locale": "en-US",
+ "name": "Voice endpoint",
+ "description": "Example for voice endpoint"
+}
+```
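+
+For example, the following minimal Python sketch combines the get endpoint request with a simple polling loop that waits for a terminal status. The values in angle brackets are placeholders, and the `requests` package is assumed to be installed.
+
+```python
+# Minimal sketch: poll the get endpoint operation until the endpoint reaches a
+# terminal status. Replace the placeholder values before running.
+import time
+import requests
+
+region = "<YourResourceRegion>"
+endpoint_id = "<YourEndpointId>"
+headers = {"Ocp-Apim-Subscription-Key": "<YourResourceKey>"}
+url = f"https://{region}.customvoice.api.speech.microsoft.com/api/texttospeech/v3.0/endpoints/{endpoint_id}"
+
+while True:
+    status = requests.get(url, headers=headers).json()["status"]
+    print(status)
+    if status in ("Succeeded", "Disabled", "Failed"):
+        break
+    time.sleep(10)  # or use the interval suggested by the Retry-After header
+```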
+
+#### Suspend endpoint
+
+You can suspend an endpoint to limit spend and conserve resources that aren't in use. You won't be charged while the endpoint is suspended. When you resume an endpoint, you can use the same endpoint URL in your application to synthesize speech.
+
+You suspend an endpoint with its unique deployment ID. The endpoint status must be `Succeeded` before you can suspend it.
+
+Use the [get endpoint](#get-endpoint) operation to poll and track the status progression from `Succeeded`, to `Disabling`, and finally to `Disabled`.
+
+##### Suspend endpoint example
+
+For information about endpoint ID, region, and Speech resource key parameters, see [request parameters](#request-parameters).
+
+HTTP example:
+
+```HTTP
+POST api/texttospeech/v3.0/endpoints/<YourEndpointId>/suspend HTTP/1.1
+Ocp-Apim-Subscription-Key: YourResourceKey
+Host: <YourResourceRegion>.customvoice.api.speech.microsoft.com
+Content-Type: application/json
+Content-Length: 0
+```
+
+cURL example:
+
+```Console
+curl -v -X POST "https://<YourResourceRegion>.customvoice.api.speech.microsoft.com/api/texttospeech/v3.0/endpoints/<YourEndpointId>/suspend" -H "Ocp-Apim-Subscription-Key: <YourResourceKey>" -H "content-type: application/json" -H "content-length: 0"
+```
+
+Response header example:
+
+```
+Status code: 202 Accepted
+```
+
+For more information, see [response headers](#response-headers).
+
+#### Resume endpoint
+
+When you resume an endpoint, you can use the same endpoint URL that you used before it was suspended.
+
+You resume an endpoint with its unique deployment ID. The endpoint status must be `Disabled` before you can resume it.
+
+Use the [get endpoint](#get-endpoint) operation to poll and track the status progression from `Disabled`, to `Running`, and finally to `Succeeded`. If the resume operation failed, the endpoint status will be `Disabled`.
+
+##### Resume endpoint example
+
+For information about endpoint ID, region, and Speech resource key parameters, see [request parameters](#request-parameters).
+
+HTTP example:
+
+```HTTP
+POST api/texttospeech/v3.0/endpoints/<YourEndpointId>/resume HTTP/1.1
+Ocp-Apim-Subscription-Key: YourResourceKey
+Host: <YourResourceRegion>.customvoice.api.speech.microsoft.com
+Content-Type: application/json
+Content-Length: 0
+```
+
+cURL example:
+
+```Console
+curl -v -X POST "https://<YourResourceRegion>.customvoice.api.speech.microsoft.com/api/texttospeech/v3.0/endpoints/<YourEndpointId>/resume" -H "Ocp-Apim-Subscription-Key: <YourResourceKey>" -H "content-type: application/json" -H "content-length: 0"
+```
+
+Response header example:
+```
+Status code: 202 Accepted
+```
+
+For more information, see [response headers](#response-headers).
+
+#### Parameters and response codes
+
+##### Request parameters
+
+You use these request parameters with calls to the REST API. See [application settings](#application-settings) for information about where to get your region, endpoint ID, and Speech resource key in Speech Studio.
+
+| Name | Location | Required | Type | Description |
+| | | -- | | |
+| `YourResourceRegion` | Path | `True` | string | The Azure region the endpoint is associated with. |
+| `YourEndpointId` | Path | `True` | string | The identifier of the endpoint. |
+| `Ocp-Apim-Subscription-Key` | Header | `True` | string | The Speech resource key the endpoint is associated with. |
+
+##### Response headers
+
+Status code: 202 Accepted
+
+| Name | Type | Description |
+| - | | -- |
+| `Location` | string | The location of the endpoint. It can be used as the full URL to get the endpoint. |
+| `Retry-After` | string | The recommended number of seconds to wait before retrying the request to get the endpoint status. |
+
+##### HTTP status codes
+
+The HTTP status code for each response indicates success or common errors.
+
+| HTTP status code | Description | Possible reason |
+| - | -- | - |
+| 200 | OK | The request was successful. |
+| 202 | Accepted | The request has been accepted and is being processed. |
+| 400 | Bad Request | The value of a parameter is invalid, or a required parameter is missing, empty, or null. One common issue is a header that is too long. |
+| 401 | Unauthorized | The request isn't authorized. Check to make sure your Speech resource key or [token](rest-speech-to-text-short.md#authentication) is valid and in the correct region. |
+| 429 | Too Many Requests | You've exceeded the quota or rate of requests allowed for your Speech resource. |
+| 502 | Bad Gateway | Network or server-side issue. May also indicate invalid headers.|
+
+## Next steps
+
+- [How to record voice samples](record-custom-voice-samples.md)
+- [Text to speech API reference](rest-text-to-speech.md)
+- [Batch synthesis](batch-synthesis.md)
ai-services How To Develop Custom Commands Application https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/how-to-develop-custom-commands-application.md
+
+ Title: 'How-to: Develop Custom Commands applications - Speech service'
+
+description: Learn how to develop and customize Custom Commands applications. These voice-command apps are best suited for task completion or command-and-control scenarios.
+++++++ Last updated : 12/15/2020++++
+# Develop Custom Commands applications
++
+In this how-to article, you learn how to develop and configure Custom Commands applications. The Custom Commands feature helps you build rich voice-command apps that are optimized for voice-first interaction experiences. The feature is best suited to task completion or command-and-control scenarios. It's particularly well suited for Internet of Things (IoT) devices and for ambient and headless devices.
+
+In this article, you create an application that can turn a TV on and off, set the temperature, and set an alarm. After you create these basic commands, you'll learn about the following options for customizing commands:
+
+* Adding parameters to commands
+* Adding configurations to command parameters
+* Building interaction rules
+* Creating language-generation templates for speech responses
+* Using Custom Voice tools
+
+## Create an application by using simple commands
+
+Start by creating an empty Custom Commands application. For details, refer to the [quickstart](quickstart-custom-commands-application.md). In this application, instead of importing a project, you create a blank project.
+
+1. In the **Name** box, enter the project name *Smart-Room-Lite* (or another name of your choice).
+1. In the **Language** list, select **English (United States)**.
+1. Select or create a LUIS resource.
+
+ > [!div class="mx-imgBorder"]
+ > ![Screenshot showing the "New project" window.](media/custom-commands/create-new-project.png)
+
+### Update LUIS resources (optional)
+
+You can update the authoring resource that you selected in the **New project** window. You can also set a prediction resource.
+
+A prediction resource is used for recognition when your Custom Commands application is published. You don't need a prediction resource during the development and testing phases.
+
+### Add a TurnOn command
+
+In the empty Smart-Room-Lite Custom Commands application you created, add a command. The command will process an utterance, `Turn on the tv`. It will respond with the message `Ok, turning the tv on`.
+
+1. Create a new command by selecting **New command** at the top of the left pane. The **New command** window opens.
+1. For the **Name** field, provide the value `TurnOn`.
+1. Select **Create**.
+
+The middle pane lists the properties of the command.
+
+The following table explains the command's configuration properties. For more information, see [Custom Commands concepts and definitions](./custom-commands-references.md).
+
+| Configuration | Description |
+| - | |
+| Example sentences | Example utterances the user can say to trigger this command. |
+| Parameters | Information required to complete the command. |
+| Completion rules | Actions to be taken to fulfill the command. Examples: responding to the user or communicating with a web service. |
+| Interaction rules | Other rules to handle more specific or complex situations. |
++
+> [!div class="mx-imgBorder"]
+> ![Screenshot showing where to create a command.](media/custom-commands/add-new-command.png)
+
+#### Add example sentences
+
+In the **Example sentences** section, you provide an example of what the user can say.
+
+1. In the middle pane, select **Example sentences**.
+1. In the pane on the right, add examples:
+
+ ```
+ Turn on the tv
+ ```
+
+1. At the top of the pane, select **Save**.
+
+You don't have parameters yet, so you can move to the **Completion rules** section.
+
+#### Add a completion rule
+
+Next, the command needs a completion rule. This rule tells the user that a fulfillment action is being taken.
+
+For more information about rules and completion rules, see [Custom Commands concepts and definitions](./custom-commands-references.md).
+
+1. Select the default completion rule **Done**. Then edit it as follows:
++
+ | Setting | Suggested value | Description |
+ | - | - | -- |
+ | **Name** | `ConfirmationResponse` | A name describing the purpose of the rule |
+ | **Conditions** | None | Conditions that determine when the rule can run |
+ | **Actions** | **Send speech response** > **Simple editor** > `Ok, turning the tv on` | The action to take when the rule condition is true |
+
+ > [!div class="mx-imgBorder"]
+ > ![Screenshot showing where to create a speech response.](media/custom-commands/create-speech-response-action.png)
+
+1. Select **Save** to save the action.
+1. Back in the **Completion rules** section, select **Save** to save all changes.
+
+ > [!NOTE]
+ > You don't have to use the default completion rule that comes with the command. You can delete the default completion rule and add your own rule.
+
+### Add a SetTemperature command
+
+Now add one more command, `SetTemperature`. This command will take a single utterance, `Set the temperature to 40 degrees`, and respond with the message `Ok, setting temperature to 40 degrees`.
+
+To create the new command, follow the steps you used for the `TurnOn` command, but use the example sentence `Set the temperature to 40 degrees`.
+
+Then edit the existing **Done** completion rules as follows:
+
+| Setting | Suggested value |
+| - | - |
+| **Name** | `ConfirmationResponse` |
+| **Conditions** | None |
+| **Actions** | **Send speech response** > **Simple editor** > **First variation** > `Ok, setting temperature to 40 degrees` |
+
+Select **Save** to save all changes to the command.
+
+### Add a SetAlarm command
+
+Create a new `SetAlarm` command. Use the example sentence `Set an alarm for 9 am tomorrow`. Then edit the existing **Done** completion rules as follows:
+
+| Setting | Suggested value |
+| - | - |
+| **Name** | `ConfirmationResponse` |
+| **Conditions** | None |
+| **Actions** | **Send speech response** > **Simple editor** > **First variation** > `Ok, setting an alarm for 9 am tomorrow` |
+
+Select **Save** to save all changes to the command.
+
+### Try it out
+
+Test the application's behavior by using the test pane:
+
+1. In the upper-right corner of the pane, select the **Train** icon.
+1. When the training finishes, select **Test**.
+
+Try out the following utterance examples by using voice or text:
+
+- You type: *set the temperature to 40 degrees*
+- Expected response: Ok, setting temperature to 40 degrees
+- You type: *turn on the tv*
+- Expected response: Ok, turning the tv on
+- You type: *set an alarm for 9 am tomorrow*
+- Expected response: Ok, setting an alarm for 9 am tomorrow
+
+> [!div class="mx-imgBorder"]
+> ![Screenshot showing the test in a web-chat interface.](media/custom-commands/create-basic-test-chat-no-mic.png)
+
+> [!TIP]
+> In the test pane, you can select **Turn details** for information about how this voice input or text input was processed.
+
+## Add parameters to commands
+
+In this section, you learn how to add parameters to your commands. Commands require parameters to complete a task. In complex scenarios, parameters can be used to define conditions that trigger custom actions.
+
+### Configure parameters for a TurnOn command
+
+Start by editing the existing `TurnOn` command to turn on and turn off multiple devices.
+
+1. Now that the command will handle both on and off scenarios, rename the command as *TurnOnOff*.
+ 1. In the pane on the left, select the **TurnOn** command. Then next to **New command** at the top of the pane, select the edit button.
+
+ 1. In the **Rename command** window, change the name to *TurnOnOff*.
+
+1. Add a new parameter to the command. The parameter represents whether the user wants to turn the device on or off.
+ 1. At the top of the middle pane, select **Add**. From the drop-down menu, select **Parameter**.
+ 1. In the pane on the right, in the **Parameters** section, in the **Name** box, add `OnOff`.
+ 1. Select **Required**. In the **Add response for a required parameter** window, select **Simple editor**. In the **First variation** field, add *On or Off?*.
+ 1. Select **Update**.
+
+ > [!div class="mx-imgBorder"]
+ > ![Screenshot that shows the 'Add response for a required parameter' section with the 'Simple editor' tab selected.](media/custom-commands/add-required-on-off-parameter-response.png)
+
+ 1. Configure the parameter's properties by using the following table. For information about all of the configuration properties of a command, see [Custom Commands concepts and definitions](./custom-commands-references.md).
++
+ | Configuration | Suggested value | Description |
+ | | -| |
+ | **Name** | `OnOff` | A descriptive name for the parameter |
+ | **Required** | Selected | Check box indicating whether a value for this parameter is required before the command finishes. |
+ | **Response for required parameter** |**Simple editor** > `On or Off?` | A prompt asking for the value of this parameter when it isn't known. |
+ | **Type** | **String** | Parameter type, such as Number, String, Date Time, or Geography. |
+ | **Configuration** | **Accept predefined input values from an internal catalog** | For strings, this setting limits inputs to a set of possible values. |
+ | **Predefined input values** | `on`, `off` | Set of possible values and their aliases. |
++
+ 1. To add predefined input values, select **Add a predefined input**. In the **New Item** window, enter a *Name* as shown in the preceding table. In this case, you're not using aliases, so you can leave that field blank.
+
+ > [!div class="mx-imgBorder"]
+ > ![Screenshot showing how to create a parameter.](media/custom-commands/create-on-off-parameter.png)
+
+ 1. Select **Save** to save all configurations of the parameter.
+
+#### Add a SubjectDevice parameter
+
+1. To add a second parameter to represent the name of the devices that can be controlled by using this command, select **Add**. Use the following configuration.
++
+ | Setting | Suggested value |
+ | | |
+ | **Name** | `SubjectDevice` |
+ | **Required** | Selected |
+ | **Response for required parameter** | **Simple editor** > `Which device do you want to control?` |
+ | **Type** | **String** |
+ | **Configuration** | **Accept predefined input values from an internal catalog** |
+ | **Predefined input values** | `tv`, `fan` |
+ | **Aliases** (`tv`) | `television`, `telly` |
+
+1. Select **Save**.
+
+#### Modify example sentences
+
+For commands that use parameters, it's helpful to add example sentences that cover all possible combinations. For example:
+
+* Complete parameter information: `turn {OnOff} the {SubjectDevice}`
+* Partial parameter information: `turn it {OnOff}`
+* No parameter information: `turn something`
+
+Example sentences that use varying degrees of information allow the Custom Commands application to resolve both one-shot resolutions and multiple-turn resolutions by using partial information.
+
+With that information in mind, edit the example sentences to use these suggested parameters:
+
+```
+turn {OnOff} the {SubjectDevice}
+{SubjectDevice} {OnOff}
+turn it {OnOff}
+turn something {OnOff}
+turn something
+```
+
+Select **Save**.
+
+> [!TIP]
+> In the example-sentences editor, use curly braces to refer to your parameters. For example, `turn {OnOff} the {SubjectDevice}`.
+> Use a tab for automatic completion backed by previously created parameters.
+
+#### Modify completion rules to include parameters
+
+Modify the existing completion rule `ConfirmationResponse`.
+
+1. In the **Conditions** section, select **Add a condition**.
+1. In the **New Condition** window, in the **Type** list, select **Required parameters**. In the list that follows, select both **OnOff** and **SubjectDevice**.
+1. Select **Create**.
+1. In the **Actions** section, edit the **Send speech response** action by hovering over it and selecting the edit button. This time, use the newly created `OnOff` and `SubjectDevice` parameters:
+
+ ```
+ Ok, turning the {SubjectDevice} {OnOff}
+ ```
+1. Select **Save**.
+
+Try out the changes by selecting the **Train** icon at the top of the pane on the right.
+
+When the training finishes, select **Test**. A **Test your application** window appears. Try the following interactions:
+
+- Input: *turn off the tv*
+- Output: Ok, turning off the tv
+- Input: *turn off the television*
+- Output: Ok, turning off the tv
+- Input: *turn it off*
+- Output: Which device do you want to control?
+- Input: *the tv*
+- Output: Ok, turning off the tv
+
+### Configure parameters for a SetTemperature command
+
+Modify the `SetTemperature` command to enable it to set the temperature as the user directs.
+
+Add a `TemperatureValue` parameter. Use the following configuration:
+
+| Configuration | Suggested value |
+| | -|
+| **Name** | `TemperatureValue` |
+| **Required** | Selected |
+| **Response for required parameter** | **Simple editor** > `What temperature would you like?`
+| **Type** | `Number` |
++
+Edit the example utterances to use the following values.
+
+```
+set the temperature to {TemperatureValue} degrees
+change the temperature to {TemperatureValue}
+set the temperature
+change the temperature
+```
+
+Edit the existing completion rules. Use the following configuration.
+
+| Configuration | Suggested value |
+| | -|
+| **Conditions** | **Required parameter** > **TemperatureValue** |
+| **Actions** | **Send speech response** > `Ok, setting temperature to {TemperatureValue} degrees` |
+
+### Configure parameters for a SetAlarm command
+
+Add a parameter called `DateTime`. Use the following configuration.
+
+ | Setting | Suggested value |
+ | | -|
+ | **Name** | `DateTime` |
+ | **Required** | Selected |
+ | **Response for required parameter** | **Simple editor** > `For what time?` |
+ | **Type** | **DateTime** |
+ | **Date Defaults** | If the date is missing, use today. |
+ | **Time Defaults** | If the time is missing, use the start of the day. |
++
+> [!NOTE]
+> This article mostly uses String, Number, and DateTime parameter types. For a list of all supported parameter types and their properties, see [Custom Commands concepts and definitions](./custom-commands-references.md).
++
+Edit the example utterances. Use the following values.
+
+```
+set an alarm for {DateTime}
+set alarm {DateTime}
+alarm for {DateTime}
+```
+
+Edit the existing completion rules. Use the following configuration.
+
+ | Setting | Suggested value |
+ | - | - |
+ | **Actions** | **Send speech response** > `Ok, alarm set for {DateTime}` |
+
+Test the three commands together by using utterances related to different commands. (You can switch between the different commands.)
+
+- Input: *Set an alarm*
+- Output: For what time?
+- Input: *Turn on the tv*
+- Output: Ok, turning the tv on
+- Input: *Set an alarm*
+- Output: For what time?
+- Input: *5 pm*
+- Output: Ok, alarm set for 2020-05-01 17:00:00
+
+## Add configurations to command parameters
+
+In this section, you learn more about advanced parameter configuration, including:
+
+ - How parameter values can belong to a set that's defined outside of the Custom Commands application.
+ - How to add validation clauses on the parameter values.
+
+### Configure a parameter as an external catalog entity
+
+The Custom Commands feature allows you to configure string-type parameters to refer to external catalogs hosted over a web endpoint. So you can update the external catalog independently without editing the Custom Commands application. This approach is useful in cases where the catalog entries are numerous.
+
+Reuse the `SubjectDevice` parameter from the `TurnOnOff` command. The current configuration for this parameter is **Accept predefined inputs from internal catalog**. This configuration refers to a static list of devices in the parameter configuration. Move this content out to an external data source that can be updated independently.
+
+To move the content, start by adding a new web endpoint. In the pane on the left, go to the **Web endpoints** section. There, add a new web endpoint URL. Use the following configuration.
+
+| Setting | Suggested value |
+|-|-|
+| **Name** | `getDevices` |
+| **URL** | `<Your endpoint of getDevices.json>` |
+| **Method** | **GET** |
+
+Then, configure and host a web endpoint that returns a JSON file listing the devices that can be controlled. The web endpoint should return JSON that's formatted like this example:
+
+```json
+{
+ "fan" : [],
+ "refrigerator" : [
+ "fridge"
+ ],
+ "lights" : [
+ "bulb",
+ "bulbs",
+ "light",
+ "light bulb"
+ ],
+ "tv" : [
+ "telly",
+ "television"
+ ]
+}
+
+```
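+
+If you just need a quick endpoint to experiment with, the following minimal Python sketch serves that JSON using only the standard library. This is only an illustration; the port number is arbitrary, and the URL you register in the **Web endpoints** section must be reachable by the Custom Commands service.
+
+```python
+# Minimal sketch: serve the device catalog as JSON over HTTP for testing.
+# The port (8080) is arbitrary; the catalog mirrors the example above.
+import json
+from http.server import BaseHTTPRequestHandler, HTTPServer
+
+DEVICES = {
+    "fan": [],
+    "refrigerator": ["fridge"],
+    "lights": ["bulb", "bulbs", "light", "light bulb"],
+    "tv": ["telly", "television"],
+}
+
+class CatalogHandler(BaseHTTPRequestHandler):
+    def do_GET(self):
+        body = json.dumps(DEVICES).encode("utf-8")
+        self.send_response(200)
+        self.send_header("Content-Type", "application/json")
+        self.send_header("Content-Length", str(len(body)))
+        self.end_headers()
+        self.wfile.write(body)
+
+HTTPServer(("", 8080), CatalogHandler).serve_forever()
+```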
+
+Next, go to the **SubjectDevice** parameter settings page. Set up the following properties.
+
+| Setting | Suggested value |
+| -| - |
+| **Configuration** | **Accept predefined inputs from external catalog** |
+| **Catalog endpoint** | `getDevices` |
+| **Method** | **GET** |
+
+Then select **Save**.
+
+> [!IMPORTANT]
+> You won't see an option to configure a parameter to accept inputs from an external catalog unless you have the web endpoint set in the **Web endpoint** section in the pane on the left.
+
+Try it out by selecting **Train**. After the training finishes, select **Test** and try a few interactions.
+
+* Input: *turn on*
+* Output: Which device do you want to control?
+* Input: *lights*
+* Output: Ok, turning the lights on
+
+> [!NOTE]
+> You can now control all the devices hosted on the web endpoint. But you still need to train the application to test the new changes and then republish the application.
+
+### Add validation to parameters
+
+*Validations* are constructs that apply to certain parameter types that allow you to configure constraints on the parameter's value. They prompt you for corrections if values don't fall within the constraints. For a list of parameter types that extend the validation construct, see [Custom Commands concepts and definitions](./custom-commands-references.md).
+
+Test out validations by using the `SetTemperature` command. Use the following steps to add a validation for the `Temperature` parameter.
+
+1. In the pane on the left, select the **SetTemperature** command.
+1. In the middle pane, select **Temperature**.
+1. In the pane on the right, select **Add a validation**.
+1. In the **New validation** window, configure validation as shown in the following table. Then select **Create**.
++
+ | Parameter configuration | Suggested value | Description |
+ | - | - | - |
+ | **Min Value** | `60` | For Number parameters, the minimum value this parameter can assume |
+ | **Max Value** | `80` | For Number parameters, the maximum value this parameter can assume |
+ | **Failure response** | **Simple editor** > **First variation** > `Sorry, I can only set temperature between 60 and 80 degrees. What temperature do you want?` | A prompt to ask for a new value if the validation fails |
+
+ > [!div class="mx-imgBorder"]
+ > ![Screenshot showing how to add a range validation.](media/custom-commands/add-validations-temperature.png)
+
+Try it out by selecting the **Train** icon at the top of the pane on the right. After the training finishes, select **Test**. Try a few interactions:
+
+- Input: *Set the temperature to 72 degrees*
+- Output: Ok, setting temperature to 72 degrees
+- Input: *Set the temperature to 45 degrees*
+- Output: Sorry, I can only set temperature between 60 and 80 degrees
+- Input: *make it 72 degrees instead*
+- Output: Ok, setting temperature to 72 degrees
+
+## Add interaction rules
+
+Interaction rules are *additional* rules that handle specific or complex situations. Although you're free to author your own interaction rules, in this example you use interaction rules for the following scenarios:
+
+* Confirming commands
+* Adding a one-step correction to commands
+
+For more information about interaction rules, see [Custom Commands concepts and definitions](./custom-commands-references.md).
+
+### Add confirmations to a command
+
+To add a confirmation, you use the `SetTemperature` command. To achieve confirmation, create interaction rules by using the following steps:
+
+1. In the pane on the left, select the **SetTemperature** command.
+1. In the middle pane, add interaction rules by selecting **Add**. Then select **Interaction rules** > **Confirm command**.
+
+ This action adds three interaction rules. The rules ask the user to confirm the temperature value. They expect a confirmation (yes or no) for the next turn.
+
+ 1. Modify the **Confirm command** interaction rule by using the following configuration:
+ 1. Change the name to **Confirm temperature**.
+ 1. The condition **All required parameters** has already been added.
+ 1. Add a new action: **Type** > **Send speech response** > **Are you sure you want to set the temperature as {TemperatureValue} degrees?**
+ 1. In the **Expectations** section, leave the default value of **Expecting confirmation from user**.
+
+ > [!div class="mx-imgBorder"]
+ > ![Screenshot showing how to create the required parameter response.](media/custom-speech-commands/add-confirmation-set-temperature.png)
++
+ 1. Modify the **Confirmation succeeded** interaction rule to handle a successful confirmation (the user said yes).
+
+ 1. Change the name to **Confirmation temperature succeeded**.
+ 1. Leave the existing **Confirmation was successful** condition.
+ 1. Add a new condition: **Type** > **Required parameters** > **TemperatureValue**.
+ 1. Leave the default **Post-execution state** value as **Execute completion rules**.
+
+ 1. Modify the **Confirmation denied** interaction rule to handle scenarios when confirmation is denied (the user said no).
+
+ 1. Change the name to **Confirmation temperature denied**.
+ 1. Leave the existing **Confirmation was denied** condition.
+ 1. Add a new condition: **Type** > **Required parameters** > **TemperatureValue**.
+ 1. Add a new action: **Type** > **Send speech response** > **No problem. What temperature then?**.
+ 1. Change the default **Post-execution state** value to **Wait for user's input**.
+
+> [!IMPORTANT]
+> In this article, you use the built-in confirmation capability. You can also manually add interaction rules one by one.
+
+Try out the changes by selecting **Train**. When the training finishes, select **Test**.
+
+- **Input**: *Set temperature to 80 degrees*
+- **Output**: are you sure you want to set the temperature as 80 degrees?
+- **Input**: *No*
+- **Output**: No problem. What temperature then?
+- **Input**: *72 degrees*
+- **Output**: are you sure you want to set the temperature as 72 degrees?
+- **Input**: *Yes*
+- **Output**: OK, setting temperature to 72 degrees
+
+### Implement corrections in a command
+
+In this section, you'll configure a one-step correction. This correction is used after the fulfillment action has run. You'll also see an example of how a correction is enabled by default if the command isn't fulfilled yet. To add a correction when the command isn't finished, add the new parameter `AlarmTone`.
+
+In the left pane, select the **SetAlarm** command. Then add the new parameter **AlarmTone**.
+
+- **Name** > `AlarmTone`
+- **Type** > **String**
+- **Default Value** > **Chimes**
+- **Configuration** > **Accept predefined input values from the internal catalog**
+- **Predefined input values** > **Chimes**, **Jingle**, and **Echo** (These values are individual predefined inputs.)
++
+Next, update the response for the **DateTime** parameter to **Ready to set alarm with tone as {AlarmTone}. For what time?**. Then modify the completion rule as follows:
+
+1. Select the existing completion rule **ConfirmationResponse**.
+1. In the pane on the right, hover over the existing action and select **Edit**.
+1. Update the speech response to `OK, alarm set for {DateTime}. The alarm tone is {AlarmTone}`.
+
+> [!IMPORTANT]
+> The alarm tone can change without any explicit configuration in an ongoing command. For example, it can change when the command hasn't finished yet. A correction is enabled *by default* for all of the command parameters, regardless of the turn number, if the command is yet to be fulfilled.
+
+#### Implement a correction when a command is finished
+
+The Custom Commands platform allows for one-step correction even when the command has finished. This feature isn't enabled by default. It must be explicitly configured.
+
+Use the following steps to configure a one-step correction:
+
+1. In the **SetAlarm** command, add an interaction rule of the type **Update previous command** to update the previously set alarm. Rename the interaction rule as **Update previous alarm**.
+1. Leave the default condition: **Previous command needs to be updated**.
+1. Add a new condition: **Type** > **Required Parameter** > **DateTime**.
+1. Add a new action: **Type** > **Send speech response** > **Simple editor** > **Updating previous alarm time to {DateTime}**.
+1. Leave the default **Post-execution state** value as **Command completed**.
+
+Try out the changes by selecting **Train**. Wait for the training to finish, and then select **Test**.
+
+- **Input**: *Set an alarm.*
+- **Output**: Ready to set alarm with tone as Chimes. For what time?
+- **Input**: *Set an alarm with the tone as Jingle for 9 am tomorrow.*
+- **Output**: OK, alarm set for 2020-05-21 09:00:00. The alarm tone is Jingle.
+- **Input**: *No, 8 am.*
+- **Output**: Updating previous alarm time to 2020-05-29 08:00.
+
+> [!NOTE]
+> In a real application, in the **Actions** section of this correction rule, you'll also need to send back an activity to the client or call an HTTP endpoint to update the alarm time in your system. This action should be solely responsible for updating the alarm time. It shouldn't be responsible for any other attribute of the command. In this case, that attribute would be the alarm tone.
+
+## Add language-generation templates for speech responses
+
+Language-generation (LG) templates allow you to customize the responses sent to the client. They introduce variance into the responses. You can achieve language generation by using:
+
+* Language-generation templates.
+* Adaptive expressions.
+
+Custom Commands templates are based on the Bot Framework's [LG templates](/azure/bot-service/file-format/bot-builder-lg-file-format#templates). Because the Custom Commands feature creates a new LG template when required (for speech responses in parameters or actions), you don't have to specify the name of the LG template.
+
+So you don't need to define your template like this:
+
+ ```
+ # CompletionAction
+ - Ok, turning {OnOff} the {SubjectDevice}
+ - Done, turning {OnOff} the {SubjectDevice}
+ - Proceeding to turn {OnOff} {SubjectDevice}
+ ```
+
+Instead, you can define the body of the template without the name, like this:
+
+> [!div class="mx-imgBorder"]
+> ![Screenshot showing a template editor example.](./media/custom-commands/template-editor-example.png)
++
+This change introduces variation into the speech responses that are sent to the client. For an utterance, the corresponding speech response is randomly picked out of the provided options.
+
+By taking advantage of LG templates, you can also define complex speech responses for commands by using adaptive expressions. For more information, see the [LG templates format](/azure/bot-service/file-format/bot-builder-lg-file-format#templates).
+
+By default, the Custom Commands feature supports all capabilities, with the following minor differences:
+
+* In the LG templates, entities are represented as `${entityName}`. The Custom Commands feature doesn't use entities. But you can use parameters as variables with either the `${parameterName}` representation or the `{parameterName}` representation.
+* The Custom Commands feature doesn't support template composition and expansion, because you never edit the *.lg* file directly. You edit only the responses of automatically created templates.
+* The Custom Commands feature doesn't support custom functions that LG injects. Predefined functions are supported.
+* The Custom Commands feature doesn't support options, such as `strict`, `replaceNull`, and `lineBreakStyle`.
+
+### Add template responses to a TurnOnOff command
+
+Modify the `TurnOnOff` command to add a new parameter. Use the following configuration.
+
+| Setting | Suggested value |
+| | |
+| **Name** | `SubjectContext` |
+| **Required** | Unselected |
+| **Type** | **String** |
+| **Default value** | `all` |
+| **Configuration** | **Accept predefined input values from internal catalog** |
+| **Predefined input values** | `room`, `bathroom`, `all`|
+
+#### Modify a completion rule
+
+Edit the **Actions** section of the existing completion rule **ConfirmationResponse**. In the **Edit action** window, switch to **Template Editor**. Then replace the text with the following example.
+
+```
+- IF: @{SubjectContext == "all" && SubjectDevice == "lights"}
+ - Ok, turning all the lights {OnOff}
+- ELSEIF: @{SubjectDevice == "lights"}
+ - Ok, turning {OnOff} the {SubjectContext} {SubjectDevice}
+- ELSE:
+ - Ok, turning the {SubjectDevice} {OnOff}
+ - Done, turning {OnOff} the {SubjectDevice}
+```
+
+Train and test your application by using the following input and output. Notice the variation of responses. The variation is created by multiple alternatives of the template value and also by use of adaptive expressions.
+
+* Input: *turn on the tv*
+* Output: Ok, turning the tv on
+* Input: *turn on the tv*
+* Output: Done, turned on the tv
+* Input: *turn off the lights*
+* Output: Ok, turning all the lights off
+* Input: *turn off room lights*
+* Output: Ok, turning off the room lights
+
+## Use a custom voice
+
+Another way to customize Custom Commands responses is to select an output voice. Use the following steps to switch the default voice to a custom voice:
+
+1. In your Custom Commands application, in the pane on the left, select **Settings**.
+1. In the middle pane, select **Custom Voice**.
+1. In the table, select a custom voice or public voice.
+1. Select **Save**.
+
+> [!div class="mx-imgBorder"]
+> ![Screenshot showing sample sentences and parameters.](media/custom-commands/select-custom-voice.png)
+
+> [!NOTE]
+> For public voices, neural types are available only for specific regions. For more information, see [Speech service supported regions](./regions.md#speech-service).
+>
+> You can create custom voices on the **Custom Voice** project page. For more information, see [Get started with Custom Voice](./how-to-custom-voice.md).
+
+Now the application will respond in the selected voice, instead of the default voice.
+
+## Next steps
+
+* Learn how to [integrate your Custom Commands application](how-to-custom-commands-setup-speech-sdk.md) with a client app by using the Speech SDK.
+* [Set up continuous deployment](how-to-custom-commands-deploy-cicd.md) for your Custom Commands application by using Azure DevOps.
ai-services How To Get Speech Session Id https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/how-to-get-speech-session-id.md
+
+ Title: How to get Speech to text Session ID and Transcription ID
+
+description: Learn how to get Speech service Speech to text Session ID and Transcription ID
++++++ Last updated : 11/29/2022+++
+# How to get Speech to text Session ID and Transcription ID
+
+If you use [Speech to text](speech-to-text.md) and need to open a support case, you are often asked to provide a *Session ID* or *Transcription ID* of the problematic transcriptions to debug the issue. This article explains how to get these IDs.
+
+> [!NOTE]
+> * *Session ID* is used in [real-time speech to text](get-started-speech-to-text.md) and [speech translation](speech-translation.md).
+> * *Transcription ID* is used in [Batch transcription](batch-transcription.md).
+
+## Getting Session ID
+
+[Real-time speech to text](get-started-speech-to-text.md) and [speech translation](speech-translation.md) use either the [Speech SDK](speech-sdk.md) or the [REST API for short audio](rest-speech-to-text-short.md).
+
+To get the Session ID when using the SDK, you need to:
+
+1. Enable application logging.
+1. Find the Session ID inside the log.
+
+If you use Speech SDK for JavaScript, get the Session ID as described in [this section](#get-session-id-using-javascript).
+
+If you use [Speech CLI](spx-overview.md), you can also get the Session ID interactively. See details in [this section](#get-session-id-using-speech-cli).
+
+If you use the [Speech to text REST API for short audio](rest-speech-to-text-short.md), you need to "inject" the session information into the requests. See details in [this section](#provide-session-id-using-rest-api-for-short-audio).
+
+### Enable logging in the Speech SDK
+
+Enable logging for your application as described in [this article](how-to-use-logging.md).
+
+### Get Session ID from the log
+
+Open the log file your application produced and look for `SessionId:`. The value that follows is the Session ID you need. In the following log excerpt example, `0b734c41faf8430380d493127bd44631` is the Session ID.
+
+```
+[874193]: 218ms SPX_DBG_TRACE_VERBOSE: audio_stream_session.cpp:1238 [0000023981752A40]CSpxAudioStreamSession::FireSessionStartedEvent: Firing SessionStarted event: SessionId: 0b734c41faf8430380d493127bd44631
+```
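+
+If your application produces large logs, a short script can collect the Session IDs for you. The following is a minimal sketch; the log file name is a placeholder.
+
+```python
+# Minimal sketch: scan an SDK log file and print all Session IDs it contains.
+import re
+
+with open("speech-sdk.log", encoding="utf-8", errors="ignore") as log_file:
+    session_ids = set(re.findall(r"SessionId: (\w{32})", log_file.read()))
+
+for session_id in session_ids:
+    print(session_id)
+```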
+### Get Session ID using JavaScript
+
+If you use the Speech SDK for JavaScript, you get the Session ID with the help of the `sessionStarted` event from the [Recognizer class](/javascript/api/microsoft-cognitiveservices-speech-sdk/recognizer).
+
+See an example of getting the Session ID using JavaScript in [this sample](https://github.com/Azure-Samples/cognitive-services-speech-sdk/blob/master/samples/js/browser/index.html). Look for `recognizer.sessionStarted = onSessionStarted;` and then for `function onSessionStarted`.
+
+### Get Session ID using Speech CLI
+
+If you use [Speech CLI](spx-overview.md), then you'll see the Session ID in `SESSION STARTED` and `SESSION STOPPED` console messages.
+
+You can also enable logging for your sessions and get the Session ID from the log file as described in [this section](#get-session-id-from-the-log). Run the appropriate Speech CLI command to get the information on using logs:
+
+```console
+spx help recognize log
+```
+```console
+spx help translate log
+```
++
+### Provide Session ID using REST API for short audio
+
+Unlike the Speech SDK, the [Speech to text REST API for short audio](rest-speech-to-text-short.md) doesn't automatically generate a Session ID. You need to generate it yourself and provide it within the REST request.
+
+Generate a GUID inside your code or by using any standard tool. Use the GUID value *without dashes or other dividers*. As an example, we use `9f4ffa5113a846eba289aa98b28e766f`.
+
+As part of your REST request, use the `X-ConnectionId=<GUID>` query parameter. For our example, a sample request looks like this:
+```http
+https://westeurope.stt.speech.microsoft.com/speech/recognition/conversation/cognitiveservices/v1?language=en-US&X-ConnectionId=9f4ffa5113a846eba289aa98b28e766f
+```
+`9f4ffa5113a846eba289aa98b28e766f` will be your Session ID.
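+
+For reference, here's a minimal C# sketch of generating the GUID and composing the request URI. The region and language are placeholder assumptions taken from the example above, and the audio upload itself is omitted.
+
+```csharp
+using System;
+
+class ShortAudioSessionId
+{
+    static void Main()
+    {
+        // Generate a GUID without dashes to use as the Session ID.
+        string sessionId = Guid.NewGuid().ToString("N");
+
+        // Placeholder region and language; adjust to your Speech resource.
+        string requestUri =
+            "https://westeurope.stt.speech.microsoft.com/speech/recognition/conversation/cognitiveservices/v1"
+            + "?language=en-US&X-ConnectionId=" + sessionId;
+
+        Console.WriteLine($"Session ID: {sessionId}");
+        Console.WriteLine($"Request URI: {requestUri}");
+        // Send your audio payload to requestUri with your usual HTTP client,
+        // including the Ocp-Apim-Subscription-Key header.
+    }
+}
+```
+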
+
+## Getting Transcription ID for Batch transcription
+
+[Batch transcription API](batch-transcription.md) is a subset of the [Speech to text REST API](rest-speech-to-text.md).
+
+The required Transcription ID is the GUID value contained in the main `self` element of the response body returned by requests like [Transcriptions_Create](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/operations/Transcriptions_Create).
+
+The following is an example response body of a [Transcriptions_Create](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/operations/Transcriptions_Create) request. The GUID value `537216f8-0620-4a10-ae2d-00bdb423b36f` in the first `self` element is the Transcription ID.
+
+```json
+{
+ "self": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.1/transcriptions/537216f8-0620-4a10-ae2d-00bdb423b36f",
+ "model": {
+ "self": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.1/models/base/824bd685-2d45-424d-bb65-c3fe99e32927"
+ },
+ "links": {
+ "files": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.1/transcriptions/537216f8-0620-4a10-ae2d-00bdb423b36f/files"
+ },
+ "properties": {
+ "diarizationEnabled": false,
+ "wordLevelTimestampsEnabled": false,
+ "channels": [
+ 0,
+ 1
+ ],
+ "punctuationMode": "DictatedAndAutomatic",
+ "profanityFilterMode": "Masked"
+ },
+ "lastActionDateTime": "2021-11-19T14:09:51Z",
+ "status": "NotStarted",
+ "createdDateTime": "2021-11-19T14:09:51Z",
+ "locale": "ru-RU",
+ "displayName": "transcriptiontest"
+}
+```
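+
+For example, here's a minimal C# sketch of extracting the Transcription ID from the `self` URL of such a response. The URL value is taken from the example above; in a real application you would read it from the parsed response body.
+
+```csharp
+using System;
+
+class TranscriptionIdExample
+{
+    static void Main()
+    {
+        // The "self" value from the Transcriptions_Create response body.
+        string self = "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.1/transcriptions/537216f8-0620-4a10-ae2d-00bdb423b36f";
+
+        // The Transcription ID is the last path segment of the URL.
+        string transcriptionId = new Uri(self).Segments[^1];
+
+        Console.WriteLine(transcriptionId); // 537216f8-0620-4a10-ae2d-00bdb423b36f
+    }
+}
+```
+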
+> [!NOTE]
+> Use the same technique to determine different IDs required for debugging issues related to [Custom Speech](custom-speech-overview.md), like uploading a dataset using [Datasets_Create](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/operations/Datasets_Create) request.
+
+> [!NOTE]
+> You can also see all existing transcriptions and their Transcription IDs for a given Speech resource by using [Transcriptions_Get](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/operations/Transcriptions_Get) request.
ai-services How To Lower Speech Synthesis Latency https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/how-to-lower-speech-synthesis-latency.md
+
+ Title: How to lower speech synthesis latency using Speech SDK
+
+description: How to lower speech synthesis latency using Speech SDK, including streaming, pre-connection, and so on.
++++++ Last updated : 04/29/2021++
+zone_pivot_groups: programming-languages-set-nineteen
++
+# Lower speech synthesis latency using Speech SDK
+
+Synthesis latency is critical to your applications.
+This article introduces the best practices to lower the latency and bring the best performance to your end users.
+
+Normally, we measure latency by `first byte latency` and `finish latency`, as follows:
++
+| Latency | Description | [SpeechSynthesisResult](/dotnet/api/microsoft.cognitiveservices.speech.speechsynthesisresult) property key |
+|--|--|--|
+| first byte latency | Indicates the time delay between the start of the synthesis task and receipt of the first chunk of audio data. | SpeechServiceResponse_SynthesisFirstByteLatencyMs |
+| finish latency | Indicates the time delay between the start of the synthesis task and the receipt of the whole synthesized audio data. | SpeechServiceResponse_SynthesisFinishLatencyMs |
+
+The Speech SDK puts the latency durations in the Properties collection of [`SpeechSynthesisResult`](/dotnet/api/microsoft.cognitiveservices.speech.speechsynthesisresult). The following sample code shows these values.
+
+```csharp
+var result = await synthesizer.SpeakTextAsync(text);
+Console.WriteLine($"first byte latency: \t{result.Properties.GetProperty(PropertyId.SpeechServiceResponse_SynthesisFirstByteLatencyMs)} ms");
+Console.WriteLine($"finish latency: \t{result.Properties.GetProperty(PropertyId.SpeechServiceResponse_SynthesisFinishLatencyMs)} ms");
+// you can also get the result id, and send to us when you need help for diagnosis
+var resultId = result.ResultId;
+```
+++
+| Latency | Description | [SpeechSynthesisResult](/cpp/cognitive-services/speech/speechsynthesisresult) property key |
+|--|--|--|
+| `first byte latency` | Indicates the time delay between the start of the synthesis task and receipt of the first chunk of audio data. | `SpeechServiceResponse_SynthesisFirstByteLatencyMs` |
+| `finish latency` | Indicates the time delay between the start of the synthesis task and receipt of the whole synthesized audio data. | `SpeechServiceResponse_SynthesisFinishLatencyMs` |
+
+The Speech SDK measures the latencies and puts them in the property bag of [`SpeechSynthesisResult`](/cpp/cognitive-services/speech/speechsynthesisresult). Refer to the following code to get them.
+
+```cpp
+auto result = synthesizer->SpeakTextAsync(text).get();
+auto firstByteLatency = std::stoi(result->Properties.GetProperty(PropertyId::SpeechServiceResponse_SynthesisFirstByteLatencyMs));
+auto finishedLatency = std::stoi(result->Properties.GetProperty(PropertyId::SpeechServiceResponse_SynthesisFinishLatencyMs));
+// you can also get the result id, and send to us when you need help for diagnosis
+auto resultId = result->ResultId;
+```
+++
+| Latency | Description | [SpeechSynthesisResult](/java/api/com.microsoft.cognitiveservices.speech.speechsynthesisresult) property key |
+|--|--|--|
+| `first byte latency` | Indicates the time delay between the start of the synthesis task and receipt of the first chunk of audio data. | `SpeechServiceResponse_SynthesisFirstByteLatencyMs` |
+| `finish latency` | Indicates the time delay between the start of the synthesis task and receipt of the whole synthesized audio data. | `SpeechServiceResponse_SynthesisFinishLatencyMs` |
+
+The Speech SDK measures the latencies and puts them in the property bag of [`SpeechSynthesisResult`](/java/api/com.microsoft.cognitiveservices.speech.speechsynthesisresult). Refer to the following code to get them.
+
+```java
+SpeechSynthesisResult result = synthesizer.SpeakTextAsync(text).get();
+System.out.println("first byte latency: \t" + result.getProperties().getProperty(PropertyId.SpeechServiceResponse_SynthesisFirstByteLatencyMs) + " ms.");
+System.out.println("finish latency: \t" + result.getProperties().getProperty(PropertyId.SpeechServiceResponse_SynthesisFinishLatencyMs) + " ms.");
+// you can also get the result id, and send to us when you need help for diagnosis
+String resultId = result.getResultId();
+```
++++
+| Latency | Description | [SpeechSynthesisResult](/python/api/azure-cognitiveservices-speech/azure.cognitiveservices.speech.speechsynthesisresult) property key |
+|--|--|--|
+| `first byte latency` | Indicates the time delay between the start of the synthesis task and receipt of the first chunk of audio data. | `SpeechServiceResponse_SynthesisFirstByteLatencyMs` |
+| `finish latency` | Indicates the time delay between the start of the synthesis task and receipt of the whole synthesized audio data. | `SpeechServiceResponse_SynthesisFinishLatencyMs` |
+
+The Speech SDK measures the latencies and puts them in the property bag of [`SpeechSynthesisResult`](/python/api/azure-cognitiveservices-speech/azure.cognitiveservices.speech.speechsynthesisresult). Refer to the following code to get them.
+
+```python
+result = synthesizer.speak_text_async(text).get()
+first_byte_latency = int(result.properties.get_property(speechsdk.PropertyId.SpeechServiceResponse_SynthesisFirstByteLatencyMs))
+finished_latency = int(result.properties.get_property(speechsdk.PropertyId.SpeechServiceResponse_SynthesisFinishLatencyMs))
+# you can also get the result id, and send to us when you need help for diagnosis
+result_id = result.result_id
+```
+++
+| Latency | Description | [SPXSpeechSynthesisResult](/objectivec/cognitive-services/speech/spxspeechsynthesisresult) property key |
+|--|--|--|
+| `first byte latency` | Indicates the time delay between the start of the synthesis task and receipt of the first chunk of audio data. | `SPXSpeechServiceResponseSynthesisFirstByteLatencyMs` |
+| `finish latency` | Indicates the time delay between the start of the synthesis task and receipt of the whole synthesized audio data. | `SPXSpeechServiceResponseSynthesisFinishLatencyMs` |
+
+The Speech SDK measures the latencies and puts them in the property bag of [`SPXSpeechSynthesisResult`](/objectivec/cognitive-services/speech/spxspeechsynthesisresult). Refer to the following code to get them.
+
+```Objective-C
+SPXSpeechSynthesisResult *speechResult = [speechSynthesizer speakText:text];
+int firstByteLatency = [[speechResult.properties getPropertyById:SPXSpeechServiceResponseSynthesisFirstByteLatencyMs] intValue];
+int finishedLatency = [[speechResult.properties getPropertyById:SPXSpeechServiceResponseSynthesisFinishLatencyMs] intValue];
+// you can also get the result id, and send to us when you need help for diagnosis
+NSString *resultId = speechResult.resultId;
+```
++
+In most cases, the first byte latency is much lower than the finish latency.
+The first byte latency is independent of text length, while the finish latency increases with text length.
+
+Ideally, we want to minimize the user-experienced latency (the latency before the user hears the sound) to one network round trip time plus the first audio chunk latency of the speech synthesis service.
+
+## Streaming
+
+Streaming is critical to lowering latency.
+Client code can start playback when the first audio chunk is received.
+In a service scenario, you can forward the audio chunks immediately to your clients instead of waiting for the whole audio.
++
+You can use the [`PullAudioOutputStream`](/dotnet/api/microsoft.cognitiveservices.speech.audio.pullaudiooutputstream), [`PushAudioOutputStream`](/dotnet/api/microsoft.cognitiveservices.speech.audio.pushaudiooutputstream), [`Synthesizing` event](/dotnet/api/microsoft.cognitiveservices.speech.speechsynthesizer.synthesizing), and [`AudioDataStream`](/dotnet/api/microsoft.cognitiveservices.speech.audiodatastream) of the Speech SDK to enable streaming.
+
+Taking `AudioDataStream` as an example:
+
+```csharp
+using (var synthesizer = new SpeechSynthesizer(config, null as AudioConfig))
+{
+ using (var result = await synthesizer.StartSpeakingTextAsync(text))
+ {
+ using (var audioDataStream = AudioDataStream.FromResult(result))
+ {
+ byte[] buffer = new byte[16000];
+ uint filledSize = 0;
+ while ((filledSize = audioDataStream.ReadData(buffer)) > 0)
+ {
+ Console.WriteLine($"{filledSize} bytes received.");
+ }
+ }
+ }
+}
+```
+++
+You can use the [`PullAudioOutputStream`](/cpp/cognitive-services/speech/audio-pullaudiooutputstream), [`PushAudioOutputStream`](/cpp/cognitive-services/speech/audio-pushaudiooutputstream), the [`Synthesizing` event](/cpp/cognitive-services/speech/speechsynthesizer#synthesizing), and [`AudioDataStream`](/cpp/cognitive-services/speech/audiodatastream) of the Speech SDK to enable streaming.
+
+Taking `AudioDataStream` as an example:
+
+```cpp
+auto synthesizer = SpeechSynthesizer::FromConfig(config, nullptr);
+auto result = synthesizer->SpeakTextAsync(text).get();
+auto audioDataStream = AudioDataStream::FromResult(result);
+uint8_t buffer[16000];
+uint32_t filledSize = 0;
+while ((filledSize = audioDataStream->ReadData(buffer, sizeof(buffer))) > 0)
+{
+ cout << filledSize << " bytes received." << endl;
+}
+```
+++
+You can use the [`PullAudioOutputStream`](/java/api/com.microsoft.cognitiveservices.speech.audio.pullaudiooutputstream), [`PushAudioOutputStream`](/java/api/com.microsoft.cognitiveservices.speech.audio.pushaudiooutputstream), the [`Synthesizing` event](/java/api/com.microsoft.cognitiveservices.speech.speechsynthesizer.synthesizing#com_microsoft_cognitiveservices_speech_SpeechSynthesizer_Synthesizing), and [`AudioDataStream`](/java/api/com.microsoft.cognitiveservices.speech.audiodatastream) of the Speech SDK to enable streaming.
+
+Taking `AudioDataStream` as an example:
+
+```java
+SpeechSynthesizer synthesizer = new SpeechSynthesizer(config, null);
+SpeechSynthesisResult result = synthesizer.StartSpeakingTextAsync(text).get();
+AudioDataStream audioDataStream = AudioDataStream.fromResult(result);
+byte[] buffer = new byte[16000];
+long filledSize = audioDataStream.readData(buffer);
+while (filledSize > 0) {
+ System.out.println(filledSize + " bytes received.");
+ filledSize = audioDataStream.readData(buffer);
+}
+```
+++
+You can use the [`PullAudioOutputStream`](/python/api/azure-cognitiveservices-speech/azure.cognitiveservices.speech.audio.pullaudiooutputstream), [`PushAudioOutputStream`](/python/api/azure-cognitiveservices-speech/azure.cognitiveservices.speech.audio.pushaudiooutputstream), the [`Synthesizing` event](/python/api/azure-cognitiveservices-speech/azure.cognitiveservices.speech.speechsynthesizer#synthesizing), and [`AudioDataStream`](/python/api/azure-cognitiveservices-speech/azure.cognitiveservices.speech.audiodatastream) of the Speech SDK to enable streaming.
+
+Taking `AudioDataStream` as an example:
+
+```python
+speech_synthesizer = speechsdk.SpeechSynthesizer(speech_config=speech_config, audio_config=None)
+result = speech_synthesizer.start_speaking_text_async(text).get()
+audio_data_stream = speechsdk.AudioDataStream(result)
+audio_buffer = bytes(16000)
+filled_size = audio_data_stream.read_data(audio_buffer)
+while filled_size > 0:
+ print("{} bytes received.".format(filled_size))
+ filled_size = audio_data_stream.read_data(audio_buffer)
+```
+++
+You can use the [`SPXPullAudioOutputStream`](/objectivec/cognitive-services/speech/spxpullaudiooutputstream), [`SPXPushAudioOutputStream`](/objectivec/cognitive-services/speech/spxpushaudiooutputstream), the [`Synthesizing` event](/objectivec/cognitive-services/speech/spxspeechsynthesizer#addsynthesizingeventhandler), and [`SPXAudioDataStream`](/objectivec/cognitive-services/speech/spxaudiodatastream) of the Speech SDK to enable streaming.
+
+Taking `AudioDataStream` as an example:
+
+```Objective-C
+SPXSpeechSynthesizer *synthesizer = [[SPXSpeechSynthesizer alloc] initWithSpeechConfiguration:speechConfig audioConfiguration:nil];
+SPXSpeechSynthesisResult *speechResult = [synthesizer startSpeakingText:inputText];
+SPXAudioDataStream *stream = [[SPXAudioDataStream alloc] initFromSynthesisResult:speechResult];
+NSMutableData* data = [[NSMutableData alloc]initWithCapacity:16000];
+while ([stream readData:data length:16000] > 0) {
+ // Read data here
+}
+```
++
+## Pre-connect and reuse SpeechSynthesizer
+
+The Speech SDK uses a websocket to communicate with the service.
+Ideally, the network latency should be one round trip time (RTT).
+If the connection is newly established, the network latency includes extra time to establish the connection.
+Establishing a websocket connection requires a TCP handshake, SSL handshake, HTTP connection, and protocol upgrade, all of which add latency.
+To avoid the connection latency, we recommend pre-connecting and reusing the `SpeechSynthesizer`.
+
+### Pre-connect
+
+To pre-connect, establish a connection to the Speech service when you know the connection will be needed soon. For example, if you're building a speech bot on the client, you can pre-connect to the speech synthesis service when the user starts to talk, and call `SpeakTextAsync` when the bot's reply text is ready.
++
+```csharp
+using (var synthesizer = new SpeechSynthesizer(uspConfig, null as AudioConfig))
+{
+ using (var connection = Connection.FromSpeechSynthesizer(synthesizer))
+ {
+ connection.Open(true);
+ }
+ await synthesizer.SpeakTextAsync(text);
+}
+```
+++
+```cpp
+auto synthesizer = SpeechSynthesizer::FromConfig(config, nullptr);
+auto connection = Connection::FromSpeechSynthesizer(synthesizer);
+connection->Open(true);
+```
+++
+```java
+SpeechSynthesizer synthesizer = new SpeechSynthesizer(speechConfig, (AudioConfig) null);
+Connection connection = Connection.fromSpeechSynthesizer(synthesizer);
+connection.openConnection(true);
+```
+++
+```python
+synthesizer = speechsdk.SpeechSynthesizer(config, None)
+connection = speechsdk.Connection.from_speech_synthesizer(synthesizer)
+connection.open(True)
+```
+++
+```Objective-C
+SPXSpeechSynthesizer* synthesizer = [[SPXSpeechSynthesizer alloc]initWithSpeechConfiguration:self.speechConfig audioConfiguration:nil];
+SPXConnection* connection = [[SPXConnection alloc]initFromSpeechSynthesizer:synthesizer];
+[connection open:true];
+```
++
+> [!NOTE]
+> If the text to synthesize is available, just call `SpeakTextAsync` to synthesize the audio. The SDK handles the connection.
+
+### Reuse SpeechSynthesizer
+
+Another way to reduce the connection latency is to reuse the `SpeechSynthesizer`, so you don't need to create a new `SpeechSynthesizer` for each synthesis.
+We recommend using an object pool in service scenarios. See our sample code for [C#](https://github.com/Azure-Samples/cognitive-services-speech-sdk/blob/master/samples/csharp/sharedcontent/console/speech_synthesis_server_scenario_sample.cs) and [Java](https://github.com/Azure-Samples/cognitive-services-speech-sdk/blob/master/samples/java/jre/console/src/com/microsoft/cognitiveservices/speech/samples/console/SpeechSynthesisScenarioSamples.java), and the minimal sketch below.
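+
+Here's a minimal C# sketch of the pooling idea, using a `ConcurrentBag` to hand out and return synthesizer instances. It's a simplified illustration of the linked samples, not a drop-in implementation; the speech configuration passed in is a placeholder, and disposal of pooled instances is left out.
+
+```csharp
+using System.Collections.Concurrent;
+using System.Threading.Tasks;
+using Microsoft.CognitiveServices.Speech;
+using Microsoft.CognitiveServices.Speech.Audio;
+
+// A simplified synthesizer pool: reuses SpeechSynthesizer instances (and their
+// connections) across requests instead of creating a new one per synthesis.
+class SynthesizerPool
+{
+    private readonly SpeechConfig _config;
+    private readonly ConcurrentBag<SpeechSynthesizer> _pool = new ConcurrentBag<SpeechSynthesizer>();
+
+    public SynthesizerPool(SpeechConfig config) => _config = config;
+
+    public async Task<SpeechSynthesisResult> SpeakAsync(string text)
+    {
+        // Take an idle synthesizer from the pool, or create one if none is available.
+        if (!_pool.TryTake(out var synthesizer))
+        {
+            synthesizer = new SpeechSynthesizer(_config, null as AudioConfig);
+        }
+
+        try
+        {
+            return await synthesizer.SpeakTextAsync(text);
+        }
+        finally
+        {
+            // Return the synthesizer so its connection can be reused by later requests.
+            _pool.Add(synthesizer);
+        }
+    }
+}
+```
+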
++
+## Transmit compressed audio over the network
+
+When the network is unstable or bandwidth is limited, the payload size also affects latency.
+Meanwhile, a compressed audio format saves the users' network bandwidth, which is especially valuable for mobile users.
+
+We support many compressed formats, including `opus`, `webm`, `mp3`, and `silk`. See the full list in [SpeechSynthesisOutputFormat](/cpp/cognitive-services/speech/microsoft-cognitiveservices-speech-namespace#speechsynthesisoutputformat).
+For example, the bitrate of the `Riff24Khz16BitMonoPcm` format is 384 kbps, while `Audio24Khz48KBitRateMonoMp3` costs only 48 kbps.
+The Speech SDK automatically uses a compressed format for transmission when a `pcm` output format is set.
+For Linux and Windows, `GStreamer` is required to enable this feature.
+Refer to [these instructions](how-to-use-codec-compressed-audio-input-streams.md) to install and configure `GStreamer` for the Speech SDK.
+For Android, iOS, and macOS, no extra configuration is needed starting with version 1.20.
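+
+If you want to request a compressed output format explicitly, here's a minimal C# sketch. The subscription key, region, and sample text are placeholder assumptions.
+
+```csharp
+using Microsoft.CognitiveServices.Speech;
+
+// Placeholder key and region: replace with your own values.
+var speechConfig = SpeechConfig.FromSubscription("YourSubscriptionKey", "YourServiceRegion");
+
+// Request a compressed output format (48 kbps MP3) instead of uncompressed PCM
+// to reduce the audio payload transmitted over the network.
+speechConfig.SetSpeechSynthesisOutputFormat(SpeechSynthesisOutputFormat.Audio24Khz48KBitRateMonoMp3);
+
+using var synthesizer = new SpeechSynthesizer(speechConfig);
+var result = await synthesizer.SpeakTextAsync("Hello world");
+```
+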
+
+## Other tips
+
+### Cache CRL files
+
+The Speech SDK uses CRL files to check the certificate.
+Caching the CRL files until they expire helps you avoid downloading CRL files every time.
+See [How to configure OpenSSL for Linux](how-to-configure-openssl-linux.md#certificate-revocation-checks) for details.
+
+### Use latest Speech SDK
+
+We keep improving the Speech SDK's performance, so try to use the latest Speech SDK in your application.
+
+## Load test guideline
+
+You can use a load test to measure the speech synthesis service capacity and latency.
+Here are some guidelines.
+
+ - The speech synthesis service can autoscale, but it takes time to scale out. If concurrency increases over a short time, the client might experience long latency or a `429` error code (too many requests). We recommend that you increase your concurrency step by step in the load test. [See this article](speech-services-quotas-and-limits.md#general-best-practices-to-mitigate-throttling-during-autoscaling) for more details, especially [this example of workload patterns](speech-services-quotas-and-limits.md#example-of-a-workload-pattern-best-practice).
+ - You can use our object pool sample ([C#](https://github.com/Azure-Samples/cognitive-services-speech-sdk/blob/master/samples/csharp/sharedcontent/console/speech_synthesis_server_scenario_sample.cs) and [Java](https://github.com/Azure-Samples/cognitive-services-speech-sdk/blob/master/samples/java/jre/console/src/com/microsoft/cognitiveservices/speech/samples/console/SpeechSynthesisScenarioSamples.java)) for load testing and getting the latency numbers. You can modify the test turns and concurrency in the sample to meet your target concurrency.
+ - The service has a quota limitation based on your real traffic, so if you want to perform a load test with concurrency much higher than your real traffic, connect with us before your test.
+
+## Next steps
+
+* [See the samples](https://github.com/Azure-Samples/cognitive-services-speech-sdk/tree/master) on GitHub
ai-services How To Migrate To Custom Neural Voice https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/how-to-migrate-to-custom-neural-voice.md
+
+ Title: Migrate from custom voice to custom neural voice - Speech service
+
+description: This document helps users migrate from custom voice to custom neural voice.
++++++ Last updated : 11/12/2021+++
+# Migrate from custom voice to custom neural voice
+
+> [!IMPORTANT]
+> We are retiring the standard non-neural training tier of custom voice from March 1, 2021 through February 29, 2024. If you used a non-neural custom voice with your Speech resource prior to March 1, 2021 then you can continue to do so until February 29, 2024. All other Speech resources can only use custom neural voice. After February 29, 2024, the non-neural custom voices won't be supported with any Speech resource.
+>
+> The pricing for custom voice is different from custom neural voice. Go to the [pricing page](https://azure.microsoft.com/pricing/details/cognitive-services/speech-services/) and check the pricing details in the collapsible "Deprecated" section. Custom voice (non-neural training) is referred to as **Custom**.
+
+The custom neural voice lets you build higher-quality voice models while requiring less data. You can develop more realistic, natural, and conversational voices. Your customers and end users will benefit from the latest Text to speech technology, in a responsible way.
+
+|Custom voice |Custom neural voice |
+|--|--|
+| The standard, or "traditional," method of custom voice breaks down spoken language into phonetic snippets that can be remixed and matched using classical programming or statistical methods. | Custom neural voice synthesizes speech using deep neural networks that have "learned" the way phonetics are combined in natural human speech rather than using classical programming or statistical methods.|
+| Custom voice<sup>1</sup> requires a large volume of voice data to produce a more human-like voice model. With fewer recorded lines, a standard custom voice model will tend to sound more obviously robotic. |The custom neural voice capability enables you to create a unique brand voice in multiple languages and styles by using a small set of recordings.|
+
+<sup>1</sup> When creating a custom voice model, the maximum number of data files allowed to be imported per subscription is 10 .zip files for free subscription (F0) users, and 500 for standard subscription (S0) users.
+
+## Action required
+
+Before you can migrate to custom neural voice, your [application](https://aka.ms/customneural) must be accepted. Access to the custom neural voice service is subject to Microsoft's sole discretion based on our eligibility criteria. You must commit to using custom neural voice in alignment with our [Responsible AI principles](https://microsoft.com/ai/responsible-ai) and the [code of conduct](/legal/cognitive-services/speech-service/tts-code-of-conduct?context=%2fazure%2fcognitive-services%2fspeech-service%2fcontext%2fcontext).
+
+> [!TIP]
+> Even without an Azure account, you can listen to voice samples in [Speech Studio](https://aka.ms/customvoice) and determine the right voice for your business needs.
+
+1. Learn more about our [policy on limited access](/legal/cognitive-services/speech-service/custom-neural-voice/limited-access-custom-neural-voice?context=%2fazure%2fcognitive-services%2fspeech-service%2fcontext%2fcontext) and then [apply here](https://aka.ms/customneural).
+1. Once your application is approved, you're provided with access to the "neural" training feature. Make sure that you sign in to [Speech Studio](https://aka.ms/speechstudio/customvoice) with the same Azure subscription that you provided in your application.
+1. Before you can [train](how-to-custom-voice-create-voice.md) and [deploy](how-to-deploy-and-use-endpoint.md) a custom voice model, you must [create a voice talent profile](how-to-custom-voice-talent.md). The profile requires an audio file recorded by the voice talent consenting to the usage of their speech data to train a custom voice model.
+1. Update your code in your apps if you have created a new endpoint with a new model.
+
+## Custom voice details (deprecated)
+
+Read the following sections for details on custom voice.
+
+### Language support
+
+Custom voice supports the following languages (locales).
+
+| Language | Locale |
+|--|-|
+|Chinese (Mandarin, Simplified)|`zh-CN`|
+|Chinese (Mandarin, Simplified), English bilingual|`zh-CN` bilingual|
+|English (India)|`en-IN`|
+|English (United Kingdom)|`en-GB`|
+|English (United States)|`en-US`|
+|French (France)|`fr-FR`|
+|German (Germany)|`de-DE`|
+|Italian (Italy)|`it-IT`|
+|Portuguese (Brazil)|`pt-BR`|
+|Spanish (Mexico)|`es-MX`|
+
+### Regional support
+
+If you've created a custom voice font, use the endpoint that you've created. You can also use the endpoints listed below, replacing `{deploymentId}` with the deployment ID for your voice model (see the example request after the table).
+
+| Region | Endpoint |
+|--|-|
+| Australia East | `https://australiaeast.voice.speech.microsoft.com/cognitiveservices/v1?deploymentId={deploymentId}` |
+| Brazil South | `https://brazilsouth.voice.speech.microsoft.com/cognitiveservices/v1?deploymentId={deploymentId}` |
+| Canada Central | `https://canadacentral.voice.speech.microsoft.com/cognitiveservices/v1?deploymentId={deploymentId}` |
+| Central US | `https://centralus.voice.speech.microsoft.com/cognitiveservices/v1?deploymentId={deploymentId}` |
+| East Asia | `https://eastasia.voice.speech.microsoft.com/cognitiveservices/v1?deploymentId={deploymentId}` |
+| East US | `https://eastus.voice.speech.microsoft.com/cognitiveservices/v1?deploymentId={deploymentId}` |
+| East US 2 | `https://eastus2.voice.speech.microsoft.com/cognitiveservices/v1?deploymentId={deploymentId}` |
+| France Central | `https://francecentral.voice.speech.microsoft.com/cognitiveservices/v1?deploymentId={deploymentId}` |
+| India Central | `https://centralindia.voice.speech.microsoft.com/cognitiveservices/v1?deploymentId={deploymentId}` |
+| Japan East | `https://japaneast.voice.speech.microsoft.com/cognitiveservices/v1?deploymentId={deploymentId}` |
+| Japan West | `https://japanwest.voice.speech.microsoft.com/cognitiveservices/v1?deploymentId={deploymentId}` |
+| Korea Central | `https://koreacentral.voice.speech.microsoft.com/cognitiveservices/v1?deploymentId={deploymentId}` |
+| North Central US | `https://northcentralus.voice.speech.microsoft.com/cognitiveservices/v1?deploymentId={deploymentId}` |
+| North Europe | `https://northeurope.voice.speech.microsoft.com/cognitiveservices/v1?deploymentId={deploymentId}` |
+| South Central US | `https://southcentralus.voice.speech.microsoft.com/cognitiveservices/v1?deploymentId={deploymentId}` |
+| Southeast Asia | `https://southeastasia.voice.speech.microsoft.com/cognitiveservices/v1?deploymentId={deploymentId}` |
+| UK South | `https://uksouth.voice.speech.microsoft.com/cognitiveservices/v1?deploymentId={deploymentId}` |
+| West Europe | `https://westeurope.voice.speech.microsoft.com/cognitiveservices/v1?deploymentId={deploymentId}` |
+| West Central US | `https://westcentralus.voice.speech.microsoft.com/cognitiveservices/v1?deploymentId={deploymentId}` |
+| West US | `https://westus.voice.speech.microsoft.com/cognitiveservices/v1?deploymentId={deploymentId}` |
+| West US 2 | `https://westus2.voice.speech.microsoft.com/cognitiveservices/v1?deploymentId={deploymentId}` |
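+
+For illustration, here's a minimal C# sketch of calling one of these endpoints. The region, deployment ID, voice name, subscription key, and output format below are placeholder assumptions; adjust the SSML and headers to your own deployment.
+
+```csharp
+using System.Net.Http;
+using System.Text;
+
+// Placeholder key, region, deployment ID, and voice name: replace with your own values.
+using var client = new HttpClient();
+client.DefaultRequestHeaders.Add("Ocp-Apim-Subscription-Key", "YourSubscriptionKey");
+client.DefaultRequestHeaders.Add("X-Microsoft-OutputFormat", "riff-24khz-16bit-mono-pcm");
+
+string endpoint = "https://westus.voice.speech.microsoft.com/cognitiveservices/v1?deploymentId=YourDeploymentId";
+
+// SSML that references your deployed custom voice by name.
+string ssml =
+    "<speak version='1.0' xml:lang='en-US'>" +
+    "<voice name='YourCustomVoiceName'>Hello from my custom voice.</voice>" +
+    "</speak>";
+
+var response = await client.PostAsync(endpoint, new StringContent(ssml, Encoding.UTF8, "application/ssml+xml"));
+byte[] audio = await response.Content.ReadAsByteArrayAsync();
+```
+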
++
+## Next steps
+
+> [!div class="nextstepaction"]
+> [Try out custom neural voice](custom-neural-voice.md)
ai-services How To Migrate To Prebuilt Neural Voice https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/how-to-migrate-to-prebuilt-neural-voice.md
+
+ Title: Migrate from prebuilt standard voice to prebuilt neural voice - Speech service
+
+description: This document helps users migrate from prebuilt standard voice to prebuilt neural voice.
++++++ Last updated : 11/12/2021+++
+# Migrate from prebuilt standard voice to prebuilt neural voice
+
+> [!IMPORTANT]
+> We are retiring the standard voices from September 1, 2021 through August 31, 2024. If you used a standard voice with your Speech resource that was created prior to September 1, 2021 then you can continue to do so until August 31, 2024. All other Speech resources can only use prebuilt neural voices. You can choose from the supported [neural voice names](language-support.md?tabs=tts). After August 31, 2024 the standard voices won't be supported with any Speech resource.
+>
+> The pricing for prebuilt standard voice is different from prebuilt neural voice. Go to the [pricing page](https://azure.microsoft.com/pricing/details/cognitive-services/speech-services/) and check the pricing details in the collapsible "Deprecated" section. Prebuilt standard voice (retired) is referred to as **Standard**.
+
+The prebuilt neural voice provides more natural sounding speech output, and thus, a better end-user experience.
+
+|Prebuilt standard voice | Prebuilt neural voice |
+|--|--|
+| Noticeably robotic | Natural sounding, closer to human-parity|
+| Limited capabilities in voice tuning<sup>1</sup> | Advanced capabilities in voice tuning |
+| No new investment in future voice fonts |On-going investment in future voice fonts |
+
+<sup>1</sup> For voice tuning, volume and pitch changes can be applied to standard voices at the word or sentence level, whereas they can only be applied to neural voices at the sentence level. Duration adjustment is supported for standard voices only. To learn more about prosody elements, see [Improve synthesis with SSML](speech-synthesis-markup.md).
+
+## Action required
+
+> [!TIP]
+> Even without an Azure account, you can listen to voice samples at the [Voice Gallery](https://speech.microsoft.com/portal/voicegallery) and determine the right voice for your business needs.
+
+1. Review the [price](https://azure.microsoft.com/pricing/details/cognitive-services/speech-services/) structure.
+2. To make the change, [follow the sample code](speech-synthesis-markup-voice.md#voice-element) to update the voice name in your speech synthesis request to one of the supported neural voice names in your chosen language. Use neural voices for your speech synthesis requests, in the cloud or on-premises. For the on-premises container, use the [neural voice containers](../cognitive-services-container-support.md) and follow the [instructions](speech-container-howto.md).
+
+## Standard voice details (deprecated)
+
+Read the following sections for details on standard voice.
+
+### Language support
+
+More than 75 prebuilt standard voices are available in over 45 languages and locales, which allow you to convert text into synthesized speech.
+
+> [!NOTE]
+> With two exceptions, standard voices are created from samples that use a 16 kHz sample rate.
+> The **en-US-AriaRUS** and **en-US-GuyRUS** voices are also created from samples that use a 24 kHz sample rate.
+> All voices can upsample or downsample to other sample rates when synthesizing.
+
+| Language | Locale (BCP-47) | Gender | Voice name |
+|--|--|--|--|
+| Arabic (Egypt) | `ar-EG` | Female | `ar-EG-Hoda`|
+| Arabic (Saudi Arabia) | `ar-SA` | Male | `ar-SA-Naayf`|
+| Bulgarian (Bulgaria) | `bg-BG` | Male | `bg-BG-Ivan`|
+| Catalan (Spain) | `ca-ES` | Female | `ca-ES-HerenaRUS`|
+| Chinese (Cantonese, Traditional) | `zh-HK` | Male | `zh-HK-Danny`|
+| Chinese (Cantonese, Traditional) | `zh-HK` | Female | `zh-HK-TracyRUS`|
+| Chinese (Mandarin, Simplified) | `zh-CN` | Female | `zh-CN-HuihuiRUS`|
+| Chinese (Mandarin, Simplified) | `zh-CN` | Male | `zh-CN-Kangkang`|
+| Chinese (Mandarin, Simplified) | `zh-CN` | Female | `zh-CN-Yaoyao`|
+| Chinese (Taiwanese Mandarin) | `zh-TW` | Female | `zh-TW-HanHanRUS`|
+| Chinese (Taiwanese Mandarin) | `zh-TW` | Female | `zh-TW-Yating`|
+| Chinese (Taiwanese Mandarin) | `zh-TW` | Male | `zh-TW-Zhiwei`|
+| Croatian (Croatia) | `hr-HR` | Male | `hr-HR-Matej`|
+| Czech (Czech Republic) | `cs-CZ` | Male | `cs-CZ-Jakub`|
+| Danish (Denmark) | `da-DK` | Female | `da-DK-HelleRUS`|
+| Dutch (Netherlands) | `nl-NL` | Female | `nl-NL-HannaRUS`|
+| English (Australia) | `en-AU` | Female | `en-AU-Catherine`|
+| English (Australia) | `en-AU` | Female | `en-AU-HayleyRUS`|
+| English (Canada) | `en-CA` | Female | `en-CA-HeatherRUS`|
+| English (Canada) | `en-CA` | Female | `en-CA-Linda`|
+| English (India) | `en-IN` | Female | `en-IN-Heera`|
+| English (India) | `en-IN` | Female | `en-IN-PriyaRUS`|
+| English (India) | `en-IN` | Male | `en-IN-Ravi`|
+| English (Ireland) | `en-IE` | Male | `en-IE-Sean`|
+| English (United Kingdom) | `en-GB` | Male | `en-GB-George`|
+| English (United Kingdom) | `en-GB` | Female | `en-GB-HazelRUS`|
+| English (United Kingdom) | `en-GB` | Female | `en-GB-Susan`|
+| English (United States) | `en-US` | Male | `en-US-BenjaminRUS`|
+| English (United States) | `en-US` | Male | `en-US-GuyRUS`|
+| English (United States) | `en-US` | Female | `en-US-AriaRUS`|
+| English (United States) | `en-US` | Female | `en-US-ZiraRUS`|
+| Finnish (Finland) | `fi-FI` | Female | `fi-FI-HeidiRUS`|
+| French (Canada) | `fr-CA` | Female | `fr-CA-Caroline`|
+| French (Canada) | `fr-CA` | Female | `fr-CA-HarmonieRUS`|
+| French (France) | `fr-FR` | Female | `fr-FR-HortenseRUS`|
+| French (France) | `fr-FR` | Female | `fr-FR-Julie`|
+| French (France) | `fr-FR` | Male | `fr-FR-Paul`|
+| French (Switzerland) | `fr-CH` | Male | `fr-CH-Guillaume`|
+| German (Austria) | `de-AT` | Male | `de-AT-Michael`|
+| German (Germany) | `de-DE` | Female | `de-DE-HeddaRUS`|
+| German (Germany) | `de-DE` | Male | `de-DE-Stefan`|
+| German (Switzerland) | `de-CH` | Male | `de-CH-Karsten`|
+| Greek (Greece) | `el-GR` | Male | `el-GR-Stefanos`|
+| Hebrew (Israel) | `he-IL` | Male | `he-IL-Asaf`|
+| Hindi (India) | `hi-IN` | Male | `hi-IN-Hemant`|
+| Hindi (India) | `hi-IN` | Female | `hi-IN-Kalpana`|
+| Hungarian (Hungary) | `hu-HU` | Male | `hu-HU-Szabolcs`|
+| Indonesian (Indonesia) | `id-ID` | Male | `id-ID-Andika`|
+| Italian (Italy) | `it-IT` | Male | `it-IT-Cosimo`|
+| Italian (Italy) | `it-IT` | Female | `it-IT-LuciaRUS`|
+| Japanese (Japan) | `ja-JP` | Female | `ja-JP-Ayumi`|
+| Japanese (Japan) | `ja-JP` | Female | `ja-JP-HarukaRUS`|
+| Japanese (Japan) | `ja-JP` | Male | `ja-JP-Ichiro`|
+| Korean (Korea) | `ko-KR` | Female | `ko-KR-HeamiRUS`|
+| Malay (Malaysia) | `ms-MY` | Male | `ms-MY-Rizwan`|
+| Norwegian (Bokmål, Norway) | `nb-NO` | Female | `nb-NO-HuldaRUS`|
+| Polish (Poland) | `pl-PL` | Female | `pl-PL-PaulinaRUS`|
+| Portuguese (Brazil) | `pt-BR` | Male | `pt-BR-Daniel`|
+| Portuguese (Brazil) | `pt-BR` | Female | `pt-BR-HeloisaRUS`|
+| Portuguese (Portugal) | `pt-PT` | Female | `pt-PT-HeliaRUS`|
+| Romanian (Romania) | `ro-RO` | Male | `ro-RO-Andrei`|
+| Russian (Russia) | `ru-RU` | Female | `ru-RU-EkaterinaRUS`|
+| Russian (Russia) | `ru-RU` | Female | `ru-RU-Irina`|
+| Russian (Russia) | `ru-RU` | Male | `ru-RU-Pavel`|
+| Slovak (Slovakia) | `sk-SK` | Male | `sk-SK-Filip`|
+| Slovenian (Slovenia) | `sl-SI` | Male | `sl-SI-Lado`|
+| Spanish (Mexico) | `es-MX` | Female | `es-MX-HildaRUS`|
+| Spanish (Mexico) | `es-MX` | Male | `es-MX-Raul`|
+| Spanish (Spain) | `es-ES` | Female | `es-ES-HelenaRUS`|
+| Spanish (Spain) | `es-ES` | Female | `es-ES-Laura`|
+| Spanish (Spain) | `es-ES` | Male | `es-ES-Pablo`|
+| Swedish (Sweden) | `sv-SE` | Female | `sv-SE-HedvigRUS`|
+| Tamil (India) | `ta-IN` | Male | `ta-IN-Valluvar`|
+| Telugu (India) | `te-IN` | Female | `te-IN-Chitra`|
+| Thai (Thailand) | `th-TH` | Male | `th-TH-Pattara`|
+| Turkish (Türkiye) | `tr-TR` | Female | `tr-TR-SedaRUS`|
+| Vietnamese (Vietnam) | `vi-VN` | Male | `vi-VN-An` |
+
+> [!IMPORTANT]
+> The `en-US-Jessa` voice has changed to `en-US-Aria`. If you were using "Jessa" before, convert over to "Aria".
+>
+> You can continue to use the full service name mapping like "Microsoft Server Speech Text to Speech Voice (en-US, AriaRUS)" in your speech synthesis requests.
+
+### Regional support
+
+Use this table to determine **availability of standard voices** by region/endpoint:
+
+| Region | Endpoint |
+|--|-|
+| Australia East | `https://australiaeast.tts.speech.microsoft.com/cognitiveservices/v1` |
+| Brazil South | `https://brazilsouth.tts.speech.microsoft.com/cognitiveservices/v1` |
+| Canada Central | `https://canadacentral.tts.speech.microsoft.com/cognitiveservices/v1` |
+| Central US | `https://centralus.tts.speech.microsoft.com/cognitiveservices/v1` |
+| East Asia | `https://eastasia.tts.speech.microsoft.com/cognitiveservices/v1` |
+| East US | `https://eastus.tts.speech.microsoft.com/cognitiveservices/v1` |
+| East US 2 | `https://eastus2.tts.speech.microsoft.com/cognitiveservices/v1` |
+| France Central | `https://francecentral.tts.speech.microsoft.com/cognitiveservices/v1` |
+| India Central | `https://centralindia.tts.speech.microsoft.com/cognitiveservices/v1` |
+| Japan East | `https://japaneast.tts.speech.microsoft.com/cognitiveservices/v1` |
+| Japan West | `https://japanwest.tts.speech.microsoft.com/cognitiveservices/v1` |
+| Korea Central | `https://koreacentral.tts.speech.microsoft.com/cognitiveservices/v1` |
+| North Central US | `https://northcentralus.tts.speech.microsoft.com/cognitiveservices/v1` |
+| North Europe | `https://northeurope.tts.speech.microsoft.com/cognitiveservices/v1` |
+| South Central US | `https://southcentralus.tts.speech.microsoft.com/cognitiveservices/v1` |
+| Southeast Asia | `https://southeastasia.tts.speech.microsoft.com/cognitiveservices/v1` |
+| UK South | `https://uksouth.tts.speech.microsoft.com/cognitiveservices/v1` |
+| West Central US | `https://westcentralus.tts.speech.microsoft.com/cognitiveservices/v1` |
+| West Europe | `https://westeurope.tts.speech.microsoft.com/cognitiveservices/v1` |
+| West US | `https://westus.tts.speech.microsoft.com/cognitiveservices/v1` |
+| West US 2 | `https://westus2.tts.speech.microsoft.com/cognitiveservices/v1` |
++
+## Next steps
+
+> [!div class="nextstepaction"]
+> [Try out prebuilt neural voice](text-to-speech.md)
ai-services How To Pronunciation Assessment https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/how-to-pronunciation-assessment.md
+
+ Title: Use pronunciation assessment
+
+description: Learn about pronunciation assessment features that are currently publicly available.
+++++++ Last updated : 06/05/2023+
+zone_pivot_groups: programming-languages-speech-sdk
++
+# Use pronunciation assessment
+
+In this article, you learn how to evaluate pronunciation with speech to text through the Speech SDK. To [get pronunciation assessment results](#get-pronunciation-assessment-results), you apply the `PronunciationAssessmentConfig` settings to a `SpeechRecognizer` object.
+
+> [!NOTE]
+> Usage of pronunciation assessment costs the same as standard Speech to text, whether pay-as-you-go or commitment tier [pricing](https://azure.microsoft.com/pricing/details/cognitive-services/speech-services). If you [purchase a commitment tier](../commitment-tier.md) for standard Speech to text, the spend for pronunciation assessment goes towards meeting the commitment.
+
+You can get pronunciation assessment scores for:
+
+- Full text
+- Words
+- Syllable groups
+- Phonemes in [SAPI](/previous-versions/windows/desktop/ee431828(v=vs.85)#american-english-phoneme-table) or [IPA](https://en.wikipedia.org/wiki/IPA) format
+
+> [!NOTE]
+> The syllable group, phoneme name, and spoken phoneme of pronunciation assessment are currently only available for the en-US locale. For information about availability of pronunciation assessment, see [supported languages](language-support.md?tabs=pronunciation-assessment) and [available regions](regions.md#speech-service).
+
+## Configuration parameters
+
+> [!NOTE]
+> Pronunciation assessment is not available with the Speech SDK for Go. You can read about the concepts in this guide, but you must select another programming language for implementation details.
+
+You must create a `PronunciationAssessmentConfig` object with the reference text, grading system, and granularity. Enabling miscue and other configuration settings are optional.
++
+```csharp
+var pronunciationAssessmentConfig = new PronunciationAssessmentConfig(
+ referenceText: "good morning",
+ gradingSystem: GradingSystem.HundredMark,
+ granularity: Granularity.Phoneme,
+ enableMiscue: true);
+```
+
++
+```cpp
+auto pronunciationAssessmentConfig = PronunciationAssessmentConfig::CreateFromJson("{\"referenceText\":\"good morning\",\"gradingSystem\":\"HundredMark\",\"granularity\":\"Phoneme\",\"enableMiscue\":true}");
+```
+++
+```Java
+PronunciationAssessmentConfig pronunciationAssessmentConfig = PronunciationAssessmentConfig.fromJson("{\"referenceText\":\"good morning\",\"gradingSystem\":\"HundredMark\",\"granularity\":\"Phoneme\",\"enableMiscue\":true}");
+```
+++
+```Python
+pronunciation_assessment_config = speechsdk.PronunciationAssessmentConfig(json_string="{\"referenceText\":\"good morning\",\"gradingSystem\":\"HundredMark\",\"granularity\":\"Phoneme\",\"EnableMiscue\":true}")
+```
+++
+```JavaScript
+var pronunciationAssessmentConfig = SpeechSDK.PronunciationAssessmentConfig.fromJSON("{\"referenceText\":\"good morning\",\"gradingSystem\":\"HundredMark\",\"granularity\":\"Phoneme\",\"EnableMiscue\":true}");
+```
+++
+```ObjectiveC
+SPXPronunciationAssessmentConfiguration *pronunciationAssessmentConfig =
+[[SPXPronunciationAssessmentConfiguration alloc] init:@"good morning"
+ gradingSystem:SPXPronunciationAssessmentGradingSystem_HundredMark
+ granularity:SPXPronunciationAssessmentGranularity_Phoneme
+ enableMiscue:true];
+```
++++
+```swift
+var pronunciationAssessmentConfig: SPXPronunciationAssessmentConfiguration?
+do {
+ try pronunciationAssessmentConfig = SPXPronunciationAssessmentConfiguration.init(referenceText, gradingSystem: SPXPronunciationAssessmentGradingSystem.hundredMark, granularity: SPXPronunciationAssessmentGranularity.phoneme, enableMiscue: true)
+} catch {
+ print("error \(error) happened")
+ pronunciationAssessmentConfig = nil
+ return
+}
+```
+++++
+This table lists some of the key configuration parameters for pronunciation assessment.
+
+| Parameter | Description |
+|--|-|
+| `ReferenceText` | The text that the pronunciation is evaluated against. |
+| `GradingSystem` | The point system for score calibration. The `FivePoint` system gives a 0-5 floating point score, and `HundredMark` gives a 0-100 floating point score. Default: `FivePoint`. |
+| `Granularity` | Determines the lowest level of evaluation granularity. Scores for levels greater than or equal to the minimal value are returned. Accepted values are `Phoneme`, which shows the score on the full text, word, syllable, and phoneme level, `Syllable`, which shows the score on the full text, word, and syllable level, `Word`, which shows the score on the full text and word level, or `FullText`, which shows the score on the full text level only. The provided full reference text can be a word, sentence, or paragraph, depending on your input reference text. Default: `Phoneme`.|
+| `EnableMiscue` | Enables miscue calculation when the pronounced words are compared to the reference text. If this value is `True`, the `ErrorType` result value can be set to `Omission` or `Insertion` based on the comparison. Accepted values are `False` and `True`. Default: `False`. To enable miscue calculation, set `EnableMiscue` to `True`. You can refer to the code snippet after the table.|
+| `ScenarioId` | A GUID indicating a customized point system. |
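+
+For example, here's a minimal C# sketch that enables miscue calculation and applies the configuration to a recognizer. The reference text and other settings are illustrative, and `speechRecognizer` is assumed to be an existing `SpeechRecognizer` instance.
+
+```csharp
+// speechRecognizer is your existing SpeechRecognizer instance.
+var pronunciationAssessmentConfig = new PronunciationAssessmentConfig(
+    referenceText: "good morning",
+    gradingSystem: GradingSystem.HundredMark,
+    granularity: Granularity.Phoneme,
+    enableMiscue: true);
+
+// With miscue enabled, omitted or inserted words are reported through the ErrorType result.
+pronunciationAssessmentConfig.ApplyTo(speechRecognizer);
+```
+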
+
+## Syllable groups
+
+Pronunciation assessment can provide syllable-level assessment results. Grouping phonemes into syllables is more legible and better aligned with speaking habits, because a word is typically pronounced syllable by syllable rather than phoneme by phoneme.
+
+The following table compares example phonemes with the corresponding syllables.
+
+| Sample word | Phonemes | Syllables |
+|--|-|-|
+|technological|teknələdʒɪkl|tek·nə·lɑ·dʒɪkl|
+|hello|hɛloʊ|hɛ·loʊ|
+|luck|lʌk|lʌk|
+|photosynthesis|foʊtəsɪnθəsɪs|foʊ·tə·sɪn·θə·sɪs|
+
+To request syllable-level results along with phonemes, set the granularity [configuration parameter](#configuration-parameters) to `Phoneme`.
+
+## Phoneme alphabet format
+
+For the `en-US` locale, the phoneme name is provided together with the score, to help identify which phonemes were pronounced accurately or inaccurately. For other locales, you can only get the phoneme score.
+
+The following table compares example SAPI phonemes with the corresponding IPA phonemes.
+
+| Sample word | SAPI Phonemes | IPA phonemes |
+|--|-|-|
+|hello|h eh l ow|h ɛ l oʊ|
+|luck|l ah k|l ʌ k|
+|photosynthesis|f ow t ax s ih n th ax s ih s|f oʊ t ə s ɪ n θ ə s ɪ s|
+
+To request IPA phonemes, set the phoneme alphabet to `"IPA"`. If you don't specify the alphabet, the phonemes are in SAPI format by default.
++
+```csharp
+pronunciationAssessmentConfig.PhonemeAlphabet = "IPA";
+```
+
++
+```cpp
+auto pronunciationAssessmentConfig = PronunciationAssessmentConfig::CreateFromJson("{\"referenceText\":\"good morning\",\"gradingSystem\":\"HundredMark\",\"granularity\":\"Phoneme\",\"phonemeAlphabet\":\"IPA\"}");
+```
+
++
+```Java
+PronunciationAssessmentConfig pronunciationAssessmentConfig = PronunciationAssessmentConfig.fromJson("{\"referenceText\":\"good morning\",\"gradingSystem\":\"HundredMark\",\"granularity\":\"Phoneme\",\"phonemeAlphabet\":\"IPA\"}");
+```
+++
+```Python
+pronunciation_assessment_config = speechsdk.PronunciationAssessmentConfig(json_string="{\"referenceText\":\"good morning\",\"gradingSystem\":\"HundredMark\",\"granularity\":\"Phoneme\",\"phonemeAlphabet\":\"IPA\"}")
+```
+++
+```JavaScript
+var pronunciationAssessmentConfig = SpeechSDK.PronunciationAssessmentConfig.fromJSON("{\"referenceText\":\"good morning\",\"gradingSystem\":\"HundredMark\",\"granularity\":\"Phoneme\",\"phonemeAlphabet\":\"IPA\"}");
+```
++
+
+```ObjectiveC
+pronunciationAssessmentConfig.phonemeAlphabet = @"IPA";
+```
++++
+```swift
+pronunciationAssessmentConfig?.phonemeAlphabet = "IPA"
+```
+++++
+## Spoken phoneme
+
+With spoken phonemes, you can get confidence scores indicating how likely the spoken phonemes matched the expected phonemes.
+
+For example, to obtain the complete spoken sound for the word "Hello", you can concatenate the first spoken phoneme for each expected phoneme with the highest confidence score. In the following assessment result, when you speak the word "hello", the expected IPA phonemes are "h ɛ l oʊ". However, the actual spoken phonemes are "h ə l oʊ". You have five possible candidates for each expected phoneme in this example. The assessment result shows that the most likely spoken phoneme was `"ə"` instead of the expected phoneme `"ɛ"`. The expected phoneme `"ɛ"` only received a confidence score of 47. Other potential matches received confidence scores of 52, 17, and 2.
+
+```json
+{
+ "Id": "bbb42ea51bdb46d19a1d685e635fe173",
+ "RecognitionStatus": 0,
+ "Offset": 7500000,
+ "Duration": 13800000,
+ "DisplayText": "Hello.",
+ "NBest": [
+ {
+ "Confidence": 0.975003,
+ "Lexical": "hello",
+ "ITN": "hello",
+ "MaskedITN": "hello",
+ "Display": "Hello.",
+ "PronunciationAssessment": {
+ "AccuracyScore": 100,
+ "FluencyScore": 100,
+ "CompletenessScore": 100,
+ "PronScore": 100
+ },
+ "Words": [
+ {
+ "Word": "hello",
+ "Offset": 7500000,
+ "Duration": 13800000,
+ "PronunciationAssessment": {
+ "AccuracyScore": 99.0,
+ "ErrorType": "None"
+ },
+ "Syllables": [
+ {
+ "Syllable": "hɛ",
+ "PronunciationAssessment": {
+ "AccuracyScore": 91.0
+ },
+ "Offset": 7500000,
+ "Duration": 4100000
+ },
+ {
+ "Syllable": "loʊ",
+ "PronunciationAssessment": {
+ "AccuracyScore": 100.0
+ },
+ "Offset": 11700000,
+ "Duration": 9600000
+ }
+ ],
+ "Phonemes": [
+ {
+ "Phoneme": "h",
+ "PronunciationAssessment": {
+ "AccuracyScore": 98.0,
+ "NBestPhonemes": [
+ {
+ "Phoneme": "h",
+ "Score": 100.0
+ },
+ {
+ "Phoneme": "oʊ",
+ "Score": 52.0
+ },
+ {
+ "Phoneme": "ə",
+ "Score": 35.0
+ },
+ {
+ "Phoneme": "k",
+ "Score": 23.0
+ },
+ {
+ "Phoneme": "æ",
+ "Score": 20.0
+ }
+ ]
+ },
+ "Offset": 7500000,
+ "Duration": 3500000
+ },
+ {
+ "Phoneme": "ɛ",
+ "PronunciationAssessment": {
+ "AccuracyScore": 47.0,
+ "NBestPhonemes": [
+ {
+ "Phoneme": "ə",
+ "Score": 100.0
+ },
+ {
+ "Phoneme": "l",
+ "Score": 52.0
+ },
+ {
+ "Phoneme": "ɛ",
+ "Score": 47.0
+ },
+ {
+ "Phoneme": "h",
+ "Score": 17.0
+ },
+ {
+ "Phoneme": "æ",
+ "Score": 2.0
+ }
+ ]
+ },
+ "Offset": 11100000,
+ "Duration": 500000
+ },
+ {
+ "Phoneme": "l",
+ "PronunciationAssessment": {
+ "AccuracyScore": 100.0,
+ "NBestPhonemes": [
+ {
+ "Phoneme": "l",
+ "Score": 100.0
+ },
+ {
+ "Phoneme": "oʊ",
+ "Score": 46.0
+ },
+ {
+ "Phoneme": "ə",
+ "Score": 5.0
+ },
+ {
+ "Phoneme": "ɛ",
+ "Score": 3.0
+ },
+ {
+ "Phoneme": "u",
+ "Score": 1.0
+ }
+ ]
+ },
+ "Offset": 11700000,
+ "Duration": 1100000
+ },
+ {
+ "Phoneme": "oʊ",
+ "PronunciationAssessment": {
+ "AccuracyScore": 100.0,
+ "NBestPhonemes": [
+ {
+ "Phoneme": "oʊ",
+ "Score": 100.0
+ },
+ {
+ "Phoneme": "d",
+ "Score": 29.0
+ },
+ {
+ "Phoneme": "t",
+ "Score": 24.0
+ },
+ {
+ "Phoneme": "n",
+ "Score": 22.0
+ },
+ {
+ "Phoneme": "l",
+ "Score": 18.0
+ }
+ ]
+ },
+ "Offset": 12900000,
+ "Duration": 8400000
+ }
+ ]
+ }
+ ]
+ }
+ ]
+}
+```
+
+To indicate whether to get confidence scores for potential spoken phonemes, and how many, set the `NBestPhonemeCount` parameter to an integer value such as `5`.
+
+
+```csharp
+pronunciationAssessmentConfig.NBestPhonemeCount = 5;
+```
+
++
+```cpp
+auto pronunciationAssessmentConfig = PronunciationAssessmentConfig::CreateFromJson("{\"referenceText\":\"good morning\",\"gradingSystem\":\"HundredMark\",\"granularity\":\"Phoneme\",\"phonemeAlphabet\":\"IPA\",\"nBestPhonemeCount\":5}");
+```
+
++
+```Java
+PronunciationAssessmentConfig pronunciationAssessmentConfig = PronunciationAssessmentConfig.fromJson("{\"referenceText\":\"good morning\",\"gradingSystem\":\"HundredMark\",\"granularity\":\"Phoneme\",\"phonemeAlphabet\":\"IPA\",\"nBestPhonemeCount\":5}");
+```
+
++
+```Python
+pronunciation_assessment_config = speechsdk.PronunciationAssessmentConfig(json_string="{\"referenceText\":\"good morning\",\"gradingSystem\":\"HundredMark\",\"granularity\":\"Phoneme\",\"phonemeAlphabet\":\"IPA\",\"nBestPhonemeCount\":5}")
+```
+++
+```JavaScript
+var pronunciationAssessmentConfig = SpeechSDK.PronunciationAssessmentConfig.fromJSON("{\"referenceText\":\"good morning\",\"gradingSystem\":\"HundredMark\",\"granularity\":\"Phoneme\",\"phonemeAlphabet\":\"IPA\",\"nBestPhonemeCount\":5}");
+```
++
+
+
+```ObjectiveC
+pronunciationAssessmentConfig.nbestPhonemeCount = 5;
+```
++++
+```swift
+pronunciationAssessmentConfig?.nbestPhonemeCount = 5
+```
++++
+## Get pronunciation assessment results
+
+In the `SpeechRecognizer`, you can specify the language whose pronunciation you're learning or practicing. The default locale is `en-US` if not otherwise specified.
+
+> [!TIP]
+> If you aren't sure which locale to set when a language has multiple locales (such as Spanish), try each locale (such as `es-ES` and `es-MX`) separately. Evaluate the results to determine which locale scores higher for your specific scenario.
+
+When speech is recognized, you can request the pronunciation assessment results as SDK objects or a JSON string.
++
+```csharp
+using (var speechRecognizer = new SpeechRecognizer(
+ speechConfig,
+ audioConfig))
+{
+ pronunciationAssessmentConfig.ApplyTo(speechRecognizer);
+ var speechRecognitionResult = await speechRecognizer.RecognizeOnceAsync();
+
+ // The pronunciation assessment result as a Speech SDK object
+ var pronunciationAssessmentResult =
+ PronunciationAssessmentResult.FromResult(speechRecognitionResult);
+
+ // The pronunciation assessment result as a JSON string
+ var pronunciationAssessmentResultJson = speechRecognitionResult.Properties.GetProperty(PropertyId.SpeechServiceResponse_JsonResult);
+}
+```
+
+To learn how to specify the learning language for pronunciation assessment in your own application, see [sample code](https://github.com/Azure-Samples/cognitive-services-speech-sdk/blob/master/samples/csharp/sharedcontent/console/speech_recognition_samples.cs#LL1086C13-L1086C98).
+++
+Word, syllable, and phoneme results aren't available via SDK objects with the Speech SDK for C++. They're only available in the JSON string.
+
+```cpp
+auto speechRecognizer = SpeechRecognizer::FromConfig(
+ speechConfig,
+ audioConfig);
+
+pronunciationAssessmentConfig->ApplyTo(speechRecognizer);
+speechRecognitionResult = speechRecognizer->RecognizeOnceAsync().get();
+
+// The pronunciation assessment result as a Speech SDK object
+auto pronunciationAssessmentResult =
+ PronunciationAssessmentResult::FromResult(speechRecognitionResult);
+
+// The pronunciation assessment result as a JSON string
+auto pronunciationAssessmentResultJson = speechRecognitionResult->Properties.GetProperty(PropertyId::SpeechServiceResponse_JsonResult);
+```
+
+To learn how to specify the learning language for pronunciation assessment in your own application, see [sample code](https://github.com/Azure-Samples/cognitive-services-speech-sdk/blob/master/samples/cpp/windows/console/samples/speech_recognition_samples.cpp#L624).
+
+
+For Android application development, the word, syllable, and phoneme results are available via SDK objects with the Speech SDK for Java. The results are also available in the JSON string. For Java Runtime (JRE) application development, the word, syllable, and phoneme results are only available in the JSON string.
+
+```Java
+SpeechRecognizer speechRecognizer = new SpeechRecognizer(
+ speechConfig,
+ audioConfig);
+
+pronunciationAssessmentConfig.applyTo(speechRecognizer);
+Future<SpeechRecognitionResult> future = speechRecognizer.recognizeOnceAsync();
+SpeechRecognitionResult speechRecognitionResult = future.get(30, TimeUnit.SECONDS);
+
+// The pronunciation assessment result as a Speech SDK object
+PronunciationAssessmentResult pronunciationAssessmentResult =
+ PronunciationAssessmentResult.fromResult(speechRecognitionResult);
+
+// The pronunciation assessment result as a JSON string
+String pronunciationAssessmentResultJson = speechRecognitionResult.getProperties().getProperty(PropertyId.SpeechServiceResponse_JsonResult);
+
+recognizer.close();
+speechConfig.close();
+audioConfig.close();
+pronunciationAssessmentConfig.close();
+speechRecognitionResult.close();
+```
++++
+```JavaScript
+var speechRecognizer = SpeechSDK.SpeechRecognizer.FromConfig(speechConfig, audioConfig);
+
+pronunciationAssessmentConfig.applyTo(speechRecognizer);
+
+speechRecognizer.recognizeOnceAsync((speechRecognitionResult: SpeechSDK.SpeechRecognitionResult) => {
+ // The pronunciation assessment result as a Speech SDK object
+ var pronunciationAssessmentResult = SpeechSDK.PronunciationAssessmentResult.fromResult(speechRecognitionResult);
+
+ // The pronunciation assessment result as a JSON string
+ var pronunciationAssessmentResultJson = speechRecognitionResult.properties.getProperty(SpeechSDK.PropertyId.SpeechServiceResponse_JsonResult);
+},
+{});
+```
+
+To learn how to specify the learning language for pronunciation assessment in your own application, see [sample code](https://github.com/Azure-Samples/cognitive-services-speech-sdk/blob/master/samples/js/node/pronunciationAssessmentContinue.js#LL37C4-L37C52).
+++
+```Python
+speech_recognizer = speechsdk.SpeechRecognizer(
+ speech_config=speech_config, \
+ audio_config=audio_config)
+
+pronunciation_assessment_config.apply_to(speech_recognizer)
+speech_recognition_result = speech_recognizer.recognize_once()
+
+# The pronunciation assessment result as a Speech SDK object
+pronunciation_assessment_result = speechsdk.PronunciationAssessmentResult(speech_recognition_result)
+
+# The pronunciation assessment result as a JSON string
+pronunciation_assessment_result_json = speech_recognition_result.properties.get(speechsdk.PropertyId.SpeechServiceResponse_JsonResult)
+```
+
+To learn how to specify the learning language for pronunciation assessment in your own application, see [sample code](https://github.com/Azure-Samples/cognitive-services-speech-sdk/blob/master/samples/python/console/speech_sample.py#LL937C1-L937C1).
+++
+
+```ObjectiveC
+SPXSpeechRecognizer* speechRecognizer = \
+ [[SPXSpeechRecognizer alloc] initWithSpeechConfiguration:speechConfig
+ audioConfiguration:audioConfig];
+
+[pronunciationAssessmentConfig applyToRecognizer:speechRecognizer];
+
+SPXSpeechRecognitionResult *speechRecognitionResult = [speechRecognizer recognizeOnce];
+
+// The pronunciation assessment result as a Speech SDK object
+SPXPronunciationAssessmentResult* pronunciationAssessmentResult = [[SPXPronunciationAssessmentResult alloc] init:speechRecognitionResult];
+
+// The pronunciation assessment result as a JSON string
+NSString* pronunciationAssessmentResultJson = [speechRecognitionResult.properties getPropertyByName:SPXSpeechServiceResponseJsonResult];
+```
+
+To learn how to specify the learning language for pronunciation assessment in your own application, see [sample code](https://github.com/Azure-Samples/cognitive-services-speech-sdk/blob/master/samples/objective-c/ios/speech-samples/speech-samples/ViewController.m#L862).
+++
+```swift
+let speechRecognizer = try! SPXSpeechRecognizer(speechConfiguration: speechConfig, audioConfiguration: audioConfig)
+
+try! pronConfig.apply(to: speechRecognizer)
+
+let speechRecognitionResult = try? speechRecognizer.recognizeOnce()
+
+// The pronunciation assessment result as a Speech SDK object
+let pronunciationAssessmentResult = SPXPronunciationAssessmentResult(speechRecognitionResult!)
+
+// The pronunciation assessment result as a JSON string
+let pronunciationAssessmentResultJson = speechRecognitionResult!.properties?.getPropertyBy(SPXPropertyId.speechServiceResponseJsonResult)
+```
+
+To learn how to specify the learning language for pronunciation assessment in your own application, see [sample code](https://github.com/Azure-Samples/cognitive-services-speech-sdk/blob/master/samples/swift/ios/speech-samples/speech-samples/ViewController.swift#L224).
++++
+### Result parameters
+
+This table lists some of the key pronunciation assessment results.
+
+| Parameter | Description |
+|--|-|
+| `AccuracyScore` | Pronunciation accuracy of the speech. Accuracy indicates how closely the phonemes match a native speaker's pronunciation. Syllable, word, and full text accuracy scores are aggregated from phoneme-level accuracy score, and refined with assessment objectives.|
+| `FluencyScore` | Fluency of the given speech. Fluency indicates how closely the speech matches a native speaker's use of silent breaks between words. |
+| `CompletenessScore` | Completeness of the speech, calculated by the ratio of pronounced words to the input reference text. |
+| `PronScore` | Overall score indicating the pronunciation quality of the given speech. `PronScore` is aggregated from `AccuracyScore`, `FluencyScore`, and `CompletenessScore` with weight. |
+| `ErrorType` | This value indicates whether a word is omitted, inserted, or mispronounced, compared to the `ReferenceText`. Possible values are `None`, `Omission`, `Insertion`, and `Mispronunciation`. The error type can be `Mispronunciation` when the pronunciation `AccuracyScore` for a word is below 60.|
+
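+For example, a minimal C# sketch of reading these scores from the SDK result object follows. It assumes `speechRecognitionResult` comes from a recognizer that had a pronunciation assessment configuration applied, as in the samples above; the overall `PronScore` from the JSON is exposed as the SDK's `PronunciationScore` property.
+
+```csharp
+// Sketch: read the overall pronunciation assessment scores from the result object.
+// `speechRecognitionResult` is assumed to come from a recognizer with a
+// PronunciationAssessmentConfig applied, as shown in the earlier samples.
+var assessment = PronunciationAssessmentResult.FromResult(speechRecognitionResult);
+
+Console.WriteLine($"Accuracy: {assessment.AccuracyScore}");
+Console.WriteLine($"Fluency: {assessment.FluencyScore}");
+Console.WriteLine($"Completeness: {assessment.CompletenessScore}");
+Console.WriteLine($"Pronunciation: {assessment.PronunciationScore}");
+```
+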
+### JSON result example
+
+Pronunciation assessment results for the spoken word "hello" are shown as a JSON string in the following example. Here's what you should know:
+- The phoneme [alphabet](#phoneme-alphabet-format) is IPA.
+- The [syllables](#syllable-groups) are returned alongside phonemes for the same word.
+- You can use the `Offset` and `Duration` values to align syllables with their corresponding phonemes. For example, the starting offset (11700000) of the second syllable ("loʊ") aligns with the third phoneme ("l"). The offset represents the time at which the recognized speech begins in the audio stream, and it's measured in 100-nanosecond units. To learn more about `Offset` and `Duration`, see [response properties](rest-speech-to-text-short.md#response-properties).
+- There are five `NBestPhonemes` corresponding to the number of [spoken phonemes](#spoken-phoneme) requested.
+- Within `Phonemes`, the most likely [spoken phoneme](#spoken-phoneme) was `"ə"` instead of the expected phoneme `"ɛ"`. The expected phoneme `"ɛ"` received a confidence score of only 47. Other potential matches received confidence scores of 52, 17, and 2.
+
+```json
+{
+ "Id": "bbb42ea51bdb46d19a1d685e635fe173",
+ "RecognitionStatus": 0,
+ "Offset": 7500000,
+ "Duration": 13800000,
+ "DisplayText": "Hello.",
+ "NBest": [
+ {
+ "Confidence": 0.975003,
+ "Lexical": "hello",
+ "ITN": "hello",
+ "MaskedITN": "hello",
+ "Display": "Hello.",
+ "PronunciationAssessment": {
+ "AccuracyScore": 100,
+ "FluencyScore": 100,
+ "CompletenessScore": 100,
+ "PronScore": 100
+ },
+ "Words": [
+ {
+ "Word": "hello",
+ "Offset": 7500000,
+ "Duration": 13800000,
+ "PronunciationAssessment": {
+ "AccuracyScore": 99.0,
+ "ErrorType": "None"
+ },
+ "Syllables": [
+ {
+ "Syllable": "hɛ",
+ "PronunciationAssessment": {
+ "AccuracyScore": 91.0
+ },
+ "Offset": 7500000,
+ "Duration": 4100000
+ },
+ {
+ "Syllable": "loʊ",
+ "PronunciationAssessment": {
+ "AccuracyScore": 100.0
+ },
+ "Offset": 11700000,
+ "Duration": 9600000
+ }
+ ],
+ "Phonemes": [
+ {
+ "Phoneme": "h",
+ "PronunciationAssessment": {
+ "AccuracyScore": 98.0,
+ "NBestPhonemes": [
+ {
+ "Phoneme": "h",
+ "Score": 100.0
+ },
+ {
+ "Phoneme": "oʊ",
+ "Score": 52.0
+ },
+ {
+ "Phoneme": "ə",
+ "Score": 35.0
+ },
+ {
+ "Phoneme": "k",
+ "Score": 23.0
+ },
+ {
+ "Phoneme": "æ",
+ "Score": 20.0
+ }
+ ]
+ },
+ "Offset": 7500000,
+ "Duration": 3500000
+ },
+ {
+ "Phoneme": "ɛ",
+ "PronunciationAssessment": {
+ "AccuracyScore": 47.0,
+ "NBestPhonemes": [
+ {
+ "Phoneme": "ə",
+ "Score": 100.0
+ },
+ {
+ "Phoneme": "l",
+ "Score": 52.0
+ },
+ {
+ "Phoneme": "ɛ",
+ "Score": 47.0
+ },
+ {
+ "Phoneme": "h",
+ "Score": 17.0
+ },
+ {
+ "Phoneme": "æ",
+ "Score": 2.0
+ }
+ ]
+ },
+ "Offset": 11100000,
+ "Duration": 500000
+ },
+ {
+ "Phoneme": "l",
+ "PronunciationAssessment": {
+ "AccuracyScore": 100.0,
+ "NBestPhonemes": [
+ {
+ "Phoneme": "l",
+ "Score": 100.0
+ },
+ {
+ "Phoneme": "oʊ",
+ "Score": 46.0
+ },
+ {
+ "Phoneme": "ə",
+ "Score": 5.0
+ },
+ {
+ "Phoneme": "ɛ",
+ "Score": 3.0
+ },
+ {
+ "Phoneme": "u",
+ "Score": 1.0
+ }
+ ]
+ },
+ "Offset": 11700000,
+ "Duration": 1100000
+ },
+ {
+ "Phoneme": "oʊ",
+ "PronunciationAssessment": {
+ "AccuracyScore": 100.0,
+ "NBestPhonemes": [
+ {
+ "Phoneme": "oʊ",
+ "Score": 100.0
+ },
+ {
+ "Phoneme": "d",
+ "Score": 29.0
+ },
+ {
+ "Phoneme": "t",
+ "Score": 24.0
+ },
+ {
+ "Phoneme": "n",
+ "Score": 22.0
+ },
+ {
+ "Phoneme": "l",
+ "Score": 18.0
+ }
+ ]
+ },
+ "Offset": 12900000,
+ "Duration": 8400000
+ }
+ ]
+ }
+ ]
+ }
+ ]
+}
+```
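+
+If you work with the JSON string directly, a small parsing sketch such as the following (C#, using `System.Text.Json`) can align syllables and phonemes by their offsets. It assumes `pronunciationAssessmentResultJson` is the JSON string retrieved from the result properties in the earlier samples, and the property names match the example above.
+
+```csharp
+using System.Text.Json;
+
+// Sketch: walk the first word of the JSON result shown above and print the
+// starting offset (in 100-nanosecond ticks) of each syllable and phoneme.
+using var doc = JsonDocument.Parse(pronunciationAssessmentResultJson);
+var word = doc.RootElement.GetProperty("NBest")[0].GetProperty("Words")[0];
+
+foreach (var syllable in word.GetProperty("Syllables").EnumerateArray())
+{
+    var text = syllable.GetProperty("Syllable").GetString();
+    var offset = syllable.GetProperty("Offset").GetInt64();
+    Console.WriteLine($"Syllable {text} starts at offset {offset}");
+}
+
+foreach (var phoneme in word.GetProperty("Phonemes").EnumerateArray())
+{
+    var text = phoneme.GetProperty("Phoneme").GetString();
+    var offset = phoneme.GetProperty("Offset").GetInt64();
+    Console.WriteLine($"Phoneme {text} starts at offset {offset}");
+}
+```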
+
+## Pronunciation assessment in streaming mode
+
+Pronunciation assessment supports uninterrupted streaming mode. The recording time can be unlimited through the Speech SDK. As long as you don't stop recording, the evaluation process doesn't finish, and you can pause and resume evaluation as needed. In streaming mode, the `AccuracyScore`, `FluencyScore`, and `CompletenessScore` vary over time throughout the recording and evaluation process.
++
+For how to use Pronunciation Assessment in streaming mode in your own application, see [sample code](https://github.com/Azure-Samples/cognitive-services-speech-sdk/blob/master/samples/csharp/sharedcontent/console/speech_recognition_samples.cs#:~:text=PronunciationAssessmentWithStream).
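+
+A minimal C# sketch of the same idea (not the full streaming sample linked above): apply the pronunciation assessment configuration and use continuous recognition so that scores keep arriving while audio is still being captured. The `speechConfig`, `audioConfig`, and `pronunciationAssessmentConfig` variables are assumed to be set up as in the earlier samples.
+
+```csharp
+// Sketch: continuous recognition with pronunciation assessment applied.
+using var speechRecognizer = new SpeechRecognizer(speechConfig, audioConfig);
+pronunciationAssessmentConfig.ApplyTo(speechRecognizer);
+
+speechRecognizer.Recognized += (s, e) =>
+{
+    var assessment = PronunciationAssessmentResult.FromResult(e.Result);
+    Console.WriteLine($"Accuracy: {assessment.AccuracyScore}, Fluency: {assessment.FluencyScore}");
+};
+
+await speechRecognizer.StartContinuousRecognitionAsync();
+// Keep feeding audio (for example, through a push stream) for as long as you want to assess.
+await speechRecognizer.StopContinuousRecognitionAsync();
+```
+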
+++
+For how to use Pronunciation Assessment in streaming mode in your own application, see [sample code](https://github.com/Azure-Samples/cognitive-services-speech-sdk/blob/master/samples/cpp/windows/console/samples/speech_recognition_samples.cpp#:~:text=PronunciationAssessmentWithStream).
+++
+For how to use Pronunciation Assessment in streaming mode in your own application, see [sample code](https://github.com/Azure-Samples/cognitive-services-speech-sdk/blob/master/samples/java/android/sdkdemo/app/src/main/java/com/microsoft/cognitiveservices/speech/samples/sdkdemo/MainActivity.java#L548).
+++
+For how to use Pronunciation Assessment in streaming mode in your own application, see [sample code](https://github.com/Azure-Samples/cognitive-services-speech-sdk/blob/master/samples/python/console/speech_sample.py#L915).
+++
+For how to use Pronunciation Assessment in streaming mode in your own application, see [sample code](https://github.com/Azure-Samples/cognitive-services-speech-sdk/blob/master/samples/js/node/pronunciationAssessment.js).
+++
+For how to use Pronunciation Assessment in streaming mode in your own application, see [sample code](https://github.com/Azure-Samples/cognitive-services-speech-sdk/blob/master/samples/objective-c/ios/speech-samples/speech-samples/ViewController.m#L831).
+++
+For how to use Pronunciation Assessment in streaming mode in your own application, see [sample code](https://github.com/Azure-Samples/cognitive-services-speech-sdk/blob/master/samples/swift/ios/speech-samples/speech-samples/ViewController.swift#L191).
++++
+## Next steps
+
+- Learn our quality [benchmark](https://techcommunity.microsoft.com/t5/ai-cognitive-services-blog/speech-service-update-hierarchical-transformer-for-pronunciation/ba-p/3740866)
+- Try out [pronunciation assessment in Speech Studio](pronunciation-assessment-tool.md)
+- Check out easy-to-deploy Pronunciation Assessment [demo](https://github.com/Azure-Samples/Cognitive-Speech-TTS/tree/master/PronunciationAssessment/BrowserJS) and watch the [video tutorial](https://www.youtube.com/watch?v=zFlwm7N4Awc) of pronunciation assessment.
ai-services How To Recognize Intents From Speech Csharp https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/how-to-recognize-intents-from-speech-csharp.md
+
+ Title: How to recognize intents from speech using the Speech SDK C#
+
+description: In this guide, you learn how to recognize intents from speech using the Speech SDK for C#.
++++++ Last updated : 02/08/2022+
+ms.devlang: csharp
+++
+# How to recognize intents from speech using the Speech SDK for C#
+
+The Azure AI services [Speech SDK](speech-sdk.md) integrates with the [Language Understanding service (LUIS)](https://www.luis.ai/home) to provide **intent recognition**. An intent is something the user wants to do: book a flight, check the weather, or make a call. The user can use whatever terms feel natural. Using machine learning, LUIS maps user requests to the intents you've defined.
+
+> [!NOTE]
+> A LUIS application defines the intents and entities you want to recognize. It's separate from the C# application that uses the Speech service. In this article, "app" means the LUIS app, while "application" means the C# code.
+
+In this guide, you use the Speech SDK to develop a C# console application that derives intents from user utterances through your device's microphone. You'll learn how to:
+
+> [!div class="checklist"]
+>
+> - Create a Visual Studio project referencing the Speech SDK NuGet package
+> - Create a speech configuration and get an intent recognizer
+> - Get the model for your LUIS app and add the intents you need
+> - Specify the language for speech recognition
+> - Recognize speech from a file
+> - Use asynchronous, event-driven continuous recognition
+
+## Prerequisites
+
+Be sure you have the following items before you begin this guide:
+
+- A LUIS account. You can get one for free through the [LUIS portal](https://www.luis.ai/home).
+- [Visual Studio 2019](https://visualstudio.microsoft.com/downloads/) (any edition).
+
+## LUIS and speech
+
+LUIS integrates with the Speech service to recognize intents from speech. You don't need a Speech service subscription, just LUIS.
+
+LUIS uses two kinds of keys:
+
+| Key type | Purpose |
+| | -- |
+| Authoring | Lets you create and modify LUIS apps programmatically |
+| Prediction | Used to access the LUIS application in runtime |
+
+For this guide, you need the prediction key type. This guide uses the example Home Automation LUIS app, which you can create by following the [Use prebuilt Home automation app](../luis/luis-get-started-create-app.md) quickstart. If you've created a LUIS app of your own, you can use it instead.
+
+When you create a LUIS app, LUIS automatically generates an authoring key so you can test the app using text queries. This key doesn't enable the Speech service integration and won't work with this guide. Create a LUIS resource in the Azure dashboard and assign it to the LUIS app. You can use the free subscription tier for this guide.
+
+After you create the LUIS resource in the Azure dashboard, log into the [LUIS portal](https://www.luis.ai/home), choose your application on the **My Apps** page, then switch to the app's **Manage** page. Finally, select **Azure Resources** in the sidebar.
++
+On the **Azure Resources** page:
+
+Select the icon next to a key to copy it to the clipboard. (You may use either key.)
+
+## Create the project and add the workload
+
+To create a Visual Studio project for Windows development, you need to create the project, set up Visual Studio for .NET desktop development, install the Speech SDK, and choose the target architecture.
+
+To start, create the project in Visual Studio, and make sure that Visual Studio is set up for .NET desktop development:
+
+1. Open Visual Studio 2019.
+
+1. In the Start window, select **Create a new project**.
+
+1. In the **Create a new project** window, choose **Console App (.NET Framework)**, and then select **Next**.
+
+1. In the **Configure your new project** window, enter *helloworld* in **Project name**, choose or create the directory path in **Location**, and then select **Create**.
+
+1. From the Visual Studio menu bar, select **Tools** > **Get Tools and Features**, which opens Visual Studio Installer and displays the **Modifying** dialog box.
+
+1. Check whether the **.NET desktop development** workload is available. If the workload hasn't been installed, select the check box next to it, and then select **Modify** to start the installation. It may take a few minutes to download and install.
+
+ If the check box next to **.NET desktop development** is already selected, select **Close** to exit the dialog box.
+
+ ![Enable .NET desktop development](~/articles/ai-services/speech-service/media/sdk/vs-enable-net-desktop-workload.png)
+
+1. Close Visual Studio Installer.
+
+### Install the Speech SDK
+
+The next step is to install the [Speech SDK NuGet package](https://aka.ms/csspeech/nuget), so you can reference it in the code.
+
+1. In the Solution Explorer, right-click the **helloworld** project, and then select **Manage NuGet Packages** to show the NuGet Package Manager.
+
+ ![NuGet Package Manager](~/articles/ai-services/speech-service/media/sdk/vs-nuget-package-manager.png)
+
+1. In the upper-right corner, find the **Package Source** drop-down box, and make sure that **nuget.org** is selected.
+
+1. In the upper-left corner, select **Browse**.
+
+1. In the search box, type *Microsoft.CognitiveServices.Speech*, and then press **Enter**.
+
+1. From the search results, select the **Microsoft.CognitiveServices.Speech** package, and then select **Install** to install the latest stable version.
+
+ ![Install Microsoft.CognitiveServices.Speech NuGet package](~/articles/ai-services/speech-service/media/sdk/qs-csharp-dotnet-windows-03-nuget-install-1.0.0.png)
+
+1. Accept all agreements and licenses to start the installation.
+
+ After the package is installed, a confirmation appears in the **Package Manager Console** window.
+
+### Choose the target architecture
+
+Now, to build and run the console application, create a platform configuration matching your computer's architecture.
+
+1. From the menu bar, select **Build** > **Configuration Manager**. The **Configuration Manager** dialog box appears.
+
+ ![Configuration Manager dialog box](~/articles/ai-services/speech-service/media/sdk/vs-configuration-manager-dialog-box.png)
+
+1. In the **Active solution platform** drop-down box, select **New**. The **New Solution Platform** dialog box appears.
+
+1. In the **Type or select the new platform** drop-down box:
+ - If you're running 64-bit Windows, select **x64**.
+ - If you're running 32-bit Windows, select **x86**.
+
+1. Select **OK** and then **Close**.
+
+## Add the code
+
+Next, you add code to the project.
+
+1. From **Solution Explorer**, open the file **Program.cs**.
+
+1. Replace the block of `using` statements at the beginning of the file with the following declarations:
+
+ [!code-csharp[Top-level declarations](~/samples-cognitive-services-speech-sdk/samples/csharp/sharedcontent/console/intent_recognition_samples.cs#toplevel)]
+
+1. Replace the provided `Main()` method, with the following asynchronous equivalent:
+
+ ```csharp
+ public static async Task Main()
+ {
+ await RecognizeIntentAsync();
+ Console.WriteLine("Please press Enter to continue.");
+ Console.ReadLine();
+ }
+ ```
+
+1. Create an empty asynchronous method `RecognizeIntentAsync()`, as shown here:
+
+ ```csharp
+ static async Task RecognizeIntentAsync()
+ {
+ }
+ ```
+
+1. In the body of this new method, add this code:
+
+ [!code-csharp[Intent recognition by using a microphone](~/samples-cognitive-services-speech-sdk/samples/csharp/sharedcontent/console/intent_recognition_samples.cs#intentRecognitionWithMicrophone)]
+
+1. Replace the placeholders in this method with your LUIS resource key, region, and app ID as follows.
+
+ | Placeholder | Replace with |
+ | -- | |
+ | `YourLanguageUnderstandingSubscriptionKey` | Your LUIS resource key. Again, you must get this item from your Azure dashboard. You can find it on your app's **Azure Resources** page (under **Manage**) in the [LUIS portal](https://www.luis.ai/home). |
+ | `YourLanguageUnderstandingServiceRegion` | The short identifier for the region your LUIS resource is in, such as `westus` for West US. See [Regions](regions.md). |
+ | `YourLanguageUnderstandingAppId` | The LUIS app ID. You can find it on your app's **Settings** page in the [LUIS portal](https://www.luis.ai/home). |
+
+With these changes made, you can build (**Control+Shift+B**) and run (**F5**) the application. When you're prompted, try saying "Turn off the lights" into your PC's microphone. The application displays the result in the console window.
+
+The following sections include a discussion of the code.
+
+## Create an intent recognizer
+
+First, you need to create a speech configuration from your LUIS prediction key and region. You can use speech configurations to create recognizers for the various capabilities of the Speech SDK. The speech configuration has multiple ways to specify the resource you want to use; here, we use `FromSubscription`, which takes the resource key and region.
+
+> [!NOTE]
+> Use the key and region of your LUIS resource, not a Speech resource.
+
+Next, create an intent recognizer using `new IntentRecognizer(config)`. Since the configuration already knows which resource to use, you don't need to specify the key again when creating the recognizer.
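+
+Put together, the setup described above looks like the following sketch. The placeholder values are the same ones listed in the placeholder table earlier in this article.
+
+```csharp
+// Sketch: create the speech configuration from the LUIS prediction key and region,
+// then create the intent recognizer from it.
+var config = SpeechConfig.FromSubscription(
+    "YourLanguageUnderstandingSubscriptionKey",
+    "YourLanguageUnderstandingServiceRegion");
+
+using var recognizer = new IntentRecognizer(config);
+```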
+
+## Import a LUIS model and add intents
+
+Now import the model from the LUIS app using `LanguageUnderstandingModel.FromAppId()` and add the LUIS intents that you wish to recognize via the recognizer's `AddIntent()` method. These two steps improve the accuracy of speech recognition by indicating words that the user is likely to use in their requests. You don't have to add all the app's intents if you don't need to recognize them all in your application.
+
+To add intents, you must provide three arguments: the LUIS model (which has been created and is named `model`), the intent name, and an intent ID. The difference between the ID and the name is as follows.
+
+| `AddIntent()`&nbsp;argument | Purpose |
+| | - |
+| `intentName` | The name of the intent as defined in the LUIS app. This value must match the LUIS intent name exactly. |
+| `intentID` | An ID assigned to a recognized intent by the Speech SDK. This value can be whatever you like; it doesn't need to correspond to the intent name as defined in the LUIS app. If multiple intents are handled by the same code, for instance, you could use the same ID for them. |
+
+The Home Automation LUIS app has two intents: one for turning on a device, and another for turning off a device. The lines below add these intents to the recognizer; replace the three `AddIntent` lines in the `RecognizeIntentAsync()` method with this code.
+
+```csharp
+recognizer.AddIntent(model, "HomeAutomation.TurnOff", "off");
+recognizer.AddIntent(model, "HomeAutomation.TurnOn", "on");
+```
+
+Instead of adding individual intents, you can also use the `AddAllIntents` method to add all the intents in a model to the recognizer.
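+
+For example, the following sketch imports the model and adds every intent it defines. `YourLanguageUnderstandingAppId` is the same placeholder used earlier in this article.
+
+```csharp
+// Sketch: import the LUIS model by app ID and add all of its intents at once.
+var model = LanguageUnderstandingModel.FromAppId("YourLanguageUnderstandingAppId");
+recognizer.AddAllIntents(model);
+```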
+
+## Start recognition
+
+With the recognizer created and the intents added, recognition can begin. The Speech SDK supports both single-shot and continuous recognition.
+
+| Recognition mode | Methods to call | Result |
+| - | | |
+| Single-shot | `RecognizeOnceAsync()` | Returns the recognized intent, if any, after one utterance. |
+| Continuous | `StartContinuousRecognitionAsync()`<br>`StopContinuousRecognitionAsync()` | Recognizes multiple utterances; emits events (for example, `IntermediateResultReceived`) when results are available. |
+
+The application uses single-shot mode and so calls `RecognizeOnceAsync()` to begin recognition. The result is an `IntentRecognitionResult` object containing information about the intent recognized. You extract the LUIS JSON response by using the following expression:
+
+```csharp
+result.Properties.GetProperty(PropertyId.LanguageUnderstandingServiceResponse_JsonResult)
+```
+
+The application doesn't parse the JSON result. It only displays the JSON text in the console window.
+
+![Single LUIS recognition results](media/sdk/luis-results.png)
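+
+If your own application does need to parse the response, a small sketch like the following can pull out the top intent. The `topScoringIntent` property name is an assumption based on the LUIS v2 response shape; adjust it to match the JSON your app actually returns.
+
+```csharp
+using System.Text.Json;
+
+// Sketch: extract the top-scoring intent from the LUIS JSON response.
+var json = result.Properties.GetProperty(PropertyId.LanguageUnderstandingServiceResponse_JsonResult);
+using var doc = JsonDocument.Parse(json);
+var topIntent = doc.RootElement.GetProperty("topScoringIntent").GetProperty("intent").GetString();
+Console.WriteLine($"Top intent: {topIntent}");
+```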
+
+## Specify recognition language
+
+By default, LUIS recognizes intents in US English (`en-us`). By assigning a locale code to the `SpeechRecognitionLanguage` property of the speech configuration, you can recognize intents in other languages. For example, add `config.SpeechRecognitionLanguage = "de-de";` in our application before creating the recognizer to recognize intents in German. For more information, see [LUIS language support](../LUIS/luis-language-support.md).
+
+## Continuous recognition from a file
+
+The following code illustrates two additional capabilities of intent recognition using the Speech SDK. The first, previously mentioned, is continuous recognition, where the recognizer emits events when results are available. These events can then be processed by event handlers that you provide. With continuous recognition, you call the recognizer's `StartContinuousRecognitionAsync()` method to start recognition instead of `RecognizeOnceAsync()`.
+
+The other capability is reading the audio containing the speech to be processed from a WAV file. Implementation involves creating an audio configuration that can be used when creating the intent recognizer. The file must be single-channel (mono) with a sampling rate of 16 kHz.
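+
+A sketch of that audio configuration (assuming `config` is the speech configuration created earlier) looks like this:
+
+```csharp
+// Sketch: read speech from a mono, 16-kHz WAV file instead of the microphone.
+using var audioConfig = AudioConfig.FromWavFileInput("whatstheweatherlike.wav");
+using var recognizer = new IntentRecognizer(config, audioConfig);
+```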
+
+To try out these features, delete or comment out the body of the `RecognizeIntentAsync()` method, and add the following code in its place.
+
+[!code-csharp[Intent recognition by using events from a file](~/samples-cognitive-services-speech-sdk/samples/csharp/sharedcontent/console/intent_recognition_samples.cs#intentContinuousRecognitionWithFile)]
+
+Revise the code to include your LUIS prediction key, region, and app ID and to add the Home Automation intents, as before. Change `whatstheweatherlike.wav` to the name of your recorded audio file. Then build, copy the audio file to the build directory, and run the application.
+
+For example, if you say "Turn off the lights", pause, and then say "Turn on the lights" in your recorded audio file, console output similar to the following may appear:
+
+![Audio file LUIS recognition results](media/sdk/luis-results-2.png)
+
+The Speech SDK team actively maintains a large set of examples in an open-source repository. For the sample source code repository, see the [Azure AI services Speech SDK on GitHub](https://aka.ms/csspeech/samples). There are samples for C#, C++, Java, Python, Objective-C, Swift, JavaScript, UWP, Unity, and Xamarin. Look for the code from this article in the **samples/csharp/sharedcontent/console** folder.
+
+## Next steps
+
+> [Quickstart: Recognize speech from a microphone](./get-started-speech-to-text.md?pivots=programming-language-csharp&tabs=dotnetcore)
ai-services How To Recognize Speech https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/how-to-recognize-speech.md
+
+ Title: "How to recognize speech - Speech service"
+
+description: Learn how to convert speech to text, including object construction, supported audio input formats, and configuration options for speech recognition.
++++++ Last updated : 09/16/2022+
+ms.devlang: cpp, csharp, golang, java, javascript, objective-c, python
+
+zone_pivot_groups: programming-languages-speech-services
+keywords: speech to text, speech to text software
++
+# How to recognize speech
+++++++++++
+## Next steps
+
+* [Try the speech to text quickstart](get-started-speech-to-text.md)
+* [Improve recognition accuracy with custom speech](custom-speech-overview.md)
+* [Use batch transcription](batch-transcription.md)
ai-services How To Select Audio Input Devices https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/how-to-select-audio-input-devices.md
+
+ Title: Select an audio input device with the Speech SDK
+
+description: 'Learn about selecting audio input devices in the Speech SDK (C++, C#, Python, Objective-C, Java, and JavaScript) by obtaining the IDs of the audio devices connected to a system.'
++++++ Last updated : 07/05/2019+
+ms.devlang: cpp, csharp, java, javascript, objective-c, python
+++
+# Select an audio input device with the Speech SDK
+
+This article describes how to obtain the IDs of the audio devices connected to a system. These IDs can then be used in the Speech SDK to select the audio input. You configure the audio device through the `AudioConfig` object:
+
+```C++
+audioConfig = AudioConfig::FromMicrophoneInput("<device id>");
+```
+
+```cs
+audioConfig = AudioConfig.FromMicrophoneInput("<device id>");
+```
+
+```Python
+audio_config = AudioConfig(device_name="<device id>")
+```
+
+```objc
+audioConfig = AudioConfiguration.FromMicrophoneInput("<device id>");
+```
+
+```Java
+audioConfig = AudioConfig.fromMicrophoneInput("<device id>");
+```
+
+```JavaScript
+audioConfig = AudioConfig.fromMicrophoneInput("<device id>");
+```
+
+> [!Note]
+> Microphone use isn't available for JavaScript running in Node.js.
+
+## Audio device IDs on Windows for desktop applications
+
+Audio device [endpoint ID strings](/windows/desktop/CoreAudio/endpoint-id-strings) can be retrieved from the [`IMMDevice`](/windows/desktop/api/mmdeviceapi/nn-mmdeviceapi-immdevice) object in Windows for desktop applications.
+
+The following code sample illustrates how to use it to enumerate audio devices in C++:
+
+```cpp
+#include <cstdio>
+#include <mmdeviceapi.h>
+
+#include <Functiondiscoverykeys_devpkey.h>
+
+const CLSID CLSID_MMDeviceEnumerator = __uuidof(MMDeviceEnumerator);
+const IID IID_IMMDeviceEnumerator = __uuidof(IMMDeviceEnumerator);
+
+constexpr auto REFTIMES_PER_SEC = (10000000 * 25);
+constexpr auto REFTIMES_PER_MILLISEC = 10000;
+
+#define EXIT_ON_ERROR(hres) \
+ if (FAILED(hres)) { goto Exit; }
+#define SAFE_RELEASE(punk) \
+ if ((punk) != NULL) \
+ { (punk)->Release(); (punk) = NULL; }
+
+void ListEndpoints();
+
+int main()
+{
+ CoInitializeEx(NULL, COINIT_MULTITHREADED);
+ ListEndpoints();
+}
+
+//--
+// This function enumerates all active (plugged in) audio
+// rendering endpoint devices. It prints the friendly name
+// and endpoint ID string of each endpoint device.
+//--
+void ListEndpoints()
+{
+ HRESULT hr = S_OK;
+ IMMDeviceEnumerator *pEnumerator = NULL;
+ IMMDeviceCollection *pCollection = NULL;
+ IMMDevice *pEndpoint = NULL;
+ IPropertyStore *pProps = NULL;
+ LPWSTR pwszID = NULL;
+
+ hr = CoCreateInstance(CLSID_MMDeviceEnumerator, NULL, CLSCTX_ALL, IID_IMMDeviceEnumerator, (void**)&pEnumerator);
+ EXIT_ON_ERROR(hr);
+
+ hr = pEnumerator->EnumAudioEndpoints(eCapture, DEVICE_STATE_ACTIVE, &pCollection);
+ EXIT_ON_ERROR(hr);
+
+ UINT count;
+ hr = pCollection->GetCount(&count);
+ EXIT_ON_ERROR(hr);
+
+ if (count == 0)
+ {
+ printf("No endpoints found.\n");
+ }
+
+ // Each iteration prints the name of an endpoint device.
+ PROPVARIANT varName;
+ for (ULONG i = 0; i < count; i++)
+ {
+ // Get the pointer to endpoint number i.
+ hr = pCollection->Item(i, &pEndpoint);
+ EXIT_ON_ERROR(hr);
+
+ // Get the endpoint ID string.
+ hr = pEndpoint->GetId(&pwszID);
+ EXIT_ON_ERROR(hr);
+
+ hr = pEndpoint->OpenPropertyStore(
+ STGM_READ, &pProps);
+ EXIT_ON_ERROR(hr);
+
+ // Initialize the container for property value.
+ PropVariantInit(&varName);
+
+ // Get the endpoint's friendly-name property.
+ hr = pProps->GetValue(PKEY_Device_FriendlyName, &varName);
+ EXIT_ON_ERROR(hr);
+
+ // Print the endpoint friendly name and endpoint ID.
+ printf("Endpoint %d: \"%S\" (%S)\n", i, varName.pwszVal, pwszID);
+
+ CoTaskMemFree(pwszID);
+ pwszID = NULL;
+ PropVariantClear(&varName);
+ }
+
+Exit:
+ CoTaskMemFree(pwszID);
+ pwszID = NULL;
+ PropVariantClear(&varName);
+ SAFE_RELEASE(pEnumerator);
+ SAFE_RELEASE(pCollection);
+ SAFE_RELEASE(pEndpoint);
+ SAFE_RELEASE(pProps);
+}
+```
+
+In C#, you can use the [NAudio](https://github.com/naudio/NAudio) library to access the CoreAudio API and enumerate devices as follows:
+
+```cs
+using System;
+
+using NAudio.CoreAudioApi;
+
+namespace ConsoleApp
+{
+ class Program
+ {
+ static void Main(string[] args)
+ {
+ var enumerator = new MMDeviceEnumerator();
+ foreach (var endpoint in
+ enumerator.EnumerateAudioEndPoints(DataFlow.Capture, DeviceState.Active))
+ {
+ Console.WriteLine("{0} ({1})", endpoint.FriendlyName, endpoint.ID);
+ }
+ }
+ }
+}
+```
+
+A sample device ID is `{0.0.1.00000000}.{5f23ab69-6181-4f4a-81a4-45414013aac8}`.
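+
+The retrieved ID can then be passed to the Speech SDK. A minimal sketch (assuming `speechConfig` is an existing `SpeechConfig` instance):
+
+```csharp
+// Sketch: select one of the enumerated endpoint IDs as the audio input device.
+var audioConfig = AudioConfig.FromMicrophoneInput("<device id>");
+using var recognizer = new SpeechRecognizer(speechConfig, audioConfig);
+```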
+
+## Audio device IDs on UWP
+
+On the Universal Windows Platform (UWP), you can obtain audio input devices by using the `Id()` property of the corresponding [`DeviceInformation`](/uwp/api/windows.devices.enumeration.deviceinformation) object.
+
+The following code samples show how to do this step in C++ and C#:
+
+```cpp
+#include <winrt/Windows.Foundation.h>
+#include <winrt/Windows.Devices.Enumeration.h>
+
+using namespace winrt::Windows::Devices::Enumeration;
+
+void enumerateDeviceIds()
+{
+ auto promise = DeviceInformation::FindAllAsync(DeviceClass::AudioCapture);
+
+ promise.Completed(
+ [](winrt::Windows::Foundation::IAsyncOperation<DeviceInformationCollection> const& sender,
+ winrt::Windows::Foundation::AsyncStatus /* asyncStatus */) {
+ auto info = sender.GetResults();
+ auto num_devices = info.Size();
+
+ for (const auto &device : info)
+ {
+ std::wstringstream ss{};
+ ss << "looking at device (of " << num_devices << "): " << device.Id().c_str() << "\n";
+ OutputDebugString(ss.str().c_str());
+ }
+ });
+}
+```
+
+```cs
+using Windows.Devices.Enumeration;
+using System.Linq;
+
+namespace helloworld {
+ private async void EnumerateDevices()
+ {
+ var devices = await DeviceInformation.FindAllAsync(DeviceClass.AudioCapture);
+
+ foreach (var device in devices)
+ {
+ Console.WriteLine($"{device.Name}, {device.Id}\n");
+ }
+ }
+}
+```
+
+A sample device ID is `\\\\?\\SWD#MMDEVAPI#{0.0.1.00000000}.{5f23ab69-6181-4f4a-81a4-45414013aac8}#{2eef81be-33fa-4800-9670-1cd474972c3f}`.
+
+## Audio device IDs on Linux
+
+The device IDs are selected by using standard ALSA device IDs.
+
+The IDs of the inputs attached to the system are contained in the output of the command `arecord -L`.
+Alternatively, they can be obtained by using the [ALSA C library](https://www.alsa-project.org/alsa-doc/alsa-lib/).
+
+Sample IDs are `hw:1,0` and `hw:CARD=CC,DEV=0`.
+
+## Audio device IDs on macOS
+
+The following function implemented in Objective-C creates a list of the names and IDs of the audio devices attached to a Mac.
+
+The `deviceUID` string is used to identify a device in the Speech SDK for macOS.
+
+```objc
+#import <Foundation/Foundation.h>
+#import <CoreAudio/CoreAudio.h>
+
+CFArrayRef CreateInputDeviceArray()
+{
+ AudioObjectPropertyAddress propertyAddress = {
+ kAudioHardwarePropertyDevices,
+ kAudioObjectPropertyScopeGlobal,
+ kAudioObjectPropertyElementMaster
+ };
+
+ UInt32 dataSize = 0;
+ OSStatus status = AudioObjectGetPropertyDataSize(kAudioObjectSystemObject, &propertyAddress, 0, NULL, &dataSize);
+ if (kAudioHardwareNoError != status) {
+ fprintf(stderr, "AudioObjectGetPropertyDataSize (kAudioHardwarePropertyDevices) failed: %i\n", status);
+ return NULL;
+ }
+
+    UInt32 deviceCount = (UInt32)(dataSize / sizeof(AudioDeviceID));
+
+ AudioDeviceID *audioDevices = (AudioDeviceID *)(malloc(dataSize));
+ if (NULL == audioDevices) {
+ fputs("Unable to allocate memory", stderr);
+ return NULL;
+ }
+
+ status = AudioObjectGetPropertyData(kAudioObjectSystemObject, &propertyAddress, 0, NULL, &dataSize, audioDevices);
+ if (kAudioHardwareNoError != status) {
+ fprintf(stderr, "AudioObjectGetPropertyData (kAudioHardwarePropertyDevices) failed: %i\n", status);
+ free(audioDevices);
+ audioDevices = NULL;
+ return NULL;
+ }
+
+ CFMutableArrayRef inputDeviceArray = CFArrayCreateMutable(kCFAllocatorDefault, deviceCount, &kCFTypeArrayCallBacks);
+ if (NULL == inputDeviceArray) {
+ fputs("CFArrayCreateMutable failed", stderr);
+ free(audioDevices);
+ audioDevices = NULL;
+ return NULL;
+ }
+
+ // Iterate through all the devices and determine which are input-capable
+ propertyAddress.mScope = kAudioDevicePropertyScopeInput;
+ for (UInt32 i = 0; i < deviceCount; ++i) {
+ // Query device UID
+ CFStringRef deviceUID = NULL;
+ dataSize = sizeof(deviceUID);
+ propertyAddress.mSelector = kAudioDevicePropertyDeviceUID;
+ status = AudioObjectGetPropertyData(audioDevices[i], &propertyAddress, 0, NULL, &dataSize, &deviceUID);
+ if (kAudioHardwareNoError != status) {
+ fprintf(stderr, "AudioObjectGetPropertyData (kAudioDevicePropertyDeviceUID) failed: %i\n", status);
+ continue;
+ }
+
+ // Query device name
+ CFStringRef deviceName = NULL;
+ dataSize = sizeof(deviceName);
+ propertyAddress.mSelector = kAudioDevicePropertyDeviceNameCFString;
+ status = AudioObjectGetPropertyData(audioDevices[i], &propertyAddress, 0, NULL, &dataSize, &deviceName);
+ if (kAudioHardwareNoError != status) {
+ fprintf(stderr, "AudioObjectGetPropertyData (kAudioDevicePropertyDeviceNameCFString) failed: %i\n", status);
+ continue;
+ }
+
+ // Determine if the device is an input device (it is an input device if it has input channels)
+ dataSize = 0;
+ propertyAddress.mSelector = kAudioDevicePropertyStreamConfiguration;
+ status = AudioObjectGetPropertyDataSize(audioDevices[i], &propertyAddress, 0, NULL, &dataSize);
+ if (kAudioHardwareNoError != status) {
+ fprintf(stderr, "AudioObjectGetPropertyDataSize (kAudioDevicePropertyStreamConfiguration) failed: %i\n", status);
+ continue;
+ }
+
+ AudioBufferList *bufferList = (AudioBufferList *)(malloc(dataSize));
+ if (NULL == bufferList) {
+ fputs("Unable to allocate memory", stderr);
+ break;
+ }
+
+ status = AudioObjectGetPropertyData(audioDevices[i], &propertyAddress, 0, NULL, &dataSize, bufferList);
+ if (kAudioHardwareNoError != status || 0 == bufferList->mNumberBuffers) {
+ if (kAudioHardwareNoError != status)
+ fprintf(stderr, "AudioObjectGetPropertyData (kAudioDevicePropertyStreamConfiguration) failed: %i\n", status);
+ free(bufferList);
+ bufferList = NULL;
+ continue;
+ }
+
+ free(bufferList);
+ bufferList = NULL;
+
+ // Add a dictionary for this device to the array of input devices
+ CFStringRef keys [] = { CFSTR("deviceUID"), CFSTR("deviceName")};
+ CFStringRef values [] = { deviceUID, deviceName};
+
+ CFDictionaryRef deviceDictionary = CFDictionaryCreate(kCFAllocatorDefault,
+ (const void **)(keys),
+ (const void **)(values),
+ 2,
+ &kCFTypeDictionaryKeyCallBacks,
+ &kCFTypeDictionaryValueCallBacks);
+
+ CFArrayAppendValue(inputDeviceArray, deviceDictionary);
+
+ CFRelease(deviceDictionary);
+ deviceDictionary = NULL;
+ }
+
+ free(audioDevices);
+ audioDevices = NULL;
+
+ // Return a non-mutable copy of the array
+ CFArrayRef immutableInputDeviceArray = CFArrayCreateCopy(kCFAllocatorDefault, inputDeviceArray);
+ CFRelease(inputDeviceArray);
+ inputDeviceArray = NULL;
+
+ return immutableInputDeviceArray;
+}
+```
+
+For example, the UID for the built-in microphone is `BuiltInMicrophoneDevice`.
+
+## Audio device IDs on iOS
+
+Audio device selection with the Speech SDK isn't supported on iOS. Apps that use the SDK can influence audio routing through the [`AVAudioSession`](https://developer.apple.com/documentation/avfoundation/avaudiosession?language=objc) Framework.
+
+For example, the instruction
+
+```objc
+[[AVAudioSession sharedInstance] setCategory:AVAudioSessionCategoryRecord
+ withOptions:AVAudioSessionCategoryOptionAllowBluetooth error:NULL];
+```
+
+enables the use of a Bluetooth headset for a speech-enabled app.
+
+## Audio device IDs in JavaScript
+
+In JavaScript, the [MediaDevices.enumerateDevices()](https://developer.mozilla.org/docs/Web/API/MediaDevices/enumerateDevices) method can be used to enumerate the media devices and find a device ID to pass to `fromMicrophone(...)`.
+
+## Next steps
+
+- [Explore samples on GitHub](https://aka.ms/csspeech/samples)
+
+- [Customize acoustic models](./how-to-custom-speech-train-model.md)
+- [Customize language models](./how-to-custom-speech-train-model.md)
ai-services How To Speech Synthesis Viseme https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/how-to-speech-synthesis-viseme.md
+
+ Title: Get facial position with viseme
+
+description: Speech SDK supports viseme events during speech synthesis, which represent key poses in observed speech, such as the position of the lips, jaw, and tongue when producing a particular phoneme.
++++++ Last updated : 10/23/2022+
+ms.devlang: cpp, csharp, java, javascript, python
+
+zone_pivot_groups: programming-languages-speech-services-nomore-variant
++
+# Get facial position with viseme
+
+> [!NOTE]
+> To explore the locales supported for Viseme ID and blend shapes, refer to [the list of all supported locales](language-support.md?tabs=tts#viseme). Scalable Vector Graphics (SVG) is only supported for the `en-US` locale.
+
+A *viseme* is the visual description of a phoneme in spoken language. It defines the position of the face and mouth while a person is speaking. Each viseme depicts the key facial poses for a specific set of phonemes.
+
+You can use visemes to control the movement of 2D and 3D avatar models, so that the facial positions are best aligned with synthetic speech. For example, you can:
+
+ * Create an animated virtual voice assistant for intelligent kiosks, building multi-mode integrated services for your customers.
+ * Build immersive news broadcasts and improve audience experiences with natural face and mouth movements.
+ * Generate more interactive gaming avatars and cartoon characters that can speak with dynamic content.
+ * Make more effective language teaching videos that help language learners understand the mouth behavior of each word and phoneme.
+ * Help people with a hearing impairment pick up sounds visually and "lip-read" speech content by showing visemes on an animated face.
+
+For more information about visemes, view this [introductory video](https://youtu.be/ui9XT47uwxs).
+> [!VIDEO https://www.youtube.com/embed/ui9XT47uwxs]
+
+## Overall workflow of producing viseme with speech
+
+Neural Text to speech (Neural TTS) turns input text or SSML (Speech Synthesis Markup Language) into lifelike synthesized speech. Speech audio output can be accompanied by viseme ID, Scalable Vector Graphics (SVG), or blend shapes. Using a 2D or 3D rendering engine, you can use these viseme events to animate your avatar.
+
+The overall workflow of viseme is depicted in the following flowchart:
+
+![Diagram of the overall workflow of viseme.](media/text-to-speech/viseme-structure.png)
+
+## Viseme ID
+
+Viseme ID refers to an integer number that specifies a viseme. We offer 22 different visemes, each depicting the mouth position for a specific set of phonemes. There's no one-to-one correspondence between visemes and phonemes. Often, several phonemes correspond to a single viseme, because they look the same on the speaker's face when they're produced, such as `s` and `z`. For more specific information, see the table for [mapping phonemes to viseme IDs](#map-phonemes-to-visemes).
+
+Speech audio output can be accompanied by viseme IDs and `Audio offset`. The `Audio offset` indicates the offset timestamp that represents the start time of each viseme, in ticks (100 nanoseconds).
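+
+For example, in C# you can convert the offset to a `TimeSpan` (a small sketch; `e` is assumed to be the viseme event arguments shown in the C# sample later in this article):
+
+```csharp
+// Sketch: convert the viseme audio offset from 100-nanosecond ticks to milliseconds.
+var offset = TimeSpan.FromTicks((long)e.AudioOffset);
+Console.WriteLine($"Viseme {e.VisemeId} starts at {offset.TotalMilliseconds} ms");
+```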
+
+### Map phonemes to visemes
+
+Visemes vary by language and locale. Each locale has a set of visemes that correspond to its specific phonemes. The [SSML phonetic alphabets](speech-ssml-phonetic-sets.md) documentation maps viseme IDs to the corresponding International Phonetic Alphabet (IPA) phonemes. The table below shows a mapping relationship between viseme IDs and mouth positions, listing typical IPA phonemes for each viseme ID.
+
+| Viseme ID | IPA | Mouth position|
+||||
+|0|Silence|<img src="media/text-to-speech/viseme-id-0.jpg" width="200" height="200" alt="The mouth position when viseme ID is 0">|
+|1|`æ`, `ə`, `ʌ`| <img src="media/text-to-speech/viseme-id-1.jpg" width="200" height="200" alt="The mouth position when viseme ID is 1">|
+|2|`ɑ`|<img src="media/text-to-speech/viseme-id-2.jpg" width="200" height="200" alt="The mouth position when viseme ID is 2">|
+|3|`ɔ`|<img src="media/text-to-speech/viseme-id-3.jpg" width="200" height="200" alt="The mouth position when viseme ID is 3">|
+|4|`ɛ`, `ʊ`|<img src="media/text-to-speech/viseme-id-4.jpg" width="200" height="200" alt="The mouth position when viseme ID is 4">|
+|5|`ɝ`|<img src="media/text-to-speech/viseme-id-5.jpg" width="200" height="200" alt="The mouth position when viseme ID is 5">|
+|6|`j`, `i`, `ɪ` |<img src="media/text-to-speech/viseme-id-6.jpg" width="200" height="200" alt="The mouth position when viseme ID is 6">|
+|7|`w`, `u`|<img src="media/text-to-speech/viseme-id-7.jpg" width="200" height="200" alt="The mouth position when viseme ID is 7">|
+|8|`o`|<img src="media/text-to-speech/viseme-id-8.jpg" width="200" height="200" alt="The mouth position when viseme ID is 8">|
+|9|`aʊ`|<img src="media/text-to-speech/viseme-id-9.jpg" width="200" height="200" alt="The mouth position when viseme ID is 9">|
+|10|`ɔɪ`|<img src="media/text-to-speech/viseme-id-10.jpg" width="200" height="200" alt="The mouth position when viseme ID is 10">|
+|11|`aɪ`|<img src="media/text-to-speech/viseme-id-11.jpg" width="200" height="200" alt="The mouth position when viseme ID is 11">|
+|12|`h`|<img src="media/text-to-speech/viseme-id-12.jpg" width="200" height="200" alt="The mouth position when viseme ID is 12">|
+|13|`ɹ`|<img src="media/text-to-speech/viseme-id-13.jpg" width="200" height="200" alt="The mouth position when viseme ID is 13">|
+|14|`l`|<img src="media/text-to-speech/viseme-id-14.jpg" width="200" height="200" alt="The mouth position when viseme ID is 14">|
+|15|`s`, `z`|<img src="media/text-to-speech/viseme-id-15.jpg" width="200" height="200" alt="The mouth position when viseme ID is 15">|
+|16|`ʃ`, `tʃ`, `dʒ`, `ʒ`|<img src="media/text-to-speech/viseme-id-16.jpg" width="200" height="200" alt="The mouth position when viseme ID is 16">|
+|17|`ð`|<img src="media/text-to-speech/viseme-id-17.jpg" width="200" height="200" alt="The mouth position when viseme ID is 17">|
+|18|`f`, `v`|<img src="media/text-to-speech/viseme-id-18.jpg" width="200" height="200" alt="The mouth position when viseme ID is 18">|
+|19|`d`, `t`, `n`, `θ`|<img src="media/text-to-speech/viseme-id-19.jpg" width="200" height="200" alt="The mouth position when viseme ID is 19">|
+|20|`k`, `g`, `ŋ`|<img src="media/text-to-speech/viseme-id-20.jpg" width="200" height="200" alt="The mouth position when viseme ID is 20">|
+|21|`p`, `b`, `m`|<img src="media/text-to-speech/viseme-id-21.jpg" width="200" height="200" alt="The mouth position when viseme ID is 21">|
+
+## 2D SVG animation
+
+For 2D characters, you can design a character that suits your scenario and use Scalable Vector Graphics (SVG) for each viseme ID to get a time-based face position.
+
+With the temporal tags provided in each viseme event, these well-designed SVGs are processed with smoothing modifications and provide robust animation to users. For example, the following illustration shows a red-lipped character that's designed for language learning.
+
+![Screenshot showing a 2D rendering example of four red-lipped mouths, each representing a different viseme ID that corresponds to a phoneme.](media/text-to-speech/viseme-demo-2D.png)
+
+## 3D blend shapes animation
+
+You can use blend shapes to drive the facial movements of a 3D character that you designed.
+
+The blend shapes JSON string is represented as a 2-dimensional matrix. Each row represents a frame. Each frame (in 60 FPS) contains an array of 55 facial positions.
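+
+A minimal C# sketch of reading that payload follows. It assumes `animation` holds the blend shapes JSON string from the viseme event, and that the `FrameIndex` and `BlendShapes` names match the 3D blend shapes example later in this article.
+
+```csharp
+using System.Text.Json;
+
+// Sketch: parse the blend shapes payload into frames of 55 facial-position values (0 to 1).
+using var doc = JsonDocument.Parse(animation);
+int frameIndex = doc.RootElement.GetProperty("FrameIndex").GetInt32();
+Console.WriteLine($"Frames preceding this chunk: {frameIndex}");
+
+foreach (var frame in doc.RootElement.GetProperty("BlendShapes").EnumerateArray())
+{
+    // Index 17 corresponds to jawOpen (position 18 in the order table later in this article).
+    float jawOpen = frame[17].GetSingle();
+}
+```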
+
+## Get viseme events with the Speech SDK
+
+To get viseme with your synthesized speech, subscribe to the `VisemeReceived` event in the Speech SDK.
+
+> [!NOTE]
+> To request SVG or blend shapes output, you should use the `mstts:viseme` element in SSML. For details, see [how to use viseme element in SSML](speech-synthesis-markup-structure.md#viseme-element).
+
+The following snippet shows how to subscribe to the viseme event:
++
+```csharp
+using (var synthesizer = new SpeechSynthesizer(speechConfig, audioConfig))
+{
+ // Subscribes to viseme received event
+ synthesizer.VisemeReceived += (s, e) =>
+ {
+ Console.WriteLine($"Viseme event received. Audio offset: " +
+ $"{e.AudioOffset / 10000}ms, viseme id: {e.VisemeId}.");
+
+ // `Animation` is an xml string for SVG or a json string for blend shapes
+ var animation = e.Animation;
+ };
+
+ // If VisemeID is the only thing you want, you can also use `SpeakTextAsync()`
+ var result = await synthesizer.SpeakSsmlAsync(ssml);
+}
+
+```
+++
+```cpp
+auto synthesizer = SpeechSynthesizer::FromConfig(speechConfig, audioConfig);
+
+// Subscribes to viseme received event
+synthesizer->VisemeReceived += [](const SpeechSynthesisVisemeEventArgs& e)
+{
+ cout << "viseme event received. "
+ // The unit of e.AudioOffset is tick (1 tick = 100 nanoseconds), divide by 10,000 to convert to milliseconds.
+ << "Audio offset: " << e.AudioOffset / 10000 << "ms, "
+ << "viseme id: " << e.VisemeId << "." << endl;
+
+ // `Animation` is an xml string for SVG or a json string for blend shapes
+ auto animation = e.Animation;
+};
+
+// If VisemeID is the only thing you want, you can also use `SpeakTextAsync()`
+auto result = synthesizer->SpeakSsmlAsync(ssml).get();
+```
+++
+```java
+SpeechSynthesizer synthesizer = new SpeechSynthesizer(speechConfig, audioConfig);
+
+// Subscribes to viseme received event
+synthesizer.VisemeReceived.addEventListener((o, e) -> {
+ // The unit of e.AudioOffset is tick (1 tick = 100 nanoseconds), divide by 10,000 to convert to milliseconds.
+ System.out.print("Viseme event received. Audio offset: " + e.getAudioOffset() / 10000 + "ms, ");
+ System.out.println("viseme id: " + e.getVisemeId() + ".");
+
+ // `Animation` is an xml string for SVG or a json string for blend shapes
+ String animation = e.getAnimation();
+});
+
+// If VisemeID is the only thing you want, you can also use `SpeakTextAsync()`
+SpeechSynthesisResult result = synthesizer.SpeakSsmlAsync(ssml).get();
+```
+++
+```Python
+speech_synthesizer = speechsdk.SpeechSynthesizer(speech_config=speech_config, audio_config=audio_config)
+
+def viseme_cb(evt):
+ print("Viseme event received: audio offset: {}ms, viseme id: {}.".format(
+ evt.audio_offset / 10000, evt.viseme_id))
+
+ # `Animation` is an xml string for SVG or a json string for blend shapes
+ animation = evt.animation
+
+# Subscribes to viseme received event
+speech_synthesizer.viseme_received.connect(viseme_cb)
+
+# If VisemeID is the only thing you want, you can also use `speak_text_async()`
+result = speech_synthesizer.speak_ssml_async(ssml).get()
+```
+++
+```Javascript
+var synthesizer = new SpeechSDK.SpeechSynthesizer(speechConfig, audioConfig);
+
+// Subscribes to viseme received event
+synthesizer.visemeReceived = function (s, e) {
+ window.console.log("(Viseme), Audio offset: " + e.audioOffset / 10000 + "ms. Viseme ID: " + e.visemeId);
+
+ // `Animation` is an xml string for SVG or a json string for blend shapes
+ var animation = e.Animation;
+}
+
+// If VisemeID is the only thing you want, you can also use `speakTextAsync()`
+synthesizer.speakSsmlAsync(ssml);
+```
+++
+```Objective-C
+SPXSpeechSynthesizer *synthesizer =
+ [[SPXSpeechSynthesizer alloc] initWithSpeechConfiguration:speechConfig
+ audioConfiguration:audioConfig];
+
+// Subscribes to viseme received event
+[synthesizer addVisemeReceivedEventHandler: ^ (SPXSpeechSynthesizer *synthesizer, SPXSpeechSynthesisVisemeEventArgs *eventArgs) {
+ NSLog(@"Viseme event received. Audio offset: %fms, viseme id: %lu.", eventArgs.audioOffset/10000., eventArgs.visemeId);
+
+ // `Animation` is an xml string for SVG or a json string for blend shapes
+ NSString *animation = eventArgs.Animation;
+}];
+
+// If VisemeID is the only thing you want, you can also use `SpeakText`
+[synthesizer speakSsml:ssml];
+```
++
+Here's an example of the viseme output.
+
+# [Viseme ID](#tab/visemeid)
+
+```text
+(Viseme), Viseme ID: 1, Audio offset: 200ms.
+
+(Viseme), Viseme ID: 5, Audio offset: 850ms.
+
+……
+
+(Viseme), Viseme ID: 13, Audio offset: 2350ms.
+```
+
+# [2D SVG](#tab/2dsvg)
+
+The SVG output is an xml string that contains the animation.
+Render the SVG animation along with the synthesized speech to see the mouth movement.
+
+```xml
+<svg width= "1200px" height= "1200px" ..>
+ <g id= "front_start" stroke= "none" stroke-width= "1" fill= "none" fill-rule= "evenodd">
+ <animate attributeName= "d" begin= "d_dh_front_background_1_0.end" dur= "0.27500
+ ...
+```
+
+# [3D blend shapes](#tab/3dblendshapes)
++
+Each viseme event includes a series of frames in the `Animation` SDK property. These are grouped to best align the facial positions with the audio. Your 3D engine should render each group of `BlendShapes` frames immediately before the corresponding audio chunk. The `FrameIndex` value indicates how many frames preceded the current list of frames.
+
+The output JSON looks like the following sample. Each frame within `BlendShapes` contains an array of 55 facial positions represented as decimal values between 0 and 1. The decimal values are in the same order as described in the facial positions table below.
+
+```json
+{
+ "FrameIndex":0,
+ "BlendShapes":[
+ [0.021,0.321,...,0.258],
+ [0.045,0.234,...,0.288],
+ ...
+ ]
+}
+```
+
+The order of `BlendShapes` is as follows.
+
+| Order | Facial position in `BlendShapes`|
+| | -- |
+| 1 | eyeBlinkLeft|
+| 2 | eyeLookDownLeft|
+| 3 | eyeLookInLeft|
+| 4 | eyeLookOutLeft|
+| 5 | eyeLookUpLeft|
+| 6 | eyeSquintLeft|
+| 7 | eyeWideLeft|
+| 8 | eyeBlinkRight|
+| 9 | eyeLookDownRight|
+| 10 | eyeLookInRight|
+| 11 | eyeLookOutRight|
+| 12 | eyeLookUpRight|
+| 13 | eyeSquintRight|
+| 14 | eyeWideRight|
+| 15 | jawForward|
+| 16 | jawLeft|
+| 17 | jawRight|
+| 18 | jawOpen|
+| 19 | mouthClose|
+| 20 | mouthFunnel|
+| 21 | mouthPucker|
+| 22 | mouthLeft|
+| 23 | mouthRight|
+| 24 | mouthSmileLeft|
+| 25 | mouthSmileRight|
+| 26 | mouthFrownLeft|
+| 27 | mouthFrownRight|
+| 28 | mouthDimpleLeft|
+| 29 | mouthDimpleRight|
+| 30 | mouthStretchLeft|
+| 31 | mouthStretchRight|
+| 32 | mouthRollLower|
+| 33 | mouthRollUpper|
+| 34 | mouthShrugLower|
+| 35 | mouthShrugUpper|
+| 36 | mouthPressLeft|
+| 37 | mouthPressRight|
+| 38 | mouthLowerDownLeft|
+| 39 | mouthLowerDownRight|
+| 40 | mouthUpperUpLeft|
+| 41 | mouthUpperUpRight|
+| 42 | browDownLeft|
+| 43 | browDownRight|
+| 44 | browInnerUp|
+| 45 | browOuterUpLeft|
+| 46 | browOuterUpRight|
+| 47 | cheekPuff|
+| 48 | cheekSquintLeft|
+| 49 | cheekSquintRight|
+| 50 | noseSneerLeft|
+| 51 | noseSneerRight|
+| 52 | tongueOut|
+| 53 | headRoll|
+| 54 | leftEyeRoll|
+| 55 | rightEyeRoll|
+++
+After you obtain the viseme output, you can use these events to drive character animation. You can build your own characters and automatically animate them.
+
+## Next steps
+
+- [SSML phonetic alphabets](speech-ssml-phonetic-sets.md)
+- [How to improve synthesis with SSML](speech-synthesis-markup.md)
ai-services How To Speech Synthesis https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/how-to-speech-synthesis.md
+
+ Title: "How to synthesize speech from text - Speech service"
+
+description: Learn how to convert text to speech. Learn about object construction and design patterns, supported audio output formats, and custom configuration options for speech synthesis.
+++++++ Last updated : 09/16/2022
+ms.devlang: cpp, csharp, golang, java, javascript, objective-c, python
+
+zone_pivot_groups: programming-languages-speech-services
+keywords: text to speech
++
+# How to synthesize speech from text
+++++++++++
+## Next steps
+
+* [Try the text to speech quickstart](get-started-text-to-speech.md)
+* [Get started with Custom Neural Voice](how-to-custom-voice.md)
+* [Improve synthesis with SSML](speech-synthesis-markup.md)
ai-services How To Track Speech Sdk Memory Usage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/how-to-track-speech-sdk-memory-usage.md
+
+ Title: How to track Speech SDK memory usage - Speech service
+
+description: The Speech SDK supports numerous programming languages for speech to text and text to speech conversion, along with speech translation. This article discusses memory management tooling built into the SDK.
++++++ Last updated : 12/10/2019+
+ms.devlang: cpp, csharp, java, objective-c, python
+
+zone_pivot_groups: programming-languages-set-two
+++
+# How to track Speech SDK memory usage
+
+The Speech SDK is based on a native code base that's projected into multiple programming languages through a series of interoperability layers. Each language-specific projection has idiomatically correct features to manage the object lifecycle. Additionally, the Speech SDK includes memory management tooling to track resource usage with object logging and object limits.
+
+## How to read object logs
+
+If [Speech SDK logging is enabled](how-to-use-logging.md), tracking tags are emitted to enable historical object observation. These tags include:
+
+* `TrackHandle` or `StopTracking`
+* The object type
+* The current number of objects of that type being tracked
+
+Here's a sample log:
+
+```terminal
+(284): 8604ms SPX_DBG_TRACE_VERBOSE: handle_table.h:90 TrackHandle type=Microsoft::Cognitive
+```
+
+## Set a warning threshold
+
+You have the option to create a warning threshold, and if that threshold is exceeded (assuming logging is enabled), a warning message is logged. The warning message contains a dump of all objects in existence along with their count. This information can be used to better understand issues.
+
+To enable a warning threshold, it must be specified on a `SpeechConfig` object. This object is checked when a new recognizer is created. In the following examples, let's assume that you've created an instance of `SpeechConfig` called `config`:
++
+```csharp
+config.SetProperty("SPEECH-ObjectCountWarnThreshold", "10000");
+```
+++
+```C++
+config->SetProperty("SPEECH-ObjectCountWarnThreshold", "10000");
+```
+++
+```java
+config.setProperty("SPEECH-ObjectCountWarnThreshold", "10000");
+```
+++
+```Python
+speech_config.set_property_by_name("SPEECH-ObjectCountWarnThreshold", "10000")
+```
+++
+```ObjectiveC
+[config setPropertyTo:@"10000" byName:"SPEECH-ObjectCountWarnThreshold"];
+```
++
+> [!TIP]
+> The default value for this property is 10,000.
+
+## Set an error threshold
+
+Using the Speech SDK, you can set the maximum number of objects allowed at a given time. If this setting is enabled, when the maximum number is hit, attempts to create new recognizer objects will fail. Existing objects will continue to work.
+
+Here's a sample error:
+
+```terminal
+Runtime error: The maximum object count of 500 has been exceeded.
+The threshold can be adjusted by setting the SPEECH-ObjectCountErrorThreshold property on the SpeechConfig object.
+Handle table dump by object type:
+class Microsoft::Cognitive
+class Microsoft::Cognitive
+class Microsoft::Cognitive
+class Microsoft::Cognitive
+```
+
+To enable an error threshold, it must be specified on a `SpeechConfig` object. This object is checked when a new recognizer is created. In the following examples, let's assume that you've created an instance of `SpeechConfig` called `config`:
++
+```csharp
+config.SetProperty("SPEECH-ObjectCountErrorThreshold", "10000");
+```
+++
+```C++
+config->SetProperty("SPEECH-ObjectCountErrorThreshold", "10000");
+```
+++
+```java
+config.setProperty("SPEECH-ObjectCountErrorThreshold", "10000");
+```
+++
+```Python
+speech_config.set_property_by_name("SPEECH-ObjectCountErrorThreshold", "10000")
+```
+++
+```objc
+[config setPropertyTo:@"10000" byName:@"SPEECH-ObjectCountErrorThreshold"];
+```
++
+> [!TIP]
+> The default value for this property is the platform-specific maximum value for a `size_t` data type. A typical recognition will consume between 7 and 10 internal objects.
+
+## Next steps
+
+* [Learn more about the Speech SDK](speech-sdk.md)
ai-services How To Translate Speech https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/how-to-translate-speech.md
+
+ Title: "How to translate speech - Speech service"
+
+description: Learn how to translate speech from one language to text in another language, including object construction and supported audio input formats.
+++++++ Last updated : 06/08/2022+
+zone_pivot_groups: programming-languages-speech-services
++
+# How to recognize and translate speech
+++++++++++
+## Next steps
+
+* [Try the speech to text quickstart](get-started-speech-to-text.md)
+* [Try the speech translation quickstart](get-started-speech-translation.md)
+* [Improve recognition accuracy with custom speech](custom-speech-overview.md)
ai-services How To Use Audio Input Streams https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/how-to-use-audio-input-streams.md
+
+ Title: Speech SDK audio input stream concepts
+
+description: An overview of the capabilities of the Speech SDK audio input stream.
++++++ Last updated : 05/09/2023+
+ms.devlang: csharp
++
+# How to use the audio input stream
+
+The Speech SDK provides a way to stream audio into the recognizer as an alternative to microphone or file input.
+
+This guide describes how to use audio input streams. It also describes some of the requirements and limitations of the audio input stream.
+
+See more examples of speech-to-text recognition with audio input stream on [GitHub](https://github.com/Azure-Samples/cognitive-services-speech-sdk/blob/master/samples/csharp/sharedcontent/console/speech_recognition_samples.cs).
+
+## Identify the format of the audio stream
+
+Identify the format of the audio stream. The format must be supported by the Speech SDK and the Azure AI services Speech service.
+
+Supported audio samples are:
+
+ - PCM format (int-16, signed)
+ - One channel
+ - 16 bits per sample, 8,000 or 16,000 samples per second (16,000 bytes or 32,000 bytes per second)
+ - Two-byte block alignment (16 bits, including padding, per sample)
+
+The corresponding code in the SDK to create the audio format looks like this example:
+
+```csharp
+byte channels = 1;
+byte bitsPerSample = 16;
+uint samplesPerSecond = 16000; // or 8000
+var audioFormat = AudioStreamFormat.GetWaveFormatPCM(samplesPerSecond, bitsPerSample, channels);
+```
+
+Make sure that your code provides the RAW audio data according to these specifications. Also, make sure that 16-bit samples arrive in little-endian format. If your audio source data doesn't match the supported formats, the audio must be transcoded into the required format.
+
+## Create your own audio input stream class
+
+You can create your own audio input stream class derived from `PullAudioInputStreamCallback`. Implement the `Read()` and `Close()` members. The exact function signature is language-dependent, but the code looks similar to this code sample:
+
+```csharp
+public class ContosoAudioStream : PullAudioInputStreamCallback
+{
+ public ContosoAudioStream() {}
+
+ public override int Read(byte[] buffer, uint size)
+ {
+ // Returns audio data to the caller.
+ // E.g., return read(config.YYY, buffer, size);
+ return 0;
+ }
+
+ public override void Close()
+ {
+ // Close and clean up resources.
+ }
+}
+```
+
+Create an audio configuration based on your audio format and custom audio input stream. For example:
+
+```csharp
+var audioConfig = AudioConfig.FromStreamInput(new ContosoAudioStream(), audioFormat);
+```
+
+Here's how the custom audio input stream is used in the context of a speech recognizer:
+
+```csharp
+using System;
+using System.IO;
+using System.Threading.Tasks;
+using Microsoft.CognitiveServices.Speech;
+using Microsoft.CognitiveServices.Speech.Audio;
+
+public class ContosoAudioStream : PullAudioInputStreamCallback
+{
+ public ContosoAudioStream() {}
+
+ public override int Read(byte[] buffer, uint size)
+ {
+ // Returns audio data to the caller.
+ // E.g., return read(config.YYY, buffer, size);
+ return 0;
+ }
+
+ public override void Close()
+ {
+ // Close and clean up resources.
+ }
+}
+
+class Program
+{
+ static string speechKey = Environment.GetEnvironmentVariable("SPEECH_KEY");
+ static string speechRegion = Environment.GetEnvironmentVariable("SPEECH_REGION");
+
+ async static Task Main(string[] args)
+ {
+ byte channels = 1;
+ byte bitsPerSample = 16;
+ uint samplesPerSecond = 16000; // or 8000
+ var audioFormat = AudioStreamFormat.GetWaveFormatPCM(samplesPerSecond, bitsPerSample, channels);
+ var audioConfig = AudioConfig.FromStreamInput(new ContosoAudioStream(), audioFormat);
+
+ var speechConfig = SpeechConfig.FromSubscription(speechKey, speechRegion);
+ speechConfig.SpeechRecognitionLanguage = "en-US";
+ var speechRecognizer = new SpeechRecognizer(speechConfig, audioConfig);
+
+        Console.WriteLine("Recognizing speech from the custom audio input stream.");
+ var speechRecognitionResult = await speechRecognizer.RecognizeOnceAsync();
+ Console.WriteLine($"RECOGNIZED: Text={speechRecognitionResult.Text}");
+ }
+}
+```
+
+## Next steps
+
+- [Speech to text quickstart](./get-started-speech-to-text.md?pivots=programming-language-csharp)
+- [How to recognize speech](./how-to-recognize-speech.md?pivots=programming-language-csharp)
ai-services How To Use Codec Compressed Audio Input Streams https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/how-to-use-codec-compressed-audio-input-streams.md
+
+ Title: How to use compressed input audio - Speech service
+
+description: Learn how to use compressed input audio with the Speech SDK and CLI.
+++++++ Last updated : 04/25/2022
+ms.devlang: cpp, csharp, golang, java, python
+
+zone_pivot_groups: programming-languages-speech-services
++
+# How to use compressed input audio
+++++++++++
+## Next steps
+
+* [Try the speech to text quickstart](get-started-speech-to-text.md)
+* [Improve recognition accuracy with custom speech](custom-speech-overview.md)
ai-services How To Use Conversation Transcription https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/how-to-use-conversation-transcription.md
+
+ Title: Real-time Conversation Transcription quickstart - Speech service
+
+description: In this quickstart, learn how to transcribe meetings and other conversations. You can add, remove, and identify multiple participants by streaming audio to the Speech service.
++++++ Last updated : 11/12/2022+
+zone_pivot_groups: acs-js-csharp-python
+ms.devlang: csharp, javascript
+++
+# Quickstart: Real-time Conversation Transcription
+
+You can transcribe meetings and other conversations with the ability to add, remove, and identify multiple participants by streaming audio to the Speech service. You first create voice signatures for each participant using the REST API, and then use the voice signatures with the Speech SDK to transcribe conversations. See the Conversation Transcription [overview](conversation-transcription.md) for more information.
+
+## Limitations
+
+* Only available in the following subscription regions: `centralus`, `eastasia`, `eastus`, `westeurope`
+* Requires a 7-mic circular multi-microphone array. The microphone array should meet [our specification](./speech-sdk-microphone.md).
+
+> [!NOTE]
+> The Speech SDK for C++, Java, Objective-C, and Swift supports Conversation Transcription, but we haven't yet included a guide here.
++++
+## Next steps
+
+> [!div class="nextstepaction"]
+> [Asynchronous Conversation Transcription](how-to-async-conversation-transcription.md)
ai-services How To Use Custom Entity Pattern Matching https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/how-to-use-custom-entity-pattern-matching.md
+
+ Title: How to recognize intents with custom entity pattern matching
+
+description: In this guide, you learn how to recognize intents and custom entities from simple patterns.
++++++ Last updated : 11/15/2021+
+zone_pivot_groups: programming-languages-set-thirteen
+++
+# How to recognize intents with custom entity pattern matching
+
+The Azure AI services [Speech SDK](speech-sdk.md) has a built-in feature to provide **intent recognition** with **simple language pattern matching**. An intent is something the user wants to do: close a window, mark a checkbox, insert some text, etc.
+
+In this guide, you use the Speech SDK to develop a console application that derives intents from speech utterances spoken through your device's microphone. You'll learn how to:
+
+> [!div class="checklist"]
+>
+> - Create a Visual Studio project referencing the Speech SDK NuGet package
+> - Create a speech configuration and get an intent recognizer
+> - Add intents and patterns via the Speech SDK API
+> - Add custom entities via the Speech SDK API
+> - Use asynchronous, event-driven continuous recognition
+
+## When to use pattern matching
+
+Use pattern matching if:
+* You're only interested in matching strictly what the user said. These patterns match more aggressively than [conversational language understanding (CLU)](../language-service/conversational-language-understanding/overview.md).
+* You don't have access to a CLU model, but still want intents.
+
+For more information, see the [pattern matching overview](./pattern-matching-overview.md).
+
+## Prerequisites
+
+Be sure you have the following items before you begin this guide:
+
+- An [Azure AI services Azure resource](https://portal.azure.com/#create/Microsoft.CognitiveServicesSpeechServices) or a [Unified Speech resource](https://portal.azure.com/#create/Microsoft.CognitiveServicesSpeechServices)
+- [Visual Studio 2019](https://visualstudio.microsoft.com/downloads/) (any edition).
+++
ai-services How To Use Logging https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/how-to-use-logging.md
+
+ Title: Speech SDK logging - Speech service
+
+description: Learn about how to enable logging in the Speech SDK (C++, C#, Python, Objective-C, Java).
+++++++ Last updated : 07/05/2019
+ms.devlang: cpp, csharp, golang, java, javascript, objective-c, python
+++
+# Enable logging in the Speech SDK
+
+Logging to file is an optional feature for the Speech SDK. During development, logging provides additional information and diagnostics from the Speech SDK's core components. It can be enabled by setting the property `Speech_LogFilename` on a speech configuration object to the location and name of the log file. Logging is handled by a static class in the Speech SDK's native library. You can turn on logging for any Speech SDK recognizer or synthesizer instance. All instances in the same process write log entries to the same log file.
+
+## Sample
+
+The log file name is specified on a configuration object. Taking the `SpeechConfig` as an example and assuming that you've created an instance called `speechConfig`:
+
+```csharp
+speechConfig.SetProperty(PropertyId.Speech_LogFilename, "LogfilePathAndName");
+```
+
+```java
+speechConfig.setProperty(PropertyId.Speech_LogFilename, "LogfilePathAndName");
+```
+
+```C++
+speechConfig->SetProperty(PropertyId::Speech_LogFilename, "LogfilePathAndName");
+```
+
+```Python
+speech_config.set_property(speechsdk.PropertyId.Speech_LogFilename, "LogfilePathAndName")
+```
+
+```objc
+[speechConfig setPropertyTo:@"LogfilePathAndName" byId:SPXSpeechLogFilename];
+```
+
+```go
+import ("github.com/Microsoft/cognitive-services-speech-sdk-go/common")
+
+speechConfig.SetProperty(common.SpeechLogFilename, "LogfilePathAndName")
+```
+
+You can create a recognizer from the configuration object. This will enable logging for all recognizers.
+
+> [!NOTE]
+> If you create a `SpeechSynthesizer` from the configuration object, it will not enable logging. If logging is enabled though, you will also receive diagnostics from the `SpeechSynthesizer`.
+
+JavaScript is an exception where the logging is enabled via SDK diagnostics as shown in the following code snippet:
+
+```javascript
+sdk.Diagnostics.SetLoggingLevel(sdk.LogLevel.Debug);
+sdk.Diagnostics.SetLogOutputPath("LogfilePathAndName");
+```
+
+## Create a log file on different platforms
+
+For Windows or Linux, the log file can be in any path the user has write permission for. Write permissions to file system locations in other operating systems may be limited or restricted by default.
+
+### Universal Windows Platform (UWP)
+
+UWP applications need to place log files in one of the application data locations (local, roaming, or temporary). A log file can be created in the local application folder:
+
+```csharp
+StorageFolder storageFolder = ApplicationData.Current.LocalFolder;
+StorageFile logFile = await storageFolder.CreateFileAsync("logfile.txt", CreationCollisionOption.ReplaceExisting);
+speechConfig.SetProperty(PropertyId.Speech_LogFilename, logFile.Path);
+```
+
+Within a Unity UWP application, a log file can be created using the application persistent data path folder as follows:
+
+```csharp
+#if ENABLE_WINMD_SUPPORT
+ string logFile = Application.persistentDataPath + "/logFile.txt";
+ speechConfig.SetProperty(PropertyId.Speech_LogFilename, logFile);
+#endif
+```
+For more about file access permissions in UWP applications, see [File access permissions](/windows/uwp/files/file-access-permissions).
+
+### Android
+
+You can save a log file to either internal storage, external storage, or the cache directory. Files created in the internal storage or the cache directory are private to the application. It is preferable to create a log file in external storage.
+
+```java
+File dir = context.getExternalFilesDir(null);
+File logFile = new File(dir, "logfile.txt");
+speechConfig.setProperty(PropertyId.Speech_LogFilename, logFile.getAbsolutePath());
+```
+
+The code above will save a log file to the external storage in the root of an application-specific directory. A user can access the file with the file manager (usually in `Android/data/ApplicationName/logfile.txt`). The file will be deleted when the application is uninstalled.
+
+You also need to request `WRITE_EXTERNAL_STORAGE` permission in the manifest file:
+
+```xml
+<manifest xmlns:android="http://schemas.android.com/apk/res/android" package="...">
+ ...
+ <uses-permission android:name="android.permission.WRITE_EXTERNAL_STORAGE" />
+ ...
+</manifest>
+```
+
+Within a Unity Android application, the log file can be created using the application persistent data path folder as follows:
+
+```csharp
+string logFile = Application.persistentDataPath + "/logFile.txt";
+speechConfig.SetProperty(PropertyId.Speech_LogFilename, logFile);
+```
+In addition, you also need to set the write permission in your Unity Player settings for Android to "External (SDCard)". The log is written to a directory that you can access with a tool such as the Android Studio Device File Explorer. The exact directory path can vary between Android devices, but it's typically the `sdcard/Android/data/your-app-packagename/files` directory.
+
+More about data and file storage for Android applications is available [here](https://developer.android.com/guide/topics/data/data-storage.html).
+
+### iOS
+
+Only directories inside the application sandbox are accessible. Files can be created in the documents, library, and temp directories. Files in the documents directory can be made available to a user.
+
+If you are using Objective-C on iOS, use the following code snippet to create a log file in the application document directory:
+
+```objc
+NSString *filePath = [
+ [NSSearchPathForDirectoriesInDomains(NSDocumentDirectory, NSUserDomainMask, YES) firstObject]
+ stringByAppendingPathComponent:@"logfile.txt"];
+[speechConfig setPropertyTo:filePath byId:SPXSpeechLogFilename];
+```
+
+To access a created file, add the below properties to the `Info.plist` property list of the application:
+
+```xml
+<key>UIFileSharingEnabled</key>
+<true/>
+<key>LSSupportsOpeningDocumentsInPlace</key>
+<true/>
+```
+
+If you're using Swift on iOS, please use the following code snippet to enable logs:
+```swift
+let documentsDirectoryPathString = NSSearchPathForDirectoriesInDomains(.documentDirectory, .userDomainMask, true).first!
+let documentsDirectoryPath = NSURL(string: documentsDirectoryPathString)!
+let logFilePath = documentsDirectoryPath.appendingPathComponent("swift.log")
+self.speechConfig!.setPropertyTo(logFilePath!.absoluteString, by: SPXPropertyId.speechLogFilename)
+```
+
+More about iOS File System is available [here](https://developer.apple.com/library/archive/documentation/FileManagement/Conceptual/FileSystemProgrammingGuide/FileSystemOverview/FileSystemOverview.html).
+
+## Logging with multiple recognizers
+
+Although a log file output path is specified as a configuration property into a `SpeechRecognizer` or other SDK object, SDK logging is a singleton, *process-wide* facility with no concept of individual instances. You can think of this as the `SpeechRecognizer` constructor (or similar) implicitly calling a static and internal "Configure Global Logging" routine with the property data available in the corresponding `SpeechConfig`.
+
+This means that you can't, as an example, configure six parallel recognizers to output simultaneously to six separate files. Instead, the latest recognizer created will configure the global logging instance to output to the file specified in its configuration properties and all SDK logging will be emitted to that file.
+
+This also means that the lifetime of the object that configured logging isn't tied to the duration of logging. Logging will not stop in response to the release of an SDK object and will continue as long as no new logging configuration is provided. Once started, process-wide logging may be stopped by setting the log file path to an empty string when creating a new object.
+
+To reduce potential confusion when configuring logging for multiple instances, it may be useful to abstract control of logging from objects doing real work. An example pair of helper routines:
+
+```cpp
+void EnableSpeechSdkLogging(const char* relativePath)
+{
+ auto configForLogging = SpeechConfig::FromSubscription("unused_key", "unused_region");
+ configForLogging->SetProperty(PropertyId::Speech_LogFilename, relativePath);
+ auto emptyAudioConfig = AudioConfig::FromStreamInput(AudioInputStream::CreatePushStream());
+ auto temporaryRecognizer = SpeechRecognizer::FromConfig(configForLogging, emptyAudioConfig);
+}
+
+void DisableSpeechSdkLogging()
+{
+ EnableSpeechSdkLogging("");
+}
+```
+
+## Next steps
+
+> [!div class="nextstepaction"]
+> [Explore our samples on GitHub](https://aka.ms/csspeech/samples)
ai-services How To Use Simple Language Pattern Matching https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/how-to-use-simple-language-pattern-matching.md
+
+ Title: How to recognize intents with simple language pattern matching
+
+description: In this guide, you learn how to recognize intents and entities from simple patterns.
++++++ Last updated : 04/19/2022+
+zone_pivot_groups: programming-languages-set-thirteen
+++
+# How to recognize intents with simple language pattern matching
+
+The Azure AI services [Speech SDK](speech-sdk.md) has a built-in feature to provide **intent recognition** with **simple language pattern matching**. An intent is something the user wants to do: close a window, mark a checkbox, insert some text, etc.
+
+In this guide, you use the Speech SDK to develop a C++ console application that derives intents from user utterances through your device's microphone. You'll learn how to:
+
+> [!div class="checklist"]
+>
+> - Create a Visual Studio project referencing the Speech SDK NuGet package
+> - Create a speech configuration and get an intent recognizer
+> - Add intents and patterns via the Speech SDK API
+> - Recognize speech from a microphone
+> - Use asynchronous, event-driven continuous recognition
+
+## When to use pattern matching
+
+Use pattern matching if:
+* You're only interested in matching strictly what the user said. These patterns match more aggressively than [conversational language understanding (CLU)](../language-service/conversational-language-understanding/overview.md).
+* You don't have access to a CLU model, but still want intents.
+
+For more information, see the [pattern matching overview](./pattern-matching-overview.md).
+
+## Prerequisites
+
+Be sure you have the following items before you begin this guide:
+
+- An [Azure AI services Azure resource](https://portal.azure.com/#create/Microsoft.CognitiveServicesSpeechServices) or a [Unified Speech resource](https://portal.azure.com/#create/Microsoft.CognitiveServicesSpeechServices)
+- [Visual Studio 2019](https://visualstudio.microsoft.com/downloads/) (any edition).
+
+## Speech and simple patterns
+
+The simple patterns are a feature of the Speech SDK and need an Azure AI services resource or a Unified Speech resource.
+
+A pattern is a phrase that includes an Entity somewhere within it. An Entity is defined by wrapping a word in curly brackets. This example defines an Entity with the ID "floorName", which is case-sensitive:
+
+```
+ Take me to the {floorName}
+```
+
+All other special characters and punctuation will be ignored.
+
+Intents are added using calls to the `AddIntent()` API of the `IntentRecognizer`.
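+
+For illustration, here's a minimal C# sketch of that flow (the guide itself builds a C++ console app, and the key, region, and intent ID below are placeholders). Run it inside an async method or a top-level program:
+
+```csharp
+using System;
+using Microsoft.CognitiveServices.Speech;
+using Microsoft.CognitiveServices.Speech.Intent;
+
+// Minimal sketch: register a pattern that contains the {floorName} entity, then recognize once.
+var config = SpeechConfig.FromSubscription("YourSubscriptionKey", "YourServiceRegion");
+using var recognizer = new IntentRecognizer(config);
+
+// The first argument is the pattern; the second is the intent ID returned on a match.
+recognizer.AddIntent("Take me to the {floorName}", "ChangeFloors");
+
+var result = await recognizer.RecognizeOnceAsync();
+Console.WriteLine($"Text: {result.Text}, Intent: {result.IntentId}");
+```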
+++
ai-services How To Windows Voice Assistants Get Started https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/how-to-windows-voice-assistants-get-started.md
+
+ Title: Voice Assistants on Windows - Get Started
+
+description: The steps to begin developing a windows voice agent, including a reference to the sample code quickstart.
++++++ Last updated : 04/15/2020++++
+# Get started with voice assistants on Windows
+
+This guide takes you through the steps to begin developing a voice assistant on Windows.
+
+## Set up your development environment
+
+To start developing a voice assistant for Windows, you'll need to make sure you have the proper development environment.
+
+- **Visual Studio:** You'll need to install [Microsoft Visual Studio 2017](https://visualstudio.microsoft.com/), Community Edition or higher
+- **Windows version**: A PC with a Windows Insider fast ring build of Windows and the Windows Insider version of the Windows SDK. This sample code is verified as working on Windows Insider Release Build 19025.vb_release_analog.191112-1600 using Windows SDK 19018. Any Build or SDK above the specified versions should be compatible.
+- **UWP development tools**: The Universal Windows Platform development workload in Visual Studio. See the UWP [Get set up](/windows/uwp/get-started/get-set-up) page to get your machine ready for developing UWP Applications.
+- **A working microphone and audio output**
+
+## Obtain resources from Microsoft
+
+Some of the resources necessary for a customized voice assistant on Windows must be obtained from Microsoft. The [UWP Voice Assistant Sample](windows-voice-assistants-faq.yml#the-uwp-voice-assistant-sample) provides sample versions of these resources for initial development and testing, so this section is unnecessary for initial development.
+
+- **Keyword model:** Voice activation requires a keyword model from Microsoft in the form of a .bin file. The .bin file provided in the UWP Voice Assistant Sample is trained on the keyword *Contoso*.
+- **Limited Access Feature Token:** Since the ConversationalAgent APIs provide access to microphone audio, they're protected under Limited Access Feature restrictions. To use a Limited Access Feature, you'll need to obtain a Limited Access Feature token connected to the package identity of your application from Microsoft.
+
+## Establish a dialog service
+
+For a complete voice assistant experience, the application needs a dialog service that can:
+
+- Detect a keyword in a given audio file
+- Listen to user input and convert it to text
+- Provide the text to a bot
+- Translate the text response of the bot to an audio output
+
+Here are the requirements to create a basic dialog service using Direct Line Speech.
+
+- **Speech resource:** An Azure resource for Speech features such as speech to text and text to speech. Create a Speech resource on the [Azure portal](https://portal.azure.com). For more information, see [Create a new Azure AI services resource](../multi-service-resource.md?pivots=azportal).
+- **Bot Framework bot:** A bot created using Bot Framework version 4.2 or above that's subscribed to [Direct Line Speech](./direct-line-speech.md) to enable voice input and output. [This guide](./tutorial-voice-enable-your-bot-speech-sdk.md) contains step-by-step instructions to make an "echo bot" and subscribe it to Direct Line Speech. You can also go [here](https://blog.botframework.com/2018/05/07/build-a-microsoft-bot-framework-bot-with-the-bot-builder-sdk-v4/) for steps on how to create a customized bot, then follow the same steps [here](./tutorial-voice-enable-your-bot-speech-sdk.md) to subscribe it to Direct Line Speech, but with your new bot rather than the "echo bot".
+
+## Try out the sample app
+
+With your Speech resource key and echo bot's bot ID, you're ready to try out the [UWP Voice Assistant sample](windows-voice-assistants-faq.yml#the-uwp-voice-assistant-sample). Follow the instructions in the readme to run the app and enter your credentials.
+
+## Create your own voice assistant for Windows
+
+Once you've received your Limited Access Feature token and .bin file from Microsoft, you can begin developing your own voice assistant on Windows.
+
+## Next steps
+
+> [!div class="nextstepaction"]
+> [Read the voice assistant implementation guide](windows-voice-assistants-implementation-guide.md)
ai-services Improve Accuracy Phrase List https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/improve-accuracy-phrase-list.md
+
+ Title: Improve recognition accuracy with phrase list
+description: Phrase lists can be used to customize speech recognition results based on context.
++++++ Last updated : 09/01/2022
+zone_pivot_groups: programming-languages-set-two-with-js-spx
++
+# Improve recognition accuracy with phrase list
+
+A phrase list is a list of words or phrases provided ahead of time to help improve their recognition. Adding a phrase to a phrase list increases its importance, thus making it more likely to be recognized.
+
+For supported phrase list locales, see [Language and voice support for the Speech service](language-support.md?tabs=phraselist).
+
+Examples of phrases include:
+* Names
+* Geographical locations
+* Homonyms
+* Words or acronyms unique to your industry or organization
+
+Phrase lists are simple and lightweight:
+- **Just-in-time**: A phrase list is provided just before starting the speech recognition, eliminating the need to train a custom model.
+- **Lightweight**: You don't need a large data set. Simply provide a word or phrase to boost its recognition.
+
+You can use phrase lists with the [Speech Studio](speech-studio-overview.md), [Speech SDK](quickstarts/setup-platform.md), or [Speech Command Line Interface (CLI)](spx-overview.md). The [Batch transcription API](batch-transcription.md) doesn't support phrase lists.
+
+You can use phrase lists with both standard and [custom speech](custom-speech-overview.md). There are some situations where training a custom model that includes phrases is likely the best option to improve accuracy. For example, in the following cases you would use Custom Speech:
+- If you need to use a large list of phrases. A phrase list shouldn't have more than 500 phrases.
+- If you need a phrase list for languages that are not currently supported.
+
+## Try it in Speech Studio
+
+You can use [Speech Studio](speech-studio-overview.md) to test how phrase list would help improve recognition for your audio. To implement a phrase list with your application in production, you'll use the Speech SDK or Speech CLI.
+
+For example, let's say that you want the Speech service to recognize this sentence:
+"Hi Rehaan, this is Jessie from Contoso bank. "
+
+After testing, you might find that it's incorrectly recognized as:
+"Hi **everyone**, this is **Jesse** from **can't do so bank**."
+
+In this case you would want to add "Rehaan", "Jessie", and "Contoso" to your phrase list. Then the names should be recognized correctly.
+
+Now try Speech Studio to see how phrase list can improve recognition accuracy.
+
+> [!NOTE]
+> You may be prompted to select your Azure subscription and Speech resource, and then acknowledge billing for your region.
+
+1. Go to **Real-time Speech to text** in [Speech Studio](https://aka.ms/speechstudio/speechtotexttool).
+1. You can test speech recognition by uploading an audio file or recording audio with a microphone. For example, select **record audio with a microphone** and then say "Hi Rehaan, this is Jessie from Contoso bank." Then select the red button to stop recording.
+1. You should see the transcription result in the **Test results** text box. If "Rehaan", "Jessie", or "Contoso" were recognized incorrectly, you can add the terms to a phrase list in the next step.
+1. Select **Show advanced options** and turn on **Phrase list**.
+1. Enter "Contoso;Jessie;Rehaan" in the phrase list text box. Note that multiple phrases need to be separated by a semicolon.
+ :::image type="content" source="./media/custom-speech/phrase-list-after-zoom.png" alt-text="Screenshot of a phrase list applied in Speech Studio." lightbox="./media/custom-speech/phrase-list-after-full.png":::
+1. Use the microphone to test recognition again. Otherwise you can select the retry arrow next to your audio file to re-run your audio. The terms "Rehaan", "Jessie", or "Contoso" should be recognized.
+
+## Implement phrase list
+
+With the [Speech SDK](speech-sdk.md) you can add phrases individually and then run speech recognition.
+
+```csharp
+var phraseList = PhraseListGrammar.FromRecognizer(recognizer);
+phraseList.AddPhrase("Contoso");
+phraseList.AddPhrase("Jessie");
+phraseList.AddPhrase("Rehaan");
+```
+
+With the [Speech SDK](speech-sdk.md) you can add phrases individually and then run speech recognition.
+
+```cpp
+auto phraseListGrammar = PhraseListGrammar::FromRecognizer(recognizer);
+phraseListGrammar->AddPhrase("Contoso");
+phraseListGrammar->AddPhrase("Jessie");
+phraseListGrammar->AddPhrase("Rehaan");
+```
+
+With the [Speech SDK](speech-sdk.md) you can add phrases individually and then run speech recognition.
+
+```java
+PhraseListGrammar phraseList = PhraseListGrammar.fromRecognizer(recognizer);
+phraseList.addPhrase("Contoso");
+phraseList.addPhrase("Jessie");
+phraseList.addPhrase("Rehaan");
+```
+
+With the [Speech SDK](speech-sdk.md) you can add phrases individually and then run speech recognition.
+
+```javascript
+const phraseList = sdk.PhraseListGrammar.fromRecognizer(recognizer);
+phraseList.addPhrase("Contoso");
+phraseList.addPhrase("Jessie");
+phraseList.addPhrase("Rehaan");
+```
+
+With the [Speech SDK](speech-sdk.md) you can add phrases individually and then run speech recognition.
+
+```Python
+phrase_list_grammar = speechsdk.PhraseListGrammar.from_recognizer(reco)
+phrase_list_grammar.addPhrase("Contoso")
+phrase_list_grammar.addPhrase("Jessie")
+phrase_list_grammar.addPhrase("Rehaan")
+```
+
+With the [Speech CLI](spx-overview.md) you can include a phrase list in-line or with a text file along with the recognize command.
+
+# [Terminal](#tab/terminal)
+
+Try recognition from a microphone or an audio file.
+
+```console
+spx recognize --microphone --phrases "Contoso;Jessie;Rehaan;"
+spx recognize --file "your\path\to\audio.wav" --phrases "Contoso;Jessie;Rehaan;"
+```
+
+You can also add a phrase list using a text file that contains one phrase per line.
+
+```console
+spx recognize --microphone --phrases @phrases.txt
+spx recognize --file "your\path\to\audio.wav" --phrases @phrases.txt
+```
+
+# [PowerShell](#tab/powershell)
+
+Try recognition from a microphone or an audio file.
+
+```powershell
+spx --% recognize --microphone --phrases "Contoso;Jessie;Rehaan;"
+spx --% recognize --file "your\path\to\audio.wav" --phrases "Contoso;Jessie;Rehaan;"
+```
+
+You can also add a phrase list using a text file that contains one phrase per line.
+
+```powershell
+spx --% recognize --microphone --phrases @phrases.txt
+spx --% recognize --file "your\path\to\audio.wav" --phrases @phrases.txt
+```
+
+***
++
+Allowed characters include locale-specific letters and digits, white space characters, and special characters such as +, \-, $, :, (, ), {, }, \_, ., ?, @, \\, ', &, \#, %, \^, \*, \`, \<, \>, ;, \/. Other special characters are removed internally from the phrase.
+
+## Next steps
+
+Check out more options to improve recognition accuracy.
+
+> [!div class="nextstepaction"]
+> [Custom Speech](custom-speech-overview.md)
ai-services Ingestion Client https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/ingestion-client.md
+
+ Title: Ingestion Client - Speech service
+
+description: In this article, we describe a tool released on GitHub that enables customers to push audio files to the Speech service easily and quickly
++++++ Last updated : 08/29/2022+++
+# Ingestion Client with Azure AI services
+
+The Ingestion Client is a tool released by Microsoft on GitHub that helps you quickly deploy a call center transcription solution to Azure with a no-code approach.
+
+> [!TIP]
+> You can use the tool and resulting solution in production to process a high volume of audio.
+
+The Ingestion Client uses [Azure AI Language](../language-service/index.yml), [Azure AI Speech](./index.yml), [Azure storage](https://azure.microsoft.com/product-categories/storage/), and [Azure Functions](https://azure.microsoft.com/services/functions/).
+
+## Get started with the Ingestion Client
+
+An Azure account and a multi-service Azure AI services resource are needed to run the Ingestion Client.
+* Azure subscription - [Create one for free](https://azure.microsoft.com/free/cognitive-services)
+* <a href="https://portal.azure.com/#create/Microsoft.CognitiveServicesAllInOne" title="Create an Azure AI services resource" target="_blank">Create an Azure AI services resource</a> in the Azure portal.
+* Get the resource key and region. After your resource is deployed, select **Go to resource** to view and manage keys. For more information about Azure AI services resources, see [Get the keys for your resource](~/articles/ai-services/multi-service-resource.md?pivots=azportal#get-the-keys-for-your-resource).
+
+See the [Getting Started Guide for the Ingestion Client](https://github.com/Azure-Samples/cognitive-services-speech-sdk/blob/master/samples/ingestion/ingestion-client/Setup/guide.md) on GitHub to learn how to set up and use the tool.
+
+## Ingestion Client Features
+
+The Ingestion Client works by connecting a dedicated [Azure storage](https://azure.microsoft.com/product-categories/storage/) account to custom [Azure Functions](https://azure.microsoft.com/services/functions/) in a serverless fashion to pass transcription requests to the service. The transcribed audio files land in the dedicated [Azure Storage container](https://azure.microsoft.com/product-categories/storage/).
+
+> [!IMPORTANT]
+> Pricing varies depending on the mode of operation (batch vs real-time) as well as the Azure Function SKU selected. By default the tool will create a Premium Azure Function SKU to handle large volume. Visit the [Pricing](https://azure.microsoft.com/pricing/details/functions/) page for more information.
+
+Internally, the tool uses Speech and Language services, and follows best practices to handle scale-up, retries and failover. The following schematic describes the resources and connections.
++
+The following Speech service feature is used by the Ingestion Client:
+
+- [Batch speech to text](./batch-transcription.md): Transcribes large amounts of audio files asynchronously, including speaker diarization, and is typically used in post-call analytics scenarios. Diarization is the process of recognizing and separating speakers in mono channel audio data.
+
+Here are some Language service features that are used by the Ingestion Client:
+
+- [Personally Identifiable Information (PII) extraction and redaction](../language-service/personally-identifiable-information/how-to-call-for-conversations.md): Identify, categorize, and redact sensitive information in conversation transcription.
+- [Sentiment analysis and opinion mining](../language-service/sentiment-opinion-mining/overview.md): Analyze transcriptions and associate positive, neutral, or negative sentiment at the utterance and conversation-level.
+
+Besides Azure AI services, these Azure products are used to complete the solution:
+
+- [Azure storage](https://azure.microsoft.com/product-categories/storage/): For storing telephony data and the transcripts that are returned by the Batch Transcription API. This storage account should use notifications, specifically for when new files are added. These notifications are used to trigger the transcription process.
+- [Azure Functions](https://azure.microsoft.com/services/functions/): For creating the shared access signature (SAS) URI for each recording, and triggering the HTTP POST request to start a transcription. Additionally, you use Azure Functions to create requests to retrieve and delete transcriptions by using the Batch Transcription API. A simplified sketch of such a request follows this list.
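+
+For context, this is a hypothetical, simplified sketch of the kind of request such a function triggers, assuming the v3.1 Batch Transcription REST API; the region, key, and SAS URL are placeholders, and the Ingestion Client's own code on GitHub is the authoritative implementation:
+
+```csharp
+using System.Net.Http;
+using System.Text;
+
+// Hypothetical sketch: start a batch transcription for one recording identified by a SAS URL.
+using var client = new HttpClient();
+client.DefaultRequestHeaders.Add("Ocp-Apim-Subscription-Key", "<your-speech-key>");
+
+var body = "{ \"contentUrls\": [ \"<sas-url-for-the-recording>\" ], "
+         + "\"locale\": \"en-US\", \"displayName\": \"Ingestion Client sample\" }";
+
+var response = await client.PostAsync(
+    "https://<region>.api.cognitive.microsoft.com/speechtotext/v3.1/transcriptions",
+    new StringContent(body, Encoding.UTF8, "application/json"));
+response.EnsureSuccessStatusCode();
+```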
+
+## Tool customization
+
+The tool is built to show customers results quickly. You can customize the tool to your preferred SKUs and setup. The SKUs can be edited from the [Azure portal](https://portal.azure.com) and [the code itself is available on GitHub](https://github.com/Azure-Samples/cognitive-services-speech-sdk/tree/master/samples/batch).
+
+> [!NOTE]
+> We suggest creating the resources in the same dedicated resource group to understand and track costs more easily.
+
+## Next steps
+
+* [Learn more about Azure AI services features for call center](./call-center-overview.md)
+* [Explore the Language service features](../language-service/overview.md#available-features)
+* [Explore the Speech service features](./overview.md)
ai-services Intent Recognition https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/intent-recognition.md
+
+ Title: Intent recognition overview - Speech service
+
+description: Intent recognition allows you to recognize user objectives you have pre-defined. This article is an overview of the benefits and capabilities of the intent recognition service.
+++++++ Last updated : 02/22/2023
+keywords: intent recognition
++
+# What is intent recognition?
+
+In this overview, you will learn about the benefits and capabilities of intent recognition. The Azure AI Speech SDK provides two ways to recognize intents, both described below. An intent is something the user wants to do: book a flight, check the weather, or make a call. Using intent recognition, your applications, tools, and devices can determine what the user wants to initiate or do based on options you define in the Intent Recognizer or Conversational Language Understanding (CLU) model.
+
+## Pattern matching
+
+The Speech SDK provides an embedded pattern matcher that you can use to recognize intents in a strict way. This is useful when you need a quick offline solution, and it works especially well when users are trained in some way or can be expected to use specific phrases to trigger intents, for example: "Go to floor seven" or "Turn on the lamp". We recommend starting here and, if pattern matching no longer meets your needs, switching to [CLU](#conversational-language-understanding) or a combination of the two.
+
+Use pattern matching if:
+* You're only interested in matching strictly what the user said. These patterns match more aggressively than [conversational language understanding (CLU)](../language-service/conversational-language-understanding/overview.md).
+* You don't have access to a CLU model, but still want intents.
+
+For more information, see the [pattern matching concepts](./pattern-matching-overview.md) and then:
+* Start with [simple pattern matching](how-to-use-simple-language-pattern-matching.md).
+* Improve your pattern matching by using [custom entities](how-to-use-custom-entity-pattern-matching.md).
+
+## Conversational Language Understanding
+
+Conversational language understanding (CLU) enables users to build custom natural language understanding models to predict the overall intention of an incoming utterance and extract important information from it.
+
+Both a Speech resource and Language resource are required to use CLU with the Speech SDK. The Speech resource is used to transcribe the user's speech into text, and the Language resource is used to recognize the intent of the utterance. To get started, see the [quickstart](get-started-intent-recognition-clu.md).
+
+> [!IMPORTANT]
+> When you use conversational language understanding with the Speech SDK, you are charged both for the Speech to text recognition request and the Language service request for CLU. For more information about pricing for conversational language understanding, see [Language service pricing](https://azure.microsoft.com/pricing/details/cognitive-services/language-service/).
+
+For information about how to use conversational language understanding without the Speech SDK and without speech recognition, see the [Language service documentation](../language-service/conversational-language-understanding/overview.md).
+
+> [!IMPORTANT]
+> LUIS will be retired on October 1st 2025 and starting April 1st 2023 you will not be able to create new LUIS resources. We recommend [migrating your LUIS applications](../language-service/conversational-language-understanding/how-to/migrate-from-luis.md) to [conversational language understanding](../language-service/conversational-language-understanding/overview.md) to benefit from continued product support and multilingual capabilities.
+>
+> Conversational Language Understanding (CLU) is available for C# and C++ with the [Speech SDK](speech-sdk.md) version 1.25 or later. See the [quickstart](get-started-intent-recognition-clu.md) to recognize intents with the Speech SDK and CLU.
+
+## Next steps
+
+* [Intent recognition with simple pattern matching](how-to-use-simple-language-pattern-matching.md)
+* [Intent recognition with CLU quickstart](get-started-intent-recognition-clu.md)
ai-services Keyword Recognition Guidelines https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/keyword-recognition-guidelines.md
+
+ Title: Keyword recognition recommendations and guidelines - Speech service
+
+description: An overview of recommendations and guidelines when using keyword recognition.
++++++ Last updated : 04/30/2021+++
+# Recommendations and guidelines for keyword recognition
+
+This article outlines how to choose your keyword to optimize its accuracy characteristics, and how to design your user experiences with Keyword Verification.
+
+## Choosing an effective keyword
+
+Creating an effective keyword is vital to ensuring your product will consistently and accurately respond. Consider the following guidelines when you choose a keyword.
+
+> [!NOTE]
+> The examples below are in English but the guidelines apply to all languages supported by Custom Keyword. For a list of all supported languages, see [Language support](language-support.md?tabs=custom-keyword).
+
+- It should take no longer than two seconds to say.
+- Words of 4 to 7 syllables work best. For example, "Hey, Computer" is a good keyword. Just "Hey" is a poor one.
+- Keywords should follow common pronunciation rules specific to the native language of your end-users.
+- A unique or even a made-up word that follows common pronunciation rules might reduce false positives. For example, "computerama" might be a good keyword.
+- Do not choose a common word. For example, "eat" and "go" are words that people say frequently in ordinary conversation. They might lead to higher than desired false accept rates for your product.
+- Avoid using a keyword that might have alternative pronunciations. Users would have to know the "right" pronunciation to get their product to voice activate. For example, "509" can be pronounced "five zero nine," "five oh nine," or "five hundred and nine." "R.E.I." can be pronounced "r-e-i" or "ray." "Live" can be pronounced "/līv/" or "/liv/".
+- Do not use special characters, symbols, or digits. For example, "Go#" and "20 + cats" could be problematic keywords. However, "go sharp" or "twenty plus cats" might work. You can still use the symbols in your branding and use marketing and documentation to reinforce the proper pronunciation.
++
+## User experience recommendations with Keyword Verification
+
+With a multi-stage keyword recognition scenario where [Keyword Verification](keyword-recognition-overview.md#keyword-verification) is used, applications can choose when the end-user is notified of a keyword detection. The recommendation for rendering any visual or audible indicator is to rely on responses from the Keyword Verification service:
+
+![User experience guideline when optimizing for accuracy.](media/custom-keyword/kw-verification-ux-accuracy.png)
+
+This ensures the optimal experience in terms of accuracy to minimize the user-perceived impact of false accepts but incurs additional latency.
+
+For applications that require latency optimization, applications can provide light and unobtrusive indicators to the end-user based on the on-device keyword recognition. For example, lighting an LED pattern or pulsing an icon. The indicators can continue to exist if Keyword Verification responds with a keyword accept, or can be dismissed if the response is a keyword reject:
+
+![User experience guideline when optimizing for latency.](media/custom-keyword/kw-verification-ux-latency.png)
+
+## Next steps
+
+* [Get the Speech SDK.](speech-sdk.md)
+* [Learn more about Voice Assistants.](voice-assistants.md)
ai-services Keyword Recognition Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/keyword-recognition-overview.md
+
+ Title: Keyword recognition overview - Speech service
+
+description: An overview of the features, capabilities, and restrictions for keyword recognition by using the Speech Software Development Kit (SDK).
++++++ Last updated : 04/30/2021++++
+# What is keyword recognition?
+
+Keyword recognition detects a word or short phrase within a stream of audio. It's also referred to as keyword spotting.
+
+The most common use case of keyword recognition is voice activation of virtual assistants. For example, "Hey Cortana" is the keyword for the Cortana assistant. Upon recognition of the keyword, a scenario-specific action is carried out. For virtual assistant scenarios, a common resulting action is speech recognition of audio that follows the keyword.
+
+Generally, virtual assistants are always listening. Keyword recognition acts as a privacy boundary for the user. A keyword requirement acts as a gate that prevents unrelated user audio from crossing the local device to the cloud.
+
+To balance accuracy, latency, and computational complexity, keyword recognition is implemented as a multistage system. For all stages beyond the first, audio is processed only if the preceding stage believes it has recognized the keyword of interest.
+
+The current system is designed with multiple stages that span the edge and cloud:
+
+![Diagram that shows multiple stages of keyword recognition across the edge and cloud.](media/custom-keyword/kw-recognition-multi-stage.png)
+
+Accuracy of keyword recognition is measured via the following metrics:
+
+* **Correct accept rate**: Measures the system's ability to recognize the keyword when it's spoken by a user. The correct accept rate is also known as the true positive rate.
+* **False accept rate**: Measures the system's ability to filter out audio that isn't the keyword spoken by a user. The false accept rate is also known as the false positive rate.
+
+The goal is to maximize the correct accept rate while minimizing the false accept rate. The current system is designed to detect a keyword or phrase preceded by a short amount of silence. Detecting a keyword in the middle of a sentence or utterance isn't supported.
+
+## Custom keyword for on-device models
+
+With the [Custom Keyword portal on Speech Studio](https://speech.microsoft.com/customkeyword), you can generate keyword recognition models that execute at the edge by specifying any word or short phrase. You can further personalize your keyword model by choosing the right pronunciations.
+
+### Pricing
+
+There's no cost to use custom keyword to generate models, including both Basic and Advanced models. There's also no cost to run models on-device with the Speech SDK when used in conjunction with other Speech service features such as speech to text.
+
+### Types of models
+
+You can use custom keyword to generate two types of on-device models for any keyword.
+
+| Model type | Description |
+| - | -- |
+| Basic | Best suited for demo or rapid prototyping purposes. Models are generated with a common base model and can take up to 15 minutes to be ready. Models might not have optimal accuracy characteristics. |
+| Advanced | Best suited for product integration purposes. Models are generated with adaptation of a common base model by using simulated training data to improve accuracy characteristics. It can take up to 48 hours for models to be ready. |
+
+> [!NOTE]
+> You can view a list of regions that support the **Advanced** model type in the [keyword recognition region support](regions.md#speech-service) documentation.
+
+Neither model type requires you to upload training data. Custom keyword fully handles data generation and model training.
+
+### Pronunciations
+
+When you create a new model, custom keyword automatically generates possible pronunciations of the provided keyword. You can listen to each pronunciation and choose all variations that closely represent the way you expect users to say the keyword. All other pronunciations shouldn't be selected.
+
+It's important to be deliberate about the pronunciations you select to ensure the best accuracy characteristics. For example, if you choose more pronunciations than you need, you might get higher false accept rates. If you choose too few pronunciations, where not all expected variations are covered, you might get lower correct accept rates.
+
+### Test models
+
+After on-device models are generated by custom keyword, they can be tested directly on the portal. You can use the portal to speak directly into your browser and get keyword recognition results.
+
+## Keyword verification
+
+Keyword verification is a cloud service that reduces the impact of false accepts from on-device models with robust models running on Azure. Tuning or training isn't required for keyword verification to work with your keyword. Incremental model updates are continually deployed to the service to improve accuracy and latency and are transparent to client applications.
+
+### Pricing
+
+Keyword verification is always used in combination with speech to text. There's no cost to use keyword verification beyond the cost of speech to text.
+
+### Keyword verification and speech to text
+
+When keyword verification is used, it's always in combination with speech to text. Both services run in parallel, which means audio is sent to both services for simultaneous processing.
+
+![Diagram that shows parallel processing of keyword verification and speech to text.](media/custom-keyword/kw-verification-parallel-processing.png)
+
+Running keyword verification and speech to text in parallel yields the following benefits:
+
+* **No additional latency on speech to text results**: Parallel execution means that keyword verification adds no latency, and the client receives speech to text results just as quickly. If keyword verification determines that the keyword wasn't present in the audio, speech to text processing is terminated, which protects against unnecessary speech to text processing. Network and cloud model processing increases the user-perceived latency of voice activation. For more information, see [Recommendations and guidelines](keyword-recognition-guidelines.md).
+* **Forced keyword prefix in speech to text results**: Speech to text processing ensures that the results sent to the client are prefixed with the keyword. This behavior allows for increased accuracy in the speech to text results for speech that follows the keyword.
+* **Increased speech to text timeout**: Because of the expected presence of the keyword at the beginning of audio, speech to text allows for a longer pause of up to five seconds after the keyword before it determines the end of speech and terminates speech to text processing. This behavior ensures that the user experience is correctly handled for staged commands (*\<keyword> \<pause> \<command>*) and chained commands (*\<keyword> \<command>*).
+
+### Keyword verification responses and latency considerations
+
+For each request to the service, keyword verification returns one of two responses: accepted or rejected. The processing latency varies depending on the length of the keyword and the length of the audio segment expected to contain the keyword. Processing latency doesn't include network cost between the client and Speech services.
+
+| Keyword verification response | Description |
+| -- | -- |
+| Accepted | Indicates the service believed that the keyword was present in the audio stream provided as part of the request. |
+| Rejected | Indicates the service believed that the keyword wasn't present in the audio stream provided as part of the request. |
+
+Rejected cases often yield higher latencies as the service processes more audio than accepted cases. By default, keyword verification processes a maximum of two seconds of audio to search for the keyword. If the keyword is determined not to be present in two seconds, the service times out and signals a rejected response to the client.
+
+### Use keyword verification with on-device models from custom keyword
+
+The Speech SDK enables seamless use of on-device models generated by using custom keyword with keyword verification and speech to text. It transparently handles:
+
+* Audio gating to keyword verification and speech recognition based on the outcome of an on-device model.
+* Communicating the keyword to keyword verification.
+* Communicating any additional metadata to the cloud for orchestrating the end-to-end scenario.
+
+You don't need to explicitly specify any configuration parameters. All necessary information is automatically extracted from the on-device model generated by custom keyword.
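+
+For orientation, here's a minimal C# sketch of that end-to-end usage, assuming a configured `SpeechConfig` named `speechConfig` and a `.table` file downloaded from the custom keyword portal (the file name is a placeholder). Run it inside an async method:
+
+```csharp
+using System;
+using Microsoft.CognitiveServices.Speech;
+
+// Load the on-device model generated by custom keyword. "YourKeyword.table" is a placeholder.
+var keywordModel = KeywordRecognitionModel.FromFile("YourKeyword.table");
+using var recognizer = new SpeechRecognizer(speechConfig);
+
+recognizer.Recognized += (s, e) =>
+{
+    // Results for speech that follows the verified keyword arrive here.
+    Console.WriteLine($"RECOGNIZED: {e.Result.Text}");
+};
+
+// Keyword verification and speech to text are orchestrated by the SDK after this call.
+await recognizer.StartKeywordRecognitionAsync(keywordModel);
+```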
+
+The sample and tutorials linked here show how to use the Speech SDK:
+
+ * [Voice assistant samples on GitHub](https://github.com/Azure-Samples/Cognitive-Services-Voice-Assistant)
+ * [Tutorial: Voice enable your assistant built using Azure AI Bot Service with the C# Speech SDK](./tutorial-voice-enable-your-bot-speech-sdk.md)
+ * [Tutorial: Create a custom commands application with simple voice commands](./how-to-develop-custom-commands-application.md)
+
+## Speech SDK integration and scenarios
+
+The Speech SDK enables easy use of personalized on-device keyword recognition models generated with custom keyword and keyword verification. To ensure that your product needs can be met, the SDK supports the following two scenarios:
+
+| Scenario | Description | Samples |
+| -- | -- | - |
+| End-to-end keyword recognition with speech to text | Best suited for products that will use a customized on-device keyword model from custom keyword with keyword verification and speech to text. This scenario is the most common. | <ul><li>[Voice assistant sample code](https://github.com/Azure-Samples/Cognitive-Services-Voice-Assistant)</li><li>[Tutorial: Voice enable your assistant built using Azure AI Bot Service with the C# Speech SDK](./tutorial-voice-enable-your-bot-speech-sdk.md)</li><li>[Tutorial: Create a custom commands application with simple voice commands](./how-to-develop-custom-commands-application.md)</li></ul> |
+| Offline keyword recognition | Best suited for products without network connectivity that will use a customized on-device keyword model from custom keyword. | <ul><li>[C# on Windows UWP sample](https://github.com/Azure-Samples/cognitive-services-speech-sdk/tree/master/quickstart/csharp/uwp/keyword-recognizer)</li><li>[Java on Android sample](https://github.com/Azure-Samples/cognitive-services-speech-sdk/tree/master/quickstart/java/android/keyword-recognizer)</li></ul> |
+
+## Next steps
+
+* [Read the quickstart to generate on-device keyword recognition models using custom keyword](custom-keyword-basics.md)
+* [Learn more about voice assistants](voice-assistants.md)
ai-services Language Identification https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/language-identification.md
+
+ Title: Language identification - Speech service
+
+description: Language identification is used to determine the language being spoken in audio when compared against a list of provided languages.
+++++++ Last updated : 04/19/2023+
+zone_pivot_groups: programming-languages-speech-services-nomore-variant
++
+# Language identification
+
+Language identification is used to identify languages spoken in audio when compared against a list of [supported languages](language-support.md?tabs=language-identification).
+
+Language identification (LID) use cases include:
+
+* [Speech to text recognition](#speech-to-text) when you need to identify the language in an audio source and then transcribe it to text.
+* [Speech translation](#speech-translation) when you need to identify the language in an audio source and then translate it to another language.
+
+For speech recognition, the initial latency is higher with language identification. Include this optional feature only if you need it.
+
+## Configuration options
+
+> [!IMPORTANT]
+> Language Identification APIs are simplified with the Speech SDK version 1.25 and later. The `SpeechServiceConnection_SingleLanguageIdPriority` and `SpeechServiceConnection_ContinuousLanguageIdPriority` properties have been removed and replaced by a single property, `SpeechServiceConnection_LanguageIdMode`. Prioritizing between low latency and high accuracy is no longer necessary following recent model improvements. Now, you only need to select whether to run at-start or continuous language identification when doing continuous speech recognition or translation.
+
+Whether you use language identification with [speech to text](#speech-to-text) or with [speech translation](#speech-translation), there are some common concepts and configuration options.
+
+- Define a list of [candidate languages](#candidate-languages) that you expect in the audio.
+- Decide whether to use [at-start or continuous](#at-start-and-continuous-language-identification) language identification.
+
+Then you make a [recognize once or continuous recognition](#recognize-once-or-continuous) request to the Speech service.
+
+Code snippets are included with the concepts described next. Complete samples for each use case are provided later.
+
+### Candidate languages
+
+You provide candidate languages with the `AutoDetectSourceLanguageConfig` object, at least one of which is expected to be in the audio. You can include up to four languages for [at-start LID](#at-start-and-continuous-language-identification) or up to 10 languages for [continuous LID](#at-start-and-continuous-language-identification). The Speech service returns one of the candidate languages provided even if those languages weren't in the audio. For example, if `fr-FR` (French) and `en-US` (English) are provided as candidates, but German is spoken, either `fr-FR` or `en-US` would be returned.
+
+You must provide the full locale with dash (`-`) separator, but language identification only uses one locale per base language. Don't include multiple locales (for example, "en-US" and "en-GB") for the same language.
++
+```csharp
+var autoDetectSourceLanguageConfig =
+ AutoDetectSourceLanguageConfig.FromLanguages(new string[] { "en-US", "de-DE", "zh-CN" });
+```
++
+```cpp
+auto autoDetectSourceLanguageConfig =
+ AutoDetectSourceLanguageConfig::FromLanguages({ "en-US", "de-DE", "zh-CN" });
+```
++
+```python
+auto_detect_source_language_config = \
+ speechsdk.languageconfig.AutoDetectSourceLanguageConfig(languages=["en-US", "de-DE", "zh-CN"])
+```
++
+```java
+AutoDetectSourceLanguageConfig autoDetectSourceLanguageConfig =
+ AutoDetectSourceLanguageConfig.fromLanguages(Arrays.asList("en-US", "de-DE", "zh-CN"));
+```
++
+```javascript
+var autoDetectSourceLanguageConfig = SpeechSDK.AutoDetectSourceLanguageConfig.fromLanguages(["en-US", "de-DE", "zh-CN"]);
+```
++
+```objective-c
+NSArray *languages = @[@"en-US", @"de-DE", @"zh-CN"];
+SPXAutoDetectSourceLanguageConfiguration* autoDetectSourceLanguageConfig = \
+ [[SPXAutoDetectSourceLanguageConfiguration alloc]init:languages];
+```
++
+For more information, see [supported languages](language-support.md?tabs=language-identification).
+
+### At-start and continuous language identification
+
+Speech supports both at-start and continuous language identification (LID).
+
+> [!NOTE]
+> Continuous language identification is only supported with Speech SDKs in C#, C++, Java ([for speech to text only](#speech-to-text)), JavaScript ([for speech to text only](#speech-to-text)), and Python.
+
+- At-start LID identifies the language once within the first few seconds of audio. Use at-start LID if the language in the audio won't change. With at-start LID, a single language is detected and returned in less than 5 seconds.
+- Continuous LID can identify multiple languages for the duration of the audio. Use continuous LID if the language in the audio could change. Continuous LID doesn't support changing languages within the same sentence. For example, if you're primarily speaking Spanish and insert some English words, it will not detect the language change per word.
+
+You implement at-start LID or continuous LID by calling methods for [recognize once or continuous](#recognize-once-or-continuous). Continuous LID is only supported with continuous recognition.
+
+### Recognize once or continuous
+
+Language identification is completed with recognition objects and operations. You'll make a request to the Speech service for recognition of audio.
+
+> [!NOTE]
+> Don't confuse recognition with identification. Recognition can be used with or without language identification.
+
+You'll either call the "recognize once" method, or the start and stop continuous recognition methods. You choose from:
+
+- Recognize once with at-start LID. Continuous LID isn't supported for recognize once.
+- Continuous recognition with at-start LID.
+- Continuous recognition with continuous LID.
+
+The `SpeechServiceConnection_LanguageIdMode` property is only required for continuous LID. Without it, the Speech service defaults to at-start LID. The supported values are "AtStart" for at-start LID or "Continuous" for continuous LID.
++
+```csharp
+// Recognize once with At-start LID. Continuous LID isn't supported for recognize once.
+var result = await recognizer.RecognizeOnceAsync();
+
+// Start and stop continuous recognition with At-start LID
+await recognizer.StartContinuousRecognitionAsync();
+await recognizer.StopContinuousRecognitionAsync();
+
+// Start and stop continuous recognition with Continuous LID
+speechConfig.SetProperty(PropertyId.SpeechServiceConnection_LanguageIdMode, "Continuous");
+await recognizer.StartContinuousRecognitionAsync();
+await recognizer.StopContinuousRecognitionAsync();
+```
++
+```cpp
+// Recognize once with At-start LID. Continuous LID isn't supported for recognize once.
+auto result = recognizer->RecognizeOnceAsync().get();
+
+// Start and stop continuous recognition with At-start LID
+recognizer->StartContinuousRecognitionAsync().get();
+recognizer->StopContinuousRecognitionAsync().get();
+
+// Start and stop continuous recognition with Continuous LID
+speechConfig->SetProperty(PropertyId::SpeechServiceConnection_LanguageIdMode, "Continuous");
+recognizer->StartContinuousRecognitionAsync().get();
+recognizer->StopContinuousRecognitionAsync().get();
+```
++
+```java
+// Recognize once with At-start LID. Continuous LID isn't supported for recognize once.
+SpeechRecognitionResult result = recognizer.recognizeOnceAsync().get();
+
+// Start and stop continuous recognition with At-start LID
+recognizer.startContinuousRecognitionAsync().get();
+recognizer.stopContinuousRecognitionAsync().get();
+
+// Start and stop continuous recognition with Continuous LID
+speechConfig.setProperty(PropertyId.SpeechServiceConnection_LanguageIdMode, "Continuous");
+recognizer.startContinuousRecognitionAsync().get();
+recognizer.stopContinuousRecognitionAsync().get();
+```
++
+```python
+# Recognize once with At-start LID. Continuous LID isn't supported for recognize once.
+result = recognizer.recognize_once()
+
+# Start and stop continuous recognition with At-start LID
+recognizer.start_continuous_recognition()
+recognizer.stop_continuous_recognition()
+
+# Start and stop continuous recognition with Continuous LID
+speech_config.set_property(property_id=speechsdk.PropertyId.SpeechServiceConnection_LanguageIdMode, value='Continuous')
+recognizer.start_continuous_recognition()
+recognizer.stop_continuous_recognition()
+```
++
+## Speech to text
+
+You use Speech to text recognition when you need to identify the language in an audio source and then transcribe it to text. For more information, see [Speech to text overview](speech-to-text.md).
+
+> [!NOTE]
+> Speech to text recognition with at-start language identification is supported with Speech SDKs in C#, C++, Python, Java, JavaScript, and Objective-C. Speech to text recognition with continuous language identification is only supported with Speech SDKs in C#, C++, Java, JavaScript, and Python.
+>
+> Currently for speech to text recognition with continuous language identification, you must create a SpeechConfig from the `wss://{region}.stt.speech.microsoft.com/speech/universal/v2` endpoint string, as shown in code examples. In a future SDK release you won't need to set it.
++
+See more examples of speech to text recognition with language identification on [GitHub](https://github.com/Azure-Samples/cognitive-services-speech-sdk/blob/master/samples/csharp/sharedcontent/console/speech_recognition_with_language_id_samples.cs).
+
+### [Recognize once](#tab/once)
+
+```csharp
+using Microsoft.CognitiveServices.Speech;
+using Microsoft.CognitiveServices.Speech.Audio;
+
+var speechConfig = SpeechConfig.FromSubscription("YourSubscriptionKey","YourServiceRegion");
+
+var autoDetectSourceLanguageConfig =
+ AutoDetectSourceLanguageConfig.FromLanguages(
+ new string[] { "en-US", "de-DE", "zh-CN" });
+
+using var audioConfig = AudioConfig.FromDefaultMicrophoneInput();
+using (var recognizer = new SpeechRecognizer(
+ speechConfig,
+ autoDetectSourceLanguageConfig,
+ audioConfig))
+{
+ var speechRecognitionResult = await recognizer.RecognizeOnceAsync();
+ var autoDetectSourceLanguageResult =
+ AutoDetectSourceLanguageResult.FromResult(speechRecognitionResult);
+ var detectedLanguage = autoDetectSourceLanguageResult.Language;
+}
+```
+
+### [Continuous recognition](#tab/continuous)
+
+```csharp
+using Microsoft.CognitiveServices.Speech;
+using Microsoft.CognitiveServices.Speech.Audio;
+
+var region = "YourServiceRegion";
+// Currently the v2 endpoint is required. In a future SDK release you won't need to set it.
+var endpointString = $"wss://{region}.stt.speech.microsoft.com/speech/universal/v2";
+var endpointUrl = new Uri(endpointString);
+
+var config = SpeechConfig.FromEndpoint(endpointUrl, "YourSubscriptionKey");
+
+// Set the LanguageIdMode (Optional; Either Continuous or AtStart are accepted; Default AtStart)
+config.SetProperty(PropertyId.SpeechServiceConnection_LanguageIdMode, "Continuous");
+
+var autoDetectSourceLanguageConfig = AutoDetectSourceLanguageConfig.FromLanguages(new string[] { "en-US", "de-DE", "zh-CN" });
+
+var stopRecognition = new TaskCompletionSource<int>();
+using (var audioInput = AudioConfig.FromWavFileInput(@"en-us_zh-cn.wav"))
+{
+ using (var recognizer = new SpeechRecognizer(config, autoDetectSourceLanguageConfig, audioInput))
+ {
+ // Subscribes to events.
+ recognizer.Recognizing += (s, e) =>
+ {
+ if (e.Result.Reason == ResultReason.RecognizingSpeech)
+ {
+ Console.WriteLine($"RECOGNIZING: Text={e.Result.Text}");
+ var autoDetectSourceLanguageResult = AutoDetectSourceLanguageResult.FromResult(e.Result);
+ Console.WriteLine($"DETECTED: Language={autoDetectSourceLanguageResult.Language}");
+ }
+ };
+
+ recognizer.Recognized += (s, e) =>
+ {
+ if (e.Result.Reason == ResultReason.RecognizedSpeech)
+ {
+ Console.WriteLine($"RECOGNIZED: Text={e.Result.Text}");
+ var autoDetectSourceLanguageResult = AutoDetectSourceLanguageResult.FromResult(e.Result);
+ Console.WriteLine($"DETECTED: Language={autoDetectSourceLanguageResult.Language}");
+ }
+ else if (e.Result.Reason == ResultReason.NoMatch)
+ {
+ Console.WriteLine($"NOMATCH: Speech could not be recognized.");
+ }
+ };
+
+ recognizer.Canceled += (s, e) =>
+ {
+ Console.WriteLine($"CANCELED: Reason={e.Reason}");
+
+ if (e.Reason == CancellationReason.Error)
+ {
+ Console.WriteLine($"CANCELED: ErrorCode={e.ErrorCode}");
+ Console.WriteLine($"CANCELED: ErrorDetails={e.ErrorDetails}");
+ Console.WriteLine($"CANCELED: Did you set the speech resource key and region values?");
+ }
+
+ stopRecognition.TrySetResult(0);
+ };
+
+ recognizer.SessionStarted += (s, e) =>
+ {
+ Console.WriteLine("\n Session started event.");
+ };
+
+ recognizer.SessionStopped += (s, e) =>
+ {
+ Console.WriteLine("\n Session stopped event.");
+ Console.WriteLine("\nStop recognition.");
+ stopRecognition.TrySetResult(0);
+ };
+
+ // Starts continuous recognition. Uses StopContinuousRecognitionAsync() to stop recognition.
+ await recognizer.StartContinuousRecognitionAsync().ConfigureAwait(false);
+
+ // Waits for completion.
+ // Use Task.WaitAny to keep the task rooted.
+ Task.WaitAny(new[] { stopRecognition.Task });
+
+ // Stops recognition.
+ await recognizer.StopContinuousRecognitionAsync().ConfigureAwait(false);
+ }
+}
+```
++++
+See more examples of speech to text recognition with language identification on [GitHub](https://github.com/Azure-Samples/cognitive-services-speech-sdk/blob/master/samples/cpp/windows/console/samples/speech_recognition_samples.cpp).
+
+### [Recognize once](#tab/once)
+
+```cpp
+using namespace std;
+using namespace Microsoft::CognitiveServices::Speech;
+using namespace Microsoft::CognitiveServices::Speech::Audio;
+
+auto speechConfig = SpeechConfig::FromSubscription("YourSubscriptionKey","YourServiceRegion");
+
+auto autoDetectSourceLanguageConfig =
+ AutoDetectSourceLanguageConfig::FromLanguages({ "en-US", "de-DE", "zh-CN" });
+
+auto recognizer = SpeechRecognizer::FromConfig(
+ speechConfig,
+ autoDetectSourceLanguageConfig
+ );
+
+auto speechRecognitionResult = recognizer->RecognizeOnceAsync().get();
+auto autoDetectSourceLanguageResult =
+ AutoDetectSourceLanguageResult::FromResult(speechRecognitionResult);
+auto detectedLanguage = autoDetectSourceLanguageResult->Language;
+```
+
+### [Continuous recognition](#tab/continuous)
++++++
+See more examples of speech to text recognition with language identification on [GitHub](https://github.com/Azure-Samples/cognitive-services-speech-sdk/blob/master/samples/java/jre/console/src/com/microsoft/cognitiveservices/speech/samples/console/SpeechRecognitionSamples.java).
+
+### [Recognize once](#tab/once)
+
+```java
+AutoDetectSourceLanguageConfig autoDetectSourceLanguageConfig =
+ AutoDetectSourceLanguageConfig.fromLanguages(Arrays.asList("en-US", "de-DE"));
+
+SpeechRecognizer recognizer = new SpeechRecognizer(
+ speechConfig,
+ autoDetectSourceLanguageConfig,
+ audioConfig);
+
+Future<SpeechRecognitionResult> future = recognizer.recognizeOnceAsync();
+SpeechRecognitionResult result = future.get(30, TimeUnit.SECONDS);
+AutoDetectSourceLanguageResult autoDetectSourceLanguageResult =
+ AutoDetectSourceLanguageResult.fromResult(result);
+String detectedLanguage = autoDetectSourceLanguageResult.getLanguage();
+
+recognizer.close();
+speechConfig.close();
+autoDetectSourceLanguageConfig.close();
+audioConfig.close();
+result.close();
+```
+
+### [Continuous recognition](#tab/continuous)
++++++
+See more examples of speech to text recognition with language identification on [GitHub](https://github.com/Azure-Samples/cognitive-services-speech-sdk/blob/master/samples/python/console/speech_sample.py).
+
+### [Recognize once](#tab/once)
+
+```Python
+auto_detect_source_language_config = \
+ speechsdk.languageconfig.AutoDetectSourceLanguageConfig(languages=["en-US", "de-DE"])
+speech_recognizer = speechsdk.SpeechRecognizer(
+ speech_config=speech_config,
+ auto_detect_source_language_config=auto_detect_source_language_config,
+ audio_config=audio_config)
+result = speech_recognizer.recognize_once()
+auto_detect_source_language_result = speechsdk.AutoDetectSourceLanguageResult(result)
+detected_language = auto_detect_source_language_result.language
+```
+
+### [Continuous recognition](#tab/continuous)
+
+```python
+import azure.cognitiveservices.speech as speechsdk
+import time
+import json
+
+speech_key, service_region = "YourSubscriptionKey","YourServiceRegion"
+weatherfilename="en-us_zh-cn.wav"
+
+# Currently the v2 endpoint is required. In a future SDK release you won't need to set it.
+endpoint_string = "wss://{}.stt.speech.microsoft.com/speech/universal/v2".format(service_region)
+speech_config = speechsdk.SpeechConfig(subscription=speech_key, endpoint=endpoint_string)
+audio_config = speechsdk.audio.AudioConfig(filename=weatherfilename)
+
+# Set the LanguageIdMode (Optional; Either Continuous or AtStart are accepted; Default AtStart)
+speech_config.set_property(property_id=speechsdk.PropertyId.SpeechServiceConnection_LanguageIdMode, value='Continuous')
+
+auto_detect_source_language_config = speechsdk.languageconfig.AutoDetectSourceLanguageConfig(
+ languages=["en-US", "de-DE", "zh-CN"])
+
+speech_recognizer = speechsdk.SpeechRecognizer(
+ speech_config=speech_config,
+ auto_detect_source_language_config=auto_detect_source_language_config,
+ audio_config=audio_config)
+
+done = False
+
+def stop_cb(evt):
+ """callback that signals to stop continuous recognition upon receiving an event `evt`"""
+ print('CLOSING on {}'.format(evt))
+    global done
+ done = True
+
+# Connect callbacks to the events fired by the speech recognizer
+speech_recognizer.recognizing.connect(lambda evt: print('RECOGNIZING: {}'.format(evt)))
+speech_recognizer.recognized.connect(lambda evt: print('RECOGNIZED: {}'.format(evt)))
+speech_recognizer.session_started.connect(lambda evt: print('SESSION STARTED: {}'.format(evt)))
+speech_recognizer.session_stopped.connect(lambda evt: print('SESSION STOPPED {}'.format(evt)))
+speech_recognizer.canceled.connect(lambda evt: print('CANCELED {}'.format(evt)))
+# stop continuous recognition on either session stopped or canceled events
+speech_recognizer.session_stopped.connect(stop_cb)
+speech_recognizer.canceled.connect(stop_cb)
+
+# Start continuous speech recognition
+speech_recognizer.start_continuous_recognition()
+while not done:
+ time.sleep(.5)
+
+speech_recognizer.stop_continuous_recognition()
+```
+++
+```Objective-C
+NSArray *languages = @[@"en-US", @"de-DE", @"zh-CN"];
+SPXAutoDetectSourceLanguageConfiguration* autoDetectSourceLanguageConfig = \
+ [[SPXAutoDetectSourceLanguageConfiguration alloc]init:languages];
+SPXSpeechRecognizer* speechRecognizer = \
+ [[SPXSpeechRecognizer alloc] initWithSpeechConfiguration:speechConfig
+ autoDetectSourceLanguageConfiguration:autoDetectSourceLanguageConfig
+ audioConfiguration:audioConfig];
+SPXSpeechRecognitionResult *result = [speechRecognizer recognizeOnce];
+SPXAutoDetectSourceLanguageResult *languageDetectionResult = [[SPXAutoDetectSourceLanguageResult alloc] init:result];
+NSString *detectedLanguage = [languageDetectionResult language];
+```
+++
+```Javascript
+var autoDetectSourceLanguageConfig = SpeechSDK.AutoDetectSourceLanguageConfig.fromLanguages(["en-US", "de-DE"]);
+var speechRecognizer = SpeechSDK.SpeechRecognizer.FromConfig(speechConfig, autoDetectSourceLanguageConfig, audioConfig);
+speechRecognizer.recognizeOnceAsync((result: SpeechSDK.SpeechRecognitionResult) => {
+ var languageDetectionResult = SpeechSDK.AutoDetectSourceLanguageResult.fromResult(result);
+ var detectedLanguage = languageDetectionResult.language;
+},
+{});
+```
++
+### Speech to text custom models
+
+> [!NOTE]
+> Language detection with custom models can only be used with real-time speech to text and speech translation. Batch transcription only supports language detection for base models.
+
+This sample shows how to use language detection with a custom endpoint. If the detected language is `en-US`, then the default model is used. If the detected language is `fr-FR`, then the custom model endpoint is used. For more information, see [Deploy a Custom Speech model](how-to-custom-speech-deploy-model.md).
+
+```csharp
+var sourceLanguageConfigs = new SourceLanguageConfig[]
+{
+ SourceLanguageConfig.FromLanguage("en-US"),
+ SourceLanguageConfig.FromLanguage("fr-FR", "The Endpoint Id for custom model of fr-FR")
+};
+var autoDetectSourceLanguageConfig =
+ AutoDetectSourceLanguageConfig.FromSourceLanguageConfigs(
+ sourceLanguageConfigs);
+```
++
+This sample shows how to use language detection with a custom endpoint. If the detected language is `en-US`, then the default model is used. If the detected language is `fr-FR`, then the custom model endpoint is used. For more information, see [Deploy a Custom Speech model](how-to-custom-speech-deploy-model.md).
+
+```cpp
+std::vector<std::shared_ptr<SourceLanguageConfig>> sourceLanguageConfigs;
+sourceLanguageConfigs.push_back(
+ SourceLanguageConfig::FromLanguage("en-US"));
+sourceLanguageConfigs.push_back(
+ SourceLanguageConfig::FromLanguage("fr-FR", "The Endpoint Id for custom model of fr-FR"));
+
+auto autoDetectSourceLanguageConfig =
+ AutoDetectSourceLanguageConfig::FromSourceLanguageConfigs(
+ sourceLanguageConfigs);
+```
++
+This sample shows how to use language detection with a custom endpoint. If the detected language is `en-US`, then the default model is used. If the detected language is `fr-FR`, then the custom model endpoint is used. For more information, see [Deploy a Custom Speech model](how-to-custom-speech-deploy-model.md).
+
+```java
+List<SourceLanguageConfig> sourceLanguageConfigs = new ArrayList<>();
+sourceLanguageConfigs.add(
+ SourceLanguageConfig.fromLanguage("en-US"));
+sourceLanguageConfigs.add(
+ SourceLanguageConfig.fromLanguage("fr-FR", "The Endpoint Id for custom model of fr-FR"));
+
+AutoDetectSourceLanguageConfig autoDetectSourceLanguageConfig =
+ AutoDetectSourceLanguageConfig.fromSourceLanguageConfigs(
+ sourceLanguageConfigs);
+```
++
+This sample shows how to use language detection with a custom endpoint. If the detected language is `en-US`, then the default model is used. If the detected language is `fr-FR`, then the custom model endpoint is used. For more information, see [Deploy a Custom Speech model](how-to-custom-speech-deploy-model.md).
+
+```Python
+ en_language_config = speechsdk.languageconfig.SourceLanguageConfig("en-US")
+ fr_language_config = speechsdk.languageconfig.SourceLanguageConfig("fr-FR", "The Endpoint Id for custom model of fr-FR")
+ auto_detect_source_language_config = speechsdk.languageconfig.AutoDetectSourceLanguageConfig(
+ sourceLanguageConfigs=[en_language_config, fr_language_config])
+```
++
+This sample shows how to use language detection with a custom endpoint. If the detected language is `en-US`, then the default model is used. If the detected language is `fr-FR`, then the custom model endpoint is used. For more information, see [Deploy a Custom Speech model](how-to-custom-speech-deploy-model.md).
+
+```Objective-C
+SPXSourceLanguageConfiguration* enLanguageConfig = [[SPXSourceLanguageConfiguration alloc]init:@"en-US"];
+SPXSourceLanguageConfiguration* frLanguageConfig = \
+ [[SPXSourceLanguageConfiguration alloc]initWithLanguage:@"fr-FR"
+ endpointId:@"The Endpoint Id for custom model of fr-FR"];
+NSArray *languageConfigs = @[enLanguageConfig, frLanguageConfig];
+SPXAutoDetectSourceLanguageConfiguration* autoDetectSourceLanguageConfig = \
+ [[SPXAutoDetectSourceLanguageConfiguration alloc]initWithSourceLanguageConfigurations:languageConfigs];
+```
++
+Language detection with a custom endpoint isn't supported by the Speech SDK for JavaScript. For example, if you include a custom endpoint ID for `fr-FR` as shown here, the custom endpoint is ignored.
+
+```Javascript
+var enLanguageConfig = SpeechSDK.SourceLanguageConfig.fromLanguage("en-US");
+var frLanguageConfig = SpeechSDK.SourceLanguageConfig.fromLanguage("fr-FR", "The Endpoint Id for custom model of fr-FR");
+var autoDetectSourceLanguageConfig = SpeechSDK.AutoDetectSourceLanguageConfig.fromSourceLanguageConfigs([enLanguageConfig, frLanguageConfig]);
+```
++
+## Speech translation
+
+You use Speech translation when you need to identify the language in an audio source and then translate it to another language. For more information, see [Speech translation overview](speech-translation.md).
+
+> [!NOTE]
+> Speech translation with language identification is only supported with Speech SDKs in C#, C++, and Python.
+> Currently for speech translation with language identification, you must create a SpeechConfig from the `wss://{region}.stt.speech.microsoft.com/speech/universal/v2` endpoint string, as shown in code examples. In a future SDK release you won't need to set it.
+
+See more examples of speech translation with language identification on [GitHub](https://github.com/Azure-Samples/cognitive-services-speech-sdk/blob/master/samples/csharp/sharedcontent/console/translation_samples.cs).
+
+### [Recognize once](#tab/once)
+
+```csharp
+using Microsoft.CognitiveServices.Speech;
+using Microsoft.CognitiveServices.Speech.Audio;
+using Microsoft.CognitiveServices.Speech.Translation;
+
+public static async Task RecognizeOnceSpeechTranslationAsync()
+{
+ var region = "YourServiceRegion";
+ // Currently the v2 endpoint is required. In a future SDK release you won't need to set it.
+ var endpointString = $"wss://{region}.stt.speech.microsoft.com/speech/universal/v2";
+ var endpointUrl = new Uri(endpointString);
+
+    var speechTranslationConfig = SpeechTranslationConfig.FromEndpoint(endpointUrl, "YourSubscriptionKey");
+
+ // Source language is required, but currently ignored.
+ string fromLanguage = "en-US";
+ speechTranslationConfig.SpeechRecognitionLanguage = fromLanguage;
+
+ speechTranslationConfig.AddTargetLanguage("de");
+ speechTranslationConfig.AddTargetLanguage("fr");
+
+ var autoDetectSourceLanguageConfig = AutoDetectSourceLanguageConfig.FromLanguages(new string[] { "en-US", "de-DE", "zh-CN" });
+
+ using var audioConfig = AudioConfig.FromDefaultMicrophoneInput();
+
+ using (var recognizer = new TranslationRecognizer(
+ speechTranslationConfig,
+ autoDetectSourceLanguageConfig,
+ audioConfig))
+ {
+
+ Console.WriteLine("Say something or read from file...");
+ var result = await recognizer.RecognizeOnceAsync().ConfigureAwait(false);
+
+ if (result.Reason == ResultReason.TranslatedSpeech)
+ {
+ var lidResult = result.Properties.GetProperty(PropertyId.SpeechServiceConnection_AutoDetectSourceLanguageResult);
+
+ Console.WriteLine($"RECOGNIZED in '{lidResult}': Text={result.Text}");
+ foreach (var element in result.Translations)
+ {
+ Console.WriteLine($" TRANSLATED into '{element.Key}': {element.Value}");
+ }
+ }
+ }
+}
+```
+
+### [Continuous recognition](#tab/continuous)
+
+```csharp
+using Microsoft.CognitiveServices.Speech;
+using Microsoft.CognitiveServices.Speech.Audio;
+using Microsoft.CognitiveServices.Speech.Translation;
+
+public static async Task MultiLingualTranslation()
+{
+ var region = "YourServiceRegion";
+ // Currently the v2 endpoint is required. In a future SDK release you won't need to set it.
+ var endpointString = $"wss://{region}.stt.speech.microsoft.com/speech/universal/v2";
+ var endpointUrl = new Uri(endpointString);
+
+ var config = SpeechTranslationConfig.FromEndpoint(endpointUrl, "YourSubscriptionKey");
+
+ // Source language is required, but currently ignored.
+ string fromLanguage = "en-US";
+ config.SpeechRecognitionLanguage = fromLanguage;
+
+ config.AddTargetLanguage("de");
+ config.AddTargetLanguage("fr");
+
+ // Set the LanguageIdMode (Optional; Either Continuous or AtStart are accepted; Default AtStart)
+ config.SetProperty(PropertyId.SpeechServiceConnection_LanguageIdMode, "Continuous");
+ var autoDetectSourceLanguageConfig = AutoDetectSourceLanguageConfig.FromLanguages(new string[] { "en-US", "de-DE", "zh-CN" });
+
+ var stopTranslation = new TaskCompletionSource<int>();
+ using (var audioInput = AudioConfig.FromWavFileInput(@"en-us_zh-cn.wav"))
+ {
+ using (var recognizer = new TranslationRecognizer(config, autoDetectSourceLanguageConfig, audioInput))
+ {
+ recognizer.Recognizing += (s, e) =>
+ {
+ var lidResult = e.Result.Properties.GetProperty(PropertyId.SpeechServiceConnection_AutoDetectSourceLanguageResult);
+
+ Console.WriteLine($"RECOGNIZING in '{lidResult}': Text={e.Result.Text}");
+ foreach (var element in e.Result.Translations)
+ {
+ Console.WriteLine($" TRANSLATING into '{element.Key}': {element.Value}");
+ }
+ };
+
+ recognizer.Recognized += (s, e) => {
+ if (e.Result.Reason == ResultReason.TranslatedSpeech)
+ {
+ var lidResult = e.Result.Properties.GetProperty(PropertyId.SpeechServiceConnection_AutoDetectSourceLanguageResult);
+
+ Console.WriteLine($"RECOGNIZED in '{lidResult}': Text={e.Result.Text}");
+ foreach (var element in e.Result.Translations)
+ {
+ Console.WriteLine($" TRANSLATED into '{element.Key}': {element.Value}");
+ }
+ }
+ else if (e.Result.Reason == ResultReason.RecognizedSpeech)
+ {
+ Console.WriteLine($"RECOGNIZED: Text={e.Result.Text}");
+ Console.WriteLine($" Speech not translated.");
+ }
+ else if (e.Result.Reason == ResultReason.NoMatch)
+ {
+ Console.WriteLine($"NOMATCH: Speech could not be recognized.");
+ }
+ };
+
+ recognizer.Canceled += (s, e) =>
+ {
+ Console.WriteLine($"CANCELED: Reason={e.Reason}");
+
+ if (e.Reason == CancellationReason.Error)
+ {
+ Console.WriteLine($"CANCELED: ErrorCode={e.ErrorCode}");
+ Console.WriteLine($"CANCELED: ErrorDetails={e.ErrorDetails}");
+ Console.WriteLine($"CANCELED: Did you set the speech resource key and region values?");
+ }
+
+ stopTranslation.TrySetResult(0);
+ };
+
+ recognizer.SpeechStartDetected += (s, e) => {
+ Console.WriteLine("\nSpeech start detected event.");
+ };
+
+ recognizer.SpeechEndDetected += (s, e) => {
+ Console.WriteLine("\nSpeech end detected event.");
+ };
+
+ recognizer.SessionStarted += (s, e) => {
+ Console.WriteLine("\nSession started event.");
+ };
+
+ recognizer.SessionStopped += (s, e) => {
+ Console.WriteLine("\nSession stopped event.");
+ Console.WriteLine($"\nStop translation.");
+ stopTranslation.TrySetResult(0);
+ };
+
+ // Starts continuous recognition. Uses StopContinuousRecognitionAsync() to stop recognition.
+ Console.WriteLine("Start translation...");
+ await recognizer.StartContinuousRecognitionAsync().ConfigureAwait(false);
+
+ Task.WaitAny(new[] { stopTranslation.Task });
+ await recognizer.StopContinuousRecognitionAsync().ConfigureAwait(false);
+ }
+ }
+}
+```
++++
+See more examples of speech translation with language identification on [GitHub](https://github.com/Azure-Samples/cognitive-services-speech-sdk/blob/master/samples/cpp/windows/console/samples/translation_samples.cpp).
+
+### [Recognize once](#tab/once)
+
+```cpp
+auto region = "YourServiceRegion";
+// Currently the v2 endpoint is required. In a future SDK release you won't need to set it.
+auto endpointString = std::format("wss://{}.stt.speech.microsoft.com/speech/universal/v2", region);
+auto config = SpeechTranslationConfig::FromEndpoint(endpointString, "YourSubscriptionKey");
+
+auto autoDetectSourceLanguageConfig = AutoDetectSourceLanguageConfig::FromLanguages({ "en-US", "de-DE" });
+
+// Sets source and target languages
+// The source language will be detected by the language detection feature.
+// However, the SpeechRecognitionLanguage still need to set with a locale string, but it will not be used as the source language.
+// This will be fixed in a future version of Speech SDK.
+auto fromLanguage = "en-US";
+config->SetSpeechRecognitionLanguage(fromLanguage);
+config->AddTargetLanguage("de");
+config->AddTargetLanguage("fr");
+
+// Creates a translation recognizer using microphone as audio input.
+auto recognizer = TranslationRecognizer::FromConfig(config, autoDetectSourceLanguageConfig);
+cout << "Say something...\n";
+
+// Starts translation, and returns after a single utterance is recognized. The end of a
+// single utterance is determined by listening for silence at the end or until a maximum of 15
+// seconds of audio is processed. The task returns the recognized text as well as the translation.
+// Note: Since RecognizeOnceAsync() returns only a single utterance, it is suitable only for single
+// shot recognition like command or query.
+// For long-running multi-utterance recognition, use StartContinuousRecognitionAsync() instead.
+auto result = recognizer->RecognizeOnceAsync().get();
+
+// Checks result.
+if (result->Reason == ResultReason::TranslatedSpeech)
+{
+ cout << "RECOGNIZED: Text=" << result->Text << std::endl;
+
+ for (const auto& it : result->Translations)
+ {
+ cout << "TRANSLATED into '" << it.first.c_str() << "': " << it.second.c_str() << std::endl;
+ }
+}
+else if (result->Reason == ResultReason::RecognizedSpeech)
+{
+ cout << "RECOGNIZED: Text=" << result->Text << " (text could not be translated)" << std::endl;
+}
+else if (result->Reason == ResultReason::NoMatch)
+{
+ cout << "NOMATCH: Speech could not be recognized." << std::endl;
+}
+else if (result->Reason == ResultReason::Canceled)
+{
+ auto cancellation = CancellationDetails::FromResult(result);
+ cout << "CANCELED: Reason=" << (int)cancellation->Reason << std::endl;
+
+ if (cancellation->Reason == CancellationReason::Error)
+ {
+ cout << "CANCELED: ErrorCode=" << (int)cancellation->ErrorCode << std::endl;
+ cout << "CANCELED: ErrorDetails=" << cancellation->ErrorDetails << std::endl;
+ cout << "CANCELED: Did you set the speech resource key and region values?" << std::endl;
+ }
+}
+```
+
+### [Continuous recognition](#tab/continuous)
+
+```cpp
+using namespace std;
+using namespace Microsoft::CognitiveServices::Speech;
+using namespace Microsoft::CognitiveServices::Speech::Audio;
+using namespace Microsoft::CognitiveServices::Speech::Translation;
+
+void MultiLingualTranslation()
+{
+ auto region = "YourServiceRegion";
+ // Currently the v2 endpoint is required. In a future SDK release you won't need to set it.
+ auto endpointString = std::format("wss://{}.stt.speech.microsoft.com/speech/universal/v2", region);
+ auto config = SpeechTranslationConfig::FromEndpoint(endpointString, "YourSubscriptionKey");
+
+ // Set the LanguageIdMode (Optional; Either Continuous or AtStart are accepted; Default AtStart)
+    config->SetProperty(PropertyId::SpeechServiceConnection_LanguageIdMode, "Continuous");
+ auto autoDetectSourceLanguageConfig = AutoDetectSourceLanguageConfig::FromLanguages({ "en-US", "de-DE", "zh-CN" });
+
+ promise<void> recognitionEnd;
+ // Source language is required, but currently ignored.
+ auto fromLanguage = "en-US";
+ config->SetSpeechRecognitionLanguage(fromLanguage);
+ config->AddTargetLanguage("de");
+ config->AddTargetLanguage("fr");
+
+ auto audioInput = AudioConfig::FromWavFileInput("whatstheweatherlike.wav");
+ auto recognizer = TranslationRecognizer::FromConfig(config, autoDetectSourceLanguageConfig, audioInput);
+
+ recognizer->Recognizing.Connect([](const TranslationRecognitionEventArgs& e)
+ {
+ std::string lidResult = e.Result->Properties.GetProperty(PropertyId::SpeechServiceConnection_AutoDetectSourceLanguageResult);
+
+ cout << "Recognizing in Language = "<< lidResult << ":" << e.Result->Text << std::endl;
+ for (const auto& it : e.Result->Translations)
+ {
+ cout << " Translated into '" << it.first.c_str() << "': " << it.second.c_str() << std::endl;
+ }
+ });
+
+ recognizer->Recognized.Connect([](const TranslationRecognitionEventArgs& e)
+ {
+ if (e.Result->Reason == ResultReason::TranslatedSpeech)
+ {
+ std::string lidResult = e.Result->Properties.GetProperty(PropertyId::SpeechServiceConnection_AutoDetectSourceLanguageResult);
+ cout << "RECOGNIZED in Language = " << lidResult << ": Text=" << e.Result->Text << std::endl;
+ }
+ else if (e.Result->Reason == ResultReason::RecognizedSpeech)
+ {
+ cout << "RECOGNIZED: Text=" << e.Result->Text << " (text could not be translated)" << std::endl;
+ }
+ else if (e.Result->Reason == ResultReason::NoMatch)
+ {
+ cout << "NOMATCH: Speech could not be recognized." << std::endl;
+ }
+
+ for (const auto& it : e.Result->Translations)
+ {
+ cout << " Translated into '" << it.first.c_str() << "': " << it.second.c_str() << std::endl;
+ }
+ });
+
+ recognizer->Canceled.Connect([&recognitionEnd](const TranslationRecognitionCanceledEventArgs& e)
+ {
+ cout << "CANCELED: Reason=" << (int)e.Reason << std::endl;
+ if (e.Reason == CancellationReason::Error)
+ {
+ cout << "CANCELED: ErrorCode=" << (int)e.ErrorCode << std::endl;
+ cout << "CANCELED: ErrorDetails=" << e.ErrorDetails << std::endl;
+ cout << "CANCELED: Did you set the speech resource key and region values?" << std::endl;
+
+ recognitionEnd.set_value();
+ }
+ });
+
+ recognizer->Synthesizing.Connect([](const TranslationSynthesisEventArgs& e)
+ {
+ auto size = e.Result->Audio.size();
+ cout << "Translation synthesis result: size of audio data: " << size
+ << (size == 0 ? "(END)" : "");
+ });
+
+ recognizer->SessionStopped.Connect([&recognitionEnd](const SessionEventArgs& e)
+ {
+ cout << "Session stopped.";
+ recognitionEnd.set_value();
+ });
+
+    // Starts continuous recognition. Use StopContinuousRecognitionAsync() to stop recognition.
+ recognizer->StartContinuousRecognitionAsync().get();
+ recognitionEnd.get_future().get();
+ recognizer->StopContinuousRecognitionAsync().get();
+}
+```
++++
+See more examples of speech translation with language identification on [GitHub](https://github.com/Azure-Samples/cognitive-services-speech-sdk/blob/master/samples/python/console/translation_sample.py).
+
+### [Recognize once](#tab/once)
+
+```python
+import azure.cognitiveservices.speech as speechsdk
+import time
+import json
+
+speech_key, service_region = "YourSubscriptionKey","YourServiceRegion"
+weatherfilename="en-us_zh-cn.wav"
+
+# set up translation parameters: source language and target languages
+# Currently the v2 endpoint is required. In a future SDK release you won't need to set it.
+endpoint_string = "wss://{}.stt.speech.microsoft.com/speech/universal/v2".format(service_region)
+translation_config = speechsdk.translation.SpeechTranslationConfig(
+ subscription=speech_key,
+ endpoint=endpoint_string,
+ speech_recognition_language='en-US',
+ target_languages=('de', 'fr'))
+audio_config = speechsdk.audio.AudioConfig(filename=weatherfilename)
+
+# Specify the AutoDetectSourceLanguageConfig, which defines the number of possible languages
+auto_detect_source_language_config = speechsdk.languageconfig.AutoDetectSourceLanguageConfig(languages=["en-US", "de-DE", "zh-CN"])
+
+# Creates a translation recognizer using an audio file as input.
+recognizer = speechsdk.translation.TranslationRecognizer(
+ translation_config=translation_config,
+ audio_config=audio_config,
+ auto_detect_source_language_config=auto_detect_source_language_config)
+
+# Starts translation, and returns after a single utterance is recognized. The end of a
+# single utterance is determined by listening for silence at the end or until a maximum of 15
+# seconds of audio is processed. The task returns the recognition text as result.
+# Note: Since recognize_once() returns only a single utterance, it is suitable only for single
+# shot recognition like command or query.
+# For long-running multi-utterance recognition, use start_continuous_recognition() instead.
+result = recognizer.recognize_once()
+
+# Check the result
+if result.reason == speechsdk.ResultReason.TranslatedSpeech:
+ print("""Recognized: {}
+ German translation: {}
+ French translation: {}""".format(
+ result.text, result.translations['de'], result.translations['fr']))
+elif result.reason == speechsdk.ResultReason.RecognizedSpeech:
+ print("Recognized: {}".format(result.text))
+ detectedSrcLang = result.properties[speechsdk.PropertyId.SpeechServiceConnection_AutoDetectSourceLanguageResult]
+ print("Detected Language: {}".format(detectedSrcLang))
+elif result.reason == speechsdk.ResultReason.NoMatch:
+ print("No speech could be recognized: {}".format(result.no_match_details))
+elif result.reason == speechsdk.ResultReason.Canceled:
+ print("Translation canceled: {}".format(result.cancellation_details.reason))
+ if result.cancellation_details.reason == speechsdk.CancellationReason.Error:
+ print("Error details: {}".format(result.cancellation_details.error_details))
+```
+
+### [Continuous recognition](#tab/continuous)
+
+```python
+import azure.cognitiveservices.speech as speechsdk
+import time
+import json
+
+speech_key, service_region = "YourSubscriptionKey","YourServiceRegion"
+weatherfilename="en-us_zh-cn.wav"
+
+# Currently the v2 endpoint is required. In a future SDK release you won't need to set it.
+endpoint_string = "wss://{}.stt.speech.microsoft.com/speech/universal/v2".format(service_region)
+translation_config = speechsdk.translation.SpeechTranslationConfig(
+ subscription=speech_key,
+ endpoint=endpoint_string,
+ speech_recognition_language='en-US',
+ target_languages=('de', 'fr'))
+audio_config = speechsdk.audio.AudioConfig(filename=weatherfilename)
+
+# Set the LanguageIdMode (Optional; Either Continuous or AtStart are accepted; Default AtStart)
+translation_config.set_property(property_id=speechsdk.PropertyId.SpeechServiceConnection_LanguageIdMode, value='Continuous')
+
+# Specify the AutoDetectSourceLanguageConfig, which defines the number of possible languages
+auto_detect_source_language_config = speechsdk.languageconfig.AutoDetectSourceLanguageConfig(languages=["en-US", "de-DE", "zh-CN"])
+
+# Creates a translation recognizer using an audio file as input.
+recognizer = speechsdk.translation.TranslationRecognizer(
+ translation_config=translation_config,
+ audio_config=audio_config,
+ auto_detect_source_language_config=auto_detect_source_language_config)
+
+def result_callback(event_type, evt):
+ """callback to display a translation result"""
+ print("{}: {}\n\tTranslations: {}\n\tResult Json: {}".format(
+ event_type, evt, evt.result.translations.items(), evt.result.json))
+
+done = False
+
+def stop_cb(evt):
+ """callback that signals to stop continuous recognition upon receiving an event `evt`"""
+ print('CLOSING on {}'.format(evt))
+    global done
+ done = True
+
+# connect callback functions to the events fired by the recognizer
+recognizer.session_started.connect(lambda evt: print('SESSION STARTED: {}'.format(evt)))
+recognizer.session_stopped.connect(lambda evt: print('SESSION STOPPED {}'.format(evt)))
+# event for intermediate results
+recognizer.recognizing.connect(lambda evt: result_callback('RECOGNIZING', evt))
+# event for final result
+recognizer.recognized.connect(lambda evt: result_callback('RECOGNIZED', evt))
+# cancellation event
+recognizer.canceled.connect(lambda evt: print('CANCELED: {} ({})'.format(evt, evt.reason)))
+
+# stop continuous recognition on either session stopped or canceled events
+recognizer.session_stopped.connect(stop_cb)
+recognizer.canceled.connect(stop_cb)
+
+def synthesis_callback(evt):
+ """
+ callback for the synthesis event
+ """
+ print('SYNTHESIZING {}\n\treceived {} bytes of audio. Reason: {}'.format(
+ evt, len(evt.result.audio), evt.result.reason))
+ if evt.result.reason == speechsdk.ResultReason.RecognizedSpeech:
+ print("RECOGNIZED: {}".format(evt.result.properties))
+        if evt.result.properties.get(speechsdk.PropertyId.SpeechServiceConnection_AutoDetectSourceLanguageResult) is None:
+ print("Unable to detect any language")
+ else:
+ detectedSrcLang = evt.result.properties[speechsdk.PropertyId.SpeechServiceConnection_AutoDetectSourceLanguageResult]
+ jsonResult = evt.result.properties[speechsdk.PropertyId.SpeechServiceResponse_JsonResult]
+ detailResult = json.loads(jsonResult)
+ startOffset = detailResult['Offset']
+ duration = detailResult['Duration']
+ if duration >= 0:
+ endOffset = duration + startOffset
+ else:
+ endOffset = 0
+ print("Detected language = " + detectedSrcLang + ", startOffset = " + str(startOffset) + " nanoseconds, endOffset = " + str(endOffset) + " nanoseconds, Duration = " + str(duration) + " nanoseconds.")
+ global language_detected
+ language_detected = True
+
+# connect callback to the synthesis event
+recognizer.synthesizing.connect(synthesis_callback)
+
+# start translation
+recognizer.start_continuous_recognition()
+
+while not done:
+ time.sleep(.5)
+
+recognizer.stop_continuous_recognition()
+```
++++
+## Run and use a container
+
+Speech containers provide websocket-based query endpoint APIs that are accessed through the Speech SDK and Speech CLI. By default, the Speech SDK and Speech CLI use the public Speech service. To use the container, you need to change the initialization method. Use a container host URL instead of key and region.
+
+When you run language ID in a container, use the `SourceLanguageRecognizer` object instead of `SpeechRecognizer` or `TranslationRecognizer`.
+
+For more information about containers, see the [language identification speech containers](speech-container-lid.md#use-the-container) how-to guide.
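+
+The following C# sketch illustrates this initialization difference. It's a minimal example with assumed values: the container host URL (`ws://localhost:5003`) and the input file name are placeholders, not values documented here.
+
+```csharp
+using System;
+using Microsoft.CognitiveServices.Speech;
+using Microsoft.CognitiveServices.Speech.Audio;
+
+// Use the container host URL instead of a key and region.
+var speechConfig = SpeechConfig.FromHost(new Uri("ws://localhost:5003"));
+
+var autoDetectSourceLanguageConfig =
+    AutoDetectSourceLanguageConfig.FromLanguages(new string[] { "en-US", "de-DE", "zh-CN" });
+
+using var audioConfig = AudioConfig.FromWavFileInput("sample.wav");
+
+// Use SourceLanguageRecognizer for standalone language identification in a container.
+using var recognizer = new SourceLanguageRecognizer(speechConfig, autoDetectSourceLanguageConfig, audioConfig);
+
+var result = await recognizer.RecognizeOnceAsync();
+var detectedLanguage = AutoDetectSourceLanguageResult.FromResult(result).Language;
+Console.WriteLine($"Detected language: {detectedLanguage}");
+```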
++
+## Speech to text batch transcription
+
+To identify languages with the [Batch transcription REST API](batch-transcription.md), use the `languageIdentification` property in the body of your [Transcriptions_Create](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/operations/Transcriptions_Create) request.
+
+> [!WARNING]
+> Batch transcription only supports language identification for base models. If both language identification and a custom model are specified in the transcription request, the service will fall back to use the base models for the specified candidate languages. This may result in unexpected recognition results.
+>
+> If your speech to text scenario requires both language identification and custom models, use [real-time speech to text](#speech-to-text-custom-models) instead of batch transcription.
+
+The following example shows the usage of the `languageIdentification` property with four candidate languages. For more information about request properties, see [Create a batch transcription](batch-transcription-create.md#request-configuration-options).
+
+```json
+{
+ <...>
+
+ "properties": {
+ <...>
+
+ "languageIdentification": {
+ "candidateLocales": [
+ "en-US",
+ "ja-JP",
+ "zh-CN",
+ "hi-IN"
+ ]
+ },
+ <...>
+ }
+}
+```
+
+## Next steps
+
+* [Try the speech to text quickstart](get-started-speech-to-text.md)
+* [Improve recognition accuracy with custom speech](custom-speech-overview.md)
+* [Use batch transcription](batch-transcription.md)
ai-services Language Learning Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/language-learning-overview.md
+
+ Title: Language learning with Azure AI Speech
+
+description: Azure AI services for Speech can be used to learn languages.
++++++ Last updated : 02/23/2023+++
+# Language learning with Azure AI Speech
+
+The Azure AI service for Speech platform is a comprehensive collection of technologies and services aimed at accelerating the incorporation of speech into applications. Azure AI services for Speech can be used to learn languages.
++
+## Pronunciation Assessment
+
+The [Pronunciation Assessment](pronunciation-assessment-tool.md) feature is designed to provide instant feedback to users on the accuracy, fluency, and prosody of their speech when learning a new language, so that they can speak and present in a new language with confidence. For information about availability of pronunciation assessment, see [supported languages](language-support.md?tabs=pronunciation-assessment) and [available regions](regions.md#speech-service).
+
+The Pronunciation Assessment feature offers several benefits for educators, service providers, and students.
+- For educators, it provides instant feedback, eliminates the need for time-consuming oral language assessments, and offers consistent and comprehensive assessments.
+- For service providers, it offers high real-time capability and the worldwide availability of Azure AI Speech, and it supports a growing global business.
+- For students and learners, it provides a convenient way to practice and receive feedback, authoritative scoring to compare with native pronunciation, and help in following the exact text order for long sentences or full documents.
+
+## Speech to text
+
+Azure [Speech to text](speech-to-text.md) supports real-time language identification for multilingual language learning scenarios, helping human-to-human interaction with better understanding and readable context.
+
+## Text to speech
+
+[Text to speech](text-to-speech.md) prebuilt neural voices can read out learning materials natively and empower self-served learning. A broad portfolio of [languages and voices](language-support.md?tabs=tts) is supported for AI teacher, content read-aloud capabilities, and more. Microsoft is continuously working on bringing new languages to the world.
+
+[Custom Neural Voice](custom-neural-voice.md) is available for you to create a customized synthetic voice for your applications. Education companies are using this technology to personalize language learning, by creating unique characters with distinct voices that match the culture and background of their target audience.
+
+## Next steps
+
+* [How to use pronunciation assessment](how-to-pronunciation-assessment.md)
+* [What is Speech to text](speech-to-text.md)
+* [What is Text to speech](text-to-speech.md)
+* [What is Custom Neural Voice](custom-neural-voice.md)
ai-services Language Support https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/language-support.md
+
+ Title: Language support - Speech service
+
+description: The Speech service supports numerous languages for speech to text and text to speech conversion, along with speech translation. This article provides a comprehensive list of language support by service feature.
++++++ Last updated : 01/12/2023++++
+# Language and voice support for the Speech service
+
+The following tables summarize language support for [speech to text](speech-to-text.md), [text to speech](text-to-speech.md), [pronunciation assessment](how-to-pronunciation-assessment.md), [speech translation](speech-translation.md), [speaker recognition](speaker-recognition-overview.md), and additional service features.
+
+You can also get a list of locales and voices supported for each specific region or endpoint through the [Speech SDK](speech-sdk.md), [Speech to text REST API](rest-speech-to-text.md), [Speech to text REST API for short audio](rest-speech-to-text-short.md) and [Text to speech REST API](rest-text-to-speech.md#get-a-list-of-voices).
+
+## Supported languages
+
+Language support varies by Speech service functionality.
+
+> [!NOTE]
+> See [Speech Containers](speech-container-overview.md#available-speech-containers) and [Embedded Speech](embedded-speech.md#models-and-voices) separately for their supported languages.
+
+**Choose a Speech feature**
+
+# [Speech to text](#tab/stt)
+
+The table in this section summarizes the locales supported for Speech to text. See the table footnotes for more details.
+
+Additional remarks for Speech to text locales are included in the [Custom Speech](#custom-speech) section below.
+
+> [!TIP]
+> Try out the [Real-time Speech to text tool](https://speech.microsoft.com/portal/speechtotexttool) without having to use any code.
++
+### Custom Speech
+
+To improve Speech to text recognition accuracy, customization is available for some languages and base models. Depending on the locale, you can upload audio + human-labeled transcripts, plain text, structured text, and pronunciation data. By default, plain text customization is supported for all available base models. To learn more about customization, see [Custom Speech](./custom-speech-overview.md).
+
+# [Text to speech](#tab/tts)
+
+The table in this section summarizes the locales and voices supported for Text to speech. See the table footnotes for more details.
+
+Additional remarks for Text to speech locales are included in the [Voice styles and roles](#voice-styles-and-roles), [Prebuilt neural voices](#prebuilt-neural-voices), and [Custom Neural Voice](#custom-neural-voice) sections below.
+
+> [!TIP]
+> Check the [Voice Gallery](https://speech.microsoft.com/portal/voicegallery) and determine the right voice for your business needs.
++
+### Voice styles and roles
+
+In some cases, you can adjust the speaking style to express different emotions like cheerfulness, empathy, and calm. All prebuilt voices with speaking styles and multi-style custom voices support style degree adjustment. You can optimize the voice for different scenarios like customer service, newscast, and voice assistant. With roles, the same voice can act as a different age and gender.
+
+To learn how you can configure and adjust neural voice styles and roles, see [Speech Synthesis Markup Language](speech-synthesis-markup-voice.md#speaking-styles-and-roles).
+
+Use the following table to determine supported styles and roles for each neural voice.
++
+### Viseme
+
+This table lists all the locales supported for [Viseme](speech-synthesis-markup-structure.md#viseme-element). For more information about Viseme, see [Get facial position with viseme](how-to-speech-synthesis-viseme.md) and [Viseme element](speech-synthesis-markup-structure.md#viseme-element).
++
+### Prebuilt neural voices
+
+Each prebuilt neural voice supports a specific language and dialect, identified by locale. You can try the demo and hear the voices in the [Voice Gallery](https://speech.microsoft.com/portal/voicegallery).
+
+> [!IMPORTANT]
+> Pricing varies for Prebuilt Neural Voice (see *Neural* on the pricing page) and Custom Neural Voice (see *Custom Neural* on the pricing page). For more information, see the [Pricing](https://azure.microsoft.com/pricing/details/cognitive-services/speech-services/) page.
+
+Each prebuilt neural voice model is available at 24kHz and high-fidelity 48kHz. Other sample rates can be obtained through upsampling or downsampling when synthesizing.
+
+The following neural voices are retired.
+
+- The English (United Kingdom) voice `en-GB-MiaNeural` retired on October 30, 2021. All service requests to `en-GB-MiaNeural` will be redirected to `en-GB-SoniaNeural` automatically as of October 30, 2021. If you're using container Neural TTS, [download](speech-container-ntts.md#get-the-container-image-with-docker-pull) and deploy the latest version. All requests with previous versions won't succeed starting from October 30, 2021.
+- The `en-US-JessaNeural` voice is retired and replaced by `en-US-AriaNeural`. If you were using "Jessa" before, convert to "Aria."
+
+### Custom Neural Voice
+
+Custom Neural Voice lets you create synthetic voices that are rich in speaking styles. You can create a unique brand voice in multiple languages and styles by using a small set of recording data. Multi-style custom neural voices support style degree adjustment. There are two Custom Neural Voice (CNV) project types: CNV Pro and CNV Lite (preview).
+
+Select the right locale that matches your training data to train a custom neural voice model. For example, if the recording data is spoken in English with a British accent, select `en-GB`.
+
+With the cross-lingual feature, you can transfer your custom neural voice model to speak a second language. For example, with the `zh-CN` data, you can create a voice that speaks `en-AU` or any of the languages with Cross-lingual support.
++
+# [Pronunciation assessment](#tab/pronunciation-assessment)
+
+The table in this section summarizes the 19 locales supported for pronunciation assessment, and each language is available in all [Speech to text regions](regions.md#speech-service). The latest update extends support from English to 18 additional languages and brings quality enhancements to existing features, including accuracy, fluency, and miscue assessment. Specify the language that you're learning or practicing pronouncing. The default language is set as `en-US`. If you know your target learning language, [set the locale](how-to-pronunciation-assessment.md#get-pronunciation-assessment-results) accordingly. For example, if you're learning British English, specify the language as `en-GB`. If you're teaching a broader language, such as Spanish, and are uncertain about which locale to select, you can run the various accent models (`es-ES`, `es-MX`) to determine which one achieves the highest score for your scenario.
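+
+For example, here's a minimal Python sketch of selecting the assessment locale with the Speech SDK. The resource key, region, and reference text are placeholder assumptions, and the grading options shown are only one possible configuration.
+
+```python
+import azure.cognitiveservices.speech as speechsdk
+
+# Placeholder assumptions: replace with your Speech resource key and region.
+speech_config = speechsdk.SpeechConfig(subscription="YourSubscriptionKey", region="YourServiceRegion")
+# Set the locale that matches the language being learned, for example British English.
+speech_config.speech_recognition_language = "en-GB"
+
+pronunciation_config = speechsdk.PronunciationAssessmentConfig(
+    reference_text="Good morning.",
+    grading_system=speechsdk.PronunciationAssessmentGradingSystem.HundredMark,
+    granularity=speechsdk.PronunciationAssessmentGranularity.Phoneme)
+
+recognizer = speechsdk.SpeechRecognizer(speech_config=speech_config)
+pronunciation_config.apply_to(recognizer)
+
+result = recognizer.recognize_once()
+assessment = speechsdk.PronunciationAssessmentResult(result)
+print(assessment.accuracy_score)
+```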
++
+# [Speech translation](#tab/speech-translation)
+
+The table in this section summarizes the locales supported for Speech translation. Speech translation supports different languages for speech to speech and speech to text translation. The available target languages depend on whether the translation target is speech or text.
+
+#### Translate from language
+
+To set the input speech recognition language, specify the full locale with a dash (`-`) separator. See the [speech to text language table](?tabs=stt#supported-languages). The default language is `en-US` if you don't specify a language.
+
+#### Translate to text language
+
+To set the translation target language, in most cases you specify only the language code that precedes the locale dash (`-`) separator. For example, use `es` for Spanish (Spain) instead of `es-ES`. See the speech translation target language table below. The default language is `en` if you don't specify a language.
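+
+To make the two locale formats concrete, here's a minimal Python sketch using the Speech SDK. The key, region, and language choices are placeholder assumptions.
+
+```python
+import azure.cognitiveservices.speech as speechsdk
+
+# Placeholder assumptions: replace with your Speech resource key and region.
+translation_config = speechsdk.translation.SpeechTranslationConfig(
+    subscription="YourSubscriptionKey", region="YourServiceRegion")
+
+# Translate from: the full locale, with a dash separator.
+translation_config.speech_recognition_language = "en-US"
+
+# Translate to text: only the language code before the dash (for example, "es" rather than "es-ES").
+translation_config.add_target_language("es")
+
+recognizer = speechsdk.translation.TranslationRecognizer(translation_config=translation_config)
+result = recognizer.recognize_once()
+print(result.translations["es"])
+```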
++
+# [Language identification](#tab/language-identification)
+
+The table in this section summarizes the locales supported for [Language identification](language-identification.md).
+> [!NOTE]
+> Language Identification compares speech at the language level, such as English and German. Do not include multiple locales of the same language in your candidate list.
++
+# [Speaker recognition](#tab/speaker-recognition)
+
+The table in this section summarizes the locales supported for Speaker recognition. Speaker recognition is mostly language agnostic. The universal model for text-independent speaker recognition combines various data sources from multiple languages. We've tuned and evaluated the model on these languages and locales. For more information on speaker recognition, see the [overview](speaker-recognition-overview.md).
++
+# [Custom keyword](#tab/custom-keyword)
+
+The table in this section summarizes the locales supported for custom keyword and keyword verification.
++
+# [Intent Recognition](#tab/intent-recognizer-pattern-matcher)
+
+The table in this section summarizes the locales supported for the Intent Recognizer Pattern Matcher.
++
+***
+
+## Next steps
+
+* [Region support](regions.md)
ai-services Logging Audio Transcription https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/logging-audio-transcription.md
+
+ Title: How to log audio and transcriptions for speech recognition
+
+description: Learn how to use audio and transcription logging for speech to text and speech translation.
+++++++ Last updated : 03/28/2023+
+zone_pivot_groups: programming-languages-speech-services-nomore-variant
++
+# How to log audio and transcriptions for speech recognition
+
+You can enable logging for both audio input and recognized speech when using [speech to text](get-started-speech-to-text.md) or [speech translation](get-started-speech-translation.md). For speech translation, only the audio and transcription of the original audio are logged. The translations aren't logged. This article describes how to enable, access, and delete the audio and transcription logs.
+
+Audio and transcription logs can be used as input for [Custom Speech](custom-speech-overview.md) model training. You might have other use cases.
+
+> [!WARNING]
+> Don't depend on audio and transcription logs when an exact record of the input audio is required. During periods of peak load, the service prioritizes hardware resources for transcription tasks, which can result in minor parts of the audio not being logged. Such occasions are rare but possible.
+
+Logging is done asynchronously for both base and custom model endpoints. Audio and transcription logs are stored by the Speech service and not written locally. The logs are retained for 30 days. After this period, the logs are automatically deleted. However, you can [delete](#delete-audio-and-transcription-logs) specific logs or a range of available logs at any time.
+
+## Enable audio and transcription logging
+
+Logging is disabled by default. Logging can be enabled [per recognition session](#enable-logging-for-a-single-recognition-session) or [per custom model endpoint](#enable-audio-and-transcription-logging-for-a-custom-model-endpoint).
+
+### Enable logging for a single recognition session
+
+You can enable logging for a single recognition session, whether using the default base model or [custom model](how-to-custom-speech-deploy-model.md) endpoint.
+
+> [!WARNING]
+> For custom model endpoints, the logging setting of your deployed endpoint is prioritized over your session-level setting (SDK or REST API). If logging is enabled for the custom model endpoint, the session-level setting (whether it's set to true or false) is ignored. If logging isn't enabled for the custom model endpoint, the session-level setting determines whether logging is active.
+
+#### Enable logging for speech to text with the Speech SDK
++
+To enable audio and transcription logging with the Speech SDK, you execute the method `EnableAudioLogging()` of the [SpeechConfig](/dotnet/api/microsoft.cognitiveservices.speech.speechconfig) class instance.
+
+```csharp
+speechConfig.EnableAudioLogging();
+```
+
+To check whether logging is enabled, get the value of the `SpeechServiceConnection_EnableAudioLogging` [property](/dotnet/api/microsoft.cognitiveservices.speech.propertyid):
+
+```csharp
+string isAudioLoggingEnabled = speechConfig.GetProperty(PropertyId.SpeechServiceConnection_EnableAudioLogging);
+```
+
+Each [SpeechRecognizer](/dotnet/api/microsoft.cognitiveservices.speech.speechrecognizer) that uses this `speechConfig` has audio and transcription logging enabled.
++
+To enable audio and transcription logging with the Speech SDK, you execute the method `EnableAudioLogging` of the [SpeechConfig](/cpp/cognitive-services/speech/speechconfig) class instance.
+
+```cpp
+speechConfig->EnableAudioLogging();
+```
+
+To check whether logging is enabled, get the value of the `SpeechServiceConnection_EnableAudioLogging` property:
+
+```cpp
+auto isAudioLoggingEnabled = speechConfig->GetProperty(PropertyId::SpeechServiceConnection_EnableAudioLogging);
+```
+
+Each [SpeechRecognizer](/cpp/cognitive-services/speech/speechrecognizer) that uses this `speechConfig` has audio and transcription logging enabled.
++
+To enable audio and transcription logging with the Speech SDK, you execute the method `enableAudioLogging()` of the [SpeechConfig](/java/api/com.microsoft.cognitiveservices.speech.speechconfig) class instance.
+
+```java
+speechConfig.enableAudioLogging();
+```
+
+To check whether logging is enabled, get the value of the `SpeechServiceConnection_EnableAudioLogging` [property](/java/api/com.microsoft.cognitiveservices.speech.propertyid):
+
+```java
+String isAudioLoggingEnabled = speechConfig.getProperty(PropertyId.SpeechServiceConnection_EnableAudioLogging);
+```
+
+Each [SpeechRecognizer](/java/api/com.microsoft.cognitiveservices.speech.speechrecognizer) that uses this `speechConfig` has audio and transcription logging enabled.
++
+To enable audio and transcription logging with the Speech SDK, you execute the method `enableAudioLogging()` of the [SpeechConfig](/javascript/api/microsoft-cognitiveservices-speech-sdk/speechconfig) class instance.
+
+```javascript
+speechConfig.enableAudioLogging();
+```
+
+To check whether logging is enabled, get the value of the `SpeechServiceConnection_EnableAudioLogging` [property](/javascript/api/microsoft-cognitiveservices-speech-sdk/propertyid):
+
+```javascript
+var SpeechSDK;
+SpeechSDK = speechSdk;
+// <...>
+var isAudioLoggingEnabled = speechConfig.getProperty(SpeechSDK.PropertyId.SpeechServiceConnection_EnableAudioLogging);
+```
+
+Each [SpeechRecognizer](/javascript/api/microsoft-cognitiveservices-speech-sdk/speechrecognizer) that uses this `speechConfig` has audio and transcription logging enabled.
++
+To enable audio and transcription logging with the Speech SDK, you execute the method `enable_audio_logging` of the [SpeechConfig](/python/api/azure-cognitiveservices-speech/azure.cognitiveservices.speech.speechconfig) class instance.
+
+```python
+speech_config.enable_audio_logging()
+```
+
+To check whether logging is enabled, get the value of the `SpeechServiceConnection_EnableAudioLogging` [property](/python/api/azure-cognitiveservices-speech/azure.cognitiveservices.speech.propertyid):
+
+```python
+import azure.cognitiveservices.speech as speechsdk
+# <...>
+is_audio_logging_enabled = speech_config.get_property(property_id=speechsdk.PropertyId.SpeechServiceConnection_EnableAudioLogging)
+```
+
+Each [SpeechRecognizer](/python/api/azure-cognitiveservices-speech/azure.cognitiveservices.speech.speechrecognizer) that uses this `speech_config` has audio and transcription logging enabled.
++
+To enable audio and transcription logging with the Speech SDK, you execute the method `enableAudioLogging` of the [SPXSpeechConfiguration](/objectivec/cognitive-services/speech/spxspeechconfiguration) class instance.
+
+```objectivec
+[speechConfig enableAudioLogging];
+```
+
+To check whether logging is enabled, get the value of the `SPXSpeechServiceConnectionEnableAudioLogging` [property](/objectivec/cognitive-services/speech/spxpropertyid):
+
+```objectivec
+NSString *isAudioLoggingEnabled = [speechConfig getPropertyById:SPXSpeechServiceConnectionEnableAudioLogging];
+```
+
+Each [SpeechRecognizer](/objectivec/cognitive-services/speech/spxspeechrecognizer) that uses this `speechConfig` has audio and transcription logging enabled.
++
+#### Enable logging for speech translation with the Speech SDK
+
+For speech translation, only the audio and transcription of the original audio are logged. The translations aren't logged.
++
+To enable audio and transcription logging with the Speech SDK, you execute the method `EnableAudioLogging()` of the [SpeechTranslationConfig](/dotnet/api/microsoft.cognitiveservices.speech.speechtranslationconfig) class instance.
+
+```csharp
+speechTranslationConfig.EnableAudioLogging();
+```
+
+To check whether logging is enabled, get the value of the `SpeechServiceConnection_EnableAudioLogging` [property](/dotnet/api/microsoft.cognitiveservices.speech.propertyid):
+
+```csharp
+string isAudioLoggingEnabled = speechTranslationConfig.GetProperty(PropertyId.SpeechServiceConnection_EnableAudioLogging);
+```
+
+Each [TranslationRecognizer](/dotnet/api/microsoft.cognitiveservices.speech.translation.translationrecognizer) that uses this `speechTranslationConfig` has audio and transcription logging enabled.
++
+To enable audio and transcription logging with the Speech SDK, you execute the method `EnableAudioLogging` of the [SpeechTranslationConfig](/cpp/cognitive-services/speech/translation-speechtranslationconfig) class instance.
+
+```cpp
+speechTranslationConfig->EnableAudioLogging();
+```
+
+To check whether logging is enabled, get the value of the `SpeechServiceConnection_EnableAudioLogging` property:
+
+```cpp
+auto isAudioLoggingEnabled = speechTranslationConfig->GetProperty(PropertyId::SpeechServiceConnection_EnableAudioLogging);
+```
+
+Each [TranslationRecognizer](/cpp/cognitive-services/speech/translation-translationrecognizer) that uses this `speechTranslationConfig` has audio and transcription logging enabled.
++
+To enable audio and transcription logging with the Speech SDK, you execute the method `enableAudioLogging()` of the [SpeechTranslationConfig](/java/api/com.microsoft.cognitiveservices.speech.translation.speechtranslationconfig) class instance.
+
+```java
+speechTranslationConfig.enableAudioLogging();
+```
+
+To check whether logging is enabled, get the value of the `SpeechServiceConnection_EnableAudioLogging` [property](/java/api/com.microsoft.cognitiveservices.speech.propertyid):
+
+```java
+String isAudioLoggingEnabled = speechTranslationConfig.getProperty(PropertyId.SpeechServiceConnection_EnableAudioLogging);
+```
+
+Each [TranslationRecognizer](/java/api/com.microsoft.cognitiveservices.speech.translation.translationrecognizer) that uses this `speechTranslationConfig` has audio and transcription logging enabled.
++
+To enable audio and transcription logging with the Speech SDK, you execute the method `enableAudioLogging()` of the [SpeechTranslationConfig](/javascript/api/microsoft-cognitiveservices-speech-sdk/speechtranslationconfig) class instance.
+
+```javascript
+speechTranslationConfig.enableAudioLogging();
+```
+
+To check whether logging is enabled, get the value of the `SpeechServiceConnection_EnableAudioLogging` [property](/javascript/api/microsoft-cognitiveservices-speech-sdk/propertyid):
+
+```javascript
+var SpeechSDK;
+SpeechSDK = speechSdk;
+// <...>
+var isAudioLoggingEnabled = speechTranslationConfig.getProperty(SpeechSDK.PropertyId.SpeechServiceConnection_EnableAudioLogging);
+```
+
+Each [TranslationRecognizer](/javascript/api/microsoft-cognitiveservices-speech-sdk/translationrecognizer) that uses this `speechTranslationConfig` has audio and transcription logging enabled.
++
+To enable audio and transcription logging with the Speech SDK, you execute the method `enable_audio_logging` of the [SpeechTranslationConfig](/python/api/azure-cognitiveservices-speech/azure.cognitiveservices.speech.translation.speechtranslationconfig) class instance.
+
+```python
+speech_translation_config.enable_audio_logging()
+```
+
+To check whether logging is enabled, get the value of the `SpeechServiceConnection_EnableAudioLogging` [property](/python/api/azure-cognitiveservices-speech/azure.cognitiveservices.speech.propertyid):
+
+```python
+import azure.cognitiveservices.speech as speechsdk
+# <...>
+is_audio_logging_enabled = speech_translation_config.get_property(property_id=speechsdk.PropertyId.SpeechServiceConnection_EnableAudioLogging)
+```
+
+Each [TranslationRecognizer](/python/api/azure-cognitiveservices-speech/azure.cognitiveservices.speech.translation.translationrecognizer) that uses this `speech_translation_config` has audio and transcription logging enabled.
++
+To enable audio and transcription logging with the Speech SDK, you execute the method `enableAudioLogging` of the [SPXSpeechTranslationConfiguration](/objectivec/cognitive-services/speech/spxspeechtranslationconfiguration) class instance.
+
+```objectivec
+[speechTranslationConfig enableAudioLogging];
+```
+
+To check whether logging is enabled, get the value of the `SPXSpeechServiceConnectionEnableAudioLogging` [property](/objectivec/cognitive-services/speech/spxpropertyid):
+
+```objectivec
+NSString *isAudioLoggingEnabled = [speechTranslationConfig getPropertyById:SPXSpeechServiceConnectionEnableAudioLogging];
+```
+
+Each [TranslationRecognizer](/objectivec/cognitive-services/speech/spxtranslationrecognizer) that uses this `speechTranslationConfig` has audio and transcription logging enabled.
++
+#### Enable logging for Speech to text REST API for short audio
+
+If you use [Speech to text REST API for short audio](rest-speech-to-text-short.md) and want to enable audio and transcription logging, you need to use the query parameter and value `storeAudio=true` as a part of your REST request. A sample request looks like this:
+
+```http
+https://eastus.stt.speech.microsoft.com/speech/recognition/conversation/cognitiveservices/v1?language=en-US&storeAudio=true
+```
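+
+For reference, here's a minimal Python sketch that sends the same request with `storeAudio=true`. The key and the audio file name are placeholder assumptions, and a 16 kHz, 16-bit mono PCM WAV file is assumed.
+
+```python
+import requests
+
+url = "https://eastus.stt.speech.microsoft.com/speech/recognition/conversation/cognitiveservices/v1"
+params = {"language": "en-US", "storeAudio": "true"}
+headers = {
+    "Ocp-Apim-Subscription-Key": "YourSubscriptionKey",  # placeholder
+    "Content-Type": "audio/wav; codecs=audio/pcm; samplerate=16000",
+    "Accept": "application/json",
+}
+
+# Stream the audio file as the request body.
+with open("sample.wav", "rb") as audio_file:
+    response = requests.post(url, params=params, headers=headers, data=audio_file)
+
+print(response.json())
+```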
+
+### Enable audio and transcription logging for a custom model endpoint
+
+This method is applicable for [Custom Speech](custom-speech-overview.md) endpoints only.
+
+Logging can be enabled or disabled in the persistent custom model endpoint settings. When logging is enabled for a custom model endpoint, you don't need to enable logging at the [recognition session level with the SDK or REST API](#enable-logging-for-a-single-recognition-session). Even when logging isn't enabled for a custom model endpoint, you can enable logging temporarily at the recognition session level with the SDK or REST API.
+
+> [!WARNING]
+> For custom model endpoints, the logging setting of your deployed endpoint is prioritized over your session-level setting (SDK or REST API). If logging is enabled for the custom model endpoint, the session-level setting (whether it's set to true or false) is ignored. If logging isn't enabled for the custom model endpoint, the session-level setting determines whether logging is active.
+
+You can enable audio and transcription logging for a custom model endpoint:
+- When you create the endpoint using the Speech Studio, REST API, or Speech CLI. For details about how to enable logging for a Custom Speech endpoint, see [Deploy a Custom Speech model](how-to-custom-speech-deploy-model.md#add-a-deployment-endpoint).
+- When you update the endpoint ([Endpoints_Update](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/operations/Endpoints_Update)) using the [Speech to text REST API](rest-speech-to-text.md). For an example of how to update the logging setting for an endpoint, see [Turn off logging for a custom model endpoint](#turn-off-logging-for-a-custom-model-endpoint). But instead of setting the `contentLoggingEnabled` property to `false`, set it to `true` to enable logging for the endpoint.
+
+## Turn off logging for a custom model endpoint
+
+To disable audio and transcription logging for a custom model endpoint, you must update the persistent endpoint logging setting using the [Speech to text REST API](rest-speech-to-text.md). There isn't a way to disable logging for an existing custom model endpoint using the Speech Studio.
+
+To turn off logging for a custom endpoint, use the [Endpoints_Update](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/operations/Endpoints_Update) operation of the [Speech to text REST API](rest-speech-to-text.md). Construct the request body according to the following instructions:
+
+- Set the `contentLoggingEnabled` property within `properties`. Set this property to `true` to enable logging of the endpoint's traffic. Set this property to `false` to disable logging of the endpoint's traffic.
+
+Make an HTTP PATCH request using the URI as shown in the following example. Replace `YourSubscriptionKey` with your Speech resource key, replace `YourServiceRegion` with your Speech resource region, replace `YourEndpointId` with your endpoint ID, and set the request body properties as previously described.
+
+```azurecli-interactive
+curl -v -X PATCH -H "Ocp-Apim-Subscription-Key: YourSubscriptionKey" -H "Content-Type: application/json" -d '{
+ "properties": {
+ "contentLoggingEnabled": false
+ }
+}' "https://YourServiceRegion.api.cognitive.microsoft.com/speechtotext/v3.1/endpoints/YourEndpointId"
+```
+
+You should receive a response body in the following format:
+
+```json
+{
+ "self": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.1/endpoints/4ef91f9b-7ac9-4c3b-a238-581ef0f8b7e2",
+ "model": {
+ "self": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.1/models/71b46720-995d-4038-a331-0317e9e7a02f"
+ },
+ "links": {
+ "logs": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.1/endpoints/4ef91f9b-7ac9-4c3b-a238-581ef0f8b7e2/files/logs",
+ "restInteractive": "https://eastus.stt.speech.microsoft.com/speech/recognition/interactive/cognitiveservices/v1?cid=4ef91f9b-7ac9-4c3b-a238-581ef0f8b7e2",
+ "restConversation": "https://eastus.stt.speech.microsoft.com/speech/recognition/conversation/cognitiveservices/v1?cid=4ef91f9b-7ac9-4c3b-a238-581ef0f8b7e2",
+ "restDictation": "https://eastus.stt.speech.microsoft.com/speech/recognition/dictation/cognitiveservices/v1?cid=4ef91f9b-7ac9-4c3b-a238-581ef0f8b7e2",
+ "webSocketInteractive": "wss://eastus.stt.speech.microsoft.com/speech/recognition/interactive/cognitiveservices/v1?cid=4ef91f9b-7ac9-4c3b-a238-581ef0f8b7e2",
+ "webSocketConversation": "wss://eastus.stt.speech.microsoft.com/speech/recognition/conversation/cognitiveservices/v1?cid=4ef91f9b-7ac9-4c3b-a238-581ef0f8b7e2",
+ "webSocketDictation": "wss://eastus.stt.speech.microsoft.com/speech/recognition/dictation/cognitiveservices/v1?cid=4ef91f9b-7ac9-4c3b-a238-581ef0f8b7e2"
+ },
+ "project": {
+ "self": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.1/projects/122fd2f7-1d3a-4404-885d-2b24a2a187e8"
+ },
+ "properties": {
+ "loggingEnabled": false
+ },
+ "lastActionDateTime": "2023-03-28T23:03:15Z",
+ "status": "Succeeded",
+ "createdDateTime": "2023-03-28T23:02:40Z",
+ "locale": "en-US",
+ "displayName": "My Endpoint",
+ "description": "My Endpoint Description"
+}
+```
+
+The response body should reflect the new setting. The name of the logging property in the response (`loggingEnabled`) is different from the name of the logging property that you set in the request (`contentLoggingEnabled`).
+
+## Get audio and transcription logs
+
+You can access audio and transcription logs using [Speech to text REST API](#get-audio-and-transcription-logs-with-speech-to-text-rest-api). For [custom model](how-to-custom-speech-deploy-model.md) endpoints, you can also use [Speech Studio](#get-audio-and-transcription-logs-with-speech-studio). See details in the following sections.
+
+> [!NOTE]
+> Logging data is kept for 30 days. After this period, the logs are automatically deleted. However, you can [delete](#delete-audio-and-transcription-logs) specific logs or a range of available logs at any time.
+
+### Get audio and transcription logs with Speech Studio
+
+This method is applicable for [custom model](how-to-custom-speech-deploy-model.md) endpoints only.
+
+To download the endpoint logs:
+
+1. Sign in to the [Speech Studio](https://aka.ms/speechstudio/customspeech).
+1. Select **Custom Speech** > Your project name > **Deploy models**.
+1. Select the link by endpoint name.
+1. Under **Content logging**, select **Download log**.
+
+With this approach, you can download all available log sets at once. There's no way to download selected log sets in Speech Studio.
+
+### Get audio and transcription logs with Speech to text REST API
+
+You can download all or a subset of available log sets.
+
+This method is applicable for base and [custom model](how-to-custom-speech-deploy-model.md) endpoints. To list and download audio and transcription logs:
+- Base models: Use the [Endpoints_ListBaseModelLogs](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/operations/Endpoints_ListBaseModelLogs) operation of the [Speech to text REST API](rest-speech-to-text.md). This operation gets the list of audio and transcription logs that have been stored when using the default base model of a given language.
+- Custom model endpoints: Use the [Endpoints_ListLogs](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/operations/Endpoints_ListLogs) operation of the [Speech to text REST API](rest-speech-to-text.md). This operation gets the list of audio and transcription logs that have been stored for a given endpoint.
+
+### Get log IDs with Speech to text REST API
+
+In some scenarios, you may need to get IDs of the available logs. For example, you may want to delete a specific log as described [later in this article](#delete-specific-log).
+
+To get IDs of the available logs:
+- Base models: Use the [Endpoints_ListBaseModelLogs](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/operations/Endpoints_ListBaseModelLogs) operation of the [Speech to text REST API](rest-speech-to-text.md). This operation gets the list of audio and transcription logs that have been stored when using the default base model of a given language.
+- Custom model endpoints: Use the [Endpoints_ListLogs](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/operations/Endpoints_ListLogs) operation of the [Speech to text REST API](rest-speech-to-text.md). This operation gets the list of audio and transcription logs that have been stored for a given endpoint.
+
+Here's a sample output of [Endpoints_ListLogs](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/operations/Endpoints_ListLogs). For simplicity, only one log set is shown:
+
+```json
+{
+ "values": [
+ {
+ "self": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.1/endpoints/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx/files/logs/2023-03-13_163715__0420c53d-e6ac-4857-bce0-f39c3f9f5ff9_v2_json",
+ "name": "163715__0420c53d-e6ac-4857-bce0-f39c3f9f5ff9.v2.json",
+ "kind": "Transcription",
+ "properties": {
+ "size": 79920
+ },
+ "createdDateTime": "2023-03-13T16:37:15Z",
+ "links": {
+ "contentUrl": "<Link to download log file>"
+ }
+ },
+ {
+ "self": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.1/endpoints/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx/files/logs/2023-03-13_163715__0420c53d-e6ac-4857-bce0-f39c3f9f5ff9_wav",
+ "name": "163715__0420c53d-e6ac-4857-bce0-f39c3f9f5ff9.wav",
+ "kind": "Audio",
+ "properties": {
+ "size": 932966
+ },
+ "createdDateTime": "2023-03-13T16:37:15Z",
+ "links": {
+ "contentUrl": "<Link to to download log file>"
+ }
+ }
+ ]
+}
+```
+
+The locations of each audio and transcription log file are returned in the response body. See the corresponding `kind` property to determine whether the file includes the audio (`"kind": "Audio"`) or the transcription (`"kind": "Transcription"`).
+
+The log ID for each log file is the last part of the URL in the `"self"` element value. The log ID in the following example is `2023-03-13_163715__0420c53d-e6ac-4857-bce0-f39c3f9f5ff9_v2_json`.
+
+```json
+"self": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.1/endpoints/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx/files/logs/2023-03-13_163715__0420c53d-e6ac-4857-bce0-f39c3f9f5ff9_v2_json"
+```
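+
+In code, the log ID is simply the last URL segment. Here's a short Python sketch using the example `self` value above.
+
+```python
+# Extract the log ID from a log file's "self" URL (example value from above).
+self_url = "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.1/endpoints/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx/files/logs/2023-03-13_163715__0420c53d-e6ac-4857-bce0-f39c3f9f5ff9_v2_json"
+log_id = self_url.rstrip("/").split("/")[-1]
+print(log_id)  # 2023-03-13_163715__0420c53d-e6ac-4857-bce0-f39c3f9f5ff9_v2_json
+```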
+
+## Delete audio and transcription logs
+
+Logging data is kept for 30 days. After this period, the logs are automatically deleted. However, you can delete specific logs or a range of available logs at any time.
+
+For any base or [custom model](how-to-custom-speech-deploy-model.md) endpoint you can delete all available logs, logs for a given time frame, or a particular log based on its Log ID. The deletion process is done asynchronously and can take minutes, hours, one day, or longer depending on the number of log files.
+
+To delete audio and transcription logs you must use the [Speech to text REST API](rest-speech-to-text.md). There isn't a way to delete logs using the Speech Studio.
+
+### Delete all logs or logs for a given time frame
+
+To delete all logs or logs for a given time frame:
+
+- Base models: Use the [Endpoints_DeleteBaseModelLogs](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/operations/Endpoints_DeleteBaseModelLogs) operation of the [Speech to text REST API](rest-speech-to-text.md).
+- Custom model endpoints: Use the [Endpoints_DeleteLogs](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/operations/Endpoints_DeleteLogs) operation of the [Speech to text REST API](rest-speech-to-text.md).
+
+Optionally, set the `endDate` of the audio logs deletion (a specific day, UTC). The expected format is "yyyy-mm-dd". For instance, "2023-03-15" deletes all logs from March 15, 2023 and earlier.
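+
+Here's a rough Python sketch of such a request for a custom model endpoint. The key, region, endpoint ID, and date are placeholder assumptions, and the logs path mirrors the `links.logs` URL shown earlier in this article.
+
+```python
+import requests
+
+logs_url = ("https://YourServiceRegion.api.cognitive.microsoft.com/speechtotext/v3.1/"
+            "endpoints/YourEndpointId/files/logs")
+headers = {"Ocp-Apim-Subscription-Key": "YourSubscriptionKey"}  # placeholder
+
+# Optional endDate: delete logs from March 15, 2023 and earlier.
+response = requests.delete(logs_url, headers=headers, params={"endDate": "2023-03-15"})
+print(response.status_code)
+```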
+
+### Delete specific log
+
+To delete a specific log by ID:
+
+- Base models: Use the [Endpoints_DeleteBaseModelLog](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/operations/Endpoints_DeleteBaseModelLog) operation of the [Speech to text REST API](rest-speech-to-text.md).
+- Custom model endpoints: Use the [Endpoints_DeleteLog](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/operations/Endpoints_DeleteLog) operation of the [Speech to text REST API](rest-speech-to-text.md).
+
+For details about how to get Log IDs, see a previous section [Get log IDs with Speech to text REST API](#get-log-ids-with-speech-to-text-rest-api).
+
+Because audio and transcription logs have separate IDs (such as IDs `2023-03-13_163715__0420c53d-e6ac-4857-bce0-f39c3f9f5ff9_v2_json` and `2023-03-13_163715__0420c53d-e6ac-4857-bce0-f39c3f9f5ff9_wav` from a [previous example in this article](#get-log-ids-with-speech-to-text-rest-api)), to delete both the audio and transcription logs you execute separate [delete by ID](#delete-specific-log) requests.
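+
+Here's a minimal Python sketch of the two separate delete-by-ID requests. The key, region, and endpoint ID are placeholder assumptions, and the sketch assumes each log is deleted at its `self` URL (the endpoint's logs path plus the log ID).
+
+```python
+import requests
+
+headers = {"Ocp-Apim-Subscription-Key": "YourSubscriptionKey"}  # placeholder
+logs_url = ("https://YourServiceRegion.api.cognitive.microsoft.com/speechtotext/v3.1/"
+            "endpoints/YourEndpointId/files/logs")
+
+# Audio and transcription logs have separate IDs, so delete each one individually.
+for log_id in (
+    "2023-03-13_163715__0420c53d-e6ac-4857-bce0-f39c3f9f5ff9_v2_json",
+    "2023-03-13_163715__0420c53d-e6ac-4857-bce0-f39c3f9f5ff9_wav",
+):
+    response = requests.delete(f"{logs_url}/{log_id}", headers=headers)
+    print(log_id, response.status_code)
+```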
+
+## Next steps
+
+* [Speech to text quickstart](get-started-speech-to-text.md)
+* [Speech translation quickstart](./get-started-speech-translation.md)
+* [Create and train custom speech models](custom-speech-overview.md)
ai-services Migrate To Batch Synthesis https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/migrate-to-batch-synthesis.md
+
+ Title: Migrate to Batch synthesis API - Speech service
+
+description: This document helps developers migrate code from Long Audio REST API to Batch synthesis REST API.
++++++ Last updated : 09/01/2022+
+ms.devlang: csharp
+++
+# Migrate code from Long Audio API to Batch synthesis API
+
+The [Batch synthesis API](batch-synthesis.md) (Preview) provides asynchronous synthesis of long-form text to speech. Benefits of upgrading from Long Audio API to Batch synthesis API, and details about how to do so, are described in the sections below.
+
+> [!IMPORTANT]
+> [Batch synthesis API](batch-synthesis.md) is currently in public preview. Once it's generally available, the Long Audio API will be deprecated.
+
+## Base path
+
+You must update the base path in your code from `/texttospeech/v3.0/longaudiosynthesis` to `/texttospeech/3.1-preview1/batchsynthesis`. For example, to list synthesis jobs for your Speech resource in the `eastus` region, use `https://eastus.customvoice.api.speech.microsoft.com/api/texttospeech/3.1-preview1/batchsynthesis` instead of `https://eastus.customvoice.api.speech.microsoft.com/api/texttospeech/v3.0/longaudiosynthesis`.
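+
+For example, here's a minimal Python sketch that lists synthesis jobs against the new base path. The key is a placeholder assumption, and key-based authentication with the `Ocp-Apim-Subscription-Key` header is assumed.
+
+```python
+import requests
+
+# New Batch synthesis API base path (eastus example from above).
+url = "https://eastus.customvoice.api.speech.microsoft.com/api/texttospeech/3.1-preview1/batchsynthesis"
+headers = {"Ocp-Apim-Subscription-Key": "YourSubscriptionKey"}  # placeholder
+
+response = requests.get(url, headers=headers)
+print(response.json())
+```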
+
+## Regions and endpoints
+
+Batch synthesis API is available in all [Speech regions](regions.md).
+
+The Long Audio API is limited to the following regions:
+
+| Region | Endpoint |
+|--|-|
+| Australia East | `https://australiaeast.customvoice.api.speech.microsoft.com` |
+| East US | `https://eastus.customvoice.api.speech.microsoft.com` |
+| India Central | `https://centralindia.customvoice.api.speech.microsoft.com` |
+| South Central US | `https://southcentralus.customvoice.api.speech.microsoft.com` |
+| Southeast Asia | `https://southeastasia.customvoice.api.speech.microsoft.com` |
+| UK South | `https://uksouth.customvoice.api.speech.microsoft.com` |
+| West Europe | `https://westeurope.customvoice.api.speech.microsoft.com` |
+
+## Voices list
+
+Batch synthesis API supports all [text to speech voices and styles](language-support.md?tabs=tts).
+
+The Long Audio API is limited to the set of voices returned by a GET request to `https://<endpoint>/api/texttospeech/v3.0/longaudiosynthesis/voices`.
+
+## Text inputs
+
+Batch synthesis text inputs are sent in a JSON payload of up to 500 kilobytes.
+
+Long Audio API text inputs are uploaded from a file that meets the following requirements:
+* One plain text (.txt) or SSML text (.txt) file encoded as [UTF-8 with Byte Order Mark (BOM)](https://www.w3.org/International/questions/qa-utf8-bom.en#bom). Don't use compressed files such as ZIP. If you have more than one input file, you must submit multiple requests.
+* Contains more than 400 characters for plain text or 400 [billable characters](./text-to-speech.md#pricing-note) for SSML text, and fewer than 10,000 paragraphs. For plain text, each paragraph is separated by a new line. For SSML text, each SSML piece is considered a paragraph. Separate SSML pieces by different paragraphs.
+
+With Batch synthesis API, you can use any of the [supported SSML elements](speech-synthesis-markup.md), including the `audio`, `mstts:backgroundaudio`, and `lexicon` elements. The `audio`, `mstts:backgroundaudio`, and `lexicon` elements aren't supported by Long Audio API.
+
+## Audio output formats
+
+Batch synthesis API supports all [text to speech audio output formats](rest-text-to-speech.md#audio-outputs).
+
+The Long Audio API is limited to the following set of audio output formats. The sample rate for long audio voices is 24kHz, not 48kHz. Other sample rates can be obtained through upsampling or downsampling when synthesizing.
+
+* riff-8khz-16bit-mono-pcm
+* riff-16khz-16bit-mono-pcm
+* riff-24khz-16bit-mono-pcm
+* riff-48khz-16bit-mono-pcm
+* audio-16khz-32kbitrate-mono-mp3
+* audio-16khz-64kbitrate-mono-mp3
+* audio-16khz-128kbitrate-mono-mp3
+* audio-24khz-48kbitrate-mono-mp3
+* audio-24khz-96kbitrate-mono-mp3
+* audio-24khz-160kbitrate-mono-mp3
+
+## Getting results
+
+With batch synthesis API, use the URL from the `outputs.result` property of the GET batch synthesis response. The [results](batch-synthesis.md#batch-synthesis-results) are in a ZIP file that contains the audio (such as `0001.wav`), summary, and debug details.
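+
+As a rough sketch, assuming a completed batch synthesis job, the results ZIP referenced by `outputs.result` could be downloaded and extracted like this in Python. The key, region, and synthesis job ID are placeholder assumptions.
+
+```python
+import io
+import zipfile
+import requests
+
+headers = {"Ocp-Apim-Subscription-Key": "YourSubscriptionKey"}  # placeholder
+synthesis_url = ("https://eastus.customvoice.api.speech.microsoft.com/api/texttospeech/"
+                 "3.1-preview1/batchsynthesis/YourSynthesisId")  # placeholder job ID
+
+synthesis = requests.get(synthesis_url, headers=headers).json()
+result_url = synthesis["outputs"]["result"]  # URL of the results ZIP
+
+# Download the ZIP and extract the audio, summary, and debug files.
+zip_bytes = requests.get(result_url).content
+with zipfile.ZipFile(io.BytesIO(zip_bytes)) as archive:
+    archive.extractall("batch-synthesis-results")
+```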
+
+Long Audio API text inputs and results are returned via two separate content URLs as shown in the following example. The one with `"kind": "LongAudioSynthesisScript"` is the input script submitted. The other one with `"kind": "LongAudioSynthesisResult"` is the result of this request. Both ZIP files can be downloaded from the URL in their `links.contentUrl` property.
+
+## Cleaning up resources
+
+Batch synthesis API supports up to 200 batch synthesis jobs that don't have a status of "Succeeded" or "Failed". The Speech service keeps each synthesis history for up to 31 days, or the duration of the request `timeToLive` property, whichever comes first. The date and time of automatic deletion (for synthesis jobs with a status of "Succeeded" or "Failed") is equal to the `lastActionDateTime` + `timeToLive` properties.
+
+The Long Audio API is limited to 20,000 requests for each Azure subscription account. The Speech service doesn't remove job history automatically. You must remove the previous job run history before making new requests that would otherwise exceed the limit.
+
+## Next steps
+
+- [Batch synthesis API](batch-synthesis.md)
+- [Speech Synthesis Markup Language (SSML)](speech-synthesis-markup.md)
+- [Text to speech quickstart](get-started-text-to-speech.md)
ai-services Migrate V2 To V3 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/migrate-v2-to-v3.md
+
+ Title: Migrate from v2 to v3 REST API - Speech service
+
+description: This document helps developers migrate code from v2 to v3 of the Speech to text REST API.
++++++ Last updated : 11/29/2022++++
+# Migrate code from v2.0 to v3.0 of the REST API
+
+Compared to v2, the v3 version of the Speech services REST API for speech to text is more reliable, easier to use, and more consistent with APIs for similar services. Most teams can migrate from v2 to v3 in a day or two.
+
+> [!IMPORTANT]
+> The Speech to text REST API v2.0 is deprecated and will be retired by February 29, 2024. Please migrate your applications to the Speech to text REST API v3.1. Complete the steps in this article and then see the [Migrate code from v3.0 to v3.1 of the REST API](migrate-v3-0-to-v3-1.md) guide for additional requirements.
+
+## Forward compatibility
+
+All entities from v2 can also be found in the v3 API under the same identity. Where the schema of a result has changed (for example, transcriptions), the result of a GET in the v3 version of the API uses the v3 schema. The result of a GET in the v2 version of the API uses the same v2 schema. Newly created entities on v3 aren't available in responses from v2 APIs.
+
+## Migration steps
+
+This is a summary of the items to be aware of when you prepare for migration. Details are in the linked sections. Depending on your current use of the API, not all steps may apply. Only a few changes require nontrivial changes in the calling code; most changes just require renaming items.
+
+General changes:
+
+1. [Change the host name](#host-name-changes)
+
+1. [Rename the property id to self in your client code](#identity-of-an-entity)
+
+1. [Change code to iterate over collections of entities](#working-with-collections-of-entities)
+
+1. [Rename the property name to displayName in your client code](#name-of-an-entity)
+
+1. [Adjust the retrieval of the metadata of referenced entities](#accessing-referenced-entities)
+
+1. If you use Batch transcription:
+
+ * [Adjust code for creating batch transcriptions](#creating-transcriptions)
+
+ * [Adapt code to the new transcription results schema](#format-of-v3-transcription-results)
+
+ * [Adjust code for how results are retrieved](#getting-the-content-of-entities-and-the-results)
+
+1. If you use Custom model training/testing APIs:
+
+ * [Apply modifications to custom model training](#customizing-models)
+
+ * [Change how base and custom models are retrieved](#retrieving-base-and-custom-models)
+
+ * [Rename the path segment accuracytests to evaluations in your client code](#accuracy-tests)
+
+1. If you use endpoints APIs:
+
+ * [Change how endpoint logs are retrieved](#retrieving-endpoint-logs)
+
+1. Other minor changes:
+
+ * [Pass all custom properties as customProperties instead of properties in your POST requests](#using-custom-properties)
+
+ * [Read the location from response header Location instead of Operation-Location](#response-headers)
+
+## Breaking changes
+
+### Host name changes
+
+Endpoint host names have changed from `{region}.cris.ai` to `{region}.api.cognitive.microsoft.com`. Paths to the new endpoints no longer contain `api/` because it's part of the hostname. The [Speech to text REST API v3.0](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0) reference documentation lists valid regions and paths.
+>[!IMPORTANT]
+>Change the hostname from `{region}.cris.ai` to `{region}.api.cognitive.microsoft.com`, where region is the region of your Speech subscription. Also remove `api/` from any path in your client code.
+
+### Identity of an entity
+
+The property `id` is now `self`. In v2, an API user had to know how the API's paths are constructed. This approach wasn't extensible and required unnecessary work from the user. The property `id` (uuid) is replaced by `self` (string), which is the location of the entity (URL). The value is still unique between all your entities. If `id` is stored as a string in your code, a rename is enough to support the new schema. You can now use the `self` content as the URL for the `GET`, `PATCH`, and `DELETE` REST calls for your entity.
+
+If the entity has additional functionality available through other paths, those paths are listed under `links`. The following example for transcription shows a separate method to `GET` the content of the transcription:
+>[!IMPORTANT]
+>Rename the property `id` to `self` in your client code. Change the type from `uuid` to `string` if needed.
+
+**v2 transcription:**
+
+```json
+{
+ "id": "9891c965-bb32-4880-b14b-6d44efb158f3",
+ "createdDateTime": "2019-01-07T11:34:12Z",
+ "lastActionDateTime": "2019-01-07T11:36:07Z",
+ "status": "Succeeded",
+ "locale": "en-US",
+ "name": "Transcription using locale en-US"
+}
+```
+
+**v3 transcription:**
+
+```json
+{
+ "self": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.0/transcriptions/9891c965-bb32-4880-b14b-6d44efb158f3",
+ "createdDateTime": "2019-01-07T11:34:12Z",
+ "lastActionDateTime": "2019-01-07T11:36:07Z",
+ "status": "Succeeded",
+ "locale": "en-US",
+ "displayName": "Transcription using locale en-US",
+ "links": {
+ "files": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.0/transcriptions/9891c965-bb32-4880-b14b-6d44efb158f3/files"
+ }
+}
+```
+
+Depending on your code's implementation, it may not be enough to rename the property. We recommend using the returned `self` and `links` values as the target URLs of your REST calls, rather than generating paths in your client. By using the returned URLs, you can be sure that future changes in paths won't break your client code.
+
+### Working with collections of entities
+
+Previously, the v2 API returned all available entities in a result. To allow more fine-grained control over the expected response size, in v3 all collection results are paginated. You have control over the count of returned entities and the starting offset of the page. This behavior makes it easy to predict the runtime of the response processor.
+
+The basic shape of the response is the same for all collections:
+
+```json
+{
+ "values": [
+ {
+ }
+ ],
+ "@nextLink": "https://{region}.api.cognitive.microsoft.com/speechtotext/v3.0/{collection}?skip=100&top=100"
+}
+```
+
+The `values` property contains a subset of the available collection entities. The count and offset can be controlled using the `skip` and `top` query parameters. When `@nextLink` is not `null`, there's more data available and the next batch of data can be retrieved by doing a GET on `$.@nextLink`.
+
+This change requires calling the `GET` for the collection in a loop until all elements have been returned.
+
+>[!IMPORTANT]
+>When the response of a GET to `speechtotext/v3.1/{collection}` contains a value in `$.@nextLink`, continue issuing `GET` requests on `$.@nextLink` until `$.@nextLink` is no longer set, to retrieve all elements of that collection.
+
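+Here's a minimal Python sketch of that loop, using transcriptions as the collection. The key and region are placeholder assumptions.
+
+```python
+import requests
+
+headers = {"Ocp-Apim-Subscription-Key": "YourSubscriptionKey"}  # placeholder
+url = ("https://YourServiceRegion.api.cognitive.microsoft.com/speechtotext/v3.0/"
+       "transcriptions?skip=0&top=100")
+
+entities = []
+while url:
+    page = requests.get(url, headers=headers).json()
+    entities.extend(page.get("values", []))
+    url = page.get("@nextLink")  # None when there are no more pages
+
+print(f"Retrieved {len(entities)} entities")
+```
+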
+### Creating transcriptions
+
+A detailed description on how to create batches of transcriptions can be found in [Batch transcription How-to](./batch-transcription.md).
+
+The v3 transcription API lets you set specific transcription options explicitly. All (optional) configuration properties can now be set in the `properties` property.
+Version v3 also supports multiple input files, so it requires a list of URLs rather than a single URL as v2 did. The v2 property name `recordingsUrl` is now `contentUrls` in v3. The functionality of analyzing sentiment in transcriptions has been removed in v3. See [Text Analysis](https://azure.microsoft.com/services/cognitive-services/text-analytics/) for sentiment analysis options.
+
+The new property `timeToLive` under `properties` can help prune the existing completed entities. The `timeToLive` specifies a duration after which a completed entity will be deleted automatically. Set it to a high value (for example `PT12H`) when the entities are continuously tracked, consumed, and deleted and therefore usually processed long before 12 hours have passed.
+
+**v2 transcription POST request body:**
+
+```json
+{
+ "locale": "en-US",
+ "name": "Transcription using locale en-US",
+ "recordingsUrl": "https://contoso.com/mystoragelocation",
+ "properties": {
+ "AddDiarization": "False",
+ "AddWordLevelTimestamps": "False",
+ "PunctuationMode": "DictatedAndAutomatic",
+ "ProfanityFilterMode": "Masked"
+ }
+}
+```
+
+**v3 transcription POST request body:**
+
+```json
+{
+ "locale": "en-US",
+ "displayName": "Transcription using locale en-US",
+ "contentUrls": [
+ "https://contoso.com/mystoragelocation",
+ "https://contoso.com/myotherstoragelocation"
+ ],
+ "properties": {
+ "diarizationEnabled": false,
+ "wordLevelTimestampsEnabled": false,
+ "punctuationMode": "DictatedAndAutomatic",
+ "profanityFilterMode": "Masked"
+ }
+}
+```
+>[!IMPORTANT]
+>Rename the property `recordingsUrl` to `contentUrls` and pass an array of urls instead of a single url. Pass settings for `diarizationEnabled` or `wordLevelTimestampsEnabled` as `bool` instead of `string`.
+
+### Format of v3 transcription results
+
+The schema of transcription results has changed slightly to align with transcriptions created by real-time endpoints. Find an in-depth description of the new format in the [Batch transcription How-to](./batch-transcription.md). The schema of the result is published in our [GitHub sample repository](https://aka.ms/csspeech/samples) under `samples/batch/transcriptionresult_v3.schema.json`.
+
+Property names are now camel-cased, and the values for `channel` and `speaker` now use integer types. Durations now use the format described in ISO 8601, which matches the duration formatting used in other Azure APIs.
+
+Sample of a v3 transcription result. The differences are described in the comments.
+
+```json
+{
+ "source": "...", // (new in v3) was AudioFileName / AudioFileUrl
+ "timestamp": "2020-06-16T09:30:21Z", // (new in v3)
+ "durationInTicks": 41200000, // (new in v3) was AudioLengthInSeconds
+ "duration": "PT4.12S", // (new in v3)
+ "combinedRecognizedPhrases": [ // (new in v3) was CombinedResults
+ {
+ "channel": 0, // (new in v3) was ChannelNumber
+ "lexical": "hello world",
+ "itn": "hello world",
+ "maskedITN": "hello world",
+ "display": "Hello world."
+ }
+ ],
+ "recognizedPhrases": [ // (new in v3) was SegmentResults
+ {
+ "recognitionStatus": "Success", //
+ "channel": 0, // (new in v3) was ChannelNumber
+ "offset": "PT0.07S", // (new in v3) new format, was OffsetInSeconds
+ "duration": "PT1.59S", // (new in v3) new format, was DurationInSeconds
+ "offsetInTicks": 700000.0, // (new in v3) was Offset
+ "durationInTicks": 15900000.0, // (new in v3) was Duration
+
+ // possible transcriptions of the current phrase with confidences
+ "nBest": [
+ {
+ "confidence": 0.898652852,phrase
+ "speaker": 1,
+ "lexical": "hello world",
+ "itn": "hello world",
+ "maskedITN": "hello world",
+ "display": "Hello world.",
+
+ "words": [
+ {
+ "word": "hello",
+ "offset": "PT0.09S",
+ "duration": "PT0.48S",
+ "offsetInTicks": 900000.0,
+ "durationInTicks": 4800000.0,
+ "confidence": 0.987572
+ },
+ {
+ "word": "world",
+ "offset": "PT0.59S",
+ "duration": "PT0.16S",
+ "offsetInTicks": 5900000.0,
+ "durationInTicks": 1600000.0,
+ "confidence": 0.906032
+ }
+ ]
+ }
+ ]
+ }
+ ]
+}
+```
+>[!IMPORTANT]
+>Deserialize the transcription result into the new type as shown above. Instead of a single file per audio channel, distinguish channels by checking the property value of `channel` for each element in `recognizedPhrases`. There is now a single result file for each input file.
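+
+Here's a small Python sketch of reading a v3 result file along those lines. The file name is a placeholder assumption; the tick-to-second conversion (10,000,000 ticks per second) matches the sample values above (41,200,000 ticks for a PT4.12S duration).
+
+```python
+import json
+
+with open("transcription_result.json", encoding="utf-8") as f:
+    result = json.load(f)
+
+# Distinguish channels by the "channel" property instead of separate files.
+for phrase in result["recognizedPhrases"]:
+    best = phrase["nBest"][0]
+    offset_seconds = phrase["offsetInTicks"] / 10_000_000
+    print(f'channel {phrase["channel"]} at {offset_seconds:.2f}s: {best["display"]}')
+```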
++
+### Getting the content of entities and the results
+
+In v2, the links to the input or result files have been inlined with the rest of the entity metadata. As an improvement in v3, there is a clear separation between entity metadata (which is returned by a GET on `$.self`) and the details and credentials to access the result files. This separation helps protect customer data and allows fine control over the duration of validity of the credentials.
+
+In v3, `links` include a sub-property called `files` in case the entity exposes data (datasets, transcriptions, endpoints, or evaluations). A GET on `$.links.files` will return a list of files and a SAS URL
+to access the content of each file. To control the validity duration of the SAS URLs, the query parameter `sasValidityInSeconds` can be used to specify the lifetime.
+
+**v2 transcription:**
+
+```json
+{
+ "id": "9891c965-bb32-4880-b14b-6d44efb158f3",
+ "status": "Succeeded",
+ "reportFileUrl": "https://contoso.com/report.txt?st=2018-02-09T18%3A07%3A00Z&se=2018-02-10T18%3A07%3A00Z&sp=rl&sv=2017-04-17&sr=b&sig=6c044930-3926-4be4-be76-f728327c53b5",
+ "resultsUrls": {
+ "channel_0": "https://contoso.com/audiofile1.wav?st=2018-02-09T18%3A07%3A00Z&se=2018-02-10T18%3A07%3A00Z&sp=rl&sv=2017-04-17&sr=b&sig=6c044930-3926-4be4-be76-f72832e6600c",
+ "channel_1": "https://contoso.com/audiofile2.wav?st=2018-02-09T18%3A07%3A00Z&se=2018-02-10T18%3A07%3A00Z&sp=rl&sv=2017-04-17&sr=b&sig=3e0163f1-0029-4d4a-988d-3fba7d7c53b5"
+ }
+}
+```
+
+**v3 transcription:**
+
+```json
+{
+ "self": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.0/transcriptions/9891c965-bb32-4880-b14b-6d44efb158f3",
+ "links": {
+ "files": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.0/transcriptions/9891c965-bb32-4880-b14b-6d44efb158f3/files"
+ }
+}
+```
+
+**A GET on `$.links.files` would result in:**
+
+```json
+{
+ "values": [
+ {
+ "self": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.0/transcriptions/9891c965-bb32-4880-b14b-6d44efb158f3/files/f23e54f5-ed74-4c31-9730-2f1a3ef83ce8",
+ "name": "Name",
+ "kind": "Transcription",
+ "properties": {
+ "size": 200
+ },
+ "createdDateTime": "2020-01-13T08:00:00Z",
+ "links": {
+ "contentUrl": "https://customspeech-usw.blob.core.windows.net/artifacts/mywavefile1.wav.json?st=2018-02-09T18%3A07%3A00Z&se=2018-02-10T18%3A07%3A00Z&sp=rl&sv=2017-04-17&sr=b&sig=e05d8d56-9675-448b-820c-4318ae64c8d5"
+ }
+ },
+ {
+ "self": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.0/transcriptions/9891c965-bb32-4880-b14b-6d44efb158f3/files/28bc946b-c251-4a86-84f6-ea0f0a2373ef",
+ "name": "Name",
+ "kind": "TranscriptionReport",
+ "properties": {
+ "size": 200
+ },
+ "createdDateTime": "2020-01-13T08:00:00Z",
+ "links": {
+ "contentUrl": "https://customspeech-usw.blob.core.windows.net/artifacts/report.json?st=2018-02-09T18%3A07%3A00Z&se=2018-02-10T18%3A07%3A00Z&sp=rl&sv=2017-04-17&sr=b&sig=e05d8d56-9675-448b-820c-4318ae64c8d5"
+ }
+ }
+ ],
+ "@nextLink": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.0/transcriptions/9891c965-bb32-4880-b14b-6d44efb158f3/files?skip=2&top=2"
+}
+```
+
+The `kind` property indicates the format of content of the file. For transcriptions, the files of kind `TranscriptionReport` are the summary of the job and files of the kind `Transcription` are the result of the job itself.
+
+>[!IMPORTANT]
+>To get the results of operations, use a `GET` on `/speechtotext/v3.0/{collection}/{id}/files`, they are no longer contained in the responses of `GET` on `/speechtotext/v3.0/{collection}/{id}` or `/speechtotext/v3.0/{collection}`.
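+
+Here's a minimal Python sketch of that pattern: list the files of a transcription and download the result files. The key, region, and transcription ID are placeholder assumptions, and only the first page of files is handled.
+
+```python
+import requests
+
+headers = {"Ocp-Apim-Subscription-Key": "YourSubscriptionKey"}  # placeholder
+files_url = ("https://YourServiceRegion.api.cognitive.microsoft.com/speechtotext/v3.0/"
+             "transcriptions/YourTranscriptionId/files")
+
+# Optionally limit the SAS URL lifetime with sasValidityInSeconds.
+files = requests.get(files_url, headers=headers, params={"sasValidityInSeconds": 3600}).json()
+
+for item in files["values"]:
+    if item["kind"] == "Transcription":
+        content = requests.get(item["links"]["contentUrl"]).content  # SAS URL, no key needed
+        with open(item["name"], "wb") as out_file:
+            out_file.write(content)
+```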
+
+### Customizing models
+
+Before v3, there was a distinction between an _acoustic model_ and a _language model_ when a model was being trained. This distinction resulted in the need to specify multiple models when creating endpoints or transcriptions. To simplify this process for a caller, we removed the differences and made everything depend on the content of the datasets that are being used for model training. With this change, the model creation now supports mixed datasets (language data and acoustic data). Endpoints and transcriptions now require only one model.
+
+With this change, the need for a `kind` in the `POST` operation has been removed and the `datasets[]` array can now contain multiple datasets of the same or mixed kinds.
+
+To improve the results of a trained model, the acoustic data is automatically used internally during language training. In general, models created through the v3 API deliver more accurate results than models created with the v2 API.
+
+>[!IMPORTANT]
+>To customize both the acoustic and language model part, pass all of the required language and acoustic datasets in `datasets[]` of the POST to `/speechtotext/v3.0/models`. This will create a single model with both parts customized.
+
+### Retrieving base and custom models
+
+To simplify getting the available models, v3 has separated the collections of "base models" from the customer owned "customized models". The two routes are now
+`GET /speechtotext/v3.0/models/base` and `GET /speechtotext/v3.0/models/`.
+
+In v2, all models were returned together in a single response.
+
+>[!IMPORTANT]
+>To get a list of provided base models for customization, use `GET` on `/speechtotext/v3.0/models/base`. You can find your own customized models with a `GET` on `/speechtotext/v3.0/models`.
+
+### Name of an entity
+
+The `name` property is now `displayName`. This is consistent with other Azure APIs and makes clear that the property doesn't identify the entity. The value of this property isn't required to be unique, and it can be changed after entity creation with a `PATCH` operation.
+
+**v2 transcription:**
+
+```json
+{
+ "name": "Transcription using locale en-US"
+}
+```
+
+**v3 transcription:**
+
+```json
+{
+ "displayName": "Transcription using locale en-US"
+}
+```
+
+>[!IMPORTANT]
+>Rename the property `name` to `displayName` in your client code.
+
+### Accessing referenced entities
+
+In v2, referenced entities were always inlined, for example, the models used by an endpoint. The nesting of entities resulted in large responses, and consumers rarely consumed the nested content. To shrink the response size and improve performance, referenced entities are no longer inlined in the response. Instead, a reference to the other entity appears and can be used directly for a subsequent `GET` (it's a URL as well), following the same pattern as the `self` link.
+
+**v2 transcription:**
+
+```json
+{
+ "id": "9891c965-bb32-4880-b14b-6d44efb158f3",
+ "models": [
+ {
+ "id": "827712a5-f942-4997-91c3-7c6cde35600b",
+ "modelKind": "Language",
+ "lastActionDateTime": "2019-01-07T11:36:07Z",
+ "status": "Running",
+ "createdDateTime": "2019-01-07T11:34:12Z",
+ "locale": "en-US",
+ "name": "Acoustic model",
+ "description": "Example for an acoustic model",
+ "datasets": [
+ {
+ "id": "702d913a-8ba6-4f66-ad5c-897400b081fb",
+ "dataImportKind": "Language",
+ "lastActionDateTime": "2019-01-07T11:36:07Z",
+ "status": "Succeeded",
+ "createdDateTime": "2019-01-07T11:34:12Z",
+ "locale": "en-US",
+ "name": "Language dataset",
+ }
+ ]
+ },
+ ]
+}
+```
+
+**v3 transcription:**
+
+```json
+{
+ "self": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.0/transcriptions/9891c965-bb32-4880-b14b-6d44efb158f3",
+ "model": {
+ "self": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.0/models/021a72d0-54c4-43d3-8254-27336ead9037"
+ }
+}
+```
+
+If you need to consume the details of a referenced model as shown in the above example, just issue a GET on `$.model.self`.
+
+>[!IMPORTANT]
+>To retrieve the metadata of referenced entities, issue a GET on `$.{referencedEntity}.self`, for example to retrieve the model of a transcription do a `GET` on `$.model.self`.
++
+### Retrieving endpoint logs
+
+Version v2 of the service supported logging endpoint results. To retrieve the results of an endpoint with v2, you would create a "data export", which represented a snapshot of the results defined by a time range. The process of exporting batches of data was inflexible. The v3 API gives access to each individual file and allows iteration through them.
+
+**A successfully running v3 endpoint:**
+
+```json
+{
+ "self": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.0/endpoints/afa0669c-a01e-4693-ae3a-93baf40f26d6",
+ "links": {
+ "logs": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.0/endpoints/afa0669c-a01e-4693-ae3a-93baf40f26d6/files/logs"
+ }
+}
+```
+
+**Response of GET `$.links.logs`:**
+
+```json
+{
+ "values": [
+ {
+ "self": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.0/endpoints/6d72ad7e-f286-4a6f-b81b-a0532ca6bcaa/files/logs/2019-09-20_080000_3b5f4628-e225-439d-bd27-8804f9eed13f.wav",
+ "name": "2019-09-20_080000_3b5f4628-e225-439d-bd27-8804f9eed13f.wav",
+ "kind": "Audio",
+ "properties": {
+ "size": 12345
+ },
+ "createdDateTime": "2020-01-13T08:00:00Z",
+ "links": {
+ "contentUrl": "https://customspeech-usw.blob.core.windows.net/artifacts/2019-09-20_080000_3b5f4628-e225-439d-bd27-8804f9eed13f.wav?st=2018-02-09T18%3A07%3A00Z&se=2018-02-10T18%3A07%3A00Z&sp=rl&sv=2017-04-17&sr=b&sig=e05d8d56-9675-448b-820c-4318ae64c8d5"
+ }
+ }
+ ],
+ "@nextLink": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.0/endpoints/afa0669c-a01e-4693-ae3a-93baf40f26d6/files/logs?top=2&SkipToken=2!188!MDAwMDk1ITZhMjhiMDllLTg0MDYtNDViMi1hMGRkLWFlNzRlOGRhZWJkNi8yMDIwLTA0LTAxLzEyNDY0M182MzI5NGRkMi1mZGYzLTRhZmEtOTA0NC1mODU5ZTcxOWJiYzYud2F2ITAwMDAyOCE5OTk5LTEyLTMxVDIzOjU5OjU5Ljk5OTk5OTlaIQ--"
+}
+```
+
+Pagination for endpoint logs works similarly to all other collections, except that no offset can be specified. Due to the large amount of available data, pagination is determined by the server.
+
+In v3, each endpoint log can be deleted individually by issuing a `DELETE` operation on the `self` of a file, or by using `DELETE` on `$.links.logs`. To specify an end date, the query parameter `endDate` can be added to the request.
+
+>[!IMPORTANT]
+>Instead of creating log exports on `/api/speechtotext/v2.0/endpoints/{id}/data` use `/v3.0/endpoints/{id}/files/logs/` to access log files individually.
+
+### Using custom properties
+
+To separate custom properties from the optional configuration properties, all explicitly named properties are now located in the `properties` property, and all properties defined by callers are now located in the `customProperties` property.
+
+**v2 transcription entity:**
+
+```json
+{
+ "properties": {
+ "customerDefinedKey": "value",
+ "diarizationEnabled": "False",
+ "wordLevelTimestampsEnabled": "False"
+ }
+}
+```
+
+**v3 transcription entity:**
+
+```json
+{
+ "properties": {
+ "diarizationEnabled": false,
+ "wordLevelTimestampsEnabled": false
+ },
+ "customProperties": {
+ "customerDefinedKey": "value"
+ }
+}
+```
+
+This change also lets you use correct types on all explicitly named properties under `properties` (for example boolean instead of string).
+
+>[!IMPORTANT]
+>Pass all custom properties as `customProperties` instead of `properties` in your `POST` requests.
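+
+A minimal sketch of a v3 `POST` body that keeps service-defined settings under `properties` and caller-defined values under `customProperties`; the audio URL, key, and region are placeholders.
+
+```python
+import requests
+
+region = "eastus"
+headers = {
+    "Ocp-Apim-Subscription-Key": "YOUR_SPEECH_KEY",
+    "Content-Type": "application/json",
+}
+
+body = {
+    "displayName": "Transcription with custom properties",
+    "locale": "en-US",
+    "contentUrls": ["https://example.com/audio.wav"],  # placeholder audio URL
+    # Service-defined settings use their correct types under 'properties'.
+    "properties": {
+        "diarizationEnabled": False,
+        "wordLevelTimestampsEnabled": False,
+    },
+    # Caller-defined values go under 'customProperties'.
+    "customProperties": {
+        "customerDefinedKey": "value",
+    },
+}
+
+response = requests.post(
+    f"https://{region}.api.cognitive.microsoft.com/speechtotext/v3.0/transcriptions",
+    headers=headers,
+    json=body,
+)
+print(response.status_code, response.headers.get("Location"))
+```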
+
+### Response headers
+
+v3 no longer returns the `Operation-Location` header in addition to the `Location` header on `POST` requests. The value of both headers in v2 was the same. Now only `Location` is returned.
+
+Because the new API version is now managed by Azure API Management (APIM), the throttling-related headers `X-RateLimit-Limit`, `X-RateLimit-Remaining`, and `X-RateLimit-Reset` aren't contained in the response headers.
+
+>[!IMPORTANT]
+>Read the location from response header `Location` instead of `Operation-Location`. In case of a 429 response code, read the `Retry-After` header value instead of `X-RateLimit-Limit`, `X-RateLimit-Remaining`, or `X-RateLimit-Reset`.
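+
+The following sketch shows how a client might read the `Location` header and back off on a 429 response; it assumes the Python `requests` package, and the request body and credentials are placeholders.
+
+```python
+import time
+import requests
+
+region = "eastus"
+headers = {"Ocp-Apim-Subscription-Key": "YOUR_SPEECH_KEY"}
+url = f"https://{region}.api.cognitive.microsoft.com/speechtotext/v3.0/transcriptions"
+body = {"displayName": "example", "locale": "en-US", "contentUrls": ["https://example.com/audio.wav"]}
+
+while True:
+    response = requests.post(url, headers=headers, json=body)
+    if response.status_code == 429:
+        # Throttled: honor Retry-After instead of the removed X-RateLimit-* headers.
+        time.sleep(int(response.headers.get("Retry-After", "5")))
+        continue
+    break
+
+# Read the created resource URL from Location (Operation-Location is no longer returned).
+print(response.headers.get("Location"))
+```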
++
+### Accuracy tests
+
+Accuracy tests have been renamed to evaluations because the new name better describes what they represent. The new paths are: `https://{region}.api.cognitive.microsoft.com/speechtotext/v3.0/evaluations`.
+
+>[!IMPORTANT]
+>Rename the path segment `accuracytests` to `evaluations` in your client code.
++
+## Next steps
+
+* [Speech to text REST API](rest-speech-to-text.md)
+* [Speech to text REST API v3.0 reference](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0)
ai-services Migrate V3 0 To V3 1 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/migrate-v3-0-to-v3-1.md
+
+ Title: Migrate from v3.0 to v3.1 REST API - Speech service
+
+description: This document helps developers migrate code from v3.0 to v3.1 of the Speech to text REST API.
++++++ Last updated : 01/25/2023+
+ms.devlang: csharp
+++
+# Migrate code from v3.0 to v3.1 of the REST API
+
+The Speech to text REST API is used for [Batch transcription](batch-transcription.md) and [Custom Speech](custom-speech-overview.md). Changes from version 3.0 to 3.1 are described in the sections below.
+
+> [!IMPORTANT]
+> Speech to text REST API v3.1 is generally available. Version 3.0 of the [Speech to text REST API](rest-speech-to-text.md) will be retired.
+
+## Base path
+
+You must update the base path in your code from `/speechtotext/v3.0` to `/speechtotext/v3.1`. For example, to get base models in the `eastus` region, use `https://eastus.api.cognitive.microsoft.com/speechtotext/v3.1/models/base` instead of `https://eastus.api.cognitive.microsoft.com/speechtotext/v3.0/models/base`.
+
+Note these additional changes:
+- The `/models/{id}/copyto` operation (includes '/') in version 3.0 is replaced by the `/models/{id}:copyto` operation (includes ':') in version 3.1.
+- The `/webhooks/{id}/ping` operation (includes '/') in version 3.0 is replaced by the `/webhooks/{id}:ping` operation (includes ':') in version 3.1.
+- The `/webhooks/{id}/test` operation (includes '/') in version 3.0 is replaced by the `/webhooks/{id}:test` operation (includes ':') in version 3.1.
+
+For more details, see [Operation IDs](#operation-ids) later in this guide.
+
+## Batch transcription
+
+> [!NOTE]
+> Don't use Speech to text REST API v3.0 to retrieve a transcription created via Speech to text REST API v3.1. You'll see an error message such as the following: "The API version cannot be used to access this transcription. Please use API version v3.1 or higher."
+
+In the [Transcriptions_Create](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/operations/Transcriptions_Create) operation, the following three properties are added (see the request sketch after this list):
+- The `displayFormWordLevelTimestampsEnabled` property can be used to enable the reporting of word-level timestamps on the display form of the transcription results. The results are returned in the `displayWords` property of the transcription file.
+- The `diarization` property can be used to specify hints for the minimum and maximum number of speaker labels to generate when performing optional diarization (speaker separation). With this feature, the service is now able to generate speaker labels for more than two speakers. To use this property, you must also set the `diarizationEnabled` property to `true`.
+- The `languageIdentification` property can be used to specify settings for language identification on the input prior to transcription. Up to 10 candidate locales are supported for language identification. The returned transcription will include a new `locale` property for the recognized language or the locale that you provided.
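+
+The following is a hedged sketch of a v3.1 Transcriptions_Create request body that sets the three new properties. The nested shapes for `diarization` and `languageIdentification` reflect common usage and should be verified against the reference; the audio URL, key, and region are placeholders.
+
+```python
+import requests
+
+region = "eastus"
+headers = {
+    "Ocp-Apim-Subscription-Key": "YOUR_SPEECH_KEY",
+    "Content-Type": "application/json",
+}
+
+body = {
+    "displayName": "v3.1 transcription example",
+    "locale": "en-US",
+    "contentUrls": ["https://example.com/audio.wav"],  # placeholder audio URL
+    "properties": {
+        # Word-level timestamps on the display form, returned in 'displayWords'.
+        "displayFormWordLevelTimestampsEnabled": True,
+        # Diarization hints; 'diarizationEnabled' must also be set to true.
+        "diarizationEnabled": True,
+        "diarization": {"speakers": {"minCount": 1, "maxCount": 5}},
+        # Language identification across up to 10 candidate locales.
+        "languageIdentification": {"candidateLocales": ["en-US", "de-DE", "es-ES"]},
+    },
+}
+
+response = requests.post(
+    f"https://{region}.api.cognitive.microsoft.com/speechtotext/v3.1/transcriptions",
+    headers=headers,
+    json=body,
+)
+print(response.status_code, response.headers.get("Location"))
+```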
+
+The `filter` property is added to the [Transcriptions_List](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/operations/Transcriptions_List), [Transcriptions_ListFiles](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/operations/Transcriptions_ListFiles), and [Projects_ListTranscriptions](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/operations/Projects_ListTranscriptions) operations. The `filter` expression can be used to select a subset of the available resources. You can filter by `displayName`, `description`, `createdDateTime`, `lastActionDateTime`, `status`, and `locale`. For example: `filter=createdDateTime gt 2022-02-01T11:00:00Z`
+
+If you use a webhook to receive notifications about transcription status, note that webhooks created via the V3.0 API can't receive notifications for V3.1 transcription requests. Create a new webhook endpoint via the V3.1 API to receive notifications for V3.1 transcription requests.
+
+## Custom Speech
+
+### Datasets
+
+The following operations are added for uploading and managing multiple data blocks for a dataset:
+ - [Datasets_UploadBlock](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/operations/Datasets_UploadBlock) - Upload a block of data for the dataset. The maximum size of the block is 8 MiB.
+ - [Datasets_GetBlocks](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/operations/Datasets_GetBlocks) - Get the list of uploaded blocks for this dataset.
+ - [Datasets_CommitBlocks](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/operations/Datasets_CommitBlocks) - Commit blocklist to complete the upload of the dataset.
+
+To support model adaptation with [structured text in markdown](how-to-custom-speech-test-and-train.md#structured-text-data-for-training) data, the [Datasets_Create](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/operations/Datasets_Create) operation now supports the **LanguageMarkdown** data kind. For more information, see [upload datasets](how-to-custom-speech-upload-data.md#upload-datasets).
+
+### Models
+
+The [Models_ListBaseModels](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/operations/Models_ListBaseModels) and [Models_GetBaseModel](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/operations/Models_GetBaseModel) operations return information on the type of adaptation supported by each base model.
+
+```json
+"features": {
+ "supportsAdaptationsWith": [
+ "Acoustic",
+ "Language",
+ "LanguageMarkdown",
+ "Pronunciation"
+ ]
+}
+```
+
+The [Models_Create](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/operations/Models_Create) operation has a new `customModelWeightPercent` property where you can specify the weight used when the Custom Language Model (trained from plain or structured text data) is combined with the Base Language Model. Valid values are integers between 1 and 100. The default value is currently 30.
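+
+As an illustrative sketch only: the placement of `customModelWeightPercent` under `properties`, and the base model and dataset references, are assumptions to verify against the Models_Create reference. The key, region, and IDs are placeholders.
+
+```python
+import requests
+
+region = "eastus"
+headers = {
+    "Ocp-Apim-Subscription-Key": "YOUR_SPEECH_KEY",
+    "Content-Type": "application/json",
+}
+
+body = {
+    "displayName": "Custom model with adjusted weight",
+    "locale": "en-US",
+    # Placeholder references to an existing base model and training dataset.
+    "baseModel": {"self": f"https://{region}.api.cognitive.microsoft.com/speechtotext/v3.1/models/base/YOUR_BASE_MODEL_ID"},
+    "datasets": [{"self": f"https://{region}.api.cognitive.microsoft.com/speechtotext/v3.1/datasets/YOUR_DATASET_ID"}],
+    "properties": {
+        # Weight of the custom language model when combined with the base model (1-100).
+        "customModelWeightPercent": 50,
+    },
+}
+
+response = requests.post(
+    f"https://{region}.api.cognitive.microsoft.com/speechtotext/v3.1/models",
+    headers=headers,
+    json=body,
+)
+print(response.status_code, response.headers.get("Location"))
+```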
+
+The `filter` property is added to the following operations:
+
+- [Datasets_List](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/operations/Datasets_List)
+- [Datasets_ListFiles](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/operations/Datasets_ListFiles)
+- [Endpoints_List](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/operations/Endpoints_List)
+- [Evaluations_List](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/operations/Evaluations_List)
+- [Evaluations_ListFiles](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/operations/Evaluations_ListFiles)
+- [Models_ListBaseModels](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/operations/Models_ListBaseModels)
+- [Models_ListCustomModels](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/operations/Models_ListCustomModels)
+- [Projects_List](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/operations/Projects_List)
+- [Projects_ListDatasets](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/operations/Projects_ListDatasets)
+- [Projects_ListEndpoints](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/operations/Projects_ListEndpoints)
+- [Projects_ListEvaluations](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/operations/Projects_ListEvaluations)
+- [Projects_ListModels](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/operations/Projects_ListModels)
+
+The `filter` expression can be used to select a subset of the available resources. You can filter by `displayName`, `description`, `createdDateTime`, `lastActionDateTime`, `status`, `locale`, and `kind`. For example: `filter=locale eq 'en-US'`
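+
+A small sketch of how the `filter` expression might be passed as a query parameter, assuming the Python `requests` package (which URL-encodes the expression); the key and region are placeholders.
+
+```python
+import requests
+
+region = "eastus"
+headers = {"Ocp-Apim-Subscription-Key": "YOUR_SPEECH_KEY"}
+
+# List only en-US datasets by passing a filter expression as a query parameter.
+response = requests.get(
+    f"https://{region}.api.cognitive.microsoft.com/speechtotext/v3.1/datasets",
+    headers=headers,
+    params={"filter": "locale eq 'en-US'"},
+)
+for dataset in response.json().get("values", []):
+    print(dataset["displayName"], dataset["locale"])
+```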
+
+Added the [Models_ListFiles](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/operations/Models_ListFiles) operation to get the files of the model identified by the given ID.
+
+Added the [Models_GetFile](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/operations/Models_GetFile) operation to get one specific file (identified with fileId) from a model (identified with ID). This lets you retrieve a **ModelReport** file that provides information on the data processed during training.
+
+## Operation IDs
+
+You must update the base path in your code from `/speechtotext/v3.0` to `/speechtotext/v3.1`. For example, to get base models in the `eastus` region, use `https://eastus.api.cognitive.microsoft.com/speechtotext/v3.1/models/base` instead of `https://eastus.api.cognitive.microsoft.com/speechtotext/v3.0/models/base`.
+
+The name of each `operationId` in version 3.1 is prefixed with the object name. For example, the `operationId` for "Create Model" changed from [CreateModel](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0/operations/CreateModel) in version 3.0 to [Models_Create](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/operations/Models_Create) in version 3.1.
+
+|Path|Method|Version 3.1 Operation ID|Version 3.0 Operation ID|
+|--|--|--|--|
+|`/datasets`|GET|[Datasets_List](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/operations/Datasets_List)|[GetDatasets](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0/operations/GetDatasets)|
+|`/datasets`|POST|[Datasets_Create](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/operations/Datasets_Create)|[CreateDataset](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0/operations/CreateDataset)|
+|`/datasets/{id}`|DELETE|[Datasets_Delete](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/operations/Datasets_Delete)|[DeleteDataset](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0/operations/DeleteDataset)|
+|`/datasets/{id}`|GET|[Datasets_Get](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/operations/Datasets_Get)|[GetDataset](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0/operations/GetDataset)|
+|`/datasets/{id}`|PATCH|[Datasets_Update](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/operations/Datasets_Update)|[UpdateDataset](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0/operations/UpdateDataset)|
+|`/datasets/{id}/blocks:commit`|POST|[Datasets_CommitBlocks](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/operations/Datasets_CommitBlocks)|Not applicable|
+|`/datasets/{id}/blocks`|GET|[Datasets_GetBlocks](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/operations/Datasets_GetBlocks)|Not applicable|
+|`/datasets/{id}/blocks`|PUT|[Datasets_UploadBlock](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/operations/Datasets_UploadBlock)|Not applicable|
+|`/datasets/{id}/files`|GET|[Datasets_ListFiles](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/operations/Datasets_ListFiles)|[GetDatasetFiles](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0/operations/GetDatasetFiles)|
+|`/datasets/{id}/files/{fileId}`|GET|[Datasets_GetFile](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/operations/Datasets_GetFile)|[GetDatasetFile](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0/operations/GetDatasetFile)|
+|`/datasets/locales`|GET|[Datasets_ListSupportedLocales](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/operations/Datasets_ListSupportedLocales)|[GetSupportedLocalesForDatasets](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0/operations/GetSupportedLocalesForDatasets)|
+|`/datasets/upload`|POST|[Datasets_Upload](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/operations/Datasets_Upload)|[UploadDatasetFromForm](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0/operations/UploadDatasetFromForm)|
+|`/endpoints`|GET|[Endpoints_List](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/operations/Endpoints_List)|[GetEndpoints](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0/operations/GetEndpoints)|
+|`/endpoints`|POST|[Endpoints_Create](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/operations/Endpoints_Create)|[CreateEndpoint](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0/operations/CreateEndpoint)|
+|`/endpoints/{id}`|DELETE|[Endpoints_Delete](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/operations/Endpoints_Delete)|[DeleteEndpoint](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0/operations/DeleteEndpoint)|
+|`/endpoints/{id}`|GET|[Endpoints_Get](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/operations/Endpoints_Get)|[GetEndpoint](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0/operations/GetEndpoint)|
+|`/endpoints/{id}`|PATCH|[Endpoints_Update](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/operations/Endpoints_Update)|[UpdateEndpoint](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0/operations/UpdateEndpoint)|
+|`/endpoints/{id}/files/logs`|DELETE|[Endpoints_DeleteLogs](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/operations/Endpoints_DeleteLogs)|[DeleteEndpointLogs](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0/operations/DeleteEndpointLogs)|
+|`/endpoints/{id}/files/logs`|GET|[Endpoints_ListLogs](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/operations/Endpoints_ListLogs)|[GetEndpointLogs](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0/operations/GetEndpointLogs)|
+|`/endpoints/{id}/files/logs/{logId}`|DELETE|[Endpoints_DeleteLog](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/operations/Endpoints_DeleteLog)|[DeleteEndpointLog](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0/operations/DeleteEndpointLog)|
+|`/endpoints/{id}/files/logs/{logId}`|GET|[Endpoints_GetLog](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/operations/Endpoints_GetLog)|[GetEndpointLog](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0/operations/GetEndpointLog)|
+|`/endpoints/base/{locale}/files/logs`|DELETE|[Endpoints_DeleteBaseModelLogs](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/operations/Endpoints_DeleteBaseModelLogs)|[DeleteBaseModelLogs](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0/operations/DeleteBaseModelLogs)|
+|`/endpoints/base/{locale}/files/logs`|GET|[Endpoints_ListBaseModelLogs](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/operations/Endpoints_ListBaseModelLogs)|[GetBaseModelLogs](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0/operations/GetBaseModelLogs)|
+|`/endpoints/base/{locale}/files/logs/{logId}`|DELETE|[Endpoints_DeleteBaseModelLog](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/operations/Endpoints_DeleteBaseModelLog)|[DeleteBaseModelLog](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0/operations/DeleteBaseModelLog)|
+|`/endpoints/base/{locale}/files/logs/{logId}`|GET|[Endpoints_GetBaseModelLog](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/operations/Endpoints_GetBaseModelLog)|[GetBaseModelLog](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0/operations/GetBaseModelLog)|
+|`/endpoints/locales`|GET|[Endpoints_ListSupportedLocales](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/operations/Endpoints_ListSupportedLocales)|[GetSupportedLocalesForEndpoints](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0/operations/GetSupportedLocalesForEndpoints)|
+|`/evaluations`|GET|[Evaluations_List](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/operations/Evaluations_List)|[GetEvaluations](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0/operations/GetEvaluations)|
+|`/evaluations`|POST|[Evaluations_Create](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/operations/Evaluations_Create)|[CreateEvaluation](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0/operations/CreateEvaluation)|
+|`/evaluations/{id}`|DELETE|[Evaluations_Delete](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/operations/Evaluations_Delete)|[DeleteEvaluation](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0/operations/DeleteEvaluation)|
+|`/evaluations/{id}`|GET|[Evaluations_Get](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/operations/Evaluations_Get)|[GetEvaluation](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0/operations/GetEvaluation)|
+|`/evaluations/{id}`|PATCH|[Evaluations_Update](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/operations/Evaluations_Update)|[UpdateEvaluation](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0/operations/UpdateEvaluation)|
+|`/evaluations/{id}/files`|GET|[Evaluations_ListFiles](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/operations/Evaluations_ListFiles)|[GetEvaluationFiles](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0/operations/GetEvaluationFiles)|
+|`/evaluations/{id}/files/{fileId}`|GET|[Evaluations_GetFile](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/operations/Evaluations_GetFile)|[GetEvaluationFile](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0/operations/GetEvaluationFile)|
+|`/evaluations/locales`|GET|[Evaluations_ListSupportedLocales](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/operations/Evaluations_ListSupportedLocales)|[GetSupportedLocalesForEvaluations](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0/operations/GetSupportedLocalesForEvaluations)|
+|`/healthstatus`|GET|[HealthStatus_Get](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/operations/HealthStatus_Get)|[GetHealthStatus](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0/operations/GetHealthStatus)|
+|`/models`|GET|[Models_ListCustomModels](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/operations/Models_ListCustomModels)|[GetModels](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0/operations/GetModels)|
+|`/models`|POST|[Models_Create](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/operations/Models_Create)|[CreateModel](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0/operations/CreateModel)|
+|`/models/{id}:copyto`<sup>1</sup>|POST|[Models_CopyTo](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/operations/Models_CopyTo)|[CopyModelToSubscription](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0/operations/CopyModelToSubscription)|
+|`/models/{id}`|DELETE|[Models_Delete](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/operations/Models_Delete)|[DeleteModel](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0/operations/DeleteModel)|
+|`/models/{id}`|GET|[Models_GetCustomModel](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/operations/Models_GetCustomModel)|[GetModel](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0/operations/GetModel)|
+|`/models/{id}`|PATCH|[Models_Update](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/operations/Models_Update)|[UpdateModel](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0/operations/UpdateModel)|
+|`/models/{id}/files`|GET|[Models_ListFiles](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/operations/Models_ListFiles)|Not applicable|
+|`/models/{id}/files/{fileId}`|GET|[Models_GetFile](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/operations/Models_GetFile)|Not applicable|
+|`/models/{id}/manifest`|GET|[Models_GetCustomModelManifest](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/operations/Models_GetCustomModelManifest)|[GetModelManifest](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0/operations/GetModelManifest)|
+|`/models/base`|GET|[Models_ListBaseModels](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/operations/Models_ListBaseModels)|[GetBaseModels](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0/operations/GetBaseModels)|
+|`/models/base/{id}`|GET|[Models_GetBaseModel](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/operations/Models_GetBaseModel)|[GetBaseModel](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0/operations/GetBaseModel)|
+|`/models/base/{id}/manifest`|GET|[Models_GetBaseModelManifest](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/operations/Models_GetBaseModelManifest)|[GetBaseModelManifest](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0/operations/GetBaseModelManifest)|
+|`/models/locales`|GET|[Models_ListSupportedLocales](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/operations/Models_ListSupportedLocales)|[GetSupportedLocalesForModels](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0/operations/GetSupportedLocalesForModels)|
+|`/projects`|GET|[Projects_List](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/operations/Projects_List)|[GetProjects](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0/operations/GetProjects)|
+|`/projects`|POST|[Projects_Create](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/operations/Projects_Create)|[CreateProject](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0/operations/CreateProject)|
+|`/projects/{id}`|DELETE|[Projects_Delete](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/operations/Projects_Delete)|[DeleteProject](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0/operations/DeleteProject)|
+|`/projects/{id}`|GET|[Projects_Get](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/operations/Projects_Get)|[GetProject](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0/operations/GetProject)|
+|`/projects/{id}`|PATCH|[Projects_Update](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/operations/Projects_Update)|[UpdateProject](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0/operations/UpdateProject)|
+|`/projects/{id}/datasets`|GET|[Projects_ListDatasets](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/operations/Projects_ListDatasets)|[GetDatasetsForProject](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0/operations/GetDatasetsForProject)|
+|`/projects/{id}/endpoints`|GET|[Projects_ListEndpoints](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/operations/Projects_ListEndpoints)|[GetEndpointsForProject](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0/operations/GetEndpointsForProject)|
+|`/projects/{id}/evaluations`|GET|[Projects_ListEvaluations](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/operations/Projects_ListEvaluations)|[GetEvaluationsForProject](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0/operations/GetEvaluationsForProject)|
+|`/projects/{id}/models`|GET|[Projects_ListModels](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/operations/Projects_ListModels)|[GetModelsForProject](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0/operations/GetModelsForProject)|
+|`/projects/{id}/transcriptions`|GET|[Projects_ListTranscriptions](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/operations/Projects_ListTranscriptions)|[GetTranscriptionsForProject](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0/operations/GetTranscriptionsForProject)|
+|`/projects/locales`|GET|[Projects_ListSupportedLocales](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/operations/Projects_ListSupportedLocales)|[GetSupportedProjectLocales](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0/operations/GetSupportedProjectLocales)|
+|`/transcriptions`|GET|[Transcriptions_List](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/operations/Transcriptions_List)|[GetTranscriptions](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0/operations/GetTranscriptions)|
+|`/transcriptions`|POST|[Transcriptions_Create](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/operations/Transcriptions_Create)|[CreateTranscription](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0/operations/CreateTranscription)|
+|`/transcriptions/{id}`|DELETE|[Transcriptions_Delete](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/operations/Transcriptions_Delete)|[DeleteTranscription](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0/operations/DeleteTranscription)|
+|`/transcriptions/{id}`|GET|[Transcriptions_Get](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/operations/Transcriptions_Get)|[GetTranscription](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0/operations/GetTranscription)|
+|`/transcriptions/{id}`|PATCH|[Transcriptions_Update](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/operations/Transcriptions_Update)|[UpdateTranscription](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0/operations/UpdateTranscription)|
+|`/transcriptions/{id}/files`|GET|[Transcriptions_ListFiles](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/operations/Transcriptions_ListFiles)|[GetTranscriptionFiles](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0/operations/GetTranscriptionFiles)|
+|`/transcriptions/{id}/files/{fileId}`|GET|[Transcriptions_GetFile](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/operations/Transcriptions_GetFile)|[GetTranscriptionFile](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0/operations/GetTranscriptionFile)|
+|`/transcriptions/locales`|GET|[Transcriptions_ListSupportedLocales](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/operations/Transcriptions_ListSupportedLocales)|[GetSupportedLocalesForTranscriptions](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0/operations/GetSupportedLocalesForTranscriptions)|
+|`/webhooks`|GET|[WebHooks_List](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/operations/WebHooks_List)|[GetHooks](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0/operations/GetHooks)|
+|`/webhooks`|POST|[WebHooks_Create](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/operations/WebHooks_Create)|[CreateHook](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0/operations/CreateHook)|
+|`/webhooks/{id}:ping`<sup>2</sup>|POST|[WebHooks_Ping](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/operations/WebHooks_Ping)|[PingHook](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0/operations/PingHook)|
+|`/webhooks/{id}:test`<sup>3</sup>|POST|[WebHooks_Test](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/operations/WebHooks_Test)|[TestHook](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0/operations/TestHook)|
+|`/webhooks/{id}`|DELETE|[WebHooks_Delete](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/operations/WebHooks_Delete)|[DeleteHook](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0/operations/DeleteHook)|
+|`/webhooks/{id}`|GET|[WebHooks_Get](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/operations/WebHooks_Get)|[GetHook](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0/operations/GetHook)|
+|`/webhooks/{id}`|PATCH|[WebHooks_Update](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/operations/WebHooks_Update)|[UpdateHook](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0/operations/UpdateHook)|
+
+<sup>1</sup> The `/models/{id}/copyto` operation (includes '/') in version 3.0 is replaced by the `/models/{id}:copyto` operation (includes ':') in version 3.1.
+
+<sup>2</sup> The `/webhooks/{id}/ping` operation (includes '/') in version 3.0 is replaced by the `/webhooks/{id}:ping` operation (includes ':') in version 3.1.
+
+<sup>3</sup> The `/webhooks/{id}/test` operation (includes '/') in version 3.0 is replaced by the `/webhooks/{id}:test` operation (includes ':') in version 3.1.
+
+## Next steps
+
+* [Speech to text REST API](rest-speech-to-text.md)
+* [Speech to text REST API v3.1 reference](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1)
+* [Speech to text REST API v3.0 reference](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0)
++
ai-services Migration Overview Neural Voice https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/migration-overview-neural-voice.md
+
+ Title: Migration to neural voice - Speech service
+
+description: This document summarizes the benefits of migration from non-neural voice to neural voice.
++++++ Last updated : 11/12/2021+++
+# Migration to neural voice
+
+We're retiring two features from [Text to speech](index-text-to-speech.yml) capabilities as detailed below.
+
+## Custom voice (non-neural training)
+
+> [!IMPORTANT]
+> We are retiring the standard non-neural training tier of custom voice from March 1, 2021 through February 29, 2024. If you used a non-neural custom voice with your Speech resource prior to March 1, 2021, then you can continue to do so until February 29, 2024. All other Speech resources can only use custom neural voice. After February 29, 2024, the non-neural custom voices won't be supported with any Speech resource.
+>
+> The pricing for custom voice is different from custom neural voice. Go to the [pricing page](https://azure.microsoft.com/pricing/details/cognitive-services/speech-services/) and check the pricing details in the collapsible "Deprecated" section. Custom voice (non-neural training) is referred to as **Custom**.
+
+Go to [this article](how-to-migrate-to-custom-neural-voice.md) to learn how to migrate to custom neural voice.
+
+Custom neural voice is a gated service. You need to create an Azure account and Speech service subscription (with the S0 tier), and [apply](https://aka.ms/customneural) to use the custom neural voice feature. After you've been granted access, you can visit the [Speech Studio portal](https://speech.microsoft.com/portal) and then select Custom Voice to get started.
+
+Even without an Azure account, you can listen to voice samples in [Speech Studio](https://aka.ms/customvoice) and determine the right voice for your business needs.
+
+Go to the [pricing page](https://azure.microsoft.com/pricing/details/cognitive-services/speech-services/) and check the pricing details. Custom voice (retired) is referred to as **Custom**, and custom neural voice is referred to as **Custom Neural**.
+
+## Prebuilt standard voice
+
+> [!IMPORTANT]
+> We are retiring the standard voices from September 1, 2021 through August 31, 2024. If you used a standard voice with your Speech resource that was created prior to September 1, 2021, then you can continue to do so until August 31, 2024. All other Speech resources can only use prebuilt neural voices. You can choose from the supported [neural voice names](language-support.md?tabs=tts). After August 31, 2024, the standard voices won't be supported with any Speech resource.
+>
+> The pricing for prebuilt standard voice is different from prebuilt neural voice. Go to the [pricing page](https://azure.microsoft.com/pricing/details/cognitive-services/speech-services/) and check the pricing details in the collapsible "Deprecated" section. Prebuilt standard voice (retired) is referred to as **Standard**.
+
+Go to [this article](how-to-migrate-to-prebuilt-neural-voice.md) to learn how to migrate to prebuilt neural voice.
+
+Prebuilt neural voice is powered by deep neural networks. You need to create an Azure account and Speech service subscription. Then you can use the [Speech SDK](./get-started-text-to-speech.md) or visit the [Speech Studio portal](https://speech.microsoft.com/portal), and select prebuilt neural voices to get started. To listen to voice samples without creating an Azure account, visit the [Voice Gallery](https://speech.microsoft.com/portal/voicegallery) and determine the right voice for your business needs.
+
+Go to the [pricing page](https://azure.microsoft.com/pricing/details/cognitive-services/speech-services/) and check the pricing details. Prebuilt standard voice (retired) is referred to as **Standard**, and prebuilt neural voice is referred to as **Neural**.
+
+## Next steps
+
+- [Try out prebuilt neural voice](text-to-speech.md)
+- [Try out custom neural voice](custom-neural-voice.md)
ai-services Multi Device Conversation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/multi-device-conversation.md
+
+ Title: Multi-device Conversation overview - Speech service
+
+description: Multi-device conversation makes it easy to create a speech or text conversation between multiple clients and coordinate the messages that are sent between them.
++++++ Last updated : 02/19/2022++++
+# What is Multi-device Conversation?
+
+Multi-device conversation makes it easy to create a speech or text conversation between multiple clients and coordinate the messages sent between them.
+
+> [!NOTE]
+> Multi-device conversation access is a preview feature.
+
+With multi-device conversation, you can:
+
+- Connect multiple clients into the same conversation and manage the sending and receiving of messages between them.
+- Easily transcribe audio from each client and send the transcription to the others, with optional translation.
+- Easily send text messages between clients, with optional translation.
+
+You can build a feature or solution that works across an array of devices. Each device can independently send messages (either transcriptions of audio or instant messages) to all other devices.
+
+Whereas [Conversation Transcription](conversation-transcription.md) works on a single device with a multichannel microphone array, Multi-device Conversation is suited for scenarios with multiple devices, each with a single microphone.
+
+>[!IMPORTANT]
+> Multi-device conversation doesn't support sending audio files between clients; only the transcription and/or translation is sent.
+
+## Key features
+
+- **Real-time transcription:** Everyone receives a transcript of the conversation, so they can follow along with the text in real time or save it for later.
+- **Real-time translation:** With more than 70 [supported languages](language-support.md) for text translation, users can translate the conversation into their preferred languages.
+- **Readable transcripts:** The transcription and translation are easy to follow, with punctuation and sentence breaks.
+- **Voice or text input:** Each user can speak or type on their own device, depending on the language support capabilities enabled for the participant's chosen language. For more information, see [Language support](language-support.md).
+- **Message relay:** The multi-device conversation service will distribute messages sent by one client to all the others, in the languages of their choice.
+- **Message identification:** Every message that users receive in the conversation will be tagged with the nickname of the user who sent it.
+
+## Use cases
+
+### Lightweight conversations
+
+Creating and joining a conversation is easy. One user acts as the 'host' and creates a conversation, which generates a random five-letter conversation code and a QR code. All other users can join the conversation by typing in the conversation code or scanning the QR code.
+
+Since users join via the conversation code and aren't required to share contact information, it is easy to create quick, on-the-spot conversations.
+
+### Inclusive meetings
+
+Real-time transcription and translation can help make conversations accessible for people who speak different languages and/or are deaf or hard of hearing. Each person can also actively participate in the conversation, by speaking their preferred language or sending instant messages.
+
+### Presentations
+
+You can also provide captions for presentations and lectures both on-screen and on the audience members' own devices. After the audience joins with the conversation code, they can see the transcript in their preferred language, on their own device.
+
+## How it works
+
+All clients will use the Speech SDK to create or join a conversation. The Speech SDK interacts with the multi-device conversation service, which manages the lifetime of a conversation, including the list of participants, each client's chosen language(s), and messages sent.
+
+Each client can send audio or instant messages. The service will use speech recognition to convert audio into text, and send instant messages as-is. If clients choose different languages, then the service will translate all messages to the specified language(s) of each client.
+
+![Multi-device Conversation Overview Diagram](media/scenarios/multi-device-conversation.png)
+
+## Overview of Conversation, Host, and Participant
+
+A conversation is a session that one user starts for the other participating users to join. All clients connect to the conversation using the five-letter conversation code.
+
+Each conversation creates metadata that includes:
+- Timestamps of when the conversation started and ended
+- List of all participants in the conversation, which includes each user's chosen nickname and primary language for speech or text input.
++
+There are two types of users in a conversation: host and participant.
+
+The host is the user who starts a conversation, and who acts as the administrator of that conversation.
+- Each conversation can only have one host
+- The host must be connected to the conversation for the duration of the conversation. If the host leaves the conversation, the conversation will end for all other participants.
+- The host has a few extra controls to manage the conversation:
+ - Lock the conversation - prevent additional participants from joining
+ - Mute all participants - prevent other participants from sending any messages to the conversation, whether transcribed from speech or instant messages
+ - Mute individual participants
+ - Unmute all participants
+ - Unmute individual participants
+
+A participant is a user who joins a conversation.
+- A participant can leave and rejoin the same conversation at any time, without ending the conversation for other participants.
+- Participants cannot lock the conversation or mute/unmute others
+
+> [!NOTE]
+> Each conversation can have up to 100 participants, of which 10 can be simultaneously speaking at any given time.
+
+## Language support
+
+When creating or joining a conversation, each user must choose a primary language: the language that they will speak and send instant messages in, and also the language in which they will see other users' messages.
+
+There are two kinds of languages, speech to text and text-only:
+- If the user chooses a speech to text language as their primary language, then they will be able to use both speech and text input in the conversation.
+
+- If the user chooses a text-only language, then they will only be able to use text input and send instant messages in the conversation. Text-only languages are the languages that are supported for text translation, but not speech to text. You can see available languages on the [language support](./language-support.md) page.
+
+Apart from their primary language, each participant can also specify additional languages for translating the conversation.
+
+Below is a summary of what the user will be able to do in a multi-device conversation, based on their chosen primary language.
++
+| What the user can do in the conversation | Speech to text | Text-only |
+|--|--|--|
+| Use speech input | ✔️ | ❌ |
+| Send instant messages | ✔️ | ✔️ |
+| Translate the conversation | ✔️ | ✔️ |
+
+> [!NOTE]
+> For lists of available speech to text and text translation languages, see [supported languages](./language-support.md).
+++
+## Next steps
+
+* [Translate conversations in real-time](quickstarts/multi-device-conversation.md)
ai-services Openai Speech https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/openai-speech.md
+
+ Title: "Azure OpenAI speech to speech chat - Speech service"
+
+description: In this how-to guide, you can use Speech to converse with Azure OpenAI. The text recognized by the Speech service is sent to Azure OpenAI. The text response from Azure OpenAI is then synthesized by the Speech service.
+++++++ Last updated : 04/15/2023+
+zone_pivot_groups: programming-languages-csharp-python
+keywords: speech to text, openai
++
+# Azure OpenAI speech to speech chat
+++
+## Next steps
+
+- [Learn more about Speech](overview.md)
+- [Learn more about Azure OpenAI](../openai/overview.md)
ai-services Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/overview.md
+
+ Title: What is the Speech service?
+
+description: The Speech service provides speech to text, text to speech, and speech translation capabilities with an Azure resource. Add speech to your applications, tools, and devices with the Speech SDK, Speech Studio, or REST APIs.
++++++ Last updated : 09/16/2022+++
+# What is the Speech service?
++
+The Speech service provides speech to text and text to speech capabilities with a [Speech resource](~/articles/ai-services/multi-service-resource.md?pivots=azportal). You can transcribe speech to text with high accuracy, produce natural-sounding text to speech voices, translate spoken audio, and use speaker recognition during conversations.
++
+Create custom voices, add specific words to your base vocabulary, or build your own models. Run Speech anywhere, in the cloud or at the edge in containers. It's easy to speech-enable your applications, tools, and devices with the [Speech CLI](spx-overview.md), [Speech SDK](./speech-sdk.md), [Speech Studio](speech-studio-overview.md), or [REST APIs](./rest-speech-to-text.md).
+
+Speech is available for many [languages](language-support.md), [regions](regions.md), and [price points](https://azure.microsoft.com/pricing/details/cognitive-services/speech-services/).
+
+## Speech scenarios
+
+Common scenarios for speech include:
+- [Captioning](./captioning-concepts.md): Learn how to synchronize captions with your input audio, apply profanity filters, get partial results, apply customizations, and identify spoken languages for multilingual scenarios.
+- [Audio Content Creation](text-to-speech.md#more-about-neural-text-to-speech-features): You can use neural voices to make interactions with chatbots and voice assistants more natural and engaging, convert digital texts such as e-books into audiobooks and enhance in-car navigation systems.
+- [Call Center](call-center-overview.md): Transcribe calls in real-time or process a batch of calls, redact personally identifying information, and extract insights such as sentiment to help with your call center use case.
+- [Language learning](language-learning-overview.md): Provide pronunciation assessment feedback to language learners, support real-time transcription for remote learning conversations, and read aloud teaching materials with neural voices.
+- [Voice assistants](voice-assistants.md): Create natural, humanlike conversational interfaces for your applications and experiences. The voice assistant feature provides fast, reliable interaction between a device and an assistant implementation.
+
+Microsoft uses Speech for many scenarios, such as captioning in Teams, dictation in Office 365, and Read Aloud in the Edge browser.
++
+## Speech capabilities
+
+Speech feature summaries are provided below with links for more information.
+
+### Speech to text
+
+Use [speech to text](speech-to-text.md) to transcribe audio into text, either in [real-time](#real-time-speech-to-text) or asynchronously with [batch transcription](#batch-transcription).
+
+> [!TIP]
+> You can try real-time speech to text in [Speech Studio](https://aka.ms/speechstudio/speechtotexttool) without signing up or writing any code.
+
+Convert audio to text from a range of sources, including microphones, audio files, and blob storage. Use speaker diarization to determine who said what and when. Get readable transcripts with automatic formatting and punctuation.
+
+The base model may not be sufficient if the audio contains ambient noise or includes a lot of industry and domain-specific jargon. In these cases, you can create and train [custom speech models](custom-speech-overview.md) with acoustic, language, and pronunciation data. Custom speech models are private and can offer a competitive advantage.
+
+### Real-time speech to text
+
+With [real-time speech to text](get-started-speech-to-text.md), the audio is transcribed as speech is recognized from a microphone or file. Use real-time speech to text for applications that need to transcribe audio in real time, such as the following scenarios (a minimal SDK sketch follows this list):
+- Transcriptions, captions, or subtitles for live meetings
+- Contact center agent assist
+- Dictation
+- Voice agents
+- Pronunciation assessment
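+
+The following is a minimal sketch of single-shot recognition from the default microphone with the [Speech SDK](speech-sdk.md) for Python; the key and region values are placeholders.
+
+```python
+import azure.cognitiveservices.speech as speechsdk
+
+# Placeholder credentials; use your own Speech resource key and region.
+speech_config = speechsdk.SpeechConfig(subscription="YOUR_SPEECH_KEY", region="eastus")
+audio_config = speechsdk.audio.AudioConfig(use_default_microphone=True)
+
+recognizer = speechsdk.SpeechRecognizer(speech_config=speech_config, audio_config=audio_config)
+
+# Recognize a single utterance from the microphone.
+result = recognizer.recognize_once_async().get()
+if result.reason == speechsdk.ResultReason.RecognizedSpeech:
+    print("Recognized:", result.text)
+else:
+    print("Recognition did not succeed:", result.reason)
+```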
+
+### Batch transcription
+
+[Batch transcription](batch-transcription.md) is used to transcribe a large amount of audio in storage. You can point to audio files with a shared access signature (SAS) URI and asynchronously receive transcription results. Use batch transcription for applications that need to transcribe audio in bulk, such as:
+- Transcriptions, captions, or subtitles for pre-recorded audio
+- Contact center post-call analytics
+- Diarization
+
+### Text to speech
+
+With [text to speech](text-to-speech.md), you can convert input text into humanlike synthesized speech. Use neural voices, which are humanlike voices powered by deep neural networks. Use the [Speech Synthesis Markup Language (SSML)](speech-synthesis-markup.md) to fine-tune the pitch, pronunciation, speaking rate, volume, and more. The following neural voice options are available; a short synthesis sketch follows the list.
+
+- Prebuilt neural voice: Highly natural out-of-the-box voices. Check the prebuilt neural voice samples in the [Voice Gallery](https://speech.microsoft.com/portal/voicegallery) and determine the right voice for your business needs.
+- Custom neural voice: Besides the pre-built neural voices that come out of the box, you can also create a [custom neural voice](custom-neural-voice.md) that is recognizable and unique to your brand or product. Custom neural voices are private and can offer a competitive advantage. Check the custom neural voice samples [here](https://aka.ms/customvoice).
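+
+A short sketch that synthesizes a sentence with a prebuilt neural voice by using the Speech SDK for Python; the key, region, and voice name are placeholders.
+
+```python
+import azure.cognitiveservices.speech as speechsdk
+
+speech_config = speechsdk.SpeechConfig(subscription="YOUR_SPEECH_KEY", region="eastus")
+# Pick any supported prebuilt neural voice; this name is a placeholder.
+speech_config.speech_synthesis_voice_name = "en-US-JennyNeural"
+
+# With no audio config specified, the synthesized audio plays on the default speaker.
+synthesizer = speechsdk.SpeechSynthesizer(speech_config=speech_config)
+result = synthesizer.speak_text_async("Text to speech converts input text into humanlike speech.").get()
+if result.reason == speechsdk.ResultReason.SynthesizingAudioCompleted:
+    print("Synthesis completed.")
+```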
+
+### Speech translation
+
+[Speech translation](speech-translation.md) enables real-time, multilingual translation of speech to your applications, tools, and devices. Use this feature for speech-to-speech and speech to text translation.
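+
+A minimal sketch of translating English speech into German text with the Speech SDK for Python; the key and region are placeholders.
+
+```python
+import azure.cognitiveservices.speech as speechsdk
+
+# Placeholder credentials; use your own Speech resource key and region.
+translation_config = speechsdk.translation.SpeechTranslationConfig(
+    subscription="YOUR_SPEECH_KEY", region="eastus"
+)
+translation_config.speech_recognition_language = "en-US"
+translation_config.add_target_language("de")
+
+recognizer = speechsdk.translation.TranslationRecognizer(translation_config=translation_config)
+
+# Translate a single utterance from the default microphone.
+result = recognizer.recognize_once_async().get()
+if result.reason == speechsdk.ResultReason.TranslatedSpeech:
+    print("Recognized:", result.text)
+    print("German:", result.translations["de"])
+```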
+
+### Language identification
+
+[Language identification](language-identification.md) is used to identify languages spoken in audio when compared against a list of [supported languages](language-support.md). Use language identification by itself, with speech to text recognition, or with speech translation.
+
+### Speaker recognition
+
+[Speaker recognition](speaker-recognition-overview.md) provides algorithms that verify and identify speakers by their unique voice characteristics. Speaker recognition is used to answer the question, "Who is speaking?".
+
+### Pronunciation assessment
+
+[Pronunciation assessment](./how-to-pronunciation-assessment.md) evaluates speech pronunciation and gives speakers feedback on the accuracy and fluency of spoken audio. With pronunciation assessment, language learners can practice, get instant feedback, and improve their pronunciation so that they can speak and present with confidence.
+
+### Intent recognition
+
+[Intent recognition](./intent-recognition.md): Use speech to text with conversational language understanding to derive user intents from transcribed speech and act on voice commands.
+
+## Delivery and presence
+
+You can deploy Azure AI services Speech features in the cloud or on-premises.
+
+With [containers](speech-container-howto.md), you can bring the service closer to your data for compliance, security, or other operational reasons.
+
+Speech service deployment in sovereign clouds is available for some government entities and their partners. For example, the Azure Government cloud is available to US government entities and their partners. Azure China cloud is available to organizations with a business presence in China. For more information, see [sovereign clouds](sovereign-clouds.md).
++
+## Use Speech in your application
+
+The [Speech Studio](speech-studio-overview.md) is a set of UI-based tools for building and integrating features from Azure AI services Speech service in your applications. You create projects in Speech Studio by using a no-code approach, and then reference those assets in your applications by using the [Speech SDK](speech-sdk.md), the [Speech CLI](spx-overview.md), or the REST APIs.
+
+The [Speech CLI](spx-overview.md) is a command-line tool for using Speech service without having to write any code. Most features in the Speech SDK are available in the Speech CLI, and some advanced features and customizations are simplified in the Speech CLI.
+
+The [Speech SDK](./speech-sdk.md) exposes many of the Speech service capabilities you can use to develop speech-enabled applications. The Speech SDK is available in many programming languages and across all platforms.
+
+In some cases, you can't or shouldn't use the [Speech SDK](speech-sdk.md). In those cases, you can use REST APIs to access the Speech service. For example, use the [batch transcription](batch-transcription.md) and [speaker recognition](/rest/api/speakerrecognition/) REST APIs.
+
+## Get started
+
+We offer quickstarts in many popular programming languages. Each quickstart is designed to teach you basic design patterns and have you running code in less than 10 minutes. See the following list for the quickstart for each feature:
+
+* [Speech to text quickstart](get-started-speech-to-text.md)
+* [Text to speech quickstart](get-started-text-to-speech.md)
+* [Speech translation quickstart](./get-started-speech-translation.md)
+
+## Code samples
+
+Sample code for the Speech service is available on GitHub. These samples cover common scenarios like reading audio from a file or stream, continuous and single-shot recognition, and working with custom models. Use these links to view SDK and REST samples:
+
+- [Speech to text, text to speech, and speech translation samples (SDK)](https://github.com/Azure-Samples/cognitive-services-speech-sdk)
+- [Batch transcription samples (REST)](https://github.com/Azure-Samples/cognitive-services-speech-sdk/tree/master/samples/batch)
+- [Text to speech samples (REST)](https://github.com/Azure-Samples/Cognitive-Speech-TTS)
+- [Voice assistant samples (SDK)](https://aka.ms/csspeech/samples)
+
+## Responsible AI
+
+An AI system includes not only the technology, but also the people who will use it, the people who will be affected by it, and the environment in which it is deployed. Read the transparency notes to learn about responsible AI use and deployment in your systems.
+
+### Speech to text
+
+* [Transparency note and use cases](/legal/cognitive-services/speech-service/speech-to-text/transparency-note?context=/azure/cognitive-services/speech-service/context/context)
+* [Characteristics and limitations](/legal/cognitive-services/speech-service/speech-to-text/characteristics-and-limitations?context=/azure/cognitive-services/speech-service/context/context)
+* [Integration and responsible use](/legal/cognitive-services/speech-service/speech-to-text/guidance-integration-responsible-use?context=/azure/cognitive-services/speech-service/context/context)
+* [Data, privacy, and security](/legal/cognitive-services/speech-service/speech-to-text/data-privacy-security?context=/azure/cognitive-services/speech-service/context/context)
+
+### Pronunciation Assessment
+
+* [Transparency note and use cases](/legal/cognitive-services/speech-service/pronunciation-assessment/transparency-note-pronunciation-assessment?context=/azure/cognitive-services/speech-service/context/context)
+* [Characteristics and limitations](/legal/cognitive-services/speech-service/pronunciation-assessment/characteristics-and-limitations-pronunciation-assessment?context=/azure/cognitive-services/speech-service/context/context)
+
+### Custom Neural Voice
+
+* [Transparency note and use cases](/legal/cognitive-services/speech-service/custom-neural-voice/transparency-note-custom-neural-voice?context=/azure/cognitive-services/speech-service/context/context)
+* [Characteristics and limitations](/legal/cognitive-services/speech-service/custom-neural-voice/characteristics-and-limitations-custom-neural-voice?context=/azure/cognitive-services/speech-service/context/context)
+* [Limited access](/legal/cognitive-services/speech-service/custom-neural-voice/limited-access-custom-neural-voice?context=/azure/cognitive-services/speech-service/context/context)
+* [Responsible deployment of synthetic speech](/legal/cognitive-services/speech-service/custom-neural-voice/concepts-guidelines-responsible-deployment-synthetic?context=/azure/cognitive-services/speech-service/context/context)
+* [Disclosure of voice talent](/legal/cognitive-services/speech-service/disclosure-voice-talent?context=/azure/cognitive-services/speech-service/context/context)
+* [Disclosure of design guidelines](/legal/cognitive-services/speech-service/custom-neural-voice/concepts-disclosure-guidelines?context=/azure/cognitive-services/speech-service/context/context)
+* [Disclosure of design patterns](/legal/cognitive-services/speech-service/custom-neural-voice/concepts-disclosure-patterns?context=/azure/cognitive-services/speech-service/context/context)
+* [Code of conduct](/legal/cognitive-services/speech-service/tts-code-of-conduct?context=/azure/cognitive-services/speech-service/context/context)
+* [Data, privacy, and security](/legal/cognitive-services/speech-service/custom-neural-voice/data-privacy-security-custom-neural-voice?context=/azure/cognitive-services/speech-service/context/context)
+
+### Speaker Recognition
+
+* [Transparency note and use cases](/legal/cognitive-services/speech-service/speaker-recognition/transparency-note-speaker-recognition?context=/azure/cognitive-services/speech-service/context/context)
+* [Characteristics and limitations](/legal/cognitive-services/speech-service/speaker-recognition/characteristics-and-limitations-speaker-recognition?context=/azure/cognitive-services/speech-service/context/context)
+* [Limited access](/legal/cognitive-services/speech-service/speaker-recognition/limited-access-speaker-recognition?context=/azure/cognitive-services/speech-service/context/context)
+* [General guidelines](/legal/cognitive-services/speech-service/speaker-recognition/guidance-integration-responsible-use-speaker-recognition?context=/azure/cognitive-services/speech-service/context/context)
+* [Data, privacy, and security](/legal/cognitive-services/speech-service/speaker-recognition/data-privacy-speaker-recognition?context=/azure/cognitive-services/speech-service/context/context)
+
+## Next steps
+
+* [Get started with speech to text](get-started-speech-to-text.md)
+* [Get started with text to speech](get-started-text-to-speech.md)
ai-services Pattern Matching Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/pattern-matching-overview.md
+
+ Title: Pattern Matching overview
+
+description: Pattern Matching with the IntentRecognizer helps you get started quickly with offline intent matching.
++++++ Last updated : 11/15/2021
+keywords: intent recognition pattern matching
++
+# What is pattern matching?
++
ai-services Power Automate Batch Transcription https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/power-automate-batch-transcription.md
+
+ Title: Power Automate batch transcription - Speech service
+
+description: Transcribe audio files from an Azure Storage container using the Power Automate batch transcription connector.
+++++++ Last updated : 03/09/2023++
+# Power Automate batch transcription
+
+This article describes how to use [Power Automate](/power-automate/getting-started) and the [Azure AI services for Batch Speech to text connector](/connectors/cognitiveservicesspe/) to transcribe audio files from an Azure Storage container. The connector uses the [Batch Transcription REST API](batch-transcription.md), but you don't need to write any code to use it. If the connector doesn't meet your requirements, you can still use the [REST API](rest-speech-to-text.md#transcriptions) directly.
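+
+For reference, a rough sketch of creating a transcription directly with the REST API is shown below (Python with the requests package). The key, region, SAS URL, and API version (v3.1 is assumed here) are placeholders; check the REST API reference for the version available to your resource.
+
+```python
+# Rough sketch: create a batch transcription with the Speech to text REST API.
+# The key, region, SAS URL, and API version (v3.1 assumed) are placeholders.
+import requests
+
+region = "YOUR_REGION"          # for example, "eastus"
+speech_key = "YOUR_SPEECH_KEY"
+endpoint = f"https://{region}.api.cognitive.microsoft.com/speechtotext/v3.1/transcriptions"
+
+body = {
+    "displayName": "My batch transcription",
+    "locale": "en-US",
+    "contentUrls": ["https://<account>.blob.core.windows.net/<container>/<file>.wav?<sas-token>"],
+}
+
+response = requests.post(
+    endpoint,
+    headers={"Ocp-Apim-Subscription-Key": speech_key, "Content-Type": "application/json"},
+    json=body,
+)
+response.raise_for_status()
+print("Created transcription:", response.json()["self"])  # URL to poll for status
+```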
+
+In addition to [Power Automate](/power-automate/getting-started), you can use the [Azure AI services for Batch Speech to text connector](/connectors/cognitiveservicesspe/) with [Power Apps](/power-apps) and [Logic Apps](../../logic-apps/index.yml).
+
+> [!TIP]
+> Try more Speech features in [Speech Studio](https://aka.ms/speechstudio/speechtotexttool) without signing up or writing any code.
+
+## Prerequisites
++
+## Create the Azure Blob Storage container
+
+In this example, you'll transcribe audio files that are located in an [Azure Blob Storage](../../storage/blobs/storage-blobs-overview.md) account.
+
+Follow these steps to create a new storage account and container.
+
+1. Go to the [Azure portal](https://portal.azure.com/) and sign in to your Azure account.
+1. <a href="https://portal.azure.com/#create/Microsoft.StorageAccount-ARM" title="Create a Storage account resource" target="_blank">Create a Storage account resource</a> in the Azure portal. Use the same subscription and resource group as your Speech resource.
+1. Select the Storage account.
+1. In the **Data storage** group in the left pane, select **Containers**.
+1. Select **+ Container**.
+1. Enter a name for the new container such as "batchtranscription" and select **Create**.
+1. Get the **Access key** for the storage account. Select **Access keys** in the **Security + networking** group in the left pane. View and take note of the **key1** (or **key2**) value. You'll need the access key later when you [configure the connector](#create-a-power-automate-flow).
+
+Later you'll [upload files to the container](#upload-files-to-the-container) after the connector is configured, because adding or modifying a file is what kicks off the transcription process.
+
+## Create a Power Automate flow
+
+### Create a new flow
+
+1. [Sign in to Power Automate](https://make.powerautomate.com/).
+1. From the collapsible menu on the left, select **Create**.
+1. Select **Automated cloud flow** to start from a blank flow that can be triggered by a designated event.
+
+ :::image type="content" source="./media/power-platform/create-automated-cloud-flow.png" alt-text="A screenshot of the menu for creating an automated cloud flow." lightbox="./media/power-platform/create-automated-cloud-flow.png":::
+
+1. In the **Build an automated cloud flow** dialog, enter a name for your flow such as "BatchSTT".
+1. Select **Skip** to exit the dialog and continue without choosing a trigger.
+
+### Configure the flow trigger
+
+1. Choose a trigger from the [Azure Blob Storage connector](/connectors/azureblob/). For this example, enter "blob" in the search connectors and triggers box to narrow the results.
+1. Under the **Azure Blob Storage** connector, select the **When a blob is added or modified** trigger.
+
+ :::image type="content" source="./media/power-platform/flow-search-blob.png" alt-text="A screenshot of the search connectors and triggers dialog." lightbox="./media/power-platform/flow-search-blob.png":::
+
+1. Configure the Azure Blob Storage connection.
+ 1. From the **Authentication type** drop-down list, select **Access Key**.
+ 1. Enter the account name and access key of the Azure Storage account that you [created previously](#create-the-azure-blob-storage-container).
+ 1. Select **Create** to continue.
+1. Configure the **When a blob is added or modified** trigger.
+
+ :::image type="content" source="./media/power-platform/flow-connection-settings-blob.png" alt-text="A screenshot of the configure blob trigger dialog." lightbox="./media/power-platform/flow-connection-settings-blob.png":::
+
+ 1. From the **Storage account name or blob endpoint** drop-down list, select **Use connection settings**. You should see the storage account name as a component of the connection string.
+ 1. Under **Container** select the folder icon. Choose the container that you [created previously](#create-the-azure-blob-storage-container).
+
+### Create SAS URI by path
+
+To transcribe an audio file that's in your [Azure Blob Storage container](#create-the-azure-blob-storage-container), you need a [Shared Access Signature (SAS) URI](../../storage/common/storage-sas-overview.md) for the file.
+
+The [Azure Blob Storage connector](/connectors/azureblob/) supports SAS URIs for individual blobs, but not for entire containers.
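+
+The flow generates the SAS URI for you. For reference only, a rough code equivalent with the azure-storage-blob Python package might look like the following sketch; the account name, key, container, and blob names are placeholders.
+
+```python
+# Sketch: generate a read-only SAS URI for a single blob with azure-storage-blob.
+# Account name, account key, container name, and blob name are placeholders.
+from datetime import datetime, timedelta, timezone
+from azure.storage.blob import BlobSasPermissions, generate_blob_sas
+
+account_name = "mystorageaccount"
+account_key = "YOUR_STORAGE_ACCOUNT_KEY"
+container_name = "batchtranscription"
+blob_name = "sample.wav"
+
+sas_token = generate_blob_sas(
+    account_name=account_name,
+    container_name=container_name,
+    blob_name=blob_name,
+    account_key=account_key,
+    permission=BlobSasPermissions(read=True),
+    expiry=datetime.now(timezone.utc) + timedelta(hours=2),
+)
+
+sas_uri = f"https://{account_name}.blob.core.windows.net/{container_name}/{blob_name}?{sas_token}"
+print(sas_uri)
+```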
+
+1. Select **+ New step** to begin adding a new operation for the Azure Blob Storage connector.
+1. Enter "blob" in the search connectors and actions box to narrow the results.
+1. Under the **Azure Blob Storage** connector, select the **Create SAS URI by path** action.
+1. Under the **Storage account name or blob endpoint** drop-down, choose the same connection that you used for the **When a blob is added or modified** trigger.
+1. Select `Path` as dynamic content for the **Blob path** field.
+
+By now, you should have a flow that looks like this:
++
+### Create transcription
+
+1. Select **+ New step** to begin adding a new operation for the [Azure AI services for Batch Speech to text connector](/connectors/cognitiveservicesspe/).
+1. Enter "batch speech to text" in the search connectors and actions box to narrow the results.
+1. Select the **Azure AI services for Batch Speech to text** connector.
+1. Select the **Create transcription** action.
+1. Create a new connection to the Speech resource that you [created previously](#prerequisites). The connection will be available throughout the Power Automate environment. For more information, see [Manage connections in Power Automate](/power-automate/add-manage-connections).
+ 1. Enter a name for the connection such as "speech-resource-key". You can choose any name that you like.
+ 1. In the **API Key** field, enter the Speech resource key.
+
+   Optionally, you can select the connector ellipsis (...) to view available connections. If you weren't prompted to create a connection, then you already have a connection that's selected by default.
+
+ :::image type="content" source="./media/power-platform/flow-progression-speech-resource-connection.png" alt-text="A screenshot of the view connections dialog." lightbox="./media/power-platform/flow-progression-speech-resource-connection.png":::
+
+1. Configure the **Create transcription** action.
+ 1. In the **locale** field enter the expected locale of the audio data to transcribe.
+ 1. Select `DisplayName` as dynamic content for the **displayName** field. You can choose any name that you would like to refer to later.
+ 1. Select `Web Url` as dynamic content for the **contentUrls Item - 1** field. This is the SAS URI output from the [Create SAS URI by path](#create-sas-uri-by-path) action.
+
+ > [!TIP]
+ > For more information about create transcription parameters, see the [Azure AI services for Batch Speech to text](/connectors/cognitiveservicesspe/#create-transcription) documentation.
+
+1. From the top navigation menu, select **Save**.
+
+### Test the flow
+
+1. From the top navigation menu, select **Flow checker**. In the side panel that appears, you shouldn't see any errors or warnings. If you do, then you should fix them before continuing.
+1. From the top navigation menu, save the flow and select **Test the flow**. In the window that appears, select **Test**.
+1. In the side panel that appears, select **Manually** and then select **Test**.
+
+After a few seconds, you should see an indication that the flow is in progress.
++
+The flow is waiting for a file to be added or modified in the Azure Blob Storage container. That's the [trigger that you configured](#configure-the-flow-trigger) earlier.
+
+To trigger the test flow, upload an audio file to the Azure Blob Storage container as described next.
+
+## Upload files to the container
+
+Follow these steps to upload [wav, mp3, or ogg](batch-transcription-audio-data.md#supported-audio-formats) files from your local directory to the Azure Storage container that you [created previously](#create-the-azure-blob-storage-container).
+
+1. Go to the [Azure portal](https://portal.azure.com/) and sign in to your Azure account.
+1. Select the Storage account that you [created previously](#create-the-azure-blob-storage-container).
+1. Select the new container.
+1. Select **Upload**.
+1. Choose the files to upload and select **Upload**.
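+
+If you prefer to script the upload instead of using the portal, a minimal sketch with the azure-storage-blob Python package is shown below. The connection string, container name, and file name are placeholders.
+
+```python
+# Sketch: upload a local audio file to the container that triggers the flow.
+# The connection string, container name, and file name are placeholders.
+from azure.storage.blob import BlobServiceClient
+
+connection_string = "YOUR_STORAGE_CONNECTION_STRING"
+container_name = "batchtranscription"
+file_name = "sample.wav"
+
+service_client = BlobServiceClient.from_connection_string(connection_string)
+blob_client = service_client.get_blob_client(container=container_name, blob=file_name)
+
+with open(file_name, "rb") as data:
+    blob_client.upload_blob(data, overwrite=True)
+
+print(f"Uploaded {file_name} to container '{container_name}'.")
+```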
+
+## View the transcription flow results
+
+After you upload the audio file to the Azure Blob Storage container, the flow should run and complete. Return to your test flow in the Power Automate portal to view the results.
++
+You can select and expand the **Create transcription** action to see detailed input and output results.
+
+## Next steps
+
+- [Azure AI services for Batch Speech to text connector](/connectors/cognitiveservicesspe/)
+- [Azure Blob Storage connector](/connectors/azureblob/)
+- [Power Platform](/power-platform/)
ai-services Pronunciation Assessment Tool https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/pronunciation-assessment-tool.md
+
+ Title: How to use pronunciation assessment in Speech Studio
+
+description: The pronunciation assessment tool in Speech Studio gives you feedback on the accuracy and fluency of your speech, no coding required.
++++++ Last updated : 09/08/2022+++
+# Pronunciation assessment in Speech Studio
+
+Pronunciation assessment uses the Speech to text capability to provide subjective and objective feedback for language learners. Practicing pronunciation and getting timely feedback are essential for improving language skills. Assessments driven by experienced teachers can take a lot of time and effort, which makes high-quality assessment expensive for learners. Pronunciation assessment can help make language assessment more engaging and accessible to learners of all backgrounds.
+
+Pronunciation assessment provides various assessment results in different granularities, from individual phonemes to the entire text input.
+- At the full-text level, pronunciation assessment offers additional Fluency and Completeness scores: Fluency indicates how closely the speech matches a native speaker's use of silent breaks between words, and Completeness indicates the proportion of words in the reference text that are pronounced in the speech. An overall score aggregated from Accuracy, Fluency, and Completeness is then given to indicate the overall pronunciation quality of the given speech.
+- At the word level, pronunciation assessment can automatically detect miscues and provide an accuracy score at the same time, which gives more detailed information on omissions, repetitions, insertions, and mispronunciations in the given speech.
+- Syllable-level accuracy scores are currently available via the [JSON file](?tabs=json#pronunciation-assessment-results) or [Speech SDK](how-to-pronunciation-assessment.md).
+- At the phoneme level, pronunciation assessment provides accuracy scores of each phoneme, helping learners to better understand the pronunciation details of their speech.
+
+This article describes how to use the pronunciation assessment tool through the [Speech Studio](https://speech.microsoft.com). You can get immediate feedback on the accuracy and fluency of your speech without writing any code. For information about how to integrate pronunciation assessment in your speech applications, see [How to use pronunciation assessment](how-to-pronunciation-assessment.md).
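+
+For reference, such an integration with the Speech SDK for Python might look roughly like the following sketch. The key, region, audio file name, and reference text are placeholders.
+
+```python
+# Rough sketch: pronunciation assessment with the Speech SDK for Python.
+# The key, region, audio file name, and reference text are placeholders.
+import azure.cognitiveservices.speech as speechsdk
+
+speech_config = speechsdk.SpeechConfig(subscription="YOUR_SPEECH_KEY", region="YOUR_REGION")
+audio_config = speechsdk.audio.AudioConfig(filename="sample.wav")
+
+pron_config = speechsdk.PronunciationAssessmentConfig(
+    reference_text="Today was a beautiful day.",
+    grading_system=speechsdk.PronunciationAssessmentGradingSystem.HundredMark,
+    granularity=speechsdk.PronunciationAssessmentGranularity.Phoneme,
+)
+
+recognizer = speechsdk.SpeechRecognizer(speech_config=speech_config, audio_config=audio_config)
+pron_config.apply_to(recognizer)
+
+result = recognizer.recognize_once()
+assessment = speechsdk.PronunciationAssessmentResult(result)
+print("Accuracy:", assessment.accuracy_score)
+print("Fluency:", assessment.fluency_score)
+print("Completeness:", assessment.completeness_score)
+print("Overall:", assessment.pronunciation_score)
+```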
+
+> [!NOTE]
+> Usage of pronunciation assessment costs the same as standard Speech to text, whether pay-as-you-go or commitment tier [pricing](https://azure.microsoft.com/pricing/details/cognitive-services/speech-services). If you [purchase a commitment tier](../commitment-tier.md) for standard Speech to text, the spend for pronunciation assessment goes towards meeting the commitment.
+>
+> For information about availability of pronunciation assessment, see [supported languages](language-support.md?tabs=pronunciation-assessment) and [available regions](regions.md#speech-service).
+
+## Try out pronunciation assessment
+
+You can explore and try out pronunciation assessment even without signing in.
+
+> [!TIP]
+> To assess more than 5 seconds of speech with your own script, sign in with an [Azure account](https://azure.microsoft.com/free/cognitive-services) and use your <a href="https://portal.azure.com/#create/Microsoft.CognitiveServicesSpeechServices" title="Create a Speech resource" target="_blank">Speech resource</a>.
+
+Follow these steps to assess your pronunciation of the reference text:
+
+1. Go to **Pronunciation Assessment** in the [Speech Studio](https://aka.ms/speechstudio/pronunciationassessment).
+
+ :::image type="content" source="media/pronunciation-assessment/pa.png" alt-text="Screenshot of how to go to Pronunciation Assessment on Speech Studio.":::
+
+1. Choose a supported [language](language-support.md?tabs=pronunciation-assessment) for which you want to evaluate pronunciation.
+
+ :::image type="content" source="media/pronunciation-assessment/pa-language.png" alt-text="Screenshot of choosing a supported language that you want to evaluate the pronunciation.":::
+
+1. Choose from the provided text samples, or under the **Enter your own script** label, enter your own reference text.
+
+   When reading the text, stay close to the microphone to make sure the recorded voice isn't too quiet.
+
+ :::image type="content" source="media/pronunciation-assessment/pa-record.png" alt-text="Screenshot of where to record audio with a microphone.":::
+
+   Alternatively, you can upload recorded audio for pronunciation assessment. After a successful upload, the audio is automatically evaluated by the system, as shown in the following screenshot.
+
+ :::image type="content" source="media/pronunciation-assessment/pa-upload.png" alt-text="Screenshot of uploading recorded audio to be assessed.":::
+
+## Pronunciation assessment results
+
+Once you've recorded the reference text or uploaded recorded audio, the **Assessment result** is displayed. The result includes your spoken audio and feedback on its accuracy and fluency, generated by comparing a machine transcription of the input audio with the reference text. You can listen to your spoken audio, and download it if necessary.
+
+You can also check the pronunciation assessment result in JSON. The word-level, syllable-level, and phoneme-level accuracy scores are included in the JSON file.
+
+### [Display](#tab/display)
+
+The complete transcription is shown in the **Display** window. If a word is omitted, inserted, or mispronounced compared to the reference text, the word will be highlighted according to the error type. The error types in the pronunciation assessment are represented using different colors. Yellow indicates mispronunciations, gray indicates omissions, and red indicates insertions. This visual distinction makes it easier to identify and analyze specific errors. It provides a clear overview of the error types and frequencies in the spoken audio, helping you focus on areas that need improvement. While hovering over each word, you can see accuracy scores for the whole word or specific phonemes.
++
+### [JSON](#tab/json)
+
+The complete transcription is shown in the `text` attribute. You can see accuracy scores for the whole word, syllables, and specific phonemes. You can get the same results using the Speech SDK. For information, see [How to use Pronunciation Assessment](how-to-pronunciation-assessment.md).
+
+```json
+{
+ "text": "Today was a beautiful day. We had a great time taking a long long walk in the morning. The countryside was in full bloom, yet the air was crisp and cold towards end of the day clouds came in forecasting much needed rain.",
+ "duration": 156100000,
+ "offset": 800000,
+ "json": {
+ "Id": "f583d7588c89425d8fce76686c11ed12",
+ "RecognitionStatus": 0,
+ "Offset": 800000,
+ "Duration": 156100000,
+ "DisplayText": "Today was a beautiful day. We had a great time taking a long long walk in the morning. The countryside was in full bloom, yet the air was crisp and cold towards end of the day clouds came in forecasting much needed rain.",
+ "SNR": 40.47014,
+ "NBest": [
+ {
+ "Confidence": 0.97532314,
+ "Lexical": "today was a beautiful day we had a great time taking a long long walk in the morning the countryside was in full bloom yet the air was crisp and cold towards end of the day clouds came in forecasting much needed rain",
+ "ITN": "today was a beautiful day we had a great time taking a long long walk in the morning the countryside was in full bloom yet the air was crisp and cold towards end of the day clouds came in forecasting much needed rain",
+ "MaskedITN": "today was a beautiful day we had a great time taking a long long walk in the morning the countryside was in full bloom yet the air was crisp and cold towards end of the day clouds came in forecasting much needed rain",
+ "Display": "Today was a beautiful day. We had a great time taking a long long walk in the morning. The countryside was in full bloom, yet the air was crisp and cold towards end of the day clouds came in forecasting much needed rain.",
+ "PronunciationAssessment": {
+ "AccuracyScore": 92,
+ "FluencyScore": 81,
+ "CompletenessScore": 93,
+ "PronScore": 85.6
+ },
+ "Words": [
+ // Words preceding "countryside" are omitted for brevity...
+ {
+ "Word": "countryside",
+ "Offset": 66200000,
+ "Duration": 7900000,
+ "PronunciationAssessment": {
+ "AccuracyScore": 30,
+ "ErrorType": "Mispronunciation"
+ },
+ "Syllables": [
+ {
+ "Syllable": "kahn",
+ "PronunciationAssessment": {
+ "AccuracyScore": 3
+ },
+ "Offset": 66200000,
+ "Duration": 2700000
+ },
+ {
+ "Syllable": "triy",
+ "PronunciationAssessment": {
+ "AccuracyScore": 19
+ },
+ "Offset": 69000000,
+ "Duration": 1100000
+ },
+ {
+ "Syllable": "sayd",
+ "PronunciationAssessment": {
+ "AccuracyScore": 51
+ },
+ "Offset": 70200000,
+ "Duration": 3900000
+ }
+ ],
+ "Phonemes": [
+ {
+ "Phoneme": "k",
+ "PronunciationAssessment": {
+ "AccuracyScore": 0
+ },
+ "Offset": 66200000,
+ "Duration": 900000
+ },
+ {
+ "Phoneme": "ah",
+ "PronunciationAssessment": {
+ "AccuracyScore": 0
+ },
+ "Offset": 67200000,
+ "Duration": 1000000
+ },
+ {
+ "Phoneme": "n",
+ "PronunciationAssessment": {
+ "AccuracyScore": 11
+ },
+ "Offset": 68300000,
+ "Duration": 600000
+ },
+ {
+ "Phoneme": "t",
+ "PronunciationAssessment": {
+ "AccuracyScore": 16
+ },
+ "Offset": 69000000,
+ "Duration": 300000
+ },
+ {
+ "Phoneme": "r",
+ "PronunciationAssessment": {
+ "AccuracyScore": 27
+ },
+ "Offset": 69400000,
+ "Duration": 300000
+ },
+ {
+ "Phoneme": "iy",
+ "PronunciationAssessment": {
+ "AccuracyScore": 15
+ },
+ "Offset": 69800000,
+ "Duration": 300000
+ },
+ {
+ "Phoneme": "s",
+ "PronunciationAssessment": {
+ "AccuracyScore": 26
+ },
+ "Offset": 70200000,
+ "Duration": 1700000
+ },
+ {
+ "Phoneme": "ay",
+ "PronunciationAssessment": {
+ "AccuracyScore": 56
+ },
+ "Offset": 72000000,
+ "Duration": 1300000
+ },
+ {
+ "Phoneme": "d",
+ "PronunciationAssessment": {
+ "AccuracyScore": 100
+ },
+ "Offset": 73400000,
+ "Duration": 700000
+ }
+ ]
+ },
+ // Words following "countryside" are omitted for brevity...
+ ]
+ }
+ ]
+ }
+}
+```
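+
+If you download the JSON result, a short script like the following (a rough sketch; the file name and the 60-point threshold are placeholders) can list the words flagged as mispronounced or with low accuracy scores:
+
+```python
+# Sketch: list low-scoring or mispronounced words from a downloaded assessment JSON file.
+# The file name and the 60-point threshold are placeholders.
+import json
+
+with open("assessment-result.json", encoding="utf-8") as f:
+    result = json.load(f)
+
+for word in result["json"]["NBest"][0]["Words"]:
+    assessment = word.get("PronunciationAssessment", {})
+    score = assessment.get("AccuracyScore", 100)
+    error_type = assessment.get("ErrorType", "None")
+    if score < 60 or error_type != "None":
+        print(f'{word["Word"]}: accuracy {score}, error type {error_type}')
+```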
+++
+### Assessment scores in streaming mode
+
+Pronunciation Assessment supports uninterrupted streaming mode. The Speech Studio demo allows for up to 60 minutes of recording in streaming mode for evaluation. As long as you don't press the stop recording button, the evaluation process doesn't finish and you can pause and resume evaluation conveniently.
+
+Pronunciation assessment evaluates three aspects of pronunciation: accuracy, fluency, and completeness. At the bottom of **Assessment result**, the **Pronunciation score** is shown as an aggregated overall score that combines three subscores: **Accuracy score**, **Fluency score**, and **Completeness score**. In streaming mode, the accuracy and fluency scores vary throughout the recording process, so Speech Studio displays an approximate overall score incrementally before the end of the evaluation, weighted only by the accuracy and fluency scores. The **Completeness score** is calculated only at the end of the evaluation, after you press the stop button, so the final overall score is aggregated from the **Accuracy score**, **Fluency score**, and **Completeness score** with their respective weights.
+
+Refer to the demo examples below for the whole process of evaluating pronunciation in streaming mode.
+
+**Start recording**
+
+As you start recording, the scores at the bottom begin to change from 0.
++
+**During recording**
+
+While recording a long paragraph, you can pause at any time. You can continue to evaluate your recording as long as you don't press the stop button.
++
+**Finish recording**
+
+After you press the stop button, you can see **Pronunciation score**, **Accuracy score**, **Fluency score**, and **Completeness score** at the bottom.
++
+## Responsible AI
+
+An AI system includes not only the technology, but also the people who will use it, the people who will be affected by it, and the environment in which it is deployed. Read the transparency notes to learn about responsible AI use and deployment in your systems.
+
+* [Transparency note and use cases](/legal/cognitive-services/speech-service/pronunciation-assessment/transparency-note-pronunciation-assessment?context=/azure/cognitive-services/speech-service/context/context)
+* [Characteristics and limitations](/legal/cognitive-services/speech-service/pronunciation-assessment/characteristics-and-limitations-pronunciation-assessment?context=/azure/cognitive-services/speech-service/context/context)
+
+## Next steps
+
+- Use [pronunciation assessment with the Speech SDK](how-to-pronunciation-assessment.md)
+- Read the blog about [use cases](https://aka.ms/pronunciationassessment/techblog)
ai-services Quickstart Custom Commands Application https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/quickstart-custom-commands-application.md
+
+ Title: 'Quickstart: Create a voice assistant using Custom Commands - Speech service'
+
+description: In this quickstart, you create and test a basic Custom Commands application in Speech Studio.
++++++ Last updated : 02/19/2022++++
+# Quickstart: Create a voice assistant with Custom Commands
++
+In this quickstart, you create and test a basic Custom Commands application using Speech Studio. You will also be able to access this application from a Windows client app.
+
+## Region Availability
+At this time, Custom Commands supports speech resources created in regions that have [voice assistant capabilities](./regions.md#voice-assistants).
+
+## Prerequisites
+
+> [!div class="checklist"]
+> * <a href="https://portal.azure.com/#create/Microsoft.CognitiveServicesSpeechServices" target="_blank">Create a Speech resource in a region that supports Custom Commands.</a> Refer to the **Region Availability** section above for a list of supported regions.
+> * Download the sample
+[Smart Room Lite](https://aka.ms/speech/cc-quickstart) json file.
+> * Download the latest version of [Windows Voice Assistant Client](https://aka.ms/speech/va-samples-wvac).
+
+## Go to the Speech Studio for Custom Commands
+
+1. In a web browser, go to [Speech Studio](https://aka.ms/speechstudio/customcommands).
+1. Enter your credentials to sign in to the portal.
+
+ The default view is your list of Speech resources.
+ > [!NOTE]
+ > If you don't see the select resource page, you can navigate there by choosing "Resource" from the settings menu on the top bar.
+
+1. Select your Speech resource, and then select **Go to Studio**.
+1. Select **Custom Commands**.
+
+ The default view is a list of the Custom Commands applications you have under your selected resource.
+
+## Import an existing application as a new Custom Commands project
+
+1. Select **New project** to create a project.
+
+1. In the **Name** box, enter a project name, such as `Smart-Room-Lite` (or something else of your choice).
+1. In the **Language** list, select **English (United States)**.
+1. Select **Browse files** and in the browse window, select the **SmartRoomLite.json** file.
+
+ > [!div class="mx-imgBorder"]
+ > ![Create a project](media/custom-commands/import-project.png)
+
+1. In the **LUIS authoring resource** list, select an authoring resource. If there are no valid authoring resources, create one by selecting **Create new LUIS authoring resource**.
+
+ 1. In the **Resource Name** box, enter the name of the resource.
+ 1. In the **Resource Group** list, select a resource group.
+ 1. In the **Location** list, select a location.
+ 1. In the **Pricing Tier** list, select a tier.
+
+
+ > [!NOTE]
+ > You can create resource groups by entering the desired resource group name into the "Resource Group" field. The resource group will be created when **Create** is selected.
++
+1. Next, select **Create** to create your project.
+1. After the project is created, select your project.
+You should now see an overview of your new Custom Commands application.
+
+## Try out some voice commands
+1. Select **Train** at the top of the right pane.
+1. Once training is completed, select **Test** and try out the following utterances:
+ - Turn on the tv
+ - Set the temperature to 80 degrees
+ - Turn it off
+ - The tv
+ - Set an alarm for 5 PM
+
+## Integrate Custom Commands application in an assistant
+Before you can access this application from outside Speech Studio, you need to publish it. To publish the application, you need to configure a LUIS prediction resource.
+
+### Update prediction LUIS resource
++
+1. Select **Settings** in the left pane and select **LUIS resources** in the middle pane.
+1. Select a prediction resource, or create one by selecting **Create new resource**.
+1. Select **Save**.
+
+ > [!div class="mx-imgBorder"]
+ > ![Set LUIS Resources](media/custom-commands/set-luis-resources.png)
+
+> [!NOTE]
+> Because the authoring resource supports only 1,000 prediction endpoint requests per month, you must set a LUIS prediction resource before publishing your Custom Commands application.
+
+### Publish the application
+
+Select **Publish** at the top of the right pane. When publishing completes, a new window appears. Note the **Application ID** and **Speech resource key** values; you need them to access the application from outside Speech Studio.
+
+Alternatively, you can get these values from the **Settings** > **General** section.
+
+### Access application from client
+
+This article uses the Windows Voice Assistant Client that you downloaded as part of the prerequisites. Unzip the downloaded folder.
+1. Launch **VoiceAssistantClient.exe**.
+1. Create a new publish profile and enter a value for **Connection Profile**. In the **General Settings** section, enter the **Subscription Key** (the same as the **Speech resource key** value that you saved when publishing the application), the **Subscription key region**, and the **Custom commands app ID**.
+ > [!div class="mx-imgBorder"]
+ > ![Screenshot that highlights the General Settings section for creating a WVAC profile.](media/custom-commands/create-profile.png)
+1. Select **Save and Apply Profile**.
+1. Now try out the following inputs via speech or text:
+ > [!div class="mx-imgBorder"]
+ > ![WVAC Create profile](media/custom-commands/conversation.png)
++
+> [!TIP]
+> You can select entries in **Activity Log** to inspect the raw responses being sent from the Custom Commands service.
+
+## Next steps
+
+In this article, you used an existing application. Next, in the [how-to sections](./how-to-develop-custom-commands-application.md), you learn how to design, develop, debug, test and integrate a Custom Commands application from scratch.
ai-services Multi Device Conversation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/quickstarts/multi-device-conversation.md
+
+ Title: 'Quickstart: Multi-device Conversation - Speech service'
+
+description: In this quickstart, you'll learn how to create and join clients to a multi-device conversation by using the Speech SDK.
++++++ Last updated : 06/25/2020+
+zone_pivot_groups: programming-languages-set-nine
+ms.devlang: cpp, csharp
+++
+# Quickstart: Multi-device Conversation
++
+> [!NOTE]
+> The Speech SDK for Java, JavaScript, Objective-C, and Swift support Multi-device Conversation, but we haven't yet included a guide here.
++
ai-services Setup Platform https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/quickstarts/setup-platform.md
+
+ Title: Install the Speech SDK
+
+description: In this quickstart, you'll learn how to install the Speech SDK for your preferred programming language.
++++++ Last updated : 09/16/2022++
+zone_pivot_groups: programming-languages-speech-sdk
++
+# Install the Speech SDK
+++++++++
+## Next steps
+
+* [Speech to text quickstart](../get-started-speech-to-text.md)
+* [Text to speech quickstart](../get-started-text-to-speech.md)
+* [Speech translation quickstart](../get-started-speech-translation.md)
ai-services Voice Assistants https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/quickstarts/voice-assistants.md
+
+ Title: 'Quickstart: Create a custom voice assistant - Speech service'
+
+description: In this quickstart, you use the Speech SDK to create a custom voice assistant.
++++++ Last updated : 06/25/2020+
+ms.devlang: csharp, golang, java
+
+zone_pivot_groups: programming-languages-voice-assistants
++
+# Quickstart: Create a custom voice assistant
++
+> [!NOTE]
+> The Speech SDK for C++, JavaScript, Objective-C, Python, and Swift support custom voice assistants, but we haven't yet included a guide here.
++++
ai-services Record Custom Voice Samples https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/record-custom-voice-samples.md
+
+ Title: "Recording custom voice samples - Speech service"
+
+description: Make a production-quality custom voice by preparing a robust script, hiring good voice talent, and recording professionally.
++++++ Last updated : 10/14/2022+++
+# Recording voice samples for Custom Neural Voice
+
+This article provides instructions on preparing high-quality voice samples to create a professional voice model by using a Custom Neural Voice Pro project.
+
+Creating a high-quality production custom neural voice from scratch isn't a casual undertaking. The central component of a custom neural voice is a large collection of audio samples of human speech. It's vital that these audio recordings be of high quality. Choose a voice talent who has experience making these kinds of recordings, and have them recorded by a recording engineer using professional equipment.
+
+Before you can make these recordings, though, you need a script: the words that will be spoken by your voice talent to create the audio samples.
+
+Many small but important details go into creating a professional voice recording. This guide is a roadmap for a process that will help you get good, consistent results.
+
+## Tips for preparing data for a high-quality voice
+
+A highly natural custom neural voice depends on several factors, like the quality and size of your training data.
+
+The quality of your training data is a primary factor. For example, in the same training set, consistent volume, speaking rate, speaking pitch, and speaking style are essential to create a high-quality custom neural voice. You should also avoid background noise in the recording and make sure the script and recording match. To ensure the quality of your data, you need to follow [script selection criteria](#script-selection-criteria) and [recording requirements](#recording-your-script).
+
+Regarding the size of the training data, in most cases you can build a reasonable custom neural voice with 500 utterances. According to our tests, adding more training data in most languages doesn't necessarily improve the naturalness of the voice itself (as measured by the MOS score). However, with more training data that covers more word instances, you're more likely to reduce the proportion of dissatisfactory parts of speech in the voice, such as glitches. To hear what dissatisfactory parts of speech sound like, refer to [the GitHub examples](https://github.com/Azure-Samples/Cognitive-Speech-TTS/blob/master/CustomVoice/DSAT-examples.md).
+
+In some cases, you may want a voice persona with unique characteristics. For example, a cartoon persona needs a voice with a special speaking style, or a voice that is very dynamic in intonation. For such cases, we recommend that you prepare at least 1000 (preferably 2000) utterances, and record them at a professional recording studio. To learn more about how to improve the quality of your voice model, see [characteristics and limitations for using Custom Neural Voice](/legal/cognitive-services/speech-service/custom-neural-voice/characteristics-and-limitations-custom-neural-voice?context=/azure/ai-services/speech-service/context/context).
+
+## Voice recording roles
+
+There are four basic roles in a custom neural voice recording project:
+
+Role|Purpose
+-|-
+Voice talent |This person's voice will form the basis of the custom neural voice.
+Recording engineer |Oversees the technical aspects of the recording and operates the recording equipment.
+Director |Prepares the script and coaches the voice talent's performance.
+Editor |Finalizes the audio files and prepares them for upload to Speech Studio.
+
+An individual may fill more than one role. This guide assumes that you'll be filling the director role and hiring both a voice talent and a recording engineer. If you want to make the recordings yourself, this article includes some information about the recording engineer role. The editor role isn't needed until after the recording session, and can be performed by the director or the recording engineer.
+
+## Choose your voice talent
+
+Actors with experience in voiceover, voice character work, announcing or newsreading make good voice talent. Choose voice talent whose natural voice you like. It's possible to create unique "character" voices, but it's much harder for most talent to perform them consistently, and the effort can cause voice strain. The single most important factor for choosing voice talent is consistency. Your recordings for the same voice style should all sound like they were made on the same day in the same room. You can approach this ideal through good recording practices and engineering.
+
+Your voice talent must be able to speak with a consistent rate, volume level, pitch, and tone, and with clear diction. They also need to be able to control their pitch variation, emotional affect, and speech mannerisms. Recording voice samples can be more fatiguing than other kinds of voice work, so most voice talent can usually only record for two or three hours a day. Limit sessions to three or four days a week, with a day off in between if possible.
+
+Work with your voice talent to develop a persona that defines the overall sound and emotional tone of the custom neural voice, making sure to pinpoint what "neutral" sounds like for that persona. Using the Custom Neural Voice capability, you can train a model that speaks with emotion, so define the speaking styles of your persona and ask your voice talent to read the script in a way that resonates with the styles you want.
+
+For example, a persona with a naturally upbeat personality would carry a note of optimism even when they speak neutrally. However, this personality trait should be subtle and consistent. Listen to readings by existing voices to get an idea of what you're aiming for.
+
+> [!TIP]
+> Usually, you'll want to own the voice recordings you make. Your voice talent should be amenable to a work-for-hire contract for the project.
+
+## Create a script
+
+The starting point of any custom neural voice recording session is the script, which contains the utterances to be spoken by your voice talent. The term "utterances" encompasses both full sentences and shorter phrases. Building a custom neural voice requires at least 300 recorded utterances as training data.
++
+The utterances in your script can come from anywhere: fiction, non-fiction, transcripts of speeches, news reports, and anything else available in printed form. For a brief discussion of potential legal issues, see the ["Legalities"](#legalities) section. You can also write your own text.
+
+Your utterances don't need to come from the same source, the same kind of source, or have anything to do with each other. However, if you use set phrases (for example, "You have successfully logged in") in your speech application, make sure to include them in your script. It will give your custom neural voice a better chance of pronouncing those phrases well.
+
+We recommend the recording scripts include both general sentences and domain-specific sentences. For example, if you plan to record 2,000 sentences, 1,000 of them could be general sentences, another 1,000 of them could be sentences from your target domain or the use case of your application.
+
+We provide [sample scripts in the 'General', 'Chat' and 'Customer Service' domains for each language](https://github.com/Azure-Samples/Cognitive-Speech-TTS/tree/master/CustomVoice/script) to help you prepare your recording scripts. You can use these Microsoft shared scripts for your recordings directly or use them as a reference to create your own.
+
+### Script selection criteria
+
+Below are some general guidelines that you can follow to create a good corpus (recorded audio samples) for custom neural voice training.
+
+- Balance your script to cover different sentence types in your domain including statements, questions, exclamations, long sentences, and short sentences.
+
+  Each sentence should contain 4 to 30 words, and no duplicate sentences should be included in your script.<br>
+ For how to balance the different sentence types, refer to the following table:
+
+ | Sentence types | Coverage |
+ | : | : |
+ | Statement sentences | Statement sentences should be 70-80% of the script.|
+ | Question sentences | Question sentences should be about 10%-20% of your domain script, including 5%-10% of rising and 5%-10% of falling tones. |
+ | Exclamation sentences| Exclamation sentences should be about 10%-20% of your script.|
+ | Short word/phrase| Short word/phrase scripts should be about 10% of total utterances, with 5 to 7 words per case. |
+
+ > [!NOTE]
+  > Short words/phrases should be separated with commas. They help remind your voice talent to pause briefly when reading them.
+
+ Best practices include:
+ - Balanced coverage for Parts of Speech, like verbs, nouns, adjectives, and so on.
+ - Balanced coverage for pronunciations. Include all letters from A to Z so the Text to speech engine learns how to pronounce each letter in your style.
+ - Readable, understandable, common-sense scripts for the speaker to read.
+ - Avoid too many similar patterns for words/phrases, like "easy" and "easier".
+ - Include different formats of numbers: address, unit, phone, quantity, date, and so on, in all sentence types.
+ - Include spelling sentences if it's something your custom neural voice will be used to read. For example, "The spelling of Apple is A P P L E".
+
+- Don't put multiple sentences into one line/one utterance. Separate each line by utterance.
+
+- Make sure the sentence is clean. Generally, don't include too many non-standard words like numbers or abbreviations as they're usually hard to read. Some applications may require the reading of many numbers or acronyms. In these cases, you can include these words, but normalize them in their spoken form.
+
+  Here are some examples of best practices:
+ - For lines with abbreviations, instead of "BTW", write "by the way".
+ - For lines with digits, instead of "911", write "nine one one".
+ - For lines with acronyms, instead of "ABC", write "A B C".
+
+ With that, make sure your voice talent pronounces these words in an expected way. Keep your script and recordings matched during the training process.
+
+- Your script should include many different words and sentences with different kinds of sentence lengths, structures, and moods.
+
+- Check the script carefully for errors. If possible, have someone else check it too. When you run through the script with your voice talent, you may catch more mistakes.
+
+### Difference between voice talent script and training script
+
+The training script can differ from the voice talent script, especially for scripts that contain digits, symbols, abbreviations, date, and time. Scripts prepared for the voice talent must follow native reading conventions, such as 50% and $45. The scripts used for training must be normalized to match the audio recording, such as *fifty percent* and *forty-five dollars*.
+
+> [!NOTE]
+> We provide some example scripts for the voice talent on [GitHub](https://github.com/Azure-Samples/Cognitive-Speech-TTS/tree/master/CustomVoice/script). To use the example scripts for training, you must normalize them according to the recordings of your voice talent before uploading the file.
+
+The following table shows the difference between scripts for voice talent and the normalized script for training.
+
+| Category |Voice talent script example | Training script example (normalized) |
+| | | |
+| Digits |123| one hundred and twenty-three|
+| Symbols |50%| fifty percent|
+| Abbreviation |ASAP| as soon as possible|
+| Date and time |March 3rd at 5:00 PM| March third at five PM|
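+
+As a rough illustration of this kind of normalization, the following sketch expands digits, percentages, and a few abbreviations. It assumes the third-party num2words package, and the abbreviation map is a toy example; real scripts usually need careful manual review as well.
+
+```python
+# Rough illustration of normalizing a voice talent script into a training script.
+# Assumes the third-party num2words package; the abbreviation map is a toy example.
+import re
+from num2words import num2words
+
+ABBREVIATIONS = {"ASAP": "as soon as possible"}
+
+def normalize(line: str) -> str:
+    # Expand percentages such as "50%" to "fifty percent".
+    line = re.sub(r"(\d+)%", lambda m: f"{num2words(int(m.group(1)))} percent", line)
+    # Expand remaining bare digits such as "123" to words.
+    line = re.sub(r"\b\d+\b", lambda m: num2words(int(m.group(0))), line)
+    # Expand known abbreviations.
+    for abbreviation, expansion in ABBREVIATIONS.items():
+        line = line.replace(abbreviation, expansion)
+    return line
+
+print(normalize("123"))   # one hundred and twenty-three
+print(normalize("50%"))   # fifty percent
+print(normalize("ASAP"))  # as soon as possible
+```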
+
+### Typical defects of a script
+
+A poor-quality script can adversely affect the training results. To achieve high-quality training results, it's crucial to avoid defects.
+
+Script defects generally fall into the following categories:
+
+| Category | Example |
+| : | : |
+| Meaningless content. | "Colorless green ideas sleep furiously."|
+| Incomplete sentences. |- "This was my last eve" (no subject, no specific meaning) <br>- "He's obviously already funny (no quote mark in the end, it's not a complete sentence) |
+| Typo in the sentences. | - Start with a lower case<br>- No ending punctuation if needed<br> - Misspelling <br>- Lack of punctuation: no period in the end (except news title)<br>- End with symbols, except comma, question, exclamation <br>- Wrong format, such as:<br> &emsp;- 45$ (should be $45)<br> &emsp;- No space or excess space between word/punctuation |
+|Duplication in similar format, one per each pattern is enough. |- "Now is 1pm in New York"<br>- "Now is 2pm in New York"<br>- "Now is 3pm in New York"<br>- "Now is 1pm in Seattle"<br>- "Now is 1pm in Washington D.C." |
+|Uncommon foreign words: only commonly used foreign words are acceptable in the script. | In English one might use the French word "faux" in common speech, but a French expression such as "coincer la bulle" would be uncommon. |
+|Emoji or any other uncommon symbols | |
+
+### Script format
+
+The script is for use during recording sessions, so you can set it up any way you find easy to work with. Create the text file that's required by Speech Studio separately.
+
+A basic script format contains three columns:
+
+- The number of the utterance, starting at 1. Numbering makes it easy for everyone in the studio to refer to a particular utterance ("let's try number 356 again"). You can use the Microsoft Word paragraph numbering feature to number the rows of the table automatically.
+- A blank column where you'll write the take number or time code of each utterance to help you find it in the finished recording.
+- The text of the utterance itself.
+
+ ![Sample script](media/custom-voice/script.png)
+
+> [!NOTE]
+> Most studios record in short segments known as "takes". Each take typically contains 10 to 24 utterances. Just noting the take number is sufficient to find an utterance later. If you're recording in a studio that prefers to make longer recordings, you'll want to note the time code instead. The studio will have a prominent time display.
+
+Leave enough space after each row to write notes. Be sure that no utterance is split between pages. Number the pages, and print your script on one side of the paper.
+
+Print three copies of the script: one for the voice talent, one for the recording engineer, and one for the director (you). Use a paper clip instead of staples: an experienced voice artist will separate the pages to avoid making noise as the pages are turned.
+
+### Voice talent statement
+
+To train a neural voice, you must [create a voice talent profile](how-to-custom-voice-talent.md) with an audio file recorded by the voice talent consenting to the usage of their speech data to train a custom voice model. When preparing your recording script, make sure you include the statement sentence.
+
+### Legalities
+
+Under copyright law, an actor's reading of copyrighted text might be a performance for which the author of the work should be compensated. This performance won't be recognizable in the final product, the custom neural voice. Even so, the legality of using a copyrighted work for this purpose isn't well established. Microsoft can't provide legal advice on this issue; consult your own legal counsel.
+
+Fortunately, it's possible to avoid these issues entirely. There are many sources of text you can use without permission or license.
+
+|Text source|Description|
+|-|-|
+|[CMU Arctic corpus](https://pyroomacoustics.readthedocs.io/en/pypi-release/pyroomacoustics.datasets.cmu_arctic.html)|About 1100 sentences selected from out-of-copyright works specifically for use in speech synthesis projects. An excellent starting point.|
+|Works no longer<br>under copyright|Typically works published prior to 1923. For English, [Project Gutenberg](https://www.gutenberg.org/) offers tens of thousands of such works. You may want to focus on newer works, as the language will be closer to modern English.|
+|Government&nbsp;works|Works created by the United States government are not copyrighted in the United States, though the government may claim copyright in other countries/regions.|
+|Public domain|Works for which copyright has been explicitly disclaimed or that have been dedicated to the public domain. It may not be possible to waive copyright entirely in some jurisdictions.|
+|Permissively licensed works|Works distributed under a license like Creative Commons or the GNU Free Documentation License (GFDL). Wikipedia uses the GFDL. Some licenses, however, may impose restrictions on performance of the licensed content that may impact the creation of a custom neural voice model, so read the license carefully.|
+
+## Recording your script
+
+Record your script at a professional recording studio that specializes in voice work. They'll have a recording booth, the right equipment, and the right people to operate it. It's recommended not to skimp on recording.
+
+Discuss your project with the studio's recording engineer and listen to their advice. The recording should have little or no dynamic range compression (maximum of 4:1). It's critical that the audio has consistent volume and a high signal-to-noise ratio, while being free of unwanted sounds.
+
+### Recording requirements
+
+To achieve high-quality training results, adhere to the following requirements during recording or data preparation:
+
+- Clear and well pronounced
+
+- Natural speed: not too slow or too fast between audio files.
+
+- Appropriate volume, prosody, and breaks: stable within the same sentence and between sentences, with correct breaks for punctuation.
+
+- No noise during recording
+
+- Fit your persona design
+
+- No wrong accent: fit to the target design
+
+- No wrong pronunciation
+
+As a best practice, you can refer to the following specification when preparing your audio samples.
+
+| Property | Value |
+| : | : |
+| File format | *.wav, Mono |
+| Sampling rate | 24 KHz |
+| Sample format | 16 bit, PCM |
+| Peak volume levels | -3 dB to -6 dB |
+| SNR | > 35 dB |
+| Silence | - There should be some silence (100 ms recommended) at the beginning and end, but no longer than 200 ms<br>- Silence between words or phrases < -30 dB<br>- Silence in the wave after the last word is spoken < -60 dB |
+| Environment noise, echo | - The level of noise at the start of the wave before speaking < -70 dB |
+
+> [!Note]
+> You can record at higher sampling rate and bit depth, for example in the format of 48 KHz 24 bit PCM. During the custom neural voice training, we'll down sample it to 24 KHz 16 bit PCM automatically.
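+
+To sanity-check a recording against this specification, a quick script like the following (a minimal sketch that uses Python's built-in wave module; the file name is a placeholder) reports the basic properties of a .wav file:
+
+```python
+# Minimal sketch: report channel count, sample rate, and bit depth of a .wav file
+# so you can compare them against the recommended recording specification.
+import wave
+
+file_name = "sample.wav"  # placeholder
+
+with wave.open(file_name, "rb") as wav_file:
+    channels = wav_file.getnchannels()
+    sample_rate = wav_file.getframerate()
+    bit_depth = wav_file.getsampwidth() * 8
+    duration_seconds = wav_file.getnframes() / sample_rate
+
+print(f"Channels: {channels} (1 = mono, as recommended)")
+print(f"Sample rate: {sample_rate} Hz (24000 Hz or higher recommended)")
+print(f"Bit depth: {bit_depth}-bit (16-bit PCM recommended)")
+print(f"Duration: {duration_seconds:.2f} seconds")
+```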
+
+A higher signal-to-noise ratio (SNR) indicates lower noise in your audio. You can typically reach a 35+ SNR by recording at professional studios. Audio with an SNR below 20 can result in obvious noise in your generated voice.
+
+Consider re-recording any utterances with low pronunciation scores or poor signal-to-noise ratios. If you can't re-record, consider excluding those utterances from your data.
+
+### Typical audio errors
+
+For high-quality training results, avoiding audio errors is highly recommended. Audio errors usually fall into the following categories:
+
+- Audio file name doesn't match the script ID.
+- WAV file has an invalid format and can't be read.
+- Audio sampling rate is lower than 16 KHz. It's recommended that the .wav file sampling rate be equal to or higher than 24 KHz for a high-quality neural voice.
+- Volume peak isn't within the range of -3 dB (70% of max volume) to -6 dB (50%).
+- Waveform overflow: the waveform is cut at its peak value and is thus not complete.
+
+ ![waveform overflow](media/custom-voice/overflow.png)
+
+- The silent parts of the recording aren't clean; you can hear sounds such as ambient noise, mouth noise and echo.
+
+  For example, the following audio contains environmental noise between utterances.
+
+ ![environment noise](media/custom-voice/environment-noise.png)
+
+  The following sample contains signs of DC offset or echo.
+
+ ![DC offset or echo](media/custom-voice/dc-offset-noise.png)
+
+- The overall volume is too low. Your data will be tagged as an issue if the volume is lower than -18 dB (10% of max volume). Make sure all audio files are consistent at the same volume level.
+
+ ![overall volume](media/custom-voice/overall-volume.png)
+
+- No silence before the first word or after the last word. Also, the start or end silence shouldn't be longer than 200 ms or shorter than 100 ms.
+
+ ![No silence](media/custom-voice/no-silence.png)
+
+### Do it yourself
+
+If you want to make the recording yourself, instead of going into a recording studio, here's a short primer. Thanks to the rise of home recording and podcasting, it's easier than ever to find good recording advice and resources online.
+
+Your "recording booth" should be a small room with no noticeable echo or "room tone." It should be as quiet and soundproof as possible. Drapes on the walls can be used to reduce echo and neutralize or "deaden" the sound of the room.
+
+Use a high-quality studio condenser microphone ("mic" for short) intended for recording voice. Sennheiser, AKG, and even newer Zoom mics can yield good results. You can buy a mic, or rent one from a local audio-visual rental firm. Look for one with a USB interface. This type of mic conveniently combines the microphone element, preamp, and analog-to-digital converter into one package, simplifying hookup.
+
+You may also use an analog microphone. Many rental houses offer "vintage" microphones known for their voice character. Note that professional analog gear uses balanced XLR connectors, rather than the 1/4-inch plug that's used in consumer equipment. If you go analog, you'll also need a preamp and a computer audio interface with these connectors.
+
+Install the microphone on a stand or boom, and install a pop filter in front of the microphone to eliminate noise from "plosive" consonants like "p" and "b." Some microphones come with a suspension mount that isolates them from vibrations in the stand, which is helpful.
+
+The voice talent must stay at a consistent distance from the microphone. Use tape on the floor to mark where they should stand. If the talent prefers to sit, take special care to monitor mic distance and avoid chair noise.
+
+Use a stand to hold the script. Avoid angling the stand so that it can reflect sound toward the microphone.
+
+The person operating the recording equipment (the recording engineer) should be in a separate room from the talent, with some way to talk to the talent in the recording booth (a *talkback circuit*).
+
+The recording should contain as little noise as possible, with a goal of -80 dB.
+
+Listen closely to a recording of silence in your "booth," figure out where any noise is coming from, and eliminate the cause. Common sources of noise are air vents, fluorescent light ballasts, traffic on nearby roads, and equipment fans (even notebook PCs might have fans). Microphones and cables can pick up electrical noise from nearby AC wiring, usually a hum or buzz. A buzz can also be caused by a *ground loop*, which is caused by having equipment plugged into more than one electrical circuit.
+
+> [!TIP]
+> In some cases, you might be able to use an equalizer or a noise reduction software plug-in to help remove noise from your recordings, although it is always best to stop it at its source.
+
+Set levels so that most of the available dynamic range of digital recording is used without overdriving. That means set the audio loud, but not so loud that it becomes distorted. An example of the waveform of a good recording is shown in the following image:
+
+![A good recording waveform](media/custom-voice/good-recording.png)
+
+Here, most of the range (height) is used, but the highest peaks of the signal don't reach the top or bottom of the window. You can also see that the silence in the recording approximates a thin horizontal line, indicating a low noise floor. This recording has acceptable dynamic range and signal-to-noise ratio.
+
+Record directly into the computer via a high-quality audio interface or a USB port, depending on the mic you're using. For analog, keep the audio chain simple: mic, preamp, audio interface, computer. You can license both [Avid Pro Tools](https://www.avid.com/en/pro-tools) and [Adobe Audition](https://www.adobe.com/products/audition.html) monthly at a reasonable cost. If your budget is extremely tight, try the free [Audacity](https://www.audacityteam.org/).
+
+Record at 44.1 KHz 16 bit monophonic (CD quality) or better. Current state-of-the-art is 48 KHz 24 bit, if your equipment supports it. You'll down-sample your audio to 24 KHz 16-bit before you submit it to Speech Studio. Still, it pays to have a high-quality original recording in the event that edits are needed.
+
+Ideally, have different people serve in the roles of director, engineer, and talent. Don't try to do it all yourself. In a pinch, one person can be both the director and the engineer.
+
+### Before the session
+
+To avoid wasting studio time, run through the script with your voice talent before the recording session. While the voice talent becomes familiar with the text, they can clarify the pronunciation of any unfamiliar words.
+
+> [!NOTE]
+> Most recording studios offer electronic display of scripts in the recording booth. In this case, type your run-through notes directly into the script's document. You'll still want a paper copy to take notes on during the session, though. Most engineers will want a hard copy, too. And you'll still want a third printed copy as a backup for the talent in case the computer is down.
+
+Your voice talent might ask which word you want emphasized in an utterance (the "operative word"). Tell them that you want a natural reading with no particular emphasis. Emphasis can be added when speech is synthesized; it shouldn't be a part of the original recording.
+
+Direct the talent to pronounce words distinctly. Every word of the script should be pronounced as written. Sounds shouldn't be omitted or slurred together, as is common in casual speech, *unless they have been written that way in the script*.
+
+|Written text|Unwanted casual pronunciation|
+|-|-|
+|never going to give you up|never gonna give you up|
+|there are four lights|there're four lights|
+|how's the weather today|how's th' weather today|
+|say hello to my little friend|say hello to my lil' friend|
+
+The talent should *not* add distinct pauses between words. The sentence should still flow naturally, even while sounding a little formal. This fine distinction might take practice to get right.
+
+### The recording session
+
+Create a reference recording, or *match file,* of a typical utterance at the beginning of the session. Ask the talent to repeat this line every page or so. Each time, compare the new recording to the reference. This practice helps the talent remain consistent in volume, tempo, pitch, and intonation. Meanwhile, the engineer can use the match file as a reference for levels and overall consistency of sound.
+
+The match file is especially important when you resume recording after a break or on another day. Play it a few times for the talent and have them repeat it each time until they're matching well.
+
+Coach your talent to take a deep breath and pause for a moment before each utterance. Record a couple of seconds of silence between utterances. Words should be pronounced the same way each time they appear, considering context. For example, "record" as a verb is pronounced differently from "record" as a noun.
+
+Record approximately five seconds of silence before the first recording to capture the "room tone". This practice helps Speech Studio compensate for noise in the recordings.
+
+> [!TIP]
+> All you need to capture is the voice talent, so you can make a monophonic (single-channel) recording of just their lines. However, if you record in stereo, you can use the second channel to record the chatter in the control room to capture discussion of particular lines or takes. Remove this track from the version that's uploaded to Speech Studio.
+
+Listen closely, using headphones, to the voice talent's performance. You're looking for good but natural diction, correct pronunciation, and a lack of unwanted sounds. Don't hesitate to ask your talent to re-record an utterance that doesn't meet these standards.
+
+> [!TIP]
+> If you are using a large number of utterances, a single utterance might not have a noticeable effect on the resultant custom neural voice. It might be more expedient to simply note any utterances with issues, exclude them from your dataset, and see how your custom neural voice turns out. You can always go back to the studio and record the missed samples later.
+
+Note the take number or time code on your script for each utterance. Ask the engineer to mark each utterance in the recording's metadata or cue sheet as well.
+
+Take regular breaks and provide a beverage to help your voice talent keep their voice in good shape.
+
+### After the session
+
+Modern recording studios run on computers. At the end of the session, you receive one or more audio files, not a tape. These files will probably be in WAV or AIFF format in CD quality (44.1 KHz 16-bit) or better. 24 KHz 16-bit is common and desirable. The default sampling rate for a custom neural voice is 24 KHz, so it's recommended that you use a sample rate of 24 KHz for your training data. Higher sampling rates, such as 96 KHz, are generally not needed.
+
+Speech Studio requires each provided utterance to be in its own file. Each audio file delivered by the studio contains multiple utterances. So the primary post-production task is to split up the recordings and prepare them for submission. The recording engineer might have placed markers in the file (or provided a separate cue sheet) to indicate where each utterance starts.
+
+Use your notes to find the exact takes you want, and then use a sound editing utility, such as [Avid Pro Tools](https://www.avid.com/en/pro-tools), [Adobe Audition](https://www.adobe.com/products/audition.html), or the free [Audacity](https://www.audacityteam.org/), to copy each utterance into a new file.
+
+Listen to each file carefully. At this stage, you can edit out small unwanted sounds that you missed during recording, like a slight lip smack before a line, but be careful not to remove any actual speech. If you can't fix a file, remove it from your dataset and note that you've done so.
+
+Convert each file to 16-bit samples at a 24 KHz sample rate before saving. If you recorded the studio chatter, remove the second channel. Save each file in WAV format, naming the files with the utterance number from your script.
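+
+If your editing tool doesn't batch-convert, a small script can handle this step. The following C# sketch assumes the third-party NAudio NuGet package and hypothetical file names; it keeps only the voice channel, resamples to 24 KHz, and writes 16-bit PCM WAV.
+
+```csharp
+using NAudio.Wave;
+using NAudio.Wave.SampleProviders;
+
+// Convert a studio take to 24 KHz, 16-bit, mono WAV (assumes the NAudio NuGet package).
+using var reader = new AudioFileReader("take-0001-studio.wav"); // hypothetical input file
+ISampleProvider samples = reader;
+
+// If the studio delivered stereo, keep channel 1 (the voice) and drop channel 2 (control-room chatter).
+if (reader.WaveFormat.Channels > 1)
+{
+    samples = samples.ToMono(1.0f, 0.0f);
+}
+
+// Resample to the 24 KHz rate recommended for training data.
+var resampled = new WdlResamplingSampleProvider(samples, 24000);
+
+// CreateWaveFile16 writes 16-bit PCM output.
+WaveFileWriter.CreateWaveFile16("utterance-0001.wav", resampled); // hypothetical output file
+```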
+
+Finally, create the transcript that associates each WAV file with a text version of the corresponding utterance. [Train your voice model](./how-to-custom-voice-create-voice.md) includes details of the required format. You can copy the text directly from your script. Then create a Zip file of the WAV files and the text transcript.
+
+Archive the original recordings in a safe place in case you need them later. Preserve your script and notes, too.
+
+## Next steps
+
+You're ready to upload your recordings and create your custom neural voice.
+
+> [!div class="nextstepaction"]
+> [Train your voice model](./how-to-custom-voice-create-voice.md)
ai-services Regions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/regions.md
+
+ Title: Regions - Speech service
+
+description: A list of available regions and endpoints for the Speech service, including speech to text, text to speech, and speech translation.
++++++ Last updated : 09/16/2022++++
+# Speech service supported regions
+
+The Speech service allows your application to convert audio to text, perform speech translation, and convert text to speech. The service is available in multiple regions with unique endpoints for the Speech SDK and REST APIs. You can customize your speech experience, for all regions, in the [Speech Studio](https://aka.ms/speechstudio/).
+
+Keep in mind the following points:
+
+* If your application uses a [Speech SDK](speech-sdk.md), you provide the region identifier, such as `westus`, when you create a `SpeechConfig`. Make sure the region matches the region of your subscription.
+* If your application uses one of the Speech service REST APIs, the region is part of the endpoint URI you use when making requests.
+* Keys created for a region are valid only in that region. If you attempt to use them with other regions, you get authentication errors.
+
+> [!NOTE]
+> Speech service doesn't store or process customer data outside the region the customer deploys the service instance in.
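+
+For example, if your application uses the Speech SDK for C#, creating a client for a resource in the West US region might look like the following sketch. The key value is a placeholder.
+
+```csharp
+using System;
+using Microsoft.CognitiveServices.Speech;
+
+// The region identifier must match the region of your Speech resource.
+var speechConfig = SpeechConfig.FromSubscription("YourSpeechResourceKey", "westus");
+
+// Recognize a single utterance from the default microphone.
+using var recognizer = new SpeechRecognizer(speechConfig);
+var result = await recognizer.RecognizeOnceAsync();
+Console.WriteLine(result.Text);
+```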
+
+## Speech service
+
+The following regions are supported for Speech service features such as speech to text, text to speech, pronunciation assessment, and translation. The geographies are listed in alphabetical order.
+
+| Geography | Region | Region identifier |
+| -- | -- | -- |
+| Africa | South Africa North | `southafricanorth` <sup>6</sup>|
+| Asia Pacific | East Asia | `eastasia` <sup>5</sup>|
+| Asia Pacific | Southeast Asia | `southeastasia` <sup>1,2,3,4,5</sup>|
+| Asia Pacific | Australia East | `australiaeast` <sup>1,2,3,4</sup>|
+| Asia Pacific | Central India | `centralindia` <sup>1,2,3,4,5</sup>|
+| Asia Pacific | Japan East | `japaneast` <sup>2,5</sup>|
+| Asia Pacific | Japan West | `japanwest` |
+| Asia Pacific | Korea Central | `koreacentral` <sup>2</sup>|
+| Canada | Canada Central | `canadacentral` <sup>1</sup>|
+| Europe | North Europe | `northeurope` <sup>1,2,4,5</sup>|
+| Europe | West Europe | `westeurope` <sup>1,2,3,4,5</sup>|
+| Europe | France Central | `francecentral` |
+| Europe | Germany West Central | `germanywestcentral` |
+| Europe | Norway East | `norwayeast` |
+| Europe | Switzerland North | `switzerlandnorth` <sup>6</sup>|
+| Europe | Switzerland West | `switzerlandwest` |
+| Europe | UK South | `uksouth` <sup>1,2,3,4</sup>|
+| Middle East | UAE North | `uaenorth` <sup>6</sup>|
+| South America | Brazil South | `brazilsouth` <sup>6</sup>|
+| US | Central US | `centralus` |
+| US | East US | `eastus` <sup>1,2,3,4,5</sup>|
+| US | East US 2 | `eastus2` <sup>1,2,4,5</sup>|
+| US | North Central US | `northcentralus` <sup>4,6</sup>|
+| US | South Central US | `southcentralus` <sup>1,2,3,4,5,6</sup>|
+| US | West Central US | `westcentralus` <sup>5</sup>|
+| US | West US | `westus` <sup>2,5</sup>|
+| US | West US 2 | `westus2` <sup>1,2,4,5</sup>|
+| US | West US 3 | `westus3` |
+
+<sup>1</sup> The region has dedicated hardware for Custom Speech training. If you plan to train a custom model with audio data, use one of the regions with dedicated hardware for faster training. Then you can [copy the trained model](how-to-custom-speech-train-model.md#copy-a-model) to another region.
+
+<sup>2</sup> The region is available for Custom Neural Voice training. You can copy a trained neural voice model to other regions for deployment.
+
+<sup>3</sup> The Long Audio API is available in the region.
+
+<sup>4</sup> The region supports custom keyword advanced models.
+
+<sup>5</sup> The region supports keyword verification.
+
+<sup>6</sup> The region does not support Speaker Recognition.
+
+## Intent recognition
+
+Available regions for intent recognition via the Speech SDK are in the following table.
+
+| Global region | Region | Region identifier |
+| - | - | -- |
+| Asia | East Asia | `eastasia` |
+| Asia | Southeast Asia | `southeastasia` |
+| Australia | Australia East | `australiaeast` |
+| Europe | North Europe | `northeurope` |
+| Europe | West Europe | `westeurope` |
+| North America | East US | `eastus` |
+| North America | East US 2 | `eastus2` |
+| North America | South Central US | `southcentralus` |
+| North America | West Central US | `westcentralus` |
+| North America | West US | `westus` |
+| North America | West US 2 | `westus2` |
+| South America | Brazil South | `brazilsouth` |
+
+This is a subset of the publishing regions supported by the [Language Understanding service (LUIS)](../luis/luis-reference-regions.md).
+
+## Voice assistants
+
+The [Speech SDK](speech-sdk.md) supports voice assistant capabilities through [Direct Line Speech](./direct-line-speech.md) for regions in the following table.
+
+| Global region | Region | Region identifier |
+| - | - | -- |
+| North America | West US | `westus` |
+| North America | West US 2 | `westus2` |
+| North America | East US | `eastus` |
+| North America | East US 2 | `eastus2` |
+| North America | West Central US | `westcentralus` |
+| North America | South Central US | `southcentralus` |
+| Europe | West Europe | `westeurope` |
+| Europe | North Europe | `northeurope` |
+| Asia | East Asia | `eastasia` |
+| Asia | Southeast Asia | `southeastasia` |
+| India | Central India | `centralindia` |
ai-services Releasenotes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/releasenotes.md
+
+ Title: What's new - Speech service
+
+description: Find out about new releases and features for the Azure AI service for Speech.
+++++++ Last updated : 03/16/2023+++
+# What's new in Azure AI Speech?
+
+Azure AI Speech is updated on an ongoing basis. To stay up-to-date with recent developments, this article provides you with information about new releases and features.
+
+## Recent highlights
+
+* Speech SDK 1.30.0 was released in July 2023.
+* Speech to text and text to speech container versions were updated in March 2023.
+* Some Speech Studio [scenarios](speech-studio-overview.md#speech-studio-scenarios) are available to try without an Azure subscription.
+* Custom Speech to text container disconnected mode was released in January 2023.
+* Text to speech Batch synthesis API is available in public preview.
+* Speech to text REST API version 3.1 is generally available.
+
+## Release notes
+
+**Choose a service or resource**
+
+# [SDK](#tab/speech-sdk)
++
+# [CLI](#tab/speech-cli)
++
+# [Text to speech service](#tab/text-to-speech)
++
+# [Speech to text service](#tab/speech-to-text)
++
+# [Containers](#tab/containers)
++
+***
ai-services Resiliency And Recovery Plan https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/resiliency-and-recovery-plan.md
+
+ Title: How to back up and recover speech customer resources
+
+description: Learn how to prepare for service outages with Custom Speech and Custom Voice.
+++++++ Last updated : 07/28/2021+++
+# Back up and recover speech customer resources
+
+The Speech service is [available in various regions](./regions.md). Speech resource keys are tied to a single region. When you acquire a key, you select a specific region, where your data, models, and deployments reside.
+
+Datasets for customer-created data assets, such as customized speech models, custom voice fonts and speaker recognition voice profiles, are also **available only within the service-deployed region**. Such assets are:
+
+**Custom Speech**
+- Training audio/text data
+- Test audio/text data
+- Customized speech models
+- Log data
+
+**Custom Voice**
+- Training audio/text data
+- Test audio/text data
+- Custom voice fonts
+
+**Speaker Recognition**
+- Speaker enrollment audio
+- Speaker voice signature
+
+While some customers use our default endpoints to transcribe audio or synthesize speech with standard voices, other customers create assets for customization.
+
+These assets are backed up regularly and automatically by the repositories themselves, so **no data loss will occur** if a region becomes unavailable. However, you must take steps to ensure service continuity if there's a region outage.
+
+## How to monitor service availability
+
+If you use the default endpoints, you should configure your client code to monitor for errors. If errors persist, be prepared to redirect to another region where you have a Speech resource.
+
+Follow these steps to configure your client to monitor for errors:
+
+1. Find the [list of regionally available endpoints in our documentation](./rest-speech-to-text.md).
+2. Select a primary and one or more secondary/backup regions from the list.
+3. From Azure portal, create Speech service resources for each region.
+ - If you have set a specific quota, you may also consider setting the same quota in the backup regions. See details in [Speech service Quotas and Limits](./speech-services-quotas-and-limits.md).
+
+4. Each region has its own STS token service. For the primary region and any backup regions, your client configuration file needs to know the:
+ - Regional Speech service endpoints
+ - [Regional key and the region code](./rest-speech-to-text.md)
+
+5. Configure your code to monitor for connectivity errors (typically connection timeouts and service unavailability errors). Here's sample code in C#: [GitHub: Adding Sample for showing a possible candidate for switching regions](https://github.com/Azure-Samples/cognitive-services-speech-sdk/blob/fa6428a0837779cbeae172688e0286625e340942/samples/csharp/sharedcontent/console/speech_recognition_samples.cs#L965).
+
+ 1. Because networks experience transient errors, retry when a single connectivity issue occurs.
+ 2. For persistent errors, redirect traffic to the backup STS token service and Speech service endpoint. (For text to speech, see the reference sample code: [GitHub: TTS public voice switching region](https://github.com/Azure-Samples/cognitive-services-speech-sdk/blob/master/samples/csharp/sharedcontent/console/speech_synthesis_samples.cs#L880).)
+
+Recovery from regional failures for this usage type can be instantaneous and low cost. All that's required is developing this functionality on the client side. Assuming no backup of the audio stream, the data loss incurred will be minimal.
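+
+The following C# sketch shows one possible shape for this client-side failover logic, assuming you hold separate keys for a primary and a secondary region; the keys, regions, and file name are placeholders, and the linked GitHub samples show the fully worked pattern.
+
+```csharp
+using System;
+using Microsoft.CognitiveServices.Speech;
+using Microsoft.CognitiveServices.Speech.Audio;
+
+// Placeholder keys and regions: one Speech resource per region.
+var regions = new[]
+{
+    SpeechConfig.FromSubscription("PrimaryRegionKey", "westus2"),
+    SpeechConfig.FromSubscription("SecondaryRegionKey", "eastus"),
+};
+
+foreach (var config in regions)
+{
+    using var audio = AudioConfig.FromWavFileInput("sample.wav"); // hypothetical input file
+    using var recognizer = new SpeechRecognizer(config, audio);
+    var result = await recognizer.RecognizeOnceAsync();
+
+    if (result.Reason != ResultReason.Canceled)
+    {
+        Console.WriteLine($"Recognized: {result.Text}");
+        break; // Success: no need to try the next region.
+    }
+
+    // In production, retry the same region first for transient errors,
+    // and redirect traffic only after persistent failures.
+    var details = CancellationDetails.FromResult(result);
+    Console.WriteLine($"Attempt failed ({details.ErrorCode}); trying the next region.");
+}
+```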
+
+## Custom endpoint recovery
+
+Data assets, models or deployments in one region can't be made visible or accessible in any other region.
+
+You should create Speech service resources in both a main and a secondary region by following the same steps as used for default endpoints.
+
+### Custom Speech
+
+Custom Speech service doesn't support automatic failover. We suggest the following steps to prepare for manual or automatic failover implemented in your client code. In these steps, you replicate custom models in a secondary region. With this preparation, your client code can switch to a secondary region when the primary region fails.
+
+1. Create your custom model in one main region (Primary).
+2. Run the [Models_CopyTo](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/operations/Models_CopyTo) operation to replicate the custom model to all prepared regions (Secondary).
+3. Go to Speech Studio to load the copied model and create a new endpoint in the secondary region. See how to deploy a new model in [Deploy a Custom Speech model](./how-to-custom-speech-deploy-model.md).
+ - If you have set a specific quota, also consider setting the same quota in the backup regions. See details in [Speech service Quotas and Limits](./speech-services-quotas-and-limits.md).
+4. Configure your client to fail over on persistent errors as with the default endpoints usage.
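+
+Step 2 can be scripted against the Speech to text REST API. The following C# sketch is a minimal example with placeholder values; it assumes the v3.1 Models_CopyTo operation accepts a JSON body with a `targetSubscriptionKey` property, so check the linked reference documentation for the exact request contract.
+
+```csharp
+using System;
+using System.Net.Http;
+using System.Text;
+
+// Copy a custom model from the primary region to a secondary region (placeholder values).
+using var client = new HttpClient();
+client.DefaultRequestHeaders.Add("Ocp-Apim-Subscription-Key", "PrimaryRegionKey");
+
+var modelId = "00000000-0000-0000-0000-000000000000"; // ID of the model in the primary region
+var requestUri = $"https://westus2.api.cognitive.microsoft.com/speechtotext/v3.1/models/{modelId}:copyto";
+
+// The target key identifies the Speech resource in the secondary region.
+using var body = new StringContent(
+    "{\"targetSubscriptionKey\": \"SecondaryRegionKey\"}",
+    Encoding.UTF8,
+    "application/json");
+
+var response = await client.PostAsync(requestUri, body);
+Console.WriteLine($"{(int)response.StatusCode} {response.ReasonPhrase}");
+```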
+
+Your client code can monitor availability of your deployed models in your primary region, and redirect their audio traffic to the secondary region when the primary fails. If you don't require real-time failover, you can still follow these steps to prepare for a manual failover.
+
+#### Offline failover
+
+If you don't require real-time failover, you can import your data and create and deploy your models in the secondary region at a later time, with the understanding that these tasks take time to complete.
+
+#### Failover time requirements
+
+This section provides general guidance about timing. The times were recorded to estimate offline failover using a [representative test data set](https://github.com/microsoft/Cognitive-Custom-Speech-Service).
+
+- Data upload to new region: **15 mins**
+- Acoustic/language model creation: **6 hours (depending on the data volume)**
+- Model evaluation: **30 mins**
+- Endpoint deployment: **10 mins**
+- Model copy API call: **10 mins**
+- Client code reconfiguration and deployment: **Depending on the client system**
+
+It's nonetheless advisable to create keys for a primary and secondary region for production models with real-time requirements.
+
+### Custom Voice
+
+Custom Voice doesn't support automatic failover. Handle real-time synthesis failures with these two options.
+
+**Option 1: Fail over to public voice in the same region.**
+
+When custom voice real-time synthesis fails, fail over to a public voice (client sample code: [GitHub: custom voice failover to public voice](https://github.com/Azure-Samples/cognitive-services-speech-sdk/blob/master/samples/csharp/sharedcontent/console/speech_synthesis_samples.cs#L899)).
+
+Check the [public voices available](language-support.md?tabs=tts). You can also change the sample code above if you would like to fail over to a different voice or a different region.
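+
+A minimal version of option 1 in C# might look like the following sketch; the key, region, endpoint ID, and voice names are placeholders, and the linked GitHub sample shows the complete logic.
+
+```csharp
+using System;
+using Microsoft.CognitiveServices.Speech;
+
+// Placeholder key, region, custom endpoint ID, and voice names.
+var config = SpeechConfig.FromSubscription("YourSpeechResourceKey", "westus2");
+config.EndpointId = "YourCustomVoiceEndpointId";
+config.SpeechSynthesisVoiceName = "YourCustomVoiceName";
+
+using var synthesizer = new SpeechSynthesizer(config);
+var result = await synthesizer.SpeakTextAsync("Hello from custom neural voice.");
+
+if (result.Reason == ResultReason.Canceled)
+{
+    // Fall back to a public prebuilt voice in the same region.
+    var fallback = SpeechConfig.FromSubscription("YourSpeechResourceKey", "westus2");
+    fallback.SpeechSynthesisVoiceName = "en-US-JennyNeural";
+
+    using var fallbackSynthesizer = new SpeechSynthesizer(fallback);
+    await fallbackSynthesizer.SpeakTextAsync("Hello from a public voice.");
+}
+```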
+
+**Option 2: Fail over to custom voice on another region.**
+
+1. Create and deploy your custom voice in one main region (primary).
+2. Copy your custom voice model to another region (the secondary region) in [Speech Studio](https://aka.ms/speechstudio/).
+3. Go to Speech Studio and switch to the Speech resource in the secondary region. Load the copied model and create a new endpoint.
+ - Voice model deployment usually finishes **in 3 minutes**.
+ - Each endpoint is subject to extra charges. [Check the pricing for model hosting here](https://azure.microsoft.com/pricing/details/cognitive-services/speech-services/).
+
+4. Configure your client to fail over to the secondary region. See sample code in C#: [GitHub: custom voice failover to secondary region](https://github.com/Azure-Samples/cognitive-services-speech-sdk/blob/master/samples/csharp/sharedcontent/console/speech_synthesis_samples.cs#L920).
+
+### Speaker Recognition
+
+Speaker Recognition uses [Azure paired regions](../../availability-zones/cross-region-replication-azure.md) to automatically fail over operations. Speaker enrollments and voice signatures are backed up regularly to prevent data loss and to be used if there's an outage.
+
+During an outage, Speaker Recognition service will automatically fail over to a paired region and use the backed-up data to continue processing requests until the main region is back online.
ai-services Rest Speech To Text Short https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/rest-speech-to-text-short.md
+
+ Title: Speech to text REST API for short audio - Speech service
+
+description: Learn how to use Speech to text REST API for short audio to convert speech to text.
++++++ Last updated : 05/02/2023+
+ms.devlang: csharp
+++
+# Speech to text REST API for short audio
+
+Use cases for the Speech to text REST API for short audio are limited. Use it only in cases where you can't use the [Speech SDK](speech-sdk.md).
+
+Before you use the Speech to text REST API for short audio, consider the following limitations:
+
+* Requests that use the REST API for short audio and transmit audio directly can contain no more than 60 seconds of audio. The input [audio formats](#audio-formats) are more limited compared to the [Speech SDK](speech-sdk.md).
+* The REST API for short audio returns only final results. It doesn't provide partial results.
+* [Speech translation](speech-translation.md) is not supported via REST API for short audio. You need to use [Speech SDK](speech-sdk.md).
+* [Batch transcription](batch-transcription.md) and [Custom Speech](custom-speech-overview.md) are not supported via REST API for short audio. You should always use the [Speech to text REST API](rest-speech-to-text.md) for batch transcription and Custom Speech.
+
+Before you use the Speech to text REST API for short audio, understand that you need to complete a token exchange as part of authentication to access the service. For more information, see [Authentication](#authentication).
+
+## Regions and endpoints
+
+The endpoint for the REST API for short audio has this format:
+
+```
+https://<REGION_IDENTIFIER>.stt.speech.microsoft.com/speech/recognition/conversation/cognitiveservices/v1
+```
+
+Replace `<REGION_IDENTIFIER>` with the identifier that matches the [region](regions.md) of your Speech resource.
+
+> [!NOTE]
+> For Azure Government and Azure China endpoints, see [this article about sovereign clouds](sovereign-clouds.md).
+
+## Audio formats
+
+Audio is sent in the body of the HTTP `POST` request. It must be in one of the formats in this table:
+
+| Format | Codec | Bit rate | Sample rate |
+|--|-|-|--|
+| WAV | PCM | 256 kbps | 16 kHz, mono |
+| OGG | OPUS | 256 kbps | 16 kHz, mono |
+
+> [!NOTE]
+> The preceding formats are supported through the REST API for short audio and WebSocket in the Speech service. The [Speech SDK](speech-sdk.md) supports the WAV format with PCM codec as well as [other formats](how-to-use-codec-compressed-audio-input-streams.md).
+
+## Request headers
+
+This table lists required and optional headers for speech to text requests:
+
+|Header| Description | Required or optional |
+||-||
+| `Ocp-Apim-Subscription-Key` | Your resource key for the Speech service. | Either this header or `Authorization` is required. |
+| `Authorization` | An authorization token preceded by the word `Bearer`. For more information, see [Authentication](#authentication). | Either this header or `Ocp-Apim-Subscription-Key` is required. |
+| `Pronunciation-Assessment` | Specifies the parameters for showing pronunciation scores in recognition results. These scores assess the pronunciation quality of speech input, with indicators like accuracy, fluency, and completeness. <br><br>This parameter is a Base64-encoded JSON that contains multiple detailed parameters. To learn how to build this header, see [Pronunciation assessment parameters](#pronunciation-assessment-parameters). | Optional |
+| `Content-type` | Describes the format and codec of the provided audio data. Accepted values are `audio/wav; codecs=audio/pcm; samplerate=16000` and `audio/ogg; codecs=opus`. | Required |
+| `Transfer-Encoding` | Specifies that chunked audio data is being sent, rather than a single file. Use this header only if you're chunking audio data. | Optional |
+| `Expect` | If you're using chunked transfer, send `Expect: 100-continue`. The Speech service acknowledges the initial request and awaits additional data.| Required if you're sending chunked audio data. |
+| `Accept` | If provided, it must be `application/json`. The Speech service provides results in JSON. Some request frameworks provide an incompatible default value. It's good practice to always include `Accept`. | Optional, but recommended. |
+
+## Query parameters
+
+These parameters might be included in the query string of the REST request.
+
+> [!NOTE]
+> You must append the language parameter to the URL to avoid receiving a 4xx HTTP error. For example, the language set to US English via the West US endpoint is: `https://westus.stt.speech.microsoft.com/speech/recognition/conversation/cognitiveservices/v1?language=en-US`.
+
+| Parameter | Description | Required or optional |
+|--|-||
+| `language` | Identifies the spoken language that's being recognized. See [Supported languages](language-support.md?tabs=stt). | Required |
+| `format` | Specifies the result format. Accepted values are `simple` and `detailed`. Simple results include `RecognitionStatus`, `DisplayText`, `Offset`, and `Duration`. Detailed responses include four different representations of display text. The default setting is `simple`. | Optional |
+| `profanity` | Specifies how to handle profanity in recognition results. Accepted values are: <br><br>`masked`, which replaces profanity with asterisks. <br>`removed`, which removes all profanity from the result. <br>`raw`, which includes profanity in the result. <br><br>The default setting is `masked`. | Optional |
+| `cid` | When you're using the [Speech Studio](speech-studio-overview.md) to create [custom models](./custom-speech-overview.md), you can take advantage of the **Endpoint ID** value from the **Deployment** page. Use the **Endpoint ID** value as the argument to the `cid` query string parameter. | Optional |
+
+### Pronunciation assessment parameters
+
+This table lists required and optional parameters for pronunciation assessment:
+
+| Parameter | Description | Required or optional |
+|--|-||
+| `ReferenceText` | The text that the pronunciation will be evaluated against. | Required |
+| `GradingSystem` | The point system for score calibration. The `FivePoint` system gives a 0-5 floating point score, and `HundredMark` gives a 0-100 floating point score. Default: `FivePoint`. | Optional |
+| `Granularity` | The evaluation granularity. Accepted values are:<br><br> `Phoneme`, which shows the score on the full-text, word, and phoneme levels.<br>`Word`, which shows the score on the full-text and word levels. <br>`FullText`, which shows the score on the full-text level only.<br><br> The default setting is `Phoneme`. | Optional |
+| `Dimension` | Defines the output criteria. Accepted values are:<br><br> `Basic`, which shows the accuracy score only. <br>`Comprehensive`, which shows scores on more dimensions (for example, fluency score and completeness score on the full-text level, and error type on the word level).<br><br> To see definitions of different score dimensions and word error types, see [Response properties](#response-properties). The default setting is `Basic`. | Optional |
+| `EnableMiscue` | Enables miscue calculation. With this parameter enabled, the pronounced words will be compared to the reference text. They'll be marked with omission or insertion based on the comparison. Accepted values are `False` and `True`. The default setting is `False`. | Optional |
+| `ScenarioId` | A GUID that indicates a customized point system. | Optional |
+
+Here's example JSON that contains the pronunciation assessment parameters:
+
+```json
+{
+ "ReferenceText": "Good morning.",
+ "GradingSystem": "HundredMark",
+ "Granularity": "FullText",
+ "Dimension": "Comprehensive"
+}
+```
+
+The following sample code shows how to build the pronunciation assessment parameters into the `Pronunciation-Assessment` header:
+
+```csharp
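+// Serialize the assessment parameters to JSON, then Base64-encode the UTF-8 bytes to produce the header value.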
+var pronAssessmentParamsJson = $"{{\"ReferenceText\":\"Good morning.\",\"GradingSystem\":\"HundredMark\",\"Granularity\":\"FullText\",\"Dimension\":\"Comprehensive\"}}";
+var pronAssessmentParamsBytes = Encoding.UTF8.GetBytes(pronAssessmentParamsJson);
+var pronAssessmentHeader = Convert.ToBase64String(pronAssessmentParamsBytes);
+```
+
+We strongly recommend streaming ([chunked transfer](#chunked-transfer)) uploading while you're posting the audio data, which can significantly reduce the latency. To learn how to enable streaming, see the [sample code in various programming languages](https://github.com/Azure-Samples/Cognitive-Speech-TTS/tree/master/PronunciationAssessment).
+
+> [!NOTE]
+> For more information, see [pronunciation assessment](how-to-pronunciation-assessment.md).
+
+## Sample request
+
+The following sample includes the host name and required headers. It's important to note that the service also expects audio data, which is not included in this sample. As mentioned earlier, chunking is recommended but not required.
+
+```HTTP
+POST speech/recognition/conversation/cognitiveservices/v1?language=en-US&format=detailed HTTP/1.1
+Accept: application/json;text/xml
+Content-Type: audio/wav; codecs=audio/pcm; samplerate=16000
+Ocp-Apim-Subscription-Key: YOUR_RESOURCE_KEY
+Host: westus.stt.speech.microsoft.com
+Transfer-Encoding: chunked
+Expect: 100-continue
+```
+
+To enable pronunciation assessment, you can add the following header. To learn how to build this header, see [Pronunciation assessment parameters](#pronunciation-assessment-parameters).
+
+```HTTP
+Pronunciation-Assessment: eyJSZWZlcm...
+```
+
+### HTTP status codes
+
+The HTTP status code for each response indicates success or common errors.
+
+| HTTP status code | Description | Possible reasons |
+||-|--|
+| 100 | Continue | The initial request has been accepted. Proceed with sending the rest of the data. (This code is used with chunked transfer.) |
+| 200 | OK | The request was successful. The response body is a JSON object. |
+| 400 | Bad request | The language code wasn't provided, the language isn't supported, or the audio file is invalid (for example). |
+| 401 | Unauthorized | A resource key or an authorization token is invalid in the specified region, or an endpoint is invalid. |
+| 403 | Forbidden | A resource key or authorization token is missing. |
++
+## Sample responses
+
+Here's a typical response for `simple` recognition:
+
+```json
+{
+ "RecognitionStatus": "Success",
+ "DisplayText": "Remind me to buy 5 pencils.",
+ "Offset": "1236645672289",
+ "Duration": "1236645672289"
+}
+```
+
+Here's a typical response for `detailed` recognition:
+
+```json
+{
+ "RecognitionStatus": "Success",
+ "Offset": "1236645672289",
+ "Duration": "1236645672289",
+ "NBest": [
+ {
+ "Confidence": 0.9052885,
+ "Display": "What's the weather like?",
+ "ITN": "what's the weather like",
+ "Lexical": "what's the weather like",
+ "MaskedITN": "what's the weather like"
+ },
+ {
+ "Confidence": 0.92459863,
+ "Display": "what is the weather like",
+ "ITN": "what is the weather like",
+ "Lexical": "what is the weather like",
+ "MaskedITN": "what is the weather like"
+ }
+ ]
+}
+```
+
+Here's a typical response for recognition with pronunciation assessment:
+
+```json
+{
+ "RecognitionStatus": "Success",
+ "Offset": "400000",
+ "Duration": "11000000",
+ "NBest": [
+ {
+ "Confidence" : "0.87",
+ "Lexical" : "good morning",
+ "ITN" : "good morning",
+ "MaskedITN" : "good morning",
+ "Display" : "Good morning.",
+ "PronScore" : 84.4,
+ "AccuracyScore" : 100.0,
+ "FluencyScore" : 74.0,
+ "CompletenessScore" : 100.0,
+ "Words": [
+ {
+ "Word" : "Good",
+ "AccuracyScore" : 100.0,
+ "ErrorType" : "None",
+ "Offset" : 500000,
+ "Duration" : 2700000
+ },
+ {
+ "Word" : "morning",
+ "AccuracyScore" : 100.0,
+ "ErrorType" : "None",
+ "Offset" : 5300000,
+ "Duration" : 900000
+ }
+ ]
+ }
+ ]
+}
+```
+
+### Response properties
+
+Results are provided as JSON. The `simple` format includes the following top-level fields:
+
+| Property | Description |
+|--|--|
+|`RecognitionStatus`|Status, such as `Success` for successful recognition. See the next table.|
+|`DisplayText`|The recognized text after capitalization, punctuation, inverse text normalization, and profanity masking. Present only on success. Inverse text normalization is conversion of spoken text to shorter forms, such as 200 for "two hundred" or "Dr. Smith" for "doctor smith."|
+|`Offset`|The time (in 100-nanosecond units) at which the recognized speech begins in the audio stream.|
+|`Duration`|The duration (in 100-nanosecond units) of the recognized speech in the audio stream.|
+
+The `RecognitionStatus` field might contain these values:
+
+| Status | Description |
+|--|-|
+| `Success` | The recognition was successful, and the `DisplayText` field is present. |
+| `NoMatch` | Speech was detected in the audio stream, but no words from the target language were matched. This status usually means that the recognition language is different from the language that the user is speaking. |
+| `InitialSilenceTimeout` | The start of the audio stream contained only silence, and the service timed out while waiting for speech. |
+| `BabbleTimeout` | The start of the audio stream contained only noise, and the service timed out while waiting for speech. |
+| `Error` | The recognition service encountered an internal error and could not continue. Try again if possible. |
+
+> [!NOTE]
+> If the audio consists only of profanity, and the `profanity` query parameter is set to `remove`, the service does not return a speech result.
+
+The `detailed` format includes additional forms of recognized results.
+When you're using the `detailed` format, `DisplayText` is provided as `Display` for each result in the `NBest` list.
+
+The object in the `NBest` list can include:
+
+| Property | Description |
+|--|-|
+| `Confidence` | The confidence score of the entry, from 0.0 (no confidence) to 1.0 (full confidence). |
+| `Lexical` | The lexical form of the recognized text: the actual words recognized. |
+| `ITN` | The inverse-text-normalized (ITN) or canonical form of the recognized text, with phone numbers, numbers, abbreviations ("doctor smith" to "dr smith"), and other transformations applied. |
+| `MaskedITN` | The ITN form with profanity masking applied, if requested. |
+| `Display` | The display form of the recognized text, with punctuation and capitalization added. This parameter is the same as what `DisplayText` provides when the format is set to `simple`. |
+| `AccuracyScore` | Pronunciation accuracy of the speech. Accuracy indicates how closely the phonemes match a native speaker's pronunciation. The accuracy score at the word and full-text levels is aggregated from the accuracy score at the phoneme level. |
+| `FluencyScore` | Fluency of the provided speech. Fluency indicates how closely the speech matches a native speaker's use of silent breaks between words. |
+| `CompletenessScore` | Completeness of the speech, determined by calculating the ratio of pronounced words to reference text input. |
+| `PronScore` | Overall score that indicates the pronunciation quality of the provided speech. This score is aggregated from `AccuracyScore`, `FluencyScore`, and `CompletenessScore` with weight. |
+| `ErrorType` | Value that indicates whether a word is omitted, inserted, or badly pronounced, compared to `ReferenceText`. Possible values are `None` (meaning no error on this word), `Omission`, `Insertion`, and `Mispronunciation`. |
+
+## Chunked transfer
+
+Chunked transfer (`Transfer-Encoding: chunked`) can help reduce recognition latency. It allows the Speech service to begin processing the audio file while it's transmitted. The REST API for short audio does not provide partial or interim results.
+
+The following code sample shows how to send audio in chunks. Only the first chunk should contain the audio file's header. `request` is an `HttpWebRequest` object that's connected to the appropriate REST endpoint. `audioFile` is the path to an audio file on disk.
+
+```csharp
+var request = (HttpWebRequest)HttpWebRequest.Create(requestUri);
+request.SendChunked = true;
+request.Accept = @"application/json;text/xml";
+request.Method = "POST";
+request.ProtocolVersion = HttpVersion.Version11;
+request.Host = host;
+request.ContentType = @"audio/wav; codecs=audio/pcm; samplerate=16000";
+request.Headers["Ocp-Apim-Subscription-Key"] = "YOUR_RESOURCE_KEY";
+request.AllowWriteStreamBuffering = false;
+
+using (var fs = new FileStream(audioFile, FileMode.Open, FileAccess.Read))
+{
+ // Open a request stream and write 1,024-byte chunks in the stream one at a time.
+ byte[] buffer = null;
+ int bytesRead = 0;
+ using (var requestStream = request.GetRequestStream())
+ {
+ // Read 1,024 raw bytes from the input audio file.
+ buffer = new Byte[checked((uint)Math.Min(1024, (int)fs.Length))];
+ while ((bytesRead = fs.Read(buffer, 0, buffer.Length)) != 0)
+ {
+ requestStream.Write(buffer, 0, bytesRead);
+ }
+
+ requestStream.Flush();
+ }
+}
+```
+
+## Authentication
++
+## Next steps
+
+- [Customize speech models](./how-to-custom-speech-train-model.md)
+- [Get familiar with batch transcription](batch-transcription.md)
+
ai-services Rest Speech To Text https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/rest-speech-to-text.md
+
+ Title: Speech to text REST API - Speech service
+
+description: Get reference documentation for Speech to text REST API.
++++++ Last updated : 11/29/2022+
+ms.devlang: csharp
+++
+# Speech to text REST API
+
+Speech to text REST API is used for [Batch transcription](batch-transcription.md) and [Custom Speech](custom-speech-overview.md).
+
+> [!IMPORTANT]
+> Speech to text REST API v3.1 is generally available. Version 3.0 of the [Speech to text REST API](rest-speech-to-text.md) will be retired. For more information, see the [Migrate code from v3.0 to v3.1 of the REST API](migrate-v3-0-to-v3-1.md) guide.
+
+> [!div class="nextstepaction"]
+> [See the Speech to text REST API v3.1 reference documentation](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/)
+
+> [!div class="nextstepaction"]
+> [See the Speech to text REST API v3.0 reference documentation](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0/)
+
+Use Speech to text REST API to:
+
+- [Custom Speech](custom-speech-overview.md): With Custom Speech, you can upload your own data, test and train a custom model, compare accuracy between models, and deploy a model to a custom endpoint. Copy models to other subscriptions if you want colleagues to have access to a model that you built, or if you want to deploy a model to more than one region.
+- [Batch transcription](batch-transcription.md): Transcribe audio files as a batch from multiple URLs or an Azure container.
+
+Speech to text REST API includes such features as:
+
+- Get logs for each endpoint if logs have been requested for that endpoint.
+- Request the manifest of the models that you create, to set up on-premises containers.
+- Upload data from Azure storage accounts by using a shared access signature (SAS) URI.
+- Bring your own storage. Use your own storage accounts for logs, transcription files, and other data.
+- Some operations support webhook notifications. You can register your webhooks where notifications are sent.
+
+## Datasets
+
+Datasets are applicable for [Custom Speech](custom-speech-overview.md). You can use datasets to train and test the performance of different models. For example, you can compare the performance of a model trained with a specific dataset to the performance of a model trained with a different dataset.
+
+See [Upload training and testing datasets](how-to-custom-speech-upload-data.md?pivots=rest-api) for examples of how to upload datasets. This table includes all the operations that you can perform on datasets.
+
+|Path|Method|Version 3.1|Version 3.0|
+|||||
+|`/datasets`|GET|[Datasets_List](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/operations/Datasets_List)|[GetDatasets](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0/operations/GetDatasets)|
+|`/datasets`|POST|[Datasets_Create](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/operations/Datasets_Create)|[CreateDataset](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0/operations/CreateDataset)|
+|`/datasets/{id}`|DELETE|[Datasets_Delete](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/operations/Datasets_Delete)|[DeleteDataset](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0/operations/DeleteDataset)|
+|`/datasets/{id}`|GET|[Datasets_Get](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/operations/Datasets_Get)|[GetDataset](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0/operations/GetDataset)|
+|`/datasets/{id}`|PATCH|[Datasets_Update](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/operations/Datasets_Update)|[UpdateDataset](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0/operations/UpdateDataset)|
+|`/datasets/{id}/blocks:commit`|POST|[Datasets_CommitBlocks](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/operations/Datasets_CommitBlocks)|Not applicable|
+|`/datasets/{id}/blocks`|GET|[Datasets_GetBlocks](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/operations/Datasets_GetBlocks)|Not applicable|
+|`/datasets/{id}/blocks`|PUT|[Datasets_UploadBlock](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/operations/Datasets_UploadBlock)|Not applicable|
+|`/datasets/{id}/files`|GET|[Datasets_ListFiles](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/operations/Datasets_ListFiles)|[GetDatasetFiles](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0/operations/GetDatasetFiles)|
+|`/datasets/{id}/files/{fileId}`|GET|[Datasets_GetFile](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/operations/Datasets_GetFile)|[GetDatasetFile](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0/operations/GetDatasetFile)|
+|`/datasets/locales`|GET|[Datasets_ListSupportedLocales](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/operations/Datasets_ListSupportedLocales)|[GetSupportedLocalesForDatasets](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0/operations/GetSupportedLocalesForDatasets)|
+|`/datasets/upload`|POST|[Datasets_Upload](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/operations/Datasets_Upload)|[UploadDatasetFromForm](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0/operations/UploadDatasetFromForm)|
+
+## Endpoints
+
+Endpoints are applicable for [Custom Speech](custom-speech-overview.md). You must deploy a custom endpoint to use a Custom Speech model.
+
+See [Deploy a model](how-to-custom-speech-deploy-model.md?pivots=rest-api) for examples of how to manage deployment endpoints. This table includes all the operations that you can perform on endpoints.
+
+|Path|Method|Version 3.1|Version 3.0|
+|||||
+|`/endpoints`|GET|[Endpoints_List](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/operations/Endpoints_List)|[GetEndpoints](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0/operations/GetEndpoints)|
+|`/endpoints`|POST|[Endpoints_Create](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/operations/Endpoints_Create)|[CreateEndpoint](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0/operations/CreateEndpoint)|
+|`/endpoints/{id}`|DELETE|[Endpoints_Delete](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/operations/Endpoints_Delete)|[DeleteEndpoint](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0/operations/DeleteEndpoint)|
+|`/endpoints/{id}`|GET|[Endpoints_Get](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/operations/Endpoints_Get)|[GetEndpoint](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0/operations/GetEndpoint)|
+|`/endpoints/{id}`|PATCH|[Endpoints_Update](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/operations/Endpoints_Update)|[UpdateEndpoint](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0/operations/UpdateEndpoint)|
+|`/endpoints/{id}/files/logs`|DELETE|[Endpoints_DeleteLogs](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/operations/Endpoints_DeleteLogs)|[DeleteEndpointLogs](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0/operations/DeleteEndpointLogs)|
+|`/endpoints/{id}/files/logs`|GET|[Endpoints_ListLogs](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/operations/Endpoints_ListLogs)|[GetEndpointLogs](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0/operations/GetEndpointLogs)|
+|`/endpoints/{id}/files/logs/{logId}`|DELETE|[Endpoints_DeleteLog](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/operations/Endpoints_DeleteLog)|[DeleteEndpointLog](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0/operations/DeleteEndpointLog)|
+|`/endpoints/{id}/files/logs/{logId}`|GET|[Endpoints_GetLog](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/operations/Endpoints_GetLog)|[GetEndpointLog](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0/operations/GetEndpointLog)|
+|`/endpoints/base/{locale}/files/logs`|DELETE|[Endpoints_DeleteBaseModelLogs](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/operations/Endpoints_DeleteBaseModelLogs)|[DeleteBaseModelLogs](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0/operations/DeleteBaseModelLogs)|
+|`/endpoints/base/{locale}/files/logs`|GET|[Endpoints_ListBaseModelLogs](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/operations/Endpoints_ListBaseModelLogs)|[GetBaseModelLogs](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0/operations/GetBaseModelLogs)|
+|`/endpoints/base/{locale}/files/logs/{logId}`|DELETE|[Endpoints_DeleteBaseModelLog](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/operations/Endpoints_DeleteBaseModelLog)|[DeleteBaseModelLog](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0/operations/DeleteBaseModelLog)|
+|`/endpoints/base/{locale}/files/logs/{logId}`|GET|[Endpoints_GetBaseModelLog](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/operations/Endpoints_GetBaseModelLog)|[GetBaseModelLog](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0/operations/GetBaseModelLog)|
+|`/endpoints/locales`|GET|[Endpoints_ListSupportedLocales](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/operations/Endpoints_ListSupportedLocales)|[GetSupportedLocalesForEndpoints](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0/operations/GetSupportedLocalesForEndpoints)|
+
+## Evaluations
+
+Evaluations are applicable for [Custom Speech](custom-speech-overview.md). You can use evaluations to compare the performance of different models. For example, you can compare the performance of a model trained with a specific dataset to the performance of a model trained with a different dataset.
+
+See [Test recognition quality](how-to-custom-speech-inspect-data.md?pivots=rest-api) and [Test accuracy](how-to-custom-speech-evaluate-data.md?pivots=rest-api) for examples of how to test and evaluate Custom Speech models. This table includes all the operations that you can perform on evaluations.
+
+|Path|Method|Version 3.1|Version 3.0|
+|||||
+|`/evaluations`|GET|[Evaluations_List](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/operations/Evaluations_List)|[GetEvaluations](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0/operations/GetEvaluations)|
+|`/evaluations`|POST|[Evaluations_Create](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/operations/Evaluations_Create)|[CreateEvaluation](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0/operations/CreateEvaluation)|
+|`/evaluations/{id}`|DELETE|[Evaluations_Delete](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/operations/Evaluations_Delete)|[DeleteEvaluation](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0/operations/DeleteEvaluation)|
+|`/evaluations/{id}`|GET|[Evaluations_Get](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/operations/Evaluations_Get)|[GetEvaluation](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0/operations/GetEvaluation)|
+|`/evaluations/{id}`|PATCH|[Evaluations_Update](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/operations/Evaluations_Update)|[UpdateEvaluation](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0/operations/UpdateEvaluation)|
+|`/evaluations/{id}/files`|GET|[Evaluations_ListFiles](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/operations/Evaluations_ListFiles)|[GetEvaluationFiles](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0/operations/GetEvaluationFiles)|
+|`/evaluations/{id}/files/{fileId}`|GET|[Evaluations_GetFile](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/operations/Evaluations_GetFile)|[GetEvaluationFile](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0/operations/GetEvaluationFile)|
+|`/evaluations/locales`|GET|[Evaluations_ListSupportedLocales](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/operations/Evaluations_ListSupportedLocales)|[GetSupportedLocalesForEvaluations](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0/operations/GetSupportedLocalesForEvaluations)|
+
+## Health status
+
+Health status provides insights about the overall health of the service and sub-components.
+
+|Path|Method|Version 3.1|Version 3.0|
+|||||
+|`/healthstatus`|GET|[HealthStatus_Get](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/operations/HealthStatus_Get)|[GetHealthStatus](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0/operations/GetHealthStatus)|
+
+## Models
+
+Models are applicable for [Custom Speech](custom-speech-overview.md) and [Batch Transcription](batch-transcription.md). You can use models to transcribe audio files. For example, you can use a model trained with a specific dataset to transcribe audio files.
+
+See [Train a model](how-to-custom-speech-train-model.md?pivots=rest-api) and [Custom Speech model lifecycle](how-to-custom-speech-model-and-endpoint-lifecycle.md?pivots=rest-api) for examples of how to train and manage Custom Speech models. This table includes all the operations that you can perform on models.
+
+|Path|Method|Version 3.1|Version 3.0|
+|||||
+|`/models`|GET|[Models_ListCustomModels](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/operations/Models_ListCustomModels)|[GetModels](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0/operations/GetModels)|
+|`/models`|POST|[Models_Create](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/operations/Models_Create)|[CreateModel](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0/operations/CreateModel)|
+|`/models/{id}:copyto`<sup>1</sup>|POST|[Models_CopyTo](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/operations/Models_CopyTo)|[CopyModelToSubscription](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0/operations/CopyModelToSubscription)|
+|`/models/{id}`|DELETE|[Models_Delete](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/operations/Models_Delete)|[DeleteModel](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0/operations/DeleteModel)|
+|`/models/{id}`|GET|[Models_GetCustomModel](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/operations/Models_GetCustomModel)|[GetModel](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0/operations/GetModel)|
+|`/models/{id}`|PATCH|[Models_Update](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/operations/Models_Update)|[UpdateModel](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0/operations/UpdateModel)|
+|`/models/{id}/files`|GET|[Models_ListFiles](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/operations/Models_ListFiles)|Not applicable|
+|`/models/{id}/files/{fileId}`|GET|[Models_GetFile](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/operations/Models_GetFile)|Not applicable|
+|`/models/{id}/manifest`|GET|[Models_GetCustomModelManifest](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/operations/Models_GetCustomModelManifest)|[GetModelManifest](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0/operations/GetModelManifest)|
+|`/models/base`|GET|[Models_ListBaseModels](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/operations/Models_ListBaseModels)|[GetBaseModels](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0/operations/GetBaseModels)|
+|`/models/base/{id}`|GET|[Models_GetBaseModel](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/operations/Models_GetBaseModel)|[GetBaseModel](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0/operations/GetBaseModel)|
+|`/models/base/{id}/manifest`|GET|[Models_GetBaseModelManifest](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/operations/Models_GetBaseModelManifest)|[GetBaseModelManifest](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0/operations/GetBaseModelManifest)|
+|`/models/locales`|GET|[Models_ListSupportedLocales](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/operations/Models_ListSupportedLocales)|[GetSupportedLocalesForModels](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0/operations/GetSupportedLocalesForModels)|
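+
+As an illustration, the following Python sketch calls the `Models_ListBaseModels` operation with the `requests` package and prints base models for one locale. The region, key, version 3.1 base URL, and the paged `values` response shape are assumptions for this example; confirm them in the operation reference linked above.
+
+```python
+import requests
+
+# Placeholders: replace with your Speech resource region and key.
+region = "eastus"
+key = "YOUR_RESOURCE_KEY"
+
+# List base models (Models_ListBaseModels) and print the ones for en-US.
+response = requests.get(
+    f"https://{region}.api.cognitive.microsoft.com/speechtotext/v3.1/models/base",
+    headers={"Ocp-Apim-Subscription-Key": key},
+)
+response.raise_for_status()
+for model in response.json().get("values", []):
+    if model.get("locale") == "en-US":
+        print(model.get("displayName"), model.get("self"))
+```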
+
+## Projects
+
+Projects are applicable for [Custom Speech](custom-speech-overview.md). Custom Speech projects contain models, training and testing datasets, and deployment endpoints. Each project is specific to a [locale](language-support.md?tabs=stt). For example, you might create a project for English in the United States.
+
+See [Create a project](how-to-custom-speech-create-project.md?pivots=rest-api) for examples of how to create projects. This table includes all the operations that you can perform on projects.
+
+|Path|Method|Version 3.1|Version 3.0|
+|||||
+|`/projects`|GET|[Projects_List](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/operations/Projects_List)|[GetProjects](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0/operations/GetProjects)|
+|`/projects`|POST|[Projects_Create](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/operations/Projects_Create)|[CreateProject](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0/operations/CreateProject)|
+|`/projects/{id}`|DELETE|[Projects_Delete](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/operations/Projects_Delete)|[DeleteProject](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0/operations/DeleteProject)|
+|`/projects/{id}`|GET|[Projects_Get](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/operations/Projects_Get)|[GetProject](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0/operations/GetProject)|
+|`/projects/{id}`|PATCH|[Projects_Update](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/operations/Projects_Update)|[UpdateProject](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0/operations/UpdateProject)|
+|`/projects/{id}/datasets`|GET|[Projects_ListDatasets](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/operations/Projects_ListDatasets)|[GetDatasetsForProject](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0/operations/GetDatasetsForProject)|
+|`/projects/{id}/endpoints`|GET|[Projects_ListEndpoints](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/operations/Projects_ListEndpoints)|[GetEndpointsForProject](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0/operations/GetEndpointsForProject)|
+|`/projects/{id}/evaluations`|GET|[Projects_ListEvaluations](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/operations/Projects_ListEvaluations)|[GetEvaluationsForProject](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0/operations/GetEvaluationsForProject)|
+|`/projects/{id}/models`|GET|[Projects_ListModels](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/operations/Projects_ListModels)|[GetModelsForProject](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0/operations/GetModelsForProject)|
+|`/projects/{id}/transcriptions`|GET|[Projects_ListTranscriptions](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/operations/Projects_ListTranscriptions)|[GetTranscriptionsForProject](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0/operations/GetTranscriptionsForProject)|
+|`/projects/locales`|GET|[Projects_ListSupportedLocales](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/operations/Projects_ListSupportedLocales)|[GetSupportedProjectLocales](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0/operations/GetSupportedProjectLocales)|
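+
+As a hedged sketch of a `Projects_Create` request, the following Python example creates a project. The region, key, and body fields (`displayName`, `description`, `locale`) are placeholders and assumptions for illustration; check the `Projects_Create` reference for the full request schema.
+
+```python
+import requests
+
+# Placeholders: replace with your Speech resource region and key.
+region = "eastus"
+key = "YOUR_RESOURCE_KEY"
+
+# Create a Custom Speech project (Projects_Create). Body values are illustrative.
+response = requests.post(
+    f"https://{region}.api.cognitive.microsoft.com/speechtotext/v3.1/projects",
+    headers={"Ocp-Apim-Subscription-Key": key, "Content-Type": "application/json"},
+    json={
+        "displayName": "My Custom Speech project",
+        "description": "Project for en-US models",
+        "locale": "en-US",
+    },
+)
+response.raise_for_status()
+print(response.json().get("self"))  # URL of the new project
+```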
+
+## Transcriptions
+
+Transcriptions are applicable for [Batch Transcription](batch-transcription.md). Batch transcription is used to transcribe a large amount of audio in storage. You can send multiple files per request or point to an Azure Blob Storage container with the audio files to transcribe.
+
+See [Create a transcription](batch-transcription-create.md?pivots=rest-api) for examples of how to create a transcription from multiple audio files. This table includes all the operations that you can perform on transcriptions.
+
+|Path|Method|Version 3.1|Version 3.0|
+|||||
+|`/transcriptions`|GET|[Transcriptions_List](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/operations/Transcriptions_List)|[GetTranscriptions](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0/operations/GetTranscriptions)|
+|`/transcriptions`|POST|[Transcriptions_Create](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/operations/Transcriptions_Create)|[CreateTranscription](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0/operations/CreateTranscription)|
+|`/transcriptions/{id}`|DELETE|[Transcriptions_Delete](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/operations/Transcriptions_Delete)|[DeleteTranscription](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0/operations/DeleteTranscription)|
+|`/transcriptions/{id}`|GET|[Transcriptions_Get](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/operations/Transcriptions_Get)|[GetTranscription](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0/operations/GetTranscription)|
+|`/transcriptions/{id}`|PATCH|[Transcriptions_Update](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/operations/Transcriptions_Update)|[UpdateTranscription](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0/operations/UpdateTranscription)|
+|`/transcriptions/{id}/files`|GET|[Transcriptions_ListFiles](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/operations/Transcriptions_ListFiles)|[GetTranscriptionFiles](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0/operations/GetTranscriptionFiles)|
+|`/transcriptions/{id}/files/{fileId}`|GET|[Transcriptions_GetFile](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/operations/Transcriptions_GetFile)|[GetTranscriptionFile](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0/operations/GetTranscriptionFile)|
+|`/transcriptions/locales`|GET|[Transcriptions_ListSupportedLocales](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/operations/Transcriptions_ListSupportedLocales)|[GetSupportedLocalesForTranscriptions](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0/operations/GetSupportedLocalesForTranscriptions)|
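+
+For example, a `Transcriptions_Create` (version 3.1) request might look like the following Python sketch. The region, key, content URLs, and optional properties are placeholders and assumptions for illustration; see [Create a transcription](batch-transcription-create.md?pivots=rest-api) and the operation reference for the full request schema.
+
+```python
+import requests
+
+# Placeholders: replace with your Speech resource region, key, and audio URLs.
+region = "eastus"
+key = "YOUR_RESOURCE_KEY"
+
+# Submit a batch transcription job (Transcriptions_Create).
+response = requests.post(
+    f"https://{region}.api.cognitive.microsoft.com/speechtotext/v3.1/transcriptions",
+    headers={"Ocp-Apim-Subscription-Key": key, "Content-Type": "application/json"},
+    json={
+        "displayName": "My batch transcription",
+        "locale": "en-US",
+        "contentUrls": [
+            "https://example.blob.core.windows.net/audio/sample1.wav",
+            "https://example.blob.core.windows.net/audio/sample2.wav",
+        ],
+        "properties": {"wordLevelTimestampsEnabled": True},  # Optional, illustrative
+    },
+)
+response.raise_for_status()
+print(response.json().get("self"))  # URL to poll for the transcription status
+```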
+
+## Web hooks
+
+Web hooks are applicable for [Custom Speech](custom-speech-overview.md) and [Batch Transcription](batch-transcription.md). In particular, web hooks apply to [datasets](#datasets), [endpoints](#endpoints), [evaluations](#evaluations), [models](#models), and [transcriptions](#transcriptions). Web hooks can be used to receive notifications about creation, processing, completion, and deletion events.
+
+This table includes all the web hook operations that are available with the Speech to text REST API.
+
+|Path|Method|Version 3.1|Version 3.0|
+|||||
+|`/webhooks`|GET|[WebHooks_List](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/operations/WebHooks_List)|[GetHooks](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0/operations/GetHooks)|
+|`/webhooks`|POST|[WebHooks_Create](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/operations/WebHooks_Create)|[CreateHook](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0/operations/CreateHook)|
+|`/webhooks/{id}:ping`<sup>1</sup>|POST|[WebHooks_Ping](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/operations/WebHooks_Ping)|[PingHook](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0/operations/PingHook)|
+|`/webhooks/{id}:test`<sup>2</sup>|POST|[WebHooks_Test](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/operations/WebHooks_Test)|[TestHook](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0/operations/TestHook)|
+|`/webhooks/{id}`|DELETE|[WebHooks_Delete](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/operations/WebHooks_Delete)|[DeleteHook](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0/operations/DeleteHook)|
+|`/webhooks/{id}`|GET|[WebHooks_Get](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/operations/WebHooks_Get)|[GetHook](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0/operations/GetHook)|
+|`/webhooks/{id}`|PATCH|[WebHooks_Update](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/operations/WebHooks_Update)|[UpdateHook](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0/operations/UpdateHook)|
+
+<sup>1</sup> The `/webhooks/{id}/ping` operation (includes '/') in version 3.0 is replaced by the `/webhooks/{id}:ping` operation (includes ':') in version 3.1.
+
+<sup>2</sup> The `/webhooks/{id}/test` operation (includes '/') in version 3.0 is replaced by the `/webhooks/{id}:test` operation (includes ':') in version 3.1.
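+
+For example, the following Python sketch calls the version 3.1 `WebHooks_Ping` operation by using the `:ping` path style noted above. The region, key, and web hook ID are placeholders, and the base URL is an assumption for illustration.
+
+```python
+import requests
+
+# Placeholders: replace with your Speech resource region, key, and web hook ID.
+region = "eastus"
+key = "YOUR_RESOURCE_KEY"
+webhook_id = "YOUR_WEBHOOK_ID"
+
+# Trigger the registered web hook callback (WebHooks_Ping).
+response = requests.post(
+    f"https://{region}.api.cognitive.microsoft.com/speechtotext/v3.1/webhooks/{webhook_id}:ping",
+    headers={"Ocp-Apim-Subscription-Key": key},
+)
+print(response.status_code)  # A 2xx status means the ping request was accepted.
+```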
+
+## Next steps
+
+- [Create a Custom Speech project](how-to-custom-speech-create-project.md)
+- [Get familiar with batch transcription](batch-transcription.md)
+
ai-services Rest Text To Speech https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/rest-text-to-speech.md
+
+ Title: Text to speech API reference (REST) - Speech service
+
+description: Learn how to use the REST API to convert text into synthesized speech.
+ Last updated : 01/24/2022
+# Text to speech REST API
+
+The Speech service allows you to [convert text into synthesized speech](#convert-text-to-speech) and [get a list of supported voices](#get-a-list-of-voices) for a region by using a REST API. In this article, you'll learn about authorization options, query options, how to structure a request, and how to interpret a response.
+
+> [!TIP]
+> Use cases for the text to speech REST API are limited. Use it only in cases where you can't use the [Speech SDK](speech-sdk.md). For example, with the Speech SDK you can [subscribe to events](how-to-speech-synthesis.md#subscribe-to-synthesizer-events) for more insights about the text to speech processing and results.
+
+The text to speech REST API supports neural text to speech voices, which support specific languages and dialects that are identified by locale. Each available endpoint is associated with a region. A Speech resource key for the endpoint or region that you plan to use is required. Here are links to more information:
+
+- For a complete list of voices, see [Language and voice support for the Speech service](language-support.md?tabs=tts).
+- For information about regional availability, see [Speech service supported regions](regions.md#speech-service).
+- For Azure Government and Azure China endpoints, see [this article about sovereign clouds](sovereign-clouds.md).
+
+> [!IMPORTANT]
+> Costs vary for prebuilt neural voices (called *Neural* on the pricing page) and custom neural voices (called *Custom Neural* on the pricing page). For more information, see [Speech service pricing](https://azure.microsoft.com/pricing/details/cognitive-services/speech-services/).
+
+Before you use the text to speech REST API, understand that you need to complete a token exchange as part of authentication to access the service. For more information, see [Authentication](#authentication).
+
+## Get a list of voices
+
+You can use the `tts.speech.microsoft.com/cognitiveservices/voices/list` endpoint to get a full list of voices for a specific region or endpoint. Prefix the voices list endpoint with a region to get a list of voices for that region. For example, to get a list of voices for the `westus` region, use the `https://westus.tts.speech.microsoft.com/cognitiveservices/voices/list` endpoint. For a list of all supported regions, see the [regions](regions.md) documentation.
+
+> [!NOTE]
+> [Voices and styles in preview](language-support.md?tabs=tts) are only available in three service regions: East US, West Europe, and Southeast Asia.
+
+### Request headers
+
+This table lists required and optional headers for text to speech requests:
+
+| Header | Description | Required or optional |
+|--|-||
+| `Ocp-Apim-Subscription-Key` | Your Speech resource key. | Either this header or `Authorization` is required. |
+| `Authorization` | An authorization token preceded by the word `Bearer`. For more information, see [Authentication](#authentication). | Either this header or `Ocp-Apim-Subscription-Key` is required. |
+
+### Request body
+
+A body isn't required for `GET` requests to this endpoint.
+
+### Sample request
+
+This request requires only an authorization header:
+
+```http
+GET /cognitiveservices/voices/list HTTP/1.1
+
+Host: westus.tts.speech.microsoft.com
+Ocp-Apim-Subscription-Key: YOUR_RESOURCE_KEY
+```
+
+Here's an example curl command:
+
+```curl
+curl --location --request GET 'https://YOUR_RESOURCE_REGION.tts.speech.microsoft.com/cognitiveservices/voices/list' \
+--header 'Ocp-Apim-Subscription-Key: YOUR_RESOURCE_KEY'
+```
+
+### Sample response
+
+You should receive a response with a JSON body that includes all supported locales, voices, gender, styles, and other details. The `WordsPerMinute` property for each voice can be used to estimate the length of the output speech. This JSON example shows partial results to illustrate the structure of a response:
+
+```json
+[
+ // Redacted for brevity
+ {
+ "Name": "Microsoft Server Speech Text to Speech Voice (en-US, JennyNeural)",
+ "DisplayName": "Jenny",
+ "LocalName": "Jenny",
+ "ShortName": "en-US-JennyNeural",
+ "Gender": "Female",
+ "Locale": "en-US",
+ "LocaleName": "English (United States)",
+ "StyleList": [
+ "assistant",
+ "chat",
+ "customerservice",
+ "newscast",
+ "angry",
+ "cheerful",
+ "sad",
+ "excited",
+ "friendly",
+ "terrified",
+ "shouting",
+ "unfriendly",
+ "whispering",
+ "hopeful"
+ ],
+ "SampleRateHertz": "24000",
+ "VoiceType": "Neural",
+ "Status": "GA",
+ "ExtendedPropertyMap": {
+ "IsHighQuality48K": "True"
+ },
+ "WordsPerMinute": "152"
+ },
+ // Redacted for brevity
+ {
+ "Name": "Microsoft Server Speech Text to Speech Voice (en-US, JennyMultilingualNeural)",
+ "DisplayName": "Jenny Multilingual",
+ "LocalName": "Jenny Multilingual",
+ "ShortName": "en-US-JennyMultilingualNeural",
+ "Gender": "Female",
+ "Locale": "en-US",
+ "LocaleName": "English (United States)",
+ "SecondaryLocaleList": [
+ "de-DE",
+ "en-AU",
+ "en-CA",
+ "en-GB",
+ "es-ES",
+ "es-MX",
+ "fr-CA",
+ "fr-FR",
+ "it-IT",
+ "ja-JP",
+ "ko-KR",
+ "pt-BR",
+ "zh-CN"
+ ],
+ "SampleRateHertz": "24000",
+ "VoiceType": "Neural",
+ "Status": "GA",
+ "WordsPerMinute": "190"
+ },
+ // Redacted for brevity
+ {
+ "Name": "Microsoft Server Speech Text to Speech Voice (ga-IE, OrlaNeural)",
+ "DisplayName": "Orla",
+ "LocalName": "Orla",
+ "ShortName": "ga-IE-OrlaNeural",
+ "Gender": "Female",
+ "Locale": "ga-IE",
+ "LocaleName": "Irish (Ireland)",
+ "SampleRateHertz": "24000",
+ "VoiceType": "Neural",
+ "Status": "GA",
+ "WordsPerMinute": "139"
+ },
+ // Redacted for brevity
+ {
+ "Name": "Microsoft Server Speech Text to Speech Voice (zh-CN, YunxiNeural)",
+ "DisplayName": "Yunxi",
+ "LocalName": "云希",
+ "ShortName": "zh-CN-YunxiNeural",
+ "Gender": "Male",
+ "Locale": "zh-CN",
+ "LocaleName": "Chinese (Mandarin, Simplified)",
+ "StyleList": [
+ "narration-relaxed",
+ "embarrassed",
+ "fearful",
+ "cheerful",
+ "disgruntled",
+ "serious",
+ "angry",
+ "sad",
+ "depressed",
+ "chat",
+ "assistant",
+ "newscast"
+ ],
+ "SampleRateHertz": "24000",
+ "VoiceType": "Neural",
+ "Status": "GA",
+ "RolePlayList": [
+ "Narrator",
+ "YoungAdultMale",
+ "Boy"
+ ],
+ "WordsPerMinute": "293"
+ },
+ // Redacted for brevity
+]
+```
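+
+As a quick illustration of the `WordsPerMinute` property, the following Python sketch calls the voices list endpoint shown above and estimates the output length for one voice. The region, key, sample text, and use of the `requests` package are assumptions for this example.
+
+```python
+import requests
+
+# Placeholders: replace with your Speech resource region and key.
+region = "westus"
+key = "YOUR_RESOURCE_KEY"
+text = "I'm excited to try text to speech!"
+
+# Get the full voices list for the region.
+voices = requests.get(
+    f"https://{region}.tts.speech.microsoft.com/cognitiveservices/voices/list",
+    headers={"Ocp-Apim-Subscription-Key": key},
+).json()
+
+# Use WordsPerMinute to roughly estimate the synthesized audio length.
+jenny = next(v for v in voices if v["ShortName"] == "en-US-JennyNeural")
+estimated_seconds = len(text.split()) / int(jenny["WordsPerMinute"]) * 60
+print(f"Estimated audio length: {estimated_seconds:.1f} seconds")
+```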
+
+### HTTP status codes
+
+The HTTP status code for each response indicates success or common errors.
+
+| HTTP status code | Description | Possible reason |
+||-|--|
+| 200 | OK | The request was successful. |
+| 400 | Bad request | A required parameter is missing, empty, or null. Or, the value passed to either a required or optional parameter is invalid. A common reason is a header that's too long. |
+| 401 | Unauthorized | The request is not authorized. Make sure your resource key or token is valid and in the correct region. |
+| 429 | Too many requests | You have exceeded the quota or rate of requests allowed for your resource. |
+| 502 | Bad gateway | There's a network or server-side problem. This status might also indicate invalid headers. |
+
+## Convert text to speech
+
+The `cognitiveservices/v1` endpoint allows you to convert text to speech by using [Speech Synthesis Markup Language (SSML)](speech-synthesis-markup.md).
+
+### Regions and endpoints
+
+These regions are supported for text to speech through the REST API. Be sure to select the endpoint that matches your Speech resource region.
+
+### Request headers
+
+This table lists required and optional headers for text to speech requests:
+
+| Header | Description | Required or optional |
+|--|-||
+| `Authorization` | An authorization token preceded by the word `Bearer`. For more information, see [Authentication](#authentication). | Required |
+| `Content-Type` | Specifies the content type for the provided text. Accepted value: `application/ssml+xml`. | Required |
+| `X-Microsoft-OutputFormat` | Specifies the audio output format. For a complete list of accepted values, see [Audio outputs](#audio-outputs). | Required |
+| `User-Agent` | The application name. The provided value must be fewer than 255 characters. | Required |
+
+### Request body
+
+If you're using a custom neural voice, the body of a request can be sent as plain text (ASCII or UTF-8). Otherwise, the body of each `POST` request is sent as [SSML](speech-synthesis-markup.md). SSML allows you to choose the voice and language of the synthesized speech that the text to speech feature returns. For a complete list of supported voices, see [Language and voice support for the Speech service](language-support.md?tabs=tts).
+
+### Sample request
+
+This HTTP request uses SSML to specify the voice and language. If the SSML body is long and the resulting audio exceeds 10 minutes, the audio is truncated to 10 minutes. In other words, the audio length can't exceed 10 minutes.
+
+```http
+POST /cognitiveservices/v1 HTTP/1.1
+
+X-Microsoft-OutputFormat: riff-24khz-16bit-mono-pcm
+Content-Type: application/ssml+xml
+Host: westus.tts.speech.microsoft.com
+Content-Length: <Length>
+Authorization: Bearer [Base64 access_token]
+User-Agent: <Your application name>
+
+<speak version='1.0' xml:lang='en-US'><voice xml:lang='en-US' xml:gender='Male'
+ name='en-US-ChristopherNeural'>
+ I'm excited to try text to speech!
+</voice></speak>
+```
+<sup>*</sup> For the `Content-Length` header, use your own content length. In most cases, this value is calculated automatically.
+
+### HTTP status codes
+
+The HTTP status code for each response indicates success or common errors:
+
+| HTTP status code | Description | Possible reason |
+||-|--|
+| 200 | OK | The request was successful. The response body is an audio file. |
+| 400 | Bad request | A required parameter is missing, empty, or null. Or, the value passed to either a required or optional parameter is invalid. A common reason is a header that's too long. |
+| 401 | Unauthorized | The request is not authorized. Make sure your Speech resource key or token is valid and in the correct region. |
+| 415 | Unsupported media type | It's possible that the wrong `Content-Type` value was provided. `Content-Type` should be set to `application/ssml+xml`. |
+| 429 | Too many requests | You have exceeded the quota or rate of requests allowed for your resource. |
+| 502 | Bad gateway | There's a network or server-side problem. This status might also indicate invalid headers. |
+
+If the HTTP status is `200 OK`, the body of the response contains an audio file in the requested format. This file can be played as it's transferred, saved to a buffer, or saved to a file.
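+
+Putting the pieces together, here's a minimal Python sketch of the request shown above, including the token exchange described in [Authentication](#authentication). The region, key, voice, output format, and file name are placeholders for illustration.
+
+```python
+import requests
+
+# Placeholders: replace with your Speech resource region and key.
+region = "westus"
+key = "YOUR_RESOURCE_KEY"
+
+# Exchange the resource key for a bearer token.
+token = requests.post(
+    f"https://{region}.api.cognitive.microsoft.com/sts/v1.0/issueToken",
+    headers={"Ocp-Apim-Subscription-Key": key},
+).text
+
+ssml = (
+    "<speak version='1.0' xml:lang='en-US'>"
+    "<voice name='en-US-ChristopherNeural'>I'm excited to try text to speech!</voice>"
+    "</speak>"
+)
+
+# Post the SSML and save the returned audio.
+audio = requests.post(
+    f"https://{region}.tts.speech.microsoft.com/cognitiveservices/v1",
+    headers={
+        "Authorization": f"Bearer {token}",
+        "Content-Type": "application/ssml+xml",
+        "X-Microsoft-OutputFormat": "riff-24khz-16bit-mono-pcm",
+        "User-Agent": "tts-rest-sample",
+    },
+    data=ssml.encode("utf-8"),
+)
+audio.raise_for_status()
+with open("output.wav", "wb") as wav_file:
+    wav_file.write(audio.content)
+```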
+
+## Audio outputs
+
+The supported streaming and non-streaming audio formats are sent in each request as the `X-Microsoft-OutputFormat` header. Each format incorporates a bit rate and encoding type. The Speech service supports 48-kHz, 24-kHz, 16-kHz, and 8-kHz audio outputs. Each prebuilt neural voice model is available at 24 kHz and high-fidelity 48 kHz.
+
+#### [Streaming](#tab/streaming)
+
+```
+amr-wb-16000hz
+audio-16khz-16bit-32kbps-mono-opus
+audio-16khz-32kbitrate-mono-mp3
+audio-16khz-64kbitrate-mono-mp3
+audio-16khz-128kbitrate-mono-mp3
+audio-24khz-16bit-24kbps-mono-opus
+audio-24khz-16bit-48kbps-mono-opus
+audio-24khz-48kbitrate-mono-mp3
+audio-24khz-96kbitrate-mono-mp3
+audio-24khz-160kbitrate-mono-mp3
+audio-48khz-96kbitrate-mono-mp3
+audio-48khz-192kbitrate-mono-mp3
+ogg-16khz-16bit-mono-opus
+ogg-24khz-16bit-mono-opus
+ogg-48khz-16bit-mono-opus
+raw-8khz-8bit-mono-alaw
+raw-8khz-8bit-mono-mulaw
+raw-8khz-16bit-mono-pcm
+raw-16khz-16bit-mono-pcm
+raw-16khz-16bit-mono-truesilk
+raw-22050hz-16bit-mono-pcm
+raw-24khz-16bit-mono-pcm
+raw-24khz-16bit-mono-truesilk
+raw-44100hz-16bit-mono-pcm
+raw-48khz-16bit-mono-pcm
+webm-16khz-16bit-mono-opus
+webm-24khz-16bit-24kbps-mono-opus
+webm-24khz-16bit-mono-opus
+```
+
+#### [NonStreaming](#tab/nonstreaming)
+
+```
+riff-8khz-8bit-mono-alaw
+riff-8khz-8bit-mono-mulaw
+riff-8khz-16bit-mono-pcm
+riff-22050hz-16bit-mono-pcm
+riff-24khz-16bit-mono-pcm
+riff-44100hz-16bit-mono-pcm
+riff-48khz-16bit-mono-pcm
+```
+
+***
+
+> [!NOTE]
+> If you select a 48-kHz output format, the high-fidelity 48-kHz voice model is invoked. Sample rates other than 24 kHz and 48 kHz are obtained through upsampling or downsampling during synthesis. For example, 44.1 kHz is downsampled from 48 kHz.
+>
+> If your selected voice and output format have different bit rates, the audio is resampled as necessary. You can decode the `ogg-24khz-16bit-mono-opus` format by using the [Opus codec](https://opus-codec.org/downloads/).
+
+## Authentication
+
+## Next steps
+
+- [Create a free Azure account](https://azure.microsoft.com/free/cognitive-services/)
+- [Get started with custom neural voice](how-to-custom-voice.md)
+- [Batch synthesis](batch-synthesis.md)
ai-services Role Based Access Control https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/role-based-access-control.md
+
+ Title: Role-based access control for Speech resources - Speech service
+
+description: Learn how to assign access roles for a Speech resource.
++++++ Last updated : 04/03/2022+++
+# Role-based access control for Speech resources
+
+You can manage access and permissions to your Speech resources with Azure role-based access control (Azure RBAC). Assigned roles can vary across Speech resources. For example, you can assign a role to a Speech resource that should only be used to train a Custom Speech model. You can assign another role to a Speech resource that is used to transcribe audio files. Depending on who can access each Speech resource, you can effectively set a different level of access per application or user. For more information on Azure RBAC, see the [Azure RBAC documentation](../../role-based-access-control/overview.md).
+
+> [!NOTE]
+> A Speech resource can inherit or be assigned multiple roles. The final level of access to the resource is a combination of the permissions granted by all assigned roles at the operation level.
+
+## Roles for Speech resources
+
+A role definition is a collection of permissions. When you create a Speech resource, the built-in roles in this table are assigned by default.
+
+| Role | Can list resource keys | Access to data, models, and endpoints|
+| | | |
+|**Owner** |Yes |View, create, edit, and delete |
+|**Contributor** |Yes |View, create, edit, and delete |
+|**Azure AI services Contributor** |Yes |View, create, edit, and delete |
+|**Azure AI services User** |Yes |View, create, edit, and delete |
+|**Azure AI Speech Contributor** |No | View, create, edit, and delete |
+|**Azure AI Speech User** |No |View only |
+|**Azure AI services Data Reader (Preview)** |No |View only |
+
+> [!IMPORTANT]
+> Whether a role can list resource keys is important for [Speech Studio authentication](#speech-studio-authentication). To list resource keys, a role must have permission to run the `Microsoft.CognitiveServices/accounts/listKeys/action` operation. If key authentication is disabled in the Azure portal, none of the roles can list keys.
+
+Keep the built-in roles if your Speech resource should have full read and write access to the projects.
+
+For finer-grained resource access control, you can [add or remove roles](../../role-based-access-control/role-assignments-portal.md?tabs=current) using the Azure portal. For example, you could create a custom role with permission to upload Custom Speech datasets, but without permission to deploy a Custom Speech model to an endpoint.
+
+## Authentication with keys and tokens
+
+The [roles](#roles-for-speech-resources) define what permissions you have. Authentication is required to use the Speech resource.
+
+To authenticate with Speech resource keys, all you need is the key and region. To authenticate with an Azure AD token, the Speech resource must have a [custom subdomain](speech-services-private-link.md#create-a-custom-domain-name) and use a [private endpoint](speech-services-private-link.md#turn-on-private-endpoints). The Speech service uses custom subdomains with private endpoints only.
+
+### Speech SDK authentication
+
+For the SDK, you configure whether to authenticate with a Speech resource key or Azure AD token. For details, see [Azure Active Directory Authentication with the Speech SDK](how-to-configure-azure-ad-auth.md).
+
+### Speech Studio authentication
+
+Once you're signed into [Speech Studio](speech-studio-overview.md), you select a subscription and Speech resource. You don't choose whether to authenticate with a Speech resource key or Azure AD token. Speech Studio gets the key or token automatically from the Speech resource. If one of the assigned [roles](#roles-for-speech-resources) has permission to list resource keys, Speech Studio will authenticate with the key. Otherwise, Speech Studio will authenticate with the Azure AD token.
+
+If Speech Studio uses your Azure AD token, but the Speech resource doesn't have a custom subdomain and private endpoint, then you can't use some features in Speech Studio. In this case, for example, the Speech resource can be used to train a Custom Speech model, but you can't use a Custom Speech model to transcribe audio files.
+
+| Authentication credential | Feature availability |
+| | |
+|Speech resource key|Full access limited only by the assigned role permissions.|
+|Azure AD token with custom subdomain and private endpoint|Full access limited only by the assigned role permissions.|
+|Azure AD token without custom subdomain and private endpoint (not recommended)|Features are limited. For example, the Speech resource can be used to train a Custom Speech model or Custom Neural Voice. But you can't use a Custom Speech model or Custom Neural Voice.|
+
+## Next steps
+
+* [Azure Active Directory Authentication with the Speech SDK](how-to-configure-azure-ad-auth.md).
+* [Speech service encryption of data at rest](speech-encryption-of-data-at-rest.md).
ai-services Sovereign Clouds https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/sovereign-clouds.md
+
+ Title: Sovereign Clouds - Speech service
+
+description: Learn how to use Sovereign Clouds
+++++++ Last updated : 05/10/2022+++
+# Speech service in sovereign clouds
+
+## Azure Government (United States)
+
+Available to US government entities and their partners only. See more information about Azure Government [here](../../azure-government/documentation-government-welcome.md) and [here](../../azure-government/compare-azure-government-global-azure.md).
+
+- **Azure portal:**
+ - [https://portal.azure.us/](https://portal.azure.us/)
+- **Regions:**
+ - US Gov Arizona
+ - US Gov Virginia
+- **Available pricing tiers:**
+ - Free (F0) and Standard (S0). See more details [here](https://azure.microsoft.com/pricing/details/cognitive-services/speech-services/)
+- **Supported features:**
+ - Speech to text
+ - Custom speech (Acoustic Model (AM) and Language Model (LM) adaptation)
+ - [Speech Studio](https://speech.azure.us/)
+ - Text to speech
+ - Standard voice
+ - Neural voice
+ - Speech translation
+- **Unsupported features:**
+ - Custom Voice
+ - Custom Commands
+- **Supported languages:**
+ - See the list of supported languages [here](language-support.md)
+
+### Endpoint information
+
+This section contains Speech service endpoint information for use with the [Speech SDK](speech-sdk.md), [Speech to text REST API](rest-speech-to-text.md), and [Text to speech REST API](rest-text-to-speech.md).
+
+#### Speech service REST API
+
+Speech service REST API endpoints in Azure Government have the following format:
+
+| REST API type / operation | Endpoint format |
+|--|--|
+| Access token | `https://<REGION_IDENTIFIER>.api.cognitive.microsoft.us/sts/v1.0/issueToken` |
+| [Speech to text REST API](rest-speech-to-text.md) | `https://<REGION_IDENTIFIER>.api.cognitive.microsoft.us/<URL_PATH>` |
+| [Speech to text REST API for short audio](rest-speech-to-text-short.md) | `https://<REGION_IDENTIFIER>.stt.speech.azure.us/<URL_PATH>` |
+| [Text to speech REST API](rest-text-to-speech.md) | `https://<REGION_IDENTIFIER>.tts.speech.azure.us/<URL_PATH>` |
+
+Replace `<REGION_IDENTIFIER>` with the identifier matching the region of your subscription from this table:
+
+| | Region identifier |
+|--|--|
+| **US Gov Arizona** | `usgovarizona` |
+| **US Gov Virginia** | `usgovvirginia` |
+
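+For example, the following Python sketch requests an access token by using the Azure Government endpoint format from the preceding tables. The region identifier, key, and use of the `requests` package are placeholders and assumptions for illustration.
+
+```python
+import requests
+
+# Placeholders: replace with your region identifier and Speech resource key.
+region = "usgovvirginia"
+key = "YOUR_RESOURCE_KEY"
+
+# Request an access token from the Azure Government STS endpoint.
+response = requests.post(
+    f"https://{region}.api.cognitive.microsoft.us/sts/v1.0/issueToken",
+    headers={"Ocp-Apim-Subscription-Key": key},
+)
+response.raise_for_status()
+access_token = response.text  # Pass as "Bearer <token>" in the Authorization header.
+```
+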
+#### Speech SDK
+
+For the [Speech SDK](speech-sdk.md) in sovereign clouds, you need to use "from host / with host" instantiation of the `SpeechConfig` class or the `--host` option of the [Speech CLI](spx-overview.md). (You can also use "from endpoint / with endpoint" instantiation and the `--endpoint` Speech CLI option.)
+
+The `SpeechConfig` class should be instantiated like this:
+
+# [C#](#tab/c-sharp)
+```csharp
+var config = SpeechConfig.FromHost(usGovHost, subscriptionKey);
+```
+# [C++](#tab/cpp)
+```cpp
+auto config = SpeechConfig::FromHost(usGovHost, subscriptionKey);
+```
+# [Java](#tab/java)
+```java
+SpeechConfig config = SpeechConfig.fromHost(usGovHost, subscriptionKey);
+```
+# [Python](#tab/python)
+```python
+import azure.cognitiveservices.speech as speechsdk
+speech_config = speechsdk.SpeechConfig(host=usGovHost, subscription=subscriptionKey)
+```
+# [Objective-C](#tab/objective-c)
+```objectivec
+SPXSpeechConfiguration *speechConfig = [[SPXSpeechConfiguration alloc] initWithHost:usGovHost subscription:subscriptionKey];
+```
+***
+
+The Speech CLI should be used like this (note the `--host` option):
+```dos
+spx recognize --host "usGovHost" --file myaudio.wav
+```
+Replace `subscriptionKey` with your Speech resource key. Replace `usGovHost` with the expression matching the required service offering and the region of your subscription from this table:
+
+| Region / Service offering | Host expression |
+|--|--|
+| **US Gov Arizona** | |
+| Speech to text | `wss://usgovarizona.stt.speech.azure.us` |
+| Text to speech | `https://usgovarizona.tts.speech.azure.us` |
+| **US Gov Virginia** | |
+| Speech to text | `wss://usgovvirginia.stt.speech.azure.us` |
+| Text to speech | `https://usgovvirginia.tts.speech.azure.us` |
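+
+For example, with the Python instantiation shown above and the US Gov Virginia speech to text host from this table, the substitution might look like the following sketch (the key is a placeholder):
+
+```python
+import azure.cognitiveservices.speech as speechsdk
+
+# US Gov Virginia speech to text host; replace the subscription value with your key.
+speech_config = speechsdk.SpeechConfig(
+    host="wss://usgovvirginia.stt.speech.azure.us",
+    subscription="YOUR_RESOURCE_KEY",
+)
+```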
+
+## Azure China
+
+Available to organizations with a business presence in China. See more information about Azure China [here](/azure/china/overview-operations).
+
+- **Azure portal:**
+ - [https://portal.azure.cn/](https://portal.azure.cn/)
+- **Regions:**
+ - China East 2
+ - China North 2
+ - China North 3
+- **Available pricing tiers:**
+ - Free (F0) and Standard (S0). See more details [here](https://www.azure.cn/pricing/details/cognitive-services/index.html)
+- **Supported features:**
+ - Speech to text
+ - Custom speech (Acoustic Model (AM) and Language Model (LM) adaptation)
+ - [Speech Studio](https://speech.azure.cn/)
+ - [Pronunciation assessment](how-to-pronunciation-assessment.md)
+ - Text to speech
+ - Standard voice
+ - Neural voice
+ - Speech translator
+- **Unsupported features:**
+ - Custom Voice
+ - Custom Commands
+- **Supported languages:**
+ - See the list of supported languages [here](language-support.md)
+
+### Endpoint information
+
+This section contains Speech service endpoint information for use with the [Speech SDK](speech-sdk.md), [Speech to text REST API](rest-speech-to-text.md), and [Text to speech REST API](rest-text-to-speech.md).
+
+#### Speech service REST API
+
+Speech service REST API endpoints in Azure China have the following format:
+
+| REST API type / operation | Endpoint format |
+|--|--|
+| Access token | `https://<REGION_IDENTIFIER>.api.cognitive.azure.cn/sts/v1.0/issueToken` |
+| [Speech to text REST API](rest-speech-to-text.md) | `https://<REGION_IDENTIFIER>.api.cognitive.azure.cn/<URL_PATH>` |
+| [Speech to text REST API for short audio](rest-speech-to-text-short.md) | `https://<REGION_IDENTIFIER>.stt.speech.azure.cn/<URL_PATH>` |
+| [Text to speech REST API](rest-text-to-speech.md) | `https://<REGION_IDENTIFIER>.tts.speech.azure.cn/<URL_PATH>` |
+
+Replace `<REGION_IDENTIFIER>` with the identifier matching the region of your subscription from this table:
+
+| | Region identifier |
+|--|--|
+| **China East 2** | `chinaeast2` |
+| **China North 2** | `chinanorth2` |
+| **China North 3** | `chinanorth3` |
+
+#### Speech SDK
+
+For the [Speech SDK](speech-sdk.md) in sovereign clouds, you need to use "from host / with host" instantiation of the `SpeechConfig` class or the `--host` option of the [Speech CLI](spx-overview.md). (You can also use "from endpoint / with endpoint" instantiation and the `--endpoint` Speech CLI option.)
+
+The `SpeechConfig` class should be instantiated like this:
+
+# [C#](#tab/c-sharp)
+```csharp
+var config = SpeechConfig.FromHost("azCnHost", subscriptionKey);
+```
+# [C++](#tab/cpp)
+```cpp
+auto config = SpeechConfig::FromHost("azCnHost", subscriptionKey);
+```
+# [Java](#tab/java)
+```java
+SpeechConfig config = SpeechConfig.fromHost("azCnHost", subscriptionKey);
+```
+# [Python](#tab/python)
+```python
+import azure.cognitiveservices.speech as speechsdk
+speech_config = speechsdk.SpeechConfig(host="azCnHost", subscription=subscriptionKey)
+```
+# [Objective-C](#tab/objective-c)
+```objectivec
+SPXSpeechConfiguration *speechConfig = [[SPXSpeechConfiguration alloc] initWithHost:"azCnHost" subscription:subscriptionKey];
+```
+***
+
+The Speech CLI should be used like this (note the `--host` option):
+```dos
+spx recognize --host "azCnHost" --file myaudio.wav
+```
+Replace `subscriptionKey` with your Speech resource key. Replace `azCnHost` with the expression matching the required service offering and the region of your subscription from this table:
+
+| Region / Service offering | Host expression |
+|--|--|
+| **China East 2** | |
+| Speech to text | `wss://chinaeast2.stt.speech.azure.cn` |
+| Text to speech | `https://chinaeast2.tts.speech.azure.cn` |
+| **China North 2** | |
+| Speech to text | `wss://chinanorth2.stt.speech.azure.cn` |
+| Text to speech | `https://chinanorth2.tts.speech.azure.cn` |
+| **China North 3** | |
+| Speech to text | `wss://chinanorth3.stt.speech.azure.cn` |
+| Text to speech | `https://chinanorth3.tts.speech.azure.cn` |
ai-services Speaker Recognition Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/speaker-recognition-overview.md
+
+ Title: Speaker recognition overview - Speech service
+
+description: Speaker recognition provides algorithms that verify and identify speakers by their unique voice characteristics, by using voice biometry. Speaker recognition is used to answer the question ΓÇ£who is speaking?ΓÇ¥. This article is an overview of the benefits and capabilities of the speaker recognition feature.
++++++ Last updated : 01/08/2022++
+keywords: speaker recognition, voice biometry
+
+# What is speaker recognition?
+
+Speaker recognition can help determine who is speaking in an audio clip. The service can verify and identify speakers by their unique voice characteristics, by using voice biometry.
+
+You provide audio training data for a single speaker, which creates an enrollment profile based on the unique characteristics of the speaker's voice. You can then cross-check audio voice samples against this profile to verify that the speaker is the same person (speaker verification). You can also cross-check audio voice samples against a *group* of enrolled speaker profiles to see if it matches any profile in the group (speaker identification).
+
+> [!IMPORTANT]
+> Microsoft limits access to speaker recognition. You can apply for access through the [Azure AI services speaker recognition limited access review](https://aka.ms/azure-speaker-recognition). For more information, see [Limited access for speaker recognition](/legal/cognitive-services/speech-service/speaker-recognition/limited-access-speaker-recognition).
+
+## Speaker verification
+
+Speaker verification streamlines the process of verifying an enrolled speaker identity with either passphrases or free-form voice input. For example, you can use it for customer identity verification in call centers or contactless facility access.
+
+### How does speaker verification work?
+
+The following flowchart provides a visual of how this works:
+
+Speaker verification can be either text-dependent or text-independent. *Text-dependent* verification means that speakers need to choose the same passphrase to use during both enrollment and verification phases. *Text-independent* verification means that speakers can speak in everyday language in the enrollment and verification phrases.
+
+For text-dependent verification, the speaker's voice is enrolled by saying a passphrase from a set of predefined phrases. Voice features are extracted from the audio recording to form a unique voice signature, and the chosen passphrase is also recognized. Together, the voice signature and the passphrase are used to verify the speaker.
+
+Text-independent verification has no restrictions on what the speaker says during enrollment, besides the initial activation phrase when active enrollment is enabled. It doesn't have any restrictions on the audio sample to be verified, because it only extracts voice features to score similarity.
+
+The APIs aren't intended to determine whether the audio is from a live person, or from an imitation or recording of an enrolled speaker.
+
+## Speaker identification
+
+Speaker identification helps you determine an unknown speaker's identity within a group of enrolled speakers. Speaker identification enables you to attribute speech to individual speakers, and unlock value from scenarios with multiple speakers, such as:
+
+* Supporting solutions for remote meeting productivity.
+* Building multi-user device personalization.
+
+### How does speaker identification work?
+
+Enrollment for speaker identification is text-independent. There are no restrictions on what the speaker says in the audio, besides the initial activation phrase when active enrollment is enabled. Similar to speaker verification, the speaker's voice is recorded in the enrollment phase, and the voice features are extracted to form a unique voice signature. In the identification phase, the input voice sample is compared to a specified list of enrolled voices (up to 50 in each request).
+
+## Data security and privacy
+
+Speaker enrollment data is stored in a secured system, including the speech audio for enrollment and the voice signature features. The speech audio for enrollment is only used when the algorithm is upgraded, and the features need to be extracted again. The service doesn't retain the speech recording or the extracted voice features that are sent to the service during the recognition phase.
+
+You control how long data should be retained. You can create, update, and delete enrollment data for individual speakers through API calls. When the subscription is deleted, all the speaker enrollment data associated with the subscription will also be deleted.
+
+As with all of the Azure AI services resources, developers who use the speaker recognition feature must be aware of Microsoft policies on customer data. You should ensure that you have received the appropriate permissions from the users. You can find more details in [Data and privacy for speaker recognition](/legal/cognitive-services/speech-service/speaker-recognition/data-privacy-speaker-recognition). For more information, see the [Azure AI services page](https://azure.microsoft.com/support/legal/cognitive-services-compliance-and-privacy/) on the Microsoft Trust Center.
+
+## Common questions and solutions
+
+| Question | Solution |
+||-|
+| What situations am I most likely to use speaker recognition? | Good examples include call center customer verification, voice-based patient check-in, meeting transcription, and multi-user device personalization.|
+| What's the difference between identification and verification? | Identification is the process of detecting which member from a group of speakers is speaking. Verification is the act of confirming that a speaker matches a known, *enrolled* voice.|
+| What languages are supported? | See [Speaker recognition language support](language-support.md?tabs=speaker-recognition). |
+| What Azure regions are supported? | See [Speaker recognition region support](regions.md#speech-service).|
+| What audio formats are supported? | Mono 16 bit, 16 kHz PCM-encoded WAV. |
+| Can you enroll one speaker multiple times? | Yes, for text-dependent verification, you can enroll a speaker up to 50 times. For text-independent verification or speaker identification, you can enroll with up to 300 seconds of audio. |
+| What data is stored in Azure? | Enrollment audio is stored in the service until the voice profile is [deleted](./get-started-speaker-recognition.md#delete-voice-profile-enrollments). Recognition audio samples aren't retained or stored. |
+
+## Responsible AI
+
+An AI system includes not only the technology, but also the people who will use it, the people who will be affected by it, and the environment in which it is deployed. Read the transparency notes to learn about responsible AI use and deployment in your systems.
+
+* [Transparency note and use cases](/legal/cognitive-services/speech-service/speaker-recognition/transparency-note-speaker-recognition?context=/azure/cognitive-services/speech-service/context/context)
+* [Characteristics and limitations](/legal/cognitive-services/speech-service/speaker-recognition/characteristics-and-limitations-speaker-recognition?context=/azure/cognitive-services/speech-service/context/context)
+* [Limited access](/legal/cognitive-services/speech-service/speaker-recognition/limited-access-speaker-recognition?context=/azure/cognitive-services/speech-service/context/context)
+* [General guidelines](/legal/cognitive-services/speech-service/speaker-recognition/guidance-integration-responsible-use-speaker-recognition?context=/azure/cognitive-services/speech-service/context/context)
+* [Data, privacy, and security](/legal/cognitive-services/speech-service/speaker-recognition/data-privacy-speaker-recognition?context=/azure/cognitive-services/speech-service/context/context)
+
+## Next steps
+
+> [!div class="nextstepaction"]
+> [Speaker recognition quickstart](./get-started-speaker-recognition.md)
ai-services Speech Container Batch Processing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/speech-container-batch-processing.md
+
+ Title: Batch processing kit for Speech containers
+
+description: Use the Batch processing kit to scale Speech container requests.
++++++ Last updated : 10/22/2020+++
+# Batch processing kit for Speech containers
+
+Use the batch processing kit to complement and scale out workloads on Speech containers. Available as a container, this open-source utility helps facilitate batch transcription for large numbers of audio files, across any number of on-premises and cloud-based speech container endpoints.
+
+The batch kit container is available for free on [GitHub](https://github.com/microsoft/batch-processing-kit) and [Docker hub](https://hub.docker.com/r/batchkit/speech-batch-kit/tags). You are only [billed](speech-container-overview.md#billing) for the Speech containers you use.
+
+| Feature | Description |
+|||
+| Batch audio file distribution | Automatically dispatch large numbers of files to on-premises or cloud-based Speech container endpoints. Files can be on any POSIX-compliant volume, including network filesystems. |
+| Speech SDK integration | Pass common flags to the Speech SDK, including: n-best hypotheses, diarization, language, profanity masking. |
+|Run modes | Run the batch client once, continuously in the background, or create HTTP endpoints for audio files. |
+| Fault tolerance | Automatically retry and continue transcription without losing progress, and differentiate between which errors can, and can't be retried on. |
+| Endpoint availability detection | If an endpoint becomes unavailable, the batch client will continue transcribing, using other container endpoints. After becoming available again, the client will automatically begin using the endpoint. |
+| Endpoint hot-swapping | Add, remove, or modify Speech container endpoints during runtime without interrupting the batch progress. Updates are immediate. |
+| Real-time logging | Real-time logging of attempted requests, timestamps, and failure reasons, with Speech SDK log files for each audio file. |
+
+## Get the container image with `docker pull`
+
+Use the [docker pull](https://docs.docker.com/engine/reference/commandline/pull/) command to download the latest batch kit container.
+
+```bash
+docker pull docker.io/batchkit/speech-batch-kit:latest
+```
+
+## Endpoint configuration
+
+The batch client takes a yaml configuration file that specifies the on-premises container endpoints. The following example can be written to `/mnt/my_nfs/config.yaml`, which is used in the examples below.
+
+```yaml
+MyContainer1:
+ concurrency: 5
+ host: 192.168.0.100
+ port: 5000
+ rtf: 3
+MyContainer2:
+ concurrency: 5
+ host: BatchVM0.corp.redmond.microsoft.com
+ port: 5000
+ rtf: 2
+MyContainer3:
+ concurrency: 10
+ host: localhost
+ port: 6001
+ rtf: 4
+```
+
+This YAML example specifies three Speech containers on three hosts. The first host is specified by an IPv4 address, the second is running on the same VM as the batch client, and the third container is specified by the DNS hostname of another VM. The `concurrency` value specifies the maximum concurrent file transcriptions that can run on the same container. The `rtf` (Real-Time Factor) value is optional and can be used to tune performance.
+
+The batch client can dynamically detect if an endpoint becomes unavailable (for example, due to a container restart or networking issue), and when it becomes available again. Transcription requests will not be sent to containers that are unavailable, and the client will continue using other available containers. You can add, remove, or edit endpoints at any time without interrupting the progress of your batch.
+
+## Run the batch processing container
+
+> [!NOTE]
+> * This example uses the same directory (`/my_nfs`) for the configuration file and the inputs, outputs, and logs directories. You can use hosted or NFS-mounted directories for these folders.
+> * Running the client with `-h` lists the available command-line parameters and their default values.
+> * The batch processing container is only supported on Linux.
+
+Use the Docker `run` command to start the container. This will start an interactive shell inside the container.
+
+```bash
+docker run --network host --rm -ti -v /mnt/my_nfs:/my_nfs --entrypoint /bin/bash docker.io/batchkit/speech-batch-kit:latest
+```
+
+To run the batch client:
+
+```bash
+run-batch-client -config /my_nfs/config.yaml -input_folder /my_nfs/audio_files -output_folder /my_nfs/transcriptions -log_folder /my_nfs/logs -file_log_level DEBUG -nbest 1 -m ONESHOT -diarization None -language en-US -strict_config
+```
+
+To run the batch client and container in a single command:
+
+```bash
+docker run --network host --rm -ti -v /mnt/my_nfs:/my_nfs docker.io/batchkit/speech-batch-kit:latest -config /my_nfs/config.yaml -input_folder /my_nfs/audio_files -output_folder /my_nfs/transcriptions -log_folder /my_nfs/logs
+```
+
+The client will start running. If an audio file has already been transcribed in a previous run, the client will automatically skip the file. Files are sent with an automatic retry if transient errors occur, and you can differentiate between which errors you want the client to retry on. On a transcription error, the client will continue transcription, and can retry without losing progress.
+
+## Run modes
+
+The batch processing kit offers three modes, using the `--run-mode` parameter.
+
+#### [Oneshot](#tab/oneshot)
+
+`ONESHOT` mode transcribes a single batch of audio files (from an input directory and optional file list) to an output folder.
+
+1. Define the Speech container endpoints that the batch client will use in the `config.yaml` file.
+2. Place audio files for transcription in an input directory.
+3. Invoke the container on the directory, which will begin processing the files. If the audio file has already been transcribed in a previous run with the same output directory (same file name and checksum), the client will skip the file.
+4. The files are dispatched to the container endpoints from step 1.
+5. Logs and the Speech container output are returned to the specified output directory.
+
+#### [Daemon](#tab/daemon)
+
+> [!TIP]
+> If many files are added to the input directory at the same time, you can improve performance by adding them at regular intervals instead.
+
+`DAEMON` mode transcribes existing files in a given folder, and continuously transcribes new audio files as they are added.
+
+1. Define the Speech container endpoints that the batch client will use in the `config.yaml` file.
+2. Invoke the container on an input directory. The batch client will begin monitoring the directory for incoming files.
+3. Set up continuous audio file delivery to the input directory. If the audio file has already been transcribed in a previous run with the same output directory (same file name and checksum), the client will skip the file.
+4. Once a file write or POSIX signal is detected, the container is triggered to respond.
+5. The files are dispatched to the container endpoints from step 1.
+6. Logs and the Speech container output are returned to the specified output directory.
+
+#### [REST](#tab/rest)
+
+`REST` mode is an API server mode that provides a basic set of HTTP endpoints for audio file batch submission, status checking, and long polling. It also enables programmatic consumption by using a Python module extension, or by importing it as a submodule.
+
+1. Define the Speech container endpoints that the batch client will use in the `config.yaml` file.
+2. Send an HTTP request to one of the API server's endpoints.
+
+ |Endpoint |Description |
+ |||
+ |`/submit` | Endpoint for creating new batch requests. |
+ |`/status` | Endpoint for checking the status of a batch request. The connection will stay open until the batch completes. |
+ |`/watch` | Endpoint for using HTTP long polling until the batch completes. |
+
+3. Audio files are uploaded from the input directory. If the audio file has already been transcribed in a previous run with the same output directory (same file name and checksum), the client will skip the file.
+4. If a request is sent to the `/submit` endpoint, the files are dispatched to the container endpoints from step 1 (see the example requests after these steps).
+5. Logs and the Speech container output are returned to the specified output directory.
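+
+The exact request payloads, query parameters, and port depend on how you start the API server, so treat the following `curl` calls as illustrative placeholders rather than the kit's documented contract:
+
+```bash
+# Hypothetical examples only: adjust the host, port, and request body for your deployment.
+curl -X POST http://localhost:5000/submit -H "Content-Type: application/json" -d @batch_request.json
+curl http://localhost:5000/status   # check the status of a submitted batch
+curl http://localhost:5000/watch    # long poll until the batch completes
+```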
++
+## Logging
+
+> [!NOTE]
+> The batch client may overwrite the *run.log* file periodically if it gets too large.
+
+The client creates a *run.log* file in the directory specified by the `-log_folder` argument in the docker `run` command. Logs are captured at the DEBUG level by default. The same logs are sent to `stdout`/`stderr`, and filtered depending on the `-file_log_level` or `console_log_level` arguments. This log is only necessary for debugging, or if you need to send a trace for support. The logging folder also contains the Speech SDK logs for each audio file.
+
+The output directory specified by `-output_folder` will contain a *run_summary.json* file, which is rewritten every 30 seconds and whenever new transcriptions are finished. You can use this file to check on progress as the batch proceeds. It will also contain the final run statistics and final status of every file when the batch is completed. The batch is completed when the process has a clean exit.
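+
+For example, to keep an eye on progress from the host, you could re-read the summary file on the same cadence. This sketch assumes the paths from the earlier `docker run` example, where `/my_nfs/transcriptions` in the container maps to `/mnt/my_nfs/transcriptions` on the host.
+
+```bash
+# Re-display the batch summary every 30 seconds; press Ctrl+C to stop.
+watch -n 30 cat /mnt/my_nfs/transcriptions/run_summary.json
+```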
+
+## Next steps
+
+* [How to install and run containers](speech-container-howto.md)
+++++++
ai-services Speech Container Configuration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/speech-container-configuration.md
+
+ Title: Configure Speech containers
+
+description: Speech service provides each container with a common configuration framework, so that you can easily configure and manage storage, logging and telemetry, and security settings for your containers.
++++++ Last updated : 04/18/2023+++
+# Configure Speech service containers
+
+Speech containers enable customers to build one speech application architecture that is optimized to take advantage of both robust cloud capabilities and edge locality.
+
+The Speech container runtime environment is configured using the `docker run` command arguments. This container has several required settings, along with a few optional settings. The container-specific settings are the billing settings.
+
+## Configuration settings
++
+> [!IMPORTANT]
+> The [`ApiKey`](#apikey-configuration-setting), [`Billing`](#billing-configuration-setting), and [`Eula`](#eula-setting) settings are used together, and you must provide valid values for all three of them; otherwise your container won't start. For more information about using these configuration settings to instantiate a container, see [Billing](speech-container-overview.md#billing).
+
+## ApiKey configuration setting
+
+The `ApiKey` setting specifies the Azure resource key used to track billing information for the container. You must specify a value for the ApiKey and the value must be a valid key for the _Speech_ resource specified for the [`Billing`](#billing-configuration-setting) configuration setting.
+
+This setting can be found in the following place:
+
+- Azure portal: **Speech** Resource Management, under **Keys**
+
+## ApplicationInsights setting
++
+## Billing configuration setting
+
+The `Billing` setting specifies the endpoint URI of the _Speech_ resource on Azure used to meter billing information for the container. You must specify a value for this configuration setting, and the value must be a valid endpoint URI for a _Speech_ resource on Azure. The container reports usage about every 10 to 15 minutes.
+
+This setting can be found in the following place:
+
+- Azure portal: **Speech** Overview, labeled `Endpoint`
+
+| Required | Name | Data type | Description |
+| -- | -- | -- | -- |
+| Yes | `Billing` | String | Billing endpoint URI. For more information on obtaining the billing URI, see [billing](speech-container-overview.md#billing). For more information and a complete list of regional endpoints, see [Custom subdomain names for Azure AI services](../cognitive-services-custom-subdomains.md). |
+
+## Eula setting
++
+## Fluentd settings
++
+## HTTP proxy credentials settings
++
+## Logging settings
++
+## Mount settings
+
+Use bind mounts to read and write data to and from the container. You can specify an input or output mount by using the `--mount` option in the [docker run](https://docs.docker.com/engine/reference/commandline/run/) command.
+
+The Standard Speech containers don't use input or output mounts to store training or service data. However, custom speech containers rely on volume mounts.
+
+The exact syntax of the host mount location varies depending on the host operating system. Additionally, the [host computer](speech-container-howto.md#host-computer-requirements-and-recommendations)'s mount location may not be accessible due to a conflict between permissions used by the docker service account and the host mount location permissions.
+
+| Optional | Name | Data type | Description |
+| -- | -- | -- | -- |
+| Not allowed | `Input` | String | Standard Speech containers do not use this. Custom speech containers use [volume mounts](#volume-mount-settings). |
+| Optional | `Output` | String | The target of the output mount. The default value is `/output`. This is the location of the logs. This includes container logs. <br><br>Example:<br>`--mount type=bind,src=c:\output,target=/output` |
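+
+As a sketch, the output mount can be combined with the standard billing arguments. The image name, port, and resource values below are illustrative; substitute your own endpoint and key, and adjust the host path syntax for your operating system.
+
+```bash
+docker run --rm -it -p 5000:5000 --memory 8g --cpus 8 \
+--mount type=bind,src=c:\output,target=/output \
+mcr.microsoft.com/azure-cognitive-services/speechservices/speech-to-text \
+Eula=accept \
+Billing={ENDPOINT_URI} \
+ApiKey={API_KEY}
+```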
+
+## Volume mount settings
+
+The custom speech containers use [volume mounts](https://docs.docker.com/storage/volumes/) to persist custom models. You can specify a volume mount by adding the `-v` (or `--volume`) option to the [docker run](https://docs.docker.com/engine/reference/commandline/run/) command.
+
+> [!NOTE]
+> The volume mount settings are only applicable for [Custom Speech to text](speech-container-cstt.md) containers.
+
+Custom models are downloaded the first time that a new model is ingested as part of the custom speech container docker run command. Sequential runs of the same `ModelId` for a custom speech container will use the previously downloaded model. If the volume mount is not provided, custom models cannot be persisted.
+
+The volume mount setting consists of three colon (`:`) separated fields:
+
+1. The first field is the name of the volume on the host machine, for example _C:\input_.
+2. The second field is the directory in the container, for example _/usr/local/models_.
+3. The third field (optional) is a comma-separated list of options. For more information, see [use volumes](https://docs.docker.com/storage/volumes/).
+
+Here's a volume mount example that mounts the host machine _C:\input_ directory to the container's _/usr/local/models_ directory.
+
+```bash
+-v C:\input:/usr/local/models
+```
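+
+For example, a custom speech to text run that persists the downloaded model might look like the following sketch. The `ModelId`, endpoint, and key placeholders are yours to fill in; the port and resource values are illustrative, and the host path syntax should match your operating system.
+
+```bash
+docker run --rm -it -p 5000:5000 --memory 8g --cpus 8 \
+-v C:\input:/usr/local/models \
+mcr.microsoft.com/azure-cognitive-services/speechservices/custom-speech-to-text \
+ModelId={MODEL_ID} \
+Eula=accept \
+Billing={ENDPOINT_URI} \
+ApiKey={API_KEY}
+```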
++
+## Next steps
+
+- Review [How to install and run containers](speech-container-howto.md)
++
ai-services Speech Container Cstt https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/speech-container-cstt.md
+
+ Title: Custom speech to text containers - Speech service
+
+description: Install and run custom speech to text containers with Docker to perform speech recognition, transcription, generation, and more on-premises.
+++++++ Last updated : 04/18/2023+
+zone_pivot_groups: programming-languages-speech-sdk-cli
+keywords: on-premises, Docker, container
++
+# Custom speech to text containers with Docker
+
+The Custom speech to text container transcribes real-time speech or batch audio recordings with intermediate results. You can use a custom model that you created in the [Custom Speech portal](https://speech.microsoft.com/customspeech). In this article, you'll learn how to download, install, and run a Custom speech to text container.
+
+> [!NOTE]
+> You must [request and get approval](speech-container-overview.md#request-approval-to-run-the-container) to use a Speech container.
+
+For more information about prerequisites, validating that a container is running, running multiple containers on the same host, and running disconnected containers, see [Install and run Speech containers with Docker](speech-container-howto.md).
+
+## Container images
+
+The Custom speech to text container image for all supported versions and locales can be found on the [Microsoft Container Registry (MCR)](https://mcr.microsoft.com/product/azure-cognitive-services/speechservices/custom-speech-to-text/tags) syndicate. It resides within the `azure-cognitive-services/speechservices/` repository and is named `custom-speech-to-text`.
++
+The fully qualified container image name is `mcr.microsoft.com/azure-cognitive-services/speechservices/custom-speech-to-text`. Either append a specific version or append `:latest` to get the most recent version.
+
+| Version | Path |
+|--|--|
+| Latest | `mcr.microsoft.com/azure-cognitive-services/speechservices/custom-speech-to-text:latest` |
+| 3.12.0 | `mcr.microsoft.com/azure-cognitive-services/speechservices/custom-speech-to-text:3.12.0-amd64` |
+
+All tags, except for `latest`, are in the following format and are case sensitive:
+
+```
+<major>.<minor>.<patch>-<platform>-<prerelease>
+```
+
+> [!NOTE]
+> The `locale` and `voice` for custom speech to text containers are determined by the custom model ingested by the container.
+
+The tags are also available [in JSON format](https://mcr.microsoft.com/v2/azure-cognitive-services/speechservices/custom-speech-to-text/tags/list) for your convenience. The body includes the container path and list of tags. The tags aren't sorted by version, but `"latest"` is always included at the end of the list as shown in this snippet:
+
+```json
+{
+ "name": "azure-cognitive-services/speechservices/custom-speech-to-text",
+ "tags": [
+ "2.10.0-amd64",
+ "2.11.0-amd64",
+ "2.12.0-amd64",
+ "2.12.1-amd64",
+ <--redacted for brevity-->
+ "latest"
+ ]
+}
+```
+
+### Get the container image with docker pull
+
+You need the [prerequisites](speech-container-howto.md#prerequisites) including required hardware. Please also see the [recommended allocation of resources](speech-container-howto.md#container-requirements-and-recommendations) for each Speech container.
+
+Use the [docker pull](https://docs.docker.com/engine/reference/commandline/pull/) command to download a container image from Microsoft Container Registry:
+
+```bash
+docker pull mcr.microsoft.com/azure-cognitive-services/speechservices/custom-speech-to-text:latest
+```
+
+> [!NOTE]
+> The `locale` and `voice` for custom Speech containers are determined by the custom model ingested by the container.
+
+## Get the model ID
+
+Before you can [run](#run-the-container-with-docker-run) the container, you need to know the model ID of your custom model or a base model ID. When you run the container, you specify one of the model IDs to download and use.
+
+# [Custom model ID](#tab/custom-model)
+
+The custom model has to have been [trained](how-to-custom-speech-train-model.md) by using the [Speech Studio](https://aka.ms/speechstudio/customspeech). For information about how to get the model ID, see [Custom Speech model lifecycle](how-to-custom-speech-model-and-endpoint-lifecycle.md).
+
+![Screenshot that shows the Custom Speech training page.](media/custom-speech/custom-speech-model-training.png)
+
+Obtain the **Model ID** to use as the argument to the `ModelId` parameter of the `docker run` command.
+
+![Screenshot that shows Custom Speech model details.](media/custom-speech/custom-speech-model-details.png)
++
+# [Base model ID](#tab/base-model)
+
+You can get the available base model information by using the option `BaseModelLocale={LOCALE}`. This option gives you a list of available base models for that locale under your billing account.
+
+To get base model IDs, you use the `docker run` command. For example:
+
+```bash
+docker run --rm -it \
+mcr.microsoft.com/azure-cognitive-services/speechservices/custom-speech-to-text \
+BaseModelLocale={LOCALE} \
+Eula=accept \
+Billing={ENDPOINT_URI} \
+ApiKey={API_KEY}
+```
+
+This command checks the container image and returns the available base models of the target locale.
+
+> [!NOTE]
+> Although you use the `docker run` command, the container isn't started for service.
+
+The output lists the available base models with their locale, model ID, and creation date and time. For example:
+
+```
+Checking available base model for en-us
+2020/10/30 21:54:20 [Info] Searching available base models for en-us
+2020/10/30 21:54:21 [Info] [Base model] Locale: en-us, CreatedDate: 2016-11-04T08:23:42Z, Id: a3d8aab9-6f36-44cd-9904-b37389ce2bfa
+2020/10/30 21:54:21 [Info] [Base model] Locale: en-us, CreatedDate: 2016-11-04T12:01:02Z, Id: cc7826ac-5355-471d-9bc6-a54673d06e45
+2020/10/30 21:54:21 [Info] [Base model] Locale: en-us, CreatedDate: 2017-08-17T12:00:00Z, Id: a1f8db59-40ff-4f0e-b011-37629c3a1a53
+2020/10/30 21:54:21 [Info] [Base model] Locale: en-us, CreatedDate: 2018-04-16T11:55:00Z, Id: c7a69da3-27de-4a4b-ab75-b6716f6321e5
+2020/10/30 21:54:21 [Info] [Base model] Locale: en-us, CreatedDate: 2018-09-21T15:18:43Z, Id: da494a53-0dad-4158-b15f-8f9daca7a412
+2020/10/30 21:54:21 [Info] [Base model] Locale: en-us, CreatedDate: 2018-10-19T11:28:54Z, Id: 84ec130b-d047-44bf-a46d-58c1ac292ca7
+2020/10/30 21:54:21 [Info] [Base model] Locale: en-us, CreatedDate: 2018-11-26T07:59:09Z, Id: ee5c100f-152f-4ae5-9e9d-014af3c01c56
+2020/10/30 21:54:21 [Info] [Base model] Locale: en-us, CreatedDate: 2018-11-26T09:21:55Z, Id: d04959a6-71da-4913-9997-836793e3c115
+2020/10/30 21:54:21 [Info] [Base model] Locale: en-us, CreatedDate: 2019-01-11T10:04:19Z, Id: 488e5f23-8bc5-46f8-9ad8-ea9a49a8efda
+2020/10/30 21:54:21 [Info] [Base model] Locale: en-us, CreatedDate: 2019-02-18T14:37:57Z, Id: 0207b3e6-92a8-4363-8c0e-361114cdd719
+2020/10/30 21:54:21 [Info] [Base model] Locale: en-us, CreatedDate: 2019-03-03T17:34:10Z, Id: 198d9b79-2950-4609-b6ec-f52254074a05
+2020/10/30 21:54:21 [Fatal] Please run this tool again and assign --modelId '<one above base model id>'. If no model id listed above, it means currently there is no available base model for en-us
+```
+++
+## Display model download
+
+Before you [run](#run-the-container-with-docker-run) the container, you can optionally get the available display models information and choose to download those models into your speech to text container to get highly improved final display output. Display model download is available with custom-speech-to-text container version 3.1.0 and later.
+
+> [!NOTE]
+> Although you use the `docker run` command, the container isn't started for service.
+
+You can query or download any or all of these display model types: Rescoring (`Rescore`), Punctuation (`Punct`), resegmentation (`Resegment`), and wfstitn (`Wfstitn`). Otherwise, you can use the `FullDisplay` option (with or without the other types) to query or download all types of display models.
+
+Set the `BaseModelLocale` to query the latest available display model on the target locale. If you include multiple display model types, the command will return the latest available display models for each type. For example:
+
+```bash
+docker run --rm -it \
+mcr.microsoft.com/azure-cognitive-services/speechservices/custom-speech-to-text \
+Punct Rescore Resegment Wfstitn \ # Specify `FullDisplay` or a space-separated subset of display models
+BaseModelLocale={LOCALE} \
+Eula=accept \
+Billing={ENDPOINT_URI} \
+ApiKey={API_KEY}
+```
+
+Set the `DisplayLocale` to download the latest available display model on the target locale. When you set `DisplayLocale`, you must also specify `FullDisplay` or a space-separated subset of display models. The command will download the latest available display model for each specified type. For example:
+
+```bash
+docker run --rm -it \
+mcr.microsoft.com/azure-cognitive-services/speechservices/custom-speech-to-text \
+Punct Rescore Resegment Wfstitn \ # Specify `FullDisplay` or a space-separated subset of display models
+DisplayLocale={LOCALE} \
+Eula=accept \
+Billing={ENDPOINT_URI} \
+ApiKey={API_KEY}
+```
+
+Set one model ID parameter to download a specific display model: Rescoring (`RescoreId`), Punctuation (`PunctId`), resegmentation (`ResegmentId`), or wfstitn (`WfstitnId`). This is similar to how you would download a base model via the `ModelId` parameter. For example, to download a rescoring display model, you can use the following command with the `RescoreId` parameter:
+
+```bash
+docker run --rm -it \
+mcr.microsoft.com/azure-cognitive-services/speechservices/custom-speech-to-text \
+RescoreId={RESCORE_MODEL_ID} \
+Eula=accept \
+Billing={ENDPOINT_URI} \
+ApiKey={API_KEY}
+```
+
+> [!NOTE]
+> If you set more than one query or download parameter, the command will prioritize in this order: `BaseModelLocale`, model ID, and then `DisplayLocale` (only applicable for display models).
+
+## Run the container with docker run
+
+Use the [docker run](https://docs.docker.com/engine/reference/commandline/run/) command to run the container for service.
+
+# [Custom speech to text](#tab/container)
++
+# [Disconnected custom speech to text](#tab/disconnected)
+
+To run disconnected containers (not connected to the internet), you must submit [this request form](https://aka.ms/csdisconnectedcontainers) and wait for approval. For more information about applying and purchasing a commitment plan to use containers in disconnected environments, see [Use containers in disconnected environments](../containers/disconnected-containers.md) in the Azure AI services documentation.
+
+If you have been approved to run the container disconnected from the internet, the following example shows the formatting of the `docker run` command to use, with placeholder values. Replace these placeholder values with your own values.
+
+To prepare and configure a disconnected custom speech to text container, you need two separate Speech resources:
+
+- A regular Azure AI services Speech resource which is either configured to use a "**S0 - Standard**" pricing tier or a "**Speech to Text (Custom)**" commitment tier pricing plan. This is used to train, download, and configure your custom speech models for use in your container.
+- An Azure AI services Speech resource which is configured to use the "**DC0 Commitment (Disconnected)**" pricing plan. This is used to download your disconnected container license file required to run the container in disconnected mode.
+
+Follow these steps to download and run the container in disconnected environments.
+1. [Download a model for the disconnected container](#download-a-model-for-the-disconnected-container). For this step, use a regular Azure AI services Speech resource which is either configured to use a "**S0 - Standard**" pricing tier or a "**Speech to Text (Custom)**" commitment tier pricing plan.
+1. [Download the disconnected container license](#download-the-disconnected-container-license). For this step, use an Azure AI services Speech resource which is configured to use the "**DC0 Commitment (Disconnected)**" pricing plan.
+1. [Run the disconnected container for service](#run-the-disconnected-container). For this step, use an Azure AI services Speech resource which is configured to use the "**DC0 Commitment (Disconnected)**" pricing plan.
+
+### Download a model for the disconnected container
+
+For this step, use a regular Azure AI services Speech resource which is either configured to use a "**S0 - Standard**" pricing tier or a "**Speech to Text (Custom)**" commitment tier pricing plan.
++
+### Download the disconnected container license
+
+Next, you download your disconnected license file. The `DownloadLicense=True` parameter in your `docker run` command will download a license file that will enable your Docker container to run when it isn't connected to the internet. It also contains an expiration date, after which the license file can no longer be used to run the container.
+
+You can only use a license file with the appropriate container and model that you've been approved for. For example, you can't use a license file for a `speech-to-text` container with a `neural-text-to-speech` container.
+
+| Placeholder | Description |
+|-|-|
+| `{IMAGE}` | The container image you want to use.<br/><br/>For example: `mcr.microsoft.com/azure-cognitive-services/custom-speech-to-text:latest` |
+| `{LICENSE_MOUNT}` | The path where the license will be downloaded, and mounted.<br/><br/>For example: `/host/license:/path/to/license/directory` |
+| `{MODEL_PATH}` | The path where the model is located.<br/><br/>For example: `/path/to/model/` |
+| `{ENDPOINT_URI}` | The endpoint for authenticating your service request. You can find it on your resource's **Key and endpoint** page, on the Azure portal.<br/><br/>For example: `https://<your-resource-name>.cognitiveservices.azure.com` |
+| `{API_KEY}` | The key for your Speech resource. You can find it on your resource's **Key and endpoint** page, on the Azure portal. |
+| `{CONTAINER_LICENSE_DIRECTORY}` | Location of the license folder on the container's local filesystem.<br/><br/>For example: `/path/to/license/directory` |
+
+For this step, use an Azure AI services Speech resource which is configured to use the "**DC0 Commitment (Disconnected)**" pricing plan.
+
+```bash
+docker run --rm -it -p 5000:5000 \
+-v {LICENSE_MOUNT} \
+-v {MODEL_PATH} \
+{IMAGE} \
+eula=accept \
+billing={ENDPOINT_URI} \
+apikey={API_KEY} \
+DownloadLicense=True \
+Mounts:License={CONTAINER_LICENSE_DIRECTORY}
+```
+
+### Run the disconnected container
+
+Once the license file has been downloaded, you can run the container in a disconnected environment. The following example shows the formatting of the `docker run` command you'll use, with placeholder values. Replace these placeholder values with your own values.
+
+Wherever the container is run, the license file must be mounted to the container and the location of the license folder on the container's local filesystem must be specified with `Mounts:License=`. An output mount must also be specified so that billing usage records can be written.
+
+| Placeholder | Description |
+|-|-|
+| `{IMAGE}` | The container image you want to use.<br/><br/>For example: `mcr.microsoft.com/azure-cognitive-services/custom-speech-to-text:latest` |
+| `{MEMORY_SIZE}` | The appropriate size of memory to allocate for your container.<br/><br/>For example: `4g` |
+| `{NUMBER_CPUS}` | The appropriate number of CPUs to allocate for your container.<br/><br/>For example: `4` |
+| `{LICENSE_MOUNT}` | The path where the license will be downloaded, and mounted.<br/><br/>For example: `/host/license:/path/to/license/directory` |
+| `{MODEL_PATH}` | The path where the model is located.<br/><br/>For example: `/path/to/model/` |
+| `{OUTPUT_PATH}` | The output path for logging.<br/><br/>For example: `/host/output:/path/to/output/directory`<br/><br/>For more information, see [usage records](../containers/disconnected-containers.md#usage-records) in the Azure AI services documentation. |
+| `{ENDPOINT_URI}` | The endpoint for authenticating your service request. You can find it on your resource's **Key and endpoint** page, on the Azure portal.<br/><br/>For example: `https://<your-resource-name>.cognitiveservices.azure.com` |
+| `{API_KEY}` | The key for your Speech resource. You can find it on your resource's **Key and endpoint** page, on the Azure portal. |
+| `{CONTAINER_LICENSE_DIRECTORY}` | Location of the license folder on the container's local filesystem.<br/><br/>For example: `/path/to/license/directory` |
+| `{CONTAINER_OUTPUT_DIRECTORY}` | Location of the output folder on the container's local filesystem.<br/><br/>For example: `/path/to/output/directory` |
+
+For this step, use an Azure AI services Speech resource which is configured to use the "**DC0 Commitment (Disconnected)**" pricing plan.
+
+```bash
+docker run --rm -it -p 5000:5000 --memory {MEMORY_SIZE} --cpus {NUMBER_CPUS} \
+-v {LICENSE_MOUNT} \
+-v {OUTPUT_PATH} \
+-v {MODEL_PATH} \
+{IMAGE} \
+eula=accept \
+Mounts:License={CONTAINER_LICENSE_DIRECTORY}
+Mounts:Output={CONTAINER_OUTPUT_DIRECTORY}
+```
+
+The custom speech to text container provides default directories for writing the license file and billing log at runtime: `/license` and `/output`, respectively.
+
+When you mount these directories to the container with the `docker run -v` command, make sure the ownership (`user:group`) of the local machine directories is set to `nonroot:nonroot` before you run the container.
+
+Below is a sample command to set file/directory ownership.
+
+```bash
+sudo chown -R nonroot:nonroot <YOUR_LOCAL_MACHINE_PATH_1> <YOUR_LOCAL_MACHINE_PATH_2> ...
+```
++++
+## Use the container
++
+[Try the speech to text quickstart](get-started-speech-to-text.md) using host authentication instead of key and region.
+
+## Next steps
+
+* See the [Speech containers overview](speech-container-overview.md)
+* Review [configure containers](speech-container-configuration.md) for configuration settings
+* Use more [Azure AI services containers](../cognitive-services-container-support.md)
ai-services Speech Container Howto On Premises https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/speech-container-howto-on-premises.md
+
+ Title: Use Speech service containers with Kubernetes and Helm
+
+description: Using Kubernetes and Helm to define the speech to text and text to speech container images, we'll create a Kubernetes package. This package will be deployed to a Kubernetes cluster on-premises.
++++++ Last updated : 07/22/2021+++
+# Use Speech service containers with Kubernetes and Helm
+
+One option to manage your Speech containers on-premises is to use Kubernetes and Helm. Using Kubernetes and Helm to define the speech to text and text to speech container images, we'll create a Kubernetes package. This package will be deployed to a Kubernetes cluster on-premises. Finally, we'll explore how to test the deployed services and various configuration options. For more information about running Docker containers without Kubernetes orchestration, see [install and run Speech service containers](speech-container-howto.md).
+
+## Prerequisites
+
+The following prerequisites are required before you use Speech containers on-premises:
+
+| Required | Purpose |
+|--|--|
+| Azure Account | If you don't have an Azure subscription, create a [free account][free-azure-account] before you begin. |
+| Container Registry access | In order for Kubernetes to pull the docker images into the cluster, it will need access to the container registry. |
+| Kubernetes CLI | The [Kubernetes CLI][kubernetes-cli] is required for managing the shared credentials from the container registry. Kubernetes is also needed before Helm, which is the Kubernetes package manager. |
+| Helm CLI | Install the [Helm CLI][helm-install], which is used to install a helm chart (container package definition). |
+|Speech resource |In order to use these containers, you must have:<br><br>A _Speech_ Azure resource to get the associated billing key and billing endpoint URI. Both values are available on the Azure portal's **Speech** Overview and Keys pages and are required to start the container.<br><br>**{API_KEY}**: resource key<br><br>**{ENDPOINT_URI}**: endpoint URI example is: `https://eastus.api.cognitive.microsoft.com/sts/v1.0`|
+
+## The recommended host computer configuration
+
+Refer to the [Speech service container host computer][speech-container-host-computer] details as a reference. This *Helm chart* automatically calculates CPU and memory requirements based on how many decodes (concurrent requests) the user specifies. It also adjusts based on whether optimizations for audio or text input are configured as `enabled`. The Helm chart defaults to two concurrent requests and disabled optimization.
+
+| Service | CPU / Container | Memory / Container |
+|--|--|--|
+| **speech to text** | one decoder requires a minimum of 1,150 millicores. If the `optimizedForAudioFile` is enabled, then 1,950 millicores are required. (default: two decoders) | Required: 2 GB<br>Limited: 4 GB |
+| **text to speech** | one concurrent request requires a minimum of 500 millicores. If the `optimizeForTurboMode` is enabled, then 1,000 millicores are required. (default: two concurrent requests) | Required: 1 GB<br> Limited: 2 GB |
+
+## Connect to the Kubernetes cluster
+
+The host computer is expected to have an available Kubernetes cluster. See this tutorial on [deploying a Kubernetes cluster](../../aks/tutorial-kubernetes-deploy-cluster.md) for a conceptual understanding of how to deploy a Kubernetes cluster to a host computer.
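+
+If your cluster runs in Azure Kubernetes Service, for example, you can merge its credentials into your local kubeconfig with the Azure CLI. The resource group and cluster names below are placeholders.
+
+```bash
+az aks get-credentials --resource-group myResourceGroup --name myAKSCluster
+kubectl config current-context   # confirm that kubectl now targets your cluster
+```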
+
+## Configure Helm chart values for deployment
+
+Visit the [Microsoft Helm Hub][ms-helm-hub] for all the publicly available Helm charts offered by Microsoft. On the Microsoft Helm Hub, you'll find the **Azure AI Speech On-Premises Chart**. **Azure AI Speech On-Premises** is the chart we'll install, but we must first create a `config-values.yaml` file with explicit configurations. Let's start by adding the Microsoft repository to our Helm instance.
+
+```console
+helm repo add microsoft https://microsoft.github.io/charts/repo
+```
+
+Next, we'll configure our Helm chart values. Copy and paste the following YAML into a file named `config-values.yaml`. For more information on customizing the **Azure AI Speech On-Premises Helm Chart**, see [customize helm charts](#customize-helm-charts). Replace the `# {ENDPOINT_URI}` and `# {API_KEY}` comments with your own values.
+
+```yaml
+# These settings are deployment specific and users can provide customizations
+# speech to text configurations
+speechToText:
+ enabled: true
+ numberOfConcurrentRequest: 3
+ optimizeForAudioFile: true
+ image:
+ registry: mcr.microsoft.com
+ repository: azure-cognitive-services/speechservices/speech-to-text
+ tag: latest
+ pullSecrets:
+ - mcr # Or an existing secret
+ args:
+ eula: accept
+ billing: # {ENDPOINT_URI}
+ apikey: # {API_KEY}
+
+# text to speech configurations
+textToSpeech:
+ enabled: true
+ numberOfConcurrentRequest: 3
+ optimizeForTurboMode: true
+ image:
+ registry: mcr.microsoft.com
+ repository: azure-cognitive-services/speechservices/text-to-speech
+ tag: latest
+ pullSecrets:
+ - mcr # Or an existing secret
+ args:
+ eula: accept
+ billing: # {ENDPOINT_URI}
+ apikey: # {API_KEY}
+```
+
+> [!IMPORTANT]
+> If the `billing` and `apikey` values aren't provided, the services expire after 15 minutes, and verification fails because the services aren't available.
+
+### The Kubernetes package (Helm chart)
+
+The *Helm chart* contains the configuration of which docker image(s) to pull from the `mcr.microsoft.com` container registry.
+
+> A [Helm chart][helm-charts] is a collection of files that describe a related set of Kubernetes resources. A single chart might be used to deploy something simple, like a memcached pod, or something complex, like a full web app stack with HTTP servers, databases, caches, and so on.
+
+The provided *Helm charts* pull the docker images of the Speech service, both text to speech and the speech to text services from the `mcr.microsoft.com` container registry.
+
+## Install the Helm chart on the Kubernetes cluster
+
+To install the *Helm chart*, we need to execute the [`helm install`][helm-install-cmd] command, replacing `<config-values.yaml>` with the appropriate path and file name argument. The `microsoft/cognitive-services-speech-onpremise` Helm chart referenced below is available on the [Microsoft Helm Hub here][ms-helm-hub-speech-chart].
+
+```console
+helm install onprem-speech microsoft/cognitive-services-speech-onpremise \
+ --version 0.1.1 \
+ --values <config-values.yaml>
+```
+
+Here is an example output you might expect to see from a successful install execution:
+
+```console
+NAME: onprem-speech
+LAST DEPLOYED: Tue Jul 2 12:51:42 2019
+NAMESPACE: default
+STATUS: DEPLOYED
+
+RESOURCES:
+==> v1/Pod(related)
+NAME READY STATUS RESTARTS AGE
+speech-to-text-7664f5f465-87w2d 0/1 Pending 0 0s
+speech-to-text-7664f5f465-klbr8 0/1 ContainerCreating 0 0s
+text-to-speech-56f8fb685b-4jtzh 0/1 ContainerCreating 0 0s
+text-to-speech-56f8fb685b-frwxf 0/1 Pending 0 0s
+
+==> v1/Service
+NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
+speech-to-text LoadBalancer 10.0.252.106 <pending> 80:31811/TCP 1s
+text-to-speech LoadBalancer 10.0.125.187 <pending> 80:31247/TCP 0s
+
+==> v1beta1/PodDisruptionBudget
+NAME MIN AVAILABLE MAX UNAVAILABLE ALLOWED DISRUPTIONS AGE
+speech-to-text-poddisruptionbudget N/A 20% 0 1s
+text-to-speech-poddisruptionbudget N/A 20% 0 1s
+
+==> v1beta2/Deployment
+NAME READY UP-TO-DATE AVAILABLE AGE
+speech-to-text 0/2 2 0 0s
+text-to-speech 0/2 2 0 0s
+
+==> v2beta2/HorizontalPodAutoscaler
+NAME REFERENCE TARGETS MINPODS MAXPODS REPLICAS AGE
+speech-to-text-autoscaler Deployment/speech-to-text <unknown>/50% 2 10 0 0s
+text-to-speech-autoscaler Deployment/text-to-speech <unknown>/50% 2 10 0 0s
++
+NOTES:
+cognitive-services-speech-onpremise has been installed!
+Release is named onprem-speech
+```
+
+The Kubernetes deployment can take several minutes to complete. To confirm that both pods and services are properly deployed and available, execute the following command:
+
+```console
+kubectl get all
+```
+
+You should expect to see something similar to the following output:
+
+```console
+NAME READY STATUS RESTARTS AGE
+pod/speech-to-text-7664f5f465-87w2d 1/1 Running 0 34m
+pod/speech-to-text-7664f5f465-klbr8 1/1 Running 0 34m
+pod/text-to-speech-56f8fb685b-4jtzh 1/1 Running 0 34m
+pod/text-to-speech-56f8fb685b-frwxf 1/1 Running 0 34m
+
+NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
+service/kubernetes ClusterIP 10.0.0.1 <none> 443/TCP 3h
+service/speech-to-text LoadBalancer 10.0.252.106 52.162.123.151 80:31811/TCP 34m
+service/text-to-speech LoadBalancer 10.0.125.187 65.52.233.162 80:31247/TCP 34m
+
+NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE
+deployment.apps/speech-to-text 2 2 2 2 34m
+deployment.apps/text-to-speech 2 2 2 2 34m
+
+NAME DESIRED CURRENT READY AGE
+replicaset.apps/speech-to-text-7664f5f465 2 2 2 34m
+replicaset.apps/text-to-speech-56f8fb685b 2 2 2 34m
+
+NAME REFERENCE TARGETS MINPODS MAXPODS REPLICAS AGE
+horizontalpodautoscaler.autoscaling/speech-to-text-autoscaler Deployment/speech-to-text 1%/50% 2 10 2 34m
+horizontalpodautoscaler.autoscaling/text-to-speech-autoscaler Deployment/text-to-speech 0%/50% 2 10 2 34m
+```
+
+### Verify Helm deployment with Helm tests
+
+The installed Helm charts define *Helm tests*, which serve as a convenience for verification. These tests validate service readiness. To verify both **speech to text** and **text to speech** services, we'll execute the [Helm test][helm-test] command.
+
+```console
+helm test onprem-speech
+```
+
+> [!IMPORTANT]
+> These tests will fail if the POD status is not `Running` or if the deployment is not listed under the `AVAILABLE` column. Be patient as this can take over ten minutes to complete.
+
+These tests will output various status results:
+
+```console
+RUNNING: speech to text-readiness-test
+PASSED: speech to text-readiness-test
+RUNNING: text to speech-readiness-test
+PASSED: text to speech-readiness-test
+```
+
+As an alternative to executing the *Helm tests*, you could collect the *External IP* addresses and corresponding ports from the `kubectl get all` command. Using the IP and port, open a web browser and navigate to `http://<external-ip>:<port>/swagger/index.html` to view the API swagger page(s).
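+
+For example, the external IP of the speech to text service can be pulled directly with a JSONPath query instead of scanning the full `kubectl get all` output:
+
+```bash
+kubectl get service speech-to-text \
+  --output jsonpath='{.status.loadBalancer.ingress[0].ip}'
+```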
+
+## Customize Helm charts
+
+Helm charts are hierarchical. This hierarchy allows for chart inheritance, and it also caters to the concept of specificity, where settings that are more specific override inherited rules.
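+
+For example, instead of editing `config-values.yaml`, you can override a single value at upgrade time with `--set`. The value path below assumes the chart exposes the same keys shown in the earlier YAML; adjust it to match the chart's actual values schema.
+
+```bash
+helm upgrade onprem-speech microsoft/cognitive-services-speech-onpremise \
+  --version 0.1.1 \
+  --values config-values.yaml \
+  --set speechToText.numberOfConcurrentRequest=2
+```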
++++
+## Next steps
+
+For more details on installing applications with Helm in Azure Kubernetes Service (AKS), [visit here][installing-helm-apps-in-aks].
+
+> [!div class="nextstepaction"]
+> [Azure AI containers][cog-svcs-containers]
+
+<!-- LINKS - external -->
+[free-azure-account]: https://azure.microsoft.com/free
+[git-download]: https://git-scm.com/downloads
+[azure-cli]: /cli/azure/install-azure-cli
+[docker-engine]: https://www.docker.com/products/docker-engine
+[kubernetes-cli]: https://kubernetes.io/docs/tasks/tools/install-kubectl
+[helm-install]: https://helm.sh/docs/intro/install/
+[helm-install-cmd]: https://helm.sh/docs/intro/using_helm/#helm-install-installing-a-package
+[tiller-install]: https://helm.sh/docs/install/#installing-tiller
+[helm-charts]: https://helm.sh/docs/topics/charts/
+[kubectl-create]: https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#create
+[kubectl-get]: https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#get
+[helm-test]: https://v2.helm.sh/docs/helm/#helm-test
+[ms-helm-hub]: https://artifacthub.io/packages/search?repo=microsoft
+[ms-helm-hub-speech-chart]: https://hub.helm.sh/charts/microsoft/cognitive-services-speech-onpremise
+
+<!-- LINKS - internal -->
+[speech-container-host-computer]: speech-container-howto.md#host-computer-requirements-and-recommendations
+[installing-helm-apps-in-aks]: ../../aks/kubernetes-helm.md
+[cog-svcs-containers]: ../cognitive-services-container-support.md
ai-services Speech Container Howto https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/speech-container-howto.md
+
+ Title: Install and run Speech containers with Docker - Speech service
+
+description: Use the Speech containers with Docker to perform speech recognition, transcription, generation, and more on-premises.
++++++ Last updated : 04/18/2023+
+keywords: on-premises, Docker, container
++
+# Install and run Speech containers with Docker
+
+By using containers, you can use a subset of the Speech service features in your own environment. In this article, you'll learn how to download, install, and run a Speech container.
+
+> [!NOTE]
+> Disconnected container pricing and commitment tiers vary from standard containers. For more information, see [Speech service pricing](https://azure.microsoft.com/pricing/details/cognitive-services/speech-services/).
+
+## Prerequisites
+
+You must meet the following prerequisites before you use Speech service containers. If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/cognitive-services/) before you begin. You need:
+
+* You must [request and get approval](speech-container-overview.md#request-approval-to-run-the-container) to use a Speech container.
+* [Docker](https://docs.docker.com/) installed on a host computer. Docker must be configured to allow the containers to connect with and send billing data to Azure.
+ * On Windows, Docker must also be configured to support Linux containers.
+ * You should have a basic understanding of [Docker concepts](https://docs.docker.com/get-started/overview/).
+* A <a href="https://portal.azure.com/#create/Microsoft.CognitiveServicesSpeechServices" title="Create a Speech service resource" target="_blank">Speech service resource </a> with the free (F0) or standard (S) [pricing tier](https://azure.microsoft.com/pricing/details/cognitive-services/speech-services/).
+
+### Billing arguments
+
+Speech containers aren't licensed to run without being connected to Azure for metering. You must configure your container to communicate billing information with the metering service at all times.
+
+Three primary parameters for all Azure AI containers are required. The Microsoft Software License Terms must be present with a value of **accept**. An Endpoint URI and API key are also needed.
+
+Queries to the container are billed at the pricing tier of the Azure resource that's used for the `ApiKey` parameter.
+
+The <a href="https://docs.docker.com/engine/reference/commandline/run/" target="_blank">`docker run`</a> command will start the container when all three of the following options are provided with valid values:
+
+| Option | Description |
+|--|-|
+| `ApiKey` | The API key of the Speech resource that's used to track billing information.<br/>The `ApiKey` value is used to start the container and is available on the Azure portal's **Keys** page of the corresponding Speech resource. Go to the **Keys** page, and select the **Copy to clipboard** icon.|
+| `Billing` | The endpoint of the Speech resource that's used to track billing information.<br/>The endpoint is available on the Azure portal **Overview** page of the corresponding Speech resource. Go to the **Overview** page, hover over the endpoint, and a **Copy to clipboard** icon appears. Copy and use the endpoint where needed.|
+| `Eula` | Indicates that you accepted the license for the container.<br/>The value of this option must be set to **accept**. |
+
+> [!IMPORTANT]
+> These subscription keys are used to access your Azure AI services API. Don't share your keys. Store them securely. For example, use Azure Key Vault. We also recommend that you regenerate these keys regularly. Only one key is necessary to make an API call. When you regenerate the first key, you can use the second key for continued access to the service.
+
+The container needs the billing argument values to run. These values allow the container to connect to the billing endpoint. The container reports usage about every 10 to 15 minutes. If the container doesn't connect to Azure within the allowed time window, the container continues to run but doesn't serve queries until the billing endpoint is restored. The connection is attempted 10 times at the same time interval of 10 to 15 minutes. If it can't connect to the billing endpoint within the 10 tries, the container stops serving requests. For an example of the information sent to Microsoft for billing, see the [Azure AI container FAQ](../containers/container-faq.yml#how-does-billing-work) in the Azure AI services documentation.
+
+For more information about these options, see [Configure containers](speech-container-configuration.md).
+
+### Container requirements and recommendations
+
+The following table describes the minimum and recommended allocation of resources for each Speech container:
+
+| Container | Minimum | Recommended |Speech Model|
+|--|--|--|--|
+| Speech to text | 4 core, 4-GB memory | 8 core, 8-GB memory |+4 to 8 GB memory|
+| Custom speech to text | 4 core, 4-GB memory | 8 core, 8-GB memory |+4 to 8 GB memory|
+| Speech language identification | 1 core, 1-GB memory | 1 core, 1-GB memory |n/a|
+| Neural text to speech | 6 core, 12-GB memory | 8 core, 16-GB memory |n/a|
+
+Each core must be 2.6 gigahertz (GHz) or faster.
+
+Core and memory correspond to the `--cpus` and `--memory` settings, which are used as part of the `docker run` command.
+
+> [!NOTE]
+> The minimum and recommended allocations are based on Docker limits, *not* the host machine resources.
+> For example, speech to text containers memory map portions of a large language model. We recommend that the entire file fit in memory. You need an additional 4 to 8 GB of memory to load the speech models (see the preceding table).
+> Also, the first run of either container might take longer because models are being paged into memory.
+
+## Host computer requirements and recommendations
+
+The host is an x64-based computer that runs the Docker container. It can be a computer on your premises or a Docker hosting service in Azure, such as:
+
+* [Azure Kubernetes Service](~/articles/aks/index.yml).
+* [Azure Container Instances](~/articles/container-instances/index.yml).
+* A [Kubernetes](https://kubernetes.io/) cluster deployed to [Azure Stack](/azure-stack/operator). For more information, see [Deploy Kubernetes to Azure Stack](/azure-stack/user/azure-stack-solution-template-kubernetes-deploy).
++
+> [!NOTE]
+> Containers support compressed audio input to the Speech SDK by using GStreamer.
+> To install GStreamer in a container, follow Linux instructions for GStreamer in [Use codec compressed audio input with the Speech SDK](how-to-use-codec-compressed-audio-input-streams.md).
+
+### Advanced Vector Extension support
+
+The *host* is the computer that runs the Docker container. The host *must support* [Advanced Vector Extensions](https://en.wikipedia.org/wiki/Advanced_Vector_Extensions#CPUs_with_AVX2) (AVX2). You can check for AVX2 support on Linux hosts with the following command:
+
+```console
+grep -q avx2 /proc/cpuinfo && echo AVX2 supported || echo No AVX2 support detected
+```
+> [!WARNING]
+> The host computer is *required* to support AVX2. The container *will not* function correctly without AVX2 support.
+
+## Run the container
+
+Use the [docker run](https://docs.docker.com/engine/reference/commandline/run/) command to run the container. Once running, the container continues to run until you [stop the container](#stop-the-container).
+
+Take note of the following best practices with the `docker run` command (see the example after this list):
+
+- **Line-continuation character**: The Docker commands in the following sections use the backslash, `\`, as a line continuation character. Replace or remove this based on your host operating system's requirements.
+- **Argument order**: Do not change the order of the arguments unless you are familiar with Docker containers.
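+
+Putting those practices together, here's a minimal sketch of a `docker run` command for the speech to text container, using the recommended resource allocations from the earlier table. Substitute your own endpoint and key.
+
+```bash
+docker run --rm -it -p 5000:5000 --memory 8g --cpus 8 \
+mcr.microsoft.com/azure-cognitive-services/speechservices/speech-to-text \
+Eula=accept \
+Billing={ENDPOINT_URI} \
+ApiKey={API_KEY}
+```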
+
+You can use the [docker images](https://docs.docker.com/engine/reference/commandline/images/) command to list your downloaded container images. The following command lists the ID, repository, and tag of each downloaded container image, formatted as a table:
+
+```bash
+docker images --format "table {{.ID}}\t{{.Repository}}\t{{.Tag}}"
+```
+
+Here's an example result:
+```
+IMAGE ID REPOSITORY TAG
+<image-id> <repository-path/name> <tag-name>
+```
+
+## Validate that a container is running
+
+There are several ways to validate that the container is running. Locate the *External IP* address and exposed port of the container in question, and open your favorite web browser. Use the various request URLs that follow to validate the container is running.
+
+The example request URLs listed here are `http://localhost:5000`, but your specific container might vary. Make sure to use your container's *External IP* address and exposed port (example `curl` probes follow the table).
+
+| Request URL | Purpose |
+|--|--|
+| `http://localhost:5000/` | The container provides a home page. |
+| `http://localhost:5000/ready` | Requested with GET, this URL provides a verification that the container is ready to accept a query against the model. This request can be used for Kubernetes [liveness and readiness probes](https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-probes/). |
+| `http://localhost:5000/status` | Also requested with GET, this URL verifies if the api-key used to start the container is valid without causing an endpoint query. This request can be used for Kubernetes [liveness and readiness probes](https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-probes/). |
+| `http://localhost:5000/swagger` | The container provides a full set of documentation for the endpoints and a **Try it out** feature. With this feature, you can enter your settings into a web-based HTML form and make the query without having to write any code. After the query returns, an example CURL command is provided to demonstrate the HTTP headers and body format that's required. |
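+
+For example, from the host you can probe the readiness and status endpoints with `curl`. Adjust the address and port to match your container.
+
+```bash
+curl -s http://localhost:5000/ready    # the container is ready to accept a query
+curl -s http://localhost:5000/status   # verifies the api-key without an endpoint query
+```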
+
+## Stop the container
+
+To shut down the container, in the command-line environment where the container is running, select <kbd>Ctrl+C</kbd>.
+
+## Run multiple containers on the same host
+
+If you intend to run multiple containers with exposed ports, make sure to run each container with a different exposed port. For example, run the first container on port 5000 and the second container on port 5001.
+
+You can run this container and a different Azure AI container on the host together. You can also run multiple instances of the same Azure AI container.
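+
+For example, a speech to text container and a neural text to speech container can run side by side as long as each maps a different host port. This is a sketch that follows the resource recommendations earlier in this article; substitute your own endpoint and key.
+
+```bash
+# First container on host port 5000
+docker run --rm -d -p 5000:5000 --memory 8g --cpus 8 \
+mcr.microsoft.com/azure-cognitive-services/speechservices/speech-to-text \
+Eula=accept Billing={ENDPOINT_URI} ApiKey={API_KEY}
+
+# Second container on host port 5001
+docker run --rm -d -p 5001:5000 --memory 16g --cpus 8 \
+mcr.microsoft.com/azure-cognitive-services/speechservices/neural-text-to-speech \
+Eula=accept Billing={ENDPOINT_URI} ApiKey={API_KEY}
+```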
+
+## Host URLs
+
+> [!NOTE]
+> Use a unique port number if you're running multiple containers.
+
+| Protocol | Host URL | Containers |
+|--|--|--|
+| WS | `ws://localhost:5000` | [Speech to text](speech-container-stt.md#use-the-container)<br/><br/>[Custom speech to text](speech-container-cstt.md#use-the-container) |
+| HTTP | `http://localhost:5000` | [Neural text to speech](speech-container-ntts.md#use-the-container)<br/><br/>[Speech language identification](speech-container-lid.md#use-the-container) |
+
+For more information on using WSS and HTTPS protocols, see [Container security](../cognitive-services-container-support.md#azure-ai-services-container-security) in the Azure AI services documentation.
+
+## Troubleshooting
+
+When you start or run the container, you might experience issues. Use an output [mount](speech-container-configuration.md#mount-settings) and enable logging. Doing so allows the container to generate log files that are helpful when you troubleshoot issues.
+
+> [!TIP]
+> For more troubleshooting information and guidance, see [Azure AI containers frequently asked questions (FAQ)](../containers/container-faq.yml) in the Azure AI services documentation.
++
+### Logging settings
+
+Speech containers come with ASP.NET Core logging support. Here's an example of the `neural-text-to-speech` container started with default logging to the console:
+
+```bash
+docker run --rm -it -p 5000:5000 --memory 12g --cpus 6 \
+mcr.microsoft.com/azure-cognitive-services/speechservices/neural-text-to-speech \
+Eula=accept \
+Billing={ENDPOINT_URI} \
+ApiKey={API_KEY} \
+Logging:Console:LogLevel:Default=Information
+```
+
+For more information about logging, see [Configure Speech containers](speech-container-configuration.md#logging-settings) and [usage records](../containers/disconnected-containers.md#usage-records) in the Azure AI services documentation.
+
+## Microsoft diagnostics container
+
+If you're having trouble running an Azure AI container, you can try using the Microsoft diagnostics container. Use this container to diagnose common errors in your deployment environment that might prevent Azure AI containers from functioning as expected.
+
+To get the container, use the following `docker pull` command:
+
+```bash
+docker pull mcr.microsoft.com/azure-cognitive-services/diagnostic
+```
+
+Then run the container. Replace `{ENDPOINT_URI}` with your endpoint, and replace `{API_KEY}` with your key to your resource:
+
+```bash
+docker run --rm mcr.microsoft.com/azure-cognitive-services/diagnostic \
+eula=accept \
+Billing={ENDPOINT_URI} \
+ApiKey={API_KEY}
+```
+
+The container will test for network connectivity to the billing endpoint.
+
+## Run disconnected containers
+
+To run disconnected containers (not connected to the internet), you must submit [this request form](https://aka.ms/csdisconnectedcontainers) and wait for approval. For more information about applying and purchasing a commitment plan to use containers in disconnected environments, see [Use containers in disconnected environments](../containers/disconnected-containers.md) in the Azure AI services documentation.
+
+## Next steps
+
+* Review [configure containers](speech-container-configuration.md) for configuration settings.
+* Learn how to [use Speech service containers with Kubernetes and Helm](speech-container-howto-on-premises.md).
+* Deploy and run containers on [Azure Container Instance](../containers/azure-container-instance-recipe.md)
+* Use more [Azure AI services containers](../cognitive-services-container-support.md).
++
ai-services Speech Container Lid https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/speech-container-lid.md
+
+ Title: Language identification containers - Speech service
+
+description: Install and run language identification containers with Docker to perform speech recognition, transcription, generation, and more on-premises.
+++++++ Last updated : 04/18/2023+
+zone_pivot_groups: programming-languages-speech-sdk-cli
+keywords: on-premises, Docker, container
++
+# Language identification containers with Docker
+
+The Speech language identification container detects the language spoken in audio files. You can get real-time speech or batch audio recordings with intermediate results. In this article, you'll learn how to download, install, and run a language identification container.
+
+> [!NOTE]
+> You must [request and get approval](speech-container-overview.md#request-approval-to-run-the-container) to use a Speech container.
+>
+> The Speech language identification container is available in public preview. Containers in preview are still under development and don't meet Microsoft's stability and support requirements.
+
+For more information about prerequisites, validating that a container is running, running multiple containers on the same host, and running disconnected containers, see [Install and run Speech containers with Docker](speech-container-howto.md).
+
+> [!TIP]
+> To get the most useful results, use the Speech language identification container with the [speech to text](speech-container-stt.md) or [custom speech to text](speech-container-cstt.md) containers.
+
+## Container images
+
+The Speech language identification container image for all supported versions and locales can be found on the [Microsoft Container Registry (MCR)](https://mcr.microsoft.com/product/azure-cognitive-services/speechservices/language-detection/tags) syndicate. It resides within the `azure-cognitive-services/speechservices/` repository and is named `language-detection`.
++
+The fully qualified container image name is `mcr.microsoft.com/azure-cognitive-services/speechservices/language-detection`. Either append a specific version or append `:latest` to get the most recent version.
+
+| Version | Path |
+|--|--|
+| Latest | `mcr.microsoft.com/azure-cognitive-services/speechservices/language-detection:latest` |
+| 1.11.0 | `mcr.microsoft.com/azure-cognitive-services/speechservices/language-detection:1.11.0-amd64-preview` |
+
+All tags, except for `latest`, are in the following format and are case sensitive:
+
+```
+<major>.<minor>.<patch>-<platform>-<prerelease>
+```
+
+The tags are also available [in JSON format](https://mcr.microsoft.com/v2/azure-cognitive-services/speechservices/language-detection/tags/list) for your convenience. The body includes the container path and list of tags. The tags aren't sorted by version, but `"latest"` is always included at the end of the list as shown in this snippet:
+
+```json
+{
+ "name": "azure-cognitive-services/speechservices/language-detection",
+ "tags": [
+ "1.1.0-amd64-preview",
+ "1.11.0-amd64-preview",
+ "1.3.0-amd64-preview",
+ "1.5.0-amd64-preview",
+ <--redacted for brevity-->
+ "1.8.0-amd64-preview",
+ "latest"
+ ]
+}
+```
+
+## Get the container image with docker pull
+
+You need the [prerequisites](speech-container-howto.md#prerequisites) including required hardware. Please also see the [recommended allocation of resources](speech-container-howto.md#container-requirements-and-recommendations) for each Speech container.
+
+Use the [docker pull](https://docs.docker.com/engine/reference/commandline/pull/) command to download a container image from Microsoft Container Registry:
+
+```bash
+docker pull mcr.microsoft.com/azure-cognitive-services/speechservices/language-detection:latest
+```
++
+## Run the container with docker run
+
+Use the [docker run](https://docs.docker.com/engine/reference/commandline/run/) command to run the container.
+
+The following table represents the various `docker run` parameters and their corresponding descriptions:
+
+| Parameter | Description |
+|--|--|
+| `{ENDPOINT_URI}` | The endpoint is required for metering and billing. For more information, see [billing arguments](speech-container-howto.md#billing-arguments). |
+| `{API_KEY}` | The API key is required. For more information, see [billing arguments](speech-container-howto.md#billing-arguments). |
+
+When you run the Speech language identification container, configure the port, memory, and CPU according to the language identification container [requirements and recommendations](speech-container-howto.md#container-requirements-and-recommendations).
+
+Here's an example `docker run` command with placeholder values. You must specify the `ENDPOINT_URI` and `API_KEY` values:
+
+```bash
+docker run --rm -it -p 5000:5003 --memory 1g --cpus 1 \
+mcr.microsoft.com/azure-cognitive-services/speechservices/language-detection \
+Eula=accept \
+Billing={ENDPOINT_URI} \
+ApiKey={API_KEY}
+```
+
+This command:
+
+* Runs a Speech language identification container from the container image.
+* Allocates 1 CPU core and 1 GB of memory.
+* Exposes TCP port 5000 and allocates a pseudo-TTY for the container.
+* Automatically removes the container after it exits. The container image is still available on the host computer.
+
+For more information about `docker run` with Speech containers, see [Install and run Speech containers with Docker](speech-container-howto.md#run-the-container).
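+
+After the container starts, you can optionally verify that it's ready to accept requests. The sketch below assumes the diagnostic routes `/ready` and `/status` (the convention used by Azure AI containers) are reachable on the host port you mapped, which is port 5000 in the preceding example:
+
+```bash
+# Check whether the container is ready to accept traffic
+curl -s http://localhost:5000/ready
+
+# Check whether the container can validate its billing configuration
+curl -s http://localhost:5000/status
+```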
+
+## Run with the speech to text container
+
+If you want to run the language identification container with the [speech to text](speech-container-stt.md) container, you can use this [docker image](https://hub.docker.com/r/antsu/on-prem-client). After both containers have been started, use this `docker run` command to execute `speech-to-text-with-languagedetection-client`:
+
+```bash
+docker run --rm -v ${HOME}:/root -ti antsu/on-prem-client:latest ./speech-to-text-with-languagedetection-client ./audio/LanguageDetection_en-us.wav --host localhost --lport 5003 --sport 5000
+```
+
+Increasing the number of concurrent calls can affect reliability and latency. For language identification, we recommend a maximum of four concurrent calls using 1 CPU with 1 GB of memory. For hosts with 2 CPUs and 2 GB of memory, we recommend a maximum of six concurrent calls.
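+
+For example, to size the container for up to six concurrent calls as described above, you could start it with 2 CPUs and 2 GB of memory. This is the same command as before, resized per that guidance:
+
+```bash
+# Same placeholders as the earlier example; only the CPU and memory limits change
+docker run --rm -it -p 5000:5003 --memory 2g --cpus 2 \
+mcr.microsoft.com/azure-cognitive-services/speechservices/language-detection \
+Eula=accept \
+Billing={ENDPOINT_URI} \
+ApiKey={API_KEY}
+```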
+
+## Use the container
++
+[Try language identification](language-identification.md) using host authentication instead of key and region. When you run language ID in a container, use the `SourceLanguageRecognizer` object instead of `SpeechRecognizer` or `TranslationRecognizer`.
+
+## Next steps
+
+* See the [Speech containers overview](speech-container-overview.md)
+* Review [configure containers](speech-container-configuration.md) for configuration settings
+* Use more [Azure AI services containers](../cognitive-services-container-support.md)
ai-services Speech Container Ntts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/speech-container-ntts.md
+
+ Title: Neural text to speech containers - Speech service
+
+description: Install and run neural text to speech containers with Docker to perform speech synthesis and more on-premises.
+++++++ Last updated : 04/18/2023+
+zone_pivot_groups: programming-languages-speech-sdk-cli
+keywords: on-premises, Docker, container
++
+# Text to speech containers with Docker
+
+The neural text to speech container converts text to natural-sounding speech by using deep neural network technology, which allows for more natural synthesized speech. In this article, you'll learn how to download, install, and run a neural text to speech container.
+
+> [!NOTE]
+> You must [request and get approval](speech-container-overview.md#request-approval-to-run-the-container) to use a Speech container.
+
+For more information about prerequisites, validating that a container is running, running multiple containers on the same host, and running disconnected containers, see [Install and run Speech containers with Docker](speech-container-howto.md).
+
+## Container images
+
+The neural text to speech container image for all supported versions and locales can be found on the [Microsoft Container Registry (MCR)](https://mcr.microsoft.com/product/azure-cognitive-services/speechservices/neural-text-to-speech/tags) syndicate. It resides within the `azure-cognitive-services/speechservices/` repository and is named `neural-text-to-speech`.
++
+The fully qualified container image name is `mcr.microsoft.com/azure-cognitive-services/speechservices/neural-text-to-speech`. Either append a specific version or append `:latest` to get the most recent version.
+
+| Version | Path |
+|--|--|
+| Latest | `mcr.microsoft.com/azure-cognitive-services/speechservices/neural-text-to-speech:latest`<br/><br/>The `latest` tag pulls the `en-US` locale and `en-us-arianeural` voice. |
+| 2.12.0 | `mcr.microsoft.com/azure-cognitive-services/speechservices/neural-text-to-speech:2.12.0-amd64-mr-in` |
+
+All tags, except for `latest`, are in the following format and are case sensitive:
+
+```
+<major>.<minor>.<patch>-<platform>-<voice>-<preview>
+```
+
+The tags are also available [in JSON format](https://mcr.microsoft.com/v2/azure-cognitive-services/speechservices/neural-text-to-speech/tags/list) for your convenience. The body includes the container path and list of tags. The tags aren't sorted by version, but `"latest"` is always included at the end of the list as shown in this snippet:
+
+```json
+{
+ "name": "azure-cognitive-services/speechservices/neural-text-to-speech",
+ "tags": [
+ "1.10.0-amd64-cs-cz-antoninneural",
+ "1.10.0-amd64-cs-cz-vlastaneural",
+ "1.10.0-amd64-de-de-conradneural",
+ "1.10.0-amd64-de-de-katjaneural",
+ "1.10.0-amd64-en-au-natashaneural",
+ <--redacted for brevity-->
+ "latest"
+ ]
+}
+```
+
+> [!IMPORTANT]
+> We retired the standard speech synthesis voices and standard [text to speech](https://mcr.microsoft.com/product/azure-cognitive-services/speechservices/text-to-speech/tags) container on August 31, 2021. You should use neural voices with the [neural-text to speech](https://mcr.microsoft.com/product/azure-cognitive-services/speechservices/neural-text-to-speech/tags) container instead. For more information on updating your application, see [Migrate from standard voice to prebuilt neural voice](./how-to-migrate-to-prebuilt-neural-voice.md).
+
+## Get the container image with docker pull
+
+You need the [prerequisites](speech-container-howto.md#prerequisites) including required hardware. Please also see the [recommended allocation of resources](speech-container-howto.md#container-requirements-and-recommendations) for each Speech container.
+
+Use the [docker pull](https://docs.docker.com/engine/reference/commandline/pull/) command to download a container image from Microsoft Container Registry:
+
+```bash
+docker pull mcr.microsoft.com/azure-cognitive-services/speechservices/neural-text-to-speech:latest
+```
+
+> [!IMPORTANT]
+> The `latest` tag pulls the `en-US` locale and `en-us-arianeural` voice. For additional locales and voices, see [text to speech container images](#container-images).
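+
+For example, to pull the `mr-in` (Marathi, India) image listed in the version table instead of the default `en-US` voice, specify the full tag:
+
+```bash
+# Pull a specific voice/locale image by its full tag
+docker pull mcr.microsoft.com/azure-cognitive-services/speechservices/neural-text-to-speech:2.12.0-amd64-mr-in
+```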
+
+## Run the container with docker run
+
+Use the [docker run](https://docs.docker.com/engine/reference/commandline/run/) command to run the container.
+
+# [Neural text to speech](#tab/container)
+
+The following table represents the various `docker run` parameters and their corresponding descriptions:
+
+| Parameter | Description |
+|--|--|
+| `{ENDPOINT_URI}` | The endpoint is required for metering and billing. For more information, see [billing arguments](speech-container-howto.md#billing-arguments). |
+| `{API_KEY}` | The API key is required. For more information, see [billing arguments](speech-container-howto.md#billing-arguments). |
+
+When you run the text to speech container, configure the port, memory, and CPU according to the text to speech container [requirements and recommendations](speech-container-howto.md#container-requirements-and-recommendations).
+
+Here's an example `docker run` command with placeholder values. You must specify the `ENDPOINT_URI` and `API_KEY` values:
+
+```bash
+docker run --rm -it -p 5000:5000 --memory 12g --cpus 6 \
+mcr.microsoft.com/azure-cognitive-services/speechservices/neural-text-to-speech \
+Eula=accept \
+Billing={ENDPOINT_URI} \
+ApiKey={API_KEY}
+```
+
+This command:
+
+* Runs a neural text to speech container from the container image.
+* Allocates 6 CPU cores and 12 GB of memory.
+* Exposes TCP port 5000 and allocates a pseudo-TTY for the container.
+* Automatically removes the container after it exits. The container image is still available on the host computer.
+
+# [Disconnected neural text to speech](#tab/disconnected)
+
+To run disconnected containers (not connected to the internet), you must submit [this request form](https://aka.ms/csdisconnectedcontainers) and wait for approval. For more information about applying and purchasing a commitment plan to use containers in disconnected environments, see [Use containers in disconnected environments](../containers/disconnected-containers.md) in the Azure AI services documentation.
+
+If you have been approved to run the container disconnected from the internet, the following example shows the formatting of the `docker run` command to use, with placeholder values. Replace these placeholder values with your own values.
+
+The `DownloadLicense=True` parameter in your `docker run` command will download a license file that will enable your Docker container to run when it isn't connected to the internet. It also contains an expiration date, after which the license file will be invalid to run the container. You can only use a license file with the appropriate container that you've been approved for. For example, you can't use a license file for a `speech-to-text` container with a `neural-text-to-speech` container.
+
+| Placeholder | Description |
+|-|-|
+| `{IMAGE}` | The container image you want to use.<br/><br/>For example: `mcr.microsoft.com/azure-cognitive-services/speechservices/neural-text-to-speech:latest` |
+| `{LICENSE_MOUNT}` | The path where the license will be downloaded, and mounted.<br/><br/>For example: `/host/license:/path/to/license/directory` |
+| `{ENDPOINT_URI}` | The endpoint for authenticating your service request. You can find it on your resource's **Key and endpoint** page, on the Azure portal.<br/><br/>For example: `https://<your-resource-name>.cognitiveservices.azure.com` |
+| `{API_KEY}` | The key for your Speech resource. You can find it on your resource's **Key and endpoint** page, on the Azure portal. |
+| `{CONTAINER_LICENSE_DIRECTORY}` | Location of the license folder on the container's local filesystem.<br/><br/>For example: `/path/to/license/directory` |
+
+```bash
+docker run --rm -it -p 5000:5000 \
+-v {LICENSE_MOUNT} \
+{IMAGE} \
+eula=accept \
+billing={ENDPOINT_URI} \
+apikey={API_KEY} \
+DownloadLicense=True \
+Mounts:License={CONTAINER_LICENSE_DIRECTORY}
+```
+
+Once the license file has been downloaded, you can run the container in a disconnected environment. The following example shows the formatting of the `docker run` command you'll use, with placeholder values. Replace these placeholder values with your own values.
+
+Wherever the container is run, the license file must be mounted to the container and the location of the license folder on the container's local filesystem must be specified with `Mounts:License=`. An output mount must also be specified so that billing usage records can be written.
+
+| Placeholder | Description |
+|-|-|
+| `{IMAGE}` | The container image you want to use.<br/><br/>For example: `mcr.microsoft.com/azure-cognitive-services/speechservices/neural-text-to-speech:latest` |
+| `{MEMORY_SIZE}` | The appropriate size of memory to allocate for your container.<br/><br/>For example: `4g` |
+| `{NUMBER_CPUS}` | The appropriate number of CPUs to allocate for your container.<br/><br/>For example: `4` |
+| `{LICENSE_MOUNT}` | The path where the license will be located and mounted.<br/><br/>For example: `/host/license:/path/to/license/directory` |
+| `{OUTPUT_PATH}` | The output path for logging.<br/><br/>For example: `/host/output:/path/to/output/directory`<br/><br/>For more information, see [usage records](../containers/disconnected-containers.md#usage-records) in the Azure AI services documentation. |
+| `{CONTAINER_LICENSE_DIRECTORY}` | Location of the license folder on the container's local filesystem.<br/><br/>For example: `/path/to/license/directory` |
+| `{CONTAINER_OUTPUT_DIRECTORY}` | Location of the output folder on the container's local filesystem.<br/><br/>For example: `/path/to/output/directory` |
+
+```bash
+docker run --rm -it -p 5000:5000 --memory {MEMORY_SIZE} --cpus {NUMBER_CPUS} \
+-v {LICENSE_MOUNT} \
+-v {OUTPUT_PATH} \
+{IMAGE} \
+eula=accept \
+Mounts:License={CONTAINER_LICENSE_DIRECTORY} \
+Mounts:Output={CONTAINER_OUTPUT_DIRECTORY}
+```
+
+Speech containers provide a default directory for writing the license file and billing log at runtime. The default directories are `/license` and `/output`, respectively.
+
+When you're mounting these directories to the container with the `docker run -v` command, make sure the ownership of the local machine directories is set to `nonroot:nonroot` before running the container.
+
+Here's a sample command to set file and directory ownership:
+
+```bash
+sudo chown -R nonroot:nonroot <YOUR_LOCAL_MACHINE_PATH_1> <YOUR_LOCAL_MACHINE_PATH_2> ...
+```
+++
+For more information about `docker run` with Speech containers, see [Install and run Speech containers with Docker](speech-container-howto.md#run-the-container).
+
+## Use the container
++
+[Try the text to speech quickstart](get-started-text-to-speech.md) using host authentication instead of key and region.
+
+### SSML voice element
+
+When you construct a neural text to speech HTTP POST, the [SSML](speech-synthesis-markup.md) message requires a `voice` element with a `name` attribute. The [locale of the voice](language-support.md?tabs=tts) must correspond to the locale of the container model.
+
+For example, a model that was downloaded via the `latest` tag (defaults to "en-US") would have a voice name of `en-US-AriaNeural`.
+
+```xml
+<speak version="1.0" xmlns="http://www.w3.org/2001/10/synthesis" xml:lang="en-US">
+ <voice name="en-US-AriaNeural">
+ This is the text that is spoken.
+ </voice>
+</speak>
+```
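+
+If you want to exercise the container over raw HTTP rather than the Speech SDK, you can post the SSML document directly to the container. The route and headers below are an assumption modeled on the cloud text to speech REST API (`/cognitiveservices/v1` with an `application/ssml+xml` body); the container's `/swagger` page lists the exact routes and output formats it actually hosts:
+
+```bash
+# Hypothetical request shape, based on the cloud text to speech REST API contract.
+# Verify the route and supported output formats at http://localhost:5000/swagger.
+curl -s -X POST "http://localhost:5000/cognitiveservices/v1" \
+  -H "Content-Type: application/ssml+xml" \
+  -H "X-Microsoft-OutputFormat: riff-24khz-16bit-mono-pcm" \
+  --data '<speak version="1.0" xmlns="http://www.w3.org/2001/10/synthesis" xml:lang="en-US"><voice name="en-US-AriaNeural">This is the text that is spoken.</voice></speak>' \
+  -o output.wav
+```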
+
+## Next steps
+
+* See the [Speech containers overview](speech-container-overview.md)
+* Review [configure containers](speech-container-configuration.md) for configuration settings
+* Use more [Azure AI services containers](../cognitive-services-container-support.md)
ai-services Speech Container Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/speech-container-overview.md
+
+ Title: Speech containers overview - Speech service
+
+description: Use the Docker containers for the Speech service to perform speech recognition, transcription, generation, and more on-premises.
++++++ Last updated : 04/18/2023+
+keywords: on-premises, Docker, container
++
+# Speech containers overview
+
+By using containers, you can use a subset of the Speech service features in your own environment. With Speech containers, you can build a speech application architecture that's optimized for both robust cloud capabilities and edge locality. Containers are great for specific security and data governance requirements.
+
+> [!NOTE]
+> You must [request and get approval](#request-approval-to-run-the-container) to use a Speech container.
+
+## Available Speech containers
+
+The following table lists the Speech containers available in the Microsoft Container Registry (MCR). The table also lists the features supported by each container and the latest version of the container.
+
+| Container | Features | Supported versions and locales |
+|--|--|--|
+| [Speech to text](speech-container-stt.md) | Transcribes continuous real-time speech or batch audio recordings with intermediate results. | Latest: 4.0.0<br/><br/>For all supported versions and locales, see the [Microsoft Container Registry (MCR)](https://mcr.microsoft.com/product/azure-cognitive-services/speechservices/speech-to-text/tags) and [JSON tags](https://mcr.microsoft.com/v2/azure-cognitive-services/speechservices/speech-to-text/tags/list).|
+| [Custom speech to text](speech-container-cstt.md) | Using a custom model from the [Custom Speech portal](https://speech.microsoft.com/customspeech), transcribes continuous real-time speech or batch audio recordings into text with intermediate results. | Latest: 4.0.0<br/><br/>For all supported versions and locales, see the [Microsoft Container Registry (MCR)](https://mcr.microsoft.com/product/azure-cognitive-services/speechservices/custom-speech-to-text/tags) and [JSON tags](https://mcr.microsoft.com/v2/azure-cognitive-services/speechservices/speech-to-text/tags/list). |
+| [Speech language identification](speech-container-lid.md)<sup>1, 2</sup> | Detects the language spoken in audio files. | Latest: 1.11.0<br/><br/>For all supported versions and locales, see the [Microsoft Container Registry (MCR)](https://mcr.microsoft.com/product/azure-cognitive-services/speechservices/language-detection/tags) and [JSON tags](https://mcr.microsoft.com/v2/azure-cognitive-services/speechservices/language-detection/tags/list). |
+| [Neural text to speech](speech-container-ntts.md) | Converts text to natural-sounding speech by using deep neural network technology, which allows for more natural synthesized speech. | Latest: 2.14.0<br/><br/>For all supported versions and locales, see the [Microsoft Container Registry (MCR)](https://mcr.microsoft.com/product/azure-cognitive-services/speechservices/neural-text-to-speech/tags) and [JSON tags](https://mcr.microsoft.com/v2/azure-cognitive-services/speechservices/neural-text-to-speech/tags/list). |
+
+<sup>1</sup> The container is available in public preview. Containers in preview are still under development and don't meet Microsoft's stability and support requirements.
+<sup>2</sup> Not available as a disconnected container.
+
+## Request approval to run the container
+
+To use the Speech containers, you must submit one of the following request forms and wait for approval:
+- [Connected containers request form](https://aka.ms/csgate) if you want to run containers regularly, in environments that are only connected to the internet.
+- [Disconnected Container request form](https://aka.ms/csdisconnectedcontainers) if you want to run containers in environments that can be disconnected from the internet. For more information about applying and purchasing a commitment plan to use containers in disconnected environments, see [Use containers in disconnected environments](../containers/disconnected-containers.md) in the Azure AI services documentation.
+
+The form requests information about you, your company, and the user scenario for which you'll use the container.
+
+* On the form, you must use an email address associated with an Azure subscription ID.
+* The Azure resource you use to run the container must have been created with the approved Azure subscription ID.
+* Check your email for updates on the status of your application from Microsoft.
+
+After you submit the form, the Azure AI services team reviews it and emails you with a decision within 10 business days.
+
+> [!IMPORTANT]
+> To use the Speech containers, your request must be approved.
+
+While you're waiting for approval, you can [set up the prerequisites](speech-container-howto.md#prerequisites) on your host computer. You can also download the container from the Microsoft Container Registry (MCR). You can run the container after your request is approved.
+
+## Billing
+
+The Speech containers send billing information to Azure by using a Speech resource on your Azure account.
+
+> [!NOTE]
+> Connected and disconnected container pricing and commitment tiers vary. For more information, see [Speech service pricing](https://azure.microsoft.com/pricing/details/cognitive-services/speech-services/).
+
+Speech containers aren't licensed to run without being connected to Azure for metering. You must configure your container to communicate billing information with the metering service at all times. For more information, see [billing arguments](speech-container-howto.md#billing-arguments).
+
+## Container recipes and other container services
+
+You can use container recipes to create containers that can be reused. Containers can be built with some or all configuration settings so that they don't need to be supplied when the container is started. For container recipes, see the following Azure AI services articles:
+- [Create containers for reuse](../containers/container-reuse-recipe.md)
+- [Deploy and run container on Azure Container Instance](../containers/azure-container-instance-recipe.md)
+- [Deploy a language detection container to Azure Kubernetes Service](../containers/azure-kubernetes-recipe.md)
+- [Use Docker Compose to deploy multiple containers](../containers/docker-compose-recipe.md)
+
+For information about other container services, see the following Azure AI services articles:
+- [Tutorial: Create a container image for deployment to Azure Container Instances](../../container-instances/container-instances-tutorial-prepare-app.md)
+- [Quickstart: Create a private container registry using the Azure CLI](../../container-registry/container-registry-get-started-azure-cli.md)
+- [Tutorial: Prepare an application for Azure Kubernetes Service (AKS)](../../aks/tutorial-kubernetes-prepare-app.md)
+
+## Next steps
+
+* [Install and run Speech containers](speech-container-howto.md)
++
ai-services Speech Container Stt https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/speech-container-stt.md
+
+ Title: Speech to text containers - Speech service
+
+description: Install and run speech to text containers with Docker to perform speech recognition, transcription, generation, and more on-premises.
+++++++ Last updated : 04/18/2023+
+zone_pivot_groups: programming-languages-speech-sdk-cli
+keywords: on-premises, Docker, container
++
+# Speech to text containers with Docker
+
+The Speech to text container transcribes real-time speech or batch audio recordings with intermediate results. In this article, you'll learn how to download, install, and run a Speech to text container.
+
+> [!NOTE]
+> You must [request and get approval](speech-container-overview.md#request-approval-to-run-the-container) to use a Speech container.
+
+For more information about prerequisites, validating that a container is running, running multiple containers on the same host, and running disconnected containers, see [Install and run Speech containers with Docker](speech-container-howto.md).
+
+## Container images
+
+The Speech to text container image for all supported versions and locales can be found on the [Microsoft Container Registry (MCR)](https://mcr.microsoft.com/product/azure-cognitive-services/speechservices/speech-to-text/tags) syndicate. It resides within the `azure-cognitive-services/speechservices/` repository and is named `speech-to-text`.
++
+The fully qualified container image name is `mcr.microsoft.com/azure-cognitive-services/speechservices/speech-to-text`. Either append a specific version or append `:latest` to get the most recent version.
+
+| Version | Path |
+|--|--|
+| Latest | `mcr.microsoft.com/azure-cognitive-services/speechservices/speech-to-text:latest`<br/><br/>The `latest` tag pulls the latest image for the `en-US` locale. |
+| 3.12.0 | `mcr.microsoft.com/azure-cognitive-services/speechservices/speech-to-text:3.12.0-amd64-mr-in` |
+
+All tags, except for `latest`, are in the following format and are case sensitive:
+
+```
+<major>.<minor>.<patch>-<platform>-<locale>-<prerelease>
+```
+
+The tags are also available [in JSON format](https://mcr.microsoft.com/v2/azure-cognitive-services/speechservices/speech-to-text/tags/list) for your convenience. The body includes the container path and list of tags. The tags aren't sorted by version, but `"latest"` is always included at the end of the list as shown in this snippet:
+
+```json
+{
+ "name": "azure-cognitive-services/speechservices/speech-to-text",
+ "tags": [
+ "2.10.0-amd64-ar-ae",
+ "2.10.0-amd64-ar-bh",
+ "2.10.0-amd64-ar-eg",
+ "2.10.0-amd64-ar-iq",
+ "2.10.0-amd64-ar-jo",
+ <--redacted for brevity-->
+ "latest"
+ ]
+}
+```
+
+## Get the container image with docker pull
+
+You need the [prerequisites](speech-container-howto.md#prerequisites) including required hardware. Please also see the [recommended allocation of resources](speech-container-howto.md#container-requirements-and-recommendations) for each Speech container.
+
+Use the [docker pull](https://docs.docker.com/engine/reference/commandline/pull/) command to download a container image from Microsoft Container Registry:
+
+```bash
+docker pull mcr.microsoft.com/azure-cognitive-services/speechservices/speech-to-text:latest
+```
+
+> [!IMPORTANT]
+> The `latest` tag pulls the latest image for the `en-US` locale. For additional versions and locales, see [speech to text container images](#container-images).
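+
+For example, to pull the `mr-in` (Marathi, India) locale image listed in the version table instead of the default `en-US` image, specify the full tag:
+
+```bash
+# Pull a specific version and locale image by its full tag
+docker pull mcr.microsoft.com/azure-cognitive-services/speechservices/speech-to-text:3.12.0-amd64-mr-in
+```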
+
+## Run the container with docker run
+
+Use the [docker run](https://docs.docker.com/engine/reference/commandline/run/) command to run the container.
+
+# [Speech to text](#tab/container)
+
+The following table represents the various `docker run` parameters and their corresponding descriptions:
+
+| Parameter | Description |
+|--|--|
+| `{ENDPOINT_URI}` | The endpoint is required for metering and billing. For more information, see [billing arguments](speech-container-howto.md#billing-arguments). |
+| `{API_KEY}` | The API key is required. For more information, see [billing arguments](speech-container-howto.md#billing-arguments). |
+
+When you run the speech to text container, configure the port, memory, and CPU according to the speech to text container [requirements and recommendations](speech-container-howto.md#container-requirements-and-recommendations).
+
+Here's an example `docker run` command with placeholder values. You must specify the `ENDPOINT_URI` and `API_KEY` values:
+
+```bash
+docker run --rm -it -p 5000:5000 --memory 8g --cpus 4 \
+mcr.microsoft.com/azure-cognitive-services/speechservices/speech-to-text \
+Eula=accept \
+Billing={ENDPOINT_URI} \
+ApiKey={API_KEY}
+```
+
+This command:
+* Runs a `speech-to-text` container from the container image.
+* Allocates 4 CPU cores and 8 GB of memory.
+* Exposes TCP port 5000 and allocates a pseudo-TTY for the container.
+* Automatically removes the container after it exits. The container image is still available on the host computer.
+
+# [Disconnected speech to text](#tab/disconnected)
+
+To run disconnected containers (not connected to the internet), you must submit [this request form](https://aka.ms/csdisconnectedcontainers) and wait for approval. For more information about applying and purchasing a commitment plan to use containers in disconnected environments, see [Use containers in disconnected environments](../containers/disconnected-containers.md) in the Azure AI services documentation.
+
+If you have been approved to run the container disconnected from the internet, the following example shows the formatting of the `docker run` command to use, with placeholder values. Replace these placeholder values with your own values.
+
+The `DownloadLicense=True` parameter in your `docker run` command will download a license file that will enable your Docker container to run when it isn't connected to the internet. It also contains an expiration date, after which the license file will be invalid to run the container. You can only use a license file with the appropriate container that you've been approved for. For example, you can't use a license file for a `speech-to-text` container with a `neural-text-to-speech` container.
+
+| Placeholder | Description |
+|-|-|
+| `{IMAGE}` | The container image you want to use.<br/><br/>For example: `mcr.microsoft.com/azure-cognitive-services/speechservices/speech-to-text:latest` |
+| `{LICENSE_MOUNT}` | The path where the license will be downloaded, and mounted.<br/><br/>For example: `/host/license:/path/to/license/directory` |
+| `{ENDPOINT_URI}` | The endpoint for authenticating your service request. You can find it on your resource's **Key and endpoint** page, on the Azure portal.<br/><br/>For example: `https://<your-resource-name>.cognitiveservices.azure.com` |
+| `{API_KEY}` | The key for your Speech resource. You can find it on your resource's **Key and endpoint** page, on the Azure portal. |
+| `{CONTAINER_LICENSE_DIRECTORY}` | Location of the license folder on the container's local filesystem.<br/><br/>For example: `/path/to/license/directory` |
+
+```bash
+docker run --rm -it -p 5000:5000 \
+-v {LICENSE_MOUNT} \
+{IMAGE} \
+eula=accept \
+billing={ENDPOINT_URI} \
+apikey={API_KEY} \
+DownloadLicense=True \
+Mounts:License={CONTAINER_LICENSE_DIRECTORY}
+```
+
+Once the license file has been downloaded, you can run the container in a disconnected environment. The following example shows the formatting of the `docker run` command you'll use, with placeholder values. Replace these placeholder values with your own values.
+
+Wherever the container is run, the license file must be mounted to the container and the location of the license folder on the container's local filesystem must be specified with `Mounts:License=`. An output mount must also be specified so that billing usage records can be written.
+
+| Placeholder | Description |
+|-|-|
+| `{IMAGE}` | The container image you want to use.<br/><br/>For example: `mcr.microsoft.com/azure-cognitive-services/speechservices/speech-to-text:latest` |
+| `{MEMORY_SIZE}` | The appropriate size of memory to allocate for your container.<br/><br/>For example: `4g` |
+| `{NUMBER_CPUS}` | The appropriate number of CPUs to allocate for your container.<br/><br/>For example: `4` |
+| `{LICENSE_MOUNT}` | The path where the license will be located and mounted.<br/><br/>For example: `/host/license:/path/to/license/directory` |
+| `{OUTPUT_PATH}` | The output path for logging.<br/><br/>For example: `/host/output:/path/to/output/directory`<br/><br/>For more information, see [usage records](../containers/disconnected-containers.md#usage-records) in the Azure AI services documentation. |
+| `{CONTAINER_LICENSE_DIRECTORY}` | Location of the license folder on the container's local filesystem.<br/><br/>For example: `/path/to/license/directory` |
+| `{CONTAINER_OUTPUT_DIRECTORY}` | Location of the output folder on the container's local filesystem.<br/><br/>For example: `/path/to/output/directory` |
+
+```bash
+docker run --rm -it -p 5000:5000 --memory {MEMORY_SIZE} --cpus {NUMBER_CPUS} \
+-v {LICENSE_MOUNT} \
+-v {OUTPUT_PATH} \
+{IMAGE} \
+eula=accept \
+Mounts:License={CONTAINER_LICENSE_DIRECTORY} \
+Mounts:Output={CONTAINER_OUTPUT_DIRECTORY}
+```
+
+Speech containers provide a default directory for writing the license file and billing log at runtime. The default directories are `/license` and `/output`, respectively.
+
+When you're mounting these directories to the container with the `docker run -v` command, make sure the ownership of the local machine directories is set to `nonroot:nonroot` before running the container.
+
+Here's a sample command to set file and directory ownership:
+
+```bash
+sudo chown -R nonroot:nonroot <YOUR_LOCAL_MACHINE_PATH_1> <YOUR_LOCAL_MACHINE_PATH_2> ...
+```
+++
+For more information about `docker run` with Speech containers, see [Install and run Speech containers with Docker](speech-container-howto.md#run-the-container).
++
+## Use the container
++
+[Try the speech to text quickstart](get-started-speech-to-text.md) using host authentication instead of key and region.
+
+## Next steps
+
+* See the [Speech containers overview](speech-container-overview.md)
+* Review [configure containers](speech-container-configuration.md) for configuration settings
+* Use more [Azure AI services containers](../cognitive-services-container-support.md)
ai-services Speech Devices https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/speech-devices.md
+
+ Title: Speech devices overview - Speech service
+
+description: Get started with the Speech devices. The Speech service works with a wide variety of devices and audio sources.
++++++ Last updated : 12/27/2021+++
+# What are Speech devices?
+
+The [Speech service](overview.md) works with a wide variety of devices and audio sources. You can use the default audio processing available on a device. Otherwise, the Speech SDK has an option for you to use our advanced audio processing algorithms that are designed to work well with the [Speech service](overview.md). It provides accurate far-field [speech recognition](speech-to-text.md) via noise suppression, echo cancellation, beamforming, and dereverberation.
+
+## Audio processing
+Audio processing refers to enhancements applied to an audio stream to improve its quality. Examples of common enhancements include automatic gain control (AGC), noise suppression, and acoustic echo cancellation (AEC). The Speech SDK integrates the [Microsoft Audio Stack (MAS)](audio-processing-overview.md), allowing any application or product to use its audio processing capabilities on input audio.
+
+## Microphone array recommendations
+The Speech SDK works best with a microphone array that has been designed according to our recommended guidelines. For details, see [Microphone array recommendations](speech-sdk-microphone.md).
+
+## Device development kits
+The Speech SDK is designed to work with purpose-built development kits, and varying microphone array configurations. For example, you can use one of these Azure development kits.
+
+- [Azure Percept DK](../../azure-percept/overview-azure-percept-dk.md) contains a preconfigured audio processor and a four-microphone linear array. You can use voice commands, keyword spotting, and far field speech with the help of Azure AI services.
+- [Azure Kinect DK](../../kinect-dk/about-azure-kinect-dk.md) is a spatial computing developer kit with advanced AI sensors that provide sophisticated Azure AI Vision and speech models. As an all-in-one small device with multiple modes, it contains a depth sensor, spatial microphone array with a video camera, and orientation sensor.
+
+## Next steps
+
+* [Audio processing concepts](audio-processing-overview.md)
ai-services Speech Encryption Of Data At Rest https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/speech-encryption-of-data-at-rest.md
+
+ Title: Speech service encryption of data at rest
+
+description: Microsoft offers Microsoft-managed encryption keys, and also lets you manage your Azure AI services subscriptions with your own keys, called customer-managed keys (CMK). This article covers data encryption at rest for Speech service.
+++++ Last updated : 07/14/2021+
+#Customer intent: As a user of the Translator service, I want to learn how encryption at rest works.
++
+# Speech service encryption of data at rest
+
+Speech service automatically encrypts your data when it's persisted to the cloud. Speech service encryption protects your data and helps you meet your organizational security and compliance commitments.
+
+## About Azure AI services encryption
+
+Data is encrypted and decrypted using [FIPS 140-2](https://en.wikipedia.org/wiki/FIPS_140-2) compliant [256-bit AES](https://en.wikipedia.org/wiki/Advanced_Encryption_Standard) encryption. Encryption and decryption are transparent, meaning encryption and access are managed for you. Your data is secure by default and you don't need to modify your code or applications to take advantage of encryption.
+
+## About encryption key management
+
+When you use Custom Speech and Custom Voice, Speech service may store the following data in the cloud:
+
+* Speech trace data - only if you turn tracing on for your custom endpoint
+* Uploaded training and test data
+
+By default, your data is stored in Microsoft's storage, and your subscription uses Microsoft-managed encryption keys. You also have the option to prepare your own storage account. Access to the store is managed by a managed identity, and Speech service can't directly access your own data, such as speech trace data, customization training data, and custom models.
+
+For more information about Managed Identity, see [What are managed identities](../../active-directory/managed-identities-azure-resources/overview.md).
+
+However, when you use Custom Commands, you can manage your subscription with your own encryption keys. Customer-managed keys (CMK), also known as bring your own key (BYOK), offer greater flexibility to create, rotate, disable, and revoke access controls. You can also audit the encryption keys used to protect your data. For more information about Custom Commands and CMK, see [Custom Commands encryption of data at rest](custom-commands-encryption-of-data-at-rest.md).
+
+## Bring your own storage (BYOS) for customization and logging
+
+To request access to bring your own storage, fill out and submit the [Speech service - bring your own storage (BYOS) request form](https://aka.ms/cogsvc-cmk). Once approved, you'll need to create your own storage account to store the data required for customization and logging. When adding a storage account, the Speech service resource will enable a system assigned managed identity.
+
+> [!IMPORTANT]
+> The user account you use to create a Speech resource with BYOS functionality enabled should be assigned the [Owner role at the Azure subscription scope](../../cost-management-billing/manage/add-change-subscription-administrator.md#to-assign-a-user-as-an-administrator). Otherwise you will get an authorization error during the resource provisioning.
+
+After the system assigned managed identity is enabled, this resource will be registered with Azure Active Directory (AAD). After being registered, the managed identity will be given access to the storage account. For more about managed identities, see [What are managed identities](../../active-directory/managed-identities-azure-resources/overview.md).
+
+> [!IMPORTANT]
+> If you disable system assigned managed identities, access to the storage account will be removed. This will cause the parts of the Speech service that require access to the storage account to stop working.
+
+The Speech service doesn't currently support Customer Lockbox. However, customer data can be stored using BYOS, allowing you to achieve similar data controls to [Customer Lockbox](../../security/fundamentals/customer-lockbox-overview.md). Keep in mind that Speech service data stays and is processed in the region where the Speech resource was created. This applies to any data at rest and data in transit. When using customization features, like Custom Speech and Custom Voice, all customer data is transferred, stored, and processed in the same region where your BYOS (if used) and Speech service resource reside.
+
+> [!IMPORTANT]
+> Microsoft **does not** use customer data to improve its Speech models. Additionally, if endpoint logging is disabled and no customizations are used, then no customer data is stored.
+
+## Next steps
+
+* [Speech service - bring your own storage (BYOS) request form](https://aka.ms/cogsvc-cmk)
+* [What are managed identities](../../active-directory/managed-identities-azure-resources/overview.md).
ai-services Speech Sdk Microphone https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/speech-sdk-microphone.md
+
+ Title: Microphone array recommendations - Speech service
+
+description: Speech SDK microphone array recommendations. These array geometries are recommended for use with the Microsoft Audio Stack.
++++++ Last updated : 12/27/2021++++
+# Microphone array recommendations
+
+In this article, you learn how to design a microphone array customized for use with the Speech SDK. This is most pertinent if you are selecting, specifying, or building hardware for speech solutions.
+
+The Speech SDK works best with a microphone array that has been designed according to these guidelines, including the microphone geometry, component selection, and architecture.
+
+## Microphone geometry
+
+The following array geometries are recommended for use with the Microsoft Audio Stack. Location of sound sources and rejection of ambient noise is improved with greater number of microphones with dependencies on specific applications, user scenarios, and the device form factor.
+
+| Array |Microphones| Geometry |
+| -- | -- | -- |
+|Circular - 7 Microphones|<img src="media/speech-devices-sdk/7-mic-c.png" alt="7 mic circular array" width="150"/>|6 Outer, 1 Center, Radius = 42.5 mm, Evenly Spaced|
+|Circular - 4 Microphones|<img src="media/speech-devices-sdk/4-mic-c.png" alt="4 mic circular array" width="150"/>|3 Outer, 1 Center, Radius = 42.5 mm, Evenly Spaced|
+|Linear - 4 Microphones|<img src="media/speech-devices-sdk/4-mic-l.png" alt="4 mic linear array" width="150"/>|Length = 120 mm, Spacing = 40 mm|
+|Linear - 2 Microphones|<img src="media/speech-devices-sdk/2-mic-l.png" alt="2 mic linear array" width="150"/>|Spacing = 40 mm|
+
+Microphone channels should be ordered ascending from 0, according to the numbering depicted above for each array. The Microsoft Audio Stack will require an additional reference stream of audio playback to perform echo cancellation.
+
+## Component selection
+
+Microphone components should be selected to accurately reproduce a signal free of noise and distortion.
+
+The recommended properties when selecting microphones are:
+
+| Parameter | Recommended |
+| -- | -- |
+| SNR | \>= 65 dB (1 kHz signal 94 dBSPL, A-weighted noise) |
+| Amplitude Matching | ± 1 dB @ 1 kHz |
+| Phase Matching | ± 2° @ 1 kHz |
+| Acoustic Overload Point (AOP) | \>= 120 dBSPL (THD = 10%) |
+| Bit Rate | Minimum 24-bit |
+| Sampling Rate | Minimum 16 kHz\* |
+| Frequency Response | ± 3 dB, 200-8000 Hz Floating Mask\* |
+| Reliability | Storage Temperature Range -40°C to 70°C<br />Operating Temperature Range -20°C to 55°C |
+
+\*_Higher sampling rates or "wider" frequency ranges may be necessary for high-quality communications (VoIP) applications_
+
+Good component selection must be paired with good electroacoustic integration in order to avoid impairing the performance of the components used. Unique use cases may also necessitate additional requirements (for example: operating temperature ranges).
+
+## Microphone array integration
+
+The performance of the microphone array when integrated into a device will differ from the component specification. It is important to ensure that the microphones are well matched after integration. Therefore the device performance measured after any fixed gain or EQ should meet the following recommendations:
+
+| Parameter | Recommended |
+| -- | -- |
+| SNR | \> 63 dB (1 kHz signal 94 dBSPL, A-weighted noise) |
+| Output Sensitivity | -26 dBFS/Pa @ 1 kHz (recommended) |
+| Amplitude Matching | ± 2 dB, 200-8000 Hz |
+| THD%\* | ≤ 1%, 200-8000 Hz, 94 dBSPL, 5th Order |
+| Frequency Response | ± 6 dB, 200-8000 Hz Floating Mask\*\* |
+
+\*_A low distortion speaker is required to measure THD (e.g. Neumann KH120)_
+
+\*\*_"Wider" frequency ranges may be necessary for high-quality communications (VoIP) applications_
+
+## Speaker integration recommendations
+
+As echo cancellation is necessary for speech recognition devices that contain speakers, additional recommendations are provided for speaker selection and integration.
+
+| Parameter | Recommended |
+| -- | -- |
+| Linearity Considerations | No non-linear processing after speaker reference, otherwise a hardware-based loopback reference stream is required |
+| Speaker Loopback | Provided via WASAPI, private APIs, custom ALSA plug-in (Linux), or provided via firmware channel |
+| THD% | 3rd Octave Bands minimum 5th Order, 70 dBA Playback @ 0.8 m ≤ 6.3%, 315-500 Hz ≤ 5%, 630-5000 Hz |
+| Echo Coupling to Microphones | \> -10 dB TCLw using ITU-T G.122 Annex B.4 method, normalized to mic level<br />TCLw = TCLwmeasured \+ (Measured Level - Target Output Sensitivity)<br />TCLw = TCLwmeasured \+ (Measured Level - (-26)) |
+
+## Integration design architecture
+
+The following guidelines for architecture are necessary when integrating microphones into a device:
+
+| Parameter | Recommendation |
+| -- | -- |
+| Mic Port Similarity | All microphone ports are same length in array |
+| Mic Port Dimensions | Port size Ø0.8-1.0 mm. Port Length / Port Diameter \< 2 |
+| Mic Sealing | Sealing gaskets uniformly implemented in stack-up. Recommend \> 70% compression ratio for foam gaskets |
+| Mic Reliability | Mesh should be used to prevent dust and ingress (between PCB for bottom ported microphones and sealing gasket/top cover) |
+| Mic Isolation | Rubber gaskets and vibration decoupling through structure, particularly for isolating any vibration paths due to integrated speakers |
+| Sampling Clock | Device audio must be free of jitter and drop-outs with low drift |
+| Record Capability | The device must be able to record individual channel raw streams simultaneously |
+| USB | All USB audio input devices must set descriptors according to the [USB Audio Devices Rev3 Spec](https://www.usb.org/document-library/usb-audio-devices-rev-30-and-adopters-agreement) |
+| Microphone Geometry | Drivers must implement [Microphone Array Geometry Descriptors](/windows-hardware/drivers/audio/ksproperty-audio-mic-array-geometry) correctly |
+| Discoverability | Devices must not have any undiscoverable or uncontrollable hardware, firmware, or 3rd party software-based non-linear audio processing algorithms to/from the device |
+| Capture Format | Capture formats must use a minimum sampling rate of 16 kHz and recommended 24-bit depth |
+
+## Electrical architecture considerations
+
+Where applicable, arrays may be connected to a USB host (such as a SoC that runs the [Microsoft Audio Stack (MAS)](audio-processing-overview.md)) and interfaces to Speech services or other applications.
+
+Hardware components such as PDM-to-TDM conversion should ensure that the dynamic range and SNR of the microphones is preserved within re-samplers.
+
+High-speed USB Audio Class 2.0 should be supported within any audio MCUs in order to provide the necessary bandwidth for up to seven channels at higher sample rates and bit depths.
+
+## Next steps
+
+> [!div class="nextstepaction"]
+> [Learn more about audio processing](audio-processing-overview.md)
ai-services Speech Sdk https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/speech-sdk.md
+
+ Title: About the Speech SDK - Speech service
+
+description: The Speech software development kit (SDK) exposes many of the Speech service capabilities, making it easier to develop speech-enabled applications.
++++++ Last updated : 09/16/2022+++
+# What is the Speech SDK?
+
+The Speech SDK (software development kit) exposes many of the [Speech service capabilities](overview.md), so you can develop speech-enabled applications. The Speech SDK is available [in many programming languages](quickstarts/setup-platform.md) and across platforms. The Speech SDK is ideal for both real-time and non-real-time scenarios, by using local devices, files, Azure Blob Storage, and input and output streams.
+
+In some cases, you can't or shouldn't use the [Speech SDK](speech-sdk.md). In those cases, you can use REST APIs to access the Speech service. For example, use the [Speech to text REST API](rest-speech-to-text.md) for [batch transcription](batch-transcription.md) and [custom speech](custom-speech-overview.md).
+
+## Supported languages
+
+The Speech SDK supports the following languages and platforms:
+
+| Programming language | Reference | Platform support |
+|-|-|-|
+| [C#](quickstarts/setup-platform.md?pivots=programming-language-csharp) <sup>1</sup> | [.NET](/dotnet/api/microsoft.cognitiveservices.speech) | Windows, Linux, macOS, Mono, Xamarin.iOS, Xamarin.Mac, Xamarin.Android, UWP, Unity |
+| [C++](quickstarts/setup-platform.md?pivots=programming-language-cpp) <sup>2</sup> | [C++](/cpp/cognitive-services/speech/) | Windows, Linux, macOS |
+| [Go](quickstarts/setup-platform.md?pivots=programming-language-go) | [Go](https://github.com/Microsoft/cognitive-services-speech-sdk-go) | Linux |
+| [Java](quickstarts/setup-platform.md?pivots=programming-language-java) | [Java](/java/api/com.microsoft.cognitiveservices.speech) | Android, Windows, Linux, macOS |
+| [JavaScript](quickstarts/setup-platform.md?pivots=programming-language-javascript) | [JavaScript](/javascript/api/microsoft-cognitiveservices-speech-sdk/) | Browser, Node.js |
+| [Objective-C](quickstarts/setup-platform.md?pivots=programming-language-objectivec) | [Objective-C](/objectivec/cognitive-services/speech/) | iOS, macOS |
+| [Python](quickstarts/setup-platform.md?pivots=programming-language-python) | [Python](/python/api/azure-cognitiveservices-speech/) | Windows, Linux, macOS |
+| [Swift](quickstarts/setup-platform.md?pivots=programming-language-swift) | [Objective-C](/objectivec/cognitive-services/speech/) <sup>3</sup> | iOS, macOS |
+
+<sup>1 C# code samples are available in the documentation. The Speech SDK for C# is based on .NET Standard 2.0, so it supports many platforms and programming languages. For more information, see [.NET implementation support](/dotnet/standard/net-standard#net-implementation-support).</sup>
+<sup>2 C isn't a supported programming language for the Speech SDK.</sup>
+<sup>3 The Speech SDK for Swift shares client libraries and reference documentation with the Speech SDK for Objective-C.</sup>
++
+## Speech SDK demo
+
+The following video shows how to install the [Speech SDK for C#](quickstarts/setup-platform.md) and write a simple .NET console application for speech to text.
+
+> [!VIDEO c20d3b0c-e96a-4154-9299-155e27db7117]
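+
+If you'd rather follow along from the command line, the C# package used in the video can be added to a .NET console project with the .NET CLI. This sketch assumes the .NET SDK is installed and that `SpeechQuickstart` is just an illustrative project name:
+
+```bash
+# Create a console project and add the Speech SDK NuGet package
+dotnet new console -n SpeechQuickstart
+cd SpeechQuickstart
+dotnet add package Microsoft.CognitiveServices.Speech
+```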
+
+## Code samples
+
+Speech SDK code samples are available in the documentation and GitHub.
+
+### Docs samples
+
+At the top of documentation pages that contain samples, you can select a programming language: C#, C++, Go, Java, JavaScript, Objective-C, Python, or Swift.
++
+If a sample is not available in your preferred programming language, you can select another programming language to get started and learn about the concepts, or see the reference and samples linked from the beginning of the article.
+
+### GitHub samples
+
+In-depth samples are available in the [Azure-Samples/cognitive-services-speech-sdk](https://aka.ms/csspeech/samples) repository on GitHub. There are samples for C# (including UWP, Unity, and Xamarin), C++, Java, JavaScript (including Browser and Node.js), Objective-C, Python, and Swift. Code samples for Go are available in the [Microsoft/cognitive-services-speech-sdk-go](https://github.com/Microsoft/cognitive-services-speech-sdk-go) repository on GitHub.
+
+## Help options
+
+The [Microsoft Q&A](/answers/topics/azure-speech.html) and [Stack Overflow](https://stackoverflow.com/questions/tagged/azure-speech) forums are available for the developer community to ask and answer questions about Azure Cognitive Speech and other services. Microsoft monitors the forums and replies to questions that the community has not yet answered. To make sure that we see your question, tag it with 'azure-speech'.
+
+You can suggest an idea or report a bug by creating an issue on GitHub:
+- [Azure-Samples/cognitive-services-speech-sdk](https://aka.ms/GHspeechissues)
+- [Microsoft/cognitive-services-speech-sdk-go](https://github.com/microsoft/cognitive-services-speech-sdk-go/issues)
+- [Microsoft/cognitive-services-speech-sdk-js](https://github.com/microsoft/cognitive-services-speech-sdk-js/issues)
+
+See also [Azure AI services support and help options](../cognitive-services-support-options.md?context=/azure/ai-services/speech-service/context/context) to get support, stay up-to-date, give feedback, and report bugs for Azure AI services.
+
+## Next steps
+
+* [Install the SDK](quickstarts/setup-platform.md)
+* [Try the speech to text quickstart](./get-started-speech-to-text.md)
ai-services Speech Service Vnet Service Endpoint https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/speech-service-vnet-service-endpoint.md
+
+ Title: Use Virtual Network service endpoints with Speech service
+
+description: This article describes how to use Speech service with an Azure Virtual Network service endpoint.
++++++ Last updated : 03/19/2021+++
+# Use Speech service through a Virtual Network service endpoint
+
+[Azure Virtual Network](../../virtual-network/virtual-networks-overview.md) [service endpoints](../../virtual-network/virtual-network-service-endpoints-overview.md) help to provide secure and direct connectivity to Azure services over an optimized route on the Azure backbone network. Endpoints help you secure your critical Azure service resources to only your virtual networks. Service endpoints enable private IP addresses in the virtual network to reach the endpoint of an Azure service without needing a public IP address on the virtual network.
+
+This article explains how to set up and use Virtual Network service endpoints with Speech service in Azure AI services.
+
+> [!NOTE]
+> Before you start, review [how to use virtual networks with Azure AI services](../cognitive-services-virtual-networks.md).
+
+This article also describes [how to remove Virtual Network service endpoints later but still use the Speech resource](#use-a-speech-resource-that-has-a-custom-domain-name-but-that-doesnt-have-allowed-virtual-networks).
+
+To set up a Speech resource for Virtual Network service endpoint scenarios, you need to:
+1. [Create a custom domain name for the Speech resource](#create-a-custom-domain-name).
+1. [Configure virtual networks and networking settings for the Speech resource](#configure-virtual-networks-and-the-speech-resource-networking-settings).
+1. [Adjust existing applications and solutions](#adjust-existing-applications-and-solutions).
+
+> [!NOTE]
+> Setting up and using Virtual Network service endpoints for Speech service is similar to setting up and using private endpoints. In this article, we refer to the corresponding sections of the [article on using private endpoints](speech-services-private-link.md) when the procedures are the same.
++
+This article describes how to use Virtual Network service endpoints with Speech service. For information about private endpoints, see [Use Speech service through a private endpoint](speech-services-private-link.md).
+
+## Create a custom domain name
+
+Virtual Network service endpoints require a [custom subdomain name for Azure AI services](../cognitive-services-custom-subdomains.md). Create a custom domain by following the [guidance](speech-services-private-link.md#create-a-custom-domain-name) in the private endpoint article. All warnings in the section also apply to Virtual Network service endpoints.
+
+## Configure virtual networks and the Speech resource networking settings
+
+You need to add all virtual networks that are allowed access via the service endpoint to the Speech resource networking properties.
+
+> [!NOTE]
+> To access a Speech resource via the Virtual Network service endpoint, you need to enable the `Microsoft.CognitiveServices` service endpoint type for the required subnets of your virtual network. Doing so will route all subnet traffic related to Azure AI services through the private backbone network. If you intend to access any other Azure AI services resources from the same subnet, make sure these resources are configured to allow your virtual network.
+>
+> If a virtual network isn't added as *allowed* in the Speech resource networking properties, it won't have access to the Speech resource via the service endpoint, even if the `Microsoft.CognitiveServices` service endpoint is enabled for the virtual network. And if the service endpoint is enabled but the virtual network isn't allowed, the Speech resource won't be accessible for the virtual network through a public IP address, no matter what the Speech resource's other network security settings are. That's because enabling the `Microsoft.CognitiveServices` endpoint routes all traffic related to Azure AI services through the private backbone network, and in this case the virtual network should be explicitly allowed to access the resource. This guidance applies for all Azure AI services resources, not just for Speech resources.
+
+1. Go to the [Azure portal](https://portal.azure.com/) and sign in to your Azure account.
+1. Select the Speech resource.
+1. In the **Resource Management** group in the left pane, select **Networking**.
+1. On the **Firewalls and virtual networks** tab, select **Selected Networks and Private Endpoints**.
+
+ > [!NOTE]
+ > To use Virtual Network service endpoints, you need to select the **Selected Networks and Private Endpoints** network security option. No other options are supported. If your scenario requires the **All networks** option, consider using [private endpoints](speech-services-private-link.md), which support all three network security options.
+
+5. Select **Add existing virtual network** or **Add new virtual network** and provide the required parameters. Select **Add** for an existing virtual network or **Create** for a new one. If you add an existing virtual network, the `Microsoft.CognitiveServices` service endpoint will automatically be enabled for the selected subnets. This operation can take up to 15 minutes. Also, see the note at the beginning of this section.
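+
+If you prefer to script this configuration, a similar result can be achieved with the Azure CLI. This is a minimal sketch with placeholder resource names; it enables the `Microsoft.CognitiveServices` service endpoint on an existing subnet and then allows that subnet on the Speech resource:
+
+```bash
+# Placeholder values - replace with your own resource names
+RESOURCE_GROUP="my-resource-group"
+VNET_NAME="my-vnet"
+SUBNET_NAME="my-subnet"
+SPEECH_RESOURCE="my-speech-resource"
+
+# Enable the Microsoft.CognitiveServices service endpoint on the subnet
+az network vnet subnet update \
+  --resource-group "$RESOURCE_GROUP" \
+  --vnet-name "$VNET_NAME" \
+  --name "$SUBNET_NAME" \
+  --service-endpoints Microsoft.CognitiveServices
+
+# Add the subnet to the Speech resource's selected networks
+az cognitiveservices account network-rule add \
+  --resource-group "$RESOURCE_GROUP" \
+  --name "$SPEECH_RESOURCE" \
+  --vnet-name "$VNET_NAME" \
+  --subnet "$SUBNET_NAME"
+```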
+
+### Enabling service endpoint for an existing virtual network
+
+As described in the previous section, when you configure a virtual network as *allowed* for the Speech resource, the `Microsoft.CognitiveServices` service endpoint is automatically enabled. If you later disable it, you need to re-enable it manually to restore the service endpoint access to the Speech resource (and to other Azure AI services resources):
+
+1. Go to the [Azure portal](https://portal.azure.com/) and sign in to your Azure account.
+1. Select the virtual network.
+1. In the **Settings** group in the left pane, select **Subnets**.
+1. Select the required subnet.
+1. A new panel appears on the right side of the window. In this panel, in the **Service Endpoints** section, select `Microsoft.CognitiveServices` in the **Services** list.
+1. Select **Save**.
+
+## Adjust existing applications and solutions
+
+A Speech resource that has a custom domain enabled interacts with the Speech service in a different way. This is true for a custom-domain-enabled Speech resource regardless of whether service endpoints are configured. Information in this section applies to both scenarios.
+
+### Use a Speech resource that has a custom domain name and allowed virtual networks
+
+In this scenario, the **Selected Networks and Private Endpoints** option is selected in the networking settings of the Speech resource and at least one virtual network is allowed. This scenario is equivalent to [using a Speech resource that has a custom domain name and a private endpoint enabled](speech-services-private-link.md#adjust-an-application-to-use-a-speech-resource-with-a-private-endpoint).
++
+### Use a Speech resource that has a custom domain name but that doesn't have allowed virtual networks
+
+In this scenario, private endpoints aren't enabled and one of these statements is true:
+
+- The **Selected Networks and Private Endpoints** option is selected in the networking settings of the Speech resource, but no allowed virtual networks are configured.
+- The **All networks** option is selected in the networking settings of the Speech resource.
+
+This scenario is equivalent to [using a Speech resource that has a custom domain name and that doesn't have private endpoints](speech-services-private-link.md#adjust-an-application-to-use-a-speech-resource-without-private-endpoints).
+++
+## Learn more
+
+* [Use Speech service through a private endpoint](speech-services-private-link.md)
+* [Azure Virtual Network service endpoints](../../virtual-network/virtual-network-service-endpoints-overview.md)
+* [Azure Private Link](../../private-link/private-link-overview.md)
+* [Speech SDK](speech-sdk.md)
+* [Speech to text REST API](rest-speech-to-text.md)
+* [Text to speech REST API](rest-text-to-speech.md)
ai-services Speech Services Private Link https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/speech-services-private-link.md
+
+ Title: How to use private endpoints with Speech service
+
+description: Learn how to use Speech service with private endpoints provided by Azure Private Link
++++++ Last updated : 04/07/2021++++
+# Use Speech service through a private endpoint
+
+[Azure Private Link](../../private-link/private-link-overview.md) lets you connect to services in Azure by using a [private endpoint](../../private-link/private-endpoint-overview.md). A private endpoint is a private IP address that's accessible only within a specific [virtual network](../../virtual-network/virtual-networks-overview.md) and subnet.
+
+This article explains how to set up and use Private Link and private endpoints with the Speech service. It also describes how to remove private endpoints later and still use the Speech resource.
+
+> [!NOTE]
+> Before you proceed, review [how to use virtual networks with Azure AI services](../cognitive-services-virtual-networks.md).
++
+Setting up a Speech resource for the private endpoint scenarios requires performing the following tasks:
+1. [Create a custom domain name](#create-a-custom-domain-name)
+1. [Turn on private endpoints](#turn-on-private-endpoints)
+1. [Adjust existing applications and solutions](#adjust-an-application-to-use-a-speech-resource-with-a-private-endpoint)
++
+This article describes how to use private endpoints with Speech service. For information about Virtual Network service endpoints, see [Use Speech service through a Virtual Network service endpoint](speech-service-vnet-service-endpoint.md).
+
+## Create a custom domain name
+> [!CAUTION]
+> A Speech resource with a custom domain name enabled uses a different way to interact with Speech service. You might have to adjust your application code for both of these scenarios: [with private endpoint](#adjust-an-application-to-use-a-speech-resource-with-a-private-endpoint) and [*without* private endpoint](#adjust-an-application-to-use-a-speech-resource-without-private-endpoints).
+>
++
+## Turn on private endpoints
+
+We recommend using the [private DNS zone](../../dns/private-dns-overview.md) attached to the virtual network with the necessary updates for the private endpoints.
+You can create a private DNS zone during the provisioning process.
+If you're using your own DNS server, you might also need to change your DNS configuration.
+
+Decide on a DNS strategy *before* you provision private endpoints for a production Speech resource. And test your DNS changes, especially if you use your own DNS server.
+
+Use one of the following articles to create private endpoints.
+These articles use a web app as a sample resource to make available through private endpoints.
+
+- [Create a private endpoint by using the Azure portal](../../private-link/create-private-endpoint-portal.md)
+- [Create a private endpoint by using Azure PowerShell](../../private-link/create-private-endpoint-powershell.md)
+- [Create a private endpoint by using Azure CLI](../../private-link/create-private-endpoint-cli.md)
+
+Use these parameters instead of the parameters in the article that you chose:
+
+| Setting | Value |
+|||
+| Resource type | **Microsoft.CognitiveServices/accounts** |
+| Resource | **\<your-speech-resource-name>** |
+| Target sub-resource | **account** |
+
+**DNS for private endpoints:** Review the general principles of [DNS for private endpoints in Azure AI services resources](../cognitive-services-virtual-networks.md#dns-changes-for-private-endpoints). Then confirm that your DNS configuration is working correctly by performing the checks described in the following sections.
+
+### Resolve DNS from the virtual network
+
+This check is *required*.
+
+Follow these steps to test the custom DNS entry from your virtual network:
+
+1. Log in to a virtual machine located in the virtual network to which you've attached your private endpoint.
+1. Open a Windows command prompt or a Bash shell, run `nslookup`, and confirm that it successfully resolves your resource's custom domain name.
+
+ ```dos
+ C:\>nslookup my-private-link-speech.cognitiveservices.azure.com
+ Server: UnKnown
+ Address: 168.63.129.16
+
+ Non-authoritative answer:
+ Name: my-private-link-speech.privatelink.cognitiveservices.azure.com
+ Address: 172.28.0.10
+ Aliases: my-private-link-speech.cognitiveservices.azure.com
+ ```
+
+1. Confirm that the IP address matches the IP address of your private endpoint.
+
+### Resolve DNS from other networks
+
+Perform this check only if you've turned on either the **All networks** option or the **Selected Networks and Private Endpoints** access option in the **Networking** section of your resource.
+
+If you plan to access the resource by using only a private endpoint, you can skip this section.
+
+1. Log in to a computer attached to a network that's allowed to access the resource.
+2. Open a Windows command prompt or Bash shell, run `nslookup`, and confirm that it successfully resolves your resource's custom domain name.
+
+ ```dos
+ C:\>nslookup my-private-link-speech.cognitiveservices.azure.com
+ Server: UnKnown
+ Address: fe80::1
+
+ Non-authoritative answer:
+ Name: vnetproxyv1-weu-prod.westeurope.cloudapp.azure.com
+ Address: 13.69.67.71
+ Aliases: my-private-link-speech.cognitiveservices.azure.com
+ my-private-link-speech.privatelink.cognitiveservices.azure.com
+ westeurope.prod.vnet.cog.trafficmanager.net
+ ```
+
+> [!NOTE]
+> The resolved IP address points to a virtual network proxy endpoint, which dispatches the network traffic to the private endpoint for the Speech resource. The behavior will be different for a resource with a custom domain name but *without* private endpoints. See [this section](#dns-configuration) for details.
+
+## Adjust an application to use a Speech resource with a private endpoint
+
+A Speech resource with a custom domain interacts with the Speech service in a different way.
+This is true for a custom-domain-enabled Speech resource both with and without private endpoints.
+Information in this section applies to both scenarios.
+
+Follow instructions in this section to adjust existing applications and solutions to use a Speech resource with a custom domain name and a private endpoint turned on.
+
+A Speech resource with a custom domain name and a private endpoint turned on uses a different way to interact with the Speech service. This section explains how to use such a resource with the Speech service REST APIs and the [Speech SDK](speech-sdk.md).
+
+> [!NOTE]
+> A Speech resource without private endpoints that uses a custom domain name also has a special way of interacting with the Speech service.
+> This way differs from the scenario of a Speech resource that uses a private endpoint.
+> This is important to consider because you may decide to remove private endpoints later.
+> See [Adjust an application to use a Speech resource without private endpoints](#adjust-an-application-to-use-a-speech-resource-without-private-endpoints) later in this article.
+
+### Speech resource with a custom domain name and a private endpoint: Usage with the REST APIs
+
+We'll use `my-private-link-speech.cognitiveservices.azure.com` as a sample Speech resource DNS name (custom domain) for this section.
+
+Speech service has REST APIs for [Speech to text](rest-speech-to-text.md) and [Text to speech](rest-text-to-speech.md). Consider the following information for the private-endpoint-enabled scenario.
+
+Speech to text has two REST APIs. Each API serves a different purpose, uses different endpoints, and requires a different approach when you're using it in the private-endpoint-enabled scenario.
+
+The Speech to text REST APIs are:
+- [Speech to text REST API](rest-speech-to-text.md), which is used for [Batch transcription](batch-transcription.md) and [Custom Speech](custom-speech-overview.md).
+- [Speech to text REST API for short audio](rest-speech-to-text-short.md), which is used for real-time speech to text.
+
+Usage of the Speech to text REST API for short audio and the Text to speech REST API in the private endpoint scenario is the same. It's equivalent to the [Speech SDK case](#speech-resource-with-a-custom-domain-name-and-a-private-endpoint-usage-with-the-speech-sdk) described later in this article.
+
+Speech to text REST API uses a different set of endpoints, so it requires a different approach for the private-endpoint-enabled scenario.
+
+The next subsections describe both cases.
+
+#### Speech to text REST API
+
+Usually, Speech resources use [Azure AI services regional endpoints](../cognitive-services-custom-subdomains.md#is-there-a-list-of-regional-endpoints) for communicating with the [Speech to text REST API](rest-speech-to-text.md). These endpoints use the following naming format: <p/>`{region}.api.cognitive.microsoft.com`.
+
+This is a sample request URL:
+
+```http
+https://westeurope.api.cognitive.microsoft.com/speechtotext/v3.1/transcriptions
+```
+
+> [!NOTE]
+> See [this article](sovereign-clouds.md) for Azure Government and Azure China endpoints.
+
+After you turn on a custom domain for a Speech resource (which is necessary for private endpoints), that resource will use the following DNS name pattern for the basic REST API endpoint: <p/>`{your custom name}.cognitiveservices.azure.com`
+
+That means that in our example, the REST API endpoint name will be: <p/>`my-private-link-speech.cognitiveservices.azure.com`
+
+And the sample request URL needs to be converted to:
+```http
+https://my-private-link-speech.cognitiveservices.azure.com/speechtotext/v3.1/transcriptions
+```
+This URL should be reachable from the virtual network with the private endpoint attached (provided the [correct DNS resolution](#resolve-dns-from-the-virtual-network)).
+
+After you turn on a custom domain name for a Speech resource, you typically replace the host name in all request URLs with the new custom domain host name. All other parts of the request (like the path `/speechtotext/v3.1/transcriptions` in the earlier example) remain the same.
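+
+For example, a minimal sketch (in Python, using the `requests` package) of listing batch transcriptions through the custom domain endpoint might look like the following. The key placeholder is an assumption for illustration; the path is the same one used for the regional endpoint.
+
+```python
+import requests
+
+# Custom domain from this article's example; replace the key placeholder with your own resource key.
+endpoint = "https://my-private-link-speech.cognitiveservices.azure.com/speechtotext/v3.1/transcriptions"
+headers = {"Ocp-Apim-Subscription-Key": "YOUR_SPEECH_RESOURCE_KEY"}
+
+response = requests.get(endpoint, headers=headers)
+print(response.status_code)
+print(response.json())
+```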
+
+> [!TIP]
+> Some customers develop applications that use the region part of the regional endpoint's DNS name (for example, to send the request to the Speech resource deployed in the particular Azure region).
+>
+> A custom domain for a Speech resource contains *no* information about the region where the resource is deployed. So the application logic described earlier will *not* work and needs to be altered.
+
+#### Speech to text REST API for short audio and Text to speech REST API
+
+The [Speech to text REST API for short audio](rest-speech-to-text-short.md) and the [Text to speech REST API](rest-text-to-speech.md) use two types of endpoints:
+- [Azure AI services regional endpoints](../cognitive-services-custom-subdomains.md#is-there-a-list-of-regional-endpoints) for communicating with the Azure AI services REST API to obtain an authorization token
+- Special endpoints for all other operations
+
+> [!NOTE]
+> See [this article](sovereign-clouds.md) for Azure Government and Azure China endpoints.
+
+The detailed description of the special endpoints and how their URL should be transformed for a private-endpoint-enabled Speech resource is provided in [this subsection](#construct-endpoint-url) about usage with the Speech SDK. The same principle described for the SDK applies for the Speech to text REST API for short audio and the Text to speech REST API.
+
+Get familiar with the material in the subsection mentioned in the previous paragraph and see the following example. The example describes the Text to speech REST API. Usage of the Speech to text REST API for short audio is fully equivalent.
+
+> [!NOTE]
+> When you're using the Speech to text REST API for short audio and the Text to speech REST API in private endpoint scenarios, use a resource key passed through the `Ocp-Apim-Subscription-Key` header. (See details for [Speech to text REST API for short audio](rest-speech-to-text-short.md#request-headers) and [Text to speech REST API](rest-text-to-speech.md#request-headers).)
+>
+> Using an authorization token and passing it to the special endpoint via the `Authorization` header works *only* if you've turned on the **All networks** access option in the **Networking** section of your Speech resource. In other cases, you get either a `Forbidden` or a `BadRequest` error when you try to obtain an authorization token.
+
+**Text to speech REST API usage example**
+
+We'll use West Europe as a sample Azure region and `my-private-link-speech.cognitiveservices.azure.com` as a sample Speech resource DNS name (custom domain). The custom domain name `my-private-link-speech.cognitiveservices.azure.com` in our example belongs to the Speech resource created in the West Europe region.
+
+To get the list of the voices supported in the region, perform the following request:
+
+```http
+https://westeurope.tts.speech.microsoft.com/cognitiveservices/voices/list
+```
+See more details in the [Text to speech REST API documentation](rest-text-to-speech.md).
+
+For the private-endpoint-enabled Speech resource, the endpoint URL for the same operation needs to be modified. The same request will look like this:
+
+```http
+https://my-private-link-speech.cognitiveservices.azure.com/tts/cognitiveservices/voices/list
+```
+See a detailed explanation in the [Construct endpoint URL](#construct-endpoint-url) subsection for the Speech SDK.
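+
+As a rough sketch (in Python, with a placeholder resource key), the same call through the private-endpoint-enabled custom domain could look like this. Per the earlier note, the resource key is passed in the `Ocp-Apim-Subscription-Key` header instead of an authorization token.
+
+```python
+import requests
+
+# Sketch only: custom domain from this article's example; note the "tts" prefix in the URL path.
+url = "https://my-private-link-speech.cognitiveservices.azure.com/tts/cognitiveservices/voices/list"
+headers = {"Ocp-Apim-Subscription-Key": "YOUR_SPEECH_RESOURCE_KEY"}
+
+voices = requests.get(url, headers=headers).json()
+print(f"{len(voices)} voices returned")
+```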
+
+### Speech resource with a custom domain name and a private endpoint: Usage with the Speech SDK
+
+Using the Speech SDK with a custom domain name and private-endpoint-enabled Speech resources requires you to review and likely change your application code.
+
+We'll use `my-private-link-speech.cognitiveservices.azure.com` as a sample Speech resource DNS name (custom domain) for this section.
+
+#### Construct endpoint URL
+
+Usually in SDK scenarios (as well as in the Speech to text REST API for short audio and Text to speech REST API scenarios), Speech resources use the dedicated regional endpoints for different service offerings. The DNS name format for these endpoints is:
+
+`{region}.{speech service offering}.speech.microsoft.com`
+
+An example DNS name is:
+
+`westeurope.stt.speech.microsoft.com`
+
+All possible values for the region (first element of the DNS name) are listed in [Speech service supported regions](regions.md). (See [this article](sovereign-clouds.md) for Azure Government and Azure China endpoints.) The following table presents the possible values for the Speech service offering (second element of the DNS name):
+
+| DNS name value | Speech service offering |
+|-|-|
+| `commands` | [Custom Commands](custom-commands.md) |
+| `convai` | [Conversation Transcription](conversation-transcription.md) |
+| `s2s` | [Speech Translation](speech-translation.md) |
+| `stt` | [Speech to text](speech-to-text.md) |
+| `tts` | [Text to speech](text-to-speech.md) |
+| `voice` | [Custom Voice](how-to-custom-voice.md) |
+
+So the earlier example (`westeurope.stt.speech.microsoft.com`) stands for a Speech to text endpoint in West Europe.
+
+Private-endpoint-enabled endpoints communicate with Speech service via a special proxy. Because of that, *you must change the endpoint connection URLs*.
+
+A "standard" endpoint URL looks like: <p/>`{region}.{speech service offering}.speech.microsoft.com/{URL path}`
+
+A private endpoint URL looks like: <p/>`{your custom name}.cognitiveservices.azure.com/{speech service offering}/{URL path}`
+
+**Example 1.** An application is communicating by using the following URL (speech recognition using the base model for US English in West Europe):
+
+```
+wss://westeurope.stt.speech.microsoft.com/speech/recognition/conversation/cognitiveservices/v1?language=en-US
+```
+
+To use it in the private-endpoint-enabled scenario when the custom domain name of the Speech resource is `my-private-link-speech.cognitiveservices.azure.com`, you must modify the URL like this:
+
+```
+wss://my-private-link-speech.cognitiveservices.azure.com/stt/speech/recognition/conversation/cognitiveservices/v1?language=en-US
+```
+
+Notice the details:
+
+- The host name `westeurope.stt.speech.microsoft.com` is replaced by the custom domain host name `my-private-link-speech.cognitiveservices.azure.com`.
+- The second element of the original DNS name (`stt`) becomes the first element of the URL path and precedes the original path. So the original URL `/speech/recognition/conversation/cognitiveservices/v1?language=en-US` becomes `/stt/speech/recognition/conversation/cognitiveservices/v1?language=en-US`.
+
+**Example 2.** An application uses the following URL to synthesize speech in West Europe:
+```
+wss://westeurope.tts.speech.microsoft.com/cognitiveservices/websocket/v1
+```
+
+The following equivalent URL uses a private endpoint, where the custom domain name of the Speech resource is `my-private-link-speech.cognitiveservices.azure.com`:
+
+```
+wss://my-private-link-speech.cognitiveservices.azure.com/tts/cognitiveservices/websocket/v1
+```
+
+The same principle as in Example 1 applies, but this time the key element is `tts`.
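+
+The following Python snippet is a minimal sketch of this rewrite rule. It assumes the sample custom domain name used in these examples, and the helper name is made up for illustration.
+
+```python
+from urllib.parse import urlsplit, urlunsplit
+
+CUSTOM_HOST = "my-private-link-speech.cognitiveservices.azure.com"
+
+def to_private_endpoint_url(standard_url: str) -> str:
+    """Rewrite {region}.{offering}.speech.microsoft.com/{path} to {custom}/{offering}/{path}."""
+    parts = urlsplit(standard_url)
+    offering = parts.hostname.split(".")[1]  # for example, "stt" or "tts"
+    return urlunsplit((parts.scheme, CUSTOM_HOST, f"/{offering}{parts.path}", parts.query, parts.fragment))
+
+print(to_private_endpoint_url(
+    "wss://westeurope.stt.speech.microsoft.com/speech/recognition/conversation/cognitiveservices/v1?language=en-US"))
+# wss://my-private-link-speech.cognitiveservices.azure.com/stt/speech/recognition/conversation/cognitiveservices/v1?language=en-US
+```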
+
+#### Modifying applications
+
+Follow these steps to modify your code:
+
+1. Determine the application endpoint URL:
+
+ - [Turn on logging for your application](how-to-use-logging.md) and run it to log activity.
+ - In the log file, search for `SPEECH-ConnectionUrl`. In matching lines, the `value` parameter contains the full URL that your application used to reach the Speech service.
+
+ Example:
+
+ ```
+ (114917): 41ms SPX_DBG_TRACE_VERBOSE: property_bag_impl.cpp:138 ISpxPropertyBagImpl::LogPropertyAndValue: this=0x0000028FE4809D78; name='SPEECH-ConnectionUrl'; value='wss://westeurope.stt.speech.microsoft.com/speech/recognition/conversation/cognitiveservices/v1?traffictype=spx&language=en-US'
+ ```
+
+ So the URL that the application used in this example is:
+
+ ```
+ wss://westeurope.stt.speech.microsoft.com/speech/recognition/conversation/cognitiveservices/v1?language=en-US
+ ```
+
+2. Create a `SpeechConfig` instance by using a full endpoint URL:
+
+ 1. Modify the endpoint that you just determined, as described in the earlier [Construct endpoint URL](#construct-endpoint-url) section.
+
+ 1. Modify how you create the instance of `SpeechConfig`. Most likely, your application is using something like this:
+ ```csharp
+ var config = SpeechConfig.FromSubscription(speechKey, azureRegion);
+ ```
+ This won't work for a private-endpoint-enabled Speech resource because of the host name and URL changes that we described in the previous sections. If you try to run your existing application without any modifications by using the key of a private-endpoint-enabled resource, you'll get an authentication error (401).
+
+ To make it work, modify how you instantiate the `SpeechConfig` class and use "from endpoint"/"with endpoint" initialization. Suppose we have the following two variables defined:
+ - `speechKey` contains the key of the private-endpoint-enabled Speech resource.
+ - `endPoint` contains the full *modified* endpoint URL (using the type required by the corresponding programming language). In our example, this variable should contain:
+ ```
+ wss://my-private-link-speech.cognitiveservices.azure.com/stt/speech/recognition/conversation/cognitiveservices/v1?language=en-US
+ ```
+
+ Create a `SpeechConfig` instance:
+ ```csharp
+ var config = SpeechConfig.FromEndpoint(endPoint, speechKey);
+ ```
+ ```cpp
+ auto config = SpeechConfig::FromEndpoint(endPoint, speechKey);
+ ```
+ ```java
+ SpeechConfig config = SpeechConfig.fromEndpoint(endPoint, speechKey);
+ ```
+ ```python
+ import azure.cognitiveservices.speech as speechsdk
+ speech_config = speechsdk.SpeechConfig(endpoint=endPoint, subscription=speechKey)
+ ```
+ ```objectivec
+ SPXSpeechConfiguration *speechConfig = [[SPXSpeechConfiguration alloc] initWithEndpoint:endPoint subscription:speechKey];
+ ```
+
+> [!TIP]
+> The query parameters specified in the endpoint URI are not changed, even if they're set by other APIs. For example, if the recognition language is defined in the URI as query parameter `language=en-US`, and is also set to `ru-RU` via the corresponding property, the language setting in the URI is used. The effective language is then `en-US`.
+>
+> Parameters set in the endpoint URI always take precedence. Other APIs can override only parameters that are not specified in the endpoint URI.
+
+After this modification, your application should work with the private-endpoint-enabled Speech resources. We're working on more seamless support of private endpoint scenarios.
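+
+As a small illustration of the precedence behavior described in the preceding tip (the endpoint URL and key placeholder are only examples):
+
+```python
+import azure.cognitiveservices.speech as speechsdk
+
+# The language in the endpoint URI query string takes precedence over the property set below.
+endpoint = "wss://my-private-link-speech.cognitiveservices.azure.com/stt/speech/recognition/conversation/cognitiveservices/v1?language=en-US"
+speech_config = speechsdk.SpeechConfig(endpoint=endpoint, subscription="YOUR_SPEECH_RESOURCE_KEY")
+speech_config.speech_recognition_language = "ru-RU"  # Has no effect here: en-US from the URI is used.
+```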
+++
+## Adjust an application to use a Speech resource without private endpoints
+
+In this article, we've pointed out several times that enabling a custom domain for a Speech resource is *irreversible*. Such a resource will use a different way of communicating with Speech service, compared to the ones that are using [regional endpoint names](../cognitive-services-custom-subdomains.md#is-there-a-list-of-regional-endpoints).
+
+This section explains how to use a Speech resource with a custom domain name but *without* any private endpoints with the Speech service REST APIs and [Speech SDK](speech-sdk.md). This might be a resource that was once used in a private endpoint scenario, but then had its private endpoints deleted.
+
+### DNS configuration
+
+Recall how the custom domain DNS name of a private-endpoint-enabled Speech resource is [resolved from public networks](#resolve-dns-from-other-networks). In that case, the resolved IP address points to a virtual network proxy endpoint, which dispatches the network traffic to the private-endpoint-enabled Azure AI services resource.
+
+However, when *all* private endpoints of the resource are removed (or right after you enable the custom domain name), the CNAME record of the Speech resource is reprovisioned. It then points to the IP address of the corresponding [Azure AI services regional endpoint](../cognitive-services-custom-subdomains.md#is-there-a-list-of-regional-endpoints).
+
+So the output of the `nslookup` command will look like this:
+```dos
+C:\>nslookup my-private-link-speech.cognitiveservices.azure.com
+Server: UnKnown
+Address: fe80::1
+
+Non-authoritative answer:
+Name: apimgmthskquihpkz6d90kmhvnabrx3ms3pdubscpdfk1tsx3a.cloudapp.net
+Address: 13.93.122.1
+Aliases: my-private-link-speech.cognitiveservices.azure.com
+ westeurope.api.cognitive.microsoft.com
+ cognitiveweprod.trafficmanager.net
+ cognitiveweprod.azure-api.net
+ apimgmttmdjylckcx6clmh2isu2wr38uqzm63s8n4ub2y3e6xs.trafficmanager.net
+ cognitiveweprod-westeurope-01.regional.azure-api.net
+```
+Compare it with the output from [this section](#resolve-dns-from-other-networks).
+
+### Speech resource with a custom domain name and without private endpoints: Usage with the REST APIs
+
+#### Speech to text REST API
+
+Speech to text REST API usage is fully equivalent to the case of [private-endpoint-enabled Speech resources](#speech-to-text-rest-api).
+
+#### Speech to text REST API for short audio and Text to speech REST API
+
+In this case, usage of the Speech to text REST API for short audio and the Text to speech REST API doesn't differ from the general case, with one exception (see the following note). Use both APIs as described in the [Speech to text REST API for short audio](rest-speech-to-text-short.md) and [Text to speech REST API](rest-text-to-speech.md) documentation.
+
+> [!NOTE]
+> When you're using the Speech to text REST API for short audio and Text to speech REST API in custom domain scenarios, use a Speech resource key passed through the `Ocp-Apim-Subscription-Key` header. (See details for [Speech to text REST API for short audio](rest-speech-to-text-short.md#request-headers) and [Text to speech REST API](rest-text-to-speech.md#request-headers).)
+>
+> Using an authorization token and passing it to the special endpoint via the `Authorization` header works *only* if you've turned on the **All networks** access option in the **Networking** section of your Speech resource. In other cases, you get either a `Forbidden` or a `BadRequest` error when you try to obtain an authorization token.
+
+### Speech resource with a custom domain name and without private endpoints: Usage with the Speech SDK
+
+Using the Speech SDK with custom-domain-enabled Speech resources *without* private endpoints is equivalent to the general case as described in the [Speech SDK documentation](speech-sdk.md).
+
+If you modified your code to use a [private-endpoint-enabled Speech resource](#speech-resource-with-a-custom-domain-name-and-a-private-endpoint-usage-with-the-speech-sdk), consider the following.
+
+In the section on [private-endpoint-enabled Speech resources](#speech-resource-with-a-custom-domain-name-and-a-private-endpoint-usage-with-the-speech-sdk), we explained how to determine the endpoint URL, modify it, and make it work through "from endpoint"/"with endpoint" initialization of the `SpeechConfig` class instance.
+
+However, if you try to run the same application after all private endpoints are removed (allowing some time for the corresponding DNS record to be reprovisioned), you'll get a `Not Found` error (404). The reason is that the [DNS record](#dns-configuration) now points to the regional Azure AI services endpoint instead of the virtual network proxy, and URL paths like `/stt/speech/recognition/conversation/cognitiveservices/v1?language=en-US` aren't found there.
+
+You need to roll back your application to the standard instantiation of `SpeechConfig` in the style of the following code:
+
+```csharp
+var config = SpeechConfig.FromSubscription(speechKey, azureRegion);
+```
++
+## Pricing
+
+For pricing details, see [Azure Private Link pricing](https://azure.microsoft.com/pricing/details/private-link).
+
+## Learn more
+
+* [Use Speech service through a Virtual Network service endpoint](speech-service-vnet-service-endpoint.md)
+* [Azure Private Link](../../private-link/private-link-overview.md)
+* [Azure Virtual Network service endpoints](../../virtual-network/virtual-network-service-endpoints-overview.md)
+* [Speech SDK](speech-sdk.md)
+* [Speech to text REST API](rest-speech-to-text.md)
+* [Text to speech REST API](rest-text-to-speech.md)
ai-services Speech Services Quotas And Limits https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/speech-services-quotas-and-limits.md
+
+ Title: Speech service quotas and limits
+
+description: Quick reference, detailed description, and best practices on the quotas and limits for the Speech service in Azure AI services.
++++++ Last updated : 02/17/2023+++
+# Speech service quotas and limits
+
+This article contains a quick reference and a detailed description of the quotas and limits for the Speech service in Azure AI services. The information applies to all [pricing tiers](https://azure.microsoft.com/pricing/details/cognitive-services/speech-services/) of the service. It also contains some best practices to avoid request throttling.
+
+For the free (F0) pricing tier, see also the monthly allowances at the [pricing page](https://azure.microsoft.com/pricing/details/cognitive-services/speech-services/).
+
+## Quotas and limits reference
+
+The following sections provide you with a quick guide to the quotas and limits that apply to the Speech service.
+
+For information about adjustable quotas for Standard (S0) Speech resources, see [additional explanations](#detailed-description-quota-adjustment-and-best-practices), [best practices](#general-best-practices-to-mitigate-throttling-during-autoscaling), and [adjustment instructions](#speech-to-text-increase-real-time-speech-to-text-concurrent-request-limit). The quotas and limits for Free (F0) Speech resources aren't adjustable.
+
+### Speech to text quotas and limits per resource
+
+This section describes speech to text quotas and limits per Speech resource. Unless otherwise specified, the limits aren't adjustable.
+
+#### Real-time speech to text and speech translation
+
+You can use real-time speech to text with the [Speech SDK](speech-sdk.md) or the [Speech to text REST API for short audio](rest-speech-to-text-short.md).
+
+> [!IMPORTANT]
+> These limits apply to concurrent real-time speech to text requests and speech translation requests combined. For example, if you have 60 concurrent speech to text requests and 40 concurrent speech translation requests, you'll reach the limit of 100 concurrent requests.
+
+| Quota | Free (F0) | Standard (S0) |
+|--|--|--|
+| Concurrent request limit - base model endpoint | 1 <br/><br/>This limit isn't adjustable. | 100 (default value)<br/><br/>The rate is adjustable for Standard (S0) resources. See [additional explanations](#detailed-description-quota-adjustment-and-best-practices), [best practices](#general-best-practices-to-mitigate-throttling-during-autoscaling), and [adjustment instructions](#speech-to-text-increase-real-time-speech-to-text-concurrent-request-limit). |
+| Concurrent request limit - custom endpoint | 1 <br/><br/>This limit isn't adjustable. | 100 (default value)<br/><br/>The rate is adjustable for Standard (S0) resources. See [additional explanations](#detailed-description-quota-adjustment-and-best-practices), [best practices](#general-best-practices-to-mitigate-throttling-during-autoscaling), and [adjustment instructions](#speech-to-text-increase-real-time-speech-to-text-concurrent-request-limit). |
+
+#### Batch transcription
+
+| Quota | Free (F0) | Standard (S0) |
+|--|--|--|
+| [Speech to text REST API](rest-speech-to-text.md) limit | Not available for F0 | 300 requests per minute |
+| Max audio input file size | N/A | 1 GB |
+| Max input blob size (for example, can contain more than one file in a zip archive). Note the file size limit from the preceding row. | N/A | 2.5 GB |
+| Max blob container size | N/A | 5 GB |
+| Max number of blobs per container | N/A | 10000 |
+| Max number of files per transcription request (when you're using multiple content URLs as input). | N/A | 1000 |
+| Max audio length for transcriptions with diarization enabled. | N/A | 240 minutes per file |
+
+#### Model customization
+
+The limits in this table apply per Speech resource when you create a Custom Speech model.
+
+| Quota | Free (F0) | Standard (S0) |
+|--|--|--|
+| REST API limit | 300 requests per minute | 300 requests per minute |
+| Max number of speech datasets | 2 | 500 |
+| Max acoustic dataset file size for data import | 2 GB | 2 GB |
+| Max language dataset file size for data import | 200 MB | 1.5 GB |
+| Max pronunciation dataset file size for data import | 1 KB | 1 MB |
+| Max text size when you're using the `text` parameter in the [Models_Create](https://westcentralus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/operations/Models_Create/) API request | 200 KB | 500 KB |
+
+### Text to speech quotas and limits per resource
+
+This section describes text to speech quotas and limits per Speech resource. Unless otherwise specified, the limits aren't adjustable.
+
+#### Common text to speech quotas and limits
+
+| Quota | Free (F0) | Standard (S0) |
+|--|--|--|
+| Maximum number of transactions per time period for prebuilt neural voices and custom neural voices. | 20 transactions per 60 seconds<br/><br/>This limit isn't adjustable. | 200 transactions per second (TPS) (default value)<br/><br/>The rate is adjustable up to 1000 TPS for Standard (S0) resources. See [additional explanations](#detailed-description-quota-adjustment-and-best-practices), [best practices](#general-best-practices-to-mitigate-throttling-during-autoscaling), and [adjustment instructions](#text-to-speech-increase-concurrent-request-limit). |
+| Max audio length produced per request | 10 min | 10 min |
+| Max total number of distinct `<voice>` and `<audio>` tags in SSML | 50 | 50 |
+| Max SSML message size per turn for websocket | 64 KB | 64 KB |
+
+#### Custom Neural Voice
+
+| Quota | Free (F0)| Standard (S0) |
+|--|--|--|
+| Max number of transactions per second (TPS) | Not available for F0 | 200 transactions per second (TPS) (default value) |
+| Max number of datasets | N/A | 500 |
+| Max number of simultaneous dataset uploads | N/A | 5 |
+| Max data file size for data import per dataset | N/A | 2 GB |
+| Upload of long audios or audios without script | N/A | Yes |
+| Max number of simultaneous model trainings | N/A | 3 |
+| Max number of custom endpoints | N/A | 50 |
+
+#### Audio Content Creation tool
+
+| Quota | Free (F0)| Standard (S0) |
+|--|--|--|
+| File size (plain text in SSML)<sup>1</sup> | 3,000 characters per file | 20,000 characters per file |
+| File size (lexicon file)<sup>2</sup> | 3,000 characters per file | 20,000 characters per file |
+| Billable characters in SSML| 15,000 characters per file | 100,000 characters per file |
+| Export to audio library | 1 concurrent task | N/A |
+
+<sup>1</sup> The limit only applies to plain text in SSML and doesn't include tags.
+
+<sup>2</sup> The limit includes all text, including tags. The characters of the lexicon file aren't charged; only the lexicon elements in SSML are counted as billable characters. To learn more, see [billable characters](text-to-speech.md#billable-characters).
+
+### Speaker recognition quotas and limits per resource
+
+Speaker recognition is limited to 20 transactions per second (TPS).
+
+## Detailed description, quota adjustment, and best practices
+
+Some of the Speech service quotas are adjustable. This section provides additional explanations, best practices, and adjustment instructions.
+
+The following quotas are adjustable for Standard (S0) resources. The Free (F0) request limits aren't adjustable.
+
+- Speech to text [concurrent request limit](#real-time-speech-to-text-and-speech-translation) for base model endpoint and custom endpoint
+- Text to speech [maximum number of transactions per time period](#text-to-speech-quotas-and-limits-per-resource) for prebuilt neural voices and custom neural voices
+- Speech translation [concurrent request limit](#real-time-speech-to-text-and-speech-translation)
+
+Before requesting a quota increase (where applicable), ensure that it's necessary. Speech service uses autoscaling technologies to bring the required computational resources in on-demand mode. At the same time, Speech service tries to keep your costs low by not maintaining an excessive amount of hardware capacity.
+
+Let's look at an example. Suppose that your application receives response code 429, which indicates that there are too many requests. Your application receives this response even though your workload is within the limits defined by the [Quotas and limits reference](#quotas-and-limits-reference). The most likely explanation is that the Speech service is scaling up to meet your demand but hasn't reached the required scale yet, so it doesn't immediately have enough resources to serve the request. In most cases, this throttled state is transient.
+
+### General best practices to mitigate throttling during autoscaling
+
+To minimize issues related to throttling, it's a good idea to use the following techniques:
+
+- Implement retry logic in your application, as sketched in the example after this list.
+- Avoid sharp changes in the workload. Increase the workload gradually. For example, let's say your application is using text to speech, and your current workload is 5 TPS. The next second, you increase the load to 20 TPS (that is, four times more). Speech service immediately starts scaling up to fulfill the new load, but is unable to scale as needed within one second. Some of the requests will get response code 429 (too many requests).
+- Test different load increase patterns. For more information, see the [workload pattern example](#example-of-a-workload-pattern-best-practice).
+- Create additional Speech service resources in *different* regions, and distribute the workload among them. (Creating multiple Speech service resources in the same region doesn't affect performance, because all resources are served by the same backend cluster.)
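+
+For example, a minimal retry sketch (illustrative only, using the Python `requests` package rather than the Speech SDK) might look like this:
+
+```python
+import random
+import time
+
+import requests
+
+def call_with_retry(url, headers, payload, max_retries=5):
+    """Retry on HTTP 429 with exponential backoff and jitter (a sketch, not a drop-in implementation)."""
+    delay = 1.0
+    for _ in range(max_retries):
+        response = requests.post(url, headers=headers, data=payload)
+        if response.status_code != 429:
+            return response
+        # Honor Retry-After if the service returns it; otherwise back off exponentially.
+        wait = float(response.headers.get("Retry-After", delay))
+        time.sleep(wait + random.uniform(0, 0.5))
+        delay = min(delay * 2, 30)
+    return response
+```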
+
+The next sections describe specific cases of adjusting quotas.
+
+### Speech to text: increase real-time speech to text concurrent request limit
+
+By default, the number of concurrent real-time speech to text and speech translation [requests combined](#real-time-speech-to-text-and-speech-translation) is limited to 100 per resource in the base model, and 100 per custom endpoint in the custom model. For the standard pricing tier, you can increase this amount. Before submitting the request, ensure that you're familiar with the material discussed earlier in this article, such as the best practices to mitigate throttling.
+
+>[!NOTE]
+> Concurrent request limits for base and custom models need to be adjusted separately. You can have a Speech service resource that's associated with many custom endpoints hosting many custom model deployments. As needed, the limit adjustments per custom endpoint must be requested separately.
+
+Increasing the limit of concurrent requests doesn't directly affect your costs. The Speech service uses a pay-as-you-go model, so you pay only for what you use. The limit defines how high the service can scale before it starts throttling your requests.
+
+You aren't able to see the existing value of the concurrent request limit parameter in the Azure portal, the command-line tools, or API requests. To verify the existing value, create an Azure support request.
+
+>[!NOTE]
+>[Speech containers](speech-container-howto.md) don't require increases of the concurrent request limit, because containers are constrained only by the CPUs of the hardware they are hosted on. Speech containers do, however, have their own capacity limitations that should be taken into account. For more information, see the [Speech containers FAQ](./speech-container-howto.md).
+
+#### Have the required information ready
+
+- For the base model:
+ - Speech resource ID
+ - Region
+- For the custom model:
+ - Region
+ - Custom endpoint ID
+
+How to get information for the base model:
+
+1. Go to the [Azure portal](https://portal.azure.com/).
+1. Select the Speech service resource for which you would like to increase the concurrency request limit.
+1. From the **Resource Management** group, select **Properties**.
+1. Copy and save the values of the following fields:
+ - **Resource ID**
+ - **Location** (your endpoint region)
+
+How to get information for the custom model:
+
+1. Go to the [Speech Studio](https://aka.ms/speechstudio/customspeech) portal.
+1. Sign in if necessary, and go to **Custom Speech**.
+1. Select your project, and go to **Deployment**.
+1. Select the required endpoint.
+1. Copy and save the values of the following fields:
+ - **Service Region** (your endpoint region)
+ - **Endpoint ID**
+
+#### Create and submit a support request
+
+To initiate an increase of the concurrent request limit for your resource, or to check the current limit, submit a support request. Here's how:
+
+1. Ensure you have the required information listed in the previous section.
+1. Go to the [Azure portal](https://portal.azure.com/).
+1. Select the Speech service resource for which you would like to increase (or to check) the concurrency request limit.
+1. In the **Support + troubleshooting** group, select **New support request**. A new window will appear, with auto-populated information about your Azure subscription and Azure resource.
+1. In **Summary**, describe what you want (for example, "Increase speech to text concurrency request limit").
+1. In **Problem type**, select **Quota or Subscription issues**.
+1. In **Problem subtype**, select either:
+ - **Quota or concurrent requests increase** for an increase request.
+ - **Quota or usage validation** to check the existing limit.
+1. Select **Next: Solutions**. Proceed further with the request creation.
+1. On the **Details** tab, in the **Description** field, enter the following:
+ - A note that the request is about the speech to text quota.
+ - Choose either the base or custom model.
+ - The Azure resource information you [collected previously](#have-the-required-information-ready).
+ - Any other required information.
+1. On the **Review + create** tab, select **Create**.
+1. Note the support request number in Azure portal notifications. You'll be contacted shortly about your request.
+
+### Example of a workload pattern best practice
+
+Here's a general example of a good approach to take. It's meant only as a template that you can adjust as necessary for your own use.
+
+Suppose that a Speech service resource has the concurrent request limit set to 300. Start the workload from 20 concurrent connections, and increase the load by 20 concurrent connections every 90-120 seconds. Monitor the service responses, and implement logic that falls back (reduces the load) if you get too many requests (response code 429). Then retry the load increase in one minute, and if it still doesn't work, try again in two minutes. Use a pattern of 1-2-4-4 minutes for the intervals.
+
+Generally, it's a good idea to test the workload and the workload patterns before going to production.
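+
+The following Python sketch outlines that ramp-up pattern. The `run_load` helper is hypothetical: it stands in for whatever code drives the given number of concurrent connections and reports whether any requests were throttled.
+
+```python
+import time
+
+CONCURRENCY_LIMIT = 300        # the resource's concurrent request limit in this example
+STEP = 20                      # increase the load by 20 connections at a time
+RETRY_MINUTES = [1, 2, 4, 4]   # fallback intervals after throttling
+
+def run_load(concurrent_connections: int) -> bool:
+    """Hypothetical placeholder: drive the workload and return True if no 429 responses were seen."""
+    ...
+
+current = STEP
+retries = list(RETRY_MINUTES)
+while current <= CONCURRENCY_LIMIT:
+    if run_load(current):
+        time.sleep(105)                       # hold roughly 90-120 seconds before the next increase
+        current += STEP
+        retries = list(RETRY_MINUTES)
+    else:
+        current = max(STEP, current - STEP)   # reduce the load after throttling
+        time.sleep((retries.pop(0) if retries else 4) * 60)
+```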
+
+### Text to speech: increase concurrent request limit
+
+By default, the number of text to speech transactions per time period is limited as described in the [quotas and limits reference](#text-to-speech-quotas-and-limits-per-resource). For the standard pricing tier, you can increase this limit. Before submitting the request, ensure that you're familiar with the material discussed earlier in this article, such as the best practices to mitigate throttling.
+
+Increasing the limit of concurrent requests doesn't directly affect your costs. The Speech service uses a pay-as-you-go model, so you pay only for what you use. The limit defines how high the service can scale before it starts throttling your requests.
+
+You aren't able to see the existing value of the concurrent request limit parameter in the Azure portal, the command-line tools, or API requests. To verify the existing value, create an Azure support request.
+
+>[!NOTE]
+>[Speech containers](speech-container-howto.md) don't require increases of the concurrent request limit, because containers are constrained only by the CPUs of the hardware they are hosted on.
+
+#### Prepare the required information
+
+To create an increase request, you need to provide your information.
+
+- For the prebuilt voice:
+ - Speech resource ID
+ - Region
+- For the custom voice:
+ - Deployment region
+ - Custom endpoint ID
+
+How to get information for the prebuilt voice:
+
+1. Go to the [Azure portal](https://portal.azure.com/).
+1. Select the Speech service resource for which you would like to increase the concurrency request limit.
+1. From the **Resource Management** group, select **Properties**.
+1. Copy and save the values of the following fields:
+ - **Resource ID**
+ - **Location** (your endpoint region)
+
+How to get information for the custom voice:
+
+1. Go to the [Speech Studio](https://aka.ms/speechstudio/customvoice) portal.
+1. Sign in if necessary, and go to **Custom Voice**.
+1. Select your project, and go to **Deploy model**.
+1. Select the required endpoint.
+1. Copy and save the values of the following fields:
+ - **Service Region** (your endpoint region)
+ - **Endpoint ID**
+
+#### Create and submit a support request
+
+To initiate an increase of the concurrent request limit for your resource, or to check the current limit, submit a support request. Here's how:
+
+1. Ensure you have the required information listed in the previous section.
+1. Go to the [Azure portal](https://portal.azure.com/).
+1. Select the Speech service resource for which you would like to increase (or to check) the concurrency request limit.
+1. In the **Support + troubleshooting** group, select **New support request**. A new window will appear, with auto-populated information about your Azure subscription and Azure resource.
+1. In **Summary**, describe what you want (for example, "Increase text to speech concurrency request limit").
+1. In **Problem type**, select **Quota or Subscription issues**.
+1. In **Problem subtype**, select either:
+ - **Quota or concurrent requests increase** for an increase request.
+ - **Quota or usage validation** to check the existing limit.
+1. On the **Recommended solution** tab, select **Next**.
+1. On the **Additional details** tab, fill in all the required items. And in the **Details** field, enter the following:
+ - A note that the request is about the text to speech quota.
+ - Choose either the prebuilt voice or custom voice.
+ - The Azure resource information you [collected previously](#prepare-the-required-information).
+ - Any other required information.
+1. On the **Review + create** tab, select **Create**.
+1. Note the support request number in Azure portal notifications. You'll be contacted shortly about your request.
ai-services Speech Ssml Phonetic Sets https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/speech-ssml-phonetic-sets.md
+
+ Title: Speech phonetic alphabets - Speech service
+
+description: This article presents Speech service phonetic alphabet and International Phonetic Alphabet (IPA) examples.
++++++ Last updated : 09/16/2022+++
+# SSML phonetic alphabets
+
+Phonetic alphabets are used with the [Speech Synthesis Markup Language (SSML)](speech-synthesis-markup.md) to improve the pronunciation of text to speech voices. To learn when and how to use each alphabet, see [Use phonemes to improve pronunciation](speech-synthesis-markup-pronunciation.md#phoneme-element).
+
+Speech service supports the [International Phonetic Alphabet (IPA)](https://en.wikipedia.org/wiki/International_Phonetic_Alphabet) suprasegmentals that are listed here. You set `ipa` as the `alphabet` in [SSML](speech-synthesis-markup-pronunciation.md#phoneme-element).
+
+|`ipa` | Symbol | Note|
+|-|-|-|
+| `ˈ` | Primary stress | Don’t use a single quotation mark ( ‘ or ' ), even though it looks similar. |
+| `ˌ` | Secondary stress | Don’t use a comma ( , ), even though it looks similar. |
+| `.` | Syllable boundary | |
+| `ː` | Long | Don’t use a colon ( : ), even though it looks similar. |
+| `‿` | Linking | |
+
+> [!TIP]
+> You can use [the international phonetic alphabet keyboard](https://www.internationalphoneticalphabet.org/html-ipa-keyboard-v1/keyboard/) to create the correct `ipa` suprasegmentals.
+
+For some locales, Speech service defines its own phonetic alphabets, which ordinarily map to the [International Phonetic Alphabet (IPA)](https://en.wikipedia.org/wiki/International_Phonetic_Alphabet). The eight locales that support the Microsoft Speech API (SAPI, or `sapi`) are en-US, fr-FR, de-DE, es-ES, ja-JP, zh-CN, zh-HK, and zh-TW. For those eight locales, you set `sapi` or `ipa` as the `alphabet` in [SSML](speech-synthesis-markup-pronunciation.md#phoneme-element).
+
+See the sections in this article for the phonemes that are specific to each locale.
+
+> [!NOTE]
+> The following tables list viseme IDs corresponding to phonemes for different locales. When viseme ID is 0, it indicates silence.
+
+## ar-EG/ar-SA
+
+## bg-BG
+
+## ca-ES
+
+## cs-CZ
+
+## da-DK
+
+## de-DE/de-CH/de-AT
+
+## el-GR
+
+## en-GB/en-IE/en-AU
+
+## :::no-loc text="en-US/en-CA":::
+
+## es-ES
+
+## es-MX
+
+## fi-FI
+
+## fr-FR/fr-CA/fr-CH
+
+## he-IL
+
+## hr-HR
+
+## hu-HU
+
+## id-ID
+
+## it-IT
+
+## ja-JP
+
+## ko-KR
+
+## ms-MY
+
+## nb-NO
+
+## nl-NL/nl-BE
+
+## pl-PL
+
+## pt-BR
+
+## pt-PT
+
+## ro-RO
+
+## ru-RU
+
+## sk-SK
+
+## sl-SI
+
+## sv-SE
+
+## th-TH
+
+## tr-TR
+
+## vi-VN
+
+## zh-CN
+
+## zh-HK
+
+## zh-TW
+
+## Map X-SAMPA to IPA
++
ai-services Speech Studio Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/speech-studio-overview.md
+
+ Title: "Speech Studio overview - Speech service"
+
+description: Speech Studio is a set of UI-based tools for building and integrating features from Speech service in your applications.
++++++ Last updated : 09/25/2022+++
+# What is Speech Studio?
+
+[Speech Studio](https://aka.ms/speechstudio/) is a set of UI-based tools for building and integrating features from Azure AI services Speech service in your applications. You create projects in Speech Studio by using a no-code approach, and then reference those assets in your applications by using the [Speech SDK](speech-sdk.md), the [Speech CLI](spx-overview.md), or the REST APIs.
+
+> [!TIP]
+> You can try speech to text and text to speech in [Speech Studio](https://aka.ms/speechstudio/) without signing up or writing any code.
+
+## Speech Studio scenarios
+
+Explore, try out, and view sample code for some common use cases.
+
+* [Captioning](https://aka.ms/speechstudio/captioning): Choose a sample video clip to see real-time or offline processed captioning results. Learn how to synchronize captions with your input audio, apply profanity filters, get partial results, apply customizations, and identify spoken languages for multilingual scenarios. For more information, see the [captioning quickstart](captioning-quickstart.md).
+
+* [Call Center](https://aka.ms/speechstudio/callcenter): View a demonstration on how to use the Language and Speech services to analyze call center conversations. Transcribe calls in real-time or process a batch of calls, redact personally identifying information, and extract insights such as sentiment to help with your call center use case. For more information, see the [call center quickstart](call-center-quickstart.md).
+
+For a demonstration of these scenarios in Speech Studio, view this [introductory video](https://youtu.be/mILVerU6DAw).
+> [!VIDEO https://www.youtube.com/embed/mILVerU6DAw]
+
+## Speech Studio features
+
+In Speech Studio, the following Speech service features are available as project types:
+
+* [Real-time speech to text](https://aka.ms/speechstudio/speechtotexttool): Quickly test speech to text by dragging audio files here without having to use any code. This is a demo tool for seeing how speech to text works on your audio samples. To explore the full functionality, see [What is speech to text](speech-to-text.md).
+
+* [Batch speech to text](https://aka.ms/speechstudio/batchspeechtotext): Quickly test batch transcription capabilities to transcribe a large amount of audio in storage and receive results asynchronously. To learn more about batch speech to text, see [Batch speech to text overview](batch-transcription.md).
+
+* [Custom Speech](https://aka.ms/speechstudio/customspeech): Create speech recognition models that are tailored to specific vocabulary sets and styles of speaking. In contrast to the base speech recognition model, Custom Speech models become part of your unique competitive advantage because they're not publicly accessible. To get started with uploading sample audio to create a Custom Speech model, see [Upload training and testing datasets](how-to-custom-speech-upload-data.md).
+
+* [Pronunciation assessment](https://aka.ms/speechstudio/pronunciationassessment): Evaluate speech pronunciation and give speakers feedback on the accuracy and fluency of spoken audio. Speech Studio provides a sandbox for testing this feature quickly, without code. To use the feature with the Speech SDK in your applications, see the [Pronunciation assessment](how-to-pronunciation-assessment.md) article.
+
+* [Speech Translation](https://aka.ms/speechstudio/speechtranslation): Quickly test and translate speech into other languages of your choice with low latency. To explore the full functionality, see [What is speech translation](speech-translation.md).
+
+* [Voice Gallery](https://aka.ms/speechstudio/voicegallery): Build apps and services that speak naturally. Choose from a broad portfolio of [languages, voices, and variants](language-support.md?tabs=tts). Bring your scenarios to life with highly expressive and human-like neural voices.
+
+* [Custom Voice](https://aka.ms/speechstudio/customvoice): Create custom, one-of-a-kind voices for text to speech. You supply audio files and create matching transcriptions in Speech Studio, and then use the custom voices in your applications. To create and use custom voices via endpoints, see [Create and use your voice model](how-to-custom-voice-create-voice.md).
+
+* [Audio Content Creation](https://aka.ms/speechstudio/audiocontentcreation): A no-code approach for text to speech synthesis. You can use the output audio as-is, or as a starting point for further customization. You can build highly natural audio content for a variety of scenarios, such as audiobooks, news broadcasts, video narrations, and chat bots. For more information, see the [Audio Content Creation](how-to-audio-content-creation.md) documentation.
+
+* [Custom Keyword](https://aka.ms/speechstudio/customkeyword): A custom keyword is a word or short phrase that you can use to voice-activate a product. You create a custom keyword in Speech Studio, and then generate a binary file to [use with the Speech SDK](custom-keyword-basics.md) in your applications.
+
+* [Custom Commands](https://aka.ms/speechstudio/customcommands): Easily build rich, voice-command apps that are optimized for voice-first interaction experiences. Custom Commands provides a code-free authoring experience in Speech Studio, an automatic hosting model, and relatively lower complexity. The feature helps you focus on building the best solution for your voice-command scenarios. For more information, see the [Develop Custom Commands applications](how-to-develop-custom-commands-application.md) guide. Also see [Integrate with a client application by using the Speech SDK](how-to-custom-commands-setup-speech-sdk.md).
+
+## Next steps
+
+* [Explore Speech Studio](https://speech.microsoft.com)
ai-services Speech Synthesis Markup Pronunciation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/speech-synthesis-markup-pronunciation.md
+
+ Title: Pronunciation with Speech Synthesis Markup Language (SSML) - Speech service
+
+description: Learn about Speech Synthesis Markup Language (SSML) elements to improve pronunciation.
++++++ Last updated : 11/30/2022+++
+# Pronunciation with SSML
+
+You can use Speech Synthesis Markup Language (SSML) with text to speech to specify how the speech is pronounced. For example, you can use SSML with phonemes and a custom lexicon to improve pronunciation. You can also use SSML to define how a word or mathematical expression is pronounced.
+
+Refer to the sections below for details about how to use SSML elements to improve pronunciation. For more information about SSML syntax, see [SSML document structure and events](speech-synthesis-markup-structure.md).
+
+## phoneme element
+
+The `phoneme` element is used for phonetic pronunciation in SSML documents. Always provide human-readable speech as a fallback.
+
+Phonetic alphabets are composed of phones, which are made up of letters, numbers, or characters, sometimes in combination. Each phone describes a unique sound of speech. This is in contrast to the Latin alphabet, where any letter might represent multiple spoken sounds. Consider the different `en-US` pronunciations of the letter "c" in the words "candy" and "cease" or the different pronunciations of the letter combination "th" in the words "thing" and "those."
+
+> [!NOTE]
+> For a list of locales that support phonemes, see footnotes in the [language support](language-support.md?tabs=tts) table.
+
+The `phoneme` element's attributes are described in the following table.
+
+| Attribute | Description | Required or optional |
+| - | - | - |
+| `alphabet` | The phonetic alphabet to use when you synthesize the pronunciation of the string in the `ph` attribute. The string that specifies the alphabet must be specified in lowercase letters. The following options are the possible alphabets that you can specify:<ul><li>`ipa` &ndash; See [SSML phonetic alphabets](speech-ssml-phonetic-sets.md)</li><li>`sapi` &ndash; See [SSML phonetic alphabets](speech-ssml-phonetic-sets.md)</li><li>`ups` &ndash; See [Universal Phone Set](https://documentation.help/Microsoft-Speech-Platform-SDK-11/17509a49-cae7-41f5-b61d-07beaae872ea.htm)</li><li>`x-sampa` &ndash; See [SSML phonetic alphabets](speech-ssml-phonetic-sets.md#map-x-sampa-to-ipa)</li></ul><br>The alphabet applies only to the `phoneme` in the element. | Optional |
+| `ph` | A string containing phones that specify the pronunciation of the word in the `phoneme` element. If the specified string contains unrecognized phones, text to speech rejects the entire SSML document and produces none of the speech output specified in the document.<br/><br/>For `ipa`, to stress a syllable by placing the stress symbol before that syllable, you must mark all syllables for the word. Otherwise, the syllable before the stress symbol is stressed. For `sapi`, to stress a syllable, place the stress symbol after that syllable, whether or not all syllables of the word are marked.| Required |
+
+### phoneme examples
+
+The supported values for attributes of the `phoneme` element were [described previously](#phoneme-element). In the first two examples, the values of `ph="tə.ˈmeɪ.toʊ"` or `ph="təmeɪˈtoʊ"` are specified to stress the syllable `meɪ`.
+
+```xml
+<speak version="1.0" xmlns="http://www.w3.org/2001/10/synthesis" xml:lang="en-US">
+ <voice name="en-US-JennyNeural">
+ <phoneme alphabet="ipa" ph="tə.ˈmeɪ.toʊ"> tomato </phoneme>
+ </voice>
+</speak>
+```
+```xml
+<speak version="1.0" xmlns="http://www.w3.org/2001/10/synthesis" xml:lang="en-US">
+ <voice name="en-US-JennyNeural">
+ <phoneme alphabet="ipa" ph="təmeɪˈtoʊ"> tomato </phoneme>
+ </voice>
+</speak>
+```
+```xml
+<speak version="1.0" xmlns="https://www.w3.org/2001/10/synthesis" xml:lang="en-US">
+ <voice name="en-US-JennyNeural">
+ <phoneme alphabet="sapi" ph="iy eh n y uw eh s"> en-US </phoneme>
+ </voice>
+</speak>
+```
+
+```xml
+<speak version="1.0" xmlns="http://www.w3.org/2001/10/synthesis" xml:lang="en-US">
+ <voice name="en-US-JennyNeural">
+ <s>His name is Mike <phoneme alphabet="ups" ph="JH AU"> Zhou </phoneme></s>
+ </voice>
+</speak>
+```
+
+```xml
+<speak version="1.0" xmlns="http://www.w3.org/2001/10/synthesis" xml:lang="en-US">
+ <voice name="en-US-JennyNeural">
+ <phoneme alphabet='x-sampa' ph='he."lou'>hello</phoneme>
+ </voice>
+</speak>
+```
+
+## Custom lexicon
+
+You can define how single entities (such as a company name, a medical term, or an emoji) are read in SSML by using the [phoneme](#phoneme-element) and [sub](#sub-element) elements. To define how multiple entities are read, create an XML-structured custom lexicon file. Then you upload the custom lexicon XML file and reference it with the SSML `lexicon` element.
+
+> [!NOTE]
+> For a list of locales that support custom lexicon, see footnotes in the [language support](language-support.md?tabs=tts) table.
+>
+> The `lexicon` element is not supported by the [Long Audio API](migrate-to-batch-synthesis.md#text-inputs). For long-form text to speech, use the [batch synthesis API](batch-synthesis.md) (Preview) instead.
+
+The `lexicon` element's attributes are described in the following table.
+
+| Attribute | Description | Required or optional |
+| | - | - |
+| `uri` | The URI of the publicly accessible custom lexicon XML file with either the `.xml` or `.pls` file extension. Using [Azure Blob Storage](../../storage/blobs/storage-quickstart-blobs-portal.md) is recommended but not required. For more information about the custom lexicon file, see [Pronunciation Lexicon Specification (PLS) Version 1.0](https://www.w3.org/TR/pronunciation-lexicon/).| Required |
+
+### Custom lexicon examples
+
+The supported values for attributes of the `lexicon` element were [described previously](#custom-lexicon).
+
+After you've published your custom lexicon, you can reference it from your SSML. The following SSML example references a custom lexicon that was uploaded to `https://www.example.com/customlexicon.xml`.
+
+```xml
+<speak version="1.0" xmlns="http://www.w3.org/2001/10/synthesis"
+ xmlns:mstts="http://www.w3.org/2001/mstts"
+ xml:lang="en-US">
+ <voice name="en-US-JennyNeural">
+ <lexicon uri="https://www.example.com/customlexicon.xml"/>
+ BTW, we will be there probably at 8:00 tomorrow morning.
+ Could you help leave a message to Robert Benigni for me?
+ </voice>
+</speak>
+```
+
+### Custom lexicon file
+
+To define how multiple entities are read, you can define them in a custom lexicon XML file with either the `.xml` or `.pls` file extension.
+
+> [!NOTE]
+> The custom lexicon file is a valid XML document, but it cannot be used as an SSML document.
+
+Here are some limitations of the custom lexicon file:
+
+- **File size**: The custom lexicon file size is limited to a maximum of 100 KB. If the file size exceeds the 100-KB limit, the synthesis request fails.
+- **Lexicon cache refresh**: The custom lexicon is cached with the URI as the key on text to speech when it's first loaded. A lexicon with the same URI won't be reloaded within 15 minutes, so changes to the custom lexicon can take up to 15 minutes to take effect.
+
+The supported elements and attributes of a custom lexicon XML file are described in the [Pronunciation Lexicon Specification (PLS) Version 1.0](https://www.w3.org/TR/pronunciation-lexicon/). Here are some examples of the supported elements and attributes:
+
+- The `lexicon` element contains at least one `lexeme` element. The `lexicon` element requires an `xml:lang` attribute to indicate the locale that it applies to. One custom lexicon is limited to one locale by design, so applying it to a different locale won't work. The `lexicon` element also has an `alphabet` attribute to indicate the alphabet used in the lexicon. The possible values are `ipa` and `x-microsoft-sapi`.
+- Each `lexeme` element contains at least one `grapheme` element and one or more `grapheme`, `alias`, and `phoneme` elements. The `lexeme` element is case sensitive in the custom lexicon. For example, if you only provide a phoneme for the `lexeme` "Hello", it won't work for the `lexeme` "hello".
+- The `grapheme` element contains text that describes the [orthography](https://www.w3.org/TR/pronunciation-lexicon/#term-Orthography).
+- The `alias` elements are used to indicate the pronunciation of an acronym or an abbreviated term.
+- The `phoneme` element provides text that describes how the `lexeme` is pronounced. The syllable boundary is '.' in the IPA alphabet. The `phoneme` element can't contain white space when you use the IPA alphabet.
+- When the `alias` and `phoneme` elements are provided with the same `grapheme` element, `alias` has higher priority.
+
+Microsoft provides a [validation tool for the custom lexicon](https://github.com/Azure-Samples/Cognitive-Speech-TTS/tree/master/CustomLexiconValidation) that helps you find errors (with detailed error messages) in the custom lexicon file. Using the tool is recommended before you use the custom lexicon XML file in production with the Speech service.
+
+#### Custom lexicon file examples
+
+The following XML example (not SSML) would be contained in a custom lexicon `.xml` file. When you use this custom lexicon, "BTW" is read as "By the way." "Benigni" is read with the provided IPA "bɛˈniːnji."
+
+```xml
+<?xml version="1.0" encoding="UTF-8"?>
+<lexicon version="1.0"
+ xmlns="http://www.w3.org/2005/01/pronunciation-lexicon"
+ xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
+ xsi:schemaLocation="http://www.w3.org/2005/01/pronunciation-lexicon
+ http://www.w3.org/TR/2007/CR-pronunciation-lexicon-20071212/pls.xsd"
+ alphabet="ipa" xml:lang="en-US">
+ <lexeme>
+ <grapheme>BTW</grapheme>
+ <alias>By the way</alias>
+ </lexeme>
+ <lexeme>
+ <grapheme> Benigni </grapheme>
+ <phoneme> bɛˈniːnji</phoneme>
+ </lexeme>
+ <lexeme>
+ <grapheme>😀</grapheme>
+ <alias>test emoji</alias>
+ </lexeme>
+</lexicon>
+```
+
+You can't directly set the pronunciation of a phrase by using the custom lexicon. If you need to set the pronunciation for an acronym or an abbreviated term, first provide an `alias`, and then associate the `phoneme` with that `alias`. For example:
+
+```xml
+<?xml version="1.0" encoding="UTF-8"?>
+<lexicon version="1.0"
+ xmlns="http://www.w3.org/2005/01/pronunciation-lexicon"
+ xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
+ xsi:schemaLocation="http://www.w3.org/2005/01/pronunciation-lexicon
+ http://www.w3.org/TR/2007/CR-pronunciation-lexicon-20071212/pls.xsd"
+ alphabet="ipa" xml:lang="en-US">
+ <lexeme>
+ <grapheme>Scotland MV</grapheme>
+ <alias>ScotlandMV</alias>
+ </lexeme>
+ <lexeme>
+ <grapheme>ScotlandMV</grapheme>
+ <phoneme>ˈskɒtlənd.ˈmiːdiəm.weɪv</phoneme>
+ </lexeme>
+</lexicon>
+```
+
+You could also directly provide your expected `alias` for the acronym or abbreviated term. For example:
+
+```xml
+ <lexeme>
+ <grapheme>Scotland MV</grapheme>
+ <alias>Scotland Media Wave</alias>
+ </lexeme>
+```
+
+The preceding custom lexicon XML file examples use the IPA alphabet, which is also known as the IPA phone set. We suggest that you use the IPA because it's the international standard. Some IPA characters have both a "precomposed" and a "decomposed" version when they're represented in Unicode. The custom lexicon supports only the decomposed Unicode forms.
+
+The Speech service defines a phonetic set for these locales: `en-US`, `fr-FR`, `de-DE`, `es-ES`, `ja-JP`, `zh-CN`, `zh-HK`, and `zh-TW`. For more information about the Speech service phonetic alphabet, see the [Speech service phonetic sets](speech-ssml-phonetic-sets.md).
+
+You can use `x-microsoft-sapi` as the value for the `alphabet` attribute with custom lexicons, as demonstrated here:
+
+```xml
+<?xml version="1.0" encoding="UTF-8"?>
+<lexicon version="1.0"
+ xmlns="http://www.w3.org/2005/01/pronunciation-lexicon"
+ xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
+ xsi:schemaLocation="http://www.w3.org/2005/01/pronunciation-lexicon
+ http://www.w3.org/TR/2007/CR-pronunciation-lexicon-20071212/pls.xsd"
+ alphabet="x-microsoft-sapi" xml:lang="en-US">
+ <lexeme>
+ <grapheme>BTW</grapheme>
+ <alias> By the way </alias>
+ </lexeme>
+ <lexeme>
+ <grapheme> Benigni </grapheme>
+ <phoneme> b eh 1 - n iy - n y iy </phoneme>
+ </lexeme>
+</lexicon>
+```
+
+## say-as element
+
+The `say-as` element indicates the content type, such as number or date, of the element's text. This element provides guidance to the speech synthesis engine about how to pronounce the text.
+
+The `say-as` element's attributes are described in the following table.
+
+| Attribute | Description | Required or optional |
+| - | - | - |
+| `interpret-as` | Indicates the content type of an element's text. For a list of types, see the following table.| Required |
+| `format`| Provides additional information about the precise formatting of the element's text for content types that might have ambiguous formats. SSML defines formats for content types that use them. See the following table. | Optional |
+| `detail` | Indicates the level of detail to be spoken. For example, this attribute might request that the speech synthesis engine pronounce punctuation marks. There are no standard values defined for `detail`. | Optional |
+
+The following content types are supported for the `interpret-as` and `format` attributes. Include the `format` attribute only if the `format` column isn't empty in the following table.
+
+| interpret-as | format | Interpretation |
+| - | - | - |
+| `characters`, `spell-out` | | The text is spoken as individual letters (spelled out). The speech synthesis engine pronounces:<br /><br />`<say-as interpret-as="characters">test</say-as>`<br /><br />As "T E S T." |
+| `cardinal`, `number` | None| The text is spoken as a cardinal number. The speech synthesis engine pronounces:<br /><br />`There are <say-as interpret-as="cardinal">10</say-as> options`<br /><br />As "There are ten options."|
+| `ordinal` | None | The text is spoken as an ordinal number. The speech synthesis engine pronounces:<br /><br />`Select the <say-as interpret-as="ordinal">3rd</say-as> option`<br /><br />As "Select the third option."|
+| `number_digit` | None | The text is spoken as a sequence of individual digits. The speech synthesis engine pronounces:<br /><br />`<say-as interpret-as="number_digit">123456789</say-as>`<br /><br />As "1 2 3 4 5 6 7 8 9." |
+| `fraction` | None | The text is spoken as a fractional number. The speech synthesis engine pronounces:<br /><br /> `<say-as interpret-as="fraction">3/8</say-as> of an inch`<br /><br />As "three eighths of an inch." |
+| `date` | dmy, mdy, ymd, ydm, ym, my, md, dm, d, m, y | The text is spoken as a date. The `format` attribute specifies the date's format (*d=day, m=month, and y=year*). The speech synthesis engine pronounces:<br /><br />`Today is <say-as interpret-as="date" format="mdy">10-19-2016</say-as>`<br /><br />As "Today is October nineteenth two thousand sixteen." |
+| `time` | hms12, hms24 | The text is spoken as a time. The `format` attribute specifies whether the time is specified by using a 12-hour clock (hms12) or a 24-hour clock (hms24). Use a colon to separate numbers representing hours, minutes, and seconds. Here are some valid time examples: 12:35, 1:14:32, 08:15, and 02:50:45. The speech synthesis engine pronounces:<br /><br />`The train departs at <say-as interpret-as="time" format="hms12">4:00am</say-as>`<br /><br />As "The train departs at four A M." |
+| `duration` | hms, hm, ms | The text is spoken as a duration. The `format` attribute specifies the duration's format (*h=hour, m=minute, and s=second*). The speech synthesis engine pronounces:<br /><br />`<say-as interpret-as="duration">01:18:30</say-as>`<br /><br /> As "one hour eighteen minutes and thirty seconds".<br />Pronounces:<br /><br />`<say-as interpret-as="duration" format="ms">01:18</say-as>`<br /><br /> As "one minute and eighteen seconds".<br />This tag is supported only in English and Spanish. |
+| `telephone` | None | The text is spoken as a telephone number. The speech synthesis engine pronounces:<br /><br />`The number is <say-as interpret-as="telephone">(888) 555-1212</say-as>`<br /><br />As "The number is area code eight eight eight five five five one two one two." |
+| `currency` | None | The text is spoken as a currency. The speech synthesis engine pronounces:<br /><br />`<say-as interpret-as="currency">99.9 USD</say-as>`<br /><br />As "ninety-nine US dollars and ninety cents."|
+| `address`| None | The text is spoken as an address. The speech synthesis engine pronounces:<br /><br />`I'm at <say-as interpret-as="address">150th CT NE, Redmond, WA</say-as>`<br /><br />As "I'm at 150th Court Northeast Redmond Washington."|
+| `name` | None | The text is spoken as a person's name. The speech synthesis engine pronounces:<br /><br />`<say-as interpret-as="name">ED</say-as>`<br /><br />As [æd]. <br />In Chinese names, some characters are pronounced differently when they appear in a family name. For example, the speech synthesis engine says 仇 in <br /><br />`<say-as interpret-as="name">仇先生</say-as>`<br /><br /> As [qiú] instead of [chóu]. |
+
+### say-as examples
+
+The supported values for attributes of the `say-as` element were [described previously](#say-as-element).
+
+The speech synthesis engine speaks the following example as "Your first request was for one room on October nineteenth twenty ten with early arrival at twelve thirty five PM."
+
+```xml
+<speak version="1.0" xmlns="http://www.w3.org/2001/10/synthesis" xml:lang="en-US">
+ <voice name="en-US-JennyNeural">
+ <p>
+ Your <say-as interpret-as="ordinal"> 1st </say-as> request was for <say-as interpret-as="cardinal"> 1 </say-as> room
+ on <say-as interpret-as="date" format="mdy"> 10/19/2010 </say-as>, with early arrival at <say-as interpret-as="time" format="hms12"> 12:35pm </say-as>.
+ </p>
+ </voice>
+</speak>
+```
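+
+As an additional minimal sketch drawing on the preceding table, the following SSML combines the `currency` and `telephone` content types in a single request. The engine is expected to read the amount and the phone number roughly as described in the corresponding table rows.
+
+```xml
+<speak version="1.0" xmlns="http://www.w3.org/2001/10/synthesis" xml:lang="en-US">
+    <voice name="en-US-JennyNeural">
+        <p>
+            The total is <say-as interpret-as="currency">99.9 USD</say-as>.
+            You can call us at <say-as interpret-as="telephone">(888) 555-1212</say-as>.
+        </p>
+    </voice>
+</speak>
+```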
+
+## sub element
+
+Use the `sub` element to indicate that the alias attribute's text value should be pronounced instead of the element's enclosed text. In this way, the SSML contains both a spoken and written form.
+
+The `sub` element's attributes are described in the following table.
+
+| Attribute | Description | Required or optional |
+| - | - | - |
+| `alias` | The text value that should be pronounced instead of the element's enclosed text. | Required |
+
+### sub examples
+
+The supported values for attributes of the `sub` element were [described previously](#sub-element).
+
+The speech synthesis engine speaks the following example as "World Wide Web Consortium."
+
+```xml
+<speak version="1.0" xmlns="http://www.w3.org/2001/10/synthesis" xml:lang="en-US">
+ <voice name="en-US-JennyNeural">
+ <sub alias="World Wide Web Consortium">W3C</sub>
+ </voice>
+</speak>
+```
+
+## Pronunciation with MathML
+
+The Mathematical Markup Language (MathML) is an XML-compliant markup language that describes mathematical content and structure. The Speech service can use MathML as input text to properly pronounce mathematical notations in the output audio.
+
+> [!NOTE]
+> The MathML elements (tags) are currently supported by all neural voices in the `en-US` and `en-AU` locales.
+
+All elements from the [MathML 2.0](https://www.w3.org/TR/MathML2/) and [MathML 3.0](https://www.w3.org/TR/MathML3/) specifications are supported, except the MathML 3.0 [Elementary Math](https://www.w3.org/TR/MathML3/chapter3.html#presm.elementary) elements.
+
+Take note of these MathML elements and attributes:
+- The `xmlns` attribute in `<math xmlns="http://www.w3.org/1998/Math/MathML">` is optional.
+- The `semantics`, `annotation`, and `annotation-xml` elements don't output speech, so they're ignored.
+- If an element isn't recognized, it will be ignored, and the child elements within it will still be processed.
+
+MathML entities aren't supported by XML syntax, so you must use the corresponding [Unicode characters](https://www.w3.org/2003/entities/2007/htmlmathml.json) to represent them. For example, the entity `&copy;` should be represented by its Unicode code point `&#x00A9;`; otherwise, an error occurs.
+
+### MathML examples
+
+The text to speech output for this example is "a squared plus b squared equals c squared".
+
+```xml
+<speak version='1.0' xmlns='http://www.w3.org/2001/10/synthesis' xmlns:mstts='http://www.w3.org/2001/mstts' xml:lang='en-US'><voice name='en-US-JennyNeural'><math xmlns='http://www.w3.org/1998/Math/MathML'><msup><mi>a</mi><mn>2</mn></msup><mo>+</mo><msup><mi>b</mi><mn>2</mn></msup><mo>=</mo><msup><mi>c</mi><mn>2</mn></msup></math></voice></speak>
+```
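+
+As a second, hedged sketch (the exact spoken phrasing isn't documented here and may vary), the following SSML embeds a simple MathML fraction built from the `mfrac` and `mn` presentation elements. The expected output is along the lines of "Add one half cup of sugar."
+
+```xml
+<speak version='1.0' xmlns='http://www.w3.org/2001/10/synthesis' xml:lang='en-US'>
+    <voice name='en-US-JennyNeural'>
+        Add <math xmlns='http://www.w3.org/1998/Math/MathML'><mfrac><mn>1</mn><mn>2</mn></mfrac></math> cup of sugar.
+    </voice>
+</speak>
+```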
+
+## Next steps
+
+- [SSML overview](speech-synthesis-markup.md)
+- [SSML document structure and events](speech-synthesis-markup-structure.md)
+- [Language support: Voices, locales, languages](language-support.md?tabs=tts)
ai-services Speech Synthesis Markup Structure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/speech-synthesis-markup-structure.md
+
+ Title: Speech Synthesis Markup Language (SSML) document structure and events - Speech service
+
+description: Learn about the Speech Synthesis Markup Language (SSML) document structure.
+ Last updated : 11/30/2022
+# SSML document structure and events
+
+Speech Synthesis Markup Language (SSML), together with the input text, determines the structure, content, and other characteristics of the text to speech output. For example, you can use SSML to define a paragraph, a sentence, a break or a pause, or silence. You can wrap text with event tags such as bookmark or viseme that can be processed later by your application.
+
+Refer to the sections below for details about how to structure elements in the SSML document.
+
+## Document structure
+
+The Speech service implementation of SSML is based on the World Wide Web Consortium's [Speech Synthesis Markup Language Version 1.0](https://www.w3.org/TR/2004/REC-speech-synthesis-20040907/). The elements supported by the Speech service can differ from the W3C standard.
+
+Each SSML document is created with SSML elements or tags. These elements are used to adjust the voice, style, pitch, prosody, volume, and more.
+
+Here's a subset of the basic structure and syntax of an SSML document:
+
+```xml
+<speak version="1.0" xmlns="http://www.w3.org/2001/10/synthesis" xmlns:mstts="https://www.w3.org/2001/mstts" xml:lang="string">
+ <mstts:backgroundaudio src="string" volume="string" fadein="string" fadeout="string"/>
+ <voice name="string" effect="string">
+ <audio src="string"></audio>
+ <bookmark mark="string"/>
+ <break strength="string" time="string" />
+ <emphasis level="value"></emphasis>
+ <lang xml:lang="string"></lang>
+ <lexicon uri="string"/>
+ <math xmlns="http://www.w3.org/1998/Math/MathML"></math>
+ <mstts:audioduration value="string"/>
+ <mstts:express-as style="string" styledegree="value" role="string"></mstts:express-as>
+ <mstts:silence type="string" value="string"/>
+ <mstts:viseme type="string"/>
+ <p></p>
+ <phoneme alphabet="string" ph="string"></phoneme>
+ <prosody pitch="value" contour="value" range="value" rate="value" volume="value"></prosody>
+ <s></s>
+ <say-as interpret-as="string" format="string" detail="string"></say-as>
+ <sub alias="string"></sub>
+ </voice>
+</speak>
+```
+
+Some examples of the content that's allowed in each element are described in the following list (a short nesting example follows the list):
+- `audio`: The body of the `audio` element can contain plain text or SSML markup that's spoken if the audio file is unavailable or unplayable. The `audio` element can also contain text and the following elements: `audio`, `break`, `p`, `s`, `phoneme`, `prosody`, `say-as`, and `sub`.
+- `bookmark`: This element can't contain text or any other elements.
+- `break`: This element can't contain text or any other elements.
+- `emphasis`: This element can contain text and the following elements: `audio`, `break`, `emphasis`, `lang`, `phoneme`, `prosody`, `say-as`, and `sub`.
+- `lang`: This element can contain all other elements except `mstts:backgroundaudio`, `voice`, and `speak`.
+- `lexicon`: This element can't contain text or any other elements.
+- `math`: This element can only contain text and MathML elements.
+- `mstts:audioduration`: This element can't contain text or any other elements.
+- `mstts:backgroundaudio`: This element can't contain text or any other elements.
+- `mstts:express-as`: This element can contain text and the following elements: `audio`, `break`, `emphasis`, `lang`, `phoneme`, `prosody`, `say-as`, and `sub`.
+- `mstts:silence`: This element can't contain text or any other elements.
+- `mstts:viseme`: This element can't contain text or any other elements.
+- `p`: This element can contain text and the following elements: `audio`, `break`, `phoneme`, `prosody`, `say-as`, `sub`, `mstts:express-as`, and `s`.
+- `phoneme`: This element can only contain text and no other elements.
+- `prosody`: This element can contain text and the following elements: `audio`, `break`, `p`, `phoneme`, `prosody`, `say-as`, `sub`, and `s`.
+- `s`: This element can contain text and the following elements: `audio`, `break`, `phoneme`, `prosody`, `say-as`, `mstts:express-as`, and `sub`.
+- `say-as`: This element can only contain text and no other elements.
+- `sub`: This element can only contain text and no other elements.
+- `speak`: The root element of an SSML document. This element can contain the following elements: `mstts:backgroundaudio` and `voice`.
+- `voice`: This element can contain all other elements except `mstts:backgroundaudio` and `speak`.
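+
+For example, the following minimal sketch (using the `en-US-JennyNeural` voice, as elsewhere in this article) nests `prosody` and `break` elements within `s` elements inside a `p` element, which the preceding list allows:
+
+```xml
+<speak version="1.0" xmlns="http://www.w3.org/2001/10/synthesis" xml:lang="en-US">
+    <voice name="en-US-JennyNeural">
+        <p>
+            <s>This sentence uses <prosody rate="-20.00%">a slower speaking rate</prosody>.</s>
+            <s>This sentence adds <break strength="strong" /> a longer pause.</s>
+        </p>
+    </voice>
+</speak>
+```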
+
+The Speech service automatically handles punctuation as appropriate, such as pausing after a period, or using the correct intonation when a sentence ends with a question mark.
+
+## Special characters
+
+To use the characters `&`, `<`, and `>` within the SSML element's value or text, you must use the entity format. Specifically, you must use `&amp;` in place of `&`, `&lt;` in place of `<`, and `&gt;` in place of `>`. Otherwise, the SSML isn't parsed correctly.
+
+For example, specify `green &amp; yellow` instead of `green & yellow`. The following SSML will be parsed as expected:
+
+```xml
+<speak version="1.0" xmlns="http://www.w3.org/2001/10/synthesis" xml:lang="en-US">
+ <voice name="en-US-JennyNeural">
+ My favorite colors are green &amp; yellow.
+ </voice>
+</speak>
+```
+
+Special characters such as quotation marks, apostrophes, and brackets must be escaped. For more information, see [Extensible Markup Language (XML) 1.0: Appendix D](https://www.w3.org/TR/xml/#sec-entexpand).
+
+Attribute values must be enclosed by double or single quotation marks. For example, `<prosody volume="90">` and `<prosody volume='90'>` are well-formed, valid elements, but `<prosody volume=90>` won't be recognized.
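+
+For example, this minimal sketch escapes a quotation mark and an apostrophe in the spoken text by using the standard XML entities `&quot;` and `&apos;`:
+
+```xml
+<speak version="1.0" xmlns="http://www.w3.org/2001/10/synthesis" xml:lang="en-US">
+    <voice name="en-US-JennyNeural">
+        She said &quot;hello&quot; in her friend&apos;s favorite language.
+    </voice>
+</speak>
+```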
+
+## Speak root element
+
+The `speak` element is the root element that's required for all SSML documents. The `speak` element contains information such as version, language, and the markup vocabulary definition.
+
+Here's the syntax for the `speak` element:
+
+```xml
+<speak version="1.0" xmlns="http://www.w3.org/2001/10/synthesis" xml:lang="string"></speak>
+```
+
+| Attribute | Description | Required or optional |
+| - | - | - |
+| `version` | Indicates the version of the SSML specification used to interpret the document markup. The current version is "1.0".| Required|
+| `xml:lang` | The language of the root document. The value can contain a language code such as `en` (English), or a locale such as `en-US` (English - United States). | Required |
+| `xmlns` | The URI to the document that defines the markup vocabulary (the element types and attribute names) of the SSML document. The current URI is "http://www.w3.org/2001/10/synthesis". | Required |
+
+The `speak` element must contain at least one [voice element](speech-synthesis-markup-voice.md#voice-element).
+
+### speak examples
+
+The supported values for attributes of the `speak` element were [described previously](#speak-root-element).
+
+#### Single voice example
+
+This example uses the `en-US-JennyNeural` voice. For more examples, see [voice examples](speech-synthesis-markup-voice.md#voice-examples).
+
+```xml
+<speak version="1.0" xmlns="http://www.w3.org/2001/10/synthesis" xml:lang="en-US">
+ <voice name="en-US-JennyNeural">
+ This is the text that is spoken.
+ </voice>
+</speak>
+```
+
+## Add a break
+
+Use the `break` element to override the default behavior of breaks or pauses between words. You can use it to add pauses beyond those that the Speech service otherwise inserts automatically.
+
+The `break` element's attributes are described in the following table.
+
+| Attribute | Description | Required or optional |
+| - | - | - |
+| `strength` | The relative duration of a pause by using one of the following values:<br/><ul><li>x-weak</li><li>weak</li><li>medium (default)</li><li>strong</li><li>x-strong</li></ul>| Optional |
+| `time` | The absolute duration of a pause in seconds (such as `2s`) or milliseconds (such as `500ms`). Valid values range from 0 to 5000 milliseconds. If you set a value greater than the supported maximum, the service will use `5000ms`. If the `time` attribute is set, the `strength` attribute is ignored.| Optional |
+
+Here are more details about the `strength` attribute.
+
+| Strength | Relative duration |
+| - | - |
+| X-weak | 250 ms |
+| Weak | 500 ms |
+| Medium | 750 ms |
+| Strong | 1,000 ms |
+| X-strong | 1,250 ms |
+
+### Break examples
+
+The supported values for attributes of the `break` element were [described previously](#add-a-break). The following three ways all add 750 ms breaks.
+
+```xml
+<speak version="1.0" xmlns="http://www.w3.org/2001/10/synthesis" xml:lang="en-US">
+ <voice name="en-US-JennyNeural">
+ Welcome <break /> to text to speech.
+ Welcome <break strength="medium" /> to text to speech.
+ Welcome <break time="750ms" /> to text to speech.
+ </voice>
+</speak>
+```
+
+## Add silence
+
+Use the `mstts:silence` element to insert pauses before or after text, or between two adjacent sentences.
+
+One of the differences between `mstts:silence` and `break` is that a `break` element can be inserted anywhere in the text. Silence only works at the beginning or end of input text or at the boundary of two adjacent sentences.
+
+The silence setting is applied to all input text within its enclosing `voice` element. To reset or change the silence setting, you must use a new `voice` element with either the same voice or a different voice.
+
+The `mstts:silence` element's attributes are described in the following table.
+
+| Attribute | Description | Required or optional |
+| - | - | - |
+| `type` | Specifies where and how to add silence. The following silence types are supported:<br/><ul><li>`Leading` &ndash; Additional silence at the beginning of the text. The value that you set is added to the natural silence before the start of text.</li><li>`Leading-exact` &ndash; Silence at the beginning of the text. The value is an absolute silence length.</li><li>`Tailing` &ndash; Additional silence at the end of text. The value that you set is added to the natural silence after the last word.</li><li>`Tailing-exact` &ndash; Silence at the end of the text. The value is an absolute silence length.</li><li>`Sentenceboundary` &ndash; Additional silence between adjacent sentences. The actual silence length for this type includes the natural silence after the last word in the previous sentence, the value you set for this type, and the natural silence before the starting word in the next sentence.</li><li>`Sentenceboundary-exact` &ndash; Silence between adjacent sentences. The value is an absolute silence length.</li><li>`Comma-exact` &ndash; Silence at the comma in half-width or full-width format. The value is an absolute silence length.</li><li>`Semicolon-exact` &ndash; Silence at the semicolon in half-width or full-width format. The value is an absolute silence length.</li><li>`Enumerationcomma-exact` &ndash; Silence at the enumeration comma in full-width format. The value is an absolute silence length.</li></ul><br/>An absolute silence type (with the `-exact` suffix) replaces any otherwise natural leading or trailing silence. Absolute silence types take precedence over the corresponding non-absolute type. For example, if you set both `Leading` and `Leading-exact` types, the `Leading-exact` type will take effect.| Required |
+| `value` | The duration of a pause in seconds (such as `2s`) or milliseconds (such as `500ms`). Valid values range from 0 to 5000 milliseconds. If you set a value greater than the supported maximum, the service will use `5000ms`.| Required |
+
+### mstts silence examples
+
+The supported values for attributes of the `mstts:silence` element were [described previously](#add-silence).
+
+In this example, `mstts:silence` is used to add 200 ms of silence between two sentences.
+
+```xml
+<speak version="1.0" xmlns="http://www.w3.org/2001/10/synthesis" xmlns:mstts="http://www.w3.org/2001/mstts" xml:lang="en-US">
+<voice name="en-US-JennyNeural">
+<mstts:silence type="Sentenceboundary" value="200ms"/>
+If we're home schooling, the best we can do is roll with what each day brings and try to have fun along the way.
+A good place to start is by trying out the slew of educational apps that are helping children stay happy and smash their schooling at the same time.
+</voice>
+</speak>
+```
+
+In this example, `mstts:silence` is used to add 50 ms of silence at the comma, 100 ms of silence at the semicolon, and 150 ms of silence at the enumeration comma.
+
+```xml
+<speak version="1.0" xmlns="http://www.w3.org/2001/10/synthesis" xmlns:mstts="http://www.w3.org/2001/mstts" xml:lang="zh-CN">
+<voice name="zh-CN-YunxiNeural">
+<mstts:silence type="comma-exact" value="50ms"/><mstts:silence type="semicolon-exact" value="100ms"/><mstts:silence type="enumerationcomma-exact" value="150ms"/>你好呀,云希、晓晓;你好呀。
+</voice>
+</speak>
+```
+
+## Specify paragraphs and sentences
+
+The `p` and `s` elements are used to denote paragraphs and sentences, respectively. In the absence of these elements, the Speech service automatically determines the structure of the SSML document.
+
+### Paragraph and sentence examples
+
+The following example defines two paragraphs that each contain sentences. In the second paragraph, the Speech service automatically determines the sentence structure, because sentences aren't explicitly marked in the SSML document.
+
+```xml
+<speak version="1.0" xmlns="http://www.w3.org/2001/10/synthesis" xml:lang="en-US">
+ <voice name="en-US-JennyNeural">
+ <p>
+ <s>Introducing the sentence element.</s>
+ <s>Used to mark individual sentences.</s>
+ </p>
+ <p>
+ Another simple paragraph.
+ Sentence structure in this paragraph is not explicitly marked.
+ </p>
+ </voice>
+</speak>
+```
+
+## Bookmark element
+
+You can use the `bookmark` element in SSML to reference a specific location in the text or tag sequence. Then you'll use the Speech SDK and subscribe to the `BookmarkReached` event to get the offset of each marker in the audio stream. The `bookmark` element won't be spoken. For more information, see [Subscribe to synthesizer events](how-to-speech-synthesis.md#subscribe-to-synthesizer-events).
+
+The `bookmark` element's attributes are described in the following table.
+
+| Attribute | Description | Required or optional |
+| - | - | - |
+| `mark` | The reference text of the `bookmark` element. | Required |
+
+### Bookmark examples
+
+The supported values for attributes of the `bookmark` element were [described previously](#bookmark-element).
+
+As an example, you might want to know the time offset of each flower word in the following snippet:
+
+```xml
+<speak version="1.0" xmlns="http://www.w3.org/2001/10/synthesis" xml:lang="en-US">
+ <voice name="en-US-AriaNeural">
+ We are selling <bookmark mark='flower_1'/>roses and <bookmark mark='flower_2'/>daisies.
+ </voice>
+</speak>
+```
+
+## Viseme element
+
+A viseme is the visual description of a phoneme in spoken language. It defines the position of the face and mouth while a person is speaking. You can use the `mstts:viseme` element in SSML to request viseme output. For more information, see [Get facial position with viseme](how-to-speech-synthesis-viseme.md).
+
+The viseme setting is applied to all input text within its enclosing `voice` element. To reset or change the viseme setting, you must use a new `voice` element with either the same voice or a different voice.
+
+The `viseme` element's attributes are described in the following table.
+
+| Attribute | Description | Required or optional |
+| - | - | - |
+| `type` | The type of viseme output.<ul><li>`redlips_front` &ndash; lip-sync with viseme ID and audio offset output </li><li>`FacialExpression` &ndash; blend shapes output</li></ul> | Required |
+
+> [!NOTE]
+> Currently, `redlips_front` only supports neural voices in `en-US` locale, and `FacialExpression` supports neural voices in `en-US` and `zh-CN` locales.
+
+### Viseme examples
+
+The supported values for attributes of the `viseme` element were [described previously](#viseme-element).
+
+This SSML snippet illustrates how to request blend shapes with your synthesized speech.
+
+```xml
+<speak version="1.0" xmlns="http://www.w3.org/2001/10/synthesis" xmlns:mstts="http://www.w3.org/2001/mstts" xml:lang="en-US">
+ <voice name="en-US-JennyNeural">
+ <mstts:viseme type="FacialExpression"/>
+ Rainbow has seven colors: Red, orange, yellow, green, blue, indigo, and violet.
+ </voice>
+</speak>
+```
+
+## Next steps
+
+- [SSML overview](speech-synthesis-markup.md)
+- [Voice and sound with SSML](speech-synthesis-markup-voice.md)
+- [Language support: Voices, locales, languages](language-support.md?tabs=tts)
ai-services Speech Synthesis Markup Voice https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/speech-synthesis-markup-voice.md
+
+ Title: Voice and sound with Speech Synthesis Markup Language (SSML) - Speech service
+
+description: Learn about Speech Synthesis Markup Language (SSML) elements to determine what your output audio will sound like.
+ Last updated : 11/30/2022
+# Voice and sound with SSML
+
+Use Speech Synthesis Markup Language (SSML) to specify the text to speech voice, language, name, style, and role. You can use multiple voices in a single SSML document. Adjust the emphasis, speaking rate, pitch, and volume. You can also use SSML to insert pre-recorded audio, such as a sound effect or a musical note.
+
+Refer to the sections below for details about how to use SSML elements to specify voice and sound. For more information about SSML syntax, see [SSML document structure and events](speech-synthesis-markup-structure.md).
+
+## Voice element
+
+At least one `voice` element must be specified within each SSML [speak](speech-synthesis-markup-structure.md#speak-root-element) element. This element determines the voice that's used for text to speech.
+
+You can include multiple `voice` elements in a single SSML document. Each `voice` element can specify a different voice. You can also use the same voice multiple times with different settings, such as when you [change the silence duration](speech-synthesis-markup-structure.md#add-silence) between sentences.
+
+The `voice` element's attributes are described in the following table.
+
+| Attribute | Description | Required or optional |
+| - | - | - |
+| `name` | The voice used for text to speech output. For a complete list of supported prebuilt voices, see [Language support](language-support.md?tabs=tts).| Required|
+| `effect` | The audio effect processor that's used to optimize the quality of the synthesized speech output for specific scenarios on devices. <br/><br/>For some scenarios in production environments, the auditory experience may be degraded due to playback distortion on certain devices. For example, the synthesized speech from a car speaker may sound dull and muffled due to environmental factors such as speaker response, room reverberation, and background noise. The passenger might have to turn up the volume to hear more clearly. To avoid manual operations in such a scenario, the audio effect processor can make the sound clearer by compensating for the playback distortion.<br/><br/>The following values are supported:<br/><ul><li>`eq_car` &ndash; Optimize the auditory experience when providing high-fidelity speech in cars, buses, and other enclosed automobiles.</li><li>`eq_telecomhp8k` &ndash; Optimize the auditory experience for narrowband speech in telecom or telephone scenarios. We recommend a sampling rate of 8 kHz. If the sample rate isn't 8 kHz, the auditory quality of the output speech won't be optimized.</li></ul><br/>If the value is missing or invalid, this attribute will be ignored and no effect will be applied.| Optional |
+
+### Voice examples
+
+The supported values for attributes of the `voice` element were [described previously](#voice-element).
+
+#### Single voice example
+
+This example uses the `en-US-JennyNeural` voice.
+
+```xml
+<speak version="1.0" xmlns="http://www.w3.org/2001/10/synthesis" xml:lang="en-US">
+ <voice name="en-US-JennyNeural">
+ This is the text that is spoken.
+ </voice>
+</speak>
+```
+
+#### Multiple voices example
+
+Within the `speak` element, you can specify multiple voices for text to speech output. These voices can be in different languages. For each voice, the text must be wrapped in a `voice` element.
+
+This example alternates between the `en-US-JennyNeural` and `en-US-ChristopherNeural` voices.
+
+```xml
+<speak version="1.0" xmlns="http://www.w3.org/2001/10/synthesis" xml:lang="en-US">
+ <voice name="en-US-JennyNeural">
+ Good morning!
+ </voice>
+ <voice name="en-US-ChristopherNeural">
+ Good morning to you too Jenny!
+ </voice>
+</speak>
+```
+
+#### Custom neural voice example
+
+To use your [custom neural voice](how-to-deploy-and-use-endpoint.md#use-your-custom-voice), specify the model name as the voice name in SSML.
+
+This example uses a custom voice named "my-custom-voice".
+
+```xml
+<speak version="1.0" xmlns="http://www.w3.org/2001/10/synthesis" xml:lang="en-US">
+ <voice name="my-custom-voice">
+ This is the text that is spoken.
+ </voice>
+</speak>
+```
+
+#### Audio effect example
+
+You use the `effect` attribute to optimize the auditory experience for scenarios such as cars and telecommunications. The following SSML example uses the `effect` attribute with the `eq_car` configuration for car scenarios.
+
+```xml
+<speak version="1.0" xmlns="http://www.w3.org/2001/10/synthesis" xml:lang="en-US">
+ <voice name="en-US-JennyNeural" effect="eq_car">
+ This is the text that is spoken.
+ </voice>
+</speak>
+```
+
+## Speaking styles and roles
+
+By default, neural voices have a neutral speaking style. You can adjust the speaking style, style degree, and role at the sentence level.
+
+> [!NOTE]
+> Styles, style degree, and roles are supported for a subset of neural voices as described in the [voice styles and roles](language-support.md?tabs=tts#voice-styles-and-roles) documentation. To determine what styles and roles are supported for each voice, you can also use the [list voices](rest-text-to-speech.md#get-a-list-of-voices) API and the [Audio Content Creation](https://aka.ms/audiocontentcreation) web application.
+
+The `mstts:express-as` element's attributes are described in the following table.
+
+| Attribute | Description | Required or optional |
+| - | - | - |
+| `style` | The voice-specific speaking style. You can express emotions like cheerfulness, empathy, and calm. You can also optimize the voice for different scenarios like customer service, newscast, and voice assistant. If the style value is missing or invalid, the entire `mstts:express-as` element is ignored and the service uses the default neutral speech. For custom neural voice styles, see the [custom neural voice style example](#custom-neural-voice-style-example). | Required |
+| `styledegree` | The intensity of the speaking style. You can specify a stronger or softer style to make the speech more expressive or subdued. The range of accepted values is 0.01 to 2, inclusive. The default value is 1, which means the predefined style intensity. The minimum unit is 0.01, which results in a slight tendency for the target style. A value of 2 results in a doubling of the default style intensity. If the style degree is missing or isn't supported for your voice, this attribute is ignored.| Optional |
+| `role`| The speaking role-play. The voice can imitate a different age and gender, but the voice name isn't changed. For example, a male voice can raise the pitch and change the intonation to imitate a female voice, but the voice name won't be changed. If the role is missing or isn't supported for your voice, this attribute is ignored. | Optional |
+
+The following table has descriptions of each supported `style` attribute.
+
+|Style|Description|
+| - | - |
+|`style="advertisement_upbeat"`|Expresses an excited and high-energy tone for promoting a product or service.|
+|`style="affectionate"`|Expresses a warm and affectionate tone, with higher pitch and vocal energy. The speaker is in a state of attracting the attention of the listener. The personality of the speaker is often endearing in nature.|
+|`style="angry"`|Expresses an angry and annoyed tone.|
+|`style="assistant"`|Expresses a warm and relaxed tone for digital assistants.|
+|`style="calm"`|Expresses a cool, collected, and composed attitude when speaking. Tone, pitch, and prosody are more uniform compared to other types of speech.|
+|`style="chat"`|Expresses a casual and relaxed tone.|
+|`style="cheerful"`|Expresses a positive and happy tone.|
+|`style="customerservice"`|Expresses a friendly and helpful tone for customer support.|
+|`style="depressed"`|Expresses a melancholic and despondent tone with lower pitch and energy.|
+|`style="disgruntled"`|Expresses a disdainful and complaining tone. Speech of this emotion displays displeasure and contempt.|
+|`style="documentary-narration"`|Narrates documentaries in a relaxed, interested, and informative style suitable for dubbing documentaries, expert commentary, and similar content.|
+|`style="embarrassed"`|Expresses an uncertain and hesitant tone when the speaker is feeling uncomfortable.|
+|`style="empathetic"`|Expresses a sense of caring and understanding.|
+|`style="envious"`|Expresses a tone of admiration when you desire something that someone else has.|
+|`style="excited"`|Expresses an upbeat and hopeful tone. It sounds like something great is happening and the speaker is really happy about that.|
+|`style="fearful"`|Expresses a scared and nervous tone, with higher pitch, higher vocal energy, and faster rate. The speaker is in a state of tension and unease.|
+|`style="friendly"`|Expresses a pleasant, inviting, and warm tone. It sounds sincere and caring.|
+|`style="gentle"`|Expresses a mild, polite, and pleasant tone, with lower pitch and vocal energy.|
+|`style="hopeful"`|Expresses a warm and yearning tone. It sounds like something good will happen to the speaker.|
+|`style="lyrical"`|Expresses emotions in a melodic and sentimental way.|
+|`style="narration-professional"`|Expresses a professional, objective tone for content reading.|
+|`style="narration-relaxed"`|Express a soothing and melodious tone for content reading.|
+|`style="newscast"`|Expresses a formal and professional tone for narrating news.|
+|`style="newscast-casual"`|Expresses a versatile and casual tone for general news delivery.|
+|`style="newscast-formal"`|Expresses a formal, confident, and authoritative tone for news delivery.|
+|`style="poetry-reading"`|Expresses an emotional and rhythmic tone while reading a poem.|
+|`style="sad"`|Expresses a sorrowful tone.|
+|`style="serious"`|Expresses a strict and commanding tone. Speaker often sounds stiffer and much less relaxed with firm cadence.|
+|`style="shouting"`|Speaks like from a far distant or outside and to make self be clearly heard|
+|`style="sports_commentary"`|Expresses a relaxed and interesting tone for broadcasting a sports event.|
+|`style="sports_commentary_excited"`|Expresses an intensive and energetic tone for broadcasting exciting moments in a sports event.|
+|`style="whispering"`|Speaks very softly and make a quiet and gentle sound|
+|`style="terrified"`|Expresses a very scared tone, with faster pace and a shakier voice. It sounds like the speaker is in an unsteady and frantic status.|
+|`style="unfriendly"`|Expresses a cold and indifferent tone.|
+
+The following table has descriptions of each supported `role` attribute.
+
+| Role | Description |
+| - | - |
+| `role="Girl"` | The voice imitates a girl. |
+| `role="Boy"` | The voice imitates a boy. |
+| `role="YoungAdultFemale"` | The voice imitates a young adult female. |
+| `role="YoungAdultMale"` | The voice imitates a young adult male. |
+| `role="OlderAdultFemale"` | The voice imitates an older adult female. |
+| `role="OlderAdultMale"` | The voice imitates an older adult male. |
+| `role="SeniorFemale"` | The voice imitates a senior female. |
+| `role="SeniorMale"` | The voice imitates a senior male. |
+
+### mstts express-as examples
+
+The supported values for attributes of the `mstts:express-as` element were [described previously](#speaking-styles-and-roles).
+
+#### Style and degree example
+
+You use the `mstts:express-as` element to express emotions like cheerfulness, empathy, and calm. You can also optimize the voice for different scenarios like customer service, newscast, and voice assistant.
+
+The following SSML example uses the `<mstts:express-as>` element with a sad style degree of "2".
+
+```xml
+<speak version="1.0" xmlns="http://www.w3.org/2001/10/synthesis"
+ xmlns:mstts="https://www.w3.org/2001/mstts" xml:lang="zh-CN">
+ <voice name="zh-CN-XiaomoNeural">
+ <mstts:express-as style="sad" styledegree="2">
+ 快走吧,路上一定要注意安全,早去早回。
+ </mstts:express-as>
+ </voice>
+</speak>
+```
+
+#### Role example
+
+Apart from adjusting the speaking styles and style degree, you can also adjust the `role` parameter so that the voice imitates a different age and gender. For example, a male voice can raise the pitch and change the intonation to imitate a female voice, but the voice name won't be changed.
+
+This SSML snippet illustrates how the `role` attribute is used to change the role-play for `zh-CN-XiaomoNeural`.
+
+```xml
+<speak version="1.0" xmlns="http://www.w3.org/2001/10/synthesis"
+ xmlns:mstts="https://www.w3.org/2001/mstts" xml:lang="zh-CN">
+ <voice name="zh-CN-XiaomoNeural">
+ 女儿看见父亲走了进来,问道:
+ <mstts:express-as role="YoungAdultFemale" style="calm">
+ “您来的挺快的,怎么过来的?”
+ </mstts:express-as>
+ 父亲放下手提包,说:
+ <mstts:express-as role="OlderAdultMale" style="calm">
+ “刚打车过来的,路上还挺顺畅。”
+ </mstts:express-as>
+ </voice>
+</speak>
+```
+
+#### Custom neural voice style example
+
+Your custom neural voice can be trained to speak with some preset styles such as cheerful, sad, and whispering. You can also [train a custom neural voice](how-to-custom-voice-create-voice.md?tabs=multistyle#train-your-custom-neural-voice-model) to speak in a custom style as determined by your training data. To use your custom neural voice style in SSML, specify the style name that you previously entered in Speech Studio.
+
+This example uses a custom voice named "my-custom-voice". The custom voice speaks with the "cheerful" preset style and style degree of "2", and then with a custom style named "my-custom-style" and style degree of "0.01".
+
+```xml
+<speak version="1.0" xmlns="http://www.w3.org/2001/10/synthesis"
+ xmlns:mstts="https://www.w3.org/2001/mstts" xml:lang="en-US">
+ <voice name="my-custom-voice">
+ <mstts:express-as style="cheerful" styledegree="2">
+ That'd be just amazing!
+ </mstts:express-as>
+ <mstts:express-as style="my-custom-style" styledegree="0.01">
+ What's next?
+ </mstts:express-as>
+ </voice>
+</speak>
+```
+
+## Adjust speaking languages
+
+By default, all neural voices are fluent in their own language and English without using the `<lang xml:lang>` element. For example, if the input text in English is "I'm excited to try text to speech" and you use the `es-ES-ElviraNeural` voice, the text is spoken in English with a Spanish accent. With most neural voices, setting a specific speaking language with the `<lang xml:lang>` element at the sentence or word level is currently not supported.
+
+You can adjust the speaking language for the `en-US-JennyMultilingualNeural` neural voice at the sentence level and word level by using the `<lang xml:lang>` element. The `en-US-JennyMultilingualNeural` neural voice is multilingual in 14 languages (for example, English, Spanish, and Chinese). The supported languages are provided in a table following the `<lang>` syntax and attribute definitions.
+
+The `lang` element's attributes are described in the following table.
+
+| Attribute | Description | Required or optional |
+| - | - | - |
+| `xml:lang` | The language that you want the neural voice to speak. | Required to adjust the speaking language for the neural voice. If you're using `lang xml:lang`, the locale must be provided. |
+
+> [!NOTE]
+> The `<lang xml:lang>` element is incompatible with the `prosody` and `break` elements. You can't adjust pause and prosody like pitch, contour, rate, or volume in this element.
+
+Use this table to determine which speaking languages are supported for each neural voice. If the voice doesn't speak the language of the input text, the Speech service won't output synthesized audio.
+
+| Voice | Primary and default locale | Secondary locales |
+| - | - | - |
+| `en-US-JennyMultilingualNeural` | `en-US` | `de-DE`, `en-AU`, `en-CA`, `en-GB`, `es-ES`, `es-MX`, `fr-CA`, `fr-FR`, `it-IT`, `ja-JP`, `ko-KR`, `pt-BR`, `zh-CN` |
+
+### Lang examples
+
+The supported values for attributes of the `lang` element were [described previously](#adjust-speaking-languages).
+
+The primary language for `en-US-JennyMultilingualNeural` is `en-US`. You must specify `en-US` as the default language within the `speak` element, whether or not the language is adjusted elsewhere.
+
+This SSML snippet shows how to use the `lang` element (and `xml:lang` attribute) to speak `de-DE` with the `en-US-JennyMultilingualNeural` neural voice.
+
+```xml
+<speak version="1.0" xmlns="http://www.w3.org/2001/10/synthesis"
+ xmlns:mstts="https://www.w3.org/2001/mstts" xml:lang="en-US">
+ <voice name="en-US-JennyMultilingualNeural">
+ <lang xml:lang="de-DE">
+ Wir freuen uns auf die Zusammenarbeit mit Ihnen!
+ </lang>
+ </voice>
+</speak>
+```
+
+Within the `speak` element, you can specify multiple languages including `en-US` for text to speech output. For each adjusted language, the text must match the language and be wrapped in a `voice` element. This SSML snippet shows how to use `<lang xml:lang>` to change the speaking languages to `es-MX`, `en-US`, and `fr-FR`.
+
+```xml
+<speak version="1.0" xmlns="http://www.w3.org/2001/10/synthesis"
+ xmlns:mstts="https://www.w3.org/2001/mstts" xml:lang="en-US">
+ <voice name="en-US-JennyMultilingualNeural">
+ <lang xml:lang="es-MX">
+ ¡Esperamos trabajar con usted!
+ </lang>
+ <lang xml:lang="en-US">
+ We look forward to working with you!
+ </lang>
+ <lang xml:lang="fr-FR">
+ Nous avons hâte de travailler avec vous!
+ </lang>
+ </voice>
+</speak>
+```
+
+## Adjust prosody
+
+The `prosody` element is used to specify changes to pitch, contour, range, rate, and volume for the text to speech output. The `prosody` element can contain text and the following elements: `audio`, `break`, `p`, `phoneme`, `prosody`, `say-as`, `sub`, and `s`.
+
+Because prosodic attribute values can vary over a wide range, the speech synthesis engine interprets the assigned values as a suggestion of what the actual prosodic values of the selected voice should be. Text to speech limits or substitutes values that aren't supported. Examples of unsupported values are a pitch of 1 MHz or a volume of 120.
+
+The `prosody` element's attributes are described in the following table.
+
+| Attribute | Description | Required or optional |
+| - | - | - |
+| `contour` | Contour represents changes in pitch. These changes are represented as an array of targets at specified time positions in the speech output. Each target is defined by sets of parameter pairs. For example: <br/><br/>`<prosody contour="(0%,+20Hz) (10%,-2st) (40%,+10Hz)">`<br/><br/>The first value in each set of parameters specifies the location of the pitch change as a percentage of the duration of the text. The second value specifies the amount to raise or lower the pitch by using a relative value or an enumeration value for pitch (see `pitch`). | Optional |
+| `pitch` | Indicates the baseline pitch for the text. Pitch changes can be applied at the sentence level. The pitch changes should be within 0.5 to 1.5 times the original audio. You can express the pitch as:<ul><li>An absolute value: Expressed as a number followed by "Hz" (Hertz). For example, `<prosody pitch="600Hz">some text</prosody>`.</li><li>A relative value:<ul><li>As a relative number: Expressed as a number preceded by "+" or "-" and followed by "Hz" or "st" that specifies an amount to change the pitch. For example: `<prosody pitch="+80Hz">some text</prosody>` or `<prosody pitch="-2st">some text</prosody>`. The "st" indicates the change unit is semitone, which is half of a tone (a half step) on the standard diatonic scale.<li>As a percentage: Expressed as a number preceded by "+" (optionally) or "-" and followed by "%", indicating the relative change. For example: `<prosody pitch="50%">some text</prosody>` or `<prosody pitch="-50%">some text</prosody>`.</li></ul></li><li>A constant value:<ul><li>x-low</li><li>low</li><li>medium</li><li>high</li><li>x-high</li><li>default</li></ul></li></ul> | Optional |
+| `range`| A value that represents the range of pitch for the text. You can express `range` by using the same absolute values, relative values, or enumeration values used to describe `pitch`.| Optional |
+| `rate` | Indicates the speaking rate of the text. Speaking rate can be applied at the word or sentence level. The rate changes should be within 0.5 to 2 times the original audio. You can express `rate` as:<ul><li>A relative value: <ul><li>As a relative number: Expressed as a number that acts as a multiplier of the default. For example, a value of *1* results in no change in the original rate. A value of *0.5* results in a halving of the original rate. A value of *2* results in twice the original rate.</li><li>As a percentage: Expressed as a number preceded by "+" (optionally) or "-" and followed by "%", indicating the relative change. For example: `<prosody rate="50%">some text</prosody>` or `<prosody rate="-50%">some text</prosody>`.</li></ul><li>A constant value:<ul><li>x-slow</li><li>slow</li><li>medium</li><li>fast</li><li>x-fast</li><li>default</li></ul></li></ul> | Optional |
+| `volume` | Indicates the volume level of the speaking voice. Volume changes can be applied at the sentence level. You can express the volume as:<ul><li>An absolute value: Expressed as a number in the range of 0.0 to 100.0, from *quietest* to *loudest*. An example is 75. The default is 100.0.</li><li>A relative value: <ul><li>As a relative number: Expressed as a number preceded by "+" or "-" that specifies an amount to change the volume. Examples are +10 or -5.5.</li><li>As a percentage: Expressed as a number preceded by "+" (optionally) or "-" and followed by "%", indicating the relative change. For example: `<prosody volume="50%">some text</prosody>` or `<prosody volume="+3%">some text</prosody>`.</li></ul><li>A constant value:<ul><li>silent</li><li>x-soft</li><li>soft</li><li>medium</li><li>loud</li><li>x-loud</li><li>default</li></ul></li></ul> | Optional |
+
+### Prosody examples
+
+The supported values for attributes of the `prosody` element were [described previously](#adjust-prosody).
+
+#### Change speaking rate example
+
+This SSML snippet illustrates how the `rate` attribute is used to change the speaking rate to 30% greater than the default rate.
+
+```xml
+<speak version="1.0" xmlns="http://www.w3.org/2001/10/synthesis" xml:lang="en-US">
+ <voice name="en-US-JennyNeural">
+ <prosody rate="+30.00%">
+ Enjoy using text to speech.
+ </prosody>
+ </voice>
+</speak>
+```
+
+#### Change volume example
+
+This SSML snippet illustrates how the `volume` attribute is used to change the volume to 20% greater than the default volume.
+
+```xml
+<speak version="1.0" xmlns="http://www.w3.org/2001/10/synthesis" xml:lang="en-US">
+ <voice name="en-US-JennyNeural">
+ <prosody volume="+20.00%">
+ Enjoy using text to speech.
+ </prosody>
+ </voice>
+</speak>
+```
+
+#### Change pitch example
+
+This SSML snippet illustrates how the `pitch` attribute is used so that the voice speaks in a high pitch.
+
+```xml
+<speak version="1.0" xmlns="http://www.w3.org/2001/10/synthesis" xml:lang="en-US">
+ <voice name="en-US-JennyNeural">
+        Welcome to <prosody pitch="high">text to speech</prosody>.
+ </voice>
+</speak>
+```
+
+#### Change pitch contour example
+
+This SSML snippet illustrates how the `contour` attribute is used to change the contour.
+
+```xml
+<speak version="1.0" xmlns="http://www.w3.org/2001/10/synthesis" xml:lang="en-US">
+ <voice name="en-US-JennyNeural">
+ <prosody contour="(60%,-60%) (100%,+80%)" >
+ Were you the only person in the room?
+ </prosody>
+ </voice>
+</speak>
+```
+
+## Adjust emphasis
+
+The optional `emphasis` element is used to add or remove word-level stress for the text. This element can only contain text and the following elements: `audio`, `break`, `emphasis`, `lang`, `phoneme`, `prosody`, `say-as`, `sub`, and `voice`.
+
+> [!NOTE]
+> The word-level emphasis tuning is only available for these neural voices: `en-US-GuyNeural`, `en-US-DavisNeural`, and `en-US-JaneNeural`.
+>
+> For words that have low pitch and short duration, the pitch might not be raised enough to be noticed.
+
+Usage of the `emphasis` element's attributes is described in the following table.
+
+| Attribute | Description | Required or optional |
+| - | - | - |
+| `level` | Indicates the strength of emphasis to be applied:<ul><li>`reduced`</li><li>`none`</li><li>`moderate`</li><li>`strong`</li></ul><br>When the `level` attribute isn't specified, the default level is `moderate`. For details on each attribute, see [emphasis element](https://www.w3.org/TR/speech-synthesis11/#S3.2.2) | Optional |
+
+### Emphasis examples
+
+The supported values for attributes of the `emphasis` element were [described previously](#adjust-emphasis).
+
+This SSML snippet demonstrates how the `emphasis` element is used to add moderate level emphasis for the word "meetings".
+
+```xml
+<speak version="1.0" xmlns="http://www.w3.org/2001/10/synthesis" xmlns:mstts="https://www.w3.org/2001/mstts" xml:lang="en-US">
+ <voice name="en-US-GuyNeural">
+ I can help you join your <emphasis level="moderate">meetings</emphasis> fast.
+ </voice>
+</speak>
+```
+
+## Add recorded audio
+
+The `audio` element is optional. You can use it to insert prerecorded audio into an SSML document. The body of the `audio` element can contain plain text or SSML markup that's spoken if the audio file is unavailable or unplayable. The `audio` element can also contain text and the following elements: `audio`, `break`, `p`, `s`, `phoneme`, `prosody`, `say-as`, and `sub`.
+
+Any audio included in the SSML document must meet these requirements:
+* The audio file must be a valid *.mp3, *.wav, *.opus, *.ogg, *.flac, or *.wma file.
+* The combined total time for all text and audio files in a single response can't exceed 600 seconds.
+* The audio must not contain any customer-specific or other sensitive information.
+
+> [!NOTE]
+> The `audio` element is not supported by the [Long Audio API](migrate-to-batch-synthesis.md#text-inputs). For long-form text to speech, use the [batch synthesis API](batch-synthesis.md) (Preview) instead.
+
+Usage of the `audio` element's attributes is described in the following table.
+
+| Attribute | Description | Required or optional |
+| - | - | - |
+| `src` | The URI location of the audio file. The audio must be hosted on an internet-accessible HTTPS endpoint. HTTPS is required, and the domain hosting the file must present a valid, trusted TLS/SSL certificate. We recommend that you put the audio file into Blob Storage in the same Azure region as the text to speech endpoint to minimize the latency. | Required |
+
+### Audio examples
+
+The supported values for attributes of the `audio` element were [described previously](#add-recorded-audio).
+
+This SSML snippet illustrates how the `src` attribute is used to insert audio from two .wav files.
+
+```xml
+<speak version="1.0" xmlns="http://www.w3.org/2001/10/synthesis" xml:lang="en-US">
+ <voice name="en-US-JennyNeural">
+ <p>
+ <audio src="https://contoso.com/opinionprompt.wav"/>
+ Thanks for offering your opinion. Please begin speaking after the beep.
+ <audio src="https://contoso.com/beep.wav">
+ Could not play the beep, please voice your opinion now.
+ </audio>
+ </p>
+ </voice>
+</speak>
+```
+
+## Audio duration
+
+Use the `mstts:audioduration` element to set the duration of the output audio. Use this element to help synchronize the timing of audio output completion. The audio duration can be decreased or increased to between 0.5 and 2 times the duration of the original audio. The original audio here is the audio without any other rate settings. The speaking rate is slowed down or sped up accordingly based on the set value.
+
+The audio duration setting is applied to all input text within its enclosing `voice` element. To reset or change the audio duration setting again, you must use a new `voice` element with either the same voice or a different voice.
+
+Usage of the `mstts:audioduration` element's attributes is described in the following table.
+
+| Attribute | Description | Required or optional |
+| - | - | - |
+| `value` | The requested duration of the output audio in either seconds (such as `2s`) or milliseconds (such as `2000ms`).<br/><br/>This value should be within 0.5 to 2 times the original audio without any other rate settings. For example, if the requested duration of your audio is `30s`, then the original audio must have otherwise been between 15 and 60 seconds. If you set a value outside of these boundaries, the duration is set according to the respective minimum or maximum multiple.<br/><br/>Given your requested output audio duration, the Speech service adjusts the speaking rate accordingly. Use the [voice list](rest-text-to-speech.md#get-a-list-of-voices) API and check the `WordsPerMinute` attribute to find out the speaking rate of the neural voice that you're using. You can divide the number of words in your input text by the value of the `WordsPerMinute` attribute to get the approximate original output audio duration. The output audio will sound most natural when you set the audio duration closest to the estimated duration.| Required |
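+
+As described in the preceding table, you can estimate the original output audio duration from the voice's `WordsPerMinute` value before you pick a target duration. The following is a minimal sketch of that arithmetic; the `WordsPerMinute` value and word count are hypothetical, so read the real value from the voice list API for the voice you're using.
+
+```python
+# Rough estimate of the original output audio duration (hypothetical numbers).
+words_per_minute = 150   # from the voice's WordsPerMinute attribute (voice list API)
+word_count = 50          # number of words in your input text
+
+estimated_seconds = word_count / words_per_minute * 60   # ~20 seconds
+min_duration = 0.5 * estimated_seconds                   # 10 seconds
+max_duration = 2 * estimated_seconds                     # 40 seconds
+print(f"Request an audio duration between {min_duration:.0f}s and {max_duration:.0f}s.")
+```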
+
+### mstts audio duration examples
+
+The supported values for attributes of the `mstts:audioduration` element were [described previously](#audio-duration).
+
+In this example, the original audio is around 15 seconds. The `mstts:audioduration` element is used to set the audio duration to 20 seconds (`20s`).
+
+```xml
+<speak version="1.0" xmlns="http://www.w3.org/2001/10/synthesis" xmlns:mstts="http://www.w3.org/2001/mstts" xml:lang="en-US">
+<voice name="en-US-JennyNeural">
+<mstts:audioduration value="20s"/>
+If we're home schooling, the best we can do is roll with what each day brings and try to have fun along the way.
+A good place to start is by trying out the slew of educational apps that are helping children stay happy and smash their schooling at the same time.
+</voice>
+</speak>
+```
+
+## Background audio
+
+You can use the `mstts:backgroundaudio` element to add background audio to your SSML documents or mix an audio file with text to speech. With `mstts:backgroundaudio`, you can loop an audio file in the background, fade in at the beginning of text to speech, and fade out at the end of text to speech.
+
+If the background audio provided is shorter than the text to speech or the fade out, it loops. If it's longer than the text to speech, it stops when the fade out has finished.
+
+Only one background audio file is allowed per SSML document. You can intersperse `audio` tags within the `voice` element to add more audio to your SSML document.
+
+> [!NOTE]
+> The `mstts:backgroundaudio` element should be put in front of all `voice` elements. If specified, it must be the first child of the `speak` element.
+>
+> The `mstts:backgroundaudio` element is not supported by the [Long Audio API](migrate-to-batch-synthesis.md#text-inputs). For long-form text to speech, use the [batch synthesis API](batch-synthesis.md) (Preview) instead.
+
+Usage of the `mstts:backgroundaudio` element's attributes is described in the following table.
+
+| Attribute | Description | Required or optional |
+| - | - | - |
+| `src` | The URI location of the background audio file. | Required |
+| `volume` | The volume of the background audio file. **Accepted values**: `0` to `100` inclusive. The default value is `1`. | Optional |
+| `fadein` | The duration of the background audio fade-in as milliseconds. The default value is `0`, which is the equivalent to no fade in. **Accepted values**: `0` to `10000` inclusive. | Optional |
+| `fadeout` | The duration of the background audio fade-out in milliseconds. The default value is `0`, which is the equivalent to no fade out. **Accepted values**: `0` to `10000` inclusive. | Optional|
+
+### mstts backgroundaudio examples
+
+The supported values for attributes of the `mstts:backgroundaudio` element were [described previously](#background-audio).
+
+```xml
+<speak version="1.0" xml:lang="en-US" xmlns:mstts="http://www.w3.org/2001/mstts">
+ <mstts:backgroundaudio src="https://contoso.com/sample.wav" volume="0.7" fadein="3000" fadeout="4000"/>
+ <voice name="en-US-JennyNeural">
+ The text provided in this document will be spoken over the background audio.
+ </voice>
+</speak>
+```
+
+## Next steps
+
+- [SSML overview](speech-synthesis-markup.md)
+- [SSML document structure and events](speech-synthesis-markup-structure.md)
+- [Language support: Voices, locales, languages](language-support.md?tabs=tts)
+
ai-services Speech Synthesis Markup https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/speech-synthesis-markup.md
+
+ Title: Speech Synthesis Markup Language (SSML) overview - Speech service
+
+description: Use the Speech Synthesis Markup Language to control pronunciation and prosody in text to speech.
+Last updated : 11/30/2022
+# Speech Synthesis Markup Language (SSML) overview
+
+Speech Synthesis Markup Language (SSML) is an XML-based markup language that can be used to fine-tune the text to speech output attributes such as pitch, pronunciation, speaking rate, volume, and more. You have more control and flexibility compared to plain text input.
+
+> [!TIP]
+> You can hear voices in different styles and pitches reading example text via the [Voice Gallery](https://speech.microsoft.com/portal/voicegallery).
+
+## Scenarios
+
+You can use SSML to:
+
+- [Define the input text structure](speech-synthesis-markup-structure.md) that determines the structure, content, and other characteristics of the text to speech output. For example, you can use SSML to define a paragraph, a sentence, a break or a pause, or silence. You can wrap text with event tags such as bookmark or viseme that can be processed later by your application.
+- [Choose the voice](speech-synthesis-markup-voice.md), language, name, style, and role. You can use multiple voices in a single SSML document. Adjust the emphasis, speaking rate, pitch, and volume. You can also use SSML to insert pre-recorded audio, such as a sound effect or a musical note.
+- [Control pronunciation](speech-synthesis-markup-pronunciation.md) of the output audio. For example, you can use SSML with phonemes and a custom lexicon to improve pronunciation. You can also use SSML to define how a word or mathematical expression is pronounced.
+
+## Use SSML
+
+> [!IMPORTANT]
+> You're billed for each character that's converted to speech, including punctuation. Although the SSML document itself is not billable, optional elements that are used to adjust how the text is converted to speech, like phonemes and pitch, are counted as billable characters. For more information, see [text to speech pricing notes](text-to-speech.md#pricing-note).
+
+You can use SSML in the following ways:
+
+- [Audio Content Creation](https://aka.ms/audiocontentcreation) tool: Author plain text and SSML in Speech Studio. You can listen to the output audio and adjust the SSML to improve speech synthesis. For more information, see [Speech synthesis with the Audio Content Creation tool](how-to-audio-content-creation.md).
+- [Batch synthesis API](batch-synthesis.md): Provide SSML via the `inputs` property.
+- [Speech CLI](get-started-text-to-speech.md?pivots=programming-language-cli): Provide SSML via the `spx synthesize --ssml SSML` command line argument.
+- [Speech SDK](how-to-speech-synthesis.md#use-ssml-to-customize-speech-characteristics): Provide SSML via the "speak" SSML method (see the sketch after this list).
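+
+The following is a minimal sketch of the Speech SDK path in Python, passing an SSML string to the synthesizer's `speak_ssml_async` method. The key, region, and SSML shown are placeholders that you replace with your own values.
+
+```python
+import azure.cognitiveservices.speech as speechsdk
+
+# Placeholders: replace with your own Speech resource key and region.
+speech_config = speechsdk.SpeechConfig(subscription="YourSpeechKey", region="YourRegion")
+synthesizer = speechsdk.SpeechSynthesizer(speech_config=speech_config)
+
+ssml = """<speak version="1.0" xmlns="http://www.w3.org/2001/10/synthesis" xml:lang="en-US">
+  <voice name="en-US-JennyNeural">
+    <prosody rate="+10.00%">Enjoy using text to speech.</prosody>
+  </voice>
+</speak>"""
+
+# speak_ssml_async returns a future; .get() blocks until synthesis finishes.
+result = synthesizer.speak_ssml_async(ssml).get()
+if result.reason == speechsdk.ResultReason.SynthesizingAudioCompleted:
+    print("Speech synthesis completed.")
+```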
+
+## Next steps
+
+- [SSML document structure and events](speech-synthesis-markup-structure.md)
+- [Voice and sound with SSML](speech-synthesis-markup-voice.md)
+- [Pronunciation with SSML](speech-synthesis-markup-pronunciation.md)
+- [Language support: Voices, locales, languages](language-support.md?tabs=tts)
ai-services Speech To Text https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/speech-to-text.md
+
+ Title: Speech to text overview - Speech service
+
+description: Get an overview of the benefits and capabilities of the speech to text feature of the Speech service.
+Last updated : 04/05/2023
+keywords: speech to text, speech to text software
++
+# What is speech to text?
+
+In this overview, you learn about the benefits and capabilities of the speech to text feature of the Speech service, which is part of Azure AI services. Speech to text can be used for [real-time](#real-time-speech-to-text) or [batch transcription](#batch-transcription) of audio streams into text.
+
+> [!NOTE]
+> To compare pricing of [real-time](#real-time-speech-to-text) to [batch transcription](#batch-transcription), see [Speech service pricing](https://azure.microsoft.com/pricing/details/cognitive-services/speech-services/).
+
+For a full list of available speech to text languages, see [Language and voice support](language-support.md?tabs=stt).
+
+## Real-time speech to text
+
+With real-time speech to text, the audio is transcribed as speech is recognized from a microphone or file. Use real-time speech to text for applications that need to transcribe audio in real time, such as:
+- Transcriptions, captions, or subtitles for live meetings
+- Contact center agent assist
+- Dictation
+- Voice agents
+- Pronunciation assessment
+
+Real-time speech to text is available via the [Speech SDK](speech-sdk.md) and the [Speech CLI](spx-overview.md).
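+
+For example, here's a minimal Speech SDK for Python sketch that recognizes a single utterance from the default microphone. The key and region values are placeholders; see the Speech SDK quickstarts for complete, supported samples.
+
+```python
+import azure.cognitiveservices.speech as speechsdk
+
+# Placeholders: replace with your own Speech resource key and region.
+speech_config = speechsdk.SpeechConfig(subscription="YourSpeechKey", region="YourRegion")
+audio_config = speechsdk.audio.AudioConfig(use_default_microphone=True)
+recognizer = speechsdk.SpeechRecognizer(speech_config=speech_config, audio_config=audio_config)
+
+# Recognize a single utterance; recognition stops at the first significant pause.
+result = recognizer.recognize_once_async().get()
+if result.reason == speechsdk.ResultReason.RecognizedSpeech:
+    print(result.text)
+```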
+
+## Batch transcription
+
+Batch transcription is used to transcribe a large amount of audio in storage. You can point to audio files with a shared access signature (SAS) URI and asynchronously receive transcription results. Use batch transcription for applications that need to transcribe audio in bulk such as:
+- Transcriptions, captions, or subtitles for pre-recorded audio
+- Contact center post-call analytics
+- Diarization
+
+Batch transcription is available via:
+- [Speech to text REST API](rest-speech-to-text.md): To get started, see [How to use batch transcription](batch-transcription.md) and [Batch transcription samples (REST)](https://github.com/Azure-Samples/cognitive-services-speech-sdk/tree/master/samples/batch). A minimal request sketch follows this list.
+- The [Speech CLI](spx-overview.md) supports both real-time and batch transcription. For Speech CLI help with batch transcriptions, run the following command:
+ ```azurecli-interactive
+ spx help batch transcription
+ ```
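+
+As a rough illustration of the REST path, the following Python sketch creates a transcription job with a single POST request. It assumes the v3.1 `transcriptions` endpoint that's covered in the linked how-to; the key, region, and SAS URL are placeholders.
+
+```python
+import requests
+
+# Placeholders: replace with your Speech resource key, region, and an audio SAS URL.
+key = "YourSpeechKey"
+region = "YourRegion"
+endpoint = f"https://{region}.api.cognitive.microsoft.com/speechtotext/v3.1/transcriptions"
+
+body = {
+    "displayName": "My batch transcription",
+    "locale": "en-US",
+    "contentUrls": ["https://<storage-account>.blob.core.windows.net/<container>/audio.wav?<SAS>"],
+}
+
+# Create the transcription job; the response includes a self URL that you can poll for status.
+response = requests.post(endpoint, headers={"Ocp-Apim-Subscription-Key": key}, json=body)
+response.raise_for_status()
+print(response.json()["self"])
+```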
+
+## Custom Speech
+
+With [Custom Speech](./custom-speech-overview.md), you can evaluate and improve the accuracy of speech recognition for your applications and products. A custom speech model can be used for [real-time speech to text](speech-to-text.md), [speech translation](speech-translation.md), and [batch transcription](batch-transcription.md).
+
+> [!TIP]
+> A [hosted deployment endpoint](how-to-custom-speech-deploy-model.md) isn't required to use Custom Speech with the [Batch transcription API](batch-transcription.md). You can conserve resources if the [custom speech model](how-to-custom-speech-train-model.md) is only used for batch transcription. For more information, see [Speech service pricing](https://azure.microsoft.com/pricing/details/cognitive-services/speech-services/).
+
+Out of the box, speech recognition utilizes a Universal Language Model as a base model that is trained with Microsoft-owned data and reflects commonly used spoken language. The base model is pre-trained with dialects and phonetics representing a variety of common domains. When you make a speech recognition request, the most recent base model for each [supported language](language-support.md?tabs=stt) is used by default. The base model works very well in most speech recognition scenarios.
+
+A custom model can be used to augment the base model to improve recognition of domain-specific vocabulary that's specific to your application, by providing text data to train the model. It can also be used to improve recognition for your application's specific audio conditions, by providing audio data with reference transcriptions. For more information, see [Custom Speech](./custom-speech-overview.md) and [Speech to text REST API](rest-speech-to-text.md).
+
+Customization options vary by language or locale. To verify support, see [Language and voice support for the Speech service](./language-support.md?tabs=stt).
+
+## Responsible AI
+
+An AI system includes not only the technology, but also the people who will use it, the people who will be affected by it, and the environment in which it is deployed. Read the transparency notes to learn about responsible AI use and deployment in your systems.
+
+* [Transparency note and use cases](/legal/cognitive-services/speech-service/speech-to-text/transparency-note?context=/azure/cognitive-services/speech-service/context/context)
+* [Characteristics and limitations](/legal/cognitive-services/speech-service/speech-to-text/characteristics-and-limitations?context=/azure/cognitive-services/speech-service/context/context)
+* [Integration and responsible use](/legal/cognitive-services/speech-service/speech-to-text/guidance-integration-responsible-use?context=/azure/cognitive-services/speech-service/context/context)
+* [Data, privacy, and security](/legal/cognitive-services/speech-service/speech-to-text/data-privacy-security?context=/azure/cognitive-services/speech-service/context/context)
+
+## Next steps
+
+- [Get started with speech to text](get-started-speech-to-text.md)
+- [Create a batch transcription](batch-transcription-create.md)
ai-services Speech Translation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/speech-translation.md
+
+ Title: Speech translation overview - Speech service
+
+description: With speech translation, you can add end-to-end, real-time, multi-language translation of speech to your applications, tools, and devices.
+Last updated : 09/16/2022
+keywords: speech translation
++
+# What is speech translation?
+
+In this article, you learn about the benefits and capabilities of the speech translation service, which enables real-time, multi-language speech-to-speech and speech to text translation of audio streams.
+
+By using the Speech SDK or Speech CLI, you can give your applications, tools, and devices access to source transcriptions and translation outputs for the provided audio. Interim transcription and translation results are returned as speech is detected, and the final results can be converted into synthesized speech.
+
+For a list of languages supported for speech translation, see [Language and voice support](language-support.md?tabs=speech-translation).
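+
+To give a feel for what this looks like in code, here's a minimal Speech SDK for Python sketch that performs single-shot speech to text translation from English to German using the default microphone. The key and region are placeholders; the quickstart linked below walks through the supported end-to-end steps.
+
+```python
+import azure.cognitiveservices.speech as speechsdk
+
+# Placeholders: replace with your own Speech resource key and region.
+translation_config = speechsdk.translation.SpeechTranslationConfig(
+    subscription="YourSpeechKey", region="YourRegion")
+translation_config.speech_recognition_language = "en-US"
+translation_config.add_target_language("de")
+
+recognizer = speechsdk.translation.TranslationRecognizer(translation_config=translation_config)
+result = recognizer.recognize_once()  # listens on the default microphone
+
+if result.reason == speechsdk.ResultReason.TranslatedSpeech:
+    print("Recognized:", result.text)
+    print("Translated (de):", result.translations["de"])
+```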
+
+## Core features
+
+* Speech to text translation with recognition results.
+* Speech-to-speech translation.
+* Support for translation to multiple target languages.
+* Interim recognition and translation results.
+
+## Get started
+
+As your first step, try the [Speech translation quickstart](get-started-speech-translation.md). The speech translation service is available via the [Speech SDK](speech-sdk.md) and the [Speech CLI](spx-overview.md).
+
+You'll find [Speech SDK speech to text and translation samples](https://github.com/Azure-Samples/cognitive-services-speech-sdk) on GitHub. These samples cover common scenarios, such as reading audio from a file or stream, continuous and single-shot recognition and translation, and working with custom models.
+
+## Next steps
+
+* Try the [speech translation quickstart](get-started-speech-translation.md)
+* Install the [Speech SDK](speech-sdk.md)
+* Install the [Speech CLI](spx-overview.md)
ai-services Spx Basics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/spx-basics.md
+
+ Title: "Quickstart: The Speech CLI - Speech service"
+
+description: In this Azure AI Speech CLI quickstart, you interact with speech to text, text to speech, and speech translation without having to write code.
+Last updated : 09/16/2022
+# Quickstart: Get started with the Azure AI Speech CLI
+
+In this article, you'll learn how to use the Azure AI Speech CLI (also called SPX) to access Speech services such as speech to text, text to speech, and speech translation, without having to write any code. The Speech CLI is production ready, and you can use it to automate simple workflows in the Speech service by using `.bat` or shell scripts.
+
+This article assumes that you have working knowledge of the Command Prompt window, terminal, or PowerShell.
+
+> [!NOTE]
+> In PowerShell, the [stop-parsing token](/powershell/module/microsoft.powershell.core/about/about_special_characters#stop-parsing-token) (`--%`) should follow `spx`. For example, run `spx --% config @region` to view the current region config value.
+
+## Download and install
++
+## Create a resource configuration
+
+# [Terminal](#tab/terminal)
+
+To get started, you need a Speech resource key and region identifier (for example, `eastus`, `westus`). Create a Speech resource on the [Azure portal](https://portal.azure.com). For more information, see [Create a multi-service resource](../../ai-services/multi-service-resource.md?pivots=azportal).
+
+To configure your resource key and region identifier, run the following commands:
+
+```console
+spx config @key --set SPEECH-KEY
+spx config @region --set SPEECH-REGION
+```
+
+The key and region are stored for future Speech CLI commands. To view the current configuration, run the following commands:
+
+```console
+spx config @key
+spx config @region
+```
+
+As needed, include the `clear` option to remove either stored value:
+
+```console
+spx config @key --clear
+spx config @region --clear
+```
+
+# [PowerShell](#tab/powershell)
+
+To get started, you need a Speech resource key and region identifier (for example, `eastus`, `westus`). Create a Speech resource on the [Azure portal](https://portal.azure.com). For more information, see [Create a multi-service resource](~/articles/ai-services/multi-service-resource.md?pivots=azportal).
+
+To configure your Speech resource key and region identifier, run the following commands in PowerShell:
+
+```powershell
+spx --% config @key --set SPEECH-KEY
+spx --% config @region --set SPEECH-REGION
+```
+
+The key and region are stored for future SPX commands. To view the current configuration, run the following commands:
+
+```powershell
+spx --% config @key
+spx --% config @region
+```
+
+As needed, include the `clear` option to remove either stored value:
+
+```powershell
+spx --% config @key --clear
+spx --% config @region --clear
+```
+
+***
+
+## Basic usage
+
+> [!IMPORTANT]
+> When you use the Speech CLI in a container, include the `--host` option. You must also specify `--key none` to ensure that the CLI doesn't try to use a Speech key for authentication. For example, run `spx recognize --key none --host wss://localhost:5000/ --file myaudio.wav` to recognize speech from an audio file in a [speech to text container](speech-container-stt.md).
+
+This section shows a few basic SPX commands that are often useful for first-time testing and experimentation. Start by viewing the help that's built into the tool by running the following command:
+
+```console
+spx
+```
+
+You can search help topics by keyword. For example, to see a list of Speech CLI usage examples, run the following command:
+
+```console
+spx help find --topics "examples"
+```
+
+To see options for the recognize command, run the following command:
+
+```console
+spx help recognize
+```
+
+Additional help commands are listed in the console output. You can enter these commands to get detailed help about subcommands.
+
+## Speech to text (speech recognition)
+
+> [!NOTE]
+> You can't use your computer's microphone when you run the Speech CLI within a Docker container. However, you can read from and save audio files in your local mounted directory.
+
+To convert speech to text (speech recognition) by using your system's default microphone, run the following command:
+
+```console
+spx recognize --microphone
+```
+
+After you run the command, SPX begins listening for audio on the current active input device. It stops listening when you select **Enter**. The spoken audio is then recognized and converted to text in the console output.
+
+With the Speech CLI, you can also recognize speech from an audio file. Run the following command:
+
+```console
+spx recognize --file /path/to/file.wav
+```
+
+> [!TIP]
+> If you get stuck or want to learn more about the Speech CLI recognition options, you can run ```spx help recognize```.
+
+## Text to speech (speech synthesis)
+
+The following command takes text as input and then outputs the synthesized speech to the current active output device (for example, your computer speakers).
+
+```console
+spx synthesize --text "Testing synthesis using the Speech CLI" --speakers
+```
+
+You can also save the synthesized output to a file. In this example, let's create a file named *my-sample.wav* in the directory where you're running the command.
+
+```console
+spx synthesize --text "Enjoy using the Speech CLI." --audio output my-sample.wav
+```
+
+These examples presume that you're testing in English. However, Speech service supports speech synthesis in many languages. You can pull down a full list of voices either by running the following command or by visiting the [language support page](./language-support.md?tabs=tts).
+
+```console
+spx synthesize --voices
+```
+
+Here's a command for using one of the voices you've discovered.
+
+```console
+spx synthesize --text "Bienvenue chez moi." --voice fr-FR-AlainNeural --speakers
+```
+
+> [!TIP]
+> If you get stuck or want to learn more about the Speech CLI recognition options, you can run ```spx help synthesize```.
+
+## Speech to text translation
+
+With the Speech CLI, you can also do speech to text translation. Run the following command to capture audio from your default microphone and output the translation as text. Keep in mind that you need to supply the `source` and `target` language with the `translate` command.
+
+```console
+spx translate --microphone --source en-US --target ru-RU
+```
+
+When you're translating into multiple languages, separate the language codes with a semicolon (`;`).
+
+```console
+spx translate --microphone --source en-US --target ru-RU;fr-FR;es-ES
+```
+
+If you want to save the output of your translation, use the `--output` flag. In this example, you'll also read from a file.
+
+```console
+spx translate --file /some/file/path/input.wav --source en-US --target ru-RU --output file /some/file/path/russian_translation.txt
+```
+
+> [!TIP]
+> If you get stuck or want to learn more about the Speech CLI recognition options, you can run ```spx help translate```.
++
+## Next steps
+
+* [Install GStreamer to use the Speech CLI with MP3 and other formats](./how-to-use-codec-compressed-audio-input-streams.md)
+* [Configuration options for the Speech CLI](./spx-data-store-configuration.md)
+* [Batch operations with the Speech CLI](./spx-batch-operations.md)
ai-services Spx Batch Operations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/spx-batch-operations.md
+
+ Title: "Run batch operations with the Speech CLI - Speech service"
+
+description: Learn how to do batch speech to text (speech recognition) and batch text to speech (speech synthesis) with the Speech CLI.
+Last updated : 09/16/2022
+# Run batch operations with the Speech CLI
+
+Batch operations are common tasks when using the Speech service. In this article, you'll learn how to do batch speech to text (speech recognition) and batch text to speech (speech synthesis) with the Speech CLI. Specifically, you'll learn how to:
+
+* Run batch speech recognition on a directory of audio files
+* Run batch speech synthesis by iterating over a `.tsv` file
+
+## Batch speech to text (speech recognition)
+
+The Speech service is often used to recognize speech from audio files. In this example, you'll learn how to use the Speech CLI to iterate over a directory and capture the recognition output for each `.wav` file. The `--files` flag points at the directory where the audio files are stored, and the wildcard `*.wav` tells the Speech CLI to run recognition on every file with the extension `.wav`. The output for each recognized file is written as tab-separated values in `speech_output.tsv`.
+
+> [!NOTE]
+> The `--threads` argument can also be used in the next section for `spx synthesize` commands. The number of available threads depends on the CPU and its current load percentage.
+
+```console
+spx recognize --files C:\your_wav_file_dir\*.wav --output file C:\output_dir\speech_output.tsv --threads 10
+```
+
+The following is an example of the output file structure.
+
+```output
+audio.input.id recognizer.session.started.sessionid recognizer.recognized.result.text
+sample_1 07baa2f8d9fd4fbcb9faea451ce05475 A sample wave file.
+sample_2 8f9b378f6d0b42f99522f1173492f013 Sample text synthesized.
+```
+
+## Batch text to speech (speech synthesis)
+
+The easiest way to run batch text to speech is to create a new `.tsv` (tab-separated values) file and use the `--foreach` option in the Speech CLI. You can create a `.tsv` file by using your favorite text editor. For this example, let's call it `text_synthesis.tsv`:
+
+> [!IMPORTANT]
+> When you copy the contents of this text file, make sure that your file has a **tab**, not spaces, between the file location and the text. Sometimes, when you copy the contents from this example, tabs are converted to spaces, which causes the `spx` command to fail when run.
+
+```Input
+audio.output text
+C:\batch_wav_output\wav_1.wav Sample text to synthesize.
+C:\batch_wav_output\wav_2.wav Using the Speech CLI to run batch-synthesis.
+C:\batch_wav_output\wav_3.wav Some more text to test capabilities.
+```
+
+Next, you run a command to point to `text_synthesis.tsv`, perform synthesis on each `text` field, and write the result to the corresponding `audio.output` path as a `.wav` file.
+
+```console
+spx synthesize --foreach in @C:\your\path\to\text_synthesis.tsv
+```
+
+This command is the equivalent of running `spx synthesize --text "Sample text to synthesize" --audio output C:\batch_wav_output\wav_1.wav` **for each** record in the `.tsv` file.
+
+A couple of things to note:
+
+* The column headers, `audio.output` and `text`, correspond to the command-line arguments `--audio output` and `--text`, respectively. Multi-part command-line arguments like `--audio output` should be formatted in the file with no spaces, no leading dashes, and periods separating strings, for example, `audio.output`. Any other existing command-line arguments can be added to the file as additional columns using this pattern (see the example after this list).
+* When the file is formatted in this way, no additional arguments are required to be passed to `--foreach`.
+* Be sure to separate each value in the `.tsv` file with a **tab**.
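+
+For example, to also set the synthesis voice per row, you could add a `voice` column that maps to the `--voice` argument. The following file is illustrative only, and each value must still be separated by a tab:
+
+```Input
+audio.output voice text
+C:\batch_wav_output\wav_1.wav en-US-JennyNeural Sample text to synthesize.
+C:\batch_wav_output\wav_2.wav fr-FR-AlainNeural Bienvenue chez moi.
+```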
+
+However, if you have a `.tsv` file like the following example, with column headers that **do not match** command-line arguments:
+
+```Input
+wav_path str_text
+C:\batch_wav_output\wav_1.wav Sample text to synthesize.
+C:\batch_wav_output\wav_2.wav Using the Speech CLI to run batch-synthesis.
+C:\batch_wav_output\wav_3.wav Some more text to test capabilities.
+```
+
+You can map these mismatched field names to the correct arguments by using the following syntax in the `--foreach` call. This call is equivalent to the previous one.
+
+```console
+spx synthesize --foreach audio.output;text in @C:\your\path\to\text_synthesis.tsv
+```
+
+## Next steps
+
+* [Speech CLI overview](./spx-overview.md)
+* [Speech CLI quickstart](./spx-basics.md)
ai-services Spx Data Store Configuration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/spx-data-store-configuration.md
+
+ Title: "Configure the Speech CLI datastore - Speech service"
+
+description: Learn how to configure the Speech CLI datastore.
+Last updated : 05/01/2022
+# Configure the Speech CLI datastore
+
+The [Speech CLI](spx-basics.md) can rely on settings in configuration files, which you can refer to using a `@` symbol. The Speech CLI saves a new setting in a new `./spx/data` subdirectory that's created in the current working directory. When looking for a configuration value, the Speech CLI searches your current working directory, then the datastore at `./spx/data`, and then other datastores, including a final read-only datastore in the `spx` binary.
+
+In the [Speech CLI quickstart](spx-basics.md), you used the datastore to save your `@key` and `@region` values, so you didn't need to specify them with each `spx` command. Keep in mind that you can use configuration files to store your own configuration settings, or even use them to pass URLs or other dynamic content that's generated at runtime.
+
+For more details about datastore files, including use of default configuration files (`@spx.default`, `@default.config`, and `@*.default.config` for command-specific default settings), enter this command:
+
+```console
+spx help advanced setup
+```
+
+## nodefaults
+
+The following example clears the `@my.defaults` configuration file, adds key-value pairs for **key** and **region** in the file, and uses the configuration in a call to `spx recognize`.
+
+```console
+spx config @my.defaults --clear
+spx config @my.defaults --add key 000072626F6E20697320636F6F6C0000
+spx config @my.defaults --add region westus
+
+spx config @my.defaults
+
+spx recognize --nodefaults @my.defaults --file hello.wav
+```
+
+## Dynamic configuration
+
+You can also write dynamic content to a configuration file using the `--output` option.
+
+For example, the following command creates a custom speech model and stores the URL of the new model in a configuration file. The next command waits until the model at that URL is ready for use before returning.
+
+```console
+spx csr model create --name "Example 4" --datasets @my.datasets.txt --output url @my.model.txt
+spx csr model status --model @my.model.txt --wait
+```
+
+The following example writes two URLs to the `@my.datasets.txt` configuration file. In this scenario, `--output` can include an optional **add** keyword to create a configuration file or append to the existing one.
+
+```console
+spx csr dataset create --name "LM" --kind Language --content https://crbn.us/data.txt --output url @my.datasets.txt
+spx csr dataset create --name "AM" --kind Acoustic --content https://crbn.us/audio.zip --output add url @my.datasets.txt
+
+spx config @my.datasets.txt
+```
+
+## SPX config add
+
+For readability, flexibility, and convenience, you can use a preset configuration with select output options.
+
+For example, you might have the following requirements for [captioning](captioning-quickstart.md):
+- Recognize from the input file `caption.this.mp4`.
+- Output WebVTT and SRT captions to the files `caption.vtt` and `caption.srt` respectively.
+- Output the `offset`, `duration`, `resultid`, and `text` of each recognizing event to the file `each.result.tsv`.
+
+You can create a preset configuration named `@caption.defaults` as shown here:
+
+```console
+spx config @caption.defaults --clear
+spx config @caption.defaults --add output.each.recognizing.result.offset=true
+spx config @caption.defaults --add output.each.recognizing.result.duration=true
+spx config @caption.defaults --add output.each.recognizing.result.resultid=true
+spx config @caption.defaults --add output.each.recognizing.result.text=true
+spx config @caption.defaults --add output.each.file.name=each.result.tsv
+spx config @caption.defaults --add output.srt.file.name=caption.srt
+spx config @caption.defaults --add output.vtt.file.name=caption.vtt
+```
+
+The settings are saved to the current directory in a file named `caption.defaults`. Here are the file contents:
+
+```
+output.each.recognizing.result.offset=true
+output.each.recognizing.result.duration=true
+output.each.recognizing.result.resultid=true
+output.each.recognizing.result.text=true
+output.all.file.name=output.result.tsv
+output.each.file.name=each.result.tsv
+output.srt.file.name=caption.srt
+output.vtt.file.name=caption.vtt
+```
+
+Then, to generate [captions](captioning-quickstart.md), you can run this command that imports settings from the `@caption.defaults` preset configuration:
+
+```console
+spx recognize --file caption.this.mp4 --format any --output vtt --output srt @caption.defaults
+```
+
+Using the preset configuration as shown previously is similar to running the following command:
+
+```console
+spx recognize --file caption.this.mp4 --format any --output vtt file caption.vtt --output srt file caption.srt --output each file each.result.tsv --output all file output.result.tsv --output each recognizer recognizing result offset --output each recognizer recognizing duration --output each recognizer recognizing result resultid --output each recognizer recognizing text
+```
+
+## Next steps
+
+* [Batch operations with the Speech CLI](./spx-batch-operations.md)
ai-services Spx Output Options https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/spx-output-options.md
+
+ Title: "Configure the Speech CLI output options - Speech service"
+
+description: Learn how to configure output options with the Speech CLI.
+Last updated : 09/16/2022
+# Configure the Speech CLI output options
+
+The [Speech CLI](spx-basics.md) output can be written to standard output or specified files.
+
+For contextual help in the Speech CLI, you can run any of the following commands:
+
+```console
+spx help recognize output examples
+spx help synthesize output examples
+spx help translate output examples
+spx help intent output examples
+```
+
+## Standard output
+
+If the file argument is a hyphen (`-`), the results are written to standard output as shown in the following example.
+
+```console
+spx recognize --file caption.this.mp4 --format any --output vtt file - --output srt file - --output each file - @output.each.detailed --property SpeechServiceResponse_StablePartialResultThreshold=0 --profanity masked
+```
+
+## Default file output
+
+If you omit the `file` option, output is written to default files in the current directory.
+
+For example, run the following command to write WebVTT and SRT [captions](captioning-concepts.md) to their own default files:
+
+```console
+spx recognize --file caption.this.mp4 --format any --output vtt --output srt --output each text --output all duration
+```
+
+The default file names are as follows, where the `<EPOCH_TIME>` is replaced at run time.
+- The default SRT file name includes the input file name and the local operating system epoch time: `output.caption.this.<EPOCH_TIME>.srt`
+- The default Web VTT file name includes the input file name and the local operating system epoch time: `output.caption.this.<EPOCH_TIME>.vtt`
+- The default `output each` file name, `each.<EPOCH_TIME>.tsv`, includes the local operating system epoch time. This file is not created by default, unless you specify the `--output each` option.
+- The default `output all` file name, `output.<EPOCH_TIME>.tsv`, includes the local operating system epoch time. This file is created by default.
+
+## Output to specific files
+
+For output to files that you specify instead of the [default files](#default-file-output), set the `file` option to the file name.
+
+For example, to output both WebVTT and SRT [captions](captioning-concepts.md) to files that you specify, run the following command:
+
+```console
+spx recognize --file caption.this.mp4 --format any --output vtt file caption.vtt --output srt file caption.srt --output each text --output each file each.result.tsv --output all file output.result.tsv
+```
+
+The preceding command also outputs the `each` and `all` results to the specified files.
+
+## Output to multiple files
+
+For translations with `spx translate`, separate files are created for the source language (such as `--source en-US`) and each target language (such as `--target "de;fr;zh-Hant"`).
+
+For example, to output translated SRT and WebVTT captions, run the following command:
+
+```console
+spx translate --source en-US --target "de;fr;zh-Hant" --file caption.this.mp4 --format any --output vtt file caption.vtt --output srt file caption.srt
+```
+
+Captions should then be written to the following files: *caption.srt*, *caption.vtt*, *caption.de.srt*, *caption.de.vtt*, *caption.fr.srt*, *caption.fr.vtt*, *caption.zh-Hant.srt*, and *caption.zh-Hant.vtt*.
+
+## Suppress header
+
+You can suppress the header line in the output file by setting the `has header false` option:
+
+```console
+spx recognize --nodefaults @my.defaults --file audio.wav --output recognized text --output file has header false
+```
+
+See [Configure the Speech CLI datastore](spx-data-store-configuration.md#nodefaults) for more information about `--nodefaults`.
+
+## Next steps
+
+* [Captioning quickstart](./captioning-quickstart.md)
ai-services Spx Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/spx-overview.md
+
+ Title: The Azure AI Speech CLI
+
+description: In this article, you learn about the Speech CLI, a command-line tool for using Speech service without having to write any code.
+Last updated : 09/16/2022
+# What is the Speech CLI?
+
+The Speech CLI is a command-line tool for using Speech service without having to write any code. The Speech CLI requires minimal setup. You can easily use it to experiment with key features of Speech service and see how it works with your use cases. Within minutes, you can run simple test workflows, such as batch speech-recognition from a directory of files or text to speech on a collection of strings from a file. Beyond simple workflows, the Speech CLI is production-ready, and you can scale it up to run larger processes by using automated `.bat` or shell scripts.
+
+Most features in the Speech SDK are available in the Speech CLI, and some advanced features and customizations are simplified in the Speech CLI. As you're deciding when to use the Speech CLI or the Speech SDK, consider the following guidance.
+
+Use the Speech CLI when:
+* You want to experiment with Speech service features with minimal setup and without having to write code.
+* You have relatively simple requirements for a production application that uses Speech service.
+
+Use the Speech SDK when:
+* You want to integrate Speech service functionality within a specific language or platform (for example, C#, Python, or C++).
+* You have complex requirements that might require advanced service requests.
+* You're developing custom behavior, including response streaming.
+
+## Core features
+
+* **Speech recognition**: Convert speech to text either from audio files or directly from a microphone, or transcribe a recorded conversation.
+
+* **Speech synthesis**: Convert text to speech either by using input from text files or by inputting directly from the command line. Customize speech output characteristics by using [Speech Synthesis Markup Language (SSML) configurations](speech-synthesis-markup.md).
+
+* **Speech translation**: Translate audio in a source language to text or audio in a target language.
+
+* **Run on Azure compute resources**: Send Speech CLI commands to run on an Azure remote compute resource by using `spx webjob`.
+
+## Get started
+
+To get started with the Speech CLI, see the [quickstart](spx-basics.md). This article shows you how to run some basic commands. It also gives you slightly more advanced commands for running batch operations for speech to text and text to speech. After you've read the basics article, you should understand the syntax well enough to start writing some custom commands or automate simple Speech service operations.
+
+## Next steps
+
+- [Get started with the Azure AI Speech CLI](spx-basics.md)
+- [Speech CLI configuration options](./spx-data-store-configuration.md)
+- [Speech CLI batch operations](./spx-batch-operations.md)
ai-services Swagger Documentation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/swagger-documentation.md
+
+ Title: Generate a REST API client library - Speech service
+
+description: The Swagger documentation can be used to auto-generate SDKs for a number of programming languages.
+Last updated : 10/03/2022
+# Generate a REST API client library for the Speech to text REST API
+
+Speech service offers a Swagger specification to interact with a handful of REST APIs used to import data, create models, test model accuracy, create custom endpoints, queue up batch transcriptions, and manage subscriptions. Most operations available through the [Custom Speech area of the Speech Studio](https://aka.ms/speechstudio/customspeech) can be completed programmatically using these APIs.
+
+> [!NOTE]
+> Speech service has several REST APIs for [Speech to text](rest-speech-to-text.md) and [Text to speech](rest-text-to-speech.md).
+>
+> However, only the [Speech to text REST API](rest-speech-to-text.md) is documented in the Swagger specification. See the documents referenced in the previous paragraph for information on all other Speech service REST APIs.
+
+## Generating code from the Swagger specification
+
+The [Swagger specification](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1) has options that allow you to quickly test various paths. However, sometimes it's desirable to generate code for all paths to create a single library of calls that you can base future solutions on. Let's take a look at the process of generating a Python library.
+
+You'll need to set Swagger to the region of your Speech resource. You can confirm the region in the **Overview** section of your Speech resource settings in the Azure portal. The complete list of supported regions is available [here](regions.md#speech-service).
+
+1. In a browser, go to the Swagger specification for your [region](regions.md#speech-service):
+ `https://<your-region>.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1`
+1. On that page, select **API definition**, and select **Swagger**. Copy the URL of the page that appears.
+1. In a new browser, go to [https://editor.swagger.io](https://editor.swagger.io)
+1. Select **File**, select **Import URL**, paste the URL, and select **OK**.
+1. Select **Generate Client** and select **python**. The client library downloads to your computer in a `.zip` file.
+1. Extract everything from the download. You might use `tar -xf` to do so.
+1. Install the extracted module into your Python environment:
+ `pip install path/to/package/python-client`
+1. The installed package is named `swagger_client`. Check that the installation has worked:
+ `python -c "import swagger_client"`
+
+You can use the Python library that you generated with the [Speech service samples on GitHub](https://aka.ms/csspeech/samples).
+
+## Next steps
+
+* [Speech service samples on GitHub](https://aka.ms/csspeech/samples).
+* [Speech to text REST API](rest-speech-to-text.md)
ai-services Text To Speech https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/text-to-speech.md
+
+ Title: Text to speech overview - Speech service
+
+description: Get an overview of the benefits and capabilities of the text to speech feature of the Speech service.
+Last updated : 09/25/2022
+keywords: text to speech
++
+# What is text to speech?
+
+In this overview, you learn about the benefits and capabilities of the text to speech feature of the Speech service, which is part of Azure AI services.
+
+Text to speech enables your applications, tools, or devices to convert text into humanlike synthesized speech. The text to speech capability is also known as speech synthesis. Use humanlike prebuilt neural voices out of the box, or create a custom neural voice that's unique to your product or brand. For a full list of supported voices, languages, and locales, see [Language and voice support for the Speech service](language-support.md?tabs=tts).
+
+## Core features
+
+Text to speech includes the following features:
+
+| Feature | Summary | Demo |
+| | | |
+| Prebuilt neural voice (called *Neural* on the [pricing page](https://azure.microsoft.com/pricing/details/cognitive-services/speech-services/)) | Highly natural out-of-the-box voices. Create an Azure account and Speech service subscription, and then use the [Speech SDK](./get-started-text-to-speech.md) or visit the [Speech Studio portal](https://speech.microsoft.com/portal) and select prebuilt neural voices to get started. Check the [pricing details](https://azure.microsoft.com/pricing/details/cognitive-services/speech-services/). | Check the [Voice Gallery](https://speech.microsoft.com/portal/voicegallery) and determine the right voice for your business needs. |
+| Custom Neural Voice (called *Custom Neural* on the [pricing page](https://azure.microsoft.com/pricing/details/cognitive-services/speech-services/)) | Easy-to-use self-service for creating a natural brand voice, with limited access for responsible use. Create an Azure account and Speech service subscription (with the S0 tier), and [apply](https://aka.ms/customneural) to use the custom neural feature. After you've been granted access, visit the [Speech Studio portal](https://speech.microsoft.com/portal) and select **Custom Voice** to get started. Check the [pricing details](https://azure.microsoft.com/pricing/details/cognitive-services/speech-services/). | Check the [voice samples](https://aka.ms/customvoice). |
+
+### More about neural text to speech features
+
+The text to speech feature of the Speech service on Azure has been fully upgraded to the neural text to speech engine. This engine uses deep neural networks to make the voices of computers nearly indistinguishable from the recordings of people. With the clear articulation of words, neural text to speech significantly reduces listening fatigue when users interact with AI systems.
+
+The patterns of stress and intonation in spoken language are called _prosody_. Traditional text to speech systems break down prosody into separate linguistic analysis and acoustic prediction steps that are governed by independent models. That can result in muffled, buzzy voice synthesis.
+
+Here's more information about neural text to speech features in the Speech service, and how they overcome the limits of traditional text to speech systems:
+
+* **Real-time speech synthesis**: Use the [Speech SDK](./get-started-text-to-speech.md) or [REST API](rest-text-to-speech.md) to convert text to speech by using [prebuilt neural voices](language-support.md?tabs=tts) or [custom neural voices](custom-neural-voice.md).
+
+* **Asynchronous synthesis of long audio**: Use the [batch synthesis API](batch-synthesis.md) (Preview) to asynchronously synthesize text to speech files longer than 10 minutes (for example, audio books or lectures). Unlike synthesis performed via the Speech SDK or the text to speech REST API, responses aren't returned in real time. The expectation is that requests are sent asynchronously, responses are polled for, and synthesized audio is downloaded when the service makes it available.
+
+* **Prebuilt neural voices**: Microsoft neural text to speech capability uses deep neural networks to overcome the limits of traditional speech synthesis with regard to stress and intonation in spoken language. Prosody prediction and voice synthesis happen simultaneously, which results in more fluid and natural-sounding outputs. Each prebuilt neural voice model is available at 24kHz and high-fidelity 48kHz. You can use neural voices to:
+
+ - Make interactions with chatbots and voice assistants more natural and engaging.
+ - Convert digital texts such as e-books into audiobooks.
+ - Enhance in-car navigation systems.
+
+ For a full list of platform neural voices, see [Language and voice support for the Speech service](language-support.md?tabs=tts).
+
+* **Fine-tuning text to speech output with SSML**: Speech Synthesis Markup Language (SSML) is an XML-based markup language that's used to customize text to speech outputs. With SSML, you can adjust pitch, add pauses, improve pronunciation, change speaking rate, adjust volume, and attribute multiple voices to a single document.
+
+ You can use SSML to define your own lexicons or switch to different speaking styles. With the [multilingual voices](https://techcommunity.microsoft.com/t5/azure-ai/azure-text-to-speech-updates-at-build-2021/ba-p/2382981), you can also adjust the speaking languages via SSML. To fine-tune the voice output for your scenario, see [Improve synthesis with Speech Synthesis Markup Language](speech-synthesis-markup.md) and [Speech synthesis with the Audio Content Creation tool](how-to-audio-content-creation.md).
+
+* **Visemes**: [Visemes](how-to-speech-synthesis-viseme.md) are the key poses in observed speech, including the position of the lips, jaw, and tongue in producing a particular phoneme. Visemes have a strong correlation with voices and phonemes.
+
+ By using viseme events in Speech SDK, you can generate facial animation data. This data can be used to animate faces in lip-reading communication, education, entertainment, and customer service. Viseme is currently supported only for the `en-US` (US English) [neural voices](language-support.md?tabs=tts).
+
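+The following minimal sketch shows how the real-time synthesis, SSML, and viseme features described above fit together in C# with the Speech SDK. It's illustrative only: the resource key, region, and voice name are placeholders, and error handling is omitted.
+
+```csharp
+using System;
+using System.Threading.Tasks;
+using Microsoft.CognitiveServices.Speech;
+
+class Program
+{
+    static async Task Main()
+    {
+        // Placeholder key and region: replace with your own Speech resource values.
+        var config = SpeechConfig.FromSubscription("YOUR_RESOURCE_KEY", "YOUR_REGION");
+        using var synthesizer = new SpeechSynthesizer(config);
+
+        // Viseme events arrive while audio is synthesized; each carries a viseme ID and an audio offset.
+        synthesizer.VisemeReceived += (s, e) =>
+            Console.WriteLine($"Viseme {e.VisemeId} at {e.AudioOffset / 10000} ms");
+
+        // SSML selects the voice and inserts a pause; en-US-JennyNeural is just an example voice.
+        var ssml = @"<speak version='1.0' xmlns='https://www.w3.org/2001/10/synthesis' xml:lang='en-US'>
+  <voice name='en-US-JennyNeural'>Hello!<break time='500ms'/>Welcome to text to speech.</voice>
+</speak>";
+
+        var result = await synthesizer.SpeakSsmlAsync(ssml);
+        Console.WriteLine($"Synthesis result: {result.Reason}");
+    }
+}
+```
+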
+> [!NOTE]
+> We plan to retire the traditional/standard voices and non-neural custom voice in 2024. After that, we'll no longer support them.
+>
+> If your applications, tools, or products are using any of the standard voices and custom voices, you must migrate to the neural version. For more information, see [Migrate to neural voices](migration-overview-neural-voice.md).
+
+## Get started
+
+To get started with text to speech, see the [quickstart](get-started-text-to-speech.md). Text to speech is available via the [Speech SDK](speech-sdk.md), the [REST API](rest-text-to-speech.md), and the [Speech CLI](spx-overview.md).
+
+> [!TIP]
+> To convert text to speech with a no-code approach, try the [Audio Content Creation](how-to-audio-content-creation.md) tool in [Speech Studio](https://aka.ms/speechstudio/audiocontentcreation).
+
+## Sample code
+
+Sample code for text to speech is available on GitHub. These samples cover text to speech conversion in most popular programming languages:
+
+* [Text to speech samples (SDK)](https://github.com/Azure-Samples/cognitive-services-speech-sdk)
+* [Text to speech samples (REST)](https://github.com/Azure-Samples/Cognitive-Speech-TTS)
+
+## Custom Neural Voice
+
+In addition to prebuilt neural voices, you can create and fine-tune custom neural voices that are unique to your product or brand. All it takes to get started is a handful of audio files and the associated transcriptions. For more information, see [Get started with Custom Neural Voice](how-to-custom-voice.md).
+
+## Pricing note
+
+### Billable characters
+When you use the text to speech feature, you're billed for each character that's converted to speech, including punctuation. Although the SSML document itself is not billable, optional elements that are used to adjust how the text is converted to speech, like phonemes and pitch, are counted as billable characters. Here's a list of what's billable:
+
+* Text passed to the text to speech feature in the SSML body of the request
+* All markup within the text field of the request body in the SSML format, except for `<speak>` and `<voice>` tags
+* Letters, punctuation, spaces, tabs, markup, and all white-space characters
+* Every code point defined in Unicode
+
+For detailed information, see [Speech service pricing](https://azure.microsoft.com/pricing/details/cognitive-services/speech-services/).
+
+> [!IMPORTANT]
+> Each Chinese character is counted as two characters for billing. This includes kanji used in Japanese, hanja used in Korean, and hanzi used in other languages.
+
+### Model training and hosting time for Custom Neural Voice
+
+Custom Neural Voice training and hosting are both calculated by hour and billed per second. For the billing unit price, see [Speech service pricing](https://azure.microsoft.com/pricing/details/cognitive-services/speech-services/).
+
+Custom Neural Voice (CNV) training time is measured by 'compute hour' (a unit to measure machine running time). Typically, when you train a voice model, two computing tasks run in parallel, so the calculated compute hours are longer than the actual training time. On average, it takes less than one compute hour to train a CNV Lite voice. For CNV Pro, it usually takes 20 to 40 compute hours to train a single-style voice, and around 90 compute hours to train a multi-style voice. CNV training time is billed with a cap of 96 compute hours. For example, if a voice model is trained in 98 compute hours, you're charged for only 96 compute hours.
+
+Custom Neural Voice (CNV) endpoint hosting is measured by the actual time (hour). The hosting time (hours) for each endpoint is calculated at 00:00 UTC every day for the previous 24 hours. For example, if the endpoint has been active for 24 hours on day one, it will be billed for 24 hours at 00:00 UTC the second day. If the endpoint is newly created or has been suspended during the day, it will be billed for its accumulated running time until 00:00 UTC the second day. If the endpoint is not currently hosted, it will not be billed. In addition to the daily calculation at 00:00 UTC each day, the billing is also triggered immediately when an endpoint is deleted or suspended. For example, for an endpoint created at 08:00 UTC on December 1, the hosting hour will be calculated to 16 hours at 00:00 UTC on December 2 and 24 hours at 00:00 UTC on December 3. If the user suspends hosting the endpoint at 16:30 UTC on December 3, the duration (16.5 hours) from 00:00 to 16:30 UTC on December 3 will be calculated for billing.
+
+## Reference docs
+
+* [Speech SDK](speech-sdk.md)
+* [REST API: Text to speech](rest-text-to-speech.md)
+
+## Responsible AI
+
+An AI system includes not only the technology, but also the people who will use it, the people who will be affected by it, and the environment in which it is deployed. Read the transparency notes to learn about responsible AI use and deployment in your systems.
+
+* [Transparency note and use cases for Custom Neural Voice](/legal/cognitive-services/speech-service/custom-neural-voice/transparency-note-custom-neural-voice?context=/azure/cognitive-services/speech-service/context/context)
+* [Characteristics and limitations for using Custom Neural Voice](/legal/cognitive-services/speech-service/custom-neural-voice/characteristics-and-limitations-custom-neural-voice?context=/azure/cognitive-services/speech-service/context/context)
+* [Limited access to Custom Neural Voice](/legal/cognitive-services/speech-service/custom-neural-voice/limited-access-custom-neural-voice?context=/azure/cognitive-services/speech-service/context/context)
+* [Guidelines for responsible deployment of synthetic voice technology](/legal/cognitive-services/speech-service/custom-neural-voice/concepts-guidelines-responsible-deployment-synthetic?context=/azure/cognitive-services/speech-service/context/context)
+* [Disclosure for voice talent](/legal/cognitive-services/speech-service/disclosure-voice-talent?context=/azure/cognitive-services/speech-service/context/context)
+* [Disclosure design guidelines](/legal/cognitive-services/speech-service/custom-neural-voice/concepts-disclosure-guidelines?context=/azure/cognitive-services/speech-service/context/context)
+* [Disclosure design patterns](/legal/cognitive-services/speech-service/custom-neural-voice/concepts-disclosure-patterns?context=/azure/cognitive-services/speech-service/context/context)
+* [Code of Conduct for Text to speech integrations](/legal/cognitive-services/speech-service/tts-code-of-conduct?context=/azure/cognitive-services/speech-service/context/context)
+* [Data, privacy, and security for Custom Neural Voice](/legal/cognitive-services/speech-service/custom-neural-voice/data-privacy-security-custom-neural-voice?context=/azure/cognitive-services/speech-service/context/context)
+
+## Next steps
+
+* [Text to speech quickstart](get-started-text-to-speech.md)
+* [Get the Speech SDK](speech-sdk.md)
ai-services Troubleshooting https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/troubleshooting.md
+
+ Title: Troubleshoot the Speech SDK - Speech service
+
+description: This article provides information to help you solve issues you might encounter when you use the Speech SDK.
++++++ Last updated : 12/08/2022+++
+# Troubleshoot the Speech SDK
+
+This article provides information to help you solve issues you might encounter when you use the Speech SDK.
+
+## Authentication failed
+
+You might observe one of several authentication errors, depending on the programming environment, API, or SDK. Here are some example errors:
+- Did you set the speech resource key and region values?
+- AuthenticationFailure
+- HTTP 403 Forbidden or HTTP 401 Unauthorized. Connection requests without a valid `Ocp-Apim-Subscription-Key` or `Authorization` header are rejected with a status of 403 or 401.
+- ValueError: cannot construct SpeechConfig with the given arguments (or a variation of this message). This error could be observed, for example, when you run one of the Speech SDK for Python quickstarts without setting environment variables. You might also see it when the environment variables are set to invalid values for your key or region.
+- Exception with an error code: 0x5. This access denied error could be observed, for example, when you run one of the Speech SDK for C# quickstarts without setting environment variables.
+
+For baseline authentication troubleshooting tips, see [validate your resource key](#validate-your-resource-key) and [validate an authorization token](#validate-an-authorization-token). For more information about confirming credentials, see [get the keys for your resource](../multi-service-resource.md?pivots=azportal#get-the-keys-for-your-resource).
+
+### Validate your resource key
+
+You can verify that you have a valid resource key by running one of the following commands.
+
+> [!NOTE]
+> Replace `YOUR_RESOURCE_KEY` and `YOUR_REGION` with your own resource key and associated region.
+
+# [PowerShell](#tab/powershell)
+
+```powershell
+$FetchTokenHeader = @{
+ 'Content-type'='application/x-www-form-urlencoded'
+ 'Content-Length'= '0'
+ 'Ocp-Apim-Subscription-Key' = 'YOUR_RESOURCE_KEY'
+}
+$OAuthToken = Invoke-RestMethod -Method POST -Uri https://YOUR_REGION.api.cognitive.microsoft.com/sts/v1.0/issueToken -Headers $FetchTokenHeader
+$OAuthToken
+```
+
+# [cURL](#tab/curl)
+
+```
+curl -v -X POST "https://YOUR_REGION.api.cognitive.microsoft.com/sts/v1.0/issueToken" -H "Ocp-Apim-Subscription-Key: YOUR_RESOURCE_KEY" -H "Content-type: application/x-www-form-urlencoded" -H "Content-Length: 0"
+```
+++
+If you entered a valid resource key, the command returns an authorization token; otherwise, an error is returned.
+
+### Validate an authorization token
+
+If you're using an authorization token for authentication, you might see an authentication error because:
+- The authorization token is invalid
+- The authorization token is expired
+
+If you use an authorization token for authentication, run one of the following commands to verify that the authorization token is still valid. Tokens are valid for 10 minutes.
+
+> [!NOTE]
+> Replace `YOUR_AUDIO_FILE` with the path to your prerecorded audio file. Replace `YOUR_ACCESS_TOKEN` with the authorization token returned in the preceding step. Replace `YOUR_REGION` with the correct region.
+
+# [PowerShell](#tab/powershell)
+
+```powershell
+$SpeechServiceURI =
+'https://YOUR_REGION.stt.speech.microsoft.com/speech/recognition/interactive/cognitiveservices/v1?language=en-US'
+
+# $OAuthToken is the authorization token returned by the token service.
+$RecoRequestHeader = @{
+ 'Authorization' = 'Bearer '+ $OAuthToken
+ 'Transfer-Encoding' = 'chunked'
+ 'Content-type' = 'audio/wav; codec=audio/pcm; samplerate=16000'
+}
+
+# Read audio into byte array.
+$audioBytes = [System.IO.File]::ReadAllBytes("YOUR_AUDIO_FILE")
+
+$RecoResponse = Invoke-RestMethod -Method POST -Uri $SpeechServiceURI -Headers $RecoRequestHeader -Body $audioBytes
+
+# Show the result.
+$RecoResponse
+```
+
+# [cURL](#tab/curl)
+
+```
+curl -v -X POST "https://YOUR_REGION.stt.speech.microsoft.com/speech/recognition/interactive/cognitiveservices/v1?language=en-US" -H "Authorization: Bearer YOUR_ACCESS_TOKEN" -H "Transfer-Encoding: chunked" -H "Content-type: audio/wav; codec=audio/pcm; samplerate=16000" --data-binary @YOUR_AUDIO_FILE
+```
+++
+If you entered a valid authorization token, the command returns the transcription for your audio file; otherwise, an error is returned.
++
+## InitialSilenceTimeout via RecognitionStatus
+
+This issue is usually observed with [single-shot recognition](./how-to-recognize-speech.md#single-shot-recognition) of a single utterance. For example, the error can be returned under the following circumstances:
+
+* The audio begins with a long stretch of silence. In that case, the service stops the recognition after a few seconds and returns `InitialSilenceTimeout`.
+* The audio uses an unsupported codec format, which causes the audio data to be treated as silence.
+
+It's OK to have silence at the beginning of audio, but only when you use [continuous recognition](./how-to-recognize-speech.md#continuous-recognition).
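+
+If you use the Speech SDK rather than the REST API, the same condition typically surfaces as a `NoMatch` result with `NoMatchReason.InitialSilenceTimeout`. Here's a minimal, illustrative C# check; the resource key and region are placeholders, and the sketch assumes the default microphone as input.
+
+```csharp
+using System;
+using System.Threading.Tasks;
+using Microsoft.CognitiveServices.Speech;
+
+class Program
+{
+    static async Task Main()
+    {
+        // Placeholder key and region: replace with your own Speech resource values.
+        var config = SpeechConfig.FromSubscription("YOUR_RESOURCE_KEY", "YOUR_REGION");
+        using var recognizer = new SpeechRecognizer(config);
+
+        // Single-shot recognition of one utterance from the default microphone.
+        var result = await recognizer.RecognizeOnceAsync();
+
+        if (result.Reason == ResultReason.RecognizedSpeech)
+        {
+            Console.WriteLine($"Recognized: {result.Text}");
+        }
+        else if (result.Reason == ResultReason.NoMatch &&
+                 NoMatchDetails.FromResult(result).Reason == NoMatchReason.InitialSilenceTimeout)
+        {
+            Console.WriteLine("Only silence was detected before the timeout.");
+        }
+    }
+}
+```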
+
+## SPXERR_AUDIO_SYS_LIBRARY_NOT_FOUND
+
+This error can be returned, for example, when multiple versions of Python are installed, or if you're not using a supported version of Python. You can try using a different Python interpreter, or uninstall all Python versions and then reinstall the latest version of Python and the Speech SDK.
+
+## HTTP 400 Bad Request
+
+This error usually occurs when the request body contains invalid audio data. Only WAV format is supported. Also, check the request's headers to make sure you specify appropriate values for `Content-Type` and `Content-Length`.
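+
+To isolate the problem from your application code, you can reproduce the request directly. The following C# sketch mirrors the cURL token-validation example earlier in this article: the region, access token, and audio file path are placeholders, and the endpoint and `Content-Type` value are the same ones used in that example.
+
+```csharp
+using System;
+using System.IO;
+using System.Net.Http;
+using System.Threading.Tasks;
+
+class Program
+{
+    static async Task Main()
+    {
+        // Placeholder region, token, and file path: replace with your own values.
+        var endpoint = "https://YOUR_REGION.stt.speech.microsoft.com/speech/recognition/interactive/cognitiveservices/v1?language=en-US";
+
+        using var client = new HttpClient();
+        client.DefaultRequestHeaders.TryAddWithoutValidation("Authorization", "Bearer YOUR_ACCESS_TOKEN");
+
+        // The body must be valid WAV audio. Content-Length is set automatically from the byte array.
+        var audioBytes = await File.ReadAllBytesAsync("YOUR_AUDIO_FILE.wav");
+        var content = new ByteArrayContent(audioBytes);
+        content.Headers.TryAddWithoutValidation("Content-Type", "audio/wav; codec=audio/pcm; samplerate=16000");
+
+        var response = await client.PostAsync(endpoint, content);
+        Console.WriteLine($"{(int)response.StatusCode} {response.ReasonPhrase}");
+        Console.WriteLine(await response.Content.ReadAsStringAsync());
+    }
+}
+```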
+
+## HTTP 408 Request Timeout
+
+The error most likely occurs because no audio data is being sent to the service. This error also might be caused by network issues.
+
+## Next steps
+
+* [Review the release notes](releasenotes.md)
ai-services Tutorial Voice Enable Your Bot Speech Sdk https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/tutorial-voice-enable-your-bot-speech-sdk.md
+
+ Title: "Tutorial: Voice-enable your bot - Speech service"
+
+description: In this tutorial, you'll create an echo bot and configure a client app that lets you speak to your bot and hear it respond back to you.
++++++ Last updated : 01/24/2022+
+ms.devlang: csharp
+++
+# Tutorial: Voice-enable your bot
+
+You can use Azure AI services Speech to voice-enable a chat bot.
+
+In this tutorial, you'll use the Microsoft Bot Framework to create a bot that responds to what you say. You'll deploy your bot to Azure and register it with the Bot Framework Direct Line Speech channel. Then, you'll configure a sample client app for Windows that lets you speak to your bot and hear it speak back to you.
+
+To complete the tutorial, you don't need extensive experience or familiarity with Azure, Bot Framework bots, or Direct Line Speech.
+
+The voice-enabled chat bot that you make in this tutorial follows these steps:
+
+1. The sample client application is configured to connect to the Direct Line Speech channel and the echo bot.
+1. When the user presses a button, voice audio streams from the microphone. Or audio is continuously recorded when a custom keyword is used.
+1. If a custom keyword is used, keyword detection happens on the local device, gating audio streaming to the cloud.
+1. By using the Speech SDK, the sample client application connects to the Direct Line Speech channel and streams audio.
+1. Optionally, higher-accuracy keyword verification happens on the service.
+1. The audio is passed to the speech recognition service and transcribed to text.
+1. The recognized text is passed to the echo bot as a Bot Framework activity.
+1. The response text is turned into audio by the text to speech service, and streamed back to the client application for playback.
+
+![Diagram that illustrates the flow of the Direct Line Speech channel.](media/tutorial-voice-enable-your-bot-speech-sdk/diagram.png "The Speech Channel flow")
+
+> [!NOTE]
+> The steps in this tutorial don't require a paid service. As a new Azure user, you can use credits from your free Azure trial subscription and the free tier of the Speech service to complete this tutorial.
+
+Here's what this tutorial covers:
+> [!div class="checklist"]
+> * Create new Azure resources.
+> * Build, test, and deploy the echo bot sample to Azure App Service.
+> * Register your bot with a Direct Line Speech channel.
+> * Build and run the Windows Voice Assistant Client to interact with your echo bot.
+> * Add custom keyword activation.
+> * Learn to change the language of the recognized and spoken speech.
+
+## Prerequisites
+
+Here's what you'll need to complete this tutorial:
+
+- A Windows 10 PC with a working microphone and speakers (or headphones).
+- [Visual Studio 2017](https://visualstudio.microsoft.com/downloads/) or later, with the **ASP.NET and web development** workload installed.
+- [.NET Framework Runtime 4.6.1](https://dotnet.microsoft.com/download) or later.
+- An Azure account. [Sign up for free](https://azure.microsoft.com/free/cognitive-services/).
+- A [GitHub](https://github.com/) account.
+- [Git for Windows](https://git-scm.com/download/win).
+
+## Create a resource group
+
+The client app that you'll create in this tutorial uses a handful of Azure services. To reduce the round-trip time for responses from your bot, you'll want to make sure that these services are in the same Azure region.
+
+This section walks you through creating a resource group in the West US region. You'll use this resource group when you're creating individual resources for the Bot Framework, the Direct Line Speech channel, and the Speech service.
+
+1. Go to the [Azure portal page for creating a resource group](https://portal.azure.com/#create/Microsoft.ResourceGroup).
+1. Provide the following information:
+ * Set **Subscription** to **Free Trial**. (You can also use an existing subscription.)
+ * Enter a name for **Resource group**. We recommend **SpeechEchoBotTutorial-ResourceGroup**.
+ * From the **Region** dropdown menu, select **West US**.
+1. Select **Review and create**. You should see a banner that reads **Validation passed**.
+1. Select **Create**. It might take a few minutes to create the resource group.
+1. As with the resources you'll create later in this tutorial, it's a good idea to pin this resource group to your dashboard for easy access. If you want to pin this resource group, select the pin icon next to the name.
+
+### Choose an Azure region
+
+Ensure that you use a [supported Azure region](regions.md#voice-assistants). The Direct Line Speech channel uses the text to speech service, which has neural and standard voices. Neural voices are available in [these Azure regions](regions.md#speech-service), and standard voices (which are being retired) are available in [these Azure regions](how-to-migrate-to-prebuilt-neural-voice.md).
+
+For more information about regions, see [Azure locations](https://azure.microsoft.com/global-infrastructure/locations/).
+
+## Create resources
+
+Now that you have a resource group in a supported region, the next step is to create individual resources for each service that you'll use in this tutorial.
+
+### Create a Speech service resource
+
+1. Go to the [Azure portal page for creating a Speech service resource](https://portal.azure.com/#create/Microsoft.CognitiveServicesSpeechServices).
+1. Provide the following information:
+ * For **Name**, we recommend **SpeechEchoBotTutorial-Speech** as the name of your resource.
+ * For **Subscription**, make sure that **Free Trial** is selected.
+ * For **Location**, select **West US**.
+ * For **Pricing tier**, select **F0**. This is the free tier.
+ * For **Resource group**, select **SpeechEchoBotTutorial-ResourceGroup**.
+1. After you've entered all required information, select **Create**. It might take a few minutes to create the resource.
+1. Later in this tutorial, you'll need subscription keys for this service. You can access these keys at any time from your resource's **Overview** area (under **Manage keys**) or the **Keys** area.
+
+At this point, check that your resource group (**SpeechEchoBotTutorial-ResourceGroup**) has a Speech service resource:
+
+| Name | Type | Location |
+||-|-|
+| SpeechEchoBotTutorial-Speech | Speech | West US |
+
+### Create an Azure App Service plan
+
+An App Service plan defines a set of compute resources for a web app to run.
+
+1. Go to the [Azure portal page for creating an Azure App Service plan](https://portal.azure.com/#create/Microsoft.AppServicePlanCreate).
+1. Provide the following information:
+ * Set **Subscription** to **Free Trial**. (You can also use an existing subscription.)
+ * For **Resource group**, select **SpeechEchoBotTutorial-ResourceGroup**.
+ * For **Name**, we recommend **SpeechEchoBotTutorial-AppServicePlan** as the name of your plan.
+ * For **Operating System**, select **Windows**.
+ * For **Region**, select **West US**.
+ * For **Pricing Tier**, make sure that **Standard S1** is selected (it should be the default value). If it isn't, confirm that **Operating System** is set to **Windows**.
+1. Select **Review and create**. You should see a banner that reads **Validation passed**.
+1. Select **Create**. It might take a few minutes to create the resource.
+
+At this point, check that your resource group (**SpeechEchoBotTutorial-ResourceGroup**) has two resources:
+
+| Name | Type | Location |
+||-|-|
+| SpeechEchoBotTutorial-AppServicePlan | App Service Plan | West US |
+| SpeechEchoBotTutorial-Speech | Azure AI services | West US |
+
+## Build an echo bot
+
+Now that you've created resources, start with the echo bot sample, which echoes the text that you've entered as its response. The sample code is already configured to work with the Direct Line Speech channel, which you'll connect after you've deployed the bot to Azure.
+
+> [!NOTE]
+> The instructions that follow, along with more information about the echo bot, are available in the [sample's README on GitHub](https://github.com/microsoft/BotBuilder-Samples/blob/master/samples/csharp_dotnetcore/02.echo-bot/README.md).
+
+### Run the bot sample on your machine
+
+1. Clone the samples repository:
+
+ ```bash
+ git clone https://github.com/Microsoft/botbuilder-samples.git
+ ```
+
+1. Open Visual Studio.
+1. From the toolbar, select **File** > **Open** > **Project/Solution**. Then open the project solution:
+
+ ```
+ samples\csharp_dotnetcore\02.echo-bot\EchoBot.sln
+ ```
+
+1. After the project is loaded, select the <kbd>F5</kbd> key to build and run the project.
+
+ In the browser that opens, you'll see a screen similar to this one:
+
+ > [!div class="mx-imgBorder"]
+ > [![Screenshot that shows the EchoBot page with the message that your bot is ready.](media/tutorial-voice-enable-your-bot-speech-sdk/echobot-running-on-localhost.png "EchoBot running on localhost")](media/tutorial-voice-enable-your-bot-speech-sdk/echobot-running-on-localhost.png#lightbox)
+
+### Test the bot sample with the Bot Framework Emulator
+
+The [Bot Framework Emulator](https://github.com/microsoft/botframework-emulator) is a desktop app that lets bot developers test and debug their bots locally (or remotely through a tunnel). The emulator accepts typed text as the input (not voice). The bot will also respond with text.
+
+Follow these steps to use the Bot Framework Emulator to test your echo bot running locally, with text input and text output. After you deploy the bot to Azure, you'll test it with voice input and voice output.
+
+1. Install [Bot Framework Emulator](https://github.com/Microsoft/BotFramework-Emulator/releases/latest) version 4.3.0 or later.
+1. Open the Bot Framework Emulator, and then select **File** > **Open Bot**.
+1. Enter the URL for your bot. For example:
+
+ ```
+ http://localhost:3978/api/messages
+ ```
+
+1. Select **Connect**.
+1. The bot should greet you with a "Hello and welcome!" message. Type in any text message and confirm that you get a response from the bot.
+
+ This is what an exchange of communication with an echo bot might look like:
+ [![Screenshot shows the Bot Framework Emulator.](media/tutorial-voice-enable-your-bot-speech-sdk/bot-framework-emulator.png "Bot Framework emulator")](media/tutorial-voice-enable-your-bot-speech-sdk/bot-framework-emulator.png#lightbox)
+
+## Deploy your bot to Azure App Service
+
+The next step is to deploy the echo bot to Azure. There are a few ways to deploy a bot, including the [Azure CLI](/azure/bot-service/bot-builder-deploy-az-cli) and [deployment templates](https://github.com/microsoft/BotBuilder-Samples/tree/main/samples/csharp_dotnetcore/13.core-bot). This tutorial focuses on publishing directly from Visual Studio.
+
+> [!NOTE]
+> If **Publish** doesn't appear as you perform the following steps, use Visual Studio Installer to add the **ASP.NET and web development** workload.
+
+1. From Visual Studio, open the echo bot that's been configured for use with the Direct Line Speech channel:
+
+ ```
+ samples\csharp_dotnetcore\02.echo-bot\EchoBot.sln
+ ```
+
+1. In Solution Explorer, right-click the **EchoBot** project and select **Publish**.
+1. In the **Publish** window that opens:
+ 1. Select **Azure** > **Next**.
+ 1. Select **Azure App Service (Windows)** > **Next**.
+ 1. Select **Create a new Azure App Service** by the green plus sign.
+1. When the **App Service (Windows)** window appears:
+ * Select **Add an account**, and sign in with your Azure account credentials. If you're already signed in, select your account from the dropdown list.
+ * For **Name**, enter a globally unique name for your bot. This name is used to create a unique bot URL.
+
+ A default name that includes the date and time appears in the box (for example, **EchoBot20190805125647**). You can use the default name for this tutorial.
+ * For **Subscription**, select **Free Trial**.
+ * For **Resource Group**, select **SpeechEchoBotTutorial-ResourceGroup**.
+ * For **Hosting Plan**, select **SpeechEchoBotTutorial-AppServicePlan**.
+1. Select **Create**. On the final wizard screen, select **Finish**.
+1. Select **Publish**. Visual Studio deploys the bot to Azure.
+
+ You should see a success message in the Visual Studio output window that looks like this:
+
+ ```
+ Publish Succeeded.
+ Web App was published successfully https://EchoBot20190805125647.azurewebsites.net/
+ ```
+
+ Your default browser should open and display a page that reads: "Your bot is ready!"
+
+At this point, check your resource group (**SpeechEchoBotTutorial-ResourceGroup**) in the Azure portal. Confirm that it contains these three resources:
+
+| Name | Type | Location |
+||-|-|
+| EchoBot20190805125647 | App Service | West US |
+| SpeechEchoBotTutorial-AppServicePlan | App Service plan | West US |
+| SpeechEchoBotTutorial-Speech | Azure AI services | West US |
+
+## Enable web sockets
+
+You need to make a small configuration change so that your bot can communicate with the Direct Line Speech channel by using web sockets. Follow these steps to enable web sockets:
+
+1. Go to the [Azure portal](https://portal.azure.com), and select your App Service resource. The resource name should be similar to **EchoBot20190805125647** (your unique app name).
+1. On the left pane, under **Settings**, select **Configuration**.
+1. Select the **General settings** tab.
+1. Find the toggle for **Web sockets** and set it to **On**.
+1. Select **Save**.
+
+> [!TIP]
+> You can use the controls at the top of your Azure App Service page to stop or restart the service. This ability can come in handy when you're troubleshooting.
+
+## Create a channel registration
+
+Now that you've created an Azure App Service resource to host your bot, the next step is to create a channel registration. Creating a channel registration is a prerequisite for registering your bot with Bot Framework channels, including the Direct Line Speech channel. If you want to learn more about how bots use channels, see [Connect a bot to channels](/azure/bot-service/bot-service-manage-channels).
+
+1. Go to the [Azure portal page for creating an Azure bot](https://portal.azure.com/#create/Microsoft.AzureBot).
+1. Provide the following information:
+ * For **Bot handle**, enter **SpeechEchoBotTutorial-BotRegistration-####**. Replace **####** with a number of your choice.
+
+ > [!NOTE]
+ > The bot handle must be globally unique. If you enter one and get the error message "The requested bot ID is not available," pick a different number. The following examples use **8726**.
+ * For **Subscription**, select **Free Trial**.
+ * For **Resource group**, select **SpeechEchoBotTutorial-ResourceGroup**.
+ * For **Location**, select **West US**.
+ * For **Pricing tier**, select **F0**.
+ * Ignore **Auto create App ID and password**.
+1. At the bottom of the **Azure Bot** pane, select **Create**.
+1. After you create the resource, open your **SpeechEchoBotTutorial-BotRegistration-####** resource in the Azure portal.
+1. From the **Settings** area, select **Configuration**.
+1. For **Messaging endpoint**, enter the URL for your web app with the **/api/messages** path appended. For example, if your globally unique app name was **EchoBot20190805125647**, your messaging endpoint would be `https://EchoBot20190805125647.azurewebsites.net/api/messages/`.
+
+At this point, check your resource group (**SpeechEchoBotTutorial-ResourceGroup**) in the Azure portal. It should now show at least four resources:
+
+| Name | Type | Location |
+||-|-|
+| EchoBot20190805125647 | App Service | West US |
+| SpeechEchoBotTutorial-AppServicePlan | App Service plan | West US |
+| SpeechEchoBotTutorial-BotRegistration-8726 | Bot Service | Global |
+| SpeechEchoBotTutorial-Speech | Azure AI services | West US |
+
+> [!IMPORTANT]
+> The Azure AI Bot Service resource shows the Global region, even though you selected West US. This is expected.
+
+## Optional: Test in web chat
+
+The **Azure Bot** page has a **Test in Web Chat** option under **Settings**. It won't work by default with your bot because the web chat needs to authenticate against your bot.
+
+If you want to test your deployed bot with text input, use the following steps. Note that these steps are optional and aren't required for you to continue with the tutorial.
+
+1. In the [Azure portal](https://portal.azure.com), find and open your **SpeechEchoBotTutorial-BotRegistration-####** resource.
+1. From the **Settings** area, select **Configuration**. Copy the value under **Microsoft App ID**.
+1. Open the Visual Studio EchoBot solution. In Solution Explorer, find and double-select **appsettings.json**.
+1. Replace the empty string next to **MicrosoftAppId** in the JSON file with the copied ID value.
+1. Go back to the Azure portal. In the **Settings** area, select **Configuration**. Then select **Manage** next to **Microsoft App ID**.
+1. Select **New client secret**. Add a description (for example, **web chat**) and select **Add**. Copy the new secret.
+1. Replace the empty string next to **MicrosoftAppPassword** in the JSON file with the copied secret value.
+1. Save the JSON file. It should look something like this code:
+
+ ```json
+ {
+ "MicrosoftAppId": "3be0abc2-ca07-475e-b6c3-90c4476c4370",
+ "MicrosoftAppPassword": "-zRhJZ~1cnc7ZIlj4Qozs_eKN.8Cq~U38G"
+ }
+ ```
+
+1. Republish the app: right-click the **EchoBot** project in Visual Studio Solution Explorer, select **Publish**, and then select the **Publish** button.
+
+## Register the Direct Line Speech channel
+
+Now it's time to register your bot with the Direct Line Speech channel. This channel creates a connection between your bot and a client app compiled with the Speech SDK.
+
+1. In the [Azure portal](https://portal.azure.com), find and open your **SpeechEchoBotTutorial-BotRegistration-####** resource.
+1. From the **Settings** area, select **Channels** and then take the following steps:
+ 1. Under **More channels**, select **Direct Line Speech**.
+ 1. Review the text on the **Configure Direct line Speech** page, and then expand the **Cognitive service account** dropdown menu.
+ 1. Select the Speech service resource that you created earlier (for example, **SpeechEchoBotTutorial-Speech**) from the menu to associate your bot with your subscription key.
+ 1. Ignore the rest of the optional fields.
+ 1. Select **Save**.
+
+1. From the **Settings** area, select **Configuration** and then take the following steps:
+ 1. Select the **Enable Streaming Endpoint** checkbox. This step is necessary for creating a communication protocol built on web sockets between your bot and the Direct Line Speech channel.
+ 1. Select **Save**.
+
+If you want to learn more, see [Connect a bot to Direct Line Speech](/azure/bot-service/bot-service-channel-connect-directlinespeech).
+
+## Run the Windows Voice Assistant Client
+
+The Windows Voice Assistant Client is a Windows Presentation Foundation (WPF) app in C# that uses the [Speech SDK](./speech-sdk.md) to manage communication with your bot through the Direct Line Speech channel. Use it to interact with and test your bot before writing a custom client app. It's open source, so you can either download the executable file and run it, or build it yourself.
+
+The Windows Voice Assistant Client has a simple UI that allows you to configure the connection to your bot, view the text conversation, view Bot Framework activities in JSON format, and display adaptive cards. It also supports the use of custom keywords. You'll use this client to speak with your bot and receive a voice response.
+
+> [!NOTE]
+> At this point, confirm that your microphone and speakers are enabled and working.
+
+1. Go to the [GitHub repository for the Windows Voice Assistant Client](https://github.com/Azure-Samples/Cognitive-Services-Voice-Assistant/blob/master/clients/csharp-wpf/README.md).
+1. Follow the provided instructions to either:
+ * Download a prebuilt executable file in a .zip package to run
+ * Build the executable file yourself, by cloning the repository and building the project
+
+1. Open the **VoiceAssistantClient.exe** client application and configure it to connect to your bot, by following the instructions in the GitHub repository.
+1. Select **Reconnect** and make sure you see the message "New conversation started - type or press the microphone button."
+1. Let's test it out. Select the microphone button, and speak a few words in English. The recognized text appears as you speak. When you're done speaking, the bot replies in its own voice, saying "echo" followed by the recognized words.
+
+ You can also use text to communicate with the bot. Just type in the text on the bottom bar.
+
+### Troubleshoot errors in the Windows Voice Assistant Client
+
+If you get an error message in your main app window, use this table to identify and troubleshoot the problem:
+
+| Message | What should you do? |
+|-|-|
+|Error (AuthenticationFailure) : WebSocket Upgrade failed with an authentication error (401). Check for correct resource key (or authorization token) and region name| On the **Settings** page of the app, make sure that you entered the key and its region correctly. |
+|Error (ConnectionFailure) : Connection was closed by the remote host. Error code: 1011. Error details: We could not connect to the bot before sending a message | Make sure that you [selected the Enable Streaming Endpoint checkbox](#register-the-direct-line-speech-channel) and/or [turned on web sockets](#enable-web-sockets).<br>Make sure that Azure App Service is running. If it is, try restarting it.|
+|Error (ConnectionFailure) : Connection was closed by the remote host. Error code: 1002. Error details: The server returned status code '503' when status code '101' was expected | Make sure that you [selected the Enable Streaming Endpoint checkbox](#register-the-direct-line-speech-channel) and/or [turned on web sockets](#enable-web-sockets).<br>Make sure that Azure App Service is running. If it is, try restarting it.|
+|Error (ConnectionFailure) : Connection was closed by the remote host. Error code: 1011. Error details: Response status code does not indicate success: 500 (InternalServerError)| Your bot specified a neural voice in the [speak](https://github.com/microsoft/botframework-sdk/blob/master/specs/botframework-activity/botframework-activity.md#speak) field of its output activity, but the Azure region associated with your resource key doesn't support neural voices. See [neural voices](./regions.md#speech-service) and [standard voices](how-to-migrate-to-prebuilt-neural-voice.md).|
+
+If the actions in the table don't address your problem, see [Voice assistants: Frequently asked questions](faq-voice-assistants.yml). If you still can't resolve your problem after following all the steps in this tutorial, please enter a new issue on the [Voice Assistant GitHub page](https://github.com/Azure-Samples/Cognitive-Services-Voice-Assistant/issues).
+
+#### A note on connection timeout
+
+If you're connected to a bot and no activity has happened in the last five minutes, the service automatically closes the web socket connection with the client and with the bot. This is by design. A message appears on the bottom bar: "Active connection timed out but ready to reconnect on demand."
+
+You don't need to select the **Reconnect** button. Press the microphone button and start talking, enter a text message, or say the keyword (if one is enabled). The connection is automatically reestablished.
+
+### View bot activities
+
+Every bot sends and receives activity messages. In the **Activity Log** window of the Windows Voice Assistant Client, timestamped logs show each activity that the client has received from the bot. You can also see the activities that the client sent to the bot by using the [DialogServiceConnector.SendActivityAsync](/dotnet/api/microsoft.cognitiveservices.speech.dialog.dialogserviceconnector.sendactivityasync) method. When you select a log item, it shows the details of the associated activity as JSON.
+
+Here's sample JSON of an activity that the client received:
+
+```json
+{
+ "attachments":[],
+ "channelData":{
+ "conversationalAiData":{
+ "requestInfo":{
+ "interactionId":"8d5cb416-73c3-476b-95fd-9358cbfaebfa",
+ "version":"0.2"
+ }
+ }
+ },
+ "channelId":"directlinespeech",
+ "conversation":{
+ "id":"129ebffe-772b-47f0-9812-7c5bfd4aca79",
+ "isGroup":false
+ },
+ "entities":[],
+ "from":{
+ "id":"SpeechEchoBotTutorial-BotRegistration-8726"
+ },
+ "id":"89841b4d-46ce-42de-9960-4fe4070c70cc",
+ "inputHint":"acceptingInput",
+ "recipient":{
+ "id":"129ebffe-772b-47f0-9812-7c5bfd4aca79|0000"
+ },
+ "replyToId":"67c823b4-4c7a-4828-9d6e-0b84fd052869",
+ "serviceUrl":"urn:botframework:websocket:directlinespeech",
+ "speak":"<speak version='1.0' xmlns='https://www.w3.org/2001/10/synthesis' xml:lang='en-US'><voice name='en-US-JennyNeural'>Echo: Hello and welcome.</voice></speak>",
+ "text":"Echo: Hello and welcome.",
+ "timestamp":"2019-07-19T20:03:51.1939097Z",
+ "type":"message"
+}
+```
+
+To learn more about what's returned in the JSON output, see the [fields in the activity](https://github.com/microsoft/botframework-sdk/blob/master/specs/botframework-activity/botframework-activity.md). For the purpose of this tutorial, you can focus on the [text](https://github.com/microsoft/botframework-sdk/blob/master/specs/botframework-activity/botframework-activity.md#text) and [speak](https://github.com/microsoft/botframework-sdk/blob/master/specs/botframework-activity/botframework-activity.md#speak) fields.
+
+### View client source code for calls to the Speech SDK
+
+The Windows Voice Assistant Client uses the NuGet package [Microsoft.CognitiveServices.Speech](https://www.nuget.org/packages/Microsoft.CognitiveServices.Speech/), which contains the Speech SDK. A good place to start reviewing the sample code is the method `InitSpeechConnector()` in the file [VoiceAssistantClient\MainWindow.xaml.cs](https://github.com/Azure-Samples/Cognitive-Services-Voice-Assistant/blob/master/clients/csharp-wpf/VoiceAssistantClient/MainWindow.xaml.cs), which creates these two Speech SDK objects:
+- [DialogServiceConfig](/dotnet/api/microsoft.cognitiveservices.speech.dialog.dialogserviceconfig): For configuration settings like resource key and its region.
+- [DialogServiceConnector](/dotnet/api/microsoft.cognitiveservices.speech.dialog.dialogserviceconnector.-ctor): To manage the channel connection and client subscription events for handling recognized speech and bot responses.
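+
+Here's a simplified, hypothetical sketch of how those two objects are typically wired up; it isn't the client's actual code. The key and region are placeholders, and `BotFrameworkConfig` (which derives from `DialogServiceConfig`) is used to target the Direct Line Speech channel.
+
+```csharp
+using System;
+using System.Threading.Tasks;
+using Microsoft.CognitiveServices.Speech.Audio;
+using Microsoft.CognitiveServices.Speech.Dialog;
+
+class Program
+{
+    static async Task Main()
+    {
+        // Placeholder key and region: replace with your own Speech resource values.
+        var config = BotFrameworkConfig.FromSubscription("YOUR_RESOURCE_KEY", "YOUR_REGION");
+        using var audioConfig = AudioConfig.FromDefaultMicrophoneInput();
+        using var connector = new DialogServiceConnector(config, audioConfig);
+
+        // Recognized speech and bot activities arrive through events.
+        connector.Recognized += (s, e) => Console.WriteLine($"Recognized: {e.Result.Text}");
+        connector.ActivityReceived += (s, e) => Console.WriteLine($"Activity: {e.Activity}");
+
+        await connector.ConnectAsync();
+        await connector.ListenOnceAsync();
+    }
+}
+```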
+
+## Add custom keyword activation
+
+The Speech SDK supports custom keyword activation. Similar to "Hey Cortana" for a Microsoft assistant, you can write an app that will continuously listen for a keyword of your choice. Keep in mind that a keyword can be a single word or a multiple-word phrase.
+
+> [!NOTE]
+> The term *keyword* is often used interchangeably with the term *wake word*. You might see both used in Microsoft documentation.
+
+Keyword detection happens on the client app. If you're using a keyword, audio is streamed to the Direct Line Speech channel only if the keyword is detected. The Direct Line Speech channel includes a component called *keyword verification*, which does more complex processing in the cloud to verify that the keyword you've chosen is at the start of the audio stream. If keyword verification succeeds, then the channel will communicate with the bot.
+
+Follow these steps to create a keyword model, configure the Windows Voice Assistant Client to use this model, and test it with your bot:
+
+1. [Create a custom keyword by using the Speech service](./custom-keyword-basics.md).
+1. Unzip the model file that you downloaded in the previous step. It should be named for your keyword. You're looking for a file named **kws.table**.
+1. In the Windows Voice Assistant Client, find the **Settings** menu (the gear icon in the upper right). For **Model file path**, enter the full path name for the **kws.table** file from step 2.
+1. Select the **Enabled** checkbox. You should see this message next to the checkbox: "Will listen for the keyword upon next connection." If you've provided the wrong file or an invalid path, you should see an error message.
+1. Enter the values for **Subscription key** and **Subscription key region**, and then select **OK** to close the **Settings** menu.
+1. Select **Reconnect**. You should see a message that reads: "New conversation started - type, press the microphone button, or say the keyword." The app is now continuously listening.
+1. Speak any phrase that starts with your keyword. For example: "{your keyword}, what time is it?" You don't need to pause after uttering the keyword. When you're finished, two things happen:
+ * You see a transcription of what you spoke.
+ * You hear the bot's response.
+1. Continue to experiment with the three input types that your bot supports:
+ * Entering text on the bottom bar
+ * Pressing the microphone icon and speaking
+ * Saying a phrase that starts with your keyword
+
+### View the source code that enables keyword detection
+
+In the source code of the Windows Voice Assistant Client, use these files to review the code that enables keyword detection:
+
+- [VoiceAssistantClient\Models.cs](https://github.com/Azure-Samples/Cognitive-Services-Voice-Assistant/blob/master/clients/csharp-wpf/VoiceAssistantClient/Models.cs) includes a call to the Speech SDK method [KeywordRecognitionModel.fromFile()](/javascript/api/microsoft-cognitiveservices-speech-sdk/keywordrecognitionmodel#fromfile-string-). This method is used to instantiate the model from a local file on disk.
+- [VoiceAssistantClient\MainWindow.xaml.cs](https://github.com/Azure-Samples/Cognitive-Services-Voice-Assistant/blob/master/clients/csharp-wpf/VoiceAssistantClient/MainWindow.xaml.cs) includes a call to the Speech SDK method [DialogServiceConnector.StartKeywordRecognitionAsync()](/dotnet/api/microsoft.cognitiveservices.speech.dialog.dialogserviceconnector.startkeywordrecognitionasync). This method activates continuous keyword detection.
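+
+For reference, here's a hedged sketch of how those two calls fit together. It assumes you already have a configured `DialogServiceConnector` (as in the earlier sketch) and the path to the **kws.table** file you downloaded; it isn't the client's actual code.
+
+```csharp
+using System.Threading.Tasks;
+using Microsoft.CognitiveServices.Speech;
+using Microsoft.CognitiveServices.Speech.Dialog;
+
+static class KeywordSetup
+{
+    public static async Task EnableKeywordAsync(DialogServiceConnector connector, string modelPath)
+    {
+        // Load the keyword model produced by the Speech service (the kws.table file).
+        var keywordModel = KeywordRecognitionModel.FromFile(modelPath);
+
+        // Start continuous keyword detection. Audio is streamed to the Direct Line Speech
+        // channel only after the keyword is detected on the device.
+        await connector.StartKeywordRecognitionAsync(keywordModel);
+    }
+}
+```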
+
+## Optional: Change the language and bot voice
+
+The bot that you've created listens for speech in English and responds in English, with a default US English text to speech voice. However, you're not limited to using English or a default voice.
+
+In this section, you'll learn how to change the language that your bot will listen for and respond in. You'll also learn how to select a different voice for that language.
+
+### Change the language
+
+You can choose from any of the languages mentioned in the [speech to text](language-support.md?tabs=stt) table. The following example changes the language to German.
+
+1. Open the Windows Voice Assistant Client app, select the **Settings** button (upper-right gear icon), and enter **de-de** in the **Language** field. This is the locale value mentioned in the [speech to text](language-support.md?tabs=stt) table.
+
+ This step sets the spoken language to be recognized, overriding the default **en-us**. It also instructs the Direct Line Speech channel to use a default German voice for the bot reply.
+1. Close the **Settings** page, and then select the **Reconnect** button to establish a new connection to your echo bot.
+1. Select the microphone button, and say a phrase in German. The recognized text appears, and the echo bot replies with the default German voice.
+
+### Change the default bot voice
+
+You can select the text to speech voice and control pronunciation if the bot specifies the reply in the form of a [Speech Synthesis Markup Language](speech-synthesis-markup.md) (SSML) instead of simple text. The echo bot doesn't use SSML, but you can easily modify the code to do that.
+
+The following example adds SSML to the echo bot reply so that the German voice `de-DE-RalfNeural` (a male voice) is used instead of the default female voice. See the [list of standard voices](how-to-migrate-to-prebuilt-neural-voice.md) and [list of neural voices](language-support.md?tabs=tts) that are supported for your language.
+
+1. Open **samples\csharp_dotnetcore\02.echo-bot\echo-bot.cs**.
+1. Find these lines:
+
+ ```csharp
+ var replyText = $"Echo: {turnContext.Activity.Text}";
+ await turnContext.SendActivityAsync(MessageFactory.Text(replyText, replyText), cancellationToken);
+ ```
+
+ Replace them with this code:
+
+ ```csharp
+ var replyText = $"Echo: {turnContext.Activity.Text}";
+ var replySpeak = @"<speak version='1.0' xmlns='https://www.w3.org/2001/10/synthesis' xml:lang='de-DE'>
+ <voice name='de-DE-RalfNeural'>" +
+ $"{replyText}" + "</voice></speak>";
+ await turnContext.SendActivityAsync(MessageFactory.Text(replyText, replySpeak), cancellationToken);
+ ```
+1. Build your solution in Visual Studio and fix any build errors.
+
+The second argument in the method `MessageFactory.Text` sets the [activity speak field](https://github.com/Microsoft/botframework-sdk/blob/master/specs/botframework-activity/botframework-activity.md#speak) in the bot reply. With the preceding change, its value has changed from simple text to SSML in order to specify a non-default German voice.
+
+### Redeploy your bot
+
+Now that you've made the necessary change to the bot, the next step is to republish it to Azure App Service and try it out:
+
+1. In the Solution Explorer window, right-click the **EchoBot** project and select **Publish**.
+1. Your previous deployment configuration has already been loaded as the default. Select **Publish** next to **EchoBot20190805125647 - Web Deploy**.
+
+ The **Publish Succeeded** message appears in the Visual Studio output window, and a webpage opens with the message "Your bot is ready!"
+1. Open the Windows Voice Assistant Client app. Select the **Settings** button (upper-right gear icon), and make sure that you still have **de-de** in the **Language** field.
+1. Follow the instructions in [Run the Windows Voice Assistant Client](#run-the-windows-voice-assistant-client) to reconnect with your newly deployed bot, speak in the new language, and hear your bot reply in that language with the new voice.
+
+## Clean up resources
+
+If you're not going to continue using the echo bot deployed in this tutorial, you can remove it and all its associated Azure resources by deleting the Azure resource group:
+
+1. In the [Azure portal](https://portal.azure.com), select **Resource Groups** under **Azure services**.
+1. Find the **SpeechEchoBotTutorial-ResourceGroup** resource group. Select the three dots (...).
+1. Select **Delete resource group**.
+
+## Explore documentation
+
+* [Deploy to an Azure region near you](https://azure.microsoft.com/global-infrastructure/locations/) to see the improvement in bot response time.
+* [Deploy to an Azure region that supports high-quality neural text to speech voices](./regions.md#speech-service).
+* Get pricing associated with the Direct Line Speech channel:
+ * [Bot Service pricing](https://azure.microsoft.com/pricing/details/bot-service/)
+ * [Speech service](https://azure.microsoft.com/pricing/details/cognitive-services/speech-services/)
+* Build and deploy your own voice-enabled bot:
+ * Build a [Bot Framework bot](https://dev.botframework.com/). Then [register it with the Direct Line Speech channel](/azure/bot-service/bot-service-channel-connect-directlinespeech) and [customize your bot for voice](/azure/bot-service/directline-speech-bot).
+ * Explore existing [Bot Framework solutions](https://microsoft.github.io/botframework-solutions/index): [Build a virtual assistant](https://microsoft.github.io/botframework-solutions/overview/virtual-assistant-solution/) and [extend it to Direct Line Speech](https://microsoft.github.io/botframework-solutions/clients-and-channels/tutorials/enable-speech/1-intro/).
+
+## Next steps
+
+> [!div class="nextstepaction"]
+> [Build your own client app by using the Speech SDK](./quickstarts/voice-assistants.md?pivots=programming-language-csharp)
ai-services Voice Assistants https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/voice-assistants.md
+
+ Title: Voice assistants overview - Speech service
+
+description: An overview of the features, capabilities, and restrictions for voice assistants with the Speech SDK.
++++++ Last updated : 03/11/2020++++
+# What is a voice assistant?
+
+By using voice assistants with the Speech service, developers can create natural, human-like, conversational interfaces for their applications and experiences. The voice assistant service provides fast, reliable interaction between a device and an assistant implementation.
+
+## Choose an assistant solution
+
+The first step in creating a voice assistant is to decide what you want it to do. Speech service provides multiple, complementary solutions for crafting assistant interactions. You might want your application to support an open-ended conversation with phrases such as "I need to go to Seattle" or "What kind of pizza can I order?" For flexibility and versatility, you can add voice in and voice out capabilities to a bot by using Azure AI Bot Service with the [Direct Line Speech](direct-line-speech.md) channel.
+
+If you aren't yet sure what you want your assistant to do, we recommend [Direct Line Speech](direct-line-speech.md) as the best option. It offers integration with a rich set of tools and authoring aids, such as the [Virtual Assistant solution and enterprise template](/azure/bot-service/bot-builder-enterprise-template-overview) and the [QnA Maker service](../qnamaker/overview/overview.md), to build on common patterns and use your existing knowledge sources.
+
+## Reference architecture for building a voice assistant by using the Speech SDK
+
+ ![Conceptual diagram of the voice assistant orchestration service flow.](media/voice-assistants/overview.png)
+
+## Core features
+
+Whether you choose [Direct Line Speech](direct-line-speech.md) or another solution to create your assistant interactions, you can use a rich set of customization features to customize your assistant to your brand, product, and personality.
+
+| Category | Features |
+|-|-|
+|[Custom keyword](./custom-keyword-basics.md) | Users can start conversations with assistants by using a custom keyword such as "Hey Contoso." An app does this with a custom keyword engine in the Speech SDK, which you can configure by going to [Get started with custom keywords](./custom-keyword-basics.md). Voice assistants can use service-side keyword verification to improve the accuracy of the keyword activation (versus using the device alone).
+|[Speech to text](speech-to-text.md) | Voice assistants convert real-time audio into recognized text by using [speech to text](speech-to-text.md) from the Speech service. This text is available, as it's transcribed, to both your assistant implementation and your client application.
+|[Text to speech](text-to-speech.md) | Textual responses from your assistant are synthesized through [text to speech](text-to-speech.md) from the Speech service. This synthesis is then made available to your client application as an audio stream. Microsoft offers the ability to build your own custom, high-quality Neural Text to speech (Neural TTS) voice that gives a voice to your brand.
+
+## Get started with voice assistants
+
+We offer the following quickstart article that's designed to have you running code in less than 10 minutes: [Quickstart: Create a custom voice assistant by using Direct Line Speech](quickstarts/voice-assistants.md)
+
+## Sample code and tutorials
+
+Sample code for creating a voice assistant is available on GitHub. The samples cover the client application for connecting to your assistant in several popular programming languages.
+
+* [Voice assistant samples on GitHub](https://github.com/Azure-Samples/Cognitive-Services-Voice-Assistant)
+* [Tutorial: Voice-enable an assistant that's built by using Azure AI Bot Service with the C# Speech SDK](tutorial-voice-enable-your-bot-speech-sdk.md)
+
+## Customization
+
+Voice assistants that you build by using Speech service can use a full range of customization options.
+
+* [Custom Speech](./custom-speech-overview.md)
+* [Custom Voice](how-to-custom-voice.md)
+* [Custom Keyword](keyword-recognition-overview.md)
+
+> [!NOTE]
+> Customization options vary by language and locale. To learn more, see [Supported languages](language-support.md).
+
+## Next steps
+
+* [Learn more about Direct Line Speech](direct-line-speech.md)
+* [Get the Speech SDK](speech-sdk.md)
ai-services Windows Voice Assistants Automatic Enablement Guidelines https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/windows-voice-assistants-automatic-enablement-guidelines.md
+
+ Title: Privacy guidelines for voice assistants on Windows
+
+description: The instructions to enable voice activation for a voice agent by default.
++++++ Last updated : 04/15/2020+++
+# Privacy guidelines for voice assistants on Windows
+
+It's important that users are given clear information about how their voice data is collected and used, and that they're given control over whether and how this collection happens. These core facets of privacy, *disclosure* and *consent*, are especially important for voice assistants integrated into Windows, given the always-listening nature of their use.
+
+Developers creating voice assistants on Windows must include clear user interface elements within their applications that reflect the listening capabilities of the assistant.
+
+> [!NOTE]
+> Failure to provide appropriate disclosure and consent for an assistant application, including after application updates, may result in the assistant becoming unavailable for voice activation until privacy issues are resolved.
+
+## Minimum requirements for feature inclusion
+
+Windows users can see and control the availability of their assistant applications in **`Settings > Privacy > Voice activation`**.
+
+ > [!div class="mx-imgBorder"]
+ > [![Screenshot shows options to control availability of Cortana.](media/voice-assistants/windows_voice_assistant/privacy-app-listing.png "A Windows voice activation privacy setting entry for an assistant application")](media/voice-assistants/windows_voice_assistant/privacy-app-listing.png#lightbox)
+
+To become eligible for inclusion in this list, contact Microsoft at winvoiceassistants@microsoft.com to get started. By default, users need to explicitly enable voice activation for a new assistant in **`Settings > Privacy > Voice activation`**, which an application can open by launching the `ms-settings:privacy-voiceactivation` protocol URI. An allowed application appears in the list after it has run and used the `Windows.ApplicationModel.ConversationalAgent` APIs. Its voice activation settings become modifiable after the application has obtained microphone consent from the user.
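+
+For example, a hypothetical helper that opens this settings page from a WinRT app might look like the following sketch; `Launcher.LaunchUriAsync` is a standard Windows API, and the class and method names here are illustrative only.
+
+```csharp
+using System;
+using System.Threading.Tasks;
+using Windows.System;
+
+static class PrivacySettings
+{
+    // Opens the Voice activation privacy page so the user can enable the assistant.
+    public static async Task OpenVoiceActivationSettingsAsync()
+    {
+        await Launcher.LaunchUriAsync(new Uri("ms-settings:privacy-voiceactivation"));
+    }
+}
+```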
+
+Because the Windows privacy settings include information about how voice activation works and provide a standard UI for controlling permission, disclosure and consent are both fulfilled. The assistant will remain in this allowed list as long as it does not:
+
+* Mislead or misinform the user about voice activation or voice data handling by the assistant
+* Unduly interfere with another assistant
+* Break any other relevant Microsoft policies
+
+If any of the above are discovered, Microsoft may remove an assistant from the allowed list until problems are resolved.
+
+> [!NOTE]
+> In all cases, voice activation permission requires microphone permission. If an assistant application does not have microphone access, it will not be eligible for voice activation and will appear in the voice activation privacy settings in a disabled state.
+
+## Additional requirements for inclusion in Microphone consent
+
+Assistant authors who want to make it easier and smoother for their users to opt in to voice activation can do so by meeting additional requirements to adequately fulfill disclosure and consent without an extra trip to the settings page. Once approved, voice activation will become available immediately once a user grants microphone permission to the assistant application. To qualify for this, an assistant application must do the following **before** prompting for Microphone consent (for example, by using the `AppCapability.RequestAccessAsync` API):
+
+1. Provide clear and prominent indication to the user that the application would like to listen to the user's voice for a keyword, *even when the application is not running*, and would like the user's consent
+1. Include relevant information about data usage and privacy policies, such as a link to an official privacy statement
+1. Avoid any directive or leading wording (for example, "click yes on the following prompt") in the experience flow that discloses audio capture behavior
+
+If an application accomplishes all of the above, it's eligible to enable voice activation capability together with Microphone consent. Contact winvoiceassistants@microsoft.com for more information and to review a first use experience.
+
+> [!NOTE]
+> Voice activation above lock is not eligible for automatic enablement with Microphone access and will still require a user to visit the Voice activation privacy page to enable above-lock access for an assistant.
+
+## Next steps
+
+> [!div class="nextstepaction"]
+> [Learn about best practices for voice assistants on Windows](windows-voice-assistants-best-practices.md)
ai-services Windows Voice Assistants Best Practices https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/windows-voice-assistants-best-practices.md
+
+ Title: Voice Assistants on Windows - Design Guidelines
+
+description: Guidelines for best practices when designing a voice agent experience.
++++++ Last updated : 05/1/2020+++
+# Design assistant experiences for Windows 10
+
+Voice assistants developed for Windows 10 must implement the user experience guidelines below to provide the best possible voice activation experiences. This document guides developers through the key work needed for a voice assistant to integrate with the Windows 10 Shell.
+
+## Contents
+
+- [Summary of voice activation views supported in Windows 10](#summary-of-voice-activation-views-supported-in-windows-10)
+- [Requirements summary](#requirements-summary)
+- [Best practices for good listening experiences](#best-practices-for-good-listening-experiences)
+- [Design guidance for in-app voice activation](#design-guidance-for-in-app-voice-activation)
+- [Design guidance for voice activation above lock](#design-guidance-for-voice-activation-above-lock)
+- [Design guidance for voice activation preview](#design-guidance-for-voice-activation-preview)
+
+## Summary of voice activation views supported in Windows 10
+
+Windows 10 infers the appropriate activation experience based on the device context. The following summary table is a high-level overview of the different views available when the screen is on.
+
+| View (Availability) | Device context | Customer goal | Appears when | Design needs |
+| | | | | |
+| **In-app (19H1)** | Below lock, assistant has focus | Interact with the assistant app | Assistant processes the request in-app | Main in-app view listening experience |
+| **Above lock (19H2)** | Above lock, unauthenticated | Interact with the assistant, but from a distance | System is locked and assistant requests activation | Full-screen visuals for far-field UI. Implement dismissal policies to not block unlocking. |
+| **Voice activation preview (20H1)** | Below lock, assistant does not have focus | Interact with the assistant, but in a less intrusive way | System is below lock and assistant requests background activation | Minimal canvas. Resize or hand-off to the main app view as needed. |
+
+## Requirements summary
+
+Minimal effort is required to access the different experiences. However, assistants do need to implement the right design guidance for each view. The table below provides a checklist of the requirements that must be followed.
+
+| **Voice activation view** | **Assistant requirements summary** |
+| | |
+| **In-app** | <ul><li>Process the request in-app</li><li>Provide UI indicators for listening states</li><li>Adapt the UI as window sizes change</li></ul> |
+| **Above lock** | <ul><li>Detect the lock state and request activation</li><li>Don't provide persistent UX that would block access to the Windows lock screen</li><li>Provide full-screen visuals and a voice-first experience</li><li>Honor the dismissal guidance below</li><li>Follow the privacy and security considerations below</li></ul> |
+| **Voice activation preview** | <ul><li>Detect the unlock state and request background activation</li><li>Draw minimal listening UX in the preview pane</li><li>Draw a close X in the top right; when it's pressed, self-dismiss and stop streaming audio</li><li>Resize or hand off to the main assistant app view as needed to provide answers</li></ul> |
+
+## Best practices for good listening experiences
+
+Assistants should build a listening experience that provides critical feedback so the customer can understand the state of the assistant. Below are some possible states to consider when building an assistant experience. These are suggestions, not mandatory guidance.
+
+- Assistant is available for speech input
+- Assistant is in the process of activating (either a keyword or mic button press)
+- Assistant is actively streaming audio to the assistant cloud
+- Assistant is ready for the customer to start speaking
+- Assistant is hearing that words are being said
+- Assistant understands that the customer is done speaking
+- Assistant is processing and preparing a response
+- Assistant is responding
+
+Even if states change rapidly, it's worth providing UX for each state, since durations vary across the Windows ecosystem. Visual feedback, as well as brief audio chimes or chirps (also called "earcons"), can be part of the solution. Likewise, visual cards coupled with audio descriptions make for good response options.
+
+## Design guidance for in-app voice activation
+
+When the assistant app has focus, the customer intent is clearly to interact with the app, so all voice activation experiences should be handled by the main app view. This view may be resized by the customer. To help explain assistant shell interactions, the rest of this document uses the concrete example of a financial service assistant named Contoso. In this and subsequent diagrams, what the customer says will appear in cartoon speech bubbles on the left with assistant responses in cartoon bubbles on the right.
+
+**In-app view. Initial state when voice activation begins:**
+![Screenshot showing the Contoso finance assistant app open to its default canvas. A cartoon speech bubble on the right says "Contoso".](media/voice-assistants/windows_voice_assistant/initial_state.png)
+
+**In-app view. After successful voice activation, the listening experience begins:**
+![Screenshot of voice assistant on Windows while the voice assistant is listening.](media/voice-assistants/windows_voice_assistant/listening.png)
+
+**In-app view. All responses remain in the app experience.**
+![Screenshot of voice assistant on Windows as the assistant replies.](media/voice-assistants/windows_voice_assistant/response.png)
+
+## Design guidance for voice activation above lock
+
+Starting with 19H2, assistants built on the Windows voice activation platform can answer above lock.
+
+### Customer opt-in
+
+Voice activation above lock is always disabled by default. Customers opt in through **Settings > Privacy > Voice activation**. For details on monitoring and prompting for this setting, see the [above lock implementation guide](windows-voice-assistants-implementation-guide.md#detecting-above-lock-activation-user-preference).
+
+### Not a lock-screen replacement
+
+While notifications or other standard app lock-screen integration points remain available for the assistant, the Windows lock screen always defines the initial customer experience until a voice activation occurs. After voice activation is detected, the assistant app temporarily appears above the lock screen. To avoid customer confusion, when active above lock, the assistant app must never present UI to ask for any kind of credentials or log in.
+
+![Screenshot of a Windows lock screen](media/voice-assistants/windows_voice_assistant/lock_screen1.png)
+
+### Above lock experience following voice activation
+
+When the screen is on, the assistant app is full screen with no title bar above the lock screen. Larger visuals and strong voice descriptions with a voice-primary interface allow for cases where the customer is too far away to read the UI or has their hands busy with another (non-PC) task.
+
+When the screen remains off, the assistant app could play an earcon to indicate the assistant is activating and provide a voice-only experience.
+
+![Screenshot of voice assistant above lock](media/voice-assistants/windows_voice_assistant/above_lock_listening.png)
+
+### Dismissal policies
+
+The assistant must implement the dismissal guidance in this section to make it easier for customers to log in the next time they want to use their Windows PC. Below are the specific requirements that the assistant must implement:
+
+- **All assistant canvases that show above lock must contain an X** in the top right that dismisses the assistant.
+- **Pressing any key must also dismiss the assistant app**. Keyboard input is a traditional lock app signal that the customer wants to log in. Therefore, any keyboard/text input should not be directed to the app. Instead, the app should self-dismiss when keyboard input is detected, so the customer can easily log in to their device.
+- **If the screen goes off, the app must self-dismiss.** This ensures that the next time the customer uses their PC, the login screen will be ready and waiting for them.
+- If the app is "in use", it may continue above lock. "In use" means any ongoing input or output. For example, when streaming music or video, the app may continue above lock. "Follow on" and other multiturn dialog steps are permitted to keep the app above lock.
+- **Implementation details on dismissing the application** can be found [in the above lock implementation guide](windows-voice-assistants-implementation-guide.md#closing-the-application).
+
+![Screenshot showing the above lock view of the Contoso finance assistant app.](media/voice-assistants/windows_voice_assistant/above_lock_response.png)
+
+![Screenshot of a desktop showing the Windows lock screen.](media/voice-assistants/windows_voice_assistant/lock_screen2.png)
+
+### Privacy & security considerations above lock
+
+Many PCs are portable but not always within customer reach. They may be briefly left in hotel rooms, airplane seats, or workspaces where other people have physical access. If assistants enabled above lock aren't prepared, they can become subject to the class of so-called "[evil maid](https://en.wikipedia.org/wiki/Evil_maid_attack)" attacks.
+
+Therefore, assistants should follow the guidance in this section to help keep the experience secure. Interaction above lock occurs when the Windows user is unauthenticated. This means that, in general, **input to the assistant should also be treated as unauthenticated**.
+
+- Assistants should **implement a skill allowed list to identify skills that are confirmed secure and safe** to be accessed above lock.
+- Speaker ID technologies can play a role in alleviating some risks, but Speaker ID is not a suitable replacement for Windows authentication.
+- The skill allowed list should consider three classes of actions or skills:
+
+| **Action class** | **Description** | **Examples (not a complete list)** |
+| | | |
+| Safe without authentication | General-purpose information or basic app command and control | "What time is it?", "Play the next track" |
+| Safe with Speaker ID | Impersonation risk, revealing personal information | "What's my next appointment?", "Review my shopping list", "Answer the call" |
+| Safe only after Windows authentication | High-risk actions that an attacker could use to harm the customer | "Buy more groceries", "Delete my (important) appointment", "Send a (mean) text message", "Launch a (nefarious) webpage" |
+
+For the case of Contoso, general information around public stock information is safe without authentication. Customer-specific information such as number of shares owned is likely safe with Speaker ID. However, buying or selling stocks should never be allowed without Windows authentication.
+
+To further secure the experience, **web links or other app-to-app launches are always blocked by Windows until the customer signs in.** As a last-resort mitigation, Microsoft reserves the right to remove an application from the allowed list of enabled assistants if a serious security issue is not addressed in a timely manner.
+
+## Design guidance for voice activation preview
+
+Below lock, when the assistant app does _not_ have focus, Windows provides a less intrusive voice activation UI to help keep the customer in flow. This is especially important for false activations, which would be highly disruptive if they launched the full app. The core idea is that each assistant has another home in the Shell: the assistant taskbar icon. When the request for background activation occurs, a small view appears above the assistant taskbar icon. Assistants should provide a small listening experience in this canvas. After processing the request, assistants can choose to resize this view to show an in-context answer or to hand off to their main app view to show larger, more detailed visuals.
+
+- To stay minimal, the preview does not have a title bar, so **the assistant must draw an X in the top right to allow customers to dismiss the view.** Refer to [Closing the Application](windows-voice-assistants-implementation-guide.md#closing-the-application) for the specific APIs to call when the dismiss button is pressed.
+- To support voice activation previews, assistants may invite customers to pin the assistant to the taskbar during first run.
+
+**Voice activation preview: Initial state**
+
+The Contoso assistant has a home on the taskbar: their swirling, circular icon.
+
+![Screenshot of voice assistant on Windows as a taskbar icon pre-activation](media/voice-assistants/windows_voice_assistant/pre_compact_view.png)
+
+**As activation progresses**, the assistant requests background activation. The assistant is given a small preview pane (default width: 408, height: 248). If server-side voice activation determines that the signal was a false positive, this view can be dismissed with minimal interruption.
+
+![Screenshot of voice assistant on Windows in compact view while verifying activation](media/voice-assistants/windows_voice_assistant/compact_view_activating.png)
+
+**When final activation is confirmed**, the assistant presents its listening UX. The assistant must always draw a dismiss X in the top right of the voice activation preview.
+
+![Screenshot of voice assistant on Windows listening in compact view](media/voice-assistants/windows_voice_assistant/compact_view_listening.png)
+
+**Quick answers** may appear in the voice activation preview. The `TryResizeView` API allows assistants to request different sizes.
+
+![Screenshot of voice assistant on Windows replying in compact view](media/voice-assistants/windows_voice_assistant/compact_view_response.png)
+
+**Hand-off**. At any point, the assistant may hand off to its main app view to provide more information, dialogue, or answers that require more screen real estate. Please refer to the [Transition from compact view to full view](windows-voice-assistants-implementation-guide.md#transition-from-compact-view-to-full-view) section for implementation details.
+
+![Screenshots of voice assistant on Windows before and after expanding the compact view](media/voice-assistants/windows_voice_assistant/compact_transition.png)
+
+## Next steps
+
+> [!div class="nextstepaction"]
+> [Get started on developing your voice assistant](how-to-windows-voice-assistants-get-started.md)
ai-services Windows Voice Assistants Implementation Guide https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/windows-voice-assistants-implementation-guide.md
+
+ Title: Voice Assistants on Windows - Above Lock Implementation Guidelines
+
+description: The instructions to implement voice activation and above-lock capabilities for a voice agent application.
++++++ Last updated : 04/15/2020++++
+# Implementing Voice Assistants on Windows
+
+This guide walks through important implementation details for creating a voice assistant on Windows.
+
+## Implementing voice activation
+
+After [setting up your environment](how-to-windows-voice-assistants-get-started.md) and learning [how voice activation works](windows-voice-assistants-overview.md#how-does-voice-activation-work), you can start implementing voice activation for your own voice assistant application.
+
+### Registration
+
+#### Ensure that the microphone is available and accessible, then monitor its state
+
+MVA needs a microphone to be present and accessible to be able to detect a voice activation. Use the [AppCapability](/uwp/api/windows.security.authorization.appcapabilityaccess.appcapability), [DeviceWatcher](/uwp/api/windows.devices.enumeration.devicewatcher), and [MediaCapture](/uwp/api/windows.media.capture.mediacapture) classes to check for microphone privacy access, device presence, and device status (like volume and mute) respectively.
+
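+The following is a minimal sketch of these checks, assuming a UWP app that declares the `microphone` capability in its manifest and uses the `Windows.Security.Authorization.AppCapabilityAccess` and `Windows.Devices.Enumeration` namespaces:
+
+```csharp
+// Check (and, if needed, request) microphone privacy access for this app.
+var microphoneCapability = AppCapability.Create("microphone");
+if (microphoneCapability.CheckAccess() != AppCapabilityAccessStatus.Allowed)
+{
+    await microphoneCapability.RequestAccessAsync();
+}
+
+// Monitor microphone presence so the app can react if the device disappears.
+var microphoneWatcher = DeviceInformation.CreateWatcher(DeviceClass.AudioCapture);
+microphoneWatcher.Added += (watcher, device) => { /* a microphone is available */ };
+microphoneWatcher.Removed += (watcher, update) => { /* a microphone was removed */ };
+microphoneWatcher.Start();
+```
+
+You can additionally initialize a `MediaCapture` instance to observe device status such as volume and mute.
+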
+### Register the application with the background service
+
+In order for MVA to launch the application in the background, the application needs to be registered with the Background Service. See a full guide for Background Service registration [here](/windows/uwp/launch-resume/register-a-background-task).
+
+### Unlock the Limited Access Feature
+
+Use your Microsoft-provided Limited Access Feature key to unlock the voice assistant feature. Use the [LimitedAccessFeature](/uwp/api/windows.applicationmodel.limitedaccessfeatures) class from the Windows SDK to do this.
+
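+As a rough sketch, the unlock call has the following shape; the feature ID, token, and attestation string below are placeholders for the values Microsoft provides when your request is approved:
+
+```csharp
+// Placeholder values; Microsoft supplies the real feature ID, token, and attestation string.
+var unlockResult = LimitedAccessFeatures.TryUnlockFeature(
+    "com.contoso.voiceAssistantFeature",                // hypothetical feature ID
+    "<token provided by Microsoft>",                    // placeholder token
+    "<publisher attestation string from Microsoft>");   // placeholder attestation
+
+if (unlockResult.Status != LimitedAccessFeatureStatus.Available &&
+    unlockResult.Status != LimitedAccessFeatureStatus.AvailableWithoutToken)
+{
+    // The ConversationalAgent APIs can't be used until the feature is unlocked.
+    throw new InvalidOperationException("Unable to unlock the voice assistant Limited Access Feature.");
+}
+```
+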
+### Register the keyword for the application
+
+The application needs to register itself, its keyword model, and its language with Windows.
+
+Start by retrieving the keyword detector. In this sample code, we retrieve the first detector, but you can select a particular detector by selecting it from `configurableDetectors`.
+
+```csharp
+private static async Task<ActivationSignalDetector> GetFirstEligibleDetectorAsync()
+{
+ var detectorManager = ConversationalAgentDetectorManager.Default;
+ var allDetectors = await detectorManager.GetAllActivationSignalDetectorsAsync();
+ var configurableDetectors = allDetectors.Where(candidate => candidate.CanCreateConfigurations
+ && candidate.Kind == ActivationSignalDetectorKind.AudioPattern
+ && (candidate.SupportedModelDataTypes.Contains("MICROSOFT_KWSGRAPH_V1")));
+
+ if (configurableDetectors.Count() != 1)
+ {
+ throw new NotSupportedException($"System expects one eligible configurable keyword spotter; actual is {configurableDetectors.Count()}.");
+ }
+
+ var detector = configurableDetectors.First();
+
+ return detector;
+}
+```
+
+After retrieving the `ActivationSignalDetector` object, call its `CreateConfigurationAsync` method with the signal ID, model ID, and display name to register your keyword and retrieve your application's `ActivationSignalDetectionConfiguration`. The signal and model IDs should be GUIDs decided on by the developer and should stay consistent for the same keyword.
+
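+The following sketch shows the general shape of that registration call; the signal ID, model ID, and display name are hypothetical values chosen for illustration:
+
+```csharp
+// Developer-chosen GUIDs; keep them stable for the same keyword (placeholder values shown).
+const string signalId = "11111111-1111-1111-1111-111111111111";
+const string modelId = "22222222-2222-2222-2222-222222222222";
+const string displayName = "Contoso assistant keyword";
+
+var detector = await GetFirstEligibleDetectorAsync();
+await detector.CreateConfigurationAsync(signalId, modelId, displayName);
+
+// Retrieve the configuration that was just created so the app can manage it later.
+var configuration = await detector.GetConfigurationAsync(signalId, modelId);
+```
+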
+### Verify that the voice activation setting is enabled
+
+To use voice activation, a user needs to enable voice activation for their system and enable voice activation for their application. You can find the setting under "Voice activation privacy settings" in Windows settings. To check the status of the voice activation setting in your application, use the instance of the `ActivationSignalDetectionConfiguration` from registering the keyword. The [AvailabilityInfo](https://github.com/Azure-Samples/Cognitive-Services-Voice-Assistant/blob/master/clients/csharp-uwp/UWPVoiceAssistantSample/UIAudioStatus.cs#L128) field on the `ActivationSignalDetectionConfiguration` contains an enum value that describes the state of the voice activation setting.
+
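+As a minimal sketch, assuming `configuration` is the `ActivationSignalDetectionConfiguration` obtained during registration, the availability information can be inspected like this:
+
+```csharp
+// Inspect whether voice activation is currently available for this configuration.
+var availability = configuration.AvailabilityInfo;
+
+bool voiceActivationReady =
+    availability.HasPermission &&           // voice activation and microphone permissions granted
+    availability.HasSystemResourceAccess && // no system constraint is blocking activation
+    availability.IsEnabled;                 // the configuration itself is enabled
+
+if (!voiceActivationReady)
+{
+    // Guide the user to the voice activation privacy settings page.
+}
+```
+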
+### Retrieve a ConversationalAgentSession to register the app with the MVA system
+
+The `ConversationalAgentSession` is a class in the Windows SDK that allows your app to update Windows with the app state (Idle, Detecting, Listening, Working, Speaking) and receive events, such as activation detection and system state changes such as the screen locking. Retrieving an instance of the AgentSession also serves to register the application with Windows as activatable by voice. It is best practice to maintain one reference to the `ConversationalAgentSession`. To retrieve the session, use the `ConversationalAgentSession.GetCurrentSessionAsync` API.
+
+### Listen to the two activation signals: the OnBackgroundActivated and OnSignalDetected
+
+Windows will signal your app when it detects a keyword in one of two ways. If the app is not active (that is, you do not have a reference to a non-disposed instance of `ConversationalAgentSession`), then it will launch your app and call the OnBackgroundActivated method in the App.xaml.cs file of your application. If the event arguments' `BackgroundActivatedEventArgs.TaskInstance.Task.Name` field matches the string "AgentBackgroundTrigger", then the application startup was triggered by voice activation. The application needs to override this method and retrieve an instance of ConversationalAgentSession to signal to Windows that it is now active. When the application is active, Windows will signal the occurrence of a voice activation using the `ConversationalAgentSession.OnSignalDetected` event. Add an event handler to this event as soon as you retrieve the `ConversationalAgentSession`.
+
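+A minimal sketch of handling both signals in a UWP application's App.xaml.cs might look like the following; the handler name is illustrative:
+
+```csharp
+protected override async void OnBackgroundActivated(BackgroundActivatedEventArgs args)
+{
+    base.OnBackgroundActivated(args);
+
+    // A task name of "AgentBackgroundTrigger" indicates the app was launched by voice activation.
+    if (args.TaskInstance.Task.Name == "AgentBackgroundTrigger")
+    {
+        // Retrieving the session signals to Windows that the app is now active.
+        var session = await ConversationalAgentSession.GetCurrentSessionAsync();
+        session.SignalDetected += OnInAppSignalDetected;
+    }
+}
+
+// Called when Windows detects the keyword while the app is already active.
+private void OnInAppSignalDetected(ConversationalAgentSession sender, ConversationalAgentSignalDetectedEventArgs args)
+{
+    // Begin keyword verification here.
+}
+```
+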
+## Keyword verification
+
+Once a voice agent application is activated by voice, the next step is to verify that the keyword detection was valid. Windows doesn't provide a solution for keyword verification, but it does allow a voice assistant to access the audio from the hypothesized activation and complete verification on its own.
+
+### Retrieve activation audio
+
+Create an [AudioGraph](/uwp/api/windows.media.audio.audiograph) and pass it to the `CreateAudioDeviceInputNodeAsync` method of the `ConversationalAgentSession`. This loads the graph's audio buffer with the audio *starting approximately 3 seconds before the keyword was detected*. This additional leading audio is included to accommodate a wide range of keyword lengths and speaker speeds. Then, handle the [QuantumStarted](/uwp/api/windows.media.audio.audiograph.quantumstarted) event from the audio graph to retrieve the audio data.
+
+```csharp
+// Create an input node that captures microphone audio, including the buffered
+// audio from just before the keyword was detected.
+var inputNode = await agentSession.CreateAudioDeviceInputNodeAsync(audioGraph);
+
+// Create an output node on the same graph so the app can read audio frames.
+var outputNode = audioGraph.CreateFrameOutputNode();
+inputNode.AddOutgoingConnection(outputNode);
+audioGraph.QuantumStarted += OnQuantumStarted;
+```
+
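+As a rough sketch, the `QuantumStarted` handler can read audio frames from the output node (assuming the `outputNode` created above is stored where the handler can reach it, for example as a field) and pass them on for keyword verification:
+
+```csharp
+private void OnQuantumStarted(AudioGraph sender, object args)
+{
+    // Read the audio produced during this quantum from the frame output node.
+    using (var frame = outputNode.GetFrame())
+    using (var buffer = frame.LockBuffer(AudioBufferAccessMode.Read))
+    {
+        // Stream or copy the buffer contents to your keyword verification logic here.
+    }
+}
+```
+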
+> [!NOTE]
+> The leading audio included in the audio buffer can cause keyword verification to fail. To fix this issue, trim the beginning of the audio buffer before you send the audio for keyword verification. This initial trim should be tailored to each assistant.
+
+### Launch in the foreground
+
+When keyword verification succeeds, the application needs to move to the foreground to display UI. Call the `ConversationalAgentSession.RequestForegroundActivationAsync` API to move your application to the foreground.
+
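+For example, once verification succeeds, the call can be as simple as the following sketch, assuming `agentSession` is your retained `ConversationalAgentSession`:
+
+```csharp
+// Ask Windows to bring the application to the foreground so it can display UI.
+await agentSession.RequestForegroundActivationAsync();
+```
+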
+### Transition from compact view to full view
+
+When your application is first activated by voice, it is started in a compact view. Please read the [Design guidance for voice activation preview](windows-voice-assistants-best-practices.md#design-guidance-for-voice-activation-preview) for guidance on the different views and transitions between them for voice assistants on Windows.
+
+To make the transition from compact view to full app view, use the ApplicationView API `TryEnterViewModeAsync`:
+
+```csharp
+var appView = ApplicationView.GetForCurrentView();
+await appView.TryEnterViewModeAsync(ApplicationViewMode.Default);
+```
+
+## Implementing above lock activation
+
+The following steps cover the requirements to enable a voice assistant on Windows to run above lock, including references to example code and guidelines for managing the application lifecycle.
+
+For guidance on designing above lock experiences, visit the [best practices guide](windows-voice-assistants-best-practices.md).
+
+When an app shows a view above lock, it is considered to be in "Kiosk Mode". For more information on implementing an app that uses Kiosk Mode, see the [kiosk mode documentation](/windows-hardware/drivers/partnerapps/create-a-kiosk-app-for-assigned-access).
+
+### Transitioning above lock
+
+An activation above lock is similar to an activation below lock. If there are no active instances of the application, a new instance will be started in the background and `OnBackgroundActivated` in App.xaml.cs will be called. If there is an instance of the application, that instance will get a notification through the `ConversationalAgentSession.SignalDetected` event.
+
+If the application does not appear above lock, it must call `ConversationalAgentSession.RequestForegroundActivationAsync`. This triggers the `OnLaunched` method in App.xaml.cs which should navigate to the view that will appear above lock.
+
+### Detecting lock screen transitions
+
+The ConversationalAgent library in the Windows SDK provides an API to make the lock screen state and changes to the lock screen state easily accessible. To detect the current lock screen state, check the `ConversationalAgentSession.IsUserAuthenticated` field. To detect changes in lock state, add an event handler to the `ConversationalAgentSession` object's `SystemStateChanged` event. It will fire whenever the screen changes from unlocked to locked or vice versa. If the value of the event arguments is `ConversationalAgentSystemStateChangeType.UserAuthentication`, then the lock screen state has changed.
+
+```csharp
+conversationalAgentSession.SystemStateChanged += (s, e) =>
+{
+ if (e.SystemStateChangeType == ConversationalAgentSystemStateChangeType.UserAuthentication)
+ {
+ // Handle lock state change
+ }
+};
+```
+
+### Detecting above lock activation user preference
+
+The application entry in the Voice activation privacy settings page has a toggle for above lock functionality. For your app to be able to launch above lock, the user needs to turn this setting on. The status of above lock permission is also stored in the `ActivationSignalDetectionConfiguration` object. The `AvailabilityInfo.HasLockScreenPermission` status reflects whether the user has given above lock permission. If this setting is disabled, a voice application can prompt the user to navigate to the appropriate settings page at the link `ms-settings:privacy-voiceactivation` with instructions on how to enable above-lock activation for the application.
+
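+For illustration, a check of this permission and a prompt to open the settings page could look like the following sketch, assuming `configuration` is the app's `ActivationSignalDetectionConfiguration`:
+
+```csharp
+// Check whether the user has allowed this assistant to activate above lock.
+if (!configuration.AvailabilityInfo.HasLockScreenPermission)
+{
+    // Optionally explain why above-lock activation is useful, then open the
+    // Voice activation privacy settings page so the user can enable it.
+    await Launcher.LaunchUriAsync(new Uri("ms-settings:privacy-voiceactivation"));
+}
+```
+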
+## Closing the application
+
+To properly close the application programmatically while above or below lock, use the `WindowService.CloseWindow()` API. This triggers all UWP lifecycle methods, including OnSuspend, allowing the application to dispose of its `ConversationalAgentSession` instance before closing.
+
+> [!NOTE]
+> The application can close without closing the [below lock instance](/windows-hardware/drivers/partnerapps/create-a-kiosk-app-for-assigned-access#add-a-way-out-of-assigned-access-). In this case, the above lock view needs to "clean up", ensuring that once the screen is unlocked, there are no event handlers or tasks that will try to manipulate the above lock view.
+
+## Next steps
+
+> [!div class="nextstepaction"]
+> [Visit the UWP Voice Assistant Sample app for examples and code walk-throughs](windows-voice-assistants-faq.yml#the-uwp-voice-assistant-sample)
ai-services Windows Voice Assistants Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/windows-voice-assistants-overview.md
+
+ Title: Voice Assistants on Windows overview - Speech service
+
+description: An overview of the voice assistants on Windows, including capabilities and development resources available.
++++++ Last updated : 02/19/2022++++
+# What are Voice Assistants on Windows?
+
+Voice assistant applications can take advantage of the Windows ConversationalAgent APIs to achieve a complete voice-enabled assistant experience.
+
+## Voice Assistant Features
+
+Voice agent applications can be activated by a spoken keyword to enable a hands-free, voice-driven experience. Voice activation works when the application is closed and when the screen is locked.
+
+In addition, Windows provides a set of voice-activation privacy settings that give users control of voice activation and above lock activation on a per-app basis.
+
+After voice activation, Windows manages multiple active agents and notifies each voice assistant if it's interrupted or deactivated. This allows applications to handle interruptions and other inter-agent events properly.
+
+## How does voice activation work?
+
+The Agent Activation Runtime (AAR) is the ongoing process in Windows that manages application activation on a spoken keyword or button press. It starts with Windows as long as at least one voice-activated application is registered on the system. Applications interact with AAR through the ConversationalAgent APIs in the Windows SDK.
+
+When the user speaks a keyword, the software or hardware keyword spotter on the system notifies AAR that a keyword has been detected, providing a keyword ID. AAR in turn sends a request to the Background Service to start the application with the corresponding application ID.
+
+### Registration
+
+The first time a voice activated application is run, it registers its app ID and keyword information through the ConversationalAgent APIs. AAR registers all configurations in the global mapping with the hardware or software keyword spotter on the system, allowing them to detect the application's keyword. The application also [registers with the Background Service](/windows/uwp/launch-resume/register-a-background-task).
+
+Note that this means an application cannot be activated by voice until it has been run once and registration has been allowed to complete.
+
+### Receiving an activation
+
+Upon receiving the request from AAR, the Background Service launches the application. The application receives a signal through the OnBackgroundActivated life-cycle method in `App.xaml.cs` with a unique event argument. This argument tells the application that it was activated by AAR and that it should start keyword verification.
+
+If the application successfully verifies the keyword, it can make a request to appear in the foreground. When this request succeeds, the application displays UI and continues its interaction with the user.
+
+AAR still signals active applications when their keyword is spoken. Rather than signaling through the life-cycle method in `App.xaml.cs`, though, it signals through an event in the ConversationalAgent APIs.
+
+### Keyword verification
+
+The keyword spotter that triggers the application to start achieves low power consumption by simplifying the keyword model. This allows the keyword spotter to be "always on" without a high power impact, but it also means the keyword spotter will likely have a high number of "false accepts," where it detects a keyword even though no keyword was spoken. This is why the voice activation system launches the application in the background: to give the application a chance to verify that the keyword was spoken before interrupting the user's current session. AAR saves the audio starting a few seconds before the keyword was spotted and makes it accessible to the application. The application can use this to run a more reliable keyword spotter on the same audio.
+
+## Next steps
+
+- Review the [design guidelines](windows-voice-assistants-best-practices.md) to provide the best experiences for voice activation.
+- See the voice assistants on Windows [get started](how-to-windows-voice-assistants-get-started.md) page.
+- See the [UWP Voice Assistant Sample](windows-voice-assistants-faq.yml#the-uwp-voice-assistant-sample) page and follow the steps to get the sample client running.
ai-services Document Translation Flow https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/translator/connector/document-translation-flow.md
+
+ Title: "Use Translator V3 connector to build a Document Translation flow"
+
+description: Use Microsoft Translator V3 connector and Power Automate to create a Document Translation flow.
++++++ Last updated : 07/18/2023+++
+<!-- markdownlint-disable MD051 -->
+<!-- markdownlint-disable MD024 -->
+<!-- markdownlint-disable MD029 -->
+<!-- markdownlint-disable MD036 -->
+<!-- markdownlint-disable MD001 -->
+
+# Create a Document Translation flow (preview)
+
+> [!IMPORTANT]
+>
+> The Translator connector is currently available in public preview. Features, approaches and processes may change, prior to General Availability (GA), based on user feedback.
+
+This tutorial guides you through configuring a Microsoft Translator V3 connector cloud flow that supports document translation. The Translator V3 connector creates a connection between your Translator Service instance and Microsoft Power Automate, enabling you to use one or more prebuilt operations as steps in your apps and workflows.
+
+Document Translation is a cloud-based REST API feature of the Azure AI Translator service. The Document Translation API enables multiple and complex document translations while preserving original document structure and data format.
+
+In this tutorial:
+
+> [!div class="checklist"]
+>
+> * [Create a blob storage account with containers for your source and target files](#azure-storage).
+> * [Set up a managed identity with role-based access control (RBAC)](#create-a-managed-identity).
+> * [Translate documents from your Azure Blob Storage account](#translate-documents).
+> * [Translate documents from your SharePoint site](#translate-documents).
+
+## Prerequisites
+
+Here's what you need to get started: [**Translator resource**](#translator-resource), [**Azure storage account**](#azure-storage) with at least two containers, and a [**system-assigned managed identity**](#managed-identity-with-rbac) with role-based access.
+
+### Translator resource
+
+* If you don't have an active [**Azure account**](https://azure.microsoft.com/free/cognitive-services/), you can [**create one for free**](https://azure.microsoft.com/free/).
+
+* Create a [**single-service Translator resource**](https://portal.azure.com/#create/Microsoft.CognitiveServicesTextTranslation) (**not** a multi-service Azure AI services resource). As you complete the Translator project and instance details fields, pay special attention to the following entries:
+
+ * **Resource Region**. Choose a **geographic** region like **West US** (**not** the *Global* region).
+
+ * **Pricing tier**. Select **Standard S1** to try the service.
+
+* Use the **key** and **name** from your Translator resource to connect your application to Power Automate. Your Translator resource keys are found under the Resource Management section in the Azure portal and your resource name is located at the top of the page.
+
+ :::image type="content" source="../media/connectors/keys-resource-details.png" alt-text="Get key and endpoint.":::
+
+* Copy and paste your key and resource name in a convenient location, such as *Microsoft Notepad*.
+
+### Azure storage
+
+* Next, you need an [**Azure Blob Storage account**](https://portal.azure.com/#create/Microsoft.StorageAccount-ARM) and at least two [containers](/azure/storage/blobs/storage-quickstart-blobs-portal?branch=main#create-a-container) for your source and target files:
+
+ * **Source container**. This container is where you upload your files for translation (required).
+ * **Target container**. This container is where your translated files are stored (required).
+
+* **If your storage account is behind a firewall, you must enable additional configurations**:
+
+ 1. Go to the [Azure portal](https://portal.azure.com/) and sign in to your Azure account.
+ 1. Select your Storage account.
+ 1. In the **Security + networking** group in the left pane, select **Networking**.
+ 1. In the **Firewalls and virtual networks** tab, select **Enabled from selected virtual networks and IP addresses**.
+
+ :::image type="content" source="../media/managed-identities/firewalls-and-virtual-networks.png" alt-text="Screenshot: Selected networks radio button selected.":::
+
+ 1. Deselect all check boxes.
+ 1. Make sure **Microsoft network routing** is selected.
+ 1. Under the **Resource instances** section, select **Microsoft.CognitiveServices/accounts** as the resource type and select your Translator resource as the instance name.
+ 1. Make certain that the **Allow Azure services on the trusted services list to access this storage account** box is checked. For more information about managing exceptions, *see* [Configure Azure Storage firewalls and virtual networks](../../../storage/common/storage-network-security.md).
+
+ :::image type="content" source="../media/managed-identities/allow-trusted-services-checkbox-portal-view.png" alt-text="Screenshot: allow trusted services checkbox, portal view.":::
+
+ 1. Select **Save**. It may take up to 5 minutes for the network changes to propagate.
+
+### Managed identity with RBAC
+
+ Finally, before you can use the Translator V3 connector's operations for document translation, you must grant your Translator resource access to your storage account using a managed identity with role-based access control (RBAC).
+
+ :::image type="content" source="../document-translation/media/managed-identity-rbac-flow.png" alt-text="Screenshot of managed identity flow (RBAC).":::
+
+#### Create a managed identity
+
+First, create a system-assigned managed identity for your Translator resource and grant that identity specific permissions to access your Azure storage account:
+
+1. Go to the [Azure portal](https://portal.azure.com/) and sign in to your Azure account.
+1. Select the Translator resource.
+1. In the **Resource Management** group in the left pane, select **Identity**.
+1. Within the **System assigned** tab, turn on the **Status** toggle.
+1. Select **Save**.
+
+ :::image type="content" source="../media/managed-identities/resource-management-identity-tab.png" alt-text="Screenshot: resource management identity tab in the Azure portal.":::
+
+#### Role assignment
+
+Next, assign a **`Storage Blob Data Contributor`** role to the managed identity *at the* storage scope for your storage resource.
+
+1. Go to the [Azure portal](https://portal.azure.com/) and sign in to your Azure account.
+1. Select the Translator resource.
+1. In the **Resource Management** group in the left pane, select **Identity**.
+1. Under **Permissions** select **Azure role assignments**:
+
+ :::image type="content" source="../media/managed-identities/enable-system-assigned-managed-identity-portal.png" alt-text="Screenshot: enable system-assigned managed identity in Azure portal.":::
+
+1. On the Azure role assignments page that opened, choose your subscription from the drop-down menu then select **&plus; Add role assignment**.
+
+ :::image type="content" source="../media/managed-identities/azure-role-assignments-page-portal.png" alt-text="Screenshot: Azure role assignments page in the Azure portal.":::
+
+1. Finally, assign a **Storage Blob Data Contributor** role to your Translator service resource. The **Storage Blob Data Contributor** role gives Translator (represented by the system-assigned managed identity) read, write, and delete access to the blob container and data. In the **Add role assignment** pop-up window, complete the fields as follows and select **Save**:
+
+ | Field | Value|
+ ||--|
+ |**Scope**| ***Storage***.|
+ |**Subscription**| ***The subscription associated with your storage resource***.|
+ |**Resource**| ***The name of your storage resource***.
+ |**Role** | ***Storage Blob Data Contributor***.|
+
+1. After the *Added Role assignment* confirmation message appears, refresh the page to see the added role assignment.
+
+ :::image type="content" source="../media/managed-identities/add-role-assignment-confirmation.png" alt-text="Screenshot: Added role assignment confirmation pop-up message.":::
+
+1. If you don't see the new role assignment right away, wait and try refreshing the page again. When you assign or remove role assignments, it can take up to 30 minutes for changes to take effect.
+
+ :::image type="content" source="../media/managed-identities/assigned-roles-window.png" alt-text="Screenshot: Azure role assignments window.":::
+
+### Configure a Document Translation flow
+
+Now that you've completed the prerequisites and initial setup, let's get started using the Translator V3 connector to create your document translation flow:
+
+1. Sign in to [Power Automate](https://powerautomate.microsoft.com/).
+
+1. Select **Create** from the left sidebar menu.
+
+1. Select **Instant cloud flow** from the main content area.
+
+ :::image type="content" source="../media/connectors/create-flow.png" alt-text="Screenshot showing how to create an instant cloud flow.":::
+
+1. In the popup window, name your flow then choose **Manually trigger a flow** and select **Create**.
+
+ :::image type="content" source="../media/connectors/select-manual-flow.png" alt-text="Screenshot showing how to manually trigger a flow.":::
+
+1. The first step for your instant flow, **Manually trigger a flow**, appears on screen. Select **New step**.
+
+ :::image type="content" source="../media/connectors/add-new-step.png" alt-text="Screenshot of add new flow step page.":::
+
+## Translate documents
+
+Next, we're ready to select an action. You can translate documents located in your **Azure Blob Storage** or **Microsoft SharePoint** account.
+
+### [Use Azure Blob Storage](#tab/blob-storage)
+
+### Azure Blob Storage
+
+Here are the steps to translate a file in Azure Blob Storage using the Translator V3 connector:
+> [!div class="checklist"]
+>
+> * Choose the Translator V3 connector.
+> * Select document translation.
+> * Enter your Azure Blob Storage credentials and container locations.
+> * Translate your document(s) choosing source and target languages.
+> * Get the status of the translation operation.
+
+1. In the **Choose an operation** pop-up window, enter Translator V3 in the **Search connectors and actions** search bar and select the **Microsoft Translator V3** icon.
+
+ :::image type="content" source="../media/connectors/choose-operation.png" alt-text="Screenshot showing the selection of Translator V3 as the next flow step.":::
+
+1. Select the **Start document translation** action.
+1. If you're using the Translator V3 connector for the first time, you need to enter your resource credentials:
+
+ * **Connection name**. Enter a name for your connection.
+
+ * **Subscription Key**. Your Translator resource keys are found under the **Resource Management** section of the resource sidebar in the Azure portal. Enter one of your keys. **Make certain that your Translator resource is assigned to a geographical region such as West US** (not global).
+
+ * **Translator resource name**. Enter the name of your Translator resource found at the top of your resource page in the Azure portal. Select **Create**.
+
+ :::image type="content" source="../media/connectors/add-connection.png" alt-text="Screenshot showing the add connection window.":::
+
+ > [!NOTE]
+ > After you've set up your connection, you won't be required to reenter your credentials for subsequent flows.
+
+1. The **Start document translation** action window now appears. Complete the fields as follows:
+
+ * For **Storage type of the input documents**. Select **File** or **Folder**.
+ * Select a **Source Language** from the dropdown menu or keep the default **Auto-detect** option.
+ * **Location of the source documents**. Enter the URL for your document(s) in your Azure storage source document container.
+ * **Location of the translated documents**. Enter the URL for your Azure storage target document container.
+
+ To find your source and target URLs:
+ * Navigate to your storage account in the Azure portal.
+ * In the left sidebar, under **Data storage**, select **Containers**:
+
+ |Source|Target|
+ ||-|
+ |Select the checkbox next to the source container|Select the checkbox next to the target container.|
+ | From the main window area, select a file or document for translation.| Select the ellipses located at the right, then choose **Properties**.|
+ | The source URL is located at the top of the Properties list. Select the **Copy to Clipboard** icon.|The target URL is located at the top of the Properties list. Select the **Copy to Clipboard** icon.|
+ | Navigate to your Power Automate flow and paste the source URL in the **Location of the source documents** field.|Navigate to your Power Automate flow and paste the target URL in the **Location of the translated documents** field.|
+ * Choose a **Target Language** from the dropdown menu and select **Save**.
+
+ :::image type="content" source="../media/connectors/start-document-translation-window.png" alt-text="Screenshot of the Start document translation dialog window.":::
+
+## Get documents status
+
+Now that you've submitted your document(s) for translation, let's check the status of the operation.
+
+1. Select **New step**.
+
+1. Enter Translator V3 in the search box and choose **Microsoft Translator V3**.
+
+1. Select **Get documents status** (not the singular Get *document* status action).
+
+ :::image type="content" source="../media/connectors/get-documents-status-step.png" alt-text="Screenshot of the get documents status step.":::
+
+1. Next, you're going to enter an expression to retrieve the **`operation ID`** value.
+
+1. Select the operation ID field. A **Dynamic content** / **Expression** dropdown window appears.
+
+1. Select the **Expression** tab and enter the following expression into the function field:
+
+ ```powerappsfl
+
+ body('Start_document_translation').operationID
+
+ ```
+
+ :::image type="content" source="../media/connectors/create-function-expression.png" alt-text="Screenshot showing function creation window.":::
+
+1. Select **OK**. The function appears in the **Operation ID** window. Select **Save**.
+
+ :::image type="content" source="../media/connectors/operation-id-function.png" alt-text="Screenshot showing the operation ID field with an expression function value.":::
+
+## Test the connector flow
+
+Time to check our flow and document translation results.
+
+1. There's a green bar at the top of the page indicating that ***Your flow is ready to go***.
+1. Select **Test** from the upper-right corner of the page.
+
+ :::image type="content" source="../media/connectors/test-flow.png" alt-text="Screenshot showing the test icon/button.":::
+
+1. Select the following buttons: **Test Flow** → **Manually** → **Test** from the right-side window.
+1. In the next window, select the **Run flow** button.
+1. Finally, select the **Done** button.
+1. You should receive a ***Your flow ran successfully*** message and green check marks align with each successful step.
+
+ :::image type="content" source="../media/connectors/successful-document-translation-flow.png" alt-text="Screenshot of successful document translation flow.":::
+
+1. Select the **Get documents status** step, then select **Show raw outputs** from the **Outputs** section.
+1. A **Get documents status** window appears. At the top of the JSON response, you see `"statusCode":200` indicating that the request was successful.
+
+ :::image type="content" source="../media/connectors/get-documents-status.png" alt-text="Screenshot showing the 'Get documents status' JSON response.":::
+
+1. As a final check, navigate to your Azure Blob Storage target container. There, you should see the translated document in the **Overview** section. The document may be in a folder labeled with the translation language code.
+
+#### [Use Microsoft SharePoint](#tab/sharepoint)
+
+### Microsoft SharePoint
+
+Here are the steps to upload a file from your SharePoint site to Azure Blob Storage, translate the file, and return the translated file to SharePoint:
+
+> [!div class="checklist"]
+>
+> * [Retrieve the source file content from your SharePoint site](#get-file-content).
+> * [Create an Azure storage blob from the source file](#create-a-storage-blob).
+> * [Translate the source file](#start-document-translation).
+> * [Get documents status](#get-documents-status).
+> * [Get blob content and metadata](#get-blob-content-and-metadata).
+> * [Create a new SharePoint folder](#create-new-folder).
+> * [Create your translated file in Sharepoint](#create-sharepoint-file).
+
+##### Get file content
+
+ 1. In the **Choose an operation** pop-up window, enter **SharePoint**, then select the **Get file content** action. Power Automate automatically signs you in to your SharePoint account.
+
+ :::image type="content" source="../media/connectors/get-file-content.png" alt-text="Screenshot of the SharePoint Get file content action.":::
+
+1. On the **Get file content** step window, complete the following fields:
+ * **Site Address**. Select the SharePoint site URL where your file is located from the dropdown list.
+ * **File Identifier**. Select the folder icon and choose the document(s) for translation.
+
+##### Create a storage blob
+
+1. Select **New step** and enter **Azure Blob Storage** in the search box.
+
+1. Select the **Create blob (V2)** action.
+
+1. If you're using the Azure storage step for the first time, you need to enter your storage resource authentication:
+
+1. In the **Authentication type** field, choose **Azure AD Integrated** and then select the **Sign in** button.
+
+ :::image type="content" source="../media/connectors/storage-authentication.png" alt-text="Screenshot of Azure Blob Storage authentication window.":::
+
+1. Choose the Azure Active Directory (Azure AD) account associated with your Azure Blob Storage and Translator resource accounts.
+
+1. After you have completed the **Azure Blob Storage** authentication, the **Create blob** step appears. Complete the fields as follows:
+
+ * **Storage account name or blob endpoint**. Select **Enter custom value** and enter your storage account name.
+ * **Folder path**. Select the folder icon and select your source document container.
+ * **Blob name**. Enter the name of the file you're translating.
+ * **Blob content**. Select the **Blob content** field to reveal the **Dynamic content** dropdown list and choose the SharePoint **File Content** icon.
+
+ :::image type="content" source="../media/connectors/file-dynamic-content.png" alt-text="Screenshot of the file content icon on the dynamic content menu.":::
+
+##### Start document translation
+
+1. Select **New step**.
+
+1. Enter Translator V3 in the search box and choose **Microsoft Translator V3**.
+
+1. Select the **Start document translation** action.
+
+1. If you're using the Translator V3 connector for the first time, you need to enter your resource credentials:
+
+ * **Connection name**. Enter a name for your connection.
+ * **Subscription Key**. Enter one of your keys that you copied from the Azure portal.
+ * **Translator resource name**. Enter the name of your Translator resource found at the top of your resource page in the Azure portal.
+ * Select **Create**.
+
+ :::image type="content" source="../media/connectors/add-connection.png" alt-text="Screenshot showing the add connection window.":::
+
+ > [!NOTE]
+ > After you've set up your connection, you won't be required to reenter your credentials for subsequent Translator flows.
+
+1. The **Start document translation** action window now appears. Complete the fields as follows:
+
+1. For **Storage type of the input documents**. Select **File** or **Folder**.
+1. Select a **Source Language** from the dropdown menu or keep the default **Auto-detect** option.
+1. **Location of the source documents**. Select the field and add **Path** followed by **Name** from the **Dynamic content** dropdown list.
+1. **Location of the translated documents**. Enter the URL for your Azure storage target document container.
+
+ To find your target container URL:
+
+ * Navigate to your storage account in the Azure portal.
+ * In the left sidebar, under **Data storage**, select **Containers**.
+ * Select the checkbox next to the target container.
+ * Select the ellipses located at the right, then choose **Properties**.
+ * The target URL is located at the top of the Properties list. Select the **Copy to Clipboard** icon.
+ * Navigate back to your Power Automate flow and paste the target URL in the **Location of the translated documents** field.
+
+1. Choose a **Target Language** from the dropdown menu and select **Save**.
+
+ :::image type="content" source="../media/connectors/start-document-translation-window.png" alt-text="Screenshot of the Start document translation dialog window.":::
+
+##### Get documents status
+
+Prior to retrieving the documents status, let's schedule a 30-second delay to ensure that the file has been processed for translation:
+
+1. Select **New step**. Enter **Schedule** in the search box and choose **Delay**.
+ * For **Count**. Enter **30**.
+ * For **Unit**. Enter **Second** from the dropdown list.
+
+ :::image type="content" source="../media/connectors/delay-step.png" alt-text="Screenshot showing the delay step.":::
+
+1. Select **New step**. Enter Translator V3 in the search box and choose **Microsoft Translator V3**.
+
+1. Select **Get documents status** (not the singular Get *document* status).
+
+ :::image type="content" source="../media/connectors/get-documents-status-step.png" alt-text="Screenshot of the get documents status step.":::
+
+1. Next, you're going to enter an expression to retrieve the **`operation ID`** value.
+
+1. Select the **operation ID** field. A **Dynamic content** / **Expression** dropdown window appears.
+
+1. Select the **Expression** tab and enter the following expression into the function field:
+
+ ```powerappsfl
+
+ body('Start_document_translation').operationID
+
+ ```
+
+ :::image type="content" source="../media/connectors/create-function-expression.png" alt-text="Screenshot showing function creation window.":::
+
+1. Select **OK**. The function appears in the **Operation ID** window. Select **Save**.
+
+ :::image type="content" source="../media/connectors/operation-id-function.png" alt-text="Screenshot showing the operation ID field with an expression function value.":::
+
+##### Get blob content and metadata
+
+In this step, you retrieve the translated document from Azure Blob Storage and upload it to SharePoint.
+
+1. Select **New Step**, enter **Control** in the search box, and select **Apply to each** from the **Actions** list.
+1. Select the input field to show the **Dynamic content** window and select **value**.
+
+ :::image type="content" source="../media/connectors/sharepoint-steps.png" alt-text="Screenshot showing the 'Apply to each' control step.":::
+
+1. Select **Add an action**.
+
+ :::image type="content" source="../media/connectors/add-action.png" alt-text="Screenshot showing the 'Add an action' control step.":::
+
+1. Enter **Control** in the search box and select **Do until** from the **Actions** list.
+1. Select the **Choose a value** field to show the **Dynamic content** window and select **progress**.
+1. Complete the three fields as follows: **progress** → **is equal to** → **1**:
+
+ :::image type="content" source="../media/connectors/do-until-progress.png" alt-text="Screenshot showing the Do until control action.":::
+
+1. Select **Add an action**, enter **Azure Blob Storage** in the search box, and select the **Get blob content using path (V2)** action.
+1. In the **Storage account name or blob endpoint** field, select **Enter custom value** and enter your storage account name.
+1. Select the **Blob path** field to show the **Dynamic content** window, select **Expression** and enter the following logic in the formula field:
+
+ ```powerappsfl
+
+ concat(split(outputs('Get_documents_status')?['body/path'],'/')[3],'/',split(outputs('Get_documents_status')?['body/path'],'/')[4],'/',split(outputs('Get_documents_status')?['body/path'],'/')[5])
+
+ ```
+
+1. Select **Add an action**, enter **Azure Blob Storage** in the search box, and select the **Get Blob Metadata using path (V2)** action.
+1. In the **Storage account name or blob endpoint** field, select **Enter custom value** and enter your storage account name.
+
+ :::image type="content" source="../media/connectors/enter-custom-value.png" alt-text="Screenshot showing 'enter custom value' from the Create blob (V2) window.":::
+
+1. Select the **Blob path** field to show the **Dynamic content** window, select **Expression** and enter the following logic in the formula field:
+
+ ```powerappsfl
+
+ concat(split(outputs('Get_documents_status')?['body/path'],'/')[3],'/',split(outputs('Get_documents_status')?['body/path'],'/')[4],'/',split(outputs('Get_documents_status')?['body/path'],'/')[5])
+
+ ```
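+
+If it helps to see what this expression computes, here's a minimal Python sketch of the same logic. The URL is a hypothetical example; the actual `path` value comes from the **Get documents status** step and, as in this walkthrough, is assumed to place the translated document one folder level below the container:
+
+```python
+# Hypothetical path value returned by the "Get documents status" step.
+path = "https://mystorageaccount.blob.core.windows.net/target-container/output/document.docx"
+
+# Splitting on "/" yields:
+# [0] "https:"  [1] ""  [2] "mystorageaccount.blob.core.windows.net"
+# [3] "target-container"  [4] "output"  [5] "document.docx"
+segments = path.split("/")
+
+# The expression concatenates segments 3, 4, and 5 to form the blob path.
+blob_path = "/".join(segments[3:6])
+print(blob_path)  # target-container/output/document.docx
+```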
+
+##### Create new folder
+
+1. Select **Add an action**, enter **SharePoint** in the search box, and select the **Create new folder** action.
+1. Select your SharePoint URL from the **Site Address** dropdown window.
+1. In the next field, select a **List or Library** from the dropdown.
+1. Name your new folder and enter it in the **/Folder Path/** field (be sure to enclose the path name with forward slashes).
+
+##### Create SharePoint file
+
+1. Select **Add an action**, enter **SharePoint** in the search box, and select the **Create file** action.
+1. Select your SharePoint URL from the **Site Address** dropdown window.
+1. In the **Folder Path** field, from the **Dynamic content** list, under *Create new folder*, select **Full Path**.
+1. In the **File Name** field, from the **Dynamic content** list, under *Get Blob Metadata using path (V2)*, select **Name**.
+1. In the **File Content** field, from the **Dynamic content** list, under *Get blob content using path (V2)*, select **File Content**.
+1. Select **Save**.
+
+ :::image type="content" source="../media/connectors/apply-to-each-complete.png" alt-text="Screenshot showing the apply-to-each step sequence.":::
+
+## Test your connector flow
+
+Let's check your document translation flow and results.
+
+1. There's a green bar at the top of the page indicating that **Your flow is ready to go.**
+1. Select **Test** from the upper-right corner of the page.
+
+ :::image type="content" source="../media/connectors/test-flow.png" alt-text="Screenshot showing the test icon/button.":::
+
+1. Select **Test Flow** → **Manually** → **Test** from the right-side window.
+1. In the **Run flow** window, select the **Continue** then the **Run flow** buttons.
+1. Finally, select the **Done** button.
+1. You should receive a "Your flow ran successfully" message, and green check marks appear next to each successful step.
+
+ :::image type="content" source="../media/connectors/successful-sharepoint-flow.png" alt-text="Screenshot showing a successful flow using SharePoint and Azure Blob Storage.":::
+++
+That's it! You've learned to automate document translation processes using the Microsoft Translator V3 connector and Power Automate.
+
+## Next steps
+
+> [!div class="nextstepaction"]
+> [Explore the Document Translation reference guide](../document-translation/reference/rest-api-guide.md)
ai-services Text Translator Flow https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/translator/connector/text-translator-flow.md
+
+ Title: Use Translator V3 connector to configure a Text Translation flow
+
+description: Use Microsoft Translator V3 connector and Power Automate to configure a Text Translation flow.
++++++ Last updated : 07/18/2023+++
+<!-- markdownlint-disable MD051 -->
+<!-- markdownlint-disable MD024 -->
+<!-- markdownlint-disable MD029 -->
+<!-- markdownlint-disable MD036 -->
+<!-- markdownlint-disable MD001 -->
+
+# Create a text translation flow (preview)
+
+> [!IMPORTANT]
+>
+> * The Translator connector is currently available in public preview. Features, approaches, and processes may change prior to general availability (GA) based on user feedback.
+
+This article guides you through configuring a Microsoft Translator V3 connector cloud flow that supports text translation and transliteration. The Translator V3 connector creates a connection between your Translator Service instance and Microsoft Power Automate, enabling you to use one or more prebuilt operations as steps in your apps and workflows.
+
+Text translation is a cloud-based REST API feature of the Azure AI Translator service. The Text Translation API enables quick and accurate source-to-target text translations in real time.
+
+## Prerequisites
+
+To get started, you need an active Azure subscription. If you don't have an Azure subscription, you can [create one for free](https://azure.microsoft.com/free/cognitive-services/).
+
+* Once you have your Azure subscription, create a [**single-service Translator resource**](https://portal.azure.com/#create/Microsoft.CognitiveServicesTextTranslation) (**not** a multi-service Azure AI services resource):
+
+* You need the key and name from your resource to connect your application to Power Automate. Your Translator resource keys are found under the Resource Management section in the Azure portal and your resource name is located at the top of the page. Copy and paste your key and resource name in a convenient location, such as *Microsoft Notepad*.
+
+ :::image type="content" source="../media/connectors/keys-resource-details.png" alt-text= "Screenshot showing key and endpoint location in the Azure portal.":::
+
+## Configure the Translator V3 connector
+
+Now that you've completed the prerequisites, let's get started.
+
+1. Sign in to [Power Automate](https://powerautomate.microsoft.com/).
+
+1. Select **Create** from the left sidebar menu.
+
+1. Select **Instant cloud flow** from the main content area.
+
+ :::image type="content" source="../media/connectors/create-flow.png" alt-text="Screenshot showing how to create an instant cloud flow.":::
+
+1. In the popup window, **name your flow**, choose **Manually trigger a flow**, then select **Create**.
+
+ :::image type="content" source="../media/connectors/select-manual-flow.png" alt-text="Screenshot showing how to manually trigger a flow.":::
+
+1. The first step for your instant flow, **Manually trigger a flow**, appears on screen. Select **New step**.
+
+ :::image type="content" source="../media/connectors/add-new-step.png" alt-text="Screenshot of add new flow step page.":::
+
+1. A **Choose an operation** pop-up window appears. Enter Translator V3 in the **Search connectors and actions** search bar, then select the **Microsoft Translator V3** icon.
+
+ :::image type="content" source="../media/connectors/choose-operation.png" alt-text="Screenshot showing the selection of Translator V3 as the next flow step.":::
+
+## Structure your cloud flow
+
+Let's select an action. Choose to translate or transliterate text.
+
+#### [Translate text](#tab/translate)
+
+### Translate
+
+1. Select the **Translate text** action.
+1. If you're using the Translator V3 connector for the first time, you need to enter your resource credentials:
+
+ * **Connection name**. Enter a name for your connection.
+ * **Subscription Key**. Enter one of your keys that you copied from the Azure portal.
+ * **Translator resource name**. Enter the name of your Translator resource found at the top of your resource page in the Azure portal. Select **Create**.
+
+ :::image type="content" source="../media/connectors/add-connection.png" alt-text="Screenshot showing the add connection window.":::
+
+ > [!NOTE]
+ > After you've set up your connection, you won't be required to reenter your credentials for subsequent Translator flows.
+
+1. The **Translate text** action window appears next.
+1. Select a **Source Language** from the dropdown menu or keep the default **Auto-detect** option.
+1. Select a **Target Language** from the dropdown window.
+1. Enter the **Body Text**.
+1. Select **Save**.
+
+ :::image type="content" source="../media/connectors/translate-text-step.png" alt-text="Screenshot showing the translate text step.":::
+
+#### [Transliterate text](#tab/transliterate)
+
+### Transliterate
+
+1. Select the **Transliterate** action.
+1. If you're using the Translator V3 connector for the first time, you need to enter your resource credentials:
+
+ * **Connection name**. Enter a name for your connection.
+ * **Subscription Key**. Enter one of your keys that you copied from the Azure portal.
+ * **Translator resource name**. Enter the name of your Translator resource found at the top of your resource page in the Azure portal. Select **Create**.
+
+ :::image type="content" source="../media/connectors/add-connection.png" alt-text="Screenshot showing the add connection window.":::
+
+1. Next, the **Transliterate** action window appears.
+1. **Language**. Select the language of the text that is to be converted.
+1. **Source script**. Select the name of the input text script.
+1. **Target script**. Select the name of the transliterated text script.
+1. Select **Save**.
+
+ :::image type="content" source="../media/connectors/transliterate-text-step.png" alt-text="Screenshot showing the transliterate text step.":::
+++
+## Test your connector flow
+
+Let's test the cloud flow and view the translated text.
+
+1. There's a green bar at the top of the page indicating that **Your flow is ready to go.**
+1. Select **Test** from the upper-right corner of the page.
+
+ :::image type="content" source="../media/connectors/test-flow.png" alt-text="Screenshot showing the test icon/button.":::
+
+1. Select **Test Flow** → **Manually** → **Test** from the right-side window.
+1. In the next window, select the **Run flow** button.
+1. Finally, select the **Done** button.
+1. You should receive a "Your flow ran successfully" message, and green check marks appear next to each successful step.
+
+ :::image type="content" source="../media/connectors/successful-text-translation-flow.png" alt-text="Screenshot of successful flow.":::
+
+#### [Translate text](#tab/translate)
+
+Select the **Translate text** step to view the translated text (output):
+
+ :::image type="content" source="../media/connectors/translated-text-output.png" alt-text="Screenshot of translated text output.":::
+
+#### [Transliterate text](#tab/transliterate)
+
+Select the **Transliterate** step to view the transliterated text (output):
+
+ :::image type="content" source="../media/connectors/transliterated-text-output.png" alt-text="Screenshot of transliterated text output.":::
+++
+> [!TIP]
+>
+> * Check on the status of your flow by selecting the **My flows** tab on the navigation sidebar.
+> * Edit or update your connection by selecting **Connections** under the **Data** tab on the navigation sidebar.
+
+That's it! You now know how to automate text translation processes using the Microsoft Translator V3 connector and Power Automate.
+
+## Next steps
+
+> [!div class="nextstepaction"]
+> [Configure a Translator V3 connector document translation flow](document-translation-flow.md)
ai-services Translator Container Configuration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/translator/containers/translator-container-configuration.md
+
+ Title: Configure containers - Translator
+
+description: The Translator container runtime environment is configured using the `docker run` command arguments. There are both required and optional settings.
++++++ Last updated : 07/18/2023+
+recommendations: false
++
+# Configure Translator Docker containers
+
+Azure AI services provides each container with a common configuration framework. You can easily configure your Translator containers to build a Translator application architecture optimized for robust cloud capabilities and edge locality.
+
+The **Translator** container runtime environment is configured using the `docker run` command arguments. This container has both required and optional settings. The required container-specific settings are the billing settings.
+
+## Configuration settings
+
+The container has the following configuration settings:
+
+|Required|Setting|Purpose|
+|--|--|--|
+|Yes|[ApiKey](#apikey-configuration-setting)|Tracks billing information.|
+|No|[ApplicationInsights](#applicationinsights-setting)|Enables adding [Azure Application Insights](/azure/application-insights) telemetry support to your container.|
+|Yes|[Billing](#billing-configuration-setting)|Specifies the endpoint URI of the service resource on Azure.|
+|Yes|[EULA](#eula-setting)| Indicates that you've accepted the license for the container.|
+|No|[Fluentd](#fluentd-settings)|Writes log and, optionally, metric data to a Fluentd server.|
+|No|HTTP Proxy|Configures an HTTP proxy for making outbound requests.|
+|No|[Logging](#logging-settings)|Provides ASP.NET Core logging support for your container. |
+|Yes|[Mounts](#mount-settings)|Reads and writes data from the host computer to the container and from the container back to the host computer.|
+
+> [!IMPORTANT]
+> The [**ApiKey**](#apikey-configuration-setting), [**Billing**](#billing-configuration-setting), and [**EULA**](#eula-setting) settings are used together, and you must provide valid values for all three of them; otherwise, your container won't start. For more information about using these configuration settings to instantiate a container, see [Billing](#billing-configuration-setting).
+
+## ApiKey configuration setting
+
+The `ApiKey` setting specifies the Azure resource key used to track billing information for the container. You must specify a value for the ApiKey and the value must be a valid key for the _Translator_ resource specified for the [`Billing`](#billing-configuration-setting) configuration setting.
+
+This setting can be found in the following place:
+
+* Azure portal: **Translator** resource management, under **Keys**
+
+## ApplicationInsights setting
++
+## Billing configuration setting
+
+The `Billing` setting specifies the endpoint URI of the _Translator_ resource on Azure used to meter billing information for the container. You must specify a value for this configuration setting, and the value must be a valid endpoint URI for a _Translator_ resource on Azure. The container reports usage about every 10 to 15 minutes.
+
+This setting can be found in the following place:
+
+* Azure portal: **Translator** Overview page, labeled `Endpoint`
+
+| Required | Name | Data type | Description |
+| -- | - | | -- |
+| Yes | `Billing` | String | Billing endpoint URI. For more information on obtaining the billing URI, see [gathering required parameters](translator-how-to-install-container.md#required-elements). For more information and a complete list of regional endpoints, see [Custom subdomain names for Azure AI services](../../cognitive-services-custom-subdomains.md). |
+
+## EULA setting
++
+## Fluentd settings
++
+## HTTP proxy credentials settings
++
+## Logging settings
++
+## Mount settings
+
+Use bind mounts to read and write data to and from the container. You can specify an input mount or output mount by specifying the `--mount` option in the [docker run](https://docs.docker.com/engine/reference/commandline/run/) command.
+
+## Next steps
+
+> [!div class="nextstepaction"]
+> [Learn more about Azure AI containers](../../cognitive-services-container-support.md)
ai-services Translator Container Supported Parameters https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/translator/containers/translator-container-supported-parameters.md
+
+ Title: "Container: Translate method"
+
+description: Understand the parameters, headers, and body messages for the container Translate method of Azure AI Translator to translate text.
+++++++ Last updated : 07/18/2023+++
+# Container: Translate
+
+Translate text.
+
+## Request URL
+
+Send a `POST` request to:
+
+```HTTP
+http://localhost:{port}/translate?api-version=3.0
+```
+
+Example: http://<span></span>localhost:5000/translate?api-version=3.0
+
+## Request parameters
+
+Request parameters passed on the query string are:
+
+### Required parameters
+
+| Query parameter | Description |
+| | |
+| api-version | _Required parameter_. <br>Version of the API requested by the client. Value must be `3.0`. |
+| from | _Required parameter_. <br>Specifies the language of the input text. Find which languages are available to translate from by looking up [supported languages](../reference/v3-0-languages.md) using the `translation` scope.|
+| to | _Required parameter_. <br>Specifies the language of the output text. The target language must be one of the [supported languages](../reference/v3-0-languages.md) included in the `translation` scope. For example, use `to=de` to translate to German. <br>It's possible to translate to multiple languages simultaneously by repeating the parameter in the query string. For example, use `to=de&to=it` to translate to German and Italian. |
+
+### Optional parameters
+
+| Query parameter | Description |
+| | |
+| textType | _Optional parameter_. <br>Defines whether the text being translated is plain text or HTML text. Any HTML needs to be a well-formed, complete element. Possible values are: `plain` (default) or `html`. |
+| includeSentenceLength | _Optional parameter_. <br>Specifies whether to include sentence boundaries for the input text and the translated text. Possible values are: `true` or `false` (default). |
+
+Request headers include:
+
+| Headers | Description |
+| | |
+| Authentication header(s) | _Required request header_. <br>See [available options for authentication](../reference/v3-0-reference.md#authentication). |
+| Content-Type | _Required request header_. <br>Specifies the content type of the payload. <br>Accepted value is `application/json; charset=UTF-8`. |
+| Content-Length | _Required request header_. <br>The length of the request body. |
+| X-ClientTraceId | _Optional_. <br>A client-generated GUID to uniquely identify the request. You can omit this header if you include the trace ID in the query string using a query parameter named `ClientTraceId`. |
+
+## Request body
+
+The body of the request is a JSON array. Each array element is a JSON object with a string property named `Text`, which represents the string to translate.
+
+```json
+[
+ {"Text":"I would really like to drive your car around the block a few times."}
+]
+```
+
+The following limitations apply:
+
+* The array can have at most 100 elements.
+* The entire text included in the request can't exceed 10,000 characters including spaces.
+
+## Response body
+
+A successful response is a JSON array with one result for each string in the input array. A result object includes the following properties:
+
+* `translations`: An array of translation results. The size of the array matches the number of target languages specified through the `to` query parameter. Each element in the array includes:
+
+  * `to`: A string representing the language code of the target language.
+
+  * `text`: A string giving the translated text.
+
+  * `sentLen`: An object returning sentence boundaries in the input and output texts.
+
+    * `srcSentLen`: An integer array representing the lengths of the sentences in the input text. The length of the array is the number of sentences, and the values are the length of each sentence.
+
+    * `transSentLen`: An integer array representing the lengths of the sentences in the translated text. The length of the array is the number of sentences, and the values are the length of each sentence.
+
+    Sentence boundaries are only included when the request parameter `includeSentenceLength` is `true`.
+
+* `sourceText`: An object with a single string property named `text`, which gives the input text in the default script of the source language. The `sourceText` property is present only when the input is expressed in a script that's not the usual script for the language. For example, if the input were Arabic written in Latin script, then `sourceText.text` would be the same Arabic text converted into Arabic script.
+
+Examples of JSON responses are provided in the [examples](#examples) section.
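+
+If you prefer to see these fields in code, here's a minimal Python sketch (not part of the reference) that sends `includeSentenceLength=true` and reads the properties described above. It assumes a locally running container at `localhost:5000` and uses the same authentication header shown in the curl examples that follow:
+
+```python
+import requests
+
+url = "http://localhost:5000/translate"
+params = {"api-version": "3.0", "from": "en", "to": "de", "includeSentenceLength": "true"}
+headers = {"Ocp-Apim-Subscription-Key": "<client-secret>"}
+body = [{"Text": "Hello, what is your name? I hope you are well."}]
+
+response = requests.post(url, params=params, headers=headers, json=body)
+response.raise_for_status()
+
+for result in response.json():
+    for translation in result["translations"]:
+        print(translation["to"], translation["text"])
+        # sentLen is present because includeSentenceLength=true was sent.
+        sent_len = translation.get("sentLen", {})
+        print("srcSentLen:", sent_len.get("srcSentLen"))
+        print("transSentLen:", sent_len.get("transSentLen"))
+```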
+
+## Response headers
+
+| Headers | Description |
+| | |
+| X-RequestId | Value generated by the service to identify the request. It's used for troubleshooting purposes. |
+| X-MT-System | Specifies the system type that was used for translation for each 'to' language requested for translation. The value is a comma-separated list of strings. Each string indicates a type: </br></br>&FilledVerySmallSquare; Custom - Request includes a custom system and at least one custom system was used during translation.</br>&FilledVerySmallSquare; Team - All other requests |
+
+## Response status codes
+
+If an error occurs, the request also returns a JSON error response. The error code is a 6-digit number combining the 3-digit HTTP status code followed by a 3-digit number to further categorize the error. Common error codes can be found on the [v3 Translator reference page](../reference/v3-0-reference.md#errors).
+
+## Examples
+
+### Translate a single input
+
+This example shows how to translate a single sentence from English to Simplified Chinese.
+
+```curl
+curl -X POST "http://localhost:{port}/translate?api-version=3.0&from=en&to=zh-Hans" -H "Ocp-Apim-Subscription-Key: <client-secret>" -H "Content-Type: application/json; charset=UTF-8" -d "[{'Text':'Hello, what is your name?'}]"
+```
+
+The response body is:
+
+```json
+[
+ {
+ "translations":[
+ {"text":"你好, 你叫什么名字?","to":"zh-Hans"}
+ ]
+ }
+]
+```
+
+The `translations` array includes one element, which provides the translation of the single piece of text in the input.
+
+### Translate multiple pieces of text
+
+Translating multiple strings at once is simply a matter of specifying an array of strings in the request body.
+
+```curl
+curl -X POST "http://localhost:{port}/translate?api-version=3.0&from=en&to=zh-Hans" -H "Ocp-Apim-Subscription-Key: <client-secret>" -H "Content-Type: application/json; charset=UTF-8" -d "[{'Text':'Hello, what is your name?'}, {'Text':'I am fine, thank you.'}]"
+```
+
+The response contains the translation of all pieces of text in the exact same order as in the request.
+The response body is:
+
+```json
+[
+ {
+ "translations":[
+ {"text":"你好, 你叫什么名字?","to":"zh-Hans"}
+ ]
+ },
+ {
+ "translations":[
+ {"text":"我很好,谢谢你。","to":"zh-Hans"}
+ ]
+ }
+]
+```
+
+### Translate to multiple languages
+
+This example shows how to translate the same input to several languages in one request.
+
+```curl
+curl -X POST "http://localhost:{port}/translate?api-version=3.0&from=en&to=zh-Hans&to=de" -H "Ocp-Apim-Subscription-Key: <client-secret>" -H "Content-Type: application/json; charset=UTF-8" -d "[{'Text':'Hello, what is your name?'}]"
+```
+
+The response body is:
+
+```json
+[
+ {
+ "translations":[
+ {"text":"你好, 你叫什么名字?","to":"zh-Hans"},
+ {"text":"Hallo, was ist dein Name?","to":"de"}
+ ]
+ }
+]
+```
+
+### Translate content with markup and decide what's translated
+
+It's common to translate content that includes markup such as content from an HTML page or content from an XML document. Include query parameter `textType=html` when translating content with tags. In addition, it's sometimes useful to exclude specific content from translation. You can use the attribute `class=notranslate` to specify content that should remain in its original language. In the following example, the content inside the first `div` element won't be translated, while the content in the second `div` element will be translated.
+
+```
+<div class="notranslate">This will not be translated.</div>
+<div>This will be translated. </div>
+```
+
+Here's a sample request to illustrate.
+
+```curl
+curl -X POST "http://localhost:{port}/translate?api-version=3.0&from=en&to=zh-Hans&textType=html" -H "Ocp-Apim-Subscription-Key: <client-secret>" -H "Content-Type: application/json; charset=UTF-8" -d "[{'Text':'<div class=\"notranslate\">This will not be translated.</div><div>This will be translated.</div>'}]"
+```
+
+The response is:
+
+```json
+[
+ {
+ "translations":[
+ {"text":"<div class=\"notranslate\">This will not be translated.</div><div>这将被翻译。</div>","to":"zh-Hans"}
+ ]
+ }
+]
+```
+
+### Translate with dynamic dictionary
+
+If you already know the translation you want to apply to a word or a phrase, you can supply it as markup within the request. The dynamic dictionary is only safe for proper nouns such as personal names and product names.
+
+The markup to supply uses the following syntax.
+
+```
+<mstrans:dictionary translation="translation of phrase">phrase</mstrans:dictionary>
+```
+
+For example, consider the English sentence "The word wordomatic is a dictionary entry." To preserve the word _wordomatic_ in the translation, send the request:
+
+```curl
+curl -X POST "http://localhost:{port}/translate?api-version=3.0&from=en&to=de" -H "Ocp-Apim-Subscription-Key: <client-secret>" -H "Content-Type: application/json; charset=UTF-8" -d "[{'Text':'The word <mstrans:dictionary translation=\"wordomatic\">word or phrase</mstrans:dictionary> is a dictionary entry.'}]"
+```
+
+The result is:
+
+```json
+[
+ {
+ "translations":[
+ {"text":"Das Wort \"wordomatic\" ist ein W├╢rterbucheintrag.","to":"de"}
+ ]
+ }
+]
+```
+
+This feature works the same way with `textType=text` or with `textType=html`. The feature should be used sparingly. The appropriate and far better way of customizing translation is by using Custom Translator. Custom Translator makes full use of context and statistical probabilities. If you've created training data that shows your word or phrase in context, you'll get much better results. [Learn more about Custom Translator](../custom-translator/concepts/customization.md).
+
+## Request limits
+
+Each translate request is limited to 10,000 characters, across all the target languages you're translating to. For example, sending a translate request of 3,000 characters to translate to three different languages results in a request size of 3000x3 = 9,000 characters, which satisfies the request limit. You're charged per character, not by the number of requests. It's recommended to send shorter requests.
+
+The following table lists array element and character limits for the Translator **translation** operation.
+
+| Operation | Maximum size of array element | Maximum number of array elements | Maximum request size (characters) |
+|:-|:-|:-|:-|
+| translate | 10,000 | 100 | 10,000 |
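+
+As a quick illustration of the arithmetic above, the following Python sketch estimates the billed request size before sending a request; the 10,000-character limit comes from the table above:
+
+```python
+MAX_REQUEST_CHARS = 10_000  # per-request limit from the table above
+
+def request_size(texts, target_languages):
+    # Characters are counted once per target language.
+    total_chars = sum(len(text) for text in texts)
+    return total_chars * len(target_languages)
+
+texts = ["x" * 3000]  # a single 3,000-character input
+size = request_size(texts, ["de", "fr", "it"])
+print(size)                       # 9000
+print(size <= MAX_REQUEST_CHARS)  # True: the request fits within the limit
+```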
ai-services Translator Disconnected Containers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/translator/containers/translator-disconnected-containers.md
+
+ Title: Use Translator Docker containers in disconnected environments
+
+description: Learn how to run Azure AI Translator containers in disconnected environments.
+++++ Last updated : 07/18/2023+++
+<!-- markdownlint-disable MD036 -->
+<!-- markdownlint-disable MD001 -->
+
+# Use Translator containers in disconnected environments
+
+Azure AI Translator containers allow you to use Translator Service APIs with the benefits of containerization. Disconnected containers are offered through commitment tier pricing at a discounted rate compared to pay-as-you-go pricing. With commitment tier pricing, you can commit to using Translator Service features for a fixed fee, at a predictable total cost, based on the needs of your workload.
+
+## Get started
+
+Before attempting to run a Docker container in an offline environment, make sure you're familiar with the following requirements to successfully download and use the container:
+
+* Host computer requirements and recommendations.
+* The Docker `pull` command to download the container.
+* How to validate that a container is running.
+* How to send queries to the container's endpoint, once it's running.
+
+## Request access to use containers in disconnected environments
+
+Complete and submit the [request form](https://aka.ms/csdisconnectedcontainers) to request access to run the containers disconnected from the internet.
++
+Access is limited to customers that meet the following requirements:
+
+* Your organization should be identified as a strategic customer or partner with Microsoft.
+* Disconnected containers are expected to run fully offline, so your use cases must meet at least one of these or similar requirements:
+ * Environment or device(s) with zero connectivity to the internet.
+ * Remote location that occasionally has internet access.
+ * Organization under strict regulation that prohibits sending any kind of data back to the cloud.
+* Application completed as instructed. Pay close attention to the guidance provided throughout the application to ensure you provide all the information required for approval.
+
+## Create a new resource and purchase a commitment plan
+
+1. Create a [Translator resource](https://portal.azure.com/#create/Microsoft.CognitiveServicesTextTranslation) in the Azure portal.
+
+1. Enter the applicable information to create your resource. Be sure to select **Commitment tier disconnected containers** as your pricing tier.
+
+ > [!NOTE]
+ >
+ > * You will only see the option to purchase a commitment tier if you have been approved by Microsoft.
+
+ :::image type="content" source="../media/create-resource-offline-container.png" alt-text="A screenshot showing resource creation on the Azure portal.":::
+
+1. Select **Review + Create** at the bottom of the page. Review the information, and select **Create**.
+
+## Gather required parameters
+
+There are three required parameters for all Azure AI services' containers:
+
+* The end-user license agreement (EULA) must be present with a value of *accept*.
+* The endpoint URL for your resource from the Azure portal.
+* The API key for your resource from the Azure portal.
+
+Both the endpoint URL and API key are needed when you first run the container to configure it for disconnected usage. You can find the key and endpoint on the **Key and endpoint** page for your resource in the Azure portal:
+
+ :::image type="content" source="../media/quickstarts/keys-and-endpoint-portal.png" alt-text="Screenshot of Azure portal keys and endpoint page.":::
+
+> [!IMPORTANT]
+> You will only use your key and endpoint to configure the container to run in a disconnected environment. After you configure the container, you won't need the key and endpoint values to send API requests. Store them securely, for example, using Azure Key Vault. Only one key is necessary for this process.
+
+## Download a Docker container with `docker pull`
+
+Download the Docker container that has been approved to run in a disconnected environment. For example:
+
+|Docker pull command | Value |Format|
+|-|-||
+|&bullet; **`docker pull [image]`**</br>&bullet; **`docker pull [image]:latest`**|The latest container image.|&bullet; mcr.microsoft.com/azure-cognitive-services/translator/text-translation</br> </br>&bullet; mcr.microsoft.com/azure-cognitive-services/translator/text-translation:latest |
+|||
+|&bullet; **`docker pull [image]:[version]`** | A specific container image |mcr.microsoft.com/azure-cognitive-services/translator/text-translation:1.0.019410001-amd64 |
+
+ **Example Docker pull command**
+
+```docker
+docker pull mcr.microsoft.com/azure-cognitive-services/translator/text-translation:latest
+```
+
+## Configure the container to run in a disconnected environment
+
+Now that you've downloaded your container, you need to execute the `docker run` command with the following parameters:
+
+* **`DownloadLicense=True`**. This parameter downloads a license file that enables your Docker container to run when it isn't connected to the internet. It also contains an expiration date, after which the license file is invalid to run the container. You can only use the license file in corresponding approved container.
+* **`Languages={language list}`**. You must include this parameter to download model files for the [languages](../language-support.md) you want to translate.
+
+> [!IMPORTANT]
+> The `docker run` command will generate a template that you can use to run the container. The template contains parameters you'll need for the downloaded models and configuration file. Make sure you save this template.
+
+The following example shows the formatting for the `docker run` command with placeholder values. Replace these placeholder values with your own values.
+
+| Placeholder | Value | Format|
+|-|-||
+| `[image]` | The container image you want to use. | `mcr.microsoft.com/azure-cognitive-services/translator/text-translation` |
+| `{LICENSE_MOUNT_PATH}` | The path where the license is downloaded and mounted. | `/host/license:/path/to/license/directory` |
+| `{MODEL_MOUNT_PATH}`| The path where the machine translation models are downloaded and mounted. Your directory structure must be formatted as **/usr/local/models** | `/host/translator/models:/usr/local/models`|
+| `{ENDPOINT_URI}` | The endpoint for authenticating your service request. You can find it on your resource's **Key and endpoint** page, in the Azure portal. | `https://<your-custom-subdomain>.cognitiveservices.azure.com` |
+| `{API_KEY}` | The key for your Text Translation resource. You can find it on your resource's **Key and endpoint** page, in the Azure portal. |`{string}`|
+| `{LANGUAGES_LIST}` | List of language codes separated by commas. It's mandatory to have English (en) language as part of the list.| `en`, `fr`, `it`, `zu`, `uk` |
+| `{CONTAINER_LICENSE_DIRECTORY}` | Location of the license folder on the container's local filesystem. | `/path/to/license/directory` |
+
+ **Example `docker run` command**
+
+```docker
+
+docker run --rm -it -p 5000:5000 \
+
+-v {MODEL_MOUNT_PATH} \
+
+-v {LICENSE_MOUNT_PATH} \
+
+-e Mounts:License={CONTAINER_LICENSE_DIRECTORY} \
+
+-e DownloadLicense=true \
+
+-e eula=accept \
+
+-e billing={ENDPOINT_URI} \
+
+-e apikey={API_KEY} \
+
+-e Languages={LANGUAGES_LIST} \
+
+[image]
+```
+
+### Translator translation models and container configuration
+
+After you've [configured the container](#configure-the-container-to-run-in-a-disconnected-environment), the values for the downloaded translation models and container configuration will be generated and displayed in the container output:
+
+```bash
+ -e MODELS= usr/local/models/model1/, usr/local/models/model2/
+ -e TRANSLATORSYSTEMCONFIG=/usr/local/models/Config/5a72fa7c-394b-45db-8c06-ecdfc98c0832
+```
+
+## Run the container in a disconnected environment
+
+Once the license file has been downloaded, you can run the container in a disconnected environment with your license, appropriate memory, and suitable CPU allocations. The following example shows the formatting of the `docker run` command with placeholder values. Replace these placeholder values with your own values.
+
+Whenever the container is run, the license file must be mounted to the container and the location of the license folder on the container's local filesystem must be specified with `Mounts:License=`. In addition, an output mount must be specified so that billing usage records can be written.
+
+| Placeholder | Value | Format|
+|-|-||
+| `[image]`| The container image you want to use. | `mcr.microsoft.com/azure-cognitive-services/translator/text-translation` |
+| `{MEMORY_SIZE}` | The appropriate size of memory to allocate for your container. | `16g` |
+| `{NUMBER_CPUS}` | The appropriate number of CPUs to allocate for your container. | `4` |
+| `{LICENSE_MOUNT_PATH}` | The path where the license is located and mounted. | `/host/translator/license:/path/to/license/directory` |
+|`{MODEL_MOUNT_PATH}`| The path where the machine translation models are downloaded and mounted. Your directory structure must be formatted as **/usr/local/models** | `/host/translator/models:/usr/local/models`|
+|`{MODELS_DIRECTORY_LIST}`|List of comma separated directories each having a machine translation model. | `/usr/local/models/enu_esn_generalnn_2022240501,/usr/local/models/esn_enu_generalnn_2022240501` |
+| `{OUTPUT_MOUNT_PATH}` | The output path for logging [usage records](#usage-records). | `/host/output:/path/to/output/directory` |
+| `{CONTAINER_LICENSE_DIRECTORY}` | Location of the license folder on the container's local filesystem. | `/path/to/license/directory` |
+| `{CONTAINER_OUTPUT_DIRECTORY}` | Location of the output folder on the container's local filesystem. | `/path/to/output/directory` |
+|`{TRANSLATOR_CONFIG_JSON}`| Translator system configuration file used by container internally.| `/usr/local/models/Config/5a72fa7c-394b-45db-8c06-ecdfc98c0832` |
+
+ **Example `docker run` command**
+
+```docker
+
+docker run --rm -it -p 5000:5000 --memory {MEMORY_SIZE} --cpus {NUMBER_CPUS} \
+
+-v {MODEL_MOUNT_PATH} \
+
+-v {LICENSE_MOUNT_PATH} \
+
+-v {OUTPUT_MOUNT_PATH} \
+
+-e Mounts:License={CONTAINER_LICENSE_DIRECTORY} \
+
+-e Mounts:Output={CONTAINER_OUTPUT_DIRECTORY} \
+
+-e MODELS={MODELS_DIRECTORY_LIST} \
+
+-e TRANSLATORSYSTEMCONFIG={TRANSLATOR_CONFIG_JSON} \
+
+-e eula=accept \
+
+[image]
+```
+
+## Other parameters and commands
+
+Here are a few more parameters and commands you may need to run the container:
+
+#### Usage records
+
+When operating Docker containers in a disconnected environment, the container will write usage records to a volume where they're collected over time. You can also call a REST API endpoint to generate a report about service usage.
+
+#### Arguments for storing logs
+
+When run in a disconnected environment, an output mount must be available to the container to store usage logs. For example, include `-v /host/output:{OUTPUT_PATH}` and `Mounts:Output={OUTPUT_PATH}` in your `docker run` command, replacing `{OUTPUT_PATH}` with the path where the logs are stored:
+
+ **Example `docker run` command**
+
+```docker
+docker run -v /host/output:{OUTPUT_PATH} ... <image> ... Mounts:Output={OUTPUT_PATH}
+```
+
+#### Get records using the container endpoints
+
+The container provides two endpoints for returning records regarding its usage.
+
+#### Get all records
+
+The following endpoint provides a report summarizing all of the usage collected in the mounted billing record directory.
+
+```HTTP
+https://<service>/records/usage-logs/
+```
+
+ **Example endpoint**
+
+ `http://localhost:5000/records/usage-logs`
+
+The usage-logs endpoint returns a JSON response similar to the following example:
+
+```json
+{
+  "apiType": "string",
+  "serviceName": "string",
+  "meters": [
+    {
+      "name": "string",
+      "quantity": 256345435
+    }
+  ]
+}
+```
+
+#### Get records for a specific month
+
+The following endpoint provides a report summarizing usage over a specific month and year:
+
+```HTTP
+https://<service>/records/usage-logs/{MONTH}/{YEAR}
+```
+
+This usage-logs endpoint returns a JSON response similar to the following example:
+
+```json
+{
+ "apiType": "string",
+ "serviceName": "string",
+ "meters": [
+ {
+ "name": "string",
+ "quantity": 56097
+ }
+ ]
+}
+```
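+
+Here's a minimal Python sketch of calling both endpoints from the host, assuming the container is listening on `localhost:5000` as in the earlier examples:
+
+```python
+import requests
+
+base = "http://localhost:5000/records/usage-logs"
+
+# Summary of all usage collected in the mounted billing record directory.
+all_usage = requests.get(base).json()
+for meter in all_usage.get("meters", []):
+    print(meter["name"], meter["quantity"])
+
+# Summary for a specific month and year, for example July 2023.
+monthly = requests.get(f"{base}/07/2023").json()
+print(monthly)
+```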
+
+### Purchase a different commitment plan for disconnected containers
+
+Commitment plans for disconnected containers have a calendar year commitment period. When you purchase a plan, you're charged the full price immediately. During the commitment period, you can't change your commitment plan; however, you can purchase more units at a prorated price for the remaining days in the year. You have until midnight (UTC) on the last day of your commitment to end a commitment plan.
+
+You can choose a different commitment plan in the **Commitment tier pricing** settings of your resource under the **Resource Management** section.
+
+### End a commitment plan
+
+ If you decide that you don't want to continue purchasing a commitment plan, you can set your resource's autorenewal to **Do not auto-renew**. Your commitment plan expires on the displayed commitment end date. After this date, you won't be charged for the commitment plan. You're still able to continue using the Azure resource to make API calls, charged at pay-as-you-go pricing. You have until midnight (UTC) on the last day of the year to end a commitment plan for disconnected containers. If you do so, you avoid charges for the following year.
+
+## Troubleshooting
+
+Run the container with an output mount and logging enabled. These settings enable the container to generate log files that are helpful for troubleshooting issues that occur while starting or running the container.
+
+> [!TIP]
+> For more troubleshooting information and guidance, see [Disconnected containers Frequently asked questions (FAQ)](../../containers/disconnected-container-faq.yml).
+
+That's it! You've learned how to create and run disconnected containers for Azure AI Translator Service.
+
+## Next steps
+
+> [!div class="nextstepaction"]
+> [Request parameters for Translator text containers](translator-container-supported-parameters.md)
ai-services Translator How To Install Container https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/translator/containers/translator-how-to-install-container.md
+
+ Title: Install and run Docker containers for Translator API
+
+description: Use the Docker container for Translator API to translate text.
++++++ Last updated : 07/18/2023+
+recommendations: false
+keywords: on-premises, Docker, container, identify
++
+# Install and run Translator containers
+
+Containers enable you to run several features of the Translator service in your own environment. Containers are great for specific security and data governance requirements. In this article, you learn how to download, install, and run a Translator container.
+
+The Translator container enables you to build a translator application architecture that's optimized for both robust cloud capabilities and edge locality.
+
+See the list of [languages supported](../language-support.md) when using Translator containers.
+
+> [!IMPORTANT]
+>
+> * To use the Translator container, you must submit an online request, and have it approved. For more information, _see_ [Request approval to run container](#request-approval-to-run-container) below.
+> * Translator container supports limited features compared to the cloud offerings. For more information, _see_ [**Container translate methods**](translator-container-supported-parameters.md).
+
+<!-- markdownlint-disable MD033 -->
+
+## Prerequisites
+
+To get started, you'll need an active [**Azure account**](https://azure.microsoft.com/free/cognitive-services/). If you don't have one, you can [**create a free account**](https://azure.microsoft.com/free/).
+
+You'll also need to have:
+
+| Required | Purpose |
+|--|--|
+| Familiarity with Docker | <ul><li>You should have a basic understanding of Docker concepts, like registries, repositories, containers, and container images, as well as knowledge of basic `docker` [terminology and commands](/dotnet/architecture/microservices/container-docker-introduction/docker-terminology).</li></ul> |
+| Docker Engine | <ul><li>You need the Docker Engine installed on a [host computer](#host-computer). Docker provides packages that configure the Docker environment on [macOS](https://docs.docker.com/docker-for-mac/), [Windows](https://docs.docker.com/docker-for-windows/), and [Linux](https://docs.docker.com/engine/installation/#supported-platforms). For a primer on Docker and container basics, see the [Docker overview](https://docs.docker.com/engine/docker-overview/).</li><li> Docker must be configured to allow the containers to connect with and send billing data to Azure. </li><li> On **Windows**, Docker must also be configured to support **Linux** containers.</li></ul> |
+| Translator resource | <ul><li>An Azure [Translator](https://portal.azure.com/#create/Microsoft.CognitiveServicesTextTranslation) resource with region other than 'global', associated API key and endpoint URI. Both values are required to start the container and can be found on the resource overview page.</li></ul>|
+
+|Optional|Purpose|
+||-|
+|Azure CLI (command-line interface) |<ul><li> The [Azure CLI](/cli/azure/install-azure-cli) enables you to use a set of online commands to create and manage Azure resources. It's available to install in Windows, macOS, and Linux environments and can be run in a Docker container and Azure Cloud Shell.</li></ul> |
+
+## Required elements
+
+All Azure AI containers require three primary elements:
+
+* **EULA accept setting**. An end-user license agreement (EULA) set with a value of `Eula=accept`.
+
+* **API key** and **Endpoint URL**. The API key is used to start the container. You can retrieve the API key and Endpoint URL values by navigating to the Translator resource **Keys and Endpoint** page and selecting the `Copy to clipboard` <span class="docon docon-edit-copy x-hidden-focus"></span> icon.
+
+> [!IMPORTANT]
+>
+> * Keys are used to access your Azure AI resource. Do not share your keys. Store them securely, for example, using Azure Key Vault. We also recommend regenerating these keys regularly. Only one key is necessary to make an API call. When regenerating the first key, you can use the second key for continued access to the service.
+
+## Host computer
++
+## Container requirements and recommendations
+
+The following table describes the minimum and recommended CPU cores and memory to allocate for the Translator container.
+
+| Container | Minimum |Recommended | Language Pair |
+|--|||-|
+| Translator |`2` cores, 2-GB memory |`4` cores, 8-GB memory | 4 |
+
+* Each core must be at least 2.6 gigahertz (GHz) or faster.
+
+* For every language pair, it's recommended to have 2 GB of memory. By default, the Translator offline container has four language pairs.
+
+* The core and memory correspond to the `--cpus` and `--memory` settings, which are used as part of the `docker run` command.
+
+> [!NOTE]
+>
+> * CPU core and memory correspond to the `--cpus` and `--memory` settings, which are used as part of the docker run command.
+>
+> * The minimum and recommended specifications are based on Docker limits, not host machine resources.
+
+## Request approval to run container
+
+Complete and submit the [**Azure AI services
+Application for Gated Services**](https://aka.ms/csgate-translator) to request access to the container.
+++
+## Translator container image
+
+The Translator container image can be found on the `mcr.microsoft.com` container registry syndicate. It resides within the `azure-cognitive-services/translator` repository and is named `text-translation`. The fully qualified container image name is `mcr.microsoft.com/azure-cognitive-services/translator/text-translation:latest`.
+
+To use the latest version of the container, you can use the `latest` tag. You can find a full list of [tags on the MCR](https://mcr.microsoft.com/product/azure-cognitive-services/translator/text-translation/tags).
+
+## Get container images with **docker commands**
+
+> [!IMPORTANT]
+>
+> * The docker commands in the following sections use the back slash, `\`, as a line continuation character. Replace or remove this based on your host operating system's requirements.
+> * The `EULA`, `Billing`, and `ApiKey` options must be specified to run the container; otherwise, the container won't start.
+
+Use the [docker run](https://docs.docker.com/engine/reference/commandline/run/) command to download a container image from Microsoft Container registry and run it.
+
+```Docker
+docker run --rm -it -p 5000:5000 --memory 12g --cpus 4 \
+-v /mnt/d/TranslatorContainer:/usr/local/models \
+-e apikey={API_KEY} \
+-e eula=accept \
+-e billing={ENDPOINT_URI} \
+-e Languages=en,fr,es,ar,ru \
+mcr.microsoft.com/azure-cognitive-services/translator/text-translation:latest
+```
+
+The above command:
+
+* Downloads and runs a Translator container from the container image.
+* Allocates 12 gigabytes (GB) of memory and four CPU cores.
+* Exposes TCP port 5000 and allocates a pseudo-TTY for the container.
+* Accepts the end-user license agreement (EULA).
+* Configures the billing endpoint.
+* Downloads translation models for English, French, Spanish, Arabic, and Russian.
+* Automatically removes the container after it exits. The container image is still available on the host computer.
+
+### Run multiple containers on the same host
+
+If you intend to run multiple containers with exposed ports, make sure to run each container with a different exposed port. For example, run the first container on port 5000 and the second container on port 5001.
+
+You can have this container and a different Azure AI container running on the HOST together. You also can have multiple containers of the same Azure AI container running.
+
+## Query the container's Translator endpoint
+
+ The container provides a REST-based Translator endpoint API. Here's an example request:
+
+```curl
+curl -X POST "http://localhost:5000/translate?api-version=3.0&from=en&to=zh-HANS"
+ -H "Content-Type: application/json" -d "[{'Text':'Hello, what is your name?'}]"
+```
+
+> [!NOTE]
+> If you attempt the cURL POST request before the container is ready, you'll end up getting a *Service is temporarily unavailable* response. Wait until the container is ready, then try again.
+
+## Stop the container
++
+## Troubleshoot
+
+### Validate that a container is running
+
+There are several ways to validate that the container is running:
+
+* The container provides a homepage at `/` as a visual validation that the container is running.
+
+* You can open your favorite web browser and navigate to the external IP address and exposed port of the container in question. Use the various request URLs below to validate the container is running. The example request URLs listed below use `http://localhost:5000`, but your specific container may vary. Keep in mind that you're navigating to your container's **External IP address** and exposed port.
+
+| Request URL | Purpose |
+|--|--|
+| `http://localhost:5000/` | The container provides a home page. |
+| `http://localhost:5000/ready` | Requested with GET. Provides a verification that the container is ready to accept a query against the model. This request can be used for Kubernetes [liveness and readiness probes](https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-probes/). |
+| `http://localhost:5000/status` | Requested with GET. Verifies if the api-key used to start the container is valid without causing an endpoint query. This request can be used for Kubernetes [liveness and readiness probes](https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-probes/). |
+| `http://localhost:5000/swagger` | The container provides a full set of documentation for the endpoints and a **Try it out** feature. With this feature, you can enter your settings into a web-based HTML form and make the query without having to write any code. After the query returns, an example CURL command is provided to demonstrate the HTTP headers and body format that's required. |
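+
+For example, here's a minimal Python sketch that waits for the container to report ready before sending translation requests; it assumes the container is exposed at `localhost:5000` as in the table above:
+
+```python
+import time
+import requests
+
+base = "http://localhost:5000"
+
+# Poll the readiness endpoint until the container can accept queries.
+for _ in range(30):
+    try:
+        if requests.get(f"{base}/ready", timeout=5).status_code == 200:
+            break
+    except requests.ConnectionError:
+        pass  # The container isn't listening yet.
+    time.sleep(10)
+else:
+    raise TimeoutError("Container did not become ready in time.")
+
+# Verify that the api-key used to start the container is valid.
+print(requests.get(f"{base}/status").status_code)
+```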
+++
+## Text translation code samples
+
+### Translate text with swagger
+
+#### English &leftrightarrow; German
+
+Navigate to the swagger page: `http://localhost:5000/swagger/index.html`
+
+1. Select **POST /translate**
+1. Select **Try it out**
+1. Enter the **From** parameter as `en`
+1. Enter the **To** parameter as `de`
+1. Enter the **api-version** parameter as `3.0`
+1. Under **texts**, replace `string` with the following JSON
+
+```json
+ [
+ {
+ "text": "hello, how are you"
+ }
+ ]
+```
+
+Select **Execute**. The resulting translations are output in the **Response Body**. You should expect something similar to the following response:
+
+```json
+"translations": [
+ {
+ "text": "hallo, wie geht es dir",
+ "to": "de"
+ }
+ ]
+```
+
+### Translate text with Python
+
+```python
+import requests, json
+
+url = 'http://localhost:5000/translate?api-version=3.0&from=en&to=fr'
+headers = { 'Content-Type': 'application/json' }
+body = [{ 'text': 'Hello, how are you' }]
+
+request = requests.post(url, headers=headers, json=body)
+response = request.json()
+
+print(json.dumps(
+ response,
+ sort_keys=True,
+ indent=4,
+ ensure_ascii=False,
+ separators=(',', ': ')))
+```
+
+### Translate text with C#/.NET console app
+
+Launch Visual Studio and create a new console application. Edit the `*.csproj` file to add the `<LangVersion>7.1</LangVersion>` node, which specifies C# 7.1. Add the [Newtonsoft.Json](https://www.nuget.org/packages/Newtonsoft.Json/) NuGet package, version 11.0.2.
+
+In `Program.cs`, replace all the existing code with the following script:
+
+```csharp
+using Newtonsoft.Json;
+using System;
+using System.Net.Http;
+using System.Text;
+using System.Threading.Tasks;
+
+namespace TranslateContainer
+{
+ class Program
+ {
+ const string ApiHostEndpoint = "http://localhost:5000";
+ const string TranslateApi = "/translate?api-version=3.0&from=en&to=de";
+
+ static async Task Main(string[] args)
+ {
+ var textToTranslate = "Sunny day in Seattle";
+ var result = await TranslateTextAsync(textToTranslate);
+
+ Console.WriteLine(result);
+ Console.ReadLine();
+ }
+
+ static async Task<string> TranslateTextAsync(string textToTranslate)
+ {
+ var body = new object[] { new { Text = textToTranslate } };
+ var requestBody = JsonConvert.SerializeObject(body);
+
+ var client = new HttpClient();
+ using (var request =
+ new HttpRequestMessage
+ {
+ Method = HttpMethod.Post,
+ RequestUri = new Uri($"{ApiHostEndpoint}{TranslateApi}"),
+ Content = new StringContent(requestBody, Encoding.UTF8, "application/json")
+ })
+ {
+ // Send the request and await a response.
+ var response = await client.SendAsync(request);
+
+ return await response.Content.ReadAsStringAsync();
+ }
+ }
+ }
+}
+```
+
+## Summary
+
+In this article, you learned concepts and workflows for downloading, installing, and running Translator container. Now you know:
+
+* Translator provides Linux containers for Docker.
+* Container images are downloaded from the container registry and run in Docker.
+* You can use the REST API to call the 'translate' operation in the Translator container by specifying the container's host URI.
+
+## Next steps
+
+> [!div class="nextstepaction"]
+> [Learn more about Azure AI containers](../../containers/index.yml?context=%2fazure%2fcognitive-services%2ftranslator%2fcontext%2fcontext)
ai-services Create Translator Resource https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/translator/create-translator-resource.md
+
+ Title: Create a Translator resource
+
+description: This article shows you how to create an Azure AI Translator resource and retrieve your key and endpoint URL.
+++++++ Last updated : 07/18/2023++
+# Create a Translator resource
+
+In this article, you learn how to create a Translator resource in the Azure portal. [Azure AI Translator](translator-overview.md) is a cloud-based machine translation service that is part of the [Azure AI services](../what-are-ai-services.md) family. Azure resources are instances of services that you create. All API requests to Azure AI services require an **endpoint** URL and a read-only **key** for authenticating access.
+
+## Prerequisites
+
+To get started, you need an active [**Azure account**](https://azure.microsoft.com/free/cognitive-services/). If you don't have one, you can [**create a free 12-month subscription**](https://azure.microsoft.com/free/).
+
+## Create your resource
+
+The Translator service can be accessed through two different resource types:
+
+* [**Single-service**](https://portal.azure.com/#create/Microsoft.CognitiveServicesTextTranslation) resource types enable access to a single service API key and endpoint.
+
+* [**Multi-service**](https://portal.azure.com/#create/Microsoft.CognitiveServicesAllInOne) resource types enable access to multiple Azure AI services using a single API key and endpoint.
+
+## Complete your project and instance details
+
+1. **Subscription**. Select one of your available Azure subscriptions.
+
+1. **Resource Group**. You can create a new resource group or add your resource to a pre-existing resource group that shares the same lifecycle, permissions, and policies.
+
+1. **Resource Region**. Choose **Global** unless your business or application requires a specific region. If you're planning on using the Document Translation feature with [managed identity authorization](document-translation/how-to-guides/create-use-managed-identities.md), choose a geographic region such as **East US**.
+
+1. **Name**. Enter the name you have chosen for your resource. The name you choose must be unique within Azure.
+
+ > [!NOTE]
+ > If you are using a Translator feature that requires a custom domain endpoint, such as Document Translation, the value that you enter in the Name field will be the custom domain name parameter for the endpoint.
+
+1. **Pricing tier**. Select a [pricing tier](https://azure.microsoft.com/pricing/details/cognitive-services/translator) that meets your needs:
+
+ * Each subscription has a free tier.
+ * The free tier has the same features and functionality as the paid plans and doesn't expire.
+ * Only one free tier is available per subscription.
+ * Document Translation is supported in paid tiers. The Language Studio only supports the S1 or D3 instance tiers. We suggest you select the Standard S1 instance tier to try Document Translation.
+
+1. If you've created a multi-service resource, you need to confirm more usage details via the check boxes.
+
+1. Select **Review + Create**.
+
+1. Review the service terms and select **Create** to deploy your resource.
+
+1. After your resource has successfully deployed, select **Go to resource**.
+
+### Authentication keys and endpoint URL
+
+All Azure AI services API requests require an endpoint URL and a read-only key for authentication.
+
+* **Authentication keys**. Your key is a unique string that is passed on every request to the Translation service. You can pass your key through a query-string parameter or by specifying it in the HTTP request header.
+
+* **Endpoint URL**. Use the Global endpoint in your API request unless you need a specific Azure region or custom endpoint. *See* [Base URLs](reference/v3-0-reference.md#base-urls). The Global endpoint URL is `api.cognitive.microsofttranslator.com`.
+
+## Get your authentication keys and endpoint
+
+1. After your new resource deploys, select **Go to resource** or navigate directly to your resource page.
+1. In the left rail, under *Resource Management*, select **Keys and Endpoint**.
+1. Copy and paste your keys and endpoint URL in a convenient location, such as *Microsoft Notepad*. You can then verify them with a quick test request, as shown in the sketch after this list.
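+
+Once you've copied a key and the endpoint, you can confirm that they work with a quick test request. The following is a minimal sketch rather than production code; the key, region, target language, and sample text are placeholder values that you replace with your own:
+
+```csharp
+using System;
+using System.Net.Http;
+using System.Text;
+using System.Threading.Tasks;
+
+class QuickKeyCheck
+{
+    // Placeholder values -- replace with the key and region copied from your resource.
+    private const string Key = "<your-translator-key>";
+    private const string Region = "<your-resource-region>"; // for a Global resource, you can omit the region header
+    private const string Endpoint = "https://api.cognitive.microsofttranslator.com";
+
+    static async Task Main()
+    {
+        using var client = new HttpClient();
+        using var request = new HttpRequestMessage(HttpMethod.Post,
+            $"{Endpoint}/translate?api-version=3.0&to=es");
+
+        // The key is passed in the Ocp-Apim-Subscription-Key request header.
+        request.Headers.Add("Ocp-Apim-Subscription-Key", Key);
+        request.Headers.Add("Ocp-Apim-Subscription-Region", Region);
+        request.Content = new StringContent("[{\"Text\":\"Hello, world!\"}]", Encoding.UTF8, "application/json");
+
+        var response = await client.SendAsync(request);
+        Console.WriteLine(await response.Content.ReadAsStringAsync());
+    }
+}
+```
+
+A `200 OK` response with translated JSON confirms that the key and endpoint are valid; an authentication error usually means the key or region value doesn't match the resource.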
++
+## How to delete a resource or resource group
+
+> [!WARNING]
+>
+> Deleting a resource group also deletes all resources contained in the group.
+
+To remove an Azure AI multi-service or Translator resource, you can **delete the resource** or **delete the resource group**.
+
+To delete the resource:
+
+1. Navigate to your Resource Group in the Azure portal.
+1. Select the resources to be deleted by selecting the adjacent check box.
+1. Select **Delete** from the top menu near the right edge.
+1. Type *yes* in the **Deleted Resources** dialog box.
+1. Select **Delete**.
+
+To delete the resource group:
+
+1. Navigate to your Resource Group in the Azure portal.
+1. Select **Delete resource group** from the top menu bar near the left edge.
+1. Confirm the deletion request by entering the resource group name and selecting **Delete**.
+
+## How to get started with Translator
+
+In our quickstart, you learn how to use the Translator service with REST APIs.
+
+> [!div class="nextstepaction"]
+> [Get Started with Translator](quickstart-text-rest-api.md)
+
+## More resources
+
+* [Microsoft Translator code samples](https://github.com/MicrosoftTranslator). Multi-language Translator code samples are available on GitHub.
+* [Microsoft Translator Support Forum](https://www.aka.ms/TranslatorForum)
+* [Get Started with Azure (3-minute video)](https://azure.microsoft.com/get-started/?b=16.24)
ai-services Beginners Guide https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/translator/custom-translator/beginners-guide.md
+
+ Title: Custom Translator for beginners
+
+description: A user guide for understanding the end-to-end customized machine translation process.
+++++ Last updated : 07/18/2023++
+# Custom Translator for beginners
+
+ [Custom Translator](overview.md) enables you to build a translation system that reflects your business, industry, and domain-specific terminology and style. Training and deploying a custom system is easy and doesn't require any programming skills. The customized translation system seamlessly integrates into your existing applications, workflows, and websites and is available on Azure through the same cloud-based [Microsoft Text Translation API](../reference/v3-0-translate.md?tabs=curl) service that powers billions of translations every day.
+
+The platform enables users to build and publish custom translation systems to and from English. The Custom Translator supports more than 60 languages that map directly to the languages available for NMT. For a complete list, *see* [Translator language support](../language-support.md).
+
+## Is a custom translation model the right choice for me?
+
+A well-trained custom translation model provides more accurate domain-specific translations because it relies on previously translated in-domain documents to learn preferred translations. Translator uses these terms and phrases in context to produce fluent translations in the target language while respecting context-dependent grammar.
+
+Training a full custom translation model requires a substantial amount of data. If you don't have at least 10,000 sentences of previously translated documents, you won't be able to train a full-language translation model. However, you can either train a dictionary-only model or use the high-quality, out-of-the-box translations available with the Text Translation API.
++
+## What does training a custom translation model involve?
+
+Building a custom translation model requires:
+
+* Understanding your use-case.
+
+* Obtaining in-domain translated data (preferably human translated).
+
+* The ability to assess translation quality or target language translations.
+
+## How do I evaluate my use-case?
+
+Having clarity on your use-case and what success looks like is the first step towards sourcing proficient training data. Here are a few considerations:
+
+* What is your desired outcome and how will you measure it?
+
+* What is your business domain?
+
+* Do you have in-domain sentences of similar terminology and style?
+
+* Does your use-case involve multiple domains? If yes, should you build one translation system or multiple systems?
+
+* Do you have requirements impacting regional data residency at-rest and in-transit?
+
+* Are the target users in one or multiple regions?
+
+## How should I source my data?
+
+Finding in-domain quality data is often a challenging task that varies based on user classification. Here are some questions you can ask yourself as you evaluate what data may be available to you:
+
+* Enterprises often have a wealth of translation data that has accumulated over many years of using human translation. Does your company have previous translation data available that you can use?
+
+* Do you have a vast amount of monolingual data? Monolingual data is data in only one language. If so, can you get translations for this data?
+
+* Can you crawl online portals to collect source sentences and synthesize target sentences?
+
+## What should I use for training material?
+
+| Source | What it does | Rules to follow |
+||||
+| Bilingual training documents | Teaches the system your terminology and style. | **Be liberal**. Any in-domain human translation is better than machine translation. Add and remove documents as you go and try to improve the [BLEU score](concepts/bleu-score.md?WT.mc_id=aiml-43548-heboelma). |
+| Tuning documents | Trains the Neural Machine Translation parameters. | **Be strict**. Compose them to be optimally representative of what you are going to translate in the future. |
+| Test documents | Calculate the [BLEU score](concepts/bleu-score.md?WT.mc_id=aiml-43548-heboelma).| **Be strict**. Compose test documents to be optimally representative of what you plan to translate in the future. |
+| Phrase dictionary | Forces the given translation 100% of the time. | **Be restrictive**. A phrase dictionary is case-sensitive and any word or phrase listed is translated in the way you specify. In many cases, it's better to not use a phrase dictionary and let the system learn. |
+| Sentence dictionary | Forces the given translation 100% of the time. | **Be strict**. A sentence dictionary is case-insensitive and good for common, in-domain short sentences. For a sentence dictionary match to occur, the entire submitted sentence must match the source dictionary entry. If only a portion of the sentence matches, the entry won't match. |
+
+## What is a BLEU score?
+
+BLEU (Bilingual Evaluation Understudy) is an algorithm for evaluating the precision or accuracy of text that has been machine translated from one language to another. Custom Translator uses the BLEU metric as one way of conveying translation accuracy.
+
+A BLEU score is a number between zero and 100. A score of zero indicates a low quality translation where nothing in the translation matched the reference. A score of 100 indicates a perfect translation that is identical to the reference. It's not necessary to attain a score of 100 - a BLEU score between 40 and 60 indicates a high-quality translation.
+
+[Read more](concepts/bleu-score.md?WT.mc_id=aiml-43548-heboelma)
+
+## What happens if I don't submit tuning or testing data?
+
+Tuning and test sentences are optimally representative of what you plan to translate in the future. If you don't submit any tuning or testing data, Custom Translator will automatically exclude sentences from your training documents to use as tuning and test data.
+
+| System-generated | Manual-selection |
+|||
+| Convenient. | Enables fine-tuning for your future needs.|
+| Good, if you know that your training data is representative of what you are planning to translate. | Provides more freedom to compose your training data.|
+| Easy to redo when you grow or shrink the domain. | Allows for more data and better domain coverage.|
+|Changes each training run.| Remains static over repeated training runs.|
+
+## How is training material processed by Custom Translator?
+
+To prepare for training, documents undergo a series of processing and filtering steps. These steps are explained below. Knowledge of the filtering process may help you understand the displayed sentence count and the steps you can take to prepare your documents for training with Custom Translator.
+
+* ### Sentence alignment
+
+    If your document isn't in XLIFF, XLSX, TMX, or ALIGN format, Custom Translator aligns the sentences of your source and target documents to each other, sentence-by-sentence. Translator doesn't perform document alignment; it follows your naming convention for the documents to find a matching document in the other language. Within the source text, Custom Translator tries to find the corresponding sentence in the target language. It uses document markup like embedded HTML tags to help with the alignment.
+
+    If you see a large discrepancy between the number of sentences in the source and target documents, your source document may not be parallel, or couldn't be aligned. Document pairs with a large difference (>10%) in sentence count on each side warrant a second look to make sure they're indeed parallel.
+
+* ### Extracting tuning and testing data
+
+    Tuning and testing data is optional. If you don't provide it, the system will remove an appropriate percentage from your training documents to use for tuning and testing. The removal happens dynamically as part of the training process. Since this step occurs as part of training, your uploaded documents aren't affected. You can see the final used sentence counts for each category of data (training, tuning, testing, and dictionary) on the Model details page after training has succeeded.
+
+* ### Length filter
+
+ * Removes sentences with only one word on either side.
+ * Removes sentences with more than 100 words on either side. Chinese, Japanese, Korean are exempt.
+ * Removes sentences with fewer than three characters. Chinese, Japanese, Korean are exempt.
+ * Removes sentences with more than 2000 characters for Chinese, Japanese, Korean.
+ * Removes sentences with less than 1% alphanumeric characters.
+ * Removes dictionary entries containing more than 50 words.
+
+* ### White space
+
+ * Replaces any sequence of white-space characters including tabs and CR/LF sequences with a single space character.
+ * Removes leading or trailing space in the sentence.
+
+* ### Sentence end punctuation
+
+    * Replaces multiple sentence-end punctuation characters with a single instance.
+
+* ### Japanese character normalization
+
+    * Converts full width letters and digits to half-width characters.
+
+* ### Unescaped XML tags
+
+ Transforms unescaped tags into escaped tags:
+
+ | Tag | Becomes |
+ |||
+ | \&lt; | \&amp;lt; |
+ | \&gt; | \&amp;gt; |
+ | \&amp; | \&amp;amp; |
+
+* ### Invalid characters
+
+ Custom Translator removes sentences that contain Unicode character U+FFFD. The character U+FFFD indicates a failed encoding conversion.
+
+## What steps should I take before uploading data?
+
+* Remove sentences with invalid encoding (see the cleanup sketch after this list).
+* Remove Unicode control characters.
+* If feasible, align sentences (source-to-target).
+* Remove source and target sentences that don't match the source and target languages.
+* When source and target sentences have mixed languages, ensure that untranslated words are intentional, for example, names of organizations and products.
+* Correct grammatical and typographical errors to prevent teaching these errors to your model.
+* Though our training process handles source and target lines containing multiple sentences, it's better to have one source sentence mapped to one target sentence.
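+
+As an illustration of this kind of cleanup, here's a minimal sketch (not part of Custom Translator itself) that drops lines containing the Unicode replacement character, strips control characters, and collapses white space in a plain-text training file. The file names are hypothetical placeholders:
+
+```csharp
+using System;
+using System.IO;
+using System.Linq;
+using System.Text.RegularExpressions;
+
+class CleanTrainingFile
+{
+    static void Main()
+    {
+        // Placeholder paths -- point these at your own training text files.
+        var inputPath = "training_en.txt";
+        var outputPath = "training_en.clean.txt";
+
+        var cleaned = File.ReadLines(inputPath)
+            // Drop sentences with a failed encoding conversion (U+FFFD).
+            .Where(line => !line.Contains("\uFFFD"))
+            // Strip Unicode control characters such as stray tabs or form feeds.
+            .Select(line => new string(line.Where(c => !char.IsControl(c)).ToArray()))
+            // Collapse runs of white space into a single space and trim the ends.
+            .Select(line => Regex.Replace(line, @"\s+", " ").Trim())
+            // Skip lines that end up empty after cleanup.
+            .Where(line => line.Length > 0);
+
+        File.WriteAllLines(outputPath, cleaned);
+        Console.WriteLine($"Wrote cleaned file to {outputPath}");
+    }
+}
+```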
+
+## How do I evaluate the results?
+
+After your model is successfully trained, you can view the model's BLEU score and baseline model BLEU score on the model details page. We use the same set of test data to generate both the model's BLEU score and the baseline BLEU score. These scores help you make an informed decision about which model is better suited for your use-case.
+
+## Next steps
+
+> [!div class="nextstepaction"]
+> [Try our Quickstart](quickstart.md)
ai-services Bleu Score https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/translator/custom-translator/concepts/bleu-score.md
+
+ Title: "What is a BLEU score? - Custom Translator"
+
+description: BLEU is a measurement of the differences between machine translation and human-created reference translations of the same source sentence.
+++++ Last updated : 07/18/2023++
+#Customer intent: As a Custom Translator user, I want to understand how BLEU score works so that I understand system test outcome better.
++
+# What is a BLEU score?
+
+[BLEU (Bilingual Evaluation Understudy)](https://en.wikipedia.org/wiki/BLEU) is a measurement of the difference between an automatic translation and human-created reference translations of the same source sentence.
+
+## Scoring process
+
+The BLEU algorithm compares consecutive phrases of the automatic translation
+with the consecutive phrases it finds in the reference translation, and counts
+the number of matches, in a weighted fashion. These matches are position
+independent. A higher match degree indicates a higher degree of similarity with
+the reference translation, and a higher score. Intelligibility and grammatical correctness aren't taken into account.
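+
+For reference, the standard BLEU formulation expresses this weighted n-gram matching as the geometric mean of modified n-gram precisions $p_n$ (typically up to 4-grams, with uniform weights $w_n$), multiplied by a brevity penalty $BP$ that penalizes candidate translations shorter than the reference, where $c$ is the candidate length and $r$ is the reference length:
+
+$$
+\text{BLEU} = BP \cdot \exp\left(\sum_{n=1}^{N} w_n \log p_n\right),
+\qquad
+BP = \begin{cases} 1 & \text{if } c > r \\ e^{\,1 - r/c} & \text{if } c \le r \end{cases}
+$$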
+
+## How does BLEU work?
+
+The BLEU score's strength is that it correlates well with human judgment. BLEU averages out
+individual sentence judgment errors over a test corpus, rather than attempting
+to devise the exact human judgment for every sentence.
+
+A more extensive discussion of BLEU scores is [here](https://youtu.be/-UqDljMymMg).
+
+BLEU results depend strongly on the breadth of your domain; consistency of
+test, training and tuning data; and how much data you have
+available for training. If your models have been trained on a narrow domain, and
+your training data is consistent with your test data, you can expect a high
+BLEU score.
+
+>[!NOTE]
+>A comparison between BLEU scores is only justifiable when BLEU results are compared with the same Test set, the same language pair, and the same MT engine. A BLEU score from a different test set is bound to be different.
+
+## Next steps
+
+> [!div class="nextstepaction"]
+> [BLEU score evaluation](../how-to/test-your-model.md)
ai-services Customization https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/translator/custom-translator/concepts/customization.md
+
+ Title: Translation Customization - Translator
+
+description: Use the Microsoft Translator Hub to build your own machine translation system using your preferred terminology and style.
++++++ Last updated : 07/18/2023+++
+# Customize your text translations
+
+Custom Translator is a feature of the Translator service that allows users to customize Microsoft Translator's advanced neural machine translation when translating text using Translator (version 3 only).
+
+The feature can also be used to customize speech translation when used with [Azure AI Speech](../../../speech-service/index.yml).
+
+## Custom Translator
+
+With Custom Translator, you can build neural translation systems that understand the terminology used in your own business and industry. The customized translation system will then integrate into existing applications, workflows, and websites.
+
+### How does it work?
+
+Use your previously translated documents (leaflets, webpages, documentation, etc.) to build a translation system that reflects your domain-specific terminology and style, better than a standard translation system. Users can upload TMX, XLIFF, TXT, DOCX, and XLSX documents.
+
+The system also accepts data that is parallel at the document level but isn't yet aligned at the sentence level. If users have access to versions of the same content in multiple languages but in separate documents, Custom Translator will be able to automatically match sentences across documents. The system can also use monolingual data in either or both languages to complement the parallel training data to improve the translations.
+
+The customized system is then available through a regular call to Translator using the category parameter.
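+
+For example, a request to the Translator v3 `translate` operation only needs the additional `category` query parameter to route to a customized system. The following is a minimal sketch; the key and category ID are placeholder values, and the language pair and sample text are only examples:
+
+```csharp
+using System;
+using System.Net.Http;
+using System.Text;
+using System.Threading.Tasks;
+
+class CustomCategoryTranslate
+{
+    static async Task Main()
+    {
+        // Placeholder values -- replace with your own key and the CategoryID of your deployed custom model.
+        const string key = "<your-translator-key>";
+        const string categoryId = "<your-category-id>";
+
+        // The category parameter is what routes the request to the customized system.
+        var route = $"/translate?api-version=3.0&from=en&to=de&category={categoryId}";
+        var body = "[{\"Text\":\"Pivot tables summarize your data.\"}]";
+
+        using var client = new HttpClient { BaseAddress = new Uri("https://api.cognitive.microsofttranslator.com") };
+        using var request = new HttpRequestMessage(HttpMethod.Post, route)
+        {
+            Content = new StringContent(body, Encoding.UTF8, "application/json")
+        };
+        request.Headers.Add("Ocp-Apim-Subscription-Key", key);
+        // For a regional (non-Global) resource, also add the Ocp-Apim-Subscription-Region header.
+
+        var response = await client.SendAsync(request);
+        Console.WriteLine(await response.Content.ReadAsStringAsync());
+    }
+}
+```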
+
+Given the appropriate type and amount of training data, it isn't uncommon to see gains of 5 to 10 BLEU points, or even more, in translation quality by using Custom Translator.
+
+More details about the various levels of customization based on available data can be found in the [Custom Translator User Guide](../overview.md).
+
+## Next steps
+
+> [!div class="nextstepaction"]
+> [Set up a customized language system using Custom Translator](../overview.md)
ai-services Data Filtering https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/translator/custom-translator/concepts/data-filtering.md
+
+ Title: "Data Filtering - Custom Translator"
+
+description: When you submit documents to be used for training a custom system, the documents undergo a series of processing and filtering steps.
++++ Last updated : 07/18/2023+++
+#Customer intent: As a Custom Translator, I want to understand how data is filtered before training a model.
++
+# Data filtering
+
+When you submit documents to be used for training, the documents undergo a series of processing and filtering steps. These steps are explained here. Knowledge of the filtering process may help you understand the sentence count displayed in Custom Translator and the steps you can take to prepare the documents for training with Custom Translator.
+
+## Sentence alignment
+
+If your document isn't in XLIFF, TMX, or ALIGN format, Custom Translator aligns the sentences of your source and target documents to each other, sentence by sentence. Custom Translator doesn't perform document alignment; it follows your naming of the documents to find the matching document of the other language. Within the document, Custom Translator tries to find the corresponding sentence in the other language. It uses document markup like embedded HTML tags to help with the alignment.
+
+If you see a large discrepancy between the number of sentences in the source and target documents, your documents may not be parallel. Document pairs with a large difference (>10%) in sentence count on each side warrant a second look to make sure they're indeed parallel. Custom Translator shows a warning next to the document if the sentence count differs suspiciously.
+
+## Deduplication
+
+Custom Translator removes the sentences that are present in test and tuning documents from training data. The removal happens dynamically inside of the training run, not in the data processing step. Custom Translator reports the sentence count to you in the project overview before such removal.
+
+## Length filter
+
+* Remove sentences with only one word on either side.
+* Remove sentences with more than 100 words on either side. Chinese, Japanese, Korean are exempt.
+* Remove sentences with fewer than three characters. Chinese, Japanese, Korean are exempt.
+* Remove sentences with more than 2000 characters for Chinese, Japanese, Korean.
+* Remove sentences with less than 1% alpha characters.
+* Remove dictionary entries containing more than 50 words.
+
+## White space
+
+* Replace any sequence of white-space characters including tabs and CR/LF sequences with a single space character.
+* Remove leading or trailing space in the sentence.
+
+## Sentence end punctuation
+
+Replace multiple sentence end punctuation characters with a single instance.
+
+## Japanese character normalization
+
+Convert full width letters and digits to half-width characters.
+
+## Unescaped XML tags
+
+Filtering transforms unescaped tags into escaped tags:
+* `&lt;` becomes `&amp;lt;`
+* `&gt;` becomes `&amp;gt;`
+* `&amp;` becomes `&amp;amp;`
+
+## Invalid characters
+
+Custom Translator removes sentences that contain Unicode character U+FFFD. The character U+FFFD indicates a failed encoding conversion.
+
+## Next steps
+
+> [!div class="nextstepaction"]
+> [Learn how to train a model](../how-to/train-custom-model.md)
ai-services Dictionaries https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/translator/custom-translator/concepts/dictionaries.md
+
+ Title: "What is a dictionary? - Custom Translator"
+
+description: How to create an aligned document that specifies a list of phrases or sentences (and their translations) that you always want Microsoft Translator to translate the same way. Dictionaries are sometimes also called glossaries or term bases.
++++ Last updated : 07/18/2023+++
+#Customer intent: As a Custom Translator, I want to understand how to use a dictionary to build a custom translation model.
++
+# What is a dictionary?
+
+A dictionary is an aligned pair of documents that specifies a list of phrases or sentences and their corresponding translations. Use a dictionary in your training, when you want Translator to translate any instances of the source phrase or sentence, using the translation you've provided in the dictionary. Dictionaries are sometimes called glossaries or term bases. You can think of the dictionary as a brute force "copy and replace" for all the terms you list. Furthermore, the Microsoft Custom Translator service builds and makes use of its own general purpose dictionaries to improve the quality of its translation. However, a customer-provided dictionary takes precedence and will be searched first to look up words or sentences.
+
+Dictionaries only work for projects in language pairs that have a fully supported Microsoft general neural network model behind them. [View the complete list of languages](../../language-support.md).
+
+## Phrase dictionary
+
+A phrase dictionary is case-sensitive. It's an exact find-and-replace operation. When you include a phrase dictionary in training your model, any word or phrase listed is translated in the way specified. The rest of the sentence is translated as usual. You can use a phrase dictionary to specify phrases that shouldn't be translated by providing the same untranslated phrase in the source and target files.
+
+## Sentence dictionary
+
+A sentence dictionary is case-insensitive. The sentence dictionary allows you to specify an exact target translation for a source sentence. For a sentence dictionary match to occur, the entire submitted sentence must match the source dictionary entry. If the source dictionary entry ends with punctuation, it's ignored during the match. If only a portion of the sentence matches, the entry won't match. When a match is detected, the target entry of the sentence dictionary will be returned.
+
+## Dictionary-only trainings
+
+You can train a model using only dictionary data. To do so, select only the dictionary document (or multiple dictionary documents) that you wish to include and select **Create model**. Since this training is dictionary-only, there's no minimum number of training sentences required. Your model will typically complete training much faster than a standard training. The resulting models will use the Microsoft baseline models for translation with the addition of the dictionaries you've added. You won't get a test report.
+
+>[!Note]
+>Custom Translator doesn't sentence align dictionary files, so it is important that there are an equal number of source and target phrases/sentences in your dictionary documents and that they are precisely aligned.
+
+## Recommendations
+
+- Dictionaries aren't a substitute for training a model using training data. For better results, we recommend letting the system learn from your training data. However, when sentences or compound nouns must be translated verbatim, use a dictionary.
+
+- The phrase dictionary should be used sparingly. When a phrase within a sentence is replaced, the context of that sentence is lost or limited for translating the rest of the sentence. The result is that, while the phrase or word within the sentence will translate according to the provided dictionary, the overall translation quality of the sentence often suffers.
+
+- The phrase dictionary works well for compound nouns like product names ("_Microsoft SQL Server_"), proper names ("_City of Hamburg_"), or product features ("_pivot table_"). It doesn't work as well for verbs or adjectives because those words are typically highly contextual within the source or target language. The best practice is to avoid phrase dictionary entries for anything but compound nouns.
+
+- If you're using a phrase dictionary, capitalization and punctuation are important. Dictionary entries are case- and punctuation-sensitive. Custom Translator will only match words and phrases in the input sentence that use exactly the same capitalization and punctuation marks as specified in the source dictionary file. Also, translations will reflect the capitalization and punctuation provided in the target dictionary file.
+
+ **Example**
+
+  - Suppose you're training an English-to-Spanish system that uses a phrase dictionary, and you specify "_SQL server_" in the source file and "_Microsoft SQL Server_" in the target file. When you request the translation of a sentence that contains the phrase "_SQL server_", Custom Translator will match the dictionary entry and the translation will contain "_Microsoft SQL Server_."
+ - When you request translation of a sentence that includes the same phrase but **doesn't** match what is in your source file, such as "_sql server_", "_sql Server_" or "_SQL Server_", it **won't** return a match from your dictionary.
+ - The translation follows the rules of the target language as specified in your phrase dictionary.
+
+- If you're using a sentence dictionary, end-of-sentence punctuation is ignored.
+
+ **Example**
+
+ - If your source dictionary contains "_This sentence ends with punctuation!_", then any translation requests containing "_This sentence ends with punctuation_" will match.
+
+- Your dictionary should contain unique source lines. If a source line (a word, phrase, or sentence) appears more than once in a dictionary file, the system will always use the **last entry** provided and return the target when a match is found.
+
+- Avoid adding phrases that consist of only numbers or are two- or three-letter words, such as acronyms, in the source dictionary file.
+
+## Next steps
+
+> [!div class="nextstepaction"]
+> [Learn about document formatting guidelines](document-formats-naming-convention.md)
ai-services Document Formats Naming Convention https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/translator/custom-translator/concepts/document-formats-naming-convention.md
+
+ Title: "Document formats and naming conventions - Custom Translator"
+
+description: This article is a guide to document formats and naming conventions in Custom Translator to avoid naming conflicts.
++++ Last updated : 07/18/2023+++
+#Customer intent: As a Custom Translator user, I want to understand how to format and name my documents.
++
+# Document formats and naming convention guidance
+
+Any file used for custom translation must be at least **four** characters in length.
+
+This table includes all supported file formats that you can use to build your translation system:
+
+| Format | Extensions | Description |
+|-|--|--|
+| XLIFF | .XLF, .XLIFF | A parallel document format, export of Translation Memory systems. The languages used are defined inside the file. |
+| TMX | .TMX | A parallel document format, export of Translation Memory systems. The languages used are defined inside the file. |
+| ZIP | .ZIP | ZIP is an archive file format. |
+| Locstudio | .LCL | A Microsoft format for parallel documents |
+| Microsoft Word | .DOCX | Microsoft Word document |
+| Adobe Acrobat | .PDF | Adobe Acrobat portable document |
+| HTML | .HTML, .HTM | HTML document |
+| Text file | .TXT | UTF-16 or UTF-8 encoded text files. The file name must not contain Japanese characters. |
+| Aligned text file | .ALIGN | The extension `.ALIGN` is a special extension that you can use if you know that the sentences in the document pair are perfectly aligned. If you provide a `.ALIGN` file, Custom Translator won't align the sentences for you. |
+| Excel file | .XLSX | Excel file (2013 or later). The first line/row of the spreadsheet should be the language code. |
+
+## Dictionary formats
+
+For dictionaries, Custom Translator supports all file formats that are supported for training sets. If you're using an Excel dictionary, the first line/row of the spreadsheet should be the language codes.
+
+## Zip file formats
+
+Documents can be grouped into a single zip file and uploaded. The Custom Translator supports zip file formats (ZIP, GZ, and TGZ).
+
+Each document in the zip file with the extension TXT, HTML, HTM, PDF, DOCX, ALIGN must follow this naming convention:
+
+{document name}\_{language code}
+where {document name} is the name of your document and {language code} is the ISO LanguageID (two characters), indicating that the document contains sentences in that language. There must be an underscore (_) before the language code.
+
+For example, to upload two parallel documents within a zip for an English to
+Spanish system, the files should be named "data_en" and "data_es".
+
+Translation Memory files (TMX, XLF, XLIFF, LCL, XLSX) aren't required to follow the specific language-naming convention.
+
+## Next steps
+
+> [!div class="nextstepaction"]
+> [Learn about managing projects](workspace-and-project.md#what-is-a-custom-translator-project)
ai-services Model Training https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/translator/custom-translator/concepts/model-training.md
+
+ Title: "What are training and modeling? - Custom Translator"
+
+description: A model is the system that provides translation for a specific language pair. The outcome of a successful training is a model. To train a model, three mutually exclusive data sets are required: training dataset, tuning dataset, and testing dataset.
+++++ Last updated : 07/18/2023++
+#Customer intent: As a Custom Translator user, I want to understand the concept of a model and training, so that I can efficiently use training, tuning, and testing datasets that help me build a translation model.
++
+# What are training and modeling?
+
+A model is the system that provides translation for a specific language pair. The outcome of a successful training is a model. To train a model, three mutually exclusive document types are required: training, tuning, and testing. A dictionary document type can also be provided. For more information, _see_ [Sentence alignment](./sentence-alignment.md#suggested-minimum-number-of-sentences).
+
+If only training data is provided when queuing a training, Custom Translator will automatically assemble tuning and testing data. It will use a random subset of sentences from your training documents, and exclude these sentences from the training data itself.
+
+## Training document type for Custom Translator
+
+Documents included in the training set are used by Custom Translator as the basis for building your model. During training execution, sentences that are present in these documents are aligned (or paired). You can take liberties in composing your set of training documents. You can include documents that you believe are of tangential relevance in one model and exclude them in another to see the impact on the [BLEU (Bilingual Evaluation Understudy) score](bleu-score.md). As long as you keep the tuning set and test set constant, feel free to experiment with the composition of the training set. This approach is an effective way to modify the quality of your translation system.
+
+You can run multiple trainings within a project and compare the [BLEU scores](bleu-score.md) across all training runs. When you're running multiple trainings for comparison, ensure the same tuning/test data is specified each time. Also make sure to inspect the results manually in the ["Testing"](../how-to/test-your-model.md) tab.
+
+## Tuning document type for Custom Translator
+
+Parallel documents included in this set are used by the Custom Translator to tune the translation system for optimal results.
+
+The tuning data is used during training to adjust all parameters and weights of the translation system to the optimal values. Choose your tuning data carefully: the tuning data should be representative of the content of the documents you intend to translate in the future. The tuning data has a major influence on the quality of the translations produced. Tuning enables the translation system to provide translations that are closest to the samples you provide in the tuning data. You don't need more than 2500 sentences in your tuning data. For optimal translation quality, it's recommended to select the tuning set manually by choosing the most representative selection of sentences.
+
+When creating your tuning set, choose sentences that are a meaningful and representative length of the future sentences that you expect to translate. Choose sentences that have words and phrases that you intend to translate in the approximate distribution that you expect in your future translations. In practice, a sentence length of 7 to 10 words will produce the best results. These sentences contain enough context to show inflection and provide a phrase length that is significant, without being overly complex.
+
+A good description of the type of sentences to use in the tuning set is prose: actual fluent sentences. Not table cells, not poems, not lists of things, not only punctuation, or numbers in a sentence - regular language.
+
+If you manually select your tuning data, it shouldn't have any of the same sentences as your training and testing data. The tuning data has a significant impact on the quality of the translations - choose the sentences carefully.
+
+If you aren't sure what to choose for your tuning data, just select the training data and let Custom Translator select the tuning data for you. When you let the Custom Translator choose the tuning data automatically, it will use a random subset of sentences from your bilingual training documents and exclude these sentences from the training material itself.
+
+## Testing dataset for Custom Translator
+
+Parallel documents included in the testing set are used to compute the BLEU (Bilingual Evaluation Understudy) score. This score indicates the quality of your translation system. This score tells you how closely the translations produced by the system resulting from this training match the reference sentences in the test data set.
+
+The BLEU score is a measurement of the delta between the automatic translation and the reference translation. Its value ranges from 0 to 100. A score of 0 indicates that not a single word of the reference appears in the translation. A score of 100 indicates that the automatic translation exactly matches the reference: the same word is in the exact same position. The score you receive is the BLEU score average for all sentences of the testing data.
+
+The test data should include parallel documents where the target language sentences are the most desirable translations of the corresponding source language sentences in the source-target pair. You may want to use the same criteria you used to compose the tuning data. However, the testing data has no influence over the quality of the translation system. It's used exclusively to generate the BLEU score for you.
+
+You don't need more than 2,500 sentences as the testing data. When you let the system choose the testing set automatically, it will use a random subset of sentences from your bilingual training documents, and exclude these sentences from the training material itself.
+
+You can view the custom translations of the testing set, and compare them to the translations provided in your testing set, by navigating to the test tab within a model.
+
+## Next steps
+
+> [!div class="nextstepaction"]
+> [Test and evaluate your model](../how-to/test-your-model.md)
ai-services Parallel Documents https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/translator/custom-translator/concepts/parallel-documents.md
+
+ Title: "What are parallel documents? - Custom Translator"
+
+description: Parallel documents are pairs of documents where one is the translation of the other. One document in the pair contains sentences in the source language and the other document contains these sentences translated into the target language.
++++ Last updated : 07/18/2023++
+#Customer intent: As a Custom Translator, I want to understand how to use parallel documents to build a custom translation model.
++
+# What are parallel documents?
+
+Parallel documents are pairs of documents where one is the translation of the
+other. One document in the pair contains sentences in the source language and
+the other document contains these sentences translated into the target language.
+It doesn't matter which language is marked as "source" and which language is
+marked as "target"; a parallel document can be used to train a translation
+system in either direction.
+
+## Requirements
+
+You'll need a minimum of 10,000 unique aligned parallel sentences to train a system. This limitation is a safety net to ensure your parallel sentences contain enough unique vocabulary to successfully train a translation model. As a best practice, continuously add more parallel content and retrain to improve the quality of your translation system. For more information, *see* [Sentence Alignment](./sentence-alignment.md).
+
+Microsoft requires that documents uploaded to the Custom Translator don't violate a third party's copyright or intellectual property. For more information, please see the [Terms of Use](https://azure.microsoft.com/support/legal/cognitive-services-terms/). Uploading a document using the portal doesn't alter the ownership of the intellectual property in the document itself.
+
+## Use of parallel documents
+
+Parallel documents are used by the system:
+
+1. To learn how words, phrases and sentences are commonly mapped between the
+ two languages.
+
+2. To learn how to process the appropriate context depending on the surrounding
+ phrases. A word may not always translate to the exact same word in the other
+ language.
+
+As a best practice, make sure that there's a 1:1 sentence correspondence between
+the source and target language versions of the documents.
+
+If your project is domain (category) specific, your documents should be
+consistent in terminology within that category. The quality of the resulting
+translation system depends on the number of sentences in your document set and
+the quality of the sentences. The more examples your documents contain with
+diverse usages for a word specific to your category, the better job the system
+can do during translation.
+
+Documents uploaded are private to each workspace and can be used in as many
+projects or trainings as you like. Sentences extracted from your documents are
+stored separately in your repository as plain Unicode text files and are
+available for you to delete. Don't use the Custom Translator as a document
+repository; you won't be able to download the documents you uploaded in the
+format you uploaded them.
+
+## Next steps
+
+> [!div class="nextstepaction"]
+> [Learn how to use a dictionary](dictionaries.md)
ai-services Sentence Alignment https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/translator/custom-translator/concepts/sentence-alignment.md
+
+ Title: "Sentence pairing and alignment - Custom Translator"
+
+description: During the training execution, sentences present in parallel documents are paired or aligned. Custom Translator learns translations one sentence at a time, by reading a sentence and translating it. Then it aligns words and phrases in these two sentences to each other.
++++ Last updated : 07/18/2023+++
+#Customer intent: As a Custom Translator user, I want to know how sentence alignment works, so that I can have a better understanding of the underlying process of sentence extraction, pairing, filtering, and aligning.
++
+# Sentence pairing and alignment in parallel documents
+
+After documents are uploaded, sentences present in parallel documents are
+paired or aligned. Custom Translator reports the number of sentences it was
+able to pair as the Aligned Sentences in each of the data sets.
+
+## Pairing and alignment process
+
+Custom Translator learns translations of sentences one sentence at a time. It reads a sentence from the source text, and then the translation of this sentence from the target text. Then it aligns words and phrases in these two sentences to each other. This process enables it to create a map of the words and phrases in one sentence to the equivalent words and phrases in the translation of the sentence. Alignment tries to ensure that the system trains on sentences that are translations of each other.
+
+## Pre-aligned documents
+
+If you know you have parallel documents, you may override the
+sentence alignment by supplying pre-aligned text files. You can extract all
+sentences from both documents into text files, organized one sentence per line,
+and upload with an `.align` extension. The `.align` extension signals Custom
+Translator that it should skip sentence alignment.
+
+For best results, try to make sure that you have one sentence per line in your
+files. Don't include newline characters within a sentence; they cause poor
+alignments.
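+
+As an illustration, the following minimal sketch (the sentences and file names are hypothetical placeholders) writes a pair of pre-aligned sentence lists to `.align` files, one sentence per line, and checks that the counts match before you upload them:
+
+```csharp
+using System;
+using System.IO;
+
+class WriteAlignFiles
+{
+    static void Main()
+    {
+        // Placeholder data -- in practice these come from your own pre-aligned content.
+        string[] sourceSentences =
+        {
+            "The pivot table summarizes your data.",
+            "Select the cell range first."
+        };
+        string[] targetSentences =
+        {
+            "Die Pivot-Tabelle fasst Ihre Daten zusammen.",
+            "Wählen Sie zuerst den Zellbereich aus."
+        };
+
+        // A count mismatch means the files are not truly pre-aligned and shouldn't use the .align extension.
+        if (sourceSentences.Length != targetSentences.Length)
+        {
+            throw new InvalidOperationException("Source and target must contain the same number of sentences.");
+        }
+
+        // One sentence per line, with no newline characters inside a sentence.
+        File.WriteAllLines("product-help_en.align", sourceSentences);
+        File.WriteAllLines("product-help_de.align", targetSentences);
+
+        Console.WriteLine("Wrote pre-aligned files ready for upload.");
+    }
+}
+```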
+
+## Suggested minimum number of sentences
+
+For a training to succeed, the table below shows the minimum number of sentences required in each document type. This limitation is a safety net to ensure your parallel sentences contain enough unique vocabulary to successfully train a translation model. The general guideline is having more in-domain parallel sentences of human translation quality should produce higher-quality models.
+
+| Document type | Suggested minimum sentence count | Maximum sentence count |
+||--|--|
+| Training | 10,000 | No upper limit |
+| Tuning | 500 | 2,500 |
+| Testing | 500 | 2,500 |
+| Dictionary | 0 | 250,000 |
+
+> [!NOTE]
+>
+> - Training will not start and will fail if the 10,000 minimum sentence count for Training is not met.
+> - Tuning and Testing are optional. If you do not provide them, the system will remove an appropriate percentage from Training to use for validation and testing.
+> - You can train a model using only dictionary data. Please refer to [What is Dictionary](dictionaries.md).
+> - If your dictionary contains more than 250,000 sentences, our Document Translation feature is a better choice. Please refer to [Document Translation](../../document-translation/overview.md).
+> - Free (F0) subscription training has a maximum limit of 2,000,000 characters.
+
+## Next steps
+
+> [!div class="nextstepaction"]
+> [Learn how to use a dictionary](dictionaries.md)
ai-services Workspace And Project https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/translator/custom-translator/concepts/workspace-and-project.md
+
+ Title: "What is a workspace and project? - Custom Translator"
+
+description: This article will explain the differences between a workspace and a project as well as project categories and labels for the Custom Translator service.
+++++ Last updated : 07/18/2023+++
+#Customer intent: As a Custom Translator user, I want to concept of a project, so that I can use it efficiently.
+
+# What is a Custom Translator workspace?
+
+A workspace is a work area for composing and building your custom translation system. A workspace can contain multiple projects, models, and documents. All the work you do in Custom Translator is inside a specific workspace.
+
+A workspace is private to you and the people you invite into your workspace. Uninvited people don't have access to the content of your workspace. You can invite as many people as you like into your workspace and modify or remove their access anytime. You can also create a new workspace. By default, a workspace won't contain any projects or documents that are within your other workspaces.
+
+## What is a Custom Translator project?
+
+A project is a wrapper for a model, documents, and tests. Each project
+automatically includes all documents that are uploaded into that workspace that
+have the correct language pair. For example, if you have both an English to
+Spanish project and a Spanish to English project, the same documents will be
+included in both projects. Each project has a CategoryID associated with it
+that is used when querying the [V3 API](../../reference/v3-0-translate.md?tabs=curl) for translations. CategoryID is a parameter used to get translations from a customized system built with Custom Translator.
+
+## Project categories
+
+The category identifies the domain (the area of terminology and style you want to use) for your project. Choose the category most relevant to your documents. In some cases, your choice of the category directly influences the behavior of the Custom Translator.
+
+We have two sets of baseline models: General and Technology. If the category **Technology** is selected, the Technology baseline models will be used. For any other category selection, the General baseline models are used. The Technology baseline model does well in the technology domain, but it shows lower quality if the sentences used for translation don't fall within the technology domain. We suggest selecting the Technology category only if your sentences fall strictly within the technology domain.
+
+In the same workspace, you may create projects for the same language pair in
+different categories. Custom Translator prevents creation of a duplicate project
+with the same language pair and category. Applying a label to your project
+allows you to avoid this restriction. Don't use labels unless you're building translation systems for multiple clients, as adding a
+unique label to your project will be reflected in your project's CategoryID.
+
+## Project labels
+
+Custom Translator allows you to assign a project label to your project. The
+project label distinguishes between multiple projects with the same language
+pair and category. As a best practice, avoid using project labels unless
+necessary.
+
+The project label is used as part of the CategoryID. If the project label is
+left unset or is set identically across projects, then projects with the same
+category and *different* language pairs will share the same CategoryID. This approach is
+advantageous because it allows you to switch between languages when using the Translator API without worrying about a CategoryID that is unique to each project.
+
+For example, if I wanted to enable translations in the Technology domain from
+English to French and from French to English, I would create two
+projects: one for English -\> French, and one for French -\> English. I would
+specify the same category (Technology) for both and leave the project label
+blank. The CategoryID for both projects would match, so I could query the API
+for both English and French translations without having to modify my CategoryID.
+
+If you're a language service provider and want to serve
+multiple customers with different models that retain the same category and
+language pair, use a project label to differentiate between customers.
+
+## Next steps
+
+> [!div class="nextstepaction"]
+> [Learn about model training](model-training.md)
ai-services Faq https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/translator/custom-translator/faq.md
+
+ Title: "Frequently asked questions - Custom Translator"
+
+description: This article contains answers to frequently asked questions about the Azure AI Translator Custom Translator.
++++ Last updated : 07/18/2023+++++
+# Custom Translator frequently asked questions
+
+This article contains answers to frequently asked questions about [Custom Translator](https://portal.customtranslator.azure.ai).
+
+## What are the current restrictions in Custom Translator?
+
+There are restrictions and limits with respect to file size, model training, and model deployment. Keep these restrictions in mind when setting up your training to build a model in Custom Translator.
+
+- Submitted files must be less than 100 MB in size.
+- Monolingual data isn't supported.
+
+## When should I request deployment for a translation system that has been trained?
+
+It may take several trainings to create the optimal translation system for your project. You may want to try using more training data or more carefully filtered data if the BLEU score and/or the test results aren't satisfactory. You should
+be strict and careful in designing your tuning set and your test set. Make certain your sets
+fully represent the terminology and style of material you want to
+translate. You can be more liberal in composing your training data, and
+experiment with different options. Request a system deployment when you're
+satisfied with the translations in your system test results and have no more data to add to
+improve your trained system.
+
+## How many trained systems can be deployed in a project?
+
+Only one trained system can be deployed per project. It may take several
+trainings to create a suitable translation system for your project and we
+encourage you to request deployment of a training that gives you the best
+result. You can determine the quality of the training by the BLEU score (higher
+is better), and by consulting with reviewers before deciding that the quality of
+translations is suitable for deployment.
+
+## When can I expect my trainings to be deployed?
+
+The deployment generally takes less than an hour.
+
+## How do you access a deployed system?
+
+Deployed systems can be accessed via the Microsoft Translator Text API V3 by
+specifying the CategoryID. More information about the Translator Text API can
+be found in the [API
+Reference](../reference/v3-0-reference.md)
+webpage.
+
+## How do I skip alignment and sentence breaking if my data is already sentence aligned?
+
+The Custom Translator skips sentence alignment and sentence breaking for TMX
+files and for text files with the `.align` extension. `.align` files give users
+an option to skip Custom Translator's sentence breaking and alignment process for the
+files that are perfectly aligned, and need no further processing. We recommend
+using `.align` extension only for files that are perfectly aligned.
+
+If the number of extracted sentences doesn't match between the two files with the same
+base name, Custom Translator will still run the sentence aligner on `.align`
+files.
+
+## I tried uploading my TMX, but it says "document processing failed"
+
+Ensure that the TMX conforms to the [TMX 1.4b Specification](https://www.gala-global.org/tmx-14b).
+
+## Next steps
+
+> [!div class="nextstepaction"]
+> [Try the custom translator quickstart](quickstart.md)
+
ai-services Copy Model https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/translator/custom-translator/how-to/copy-model.md
+
+ Title: Copy a custom model
+
+description: This article explains how to copy a custom model to another workspace using the Azure AI Translator Custom Translator.
++++ Last updated : 07/18/2023+++
+# Copy a custom model
+
+Copying a model to other workspaces enables model lifecycle management (for example, development → test → production) and increases usage scalability while reducing the training cost.
+
+## Copy model to another workspace
+
+ > [!Note]
+ >
+ > To copy model from one workspace to another, you must have an **Owner** role in both workspaces.
+ >
+ > The copied model cannot be recopied. You can only rename, delete, or publish a copied model.
+
+1. After successful model training, select the **Model details** blade.
+
+1. Select the **Model Name** to copy.
+
+1. Select **Copy to workspace**.
+
+1. Fill out the target details.
+
+ :::image type="content" source="../media/how-to/copy-model-1.png" alt-text="Screenshot illustrating the copy model dialog window.":::
+
+ > [!Note]
+ >
+ > A dropdown list displays the list of workspaces available to use. Otherwise, select **Create a new workspace**.
+ >
+    > If the selected workspace contains a project for the same language pair, it can be selected from the Project dropdown list; otherwise, select **Create a new project** to create one.
+
+1. Select **Copy model**.
+
+1. A notification panel shows the copy progress. The process should complete fairly quickly:
+
+ :::image type="content" source="../media/how-to/copy-model-2.png" alt-text="Screenshot illustrating notification that the copy model is in process.":::
+
+1. After **Copy model** completion, a copied model is available in the target workspace and ready to publish. A **Copied model** watermark is appended to the model name.
+
+ :::image type="content" source="../media/how-to/copy-model-3.png" alt-text="Screenshot illustrating the copy complete dialog window.":::
+
+## Next steps
+
+> [!div class="nextstepaction"]
+> [Learn how to publish/deploy a custom model](publish-model.md).
ai-services Create Manage Project https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/translator/custom-translator/how-to/create-manage-project.md
+
+ Title: Create and manage a project
+
+description: How to create and manage a project in the Azure AI Translator Custom Translator.
++++ Last updated : 07/18/2023++++
+# Create and manage a project
+
+A project contains translation models for one language pair. Each project includes all documents that were uploaded into that workspace with the correct language pair.
+
+Creating a project is the first step in building and publishing a model.
+
+## Create a project
+
+1. After you sign in, your default workspace is loaded. To create a project in a different workspace, select **My workspaces**, then select a workspace name.
+
+1. Select **Create project**.
+
+1. Enter the following details about your project in the creation dialog:
+
+ - **Project name (required):** Give your project a unique, meaningful name. It's not necessary to mention the languages within the title.
+
+    - **Language pair (required):** Select the source and target languages from the dropdown list.
+
+ - **Domain (required):** Select the domain from the dropdown list that's most appropriate for your project. The domain describes the terminology and style of the documents you intend to translate.
+
+ >[!Note]
+ >Select **Show advanced options** to add project label, project description, and domain description
+
+ - **Project label:** The project label distinguishes between projects with the same language pair and domain. As a best practice, here are a few tips:
+
+ - Use a label *only* if you're planning to build multiple projects for the same language pair and same domain and want to access these projects with a different Domain ID.
+
+ - Don't use a label if you're building systems for one domain only.
+
+ - A project label isn't required and not helpful to distinguish between language pairs.
+
+ - You can use the same label for multiple projects.
+
+ - **Project description:** A short summary about the project. This description has no influence over the behavior of the Custom Translator or your resulting custom system, but can help you differentiate between different projects.
+
+   - **Domain description:** Use this field to better describe the particular field or industry in which you're working. For example, if your category is medicine, you might add details about your subfield, such as surgery or pediatrics. The description has no influence over the behavior of the Custom Translator or your resulting custom system.
+
+1. Select **Create project**.
+
+ :::image type="content" source="../media/how-to/create-project-dialog.png" alt-text="Screenshot illustrating the create project fields.":::
+
+## Edit a project
+
+To modify the project name, project description, or domain description:
+
+1. Select the workspace name.
+
+1. Select the project name, for example, *English-to-German*.
+
+1. The **Edit and Delete** buttons should now be visible.
+
+ :::image type="content" source="../media/how-to/edit-project-dialog-1.png" alt-text="Screenshot illustrating the edit project fields":::
+
+1. Select **Edit** and fill in or modify existing text.
+
+ :::image type="content" source="../media/how-to/edit-project-dialog-2.png" alt-text="Screenshot illustrating detailed edit project fields.":::
+
+1. Select **Edit project** to save.
+
+## Delete a project
+
+1. Follow the [**Edit a project**](#edit-a-project) steps 1-3 above.
+
+1. Select **Delete** and read the delete message before you select **Delete project** to confirm.
+
+ :::image type="content" source="../media/how-to/delete-project-1.png" alt-text="Screenshot illustrating delete project fields.":::
+
+ >[!Note]
+   >If your project has a published model or a model that is currently in training, you can delete your project only after the model is unpublished or training completes.
+ >
+ > :::image type="content" source="../media/how-to/delete-project-2.png" alt-text="Screenshot illustrating the unable to delete message.":::
+
+## Next steps
+
+- Learn [how to manage project documents](create-manage-training-documents.md).
+- Learn [how to train a model](train-custom-model.md).
+- Learn [how to test and evaluate model quality](test-your-model.md).
+- Learn [how to publish model](publish-model.md).
+- Learn [how to translate with custom models](translate-with-custom-model.md).
ai-services Create Manage Training Documents https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/translator/custom-translator/how-to/create-manage-training-documents.md
+
+ Title: Build and upload training documents
+
+description: How to build and upload parallel documents (two documents where one is the origin and the other is the translation) using Custom Translator.
++++ Last updated : 07/18/2023++++
+# Build and manage training documents
+
+[Custom Translator](../overview.md) enables you to build translation models that reflect your business, industry, and domain-specific terminology and style. Training and deploying a custom model is easy and doesn't require any programming skills. Custom Translator allows you to upload parallel files, translation memory files, or zip files.
+
+[Parallel documents](../concepts/parallel-documents.md) are pairs of documents where one (target) is a translation of the other (source). One document in the pair contains sentences in the source language and the other document contains those sentences translated into the target language.
+
+Before uploading your documents, review the [document formats and naming convention guidance](../concepts/document-formats-naming-convention.md) to make sure your file format is supported by Custom Translator.
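+
+If your parallel data currently lives in another form, such as a spreadsheet export or an in-memory list of sentence pairs, one option is to write it out as a line-aligned pair of plain-text files before uploading. The following Python snippet is a minimal sketch of that idea; the sentences and file names are hypothetical.
+
+```python
+# Write a sentence-aligned pair of plain-text parallel documents:
+# line N of the source file corresponds to line N of the target file.
+pairs = [
+    ("The machine must be switched off before maintenance.",
+     "Die Maschine muss vor der Wartung ausgeschaltet werden."),
+    ("Wear protective gloves at all times.",
+     "Tragen Sie jederzeit Schutzhandschuhe."),
+]
+
+with open("training-en.txt", "w", encoding="utf-8") as source_file, \
+     open("training-de.txt", "w", encoding="utf-8") as target_file:
+    for source_sentence, target_sentence in pairs:
+        source_file.write(source_sentence + "\n")
+        target_file.write(target_sentence + "\n")
+```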
+
+## How to create document sets
+
+Finding quality in-domain data is often a challenging task, and the data available varies by user type. Here are some questions you can ask yourself as you evaluate what data may be available to you:
+
+- Enterprises often have a wealth of translation data that has accumulated over many years of using human translation. Does your company have previous translation data available that you can use?
+
+- Do you have a vast amount of monolingual data? Monolingual data is data in only one language. If so, can you get translations for this data?
+
+- Can you crawl online portals to collect source sentences and synthesize target sentences?
+
+### Training material for each document type
+
+| Source | What it does | Rules to follow |
+||||
+| Bilingual training documents | Teaches the system your terminology and style. | **Be liberal**. Any in-domain human translation is better than machine translation. Add and remove documents as you go and try to improve the [BLEU score](../concepts/bleu-score.md?WT.mc_id=aiml-43548-heboelma). |
+| Tuning documents | Trains the Neural Machine Translation parameters. | **Be strict**. Compose them to be optimally representative of what you are going to translate in the future. |
+| Test documents | Calculate the [BLEU score](../beginners-guide.md#what-is-a-bleu-score).| **Be strict**. Compose test documents to be optimally representative of what you plan to translate in the future. |
+| Phrase dictionary | Forces the given translation 100% of the time. | **Be restrictive**. A phrase dictionary is case-sensitive and any word or phrase listed is translated in the way you specify. In many cases, it's better to not use a phrase dictionary and let the system learn. |
+| Sentence dictionary | Forces the given translation 100% of the time. | **Be strict**. A sentence dictionary is case-insensitive and good for common in-domain short sentences. For a sentence dictionary match to occur, the entire submitted sentence must match the source dictionary entry. If only a portion of the sentence matches, the entry won't match. |
+
+## How to upload documents
+
+Document types are associated with the language pair selected when you create a project.
+
+1. Sign in to the [Custom Translator](https://portal.customtranslator.azure.ai) portal. Your default workspace is loaded and a list of previously created projects is displayed.
+
+1. Select the desired project **Name**. By default, the **Manage documents** blade is selected and a list of previously uploaded documents is displayed.
+
+1. Select **Add document set** and choose the document type:
+
+ - Training set
+ - Testing set
+ - Tuning set
+ - Dictionary set:
+ - Phrase Dictionary
+ - Sentence Dictionary
+
+1. Select **Next**.
+
+ :::image type="content" source="../media/how-to/upload-1.png" alt-text="Screenshot illustrating the document upload link.":::
+
+ >[!Note]
+   >Choosing **Dictionary set** launches the **Choose type of dictionary** dialog.
+   >Choose one and select **Next**.
+
+1. Select your document format from the radio buttons.
+
+ :::image type="content" source="../media/how-to/upload-2.png" alt-text="Screenshot illustrating the upload document page.":::
+
+ - For **Parallel documents**, fill in the `Document set name` and select **Browse files** to select source and target documents.
+ - For **Translation memory (TM)** file or **Upload multiple sets with ZIP**, select **Browse files** to select the file
+
+1. Select **Upload**.
+
+At this point, Custom Translator is processing your documents and attempting to extract sentences, as indicated in the upload notification. Once processing is done, you'll see a notification that the upload was successful.
+
+ :::image type="content" source="../media/quickstart/document-upload-notification.png" alt-text="Screenshot illustrating the upload document processing dialog window.":::
+
+## View upload history
+
+On the workspace page, you can view the history of all document uploads, with details like document type, language pair, and upload status.
+
+1. From the [Custom Translator](https://portal.customtranslator.azure.ai) portal workspace page, select the **Upload history** tab to view the history.
+
+ :::image type="content" source="../media/how-to/upload-history-tab.png" alt-text="Screenshot showing the upload history tab.":::
+
+2. This page shows the status of all your past uploads, from most recent to least recent. For each upload, it shows the document name, upload status, upload date, number of files uploaded, type of files uploaded, the language pair of the files, and who created the upload. You can use the filter to quickly find documents by name, status, language, and date range.
+
+ :::image type="content" source="../media/how-to/upload-history-page.png" alt-text="Screenshot showing the upload history page.":::
+
+3. Select any upload history record. On the upload history details page, you can view the files included in the upload, the upload status of each file, the language of each file, and the error message (if the upload had an error).
+
+## Next steps
+
+- Learn [how to train a model](train-custom-model.md).
+- Learn [how to test and evaluate model quality](test-your-model.md).
+- Learn [how to publish model](publish-model.md).
+- Learn [how to translate with custom models](translate-with-custom-model.md).
ai-services Create Manage Workspace https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/translator/custom-translator/how-to/create-manage-workspace.md
+
+ Title: Create and manage a workspace
+
+description: How to create and manage workspaces
++++ Last updated : 07/18/2023+++++
+# Create and manage a workspace
+
+ Workspaces are places to manage your documents, projects, and models. When you create a workspace, you can choose to use the workspace independently, or share it with teammates to divide up the work.
+
+## Create workspace
+
+1. After you sign in to Custom Translator, you'll be asked for permission to read your profile from the Microsoft identity platform to request your user access token and refresh token. Both tokens are needed for authentication and to ensure that you aren't signed out during your live session or while training your models. </br>Select **Yes**.
+
+ :::image type="content" source="../media/quickstart/first-time-user.png" alt-text="Screenshot illustrating first-time sign-in.":::
+
+1. Select **My workspaces**
+
+1. Select **Create a new workspace**
+
+1. Type a **Workspace name** and select **Next**
+
+1. Select "Global" for **Select resource region** from the dropdown list.
+
+1. Copy and paste your Translator resource key.
+
+1. Select **Next**.
+
+1. Select **Done**
+
+ > [!NOTE]
+ > Region must match the region that was selected during the resource creation. You can use **KEY 1** or **KEY 2**.
+
+ > [!NOTE]
+ > All uploaded customer content, custom model binaries, custom model configurations, and training logs are kept encrypted-at-rest in the selected region.
+
+ :::image type="content" source="../media/quickstart/resource-key.png" alt-text="Screenshot illustrating the resource key.":::
+
+ :::image type="content" source="../media/quickstart/create-workspace-1.png" alt-text="Screenshot illustrating workspace creation.":::
+
+## Manage workspace settings
+
+Select a workspace and navigate to **Workspace settings**. You can manage the following workspace settings:
+
+* Change the resource key if the region is **Global**. If you're using a region-specific resource such as **East US**, you can't change your resource key.
+
+* Change the workspace name.
+
+* [Share the workspace with others](#share-workspace-for-collaboration).
+
+* Delete the workspace.
+
+### Share workspace for collaboration
+
+The person who created the workspace is the owner. Within **Workspace settings**, an owner can designate three different roles for a collaborative workspace:
+
+* **Owner**. An owner has full permissions within the workspace.
+
+* **Editor**. An editor can add documents, train models, and delete documents and projects. They can't modify who the workspace is shared with, delete the workspace, or change the workspace name.
+
+* **Reader**. A reader can view (and download if available) all information in the workspace.
+
+> [!NOTE]
+> The Custom Translator workspace sharing policy has changed. For additional security measures, you can share a workspace only with people who have recently signed in to the Custom Translator portal.
+
+1. Select **Share**.
+
+1. Complete the **email address** field for collaborators.
+
+1. Select **role** from the dropdown list.
+
+1. Select **Share**.
+++
+### Remove somebody from a workspace
+
+1. Select **Share**.
+
+2. Select the **X** icon next to the **Role** and email address that you want to remove.
++
+### Restrict access to workspace models
+
+> [!WARNING]
+> **Restrict access** blocks runtime translation requests to all published models in the workspace if the requests don't include the same Translator resource that was used to create the workspace.
+
+Select the **Yes** checkbox. Within a few minutes, all published models are secured from unauthorized access.
++
+## Next steps
+
+> [!div class="nextstepaction"]
+> [Learn how to manage projects](create-manage-project.md)
ai-services Enable Vnet Service Endpoint https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/translator/custom-translator/how-to/enable-vnet-service-endpoint.md
+
+ Title: Enable Virtual Network service endpoints with Custom Translator service
+
+description: This article describes how to use Custom Translator service with an Azure Virtual Network service endpoint.
+++++ Last updated : 07/05/2023++++
+# Enable Custom Translator through Azure Virtual Network
+
+In this article, we show you how to set up and use VNet service endpoints with Custom Translator.
+
+Azure Virtual Network (VNet) [service endpoints](../../../../virtual-network/virtual-network-service-endpoints-overview.md) securely connect your Azure service resources to your virtual networks over an optimized route via the Azure global network. Service endpoints enable private IP addresses within your virtual network to reach the endpoint of an Azure service without the need for a public IP address on the virtual network.
+
+For more information, see [Azure Virtual Network overview](../../../../virtual-network/virtual-networks-overview.md)
+
+> [!NOTE]
+> Before you start, review [how to use virtual networks with Azure AI services](../../../cognitive-services-virtual-networks.md).
+
+ To set up a Translator resource for VNet service endpoint scenarios, you need the following resources:
+
+* [A regional Translator resource (global isn't supported)](../../create-translator-resource.md).
+* [VNet and networking settings for the Translator resource](#configure-virtual-networks-resource-networking-settings).
+
+## Configure virtual networks resource networking settings
+
+To start, you need to add all virtual networks that are allowed access via the service endpoint to the Translator resource networking properties. To enable access to a Translator resource via the VNet, you need to enable the `Microsoft.CognitiveServices` service endpoint type for the required subnets of your virtual network. Doing so routes all subnet traffic related to Azure AI services through the private global network. If you intend to access any other Azure AI services resources from the same subnet, make sure these resources are also configured to allow your virtual network.
+
+> [!NOTE]
+>
+> * If a virtual network isn't added as *allowed* in the Translator resource networking properties, it won't have access to the Translator resource via the service endpoint, even if the `Microsoft.CognitiveServices` service endpoint is enabled for the virtual network.
+> * If the service endpoint is enabled but the virtual network isn't allowed, the Translator resource won't be accessible for the virtual network through a public IP address, regardless of your other network security settings.
+> * Enabling the `Microsoft.CognitiveServices` endpoint routes all traffic related to Azure AI services through the private global network. Thus, the virtual network should be explicitly allowed to access the resource.
+> * This guidance applies for all Azure AI services resources, not just for Translator resources.
+
+Let's get started:
+
+1. Navigate to the [Azure portal](https://portal.azure.com/) and sign in to your Azure account.
+
+1. Select a regional Translator resource.
+
+1. From the **Resource Management** group in the left side panel, select **Networking**.
+
+ :::image type="content" source="../media/how-to/resource-management-networking.png" alt-text="Screenshot of the networking selection under Resource Management in the Azure portal.":::
+
+1. From the **Firewalls and virtual networks** tab, choose **Selected Networks and Private Endpoints**.
+
+ :::image type="content" source="../media/how-to/firewalls-virtual-network.png" alt-text="Screenshot of the firewalls and virtual network page in the Azure portal.":::
+
+ > [!NOTE]
+ > To use Virtual Network service endpoints, you need to select the **Selected Networks and Private Endpoints** network security option. No other options are supported.
+
+1. Select **Add existing virtual network** or **Add new virtual network** and provide the required parameters.
+
+ * Complete the process by selecting **Add** for an existing virtual network or **Create** for a new one.
+
+ * If you add an existing virtual network, the `Microsoft.CognitiveServices` service endpoint is automatically enabled for the selected subnets.
+
+   * If you create a new virtual network, the **default** subnet is automatically configured with the `Microsoft.CognitiveServices` service endpoint. This operation can take a few minutes.
+
+ > [!NOTE]
+ > As described in the [previous section](#configure-virtual-networks-resource-networking-settings), when you configure a virtual network as *allowed* for the Translator resource, the `Microsoft.CognitiveServices` service endpoint is automatically enabled. If you later disable it, you need to re-enable it manually to restore the service endpoint access to the Translator resource (and to a subset of other Azure AI services resources).
+
+1. Now, when you choose the **Selected Networks and Private Endpoints** tab, you can see your enabled virtual network and subnets under the **Virtual networks** section.
+
+1. To verify that the service endpoint is enabled:
+
+ * From the **Resource Management** group in the left side panel, select **Networking**.
+
+ * Select your **virtual network** and then select the desired **subnet**.
+
+ :::image type="content" source="../media/how-to/select-subnet.png" alt-text="Screenshot of subnet selection section in the Azure portal.":::
+
+ * A new **Subnets** window appears.
+
+ * Select **Service endpoints** from the **Settings** menu located on the left side panel.
+
+ :::image type="content" source="../media/how-to/service-endpoints.png" alt-text="Screenshot of the **Subnets** selection from the **Settings** menu in the Azure portal.":::
+
+1. From the **Settings** menu in the left side panel, choose **Service Endpoints** and, in the main window, check that your virtual network subnet is included in the `Microsoft.CognitiveServices` list.
+
+## Use the Custom Translator portal
+
+The following table describes Custom Translator project accessibility per Translator resource **Networking** → **Firewalls and virtual networks** security setting:
+
+ :::image type="content" source="../media/how-to/allow-network-access.png" alt-text="Screenshot of allowed network access section in the Azure portal.":::
+
+> [!IMPORTANT]
+> If you configure **Selected Networks and Private Endpoints** via the **Networking** → **Firewalls and virtual networks** tab, you can't use the Custom Translator portal with your Translator resource. However, you can still use the Translator resource outside of the Custom Translator portal.
+
+| Translator resource network security setting | Custom Translator portal accessibility |
+|--|--|
+| All networks | No restrictions |
+| Selected Networks and Private Endpoints | Accessible from allowed VNET IP addresses |
+| Disabled | Not accessible |
+
+To use Custom Translator without relaxing network access restrictions on your production Translator resource, consider this workaround:
+
+* Create another Translator resource for development that can be used on a public network.
+
+* Prepare your custom model in the Custom Translator portal on the development resource.
+
+* Copy the model from your development resource to your production resource using the [Custom Translator non-interactive REST API](https://microsofttranslator.github.io/CustomTranslatorApiSamples/) `workspaces` → `copy authorization and models` → `copy functions`.
+
+Congratulations! You learned how to use Azure VNet service endpoints with Custom Translator.
+
+## Learn more
+
+Visit the [**Custom Translator API**](https://microsofttranslator.github.io/CustomTranslatorApiSamples/) page to view our non-interactive REST APIs.
ai-services Publish Model https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/translator/custom-translator/how-to/publish-model.md
+
+ Title: Publish a custom model
+
+description: This article explains how to publish a custom model using the Azure AI Translator Custom Translator.
++++ Last updated : 07/18/2023+++
+# Publish a custom model
+
+Publishing your model makes it available for use with the Translator API. A project might have one or many successfully trained models. You can only publish one model per project; however, you can publish a model to one or multiple regions depending on your needs. For more information, see [Translator pricing](https://azure.microsoft.com/pricing/details/cognitive-services/translator/#pricing).
+
+## Publish your trained model
+
+You can publish one model per project to one or multiple regions.
+
+1. Select the **Publish model** blade.
+
+1. Select your model name, for example *en-de with sample data*, and select **Publish**.
+
+1. Check the desired region(s).
+
+1. Select **Publish**. The status should transition from _Deploying_ to _Deployed_.
+
+ :::image type="content" source="../media/quickstart/publish-model.png" alt-text="Screenshot illustrating the publish model blade.":::
+
+## Replace a published model
+
+To replace a published model, you can exchange the published model with a different model in the same region(s):
+
+1. Select the replacement model.
+
+1. Select **Publish**.
+
+1. Select **Publish** once more in the **Publish model** dialog window.
+
+## Next steps
+
+> [!div class="nextstepaction"]
+> [Learn how to translate with custom models](../quickstart.md)
ai-services Test Your Model https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/translator/custom-translator/how-to/test-your-model.md
+
+ Title: View custom model details and test translation
+
+description: How to test your custom model BLEU score and evaluate translations
++++ Last updated : 07/18/2023+++
+# Test your model
+
+Once your model has trained successfully, you can use the test set translations to evaluate the quality of your model. To make an informed decision about whether to use our standard model or your custom model, evaluate the delta between your custom model [**BLEU score**](#bleu-score) and our standard model **Baseline BLEU**. If your model is trained on a narrow domain, and your training data is consistent with the test data, you can expect a high BLEU score.
+
+## BLEU score
+
+BLEU (Bilingual Evaluation Understudy) is an algorithm for evaluating the precision or accuracy of text that has been machine translated from one language to another. Custom Translator uses the BLEU metric as one way of conveying translation accuracy.
+
+A BLEU score is a number between zero and 100. A score of zero indicates a low-quality translation where nothing in the translation matched the reference. A score of 100 indicates a perfect translation that is identical to the reference. It's not necessary to attain a score of 100; a BLEU score between 40 and 60 indicates a high-quality translation.
+
+[Read more](../concepts/bleu-score.md?WT.mc_id=aiml-43548-heboelma)
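+
+The Custom Translator portal computes the BLEU score on your test set for you. If you want to see how the metric behaves on your own data, the following Python snippet is a minimal sketch using NLTK's `corpus_bleu`; the tokenized sentences, smoothing choice, and 0-100 scaling are illustrative assumptions, not the exact computation the service uses.
+
+```python
+# Illustrative only: a rough BLEU computation with NLTK, not the service's exact metric.
+from nltk.translate.bleu_score import corpus_bleu, SmoothingFunction
+
+# Hypothetical tokenized data: one or more reference translations per candidate sentence.
+references = [
+    [["the", "contract", "ends", "in", "march"]],
+    [["please", "restart", "the", "device"]],
+]
+candidates = [
+    ["the", "contract", "expires", "in", "march"],
+    ["please", "restart", "the", "device"],
+]
+
+# Smoothing avoids zero scores when a higher-order n-gram has no match.
+score = corpus_bleu(references, candidates,
+                    smoothing_function=SmoothingFunction().method1)
+print(f"BLEU: {score * 100:.1f}")  # scale to 0-100, as reported in the portal
+```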
+
+## Model details
+
+1. Select the **Model details** blade.
+
+1. Select the model name. Review the training date/time, total training time, number of sentences used for training, tuning, testing, and dictionary. Check whether the system generated the test and tuning sets. You'll use the `Category ID` to make translation requests.
+
+1. Evaluate the model [BLEU](../beginners-guide.md#what-is-a-bleu-score) score. Review the test set: the **BLEU score** is the custom model's score and the **Baseline BLEU** is the score of the pre-trained baseline model used for customization. A higher **BLEU score** means higher translation quality when using the custom model.
+
+ :::image type="content" source="../media/quickstart/model-details.png" alt-text="Screenshot illustrating the model detail.":::
+
+## Test quality of your model's translation
+
+1. Select the **Test model** blade.
+
+1. Select model **Name**.
+
+1. Manually evaluate the translation from your **Custom model** and the **Baseline model** (our pre-trained baseline used for customization) against the **Reference** (the target translation from the test set).
+
+1. If you're satisfied with the training results, place a deployment request for the trained model.
+
+## Next steps
+
+- Learn [how to publish/deploy a custom model](publish-model.md).
+- Learn [how to translate documents with a custom model](translate-with-custom-model.md).
ai-services Train Custom Model https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/translator/custom-translator/how-to/train-custom-model.md
+
+ Title: Train model
+
+description: How to train a custom model
++++ Last updated : 07/18/2023+++
+# Train a custom model
+
+A model provides translations for a specific language pair. The outcome of a successful training is a model. To train a custom model, three mutually exclusive document types are required: training, tuning, and testing. If only training data is provided when queuing a training, Custom Translator will automatically assemble tuning and testing data. It will use a random subset of sentences from your training documents, and exclude these sentences from the training data itself. A minimum of 10,000 parallel training sentences is required to train a full model.
+
+## Create model
+
+1. Select the **Train model** blade.
+
+1. Type the **Model name**.
+
+1. Keep the default **Full training** selected or select **Dictionary-only training**.
+
+ >[!Note]
+ >Full training displays all uploaded document types. Dictionary-only displays dictionary documents only.
+
+1. Under **Select documents**, select the documents you want to use to train the model, for example, `sample-English-German` and review the training cost associated with the selected number of sentences.
+
+1. Select **Train now**.
+
+1. Select **Train** to confirm.
+
+ >[!Note]
+   >**Notifications** displays model training progress, for example, the **Submitting data** state. Training a model takes a few hours, depending on the number of selected sentences.
+
+ :::image type="content" source="../media/quickstart/train-model.png" alt-text="Screenshot illustrating the train model blade.":::
+
+## When to select dictionary-only training
+
+For better results, we recommend letting the system learn from your training data. However, when you don't have enough parallel sentences to meet the 10,000-sentence minimum requirement, or when sentences and compound nouns must be rendered as-is, use dictionary-only training. Your model will typically complete training much faster than with full training. The resulting models will use the baseline models for translation along with the dictionaries you've added. You won't see BLEU scores or get a test report.
+
+> [!Note]
+>Custom Translator doesn't sentence-align dictionary files. Therefore, it is important that there are an equal number of source and target phrases/sentences in your dictionary documents and that they are precisely aligned. If not, the document upload will fail.
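+
+Because an upload fails when the counts differ, it can save time to check alignment before uploading. The following Python snippet is a minimal sketch of such a check, assuming plain-text dictionary files with one entry per line; the file names are hypothetical.
+
+```python
+# Minimal pre-upload check: both dictionary files must have the same number of
+# non-empty lines, with line N of the source matching line N of the target.
+from pathlib import Path
+
+def count_entries(path: str) -> int:
+    return sum(1 for line in Path(path).read_text(encoding="utf-8").splitlines()
+               if line.strip())
+
+source_count = count_entries("dictionary-en.txt")  # hypothetical file names
+target_count = count_entries("dictionary-de.txt")
+
+if source_count != target_count:
+    raise ValueError(f"Misaligned dictionary: {source_count} source entries "
+                     f"vs {target_count} target entries.")
+print(f"OK: {source_count} aligned entries.")
+```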
+
+## Model details
+
+1. After successful model training, select the **Model details** blade.
+
+1. Select the **Model Name** to review training date/time, total training time, number of sentences used for training, tuning, testing, dictionary, and whether the system generated the test and tuning sets. You'll use `Category ID` to make translation requests.
+
+1. Evaluate the model [BLEU score](../beginners-guide.md#what-is-a-bleu-score). Review the test set: the **BLEU score** is the custom model's score and the **Baseline BLEU** is the score of the pre-trained baseline model used for customization. A higher **BLEU score** means higher translation quality when using the custom model.
+
+ :::image type="content" source="../media/quickstart/model-details.png" alt-text="Screenshot illustrating model details fields.":::
+
+## Duplicate model
+
+1. Select the **Model details** blade.
+
+1. Hover over the model name and select the check box that appears.
+
+1. Select **Duplicate**.
+
+1. Fill in **New model name**.
+
+1. Keep **Train immediately** checked if no further data will be selected or uploaded; otherwise, check **Save as draft**.
+
+1. Select **Save**
+
+ > [!Note]
+ >
+ > If you save the model as `Draft`, **Model details** is updated with the model name in `Draft` status.
+ >
+   > To add more documents, select the model name and follow the `Create model` section above.
+
+ :::image type="content" source="../media/how-to/duplicate-model.png" alt-text="Screenshot illustrating the duplicate model blade.":::
+
+## Next steps
+
+- Learn [how to test and evaluate model quality](test-your-model.md).
+- Learn [how to publish model](publish-model.md).
+- Learn [how to translate with custom models](translate-with-custom-model.md).
ai-services Translate With Custom Model https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/translator/custom-translator/how-to/translate-with-custom-model.md
+
+ Title: Translate text with a custom model
+
+description: How to make translation requests using custom models published with the Azure AI Translator Custom Translator.
++++ Last updated : 07/18/2023+++
+# Translate text with a custom model
+
+After you publish your custom model, you can access it with the Translator API by using the `Category ID` parameter.
+
+## How to translate
+
+1. Use the `Category ID` when making a custom translation request via Microsoft Translator [Text API V3](../../reference/v3-0-translate.md?tabs=curl). The `Category ID` is created by concatenating the WorkspaceID, project label, and category code. Use the `CategoryID` with the Text Translation API to get custom translations.
+
+ ```http
+ https://api.cognitive.microsofttranslator.com/translate?api-version=3.0&to=de&category=a2eb72f9-43a8-46bd-82fa-4693c8b64c3c-TECH
+
+ ```
+
+   More information about the Translator Text API can be found on the [Translator API Reference](../../reference/v3-0-translate.md) page. A sample request made from Python appears after this list.
+
+1. You may also want to download and install our free [DocumentTranslator app for Windows](https://github.com/MicrosoftTranslator/DocumentTranslation/releases).
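+
+The following Python snippet is a minimal sketch, not an official sample, of making the request above with the `requests` library. The key, region, `Category ID`, and input text are placeholder values to replace with your own.
+
+```python
+# Sketch of a Translator Text API v3 call that routes through a custom model
+# via the category parameter; key, region, and Category ID are placeholders.
+import requests
+
+endpoint = "https://api.cognitive.microsofttranslator.com/translate"
+params = {
+    "api-version": "3.0",
+    "to": "de",
+    "category": "a2eb72f9-43a8-46bd-82fa-4693c8b64c3c-TECH",  # your Category ID
+}
+headers = {
+    "Ocp-Apim-Subscription-Key": "<your-translator-key>",
+    "Ocp-Apim-Subscription-Region": "<your-resource-region>",
+    "Content-Type": "application/json",
+}
+body = [{"text": "The computer is ready for an upgrade."}]
+
+response = requests.post(endpoint, params=params, headers=headers, json=body)
+response.raise_for_status()
+print(response.json()[0]["translations"][0]["text"])
+```
+
+The response is a JSON array with one item per input text; each item carries a `translations` list with the translated text.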
+
+## Next steps
+
+> [!div class="nextstepaction"]
+> [Learn more about building and publishing custom models](../beginners-guide.md)
ai-services Key Terms https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/translator/custom-translator/key-terms.md
+
+ Title: Key terms - Custom Translator
+
+description: List of key terms used in Custom Translator articles.
++++ Last updated : 07/18/2023++++
+# Custom Translator key terms
+
+The following table presents a list of key terms that you may find as you work with the [Custom Translator](https://portal.customtranslator.azure.ai).
+
+| Word or Phrase|Definition|
+||--|
+| Source Language | The source language is the starting language that you want to convert to another language (the "target").|
+| Target Language| The target language is the language that you want the machine translation to provide after it receives the source language. |
+| Monolingual File | A monolingual file has a single language not paired with another file of a different language. |
+| Parallel Files | A parallel file is a combination of two files with corresponding text. One file has the source language. The other has the target language.|
+| Sentence Alignment| A parallel dataset must have its sentences aligned so that each sentence pair represents the same text in both languages. For instance, in a source parallel file, the first sentence should, in theory, map to the first sentence in the target parallel file.|
+| Aligned Text | One of the most important steps of file validation is to align the sentences in the parallel documents. Things are expressed differently in different languages. Also different languages have different word orders. This step does the job of aligning the sentences with the same content so that they can be used for training. A low sentence alignment indicates there might be something wrong with one or both of the files. |
+| Word Breaking/ Unbreaking | Word breaking is the function of marking the boundaries between words. Many writing systems use a space to denote the boundary between words. Word unbreaking refers to the removal of any visible marker that may have been inserted between words in a preceding step. |
+| Delimiters | Delimiters are the ways that a sentence is divided into segments, or that the boundary between sentences is marked. For instance, in English, spaces delimit words, colons and semicolons delimit clauses, and periods delimit sentences. |
+| Training Files | A training file is used to teach the machine translation system how to map from one language (the source) to a target language (the target). The more data you provide, the better the system will perform. |
+| Tuning Files | These files are often randomly derived from the training set (if you don't select a tuning set). The sentences are autoselected and used to tune the system and ensure that it's functioning properly. If you wish to create a general-purpose translation model and create your own tuning files, make sure they're a random set of sentences across domains |
+| Testing Files| These files are often derived files, randomly selected from the training set (if you don't select any test set). The purpose of these sentences is to evaluate the translation model's accuracy. To make sure the system accurately translates these sentences, you may wish to create a testing set and upload it to the translator. Doing so will ensure that the sentences are used in the system's evaluation (the generation of a BLEU score). |
+| Combo file | A type of file in which the source and translated sentences are contained in the same file. Supported file formats (TMX, XLIFF, XLF, ICI, and XLSX). |
+| Archive file | A file that contains other files. Supported file formats (zip, gz, tgz). |
+| BLEU Score | [BLEU](concepts/bleu-score.md) is the industry standard method for evaluating the "precision" or accuracy of the translation model. Though other methods of evaluation exist, Microsoft Translator relies on the BLEU method to report accuracy to project owners. |
ai-services Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/translator/custom-translator/overview.md
+
+ Title: What is Custom Translator?
+
+description: Custom Translator offers similar capabilities to what Microsoft Translator Hub does for Statistical Machine Translation (SMT), but exclusively for Neural Machine Translation (NMT) systems.
++++ Last updated : 07/18/2023+++
+# What is Custom Translator?
+
+Custom Translator is a feature of the Microsoft Translator service, which enables enterprises, app developers, and language service providers to build customized neural machine translation (NMT) systems. The customized translation systems seamlessly integrate into existing applications, workflows, and websites.
+
+Translation systems built with [Custom Translator](https://portal.customtranslator.azure.ai) are available through Microsoft Translator [Microsoft Translator Text API V3](../reference/v3-0-translate.md?tabs=curl), the same cloud-based, secure, high performance system powering billions of translations every day.
+
+The platform enables users to build and publish custom translation systems to and from English. Custom Translator supports more than three dozen languages that map directly to the languages available for NMT. For a complete list, *see* [Translator language support](../language-support.md).
+
+This documentation contains the following article types:
+
+* [**Quickstarts**](./quickstart.md) are getting-started instructions to guide you through making requests to the service.
+* [**How-to guides**](./how-to/create-manage-workspace.md) contain instructions for using the feature in more specific or customized ways.
+
+## Features
+
+Custom Translator provides features to build a custom translation system and later access it.
+
+|Feature |Description |
+|||
+|[Apply neural machine translation technology](https://www.microsoft.com/translator/blog/2016/11/15/microsoft-translator-launching-neural-network-based-translations-for-all-its-speech-languages/) | Improve your translation by applying neural machine translation (NMT) provided by Custom translator. |
+|[Build systems that know your business terminology](./beginners-guide.md) | Customize and build translation systems using parallel documents that reflect the terminology used in your own business and industry. |
+|[Use a dictionary to build your models](./how-to/train-custom-model.md#when-to-select-dictionary-only-training) | If you don't have a training dataset, you can train a model with only dictionary data. |
+|[Collaborate with others](./how-to/create-manage-workspace.md#manage-workspace-settings) | Collaborate with your team by sharing your work with different people. |
+|[Access your custom translation model](./how-to/translate-with-custom-model.md) | Your custom translation model can be accessed anytime by your existing applications or programs via the Microsoft Translator Text API V3. |
+
+## Get better translations
+
+Microsoft Translator released [Neural Machine Translation (NMT)](https://www.microsoft.com/translator/blog/2016/11/15/microsoft-translator-launching-neural-network-based-translations-for-all-its-speech-languages/) in 2016. NMT provided major advances in translation quality over the industry-standard [Statistical Machine Translation (SMT)](https://en.wikipedia.org/wiki/Statistical_machine_translation) technology. Because NMT better captures the context of full sentences before translating them, it provides higher quality, more human-sounding, and more fluent translations. [Custom Translator](https://portal.customtranslator.azure.ai) provides NMT for your custom models, resulting in better translation quality.
+
+You can use previously translated documents to build a translation system. These documents include domain-specific terminology and style that a standard translation system doesn't capture. Users can upload ALIGN, PDF, LCL, HTML, HTM, XLF, TMX, XLIFF, TXT, DOCX, and XLSX documents.
+
+Custom Translator also accepts data that's parallel at the document level to make data collection and preparation more effective. If users have access to versions of the same content in multiple languages but in separate documents, Custom Translator will be able to automatically match sentences across documents.
+
+If the appropriate type and amount of training data is supplied, it's not uncommon to see [BLEU score](concepts/bleu-score.md) gains between 5 and 10 points by using Custom Translator.
+
+## Be productive and cost effective
+
+With [Custom Translator](https://portal.customtranslator.azure.ai), training and deploying a custom system doesn't require any programming skills.
+
+The secure [Custom Translator](https://portal.customtranslator.azure.ai) portal enables users to upload training data, train systems, test systems, and deploy them to a production environment through an intuitive user interface. The system will then be available for use at scale within a few hours (actual time depends on training data size).
+
+[Custom Translator](https://portal.customtranslator.azure.ai) can also be programmatically accessed through a [dedicated API](https://custom-api.cognitive.microsofttranslator.com/swagger/). The API allows users to create or update trainings through their own app or web service.
+
+The cost of using a custom model to translate content is based on the user's Translator Text API pricing tier. See the Azure AI services [Translator Text API pricing webpage](https://azure.microsoft.com/pricing/details/cognitive-services/translator-text-api/)
+for pricing tier details.
+
+## Securely translate anytime, anywhere on all your apps and services
+
+Custom systems can be seamlessly accessed and integrated into any product or business workflow and on any device via the Microsoft Translator Text REST API.
+
+## Next steps
+
+* Read about [pricing details](https://azure.microsoft.com/pricing/details/cognitive-services/translator-text-api/).
+
+* With [Quickstart](./quickstart.md) learn to build a translation model in Custom Translator.
ai-services Platform Upgrade https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/translator/custom-translator/platform-upgrade.md
+
+ Title: "Platform upgrade - Custom Translator"
+
+description: Custom Translator v1.0 upgrade
++++ Last updated : 07/18/2023+++
+# Custom Translator platform upgrade
+
+> [!CAUTION]
+>
+> On June 02, 2023, Microsoft will retire the Custom Translator v1.0 model platform. Existing v1.0 models must migrate to the new platform for continued processing and support.
+
+Following measured and consistent high-quality results using models trained on the Custom Translator new platform, the v1.0 platform is retiring. The new Custom Translator platform delivers significant improvements in many domains compared to both standard and Custom v1.0 platform translations. Migrate your v1.0 models to the new platform by June 02, 2023.
+
+## Custom Translator v1.0 upgrade timeline
+
+* **May 01, 2023** → Custom Translator v1.0 model publishing ends. There's no downtime during the v1.0 model migration. All model publishing and in-flight translation requests will continue without disruption until June 02, 2023.
+
+* **May 01, 2023 through June 02, 2023** → Customers voluntarily migrate to new platform models.
+
+* **June 08, 2023** → Remaining v1.0 published models migrate automatically and are published by the Custom Translator team.
+
+## Upgrade to new platform
+
+> [!IMPORTANT]
+>
+> * Starting **May 01, 2023** the upgrade wizard and workspace banner will be displayed in the Custom Translator portal indicating that you have v1.0 models to upgrade.
+> * The banner contains a **Select** button that takes you to the upgrade wizard where a list of all your v1.0 models available for upgrade are displayed.
+> * Select any or all of your v1.0 models then select **Train** to start new platform model training.
+
+* **Check to see if you have published v1.0 models**. After signing in to the Custom Translator portal, you'll see a message indicating that you have v1.0 models to upgrade. You can also check to see if a current workspace has v1.0 models by selecting **Workspace settings** and scrolling to the bottom of the page.
+
+* **Use the upgrade wizard**. Follow the steps listed in **Upgrade to the latest version** wizard. Depending on your training data size, it may take from a few hours to a full day to upgrade your models to the new platform.
+
+## Unpublished and opt-out published models
+
+* For unpublished models, save the model data (training, testing, dictionary) and delete the project.
+
+* For published models that you don't want to upgrade, save your model data (training, testing, dictionary), unpublish the model, and delete the project.
+
+## Next steps
+
+For more support, visit [Azure AI services support and help options](../../cognitive-services-support-options.md).
ai-services Quickstart https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/translator/custom-translator/quickstart.md
+
+ Title: "Quickstart: Build, deploy, and use a custom model - Custom Translator"
+
+description: A step-by-step guide to building a translation system using the Custom Translator portal v2.
++++ Last updated : 07/05/2023+++
+# Quickstart: Build, publish, and translate with custom models
+
+Translator is a cloud-based neural machine translation service that is part of the Azure AI services family of REST APIs and can be used with any operating system. Translator powers many Microsoft products and services used by thousands of businesses worldwide to perform language translation and other language-related operations. In this quickstart, learn to build custom solutions for your applications across all [supported languages](../language-support.md).
+
+## Prerequisites
+
+ To use the [Custom Translator](https://portal.customtranslator.azure.ai/) portal, you need the following resources:
+
+* A [Microsoft account](https://signup.live.com).
+
+* Azure subscription - [Create one for free](https://azure.microsoft.com/free/cognitive-services/)
+* Once you have an Azure subscription, [create a Translator resource](https://portal.azure.com/#create/Microsoft.CognitiveServicesTextTranslation) in the Azure portal to get your key and endpoint. After it deploys, select **Go to resource**.
+ * You need the key and endpoint from the resource to connect your application to the Translator service. Paste your key and endpoint into the code later in the quickstart. You can find these values on the Azure portal **Keys and Endpoint** page:
+
+ :::image type="content" source="../media/keys-and-endpoint-portal.png" alt-text="Screenshot: Azure portal keys and endpoint page.":::
+
+For more information, *see* [how to create a Translator resource](../create-translator-resource.md).
+
+## Custom Translator portal
+
+Once you have the above prerequisites, sign in to the [Custom Translator](https://portal.customtranslator.azure.ai/) portal to create workspaces, build projects, upload files, train models, and publish your custom solution.
+
+You can read an overview of translation and custom translation, learn some tips, and watch a getting started video in the [Azure AI technical blog](https://techcommunity.microsoft.com/t5/azure-ai/customize-a-translation-to-make-sense-in-a-specific-context/ba-p/2811956).
+
+## Process summary
+
+1. [**Create a workspace**](#create-a-workspace). A workspace is a work area for composing and building your custom translation system. A workspace can contain multiple projects, models, and documents. All the work you do in Custom Translator is done inside a specific workspace.
+
+1. [**Create a project**](#create-a-project). A project is a wrapper for models, documents, and tests. Each project includes all documents that are uploaded into that workspace with the correct language pair. For example, if you have both an English-to-Spanish project and a Spanish-to-English project, the same documents are included in both projects.
+
+1. [**Upload parallel documents**](#upload-documents). Parallel documents are pairs of documents where one (target) is the translation of the other (source). One document in the pair contains sentences in the source language and the other document contains sentences translated into the target language. It doesn't matter which language is marked as "source" and which language is marked as "target"; a parallel document can be used to train a translation system in either direction.
+
+1. [**Train your model**](#train-your-model). A model is the system that provides translation for a specific language pair. The outcome of a successful training is a model. When you train a model, three mutually exclusive document types are required: training, tuning, and testing. If only training data is provided when queuing a training, Custom Translator automatically assembles tuning and testing data. It uses a random subset of sentences from your training documents, and excludes these sentences from the training data itself. A minimum of 10,000 parallel sentences is required to train a model.
+
+1. [**Test (human evaluate) your model**](#test-your-model). The testing set is used to compute the [BLEU](beginners-guide.md#what-is-a-bleu-score) score. This score indicates the quality of your translation system.
+
+1. [**Publish (deploy) your trained model**](#publish-your-model). Your custom model is made available for runtime translation requests.
+
+1. [**Translate text**](#translate-text). Use the cloud-based, secure, high performance, highly scalable Microsoft Translator [Text API V3](../reference/v3-0-translate.md?tabs=curl) to make translation requests.
+
+## Create a workspace
+
+1. After you sign in to Custom Translator, you'll be asked for permission to read your profile from the Microsoft identity platform to request your user access token and refresh token. Both tokens are needed for authentication and to ensure that you aren't signed out during your live session or while training your models. </br>Select **Yes**.
+
+ :::image type="content" source="media/quickstart/first-time-user.png" alt-text="Screenshot illustrating how to create a workspace.":::
+
+1. Select **My workspaces**.
+
+1. Select **Create a new workspace**.
+
+1. Type _Contoso MT models_ for **Workspace name** and select **Next**.
+
+1. Select "Global" for **Select resource region** from the dropdown list.
+
+1. Copy and paste your Translator resource key.
+
+1. Select **Next**.
+
+1. Select **Done**.
+
+ >[!Note]
+ > Region must match the region that was selected during the resource creation. You can use **KEY 1** or **KEY 2.**
+
+ :::image type="content" source="media/quickstart/resource-key.png" alt-text="Screenshot illustrating the resource key.":::
+
+ :::image type="content" source="media/quickstart/create-workspace-1.png" alt-text="Screenshot illustrating workspace creation.":::
+
+## Create a project
+
+Once the workspace is created successfully, you're taken to the **Projects** page.
+
+You create an English-to-German project to train a custom model with only a [training](concepts/model-training.md#training-document-type-for-custom-translator) document type.
+
+1. Select **Create project**.
+
+1. Type *English-to-German* for **Project name**.
+
+1. Select *English (en)* as **Source language** from the dropdown list.
+
+1. Select *German (de)* as **Target language** from the dropdown list.
+
+1. Select *General* for **Domain** from the dropdown list.
+
+1. Select **Create project**.
+
+ :::image type="content" source="media/quickstart/create-project.png" alt-text="Screenshot illustrating how to create a project.":::
+
+## Upload documents
+
+In order to create a custom model, you need to upload all or a combination of [training](concepts/model-training.md#training-document-type-for-custom-translator), [tuning](concepts/model-training.md#tuning-document-type-for-custom-translator), [testing](concepts/model-training.md#testing-dataset-for-custom-translator), and [dictionary](concepts/dictionaries.md) document types.
+
+In this quickstart, you'll upload [training](concepts/model-training.md#training-document-type-for-custom-translator) documents for customization.
+
+>[!Note]
+> You can use our sample training, phrase and sentence dictionaries dataset, [Customer sample English-to-German datasets](https://github.com/MicrosoftTranslator/CustomTranslatorSampleDatasets), for this quickstart. However, for production, it's better to upload your own training dataset.
+
+1. Select *English-to-German* project name.
+
+1. Select **Manage documents** from the left navigation menu.
+
+1. Select **Add document set**.
+
+1. Check the **Training set** box and select **Next**.
+
+1. Keep **Parallel documents** checked and type *sample-English-German*.
+
+1. Under the **Source (English - EN) file**, select **Browse files** and select *sample-English-German-Training-en.txt*.
+
+1. Under the **Target (German - DE) file**, select **Browse files** and select *sample-English-German-Training-de.txt*.
+
+1. Select **Upload**
+
+ >[!Note]
+ >You can upload the sample phrase and sentence dictionaries dataset. This step is left for you to complete.
+
+ :::image type="content" source="media/quickstart/upload-model.png" alt-text="Screenshot illustrating how to upload documents.":::
+
+## Train your model
+
+Now you're ready to train your English-to-German model.
+
+1. Select **Train model** from the left navigation menu.
+
+1. Type *en-de with sample data* for **Model name**.
+
+1. Keep **Full training** checked.
+
+1. Under **Select documents**, check *sample-English-German* and review the training cost associated with the selected number of sentences.
+
+1. Select **Train now**.
+
+1. Select **Train** to confirm.
+
+ >[!Note]
+   >**Notifications** displays model training progress, for example, the **Submitting data** state. Training a model takes a few hours, depending on the number of selected sentences.
+
+ :::image type="content" source="media/quickstart/train-model.png" alt-text="Screenshot illustrating how to create a model.":::
+
+1. After successful model training, select **Model details** from the left navigation menu.
+
+1. Select the model name *en-de with sample data*. Review training date/time, total training time, number of sentences used for training, tuning, testing, and dictionary. Check whether the system generated the test and tuning sets. You use the `Category ID` to make translation requests.
+
+1. Evaluate the model [BLEU](beginners-guide.md#what-is-a-bleu-score) score. The test set **BLEU score** is the custom model's score and the **Baseline BLEU** is the score of the pretrained baseline model used for customization. A higher **BLEU score** means higher translation quality when using the custom model.
+
+ >[!Note]
+   >If you train with our shared customer sample datasets, your BLEU score will differ from the one shown in the image.
+
+ :::image type="content" source="media/quickstart/model-details.png" alt-text="Screenshot illustrating model details.":::
+
+## Test your model
+
+Once your training has completed successfully, inspect the test set translated sentences.
+
+1. Select **Test model** from the left navigation menu.
+2. Select *en-de with sample data*.
+3. Manually evaluate the translation from the **New model** (custom model) and the **Baseline model** (our pretrained baseline used for customization) against the **Reference** (the target translation from the test set).
+
+## Publish your model
+
+Publishing your model makes it available for use with the Translator API. A project might have one or many successfully trained models. You can only publish one model per project; however, you can publish a model to one or multiple regions depending on your needs. For more information, see [Translator pricing](https://azure.microsoft.com/pricing/details/cognitive-services/translator/#pricing).
+
+1. Select **Publish model** from the left navigation menu.
+
+1. Select *en-de with sample data* and select **Publish**.
+
+1. Check the desired region(s).
+
+1. Select **Publish**. The status should transition from _Deploying_ to _Deployed_.
+
+ :::image type="content" source="media/quickstart/publish-model.png" alt-text="Screenshot illustrating how to deploy a trained model.":::
+
+## Translate text
+
+1. Developers should use the `Category ID` when making translation requests with Microsoft Translator [Text API V3](../reference/v3-0-translate.md?tabs=curl). More information about the Translator Text API can be found on the [API Reference](../reference/v3-0-reference.md) webpage.
+
+1. Business users may want to download and install our free [DocumentTranslator app for Windows](https://github.com/MicrosoftTranslator/DocumentTranslation/releases).
+
+## Next steps
+
+> [!div class="nextstepaction"]
+> [Learn how to manage workspaces](how-to/create-manage-workspace.md)
ai-services Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/translator/custom-translator/release-notes.md
+
+ Title: "Release notes - Custom Translator"
+
+description: Custom Translator releases, improvements, bug fixes, and known issues.
++++ Last updated : 07/18/2023++++
+# Custom Translator release notes
+
+This page presents the latest feature, improvement, bug fix, and known issue release notes for Custom Translator service.
+
+## 2023-June release
+
+### June 2023 new features and model updates
+
+#### Custom Translator platform upgrade
+
+&emsp; 🆕 ***Model Upgrade Wizard*** is now available in **Workspace settings** to help guide customers through the V1-model-upgrade-to-new-platform process. For more information, *see* [Custom Translator platform upgrade](platform-upgrade.md).
+
+#### Custom Translator copy model
+
+&emsp; 🆕 ***Copy Model*** is now available in **Model details** to enable the copying of models from one workspace to another. This feature enables model lifecycle management (development → testing → production) and/or scaling. For more information, *see* [Copy a custom model](how-to/copy-model.md).
+
+#### Restrict access to published models
+
+ &emsp; Published model security is now enhanced and restricted access is now enabled within **Workspace settings** to allow only linked Translator resources to request translation.
+
+#### June language model updates
+
+&emsp; Current supported language pairs are listed in the following table. For higher quality, we encourage you to retrain your models accordingly. For more information, *see* [Language support](../language-support.md#custom-translator-language-pairs).
+
+|Source Language|Target Language|
+|:-|:-|
+| Czech (cs-cz) | English (en-us) |
+| Danish (da-dk) | English (en-us) |
+| German (de-&#8203;de) | English (en-us) |
+| Greek (el-gr) | English (en-us) |
+| English (en-us) | Arabic (ar-sa) |
+| English (en-us) | Czech (cs-cz) |
+| English (en-us) | Danish (da-dk) |
+| English (en-us) | German (de-&#8203;de) |
+| English (en-us) | Greek (el-gr) |
+| English (en-us) | Spanish (es-es) |
+| English (en-us) | French (fr-fr) |
+| English (en-us) | Hebrew (he-il) |
+| English (en-us) | Hindi (hi-in) |
+| English (en-us) | Croatian (hr-hr) |
+| English (en-us) | Hungarian (hu-hu) |
+| English (en-us) | Indonesian (id-id) |
+| English (en-us) | Italian (it-it) |
+| English (en-us) | Japanese (ja-jp) |
+| English (en-us) | Korean (ko-kr) |
+| English (en-us) | Lithuanian (lt-lt) |
+| English (en-us) | Latvian (lv-lv) |
+| English (en-us) | Norwegian (nb-no) |
+| English (en-us) | Polish (pl-pl) |
+| English (en-us) | Portuguese (pt-pt) |
+| English (en-us) | Russian (ru-ru) |
+| English (en-us) | Slovak (sk-sk) |
+| English (en-us) | Swedish (sv-se) |
+| English (en-us) | Ukrainian (uk-ua) |
+| English (en-us) | Vietnamese (vi-vn) |
+| English (en-us) | Chinese Simplified (zh-cn) |
+| Spanish (es-es) | English (en-us) |
+| French (fr-fr) | English (en-us) |
+| Hindi (hi-in) | English (en-us) |
+| Hungarian (hu-hu) | English (en-us) |
+| Indonesian (id-id) | English (en-us) |
+| Italian (it-it) | English (en-us) |
+| Japanese (ja-jp) | English (en-us) |
+| Korean (ko-kr) | English (en-us) |
+| Norwegian (nb-no) | English (en-us) |
+| Dutch (nl-nl) | English (en-us) |
+| Polish (pl-pl) | English (en-us) |
+| Portuguese (pt-br) | English (en-us) |
+| Russian (ru-ru) | English (en-us) |
+| Swedish (sv-se) | English (en-us) |
+| Thai (th-th) | English (en-us) |
+| Turkish (tr-tr) | English (en-us) |
+| Vietnamese (vi-vn) | English (en-us) |
+| Chinese Simplified (zh-cn) | English (en-us) |
+
+## 2022-November release
+
+### November 2022 improvements and fixes
+
+#### Custom Translator stable GA v2.0 release
+
+* Custom Translator version v2.0 is generally available and ready for use in your production applications.
+
+* Upload history has been added to the workspace, next to Projects and Documents tabs.
+
+#### November language model updates
+
+* Language pairs are listed in the following table. We encourage you to retrain your models accordingly for higher quality.
+
+|Source Language|Target Language|
+|:-|:-|
+|Chinese Simplified (zh-Hans)|English (en-us)|
+|Chinese Traditional (zh-Hant)|English (en-us)|
+|Czech (cs)|English (en-us)|
+|Dutch (nl)|English (en-us)|
+|English (en-us)|Chinese Simplified (zh-Hans)|
+|English (en-us)|Chinese Traditional (zh-Hant)|
+|English (en-us)|Czech (cs)|
+|English (en-us)|Dutch (nl)|
+|English (en-us)|French (fr)|
+|English (en-us)|German (de)|
+|English (en-us)|Italian (it)|
+|English (en-us)|Polish (pl)|
+|English (en-us)|Romanian (ro)|
+|English (en-us)|Russian (ru)|
+|English (en-us)|Spanish (es)|
+|English (en-us)|Swedish (sv)|
+|German (de)|English (en-us)|
+|Italian (it)|English (en-us)|
+|Russian (ru)|English (en-us)|
+|Spanish (es)|English (en-us)|
+
+#### Security update
+
+* Custom Translator preview REST API calls now require a user access token for authentication.
+
+* Visit our GitHub repo for a [C# code sample](https://github.com/MicrosoftTranslator/CustomTranslator-API-CSharp).
+
+#### Fixes
+
+* Resolved document upload error that caused a blank page in the browser.
+
+* Applied functional modifications.
+
+## 2021-May release
+
+### May 2021 improvements and fixes
+
+* We added a new training pipeline to improve custom model generalization and its capacity to retain more customer terminology (words and phrases).
+
+* Refreshed Custom Translator baselines to fix a word alignment bug. See the following list of affected language pairs.
+
+### Language pair list
+
+| Source Language | Target Language |
+|-|--|
+| Arabic (`ar`) | English (`en-us`)|
+| Brazilian Portuguese (`pt`) | English (`en-us`)|
+| Bulgarian (`bg`) | English (`en-us`)|
+| Chinese Simplified (`zh-Hans`) | English (`en-us`)|
+| Chinese Traditional (`zh-Hant`) | English (`en-us`)|
+| Croatian (`hr`) | English (`en-us`)|
+| Czech (`cs`) | English (`en-us`)|
+| Danish (`da`) | English (`en-us`)|
+| Dutch (`nl`) | English (`en-us`)|
+| English (`en-us`) | Arabic (`ar`)|
+| English (`en-us`) | Bulgarian (`bg`)|
+| English (`en-us`) | Chinese Simplified (`zh-Hans`)|
+| English (`en-us`) | Chinese Traditional (`zh-Hant`)|
+| English (`en-us`) | Czech (`cs`)|
+| English (`en-us`) | Danish (`da`)|
+| English (`en-us`) | Dutch (`nl`)|
+| English (`en-us`) | Estonian (`et`)|
+| English (`en-us`) | Fijian (`fj`)|
+| English (`en-us`) | Finnish (`fi`)|
+| English (`en-us`) | French (`fr`)|
+| English (`en-us`) | Greek (`el`)|
+| English (`en-us`) | Hindi (`hi`) |
+| English (`en-us`) | Hungarian (`hu`)|
+| English (`en-us`) | Icelandic (`is`)|
+| English (`en-us`) | Indonesian (`id`)|
+| English (`en-us`) | Inuktitut (`iu`)|
+| English (`en-us`) | Irish (`ga`)|
+| English (`en-us`) | Italian (`it`)|
+| English (`en-us`) | Japanese (`ja`)|
+| English (`en-us`) | Korean (`ko`)|
+| English (`en-us`) | Lithuanian (`lt`)|
+| English (`en-us`) | Norwegian (`nb`)|
+| English (`en-us`) | Polish (`pl`)|
+| English (`en-us`) | Romanian (`ro`)|
+| English (`en-us`) | Samoan (`sm`)|
+| English (`en-us`) | Slovak (`sk`)|
+| English (`en-us`) | Spanish (`es`)|
+| English (`en-us`) | Swedish (`sv`)|
+| English (`en-us`) | Tahitian (`ty`)|
+| English (`en-us`) | Thai (`th`)|
+| English (`en-us`) | Tongan (`to`)|
+| English (`en-us`) | Turkish (`tr`)|
+| English (`en-us`) | Ukrainian (`uk`) |
+| English (`en-us`) | Welsh (`cy`)|
+| Estonian (`et`) | English (`en-us`)|
+| Fijian (`fj`) | English (`en-us`)|
+| Finnish (`fi`) | English (`en-us`)|
+| German (`de`) | English (`en-us`)|
+| Greek (`el`) | English (`en-us`)|
+| Hungarian (`hu`) | English (`en-us`)|
+| Icelandic (`is`) | English (`en-us`)|
+| Indonesian (`id`) | English (`en-us`)|
+| Inuktitut (`iu`) | English (`en-us`)|
+| Irish (`ga`) | English (`en-us`)|
+| Italian (`it`) | English (`en-us`)|
+| Japanese (`ja`) | English (`en-us`)|
+| Kazakh (`kk`) | English (`en-us`)|
+| Korean (`ko`) | English (`en-us`)|
+| Lithuanian (`lt`) | English (`en-us`)|
+| Malagasy (`mg`) | English (`en-us`)|
+| Maori (`mi`) | English (`en-us`)|
+| Norwegian (`nb`) | English (`en-us`)|
+| Persian (`fa`) | English (`en-us`)|
+| Polish (`pl`) | English (`en-us`)|
+| Romanian (`ro`) | English (`en-us`)|
+| Russian (`ru`) | English (`en-us`)|
+| Slovak (`sk`) | English (`en-us`)|
+| Spanish (`es`) | English (`en-us`)|
+| Swedish (`sv`) | English (`en-us`)|
+| Tahitian (`ty`) | English (`en-us`)|
+| Thai (`th`) | English (`en-us`)|
+| Tongan (`to`) | English (`en-us`)|
+| Turkish (`tr`) | English (`en-us`)|
+| Vietnamese (`vi`) | English (`en-us`)|
+| Welsh (`cy`) | English (`en-us`)|
ai-services Document Sdk Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/translator/document-translation/document-sdk-overview.md
+
+ Title: Document Translation SDKs
+
+description: Document Translation software development kits (SDKs) expose Document Translation features and capabilities, using the C#, Java, JavaScript, and Python programming languages.
+++++ Last updated : 07/18/2023+
+recommendations: false
++
+<!-- markdownlint-disable MD024 -->
+<!-- markdownlint-disable MD036 -->
+<!-- markdownlint-disable MD001 -->
+<!-- markdownlint-disable MD051 -->
+
+# Document Translation SDK
+
+Document Translation is a cloud-based REST API feature of the Azure AI Translator service. The Document Translation API enables quick and accurate source-to-target whole document translations, asynchronously, in supported languages and various file formats. The Document Translation software development kit (SDK) is a set of libraries and tools that enable you to easily integrate Document Translation REST API capabilities into your applications.
+
+## Supported languages
+
+Document Translation SDK supports the following programming languages:
+
+| Language → SDK version | Package|Client library| Supported API version|
+|:-:|:-|:-|:-|
+|[.NET/C# → 1.0.0](https://azuresdkdocs.blob.core.windows.net/$web/dotnet/Azure.AI.Translation.Document/1.0.0/index.html)| [NuGet](https://www.nuget.org/packages/Azure.AI.Translation.Document) | [Azure SDK for .NET](/dotnet/api/overview/azure/AI.Translation.Document-readme?view=azure-dotnet&preserve-view=true) | Document Translation v1.1|
+|[Python → 1.0.0](https://azuresdkdocs.blob.core.windows.net/$web/python/azure-ai-translation-document/1.0.0/index.html)|[PyPi](https://pypi.org/project/azure-ai-translation-document/1.0.0/)|[Azure SDK for Python](/python/api/overview/azure/ai-translation-document-readme?view=azure-python&preserve-view=true)|Document Translation v1.1|
+
+## Changelog and release history
+
+This section provides a version-based description of Document Translation feature and capability releases, changes, updates, and enhancements.
+
+### [C#/.NET](#tab/csharp)
+
+**Version 1.0.0 (GA)** </br>
+**2022-06-07**
+
+##### [**Changelog/Release History**](https://github.com/Azure/azure-sdk-for-net/blob/Azure.AI.Translation.Document_1.0.0/sdk/translation/Azure.AI.Translation.Document/CHANGELOG.md)
+
+##### [README](https://github.com/Azure/azure-sdk-for-net/blob/Azure.AI.Translation.Document_1.0.0/sdk/translation/Azure.AI.Translation.Document/README.md)
+
+##### [Samples](https://github.com/Azure/azure-sdk-for-net/tree/Azure.AI.Translation.Document_1.0.0/sdk/translation/Azure.AI.Translation.Document/samples)
+
+### [Python](#tab/python)
+
+**Version 1.0.0 (GA)** </br>
+**2022-06-07**
+
+##### [**Changelog/Release History**](https://github.com/Azure/azure-sdk-for-python/blob/azure-ai-translation-document_1.0.0/sdk/translation/azure-ai-translation-document/CHANGELOG.md)
+
+##### [README](https://github.com/Azure/azure-sdk-for-python/blob/azure-ai-translation-document_1.0.0/sdk/translation/azure-ai-translation-document/README.md)
+
+##### [Samples](https://github.com/Azure/azure-sdk-for-python/tree/azure-ai-translation-document_1.0.0/sdk/translation/azure-ai-translation-document/samples)
+++
+## Use Document Translation SDK in your applications
+
+The Document Translation SDK enables the use and management of the Translation service in your application. The SDK builds on the underlying Document Translation REST APIs for use within your programming language paradigm. Choose your preferred programming language:
+
+### 1. Install the SDK client library
+
+### [C#/.NET](#tab/csharp)
+
+```dotnetcli
+dotnet add package Azure.AI.Translation.Document --version 1.0.0
+```
+
+```powershell
+Install-Package Azure.AI.Translation.Document -Version 1.0.0
+```
+
+### [Python](#tab/python)
+
+```bash
+pip install azure-ai-translation-document==1.0.0
+```
+++
+### 2. Import the SDK client library into your application
+
+### [C#/.NET](#tab/csharp)
+
+```csharp
+using System;
+using Azure.Core;
+using Azure.AI.Translation.Document;
+```
+
+### [Python](#tab/python)
+
+```python
+from azure.ai.translation.document import DocumentTranslationClient
+from azure.core.credentials import AzureKeyCredential
+```
+++
+### 3. Authenticate the client
+
+### [C#/.NET](#tab/csharp)
+
+Create an instance of the `DocumentTranslationClient` object to interact with the Document Translation SDK, and then call methods on that client object to interact with the service. The `DocumentTranslationClient` is the primary interface for using the Document Translation client library. It provides both synchronous and asynchronous methods to perform operations.
+
+```csharp
+private static readonly string endpoint = "<your-custom-endpoint>";
+private static readonly string key = "<your-key>";
+
+DocumentTranslationClient client = new DocumentTranslationClient(new Uri(endpoint), new AzureKeyCredential(key));
+
+```
+
+### [Python](#tab/python)
+
+Create an instance of the `DocumentTranslationClient` object to interact with the Document Translation SDK, and then call methods on that client object to interact with the service. The `DocumentTranslationClient` is the primary interface for using the Document Translation client library. It provides both synchronous and asynchronous methods to perform operations.
+
+```python
+endpoint = "<endpoint>"
+key = "<apiKey>"
+
+client = DocumentTranslationClient(endpoint, AzureKeyCredential(key))
+
+```
+++
+### 4. Build your application
+
+### [C#/.NET](#tab/csharp)
+
+The Document Translation interface requires the following input:
+
+1. Upload your files to an Azure Blob Storage source container (sourceUri).
+1. Provide a target container where the translated documents can be written (targetUri).
+1. Include the target language code (targetLanguage).
+
+```csharp
+
+Uri sourceUri = new Uri("<your-source-container-url>");
+Uri targetUri = new Uri("<your-target-container-url>");
+string targetLanguage = "<target-language-code>";
+
+DocumentTranslationInput input = new DocumentTranslationInput(sourceUri, targetUri, targetLanguage);
+```
+
+### [Python](#tab/python)
+
+The Document Translation interface requires the following input:
+
+1. Upload your files to an Azure Blob Storage source container (sourceUri).
+1. Provide a target container where the translated documents can be written (targetUri).
+1. Include the target language code (targetLanguage).
+
+```python
+sourceUrl = "<your-source-container-url>"
+targetUrl = "<your-target-container-url>"
+targetLanguage = "<target-language-code>"
+
+poller = client.begin_translation(sourceUrl, targetUrl, targetLanguage)
+result = poller.result()
+
+```
+++
+## Help options
+
+The [Microsoft Q&A](/answers/tags/132/azure-translator) and [Stack Overflow](https://stackoverflow.com/questions/tagged/microsoft-translator) forums are available for the developer community to ask and answer questions about Azure Text Translation and other services. Microsoft monitors the forums and replies to questions that the community has yet to answer.
+
+> [!TIP]
+> To make sure that we see your Microsoft Q&A question, tag it with **`microsoft-translator`**.
+> To make sure that we see your Stack Overflow question, tag it with **`Azure AI Translator`**.
+>
+
+## Next steps
+
+>[!div class="nextstepaction"]
+> [**Document Translation SDK quickstart**](quickstarts/document-translation-sdk.md) [**Document Translation v1.1 REST API reference**](reference/rest-api-guide.md)
ai-services Faq https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/translator/document-translation/faq.md
+
+ Title: Frequently asked questions - Document Translation
+
+description: Get answers to frequently asked questions about Document Translation.
++++++++ Last updated : 07/18/2023+++
+<!-- markdownlint-disable MD001 -->
+
+# Answers to frequently asked questions
+
+## Document Translation: FAQ
+
+#### Should I specify the source language in a request?
+
+If the language of the content in the source document is known, we recommend specifying the source language in the request to get a better translation. If the document has content in multiple languages or the language is unknown, don't specify the source language in the request. Document Translation automatically identifies the language of each text segment and translates it.
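+
+For illustration, here's a minimal sketch of both cases expressed as Python dictionaries for the batch request's `source` object. The optional `language` field is an assumption based on the Document Translation batch request schema; the URL is a placeholder.
+
+```python
+# Source language known: include the optional "language" field (assumed schema).
+source_known = {
+    "sourceUrl": "https://<storage_account_name>.blob.core.windows.net/<source_container_name>",
+    "language": "es",
+}
+
+# Source language unknown or mixed: omit "language" so the service detects the
+# language of each text segment before translating.
+source_auto_detect = {
+    "sourceUrl": "https://<storage_account_name>.blob.core.windows.net/<source_container_name>",
+}
+```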
+
+#### To what extent are the layout, structure, and formatting maintained?
+
+When text is translated from the source to the target language, the overall length of the translated text may differ from the source, which can cause text to reflow across pages. The same fonts may not be available in both the source and target languages. In general, the same font style is applied in the target language to keep the formatting as close to the source as possible.
+
+#### Will the text in an image within a document get translated?
+
+No. The text in an image within a document won't get translated.
+
+#### Can Document Translation translate content from scanned documents?
+
+Yes. Document Translation translates content from _scanned PDF_ documents.
+
+#### Will my document be translated if it's password protected?
+
+No. If your scanned or text-embedded PDFs are password-locked, you must remove the lock before submission.
+
+#### If I'm using managed identities, do I also need a SAS token URL?
+
+No. Don't include SAS token URLs; your requests will fail. Managed identities eliminate the need to include shared access signature (SAS) tokens with your HTTP requests.
ai-services Create Sas Tokens https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/translator/document-translation/how-to-guides/create-sas-tokens.md
+
+ Title: Create shared access signature (SAS) tokens for storage containers and blobs
+description: How to create Shared Access Signature tokens (SAS) for containers and blobs with Microsoft Storage Explorer and the Azure portal.
++++++ Last updated : 07/18/2023++
+# Create SAS tokens for your storage containers
+
+In this article, you learn how to create user delegation shared access signature (SAS) tokens by using the Azure portal or Azure Storage Explorer. User delegation SAS tokens are secured with Azure AD credentials. SAS tokens provide secure, delegated access to resources in your Azure storage account.
++
+>[!TIP]
+>
+> [Managed identities](create-use-managed-identities.md) provide an alternate method for you to grant access to your storage data without the need to include SAS tokens with your HTTP requests. *See*, [Managed identities for Document Translation](create-use-managed-identities.md).
+>
+> * You can use managed identities to grant access to any resource that supports Azure AD authentication, including your own applications.
+> * Using managed identities replaces the requirement for you to include shared access signature tokens (SAS) with your source and target URLs.
+> * There's no added cost to use managed identities in Azure.
+
+At a high level, here's how SAS tokens work (a minimal request sketch follows this list):
+
+* Your application submits the SAS token to Azure Storage as part of a REST API request.
+
+* If the storage service verifies that the SAS is valid, the request is authorized.
+
+* If the SAS token is deemed invalid, the request is declined, and the error code 403 (Forbidden) is returned.
+
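+As a minimal sketch of that flow, the following request reads a blob directly through its SAS URL. A valid token returns the blob contents; an invalid or expired token returns `403 Forbidden`. The URL is a placeholder.
+
+```python
+import requests
+
+# Placeholder SAS URL: "<blob URL>?<SAS token query string>"
+sas_url = "https://<storage_account_name>.blob.core.windows.net/<container>/<blob>?<sas-token>"
+
+response = requests.get(sas_url)
+if response.status_code == 200:
+    print("SAS token accepted; blob retrieved.")
+elif response.status_code == 403:
+    print("SAS token rejected (403 Forbidden).")
+```
+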
+Azure Blob Storage offers three resource types:
+
+* **Storage** accounts provide a unique namespace in Azure for your data.
+* **Data storage containers** are located in storage accounts and organize sets of blobs (files, text, or images).
+* **Blobs** are located in containers and store text and binary data such as files, text, and images.
+
+> [!IMPORTANT]
+>
+> * SAS tokens are used to grant permissions to storage resources, and should be protected in the same manner as an account key.
+>
+> * Operations that use SAS tokens should be performed only over an HTTPS connection, and SAS URIs should only be distributed on a secure connection such as HTTPS.
+
+## Prerequisites
+
+To get started, you need the following resources:
+
+* An active [Azure account](https://azure.microsoft.com/free/cognitive-services/). If you don't have one, you can [create a free account](https://azure.microsoft.com/free/).
+
+* A [Translator](https://portal.azure.com/#create/Microsoft.CognitiveServicesTextTranslation) resource.
+
+* A **standard performance** [Azure Blob Storage account](https://portal.azure.com/#create/Microsoft.StorageAccount-ARM). You also need to create containers to store and organize your files within your storage account. If you don't know how to create an Azure storage account with a storage container, follow these quickstarts:
+
+ * [Create a storage account](../../../../storage/common/storage-account-create.md). When you create your storage account, select **Standard** performance in the **Instance details** > **Performance** field.
+ * [Create a container](../../../../storage/blobs/storage-quickstart-blobs-portal.md#create-a-container). When you create your container, set **Public access level** to **Container** (anonymous read access for containers and files) in the **New Container** window.
+
+## Create SAS tokens in the Azure portal
+
+<!-- markdownlint-disable MD024 -->
+
+Go to the [Azure portal](https://portal.azure.com/#home) and navigate to your container or a specific file as follows and continue with these steps:
+
+| Create SAS token for a container| Create SAS token for a specific file|
+|:--:|:--:|
+**Your storage account** → **containers** → **your container** |**Your storage account** → **containers** → **your container**→ **your file** |
+
+1. Right-click the container or file and select **Generate SAS** from the drop-down menu.
+
+1. Select **Signing method** → **User delegation key**.
+
+1. Define **Permissions** by checking and/or clearing the appropriate check box:
+
+ * Your **source** container or file must have designated **read** and **list** access.
+
+ * Your **target** container or file must have designated **write** and **list** access.
+
+1. Specify the signed key **Start** and **Expiry** times.
+
+ * When you create a shared access signature (SAS), the default duration is 48 hours. After 48 hours, you'll need to create a new token.
+ * Consider setting a longer duration period for the time you're using your storage account for Translator Service operations.
+ * The value of the expiry time is determined by whether you're using an **Account key** or **User delegation key** **Signing method**:
+      * **Account key**: There's no imposed maximum time limit; however, best practice is to configure an expiration policy to limit the interval and minimize compromise. [Configure an expiration policy for shared access signatures](/azure/storage/common/sas-expiration-policy).
+      * **User delegation key**: The value for the expiry time is a maximum of seven days from the creation of the SAS token. The SAS is invalid after the user delegation key expires, so a SAS with an expiry time greater than seven days is still only valid for seven days. For more information, *see* [Use Azure AD credentials to secure a SAS](/azure/storage/blobs/storage-blob-user-delegation-sas-create-cli#use-azure-ad-credentials-to-secure-a-sas).
+
+1. The **Allowed IP addresses** field is optional and specifies an IP address or a range of IP addresses from which to accept requests. If the request IP address doesn't match the IP address or address range specified on the SAS token, authorization fails. The IP address or range of IP addresses must be public IPs, not private. For more information, *see* [**Specify an IP address or IP range**](/rest/api/storageservices/create-account-sas#specify-an-ip-address-or-ip-range).
+
+1. The **Allowed protocols** field is optional and specifies the protocol permitted for a request made with the SAS. The default value is HTTPS.
+
+1. Review then select **Generate SAS token and URL**.
+
+1. The **Blob SAS token** query string and **Blob SAS URL** are displayed in the lower area of the window.
+
+1. **Copy and paste the Blob SAS token and URL values in a secure location. They'll only be displayed once and cannot be retrieved once the window is closed.**
+
+1. To [construct a SAS URL](#use-your-sas-url-to-grant-access), append the SAS token (URI) to the URL for a storage service. A minimal sketch follows these steps.
+
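+Here's a minimal sketch of that concatenation, assuming you copied the **Blob SAS token** query string in the previous step; both values are placeholders.
+
+```python
+# Placeholder values copied from the portal in the previous steps.
+resource_url = "https://<storage_account_name>.blob.core.windows.net/<container_name>"
+sas_token = "<blob-sas-token-query-string>"
+
+# A SAS URL is the resource URL with the SAS token appended as its query string.
+sas_url = f"{resource_url}?{sas_token}"
+```
+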
+## Create SAS tokens with Azure Storage Explorer
+
+Azure Storage Explorer is a free standalone app that enables you to easily manage your Azure cloud storage resources from your desktop.
+
+* You need the [**Azure Storage Explorer**](../../../../vs-azure-tools-storage-manage-with-storage-explorer.md) app installed in your Windows, macOS, or Linux development environment.
+
+* After the Azure Storage Explorer app is installed, [connect it to the storage account](../../../../vs-azure-tools-storage-manage-with-storage-explorer.md?tabs=windows#connect-to-a-storage-account-or-service) you're using for Document Translation. Follow these steps to create tokens for a storage container or specific blob file:
+
+### [SAS tokens for storage containers](#tab/Containers)
+
+1. Open the Azure Storage Explorer app on your local machine and navigate to your connected **Storage Accounts**.
+1. Expand the Storage Accounts node and select **Blob Containers**.
+1. Expand the Blob Containers node and right-click a storage **container** node to display the options menu.
+1. Select **Get Shared Access Signature...** from options menu.
+1. In the **Shared Access Signature** window, make the following selections:
+ * Select your **Access policy** (the default is none).
+ * Specify the signed key **Start** and **Expiry** date and time. A short lifespan is recommended because, once generated, a SAS can't be revoked.
+ * Select the **Time zone** for the Start and Expiry date and time (default is Local).
+ * Define your container **Permissions** by checking and/or clearing the appropriate check box.
+ * Review and select **Create**.
+
+1. A new window appears with the **Container** name, **URI**, and **Query string** for your container.
+1. **Copy and paste the container, URI, and query string values in a secure location. They'll only be displayed once and can't be retrieved once the window is closed.**
+1. To [construct a SAS URL](#use-your-sas-url-to-grant-access), append the SAS token (URI) to the URL for a storage service.
+
+### [SAS tokens for specific blob file](#tab/blobs)
+
+1. Open the Azure Storage Explorer app on your local machine and navigate to your connected **Storage Accounts**.
+1. Expand your storage node and select **Blob Containers**.
+1. Expand the Blob Containers node and select a **container** node to display the contents in the main window.
+1. Select the file where you wish to delegate SAS access and right-click to display the options menu.
+1. Select **Get Shared Access Signature...** from options menu.
+1. In the **Shared Access Signature** window, make the following selections:
+ * Select your **Access policy** (the default is none).
+ * Specify the signed key **Start** and **Expiry** date and time. A short lifespan is recommended because, once generated, a SAS can't be revoked.
+ * Select the **Time zone** for the Start and Expiry date and time (default is Local).
+ * Define your container **Permissions** by checking and/or clearing the appropriate check box.
+ * Your **source** container or file must have designated **read** and **list** access.
+ * Your **target** container or file must have designated **write** and **list** access.
+ * Select **key1** or **key2**.
+ * Review and select **Create**.
+
+1. A new window appears with the **Blob** name, **URI**, and **Query string** for your blob.
+1. **Copy and paste the blob, URI, and query string values in a secure location. They will only be displayed once and cannot be retrieved once the window is closed.**
+1. To [construct a SAS URL](#use-your-sas-url-to-grant-access), append the SAS token (URI) to the URL for a storage service.
+++
+### Use your SAS URL to grant access
+
+The SAS URL includes a special set of [query parameters](/rest/api/storageservices/create-user-delegation-sas#assign-permissions-with-rbac). Those parameters indicate how the client accesses the resources.
+
+You can include your SAS URL with REST API requests in two ways:
+
+* Use the **SAS URL** as your sourceURL and targetURL values.
+
+* Append the **SAS query string** to your existing sourceURL and targetURL values.
+
+Here's a sample REST API request:
+
+```json
+{
+ "inputs": [
+ {
+ "storageType": "File",
+ "source": {
+ "sourceUrl": "https://my.blob.core.windows.net/source-en/source-english.docx?sv=2019-12-12&st=2021-01-26T18%3A30%3A20Z&se=2021-02-05T18%3A30%3A00Z&sr=c&sp=rl&sig=d7PZKyQsIeE6xb%2B1M4Yb56I%2FEEKoNIF65D%2Fs0IFsYcE%3D"
+ },
+ "targets": [
+ {
+ "targetUrl": "https://my.blob.core.windows.net/target/try/Target-Spanish.docx?sv=2019-12-12&st=2021-01-26T18%3A31%3A11Z&se=2021-02-05T18%3A31%3A00Z&sr=c&sp=wl&sig=AgddSzXLXwHKpGHr7wALt2DGQJHCzNFF%2F3L94JHAWZM%3D",
+ "language": "es"
+ },
+ {
+ "targetUrl": "https://my.blob.core.windows.net/target/try/Target-German.docx?sv=2019-12-12&st=2021-01-26T18%3A31%3A11Z&se=2021-02-05T18%3A31%3A00Z&sr=c&sp=wl&sig=AgddSzXLXwHKpGHr7wALt2DGQJHCzNFF%2F3L94JHAWZM%3D",
+ "language": "de"
+ }
+ ]
+ }
+ ]
+}
+```
+
+That's it! You've learned how to create SAS tokens to authorize how clients access your data.
+
+## Next steps
+
+> [!div class="nextstepaction"]
+> [Get Started with Document Translation](../quickstarts/document-translation-rest-api.md)
+>
ai-services Create Use Glossaries https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/translator/document-translation/how-to-guides/create-use-glossaries.md
+
+ Title: Create and use a glossary with Document Translation
+description: How to create and use a glossary with Document Translation.
++++++ Last updated : 07/18/2023++
+# Use glossaries with Document Translation
+
+A glossary is a list of terms with definitions that you create for the Document Translation service to use during the translation process. Currently, the glossary feature supports one-to-one source-to-target language translation. Common use cases for glossaries include:
+
+* **Context-specific terminology**. Create a glossary that designates specific meanings for your unique context.
+
+* **No translation**. For example, you can prevent Document Translation from translating brand and product names by using a glossary with the same source and target text.
+
+* **Specified translations for ambiguous words**. Choose a specific translation for poly&#8203;semantic words.
+
+## Create, upload, and use a glossary file
+
+1. **Create your glossary file.** Create a file in a supported format (preferably tab-separated values) that contains all the terms and phrases you want to use in your translation.
+
+ To check if your file format is supported, *see* [Get supported glossary formats](../reference/get-supported-glossary-formats.md).
+
+ The following English-source glossary contains words that can have different meanings depending upon the context in which they're used. The glossary provides the expected translation for each word in the file to help ensure accuracy.
+
+   For instance, when the word `Bank` appears in a financial document, it should be translated to reflect its financial meaning. If the word `Bank` appears in a geographical document, it may instead refer to a riverbank, reflecting its topographical meaning. Similarly, the word `Crane` can refer to either a bird or a machine.
+
+ ***Example glossary .tsv file: English-to-French***
+
+ ```tsv
+ Bank Banque
+ Card Carte
+ Crane Grue
+ Office Office
+ Tiger Tiger
+ US United States
+ ```
+
+1. **Upload your glossary to Azure storage**. To complete this step, you need an [Azure Blob Storage account](https://ms.portal.azure.com/#create/Microsoft.StorageAccount) with [containers](/azure/storage/blobs/storage-quickstart-blobs-portal?branch=main#create-a-container) to store and organize your blob data within your storage account. A minimal upload sketch follows these steps.
+
+1. **Specify your glossary in the translation request.** Include the **`glossary URL`**, **`format`**, and **`version`** in your **`POST`** request:
+
+ :::code language="json" source="../../../../../cognitive-services-rest-samples/curl/Translator/translate-with-glossary.json" range="1-23" highlight="13-14":::
+
+ > [!NOTE]
+ > The example used an enabled [**system-assigned managed identity**](create-use-managed-identities.md#enable-a-system-assigned-managed-identity) with a [**Storage Blob Data Contributor**](create-use-managed-identities.md#grant-storage-account-access-for-your-translator-resource) role assignment for authorization. For more information, *see* [**Managed identities for Document Translation**](./create-use-managed-identities.md).
+
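+Here's a minimal upload sketch using the `azure-storage-blob` library. The connection string, container name, and file name are placeholders, and the container is assumed to be the one referenced by your `glossaryUrl`.
+
+```python
+from azure.storage.blob import BlobClient
+
+# Placeholder values; assumes the container already exists in your storage account.
+blob = BlobClient.from_connection_string(
+    conn_str="<your-storage-connection-string>",
+    container_name="<glossary-container-name>",
+    blob_name="glossary.tsv",
+)
+
+# Upload (or replace) the glossary file as a blob.
+with open("glossary.tsv", "rb") as data:
+    blob.upload_blob(data, overwrite=True)
+```
+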
+### Case sensitivity
+
+By default, the Azure AI Translator service API is **case-sensitive**, meaning that it matches glossary terms in the source text based on case.
+
+* **Partial sentence application**. When your glossary is applied to **part of a sentence**, the Document Translation API checks whether the glossary term matches the case in the source text. If the casing doesn't match, the glossary isn't applied.
+
+* **Complete sentence application**. When your glossary is applied to a **complete sentence**, the service becomes **case-insensitive**. It matches the glossary term regardless of its case in the source text. This behavior produces correct results for use cases involving idioms and quotes.
+
+## Next steps
+
+Try the Document Translation how-to guide to asynchronously translate whole documents using a programming language of your choice:
+
+> [!div class="nextstepaction"]
+> [Use Document Translation REST APIs](use-rest-api-programmatically.md)
ai-services Create Use Managed Identities https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/translator/document-translation/how-to-guides/create-use-managed-identities.md
+
+ Title: Create and use managed identities for Document Translation
+
+description: Understand how to create and use managed identities in the Azure portal
++++++ Last updated : 07/18/2023+++
+# Managed identities for Document Translation
+
+Managed identities for Azure resources are service principals that create an Azure Active Directory (Azure AD) identity and specific permissions for Azure managed resources. Managed identities are a safer way to grant access to storage data and replace the requirement for you to include shared access signature tokens (SAS) with your [source and target URLs](#post-request-body).
+
+ :::image type="content" source="../media/managed-identity-rbac-flow.png" alt-text="Screenshot of managed identity flow (RBAC).":::
+
+* You can use managed identities to grant access to any resource that supports Azure AD authentication, including your own applications.
+
+* To grant access to an Azure resource, assign an Azure role to a managed identity using [Azure role-based access control (`Azure RBAC`)](../../../../role-based-access-control/overview.md).
+
+* There's no added cost to use managed identities in Azure.
+
+> [!IMPORTANT]
+>
+> * When using managed identities, don't include a SAS token URL with your HTTP requests; your requests will fail. Using managed identities replaces the requirement to include shared access signature (SAS) tokens with your [source and target URLs](#post-request-body).
+>
+> * To use managed identities for Document Translation operations, you must [create your Translator resource](https://portal.azure.com/#create/Microsoft.CognitiveServicesTextTranslation) in a specific geographic Azure region such as **East US**. If your Translator resource region is set to **Global**, then you can't use managed identity for Document Translation. You can still use [Shared Access Signature tokens (SAS)](create-sas-tokens.md) for Document Translation.
+>
+> * Document Translation is **only** available in the S1 Standard Service Plan (Pay-as-you-go) or in the D3 Volume Discount Plan. _See_ [Azure AI services pricing: Translator](https://azure.microsoft.com/pricing/details/cognitive-services/translator/).
+>
+
+## Prerequisites
+
+To get started, you need:
+
+* An active [**Azure account**](https://azure.microsoft.com/free/cognitive-services/). If you don't have one, you can [**create a free account**](https://azure.microsoft.com/free/).
+
+* A [**single-service Translator**](https://portal.azure.com/#create/Microsoft.CognitiveServicesTextTranslation) (not a multi-service Azure AI services) resource assigned to a **geographical** region such as **West US**. For detailed steps, _see_ [Create a multi-service resource](../../../multi-service-resource.md).
+
+* A brief understanding of [**Azure role-based access control (`Azure RBAC`)**](../../../../role-based-access-control/role-assignments-portal.md) using the Azure portal.
+
+* An [**Azure Blob Storage account**](https://portal.azure.com/#create/Microsoft.StorageAccount-ARM) in the same region as your Translator resource. You also need to create containers to store and organize your blob data within your storage account.
+
+* **If your storage account is behind a firewall, you must enable the following configuration**:
+ 1. Go to the [Azure portal](https://portal.azure.com/) and sign in to your Azure account.
+ 1. Select the Storage account.
+ 1. In the **Security + networking** group in the left pane, select **Networking**.
+ 1. In the **Firewalls and virtual networks** tab, select **Enabled from selected virtual networks and IP addresses**.
+
+ :::image type="content" source="../../media/managed-identities/firewalls-and-virtual-networks.png" alt-text="Screenshot: Selected networks radio button selected.":::
+
+ 1. Deselect all check boxes.
+ 1. Make sure **Microsoft network routing** is selected.
+ 1. Under the **Resource instances** section, select **Microsoft.CognitiveServices/accounts** as the resource type and select your Translator resource as the instance name.
+ 1. Make certain that the **Allow Azure services on the trusted services list to access this storage account** box is checked. For more information about managing exceptions, _see_ [Configure Azure Storage firewalls and virtual networks](../../../../storage/common/storage-network-security.md?tabs=azure-portal#manage-exceptions).
+
+ :::image type="content" source="../../media/managed-identities/allow-trusted-services-checkbox-portal-view.png" alt-text="Screenshot: allow trusted services checkbox, portal view.":::
+
+ 1. Select **Save**.
+
+ > [!NOTE]
+ > It may take up to 5 min for the network changes to propagate.
+
+   Although network access is now permitted, your Translator resource still can't access the data in your storage account. You need to [create a managed identity](#managed-identity-assignments) for your Translator resource and [assign it a specific access role](#grant-storage-account-access-for-your-translator-resource).
+
+## Managed identity assignments
+
+There are two types of managed identities: **system-assigned** and **user-assigned**. Currently, Document Translation supports **system-assigned managed identity**:
+
+* A system-assigned managed identity is **enabled** directly on a service instance. It isn't enabled by default; you must go to your resource and update the identity setting.
+
+* The system-assigned managed identity is tied to your resource throughout its lifecycle. If you delete your resource, the managed identity is deleted as well.
+
+In the following steps, we enable a system-assigned managed identity and grant your Translator resource limited access to your Azure Blob Storage account.
+
+## Enable a system-assigned managed identity
+
+You must grant the Translator resource access to your storage account before it can create, read, or delete blobs. Once you've enabled a system-assigned managed identity for your Translator resource, you can use Azure role-based access control (`Azure RBAC`) to give Translator access to your Azure storage containers.
+
+1. Go to the [Azure portal](https://portal.azure.com/) and sign in to your Azure account.
+1. Select the Translator resource.
+1. In the **Resource Management** group in the left pane, select **Identity**.
+1. Within the **System assigned** tab, turn on the **Status** toggle.
+
+ :::image type="content" source="../../media/managed-identities/resource-management-identity-tab.png" alt-text="Screenshot: resource management identity tab in the Azure portal.":::
+
+ > [!IMPORTANT]
+    > A user-assigned managed identity doesn't meet the requirements for this storage account scenario. Be sure to enable a system-assigned managed identity.
+
+1. Select **Save**.
+
+## Grant storage account access for your Translator resource
+
+> [!IMPORTANT]
+> To assign a system-assigned managed identity role, you need **Microsoft.Authorization/roleAssignments/write** permissions, such as [**Owner**](../../../../role-based-access-control/built-in-roles.md#owner) or [**User Access Administrator**](../../../../role-based-access-control/built-in-roles.md#user-access-administrator) at the storage scope for the storage resource.
+
+1. Go to the [Azure portal](https://portal.azure.com/) and sign in to your Azure account.
+1. Select the Translator resource.
+1. In the **Resource Management** group in the left pane, select **Identity**.
+1. Under **Permissions** select **Azure role assignments**:
+
+ :::image type="content" source="../../media/managed-identities/enable-system-assigned-managed-identity-portal.png" alt-text="Screenshot: enable system-assigned managed identity in Azure portal.":::
+
+1. On the Azure role assignments page that opens, choose your subscription from the drop-down menu, and then select **&plus; Add role assignment**.
+
+ :::image type="content" source="../../media/managed-identities/azure-role-assignments-page-portal.png" alt-text="Screenshot: Azure role assignments page in the Azure portal.":::
+
+1. Next, assign a **Storage Blob Data Contributor** role to your Translator service resource. The **Storage Blob Data Contributor** role gives Translator (represented by the system-assigned managed identity) read, write, and delete access to the blob container and data. In the **Add role assignment** pop-up window, complete the fields as follows and select **Save**:
+
+ | Field | Value|
+ ||--|
+ |**Scope**| **_Storage_**.|
+ |**Subscription**| **_The subscription associated with your storage resource_**.|
+ |**Resource**| **_The name of your storage resource_**.|
+ |**Role** | **_Storage Blob Data Contributor_**.|
+
+ :::image type="content" source="../../media/managed-identities/add-role-assignment-window.png" alt-text="Screenshot: add role assignments page in the Azure portal.":::
+
+1. After the _Added Role assignment_ confirmation message appears, refresh the page to see the added role assignment.
+
+ :::image type="content" source="../../media/managed-identities/add-role-assignment-confirmation.png" alt-text="Screenshot: Added role assignment confirmation pop-up message.":::
+
+1. If you don't see the new role assignment right away, wait and try refreshing the page again. When you assign or remove role assignments, it can take up to 30 minutes for changes to take effect.
+
+ :::image type="content" source="../../media/managed-identities/assigned-roles-window.png" alt-text="Screenshot: Azure role assignments window.":::
+
+## HTTP requests
+
+* A batch Document Translation request is submitted to your Translator service endpoint via a POST request.
+
+* With managed identity and `Azure RBAC`, you no longer need to include SAS URLs.
+
+* If successful, the POST method returns a `202 Accepted` response code and the service creates a batch request.
+
+* The translated documents appear in your target container.
+
+### Headers
+
+The following headers are included with each Document Translation API request:
+
+|HTTP header|Description|
+||--|
+|Ocp-Apim-Subscription-Key|**Required**: The value is the Azure key for your Translator or Azure AI services resource.|
+|Content-Type|**Required**: Specifies the content type of the payload. Accepted values are `application/json` or `application/json; charset=UTF-8`.|
+
+### POST request body
+
+* The request URL is POST `https://<NAME-OF-YOUR-RESOURCE>.cognitiveservices.azure.com/translator/text/batch/v1.1/batches`.
+* The request body is a JSON object named `inputs`.
+* The `inputs` object contains both `sourceURL` and `targetURL` container addresses for your source and target language pairs. With a system-assigned managed identity, you use a plain storage account URL (no SAS token or other additions). The format is `https://<storage_account_name>.blob.core.windows.net/<container_name>`.
+* The optional `prefix` and `suffix` fields filter documents in the container, including folders.
+* A value for the `glossaries` field (optional) is applied when the document is being translated.
+* The `targetUrl` for each target language must be unique.
+
+> [!IMPORTANT]
+> If a file with the same name already exists in the destination, the job will fail. When using managed identities, don't include a SAS token URL with your HTTP requests. If you do so, your requests will fail.
+
+<!-- markdownlint-disable MD024 -->
+### Translate all documents in a container
+
+This sample request body references a source container for all documents to be translated to a target language.
+
+For more information, _see_ [request parameters](#post-request-body).
+
+```json
+{
+ "inputs": [
+ {
+ "source": {
+ "sourceUrl": "https://<storage_account_name>.blob.core.windows.net/<source_container_name>"
+ },
+ "targets": [
+ {
+                    "targetUrl": "https://<storage_account_name>.blob.core.windows.net/<target_container_name>",
+ "language": "fr"
+ }
+ ]
+ }
+ ]
+}
+```
+
+### Translate a specific document in a container
+
+This sample request body references a single source document to be translated into two target languages.
+
+> [!IMPORTANT]
+> In addition to the request parameters [noted previously](#post-request-body), you must include `"storageType": "File"`. Otherwise the source URL is assumed to be at the container level.
+
+```json
+{
+ "inputs": [
+ {
+ "storageType": "File",
+ "source": {
+ "sourceUrl": "https://<storage_account_name>.blob.core.windows.net/<source_container_name>/source-english.docx"
+ },
+ "targets": [
+ {
+                    "targetUrl": "https://<storage_account_name>.blob.core.windows.net/<target_container_name>/Target-Spanish.docx",
+ "language": "es"
+ },
+ {
+ "targetUrl": "https://<storage_account_name>.blob.core.windows.net/<target_container_name>/Target-German.docx",
+ "language": "de"
+ }
+ ]
+ }
+ ]
+}
+```
+
+### Translate all documents in a container using a custom glossary
+
+This sample request body references a source container for all documents to be translated to a target language using a glossary.
+
+For more information, _see_ [request parameters](#post-request-body).
+
+```json
+{
+ "inputs": [
+ {
+ "source": {
+ "sourceUrl": "https://<storage_account_name>.blob.core.windows.net/<source_container_name>",
+ "filter": {
+ "prefix": "myfolder/"
+ }
+ },
+ "targets": [
+ {
+ "targetUrl": "https://<storage_account_name>.blob.core.windows.net/<target_container_name>",
+ "language": "es",
+ "glossaries": [
+ {
+ "glossaryUrl": "https://<storage_account_name>.blob.core.windows.net/<glossary_container_name>/en-es.xlf",
+ "format": "xliff"
+ }
+ ]
+ }
+ ]
+ }
+ ]
+}
+```
+
+Great! You've learned how to enable and use a system-assigned managed identity. With managed identity for Azure Resources and `Azure RBAC`, you granted Translator specific access rights to your storage resource without including SAS tokens with your HTTP requests.
+
+## Next steps
+
+> [!div class="nextstepaction"]
+> [Quickstart: Get started with Document Translation](../quickstarts/document-translation-rest-api.md)
+
+> [!div class="nextstepaction"]
+> [Tutorial: Access Azure Storage from a web app using managed identities](../../../../app-service/scenario-secure-app-access-storage.md?bc=%2fazure%2fcognitive-services%2ftranslator%2fbreadcrumb%2ftoc.json&toc=%2fazure%2fcognitive-services%2ftranslator%2ftoc.json)
ai-services Use Rest Api Programmatically https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/translator/document-translation/how-to-guides/use-rest-api-programmatically.md
+
+ Title: Use Document Translation APIs programmatically
+description: "How to create a Document Translation service using C#, Go, Java, Node.js, or Python and the REST API"
++++++ Last updated : 07/18/2023+
+recommendations: false
+ms.devlang: csharp, golang, java, javascript, python
+++
+# Use REST APIs programmatically
+
+ Document Translation is a cloud-based feature of the [Azure AI Translator](../../translator-overview.md) service. You can use the Document Translation API to asynchronously translate whole documents in [supported languages](../../language-support.md) and various [file formats](../overview.md#supported-document-formats) while preserving source document structure and text formatting. In this how-to guide, you learn to use Document Translation APIs with a programming language of your choice and the HTTP REST API.
+
+## Prerequisites
+
+> [!NOTE]
+>
+> * Typically, when you create an Azure AI resource in the Azure portal, you have the option to create a multi-service key or a single-service key. However, Document Translation is currently supported in the Translator (single-service) resource only, and is **not** included in the Azure AI services (multi-service) resource.
+>
+> * Document Translation is **only** supported in the S1 Standard Service Plan (Pay-as-you-go) or in the D3 Volume Discount Plan. _See_ [Azure AI services pricing: Translator](https://azure.microsoft.com/pricing/details/cognitive-services/translator/).
+>
+
+To get started, you need:
+
+* An active [**Azure account**](https://azure.microsoft.com/free/cognitive-services/). If you don't have one, you can [**create a free account**](https://azure.microsoft.com/free/).
+
+* An [**Azure Blob Storage account**](https://portal.azure.com/#create/Microsoft.StorageAccount-ARM). You also need to [create containers](#create-azure-blob-storage-containers) in your Azure Blob Storage account for your source and target files:
+
+ * **Source container**. This container is where you upload your files for translation (required).
+ * **Target container**. This container is where your translated files are stored (required).
+
+* A [**single-service Translator resource**](https://portal.azure.com/#create/Microsoft.CognitiveServicesTextTranslation) (**not** a multi-service Azure AI services resource):
+
+ **Complete the Translator project and instance details fields as follows:**
+
+ 1. **Subscription**. Select one of your available Azure subscriptions.
+
+ 1. **Resource Group**. You can create a new resource group or add your resource to a pre-existing resource group that shares the same lifecycle, permissions, and policies.
+
+ 1. **Resource Region**. Choose **Global** unless your business or application requires a specific region. If you're planning on using a [system-assigned managed identity](create-use-managed-identities.md) for authentication, choose a **geographic** region like **West US**.
+
+ 1. **Name**. Enter the name you have chosen for your resource. The name you choose must be unique within Azure.
+
+ > [!NOTE]
+ > Document Translation requires a custom domain endpoint. The value that you enter in the Name field will be the custom domain name parameter for your endpoint.
+
+ 1. **Pricing tier**. Document Translation isn't supported in the free tier. Select Standard S1 to try the service.
+
+ 1. Select **Review + Create**.
+
+ 1. Review the service terms and select **Create** to deploy your resource.
+
+ 1. After your resource has successfully deployed, select **Go to resource**.
+
+### Retrieve your key and custom domain endpoint
+
+Requests to the Translator service require a read-only key and a custom endpoint to authenticate access. The custom domain endpoint is a URL formatted with your resource name, hostname, and Translator subdirectories, and is available in the Azure portal.
+
+1. If you've created a new resource, after it deploys, select **Go to resource**. If you have an existing Document Translation resource, navigate directly to your resource page.
+
+1. In the left rail, under *Resource Management*, select **Keys and Endpoint**.
+
+1. Copy and paste your **`key`** and **`document translation endpoint`** in a convenient location, such as *Microsoft Notepad*. Only one key is necessary to make an API call.
+
+1. Paste your **`key`** and **`document translation endpoint`** values into the code samples to authenticate your request to the Document Translation service.
+
+ :::image type="content" source="../media/document-translation-key-endpoint.png" alt-text="Screenshot showing the get your key field in Azure portal.":::
+
+### Get your key
+
+Requests to the Translator service require a read-only key for authenticating access.
+
+1. If you've created a new resource, after it deploys, select **Go to resource**. If you have an existing Document Translation resource, navigate directly to your resource page.
+1. In the left rail, under *Resource Management*, select **Keys and Endpoint**.
+1. Copy and paste your key in a convenient location, such as *Microsoft Notepad*.
+1. You paste it into the code sample to authenticate your request to the Document Translation service.
++
+## Create Azure Blob Storage containers
+
+You need to [**create containers**](../../../../storage/blobs/storage-quickstart-blobs-portal.md#create-a-container) in your [**Azure Blob Storage account**](https://portal.azure.com/#create/Microsoft.StorageAccount-ARM) for source and target files.
+
+* **Source container**. This container is where you upload your files for translation (required).
+* **Target container**. This container is where your translated files are stored (required).
+
+> [!NOTE]
+> Document Translation supports glossaries as blobs in target containers (not separate glossary containers). If you want to include a custom glossary, add it to the target container and include the `glossaryUrl` with the request. If the translation language pair isn't present in the glossary, it isn't applied. *See* [Translate documents using a custom glossary](#translate-documents-using-a-custom-glossary)
+
+### **Create SAS access tokens for Document Translation**
+
+The `sourceUrl`, `targetUrl`, and optional `glossaryUrl` values must include a shared access signature (SAS) token, appended as a query string. The token can be assigned to your container or to specific blobs. *See* [**Create SAS tokens for the Document Translation process**](create-sas-tokens.md). A minimal token-generation sketch follows the permission list below.
+
+* Your **source** container or blob must have designated **read** and **list** access.
+* Your **target** container or blob must have designated **write** and **list** access.
+* Your **glossary** blob must have designated **read** and **list** access.
+
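+The following is a minimal token-generation sketch using the `azure-storage-blob` library that reflects the permissions listed above; the account name, account key, and container names are placeholders.
+
+```python
+from datetime import datetime, timedelta, timezone
+
+from azure.storage.blob import ContainerSasPermissions, generate_container_sas
+
+# Placeholder values for your storage account.
+account_name = "<storage_account_name>"
+account_key = "<storage_account_key>"
+source_container = "<source_container_name>"
+target_container = "<target_container_name>"
+expiry = datetime.now(timezone.utc) + timedelta(hours=12)
+
+# Source container: read + list access.
+source_sas = generate_container_sas(
+    account_name, source_container, account_key=account_key,
+    permission=ContainerSasPermissions(read=True, list=True), expiry=expiry,
+)
+
+# Target container: write + list access.
+target_sas = generate_container_sas(
+    account_name, target_container, account_key=account_key,
+    permission=ContainerSasPermissions(write=True, list=True), expiry=expiry,
+)
+
+# Append each token to its container URL to form the sourceUrl and targetUrl values.
+source_url = f"https://{account_name}.blob.core.windows.net/{source_container}?{source_sas}"
+target_url = f"https://{account_name}.blob.core.windows.net/{target_container}?{target_sas}"
+```
+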
+> [!TIP]
+>
+> * If you're translating **multiple** files (blobs) in an operation, **delegate SAS access at the container level**.
+> * If you're translating a **single** file (blob) in an operation, **delegate SAS access at the blob level**.
+> * As an alternative to SAS tokens, you can use a [**system-assigned managed identity**](../how-to-guides/create-use-managed-identities.md) for authentication.
+
+## HTTP requests
+
+A batch Document Translation request is submitted to your Translator service endpoint via a POST request. If successful, the POST method returns a `202 Accepted` response code and the service creates a batch request. The translated documents are listed in your target container.
+
+For detailed information regarding Azure AI Translator Service request limits, _see_ [**Document Translation request limits**](../../service-limits.md#document-translation).
+
+### HTTP headers
+
+The following headers are included with each Document Translation API request:
+
+|HTTP header|Description|
+||--|
+|Ocp-Apim-Subscription-Key|**Required**: The value is the Azure key for your Translator or Azure AI services resource.|
+|Content-Type|**Required**: Specifies the content type of the payload. Accepted values are `application/json` or `application/json; charset=UTF-8`.|
+
+### POST request body properties
+
+* The POST request URL is POST `https://<NAME-OF-YOUR-RESOURCE>.cognitiveservices.azure.com/translator/text/batch/v1.1/batches`
+* The POST request body is a JSON object named `inputs`.
+* The `inputs` object contains both `sourceURL` and `targetURL` container addresses for your source and target language pairs.
+* The `prefix` and `suffix` fields are case-sensitive strings that filter documents in the source path for translation. The `prefix` field is often used to delineate subfolders for translation. The `suffix` field is most often used for file extensions.
+* A value for the `glossaries` field (optional) is applied when the document is being translated.
+* The `targetUrl` for each target language must be unique.
+
+>[!NOTE]
+> If a file with the same name already exists in the destination, the job will fail.
+
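+A minimal Python sketch of a batch request that applies the `prefix`/`suffix` filter described above follows. The placement of the fields under `source.filter` reflects the commonly documented request schema, so confirm it against the REST reference before relying on it, and treat all URLs and values as placeholders.
+
+```python
+# Sketch: submit a batch job that translates only .docx files under "reports/".
+# The source["filter"] placement of prefix/suffix is an assumption to verify
+# against the Document Translation REST reference.
+import requests
+
+endpoint = "https://<NAME-OF-YOUR-RESOURCE>.cognitiveservices.azure.com/translator/text/batch/v1.1"
+payload = {
+    "inputs": [{
+        "source": {
+            "sourceUrl": "https://YOUR-SOURCE-URL-WITH-READ-LIST-ACCESS-SAS",
+            "filter": {"prefix": "reports/", "suffix": ".docx"},
+        },
+        "targets": [{
+            "targetUrl": "https://YOUR-TARGET-URL-WITH-WRITE-LIST-ACCESS-SAS",
+            "language": "es",
+        }],
+    }]
+}
+response = requests.post(
+    f"{endpoint}/batches",
+    headers={"Ocp-Apim-Subscription-Key": "<YOUR-KEY>"},
+    json=payload,
+)
+print(response.status_code)
+```
+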
+<!-- markdownlint-disable MD024 -->
+### Translate all documents in a container
+
+```json
+{
+ "inputs": [
+ {
+ "source": {
+ "sourceUrl": "https://my.blob.core.windows.net/source-en?sv=2019-12-12&st=2021-03-05T17%3A45%3A25Z&se=2021-03-13T17%3A45%3A00Z&sr=c&sp=rl&sig=SDRPMjE4nfrH3csmKLILkT%2Fv3e0Q6SWpssuuQl1NmfM%3D"
+ },
+ "targets": [
+ {
+ "targetUrl": "https://my.blob.core.windows.net/target-fr?sv=2019-12-12&st=2021-03-05T17%3A49%3A02Z&se=2021-03-13T17%3A49%3A00Z&sr=c&sp=wdl&sig=Sq%2BYdNbhgbq4hLT0o1UUOsTnQJFU590sWYo4BOhhQhs%3D",
+ "language": "fr"
+ }
+ ]
+ }
+ ]
+}
+```
+
+### Translate a specific document in a container
+
+* Specify `"storageType": "File"`
+* If you aren't using a [**system-assigned managed identity**](create-use-managed-identities.md) for authentication, make sure you've created source URL & SAS token for the specific blob/document (not for the container)
+* Ensure you've specified the target filename as part of the target URL, though the SAS token is still for the container.
+* This sample request returns a single document translated into two target languages.
+
+```json
+{
+ "inputs": [
+ {
+ "storageType": "File",
+ "source": {
+ "sourceUrl": "https://my.blob.core.windows.net/source-en/source-english.docx?sv=2019-12-12&st=2021-01-26T18%3A30%3A20Z&se=2021-02-05T18%3A30%3A00Z&sr=c&sp=rl&sig=d7PZKyQsIeE6xb%2B1M4Yb56I%2FEEKoNIF65D%2Fs0IFsYcE%3D"
+ },
+ "targets": [
+ {
+ "targetUrl": "https://my.blob.core.windows.net/target/try/Target-Spanish.docx?sv=2019-12-12&st=2021-01-26T18%3A31%3A11Z&se=2021-02-05T18%3A31%3A00Z&sr=c&sp=wl&sig=AgddSzXLXwHKpGHr7wALt2DGQJHCzNFF%2F3L94JHAWZM%3D",
+ "language": "es"
+ },
+ {
+ "targetUrl": "https://my.blob.core.windows.net/target/try/Target-German.docx?sv=2019-12-12&st=2021-01-26T18%3A31%3A11Z&se=2021-02-05T18%3A31%3A00Z&sr=c&sp=wl&sig=AgddSzXLXwHKpGHr7wALt2DGQJHCzNFF%2F3L94JHAWZM%3D",
+ "language": "de"
+ }
+ ]
+ }
+ ]
+}
+```
+
+### Translate documents using a custom glossary
+
+```json
+{
+ "inputs": [
+ {
+ "source": {
+ "sourceUrl": "https://myblob.blob.core.windows.net/source"
+ },
+ "targets": [
+ {
+ "targetUrl": "https://myblob.blob.core.windows.net/target",
+ "language": "es",
+ "glossaries": [
+ {
+          "glossaryUrl": "https://myblob.blob.core.windows.net/glossary/en-es.xlf",
+ "format": "xliff"
+ }
+ ]
+ }
+ ]
+ }
+ ]
+}
+```
+
+## Use code to submit Document Translation requests
+
+### Set up your coding platform
+
+### [C#](#tab/csharp)
+
+* Create a new project.
+* Replace Program.cs with the C# code sample.
+* Set your endpoint, key, and container URL values in Program.cs.
+* To process JSON data, add [Newtonsoft.Json package using .NET CLI](https://www.nuget.org/packages/Newtonsoft.Json/).
+* Run the program from the project directory.
+
+### [Node.js](#tab/javascript)
+
+* Create a new Node.js project.
+* Install the Axios library with `npm i axios`.
+* Copy/paste the JavaScript sample code into your project.
+* Set your endpoint, key, and container URL values.
+* Run the program.
+
+### [Python](#tab/python)
+
+* Create a new project.
+* Copy and paste the code from one of the samples into your project.
+* Set your endpoint, key, and container URL values.
+* Run the program. For example: `python translate.py`.
+
+### [Java](#tab/java)
+
+* Create a working directory for your project. For example:
+
+```powershell
+mkdir sample-project
+```
+
+* In your project directory, create the following subdirectory structure:
+
+    src
+    └── main
+        └── java
+
+```powershell
+mkdir -p src/main/java/
+```
+
+**NOTE**: Java source files (for example, _sample.java_) live in src/main/**java**.
+
+* In your root directory (for example, *sample-project*), initialize your project with Gradle:
+
+```powershell
+gradle init --type basic
+```
+
+* When prompted to choose a **DSL**, select **Kotlin**.
+
+* Update the `build.gradle.kts` file. Keep in mind that you need to update your `mainClassName` depending on the sample:
+
+    ```kotlin
+ plugins {
+ java
+ application
+ }
+ application {
+ mainClassName = "{NAME OF YOUR CLASS}"
+ }
+ repositories {
+ mavenCentral()
+ }
+ dependencies {
+        implementation("com.squareup.okhttp:okhttp:2.5.0")
+ }
+ ```
+
+* Create a Java file in the **java** directory and copy/paste the code from the provided sample. Don't forget to add your key and endpoint.
+
+* **Build and run the sample from the root directory**:
+
+```powershell
+gradle build
+gradle run
+```
+
+### [Go](#tab/go)
+
+* Create a new Go project.
+* Add the provided Go code sample.
+* Set your endpoint, key, and container URL values.
+* Save the file with a `.go` extension.
+* Open a command prompt on a computer with Go installed.
+* Build the file. For example: `go build example-code.go`.
+* Run the file, for example: `example-code`.
+
+
+
+> [!IMPORTANT]
+>
+> For the code samples, you'll hard-code your Shared Access Signature (SAS) URL where indicated. Remember to remove the SAS URL from your code when you're done, and never post it publicly. For production, use a secure way of storing and accessing your credentials like [Azure Managed Identity](create-use-managed-identities.md). For more information, _see_ Azure Storage [security](../../../../storage/common/authorize-data-access.md).
+
+> You may need to update the following fields, depending upon the operation:
+>
+> * `endpoint`
+> * `key`
+> * `sourceUrl`
+> * `targetUrl`
+> * `glossaryUrl`
+> * `id` (job ID)
+
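+In line with the credential guidance above, the following is a minimal Python sketch that reads the key and endpoint from environment variables instead of hard-coding them; the variable names `TRANSLATOR_KEY` and `TRANSLATOR_ENDPOINT` are illustrative, not official settings.
+
+```python
+# Sketch: load credentials from environment variables rather than source code.
+# TRANSLATOR_KEY and TRANSLATOR_ENDPOINT are assumed names for this example.
+import os
+
+key = os.environ["TRANSLATOR_KEY"]
+endpoint = os.environ["TRANSLATOR_ENDPOINT"]
+```
+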
+#### Locating the `id` value
+
+* You find the job `id` in the POST method response Header `Operation-Location` URL value. The last parameter of the URL is the operation's job **`id`**:
+
+|**Response header**|**Result URL**|
+|--|-|
+|Operation-Location | `https://<NAME-OF-YOUR-RESOURCE>.cognitiveservices.azure.com/translator/text/batch/v1.1/batches/9dce0aa9-78dc-41ba-8cae-2e2f3c2ff8ec`|
+
+* You can also use a **GET Jobs** request to retrieve a Document Translation job `id`.
+
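+As a rough illustration (Python, assuming `response` holds the result of the batch POST request shown in the samples that follow), the job `id` can be parsed from that header like this:
+
+```python
+# Sketch: extract the job id from the Operation-Location response header.
+# Assumes `response` is the requests.Response returned by the batch POST.
+operation_location = response.headers["Operation-Location"]
+job_id = operation_location.rstrip("/").split("/")[-1]
+print(f"Job id: {job_id}")
+```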
+
+## Translate documents
+
+### [C#](#tab/csharp)
+
+```csharp
+
+ using System;
+ using System.Net.Http;
+ using System.Threading.Tasks;
+ using System.Text;
++
+ class Program
+ {
+
+ static readonly string route = "/batches";
+
+ private static readonly string endpoint = "https://<NAME-OF-YOUR-RESOURCE>.cognitiveservices.azure.com/translator/text/batch/v1.1";
+
+ private static readonly string key = "<YOUR-KEY>";
+
+ static readonly string json = ("{\"inputs\": [{\"source\": {\"sourceUrl\": \"https://YOUR-SOURCE-URL-WITH-READ-LIST-ACCESS-SAS\",\"storageSource\": \"AzureBlob\",\"language\": \"en\"}, \"targets\": [{\"targetUrl\": \"https://YOUR-TARGET-URL-WITH-WRITE-LIST-ACCESS-SAS\",\"storageSource\": \"AzureBlob\",\"category\": \"general\",\"language\": \"es\"}]}]}");
+
+ static async Task Main(string[] args)
+ {
+ using HttpClient client = new HttpClient();
+ using HttpRequestMessage request = new HttpRequestMessage();
+ {
+
+ StringContent content = new StringContent(json, Encoding.UTF8, "application/json");
+
+ request.Method = HttpMethod.Post;
+ request.RequestUri = new Uri(endpoint + route);
+ request.Headers.Add("Ocp-Apim-Subscription-Key", key);
+ request.Content = content;
+
+ HttpResponseMessage response = await client.SendAsync(request);
+ string result = response.Content.ReadAsStringAsync().Result;
+ if (response.IsSuccessStatusCode)
+ {
+ Console.WriteLine($"Status code: {response.StatusCode}");
+ Console.WriteLine();
+ Console.WriteLine($"Response Headers:");
+ Console.WriteLine(response.Headers);
+ }
+ else
+ Console.Write("Error");
+
+ }
+
+ }
+
+ }
+```
+
+### [Node.js](#tab/javascript)
+
+```javascript
+
+const axios = require('axios').default;
+
+let endpoint = 'https://<NAME-OF-YOUR-RESOURCE>.cognitiveservices.azure.com/translator/text/batch/v1.1';
+let route = '/batches';
+let key = '<YOUR-KEY>';
+
+let data = JSON.stringify({
+    "inputs": [
+        {
+            "source": {
+                "sourceUrl": "https://YOUR-SOURCE-URL-WITH-READ-LIST-ACCESS-SAS",
+                "storageSource": "AzureBlob",
+                "language": "en"
+            },
+            "targets": [
+                {
+                    "targetUrl": "https://YOUR-TARGET-URL-WITH-WRITE-LIST-ACCESS-SAS",
+                    "storageSource": "AzureBlob",
+                    "category": "general",
+                    "language": "es"
+                }
+            ]
+        }
+    ]
+});
+
+let config = {
+ method: 'post',
+ baseURL: endpoint,
+ url: route,
+ headers: {
+ 'Ocp-Apim-Subscription-Key': key,
+ 'Content-Type': 'application/json'
+ },
+ data: data
+};
+
+axios(config)
+.then(function (response) {
+ let result = { statusText: response.statusText, statusCode: response.status, headers: response.headers };
+ console.log()
+ console.log(JSON.stringify(result));
+})
+.catch(function (error) {
+ console.log(error);
+});
+```
+
+### [Python](#tab/python)
+
+```python
+
+import requests
+
+endpoint = "https://<NAME-OF-YOUR-RESOURCE>.cognitiveservices.azure.com/translator/text/batch/v1.1"
+key = '<YOUR-KEY>'
+path = '/batches'
+constructed_url = endpoint + path
+
+payload= {
+ "inputs": [
+ {
+ "source": {
+ "sourceUrl": "https://YOUR-SOURCE-URL-WITH-READ-LIST-ACCESS-SAS",
+ "storageSource": "AzureBlob",
+ "language": "en"
+ },
+ "targets": [
+ {
+ "targetUrl": "https://YOUR-TARGET-URL-WITH-WRITE-LIST-ACCESS-SAS",
+ "storageSource": "AzureBlob",
+ "category": "general",
+ "language": "es"
+ }
+ ]
+ }
+ ]
+}
+headers = {
+ 'Ocp-Apim-Subscription-Key': key,
+ 'Content-Type': 'application/json'
+}
+
+response = requests.post(constructed_url, headers=headers, json=payload)
+
+print(f'response status code: {response.status_code}\nresponse status: {response.reason}\nresponse headers: {response.headers}')
+```
+
+### [Java](#tab/java)
+
+```java
+
+import java.io.*;
+import java.net.*;
+import java.util.*;
+import com.squareup.okhttp.*;
+
+public class DocumentTranslation {
+ String key = "<YOUR-KEY>";
+ String endpoint = "https://<NAME-OF-YOUR-RESOURCE>.cognitiveservices.azure.com/translator/text/batch/v1.1";
+ String path = endpoint + "/batches";
+
+ OkHttpClient client = new OkHttpClient();
+
+ public void post() throws IOException {
+ MediaType mediaType = MediaType.parse("application/json");
+        RequestBody body = RequestBody.create(mediaType, "{\n \"inputs\": [\n {\n \"source\": {\n \"sourceUrl\": \"https://YOUR-SOURCE-URL-WITH-READ-LIST-ACCESS-SAS\",\n \"storageSource\": \"AzureBlob\",\n \"language\": \"en\"\n },\n \"targets\": [\n {\n \"targetUrl\": \"https://YOUR-TARGET-URL-WITH-WRITE-LIST-ACCESS-SAS\",\n \"category\": \"general\",\n \"language\": \"fr\",\n \"storageSource\": \"AzureBlob\"\n }\n ],\n \"storageType\": \"Folder\"\n }\n ]\n}");
+ Request request = new Request.Builder()
+ .url(path).post(body)
+ .addHeader("Ocp-Apim-Subscription-Key", key)
+ .addHeader("Content-type", "application/json")
+ .build();
+ Response response = client.newCall(request).execute();
+ System.out.println(response.code());
+ System.out.println(response.headers());
+ }
+
+ public static void main(String[] args) {
+ try {
+ DocumentTranslation sampleRequest = new DocumentTranslation();
+ sampleRequest.post();
+ } catch (Exception e) {
+ System.out.println(e);
+ }
+ }
+}
+```
+
+### [Go](#tab/go)
+
+```go
+
+package main
+
+import (
+    "bytes"
+    "fmt"
+    "net/http"
+)
+
+func main() {
+    endpoint := "https://<NAME-OF-YOUR-RESOURCE>.cognitiveservices.azure.com/translator/text/batch/v1.1"
+    key := "<YOUR-KEY>"
+    uri := endpoint + "/batches"
+    method := "POST"
+
+    var jsonStr = []byte(`{"inputs":[{"source":{"sourceUrl":"https://YOUR-SOURCE-URL-WITH-READ-LIST-ACCESS-SAS","storageSource":"AzureBlob","language":"en"},"targets":[{"targetUrl":"https://YOUR-TARGET-URL-WITH-WRITE-LIST-ACCESS-SAS","storageSource":"AzureBlob","category":"general","language":"es"}]}]}`)
+
+    req, err := http.NewRequest(method, uri, bytes.NewBuffer(jsonStr))
+    if err != nil {
+        fmt.Println(err)
+        return
+    }
+    req.Header.Add("Ocp-Apim-Subscription-Key", key)
+    req.Header.Add("Content-Type", "application/json")
+
+    client := &http.Client{}
+    res, err := client.Do(req)
+    if err != nil {
+        fmt.Println(err)
+        return
+    }
+    defer res.Body.Close()
+    fmt.Println("response status:", res.Status)
+    fmt.Println("response headers", res.Header)
+}
+```
+++
+## Get file formats
+
+Retrieve a list of supported file formats. If successful, this method returns a `200 OK` response code.
+
+### [C#](#tab/csharp)
+
+```csharp
+
+using System;
+using System.Net.Http;
+using System.Threading.Tasks;
++
+class Program
+{
++
+ private static readonly string endpoint = "https://<NAME-OF-YOUR-RESOURCE>.cognitiveservices.azure.com/translator/text/batch/v1.1";
+
+ static readonly string route = "/documents/formats";
+
+ private static readonly string key = "<YOUR-KEY>";
+
+ static async Task Main(string[] args)
+ {
+
+ HttpClient client = new HttpClient();
+ using HttpRequestMessage request = new HttpRequestMessage();
+ {
+ request.Method = HttpMethod.Get;
+ request.RequestUri = new Uri(endpoint + route);
+ request.Headers.Add("Ocp-Apim-Subscription-Key", key);
++
+ HttpResponseMessage response = await client.SendAsync(request);
+ string result = response.Content.ReadAsStringAsync().Result;
+
+ Console.WriteLine($"Status code: {response.StatusCode}");
+ Console.WriteLine($"Response Headers: {response.Headers}");
+ Console.WriteLine();
+ Console.WriteLine(result);
+ }
+    }
+}
+```
+
+### [Node.js](#tab/javascript)
+
+```javascript
+
+const axios = require('axios');
+
+let endpoint = 'https://<NAME-OF-YOUR-RESOURCE>.cognitiveservices.azure.com/translator/text/batch/v1.1';
+let key = '<YOUR-KEY>';
+let route = '/documents/formats';
+
+let config = {
+ method: 'get',
+ url: endpoint + route,
+ headers: {
+ 'Ocp-Apim-Subscription-Key': key
+ }
+};
+
+axios(config)
+.then(function (response) {
+ console.log(JSON.stringify(response.data));
+})
+.catch(function (error) {
+ console.log(error);
+});
+
+```
+
+### [Java](#tab/java)
+
+```java
+import java.io.*;
+import java.net.*;
+import java.util.*;
+import com.squareup.okhttp.*;
+
+public class GetFileFormats {
+
+ String key = "<YOUR-KEY>";
+ String endpoint = "https://<NAME-OF-YOUR-RESOURCE>.cognitiveservices.azure.com/translator/text/batch/v1.1";
+ String url = endpoint + "/documents/formats";
+ OkHttpClient client = new OkHttpClient();
+
+ public void get() throws IOException {
+ Request request = new Request.Builder().url(
+ url).method("GET", null).addHeader("Ocp-Apim-Subscription-Key", key).build();
+ Response response = client.newCall(request).execute();
+ System.out.println(response.body().string());
+ }
+
+ public static void main(String[] args) throws IOException {
+ try{
+      GetFileFormats formats = new GetFileFormats();
+      formats.get();
+ } catch (Exception e) {
+ System.out.println(e);
+ }
+ }
+}
+
+```
+
+### [Python](#tab/python)
+
+```python
+
+import http.client
+
+host = '<NAME-OF-YOUR-RESOURCE>.cognitiveservices.azure.com'
+parameters = '/translator/text/batch/v1.1/documents/formats'
+key = '<YOUR-KEY>'
+conn = http.client.HTTPSConnection(host)
+payload = ''
+headers = {
+ 'Ocp-Apim-Subscription-Key': key
+}
+conn.request("GET", parameters , payload, headers)
+res = conn.getresponse()
+data = res.read()
+print(res.status)
+print()
+print(data.decode("utf-8"))
+```
+
+### [Go](#tab/go)
+
+```go
+
+package main
+
+import (
+ "fmt"
+ "net/http"
+ "io/ioutil"
+)
+
+func main() {
+
+ endpoint := "https://<NAME-OF-YOUR-RESOURCE>.cognitiveservices.azure.com/translator/text/batch/v1.1"
+ key := "<YOUR-KEY>"
+ uri := endpoint + "/documents/formats"
+ method := "GET"
+
+ client := &http.Client {
+ }
+ req, err := http.NewRequest(method, uri, nil)
+
+ if err != nil {
+ fmt.Println(err)
+ return
+ }
+ req.Header.Add("Ocp-Apim-Subscription-Key", key)
+
+ res, err := client.Do(req)
+ if err != nil {
+ fmt.Println(err)
+ return
+ }
+ defer res.Body.Close()
+
+ body, err := ioutil.ReadAll(res.Body)
+ if err != nil {
+ fmt.Println(err)
+ return
+ }
+ fmt.Println(res.StatusCode)
+ fmt.Println(string(body))
+}
+```
+++
+## Get job status
+
+Get the current status for a single job and a summary of all jobs in a Document Translation request. If successful, this method returns a `200 OK` response code.
+<!-- markdownlint-disable MD024 -->
+
+### [C#](#tab/csharp)
+
+```csharp
+
+using System;
+using System.Net.Http;
+using System.Threading.Tasks;
++
+class Program
+{
++
+ private static readonly string endpoint = "https://<NAME-OF-YOUR-RESOURCE>.cognitiveservices.azure.com/translator/text/batch/v1.1";
+
+ static readonly string route = "/batches/{id}";
+
+ private static readonly string key = "<YOUR-KEY>";
+
+ static async Task Main(string[] args)
+ {
+
+ HttpClient client = new HttpClient();
+ using HttpRequestMessage request = new HttpRequestMessage();
+ {
+ request.Method = HttpMethod.Get;
+ request.RequestUri = new Uri(endpoint + route);
+ request.Headers.Add("Ocp-Apim-Subscription-Key", key);
++
+ HttpResponseMessage response = await client.SendAsync(request);
+ string result = response.Content.ReadAsStringAsync().Result;
+
+ Console.WriteLine($"Status code: {response.StatusCode}");
+ Console.WriteLine($"Response Headers: {response.Headers}");
+ Console.WriteLine();
+ Console.WriteLine(result);
+ }
+ }
+}
+```
+
+### [Node.js](#tab/javascript)
+
+```javascript
+
+const axios = require('axios');
+
+let endpoint = 'https://<NAME-OF-YOUR-RESOURCE>.cognitiveservices.azure.com/translator/text/batch/v1.1';
+let key = '<YOUR-KEY>';
+let route = '/batches/{id}';
+
+let config = {
+ method: 'get',
+ url: endpoint + route,
+ headers: {
+ 'Ocp-Apim-Subscription-Key': key
+ }
+};
+
+axios(config)
+.then(function (response) {
+ console.log(JSON.stringify(response.data));
+})
+.catch(function (error) {
+ console.log(error);
+});
+
+```
+
+### [Java](#tab/java)
+
+```java
+
+import java.io.*;
+import java.net.*;
+import java.util.*;
+import com.squareup.okhttp.*;
+
+public class GetJobStatus {
+
+ String key = "<YOUR-KEY>";
+ String endpoint = "https://<NAME-OF-YOUR-RESOURCE>.cognitiveservices.azure.com/translator/text/batch/v1.1";
+ String url = endpoint + "/batches/{id}";
+ OkHttpClient client = new OkHttpClient();
+
+ public void get() throws IOException {
+ Request request = new Request.Builder().url(
+ url).method("GET", null).addHeader("Ocp-Apim-Subscription-Key", key).build();
+ Response response = client.newCall(request).execute();
+ System.out.println(response.body().string());
+ }
+
+ public static void main(String[] args) throws IOException {
+ try{
+      GetJobStatus jobStatus = new GetJobStatus();
+      jobStatus.get();
+ } catch (Exception e) {
+ System.out.println(e);
+ }
+ }
+}
+
+```
+
+### [Python](#tab/python)
+
+```python
+
+import http.client
+
+host = '<NAME-OF-YOUR-RESOURCE>.cognitiveservices.azure.com'
+parameters = '/translator/text/batch/v1.1/batches/{id}'
+key = '<YOUR-KEY>'
+conn = http.client.HTTPSConnection(host)
+payload = ''
+headers = {
+ 'Ocp-Apim-Subscription-Key': key
+}
+conn.request("GET", parameters , payload, headers)
+res = conn.getresponse()
+data = res.read()
+print(res.status)
+print()
+print(data.decode("utf-8"))
+```
+
+### [Go](#tab/go)
+
+```go
+
+package main
+
+import (
+ "fmt"
+ "net/http"
+ "io/ioutil"
+)
+
+func main() {
+
+ endpoint := "https://<NAME-OF-YOUR-RESOURCE>.cognitiveservices.azure.com/translator/text/batch/v1.1"
+ key := "<YOUR-KEY>"
+ uri := endpoint + "/batches/{id}"
+ method := "GET"
+
+ client := &http.Client {
+ }
+ req, err := http.NewRequest(method, uri, nil)
+
+ if err != nil {
+ fmt.Println(err)
+ return
+ }
+ req.Header.Add("Ocp-Apim-Subscription-Key", key)
+
+ res, err := client.Do(req)
+ if err != nil {
+ fmt.Println(err)
+ return
+ }
+ defer res.Body.Close()
+
+ body, err := ioutil.ReadAll(res.Body)
+ if err != nil {
+ fmt.Println(err)
+ return
+ }
+ fmt.Println(res.StatusCode)
+ fmt.Println(string(body))
+}
+```
+++
+## Get document status
+
+### Brief overview
+
+Retrieve the status of a specific document in a Document Translation request. If successful, this method returns a `200 OK` response code.
+
+### [C#](#tab/csharp)
+
+```csharp
+
+using System;
+using System.Net.Http;
+using System.Threading.Tasks;
++
+class Program
+{
++
+ private static readonly string endpoint = "https://<NAME-OF-YOUR-RESOURCE>.cognitiveservices.azure.com/translator/text/batch/v1.1";
+
+ static readonly string route = "/{id}/document/{documentId}";
+
+ private static readonly string key = "<YOUR-KEY>";
+
+ static async Task Main(string[] args)
+ {
+
+ HttpClient client = new HttpClient();
+ using HttpRequestMessage request = new HttpRequestMessage();
+ {
+ request.Method = HttpMethod.Get;
+ request.RequestUri = new Uri(endpoint + route);
+ request.Headers.Add("Ocp-Apim-Subscription-Key", key);
++
+ HttpResponseMessage response = await client.SendAsync(request);
+ string result = response.Content.ReadAsStringAsync().Result;
+
+ Console.WriteLine($"Status code: {response.StatusCode}");
+ Console.WriteLine($"Response Headers: {response.Headers}");
+ Console.WriteLine();
+ Console.WriteLine(result);
+ }
+    }
+}
+```
+
+### [Node.js](#tab/javascript)
+
+```javascript
+
+const axios = require('axios');
+
+let endpoint = 'https://<NAME-OF-YOUR-RESOURCE>.cognitiveservices.azure.com/translator/text/batch/v1.1';
+let key = '<YOUR-KEY>';
+let route = '/{id}/document/{documentId}';
+
+let config = {
+ method: 'get',
+ url: endpoint + route,
+ headers: {
+ 'Ocp-Apim-Subscription-Key': key
+ }
+};
+
+axios(config)
+.then(function (response) {
+ console.log(JSON.stringify(response.data));
+})
+.catch(function (error) {
+ console.log(error);
+});
+
+```
+
+### [Java](#tab/java)
+
+```java
+
+import java.io.*;
+import java.net.*;
+import java.util.*;
+import com.squareup.okhttp.*;
+
+public class GetDocumentStatus {
+
+ String key = "<YOUR-KEY>";
+ String endpoint = "https://<NAME-OF-YOUR-RESOURCE>.cognitiveservices.azure.com/translator/text/batch/v1.1";
+ String url = endpoint + "/{id}/document/{documentId}";
+ OkHttpClient client = new OkHttpClient();
+
+ public void get() throws IOException {
+ Request request = new Request.Builder().url(
+ url).method("GET", null).addHeader("Ocp-Apim-Subscription-Key", key).build();
+ Response response = client.newCall(request).execute();
+ System.out.println(response.body().string());
+ }
+
+ public static void main(String[] args) throws IOException {
+ try{
+      GetDocumentStatus documentStatus = new GetDocumentStatus();
+      documentStatus.get();
+ } catch (Exception e) {
+ System.out.println(e);
+ }
+ }
+}
+
+```
+
+### [Python](#tab/python)
+
+```python
+
+import http.client
+
+host = '<NAME-OF-YOUR-RESOURCE>.cognitiveservices.azure.com'
+parameters = '/translator/text/batch/v1.1/{id}/document/{documentId}'
+key = '<YOUR-KEY>'
+conn = http.client.HTTPSConnection(host)
+payload = ''
+headers = {
+ 'Ocp-Apim-Subscription-Key': key
+}
+conn.request("GET", parameters , payload, headers)
+res = conn.getresponse()
+data = res.read()
+print(res.status)
+print()
+print(data.decode("utf-8"))
+```
+
+### [Go](#tab/go)
+
+```go
+
+package main
+
+import (
+ "fmt"
+ "net/http"
+ "io/ioutil"
+)
+
+func main() {
+
+ endpoint := "https://<NAME-OF-YOUR-RESOURCE>.cognitiveservices.azure.com/translator/text/batch/v1.1"
+ key := "<YOUR-KEY>"
+ uri := endpoint + "/{id}/document/{documentId}"
+ method := "GET"
+
+ client := &http.Client {
+ }
+ req, err := http.NewRequest(method, uri, nil)
+
+ if err != nil {
+ fmt.Println(err)
+ return
+ }
+ req.Header.Add("Ocp-Apim-Subscription-Key", key)
+
+ res, err := client.Do(req)
+ if err != nil {
+ fmt.Println(err)
+ return
+ }
+ defer res.Body.Close()
+
+ body, err := ioutil.ReadAll(res.Body)
+ if err != nil {
+ fmt.Println(err)
+ return
+ }
+ fmt.Println(res.StatusCode)
+ fmt.Println(string(body))
+}
+```
+++
+## Delete job
+
+### Brief overview
+
+Cancel a currently processing or queued job. Only documents for which translation hasn't started are canceled.
+
+### [C#](#tab/csharp)
+
+```csharp
+
+using System;
+using System.Net.Http;
+using System.Threading.Tasks;
++
+class Program
+{
++
+ private static readonly string endpoint = "https://<NAME-OF-YOUR-RESOURCE>.cognitiveservices.azure.com/translator/text/batch/v1.1";
+
+ static readonly string route = "/batches/{id}";
+
+ private static readonly string key = "<YOUR-KEY>";
+
+ static async Task Main(string[] args)
+ {
+
+ HttpClient client = new HttpClient();
+ using HttpRequestMessage request = new HttpRequestMessage();
+ {
+ request.Method = HttpMethod.Delete;
+ request.RequestUri = new Uri(endpoint + route);
+ request.Headers.Add("Ocp-Apim-Subscription-Key", key);
++
+ HttpResponseMessage response = await client.SendAsync(request);
+ string result = response.Content.ReadAsStringAsync().Result;
+
+ Console.WriteLine($"Status code: {response.StatusCode}");
+ Console.WriteLine($"Response Headers: {response.Headers}");
+ Console.WriteLine();
+ Console.WriteLine(result);
+ }
+    }
+}
+```
+
+### [Node.js](#tab/javascript)
+
+```javascript
+
+const axios = require('axios');
+
+let endpoint = 'https://<NAME-OF-YOUR-RESOURCE>.cognitiveservices.azure.com/translator/text/batch/v1.1';
+let key = '<YOUR-KEY>';
+let route = '/batches/{id}';
+
+let config = {
+ method: 'delete',
+ url: endpoint + route,
+ headers: {
+ 'Ocp-Apim-Subscription-Key': key
+ }
+};
+
+axios(config)
+.then(function (response) {
+ console.log(JSON.stringify(response.data));
+})
+.catch(function (error) {
+ console.log(error);
+});
+
+```
+
+### [Java](#tab/java)
+
+```java
+
+import java.io.*;
+import java.net.*;
+import java.util.*;
+import com.squareup.okhttp.*;
+
+public class DeleteJob {
+
+ String key = "<YOUR-KEY>";
+ String endpoint = "https://<NAME-OF-YOUR-RESOURCE>.cognitiveservices.azure.com/translator/text/batch/v1.1";
+ String url = endpoint + "/batches/{id}";
+ OkHttpClient client = new OkHttpClient();
+
+ public void get() throws IOException {
+ Request request = new Request.Builder().url(
+ url).method("DELETE", null).addHeader("Ocp-Apim-Subscription-Key", key).build();
+ Response response = client.newCall(request).execute();
+ System.out.println(response.body().string());
+ }
+
+ public static void main(String[] args) throws IOException {
+ try{
+      DeleteJob job = new DeleteJob();
+      job.get();
+ } catch (Exception e) {
+ System.out.println(e);
+ }
+ }
+}
+
+```
+
+### [Python](#tab/python)
+
+```python
+
+import http.client
+
+host = '<NAME-OF-YOUR-RESOURCE>.cognitiveservices.azure.com'
+parameters = '/translator/text/batch/v1.1/batches/{id}'
+key = '<YOUR-KEY>'
+conn = http.client.HTTPSConnection(host)
+payload = ''
+headers = {
+ 'Ocp-Apim-Subscription-Key': key
+}
+conn.request("DELETE", parameters , payload, headers)
+res = conn.getresponse()
+data = res.read()
+print(res.status)
+print()
+print(data.decode("utf-8"))
+```
+
+### [Go](#tab/go)
+
+```go
+
+package main
+
+import (
+ "fmt"
+ "net/http"
+ "io/ioutil"
+)
+
+func main() {
+
+ endpoint := "https://<NAME-OF-YOUR-RESOURCE>.cognitiveservices.azure.com/translator/text/batch/v1.1"
+ key := "<YOUR-KEY>"
+ uri := endpoint + "/batches/{id}"
+ method := "DELETE"
+
+ client := &http.Client {
+ }
+ req, err := http.NewRequest(method, uri, nil)
+
+ if err != nil {
+ fmt.Println(err)
+ return
+ }
+ req.Header.Add("Ocp-Apim-Subscription-Key", key)
+
+ res, err := client.Do(req)
+ if err != nil {
+ fmt.Println(err)
+ return
+ }
+ defer res.Body.Close()
+
+ body, err := ioutil.ReadAll(res.Body)
+ if err != nil {
+ fmt.Println(err)
+ return
+ }
+ fmt.Println(res.StatusCode)
+ fmt.Println(string(body))
+}
+```
+++
+### Common HTTP status codes
+
+| HTTP status code | Description | Possible reason |
+||-|--|
+| 200 | OK | The request was successful. |
+| 400 | Bad Request | A required parameter is missing, empty, or null. Or, the value passed to either a required or optional parameter is invalid. A common issue is a header that is too long. |
+| 401 | Unauthorized | The request isn't authorized. Check to make sure your key or token is valid and in the correct region. When managing your subscription on the Azure portal, make sure you're using the **Translator** single-service resource _not_ the **Azure AI services** multi-service resource. |
+| 429 | Too Many Requests | You've exceeded the quota or rate of requests allowed for your subscription. |
+| 502 | Bad Gateway | Network or server-side issue. May also indicate invalid headers. |
+
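+For transient failures such as `429` (and sometimes `502`), a simple client-side retry can help. The following is a minimal Python sketch, not official guidance; the wait times are illustrative, and `Retry-After` is honored only when the service returns it.
+
+```python
+# Sketch: retry a Document Translation request on HTTP 429 with basic backoff.
+import time
+import requests
+
+def send_with_retry(method, url, *, headers, json=None, max_retries=3):
+    for attempt in range(max_retries + 1):
+        response = requests.request(method, url, headers=headers, json=json)
+        if response.status_code != 429 or attempt == max_retries:
+            return response
+        wait_seconds = int(response.headers.get("Retry-After", 2 ** attempt))
+        time.sleep(wait_seconds)
+```
+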
+## Learn more
+
+* [Translator v3 API reference](../../reference/v3-0-reference.md)
+* [Language support](../../language-support.md)
+
+## Next steps
+
+> [!div class="nextstepaction"]
+> [Create a customized language system using Custom Translator](../../custom-translator/overview.md)
ai-services Language Studio https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/translator/document-translation/language-studio.md
+
+ Title: Try Document Translation in Language Studio
+description: "Document Translation in Azure AI Language Studio."
++++++ Last updated : 07/18/2023++
+recommendations: false
++
+# Document Translation in Language Studio (Preview)
+
+> [!IMPORTANT]
+> Document Translation in Language Studio is currently in Public Preview. Features, approaches and processes may change, prior to General Availability (GA), based on user feedback.
+
+ Document Translation in [**Azure AI Language Studio**](https://language.cognitive.azure.com/home) is a no-code user interface that lets you interactively translate documents from local or Azure Blob Storage.
+
+## Supported regions
+
+The Document Translation feature in the Language Studio is currently available in the following regions:
+
+|DisplayName|Name|
+|--||
+|East US |`eastus`|
+|East US 2 |`eastus2`|
+|West US 2 | `westus2`|
+|West US 3| `westus3`|
+|UK South| `uksouth`|
+|South Central US| `southcentralus` |
+|Australia East|`australiaeast` |
+|Central India| `centralindia` |
+|North Europe| `northeurope` |
+|West Europe|`westeurope`|
+|Switzerland North| `switzerlandnorth` |
+
+## Prerequisites
+
+If you or an administrator have previously set up a Translator resource with a **system-assigned managed identity**, added a **Storage Blob Data Contributor** role assignment, and created an Azure Blob Storage account, you can skip this section and [**Get started**](#get-started) right away.
+
+> [!NOTE]
+>
+> * Document Translation is currently supported in the Translator (single-service) resource only, and is **not** included in the Azure AI services (multi-service) resource.
+>
+> * Document Translation is **only** supported in the S1 Standard Service Plan (Pay-as-you-go) or in the D3 Volume Discount Plan. *See* [Azure AI services pricing: Translator](https://azure.microsoft.com/pricing/details/cognitive-services/translator/).
+>
+
+Document Translation in Language Studio requires the following resources:
+
+* An active [**Azure account**](https://azure.microsoft.com/free/cognitive-services/). If you don't have one, you can [**create a free account**](https://azure.microsoft.com/free/).
+
+* A [**single-service Translator resource**](https://portal.azure.com/#create/Microsoft.CognitiveServicesTextTranslation) (**not** a multi-service Azure AI services resource) with [**system-assigned managed identity**](how-to-guides/create-use-managed-identities.md#enable-a-system-assigned-managed-identity) enabled and a [**Storage Blob Data Contributor**](how-to-guides/create-use-managed-identities.md#grant-storage-account-access-for-your-translator-resource) role assigned. For more information, *see* [**Managed identities for Document Translation**](how-to-guides/create-use-managed-identities.md). Also, make sure the region and pricing sections are completed as follows:
+
+ * **Resource Region**. For this project, choose a geographic region such as **East US**. For Document Translation, [system-assigned managed identity](how-to-guides/create-use-managed-identities.md) isn't supported for the **Global** region.
+
+ * **Pricing tier**. Select Standard S1 or D3 to try the service. Document Translation isn't supported in the free tier.
+
+* An [**Azure Blob Storage account**](https://portal.azure.com/#create/Microsoft.StorageAccount-ARM). An active Azure Blob Storage account is required to use Document Translation in the Language Studio.
+
+Now that you've completed the prerequisites, let's start translating documents!
+
+## Get started
+
+At least one **source document** is required. You can download our [document translation sample document](https://raw.githubusercontent.com/Azure-Samples/cognitive-services-REST-api-samples/master/curl/Translator/document-translation-sample.docx). The source language is English.
+
+1. Navigate to [Language Studio](https://language.cognitive.azure.com/home).
+
+1. If you're using the Language Studio for the first time, a **Select an Azure resource** pop-up screen appears. Make the following selections:
+
+ * **Azure directory**.
+ * **Azure subscription**.
+ * **Resource type**. Choose **Translator**.
+ * **Resource name**. The resource you select must have [**managed identity enabled**](how-to-guides/create-use-managed-identities.md).
+
+ :::image type="content" source="media/language-studio/choose-azure-resource.png" alt-text="Screenshot of the language studio choose your Azure resource dialog window.":::
+
+ > [!TIP]
+ > You can update your selected directory and resource by selecting the Translator settings icon located in the left navigation section.
+
+1. Navigate to Language Studio and select the **Document Translation** tile:
+
+ :::image type="content" source="media/language-studio/welcome-home-page.png" alt-text="Screenshot of the language studio home page.":::
+
+1. If you're using the Document Translation feature for the first time, start with the **Initial Configuration** to select your **Azure AI Translator resource** and **Document storage** account:
+
+ :::image type="content" source="media/language-studio/initial-configuration.png" alt-text="Screenshot of the initial configuration page.":::
+
+1. In the **Job** section, choose the language to **Translate from** (source) or keep the default **Auto-detect language** and select the language to **Translate to** (target). You can select a maximum of 10 target languages. Once you've selected your source and target language(s), select **Next**:
+
+ :::image type="content" source="media/language-studio/basic-information.png" alt-text="Screenshot of the language studio basic information page.":::
+
+## File location and destination
+
+Your source and target files can be located in your local environment or your Azure Blob Storage [container](../../../storage/blobs/storage-quickstart-blobs-portal.md#create-a-container). Follow the steps to select where to retrieve your source and store your target files:
+
+### Choose a source file location
+
+#### [**Local**](#tab/local-env)
+
+ 1. In the **files and destination** section, choose the files for translation by selecting the **Upload local files** button.
+
+ 1. Next, select **&#x2795; Add file(s)**, choose the file(s) for translation, then select **Next**:
+
+ :::image type="content" source="media/language-studio/upload-file.png" alt-text="Screenshot of the select files for translation page.":::
+
+#### [**Azure Blob Storage**](#tab/blob-storage)
+
+1. In the **files and destination** section, choose the files for translation by selecting the **Select for Blob storage** button.
+
+1. Next, choose your *source* **Blob container**, find and select the file(s) for translation, then select **Next**:
+
+ :::image type="content" source="media/language-studio/select-blob-container.png" alt-text="Screenshot of select files from your blob container.":::
+++
+### Choose a target file destination
+
+#### [**Local**](#tab/local-env)
+
+While still in the **files and destination** section, select **Download translated file(s)**. Once you have made your choice, select **Next**:
+
+ :::image type="content" source="media/language-studio/target-file-upload.png" alt-text="Screenshot of the select destination for target files page.":::
+
+#### [**Azure Blob Storage**](#tab/blob-storage)
+
+1. While still in the **files and destination** section, select **Upload to Azure Blob Storage**.
+1. Next, choose your *target* **Blob container** and select **Next**:
+
+ :::image type="content" source="media/language-studio/target-file-upload.png" alt-text="Screenshot of target file upload drop-down menu.":::
+++
+### Optional selections and review
+
+1. (Optional) You can add **additional options** for custom translation and/or a glossary file. If you don't require these options, just select **Next**.
+
+1. On the **Review and finish** page, check to make sure that your selections are correct. If not, you can go back. If everything looks good, select the **Start translation job** button.
+
+ :::image type="content" source="media/language-studio/start-translation.png" alt-text="Screenshot of the start translation job page.":::
+
+1. The **Job history** page contains the **Translation job id** and job status.
+
+ > [!NOTE]
+ > The list of translation jobs on the job history page includes all the jobs that were submitted through the chosen translator resource. If your colleague used the same translator resource to submit a job, you will see the status of that job on the job history page.
+
+ :::image type="content" source="media/language-studio/job-history.png" alt-text="Screenshot of the job history page.":::
+
+That's it! You now know how to translate documents using Azure AI Language Studio.
+
+## Next steps
+
+> [!div class="nextstepaction"]
+>
+> [Use Document Translation REST APIs programmatically](how-to-guides/use-rest-api-programmatically.md)
ai-services Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/translator/document-translation/overview.md
+
+ Title: What is Document Translation?
+description: An overview of the cloud-based batch Document Translation service and process.
++++++ Last updated : 07/18/2023++
+recommendations: false
+
+# What is Document Translation?
+
+Document Translation is a cloud-based feature of the [Azure AI Translator](../translator-overview.md) service and is part of the Azure AI service family of REST APIs. The Document Translation API can be used to translate multiple and complex documents across all [supported languages and dialects](../../language-support.md), while preserving original document structure and data format.
+
+## Key features
+
+| Feature | Description |
+| | -|
+| **Translate large files**| Translate whole documents asynchronously.|
+|**Translate numerous files**|Translate multiple files across all supported languages and dialects while preserving document structure and data format.|
+|**Preserve source file presentation**| Translate files while preserving the original layout and format.|
+|**Apply custom translation**| Translate documents using general and [custom translation](../custom-translator/concepts/customization.md#custom-translator) models.|
+|**Apply custom glossaries**|Translate documents using custom glossaries.|
+|**Automatically detect document language**|Let the Document Translation service determine the language of the document.|
+|**Translate documents with content in multiple languages**|Use the autodetect feature to translate documents with content in multiple languages into your target language.|
+
+> [!NOTE]
+> When translating documents with content in multiple languages, the feature is intended for complete sentences in a single language. If sentences are composed of more than one language, the content may not all translate into the target language.
+> For more information on input requirements, *see* [Document Translation request limits](../service-limits.md#document-translation).
+
+## Development options
+
+You can add Document Translation to your applications using the REST API or a client-library SDK:
+
+* The [**REST API**](reference/rest-api-guide.md) is a language-agnostic interface that enables you to create HTTP requests and authorization headers to translate documents.
+
+* The [**client-library SDKs**](./quickstarts/document-translation-sdk.md) are language-specific classes, objects, methods, and code that you can quickly use by adding a reference in your project. Currently Document Translation has programming language support for [**C#/.NET**](/dotnet/api/azure.ai.translation.document) and [**Python**](https://pypi.org/project/azure-ai-translation-document/).
+
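+For example, the following is a minimal Python sketch using the `azure-ai-translation-document` client library; the endpoint, key, and SAS URLs are placeholders, and the call pattern is a sketch to adapt rather than a complete application.
+
+```python
+# Sketch: start a Document Translation job with the Python SDK and wait for it.
+# Requires: pip install azure-ai-translation-document
+from azure.ai.translation.document import DocumentTranslationClient
+from azure.core.credentials import AzureKeyCredential
+
+endpoint = "https://<NAME-OF-YOUR-RESOURCE>.cognitiveservices.azure.com/"
+key = "<YOUR-KEY>"
+
+client = DocumentTranslationClient(endpoint, AzureKeyCredential(key))
+poller = client.begin_translation(
+    "<SOURCE-CONTAINER-SAS-URL>", "<TARGET-CONTAINER-SAS-URL>", "fr")
+result = poller.result()
+
+for document in result:
+    print(document.status, document.translated_document_url)
+```
+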
+## Get started
+
+In our quickstart, you learn how to rapidly get started using Document Translation. To begin, you need an active [Azure account](https://azure.microsoft.com/free/cognitive-services/). If you don't have one, you can [create a free account](https://azure.microsoft.com/free).
+
+> [!div class="nextstepaction"]
+> [Start here](./quickstarts/document-translation-rest-api.md "Learn how to use Document Translation with HTTP REST")
+
+## Supported document formats
+
+The [Get supported document formats method](reference/get-supported-document-formats.md) returns a list of document formats supported by the Document Translation service. The list includes the common file extension, and the content-type if using the upload API.
+
+Document Translation supports the following document file types:
+
+| File type| File extension|Description|
+|||--|
+|Adobe PDF|`pdf`|Portable document file format. Document Translation uses optical character recognition (OCR) technology to extract and translate text in scanned PDF documents while retaining the original layout.|
+|Comma-Separated Values |`csv`| A comma-delimited raw-data file used by spreadsheet programs.|
+|HTML|`html`, `htm`|Hyper Text Markup Language.|
+|Localization Interchange File Format|`xlf`| A parallel document format, export of Translation Memory systems. The languages used are defined inside the file.|
+|Markdown| `markdown`, `mdown`, `mkdn`, `md`, `mkd`, `mdwn`, `mdtxt`, `mdtext`, `rmd`| A lightweight markup language for creating formatted text.|
+|M&#8203;HTML|`mhtml`, `mht`| A web page archive format used to combine HTML code and its companion resources.|
+|Microsoft Excel|`xls`, `xlsx`|A spreadsheet file for data analysis and documentation.|
+|Microsoft Outlook|`msg`|An email message created or saved within Microsoft Outlook.|
+|Microsoft PowerPoint|`ppt`, `pptx`| A presentation file used to display content in a slideshow format.|
+|Microsoft Word|`doc`, `docx`| A text document file.|
+|OpenDocument Text|`odt`|An open-source text document file.|
+|OpenDocument Presentation|`odp`|An open-source presentation file.|
+|OpenDocument Spreadsheet|`ods`|An open-source spreadsheet file.|
+|Rich Text Format|`rtf`|A text document containing formatting.|
+|Tab Separated Values/TAB|`tsv`/`tab`| A tab-delimited raw-data file used by spreadsheet programs.|
+|Text|`txt`| An unformatted text document.|
+
+## Request limits
+
+For detailed information regarding Azure AI Translator Service request limits, *see* [**Document Translation request limits**](../service-limits.md#document-translation).
+
+### Legacy file types
+
+Source file types are preserved during the document translation with the following **exceptions**:
+
+| Source file extension | Translated file extension|
+| | |
+| .doc, .odt, .rtf, | .docx |
+| .xls, .ods | .xlsx |
+| .ppt, .odp | .pptx |
+
+## Supported glossary formats
+
+Document Translation supports the following glossary file types:
+
+| File type| File extension|Description|
+|||--|
+|Comma-Separated Values| `csv` |A comma-delimited raw-data file used by spreadsheet programs.|
+|Localization Interchange File Format| `xlf`, `xliff`| A parallel document format, export of Translation Memory systems. The languages used are defined inside the file.|
+|Tab-Separated Values/TAB|`tsv`, `tab`| A tab-delimited raw-data file used by spreadsheet programs.|
+
+## Data residency
+
+Document Translation data residency depends on the Azure region where your Translator resource was created:
+
+* Translator resources **created** in any region in Europe are **processed** at data centers in West Europe and North Europe.
+* Translator resources **created** in any region in Asia or Australia are **processed** at data centers in Southeast Asia and Australia East.
+* Translator resources **created** in all other regions, including Global, North America, and South America, are **processed** at data centers in East US and West US 2.
+
+### Text Translation data residency
+
+✔️ Feature: **Translator Text** </br>
+✔️ Region where resource created: **Any**
+
+| Service endpoint | Request processing data center |
+||--|
+|**Global (recommended):**</br>**`api.cognitive.microsofttranslator.com`**|Closest available data center.|
+|**Americas:**</br>**`api-nam.cognitive.microsofttranslator.com`**|East US &bull; South Central US &bull; West Central US &bull; West US 2|
+|**Europe:**</br>**`api-eur.cognitive.microsofttranslator.com`**|North Europe &bull; West Europe|
+| **Asia Pacific:**</br>**`api-apc.cognitive.microsofttranslator.com`**|Korea South &bull; Japan East &bull; Southeast Asia &bull; Australia East|
+
+### Document Translation data residency
+
+✔️ Feature: **Document Translation**</br>
+✔️ Service endpoint: **Custom:** &#8198;&#8198;&#8198; **`<name-of-your-resource.cognitiveservices.azure.com/translator/text/batch/v1.1`**
+
+ |Resource region| Request processing data center |
+|-|--|
+|**Global and any region in the Americas** | East US &bull; West US 2|
+|**Any region in Europe**| North Europe &bull; West Europe|
+|**Any region in Asia Pacific**| Southeast Asia &bull; Australia East|
++
++
+## Next steps
+
+> [!div class="nextstepaction"]
+> [Get Started with Document Translation](./quickstarts/document-translation-rest-api.md)
ai-services Document Translation Rest Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/translator/document-translation/quickstarts/document-translation-rest-api.md
+
+ Title: Get started with Document Translation
+description: "How to create a Document Translation service using C#, Go, Java, Node.js, or Python programming languages and the REST API"
++++++ Last updated : 07/18/2023+
+recommendations: false
+ms.devlang: csharp, golang, java, javascript, python
+
+zone_pivot_groups: programming-languages-set-translator
++
+# Get started with Document Translation
+
+Document Translation is a cloud-based feature of the [Azure AI Translator](../../translator-overview.md) service that asynchronously translates whole documents in [supported languages](../../language-support.md) and various [file formats](../overview.md#supported-document-formats). In this quickstart, learn to use Document Translation with a programming language of your choice to translate a source document into a target language while preserving structure and text formatting.
+
+## Prerequisites
+
+> [!IMPORTANT]
+>
+> * Java and JavaScript Document Translation SDKs are currently available in **public preview**. Features, approaches and processes may change, prior to the general availability (GA) release, based on user feedback.
+> * C# and Python SDKs are general availability (GA) releases ready for use in your production applications
+> * Document Translation is currently supported in the Translator (single-service) resource only, and is **not** included in the Azure AI services (multi-service) resource.
+>
+> * Document Translation is **only** supported in the S1 Standard Service Plan (Pay-as-you-go) or in the D3 Volume Discount Plan. *See* [Azure AI services pricing: Translator](https://azure.microsoft.com/pricing/details/cognitive-services/translator/).
+>
+
+To get started, you need:
+
+* An active [**Azure account**](https://azure.microsoft.com/free/cognitive-services/). If you don't have one, you can [**create a free account**](https://azure.microsoft.com/free/).
+
+* An [**Azure Blob Storage account**](https://portal.azure.com/#create/Microsoft.StorageAccount-ARM). You also need to [create containers](#create-azure-blob-storage-containers) in your Azure Blob Storage account for your source and target files:
+
+ * **Source container**. This container is where you upload your files for translation (required).
+ * **Target container**. This container is where your translated files are stored (required).
+
+* A [**single-service Translator resource**](https://portal.azure.com/#create/Microsoft.CognitiveServicesTextTranslation) (**not** a multi-service Azure AI services resource):
+
+ **Complete the Translator project and instance details fields as follows:**
+
+ 1. **Subscription**. Select one of your available Azure subscriptions.
+
+ 1. **Resource Group**. You can create a new resource group or add your resource to a pre-existing resource group that shares the same lifecycle, permissions, and policies.
+
+ 1. **Resource Region**. Choose **Global** unless your business or application requires a specific region. If you're planning on using a [system-assigned managed identity](../how-to-guides/create-use-managed-identities.md) for authentication, choose a **geographic** region like **West US**.
+
+ 1. **Name**. Enter the name you have chosen for your resource. The name you choose must be unique within Azure.
+
+ > [!NOTE]
+ > Document Translation requires a custom domain endpoint. The value that you enter in the Name field will be the custom domain name parameter for your endpoint.
+
+ 1. **Pricing tier**. Document Translation isn't supported in the free tier. **Select Standard S1 to try the service**.
+
+ 1. Select **Review + Create**.
+
+ 1. Review the service terms and select **Create** to deploy your resource.
+
+ 1. After your resource has successfully deployed, select **Go to resource**.
+
+<!-- > [!div class="nextstepaction"]
+> [I ran into an issue with the prerequisites.](https://microsoft.qualtrics.com/jfe/form/SV_0Cl5zkG3CnDjq6O?Pillar=Language&Product=Document-translation&Page=quickstart&Section=Prerequisites) -->
+
+### Retrieve your key and document translation endpoint
+
+Requests to the Translator service require a read-only key and custom endpoint to authenticate access. The custom domain endpoint is a URL formatted with your resource name, hostname, and Translator subdirectories, and is available in the Azure portal.
+
+1. If you've created a new resource, after it deploys, select **Go to resource**. If you have an existing Document Translation resource, navigate directly to your resource page.
+
+1. In the left rail, under *Resource Management*, select **Keys and Endpoint**.
+
+1. Copy and paste your **`key`** and **`document translation endpoint`** in a convenient location, such as *Microsoft Notepad*. Only one key is necessary to make an API call.
+
+1. Paste your **`key`** and **`document translation endpoint`** into the code samples to authenticate your request to the Document Translation service.
+
+ :::image type="content" source="../media/document-translation-key-endpoint.png" alt-text="Screenshot showing the get your key field in Azure portal.":::
+
+<!-- > [!div class="nextstepaction"]
+> [I ran into an issue retrieving my key and endpoint.](https://microsoft.qualtrics.com/jfe/form/SV_0Cl5zkG3CnDjq6O?Pillar=Language&Product=Document-translation&Page=quickstart&Section=Retrieve-your-keys-and-endpoint) -->
+
+## Create Azure Blob Storage containers
+
+You need to [**create containers**](../../../../storage/blobs/storage-quickstart-blobs-portal.md#create-a-container) in your [**Azure Blob Storage account**](https://portal.azure.com/#create/Microsoft.StorageAccount-ARM) for source and target files.
+
+* **Source container**. This container is where you upload your files for translation (required).
+* **Target container**. This container is where your translated files are stored (required).
+
+### **Required authentication**
+
+The `sourceUrl`, `targetUrl`, and optional `glossaryUrl` must include a Shared Access Signature (SAS) token, appended as a query string. The token can be assigned to your container or specific blobs. *See* [**Create SAS tokens for Document Translation process**](../how-to-guides/create-sas-tokens.md).
+
+* Your **source** container or blob must have designated **read** and **list** access.
+* Your **target** container or blob must have designated **write** and **list** access.
+* Your **glossary** blob must have designated **read** and **list** access.
+
+> [!TIP]
+>
+> * If you're translating **multiple** files (blobs) in an operation, **delegate SAS access at the container level**.
+> * If you're translating a **single** file (blob) in an operation, **delegate SAS access at the blob level**.
+> * As an alternative to SAS tokens, you can use a [**system-assigned managed identity**](../how-to-guides/create-use-managed-identities.md) for authentication.
+
+<!-- > [!div class="nextstepaction"]
+> [I ran into an issue creating blob storage containers with authentication.](https://microsoft.qualtrics.com/jfe/form/SV_0Cl5zkG3CnDjq6O?Pillar=Language&Product=Document-translation&Page=quickstart&Section=Create-blob-storage-containers) -->
+
+### Sample document
+
+For this project, you need a **source document** uploaded to your **source container**. You can download our [document translation sample document](https://raw.githubusercontent.com/Azure-Samples/cognitive-services-REST-api-samples/master/curl/Translator/document-translation-sample.docx) for this quickstart. The source language is English.
+++++++++++++
+That's it, congratulations! In this quickstart, you used Document Translation to translate a document while preserving its original structure and data format.
+
+## Next steps
++
ai-services Document Translation Sdk https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/translator/document-translation/quickstarts/document-translation-sdk.md
+
+ Title: "Document Translation C#/.NET or Python client library"
+
+description: Use the Translator C#/.NET or Python client library (SDK) for cloud-based batch document translation service and process
+++++++ Last updated : 07/18/2023+
+zone_pivot_groups: programming-languages-document-sdk
++
+# Document Translation client-library SDKs
+<!-- markdownlint-disable MD024 -->
+<!-- markdownlint-disable MD001 -->
+
+Document Translation is a cloud-based feature of the [Azure AI Translator](../../translator-overview.md) service that asynchronously translates whole documents in [supported languages](../../language-support.md) and various [file formats](../overview.md#supported-document-formats). In this quickstart, learn to use Document Translation with a programming language of your choice to translate a source document into a target language while preserving structure and text formatting.
+
+> [!IMPORTANT]
+>
+> * Document Translation is currently supported in the Translator (single-service) resource only, and is **not** included in the Azure AI services (multi-service) resource.
+>
+> * Document Translation is supported in paid tiers. The Language Studio only supports the S1 or D3 instance tiers. We suggest that you select Standard S1 to try Document Translation. *See* [Azure AI services pricing: Translator](https://azure.microsoft.com/pricing/details/cognitive-services/translator/).
+
+## Prerequisites
+
+To get started, you need:
+
+* An active [**Azure account**](https://azure.microsoft.com/free/cognitive-services/). If you don't have one, you can [**create a free account**](https://azure.microsoft.com/free/).
+
+* A [**single-service Translator resource**](https://portal.azure.com/#create/Microsoft.CognitiveServicesTextTranslation) (**not** a multi-service Azure AI services resource). If you're planning on using the Document Translation feature with [managed identity authorization](../how-to-guides/create-use-managed-identities.md), choose a geographic region such as **East US**. Select the **Standard S1** or **D3** pricing tier.
+
+* An [**Azure Blob Storage account**](https://portal.azure.com/#create/Microsoft.StorageAccount-ARM). You'll [**create containers**](../../../../storage/blobs/storage-quickstart-blobs-portal.md#create-a-container) in your Azure Blob Storage account for your source and target files:
+
+ * **Source container**. This container is where you upload your files for translation (required).
+ * **Target container**. This container is where your translated files are stored (required).
+
+### Storage container authorization
+
+You can choose one of the following options to authorize access to your Translator resource.
+
+**✔️ Managed Identity**. A managed identity is a service principal that creates an Azure Active Directory (Azure AD) identity and specific permissions for an Azure managed resource. Managed identities enable you to run your Translator application without having to embed credentials in your code. Managed identities are a safer way to grant access to storage data and replace the requirement for you to include shared access signature tokens (SAS) with your source and target URLs.
+
+To learn more, *see* [Managed identities for Document Translation](../how-to-guides/create-use-managed-identities.md).
+
+ :::image type="content" source="../media/managed-identity-rbac-flow.png" alt-text="Screenshot of managed identity flow (RBAC).":::
+
+**✔️ Shared Access Signature (SAS)**. A shared access signature is a URL that grants restricted access to your Azure Storage containers and blobs for a specified period of time. To use this method, you need to create Shared Access Signature (SAS) tokens for your source and target containers. The `sourceUrl` and `targetUrl` must include a Shared Access Signature (SAS) token, appended as a query string. The token can be assigned to your container or specific blobs.
+
+* Your **source** container or blob must have designated **read** and **list** access.
+* Your **target** container or blob must have designated **write** and **list** access.
+
+To learn more, *see* [**Create SAS tokens**](../how-to-guides/create-sas-tokens.md).
+
+ :::image type="content" source="../media/sas-url-token.png" alt-text="Screenshot of a resource URI with a SAS token.":::
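+
+For illustration only, the container URLs you pass to the service are the blob container URIs with the SAS token appended as a query string. The storage account, container names, and token values in the following Python sketch are placeholders, not working values:
+
+```python
+# Placeholders only: substitute your own storage account, containers, and SAS tokens.
+source_container = "https://<YOUR-STORAGE-ACCOUNT>.blob.core.windows.net/<SOURCE-CONTAINER>"
+target_container = "https://<YOUR-STORAGE-ACCOUNT>.blob.core.windows.net/<TARGET-CONTAINER>"
+
+source_sas = "<SAS-TOKEN-WITH-READ-AND-LIST-ACCESS>"
+target_sas = "<SAS-TOKEN-WITH-WRITE-AND-LIST-ACCESS>"
+
+# The SAS token is appended to the container URL as a query string.
+source_url = f"{source_container}?{source_sas}"
+target_url = f"{target_container}?{target_sas}"
+```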
+++++
+### Next step
+
+> [!div class="nextstepaction"]
+> [**Learn more about Document Translation operations**](../reference/rest-api-guide.md)
ai-services Cancel Translation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/translator/document-translation/reference/cancel-translation.md
+
+ Title: Cancel translation method
+
+description: The cancel translation method cancels a current processing or queued operation.
+++++++ Last updated : 07/18/2023++
+# Cancel translation
+
+Reference</br>
+Service: **Azure AI Document Translation**</br>
+API Version: **v1.1**</br>
+
+Cancel a currently processing or queued operation. An operation isn't canceled if it's already completed, has failed, or is being canceled; in those cases, a bad request is returned. Documents that have completed translation aren't canceled and are charged. All pending documents are canceled if possible.
+
+## Request URL
+
+Send a `DELETE` request to:
+
+```HTTP
+DELETE https://<NAME-OF-YOUR-RESOURCE>.cognitiveservices.azure.com/translator/text/batch/v1.1/batches/{id}
+```
+
+Learn how to find your [custom domain name](../quickstarts/document-translation-rest-api.md).
+
+> [!IMPORTANT]
+>
+> * **All API requests to the Document Translation service require a custom domain endpoint**.
+> * You can't use the endpoint found on your Azure portal resource _Keys and Endpoint_ page nor the global translator endpoint (`api.cognitive.microsofttranslator.com`) to make HTTP requests to Document Translation.
+
+## Request parameters
+
+Request parameters passed on the query string are:
+
+|Query parameter|Required|Description|
+|--|--|--|
+|`id`|True|The operation ID.|
+
+## Request headers
+
+Request headers are:
+
+|Headers|Description|
+|--|--|
+|Ocp-Apim-Subscription-Key|Required request header|
+
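+The following is a minimal sketch, using the third-party Python `requests` package, of how a cancel request might be sent. The resource name, key, and operation ID are placeholders, not values from this article.
+
+```python
+import requests
+
+# Placeholders: substitute your own resource name, key, and operation ID.
+endpoint = "https://<NAME-OF-YOUR-RESOURCE>.cognitiveservices.azure.com"
+key = "<YOUR-TRANSLATOR-KEY>"
+operation_id = "<OPERATION-ID>"
+
+url = f"{endpoint}/translator/text/batch/v1.1/batches/{operation_id}"
+headers = {"Ocp-Apim-Subscription-Key": key}
+
+# DELETE submits the cancel request; pending documents are canceled if possible.
+response = requests.delete(url, headers=headers)
+print(response.status_code)  # 200 indicates the cancel request was submitted
+print(response.json())       # operation status and document summary
+```
+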
+## Response status codes
+
+The following are the possible HTTP status codes that a request returns.
+
+| Status Code| Description|
+|--|--|
+|200|OK. Cancel request has been submitted.|
+|401|Unauthorized. Check your credentials.|
+|404|Not found. Resource isn't found.|
+|500|Internal Server Error.|
+|Other Status Codes|<ul><li>Too many requests</li><li>Server temporarily unavailable</li></ul>|
+
+## Cancel translation response
+
+### Successful response
+
+The following information is returned in a successful response.
+
+|Name|Type|Description|
+| | | |
+|`id`|string|ID of the operation.|
+|createdDateTimeUtc|string|Operation created date time.|
+|lastActionDateTimeUtc|string|Date time in which the operation's status has been updated.|
+|status|String|List of possible statuses for job or document: <ul><li>Canceled</li><li>Cancelling</li><li>Failed</li><li>NotStarted</li><li>Running</li><li>Succeeded</li><li>ValidationFailed</li></ul>|
+|summary|StatusSummary|Summary containing a list of details.|
+|summary.total|integer|Count of total documents.|
+|summary.failed|integer|Count of documents failed.|
+|summary.success|integer|Count of documents successfully translated.|
+|summary.inProgress|integer|Count of documents in progress.|
+|summary.notYetStarted|integer|Count of documents not yet started processing.|
+|summary.cancelled|integer|Count of documents canceled.|
+|summary.totalCharacterCharged|integer|Total characters charged by the API.|
+
+### Error response
+
+|Name|Type|Description|
+| | | |
+|code|string|Enums containing high-level error codes. Possible values:<br/><ul><li>InternalServerError</li><li>InvalidArgument</li><li>InvalidRequest</li><li>RequestRateTooHigh</li><li>ResourceNotFound</li><li>ServiceUnavailable</li><li>Unauthorized</li></ul>|
+|message|string|Gets high-level error message.|
+|target|string|Gets the source of the error. For example, it would be "documents" or `document id` for an invalid document.|
+|innerError|InnerTranslationError|New Inner Error format that conforms to Azure AI services API Guidelines. This error message contains required properties ErrorCode, message, and optional properties target, details (key value pair), inner error (it can be nested).|
+|innerError.code|string|Gets code error string.|
+|innerError.message|string|Gets high-level error message.|
+|innerError.target|string|Gets the source of the error. For example, it would be `documents` or `document id` if there was an invalid document.|
+
+## Examples
+
+### Example successful response
+
+The following JSON object is an example of a successful response.
+
+Status code: 200
+
+```JSON
+{
+ "id": "727bf148-f327-47a0-9481-abae6362f11e",
+ "createdDateTimeUtc": "2020-03-26T00:00:00Z",
+ "lastActionDateTimeUtc": "2020-03-26T01:00:00Z",
+ "status": "Succeeded",
+ "summary": {
+ "total": 10,
+ "failed": 1,
+ "success": 9,
+ "inProgress": 0,
+ "notYetStarted": 0,
+ "cancelled": 0,
+ "totalCharacterCharged": 0
+ }
+}
+```
+
+### Example error response
+
+The following JSON object is an example of an error response. The schema for other error codes is the same.
+
+Status code: 500
+
+```JSON
+{
+ "error": {
+ "code": "InternalServerError",
+ "message": "Internal Server Error",
+ "target": "Operation",
+ "innerError": {
+ "code": "InternalServerError",
+ "message": "Unexpected internal server error has occurred"
+ }
+ }
+}
+```
+
+## Next steps
+
+Follow our quickstart to learn more about using Document Translation and the client library.
+
+> [!div class="nextstepaction"]
+> [Get started with Document Translation](../quickstarts/document-translation-rest-api.md)
ai-services Get Document Status https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/translator/document-translation/reference/get-document-status.md
+
+ Title: Get document status method
+
+description: The get document status method returns the status for a specific document.
+++++++ Last updated : 07/18/2023++
+# Get document status
+
+Reference</br>
+Service: **Azure AI Document Translation**</br>
+API Version: **v1.1**</br>
+
+The Get Document Status method returns the translation status for a specific document, based on the request ID and document ID.
+
+## Request URL
+
+Send a `GET` request to:
+```HTTP
+GET https://<NAME-OF-YOUR-RESOURCE>.cognitiveservices.azure.com/translator/text/batch/v1.1/batches/{id}/documents/{documentId}
+```
+
+Learn how to find your [custom domain name](../quickstarts/document-translation-rest-api.md).
+
+> [!IMPORTANT]
+>
+> * **All API requests to the Document Translation service require a custom domain endpoint**.
+> * You can't use the endpoint found on your Azure portal resource _Keys and Endpoint_ page nor the global translator endpoint (`api.cognitive.microsofttranslator.com`) to make HTTP requests to Document Translation.
+
+## Request parameters
+
+Request parameters passed on the query string are:
+
+|Query parameter|Required|Description|
+| | | |
+|documentId|True|The document ID.|
+|`id`|True|The batch ID.|
+
+## Request headers
+
+Request headers are:
+
+|Headers|Description|
+| | |
+|Ocp-Apim-Subscription-Key|Required request header|
+
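+As a minimal sketch (using the third-party Python `requests` package), a document status request might look like the following. The resource name, key, batch ID, and document ID are placeholders.
+
+```python
+import requests
+
+# Placeholders: substitute your own resource name, key, batch ID, and document ID.
+endpoint = "https://<NAME-OF-YOUR-RESOURCE>.cognitiveservices.azure.com"
+key = "<YOUR-TRANSLATOR-KEY>"
+batch_id = "<BATCH-ID>"
+document_id = "<DOCUMENT-ID>"
+
+url = f"{endpoint}/translator/text/batch/v1.1/batches/{batch_id}/documents/{document_id}"
+headers = {"Ocp-Apim-Subscription-Key": key}
+
+response = requests.get(url, headers=headers)
+response.raise_for_status()
+
+# Fields described in the response table: status, to (target language), progress.
+document = response.json()
+print(document["status"], document["to"], document.get("progress"))
+```
+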
+## Response status codes
+
+The following are the possible HTTP status codes that a request returns.
+
+|Status Code|Description|
+| | |
+|200|OK. Successful request accepted by the service. The operation details are returned. Headers: Retry-After: integer, ETag: string|
+|401|Unauthorized. Check your credentials.|
+|404|Not Found. Resource isn't found.|
+|500|Internal Server Error.|
+|Other Status Codes|<ul><li>Too many requests</li><li>Server temporarily unavailable</li></ul>|
+
+## Get document status response
+
+### Successful get document status response
+
+|Name|Type|Description|
+| | | |
+|path|string|Location of the document or folder.|
+|sourcePath|string|Location of the source document.|
+|createdDateTimeUtc|string|Operation created date time.|
+|lastActionDateTimeUtc|string|Date time in which the operation's status has been updated.|
+|status|String|List of possible statuses for job or document: <ul><li>Canceled</li><li>Cancelling</li><li>Failed</li><li>NotStarted</li><li>Running</li><li>Succeeded</li><li>ValidationFailed</li></ul>|
+|to|string|Two-letter language code of the target language. [See the list of languages](../../language-support.md).|
+|progress|number|Progress of the translation, if available.|
+|`id`|string|Document ID.|
+|characterCharged|integer|Characters charged by the API.|
+
+### Error response
+
+|Name|Type|Description|
+| | | |
+|code|string|Enums containing high-level error codes. Possible values:<br/><ul><li>InternalServerError</li><li>InvalidArgument</li><li>InvalidRequest</li><li>RequestRateTooHigh</li><li>ResourceNotFound</li><li>ServiceUnavailable</li><li>Unauthorized</li></ul>|
+|message|string|Gets high-level error message.|
+|innerError|InnerTranslationError|New Inner Error format that conforms to Azure AI services API Guidelines. This error message contains required properties ErrorCode, message, and optional properties target, details (key value pair), inner error (it can be nested).|
+|innerError.code|string|Gets code error string.|
+|innerError.message|string|Gets high-level error message.|
+|innerError.target|string|Gets the source of the error. For example, it would be `documents` or `document id` for an invalid document.|
+
+## Examples
+
+### Example successful response
+The following JSON object is an example of a successful response.
+
+```JSON
+{
+ "path": "https://myblob.blob.core.windows.net/destinationContainer/fr/mydoc.txt",
+ "sourcePath": "https://myblob.blob.core.windows.net/sourceContainer/fr/mydoc.txt",
+ "createdDateTimeUtc": "2020-03-26T00:00:00Z",
+ "lastActionDateTimeUtc": "2020-03-26T01:00:00Z",
+ "status": "Running",
+ "to": "fr",
+ "progress": 0.1,
+ "id": "273622bd-835c-4946-9798-fd8f19f6bbf2",
+ "characterCharged": 0
+}
+```
+
+### Example error response
+
+The following JSON object is an example of an error response. The schema for other error codes is the same.
+
+Status code: 401
+
+```JSON
+{
+ "error": {
+ "code": "Unauthorized",
+ "message": "User is not authorized",
+ "target": "Document",
+ "innerError": {
+ "code": "Unauthorized",
+ "message": "Operation is not authorized"
+ }
+ }
+}
+```
+
+## Next steps
+
+Follow our quickstart to learn more about using Document Translation and the client library.
+
+> [!div class="nextstepaction"]
+> [Get started with Document Translation](../quickstarts/document-translation-rest-api.md)
ai-services Get Documents Status https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/translator/document-translation/reference/get-documents-status.md
+
+ Title: Get documents status
+
+description: The get documents status method returns the status for all documents in a batch document translation request.
+++++++ Last updated : 07/18/2023++
+# Get documents status
+
+Reference</br>
+Service: **Azure AI Document Translation**</br>
+API Version: **v1.1**</br>
+
+The Get documents status method returns the status for all documents in a batch document translation request. If the number of documents in the response exceeds our paging limit, server-side paging is used. Paginated responses indicate a partial result and include a continuation token in the response. The absence of a continuation token means that no other pages are available.
+
+`$top`, `$skip`, and `$maxpagesize` query parameters can be used to specify the number of results to return and an offset for the collection.
+
+`$top` indicates the total number of records the user wants to be returned across all pages. `$skip` indicates the number of records to skip from the list of document status held by the server based on the sorting method specified. By default, we sort by descending start time. ``$maxpagesize`` is the maximum items returned in a page. If more items are requested via `$top` (or `$top` isn't specified and there are more items to be returned), @nextLink will contain the link to the next page.
+
+The `$orderBy` query parameter can be used to sort the returned list (for example, `$orderBy=createdDateTimeUtc asc` or `$orderBy=createdDateTimeUtc desc`). The default sorting is descending by `createdDateTimeUtc`. Some query parameters can be used to filter the returned list; for example, `status=Succeeded,Cancelled` returns only succeeded and canceled documents. The `createdDateTimeUtcStart` and `createdDateTimeUtcEnd` parameters can be used together or separately to specify a datetime range to filter the returned list by. The supported filtering query parameters are `status`, `ids`, `createdDateTimeUtcStart`, and `createdDateTimeUtcEnd`.
+
+When both `$top` and `$skip` are included, the server should first apply `$skip` and then `$top` on the collection.
+
+> [!NOTE]
+> If the server can't honor `$top` and/or `$skip`, the server must return an error to the client informing about it instead of just ignoring the query options. This reduces the risk of the client making assumptions about the data returned.
+
+## Request URL
+
+Send a `GET` request to:
+```HTTP
+GET https://<NAME-OF-YOUR-RESOURCE>.cognitiveservices.azure.com/translator/text/batch/v1.1/batches/{id}/documents
+```
+
+Learn how to find your [custom domain name](../quickstarts/document-translation-rest-api.md).
+
+> [!IMPORTANT]
+>
+> * **All API requests to the Document Translation service require a custom domain endpoint**.
+> * You can't use the endpoint found on your Azure portal resource _Keys and Endpoint_ page nor the global translator endpoint (`api.cognitive.microsofttranslator.com`) to make HTTP requests to Document Translation.
+
+## Request parameters
+
+Request parameters passed on the query string are:
+
+|Query parameter|In|Required|Type|Description|
+| | | | | |
+|`id`|path|True|string|The operation ID.|
+|`$maxpagesize`|query|False|integer int32|`$maxpagesize` is the maximum items returned in a page. If more items are requested via `$top` (or `$top` isn't specified and there are more items to be returned), @nextLink will contain the link to the next page. Clients MAY request server-driven paging with a specific page size by specifying a `$maxpagesize` preference. The server SHOULD honor this preference if the specified page size is smaller than the server's default page size.|
+|$orderBy|query|False|array|The sorting query for the collection (ex: `CreatedDateTimeUtc asc`, `CreatedDateTimeUtc desc`).|
+|`$skip`|query|False|integer int32|$skip indicates the number of records to skip from the list of records held by the server based on the sorting method specified. By default, we sort by descending start time. Clients MAY use $top and `$skip` query parameters to specify the number of results to return and an offset into the collection. When the client returns both `$top` and `$skip`, the server SHOULD first apply `$skip` and then `$top` on the collection. Note: If the server can't honor `$top` and/or `$skip`, the server MUST return an error to the client informing about it instead of just ignoring the query options.|
+|`$top`|query|False|integer int32|`$top` indicates the total number of records the user wants to be returned across all pages. Clients MAY use `$top` and `$skip` query parameters to specify the number of results to return and an offset into the collection. When the client returns both `$top` and `$skip`, the server SHOULD first apply `$skip` and then `$top` on the collection. Note: If the server can't honor `$top` and/or `$skip`, the server MUST return an error to the client informing about it instead of just ignoring the query options.|
+|createdDateTimeUtcEnd|query|False|string date-time|The end datetime to get items before.|
+|createdDateTimeUtcStart|query|False|string date-time|The start datetime to get items after.|
+|`ids`|query|False|array|IDs to use in filtering.|
+|statuses|query|False|array|Statuses to use in filtering.|
+
+## Request headers
+
+Request headers are:
+
+|Headers|Description|
+| | |
+|Ocp-Apim-Subscription-Key|Required request header|
+
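+The following is a minimal sketch (using the third-party Python `requests` package) that lists document statuses and follows `@nextLink` until no continuation token is returned. The resource name, key, and operation ID are placeholders.
+
+```python
+import requests
+
+# Placeholders: substitute your own resource name, key, and operation ID.
+endpoint = "https://<NAME-OF-YOUR-RESOURCE>.cognitiveservices.azure.com"
+key = "<YOUR-TRANSLATOR-KEY>"
+operation_id = "<OPERATION-ID>"
+
+headers = {"Ocp-Apim-Subscription-Key": key}
+url = f"{endpoint}/translator/text/batch/v1.1/batches/{operation_id}/documents?$maxpagesize=20"
+
+# Page through the results; the absence of @nextLink means no more pages.
+while url:
+    response = requests.get(url, headers=headers)
+    response.raise_for_status()
+    page = response.json()
+    for document in page["value"]:
+        print(document["id"], document["status"])
+    url = page.get("@nextLink")
+```
+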
+## Response status codes
+
+The following are the possible HTTP status codes that a request returns.
+
+|Status Code|Description|
+| | |
+|200|OK. Successful request and returns the status of the documents. Headers: Retry-After: integer, ETag: string|
+|400|Invalid request. Check input parameters.|
+|401|Unauthorized. Check your credentials.|
+|404|Resource isn't found.|
+|500|Internal Server Error.|
+|Other Status Codes|<ul><li>Too many requests</li><li>Server temporarily unavailable</li></ul>|
++
+## Get documents status response
+
+### Successful get documents status response
+
+The following information is returned in a successful response.
+
+|Name|Type|Description|
+| | | |
+|@nextLink|string|URL for the next page. Null if no more pages are available.|
+|value|DocumentStatus []|The detail status list of individual documents.|
+|value.path|string|Location of the document or folder.|
+|value.sourcePath|string|Location of the source document.|
+|value.createdDateTimeUtc|string|Operation created date time.|
+|value.lastActionDateTimeUtc|string|Date time in which the operation's status has been updated.|
+|value.status|status|List of possible statuses for job or document.<ul><li>Canceled</li><li>Cancelling</li><li>Failed</li><li>NotStarted</li><li>Running</li><li>Succeeded</li><li>ValidationFailed</li></ul>|
+|value.to|string|To language.|
+|value.progress|number|Progress of the translation if available.|
+|value.id|string|Document ID.|
+|value.characterCharged|integer|Characters charged by the API.|
+
+### Error response
+
+|Name|Type|Description|
+| | | |
+|code|string|Enums containing high-level error codes. Possible values:<br/><ul><li>InternalServerError</li><li>InvalidArgument</li><li>InvalidRequest</li><li>RequestRateTooHigh</li><li>ResourceNotFound</li><li>ServiceUnavailable</li><li>Unauthorized</li></ul>|
+|message|string|Gets high-level error message.|
+|target|string|Gets the source of the error. For example, it would be `documents` or `document id` for an invalid document.|
+|innerError|InnerTranslationError|New Inner Error format that conforms to Azure AI services API Guidelines. This error message contains required properties ErrorCode, message, and optional properties target, details (key value pair), inner error (it can be nested).|
+|innerError.code|string|Gets code error string.|
+|innerError.message|string|Gets high-level error message.|
+|innerError.target|string|Gets the source of the error. For example, it would be `documents` or `document id` if there was an invalid document.|
+
+## Examples
+
+### Example successful response
+
+The following JSON object is an example of a successful response.
+
+```JSON
+{
+ "value": [
+ {
+ "path": "https://myblob.blob.core.windows.net/destinationContainer/fr/mydoc.txt",
+ "sourcePath": "https://myblob.blob.core.windows.net/sourceContainer/fr/mydoc.txt",
+ "createdDateTimeUtc": "2020-03-26T00:00:00Z",
+ "lastActionDateTimeUtc": "2020-03-26T01:00:00Z",
+ "status": "Running",
+ "to": "fr",
+ "progress": 0.1,
+ "id": "273622bd-835c-4946-9798-fd8f19f6bbf2",
+ "characterCharged": 0
+ }
+ ],
+ "@nextLink": "https://westus.cognitiveservices.azure.com/translator/text/batch/v1.1/operation/0FA2822F-4C2A-4317-9C20-658C801E0E55/documents?$top=5&$skip=15"
+}
+```
+
+### Example error response
+
+The following JSON object is an example of an error response. The schema for other error codes is the same.
+
+Status code: 500
+
+```JSON
+{
+ "error": {
+ "code": "InternalServerError",
+ "message": "Internal Server Error",
+ "target": "Operation",
+ "innerError": {
+ "code": "InternalServerError",
+ "message": "Unexpected internal server error has occurred"
+ }
+ }
+}
+```
+
+## Next steps
+
+Follow our quickstart to learn more about using Document Translation and the client library.
+
+> [!div class="nextstepaction"]
+> [Get started with Document Translation](../quickstarts/document-translation-rest-api.md)
ai-services Get Supported Document Formats https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/translator/document-translation/reference/get-supported-document-formats.md
+
+ Title: Get supported document formats method
+
+description: The get supported document formats method returns a list of supported document formats.
+++++++ Last updated : 07/18/2023++
+# Get supported document formats
+
+Reference</br>
+Service: **Azure AI Document Translation**</br>
+API Version: **v1.1**</br>
+
+The Get supported document formats method returns a list of document formats supported by the Document Translation service. The list includes the common file extension, and the content-type if using the upload API.
+
+## Request URL
+
+Send a `GET` request to:
+```HTTP
+GET https://<NAME-OF-YOUR-RESOURCE>.cognitiveservices.azure.com/translator/text/batch/v1.1/documents/formats
+```
+
+Learn how to find your [custom domain name](../quickstarts/document-translation-rest-api.md).
+
+> [!IMPORTANT]
+>
+> * **All API requests to the Document Translation service require a custom domain endpoint**.
+> * You can't use the endpoint found on your Azure portal resource _Keys and Endpoint_ page nor the global translator endpoint (`api.cognitive.microsofttranslator.com`) to make HTTP requests to Document Translation.
+
+## Request headers
+
+Request headers are:
+
+|Headers|Description|
+|--|--|
+|Ocp-Apim-Subscription-Key|Required request header|
+
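+As a minimal sketch (using the third-party Python `requests` package), the supported formats can be listed as follows. The resource name and key are placeholders.
+
+```python
+import requests
+
+# Placeholders: substitute your own resource name and key.
+endpoint = "https://<NAME-OF-YOUR-RESOURCE>.cognitiveservices.azure.com"
+key = "<YOUR-TRANSLATOR-KEY>"
+
+url = f"{endpoint}/translator/text/batch/v1.1/documents/formats"
+response = requests.get(url, headers={"Ocp-Apim-Subscription-Key": key})
+response.raise_for_status()
+
+# Print each supported format name with its file extensions.
+for file_format in response.json()["value"]:
+    print(file_format["format"], file_format["fileExtensions"])
+```
+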
+## Response status codes
+
+The following are the possible HTTP status codes that a request returns.
+
+|Status Code|Description|
+|--|--|
+|200|OK. Returns the list of supported document file formats.|
+|500|Internal Server Error.|
+|Other Status Codes|<ul><li>Too many requests</li><li>Server temporarily unavailable</li></ul>|
+
+## File format response
+
+### Successful fileFormatListResult response
+
+The following information is returned in a successful response.
+
+|Name|Type|Description|
+| | | |
+|value|FileFormat []|FileFormat[] contains the listed details.|
+|value.contentTypes|string[]|Supported Content-Types for this format.|
+|value.defaultVersion|string|Default version if none is specified.|
+|value.fileExtensions|string[]|Supported file extension for this format.|
+|value.format|string|Name of the format.|
+|value.versions|string [] | Supported version.|
+
+### Error response
+
+|Name|Type|Description|
+| | | |
+ |code|string|Enums containing high-level error codes. Possible values:<ul><li>InternalServerError</li><li>InvalidArgument</li><li>InvalidRequest</li><li>RequestRateTooHigh</li><li>ResourceNotFound</li><li>ServiceUnavailable</li><li>Unauthorized</li></ul>|
+|message|string|Gets high-level error message.|
+|innerError|InnerTranslationError|New Inner Error format that conforms to Azure AI services API Guidelines. This error message contains required properties ErrorCode, message, and optional properties target, details (key value pair), inner error (it can be nested).|
+|innerError.code|string|Gets code error string.|
+|innerError.message|string|Gets high-level error message.|
+|innerError.target|string|Gets the source of the error. For example, it would be `documents` or `document id` for an invalid document.|
+
+## Examples
+
+### Example successful response
+
+The following JSON object is an example of a successful response.
+
+Status code: 200
+
+```JSON
+{
+ "value": [
+ {
+ "format": "PlainText",
+ "fileExtensions": [
+ ".txt"
+ ],
+ "contentTypes": [
+ "text/plain"
+ ],
+ "versions": []
+ },
+ {
+ "format": "OpenXmlWord",
+ "fileExtensions": [
+ ".docx"
+ ],
+ "contentTypes": [
+ "application/vnd.openxmlformats-officedocument.wordprocessingml.document"
+ ],
+ "versions": []
+ },
+ {
+ "format": "OpenXmlPresentation",
+ "fileExtensions": [
+ ".pptx"
+ ],
+ "contentTypes": [
+ "application/vnd.openxmlformats-officedocument.presentationml.presentation"
+ ],
+ "versions": []
+ },
+ {
+ "format": "OpenXmlSpreadsheet",
+ "fileExtensions": [
+ ".xlsx"
+ ],
+ "contentTypes": [
+ "application/vnd.openxmlformats-officedocument.spreadsheetml.sheet"
+ ],
+ "versions": []
+ },
+ {
+ "format": "OutlookMailMessage",
+ "fileExtensions": [
+ ".msg"
+ ],
+ "contentTypes": [
+ "application/vnd.ms-outlook"
+ ],
+ "versions": []
+ },
+ {
+ "format": "HtmlFile",
+ "fileExtensions": [
+ ".html",
+ ".htm"
+ ],
+ "contentTypes": [
+ "text/html"
+ ],
+ "versions": []
+ },
+ {
+ "format": "PortableDocumentFormat",
+ "fileExtensions": [
+ ".pdf"
+ ],
+ "contentTypes": [
+ "application/pdf"
+ ],
+ "versions": []
+ },
+ {
+ "format": "XLIFF",
+ "fileExtensions": [
+ ".xlf"
+ ],
+ "contentTypes": [
+ "application/xliff+xml"
+ ],
+ "versions": [
+ "1.0",
+ "1.1",
+ "1.2"
+ ]
+ },
+ {
+ "format": "TSV",
+ "fileExtensions": [
+ ".tsv",
+ ".tab"
+ ],
+ "contentTypes": [
+ "text/tab-separated-values"
+ ],
+ "versions": []
+ },
+ {
+ "format": "CSV",
+ "fileExtensions": [
+ ".csv"
+ ],
+ "contentTypes": [
+ "text/csv"
+ ],
+ "versions": []
+ },
+ {
+ "format": "RichTextFormat",
+ "fileExtensions": [
+ ".rtf"
+ ],
+ "contentTypes": [
+ "application/rtf"
+ ],
+ "versions": []
+ },
+ {
+ "format": "WordDocument",
+ "fileExtensions": [
+ ".doc"
+ ],
+ "contentTypes": [
+ "application/msword"
+ ],
+ "versions": []
+ },
+ {
+ "format": "PowerpointPresentation",
+ "fileExtensions": [
+ ".ppt"
+ ],
+ "contentTypes": [
+ "application/vnd.ms-powerpoint"
+ ],
+ "versions": []
+ },
+ {
+ "format": "ExcelSpreadsheet",
+ "fileExtensions": [
+ ".xls"
+ ],
+ "contentTypes": [
+ "application/vnd.ms-excel"
+ ],
+ "versions": []
+ },
+ {
+ "format": "OpenDocumentText",
+ "fileExtensions": [
+ ".odt"
+ ],
+ "contentTypes": [
+ "application/vnd.oasis.opendocument.text"
+ ],
+ "versions": []
+ },
+ {
+ "format": "OpenDocumentPresentation",
+ "fileExtensions": [
+ ".odp"
+ ],
+ "contentTypes": [
+ "application/vnd.oasis.opendocument.presentation"
+ ],
+ "versions": []
+ },
+ {
+ "format": "OpenDocumentSpreadsheet",
+ "fileExtensions": [
+ ".ods"
+ ],
+ "contentTypes": [
+ "application/vnd.oasis.opendocument.spreadsheet"
+ ],
+ "versions": []
+ },
+ {
+ "format": "Markdown",
+ "fileExtensions": [
+ ".markdown",
+ ".mdown",
+ ".mkdn",
+ ".md",
+ ".mkd",
+ ".mdwn",
+ ".mdtxt",
+ ".mdtext",
+ ".rmd"
+ ],
+ "contentTypes": [
+ "text/markdown",
+ "text/x-markdown",
+ "text/plain"
+ ],
+ "versions": []
+ },
+ {
+ "format": "Mhtml",
+ "fileExtensions": [
+ ".mhtml",
+ ".mht"
+ ],
+ "contentTypes": [
+ "message/rfc822",
+ "application/x-mimearchive",
+ "multipart/related"
+ ],
+ "versions": []
+ }
+ ]
+}
+
+```
+
+### Example error response
+
+The following JSON object is an example of an error response. The schema for other error codes is the same.
+
+Status code: 500
+
+```JSON
+{
+ "error": {
+ "code": "InternalServerError",
+ "message": "Internal Server Error",
+ "innerError": {
+ "code": "InternalServerError",
+ "message": "Unexpected internal server error has occurred"
+ }
+ }
+}
+```
+
+## Next steps
+
+Follow our quickstart to learn more about using Document Translation and the client library.
+
+> [!div class="nextstepaction"]
+> [Get started with Document Translation](../quickstarts/document-translation-rest-api.md)
ai-services Get Supported Glossary Formats https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/translator/document-translation/reference/get-supported-glossary-formats.md
+
+ Title: Get supported glossary formats method
+
+description: The get supported glossary formats method returns the list of supported glossary formats.
+++++++ Last updated : 07/18/2023++
+# Get supported glossary formats
+
+Reference</br>
+Service: **Azure AI Document Translation**</br>
+API Version: **v1.1**</br>
+
+The Get supported glossary formats method returns a list of glossary formats supported by the Document Translation service. The list includes the common file extension used.
+
+## Request URL
+
+Send a `GET` request to:
+```HTTP
+GET https://<NAME-OF-YOUR-RESOURCE>.cognitiveservices.azure.com/translator/text/batch/v1.1/glossaries/formats
+```
+
+Learn how to find your [custom domain name](../quickstarts/document-translation-rest-api.md).
+
+> [!IMPORTANT]
+>
+> * **All API requests to the Document Translation service require a custom domain endpoint**.
+> * You can't use the endpoint found on your Azure portal resource _Keys and Endpoint_ page nor the global translator endpoint (`api.cognitive.microsofttranslator.com`) to make HTTP requests to Document Translation.
+
+## Request headers
+
+Request headers are:
+
+|Headers|Description|
+| | |
+|Ocp-Apim-Subscription-Key|Required request header|
+
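+The following is a minimal sketch (using the third-party Python `requests` package) that retrieves the supported glossary formats and checks whether a given file extension is supported. The resource name and key are placeholders.
+
+```python
+import requests
+
+# Placeholders: substitute your own resource name and key.
+endpoint = "https://<NAME-OF-YOUR-RESOURCE>.cognitiveservices.azure.com"
+key = "<YOUR-TRANSLATOR-KEY>"
+
+url = f"{endpoint}/translator/text/batch/v1.1/glossaries/formats"
+response = requests.get(url, headers={"Ocp-Apim-Subscription-Key": key})
+response.raise_for_status()
+
+# Collect every supported glossary file extension, then test one of them.
+extensions = {ext for fmt in response.json()["value"] for ext in fmt["fileExtensions"]}
+print(".csv" in extensions)
+```
+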
+## Response status codes
+
+The following are the possible HTTP status codes that a request returns.
+
+|Status Code|Description|
+| | |
+|200|OK. Returns the list of supported glossary file formats.|
+|500|Internal Server Error.|
+|Other Status Codes|<ul><li>Too many requests</li><li>Server temporarily unavailable</li></ul>|
++
+## Get supported glossary formats response
+
+### Successful get supported glossary formats response
+
+Base type for the list returned in the Get supported glossary formats API.
+
+|Name|Type|Description|
+| | | |
+|value|FileFormat []|FileFormat[] contains the listed details.|
+|value.contentTypes|string []|Supported Content-Types for this format.|
+|value.defaultVersion|string|Default version if none is specified|
+|value.fileExtensions|string []| Supported file extension for this format.|
+|value.format|string|Name of the format.|
+|value.versions|string []| Supported version.|
+
+### Error response
+
+|Name|Type|Description|
+| | | |
+|code|string|Enums containing high-level error codes. Possible values:<br/><ul><li>InternalServerError</li><li>InvalidArgument</li><li>InvalidRequest</li><li>RequestRateTooHigh</li><li>ResourceNotFound</li><li>ServiceUnavailable</li><li>Unauthorized</li></ul>|
+|message|string|Gets high-level error message.|
+|innerError|InnerTranslationError|New Inner Error format that conforms to Azure AI services API Guidelines. This error message contains required properties ErrorCode, message, and optional properties target, details (key value pair), inner error (it can be nested).|
+|innerError.code|string|Gets code error string.|
+|innerError.message|string|Gets high-level error message.|
+|innerError.target|string|Gets the source of the error. For example, it would be `documents` or `document id` if there was an invalid document.|
+
+## Examples
+
+### Example successful response
+
+The following JSON object is an example of a successful response.
+
+```JSON
+{
+ "value": [
+ {
+ "format": "XLIFF",
+ "fileExtensions": [
+ ".xlf"
+ ],
+ "contentTypes": [
+ "application/xliff+xml"
+ ],
+ "defaultVersion": "1.2",
+ "versions": [
+ "1.0",
+ "1.1",
+ "1.2"
+ ]
+ },
+ {
+ "format": "TSV",
+ "fileExtensions": [
+ ".tsv",
+ ".tab"
+ ],
+ "contentTypes": [
+ "text/tab-separated-values"
+ ]
+ },
+ {
+ "format": "CSV",
+ "fileExtensions": [
+ ".csv"
+ ],
+ "contentTypes": [
+ "text/csv"
+ ]
+ }
+ ]
+}
+
+```
+
+### Example error response
+The following JSON object is an example of an error response. The schema for other error codes is the same.
+
+Status code: 500
+
+```JSON
+{
+ "error": {
+ "code": "InternalServerError",
+ "message": "Internal Server Error",
+ "innerError": {
+ "code": "InternalServerError",
+ "message": "Unexpected internal server error has occurred"
+ }
+ }
+}
+```
+
+## Next steps
+
+Follow our quickstart to learn more about using Document Translation and the client library.
+
+> [!div class="nextstepaction"]
+> [Get started with Document Translation](../quickstarts/document-translation-rest-api.md)
ai-services Get Supported Storage Sources https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/translator/document-translation/reference/get-supported-storage-sources.md
+
+ Title: Get supported storage sources method
+
+description: The get supported storage sources method returns a list of supported storage sources.
+++++++ Last updated : 07/18/2023++
+# Get supported storage sources
+
+Reference</br>
+Service: **Azure AI Document Translation**</br>
+API Version: **v1.1**</br>
+
+The Get supported storage sources method returns a list of storage sources/options supported by the Document Translation service.
+
+## Request URL
+
+Send a `GET` request to:
+```HTTP
+GET https://<NAME-OF-YOUR-RESOURCE>.cognitiveservices.azure.com/translator/text/batch/v1.1/storagesources
+```
+
+Learn how to find your [custom domain name](../quickstarts/document-translation-rest-api.md).
+
+> [!IMPORTANT]
+>
+> * **All API requests to the Document Translation service require a custom domain endpoint**.
+> * You can't use the endpoint found on your Azure portal resource _Keys and Endpoint_ page nor the global translator endpoint (`api.cognitive.microsofttranslator.com`) to make HTTP requests to Document Translation.
+
+## Request headers
+
+Request headers are:
+
+|Headers|Description|
+| | |
+|Ocp-Apim-Subscription-Key|Required request header|
+
+## Response status codes
+
+The following are the possible HTTP status codes that a request returns.
+
+|Status Code|Description|
+| | |
+|200|OK. Successful request and returns the list of storage sources.|
+|500|Internal Server Error.|
+|Other Status Codes|<ul><li>Too many requests</li><li>Server temporarily unavailable</li></ul>|
+
+## Get supported storage sources response
+
+### Successful get supported storage sources response
+
+Base type for the list returned in the Get supported storage sources API.
+
+|Name|Type|Description|
+| | | |
+|value|string []|List of objects.|
+
+### Error response
+
+|Name|Type|Description|
+| | | |
+|code|string|Enums containing high-level error codes. Possible values:<br/><ul><li>InternalServerError</li><li>InvalidArgument</li><li>InvalidRequest</li><li>RequestRateTooHigh</li><li>ResourceNotFound</li><li>ServiceUnavailable</li><li>Unauthorized</li></ul>|
+|message|string|Gets high-level error message.|
+|innerError|InnerTranslationError|New Inner Error format that conforms to Azure AI services API Guidelines. This error message contains required properties ErrorCode, message, and optional properties target, details (key value pair), inner error (it can be nested).|
+|innerError.code|string|Gets code error string.|
+|innerError.message|string|Gets high-level error message.|
+|innerError.target|string|Gets the source of the error. For example, it would be `documents` or `document id` if there was an invalid document.|
+
+## Examples
+
+### Example successful response
+
+The following JSON object is an example of a successful response.
+
+```JSON
+{
+ "value": [
+ "AzureBlob"
+ ]
+}
+```
+
+### Example error response
+The following JSON object is an example of an error response. The schema for other error codes is the same.
+
+Status code: 500
+
+```JSON
+{
+ "error": {
+ "code": "InternalServerError",
+ "message": "Internal Server Error",
+ "innerError": {
+ "code": "InternalServerError",
+ "message": "Unexpected internal server error has occurred"
+ }
+ }
+}
+```
+
+## Next steps
+
+Follow our quickstart to learn more about using Document Translation and the client library.
+
+> [!div class="nextstepaction"]
+> [Get started with Document Translation](../quickstarts/document-translation-rest-api.md)
ai-services Get Translation Status https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/translator/document-translation/reference/get-translation-status.md
+
+ Title: Get translation status
+
+description: The get translation status method returns the status for a document translation request.
+++++++ Last updated : 07/18/2023++
+# Get translation status
+
+Reference</br>
+Service: **Azure AI Document Translation**</br>
+API Version: **v1.1**</br>
+
+The Get translation status method returns the status for a document translation request. The status includes the overall request status and the status for documents that are being translated as part of that request.
+
+## Request URL
+
+Send a `GET` request to:
+
+```HTTP
+GET https://<NAME-OF-YOUR-RESOURCE>.cognitiveservices.azure.com/translator/text/batch/v1.1/batches/{id}
+```
+
+Learn how to find your [custom domain name](../quickstarts/document-translation-rest-api.md).
+
+> [!IMPORTANT]
+>
+> * **All API requests to the Document Translation service require a custom domain endpoint**.
+> * You can't use the endpoint found on your Azure portal resource _Keys and Endpoint_ page nor the global translator endpoint (`api.cognitive.microsofttranslator.com`) to make HTTP requests to Document Translation.
+
+## Request parameters
+
+Request parameters passed on the query string are:
+
+|Query parameter|Required|Description|
+| | | |
+|`id`|True|The operation ID.|
+
+## Request headers
+
+Request headers are:
+
+|Headers|Description|
+| | |
+|Ocp-Apim-Subscription-Key|Required request header|
+
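+The following is a minimal polling sketch (using the third-party Python `requests` package) that checks the job status until it reaches one of the terminal statuses listed in the response table. The resource name, key, operation ID, and polling interval are placeholders.
+
+```python
+import time
+
+import requests
+
+# Placeholders: substitute your own resource name, key, and operation ID.
+endpoint = "https://<NAME-OF-YOUR-RESOURCE>.cognitiveservices.azure.com"
+key = "<YOUR-TRANSLATOR-KEY>"
+operation_id = "<OPERATION-ID>"
+
+url = f"{endpoint}/translator/text/batch/v1.1/batches/{operation_id}"
+headers = {"Ocp-Apim-Subscription-Key": key}
+
+# Terminal statuses, based on the status values documented in the response table.
+terminal = {"Succeeded", "Failed", "Canceled", "ValidationFailed"}
+
+while True:
+    job = requests.get(url, headers=headers).json()
+    print(job["status"], job["summary"])
+    if job["status"] in terminal:
+        break
+    time.sleep(30)  # illustrative polling interval
+```
+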
+## Response status codes
+
+The following are the possible HTTP status codes that a request returns.
+
+|Status Code|Description|
+| | |
+|200|OK. Successful request and returns the status of the batch translation operation. Headers: Retry-After: integer, ETag: string|
+|401|Unauthorized. Check your credentials.|
+|404|Resource isn't found.|
+|500|Internal Server Error.|
+|Other Status Codes|<ul><li>Too many requests</li><li>Server temporarily unavailable</li></ul>|
+
+## Get translation status response
+
+### Successful get translation status response
+
+The following information is returned in a successful response.
+
+|Name|Type|Description|
+| | | |
+|`id`|string|ID of the operation.|
+|createdDateTimeUtc|string|Operation created date time.|
+|lastActionDateTimeUtc|string|Date time in which the operation's status has been updated.|
+|status|String|List of possible statuses for job or document: <ul><li>Canceled</li><li>Cancelling</li><li>Failed</li><li>NotStarted</li><li>Running</li><li>Succeeded</li><li>ValidationFailed</li></ul>|
+|summary|StatusSummary|Summary containing the listed details.|
+|summary.total|integer|Count of total documents.|
+|summary.failed|integer|Count of documents failed.|
+|summary.success|integer|Count of documents successfully translated.|
+|summary.inProgress|integer|Count of documents in progress.|
+|summary.notYetStarted|integer|Count of documents not yet started processing.|
+|summary.cancelled|integer|Count of documents canceled.|
+|summary.totalCharacterCharged|integer|Total characters charged by the API.|
+
+### Error response
+
+|Name|Type|Description|
+| | | |
+|code|string|Enums containing high-level error codes. Possible values:<br/><ul><li>InternalServerError</li><li>InvalidArgument</li><li>InvalidRequest</li><li>RequestRateTooHigh</li><li>ResourceNotFound</li><li>ServiceUnavailable</li><li>Unauthorized</li></ul>|
+|message|string|Gets high-level error message.|
+|target|string|Gets the source of the error. For example, it would be `documents` or `document id` for an invalid document.|
+|innerError|InnerTranslationError|New Inner Error format that conforms to Azure AI services API Guidelines. This error message contains required properties ErrorCode, message, and optional properties target, details (key value pair), inner error (it can be nested).|
+|innerError.code|string|Gets code error string.|
+|innerError.message|string|Gets high-level error message.|
+|innerError.target|string|Gets the source of the error. For example, it would be `documents` or `document id` for an invalid document.|
+
+## Examples
+
+### Example successful response
+
+The following JSON object is an example of a successful response.
+
+```JSON
+{
+ "id": "727bf148-f327-47a0-9481-abae6362f11e",
+ "createdDateTimeUtc": "2020-03-26T00:00:00Z",
+ "lastActionDateTimeUtc": "2020-03-26T01:00:00Z",
+ "status": "Succeeded",
+ "summary": {
+ "total": 10,
+ "failed": 1,
+ "success": 9,
+ "inProgress": 0,
+ "notYetStarted": 0,
+ "cancelled": 0,
+ "totalCharacterCharged": 0
+ }
+}
+```
+
+### Example error response
+
+The following JSON object is an example of an error response. The schema for other error codes is the same.
+
+Status code: 401
+
+```JSON
+{
+ "error": {
+ "code": "Unauthorized",
+ "message": "User is not authorized",
+ "target": "Document",
+ "innerError": {
+ "code": "Unauthorized",
+ "message": "Operation is not authorized"
+ }
+ }
+}
+```
+
+## Next steps
+
+Follow our quickstart to learn more about using Document Translation and the client library.
+
+> [!div class="nextstepaction"]
+> [Get started with Document Translation](../quickstarts/document-translation-rest-api.md)
ai-services Get Translations Status https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/translator/document-translation/reference/get-translations-status.md
+
+ Title: Get translations status
+
+description: The get translations status method returns a list of batch requests submitted and the status for each request.
+++++++ Last updated : 07/18/2023++
+# Get translations status
+
+Reference</br>
+Service: **Azure AI Document Translation**</br>
+API Version: **v1.1**</br>
+
+The Get translations status method returns a list of batch requests submitted and the status for each request. This list only contains batch requests submitted by the user (based on the resource).
+
+If the number of requests exceeds our paging limit, server-side paging is used. Paginated responses indicate a partial result and include a continuation token in the response. The absence of a continuation token means that no other pages are available.
+
+`$top`, `$skip` and `$maxpagesize` query parameters can be used to specify the number of results to return and an offset for the collection.
+
+`$top` indicates the total number of records the user wants to be returned across all pages. `$skip` indicates the number of records to skip from the list of batches based on the sorting method specified. By default, we sort by descending start time. `$maxpagesize` is the maximum items returned in a page. If more items are requested via `$top` (or `$top` isn't specified and there are more items to be returned), @nextLink will contain the link to the next page.
+
+The `$orderBy` query parameter can be used to sort the returned list (for example, `$orderBy=createdDateTimeUtc asc` or `$orderBy=createdDateTimeUtc desc`). The default sorting is descending by `createdDateTimeUtc`. Some query parameters can be used to filter the returned list; for example, `status=Succeeded,Cancelled` returns only succeeded and canceled operations. The `createdDateTimeUtcStart` and `createdDateTimeUtcEnd` parameters can be used together or separately to specify a datetime range to filter the returned list by. The supported filtering query parameters are `status`, `ids`, `createdDateTimeUtcStart`, and `createdDateTimeUtcEnd`.
+
+The server honors the values specified by the client. However, clients must be prepared to handle responses that contain a different page size or contain a continuation token.
+
+When both `$top` and `$skip` are included, the server should first apply `$skip` and then `$top` on the collection.
+
+> [!NOTE]
+> If the server can't honor `$top` and/or `$skip`, the server must return an error to the client informing about it instead of just ignoring the query options. This reduces the risk of the client making assumptions about the data returned.
+
+## Request URL
+
+Send a `GET` request to:
+```HTTP
+GET https://<NAME-OF-YOUR-RESOURCE>.cognitiveservices.azure.com/translator/text/batch/v1.1/batches
+```
+
+Learn how to find your [custom domain name](../quickstarts/document-translation-rest-api.md).
+
+> [!IMPORTANT]
+>
+> * **All API requests to the Document Translation service require a custom domain endpoint**.
+> * You can't use the endpoint found on your Azure portal resource _Keys and Endpoint_ page nor the global translator endpoint (`api.cognitive.microsofttranslator.com`) to make HTTP requests to Document Translation.
+
+## Request parameters
+
+Request parameters passed on the query string are:
+
+|Query parameter|In|Required|Type|Description|
+| | | |||
+|`$maxpagesize`|query|False|integer int32|`$maxpagesize` is the maximum items returned in a page. If more items are requested via `$top` (or `$top` isn't specified and there are more items to be returned), @nextLink will contain the link to the next page. Clients MAY request server-driven paging with a specific page size by specifying a `$maxpagesize` preference. The server SHOULD honor this preference if the specified page size is smaller than the server's default page size.|
+|$orderBy|query|False|array|The sorting query for the collection (ex: `CreatedDateTimeUtc asc`, `CreatedDateTimeUtc desc`)|
+|`$skip`|query|False|integer int32|`$skip` indicates the number of records to skip from the list of records held by the server based on the sorting method specified. By default, we sort by descending start time. Clients MAY use `$top` and `$skip` query parameters to specify the number of results to return and an offset into the collection. When the client returns both `$top` and `$skip`, the server SHOULD first apply `$skip` and then `$top` on the collection.Note: If the server can't honor `$top` and/or `$skip`, the server MUST return an error to the client informing about it instead of just ignoring the query options.|
+|`$top`|query|False|integer int32|`$top` indicates the total number of records the user wants to be returned across all pages. Clients MAY use `$top` and `$skip` query parameters to specify the number of results to return and an offset into the collection. When the client returns both `$top` and `$skip`, the server SHOULD first apply `$skip` and then `$top` on the collection.Note: If the server can't honor `$top` and/or `$skip`, the server MUST return an error to the client informing about it instead of just ignoring the query options.|
+|createdDateTimeUtcEnd|query|False|string date-time|The end datetime to get items before.|
+|createdDateTimeUtcStart|query|False|string date-time|The start datetime to get items after.|
+|`ids`|query|False|array|IDs to use in filtering.|
+|statuses|query|False|array|Statuses to use in filtering.|
+
+## Request headers
+
+Request headers are:
+
+|Headers|Description|
+| | |
+|Ocp-Apim-Subscription-Key|Required request header|
+
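+As a minimal sketch (using the third-party Python `requests` package), the job list can be filtered with the query parameters described above. The resource name, key, and filter values are placeholders.
+
+```python
+import requests
+
+# Placeholders: substitute your own resource name, key, and filter values.
+endpoint = "https://<NAME-OF-YOUR-RESOURCE>.cognitiveservices.azure.com"
+key = "<YOUR-TRANSLATOR-KEY>"
+
+url = f"{endpoint}/translator/text/batch/v1.1/batches"
+headers = {"Ocp-Apim-Subscription-Key": key}
+
+# Return only succeeded jobs created after an illustrative UTC timestamp.
+params = {
+    "statuses": "Succeeded",
+    "createdDateTimeUtcStart": "2023-07-01T00:00:00Z",
+}
+
+response = requests.get(url, headers=headers, params=params)
+response.raise_for_status()
+
+for job in response.json()["value"]:
+    print(job["id"], job["status"], job["summary"]["total"])
+```
+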
+## Response status codes
+
+The following are the possible HTTP status codes that a request returns.
+
+|Status Code|Description|
+| | |
+|200|OK. Successful request and returns the status of all the operations. Headers: Retry-After: integer, ETag: string|
+|400|Bad Request. Invalid request. Check input parameters.|
+|401|Unauthorized. Check your credentials.|
+|500|Internal Server Error.|
+|Other Status Codes|<ul><li>Too many requests</li><li>Server temporarily unavailable</li></ul>|
+
+## Get translations status response
+
+### Successful get translations status response
+
+The following information is returned in a successful response.
+
+|Name|Type|Description|
+| | | |
+|@nextLink|string|URL for the next page. Null if no more pages are available.|
+|value|TranslationStatus[]|TranslationStatus[] Array|
+|value.id|string|ID of the operation.|
+|value.createdDateTimeUtc|string|Operation created date time.|
+|value.lastActionDateTimeUtc|string|Date time in which the operation's status has been updated.|
+|value.status|String|List of possible statuses for job or document: <ul><li>Canceled</li><li>Cancelling</li><li>Failed</li><li>NotStarted</li><li>Running</li><li>Succeeded</li><li>ValidationFailed</li></ul>|
+|value.summary|StatusSummary[]|Summary containing the listed details.|
+|value.summary.total|integer|Count of total documents.|
+|value.summary.failed|integer|Count of documents failed.|
+|value.summary.success|integer|Count of documents successfully translated.|
+|value.summary.inProgress|integer|Count of documents in progress.|
+|value.summary.notYetStarted|integer|Count of documents not yet started processing.|
+|value.summary.cancelled|integer|Count of documents canceled.|
+|value.summary.totalCharacterCharged|integer|Total count of characters charged.|
+
+### Error response
+
+|Name|Type|Description|
+| | | |
+|code|string|Enums containing high-level error codes. Possible values:<br/><ul><li>InternalServerError</li><li>InvalidArgument</li><li>InvalidRequest</li><li>RequestRateTooHigh</li><li>ResourceNotFound</li><li>ServiceUnavailable</li><li>Unauthorized</li></ul>|
+|message|string|Gets high-level error message.|
+|target|string|Gets the source of the error. For example, it would be `documents` or `document id` if there was an invalid document.|
+|innerError|InnerTranslationError|New Inner Error format that conforms to Azure AI services API Guidelines. This error message contains required properties ErrorCode, message, and optional properties target, details (key value pair), inner error (it can be nested).|
+|innerError.code|string|Gets code error string.|
+|innerError.message|string|Gets high-level error message.|
+|innerError.target|string|Gets the source of the error. For example, it would be `documents` or `document id` if there was an invalid document.|
+
+## Examples
+
+### Example successful response
+
+The following JSON object is an example of a successful response.
+
+```JSON
+{
+ "value": [
+ {
+ "id": "36724748-f7a0-4db7-b7fd-f041ddc75033",
+ "createdDateTimeUtc": "2021-06-18T03:35:30.153374Z",
+ "lastActionDateTimeUtc": "2021-06-18T03:36:44.6155316Z",
+ "status": "Succeeded",
+ "summary": {
+ "total": 3,
+ "failed": 2,
+ "success": 1,
+ "inProgress": 0,
+ "notYetStarted": 0,
+ "cancelled": 0,
+ "totalCharacterCharged": 0
+ }
+ },
+ {
+ "id": "1c7399a7-6913-4f20-bb43-e2fe2ba1a67d",
+ "createdDateTimeUtc": "2021-05-24T17:57:43.8356624Z",
+ "lastActionDateTimeUtc": "2021-05-24T17:57:47.128391Z",
+ "status": "Failed",
+ "summary": {
+ "total": 1,
+ "failed": 1,
+ "success": 0,
+ "inProgress": 0,
+ "notYetStarted": 0,
+ "cancelled": 0,
+ "totalCharacterCharged": 0
+ }
+ },
+ {
+ "id": "daa2a646-4237-4f5f-9a48-d515c2d9af3c",
+ "createdDateTimeUtc": "2021-04-14T19:49:26.988272Z",
+ "lastActionDateTimeUtc": "2021-04-14T19:49:43.9818634Z",
+ "status": "Succeeded",
+ "summary": {
+ "total": 2,
+ "failed": 0,
+ "success": 2,
+ "inProgress": 0,
+ "notYetStarted": 0,
+ "cancelled": 0,
+ "totalCharacterCharged": 21899
+ }
+ }
+ ],
+ ""@nextLink": "https://westus.cognitiveservices.azure.com/translator/text/batch/v1.1/operations/727BF148-F327-47A0-9481-ABAE6362F11E/documents?`$top`=5&`$skip`=15"
+}
+
+```
+
+### Example error response
+
+The following JSON object is an example of an error response. The schema for other error codes is the same.
+
+Status code: 500
+
+```JSON
+{
+ "error": {
+ "code": "InternalServerError",
+ "message": "Internal Server Error",
+ "target": "Operation",
+ "innerError": {
+ "code": "InternalServerError",
+ "message": "Unexpected internal server error has occurred"
+ }
+ }
+}
+```
+
+## Next steps
+
+Follow our quickstart to learn more about using Document Translation and the client library.
+
+> [!div class="nextstepaction"]
+> [Get started with Document Translation](../quickstarts/document-translation-rest-api.md)
ai-services Rest Api Guide https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/translator/document-translation/reference/rest-api-guide.md
+
+ Title: "Document Translation REST API reference guide"
+
+description: View a list of links to the Document Translation REST APIs.
++++++ Last updated : 07/18/2023+++
+# Document Translation REST API reference guide
+
+Reference</br>
+Service: **Azure AI Document Translation**</br>
+API Version: **v1.1**</br>
+
+Document Translation is a cloud-based feature of the Azure AI Translator service and is part of the Azure AI service family of REST APIs. The Document Translation API translates documents across all [supported languages and dialects](../../language-support.md) while preserving document structure and data format. The available methods are listed in the following table:
+
+| Request| Description|
+||--|
+| [**Get supported document formats**](get-supported-document-formats.md)| This method returns a list of supported document formats.|
+|[**Get supported glossary formats**](get-supported-glossary-formats.md)|This method returns a list of supported glossary formats.|
+|[**Get supported storage sources**](get-supported-storage-sources.md)| This method returns a list of supported storage sources/options.|
+|[**Start translation (POST)**](start-translation.md)|This method starts a document translation job. |
+|[**Get documents status**](get-documents-status.md)|This method returns the status of all documents in a translation job.|
+|[**Get document status**](get-document-status.md)| This method returns the status for a specific document in a job. |
+|[**Get translations status**](get-translations-status.md)| This method returns a list of translation requests submitted by a user and the status for each request.|
+|[**Get translation status**](get-translation-status.md) | This method returns a summary of the status for a specific document translation request. It includes the overall request status and the status for documents that are being translated as part of that request.|
+|[**Cancel translation (DELETE)**](cancel-translation.md)| This method cancels a document translation that is currently processing or queued. |
+
+> [!div class="nextstepaction"]
+> [Explore our client libraries and SDKs for C# and Python.](../quickstarts/document-translation-sdk.md)
ai-services Start Translation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/translator/document-translation/reference/start-translation.md
+
+ Title: Start translation
+
+description: Start a document translation request with the Document Translation service.
+++++++ Last updated : 07/18/2023++
+# Start translation
+
+Reference</br>
+Service: **Azure AI Document Translation**</br>
+API Version: **v1.1**</br>
+
+Use this API to start a translation request with the Document Translation service. Each request can contain multiple documents and must contain a source and destination container for each document.
+
+The prefix and suffix filter (if supplied) are used to filter folders. The prefix is applied to the subpath after the container name.
+
+Glossaries / Translation memory can be included in the request and applied by the service when the document is translated.
+
+If the glossary is invalid or unreachable during translation, an error is indicated in the document status. If a file with the same name already exists in the destination, the job fails. The targetUrl for each target language must be unique.
+
+## Request URL
+
+Send a `POST` request to:
+```HTTP
+POST https://<NAME-OF-YOUR-RESOURCE>.cognitiveservices.azure.com/translator/text/batch/v1.1/batches
+```
+
+Learn how to find your [custom domain name](../quickstarts/document-translation-rest-api.md).
+
+> [!IMPORTANT]
+>
+> * **All API requests to the Document Translation service require a custom domain endpoint**.
+> * You can't use the endpoint found on your Azure portal resource _Keys and Endpoint_ page nor the global translator endpoint (`api.cognitive.microsofttranslator.com`) to make HTTP requests to Document Translation.
+
+## Request headers
+
+Request headers are:
+
+|Headers|Description|
+| | |
+|Ocp-Apim-Subscription-Key|Required request header|
+
+## Request Body: Batch Submission Request
+
+|Name|Type|Description|
+| | | |
+|inputs|BatchRequest[]|BatchRequest list. The input list of documents or folders containing documents. Media Types: `application/json`, `text/json`, `application/*+json`.|
+
+### Inputs
+
+Definition for the input batch translation request.
+
+|Name|Type|Required|Description|
+| | | | |
+|source|SourceInput[]|True|inputs.source list. Source of the input documents.|
+|storageType|StorageInputType[]|False|inputs.storageType list. Storage type of the input documents source string. Required for single document translation only.|
+|targets|TargetInput[]|True|inputs.target list. Location of the destination for the output.|
+
+**inputs.source**
+
+Source of the input documents.
+
+|Name|Type|Required|Description|
+| | | | |
+|filter|DocumentFilter[]|False|DocumentFilter[] list.|
+|filter.prefix|string|False|A case-sensitive prefix string to filter documents in the source path for translation. For example, when using an Azure storage blob URI, use the prefix to restrict subfolders for translation.|
+|filter.suffix|string|False|A case-sensitive suffix string to filter documents in the source path for translation. It's most often used for file extensions.|
+|language|string|False|Language code. If none is specified, we perform auto-detection on the document.|
+|sourceUrl|string|True|Location of the folder / container or single file with your documents.|
+|storageSource|StorageSource|False|StorageSource list.|
+|storageSource.AzureBlob|string|False||
+
+**inputs.storageType**
+
+Storage type of the input documents source string.
+
+|Name|Type|
+| | |
+|file|string|
+|folder|string|
+
+**inputs.target**
+
+Destination for the finished translated documents.
+
+|Name|Type|Required|Description|
+| | | | |
+|category|string|False|Category / custom system for translation request.|
+|glossaries|Glossary[]|False|Glossary list. List of Glossary.|
+|glossaries.format|string|False|Format.|
+|glossaries.glossaryUrl|string|True (if using glossaries)|Location of the glossary. We use the file extension to extract the formatting if the format parameter isn't supplied. If the translation language pair isn't present in the glossary, it isn't applied.|
+|glossaries.storageSource|StorageSource|False|StorageSource list.|
+|glossaries.version|string|False|Optional Version. If not specified, default is used.|
+|targetUrl|string|True|Location of the folder / container with your documents.|
+|language|string|True|Two-letter target language code. See [list of language codes](../../language-support.md).|
+|storageSource|StorageSource []|False|StorageSource [] list.|
+
+## Example request
+
+The following are examples of batch requests.
+
+> [!NOTE]
+> In the following examples, limited access has been granted to the contents of an Azure Storage container [using a shared access signature (SAS)](../../../../storage/common/storage-sas-overview.md) token.
+
+**Translating all documents in a container**
+
+```json
+{
+ "inputs": [
+ {
+ "source": {
+ "sourceUrl": "https://my.blob.core.windows.net/source-en?sv=2019-12-12&st=2021-03-05T17%3A45%3A25Z&se=2021-03-13T17%3A45%3A00Z&sr=c&sp=rl&sig=SDRPMjE4nfrH3csmKLILkT%2Fv3e0Q6SWpssuuQl1NmfM%3D"
+ },
+ "targets": [
+ {
+ "targetUrl": "https://my.blob.core.windows.net/target-fr?sv=2019-12-12&st=2021-03-05T17%3A49%3A02Z&se=2021-03-13T17%3A49%3A00Z&sr=c&sp=wdl&sig=Sq%2BYdNbhgbq4hLT0o1UUOsTnQJFU590sWYo4BOhhQhs%3D",
+ "language": "fr"
+ }
+ ]
+ }
+ ]
+}
+```
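+
+For reference, the following is a minimal C# sketch (not an official sample) that submits a request body like the preceding example with `HttpClient`. The resource name, key, and SAS URLs are placeholders you supply.
+
+```csharp
+using System;
+using System.Net.Http;
+using System.Text;
+
+// Placeholders you supply: custom domain endpoint, key, and SAS URLs.
+string endpoint = "https://<NAME-OF-YOUR-RESOURCE>.cognitiveservices.azure.com/translator/text/batch/v1.1";
+string key = "<YOUR-TRANSLATOR-KEY>";
+
+// Body mirrors the "translate all documents in a container" example above.
+string body = @"{
+  ""inputs"": [
+    {
+      ""source"": { ""sourceUrl"": ""<SOURCE-CONTAINER-SAS-URL>"" },
+      ""targets"": [ { ""targetUrl"": ""<TARGET-CONTAINER-SAS-URL>"", ""language"": ""fr"" } ]
+    }
+  ]
+}";
+
+using var client = new HttpClient();
+using var request = new HttpRequestMessage(HttpMethod.Post, endpoint + "/batches");
+request.Headers.Add("Ocp-Apim-Subscription-Key", key);
+request.Content = new StringContent(body, Encoding.UTF8, "application/json");
+
+HttpResponseMessage response = await client.SendAsync(request);
+Console.WriteLine($"Status: {(int)response.StatusCode}");
+
+// A 202 response returns the job status URL in the Operation-Location header.
+if (response.Headers.TryGetValues("Operation-Location", out var values))
+{
+    Console.WriteLine(string.Join("", values));
+}
+```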
+
+**Translating all documents in a container applying glossaries**
+
+```json
+{
+ "inputs": [
+ {
+ "source": {
+ "sourceUrl": "https://my.blob.core.windows.net/source-en?sv=2019-12-12&st=2021-03-05T17%3A45%3A25Z&se=2021-03-13T17%3A45%3A00Z&sr=c&sp=rl&sig=SDRPMjE4nfrH3csmKLILkT%2Fv3e0Q6SWpssuuQl1NmfM%3D"
+ },
+ "targets": [
+ {
+ "targetUrl": "https://my.blob.core.windows.net/target-fr?sv=2019-12-12&st=2021-03-05T17%3A49%3A02Z&se=2021-03-13T17%3A49%3A00Z&sr=c&sp=wdl&sig=Sq%2BYdNbhgbq4hLT0o1UUOsTnQJFU590sWYo4BOhhQhs%3D",
+ "language": "fr",
+ "glossaries": [
+ {
+ "glossaryUrl": "https://my.blob.core.windows.net/glossaries/en-fr.xlf?sv=2019-12-12&st=2021-03-05T17%3A45%3A25Z&se=2021-03-13T17%3A45%3A00Z&sr=c&sp=rl&sig=BsciG3NWoOoRjOYesTaUmxlXzyjsX4AgVkt2AsxJ9to%3D",
+ "format": "xliff",
+ "version": "1.2"
+ }
+ ]
+
+ }
+ ]
+ }
+ ]
+}
+```
+
+**Translating specific folder in a container**
+
+Make sure you've specified the folder name (case sensitive) as prefix in filter.
+
+```json
+{
+ "inputs": [
+ {
+ "source": {
+ "sourceUrl": "https://my.blob.core.windows.net/source-en?sv=2019-12-12&st=2021-03-05T17%3A45%3A25Z&se=2021-03-13T17%3A45%3A00Z&sr=c&sp=rl&sig=SDRPMjE4nfrH3csmKLILkT%2Fv3e0Q6SWpssuuQl1NmfM%3D",
+ "filter": {
+ "prefix": "MyFolder/"
+ }
+ },
+ "targets": [
+ {
+ "targetUrl": "https://my.blob.core.windows.net/target-fr?sv=2019-12-12&st=2021-03-05T17%3A49%3A02Z&se=2021-03-13T17%3A49%3A00Z&sr=c&sp=wdl&sig=Sq%2BYdNbhgbq4hLT0o1UUOsTnQJFU590sWYo4BOhhQhs%3D",
+ "language": "fr"
+ }
+ ]
+ }
+ ]
+}
+```
+
+**Translating specific document in a container**
+
+* Specify `"storageType": "File"`.
+* Create a source URL & SAS token for the specific blob/document.
+* Specify the target filename as part of the target URL, though the SAS token is still for the container.
+
+This sample request shows a single document translated into two target languages.
+
+```json
+{
+ "inputs": [
+ {
+ "storageType": "File",
+ "source": {
+ "sourceUrl": "https://my.blob.core.windows.net/source-en/source-english.docx?sv=2019-12-12&st=2021-01-26T18%3A30%3A20Z&se=2021-02-05T18%3A30%3A00Z&sr=c&sp=rl&sig=d7PZKyQsIeE6xb%2B1M4Yb56I%2FEEKoNIF65D%2Fs0IFsYcE%3D"
+ },
+ "targets": [
+ {
+ "targetUrl": "https://my.blob.core.windows.net/target/try/Target-Spanish.docx?sv=2019-12-12&st=2021-01-26T18%3A31%3A11Z&se=2021-02-05T18%3A31%3A00Z&sr=c&sp=wl&sig=AgddSzXLXwHKpGHr7wALt2DGQJHCzNFF%2F3L94JHAWZM%3D",
+ "language": "es"
+ },
+ {
+ "targetUrl": "https://my.blob.core.windows.net/target/try/Target-German.docx?sv=2019-12-12&st=2021-01-26T18%3A31%3A11Z&se=2021-02-05T18%3A31%3A00Z&sr=c&sp=wl&sig=AgddSzXLXwHKpGHr7wALt2DGQJHCzNFF%2F3L94JHAWZM%3D",
+ "language": "de"
+ }
+ ]
+ }
+ ]
+}
+```
+
+## Response status codes
+
+The following are the possible HTTP status codes that a request returns.
+
+|Status Code|Description|
+| | |
+|202|Accepted. The request was successful and the batch request was created. The `Operation-Location` response header contains a status URL with the operation ID.|
+|400|Bad Request. Invalid request. Check input parameters.|
+|401|Unauthorized. Check your credentials.|
+|429|Request rate is too high.|
+|500|Internal Server Error.|
+|503|Service is currently unavailable. Try again later.|
+|Other Status Codes|<ul><li>Too many requests</li><li>Server temporarily unavailable</li></ul>|
+
+## Error response
+
+|Name|Type|Description|
+| | | |
+|code|string|Enums containing high-level error codes. Possible values:<br/><ul><li>InternalServerError</li><li>InvalidArgument</li><li>InvalidRequest</li><li>RequestRateTooHigh</li><li>ResourceNotFound</li><li>ServiceUnavailable</li><li>Unauthorized</li></ul>|
+|message|string|Gets high-level error message.|
+|innerError|InnerTranslationError|New inner error format that conforms to the Azure AI services API guidelines. This error message contains the required properties ErrorCode and message, and the optional properties target, details (key-value pair), and inner error (which can be nested).|
+|innerError.code|string|Gets the code error string.|
+|innerError.message|string|Gets high-level error message.|
+|innerError.target|string|Gets the source of the error. For example, it would be `documents` or `document id` if the document is invalid.|
+
+## Examples
+
+### Example successful response
+
+The following information is returned in a successful response.
+
+You can find the job ID in the POST method's response header `Operation-Location` URL value. The last parameter of the URL is the operation's job ID (the string following "/operation/").
+
+```HTTP
+Operation-Location: https://<NAME-OF-YOUR-RESOURCE>.cognitiveservices.azure.com/translator/text/batch/v1.1/operation/0FA2822F-4C2A-4317-9C20-658C801E0E55
+```
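+
+If you need the job ID in code, a small sketch like the following (illustrative only) extracts the last URL segment from the `Operation-Location` value:
+
+```csharp
+using System;
+
+// Example Operation-Location value returned in the response header (placeholder job ID).
+string operationLocation =
+    "https://<NAME-OF-YOUR-RESOURCE>.cognitiveservices.azure.com/translator/text/batch/v1.1/operation/0FA2822F-4C2A-4317-9C20-658C801E0E55";
+
+// The job ID is the final URL segment (the string following "/operation/").
+string jobId = operationLocation.Substring(operationLocation.LastIndexOf('/') + 1);
+Console.WriteLine(jobId); // 0FA2822F-4C2A-4317-9C20-658C801E0E55
+```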
+
+### Example error response
+
+```JSON
+{
+ "error": {
+ "code": "ServiceUnavailable",
+ "message": "Service is temporary unavailable",
+ "innerError": {
+ "code": "ServiceTemporaryUnavailable",
+ "message": "Service is currently unavailable. Please try again later"
+ }
+ }
+}
+```
+
+## Next steps
+
+Follow our quickstart to learn more about using Document Translation and the client library.
+
+> [!div class="nextstepaction"]
+> [Get started with Document Translation](../quickstarts/document-translation-rest-api.md)
ai-services Dynamic Dictionary https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/translator/dynamic-dictionary.md
+
+ Title: Dynamic Dictionary - Translator
+
+description: This article explains how to use the dynamic dictionary feature of the Azure AI Translator.
++++++ Last updated : 07/18/2023+++
+# How to use a dynamic dictionary
+
+If you already know the translation you want to apply to a word or a phrase, you can supply it as markup within the request. The dynamic dictionary is safe only for compound nouns like proper names and product names.
+
+**Syntax:**
+
+`<mstrans:dictionary translation="translation of phrase">phrase</mstrans:dictionary>`
+
+**Requirements:**
+
+* The `From` and `To` languages must include English and another supported language.
+* You must include the `From` parameter in your API translation request instead of using the autodetect feature.
+
+**Example: en-de:**
+
+Source input: `The word <mstrans:dictionary translation=\"wordomatic\">wordomatic</mstrans:dictionary> is a dictionary entry.`
+
+Target output: `Das Wort "wordomatic" ist ein Wörterbucheintrag.`
+
+This feature works the same way with and without HTML mode.
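+
+For illustration, here's a minimal C# sketch of the example above against the v3 Translate endpoint. The key is a placeholder; a regional resource would also need the `Ocp-Apim-Subscription-Region` header.
+
+```csharp
+using System;
+using System.Net.Http;
+using System.Text;
+
+// Placeholders you supply.
+string key = "<YOUR-TRANSLATOR-KEY>";
+string endpoint = "https://api.cognitive.microsofttranslator.com";
+
+// 'from' must be specified; autodetect isn't supported with the dynamic dictionary.
+string route = "/translate?api-version=3.0&from=en&to=de";
+string body = "[{\"Text\": \"The word <mstrans:dictionary translation=\\\"wordomatic\\\">wordomatic</mstrans:dictionary> is a dictionary entry.\"}]";
+
+using var client = new HttpClient();
+using var request = new HttpRequestMessage(HttpMethod.Post, endpoint + route);
+request.Headers.Add("Ocp-Apim-Subscription-Key", key);
+request.Content = new StringContent(body, Encoding.UTF8, "application/json");
+
+HttpResponseMessage response = await client.SendAsync(request);
+Console.WriteLine(await response.Content.ReadAsStringAsync());
+```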
+
+Use the feature sparingly. A better way to customize translation is by using Custom Translator. Custom Translator makes full use of context and statistical probabilities. If you have or can create training data that shows your word or phrase in context, you get much better results. You can find more information about Custom Translator at [https://aka.ms/CustomTranslator](https://aka.ms/CustomTranslator).
ai-services Encrypt Data At Rest https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/translator/encrypt-data-at-rest.md
+
+ Title: Translator encryption of data at rest
+
+description: Microsoft lets you manage your Azure AI services subscriptions with your own keys, called customer-managed keys (CMK). This article covers data encryption at rest for Translator, and how to enable and manage CMK.
+++++ Last updated : 07/18/2023+
+#Customer intent: As a user of the Translator service, I want to learn how encryption at rest works.
++
+# Translator encryption of data at rest
+
+Translator automatically encrypts your uploaded data when it's persisted to the cloud, helping you meet your organizational security and compliance goals.
+
+## About Azure AI services encryption
+
+Data is encrypted and decrypted using [FIPS 140-2](https://en.wikipedia.org/wiki/FIPS_140-2) compliant [256-bit AES](https://en.wikipedia.org/wiki/Advanced_Encryption_Standard) encryption. Encryption and decryption are transparent, meaning encryption and access are managed for you. Your data is secure by default and you don't need to modify your code or applications to take advantage of encryption.
+
+## About encryption key management
+
+By default, your subscription uses Microsoft-managed encryption keys. If you're using a pricing tier that supports Customer-managed keys, you can see the encryption settings for your resource in the **Encryption** section of the [Azure portal](https://portal.azure.com), as shown in the following image.
+
+![View Encryption settings](../media/cognitive-services-encryption/encryptionblade.png)
+
+For subscriptions that only support Microsoft-managed encryption keys, you won't have an **Encryption** section.
+
+## Customer-managed keys with Azure Key Vault
+
+By default, your subscription uses Microsoft-managed encryption keys. There's also the option to manage your subscription with your own keys called customer-managed keys (CMK). CMK offers greater flexibility to create, rotate, disable, and revoke access controls. You can also audit the encryption keys used to protect your data. If CMK is configured for your subscription, double encryption is provided, which offers a second layer of protection, while allowing you to control the encryption key through your Azure Key Vault.
+
+> [!IMPORTANT]
+> Customer-managed keys are available for all pricing tiers for the Translator service. To request the ability to use customer-managed keys, fill out and submit the [Translator Customer-Managed Key Request Form](https://aka.ms/cogsvc-cmk). It takes approximately 3-5 business days to hear back on the status of your request. Depending on demand, you may be placed in a queue and approved as space becomes available. Once approved for using CMK with the Translator service, you will need to create a new Translator resource. Once your Translator resource is created, you can use Azure Key Vault to set up your managed identity.
+
+Follow these steps to enable customer-managed keys for Translator:
+
+1. Create your new regional Translator or regional Azure AI services resource. Customer-managed keys won't work with a global resource.
+2. Enable Managed Identity in the Azure portal, and add your customer-managed key information.
+3. Create a new workspace in Custom Translator and associate this subscription information.
+
+### Enable customer-managed keys
+
+You must use Azure Key Vault to store your customer-managed keys. You can either create your own keys and store them in a key vault, or you can use the Azure Key Vault APIs to generate keys. The Azure AI services resource and the key vault must be in the same region and in the same Azure Active Directory (Azure AD) tenant, but they can be in different subscriptions. For more information about Azure Key Vault, see [What is Azure Key Vault?](../../key-vault/general/overview.md).
+
+A new Azure AI services resource is always encrypted using Microsoft-managed keys. It's not possible to enable customer-managed keys at the time that the resource is created. Customer-managed keys are stored in Azure Key Vault. The key vault must be provisioned with access policies that grant key permissions to the managed identity that is associated with the Azure AI services resource. The managed identity is available as soon as the resource is created.
+
+To learn how to use customer-managed keys with Azure Key Vault for Azure AI services encryption, see:
+
+- [Configure customer-managed keys with Key Vault for Azure AI services encryption from the Azure portal](../Encryption/cognitive-services-encryption-keys-portal.md)
+
+Enabling customer managed keys will also enable a system assigned managed identity, a feature of Azure AD. Once the system assigned managed identity is enabled, this resource will be registered with Azure Active Directory. After being registered, the managed identity will be given access to the Key Vault selected during customer managed key setup. You can learn more about [Managed Identities](../../active-directory/managed-identities-azure-resources/overview.md).
+
+> [!IMPORTANT]
+> If you disable system assigned managed identities, access to the key vault will be removed and any data encrypted with the customer keys will no longer be accessible. Any features dependent on this data will stop working. Any models that you have deployed will also be undeployed. All uploaded data will be deleted from Custom Translator. If the managed identities are re-enabled, we will not automatically redeploy the model for you.
+
+> [!IMPORTANT]
+> Managed identities do not currently support cross-directory scenarios. When you configure customer-managed keys in the Azure portal, a managed identity is automatically assigned under the covers. If you subsequently move the subscription, resource group, or resource from one Azure AD directory to another, the managed identity associated with the resource is not transferred to the new tenant, so customer-managed keys may no longer work. For more information, see **Transferring a subscription between Azure AD directories** in [FAQs and known issues with managed identities for Azure resources](../../active-directory/managed-identities-azure-resources/known-issues.md#transferring-a-subscription-between-azure-ad-directories).
+
+### Store customer-managed keys in Azure Key Vault
+
+To enable customer-managed keys, you must use an Azure Key Vault to store your keys. You must enable both the **Soft Delete** and **Do Not Purge** properties on the key vault.
+
+Only RSA keys of size 2048 are supported with Azure AI services encryption. For more information about keys, see **Key Vault keys** in [About Azure Key Vault keys, secrets and certificates](../../key-vault/general/about-keys-secrets-certificates.md).
+
+> [!NOTE]
+> If the entire key vault is deleted, your data will no longer be displayed and all your models will be undeployed. All uploaded data will be deleted from Custom Translator.
+
+### Revoke access to customer-managed keys
+
+To revoke access to customer-managed keys, use PowerShell or Azure CLI. For more information, see [Azure Key Vault PowerShell](/powershell/module/az.keyvault//) or [Azure Key Vault CLI](/cli/azure/keyvault). Revoking access effectively blocks access to all data in the Azure AI services resource and your models will be undeployed, as the encryption key is inaccessible by Azure AI services. All uploaded data will also be deleted from Custom Translator.
+
+## Next steps
+
+> [!div class="nextstepaction"]
+> [Learn more about Azure Key Vault](../../key-vault/general/overview.md)
ai-services Firewalls https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/translator/firewalls.md
+
+ Title: Translate behind firewalls - Translator
+
+description: Azure AI Translator can translate behind firewalls using either domain-name or IP filtering.
++++++ Last updated : 07/18/2023+++
+# Use Translator behind firewalls
+
+Translator can translate behind firewalls using either [Domain-name](../../firewall/dns-settings.md#configure-dns-proxyazure-portal) or [IP filtering](#configure-firewall). Domain-name filtering is the preferred method.
+
+If you still require IP filtering, you can get the [IP addresses details using service tag](../../virtual-network/service-tags-overview.md#discover-service-tags-by-using-downloadable-json-files). Translator is under the **CognitiveServicesManagement** service tag.
+
+## Configure firewall
+
+ Navigate to your Translator resource in the Azure portal.
+
+1. Select **Networking** from the **Resource Management** section.
+1. Under the **Firewalls and virtual networks** tab, choose **Selected Networks and Private Endpoints**.
+
+ :::image type="content" source="media/firewall-setting-azure-portal.png" alt-text="Screenshot of the firewall setting in the Azure portal.":::
+
+ > [!NOTE]
+ >
+ > * Once you enable **Selected Networks and Private Endpoints**, you must use the **Virtual Network** endpoint to call the Translator. You can't use the standard translator endpoint (`api.cognitive.microsofttranslator.com`) and you can't authenticate with an access token.
+ > * For more information, *see* [**Virtual Network Support**](reference/v3-0-reference.md#virtual-network-support).
+
+1. To grant access to an internet IP range, enter the IP address or address range (in [CIDR format](https://tools.ietf.org/html/rfc4632)) under **Firewall** > **Address Range**. Only valid public IP (`non-reserved`) addresses are accepted.
+
+Running Microsoft Translator from behind a specific IP-filtered firewall is **not recommended**. The setup is likely to break in the future without notice.
+
+The IP addresses for Translator geographical endpoints as of September 21, 2021 are:
+
+|Geography|Base URL (geographical endpoint)|IP Addresses|
+|:--|:--|:--|
+|United States|api-nam.cognitive.microsofttranslator.com|20.42.6.144, 20.49.96.128, 40.80.190.224, 40.64.128.192|
+|Europe|api-eur.cognitive.microsofttranslator.com|20.50.1.16, 20.38.87.129|
+|Asia Pacific|api-apc.cognitive.microsofttranslator.com|40.80.170.160, 20.43.132.96, 20.37.196.160, 20.43.66.16|
+
+## Next steps
+
+[**Translator virtual network support**](reference/v3-0-reference.md#virtual-network-support)
+
+[**Configure virtual networks**](../cognitive-services-virtual-networks.md#grant-access-from-an-internet-ip-range)
ai-services Language Support https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/translator/language-support.md
+
+ Title: Language support - Translator
+
+description: Azure AI Translator supports the following languages for text to text translation using Neural Machine Translation (NMT).
++++++ Last updated : 07/18/2023++
+# Translator language support
+
+**Translation - Cloud:** Cloud translation is available in all languages for the Translate operation of Text Translation and for Document Translation.
+
+**Translation - Containers:** Language support for Containers.
+
+**Custom Translator:** Custom Translator can be used to create customized translation models that you can then use to customize your translated output while using the Text Translation or Document Translation features.
+
+**Auto Language Detection:** Automatically detect the language of the source text while using Text Translation or Document Translation.
+
+**Dictionary:** Use the [Dictionary Lookup](reference/v3-0-dictionary-lookup.md) or [Dictionary Examples](reference/v3-0-dictionary-examples.md) operations from the Text Translation feature to display alternative translations from or to English and examples of words in context.
+
+## Translation
+
+> [!NOTE]
+> Language code `pt` will default to `pt-br`, Portuguese (Brazil).
+
+|Language | Language code | Cloud - Text Translation and Document Translation | Containers - Text Translation|Custom Translator|Auto Language Detection|Dictionary|
+|:-|:-:|:-:|:-:|:-:|:-:|:-:|
+| Afrikaans | `af` |✔|✔|✔|✔|✔|
+| Albanian | `sq` |✔|✔||✔||
+| Amharic | `am` |✔|✔||||
+| Arabic | `ar` |✔|✔|✔|✔|✔|
+| Armenian | `hy` |✔|✔||✔||
+| Assamese | `as` |✔|✔|✔|||
+| Azerbaijani (Latin) | `az` |✔|✔||||
+| Bangla | `bn` |✔|✔|✔||✔|
+| Bashkir | `ba` |✔|✔||||
+| Basque | `eu` |✔|✔||||
+| Bosnian (Latin) | `bs` |✔|✔|✔||✔|
+| Bulgarian | `bg` |✔|✔|✔|✔|✔|
+| Cantonese (Traditional) | `yue` |✔|✔||||
+| Catalan | `ca` |✔|✔|✔|✔|✔|
+| Chinese (Literary) | `lzh` |✔|✔||||
+| Chinese Simplified | `zh-Hans` |✔|✔|✔|✔|✔|
+| Chinese Traditional | `zh-Hant` |✔|✔|✔|✔||
+| Croatian | `hr` |✔|✔|✔|✔|✔|
+| Czech | `cs` |✔|✔|✔|✔|✔|
+| Danish | `da` |✔|✔|✔|✔|✔|
+| Dari | `prs` |✔|✔||||
+| Divehi | `dv` |✔|✔||✔||
+| Dutch | `nl` |✔|✔|✔|✔|✔|
+| English | `en` |✔|✔|✔|✔|✔|
+| Estonian | `et` |✔|✔|✔|✔||
+| Faroese | `fo` |✔|✔||||
+| Fijian | `fj` |✔|✔|✔|||
+| Filipino | `fil` |✔|✔|✔|||
+| Finnish | `fi` |✔|✔|✔|✔|✔|
+| French | `fr` |✔|✔|✔|✔|✔|
+| French (Canada) | `fr-ca` |✔|✔||||
+| Galician | `gl` |✔|✔||||
+| Georgian | `ka` |✔|✔||✔||
+| German | `de` |✔|✔|✔|✔|✔|
+| Greek | `el` |✔|✔|✔|✔|✔|
+| Gujarati | `gu` |✔|✔|✔|✔||
+| Haitian Creole | `ht` |✔|✔||✔|✔|
+| Hebrew | `he` |✔|✔|✔|✔|✔|
+| Hindi | `hi` |✔|✔|✔|✔|✔|
+| Hmong Daw (Latin) | `mww` |✔|✔|||✔|
+| Hungarian | `hu` |✔|✔|✔|✔|✔|
+| Icelandic | `is` |✔|✔|✔|✔|✔|
+| Indonesian | `id` |✔|✔|✔|✔|✔|
+| Inuinnaqtun | `ikt` |✔|✔||||
+| Inuktitut | `iu` |✔|✔|✔|✔||
+| Inuktitut (Latin) | `iu-Latn` |✔|✔||||
+| Irish | `ga` |✔|✔|✔|✔||
+| Italian | `it` |✔|✔|✔|✔|✔|
+| Japanese | `ja` |✔|✔|✔|✔|✔|
+| Kannada | `kn` |✔|✔|✔|||
+| Kazakh | `kk` |✔|✔||||
+| Khmer | `km` |✔|✔||✔||
+| Klingon | `tlh-Latn` |✔|||✔|✔|
+| Klingon (plqaD) | `tlh-Piqd` |✔|||✔||
+| Korean | `ko` |✔|✔|✔|✔|✔|
+| Kurdish (Central) | `ku` |✔|✔||✔||
+| Kurdish (Northern) | `kmr` |✔|✔||||
+| Kyrgyz (Cyrillic) | `ky` |✔|✔||||
+| Lao | `lo` |✔|✔||✔||
+| Latvian | `lv` |✔|✔|✔|✔|✔|
+| Lithuanian | `lt` |✔|✔|✔|✔|✔|
+| Macedonian | `mk` |✔|✔||✔||
+| Malagasy | `mg` |✔|✔|✔|||
+| Malay (Latin) | `ms` |✔|✔|✔|✔|✔|
+| Malayalam | `ml` |✔|✔|✔|||
+| Maltese | `mt` |✔|✔|✔|✔|✔|
+| Maori | `mi` |✔|✔|✔|||
+| Marathi | `mr` |✔|✔|✔|||
+| Mongolian (Cyrillic) | `mn-Cyrl` |✔|✔||||
+| Mongolian (Traditional) | `mn-Mong` |✔|✔||✔||
+| Myanmar | `my` |✔|✔||✔||
+| Nepali | `ne` |✔|✔||||
+| Norwegian | `nb` |✔|✔|✔|✔|✔|
+| Odia | `or` |✔|✔|✔|||
+| Pashto | `ps` |✔|✔||✔||
+| Persian | `fa` |✔|✔|✔|✔|✔|
+| Polish | `pl` |✔|✔|✔|✔|✔|
+| Portuguese (Brazil) | `pt` |✔|✔|✔|✔|✔|
+| Portuguese (Portugal) | `pt-pt` |✔|✔||||
+| Punjabi | `pa` |✔|✔|✔|||
+| Queretaro Otomi | `otq` |✔|✔||||
+| Romanian | `ro` |✔|✔|✔|✔|✔|
+| Russian | `ru` |✔|✔|✔|✔|✔|
+| Samoan (Latin) | `sm` |✔|✔|✔|||
+| Serbian (Cyrillic) | `sr-Cyrl` |✔|✔||✔||
+| Serbian (Latin) | `sr-Latn` |✔|✔|✔|✔|✔|
+| Slovak | `sk` |✔|✔|✔|✔|✔|
+| Slovenian | `sl` |✔|✔|✔|✔|✔|
+| Somali (Arabic) | `so` |✔|✔||✔||
+| Spanish | `es` |✔|✔|✔|✔|✔|
+| Swahili (Latin) | `sw` |✔|✔|✔|✔|✔|
+| Swedish | `sv` |✔|✔|✔|✔|✔|
+| Tahitian | `ty` |✔|✔|✔|✔||
+| Tamil | `ta` |✔|✔|✔||✔|
+| Tatar (Latin) | `tt` |✔|✔||||
+| Telugu | `te` |✔|✔|✔|||
+| Thai | `th` |✔|✔|✔|✔|✔|
+| Tibetan | `bo` |✔|✔||||
+| Tigrinya | `ti` |✔|✔||||
+| Tongan | `to` |✔|✔|✔|||
+| Turkish | `tr` |✔|✔|✔|✔|✔|
+| Turkmen (Latin) | `tk` |✔|✔||||
+| Ukrainian | `uk` |✔|✔|✔|✔|✔|
+| Upper Sorbian | `hsb` |✔|✔||||
+| Urdu | `ur` |✔|✔|✔|✔|✔|
+| Uyghur (Arabic) | `ug` |✔|✔||||
+| Uzbek (Latin) | `uz` |✔|✔||✔||
+| Vietnamese | `vi` |✔|✔|✔|✔|✔|
+| Welsh | `cy` |✔|✔|✔|✔|✔|
+| Yucatec Maya | `yua` |✔|✔||✔||
+| Zulu | `zu` |✔|✔||||
+
+## Document Translation: scanned PDF support
+
+|Language|Language Code|Supported as source language for scanned PDF?|Supported as target language for scanned PDF?|
+|:-|:-:|:-:|:-:|
+|Afrikaans|`af`|Yes|Yes|
+|Albanian|`sq`|Yes|Yes|
+|Amharic|`am`|No|No|
+|Arabic|`ar`|Yes|Yes|
+|Armenian|`hy`|No|No|
+|Assamese|`as`|No|No|
+|Azerbaijani (Latin)|`az`|Yes|Yes|
+|Bangla|`bn`|No|No|
+|Bashkir|`ba`|No|Yes|
+|Basque|`eu`|Yes|Yes|
+|Bosnian (Latin)|`bs`|Yes|Yes|
+|Bulgarian|`bg`|Yes|Yes|
+|Cantonese (Traditional)|`yue`|No|Yes|
+|Catalan|`ca`|Yes|Yes|
+|Chinese (Literary)|`lzh`|No|Yes|
+|Chinese Simplified|`zh-Hans`|Yes|Yes|
+|Chinese Traditional|`zh-Hant`|Yes|Yes|
+|Croatian|`hr`|Yes|Yes|
+|Czech|`cs`|Yes|Yes|
+|Danish|`da`|Yes|Yes|
+|Dari|`prs`|No|No|
+|Divehi|`dv`|No|No|
+|Dutch|`nl`|Yes|Yes|
+|English|`en`|Yes|Yes|
+|Estonian|`et`|Yes|Yes|
+|Faroese|`fo`|Yes|Yes|
+|Fijian|`fj`|Yes|Yes|
+|Filipino|`fil`|Yes|Yes|
+|Finnish|`fi`|Yes|Yes|
+|French|`fr`|Yes|Yes|
+|French (Canada)|`fr-ca`|Yes|Yes|
+|Galician|`gl`|Yes|Yes|
+|Georgian|`ka`|No|No|
+|German|`de`|Yes|Yes|
+|Greek|`el`|Yes|Yes|
+|Gujarati|`gu`|No|No|
+|Haitian Creole|`ht`|Yes|Yes|
+|Hebrew|`he`|No|No|
+|Hindi|`hi`|Yes|Yes|
+|Hmong Daw (Latin)|`mww`|Yes|Yes|
+|Hungarian|`hu`|Yes|Yes|
+|Icelandic|`is`|Yes|Yes|
+|Indonesian|`id`|Yes|Yes|
+|Interlingua|`ia`|Yes|Yes|
+|Inuinnaqtun|`ikt`|No|Yes|
+|Inuktitut|`iu`|No|No|
+|Inuktitut (Latin)|`iu-Latn`|Yes|Yes|
+|Irish|`ga`|Yes|Yes|
+|Italian|`it`|Yes|Yes|
+|Japanese|`ja`|Yes|Yes|
+|Kannada|`kn`|No|Yes|
+|Kazakh (Cyrillic)|`kk`, `kk-cyrl`|Yes|Yes|
+|Kazakh (Latin)|`kk-latn`|Yes|Yes|
+|Khmer|`km`|No|No|
+|Klingon|`tlh-Latn`|No|No|
+|Klingon (plqaD)|`tlh-Piqd`|No|No|
+|Korean|`ko`|Yes|Yes|
+|Kurdish (Arabic) (Central)|`ku-arab`,`ku`|No|No|
+|Kurdish (Latin) (Northern)|`ku-latn`, `kmr`|Yes|Yes|
+|Kyrgyz (Cyrillic)|`ky`|Yes|Yes|
+|Lao|`lo`|No|No|
+|Latvian|`lv`|No|Yes|
+|Lithuanian|`lt`|Yes|Yes|
+|Macedonian|`mk`|No|Yes|
+|Malagasy|`mg`|No|Yes|
+|Malay (Latin)|`ms`|Yes|Yes|
+|Malayalam|`ml`|No|Yes|
+|Maltese|`mt`|Yes|Yes|
+|Maori|`mi`|Yes|Yes|
+|Marathi|`mr`|Yes|Yes|
+|Mongolian (Cyrillic)|`mn-Cyrl`|Yes|Yes|
+|Mongolian (Traditional)|`mn-Mong`|No|No|
+|Myanmar (Burmese)|`my`|No|No|
+|Nepali|`ne`|Yes|Yes|
+|Norwegian|`nb`|Yes|Yes|
+|Odia|`or`|No|No|
+|Pashto|`ps`|No|No|
+|Persian|`fa`|No|No|
+|Polish|`pl`|Yes|Yes|
+|Portuguese (Brazil)|`pt`, `pt-br`|Yes|Yes|
+|Portuguese (Portugal)|`pt-pt`|Yes|Yes|
+|Punjabi|`pa`|No|Yes|
+|Queretaro Otomi|`otq`|No|Yes|
+|Romanian|`ro`|Yes|Yes|
+|Russian|`ru`|Yes|Yes|
+|Samoan (Latin)|`sm`|Yes|Yes|
+|Serbian (Cyrillic)|`sr-Cyrl`|No|Yes|
+|Serbian (Latin)|`sr`, `sr-latn`|Yes|Yes|
+|Slovak|`sk`|Yes|Yes|
+|Slovenian|`sl`|Yes|Yes|
+|Somali|`so`|No|Yes|
+|Spanish|`es`|Yes|Yes|
+|Swahili (Latin)|`sw`|Yes|Yes|
+|Swedish|`sv`|Yes|Yes|
+|Tahitian|`ty`|No|Yes|
+|Tamil|`ta`|No|Yes|
+|Tatar (Latin)|`tt`|Yes|Yes|
+|Telugu|`te`|No|Yes|
+|Thai|`th`|No|No|
+|Tibetan|`bo`|No|No|
+|Tigrinya|`ti`|No|No|
+|Tongan|`to`|Yes|Yes|
+|Turkish|`tr`|Yes|Yes|
+|Turkmen (Latin)|`tk`|Yes|Yes|
+|Ukrainian|`uk`|No|Yes|
+|Upper Sorbian|`hsb`|Yes|Yes|
+|Urdu|`ur`|No|No|
+|Uyghur (Arabic)|`ug`|No|No|
+|Uzbek (Latin)|`uz`|Yes|Yes|
+|Vietnamese|`vi`|No|Yes|
+|Welsh|`cy`|Yes|Yes|
+|Yucatec Maya|`yua`|Yes|Yes|
+|Zulu|`zu`|Yes|Yes|
+
+## Transliteration
+
+The [Transliterate operation](reference/v3-0-transliterate.md) in the Text Translation feature supports the following languages. In the "To/From" column, "<-->" indicates that the language can be transliterated from or to either of the scripts listed. "-->" indicates that the language can only be transliterated from one script to the other.
+
+| Language | Language code | Script | To/From | Script|
+|:-- |:-:|:-:|:-:|:-:|
+| Arabic | `ar` | Arabic `Arab` | <--> | Latin `Latn` |
+| Assamese | `as` | Bengali `Beng` | <--> | Latin `Latn` |
+| Bangla | `bn` | Bengali `Beng` | <--> | Latin `Latn` |
+|Belarusian| `be` | Cyrillic `Cyrl` | <--> | Latin `Latn` |
+|Bulgarian| `bg` | Cyrillic `Cyrl` | <--> | Latin `Latn` |
+| Chinese (Simplified) | `zh-Hans` | Chinese Simplified `Hans`| <--> | Latin `Latn` |
+| Chinese (Simplified) | `zh-Hans` | Chinese Simplified `Hans`| <--> | Chinese Traditional `Hant`|
+| Chinese (Traditional) | `zh-Hant` | Chinese Traditional `Hant`| <--> | Latin `Latn` |
+| Chinese (Traditional) | `zh-Hant` | Chinese Traditional `Hant`| <--> | Chinese Simplified `Hans` |
+|Greek| `el` | Greek `Grek` | <--> | Latin `Latn` |
+| Gujarati | `gu` | Gujarati `Gujr` | <--> | Latin `Latn` |
+| Hebrew | `he` | Hebrew `Hebr` | <--> | Latin `Latn` |
+| Hindi | `hi` | Devanagari `Deva` | <--> | Latin `Latn` |
+| Japanese | `ja` | Japanese `Jpan` | <--> | Latin `Latn` |
+| Kannada | `kn` | Kannada `Knda` | <--> | Latin `Latn` |
+|Kazakh| `kk` | Cyrillic `Cyrl` | <--> | Latin `Latn` |
+|Korean| `ko` | Korean `Kore` | <--> | Latin `Latn` |
+|Kyrgyz| `ky` | Cyrillic `Cyrl` | <--> | Latin `Latn` |
+|Macedonian| `mk` | Cyrillic `Cyrl` | <--> | Latin `Latn` |
+| Malayalam | `ml` | Malayalam `Mlym` | <--> | Latin `Latn` |
+| Marathi | `mr` | Devanagari `Deva` | <--> | Latin `Latn` |
+|Mongolian| `mn` | Cyrillic `Cyrl` | <--> | Latin `Latn` |
+| Odia | `or` | Oriya `Orya` | <--> | Latin `Latn` |
+|Persian| `fa` | Arabic `Arab` | <--> | Latin `Latn` |
+| Punjabi | `pa` | Gurmukhi `Guru` | <--> | Latin `Latn` |
+|Russian| `ru` | Cyrillic `Cyrl` | <--> | Latin `Latn` |
+| Serbian (Cyrillic) | `sr-Cyrl` | Cyrillic `Cyrl` | --> | Latin `Latn` |
+| Serbian (Latin) | `sr-Latn` | Latin `Latn` | --> | Cyrillic `Cyrl`|
+|Sindhi| `sd` | Arabic `Arab` | <--> | Latin `Latn` |
+|Sinhala| `si` | Sinhala `Sinh` | <--> | Latin `Latn` |
+|Tajik| `tg` | Cyrillic `Cyrl` | <--> | Latin `Latn` |
+| Tamil | `ta` | Tamil `Taml` | <--> | Latin `Latn` |
+|Tatar| `tt` | Cyrillic `Cyrl` | <--> | Latin `Latn` |
+| Telugu | `te` | Telugu `Telu` | <--> | Latin `Latn` |
+| Thai | `th` | Thai `Thai` | --> | Latin `Latn` |
+|Ukrainian| `uk` | Cyrillic `Cyrl` | <--> | Latin `Latn` |
+|Urdu| `ur` | Arabic `Arab` | <--> | Latin `Latn` |
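+
+As an illustration, the following minimal C# sketch (the key is a placeholder) calls the Transliterate operation to convert Japanese text from the `Jpan` script to the `Latn` script, matching the Japanese row in the preceding table:
+
+```csharp
+using System;
+using System.Net.Http;
+using System.Text;
+
+// Placeholder you supply.
+string key = "<YOUR-TRANSLATOR-KEY>";
+string endpoint = "https://api.cognitive.microsofttranslator.com";
+
+// Japanese (ja), from Japanese script (Jpan) to Latin script (Latn).
+string route = "/transliterate?api-version=3.0&language=ja&fromScript=Jpan&toScript=Latn";
+string body = "[{\"Text\": \"こんにちは\"}]";
+
+using var client = new HttpClient();
+using var request = new HttpRequestMessage(HttpMethod.Post, endpoint + route);
+request.Headers.Add("Ocp-Apim-Subscription-Key", key);
+request.Content = new StringContent(body, Encoding.UTF8, "application/json");
+
+HttpResponseMessage response = await client.SendAsync(request);
+Console.WriteLine(await response.Content.ReadAsStringAsync());
+```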
+
+## Custom Translator language pairs
+
+|Source Language|Target Language|
+|:-|:-|
+| Czech (cs-cz) | English (en-us) |
+| Danish (da-dk) | English (en-us) |
+| German (de-&#8203;de) | English (en-us) |
+| Greek (el-gr) | English (en-us) |
+| English (en-us) | Arabic (ar-sa) |
+| English (en-us) | Czech (cs-cz) |
+| English (en-us) | Danish (da-dk) |
+| English (en-us) | German (de-&#8203;de) |
+| English (en-us) | Greek (el-gr) |
+| English (en-us) | Spanish (es-es) |
+| English (en-us) | French (fr-fr) |
+| English (en-us) | Hebrew (he-il) |
+| English (en-us) | Hindi (hi-in) |
+| English (en-us) | Croatian (hr-hr) |
+| English (en-us) | Hungarian (hu-hu) |
+| English (en-us) | Indonesian (id-id) |
+| English (en-us) | Italian (it-it) |
+| English (en-us) | Japanese (ja-jp) |
+| English (en-us) | Korean (ko-kr) |
+| English (en-us) | Lithuanian (lt-lt) |
+| English (en-us) | Latvian (lv-lv) |
+| English (en-us) | Norwegian (nb-no) |
+| English (en-us) | Polish (pl-pl) |
+| English (en-us) | Portuguese (pt-pt) |
+| English (en-us) | Russian (ru-ru) |
+| English (en-us) | Slovak (sk-sk) |
+| English (en-us) | Swedish (sv-se) |
+| English (en-us) | Ukrainian (uk-ua) |
+| English (en-us) | Vietnamese (vi-vn) |
+| English (en-us) | Chinese Simplified (zh-cn) |
+| Spanish (es-es) | English (en-us) |
+| French (fr-fr) | English (en-us) |
+| Hindi (hi-in) | English (en-us) |
+| Hungarian (hu-hu) | English (en-us) |
+| Indonesian (id-id) | English (en-us) |
+| Italian (it-it) | English (en-us) |
+| Japanese (ja-jp) | English (en-us) |
+| Korean (ko-kr) | English (en-us) |
+| Norwegian (nb-no) | English (en-us) |
+| Dutch (nl-nl) | English (en-us) |
+| Polish (pl-pl) | English (en-us) |
+| Portuguese (pt-br) | English (en-us) |
+| Russian (ru-ru) | English (en-us) |
+| Swedish (sv-se) | English (en-us) |
+| Thai (th-th) | English (en-us) |
+| Turkish (tr-tr) | English (en-us) |
+| Vietnamese (vi-vn) | English (en-us) |
+| Chinese Simplified (zh-cn) | English (en-us) |
+
+## Other Azure AI services
+
+Add more capabilities to your apps and workflows by utilizing other Azure AI services with Translator. Language support for other services:
+
+* [Azure AI Vision](../computer-vision/language-support.md)
+* [Speech](../speech-service/language-support.md)
+* [Language service](../language-service/concepts/language-support.md)
+
+View all [Azure AI services](../index.yml).
+
+## Next steps
+
+* [Text Translation reference](reference/v3-0-reference.md)
+* [Document Translation reference](document-translation/reference/rest-api-guide.md)
+* [Custom Translator overview](custom-translator/overview.md)
ai-services Migrate To V3 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/translator/migrate-to-v3.md
+
+ Title: Migrate to V3 - Translator
+
+description: This article provides the steps to help you migrate from V2 to V3 of the Azure AI Translator.
++++++ Last updated : 07/18/2023+++
+# Translator V2 to V3 Migration
+
+> [!NOTE]
+> V2 was deprecated on April 30, 2018. Please migrate your applications to V3 in order to take advantage of new functionality available exclusively in V3. V2 was retired on May 24, 2021.
+
+The Microsoft Translator team has released Version 3 (V3) of the Translator. This release includes new features, deprecated methods, and a new format for sending data to, and receiving data from, the Microsoft Translator service. This document provides information for changing applications to use V3.
+
+The end of this document contains helpful links for you to learn more.
+
+## Summary of features
+
+* No Trace - In V3, No-Trace applies to all pricing tiers in the Azure portal. This feature means that no text submitted to the V3 API is saved by Microsoft.
+* JSON - XML is replaced by JSON. All data sent to the service and received from the service is in JSON format.
+* Multiple target languages in a single request - The Translate method accepts multiple 'to' languages for translation in a single request. For example, a single request can be 'from' English and 'to' German, Spanish and Japanese, or any other group of languages.
+* Bilingual dictionary - A bilingual dictionary method has been added to the API. This method includes 'lookup' and 'examples'.
+* Transliterate - A transliterate method has been added to the API. This method will convert words and sentences in one script into another script. For example, Arabic to Latin.
+* Languages - A new 'languages' method delivers language information, in JSON format, for use with the 'translate', 'dictionary', and 'transliterate' methods.
+* New to Translate - New capabilities have been added to the 'translate' method to support some of the features that were in the V2 API as separate methods. An example is TranslateArray.
+* Speak method - Text to speech functionality is no longer supported in the Microsoft Translator. Text to speech functionality is available in [Microsoft Speech Service](../speech-service/text-to-speech.md).
+
+The following list of V2 and V3 methods identifies the V3 methods and APIs that will provide the functionality that came with V2.
+
+| V2 API Method | V3 API Compatibility |
+|:-- |:-|
+| `Translate` | [Translate](reference/v3-0-translate.md) |
+| `TranslateArray` | [Translate](reference/v3-0-translate.md) |
+| `GetLanguageNames` | [Languages](reference/v3-0-languages.md) |
+| `GetLanguagesForTranslate` | [Languages](reference/v3-0-languages.md) |
+| `GetLanguagesForSpeak` | [Microsoft Speech Service](../speech-service/language-support.md) |
+| `Speak` | [Microsoft Speech Service](../speech-service/text-to-speech.md) |
+| `Detect` | [Detect](reference/v3-0-detect.md) |
+| `DetectArray` | [Detect](reference/v3-0-detect.md) |
+| `AddTranslation` | Feature is no longer supported |
+| `AddTranslationArray` | Feature is no longer supported |
+| `BreakSentences` | [BreakSentence](reference/v3-0-break-sentence.md) |
+| `GetTranslations` | Feature is no longer supported |
+| `GetTranslationsArray` | Feature is no longer supported |
+
+## Move to JSON format
+
+Microsoft Translator Translation V2 accepted and returned data in XML format. In V3, all data sent and received using the API is in JSON format. XML will no longer be accepted or returned in V3.
+
+This change will affect several aspects of an application written for the V2 Text Translation API. As an example: The Languages API returns language information for text translation, transliteration, and the two dictionary methods. You can request all language information for all methods in one call or request them individually.
+
+The languages method does not require authentication; by selecting the following link you can see all the language information for V3 in JSON:
+
+[https://api.cognitive.microsofttranslator.com/languages?api-version=3.0&scope=translation,dictionary,transliteration](https://api.cognitive.microsofttranslator.com/languages?api-version=3.0&scope=translation,dictionary,transliteration)
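+
+For example, this minimal C# sketch retrieves that same JSON from the languages method; no key is required:
+
+```csharp
+using System;
+using System.Net.Http;
+
+// The Languages method needs no authentication; this prints the full JSON response.
+using var client = new HttpClient();
+string json = await client.GetStringAsync(
+    "https://api.cognitive.microsofttranslator.com/languages?api-version=3.0&scope=translation,dictionary,transliteration");
+Console.WriteLine(json);
+```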
+
+## Authentication Key
+
+The authentication key you are using for V2 will be accepted for V3. You will not need to get a new subscription. You will be able to mix V2 and V3 in your apps during the yearlong migration period, making it easier for you to release new versions while you are still migrating from V2-XML to V3-JSON.
+
+## Pricing Model
+
+Microsoft Translator V3 is priced in the same way V2 was priced; per character, including spaces. The new features in V3 make some changes in what characters are counted for billing.
+
+| V3 Method | Characters Counted for Billing |
+|:-- |:-|
+| `Languages` | No characters submitted, none counted, no charge. |
+| `Translate` | Count is based on how many characters are submitted for translation, and how many languages the characters are translated into. For example, 50 characters submitted and 5 languages requested counts as 50 x 5 = 250 characters. |
+| `Transliterate` | Number of characters submitted for transliteration are counted. |
+| `Dictionary lookup & example` | Number of characters submitted for Dictionary lookup and examples are counted. |
+| `BreakSentence` | No Charge. |
+| `Detect` | No Charge. |
+
+## V3 End Points
+
+Global
+
+* api.cognitive.microsofttranslator.com
+
+## V3 API text translations methods
+
+[`Languages`](reference/v3-0-languages.md)
+
+[`Translate`](reference/v3-0-translate.md)
+
+[`Transliterate`](reference/v3-0-transliterate.md)
+
+[`BreakSentence`](reference/v3-0-break-sentence.md)
+
+[`Detect`](reference/v3-0-detect.md)
+
+[`Dictionary/lookup`](reference/v3-0-dictionary-lookup.md)
+
+[`Dictionary/example`](reference/v3-0-dictionary-examples.md)
+
+## Compatibility and customization
+
+> [!NOTE]
+>
+> The Microsoft Translator Hub will be retired on May 17, 2019. [View important migration information and dates](https://www.microsoft.com/translator/business/hub/).
+
+Microsoft Translator V3 uses neural machine translation by default. As such, it cannot be used with the Microsoft Translator Hub. The Translator Hub only supports legacy statistical machine translation. Customization for neural translation is now available using the Custom Translator. [Learn more about customizing neural machine translation](custom-translator/overview.md)
+
+Neural translation with the V3 text API does not support the use of standard categories (SMT, speech, tech, _generalnn_).
+
+| Version | Endpoint | GDPR Processor Compliance | Use Translator Hub | Use Custom Translator (Preview) |
+| : | :- | : | :-- | : |
+|Translator Version 2| api.microsofttranslator.com| No |Yes |No|
+|Translator Version 3| api.cognitive.microsofttranslator.com| Yes| No| Yes|
+
+**Translator Version 3**
+
+* It's generally available and fully supported.
+* It's GDPR-compliant as a processor and satisfies all ISO 20001 and 20018 as well as SOC 3 certification requirements.
+* It allows you to invoke the neural network translation systems you have customized with Custom Translator (Preview), the new Translator NMT customization feature.
+* It doesn't provide access to custom translation systems created using the Microsoft Translator Hub.
+
+You are using Version 3 of the Translator if you are using the api.cognitive.microsofttranslator.com endpoint.
+
+**Translator Version 2**
+
+* Doesn't satisfy all ISO 20001, 20018, and SOC 3 certification requirements.
+* Doesn't allow you to invoke the neural network translation systems you have customized with the Translator customization feature.
+* Provides access to custom translation systems created using the Microsoft Translator Hub.
+* You are using Version 2 of the Translator if you are using the api.microsofttranslator.com endpoint.
+
+No version of the Translator creates a record of your translations. Your translations are never shared with anyone. More information is available on the [Translator No-Trace](https://www.aka.ms/NoTrace) webpage.
+
+## Links
+
+* [Microsoft Privacy Policy](https://privacy.microsoft.com/privacystatement)
+* [Microsoft Azure Legal Information](https://azure.microsoft.com/support/legal)
+* [Online Services Terms](https://www.microsoftvolumelicensing.com/DocumentSearch.aspx?Mode=3&DocumentTypeId=31)
+
+## Next steps
+
+> [!div class="nextstepaction"]
+> [View V3.0 Documentation](reference/v3-0-reference.md)
ai-services Modifications Deprecations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/translator/modifications-deprecations.md
+
+ Title: Modifications to Translator Service
+description: Translator Service changes, modifications, and deprecations
++++++ Last updated : 07/18/2023+++
+# Modifications to Translator Service
+
+Learn about Translator service changes, modifications, and deprecations.
+
+> [!NOTE]
+> Looking for updates and preview announcements? Visit our [What's new](whats-new.md) page to stay up to date with release notes, feature enhancements, and our newest documentation.
+
+## November 2022
+
+### Changes to Translator `Usage` metrics
+
+> [!IMPORTANT]
+> **`Characters Translated`** and **`Characters Trained`** metrics are deprecated and have been removed from the Azure portal.
+
+|Deprecated metric| Current metric(s) | Description|
+||||
+|Characters Translated (Deprecated)</br></br></br></br>|**&bullet; Text Characters Translated**</br></br>**&bullet;Text Custom Characters Translated**| &bullet; Number of characters in incoming **text** translation request.</br></br> &bullet; Number of characters in incoming **custom** translation request. |
+|Characters Trained (Deprecated) | **&bullet; Text Trained Characters** | &bullet; Number of characters **trained** using text translation service.|
+
+* In 2021, two new metrics, **Text Characters Translated** and **Text Custom Characters Translated**, were added to provide more granular service usage data. These metrics replaced **Characters Translated**, which provided combined usage data for the general and custom text translation service.
+
+* Similarly, the **Text Trained Characters** metric was added to replace the **Characters Trained** metric.
+
+* **Characters Trained** and **Characters Translated** metrics have had continued support in the Azure portal with the deprecated flag to allow migration to the current metrics. As of October 2022, Characters Trained and Characters Translated are no longer available in the Azure portal.
++
ai-services Prevent Translation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/translator/prevent-translation.md
+
+ Title: Prevent content translation - Translator
+
+description: Prevent translation of content with the Translator. The Translator allows you to tag content so that it isn't translated.
++++++ Last updated : 07/18/2023+++
+# How to prevent translation of content with the Translator
+
+The Translator allows you to tag content so that it isn't translated. For example, you may want to tag code, a brand name, or a word/phrase that doesn't make sense when localized.
+
+## Methods for preventing translation
+
+1. Tag your content with `notranslate`. By design, this tag works only when the input textType is set to HTML (a request sketch follows this list).
+
+ Example:
+
+ ```html
+ <span class="notranslate">This will not be translated.</span>
+ <span>This will be translated. </span>
+ ```
+
+ ```html
+ <div class="notranslate">This will not be translated.</div>
+ <div>This will be translated. </div>
+ ```
+
+2. Tag your content with `translate="no"`. This attribute works only when the input textType is set to HTML.
+
+ Example:
+
+ ```html
+ <span translate="no">This will not be translated.</span>
+ <span>This will be translated. </span>
+ ```
+
+ ```html
+ <div translate="no">This will not be translated.</div>
+ <div>This will be translated. </div>
+ ```
+
+3. Use the [dynamic dictionary](dynamic-dictionary.md) to prescribe a specific translation.
+
+4. Don't pass the string to the Translator for translation.
+
+5. Custom Translator: Use a [dictionary in Custom Translator](custom-translator/concepts/dictionaries.md) to prescribe the translation of a phrase with 100% probability.
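+
+The following minimal C# sketch (the key is a placeholder) shows the first method in practice: the request sets `textType=html` so that the `notranslate` class is honored:
+
+```csharp
+using System;
+using System.Net.Http;
+using System.Text;
+
+// Placeholders you supply.
+string key = "<YOUR-TRANSLATOR-KEY>";
+string endpoint = "https://api.cognitive.microsofttranslator.com";
+
+// textType=html is required for the notranslate class to be honored.
+string route = "/translate?api-version=3.0&from=en&to=fr&textType=html";
+string body = "[{\"Text\": \"<div class=\\\"notranslate\\\">This will not be translated.</div><div>This will be translated.</div>\"}]";
+
+using var client = new HttpClient();
+using var request = new HttpRequestMessage(HttpMethod.Post, endpoint + route);
+request.Headers.Add("Ocp-Apim-Subscription-Key", key);
+request.Content = new StringContent(body, Encoding.UTF8, "application/json");
+
+HttpResponseMessage response = await client.SendAsync(request);
+Console.WriteLine(await response.Content.ReadAsStringAsync());
+```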
++
+## Next steps
+> [!div class="nextstepaction"]
+> [Use the Translate operation to translate text](reference/v3-0-translate.md)
ai-services Profanity Filtering https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/translator/profanity-filtering.md
+
+ Title: Profanity filtering - Translator
+
+description: Use Translator profanity filtering to determine the level of profanity translated in your text.
++++++ Last updated : 07/18/2023+++
+# Add profanity filtering with the Translator
+
+Normally the Translator service retains in the translation any profanity that is present in the source. The degree of profanity and the context that makes words profane differ between cultures. As a result, the degree of profanity in the target language may be amplified or reduced.
+
+If you want to avoid seeing profanity in the translation, even if profanity is present in the source text, use the profanity filtering option available in the Translate() method. This option allows you to choose whether you want the profanity deleted, marked with appropriate tags, or no action taken.
+
+The Translate() method takes the "options" parameter, which contains the new element "ProfanityAction." The accepted values of ProfanityAction are "NoAction," "Marked," and "Deleted." For the value of "Marked," an additional, optional element "ProfanityMarker" can take the values "Asterisk" (default) and "Tag."
++
+## Accepted values and examples of ProfanityMarker and ProfanityAction
+| ProfanityAction value | ProfanityMarker value | Action | Example: Source - Spanish| Example: Target - English|
+|:--|:--|:--|:--|:--|
+| NoAction| | Default. Same as not setting the option. Profanity passes from source to target. | Que coche de \<insert-profane-word> | What a \<insert-profane-word> car |
+| Marked | Asterisk | Profane words are replaced by asterisks (default). | Que coche de \<insert-profane-word> | What a *** car |
+| Marked | Tag | Profane words are surrounded by XML tags \<profanity\>...\</profanity>. | Que coche de \<insert-profane-word> | What a \<profanity> \<insert-profane-word> \</profanity> car |
+| Deleted | | Profane words are removed from the output without replacement. | Que coche de \<insert-profane-word> | What a car |
+
+In the above examples, **\<insert-profane-word>** is a placeholder for profane words.
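+
+As an illustration, the following minimal C# sketch (the key is a placeholder) requests marking with tags by passing `profanityAction` and `profanityMarker` as query parameters on the v3 Translate route:
+
+```csharp
+using System;
+using System.Net.Http;
+using System.Text;
+
+// Placeholders you supply.
+string key = "<YOUR-TRANSLATOR-KEY>";
+string endpoint = "https://api.cognitive.microsofttranslator.com";
+
+// Mark profanity with <profanity>...</profanity> tags in the es->en translation.
+string route = "/translate?api-version=3.0&from=es&to=en&profanityAction=Marked&profanityMarker=Tag";
+string body = "[{\"Text\": \"<text that may contain profanity>\"}]";
+
+using var client = new HttpClient();
+using var request = new HttpRequestMessage(HttpMethod.Post, endpoint + route);
+request.Headers.Add("Ocp-Apim-Subscription-Key", key);
+request.Content = new StringContent(body, Encoding.UTF8, "application/json");
+
+HttpResponseMessage response = await client.SendAsync(request);
+Console.WriteLine(await response.Content.ReadAsStringAsync());
+```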
+
+## Next steps
+> [!div class="nextstepaction"]
+> [Apply profanity filtering with your Translator call](reference/v3-0-translate.md)
ai-services Quickstart Text Rest Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/translator/quickstart-text-rest-api.md
+
+ Title: "Quickstart: Azure AI Translator REST APIs"
+
+description: "Learn to translate text with the Translator service REST APIs. Examples are provided in C#, Go, Java, JavaScript and Python."
++++++ Last updated : 07/18/2023+
+ms.devlang: csharp, golang, java, javascript, python
++
+<!-- markdownlint-disable MD033 -->
+<!-- markdownlint-disable MD001 -->
+<!-- markdownlint-disable MD024 -->
+<!-- markdownlint-disable MD036 -->
+<!-- markdownlint-disable MD049 -->
+
+# Quickstart: Azure AI Translator REST APIs
+
+Try the latest version of Azure AI Translator. In this quickstart, get started using the Translator service to [translate text](reference/v3-0-translate.md) using a programming language of your choice or the REST API. For this project, we recommend using the free pricing tier (F0) while you're learning the technology, and later upgrading to a paid tier for production.
+
+## Prerequisites
+
+You need an active Azure subscription. If you don't have an Azure subscription, you can [create one for free](https://azure.microsoft.com/free/cognitive-services/).
+
+* Once you have your Azure subscription, create a [Translator resource](https://portal.azure.com/#create/Microsoft.CognitiveServicesTextTranslation) in the Azure portal.
+
+* After your resource deploys, select **Go to resource** and retrieve your key and endpoint.
+
+ * You need the key and endpoint from the resource to connect your application to the Translator service. You paste your key and endpoint into the code later in the quickstart. You can find these values on the Azure portal **Keys and Endpoint** page:
+
+ :::image type="content" source="media/quickstarts/keys-and-endpoint-portal.png" alt-text="Screenshot: Azure portal keys and endpoint page.":::
+
+ > [!NOTE]
+ >
+ > * For this quickstart it is recommended that you use a Translator text single-service global resource.
+ > * With a single-service global resource you'll include one authorization header (**Ocp-Apim-Subscription-key**) with the REST API request. The value for Ocp-Apim-Subscription-key is your Azure secret key for your Translator Text subscription.
+ > * If you choose to use an Azure AI multi-service or regional Translator resource, two authentication headers will be required: (**Ocp-Apim-Subscription-Key** and **Ocp-Apim-Subscription-Region**). The value for Ocp-Apim-Subscription-Region is the region associated with your subscription.
+ > * For more information on how to use the **Ocp-Apim-Subscription-Region** header, _see_ [Text Translation REST API headers](translator-text-apis.md).
+
+<!-- checked -->
+<!--
+ > [!div class="nextstepaction"]
+> [I ran into an issue](https://microsoft.qualtrics.com/jfe/form/SV_0Cl5zkG3CnDjq6O?PLanguage=Csharp&Product=Translator&Page=quickstart-translator&Section=prerequisites)
+-->
+
+## Headers
+
+To call the Translator service via the [REST API](reference/rest-api-guide.md), you need to include the following headers with each request. Don't worry, we include the headers for you in the sample code for each programming language.
+
+For more information on Translator authentication options, _see_ the [Translator v3 reference](./reference/v3-0-reference.md#authentication) guide.
+
+Header|Value| Condition |
+| |: |:|
+|**Ocp-Apim-Subscription-Key** |Your Translator service key from the Azure portal.|&bullet; ***Required***|
+|**Ocp-Apim-Subscription-Region**|The region where your resource was created. |&bullet; ***Required*** when using an Azure AI multi-service or regional (geographic) resource like **West US**.</br>&bullet; ***Optional*** when using a single-service global Translator Resource.
+|**Content-Type**|The content type of the payload. The accepted value is **application/json; charset=UTF-8**.|&bullet; **Required**|
+|**Content-Length**|The **length of the request** body.|&bullet; ***Optional***|
+
+> [!IMPORTANT]
+>
+> Remember to remove the key from your code when you're done, and never post it publicly. For production, use a secure way of storing and accessing your credentials like [Azure Key Vault](../../key-vault/general/overview.md). For more information, _see_ the Azure AI services [security](../security-features.md) article.
+
+## Translate text
+
+The core operation of the Translator service is translating text. In this quickstart, you build a request using a programming language of your choice that takes a single source (`from`) and provides two outputs (`to`). Then we review some parameters that can be used to adjust both the request and the response.
+
+For detailed information regarding Azure AI Translator service request limits, *see* [**Text translation request limits**](service-limits.md#text-translation).
+
+### [C#: Visual Studio](#tab/csharp)
+
+### Set up your Visual Studio project
+
+1. Make sure you have the current version of [Visual Studio IDE](https://visualstudio.microsoft.com/vs/).
+
+ > [!TIP]
+ >
+ > If you're new to Visual Studio, try the [Introduction to Visual Studio](/training/modules/go-get-started/) Learn module.
+
+1. Open Visual Studio.
+
+1. On the Start page, choose **Create a new project**.
+
+ :::image type="content" source="media/quickstarts/start-window.png" alt-text="Screenshot: Visual Studio start window.":::
+
+1. On the **Create a new project page**, enter **console** in the search box. Choose the **Console Application** template, then choose **Next**.
+
+ :::image type="content" source="media/quickstarts/create-new-project.png" alt-text="Screenshot: Visual Studio's create new project page.":::
+
+1. In the **Configure your new project** dialog window, enter `translator_quickstart` in the Project name box. Leave the "Place solution and project in the same directory" checkbox **unchecked** and select **Next**.
+
+ :::image type="content" source="media/quickstarts/configure-new-project.png" alt-text="Screenshot: Visual Studio's configure new project dialog window.":::
+
+1. In the **Additional information** dialog window, make sure **.NET 6.0 (Long-term support)** is selected. Leave the "Don't use top-level statements" checkbox **unchecked** and select **Create**.
+
+ :::image type="content" source="media/quickstarts/additional-information.png" alt-text="Screenshot: Visual Studio's additional information dialog window.":::
+
+### Install the Newtonsoft.json package with NuGet
+
+1. Right-click on your translator_quickstart project and select **Manage NuGet Packages...** .
+
+ :::image type="content" source="media/quickstarts/manage-nuget.png" alt-text="Screenshot of the NuGet package search box.":::
+
+1. Select the Browse tab and type Newtonsoft.json.
+
+ :::image type="content" source="media/quickstarts/newtonsoft.png" alt-text="Screenshot of the NuGet package install window.":::
+
+1. Select install from the right package manager window to add the package to your project.
+
+ :::image type="content" source="media/quickstarts/install-newtonsoft.png" alt-text="Screenshot of the NuGet package install button.":::
+<!-- checked -->
+<!-- [!div class="nextstepaction"]
+> [I ran into an issue](https://microsoft.qualtrics.com/jfe/form/SV_0Cl5zkG3CnDjq6O?PLanguage=Csharp&Product=Translator&Page=quickstart-translator&Section=set-up-your-visual-studio-project) -->
+
+### Build your C# application
+
+> [!NOTE]
+>
+> * Starting with .NET 6, new projects using the `console` template generate a new program style that differs from previous versions.
+> * The new output uses recent C# features that simplify the code you need to write.
+> * When you use the newer version, you only need to write the body of the `Main` method. You don't need to include top-level statements, global using directives, or implicit using directives.
+> * For more information, _see_ [**New C# templates generate top-level statements**](/dotnet/core/tutorials/top-level-templates).
+
+1. Open the **Program.cs** file.
+
+1. Delete the pre-existing code, including the line `Console.WriteLine("Hello World!")`. Copy and paste the code sample into your application's Program.cs file. Make sure you update the key variable with the value from your Azure portal Translator instance:
+
+```csharp
+using System.Text;
+using Newtonsoft.Json;
+
+class Program
+{
+ private static readonly string key = "<your-translator-key>";
+ private static readonly string endpoint = "https://api.cognitive.microsofttranslator.com";
+
+ // location, also known as region.
+ // required if you're using a multi-service or regional (not global) resource. It can be found in the Azure portal on the Keys and Endpoint page.
+ private static readonly string location = "<YOUR-RESOURCE-LOCATION>";
+
+ static async Task Main(string[] args)
+ {
+ // Input and output languages are defined as parameters.
+ string route = "/translate?api-version=3.0&from=en&to=fr&to=zu";
+ string textToTranslate = "I would really like to drive your car around the block a few times!";
+ object[] body = new object[] { new { Text = textToTranslate } };
+ var requestBody = JsonConvert.SerializeObject(body);
+
+ using (var client = new HttpClient())
+ using (var request = new HttpRequestMessage())
+ {
+ // Build the request.
+ request.Method = HttpMethod.Post;
+ request.RequestUri = new Uri(endpoint + route);
+ request.Content = new StringContent(requestBody, Encoding.UTF8, "application/json");
+ request.Headers.Add("Ocp-Apim-Subscription-Key", key);
+ // location required if you're using a multi-service or regional (not global) resource.
+ request.Headers.Add("Ocp-Apim-Subscription-Region", location);
+
+ // Send the request and get response.
+ HttpResponseMessage response = await client.SendAsync(request).ConfigureAwait(false);
+ // Read response as a string.
+ string result = await response.Content.ReadAsStringAsync();
+ Console.WriteLine(result);
+ }
+ }
+}
+
+```
+<!-- checked -->
+<!-- > [!div class="nextstepaction"]
+> [I ran into an issue](https://microsoft.qualtrics.com/jfe/form/SV_0Cl5zkG3CnDjq6O?PLanguage=Csharp&Product=Translator&Page=quickstart-translator&Section=build-your-c#-application) -->
+
+### Run your C# application
+
+Once you've added the code sample to your application, choose the green **Start** button next to **translator_quickstart** to build and run your program, or press **F5**.
+
+**Translation output:**
+
+After a successful call, you should see the following response:
+
+```json
+[
+ {
+ "detectedLanguage": {
+ "language": "en",
+ "score": 1.0
+ },
+ "translations": [
+ {
+ "text": "J'aimerais vraiment conduire votre voiture autour du pâté de maisons plusieurs fois!",
+ "to": "fr"
+ },
+ {
+ "text": "Ngingathanda ngempela ukushayela imoto yakho endaweni evimbelayo izikhathi ezimbalwa!",
+ "to": "zu"
+ }
+ ]
+ }
+]
+
+```
+<!-- checked -->
+<!--
+ > [!div class="nextstepaction"]
+> [My REST API call was successful](#next-steps) [I ran into an issue](https://microsoft.qualtrics.com/jfe/form/SV_0Cl5zkG3CnDjq6O?PLanguage=Csharp&Product=Translator&Page=quickstart-translator&Section=run-your-c#-application) -->
+
+### [Go](#tab/go)
+
+### Set up your Go environment
+
+You can use any text editor to write Go applications. We recommend using the latest version of [Visual Studio Code and the Go extension](/azure/developer/go/configure-visual-studio-code).
+
+> [!TIP]
+>
+> If you're new to Go, try the [Get started with Go](/training/modules/go-get-started/) Learn module.
+
+1. If you haven't done so already, [download and install Go](https://go.dev/doc/install).
+
+ * Download the Go version for your operating system.
+ * Once the download is complete, run the installer.
+ * Open a command prompt and enter the following to confirm Go was installed:
+
+ ```console
+ go version
+ ```
+<!-- checked -->
+<!--
+ > [!div class="nextstepaction"]
+> [I ran into an issue](https://microsoft.qualtrics.com/jfe/form/SV_0Cl5zkG3CnDjq6O?PLanguage=Go&Product=Translator&Page=quickstart-translator&Section=set-up-your-go-environment) -->
+
+### Build your Go application
+
+1. In a console window (such as cmd, PowerShell, or Bash), create a new directory for your app called **translator-app**, and navigate to it.
+
+1. Create a new Go file named **translation.go** in the **translator-app** directory.
+
+1. Copy and paste the provided code sample into your **translation.go** file. Make sure you update the key variable with the value from your Azure portal Translator instance:
+
+```go
+package main
+
+import (
+ "bytes"
+ "encoding/json"
+ "fmt"
+ "log"
+ "net/http"
+ "net/url"
+)
+
+func main() {
+ key := "<YOUR-TRANSLATOR-KEY>"
+    endpoint := "https://api.cognitive.microsofttranslator.com"
+    uri := endpoint + "/translate?api-version=3.0"
+
+ // location, also known as region.
+ // required if you're using a multi-service or regional (not global) resource. It can be found in the Azure portal on the Keys and Endpoint page.
+ location := "<YOUR-RESOURCE-LOCATION>"
+
+ // Build the request URL. See: https://go.dev/pkg/net/url/#example_URL_Parse
+ u, _ := url.Parse(uri)
+ q := u.Query()
+ q.Add("from", "en")
+ q.Add("to", "fr")
+ q.Add("to", "zu")
+ u.RawQuery = q.Encode()
+
+ // Create an anonymous struct for your request body and encode it to JSON
+ body := []struct {
+ Text string
+ }{
+ {Text: "I would really like to drive your car around the block a few times."},
+ }
+ b, _ := json.Marshal(body)
+
+ // Build the HTTP POST request
+ req, err := http.NewRequest("POST", u.String(), bytes.NewBuffer(b))
+ if err != nil {
+ log.Fatal(err)
+ }
+ // Add required headers to the request
+ req.Header.Add("Ocp-Apim-Subscription-Key", key)
+ // location required if you're using a multi-service or regional (not global) resource.
+ req.Header.Add("Ocp-Apim-Subscription-Region", location)
+ req.Header.Add("Content-Type", "application/json")
+
+ // Call the Translator API
+ res, err := http.DefaultClient.Do(req)
+ if err != nil {
+ log.Fatal(err)
+ }
+
+ // Decode the JSON response
+ var result interface{}
+ if err := json.NewDecoder(res.Body).Decode(&result); err != nil {
+ log.Fatal(err)
+ }
+ // Format and print the response to terminal
+ prettyJSON, _ := json.MarshalIndent(result, "", " ")
+ fmt.Printf("%s\n", prettyJSON)
+}
+```
+<!-- checked -->
+<!--
+ > [!div class="nextstepaction"]
+> [I ran into an issue](https://microsoft.qualtrics.com/jfe/form/SV_0Cl5zkG3CnDjq6O?PLanguage=Go&Product=Translator&Page=quickstart-translator&Section=build-your-go-application) -->
+
+### Run your Go application
+
+Once you've added a code sample to your application, your Go program can be executed in a command or terminal prompt. Make sure your prompt's path is set to the **translator-app** folder and use the following command:
+
+```console
+ go run translation.go
+```
+
+**Translation output:**
+
+After a successful call, you should see the following response:
+
+```json
+[
+ {
+ "detectedLanguage": {
+ "language": "en",
+ "score": 1.0
+ },
+ "translations": [
+ {
+ "text": "J'aimerais vraiment conduire votre voiture autour du pâté de maisons plusieurs fois!",
+ "to": "fr"
+ },
+ {
+ "text": "Ngingathanda ngempela ukushayela imoto yakho endaweni evimbelayo izikhathi ezimbalwa!",
+ "to": "zu"
+ }
+ ]
+ }
+]
+
+```
+<!-- checked -->
+<!--
+ > [!div class="nextstepaction"]
+> [My REST API call was successful](#next-steps) [I ran into an issue](https://microsoft.qualtrics.com/jfe/form/SV_0Cl5zkG3CnDjq6O?PLanguage=Go&Product=Translator&Page=quickstart-translator&Section=run-your-go-application) -->
+
+### [Java: Gradle](#tab/java)
+
+### Set up your Java environment
+
+* You should have the latest version of [Visual Studio Code](https://code.visualstudio.com/) or your preferred IDE. _See_ [Java in Visual Studio Code](https://code.visualstudio.com/docs/languages/java).
+
+ >[!TIP]
+ >
+  > * Visual Studio Code offers a **Coding Pack for Java** for Windows and macOS. The coding pack is a bundle of VS Code, the Java Development Kit (JDK), and a collection of suggested extensions by Microsoft. The Coding Pack can also be used to fix an existing development environment.
+  > * If you're using VS Code and the Coding Pack for Java, install the [**Gradle for Java**](https://marketplace.visualstudio.com/items?itemName=vscjava.vscode-gradle) extension.
+
+* If you aren't using VS Code, make sure you have the following installed in your development environment:
+
+ * A [**Java Development Kit** (OpenJDK)](/java/openjdk/download#openjdk-17) version 8 or later.
+
+ * [**Gradle**](https://docs.gradle.org/current/userguide/installation.html), version 6.8 or later.
+<!-- checked -->
+<!--
+ > [!div class="nextstepaction"]
+> [I ran into an issue](https://microsoft.qualtrics.com/jfe/form/SV_0Cl5zkG3CnDjq6O?PLanguage=Java&Product=Translator&Page=quickstart-translator&Section=set-up-your-java-environment) -->
+
+### Create a new Gradle project
+
+1. In a console window (such as cmd, PowerShell, or Bash), create a new directory for your app called **translator-text-app**, and navigate to it.
+
+ ```console
+    mkdir translator-text-app && cd translator-text-app
+ ```
+
+ ```powershell
+ mkdir translator-text-app; cd translator-text-app
+ ```
+
+1. Run the `gradle init` command from the translator-text-app directory. This command creates essential build files for Gradle, including _build.gradle.kts_, which is used at runtime to create and configure your application.
+
+ ```console
+ gradle init --type basic
+ ```
+
+1. When prompted to choose a **DSL**, select **Kotlin**.
+
+1. Accept the default project name (translator-text-app) by selecting **Return** or **Enter**.
+
+1. Update `build.gradle.kts` with the following code:
+
+ ```kotlin
+ plugins {
+ java
+ application
+ }
+ application {
+ mainClass.set("TranslatorText")
+ }
+ repositories {
+ mavenCentral()
+ }
+ dependencies {
+ implementation("com.squareup.okhttp3:okhttp:4.10.0")
+ implementation("com.google.code.gson:gson:2.9.0")
+ }
+ ```
+<!-- checked -->
+
+<!-- > [!div class="nextstepaction"]
+> [I ran into an issue](https://microsoft.qualtrics.com/jfe/form/SV_0Cl5zkG3CnDjq6O?PLanguage=Java&Product=Translator&Page=quickstart-translator&Section=create-a-gradle-project) -->
+
+### Create your Java Application
+
+1. From the translator-text-app directory, run the following command:
+
+ ```console
+ mkdir -p src/main/java
+ ```
+
+ You create the following directory structure:
+
+ :::image type="content" source="media/quickstarts/java-directories-2.png" alt-text="Screenshot: Java directory structure.":::
+
+1. Navigate to the `java` directory and create a file named **`TranslatorText.java`**.
+
+ > [!TIP]
+ >
+ > * You can create a new file using PowerShell.
+ > * Open a PowerShell window in your project directory by holding down the Shift key and right-clicking the folder.
+ > * Type the following command **New-Item TranslatorText.java**.
+ >
+ > * You can also create a new file in your IDE named `TranslatorText.java` and save it to the `java` directory.
+
+1. Open the `TranslatorText.java` file in your IDE and copy then paste the following code sample into your application. **Make sure you update the key with one of the key values from your Azure portal Translator instance:**
+
+```java
+import java.io.IOException;
+
+import com.google.gson.*;
+import okhttp3.MediaType;
+import okhttp3.OkHttpClient;
+import okhttp3.Request;
+import okhttp3.RequestBody;
+import okhttp3.Response;
+
+public class TranslatorText {
+    private static String key = "<your-translator-key>";
+
+ // location, also known as region.
+ // required if you're using a multi-service or regional (not global) resource. It can be found in the Azure portal on the Keys and Endpoint page.
+ private static String location = "<YOUR-RESOURCE-LOCATION>";
+
+ // Instantiates the OkHttpClient.
+ OkHttpClient client = new OkHttpClient();
+
+ // This function performs a POST request.
+ public String Post() throws IOException {
+ MediaType mediaType = MediaType.parse("application/json");
+ RequestBody body = RequestBody.create(mediaType,
+ "[{\"Text\": \"I would really like to drive your car around the block a few times!\"}]");
+ Request request = new Request.Builder()
+ .url("https://api.cognitive.microsofttranslator.com/translate?api-version=3.0&from=en&to=fr&to=zu")
+ .post(body)
+ .addHeader("Ocp-Apim-Subscription-Key", key)
+ // location required if you're using a multi-service or regional (not global) resource.
+ .addHeader("Ocp-Apim-Subscription-Region", location)
+ .addHeader("Content-type", "application/json")
+ .build();
+ Response response = client.newCall(request).execute();
+ return response.body().string();
+ }
+
+ // This function prettifies the json response.
+ public static String prettify(String json_text) {
+ JsonParser parser = new JsonParser();
+ JsonElement json = parser.parse(json_text);
+ Gson gson = new GsonBuilder().setPrettyPrinting().create();
+ return gson.toJson(json);
+ }
+
+ public static void main(String[] args) {
+ try {
+ TranslatorText translateRequest = new TranslatorText();
+ String response = translateRequest.Post();
+ System.out.println(prettify(response));
+ } catch (Exception e) {
+ System.out.println(e);
+ }
+ }
+}
+```
+<!-- checked -->
+
+<!-- > [!div class="nextstepaction"]
+> [I ran into an issue](https://microsoft.qualtrics.com/jfe/form/SV_0Cl5zkG3CnDjq6O?PLanguage=Java&Product=Translator&Page=quickstart-translator&Section=create-your-java-application) -->
+
+### Build and run your Java application
+
+Once you've added a code sample to your application, navigate back to your main project directory, **translator-text-app**, open a console window, and enter the following commands:
+
+1. Build your application with the `build` command:
+
+ ```console
+ gradle build
+ ```
+
+1. Run your application with the `run` command:
+
+ ```console
+ gradle run
+ ```
+
+**Translation output:**
+
+After a successful call, you should see the following response:
+
+```json
+[
+ {
+ "translations": [
+ {
+ "text": "J'aimerais vraiment conduire votre voiture autour du pâté de maisons plusieurs fois!",
+ "to": "fr"
+ },
+ {
+ "text": "Ngingathanda ngempela ukushayela imoto yakho endaweni evimbelayo izikhathi ezimbalwa!",
+ "to": "zu"
+ }
+ ]
+ }
+]
+
+```
+<!-- checked -->
+
+<!-- > [!div class="nextstepaction"]
+> [My REST API call was successful](#next-steps) [I ran into an issue](https://microsoft.qualtrics.com/jfe/form/SV_0Cl5zkG3CnDjq6O?PLanguage=Java&Product=Translator&Page=quickstart-translator&Section=build-and-run-your-java-application) -->
+
+### [JavaScript: Node.js](#tab/nodejs)
+
+### Set up your Node.js Express project
+
+1. If you haven't done so already, install the latest version of [Node.js](https://nodejs.org/en/download/). Node Package Manager (npm) is included with the Node.js installation.
+
+ > [!TIP]
+ >
+ > If you're new to Node.js, try the [Introduction to Node.js](/training/modules/intro-to-nodejs/) Learn module.
+
+1. In a console window (such as cmd, PowerShell, or Bash), create and navigate to a new directory for your app named `translator-app`.
+
+ ```console
+ mkdir translator-app && cd translator-app
+ ```
+
+ ```powershell
+ mkdir translator-app; cd translator-app
+ ```
+
+1. Run the `npm init` command to initialize the application and scaffold your project.
+
+ ```console
+ npm init
+ ```
+
+1. Specify your project's attributes using the prompts presented in the terminal.
+
+ * The most important attributes are name, version number, and entry point.
+    * We recommend keeping `index.js` for the entry point name. The description, test command, GitHub repository, keywords, author, and license information are optional attributes; they can be skipped for this project.
+ * Accept the suggestions in parentheses by selecting **Return** or **Enter**.
+ * After you've completed the prompts, a `package.json` file will be created in your translator-app directory.
+
+1. Open a console window and use npm to install the `axios` HTTP library and `uuid` package:
+
+ ```console
+ npm install axios uuid
+ ```
+
+1. Create the `index.js` file in the application directory.
+
+ > [!TIP]
+ >
+ > * You can create a new file using PowerShell.
+ > * Open a PowerShell window in your project directory by holding down the Shift key and right-clicking the folder.
+ > * Type the following command **New-Item index.js**.
+ >
+ > * You can also create a new file named `index.js` in your IDE and save it to the `translator-app` directory.
+<!-- checked -->
+
+<!-- > [!div class="nextstepaction"]
+> [I ran into an issue](https://microsoft.qualtrics.com/jfe/form/SV_0Cl5zkG3CnDjq6O?PLanguage=Python&Product=Translator&Page=quickstart-translator&Section=set-up-your-nodejs-express-project) -->
+
+### Build your JavaScript application
+
+Add the following code sample to your `index.js` file. **Make sure you update the key variable with the value from your Azure portal Translator instance**:
+
+```javascript
+ const axios = require('axios').default;
+ const { v4: uuidv4 } = require('uuid');
+
+ let key = "<your-translator-key>";
+ let endpoint = "https://api.cognitive.microsofttranslator.com";
+
+ // location, also known as region.
+ // required if you're using a multi-service or regional (not global) resource. It can be found in the Azure portal on the Keys and Endpoint page.
+ let location = "<YOUR-RESOURCE-LOCATION>";
+
+ axios({
+ baseURL: endpoint,
+ url: '/translate',
+ method: 'post',
+ headers: {
+ 'Ocp-Apim-Subscription-Key': key,
+ // location required if you're using a multi-service or regional (not global) resource.
+ 'Ocp-Apim-Subscription-Region': location,
+ 'Content-type': 'application/json',
+ 'X-ClientTraceId': uuidv4().toString()
+ },
+ params: {
+ 'api-version': '3.0',
+ 'from': 'en',
+ 'to': ['fr', 'zu']
+ },
+ data: [{
+ 'text': 'I would really like to drive your car around the block a few times!'
+ }],
+ responseType: 'json'
+ }).then(function(response){
+ console.log(JSON.stringify(response.data, null, 4));
+ })
+
+```
+<!-- checked -->
+<!--
+ > [!div class="nextstepaction"]
+> [I ran into an issue](https://microsoft.qualtrics.com/jfe/form/SV_0Cl5zkG3CnDjq6O?PLanguage=Python&Product=Translator&Page=quickstart-translator&Section=build-your-javascript-application) -->
+
+### Run your JavaScript application
+
+Once you've added the code sample to your application, run your program:
+
+1. Navigate to your application directory (translator-app).
+
+1. Type the following command in your terminal:
+
+ ```console
+ node index.js
+ ```
+
+**Translation output:**
+
+After a successful call, you should see the following response:
+
+```json
+[
+ {
+ "translations": [
+ {
+ "text": "J'aimerais vraiment conduire votre voiture autour du pâté de maisons plusieurs fois!",
+ "to": "fr"
+ },
+ {
+ "text": "Ngingathanda ngempela ukushayela imoto yakho endaweni evimbelayo izikhathi ezimbalwa!",
+ "to": "zu"
+ }
+ ]
+ }
+]
+
+```
+<!-- checked -->
+<!--
+ > [!div class="nextstepaction"]
+> [My REST API call was successful](#next-steps) [I ran into an issue](https://microsoft.qualtrics.com/jfe/form/SV_0Cl5zkG3CnDjq6O?PLanguage=Java&Product=Translator&Page=quickstart-translator&Section=run-your-javascript-application) -->
+
+### [Python](#tab/python)
+
+### Set up your Python project
+
+1. If you haven't done so already, install the latest version of [Python 3.x](https://www.python.org/downloads/). The Python installer package (pip) is included with the Python installation.
+
+ > [!TIP]
+ >
+ > If you're new to Python, try the [Introduction to Python](/training/paths/beginner-python/) Learn module.
+
+1. Open a terminal window and use pip to install the Requests library:
+
+    ```console
+    pip install requests
+    ```
+
+    > [!NOTE]
+    > This quickstart also uses the built-in Python modules `uuid` and `json`. They ship with Python and don't need to be installed.
+<!-- checked -->
+<!--
+ > [!div class="nextstepaction"]
+> [I ran into an issue](https://microsoft.qualtrics.com/jfe/form/SV_0Cl5zkG3CnDjq6O?PLanguage=Python&Product=Translator&Page=quickstart-translator&Section=set-up-your-python-project) -->
+
+### Build your Python application
+
+1. Create a new Python file called **translator-app.py** in your preferred editor or IDE.
+
+1. Add the following code sample to your `translator-app.py` file. **Make sure you update the key with one of the values from your Azure portal Translator instance**.
+
+```python
+import requests, uuid, json
+
+# Add your key and endpoint
+key = "<your-translator-key>"
+endpoint = "https://api.cognitive.microsofttranslator.com"
+
+# location, also known as region.
+# required if you're using a multi-service or regional (not global) resource. It can be found in the Azure portal on the Keys and Endpoint page.
+location = "<YOUR-RESOURCE-LOCATION>"
+
+path = '/translate'
+constructed_url = endpoint + path
+
+params = {
+ 'api-version': '3.0',
+ 'from': 'en',
+ 'to': ['fr', 'zu']
+}
+
+headers = {
+ 'Ocp-Apim-Subscription-Key': key,
+ # location required if you're using a multi-service or regional (not global) resource.
+ 'Ocp-Apim-Subscription-Region': location,
+ 'Content-type': 'application/json',
+ 'X-ClientTraceId': str(uuid.uuid4())
+}
+
+# You can pass more than one object in body.
+body = [{
+ 'text': 'I would really like to drive your car around the block a few times!'
+}]
+
+response = requests.post(constructed_url, params=params, headers=headers, json=body)
+result = response.json()
+
+print(json.dumps(result, sort_keys=True, ensure_ascii=False, indent=4, separators=(',', ': ')))
+
+```
+<!-- checked -->
+<!--
+ > [!div class="nextstepaction"]
+> [I ran into an issue](https://microsoft.qualtrics.com/jfe/form/SV_0Cl5zkG3CnDjq6O?PLanguage=Python&Product=Translator&Page=quickstart-translator&Section=build-your-python-application) -->
+
+### Run your Python application
+
+Once you've added a code sample to your application, build and run your program:
+
+1. Navigate to your **translator-app.py** file.
+
+1. Type the following command in your console:
+
+ ```console
+ python translator-app.py
+ ```
+
+**Translation output:**
+
+After a successful call, you should see the following response:
+
+```json
+[
+ {
+ "translations": [
+ {
+ "text": "J'aimerais vraiment conduire votre voiture autour du pâté de maisons plusieurs fois!",
+ "to": "fr"
+ },
+ {
+ "text": "Ngingathanda ngempela ukushayela imoto yakho endaweni evimbelayo izikhathi ezimbalwa!",
+ "to": "zu"
+ }
+ ]
+ }
+]
+
+```
+<!-- checked -->
+<!--
+ > [!div class="nextstepaction"]
+> [My REST API call was successful](#next-steps) [I ran into an issue](https://microsoft.qualtrics.com/jfe/form/SV_0Cl5zkG3CnDjq6O?PLanguage=Python&Product=Translator&Page=quickstart-translator&Section=run-your-python-application) -->
+++
+## Next steps
+
+That's it, congratulations! You've learned to use the Translator service to translate text.
+
+ Explore our how-to documentation and take a deeper dive into Translation service capabilities:
+
+* [**Translate text**](translator-text-apis.md#translate-text)
+
+* [**Transliterate text**](translator-text-apis.md#transliterate-text)
+
+* [**Detect and identify language**](translator-text-apis.md#detect-language)
+
+* [**Get sentence length**](translator-text-apis.md#get-sentence-length)
+
+* [**Dictionary lookup and alternate translations**](translator-text-apis.md#dictionary-examples-translations-in-context)
ai-services Quickstart Text Sdk https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/translator/quickstart-text-sdk.md
+
+ Title: "Quickstart: Azure AI Translator SDKs"
+
+description: "Learn to translate text with the Translator service SDks in a programming language of your choice: C#, Java, JavaScript, or Python."
++++++ Last updated : 07/18/2023+
+ms.devlang: csharp, java, javascript, python
+
+zone_pivot_groups: programming-languages-set-translator-sdk
++
+<!-- markdownlint-disable MD033 -->
+<!-- markdownlint-disable MD001 -->
+<!-- markdownlint-disable MD024 -->
+<!-- markdownlint-disable MD036 -->
+<!-- markdownlint-disable MD049 -->
+
+# Quickstart: Azure AI Translator SDKs (preview)
+
+> [!IMPORTANT]
+>
+> * The Translator text SDKs are currently available in public preview. Features, approaches, and processes may change prior to general availability (GA), based on user feedback.
+
+In this quickstart, get started using the Translator service to [translate text](reference/v3-0-translate.md) using a programming language of your choice. For this project, we recommend using the free pricing tier (F0) while you're learning the technology, and later upgrading to a paid tier for production.
+
+## Prerequisites
+
+You need an active Azure subscription. If you don't have an Azure subscription, you can [create one for free](https://azure.microsoft.com/free/cognitive-services/).
+
+* Once you have your Azure subscription, create a [Translator resource](https://portal.azure.com/#create/Microsoft.CognitiveServicesTextTranslation) in the Azure portal.
+
+* After your resource deploys, select **Go to resource** and retrieve your key and endpoint.
+
+ * Get the key, endpoint, and region from the resource to connect your application to the Translator service. Paste these values into the code later in the quickstart. You can find them on the Azure portal **Keys and Endpoint** page:
+
+ :::image type="content" source="media/quickstarts/keys-and-endpoint-portal.png" alt-text="Screenshot: Azure portal keys and endpoint page.":::
+++++++++
+That's it, congratulations! In this quickstart, you used a Text Translation SDK to translate text.
+
+## Next steps
+
+Learn more about Text Translation development options:
+
+> [!div class="nextstepaction"]
+>[Text Translation SDK overview](text-sdk-overview.md) </br></br>[Text Translation V3 reference](reference/v3-0-reference.md)
ai-services Rest Api Guide https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/translator/reference/rest-api-guide.md
+
+ Title: "Text Translation REST API reference guide"
+
+description: View a list of, and links to, the Text Translation REST APIs.
++++++ Last updated : 07/18/2023+++
+# Text Translation REST API
+
+Text Translation is a cloud-based feature of the Azure AI Translator service and is part of the Azure AI service family of REST APIs. The Text Translation API translates text between language pairs across all [supported languages and dialects](../../language-support.md). The available methods are listed in the table below:
+
+| Request| Method| Description|
+|---|---|---|
+| [**languages**](v3-0-languages.md) | **GET** | Returns the set of languages currently supported by the **translation**, **transliteration**, and **dictionary** methods. This request doesn't require authentication headers and you don't need a Translator resource to view the supported language set.|
+|[**translate**](v3-0-translate.md) | **POST**| Translate specified source language text into the target language text.|
+|[**transliterate**](v3-0-transliterate.md) | **POST** | Map source language script or alphabet to a target language script or alphabet.|
+|[**detect**](v3-0-detect.md) | **POST** | Identify the source language. |
+|[**breakSentence**](v3-0-break-sentence.md) | **POST** | Returns an array of integers representing the length of sentences in a source text. |
+| [**dictionary/lookup**](v3-0-dictionary-lookup.md) | **POST** | Returns alternatives for single word translations. |
+| [**dictionary/examples**](v3-0-dictionary-examples.md) | **POST** | Returns how a term is used in context. |
+
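+For orientation, the POST methods in the table all share the same request pattern: the global endpoint, a method path, an `api-version` query parameter, and resource-key headers. The following is a minimal sketch of that pattern using Python and the `requests` library; the key, region, and text values are placeholders that you substitute with your own:
+
+```python
+import requests, uuid
+
+key = "<your-translator-key>"          # placeholder
+region = "<your-resource-region>"      # placeholder; needed for regional (not global) resources
+endpoint = "https://api.cognitive.microsofttranslator.com"
+
+headers = {
+    "Ocp-Apim-Subscription-Key": key,
+    "Ocp-Apim-Subscription-Region": region,
+    "Content-Type": "application/json",
+    "X-ClientTraceId": str(uuid.uuid4()),
+}
+
+# The same pattern applies to translate, transliterate, detect, breakSentence, and the dictionary methods.
+response = requests.post(
+    endpoint + "/translate",
+    params={"api-version": "3.0", "to": "fr"},
+    headers=headers,
+    json=[{"Text": "Hello, world!"}],
+)
+print(response.json())
+```
+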
+> [!div class="nextstepaction"]
+> [Create a Translator resource in the Azure portal.](../create-translator-resource.md)
+
+> [!div class="nextstepaction"]
+> [Quickstart: REST API and your programming language](../quickstart-text-rest-api.md)
ai-services V3 0 Break Sentence https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/translator/reference/v3-0-break-sentence.md
+
+ Title: Translator BreakSentence Method
+
+description: The Translator BreakSentence method identifies the positioning of sentence boundaries in a piece of text.
+++++++ Last updated : 07/18/2023+++
+# Translator 3.0: BreakSentence
+
+Identifies the positioning of sentence boundaries in a piece of text.
+
+## Request URL
+
+Send a `POST` request to:
+
+```HTTP
+https://api.cognitive.microsofttranslator.com/breaksentence?api-version=3.0
+```
+
+## Request parameters
+
+Request parameters passed on the query string are:
+
+| Query Parameter | Description |
+| -| -- |
+| api-version <img width=200/> | **Required query parameter**.<br/>Version of the API requested by the client. Value must be `3.0`. |
+| language | **Optional query parameter**.<br/>Language tag identifying the language of the input text. If a code isn't specified, automatic language detection will be applied. |
+| script | **Optional query parameter**.<br/>Script tag identifying the script used by the input text. If a script isn't specified, the default script of the language will be assumed. |
+
+Request headers include:
+
+| Headers | Description |
+| - | -- |
+| Authentication header(s) <img width=200/> | **Required request header**.<br/>See <a href="v3-0-reference.md#authentication">available options for authentication</a>. |
+| Content-Type | **Required request header**.<br/>Specifies the content type of the payload. Possible values are: `application/json`. |
+| Content-Length | **Required request header**.<br/>The length of the request body. |
+| X-ClientTraceId | **Optional**.<br/>A client-generated GUID to uniquely identify the request. You can omit this header if you include the trace ID in the query string using a query parameter named `ClientTraceId`. |
+
+## Request body
+
+The body of the request is a JSON array. Each array element is a JSON object with a string property named `Text`. Sentence boundaries are computed for the value of the `Text` property. A sample request body with one piece of text looks like this:
+
+```json
+[
+ { "Text": "How are you? I am fine. What did you do today?" }
+]
+```
+
+The following limitations apply:
+
+* The array can have at most 100 elements.
+* The text value of an array element can't exceed 50,000 characters including spaces.
+* The entire text included in the request can't exceed 50,000 characters including spaces.
+* If the `language` query parameter is specified, then all array elements must be in the same language. Otherwise, language auto-detection is applied to each array element independently.
+
+## Response body
+
+A successful response is a JSON array with one result for each string in the input array. A result object includes the following properties:
+
+* `sentLen`: An array of integers representing the lengths of the sentences in the text element. The length of the array is the number of sentences, and the values are the length of each sentence.
+
+* `detectedLanguage`: An object describing the detected language through the following properties:
+
+ * `language`: Code of the detected language.
+
+ * `score`: A float value indicating the confidence in the result. The score is between zero (0) and one (1.0). A low score (<= 0.4) indicates a low confidence.
+
+The `detectedLanguage` property is only present in the result object when language auto-detection is requested.
+
+An example JSON response is:
+
+```json
+[
+ {
+ "detectedLanguage": {
+ "language": "en",
+ "score": 1.0
+ },
+ "sentLen": [
+ 13,
+ 11,
+ 22
+ ]
+ }
+]
+```
+
+## Response headers
+
+<table width="100%">
+ <th width="20%">Headers</th>
+ <th>Description</th>
+ <tr>
+ <td>X-RequestId</td>
+ <td>Value generated by the service to identify the request. It's used for troubleshooting purposes.</td>
+ </tr>
+</table>
+
+## Response status codes
+
+The following are the possible HTTP status codes that a request returns.
+
+<table width="100%">
+ <th width="20%">Status Code</th>
+ <th>Description</th>
+ <tr>
+ <td>200</td>
+ <td>Success.</td>
+ </tr>
+ <tr>
+ <td>400</td>
+ <td>One of the query parameters is missing or not valid. Correct request parameters before retrying.</td>
+ </tr>
+ <tr>
+ <td>401</td>
+ <td>The request couldn't be authenticated. Check that credentials are specified and valid.</td>
+ </tr>
+ <tr>
+ <td>403</td>
+    <td>The request isn't authorized. Check the detailed error message. This response code often indicates that all free translations provided with a trial subscription have been used up.</td>
+ </tr>
+ <tr>
+ <td>429</td>
+ <td>The server rejected the request because the client has exceeded request limits.</td>
+ </tr>
+ <tr>
+ <td>500</td>
+ <td>An unexpected error occurred. If the error persists, report it with: date and time of the failure, request identifier from response header `X-RequestId`, and client identifier from request header `X-ClientTraceId`.</td>
+ </tr>
+ <tr>
+ <td>503</td>
+ <td>Server temporarily unavailable. Retry the request. If the error persists, report it with: date and time of the failure, request identifier from response header `X-RequestId`, and client identifier from request header `X-ClientTraceId`.</td>
+ </tr>
+</table>
+
+If an error occurs, the request will also return a JSON error response. The error code is a 6-digit number combining the 3-digit HTTP status code followed by a 3-digit number to further categorize the error. Common error codes can be found on the [v3 Translator reference page](./v3-0-reference.md#errors).
+
+## Examples
+
+The following example shows how to obtain sentence boundaries for a single sentence. The language of the sentence is automatically detected by the service.
+
+```curl
+curl -X POST "https://api.cognitive.microsofttranslator.com/breaksentence?api-version=3.0" -H "Ocp-Apim-Subscription-Key: <client-secret>" -H "Content-Type: application/json" -d "[{'Text':'How are you? I am fine. What did you do today?'}]"
+```
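+
+If you prefer to call the method from code, the following is a minimal sketch of the same request using Python and the `requests` library (the key and region values are placeholders):
+
+```python
+import requests
+
+key = "<your-translator-key>"        # placeholder
+region = "<your-resource-region>"    # placeholder; needed for regional (not global) resources
+
+response = requests.post(
+    "https://api.cognitive.microsofttranslator.com/breaksentence",
+    params={"api-version": "3.0"},
+    headers={
+        "Ocp-Apim-Subscription-Key": key,
+        "Ocp-Apim-Subscription-Region": region,
+        "Content-Type": "application/json",
+    },
+    json=[{"Text": "How are you? I am fine. What did you do today?"}],
+)
+for result in response.json():
+    # For the sample text, prints the sentence lengths, e.g. [13, 11, 22].
+    print(result["sentLen"])
+```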
ai-services V3 0 Detect https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/translator/reference/v3-0-detect.md
+
+ Title: Translator Detect Method
+
+description: Identify the language of a piece of text with the Azure AI Translator Detect method.
+++++++ Last updated : 07/18/2023+++
+# Translator 3.0: Detect
+
+Identifies the language of a piece of text.
+
+## Request URL
+
+Send a `POST` request to:
+
+```HTTP
+https://api.cognitive.microsofttranslator.com/detect?api-version=3.0
+```
+
+## Request parameters
+
+Request parameters passed on the query string are:
+
+| Query parameter | Description |
+| | |
+| api-version | *Required parameter*.<br/>Version of the API requested by the client. Value must be `3.0`. |
+
+Request headers include:
+
+| Headers | Description |
+| | |
+| Authentication header(s) | <em>Required request header</em>.<br/>See [available options for authentication](./v3-0-reference.md#authentication). |
+| Content-Type | *Required request header*.<br/>Specifies the content type of the payload. Possible values are: `application/json`. |
+| Content-Length | *Required request header*.<br/>The length of the request body. |
+| X-ClientTraceId | *Optional*.<br/>A client-generated GUID to uniquely identify the request. Note that you can omit this header if you include the trace ID in the query string using a query parameter named `ClientTraceId`. |
+
+## Request body
+
+The body of the request is a JSON array. Each array element is a JSON object with a string property named `Text`. Language detection is applied to the value of the `Text` property. The language auto-detection works better with longer input text. A sample request body looks like this:
+
+```json
+[
+    { "Text": "Ich würde wirklich gerne Ihr Auto ein paar Mal um den Block fahren." }
+]
+```
+
+The following limitations apply:
+
+* The array can have at most 100 elements.
+* The entire text included in the request cannot exceed 50,000 characters including spaces.
+
+## Response body
+
+A successful response is a JSON array with one result for each string in the input array. A result object includes the following properties:
+
+ * `language`: Code of the detected language.
+
+ * `score`: A float value indicating the confidence in the result. The score is between zero and one and a low score indicates a low confidence.
+
+ * `isTranslationSupported`: A boolean value which is true if the detected language is one of the languages supported for text translation.
+
+ * `isTransliterationSupported`: A boolean value which is true if the detected language is one of the languages supported for transliteration.
+
+ * `alternatives`: An array of other possible languages. Each element of the array is another object with the same properties listed above: `language`, `score`, `isTranslationSupported` and `isTransliterationSupported`.
+
+An example JSON response is:
+
+```json
+[
+    {
+        "language": "de",
+        "score": 1.0,
+        "isTranslationSupported": true,
+        "isTransliterationSupported": false
+    }
+]
+```
+
+## Response headers
+
+| Headers | Description |
+| | |
+| X-RequestId | Value generated by the service to identify the request. It is used for troubleshooting purposes. |
+
+## Response status codes
+
+The following are the possible HTTP status codes that a request returns.
+
+| Status Code | Description |
+| | |
+| 200 | Success. |
+| 400 | One of the query parameters is missing or not valid. Correct request parameters before retrying. |
+| 401 | The request could not be authenticated. Check that credentials are specified and valid. |
+| 403 | The request is not authorized. Check the detailed error message. This often indicates that all free translations provided with a trial subscription have been used up. |
+| 429 | The server rejected the request because the client has exceeded request limits. |
+| 500 | An unexpected error occurred. If the error persists, report it with: date and time of the failure, request identifier from response header `X-RequestId`, and client identifier from request header `X-ClientTraceId`. |
+| 503 | Server temporarily unavailable. Retry the request. If the error persists, report it with: date and time of the failure, request identifier from response header `X-RequestId`, and client identifier from request header `X-ClientTraceId`. |
+
+If an error occurs, the request will also return a JSON error response. The error code is a 6-digit number combining the 3-digit HTTP status code followed by a 3-digit number to further categorize the error. Common error codes can be found on the [v3 Translator reference page](./v3-0-reference.md#errors).
+
+## Examples
+
+The following example shows how to detect the language of a given piece of text.
+
+```curl
+curl -X POST "https://api.cognitive.microsofttranslator.com/detect?api-version=3.0" -H "Ocp-Apim-Subscription-Key: <client-secret>" -H "Content-Type: application/json" -d "[{'Text':'What language is this text written in?'}]"
+```
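+
+A comparable request from code, sketched with Python and the `requests` library (the key and region values are placeholders):
+
+```python
+import requests
+
+key = "<your-translator-key>"        # placeholder
+region = "<your-resource-region>"    # placeholder; needed for regional (not global) resources
+
+response = requests.post(
+    "https://api.cognitive.microsofttranslator.com/detect",
+    params={"api-version": "3.0"},
+    headers={
+        "Ocp-Apim-Subscription-Key": key,
+        "Ocp-Apim-Subscription-Region": region,
+        "Content-Type": "application/json",
+    },
+    json=[{"Text": "What language is this text written in?"}],
+)
+for result in response.json():
+    # Prints the detected language code and its confidence score.
+    print(result["language"], result["score"])
+```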
ai-services V3 0 Dictionary Examples https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/translator/reference/v3-0-dictionary-examples.md
+
+ Title: Translator Dictionary Examples Method
+
+description: The Translator Dictionary Examples method provides examples that show how terms in the dictionary are used in context.
+++++++ Last updated : 07/18/2023+++
+# Translator 3.0: Dictionary Examples
+
+Provides examples that show how terms in the dictionary are used in context. This operation is used in tandem with [Dictionary lookup](./v3-0-dictionary-lookup.md).
+
+## Request URL
+
+Send a `POST` request to:
+
+```HTTP
+https://api.cognitive.microsofttranslator.com/dictionary/examples?api-version=3.0
+```
+
+## Request parameters
+
+Request parameters passed on the query string are:
+
+| Query Parameter | Description |
+| | -- |
+| api-version <img width=200/> | **Required parameter**.<br/>Version of the API requested by the client. Value must be `3.0`. |
+| from | **Required parameter**.<br/>Specifies the language of the input text. The source language must be one of the [supported languages](./v3-0-languages.md) included in the `dictionary` scope. |
+| to | **Required parameter**.<br/>Specifies the language of the output text. The target language must be one of the [supported languages](./v3-0-languages.md) included in the `dictionary` scope. |
+
+Request headers include:
+
+| Headers | Description |
+| | -- |
+| Authentication header(s) <img width=200/> | **Required request header**.<br/>See <a href="v3-0-reference.md#authentication">available options for authentication</a>. |
+| Content-Type | **Required request header**.<br/>Specifies the content type of the payload. Possible values are: `application/json`. |
+| Content-Length | **Required request header**.<br/>The length of the request body. |
+| X-ClientTraceId | **Optional**.<br/>A client-generated GUID to uniquely identify the request. You can omit this header if you include the trace ID in the query string using a query parameter named `ClientTraceId`. |
+
+## Request body
+
+The body of the request is a JSON array. Each array element is a JSON object with the following properties:
+
+ * `Text`: A string specifying the term to lookup. This should be the value of a `normalizedText` field from the back-translations of a previous [Dictionary lookup](./v3-0-dictionary-lookup.md) request. It can also be the value of the `normalizedSource` field.
+
+ * `Translation`: A string specifying the translated text previously returned by the [Dictionary lookup](./v3-0-dictionary-lookup.md) operation. This should be the value from the `normalizedTarget` field in the `translations` list of the [Dictionary lookup](./v3-0-dictionary-lookup.md) response. The service will return examples for the specific source-target word-pair.
+
+An example is:
+
+```json
+[
+ {"Text":"fly", "Translation":"volar"}
+]
+```
+
+The following limitations apply:
+
+* The array can have at most 10 elements.
+* The text value of an array element cannot exceed 100 characters including spaces.
+
+## Response body
+
+A successful response is a JSON array with one result for each string in the input array. A result object includes the following properties:
+
+ * `normalizedSource`: A string giving the normalized form of the source term. Generally, this should be identical to the value of the `Text` field at the matching list index in the body of the request.
+
+ * `normalizedTarget`: A string giving the normalized form of the target term. Generally, this should be identical to the value of the `Translation` field at the matching list index in the body of the request.
+
+ * `examples`: A list of examples for the (source term, target term) pair. Each element of the list is an object with the following properties:
+
+ * `sourcePrefix`: The string to concatenate _before_ the value of `sourceTerm` to form a complete example. Do not add a space character, since it is already there when it should be. This value may be an empty string.
+
+ * `sourceTerm`: A string equal to the actual term looked up. The string is added with `sourcePrefix` and `sourceSuffix` to form the complete example. Its value is separated so it can be marked in a user interface, e.g., by bolding it.
+
+ * `sourceSuffix`: The string to concatenate _after_ the value of `sourceTerm` to form a complete example. Do not add a space character, since it is already there when it should be. This value may be an empty string.
+
+ * `targetPrefix`: A string similar to `sourcePrefix` but for the target.
+
+ * `targetTerm`: A string similar to `sourceTerm` but for the target.
+
+ * `targetSuffix`: A string similar to `sourceSuffix` but for the target.
+
+ > [!NOTE]
+ > If there are no examples in the dictionary, the response is 200 (OK) but the `examples` list is an empty list.
+
+## Examples
+
+This example shows how to look up examples for the pair made up of the English term `fly` and its Spanish translation `volar`.
+
+```curl
+curl -X POST "https://api.cognitive.microsofttranslator.com/dictionary/examples?api-version=3.0&from=en&to=es" -H "Ocp-Apim-Subscription-Key: <client-secret>" -H "Content-Type: application/json" -d "[{'Text':'fly', 'Translation':'volar'}]"
+```
+
+The response body (abbreviated for clarity) is:
+
+```
+[
+ {
+ "normalizedSource":"fly",
+ "normalizedTarget":"volar",
+ "examples":[
+ {
+ "sourcePrefix":"They need machines to ",
+ "sourceTerm":"fly",
+ "sourceSuffix":".",
+ "targetPrefix":"Necesitan máquinas para ",
+ "targetTerm":"volar",
+ "targetSuffix":"."
+ },
+ {
+ "sourcePrefix":"That should really ",
+ "sourceTerm":"fly",
+ "sourceSuffix":".",
+ "targetPrefix":"Eso realmente debe ",
+ "targetTerm":"volar",
+ "targetSuffix":"."
+ },
+ //
+ // ...list abbreviated for documentation clarity
+ //
+ ]
+ }
+]
+```
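+
+The same lookup from code, sketched with Python and the `requests` library (the key and region values are placeholders):
+
+```python
+import requests
+
+key = "<your-translator-key>"        # placeholder
+region = "<your-resource-region>"    # placeholder; needed for regional (not global) resources
+
+response = requests.post(
+    "https://api.cognitive.microsofttranslator.com/dictionary/examples",
+    params={"api-version": "3.0", "from": "en", "to": "es"},
+    headers={
+        "Ocp-Apim-Subscription-Key": key,
+        "Ocp-Apim-Subscription-Region": region,
+        "Content-Type": "application/json",
+    },
+    json=[{"Text": "fly", "Translation": "volar"}],
+)
+for result in response.json():
+    for example in result["examples"]:
+        # Reassemble each example sentence from its prefix, term, and suffix.
+        print(example["sourcePrefix"] + example["sourceTerm"] + example["sourceSuffix"])
+```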
ai-services V3 0 Dictionary Lookup https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/translator/reference/v3-0-dictionary-lookup.md
+
+ Title: Translator Dictionary Lookup Method
+
+description: The Dictionary Lookup method provides alternative translations for a word and a small number of idiomatic phrases.
+++++++ Last updated : 07/18/2023+++
+# Translator 3.0: Dictionary Lookup
+
+Provides alternative translations for a word and a small number of idiomatic phrases. Each translation has a part-of-speech and a list of back-translations. The back-translations enable a user to understand the translation in context. The [Dictionary Example](./v3-0-dictionary-examples.md) operation allows further drill down to see example uses of each translation pair.
+
+## Request URL
+
+Send a `POST` request to:
+
+```HTTP
+https://api.cognitive.microsofttranslator.com/dictionary/lookup?api-version=3.0
+```
+
+## Request parameters
+
+Request parameters passed on the query string are:
+
+| Query Parameter | Description |
+| | -- |
+| api-version <img width=200/> | **Required parameter**.<br/>Version of the API requested by the client. Value must be `3.0` |
+| from | **Required parameter**.<br/>Specifies the language of the input text. The source language must be one of the [supported languages](./v3-0-languages.md) included in the `dictionary` scope. |
+| to | **Required parameter**.<br/>Specifies the language of the output text. The target language must be one of the [supported languages](v3-0-languages.md) included in the `dictionary` scope. |
++
+Request headers include:
+
+| Headers | Description |
+| | -- |
+| Authentication header(s) <img width=200/> | **Required request header**.<br/>See <a href="v3-0-reference.md#authentication">available options for authentication</a>. |
+| Content-Type | **Required request header**.<br/>Specifies the content type of the payload. Possible values are: `application/json`. |
+| Content-Length | **Required request header**.<br/>The length of the request body. |
+| X-ClientTraceId | **Optional**.<br/>A client-generated GUID to uniquely identify the request. You can omit this header if you include the trace ID in the query string using a query parameter named `ClientTraceId`. |
+
+## Request body
+
+The body of the request is a JSON array. Each array element is a JSON object with a string property named `Text`, which represents the term to lookup.
+
+```json
+[
+ {"Text":"fly"}
+]
+```
+
+The following limitations apply:
+
+* The array can have at most 10 elements.
+* The text value of an array element cannot exceed 100 characters including spaces.
+
+## Response body
+
+A successful response is a JSON array with one result for each string in the input array. A result object includes the following properties:
+
+ * `normalizedSource`: A string giving the normalized form of the source term. For example, if the request is "JOHN", the normalized form will be "john". The content of this field becomes the input to [lookup examples](./v3-0-dictionary-examples.md).
+
+ * `displaySource`: A string giving the source term in a form best suited for end-user display. For example, if the input is "JOHN", the display form will reflect the usual spelling of the name: "John".
+
+ * `translations`: A list of translations for the source term. Each element of the list is an object with the following properties:
+
+ * `normalizedTarget`: A string giving the normalized form of this term in the target language. This value should be used as input to [lookup examples](./v3-0-dictionary-examples.md).
+
+ * `displayTarget`: A string giving the term in the target language and in a form best suited for end-user display. Generally, this will only differ from the `normalizedTarget` in terms of capitalization. For example, a proper noun like "Juan" will have `normalizedTarget = "juan"` and `displayTarget = "Juan"`.
+
+ * `posTag`: A string associating this term with a part-of-speech tag.
+
+ | Tag name | Description |
+ |-|--|
+ | ADJ | Adjectives |
+ | ADV | Adverbs |
+ | CONJ | Conjunctions |
+ | DET | Determiners |
+ | MODAL | Verbs |
+ | NOUN | Nouns |
+ | PREP | Prepositions |
+ | PRON | Pronouns |
+ | VERB | Verbs |
+ | OTHER | Other |
+
+ As an implementation note, these tags were determined by part-of-speech tagging the English side, and then taking the most frequent tag for each source/target pair. So if people frequently translate a Spanish word to a different part-of-speech tag in English, tags may end up being wrong (with respect to the Spanish word).
+
+ * `confidence`: A value between 0.0 and 1.0 which represents the "confidence" (or perhaps more accurately, "probability in the training data") of that translation pair. The sum of confidence scores for one source word may or may not sum to 1.0.
+
+ * `prefixWord`: A string giving the word to display as a prefix of the translation. Currently, this is the gendered determiner of nouns, in languages that have gendered determiners. For example, the prefix of the Spanish word "mosca" is "la", since "mosca" is a feminine noun in Spanish. This is only dependent on the translation, and not on the source. If there is no prefix, it will be the empty string.
+
+ * `backTranslations`: A list of "back translations" of the target. For example, source words that the target can translate to. The list is guaranteed to contain the source word that was requested (e.g., if the source word being looked up is "fly", then it is guaranteed that "fly" will be in the `backTranslations` list). However, it is not guaranteed to be in the first position, and often will not be. Each element of the `backTranslations` list is an object described by the following properties:
+
+ * `normalizedText`: A string giving the normalized form of the source term that is a back-translation of the target. This value should be used as input to [lookup examples](./v3-0-dictionary-examples.md).
+
+ * `displayText`: A string giving the source term that is a back-translation of the target in a form best suited for end-user display.
+
+ * `numExamples`: An integer representing the number of examples that are available for this translation pair. Actual examples must be retrieved with a separate call to [lookup examples](./v3-0-dictionary-examples.md). The number is mostly intended to facilitate display in a UX. For example, a user interface may add a hyperlink to the back-translation if the number of examples is greater than zero and show the back-translation as plain text if there are no examples. Note that the actual number of examples returned by a call to [lookup examples](./v3-0-dictionary-examples.md) may be less than `numExamples`, because additional filtering may be applied on the fly to remove "bad" examples.
+
+ * `frequencyCount`: An integer representing the frequency of this translation pair in the data. The main purpose of this field is to provide a user interface with a means to sort back-translations so the most frequent terms are first.
+
+ > [!NOTE]
+ > If the term being looked-up does not exist in the dictionary, the response is 200 (OK) but the `translations` list is an empty list.
+
+## Examples
+
+This example shows how to look up alternative translations in Spanish of the English term `fly`.
+
+```curl
+curl -X POST "https://api.cognitive.microsofttranslator.com/dictionary/lookup?api-version=3.0&from=en&to=es" -H "Ocp-Apim-Subscription-Key: <client-secret>" -H "Content-Type: application/json" -d "[{'Text':'fly'}]"
+```
+
+The response body (abbreviated for clarity) is:
+
+```
+[
+ {
+ "normalizedSource":"fly",
+ "displaySource":"fly",
+ "translations":[
+ {
+ "normalizedTarget":"volar",
+ "displayTarget":"volar",
+ "posTag":"VERB",
+ "confidence":0.4081,
+ "prefixWord":"",
+ "backTranslations":[
+ {"normalizedText":"fly","displayText":"fly","numExamples":15,"frequencyCount":4637},
+ {"normalizedText":"flying","displayText":"flying","numExamples":15,"frequencyCount":1365},
+ {"normalizedText":"blow","displayText":"blow","numExamples":15,"frequencyCount":503},
+ {"normalizedText":"flight","displayText":"flight","numExamples":15,"frequencyCount":135}
+ ]
+ },
+ {
+ "normalizedTarget":"mosca",
+ "displayTarget":"mosca",
+ "posTag":"NOUN",
+ "confidence":0.2668,
+ "prefixWord":"",
+ "backTranslations":[
+ {"normalizedText":"fly","displayText":"fly","numExamples":15,"frequencyCount":1697},
+ {"normalizedText":"flyweight","displayText":"flyweight","numExamples":0,"frequencyCount":48},
+ {"normalizedText":"flies","displayText":"flies","numExamples":9,"frequencyCount":34}
+ ]
+ },
+ //
+ // ...list abbreviated for documentation clarity
+ //
+ ]
+ }
+]
+```
+
+This example shows what happens when the term being looked up doesn't exist in the dictionary for the specified language pair.
+
+```curl
+curl -X POST "https://api.cognitive.microsofttranslator.com/dictionary/lookup?api-version=3.0&from=en&to=es" -H "X-ClientTraceId: 875030C7-5380-40B8-8A03-63DACCF69C11" -H "Ocp-Apim-Subscription-Key: <client-secret>" -H "Content-Type: application/json" -d "[{'Text':'fly123456'}]"
+```
+
+Since the term is not found in the dictionary, the response body includes an empty `translations` list.
+
+```
+[
+ {
+ "normalizedSource":"fly123456",
+ "displaySource":"fly123456",
+ "translations":[]
+ }
+]
+```
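+
+The lookup request can also be made from code. The following is a minimal sketch using Python and the `requests` library (the key and region values are placeholders):
+
+```python
+import requests
+
+key = "<your-translator-key>"        # placeholder
+region = "<your-resource-region>"    # placeholder; needed for regional (not global) resources
+
+response = requests.post(
+    "https://api.cognitive.microsofttranslator.com/dictionary/lookup",
+    params={"api-version": "3.0", "from": "en", "to": "es"},
+    headers={
+        "Ocp-Apim-Subscription-Key": key,
+        "Ocp-Apim-Subscription-Region": region,
+        "Content-Type": "application/json",
+    },
+    json=[{"Text": "fly"}],
+)
+for result in response.json():
+    for translation in result["translations"]:
+        # Print each alternative translation with its part of speech and confidence.
+        print(translation["displayTarget"], translation["posTag"], translation["confidence"])
+```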
ai-services V3 0 Languages https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/translator/reference/v3-0-languages.md
+
+ Title: Translator Languages Method
+
+description: The Languages method gets the set of languages currently supported by other operations of the Translator.
+++++++ Last updated : 07/18/2023+++
+# Translator 3.0: Languages
+
+Gets the set of languages currently supported by other operations of the Translator.
+
+## Request URL
+
+Send a `GET` request to:
+```HTTP
+https://api.cognitive.microsofttranslator.com/languages?api-version=3.0
+```
+
+## Request parameters
+
+Request parameters passed on the query string are:
+
+<table width="100%">
+ <th width="20%">Query parameter</th>
+ <th>Description</th>
+ <tr>
+ <td>api-version</td>
+ <td><em>Required parameter</em>.<br/>Version of the API requested by the client. Value must be `3.0`.</td>
+ </tr>
+ <tr>
+ <td>scope</td>
+ <td>*Optional parameter*.<br/>A comma-separated list of names defining the group of languages to return. Allowed group names are: `translation`, `transliteration` and `dictionary`. If no scope is given, then all groups are returned, which is equivalent to passing `scope=translation,transliteration,dictionary`.</td>
+ </tr>
+</table>
+
+*See* [response body](#response-body).
+
+Request headers are:
+
+<table width="100%">
+ <th width="20%">Headers</th>
+ <th>Description</th>
+ <tr>
+ <td>Accept-Language</td>
+ <td>*Optional request header*.<br/>The language to use for user interface strings. Some of the fields in the response are names of languages or names of regions. Use this parameter to define the language in which these names are returned. The language is specified by providing a well-formed BCP 47 language tag. For instance, use the value `fr` to request names in French or use the value `zh-Hant` to request names in Chinese Traditional.<br/>Names are provided in the English language when a target language isn't specified or when localization isn't available.
+ </td>
+ </tr>
+ <tr>
+ <td>X-ClientTraceId</td>
+ <td>*Optional request header*.<br/>A client-generated GUID to uniquely identify the request.</td>
+ </tr>
+</table>
+
+Authentication isn't required to get language resources.
+
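+Because no authentication is needed, a request for language resources can be as simple as the following sketch in Python with the `requests` library (the `scope` and `Accept-Language` values shown are just examples):
+
+```python
+import requests
+
+# No authentication headers are required for the languages resource.
+response = requests.get(
+    "https://api.cognitive.microsofttranslator.com/languages",
+    params={"api-version": "3.0", "scope": "translation"},
+    headers={"Accept-Language": "en"},
+)
+translation_languages = response.json()["translation"]
+print(len(translation_languages), "languages supported for translation")
+print(translation_languages["fr"])   # e.g. {'name': 'French', 'nativeName': 'Français', 'dir': 'ltr'}
+```
+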
+## Response body
+
+A client uses the `scope` query parameter to define which groups of languages it's interested in.
+
+* `scope=translation` provides languages supported to translate text from one language to another language;
+
+* `scope=transliteration` provides capabilities for converting text in one language from one script to another script;
+
+* `scope=dictionary` provides language pairs for which `Dictionary` operations return data.
+
+A client may retrieve several groups simultaneously by specifying a comma-separated list of names. For example, `scope=translation,transliteration,dictionary` would return supported languages for all groups.
+
+A successful response is a JSON object with one property for each requested group:
+
+```json
+{
+ "translation": {
+ //... set of languages supported to translate text (scope=translation)
+ },
+ "transliteration": {
+ //... set of languages supported to convert between scripts (scope=transliteration)
+ },
+ "dictionary": {
+ //... set of languages supported for alternative translations and examples (scope=dictionary)
+ }
+}
+```
+
+The value for each property is as follows.
+
+* `translation` property
+
+ The value of the `translation` property is a dictionary of (key, value) pairs. Each key is a BCP 47 language tag. A key identifies a language for which text can be translated to or translated from. The value associated with the key is a JSON object with properties that describe the language:
+
+ * `name`: Display name of the language in the locale requested via `Accept-Language` header.
+
+ * `nativeName`: Display name of the language in the locale native for this language.
+
+ * `dir`: Directionality, which is `rtl` for right-to-left languages or `ltr` for left-to-right languages.
+
+ An example is:
+
+ ```json
+ {
+ "translation": {
+ ...
+ "fr": {
+ "name": "French",
+ "nativeName": "Français",
+ "dir": "ltr"
+ },
+ ...
+ }
+ }
+ ```
+
+* `transliteration` property
+
+ The value of the `transliteration` property is a dictionary of (key, value) pairs. Each key is a BCP 47 language tag. A key identifies a language for which text can be converted from one script to another script. The value associated with the key is a JSON object with properties that describe the language and its supported scripts:
+
+ * `name`: Display name of the language in the locale requested via `Accept-Language` header.
+
+ * `nativeName`: Display name of the language in the locale native for this language.
+
+ * `scripts`: List of scripts to convert from. Each element of the `scripts` list has properties:
+
+ * `code`: Code identifying the script.
+
+ * `name`: Display name of the script in the locale requested via `Accept-Language` header.
+
+ * `nativeName`: Display name of the language in the locale native for the language.
+
+ * `dir`: Directionality, which is `rtl` for right-to-left languages or `ltr` for left-to-right languages.
+
+ * `toScripts`: List of scripts available to convert text to. Each element of the `toScripts` list has properties `code`, `name`, `nativeName`, and `dir` as described earlier.
+
+ An example is:
+
+ ```json
+ {
+ "transliteration": {
+ ...
+ "ja": {
+ "name": "Japanese",
+ "nativeName": "日本語",
+ "scripts": [
+ {
+ "code": "Jpan",
+ "name": "Japanese",
+ "nativeName": "日本語",
+ "dir": "ltr",
+ "toScripts": [
+ {
+ "code": "Latn",
+ "name": "Latin",
+ "nativeName": "ラテン語",
+ "dir": "ltr"
+ }
+ ]
+ },
+ {
+ "code": "Latn",
+ "name": "Latin",
+ "nativeName": "ラテン語",
+ "dir": "ltr",
+ "toScripts": [
+ {
+ "code": "Jpan",
+ "name": "Japanese",
+ "nativeName": "日本語",
+ "dir": "ltr"
+ }
+ ]
+ }
+ ]
+ },
+ ...
+ }
+ }
+ ```
+
+* `dictionary` property
+
+ The value of the `dictionary` property is a dictionary of (key, value) pairs. Each key is a BCP 47 language tag. The key identifies a language for which alternative translations and back-translations are available. The value is a JSON object that describes the source language and the target languages with available translations:
+
+ * `name`: Display name of the source language in the locale requested via `Accept-Language` header.
+
+ * `nativeName`: Display name of the language in the locale native for this language.
+
+ * `dir`: Directionality, which is `rtl` for right-to-left languages or `ltr` for left-to-right languages.
+
+  * `translations`: List of languages with alternative translations and examples for the query expressed in the source language. Each element of the `translations` list has properties:
+
+ * `name`: Display name of the target language in the locale requested via `Accept-Language` header.
+
+ * `nativeName`: Display name of the target language in the locale native for the target language.
+
+ * `dir`: Directionality, which is `rtl` for right-to-left languages or `ltr` for left-to-right languages.
+
+ * `code`: Language code identifying the target language.
+
+ An example is:
+
+ ```json
+ "es": {
+ "name": "Spanish",
+    "nativeName": "Español",
+ "dir": "ltr",
+ "translations": [
+ {
+ "name": "English",
+ "nativeName": "English",
+ "dir": "ltr",
+ "code": "en"
+ }
+ ]
+ },
+ ```
+
+The structure of the response object doesn't change without a change in the version of the API. For the same version of the API, the list of available languages may change over time because Microsoft Translator continually extends the list of languages supported by its services.
+
+The list of supported languages doesn't change frequently. To save network bandwidth and improve responsiveness, a client application should consider caching language resources and the corresponding entity tag (`ETag`). Then, the client application can periodically (for example, once every 24 hours) query the service to fetch the latest set of supported languages. Passing the current `ETag` value in an `If-None-Match` header field allows the service to optimize the response. If the resource hasn't been modified, the service returns status code 304 and an empty response body.
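+
+As an illustrative sketch, a client can revalidate its cached copy by sending the saved `ETag` value (the `<cached-etag-value>` placeholder below) in an `If-None-Match` header; a `304` response means the cache is still current:
+
+```curl
+// Initial request: -i prints the response headers so the ETag value can be captured and cached
+curl -i "https://api.cognitive.microsofttranslator.com/languages?api-version=3.0&scope=translation"
+
+// Later request: send the cached ETag; a 304 status with an empty body means nothing has changed
+curl -i "https://api.cognitive.microsofttranslator.com/languages?api-version=3.0&scope=translation" \
+ -H "If-None-Match: <cached-etag-value>"
+```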
+
+## Response headers
+
+<table width="100%">
+ <th width="20%">Headers</th>
+ <th>Description</th>
+ <tr>
+ <td>ETag</td>
+ <td>Current value of the entity tag for the requested groups of supported languages. To make subsequent requests more efficient, the client may send the `ETag` value in an `If-None-Match` header field.
+ </td>
+ </tr>
+ <tr>
+ <td>X-RequestId</td>
+ <td>Value generated by the service to identify the request. It's used for troubleshooting purposes.</td>
+ </tr>
+</table>
+
+## Response status codes
+
+The following are the possible HTTP status codes that a request returns.
+
+<table width="100%">
+ <th width="20%">Status Code</th>
+ <th>Description</th>
+ <tr>
+ <td>200</td>
+ <td>Success.</td>
+ </tr>
+ <tr>
+ <td>304</td>
+ <td>The resource hasn't been modified since the version specified by request headers `If-None-Match`.</td>
+ </tr>
+ <tr>
+ <td>400</td>
+ <td>One of the query parameters is missing or not valid. Correct request parameters before retrying.</td>
+ </tr>
+ <tr>
+ <td>429</td>
+ <td>The server rejected the request because the client has exceeded request limits.</td>
+ </tr>
+ <tr>
+ <td>500</td>
+ <td>An unexpected error occurred. If the error persists, report it with: date and time of the failure, request identifier from response header `X-RequestId`, and client identifier from request header `X-ClientTraceId`.</td>
+ </tr>
+ <tr>
+ <td>503</td>
+ <td>Server temporarily unavailable. Retry the request. If the error persists, report it with: date and time of the failure, request identifier from response header `X-RequestId`, and client identifier from request header `X-ClientTraceId`.</td>
+ </tr>
+</table>
+
+If an error occurs, the request also returns a JSON error response. The error code is a 6-digit number combining the 3-digit HTTP status code followed by a 3-digit number to further categorize the error. Common error codes can be found on the [v3 Translator reference page](./v3-0-reference.md#errors).
+
+## Examples
+
+The following example shows how to retrieve languages supported for text translation.
+
+```curl
+curl "https://api.cognitive.microsofttranslator.com/languages?api-version=3.0&scope=translation"
+```
ai-services V3 0 Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/translator/reference/v3-0-reference.md
+
+ Title: Translator V3.0 Reference
+
+description: Reference documentation for the Translator V3.0. Version 3 of the Translator provides a modern JSON-based Web API.
+ Last updated : 07/18/2023
+# Translator v3.0
+
+## What's new?
+
+Version 3 of the Translator provides a modern JSON-based Web API. It improves usability and performance by consolidating existing features into fewer operations and it provides new features.
+
+* Transliteration to convert text in one language from one script to another script.
+* Translation to multiple languages in one request.
+* Language detection, translation, and transliteration in one request.
+* Dictionary to look up alternative translations of a term, to find back-translations and examples showing terms used in context.
+* More informative language detection results.
+
+## Base URLs
+
+Requests to Translator are, in most cases, handled by the datacenter that is closest to where the request originated. If there's a datacenter failure when using the global endpoint, the request may be routed outside of the geography.
+
+To force the request to be handled within a specific geography, use the desired geographical endpoint. All requests are processed among the datacenters within the geography.
+
+|Geography|Base URL (geographical endpoint)|Datacenters|
+|:--|:--|:--|
+|Global (`non-regional`)| api.cognitive.microsofttranslator.com|Closest available datacenter|
+|Asia Pacific| api-apc.cognitive.microsofttranslator.com|Korea South, Japan East, Southeast Asia, and Australia East|
+|Europe| api-eur.cognitive.microsofttranslator.com|North Europe, West Europe|
+|United States| api-nam.cognitive.microsofttranslator.com|East US, South Central US, West Central US, and West US 2|
+
+<sup>`1`</sup> Customers with a resource located in Switzerland North or Switzerland West can ensure that their Text API requests are served within Switzerland. To ensure that requests are handled in Switzerland, create the Translator resource in the 'Resource region' 'Switzerland North' or 'Switzerland West', then use the resource's custom endpoint in your API requests. For example, if you create a Translator resource in the Azure portal with 'Resource region' as 'Switzerland North' and your resource name is 'my-swiss-n', then your custom endpoint is "https://my-swiss-n.cognitiveservices.azure.com". A sample translate request is:
+```curl
+// Pass secret key and region using headers to a custom endpoint
+curl -X POST "https://my-swiss-n.cognitiveservices.azure.com/translator/text/v3.0/translate?to=fr" \
+-H "Ocp-Apim-Subscription-Key: xxx" \
+-H "Ocp-Apim-Subscription-Region: switzerlandnorth" \
+-H "Content-Type: application/json" \
+-d "[{'Text':'Hello'}]" -v
+```
+<sup>`2`</sup> Custom Translator isn't currently available in Switzerland.
+
+## Authentication
+
+Subscribe to Translator or [multi-service](https://azure.microsoft.com/pricing/details/cognitive-services/) in Azure AI services, and use your key (available in the Azure portal) to authenticate.
+
+There are three headers that you can use to authenticate your subscription. This table describes how each is used:
+
+|Headers|Description|
+|:-|:-|
+|Ocp-Apim-Subscription-Key|*Use with Azure AI services subscription if you're passing your secret key*.<br/>The value is the Azure secret key for your subscription to Translator.|
+|Authorization|*Use with Azure AI services subscription if you're passing an authentication token.*<br/>The value is the Bearer token: `Bearer <token>`.|
+|Ocp-Apim-Subscription-Region|*Use with multi-service and regional translator resource.*<br/>The value is the region of the multi-service or regional translator resource. This value is optional when using a global translator resource.|
+
+### Secret key
+
+The first option is to authenticate using the `Ocp-Apim-Subscription-Key` header. Add the `Ocp-Apim-Subscription-Key: <YOUR_SECRET_KEY>` header to your request.
+
+#### Authenticating with a global resource
+
+When you use a [global translator resource](https://portal.azure.com/#create/Microsoft.CognitiveServicesTextTranslation), you need to include one header to call the Translator.
+
+|Headers|Description|
+|:--|:-|
+|Ocp-Apim-Subscription-Key| The value is the Azure secret key for your subscription to Translator.|
+
+Here's an example request to call the Translator using the global translator resource:
+
+```curl
+// Pass secret key using headers
+curl -X POST "https://api.cognitive.microsofttranslator.com/translate?api-version=3.0&to=es" \
+ -H "Ocp-Apim-Subscription-Key:<your-key>" \
+ -H "Content-Type: application/json" \
+ -d "[{'Text':'Hello, what is your name?'}]"
+```
+
+#### Authenticating with a regional resource
+
+When you use a [regional translator resource](https://portal.azure.com/#create/Microsoft.CognitiveServicesTextTranslation),
+there are two headers that you need to call the Translator.
+
+|Headers|Description|
+|:--|:-|
+|Ocp-Apim-Subscription-Key| The value is the Azure secret key for your subscription to Translator.|
+|Ocp-Apim-Subscription-Region| The value is the region of the translator resource. |
+
+Here's an example request to call the Translator using the regional translator resource:
+
+```curl
+// Pass secret key and region using headers
+curl -X POST "https://api.cognitive.microsofttranslator.com/translate?api-version=3.0&to=es" \
+ -H "Ocp-Apim-Subscription-Key:<your-key>" \
+ -H "Ocp-Apim-Subscription-Region:<your-region>" \
+ -H "Content-Type: application/json" \
+ -d "[{'Text':'Hello, what is your name?'}]"
+```
+
+#### Authenticating with a Multi-service resource
+
+A multi-service resource allows you to use a single API key to authenticate requests for multiple services.
+
+When you use a multi-service secret key, you must include two authentication headers with your request. There are two headers that you need to call the Translator.
+
+|Headers|Description|
+|:--|:-|
+|Ocp-Apim-Subscription-Key| The value is the Azure secret key for your multi-service resource.|
+|Ocp-Apim-Subscription-Region| The value is the region of the multi-service resource. |
+
+Region is required for the multi-service Text API subscription. The region you select is the only region that you can use for text translation when using the multi-service key. It must be the same region you selected when you signed up for your multi-service subscription through the Azure portal.
+
+If you pass the secret key in the query string with the parameter `Subscription-Key`, then you must specify the region with query parameter `Subscription-Region`.
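+
+For example, a query-string variant of a translate request might look like the following sketch, with placeholders for your multi-service key and region:
+
+```curl
+// Pass the multi-service key and its region as query string parameters instead of headers
+curl -X POST "https://api.cognitive.microsofttranslator.com/translate?api-version=3.0&to=es&Subscription-Key=<your-key>&Subscription-Region=<your-region>" \
+ -H "Content-Type: application/json" \
+ -d "[{'Text':'Hello, what is your name?'}]"
+```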
+
+### Authenticating with an access token
+
+Alternatively, you can exchange your secret key for an access token. This token is included with each request as the `Authorization` header. To obtain an authorization token, make a `POST` request to the following URL:
+
+| Resource type | Authentication service URL |
+|--|--|
+| Global | `https://api.cognitive.microsoft.com/sts/v1.0/issueToken` |
+| Regional or Multi-Service | `https://<your-region>.api.cognitive.microsoft.com/sts/v1.0/issueToken` |
+
+Here are example requests to obtain a token given a secret key for a global resource:
+
+```curl
+// Pass secret key using header
+curl --header 'Ocp-Apim-Subscription-Key: <your-key>' --data "" 'https://api.cognitive.microsoft.com/sts/v1.0/issueToken'
+
+// Pass secret key using query string parameter
+curl --data "" 'https://api.cognitive.microsoft.com/sts/v1.0/issueToken?Subscription-Key=<your-key>'
+```
+
+And here are example requests to obtain a token given a secret key for a regional resource located in Central US:
+
+```curl
+// Pass secret key using header
+curl --header "Ocp-Apim-Subscription-Key: <your-key>" --data "" "https://centralus.api.cognitive.microsoft.com/sts/v1.0/issueToken"
+
+// Pass secret key using query string parameter
+curl --data "" "https://centralus.api.cognitive.microsoft.com/sts/v1.0/issueToken?Subscription-Key=<your-key>"
+```
+
+A successful request returns the encoded access token as plain text in the response body. The valid token is passed to the Translator service as a bearer token in the Authorization header.
+
+```http
+Authorization: Bearer <Base64-access_token>
+```
+
+An authentication token is valid for 10 minutes. The token should be reused when making multiple calls to the Translator. However, if your program makes requests to the Translator over an extended period of time, then your program must request a new access token at regular intervals (for example, every 8 minutes).
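+
+As a sketch, a translate call that reuses an issued token passes it as a bearer token in the `Authorization` header:
+
+```curl
+// Pass the issued access token instead of the secret key
+curl -X POST "https://api.cognitive.microsofttranslator.com/translate?api-version=3.0&to=es" \
+ -H "Authorization: Bearer <Base64-access_token>" \
+ -H "Content-Type: application/json" \
+ -d "[{'Text':'Hello, what is your name?'}]"
+```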
+
+## Authentication with Azure Active Directory (Azure AD)
+
+ Translator v3.0 supports Azure AD authentication, Microsoft's cloud-based identity and access management solution. Authorization headers enable the Translator service to validate that the requesting client is authorized to use the resource and to complete the request.
+
+### **Prerequisites**
+
+* A brief understanding of how to [**authenticate with Azure Active Directory**](../../authentication.md?tabs=powershell#authenticate-with-azure-active-directory).
+
+* A brief understanding of how to [**authorize access to managed identities**](../../authentication.md?tabs=powershell#authorize-access-to-managed-identities).
+
+### **Headers**
+
+|Header|Value|
+|:--|:-|
+|Authorization| The value is an access **bearer token** generated by Azure AD.</br><ul><li> The bearer token provides proof of authentication and validates the client's authorization to use the resource.</li><li> An authentication token is valid for 10 minutes and should be reused when making multiple calls to Translator.</br></li>*See* [Sample request: 2. Get a token](../../authentication.md?tabs=powershell#sample-request)</ul>|
+|Ocp-Apim-Subscription-Region| The value is the region of the **translator resource**.</br><ul><li> This value is optional if the resource is global.</li></ul>|
+|Ocp-Apim-ResourceId| The value is the Resource ID for your Translator resource instance.</br><ul><li>You find the Resource ID in the Azure portal at **Translator Resource → Properties**. </li><li>Resource ID format: </br>/subscriptions/<**subscriptionId**>/resourceGroups/<**resourceGroupName**>/providers/Microsoft.CognitiveServices/accounts/<**resourceName**>/</li></ul>|
+
+##### **Translator property page (Azure portal)**
+
+### **Examples**
+
+#### **Using the global endpoint**
+
+```curl
+ // Using headers, pass a bearer token generated by Azure AD, resource ID, and the region.
+
+curl -X POST "https://api.cognitive.microsofttranslator.com/translate?api-version=3.0&to=es" \
+ -H "Authorization: Bearer <Base64-access_token>"\
+ -H "Ocp-Apim-ResourceId: <Resource ID>" \
+ -H "Ocp-Apim-Subscription-Region: <your-region>" \
+ -H "Content-Type: application/json" \
+ --data-raw "[{'Text':'Hello, friend.'}]"
+```
+
+#### **Using your custom endpoint**
+
+```curl
+// Using headers, pass a bearer token generated by Azure AD.
+
+curl -X POST "https://<your-custom-domain>.cognitiveservices.azure.com/translator/text/v3.0/translate?api-version=3.0&to=es" \
+ -H "Authorization: Bearer <Base64-access_token>" \
+ -H "Content-Type: application/json" \
+ --data-raw "[{'Text':'Hello, friend.'}]"
+```
+
+### **Examples using managed identities**
+
+Translator v3.0 also supports authorizing access to managed identities. If a managed identity is enabled for a translator resource, you can pass the bearer token generated by managed identity in the request header.
+
+#### **With the global endpoint**
+
+```curl
+// Using headers, pass a bearer token generated either by Azure AD or Managed Identities, resource ID, and the region.
+
+curl -X POST "https://api.cognitive.microsofttranslator.com/translate?api-version=3.0&to=es" \
+ -H "Authorization: Bearer <Base64-access_token>" \
+ -H "Ocp-Apim-ResourceId: <Resource ID>" \
+ -H "Ocp-Apim-Subscription-Region: <your-region>" \
+ -H "Content-Type: application/json" \
+ --data-raw "[{'Text':'Hello, friend.'}]"
+```
+
+#### **With your custom endpoint**
+
+```curl
+//Using headers, pass a bearer token generated by Managed Identities.
+
+curl -X POST "https://<your-custom-domain>.cognitiveservices.azure.com/translator/text/v3.0/translate?api-version=3.0&to=es" \
+ -H "Authorization: Bearer <Base64-access_token>" \
+ -H "Content-Type: application/json" \
+ --data-raw "[{'Text':'Hello, friend.'}]"
+```
+
+## Virtual Network support
+
+The Translator service is now available with Virtual Network (VNET) capabilities in all regions of the Azure public cloud. To enable Virtual Network, *see* [Configuring Azure AI services Virtual Networks](../../cognitive-services-virtual-networks.md?tabs=portal).
+
+Once you turn on this capability, you must use the custom endpoint to call the Translator. You can't use the global translator endpoint ("api.cognitive.microsofttranslator.com") and you can't authenticate with an access token.
+
+You can find the custom endpoint after you create a [translator resource](https://portal.azure.com/#create/Microsoft.CognitiveServicesTextTranslation) and allow access from selected networks and private endpoints.
+
+1. Navigate to your Translator resource in the Azure portal.
+1. Select **Networking** from the **Resource Management** section.
+1. Under the **Firewalls and virtual networks** tab, choose **Selected Networks and Private Endpoints**.
+
+ :::image type="content" source="../media/virtual-network-setting-azure-portal.png" alt-text="Screenshot of the virtual network setting in the Azure portal.":::
+
+1. Select **Save** to apply your changes.
+1. Select **Keys and Endpoint** from the **Resource Management** section.
+1. Select the **Virtual Network** tab.
+1. Listed there are the endpoints for Text Translation and Document Translation.
+
+ :::image type="content" source="../media/virtual-network-endpoint.png" alt-text="Screenshot of the virtual network endpoint.":::
+
+|Headers|Description|
+|:--|:-|
+|Ocp-Apim-Subscription-Key| The value is the Azure secret key for your subscription to Translator.|
+|Ocp-Apim-Subscription-Region| The value is the region of the translator resource. This value is optional if the resource is `global`|
+
+Here's an example request to call the Translator using the custom endpoint:
+
+```curl
+// Pass secret key and region using headers
+curl -X POST "https://<your-custom-domain>.cognitiveservices.azure.com/translator/text/v3.0/translate?api-version=3.0&to=es" \
+ -H "Ocp-Apim-Subscription-Key:<your-key>" \
+ -H "Ocp-Apim-Subscription-Region:<your-region>" \
+ -H "Content-Type: application/json" \
+ -d "[{'Text':'Hello, what is your name?'}]"
+```
+
+## Errors
+
+A standard error response is a JSON object with a name/value pair named `error`. The value is also a JSON object with properties:
+
+* `code`: A server-defined error code.
+* `message`: A string giving a human-readable representation of the error.
+
+For example, a customer with a free trial subscription would receive the following error once the free quota is exhausted:
+
+```json
+{
+ "error": {
+ "code":403001,
+ "message":"The operation isn't allowed because the subscription has exceeded its free quota."
+ }
+}
+```
+
+The error code is a 6-digit number combining the 3-digit HTTP status code followed by a 3-digit number to further categorize the error. Common error codes are:
+
+| Code | Description |
+|:-|:--|
+| 400000| One of the request inputs isn't valid.|
+| 400001| The "scope" parameter is invalid.|
+| 400002| The "category" parameter is invalid.|
+| 400003| A language specifier is missing or invalid.|
+| 400004| A target script specifier ("To script") is missing or invalid.|
+| 400005| An input text is missing or invalid.|
+| 400006| The combination of language and script isn't valid.|
+| 400018| A source script specifier ("From script") is missing or invalid.|
+| 400019| One of the specified languages isn't supported.|
+| 400020| One of the elements in the array of input text isn't valid.|
+| 400021| The API version parameter is missing or invalid.|
+| 400023| One of the specified language pairs isn't valid.|
+| 400035| The source language ("From" field) isn't valid.|
+| 400036| The target language ("To" field) is missing or invalid.|
+| 400042| One of the options specified ("Options" field) isn't valid.|
+| 400043| The client trace ID (ClientTraceId field or X-ClientTraceId header) is missing or invalid.|
+| 400050| The input text is too long. View [request limits](../service-limits.md).|
+| 400064| The "translation" parameter is missing or invalid.|
+| 400070| The number of target scripts (ToScript parameter) doesn't match the number of target languages (To parameter).|
+| 400071| The value isn't valid for TextType.|
+| 400072| The array of input text has too many elements.|
+| 400073| The script parameter isn't valid.|
+| 400074| The body of the request isn't valid JSON.|
+| 400075| The language pair and category combination isn't valid.|
+| 400077| The maximum request size has been exceeded. View [request limits](../service-limits.md).|
+| 400079| The custom system requested for translation between from and to language doesn't exist.|
+| 400080| Transliteration isn't supported for the language or script.|
+| 401000| The request isn't authorized because credentials are missing or invalid.|
+| 401015| "The credentials provided are for the Speech API. This request requires credentials for the Text API. Use a subscription to Translator."|
+| 403000| The operation isn't allowed.|
+| 403001| The operation isn't allowed because the subscription has exceeded its free quota.|
+| 405000| The request method isn't supported for the requested resource.|
+| 408001| The translation system requested is being prepared. Retry in a few minutes.|
+| 408002| Request timed out waiting on incoming stream. The client didn't produce a request within the time that the server was prepared to wait. The client may repeat the request without modifications at any later time.|
+| 415000| The Content-Type header is missing or invalid.|
+| 429000, 429001, 429002| The server rejected the request because the client has exceeded request limits.|
+| 500000| An unexpected error occurred. If the error persists, report it with date/time of error, request identifier from response header X-RequestId, and client identifier from request header X-ClientTraceId.|
+| 503000| Service is temporarily unavailable. Retry. If the error persists, report it with date/time of error, request identifier from response header X-RequestId, and client identifier from request header X-ClientTraceId.|
+
+## Metrics
+
+Metrics allow you to view Translator usage and availability information in the Azure portal, under the Metrics section, as shown in the following screenshot. For more information, see [Data and platform metrics](../../../azure-monitor/essentials/data-platform-metrics.md).
+
+![Translator Metrics](../media/translatormetrics.png)
+
+The following table lists the available metrics with descriptions of how they're used to monitor translation API calls.
+
+| Metrics | Description |
+|:-|:--|
+| TotalCalls| Total number of API calls.|
+| TotalTokenCalls| Total number of API calls via token service using authentication token.|
+| SuccessfulCalls| Number of successful calls.|
+| TotalErrors| Number of calls with error response.|
+| BlockedCalls| Number of calls that exceeded rate or quota limit.|
+| ServerErrors| Number of calls with server internal error(5XX).|
+| ClientErrors| Number of calls with client-side error(4XX).|
+| Latency| Duration to complete request in milliseconds.|
+| CharactersTranslated| Total number of characters in incoming text request.|
ai-services V3 0 Translate https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/translator/reference/v3-0-translate.md
+
+ Title: Translator Translate Method
+
+description: Understand the parameters, headers, and body messages for the Translate method of Azure AI Translator to translate text.
+ Last updated : 07/18/2023
+# Translator 3.0: Translate
+
+Translates text.
+
+## Request URL
+
+Send a `POST` request to:
+
+```HTTP
+https://api.cognitive.microsofttranslator.com/translate?api-version=3.0
+```
+
+## Request parameters
+
+Request parameters passed on the query string are:
+
+### Required parameters
+
+| Query parameter | Description |
+| | |
+| api-version | _Required parameter_. <br>Version of the API requested by the client. Value must be `3.0`. |
+| to | _Required parameter_. <br>Specifies the language of the output text. The target language must be one of the [supported languages](v3-0-languages.md) included in the `translation` scope. For example, use `to=de` to translate to German. <br>It's possible to translate to multiple languages simultaneously by repeating the parameter in the query string. For example, use `to=de&to=it` to translate to German and Italian. |
+
+### Optional parameters
+
+| Query parameter | Description |
+| | |
+| from | _Optional parameter_. <br>Specifies the language of the input text. Find which languages are available to translate from by looking up [supported languages](../reference/v3-0-languages.md) using the `translation` scope. If the `from` parameter isn't specified, automatic language detection is applied to determine the source language. <br> <br>You must use the `from` parameter rather than autodetection when using the [dynamic dictionary](../dynamic-dictionary.md) feature. **Note**: the dynamic dictionary feature is case-sensitive. |
+| textType | _Optional parameter_. <br>Defines whether the text being translated is plain text or HTML text. Any HTML needs to be a well-formed, complete element. Possible values are: `plain` (default) or `html`. |
+| category | _Optional parameter_. <br>A string specifying the category (domain) of the translation. This parameter is used to get translations from a customized system built with [Custom Translator](../custom-translator/concepts/customization.md). Add the Category ID from your Custom Translator [project details](../custom-translator/how-to/create-manage-project.md) to this parameter to use your deployed customized system (a sample request follows this table). Default value is: `general`. |
+| profanityAction | _Optional parameter_. <br>Specifies how profanities should be treated in translations. Possible values are: `NoAction` (default), `Marked` or `Deleted`. To understand ways to treat profanity, see [Profanity handling](#handle-profanity). |
+| profanityMarker | _Optional parameter_. <br>Specifies how profanities should be marked in translations. Possible values are: `Asterisk` (default) or `Tag`. To understand ways to treat profanity, see [Profanity handling](#handle-profanity). |
+| includeAlignment | _Optional parameter_. <br>Specifies whether to include alignment projection from source text to translated text. Possible values are: `true` or `false` (default). |
+| includeSentenceLength | _Optional parameter_. <br>Specifies whether to include sentence boundaries for the input text and the translated text. Possible values are: `true` or `false` (default). |
+| suggestedFrom | _Optional parameter_. <br>Specifies a fallback language if the language of the input text can't be identified. Language autodetection is applied when the `from` parameter is omitted. If detection fails, the `suggestedFrom` language will be assumed. |
+| fromScript | _Optional parameter_. <br>Specifies the script of the input text. |
+| toScript | _Optional parameter_. <br>Specifies the script of the translated text. |
+| allowFallback | _Optional parameter_. <br>Specifies that the service is allowed to fall back to a general system when a custom system doesn't exist. Possible values are: `true` (default) or `false`. <br> <br>`allowFallback=false` specifies that the translation should only use systems trained for the `category` specified by the request. If a translation for language X to language Y requires chaining through a pivot language E, then all the systems in the chain (X → E and E → Y) will need to be custom and have the same category. If no system is found with the specific category, the request will return a 400 status code. `allowFallback=true` specifies that the service is allowed to fall back to a general system when a custom system doesn't exist. |
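+
+As referenced in the `category` row, the following sketch targets a deployed Custom Translator system; `<your-category-id>` is a placeholder for the Category ID from your Custom Translator project details:
+
+```curl
+// Route the translation through a deployed Custom Translator system (placeholder category ID)
+curl -X POST "https://api.cognitive.microsofttranslator.com/translate?api-version=3.0&from=en&to=de&category=<your-category-id>" \
+ -H "Ocp-Apim-Subscription-Key: <client-secret>" \
+ -H "Content-Type: application/json; charset=UTF-8" \
+ -d "[{'Text':'Hello, what is your name?'}]"
+```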
+
+Request headers include:
+
+| Headers | Description |
+| | |
+| Authentication header(s) | _Required request header_. <br>See [available options for authentication](./v3-0-reference.md#authentication). |
+| Content-Type | _Required request header_. <br>Specifies the content type of the payload. <br>Accepted value is `application/json; charset=UTF-8`. |
+| Content-Length | _Required request header_. <br>The length of the request body. |
+| X-ClientTraceId | _Optional_. <br>A client-generated GUID to uniquely identify the request. You can omit this header if you include the trace ID in the query string using a query parameter named `ClientTraceId`. |
+
+## Request body
+
+The body of the request is a JSON array. Each array element is a JSON object with a string property named `Text`, which represents the string to translate.
+
+```json
+[
+ {"Text":"I would really like to drive your car around the block a few times."}
+]
+```
+
+For information on character and array limits, _see_ [Request limits](../service-limits.md#character-and-array-limits-per-request).
+
+## Response body
+
+A successful response is a JSON array with one result for each string in the input array. A result object includes the following properties:
+
+* `detectedLanguage`: An object describing the detected language through the following properties:
+
+ * `language`: A string representing the code of the detected language.
+
+ * `score`: A float value indicating the confidence in the result. The score is between zero and one and a low score indicates a low confidence.
+
+ The `detectedLanguage` property is only present in the result object when language auto-detection is requested.
+
+* `translations`: An array of translation results. The size of the array matches the number of target languages specified through the `to` query parameter. Each element in the array includes:
+
+ * `to`: A string representing the language code of the target language.
+
+ * `text`: A string giving the translated text.
+
+* `transliteration`: An object giving the translated text in the script specified by the `toScript` parameter.
+
+ * `script`: A string specifying the target script.
+
+ * `text`: A string giving the translated text in the target script.
+
+ The `transliteration` object isn't included if transliteration doesn't take place.
+
+  * `alignment`: An object with a single string property named `proj`, which maps input text to translated text. The alignment information is only provided when the request parameter `includeAlignment` is `true`. Alignment is returned as a string value of the following format: `[[SourceTextStartIndex]:[SourceTextEndIndex]-[TgtTextStartIndex]:[TgtTextEndIndex]]`. The colon separates start and end index, the dash separates the languages, and space separates the words. One word may align with zero, one, or multiple words in the other language, and the aligned words may be non-contiguous. When no alignment information is available, the alignment element will be empty. See [Obtain alignment information](#obtain-alignment-information) for an example and restrictions.
+
+* `sentLen`: An object returning sentence boundaries in the input and output texts.
+
+ * `srcSentLen`: An integer array representing the lengths of the sentences in the input text. The length of the array is the number of sentences, and the values are the length of each sentence.
+
+ * `transSentLen`: An integer array representing the lengths of the sentences in the translated text. The length of the array is the number of sentences, and the values are the length of each sentence.
+
+ Sentence boundaries are only included when the request parameter `includeSentenceLength` is `true`.
+
+* `sourceText`: An object with a single string property named `text`, which gives the input text in the default script of the source language. `sourceText` property is present only when the input is expressed in a script that's not the usual script for the language. For example, if the input were Arabic written in Latin script, then `sourceText.text` would be the same Arabic text converted into Arab script.
+
+Examples of JSON responses are provided in the [examples](#examples) section.
+
+## Response headers
+
+| Headers | Description |
+| | |
+| X-requestid | Value generated by the service to identify the request. It's used for troubleshooting purposes. |
+| X-mt-system | Specifies the system type that was used for translation for each 'to' language requested for translation. The value is a comma-separated list of strings. Each string indicates a type: <br><br>* Custom - Request includes a custom system and at least one custom system was used during translation.<br>* Team - All other requests |
+| X-metered-usage |Specifies consumption (the number of characters for which the user will be charged) for the translation job request. For example, if the word "Hello" is translated from English (en) to French (fr), this field will return the value '5'.|
+
+## Response status codes
+
+The following are the possible HTTP status codes that a request returns.
+
+|Status code | Description |
+| | |
+|200 | Success. |
+|400 |One of the query parameters is missing or not valid. Correct request parameters before retrying. |
+|401 | The request couldn't be authenticated. Check that credentials are specified and valid. |
+|403 | The request isn't authorized. Check the detailed error message. This status code often indicates that all free translations provided with a trial subscription have been used up. |
+|408 | The request couldn't be fulfilled because a resource is missing. Check the detailed error message. When the request includes a custom category, this status code often indicates that the custom translation system isn't yet available to serve requests. The request should be retried after a waiting period (for example, 1 minute). |
+|429 | The server rejected the request because the client has exceeded request limits. |
+|500 | An unexpected error occurred. If the error persists, report it with: date and time of the failure, request identifier from response header X-RequestId, and client identifier from request header X-ClientTraceId. |
+|503 |Server temporarily unavailable. Retry the request. If the error persists, report it with: date and time of the failure, request identifier from response header X-RequestId, and client identifier from request header X-ClientTraceId. |
+
+If an error occurs, the request will also return a JSON error response. The error code is a 6-digit number combining the 3-digit HTTP status code followed by a 3-digit number to further categorize the error. Common error codes can be found on the [v3 Translator reference page](./v3-0-reference.md#errors).
+
+## Examples
+
+### Translate a single input
+
+This example shows how to translate a single sentence from English to Simplified Chinese.
+
+```curl
+curl -X POST "https://api.cognitive.microsofttranslator.com/translate?api-version=3.0&from=en&to=zh-Hans" -H "Ocp-Apim-Subscription-Key: <client-secret>" -H "Content-Type: application/json; charset=UTF-8" -d "[{'Text':'Hello, what is your name?'}]"
+```
+
+The response body is:
+
+```
+[
+ {
+ "translations":[
+ {"text":"你好, 你叫什么名字?","to":"zh-Hans"}
+ ]
+ }
+]
+```
+
+The `translations` array includes one element, which provides the translation of the single piece of text in the input.
+
+### Translate a single input with language autodetection
+
+This example shows how to translate a single sentence from English to Simplified Chinese. The request doesn't specify the input language. Autodetection of the source language is used instead.
+
+```curl
+curl -X POST "https://api.cognitive.microsofttranslator.com/translate?api-version=3.0&to=zh-Hans" -H "Ocp-Apim-Subscription-Key: <client-secret>" -H "Content-Type: application/json; charset=UTF-8" -d "[{'Text':'Hello, what is your name?'}]"
+```
+
+The response body is:
+
+```
+[
+ {
+ "detectedLanguage": {"language": "en", "score": 1.0},
+ "translations":[
+ {"text": "你好, 你叫什么名字?", "to": "zh-Hans"}
+ ]
+ }
+]
+```
+The response is similar to the response from the previous example. Since language autodetection was requested, the response also includes information about the language detected for the input text. The language autodetection works better with longer input text.
+
+### Translate with transliteration
+
+Let's extend the previous example by adding transliteration. The following request asks for a Chinese translation written in Latin script.
+
+```curl
+curl -X POST "https://api.cognitive.microsofttranslator.com/translate?api-version=3.0&to=zh-Hans&toScript=Latn" -H "Ocp-Apim-Subscription-Key: <client-secret>" -H "Content-Type: application/json; charset=UTF-8" -d "[{'Text':'Hello, what is your name?'}]"
+```
+
+The response body is:
+
+```
+[
+ {
+ "detectedLanguage":{"language":"en","score":1.0},
+ "translations":[
+ {
+ "text":"你好, 你叫什么名字?",
+ "transliteration":{"script":"Latn", "text":"nǐ hǎo , nǐ jiào shén me míng zì ?"},
+ "to":"zh-Hans"
+ }
+ ]
+ }
+]
+```
+
+The translation result now includes a `transliteration` property, which gives the translated text using Latin characters.
+
+### Translate multiple pieces of text
+
+Translating multiple strings at once is simply a matter of specifying an array of strings in the request body.
+
+```curl
+curl -X POST "https://api.cognitive.microsofttranslator.com/translate?api-version=3.0&from=en&to=zh-Hans" -H "Ocp-Apim-Subscription-Key: <client-secret>" -H "Content-Type: application/json; charset=UTF-8" -d "[{'Text':'Hello, what is your name?'}, {'Text':'I am fine, thank you.'}]"
+```
+
+The response contains the translation of all pieces of text in the exact same order as in the request.
+The response body is:
+
+```
+[
+ {
+ "translations":[
+ {"text":"你好, 你叫什么名字?","to":"zh-Hans"}
+ ]
+ },
+ {
+ "translations":[
+ {"text":"我很好,谢谢你。","to":"zh-Hans"}
+ ]
+ }
+]
+```
+
+### Translate to multiple languages
+
+This example shows how to translate the same input to several languages in one request.
+
+```curl
+curl -X POST "https://api.cognitive.microsofttranslator.com/translate?api-version=3.0&from=en&to=zh-Hans&to=de" -H "Ocp-Apim-Subscription-Key: <client-secret>" -H "Content-Type: application/json; charset=UTF-8" -d "[{'Text':'Hello, what is your name?'}]"
+```
+
+The response body is:
+
+```
+[
+ {
+ "translations":[
+ {"text":"你好, 你叫什么名字?","to":"zh-Hans"},
+ {"text":"Hallo, was ist dein Name?","to":"de"}
+ ]
+ }
+]
+```
+
+### Handle profanity
+
+Normally, the Translator service retains profanity that is present in the source text in the translation. The degree of profanity and the context that makes words profane differ between cultures, and as a result the degree of profanity in the target language may be amplified or reduced.
+
+If you want to avoid getting profanity in the translation, regardless of the presence of profanity in the source text, you can use the profanity filtering option. The option allows you to choose whether you want to see profanity deleted, whether you want to mark profanities with appropriate tags (giving you the option to add your own post-processing), or whether you want no action taken. The accepted values of `ProfanityAction` are `Deleted`, `Marked`, and `NoAction` (default).
+
+| ProfanityAction | Action |
+| | |
+| `NoAction` | NoAction is the default behavior. Profanity will pass from source to target. <br> <br>**Example Source (Japanese)**: 彼はジャッカスです。 <br>**Example Translation (English)**: He's a jack. |
+| `Deleted` | Profane words will be removed from the output without replacement. <br> <br>**Example Source (Japanese)**: 彼はジャッカスです。 <br>**Example Translation (English)**: He's a** |
+| `Marked` | Profane words are replaced by a marker in the output. The marker depends on the `ProfanityMarker` parameter. <br> <br>For `ProfanityMarker=Asterisk`, profane words are replaced with `***`: <br>**Example Source (Japanese)**: 彼はジャッカスです。 <br>**Example Translation (English)**: He's a \\*\\*\\*. <br> <br>For `ProfanityMarker=Tag`, profane words are surrounded by XML tags &lt;profanity&gt; and &lt;/profanity&gt;: <br>**Example Source (Japanese)**: 彼はジャッカスです。 <br>**Example Translation (English)**: He's a &lt;profanity&gt;jack&lt;/profanity&gt;. |
+
+For example:
+
+```curl
+curl -X POST "https://api.cognitive.microsofttranslator.com/translate?api-version=3.0&from=en&to=de&profanityAction=Marked" -H "Ocp-Apim-Subscription-Key: <client-secret>" -H "Content-Type: application/json; charset=UTF-8" -d "[{'Text':'This is an <expletive> good idea.'}]"
+```
+
+This request returns:
+
+```
+[
+ {
+ "translations":[
+ {"text":"Das ist eine *** gute Idee.","to":"de"}
+ ]
+ }
+]
+```
+
+Compare with:
+
+```curl
+curl -X POST "https://api.cognitive.microsofttranslator.com/translate?api-version=3.0&from=en&to=de&profanityAction=Marked&profanityMarker=Tag" -H "Ocp-Apim-Subscription-Key: <client-secret>" -H "Content-Type: application/json; charset=UTF-8" -d "[{'Text':'This is an <expletive> good idea.'}]"
+```
+
+That last request returns:
+
+```
+[
+ {
+ "translations":[
+ {"text":"Das ist eine <profanity>verdammt</profanity> gute Idee.","to":"de"}
+ ]
+ }
+]
+```
+
+### Translate content with markup and decide what's translated
+
+It's common to translate content that includes markup such as content from an HTML page or content from an XML document. Include query parameter `textType=html` when translating content with tags. In addition, it's sometimes useful to exclude specific content from translation. You can use the attribute `class=notranslate` to specify content that should remain in its original language. In the following example, the content inside the first `div` element won't be translated, while the content in the second `div` element will be translated.
+
+```
+<div class="notranslate">This will not be translated.</div>
+<div>This will be translated. </div>
+```
+
+Here's a sample request to illustrate.
+
+```curl
+curl -X POST "https://api.cognitive.microsofttranslator.com/translate?api-version=3.0&from=en&to=zh-Hans&textType=html" -H "Ocp-Apim-Subscription-Key: <client-secret>" -H "Content-Type: application/json; charset=UTF-8" -d "[{'Text':'<div class=\"notranslate\">This will not be translated.</div><div>This will be translated.</div>'}]"
+```
+
+The response is:
+
+```
+[
+ {
+ "translations":[
+ {"text":"<div class=\"notranslate\">This will not be translated.</div><div>这将被翻译。</div>","to":"zh-Hans"}
+ ]
+ }
+]
+```
+
+### Obtain alignment information
+
+Alignment is returned as a string value of the following format for every word of the source. The information for each word is separated by a space, including for non-space-separated languages (scripts) like Chinese:
+
+[[SourceTextStartIndex]:[SourceTextEndIndex]-[TgtTextStartIndex]:[TgtTextEndIndex]]
+
+Example alignment string: "0:0-7:10 1:2-11:20 3:4-0:3 3:4-4:6 5:5-21:21".
+
+In other words, the colon separates start and end index, the dash separates the languages, and space separates the words. One word may align with zero, one, or multiple words in the other language, and the aligned words may be non-contiguous. When no alignment information is available, the Alignment element will be empty. The method returns no error in that case.
+
+To receive alignment information, specify `includeAlignment=true` on the query string.
+
+```curl
+curl -X POST "https://api.cognitive.microsofttranslator.com/translate?api-version=3.0&from=en&to=fr&includeAlignment=true" -H "Ocp-Apim-Subscription-Key: <client-secret>" -H "Content-Type: application/json; charset=UTF-8" -d "[{'Text':'The answer lies in machine translation.'}]"
+```
+
+The response is:
+
+```
+[
+ {
+ "translations":[
+ {
+ "text":"La réponse se trouve dans la traduction automatique.",
+ "to":"fr",
+ "alignment":{"proj":"0:2-0:1 4:9-3:9 11:14-11:19 16:17-21:24 19:25-40:50 27:37-29:38 38:38-51:51"}
+ }
+ ]
+ }
+]
+```
+
+The alignment information starts with `0:2-0:1`, which means that the first three characters in the source text (`The`) map to the first two characters in the translated text (`La`).
+
+#### Limitations
+
+Obtaining alignment information is an experimental feature that we've enabled for prototyping research and experiences with potential phrase mappings. We may choose to stop supporting this feature in the future. Here are some of the notable restrictions where alignments aren't supported:
+
+* Alignment isn't available for text in HTML format, that is, `textType=html`.
+* Alignment is only returned for a subset of the language pairs:
+ * English to/from any other language except Chinese Traditional, Cantonese (Traditional) or Serbian (Cyrillic).
+ * from Japanese to Korean or from Korean to Japanese.
+ * from Japanese to Chinese Simplified and Chinese Simplified to Japanese.
+ * from Chinese Simplified to Chinese Traditional and Chinese Traditional to Chinese Simplified.
+* You won't receive alignment if the sentence is a canned translation. Examples of canned translations are "This is a test", "I love you", and other high-frequency sentences.
+* Alignment isn't available when you apply any of the approaches to prevent translation as described [here](../prevent-translation.md)
+
+### Obtain sentence boundaries
+
+To receive information about sentence length in the source text and translated text, specify `includeSentenceLength=true` on the query string.
+
+```curl
+curl -X POST "https://api.cognitive.microsofttranslator.com/translate?api-version=3.0&from=en&to=fr&includeSentenceLength=true" -H "Ocp-Apim-Subscription-Key: <client-secret>" -H "Content-Type: application/json; charset=UTF-8" -d "[{'Text':'The answer lies in machine translation. The best machine translation technology cannot always provide translations tailored to a site or users like a human. Simply copy and paste a code snippet anywhere.'}]"
+```
+
+The response is:
+
+```json
+[
+ {
+ "translations":[
+ {
+ "text":"La réponse se trouve dans la traduction automatique. La meilleure technologie de traduction automatique ne peut pas toujours fournir des traductions adaptées à un site ou des utilisateurs comme un être humain. Il suffit de copier et coller un extrait de code n'importe où.",
+ "to":"fr",
+ "sentLen":{"srcSentLen":[40,117,46],"transSentLen":[53,157,62]}
+ }
+ ]
+ }
+]
+```
+
+### Translate with dynamic dictionary
+
+If you already know the translation you want to apply to a word or a phrase, you can supply it as markup within the request. The dynamic dictionary is only safe for proper nouns such as personal names and product names. **Note**: the dynamic dictionary feature is case-sensitive.
+
+The markup to supply uses the following syntax.
+
+```
+<mstrans:dictionary translation="translation of phrase">phrase</mstrans:dictionary>
+```
+
+For example, consider the English sentence "The word wordomatic is a dictionary entry." To preserve the word _wordomatic_ in the translation, send the request:
+
+```curl
+curl -X POST "https://api.cognitive.microsofttranslator.com/translate?api-version=3.0&from=en&to=de" -H "Ocp-Apim-Subscription-Key: <client-secret>" -H "Content-Type: application/json; charset=UTF-8" -d "[{'Text':'The word <mstrans:dictionary translation=\"wordomatic\">wordomatic</mstrans:dictionary> is a dictionary entry.'}]"
+```
+
+The result is:
+
+```
+[
+ {
+ "translations":[
+      {"text":"Das Wort \"wordomatic\" ist ein Wörterbucheintrag.","to":"de"}
+ ]
+ }
+]
+```
+
+This dynamic-dictionary feature works the same way with `textType=text` or with `textType=html`. The feature should be used sparingly. The appropriate and far better way of customizing translation is by using Custom Translator. Custom Translator makes full use of context and statistical probabilities. If you can create training data that shows your word or phrase in context, you'll get much better results. [Learn more about Custom Translator](../custom-translator/concepts/customization.md).
+
+## Next steps
+
+> [!div class="nextstepaction"]
+> [Try the Translator quickstart](../quickstart-text-rest-api.md)
ai-services V3 0 Transliterate https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/translator/reference/v3-0-transliterate.md
+
+ Title: Translator Transliterate Method
+
+description: Convert text in one language from one script to another script with the Translator Transliterate method.
+ Last updated : 07/18/2023
+# Translator 3.0: Transliterate
+
+Converts text in one language from one script to another script.
+
+## Request URL
+
+Send a `POST` request to:
+
+```HTTP
+https://api.cognitive.microsofttranslator.com/transliterate?api-version=3.0
+```
+
+## Request parameters
+
+Request parameters passed on the query string are:
+
+| Query parameter | Description |
+| | |
+| api-version | *Required parameter*.<br/>Version of the API requested by the client. Value must be `3.0`. |
+| language | *Required parameter*.<br/>Specifies the language of the text to convert from one script to another. Possible languages are listed in the `transliteration` scope obtained by querying the service for its [supported languages](./v3-0-languages.md). |
+| fromScript | *Required parameter*.<br/>Specifies the script used by the input text. Look up [supported languages](./v3-0-languages.md) using the `transliteration` scope, to find input scripts available for the selected language. |
+| toScript | *Required parameter*.<br/>Specifies the output script. Look up [supported languages](./v3-0-languages.md) using the `transliteration` scope, to find output scripts available for the selected combination of input language and input script. |
+
+Request headers include:
+
+| Headers | Description |
+| | |
+| Authentication header(s) | <em>Required request header</em>.<br/>See [available options for authentication](./v3-0-reference.md#authentication). |
+| Content-Type | *Required request header*.<br/>Specifies the content type of the payload. Possible values are: `application/json` |
+| Content-Length | *Required request header*.<br/>The length of the request body. |
+| X-ClientTraceId | *Optional*.<br/>A client-generated GUID to uniquely identify the request. Note that you can omit this header if you include the trace ID in the query string using a query parameter named `ClientTraceId`. |
+
+## Request body
+
+The body of the request is a JSON array. Each array element is a JSON object with a string property named `Text`, which represents the string to convert.
+
+```json
+[
+  {"Text":"こんにちは"},
+ {"Text":"さようなら"}
+]
+```
+
+The following limitations apply:
+
+* The array can have at most 10 elements.
+* The text value of an array element cannot exceed 1,000 characters including spaces.
+* The entire text included in the request cannot exceed 5,000 characters including spaces.
+
+## Response body
+
+A successful response is a JSON array with one result for each element in the input array. A result object includes the following properties:
+
+ * `text`: A string which is the result of converting the input string to the output script.
+
+ * `script`: A string specifying the script used in the output.
+
+An example JSON response is:
+
+```json
+[
+ {"text":"konnnichiha","script":"Latn"},
+ {"text":"sayounara","script":"Latn"}
+]
+```
+
+## Response headers
+
+| Headers | Description |
+| | |
+| X-RequestId | Value generated by the service to identify the request. It is used for troubleshooting purposes. |
+
+## Response status codes
+
+The following are the possible HTTP status codes that a request returns.
+
+| Status Code | Description |
+| | |
+| 200 | Success. |
+| 400 | One of the query parameters is missing or not valid. Correct request parameters before retrying. |
+| 401 | The request could not be authenticated. Check that credentials are specified and valid. |
+| 403 | The request is not authorized. Check the detailed error message. This often indicates that all free translations provided with a trial subscription have been used up. |
+| 429 | The server rejected the request because the client has exceeded request limits. |
+| 500 | An unexpected error occurred. If the error persists, report it with: date and time of the failure, request identifier from response header `X-RequestId`, and client identifier from request header `X-ClientTraceId`. |
+| 503 | Server temporarily unavailable. Retry the request. If the error persists, report it with: date and time of the failure, request identifier from response header `X-RequestId`, and client identifier from request header `X-ClientTraceId`. |
+
+If an error occurs, the request also returns a JSON error response. The error code is a 6-digit number combining the 3-digit HTTP status code followed by a 3-digit number to further categorize the error. Common error codes can be found on the [v3 Translator reference page](./v3-0-reference.md#errors).
+
+## Examples
+
+The following example shows how to convert two Japanese strings into Romanized Japanese.
+
+The JSON payload for the request in this example:
+
+```json
+[{"text":"こんにちは","script":"jpan"},{"text":"さようなら","script":"jpan"}]
+```
+
+If you are using cURL in a command-line window that does not support Unicode characters, save the preceding JSON payload into a file named `request.txt`. Be sure to save the file with `UTF-8` encoding.
+
+```
+curl -X POST "https://api.cognitive.microsofttranslator.com/transliterate?api-version=3.0&language=ja&fromScript=Jpan&toScript=Latn" -H "X-ClientTraceId: 875030C7-5380-40B8-8A03-63DACCF69C11" -H "Ocp-Apim-Subscription-Key: <client-secret>" -H "Content-Type: application/json" -d @request.txt
+```
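+
+If you prefer Python to cURL, the following minimal sketch sends the same transliteration request using the `requests` library. The key is a placeholder; if your resource is regional, you also need to add an `Ocp-Apim-Subscription-Region` header.
+
+```python
+import requests
+
+key = "<your-translator-key>"  # placeholder
+url = "https://api.cognitive.microsofttranslator.com/transliterate"
+params = {"api-version": "3.0", "language": "ja", "fromScript": "Jpan", "toScript": "Latn"}
+headers = {"Ocp-Apim-Subscription-Key": key, "Content-Type": "application/json"}
+body = [{"Text": "こんにちは"}, {"Text": "さようなら"}]
+
+response = requests.post(url, params=params, headers=headers, json=body)
+print(response.json())
+```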
ai-services Service Limits https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/translator/service-limits.md
+
+ Title: Service limits - Translator Service
+
+description: This article lists service limits for the Translator text and document translation. Charges are incurred based on character count, not request frequency with a limit of 50,000 characters per request. Character limits are subscription-based, with F0 limited to 2 million characters per hour.
++++++ Last updated : 07/18/2023+++
+# Service limits for Azure AI Translator Service
+
+This article provides both a quick reference and detailed description of Azure AI Translator Service character and array limits for text and document translation.
+
+## Text translation
+
+Charges are incurred based on character count, not request frequency. Character limits are subscription-based.
+
+### Character and array limits per request
+
+Each translate request is limited to 50,000 characters across all the target languages. For example, sending a translate request of 3,000 characters to translate to three different languages results in a request size of 3,000 &times; 3 = 9,000 characters, which meets the request limit. You're charged per character, not by the number of requests; therefore, it's recommended that you send shorter requests.
+
+The following table lists array element and character limits for each text translation operation.
+
+| Operation | Maximum Size of Array Element | Maximum Number of Array Elements | Maximum Request Size (characters) |
+|:-|:-|:-|:-|
+| **Translate** | 50,000| 1,000| 50,000 |
+| **Transliterate** | 5,000| 10| 5,000 |
+| **Detect** | 50,000 |100 |50,000 |
+| **BreakSentence** | 50,000| 100 |50,000 |
+| **Dictionary Lookup** | 100 |10| 1,000 |
+| **Dictionary Examples** | 100 for text and 100 for translation (200 total)| 10|2,000 |
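+
+As an illustration of working within these limits, the following minimal Python sketch (not an official utility) splits a long text so that each **Translate** request stays under 50,000 characters across all target languages:
+
+```python
+def chunk_for_translate(text: str, target_count: int, request_limit: int = 50_000) -> list[str]:
+    # Each request counts as (characters sent) x (number of target languages),
+    # so divide the limit by the target count. A real splitter would also respect
+    # sentence boundaries rather than cutting mid-sentence.
+    per_chunk = request_limit // max(target_count, 1)
+    return [text[i:i + per_chunk] for i in range(0, len(text), per_chunk)]
+
+chunks = chunk_for_translate("A long document. " * 10_000, target_count=3)
+print(len(chunks), "request(s) needed")
+```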
+
+### Character limits per hour
+
+Your character limit per hour is based on your Translator subscription tier.
+
+The hourly quota should be consumed evenly throughout the hour. For example, at the F0 tier limit of 2 million characters per hour, characters should be consumed no faster than roughly 33,300 characters per minute: the sliding-window rate is the 2-million-character quota divided by 60 minutes.
+
+You're likely to receive an out-of-quota response under the following circumstances:
+
+* You've reached or surpassed the quota limit.
+* You've sent a large portion of the quota in too short a period of time.
+
+There are no limits on concurrent requests.
+
+| Tier | Character limit |
+||--|
+| F0 | 2 million characters per hour |
+| S1 | 40 million characters per hour |
+| S2 / C2 | 40 million characters per hour |
+| S3 / C3 | 120 million characters per hour |
+| S4 / C4 | 200 million characters per hour |
+
+Limits for [multi-service subscriptions](./reference/v3-0-reference.md#authentication) are the same as the S1 tier.
+
+These limits are restricted to Microsoft's standard translation models. Custom translation models that use Custom Translator are limited to 3,600 characters per second, per model.
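+
+As a rough illustration of spreading the hourly quotas above evenly, the following Python sketch (client-side pacing only, not an official throttling mechanism) derives a per-minute character budget for each tier:
+
+```python
+HOURLY_QUOTA = {
+    "F0": 2_000_000,
+    "S1": 40_000_000,
+    "S2/C2": 40_000_000,
+    "S3/C3": 120_000_000,
+    "S4/C4": 200_000_000,
+}
+
+def per_minute_budget(tier: str) -> int:
+    # Spread the hourly character quota evenly across a sliding 60-minute window.
+    return HOURLY_QUOTA[tier] // 60
+
+print(per_minute_budget("F0"))  # roughly 33,333 characters per minute
+```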
+
+### Latency
+
+The Translator has a maximum latency of 15 seconds using standard models and 120 seconds when using custom models. Typically, responses *for text within 100 characters* are returned in 150 milliseconds to 300 milliseconds. The custom translator models have similar latency characteristics on sustained request rate and may have a higher latency when your request rate is intermittent. Response times vary based on the size of the request and language pair. If you don't receive a translation or an [error response](./reference/v3-0-reference.md#errors) within that time frame, check your code, your network connection, and retry.
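+
+The following Python sketch (illustrative only) applies these maximum latencies as client-side timeouts and retries once if no response arrives in time:
+
+```python
+import requests
+
+STANDARD_MODEL_TIMEOUT = 15   # seconds, standard models
+CUSTOM_MODEL_TIMEOUT = 120    # seconds, custom models
+
+def post_with_retry(url, headers, body, timeout=STANDARD_MODEL_TIMEOUT, retries=1):
+    # Time out at the documented maximum latency and retry once before giving up.
+    for attempt in range(retries + 1):
+        try:
+            return requests.post(url, headers=headers, json=body, timeout=timeout)
+        except requests.Timeout:
+            if attempt == retries:
+                raise
+```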
+
+## Document Translation
+
+This table lists the content limits for data sent using Document Translation:
+
+|Attribute | Limit|
+|||
+|Document size| ≤ 40 MB |
+|Total number of files| ≤ 1000 |
+|Total content size in a batch | ≤ 250 MB|
+|Number of target languages in a batch| ≤ 10 |
+|Size of Translation memory file| ≤ 10 MB|
+
+> [!NOTE]
+> Document Translation can't be used to translate secured documents such as those with an encrypted password or with restricted access to copy content.
+
+## Next steps
+
+* [Pricing](https://azure.microsoft.com/pricing/details/cognitive-services/translator-text-api/)
+* [Regional availability](https://azure.microsoft.com/global-infrastructure/services/?products=cognitive-services)
+* [v3 Translator reference](./reference/v3-0-reference.md)
ai-services Sovereign Clouds https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/translator/sovereign-clouds.md
+
+ Title: "Translator: sovereign clouds"
+
+description: Using Translator in sovereign clouds
++++++ Last updated : 07/18/2023+++
+# Translator in sovereign (national) clouds
+
+ Azure sovereign clouds are isolated in-country/region platforms with independent authentication, storage, and compliance requirements. Sovereign clouds are often used within geographical boundaries where there's a strict data residency requirement. Translator is currently deployed in the following sovereign clouds:
+
+|Cloud | Region identifier |
+||--|
+| [Azure US Government](../../azure-government/documentation-government-welcome.md)|<ul><li>`usgovarizona` (US Gov Arizona)</li><li>`usgovvirginia` (US Gov Virginia)</li></ul>|
+| [Azure China 21 Vianet](/azure/china/overview-operations) |<ul><li>`chinaeast2` (East China 2)</li><li>`chinanorth` (China North)</li></ul>|
+
+## Azure portal endpoints
+
+The following table lists the base URLs for Azure sovereign cloud endpoints:
+
+| Sovereign cloud | Azure portal endpoint |
+| | -- |
+| Azure portal for US Government | `https://portal.azure.us` |
+| Azure portal China operated by 21 Vianet | `https://portal.azure.cn` |
+
+<!-- markdownlint-disable MD033 -->
+
+## Translator: sovereign clouds
+
+### [Azure US Government](#tab/us)
+
+ The Azure Government cloud is available to US government customers and their partners. US federal, state, local, tribal governments and their partners have access to the Azure Government cloud dedicated instance. Cloud operations are controlled by screened US citizens.
+
+| Azure US Government | Availability and support |
+|--|--|
+|Azure portal | <ul><li>[Azure Government Portal](https://portal.azure.us/)</li></ul>|
+| Available regions<br/><br/>The region-identifier is a required header when using Translator for the government cloud. | <ul><li>`usgovarizona` </li><li> `usgovvirginia`</li></ul>|
+|Available pricing tiers|<ul><li>Free (F0) and Standard (S1). See [Translator pricing](https://azure.microsoft.com/pricing/details/cognitive-services/translator/)</li></ul>|
+|Supported Features | <ul><li>[Text Translation](reference/v3-0-reference.md)</li><li>[Document Translation](document-translation/overview.md)</li><li>[Custom Translator](custom-translator/overview.md)</li></ul>|
+|Supported Languages| <ul><li>[Translator language support](language-support.md)</li></ul>|
+
+<!-- markdownlint-disable MD036 -->
+
+### Endpoint
+
+#### Azure portal
+
+Base URL:
+
+```http
+https://portal.azure.us
+```
+
+#### Authorization token
+
+Replace the `<region-identifier>` parameter with the sovereign cloud identifier:
+
+|Cloud | Region identifier |
+||--|
+| Azure US Government|<ul><li>`usgovarizona` (US Gov Arizona)</li><li>`usgovvirginia` (US Gov Virginia)</li></ul>|
+| Azure China 21 Vianet|<ul><li>`chinaeast2` (East China 2)</li><li>`chinanorth` (China North)</li></ul>|
+
+```http
+https://<region-identifier>.api.cognitive.microsoft.us/sts/v1.0/issueToken
+```
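+
+For example, a minimal Python sketch (placeholder key and region, assuming the standard Azure AI services token exchange of an empty POST with the resource key header) that requests an access token from the US Government endpoint:
+
+```python
+import requests
+
+region = "usgovvirginia"          # or "usgovarizona"
+key = "<your-translator-key>"     # placeholder
+
+token_url = f"https://{region}.api.cognitive.microsoft.us/sts/v1.0/issueToken"
+response = requests.post(token_url, headers={"Ocp-Apim-Subscription-Key": key})
+response.raise_for_status()
+
+# The response body is the bearer token, sent on later calls as "Authorization: Bearer <token>".
+print(response.text[:40], "...")
+```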
+
+#### Text translation
+
+```http
+https://api.cognitive.microsofttranslator.us/
+```
+
+#### Document Translation custom endpoint
+
+```http
+https://<NAME-OF-YOUR-RESOURCE>.cognitiveservices.azure.us/translator/text/batch/v1.0
+```
+
+#### Custom Translator portal
+
+```http
+https://portal.customtranslator.azure.us/
+```
+
+### Example API translation request
+
+Translate a single sentence from English to Simplified Chinese.
+
+**Request**
+
+```curl
+curl -X POST "https://api.cognitive.microsofttranslator.us/translate?api-version=3.0?&from=en&to=zh-Hans" -H "Ocp-Apim-Subscription-Key: <key>" -H "Ocp-Apim-Subscription-Region: chinanorth" -H "Content-Type: application/json; charset=UTF-8" -d "[{'Text':'你好, 你叫什么名字?'}]"
+```
+
+**Response body**
+
+```JSON
+[
+ {
+ "translations":[
+ {"text": "Hello, what is your name?", "to": "en"}
+ ]
+ }
+]
+```
+
+> [!div class="nextstepaction"]
+> [Azure Government: Translator text reference](../../azure-government/documentation-government-cognitiveservices.md#translator)
+
+### [Azure China 21 Vianet](#tab/china)
+
+The Azure China cloud is a physical and logical network-isolated instance of cloud services located in China. In order to apply for an Azure China account, you need a Chinese legal entity, Internet Content provider (ICP) license, and physical presence within China.
+
+|Azure China 21 Vianet | Availability and support |
+|||
+|Azure portal |<ul><li>[Azure China 21 Vianet Portal](https://portal.azure.cn/)</li></ul>|
+|Regions <br/><br/>The region-identifier is a required header when using a multi-service resource. | <ul><li>`chinanorth` </li><li> `chinaeast2`</li></ul>|
+|Supported Feature|<ul><li>[Text Translation](https://docs.azure.cn/cognitive-services/translator/reference/v3-0-reference)</li><li>[Document Translation](document-translation/overview.md)</li></ul>|
+|Supported Languages|<ul><li>[Translator language support.](https://docs.azure.cn/cognitive-services/translator/language-support)</li></ul>|
+
+<!-- markdownlint-disable MD036 -->
+<!-- markdownlint-disable MD024 -->
+
+### Endpoint
+
+#### Azure portal
+
+Base URL:
+
+```http
+https://portal.azure.cn
+```
+
+#### Authorization token
+
+Replace the `<region-identifier>` parameter with the sovereign cloud identifier:
+
+```http
+https://<region-identifier>.api.cognitive.azure.cn/sts/v1.0/issueToken
+```
+
+#### Text translation
+
+```http
+https://api.translator.azure.cn/translate
+```
+
+### Example text translation request
+
+Translate a single sentence from English to Simplified Chinese.
+
+**Request**
+
+```curl
+curl -X POST "https://api.translator.azure.cn/translate?api-version=3.0&from=en&to=zh-Hans" -H "Ocp-Apim-Subscription-Key: <client-secret>" -H "Content-Type: application/json; charset=UTF-8" -d "[{'Text': 'Hello, what is your name?'}]"
+```
+
+**Response body**
+
+```JSON
+[
+ {
+ "translations":[
+ {"text": "你好, 你叫什么名字?", "to": "zh-Hans"}
+ ]
+ }
+]
+```
+
+#### Document Translation custom endpoint
+
+```http
+https://<NAME-OF-YOUR-RESOURCE>.cognitiveservices.azure.cn/translator/text/batch/v1.0
+```
+
+### Example batch translation request
+
+```json
+{
+ "inputs": [
+ {
+ "source": {
+ "sourceUrl": "https://<storage_acount>.blob.core.chinacloudapi.cn/source-en?sv=2019-12-12&st=2021-03-05T17%3A45%3A25Z&se=2021-03-13T17%3A45%3A00Z&sr=c&sp=rl&sig=SDRPMjE4nfrH3csmKLILkT%2Fv3e0Q6SWpssuuQl1NmfM%3D"
+ },
+ "targets": [
+ {
+ "targetUrl": "https://<storage_acount>.blob.core.chinacloudapi.cn/target-zh-Hans?sv=2019-12-12&st=2021-03-05T17%3A49%3A02Z&se=2021-03-13T17%3A49%3A00Z&sr=c&sp=wdl&sig=Sq%2BYdNbhgbq4hLT0o1UUOsTnQJFU590sWYo4BOhhQhs%3D",
+ "language": "zh-Hans"
+ }
+ ]
+ }
+ ]
+}
+```
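+
+A minimal Python sketch (illustrative only; the resource name, key, and SAS URLs are placeholders, and the `batches` route is assumed from the v1.0 Document Translation API) that submits a request like the one above:
+
+```python
+import requests
+
+endpoint = "https://<NAME-OF-YOUR-RESOURCE>.cognitiveservices.azure.cn/translator/text/batch/v1.0"
+key = "<your-translator-key>"
+
+batch_request = {
+    "inputs": [{
+        "source": {"sourceUrl": "<source-container-SAS-URL>"},
+        "targets": [{"targetUrl": "<target-container-SAS-URL>", "language": "zh-Hans"}],
+    }]
+}
+
+# Batch translation is asynchronous: a successful submission returns 202 Accepted
+# with an Operation-Location header you can poll for job status.
+response = requests.post(f"{endpoint}/batches",
+                         headers={"Ocp-Apim-Subscription-Key": key},
+                         json=batch_request)
+print(response.status_code, response.headers.get("Operation-Location"))
+```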
+++
+## Next steps
+
+> [!div class="nextstepaction"]
+> [Learn more about Translator](index.yml)
ai-services Text Sdk Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/translator/text-sdk-overview.md
+
+ Title: Azure Text Translation SDKs
+
+description: Azure Text Translation software development kits (SDKs) expose Text Translation features and capabilities, using C#, Java, JavaScript, and Python programming language.
++++++ Last updated : 07/18/2023+
+recommendations: false
++
+<!-- markdownlint-disable MD024 -->
+<!-- markdownlint-disable MD036 -->
+<!-- markdownlint-disable MD001 -->
+<!-- markdownlint-disable MD051 -->
+
+# Azure Text Translation SDK (preview)
+
+> [!IMPORTANT]
+>
+> * The Translator text SDKs are currently available in public preview. Features, approaches and processes may change, prior to General Availability (GA), based on user feedback.
+
+Azure Text Translation is a cloud-based REST API feature of the Azure AI Translator service. The Text Translation API enables quick and accurate source-to-target text translations in real time. The Text Translation software development kit (SDK) is a set of libraries and tools that enable you to easily integrate Text Translation REST API capabilities into your applications. Text Translation SDK is available across programming platforms in C#/.NET, Java, JavaScript, and Python.
+
+## Supported languages
+
+The Text Translation SDK supports the following programming languages and platforms:
+
+| Language → SDK version | Package|Client library| Supported API version|
+|:-:|:-|:-|:-|
+|[.NET/C# → 1.0.0-beta.1](https://azuresdkdocs.blob.core.windows.net/$web/dotnet/Azure.AI.Translation.Text/1.0.0-beta.1/https://docsupdatetracker.net/index.html)|[NuGet](https://www.nuget.org/packages/Azure.AI.Translation.Text/1.0.0-beta.1)|[Azure SDK for .NET](/dotnet/api/overview/azure/ai.translation.text-readme?view=azure-dotnet-preview&preserve-view=true)|Translator v3.0|
+|[Java&#x2731; → 1.0.0-beta.1](https://azuresdkdocs.blob.core.windows.net/$web/java/azure-ai-translation-text/1.0.0-beta.1/https://docsupdatetracker.net/index.html)|[MVN repository](https://mvnrepository.com/artifact/com.azure/azure-ai-translation-text/1.0.0-beta.1)|[Azure SDK for Java](/java/api/overview/azure/ai-translation-text-readme?view=azure-java-preview&preserve-view=true)|Translator v3.0|
+|[JavaScript → 1.0.0-beta.1](https://azuresdkdocs.blob.core.windows.net/$web/javascript/azure-cognitiveservices-translatortext/1.0.0/https://docsupdatetracker.net/index.html)|[npm](https://www.npmjs.com/package/@azure-rest/ai-translation-text/v/1.0.0-beta.1)|[Azure SDK for JavaScript](/javascript/api/overview/azure/text-translation?view=azure-node-preview&preserve-view=true) |Translator v3.0 |
+|**Python → 1.0.0b1**|[PyPi](https://pypi.org/project/azure-ai-translation-text/1.0.0b1/)|[Azure SDK for Python](/python/api/azure-ai-translation-text/azure.ai.translation.text?view=azure-python-preview&preserve-view=true) |Translator v3.0|
+
+&#x2731; The Azure Text Translation SDK for Java is tested and supported on Windows, Linux, and macOS platforms. It isn't tested on other platforms and doesn't support Android deployments.
+
+## Changelog and release history
+
+This section provides a version-based description of Text Translation feature and capability releases, changes, updates, and enhancements.
+
+#### Translator Text SDK April 2023 preview release
+
+This release includes the following updates:
+
+### [**C#**](#tab/csharp)
+
+* **Version 1.0.0-beta.1 (2023-04-17)**
+* **Targets Text Translation v3.0**
+* **Initial version release**
+
+[**Package (NuGet)**](https://www.nuget.org/packages/Azure.AI.Translation.Text/1.0.0-beta.1)
+
+[**Changelog/Release History**](https://github.com/Azure/azure-sdk-for-net/blob/main/sdk/translation/Azure.AI.Translation.Text/CHANGELOG.md#100-beta1-2023-04-17)
+
+[**README**](https://github.com/Azure/azure-sdk-for-net/tree/main/sdk/translation/Azure.AI.Translation.Text#readme)
+
+[**Samples**](https://github.com/Azure/azure-sdk-for-net/tree/main/sdk/translation/Azure.AI.Translation.Text/samples)
+
+### [**Java**](#tab/java)
+
+* **Version 1.0.0-beta.1 (2023-04-18)**
+* **Targets Text Translation v3.0**
+* **Initial version release**
+
+[**Package (MVN)**](https://mvnrepository.com/artifact/com.azure/azure-ai-translation-text/1.0.0-beta.1)
+
+[**Changelog/Release History**](https://github.com/Azure/azure-sdk-for-jav#100-beta1-2023-04-18)
+
+[**README**](https://github.com/Azure/azure-sdk-for-java/tree/main/sdk/translation/azure-ai-translation-text#readme)
+
+[**Samples**](https://github.com/Azure/azure-sdk-for-java/tree/main/sdk/translation/azure-ai-translation-text#next-steps)
+
+### [**JavaScript**](#tab/javascript)
+
+* **Version 1.0.0-beta.1 (2023-04-18)**
+* **Targets Text Translation v3.0**
+* **Initial version release**
+
+[**Package (npm)**](https://www.npmjs.com/package/@azure-rest/ai-translation-text/v/1.0.0-beta.1)
+
+[**Changelog/Release History**](https://github.com/Azure/azure-sdk-for-js/blob/main/sdk/translation/ai-translation-text-rest/CHANGELOG.md#100-beta1-2023-04-18)
+
+[**README**](https://github.com/Azure/azure-sdk-for-js/blob/main/sdk/translation/ai-translation-text-rest/README.md)
+
+[**Samples**](https://github.com/Azure/azure-sdk-for-js/tree/main/sdk/translation/ai-translation-text-rest/samples/v1-beta)
+
+### [**Python**](#tab/python)
+
+* **Version 1.0.0b1 (2023-04-19)**
+* **Targets Text Translation v3.0**
+* **Initial version release**
+
+[**Package (PyPi)**](https://pypi.org/project/azure-ai-translation-text/1.0.0b1/)
+
+[**Changelog/Release History**](https://github.com/Azure/azure-sdk-for-python/blob/main/sdk/translation/azure-ai-translation-text/CHANGELOG.md#100b1-2023-04-19)
+
+[**README**](https://github.com/Azure/azure-sdk-for-python/blob/main/sdk/translation/azure-ai-translation-text/README.md)
+
+[**Samples**](https://github.com/Azure/azure-sdk-for-python/tree/main/sdk/translation/azure-ai-translation-text/samples)
+++
+## Use Text Translation SDK in your applications
+
+The Text Translation SDK enables the use and management of the Text Translation service in your application. The SDK builds on the underlying Text Translation REST API allowing you to easily use those APIs within your programming language paradigm. Here's how you use the Text Translation SDK for your preferred programming language:
+
+### 1. Install the SDK client library
+
+### [C#/.NET](#tab/csharp)
+
+```dotnetcli
+dotnet add package Azure.AI.Translation.Text --version 1.0.0-beta.1
+```
+
+```powershell
+Install-Package Azure.AI.Translation.Text -Version 1.0.0-beta.1
+```
+
+### [Java](#tab/java)
+
+```xml
+<dependency>
+ <groupId>com.azure</groupId>
+ <artifactId>azure-ai-translation-text</artifactId>
+ <version>1.0.0-beta.1</version>
+</dependency>
+```
+
+```kotlin
+implementation("com.azure:azure-ai-translation-text:1.0.0-beta.1")
+```
+
+### [JavaScript](#tab/javascript)
+
+```console
+npm i @azure-rest/ai-translation-text@1.0.0-beta.1
+```
+
+### [Python](#tab/python)
+
+```console
+pip install azure-ai-translation-text==1.0.0b1
+```
+++
+### 2. Import the SDK client library into your application
+
+### [C#/.NET](#tab/csharp)
+
+```csharp
+using Azure;
+using Azure.AI.Translation.Text;
+```
+
+### [Java](#tab/java)
+
+```java
+import java.util.List;
+import java.util.ArrayList;
+import com.azure.ai.translation.text.models.*;
+import com.azure.ai.translation.text.TextTranslationClientBuilder;
+import com.azure.ai.translation.text.TextTranslationClient;
+
+import com.azure.core.credential.AzureKeyCredential;
+```
+
+### [JavaScript](#tab/javascript)
+
+```javascript
+const TextTranslationClient = require("@azure-rest/ai-translation-text").default;
+```
+
+### [Python](#tab/python)
+
+```python
+from azure.ai.translation.text import TextTranslationClient, TranslatorCredential
+from azure.ai.translation.text.models import InputTextItem
+```
+++
+### 3. Authenticate the client
+
+Interaction with the Translator service using the client library begins with creating an instance of the `TextTranslationClient` class. You need your API key and region to instantiate a client object.
+The Text Translation API key is found in the Azure portal:
++
+### [C#/.NET](#tab/csharp)
+
+**Using the global endpoint (default)**
+
+```csharp
+string key = "<your-key>";
+
+AzureKeyCredential credential = new(key);
+TextTranslationClient client = new(credential);
+```
+
+**Using a regional endpoint**
+
+```csharp
+
+Uri endpoint = new("<your-endpoint>");
+string key = "<your-key>";
+string region = "<region>";
+
+AzureKeyCredential credential = new(key);
+TextTranslationClient client = new(credential, region);
+```
+
+### [Java](#tab/java)
+
+**Using the global endpoint (default)**
+
+```java
+
+String apiKey = "<your-key>";
+AzureKeyCredential credential = new AzureKeyCredential(apiKey);
+
+TextTranslationClient client = new TextTranslationClientBuilder()
+ .credential(credential)
+ .buildClient();
+```
+
+**Using a regional endpoint**
+
+```java
+String apiKey = "<your-key>";
+String endpoint = "<your-endpoint>";
+String region = "<region>";
+
+AzureKeyCredential credential = new AzureKeyCredential(apiKey);
+
+TextTranslationClient client = new TextTranslationClientBuilder()
+.credential(credential)
+.region(region)
+.endpoint(endpoint)
+.buildClient();
+
+```
+
+### [JavaScript](#tab/javascript)
+
+```javascript
+
+const TextTranslationClient = require("@azure-rest/ai-translation-text").default;
+
+const apiKey = "<your-key>";
+const endpoint = "<your-endpoint>";
+const region = "<region>";
+
+```
+
+### [Python](#tab/python)
+
+```python
+
+from azure.ai.translation.text import TextTranslationClient, TranslatorCredential
+from azure.ai.translation.text.models import InputTextItem
+from azure.core.exceptions import HttpResponseError
+
+key = "<your-key>"
+endpoint = "<your-endpoint>"
+region = "<region>"
++
+credential = TranslatorCredential(key, region)
+text_translator = TextTranslationClient(endpoint=endpoint, credential=credential)
+```
+++
+### 4. Build your application
+
+### [C#/.NET](#tab/csharp)
+
+Create a client object to interact with the Text Translation SDK, and then call methods on that client object to interact with the service. The SDKs provide both synchronous and asynchronous methods. For more insight, *see* the Text Translation [sample repository](https://github.com/Azure/azure-sdk-for-net/tree/main/sdk/translation/Azure.AI.Translation.Text/samples) for .NET/C#.
+
+### [Java](#tab/java)
+
+Create a client object to interact with the Text Translation SDK, and then call methods on that client object to interact with the service. The SDKs provide both synchronous and asynchronous methods. For more insight, *see* the Text Translation [sample repository](https://github.com/Azure/azure-sdk-for-java/tree/main/sdk/translation/azure-ai-translation-text/src/samples/java/com/azure/ai/translation/text) for Java.
+
+### [JavaScript](#tab/javascript)
+
+Create a client object to interact with the Text Translation SDK, and then call methods on that client object to interact with the service. The SDKs provide both synchronous and asynchronous methods. For more insight, *see* the Text Translation [sample repository](https://github.com/Azure/azure-sdk-for-js/tree/main/sdk/translation/ai-translation-text-rest/samples/v1-beta) for JavaScript or TypeScript.
+
+### [Python](#tab/python)
+
+Create a client object to interact with the Text Translation SDK, and then call methods on that client object to interact with the service. The SDKs provide both synchronous and asynchronous methods. For more insight, *see* the Text Translation [sample repository](https://github.com/Azure/azure-sdk-for-python/tree/main/sdk/translation/azure-ai-translation-text/samples) for Python.
+++
+## Help options
+
+The [Microsoft Q&A](/answers/tags/132/azure-translator) and [Stack Overflow](https://stackoverflow.com/questions/tagged/azure-text-translation) forums are available for the developer community to ask and answer questions about Azure Text Translation and other services. Microsoft monitors the forums and replies to questions that the community has yet to answer. To make sure that we see your question, tag it with **`azure-text-translation`**.
+
+## Next steps
+
+>[!div class="nextstepaction"]
+> [**Text Translation quickstart**](quickstart-text-sdk.md)
+
+>[!div class="nextstepaction"]
+> [**Text Translation v3.0 reference guide**](reference/v3-0-reference.md)
ai-services Text Translation Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/translator/text-translation-overview.md
+
+ Title: What is Azure Text Translation?
+
+description: Integrate the Text Translation API into your applications, websites, tools, and other solutions to provide multi-language user experiences.
++++++ Last updated : 07/18/2023+++
+# What is Azure Text Translation?
+
+ Azure Text Translation is a cloud-based REST API feature of the Translator service that uses neural machine translation technology to enable quick and accurate source-to-target text translation in real time across all [supported languages](language-support.md). In this overview, you learn how the Text Translation REST APIs enable you to build intelligent solutions for your applications and workflows.
+
+Text translation documentation contains the following article types:
+
+* [**Quickstarts**](quickstart-text-rest-api.md). Getting-started instructions to guide you through making requests to the service.
+* [**How-to guides**](create-translator-resource.md). Instructions for accessing and using the service in more specific or customized ways.
+* [**Reference articles**](reference/v3-0-reference.md). REST API documentation and programming language-based content.
+
+## Text translation features
+
+ Text Translation supports the following methods:
+
+* [**Languages**](reference/v3-0-languages.md). Returns a list of languages supported by **Translate**, **Transliterate**, and **Dictionary Lookup** operations. This request doesn't require authentication; just copy and paste the following GET request into Postman, your favorite API tool, or a browser (a minimal Python sketch also follows this feature list):
+
+ ```http
+ https://api.cognitive.microsofttranslator.com/languages?api-version=3.0
+ ```
+
+* [**Translate**](reference/v3-0-translate.md#translate-to-multiple-languages). Renders single source-language text to multiple target-language texts with a single request.
+
+* [**Transliterate**](reference/v3-0-transliterate.md). Converts characters or letters of a source language to the corresponding characters or letters of a target language.
+
+* [**Detect**](reference/v3-0-detect.md). Returns the source language code and a boolean value denoting whether the detected language is supported for text translation and transliteration.
+
+ > [!NOTE]
+ > You can **Translate, Transliterate, and Detect** text with [a single REST API call](reference/v3-0-translate.md#translate-a-single-input-with-language-autodetection) .
+
+* [**Dictionary lookup**](reference/v3-0-dictionary-lookup.md). Returns equivalent words for the source term in the target language.
+* [**Dictionary example**](reference/v3-0-dictionary-examples.md). Returns grammatical structure and context examples for the source term and target term pair.
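+
+As mentioned in the **Languages** item, the languages resource needs no authentication. A minimal Python sketch:
+
+```python
+import requests
+
+# No subscription key is required for the languages resource.
+response = requests.get(
+    "https://api.cognitive.microsofttranslator.com/languages",
+    params={"api-version": "3.0"},
+)
+languages = response.json()
+print(len(languages["translation"]), "languages supported for translation")
+```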
+
+## Text translation deployment options
+
+Add Text Translation to your projects and applications using the following resources:
+
+* Access the cloud-based Translator service via the [**REST API**](reference/rest-api-guide.md), available in Azure.
+
+* Use the REST API [translate request](containers/translator-container-supported-parameters.md) with the [**Text translation Docker container**](containers/translator-how-to-install-container.md).
+
+ > [!IMPORTANT]
+ >
+ > * To use the Translator container you must complete and submit the [**Azure AI services Application for Gated Services**](https://aka.ms/csgate-translator) online request form and have it approved to acquire access to the container.
+ >
+ > * The [**Translator container image**](https://hub.docker.com/_/microsoft-azure-cognitive-services-translator-text-translation) supports limited features compared to cloud offerings.
+ >
+
+## Get started with Text Translation
+
+Ready to begin?
+
+* [**Create a Translator resource**](create-translator-resource.md "Go to the Azure portal.") in the Azure portal.
+
+* [**Get your access keys and API endpoint**](create-translator-resource.md#authentication-keys-and-endpoint-url). An endpoint URL and read-only key are required for authentication.
+
+* Explore our [**Quickstart**](quickstart-text-rest-api.md "Learn to use Translator via REST and a preferred programming language.") and view use cases and code samples for the following programming languages:
+ * [**C#/.NET**](quickstart-text-rest-api.md?tabs=csharp)
+ * [**Go**](quickstart-text-rest-api.md?tabs=go)
+ * [**Java**](quickstart-text-rest-api.md?tabs=java)
+ * [**JavaScript/Node.js**](quickstart-text-rest-api.md?tabs=nodejs)
+ * [**Python**](quickstart-text-rest-api.md?tabs=python)
+
+## Next steps
+
+Dive deeper into the Text Translation REST API:
+
+> [!div class="nextstepaction"]
+> [See the REST API reference](./reference/v3-0-reference.md)
ai-services Translator Faq https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/translator/translator-faq.md
+
+ Title: Frequently asked questions - Translator
+
+description: Get answers to frequently asked questions about the Translator API in Azure AI services.
+++++++ Last updated : 07/18/2023+++
+# Frequently asked questions: Translator API
++
+## How does Translator count characters?
+
+Translator counts every code point defined in Unicode as a character. Each translation counts as a separate translation, even if the request was made in a single API call translating to multiple languages. The length of the response doesn't matter and the number of requests, words, bytes, or sentences isn't relevant to character count.
+
+Translator counts the following input:
+
+* Text passed to Translator in the body of a request.
+ * `Text` when using the [Translate](reference/v3-0-translate.md), [Transliterate](reference/v3-0-transliterate.md), and [Dictionary Lookup](reference/v3-0-dictionary-lookup.md) methods
+ * `Text` and `Translation` when using the [Dictionary Examples](reference/v3-0-dictionary-examples.md) method.
+
+* All markup: HTML, XML tags, etc. within the request body text field. JSON notation used to build the request (for instance the key "Text:") is **not** counted.
+* An individual letter.
+* Punctuation.
+* A space, tab, markup, or any white-space character.
+* A repeated translation, even if you have previously translated the same text. Every character submitted to the translate function is counted even when the content is unchanged or the source and target language are the same.
+
+For scripts based on graphic symbols, such as written Chinese and Japanese Kanji, the Translator service counts the number of Unicode code points. One character per symbol. Exception: Unicode surrogate pairs count as two characters.
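+
+The following minimal Python sketch (illustrative only, not the billing implementation) approximates this counting model by counting UTF-16 code units, so surrogate pairs count as two:
+
+```python
+def translator_char_count(text: str) -> int:
+    # One UTF-16 code unit per character; characters outside the Basic Multilingual
+    # Plane are encoded as surrogate pairs and therefore count as two.
+    return len(text.encode("utf-16-le")) // 2
+
+print(translator_char_count("Hello"))       # 5
+print(translator_char_count("こんにちは"))  # 5, one per symbol
+print(translator_char_count("𠜎"))          # 2, a surrogate pair
+```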
+
+Calls to the **Detect** and **BreakSentence** methods aren't counted in the character consumption. However, we do expect calls to the Detect and BreakSentence methods to be reasonably proportionate to the use of other counted functions. If the number of Detect or BreakSentence calls exceeds the number of other counted methods by 100 times, Microsoft reserves the right to restrict your use of the Detect and BreakSentence methods.
+
+For detailed information regarding Azure AI Translator Service request limits, *see* [**Text translation request limits**](service-limits.md#text-translation).
+
+## Where can I see my monthly usage?
+
+The [Azure pricing calculator](https://azure.microsoft.com/pricing/calculator/) can be used to estimate your costs. You can also monitor, view, and add Azure alerts for your Azure services in your user account in the Azure portal:
+
+1. Sign in to the [Azure portal](https://portal.azure.com).
+1. Navigate to your Translator resource Overview page.
+1. Select the **subscription** for your Translator resource.
+
+ :::image type="content" source="media/azure-portal-overview.png" alt-text="Screenshot of the subscription link on overview page in the Azure portal.":::
+
+1. In the left rail, make your selection under **Cost Management**:
+
+ :::image type="content" source="media/azure-portal-cost-management.png" alt-text="Screenshot of the cost management resources links in the Azure portal.":::
+
+## Is attribution required when using Translator?
+
+Attribution isn't required when using Translator for text and speech translation. It's recommended that you inform users that the content they're viewing is machine translated.
+
+If attribution is present, it must conform to the [Translator attribution guidelines](https://www.microsoft.com/translator/business/attribution/).
+
+## Is Translator a replacement for human translator?
+
+No, both have their place as essential tools for communication. Use machine translation where the quantity of content, speed of creation, and budget constraints make it impossible to use human translation.
+
+Machine translation has been used as a first pass by several of our [language service provider (LSP)](https://www.microsoft.com/translator/business/partners/) partners, prior to using human translation and can improve productivity by up to 50 percent. For a list of LSP partners, visit the Translator partner page.
++
+> [!TIP]
+> If you can't find answers to your questions in this FAQ, try asking the Translator API community on [StackOverflow](https://stackoverflow.com/search?q=%5Bmicrosoft-cognitive%5D+or+%5Bmicrosoft-cognitive%5D+translator&s=34bf0ce2-b6b3-4355-86a6-d45a1121fe27).
ai-services Translator Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/translator/translator-overview.md
+
+ Title: What is the Microsoft Azure AI Translator?
+
+description: Integrate Translator into your applications, websites, tools, and other solutions to provide multi-language user experiences.
++++++ Last updated : 07/18/2023+++
+# What is Azure AI Translator?
+
+Translator Service is a cloud-based neural machine translation service that is part of the [Azure AI services](../what-are-ai-services.md) family and can be used with any operating system. Translator powers many Microsoft products and services used by thousands of businesses worldwide to perform language translation and other language-related operations. In this overview, you learn how Translator can enable you to build intelligent, multi-language solutions for your applications across all [supported languages](./language-support.md).
+
+Translator documentation contains the following article types:
+
+* [**Quickstarts**](quickstart-text-rest-api.md). Getting-started instructions to guide you through making requests to the service.
+* [**How-to guides**](translator-text-apis.md). Instructions for accessing and using the service in more specific or customized ways.
+* [**Reference articles**](reference/v3-0-reference.md). REST API documentation and programming language-based content.
+
+## Translator features and development options
+
+Translator service supports the following features. Use the links in this table to learn more about each feature and browse the API references.
+
+| Feature | Description | Development options |
+|-|-|--|
+| [**Text Translation**](text-translation-overview.md) | Execute text translation between supported source and target languages in real time. | <ul><li>[**REST API**](reference/rest-api-guide.md) </li><li>[Text translation Docker container](containers/translator-how-to-install-container.md).</li></ul> |
+| [**Document Translation**](document-translation/overview.md) | Translate batch and complex files while preserving the structure and format of the original documents. | <ul><li>[**REST API**](document-translation/reference/rest-api-guide.md)</li><li>[**Client-library SDK**](document-translation/quickstarts/document-translation-sdk.md)</li></ul> |
+| [**Custom Translator**](custom-translator/overview.md) | Build customized models to translate domain- and industry-specific language, terminology, and style. | <ul><li>[**Custom Translator portal**](https://portal.customtranslator.azure.ai/)</li></ul> |
+
+For detailed information regarding Azure AI Translator Service request limits, *see* [**Text translation request limits**](service-limits.md#text-translation).
+
+## Try the Translator service for free
+
+First, you need a Microsoft account; if you don't have one, you can sign up for free at the [**Microsoft account portal**](https://account.microsoft.com/account). Select **Create a Microsoft account** and follow the steps to create and verify your new account.
+
+Next, you need an Azure account: navigate to the [**Azure sign-up page**](https://azure.microsoft.com/free/ai/), select the **Start free** button, and create a new Azure account using your Microsoft account credentials.
+
+Now, you're ready to get started! [**Create a Translator service**](create-translator-resource.md "Go to the Azure portal."), [**get your access keys and API endpoint**](create-translator-resource.md#authentication-keys-and-endpoint-url "An endpoint URL and read-only key are required for authentication."), and try our [**quickstart**](quickstart-text-rest-api.md "Learn to use Translator via REST.").
+
+## Next steps
+
+* Learn more about the following features:
+ * [**Text Translation**](text-translation-overview.md)
+ * [**Document Translation**](document-translation/overview.md)
+ * [**Custom Translator**](custom-translator/overview.md)
+* Review [**Translator pricing**](https://azure.microsoft.com/pricing/details/cognitive-services/translator-text-api/).
ai-services Translator Text Apis https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/translator/translator-text-apis.md
+
+ Title: "Use Azure AI Translator APIs"
+
+description: "Learn to translate text, transliterate text, detect language and more with the Translator service. Examples are provided in C#, Java, JavaScript and Python."
++++++ Last updated : 07/18/2023+
+ms.devlang: csharp, golang, java, javascript, python
+
+keywords: translator, translator service, translate text, transliterate text, language detection
++
+<!-- markdownlint-disable MD033 -->
+<!-- markdownlint-disable MD001 -->
+<!-- markdownlint-disable MD024 -->
+
+# Use Azure AI Translator APIs
+
+In this how-to guide, you learn to use the [Translator service REST APIs](reference/rest-api-guide.md). You start with basic examples and move onto some core configuration options that are commonly used during development, including:
+
+* [Translation](#translate-text)
+* [Transliteration](#transliterate-text)
+* [Language identification/detection](#detect-language)
+* [Calculate sentence length](#get-sentence-length)
+* [Get alternate translations](#dictionary-lookup-alternate-translations) and [examples of word usage in a sentence](#dictionary-examples-translations-in-context)
+
+## Prerequisites
+
+* Azure subscription - [Create one for free](https://azure.microsoft.com/free/cognitive-services/)
+
+* An Azure AI multi-service or Translator resource. Once you have your Azure subscription, create a [single-service](https://portal.azure.com/#create/Microsoft.CognitiveServicesTextTranslation) or a [multi-service](https://portal.azure.com/#create/Microsoft.CognitiveServicesAllInOne) resource, in the Azure portal, to get your key and endpoint. After it deploys, select **Go to resource**.
+
+* You can use the free pricing tier (F0) to try the service, and upgrade later to a paid tier for production.
+
+* You need the key and endpoint from the resource to connect your application to the Translator service. Later, you paste your key and endpoint into the code samples. You can find these values on the Azure portal **Keys and Endpoint** page:
+
+ :::image type="content" source="media/keys-and-endpoint-portal.png" alt-text="Screenshot: Azure portal keys and endpoint page.":::
+
+> [!IMPORTANT]
+> Remember to remove the key from your code when you're done, and never post it publicly. For production, use a secure way of storing and accessing your credentials like [Azure Key Vault](../../key-vault/general/overview.md). For more information, *see* the Azure AI services [security](../security-features.md).
+
+## Headers
+
+To call the Translator service via the [REST API](reference/rest-api-guide.md), you need to make sure the following headers are included with each request. Don't worry; we include the headers in the sample code in the following sections.
+
+|Header|Value| Condition |
+| |: |:|
+|**Ocp-Apim-Subscription-Key** |Your Translator service key from the Azure portal.|<ul><li>***Required***</li></ul> |
+|**Ocp-Apim-Subscription-Region**|The region where your resource was created. |<ul><li>***Required*** when using an Azure AI multi-service or regional (geographic) resource like **West US**.</li><li> ***Optional*** when using a single-service Translator Resource.</li></ul>|
+|**Content-Type**|The content type of the payload. The accepted value is **application/json** (optionally with **charset=UTF-8**).|<ul><li>***Required***</li></ul>|
+|**Content-Length**|The **length of the request** body.|<ul><li>***Optional***</li></ul> |
+|**X-ClientTraceId**|A client-generated GUID to uniquely identify the request. You can omit this header if you include the trace ID in the query string using a query parameter named ClientTraceId.|<ul><li>***Optional***</li></ul>
+
+## Set up your application
+
+### [C#](#tab/csharp)
+
+1. Make sure you have the current version of [Visual Studio IDE](https://visualstudio.microsoft.com/vs/).
+
+ > [!TIP]
+ >
+ > If you're new to Visual Studio, try the [Introduction to Visual Studio](/training/modules/go-get-started/) Learn module.
+
+1. Open Visual Studio.
+
+1. On the Start page, choose **Create a new project**.
+
+ :::image type="content" source="media/quickstarts/start-window.png" alt-text="Screenshot: Visual Studio start window.":::
+
+1. On the **Create a new project page**, enter **console** in the search box. Choose the **Console Application** template, then choose **Next**.
+
+ :::image type="content" source="media/quickstarts/create-new-project.png" alt-text="Screenshot: Visual Studio's create new project page.":::
+
+1. In the **Configure your new project** dialog window, enter `translator_text_app` in the Project name box. Leave the "Place solution and project in the same directory" checkbox **unchecked** and select **Next**.
+
+ :::image type="content" source="media/how-to-guides/configure-your-console-app.png" alt-text="Screenshot: Visual Studio's configure new project dialog window.":::
+
+1. In the **Additional information** dialog window, make sure **.NET 6.0 (Long-term support)** is selected. Leave the "Don't use top-level statements" checkbox **unchecked** and select **Create**.
+
+ :::image type="content" source="media/quickstarts/additional-information.png" alt-text="Screenshot: Visual Studio's additional information dialog window.":::
+
+### Install the Newtonsoft.json package with NuGet
+
+1. Right-click on your translator_text_app project and select **Manage NuGet Packages...**.
+
+ :::image type="content" source="media/how-to-guides/manage-nuget.png" alt-text="Screenshot of the NuGet package search box.":::
+
+1. Select the Browse tab and type Newtonsoft.
+
+ :::image type="content" source="media/quickstarts/newtonsoft.png" alt-text="Screenshot of the NuGet package install window.":::
+
+1. Select **Install** from the right package manager window to add the package to your project.
+
+ :::image type="content" source="media/how-to-guides/install-newtonsoft.png" alt-text="Screenshot of the NuGet package install button.":::
+
+### Build your application
+
+> [!NOTE]
+>
+> * Starting with .NET 6, new projects using the `console` template generate a new program style that differs from previous versions.
+> * The new output uses recent C# features that simplify the code you need to write.
+> * When you use the newer version, you only need to write the body of the `Main` method. You don't need to include top-level statements, global using directives, or implicit using directives.
+> * For more information, *see* [**New C# templates generate top-level statements**](/dotnet/core/tutorials/top-level-templates).
+
+1. Open the **Program.cs** file.
+
+1. Delete the pre-existing code, including the line `Console.WriteLine("Hello World!")`. Copy and paste the code samples into your application's Program.cs file. For each code sample, make sure you update the key and endpoint variables with values from your Azure portal Translator instance.
+
+1. Once you've added a desired code sample to your application, choose the green **start button** next to translator_text_app to build and run your program, or press **F5**.
++
+### [Go](#tab/go)
+
+You can use any text editor to write Go applications. We recommend using the latest version of [Visual Studio Code and the Go extension](/azure/developer/go/configure-visual-studio-code).
+
+> [!TIP]
+>
+> If you're new to Go, try the [Get started with Go](/training/modules/go-get-started/) Learn module.
+
+1. If you haven't done so already, [download and install Go](https://go.dev/doc/install).
+
+ * Download the Go version for your operating system.
+ * Once the download is complete, run the installer.
+ * Open a command prompt and enter the following to confirm Go was installed:
+
+ ```console
+ go version
+ ```
+
+1. In a console window (such as cmd, PowerShell, or Bash), create a new directory for your app called **translator-text-app**, and navigate to it.
+
+1. Create a new Go file named **text-translator.go** in the **translator-text-app** directory.
+
+1. Copy and paste the code samples into your **text-translator.go** file. Make sure you update the key variable with the value from your Azure portal Translator instance.
+
+1. Once you've added a code sample to your application, your Go program can be executed in a command or terminal prompt. Make sure your prompt's path is set to the **translator-text-app** folder and use the following command:
+
+ ```console
+    go run text-translator.go
+ ```
+
+### [Java](#tab/java)
+
+* You should have the latest version of [Visual Studio Code](https://code.visualstudio.com/) or your preferred IDE. *See* [Java in Visual Studio Code](https://code.visualstudio.com/docs/languages/java).
+
+ >[!TIP]
+ >
+ > * Visual Studio Code offers a **Coding Pack for Java** for Windows and macOS.The coding pack is a bundle of VS Code, the Java Development Kit (JDK), and a collection of suggested extensions by Microsoft. The Coding Pack can also be used to fix an existing development environment.
+ > * If you are using VS Code and the Coding Pack For Java, install the [**Gradle for Java**](https://marketplace.visualstudio.com/items?itemName=vscjava.vscode-gradle) extension.
+
+* If you aren't using Visual Studio Code, make sure you have the following installed in your development environment:
+
+ * A [**Java Development Kit** (OpenJDK)](/java/openjdk/download#openjdk-17) version 8 or later.
+
+ * [**Gradle**](https://docs.gradle.org/current/userguide/installation.html), version 6.8 or later.
+
+### Create a new Gradle project
+
+1. In a console window (such as cmd, PowerShell, or Bash), create a new directory for your app called **translator-text-app**, and navigate to it.
+
+ ```console
+    mkdir translator-text-app && cd translator-text-app
+ ```
+
+ ```powershell
+ mkdir translator-text-app; cd translator-text-app
+ ```
+
+1. Run the `gradle init` command from the translator-text-app directory. This command creates essential build files for Gradle, including *build.gradle.kts*, which is used at runtime to create and configure your application.
+
+ ```console
+ gradle init --type basic
+ ```
+
+1. When prompted to choose a **DSL**, select **Kotlin**.
+
+1. Accept the default project name (translator-text-app) by selecting **Return** or **Enter**.
+
+ > [!NOTE]
+    > It may take a few minutes for the entire application to be created, but soon you should see several folders and files including `build.gradle.kts`.
+
+1. Update `build.gradle.kts` with the following code:
+
+ ```kotlin
+ plugins {
+ java
+ application
+ }
+ application {
+ mainClass.set("TextTranslator")
+ }
+ repositories {
+ mavenCentral()
+ }
+ dependencies {
+ implementation("com.squareup.okhttp3:okhttp:4.10.0")
+ implementation("com.google.code.gson:gson:2.9.0")
+ }
+ ```
+
+### Create a Java Application
+
+1. From the translator-text-app directory, run the following command:
+
+ ```console
+ mkdir -p src/main/java
+ ```
+
+ You create the following directory structure:
+
+ :::image type="content" source="media/quickstarts/java-directories-2.png" alt-text="Screenshot: Java directory structure.":::
+
+1. Navigate to the `java` directory and create a file named **`TranslatorText.java`**.
+
+ > [!TIP]
+ >
+ > * You can create a new file using PowerShell.
+ > * Open a PowerShell window in your project directory by holding down the Shift key and right-clicking the folder.
+ > * Type the following command **New-Item TranslatorText.java**.
+ >
+ > * You can also create a new file in your IDE named `TranslatorText.java` and save it to the `java` directory.
+
+1. Copy and paste the code samples into your `TranslatorText.java` file. **Make sure you update the key with one of the key values from your Azure portal Translator instance**.
+
+1. Once you've added a code sample to your application, navigate back to your main project directory, **translator-text-app**, open a console window, and enter the following commands:
+
+ 1. Build your application with the `build` command:
+
+ ```console
+ gradle build
+ ```
+
+ 1. Run your application with the `run` command:
+
+ ```console
+ gradle run
+ ```
+
+### [Node.js](#tab/nodejs)
+
+1. If you haven't done so already, install the latest version of [Node.js](https://nodejs.org/en/download/). Node Package Manager (npm) is included with the Node.js installation.
+
+ > [!TIP]
+ >
+ > If you're new to Node.js, try the [Introduction to Node.js](/training/modules/intro-to-nodejs/) Learn module.
+
+1. In a console window (such as cmd, PowerShell, or Bash), create and navigate to a new directory for your app named `translator-text-app`.
+
+ ```console
+ mkdir translator-text-app && cd translator-text-app
+ ```
+
+ ```powershell
+ mkdir translator-text-app; cd translator-text-app
+ ```
+
+1. Run the npm init command to initialize the application and scaffold your project.
+
+ ```console
+ npm init
+ ```
+
+1. Specify your project's attributes using the prompts presented in the terminal.
+
+ * The most important attributes are name, version number, and entry point.
+    * We recommend keeping `index.js` for the entry point name. The description, test command, GitHub repository, keywords, author, and license information are optional attributes; they can be skipped for this project.
+ * Accept the suggestions in parentheses by selecting **Return** or **Enter**.
+ * After you've completed the prompts, a `package.json` file will be created in your translator-text-app directory.
+
+1. Open a console window and use npm to install the `axios` HTTP library and `uuid` package:
+
+ ```console
+ npm install axios uuid
+ ```
+
+1. Create the `index.js` file in the application directory.
+
+ > [!TIP]
+ >
+ > * You can create a new file using PowerShell.
+ > * Open a PowerShell window in your project directory by holding down the Shift key and right-clicking the folder.
+ > * Type the following command **New-Item index.js**.
+ >
+ > * You can also create a new file named `index.js` in your IDE and save it to the `translator-text-app` directory.
+
+1. Copy and paste the code samples into your `index.js` file. **Make sure you update the key variable with the value from your Azure portal Translator instance**.
+
+1. Once you've added the code sample to your application, run your program:
+
+ 1. Navigate to your application directory (translator-text-app).
+
+ 1. Type the following command in your terminal:
+
+ ```console
+ node index.js
+ ```
+
+### [Python](#tab/python)
+
+1. If you haven't done so already, install the latest version of [Python 3.x](https://www.python.org/downloads/). The Python installer package (pip) is included with the Python installation.
+
+ > [!TIP]
+ >
+ > If you're new to Python, try the [Introduction to Python](/training/paths/beginner-python/) Learn module.
+
+1. Open a terminal window and use pip to install the Requests library and uuid package:
+
+ ```console
+ pip install requests uuid
+ ```
+
+ > [!NOTE]
+ > We will also use a Python built-in package called json. It's used to work with JSON data.
+
+1. Create a new Python file called **text-translator.py** in your preferred editor or IDE.
+
+1. Add the following code sample to your `text-translator.py` file. **Make sure you update the key with one of the values from your Azure portal Translator instance**.
+
+1. Once you've added a desired code sample to your application, build and run your program:
+
+ 1. Navigate to your **text-translator.py** file.
+
+ 1. Type the following command in your console:
+
+ ```console
+ python text-translator.py
+ ```
+++
+> [!IMPORTANT]
+> The samples in this guide require hard-coded keys and endpoints.
+> Remember to **remove the key from your code when you're done**, and **never post it publicly**.
+> For production, consider using a secure way of storing and accessing your credentials. For more information, *see* [Azure AI services security](../security-features.md).
+
+## Translate text
+
+The core operation of the Translator service is to translate text. In this section, you build a request that takes a single source (`from`) and provides two outputs (`to`). Then we review some parameters that can be used to adjust both the request and the response.
+
+### [C#](#tab/csharp)
+
+```csharp
+using System.Text;
+using Newtonsoft.Json; // Install Newtonsoft.Json with NuGet
+
+class Program
+{
+ private static readonly string key = "<YOUR-TRANSLATOR-KEY>";
+ private static readonly string endpoint = "https://api.cognitive.microsofttranslator.com";
+
+ // location, also known as region.
+ // required if you're using a multi-service or regional (not global) resource. It can be found in the Azure portal on the Keys and Endpoint page.
+ private static readonly string location = "<YOUR-RESOURCE-LOCATION>";
+
+ static async Task Main(string[] args)
+ {
+ // Input and output languages are defined as parameters.
+ string route = "/translate?api-version=3.0&from=en&to=sw&to=it";
+ string textToTranslate = "Hello, friend! What did you do today?";
+ object[] body = new object[] { new { Text = textToTranslate } };
+ var requestBody = JsonConvert.SerializeObject(body);
+
+ using (var client = new HttpClient())
+ using (var request = new HttpRequestMessage())
+ {
+ // Build the request.
+ request.Method = HttpMethod.Post;
+ request.RequestUri = new Uri(endpoint + route);
+ request.Content = new StringContent(requestBody, Encoding.UTF8, "application/json");
+ request.Headers.Add("Ocp-Apim-Subscription-Key", key);
+ // location required if you're using a multi-service or regional (not global) resource.
+ request.Headers.Add("Ocp-Apim-Subscription-Region", location);
+
+ // Send the request and get response.
+ HttpResponseMessage response = await client.SendAsync(request).ConfigureAwait(false);
+ // Read response as a string.
+ string result = await response.Content.ReadAsStringAsync();
+ Console.WriteLine(result);
+ }
+ }
+}
+```
+
+### [Go](#tab/go)
+
+```go
+package main
+
+import (
+ "bytes"
+ "encoding/json"
+ "fmt"
+ "log"
+ "net/http"
+ "net/url"
+)
+
+func main() {
+ key := "<YOUR-TRANSLATOR-KEY>"
+ endpoint := "https://api.cognitive.microsofttranslator.com/"
+ uri := endpoint + "/translate?api-version=3.0"
+
+ // location, also known as region.
+ // required if you're using a multi-service or regional (not global) resource. It can be found in the Azure portal on the Keys and Endpoint page.
+ location := "<YOUR-RESOURCE-LOCATION>"
+
+ // Build the request URL. See: https://go.dev/pkg/net/url/#example_URL_Parse
+ u, _ := url.Parse(uri)
+ q := u.Query()
+ q.Add("from", "en")
+ q.Add("to", "it")
+ q.Add("to", "sw")
+ u.RawQuery = q.Encode()
+
+ // Create an anonymous struct for your request body and encode it to JSON
+ body := []struct {
+ Text string
+ }{
+ {Text: "Hello friend! What did you do today?"},
+ }
+ b, _ := json.Marshal(body)
+
+ // Build the HTTP POST request
+ req, err := http.NewRequest("POST", u.String(), bytes.NewBuffer(b))
+ if err != nil {
+ log.Fatal(err)
+ }
+ // Add required headers to the request
+ req.Header.Add("Ocp-Apim-Subscription-Key", key)
+ // location required if you're using a multi-service or regional (not global) resource.
+ req.Header.Add("Ocp-Apim-Subscription-Region", location)
+ req.Header.Add("Content-Type", "application/json")
+
+ // Call the Translator API
+ res, err := http.DefaultClient.Do(req)
+ if err != nil {
+ log.Fatal(err)
+ }
+
+ // Decode the JSON response
+ var result interface{}
+ if err := json.NewDecoder(res.Body).Decode(&result); err != nil {
+ log.Fatal(err)
+ }
+ // Format and print the response to terminal
+ prettyJSON, _ := json.MarshalIndent(result, "", " ")
+ fmt.Printf("%s\n", prettyJSON)
+}
+```
+
+### [Java](#tab/java)
+
+```java
+import java.io.IOException;
+
+import com.google.gson.*;
+import okhttp3.MediaType;
+import okhttp3.OkHttpClient;
+import okhttp3.Request;
+import okhttp3.RequestBody;
+import okhttp3.Response;
+
+public class TranslatorText {
+ private static String key = "<YOUR-TRANSLATOR-KEY>";
+ public String endpoint = "https://api.cognitive.microsofttranslator.com";
+ public String route = "/translate?api-version=3.0&from=en&to=sw&to=it";
+ public String url = endpoint.concat(route);
+
+ // location, also known as region.
+ // required if you're using a multi-service or regional (not global) resource. It can be found in the Azure portal on the Keys and Endpoint page.
+ private static String location = "<YOUR-RESOURCE-LOCATION>";
+
+ // Instantiates the OkHttpClient.
+ OkHttpClient client = new OkHttpClient();
+
+ // This function performs a POST request.
+ public String Post() throws IOException {
+ MediaType mediaType = MediaType.parse("application/json");
+ RequestBody body = RequestBody.create(mediaType,
+ "[{\"Text\": \"Hello, friend! What did you do today?\"}]");
+ Request request = new Request.Builder()
+ .url(url)
+ .post(body)
+ .addHeader("Ocp-Apim-Subscription-Key", key)
+ // location required if you're using a multi-service or regional (not global) resource.
+ .addHeader("Ocp-Apim-Subscription-Region", location)
+ .addHeader("Content-type", "application/json")
+ .build();
+ Response response = client.newCall(request).execute();
+ return response.body().string();
+ }
+
+ // This function prettifies the json response.
+ public static String prettify(String json_text) {
+ JsonParser parser = new JsonParser();
+ JsonElement json = parser.parse(json_text);
+ Gson gson = new GsonBuilder().setPrettyPrinting().create();
+ return gson.toJson(json);
+ }
+
+ public static void main(String[] args) {
+ try {
+ TranslatorText translateRequest = new TranslatorText();
+ String response = translateRequest.Post();
+ System.out.println(prettify(response));
+ } catch (Exception e) {
+ System.out.println(e);
+ }
+ }
+}
+```
+
+### [Node.js](#tab/nodejs)
+
+```javascript
+const axios = require('axios').default;
+const { v4: uuidv4 } = require('uuid');
+
+let key = "<YOUR-TRANSLATOR-KEY>";
+let endpoint = "https://api.cognitive.microsofttranslator.com";
+
+// location, also known as region.
+// required if you're using a multi-service or regional (not global) resource. It can be found in the Azure portal on the Keys and Endpoint page.
+let location = "<YOUR-RESOURCE-LOCATION>";
+
+axios({
+ baseURL: endpoint,
+ url: '/translate',
+ method: 'post',
+ headers: {
+ 'Ocp-Apim-Subscription-Key': key,
+ // location required if you're using a multi-service or regional (not global) resource.
+ 'Ocp-Apim-Subscription-Region': location,
+ 'Content-type': 'application/json',
+ 'X-ClientTraceId': uuidv4().toString()
+ },
+ params: {
+ 'api-version': '3.0',
+ 'from': 'en',
+ 'to': ['sw', 'it']
+ },
+ data: [{
+ 'text': 'Hello, friend! What did you do today?'
+ }],
+ responseType: 'json'
+}).then(function(response){
+ console.log(JSON.stringify(response.data, null, 4));
+})
+```
+
+### [Python](#tab/python)
+
+```python
+import requests, uuid, json
+
+# Add your key and endpoint
+key = "<YOUR-TRANSLATOR-KEY>"
+endpoint = "https://api.cognitive.microsofttranslator.com"
+
+# location, also known as region.
+# required if you're using a multi-service or regional (not global) resource. It can be found in the Azure portal on the Keys and Endpoint page.
+location = "<YOUR-RESOURCE-LOCATION>"
+
+path = '/translate'
+constructed_url = endpoint + path
+
+params = {
+ 'api-version': '3.0',
+ 'from': 'en',
+ 'to': ['sw', 'it']
+}
+
+headers = {
+ 'Ocp-Apim-Subscription-Key': key,
+ # location required if you're using a multi-service or regional (not global) resource.
+ 'Ocp-Apim-Subscription-Region': location,
+ 'Content-type': 'application/json',
+ 'X-ClientTraceId': str(uuid.uuid4())
+}
+
+# You can pass more than one object in body.
+body = [{
+ 'text': 'Hello, friend! What did you do today?'
+}]
+
+request = requests.post(constructed_url, params=params, headers=headers, json=body)
+response = request.json()
+
+print(json.dumps(response, sort_keys=True, ensure_ascii=False, indent=4, separators=(',', ': ')))
+```
+++
+After a successful call, you should see the following response:
+
+```json
+[
+ {
+ "translations":[
+ {
+ "text":"Halo, rafiki! Ulifanya nini leo?",
+ "to":"sw"
+ },
+ {
+ "text":"Ciao, amico! Cosa hai fatto oggi?",
+ "to":"it"
+ }
+ ]
+ }
+]
+```
+
+You can check the consumption (the number of characters charged) for each request in the [**response headers: x-metered-usage**](reference/v3-0-translate.md#response-headers) field.
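+
+If you want to read that header programmatically, here's a minimal sketch based on the Python sample above. The header name comes from the linked reference; whether it's returned can vary by resource type and API version.
+
+```python
+import requests, uuid
+
+# Same placeholders as in the samples above.
+key = "<YOUR-TRANSLATOR-KEY>"
+endpoint = "https://api.cognitive.microsofttranslator.com"
+location = "<YOUR-RESOURCE-LOCATION>"
+
+headers = {
+    'Ocp-Apim-Subscription-Key': key,
+    'Ocp-Apim-Subscription-Region': location,
+    'Content-type': 'application/json',
+    'X-ClientTraceId': str(uuid.uuid4())
+}
+params = {'api-version': '3.0', 'from': 'en', 'to': ['sw', 'it']}
+body = [{'text': 'Hello, friend! What did you do today?'}]
+
+request = requests.post(endpoint + '/translate', params=params, headers=headers, json=body)
+
+# Header lookup is case-insensitive in the requests library.
+print(request.headers.get('x-metered-usage'))
+```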
+
+## Detect language
+
+If you need translation but don't know the language of the source text, you can use the language detection operation. There's more than one way to identify the source language. In this section, you learn how to detect the source language by using the `translate` endpoint and the `detect` endpoint.
+
+### Detect source language during translation
+
+If you don't include the `from` parameter in your translation request, the Translator service attempts to detect the source text's language. In the response, you get the detected language (`language`) and a confidence score (`score`). The closer the `score` is to `1.0`, the more confident the service is that the detection is correct.
+
+### [C#](#tab/csharp)
+
+```csharp
+using System;
+using System.Net.Http;
+using System.Text;
+using System.Threading.Tasks;
+using Newtonsoft.Json; // Install Newtonsoft.Json with NuGet
+
+class Program
+{
+ private static readonly string key = "<YOUR-TRANSLATOR-KEY>";
+ private static readonly string endpoint = "https://api.cognitive.microsofttranslator.com";
+
+ // location, also known as region.
+    // required if you're using a multi-service or regional (not global) resource. It can be found in the Azure portal on the Keys and Endpoint page.
+ private static readonly string location = "<YOUR-RESOURCE-LOCATION>";
+
+ static async Task Main(string[] args)
+ {
+ // Output languages are defined as parameters, input language detected.
+ string route = "/translate?api-version=3.0&to=en&to=it";
+ string textToTranslate = "Halo, rafiki! Ulifanya nini leo?";
+ object[] body = new object[] { new { Text = textToTranslate } };
+ var requestBody = JsonConvert.SerializeObject(body);
+
+ using (var client = new HttpClient())
+ using (var request = new HttpRequestMessage())
+ {
+ // Build the request.
+ request.Method = HttpMethod.Post;
+ request.RequestUri = new Uri(endpoint + route);
+ request.Content = new StringContent(requestBody, Encoding.UTF8, "application/json");
+ // location required if you're using a multi-service or regional (not global) resource.
+ request.Headers.Add("Ocp-Apim-Subscription-Key", key);
+ request.Headers.Add("Ocp-Apim-Subscription-Region", location);
+
+ // Send the request and get response.
+ HttpResponseMessage response = await client.SendAsync(request).ConfigureAwait(false);
+ // Read response as a string.
+ string result = await response.Content.ReadAsStringAsync();
+ Console.WriteLine(result);
+ }
+ }
+}
+```
+
+### [Go](#tab/go)
+
+```go
+package main
+
+import (
+ "bytes"
+ "encoding/json"
+ "fmt"
+ "log"
+ "net/http"
+ "net/url"
+)
+
+func main() {
+ key := "<YOUR-TRANSLATOR-KEY>"
+ // location, also known as region.
+ // required if you're using a multi-service or regional (not global) resource. It can be found in the Azure portal on the Keys and Endpoint page.
+
+ location := "<YOUR-RESOURCE-LOCATION>"
+ endpoint := "https://api.cognitive.microsofttranslator.com/"
+ uri := endpoint + "/translate?api-version=3.0"
+
+ // Build the request URL. See: https://go.dev/pkg/net/url/#example_URL_Parse
+ u, _ := url.Parse(uri)
+ q := u.Query()
+ q.Add("to", "en")
+ q.Add("to", "it")
+ u.RawQuery = q.Encode()
+
+ // Create an anonymous struct for your request body and encode it to JSON
+ body := []struct {
+ Text string
+ }{
+ {Text: "Halo rafiki! Ulifanya nini leo?"},
+ }
+ b, _ := json.Marshal(body)
+
+ // Build the HTTP POST request
+ req, err := http.NewRequest("POST", u.String(), bytes.NewBuffer(b))
+ if err != nil {
+ log.Fatal(err)
+ }
+ // Add required headers to the request
+ req.Header.Add("Ocp-Apim-Subscription-Key", key)
+ // location required if you're using a multi-service or regional (not global) resource.
+ req.Header.Add("Ocp-Apim-Subscription-Region", location)
+ req.Header.Add("Content-Type", "application/json")
+
+ // Call the Translator API
+ res, err := http.DefaultClient.Do(req)
+ if err != nil {
+ log.Fatal(err)
+ }
+
+ // Decode the JSON response
+ var result interface{}
+ if err := json.NewDecoder(res.Body).Decode(&result); err != nil {
+ log.Fatal(err)
+ }
+ // Format and print the response to terminal
+ prettyJSON, _ := json.MarshalIndent(result, "", " ")
+ fmt.Printf("%s\n", prettyJSON)
+}
+```
+
+### [Java](#tab/java)
+
+```java
+import java.io.IOException;
+
+import com.google.gson.*;
+import okhttp3.MediaType;
+import okhttp3.OkHttpClient;
+import okhttp3.Request;
+import okhttp3.RequestBody;
+import okhttp3.Response;
+
+public class TranslatorText {
+ private static String key = "<YOUR-TRANSLATOR-KEY>";
+ public String endpoint = "https://api.cognitive.microsofttranslator.com";
+ public String route = "/translate?api-version=3.0&to=en&to=it";
+ public String url = endpoint.concat(route);
+
+ // location, also known as region.
+    // required if you're using a multi-service or regional (not global) resource. It can be found in the Azure portal on the Keys and Endpoint page.
+ private static String location = "<YOUR-RESOURCE-LOCATION>";
+
+ // Instantiates the OkHttpClient.
+ OkHttpClient client = new OkHttpClient();
+
+ // This function performs a POST request.
+ public String Post() throws IOException {
+ MediaType mediaType = MediaType.parse("application/json");
+ RequestBody body = RequestBody.create(mediaType,
+ "[{\"Text\": \"Halo, rafiki! Ulifanya nini leo?\"}]");
+ Request request = new Request.Builder()
+ .url(url)
+ .post(body)
+ .addHeader("Ocp-Apim-Subscription-Key", key)
+ // location required if you're using a multi-service or regional (not global) resource.
+ .addHeader("Ocp-Apim-Subscription-Region", location)
+ .addHeader("Content-type", "application/json")
+ .build();
+ Response response = client.newCall(request).execute();
+ return response.body().string();
+ }
+
+ // This function prettifies the json response.
+ public static String prettify(String json_text) {
+ JsonParser parser = new JsonParser();
+ JsonElement json = parser.parse(json_text);
+ Gson gson = new GsonBuilder().setPrettyPrinting().create();
+ return gson.toJson(json);
+ }
+
+ public static void main(String[] args) {
+ try {
+ TranslatorText translateRequest = new TranslatorText();
+ String response = translateRequest.Post();
+ System.out.println(prettify(response));
+ } catch (Exception e) {
+ System.out.println(e);
+ }
+ }
+}
+```
+
+### [Node.js](#tab/nodejs)
+
+```javascript
+const axios = require('axios').default;
+const { v4: uuidv4 } = require('uuid');
+
+let key = "<YOUR-TRANSLATOR-KEY>";
+let endpoint = "https://api.cognitive.microsofttranslator.com";
+
+// Add your location, also known as region. The default is global.
+// This is required if using an Azure AI multi-service resource.
+let location = "<YOUR-RESOURCE-LOCATION>";
+
+axios({
+ baseURL: endpoint,
+ url: '/translate',
+ method: 'post',
+ headers: {
+ 'Ocp-Apim-Subscription-Key': key,
+ // location required if you're using a multi-service or regional (not global) resource.
+ 'Ocp-Apim-Subscription-Region': location,
+ 'Content-type': 'application/json',
+ 'X-ClientTraceId': uuidv4().toString()
+ },
+ params: {
+ 'api-version': '3.0',
+ 'to': ['en', 'it']
+ },
+ data: [{
+ 'text': 'Halo, rafiki! Ulifanya nini leo?'
+ }],
+ responseType: 'json'
+}).then(function(response){
+ console.log(JSON.stringify(response.data, null, 4));
+})
+```
+
+### [Python](#tab/python)
+
+```python
+import requests, uuid, json
+
+# Add your key and endpoint
+key = "<YOUR-TRANSLATOR-KEY>"
+endpoint = "https://api.cognitive.microsofttranslator.com"
+
+# location, also known as region.
+# required if you're using a multi-service or regional (not global) resource. It can be found in the Azure portal on the Keys and Endpoint page.
+
+location = "<YOUR-RESOURCE-LOCATION>"
+
+path = '/translate'
+constructed_url = endpoint + path
+
+params = {
+ 'api-version': '3.0',
+ 'to': ['en', 'it']
+}
+
+headers = {
+ 'Ocp-Apim-Subscription-Key': key,
+ # location required if you're using a multi-service or regional (not global) resource.
+ 'Ocp-Apim-Subscription-Region': location,
+ 'Content-type': 'application/json',
+ 'X-ClientTraceId': str(uuid.uuid4())
+}
+
+# You can pass more than one object in body.
+body = [{
+ 'text': 'Halo, rafiki! Ulifanya nini leo?'
+}]
+
+request = requests.post(constructed_url, params=params, headers=headers, json=body)
+response = request.json()
+
+print(json.dumps(response, sort_keys=True, ensure_ascii=False, indent=4, separators=(',', ': ')))
+```
+++
+After a successful call, you should see the following response:
+
+```json
+[
+ {
+ "detectedLanguage":{
+ "language":"sw",
+ "score":0.8
+ },
+ "translations":[
+ {
+ "text":"Hello friend! What did you do today?",
+ "to":"en"
+ },
+ {
+ "text":"Ciao amico! Cosa hai fatto oggi?",
+ "to":"it"
+ }
+ ]
+ }
+]
+```
+
+### Detect source language without translation
+
+It's possible to use the Translator service to detect the language of source text without performing a translation. To do so, you use the [`/detect`](./reference/v3-0-detect.md) endpoint.
+
+### [C#](#tab/csharp)
+
+```csharp
+using System;
+using System.Net.Http;
+using System.Text;
+using System.Threading.Tasks;
+using Newtonsoft.Json; // Install Newtonsoft.Json with NuGet
+
+class Program
+{
+ private static readonly string key = "<YOUR-TRANSLATOR-KEY>";
+ private static readonly string endpoint = "https://api.cognitive.microsofttranslator.com";
+
+ // location, also known as region.
+    // required if you're using a multi-service or regional (not global) resource. It can be found in the Azure portal on the Keys and Endpoint page.
+ private static readonly string location = "<YOUR-RESOURCE-LOCATION>";
+
+ static async Task Main(string[] args)
+ {
+ // Just detect language
+ string route = "/detect?api-version=3.0";
+ string textToLangDetect = "Hallo Freund! Was hast du heute gemacht?";
+ object[] body = new object[] { new { Text = textToLangDetect } };
+ var requestBody = JsonConvert.SerializeObject(body);
+
+ using (var client = new HttpClient())
+ using (var request = new HttpRequestMessage())
+ {
+ // Build the request.
+ request.Method = HttpMethod.Post;
+ request.RequestUri = new Uri(endpoint + route);
+ request.Content = new StringContent(requestBody, Encoding.UTF8, "application/json");
+ request.Headers.Add("Ocp-Apim-Subscription-Key", key);
+ request.Headers.Add("Ocp-Apim-Subscription-Region", location);
+
+ // Send the request and get response.
+ HttpResponseMessage response = await client.SendAsync(request).ConfigureAwait(false);
+ // Read response as a string.
+ string result = await response.Content.ReadAsStringAsync();
+ Console.WriteLine(result);
+ }
+ }
+}
+```
+
+### [Go](#tab/go)
+
+```go
+package main
+
+import (
+ "bytes"
+ "encoding/json"
+ "fmt"
+ "log"
+ "net/http"
+ "net/url"
+)
+
+func main() {
+ key := "<YOUR-TRANSLATOR-KEY>"
+ // location, also known as region.
+    // required if you're using a multi-service or regional (not global) resource. It can be found in the Azure portal on the Keys and Endpoint page.
+ location := "<YOUR-RESOURCE-LOCATION>"
+
+ endpoint := "https://api.cognitive.microsofttranslator.com/"
+ uri := endpoint + "/detect?api-version=3.0"
+
+ // Build the request URL. See: https://go.dev/pkg/net/url/#example_URL_Parse
+ u, _ := url.Parse(uri)
+ q := u.Query()
+ u.RawQuery = q.Encode()
+
+ // Create an anonymous struct for your request body and encode it to JSON
+ body := []struct {
+ Text string
+ }{
+ {Text: "Ciao amico! Cosa hai fatto oggi?"},
+ }
+ b, _ := json.Marshal(body)
+
+ // Build the HTTP POST request
+ req, err := http.NewRequest("POST", u.String(), bytes.NewBuffer(b))
+ if err != nil {
+ log.Fatal(err)
+ }
+ // Add required headers to the request
+ req.Header.Add("Ocp-Apim-Subscription-Key", key)
+ // location required if you're using a multi-service or regional (not global) resource.
+ req.Header.Add("Ocp-Apim-Subscription-Region", location)
+ req.Header.Add("Content-Type", "application/json")
+
+ // Call the Translator API
+ res, err := http.DefaultClient.Do(req)
+ if err != nil {
+ log.Fatal(err)
+ }
+
+ // Decode the JSON response
+ var result interface{}
+ if err := json.NewDecoder(res.Body).Decode(&result); err != nil {
+ log.Fatal(err)
+ }
+ // Format and print the response to terminal
+ prettyJSON, _ := json.MarshalIndent(result, "", " ")
+ fmt.Printf("%s\n", prettyJSON)
+}
+```
+
+### [Java](#tab/java)
+
+```java
+import java.io.IOException;
+
+import com.google.gson.*;
+import okhttp3.MediaType;
+import okhttp3.OkHttpClient;
+import okhttp3.Request;
+import okhttp3.RequestBody;
+import okhttp3.Response;
+
+public class TranslatorText {
+ private static String key = "<YOUR-TRANSLATOR-KEY>";
+ public String endpoint = "https://api.cognitive.microsofttranslator.com";
+ public String route = "/detect?api-version=3.0";
+ public String url = endpoint.concat(route);
+
+ // location, also known as region.
+ // required if you're using a multi-service or regional (not global) resource. It can be found in the Azure portal on the Keys and Endpoint page.
+ private static String location = "<YOUR-RESOURCE-LOCATION>";
+
+ // Instantiates the OkHttpClient.
+ OkHttpClient client = new OkHttpClient();
+
+ // This function performs a POST request.
+ public String Post() throws IOException {
+ MediaType mediaType = MediaType.parse("application/json");
+ RequestBody body = RequestBody.create(mediaType,
+ "[{\"Text\": \"Hallo Freund! Was hast du heute gemacht?\"}]");
+ Request request = new Request.Builder()
+ .url(url)
+ .post(body)
+ .addHeader("Ocp-Apim-Subscription-Key", key)
+ // location required if you're using a multi-service or regional (not global) resource.
+ .addHeader("Ocp-Apim-Subscription-Region", location)
+ .addHeader("Content-type", "application/json")
+ .build();
+ Response response = client.newCall(request).execute();
+ return response.body().string();
+ }
+
+ // This function prettifies the json response.
+ public static String prettify(String json_text) {
+ JsonParser parser = new JsonParser();
+ JsonElement json = parser.parse(json_text);
+ Gson gson = new GsonBuilder().setPrettyPrinting().create();
+ return gson.toJson(json);
+ }
+
+ public static void main(String[] args) {
+ try {
+ TranslatorText detectRequest = new TranslatorText();
+ String response = detectRequest.Post();
+ System.out.println(prettify(response));
+ } catch (Exception e) {
+ System.out.println(e);
+ }
+ }
+}
+```
+
+### [Node.js](#tab/nodejs)
+
+```javascript
+const axios = require('axios').default;
+const { v4: uuidv4 } = require('uuid');
+
+let key = "<YOUR-TRANSLATOR-KEY>";
+let endpoint = "https://api.cognitive.microsofttranslator.com";
+
+// Add your location, also known as region. The default is global.
+// This is required if using an Azure AI multi-service resource.
+let location = "<YOUR-RESOURCE-LOCATION>";
+
+axios({
+ baseURL: endpoint,
+ url: '/detect',
+ method: 'post',
+ headers: {
+ 'Ocp-Apim-Subscription-Key': key,
+ // location required if you're using a multi-service or regional (not global) resource.
+ 'Ocp-Apim-Subscription-Region': location,
+ 'Content-type': 'application/json',
+ 'X-ClientTraceId': uuidv4().toString()
+ },
+ params: {
+ 'api-version': '3.0'
+ },
+ data: [{
+ 'text': 'Hallo Freund! Was hast du heute gemacht?'
+ }],
+ responseType: 'json'
+}).then(function(response){
+ console.log(JSON.stringify(response.data, null, 4));
+})
+```
+
+### [Python](#tab/python)
+
+```python
+import requests, uuid, json
+
+# Add your key and endpoint
+key = "<YOUR-TRANSLATOR-KEY>"
+endpoint = "https://api.cognitive.microsofttranslator.com"
+
+# location, also known as region.
+# required if you're using a multi-service or regional (not global) resource. It can be found in the Azure portal on the Keys and Endpoint page.
+
+location = "<YOUR-RESOURCE-LOCATION>"
+
+path = '/detect'
+constructed_url = endpoint + path
+
+params = {
+ 'api-version': '3.0'
+}
+
+headers = {
+ 'Ocp-Apim-Subscription-Key': key,
+ # location required if you're using a multi-service or regional (not global) resource.
+ 'Ocp-Apim-Subscription-Region': location,
+ 'Content-type': 'application/json',
+ 'X-ClientTraceId': str(uuid.uuid4())
+}
+
+# You can pass more than one object in body.
+body = [{
+ 'text': 'Hallo Freund! Was hast du heute gemacht?'
+}]
+
+request = requests.post(constructed_url, params=params, headers=headers, json=body)
+response = request.json()
+
+print(json.dumps(response, sort_keys=True, ensure_ascii=False, indent=4, separators=(',', ': ')))
+```
+++
+The `/detect` endpoint response includes alternate detections and indicates whether translation and transliteration are supported for each detected language. After a successful call, you should see the following response:
+
+```json
+[
+    {
+        "language":"de",
+        "score":1.0,
+        "isTranslationSupported":true,
+        "isTransliterationSupported":false
+    }
+]
+```
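+
+As a small illustration (not part of the original samples), you can use the `score` and `isTranslationSupported` values from the `/detect` response to decide whether a follow-up translation request is worthwhile. The `0.5` threshold below is an arbitrary example value.
+
+```python
+# Parsed /detect response, shaped like the JSON shown above.
+detections = [
+    {
+        "language": "de",
+        "score": 1.0,
+        "isTranslationSupported": True,
+        "isTransliterationSupported": False
+    }
+]
+
+for detection in detections:
+    if detection["isTranslationSupported"] and detection["score"] >= 0.5:
+        print(f"Translate from '{detection['language']}' (score {detection['score']}).")
+    else:
+        print(f"Skip '{detection['language']}': unsupported or low confidence.")
+```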
+
+## Transliterate text
+
+Transliteration is the process of converting a word or phrase from the script (alphabet) of one language to another based on phonetic similarity. For example, you could use transliteration to convert "สวัสดี" (`thai`) to "sawatdi" (`latn`). There's more than one way to perform transliteration. In this section, you learn how to transliterate text using the `translate` endpoint and the `transliterate` endpoint.
+
+### Transliterate during translation
+
+If you're translating into a language that uses a different alphabet (or phonemes) than your source, you might need a transliteration. In this example, we translate "Hello" from English to Thai. In addition to getting the translation in Thai, you get a transliteration of the translated phrase using the Latin alphabet.
+
+To get a transliteration from the `translate` endpoint, use the `toScript` parameter.
+
+> [!NOTE]
+> For a complete list of available languages and transliteration options, see [language support](language-support.md).
+
+### [C#](#tab/csharp)
+
+```csharp
+using System;
+using System.Net.Http;
+using System.Text;
+using System.Threading.Tasks;
+using Newtonsoft.Json; // Install Newtonsoft.Json with NuGet
+
+class Program
+{
+ private static readonly string key = "<YOUR-TRANSLATOR-KEY>";
+ private static readonly string endpoint = "https://api.cognitive.microsofttranslator.com";
+
+ // location, also known as region.
+ // required if you're using a multi-service or regional (not global) resource. It can be found in the Azure portal on the Keys and Endpoint page.
+ private static readonly string location = "<YOUR-RESOURCE-LOCATION>";
+
+ static async Task Main(string[] args)
+ {
+ // Output language defined as parameter, with toScript set to latn
+ string route = "/translate?api-version=3.0&to=th&toScript=latn";
+ string textToTransliterate = "Hello, friend! What did you do today?";
+ object[] body = new object[] { new { Text = textToTransliterate } };
+ var requestBody = JsonConvert.SerializeObject(body);
+
+ using (var client = new HttpClient())
+ using (var request = new HttpRequestMessage())
+ {
+ // Build the request.
+ request.Method = HttpMethod.Post;
+ request.RequestUri = new Uri(endpoint + route);
+ request.Content = new StringContent(requestBody, Encoding.UTF8, "application/json");
+ request.Headers.Add("Ocp-Apim-Subscription-Key", key);
+ request.Headers.Add("Ocp-Apim-Subscription-Region", location);
+
+ // Send the request and get response.
+ HttpResponseMessage response = await client.SendAsync(request).ConfigureAwait(false);
+ // Read response as a string.
+ string result = await response.Content.ReadAsStringAsync();
+ Console.WriteLine(result);
+ }
+ }
+}
+```
+
+### [Go](#tab/go)
+
+```go
+package main
+
+import (
+ "bytes"
+ "encoding/json"
+ "fmt"
+ "log"
+ "net/http"
+ "net/url"
+)
+
+func main() {
+ key := "<YOUR-TRANSLATOR-KEY>"
+ // location, also known as region.
+ // required if you're using a multi-service or regional (not global) resource. It can be found in the Azure portal on the Keys and Endpoint page.
+ location := "<YOUR-RESOURCE-LOCATION>"
+ endpoint := "https://api.cognitive.microsofttranslator.com/"
+
+ uri := endpoint + "/translate?api-version=3.0"
+
+ // Build the request URL. See: https://go.dev/pkg/net/url/#example_URL_Parse
+ u, _ := url.Parse(uri)
+ q := u.Query()
+ q.Add("to", "th")
+ q.Add("toScript", "latn")
+ u.RawQuery = q.Encode()
+
+ // Create an anonymous struct for your request body and encode it to JSON
+ body := []struct {
+ Text string
+ }{
+ {Text: "Hello, friend! What did you do today?"},
+ }
+ b, _ := json.Marshal(body)
+
+ // Build the HTTP POST request
+ req, err := http.NewRequest("POST", u.String(), bytes.NewBuffer(b))
+ if err != nil {
+ log.Fatal(err)
+ }
+ // Add required headers to the request
+ req.Header.Add("Ocp-Apim-Subscription-Key", key)
+ // location required if you're using a multi-service or regional (not global) resource.
+ req.Header.Add("Ocp-Apim-Subscription-Region", location)
+ req.Header.Add("Content-Type", "application/json")
+
+ // Call the Translator API
+ res, err := http.DefaultClient.Do(req)
+ if err != nil {
+ log.Fatal(err)
+ }
+
+ // Decode the JSON response
+ var result interface{}
+ if err := json.NewDecoder(res.Body).Decode(&result); err != nil {
+ log.Fatal(err)
+ }
+ // Format and print the response to terminal
+ prettyJSON, _ := json.MarshalIndent(result, "", " ")
+ fmt.Printf("%s\n", prettyJSON)
+}
+```
+
+### [Java](#tab/java)
+
+```java
+import java.io.IOException;
+
+import com.google.gson.*;
+import okhttp3.MediaType;
+import okhttp3.OkHttpClient;
+import okhttp3.Request;
+import okhttp3.RequestBody;
+import okhttp3.Response;
+
+public class TranslatorText {
+ private static String key = "<YOUR-TRANSLATOR-KEY>";
+ public String endpoint = "https://api.cognitive.microsofttranslator.com";
+ public String route = "/translate?api-version=3.0&to=th&toScript=latn";
+ public String url = endpoint.concat(route);
+
+ // location, also known as region.
+ // required if you're using a multi-service or regional (not global) resource. It can be found in the Azure portal on the Keys and Endpoint page.
+ private static String location = "<YOUR-RESOURCE-LOCATION>";
+
+ // Instantiates the OkHttpClient.
+ OkHttpClient client = new OkHttpClient();
+
+ // This function performs a POST request.
+ public String Post() throws IOException {
+ MediaType mediaType = MediaType.parse("application/json");
+ RequestBody body = RequestBody.create(mediaType,
+ "[{\"Text\": \"Hello, friend! What did you do today?\"}]");
+ Request request = new Request.Builder()
+ .url(url)
+ .post(body)
+ .addHeader("Ocp-Apim-Subscription-Key", key)
+ // location required if you're using a multi-service or regional (not global) resource.
+ .addHeader("Ocp-Apim-Subscription-Region", location)
+ .addHeader("Content-type", "application/json")
+ .build();
+ Response response = client.newCall(request).execute();
+ return response.body().string();
+ }
+
+ // This function prettifies the json response.
+ public static String prettify(String json_text) {
+ JsonParser parser = new JsonParser();
+ JsonElement json = parser.parse(json_text);
+ Gson gson = new GsonBuilder().setPrettyPrinting().create();
+ return gson.toJson(json);
+ }
+
+ public static void main(String[] args) {
+ try {
+ TranslatorText translateRequest = new TranslatorText();
+ String response = translateRequest.Post();
+ System.out.println(prettify(response));
+ } catch (Exception e) {
+ System.out.println(e);
+ }
+ }
+}
+```
+
+### [Node.js](#tab/nodejs)
+
+```javascript
+const axios = require('axios').default;
+const { v4: uuidv4 } = require('uuid');
+
+let key = "<YOUR-TRANSLATOR-KEY>";
+let endpoint = "https://api.cognitive.microsofttranslator.com";
+
+// Add your location, also known as region. The default is global.
+// This is required if using an Azure AI multi-service resource.
+let location = "<YOUR-RESOURCE-LOCATION>";
+
+axios({
+ baseURL: endpoint,
+ url: '/translate',
+ method: 'post',
+ headers: {
+ 'Ocp-Apim-Subscription-Key': key,
+ // location required if you're using a multi-service or regional (not global) resource.
+ 'Ocp-Apim-Subscription-Region': location,
+ 'Content-type': 'application/json',
+ 'X-ClientTraceId': uuidv4().toString()
+ },
+ params: {
+ 'api-version': '3.0',
+ 'to': 'th',
+ 'toScript': 'latn'
+ },
+ data: [{
+ 'text': 'Hello, friend! What did you do today?'
+ }],
+ responseType: 'json'
+}).then(function(response){
+ console.log(JSON.stringify(response.data, null, 4));
+})
+```
+
+### [Python](#tab/python)
+
+```python
+import requests, uuid, json
+
+# Add your key and endpoint
+key = "<YOUR-TRANSLATOR-KEY>"
+endpoint = "https://api.cognitive.microsofttranslator.com"
+
+# location, also known as region.
+# required if you're using a multi-service or regional (not global) resource. It can be found in the Azure portal on the Keys and Endpoint page.
+
+location = "<YOUR-RESOURCE-LOCATION>"
+
+path = '/translate'
+constructed_url = endpoint + path
+
+params = {
+ 'api-version': '3.0',
+ 'to': 'th',
+ 'toScript': 'latn'
+}
+
+headers = {
+ 'Ocp-Apim-Subscription-Key': key,
+ # location required if you're using a multi-service or regional (not global) resource.
+ 'Ocp-Apim-Subscription-Region': location,
+ 'Content-type': 'application/json',
+ 'X-ClientTraceId': str(uuid.uuid4())
+}
+
+# You can pass more than one object in body.
+body = [{
+ 'text': 'Hello, friend! What did you do today?'
+}]
+request = requests.post(constructed_url, params=params, headers=headers, json=body)
+response = request.json()
+
+print(json.dumps(response, sort_keys=True, ensure_ascii=False, indent=4, separators=(',', ': ')))
+```
+++
+After a successful call, you should see the following response. Keep in mind that the response from the `translate` endpoint includes the detected source language with a confidence score, a translation using the alphabet of the output language, and a transliteration using the Latin alphabet.
+
+```json
+[
+ {
+ "detectedLanguage": {
+ "language": "en",
+ "score": 1
+ },
+ "translations": [
+ {
+ "text": "หวัดดีเพื่อน! วันนี้เธอทำอะไรไปบ้าง ",
+ "to": "th",
+ "transliteration": {
+ "script": "Latn",
+ "text": "watdiphuean! wannithoethamaraipaiang"
+ }
+ }
+ ]
+ }
+]
+```
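+
+To pull just the transliterated string out of that parsed response, index into the nested `transliteration` object. The following is a quick sketch using the structure shown above, abbreviated to a single result.
+
+```python
+# Parsed response from the translate call above (abbreviated to one result).
+result = [
+    {
+        "detectedLanguage": {"language": "en", "score": 1},
+        "translations": [
+            {
+                "text": "หวัดดีเพื่อน! วันนี้เธอทำอะไรไปบ้าง ",
+                "to": "th",
+                "transliteration": {"script": "Latn", "text": "watdiphuean! wannithoethamaraipaiang"}
+            }
+        ]
+    }
+]
+
+translation = result[0]["translations"][0]
+print(translation["text"])                     # Thai translation
+print(translation["transliteration"]["text"])  # Latin-script transliteration
+```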
+
+### Transliterate without translation
+
+You can also use the `transliterate` endpoint to get a transliteration. When using the transliteration endpoint, you must provide the source language (`language`), the source script/alphabet (`fromScript`), and the output script/alphabet (`toScript`) as parameters. In this example, we're going to get the transliteration for สวัสดีเพื่อน! วันนี้คุณทำอะไร.
+
+> [!NOTE]
+> For a complete list of available languages and transliteration options, see [language support](language-support.md).
+
+### [C#](#tab/csharp)
+
+```csharp
+using System;
+using System.Net.Http;
+using System.Text;
+using System.Threading.Tasks;
+using Newtonsoft.Json; // Install Newtonsoft.Json with NuGet
+
+class Program
+{
+ private static readonly string key = "<YOUR-TRANSLATOR-KEY>";
+ private static readonly string endpoint = "https://api.cognitive.microsofttranslator.com";
+
+ // location, also known as region.
+ // required if you're using a multi-service or regional (not global) resource. It can be found in the Azure portal on the Keys and Endpoint page.
+ private static readonly string location = "<YOUR-RESOURCE-LOCATION>";
+
+ static async Task Main(string[] args)
+ {
+ // For a complete list of options, see API reference.
+ // Input and output languages are defined as parameters.
+ string route = "/transliterate?api-version=3.0&language=th&fromScript=thai&toScript=latn";
+ string textToTransliterate = "สวัสดีเพื่อน! วันนี้คุณทำอะไร";
+ object[] body = new object[] { new { Text = textToTransliterate } };
+ var requestBody = JsonConvert.SerializeObject(body);
+
+ using (var client = new HttpClient())
+ using (var request = new HttpRequestMessage())
+ {
+ // Build the request.
+ request.Method = HttpMethod.Post;
+ request.RequestUri = new Uri(endpoint + route);
+ request.Content = new StringContent(requestBody, Encoding.UTF8, "application/json");
+ request.Headers.Add("Ocp-Apim-Subscription-Key", key);
+ // location required if you're using a multi-service or regional (not global) resource.
+ request.Headers.Add("Ocp-Apim-Subscription-Region", location);
+
+ // Send the request and get response.
+ HttpResponseMessage response = await client.SendAsync(request).ConfigureAwait(false);
+ // Read response as a string.
+ string result = await response.Content.ReadAsStringAsync();
+ Console.WriteLine(result);
+ }
+ }
+}
+```
+
+### [Go](#tab/go)
+
+```go
+package main
+
+import (
+ "bytes"
+ "encoding/json"
+ "fmt"
+ "log"
+ "net/http"
+ "net/url"
+)
+
+func main() {
+ key := "<YOUR-TRANSLATOR-KEY>"
+ // location, also known as region.
+ // required if you're using a multi-service or regional (not global) resource. It can be found in the Azure portal on the Keys and Endpoint page.
+ location := "<YOUR-RESOURCE-LOCATION>"
+ endpoint := "https://api.cognitive.microsofttranslator.com/"
+ uri := endpoint + "/transliterate?api-version=3.0"
+
+ // Build the request URL. See: https://go.dev/pkg/net/url/#example_URL_Parse
+ u, _ := url.Parse(uri)
+ q := u.Query()
+ q.Add("language", "th")
+ q.Add("fromScript", "thai")
+ q.Add("toScript", "latn")
+ u.RawQuery = q.Encode()
+
+ // Create an anonymous struct for your request body and encode it to JSON
+ body := []struct {
+ Text string
+ }{
+ {Text: "สวัสดีเพื่อน! วันนี้คุณทำอะไร"},
+ }
+ b, _ := json.Marshal(body)
+
+ // Build the HTTP POST request
+ req, err := http.NewRequest("POST", u.String(), bytes.NewBuffer(b))
+ if err != nil {
+ log.Fatal(err)
+ }
+ // Add required headers to the request
+ req.Header.Add("Ocp-Apim-Subscription-Key", key)
+ // location required if you're using a multi-service or regional (not global) resource.
+ req.Header.Add("Ocp-Apim-Subscription-Region", location)
+ req.Header.Add("Content-Type", "application/json")
+
+ // Call the Translator API
+ res, err := http.DefaultClient.Do(req)
+ if err != nil {
+ log.Fatal(err)
+ }
+
+ // Decode the JSON response
+ var result interface{}
+ if err := json.NewDecoder(res.Body).Decode(&result); err != nil {
+ log.Fatal(err)
+ }
+ // Format and print the response to terminal
+ prettyJSON, _ := json.MarshalIndent(result, "", " ")
+ fmt.Printf("%s\n", prettyJSON)
+}
+```
+
+### [Java](#tab/java)
+
+```java
+import java.io.IOException;
+
+import com.google.gson.*;
+import okhttp3.MediaType;
+import okhttp3.OkHttpClient;
+import okhttp3.Request;
+import okhttp3.RequestBody;
+import okhttp3.Response;
+
+public class TranslatorText {
+ private static String key = "<YOUR-TRANSLATOR-KEY>";
+ public String endpoint = "https://api.cognitive.microsofttranslator.com";
+ public String route = "/transliterate?api-version=3.0&language=th&fromScript=thai&toScript=latn";
+ public String url = endpoint.concat(route);
+
+ // location, also known as region.
+ // required if you're using a multi-service or regional (not global) resource. It can be found in the Azure portal on the Keys and Endpoint page.
+ private static String location = "<YOUR-RESOURCE-LOCATION>";
+
+ // Instantiates the OkHttpClient.
+ OkHttpClient client = new OkHttpClient();
+
+ // This function performs a POST request.
+ public String Post() throws IOException {
+ MediaType mediaType = MediaType.parse("application/json");
+ RequestBody body = RequestBody.create(mediaType,
+ "[{\"Text\": \"สวัสดีเพื่อน! วันนี้คุณทำอะไร\"}]");
+ Request request = new Request.Builder()
+ .url(url)
+ .post(body)
+ .addHeader("Ocp-Apim-Subscription-Key", key)
+ // location required if you're using a multi-service or regional (not global) resource.
+ .addHeader("Ocp-Apim-Subscription-Region", location)
+ .addHeader("Content-type", "application/json")
+ .build();
+ Response response = client.newCall(request).execute();
+ return response.body().string();
+ }
+
+ // This function prettifies the json response.
+ public static String prettify(String json_text) {
+ JsonParser parser = new JsonParser();
+ JsonElement json = parser.parse(json_text);
+ Gson gson = new GsonBuilder().setPrettyPrinting().create();
+ return gson.toJson(json);
+ }
+
+ public static void main(String[] args) {
+ try {
+ TranslatorText transliterateRequest = new TranslatorText();
+ String response = transliterateRequest.Post();
+ System.out.println(prettify(response));
+ } catch (Exception e) {
+ System.out.println(e);
+ }
+ }
+}
+```
+
+### [Node.js](#tab/nodejs)
+
+```javascript
+const axios = require('axios').default;
+const { v4: uuidv4 } = require('uuid');
+
+let key = "<YOUR-TRANSLATOR-KEY>";
+let endpoint = "https://api.cognitive.microsofttranslator.com";
+
+// Add your location, also known as region. The default is global.
+// This is required if using an Azure AI multi-service resource.
+let location = "<YOUR-RESOURCE-LOCATION>";
+
+axios({
+ baseURL: endpoint,
+ url: '/transliterate',
+ method: 'post',
+ headers: {
+ 'Ocp-Apim-Subscription-Key': key,
+ // location required if you're using a multi-service or regional (not global) resource.
+ 'Ocp-Apim-Subscription-Region': location,
+ 'Content-type': 'application/json',
+ 'X-ClientTraceId': uuidv4().toString()
+ },
+ params: {
+ 'api-version': '3.0',
+ 'language': 'th',
+ 'fromScript': 'thai',
+ 'toScript': 'latn'
+ },
+ data: [{
+ 'text': 'สวัสดีเพื่อน! วันนี้คุณทำอะไร'
+ }],
+ responseType: 'json'
+}).then(function(response){
+ console.log(JSON.stringify(response.data, null, 4));
+})
+```
+
+### [Python](#tab/python)
+
+```python
+import requests, uuid, json
+
+# Add your key and endpoint
+key = "<YOUR-TRANSLATOR-KEY>"
+endpoint = "https://api.cognitive.microsofttranslator.com"
+
+# location, also known as region.
+# required if you're using a multi-service or regional (not global) resource. It can be found in the Azure portal on the Keys and Endpoint page.
+
+location = "<YOUR-RESOURCE-LOCATION>"
+
+path = '/transliterate'
+constructed_url = endpoint + path
+
+params = {
+ 'api-version': '3.0',
+ 'language': 'th',
+ 'fromScript': 'thai',
+ 'toScript': 'latn'
+}
+
+headers = {
+ 'Ocp-Apim-Subscription-Key': key,
+ # location required if you're using a multi-service or regional (not global) resource.
+ 'Ocp-Apim-Subscription-Region': location,
+ 'Content-type': 'application/json',
+ 'X-ClientTraceId': str(uuid.uuid4())
+}
+
+# You can pass more than one object in body.
+body = [{
+ 'text': 'สวัสดีเพื่อน! วันนี้คุณทำอะไร'
+}]
+
+request = requests.post(constructed_url, params=params, headers=headers, json=body)
+response = request.json()
+
+print(json.dumps(response, sort_keys=True, indent=4, separators=(',', ': ')))
+```
+++
+After a successful call, you should see the following response. Unlike the call to the `translate` endpoint, `transliterate` only returns the `text` and the output `script`.
+
+```json
+[
+    {
+        "text":"sawatdiphuean! wannikhunthamarai",
+        "script":"latn"
+    }
+]
+```
+
+## Get sentence length
+
+With the Translator service, you can get the character count for a sentence or series of sentences. The response is returned as an array, with character counts for each sentence detected. You can get sentence lengths with the `translate` and `breaksentence` endpoints.
+
+### Get sentence length during translation
+
+You can get character counts for both the source text and the translation output using the `translate` endpoint. To return sentence lengths (`srcSentLen` and `transSentLen`), set the `includeSentenceLength` parameter to `True`.
+
+### [C#](#tab/csharp)
+
+```csharp
+using System;
+using System.Net.Http;
+using System.Text;
+using System.Threading.Tasks;
+using Newtonsoft.Json; // Install Newtonsoft.Json with NuGet
+
+class Program
+{
+ private static readonly string key = "<YOUR-TRANSLATOR-KEY>";
+ private static readonly string endpoint = "https://api.cognitive.microsofttranslator.com";
+
+ // location, also known as region.
+ // required if you're using a multi-service or regional (not global) resource. It can be found in the Azure portal on the Keys and Endpoint page.
+ private static readonly string location = "<YOUR-RESOURCE-LOCATION>";
+
+ static async Task Main(string[] args)
+ {
+ // Include sentence length details.
+ string route = "/translate?api-version=3.0&to=es&includeSentenceLength=true";
+ string sentencesToCount =
+ "Can you tell me how to get to Penn Station? Oh, you aren't sure? That's fine.";
+ object[] body = new object[] { new { Text = sentencesToCount } };
+ var requestBody = JsonConvert.SerializeObject(body);
+
+ using (var client = new HttpClient())
+ using (var request = new HttpRequestMessage())
+ {
+ // Build the request.
+ request.Method = HttpMethod.Post;
+ request.RequestUri = new Uri(endpoint + route);
+ request.Content = new StringContent(requestBody, Encoding.UTF8, "application/json");
+ request.Headers.Add("Ocp-Apim-Subscription-Key", key);
+ // location required if you're using a multi-service or regional (not global) resource.
+ request.Headers.Add("Ocp-Apim-Subscription-Region", location);
+
+ // Send the request and get response.
+ HttpResponseMessage response = await client.SendAsync(request).ConfigureAwait(false);
+ // Read response as a string.
+ string result = await response.Content.ReadAsStringAsync();
+ Console.WriteLine(result);
+ }
+ }
+}
+```
+
+### [Go](#tab/go)
+
+```go
+package main
+
+import (
+ "bytes"
+ "encoding/json"
+ "fmt"
+ "log"
+ "net/http"
+ "net/url"
+)
+
+func main() {
+ key := "<YOUR-TRANSLATOR-KEY>"
+ // location, also known as region.
+ // required if you're using a multi-service or regional (not global) resource. It can be found in the Azure portal on the Keys and Endpoint page.
+ location := "<YOUR-RESOURCE-LOCATION>"
+ endpoint := "https://api.cognitive.microsofttranslator.com/"
+ uri := endpoint + "/translate?api-version=3.0"
+
+ // Build the request URL. See: https://go.dev/pkg/net/url/#example_URL_Parse
+ u, _ := url.Parse(uri)
+ q := u.Query()
+ q.Add("to", "es")
+ q.Add("includeSentenceLength", "true")
+ u.RawQuery = q.Encode()
+
+ // Create an anonymous struct for your request body and encode it to JSON
+ body := []struct {
+ Text string
+ }{
+ {Text: "Can you tell me how to get to Penn Station? Oh, you aren't sure? That's fine."},
+ }
+ b, _ := json.Marshal(body)
+
+ // Build the HTTP POST request
+ req, err := http.NewRequest("POST", u.String(), bytes.NewBuffer(b))
+ if err != nil {
+ log.Fatal(err)
+ }
+ // Add required headers to the request
+ req.Header.Add("Ocp-Apim-Subscription-Key", key)
+ // location required if you're using a multi-service or regional (not global) resource.
+ req.Header.Add("Ocp-Apim-Subscription-Region", location)
+ req.Header.Add("Content-Type", "application/json")
+
+ // Call the Translator API
+ res, err := http.DefaultClient.Do(req)
+ if err != nil {
+ log.Fatal(err)
+ }
+
+ // Decode the JSON response
+ var result interface{}
+ if err := json.NewDecoder(res.Body).Decode(&result); err != nil {
+ log.Fatal(err)
+ }
+ // Format and print the response to terminal
+ prettyJSON, _ := json.MarshalIndent(result, "", " ")
+ fmt.Printf("%s\n", prettyJSON)
+}
+```
+
+### [Java](#tab/java)
+
+```java
+import java.io.IOException;
+
+import com.google.gson.*;
+import okhttp3.MediaType;
+import okhttp3.OkHttpClient;
+import okhttp3.Request;
+import okhttp3.RequestBody;
+import okhttp3.Response;
+
+public class TranslatorText {
+ private static String key = "<YOUR-TRANSLATOR-KEY>";
+ public String endpoint = "https://api.cognitive.microsofttranslator.com";
+ public String route = "/translate?api-version=3.0&to=es&includeSentenceLength=true";
+    public String url = endpoint.concat(route);
+
+ // location, also known as region.
+ // required if you're using a multi-service or regional (not global) resource. It can be found in the Azure portal on the Keys and Endpoint page.
+ private static String location = "<YOUR-RESOURCE-LOCATION>";
+
+ // Instantiates the OkHttpClient.
+ OkHttpClient client = new OkHttpClient();
+
+ // This function performs a POST request.
+ public String Post() throws IOException {
+ MediaType mediaType = MediaType.parse("application/json");
+ RequestBody body = RequestBody.create(mediaType,
+ "[{\"Text\": \"Can you tell me how to get to Penn Station? Oh, you aren\'t sure? That\'s fine.\"}]");
+ Request request = new Request.Builder()
+ .url(url)
+ .post(body)
+ .addHeader("Ocp-Apim-Subscription-Key", key)
+ // location required if you're using a multi-service or regional (not global) resource.
+ .addHeader("Ocp-Apim-Subscription-Region", location)
+ .addHeader("Content-type", "application/json")
+ .build();
+ Response response = client.newCall(request).execute();
+ return response.body().string();
+ }
+
+ // This function prettifies the json response.
+ public static String prettify(String json_text) {
+ JsonParser parser = new JsonParser();
+ JsonElement json = parser.parse(json_text);
+ Gson gson = new GsonBuilder().setPrettyPrinting().create();
+ return gson.toJson(json);
+ }
+
+ public static void main(String[] args) {
+ try {
+ TranslatorText translateRequest = new TranslatorText();
+ String response = translateRequest.Post();
+ System.out.println(prettify(response));
+ } catch (Exception e) {
+ System.out.println(e);
+ }
+ }
+}
+```
+
+### [Node.js](#tab/nodejs)
+
+```javascript
+const axios = require('axios').default;
+const { v4: uuidv4 } = require('uuid');
+
+let key = "<YOUR-TRANSLATOR-KEY>";
+let endpoint = "https://api.cognitive.microsofttranslator.com";
+
+// Add your location, also known as region. The default is global.
+// This is required if using an Azure AI multi-service resource.
+let location = "<YOUR-RESOURCE-LOCATION>";
+
+axios({
+ baseURL: endpoint,
+ url: '/translate',
+ method: 'post',
+ headers: {
+ 'Ocp-Apim-Subscription-Key': key,
+ // location required if you're using a multi-service or regional (not global) resource.
+ 'Ocp-Apim-Subscription-Region': location,
+ 'Content-type': 'application/json',
+ 'X-ClientTraceId': uuidv4().toString()
+ },
+ params: {
+ 'api-version': '3.0',
+ 'to': 'es',
+ 'includeSentenceLength': true
+ },
+ data: [{
+ 'text': 'Can you tell me how to get to Penn Station? Oh, you aren\'t sure? That\'s fine.'
+ }],
+ responseType: 'json'
+}).then(function(response){
+ console.log(JSON.stringify(response.data, null, 4));
+})
+```
+
+### [Python](#tab/python)
+
+```python
+import requests, uuid, json
+
+# Add your key and endpoint
+key = "<YOUR-TRANSLATOR-KEY>"
+endpoint = "https://api.cognitive.microsofttranslator.com"
+
+# location, also known as region.
+# required if you're using a multi-service or regional (not global) resource. It can be found in the Azure portal on the Keys and Endpoint page.
+
+location = "<YOUR-RESOURCE-LOCATION>"
+
+path = '/translate'
+constructed_url = endpoint + path
+
+params = {
+ 'api-version': '3.0',
+ 'to': 'es',
+ 'includeSentenceLength': True
+}
+
+headers = {
+ 'Ocp-Apim-Subscription-Key': key,
+ # location required if you're using a multi-service or regional (not global) resource.
+ 'Ocp-Apim-Subscription-Region': location,
+ 'Content-type': 'application/json',
+ 'X-ClientTraceId': str(uuid.uuid4())
+}
+
+# You can pass more than one object in body.
+body = [{
+ 'text': 'Can you tell me how to get to Penn Station? Oh, you aren\'t sure? That\'s fine.'
+}]
+request = requests.post(constructed_url, params=params, headers=headers, json=body)
+response = request.json()
+
+print(json.dumps(response, sort_keys=True, ensure_ascii=False, indent=4, separators=(',', ': ')))
+```
+++
+After a successful call, you should see the following response. In addition to the detected source language and translation, you get character counts for each detected sentence for both the source (`srcSentLen`) and translation (`transSentLen`).
+
+```json
+[
+ {
+ "detectedLanguage":{
+ "language":"en",
+ "score":1.0
+ },
+ "translations":[
+ {
+ "text":"¿Puedes decirme cómo llegar a Penn Station? Oh, ¿no estás seguro? Está bien.",
+ "to":"es",
+ "sentLen":{
+ "srcSentLen":[
+ 44,
+ 21,
+ 12
+ ],
+ "transSentLen":[
+ 44,
+ 22,
+ 10
+ ]
+ }
+ }
+ ]
+ }
+]
+```
+
+### Get sentence length without translation
+
+The Translator service also lets you request sentence length without translation using the `breaksentence` endpoint.
+
+### [C#](#tab/csharp)
+
+```csharp
+using System;
+using System.Net.Http;
+using System.Text;
+using System.Threading.Tasks;
+using Newtonsoft.Json; // Install Newtonsoft.Json with NuGet
+
+class Program
+{
+ private static readonly string key = "<YOUR-TRANSLATOR-KEY>";
+ private static readonly string endpoint = "https://api.cognitive.microsofttranslator.com";
+
+ // location, also known as region.
+ // required if you're using a multi-service or regional (not global) resource. It can be found in the Azure portal on the Keys and Endpoint page.
+ private static readonly string location = "<YOUR-RESOURCE-LOCATION>";
+
+ static async Task Main(string[] args)
+ {
+ // Only include sentence length details.
+ string route = "/breaksentence?api-version=3.0";
+ string sentencesToCount =
+ "Can you tell me how to get to Penn Station? Oh, you aren't sure? That's fine.";
+ object[] body = new object[] { new { Text = sentencesToCount } };
+ var requestBody = JsonConvert.SerializeObject(body);
+
+ using (var client = new HttpClient())
+ using (var request = new HttpRequestMessage())
+ {
+ // Build the request.
+ request.Method = HttpMethod.Post;
+ request.RequestUri = new Uri(endpoint + route);
+ request.Content = new StringContent(requestBody, Encoding.UTF8, "application/json");
+ request.Headers.Add("Ocp-Apim-Subscription-Key", key);
+ // location required if you're using a multi-service or regional (not global) resource.
+ request.Headers.Add("Ocp-Apim-Subscription-Region", location);
+
+ // Send the request and get response.
+ HttpResponseMessage response = await client.SendAsync(request).ConfigureAwait(false);
+ // Read response as a string.
+ string result = await response.Content.ReadAsStringAsync();
+ Console.WriteLine(result);
+ }
+ }
+}
+```
+
+### [Go](#tab/go)
+
+```go
+package main
+
+import (
+ "bytes"
+ "encoding/json"
+ "fmt"
+ "log"
+ "net/http"
+ "net/url"
+)
+
+func main() {
+ key := "<YOUR-TRANSLATOR-KEY>"
+ // location, also known as region.
+ // required if you're using a multi-service or regional (not global) resource. It can be found in the Azure portal on the Keys and Endpoint page.
+ location := "<YOUR-RESOURCE-LOCATION>"
+ endpoint := "https://api.cognitive.microsofttranslator.com/"
+ uri := endpoint + "/breaksentence?api-version=3.0"
+
+ // Build the request URL. See: https://go.dev/pkg/net/url/#example_URL_Parse
+ u, _ := url.Parse(uri)
+ q := u.Query()
+ u.RawQuery = q.Encode()
+
+ // Create an anonymous struct for your request body and encode it to JSON
+ body := []struct {
+ Text string
+ }{
+ {Text: "Can you tell me how to get to Penn Station? Oh, you aren't sure? That's fine."},
+ }
+ b, _ := json.Marshal(body)
+
+ // Build the HTTP POST request
+ req, err := http.NewRequest("POST", u.String(), bytes.NewBuffer(b))
+ if err != nil {
+ log.Fatal(err)
+ }
+ // Add required headers to the request
+ req.Header.Add("Ocp-Apim-Subscription-Key", key)
+ // location required if you're using a multi-service or regional (not global) resource.
+ req.Header.Add("Ocp-Apim-Subscription-Region", location)
+ req.Header.Add("Content-Type", "application/json")
+
+ // Call the Translator API
+ res, err := http.DefaultClient.Do(req)
+ if err != nil {
+ log.Fatal(err)
+ }
+
+ // Decode the JSON response
+ var result interface{}
+ if err := json.NewDecoder(res.Body).Decode(&result); err != nil {
+ log.Fatal(err)
+ }
+ // Format and print the response to terminal
+ prettyJSON, _ := json.MarshalIndent(result, "", " ")
+ fmt.Printf("%s\n", prettyJSON)
+}
+```
+
+### [Java](#tab/java)
+
+```java
+import java.io.IOException;
+
+import com.google.gson.*;
+import okhttp3.MediaType;
+import okhttp3.OkHttpClient;
+import okhttp3.Request;
+import okhttp3.RequestBody;
+import okhttp3.Response;
+
+public class TranslatorText {
+ private static String key = "<YOUR-TRANSLATOR-KEY>";
+ public String endpoint = "https://api.cognitive.microsofttranslator.com";
+ public String route = "/breaksentence?api-version=3.0";
+ public String url = endpoint.concat(route);
+
+ // location, also known as region.
+ // required if you're using a multi-service or regional (not global) resource. It can be found in the Azure portal on the Keys and Endpoint page.
+ private static String location = "<YOUR-RESOURCE-LOCATION>";
+
+ // Instantiates the OkHttpClient.
+ OkHttpClient client = new OkHttpClient();
+
+ // This function performs a POST request.
+ public String Post() throws IOException {
+ MediaType mediaType = MediaType.parse("application/json");
+ RequestBody body = RequestBody.create(mediaType,
+ "[{\"Text\": \"Can you tell me how to get to Penn Station? Oh, you aren\'t sure? That\'s fine.\"}]");
+ Request request = new Request.Builder()
+ .url(url)
+ .post(body)
+ .addHeader("Ocp-Apim-Subscription-Key", key)
+ // location required if you're using a multi-service or regional (not global) resource.
+ .addHeader("Ocp-Apim-Subscription-Region", location)
+ .addHeader("Content-type", "application/json")
+ .build();
+ Response response = client.newCall(request).execute();
+ return response.body().string();
+ }
+
+ // This function prettifies the json response.
+ public static String prettify(String json_text) {
+ JsonParser parser = new JsonParser();
+ JsonElement json = parser.parse(json_text);
+ Gson gson = new GsonBuilder().setPrettyPrinting().create();
+ return gson.toJson(json);
+ }
+
+ public static void main(String[] args) {
+ try {
+ TranslatorText breakSentenceRequest = new TranslatorText();
+ String response = breakSentenceRequest.Post();
+ System.out.println(prettify(response));
+ } catch (Exception e) {
+ System.out.println(e);
+ }
+ }
+}
+```
+
+### [Node.js](#tab/nodejs)
+
+```javascript
+const axios = require('axios').default;
+const { v4: uuidv4 } = require('uuid');
+
+let key = "<YOUR-TRANSLATOR-KEY>";
+let endpoint = "https://api.cognitive.microsofttranslator.com";
+
+// Add your location, also known as region. The default is global.
+// This is required if using an Azure AI multi-service resource.
+let location = "<YOUR-RESOURCE-LOCATION>";
+
+axios({
+ baseURL: endpoint,
+ url: '/breaksentence',
+ method: 'post',
+ headers: {
+ 'Ocp-Apim-Subscription-Key': key,
+ // location required if you're using a multi-service or regional (not global) resource.
+ 'Ocp-Apim-Subscription-Region': location,
+ 'Content-type': 'application/json',
+ 'X-ClientTraceId': uuidv4().toString()
+ },
+ params: {
+ 'api-version': '3.0'
+ },
+ data: [{
+ 'text': 'Can you tell me how to get to Penn Station? Oh, you aren\'t sure? That\'s fine.'
+ }],
+ responseType: 'json'
+}).then(function(response){
+ console.log(JSON.stringify(response.data, null, 4));
+})
+```
+
+### [Python](#tab/python)
+
+```python
+import requests, uuid, json
+
+# Add your key and endpoint
+key = "<YOUR-TRANSLATOR-KEY>"
+endpoint = "https://api.cognitive.microsofttranslator.com"
+
+# location, also known as region.
+# required if you're using a multi-service or regional (not global) resource. It can be found in the Azure portal on the Keys and Endpoint page.
+
+location = "<YOUR-RESOURCE-LOCATION>"
+
+path = '/breaksentence'
+constructed_url = endpoint + path
+
+params = {
+ 'api-version': '3.0'
+}
+
+headers = {
+ 'Ocp-Apim-Subscription-Key': key,
+ # location required if you're using a multi-service or regional (not global) resource.
+ 'Ocp-Apim-Subscription-Region': location,
+ 'Content-type': 'application/json',
+ 'X-ClientTraceId': str(uuid.uuid4())
+}
+
+# You can pass more than one object in body.
+body = [{
+ 'text': 'Can you tell me how to get to Penn Station? Oh, you aren\'t sure? That\'s fine.'
+}]
+
+request = requests.post(constructed_url, params=params, headers=headers, json=body)
+response = request.json()
+
+print(json.dumps(response, sort_keys=True, indent=4, separators=(',', ': ')))
+```
+++
+After a successful call, you should see the following response. Unlike the call to the `translate` endpoint, `breaksentence` only returns the character counts for the source text in an array called `sentLen`.
+
+```json
+[
+ {
+ "detectedLanguage":{
+ "language":"en",
+ "score":1.0
+ },
+ "sentLen":[
+ 44,
+ 21,
+ 12
+ ]
+ }
+]
+```
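+
+Because `sentLen` holds the character count of each detected sentence in order, you can use it to split the original text into sentences on the client side. A minimal sketch, using the counts from the response above:
+
+```python
+# Source text sent to /breaksentence and the sentLen values it returned.
+text = "Can you tell me how to get to Penn Station? Oh, you aren't sure? That's fine."
+sent_len = [44, 21, 12]
+
+# Slice the source text at cumulative offsets to recover each sentence.
+offset = 0
+for length in sent_len:
+    print(repr(text[offset:offset + length]))
+    offset += length
+```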
+
+## Dictionary lookup (alternate translations)
+
+With the `dictionary/lookup` endpoint, you can get alternate translations for a word or phrase. For example, when translating the word "sunlight" from `en` to `es`, this endpoint returns "`luz solar`," "`rayos solares`," "`soleamiento`," "`sol`," and "`insolación`."
+
+### [C#](#tab/csharp)
+
+```csharp
+using System;
+using Newtonsoft.Json; // Install Newtonsoft.Json with NuGet
+
+class Program
+{
+ private static readonly string key = "<YOUR-TRANSLATOR-KEY>";
+ private static readonly string endpoint = "https://api.cognitive.microsofttranslator.com";
+
+ // location, also known as region.
+ // required if you're using a multi-service or regional (not global) resource. It can be found in the Azure portal on the Keys and Endpoint page.
+ private static readonly string location = "<YOUR-RESOURCE-LOCATION>";
+
+ static async Task Main(string[] args)
+ {
+ // See many translation options
+ string route = "/dictionary/lookup?api-version=3.0&from=en&to=es";
+ string wordToTranslate = "sunlight";
+ object[] body = new object[] { new { Text = wordToTranslate } };
+ var requestBody = JsonConvert.SerializeObject(body);
+
+ using (var client = new HttpClient())
+ using (var request = new HttpRequestMessage())
+ {
+ // Build the request.
+ request.Method = HttpMethod.Post;
+ request.RequestUri = new Uri(endpoint + route);
+ request.Content = new StringContent(requestBody, Encoding.UTF8, "application/json");
+ request.Headers.Add("Ocp-Apim-Subscription-Key", key);
+ // location required if you're using a multi-service or regional (not global) resource.
+ request.Headers.Add("Ocp-Apim-Subscription-Region", location);
+
+ // Send the request and get response.
+ HttpResponseMessage response = await client.SendAsync(request).ConfigureAwait(false);
+ // Read response as a string.
+ string result = await response.Content.ReadAsStringAsync();
+ Console.WriteLine(result);
+ }
+ }
+}
+```
+
+### [Go](#tab/go)
+
+```go
+package main
+
+import (
+ "bytes"
+ "encoding/json"
+ "fmt"
+ "log"
+ "net/http"
+ "net/url"
+)
+
+func main() {
+ key := "<YOUR-TRANSLATOR-KEY>"
+ // location, also known as region.
+ // required if you're using a multi-service or regional (not global) resource. It can be found in the Azure portal on the Keys and Endpoint page.
+ location := "<YOUR-RESOURCE-LOCATION>"
+ endpoint := "https://api.cognitive.microsofttranslator.com/"
+ uri := endpoint + "/dictionary/lookup?api-version=3.0"
+
+ // Build the request URL. See: https://go.dev/pkg/net/url/#example_URL_Parse
+ u, _ := url.Parse(uri)
+ q := u.Query()
+ q.Add("from", "en")
+ q.Add("to", "es")
+ u.RawQuery = q.Encode()
+
+ // Create an anonymous struct for your request body and encode it to JSON
+ body := []struct {
+ Text string
+ }{
+ {Text: "sunlight"},
+ }
+ b, _ := json.Marshal(body)
+
+ // Build the HTTP POST request
+ req, err := http.NewRequest("POST", u.String(), bytes.NewBuffer(b))
+ if err != nil {
+ log.Fatal(err)
+ }
+ // Add required headers to the request
+ req.Header.Add("Ocp-Apim-Subscription-Key", key)
+ // location required if you're using a multi-service or regional (not global) resource.
+ req.Header.Add("Ocp-Apim-Subscription-Region", location)
+ req.Header.Add("Content-Type", "application/json")
+
+ // Call the Translator API
+ res, err := http.DefaultClient.Do(req)
+ if err != nil {
+ log.Fatal(err)
+ }
+
+ // Decode the JSON response
+ var result interface{}
+ if err := json.NewDecoder(res.Body).Decode(&result); err != nil {
+ log.Fatal(err)
+ }
+ // Format and print the response to terminal
+ prettyJSON, _ := json.MarshalIndent(result, "", " ")
+ fmt.Printf("%s\n", prettyJSON)
+}
+```
+
+### [Java](#tab/java)
+
+```java
+import java.io.*;
+import java.net.*;
+import java.util.*;
+import com.google.gson.*;
+import com.squareup.okhttp.*;
+
+public class TranslatorText {
+ private static String key = "<YOUR-TRANSLATOR-KEY>";
+ public String endpoint = "https://api.cognitive.microsofttranslator.com";
+ public String route = "/dictionary/lookup?api-version=3.0&from=en&to=es";
+ public String url = endpoint.concat(route);
+
+ // location, also known as region.
+ // required if you're using a multi-service or regional (not global) resource. It can be found in the Azure portal on the Keys and Endpoint page.
+ private static String location = "<YOUR-RESOURCE-LOCATION>";
+
+ // Instantiates the OkHttpClient.
+ OkHttpClient client = new OkHttpClient();
+
+ // This function performs a POST request.
+ public String Post() throws IOException {
+ MediaType mediaType = MediaType.parse("application/json");
+ RequestBody body = RequestBody.create(mediaType,
+ "[{\"Text\": \"sunlight\"}]");
+ Request request = new Request.Builder()
+ .url(url)
+ .post(body)
+ .addHeader("Ocp-Apim-Subscription-Key", key)
+ // location required if you're using a multi-service or regional (not global) resource.
+ .addHeader("Ocp-Apim-Subscription-Region", location)
+ .addHeader("Content-type", "application/json")
+ .build();
+ Response response = client.newCall(request).execute();
+ return response.body().string();
+ }
+
+ // This function prettifies the json response.
+ public static String prettify(String json_text) {
+ JsonParser parser = new JsonParser();
+ JsonElement json = parser.parse(json_text);
+ Gson gson = new GsonBuilder().setPrettyPrinting().create();
+ return gson.toJson(json);
+ }
+
+ public static void main(String[] args) {
+ try {
+ TranslatorText breakSentenceRequest = new TranslatorText();
+ String response = breakSentenceRequest.Post();
+ System.out.println(prettify(response));
+ } catch (Exception e) {
+ System.out.println(e);
+ }
+ }
+}
+```
+
+### [Node.js](#tab/nodejs)
+
+```javascript
+const axios = require('axios').default;
+const { v4: uuidv4 } = require('uuid');
+
+let key = "<YOUR-TRANSLATOR-KEY>";
+let endpoint = "https://api.cognitive.microsofttranslator.com";
+
+// Add your location, also known as region. The default is global.
+// This is required if using an Azure AI multi-service resource.
+let location = "<YOUR-RESOURCE-LOCATION>";
+
+axios({
+ baseURL: endpoint,
+ url: '/dictionary/lookup',
+ method: 'post',
+ headers: {
+ 'Ocp-Apim-Subscription-Key': key,
+ // location required if you're using a multi-service or regional (not global) resource.
+ 'Ocp-Apim-Subscription-Region': location,
+ 'Content-type': 'application/json',
+ 'X-ClientTraceId': uuidv4().toString()
+ },
+ params: {
+ 'api-version': '3.0',
+ 'from': 'en',
+ 'to': 'es'
+ },
+ data: [{
+ 'text': 'sunlight'
+ }],
+ responseType: 'json'
+}).then(function(response){
+ console.log(JSON.stringify(response.data, null, 4));
+})
+```
+
+### [Python](#tab/python)
+
+```python
+import requests, uuid, json
+
+# Add your key and endpoint
+key = "<YOUR-TRANSLATOR-KEY>"
+endpoint = "https://api.cognitive.microsofttranslator.com"
+
+# location, also known as region.
+# required if you're using a multi-service or regional (not global) resource. It can be found in the Azure portal on the Keys and Endpoint page.
+
+location = "<YOUR-RESOURCE-LOCATION>"
+
+path = '/dictionary/lookup'
+constructed_url = endpoint + path
+
+params = {
+ 'api-version': '3.0',
+ 'from': 'en',
+ 'to': 'es'
+}
+
+headers = {
+ 'Ocp-Apim-Subscription-Key': key,
+ # location required if you're using a multi-service or regional (not global) resource.
+ 'Ocp-Apim-Subscription-Region': location,
+ 'Content-type': 'application/json',
+ 'X-ClientTraceId': str(uuid.uuid4())
+}
+
+# You can pass more than one object in body.
+body = [{
+ 'text': 'sunlight'
+}]
+request = requests.post(constructed_url, params=params, headers=headers, json=body)
+response = request.json()
+
+print(json.dumps(response, sort_keys=True, ensure_ascii=False, indent=4, separators=(',', ': ')))
+```
+++
+After a successful call, you should see the following response. Let's examine the response more closely, since the JSON is more complex than some of the other examples in this article. The `translations` array includes a list of translations. Each object in this array includes a confidence score (`confidence`), the text optimized for end-user display (`displayTarget`), the normalized text (`normalizedText`), the part of speech (`posTag`), and back translations (`backTranslations`). For more information about the response, see [Dictionary Lookup](reference/v3-0-dictionary-lookup.md).
+
+```json
+[
+ {
+ "normalizedSource":"sunlight",
+ "displaySource":"sunlight",
+ "translations":[
+ {
+ "normalizedTarget":"luz solar",
+ "displayTarget":"luz solar",
+ "posTag":"NOUN",
+ "confidence":0.5313,
+ "prefixWord":"",
+ "backTranslations":[
+ {
+ "normalizedText":"sunlight",
+ "displayText":"sunlight",
+ "numExamples":15,
+ "frequencyCount":702
+ },
+ {
+ "normalizedText":"sunshine",
+ "displayText":"sunshine",
+ "numExamples":7,
+ "frequencyCount":27
+ },
+ {
+ "normalizedText":"daylight",
+ "displayText":"daylight",
+ "numExamples":4,
+ "frequencyCount":17
+ }
+ ]
+ },
+ {
+ "normalizedTarget":"rayos solares",
+ "displayTarget":"rayos solares",
+ "posTag":"NOUN",
+ "confidence":0.1544,
+ "prefixWord":"",
+ "backTranslations":[
+ {
+ "normalizedText":"sunlight",
+ "displayText":"sunlight",
+ "numExamples":4,
+ "frequencyCount":38
+ },
+ {
+ "normalizedText":"rays",
+ "displayText":"rays",
+ "numExamples":11,
+ "frequencyCount":30
+ },
+ {
+ "normalizedText":"sunrays",
+ "displayText":"sunrays",
+ "numExamples":0,
+ "frequencyCount":6
+ },
+ {
+ "normalizedText":"sunbeams",
+ "displayText":"sunbeams",
+ "numExamples":0,
+ "frequencyCount":4
+ }
+ ]
+ },
+ {
+ "normalizedTarget":"soleamiento",
+ "displayTarget":"soleamiento",
+ "posTag":"NOUN",
+ "confidence":0.1264,
+ "prefixWord":"",
+ "backTranslations":[
+ {
+ "normalizedText":"sunlight",
+ "displayText":"sunlight",
+ "numExamples":0,
+ "frequencyCount":7
+ }
+ ]
+ },
+ {
+ "normalizedTarget":"sol",
+ "displayTarget":"sol",
+ "posTag":"NOUN",
+ "confidence":0.1239,
+ "prefixWord":"",
+ "backTranslations":[
+ {
+ "normalizedText":"sun",
+ "displayText":"sun",
+ "numExamples":15,
+ "frequencyCount":20387
+ },
+ {
+ "normalizedText":"sunshine",
+ "displayText":"sunshine",
+ "numExamples":15,
+ "frequencyCount":1439
+ },
+ {
+ "normalizedText":"sunny",
+ "displayText":"sunny",
+ "numExamples":15,
+ "frequencyCount":265
+ },
+ {
+ "normalizedText":"sunlight",
+ "displayText":"sunlight",
+ "numExamples":15,
+ "frequencyCount":242
+ }
+ ]
+ },
+ {
+      "normalizedTarget":"insolación",
+      "displayTarget":"insolación",
+ "posTag":"NOUN",
+ "confidence":0.064,
+ "prefixWord":"",
+ "backTranslations":[
+ {
+ "normalizedText":"heat stroke",
+ "displayText":"heat stroke",
+ "numExamples":3,
+ "frequencyCount":67
+ },
+ {
+ "normalizedText":"insolation",
+ "displayText":"insolation",
+ "numExamples":1,
+ "frequencyCount":55
+ },
+ {
+ "normalizedText":"sunstroke",
+ "displayText":"sunstroke",
+ "numExamples":2,
+ "frequencyCount":31
+ },
+ {
+ "normalizedText":"sunlight",
+ "displayText":"sunlight",
+ "numExamples":0,
+ "frequencyCount":12
+ },
+ {
+ "normalizedText":"solarization",
+ "displayText":"solarization",
+ "numExamples":0,
+ "frequencyCount":7
+ },
+ {
+ "normalizedText":"sunning",
+ "displayText":"sunning",
+ "numExamples":1,
+ "frequencyCount":7
+ }
+ ]
+ }
+ ]
+ }
+]
+```
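+
+As an illustration of how you might work with this response, the following minimal Python sketch (using a trimmed copy of the JSON above rather than a live call) picks the translation with the highest confidence score:
+
+```python
+# A trimmed copy of the dictionary/lookup response shown above, for illustration only
+response = [
+    {
+        "normalizedSource": "sunlight",
+        "displaySource": "sunlight",
+        "translations": [
+            {"normalizedTarget": "luz solar", "displayTarget": "luz solar", "posTag": "NOUN", "confidence": 0.5313},
+            {"normalizedTarget": "rayos solares", "displayTarget": "rayos solares", "posTag": "NOUN", "confidence": 0.1544},
+        ],
+    }
+]
+
+entry = response[0]
+
+# Pick the translation with the highest confidence score
+best = max(entry["translations"], key=lambda t: t["confidence"])
+print(f'{entry["normalizedSource"]} -> {best["normalizedTarget"]} (confidence {best["confidence"]:.4f})')
+
+# These two values can be passed to the dictionary/examples endpoint as text and translation
+text, translation = entry["normalizedSource"], best["normalizedTarget"]
+```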
+
+## Dictionary examples (translations in context)
+
+After you've performed a dictionary lookup, pass the source and translation text to the `dictionary/examples` endpoint to get a list of examples that show both terms in the context of a sentence or phrase. Building on the previous example, you use the `normalizedText` and `normalizedTarget` from the dictionary lookup response as `text` and `translation` respectively. The source language (`from`) and output target (`to`) parameters are required.
+
+### [C#](#tab/csharp)
+
+```csharp
+using System;
+using Newtonsoft.Json; // Install Newtonsoft.Json with NuGet
+
+class Program
+{
+ private static readonly string key = "<YOUR-TRANSLATOR-KEY>";
+ private static readonly string endpoint = "https://api.cognitive.microsofttranslator.com";
+
+ // location, also known as region.
+ // required if you're using a multi-service or regional (not global) resource. It can be found in the Azure portal on the Keys and Endpoint page.
+ private static readonly string location = "<YOUR-RESOURCE-LOCATION>";
+
+ static async Task Main(string[] args)
+ {
+ // See examples of terms in context
+ string route = "/dictionary/examples?api-version=3.0&from=en&to=es";
+ object[] body = new object[] { new { Text = "sunlight", Translation = "luz solar" } } ;
+ var requestBody = JsonConvert.SerializeObject(body);
+
+ using (var client = new HttpClient())
+ using (var request = new HttpRequestMessage())
+ {
+ // Build the request.
+ request.Method = HttpMethod.Post;
+ request.RequestUri = new Uri(endpoint + route);
+ request.Content = new StringContent(requestBody, Encoding.UTF8, "application/json");
+ request.Headers.Add("Ocp-Apim-Subscription-Key", key);
+ // location required if you're using a multi-service or regional (not global) resource.
+ request.Headers.Add("Ocp-Apim-Subscription-Region", location);
+
+ // Send the request and get response.
+ HttpResponseMessage response = await client.SendAsync(request).ConfigureAwait(false);
+ // Read response as a string.
+ string result = await response.Content.ReadAsStringAsync();
+ Console.WriteLine(result);
+ }
+ }
+}
+```
+
+### [Go](#tab/go)
+
+```go
+package main
+
+import (
+ "bytes"
+ "encoding/json"
+ "fmt"
+ "log"
+ "net/http"
+ "net/url"
+)
+
+func main() {
+ key := "<YOUR-TRANSLATOR-KEY>"
+ // location, also known as region.
+ // required if you're using a multi-service or regional (not global) resource. It can be found in the Azure portal on the Keys and Endpoint page.
+ location := "<YOUR-RESOURCE-LOCATION>"
+ endpoint := "https://api.cognitive.microsofttranslator.com/"
+ uri := endpoint + "/dictionary/examples?api-version=3.0"
+
+ // Build the request URL. See: https://go.dev/pkg/net/url/#example_URL_Parse
+ u, _ := url.Parse(uri)
+ q := u.Query()
+ q.Add("from", "en")
+ q.Add("to", "es")
+ u.RawQuery = q.Encode()
+
+ // Create an anonymous struct for your request body and encode it to JSON
+ body := []struct {
+ Text string
+ Translation string
+ }{
+ {
+ Text: "sunlight",
+ Translation: "luz solar",
+ },
+ }
+ b, _ := json.Marshal(body)
+
+ // Build the HTTP POST request
+ req, err := http.NewRequest("POST", u.String(), bytes.NewBuffer(b))
+ if err != nil {
+ log.Fatal(err)
+ }
+ // Add required headers to the request
+ req.Header.Add("Ocp-Apim-Subscription-Key", key)
+ // location required if you're using a multi-service or regional (not global) resource.
+ req.Header.Add("Ocp-Apim-Subscription-Region", location)
+ req.Header.Add("Content-Type", "application/json")
+
+ // Call the Translator Text API
+ res, err := http.DefaultClient.Do(req)
+ if err != nil {
+ log.Fatal(err)
+ }
+
+ // Decode the JSON response
+ var result interface{}
+ if err := json.NewDecoder(res.Body).Decode(&result); err != nil {
+ log.Fatal(err)
+ }
+ // Format and print the response to terminal
+ prettyJSON, _ := json.MarshalIndent(result, "", " ")
+ fmt.Printf("%s\n", prettyJSON)
+}
+```
+
+### [Java](#tab/java)
+
+```java
+import java.io.*;
+import java.net.*;
+import java.util.*;
+import com.google.gson.*;
+import com.squareup.okhttp.*;
+
+public class TranslatorText {
+ private static String key = "<YOUR-TRANSLATOR-KEY>";
+ public String endpoint = "https://api.cognitive.microsofttranslator.com";
+ public String route = "/dictionary/examples?api-version=3.0&from=en&to=es";
+ public String url = endpoint.concat(route);
+
+ // location, also known as region.
+ // required if you're using a multi-service or regional (not global) resource. It can be found in the Azure portal on the Keys and Endpoint page.
+ private static String location = "<YOUR-RESOURCE-LOCATION>";
+
+ // Instantiates the OkHttpClient.
+ OkHttpClient client = new OkHttpClient();
+
+ // This function performs a POST request.
+ public String Post() throws IOException {
+ MediaType mediaType = MediaType.parse("application/json");
+ RequestBody body = RequestBody.create(mediaType,
+ "[{\"Text\": \"sunlight\", \"Translation\": \"luz solar\"}]");
+ Request request = new Request.Builder()
+ .url(url)
+ .post(body)
+ .addHeader("Ocp-Apim-Subscription-Key", key)
+ // location required if you're using a multi-service or regional (not global) resource.
+ .addHeader("Ocp-Apim-Subscription-Region", location)
+ .addHeader("Content-type", "application/json")
+ .build();
+ Response response = client.newCall(request).execute();
+ return response.body().string();
+ }
+
+ // This function prettifies the json response.
+ public static String prettify(String json_text) {
+ JsonParser parser = new JsonParser();
+ JsonElement json = parser.parse(json_text);
+ Gson gson = new GsonBuilder().setPrettyPrinting().create();
+ return gson.toJson(json);
+ }
+
+ public static void main(String[] args) {
+ try {
+ TranslatorText dictionaryExamplesRequest = new TranslatorText();
+ String response = dictionaryExamplesRequest.Post();
+ System.out.println(prettify(response));
+ } catch (Exception e) {
+ System.out.println(e);
+ }
+ }
+}
+```
+
+### [Node.js](#tab/nodejs)
+
+```javascript
+const axios = require('axios').default;
+const { v4: uuidv4 } = require('uuid');
+
+let key = "<YOUR-TRANSLATOR-KEY>";
+let endpoint = "https://api.cognitive.microsofttranslator.com";
+
+// Add your location, also known as region. The default is global.
+// This is required if using an Azure AI multi-service resource.
+let location = "<YOUR-RESOURCE-LOCATION>";
+
+axios({
+ baseURL: endpoint,
+ url: '/dictionary/examples',
+ method: 'post',
+ headers: {
+ 'Ocp-Apim-Subscription-Key': key,
+ // location required if you're using a multi-service or regional (not global) resource.
+ 'Ocp-Apim-Subscription-Region': location,
+ 'Content-type': 'application/json',
+ 'X-ClientTraceId': uuidv4().toString()
+ },
+ params: {
+ 'api-version': '3.0',
+ 'from': 'en',
+ 'to': 'es'
+ },
+ data: [{
+ 'text': 'sunlight',
+ 'translation': 'luz solar'
+ }],
+ responseType: 'json'
+}).then(function(response){
+ console.log(JSON.stringify(response.data, null, 4));
+})
+```
+
+### [Python](#tab/python)
+
+```python
+import requests, uuid, json
+
+# Add your key and endpoint
+key = "<YOUR-TRANSLATOR-KEY>"
+endpoint = "https://api.cognitive.microsofttranslator.com"
+
+# location, also known as region.
+# required if you're using a multi-service or regional (not global) resource. It can be found in the Azure portal on the Keys and Endpoint page.
+
+location = "<YOUR-RESOURCE-LOCATION>"
+
+path = '/dictionary/examples'
+constructed_url = endpoint + path
+
+params = {
+ 'api-version': '3.0',
+ 'from': 'en',
+ 'to': 'es'
+}
+
+headers = {
+ 'Ocp-Apim-Subscription-Key': key,
+ # location required if you're using a multi-service or regional (not global) resource.
+ 'Ocp-Apim-Subscription-Region': location,
+ 'Content-type': 'application/json',
+ 'X-ClientTraceId': str(uuid.uuid4())
+}
+
+# You can pass more than one object in body.
+body = [{
+ 'text': 'sunlight',
+ 'translation': 'luz solar'
+}]
+
+request = requests.post(constructed_url, params=params, headers=headers, json=body)
+response = request.json()
+
+print(json.dumps(response, sort_keys=True, ensure_ascii=False, indent=4, separators=(',', ': ')))
+```
+++
+After a successful call, you should see the following response. For more information about the response, see [Dictionary Examples](reference/v3-0-dictionary-examples.md).
+
+```json
+[
+ {
+ "normalizedSource":"sunlight",
+ "normalizedTarget":"luz solar",
+ "examples":[
+ {
+ "sourcePrefix":"You use a stake, silver, or ",
+ "sourceTerm":"sunlight",
+ "sourceSuffix":".",
+ "targetPrefix":"Se usa una estaca, plata, o ",
+ "targetTerm":"luz solar",
+ "targetSuffix":"."
+ },
+ {
+ "sourcePrefix":"A pocket of ",
+ "sourceTerm":"sunlight",
+ "sourceSuffix":".",
+ "targetPrefix":"Una bolsa de ",
+ "targetTerm":"luz solar",
+ "targetSuffix":"."
+ },
+ {
+ "sourcePrefix":"There must also be ",
+ "sourceTerm":"sunlight",
+ "sourceSuffix":".",
+ "targetPrefix":"También debe haber ",
+ "targetTerm":"luz solar",
+ "targetSuffix":"."
+ },
+ {
+ "sourcePrefix":"We were living off of current ",
+ "sourceTerm":"sunlight",
+ "sourceSuffix":".",
+ "targetPrefix":"Estábamos viviendo de la ",
+ "targetTerm":"luz solar",
+ "targetSuffix":" actual."
+ },
+ {
+ "sourcePrefix":"And they don't need unbroken ",
+ "sourceTerm":"sunlight",
+ "sourceSuffix":".",
+ "targetPrefix":"Y ellos no necesitan ",
+ "targetTerm":"luz solar",
+ "targetSuffix":" ininterrumpida."
+ },
+ {
+ "sourcePrefix":"We have lamps that give the exact equivalent of ",
+ "sourceTerm":"sunlight",
+ "sourceSuffix":".",
+ "targetPrefix":"Disponemos de lámparas que dan el equivalente exacto de ",
+ "targetTerm":"luz solar",
+ "targetSuffix":"."
+ },
+ {
+ "sourcePrefix":"Plants need water and ",
+ "sourceTerm":"sunlight",
+ "sourceSuffix":".",
+ "targetPrefix":"Las plantas necesitan agua y ",
+ "targetTerm":"luz solar",
+ "targetSuffix":"."
+ },
+ {
+ "sourcePrefix":"So this requires ",
+ "sourceTerm":"sunlight",
+ "sourceSuffix":".",
+ "targetPrefix":"Así que esto requiere ",
+ "targetTerm":"luz solar",
+ "targetSuffix":"."
+ },
+ {
+ "sourcePrefix":"And this pocket of ",
+ "sourceTerm":"sunlight",
+ "sourceSuffix":" freed humans from their ...",
+ "targetPrefix":"Y esta bolsa de ",
+ "targetTerm":"luz solar",
+ "targetSuffix":", liber├│ a los humanos de ..."
+ },
+ {
+ "sourcePrefix":"Since there is no ",
+ "sourceTerm":"sunlight",
+ "sourceSuffix":", the air within ...",
+ "targetPrefix":"Como no hay ",
+ "targetTerm":"luz solar",
+ "targetSuffix":", el aire atrapado en ..."
+ },
+ {
+ "sourcePrefix":"The ",
+ "sourceTerm":"sunlight",
+ "sourceSuffix":" shining through the glass creates a ...",
+ "targetPrefix":"La ",
+ "targetTerm":"luz solar",
+ "targetSuffix":" a través de la vidriera crea una ..."
+ },
+ {
+ "sourcePrefix":"Less ice reflects less ",
+ "sourceTerm":"sunlight",
+ "sourceSuffix":", and more open ocean ...",
+ "targetPrefix":"Menos hielo refleja menos ",
+ "targetTerm":"luz solar",
+ "targetSuffix":", y más mar abierto ..."
+ },
+ {
+ "sourcePrefix":"",
+ "sourceTerm":"Sunlight",
+ "sourceSuffix":" is most intense at midday, so ...",
+ "targetPrefix":"La ",
+ "targetTerm":"luz solar",
+ "targetSuffix":" es más intensa al mediodía, por lo que ..."
+ },
+ {
+ "sourcePrefix":"... capture huge amounts of ",
+ "sourceTerm":"sunlight",
+ "sourceSuffix":", so fueling their growth.",
+ "targetPrefix":"... capturan enormes cantidades de ",
+ "targetTerm":"luz solar",
+ "targetSuffix":" que favorecen su crecimiento."
+ },
+ {
+ "sourcePrefix":"... full height, giving more direct ",
+ "sourceTerm":"sunlight",
+ "sourceSuffix":" in the winter.",
+ "targetPrefix":"... altura completa, dando más ",
+ "targetTerm":"luz solar",
+ "targetSuffix":" directa durante el invierno."
+ }
+ ]
+ }
+]
+```
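+
+Each example splits the sentence into a prefix, the term itself, and a suffix, so you can highlight the term in context. A minimal Python sketch (using one trimmed entry from the response above, not a live call) shows how to reassemble the full sentences:
+
+```python
+# One trimmed example from the dictionary/examples response shown above
+examples = [
+    {
+        "sourcePrefix": "Plants need water and ",
+        "sourceTerm": "sunlight",
+        "sourceSuffix": ".",
+        "targetPrefix": "Las plantas necesitan agua y ",
+        "targetTerm": "luz solar",
+        "targetSuffix": ".",
+    }
+]
+
+# Reassemble each example into full source and target sentences
+for ex in examples:
+    source = ex["sourcePrefix"] + ex["sourceTerm"] + ex["sourceSuffix"]
+    target = ex["targetPrefix"] + ex["targetTerm"] + ex["targetSuffix"]
+    print(source, "=>", target)
+```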
+
+## Troubleshooting
+
+### Common HTTP status codes
+
+| HTTP status code | Description | Possible reason |
+||-|--|
+| 200 | OK | The request was successful. |
+| 400 | Bad Request | A required parameter is missing, empty, or null. Or, the value passed to either a required or optional parameter is invalid. A common issue is a header that is too long. |
+| 401 | Unauthorized | The request isn't authorized. Check to make sure your key or token is valid and in the correct region. *See also* [Authentication](reference/v3-0-reference.md#authentication).|
+| 429 | Too Many Requests | You've exceeded the quota or rate of requests allowed for your subscription. |
+| 502 | Bad Gateway | Network or server-side issue. May also indicate invalid headers. |
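+
+If your application regularly hits the 429 status code, consider throttling requests or retrying with a backoff. The following minimal Python sketch (an illustrative helper, not part of any Translator SDK) retries a request a few times when the service reports too many requests:
+
+```python
+import time
+import requests
+
+def post_with_retry(url, retries=3, **kwargs):
+    """Send a POST request, backing off and retrying when a 429 is returned."""
+    for attempt in range(retries):
+        response = requests.post(url, **kwargs)
+        if response.status_code != 429:
+            return response
+        # Wait progressively longer before trying again
+        time.sleep(2 ** attempt)
+    return response
+```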
+
+### Java users
+
+If you're encountering connection issues, your TLS/SSL certificate may have expired. To resolve this issue, install the [DigiCertGlobalRootG2.crt](http://cacerts.digicert.com/DigiCertGlobalRootG2.crt) certificate in your private store.
+
+## Next steps
+
+> [!div class="nextstepaction"]
+> [Customize and improve translation](custom-translator/concepts/customization.md)
ai-services Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/translator/whats-new.md
+
+ Title: What's new in Azure AI Translator?
+
+description: Learn about the latest changes to the Translator Service API.
++++++ Last updated : 07/18/2023++
+<!-- markdownlint-disable MD024 -->
+<!-- markdownlint-disable MD036 -->
+<!-- markdownlint-disable MD001 -->
+
+# What's new in Azure AI Translator?
+
+Bookmark this page to stay up to date with release notes, feature enhancements, and our newest documentation.
+
+Translator is a language service that enables users to translate text and documents, helps entities expand their global outreach, and supports preservation of at-risk and endangered languages.
+
+Translator service supports language translation for more than 100 languages. If your language community is interested in partnering with Microsoft to add your language to Translator, contact us via the [Translator community partner onboarding form](https://forms.office.com/pages/responsepage.aspx?id=v4j5cvGGr0GRqy180BHbR-riVR3Xj0tOnIRdZOALbM9UOU1aMlNaWFJOOE5YODhRR1FWVzY0QzU1OS4u).
+
+## July 2023
++
+* Document Translation REST API v1.1 is now Generally Available (GA).
+
+## June 2023
+
+**Documentation updates**
+
+* The [Document Translation SDK overview](document-translation/document-sdk-overview.md) is now available to provide guidance and resources for the .NET/C# and Python SDKs.
+* The [Document Translation SDK quickstart](document-translation/quickstarts/document-translation-sdk.md) is now available for the C# and Python programming languages.
+
+## May 2023
+
+**Announcing new releases for Build 2023**
+
+### Text Translation SDK (preview)
+
+* The Text translation SDKs are now available in public preview for C#/.NET, Java, JavaScript/TypeScript, and Python programming languages.
+* To learn more, see [Text translation SDK overview](text-sdk-overview.md).
+* To get started, try a [Text Translation SDK quickstart](document-translation/document-sdk-overview.md) using a programming language of your choice.
+
+### Microsoft Translator V3 Connector (preview)
+
+The Translator V3 Connector is now available in public preview. The connector creates a connection between your Translator Service instance and Microsoft Power Automate enabling you to use one or more prebuilt operations as steps in your apps and workflows. To learn more, see the following documentation:
+
+* [Automate document translation](connector/document-translation-flow.md)
+* [Automate text translation](connector/text-translator-flow.md)
+
+## February 2023
+
+[**Document Translation in Language Studio**](document-translation/language-studio.md) is now available for Public Preview. The feature provides a no-code user interface to interactively translate documents from local or Azure Blob Storage.
+
+## November 2022
+
+### Custom Translator stable GA v2.0 release
+
+Custom Translator v2.0 is generally available and ready for use in your production applications!
+
+## June 2022
+
+### Document Translation stable GA 1.0.0 release
+
+Document Translation .NET and Python client-library SDKs are now generally available and ready for use in production applications!
+
+### [**C#**](#tab/csharp)
+
+**Version 1.0.0 (GA)** </br>
+**2022-06-07**
+
+##### [README](https://github.com/Azure/azure-sdk-for-net/blob/Azure.AI.Translation.Document_1.0.0/sdk/translation/Azure.AI.Translation.Document/README.md)
+
+##### [**Changelog/Release History**](https://github.com/Azure/azure-sdk-for-net/blob/Azure.AI.Translation.Document_1.0.0/sdk/translation/Azure.AI.Translation.Document/CHANGELOG.md)
+
+##### [**Package (NuGet)**](https://www.nuget.org/packages/Azure.AI.Translation.Document)
+
+##### [**SDK reference documentation**](/dotnet/api/overview/azure/AI.Translation.Document-readme)
+
+### [Python](#tab/python)
+
+**Version 1.0.0 (GA)** </br>
+**2022-06-07**
+
+##### [README](https://github.com/Azure/azure-sdk-for-python/blob/azure-ai-translation-document_1.0.0/sdk/translation/azure-ai-translation-document/README.md)
+
+##### [**Changelog/Release History**](https://github.com/Azure/azure-sdk-for-python/blob/azure-ai-translation-document_1.0.0/sdk/translation/azure-ai-translation-document/CHANGELOG.md)
+
+##### [**Package (PyPI)**](https://pypi.org/project/azure-ai-translation-document/1.0.0/)
+
+##### [**SDK reference documentation**](/python/api/overview/azure/ai-translation-document-readme?view=azure-python&preserve-view=true)
+++
+## May 2022
+
+### [Document Translation support for scanned PDF documents](https://aka.ms/blog_ScannedPdfTranslation)
+
+* Document Translation uses optical character recognition (OCR) technology to extract and translate text in scanned PDF documents while retaining the original layout.
+
+## April 2022
+
+### [Text and document translation support for Faroese](https://www.microsoft.com/translator/blog/2022/04/25/introducing-faroese-translation-for-faroese-flag-day/)
+
+* Translator service has [text and document translation language support](language-support.md) for Faroese, a Germanic language originating on the Faroe Islands. The Faroe Islands are a self-governing region within the Kingdom of Denmark located between Norway and Iceland. Faroese is descended from Old West Norse spoken by Vikings in the Middle Ages.
+
+### [Text and document translation support for Basque and Galician](https://www.microsoft.com/translator/blog/2022/04/12/break-the-language-barrier-with-translator-now-with-two-new-languages/)
+
+* Translator service has [text and document translation language support](language-support.md) for Basque and Galician. Basque is a language isolate, meaning it isn't related to any other modern language. It's spoken in parts of northern Spain and southern France. Galician is spoken in northern Portugal and western Spain. Both Basque and Galician are official languages of Spain.
+
+## March 2022
+
+### [Text and document translation support for Somali and Zulu languages](https://www.microsoft.com/translator/blog/2022/03/29/translator-welcomes-two-new-languages-somali-and-zulu/)
+
+* Translator service has [text and document translation language support](language-support.md) for Somali and Zulu. The Somali language, spoken throughout Africa, has more than 21 million speakers and is in the Cushitic branch of the Afroasiatic language family. The Zulu language has 12 million speakers and is recognized as one of South Africa's 11 official languages.
+
+## February 2022
+
+### [Text and document translation support for Upper Sorbian](https://www.microsoft.com/translator/blog/2022/02/21/translator-celebrates-international-mother-language-day-by-adding-upper-sorbian/),
+
+* Translator service has [text and document translation language support](language-support.md) for Upper Sorbian. The Translator team has worked tirelessly to preserve indigenous and endangered languages around the world. Language data provided by the Upper Sorbian language community was instrumental in introducing this language to Translator.
+
+### [Text and document translation support for Inuinnaqtun and Romanized Inuktitut](https://www.microsoft.com/translator/blog/2022/02/01/introducing-inuinnaqtun-and-romanized-inuktitut/)
+
+* Translator service has [text and document translation language support](language-support.md) for Inuinnaqtun and Romanized Inuktitut. Both are indigenous languages that are essential and treasured foundations of Canadian culture and society.
+
+## January 2022
+
+### Custom Translator portal (v2.0) public preview
+
+* The [Custom Translator portal (v2.0)](https://portal.customtranslator.azure.ai/) is now in public preview and includes significant changes that makes it easier to create your custom translation systems.
+
+* To learn more, see our Custom Translator [documentation](custom-translator/overview.md) and try our [quickstart](custom-translator/quickstart.md) for step-by-step instructions.
+
+## October 2021
+
+### [Text and document support for more than 100 languages](https://www.microsoft.com/translator/blog/2021/10/11/translator-now-translates-more-than-100-languages/)
+
+* Translator service has added [text and document language support](language-support.md) for the following languages:
+ * **Bashkir**. A Turkic language spoken by approximately 1.4 million native speakers. It has three regional language groups: Southern, Eastern, and Northwestern.
+ * **Dhivehi**. Also known as Maldivian, it's an Indo-Aryan language primarily spoken in the island nation of Maldives.
+ * **Georgian**. A Kartvelian language that is the official language of Georgia. It has approximately 4 million speakers.
+ * **Kyrgyz**. A Turkic language that is the official language of Kyrgyzstan.
+  * **Macedonian (Cyrillic)**. An Eastern South Slavic language that is the official language of North Macedonia. It has approximately 2 million speakers.
+ * **Mongolian (Traditional)**. Traditional Mongolian script is the first writing system created specifically for the Mongolian language. Mongolian is the official language of Mongolia.
+  * **Tatar**. A Turkic language used by speakers in modern Tatarstan. It's closely related to Crimean Tatar and Siberian Tatar, but each belongs to a different subgroup.
+ * **Tibetan**. It has nearly 6 million speakers and can be found in many Tibetan Buddhist publications.
+ * **Turkmen**. The official language of Turkmenistan. It's similar to Turkish and Azerbaijani.
+ * **Uyghur**. A Turkic language with nearly 15 million speakers. It's spoken primarily in Western China.
+ * **Uzbek (Latin)**. A Turkic language that is the official language of Uzbekistan. It has 34 million native speakers.
+
+These additions bring the total number of languages supported in Translator to 103.
+
+## August 2021
+
+### [Text and document translation support for literary Chinese](https://www.microsoft.com/translator/blog/2021/08/25/microsoft-translator-releases-literary-chinese-translation/)
+
+* Azure AI Translator has [text and document language support](language-support.md) for literary Chinese. Classical or literary Chinese is a traditional style of written Chinese used by traditional Chinese poets and in ancient Chinese poetry.
+
+## June 2021
+
+### [Document Translation client libraries for C#/.NET and Python](document-translation/document-sdk-overview.md): now available in prerelease
+
+## May 2021
+
+### [Document Translation: now generally available](https://www.microsoft.com/translator/blog/2021/05/25/translate-full-documents-with-document-translation-%e2%80%95-now-in-general-availability/)
+
+* **Feature release**: Translator's [Document Translation](document-translation/overview.md) feature is generally available. Document Translation is designed to translate large files and batch documents with rich content while preserving original structure and format. You can also use custom glossaries and custom models built with [Custom Translator](custom-translator/overview.md) to ensure your documents are translated quickly and accurately.
+
+### [Translator service available in containers](https://www.microsoft.com/translator/blog/2021/05/25/translator-service-now-available-in-containers/)
+
+* **New release**: Translator service is available in containers as a gated preview. [Submit an online request](https://aka.ms/csgate-translator) and have it approved prior to getting started. Containers enable you to run several Translator service features in your own environment and are great for specific security and data governance requirements. *See*, [Install and run Translator containers (preview)](containers/translator-how-to-install-container.md)
+
+## February 2021
+
+### [Document Translation public preview](https://www.microsoft.com/translator/blog/2021/02/17/introducing-document-translation/)
+
+* **New release**: [Document Translation](document-translation/overview.md) is available as a preview feature of the Translator Service. Preview features are still in development and aren't meant for production use. They're made available on a "preview" basis so customers can get early access and provide feedback. Document Translation enables you to translate large documents and process batch files while still preserving the original structure and format. _See_ [Microsoft Translator blog: Introducing Document Translation](https://www.microsoft.com/translator/blog/2021/02/17/introducing-document-translation/)
+
+### [Text and document translation support for nine added languages](https://www.microsoft.com/translator/blog/2021/02/22/microsoft-translator-releases-nine-new-languages-for-international-mother-language-day-2021/)
+
+* Translator service has [text and document translation language support](language-support.md) for the following languages:
+
+    * **Albanian**. A language that forms its own independent branch of the Indo-European family, spoken by nearly 8 million people.
+ * **Amharic**. An official language of Ethiopia spoken by approximately 32 million people. It's also the liturgical language of the Ethiopian Orthodox church.
+ * **Armenian**. The official language of Armenia with 5-7 million speakers.
+ * **Azerbaijani**. A Turkic language spoken by approximately 23 million people.
+ * **Khmer**. The official language of Cambodia with approximately 16 million speakers.
+ * **Lao**. The official language of Laos with 30 million native speakers.
+ * **Myanmar**. The official language of Myanmar, spoken as a first language by approximately 33 million people.
+ * **Nepali**. The official language of Nepal with approximately 16 million native speakers.
+ * **Tigrinya**. A language spoken in Eritrea and northern Ethiopia with nearly 11 million speakers.
+
+## January 2021
+
+### [Text and document translation support for Inuktitut](https://www.microsoft.com/translator/blog/2021/01/27/inuktitut-is-now-available-in-microsoft-translator/)
+
+* Translator service has [text and document translation language support](language-support.md) for **Inuktitut**, one of the principal Inuit languages of Canada. Inuktitut is one of eight official aboriginal languages in the Northwest Territories.
+
+## November 2020
+
+### [Custom Translator V2 is generally available](https://www.microsoft.com/translator/blog/2021/01/27/inuktitut-is-now-available-in-microsoft-translator/)
+
+* **New release**: The Custom Translator V2 upgrade is now generally available (GA). The V2 platform enables you to build custom models with all document types (training, testing, tuning, phrase dictionary, and sentence dictionary). _See_ [Microsoft Translator blog: Custom Translator pushes the translation quality bar closer to human parity](https://www.microsoft.com/translator/blog/2020/11/12/microsoft-custom-translator-pushes-the-translation-quality-bar-closer-to-human-parity).
+
+## October 2020
+
+### [Text and document translation support for Canadian French](https://www.microsoft.com/translator/blog/2020/10/20/cest-tiguidou-ca-translator-adds-canadian-french/)
+
+* Translator service has [text and document translation language support](language-support.md) for **Canadian French**. Canadian French and European French are similar to one another and are mutually understandable. However, there can be significant differences in vocabulary, grammar, writing, and pronunciation. Over 7 million Canadians (20 percent of the population) speak French as their first language.
+
+## September 2020
+
+### [Text and document translation support for Assamese and Axomiya](https://www.microsoft.com/translator/blog/2020/09/29/assamese-text-translation-is-here/)
+
+* Translator service has [text and document translation language support](language-support.md) for **Assamese**, also known as **Axomiya**. Assamese / Axomiya is primarily spoken in Eastern India by approximately 14 million people.
+
+## August 2020
+
+### [Introducing virtual networks and private links for translator](https://www.microsoft.com/translator/blog/2020/08/19/virtual-networks-and-private-links-for-translator-are-now-generally-available/)
+
+* **New release**: Virtual network capabilities and Azure private links for Translator are generally available (GA). Azure private links allow you to access Translator and your Azure hosted services over a private endpoint in your virtual network. You can use private endpoints for Translator to allow clients on a virtual network to securely access data over a private link. _See_ [Microsoft Translator blog: Virtual Networks and Private Links for Translator are generally available](https://www.microsoft.com/translator/blog/2020/08/19/virtual-networks-and-private-links-for-translator-are-now-generally-available/)
+
+### [Custom Translator upgrade to v2](https://www.microsoft.com/translator/blog/2020/08/05/custom-translator-v2-is-now-available/)
+
+* **New release**: Custom Translator V2 phase 1 is available. The newest version of Custom Translator rolls out in two phases to provide quicker translation and quality improvements, and allow you to keep your training data in the region of your choice. *See* [Microsoft Translator blog: Custom Translator: Introducing higher quality translations and regional data residency](https://www.microsoft.com/translator/blog/2020/08/05/custom-translator-v2-is-now-available/)
+
+### [Text and document translation support for two Kurdish regional languages](https://www.microsoft.com/translator/blog/2020/08/20/translator-adds-two-kurdish-dialects-for-text-translation/)
+
+* **Northern (Kurmanji) Kurdish** (15 million native speakers) and **Central (Sorani) Kurdish** (7 million native speakers) are now supported. Most Kurdish texts are written in Kurmanji and Sorani.
+
+### [Text and document translation support for two Afghan languages](https://www.microsoft.com/translator/blog/2020/08/17/translator-adds-dari-and-pashto-text-translation/)
+
+* **Dari** (20 million native speakers) and **Pashto** (40 - 60 million speakers) are the two official languages of Afghanistan.
+
+### [Text and document translation support for Odia](https://www.microsoft.com/translator/blog/2020/08/13/odia-language-text-translation-is-now-available-in-microsoft-translator/)
+
+* **Odia** is a classical language spoken by 35 million people in India and across the world. It joins **Bangla**, **Gujarati**, **Hindi**, **Kannada**, **Malayalam**, **Marathi**, **Punjabi**, **Tamil**, **Telugu**, **Urdu**, and **English** as the 12th most used language of India supported by Microsoft Translator.
ai-services Word Alignment https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/translator/word-alignment.md
+
+ Title: Word alignment - Translator
+
+description: To receive alignment information, use the Translate method and include the optional includeAlignment parameter.
++++++ Last updated : 07/18/2023+++
+# How to receive word alignment information
+
+## Receiving word alignment information
+To receive alignment information, call the `translate` endpoint and include the optional `includeAlignment` query parameter set to `true`.
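+
+For example, a minimal Python request, modeled on the Python samples shown earlier in this documentation and provided here only as an illustration, might look like the following. The alignment shown in the example JSON later in this article matches a source sentence like this one.
+
+```python
+import requests, uuid
+
+key = "<YOUR-TRANSLATOR-KEY>"
+endpoint = "https://api.cognitive.microsofttranslator.com"
+# location is required if you're using a multi-service or regional (not global) resource.
+location = "<YOUR-RESOURCE-LOCATION>"
+
+params = {
+    "api-version": "3.0",
+    "from": "en",
+    "to": "de",
+    "includeAlignment": "true"  # request word alignment information
+}
+
+headers = {
+    "Ocp-Apim-Subscription-Key": key,
+    "Ocp-Apim-Subscription-Region": location,
+    "Content-type": "application/json",
+    "X-ClientTraceId": str(uuid.uuid4())
+}
+
+body = [{"text": "Can I drive your car tomorrow?"}]
+
+response = requests.post(endpoint + "/translate", params=params, headers=headers, json=body)
+print(response.json())
+```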
+
+## Alignment information format
+Alignment is returned as a string value of the following format for every word of the source. The information for each word is separated by a space, including for non-space-separated languages (scripts) like Chinese:
+
+[[SourceTextStartIndex]:[SourceTextEndIndex]–[TgtTextStartIndex]:[TgtTextEndIndex]] *
+
+Example alignment string: "0:0-7:10 1:2-11:20 3:4-0:3 3:4-4:6 5:5-21:21".
+
+In other words, the colon separates start and end index, the dash separates the languages, and space separates the words. One word may align with zero, one, or multiple words in the other language, and the aligned words may be non-contiguous. When no alignment information is available, the Alignment element will be empty. The method returns no error in that case.
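+
+Because the format is regular, it's straightforward to split an alignment string into index pairs. The following minimal Python sketch (a hypothetical helper, not part of any Translator SDK) parses the example string above:
+
+```python
+def parse_alignment(proj):
+    """Split a Translator alignment string into ((src_start, src_end), (tgt_start, tgt_end)) pairs."""
+    pairs = []
+    for token in proj.split():          # space separates the word mappings
+        src, tgt = token.split("-")     # dash separates source from target
+        src_start, src_end = (int(i) for i in src.split(":"))  # colon separates start and end index
+        tgt_start, tgt_end = (int(i) for i in tgt.split(":"))
+        pairs.append(((src_start, src_end), (tgt_start, tgt_end)))
+    return pairs
+
+print(parse_alignment("0:0-7:10 1:2-11:20 3:4-0:3 3:4-4:6 5:5-21:21"))
+```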
+
+## Restrictions
+Alignment is only returned for a subset of the language pairs at this point:
+
+* From English to any other language
+* From any other language to English, except for Chinese Simplified, Chinese Traditional, and Latvian to English
+* From Japanese to Korean or from Korean to Japanese
+
+You won't receive alignment information if the sentence is a canned translation. Examples of canned translations are "This is a test," "I love you," and other high-frequency sentences.
+
+## Example
+
+Example JSON
+
+```json
+[
+ {
+ "translations": [
+ {
+ "text": "Kann ich morgen Ihr Auto fahren?",
+ "to": "de",
+ "alignment": {
+ "proj": "0:2-0:3 4:4-5:7 6:10-25:30 12:15-16:18 17:19-20:23 21:28-9:14 29:29-31:31"
+ }
+ }
+ ]
+ }
+]
+```
ai-services Use Key Vault https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/use-key-vault.md
+
+ Title: Develop Azure AI services applications with Key Vault
+description: Learn how to develop Azure AI services applications securely by using Key Vault.
+++++ Last updated : 09/13/2022
+zone_pivot_groups: programming-languages-set-twenty-eight
++
+# Develop Azure AI services applications with Key Vault
+
+Use this article to learn how to develop Azure AI services applications securely by using [Azure Key Vault](../key-vault/general/overview.md).
+
+Key Vault reduces the chances that secrets may be accidentally leaked, because you won't store security information in your application.
+
+## Prerequisites
++
+* A valid Azure subscription - [Create one for free](https://azure.microsoft.com/free)
+* [Visual Studio IDE](https://visualstudio.microsoft.com/vs/)
+* An [Azure Key Vault](../key-vault/general/quick-create-portal.md)
+* [A multi-service resource or a resource for a specific service](./multi-service-resource.md?pivots=azportal)
+++
+* A valid Azure subscription - [Create one for free](https://azure.microsoft.com/free).
+* [Python 3.7 or later](https://www.python.org/)
+* [Azure CLI](/cli/azure/install-azure-cli) or [Azure PowerShell](/powershell/azure/install-azure-powershell)
+* An [Azure Key Vault](../key-vault/general/quick-create-portal.md)
+* [A multi-service resource or a resource for a specific service](./multi-service-resource.md?pivots=azportal)
+++
+* A valid Azure subscription - [Create one for free](https://azure.microsoft.com/free).
+* [Java Development Kit (JDK) version 8 or above](/azure/developer/java/fundamentals/)
+* [Azure CLI](/cli/azure/install-azure-cli) or [Azure PowerShell](/powershell/azure/install-azure-powershell)
+* An [Azure Key Vault](../key-vault/general/quick-create-portal.md)
+* [A multi-service resource or a resource for a specific service](./multi-service-resource.md?pivots=azportal)
+++
+* A valid Azure subscription - [Create one for free](https://azure.microsoft.com/free).
+* [Current Node.js v14 LTS or later](https://nodejs.org/)
+* [Azure CLI](/cli/azure/install-azure-cli) or [Azure PowerShell](/powershell/azure/install-azure-powershell)
+* An [Azure Key Vault](../key-vault/general/quick-create-portal.md)
+* [A multi-service resource or a resource for a specific service](./multi-service-resource.md?pivots=azportal)
++
+> [!NOTE]
+> Review the documentation and quickstart articles for the Azure AI service you're using to get an understanding of:
+> * The credentials and other information you will need to send API calls.
+> * The packages and code you will need to run your application.
+
+## Get your credentials from your Azure AI services resource
+
+Before you add your credentials to your Azure key vault, you need to retrieve them from your Azure AI services resource. For example, if your service needs a key and endpoint, you would find them using the following steps:
+
+1. Navigate to your Azure resource in the [Azure portal](https://portal.azure.com/).
+1. From the collapsible menu on the left, select **Keys and Endpoint**.
+
+ :::image type="content" source="language-service/custom-text-classification/media/get-endpoint-azure.png" alt-text="A screenshot showing the key and endpoint page in the Azure portal." lightbox="language-service/custom-text-classification/media/get-endpoint-azure.png":::
+
+Some Azure AI services require different information to authenticate API calls, such as a key and region. Make sure to retrieve this information before continuing.
+
+## Add your credentials to your key vault
+
+For your application to retrieve and use your credentials to authenticate API calls, you will need to add them to your [key vault secrets](../key-vault/secrets/about-secrets.md).
+
+Repeat these steps to generate a secret for each required resource credential, for example a key and an endpoint. You'll use these secret names later to authenticate your application.
+
+1. Open a new browser tab or window. Navigate to your key vault in the <a href="https://portal.azure.com/#create/Microsoft.CognitiveServicesSpeechServices" title="Go to Azure portal" target="_blank">Azure portal</a>.
+1. From the collapsible menu on the left, select **Objects** > **Secrets**.
+1. Select **Generate/Import**.
+
+ :::image type="content" source="media/key-vault/store-secrets.png" alt-text="A screenshot showing the key vault key page in the Azure portal." lightbox="media/key-vault/store-secrets.png":::
+
+1. On the **Create a secret** screen, enter the following values:
+
+ |Name | Value |
+ |||
+ |Upload options | Manual |
+ |Name | A secret name for your key or endpoint. For example: "CognitiveServicesKey" or "CognitiveServicesEndpoint" |
+ |Value | Your Azure AI services resource key or endpoint. |
+
+ Later your application will use the secret "Name" to securely access the "Value".
+
+1. Leave the other values as their defaults. Select **Create**.
+
+ >[!TIP]
+ > Make sure to remember the names that you set for your secrets, as you'll use them later in your application.
+
+You should now have named secrets for your resource information.
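+
+If you'd rather create the secrets from code instead of the portal, the following minimal Python sketch (an optional alternative, assuming you've installed `azure-identity` and `azure-keyvault-secrets` and signed in with an identity that's allowed to set secrets) does the equivalent. The placeholder values are hypothetical and should be replaced with your own.
+
+```python
+from azure.identity import DefaultAzureCredential
+from azure.keyvault.secrets import SecretClient
+
+# Replace the placeholders with your key vault name and your resource's key and endpoint
+vault_url = "https://<Your-Key-Vault-Name>.vault.azure.net"
+client = SecretClient(vault_url=vault_url, credential=DefaultAzureCredential())
+
+client.set_secret("CognitiveServicesKey", "<your-resource-key>")
+client.set_secret("CognitiveServicesEndpoint", "<your-resource-endpoint>")
+```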
+
+## Create an environment variable for your key vault's name
+
+We recommend creating an environment variable for your Azure key vault's name. Your application will read this environment variable at runtime to retrieve your key and endpoint information.
+
+To set an environment variable, use one of the following commands. Use `KEY_VAULT_NAME` as the name of the environment variable, and replace `Your-Key-Vault-Name` with the name of your key vault, which will be stored in the environment variable.
+
+# [Azure CLI](#tab/azure-cli)
+
+Create and assign a persisted environment variable with your key vault name as its value.
+
+```CMD
+setx KEY_VAULT_NAME "Your-Key-Vault-Name"
+```
+
+In a new instance of the **Command Prompt**, read the environment variable.
+
+```CMD
+echo %KEY_VAULT_NAME%
+```
+
+# [PowerShell](#tab/powershell)
+
+Create and assign a persisted environment variable. Replace `Your-Key-Vault-Name` with the name of your key vault.
+
+```powershell
+[System.Environment]::SetEnvironmentVariable('KEY_VAULT_NAME', 'Your-Key-Vault-Name', 'User')
+```
+
+In a new instance of **Windows PowerShell**, read the environment variable.
+
+```powershell
+[System.Environment]::GetEnvironmentVariable('KEY_VAULT_NAME')
+```
++++
+## Authenticate to Azure using Visual Studio
+
+Developers using Visual Studio 2017 or later can authenticate an Azure Active Directory account through Visual Studio. This enables you to access secrets in your key vault by signing into your Azure subscription from within the IDE.
+
+To authenticate in Visual Studio, select **Tools** from the top navigation menu, and select **Options**. Navigate to the **Azure Service Authentication** option to sign in with your user name and password.
+
+## Authenticate using the command line
++
+## Create a new C# application
+
+Using the Visual Studio IDE, create a new .NET Core console app. This will create a "Hello World" project with a single C# source file: `program.cs`.
+
+Install the following client libraries by right-clicking on the solution in the **Solution Explorer** and selecting **Manage NuGet Packages**. In the package manager that opens, select **Browse**, search for the following libraries, and select **Install** for each:
+
+* `Azure.Security.KeyVault.Secrets`
+* `Azure.Identity`
+
+## Import the example code
+
+Copy the following example code into your `program.cs` file. Replace `Your-Key-Secret-Name` and `Your-Endpoint-Secret-Name` with the secret names that you set in your key vault.
+
+```csharp
+using System;
+using System.Threading.Tasks;
+using Azure;
+using Azure.Identity;
+using Azure.Security.KeyVault.Secrets;
+using System.Net;
+
+namespace key_vault_console_app
+{
+ class Program
+ {
+ static async Task Main(string[] args)
+ {
+ //Name of your key vault
+ var keyVaultName = Environment.GetEnvironmentVariable("KEY_VAULT_NAME");
+
+ //variables for retrieving the key and endpoint from your key vault.
+ //Set these variables to the names you created for your secrets
+ const string keySecretName = "Your-Key-Secret-Name";
+ const string endpointSecretName = "Your-Endpoint-Secret-Name";
+
+ //Endpoint for accessing your key vault
+ var kvUri = $"https://{keyVaultName}.vault.azure.net";
+
+ var keyVaultClient = new SecretClient(new Uri(kvUri), new DefaultAzureCredential());
+
+ Console.WriteLine($"Retrieving your secrets from {keyVaultName}.");
+
+ //Key and endpoint secrets retrieved from your key vault
+ var keySecret = await keyVaultClient.GetSecretAsync(keySecretName);
+ var endpointSecret = await keyVaultClient.GetSecretAsync(endpointSecretName);
+ Console.WriteLine($"Your key secret value is: {keySecret.Value.Value}");
+ Console.WriteLine($"Your endpoint secret value is: {endpointSecret.Value.Value}");
+ Console.WriteLine("Secrets retrieved successfully");
+
+ }
+ }
+}
+```
+
+## Run the application
+
+Run the application by selecting the **Debug** button at the top of Visual Studio. Your key and endpoint secrets will be retrieved from your key vault.
+
+## Send a test Language service call (optional)
+
+If you're using a multi-service resource or Language resource, you can update [your application](#create-a-new-c-application) by following these steps to send an example Named Entity Recognition call by retrieving a key and endpoint from your key vault.
+
+1. Install the `Azure.AI.TextAnalytics` library by right-clicking on the solution in the **Solution Explorer** and selecting **Manage NuGet Packages**. In the package manager that opens, select **Browse**, search for `Azure.AI.TextAnalytics`, and select **Install**.
+
+1. Add the following directive to the top of your `program.cs` file.
+
+ ```csharp
+ using Azure.AI.TextAnalytics;
+ ```
+
+1. Add the following code sample to your application.
+
+ ```csharp
+ // Example method for extracting named entities from text
+ private static void EntityRecognitionExample(string keySecret, string endpointSecret)
+ {
+ //String to be sent for Named Entity Recognition
+ var exampleString = "I had a wonderful trip to Seattle last week.";
+
+ AzureKeyCredential azureKeyCredential = new AzureKeyCredential(keySecret);
+ Uri endpoint = new Uri(endpointSecret);
+ var languageServiceClient = new TextAnalyticsClient(endpoint, azureKeyCredential);
+
+ Console.WriteLine($"Sending a Named Entity Recognition (NER) request");
+ var response = languageServiceClient.RecognizeEntities(exampleString);
+ Console.WriteLine("Named Entities:");
+ foreach (var entity in response.Value)
+ {
+ Console.WriteLine($"\tText: {entity.Text},\tCategory: {entity.Category},\tSub-Category: {entity.SubCategory}");
+ Console.WriteLine($"\t\tScore: {entity.ConfidenceScore:F2},\tLength: {entity.Length},\tOffset: {entity.Offset}\n");
+ }
+ }
+ ```
+
+3. Add the following code to call `EntityRecognitionExample()` from your main method, with your key and endpoint values.
+
+ ```csharp
+ EntityRecognitionExample(keySecret.Value.Value, endpointSecret.Value.Value);
+ ```
+
+4. Run the application.
+++
+## Authenticate your application
++
+## Create a Python application
+
+Create a new folder named `keyVaultExample`. Then use your preferred code editor to create a file named `program.py` inside the newly created folder.
+
+### Install Key Vault and Language service packages
+
+1. In a terminal or command prompt, navigate to your project folder and install the Azure Active Directory identity library:
+
+ ```terminal
+ pip install azure-identity
+ ```
+
+1. Install the Key Vault secrets library:
+
+ ```terminal
+ pip install azure-keyvault-secrets
+ ```
+
+## Import the example code
+
+Add the following code sample to the file named `program.py`. Replace `Your-Key-Secret-Name` and `Your-Endpoint-Secret-Name` with the secret names that you set in your key vault.
+
+```python
+import os
+from azure.keyvault.secrets import SecretClient
+from azure.identity import DefaultAzureCredential
+from azure.core.credentials import AzureKeyCredential
+
+keyVaultName = os.environ["KEY_VAULT_NAME"]
+
+# Set these variables to the names you created for your secrets
+keySecretName = "Your-Key-Secret-Name"
+endpointSecretName = "Your-Endpoint-Secret-Name"
+
+# URI for accessing key vault
+KVUri = f"https://{keyVaultName}.vault.azure.net"
+
+# Instantiate the client and retrieve secrets
+credential = DefaultAzureCredential()
+kv_client = SecretClient(vault_url=KVUri, credential=credential)
+
+print(f"Retrieving your secrets from {keyVaultName}.")
+
+retrieved_key = kv_client.get_secret(keySecretName).value
+retrieved_endpoint = kv_client.get_secret(endpointSecretName).value
+
+print(f"Your secret key value is {retrieved_key}.");
+print(f"Your secret endpoint value is {retrieved_endpoint}.");
+
+```
+
+## Run the application
+
+Use the following command to run the application. Your key and endpoint secrets will be retrieved from your key vault.
+
+```terminal
+python ./program.py
+```
+
+## Send a test Language service call (optional)
+
+If you're using a multi-service resource or Language resource, you can update [your application](#create-a-python-application) by following these steps to send an example Named Entity Recognition call that uses the key and endpoint retrieved from your key vault.
+
+1. Install the Language service library:
+
+ ```console
+ pip install azure-ai-textanalytics==5.1.0
+ ```
+
+1. Add the following code to your application:
+
+ ```python
+ from azure.ai.textanalytics import TextAnalyticsClient
+ # Authenticate the key vault secrets client using your key and endpoint
+ azure_key_credential = AzureKeyCredential(retrieved_key)
+ # Now you can use key vault credentials with the Language service
+ language_service_client = TextAnalyticsClient(
+ endpoint=retrieved_endpoint,
+ credential=azure_key_credential)
+
+ # Example of recognizing entities from text
+
+ print("Sending NER request")
+
+ try:
+ documents = ["I had a wonderful trip to Seattle last week."]
+ result = language_service_client.recognize_entities(documents = documents)[0]
+ print("Named Entities:\n")
+ for entity in result.entities:
+ print("\tText: \t", entity.text, "\tCategory: \t", entity.category, "\tSubCategory: \t", entity.subcategory,
+ "\n\tConfidence Score: \t", round(entity.confidence_score, 2), "\tLength: \t", entity.length, "\tOffset: \t", entity.offset, "\n")
+
+ except Exception as err:
+ print("Encountered exception. {}".format(err))
+ ```
+
+1. Run the application.
+++
+## Authenticate your application
++
+## Create a Java application
+
+In your preferred IDE, create a new Java console application project, and create a class named `Example`.
+
+## Add dependencies
+
+In your project, add the following dependencies to your `pom.xml` file.
+
+```xml
+<dependencies>
+
+ <dependency>
+ <groupId>com.azure</groupId>
+ <artifactId>azure-security-keyvault-secrets</artifactId>
+ <version>4.2.3</version>
+ </dependency>
+ <dependency>
+ <groupId>com.azure</groupId>
+ <artifactId>azure-identity</artifactId>
+ <version>1.2.0</version>
+ </dependency>
+ </dependencies>
+```
+
+## Import the example code
+
+Copy the following code into a file named `Example.java`. Replace `Your-Key-Secret-Name` and `Your-Endpoint-Secret-Name` with the secret names that you set in your key vault.
+
+```java
+import com.azure.identity.DefaultAzureCredentialBuilder;
+import com.azure.security.keyvault.secrets.SecretClient;
+import com.azure.security.keyvault.secrets.SecretClientBuilder;
+import com.azure.core.credential.AzureKeyCredential;
+
+public class Example {
+
+ public static void main(String[] args) {
+
+ String keyVaultName = System.getenv("KEY_VAULT_NAME");
+ String keyVaultUri = "https://" + keyVaultName + ".vault.azure.net";
+
+ //variables for retrieving the key and endpoint from your key vault.
+ //Set these variables to the names you created for your secrets
+ String keySecretName = "Your-Key-Secret-Name";
+ String endpointSecretName = "Your-Endpoint-Secret-Name";
+
+ //Create key vault secrets client
+ SecretClient secretClient = new SecretClientBuilder()
+ .vaultUrl(keyVaultUri)
+ .credential(new DefaultAzureCredentialBuilder().build())
+ .buildClient();
+
+ //retrieve key and endpoint from key vault
+ String keyValue = secretClient.getSecret(keySecretName).getValue();
+ String endpointValue = secretClient.getSecret(endpointSecretName).getValue();
+        System.out.printf("Your secret key value is: %s%n", keyValue);
+        System.out.printf("Your secret endpoint value is: %s%n", endpointValue);
+ }
+}
+```
+
+## Send a test Language service call (optional)
+
+If you're using a multi-service resource or Language resource, you can update [your application](#create-a-java-application) by following these steps to send an example Named Entity Recognition call that uses the key and endpoint retrieved from your key vault.
+
+1. In your application, add the following dependency:
+
+ ```xml
+ <dependency>
+ <groupId>com.azure</groupId>
+ <artifactId>azure-ai-textanalytics</artifactId>
+ <version>5.1.12</version>
+ </dependency>
+ ```
+
+1. Add the following import statements to your file.
+
+ ```java
+ import com.azure.ai.textanalytics.models.*;
+ import com.azure.ai.textanalytics.TextAnalyticsClientBuilder;
+ import com.azure.ai.textanalytics.TextAnalyticsClient;
+ ```
+
+1. Add the following code to the `main()` method in your application:
+
+ ```java
+
+ TextAnalyticsClient languageClient = new TextAnalyticsClientBuilder()
+ .credential(new AzureKeyCredential(keyValue))
+ .endpoint(endpointValue)
+ .buildClient();
+
+ // Example for recognizing entities in text
+ String text = "I had a wonderful trip to Seattle last week.";
+
+ for (CategorizedEntity entity : languageClient.recognizeEntities(text)) {
+ System.out.printf(
+ "Recognized entity: %s, entity category: %s, entity sub-category: %s, score: %s, offset: %s, length: %s.%n",
+ entity.getText(),
+ entity.getCategory(),
+ entity.getSubcategory(),
+ entity.getConfidenceScore(),
+ entity.getOffset(),
+ entity.getLength());
+ }
+ ```
+
+1. Run your application.
+++
+## Authenticate your application
++
+## Create a new Node.js application
+
+Create a Node.js application that uses your key vault.
+
+In a terminal, create a folder named `key-vault-js-example` and change into that folder:
+
+```terminal
+mkdir key-vault-js-example && cd key-vault-js-example
+```
+
+Initialize the Node.js project:
+
+```terminal
+npm init -y
+```
+
+### Install Key Vault and Language service packages
+
+1. Using the terminal, install the Azure Key Vault secrets library, [@azure/keyvault-secrets](https://www.npmjs.com/package/@azure/keyvault-secrets) for Node.js.
+
+ ```terminal
+ npm install @azure/keyvault-secrets
+ ```
+
+1. Install the Azure Identity library, [@azure/identity](https://www.npmjs.com/package/@azure/identity), to authenticate to your key vault.
+
+ ```terminal
+ npm install @azure/identity
+ ```
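+
+1. The code sample in the next section also loads environment variables from a `.env` file by using the [dotenv](https://www.npmjs.com/package/dotenv) package. Install it as well, or remove the dotenv lines from the sample:
+
+    ```terminal
+    npm install dotenv
+    ```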
+
+## Import the code sample
+
+Add the following code sample to a file named `index.js`. Replace `Your-Key-Secret-Name` and `Your-Endpoint-Secret-Name` with the secret names that you set in your key vault.
+
+```javascript
+const { SecretClient } = require("@azure/keyvault-secrets");
+const { DefaultAzureCredential } = require("@azure/identity");
+// Load the .env file if it exists
+const dotenv = require("dotenv");
+dotenv.config();
+
+async function main() {
+ const credential = new DefaultAzureCredential();
+
+ const keyVaultName = process.env["KEY_VAULT_NAME"];
+ const url = "https://" + keyVaultName + ".vault.azure.net";
+
+ const kvClient = new SecretClient(url, credential);
+
+ // Set these variables to the names you created for your secrets
+ const keySecretName = "Your-Key-Secret-Name";
+ const endpointSecretName = "Your-Endpoint-Secret-Name";
+
+ console.log("Retrieving secrets from ", keyVaultName);
+  const retrievedKey = (await kvClient.getSecret(keySecretName)).value;
+  const retrievedEndpoint = (await kvClient.getSecret(endpointSecretName)).value;
+ console.log("Your secret key value is: ", retrievedKey);
+ console.log("Your secret endpoint value is: ", retrievedEndpoint);
+}
+
+main().catch((error) => {
+ console.error("An error occurred:", error);
+ process.exit(1);
+});
+```
+
+## Run the sample application
+
+Use the following command to run the application. Your key and endpoint secrets will be retrieved from your key vault.
+
+```terminal
+node index.js
+```
+
+## Send a test Language service call (optional)
+
+If you're using a multi-service resource or Language resource, you can update [your application](#create-a-new-nodejs-application) by following these steps to send an example Named Entity Recognition call that uses the key and endpoint retrieved from your key vault.
+
+1. Install the Azure AI service for Language library, [@azure/ai-text-analytics](https://www.npmjs.com/package/@azure/ai-text-analytics/), to send API requests to the [Language service](./language-service/overview.md).
+
+ ```terminal
+ npm install @azure/ai-text-analytics@5.1.0
+ ```
+
+2. Add the following code to your application:
+
+ ```javascript
+ const { TextAnalyticsClient, AzureKeyCredential } = require("@azure/ai-text-analytics");
+ // Authenticate the language client with your key and endpoint
+ const languageClient = new TextAnalyticsClient(retrievedEndpoint, new AzureKeyCredential(retrievedKey));
+
+ // Example for recognizing entities in text
+ console.log("Sending NER request")
+ const entityInputs = [
+ "I had a wonderful trip to Seattle last week."
+ ];
+ const entityResults = await languageClient.recognizeEntities(entityInputs);
+ entityResults.forEach(document => {
+ console.log(`Document ID: ${document.id}`);
+ document.entities.forEach(entity => {
+ console.log(`\tName: ${entity.text} \tCategory: ${entity.category} \tSubcategory: ${entity.subCategory ? entity.subCategory : "N/A"}`);
+ console.log(`\tScore: ${entity.confidenceScore}`);
+ });
+ });
+ ```
+
+3. Run the application.
++
+## Next steps
+
+* See [What are Azure AI services](./what-are-ai-services.md) for available features you can develop along with [Azure Key Vault](../key-vault/general/index.yml).
+* For additional information on secure application development, see:
+ * [Best practices for using Azure Key Vault](../key-vault/general/best-practices.md)
+ * [Azure AI services security](security-features.md)
+ * [Azure security baseline for Azure AI services](/security/benchmark/azure/baselines/cognitive-services-security-baseline?toc=/azure/ai-services/TOC.json)
ai-services What Are Ai Services https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/what-are-ai-services.md
+
+ Title: What are Azure AI services?
+
+description: Azure AI services are cloud-based artificial intelligence (AI) services that help developers build cognitive intelligence into applications without having direct AI or data science skills or knowledge.
+++
+keywords: Azure AI services, cognitive
++ Last updated : 7/18/2023++++
+# What are Azure AI services?
+
+Azure AI services help developers and organizations rapidly create intelligent, cutting-edge, market-ready, and responsible applications with out-of-the-box, prebuilt, and customizable APIs and models. Example applications include natural language processing for conversations, search, monitoring, translation, speech, vision, and decision-making.
++
+Most Azure AI services are available through REST APIs and client library SDKs in popular development languages. For more information, see each service's documentation.
+
+## Available Azure AI services
+
+Select a service from the table below and learn how it can help you meet your development goals.
+
+| Service | Description |
+| --- | --- |
+| ![Anomaly Detector icon](media/service-icons/anomaly-detector.svg) [Anomaly Detector](./Anomaly-Detector/index.yml) | Identify potential problems early on |
+| ![Azure Cognitive Search icon](media/service-icons/cognitive-search.svg) [Azure Cognitive Search](../search/index.yml) | Bring AI-powered cloud search to your mobile and web apps |
+| ![Azure OpenAI Service icon](media/service-icons/azure.svg) [Azure OpenAI](./openai/index.yml) | Perform a wide variety of natural language tasks |
+| ![Bot service icon](media/service-icons/bot-services.svg) [Bot Service](/composer/) | Create bots and connect them across channels |
+| ![Content Moderator icon](media/service-icons/content-moderator.svg) [Content Moderator](./content-moderator/index.yml) (retired) | Detect potentially offensive or unwanted content |
+| ![Content Safety icon](media/service-icons/content-safety.svg) [Content Safety](./content-safety/index.yml) | An AI service that detects unwanted content |
+| ![Custom Vision icon](media/service-icons/custom-vision.svg) [Custom Vision](./custom-vision-service/index.yml) | Customize image recognition to fit your business |
+| ![Document Intelligence icon](media/service-icons/document-intelligence.svg) [Document Intelligence](./document-intelligence/index.yml) | Turn documents into usable data at a fraction of the time and cost |
+| ![Face icon](medi) | Detect and identify people and emotions in images |
+| ![Immersive Reader icon](media/service-icons/immersive-reader.svg) [Immersive Reader](./immersive-reader/index.yml) | Help users read and comprehend text |
+| ![Language icon](media/service-icons/language.svg) [Language](./language-service/index.yml) | Build apps with industry-leading natural language understanding capabilities |
+| ![Language Understanding icon](media/service-icons/luis.svg) [Language understanding](./luis/index.yml) (retired) | Understand natural language in your apps |
+| ![Metrics Advisor icon](media/service-icons/metrics-advisor.svg) [Metrics Advisor](./metrics-advisor/index.yml) | An AI service that monitors metrics and diagnoses issues |
+| ![Personalizer icon](media/service-icons/personalizer.svg) [Personalizer](./personalizer/index.yml) | Create rich, personalized experiences for each user |
+| ![QnA Maker icon](media/service-icons/luis.svg) [QnA Maker](./qnamaker/index.yml) (retired) | Distill information into easy-to-navigate questions and answers |
+| ![Speech icon](media/service-icons/speech.svg) [Speech](./speech-service/index.yml) | Speech to text, text to speech, translation and speaker recognition |
+| ![Translator icon](media/service-icons/translator.svg) [Translator](./translator/index.yml) | Translate more than 100 languages and dialects |
+| ![Video Indexer icon](media/service-icons/video-indexer.svg) [Video Indexer](../azure-video-indexer/index.yml) | Extract actionable insights from your videos |
+| ![Vision icon](media/service-icons/vision.svg) [Vision](./computer-vision/index.yml) | Analyze content in images and videos |
+
+## Pricing tiers and billing
+
+Pricing tiers (and the amount you get billed) are based on the number of transactions you send using your authentication information. Each pricing tier specifies the:
+* maximum number of allowed transactions per second (TPS).
+* service features enabled within the pricing tier.
+* cost for a predefined number of transactions. Going above this number will cause an extra charge as specified in the [pricing details](https://azure.microsoft.com/pricing/details/cognitive-services/) for your service.
+
+> [!NOTE]
+> Many of the Azure AI services have a free tier you can use to try the service. To use the free tier, use `F0` as the SKU for your resource.
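+
+As an illustrative sketch (the resource name, resource group, kind, and region shown are placeholders, and the available kinds and SKUs vary by service), you could create a free-tier Language resource with the Azure CLI:
+
+```azurecli
+az cognitiveservices account create \
+    --name my-language-resource \
+    --resource-group myResourceGroup \
+    --kind TextAnalytics \
+    --sku F0 \
+    --location westus
+```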
++
+## Use Azure AI services in different development environments
+
+With Azure and Azure AI services, you have access to several development options, such as:
+
+* Automation and integration tools like Logic Apps and Power Automate.
+* Deployment options such as Azure Functions and the App Service.
+* Azure AI services Docker containers for secure access.
+* Tools like Apache Spark, Azure Databricks, Azure Synapse Analytics, and Azure Kubernetes Service for big data scenarios.
+
+To learn more, see [Azure AI services development options](./cognitive-services-development-options.md).
+
+### Containers for Azure AI services
+
+Azure AI services also provides several Docker containers that let you use the same APIs that are available from Azure, on-premises. These containers give you the flexibility to bring Azure AI services closer to your data for compliance, security, or other operational reasons. For more information, see [Azure AI containers](cognitive-services-container-support.md "Azure AI containers").
+
+## Regional availability
+
+The APIs in Azure AI services are hosted on a growing network of Microsoft-managed data centers. You can find the regional availability for each API in [Azure region list](https://azure.microsoft.com/regions "Azure region list").
+
+Looking for a region we don't support yet? Let us know by filing a feature request on our [UserVoice forum](https://feedback.azure.com/d365community/forum/09041fae-0b25-ec11-b6e6-000d3a4f0858).
+
+## Language support
+
+Azure AI services supports a wide range of languages at the service level. You can find the language availability for each API in the [supported languages list](language-support.md "Supported languages list").
+
+## Security
+
+Azure AI services provides a layered security model, including [authentication](authentication.md "Authentication") with Azure Active Directory credentials, a valid resource key, and [Azure Virtual Networks](cognitive-services-virtual-networks.md "Azure Virtual Networks").
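+
+As a rough sketch (not from this article), the following Python snippet contrasts key-based and Azure Active Directory authentication for the same client. The endpoint and key values are placeholders you replace with your own, and Azure AD authentication requires a custom subdomain endpoint and an identity that has been granted access to the resource:
+
+```python
+# Illustrative only: two ways to create the same Language client.
+from azure.core.credentials import AzureKeyCredential
+from azure.identity import DefaultAzureCredential
+from azure.ai.textanalytics import TextAnalyticsClient
+
+# Placeholder custom subdomain endpoint; replace with your own value.
+endpoint = "https://<your-custom-subdomain>.cognitiveservices.azure.com/"
+
+# Option 1: authenticate with a resource key.
+key_client = TextAnalyticsClient(endpoint, AzureKeyCredential("<your-resource-key>"))
+
+# Option 2: authenticate with Azure Active Directory via DefaultAzureCredential.
+aad_client = TextAnalyticsClient(endpoint, DefaultAzureCredential())
+```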
+
+## Certifications and compliance
+
+Azure AI services has been awarded certifications such as CSA STAR Certification, FedRAMP Moderate, and HIPAA BAA. You can [download](https://gallery.technet.microsoft.com/Overview-of-Azure-c1be3942 "Download") certifications for your own audits and security reviews.
+
+To understand privacy and data management, go to the [Trust Center](https://servicetrust.microsoft.com/ "Trust Center").
+
+## Help and support
+
+Azure AI services provides several support options to help you move forward with creating intelligent applications. Azure AI services also has a strong community of developers that can help answer your specific questions. For a full list of support options available to you, see [Azure AI services support and help options](cognitive-services-support-options.md "Azure AI services support and help options").
+
+## Next steps
+
+* Select a service from the table above and learn how it can help you meet your development goals.
+* [Create a multi-service resource](multi-service-resource.md?pivots=azportal)
+* [Plan and manage costs for Azure AI services](plan-manage-costs.md)
aks Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/policy-reference.md
Title: Built-in policy definitions for Azure Kubernetes Service description: Lists Azure Policy built-in policy definitions for Azure Kubernetes Service. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 07/06/2023 Last updated : 07/18/2023
api-management Api Management Policies https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/api-management-policies.md
More information about policies:
- [Validate content](validate-content-policy.md) - Validates the size or content of a request or response body against one or more API schemas. The supported schema formats are JSON and XML. - [Validate GraphQL request](validate-graphql-request-policy.md) - Validates and authorizes a request to a GraphQL API.
+- [Validate OData request](validate-odata-request-policy.md) - Validates a request to an OData API to ensure conformance with the OData specification.
- [Validate parameters](validate-parameters-policy.md) - Validates the request header, query, or path parameters against the API schema. - [Validate headers](validate-headers-policy.md) - Validates the response headers against the API schema. - [Validate status code](validate-status-code-policy.md) - Validates the HTTP status codes in responses against the API schema.
api-management Import Api From Odata https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/import-api-from-odata.md
+
+ Title: Import an OData API to Azure API Management | Microsoft Docs
+description: Learn how to import an OData API to an API Management instance using the Azure portal.
+++++ Last updated : 06/06/2023+++
+# Import an OData API
+
+This article shows how to import an OData-compliant service as an API in API Management.
+
+In this article, you learn how to:
+> [!div class="checklist"]
+> * Import an OData metadata description using the Azure portal
+> * Manage the OData schema in the portal
+> * Secure the OData API
+
+> [!NOTE]
+> Importing an OData service as an API is in preview. Currently, testing OData APIs isn't supported in the test console of the Azure portal or in the API Management developer portal.
+
+## Prerequisites
+
+* An API Management instance. If you don't already have one, complete the following quickstart: [Create an Azure API Management instance](get-started-create-service-instance.md).
+
+* A service exposed as OData v2 or v4.
++
+## Import OData metadata
+
+1. In the left menu, select **APIs** > **+ Add API**.
+1. Under **Create from definition**, select **OData**.
+
+ :::image type="content" source="media/import-api-from-odata/odata-api.png" alt-text="Screenshot of creating an API from an OData description in the portal." :::
+1. Enter API settings. You can update your settings later by going to the **Settings** tab of the API.
+
+ 1. In **OData specification**, enter a URL for an OData metadata endpoint, typically the URL to the service root, appended with `/$metadata`. Alternatively, select a local OData XML file to import.
+
+ 1. Enter remaining settings to configure your API. These settings are explained in the [Import and publish your first API](import-and-publish.md#import-and-publish-a-backend-api) tutorial.
+1. Select **Create**.
+
+ The API is added to the **APIs** list. The entity sets and functions that are exposed in the OData metadata description appear on the API's **Schema** tab.
+
+ :::image type="content" source="media/import-api-from-odata/odata-schema.png" alt-text="Screenshot of schema of OData API in the portal." :::
+
+## Update the OData schema
+
+You can access an editor in the portal to view your API's OData schema. If the API changes, you can also update the schema in API Management from a file or an OData service endpoint.
+
+1. In the [portal](https://portal.azure.com), navigate to your API Management instance.
+1. In the left menu, select **APIs** > your OData API.
+1. On the **Schema** tab, select the edit (**\</>**) icon.
+1. Review the schema. If you want to update it, select **Update from file** or **Update schema from endpoint**.
+
+ :::image type="content" source="media/import-api-from-odata/odata-schema-update.png" alt-text="Screenshot of schema editor for OData API in the portal." :::
+
+## Secure your OData API
+
+Secure your OData API by applying both existing [access control policies](api-management-policies.md#access-restriction-policies) and an [OData validation policy](validate-odata-request-policy.md) to protect against attacks through OData API requests.
+
+> [!TIP]
+> In the portal, configure policies for your OData API on the **API policies** tab.
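+
+For illustration, a minimal `inbound` section that combines throttling with OData validation might look like the following sketch (the limits shown are placeholders, not recommendations):
+
+```xml
+<inbound>
+    <base />
+    <!-- Placeholder limits: allow each subscription 100 calls per 60 seconds -->
+    <rate-limit calls="100" renewal-period="60" />
+    <!-- Reject requests that don't conform to the OData specification -->
+    <validate-odata-request />
+</inbound>
+```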
+++
+## Next steps
+
+> [!div class="nextstepaction"]
+> [Transform and protect a published API](transform-api.md)
api-management Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/policy-reference.md
Title: Built-in policy definitions for Azure API Management description: Lists Azure Policy built-in policy definitions for Azure API Management. These built-in policy definitions provide approaches to managing your Azure resources. Previously updated : 07/06/2023 Last updated : 07/18/2023
api-management Sap Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/sap-api.md
Previously updated : 01/26/2022 Last updated : 06/06/2023
In this article, you'll:
> * Complete API configuration > * Test the API in the Azure portal
+> [!NOTE]
+> In preview, API Management can now directly import an OData API from its metadata description, without requiring conversion to an OpenAPI specification. [Learn more](import-api-from-odata.md).
+ ## Prerequisites - An existing API Management instance. [Create one if you haven't already](get-started-create-service-instance.md).
api-management Set Backend Service Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/set-backend-service-policy.md
In this example the `set-backend-service` policy routes requests based on the ve
Initially the backend service base URL is derived from the API settings. So the request URL `https://contoso.azure-api.net/api/partners/15?version=2013-05&subscription-key=abcdef` becomes `http://contoso.com/api/10.4/partners/15?version=2013-05&subscription-key=abcdef` where `http://contoso.com/api/10.4/` is the backend service URL specified in the API settings.
-When the [<choose\>](choose-policy.md) policy statement is applied the backend service base URL may change again either to `http://contoso.com/api/8.2` or `http://contoso.com/api/9.1`, depending on the value of the version request query parameter. For example, if the value is `"2013-15"` the final request URL becomes `http://contoso.com/api/8.2/partners/15?version=2013-05&subscription-key=abcdef`.
+When the [<choose\>](choose-policy.md) policy statement is applied the backend service base URL may change again either to `http://contoso.com/api/8.2` or `http://contoso.com/api/9.1`, depending on the value of the version request query parameter. For example, if the value is `"2013-15"` the final request URL becomes `http://contoso.com/api/8.2/partners/15?version=2013-15&subscription-key=abcdef`.
If further transformation of the request is desired, other [Transformation policies](api-management-transformation-policies.md) can be used. For example, to remove the version query parameter now that the request is being routed to a version specific backend, the [Set query string parameter](set-query-parameter-policy.md) policy can be used to remove the now redundant version attribute.
api-management Validate Graphql Request Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/validate-graphql-request-policy.md
# Validate GraphQL request
-The `validate-graphql-request` policy validates the GraphQL request and authorizes access to specific query paths. An invalid query is a "request error". Authorization is only done for valid requests.
+The `validate-graphql-request` policy validates the GraphQL request and authorizes access to specific query paths in a GraphQL API. An invalid query is a "request error". Authorization is only done for valid requests.
[!INCLUDE [api-management-policy-generic-alert](../../includes/api-management-policy-generic-alert.md)]
Available actions are described in the following table.
## Usage - [**Policy sections:**](./api-management-howto-policies.md#sections) inbound-- [**Policy scopes:**](./api-management-howto-policies.md#scopes) global, product, API, operation
+- [**Policy scopes:**](./api-management-howto-policies.md#scopes) global, workspace, product, API
- [**Gateways:**](api-management-gateways-overview.md) dedicated, consumption, self-hosted ### Usage notes
-Because GraphQL queries use a flattened schema, permissions may be applied at any leaf node of an output type:
-
-* Mutation, query, or subscription
-* Individual field in a type declaration
-
-Permissions may not be applied to:
-
-* Input types
-* Fragments
-* Unions
-* Interfaces
-* The schema element
-
+* Configure the policy for a [pass-through](graphql-api.md) or [synthetic](graphql-schema-resolve-api.md) GraphQL API that has been imported to API Management.
+
+* This policy can only be used once in a policy section.
+
+* Because GraphQL queries use a flattened schema, permissions may be applied at any leaf node of an output type:
+
+ * Mutation, query, or subscription
+ * Individual field in a type declaration
+
+ Permissions may not be applied to:
+
+ * Input types
+ * Fragments
+ * Unions
+ * Interfaces
+ * The schema element
+
## Error handling Failure to validate against the GraphQL schema, or a failure for the request's size or depth, is a request error and results in the request being failed with an errors block (but no data block).
api-management Validate Odata Request Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/validate-odata-request-policy.md
+
+ Title: Azure API Management policy reference - validate-odata-request | Microsoft Docs
+description: Reference for the validate-odata-request policy available for use in Azure API Management. Provides policy usage, settings, and examples.
+++++ Last updated : 06/06/2023+++
+# Validate OData request
+
+The `validate-odata-request` policy validates the request URL, headers, and parameters of a request to an OData API to ensure conformance with the [OData specification](https://www.odata.org/documentation).
+
+> [!NOTE]
+> This policy is currently in preview.
+
+## Policy statement
+
+```xml
+<validate-odata-request error-variable-name="variable name" default-odata-version="OData version number" min-odata-version="OData version number" max-odata-version="OData version number" max-size="size in bytes" />
+```
+
+## Attributes
+
+| Attribute | Description | Required | Default |
+| --------- | ----------- | -------- | ------- |
+| error-variable-name | Name of the variable in `context.Variables` to log validation errors to. | No | N/A |
+| default-odata-version | The default OData version that is assumed for parameter validation if the request doesn't contain an `OData-Version` header. | No | 4.0 |
+| min-odata-version | The minimum OData version in the `OData-Version` header of the request that the policy accepts. | No | N/A |
+| max-odata-version | The maximum OData version in the `OData-Version` header of the request that the policy accepts. | No | N/A |
+| max-size | Maximum size of the request payload in bytes. | No | N/A |
++
+## Usage
+
+- [**Policy sections:**](./api-management-howto-policies.md#sections) inbound
+- [**Policy scopes:**](./api-management-howto-policies.md#scopes) global, workspace, product, API
+- [**Gateways:**](api-management-gateways-overview.md) dedicated, consumption, self-hosted
+
+### Usage notes
+
+* Configure the policy for an OData API that has been [imported](import-api-from-odata.md) to API Management.
+* This policy can only be used once in a policy section.
+
+## Example
+
+The following example validates a request to an OData API and assumes a default OData version of 4.01 if no `OData-Version` header is present:
+
+```xml
+<validate-odata-request default-odata-version="4.01" />
+```
+
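+The next sketch (with illustrative values) uses the optional attributes from the preceding table to bound the accepted OData versions and the request payload size:
+
+```xml
+<validate-odata-request min-odata-version="4.0" max-odata-version="4.01" max-size="131072" />
+```
+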
+## Related policies
+
+* [Validation policies](api-management-policies.md#validation-policies)
+
app-service Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/policy-reference.md
Title: Built-in policy definitions for Azure App Service description: Lists Azure Policy built-in policy definitions for Azure App Service. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 07/06/2023 Last updated : 07/18/2023
app-service Quickstart Nodejs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/quickstart-nodejs.md
Previously updated : 03/22/2022 Last updated : 07/17/2023 ms.devlang: javascript
-#zone_pivot_groups: app-service-ide-oss
zone_pivot_groups: app-service-vscode-cli-portal # Deploy a Node.js web app in Azure
Sign in to the Azure portal at https://portal.azure.com.
### Create Azure resources
-1. Type **app services** in the search. Under **Services**, select **App Services**.
+1. To start creating a Node.js app, browse to [https://ms.portal.azure.com/#create/Microsoft.WebSite](https://ms.portal.azure.com/#create/Microsoft.WebSite).
- :::image type="content" source="./media/quickstart-nodejs/portal-search.png?text=Azure portal search details" alt-text="Screenshot of portal search":::
-
-1. In the **App Services** page, select **Create**.
1. In the **Basics** tab, under **Project details**, ensure the correct subscription is selected and then select to **Create new** resource group. Type *myResourceGroup* for the name. :::image type="content" source="./media/quickstart-nodejs/project-details.png" alt-text="Screenshot of the Project details section showing where you select the Azure subscription and the resource group for the web app":::
-1. Under **Instance details**, type a globally unique name for your web app and select **Code**. Select *Node 14 LTS* **Runtime stack**, an **Operating System**, and a **Region** you want to serve your app from.
+1. Under **Instance details**, type a globally unique name for your web app and select **Code**. Select *Node 18 LTS* **Runtime stack**, an **Operating System**, and a **Region** you want to serve your app from.
:::image type="content" source="./media/quickstart-nodejs/instance-details.png" alt-text="Screenshot of the Instance details section where you provide a name for the virtual machine and select its region, image and size":::
Sign in to the Azure portal at https://portal.azure.com.
:::image type="content" source="./media/quickstart-nodejs/next-steps.png" alt-text="Screenshot showing the next step of going to the resource":::
-### Get FTP credentials
+### Get FTPS credentials
Azure App Service supports [**two types of credentials**](deploy-configure-credentials.md) for FTP/S deployment. These credentials aren't the same as your Azure subscription credentials. In this section, you get the *application-scope credentials* to use with FileZilla.
Azure App Service supports [**two types of credentials**](deploy-configure-crede
1. Open **FileZilla** and create a new site.
-1. From the **FTPS credentials** tab, copy **FTPS endpoint**, **Username**, and **Password** into FileZilla.
+1. From the **FTPS credentials** tab, under **Application scope**, copy **FTPS endpoint**, **FTPS Username**, and **Password** into FileZilla.
:::image type="content" source="./media/quickstart-nodejs/filezilla-ftps-connection.png" alt-text="FTPS connection details"::: 1. Select **Connect** in FileZilla.
-### Deploy files with FTP
+### Deploy files with FTPS
1. Copy all files and directories files to the [**/site/wwwroot** directory](https://github.com/projectkudu/kudu/wiki/File-structure-on-azure) in Azure.
You can deploy changes to this app by making edits in Visual Studio Code, saving
:::zone target="docs" pivot="development-environment-azure-portal"
-2. Save your changes, then redeploy the app using your FTP client again.
+2. Save your changes, then redeploy the app using your FTP client.
1. Once deployment is complete, refresh the webpage `http://<app-name>.azurewebsites.net`. You should see that the `Welcome to Express` message has been changed to `Welcome to Azure!`.
application-gateway Application Gateway Ssl Policy Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/application-gateway-ssl-policy-overview.md
Application Gateway supports the following cipher suites from which you can choo
- TLS_DHE_DSS_WITH_AES_128_CBC_SHA - TLS_DHE_DSS_WITH_AES_256_CBC_SHA256 - TLS_DHE_DSS_WITH_AES_256_CBC_SHA
+- Constrained clients looking for "Maximum Fragment Length Negotiation" support must use the newer [**2022 Predefined**](#predefined-tls-policy) or [**Customv2 policies**](#custom-tls-policy).
## Next steps
application-gateway Migrate V1 V2 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/migrate-v1-v2.md
Previously updated : 03/31/2020 Last updated : 07/05/2023
There are two stages in a migration:
This article primarily helps with the configuration migration. The traffic migration would vary depending on the customer's needs and environment. But we have included some general recommendations further in this [article](#traffic-migration).
+## Prerequisites
+
+* An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
+* An existing Application Gateway V1 Standard.
+* Make sure you have the latest PowerShell modules, or you can use Azure Cloud Shell in the portal.
+* If you're running PowerShell locally, you also need to run `Connect-AzAccount` to create a connection with Azure.
+++ ## Configuration migration An Azure PowerShell script is provided in this document. It performs the following operations to help you with the configuration:
To run the script:
* **sslCertificates from Keyvault: Optional**.You can download the certificates stored in Azure Key Vault and pass it to migration script.To download the certificate as a PFX file, run following command. These commands access SecretId, and then save the content as a PFX file. ```azurepowershell
- $vaultName = <kv-name>
- $certificateName = <cert-name>
- $password = <password>
+    $vaultName = "<kv-name>"
+    $certificateName = "<cert-name>"
+    $password = ConvertTo-SecureString "<password>" -AsPlainText -Force
$pfxSecret = Get-AzKeyVaultSecret -VaultName $vaultName -Name $certificateName -AsPlainText $secretByte = [Convert]::FromBase64String($pfxSecret)
application-gateway Tutorial Manage Web Traffic Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/tutorial-manage-web-traffic-cli.md
Use [az network application-gateway create](/cli/azure/network/application-gatew
--frontend-port 80 \ --http-settings-port 80 \ --http-settings-protocol Http \
- --public-ip-address myAGPublicIPAddress
+ --public-ip-address myAGPublicIPAddress \
+ --priority 100
``` It may take several minutes for the application gateway to be created. After the application gateway is created, you'll see these new features:
When no longer needed, remove the resource group, application gateway, and all r
## Next steps
-[Restrict web traffic with a web application firewall](../web-application-firewall/ag/tutorial-restrict-web-traffic-cli.md)
+[Restrict web traffic with a web application firewall](../web-application-firewall/ag/tutorial-restrict-web-traffic-cli.md)
applied-ai-services Applied Ai Services Customer Spotlight Use Cases https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/applied-ai-services-customer-spotlight-use-cases.md
- Title: Customer spotlight on use cases-
-description: Learn about customers who have used Azure Applied AI Services. See use cases that improve customer experience, streamline processes, and protect data.
---- Previously updated : 06/02/2022----
-# Customer spotlight on use cases
-
-Customers are using Azure Applied AI Services to add AI horsepower to their business scenarios:
--- Chatbots improve customer experience in a robust, expandable way.-- AI-driven search offers strong data security and delivers smart results that add value.-- Azure Form Recognizer increases ROI by using automation to streamline data extraction.-
-## Use cases
-
-| Partner | Description | Customer story |
-||-|-|
-| <center>![Logo of Progressive Insurance, which consists of the word progressive in a slanted font in blue, capital letters.](./media/logo-progressive-02.png) | **Progressive uses Azure Bot Service and Azure Cognitive Search to help customers make smarter insurance decisions.** <br>"One of the great things about Bot Service is that, out of the box, we could use it to quickly put together the basic framework for our bot." *-Matt White, Marketing Manager, Personal Lines Acquisition Experience, Progressive Insurance* | [Insurance shoppers gain new service channel with artificial intelligence chatbot](https://customers.microsoft.com/story/789698-progressive-insurance-cognitive-services-insurance) |
-| <center>![Logo of Wix, which consists of the name Wix in a dark-gray font in lowercase letters.](./media/wix-logo-01.png) | **WIX uses Cognitive Search to deploy smart search across 150 million websites.** <br> "We really benefitted from choosing Azure Cognitive Search because we could go to market faster than we had with other products. We don't have to manage infrastructure, and our developers can spend time on higher-value tasks."*-Giedrius Graževičius: Project Manager for Search, Wix* | [Wix deploys smart, scalable search across 150 million websites with Azure Cognitive Search](https://customers.microsoft.com/story/764974-wix-partner-professional-services-azure-cognitive-search) |
-| <center>![Logo of Chevron. The name Chevron appears above two vertically stacked chevrons that point downward. The top one is blue, and the lower one is red.](./media/chevron-01.png) | **Chevron uses Form Recognizer to extract volumes of data from unstructured reports.**<br>"We only have a finite amount of time to extract data, and oftentimes the data that's left behind is valuable. With this new technology, we're able to extract everything and then decide what we can use to improve our performance."*-Diane Cillis, Engineering Technologist, Chevron Canada* | [Chevron is using AI-powered robotic process automation to extract volumes of data from unstructured reports for analysis](https://customers.microsoft.com/story/chevron-mining-oil-gas-azure-cognitive-services) |
-
-## See also
--- [What are Applied AI Services?](what-are-applied-ai-services.md)-- [Why use Applied AI Services?](why-applied-ai-services.md)
-
applied-ai-services Changelog Release History https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/changelog-release-history.md
- Title: Form Recognizer changelog and release history-
-description: A version-based description of Form Recognizer feature and capability releases, changes, enhancements, and updates.
----- Previously updated : 04/24/2023---
-<!-- markdownlint-disable MD001 -->
-<!-- markdownlint-disable MD033 -->
-<!-- markdownlint-disable MD051 -->
-
-# Changelog and release history
-
-This reference article provides a version-based description of Form Recognizer feature and capability releases, changes, updates, and enhancements.
-
-#### Form Recognizer SDK April 2023 preview release
-
-This release includes the following updates:
-
-### [**C#**](#tab/csharp)
-
-* **Version 4.1.0-beta.1 (2023-04-13**)
-* **Targets 2023-02-28-preview by default**
-* **No breaking changes**
-
-[**Package (NuGet)**](https://www.nuget.org/packages/Azure.AI.FormRecognizer/4.1.0-beta.1)
-
-[**Changelog/Release History**](https://github.com/Azure/azure-sdk-for-net/blob/main/sdk/formrecognizer/Azure.AI.FormRecognizer/CHANGELOG.md#410-beta1-2023-04-13)
-
-[**ReadMe**](https://github.com/Azure/azure-sdk-for-net/blob/main/sdk/formrecognizer/Azure.AI.FormRecognizer/README.md)
-
-[**Samples**](https://github.com/Azure/azure-sdk-for-net/blob/main/sdk/formrecognizer/Azure.AI.FormRecognizer/samples/README.md)
-
-### [**Java**](#tab/java)
-
-* **Version 4.1.0-beta.1 (2023-04-12**)
-* **Targets 2023-02-28-preview by default**
-* **No breaking changes**
-
-[**Package (MVN)**](https://mvnrepository.com/artifact/com.azure/azure-ai-formrecognizer/4.1.0-beta.1)
-
-[**Changelog/Release History**](https://github.com/Azure/azure-sdk-for-jav#410-beta1-2023-04-12)
-
-[**ReadMe**](https://github.com/Azure/azure-sdk-for-jav)
-
-[**Samples**](https://github.com/Azure/azure-sdk-for-java/tree/main/sdk/formrecognizer/azure-ai-formrecognizer/src/samples#readme)
-
-### [**JavaScript**](#tab/javascript)
-
-* **Version 4.1.0-beta.1 (2023-04-11**)
-* **Targets 2023-02-28-preview by default**
-* **No breaking changes**
-
-[**Package (npm)**](https://www.npmjs.com/package/@azure/ai-form-recognizer/v/4.1.0-beta.1)
-
-[**Changelog/Release History**](https://github.com/Azure/azure-sdk-for-js/blob/a162daee4be05eadff0be1caa7fb2071960bbf44/sdk/formrecognizer/ai-form-recognizer/CHANGELOG.md#410-beta1-2023-04-11)
-
-[**ReadMe**](https://github.com/Azure/azure-sdk-for-js/blob/a162daee4be05eadff0be1caa7fb2071960bbf44/sdk/formrecognizer/ai-form-recognizer/README.md)
-
-[**Samples**](https://github.com/Azure/azure-sdk-for-js/tree/a162daee4be05eadff0be1caa7fb2071960bbf44/sdk/formrecognizer/ai-form-recognizer/samples/v4-beta)
-
-### [**Python**](#tab/python)
-
-* **Version 3.3.0b1 (2023-04-13**)
-* **Targets 2023-02-28-preview by default**
-* **No breaking changes**
-
-[**Package (PyPi)**](https://pypi.org/project/azure-ai-formrecognizer/3.3.0b1/)
-
-[**Changelog/Release History**](https://github.com/Azure/azure-sdk-for-python/blob/azure-ai-formrecognizer_3.3.0b1/sdk/formrecognizer/azure-ai-formrecognizer/CHANGELOG.md#330b1-2023-04-13)
-
-[**ReadMe**](https://github.com/Azure/azure-sdk-for-python/blob/azure-ai-formrecognizer_3.3.0b1/sdk/formrecognizer/azure-ai-formrecognizer/README.md)
-
-[**Samples**](https://github.com/Azure/azure-sdk-for-python/tree/azure-ai-formrecognizer_3.3.0b1/sdk/formrecognizer/azure-ai-formrecognizer/samples)
---
-#### Form Recognizer SDK September 2022 GA release
-
-This release includes the following updates:
-
-> [!IMPORTANT]
-> The `DocumentAnalysisClient` and `DocumentModelAdministrationClient` now target API version v3.0 GA, released 2022-08-31. These clients are no longer supported by API versions 2020-06-30-preview or earlier.
-
-### [**C#**](#tab/csharp)
-
-* **Version 4.0.0 GA (2022-09-08)**
-* **Supports REST API v3.0 and v2.0 clients**
-
-[**Package (NuGet)**](https://www.nuget.org/packages/Azure.AI.FormRecognizer/4.0.0)
-
-[**Changelog/Release History**](https://github.com/Azure/azure-sdk-for-net/blob/main/sdk/formrecognizer/Azure.AI.FormRecognizer/CHANGELOG.md)
-
-[**Migration guide**](https://github.com/Azure/azure-sdk-for-net/blob/Azure.AI.FormRecognizer_4.0.0/sdk/formrecognizer/Azure.AI.FormRecognizer/MigrationGuide.md)
-
-[**ReadMe**](https://github.com/Azure/azure-sdk-for-net/blob/Azure.AI.FormRecognizer_4.0.0/sdk/formrecognizer/Azure.AI.FormRecognizer/README.md)
-
-[**Samples**](https://github.com/Azure/azure-sdk-for-net/blob/Azure.AI.FormRecognizer_4.0.0/sdk/formrecognizer/Azure.AI.FormRecognizer/samples/README.md)
-
-### [**Java**](#tab/java)
-
-* **Version 4.0.0 GA (2022-09-08)**
-* **Supports REST API v3.0 and v2.0 clients**
-
-[**Package (Maven)**](https://oss.sonatype.org/#nexus-search;quick~azure-ai-formrecognizer)
-
-[**Changelog/Release History**](https://github.com/Azure/azure-sdk-for-jav)
-
-[**Migration guide**](https://github.com/Azure/azure-sdk-for-jav)
-
-[**ReadMe**](https://github.com/Azure/azure-sdk-for-jav)
-
-[**Samples**](https://github.com/Azure/azure-sdk-for-jav)
-
-### [**JavaScript**](#tab/javascript)
-
-* **Version 4.0.0 GA (2022-09-08)**
-* **Supports REST API v3.0 and v2.0 clients**
-
-[**Package (npm)**](https://www.npmjs.com/package/@azure/ai-form-recognizer)
-
-[**Changelog/Release History**](https://github.com/Azure/azure-sdk-for-js/blob/%40azure/ai-form-recognizer_4.0.0/sdk/formrecognizer/ai-form-recognizer/CHANGELOG.md)
-
-[**Migration guide**](https://github.com/Azure/azure-sdk-for-js/blob/%40azure/ai-form-recognizer_4.0.0/sdk/formrecognizer/ai-form-recognizer/MIGRATION-v3_v4.md)
-
-[**ReadMe**](https://github.com/Azure/azure-sdk-for-js/blob/%40azure/ai-form-recognizer_4.0.0/sdk/formrecognizer/ai-form-recognizer/README.md)
-
-[**Samples**](https://github.com/witemple-msft/azure-sdk-for-js/blob/7e3196f7e529212a6bc329f5f06b0831bf4cc174/sdk/formrecognizer/ai-form-recognizer/samples/v4/javascript/README.md)
-
-### [**Python**](#tab/python)
-
-> [!NOTE]
-> Python 3.7 or later is required to use this package.
-
-* **Version 3.2.0 GA (2022-09-08)**
-* **Supports REST API v3.0 and v2.0 clients**
-
-[**Package (PyPi)**](https://pypi.org/project/azure-ai-formrecognizer/3.2.0/)
-
-[**Changelog/Release History**](https://github.com/Azure/azure-sdk-for-python/blob/azure-ai-formrecognizer_3.2.0/sdk/formrecognizer/azure-ai-formrecognizer/CHANGELOG.md)
-
-[**Migration guide**](https://github.com/Azure/azure-sdk-for-python/blob/azure-ai-formrecognizer_3.2.0/sdk/formrecognizer/azure-ai-formrecognizer/MIGRATION_GUIDE.md)
-
-[**ReadMe**](https://github.com/Azure/azure-sdk-for-python/blob/azure-ai-formrecognizer_3.2.0/sdk/formrecognizer/azure-ai-formrecognizer/README.md)
-
-[**Samples**](https://github.com/Azure/azure-sdk-for-python/blob/azure-ai-formrecognizer_3.2.0/sdk/formrecognizer/azure-ai-formrecognizer/samples/README.md)
---
-#### Form Recognizer SDK beta August 2022 preview release
-
-This release includes the following updates:
-
-### [**C#**](#tab/csharp)
-
-**Version 4.0.0-beta.5 (2022-08-09)**
-**Supports REST API 2022-06-30-preview clients**
-
-[**Changelog/Release History**](https://github.com/Azure/azure-sdk-for-net/blob/main/sdk/formrecognizer/Azure.AI.FormRecognizer/CHANGELOG.md#400-beta5-2022-08-09)
-
-[**Package (NuGet)**](https://www.nuget.org/packages/Azure.AI.FormRecognizer/4.0.0-beta.5)
-
-[**SDK reference documentation**](/dotnet/api/overview/azure/ai.formrecognizer-readme?view=azure-dotnet-preview&preserve-view=true)
-
-### [**Java**](#tab/java)
-
-**Version 4.0.0-beta.6 (2022-08-10)**
-**Supports REST API 2022-06-30-preview and earlier clients**
-
-[**Changelog/Release History**](https://github.com/Azure/azure-sdk-for-jav#400-beta6-2022-08-10)
-
- [**Package (Maven)**](https://oss.sonatype.org/#nexus-search;quick~azure-ai-formrecognizer)
-
- [**SDK reference documentation**](/java/api/overview/azure/ai-formrecognizer-readme?view=azure-java-preview&preserve-view=true)
-
-### [**JavaScript**](#tab/javascript)
-
-**Version 4.0.0-beta.6 (2022-08-09)**
-**Supports REST API 2022-06-30-preview and earlier clients**
-
- [**Changelog/Release History**](https://github.com/Azure/azure-sdk-for-js/blob/%40azure/ai-form-recognizer_4.0.0-beta.6/sdk/formrecognizer/ai-form-recognizer/CHANGELOG.md)
-
- [**Package (npm)**](https://www.npmjs.com/package/@azure/ai-form-recognizer/v/4.0.0-beta.6)
-
- [**SDK reference documentation**](/javascript/api/overview/azure/ai-form-recognizer-readme?view=azure-node-preview&preserve-view=true)
-
-### [**Python**](#tab/python)
-
-> [!IMPORTANT]
-> Python 3.6 is no longer supported in this release. Use Python 3.7 or later.
-
-**Version 3.2.0b6 (2022-08-09)**
-**Supports REST API 2022-06-30-preview and earlier clients**
-
- [**Changelog/Release History**](https://github.com/Azure/azure-sdk-for-python/blob/azure-ai-formrecognizer_3.2.0b6/sdk/formrecognizer/azure-ai-formrecognizer/CHANGELOG.md)
-
- [**Package (PyPi)**](https://pypi.org/project/azure-ai-formrecognizer/3.2.0b6/)
-
- [**SDK reference documentation**](https://pypi.org/project/azure-ai-formrecognizer/3.2.0b6/)
---
-### Form Recognizer SDK beta June 2022 preview release
-
-This release includes the following updates:
-
-### [**C#**](#tab/csharp)
-
-**Version 4.0.0-beta.4 (2022-06-08)**
-
-[**Changelog/Release History**](https://github.com/Azure/azure-sdk-for-net/blob/Azure.AI.FormRecognizer_4.0.0-beta.4/sdk/formrecognizer/Azure.AI.FormRecognizer/CHANGELOG.md)
-
-[**Package (NuGet)**](https://www.nuget.org/packages/Azure.AI.FormRecognizer/4.0.0-beta.4)
-
-[**SDK reference documentation**](/dotnet/api/azure.ai.formrecognizer?view=azure-dotnet-preview&preserve-view=true)
-
-### [**Java**](#tab/java)
-
-**Version 4.0.0-beta.5 (2022-06-07)**
-
-[**Changelog/Release History**](https://github.com/Azure/azure-sdk-for-jav)
-
- [**Package (Maven)**](https://search.maven.org/artifact/com.azure/azure-ai-formrecognizer/4.0.0-beta.5/jar)
-
- [**SDK reference documentation**](/java/api/overview/azure/ai-formrecognizer-readme?view=azure-java-preview&preserve-view=true)
-
-### [**JavaScript**](#tab/javascript)
-
-**Version 4.0.0-beta.4 (2022-06-07)**
-
- [**Changelog/Release History**](https://github.com/Azure/azure-sdk-for-js/blob/%40azure/ai-form-recognizer_4.0.0-beta.4/sdk/formrecognizer/ai-form-recognizer/CHANGELOG.md)
-
- [**Package (npm)**](https://www.npmjs.com/package/@azure/ai-form-recognizer/v/4.0.0-beta.4)
-
- [**SDK reference documentation**](/javascript/api/@azure/ai-form-recognizer/?view=azure-node-preview&preserve-view=true)
-
-### [**Python**](#tab/python)
-
-**Version 3.2.0b5 (2022-06-07**
-
- [**Changelog/Release History**](https://github.com/Azure/azure-sdk-for-python/blob/azure-ai-formrecognizer_3.2.0b5/sdk/formrecognizer/azure-ai-formrecognizer/CHANGELOG.md)
-
- [**Package (PyPi)**](https://pypi.org/project/azure-ai-formrecognizer/3.2.0b5/)
-
- [**SDK reference documentation**](/python/api/azure-ai-formrecognizer/azure.ai.formrecognizer?view=azure-python-preview&preserve-view=true)
--
applied-ai-services Choose Model Feature https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/choose-model-feature.md
- Title: Choose the best Form Recognizer model-
-description: Choose the best Form Recognizer model to meet your needs.
----- Previously updated : 05/23/2023-
-monikerRange: 'form-recog-3.0.0'
--
-# Which Form Recognizer model should I use?
-
-Azure Form Recognizer supports a wide variety of models that enable you to add intelligent document processing to your applications and optimize your workflows. Selecting the right model is essential to ensure the success of your enterprise. In this article, we explore the available Form Recognizer models and provide guidance for how to choose the best solution for your projects.
-
-> [!VIDEO https://www.microsoft.com/en-us/videoplayer/embed/RE5fX1b]
-
-The following decision charts highlight the features of each **Form Recognizer v3.0** supported model and help you choose the best model to meet the needs and requirements of your application.
-
-> [!IMPORTANT]
-> Be sure to check the [**language support**](language-support.md) page for supported language text and field extraction by feature.
-
-## Pretrained document-analysis models
-
-| Document type | Example| Data to extract | Your best solution |
-| --|--|--|-|
-|**A generic document**. | A contract or letter. |You want to primarily extract written or printed text lines, words, locations, and detected languages.|[**Read OCR model**](concept-read.md)|
-|**A document that includes structural information**. |A report or study.| In addition to written or printed text, you need to extract structural information like tables, selection marks, paragraphs, titles, headings, and subheadings.| [**Layout analysis model**](concept-layout.md)
-|**A structured or semi-structured document that includes content formatted as fields and values**.|A form or document that is a standardized format commonly used in your business or industry like a credit application or survey. | You want to extract fields and values including ones not covered by the scenario-specific prebuilt models **without having to train a custom model**.| [**General document model**](concept-general-document.md)|
-
-## Pretrained scenario-specific models
-
-| Document type | Data to extract | Your best solution |
-| --|--|-|
-|**U.S. W-2 tax form**|You want to extract key information such as salary, wages, and taxes withheld.|[**W-2 model**](concept-w2.md)|
-|**Health insurance card** or health insurance ID.| You want to extract key information such as insurer, member ID, prescription coverage, and group number.|[**Health insurance card model**](./concept-insurance-card.md)|
-|**Invoice** or billing statement.|You want to extract key information such as customer name, billing address, and amount due.|[**Invoice model**](concept-invoice.md)
- |**Receipt**, voucher, or single-page hotel receipt. |You want to extract key information such as merchant name, transaction date, and transaction total.|[**Receipt model**](concept-receipt.md)|
-|**Identity document (ID)** like a U.S. driver's license or international passport. |You want to extract key information such as first name, last name, date of birth, address, and signature. | [**Identity document (ID) model**](concept-id-document.md)|
-|**Business card** or calling card.|You want to extract key information such as first name, last name, company name, email address, and phone number.|[**Business card model**](concept-business-card.md)|
-|**Mixed-type document(s)** with structured, semi-structured, and/or unstructured elements. | You want to extract key-value pairs, selection marks, tables, signature fields, and selected regions not extracted by prebuilt or general document models.| [**Custom model**](concept-custom.md)|
-
->[!Tip]
->
-> * If you're still unsure which pretrained model to use, try the **General Document model** to extract key-value pairs.
-> * The General Document model is powered by the Read OCR engine to detect text lines, words, locations, and languages.
-> * General document also extracts the same data as the Layout model (pages, tables, styles).
-
-## Custom extraction models
-
-| Training set | Example documents | Your best solution |
-| --|--|-|
-|**Structured, consistent, documents with a static layout**. |Structured forms such as questionnaires or applications. | [**Custom template model**](./concept-custom-template.md)|
-|**Structured, semi-structured, and unstructured documents**.|&#9679; Structured &rightarrow; surveys</br>&#9679; Semi-structured &rightarrow; invoices</br>&#9679; Unstructured &rightarrow; letters| [**Custom neural model**](concept-custom-neural.md)|
-|**A collection of several models each trained on similar-type documents.** |&#9679; Supply purchase orders</br>&#9679; Equipment purchase orders</br>&#9679; Furniture purchase orders</br> **All composed into a single model**.| [**Composed custom model**](concept-composed-models.md)|
-
-## Next steps
-
-* [Learn how to process your own forms and documents](quickstarts/try-form-recognizer-studio.md) with the [Form Recognizer Studio](https://formrecognizer.appliedai.azure.com/studio)
applied-ai-services Concept Accuracy Confidence https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/concept-accuracy-confidence.md
- Title: Interpret and improve model accuracy and analysis confidence scores-
-description: Best practices to interpret the accuracy score from the train model operation and the confidence score from analysis operations.
----- Previously updated : 05/23/2023-
-monikerRange: '>=form-recog-2.1.0'
--
-# Accuracy and confidence scores for custom models
--
-> [!NOTE]
->
-> * **Custom neural models do not provide accuracy scores during training**.
-> * Confidence scores for structured fields such as tables are currently unavailable.
-
-Custom models generate an estimated accuracy score when trained. Documents analyzed with a custom model produce a confidence score for extracted fields. In this article, learn to interpret accuracy and confidence scores and best practices for using those scores to improve accuracy and confidence results.
-
-## Accuracy scores
-
-The output of a `build` (v3.0) or `train` (v2.1) custom model operation includes the estimated accuracy score. This score represents the model's ability to accurately predict the labeled value on a visually similar document.
-The accuracy value range is a percentage between 0% (low) and 100% (high). The estimated accuracy is calculated by running a few different combinations of the training data to predict the labeled values.
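-
-As a rough sketch of where these estimates surface in the v3.0 API, the model details returned after a successful build include a per-field confidence map. The property names below follow the v3.0 GA model details response, but the field names and values are illustrative only:
-
-```json
-{
-  "modelId": "my-custom-model",
-  "docTypes": {
-    "my-custom-model": {
-      "buildMode": "template",
-      "fieldSchema": {
-        "VendorName": { "type": "string" },
-        "InvoiceTotal": { "type": "currency" }
-      },
-      "fieldConfidence": {
-        "VendorName": 0.98,
-        "InvoiceTotal": 0.95
-      }
-    }
-  }
-}
-```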
-
-**Form Recognizer Studio** </br>
-**Trained custom model (invoice)**
--
-## Confidence scores
-
-Form Recognizer analysis results return an estimated confidence for predicted words, key-value pairs, selection marks, regions, and signatures. Currently, not all document fields return a confidence score.
-
-Field confidence indicates an estimated probability between 0 and 1 that the prediction is correct. For example, a confidence value of 0.95 (95%) indicates that the prediction is likely correct 19 out of 20 times. For scenarios where accuracy is critical, confidence may be used to determine whether to automatically accept the prediction or flag it for human review.
-
-Confidence scores have two data points: the field-level confidence score and the text extraction confidence score. In addition to the field confidence, which covers the field's position and span, the text extraction confidence in the ```pages``` section of the response reflects the model's confidence in the text extraction (OCR) process. The two confidence scores should be combined to generate one overall confidence score.
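-
-One reasonable way to combine the two scores is to multiply them: a field confidence of 0.95 and a word-level extraction confidence of 0.98 give an overall confidence of roughly 0.93. The trimmed, illustrative response fragment below shows where each score appears (the field name and values are examples only):
-
-```json
-{
-  "pages": [
-    {
-      "pageNumber": 1,
-      "words": [
-        { "content": "Contoso", "confidence": 0.98, "span": { "offset": 0, "length": 7 } }
-      ]
-    }
-  ],
-  "documents": [
-    {
-      "docType": "custom:my-model",
-      "fields": {
-        "VendorName": { "type": "string", "valueString": "Contoso", "content": "Contoso", "confidence": 0.95 }
-      }
-    }
-  ]
-}
-```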
-
-**Form Recognizer Studio** </br>
-**Analyzed invoice prebuilt-invoice model**
--
-## Interpret accuracy and confidence scores
-
-The following table demonstrates how to interpret both the accuracy and confidence scores to measure your custom model's performance.
-
-| Accuracy | Confidence | Result |
-|--|--|--|
-| High| High | <ul><li>The model is performing well with the labeled keys and document formats. </li><li>You have a balanced training dataset</li></ul> |
-| High | Low | <ul><li>The analyzed document appears different from the training dataset.</li><li>The model would benefit from retraining with at least five more labeled documents. </li><li>These results could also indicate a format variation between the training dataset and the analyzed document. </br>Consider adding a new model.</li></ul> |
-| Low | High | <ul><li>This result is most unlikely.</li><li>For low accuracy scores, add more labeled data or split visually distinct documents into multiple models.</li></ul> |
-| Low | Low| <ul><li>Add more labeled data.</li><li>Split visually distinct documents into multiple models.</li></ul>|
-
-## Ensure high model accuracy
-
-Variances in the visual structure of your documents affect the accuracy of your model. Reported accuracy scores can be inconsistent when the analyzed documents differ from documents used in training. Keep in mind that a document set can look similar when viewed by humans but appear dissimilar to an AI model. The following is a list of best practices for training models with the highest accuracy. Following these guidelines should produce a model with higher accuracy and confidence scores during analysis and reduce the number of documents flagged for human review.
-
-* Ensure that all variations of a document are included in the training dataset. Variations include different formats, for example, digital versus scanned PDFs.
-
-* If you expect the model to analyze both types of PDF documents, add at least five samples of each type to the training dataset.
-
-* Separate visually distinct document types to train different models.
- * As a general rule, if you remove all user-entered values and the documents look similar, you need to add more training data to the existing model.
- * If the documents are dissimilar, split your training data into different folders and train a model for each variation. You can then [compose](how-to-guides/compose-custom-models.md?view=form-recog-2.1.0&preserve-view=true#create-a-composed-model) the different variations into a single model.
-
-* Make sure that you don't have any extraneous labels.
-
-* For signature and region labeling, don't include the surrounding text.
-
-## Next step
-
-> [!div class="nextstepaction"]
-> [Learn to create custom models ](quickstarts/try-form-recognizer-studio.md#custom-models)
applied-ai-services Concept Add On Capabilities https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/concept-add-on-capabilities.md
- Title: Add-on capabilities - Form Recognizer-
-description: How to increase service limit capacity with add-on capabilities.
----- Previously updated : 05/23/2023-
-monikerRange: 'form-recog-3.0.0'
-
-<!-- markdownlint-disable MD033 -->
-
-# Azure Form Recognizer add-on capabilities (preview)
-
-**This article applies to:** ![Form Recognizer checkmark](medi) supported by Form Recognizer REST API version [2023-02-28-preview](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-2023-02-28-preview/operations/AnalyzeDocument)**.
-
-> [!NOTE]
->
-> Add-on capabilities for Form Recognizer Studio are only available within the Read and Layout models for the `2023-02-28-preview` release.
-
-Form Recognizer now supports more sophisticated analysis capabilities. These optional capabilities can be enabled and disabled depending on the scenario of the document extraction. There are three add-on capabilities available for the `2023-02-28-preview`:
-
-* [`ocr.highResolution`](#high-resolution-extraction)
-
-* [`ocr.formula`](#formula-extraction)
-
-* [`ocr.font`](#font-property-extraction)
-
-## High resolution extraction
-
-The task of recognizing small text from large-size documents, like engineering drawings, is a challenge. Often the text is mixed with other graphical elements and has varying fonts, sizes and orientations. Moreover, the text may be broken into separate parts or connected with other symbols. Form Recognizer now supports extracting content from these types of documents with the `ocr.highResolution` capability. You get improved quality of content extraction from A1/A2/A3 documents by enabling this add-on capability.
-
-## Formula extraction
-
-The `ocr.formula` capability extracts all identified formulas, such as mathematical equations, in the `formulas` collection as a top level object under `content`. Inside `content`, detected formulas are represented as `:formula:`. Each entry in this collection represents a formula that includes the formula type as `inline` or `display`, and its LaTeX representation as `value` along with its `polygon` coordinates. Initially, formulas appear at the end of each page.
-
- > [!NOTE]
- > The `confidence` score is hard-coded for the `2023-02-28` public preview release.
-
- ```json
- "content": ":formula:",
- "pages": [
- {
- "pageNumber": 1,
- "formulas": [
- {
- "kind": "inline",
- "value": "\\frac { \\partial a } { \\partial b }",
- "polygon": [...],
- "span": {...},
- "confidence": 0.99
- },
- {
- "kind": "display",
- "value": "y = a \\times b + a \\times c",
- "polygon": [...],
- "span": {...},
- "confidence": 0.99
- }
- ]
- }
- ]
- ```
-
-## Font property extraction
-
-The `ocr.font` capability extracts all font properties of text extracted in the `styles` collection as a top-level object under `content`. Each style object specifies a single font property, the text span it applies to, and its corresponding confidence score. The existing style property is extended with more font properties such as `similarFontFamily` for the font of the text, `fontStyle` for styles such as italic and normal, `fontWeight` for bold or normal, `color` for color of the text, and `backgroundColor` for color of the text bounding box.
-
- ```json
- "content": "Foo bar",
- "styles": [
- {
- "similarFontFamily": "Arial, sans-serif",
- "spans": [ { "offset": 0, "length": 3 } ],
- "confidence": 0.98
- },
- {
- "similarFontFamily": "Times New Roman, serif",
- "spans": [ { "offset": 4, "length": 3 } ],
- "confidence": 0.98
- },
- {
- "fontStyle": "italic",
- "spans": [ { "offset": 1, "length": 2 } ],
- "confidence": 0.98
- },
- {
- "fontWeight": "bold",
- "spans": [ { "offset": 2, "length": 3 } ],
- "confidence": 0.98
- },
- {
- "color": "#FF0000",
- "spans": [ { "offset": 4, "length": 2 } ],
- "confidence": 0.98
- },
- {
- "backgroundColor": "#00FF00",
- "spans": [ { "offset": 5, "length": 2 } ],
- "confidence": 0.98
- }
- ]
- ```
-
-## Next steps
-
-> [!div class="nextstepaction"]
-> Learn more:
-> [**Read model**](concept-read.md) [**Layout model**](concept-layout.md).
applied-ai-services Concept Analyze Document Response https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/concept-analyze-document-response.md
- Title: Form Recognizer APIs analyze document response-
-description: Description of the different objects returned as part of the analyze document response and how to use the document analysis response in your applications.
----- Previously updated : 05/23/2023--
-monikerRange: 'form-recog-3.0.0'
-
-# Analyze document API response
-
-In this article, let's examine the different objects returned as part of the analyze document response and how to use the document analysis API response in your applications.
-
-## Analyze document request
-
-The Form Recognizer APIs analyze images, PDFs, and other document files to extract and detect various content, layout, style, and semantic elements. The analyze operation is an async API. Submitting a document returns an **Operation-Location** header that contains the URL to poll for completion. When an analysis request completes successfully, the response contains the elements described in the [model data extraction](concept-model-overview.md#model-data-extraction).
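-
-As a sketch of this flow, the initial `POST` to the model's `:analyze` endpoint returns `202 Accepted` with an **Operation-Location** header, and a later `GET` on that URL returns a status object similar to the following (trimmed for brevity; values are illustrative):
-
-```json
-{
-  "status": "succeeded",
-  "createdDateTime": "2023-05-23T18:00:00Z",
-  "lastUpdatedDateTime": "2023-05-23T18:00:05Z",
-  "analyzeResult": {
-    "apiVersion": "2022-08-31",
-    "modelId": "prebuilt-layout",
-    "content": "CONTOSO LTD.\nINVOICE ...",
-    "pages": [],
-    "paragraphs": [],
-    "tables": []
-  }
-}
-```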
-
-### Response elements
-
-* Content elements are the basic text elements extracted from the document.
-
-* Layout elements group content elements into structural units.
-
-* Style elements describe the font and language of content elements.
-
-* Semantic elements assign meaning to the specified content elements.
-
-All content elements are grouped according to pages, specified by page number (`1`-indexed). They're also sorted in reading order, which arranges semantically contiguous elements together, even if they cross line or column boundaries. When the reading order among paragraphs and other layout elements is ambiguous, the service generally returns the content in a left-to-right, top-to-bottom order.
-
-> [!NOTE]
-> Currently, Form Recognizer does not support reading order across page boundaries. Selection marks are not positioned within the surrounding words.
-
-The top-level content property contains a concatenation of all content elements in reading order. All elements specify their position in the reading order via spans within this content string. The content of some elements may not be contiguous.
-
-## Analyze response
-
-The analyze response for each API returns different objects. API responses contain elements from component models where applicable.
-
-| Response content | Description | API |
-|--|--|--|
-| **pages**| Words, lines and spans recognized from each page of the input document. | Read, Layout, General Document, Prebuilt, and Custom models|
-| **paragraphs**| Content recognized as paragraphs. | Read, Layout, General Document, Prebuilt, and Custom models|
-| **styles**| Identified text element properties. | Read, Layout, General Document, Prebuilt, and Custom models|
-| **languages**| Identified language associated with each span of the text extracted | Read |
-| **tables**| Tabular content identified and extracted from the document. Tables relate to tables identified by the pretrained layout model. Content labeled as tables is extracted as structured fields in the documents object. | Layout, General Document, Invoice, and Custom models |
-| **keyValuePairs**| Key-value pairs recognized by a pretrained model. The key is a span of text from the document with the associated value. | General document and Invoice models |
-| **documents**| Fields recognized are returned in the ```fields``` dictionary within the list of documents| Prebuilt models, Custom models|
-
-For more information on the objects returned by each API, see [model data extraction](concept-model-overview.md#model-data-extraction).
-
-## Element properties
-
-### Spans
-
-Spans specify the logical position of each element in the overall reading order, with each span specifying a character offset and length into the top-level content string property. By default, character offsets and lengths are returned in units of user-perceived characters (also known as [`grapheme clusters`](/dotnet/standard/base-types/character-encoding-introduction) or text elements). To accommodate different development environments that use different character units, users can specify the `stringIndexType` query parameter to return span offsets and lengths in Unicode code points (Python 3) or UTF-16 code units (Java, JavaScript, .NET) instead. For more information, *see* [multilingual/emoji support](../../cognitive-services/language-service/concepts/multilingual-emoji-support.md).
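-
-For instance, in the trimmed, illustrative fragment below, the word `Analyze` occupies the first seven characters of the top-level content string, so its span has offset `0` and length `7`:
-
-```json
-{
-  "content": "Analyze documents with Form Recognizer.",
-  "pages": [
-    {
-      "pageNumber": 1,
-      "words": [
-        { "content": "Analyze", "span": { "offset": 0, "length": 7 }, "confidence": 0.99 }
-      ]
-    }
-  ]
-}
-```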
--
-### Bounding Region
-
-Bounding regions describe the visual position of each element in the file. Since elements may not be visually contiguous or may cross pages (tables), the positions of most elements are described via an array of bounding regions. Each region specifies the page number (`1`-indexed) and bounding polygon. The bounding polygon is described as a sequence of points, clockwise from the left relative to the natural orientation of the element. For quadrilaterals, plot points are top-left, top-right, bottom-right, and bottom-left corners. Each point represents its x, y coordinate in the page unit specified by the unit property. In general, the unit of measure for images is pixels, while PDFs use inches.
--
-> [!NOTE]
-> Current Form Recognizer only returns 4-vertex quadrilaterals as bounding polygons. Future versions may return a different number of points to describe more complex shapes, such as curved lines or non-rectangular images. Bounding regions apply only to rendered files; if the file isn't rendered, bounding regions aren't returned. Currently, files in DOCX/XLSX/PPTX/HTML format aren't rendered.
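-
-A minimal, illustrative bounding region for an element on page 1 of a PDF, with an inch-based quadrilateral polygon listed as top-left, top-right, bottom-right, bottom-left:
-
-```json
-"boundingRegions": [
-  {
-    "pageNumber": 1,
-    "polygon": [ 1.0, 1.0, 3.5, 1.0, 3.5, 1.4, 1.0, 1.4 ]
-  }
-]
-```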
-
-### Content elements
-
-#### Word
-
-A word is a content element composed of a sequence of characters. In Form Recognizer, a word is defined as a sequence of adjacent characters, with whitespace separating words from one another. For languages that don't use space separators between words, each character is returned as a separate word, even if it doesn't represent a semantic word unit.
--
-#### Selection marks
-
-A selection mark is a content element that represents a visual glyph indicating the state of a selection. Checkboxes are a common form of selection mark. However, selection marks may also be represented via radio buttons or a boxed cell in a visual form. The state of a selection mark may be selected or unselected, with a different visual representation to indicate each state.
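-
-In the `pages` collection, a selection mark is returned with its state, position, and confidence. A trimmed, illustrative fragment (coordinates and offsets are examples only; in the content string, the mark is represented as `:selected:` or `:unselected:`):
-
-```json
-"selectionMarks": [
-  {
-    "state": "selected",
-    "polygon": [ 2.1, 5.2, 2.3, 5.2, 2.3, 5.4, 2.1, 5.4 ],
-    "confidence": 0.99,
-    "span": { "offset": 1421, "length": 10 }
-  }
-]
-```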
--
-### Layout elements
-
-#### Line
-
-A line is an ordered sequence of consecutive content elements separated by a visual space, or ones that are immediately adjacent for languages without space delimiters between words. Content elements in the same horizontal plane (row) but separated by more than a single visual space are most often split into multiple lines. While this feature sometimes splits semantically contiguous content into separate lines, it enables the representation of textual content split into multiple columns or cells. Lines in vertical writing are detected in the vertical direction.
--
-#### Paragraph
-
-A paragraph is an ordered sequence of lines that form a logical unit. Typically, the lines share common alignment and spacing between lines. Paragraphs are often delimited via indentation, added spacing, or bullets/numbering. Content can only be assigned to a single paragraph.
-Some paragraphs may also be associated with a functional role in the document. Currently supported roles include page header, page footer, page number, title, section heading, and footnote.
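-
-For example, a paragraph recognized as a section heading looks similar to the following trimmed fragment (content, coordinates, and offsets are illustrative):
-
-```json
-"paragraphs": [
-  {
-    "role": "sectionHeading",
-    "content": "1. Terms and Conditions",
-    "boundingRegions": [ { "pageNumber": 1, "polygon": [ 1.0, 2.0, 4.2, 2.0, 4.2, 2.3, 1.0, 2.3 ] } ],
-    "spans": [ { "offset": 310, "length": 23 } ]
-  }
-]
-```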
--
-#### Page
-
-A page is a grouping of content that typically corresponds to one side of a sheet of paper. A rendered page is characterized via width and height in the specified unit. In general, images use pixels while PDFs use inches. The angle property describes the overall text angle in degrees for pages that may be rotated.
-
-> [!NOTE]
-> For spreadsheets like Excel, each sheet is mapped to a page. For presentations, like PowerPoint, each slide is mapped to a page. For file formats like HTML or Word documents that have no native concept of pages and aren't rendered, the main content of the file is considered a single page.
-
-#### Table
-
-A table organizes content into a group of cells in a grid layout. The rows and columns may be visually separated by grid lines, color banding, or greater spacing. The position of a table cell is specified via its row and column indices. A cell may span across multiple rows and columns.
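-
-A trimmed, illustrative sketch of a table object with one column header cell and one content cell (cell contents and offsets are examples only):
-
-```json
-"tables": [
-  {
-    "rowCount": 2,
-    "columnCount": 2,
-    "cells": [
-      { "kind": "columnHeader", "rowIndex": 0, "columnIndex": 0, "content": "Item", "spans": [ { "offset": 500, "length": 4 } ] },
-      { "rowIndex": 1, "columnIndex": 0, "content": "Widget A", "spans": [ { "offset": 520, "length": 8 } ] }
-    ]
-  }
-]
-```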
-
-Based on its position and styling, a cell may be classified as general content, row header, column header, stub head, or description:
-
-* A row header cell is typically the first cell in a row that describes the other cells in the row.
-
-* A column header cell is typically the first cell in a column that describes the other cells in a column.
-
-* A row or column may contain multiple header cells to describe hierarchical content.
-
-* A stub head cell is typically the cell in the first row and first column position. It may be empty or describe the values in the header cells in the same row/column.
-
-* A description cell generally appears at the topmost or bottom area of a table, describing the overall table content. However, it may sometimes appear in the middle of a table to break the table into sections. Typically, description cells span across multiple cells in a single row.
-
-* A table caption specifies content that explains the table. A table may further have an associated caption and a set of footnotes. Unlike a description cell, a caption typically lies outside the grid layout. A table footnote annotates content inside the table, often marked with a footnote symbol. It's often found below the table grid.
-
-**Layout tables differ from document fields extracted from tabular data**. Layout tables are extracted from tabular visual content in the document without considering the semantics of the content. In fact, some layout tables are designed purely for visual layout and may not always contain structured data. The method to extract structured data from documents with diverse visual layout, like itemized details of a receipt, generally requires significant post processing. It's essential to map the row or column headers to structured fields with normalized field names. Depending on the document type, use prebuilt models or train a custom model to extract such structured content. The resulting information is exposed as document fields. Such trained models can also handle tabular data without headers and structured data in nontabular forms, for example the work experience section of a resume.
--
-#### Form field (key value pair)
-
-A form field consists of a field label (key) and value. The field label is generally a descriptive text string describing the meaning of the field. It often appears to the left of the value, though it can also appear over or under the value. The field value contains the content value of a specific field instance. The value may consist of words, selection marks, and other content elements. It may also be empty for unfilled form fields. A special type of form field has a selection mark value with the field label to its right.
-A document field is a similar but distinct concept from a general form field. The field label (key) in a general form field must appear in the document, so it generally can't capture information like the merchant name in a receipt. Document fields are labeled and don't extract a key; they only map an extracted value to a labeled key. For more information, *see* [document fields]().
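-
-A trimmed, illustrative sketch of a form field as it appears in the `keyValuePairs` collection (content and offsets are examples only):
-
-```json
-"keyValuePairs": [
-  {
-    "key": { "content": "Due date:", "spans": [ { "offset": 680, "length": 9 } ] },
-    "value": { "content": "5/7/2022", "spans": [ { "offset": 690, "length": 8 } ] },
-    "confidence": 0.96
-  }
-]
-```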
--
-### Style elements
-
-#### Style
-
-A style element describes the font style to apply to text content. The content is specified via spans into the global content property. Currently, the only detected font style is whether the text is handwritten. As other styles are added, text may be described via multiple nonconflicting style objects. For compactness, all text sharing a particular font style (with the same confidence) is described via a single style object.
--
-```json
-
-{
- "confidence": 1,
- "spans": [
- {
- "offset": 2402,
- "length": 7
- }
- ],
- "isHandwritten": true
-}
-```
-
-#### Language
-
-A language element describes the detected language for content specified via spans into the global content property. The detected language is specified via a [BCP-47 language tag](https://en.wikipedia.org/wiki/IETF_language_tag) to indicate the primary language and optional script and region information. For example, English and traditional Chinese are recognized as "en" and "zh-Hant", respectively. Regional spelling differences for UK English may lead the text to be detected as "en-GB". Language elements don't cover text without a dominant language (ex. numbers).
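-
-For example (spans and confidence values are illustrative):
-
-```json
-"languages": [
-  {
-    "locale": "en",
-    "spans": [ { "offset": 0, "length": 1024 } ],
-    "confidence": 0.99
-  },
-  {
-    "locale": "zh-Hant",
-    "spans": [ { "offset": 1024, "length": 512 } ],
-    "confidence": 0.98
-  }
-]
-```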
-
-### Semantic elements
-
-#### Document
-
-A document is a semantically complete unit. A file may contain multiple documents, such as multiple tax forms within a PDF file, or multiple receipts within a single page. However, the ordering of documents within the file doesn't fundamentally affect the information it conveys.
-
-> [!NOTE]
-> Current Form Recognizer does not support multiple documents on a single page.
-
-The document type describes documents sharing a common set of semantic fields, represented by a structured schema, independent of its visual template or layout. For example, all documents of type "receipt" may contain the merchant name, transaction date, and transaction total, although restaurant and hotel receipts often differ in appearance.
-
-A document element includes the list of recognized fields from among the fields specified by the semantic schema of the detected document type. A document field may be extracted or inferred. Extracted fields are represented via the extracted content and optionally its normalized value, if interpretable. Inferred fields don't have a content property and are represented only via their value. Array fields don't include a content property, as the content can be concatenated from the content of the array elements. Object fields do contain a content property that specifies the full content representing the object, which may be a superset of the extracted subfields.
-
-The semantic schema of a document type is described via the fields it may contain. Each field schema is specified via its canonical name and value type. Field value types include basic (ex. string), compound (ex. address), and structured (ex. array, object) types. The field value type also specifies the semantic normalization performed to convert detected content into a normalized representation. Normalization may be locale dependent.
-
-#### Basic types
-
-| Field value type| Description | Normalized representation | Example (Field content -> Value) |
-|--|--|--|--|
-| string | Plain text | Same as content | MerchantName: "Contoso" → "Contoso" |
-| date | Date | ISO 8601 - YYYY-MM-DD | InvoiceDate: "5/7/2022" → "2022-05-07" |
-| time | Time | ISO 8601 - hh:mm:ss | TransactionTime: "9:45 PM" → "21:45:00" |
-| phoneNumber | Phone number | E.164 - +{CountryCode}{SubscriberNumber} | WorkPhone: "(800) 555-7676" → "+18005557676"|
-| countryRegion | Country/region | ISO 3166-1 alpha-3 | CountryRegion: "United States" → "USA" |
-| selectionMark | Is selected | "selected" or "unselected" | AcceptEula: ☑ → "selected" |
-| signature | Is signed | Same as content | LendeeSignature: {signature} → "signed" |
-| number | Floating point number | Floating point number | Quantity: "1.20" → 1.2|
-| integer | Integer number | 64-bit signed number | Count: "123" → 123 |
-| boolean | Boolean value | true/false | IsStatutoryEmployee: ☑ → true |
-
-#### Compound types
-
-* Currency: Currency amount with an optional currency symbol. For example: ```InvoiceTotal: $123.45```
-
- ```json
- {
- "amount": 123.45,
- "currencySymbol": "$"
- }
- ```
-
-* Address: Parsed address. For example: ```ShipToAddress: 123 Main St., Redmond, WA 98052```
-
-```json
- {
- "poBox": "PO Box 12",
- "houseNumber": "123",
- "streetName": "Main St.",
- "city": "Redmond",
- "state": "WA",
- "postalCode": "98052",
- "countryRegion": "USA",
- "streetAddress": "123 Main St."
- }
-
-```
-
-#### Structured types
-
-* Array: List of fields of the same type
-
-```json
-"Items": {
- "type": "array",
- "valueArray": [
-
- ]
-}
-
-```
-
-* Object: Named list of subfields of potentially different types
-
-```json
-"InvoiceTotal": {
- "type": "currency",
- "valueCurrency": {
- "currencySymbol": "$",
- "amount": 110
- },
- "content": "$110.00",
- "boundingRegions": [
- {
- "pageNumber": 1,
- "polygon": [
- 7.3842,
- 7.465,
- 7.9181,
- 7.465,
- 7.9181,
- 7.6089,
- 7.3842,
- 7.6089
- ]
- }
- ],
- "confidence": 0.945,
- "spans": [
- {
- "offset": 806,
- "length": 7
- }
- ]
-}
-```
-
-## Next steps
-
-* Try processing your own forms and documents with the [Form Recognizer Studio](https://formrecognizer.appliedai.azure.com/studio)
-
-* Complete a [Form Recognizer quickstart](quickstarts/get-started-sdks-rest-api.md?view=form-recog-3.0.0&preserve-view=true) and get started creating a document processing app in the development language of your choice.
applied-ai-services Concept Business Card https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/concept-business-card.md
- Title: Business card data extraction - Form Recognizer-
-description: OCR and machine learning based business card scanning in Form Recognizer extracts key data from business cards.
----- Previously updated : 05/23/2023--
-<!-- markdownlint-disable MD033 -->
-
-# Azure Form Recognizer business card model
---
-The Form Recognizer business card model combines powerful Optical Character Recognition (OCR) capabilities with deep learning models to analyze and extract data from business card images. The API analyzes printed business cards; extracts key information such as first name, last name, company name, email address, and phone number; and returns a structured JSON data representation.
-
-## Business card data extraction
-
-Business cards are a great way to represent a business or a professional. The company logo, fonts, and background images found in business cards help promote the company branding and differentiate it from others. Applying OCR and machine-learning based techniques to automate the scanning of business cards is a common image processing scenario. Enterprise systems used by sales and marketing teams typically integrate business card data extraction for the benefit of their users.
-
-***Sample business card processed with [Form Recognizer Studio](https://formrecognizer.appliedai.azure.com/studio/prebuilt?formType=businessCard)***
----
-***Sample business card processed with [Form Recognizer Sample Labeling tool](https://fott-2-1.azurewebsites.net/)***
---
-## Development options
--
-Form Recognizer v3.0 supports the following tools:
-
-| Feature | Resources | Model ID |
-|-|-|--|
-|**Business card model**| <ul><li>[**Form Recognizer Studio**](https://formrecognizer.appliedai.azure.com)</li><li>[**REST API**](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-2022-08-31/operations/AnalyzeDocument)</li><li>[**C# SDK**](quickstarts/get-started-sdks-rest-api.md?view=form-recog-3.0.0&preserve-view=true)</li><li>[**Python SDK**](quickstarts/get-started-sdks-rest-api.md?view=form-recog-3.0.0&preserve-view=true)</li><li>[**Java SDK**](quickstarts/get-started-sdks-rest-api.md?view=form-recog-3.0.0&preserve-view=true)</li><li>[**JavaScript SDK**](quickstarts/get-started-sdks-rest-api.md?view=form-recog-3.0.0&preserve-view=true)</li></ul>|**prebuilt-businessCard**|
---
-Form Recognizer v2.1 supports the following tools:
-
-| Feature | Resources |
-|-|-|
-|**Business card model**| <ul><li>[**Form Recognizer labeling tool**](https://fott-2-1.azurewebsites.net/prebuilts-analyze)</li><li>[**REST API**](./how-to-guides/use-sdk-rest-api.md?pivots=programming-language-rest-api&preserve-view=true&tabs=windows&view=form-recog-2.1.0#analyze-business-cards)</li><li>[**Client-library SDK**](/azure/applied-ai-services/form-recognizer/how-to-guides/v2-1-sdk-rest-api)</li><li>[**Form Recognizer Docker container**](containers/form-recognizer-container-install-run.md?tabs=business-card#run-the-container-with-the-docker-compose-up-command)</li></ul>|
--
-### Try business card data extraction
-
-See how data, including name, job title, address, email, and company name, is extracted from business cards. You need the following resources:
-
-* An Azure subscription; you can [create one for free](https://azure.microsoft.com/free/cognitive-services/)
-
-* A [Form Recognizer instance](https://portal.azure.com/#create/Microsoft.CognitiveServicesFormRecognizer) in the Azure portal. You can use the free pricing tier (`F0`) to try the service. After your resource deploys, select **Go to resource** to get your key and endpoint.
-
- :::image type="content" source="media/containers/keys-and-endpoint.png" alt-text="Screenshot: keys and endpoint location in the Azure portal.":::
--
-#### Form Recognizer Studio
-
-> [!NOTE]
-> Form Recognizer studio is available with the v3.0 API.
-
-1. On the Form Recognizer Studio home page, select **Business cards**
-
-1. You can analyze the sample business card or select the **+ Add** button to upload your own sample.
-
-1. Select the **Analyze** button:
-
- :::image type="content" source="media/studio/business-card-analyze.png" alt-text="Screenshot: analyze business card menu.":::
-
- > [!div class="nextstepaction"]
- > [Try Form Recognizer Studio](https://formrecognizer.appliedai.azure.com/studio/prebuilt?formType=businessCard)
---
-## Form Recognizer Sample Labeling tool
-
-1. Navigate to the [Form Recognizer Sample Tool](https://fott-2-1.azurewebsites.net/).
-
-1. On the sample tool home page, select the **Use prebuilt model to get data** tile.
-
- :::image type="content" source="media/label-tool/prebuilt-1.jpg" alt-text="Screenshot of the layout model analyze results operation.":::
-
-1. Select the **Form Type** to analyze from the dropdown menu.
-
-1. Choose a URL for the file you would like to analyze from the below options:
-
- * [**Sample invoice document**](https://raw.githubusercontent.com/Azure-Samples/cognitive-services-REST-api-samples/master/curl/form-recognizer/invoice_sample.jpg).
- * [**Sample ID document**](https://raw.githubusercontent.com/Azure-Samples/cognitive-services-REST-api-samples/master/curl/form-recognizer/DriverLicense.png).
- * [**Sample receipt image**](https://raw.githubusercontent.com/Azure-Samples/cognitive-services-REST-api-samples/master/curl/form-recognizer/contoso-allinone.jpg).
- * [**Sample business card image**](https://raw.githubusercontent.com/Azure/azure-sdk-for-python/master/sdk/formrecognizer/azure-ai-formrecognizer/samples/sample_forms/business_cards/business-card-english.jpg).
-
-1. In the **Source** field, select **URL** from the dropdown menu, paste the selected URL, and select the **Fetch** button.
-
- :::image type="content" source="media/label-tool/fott-select-url.png" alt-text="Screenshot of source location dropdown menu.":::
-
-1. In the **Form recognizer service endpoint** field, paste the endpoint that you obtained with your Form Recognizer subscription.
-
-1. In the **key** field, paste the key you obtained from your Form Recognizer resource.
-
- :::image type="content" source="media/fott-select-form-type.png" alt-text="Screenshot of the select-form-type dropdown menu.":::
-
-1. Select **Run analysis**. The Form Recognizer Sample Labeling tool calls the Analyze Prebuilt API and analyzes the document.
-
-1. View the results: see the extracted key-value pairs, line items, highlighted extracted text, and detected tables.
-
- :::image type="content" source="media/business-card-results.png" alt-text="Screenshot of the business card model analyze results operation.":::
-
-> [!NOTE]
-> The [Sample Labeling tool](https://fott-2-1.azurewebsites.net/) does not support the BMP file format. This is a limitation of the tool, not the Form Recognizer service.
--
-## Input requirements
-----
-* Supported file formats: JPEG, PNG, PDF, and TIFF
-* For PDF and TIFF, up to 2000 pages are processed. For free tier subscribers, only the first two pages are processed.
-* The file size must be less than 50 MB and dimensions at least 50 x 50 pixels and at most 10,000 x 10,000 pixels.
---
-## Supported languages and locales
-
->[!NOTE]
- > It's not necessary to specify a locale. This is an optional parameter. The Form Recognizer deep-learning technology will auto-detect the language of the text in your image.
-
-| Model | Language - Locale code | Default |
-|--|:-|:|
-|Business card (v3.0 API)| <ul><li>English (United States) - en-US</li><li>English (Australia) - en-AU</li><li>English (Canada) - en-CA</li><li>English (United Kingdom) - en-GB</li><li>English (India) - en-IN</li><li>English (Japan) - en-JP</li><li>Japanese (Japan) - ja-JP</li></ul> | Autodetected (en-US or ja-JP) |
-|Business card (v2.1 API)| <ul><li>English (United States) - en-US</li><li>English (Australia) - en-AU</li><li>English (Canada) - en-CA</li><li>English (United Kingdom) - en-GB</li><li>English (India) - en-IN</li></ul> | Autodetected |
-
-## Field extractions
-
-|Name| Type | Description |Standardized output |
-|:--|:-|:-|:-:|
-| ContactNames | Array of objects | Contact name | |
-| FirstName | String | First (given) name of contact | |
-| LastName | String | Last (family) name of contact | |
-| CompanyNames | Array of strings | Company name(s)| |
-| Departments | Array of strings | Department(s) or organization(s) of contact | |
-| JobTitles | Array of strings | Listed Job title(s) of contact | |
-| Emails | Array of strings | Contact email address(es) | |
-| Websites | Array of strings | Company website(s) | |
-| Addresses | Array of strings | Address(es) extracted from business card | |
-| MobilePhones | Array of phone numbers | Mobile phone number(s) from business card |+1 xxx xxx xxxx |
-| Faxes | Array of phone numbers | Fax phone number(s) from business card | +1 xxx xxx xxxx |
-| WorkPhones | Array of phone numbers | Work phone number(s) from business card | +1 xxx xxx xxxx |
-| OtherPhones | Array of phone numbers | Other phone number(s) from business card | +1 xxx xxx xxxx |
---
-### Fields extracted
-
-|Name| Type | Description | Text |
-|:--|:-|:-|:-|
-| ContactNames | array of objects | Contact name extracted from business card | [{ "FirstName": "John", "LastName": "Doe" }] |
-| FirstName | string | First (given) name of contact | "John" |
-| LastName | string | Last (family) name of contact | "Doe" |
-| CompanyNames | array of strings | Company name extracted from business card | ["Contoso"] |
-| Departments | array of strings | Department or organization of contact | ["R&D"] |
-| JobTitles | array of strings | Listed Job title of contact | ["Software Engineer"] |
-| Emails | array of strings | Contact email extracted from business card | ["johndoe@contoso.com"] |
-| Websites | array of strings | Website extracted from business card | ["https://www.contoso.com"] |
-| Addresses | array of strings | Address extracted from business card | ["123 Main Street, Redmond, WA 98052"] |
-| MobilePhones | array of phone numbers | Mobile phone number extracted from business card | ["+19876543210"] |
-| Faxes | array of phone numbers | Fax phone number extracted from business card | ["+19876543211"] |
-| WorkPhones | array of phone numbers | Work phone number extracted from business card | ["+19876543231"] |
-| OtherPhones | array of phone numbers | Other phone number extracted from business card | ["+19876543233"] |
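-
-The values in the preceding table are returned in the `fields` dictionary of the analyzed document. The following trimmed fragment is a sketch of a v3.0 business card response; the `docType` value is representative, and metadata such as bounding regions and spans is omitted:
-
-```json
-"documents": [
-  {
-    "docType": "businessCard",
-    "fields": {
-      "ContactNames": {
-        "type": "array",
-        "valueArray": [
-          {
-            "type": "object",
-            "valueObject": {
-              "FirstName": { "type": "string", "valueString": "John", "confidence": 0.98 },
-              "LastName": { "type": "string", "valueString": "Doe", "confidence": 0.98 }
-            }
-          }
-        ]
-      },
-      "Emails": {
-        "type": "array",
-        "valueArray": [ { "type": "string", "valueString": "johndoe@contoso.com", "confidence": 0.99 } ]
-      }
-    },
-    "confidence": 0.95
-  }
-]
-```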
-
-## Supported locales
-
-**Prebuilt business cards v2.1** supports the following locales:
-
-* **en-us**
-* **en-au**
-* **en-ca**
-* **en-gb**
-* **en-in**
-
-### Migration guide and REST API v3.0
-
-* Follow our [**Form Recognizer v3.0 migration guide**](v3-migration-guide.md) to learn how to use the v3.0 version in your applications and workflows.
--
-## Next steps
--
-* Try processing your own forms and documents with the [Form Recognizer Studio](https://formrecognizer.appliedai.azure.com/studio)
-
-* Complete a [Form Recognizer quickstart](quickstarts/get-started-sdks-rest-api.md?view=form-recog-3.0.0&preserve-view=true) and get started creating a document processing app in the development language of your choice.
---
-* Try processing your own forms and documents with the [Form Recognizer Sample Labeling tool](https://fott-2-1.azurewebsites.net/)
-
-* Complete a [Form Recognizer quickstart](quickstarts/get-started-sdks-rest-api.md?view=form-recog-2.1.0&preserve-view=true) and get started creating a document processing app in the development language of your choice.
-
applied-ai-services Concept Composed Models https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/concept-composed-models.md
- Title: Composed custom models - Form Recognizer-
-description: Compose several custom models into a single model for easier data extraction from groups of distinct form types.
----- Previously updated : 05/23/2023---
-# Composed custom models
----
-**Composed models**. A composed model is created by taking a collection of custom models and assigning them to a single model built from your form types. When a document is submitted for analysis using a composed model, the service performs a classification to decide which custom model best represents the submitted document.
-
-With composed models, you can assign multiple custom models to a composed model called with a single model ID. It's useful when you've trained several models and want to group them to analyze similar form types. For example, your composed model might include custom models trained to analyze your supply, equipment, and furniture purchase orders. Instead of manually trying to select the appropriate model, you can use a composed model to determine the appropriate custom model for each analysis and extraction.
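-
-As a hedged sketch of the v3.0 compose request, the body posted to the `documentModels:compose` endpoint lists the component models to group under a single composed model ID (the model IDs and description below are illustrative):
-
-```json
-{
-  "modelId": "purchase-orders-composed",
-  "description": "Supply, equipment, and furniture purchase orders",
-  "componentModels": [
-    { "modelId": "supply-purchase-orders" },
-    { "modelId": "equipment-purchase-orders" },
-    { "modelId": "furniture-purchase-orders" }
-  ]
-}
-```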
-
-* ```Custom form``` and ```Custom template``` models can be composed together into a single composed model.
-
-* With the model compose operation, you can assign up to 200 trained custom models to a single composed model. To analyze a document with a composed model, Form Recognizer first classifies the submitted form, chooses the best-matching assigned model, and returns results.
-
-* For **_custom template models_**, the composed model can be created using variations of a custom template or different form types. This operation is useful when incoming forms may belong to one of several templates.
-
-* The response includes a ```docType``` property to indicate which of the composed models was used to analyze the document.
-
-* For ```Custom neural``` models, the best practice is to add all the different variations of a single document type into a single training dataset and train a custom neural model. Model compose is best suited for scenarios when you have documents of different types being submitted for analysis.
---
-With the introduction of [**custom classification models**](./concept-custom-classifier.md), you can choose to use a [**composed model**](./concept-composed-models.md) or [**classification model**](concept-custom-classifier.md) as an explicit step before analysis. For a deeper understanding of when to use a classification or composed model, _see_ [**Custom classification models**](concept-custom-classifier.md#compare-custom-classification-and-composed-models).
-
-## Compose model limits
-
-> [!NOTE]
-> With the addition of **_custom neural model_** , there are a few limits to the compatibility of models that can be composed together.
-
-### Composed model compatibility
-
-|Custom model type|Models trained with v2.1 and v2.0 | Custom template models v3.0 |Custom neural models v3.0 (preview) |Custom neural models 3.0 (GA)|
-|--|--|--|--|--|
-|**Models trained with version 2.1 and v2.0** |Supported|Supported|Not Supported|Not Supported|
-|**Custom template models v3.0** |Supported|Supported|Not Supported|Not Supported|
-|**Custom template models v3.0 (GA)** |Not Supported|Not Supported|Supported|Not Supported|
-|**Custom neural models v3.0 (preview)**|Not Supported|Not Supported|Supported|Not Supported|
-|**Custom Neural models v3.0 (GA)**|Not Supported|Not Supported|Not Supported|Supported|
-
-* To compose a model trained with a prior version of the API (v2.1 or earlier), train a model with the v3.0 API using the same labeled dataset. That addition ensures that the v2.1 model can be composed with other models.
-
-* Models composed with v2.1 of the API continue to be supported, requiring no updates.
-
-* The limit for maximum number of custom models that can be composed is 100.
--
-## Development options
-
-The following resources are supported by Form Recognizer **v3.0** :
-
-| Feature | Resources |
-|-|-|
-|_**Custom model**_| <ul><li>[Form Recognizer Studio](https://formrecognizer.appliedai.azure.com/studio/custommodel/projects)</li><li>[REST API](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-2022-08-31/operations/AnalyzeDocument)</li><li>[C# SDK](quickstarts/get-started-sdks-rest-api.md?view=form-recog-3.0.0&preserve-view=true)</li><li>[Java SDK](quickstarts/get-started-sdks-rest-api.md?view=form-recog-3.0.0&preserve-view=true)</li><li>[JavaScript SDK](quickstarts/get-started-sdks-rest-api.md?view=form-recog-3.0.0&preserve-view=true)</li><li>[Python SDK](quickstarts/get-started-sdks-rest-api.md?view=form-recog-3.0.0&preserve-view=true)</li></ul>|
-| _**Composed model**_| <ul><li>[Form Recognizer Studio](https://formrecognizer.appliedai.azure.com/studio/custommodel/projects)</li><li>[REST API](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-2022-08-31/operations/ComposeDocumentModel)</li><li>[C# SDK](/dotnet/api/azure.ai.formrecognizer.training.formtrainingclient.startcreatecomposedmodel)</li><li>[Java SDK](/java/api/com.azure.ai.formrecognizer.training.formtrainingclient.begincreatecomposedmodel)</li><li>[JavaScript SDK](/javascript/api/@azure/ai-form-recognizer/documentmodeladministrationclient?view=azure-node-latest#@azure-ai-form-recognizer-documentmodeladministrationclient-begincomposemodel&preserve-view=true)</li><li>[Python SDK](/python/api/azure-ai-formrecognizer/azure.ai.formrecognizer.formtrainingclient?view=azure-python#azure-ai-formrecognizer-formtrainingclient-begin-create-composed-model&preserve-view=true)</li></ul>|
--
-Form Recognizer v2.1 supports the following resources:
-
-| Feature | Resources |
-|-|-|
-|_**Custom model**_| <ul><li>[Form Recognizer labeling tool](https://fott-2-1.azurewebsites.net)</li><li>[REST API](./how-to-guides/use-sdk-rest-api.md?pivots=programming-language-rest-api&preserve-view=true&tabs=windows&view=form-recog-2.1.0#analyze-forms-with-a-custom-model)</li><li>[Client library SDK](/azure/applied-ai-services/form-recognizer/how-to-guides/v2-1-sdk-rest-api)</li><li>[Form Recognizer Docker container](containers/form-recognizer-container-install-run.md?tabs=custom#run-the-container-with-the-docker-compose-up-command)</li></ul>|
-| _**Composed model**_ |<ul><li>[Form Recognizer labeling tool](https://fott-2-1.azurewebsites.net/)</li><li>[REST API](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-v2-1/operations/Compose)</li><li>[C# SDK](/dotnet/api/azure.ai.formrecognizer.training.createcomposedmodeloperation?view=azure-dotnet&preserve-view=true)</li><li>[Java SDK](/java/api/com.azure.ai.formrecognizer.models.createcomposedmodeloptions?view=azure-java-stable&preserve-view=true)</li><li>JavaScript SDK</li><li>[Python SDK](/python/api/azure-ai-formrecognizer/azure.ai.formrecognizer.formtrainingclient?view=azure-python#azure-ai-formrecognizer-formtrainingclient-begin-create-composed-model&preserve-view=true)</li></ul>|
-
-## Next steps
-
-Learn to create and compose custom models:
-
-> [!div class="nextstepaction"]
-> [**Build a custom model**](how-to-guides/build-a-custom-model.md)
-> [**Compose custom models**](how-to-guides/compose-custom-models.md)
applied-ai-services Concept Custom Classifier https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/concept-custom-classifier.md
- Title: Custom classification model - Form Recognizer-
-description: Use the custom classification model to train a model to identify and split the documents you process within your application.
----- Previously updated : 05/23/2023--
-monikerRange: 'form-recog-3.0.0'
--
-# Custom classification model
-
-**This article applies to:** ![Form Recognizer checkmark](medi) supported by Form Recognizer REST API version [2023-02-28-preview](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-2023-02-28-preview/operations/AnalyzeDocument)**.
-
-> [!IMPORTANT]
->
-> Custom classification model is currently in public preview. Features, approaches, and processes may change, prior to General Availability (GA), based on user feedback.
->
-
-Custom classification models are deep-learning-model types that combine layout and language features to accurately detect and identify documents you process within your application. Custom classification models can classify each page in an input file to identify the document(s) within and can also identify multiple documents or multiple instances of a single document within an input file.
-
-## Model capabilities
-
-Custom classification models can analyze single- or multi-file documents to identify whether any of the trained document types are contained within an input file. Here are the currently supported scenarios:
-
-* A single file containing one document. For instance, a loan application form.
-
-* A single file containing multiple documents. For instance, a loan application package containing a loan application form, payslip, and bank statement.
-
-* A single file containing multiple instances of the same document. For instance, a collection of scanned invoices.
-
-Training a custom classifier requires at least two distinct classes and a minimum of five samples per class.
-
-### Compare custom classification and composed models
-
-A custom classification model can replace [a composed model](concept-composed-models.md) in some scenarios but there are a few differences to be aware of:
-
-| Capability | Custom classifier process | Composed model process |
-|--|--|--|
-|Analyze a single document of unknown type belonging to one of the types trained for extraction model processing.| &#9679; Requires multiple calls. </br> &#9679; Call the classification model based on the document class. This step allows for a confidence-based check before invoking the extraction model analysis.</br> &#9679; Invoke the extraction model. | &#9679; Requires a single call to a composed model containing the model corresponding to the input document type. |
 |Analyze a single document of unknown type belonging to several types trained for extraction model processing.| &#9679; Requires multiple calls.</br> &#9679; Make a call to the classifier that ignores documents not matching a designated type for extraction.</br> &#9679; Invoke the extraction model. | &#9679; Requires a single call to a composed model. The service selects a custom model within the composed model with the highest match.</br> &#9679; A composed model can't ignore documents.|
-|Analyze a file containing multiple documents of known or unknown type belonging to one of the types trained for extraction model processing.| &#9679; Requires multiple calls. </br> &#9679; Call the extraction model for each identified document in the input file.</br> &#9679; Invoke the extraction model. | &#9679; Requires a single call to a composed model.</br> &#9679; The composed model invokes the component model once on the first instance of the document. </br> &#9679; The remaining documents are ignored. |
-
-## Language support
-
-Classification models currently only support English language documents.
-
-## Best practices
-
-Custom classification models require a minimum of five samples per class to train. If the classes are similar, adding extra training samples improves model accuracy.
-
-## Training a model
-
-Custom classification models are only available in the [v3.0 API](v3-migration-guide.md) starting with API version ```2023-02-28-preview```. [Form Recognizer Studio](https://formrecognizer.appliedai.azure.com/studio) provides a no-code user interface to interactively train a custom classifier.
-
-When using the REST API, if you've organized your documents by folders, you can use the ```azureBlobSource``` property of the request to train a classification model.
-
-```rest
-https://{endpoint}/formrecognizer/documentClassifiers:build?api-version=2023-02-28-preview
-
-{
- "classifierId": "demo2.1",
- "description": "",
- "docTypes": {
- "car-maint": {
- "azureBlobSource": {
- "containerUrl": "SAS URL to container",
- "prefix": "sample1/car-maint/"
- }
- },
- "cc-auth": {
- "azureBlobSource": {
- "containerUrl": "SAS URL to container",
- "prefix": "sample1/cc-auth/"
- }
- },
- "deed-of-trust": {
- "azureBlobSource": {
- "containerUrl": "SAS URL to container",
- "prefix": "sample1/deed-of-trust/"
- }
- }
- }
-}
-
-```
-
-Alternatively, if you have a flat list of files or only plan to use a few select files within each folder to train the model, you can use the ```azureBlobFileListSource``` property to train the model. This step requires a ```file list``` in [JSON Lines](https://jsonlines.org/) format. For each class, add a new file with a list of files to be submitted for training.
-
-```rest
-{
- "classifierId": "demo2",
- "description": "",
- "docTypes": {
- "car-maint": {
- "azureBlobFileListSource": {
- "containerUrl": "SAS URL to container",
- "fileList": "sample1/car-maint.jsonl"
- }
- },
- "cc-auth": {
- "azureBlobFileListSource": {
- "containerUrl": "SAS URL to container",
- "fileList": "sample1/cc-auth.jsonl"
- }
- },
- "deed-of-trust": {
- "azureBlobFileListSource": {
- "containerUrl": "SAS URL to container",
- "fileList": "sample1/deed-of-trust.jsonl"
- }
- }
- }
-}
-
-```
-
-File list `car-maint.jsonl` contains the following files.
-
-```json
-{"file":"sample1/car-maint/Commercial Motor Vehicle - Adatum.pdf"}
-{"file":"sample1/car-maint/Commercial Motor Vehicle - Fincher.pdf"}
-{"file":"sample1/car-maint/Commercial Motor Vehicle - Lamna.pdf"}
-{"file":"sample1/car-maint/Commercial Motor Vehicle - Liberty.pdf"}
-{"file":"sample1/car-maint/Commercial Motor Vehicle - Trey.pdf"}
-```
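-
-Once the classifier is trained, you call its `documentClassifiers/{classifierId}:analyze` endpoint with an input file; the `analyzeResult` identifies each detected document and its class. The following trimmed sketch shows the kind of output to expect (class names and confidence values are illustrative, and metadata such as bounding regions and spans is omitted):
-
-```json
-{
-  "status": "succeeded",
-  "analyzeResult": {
-    "documents": [
-      { "docType": "car-maint", "confidence": 0.95 },
-      { "docType": "cc-auth", "confidence": 0.92 }
-    ]
-  }
-}
-```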
-
-## Next steps
-
-Learn to create custom classification models:
-
-> [!div class="nextstepaction"]
-> [**Build a custom classification model**](how-to-guides/build-a-custom-classifier.md)
-> [**Custom models overview**](concept-custom.md)
applied-ai-services Concept Custom Label Tips https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/concept-custom-label-tips.md
- Title: Labeling tips for custom models in the Form Recognizer Studio-
-description: Label tips and tricks for Form Recognizer Studio
----- Previously updated : 01/30/2023----
-# Tips for labeling custom model datasets
-
-This article highlights the best methods for labeling custom model datasets in the Form Recognizer Studio. Labeling documents can be time consuming when you have a large number of labels, long documents, or documents with varying structure. These tips should help you label documents more efficiently.
-
-## Video: Custom labels best practices
-
-* The following video is the second of two presentations intended to help you build custom models with higher accuracy (the first presentation explores [How to create a balanced data set](concept-custom-label.md#video-custom-label-tips-and-pointers)).
-
-* Here, we examine best practices for labeling your selected documents. With semantically relevant and consistent labeling, you should see an improvement in model performance.</br></br>
-
- > [!VIDEO https://www.microsoft.com/en-us/videoplayer/embed/RE5fZKB ]
-
-## Search
-
-The Studio now includes a search box for instances when you know you need to find specific words to label, but just don't know where they're located in the document. Simply search for the word or phrase and navigate to the specific section in the document to label the occurrence.
-
-## Auto label tables
-
-Tables can be challenging to label when they have many rows or dense text. If the layout table extracts the result you need, you should just use that result and skip the labeling process. In instances where the layout table isn't exactly what you need, you can start by generating the table field from the values that layout extracts. Select the table icon on the page and then select the auto label button. You can then edit the values as needed. Auto label currently only supports single-page tables.
-
-## Shift select
-
-When labeling a large span of text, rather than mark each word in the span, hold down the shift key as you're selecting the words to speed up labeling and ensure you don't miss any words in the span of text.
-
-## Region labeling
-
-A second option for labeling larger spans of text is to use region labeling. When region labeling is used, the OCR results are populated in the value at training time. The difference between the shift select and region labeling is only in the visual feedback the shift labeling approach provides.
-
-## Field subtypes
-
-When creating a field, select the right subtype to minimize post processing, for instance select the ```dmy``` option for dates to extract the values in a ```dd-mm-yyyy``` format.
-
-## Batch layout
-
-When creating a project, select the batch layout option to prepare all documents in your dataset for labeling. This feature ensures that you no longer have to select each document and wait for the layout results before you can start labeling.
-
-## Next steps
-
-* Learn more about custom labeling:
-
- > [!div class="nextstepaction"]
- > [Custom labels](concept-custom-label.md)
-
-* Learn more about custom template models:
-
- > [!div class="nextstepaction"]
- > [Custom template models](concept-custom-template.md )
-
-* Learn more about custom neural models:
-
- > [!div class="nextstepaction"]
- > [Custom neural models](concept-custom-neural.md )
applied-ai-services Concept Custom Label https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/concept-custom-label.md
- Title: Best practices for labeling documents in the Form Recognizer Studio-
-description: Label documents in the Studio to create a training dataset. Labeling guidelines aimed at training a model with high accuracy
----- Previously updated : 03/03/2023--
-monikerRange: 'form-recog-3.0.0'
--
-# Best practices: Generating Form Recognizer labeled dataset
-
-Custom models (template and neural) require a labeled dataset of at least five documents to train a model. The quality of the labeled dataset affects the accuracy of the trained model. This guide helps you learn more about generating a model with high accuracy by assembling a diverse dataset and provides best practices for labeling your documents.
-
-## Understand the components of a labeled dataset
-
-A labeled dataset consists of several files:
-
-* You provide a set of sample documents (typically PDFs or images). A minimum of five documents is needed to train a model.
-
-* Additionally, the labeling process generates the following files:
-
- * A `fields.json` file is created when the first field is added. There's one `fields.json` file for the entire training dataset; the field list contains the field name and associated subfields and types.
-
- * The Studio runs each of the documents through the [Layout API](concept-layout.md). The layout response for each of the sample files in the dataset is added as `{file}.ocr.json`. The layout response is used to generate the field labels when a specific span of text is labeled.
-
- * A `{file}.labels.json` file is created or updated when a field is labeled in a document. The label file contains the spans of text and associated polygons from the layout output for each span of text the user adds as a value for a specific field, as sketched after this list.
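-
-For orientation, a `{file}.labels.json` entry has roughly the following shape. The document name, field name, text, and coordinates below are invented placeholders, and the property names are an approximation for illustration rather than the authoritative schema.
-
-```json
-{
-  "document": "form-sample-1.pdf",
-  "labels": [
-    {
-      "label": "effective_date",
-      "value": [
-        {
-          "page": 1,
-          "text": "03/03/2023",
-          "boundingBoxes": [
-            [0.42, 0.11, 0.52, 0.11, 0.52, 0.13, 0.42, 0.13]
-          ]
-        }
-      ]
-    }
-  ]
-}
-```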
-
-## Video: Custom label tips and pointers
-
-* The following video is the first of two presentations intended to help you build custom models with higher accuracy (The second presentation examines [Best practices for labeling documents](concept-custom-label-tips.md#video-custom-labels-best-practices)).
-
-* Here, we explore how to create a balanced data set and select the right documents to label. This process sets you on the path to higher quality models.</br></br>
-
- > [!VIDEO https://www.microsoft.com/en-us/videoplayer/embed/RWWHru]
-
-## Create a balanced dataset
-
-Before you start labeling, it's a good idea to look at a few different samples of the document to identify which samples you want to use in your labeled dataset. A balanced dataset represents all the typical variations you would expect to see for the document. Creating a balanced dataset results in a model with the highest possible accuracy. A few examples to consider are:
-
-* **Document formats**: If you expect to analyze both digital and scanned documents, add a few examples of each type to the training dataset
-
-* **Variations (template model)**: Consider splitting the dataset into folders and training a model for each variation. Any variations in structure or layout should be split into different models. You can then compose the individual models into a single [composed model](concept-composed-models.md).
-
-* **Variations (Neural models)**: When your dataset has a manageable set of variations, about 15 or fewer, create a single dataset with a few samples of each of the different variations to train a single model. If the number of template variations is larger than 15, you train multiple models and [compose](concept-composed-models.md) them together.
-
-* **Tables**: For documents containing tables with a variable number of rows, ensure that the training dataset also represents documents with different numbers of rows.
-
-* **Multi-page tables**: When tables span multiple pages, label a single table. Add documents to the training dataset with the expected variations represented: documents with the table on a single page only and documents with the table spanning two or more pages with all the rows labeled.
-
-* **Optional fields**: If your dataset contains documents with optional fields, validate that the training dataset has a few documents with the options represented.
-
-## Start by identifying the fields
-
-Take the time to identify each of the fields you plan to label in the dataset. Pay attention to optional fields. Define the fields with the labels that best match the supported types.
-
-Use the following guidelines to define the fields:
-
-* For custom neural models, use semantically relevant names for fields. For example, if the value being extracted is `Effective Date`, name it `effective_date` or `EffectiveDate`, not a generic name like **date1**.
-
-* Ideally, name your fields with Pascal or camel case.
-
-* If a value is part of a visually repeating structure and you only need a single value, label it as a table and extract the required value during post-processing.
-
-* For tabular fields spanning multiple pages, define and label the fields as a single table.
-
-> [!NOTE]
-> Custom neural models share the same labeling format and strategy as custom template models. Currently custom neural models only support a subset of the field types supported by custom template models.
-
-## Model capabilities
-
-Custom neural models currently only support key-value pairs, structured fields (tables), and selection marks.
-
-| Model type | Form fields | Selection marks | Tabular fields | Signature | Region |
-|--|--|--|--|--|--|
-| Custom neural | ✔️Supported | ✔️Supported | ✔️Supported | Unsupported | ✔️Supported<sup>1</sup> |
-| Custom template | ✔️Supported| ✔️Supported | ✔️Supported | ✔️Supported | ✔️Supported |
-
-<sup>1</sup> Region labeling implementation differs between template and neural models. For template models, the training process injects synthetic data at training time if no text is found in the region labeled. With neural models, no synthetic text is injected and the recognized text is used as is.
-
-## Tabular fields
-
-Tabular fields (tables) are supported with custom neural models starting with API version ```2022-06-30-preview```. Models trained with API version 2022-06-30-preview or later accept tabular field labels, and documents analyzed with those models using API version 2022-06-30-preview or later produce tabular fields in the output within the ```documents``` section of the result in the ```analyzeResult``` object.
-
-Tabular fields support **cross page tables** by default. To label a table that spans multiple pages, label each row of the table across the different pages in the single table. As a best practice, ensure that your dataset contains a few samples of the expected variations. For example, include both samples where an entire table is on a single page and samples of a table spanning two or more pages.
-
-Tabular fields are also useful when extracting repeating information within a document that isn't recognized as a table. For example, a repeating section of work experiences in a resume can be labeled and extracted as a tabular field.
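-
-As an illustration, a labeled tabular field surfaces in the analysis output roughly as follows. The model name, field names, and values (a hypothetical work experience table from a resume) are invented, and the real response carries additional properties such as confidence scores, spans, and bounding regions.
-
-```json
-{
-  "analyzeResult": {
-    "documents": [
-      {
-        "docType": "custom:resume-model",
-        "fields": {
-          "WorkExperience": {
-            "type": "array",
-            "valueArray": [
-              {
-                "type": "object",
-                "valueObject": {
-                  "Company": { "type": "string", "valueString": "Contoso", "content": "Contoso" },
-                  "Years": { "type": "string", "valueString": "2019-2021", "content": "2019-2021" }
-                }
-              }
-            ]
-          }
-        }
-      }
-    ]
-  }
-}
-```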
-
-## Labeling guidelines
-
-* **Labeling values is required.** Don't include the surrounding text. For example, when labeling a checkbox, name the field to indicate the checkbox selection, for example ```selectionYes``` and ```selectionNo```, rather than labeling the yes or no text in the document.
-
-* **Don't provide interleaving field values.** The words and/or regions that make up a field value must be either a consecutive sequence in natural reading order, without interleaving with other fields, or in a region that doesn't cover any other fields.
-
-* **Consistent labeling**. If a value appears in multiple contexts within the document, consistently pick the same context across documents to label the value.
-
-* **Visually repeating data**. Tables support visually repeating groups of information, not just explicit tables. Explicit tables are identified in the tables section of the analyzed documents as part of the layout output and don't need to be labeled as tables. Only label a table field if the information is visually repeating and isn't identified as a table as part of the layout response. An example would be the repeating work experience section of a resume.
-
-* **Region labeling (custom template)**. Labeling specific regions allows you to define a value when none exists. If the value is optional, ensure that you leave a few sample documents with the region not labeled. When labeling regions, don't include the surrounding text with the label.
-
-## Next steps
-
-* Train a custom model:
-
- > [!div class="nextstepaction"]
- > [How to train a model](how-to-guides/build-custom-model-v3.md)
-
-* Learn more about custom template models:
-
- > [!div class="nextstepaction"]
- > [Custom template models](concept-custom-template.md )
-
-* Learn more about custom neural models:
-
- > [!div class="nextstepaction"]
- > [Custom neural models](concept-custom-neural.md )
-
-* View the REST API:
-
- > [!div class="nextstepaction"]
- > [Form Recognizer API v3.0](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-v3-0-preview-2/operations/AnalyzeDocument)
applied-ai-services Concept Custom Neural https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/concept-custom-neural.md
- Title: Custom neural document model - Form Recognizer-
-description: Use the custom neural document model to train a model to extract data from structured, semistructured, and unstructured documents.
----- Previously updated : 05/23/2023--
-monikerRange: 'form-recog-3.0.0'
--
-# Custom neural document model
-
-**This article applies to:** ![Form Recognizer v3.0 checkmark](media/yes-icon.png) **Form Recognizer v3.0**.
-
-Custom neural document models, or neural models, are a deep-learned model type that combines layout and language features to accurately extract labeled fields from documents. The base custom neural model is trained on various document types, which makes it suitable for extracting fields from structured, semi-structured, and unstructured documents. The table below lists common document types for each category:
-
-|Documents | Examples |
-||--|
-|structured| surveys, questionnaires|
-|semi-structured | invoices, purchase orders |
-|unstructured | contracts, letters|
-
-Custom neural models share the same labeling format and strategy as [custom template](concept-custom-template.md) models. Currently custom neural models only support a subset of the field types supported by custom template models.
-
-## Model capabilities
-
-Custom neural models currently support key-value pairs, selection marks, and structured fields (tables); future releases will include support for signatures.
-
-| Form fields | Selection marks | Tabular fields | Signature | Region |
-|:--:|:--:|:--:|:--:|:--:|
-| Supported | Supported | Supported | Unsupported | Supported <sup>1</sup> |
-
-<sup>1</sup> Region labels in custom neural models use the results from the Layout API for the specified region. This feature is different from template models where, if no value is present, text is generated at training time.
-
-### Build mode
-
-The build custom model operation has added support for the *template* and *neural* custom models. Previous versions of the REST API and SDKs only supported a single build mode that is now known as the *template* mode.
-
-Neural models support documents that have the same information, but different page structures. Examples of these documents include United States W2 forms, which share the same information, but may vary in appearance across companies. For more information, *see* [Custom model build mode](concept-custom.md#build-mode).
-
-## Language support
-
-Neural models now support added languages in the ```2023-02-28-preview``` API.
-
-| Languages | API version |
-|:--:|:--:|
-| English | `2022-08-31` (GA), `2023-02-28-preview`|
-| German | `2023-02-28-preview`|
-| Italian | `2023-02-28-preview`|
-| French | `2023-02-28-preview`|
-| Spanish | `2023-02-28-preview`|
-| Dutch | `2023-02-28-preview`|
-
-## Tabular fields
-
-With the release of API versions **2022-06-30-preview** and later, custom neural models will support tabular fields (tables):
-
-* Models trained with API version 2022-08-31, or later will accept tabular field labels.
-* Documents analyzed with custom neural models using API version 2022-06-30-preview or later will produce tabular fields aggregated across the tables.
-* The results can be found in the ```analyzeResult``` object's ```documents``` array that is returned following an analysis operation.
-
-Tabular fields support **cross page tables** by default:
-
-* To label a table that spans multiple pages, label each row of the table across the different pages in a single table.
-* As a best practice, ensure that your dataset contains a few samples of the expected variations. For example, include samples where the entire table is on a single page and where tables span two or more pages.
-
-Tabular fields are also useful when extracting repeating information within a document that isn't recognized as a table. For example, a repeating section of work experiences in a resume can be labeled and extracted as a tabular field.
-
-## Supported regions
-
-As of October 18, 2022, Form Recognizer custom neural model training will only be available in the following Azure regions until further notice:
-
-* Australia East
-* Brazil South
-* Canada Central
-* Central India
-* Central US
-* East Asia
-* East US
-* East US2
-* France Central
-* Japan East
-* South Central US
-* Southeast Asia
-* UK South
-* West Europe
-* West US2
-* US Gov Arizona
-* US Gov Virginia
---
-> [!TIP]
-> You can [copy a model](disaster-recovery.md#copy-api-overview) trained in one of the select regions listed to **any other region** and use it accordingly.
->
-> Use the [**REST API**](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-2022-08-31/operations/CopyDocumentModelTo) or [**Form Recognizer Studio**](https://formrecognizer.appliedai.azure.com/studio/custommodel/projects) to copy a model to another region.
-
-## Best practices
-
-Custom neural models differ from custom template models in a few different ways. The custom template model relies on a consistent visual template to extract the labeled data. Custom neural models support structured, semi-structured, and unstructured documents to extract fields. When you're choosing between the two model types, start with a neural model, and test to determine if it supports your functional needs.
-
-### Dealing with variations
-
-Custom neural models can generalize across different formats of a single document type. As a best practice, create a single model for all variations of a document type. Add at least five labeled samples for each of the different variations to the training dataset.
-
-### Field naming
-
-When you label the data, using a field name that's relevant to the value improves the accuracy of the extracted key-value pairs. For example, for a field value containing the supplier ID, consider naming the field "supplier_id". Field names should be in the language of the document.
-
-### Labeling contiguous values
-
-Value tokens/words of one field must be either:
-
-* A consecutive sequence in natural reading order, without interleaving with other fields
-* In a region that doesn't cover any other fields
-
-### Representative data
-
-Values in training cases should be diverse and representative. For example, if a field is named "date", values for this field should be a date. A synthetic value like a random string can affect model performance.
-
-## Current limitations
-
-* The model doesn't recognize values split across page boundaries.
-* Custom neural models are only trained in English. Model performance is lower for documents in other languages.
-* If a dataset labeled for custom template models is used to train a custom neural model, the unsupported field types are ignored.
-* Custom neural models are limited to 20 build operations per month. Open a support request if you need the limit increased. For more information, see [Form Recognizer service quotas and limits](service-limits.md)
-
-## Training a model
-
-Custom neural models are only available in the [v3 API](v3-migration-guide.md).
-
-| Document Type | REST API | SDK | Label and Test Models|
-|--|--|--|--|
-| Custom document | [Form Recognizer 3.0 ](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-2022-08-31/operations/AnalyzeDocument)| [Form Recognizer SDK](quickstarts/get-started-sdks-rest-api.md?view=form-recog-3.0.0&preserve-view=true)| [Form Recognizer Studio](https://formrecognizer.appliedai.azure.com/studio)
-
-The build operation to train a model supports a new ```buildMode``` property. To train a custom neural model, set the ```buildMode``` to ```neural```.
-
-```REST
-https://{endpoint}/formrecognizer/documentModels:build?api-version=2022-08-31
-
-{
- "modelId": "string",
- "description": "string",
- "buildMode": "neural",
- "azureBlobSource":
- {
- "containerUrl": "string",
- "prefix": "string"
- }
-}
-```
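-
-For reference, one way to submit this request from the command line is sketched below. The `FR_ENDPOINT` and `FR_KEY` variables and the `build.json` file are hypothetical placeholders for your resource endpoint, key, and the request body shown above.
-
-```bash
-# Minimal sketch: FR_ENDPOINT is your resource host and FR_KEY is your
-# resource key; build.json holds the JSON body shown above.
-curl -i -X POST \
-  "https://${FR_ENDPOINT}/formrecognizer/documentModels:build?api-version=2022-08-31" \
-  -H "Content-Type: application/json" \
-  -H "Ocp-Apim-Subscription-Key: ${FR_KEY}" \
-  --data @build.json
-# The build runs asynchronously; a 202 response includes an Operation-Location
-# header with a URL you can poll for the build status.
-```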
-
-## Next steps
-
-Learn to create and compose custom models:
-
-> [!div class="nextstepaction"]
-> [**Build a custom model**](how-to-guides/build-a-custom-model.md)
-> [**Compose custom models**](how-to-guides/compose-custom-models.md)
applied-ai-services Concept Custom Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/concept-custom-template.md
- Title: Custom template document model - Form Recognizer-
-description: Use the custom template document model to train a model to extract data from structured or templated forms.
----- Previously updated : 05/23/2023---
-# Custom template document model
---
-Custom template (formerly custom form) is an easy-to-train document model that accurately extracts labeled key-value pairs, selection marks, tables, regions, and signatures from documents. Template models use layout cues to extract values from documents and are suitable to extract fields from highly structured documents with defined visual templates.
--
-Custom template models share the same labeling format and strategy as custom neural models, with support for more field types and languages.
--
-## Model capabilities
-
-Custom template models support key-value pairs, selection marks, tables, signature fields, and selected regions.
-
-| Form fields | Selection marks | Tabular fields (Tables) | Signature | Selected regions |
-|:--:|:--:|:--:|:--:|:--:|
-| Supported| Supported | Supported | Supported| Supported |
--
-## Tabular fields
-
-With the release of API versions **2022-06-30-preview** and later, custom template models will add support for **cross page** tabular fields (tables):
-
-* To label a table that spans multiple pages, label each row of the table across the different pages in a single table.
-* As a best practice, ensure that your dataset contains a few samples of the expected variations. For example, include samples where the entire table is on a single page and where tables span two or more pages if you expect to see those variations in documents.
-
-Tabular fields are also useful when extracting repeating information within a document that isn't recognized as a table. For example, a repeating section of work experiences in a resume can be labeled and extracted as a tabular field.
--
-## Dealing with variations
-
-Template models rely on a defined visual template; changes to the template result in lower accuracy. In those instances, split your training dataset to include at least five samples of each template, and train a model for each of the variations. You can then [compose](concept-composed-models.md) the models into a single endpoint. For subtle variations, like digital PDF documents and images, it's best to include at least five examples of each type in the same training dataset.
-
-## Training a model
--
-Custom template models are generally available with the [v3.0 API](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-2022-08-31/operations/BuildDocumentModel). If you're starting with a new project or have an existing labeled dataset, use the v3 API with Form Recognizer Studio to train a custom template model.
-
-| Model | REST API | SDK | Label and Test Models|
-|--|--|--|--|
-| Custom template | [Form Recognizer 3.0 ](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-2022-08-31/operations/AnalyzeDocument)| [Form Recognizer SDK](quickstarts/get-started-sdks-rest-api.md?view=form-recog-3.0.0&preserve-view=true)| [Form Recognizer Studio](https://formrecognizer.appliedai.azure.com/studio)|
-
-With the v3.0 API, the build operation to train a model supports a new ```buildMode``` property. To train a custom template model, set the ```buildMode``` to ```template```.
-
-```REST
-https://{endpoint}/formrecognizer/documentModels:build?api-version=2022-08-31
-
-{
- "modelId": "string",
- "description": "string",
- "buildMode": "template",
- "azureBlobSource":
- {
- "containerUrl": "string",
- "prefix": "string"
- }
-}
-```
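-
-Template and neural builds both run asynchronously. The following sketch (where `OPERATION_URL` and `FR_KEY` are hypothetical placeholders) shows one way to check progress by polling the URL returned in the `Operation-Location` header of the build response.
-
-```bash
-# Minimal sketch: OPERATION_URL is the URL from the Operation-Location header
-# of the build response; FR_KEY is your resource key.
-curl -s "${OPERATION_URL}" -H "Ocp-Apim-Subscription-Key: ${FR_KEY}"
-# The JSON response reports the operation status (for example, "running" or
-# "succeeded"); once it succeeds, the model is available under its modelId.
-```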
---
-Custom (template) models are generally available with the [v2.1 API](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-v2-1/operations/AnalyzeWithCustomForm).
-
-| Model | REST API | SDK | Label and Test Models|
-|--|--|--|--|
-| Custom model (template) | [Form Recognizer 2.1 ](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-v2-1/operations/AnalyzeWithCustomForm)| [Form Recognizer SDK](quickstarts/get-started-v2-1-sdk-rest-api.md?pivots=programming-language-python)| [Form Recognizer Sample labeling tool](https://fott-2-1.azurewebsites.net/)|
--
-## Next steps
-
-Learn to create and compose custom models:
--
-> [!div class="nextstepaction"]
-> [**Build a custom model**](how-to-guides/build-a-custom-model.md)
-> [**Compose custom models**](how-to-guides/compose-custom-models.md)
---
-> [!div class="nextstepaction"]
-> [**Build a custom model**](concept-custom.md#build-a-custom-model)
-> [**Compose custom models**](concept-composed-models.md#development-options)
-
applied-ai-services Concept Custom https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/concept-custom.md
- Title: Custom document models - Form Recognizer-
-description: Label and train customized models for your documents and compose multiple models into a single model identifier.
----- Previously updated : 05/23/2023
-monikerRange: '>=form-recog-2.1.0'
--
-# Azure Form Recognizer Custom document models
---
-Form Recognizer uses advanced machine learning technology to identify documents, detect and extract information from forms and documents, and return the extracted data in a structured JSON output. With Form Recognizer, you can use prebuilt or pretrained document analysis models, or your own trained standalone custom models.
-
-Custom models now include [custom classification models](./concept-custom-classifier.md) for scenarios where you need to identify the document type prior to invoking the extraction model. Classifier models are available starting with the ```2023-02-28-preview``` API. A classification model can be paired with a custom extraction model to analyze and extract fields from forms and documents specific to your business to create a document processing solution. Standalone custom extraction models can be combined to create [composed models](concept-composed-models.md).
--
-## Custom document model types
-
-Custom document models can be one of two types: [**custom template**](concept-custom-template.md) (or custom form) models and [**custom neural**](concept-custom-neural.md) (or custom document) models. The labeling and training process for both models is identical, but the models differ as follows:
-
-### Custom extraction models
-
-To create a custom extraction model, label a dataset of documents with the values you want extracted and train the model on the labeled dataset. You only need five examples of the same form or document type to get started.
-
-### Custom template model
-
-The custom template or custom form model relies on a consistent visual template to extract the labeled data. Variances in the visual structure of your documents affect the accuracy of your model. Structured forms such as questionnaires or applications are examples of consistent visual templates.
-
-Your training set consists of structured documents where the formatting and layout are static and constant from one document instance to the next. Custom template models support key-value pairs, selection marks, tables, signature fields, and regions. Template models can be trained on documents in any of the [supported languages](language-support.md). For more information, *see* [custom template models](concept-custom-template.md).
-
-> [!TIP]
->
->To confirm that your training documents present a consistent visual template, remove all the user-entered data from each form in the set. If the blank forms are identical in appearance, they represent a consistent visual template.
->
-> For more information, *see* [Interpret and improve accuracy and confidence for custom models](concept-accuracy-confidence.md).
-
-### Custom neural model
-
-The custom neural (custom document) model uses deep learning models and a base model trained on a large collection of documents. This model is then fine-tuned or adapted to your data when you train the model with a labeled dataset. Custom neural models support structured, semi-structured, and unstructured documents to extract fields. Custom neural models currently support English-language documents. When you're choosing between the two model types, start with a neural model to determine if it meets your functional needs. See [neural models](concept-custom-neural.md) to learn more about custom document models.
-
-### Build mode
-
-The build custom model operation has added support for the *template* and *neural* custom models. Previous versions of the REST API and SDKs only supported a single build mode that is now known as the *template* mode.
-
-* Template models only accept documents that have the same basic page structure (a uniform visual appearance) or the same relative positioning of elements within the document.
-
-* Neural models support documents that have the same information, but different page structures. Examples of these documents include United States W2 forms, which share the same information, but may vary in appearance across companies. Neural models currently only support English text.
-
-This table provides links to the build mode programming language SDK references and code samples on GitHub:
-
-|Programming language | SDK reference | Code sample |
-||||
-| C#/.NET | [DocumentBuildMode Struct](/dotnet/api/azure.ai.formrecognizer.documentanalysis.documentbuildmode?view=azure-dotnet&preserve-view=true#properties) | [Sample_BuildCustomModelAsync.cs](https://github.com/Azure/azure-sdk-for-net/blob/main/sdk/formrecognizer/Azure.AI.FormRecognizer/tests/samples/Sample_BuildCustomModelAsync.cs)
-|Java| DocumentBuildMode Class | [BuildModel.java](https://github.com/Azure/azure-sdk-for-java/blob/main/sdk/formrecognizer/azure-ai-formrecognizer/src/samples/java/com/azure/ai/formrecognizer/administration/BuildDocumentModel.java)|
-|JavaScript | [DocumentBuildMode type](/javascript/api/@azure/ai-form-recognizer/documentbuildmode?view=azure-node-latest&preserve-view=true)| [buildModel.js](https://github.com/Azure/azure-sdk-for-js/blob/main/sdk/formrecognizer/ai-form-recognizer/samples/v4-beta/javascript/buildModel.js)|
-|Python | DocumentBuildMode Enum| [sample_build_model.py](https://github.com/Azure/azure-sdk-for-python/blob/azure-ai-formrecognizer_3.2.0b3/sdk/formrecognizer/azure-ai-formrecognizer/samples/v3.2-beta/sample_build_model.py)|
-
-## Compare model features
-
-The following table compares custom template and custom neural features:
-
-|Feature|Custom template (form) | Custom neural (document) |
-||||
-|Document structure|Template, form, and structured | Structured, semi-structured, and unstructured|
-|Training time | 1 to 5 minutes | 20 minutes to 1 hour |
-|Data extraction | Key-value pairs, tables, selection marks, coordinates, and signatures | Key-value pairs, selection marks and tables|
-|Document variations | Requires a model per each variation | Uses a single model for all variations |
-|Language support | Multiple [language support](language-support.md#read-layout-and-custom-form-template-model) | English, with preview support for Spanish, French, German, Italian and Dutch [language support](language-support.md#custom-neural-model) |
-
-### Custom classification model
-
- Document classification is a new scenario supported by Form Recognizer with the ```2023-02-28-preview``` API. The document classifier API supports classification and splitting scenarios. Train a classification model to identify the different types of documents your application supports. The input file for the classification model can contain multiple documents, and it classifies each document within an associated page range. See [custom classification](concept-custom-classifier.md) models to learn more.
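-
-As an illustrative sketch of pairing the two model types (the classifier ID, file URL, and exact preview request shape here are assumptions rather than a verbatim sample), the mixed input file is first sent to the classifier's analyze endpoint:
-
-```REST
-https://{endpoint}/formrecognizer/documentClassifiers/{classifierId}:analyze?api-version=2023-02-28-preview
-
-{
-  "urlSource": "https://example.com/mixed-documents.pdf"
-}
-```
-
-The classification response identifies each document's type and page range; your application can then send those pages to the matching custom extraction model's `documentModels/{modelId}:analyze` endpoint.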
-
-## Custom model tools
-
-Form Recognizer v3.0 supports the following tools:
-
-| Feature | Resources | Model ID|
-|||:|
-|Custom model| <ul><li>[Form Recognizer Studio](https://formrecognizer.appliedai.azure.com/studio/customform/projects)</li><li>[REST API](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-2022-08-31/operations/AnalyzeDocument)</li><li>[C# SDK](quickstarts/get-started-sdks-rest-api.md?view=form-recog-3.0.0&preserve-view=true)</li><li>[Python SDK](quickstarts/get-started-sdks-rest-api.md?view=form-recog-3.0.0&preserve-view=true)</li></ul>|***custom-model-id***|
---
-Form Recognizer v2.1 supports the following tools:
-
-> [!NOTE]
-> Custom model types [custom neural](concept-custom-neural.md) and [custom template](concept-custom-template.md) are only available with Form Recognizer version v3.0.
-
-| Feature | Resources |
-|||
-|Custom model| <ul><li>[Form Recognizer labeling tool](https://fott-2-1.azurewebsites.net)</li><li>[REST API](./how-to-guides/use-sdk-rest-api.md?pivots=programming-language-rest-api&preserve-view=true&tabs=windows&view=form-recog-2.1.0#analyze-forms-with-a-custom-model)</li><li>[Client library SDK](/azure/applied-ai-services/form-recognizer/how-to-guides/v2-1-sdk-rest-api)</li><li>[Form Recognizer Docker container](containers/form-recognizer-container-install-run.md?tabs=custom#run-the-container-with-the-docker-compose-up-command)</li></ul>|
--
-## Build a custom model
-
-### [Custom extraction](#tab/extraction)
-
-Extract data from your specific or unique documents using custom models. You need the following resources:
-
-* An Azure subscription. You can [create one for free](https://azure.microsoft.com/free/cognitive-services/).
-* A [Form Recognizer instance](https://portal.azure.com/#create/Microsoft.CognitiveServicesFormRecognizer) in the Azure portal. You can use the free pricing tier (`F0`) to try the service. After your resource deploys, select **Go to resource** to get your key and endpoint.
-
- :::image type="content" source="media/containers/keys-and-endpoint.png" alt-text="Screenshot that shows the keys and endpoint location in the Azure portal.":::
--
-## Sample Labeling tool
-
->[!TIP]
->
-> * For an enhanced experience and advanced model quality, try the [Form Recognizer v3.0 Studio ](https://formrecognizer.appliedai.azure.com/studio).
-> * The v3.0 Studio supports any model trained with v2.1 labeled data.
-> * You can refer to the API migration guide for detailed information about migrating from v2.1 to v3.0.
-> * *See* our [**REST API**](quickstarts/get-started-sdks-rest-api.md?view=form-recog-3.0.0&preserve-view=true) or [**C#**](quickstarts/get-started-sdks-rest-api.md?view=form-recog-3.0.0&preserve-view=true), [**Java**](quickstarts/get-started-sdks-rest-api.md?view=form-recog-3.0.0&preserve-view=true), [**JavaScript**](quickstarts/get-started-sdks-rest-api.md?view=form-recog-3.0.0&preserve-view=true), or [Python](quickstarts/get-started-sdks-rest-api.md?view=form-recog-3.0.0&preserve-view=true) SDK quickstarts to get started with the v3.0 version.
-
-* The Form Recognizer Sample Labeling tool is an open source tool that enables you to test the latest features of Azure Form Recognizer and Optical Character Recognition (OCR) services.
-
-* Try the [**Sample Labeling tool quickstart**](quickstarts/try-sample-label-tool.md#train-a-custom-model) to get started building and using a custom model.
---
-## Form Recognizer Studio
-
-> [!NOTE]
-> Form Recognizer Studio is available with the v3.0 API.
-
-1. On the **Form Recognizer Studio** home page, select **Custom extraction models**.
-
-1. Under **My Projects**, select **Create a project**.
-
-1. Complete the project details fields.
-
-1. Configure the service resource by adding your **Storage account** and **Blob container** to **Connect your training data source**.
-
-1. Review and create your project.
-
-1. Add your sample documents to label, build and test your custom model.
-
- > [!div class="nextstepaction"]
- > [Try Form Recognizer Studio](https://formrecognizer.appliedai.azure.com/studio/customform/projects)
-
-For a detailed walkthrough to create your first custom extraction model, see [how to create a custom extraction model](how-to-guides/build-a-custom-model.md)
-
-### [Custom classification](#tab/classification)
-
-Extract data from your specific or unique documents using custom models. You need the following resources:
-
-* An Azure subscription. You can [create one for free](https://azure.microsoft.com/free/cognitive-services/).
-* A [Form Recognizer instance](https://portal.azure.com/#create/Microsoft.CognitiveServicesFormRecognizer) in the Azure portal. You can use the free pricing tier (`F0`) to try the service. After your resource deploys, select **Go to resource** to get your key and endpoint.
-
- :::image type="content" source="media/containers/keys-and-endpoint.png" alt-text="Screenshot that shows the keys and endpoint location in the Azure portal.":::
-
-## Form Recognizer Studio
-
-> [!NOTE]
-> Form Recognizer Studio is available with the v3.0 API.
-
-1. On the **Form Recognizer Studio** home page, select **Custom classification models**.
-
-1. Under **My Projects**, select **Create a project**.
-
-1. Complete the project details fields.
-
-1. Configure the service resource by adding your **Storage account** and **Blob container** to **Connect your training data source**.
-
-1. Review and create your project.
-
-1. Label your documents to build and test your custom classification model.
-
- > [!div class="nextstepaction"]
- > [Try Form Recognizer Studio](https://formrecognizer.appliedai.azure.com/studio/document-classifier/projects)
-
-For a detailed walkthrough to create your first custom classification model, see [how to create a custom classification model](how-to-guides/build-a-custom-classifier.md)
---
-## Custom model extraction summary
-
-This table compares the supported data extraction areas:
-
-|Model| Form fields | Selection marks | Structured fields (Tables) | Signature | Region labeling |
-|--|:--:|:--:|:--:|:--:|:--:|
-|Custom template| ✔ | ✔ | ✔ | ✔ | ✔ |
-|Custom neural| ✔ | ✔ | ✔ | **n/a** | * |
-
-**Table symbols**:
-✔ - supported;
-n/a - currently unavailable;
-`*` - behaves differently. With template models, synthetic data is generated at training time. With neural models, existing text recognized in the region is selected.
-
-> [!TIP]
-> When choosing between the two model types, start with a custom neural model if it meets your functional needs. See [custom neural](concept-custom-neural.md ) to learn more about custom neural models.
-
-## Custom model development options
-
-The following table describes the features available with the associated tools and SDKs. As a best practice, ensure that you use the compatible tools listed here.
-
-| Document type | REST API | SDK | Label and Test Models|
-|--|--|--|--|
-| Custom form 2.1 | [Form Recognizer 2.1 GA API](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-v2-1/operations/AnalyzeWithCustomForm) | [Form Recognizer SDK](quickstarts/get-started-v2-1-sdk-rest-api.md?pivots=programming-language-python)| [Sample labeling tool](https://fott-2-1.azurewebsites.net/)|
-| Custom template 3.0 | [Form Recognizer 3.0](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-2022-08-31/operations/AnalyzeDocument)| [Form Recognizer SDK](quickstarts/get-started-sdks-rest-api.md?view=form-recog-3.0.0&preserve-view=true)| [Form Recognizer Studio](https://formrecognizer.appliedai.azure.com/studio)|
-| Custom neural | [Form Recognizer 3.0](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-2022-08-31/operations/AnalyzeDocument)| [Form Recognizer SDK](quickstarts/get-started-sdks-rest-api.md?view=form-recog-3.0.0&preserve-view=true)| [Form Recognizer Studio](https://formrecognizer.appliedai.azure.com/studio)
-
-> [!NOTE]
-> Custom template models trained with the 3.0 API will have a few improvements over the 2.1 API stemming from improvements to the OCR engine. Datasets used to train a custom template model using the 2.1 API can still be used to train a new model using the 3.0 API.
-
-* For best results, provide one clear photo or high-quality scan per document.
-* Supported file formats are JPEG/JPG, PNG, BMP, TIFF, and PDF (text-embedded or scanned). Text-embedded PDFs are best to eliminate the possibility of error in character extraction and location.
-* For PDF and TIFF files, up to 2,000 pages can be processed. With a free tier subscription, only the first two pages are processed.
-* The file size must be less than 500 MB for paid (S0) tier and 4 MB for free (F0) tier.
-* Image dimensions must be between 50 x 50 pixels and 10,000 x 10,000 pixels.
-* PDF dimensions are up to 17 x 17 inches, corresponding to Legal or A3 paper size, or smaller.
-* The total size of the training data is 500 pages or less.
-* If your PDFs are password-locked, you must remove the lock before submission.
-
- > [!TIP]
- > Training data:
- >
- >* If possible, use text-based PDF documents instead of image-based documents. Scanned PDFs are handled as images.
- > * Please supply only a single instance of the form per document.
- > * For filled-in forms, use examples that have all their fields filled in.
- > * Use forms with different values in each field.
- >* If your form images are of lower quality, use a larger dataset. For example, use 10 to 15 images.
-
-## Supported languages and locales
-
->[!NOTE]
- > It's not necessary to specify a locale. This is an optional parameter. The Form Recognizer deep-learning technology will auto-detect the language of the text in your image.
-
- The Form Recognizer v3.0 version introduces more language support for custom models. For a list of supported handwritten and printed text, see [Language support](language-support.md).
-
- Form Recognizer v3.0 introduces several new features and capabilities:
-
-* **Custom model API**: This version supports signature detection for custom forms. When you train custom models, you can specify certain fields as signatures. When a document is analyzed with your custom model, it indicates whether a signature was detected or not.
-* [Form Recognizer v3.0 migration guide](v3-migration-guide.md): This guide shows you how to use the v3.0 version in your applications and workflows.
-* [REST API](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-2022-08-31/operations/AnalyzeDocument): This API shows you more about the v3.0 version and new capabilities.
-
-### Try signature detection
-
-1. Build your training dataset.
-
-1. Go to [Form Recognizer Studio](https://formrecognizer.appliedai.azure.com/studio). Under **Custom models**, select **Custom form**.
-
- :::image type="content" source="media/label-tool/select-custom-form.png" alt-text="Screenshot that shows selecting the Form Recognizer Studio Custom form page.":::
-
-1. Follow the workflow to create a new project:
-
- 1. Follow the **Custom model** input requirements.
-
- 1. Label your documents. For signature fields, use **Region** labeling for better accuracy.
-
- :::image type="content" source="media/label-tool/signature-label-region-too.png" alt-text="Screenshot that shows the Label signature field.":::
-
-After your training set is labeled, you can train your custom model and use it to analyze documents. The signature fields specify whether a signature was detected or not.
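-
-For orientation, a signature field in the analysis result reports the signed state roughly as shown below. The field name is invented, and the property names are an assumption for illustration rather than a guaranteed response contract.
-
-```json
-{
-  "fields": {
-    "ApplicantSignature": {
-      "type": "signature",
-      "valueSignature": "signed",
-      "confidence": 0.9
-    }
-  }
-}
-```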
----
-## Next steps
--
-* Try processing your own forms and documents with the [Form Recognizer Studio](https://formrecognizer.appliedai.azure.com/studio)
-
-* Complete a [Form Recognizer quickstart](quickstarts/get-started-sdks-rest-api.md?view=form-recog-3.0.0&preserve-view=true) and get started creating a document processing app in the development language of your choice.
---
-* Try processing your own forms and documents with the [Form Recognizer Sample Labeling tool](https://fott-2-1.azurewebsites.net/)
-
-* Complete a [Form Recognizer quickstart](quickstarts/get-started-sdks-rest-api.md?view=form-recog-2.1.0&preserve-view=true) and get started creating a document processing app in the development language of your choice.
-
applied-ai-services Concept Form Recognizer Studio https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/concept-form-recognizer-studio.md
- Title: "Form Recognizer Studio"-
-description: "Concept: Form and document processing, data extraction, and analysis using Form Recognizer Studio "
----- Previously updated : 05/23/2023-
-monikerRange: 'form-recog-3.0.0'
--
-# Form Recognizer Studio
-
-**This article applies to:** ![Form Recognizer v3.0 checkmark](media/yes-icon.png) **Form Recognizer v3.0**.
-
-[Form Recognizer Studio](https://formrecognizer.appliedai.azure.com/) is an online tool for visually exploring, understanding, and integrating features from the Form Recognizer service into your applications. Use the [Form Recognizer Studio quickstart](quickstarts/try-form-recognizer-studio.md) to get started analyzing documents with pretrained models. Build custom template models and reference the models in your applications using the [Python SDK v3.0](quickstarts/get-started-sdks-rest-api.md?view=form-recog-3.0.0&preserve-view=true) and other quickstarts.
-
-The following image shows the Invoice prebuilt model feature at work.
--
-## Form Recognizer Studio features
-
-The following Form Recognizer service features are available in the Studio.
-
-* **Read**: Try out Form Recognizer's Read feature to extract text lines, words, detected languages, and handwritten style if detected. Start with the [Studio Read feature](https://formrecognizer.appliedai.azure.com/studio/read). Explore with sample documents and your documents. Use the interactive visualization and JSON output to understand how the feature works. See the [Read overview](concept-read.md) to learn more and get started with the [Python SDK quickstart for Layout](quickstarts/get-started-sdks-rest-api.md?view=form-recog-3.0.0&preserve-view=true).
-
-* **Layout**: Try out Form Recognizer's Layout feature to extract text, tables, selection marks, and structure information. Start with the [Studio Layout feature](https://formrecognizer.appliedai.azure.com/studio/layout). Explore with sample documents and your documents. Use the interactive visualization and JSON output to understand how the feature works. See the [Layout overview](concept-layout.md) to learn more and get started with the [Python SDK quickstart for Layout](quickstarts/get-started-sdks-rest-api.md?view=form-recog-3.0.0&preserve-view=true#layout-model).
-
-* **General Documents**: Try out Form Recognizer's General Documents feature to extract key-value pairs. Start with the [Studio General Documents feature](https://formrecognizer.appliedai.azure.com/studio/document). Explore with sample documents and your documents. Use the interactive visualization and JSON output to understand how the feature works. See the [General Documents overview](concept-general-document.md) to learn more and get started with the [Python SDK quickstart for Layout](quickstarts/get-started-sdks-rest-api.md?view=form-recog-3.0.0&preserve-view=true#general-document-model).
-
-* **Prebuilt models**: Form Recognizer's prebuilt models enable you to add intelligent document processing to your apps and flows without having to train and build your own models. As an example, start with the [Studio Invoice feature](https://formrecognizer.appliedai.azure.com/studio/prebuilt?formType=invoice). Explore with sample documents and your documents. Use the interactive visualization, extracted fields list, and JSON output to understand how the feature works. See the [Models overview](concept-model-overview.md) to learn more and get started with the [Python SDK quickstart for Prebuilt Invoice](quickstarts/get-started-sdks-rest-api.md?view=form-recog-3.0.0&preserve-view=true#prebuilt-model).
-
-* **Custom models**: Form Recognizer's custom models enable you to extract fields and values from models trained with your data, tailored to your forms and documents. Create standalone custom models or combine two or more custom models to create a composed model to extract data from multiple form types. Start with the [Studio Custom models feature](https://formrecognizer.appliedai.azure.com/studio/custommodel/projects). Use the online wizard, labeling interface, training step, and visualizations to understand how the feature works. Test the custom model with your sample documents and iterate to improve the model. See the [Custom models overview](concept-custom.md) to learn more and use the [Form Recognizer v3.0 migration guide](v3-migration-guide.md) to start integrating the new models with your applications.
-
-## Next steps
-
-* Follow our [**Form Recognizer v3.0 migration guide**](v3-migration-guide.md) to learn the differences from the previous version of the REST API.
-* Explore our [**v3.0 SDK quickstarts**](quickstarts/get-started-sdks-rest-api.md?view=form-recog-3.0.0&preserve-view=true) to try the v3.0 features in your applications using the new SDKs.
-* Refer to our [**v3.0 REST API quickstarts**](quickstarts/get-started-sdks-rest-api.md?view=form-recog-3.0.0&preserve-view=true) to try the v3.0 features using the new REST API.
-
-> [!div class="nextstepaction"]
-> [Form Recognizer Studio quickstart](quickstarts/try-form-recognizer-studio.md)
applied-ai-services Concept General Document https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/concept-general-document.md
- Title: General key-value extraction - Form Recognizer-
-description: Extract key-value pairs, tables, selection marks, and text from your documents with Form Recognizer
----- Previously updated : 05/23/2023-
-monikerRange: 'form-recog-3.0.0'
-
-<!-- markdownlint-disable MD033 -->
-
-# Form Recognizer general document model
-
-**This article applies to:** ![Form Recognizer v3.0 checkmark](media/yes-icon.png) **Form Recognizer v3.0**.
-
-The General document v3.0 model combines powerful Optical Character Recognition (OCR) capabilities with deep learning models to extract key-value pairs, tables, and selection marks from documents. General document is only available with the v3.0 API. For more information on using the v3.0 API, see our [migration guide](v3-migration-guide.md).
-
-> [!NOTE]
-> The ```2023-02-28-preview``` version of the general document model adds support for **normalized keys**.
-
-## General document features
-
-* The general document model is a pretrained model; it doesn't require labels or training.
-
-* A single API extracts key-value pairs, selection marks, text, tables, and structure from documents.
-
-* The general document model supports structured, semi-structured, and unstructured documents.
-
-* Key names are spans of text within the document that are associated with a value. With the ```2023-02-28-preview``` API version, key names are normalized where applicable.
-
-* Selection marks are identified as fields with a value of ```:selected:``` or ```:unselected:```
-
-***Sample document processed in the Form Recognizer Studio***
--
-## Key-value pair extraction
-
-The general document API supports most form types; it analyzes your documents and extracts keys and associated values. It's ideal for extracting common key-value pairs from documents. You can use the general document model as an alternative to training a custom model without labels.
-
-### Key normalization (common name)
-
-When the service analyzes documents with variations in key names like ```Social Security Number```, ```Social Security Nbr```, ```SSN```, the output normalizes the key variations to a single common name, ```SocialSecurityNumber```. This normalization simplifies downstream processing for documents where you no longer need to account for variations in the key name.
--
-## Development options
-
-Form Recognizer v3.0 supports the following tools:
-
-| Feature | Resources | Model ID
-|-|-||
-| **General document model**|<ul ><li>[**Form Recognizer Studio**](https://formrecognizer.appliedai.azure.com)</li><li>[**REST API**](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-2022-08-31/operations/AnalyzeDocument)</li><li>[**C# SDK**](quickstarts/get-started-sdks-rest-api.md?view=form-recog-3.0.0&preserve-view=true)</li><li>[**Python SDK**](quickstarts/get-started-sdks-rest-api.md?view=form-recog-3.0.0&preserve-view=true)</li><li>[**Java SDK**](quickstarts/get-started-sdks-rest-api.md?view=form-recog-3.0.0&preserve-view=true)</li><li>[**JavaScript SDK**](quickstarts/get-started-sdks-rest-api.md?view=form-recog-3.0.0&preserve-view=true)</li></ul>|**prebuilt-document**|
-
-### Try Form Recognizer
-
-Try extracting data from forms and documents using the Form Recognizer Studio.
-
-You need the following resources:
-
-* An Azure subscription - you can [create one for free](https://azure.microsoft.com/free/cognitive-services/)
-
-* A [Form Recognizer instance](https://portal.azure.com/#create/Microsoft.CognitiveServicesFormRecognizer) in the Azure portal. You can use the free pricing tier (`F0`) to try the service. After your resource deploys, select **Go to resource** to get your key and endpoint.
-
- :::image type="content" source="media/containers/keys-and-endpoint.png" alt-text="Screenshot: keys and endpoint location in the Azure portal.":::
-
-#### Form Recognizer Studio
-
-> [!NOTE]
-> Form Recognizer studio and the general document model are available with the v3.0 API.
-
-1. On the Form Recognizer Studio home page, select **General documents**
-
-1. You can analyze the sample document or select the **+ Add** button to upload your own sample.
-
-1. Select the **Analyze** button:
-
- :::image type="content" source="media/studio/general-document-analyze-1.png" alt-text="Screenshot: analyze general document menu.":::
-
- > [!div class="nextstepaction"]
- > [Try Form Recognizer Studio](https://formrecognizer.appliedai.azure.com/studio/prebuilt?formType=document)
-
-## Key-value pairs
-
-Key-value pairs are specific spans within the document that identify a label or key and its associated response or value. In a structured form, these pairs could be the label and the value the user entered for that field. In an unstructured document, they could be the date a contract was executed on based on the text in a paragraph. The AI model is trained to extract identifiable keys and values based on a wide variety of document types, formats, and structures.
-
-Keys can also exist in isolation when the model detects that a key exists, with no associated value or when processing optional fields. For example, a middle name field may be left blank on a form in some instances. Key-value pairs are spans of text contained in the document. For documents where the same value is described in different ways, for example, customer/user, the associated key is either customer or user (based on context).
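-
-For illustration, extracted key-value pairs appear in the analysis output along the following lines. The keys, values, and confidence scores are invented, and the real response also includes spans and bounding regions; the second pair shows a key detected without an associated value.
-
-```json
-{
-  "analyzeResult": {
-    "keyValuePairs": [
-      {
-        "key": { "content": "Email address:" },
-        "value": { "content": "user@contoso.com" },
-        "confidence": 0.96
-      },
-      {
-        "key": { "content": "Middle name:" },
-        "confidence": 0.82
-      }
-    ]
-  }
-}
-```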
-
-## Data extraction
-
-| **Model** | **Text extraction** |**Key-Value pairs** |**Selection Marks** | **Tables** | **Common Names** |
-| | :: |::| :: | :: | :: |
-|General document | ✓ | ✓ | ✓ | ✓ | ✓* |
-
-✓* - Only available in the 2023-02-28-preview API version.
-
-## Input requirements
--
-## Supported languages and locales
-
->[!NOTE]
- > It's not necessary to specify a locale. This is an optional parameter. The Form Recognizer deep-learning technology will auto-detect the language of the text in your image.
-
-| Model | Language - Locale code | Default |
-|--|:-|:|
-|General document| <ul><li>English (United States) - en-US</li></ul>| English (United States) - en-US|
-
-## Considerations
-
-* Keys are spans of text extracted from the document. For semi-structured documents, keys may need to be mapped to an existing dictionary of keys.
-
-* Expect to see key-value pairs with a key but no value, for example, if a user chose not to provide an email address on the form.
-
-## Next steps
-
-* Follow our [**Form Recognizer v3.0 migration guide**](v3-migration-guide.md) to learn how to use the v3.0 version in your applications and workflows.
-
-* Explore our [**REST API**](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-2022-08-31/operations/AnalyzeDocument) to learn more about the v3.0 version and new capabilities.
-
-> [!div class="nextstepaction"]
-> [Try the Form Recognizer Studio](https://formrecognizer.appliedai.azure.com/studio)
applied-ai-services Concept Id Document https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/concept-id-document.md
- Title: Identity document (ID) processing ΓÇô Form Recognizer-
-description: Automate identity document (ID) processing of driver licenses, passports, and more with Form Recognizer.
----- Previously updated : 05/23/2023----
-<!-- markdownlint-disable MD033 -->
-
-# Azure Form Recognizer identity document model
----
-Form Recognizer Identity document (ID) model combines Optical Character Recognition (OCR) with deep learning models to analyze and extract key information from identity documents. The API analyzes identity documents (including the following) and returns a structured JSON data representation:
-
-* US Drivers Licenses (all 50 states and District of Columbia)
-* International passport biographical pages
-* US state IDs
-* Social Security cards
-* Permanent resident cards
---
-Azure Form Recognizer can analyze and extract information from government-issued identification documents (IDs) using its prebuilt IDs model. It combines our powerful [Optical Character Recognition (OCR)](../../cognitive-services/computer-vision/overview-ocr.md) capabilities with ID recognition capabilities to extract key information from Worldwide Passports and U.S. Driver's Licenses (all 50 states and D.C.). The IDs API extracts key information from these identity documents, such as first name, last name, date of birth, document number, and more. This API is available in the Form Recognizer v2.1 as a cloud service.
--
-## Identity document processing
-
-Identity document processing involves extracting data from identity documents either manually or by using OCR-based technology. ID document processing is an important step in any business process that requires some proof of identity. Examples include customer verification in banks and other financial institutions, mortgage applications, medical visits, claim processing, the hospitality industry, and more. Individuals provide some proof of their identity via driver licenses, passports, and other similar documents so that the business can efficiently verify them before providing services and benefits.
--
-***Sample U.S. Driver's License processed with [Form Recognizer Studio](https://formrecognizer.appliedai.azure.com/studio/prebuilt?formType=idDocument)***
----
-## Data extraction
-
-The prebuilt IDs service extracts the key values from worldwide passports and U.S. Driver's Licenses and returns them in an organized structured JSON response.
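-
-As a minimal sketch of requesting that JSON with the v3.0 API (the `FR_ENDPOINT` and `FR_KEY` variables and the image URL are hypothetical placeholders), submit an analyze request against the `prebuilt-idDocument` model and poll the URL returned in the `Operation-Location` header for the result:
-
-```bash
-# Minimal sketch: analyze a publicly accessible ID image with the prebuilt
-# ID model. FR_ENDPOINT is your resource host and FR_KEY is your resource key.
-curl -i -X POST \
-  "https://${FR_ENDPOINT}/formrecognizer/documentModels/prebuilt-idDocument:analyze?api-version=2022-08-31" \
-  -H "Content-Type: application/json" \
-  -H "Ocp-Apim-Subscription-Key: ${FR_KEY}" \
-  -d '{"urlSource": "https://example.com/sample-drivers-license.png"}'
-# Poll the Operation-Location URL from the response to retrieve the structured
-# JSON fields (first name, last name, date of birth, document number, and so on).
-```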
-
-### **Driver's license example**
-
-![Sample Driver's License](./media/id-example-drivers-license.JPG)
-
-### **Passport example**
-
-![Sample Passport](./media/id-example-passport-result.JPG)
--
-## Development options
-
-Form Recognizer v3.0 supports the following tools:
-
-| Feature | Resources | Model ID |
-|-|-|--|
-|**ID document model**|<ul><li> [**Form Recognizer Studio**](https://formrecognizer.appliedai.azure.com)</li><li>[**REST API**](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-2022-08-31/operations/AnalyzeDocument)</li><li>[**C# SDK**](quickstarts/get-started-sdks-rest-api.md?view=form-recog-3.0.0&preserve-view=true)</li><li>[**Python SDK**](quickstarts/get-started-sdks-rest-api.md?view=form-recog-3.0.0&preserve-view=true)</li><li>[**Java SDK**](quickstarts/get-started-sdks-rest-api.md?view=form-recog-3.0.0&preserve-view=true)</li><li>[**JavaScript SDK**](quickstarts/get-started-sdks-rest-api.md?view=form-recog-3.0.0&preserve-view=true)</li></ul>|**prebuilt-idDocument**|
--
-Form Recognizer v2.1 supports the following tools:
-
-| Feature | Resources |
-|-|-|
-|**ID document model**| <ul><li>[**Form Recognizer labeling tool**](https://fott-2-1.azurewebsites.net/prebuilts-analyze)</li><li>[**REST API**](./how-to-guides/use-sdk-rest-api.md?pivots=programming-language-rest-api&preserve-view=true&tabs=windows&view=form-recog-2.1.0#analyze-identity-id-documents)</li><li>[**Client-library SDK**](/azure/applied-ai-services/form-recognizer/how-to-guides/v2-1-sdk-rest-api)</li><li>[**Form Recognizer Docker container**](containers/form-recognizer-container-install-run.md?tabs=id-document#run-the-container-with-the-docker-compose-up-command)</li></ul>|
-
-## Input requirements
-----
-* Supported file formats: JPEG, PNG, PDF, and TIFF
-
-* For PDF and TIFF files, up to 2,000 pages are processed. For free-tier subscribers, only the first two pages are processed.
-
-* The file size must be less than 50 MB and dimensions at least 50 x 50 pixels and at most 10,000 x 10,000 pixels.
--
-### Try Form Recognizer
-
-Extract data, including name, birth date, and expiration date, from ID documents. You need the following resources:
-
-* An Azure subscription; you can [create one for free](https://azure.microsoft.com/free/cognitive-services/)
-
-* A [Form Recognizer instance](https://portal.azure.com/#create/Microsoft.CognitiveServicesFormRecognizer) in the Azure portal. You can use the free pricing tier (`F0`) to try the service. After your resource deploys, select **Go to resource** to get your key and endpoint.
-
- :::image type="content" source="media/containers/keys-and-endpoint.png" alt-text="Screenshot: keys and endpoint location in the Azure portal.":::
--
-## Form Recognizer Studio
-
-> [!NOTE]
-> Form Recognizer Studio is available with the v3.0 API (API version 2022-08-31 generally available (GA) release).
-
-1. On the Form Recognizer Studio home page, select **Identity documents**
-
-1. You can analyze the sample ID document or select the **+ Add** button to upload your own sample.
-
-1. Select the **Analyze** button:
-
- :::image type="content" source="media/studio/id-document-analyze.png" alt-text="Screenshot: analyze ID document menu.":::
-
- > [!div class="nextstepaction"]
- > [Try Form Recognizer Studio](https://formrecognizer.appliedai.azure.com/studio/prebuilt?formType=idDocument)
--
-## Form Recognizer Sample Labeling tool
-
-1. Navigate to the [Form Recognizer Sample Tool](https://fott-2-1.azurewebsites.net/).
-
-1. On the sample tool home page, select the **Use prebuilt model to get data** tile.
-
- :::image type="content" source="media/label-tool/prebuilt-1.jpg" alt-text="Screenshot of the layout model analyze results operation.":::
-
-1. Select the **Form Type** to analyze from the dropdown menu.
-
-1. Choose a URL for the file you'd like to analyze from the following options:
-
- * [**Sample invoice document**](https://raw.githubusercontent.com/Azure-Samples/cognitive-services-REST-api-samples/master/curl/form-recognizer/invoice_sample.jpg).
- * [**Sample ID document**](https://raw.githubusercontent.com/Azure-Samples/cognitive-services-REST-api-samples/master/curl/form-recognizer/DriverLicense.png).
- * [**Sample receipt image**](https://raw.githubusercontent.com/Azure-Samples/cognitive-services-REST-api-samples/master/curl/form-recognizer/contoso-allinone.jpg).
- * [**Sample business card image**](https://raw.githubusercontent.com/Azure/azure-sdk-for-python/master/sdk/formrecognizer/azure-ai-formrecognizer/samples/sample_forms/business_cards/business-card-english.jpg).
-
-1. In the **Source** field, select **URL** from the dropdown menu, paste the selected URL, and select the **Fetch** button.
-
- :::image type="content" source="media/label-tool/fott-select-url.png" alt-text="Screenshot of source location dropdown menu.":::
-
-1. In the **Form recognizer service endpoint** field, paste the endpoint that you obtained with your Form Recognizer subscription.
-
-1. In the **key** field, paste the key you obtained from your Form Recognizer resource.
-
- :::image type="content" source="media/fott-select-form-type.png" alt-text="Screenshot: select document type dropdown menu.":::
-
-1. Select **Run analysis**. The Form Recognizer Sample Labeling tool calls the Analyze Prebuilt API and analyzes the document.
-
-1. View the results - see the key-value pairs extracted, line items, highlighted text extracted and tables detected.
-
- :::image type="content" source="media/id-example-drivers-license.jpg" alt-text="Screenshot of the identity model analyze results operation.":::
-
-1. Download the JSON output file to view the detailed results.
-
- * The "readResults" node contains every line of text with its respective bounding box placement on the page.
- * The "selectionMarks" node shows every selection mark (checkbox, radio mark) and whether its status is "selected" or "unselected".
- * The "pageResults" section includes the tables extracted. For each table, Form Recognizer extracts the text, row, and column index, row and column spanning, bounding box, and more.
- * The "documentResults" field contains key/value pairs information and line items information for the most relevant parts of the document.
-
-> [!NOTE]
-> The [Sample Labeling tool](https://fott-2-1.azurewebsites.net/) does not support the BMP file format. This is a limitation of the tool not the Form Recognizer Service.
---
-## Supported document types
-
-| Region | Document Types |
-|--|-|
-|Worldwide|Passport Book, Passport Card|
-|`United States (US)`|Driver License, Identification Card, Residency Permit (Green card), Social Security Card, Military ID|
-|`India (IN)`|Driver License, PAN Card, Aadhaar Card|
-|`Canada (CA)`|Driver License, Identification Card, Residency Permit (Maple Card)|
-|`United Kingdom (GB)`|Driver License, National Identity Card|
-|`Australia (AU)`|Driver License, Photo Card, Key-pass ID (including digital version)|
-
-## Field extractions
-
-The following fields are extracted per document type. The Azure Form Recognizer ID model `prebuilt-idDocument` returns these fields in `documents.*.fields`. The JSON output also includes all the extracted text from the documents as words, lines, and styles.
-
->[!NOTE]
->
-> In addition to specifying the IdDocument model, you can designate the ID type (driver license, passport, national/regional identity card, residence permit, or US social security card).
-
-### Data extraction (all types)
-
-|**Model ID** | **Text extraction** | **Language detection** | **Selection Marks** | **Tables** | **Paragraphs** | **Structure** |**Key-Value pairs** | **Fields** |
-|:--|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|
-|[prebuilt-idDocument](concept-id-document.md#field-extractions) | ✓ | | | | ✓ | | | ✓ |
-
-### Document types
-
-#### `idDocument.driverLicense` fields extracted
-
-| Field | Type | Description | Example |
-|:|:--|:|:--|
-|`CountryRegion`|`countryRegion`|Country or region code|USA|
-|`Region`|`string`|State or province|Washington|
-|`DocumentNumber`|`string`|Driver license number|WDLABCD456DG|
-|`DocumentDiscriminator`|`string`|Driver license document discriminator|12645646464554646456464544|
-|`FirstName`|`string`|Given name and middle initial if applicable|LIAM R.|
-|`LastName`|`string`|Surname|TALBOT|
-|`Address`|`address`|Address|123 STREET ADDRESS YOUR CITY WA 99999-1234|
-|`DateOfBirth`|`date`|Date of birth|01/06/1958|
-|`DateOfExpiration`|`date`|Date of expiration|08/12/2020|
-|`DateOfIssue`|`date`|Date of issue|08/12/2012|
-|`EyeColor`|`string`|Eye color|BLU|
-|`HairColor`|`string`|Hair color|BRO|
-|`Height`|`string`|Height|5'11"|
-|`Weight`|`string`|Weight|185LB|
-|`Sex`|`string`|Sex|M|
-|`Endorsements`|`string`|Endorsements|L|
-|`Restrictions`|`string`|Restrictions|B|
-|`VehicleClassifications`|`string`|Vehicle classification|D|
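
As a rough illustration of how these fields surface in the Python SDK response, the sketch below reads a few of them from an analysis result. The field names come from the table above; the client setup reuses the same placeholder endpoint, key, and sample URL as the earlier sketch.

```python
# Sketch: read selected driver license fields from a prebuilt-idDocument result.
# Endpoint and key are placeholders; field names are taken from the table above.
from azure.ai.formrecognizer import DocumentAnalysisClient
from azure.core.credentials import AzureKeyCredential

client = DocumentAnalysisClient("https://<your-resource>.cognitiveservices.azure.com/",
                                AzureKeyCredential("<your-key>"))
poller = client.begin_analyze_document_from_url(
    "prebuilt-idDocument",
    "https://raw.githubusercontent.com/Azure-Samples/cognitive-services-REST-api-samples/master/curl/form-recognizer/DriverLicense.png")
fields = poller.result().documents[0].fields

for name in ("FirstName", "LastName", "DocumentNumber", "DateOfBirth", "DateOfExpiration"):
    field = fields.get(name)
    if field:
        print(f"{name}: {field.value} (confidence {field.confidence})")
```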
-
-#### `idDocument.passport` fields extracted
-
-| Field | Type | Description | Example |
-|:|:--|:|:--|
-|`DocumentNumber`|`string`|Passport number|340020013|
-|`FirstName`|`string`|Given name and middle initial if applicable|JENNIFER|
-|`MiddleName`|`string`|Name between given name and surname|REYES|
-|`LastName`|`string`|Surname|BROOKS|
-|`Aliases`|`array`|||
-|`Aliases.*`|`string`|Also known as|MAY LIN|
-|`DateOfBirth`|`date`|Date of birth|1980-01-01|
-|`DateOfExpiration`|`date`|Date of expiration|2019-05-05|
-|`DateOfIssue`|`date`|Date of issue|2014-05-06|
-|`Sex`|`string`|Sex|F|
-|`CountryRegion`|`countryRegion`|Issuing country or organization|USA|
-|`DocumentType`|`string`|Document type|P|
-|`Nationality`|`countryRegion`|Nationality|USA|
-|`PlaceOfBirth`|`string`|Place of birth|MASSACHUSETTS, U.S.A.|
-|`PlaceOfIssue`|`string`|Place of issue|LA PAZ|
-|`IssuingAuthority`|`string`|Issuing authority|United States Department of State|
-|`PersonalNumber`|`string`|Personal ID. No.|A234567893|
-|`MachineReadableZone`|`object`|Machine readable zone (MRZ)|P<USABROOKS<<JENNIFER<<<<<<<<<<<<<<<<<<<<<<< 3400200135USA8001014F1905054710000307<715816|
-|`MachineReadableZone.FirstName`|`string`|Given name and middle initial if applicable|JENNIFER|
-|`MachineReadableZone.LastName`|`string`|Surname|BROOKS|
-|`MachineReadableZone.DocumentNumber`|`string`|Passport number|340020013|
-|`MachineReadableZone.CountryRegion`|`countryRegion`|Issuing country or organization|USA|
-|`MachineReadableZone.Nationality`|`countryRegion`|Nationality|USA|
-|`MachineReadableZone.DateOfBirth`|`date`|Date of birth|1980-01-01|
-|`MachineReadableZone.DateOfExpiration`|`date`|Date of expiration|2019-05-05|
-|`MachineReadableZone.Sex`|`string`|Sex|F|
-
-#### `idDocument.nationalIdentityCard` fields extracted
-
-| Field | Type | Description | Example |
-|:|:--|:|:--|
-|`CountryRegion`|`countryRegion`|Country or region code|USA|
-|`Region`|`string`|State or province|Washington|
-|`DocumentNumber`|`string`|National/regional identity card number|WDLABCD456DG|
-|`DocumentDiscriminator`|`string`|National/regional identity card document discriminator|12645646464554646456464544|
-|`FirstName`|`string`|Given name and middle initial if applicable|LIAM R.|
-|`LastName`|`string`|Surname|TALBOT|
-|`Address`|`address`|Address|123 STREET ADDRESS YOUR CITY WA 99999-1234|
-|`DateOfBirth`|`date`|Date of birth|01/06/1958|
-|`DateOfExpiration`|`date`|Date of expiration|08/12/2020|
-|`DateOfIssue`|`date`|Date of issue|08/12/2012|
-|`EyeColor`|`string`|Eye color|BLU|
-|`HairColor`|`string`|Hair color|BRO|
-|`Height`|`string`|Height|5'11"|
-|`Weight`|`string`|Weight|185LB|
-|`Sex`|`string`|Sex|M|
-
-#### `idDocument.residencePermit` fields extracted
-
-| Field | Type | Description | Example |
-|:|:--|:|:--|
-|`CountryRegion`|`countryRegion`|Country or region code|USA|
-|`DocumentNumber`|`string`|Residence permit number|WDLABCD456DG|
-|`FirstName`|`string`|Given name and middle initial if applicable|LIAM R.|
-|`LastName`|`string`|Surname|TALBOT|
-|`DateOfBirth`|`date`|Date of birth|01/06/1958|
-|`DateOfExpiration`|`date`|Date of expiration|08/12/2020|
-|`DateOfIssue`|`date`|Date of issue|08/12/2012|
-|`Sex`|`string`|Sex|M|
-|`PlaceOfBirth`|`string`|Place of birth|Germany|
-|`Category`|`string`|Permit category|DV2|
-
-#### `idDocument.usSocialSecurityCard` fields extracted
-
-| Field | Type | Description | Example |
-|:|:--|:|:--|
-|`DocumentNumber`|`string`|Social security card number|WDLABCD456DG|
-|`FirstName`|`string`|Given name and middle initial if applicable|LIAM R.|
-|`LastName`|`string`|Surname|TALBOT|
-|`DateOfIssue`|`date`|Date of issue|08/12/2012|
-
-#### `idDocument` fields extracted
-
-| Field | Type | Description | Example |
-|:|:--|:|:--|
-|`Address`|`address`|Address|123 STREET ADDRESS YOUR CITY WA 99999-1234|
-|`DocumentNumber`|`string`|Driver license number|WDLABCD456DG|
-|`FirstName`|`string`|Given name and middle initial if applicable|LIAM R.|
-|`LastName`|`string`|Surname|TALBOT|
-|`DateOfBirth`|`date`|Date of birth|01/06/1958|
-|`DateOfExpiration`|`date`|Date of expiration|08/12/2020|
---
-## Supported document types and locales
-
- **Prebuilt ID v2.1** extracts key values from worldwide passports and U.S. Driver's Licenses in the **en-us** locale.
-
-## Fields extracted
-
-|Name| Type | Description | Value |
-|:--|:-|:-|:-|
-| Country | country | Country code compliant with ISO 3166 standard | "USA" |
-| DateOfBirth | date | DOB in YYYY-MM-DD format | "1980-01-01" |
-| DateOfExpiration | date | Expiration date in YYYY-MM-DD format | "2019-05-05" |
-| DocumentNumber | string | Relevant passport number, driver's license number, etc. | "340020013" |
-| FirstName | string | Extracted given name and middle initial if applicable | "JENNIFER" |
-| LastName | string | Extracted surname | "BROOKS" |
-| Nationality | country | Country code compliant with ISO 3166 standard | "USA" |
-| Sex | gender | Possible extracted values include "M", "F" and "X" | "F" |
-| MachineReadableZone | object | Extracted Passport MRZ including two lines of 44 characters each | "P<USABROOKS<<JENNIFER<<<<<<<<<<<<<<<<<<<<<<< 3400200135USA8001014F1905054710000307<715816" |
-| DocumentType | string | Document type, for example, Passport, Driver's License | "passport" |
-| Address | string | Extracted address (Driver's License only) | "123 STREET ADDRESS YOUR CITY WA 99999-1234"|
-| Region | string | Extracted region, state, province, etc. (Driver's License only) | "Washington" |
-
-### Migration guide
-
-* Follow our [**Form Recognizer v3.0 migration guide**](v3-migration-guide.md) to learn how to use the v3.0 version in your applications and workflows.
--
-## Next steps
--
-* Try processing your own forms and documents with the [Form Recognizer Studio](https://formrecognizer.appliedai.azure.com/studio)
-
-* Complete a [Form Recognizer quickstart](quickstarts/get-started-sdks-rest-api.md?view=form-recog-3.0.0&preserve-view=true) and get started creating a document processing app in the development language of your choice.
---
-* Try processing your own forms and documents with the [Form Recognizer Sample Labeling tool](https://fott-2-1.azurewebsites.net/)
-
-* Complete a [Form Recognizer quickstart](quickstarts/get-started-sdks-rest-api.md?view=form-recog-2.1.0&preserve-view=true) and get started creating a document processing app in the development language of your choice.
-
applied-ai-services Concept Insurance Card https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/concept-insurance-card.md
- Title: Form Recognizer insurance card prebuilt model-
-description: Data extraction and analysis extraction using the insurance card model
- Previously updated : 05/23/2023
-monikerRange: 'form-recog-3.0.0'
--
-# Azure Form Recognizer health insurance card model
-
-**This article applies to:** ![Form Recognizer v3.0 checkmark](media/yes-icon.png) **Form Recognizer v3.0**.
-
-The Form Recognizer health insurance card model combines powerful Optical Character Recognition (OCR) capabilities with deep learning models to analyze and extract key information from US health insurance cards. A health insurance card is a key document for care processing and can be digitally analyzed for patient onboarding, financial coverage information, cashless payments, and insurance claim processing. The health insurance card model analyzes health card images; extracts key information such as insurer, member, prescription, and group number; and returns a structured JSON representation. Health insurance cards can be presented in various formats and quality including phone-captured images, scanned documents, and digital PDFs.
-
-***Sample health insurance card processed using Form Recognizer Studio***
--
-## Development options
-
-Form Recognizer v3.0 supports the prebuilt health insurance card model with the following tools:
-
-| Feature | Resources | Model ID |
-|-|-|--|
-|**health insurance card model**|<ul><li> [**Form Recognizer Studio**](https://formrecognizer.appliedai.azure.com)</li><li>[**REST API**](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-2022-08-31/operations/AnalyzeDocument)</li><li>[**C# SDK**](quickstarts/get-started-sdks-rest-api.md?view=form-recog-3.0.0&preserve-view=true#prebuilt-model)</li><li>[**Python SDK**](quickstarts/get-started-sdks-rest-api.md?view=form-recog-3.0.0&preserve-view=true#prebuilt-model)</li><li>[**Java SDK**](quickstarts/get-started-sdks-rest-api.md?view=form-recog-3.0.0&preserve-view=true#prebuilt-model)</li><li>[**JavaScript SDK**](quickstarts/get-started-sdks-rest-api.md?view=form-recog-3.0.0&preserve-view=true#prebuilt-model)</li></ul>|**prebuilt-healthInsuranceCard.us**|
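
The sketch below shows one way to call the model with the Python SDK listed above; it assumes `azure-ai-formrecognizer` 3.2 or later, placeholder endpoint and key values, and a hypothetical local image file name.

```python
# Sketch: analyze a US health insurance card with the prebuilt-healthInsuranceCard.us model.
# Assumes azure-ai-formrecognizer >= 3.2; endpoint, key, and file name are placeholders.
from azure.ai.formrecognizer import DocumentAnalysisClient
from azure.core.credentials import AzureKeyCredential

client = DocumentAnalysisClient("https://<your-resource>.cognitiveservices.azure.com/",
                                AzureKeyCredential("<your-key>"))
with open("insurance-card.jpg", "rb") as f:  # hypothetical local image
    poller = client.begin_analyze_document("prebuilt-healthInsuranceCard.us", document=f)
result = poller.result()

card = result.documents[0]
for name, field in card.fields.items():
    print(f"{name}: {field.content} (confidence {field.confidence})")
```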
-
-### Try Form Recognizer
-
-See how data is extracted from health insurance cards using the Form Recognizer Studio. You need the following resources:
-
-* An Azure subscriptionΓÇöyou can [create one for free](https://azure.microsoft.com/free/cognitive-services/)
-
-* A [Form Recognizer instance](https://ms.portal.azure.com/#create/Microsoft.CognitiveServicesFormRecognizer) in the Azure portal. You can use the free pricing tier (`F0`) to try the service. After your resource deploys, select **Go to resource** to get your key and endpoint.
-
- :::image type="content" source="media/containers/keys-and-endpoint.png" alt-text="Screenshot of keys and endpoint location in the Azure portal.":::
-
-#### Form Recognizer Studio
-
-> [!NOTE]
-> Form Recognizer Studio is available with the v3.0 API.
-
-1. On the [Form Recognizer Studio home page](https://formrecognizer.appliedai.azure.com/studio), select **Health insurance cards**.
-
-1. You can analyze the sample insurance card document or select the **+ Add** button to upload your own sample.
-
-1. Select the **Analyze** button:
-
- :::image type="content" source="media/studio/insurance-card-analyze.png" alt-text="Screenshot: analyze health insurance card window in the Form Recognizer Studio.":::
-
- > [!div class="nextstepaction"]
- > [Try Form Recognizer Studio](https://formrecognizer.appliedai.azure.com/studio/prebuilt?formType=healthInsuranceCard.us)
-
-## Input requirements
--
-## Supported languages and locales
-
-| Model | Language - Locale code | Default |
-|--|:-|:|
-|prebuilt-healthInsuranceCard.us| <ul><li>English (United States)</li></ul>|English (United States) - en-US|
-
-## Field extraction
-
-| Field | Type | Description | Example |
-|:|:--|:|:--|
-|`Insurer`|`string`|Health insurance provider name|PREMERA<br>BLUE CROSS|
-|`Member`|`object`|||
-|`Member.Name`|`string`|Member name|ANGEL BROWN|
-|`Member.BirthDate`|`date`|Member date of birth|01/06/1958|
-|`Member.Employer`|`string`|Member employer|Microsoft|
-|`Member.Gender`|`string`|Member gender|M|
-|`Member.IdNumberSuffix`|`string`|Identification Number Suffix as it appears on some health insurance cards|01|
-|`Dependents`|`array`|Array holding list of dependents, ordered where possible by membership suffix value||
-|`Dependents.*`|`object`|||
-|`Dependents.*.Name`|`string`|Dependent name|01|
-|`IdNumber`|`object`|||
-|`IdNumber.Prefix`|`string`|Identification Number Prefix as it appears on some health insurance cards|ABC|
-|`IdNumber.Number`|`string`|Identification Number|123456789|
-|`GroupNumber`|`string`|Insurance Group Number|1000000|
-|`PrescriptionInfo`|`object`|||
-|`PrescriptionInfo.Issuer`|`string`|ANSI issuer identification number (IIN)|(80840) 300-11908-77|
-|`PrescriptionInfo.RxBIN`|`string`|Prescription issued BIN number|987654|
-|`PrescriptionInfo.RxPCN`|`string`|Prescription processor control number|63200305|
-|`PrescriptionInfo.RxGrp`|`string`|Prescription group number|BCAAXYZ|
-|`PrescriptionInfo.RxId`|`string`|Prescription identification number. If not present, defaults to membership ID number|P97020065|
-|`PrescriptionInfo.RxPlan`|`string`|Prescription Plan number|A1|
-|`Pbm`|`string`|Pharmacy Benefit Manager for the plan|CVS CAREMARK|
-|`EffectiveDate`|`date`|Date from which the plan is effective|08/12/2012|
-|`Copays`|`array`|Array holding list of copay Benefits||
-|`Copays.*`|`object`|||
-|`Copays.*.Benefit`|`string`|Co-Pay Benefit name|Deductible|
-|`Copays.*.Amount`|`currency`|Co-Pay required amount|$1,500|
-|`Payer`|`object`|||
-|`Payer.Id`|`string`|Payer ID Number|89063|
-|`Payer.Address`|`address`|Payer address|123 Service St., Redmond WA, 98052|
-|`Payer.PhoneNumber`|`phoneNumber`|Payer phone number|+1 (987) 213-5674|
-|`Plan`|`object`|||
-|`Plan.Number`|`string`|Plan number|456|
-|`Plan.Name`|`string`|Plan name (returns Medicaid if the card shows Medicaid)|HEALTH SAVINGS PLAN|
-|`Plan.Type`|`string`|Plan type|PPO|
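
Several of these fields are nested: `Member` and `Payer` are object fields and `Copays` is an array. The hedged sketch below shows how such values typically come back through the Python SDK, where an object field's `value` is a dictionary of sub-fields and an array field's `value` is a list; the endpoint, key, and file name are placeholders.

```python
# Sketch: navigate nested health insurance card fields (object and array values).
# Placeholder endpoint/key/image; field names come from the table above.
from azure.ai.formrecognizer import DocumentAnalysisClient
from azure.core.credentials import AzureKeyCredential

client = DocumentAnalysisClient("https://<your-resource>.cognitiveservices.azure.com/",
                                AzureKeyCredential("<your-key>"))
with open("insurance-card.jpg", "rb") as f:  # hypothetical local image
    result = client.begin_analyze_document("prebuilt-healthInsuranceCard.us", document=f).result()
fields = result.documents[0].fields

member = fields.get("Member")  # object field: .value is a dict of sub-fields
if member and member.value.get("Name"):
    print("Member name:", member.value["Name"].value)

copays = fields.get("Copays")  # array field: .value is a list of object fields
for copay in (copays.value if copays else []):
    benefit = copay.value.get("Benefit")
    amount = copay.value.get("Amount")
    print("Copay:", benefit.content if benefit else "?",
          amount.content if amount else "?")
```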
-
-### Migration guide and REST API v3.0
-
-* Follow our [**Form Recognizer v3.0 migration guide**](v3-migration-guide.md) to learn how to use the v3.0 version in your applications and workflows.
-
-* Explore our [**REST API**](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-2022-08-31/operations/AnalyzeDocument) to learn more about the v3.0 version and new capabilities.
-
-## Next steps
-
-* Try processing your own forms and documents with the [Form Recognizer Studio](https://formrecognizer.appliedai.azure.com/studio)
-
-* Complete a [Form Recognizer quickstart](quickstarts/get-started-sdks-rest-api.md?view=form-recog-3.0.0&preserve-view=true) and get started creating a document processing app in the development language of your choice.
applied-ai-services Concept Invoice https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/concept-invoice.md
- Title: Invoice data extraction - Form Recognizer
-description: Automate invoice data extraction with Form Recognizer's invoice model to extract accounts payable data including invoice line items.
- Previously updated : 05/23/2023
-<!-- markdownlint-disable MD033 -->
-
-# Azure Form Recognizer invoice model
---
-The Form Recognizer invoice model uses powerful Optical Character Recognition (OCR) capabilities to analyze and extract key fields and line items from sales invoices, utility bills, and purchase orders. Invoices can be of various formats and quality including phone-captured images, scanned documents, and digital PDFs. The API analyzes invoice text; extracts key information such as customer name, billing address, due date, and amount due; and returns a structured JSON data representation. The model currently supports both English and Spanish invoices.
-
-**Supported document types:**
-
-* Invoices
-* Utility bills
-* Sales orders
-* Purchase orders
-
-## Automated invoice processing
-
-Automated invoice processing is the process of extracting key accounts payable fields from billing account documents. Extracted data includes line items from invoices integrated with your accounts payable (AP) workflows for reviews and payments. Historically, the accounts payable process has been done manually and is, therefore, time consuming. Accurate extraction of key data from invoices is typically the first and one of the most critical steps in the invoice automation process.
--
-**Sample invoice processed with [Form Recognizer Studio](https://formrecognizer.appliedai.azure.com/studio/prebuilt?formType=invoice)**:
----
-**Sample invoice processed with [Form Recognizer Sample Labeling tool](https://fott-2-1.azurewebsites.net)**:
---
-## Development options
--
-Form Recognizer v3.0 supports the following tools:
-
-| Feature | Resources | Model ID |
-|-|-|--|
-|**Invoice model** | <ul><li>[**Form Recognizer Studio**](https://formrecognizer.appliedai.azure.com)</li><li>[**REST API**](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-2022-08-31/operations/AnalyzeDocument)</li><li>[**C# SDK**](quickstarts/get-started-sdks-rest-api.md?view=form-recog-3.0.0&preserve-view=true)</li><li>[**Python SDK**](quickstarts/get-started-sdks-rest-api.md?view=form-recog-3.0.0&preserve-view=true)</li><li>[**Java SDK**](quickstarts/get-started-sdks-rest-api.md?view=form-recog-3.0.0&preserve-view=true)</li><li>[**JavaScript SDK**](quickstarts/get-started-sdks-rest-api.md?view=form-recog-3.0.0&preserve-view=true)</li></ul>|**prebuilt-invoice**|
---
-Form Recognizer v2.1 supports the following tools:
-
-| Feature | Resources |
-|-|-|
-|**Invoice model**| <ul><li>[**Form Recognizer labeling tool**](https://fott-2-1.azurewebsites.net/prebuilts-analyze)</li><li>[**REST API**](./how-to-guides/use-sdk-rest-api.md?pivots=programming-language-rest-api&preserve-view=true&tabs=windows&view=form-recog-2.1.0#analyze-invoices)</li><li>[**Client-library SDK**](/azure/applied-ai-services/form-recognizer/how-to-guides/v2-1-sdk-rest-api)</li><li>[**Form Recognizer Docker container**](containers/form-recognizer-container-install-run.md?tabs=invoice#run-the-container-with-the-docker-compose-up-command)</li></ul>|
--
-## Input requirements
-----
-* Supported file formats: JPEG, PNG, PDF, and TIFF
-* For PDF and TIFF, up to 2000 pages are processed. For free tier subscribers, only the first two pages are processed.
-* The file size must be less than 50 MB and dimensions at least 50 x 50 pixels and at most 10,000 x 10,000 pixels.
--
-## Try invoice data extraction
-
-See how data, including customer information, vendor details, and line items, is extracted from invoices. You need the following resources:
-
-* An Azure subscription; you can [create one for free](https://azure.microsoft.com/free/cognitive-services/)
-
-* A [Form Recognizer instance](https://portal.azure.com/#create/Microsoft.CognitiveServicesFormRecognizer) in the Azure portal. You can use the free pricing tier (`F0`) to try the service. After your resource deploys, select **Go to resource** to get your key and endpoint.
-
- :::image type="content" source="media/containers/keys-and-endpoint.png" alt-text="Screenshot: keys and endpoint location in the Azure portal.":::
--
-## Form Recognizer Studio
-
-1. On the Form Recognizer Studio home page, select **Invoices**
-
-1. You can analyze the sample invoice or select the **+ Add** button to upload your own sample.
-
-1. Select the **Analyze** button:
-
- :::image type="content" source="media/studio/invoice-analyze.png" alt-text="Screenshot: analyze invoice menu.":::
-
-> [!div class="nextstepaction"]
-> [Try Form Recognizer Studio](https://formrecognizer.appliedai.azure.com/studio/prebuilt?formType=invoice)
---
-## Form Recognizer Sample Labeling tool
-
-1. Navigate to the [Form Recognizer Sample Tool](https://fott-2-1.azurewebsites.net/).
-
-1. On the sample tool home page, select the **Use prebuilt model to get data** tile.
-
- :::image type="content" source="media/label-tool/prebuilt-1.jpg" alt-text="Screenshot of layout model analyze results process.":::
-
-1. Select the **Form Type** to analyze from the dropdown menu.
-
-1. Choose a URL for the file you'd like to analyze from the following options:
-
- * [**Sample invoice document**](https://raw.githubusercontent.com/Azure-Samples/cognitive-services-REST-api-samples/master/curl/form-recognizer/invoice_sample.jpg).
- * [**Sample ID document**](https://raw.githubusercontent.com/Azure-Samples/cognitive-services-REST-api-samples/master/curl/form-recognizer/DriverLicense.png).
- * [**Sample receipt image**](https://raw.githubusercontent.com/Azure-Samples/cognitive-services-REST-api-samples/master/curl/form-recognizer/contoso-allinone.jpg).
- * [**Sample business card image**](https://raw.githubusercontent.com/Azure/azure-sdk-for-python/master/sdk/formrecognizer/azure-ai-formrecognizer/samples/sample_forms/business_cards/business-card-english.jpg).
-
-1. In the **Source** field, select **URL** from the dropdown menu, paste the selected URL, and select the **Fetch** button.
-
- :::image type="content" source="media/label-tool/fott-select-url.png" alt-text="Screenshot of source location dropdown menu.":::
-
-1. In the **Form recognizer service endpoint** field, paste the endpoint that you obtained with your Form Recognizer subscription.
-
-1. In the **key** field, paste the key you obtained from your Form Recognizer resource.
-
- :::image type="content" source="media/fott-select-form-type.png" alt-text="Screenshot showing the select-form-type dropdown menu.":::
-
-1. Select **Run analysis**. The Form Recognizer Sample Labeling tool calls the Analyze Prebuilt API and analyzes the document.
-
-1. View the results - see the key-value pairs extracted, line items, highlighted text extracted and tables detected.
-
- :::image type="content" source="media/invoice-example-new.jpg" alt-text="Screenshot of layout model analyze results operation.":::
-
-> [!NOTE]
-> The [Sample Labeling tool](https://fott-2-1.azurewebsites.net/) does not support the BMP file format. This is a limitation of the tool not the Form Recognizer Service.
---
-## Supported languages and locales
-
->[!NOTE]
-> Form Recognizer auto-detects language and locale data.
-
-| Supported languages | Details |
-|:-|:|
-| &bullet; English (en) | United States (us), Australia (-au), Canada (-ca), United Kingdom (-uk), India (-in)|
-| &bullet; Spanish (es) |Spain (es)|
-| &bullet; German (de) | Germany (de)|
-| &bullet; French (fr) | France (fr) |
-| &bullet; Italian (it) | Italy (it)|
-| &bullet; Portuguese (pt) | Portugal (pt), Brazil (br)|
-| &bullet; Dutch (nl) | Netherlands (nl)|
-
-## Field extraction
-
-|Name| Type | Description | Standardized output |
-|:--|:-|:-|::|
-| CustomerName | String | Invoiced customer| |
-| CustomerId | String | Customer reference ID | |
-| PurchaseOrder | String | Purchase order reference number | |
-| InvoiceId | String | ID for this specific invoice (often "Invoice Number") | |
-| InvoiceDate | Date | Date the invoice was issued | yyyy-mm-dd|
-| DueDate | Date | Date payment for this invoice is due | yyyy-mm-dd|
-| VendorName | String | Vendor name | |
-| VendorTaxId | String | The taxpayer number associated with the vendor | |
-| VendorAddress | String | Vendor mailing address| |
-| VendorAddressRecipient | String | Name associated with the VendorAddress | |
-| CustomerAddress | String | Mailing address for the Customer | |
-| CustomerTaxId | String | The taxpayer number associated with the customer | |
-| CustomerAddressRecipient | String | Name associated with the CustomerAddress | |
-| BillingAddress | String | Explicit billing address for the customer | |
-| BillingAddressRecipient | String | Name associated with the BillingAddress | |
-| ShippingAddress | String | Explicit shipping address for the customer | |
-| ShippingAddressRecipient | String | Name associated with the ShippingAddress | |
-| PaymentTerm | String | The terms of payment for the invoice | |
- |Sub&#8203;Total| Number | Subtotal field identified on this invoice | Integer |
-| TotalTax | Number | Total tax field identified on this invoice | Integer |
-| InvoiceTotal | Number (USD) | Total new charges associated with this invoice | Integer |
-| AmountDue | Number (USD) | Total Amount Due to the vendor | Integer |
-| ServiceAddress | String | Explicit service address or property address for the customer | |
-| ServiceAddressRecipient | String | Name associated with the ServiceAddress | |
-| RemittanceAddress | String | Explicit remittance or payment address for the customer | |
-| RemittanceAddressRecipient | String | Name associated with the RemittanceAddress | |
-| ServiceStartDate | Date | First date for the service period (for example, a utility bill service period) | yyyy-mm-dd |
-| ServiceEndDate | Date | End date for the service period (for example, a utility bill service period) | yyyy-mm-dd|
-| PreviousUnpaidBalance | Number | Explicit previously unpaid balance | Integer |
-| CurrencyCode | String | The currency code associated with the extracted amount | |
-| PaymentDetails | Array | An array that holds Payment Option details such as `IBAN` and `SWIFT` | |
-| TotalDiscount | Number | The total discount applied to an invoice | Integer |
-| TaxItems (en-IN only) | Array | An array that holds added tax information such as `CGST`, `IGST`, and `SGST`. This line item is currently only available for the en-in locale | |
-
-### Line items
-
-The following line items are extracted from an invoice in the JSON output response (this output uses the [sample invoice](media/sample-invoice.jpg)).
-
-|Name| Type | Description | Text (line item #1) | Value (standardized output) |
-|:--|:-|:-|:-| :-|
-| Items | String | Full string text line of the line item | 3/4/2021 A123 Consulting Services 2 hours $30.00 10% $60.00 | |
-| Amount | Number | The amount of the line item | $60.00 | 100 |
-| Description | String | The text description for the invoice line item | Consulting service | Consulting service |
-| Quantity | Number | The quantity for this invoice line item | 2 | 2 |
-| UnitPrice | Number | The net or gross price (depending on the gross invoice setting of the invoice) of one unit of this item | $30.00 | 30 |
-| ProductCode | String| Product code, product number, or SKU associated with the specific line item | A123 | |
-| Unit | String| The unit of the line item, for example, kg, lb, etc. | Hours | |
-| Date | Date| Date corresponding to each line item. Often it's a date the line item was shipped | 3/4/2021| 2021-03-04 |
-| Tax | Number | Tax associated with each line item. Possible values include tax amount and tax Y/N | 10.00 | |
-| TaxRate | Number | Tax Rate associated with each line item. | 10% | |
-
-The invoice key-value pairs and line items extracted are in the `documentResults` section of the JSON output.
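
If you're using the v3.0 Python SDK rather than inspecting the raw JSON, the same fields and line items surface on the result's `documents` collection. The sketch below is a minimal illustration, assuming placeholder endpoint and key values; the invoice URL is the sample referenced earlier in this article.

```python
# Sketch: read top-level invoice fields and line items with the prebuilt-invoice model.
# Placeholder endpoint/key; assumes azure-ai-formrecognizer >= 3.2.
from azure.ai.formrecognizer import DocumentAnalysisClient
from azure.core.credentials import AzureKeyCredential

client = DocumentAnalysisClient("https://<your-resource>.cognitiveservices.azure.com/",
                                AzureKeyCredential("<your-key>"))
poller = client.begin_analyze_document_from_url(
    "prebuilt-invoice",
    "https://raw.githubusercontent.com/Azure-Samples/cognitive-services-REST-api-samples/master/curl/form-recognizer/invoice_sample.jpg")
invoice = poller.result().documents[0]

for name in ("VendorName", "InvoiceId", "InvoiceDate", "InvoiceTotal", "AmountDue"):
    field = invoice.fields.get(name)
    if field:
        print(f"{name}: {field.content} (confidence {field.confidence})")

items = invoice.fields.get("Items")
for item in (items.value if items else []):  # each line item is an object field
    desc = item.value.get("Description")
    amount = item.value.get("Amount")
    print("Line item:", desc.content if desc else "?", amount.content if amount else "?")
```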
-
-### Key-value pairs
-
-The prebuilt invoice model, in the **2022-06-30** and later releases, returns key-value pairs at no extra cost. Key-value pairs are specific spans within the invoice that identify a label or key and its associated response or value. In an invoice, these pairs could be the label and the value the user entered for that field, such as a telephone number. The AI model is trained to extract identifiable keys and values based on a wide variety of document types, formats, and structures.
-
-Keys can also exist in isolation when the model detects that a key exists with no associated value, or when processing optional fields. For example, a middle name field may be left blank on a form in some instances. Key-value pairs are always spans of text contained in the document. For documents where the same value is described in different ways, for example, customer/user, the associated key is either customer or user (based on context).
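
In the Python SDK, these pairs are exposed on the result's `key_value_pairs` collection. The sketch below is a rough illustration with placeholder endpoint and key values; note that a key can come back with no value, matching the behavior described above.

```python
# Sketch: iterate key-value pairs returned by the prebuilt-invoice model (2022-06-30 and later).
# Placeholder endpoint/key; assumes azure-ai-formrecognizer >= 3.2.
from azure.ai.formrecognizer import DocumentAnalysisClient
from azure.core.credentials import AzureKeyCredential

client = DocumentAnalysisClient("https://<your-resource>.cognitiveservices.azure.com/",
                                AzureKeyCredential("<your-key>"))
result = client.begin_analyze_document_from_url(
    "prebuilt-invoice",
    "https://raw.githubusercontent.com/Azure-Samples/cognitive-services-REST-api-samples/master/curl/form-recognizer/invoice_sample.jpg").result()

for pair in result.key_value_pairs:
    key_text = pair.key.content if pair.key else ""
    value_text = pair.value.content if pair.value else ""  # keys can appear with no value
    print(f"{key_text}: {value_text} (confidence {pair.confidence})")
```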
---
-## Supported locales
-
-**Prebuilt invoice v2.1** supports invoices in the **en-us** locale.
-
-## Fields extracted
-
-The Invoice service extracts the text, tables, and 26 invoice fields. Following are the fields extracted from an invoice in the JSON output response (the following output uses this [sample invoice](media/sample-invoice.jpg)).
-
-|Name| Type | Description | Text | Value (standardized output) |
-|:--|:-|:-|:-| :-|
-| CustomerName | string | Customer being invoiced | Microsoft Corp | |
-| CustomerId | string | Reference ID for the customer | CID-12345 | |
-| PurchaseOrder | string | A purchase order reference number | PO-3333 | |
-| InvoiceId | string | ID for this specific invoice (often "Invoice Number") | INV-100 | |
-| InvoiceDate | date | Date the invoice was issued | 11/15/2019 | 2019-11-15 |
-| DueDate | date | Date payment for this invoice is due | 12/15/2019 | 2019-12-15 |
-| VendorName | string | Vendor who has created this invoice | CONTOSO LTD. | |
-| VendorAddress | string | Mailing address for the Vendor | 123 456th St New York, NY, 10001 | |
-| VendorAddressRecipient | string | Name associated with the VendorAddress | Contoso Headquarters | |
-| CustomerAddress | string | Mailing address for the Customer | 123 Other Street, Redmond WA, 98052 | |
-| CustomerAddressRecipient | string | Name associated with the CustomerAddress | Microsoft Corp | |
-| BillingAddress | string | Explicit billing address for the customer | 123 Bill Street, Redmond WA, 98052 | |
-| BillingAddressRecipient | string | Name associated with the BillingAddress | Microsoft Services | |
-| ShippingAddress | string | Explicit shipping address for the customer | 123 Ship Street, Redmond WA, 98052 | |
-| ShippingAddressRecipient | string | Name associated with the ShippingAddress | Microsoft Delivery | |
-| Sub&#8203;Total | number | Subtotal field identified on this invoice | $100.00 | 100 |
-| TotalTax | number | Total tax field identified on this invoice | $10.00 | 10 |
-| InvoiceTotal | number | Total new charges associated with this invoice | $110.00 | 110 |
-| AmountDue | number | Total Amount Due to the vendor | $610.00 | 610 |
-| ServiceAddress | string | Explicit service address or property address for the customer | 123 Service Street, Redmond WA, 98052 | |
-| ServiceAddressRecipient | string | Name associated with the ServiceAddress | Microsoft Services | |
-| RemittanceAddress | string | Explicit remittance or payment address for the customer | 123 Remit St New York, NY, 10001 | |
-| RemittanceAddressRecipient | string | Name associated with the RemittanceAddress | Contoso Billing | |
-| ServiceStartDate | date | First date for the service period (for example, a utility bill service period) | 10/14/2019 | 2019-10-14 |
-| ServiceEndDate | date | End date for the service period (for example, a utility bill service period) | 11/14/2019 | 2019-11-14 |
-| PreviousUnpaidBalance | number | Explicit previously unpaid balance | $500.00 | 500 |
-
-The following line items are extracted from an invoice in the JSON output response (this output uses the [sample invoice](./media/sample-invoice.jpg)).
-
-|Name| Type | Description | Text (line item #1) | Value (standardized output) |
-|:--|:-|:-|:-| :-|
-| Items | string | Full string text line of the line item | 3/4/2021 A123 Consulting Services 2 hours $30.00 10% $60.00 | |
-| Amount | number | The amount of the line item | $60.00 | 100 |
-| Description | string | The text description for the invoice line item | Consulting service | Consulting service |
-| Quantity | number | The quantity for this invoice line item | 2 | 2 |
-| UnitPrice | number | The net or gross price (depending on the gross invoice setting of the invoice) of one unit of this item | $30.00 | 30 |
-| ProductCode | string| Product code, product number, or SKU associated with the specific line item | A123 | |
-| Unit | string| The unit of the line item, for example, kg, lb, etc. | hours | |
-| Date | date| Date corresponding to each line item. Often it's a date the line item was shipped | 3/4/2021| 2021-03-04 |
-| Tax | number | Tax associated with each line item. Possible values include tax amount, tax %, and tax Y/N | 10% | |
-
-### JSON output
-
-The JSON output has three parts:
-
-* `"readResults"` node contains all of the recognized text and selection marks. Text is organized via page, then by line, then by individual words.
-* `"pageResults"` node contains the tables and cells extracted with their bounding boxes, confidence, and a reference to the lines and words in "readResults".
-* `"documentResults"` node contains the invoice-specific values and line items that the model discovered. It's where to find all the fields from the invoice such as invoice ID, ship to, bill to, customer, total, line items, and more.
-
-## Migration guide
-
-* Follow our [**Form Recognizer v3.0 migration guide**](v3-migration-guide.md) to learn how to use the v3.0 version in your applications and workflows.
--
-## Next steps
--
-* Try processing your own forms and documents with the [Form Recognizer Studio](https://formrecognizer.appliedai.azure.com/studio)
-
-* Complete a [Form Recognizer quickstart](quickstarts/get-started-sdks-rest-api.md?view=form-recog-3.0.0&preserve-view=true) and get started creating a document processing app in the development language of your choice.
---
-* Try processing your own forms and documents with the [Form Recognizer Sample Labeling tool](https://fott-2-1.azurewebsites.net/)
-
-* Complete a [Form Recognizer quickstart](quickstarts/get-started-sdks-rest-api.md?view=form-recog-2.1.0&preserve-view=true) and get started creating a document processing app in the development language of your choice.
-
applied-ai-services Concept Layout https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/concept-layout.md
- Title: Document layout analysis - Form Recognizer-
-description: Extract text, tables, selections, titles, section headings, page headers, page footers, and more with layout analysis model from Form Recognizer.
- Previously updated : 06/23/2023
-# Azure Form Recognizer layout model
---
-Form Recognizer layout model is an advanced machine-learning based document analysis API available in the Form Recognizer cloud. It enables you to take documents in various formats and return structured data representations of the documents. It combines an enhanced version of our powerful [Optical Character Recognition (OCR)](../../cognitive-services/computer-vision/overview-ocr.md) capabilities with deep learning models to extract text, tables, selection marks, and document structure.
-
-## Document layout analysis
-
-Document structure layout analysis is the process of analyzing a document to extract regions of interest and their inter-relationships. The goal is to extract text and structural elements from the page to build better semantic understanding models. There are two types of roles that text plays in a document layout:
-
-* **Geometric roles**: Text, tables, and selection marks are examples of geometric roles.
-* **Logical roles**: Titles, headings, and footers are examples of logical roles.
-
-The following illustration shows the typical components in an image of a sample page.
---
-***Sample form processed with [Form Recognizer Studio](https://formrecognizer.appliedai.azure.com/studio/layout)***
--
-## Development options
-
-Form Recognizer v3.0 supports the following tools:
-
-| Feature | Resources | Model ID |
-|-|||
-|**Layout model**| <ul><li>[**Form Recognizer Studio**](https://formrecognizer.appliedai.azure.com)</li><li>[**REST API**](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-2022-08-31/operations/AnalyzeDocument)</li><li>[**C# SDK**](quickstarts/get-started-sdks-rest-api.md?view=form-recog-3.0.0&preserve-view=true)</li><li>[**Python SDK**](quickstarts/get-started-sdks-rest-api.md?view=form-recog-3.0.0&preserve-view=true)</li><li>[**Java SDK**](quickstarts/get-started-sdks-rest-api.md?view=form-recog-3.0.0&preserve-view=true)</li><li>[**JavaScript SDK**](quickstarts/get-started-sdks-rest-api.md?view=form-recog-3.0.0&preserve-view=true)</li></ul>|**prebuilt-layout**|
---
-**Sample document processed with [Form Recognizer Sample Labeling tool layout model](https://fott-2-1.azurewebsites.net/layout-analyze)**:
---
-## Input requirements
-----
-* Supported file formats: JPEG, PNG, PDF, and TIFF
-* For PDF and TIFF, up to 2000 pages are processed. For free tier subscribers, only the first two pages are processed.
-* The file size must be less than 50 MB and dimensions at least 50 x 50 pixels and at most 10,000 x 10,000 pixels.
--
-### Try layout extraction
-
-See how data, including text, tables, table headers, selection marks, and structure information is extracted from documents using Form Recognizer. You need the following resources:
-
-* An Azure subscription; you can [create one for free](https://azure.microsoft.com/free/cognitive-services/)
-
-* A [Form Recognizer instance](https://portal.azure.com/#create/Microsoft.CognitiveServicesFormRecognizer) in the Azure portal. You can use the free pricing tier (`F0`) to try the service. After your resource deploys, select **Go to resource** to get your key and endpoint.
-
- :::image type="content" source="media/containers/keys-and-endpoint.png" alt-text="Screenshot: keys and endpoint location in the Azure portal.":::
--
-## Form Recognizer Studio
-
-> [!NOTE]
-> Form Recognizer Studio is available with the v3.0 API.
-
-***Sample form processed with [Form Recognizer Studio](https://formrecognizer.appliedai.azure.com/studio/layout)***
--
-1. On the Form Recognizer Studio home page, select **Layout**
-
-1. You can analyze the sample document or select the **+ Add** button to upload your own sample.
-
-1. Select the **Analyze** button:
-
- :::image type="content" source="media/studio/layout-analyze.png" alt-text="Screenshot: analyze layout menu.":::
-
- > [!div class="nextstepaction"]
- > [Try Form Recognizer Studio](https://formrecognizer.appliedai.azure.com/studio/layout)
---
-## Form Recognizer Sample Labeling tool
-
-1. Navigate to the [Form Recognizer sample tool](https://fott-2-1.azurewebsites.net/).
-
-1. On the sample tool home page, select **Use Layout to get text, tables and selection marks**.
-
- :::image type="content" source="media/label-tool/layout-1.jpg" alt-text="Screenshot of connection settings for the Form Recognizer layout process.":::
-
-1. In the **Form recognizer service endpoint** field, paste the endpoint that you obtained with your Form Recognizer subscription.
-
-1. In the **key** field, paste the key you obtained from your Form Recognizer resource.
-
-1. In the **Source** field, select **URL** from the dropdown menu. You can use our sample document:
-
- * [**Sample document**](https://raw.githubusercontent.com/Azure-Samples/cognitive-services-REST-api-samples/master/curl/form-recognizer/layout-page-001.jpg)
-
- * Select the **Fetch** button.
-
-1. Select **Run Layout**. The Form Recognizer Sample Labeling tool calls the Analyze Layout API and analyzes the document.
-
- :::image type="content" source="media/fott-layout.png" alt-text="Screenshot: Layout dropdown window.":::
-
-1. View the results - see the highlighted text extracted, selection marks detected and tables detected.
-
- :::image type="content" source="media/label-tool/layout-3.jpg" alt-text="Screenshot of connection settings for the Form Recognizer Sample Labeling tool.":::
--
-## Supported document types
-
-| **Model** | **Images** | **PDF** | **TIFF** |
-| | | | |
-| Layout | ✓ | ✓ | ✓ |
-
-## Supported languages and locales
-
-*See* [Language Support](language-support.md) for a complete list of supported handwritten and printed languages.
--
-### Data extraction
-
-**Starting with v3.0 GA**, the layout model extracts paragraphs and more structural information, such as titles, section headings, page headers, page footers, page numbers, and footnotes, from the document page. These structural elements are examples of the logical roles described in the previous section. This capability is supported for PDF documents and images (JPG, PNG, BMP, TIFF).
-
-| **Model** | **Text** | **Selection Marks** | **Tables** | **Paragraphs** | **Logical roles** |
-| | | | | | |
-| Layout | ✓ | ✓ | ✓ | ✓ | ✓ |
-
-**Supported logical roles for paragraphs**:
-The paragraph roles are best used with unstructured documents. Paragraph roles help analyze the structure of the extracted content for better semantic search and analysis.
-
-* title
-* sectionHeading
-* footnote
-* pageHeader
-* pageFooter
-* pageNumber
---
-### Data extraction support
-
-| **Model** | **Text** | **Tables** | Selection marks|
-| | | | |
-| Layout | ✓ | ✓ | ✓ |
-
-Form Recognizer v2.1 supports the following tools:
-
-| Feature | Resources |
-|-|-|
-|**Layout API**| <ul><li>[**Form Recognizer labeling tool**](https://fott-2-1.azurewebsites.net/layout-analyze)</li><li>[**REST API**](./how-to-guides/use-sdk-rest-api.md?pivots=programming-language-rest-api&preserve-view=true&tabs=windows&view=form-recog-2.1.0#analyze-layout)</li><li>[**Client-library SDK**](/azure/applied-ai-services/form-recognizer/how-to-guides/v2-1-sdk-rest-api)</li><li>[**Form Recognizer Docker container**](containers/form-recognizer-container-install-run.md?branch=main&tabs=layout#run-the-container-with-the-docker-compose-up-command)</li></ul>|
---
-## Model extraction
-
-The layout model extracts text, selection marks, tables, paragraphs, and paragraph types (`roles`) from your documents.
-
-### Paragraph extraction
-
-The Layout model extracts all identified blocks of text in the `paragraphs` collection as a top level object under `analyzeResults`. Each entry in this collection represents a text block and includes the extracted text as `content` and the bounding `polygon` coordinates. The `span` information points to the text fragment within the top level `content` property that contains the full text from the document.
-
-```json
-"paragraphs": [
- {
- "spans": [],
- "boundingRegions": [],
- "content": "While healthcare is still in the early stages of its Al journey, we are seeing pharmaceutical and other life sciences organizations making major investments in Al and related technologies.\" TOM LAWRY | National Director for Al, Health and Life Sciences | Microsoft"
- }
-]
-```
-
-### Paragraph roles
-
-The new machine-learning based page object detection extracts logical roles like titles, section headings, page headers, page footers, and more. The Form Recognizer Layout model assigns certain text blocks in the `paragraphs` collection with their specialized role or type predicted by the model. They're best used with unstructured documents to help understand the layout of the extracted content for a richer semantic analysis. The following paragraph roles are supported:
-
-| **Predicted role** | **Description** |
-| | |
-| `title` | The main heading(s) in the page |
-| `sectionHeading` | One or more subheading(s) on the page |
-| `footnote` | Text near the bottom of the page |
-| `pageHeader` | Text near the top edge of the page |
-| `pageFooter` | Text near the bottom edge of the page |
-| `pageNumber` | Page number |
-
-```json
-{
- "paragraphs": [
- {
- "spans": [],
- "boundingRegions": [],
- "role": "title",
- "content": "NEWS TODAY"
- },
- {
- "spans": [],
- "boundingRegions": [],
- "role": "sectionHeading",
- "content": "Mirjam Nilsson"
- }
- ]
-}
-
-```
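
With the Python SDK, the same information is available on the result's `paragraphs` collection, where `role` is unset for plain body text. The sketch below is illustrative only; the endpoint and key are placeholders, and the URL is the sample layout image used elsewhere in this article.

```python
# Sketch: list paragraphs and their predicted roles from a prebuilt-layout result.
# Placeholder endpoint/key; assumes azure-ai-formrecognizer >= 3.2.
from azure.ai.formrecognizer import DocumentAnalysisClient
from azure.core.credentials import AzureKeyCredential

client = DocumentAnalysisClient("https://<your-resource>.cognitiveservices.azure.com/",
                                AzureKeyCredential("<your-key>"))
result = client.begin_analyze_document_from_url(
    "prebuilt-layout",
    "https://raw.githubusercontent.com/Azure-Samples/cognitive-services-REST-api-samples/master/curl/form-recognizer/layout-page-001.jpg").result()

for paragraph in result.paragraphs:
    role = paragraph.role or "body"  # role is None for plain body text
    print(f"[{role}] {paragraph.content}")
```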
-
-### Pages extraction
-
-The pages collection is the first object you see in the service response.
-
-```json
-"pages": [
- {
- "pageNumber": 1,
- "angle": 0,
- "width": 915,
- "height": 1190,
- "unit": "pixel",
- "words": [],
- "lines": [],
- "spans": [],
- "kind": "document"
- }
-]
-```
-
-### Text lines and words extraction
-
-The document layout model in Form Recognizer extracts print and handwritten style text as `lines` and `words`. The model outputs bounding `polygon` coordinates and `confidence` for the extracted words. The `styles` collection includes any handwritten style for lines if detected along with the spans pointing to the associated text. This feature applies to [supported handwritten languages](language-support.md).
-
-```json
-"words": [
- {
- "content": "While",
- "polygon": [],
- "confidence": 0.997,
- "span": {}
- },
-],
-"lines": [
- {
- "content": "While healthcare is still in the early stages of its Al journey, we",
- "polygon": [],
- "spans": [],
- }
-]
-```
-
-### Selection marks extraction
-
-The Layout model also extracts selection marks from documents. Extracted selection marks appear within the `pages` collection for each page. They include the bounding `polygon`, `confidence`, and selection `state` (`selected/unselected`). Any associated text if extracted is also included as the starting index (`offset`) and `length` that references the top level `content` property that contains the full text from the document.
-
-```json
-{
- "selectionMarks": [
- {
- "state": "unselected",
- "polygon": [],
- "confidence": 0.995,
- "span": {
- "offset": 1421,
- "length": 12
- }
- }
- ]
-}
-```
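
A minimal sketch of reading the same selection marks through the Python SDK follows; it assumes the same placeholder endpoint, key, and sample URL as the paragraph sketch above.

```python
# Sketch: read selection marks per page from a prebuilt-layout result.
# Placeholder endpoint/key; assumes azure-ai-formrecognizer >= 3.2.
from azure.ai.formrecognizer import DocumentAnalysisClient
from azure.core.credentials import AzureKeyCredential

client = DocumentAnalysisClient("https://<your-resource>.cognitiveservices.azure.com/",
                                AzureKeyCredential("<your-key>"))
result = client.begin_analyze_document_from_url(
    "prebuilt-layout",
    "https://raw.githubusercontent.com/Azure-Samples/cognitive-services-REST-api-samples/master/curl/form-recognizer/layout-page-001.jpg").result()

for page in result.pages:
    for mark in page.selection_marks:
        print(f"Page {page.page_number}: {mark.state} (confidence {mark.confidence})")
```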
-
-### Extract tables from documents and images
-
-Extracting tables is a key requirement for processing documents containing large volumes of data typically formatted as tables. The Layout model extracts tables in the `pageResults` section of the JSON output. Extracted table information includes the number of columns and rows, row span, and column span. Each cell is output with its bounding polygon, along with whether it's recognized as a `columnHeader`. The model supports extracting tables that are rotated. Each table cell contains the row and column index and bounding polygon coordinates. For the cell text, the model outputs the `span` information containing the starting index (`offset`). The model also outputs the `length` within the top-level content that contains the full text from the document.
-
-```json
-{
- "tables": [
- {
- "rowCount": 9,
- "columnCount": 4,
- "cells": [
- {
- "kind": "columnHeader",
- "rowIndex": 0,
- "columnIndex": 0,
- "columnSpan": 4,
- "content": "(In millions, except earnings per share)",
- "boundingRegions": [],
- "spans": []
- },
- ]
- }
- ]
-}
-
-```
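
The sketch below shows one way to walk the extracted tables with the Python SDK, flagging cells the model marked as column headers. Endpoint, key, and the sample URL are placeholders carried over from the earlier layout sketches.

```python
# Sketch: walk extracted tables and flag column header cells from a prebuilt-layout result.
# Placeholder endpoint/key; assumes azure-ai-formrecognizer >= 3.2.
from azure.ai.formrecognizer import DocumentAnalysisClient
from azure.core.credentials import AzureKeyCredential

client = DocumentAnalysisClient("https://<your-resource>.cognitiveservices.azure.com/",
                                AzureKeyCredential("<your-key>"))
result = client.begin_analyze_document_from_url(
    "prebuilt-layout",
    "https://raw.githubusercontent.com/Azure-Samples/cognitive-services-REST-api-samples/master/curl/form-recognizer/layout-page-001.jpg").result()

for i, table in enumerate(result.tables):
    print(f"Table {i}: {table.row_count} rows x {table.column_count} columns")
    for cell in table.cells:
        marker = " (columnHeader)" if cell.kind == "columnHeader" else ""
        print(f"  [{cell.row_index}, {cell.column_index}] {cell.content}{marker}")
```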
-
-### Handwritten style for text lines (Latin languages only)
-
-The response classifies whether each text line is of handwritten style, along with a confidence score. This feature is only supported for Latin languages. The following JSON snippet shows an example.
-
-```json
-"styles": [
-{
- "confidence": 0.95,
- "spans": [
- {
- "offset": 509,
- "length": 24
- }
- ],
- "isHandwritten": true
- }
-]
-```
-
-### Annotations extraction
-
-The Layout model extracts annotations in documents, such as checks and crosses. The response includes the kind of annotation, along with a confidence score and bounding polygon.
-
-```json
- {
- "pages": [
- {
- "annotations": [
- {
- "kind": "cross",
- "polygon": [...],
- "confidence": 1
- }
- ]
- }
- ]
-}
-```
-
-### Barcode extraction
-
-The Layout model extracts all identified barcodes in the `barcodes` collection as a top level object under `content`. Inside the `content`, detected barcodes are represented as `:barcode:`. Each entry in this collection represents a barcode and includes the barcode type as `kind` and the embedded barcode content as `value` along with its `polygon` coordinates. Initially, barcodes appear at the end of each page.
-
-#### Supported barcode types
-
-| **Barcode Type** | **Example** |
-| | |
-| QR Code |:::image type="content" source="media/barcodes/qr-code.png" alt-text="Screenshot of the QR Code.":::|
-| Code 39 |:::image type="content" source="media/barcodes/code-39.png" alt-text="Screenshot of the Code 39.":::|
-| Code 128 |:::image type="content" source="media/barcodes/code-128.png" alt-text="Screenshot of the Code 128.":::|
-| UPC (UPC-A & UPC-E) |:::image type="content" source="media/barcodes/upc.png" alt-text="Screenshot of the UPC.":::|
-| PDF417 |:::image type="content" source="media/barcodes/pdf-417.png" alt-text="Screenshot of the PDF417.":::|
-
- > [!NOTE]
- > The `confidence` score is hard-coded for the `2023-02-28` public preview.
-
- ```json
- "content": ":barcode:",
- "pages": [
- {
- "pageNumber": 1,
- "barcodes": [
- {
- "kind": "QRCode",
- "value": "http://test.com/",
- "span": { ... },
- "polygon": [...],
- "confidence": 1
- }
- ]
- }
- ]
- ```
-
-### Extract selected pages from documents
-
-For large multi-page documents, use the `pages` query parameter to indicate specific page numbers or page ranges for text extraction.
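
In the Python SDK, this query parameter surfaces as the `pages` keyword argument. The sketch below is a rough illustration, assuming a placeholder endpoint and key and a hypothetical multi-page PDF file name.

```python
# Sketch: restrict analysis to a page range with the pages parameter.
# Placeholder endpoint/key and a hypothetical 10-page PDF; assumes azure-ai-formrecognizer >= 3.2.
from azure.ai.formrecognizer import DocumentAnalysisClient
from azure.core.credentials import AzureKeyCredential

client = DocumentAnalysisClient("https://<your-resource>.cognitiveservices.azure.com/",
                                AzureKeyCredential("<your-key>"))
with open("large-report.pdf", "rb") as f:  # hypothetical multi-page document
    poller = client.begin_analyze_document("prebuilt-layout", document=f, pages="3-6")
result = poller.result()
print("Pages analyzed:", [page.page_number for page in result.pages])
```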
---
-### Natural reading order output (Latin only)
-
-You can specify the order in which the text lines are output with the `readingOrder` query parameter. Use `natural` for a more human-friendly reading order output as shown in the following example. This feature is only supported for Latin languages.
--
-### Select page numbers or ranges for text extraction
-
-For large multi-page documents, use the `pages` query parameter to indicate specific page numbers or page ranges for text extraction. The following example shows a document with 10 pages, with text extracted for both cases - all pages (1-10) and selected pages (3-6).
--
-## The Get Analyze Layout Result operation
-
-The second step is to call the [Get Analyze Layout Result](https://westcentralus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-v2-1/operations/GetAnalyzeLayoutResult) operation. This operation takes as input the Result ID the Analyze Layout operation created. It returns a JSON response that contains a **status** field with the following possible values.
-
-|Field| Type | Possible values |
-|:--|:-:|:-|
-|status | string | `notStarted`: The analysis operation hasn't started.<br /><br />`running`: The analysis operation is in progress.<br /><br />`failed`: The analysis operation has failed.<br /><br />`succeeded`: The analysis operation has succeeded.|
-
-Call this operation iteratively until it returns the `succeeded` value. Use an interval of 3 to 5 seconds to avoid exceeding the requests per second (RPS) rate.
-
-When the **status** field has the `succeeded` value, the JSON response includes the extracted layout, text, tables, and selection marks. The extracted data includes extracted text lines and words, bounding boxes, text appearance with handwritten indication, tables, and selection marks with selected/unselected indicated.
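
If you're calling the v2.1 REST API directly, the flow looks roughly like the sketch below: submit the document to the Analyze Layout operation, then poll the URL returned in the `Operation-Location` response header until `status` is `succeeded` or `failed`. The endpoint path, file name, and polling interval here are assumptions based on the description above; confirm the details against the REST reference linked earlier.

```python
# Rough sketch of the v2.1 Analyze Layout / Get Analyze Layout Result flow using requests.
# The endpoint path, file name, and headers are assumptions; verify against the v2.1 REST reference.
import time
import requests

endpoint = "https://<your-resource>.cognitiveservices.azure.com"  # placeholder
key = "<your-key>"                                                 # placeholder
headers = {"Ocp-Apim-Subscription-Key": key, "Content-Type": "application/pdf"}

with open("sample-layout.pdf", "rb") as f:  # placeholder document
    submit = requests.post(f"{endpoint}/formrecognizer/v2.1/layout/analyze",
                           headers=headers, data=f.read())
submit.raise_for_status()
result_url = submit.headers["Operation-Location"]  # URL of the result to poll

while True:
    time.sleep(4)  # 3 to 5 second polling interval, per the guidance above
    response = requests.get(result_url, headers={"Ocp-Apim-Subscription-Key": key}).json()
    if response["status"] in ("succeeded", "failed"):
        break
print(response["status"])
```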
-
-### Handwritten classification for text lines (Latin only)
-
-The response classifies whether each text line is handwritten or not, along with a confidence score. This feature is only supported for Latin languages. The following example shows the handwritten classification for the text in the image.
--
-### Sample JSON output
-
-The response to the *Get Analyze Layout Result* operation is a structured representation of the document with all the information extracted.
-See here for a [sample document file](https://github.com/Azure-Samples/cognitive-services-REST-api-samples/tree/master/curl/form-recognizer/sample-layout.pdf) and its structured output [sample layout output](https://github.com/Azure-Samples/cognitive-services-REST-api-samples/tree/master/curl/form-recognizer/sample-layout-output.json).
-
-The JSON output has two parts:
-
-* `readResults` node contains all of the recognized text and selection marks. The text presentation hierarchy is page, then line, then individual words.
-* `pageResults` node contains the tables and cells extracted with their bounding boxes, confidence, and a reference to the lines and words in the `readResults` field.
-
-## Example Output
-
-### Text
-
-Layout API extracts text from documents and images with multiple text angles and colors. It accepts photos of documents, faxes, printed and/or handwritten (English only) text, and mixed modes. Text is extracted with information provided on lines, words, bounding boxes, confidence scores, and style (handwritten or other). All the text information is included in the `readResults` section of the JSON output.
-
-### Tables with headers
-
-Layout API extracts tables in the `pageResults` section of the JSON output. Documents can be scanned, photographed, or digitized. Tables can be complex with merged cells or columns, with or without borders, and with odd angles. Extracted table information includes the number of columns and rows, row span, and column span. Each cell is output with its bounding box and an indication of whether it's recognized as part of a header or not. Model-predicted header cells can span multiple rows and aren't necessarily the first rows in a table. They also work with rotated tables. Each table cell also includes the full text with references to the individual words in the `readResults` section.
-
-![Tables example](./media/layout-table-header-demo.gif)
-
-### Selection marks
-
-Layout API also extracts selection marks from documents. Extracted selection marks include the bounding box, confidence, and state (selected/unselected). Selection mark information is extracted in the `readResults` section of the JSON output.
-
-### Migration guide
-
-* Follow our [**Form Recognizer v3.0 migration guide**](v3-migration-guide.md) to learn how to use the v3.0 version in your applications and workflows.
-
-## Next steps
--
-* [Learn how to process your own forms and documents](quickstarts/try-form-recognizer-studio.md) with the [Form Recognizer Studio](https://formrecognizer.appliedai.azure.com/studio)
-
-* Complete a [Form Recognizer quickstart](quickstarts/get-started-sdks-rest-api.md?view=form-recog-3.0.0&preserve-view=true) and get started creating a document processing app in the development language of your choice.
---
-* [Learn how to process your own forms and documents](quickstarts/try-sample-label-tool.md) with the [Form Recognizer Sample Labeling tool](https://fott-2-1.azurewebsites.net/)
-
-* Complete a [Form Recognizer quickstart](quickstarts/get-started-sdks-rest-api.md?view=form-recog-2.1.0&preserve-view=true) and get started creating a document processing app in the development language of your choice.
-
applied-ai-services Concept Model Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/concept-model-overview.md
- Title: Document processing models - Form Recognizer-
-description: Document processing models for OCR, document layout, invoices, identity, custom models, and more to extract text, structure, and key-value pairs.
----- Previously updated : 05/23/2023---
-<!-- markdownlint-disable MD024 -->
-<!-- markdownlint-disable MD033 -->
-<!-- markdownlint-disable MD011 -->
-
-# Document processing models
---
- Azure Form Recognizer supports a wide variety of models that enable you to add intelligent document processing to your apps and flows. You can use a prebuilt document analysis or domain specific model or train a custom model tailored to your specific business needs and use cases. Form Recognizer can be used with the REST API or Python, C#, Java, and JavaScript SDKs.
-
-## Model overview
--
-| **Model** | **Description** |
-|---|---|
-|**Document analysis models**||
-| [Read OCR](#read-ocr) | Extract print and handwritten text including words, locations, and detected languages.|
-| [Layout analysis](#layout-analysis) | Extract text and document layout elements like tables, selection marks, titles, section headings, and more.|
-| [General document](#general-document) | Extract key-value pairs in addition to text and document structure information.|
-|**Prebuilt models**||
-| [Health insurance card](#health-insurance-card) | Automate healthcare processes by extracting insurer, member, prescription, group number and other key information from US health insurance cards.|
-| [W-2](#w-2) | Process W2 forms to extract employee, employer, wage, and other information. |
-| [Invoice](#invoice) | Extract key invoice data such as customer name, billing address, due date, and amount due. |
-| [Receipt](#receipt) | Extract key data such as merchant name, transaction date, line items, and totals from sales receipts.|
-| [Identity document (ID)](#identity-document-id) | Extract identity (ID) fields from US driver licenses and international passports. |
-| [Business card](#business-card) | Scan business cards to extract key fields and data into your applications. |
-|**Custom models**||
-| [Custom model (overview)](#custom-models) | Extract data from forms and documents specific to your business. Custom models are trained for your distinct data and use cases. |
-| [Custom extraction models](#custom-extraction)| &#9679; **Custom template models** use layout cues to extract values from documents and are suitable to extract fields from highly structured documents with defined visual templates.</br>&#9679; **Custom neural models** are trained on various document types to extract fields from structured, semi-structured and unstructured documents.|
-| [Custom classification model](#custom-classifier)| The **Custom classification model** can classify each page in an input file to identify the document(s) within and can also identify multiple documents or multiple instances of a single document within an input file.
-| [Composed models](#composed-models) | Combine several custom models into a single model to automate processing of diverse document types with a single composed model.
-
-### Read OCR
--
-The Read API analyzes and extracts lines, words, their locations, detected languages, and handwritten style if detected.
-
-***Sample document processed using the [Form Recognizer Studio](https://formrecognizer.appliedai.azure.com/studio/read)***:
--
-> [!div class="nextstepaction"]
-> [Learn more: read model](concept-read.md)
-
-### Layout analysis
--
-The Layout analysis model analyzes and extracts text, tables, selection marks, and other structure elements like titles, section headings, page headers, page footers, and more.
-
-***Sample document processed using the [Form Recognizer Studio](https://formrecognizer.appliedai.azure.com/studio/layout)***:
--
-> [!div class="nextstepaction"]
->
-> [Learn more: layout model](concept-layout.md)
-
-### General document
--
-The general document model is ideal for extracting common key-value pairs from forms and documents. It's a pretrained model and can be directly invoked via the REST API and the SDKs. You can use the general document model as an alternative to training a custom model.
-
-***Sample document processed using the [Form Recognizer Studio](https://formrecognizer.appliedai.azure.com/studio/document)***:
--
-> [!div class="nextstepaction"]
-> [Learn more: general document model](concept-general-document.md)
-
-### Health insurance card
--
-The health insurance card model combines powerful Optical Character Recognition (OCR) capabilities with deep learning models to analyze and extract key information from US health insurance cards.
-
-***Sample US health insurance card processed using [Form Recognizer Studio](https://formrecognizer.appliedai.azure.com/studio/prebuilt?formType=healthInsuranceCard.us)***:
--
-> [!div class="nextstepaction"]
-> [Learn more: Health insurance card model](concept-insurance-card.md)
-
-### W-2
--
-The W-2 form model extracts key information reported in each box on a W-2 form. The model supports standard and customized forms from 2018 to the present, including single and multiple forms on one page.
-
-***Sample W-2 document processed using [Form Recognizer Studio](https://formrecognizer.appliedai.azure.com/studio/prebuilt?formType=tax.us.w2)***:
--
-> [!div class="nextstepaction"]
-> [Learn more: W-2 model](concept-w2.md)
-
-### Invoice
--
-The invoice model automates invoice processing by extracting customer name, billing address, due date, amount due, line items, and other key data. Currently, the model supports English, Spanish, German, French, Italian, Portuguese, and Dutch invoices.
-
-***Sample invoice processed using [Form Recognizer Studio](https://formrecognizer.appliedai.azure.com/studio/prebuilt?formType=invoice)***:
--
-> [!div class="nextstepaction"]
-> [Learn more: invoice model](concept-invoice.md)
-
-### Receipt
--
-Use the receipt model to extract merchant name, dates, line items, quantities, and totals from printed and handwritten sales receipts. Version v3.0 also supports single-page hotel receipt processing.
-
-***Sample receipt processed using [Form Recognizer Studio](https://formrecognizer.appliedai.azure.com/studio/prebuilt?formType=receipt)***:
--
-> [!div class="nextstepaction"]
-> [Learn more: receipt model](concept-receipt.md)
-
-### Identity document (ID)
--
-Use the Identity document (ID) model to process U.S. Driver's Licenses (all 50 states and District of Columbia) and biographical pages from international passports (excluding visa and other travel documents) to extract key fields.
-
-***Sample U.S. Driver's License processed using [Form Recognizer Studio](https://formrecognizer.appliedai.azure.com/studio/prebuilt?formType=idDocument)***:
--
-> [!div class="nextstepaction"]
-> [Learn more: identity document model](concept-id-document.md)
-
-### Business card
--
-Use the business card model to scan and extract key information from business card images.
-
-***Sample business card processed using [Form Recognizer Studio](https://formrecognizer.appliedai.azure.com/studio/prebuilt?formType=businessCard)***:
--
-> [!div class="nextstepaction"]
-> [Learn more: business card model](concept-business-card.md)
-
-### Custom models
--
-Custom document models analyze and extract data from forms and documents specific to your business. They're trained to recognize form fields within your distinct content and extract key-value pairs and table data. You only need five examples of the same form type to get started.
-
-Version v3.0 custom model supports signature detection in custom forms (template model) and cross-page tables in both template and neural models.
-
-***Sample custom template processed using [Form Recognizer Studio](https://formrecognizer.appliedai.azure.com/studio/customform/projects)***:
--
-> [!div class="nextstepaction"]
-> [Learn more: custom model](concept-custom.md)
-
-#### Custom extraction
--
-Custom extraction model can be one of two types, **custom template** or **custom neural**. To create a custom extraction model, label a dataset of documents with the values you want extracted and train the model on the labeled dataset. You only need five examples of the same form or document type to get started.
-
-***Sample custom extraction processed using [Form Recognizer Studio](https://formrecognizer.appliedai.azure.com/studio/customform/projects)***:
--
-> [!div class="nextstepaction"]
-> [Learn more: custom template model](concept-custom-template.md)
-
-> [!div class="nextstepaction"]
-> [Learn more: custom neural model](./concept-custom-neural.md)
-
-#### Custom classifier
--
-The custom classification model enables you to identify the document type prior to invoking the extraction model. The classification model is available starting with the `2023-02-28-preview` API version. Training a custom classification model requires at least two distinct classes and a minimum of five samples per class.
-
-> [!div class="nextstepaction"]
-> [Learn more: custom classification model](concept-custom-classifier.md)
-
-#### Composed models
-
-A composed model is created by taking a collection of custom models and assigning them to a single model built from your form types. You can assign multiple custom models to a composed model called with a single model ID. You can assign up to 200 trained custom models to a single composed model.
-
-***Composed model dialog window in [Form Recognizer Studio](https://formrecognizer.appliedai.azure.com/studio/customform/projects)***:
--
-> [!div class="nextstepaction"]
-> [Learn more: custom model](concept-custom.md)
-
-## Model data extraction
-
-| **Model ID** | **Text extraction** | **Language detection** | **Selection Marks** | **Tables** | **Paragraphs** | **Structure** | **Key-Value pairs** | **Fields** |
-|:--|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|
-| [prebuilt-read](concept-read.md#data-extraction) | ✓ | ✓ | | | ✓ | | | |
-| [prebuilt-healthInsuranceCard.us](concept-insurance-card.md#field-extraction) | ✓ | | ✓ | | ✓ | | | ✓ |
-| [prebuilt-tax.us.w2](concept-w2.md#field-extraction) | ✓ | | ✓ | | ✓ | | | ✓ |
-| [prebuilt-document](concept-general-document.md#data-extraction)| ✓ | | ✓ | ✓ | ✓ | | ✓ | |
-| [prebuilt-layout](concept-layout.md#data-extraction) | ✓ | | ✓ | ✓ | ✓ | ✓ | | |
-| [prebuilt-invoice](concept-invoice.md#field-extraction) | ✓ | | ✓ | ✓ | ✓ | | ✓ | ✓ |
-| [prebuilt-receipt](concept-receipt.md#field-extraction) | ✓ | | | | ✓ | | | ✓ |
-| [prebuilt-idDocument](concept-id-document.md#field-extractions) | ✓ | | | | ✓ | | | ✓ |
-| [prebuilt-businessCard](concept-business-card.md#field-extractions) | ✓ | | | | ✓ | | | ✓ |
-| [Custom](concept-custom.md#compare-model-features) | ✓ | | ✓ | ✓ | ✓ | | | ✓ |
-
-## Input requirements
--
-> [!NOTE]
-> The [Sample Labeling tool](https://fott-2-1.azurewebsites.net/) does not support the BMP file format. This is a limitation of the tool not the Form Recognizer Service.
-
-### Version migration
-
-Learn how to use Form Recognizer v3.0 in your applications by following our [**Form Recognizer v3.0 migration guide**](v3-migration-guide.md)
---
-| **Model** | **Description** |
-|---|---|
-|**Document analysis**||
-| [Layout](#layout) | Extract text and layout information from documents.|
-|**Prebuilt**||
-| [Invoice](#invoice) | Extract key information from English and Spanish invoices. |
-| [Receipt](#receipt) | Extract key information from English receipts. |
-| [ID document](#id-document) | Extract key information from US driver licenses and international passports. |
-| [Business card](#business-card) | Extract key information from English business cards. |
-|**Custom**||
-| [Custom](#custom) | Extract data from forms and documents specific to your business. Custom models are trained for your distinct data and use cases. |
-| [Composed](#composed-custom-model) | Compose a collection of custom models and assign them to a single model built from your form types.
-
-### Layout
-
-The Layout API analyzes and extracts text, tables and headers, selection marks, and structure information from documents.
-
-***Sample document processed using the [Sample Labeling tool](https://fott-2-1.azurewebsites.net/layout-analyze)***:
--
-> [!div class="nextstepaction"]
->
-> [Learn more: layout model](concept-layout.md)
-
-### Invoice
-
-The invoice model analyzes and extracts key information from sales invoices. The API analyzes invoices in various formats and extracts key information such as customer name, billing address, due date, and amount due.
-
-***Sample invoice processed using the [Sample Labeling tool](https://fott-2-1.azurewebsites.net/prebuilts-analyze)***:
--
-> [!div class="nextstepaction"]
-> [Learn more: invoice model](concept-invoice.md)
-
-### Receipt
-
-* The receipt model analyzes and extracts key information from printed and handwritten sales receipts.
-
-***Sample receipt processed using [Sample Labeling tool](https://fott-2-1.azurewebsites.net/prebuilts-analyze)***:
--
-> [!div class="nextstepaction"]
-> [Learn more: receipt model](concept-receipt.md)
-
-### ID document
-
- The ID document model analyzes and extracts key information from the following documents:
-
-* U.S. Driver's Licenses (all 50 states and District of Columbia)
-
-* Biographical pages from international passports (excluding visa and other travel documents). The API analyzes identity documents and extracts key information.
-
-***Sample U.S. Driver's License processed using the [Sample Labeling tool](https://fott-2-1.azurewebsites.net/prebuilts-analyze)***:
--
-> [!div class="nextstepaction"]
-> [Learn more: identity document model](concept-id-document.md)
-
-### Business card
-
-The business card model analyzes and extracts key information from business card images.
-
-***Sample business card processed using the [Sample Labeling tool](https://fott-2-1.azurewebsites.net/prebuilts-analyze)***:
--
-> [!div class="nextstepaction"]
-> [Learn more: business card model](concept-business-card.md)
-
-### Custom
-
-* Custom models analyze and extract data from forms and documents specific to your business. The API is a machine-learning program trained to recognize form fields within your distinct content and extract key-value pairs and table data. You only need five examples of the same form type to get started and your custom model can be trained with or without labeled datasets.
-
-***Sample custom model processing using the [Sample Labeling tool](https://fott-2-1.azurewebsites.net/)***:
--
-> [!div class="nextstepaction"]
-> [Learn more: custom model](concept-custom.md)
-
-#### Composed custom model
-
-A composed model is created by taking a collection of custom models and assigning them to a single model built from your form types. You can assign multiple custom models to a composed model called with a single model ID. You can assign up to 100 trained custom models to a single composed model.
-
-***Composed model dialog window using the [Sample Labeling tool](https://formrecognizer.appliedai.azure.com/studio/customform/projects)***:
--
-> [!div class="nextstepaction"]
-> [Learn more: custom model](concept-custom.md)
-
-## Model data extraction
-
-| **Model** | **Text extraction** | **Language detection** | **Selection Marks** | **Tables** | **Paragraphs** | **Paragraph roles** | **Key-Value pairs** | **Fields** |
-|:--|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|
-| [Layout](concept-layout.md#data-extraction) | ✓ | | ✓ | ✓ | ✓ | ✓ | | |
-| [Invoice](concept-invoice.md#field-extraction) | ✓ | | ✓ | ✓ | ✓ | | ✓ | ✓ |
-| [Receipt](concept-receipt.md#field-extraction) | ✓ | | | | ✓ | | | ✓ |
-| [ID Document](concept-id-document.md#field-extractions) | ✓ | | | | ✓ | | | ✓ |
-| [Business Card](concept-business-card.md#field-extractions) | ✓ | | | | ✓ | | | ✓ |
-| [Custom Form](concept-custom.md#compare-model-features) | ✓ | | ✓ | ✓ | ✓ | | | ✓ |
-
-## Input requirements
--
-> [!NOTE]
-> The [Sample Labeling tool](https://fott-2-1.azurewebsites.net/) does not support the BMP file format. This is a limitation of the tool not the Form Recognizer Service.
-
-### Version migration
-
- You can learn how to use Form Recognizer v3.0 in your applications by following our [**Form Recognizer v3.0 migration guide**](v3-migration-guide.md)
--
-## Next steps
--
-* Try processing your own forms and documents with the [Form Recognizer Studio](https://formrecognizer.appliedai.azure.com/studio)
-
-* Complete a [Form Recognizer quickstart](quickstarts/get-started-sdks-rest-api.md?view=form-recog-3.0.0&preserve-view=true) and get started creating a document processing app in the development language of your choice.
---
-* Try processing your own forms and documents with the [Form Recognizer Sample Labeling tool](https://fott-2-1.azurewebsites.net/)
-
-* Complete a [Form Recognizer quickstart](quickstarts/get-started-sdks-rest-api.md?view=form-recog-2.1.0&preserve-view=true) and get started creating a document processing app in the development language of your choice.
-
applied-ai-services Concept Query Fields https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/concept-query-fields.md
- Title: Query field extraction - Form Recognizer-
-description: Use Form Recognizer to extract query field data.
----- Previously updated : 05/23/2023-
-monikerRange: 'form-recog-3.0.0'
-
-<!-- markdownlint-disable MD033 -->
-
-# Azure Form Recognizer query field extraction (preview)
-
-**This article applies to:** ![Form Recognizer checkmark](media/yes-icon.png) **Form Recognizer v3.0**, supported by Form Recognizer REST API version [2023-02-28-preview](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-2023-02-28-preview/operations/AnalyzeDocument).
-
-> [!IMPORTANT]
->
-> * The Form Recognizer Studio query fields extraction feature is currently in gated preview. Features, approaches, and processes may change prior to General Availability (GA) based on user feedback.
-> * Complete and submit the [**Form Recognizer private preview request form**](https://aka.ms/form-recognizer/preview/survey) to request access.
-
-Form Recognizer now supports query field extractions using Azure OpenAI capabilities. With query field extraction, you can add fields to the extraction process using a query request without the need for added training.
-
-> [!NOTE]
->
-> Form Recognizer Studio query field extraction is currently available with the general document model for the `2023-02-28-preview` release.
-
-## Select query fields
-
-For query field extraction, specify the fields you want to extract and Form Recognizer analyzes the document accordingly. Here's an example:
-
-* If you're processing a contract in the Form Recognizer Studio, you can pass a list of field labels like `Party1`, `Party2`, `TermsOfUse`, `PaymentTerms`, `PaymentDate`, and `TermEndDate` as part of the analyze document request (see the sketch after this list).
-
- :::image type="content" source="media/studio/query-field-select.png" alt-text="Screenshot of query fields selection window in Form Recognizer Studio.":::
-
-* Form Recognizer utilizes the capabilities of both [**Azure OpenAI Service**](../../cognitive-services/openai/overview.md) and extraction models to analyze and extract the field data and return the values in a structured JSON output.
-
-* In addition to the query fields, the response includes text, tables, selection marks, general document key-value pairs, and other relevant data.
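-
-As a rough, non-authoritative sketch of the request shape, the call below passes the field labels through the preview `features` and `queryFields` query parameters. Because this feature is gated and in preview, treat the parameter names and endpoint details as assumptions that may change; the endpoint, key, and contract URL are placeholders.
-
-```python
-import requests
-
-endpoint = "https://<your-resource>.cognitiveservices.azure.com"  # placeholder
-key = "<your-key>"                                                # placeholder
-
-# Assumed preview parameters: features=queryFields.premium and a comma-separated
-# queryFields list; verify against the 2023-02-28-preview reference before use.
-response = requests.post(
-    f"{endpoint}/formrecognizer/documentModels/prebuilt-document:analyze",
-    params={
-        "api-version": "2023-02-28-preview",
-        "features": "queryFields.premium",
-        "queryFields": "Party1,Party2,TermsOfUse,PaymentTerms,PaymentDate,TermEndDate",
-    },
-    headers={"Ocp-Apim-Subscription-Key": key, "Content-Type": "application/json"},
-    json={"urlSource": "<contract-url>"},  # placeholder document URL
-)
-print(response.headers.get("Operation-Location"))
-```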
-
-## Next steps
-
-> [!div class="nextstepaction"]
-> [Try the Form Recognizer Studio quickstart](./quickstarts/try-form-recognizer-studio.md)
applied-ai-services Concept Read https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/concept-read.md
- Title: OCR for documents - Form Recognizer-
-description: Extract print and handwritten text from scanned and digital documents with Form Recognizer's Read OCR model.
----- Previously updated : 05/23/2023-
-monikerRange: 'form-recog-3.0.0'
--
-# Form Recognizer read (OCR) model
-
-**This article applies to:** ![Form Recognizer v3.0 checkmark](media/yes-icon.png) **Form Recognizer v3.0**.
-
-> [!NOTE]
->
-> For extracting text from external images like labels, street signs, and posters, use the [Computer Vision v4.0 preview Read](../../cognitive-services/Computer-vision/concept-ocr.md) feature optimized for general, non-document images with a performance-enhanced synchronous API that makes it easier to embed OCR in your user experience scenarios.
->
-
-Form Recognizer v3.0's Read Optical Character Recognition (OCR) model runs at a higher resolution than Computer Vision Read and extracts print and handwritten text from PDF documents and scanned images. It also includes preview support for extracting text from Microsoft Word, Excel, PowerPoint, and HTML documents. It detects paragraphs, text lines, words, locations, and languages. The Read model is the underlying OCR engine for other Form Recognizer prebuilt models like Layout, General Document, Invoice, Receipt, Identity (ID) document, in addition to custom models.
-
-## What is OCR for documents?
-
-Optical Character Recognition (OCR) for documents is optimized for large text-heavy documents in multiple file formats and global languages. It includes features like higher-resolution scanning of document images for better handling of smaller and dense text; paragraph detection; and fillable form management. OCR capabilities also include advanced scenarios like single character boxes and accurate extraction of key fields commonly found in invoices, receipts, and other prebuilt scenarios.
-
-## Read OCR supported document types
-
-> [!NOTE]
->
-> * Only API Version 2022-06-30-preview supports Microsoft Word, Excel, PowerPoint, and HTML file formats in addition to all other document types supported by the GA versions.
-> * For the preview of Office and HTML file formats, Read API ignores the pages parameter and extracts all pages by default. Each embedded image counts as 1 page unit, and each worksheet, slide, and page (up to 3,000 characters) counts as 1 page unit.
-
-| **Model** | **Images** | **PDF** | **TIFF** | **Word** | **Excel** | **PowerPoint** | **HTML** |
-|---|---|---|---|---|---|---|---|
-| **prebuilt-read** | GA</br> (2022-08-31)| GA</br> (2022-08-31) | GA</br> (2022-08-31) | Preview</br>(2022-06-30-preview) | Preview</br>(2022-06-30-preview) | Preview</br>(2022-06-30-preview) | Preview</br>(2022-06-30-preview) |
-
-### Data extraction
-
-| **Model** | **Text** | **[Language detection](language-support.md#detected-languages-read-api)** |
-|---|---|---|
-| **prebuilt-read** | ✓ | ✓ |
-
-## Development options
-
-Form Recognizer v3.0 supports the following resources:
-
-| Model | Resources | Model ID |
-|-|||
-|**Read model**| <ul><li>[**Form Recognizer Studio**](https://formrecognizer.appliedai.azure.com)</li><li>[**REST API**](how-to-guides/use-sdk-rest-api.md?view=form-recog-3.0.0&preserve-view=true)</li><li>[**C# SDK**](how-to-guides/use-sdk-rest-api.md?view=form-recog-3.0.0&preserve-view=true?pivots=programming-language-csharp)</li><li>[**Python SDK**](how-to-guides/use-sdk-rest-api.md?view=form-recog-3.0.0&preserve-view=true?pivots=programming-language-python)</li><li>[**Java SDK**](how-to-guides/use-sdk-rest-api.md?view=form-recog-3.0.0&preserve-view=true?pivots=programming-language-java)</li><li>[**JavaScript**](how-to-guides/use-sdk-rest-api.md?view=form-recog-3.0.0&preserve-view=true?pivots=programming-language-javascript)</li></ul>|**prebuilt-read**|
-
-## Try OCR in Form Recognizer
-
-Try extracting text from forms and documents using the Form Recognizer Studio. You need the following assets:
-
-* An Azure subscription - you can [create one for free](https://azure.microsoft.com/free/cognitive-services/)
-
-* A [Form Recognizer instance](https://portal.azure.com/#create/Microsoft.CognitiveServicesFormRecognizer) in the Azure portal. You can use the free pricing tier (`F0`) to try the service. After your resource deploys, select **Go to resource** to get your key and endpoint.
-
- :::image type="content" source="media/containers/keys-and-endpoint.png" alt-text="Screenshot: keys and endpoint location in the Azure portal.":::
-
-### Form Recognizer Studio
-
-> [!NOTE]
-> Currently, Form Recognizer Studio doesn't support Microsoft Word, Excel, PowerPoint, and HTML file formats in the Read version v3.0.
-
-***Sample form processed with [Form Recognizer Studio](https://formrecognizer.appliedai.azure.com/studio/read)***
--
-1. On the Form Recognizer Studio home page, select **Read**
-
-1. You can analyze the sample document or select the **+ Add** button to upload your own sample.
-
-1. Select the **Analyze** button:
-
- :::image type="content" source="media/studio/form-recognizer-studio-read-analyze-v3p2-updated.png" alt-text="Screenshot: analyze read menu.":::
-
- > [!div class="nextstepaction"]
- > [Try Form Recognizer Studio](https://formrecognizer.appliedai.azure.com/studio/layout)
-
-## Input requirements
--
-## Supported languages and locales
-
-Form Recognizer v3.0 supports several languages for the read OCR model. *See* our [Language Support](language-support.md) for a complete list of supported handwritten and printed languages.
-
-## Data detection and extraction
-
-### Microsoft Office and HTML text extraction
-
-Use the parameter `api-version=2022-06-30-preview` when using the REST API or the corresponding SDKs of that API version to preview text extraction from Microsoft Word, Excel, PowerPoint, and HTML files. The following illustration shows extraction of the digital text and text from the images embedded in the Word document by running OCR on the images.
--
-The page units in the model output are computed as shown:
-
-| **File format** | **Computed page unit** | **Total pages** |
-|---|---|---|
-|Word | Up to 3,000 characters = 1 page unit, Each embedded image = 1 page unit | Total pages of up to 3,000 characters each + Total embedded images |
-|Excel | Each worksheet = 1 page unit, Each embedded image = 1 page unit | Total worksheets + Total images |
-|PowerPoint | Each slide = 1 page unit, Each embedded image = 1 page unit | Total slides + Total images |
-|HTML | Up to 3,000 characters = 1 page unit, embedded or linked images not supported | Total pages of up to 3,000 characters each |
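-
-A quick worked example of this computation, using a hypothetical Word document with 7,500 characters of text and two embedded images:
-
-```python
-import math
-
-characters = 7_500        # hypothetical Word document text length
-embedded_images = 2       # hypothetical embedded images
-
-# Up to 3,000 characters = 1 page unit; each embedded image = 1 page unit.
-text_page_units = math.ceil(characters / 3_000)        # 3
-total_page_units = text_page_units + embedded_images   # 5
-print(total_page_units)
-```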
-
- ### Barcode extraction
-
-The Read OCR model extracts all identified barcodes in the `barcodes` collection as a top level object under `content`. Inside the `content`, detected barcodes are represented as `:barcode:`. Each entry in this collection represents a barcode and includes the barcode type as `kind` and the embedded barcode content as `value` along with its `polygon` coordinates. Initially, barcodes appear at the end of each page. Here, the `confidence` is hard-coded for the public preview (`2023-02-28`) release.
-
-#### Supported barcode types
-
-| **Barcode Type** | **Example** |
-|---|---|
-| QR Code |:::image type="content" source="media/barcodes/qr-code.png" alt-text="Screenshot of the QR Code.":::|
-| Code 39 |:::image type="content" source="media/barcodes/code-39.png" alt-text="Screenshot of the Code 39.":::|
-| Code 128 |:::image type="content" source="media/barcodes/code-128.png" alt-text="Screenshot of the Code 128.":::|
-| UPC (UPC-A & UPC-E) |:::image type="content" source="media/barcodes/upc.png" alt-text="Screenshot of the UPC.":::|
-| PDF417 |:::image type="content" source="media/barcodes/pdf-417.png" alt-text="Screenshot of the PDF417.":::|
-
-```json
-"content": ":barcode:",
- "pages": [
- {
- "pageNumber": 1,
- "barcodes": [
- {
- "kind": "QRCode",
- "value": "http://test.com/",
- "span": { ... },
- "polygon": [...],
- "confidence": 1
- }
- ]
- }
- ]
-```
-
-### Paragraphs extraction
-
-The Read OCR model in Form Recognizer extracts all identified blocks of text in the `paragraphs` collection as a top-level object under `analyzeResults`. Each entry in this collection represents a text block and includes the extracted text as `content` and the bounding `polygon` coordinates. The `span` information points to the text fragment within the top-level `content` property that contains the full text from the document.
-
-```json
-"paragraphs": [
- {
- "spans": [],
- "boundingRegions": [],
- "content": "While healthcare is still in the early stages of its Al journey, we are seeing pharmaceutical and other life sciences organizations making major investments in Al and related technologies.\" TOM LAWRY | National Director for Al, Health and Life Sciences | Microsoft"
- }
-]
-```
-
-### Language detection
-
-The Read OCR model in Form Recognizer adds [language detection](language-support.md#detected-languages-read-api) as a new feature for text lines. Read predicts the detected primary language for each text line along with the `confidence` in the `languages` collection under `analyzeResult`.
-
-```json
-"languages": [
- {
- "spans": [
- {
- "offset": 0,
- "length": 131
- }
- ],
- "locale": "en",
- "confidence": 0.7
- },
-]
-```
-
-### Extract pages from documents
-
-The page units in the model output are computed as shown:
-
-| **File format** | **Computed page unit** | **Total pages** |
-|---|---|---|
-|Images | Each image = 1 page unit | Total images |
-|PDF | Each page in the PDF = 1 page unit | Total pages in the PDF |
-|TIFF | Each image in the TIFF = 1 page unit | Total images in the TIFF |
-
-```json
-"pages": [
- {
- "pageNumber": 1,
- "angle": 0,
- "width": 915,
- "height": 1190,
- "unit": "pixel",
- "words": [],
- "lines": [],
- "spans": [],
- "kind": "document"
- }
-]
-```
-
-### Extract text lines and words
-
-The Read OCR model extracts print and handwritten style text as `lines` and `words`. The model outputs bounding `polygon` coordinates and `confidence` for the extracted words. The `styles` collection includes any handwritten style for lines if detected along with the spans pointing to the associated text. This feature applies to [supported handwritten languages](language-support.md).
-
-For the preview of Microsoft Word, Excel, PowerPoint, and HTML file support, Read extracts all embedded text as is. For any embedded images, it runs OCR on the images to extract text and append the text from each image as an added entry to the `pages` collection. These added entries include the extracted text lines and words, their bounding polygons, confidences, and the spans pointing to the associated text.
-
-```json
-"words": [
- {
- "content": "While",
- "polygon": [],
- "confidence": 0.997,
- "span": {}
- },
-],
-"lines": [
- {
- "content": "While healthcare is still in the early stages of its Al journey, we",
- "polygon": [],
- "spans": [],
- }
-]
-```
-
-### Select page(s) for text extraction
-
-For large multi-page PDF documents, use the `pages` query parameter to indicate specific page numbers or page ranges for text extraction.
-
-> [!NOTE]
-> For the preview of Microsoft Word, Excel, PowerPoint, and HTML file support, the Read API ignores the pages parameter and extracts all pages by default.
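-
-For example, here's a minimal sketch of a v3.0 Read request limited to pages 1 and 3 through 5; the endpoint, key, and document URL are placeholders.
-
-```python
-import requests
-
-endpoint = "https://<your-resource>.cognitiveservices.azure.com"  # placeholder
-key = "<your-key>"                                                # placeholder
-
-# Restrict OCR to pages 1 and 3-5 of a large PDF.
-response = requests.post(
-    f"{endpoint}/formrecognizer/documentModels/prebuilt-read:analyze",
-    params={"api-version": "2022-08-31", "pages": "1,3-5"},
-    headers={"Ocp-Apim-Subscription-Key": key, "Content-Type": "application/json"},
-    json={"urlSource": "<pdf-url>"},
-)
-print(response.headers.get("Operation-Location"))
-```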
-
-### Handwritten style for text lines (Latin languages only)
-
-The response classifies whether each text line is handwritten or not, along with a confidence score. This feature is only supported for Latin languages. The following JSON snippet shows an example.
-
-```json
-"styles": [
-{
- "confidence": 0.95,
- "spans": [
- {
- "offset": 509,
- "length": 24
- }
- "isHandwritten": true
- ]
-}
-```
-
-## Next steps
-
-Complete a Form Recognizer quickstart:
-
-> [!div class="checklist"]
->
-> * [**REST API**](how-to-guides/use-sdk-rest-api.md?view=form-recog-3.0.0&preserve-view=true)
-> * [**C# SDK**](how-to-guides/use-sdk-rest-api.md?view=form-recog-3.0.0&preserve-view=true?pivots=programming-language-csharp)
-> * [**Python SDK**](how-to-guides/use-sdk-rest-api.md?view=form-recog-3.0.0&preserve-view=true?pivots=programming-language-python)
-> * [**Java SDK**](how-to-guides/use-sdk-rest-api.md?view=form-recog-3.0.0&preserve-view=true?pivots=programming-language-java)
-> * [**JavaScript**](how-to-guides/use-sdk-rest-api.md?view=form-recog-3.0.0&preserve-view=true?pivots=programming-language-javascript)</li></ul>
-
-Explore our REST API:
-
-> [!div class="nextstepaction"]
-> [Form Recognizer API v3.0](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-2022-08-31/operations/AnalyzeDocument)
applied-ai-services Concept Receipt https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/concept-receipt.md
- Title: Receipt data extraction - Form Recognizer-
-description: Use machine learning powered receipt data extraction model to digitize receipts.
----- Previously updated : 05/23/2023--
-<!-- markdownlint-disable MD033 -->
-
-# Azure Form Recognizer receipt model
---
-The Form Recognizer receipt model combines powerful Optical Character Recognition (OCR) capabilities with deep learning models to analyze and extract key information from sales receipts. Receipts can be of various formats and quality including printed and handwritten receipts. The API extracts key information such as merchant name, merchant phone number, transaction date, tax, and transaction total and returns structured JSON data.
-
-## Receipt data extraction
-
-Receipt digitization is the process of converting scanned receipts into digital form for downstream processing. Azure Form Recognizer OCR-powered receipt data extraction helps to automate the conversion and save time and effort.
--
-***Sample receipt processed with [Form Recognizer Studio](https://formrecognizer.appliedai.azure.com/studio/prebuilt?formType=receipt)***:
----
-**Sample receipt processed with [Form Recognizer Sample Labeling tool](https://fott-2-1.azurewebsites.net/connection)**:
---
-## Development options
-
-Form Recognizer v3.0 supports the following tools:
-
-| Feature | Resources | Model ID |
-|-|-|--|
-|**Receipt model**| <ul><li>[**Form Recognizer Studio**](https://formrecognizer.appliedai.azure.com)</li><li>[**REST API**](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-2022-08-31/operations/AnalyzeDocument)</li><li>[**C# SDK**](quickstarts/get-started-sdks-rest-api.md?view=form-recog-3.0.0&preserve-view=true)</li><li>[**Python SDK**](quickstarts/get-started-sdks-rest-api.md?view=form-recog-3.0.0&preserve-view=true)</li></ul>|**prebuilt-receipt**|
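-
-As a rough sketch of the v3.0 REST flow (one of the options listed above), the snippet below submits a receipt URL to the prebuilt receipt model, polls for completion, and reads two extracted fields. The endpoint, key, and receipt URL are placeholders.
-
-```python
-import time
-import requests
-
-endpoint = "https://<your-resource>.cognitiveservices.azure.com"  # placeholder
-key = "<your-key>"                                                # placeholder
-
-submit = requests.post(
-    f"{endpoint}/formrecognizer/documentModels/prebuilt-receipt:analyze",
-    params={"api-version": "2022-08-31"},
-    headers={"Ocp-Apim-Subscription-Key": key, "Content-Type": "application/json"},
-    json={"urlSource": "<receipt-image-url>"},  # placeholder
-)
-submit.raise_for_status()
-result_url = submit.headers["Operation-Location"]
-
-# Poll until the asynchronous analysis finishes.
-while True:
-    result = requests.get(result_url, headers={"Ocp-Apim-Subscription-Key": key}).json()
-    if result["status"] in ("succeeded", "failed"):
-        break
-    time.sleep(3)
-
-# Field names follow the field extraction tables later in this article;
-# check that a field is present before reading it in real code.
-fields = result["analyzeResult"]["documents"][0]["fields"]
-print(fields.get("MerchantName", {}).get("content"), fields.get("Total", {}).get("content"))
-```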
---
-Form Recognizer v2.1 supports the following tools:
-
-| Feature | Resources |
-|-|-|
-|**Receipt model**| <ul><li>[**Form Recognizer labeling tool**](https://fott-2-1.azurewebsites.net/prebuilts-analyze)</li><li>[**REST API**](./how-to-guides/use-sdk-rest-api.md?pivots=programming-language-rest-api&preserve-view=true&tabs=windows&view=form-recog-2.1.0#analyze-receipts)</li><li>[**Client-library SDK**](/azure/applied-ai-services/form-recognizer/how-to-guides/v2-1-sdk-rest-api)</li><li>[**Form Recognizer Docker container**](containers/form-recognizer-container-install-run.md?tabs=receipt#run-the-container-with-the-docker-compose-up-command)</li></ul>|
--
-## Input requirements
-----
-* Supported file formats: JPEG, PNG, PDF, and TIFF
-* For PDF and TIFF, Form Recognizer can process up to 2000 pages for standard tier subscribers or only the first two pages for free-tier subscribers.
-* The file size must be less than 50 MB and dimensions at least 50 x 50 pixels and at most 10,000 x 10,000 pixels.
--
-### Try receipt data extraction
-
-See how Form Recognizer extracts data, including time and date of transactions, merchant information, and amount totals from receipts. You need the following resources:
-
-* An Azure subscription - you can [create one for free](https://azure.microsoft.com/free/cognitive-services/)
-
-* A [Form Recognizer instance](https://portal.azure.com/#create/Microsoft.CognitiveServicesFormRecognizer) in the Azure portal. You can use the free pricing tier (`F0`) to try the service. After your resource deploys, select **Go to resource** to get your key and endpoint.
-
- :::image type="content" source="media/containers/keys-and-endpoint.png" alt-text="Screenshot: keys and endpoint location in the Azure portal.":::
--
-#### Form Recognizer Studio
-
-> [!NOTE]
-> Form Recognizer Studio is available with the v3.0 API.
-
-1. On the Form Recognizer Studio home page, select **Receipts**
-
-1. You can analyze the sample receipt or select the **+ Add** button to upload your own sample.
-
-1. Select the **Analyze** button:
-
- :::image type="content" source="media/studio/receipt-analyze.png" alt-text="Screenshot: analyze receipt menu.":::
-
- > [!div class="nextstepaction"]
- > [Try Form Recognizer Studio](https://formrecognizer.appliedai.azure.com/studio/prebuilt?formType=receipt)
---
-## Form Recognizer Sample Labeling tool
-
-1. Navigate to the [Form Recognizer Sample Tool](https://fott-2-1.azurewebsites.net/).
-
-1. On the sample tool home page, select the **Use prebuilt model to get data** tile.
-
- :::image type="content" source="media/label-tool/prebuilt-1.jpg" alt-text="Screenshot of the layout model analyze results process.":::
-
-1. Select the **Form Type** to analyze from the dropdown menu.
-
-1. Choose a URL for the file you would like to analyze from the options below:
-
- * [**Sample invoice document**](https://raw.githubusercontent.com/Azure-Samples/cognitive-services-REST-api-samples/master/curl/form-recognizer/invoice_sample.jpg).
- * [**Sample ID document**](https://raw.githubusercontent.com/Azure-Samples/cognitive-services-REST-api-samples/master/curl/form-recognizer/DriverLicense.png).
- * [**Sample receipt image**](https://raw.githubusercontent.com/Azure-Samples/cognitive-services-REST-api-samples/master/curl/form-recognizer/contoso-allinone.jpg).
- * [**Sample business card image**](https://raw.githubusercontent.com/Azure/azure-sdk-for-python/master/sdk/formrecognizer/azure-ai-formrecognizer/samples/sample_forms/business_cards/business-card-english.jpg).
-
-1. In the **Source** field, select **URL** from the dropdown menu, paste the selected URL, and select the **Fetch** button.
-
- :::image type="content" source="media/label-tool/fott-select-url.png" alt-text="Screenshot of source location dropdown menu.":::
-
-1. In the **Form Recognizer service endpoint** field, paste the endpoint that you obtained with your Form Recognizer subscription.
-
-1. In the **key** field, paste the key you obtained from your Form Recognizer resource.
-
- :::image type="content" source="media/fott-select-form-type.png" alt-text="Screenshot of the select-form-type dropdown menu.":::
-
-1. Select **Run analysis**. The Form Recognizer Sample Labeling tool calls the Analyze Prebuilt API and analyzes the document.
-
-1. View the results - see the extracted key-value pairs, line items, highlighted extracted text, and detected tables.
-
- :::image type="content" source="media/receipts-example.jpg" alt-text="Screenshot of the layout model analyze results operation.":::
-
-> [!NOTE]
-> The [Sample Labeling tool](https://fott-2-1.azurewebsites.net/) does not support the BMP file format. This is a limitation of the tool not the Form Recognizer Service.
---
-## Supported languages and locales
-
->[!NOTE]
-> Form Recognizer auto-detects language and locale data.
-
-### [**2022-08-31 (GA)**](#tab/2022-08-31)
-
-#### Thermal receipts (retail, meal, parking, etc.)
-
-| Supported Languages | Details |
-|:--|:-:|
-|English|United States (`en-US`), Australia (`en-AU`), Canada (`en-CA`), United Kingdom (`en-GB`), India (`en-IN`), United Arab Emirates (`en-AE`)|
-|Croatian|Croatia (`hr-HR`)|
-|Czech|Czechia (`cs-CZ`)|
-|Danish|Denmark (`da-DK`)|
-|Dutch|Netherlands (`nl-NL`)|
-|Finnish|Finland (`fi-FI`)|
-|French|Canada (`fr-CA`), France (`fr-FR`)|
-|German|Germany (`de-DE`)|
-|Hungarian|Hungary (`hu-HU`)|
-|Italian|Italy (`it-IT`)|
-|Japanese|Japan (`ja-JP`)|
-|Latvian|Latvia (`lv-LV`)|
-|Lithuanian|Lithuania (`lt-LT`)|
-|Norwegian|Norway (`no-NO`)|
-|Portuguese|Brazil (`pt-BR`), Portugal (`pt-PT`)|
-|Spanish|Spain (`es-ES`)|
-|Swedish|Sweden (`sv-SE`)|
-|Vietnamese|Vietnam (`vi-VN`)|
-
-#### Hotel receipts
-
-| Supported Languages | Details |
-|:--|:-:|
-|English|United States (`en-US`)|
-|French|France (`fr-FR`)|
-|German|Germany (`de-DE`)|
-|Italian|Italy (`it-IT`)|
-|Japanese|Japan (`ja-JP`)|
-|Portuguese|Portugal (`pt-PT`)|
-|Spanish|Spain (`es-ES`)|
-
-### [2023-02-28-preview](#tab/2023-02-28-preview)
-
-#### Thermal receipts (retail, meal, parking, etc.)
-
-| Supported Languages | Details |
-|:--|:-:|
-|English|United States (`en-US`), Australia (`en-AU`), Canada (`en-CA`), United Kingdom (`en-GB`), India (`en-IN`), United Arab Emirates (`en-AE`)|
-|Croatian|Croatia (`hr-HR`)|
-|Czech|Czechia (`cs-CZ`)|
-|Danish|Denmark (`da-DK`)|
-|Dutch|Netherlands (`nl-NL`)|
-|Finnish|Finland (`fi-FI`)|
-|French|Canada (`fr-CA`), France (`fr-FR`)|
-|German|Germany (`de-DE`)|
-|Hungarian|Hungary (`hu-HU`)|
-|Italian|Italy (`it-IT`)|
-|Japanese|Japan (`ja-JP`)|
-|Latvian|Latvia (`lv-LV`)|
-|Lithuanian|Lithuania (`lt-LT`)|
-|Norwegian|Norway (`no-NO`)|
-|Portuguese|Brazil (`pt-BR`), Portugal (`pt-PT`)|
-|Spanish|Spain (`es-ES`)|
-|Swedish|Sweden (`sv-SE`)|
-|Vietnamese|Vietnam (`vi-VN`)|
-
-#### Hotel receipts
-
-| Supported Languages | Details |
-|:--|:-:|
-|English|United States (`en-US`)|
-|French|France (`fr-FR`)|
-|German|Germany (`de-DE`)|
-|Italian|Italy (`it-IT`)|
-|Japanese|Japan (`ja-JP`)|
-|Portuguese|Portugal (`pt-PT`)|
-|Spanish|Spain (`es-ES`)|
----
-## Supported languages and locales v2.1
-
->[!NOTE]
- > It's not necessary to specify a locale. This is an optional parameter. The Form Recognizer deep-learning technology will auto-detect the language of the text in your image.
-
-| Model | Language - Locale code | Default |
-|--|:-|:-|
-|Receipt| <ul><li>English (United States) - en-US</li><li> English (Australia) - en-AU</li><li>English (Canada) - en-CA</li><li>English (United Kingdom) - en-GB</li><li>English (India) - en-IN</li></ul> | Autodetected |
--
-## Field extraction
--
-|Name| Type | Description | Standardized output |
-|:--|:-|:-|:-|
-| ReceiptType | String | Type of sales receipt | Itemized |
-| MerchantName | String | Name of the merchant issuing the receipt | |
-| MerchantPhoneNumber | phoneNumber | Listed phone number of merchant | +1 xxx xxx xxxx |
-| MerchantAddress | String | Listed address of merchant | |
-| TransactionDate | Date | Date the receipt was issued | yyyy-mm-dd |
-| TransactionTime | Time | Time the receipt was issued | hh-mm-ss (24-hour) |
-| Total | Number (USD)| Full transaction total of receipt | Two-decimal float|
-| Subtotal | Number (USD) | Subtotal of receipt, often before taxes are applied | Two-decimal float|
- | Tax | Number (USD) | Total tax on receipt (often sales tax or equivalent). **Renamed to "TotalTax" in 2022-06-30 version**. | Two-decimal float |
-| Tip | Number (USD) | Tip included by buyer | Two-decimal float|
-| Items | Array of objects | Extracted line items, with name, quantity, unit price, and total price extracted | |
-| Name | String | Item description. **Renamed to "Description" in 2022-06-30 version**. | |
-| Quantity | Number | Quantity of each item | Two-decimal float |
-| Price | Number | Individual price of each item unit| Two-decimal float |
-| TotalPrice | Number | Total price of line item | Two-decimal float |
---
- Form Recognizer v3.0 introduces several new features and capabilities. In addition to thermal receipts, the **Receipt** model supports single-page hotel receipt processing and tax detail extraction for all receipt types.
-
-### [**2022-08-31 (GA)**](#tab/2022-08-31)
-
-#### Thermal receipts (receipt, receipt.retailMeal, receipt.creditCard, receipt.gas, receipt.parking)
-
-| Field | Type | Description | Example |
-|:|:--|:|:--|
-|`MerchantName`|`string`|Name of the merchant issuing the receipt|Contoso|
-|`MerchantPhoneNumber`|`phoneNumber`|Listed phone number of merchant|987-654-3210|
-|`MerchantAddress`|`address`|Listed address of merchant|123 Main St. Redmond WA 98052|
-|`Total`|`number`|Full transaction total of receipt|$14.34|
-|`TransactionDate`|`date`|Date the receipt was issued|June 06, 2019|
-|`TransactionTime`|`time`|Time the receipt was issued|4:49 PM|
-|`Subtotal`|`number`|Subtotal of receipt, often before taxes are applied|$12.34|
-|`TotalTax`|`number`|Tax on receipt, often sales tax or equivalent|$2.00|
-|`Tip`|`number`|Tip included by buyer|$1.00|
-|`Items`|`array`|||
-|`Items.*`|`object`|Extracted line item|1<br>Surface Pro 6<br>$999.00<br>$999.00|
-|`Items.*.TotalPrice`|`number`|Total price of line item|$999.00|
-|`Items.*.Description`|`string`|Item description|Surface Pro 6|
-|`Items.*.Quantity`|`number`|Quantity of each item|1|
-|`Items.*.Price`|`number`|Individual price of each item unit|$999.00|
-|`Items.*.ProductCode`|`string`|Product code, product number, or SKU associated with the specific line item|A123|
-|`Items.*.QuantityUnit`|`string`|Quantity unit of each item||
-|`TaxDetails`|`array`|||
-|`TaxDetails.*`|`object`|Extracted line item|1<br>Surface Pro 6<br>$999.00<br>$999.00|
-|`TaxDetails.*.Amount`|`currency`|The amount of the tax detail|$999.00|
-#### Hotel receipts (receipt.hotel)
-
-| Field | Type | Description | Example |
-|:|:--|:|:--|
-|`MerchantName`|`string`|Name of the merchant issuing the receipt|Contoso|
-|`MerchantPhoneNumber`|`phoneNumber`|Listed phone number of merchant|987-654-3210|
-|`MerchantAddress`|`address`|Listed address of merchant|123 Main St. Redmond WA 98052|
-|`Total`|`number`|Full transaction total of receipt|$14.34|
-|`ArrivalDate`|`date`|Date of arrival|27Mar21|
-|`DepartureDate`|`date`|Date of departure|28Mar21|
-|`Currency`|`string`|Currency unit of receipt amounts (ISO 4217), or 'MIXED' if multiple values are found|USD|
-|`MerchantAliases`|`array`|||
-|`MerchantAliases.*`|`string`|Alternative name of merchant|Contoso (R)|
-|`Items`|`array`|||
-|`Items.*`|`object`|Extracted line item|1<br>Surface Pro 6<br>$999.00<br>$999.00|
-|`Items.*.TotalPrice`|`number`|Total price of line item|$999.00|
-|`Items.*.Description`|`string`|Item description|Room Charge|
-|`Items.*.Date`|`date`|Item date|27Mar21|
-|`Items.*.Category`|`string`|Item category|Room|
-
-### [2023-02-28-preview](#tab/2023-02-28-preview)
-
-#### Thermal receipts (receipt, receipt.retailMeal, receipt.creditCard, receipt.gas, receipt.parking)
-| Field | Type | Description | Example |
-|:|:--|:|:--|
-|`MerchantName`|`string`|Name of the merchant issuing the receipt|Contoso|
-|`MerchantPhoneNumber`|`phoneNumber`|Listed phone number of merchant|987-654-3210|
-|`MerchantAddress`|`address`|Listed address of merchant|123 Main St. Redmond WA 98052|
-|`Total`|`number`|Full transaction total of receipt|$14.34|
-|`TransactionDate`|`date`|Date the receipt was issued|June 06, 2019|
-|`TransactionTime`|`time`|Time the receipt was issued|4:49 PM|
-|`Subtotal`|`number`|Subtotal of receipt, often before taxes are applied|$12.34|
-|`TotalTax`|`number`|Tax on receipt, often sales tax or equivalent|$2.00|
-|`Tip`|`number`|Tip included by buyer|$1.00|
-|`Items`|`array`|||
-|`Items.*`|`object`|Extracted line item|1<br>Surface Pro 6<br>$999.00<br>$999.00|
-|`Items.*.TotalPrice`|`number`|Total price of line item|$999.00|
-|`Items.*.Description`|`string`|Item description|Surface Pro 6|
-|`Items.*.Quantity`|`number`|Quantity of each item|1|
-|`Items.*.Price`|`number`|Individual price of each item unit|$999.00|
-|`Items.*.ProductCode`|`string`|Product code, product number, or SKU associated with the specific line item|A123|
-|`Items.*.QuantityUnit`|`string`|Quantity unit of each item||
-|`TaxDetails`|`array`|||
-|`TaxDetails.*`|`object`|Extracted line item|1<br>Surface Pro 6<br>$999.00<br>$999.00|
-|`TaxDetails.*.Amount`|`currency`|The amount of the tax detail|$999.00|
-
-#### Hotel receipts (receipt.hotel)
-
-| Field | Type | Description | Example |
-|:|:--|:|:--|
-|`MerchantName`|`string`|Name of the merchant issuing the receipt|Contoso|
-|`MerchantPhoneNumber`|`phoneNumber`|Listed phone number of merchant|987-654-3210|
-|`MerchantAddress`|`address`|Listed address of merchant|123 Main St. Redmond WA 98052|
-|`Total`|`number`|Full transaction total of receipt|$14.34|
-|`ArrivalDate`|`date`|Date of arrival|27Mar21|
-|`DepartureDate`|`date`|Date of departure|28Mar21|
-|`Currency`|`string`|Currency unit of receipt amounts (ISO 4217), or 'MIXED' if multiple values are found|USD|
-|`MerchantAliases`|`array`|||
-|`MerchantAliases.*`|`string`|Alternative name of merchant|Contoso (R)|
-|`Items`|`array`|||
-|`Items.*`|`object`|Extracted line item|1<br>Surface Pro 6<br>$999.00<br>$999.00|
-|`Items.*.TotalPrice`|`number`|Total price of line item|$999.00|
-|`Items.*.Description`|`string`|Item description|Room Charge|
-|`Items.*.Date`|`date`|Item date|27Mar21|
-|`Items.*.Category`|`string`|Item category|Room|
-----
-### Migration guide and REST API v3.0
-
-* Follow our [**Form Recognizer v3.0 migration guide**](v3-migration-guide.md) to learn how to use the v3.0 version in your applications and workflows.
--
-## Next steps
--
-* Try processing your own forms and documents with the [Form Recognizer Studio](https://formrecognizer.appliedai.azure.com/studio)
-
-* Complete a [Form Recognizer quickstart](quickstarts/get-started-sdks-rest-api.md?view=form-recog-3.0.0&preserve-view=true) and get started creating a document processing app in the development language of your choice.
---
-* Try processing your own forms and documents with the [Form Recognizer Sample Labeling tool](https://fott-2-1.azurewebsites.net/)
-
-* Complete a [Form Recognizer quickstart](quickstarts/get-started-sdks-rest-api.md?view=form-recog-2.1.0&preserve-view=true) and get started creating a document processing app in the development language of your choice.
-
applied-ai-services Concept W2 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/concept-w2.md
- Title: Automated W-2 form processing - Form Recognizer-
-description: Use the Form Recognizer prebuilt W-2 model to automate extraction of W2 form data.
----- Previously updated : 05/23/2023-
-monikerRange: 'form-recog-3.0.0'
--
-# Form Recognizer W-2 form model
-
-**This article applies to:** ![Form Recognizer v3.0 checkmark](media/yes-icon.png) **Form Recognizer v3.0**.
-
-The Form Recognizer W-2 model combines Optical Character Recognition (OCR) with deep learning models to analyze and extract information reported on [US Internal Revenue Service (IRS) tax forms](https://www.irs.gov/forms-pubs/about-form-w-2). A W-2 tax form is a multipart form divided into state and federal sections consisting of more than 14 boxes detailing an employee's income from the previous year. The W-2 tax form is a key document used in employees' federal and state tax filings, as well as other processes like mortgage loans and Social Security Administration (SSA) benefits. The Form Recognizer W-2 model supports both single and multiple standard and customized forms from 2018 to the present.
-
-## Automated W-2 form processing
-
-An employer sends form W-2, also known as the Wage and Tax Statement, to each employee and the Internal Revenue Service (IRS) at the end of the year. A W-2 form reports employees' annual wages and the amount of taxes withheld from their paychecks. The IRS also uses W-2 forms to track individuals' tax obligations. The Social Security Administration (SSA) uses the information on this and other forms to compute the Social Security benefits for all workers.
-
-***Sample W-2 tax form processed using Form Recognizer Studio***
--
-## Development options
-
-Form Recognizer v3.0 supports the following tools:
-
-| Feature | Resources | Model ID |
-|-|-|--|
-|**W-2 model**|<ul><li> [**Form Recognizer Studio**](https://formrecognizer.appliedai.azure.com)</li><li>[**REST API**](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-2022-08-31/operations/AnalyzeDocument)</li><li>[**C# SDK**](quickstarts/get-started-sdks-rest-api.md?view=form-recog-3.0.0&preserve-view=true#prebuilt-model)</li><li>[**Python SDK**](quickstarts/get-started-sdks-rest-api.md?view=form-recog-3.0.0&preserve-view=true#prebuilt-model)</li><li>[**Java SDK**](quickstarts/get-started-sdks-rest-api.md?view=form-recog-3.0.0&preserve-view=true#prebuilt-model)</li><li>[**JavaScript SDK**](quickstarts/get-started-sdks-rest-api.md?view=form-recog-3.0.0&preserve-view=true#prebuilt-model)</li></ul>|**prebuilt-tax.us.w2**|
-
-### Try W-2 data extraction
-
-Try extracting data from W-2 forms using the Form Recognizer Studio. You need the following resources:
-
-* An Azure subscription - you can [create one for free](https://azure.microsoft.com/free/cognitive-services/)
-
-* A [Form Recognizer instance](https://portal.azure.com/#create/Microsoft.CognitiveServicesFormRecognizer) in the Azure portal. You can use the free pricing tier (`F0`) to try the service. After your resource deploys, select **Go to resource** to get your key and endpoint.
-
- :::image type="content" source="media/containers/keys-and-endpoint.png" alt-text="Screenshot of keys and endpoint location in the Azure portal.":::
-
-#### Form Recognizer Studio
-
-> [!NOTE]
-> Form Recognizer Studio is available with the v3.0 API.
-
-1. On the [Form Recognizer Studio home page](https://formrecognizer.appliedai.azure.com/studio), select **W-2**.
-
-1. You can analyze the sample W-2 document or select the **➕ Add** button to upload your own sample.
-
-1. Select the **Analyze** button:
-
- :::image type="content" source="media/studio/w2-analyze.png" alt-text="Screenshot: analyze W-2 window in the Form Recognizer Studio.":::
-
- > [!div class="nextstepaction"]
- > [Try Form Recognizer Studio](https://formrecognizer.appliedai.azure.com/studio/prebuilt?formType=tax.us.w2)
-
-## Input requirements
--
-## Supported languages and locales
-
-| Model | Language (locale code) | Default |
-|--|:-|:-|
-|prebuilt-tax.us.w2|<ul><li>English (United States)</li></ul>|English (United States), en-US|
-
-## Field extraction
-
-|Name| Box | Type | Description | Standardized output|
-|:--|:-|:-|:-|:-|
-| Employee.SocialSecurityNumber | a | String | Employee's Social Security Number (SSN). | 123-45-6789 |
-| Employer.IdNumber | b | String | Employer's ID number (EIN), the business equivalent of a social security number| 12-1234567 |
-| Employer.Name | c | String | Employer's name | Contoso |
-| Employer.Address | c | String | Employer's address (with city) | 123 Example Street Sample City, CA |
-| Employer.ZipCode | c | String | Employer's zip code | 12345 |
-| ControlNumber | d | String | A code identifying the unique W-2 in the employer's records | R3D1 |
-| Employee.Name | e | String | Full name of the employee | Henry Ross|
-| Employee.Address | f | String | Employee's address (with city) | 123 Example Street Sample City, CA |
-| Employee.ZipCode | f | String | Employee's zip code | 12345 |
-| WagesTipsAndOtherCompensation | 1 | Number | A summary of your pay, including wages, tips and other compensation | 50000 |
-| FederalIncomeTaxWithheld | 2 | Number | Federal income tax withheld | 1111 |
-| SocialSecurityWages | 3 | Number | Social security wages | 35000 |
-| SocialSecurityTaxWithheld | 4 | Number | Social security tax withheld | 1111 |
-| MedicareWagesAndTips | 5 | Number | Medicare wages and tips | 45000 |
-| MedicareTaxWithheld | 6 | Number | Medicare tax withheld | 1111 |
-| SocialSecurityTips | 7 | Number | Social security tips | 1111 |
-| AllocatedTips | 8 | Number | Allocated tips | 1111 |
-| Verification&#8203;Code | 9 | String | Verification Code on Form W-2 | A123-B456-C789-DXYZ |
-| DependentCareBenefits | 10 | Number | Dependent care benefits | 1111 |
-| NonqualifiedPlans | 11 | Number | The nonqualified plan, a type of retirement savings plan that is employer-sponsored and tax-deferred | 1111 |
-| AdditionalInfo | | Array of objects | An array of LetterCode and Amount | |
-| LetterCode | 12a, 12b, 12c, 12d | String | Letter code. Refer to [IRS/W-2](https://www.irs.gov/pub/irs-prior/fw2--2014.pdf) for the semantics of the code values. | D |
-| Amount | 12a, 12b, 12c, 12d | Number | Amount | 1234 |
-| IsStatutoryEmployee | 13 | String | Whether the StatutoryEmployee box is checked or not | true |
-| IsRetirementPlan | 13 | String | Whether the RetirementPlan box is checked or not | true |
-| IsThirdPartySickPay | 13 | String | Whether the ThirdPartySickPay box is checked or not | false |
-| Other | 14 | String | Other info employers may use this field to report | |
-| StateTaxInfos | | Array of objects | An array of state tax info including State, EmployerStateIdNumber, StateIncomeTax, StateWagesTipsEtc | |
-| State | 15 | String | State | CA |
-| EmployerStateIdNumber | 15 | String | Employer state number | 123-123-1234 |
-| StateWagesTipsEtc | 16 | Number | State wages, tips, etc. | 50000 |
-| StateIncomeTax | 17 | Number | State income tax | 1535 |
-| LocalTaxInfos | | Array of objects | An array of local income tax info including LocalWagesTipsEtc, LocalIncomeTax, LocalityName | |
-| LocalWagesTipsEtc | 18 | Number | Local wages, tips, etc. | 50000 |
-| LocalIncomeTax | 19 | Number | Local income tax | 750 |
-| LocalityName | 20 | String | Locality name | CLEVELAND |
-| W2Copy | | String | Copy of W-2 forms A, B, C, D, 1, or 2 | Copy A For Social Security Administration |
-| TaxYear | | Number | Tax year | 2020 |
-| W2FormVariant | | String | The variants of W-2 forms, including "W-2", "W-2AS", "W-2CM", "W-2GU", "W-2VI" | W-2 |
-
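If you prefer to call the service directly, the REST API listed in the development options table returns these fields as JSON. The following is a minimal sketch, assuming a Form Recognizer v3.0 resource and the `2022-08-31` API version from the REST API reference; the endpoint, key, and document URL shown here are placeholders.

```bash
# Minimal sketch: analyze a W-2 with the prebuilt-tax.us.w2 model over REST.
# Replace the endpoint, key, and document URL placeholders with your own values.
FORM_RECOGNIZER_ENDPOINT="https://<your-resource-name>.cognitiveservices.azure.com"
FORM_RECOGNIZER_KEY="<your-key>"

# Submit the document for analysis; the service responds with 202 and an Operation-Location header.
curl -i -X POST \
  "${FORM_RECOGNIZER_ENDPOINT}/formrecognizer/documentModels/prebuilt-tax.us.w2:analyze?api-version=2022-08-31" \
  -H "Content-Type: application/json" \
  -H "Ocp-Apim-Subscription-Key: ${FORM_RECOGNIZER_KEY}" \
  -d '{"urlSource": "<publicly-accessible-URL-to-a-W-2-image-or-PDF>"}'

# Poll the Operation-Location URL until the status is "succeeded"; the result contains
# the fields listed in the table above (Employee, Employer, WagesTipsAndOtherCompensation, and so on).
curl -H "Ocp-Apim-Subscription-Key: ${FORM_RECOGNIZER_KEY}" "<operation-location-URL-from-the-response-header>"
```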
-### Migration guide and REST API v3.0
-
-* Follow our [**Form Recognizer v3.0 migration guide**](v3-migration-guide.md) to learn how to use the v3.0 version in your applications and workflows.
-
-* Explore our [**REST API**](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-2022-08-31/operations/AnalyzeDocument) to learn more about the v3.0 version and new capabilities.
-
-## Next steps
-
-* Try processing your own forms and documents with the [Form Recognizer Studio](https://formrecognizer.appliedai.azure.com/studio)
-
-* Complete a [Form Recognizer quickstart](quickstarts/get-started-sdks-rest-api.md?view=form-recog-3.0.0&preserve-view=true) and get started creating a document processing app in the development language of your choice.
applied-ai-services Form Recognizer Container Configuration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/containers/form-recognizer-container-configuration.md
- Title: Configure Form Recognizer containers-
-description: Learn how to configure the Form Recognizer container to parse form and table data.
----- Previously updated : 11/29/2022-
-monikerRange: 'form-recog-2.1.0'
-
-# Configure Form Recognizer containers
-
-**This article applies to:** ![Form Recognizer v2.1 checkmark](../media/yes-icon.png) **Form Recognizer v2.1**.
-
-With Azure Form Recognizer containers, you can build an application architecture that's optimized to take advantage of both robust cloud capabilities and edge locality. Containers provide a minimalist, isolated environment that can be easily deployed on-premises and in the cloud. In this article, you'll learn to configure the Form Recognizer container run-time environment by using the `docker compose` command arguments. Form Recognizer features are supported by six feature containers: **Layout**, **Business Card**, **ID Document**, **Receipt**, **Invoice**, and **Custom**. These containers have both required and optional settings. For a few examples, see the [Example docker-compose.yml file](#example-docker-composeyml-file) section.
-
-## Configuration settings
-
-Each container has the following configuration settings:
-
-|Required|Setting|Purpose|
-|--|--|--|
-|Yes|[Key](#key-and-billing-configuration-setting)|Tracks billing information.|
-|Yes|[Billing](#key-and-billing-configuration-setting)|Specifies the endpoint URI of the service resource on Azure. For more information, _see_ [Billing](form-recognizer-container-install-run.md#billing). For more information and a complete list of regional endpoints, _see_ [Custom subdomain names for Cognitive Services](../../../cognitive-services/cognitive-services-custom-subdomains.md).|
-|Yes|[Eula](#eula-setting)| Indicates that you've accepted the license for the container.|
-|No|[ApplicationInsights](#applicationinsights-setting)|Enables adding [Azure Application Insights](/azure/application-insights) customer content support to your container.|
-|No|[Fluentd](#fluentd-settings)|Writes log and, optionally, metric data to a Fluentd server.|
-|No|HTTP Proxy|Configures an HTTP proxy for making outbound requests.|
-|No|[Logging](#logging-settings)|Provides ASP.NET Core logging support for your container. |
-
-> [!IMPORTANT]
-> The [`Key`](#key-and-billing-configuration-setting), [`Billing`](#key-and-billing-configuration-setting), and [`Eula`](#eula-setting) settings are used together. You must provide valid values for all three settings; otherwise, your containers won't start. For more information about using these configuration settings to instantiate a container, see [Billing](form-recognizer-container-install-run.md#billing).
-
-## Key and Billing configuration setting
-
-The `Key` setting specifies the Azure resource key that's used to track billing information for the container. The value for the Key must be a valid key for the resource that's specified for `Billing` in the "Billing configuration setting" section.
-
-The `Billing` setting specifies the endpoint URI of the resource on Azure that's used to meter billing information for the container. The value for this configuration setting must be a valid endpoint URI for a resource on Azure. The container reports usage about every 10 to 15 minutes.
-
- You can find these settings in the Azure portal on the **Keys and Endpoint** page.
-
- :::image type="content" source="../media/containers/keys-and-endpoint.png" alt-text="Screenshot: Azure portal keys and endpoint page.":::
-
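For example, the `Key` and `Billing` values, together with `Eula`, are passed to the container when you start it. The following is a minimal sketch using `docker run` with the Layout image; the endpoint, key, and resource limits are placeholders that you adjust for your environment.

```bash
# Minimal sketch: start the Layout container and pass the billing settings.
# {FORM_RECOGNIZER_ENDPOINT_URI} and {FORM_RECOGNIZER_KEY} come from the Keys and Endpoint page.
docker run --rm -it -p 5000:5000 --memory 16g --cpus 8 \
  mcr.microsoft.com/azure-cognitive-services/form-recognizer/layout \
  Eula=accept \
  Billing={FORM_RECOGNIZER_ENDPOINT_URI} \
  ApiKey={FORM_RECOGNIZER_KEY}
```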
-## EULA setting
--
-## ApplicationInsights setting
--
-## Fluentd settings
--
-## HTTP proxy credentials settings
--
-## Logging settings
--
-## Volume settings
-
-Use [**volumes**](https://docs.docker.com/storage/volumes/) to read and write data to and from the container. Volumes are the preferred mechanism for persisting data generated and used by Docker containers. You can specify an input mount or an output mount by including the `volumes` option and specifying `type` (bind), `source` (path to the folder), and `target` (file path parameter).
-
-The Form Recognizer container requires an input volume and an output volume. The input volume can be read-only (`ro`), and it's required for access to the data that's used for training and scoring. The output volume has to be writable, and you use it to store the models and temporary data.
-
-The exact syntax of the host volume location varies depending on the host operating system. Additionally, the volume location of the [host computer](form-recognizer-container-install-run.md#host-computer-requirements) might not be accessible because of a conflict between the Docker service account permissions and the host mount location permissions.
-
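As a sketch of the `type`/`source`/`target` pattern with plain `docker run`, the host paths, image, and container-side targets below are placeholders; adjust them for your environment.

```bash
# Minimal sketch: bind-mount a read-only input volume and a writable output volume.
docker run --rm -it -p 5000:5000 \
  --mount type=bind,source=/path/on/host/shared,target=/shared,readonly \
  --mount type=bind,source=/path/on/host/output,target=/logs \
  mcr.microsoft.com/azure-cognitive-services/form-recognizer/custom-supervised \
  Eula=accept \
  Billing={FORM_RECOGNIZER_ENDPOINT_URI} \
  ApiKey={FORM_RECOGNIZER_KEY}
```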
-## Example docker-compose.yml file
-
-The **docker compose** method is built from three steps:
-
- 1. Create a Dockerfile.
- 1. Define the services in a **docker-compose.yml** so they can be run together in an isolated environment.
- 1. Run `docker-compose up` to start and run your services.
-
-### Single container example
-
-In this example, enter {FORM_RECOGNIZER_ENDPOINT_URI} and {FORM_RECOGNIZER_KEY} values for your Layout container instance.
-
-#### **Layout container**
-
-```yml
-version: "3.9"
-
- azure-cognitive-service-layout:
- container_name: azure-cognitive-service-layout
- image: mcr.microsoft.com/azure-cognitive-services/form-recognizer/layout
- environment:
- - EULA=accept
- - billing={FORM_RECOGNIZER_ENDPOINT_URI}
- - key={FORM_RECOGNIZER_KEY}
-
- ports:
- - "5000"
- networks:
- - ocrvnet
-networks:
- ocrvnet:
- driver: bridge
-```
-
-### Multiple containers example
-
-#### **Receipt and OCR Read containers**
-
-In this example, enter {FORM_RECOGNIZER_ENDPOINT_URI} and {FORM_RECOGNIZER_KEY} values for your Receipt container and {COMPUTER_VISION_ENDPOINT_URI} and {COMPUTER_VISION_KEY} values for your Computer Vision Read container.
-
-```yml
-version: "3"
-
- azure-cognitive-service-receipt:
- container_name: azure-cognitive-service-receipt
- image: cognitiveservicespreview.azurecr.io/microsoft/cognitive-services-form-recognizer-receipt:2.1
- environment:
- - EULA=accept
- - billing={FORM_RECOGNIZER_ENDPOINT_URI}
- - key={FORM_RECOGNIZER_KEY}
- - AzureCognitiveServiceReadHost=http://azure-cognitive-service-read:5000
- ports:
- - "5000:5050"
- networks:
- - ocrvnet
- azure-cognitive-service-read:
- container_name: azure-cognitive-service-read
- image: mcr.microsoft.com/azure-cognitive-services/vision/read:3.2
- environment:
- - EULA=accept
- - billing={COMPUTER_VISION_ENDPOINT_URI}
- - key={COMPUTER_VISION_KEY}
- networks:
- - ocrvnet
-
-networks:
- ocrvnet:
- driver: bridge
-```
-
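To start and check the services defined in the examples above, you can use commands along these lines (a sketch; it assumes the docker-compose.yml file is in the current directory):

```bash
# Start the services defined in docker-compose.yml in the background.
docker-compose up -d

# Confirm that the containers are running and inspect their logs if needed.
docker ps --filter "name=azure-cognitive-service"
docker-compose logs
```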
-## Next steps
-
-> [!div class="nextstepaction"]
-> [Learn more about running multiple containers and the docker compose command](form-recognizer-container-install-run.md)
-
-* [Azure container instance recipe](../../../cognitive-services/containers/azure-container-instance-recipe.md)
applied-ai-services Form Recognizer Container Image Tags https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/containers/form-recognizer-container-image-tags.md
- Title: Form Recognizer image tags and release notes-
-description: A listing of all Form Recognizer container image tags.
----- Previously updated : 01/23/2023-
-monikerRange: 'form-recog-2.1.0'
--
-# Form Recognizer container image tags and release notes
-
-**This article applies to:** ![Form Recognizer v2.1 checkmark](../media/yes-icon.png) **Form Recognizer v2.1**.
-
-## Feature containers
-
-Form Recognizer features are supported by seven containers:
-
-| Container name | Fully qualified image name |
-|||
-| **Layout** | mcr.microsoft.com/azure-cognitive-services/form-recognizer/layout |
-| **Business Card** | mcr.microsoft.com/azure-cognitive-services/form-recognizer/businesscard |
-| **ID Document** | mcr.microsoft.com/azure-cognitive-services/form-recognizer/id-document |
-| **Receipt** | mcr.microsoft.com/azure-cognitive-services/form-recognizer/receipt |
-| **Invoice** | mcr.microsoft.com/azure-cognitive-services/form-recognizer/invoice |
-| **Custom API** | mcr.microsoft.com/azure-cognitive-services/form-recognizer/custom-api |
-| **Custom Supervised** | mcr.microsoft.com/azure-cognitive-services/form-recognizer/custom-supervised |
-
-## Microsoft container registry (MCR)
-
-Form Recognizer container images can be found within the [**Microsoft Container Registry Catalog**](https://mcr.microsoft.com/v2/_catalog) listing, the primary registry for all Microsoft-published Docker images:
-
- :::image type="content" source="../media/containers/microsoft-container-registry-catalog.png" alt-text="Screenshot of the Microsoft Container Registry (MCR) catalog list.":::
-
-## Form Recognizer tags
-
-The following tags are available for Form Recognizer:
-
-### [Latest version](#tab/current)
-
-Release notes for `v2.1`:
-
-| Container | Tags | Retrieve image |
-||:||
-| **Layout**| &bullet; `latest` </br> &bullet; `2.1-preview`| `docker pull mcr.microsoft.com/azure-cognitive-services/form-recognizer/layout`|
-| **Business Card** | &bullet; `latest` </br> &bullet; `2.1-preview` |`docker pull mcr.microsoft.com/azure-cognitive-services/form-recognizer/businesscard` |
-| **ID Document** | &bullet; `latest` </br> &bullet; `2.1-preview`| `docker pull mcr.microsoft.com/azure-cognitive-services/form-recognizer/id-document`|
-| **Receipt**| &bullet; `latest` </br> &bullet; `2.1-preview`| `docker pull mcr.microsoft.com/azure-cognitive-services/form-recognizer/receipt` |
-| **Invoice**| &bullet; `latest` </br> &bullet; `2.1-preview`|`docker pull mcr.microsoft.com/azure-cognitive-services/form-recognizer/invoice` |
-| **Custom API** | &bullet; `latest` </br> &bullet; `2.1-preview`| `docker pull mcr.microsoft.com/azure-cognitive-services/form-recognizer/custom-api`|
-| **Custom Supervised**| &bullet; `latest` </br> &bullet; `2.1-preview`|`docker pull mcr.microsoft.com/azure-cognitive-services/form-recognizer/custom-supervised` |
-
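For example, to pull a pinned tag from the table above and confirm which image versions you have locally (a sketch using the Layout repository):

```bash
# Pull the Layout container image with an explicit tag from the table above.
docker pull mcr.microsoft.com/azure-cognitive-services/form-recognizer/layout:2.1-preview

# List the downloaded Layout images and their tags.
docker images mcr.microsoft.com/azure-cognitive-services/form-recognizer/layout
```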
-### [Previous versions](#tab/previous)
-
-> [!IMPORTANT]
-> The Form Recognizer v1.0 container has been retired.
---
-## Next steps
-
-> [!div class="nextstepaction"]
-> [Install and run Form Recognizer containers](form-recognizer-container-install-run.md)
->
-
-* [Azure container instance recipe](../../../cognitive-services/containers/azure-container-instance-recipe.md)
applied-ai-services Form Recognizer Container Install Run https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/containers/form-recognizer-container-install-run.md
- Title: Install and run Docker containers for Form Recognizer-
-description: Use the Docker containers for Form Recognizer on-premises to identify and extract key-value pairs, selection marks, tables, and structure from forms and documents.
----- Previously updated : 06/19/2023---
-# Install and run Form Recognizer containers
-
-<!-- markdownlint-disable MD024 -->
-<!-- markdownlint-disable MD051 -->
---
-Azure Form Recognizer is an Azure Applied AI Service that lets you build automated data processing software using machine-learning technology. Form Recognizer enables you to identify and extract text, key/value pairs, selection marks, table data, and more from your form documents. The results are delivered as structured data that includes the relationships in the original file.
-
-In this article you learn how to download, install, and run Form Recognizer containers. Containers enable you to run the Form Recognizer service in your own environment. Containers are great for specific security and data governance requirements.
-
-* **Read**, **Layout**, **General Document**, **ID Document**, **Receipt**, **Invoice**, and **Custom** models are supported by Form Recognizer v3.0 containers.
-
-* **Business Card** model is currently only supported in the [v2.1 containers](form-recognizer-container-install-run.md?view=form-recog-2.1.0&preserve-view=true).
---
-> [!IMPORTANT]
->
-> Form Recognizer v3.0 containers are now generally available. If you are getting started with containers, consider using the v3 containers.
-
-In this article you learn how to download, install, and run Form Recognizer containers. Containers enable you to run the Form Recognizer service in your own environment. Containers are great for specific security and data governance requirements.
-
-* **Layout**, **Business Card**, **ID Document**, **Receipt**, **Invoice**, and **Custom** models are supported by six Form Recognizer feature containers.
-
-* For the Receipt, Business Card, and ID Document containers, you also need the **Read** OCR container.
--
-> [!IMPORTANT]
->
-> * To use Form Recognizer containers, you must submit an online request, and have it approved. For more information, _see_ [Request approval to run container](#request-approval-to-run-container).
-
-## Prerequisites
-
-To get started, you need an active [**Azure account**](https://azure.microsoft.com/free/cognitive-services/). If you don't have one, you can [**create a free account**](https://azure.microsoft.com/free/).
-
-You also need the following to use Form Recognizer containers:
-
-| Required | Purpose |
-|-||
-| **Familiarity with Docker** | You should have a basic understanding of Docker concepts, like registries, repositories, containers, and container images, as well as knowledge of basic `docker` [terminology and commands](/dotnet/architecture/microservices/container-docker-introduction/docker-terminology). |
-| **Docker Engine installed** | <ul><li>You need the Docker Engine installed on a [host computer](#host-computer-requirements). Docker provides packages that configure the Docker environment on [macOS](https://docs.docker.com/docker-for-mac/), [Windows](https://docs.docker.com/docker-for-windows/), and [Linux](https://docs.docker.com/engine/installation/#supported-platforms). For a primer on Docker and container basics, see the [Docker overview](https://docs.docker.com/engine/docker-overview/).</li><li> Docker must be configured to allow the containers to connect with and send billing data to Azure. </li><li> On **Windows**, Docker must also be configured to support **Linux** containers.</li></ul> |
-|**Form Recognizer resource** | A [**single-service Azure Form Recognizer**](https://portal.azure.com/#create/Microsoft.CognitiveServicesFormRecognizer) or [**multi-service Cognitive Services**](https://portal.azure.com/#create/Microsoft.CognitiveServicesAllInOne) resource in the Azure portal. To use the containers, you must have the associated key and endpoint URI. Both values are available on the Azure portal Form Recognizer **Keys and Endpoint** page: <ul><li>**{FORM_RECOGNIZER_KEY}**: one of the two available resource keys.<li>**{FORM_RECOGNIZER_ENDPOINT_URI}**: the endpoint for the resource used to track billing information.</li></li></ul>|
-
-|Optional|Purpose|
-||-|
-|**Azure CLI (command-line interface)** | The [Azure CLI](/cli/azure/install-azure-cli) enables you to use a set of online commands to create and manage Azure resources. It's available to install in Windows, macOS, and Linux environments and can be run in a Docker container and Azure Cloud Shell. |
-
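For example, the optional Azure CLI can create the Form Recognizer resource and return the key and endpoint values used throughout this article. This is a minimal sketch; the resource name, resource group, SKU, and location are placeholders.

```bash
# Create a single-service Form Recognizer resource (name, group, and location are placeholders).
az cognitiveservices account create \
  --name my-form-recognizer \
  --resource-group my-resource-group \
  --kind FormRecognizer \
  --sku S0 \
  --location westus2 \
  --yes

# Retrieve the values to use as {FORM_RECOGNIZER_KEY} and {FORM_RECOGNIZER_ENDPOINT_URI}.
az cognitiveservices account keys list \
  --name my-form-recognizer --resource-group my-resource-group
az cognitiveservices account show \
  --name my-form-recognizer --resource-group my-resource-group --query "properties.endpoint"
```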
-You also need a **Computer Vision API resource** to process business cards, ID documents, or receipts.
-
-* You can access the Recognize Text feature as either an Azure resource (the REST API or SDK) or a **cognitive-services-recognize-text** [container](../../../cognitive-services/Computer-vision/computer-vision-how-to-install-containers.md#get-the-container-image).
-
-* The usual [billing](#billing) fees apply.
-
-* If you use the **cognitive-services-recognize-text** container, make sure that your Computer Vision key for the Form Recognizer container is the key specified in the Computer Vision `docker run` or `docker compose` command for the **cognitive-services-recognize-text** container and your billing endpoint is the container's endpoint (for example, `http://localhost:5000`).
-
-* If you use both the Computer Vision container and Form Recognizer container together on the same host, they can't both be started with the default port of **5000**.
-
-* Pass in both the key and endpoints for your Computer Vision Azure cloud or Cognitive Services container:
-
- * **{COMPUTER_VISION_KEY}**: one of the two available resource keys.
- * **{COMPUTER_VISION_ENDPOINT_URI}**: the endpoint for the resource used to track billing information.
-
-## Request approval to run container
-
-Complete and submit the [**Azure Cognitive Services Application for Gated Services**](https://aka.ms/csgate) to request access to the container.
--
-## Host computer requirements
-
-The host is an x64-based computer that runs the Docker container. It can be a computer on your premises or a Docker hosting service in Azure, such as:
-
-* [Azure Kubernetes Service](../../../aks/index.yml).
-* [Azure Container Instances](../../../container-instances/index.yml).
-* A [Kubernetes](https://kubernetes.io/) cluster deployed to [Azure Stack](/azure-stack/operator). For more information, see [Deploy Kubernetes to Azure Stack](/azure-stack/user/azure-stack-solution-template-kubernetes-deploy).
-
-### Container requirements and recommendations
-
-#### Required supporting containers
-
-The following table lists the supporting container(s) for each Form Recognizer container you download. For more information, see the [Billing](#billing) section.
-
-| Feature container | Supporting container(s) |
-||--|
-| **Layout** | None |
-| **Business Card** | **Computer Vision Read**|
-| **ID Document** | **Computer Vision Read** |
-| **Invoice** | **Layout** |
-| **Receipt** |**Computer Vision Read** |
-| **Custom** | **Custom API**, **Custom Supervised**, **Layout**|
-
-| Feature container | Supporting container(s) |
-||--|
-| **Read** | None |
-| **Layout** | None|
-| **General Document** | Layout |
-| **Invoice** | Layout|
-| **Receipt** | Read |
-| **ID Document** | Read|
-| **Custom Template** | Layout |
--
-#### Recommended CPU cores and memory
-
-> [!NOTE]
->
-> The minimum and recommended values are based on Docker limits and *not* the host machine resources.
--
-##### Form Recognizer containers
-
-| Container | Minimum | Recommended |
-|--||-|
-| `Read` | `8` cores, 10-GB memory | `8` cores, 24-GB memory|
-| `Layout` | `8` cores, 16-GB memory | `8` cores, 24-GB memory |
-| `General Document` | `8` cores, 12-GB memory | `8` cores, 24-GB memory|
-| `ID Document` | `8` cores, 8-GB memory | `8` cores, 24-GB memory |
-| `Invoice` | `8` cores, 16-GB memory | `8` cores, 24-GB memory|
-| `Receipt` | `8` cores, 11-GB memory | `8` cores, 24-GB memory |
-| `Custom Template` | `8` cores, 16-GB memory | `8` cores, 24-GB memory|
---
-##### Read, Layout, and prebuilt containers
-
-| Container | Minimum | Recommended |
-|--||-|
-| `Read 3.2` | `8` cores, 16-GB memory | `8` cores, 24-GB memory|
-| `Layout 2.1` | `8` cores, 16-GB memory | `8` cores, 24-GB memory |
-| `Business Card 2.1` | `2` cores, 4-GB memory | `4` cores, 4-GB memory |
-| `ID Document 2.1` | `1` core, 2-GB memory |`2` cores, 2-GB memory |
-| `Invoice 2.1` | `4` cores, 8-GB memory | `8` cores, 8-GB memory |
-| `Receipt 2.1` | `4` cores, 8-GB memory | `8` cores, 8-GB memory |
-
-##### Custom containers
-
-The following host machine requirements are applicable to **train and analyze** requests:
-
-| Container | Minimum | Recommended |
-|--||-|
-| Custom API| 0.5 cores, 0.5-GB memory| `1` core, 1-GB memory |
-|Custom Supervised | `4` cores, 2-GB memory | `8` cores, 4-GB memory|
-
-* Each core must be at least 2.6 gigahertz (GHz) or faster.
-* Core and memory correspond to the `--cpus` and `--memory` settings, which are used as part of the `docker compose` or `docker run` command.
-
-> [!TIP]
-> You can use the [docker images](https://docs.docker.com/engine/reference/commandline/images/) command to list your downloaded container images. For example, the following command lists the ID, repository, and tag of each downloaded container image, formatted as a table:
->
-> ```docker
-> docker images --format "table {{.ID}}\t{{.Repository}}\t{{.Tag}}"
->
-> IMAGE ID REPOSITORY TAG
-> <image-id> <repository-path/name> <tag-name>
-> ```
-
-## Run the container with the **docker-compose up** command
-
-* Replace the {ENDPOINT_URI} and {API_KEY} values with your resource Endpoint URI and the key from the Azure resource page.
-
- :::image type="content" source="../media/containers/keys-and-endpoint.png" alt-text="Screenshot: Azure portal keys and endpoint page.":::
-
-* Ensure that the EULA value is set to "accept".
-
-* The `EULA`, `Billing`, and `ApiKey` values must be specified; otherwise the container can't start.
-
-> [!IMPORTANT]
-> The keys are used to access your Form Recognizer resource. Do not share your keys. Store them securely, for example, using Azure Key Vault. We also recommend regenerating these keys regularly. Only one key is necessary to make an API call. When regenerating the first key, you can use the second key for continued access to the service.
--
-### [Read](#tab/read)
-
-The following code sample is a self-contained `docker compose` example to run the Form Recognizer Read container. With `docker compose`, you use a YAML file to configure your application's services. Then, with the `docker-compose up` command, you create and start all the services from your configuration. Enter {FORM_RECOGNIZER_ENDPOINT_URI} and {FORM_RECOGNIZER_KEY} values for your Read container instance.
-
-```yml
-version: "3.9"
-
- azure-form-recognizer-read:
- container_name: azure-form-recognizer-read
- image: mcr.microsoft.com/azure-cognitive-services/form-recognizer/read-3.0
- environment:
- - EULA=accept
- - billing={FORM_RECOGNIZER_ENDPOINT_URI}
- - apiKey={FORM_RECOGNIZER_KEY}
- ports:
- - "5000:5000"
- networks:
- - ocrvnet
-networks:
- ocrvnet:
- driver: bridge
-```
-
-Now, you can start the service with the [**docker compose**](https://docs.docker.com/compose/) command:
-
-```bash
-docker-compose up
-```
-
-### [General Document](#tab/general-document)
-
-The following code sample is a self-contained `docker compose` example to run the Form Recognizer General Document container. With `docker compose`, you use a YAML file to configure your application's services. Then, with `docker-compose up` command, you create and start all the services from your configuration. Enter {FORM_RECOGNIZER_ENDPOINT_URI} and {FORM_RECOGNIZER_KEY} values for your General Document and Layout container instances.
-
-```yml
-version: "3.9"
-
- azure-cognitive-service-document:
- container_name: azure-cognitive-service-document
- image: mcr.microsoft.com/azure-cognitive-services/form-recognizer/document-3.0
- environment:
- - EULA=accept
- - billing={FORM_RECOGNIZER_ENDPOINT_URI}
- - apiKey={FORM_RECOGNIZER_KEY}
- - AzureCognitiveServiceLayoutHost=http://azure-cognitive-service-layout:5000
- ports:
- - "5000:5000"
- azure-cognitive-service-layout:
- container_name: azure-cognitive-service-layout
- image: mcr.microsoft.com/azure-cognitive-services/form-recognizer/layout-3.0
- environment:
- - EULA=accept
- - billing={FORM_RECOGNIZER_ENDPOINT_URI}
- - apiKey={FORM_RECOGNIZER_KEY}
-```
-
-Now, you can start the service with the [**docker compose**](https://docs.docker.com/compose/) command:
-
-```bash
-docker-compose up
-```
-
-Given the resources on the machine, the General Document container might take some time to start up.
-
-### [Layout](#tab/layout)
-
-The following code sample is a self-contained `docker compose` example to run the Form Recognizer Layout container. With `docker compose`, you use a YAML file to configure your application's services. Then, with `docker-compose up` command, you create and start all the services from your configuration. Enter {FORM_RECOGNIZER_ENDPOINT_URI} and {FORM_RECOGNIZER_KEY} values for your Layout container instance.
-
-```yml
-version: "3.9"
-
- azure-form-recognizer-layout:
- container_name: azure-form-recognizer-layout
- image: mcr.microsoft.com/azure-cognitive-services/form-recognizer/layout-3.0
- environment:
- - EULA=accept
- - billing={FORM_RECOGNIZER_ENDPOINT_URI}
- - apiKey={FORM_RECOGNIZER_KEY}
- ports:
- - "5000:5000"
- networks:
- - ocrvnet
-networks:
- ocrvnet:
- driver: bridge
-```
-
-Now, you can start the service with the [**docker compose**](https://docs.docker.com/compose/) command:
-
-```bash
-docker-compose up
-```
-
-### [Invoice](#tab/invoice)
-
-The following code sample is a self-contained `docker compose` example to run the Form Recognizer Invoice container. With `docker compose`, you use a YAML file to configure your application's services. Then, with `docker-compose up` command, you create and start all the services from your configuration. Enter {FORM_RECOGNIZER_ENDPOINT_URI} and {FORM_RECOGNIZER_KEY} values for your Invoice and Layout container instances.
-
-```yml
-version: "3.9"
-
- azure-cognitive-service-invoice:
- container_name: azure-cognitive-service-invoice
- image: mcr.microsoft.com/azure-cognitive-services/form-recognizer/invoice-3.0
- environment:
- - EULA=accept
- - billing={FORM_RECOGNIZER_ENDPOINT_URI}
- - apiKey={FORM_RECOGNIZER_KEY}
- - AzureCognitiveServiceLayoutHost=http://azure-cognitive-service-layout:5000
- ports:
- - "5000:5000"
- azure-cognitive-service-layout:
- container_name: azure-cognitive-service-layout
- image: mcr.microsoft.com/azure-cognitive-services/form-recognizer/layout-3.0
- environment:
- - EULA=accept
- - billing={FORM_RECOGNIZER_ENDPOINT_URI}
- - apiKey={FORM_RECOGNIZER_KEY}
-```
-
-Now, you can start the service with the [**docker compose**](https://docs.docker.com/compose/) command:
-
-```bash
-docker-compose up
-```
-
-### [Receipt](#tab/receipt)
-
-The following code sample is a self-contained `docker compose` example to run the Form Recognizer Receipt and Read containers together. With `docker compose`, you use a YAML file to configure your application's services. Then, with the `docker-compose up` command, you create and start all the services from your configuration. Enter {FORM_RECOGNIZER_ENDPOINT_URI} and {FORM_RECOGNIZER_KEY} values for your Receipt and Read container instances.
-
-```yml
-version: "3.9"
-
- azure-cognitive-service-receipt:
- container_name: azure-cognitive-service-receipt
- image: mcr.microsoft.com/azure-cognitive-services/form-recognizer/receipt-3.0
- environment:
- - EULA=accept
- - billing={FORM_RECOGNIZER_ENDPOINT_URI}
- - apiKey={FORM_RECOGNIZER_KEY}
- - AzureCognitiveServiceReadHost=http://azure-cognitive-service-read:5000
- ports:
- - "5000:5000"
- azure-cognitive-service-read:
- container_name: azure-cognitive-service-read
- image: mcr.microsoft.com/azure-cognitive-services/form-recognizer/read-3.0
- environment:
- - EULA=accept
- - billing={FORM_RECOGNIZER_ENDPOINT_URI}
- - apiKey={FORM_RECOGNIZER_KEY}
-```
-
-Now, you can start the service with the [**docker compose**](https://docs.docker.com/compose/) command:
-
-```bash
-docker-compose up
-```
-
-### [ID Document](#tab/id-document)
-
-The following code sample is a self-contained `docker compose` example to run the Form Recognizer ID Document and Read containers together. With `docker compose`, you use a YAML file to configure your application's services. Then, with the `docker-compose up` command, you create and start all the services from your configuration. Enter {FORM_RECOGNIZER_ENDPOINT_URI} and {FORM_RECOGNIZER_KEY} values for your ID Document and Read container instances.
-
-```yml
-version: "3.9"
-
- azure-cognitive-service-id-document:
- container_name: azure-cognitive-service-id-document
- image: mcr.microsoft.com/azure-cognitive-services/form-recognizer/id-document-3.0
- environment:
- - EULA=accept
- - billing={FORM_RECOGNIZER_ENDPOINT_URI}
- - apiKey={FORM_RECOGNIZER_KEY}
- - AzureCognitiveServiceReadHost=http://azure-cognitive-service-read:5000
- ports:
- - "5000:5000"
- azure-cognitive-service-read:
- container_name: azure-cognitive-service-read
- image: mcr.microsoft.com/azure-cognitive-services/form-recognizer/read-3.0
- environment:
- - EULA=accept
- - billing={FORM_RECOGNIZER_ENDPOINT_URI}
- - apiKey={FORM_RECOGNIZER_KEY}
-```
-
-Now, you can start the service with the [**docker compose**](https://docs.docker.com/compose/) command:
-
-```bash
-docker-compose up
-```
-
-### [Business Card](#tab/business-card)
-
-The Business Card container is not supported by Form Recognizer v3.0.
-
-### [Custom](#tab/custom)
-
-In addition to the [prerequisites](#prerequisites), you need to do the following to process a custom document:
-
-#### Create a folder to store the following files
-
-* [**.env**](#-create-an-environment-file)
-* [**nginx.conf**](#-create-an-nginx-file)
-* [**docker-compose.yml**](#-create-a-docker-compose-file)
-
-#### Create a folder to store your input data
-
-* Name this folder **files**.
-* We reference the file path for this folder as **{FILE_MOUNT_PATH}**.
-* Copy the file path to a convenient location; you need to add it to your **.env** file. For example, if the folder is called **files** and is located in the same folder as the docker-compose file, the .env file entry is `FILE_MOUNT_PATH="./files"`.
-
-#### Create a folder to store the logs written by the Form Recognizer service on your local machine
-
-* Name this folder **output**.
-* We reference the file path for this folder as **{OUTPUT_MOUNT_PATH}**.
-* Copy the file path to a convenient location; you need to add it to your **.env** file. For example, if the folder is called **output** and is located in the same folder as the docker-compose file, the .env file entry is `OUTPUT_MOUNT_PATH="./output"`.
-
-#### Create a folder for storing internal processing shared between the containers
-
-* Name this folder **shared**.
-* We reference the file path for this folder as **{SHARED_MOUNT_PATH}**.
-* Copy the file path to a convenient location; you need to add it to your **.env** file. For example, if the folder is called **shared** and is located in the same folder as the docker-compose file, the .env file entry is `SHARED_MOUNT_PATH="./shared"`.
-
-#### Create a folder for the Studio to store project related information
-
-* Name this folder **db**.
-* We reference the file path for this folder as **{DB_MOUNT_PATH}**.
-* Copy the file path to a convenient location; you need to add it to your **.env** file. For example, if the folder is called **db** and is located in the same folder as the docker-compose file, the .env file entry is `DB_MOUNT_PATH="./db"`.
-
-#### Create an environment file
-
- 1. Name this file **.env**.
-
- 1. Declare the following environment variables:
-
- ```text
-SHARED_MOUNT_PATH="./shared"
-OUTPUT_MOUNT_PATH="./output"
-FILE_MOUNT_PATH="./files"
-DB_MOUNT_PATH="./db"
-FORM_RECOGNIZER_ENDPOINT_URI="YourFormRecognizerEndpoint"
-FORM_RECOGNIZER_KEY="YourFormRecognizerKey"
-NGINX_CONF_FILE="./nginx.conf"
- ```
-
-#### Create an **nginx** file
-
- 1. Name this file **nginx.conf**.
-
- 1. Enter the following configuration:
-
-```text
-worker_processes 1;
-
-events { worker_connections 1024; }
-
-http {
-
- sendfile on;
- client_max_body_size 90M;
- upstream docker-custom {
- server azure-cognitive-service-custom-template:5000;
- }
-
- upstream docker-layout {
- server azure-cognitive-service-layout:5000;
- }
-
- server {
- listen 5000;
-
- location = / {
- proxy_set_header Host $host:$server_port;
- proxy_set_header Referer $scheme://$host:$server_port;
- proxy_pass http://docker-custom/;
- }
-
- location /status {
- proxy_pass http://docker-custom/status;
- }
-
- location /test {
- return 200 $scheme://$host:$server_port;
- }
-
- location /ready {
- proxy_pass http://docker-custom/ready;
- }
-
- location /swagger {
- proxy_pass http://docker-custom/swagger;
- }
-
- location /formrecognizer/documentModels/prebuilt-layout {
- proxy_set_header Host $host:$server_port;
- proxy_set_header Referer $scheme://$host:$server_port;
-
- add_header 'Access-Control-Allow-Origin' '*' always;
- add_header 'Access-Control-Allow-Headers' 'cache-control,content-type,ocp-apim-subscription-key,x-ms-useragent' always;
- add_header 'Access-Control-Allow-Methods' 'GET, POST, OPTIONS' always;
- add_header 'Access-Control-Expose-Headers' '*' always;
-
- if ($request_method = 'OPTIONS') {
- return 200;
- }
-
- proxy_pass http://docker-layout/formrecognizer/documentModels/prebuilt-layout;
- }
-
- location /formrecognizer/documentModels {
- proxy_set_header Host $host:$server_port;
- proxy_set_header Referer $scheme://$host:$server_port;
-
- add_header 'Access-Control-Allow-Origin' '*' always;
- add_header 'Access-Control-Allow-Headers' 'cache-control,content-type,ocp-apim-subscription-key,x-ms-useragent' always;
- add_header 'Access-Control-Allow-Methods' 'GET, POST, OPTIONS, DELETE' always;
- add_header 'Access-Control-Expose-Headers' '*' always;
-
- if ($request_method = 'OPTIONS') {
- return 200;
- }
-
- proxy_pass http://docker-custom/formrecognizer/documentModels;
- }
-
- location /formrecognizer/operations {
- add_header 'Access-Control-Allow-Origin' '*' always;
- add_header 'Access-Control-Allow-Headers' 'cache-control,content-type,ocp-apim-subscription-key,x-ms-useragent' always;
- add_header 'Access-Control-Allow-Methods' 'GET, POST, OPTIONS, PUT, DELETE, PATCH' always;
- add_header 'Access-Control-Expose-Headers' '*' always;
-
- if ($request_method = OPTIONS ) {
- return 200;
- }
-
- proxy_pass http://docker-custom/formrecognizer/operations;
- }
- }
-}
-
-```
-
-#### Create a **docker compose** file
-
-1. Name this file **docker-compose.yml**
-
-2. The following code sample is a self-contained `docker compose` example to run Form Recognizer Layout, Studio and Custom template containers together. With `docker compose`, you use a YAML file to configure your application's services. Then, with `docker-compose up` command, you create and start all the services from your configuration.
-
- ```yml
-version: '3.3'
-
- nginx:
- image: nginx:alpine
- container_name: reverseproxy
- volumes:
- - ${NGINX_CONF_FILE}:/etc/nginx/nginx.conf
- ports:
- - "5000:5000"
- layout:
- container_name: azure-cognitive-service-layout
- image: mcr.microsoft.com/azure-cognitive-services/form-recognizer/layout-3.0:latest
- environment:
- eula: accept
- apikey: ${FORM_RECOGNIZER_KEY}
- billing: ${FORM_RECOGNIZER_ENDPOINT_URI}
- Logging:Console:LogLevel:Default: Information
- SharedRootFolder: /shared
- Mounts:Shared: /shared
- Mounts:Output: /logs
- volumes:
- - type: bind
- source: ${SHARED_MOUNT_PATH}
- target: /shared
- - type: bind
- source: ${OUTPUT_MOUNT_PATH}
- target: /logs
- expose:
- - "5000"
-
- custom-template:
- container_name: azure-cognitive-service-custom-template
- image: mcr.microsoft.com/azure-cognitive-services/form-recognizer/custom-template-3.0:latest
- restart: always
- depends_on:
- - layout
- environment:
- AzureCognitiveServiceLayoutHost: http://azure-cognitive-service-layout:5000
- eula: accept
- apikey: ${FORM_RECOGNIZER_KEY}
- billing: ${FORM_RECOGNIZER_ENDPOINT_URI}
- Logging:Console:LogLevel:Default: Information
- SharedRootFolder: /shared
- Mounts:Shared: /shared
- Mounts:Output: /logs
- volumes:
- - type: bind
- source: ${SHARED_MOUNT_PATH}
- target: /shared
- - type: bind
- source: ${OUTPUT_MOUNT_PATH}
- target: /logs
- expose:
- - "5000"
-
- studio:
- container_name: form-recognizer-studio
- image: mcr.microsoft.com/azure-cognitive-services/form-recognizer/studio:3.0
- environment:
- ONPREM_LOCALFILE_BASEPATH: /onprem_folder
- STORAGE_DATABASE_CONNECTION_STRING: /onprem_db/Application.db
- volumes:
- - type: bind
- source: ${FILE_MOUNT_PATH} # path to your local folder
- target: /onprem_folder
- - type: bind
- source: ${DB_MOUNT_PATH} # path to your local folder
- target: /onprem_db
- ports:
- - "5001:5001"
- user: "1000:1000" # echo $(id -u):$(id -g)
-
- ```
-
-The custom template container can use Azure Storage queues or in-memory queues. The `Storage:ObjectStore:AzureBlob:ConnectionString` and `queue:azure:connectionstring` environment variables only need to be set if you're using Azure Storage queues. When running locally, delete these variables.
-
-### Ensure the service is running
-
-To ensure that the service is up and running, run these commands in an Ubuntu shell.
-
-```bash
-cd <folder containing the docker-compose file>
-
-source .env
-
-docker-compose up
-```
-
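Once `docker-compose up` completes, you can optionally confirm that the endpoints respond. The following is a sketch that assumes the port mappings from the docker-compose.yml file above:

```bash
# The nginx reverse proxy publishes the Layout and Custom Template containers on port 5000.
curl -i http://localhost:5000/ready
curl -i http://localhost:5000/status

# The Form Recognizer Studio container is published on port 5001.
curl -I http://localhost:5001
```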
-Custom template containers require a few different configurations and support other optional configurations.
-
-| Setting | Required | Description |
-|--||-|
-|EULA | Yes | License acceptance. Example: `Eula=accept`|
-|Billing | Yes | Billing endpoint URI of the Form Recognizer resource |
-|ApiKey | Yes | The key of the Form Recognizer resource |
-| Queue:Azure:ConnectionString | No| Azure Queue connection string |
-|Storage:ObjectStore:AzureBlob:ConnectionString | No| Azure Blob connection string |
-| HealthCheck:MemoryUpperboundInMB | No | Memory threshold for reporting unhealthy to liveness. Default: Same as recommended memory |
-| StorageTimeToLiveInMinutes | No| Time-to-live (TTL) duration after which all intermediate and final files are removed. Default: two days. The TTL can be set between five minutes and seven days |
-| Task:MaxRunningTimeSpanInMinutes | No| Maximum running time before a request is treated as timed out. Default: 60 minutes |
-| HTTP_PROXY_BYPASS_URLS | No | Specify URLs for bypassing the proxy. Example: `HTTP_PROXY_BYPASS_URLS = abc.com, xyz.com` |
-| AzureCognitiveServiceReadHost (Receipt, IdDocument Containers Only)| Yes | Specify the Read container URI. Example: `AzureCognitiveServiceReadHost=http://onprem-frread:5000` |
-| AzureCognitiveServiceLayoutHost (Document, Invoice Containers Only) | Yes | Specify the Layout container URI. Example: `AzureCognitiveServiceLayoutHost=http://onprem-frlayout:5000` |
-
-#### Use the Form Recognizer Studio to train a model
-
-* Gather a set of at least five forms of the same type. You use this data to train the model and test a form. You can use a [sample data set](https://go.microsoft.com/fwlink/?linkid=2090451) (download and extract *sample_data.zip*).
-
-* Once you can confirm that the containers are running, open a browser and navigate to the endpoint where you have the containers deployed. If this deployment is your local machine, the endpoint is [http://localhost:5001](http://localhost:5001).
-* Select the custom extraction model tile.
-* Select the `Create project` option.
-* Provide a project name and, optionally, a description.
-* On the "Configure your resource" step, provide the endpoint to your custom template model. If you deployed the containers on your local machine, use the URL [http://localhost:5000](http://localhost:5000).
-* Provide the subfolder where your training data is located within the **files** folder.
-* Finally, create the project.
-
-You should now have a project created, ready for labeling. Upload your training data and get started labeling. If you're new to labeling, see [build and train a custom model](../how-to-guides/build-a-custom-model.md)
-
-#### Using the API to train
-
-If you plan to call the APIs directly to train a model, the custom template model train API requires a base64 encoded zip file that is the contents of your labeling project. You can omit the PDF or image files and submit only the JSON files.
-
-Once you have your dataset labeled and the `*.ocr.json`, `*.labels.json`, and `fields.json` files added to a zip, use the following PowerShell commands to generate the base64 encoded string.
-
-```powershell
-$bytes = [System.IO.File]::ReadAllBytes("<your_zip_file>.zip")
-$b64String = [System.Convert]::ToBase64String($bytes, [System.Base64FormattingOptions]::None)
-
-```
-
-Use the build model API to post the request.
-
-```http
-POST http://localhost:5000/formrecognizer/documentModels:build?api-version=2022-08-31
-
-{
- "modelId": "mymodel",
- "description": "test model",
- "buildMode": "template",
-
- "base64Source": "<Your base64 encoded string>",
- "tags": {
- "additionalProp1": "string",
- "additionalProp2": "string",
- "additionalProp3": "string"
- }
-}
-```
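If you're working from a Linux shell instead of PowerShell, the same request can be assembled and sent with `base64` and `curl`. This is a sketch under the assumption that the custom template stack described above is listening on localhost:5000:

```bash
# Base64-encode the labeled-project zip (on macOS, use: base64 -i <your_zip_file>.zip).
B64_SOURCE=$(base64 -w 0 "<your_zip_file>.zip")

# Post the build request to the local custom template endpoint.
curl -X POST "http://localhost:5000/formrecognizer/documentModels:build?api-version=2022-08-31" \
  -H "Content-Type: application/json" \
  -d '{
        "modelId": "mymodel",
        "description": "test model",
        "buildMode": "template",
        "base64Source": "'"${B64_SOURCE}"'"
      }'
```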
----
-### [Read](#tab/read)
-
-The Read container is not supported by Form Recognizer v2.1.
-
-### [General Document](#tab/general-document)
-
-The General Document container is not supported by Form Recognizer v2.1.
-
-### [Layout](#tab/layout)
-
-The following code sample is a self-contained `docker compose` example to run the Form Recognizer Layout container. With `docker compose`, you use a YAML file to configure your application's services. Then, with the `docker-compose up` command, you create and start all the services from your configuration. Enter {FORM_RECOGNIZER_ENDPOINT_URI} and {FORM_RECOGNIZER_KEY} values for your Layout container instance.
-
-```yml
-version: "3.9"
-
- azure-cognitive-service-layout:
- container_name: azure-cognitive-service-layout
- image: mcr.microsoft.com/azure-cognitive-services/form-recognizer/layout
- environment:
- - EULA=accept
- - billing={FORM_RECOGNIZER_ENDPOINT_URI}
- - apiKey={FORM_RECOGNIZER_KEY}
- ports:
- - "5000"
- networks:
- - ocrvnet
-networks:
- ocrvnet:
- driver: bridge
-
-```
-
-Now, you can start the service with the [**docker compose**](https://docs.docker.com/compose/) command:
-
-```bash
-docker-compose up
-```
-
-### [Invoice](#tab/invoice)
-
-The following code sample is a self-contained `docker compose` example to run Form Recognizer Invoice and Layout containers together. With `docker compose`, you use a YAML file to configure your application's services. Then, with `docker-compose up` command, you create and start all the services from your configuration. Enter {FORM_RECOGNIZER_ENDPOINT_URI} and {FORM_RECOGNIZER_KEY} values for your Invoice and Layout containers.
-
-```yml
-version: "3.9"
-
- azure-cognitive-service-invoice:
- container_name: azure-cognitive-service-invoice
- image: mcr.microsoft.com/azure-cognitive-services/form-recognizer/invoice
- environment:
- - EULA=accept
- - billing={FORM_RECOGNIZER_ENDPOINT_URI}
- - apiKey={FORM_RECOGNIZER_KEY}
- - AzureCognitiveServiceLayoutHost=http://azure-cognitive-service-layout:5000
- ports:
- - "5000:5000"
- networks:
- - ocrvnet
- azure-cognitive-service-layout:
- container_name: azure-cognitive-service-layout
- image: mcr.microsoft.com/azure-cognitive-services/form-recognizer/layout
- environment:
- - EULA=accept
- - billing={FORM_RECOGNIZER_ENDPOINT_URI}
- - apiKey={FORM_RECOGNIZER_KEY}
- networks:
- - ocrvnet
-
-networks:
- ocrvnet:
- driver: bridge
-```
-
-Now, you can start the service with the [**docker compose**](https://docs.docker.com/compose/) command:
-
-```bash
-docker-compose up
-```
-
-### [Receipt](#tab/receipt)
-
-The following code sample is a self-contained `docker compose` example to run Form Recognizer Receipt and Read containers together. With `docker compose`, you use a YAML file to configure your application's services. Then, with `docker-compose up` command, you create and start all the services from your configuration. Enter {FORM_RECOGNIZER_ENDPOINT_URI} and {FORM_RECOGNIZER_KEY} values for your Receipt container. Enter {COMPUTER_VISION_ENDPOINT_URI} and {COMPUTER_VISION_KEY} values for your Computer Vision Read container.
-
-```yml
-version: "3.9"
-
- azure-cognitive-service-receipt:
- container_name: azure-cognitive-service-receipt
- image: mcr.microsoft.com/azure-cognitive-services/form-recognizer/receipt
- environment:
- - EULA=accept
- - billing={FORM_RECOGNIZER_ENDPOINT_URI}
- - apiKey={FORM_RECOGNIZER_KEY}
- - AzureCognitiveServiceReadHost=http://azure-cognitive-service-read:5000
- ports:
- - "5000:5000"
- networks:
- - ocrvnet
- azure-cognitive-service-read:
- container_name: azure-cognitive-service-read
- image: mcr.microsoft.com/azure-cognitive-services/vision/read:3.2-model-2021-04-12
- environment:
- - EULA=accept
- - billing={COMPUTER_VISION_ENDPOINT_URI}
- - apiKey={COMPUTER_VISION_KEY}
- networks:
- - ocrvnet
-
-networks:
- ocrvnet:
- driver: bridge
-```
-
-Now, you can start the service with the [**docker compose**](https://docs.docker.com/compose/) command:
-
-```bash
-docker-compose up
-```
-
-### [ID Document](#tab/id-document)
-
-The following code sample is a self-contained `docker compose` example to run Form Recognizer ID Document and Read containers together. With `docker compose`, you use a YAML file to configure your application's services. Then, with `docker-compose up` command, you create and start all the services from your configuration. Enter {FORM_RECOGNIZER_ENDPOINT_URI} and {FORM_RECOGNIZER_KEY} values for your ID document container. Enter {COMPUTER_VISION_ENDPOINT_URI} and {COMPUTER_VISION_KEY} values for your Computer Vision Read container.
-
-```yml
-version: "3.9"
-
- azure-cognitive-service-id:
- container_name: azure-cognitive-service-id
- image: mcr.microsoft.com/azure-cognitive-services/form-recognizer/id-document
- environment:
- - EULA=accept
- - billing={FORM_RECOGNIZER_ENDPOINT_URI}
- - apiKey={FORM_RECOGNIZER_KEY}
- - AzureCognitiveServiceReadHost=http://azure-cognitive-service-read:5000
- ports:
- - "5000:5000"
- networks:
- - ocrvnet
- azure-cognitive-service-read:
- container_name: azure-cognitive-service-read
- image: mcr.microsoft.com/azure-cognitive-services/vision/read:3.2-model-2021-04-12
- environment:
- - EULA=accept
- - billing={COMPUTER_VISION_ENDPOINT_URI}
- - apiKey={COMPUTER_VISION_KEY}
- networks:
- - ocrvnet
-
-networks:
- ocrvnet:
- driver: bridge
-```
-
-Now, you can start the service with the [**docker compose**](https://docs.docker.com/compose/) command:
-
-```bash
-docker-compose up
-```
-
-### [Business Card](#tab/business-card)
-
-The following code sample is a self-contained `docker compose` example to run Form Recognizer Business Card and Read containers together. With `docker compose`, you use a YAML file to configure your application's services. Then, with `docker-compose up` command, you create and start all the services from your configuration. Enter {FORM_RECOGNIZER_ENDPOINT_URI} and {FORM_RECOGNIZER_KEY} values for your Business Card container instance. Enter {COMPUTER_VISION_ENDPOINT_URI} and {COMPUTER_VISION_KEY} for your Computer Vision Read container.
-
-```yml
-version: "3.9"
-
- azure-cognitive-service-businesscard:
- container_name: azure-cognitive-service-businesscard
- image: mcr.microsoft.com/azure-cognitive-services/form-recognizer/businesscard
- environment:
- - EULA=accept
- - billing={FORM_RECOGNIZER_ENDPOINT_URI}
- - apiKey={FORM_RECOGNIZER_KEY}
- - AzureCognitiveServiceReadHost=http://azure-cognitive-service-read:5000
- ports:
- - "5000:5000"
- networks:
- - ocrvnet
- azure-cognitive-service-read:
- container_name: azure-cognitive-service-read
- image: mcr.microsoft.com/azure-cognitive-services/vision/read:3.2-model-2021-04-12
- environment:
- - EULA=accept
- - billing={COMPUTER_VISION_ENDPOINT_URI}
- - apiKey={COMPUTER_VISION_KEY}
- networks:
- - ocrvnet
-
-networks:
- ocrvnet:
- driver: bridge
-```
-
-Now, you can start the service with the [**docker compose**](https://docs.docker.com/compose/) command:
-
-```bash
-docker-compose up
-```
-
-### [Custom](#tab/custom)
-
-In addition to the [prerequisites](#prerequisites), you need to do the following to process a custom document:
-
-#### &bullet; Create a folder to store the following files
-
- 1. [**.env**](#-create-an-environment-file)
- 1. [**nginx.conf**](#-create-an-nginx-file)
- 1. [**docker-compose.yml**](#-create-a-docker-compose-file)
-
-#### &bullet; Create a folder to store your input data
-
- 1. Name this folder **shared**.
- 1. We reference the file path for this folder as **{SHARED_MOUNT_PATH}**.
- 1. Copy the file path in a convenient location, such as *Microsoft Notepad*. You need to add it to your **.env** file.
-
-#### &bullet; Create a folder to store the logs written by the Form Recognizer service on your local machine
-
- 1. Name this folder **output**.
- 1. We reference the file path for this folder as **{OUTPUT_MOUNT_PATH}**.
- 1. Copy the file path in a convenient location, such as *Microsoft Notepad*. You need to add it to your **.env** file.
-
-#### &bullet; Create an environment file
-
- 1. Name this file **.env**.
-
- 1. Declare the following environment variables:
-
- ```text
- SHARED_MOUNT_PATH="<file-path-to-shared-folder>"
- OUTPUT_MOUNT_PATH="<file -path-to-output-folder>"
- FORM_RECOGNIZER_ENDPOINT_URI="<your-form-recognizer-endpoint>"
- FORM_RECOGNIZER_KEY="<your-form-recognizer-key>"
- RABBITMQ_HOSTNAME="rabbitmq"
- RABBITMQ_PORT=5672
- NGINX_CONF_FILE="<file-path>"
- ```
-
-#### &bullet; Create an **nginx** file
-
- 1. Name this file **nginx.conf**.
-
- 1. Enter the following configuration:
-
-```text
-worker_processes 1;
-
-events { worker_connections 1024; }
-
-http {
-
- sendfile on;
- client_max_body_size 90M;
- upstream docker-api {
- server azure-cognitive-service-custom-api:5000;
- }
-
- upstream docker-layout {
- server azure-cognitive-service-layout:5000;
- }
-
- server {
- listen 5000;
-
- location = / {
- proxy_set_header Host $host:$server_port;
- proxy_set_header Referer $scheme://$host:$server_port;
- proxy_pass http://docker-api/;
-
- }
-
- location /status {
- proxy_pass http://docker-api/status;
-
- }
-
- location /ready {
- proxy_pass http://docker-api/ready;
-
- }
-
- location /swagger {
- proxy_pass http://docker-api/swagger;
-
- }
-
- location /formrecognizer/v2.1/custom/ {
- proxy_set_header Host $host:$server_port;
- proxy_set_header Referer $scheme://$host:$server_port;
- proxy_pass http://docker-api/formrecognizer/v2.1/custom/;
-
- }
-
- location /formrecognizer/v2.1/layout/ {
- proxy_set_header Host $host:$server_port;
- proxy_set_header Referer $scheme://$host:$server_port;
- proxy_pass http://docker-layout/formrecognizer/v2.1/layout/;
-
- }
- }
-}
-```
-
-* Gather a set of at least six forms of the same type. You use this data to train the model and test a form. You can use a [sample data set](https://go.microsoft.com/fwlink/?linkid=2090451) (download and extract *sample_data.zip*). Download the training files to the **shared** folder you created.
-
-* If you want to label your data, download the [Form Recognizer Sample Labeling tool for Windows](https://github.com/microsoft/OCR-Form-Tools/releases). The download includes the labeling tool .exe file that you use to label the data present on your local file system. You can ignore any warnings that occur during the download process.
-
-#### Create a new Sample Labeling tool project
-
-* Open the labeling tool by double-clicking on the Sample Labeling tool .exe file.
-* On the left pane of the tool, select the connections tab.
-* Select the option to create a new project and give it a name and a description.
-* For the provider, choose the local file system option. For the local folder, make sure you enter the path to the folder where you stored the sample data files.
-* Navigate back to the home tab and select the "Use custom to train a model with labels and key-value pairs" option.
-* Select the train button on the left pane to train the labeled model.
-* Save this connection and use it to label your requests.
-* You can choose to analyze the file of your choice against the trained model.
-
-#### &bullet; Create a **docker compose** file
-
-1. Name this file **docker-compose.yml**
-
-2. The following code sample is a self-contained `docker compose` example to run Form Recognizer Layout, Label Tool, Custom API, and Custom Supervised containers together. With `docker compose`, you use a YAML file to configure your application's services. Then, with `docker-compose up` command, you create and start all the services from your configuration.
-
- ```yml
- version: '3.3'
-
- services:
-   # NGINX reverse proxy that exposes the custom API and layout containers on port 5000
-   nginx:
-     image: nginx:alpine
-     container_name: reverseproxy
-     volumes:
-       - ${NGINX_CONF_FILE}:/etc/nginx/nginx.conf
-     ports:
-       - "5000:5000"
-   # Message queue used by the layout, custom API, and custom supervised containers
-   rabbitmq:
-     container_name: ${RABBITMQ_HOSTNAME}
-     image: rabbitmq:3
-     expose:
-       - "5672"
-   layout:
-     container_name: azure-cognitive-service-layout
-     image: mcr.microsoft.com/azure-cognitive-services/form-recognizer/layout
-     depends_on:
-       - rabbitmq
-     environment:
-       eula: accept
-       apikey: ${FORM_RECOGNIZER_KEY}
-       billing: ${FORM_RECOGNIZER_ENDPOINT_URI}
-       Queue:RabbitMQ:HostName: ${RABBITMQ_HOSTNAME}
-       Queue:RabbitMQ:Port: ${RABBITMQ_PORT}
-       Logging:Console:LogLevel:Default: Information
-       SharedRootFolder: /shared
-       Mounts:Shared: /shared
-       Mounts:Output: /logs
-     volumes:
-       - type: bind
-         source: ${SHARED_MOUNT_PATH}
-         target: /shared
-       - type: bind
-         source: ${OUTPUT_MOUNT_PATH}
-         target: /logs
-     expose:
-       - "5000"
-
-   custom-api:
-     container_name: azure-cognitive-service-custom-api
-     image: mcr.microsoft.com/azure-cognitive-services/form-recognizer/custom-api
-     restart: always
-     depends_on:
-       - rabbitmq
-     environment:
-       eula: accept
-       apikey: ${FORM_RECOGNIZER_KEY}
-       billing: ${FORM_RECOGNIZER_ENDPOINT_URI}
-       Logging:Console:LogLevel:Default: Information
-       Queue:RabbitMQ:HostName: ${RABBITMQ_HOSTNAME}
-       Queue:RabbitMQ:Port: ${RABBITMQ_PORT}
-       SharedRootFolder: /shared
-       Mounts:Shared: /shared
-       Mounts:Output: /logs
-     volumes:
-       - type: bind
-         source: ${SHARED_MOUNT_PATH}
-         target: /shared
-       - type: bind
-         source: ${OUTPUT_MOUNT_PATH}
-         target: /logs
-     expose:
-       - "5000"
-
-   custom-supervised:
-     container_name: azure-cognitive-service-custom-supervised
-     image: mcr.microsoft.com/azure-cognitive-services/form-recognizer/custom-supervised
-     restart: always
-     depends_on:
-       - rabbitmq
-     environment:
-       eula: accept
-       apikey: ${FORM_RECOGNIZER_KEY}
-       billing: ${FORM_RECOGNIZER_ENDPOINT_URI}
-       CustomFormRecognizer:ContainerPhase: All
-       CustomFormRecognizer:LayoutAnalyzeUri: http://azure-cognitive-service-layout:5000/formrecognizer/v2.1/layout/analyze
-       Logging:Console:LogLevel:Default: Information
-       Queue:RabbitMQ:HostName: ${RABBITMQ_HOSTNAME}
-       Queue:RabbitMQ:Port: ${RABBITMQ_PORT}
-       SharedRootFolder: /shared
-       Mounts:Shared: /shared
-       Mounts:Output: /logs
-     volumes:
-       - type: bind
-         source: ${SHARED_MOUNT_PATH}
-         target: /shared
-       - type: bind
-         source: ${OUTPUT_MOUNT_PATH}
-         target: /logs
- ```
-
-### Ensure the service is running
-
-To ensure that the service is up and running, run these commands in an Ubuntu shell:
-
-```bash
-cd <folder containing the docker-compose file>
-
-source .env
-
-docker-compose up
-```
-
-### Create a new connection
-
-* On the left pane of the tool, select the **connections** tab.
-* Select **create a new project** and give it a name and description.
-* For the provider, choose the **local file system** option. For the local folder, make sure you enter the path to the folder where you stored the **sample data** files.
-* Navigate back to the home tab and select **Use custom to train a model with labels and key-value pairs**.
-* Select the **train button** on the left pane to train the labeled model.
-* **Save** this connection and use it to label your requests.
-* You can choose to analyze the file of your choice against the trained model.
-
-## The Sample Labeling tool and Azure Container Instances (ACI)
-
-To learn how to use the Sample Labeling tool with an Azure Container Instance, *see* [Deploy the Sample Labeling tool](../deploy-label-tool.md#deploy-with-azure-container-instances-aci).
----
-## Validate that the service is running
-
-There are several ways to validate that the container is running:
-
-* The container provides a homepage at `/` as a visual validation that the container is running.
-
-* You can open your favorite web browser and navigate to the external IP address and exposed port of the container in question. Use the listed request URLs to validate that the container is running. The example request URLs use `http://localhost:5000`, but the host and port for your specific container may vary. Keep in mind that you're navigating to your container's **External IP address** and exposed port. A quick cURL check is shown after the table.
-
- Request URL | Purpose
- -- | --
- |**http://<span></span>localhost:5000/** | The container provides a home page.
- |**http://<span></span>localhost:5000/ready** | Requested with GET, this request provides a verification that the container is ready to accept a query against the model. This request can be used for Kubernetes liveness and readiness probes.
- |**http://<span></span>localhost:5000/status** | Requested with GET, this request verifies if the api-key used to start the container is valid without causing an endpoint query. This request can be used for Kubernetes liveness and readiness probes.
- |**http://<span></span>localhost:5000/swagger** | The container provides a full set of documentation for the endpoints and a Try it out feature. With this feature, you can enter your settings into a web-based HTML form and make the query without having to write any code. After the query returns, an example CURL command is provided to demonstrate the HTTP headers and body format that's required.
- |
--
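-
-For example, you can confirm readiness from the host with cURL. The following is a minimal sketch that assumes the default `localhost:5000` mapping used in the table; substitute your container's external IP address and exposed port as needed.
-
-```bash
-# Minimal sketch: query the readiness and status endpoints listed in the table above.
-# Assumes the container is reachable at localhost:5000; adjust the host and port as needed.
-curl -i http://localhost:5000/ready
-curl -i http://localhost:5000/status
-```
-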
-## Stop the containers
-
-To stop the containers, use the following command:
-
-```console
-docker-compose down
-```
-
-## Billing
-
-The Form Recognizer containers send billing information to Azure by using a Form Recognizer resource on your Azure account.
-
-Queries to the container are billed at the pricing tier of the Azure resource that's used for the `Key`. You're billed for each container instance used to process your documents and images.
-
-> [!NOTE]
-> Currently, Form Recognizer v3 containers only support pay as you go pricing. Support for commitment tiers and disconnected mode will be added in March 2023.
-
-Azure Cognitive Services containers aren't licensed to run without being connected to the metering / billing endpoint. Containers must be enabled to always communicate billing information with the billing endpoint. Cognitive Services containers don't send customer data, such as the image or text that's being analyzed, to Microsoft.
-
-If you use the business card feature, you're billed for the Form Recognizer `BusinessCard` and `Computer Vision Read` container instances. For the invoice feature, you're billed for the Form Recognizer `Invoice` and `Layout` container instances. *See* [Form Recognizer](https://azure.microsoft.com/pricing/details/form-recognizer/) and Computer Vision [Read feature](https://azure.microsoft.com/pricing/details/cognitive-services/computer-vision/) container pricing.
-
-### Connect to Azure
-
-The container needs the billing argument values to run. These values allow the container to connect to the billing endpoint. The container reports usage about every 10 to 15 minutes. If the container doesn't connect to Azure within the allowed time window, the container continues to run, but doesn't serve queries until the billing endpoint is restored. The connection is attempted 10 times at the same time interval of 10 to 15 minutes. If it can't connect to the billing endpoint within the 10 tries, the container stops serving requests. See the [Cognitive Services container FAQ](../../../cognitive-services/containers/container-faq.yml#how-does-billing-work) for an example of the information sent to Microsoft for billing.
-
-### Billing arguments
-
-The [**docker-compose up**](https://docs.docker.com/engine/reference/commandline/compose_up/) command starts the container when all three of the following options are provided with valid values:
-
-| Option | Description |
-|--|-|
-| `ApiKey` | The key of the Cognitive Services resource that's used to track billing information.<br/>The value of this option must be set to a key for the provisioned resource that's specified in `Billing`. |
-| `Billing` | The endpoint of the Cognitive Services resource that's used to track billing information.<br/>The value of this option must be set to the endpoint URI of a provisioned Azure resource.|
-| `Eula` | Indicates that you accepted the license for the container.<br/>The value of this option must be set to **accept**. |
-
-For more information about these options, see [Configure containers](form-recognizer-container-configuration.md).
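-
-The same three values are supplied through the `environment:` section of the docker-compose file earlier in this article. As a hedged, standalone sketch, they map onto a plain `docker run` command as follows; the Layout image and the values in braces are illustrative placeholders, not values generated in this article.
-
-```bash
-# Minimal sketch: the three required billing options (EULA, billing endpoint, API key).
-# Replace the {PLACEHOLDER} values with your own Form Recognizer endpoint and key.
-docker run --rm -it -p 5000:5000 \
-mcr.microsoft.com/azure-cognitive-services/form-recognizer/layout \
-eula=accept \
-billing={FORM_RECOGNIZER_ENDPOINT_URI} \
-apikey={FORM_RECOGNIZER_KEY}
-```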
-
-## Summary
-
-That's it! In this article, you learned concepts and workflows for downloading, installing, and running Form Recognizer containers. In summary:
-
-* Form Recognizer provides seven Linux containers for Docker.
-* Container images are downloaded from the Microsoft Container Registry (MCR).
-* Container images run in Docker.
-* The billing information must be specified when you instantiate a container.
-
-> [!IMPORTANT]
-> Cognitive Services containers are not licensed to run without being connected to Azure for metering. Customers need to enable the containers to communicate billing information with the metering service at all times. Cognitive Services containers do not send customer data (for example, the image or text that is being analyzed) to Microsoft.
-
-## Next steps
-
-* [Form Recognizer container configuration settings](form-recognizer-container-configuration.md)
-
-* [Azure container instance recipe](../../../cognitive-services/containers/azure-container-instance-recipe.md)
applied-ai-services Form Recognizer Disconnected Containers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/containers/form-recognizer-disconnected-containers.md
- Title: Use Form Recognizer containers in disconnected environments-
-description: Learn how to run Azure Cognitive Services Docker containers disconnected from the internet.
----- Previously updated : 03/02/2023-
-monikerRange: 'form-recog-2.1.0'
--
-# Form Recognizer containers in disconnected environments
-
-**This article applies to:** ![Form Recognizer v2.1 checkmark](../media/yes-icon.png) **Form Recognizer v2.1**.
-
-<!-- markdownlint-disable MD036 -->
-<!-- markdownlint-disable MD001 -->
-
-Azure Cognitive Services Form Recognizer containers allow you to use Form Recognizer APIs with the benefits of containerization. Disconnected containers are offered through commitment tier pricing offered at a discounted rate compared to pay-as-you-go pricing. With commitment tier pricing, you can commit to using Form Recognizer features for a fixed fee, at a predictable total cost, based on the needs of your workload.
-
-## Get started
-
-Before attempting to run a Docker container in an offline environment, make sure you're familiar with the following requirements to successfully download and use the container:
-
-* Host computer requirements and recommendations.
-* The Docker `pull` command to download the container.
-* How to validate that a container is running.
-* How to send queries to the container's endpoint, once it's running.
-
-## Request access to use containers in disconnected environments
-
-Before you can use Form Recognizer containers in disconnected environments, you must first fill out and [submit a request form](../../../cognitive-services/containers/disconnected-containers.md#request-access-to-use-containers-in-disconnected-environments) and [purchase a commitment plan](../../../cognitive-services/containers/disconnected-containers.md#purchase-a-commitment-plan-to-use-containers-in-disconnected-environments).
-
-## Gather required parameters
-
-There are three required parameters for all Cognitive Services' containers:
-
-* The end-user license agreement (EULA) must be present with a value of *accept*.
-* The endpoint URL for your resource from the Azure portal.
-* The API key for your resource from the Azure portal.
-
-Both the endpoint URL and API key are needed when you first run the container to configure it for disconnected usage. You can find the key and endpoint on the **Key and endpoint** page for your resource in the Azure portal:
-
- :::image type="content" source="../media/containers/keys-and-endpoint.png" alt-text="Screenshot: Azure portal keys and endpoint page.":::
-
-> [!IMPORTANT]
-> You will only use your key and endpoint to configure the container to run in a disconnected environment. After you configure the container, you won't need the key and endpoint values to send API requests. Store them securely, for example, using Azure Key Vault. Only one key is necessary for this process.
-
-## Download a Docker container with `docker pull`
-
-Download the Docker container that has been approved to run in a disconnected environment. For example:
-
-|Docker pull command | Value |Format|
-|-|-||
-|&bullet; **`docker pull [image]`**</br>&bullet; **`docker pull [image]:latest`**|The latest container image.|&bullet; mcr.microsoft.com/azure-cognitive-services/form-recognizer/layout</br> </br>&bullet; mcr.microsoft.com/azure-cognitive-services/form-recognizer/invoice:latest |
-|||
-|&bullet; **`docker pull [image]:[version]`** | A specific container image |docker pull mcr.microsoft.com/azure-cognitive-services/form-recognizer/receipt:2.1-preview |
-
- **Example Docker pull command**
-
-```docker
-docker pull mcr.microsoft.com/azure-cognitive-services/form-recognizer/invoice:latest
-```
-
-## Configure the container to be run in a disconnected environment
-
-Now that you've downloaded your container, you need to execute the `docker run` command with the following parameter:
-
-* **`DownloadLicense=True`**. This parameter downloads a license file that enables your Docker container to run when it isn't connected to the internet. It also contains an expiration date, after which the license file is invalid to run the container. You can only use the license file in the corresponding approved container.
-
-> [!IMPORTANT]
->The `docker run` command will generate a template that you can use to run the container. The template contains parameters you'll need for the downloaded models and configuration file. Make sure you save this template.
-
-The following example shows the formatting for the `docker run` command to use with placeholder values. Replace these placeholder values with your own values.
-
-| Placeholder | Value | Format or example |
-|-|-||
-| `{IMAGE}` | The container image you want to use. | `mcr.microsoft.com/azure-cognitive-services/form-recognizer/invoice` |
-| `{LICENSE_MOUNT}` | The path where the license is downloaded, and mounted. | `/host/license:/path/to/license/directory` |
-| `{ENDPOINT_URI}` | The endpoint for authenticating your service request. You can find it on your resource's **Key and endpoint** page, on the Azure portal. | `https://<your-custom-subdomain>.cognitiveservices.azure.com` |
-| `{API_KEY}` | The key for your Form Recognizer resource. You can find it on your resource's **Key and endpoint** page, on the Azure portal. |`{string}`|
-| `{CONTAINER_LICENSE_DIRECTORY}` | Location of the license folder on the container's local filesystem. | `/path/to/license/directory` |
-
- **Example `docker run` command**
-
-```docker
-docker run --rm -it -p 5000:5000 \
--v {LICENSE_MOUNT} \
-{IMAGE} \
-eula=accept \
-billing={ENDPOINT_URI} \
-apikey={API_KEY} \
-DownloadLicense=True \
-Mounts:License={CONTAINER_LICENSE_DIRECTORY}
-```
-
-After you've configured the container, use the next section to run the container in your environment with the license, and appropriate memory and CPU allocations.
-
-## Form Recognizer container models and configuration
-
-After you've [configured the container](#configure-the-container-to-be-run-in-a-disconnected-environment), the values for the downloaded form recognizer models and container configuration will be generated and displayed in the container output.
-
-## Run the container in a disconnected environment
-
-Once the license file has been downloaded, you can run the container in a disconnected environment with your license, appropriate memory, and suitable CPU allocations. The following example shows the formatting of the `docker run` command with placeholder values. Replace these placeholder values with your own values.
-
-Whenever the container is run, the license file must be mounted to the container and the location of the license folder on the container's local filesystem must be specified with `Mounts:License=`. In addition, an output mount must be specified so that billing usage records can be written.
-
-| Placeholder | Value | Format or example |
-|-|-||
-| `{IMAGE}` | The container image you want to use. | `mcr.microsoft.com/azure-cognitive-services/form-recognizer/invoice` |
-| `{MEMORY_SIZE}` | The appropriate size of memory to allocate for your container. | `4g` |
-| `{NUMBER_CPUS}` | The appropriate number of CPUs to allocate for your container. | `4` |
-| `{LICENSE_MOUNT}` | The path where the license is located and mounted. | `/host/license:/path/to/license/directory` |
-| `{OUTPUT_PATH}` | The output path for logging [usage records](#usage-records). | `/host/output:/path/to/output/directory` |
-| `{CONTAINER_LICENSE_DIRECTORY}` | Location of the license folder on the container's local filesystem. | `/path/to/license/directory` |
-| `{CONTAINER_OUTPUT_DIRECTORY}` | Location of the output folder on the container's local filesystem. | `/path/to/output/directory` |
-
- **Example `docker run` command**
-
-```docker
-docker run --rm -it -p 5000:5000 --memory {MEMORY_SIZE} --cpus {NUMBER_CPUS} \
--v {LICENSE_MOUNT} \
--v {OUTPUT_PATH} \
-{IMAGE} \
-eula=accept \
-Mounts:License={CONTAINER_LICENSE_DIRECTORY} \
-Mounts:Output={CONTAINER_OUTPUT_DIRECTORY}
-```
-
-## Other parameters and commands
-
-Here are a few more parameters and commands you may need to run the container.
-
-#### Usage records
-
-When operating Docker containers in a disconnected environment, the container will write usage records to a volume where they're collected over time. You can also call a REST API endpoint to generate a report about service usage.
-
-#### Arguments for storing logs
-
-When run in a disconnected environment, an output mount must be available to the container to store usage logs. For example, include `-v /host/output:{OUTPUT_PATH}` and `Mounts:Output={OUTPUT_PATH}` in your `docker run` command, replacing `{OUTPUT_PATH}` with the path where the logs are stored:
-
-```Docker
-docker run -v /host/output:{OUTPUT_PATH} ... <image> ... Mounts:Output={OUTPUT_PATH}
-```
-
-#### Get records using the container endpoints
-
-The container provides two endpoints for returning records about its usage.
-
-#### Get all records
-
-The following endpoint provides a report summarizing all of the usage collected in the mounted billing record directory.
-
-```http
-https://<service>/records/usage-logs/
-```
-
- **Example HTTPS endpoint**
-
- `http://localhost:5000/records/usage-logs`
-
-The usage-log endpoint returns a JSON response similar to the following example:
-
-```json
-{
- "apiType": "string",
- "serviceName": "string",
- "meters": [
- {
- "name": "string",
- "quantity": 256345435
- }
- ]
-}
-```
-
-#### Get records for a specific month
-
-The following endpoint provides a report summarizing usage over a specific month and year.
-
-```HTTP
-https://<service>/records/usage-logs/{MONTH}/{YEAR}
-```
-
-This usage-logs endpoint returns a JSON response similar to the following example:
-
-```json
-{
- "apiType": "string",
- "serviceName": "string",
- "meters": [
- {
- "name": "string",
- "quantity": 56097
- }
- ]
-}
-```
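-
-For example, here's a minimal cURL sketch that calls both endpoints. It assumes the container is reachable at `localhost:5000`; adjust the host, port, month, and year for your environment.
-
-```bash
-# Minimal sketch: retrieve usage records from a disconnected container.
-
-# All usage collected in the mounted billing record directory
-curl -X GET "http://localhost:5000/records/usage-logs"
-
-# Usage for a specific month and year (for example, March 2023)
-curl -X GET "http://localhost:5000/records/usage-logs/03/2023"
-```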
-
-## Troubleshooting
-
-Run the container with an output mount and logging enabled. These settings enable the container to generate log files that are helpful for troubleshooting issues that occur while starting or running the container.
-
-> [!TIP]
-> For more troubleshooting information and guidance, see [Disconnected containers Frequently asked questions (FAQ)](../../../cognitive-services/containers/disconnected-container-faq.yml).
-
-## Next steps
-
-* [Deploy the Sample Labeling tool to an Azure Container Instance (ACI)](../deploy-label-tool.md#deploy-with-azure-container-instances-aci)
-* [Change or end a commitment plan](../../../cognitive-services/containers/disconnected-containers.md#purchase-a-different-commitment-plan-for-disconnected-containers)
applied-ai-services Create A Form Recognizer Resource https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/create-a-form-recognizer-resource.md
- Title: How to create a Form Recognizer resource-
-description: Create a Form Recognizer resource in the Azure portal
----- Previously updated : 04/17/2023-
-monikerRange: '>=form-recog-2.1.0'
--
-# Create a Form Recognizer resource
--
-Azure Form Recognizer is a cloud-based [Azure Applied AI Service](../../applied-ai-services/index.yml) that uses machine-learning models to extract key-value pairs, text, and tables from your documents. In this article, learn how to create a Form Recognizer resource in the Azure portal.
-
-## Visit the Azure portal
-
-The Azure portal is a single platform you can use to create and manage Azure services.
-
-Let's get started:
-
-1. Navigate to the Azure portal home page: [Azure home page](https://portal.azure.com/#home).
-
-1. Select **Create a resource** from the Azure home page.
-
-1. Search for and choose **Form Recognizer** from the search bar.
-
-1. Select the **Create** button.
-
- :::image border="true" type="content" source="media/logic-apps-tutorial/logic-app-connector-demo-one.gif" alt-text="Gif showing how to create a Form Recognizer resource.":::
-
-## Create a resource
-
-1. Next, you're going to fill out the **Create Form Recognizer** fields with the following values:
-
- * **Subscription**. Select your current subscription.
- * **Resource group**. The [Azure resource group](/azure/cloud-adoption-framework/govern/resource-consistency/resource-access-management#what-is-an-azure-resource-group) that contains your resource. You can create a new group or add it to a pre-existing group.
- * **Region**. Select your local region.
- * **Name**. Enter a name for your resource. We recommend using a descriptive name, for example *YourNameFormRecognizer*.
- * **Pricing tier**. The cost of your resource depends on the pricing tier you choose and your usage. For more information, see [pricing details](https://azure.microsoft.com/pricing/details/cognitive-services/). You can use the free pricing tier (F0) to try the service, and upgrade later to a paid tier for production.
-
-1. Select **Review + Create**.
-
- :::image type="content" source="media/logic-apps-tutorial/logic-app-connector-demo-two.png" alt-text="Still image showing the correct values for creating Form Recognizer resource.":::
-
-1. Azure runs a quick validation check. After a few seconds, you should see a green banner that says **Validation Passed**.
-
-1. Once the validation banner appears, select the **Create** button from the bottom-left corner.
-
-1. After you select **Create**, you'll be redirected to a new page that says **Deployment in progress**. After a few seconds, you'll see a message that says **Your deployment is complete**.
--
-## Get Endpoint URL and keys
-
-1. Once you receive the *deployment is complete* message, select the **Go to resource** button.
-
-1. Copy the key and endpoint values from your Form Recognizer resource and paste them in a convenient location, such as *Microsoft Notepad*. You need the key and endpoint values to connect your application to the Form Recognizer API.
-
-1. If your overview page doesn't have the keys and endpoint visible, you can select the **Keys and Endpoint** button, on the left navigation bar, and retrieve them there.
-
- :::image border="true" type="content" source="media/containers/keys-and-endpoint.png" alt-text="Still photo showing how to access resource key and endpoint URL":::
-
-That's it! You're now ready to start automating data extraction using Azure Form Recognizer.
-
-## Next steps
-
-* Try the [Form Recognizer Studio](concept-form-recognizer-studio.md), an online tool for visually exploring, understanding, and integrating features from the Form Recognizer service into your applications.
-
-* Complete a Form Recognizer quickstart and get started creating a document processing app in the development language of your choice:
-
- * [C#](quickstarts/get-started-sdks-rest-api.md?view=form-recog-3.0.0&preserve-view=true)
- * [Python](quickstarts/get-started-sdks-rest-api.md?view=form-recog-3.0.0&preserve-view=true)
- * [Java](quickstarts/get-started-sdks-rest-api.md?view=form-recog-3.0.0&preserve-view=true)
- * [JavaScript](quickstarts/get-started-sdks-rest-api.md?view=form-recog-3.0.0&preserve-view=true)
applied-ai-services Create Sas Tokens https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/create-sas-tokens.md
- Title: Create shared access signature (SAS) tokens for your storage containers and blobs
-description: How to create Shared Access Signature tokens (SAS) for containers and blobs with Microsoft Storage Explorer and the Azure portal.
----- Previously updated : 06/19/2023-
-monikerRange: '>=form-recog-2.1.0'
--
-# Create SAS tokens for storage containers
--
- In this article, learn how to create user delegation shared access signature (SAS) tokens using the Azure portal or Azure Storage Explorer. User delegation SAS tokens are secured with Azure AD credentials. SAS tokens provide secure, delegated access to resources in your Azure storage account.
--
-At a high level, here's how SAS tokens work:
-
-* Your application submits the SAS token to Azure Storage as part of a REST API request.
-
-* If the storage service verifies that the SAS is valid, the request is authorized.
-
-* If the SAS token is deemed invalid, the request is declined and the error code 403 (Forbidden) is returned.
-
-Azure Blob Storage offers three resource types:
-
-* **Storage** accounts provide a unique namespace in Azure for your data.
-* **Data storage containers** are located in storage accounts and organize sets of blobs.
-* **Blobs** are located in containers and store text and binary data such as files, text, and images.
-
-## When to use a SAS token
-
-* **Training custom models**. Your assembled set of training documents *must* be uploaded to an Azure Blob Storage container. You can opt to use a SAS token to grant access to your training documents.
-
-* **Using storage containers with public access**. You can opt to use a SAS token to grant limited access to your storage resources that have public read access.
-
- > [!IMPORTANT]
- >
- > * If your Azure storage account is protected by a virtual network or firewall, you can't grant access with a SAS token. You'll have to use a [managed identity](managed-identities.md) to grant access to your storage resource.
- >
- > * [Managed identity](managed-identities-secured-access.md) supports both privately and publicly accessible Azure Blob Storage accounts.
- >
- > * SAS tokens grant permissions to storage resources, and should be protected in the same manner as an account key.
- >
- > * Operations that use SAS tokens should be performed only over an HTTPS connection, and SAS URIs should only be distributed on a secure connection such as HTTPS.
-
-## Prerequisites
-
-To get started, you need:
-
-* An active [Azure account](https://azure.microsoft.com/free/cognitive-services/). If you don't have one, you can [create a free account](https://azure.microsoft.com/free/).
-
-* A [Form Recognizer](https://portal.azure.com/#create/Microsoft.CognitiveServicesFormRecognizer) or [Cognitive Services multi-service](https://portal.azure.com/#create/Microsoft.CognitiveServicesAllInOne) resource.
-
-* A **standard performance** [Azure Blob Storage account](https://portal.azure.com/#create/Microsoft.StorageAccount-ARM). You need to create containers to store and organize your blob data within your storage account. If you don't know how to create an Azure storage account with a storage container, follow these quickstarts:
-
- * [Create a storage account](../../storage/common/storage-account-create.md). When you create your storage account, select **Standard** performance in the **Instance details** > **Performance** field.
- * [Create a container](../../storage/blobs/storage-quickstart-blobs-portal.md#create-a-container). When you create your container, set **Public access level** to **Container** (anonymous read access for containers and blobs) in the **New Container** window.
-
-## Upload your documents
-
-1. Go to the [Azure portal](https://portal.azure.com/#home).
- * Select **Your storage account** → **Data storage** → **Containers**.
-
- :::image type="content" source="media/sas-tokens/data-storage-menu.png" alt-text="Screenshot that shows the Data storage menu in the Azure portal.":::
-
-1. Select a container from the list.
-
-1. Select **Upload** from the menu at the top of the page.
-
- :::image type="content" source="media/sas-tokens/container-upload-button.png" alt-text="Screenshot that shows the container Upload button in the Azure portal.":::
-
-1. The **Upload blob** window appears. Select your files to upload.
-
- :::image type="content" source="media/sas-tokens/upload-blob-window.png" alt-text="Screenshot that shows the Upload blob window in the Azure portal.":::
-
- > [!NOTE]
- > By default, the REST API uses form documents located at the root of your container. You can also use data organized in subfolders if specified in the API call. For more information, see [Organize your data in subfolders](./build-training-data-set.md#organize-your-data-in-subfolders-optional).
-
-## Use the Azure portal
-
-The Azure portal is a web-based console that enables you to manage your Azure subscription and resources using a graphical user interface (GUI).
-
-1. Go to the [Azure portal](https://portal.azure.com/#home) and navigate as follows:
-
- * **Your storage account** → **containers** → **your container**.
-
-1. Select **Generate SAS** from the menu near the top of the page.
-
-1. Select **Signing method** → **User delegation key**.
-
-1. Define **Permissions** by selecting or clearing the appropriate checkbox.</br>
-
- * Make sure the **Read**, **Write**, **Delete**, and **List** permissions are selected.
-
- :::image type="content" source="media/sas-tokens/sas-permissions.png" alt-text="Screenshot that shows the SAS permission fields in the Azure portal.":::
-
- >[!IMPORTANT]
- >
- > * If you receive a message similar to the following one, you'll also need to assign access to the blob data in your storage account:
- >
- > :::image type="content" source="media/sas-tokens/need-permissions.png" alt-text="Screenshot that shows the lack of permissions warning.":::
- >
- > * [Azure role-based access control](../../role-based-access-control/overview.md) (Azure RBAC) is the authorization system used to manage access to Azure resources. Azure RBAC helps you manage access and permissions for your Azure resources.
- > * [Assign an Azure role for access to blob data](../../role-based-access-control/role-assignments-portal.md?tabs=current) to assign a role that allows for read, write, and delete permissions for your Azure storage container. *See* [Storage Blob Data Contributor](../../role-based-access-control/built-in-roles.md#storage-blob-data-contributor).
-
-1. Specify the signed key **Start** and **Expiry** times.
-
- * When you create a SAS token, the default duration is 48 hours. After 48 hours, you'll need to create a new token.
- * Consider setting a longer duration period for the time you're using your storage account for Form Recognizer Service operations.
- * The value of the expiry time is determined by whether you're using an **Account key** or **User delegation key** **Signing method**:
- * **Account key**: There's no imposed maximum time limit; however, best practices recommended that you configure an expiration policy to limit the interval and minimize compromise. [Configure an expiration policy for shared access signatures](/azure/storage/common/sas-expiration-policy).
- * **User delegation key**: The value for the expiry time is a maximum of seven days from the creation of the SAS token. The SAS is invalid after the user delegation key expires, so a SAS with an expiry time of greater than seven days will still only be valid for seven days. For more information,*see* [Use Azure AD credentials to secure a SAS](/azure/storage/blobs/storage-blob-user-delegation-sas-create-cli#use-azure-ad-credentials-to-secure-a-sas).
-
-1. The **Allowed IP addresses** field is optional and specifies an IP address or a range of IP addresses from which to accept requests. If the request IP address doesn't match the IP address or address range specified on the SAS token, authorization fails. The IP address or a range of IP addresses must be public IPs, not private. For more information, *see* [**Specify an IP address or IP range**](/rest/api/storageservices/create-account-sas#specify-an-ip-address-or-ip-range).
-
-1. The **Allowed protocols** field is optional and specifies the protocol permitted for a request made with the SAS token. The default value is HTTPS.
-
-1. Select **Generate SAS token and URL**.
-
-1. The **Blob SAS token** query string and **Blob SAS URL** appear in the lower area of the window. To use the Blob SAS token, append it to a storage service URI.
-
-1. Copy and paste the **Blob SAS token** and **Blob SAS URL** values in a secure location. They're displayed only once and can't be retrieved after the window is closed.
-
-1. To [construct a SAS URL](#use-your-sas-url-to-grant-access), append the SAS token (URI) to the URL for a storage service, as shown in the example after these steps.
-
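-As a minimal sketch, constructing the SAS URL is a string concatenation. The account, container, and token values below are illustrative placeholders, not values generated in the preceding steps.
-
-```bash
-# Minimal sketch: a SAS URL is the container (or blob) URL followed by the SAS token query string.
-CONTAINER_URL="https://<storage-account>.blob.core.windows.net/<container-name>"
-SAS_TOKEN="<blob-SAS-token-query-string>"
-SAS_URL="${CONTAINER_URL}?${SAS_TOKEN}"
-echo "${SAS_URL}"
-```
-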
-## Use Azure Storage Explorer
-
-Azure Storage Explorer is a free standalone app that enables you to easily manage your Azure cloud storage resources from your desktop.
-
-### Get started
-
-* You need the [**Azure Storage Explorer**](../../vs-azure-tools-storage-manage-with-storage-explorer.md) app installed in your Windows, macOS, or Linux development environment.
-
-* After the Azure Storage Explorer app is installed, [connect it to the storage account](../../vs-azure-tools-storage-manage-with-storage-explorer.md?tabs=windows#connect-to-a-storage-account-or-service) you're using for Form Recognizer.
-
-### Create your SAS tokens
-
-1. Open the Azure Storage Explorer app on your local machine and navigate to your connected **Storage Accounts**.
-1. Expand the Storage Accounts node and select **Blob Containers**.
-1. Expand the Blob Containers node and right-click a storage **container** node to display the options menu.
-1. Select **Get Shared Access Signature** from the options menu.
-1. In the **Shared Access Signature** window, make the following selections:
- * Select your **Access policy** (the default is none).
- * Specify the signed key **Start** and **Expiry** date and time. A short lifespan is recommended because, once generated, a SAS can't be revoked.
- * Select the **Time zone** for the Start and Expiry date and time (default is Local).
- * Define your container **Permissions** by selecting the **Read**, **Write**, **List**, and **Delete** checkboxes.
- * Select **key1** or **key2**.
- * Review and select **Create**.
-
-1. A new window appears with the **Container** name, **SAS URL**, and **Query string** for your container.
-
-1. **Copy and paste the SAS URL and query string values in a secure location. They'll only be displayed once and can't be retrieved once the window is closed.**
-
-1. To [construct a SAS URL](#use-your-sas-url-to-grant-access), append the SAS token (URI) to the URL for a storage service.
-
-## Use your SAS URL to grant access
-
-The SAS URL includes a special set of [query parameters](/rest/api/storageservices/create-user-delegation-sas#assign-permissions-with-rbac). Those parameters indicate how the client accesses the resources.
-
-### REST API
-
-To use your SAS URL with the [REST API](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-2022-08-31/operations/BuildDocumentModel), add the SAS URL to the request body:
-
- ```json
- {
- "source":"<BLOB SAS URL>"
- }
- ```
-
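-For example, here's a hedged cURL sketch that passes the SAS URL as the `source` value of the v2.1 Train Custom Model request (`/formrecognizer/v2.1/custom/models`). The `{YOUR-ENDPOINT}`, `{YOUR-KEY}`, and `<BLOB SAS URL>` placeholders are assumptions that you replace with your own values.
-
-```bash
-# Minimal sketch: train a v2.1 custom model from a blob container secured with a SAS URL.
-curl -i -X POST "{YOUR-ENDPOINT}/formrecognizer/v2.1/custom/models" \
-  -H "Content-Type: application/json" \
-  -H "Ocp-Apim-Subscription-Key: {YOUR-KEY}" \
-  --data-ascii '{ "source": "<BLOB SAS URL>" }'
-```
-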
-That's it! You've learned how to create SAS tokens to authorize how clients access your data.
-
-## Next step
-
-> [!div class="nextstepaction"]
-> [Build a training data set](build-training-data-set.md)
applied-ai-services Deploy Label Tool https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/deploy-label-tool.md
- Title: How to deploy the Form Recognizer Sample Labeling tool-
-description: Learn the different ways you can deploy the Form Recognizer Sample Labeling tool to help with supervised learning.
----- Previously updated : 01/09/2023-
-monikerRange: 'form-recog-2.1.0'
--
-# Deploy the Sample Labeling tool
-
-**This article applies to:** ![Form Recognizer v2.1 checkmark](media/yes-icon.png) **Form Recognizer v2.1**.
-
->[!TIP]
->
-> * For an enhanced experience and advanced model quality, try the [Form Recognizer v3.0 Studio](https://formrecognizer.appliedai.azure.com/studio).
-> * The v3.0 Studio supports any model trained with v2.1 labeled data.
-> * You can refer to the [API migration guide](v3-migration-guide.md) for detailed information about migrating from v2.1 to v3.0.
-> * *See* our [**REST API**](quickstarts/get-started-sdks-rest-api.md?view=form-recog-3.0.0&preserve-view=true) or [**C#**](quickstarts/get-started-sdks-rest-api.md?view=form-recog-3.0.0&preserve-view=true), [**Java**](quickstarts/get-started-sdks-rest-api.md?view=form-recog-3.0.0&preserve-view=true), [**JavaScript**](quickstarts/get-started-sdks-rest-api.md?view=form-recog-3.0.0&preserve-view=true), or [Python](quickstarts/get-started-sdks-rest-api.md?view=form-recog-3.0.0&preserve-view=true) SDK quickstarts to get started with the v3.0 version.
-
-> [!NOTE]
-> The [cloud hosted](https://fott-2-1.azurewebsites.net/) labeling tool is available at [https://fott-2-1.azurewebsites.net/](https://fott-2-1.azurewebsites.net/). Follow the steps in this document only if you want to deploy the Sample Labeling tool for yourself.
-
-The Form Recognizer Sample Labeling tool is an application that provides a simple user interface (UI), which you can use to manually label forms (documents) for supervised learning. In this article, we'll provide links and instructions that teach you how to:
-
-* [Run the Sample Labeling tool locally](#run-the-sample-labeling-tool-locally)
-* [Deploy the Sample Labeling tool to an Azure Container Instance (ACI)](#deploy-with-azure-container-instances-aci)
-* [Use and contribute to the open-source OCR Form Labeling Tool](#open-source-on-github)
-
-## Run the Sample Labeling tool locally
-
-The fastest way to start labeling data is to run the Sample Labeling tool locally. The following quickstart uses the Form Recognizer REST API and the Sample Labeling tool to train a custom model with manually labeled data.
-
-* [Get started with Azure Form Recognizer](label-tool.md).
-
-## Deploy with Azure Container Instances (ACI)
-
-Before we get started, it's important to note that there are two ways to deploy the Sample Labeling tool to an Azure Container Instance (ACI). Both options are used to run the Sample Labeling tool with ACI:
-
-* [Using the Azure portal](#azure-portal)
-* [Using the Azure CLI](#azure-cli)
-
-### Azure portal
-
-Follow these steps to create a new resource using the Azure portal:
-
-1. Sign in to the [Azure portal](https://portal.azure.com/signin/index/).
-2. Select **Create a resource**.
-3. Next, select **Web App**.
-
- > [!div class="mx-imgBorder"]
- > ![Select web app](./media/quickstarts/create-web-app.png)
-
-4. First, make sure that the **Basics** tab is selected. Now, you're going to need to provide some information:
-
- > [!div class="mx-imgBorder"]
- > ![Select Basics](./media/quickstarts/select-basics.png)
- * Subscription - Select an existing Azure subscription
- * Resource Group - You can reuse an existing resource group or create a new one for this project. Creating a new resource group is recommended.
- * Name - Give your web app a name.
- * Publish - Select **Docker Container**
- * Operating System - Select **Linux**
- * Region - Choose a region that makes sense for you.
- * Linux Plan - Select a pricing tier/plan for your app service.
-
- > [!div class="mx-imgBorder"]
- > ![Configure your web app](./media/quickstarts/select-docker.png)
-
-5. Next, select the **Docker** tab.
-
- > [!div class="mx-imgBorder"]
- > ![Select Docker](./media/quickstarts/select-docker.png)
-
-6. Now let's configure your Docker container. All fields are required unless otherwise noted:
-<!-- markdownlint-disable MD025 -->
-
-* Options - Select **Single Container**
-* Image Source - Select **Private Registry**
-* Server URL - Set to `https://mcr.microsoft.com`
-* Username (Optional) - Create a username.
-* Password (Optional) - Create a secure password that you'll remember.
-* Image and tag - Set to `mcr.microsoft.com/azure-cognitive-services/custom-form/labeltool:latest-2.1`
-* Continuous Deployment - Set to **On** if you want to receive automatic updates when the development team makes changes to the Sample Labeling tool.
-* Startup command - Set to `./run.sh eula=accept`
-
-> [!div class="mx-imgBorder"]
-> ![Configure Docker](./media/quickstarts/configure-docker.png)
-
-* Next, select **Review + Create**, then **Create** to deploy your web app. When complete, you can access your web app at the URL provided in the **Overview** for your resource.
-
-### Continuous deployment
-
-After you've created your web app, you can enable the continuous deployment option:
-
-* From the left pane, choose **Container settings**.
-* In the main window, navigate to Continuous deployment and toggle between the **On** and **Off** buttons to set your preference:
--
-> [!NOTE]
-> When creating your web app, you can also configure authorization/authentication. This is not necessary to get started.
-
-> [!IMPORTANT]
-> You may need to enable TLS for your web app in order to view it at its `https` address. Follow the instructions in [Enable a TLS endpoint](../../container-instances/container-instances-container-group-ssl.md) to set up a sidecar container that enables TLS/SSL for your web app.
-<!-- markdownlint-disable MD001 -->
-### Azure CLI
-
-As an alternative to using the Azure portal, you can create a resource using the Azure CLI. Before you continue, you'll need to install the [Azure CLI](/cli/azure/install-azure-cli). You can skip this step if you're already working with the Azure CLI.
-
-There are a few things you need to know about this command:
-
-* `DNS_NAME_LABEL=aci-demo-$RANDOM` generates a random DNS name.
-* This sample assumes that you have a resource group that you can use to create a resource. Replace `<resource_group_name>` with a valid resource group associated with your subscription.
-* You'll need to specify where you want to create the resource. Replace `<region name>` with your desired region for the web app.
-* This command automatically accepts the EULA.
-
-From the Azure CLI, run this command to create a web app resource for the Sample Labeling tool:
-
-<!-- markdownlint-disable MD024 -->
-
-```azurecli
-DNS_NAME_LABEL=aci-demo-$RANDOM
-
-az container create \
- --resource-group <resource_group_name> \
- --name <name> \
- --image mcr.microsoft.com/azure-cognitive-services/custom-form/labeltool:latest-2.1 \
- --ports 3000 \
- --dns-name-label $DNS_NAME_LABEL \
- --location <region name> \
- --cpu 2 \
- --memory 8 \
- --command-line "./run.sh eula=accept"
-
-```
-
-### Connect to Azure AD for authorization
-
-It's recommended that you connect your web app to Azure Active Directory (Azure AD). This connection ensures that only users with valid credentials can sign in and use your web app. Follow the instructions in [Configure your App Service app](../../app-service/configure-authentication-provider-aad.md) to connect to Azure Active Directory.
-
-## Open source on GitHub
-
-The OCR Form Labeling Tool is also available as an open-source project on GitHub. The tool is a web application built using React + Redux, and is written in TypeScript. To learn more or contribute, see [OCR Form Labeling Tool](https://github.com/microsoft/OCR-Form-Tools/blob/master/README.md).
-
-## Next steps
-
-Use the [Train with labels](label-tool.md) quickstart to learn how to use the tool to manually label training data and perform supervised learning.
applied-ai-services Disaster Recovery https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/disaster-recovery.md
- Title: Disaster recovery guidance for Azure Form Recognizer-
-description: Learn how to use the copy model API to back up your Form Recognizer resources.
----- Previously updated : 10/14/2022--
-<!-- markdownlint-disable MD036 -->
-<!-- markdownlint-disable MD033 -->
-
-# Back up and recover your Form Recognizer models
----
-When you create a Form Recognizer resource in the Azure portal, you specify a region. From then on, your resource and all of its operations stay associated with that particular Azure server region. It's rare, but not impossible, to encounter a network issue that hits an entire region. If your solution needs to always be available, then you should design it to either fail-over into another region or split the workload between two or more regions. Both approaches require at least two Form Recognizer resources in different regions and the ability to sync custom models across regions.
-
-The Copy API enables this scenario by allowing you to copy custom models from one Form Recognizer account into others, which can exist in any supported geographical region. This guide shows you how to use the Copy REST API with cURL. You can also use an HTTP request service like Postman to issue the requests.
-
-## Business scenarios
-
-If your app or business depends on the use of a Form Recognizer custom model, we recommend you copy your model to another Form Recognizer account in another region. If a regional outage occurs, you can then access your model in the region where it was copied.
-
-## Prerequisites
-
-1. Two Form Recognizer Azure resources in different Azure regions. If you don't have them, go to the Azure portal and [create a new Form Recognizer resource](https://portal.azure.com/#create/Microsoft.CognitiveServicesFormRecognizer).
-1. The key, endpoint URL, and subscription ID for your Form Recognizer resource. You can find these values on the resource's **Overview** tab in the [Azure portal](https://portal.azure.com/#home).
---
-## Copy API overview
-
-The process for copying a custom model consists of the following steps:
-
-1. First you issue a copy authorization request to the target resource&mdash;that is, the resource that will receive the copied model. You get back the URL of the newly created target model, which will receive the copied model.
-1. Next you send the copy request to the source resource (the resource that contains the model to be copied) with the payload (copy authorization) returned from the previous call. You'll get back a URL that you can query to track the progress of the operation.
-1. You'll use your source resource credentials to query the progress URL until the operation is a success. With v3.0, you can also query the new model ID in the target resource to get the status of the new model.
-
-## Generate Copy authorization request
-
-The following HTTP request gets copy authorization from your target resource. You'll need to enter the endpoint and key of your target resource as headers.
-
-```http
-POST https://{TARGET_FORM_RECOGNIZER_RESOURCE_ENDPOINT}/formrecognizer/documentModels:authorizeCopy?api-version=2022-08-31
-Ocp-Apim-Subscription-Key: {TARGET_FORM_RECOGNIZER_RESOURCE_KEY}
-```
-
-Request body
-
-```json
-{
- "modelId": "target-model-name",
- "description": "Copied from SCUS"
-}
-```
-
-You'll get a `200` response code with response body that contains the JSON payload required to initiate the copy.
-
-```json
-{
- "targetResourceId": "/subscriptions/{targetSub}/resourceGroups/{targetRG}/providers/Microsoft.CognitiveServices/accounts/{targetService}",
- "targetResourceRegion": "region",
- "targetModelId": "target-model-name",
- "targetModelLocation": "model path",
- "accessToken": "access token",
- "expirationDateTime": "timestamp"
-}
-```
-
-## Start Copy operation
-
-The following HTTP request starts the copy operation on the source resource. You'll need to enter the endpoint and key of your source resource as the url and header. Notice that the request URL contains the model ID of the source model you want to copy.
-
-```http
-POST {{source-endpoint}}formrecognizer/documentModels/{model-to-be-copied}:copyTo?api-version=2022-08-31
-Ocp-Apim-Subscription-Key: {SOURCE_FORM_RECOGNIZER_RESOURCE_KEY}
-```
-
-The body of your request is the response from the previous step.
-
-```json
-{
- "targetResourceId": "/subscriptions/{targetSub}/resourceGroups/{targetRG}/providers/Microsoft.CognitiveServices/accounts/{targetService}",
- "targetResourceRegion": "region",
- "targetModelId": "target-model-name",
- "targetModelLocation": "model path",
- "accessToken": "access token",
- "expirationDateTime": "timestamp"
-}
-```
-
-You'll get a `202\Accepted` response with an Operation-Location header. This value is the URL that you'll use to track the progress of the operation. Copy it to a temporary location for the next step.
-
-```http
-HTTP/1.1 202 Accepted
-Operation-Location: https://{source-resource}.cognitiveservices.azure.com/formrecognizer/operations/{operation-id}?api-version=2022-08-31
-```
-
-> [!NOTE]
-> The Copy API transparently supports the [AEK/CMK](https://msazure.visualstudio.com/Cognitive%20Services/_wiki/wikis/Cognitive%20Services.wiki/52146/Customer-Managed-Keys) feature. This doesn't require any special treatment, but note that if you're copying from an unencrypted resource to an encrypted resource, you need to include the request header `x-ms-forms-copy-degrade: true`. If this header is not included, the copy operation will fail and return a `DataProtectionTransformServiceError`.
-
-## Track Copy progress
-
-```console
-GET https://{source-resource}.cognitiveservices.azure.com/formrecognizer/operations/{operation-id}?api-version=2022-08-31
-Ocp-Apim-Subscription-Key: {SOURCE_FORM_RECOGNIZER_RESOURCE_KEY}
-```
-
-### Track the target model ID
-
-You can also use the **[Get model](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-2022-08-31/operations/GetModel)** API to track the status of the operation by querying the target model. Call the API using the target model ID that you copied down from the [Generate Copy authorization request](#generate-copy-authorization-request) response.
-
-```http
-GET https://{YOUR-ENDPOINT}/formrecognizer/documentModels/{modelId}?api-version=2022-08-31" -H "Ocp-Apim-Subscription-Key: {YOUR-KEY}
-```
-
-In the response body, you'll see information about the model. Check the `"status"` field for the status of the model.
-
-```http
-HTTP/1.1 200 OK
-Content-Type: application/json; charset=utf-8
-{"modelInfo":{"modelId":"33f4d42c-cd2f-4e74-b990-a1aeafab5a5d","status":"ready","createdDateTime":"2020-02-26T16:59:28Z","lastUpdatedDateTime":"2020-02-26T16:59:34Z"},"trainResult":{"trainingDocuments":[{"documentName":"0.pdf","pages":1,"errors":[],"status":"succeeded"},{"documentName":"1.pdf","pages":1,"errors":[],"status":"succeeded"},{"documentName":"2.pdf","pages":1,"errors":[],"status":"succeeded"},{"documentName":"3.pdf","pages":1,"errors":[],"status":"succeeded"},{"documentName":"4.pdf","pages":1,"errors":[],"status":"succeeded"}],"errors":[]}}
-```
-
-## cURL sample code
-
-The following code snippets use cURL to make API calls outlined in the steps above. You'll still need to fill in the model IDs and subscription information specific to your own resources.
-
-### Generate Copy authorization
-
-**Request**
-
-```bash
-curl -i -X POST "{YOUR-ENDPOINT}/formrecognizer/documentModels:authorizeCopy?api-version=2022-08-31" \
--H "Content-Type: application/json" \
--H "Ocp-Apim-Subscription-Key: {YOUR-KEY}" \
---data-ascii "{
-    'modelId': '{modelId}',
-    'description': '{description}'
-}"
-```
-
-**Successful response**
-
-```json
-{
- "targetResourceId": "string",
- "targetResourceRegion": "string",
- "targetModelId": "string",
- "targetModelLocation": "string",
- "accessToken": "string",
- "expirationDateTime": "string"
-}
-```
-
-### Begin Copy operation
-
-**Request**
-
-```bash
-curl -i -X POST "{YOUR-ENDPOINT}/formrecognizer/documentModels/{modelId}:copyTo?api-version=2022-08-31" \
--H "Content-Type: application/json" \
--H "Ocp-Apim-Subscription-Key: {YOUR-KEY}" \
---data-ascii "{
-    'targetResourceId': '{targetResourceId}',
-    'targetResourceRegion': '{targetResourceRegion}',
-    'targetModelId': '{targetModelId}',
-    'targetModelLocation': '{targetModelLocation}',
-    'accessToken': '{accessToken}',
-    'expirationDateTime': '{expirationDateTime}'
-}"
-
-```
-
-**Successful response**
-
-```http
-HTTP/1.1 202 Accepted
-Operation-Location: https://{source-resource}.cognitiveservices.azure.com/formrecognizer/operations/{operation-id}?api-version=2022-08-31
-```
-
-### Track copy operation progress
-
-You can use the [**Get operation**](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-2022-08-31/operations/GetOperation) API to list all document model operations (succeeded, in-progress, or failed) associated with your Form Recognizer resource. Operation information only persists for 24 hours. Here's a list of the operations (operationId) that can be returned:
-
-* documentModelBuild
-* documentModelCompose
-* documentModelCopyTo
-
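-For example, you can poll an individual operation with cURL by using the operation ID from the `Operation-Location` header returned by the copy request. The placeholders below stand in for your own endpoint, key, and operation ID.
-
-```bash
-# Minimal sketch: check the status of a copy (or build/compose) operation.
-curl -i -X GET "{YOUR-ENDPOINT}/formrecognizer/operations/{operation-id}?api-version=2022-08-31" \
-  -H "Ocp-Apim-Subscription-Key: {YOUR-KEY}"
-```
-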
-### Track the target model ID
-
-If the operation was successful, the document model can be accessed using the [**getModel**](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-2022-08-31/operations/GetModel) (get a single model), or [**GetModels**](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-2022-08-31/operations/GetModels) (get a list of models) APIs.
---
-## Copy model overview
-
-The process for copying a custom model consists of the following steps:
-
-1. First you issue a copy authorization request to the target resource&mdash;that is, the resource that will receive the copied model. You get back the URL of the newly created target model, which will receive the copied model.
-1. Next you send the copy request to the source resource (the resource that contains the model to be copied) with the payload (copy authorization) returned from the previous call. You'll get back a URL that you can query to track the progress of the operation.
-1. You'll use your source resource credentials to query the progress URL until the operation is a success.
-
-## Generate authorization request
-
-The following HTTP request generates a copy authorization from your target resource. You'll need to enter the endpoint and key of your target resource as headers.
-
-```http
-POST https://{TARGET_FORM_RECOGNIZER_RESOURCE_ENDPOINT}/formrecognizer/v2.1/custom/models/copyAuthorization
-Ocp-Apim-Subscription-Key: {TARGET_FORM_RECOGNIZER_RESOURCE_KEY}
-```
-
-You'll get a `201\Created` response with a `modelId` value in the body. This string is the ID of the newly created (blank) model. The `accessToken` is needed for the API to copy data to this resource, and the `expirationDateTimeTicks` value is the expiration of the token. Save all three of these values to a secure location.
-
-```http
-HTTP/1.1 201 Created
-Location: https://{TARGET_FORM_RECOGNIZER_RESOURCE_ENDPOINT}/formrecognizer/v2.1/custom/models/33f4d42c-cd2f-4e74-b990-a1aeafab5a5d
-{"modelId":"<your model ID>","accessToken":"<your access token>","expirationDateTimeTicks":637233481531659440}
-```
-
-## Start the copy operation
-
-The following HTTP request starts the Copy operation on the source resource. You'll need to enter the endpoint and key of your source resource as headers. Notice that the request URL contains the model ID of the source model you want to copy.
-
-```http
-POST https://{SOURCE_FORM_RECOGNIZER_RESOURCE_ENDPOINT}/formrecognizer/v2.1/custom/models/<your model ID>/copy HTTP/1.1
-Ocp-Apim-Subscription-Key: {SOURCE_FORM_RECOGNIZER_RESOURCE_KEY}
-```
-
-The body of your request needs to have the following format. You'll need to enter the resource ID and region name of your target resource. You can find your resource ID on the **Properties** tab of your resource in the Azure portal, and you can find the region name on the **Keys and endpoint** tab. You'll also need the model ID, access token, and expiration value that you copied from the previous step.
-
-```json
-{
- "targetResourceId": "{TARGET_AZURE_FORM_RECOGNIZER_RESOURCE_ID}",
- "targetResourceRegion": "{TARGET_AZURE_FORM_RECOGNIZER_RESOURCE_REGION_NAME}",
- "copyAuthorization": {"modelId":"<your model ID>","accessToken":"<your access token>","expirationDateTimeTicks":637233481531659440}
-}
-```
-
-You'll get a `202 Accepted` response with an Operation-Location header. This value is the URL that you'll use to track the progress of the operation. Copy it to a temporary location for the next step.
-
-```http
-HTTP/1.1 202 Accepted
-Operation-Location: https://{SOURCE_FORM_RECOGNIZER_RESOURCE_ENDPOINT}/formrecognizer/v2.1/custom/models/eccc3f13-8289-4020-ba16-9f1d1374e96f/copyresults/02989ba8-1296-499f-aaf4-55cfff41b8f1
-```
-
-> [!NOTE]
-> The Copy API transparently supports the [AEK/CMK](https://msazure.visualstudio.com/Cognitive%20Services/_wiki/wikis/Cognitive%20Services.wiki/52146/Customer-Managed-Keys) feature. This operation doesn't require any special treatment. However, if you're copying from an encrypted resource to an unencrypted resource, you need to include the request header `x-ms-forms-copy-degrade: true`. If this header isn't included, the copy operation fails and returns a `DataProtectionTransformServiceError`.
-
-### Track operation progress
-
-Track your progress by querying the **Get Copy Model Result** API against the source resource endpoint.
-
-```http
-GET https://{SOURCE_FORM_RECOGNIZER_RESOURCE_ENDPOINT}/formrecognizer/v2.1/custom/models/eccc3f13-8289-4020-ba16-9f1d1374e96f/copyresults/02989ba8-1296-499f-aaf4-55cfff41b8f1 HTTP/1.1
-Ocp-Apim-Subscription-Key: {SOURCE_FORM_RECOGNIZER_RESOURCE_KEY}
-```
-
-Your response will vary depending on the status of the operation. Look for the `"status"` field in the JSON body. If you're automating this API call in a script, we recommend querying the operation once every second.
-
-```http
-HTTP/1.1 200 OK
-Content-Type: application/json; charset=utf-8
-{"status":"succeeded","createdDateTime":"2020-04-23T18:18:01.0275043Z","lastUpdatedDateTime":"2020-04-23T18:18:01.0275048Z","copyResult":{}}
-```
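-
-If you script this polling step, a minimal bash loop along the following lines (placeholder endpoint, key, and IDs) checks the status once per second until the operation succeeds or fails:
-
-```bash
-# Poll the Get Copy Model Result API once per second until the copy finishes.
-RESULT_URL="https://{SOURCE_FORM_RECOGNIZER_RESOURCE_ENDPOINT}/formrecognizer/v2.1/custom/models/{SOURCE_MODELID}/copyresults/{RESULT_ID}"
-while true; do
-  STATUS=$(curl -s "$RESULT_URL" \
-    -H "Ocp-Apim-Subscription-Key: {SOURCE_FORM_RECOGNIZER_RESOURCE_KEY}" \
-    | grep -o '"status":"[^"]*"')
-  echo "$STATUS"
-  case "$STATUS" in
-    *succeeded*|*failed*) break ;;
-  esac
-  sleep 1
-done
-```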
-
-### Track operation status with modelID
-
-You can also use the **Get Custom Model** API to track the status of the operation by querying the target model. Call this API using the target model ID that you copied down in the first step.
-
-```http
-GET https://{TARGET_FORM_RECOGNIZER_RESOURCE_ENDPOINT}/formrecognizer/v2.1/custom/models/33f4d42c-cd2f-4e74-b990-a1aeafab5a5d HTTP/1.1
-Ocp-Apim-Subscription-Key: {TARGET_FORM_RECOGNIZER_RESOURCE_KEY}
-```
-
-In the response body, you'll see information about the model. Check the `"status"` field for the status of the model.
-
-```http
-HTTP/1.1 200 OK
-Content-Type: application/json; charset=utf-8
-{"modelInfo":{"modelId":"33f4d42c-cd2f-4e74-b990-a1aeafab5a5d","status":"ready","createdDateTime":"2020-02-26T16:59:28Z","lastUpdatedDateTime":"2020-02-26T16:59:34Z"},"trainResult":{"trainingDocuments":[{"documentName":"0.pdf","pages":1,"errors":[],"status":"succeeded"},{"documentName":"1.pdf","pages":1,"errors":[],"status":"succeeded"},{"documentName":"2.pdf","pages":1,"errors":[],"status":"succeeded"},{"documentName":"3.pdf","pages":1,"errors":[],"status":"succeeded"},{"documentName":"4.pdf","pages":1,"errors":[],"status":"succeeded"}],"errors":[]}}
-```
-
-## cURL code samples
-
-The following code snippets use cURL to make API calls outlined in the steps above. You'll still need to fill in the model IDs and subscription information specific to your own resources.
-
-### Generate copy authorization
-
-```bash
-curl -i -X POST "https://{TARGET_FORM_RECOGNIZER_RESOURCE_ENDPOINT}/formrecognizer/v2.1/custom/models/copyAuthorization" -H "Ocp-Apim-Subscription-Key: {TARGET_FORM_RECOGNIZER_RESOURCE_KEY}"
-```
-
-### Start copy operation
-
-```bash
-curl -i -X POST "https://{SOURCE_FORM_RECOGNIZER_RESOURCE_ENDPOINT}/formrecognizer/v2.1/custom/models/{SOURCE_MODELID}/copy" -H "Content-Type: application/json" -H "Ocp-Apim-Subscription-Key: {SOURCE_FORM_RECOGNIZER_RESOURCE_KEY}" --data-ascii "{ \"targetResourceId\": \"{TARGET_AZURE_FORM_RECOGNIZER_RESOURCE_ID}\", \"targetResourceRegion\": \"{TARGET_AZURE_FORM_RECOGNIZER_RESOURCE_REGION_NAME}\", \"copyAuthorization\": {\"modelId\":\"33f4d42c-cd2f-4e74-b990-a1aeafab5a5d\",\"accessToken\":\"1855fe23-5ffc-427b-aab2-e5196641502f\",\"expirationDateTimeTicks\":637233481531659440}}"
-```
-
-### Track copy progress
-
-```bash
-curl -i -X GET "https://{SOURCE_FORM_RECOGNIZER_RESOURCE_ENDPOINT}/formrecognizer/v2.1/custom/models/{SOURCE_MODELID}/copyResults/{RESULT_ID}" -H "Ocp-Apim-Subscription-Key: {SOURCE_FORM_RECOGNIZER_RESOURCE_KEY}"
-```
---
-### Common error code messages
-
-|Error|Resolution|
-|:--|:--|
-| 400 / Bad Request with `"code:" "1002"` | Indicates validation error or badly formed copy request. Common issues include: a) Invalid or modified `copyAuthorization` payload. b) Expired value for `expirationDateTimeTicks` token (`copyAuthorization` payload is valid for 24 hours). c) Invalid or unsupported `targetResourceRegion`. d) Invalid or malformed `targetResourceId` string.
-|"Authorization failure due to missing or invalid authorization claims".| Occurs when the `copyAuthorization` payload or content is modified from what was returned by the `copyAuthorization` API. Ensure that the payload is the same exact content that was returned from the earlier `copyAuthorization` call.|
-|"Couldn't retrieve authorization metadata".| Indicates that the `copyAuthorization` payload is being reused with a copy request. A copy request that succeeds won't allow any further requests that use the same `copyAuthorization` payload. If you raise a separate error (like the ones noted below) and you later retry the copy with the same authorization payload, this error gets raised. The resolution is to generate a new `copyAuthorization` payload and then reissue the copy request.|
-|"Data transfer request isn't allowed as it downgrades to a less secure data protection scheme".| Occurs when copying between an `AEK` enabled resource to a non `AEK` enabled resource. To allow copying encrypted model to the target as unencrypted specify `x-ms-forms-copy-degrade: true` header with the copy request.|
-|"Couldn't fetch information for Cognitive resource with ID...". | Indicates that the Azure resource indicated by the `targetResourceId` isn't a valid Cognitive resource or doesn't exist. Verify and reissue the copy request to resolve this issue.</br> Ensure the resource is valid and exists in the specified region, such as, `westus2`|
--
-## Next steps
--
-In this guide, you learned how to use the Copy API to back up your custom models to a secondary Form Recognizer resource. Next, explore the API reference docs to see what else you can do with Form Recognizer.
-
-* [REST API reference documentation](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-2022-08-31/operations/AnalyzeDocument)
--
-In this guide, you learned how to use the Copy API to back up your custom models to a secondary Form Recognizer resource. Next, explore the API reference docs to see what else you can do with Form Recognizer.
-
-* [REST API reference documentation](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-v2-1/operations/AnalyzeBusinessCardAsync)
-
applied-ai-services Encrypt Data At Rest https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/encrypt-data-at-rest.md
- Title: Form Recognizer service encryption of data at rest-
-description: Microsoft offers Microsoft-managed encryption keys, and also lets you manage your Cognitive Services subscriptions with your own keys, called customer-managed keys (CMK). This article covers data encryption at rest for Form Recognizer, and how to enable and manage CMK.
----- Previously updated : 10/20/2022--
-monikerRange: '>=form-recog-2.1.0'
--
-# Form Recognizer encryption of data at rest
--
-Azure Form Recognizer automatically encrypts your data when persisting it to the cloud. Form Recognizer encryption protects your data to help you to meet your organizational security and compliance commitments.
--
-> [!IMPORTANT]
-> Customer-managed keys are only available for resources created after May 11, 2020. To use CMK with Form Recognizer, you will need to create a new Form Recognizer resource. Once the resource is created, you can use Azure Key Vault to set up your managed identity.
--
-## Next steps
-
-* [Form Recognizer Customer-Managed Key Request Form](https://aka.ms/cogsvc-cmk)
-* [Learn more about Azure Key Vault](../../key-vault/general/overview.md)
applied-ai-services Build A Custom Classifier https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/how-to-guides/build-a-custom-classifier.md
- Title: "Build and train a custom classifier"-
-description: Learn how to label, and build a custom document classification model.
----- Previously updated : 04/25/2023-
-monikerRange: 'form-recog-3.0.0'
--
-# Build and train a custom classification model (preview)
-
-**This article applies to:** ![Form Recognizer checkmark](../medi) supported by Form Recognizer REST API version [2023-02-28-preview](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-2023-02-28-preview/operations/AnalyzeDocument)**.
-
-> [!IMPORTANT]
->
-> Custom classification model is currently in public preview. Features, approaches, and processes may change, prior to General Availability (GA), based on user feedback.
->
-
-Custom classification models can classify each page in an input file to identify the document(s) within. Classifier models can also identify multiple documents or multiple instances of a single document in the input file. To train a custom classification model, you need at least **five documents** for each class and at least **two classes** of documents.
-
-## Custom classification model input requirements
-
-Make sure your training data set follows the input requirements for Form Recognizer.
--
-## Training data tips
-
-Follow these tips to further optimize your data set for training:
-
-* If possible, use text-based PDF documents instead of image-based documents. Scanned PDFs are handled as images.
-
-* If your form images are of lower quality, use a larger data set (10-15 images, for example).
-
-## Upload your training data
-
-Once you've put together the set of forms or documents for training, you need to upload it to an Azure blob storage container. If you don't know how to create an Azure storage account with a container, follow the [Azure Storage quickstart for Azure portal](../../../storage/blobs/storage-quickstart-blobs-portal.md). You can use the free pricing tier (F0) to try the service, and upgrade later to a paid tier for production. If your dataset is organized as folders, preserve that structure as the Studio can use your folder names for labels to simplify the labeling process.
-
-## Create a classification project in the Form Recognizer Studio
-
-The Form Recognizer Studio provides and orchestrates all the API calls required to complete your dataset and train your model.
-
-1. Start by navigating to the [Form Recognizer Studio](https://formrecognizer.appliedai.azure.com/studio). The first time you use the Studio, you need to [initialize your subscription, resource group, and resource](../quickstarts/try-form-recognizer-studio.md). Then, follow the [prerequisites for custom projects](../quickstarts/try-form-recognizer-studio.md#added-prerequisites-for-custom-projects) to configure the Studio to access your training dataset.
-
-1. In the Studio, select the **Custom classification model** tile in the custom models section of the page, and then select the **Create a project** button.
-
- :::image type="content" source="../media/how-to/studio-create-classifier-project.png" alt-text="Screenshot of how to create a classifier project in the Form Recognizer Studio.":::
-
- 1. On the create project dialog, provide a name for your project, optionally a description, and select continue.
-
- 1. Next, choose or create a Form Recognizer resource before you select continue.
-
- :::image type="content" source="../media/how-to/studio-select-resource.png" alt-text="Screenshot showing the project setup dialog window.":::
-
-1. Next select the storage account you used to upload your custom model training dataset. The **Folder path** should be empty if your training documents are in the root of the container. If your documents are in a subfolder, enter the relative path from the container root in the **Folder path** field. Once your storage account is configured, select continue.
-
- > [!IMPORTANT]
- > You can either organize the training dataset by folders where the folder name is the label or class for documents or create a flat list of documents that you can assign a label to in the Studio.
-
- :::image type="content" source="../media/how-to/studio-select-storage.png" alt-text="Screenshot showing how to select the Form Recognizer resource.":::
-
-1. **Training a custom classifier requires the output from the Layout model for each document in your dataset**. Run layout on all documents prior to the model training process.
-
-1. Finally, review your project settings and select **Create Project** to create a new project. You should now be in the labeling window and see the files in your dataset listed.
-
-## Label your data
-
-In your project, you only need to label each document with the appropriate class label.
--
-You see the files you uploaded to storage in the file list, ready to be labeled. You have a few options to label your dataset.
-
-1. If the documents are organized in folders, the Studio prompts you to use the folder names as labels. This step simplifies your labeling down to a single selection.
-
-1. To assign a label to a document, select the add label selection mark.
-
-1. Ctrl+select to multi-select documents and assign them a label.
-
-You should now have all the documents in your dataset labeled. If you look at the storage account, you find *.ocr.json* files that correspond to each document in your training dataset and a new **class-name.jsonl** file for each class labeled. This training dataset is submitted to train the model.
-
-## Train your model
-
-With your dataset labeled, you're now ready to train your model. Select the train button in the upper-right corner.
-
-1. On the train model dialog, provide a unique classifier ID and, optionally, a description. The classifier ID accepts a string data type.
-
-1. Select **Train** to initiate the training process.
-
-1. Classifier models train in a few minutes.
-
-1. Navigate to the *Models* menu to view the status of the train operation.
-
-## Test the model
-
-Once the model training is complete, you can test your model by selecting the model on the models list page.
-
-1. Select the model, and then select the **Test** button.
-
-1. Add a new file by browsing for a file or dropping a file into the document selector.
-
-1. With a file selected, choose the **Analyze** button to test the model.
-
-1. The model results display the list of identified documents, along with a confidence score and page range for each identified document.
-
-1. Validate your model by evaluating the results for each document identified.
-
-Congratulations, you've trained a custom classification model in the Form Recognizer Studio! Your model is ready for use with the REST API or the SDK to analyze documents.
-
-## Troubleshoot
-
-The [classification model](../concept-custom-classifier.md) requires results from the [layout model](../concept-layout.md) for each training document. If you haven't provided the layout results, the Studio attempts to run the layout model for each document prior to training the classifier. This process is throttled and can result in a 429 response.
-
-In the Studio, prior to training with the classification model, run the [layout model](https://formrecognizer.appliedai.azure.com/studio/layout) on each document and upload it to the same location as the original document. Once the layout results are added, you can train the classifier model with your documents.
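-
-As an illustration only, the following bash sketch runs the prebuilt-layout model on a single document and saves the result next to it; the file name, the 2023-02-28-preview layout endpoint, and the `*.ocr.json` naming are assumptions, not part of the original guidance:
-
-```bash
-# Hypothetical example: analyze one document with prebuilt-layout and save the
-# result as <file>.ocr.json so it can be uploaded alongside the original document.
-ENDPOINT="https://{FORM_RECOGNIZER_RESOURCE_ENDPOINT}"
-KEY="{FORM_RECOGNIZER_RESOURCE_KEY}"
-FILE="contract-1.pdf"   # hypothetical document name
-
-# Submit the document for layout analysis and capture the Operation-Location header.
-OP_URL=$(curl -si -X POST "$ENDPOINT/formrecognizer/documentModels/prebuilt-layout:analyze?api-version=2023-02-28-preview" \
-  -H "Ocp-Apim-Subscription-Key: $KEY" \
-  -H "Content-Type: application/pdf" \
-  --data-binary "@$FILE" | grep -i '^operation-location:' | tr -d '\r' | awk '{print $2}')
-
-# Wait for the analysis to finish, then save the layout result next to the document.
-sleep 10
-curl -s "$OP_URL" -H "Ocp-Apim-Subscription-Key: $KEY" > "${FILE}.ocr.json"
-```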
-
-## Next steps
-
-> [!div class="nextstepaction"]
-> [Learn about custom model types](../concept-custom.md)
-
-> [!div class="nextstepaction"]
-> [Learn about accuracy and confidence with custom models](../concept-accuracy-confidence.md)
applied-ai-services Build A Custom Model https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/how-to-guides/build-a-custom-model.md
- Title: "Build and train a custom model"-
-description: Learn how to build, label, and train a custom model.
----- Previously updated : 01/31/2023---
-# Build and train a custom model
--
-Form Recognizer models require as few as five training documents. If you have at least five documents, you can begin training a custom model. You can train either a [custom template model (custom form)](../concept-custom-template.md) or a [custom neural model (custom document)](../concept-custom-neural.md). The training process is identical for both models, and this document walks you through training either one.
-
-## Custom model input requirements
-
-First, make sure your training data set follows the input requirements for Form Recognizer.
----
-## Training data tips
-
-Follow these tips to further optimize your data set for training:
-
-* If possible, use text-based PDF documents instead of image-based documents. Scanned PDFs are handled as images.
-* For forms with input fields, use examples that have all of the fields completed.
-* Use forms with different values in each field.
-* If your form images are of lower quality, use a larger data set (10-15 images, for example).
-
-## Upload your training data
-
-Once you've put together the set of forms or documents for training, you need to upload it to an Azure blob storage container. If you don't know how to create an Azure storage account with a container, follow the [Azure Storage quickstart for Azure portal](../../../storage/blobs/storage-quickstart-blobs-portal.md). You can use the free pricing tier (F0) to try the service, and upgrade later to a paid tier for production.
-
-## Video: Train your custom model
-
-* Once you've gathered and uploaded your training dataset, you're ready to train your custom model. In the following video, we create a project and explore some of the fundamentals for successfully labeling and training a model.</br></br>
-
- > [!VIDEO https://www.microsoft.com/en-us/videoplayer/embed/RE5fX1c]
-
-## Create a project in the Form Recognizer Studio
-
-The Form Recognizer Studio provides and orchestrates all the API calls required to complete your dataset and train your model.
-
-1. Start by navigating to the [Form Recognizer Studio](https://formrecognizer.appliedai.azure.com/studio). The first time you use the Studio, you need to [initialize your subscription, resource group, and resource](../quickstarts/try-form-recognizer-studio.md). Then, follow the [prerequisites for custom projects](../quickstarts/try-form-recognizer-studio.md#added-prerequisites-for-custom-projects) to configure the Studio to access your training dataset.
-
-1. In the Studio, select the **Custom models** tile on the custom models page, and then select the **Create a project** button.
-
- :::image type="content" source="../media/how-to/studio-create-project.png" alt-text="Screenshot: Create a project in the Form Recognizer Studio.":::
-
- 1. On the create project dialog, provide a name for your project, optionally a description, and select continue.
-
- 1. On the next step in the workflow, choose or create a Form Recognizer resource before you select continue.
-
- > [!IMPORTANT]
- > Custom neural models are only available in a few regions. If you plan on training a neural model, please select or create a resource in one of [these supported regions](../concept-custom-neural.md#supported-regions).
-
- :::image type="content" source="../media/how-to/studio-select-resource.png" alt-text="Screenshot: Select the Form Recognizer resource.":::
-
-1. Next select the storage account you used to upload your custom model training dataset. The **Folder path** should be empty if your training documents are in the root of the container. If your documents are in a subfolder, enter the relative path from the container root in the **Folder path** field. Once your storage account is configured, select continue.
-
- :::image type="content" source="../media/how-to/studio-select-storage.png" alt-text="Screenshot: Select the storage account.":::
-
-1. Finally, review your project settings and select **Create Project** to create a new project. You should now be in the labeling window and see the files in your dataset listed.
-
-## Label your data
-
-In your project, your first task is to label your dataset with the fields you wish to extract.
-
-The files you uploaded to storage are listed on the left of your screen, with the first file ready to be labeled.
-
-1. To start labeling your dataset, create your first field by selecting the plus (➕) button on the top-right of the screen to select a field type.
-
- :::image type="content" source="../media/how-to/studio-create-label.png" alt-text="Screenshot: Create a label.":::
-
-1. Enter a name for the field.
-
-1. To assign a value to the field, choose a word or words in the document and select the field in either the dropdown or the field list on the right navigation bar. The labeled value is below the field name in the list of fields.
-
-1. Repeat the process for all the fields you wish to label for your dataset.
-
-1. Label the remaining documents in your dataset by selecting each document and selecting the text to be labeled.
-
-You should now have all the documents in your dataset labeled. If you look at the storage account, you find *.labels.json* and *.ocr.json* files that correspond to each document in your training dataset, along with a new *fields.json* file. This training dataset is submitted to train the model.
-
-## Train your model
-
-With your dataset labeled, you're now ready to train your model. Select the train button in the upper-right corner.
-
-1. On the train model dialog, provide a unique model ID and, optionally, a description. The model ID accepts a string data type.
-
-1. For the build mode, select the type of model you want to train. Learn more about the [model types and capabilities](../concept-custom.md).
-
- :::image type="content" source="../media/how-to/studio-train-model.png" alt-text="Screenshot: Train model dialog":::
-
-1. Select **Train** to initiate the training process.
-
-1. Template models train in a few minutes. Neural models can take up to 30 minutes to train.
-
-1. Navigate to the *Models* menu to view the status of the train operation.
-
-## Test the model
-
-Once the model training is complete, you can test your model by selecting the model on the models list page.
-
-1. Select the model, and then select the **Test** button.
-
-1. Select the `+ Add` button to select a file to test the model.
-
-1. With a file selected, choose the **Analyze** button to test the model.
-
-1. The model results are displayed in the main window and the fields extracted are listed in the right navigation bar.
-
-1. Validate your model by evaluating the results for each field.
-
-1. The right navigation bar also has the sample code to invoke your model and the JSON results from the API.
-
-Congratulations, you've trained a custom model in the Form Recognizer Studio! Your model is ready for use with the REST API or the SDK to analyze documents.
-
-## Next steps
-
-> [!div class="nextstepaction"]
-> [Learn about custom model types](../concept-custom.md)
-
-> [!div class="nextstepaction"]
-> [Learn about accuracy and confidence with custom models](../concept-accuracy-confidence.md)
---
-**Applies to:** ![Form Recognizer v2.1 checkmark](../medi?view=form-recog-3.0.0&preserve-view=true)
-
-When you use the Form Recognizer custom model, you provide your own training data to the [Train Custom Model](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-v2-1/operations/TrainCustomModelAsync) operation, so that the model can train to your industry-specific forms. Follow this guide to learn how to collect and prepare data to train the model effectively.
-
-You need at least five filled-in forms of the same type.
-
-If you want to use manually labeled training data, you must start with at least five filled-in forms of the same type. You can still use unlabeled forms in addition to the required data set.
-
-## Custom model input requirements
-
-First, make sure your training data set follows the input requirements for Form Recognizer.
----
-## Training data tips
-
-Follow these tips to further optimize your data set for training.
-
-* If possible, use text-based PDF documents instead of image-based documents. Scanned PDFs are handled as images.
-* For filled-in forms, use examples that have all of their fields filled in.
-* Use forms with different values in each field.
-* If your form images are of lower quality, use a larger data set (10-15 images, for example).
-
-## Upload your training data
-
-When you've put together the set of form documents for training, you need to upload it to an Azure blob storage container. If you don't know how to create an Azure storage account with a container, follow the [Azure Storage quickstart for Azure portal](../../../storage/blobs/storage-quickstart-blobs-portal.md). Use the standard performance tier.
-
-If you want to use manually labeled data, upload the *.labels.json* and *.ocr.json* files that correspond to your training documents. You can use the [Sample Labeling tool](../label-tool.md) (or your own UI) to generate these files.
-
-### Organize your data in subfolders (optional)
-
-By default, the [Train Custom Model](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-v2-1/operations/TrainCustomModelAsync) API only uses documents that are located at the root of your storage container. However, you can train with data in subfolders if you specify it in the API call. Normally, the body of the [Train Custom Model](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-v2-1/operations/TrainCustomModelAsync) call has the following format, where `<SAS URL>` is the Shared access signature URL of your container:
-
-```json
-{
- "source":"<SAS URL>"
-}
-```
-
-If you add the following content to the request body, the API trains with documents located in subfolders. The `"prefix"` field is optional and limits the training data set to files whose paths begin with the given string. So a value of `"Test"`, for example, causes the API to look at only the files or folders that begin with the word *Test*.
-
-```json
-{
- "source": "<SAS URL>",
- "sourceFilter": {
- "prefix": "<prefix string>",
- "includeSubFolders": true
- },
- "useLabelFile": false
-}
-```
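-
-For reference, a hedged cURL sketch of this request against the v2.1 training endpoint might look like the following; the endpoint path, placeholder key, and SAS URL are assumptions based on the linked Train Custom Model operation:
-
-```bash
-# Sketch: train a v2.1 custom model using documents in subfolders whose paths
-# begin with "Test"; replace the placeholders with your own values.
-curl -i -X POST "https://{FORM_RECOGNIZER_RESOURCE_ENDPOINT}/formrecognizer/v2.1/custom/models" \
-  -H "Content-Type: application/json" \
-  -H "Ocp-Apim-Subscription-Key: {FORM_RECOGNIZER_RESOURCE_KEY}" \
-  --data-ascii '{
-    "source": "<SAS URL>",
-    "sourceFilter": { "prefix": "Test", "includeSubFolders": true },
-    "useLabelFile": false
-  }'
-```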
-
-## Next steps
-
-Now that you've learned how to build a training data set, follow a quickstart to train a custom Form Recognizer model and start using it on your forms.
-
-* [Train a model and extract document data using the client library or REST API](../quickstarts/get-started-sdks-rest-api.md)
-* [Train with labels using the Sample Labeling tool](../label-tool.md)
-
-## See also
-
-* [What is Form Recognizer?](../overview.md)
-
applied-ai-services Compose Custom Models https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/how-to-guides/compose-custom-models.md
- Title: "How to guide: create and compose custom models with Form Recognizer"-
-description: Learn how to create, use, and manage Form Recognizer custom and composed models
----- Previously updated : 06/23/2023---
-# Compose custom models
-
-<!-- markdownlint-disable MD051 -->
-<!-- markdownlint-disable MD024 -->
---
-A composed model is created by taking a collection of custom models and assigning them to a single model ID. You can assign up to 200 trained custom models to a single composed model ID. When a document is submitted to a composed model, the service performs a classification step to decide which custom model accurately represents the form presented for analysis. Composed models are useful when you've trained several models and want to group them to analyze similar form types. For example, your composed model might include custom models trained to analyze your supply, equipment, and furniture purchase orders. Instead of manually trying to select the appropriate model, you can use a composed model to determine the appropriate custom model for each analysis and extraction.
-
-To learn more, see [Composed custom models](../concept-composed-models.md).
-
-In this article, you learn how to create and use composed custom models to analyze your forms and documents.
-
-## Prerequisites
-
-To get started, you need the following resources:
-
-* **An Azure subscription**. You can [create a free Azure subscription](https://azure.microsoft.com/free/cognitive-services/).
-
-* **A Form Recognizer instance**. Once you have your Azure subscription, [create a Form Recognizer resource](https://portal.azure.com/#create/Microsoft.CognitiveServicesFormRecognizer) in the Azure portal to get your key and endpoint. If you have an existing Form Recognizer resource, navigate directly to your resource page. You can use the free pricing tier (F0) to try the service, and upgrade later to a paid tier for production.
-
- 1. After the resource deploys, select **Go to resource**.
-
- 1. Copy the **Keys and Endpoint** values from the Azure portal and paste them in a convenient location, such as *Microsoft Notepad*. You need the key and endpoint values to connect your application to the Form Recognizer API.
-
- :::image type="content" source="../media/containers/keys-and-endpoint.png" alt-text="Still photo showing how to access resource key and endpoint URL.":::
-
- > [!TIP]
- > For more information, see [**create a Form Recognizer resource**](../create-a-form-recognizer-resource.md).
-
-* **An Azure storage account.** If you don't know how to create an Azure storage account, follow the [Azure Storage quickstart for Azure portal](../../../storage/blobs/storage-quickstart-blobs-portal.md). You can use the free pricing tier (F0) to try the service, and upgrade later to a paid tier for production.
-
-## Create your custom models
-
-First, you need a set of custom models to compose. You can use the Form Recognizer Studio, REST API, or client-library SDKs. The steps are as follows:
-
-* [**Assemble your training dataset**](#assemble-your-training-dataset)
-* [**Upload your training set to Azure blob storage**](#upload-your-training-dataset)
-* [**Train your custom models**](#train-your-custom-model)
-
-## Assemble your training dataset
-
-Building a custom model begins with establishing your training dataset. You need a minimum of five completed forms of the same type for your sample dataset. They can be of different file types (jpg, png, pdf, tiff) and contain both text and handwriting. Your forms must follow the [input requirements](../how-to-guides/build-a-custom-model.md?view=form-recog-2.1.0&preserve-view=true#custom-model-input-requirements) for Form Recognizer.
-
->[!TIP]
-> Follow these tips to optimize your data set for training:
->
-> * If possible, use text-based PDF documents instead of image-based documents. Scanned PDFs are handled as images.
-> * For filled-in forms, use examples that have all of their fields filled in.
-> * Use forms with different values in each field.
-> * If your form images are of lower quality, use a larger data set (10-15 images, for example).
-
-See [Build a training data set](../how-to-guides/build-a-custom-model.md?view=form-recog-2.1.0&preserve-view=true) for tips on how to collect your training documents.
-
-## Upload your training dataset
-
-When you've gathered a set of training documents, you need to [upload your training data](../how-to-guides/build-a-custom-model.md?view=form-recog-2.1.0&preserve-view=true#upload-your-training-data) to an Azure blob storage container.
-
-If you want to use manually labeled data, you have to upload the *.labels.json* and *.ocr.json* files that correspond to your training documents.
-
-## Train your custom model
-
-When you [train your model](https://formrecognizer.appliedai.azure.com/studio/custommodel/projects) with labeled data, the model uses supervised learning to extract values of interest, using the labeled forms you provide. Labeled data results in better-performing models and can produce models that work with complex forms or forms containing values without keys.
-
-Form Recognizer uses the [prebuilt-layout model](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-2022-08-31/operations/AnalyzeDocument) API to learn the expected sizes and positions of typeface and handwritten text elements and extract tables. Then it uses user-specified labels to learn the key/value associations and tables in the documents. We recommend that you use five manually labeled forms of the same type (same structure) to get started with training a new model. Then, add more labeled data, as needed, to improve the model accuracy. Form Recognizer enables training a model to extract key-value pairs and tables using supervised learning capabilities.
-
-### [Form Recognizer Studio](#tab/studio)
-
-To create custom models, start with configuring your project:
-
-1. From the Studio homepage, select [**Create new**](https://formrecognizer.appliedai.azure.com/studio/custommodel/projects) from the Custom model card.
-
-1. Use the ➕ **Create a project** command to start the new project configuration wizard.
-
-1. Enter project details, select the Azure subscription and resource, and the Azure Blob storage container that contains your data.
-
-1. Review and submit your settings to create the project.
--
-While creating your custom models, you may need to extract data collections from your documents. The collections may appear in one of two formats, using tables as the visual pattern:
-
-* Dynamic or variable count of values (rows) for a given set of fields (columns)
-
-* Specific collection of values for a given set of fields (columns and/or rows)
-
-See [Form Recognizer Studio: labeling as tables](../quickstarts/try-form-recognizer-studio.md#labeling-as-tables)
-
-### [REST API](#tab/rest)
-
-Training with labels leads to better performance in some scenarios. To train with labels, you need to have special label information files (*\<filename\>.pdf.labels.json*) in your blob storage container alongside the training documents.
-
-Label files contain key-value associations that a user has entered manually. They're needed for labeled data training, but not every source file needs to have a corresponding label file. Source files without labels are treated as ordinary training documents. We recommend five or more labeled files for reliable training. You can use a UI tool like [Form Recognizer Studio](https://formrecognizer.appliedai.azure.com/studio/customform/projects) to generate these files.
-
-Once you have your label files, you can include them by calling the training method with the *useLabelFile* parameter set to `true`.
--
-### [Client libraries](#tab/sdks)
-
-Training with labels leads to better performance in some scenarios. To train with labels, you need to have special label information files (*\<filename\>.pdf.labels.json*) in your blob storage container alongside the training documents. Once you have them, you can call the training method with the *useTrainingLabels* parameter set to `true`.
-
-|Language |Method|
-|--|--|
-|**C#**|**StartBuildModel**|
-|**Java**| [**beginBuildModel**](/java/api/com.azure.ai.formrecognizer.documentanalysis.administration.documentmodeladministrationclient.beginbuildmodel)|
-|**JavaScript** | [**beginBuildModel**](/javascript/api/@azure/ai-form-recognizer/documentmodeladministrationclient?view=azure-node-latest#@azure-ai-form-recognizer-documentmodeladministrationclient-beginbuildmodel&preserve-view=true)|
-| **Python** | [**begin_build_model**](/python/api/azure-ai-formrecognizer/azure.ai.formrecognizer.aio.documentmodeladministrationclient?view=azure-python#azure-ai-formrecognizer-aio-documentmodeladministrationclient-begin-build-model&preserve-view=true)
---
-## Create a composed model
-
-> [!NOTE]
-> **The `create compose model` operation is only available for custom models trained _with_ labels.** Attempting to compose unlabeled models produces an error.
-
-With the [**create compose model**](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-2022-08-31/operations/ComposeDocumentModel) operation, you can assign up to 100 trained custom models to a single model ID. When you analyze documents with a composed model, Form Recognizer first classifies the form you submitted, then chooses the best matching assigned model, and returns results for that model. This operation is useful when incoming forms may belong to one of several templates.
-
-### [Form Recognizer Studio](#tab/studio)
-
-Once the training process has successfully completed, you can begin to build your composed model. Here are the steps for creating and using composed models:
-
-* [**Gather your custom model IDs**](#gather-your-model-ids)
-* [**Compose your custom models**](#compose-your-custom-models)
-* [**Analyze documents**](#analyze-documents)
-* [**Manage your composed models**](#manage-your-composed-models)
-
-#### Gather your model IDs
-
-When you train models using the [**Form Recognizer Studio**](https://formrecognizer.appliedai.azure.com/), the model ID is located in the models menu under a project:
--
-#### Compose your custom models
-
-1. Select a custom models project.
-
-1. In the project, select the ```Models``` menu item.
-
-1. From the resulting list of models, select the models you wish to compose.
-
-1. Choose the **Compose button** from the upper-left corner.
-
-1. In the pop-up window, name your newly composed model and select **Compose**.
-
-1. When the operation completes, your newly composed model appears in the list.
-
-1. Once the model is ready, use the **Test** command to validate it with your test documents and observe the results.
-
-#### Analyze documents
-
-The custom model **Analyze** operation requires you to provide the `modelID` in the call to Form Recognizer. You should provide the composed model ID for the `modelID` parameter in your applications.
--
-#### Manage your composed models
-
-You can manage your custom models throughout their life cycles:
-
-* Test and validate new documents.
-* Download your model to use in your applications.
-* Delete your model when its lifecycle is complete.
--
-### [REST API](#tab/rest)
-
-Once the training process has successfully completed, you can begin to build your composed model. Here are the steps for creating and using composed models:
-
-* [**Compose your custom models**](#compose-your-custom-models)
-* [**Analyze documents**](#analyze-documents)
-* [**Manage your composed models**](#manage-your-composed-models)
-
-#### Compose your custom models
-
-The [compose model API](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-2022-08-31/operations/ComposeDocumentModel) accepts a list of model IDs to be composed.
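-
-For reference, a hedged sketch of a compose request follows; the composed model ID, description, and component model IDs are placeholders, and the exact request shape should be confirmed against the linked compose model API:
-
-```bash
-# Sketch: compose two existing custom models into a single composed model.
-curl -i -X POST "https://{FORM_RECOGNIZER_RESOURCE_ENDPOINT}/formrecognizer/documentModels:compose?api-version=2022-08-31" \
-  -H "Content-Type: application/json" \
-  -H "Ocp-Apim-Subscription-Key: {FORM_RECOGNIZER_RESOURCE_KEY}" \
-  --data-ascii '{
-    "modelId": "my-composed-model",
-    "description": "Composed purchase order models",
-    "componentModels": [
-      { "modelId": "supply-orders-model" },
-      { "modelId": "equipment-orders-model" }
-    ]
-  }'
-```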
--
-#### Analyze documents
-
-To make an [**Analyze document**](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-2022-08-31/operations/AnalyzeDocument) request, use a unique model name in the request parameters.
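-
-As a hedged sketch, an analyze request that passes the composed model ID might look like the following; the endpoint, key, model ID, and document URL are placeholders:
-
-```bash
-# Sketch: analyze a document by URL with the composed model.
-curl -i -X POST "https://{FORM_RECOGNIZER_RESOURCE_ENDPOINT}/formrecognizer/documentModels/my-composed-model:analyze?api-version=2022-08-31" \
-  -H "Content-Type: application/json" \
-  -H "Ocp-Apim-Subscription-Key: {FORM_RECOGNIZER_RESOURCE_KEY}" \
-  --data-ascii '{ "urlSource": "https://{path-to-your-document}" }'
-```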
--
-#### Manage your composed models
-
-You can manage custom models throughout your development needs including [**copying**](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-2022-08-31/operations/CopyDocumentModelTo), [**listing**](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-2022-08-31/operations/GetModels), and [**deleting**](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-2022-08-31/operations/DeleteModel) your models.
-
-### [Client libraries](#tab/sdks)
-
-Once the training process has successfully completed, you can begin to build your composed model. Here are the steps for creating and using composed models:
-
-* [**Create a composed model**](#create-a-composed-model)
-* [**Analyze documents**](#analyze-documents)
-* [**Manage your composed models**](#manage-your-composed-models)
-
-#### Create a composed model
-
-You can use the programming language of your choice to create a composed model:
-
-| Programming language| Code sample |
-|--|--|
-|**C#** | [Model compose](https://github.com/Azure/azure-sdk-for-net/blob/Azure.AI.FormRecognizer_4.0.0/sdk/formrecognizer/Azure.AI.FormRecognizer/samples/Sample_ModelCompose.md)
-|**Java** | [Model compose](https://github.com/Azure/azure-sdk-for-java/blob/afa0d44fa42979ae9ad9b92b23cdba493a562127/sdk/formrecognizer/azure-ai-formrecognizer/src/samples/java/com/azure/ai/formrecognizer/administration/ComposeDocumentModel.java)
-|**JavaScript** | [Compose model](https://github.com/witemple-msft/azure-sdk-for-js/blob/7e3196f7e529212a6bc329f5f06b0831bf4cc174/sdk/formrecognizer/ai-form-recognizer/samples/v4/javascript/composeModel.js)
-|**Python** | [Create composed model](https://github.com/Azure/azure-sdk-for-python/blob/main/sdk/formrecognizer/azure-ai-formrecognizer/samples/v3.2/sample_compose_model.py)
-
-#### Analyze documents
-
-Once you've built your composed model, you can use it to analyze forms and documents. Use your composed `model ID` and let the service decide which of your aggregated custom models fits best according to the document provided.
-
-|Programming language| Code sample |
-|--|--|
-|**C#** | [Analyze a document with a custom/composed model using model ID](https://github.com/Azure/azure-sdk-for-net/blob/main/sdk/formrecognizer/Azure.AI.FormRecognizer/samples/Sample_AnalyzeWithCustomModel.md)
-|**Java** | [Analyze a document with a custom/composed model using model ID](https://github.com/Azure/azure-sdk-for-javocumentFromUrl.java)
-|**JavaScript** | [Analyze a document with a custom/composed model using model ID](https://github.com/witemple-msft/azure-sdk-for-js/blob/7e3196f7e529212a6bc329f5f06b0831bf4cc174/sdk/formrecognizer/ai-form-recognizer/samples/v4/javascript/analyzeDocumentByModelId.js)
-|**Python** | [Analyze a document with a custom/composed model using model ID](https://github.com/Azure/azure-sdk-for-python/blob/main/sdk/formrecognizer/azure-ai-formrecognizer/samples/v3.2/sample_analyze_custom_documents.py)
-
-#### Manage your composed models
-
-You can manage a custom model at each stage in its life cycles. You can copy a custom model between resources, view a list of all custom models under your subscription, retrieve information about a specific custom model, and delete custom models from your account.
-
-|Programming language| Code sample |
-|--|--|
-|**C#** | [Copy a custom model between Form Recognizer resources](https://github.com/Azure/azure-sdk-for-net/blob/main/sdk/formrecognizer/Azure.AI.FormRecognizer/samples/Sample_CopyCustomModel.md#copy-a-custom-model-between-form-recognizer-resources)|
-|**Java** | [Copy a custom model between Form Recognizer resources](https://github.com/Azure/azure-sdk-for-java/blob/main/sdk/formrecognizer/azure-ai-formrecognizer/src/samples/java/com/azure/ai/formrecognizer/administration/CopyDocumentModel.java)|
-|**JavaScript** | [Copy a custom model between Form Recognizer resources](https://github.com/witemple-msft/azure-sdk-for-js/blob/7e3196f7e529212a6bc329f5f06b0831bf4cc174/sdk/formrecognizer/ai-form-recognizer/samples/v4/javascript/copyModel.js)|
-|**Python** | [Copy a custom model between Form Recognizer resources](https://github.com/Azure/azure-sdk-for-python/blob/main/sdk/formrecognizer/azure-ai-formrecognizer/samples/v3.2/sample_copy_model_to.py)|
--
-Great! You've learned the steps to create custom and composed models and use them in your Form Recognizer projects and applications.
-
-## Next steps
-
-Try one of our Form Recognizer quickstarts:
-
-> [!div class="nextstepaction"]
-> [Form Recognizer Studio](../quickstarts/try-form-recognizer-studio.md)
-
-> [!div class="nextstepaction"]
-> [REST API](../quickstarts/get-started-sdks-rest-api.md?view=form-recog-3.0.0&preserve-view=true)
-
-> [!div class="nextstepaction"]
-> [C#](../quickstarts/get-started-sdks-rest-api.md?view=form-recog-3.0.0&preserve-view=true#prerequisites)
-
-> [!div class="nextstepaction"]
-> [Java](../quickstarts/get-started-sdks-rest-api.md?view=form-recog-3.0.0&preserve-view=true)
-
-> [!div class="nextstepaction"]
-> [JavaScript](../quickstarts/get-started-sdks-rest-api.md?view=form-recog-3.0.0&preserve-view=true)
-
-> [!div class="nextstepaction"]
-> [Python](../quickstarts/get-started-sdks-rest-api.md?view=form-recog-3.0.0&preserve-view=true)
----
-Form Recognizer uses advanced machine-learning technology to detect and extract information from document images and return the extracted data in a structured JSON output. With Form Recognizer, you can train standalone custom models or combine custom models to create composed models.
-
-* **Custom models**. Form Recognizer custom models enable you to analyze and extract data from forms and documents specific to your business. Custom models are trained for your distinct data and use cases.
-
-* **Composed models**. A composed model is created by taking a collection of custom models and assigning them to a single model that encompasses your form types. When a document is submitted to a composed model, the service performs a classification step to decide which custom model accurately represents the form presented for analysis.
-
-In this article, you learn how to create Form Recognizer custom and composed models using our [Form Recognizer Sample Labeling tool](../label-tool.md), [REST APIs](../how-to-guides/use-sdk-rest-api.md?view=form-recog-2.1.0&preserve-view=true#train-a-custom-model), or [client-library SDKs](../how-to-guides/use-sdk-rest-api.md?view=form-recog-2.1.0&preserve-view=true#train-a-custom-model).
-
-## Sample Labeling tool
-
-Try extracting data from custom forms using our Sample Labeling tool. You need the following resources:
-
-* An Azure subscription: you can [create one for free](https://azure.microsoft.com/free/cognitive-services/)
-
-* A [Form Recognizer instance](https://portal.azure.com/#create/Microsoft.CognitiveServicesFormRecognizer) in the Azure portal. You can use the free pricing tier (`F0`) to try the service. After your resource deploys, select **Go to resource** to get your key and endpoint.
-
- :::image type="content" source="../media/containers/keys-and-endpoint.png" alt-text="Screenshot: keys and endpoint location in the Azure portal.":::
-
-> [!div class="nextstepaction"]
-> [Try it](https://fott-2-1.azurewebsites.net/projects/create)
-
-In the Form Recognizer UI:
-
-1. Select **Use Custom to train a model with labels and get key value pairs**.
-
- :::image type="content" source="../media/label-tool/fott-use-custom.png" alt-text="Screenshot of the FOTT tool select custom model option.":::
-
-1. In the next window, select **New project**:
-
- :::image type="content" source="../media/label-tool/fott-new-project.png" alt-text="Screenshot of the FOTT tool select new project option.":::
-
-## Create your models
-
-The steps for building, training, and using custom and composed models are as follows:
-
-* [**Assemble your training dataset**](#assemble-your-training-dataset)
-* [**Upload your training set to Azure blob storage**](#upload-your-training-dataset)
-* [**Train your custom model**](#train-your-custom-model)
-* [**Compose custom models**](#create-a-composed-model)
-* [**Analyze documents**](#analyze-documents-with-your-custom-or-composed-model)
-* [**Manage your custom models**](#manage-your-custom-models)
-
-## Assemble your training dataset
-
-Building a custom model begins with establishing your training dataset. You need a minimum of five completed forms of the same type for your sample dataset. They can be of different file types (jpg, png, pdf, tiff) and contain both text and handwriting. Your forms must follow the [input requirements](build-a-custom-model.md?view=form-recog-2.1.0&preserve-view=true#custom-model-input-requirements) for Form Recognizer.
-
-## Upload your training dataset
-
-You need to [upload your training data](build-a-custom-model.md?view=form-recog-2.1.0&preserve-view=true#upload-your-training-data)
-to an Azure blob storage container. If you don't know how to create an Azure storage account with a container, *see* [Azure Storage quickstart for Azure portal](../../../storage/blobs/storage-quickstart-blobs-portal.md). You can use the free pricing tier (F0) to try the service, and upgrade later to a paid tier for production.
-
-## Train your custom model
-
-You [train your model](build-a-custom-model.md?view=form-recog-2.1.0&preserve-view=true#train-your-model) with labeled data sets. Labeled datasets rely on the prebuilt-layout API, but supplementary human input is included such as your specific labels and field locations. Start with at least five completed forms of the same type for your labeled training data.
-
-When you train with labeled data, the model uses supervised learning to extract values of interest, using the labeled forms you provide. Labeled data results in better-performing models and can produce models that work with complex forms or forms containing values without keys.
-
-Form Recognizer uses the [Layout](../concept-layout.md) API to learn the expected sizes and positions of typeface and handwritten text elements and extract tables. Then it uses user-specified labels to learn the key/value associations and tables in the documents. We recommend that you use five manually labeled forms of the same type (same structure) to get started when training a new model. Add more labeled data as needed to improve the model accuracy. Form Recognizer enables training a model to extract key value pairs and tables using supervised learning capabilities.
-
-[Get started with Train with labels](../label-tool.md)
-
-> [!VIDEO https://learn.microsoft.com/Shows/Docs-Azure/Azure-Form-Recognizer/player]
-
-## Create a composed model
-
-> [!NOTE]
-> **Model Compose is only available for custom models trained *with* labels.** Attempting to compose unlabeled models will produce an error.
-
-With the Model Compose operation, you can assign up to 200 trained custom models to a single model ID. When you call Analyze with the composed model ID, Form Recognizer classifies the form you submitted first, chooses the best matching assigned model, and then returns results for that model. This operation is useful when incoming forms may belong to one of several templates.
-
-Using the Form Recognizer Sample Labeling tool, the REST API, or the Client-library SDKs, follow the steps to set up a composed model:
-
-1. [**Gather your custom model IDs**](#gather-your-custom-model-ids)
-1. [**Compose your custom models**](#compose-your-custom-models)
-
-### Gather your custom model IDs
-
-Once the training process has successfully completed, your custom model is assigned a model ID. You can retrieve a model ID as follows:
-
-<!-- Applies to FOTT but labeled studio to eliminate tab grouping warning -->
-### [**Form Recognizer Sample Labeling tool**](#tab/studio)
-
-When you train models using the [**Form Recognizer Sample Labeling tool**](https://fott-2-1.azurewebsites.net/), the model ID is located in the Train Result window:
--
-### [**REST API**](#tab/rest)
-
-The [**REST API**](build-a-custom-model.md?view=form-recog-2.1.0&preserve-view=true#train-your-model) returns a `201 (Success)` response with a **Location** header. The value of the last parameter in this header is the model ID for the newly trained model:
--
-### [**Client-library SDKs**](#tab/sdks)
-
- The [**client-library SDKs**](build-a-custom-model.md?view=form-recog-2.1.0&preserve-view=true#train-your-model) return a model object that can be queried to return the trained model ID:
-
-* C\# | [CustomFormModel Class](/dotnet/api/azure.ai.formrecognizer.training.customformmodel?view=azure-dotnet&preserve-view=true#properties "Azure SDK for .NET")
-
-* Java | [CustomFormModelInfo Class](/java/api/com.azure.ai.formrecognizer.training.models.customformmodelinfo?view=azure-java-stable&preserve-view=true#methods "Azure SDK for Java")
-
-* JavaScript | CustomFormModelInfo interface
-
-* Python | [CustomFormModelInfo Class](/python/api/azure-ai-formrecognizer/azure.ai.formrecognizer.customformmodelinfo?view=azure-python&preserve-view=true&branch=main#variables "Azure SDK for Python")
--
-#### Compose your custom models
-
-After you've gathered your custom models corresponding to a single form type, you can compose them into a single model.
-
-<!-- Applies to FOTT but labeled studio to eliminate tab grouping warning -->
-### [**Form Recognizer Sample Labeling tool**](#tab/studio)
-
-The **Sample Labeling tool** enables you to quickly get started training models and composing them to a single model ID.
-
-After you have completed training, compose your models as follows:
-
-1. On the left rail menu, select the **Model Compose** icon (merging arrow).
-
-1. In the main window, select the models you wish to assign to a single model ID. Models with the arrows icon are already composed models.
-
-1. Choose the **Compose button** from the upper-left corner.
-
-1. In the pop-up window, name your newly composed model and select **Compose**.
-
-When the operation completes, your newly composed model appears in the list.
-
- :::image type="content" source="../media/custom-model-compose.png" alt-text="Screenshot of the model compose window." lightbox="../media/custom-model-compose-expanded.png":::
-
-### [**REST API**](#tab/rest)
-
-Using the **REST API**, you can make a [**Compose Custom Model**](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-2022-08-31/operations/ComposeDocumentModel) request to create a single composed model from existing models. The request body requires a string array of your `modelIds` to compose and you can optionally define the `modelName`.
-
-### [**Client-library SDKs**](#tab/sdks)
-
-Use the programming language code of your choice to create a composed model that is called with a single model ID. The following links are code samples that demonstrate how to create a composed model from existing custom models:
-
-* [**C#/.NET**](https://github.com/Azure/azure-sdk-for-net/blob/main/sdk/formrecognizer/Azure.AI.FormRecognizer/samples/Sample_ModelCompose.md).
-
-* [**Java**](https://github.com/Azure/azure-sdk-for-java/blob/main/sdk/formrecognizer/azure-ai-formrecognizer/src/samples/java/com/azure/ai/formrecognizer/administration/ComposeDocumentModel.java).
-
-* [**JavaScript**](https://github.com/Azure/azure-sdk-for-js/blob/main/sdk/formrecognizer/ai-form-recognizer/samples/v3/javascript/createComposedModel.js).
-
-* [**Python**](https://github.com/Azure/azure-sdk-for-python/blob/main/sdk/formrecognizer/azure-ai-formrecognizer/samples/v3.2/sample_compose_model.py)
---
-## Analyze documents with your custom or composed model
-
- The custom form **Analyze** operation requires you to provide the `modelID` in the call to Form Recognizer. You can provide a single custom model ID or a composed model ID for the `modelID` parameter.
--
-### [**Form Recognizer Sample Labeling tool**](#tab/studio)
-
-1. On the tool's left-pane menu, select the **Analyze icon** (light bulb).
-
-1. Choose a local file or image URL to analyze.
-
-1. Select the **Run Analysis** button.
-
-1. The tool applies tags in bounding boxes and reports the confidence percentage for each tag.
--
-### [**REST API**](#tab/rest)
-
-Using the REST API, you can make an [Analyze Document](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-2022-08-31/operations/AnalyzeDocument) request to analyze a document and extract key-value pairs and table data.
-
-### [**Client-library SDKs**](#tab/sdks)
-
-Use the programming language of your choice to analyze a form or document with a custom or composed model. You need your Form Recognizer endpoint, key, and model ID.
-
-* [**C#/.NET**](https://github.com/Azure/azure-sdk-for-net/blob/main/sdk/formrecognizer/Azure.AI.FormRecognizer/samples/Sample_ModelCompose.md)
-
-* [**Java**](https://github.com/Azure/azure-sdk-for-javocumentFromUrl.java)
-
-* [**JavaScript**](https://github.com/Azure/azure-sdk-for-js/blob/main/sdk/formrecognizer/ai-form-recognizer/samples/v3/javascript/recognizeCustomForm.js)
-
-* [**Python**](https://github.com/Azure/azure-sdk-for-python/blob/main/sdk/formrecognizer/azure-ai-formrecognizer/samples/v3.1/sample_recognize_custom_forms.py)
---
-Test your newly trained models by [analyzing forms](build-a-custom-model.md?view=form-recog-2.1.0&preserve-view=true#test-the-model) that weren't part of the training dataset. Depending on the reported accuracy, you may want to continue training to [improve results](../label-tool.md#improve-results).
-
-## Manage your custom models
-
-You can [manage your custom models](../how-to-guides/use-sdk-rest-api.md?view=form-recog-2.1.0&preserve-view=true#manage-custom-models) throughout their lifecycle by viewing a [list of all custom models](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-2022-08-31/operations/GetModels) under your subscription, retrieving information about [a specific custom model](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-2022-08-31/operations/GetModel), and [deleting custom models](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-2022-08-31/operations/DeleteModel) from your account.
-
-Great! You've learned the steps to create custom and composed models and use them in your Form Recognizer projects and applications.
-
-## Next steps
-
-Learn more about the Form Recognizer client library by exploring our API reference documentation.
-
-> [!div class="nextstepaction"]
-> [Form Recognizer API reference](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-2022-08-31/operations/AnalyzeDocument)
-
applied-ai-services Estimate Cost https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/how-to-guides/estimate-cost.md
- Title: "Check my usage and estimate the cost"-
-description: Learn how to use Azure portal to check how many pages are analyzed and estimate the total price.
----- Previously updated : 10/20/2022-
-monikerRange: '>=form-recog-2.1.0'
--
-# Check my Form Recognizer usage and estimate the price
--
- In this guide, you'll learn how to use the metrics dashboard in the Azure portal to view how many pages were processed by Azure Form Recognizer. You'll also learn how to estimate the cost of processing those pages using the Azure pricing calculator.
-
-## Check how many pages were processed
-
-We'll start by looking at the page processing data for a given time period:
-
-1. Sign in to the [Azure portal](https://portal.azure.com).
-
-1. Navigate to your Form Recognizer resource.
-
-1. From the **Overview** page, select the **Monitoring** tab located near the middle of the page.
-
- :::image type="content" source="../media/azure-portal-overview-menu.png" alt-text="Screenshot of the Azure portal overview page menu.":::
-
-1. Select a time range and you'll see the **Processed Pages** chart displayed.
-
- :::image type="content" source="../media/azure-portal-overview-monitoring.png" alt-text="Screenshot that shows how many pages are processed on the resource overview page." lightbox="../media/azure-portal-processed-pages.png":::
-
-### Examine analyzed pages
-
-We can now take a deeper dive to see each model's analyzed pages:
-
-1. Under the **Monitoring** section, select **Metrics** from the left navigation menu.
-
- :::image type="content" source="../media/azure-portal-monitoring-metrics.png" alt-text="Screenshot of the monitoring menu in the Azure portal.":::
-
-1. On the **Metrics** page, select **Add metric**.
-
-1. Select the Metric dropdown menu and, under **USAGE**, choose **Processed Pages**.
-
- :::image type="content" source="../media/azure-portal-add-metric.png" alt-text="Screenshot that shows how to add new metrics on Azure portal.":::
-
-1. From the upper right corner, configure the time range and select the **Apply** button.
-
- :::image type="content" source="../media/azure-portal-processed-pages-timeline.png" alt-text="Screenshot of time period options for metrics in the Azure portal." lightbox="../media/azure-portal-metrics-timeline.png":::
-
-1. Select **Apply splitting**.
-
- :::image type="content" source="../media/azure-portal-apply-splitting.png" alt-text="Screenshot of the Apply splitting option in the Azure portal.":::
-
-1. Choose **FeatureName** from the **Values** dropdown menu.
-
- :::image type="content" source="../media/azure-portal-splitting-on-feature-name.png" alt-text="Screenshot of the Apply splitting values dropdown menu.":::
-
-1. You'll see a breakdown of the pages analyzed by each model.
-
- :::image type="content" source="../media/azure-portal-metrics-drill-down.png" alt-text="Screenshot demonstrating how to drill down to check analyzed pages by model." lightbox="../media/azure-portal-drill-down-closeup.png":::
-
-## Estimate price
-
-Now that we have the page processed data from the portal, we can use the Azure pricing calculator to estimate the cost:
-
-1. Sign in to [Azure pricing calculator](https://azure.microsoft.com/pricing/calculator/) with the same credentials you use for the Azure portal.
-
- > Press Ctrl + right-click to open in a new tab!
-
-1. Search for **Azure Form Recognizer** in the **Search products** search box.
-
-1. Select **Azure Form Recognizer** and you'll see that it has been added to the page.
-
-1. Under **Your Estimate**, select the relevant **Region**, **Payment Option** and **Instance** for your Form Recognizer resource. For more information, *see* [Azure Form Recognizer pricing options](https://azure.microsoft.com/pricing/details/form-recognizer/#pricing).
-
-1. Enter the number of pages processed from the Azure portal metrics dashboard. You can find that data by following the steps in the [Check how many pages were processed](#check-how-many-pages-were-processed) or [Examine analyzed pages](#examine-analyzed-pages) sections, above.
-
-1. The estimated price is on the right, after the equal (**=**) sign.
-
- :::image type="content" source="../media/azure-portal-pricing.png" alt-text="Screenshot of how to estimate the price based on processed pages.":::
-
-That's it. You now know where to find how many pages you have processed using Form Recognizer and how to estimate the cost.
-
-## Next steps
-
-> [!div class="nextstepaction"]
->
-> [Learn more about Form Recognizer service quotas and limits](../service-limits.md)
applied-ai-services Project Share Custom Models https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/how-to-guides/project-share-custom-models.md
- Title: "Share custom model projects using Form Recognizer Studio"-
-description: Learn how to share custom model projects using Form Recognizer Studio.
----- Previously updated : 05/08/2023-
-monikerRange: 'form-recog-3.0.0'
--
-# Share custom model projects using Form Recognizer Studio
-
-Form Recognizer Studio is an online tool to visually explore, understand, train, and integrate features from the Form Recognizer service into your applications. Form Recognizer Studio supports project sharing for custom extraction models. Projects can be shared easily via a project token, and the same token can be used to import a project.
-
-## Prerequisite
-
-In order to share and import your custom extraction projects seamlessly, both users (the user who shares and the user who imports) need an active [**Azure account**](https://azure.microsoft.com/free/cognitive-services/). If you don't have one, you can [**create a free account**](https://azure.microsoft.com/free/). Also, both users need to configure permissions to grant access to the Form Recognizer and storage resources.
-
-Generally, most of the requirements for project sharing are met in the process of creating a custom model project. However, if the project-sharing feature doesn't work, check the following.
-
-## Granted access and permissions
-
- > [!IMPORTANT]
- > Custom model projects can be imported only if you have the access to the storage account that is associated with the project you are trying to import. Check your storage account permission before starting to share or import projects with others.
-
-### Virtual networks and firewalls
-
-If virtual network (VNet) access is enabled on your storage account, or if there are any firewall constraints, the project can't be shared. If you want to bypass those restrictions, ensure that those settings are turned off.
-
-A workaround is to manually create a project using the same settings as the project being shared.
--
-## Share a custom extraction model with Form Recognizer studio
-
-Follow these steps to share your project using Form Recognizer studio:
-
-1. Start by navigating to the [Form Recognizer Studio](https://formrecognizer.appliedai.azure.com/studio).
-
-1. In the Studio, select the **Custom extraction models** tile, under the custom models section.
-
- :::image type="content" source="../media/how-to/studio-custom-extraction.png" alt-text="Screenshot showing how to select a custom extraction model in the Studio.":::
-
-1. On the custom extraction models page, select the desired model to share and then select the **Share** button.
-
- :::image type="content" source="../media/how-to/studio-project-share.png" alt-text="Screenshot showing how to select the desired model and select the share option.":::
-
-1. On the share project dialog, copy the project token for the selected project.
--
-## Import custom extraction model with Form Recognizer studio
-
-Follow these steps to import a project using Form Recognizer studio.
-
-1. Start by navigating to the [Form Recognizer Studio](https://formrecognizer.appliedai.azure.com/studio).
-
-1. In the Studio, select the **Custom extraction models** tile, under the custom models section.
-
- :::image type="content" source="../media/how-to/studio-custom-extraction.png" alt-text="Screenshot: Select custom extraction model in the Studio.":::
-
-1. On the custom extraction models page, select the **Import** button.
-
- :::image type="content" source="../media/how-to/studio-project-import.png" alt-text="Screenshot: Select import within custom extraction model page.":::
-
-1. On the import project dialog, paste the project token shared with you and select **Import**.
--
-## Next steps
-
-> [!div class="nextstepaction"]
-> [Back up and recover models](../disaster-recovery.md)
applied-ai-services Use Sdk Rest Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/how-to-guides/use-sdk-rest-api.md
- Title: "Use Form Recognizer client library SDKs or REST API "-
-description: How to use Form Recognizer SDKs or REST API and create apps to extract key data from documents.
------ Previously updated : 03/03/2023-
-zone_pivot_groups: programming-languages-set-formre
-
-<!-- markdownlint-disable MD051 -->
-
-# Use Form Recognizer models
--
- In this guide, you learn how to add Form Recognizer models to your applications and workflows using a programming language SDK of your choice or the REST API. Azure Form Recognizer is a cloud-based Azure Applied AI Service that uses machine learning to extract key text and structure elements from documents. We recommend that you use the free service as you're learning the technology. Remember that the number of free pages is limited to 500 per month.
-
-Choose from the following Form Recognizer models to analyze and extract data and values from forms and documents:
-
-> [!div class="checklist"]
->
-> * The [prebuilt-read](../concept-read.md) model is at the core of all Form Recognizer models and can detect lines, words, locations, and languages. Layout, general document, prebuilt, and custom models all use the read model as a foundation for extracting texts from documents.
->
-> * The [prebuilt-layout](../concept-layout.md) model extracts text and text locations, tables, selection marks, and structure information from documents and images.
->
-> * The [prebuilt-document](../concept-general-document.md) model extracts key-value pairs, tables, and selection marks from documents and can be used as an alternative to training a custom model without labels.
->
-> * The [prebuilt-healthInsuranceCard.us](../concept-insurance-card.md) model extracts key information from US health insurance cards.
->
-> * The [prebuilt-tax.us.w2](../concept-w2.md) model extracts information reported on US Internal Revenue Service (IRS) tax forms.
->
-> * The [prebuilt-invoice](../concept-invoice.md) model extracts key fields and line items from sales invoices in various formats and quality including phone-captured images, scanned documents, and digital PDFs.
->
-> * The [prebuilt-receipt](../concept-receipt.md) model extracts key information from printed and handwritten sales receipts.
->
-> * The [prebuilt-idDocument](../concept-id-document.md) model extracts key information from US drivers licenses, international passport biographical pages, US state IDs, social security cards, and permanent resident (green) cards.
->
-> * The [prebuilt-businessCard](../concept-business-card.md) model extracts key information from business card images.
------------------
-## Next steps
-
-Congratulations! You've learned to use Form Recognizer models to analyze various documents in different ways. Next, explore the Form Recognizer Studio and reference documentation.
->[!div class="nextstepaction"]
-> [**Try the Form Recognizer Studio**](https://formrecognizer.appliedai.azure.com/studio)
-
-> [!div class="nextstepaction"]
-> [**Explore the Form Recognizer REST API v3.0**](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-2022-08-31/operations/AnalyzeDocument)
--
-In this how-to guide, you learn how to add Form Recognizer to your applications and workflows using an SDK, in a programming language of your choice, or the REST API. Azure Form Recognizer is a cloud-based Azure Applied AI Service that uses machine learning to extract key-value pairs, text, and tables from your documents. We recommend that you use the free service when you're learning the technology. Remember that the number of free pages is limited to 500 per month.
-
-You use the following APIs to extract structured data from forms and documents:
-
-* [Authenticate the client](#authenticate-the-client)
-* [Analyze Layout](#analyze-layout)
-* [Analyze receipts](#analyze-receipts)
-* [Analyze business cards](#analyze-business-cards)
-* [Analyze invoices](#analyze-invoices)
-* [Analyze ID documents](#analyze-id-documents)
-* [Train a custom model](#train-a-custom-model)
-* [Analyze forms with a custom model](#analyze-forms-with-a-custom-model)
-* [Manage custom models](#manage-custom-models)
---------------
applied-ai-services Label Tool https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/label-tool.md
- Title: "How-to: Analyze documents, Label forms, train a model, and analyze forms with Form Recognizer"-
-description: How to use the Form Recognizer sample tool to analyze documents, invoices, receipts etc. Label and create a custom model to extract text, tables, selection marks, structure and key-value pairs from documents.
----- Previously updated : 06/23/2023-
-monikerRange: 'form-recog-2.1.0'
-
-<!-- markdownlint-disable MD001 -->
-<!-- markdownlint-disable MD024 -->
-<!-- markdownlint-disable MD033 -->
-<!-- markdownlint-disable MD034 -->
-# Train a custom model using the Sample Labeling tool
-
-**This article applies to:** ![Form Recognizer v2.1 checkmark](media/yes-icon.png) **Form Recognizer v2.1**.
-
->[!TIP]
->
-> * For an enhanced experience and advanced model quality, try the [Form Recognizer v3.0 Studio](https://formrecognizer.appliedai.azure.com/studio).
-> * The v3.0 Studio supports any model trained with v2.1 labeled data.
-> * You can refer to the [API migration guide](v3-migration-guide.md) for detailed information about migrating from v2.1 to v3.0.
-> * *See* our [**REST API**](quickstarts/get-started-sdks-rest-api.md?view=form-recog-3.0.0&preserve-view=true) or [**C#**](quickstarts/get-started-sdks-rest-api.md?view=form-recog-3.0.0&preserve-view=true), [**Java**](quickstarts/get-started-sdks-rest-api.md?view=form-recog-3.0.0&preserve-view=true), [**JavaScript**](quickstarts/get-started-sdks-rest-api.md?view=form-recog-3.0.0&preserve-view=true), or [Python](quickstarts/get-started-sdks-rest-api.md?view=form-recog-3.0.0&preserve-view=true) SDK quickstarts to get started with the V3.0.
-
-In this article, you use the Form Recognizer REST API with the Sample Labeling tool to train a custom model with manually labeled data.
-
-> [!VIDEO https://learn.microsoft.com/Shows/Docs-Azure/Azure-Form-Recognizer/player]
-
-## Prerequisites
-
- You need the following resources to complete this project:
-
-* Azure subscription - [Create one for free](https://azure.microsoft.com/free/cognitive-services)
-* Once you have your Azure subscription, <a href="https://portal.azure.com/#create/Microsoft.CognitiveServicesFormRecognizer" title="Create a Form Recognizer resource" target="_blank">create a Form Recognizer resource </a> in the Azure portal to get your key and endpoint. After it deploys, select **Go to resource**.
- * You need the key and endpoint from the resource you create to connect your application to the Form Recognizer API. You paste your key and endpoint into the code later in the quickstart.
- * You can use the free pricing tier (`F0`) to try the service, and upgrade later to a paid tier for production.
-* A set of at least six forms of the same type. You use this data to train the model and test a form. You can use a [sample data set](https://go.microsoft.com/fwlink/?linkid=2090451) (download and extract *sample_data.zip*) for this quickstart. Upload the training files to the root of a blob storage container in a standard-performance-tier Azure Storage account.
-
-## Create a Form Recognizer resource
--
-## Try it out
-
-Try out the [**Form Recognizer Sample Labeling tool**](https://fott-2-1.azurewebsites.net/) online:
-
-> [!div class="nextstepaction"]
-> [Try Prebuilt Models](https://fott-2-1.azurewebsites.net/)
-
-You need an Azure subscription ([create one for free](https://azure.microsoft.com/free/cognitive-services)) and a [Form Recognizer resource](https://portal.azure.com/#create/Microsoft.CognitiveServicesFormRecognizer) endpoint and key to try out the Form Recognizer service.
-
-## Set up the Sample Labeling tool
-
-> [!NOTE]
->
-> If your storage data is behind a VNet or firewall, you must deploy the **Form Recognizer Sample Labeling tool** behind your VNet or firewall and grant access by creating a [system-assigned managed identity](managed-identities.md "Azure managed identity is a service principal that creates an Azure Active Directory (Azure AD) identity and specific permissions for Azure managed resources").
-
-You use the Docker engine to run the Sample Labeling tool. Follow these steps to set up the Docker container. For a primer on Docker and container basics, see the [Docker overview](https://docs.docker.com/engine/docker-overview/).
-
-> [!TIP]
-> The OCR Form Labeling Tool is also available as an open source project on GitHub. The tool is a TypeScript web application built using React + Redux. To learn more or contribute, see the [OCR Form Labeling Tool](https://github.com/microsoft/OCR-Form-Tools/blob/master/README.md#run-as-web-application) repo. To try out the tool online, go to the [Form Recognizer Sample Labeling tool website](https://fott-2-1.azurewebsites.net/).
-
-1. First, install Docker on a host computer. This guide shows you how to use your local computer as a host. If you want to use a Docker hosting service in Azure, see the [Deploy the Sample Labeling tool](deploy-label-tool.md) how-to guide.
-
- The host computer must meet the following hardware requirements:
-
- | Container | Minimum | Recommended|
- |:--|:--|:--|
- |Sample Labeling tool|`2` core, 4-GB memory|`4` core, 8-GB memory|
-
- Install Docker on your machine by following the appropriate instructions for your operating system:
-
- * [Windows](https://docs.docker.com/docker-for-windows/)
- * [macOS](https://docs.docker.com/docker-for-mac/)
- * [Linux](https://docs.docker.com/install/)
-
-1. Get the Sample Labeling tool container with the `docker pull` command.
-
- ```console
- docker pull mcr.microsoft.com/azure-cognitive-services/custom-form/labeltool:latest-2.1
- ```
-
-1. Now you're ready to run the container with `docker run`.
-
- ```console
- docker run -it -p 3000:80 mcr.microsoft.com/azure-cognitive-services/custom-form/labeltool:latest-2.1 eula=accept
- ```
-
- This command makes the sample-labeling tool available through a web browser. Go to `http://localhost:3000`.
-
-> [!NOTE]
-> You can also label documents and train models using the Form Recognizer REST API. To train and Analyze with the REST API, see [Train with labels using the REST API and Python](https://github.com/Azure-Samples/cognitive-services-quickstart-code/blob/master/python/FormRecognizer/rest/python-labeled-data.md).
-
-## Set up input data
-
-First, make sure all the training documents are of the same format. If you have forms in multiple formats, organize them into subfolders based on common format. When you train, you need to direct the API to a subfolder.
-
-### Configure cross-domain resource sharing (CORS)
-
-Enable CORS on your storage account. Select your storage account in the Azure portal and then choose the **CORS** tab on the left pane. On the bottom line, fill in the following values. Select **Save** at the top.
-
-* Allowed origins = *
-* Allowed methods = \[select all\]
-* Allowed headers = *
-* Exposed headers = *
-* Max age = 200
-
-> [!div class="mx-imgBorder"]
-> ![CORS setup in the Azure portal](media/label-tool/cors-setup.png)
-
-## Connect to the Sample Labeling tool
-
- The Sample Labeling tool connects to a source (your original uploaded forms) and a target (created labels and output data).
-
-Connections can be set up and shared across projects. They use an extensible provider model, so you can easily add new source/target providers.
-
-To create a new connection, select the **New Connections** (plug) icon, in the left navigation bar.
-
-Fill in the fields with the following values:
-
-* **Display Name** - The connection display name.
-* **Description** - Your project description.
-* **SAS URL** - The shared access signature (SAS) URL of your Azure Blob Storage container. [!INCLUDE [get SAS URL](includes/sas-instructions.md)]
-
- :::image type="content" source="media/quickstarts/get-sas-url.png" alt-text="SAS URL retrieval":::
--
-## Create a new project
-
-In the Sample Labeling tool, projects store your configurations and settings. Create a new project and fill in the fields with the following values:
-
-* **Display Name** - the project display name
-* **Security Token** - Some project settings can include sensitive values, such as keys or other shared secrets. Each project generates a security token that can be used to encrypt/decrypt sensitive project settings. You can find security tokens in the Application Settings by selecting the gear icon at the bottom of the left navigation bar.
-* **Source Connection** - The Azure Blob Storage connection you created in the previous step that you would like to use for this project.
-* **Folder Path** - Optional - If your source forms are located in a folder on the blob container, specify the folder name here
-* **Form Recognizer Service Uri** - Your Form Recognizer endpoint URL.
-* **Key** - Your Form Recognizer key.
-* **Description** - Optional - Project description
--
-## Label your forms
-
-When you create or open a project, the main tag editor window opens. The tag editor consists of three parts:
-
-* A resizable pane that contains a scrollable list of forms from the source connection.
-* The main editor pane that allows you to apply tags.
-* The tags editor pane that allows users to modify, lock, reorder, and delete tags.
-
-### Identify text and tables
-
-Select **Run Layout on unvisited documents** on the left pane to get the text and table layout information for each document. The labeling tool draws bounding boxes around each text element.
-
-The labeling tool also shows which tables have been automatically extracted. Select the table/grid icon on the left-hand side of the document to see the extracted table. In this quickstart, because the table content is automatically extracted, we don't label the table content, but rather rely on the automated extraction.
--
-In v2.1, if your training document doesn't have a value filled in, you can draw a box where the value should be. Use **Draw region** on the upper left corner of the window to make the region taggable.
-
-### Apply labels to text
-
-Next, you create tags (labels) and apply them to the text elements that you want the model to analyze.
-
-1. First, use the tags editor pane to create the tags you'd like to identify.
- 1. Select **+** to create a new tag.
- 1. Enter the tag name.
- 1. Press Enter to save the tag.
-1. In the main editor, select words from the highlighted text elements or a region you drew in.
-1. Select the tag you want to apply, or press the corresponding keyboard key. The number keys are assigned as hotkeys for the first 10 tags. You can reorder your tags using the up and down arrow icons in the tag editor pane.
-1. Follow these steps to label at least five of your forms.
- > [!Tip]
- > Keep the following tips in mind when you're labeling your forms:
- >
- > * You can only apply one tag to each selected text element.
- > * Each tag can only be applied once per page. If a value appears multiple times on the same form, create different tags for each instance. For example: "invoice# 1", "invoice# 2" and so on.
- > * Tags cannot span across pages.
- > * Label values as they appear on the form; don't try to split a value into two parts with two different tags. For example, an address field should be labeled with a single tag even if it spans multiple lines.
- > * Don't include keys in your tagged fields&mdash;only the values.
- > * Table data should be detected automatically and will be available in the final output JSON file. However, if the model fails to detect all of your table data, you can manually tag these fields as well. Tag each cell in the table with a different label. If your forms have tables with varying numbers of rows, make sure you tag at least one form with the largest possible table.
- > * Use the buttons to the right of the **+** to search, rename, reorder, and delete your tags.
- > * To remove an applied tag without deleting the tag itself, select the tagged rectangle on the document view and press the delete key.
- >
--
-### Specify tag value types
-
-You can set the expected data type for each tag. Open the context menu to the right of a tag and select a type from the menu. This feature allows the detection algorithm to make assumptions that improve the text-detection accuracy. It also ensures that the detected values are returned in a standardized format in the final JSON output. Value type information is saved in the **fields.json** file in the same path as your label files.
-
-> [!div class="mx-imgBorder"]
-> ![Value type selection with Sample Labeling tool](media/whats-new/value-type.png)
-
-The following value types and variations are currently supported:
-
-* `string`
- * default, `no-whitespaces`, `alphanumeric`
-
-* `number`
- * default, `currency`
- * Formatted as a Floating point value.
- * Example: 1234.98 on the document is formatted into 1234.98 on the output
-
-* `date`
- * default, `dmy`, `mdy`, `ymd`
-
-* `time`
-* `integer`
- * Formatted as an integer value.
- * Example: 1234.98 on the document is formatted into 123498 on the output.
-* `selectionMark`
-
-> [!NOTE]
-> See these rules for date formatting:
->
-> You must specify a format (`dmy`, `mdy`, `ymd`) for date formatting to work.
->
-> The following characters can be used as date delimiters: `, - / . \`. Whitespace cannot be used as a delimiter. For example:
->
-> * 01,01,2020
-> * 01-01-2020
-> * 01/01/2020
->
-> The day and month can each be written as one or two digits, and the year can be two or four digits:
->
-> * 1-1-2020
-> * 1-01-20
->
-> If a date string has eight digits, the delimiter is optional:
->
-> * 01012020
-> * 01 01 2020
->
-> The month can also be written as its full or short name. If the name is used, delimiter characters are optional. However, this format may be recognized less accurately than others.
->
-> * 01/Jan/2020
-> * 01Jan2020
-> * 01 Jan 2020
-
-### Label tables (v2.1 only)
-
-At times, your data might lend itself better to being labeled as a table rather than key-value pairs. In this case, you can create a table tag by selecting **Add a new table tag**. Specify whether the table has a fixed number of rows or variable number of rows depending on the document and define the schema.
--
-Once you've defined your table tag, tag the cell values.
--
-## Train a custom model
-
-Choose the Train icon on the left pane to open the Training page. Then select the **Train** button to begin training the model. Once the training process completes, you see the following information:
-
-* **Model ID** - The ID of the model that was created and trained. Each training call creates a new model with its own ID. Copy this string to a secure location; you need it if you want to do prediction calls through the [REST API](/azure/applied-ai-services/form-recognizer/how-to-guides/v2-1-sdk-rest-api?pivots=programming-language-rest-api&tabs=preview%2cv2-1) or [client library guide](/azure/applied-ai-services/form-recognizer/how-to-guides/v2-1-sdk-rest-api).
-* **Average Accuracy** - The model's average accuracy. You can improve model accuracy by adding and labeling more forms, then retraining to create a new model. We recommend starting by labeling five forms and adding more forms as needed.
-* The list of tags, and the estimated accuracy per tag.
---
-After training finishes, examine the **Average Accuracy** value. If it's low, you should add more input documents and repeat the labeling steps. The documents you've already labeled remain in the project index.
-
-> [!TIP]
-> You can also run the training process with a REST API call. To learn how, see [Train with labels using Python](https://github.com/Azure-Samples/cognitive-services-quickstart-code/blob/master/python/FormRecognizer/rest/python-labeled-data.md). A sketch of the request body follows this tip.
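-
-As a rough sketch, the v2.1 train-with-labels request body points the service at the blob container that holds your labeled training documents. The SAS URL and filter values below are placeholders; check the REST reference for the exact schema of your API version:
-
-```json
-{
-  "source": "<your-blob-container-SAS-URL>",
-  "sourceFilter": {
-    "prefix": "",
-    "includeSubFolders": false
-  },
-  "useLabelFile": true
-}
-```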
-
-## Compose trained models
-
-With Model Compose, you can compose up to 200 models into a single model ID. When you call Analyze with the composed `modelID`, Form Recognizer classifies the form you submitted, chooses the best-matching model, and then returns results for that model. This operation is useful when incoming forms may belong to one of several templates.
-
-* To compose models in the Sample Labeling tool, select the Model Compose (merging arrow) icon from the navigation bar.
-* Select the models you wish to compose together. Models with the arrows icon are already composed models.
-* Choose the **Compose button**. In the pop-up, name your new composed model and select **Compose**.
-* When the operation completes, your newly composed model should appear in the list.
--
-## Analyze a form
-
-Select the Analyze icon from the navigation bar to test your model. Select source *Local file*. Browse for a file and select a file from the sample dataset that you unzipped in the test folder. Then choose the **Run analysis** button to get key-value pair, text, and table predictions for the form. The tool applies tags in bounding boxes and reports the confidence of each tag.
--
-> [!TIP]
-> You can also run the Analyze API with a REST call. To learn how to do this, see [Train with labels using Python](https://github.com/Azure-Samples/cognitive-services-quickstart-code/blob/master/python/FormRecognizer/rest/python-labeled-data.md).
-
-## Improve results
-
-Depending on the reported accuracy, you may want to do further training to improve the model. After you've done a prediction, examine the confidence values for each of the applied tags. If the average accuracy training value is high, but the confidence scores are low (or the results are inaccurate), add the prediction file to the training set, label it, and train again.
-
-The reported average accuracy, confidence scores, and actual accuracy can be inconsistent when the analyzed documents differ from documents used in training. Keep in mind that some documents look similar when viewed by people but can look distinct to the AI model. For example, you might train with a form type that has two variations, where the training set consists of 20% variation A and 80% variation B. During prediction, the confidence scores for documents of variation A are likely to be lower.
-
-## Save a project and resume later
-
-To resume your project at another time or in another browser, you need to save your project's security token and reenter it later.
-
-### Get project credentials
-
-Go to your project settings page (slider icon) and take note of the security token name. Then go to your application settings (gear icon), which shows all of the security tokens in your current browser instance. Find your project's security token and copy its name and key value to a secure location.
-
-### Restore project credentials
-
-When you want to resume your project, you first need to create a connection to the same blob storage container. To do so, repeat the connection steps above. Then, go to the application settings page (gear icon) and check whether your project's security token is there. If it isn't, add a new security token and copy over your token name and key from the previous step. Select **Save** to retain your settings.
-
-### Resume a project
-
-Finally, go to the main page (house icon) and select **Open Cloud Project**. Then select the blob storage connection, and select your project's `.fott` file. The application loads all of the project's settings because it has the security token.
-
-## Next steps
-
-In this quickstart, you've learned how to use the Form Recognizer Sample Labeling tool to train a model with manually labeled data. If you'd like to build your own utility to label training data, use the REST APIs that deal with labeled data training.
-
-> [!div class="nextstepaction"]
-> [Train with labels using Python](https://github.com/Azure-Samples/cognitive-services-quickstart-code/blob/master/python/FormRecognizer/rest/python-labeled-data.md)
-
-* [What is Form Recognizer?](overview.md)
-* [Form Recognizer quickstart](/azure/applied-ai-services/form-recognizer/how-to-guides/v2-1-sdk-rest-api)
applied-ai-services Language Support https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/language-support.md
- Title: Language support - Form Recognizer-
-description: Learn more about the human languages that are available with Form Recognizer.
----- Previously updated : 03/03/2023--
-# Language support for Form Recognizer
---
-This article covers the supported languages for text and field **extraction (by feature)** and **[detection (Read only)](#detected-languages-read-api)**. Both groups are mutually exclusive.
-
-<!-- markdownlint-disable MD001 -->
-<!-- markdownlint-disable MD024 -->
-
-## Read, layout, and custom form (template) model
-
-The following lists include the currently GA languages in the most recent v3.0 version for Read, Layout, and Custom template (form) models.
-
-> [!NOTE]
-> **Language code optional**
->
-> Form Recognizer's deep learning based universal models extract all multi-lingual text in your documents, including text lines with mixed languages, and do not require specifying a language code. Do not provide the language code as the parameter unless you are sure about the language and want to force the service to apply only the relevant model. Otherwise, the service may return incomplete and incorrect text.
-
-To use the v3.0-supported languages, refer to the [v3.0 REST API migration guide](v3-migration-guide.md) to understand the differences from the v2.1 GA API and explore the [v3.0 SDK and REST API quickstarts](quickstarts/get-started-sdks-rest-api.md?view=form-recog-3.0.0&preserve-view=true).
-
-### Handwritten text
-
-The following table lists the supported languages for extracting handwritten texts.
-
-|Language| Language code (optional) | Language| Language code (optional) |
-|:--|:-:|:--|:-:|
-|English|`en`|Japanese |`ja`|
-|Chinese Simplified |`zh-Hans`|Korean |`ko`|
-|French |`fr`|Portuguese |`pt`|
-|German |`de`|Spanish |`es`|
-|Italian |`it`|
-
-### Print text
-
-The following table lists the supported languages for print text by the most recent GA version.
-
-|Language| Code (optional) |Language| Code (optional) |
-|:--|:-:|:--|:-:|
-|Afrikaans|`af`|Khasi | `kha` |
-|Albanian |`sq`|K'iche' | `quc` |
-|Angika (Devanagari) | `anp`| Korean | `ko` |
-|Arabic | `ar` | Korku | `kfq`|
-|Asturian |`ast`| Koryak | `kpy`|
-|Awadhi-Hindi (Devanagari) | `awa`| Kosraean | `kos`|
-|Azerbaijani (Latin) | `az`| Kumyk (Cyrillic) | `kum`|
-|Bagheli | `bfy`| Kurdish (Arabic) | `ku-arab`|
-|Basque |`eu`| Kurdish (Latin) | `ku-latn`
-|Belarusian (Cyrillic) | `be`, `be-cyrl`|Kurukh (Devanagari) | `kru`|
-|Belarusian (Latin) | `be`, `be-latn`| Kyrgyz (Cyrillic) | `ky`
-|Bhojpuri-Hindi (Devanagari) | `bho`| Lakota | `lkt` |
-|Bislama |`bi`| Latin | `la` |
-|Bodo (Devanagari) | `brx`| Lithuanian | `lt` |
-|Bosnian (Latin) | `bs`| Lower Sorbian | `dsb` |
-|Brajbha | `bra`|Lule Sami | `smj`|
-|Breton |`br`|Luxembourgish | `lb` |
-|Bulgarian | `bg`|Mahasu Pahari (Devanagari) | `bfz`|
-|Bundeli | `bns`|Malay (Latin) | `ms` |
-|Buryat (Cyrillic) | `bua`|Maltese | `mt`
-|Catalan |`ca`|Malto (Devanagari) | `kmj`
-|Cebuano |`ceb`|Manx | `gv` |
-|Chamling | `rab`|Maori | `mi`|
-|Chamorro |`ch`|Marathi | `mr`|
-|Chhattisgarhi (Devanagari)| `hne`| Mongolian (Cyrillic) | `mn`|
-|Chinese Simplified | `zh-Hans`|Montenegrin (Cyrillic) | `cnr-cyrl`|
-|Chinese Traditional | `zh-Hant`|Montenegrin (Latin) | `cnr-latn`|
-|Cornish |`kw`|Neapolitan | `nap` |
-|Corsican |`co`|Nepali | `ne`|
-|Crimean Tatar (Latin)|`crh`|Niuean | `niu`|
-|Croatian | `hr`|Nogay | `nog`
-|Czech | `cs` |Northern Sami (Latin) | `sme`|
-|Danish | `da` |Norwegian | `no` |
-|Dari | `prs`|Occitan | `oc` |
-|Dhimal (Devanagari) | `dhi`| Ossetic | `os`|
-|Dogri (Devanagari) | `doi`|Pashto | `ps`|
-|Dutch | `nl` |Persian | `fa`|
-|English | `en` |Polish | `pl` |
-|Erzya (Cyrillic) | `myv`|Portuguese | `pt` |
-|Estonian |`et`|Punjabi (Arabic) | `pa`|
-|Faroese | `fo`|Ripuarian | `ksh`|
-|Fijian |`fj`|Romanian | `ro` |
-|Filipino |`fil`|Romansh | `rm` |
-|Finnish | `fi` | Russian | `ru` |
-|French | `fr` |Sadri (Devanagari) | `sck` |
-|Friulian | `fur` | Samoan (Latin) | `sm`
-|Gagauz (Latin) | `gag`|Sanskrit (Devanagari) | `sa`|
-|Galician | `gl` |Santali (Devanagari) | `sat` |
-|German | `de` | Scots | `sco` |
-|Gilbertese | `gil` | Scottish Gaelic | `gd` |
-|Gondi (Devanagari) | `gon`| Serbian (Latin) | `sr`, `sr-latn`|
-|Greenlandic | `kl` | Sherpa (Devanagari) | `xsr` |
-|Gurung (Devanagari) | `gvr`| Sirmauri (Devanagari) | `srx`|
-|Haitian Creole | `ht` | Skolt Sami | `sms` |
-|Halbi (Devanagari) | `hlb`| Slovak | `sk`|
-|Hani | `hni` | Slovenian | `sl` |
-|Haryanvi | `bgc`|Somali (Arabic) | `so`|
-|Hawaiian | `haw`|Southern Sami | `sma`
-|Hindi | `hi`|Spanish | `es` |
-|Hmong Daw (Latin)| `mww` | Swahili (Latin) | `sw` |
-|Ho (Devanagari) | `hoc`|Swedish | `sv` |
-|Hungarian | `hu` |Tajik (Cyrillic) | `tg` |
-|Icelandic | `is`| Tatar (Latin) | `tt` |
-|Inari Sami | `smn`|Tetum | `tet` |
-|Indonesian | `id` | Thangmi | `thf` |
-|Interlingua | `ia` |Tongan | `to`|
-|Inuktitut (Latin) | `iu` | Turkish | `tr` |
-|Irish | `ga` |Turkmen (Latin) | `tk`|
-|Italian | `it` |Tuvan | `tyv`|
-|Japanese | `ja` |Upper Sorbian | `hsb` |
-|Jaunsari (Devanagari) | `Jns`|Urdu | `ur`|
-|Javanese | `jv` |Uyghur (Arabic) | `ug`|
-|Kabuverdianu | `kea` |Uzbek (Arabic) | `uz-arab`|
-|Kachin (Latin) | `kac` |Uzbek (Cyrillic) | `uz-cyrl`|
-|Kangri (Devanagari) | `xnr`|Uzbek (Latin) | `uz` |
-|Karachay-Balkar | `krc`|Volapük | `vo` |
-|Kara-Kalpak (Cyrillic) | `kaa-cyrl`|Walser | `wae` |
-|Kara-Kalpak (Latin) | `kaa` |Welsh | `cy` |
-|Kashubian | `csb` |Western Frisian | `fy` |
-|Kazakh (Cyrillic) | `kk-cyrl`|Yucatec Maya | `yua` |
-|Kazakh (Latin) | `kk-latn`|Zhuang | `za` |
-|Khaling | `klr`|Zulu | `zu` |
-
-### Print text in preview (API version 2023-02-28-preview)
-
-Use the parameter `api-version=2023-02-28-preview` when using the REST API or the corresponding SDK to support these languages in your applications.
-
-|Language| Code (optional) |Language| Code (optional) |
-|:--|:-:|:--|:-:|
-|Abaza|`abq`|Malagasy | `mg` |
-|Abkhazian |`ab`|Mandinka | `mnk` |
-|Achinese | `ace`| Mapudungun | `arn` |
-|Acoli | `ach` | Mari (Russia) | `chm`|
-|Adangme |`ada`| Masai | `mas`|
-|Adyghe | `ady`| Mende (Sierra Leone) | `men`|
-|Afar | `aa`| Meru | `mer`|
-|Akan | `ak`| Meta' | `mgo`|
-|Algonquin |`alq`| Minangkabau | `min`
-|Asu (Tanzania) | `asa`|Mohawk| `moh`|
-|Avaric | `av`| Mongondow | `mog`
-|Aymara | `ay`| Morisyen | `mfe` |
-|Bafia |`ksf`| Mundang | `mua` |
-|Bambara | `bm`| Nahuatl | `nah` |
-|Bashkir | `ba`| Navajo | `nv` |
-|Bemba (Zambia) | `bem`| Ndonga | `ng` |
-|Bena (Tanzania) | `bez`|Ngomba | `jgo`|
-|Bikol |`bik`|North Ndebele | `nd` |
-|Bini | `bin`|Nyanja | `ny`|
-|Chechen | `ce`|Nyankole | `nyn` |
-|Chiga | `cgg`|Nzima | `nzi`
-|Choctaw |`cho`|Ojibwa | `oj`
-|Chukot |`ckt`|Oromo | `om` |
-|Chuvash | `cv`|Pampanga | `pam`|
-|Cree |`cr`|Pangasinan | `pag`|
-|Creek| `mus`| Papiamento | `pap`|
-|Crow | `cro`|Pedi | `nso`|
-|Dargwa | `dar`|Quechua | `qu`|
-|Duala |`dua`|Rundi | `rn` |
-|Dungan |`dng`|Rwa | `rwk`|
-|Efik|`efi`|Samburu | `saq`|
-|Fon | `fon`|Sango | `sg`
-|Ga | `gaa` |Sangu (Gabon) | `snq`|
-|Ganda | `lg` |Sena | `seh` |
-|Gayo | `gay`|Serbian (Cyrillic) | `sr-cyrl` |
-|Guarani| `gn`| Shambala | `ksb`|
-|Gusii | `guz`|Shona | `sn`|
-|Greek | `el`|Siksika | `bla`|
-|Hebrew | `he` | Soga | `xog`|
-|Herero | `hz` |Somali (Latin) | `so-latn` |
-|Hiligaynon | `hil` |Songhai | `son` |
-|Iban | `iba`|South Ndebele | `nr`|
-|Igbo |`ig`|Southern Altai | `alt`|
-|Iloko | `ilo`|Southern Sotho | `st` |
-|Ingush |`inh`|Sundanese | `su` |
-|Jola-Fonyi |`dyo`|Swati | `ss` |
-|Kabardian | `kbd` | Tabassaran| `tab` |
-|Kalenjin | `kln` |Tachelhit| `shi` |
-|Kalmyk | `xal` | Tahitian | `ty`|
-|Kanuri | `kr`|Taita | `dav` |
-|Khakas | `kjh` | Tamil | `ta`|
-|Kikuyu | `ki` | Tatar (Cyrillic) | `tt-cyrl` |
-|Kildin Sami | `sjd` | Teso | `teo` |
-|Kinyarwanda| `rw`| Thai | `th`|
-|Komi | `kv` | Tok Pisin | `tpi` |
-|Kongo| `kg`| Tsonga | `ts`|
-|Kpelle | `kpe` | Tswana | `tn` |
-|Kuanyama | `kj`| Udmurt | `udm`|
-|Lak | `lbe` | Uighur (Cyrillic) | `ug-cyrl` |
-|Latvian | `lv`|Ukrainian | `uk`|
-|Lezghian | `lex`|Vietnamese | `vi`
-|Lingala | `ln`|Vunjo | `vun` |
-|Lozi| `loz` | Wolof | `wo` |
-|Luo (Kenya and Tanzania) | `luo`| Xhosa|`xh` |
-|Luyia | `luy` |Yakut | `sah` |
-|Macedonian | `mk`| Zapotec | `zap` |
-|Machame| `jmc`| Zarma | `dje` |
-|Madurese | `mad` |
-|Makhuwa-Meetto | `mgh` |
-|Makonde | `kde` |
-
-## Custom neural model
-
-Language| API Version |
-|:--|:-:|
-|English | `2022-08-31` (GA), `2023-02-28-preview`|
-|Spanish | `2023-02-28-preview`|
-|German | `2023-02-28-preview`|
-|French | `2023-02-28-preview`|
-|Italian | `2023-02-28-preview`|
-|Dutch | `2023-02-28-preview`|
-
-## Receipt model
-
->[!NOTE]
- > It's not necessary to specify a locale. This is an optional parameter. The Form Recognizer deep-learning technology will auto-detect the language of the text in your image.
-
-Receipt supports all English receipts and the following locales:
-
-|Language| Locale code |
-|:--|:-:|
-|English (Australia)|`en-au`|
-|English (Canada)|`en-ca`|
-|English (United Kingdom)|`en-gb`|
-|English (India)|`en-in`|
-|English (United States)| `en-us`|
-|French | `fr` |
-| Spanish | `es` |
-
-## Business card model
-
->[!NOTE]
- > It's not necessary to specify a locale. This is an optional parameter. The Form Recognizer deep-learning technology will auto-detect the language of the text in your image.
-
-Business Card supports all English business cards with the following locales:
-
-|Language| Locale code |
-|:--|:-:|
-|English |`en-US`, `en-CA`, `en-GB`, `en-IN`|
-|German | de|
-|French | fr|
-|Italian |it|
-|Portuguese |pt|
-|Dutch | nl|
-
-The **2022-06-30** and later releases include Japanese language support:
-
-|Language| Locale code |
-|:--|:-:|
-| Japanese | `ja` |
-
-## Invoice model
-
-Language| Locale code |
-|:--|:-:|
-|English |`en-US`, `en-CA`, `en-GB`, `en-IN`|
-|Spanish| es|
-|German | de|
-|French | fr|
-|Italian |it|
-|Portuguese |pt|
-|Dutch | nl|
-
-## ID document model
-
-This technology is currently available for US driver licenses and the biographical page from international passports (excluding visa and other travel documents).
-
-## General Document
-
-Language| Locale code |
-|:--|:-:|
-|English (United States)|en-us|
-
-## Detected languages: Read API
-
-The [Read API](concept-read.md) supports detecting the following languages in your documents. This list may include languages not currently supported for text extraction.
-
-> [!NOTE]
-> **Language detection**
->
-> Form Recognizer read model can _detect_ possible presence of languages and returns language codes for detected languages. To determine if text can also be
-> extracted for a given language, see previous sections.
-
-> [!NOTE]
-> **Detected languages vs extracted languages**
->
-> This section lists the languages we can detect from the documents using the Read model, if present. Note that this list differs from the list of languages we support extracting text from, which is specified in the above sections for each model.
-
-| Language | Code |
-|||
-| Afrikaans | `af` |
-| Albanian | `sq` |
-| Amharic | `am` |
-| Arabic | `ar` |
-| Armenian | `hy` |
-| Assamese | `as` |
-| Azerbaijani | `az` |
-| Basque | `eu` |
-| Belarusian | `be` |
-| Bengali | `bn` |
-| Bosnian | `bs` |
-| Bulgarian | `bg` |
-| Burmese | `my` |
-| Catalan | `ca` |
-| Central Khmer | `km` |
-| Chinese | `zh` |
-| Chinese Simplified | `zh_chs` |
-| Chinese Traditional | `zh_cht` |
-| Corsican | `co` |
-| Croatian | `hr` |
-| Czech | `cs` |
-| Danish | `da` |
-| Dari | `prs` |
-| Divehi | `dv` |
-| Dutch | `nl` |
-| English | `en` |
-| Esperanto | `eo` |
-| Estonian | `et` |
-| Fijian | `fj` |
-| Finnish | `fi` |
-| French | `fr` |
-| Galician | `gl` |
-| Georgian | `ka` |
-| German | `de` |
-| Greek | `el` |
-| Gujarati | `gu` |
-| Haitian | `ht` |
-| Hausa | `ha` |
-| Hebrew | `he` |
-| Hindi | `hi` |
-| Hmong Daw | `mww` |
-| Hungarian | `hu` |
-| Icelandic | `is` |
-| Igbo | `ig` |
-| Indonesian | `id` |
-| Inuktitut | `iu` |
-| Irish | `ga` |
-| Italian | `it` |
-| Japanese | `ja` |
-| Javanese | `jv` |
-| Kannada | `kn` |
-| Kazakh | `kk` |
-| Kinyarwanda | `rw` |
-| Kirghiz | `ky` |
-| Korean | `ko` |
-| Kurdish | `ku` |
-| Lao | `lo` |
-| Latin | `la` |
-| Latvian | `lv` |
-| Lithuanian | `lt` |
-| Luxembourgish | `lb` |
-| Macedonian | `mk` |
-| Malagasy | `mg` |
-| Malay | `ms` |
-| Malayalam | `ml` |
-| Maltese | `mt` |
-| Maori | `mi` |
-| Marathi | `mr` |
-| Mongolian | `mn` |
-| Nepali | `ne` |
-| Norwegian | `no` |
-| Norwegian Nynorsk | `nn` |
-| Oriya | `or` |
-| Pashto | `ps` |
-| Persian | `fa` |
-| Polish | `pl` |
-| Portuguese | `pt` |
-| Punjabi | `pa` |
-| Queretaro Otomi | `otq` |
-| Romanian | `ro` |
-| Russian | `ru` |
-| Samoan | `sm` |
-| Serbian | `sr` |
-| Shona | `sn` |
-| Sindhi | `sd` |
-| Sinhala | `si` |
-| Slovak | `sk` |
-| Slovenian | `sl` |
-| Somali | `so` |
-| Spanish | `es` |
-| Sundanese | `su` |
-| Swahili | `sw` |
-| Swedish | `sv` |
-| Tagalog | `tl` |
-| Tahitian | `ty` |
-| Tajik | `tg` |
-| Tamil | `ta` |
-| Tatar | `tt` |
-| Telugu | `te` |
-| Thai | `th` |
-| Tibetan | `bo` |
-| Tigrinya | `ti` |
-| Tongan | `to` |
-| Turkish | `tr` |
-| Turkmen | `tk` |
-| Ukrainian | `uk` |
-| Urdu | `ur` |
-| Uzbek | `uz` |
-| Vietnamese | `vi` |
-| Welsh | `cy` |
-| Xhosa | `xh` |
-| Yiddish | `yi` |
-| Yoruba | `yo` |
-| Yucatec Maya | `yua` |
-| Zulu | `zu` |
----
-This table lists the written languages supported by each Form Recognizer service.
-
-<!-- markdownlint-disable MD001 -->
-<!-- markdownlint-disable MD024 -->
-
-## Layout and custom model
-
-|Language| Language code |
-|:--|:-:|
-|Afrikaans|`af`|
-|Albanian |`sq`|
-|Asturian |`ast`|
-|Basque |`eu`|
-|Bislama |`bi`|
-|Breton |`br`|
-|Catalan |`ca`|
-|Cebuano |`ceb`|
-|Chamorro |`ch`|
-|Chinese (Simplified) | `zh-Hans`|
-|Chinese (Traditional) | `zh-Hant`|
-|Cornish |`kw`|
-|Corsican |`co`|
-|Crimean Tatar (Latin) |`crh`|
-|Czech | `cs` |
-|Danish | `da` |
-|Dutch | `nl` |
-|English (printed and handwritten) | `en` |
-|Estonian |`et`|
-|Fijian |`fj`|
-|Filipino |`fil`|
-|Finnish | `fi` |
-|French | `fr` |
-|Friulian | `fur` |
-|Galician | `gl` |
-|German | `de` |
-|Gilbertese | `gil` |
-|Greenlandic | `kl` |
-|Haitian Creole | `ht` |
-|Hani | `hni` |
-|Hmong Daw (Latin) | `mww` |
-|Hungarian | `hu` |
-|Indonesian | `id` |
-|Interlingua | `ia` |
-|Inuktitut (Latin) | `iu` |
-|Irish | `ga` |
-|Italian | `it` |
-|Japanese | `ja` |
-|Javanese | `jv` |
-|K'iche' | `quc` |
-|Kabuverdianu | `kea` |
-|Kachin (Latin) | `kac` |
-|Kara-Kalpak | `kaa` |
-|Kashubian | `csb` |
-|Khasi | `kha` |
-|Korean | `ko` |
-|Kurdish (Latin) | `kur` |
-|Luxembourgish | `lb` |
-|Malay (Latin) | `ms` |
-|Manx | `gv` |
-|Neapolitan | `nap` |
-|Norwegian | `no` |
-|Occitan | `oc` |
-|Polish | `pl` |
-|Portuguese | `pt` |
-|Romansh | `rm` |
-|Scots | `sco` |
-|Scottish Gaelic | `gd` |
-|Slovenian | `slv` |
-|Spanish | `es` |
-|Swahili (Latin) | `sw` |
-|Swedish | `sv` |
-|Tatar (Latin) | `tat` |
-|Tetum | `tet` |
-|Turkish | `tr` |
-|Upper Sorbian | `hsb` |
-|Uzbek (Latin) | `uz` |
-|Volapük | `vo` |
-|Walser | `wae` |
-|Western Frisian | `fy` |
-|Yucatec Maya | `yua` |
-|Zhuang | `za` |
-|Zulu | `zu` |
-
-## Prebuilt receipt and business card
-
->[!NOTE]
- > It's not necessary to specify a locale. This is an optional parameter. The Form Recognizer deep-learning technology will auto-detect the language of the text in your image.
-
-Prebuilt Receipt and Business Cards support all English receipts and business cards with the following locales:
-
-|Language| Locale code |
-|:--|:-:|
-|English (Australia)|`en-au`|
-|English (Canada)|`en-ca`|
-|English (United Kingdom)|`en-gb`|
-|English (India)|`en-in`|
-|English (United States)| `en-us`|
-
-## Prebuilt invoice
-
-Language| Locale code |
-|:--|:-:|
-|English (United States)|en-us|
-
-## Prebuilt identity documents
-
-This technology is currently available for US driver licenses and the biographical page from international passports (excluding visa and other travel documents).
--
-## Next steps
--
-> [!div class="nextstepaction"]
-> [Try Form Recognizer Studio](https://formrecognizer.appliedai.azure.com/studio)
-
-> [!div class="nextstepaction"]
-> [Try Form Recognizer Sample Labeling tool](https://aka.ms/fott-2.1-ga)
applied-ai-services Managed Identities Secured Access https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/managed-identities-secured-access.md
- Title: "Configure secure access with managed identities and private endpoints"-
-description: Learn how to configure secure communications between Form Recognizer and other Azure Services.
----- Previously updated : 03/03/2023-
-monikerRange: '>=form-recog-2.1.0'
--
-# Configure secure access with managed identities and private endpoints
--
-This how-to guide walks you through the process of enabling secure connections for your Form Recognizer resource. You can secure the following connections:
-
-* Communication between a client application within a Virtual Network (VNET) and your Form Recognizer Resource.
-
-* Communication between Form Recognizer Studio and your Form Recognizer resource.
-
-* Communication between your Form Recognizer resource and a storage account (needed when training a custom model).
-
- You're setting up your environment to secure the resources:
-
- :::image type="content" source="media/managed-identities/secure-config.png" alt-text="Screenshot of secure configuration with managed identity and private endpoints.":::
-
-## Prerequisites
-
-To get started, you need:
-
-* An active [**Azure account**](https://azure.microsoft.com/free/cognitive-services/). If you don't have one, you can [**create a free account**](https://azure.microsoft.com/free/).
-
-* A [**Form Recognizer**](https://ms.portal.azure.com/#create/Microsoft.CognitiveServicesFormRecognizer) or [**Cognitive Services**](https://portal.azure.com/#create/Microsoft.CognitiveServicesAllInOne) resource in the Azure portal. For detailed steps, _see_ [Create a Cognitive Services resource using the Azure portal](../../cognitive-services/cognitive-services-apis-create-account.md?tabs=multiservice%2cwindows).
-
-* An [**Azure blob storage account**](https://portal.azure.com/#create/Microsoft.StorageAccount-ARM) in the same region as your Form Recognizer resource. Create containers to store and organize your blob data within your storage account.
-
-* An [**Azure virtual network**](https://portal.azure.com/#create/Microsoft.VirtualNetwork-ARM) in the same region as your Form Recognizer resource. Create a virtual network to deploy your application resources to train models and analyze documents.
-
-* An **Azure data science VM** for [**Windows**](../../machine-learning/data-science-virtual-machine/provision-vm.md) or [**Linux/Ubuntu**](../../machine-learning/data-science-virtual-machine/dsvm-ubuntu-intro.md) to optionally deploy a data science VM in the virtual network to test the secure connections being established.
-
-## Configure resources
-
-Configure each of the resources to ensure that the resources can communicate with each other:
-
-* Configure the Form Recognizer Studio to use the newly created Form Recognizer resource by accessing the settings page and selecting the resource.
-
-* Validate that the configuration works by selecting the Read API and analyzing a sample document. If the resource was configured correctly, the request successfully completes.
-
-* Add a training dataset to a container in the Storage account you created.
-
-* Select the custom model tile to create a custom project. Ensure that you select the same Form Recognizer resource and the storage account you created in the previous step.
-
-* Select the container with the training dataset you uploaded in the previous step. Ensure that if the training dataset is within a folder, the folder path is set appropriately.
-
-* If you have the required permissions, the Studio sets the CORS setting required to access the storage account. If you don't have the permissions, you need to ensure that the CORS settings are configured on the Storage account before you can proceed.
-
-* Validate that the Studio is configured to access your training data. If you can see your documents in the labeling experience, all the required connections have been established.
-
-You now have a working implementation of all the components needed to build a Form Recognizer solution with the default security model:
-
- :::image type="content" source="media/managed-identities/default-config.png" alt-text="Screenshot of default security configuration.":::
-
-Next, complete the following steps:
-
-* Set up managed identity on the Form Recognizer resource.
-
-* Secure the storage account to restrict traffic from only specific virtual networks and IP addresses.
-
-* Configure the Form Recognizer managed identity to communicate with the storage account.
-
-* Disable public access to the Form Recognizer resource and create a private endpoint to make it accessible from the virtual network.
-
-* Add a private endpoint for the storage account in a selected virtual network.
-
-* Validate that you can train models and analyze documents from within the virtual network.
-
-## Setup managed identity for Form Recognizer
-
-Navigate to the Form Recognizer resource in the Azure portal and select the **Identity** tab. Toggle the **System assigned** managed identity to **On** and save the changes:
-
- :::image type="content" source="media/managed-identities/v2-fr-mi.png" alt-text="Screenshot of configure managed identity.":::
-
-## Secure the Storage account to limit traffic
-
-Start configuring secure communications by navigating to the **Networking** tab on your **Storage account** in the Azure portal.
-
-1. Under **Firewalls and virtual networks**, choose **Enabled from selected virtual networks and IP addresses** from the **Public network access** list.
-
-1. Ensure that **Allow Azure services on the trusted services list to access this storage account** is selected from the **Exceptions** list.
-
-1. **Save** your changes.
-
- :::image type="content" source="media/managed-identities/v2-stg-firewall.png" alt-text="Screenshot of configure storage firewall.":::
-
-> [!NOTE]
->
-> Your storage account won't be accessible from the public internet.
->
-> Refreshing the custom model labeling page in the Studio will result in an error message.
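
If you'd rather script this lockdown than use the portal, the following hedged sketch sends the equivalent Azure Resource Manager request with Python. The resource names and the api-version are assumptions to adapt to your environment; it's a sketch, not an official sample.

```python
# Sketch: restrict a storage account to selected networks via the ARM REST API.
# Assumes the azure-identity package and that your signed-in identity can manage the account.
# The api-version shown is an assumption; use any current Microsoft.Storage api-version.
import requests
from azure.identity import DefaultAzureCredential

subscription_id = "<subscription-id>"
resource_group = "<resource-group>"
storage_account = "<storage-account-name>"

token = DefaultAzureCredential().get_token("https://management.azure.com/.default").token
url = (
    f"https://management.azure.com/subscriptions/{subscription_id}"
    f"/resourceGroups/{resource_group}/providers/Microsoft.Storage"
    f"/storageAccounts/{storage_account}?api-version=2022-09-01"
)

# Deny public traffic by default, but keep the trusted Azure services exception enabled.
body = {"properties": {"networkAcls": {"defaultAction": "Deny", "bypass": "AzureServices"}}}

response = requests.patch(url, json=body, headers={"Authorization": f"Bearer {token}"})
response.raise_for_status()
print("Default network action:", response.json()["properties"]["networkAcls"]["defaultAction"])
```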
-
-## Enable access to storage from Form Recognizer
-
-To ensure that the Form Recognizer resource can access the training dataset, you need to add a role assignment for your [managed identity](#set-up-managed-identity-for-form-recognizer).
-
-1. Staying on the storage account window in the Azure portal, navigate to the **Access Control (IAM)** tab in the left navigation bar.
-
-1. Select the **Add role assignment** button.
-
- :::image type="content" source="media/managed-identities/v2-stg-role-assign-role.png" alt-text="Screenshot of add role assignment window.":::
-
-1. On the **Role** tab, search for and select the **Storage Blob Data Reader** permission and select **Next**.
-
- :::image type="content" source="media/managed-identities/v2-stg-role-assignment.png" alt-text="Screenshot of choose a role tab.":::
-
-1. On the **Members** tab, select the **Managed identity** option and choose **+ Select members**.
-
-1. On the **Select managed identities** dialog window, select the following options:
-
- * **Subscription**. Select your subscription.
-
- * **Managed identity**. Select **Form Recognizer**.
-
- * **Select**. Choose the Form Recognizer resource you enabled with a managed identity.
-
- :::image type="content" source="media/managed-identities/v2-stg-role-assign-resource.png" alt-text="Screenshot of managed identities dialog window.":::
-
-1. **Close** the dialog window.
-
-1. Finally, select **Review + assign** to save your changes.
-
-Great! You've configured your Form Recognizer resource to use a managed identity to connect to a storage account.
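
If you want to script the role assignment instead, here's a hedged Python sketch that calls the Azure RBAC REST API directly. The subscription, resource names, and managed identity object ID are placeholders (the object ID is shown on the resource's **Identity** tab), and the api-version is an assumption.

```python
# Sketch: assign the Storage Blob Data Reader role to the Form Recognizer managed identity
# by calling the Azure RBAC REST API. IDs and the api-version are placeholders/assumptions.
import uuid
import requests
from azure.identity import DefaultAzureCredential

subscription_id = "<subscription-id>"
resource_group = "<resource-group>"
storage_account = "<storage-account-name>"
principal_id = "<object-id-of-the-form-recognizer-managed-identity>"

scope = (
    f"/subscriptions/{subscription_id}/resourceGroups/{resource_group}"
    f"/providers/Microsoft.Storage/storageAccounts/{storage_account}"
)
# Built-in role definition ID for Storage Blob Data Reader.
role_definition_id = (
    f"/subscriptions/{subscription_id}/providers/Microsoft.Authorization"
    "/roleDefinitions/2a2b9908-6ea1-4ae2-8e65-a410df84e7d1"
)

token = DefaultAzureCredential().get_token("https://management.azure.com/.default").token
url = (
    f"https://management.azure.com{scope}/providers/Microsoft.Authorization"
    f"/roleAssignments/{uuid.uuid4()}?api-version=2022-04-01"
)
body = {
    "properties": {
        "roleDefinitionId": role_definition_id,
        "principalId": principal_id,
        "principalType": "ServicePrincipal",
    }
}

response = requests.put(url, json=body, headers={"Authorization": f"Bearer {token}"})
response.raise_for_status()
print("Role assignment created:", response.json()["name"])
```

Remember that role assignments can take up to 30 minutes to propagate, so a newly created assignment may not take effect immediately.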
-
-> [!TIP]
->
-> When you try the [Form Recognizer Studio](https://formrecognizer.appliedai.azure.com/studio), you'll see the READ API and other prebuilt models don't require storage access to process documents. However, training a custom model requires additional configuration because the Studio can't directly communicate with a storage account.
- > You can enable storage access by selecting **Add your client IP address** from the **Networking** tab of the storage account to configure your machine to access the storage account via IP allowlisting.
-
-## Configure private endpoints for access from VNETs
-
-When you connect to resources from a virtual network, adding private endpoints ensures that both the storage account and the Form Recognizer resource are accessible from the virtual network.
-
-Next, configure the virtual network to ensure that only resources within the virtual network, or traffic routed through the network, have access to the Form Recognizer resource and the storage account.
-
-### Enable your virtual network and private endpoints
-
-1. In the Azure portal, navigate to your Form Recognizer resource.
-
-1. Select the **Networking** tab from the left navigation bar.
-
-1. Enable the **Selected Networking and Private Endpoints** option from the **Firewalls and virtual networks** tab and select **Save**.
-
-> [!NOTE]
->
->If you try accessing any of the Form Recognizer Studio features, you'll see an access denied message. To enable access from the Studio on your machine, select the **client IP address checkbox** and **Save** to restore access.
-
- :::image type="content" source="media/managed-identities/v2-fr-network.png" alt-text="Screenshot showing how to disable public access to Form Recognizer.":::
-
-### Configure your private endpoint
-
-1. Navigate to the **Private endpoint connections** tab and select **+ Private endpoint**. The **Create a private endpoint** dialog page opens.
-
-1. On the **Create private endpoint** dialog page, select the following options:
-
- * **Subscription**. Select your billing subscription.
-
- * **Resource group**. Select the appropriate resource group.
-
- * **Name**. Enter a name for your private endpoint.
-
- * **Region**. Select the same region as your virtual network.
-
- * Select **Next: Resource**.
-
- :::image type="content" source="media/managed-identities/v2-fr-private-end-basics.png" alt-text="Screenshot showing how to set-up a private endpoint":::
-
-### Configure your virtual network
-
-1. On the **Resource** tab, accept the default values and select **Next: Virtual Network**.
-
-1. On the **Virtual Network** tab, ensure that the virtual network you created is selected in the **Virtual network** field.
-
-1. If you have multiple subnets, select the subnet where you want the private endpoint to connect. Accept the default **Dynamically allocate IP address** setting.
-
-1. Select **Next: DNS**.
-
-1. Accept the default value **Yes** to **integrate with private DNS zone**.
-
- :::image type="content" source="media/managed-identities/v2-fr-private-end-vnet.png" alt-text="Screenshot showing how to configure private endpoint":::
-
-1. Accept the remaining defaults and select **Next: Tags**.
-
-1. Select **Next: Review + create**.
-
-Well done! Your Form Recognizer resource is now accessible only from the virtual network and any IP addresses in the IP allowlist.
-
-### Configure private endpoints for storage
-
-Navigate to your **storage account** on the Azure portal.
-
-1. Select the **Networking** tab from the left navigation menu.
-
-1. Select the **Private endpoint connections** tab.
-
-1. Choose **+ Private endpoint**.
-
-1. Provide a name and choose the same region as the virtual network.
-
-1. Select **Next: Resource**.
-
- :::image type="content" source="media/managed-identities/v2-stg-private-end-basics.png" alt-text="Screenshot showing how to create a private endpoint":::
-
-1. On the **Resource** tab, select **blob** from the **Target sub-resource** list.
-
-1. Select **Next: Virtual Network**.
-
- :::image type="content" source="media/managed-identities/v2-stg-private-end-resource.png" alt-text="Screenshot showing how to configure a private endpoint for a blob.":::
-
-1. Select the **Virtual network** and **Subnet**. Make sure **Enable network policies for all private endpoints in this subnet** is selected and **Dynamically allocate IP address** is enabled.
-
-1. Select **Next: DNS**.
-
-1. Make sure that **Yes** is enabled for **Integrate with private DNS zone**.
-
-1. Select **Next: Tags**.
-
-1. Select **Next: Review + create**.
-
-Great work! You now have all the connections between the Form Recognizer resource and storage configured to use managed identities.
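
If you prefer to automate the private endpoint creation, the following hedged sketch sends the equivalent ARM request for the storage account's blob sub-resource. The resource IDs, names, and api-version are assumptions, and the sketch intentionally omits the private DNS zone integration that the portal steps above handle for you.

```python
# Sketch: create a private endpoint for the storage account's blob sub-resource via ARM.
# Resource IDs, names, and the api-version are assumptions; private DNS zone integration
# is intentionally left out (the portal handles it in the steps above).
import requests
from azure.identity import DefaultAzureCredential

subscription_id = "<subscription-id>"
resource_group = "<resource-group>"
location = "<region>"  # same region as the virtual network
subnet_id = (
    f"/subscriptions/{subscription_id}/resourceGroups/{resource_group}"
    "/providers/Microsoft.Network/virtualNetworks/<vnet-name>/subnets/<subnet-name>"
)
storage_id = (
    f"/subscriptions/{subscription_id}/resourceGroups/{resource_group}"
    "/providers/Microsoft.Storage/storageAccounts/<storage-account-name>"
)

token = DefaultAzureCredential().get_token("https://management.azure.com/.default").token
url = (
    f"https://management.azure.com/subscriptions/{subscription_id}"
    f"/resourceGroups/{resource_group}/providers/Microsoft.Network"
    "/privateEndpoints/<private-endpoint-name>?api-version=2022-07-01"
)
body = {
    "location": location,
    "properties": {
        "subnet": {"id": subnet_id},
        "privateLinkServiceConnections": [
            {
                "name": "storage-blob-connection",
                "properties": {"privateLinkServiceId": storage_id, "groupIds": ["blob"]},
            }
        ],
    },
}

response = requests.put(url, json=body, headers={"Authorization": f"Bearer {token}"})
response.raise_for_status()
# For the Form Recognizer private endpoint, reuse this call with the Cognitive Services
# account resource ID and groupIds set to ["account"].
print("Provisioning state:", response.json()["properties"]["provisioningState"])
```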
-
-> [!NOTE]
-> The resources are only accessible from the virtual network.
->
-> Studio access and analyze requests to your Form Recognizer resource will fail unless the request originates from the virtual network or is routed via the virtual network.
-
-## Validate your deployment
-
-To validate your deployment, you can deploy a virtual machine (VM) to the virtual network and connect to the resources.
-
-1. Configure a [Data Science VM](https://azuremarketplace.microsoft.com/marketplace/apps/microsoft-dsvm.dsvm-win-2019?tab=Overview) in the virtual network.
-
-1. Remotely connect to the VM from your desktop, then launch a browser session to access Form Recognizer Studio.
-
-1. Analyze requests and training operations should now complete successfully. To double-check name resolution through the private endpoints, see the sketch that follows these steps.
-
-That's it! You can now configure secure access for your Form Recognizer resource with managed identities and private endpoints.
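
From the data science VM inside the virtual network, a quick way to confirm that the private endpoints are in effect is to check that the service host names resolve to private IP addresses. This sketch uses only the Python standard library; the host names are placeholders for your own resources.

```python
# Sketch: confirm that private DNS resolution is in effect from inside the virtual network.
# Replace the host names with your own endpoints. Addresses in the private ranges
# (for example, 10.x.x.x) indicate that traffic is flowing through the private endpoints.
import socket

hosts = [
    "<your-form-recognizer-resource>.cognitiveservices.azure.com",
    "<your-storage-account>.blob.core.windows.net",
]

for host in hosts:
    try:
        address = socket.gethostbyname(host)
        print(f"{host} resolves to {address}")
    except socket.gaierror as error:
        print(f"{host} failed to resolve: {error}")
```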
-
-## Common error messages
-
-* **Failed to access Blob container**:
-
- :::image type="content" source="media/managed-identities/cors-error.png" alt-text="Screenshot of error message when CORS config is required":::
-
- **Resolution**: [Configure CORS](quickstarts/try-form-recognizer-studio.md#prerequisites-for-new-users). A minimal scripting sketch is shown after this list.
-
-* **AuthorizationFailure**:
-
- :::image type="content" source="media/managed-identities/auth-failure.png" alt-text="Screenshot of authorization failure error.":::
-
- **Resolution**: Ensure that there's network line-of-sight between the computer accessing the Form Recognizer Studio and the storage account. For example, you may need to add the client IP address on the storage account's **Networking** tab.
-
-* **ContentSourceNotAccessible**:
-
- :::image type="content" source="media/managed-identities/content-source-error.png" alt-text="Screenshot of content source not accessible error.":::
-
- **Resolution**: Make sure you've given your Form Recognizer managed identity the role of **Storage Blob Data Reader** and enabled **Trusted services** access or **Resource instance** rules on the networking tab.
-
-* **AccessDenied**:
-
- :::image type="content" source="media/managed-identities/access-denied.png" alt-text="Screenshot of an access denied error.":::
-
- **Resolution**: Check to make sure there's connectivity between the computer accessing the Form Recognizer Studio and the Form Recognizer service. For example, you may need to add the client IP address on the Form Recognizer resource's **Networking** tab.
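
For the CORS error above, a minimal Python sketch that applies a CORS rule to the blob service might look like the following. It assumes the `azure-storage-blob` and `azure-identity` packages; the Studio origin shown is the public Studio URL, and the other values are permissive placeholders you should tighten for your environment.

```python
# Sketch: set a CORS rule on the blob service so Form Recognizer Studio can read your container.
# Assumes azure-storage-blob and azure-identity; adjust the allowed values to your requirements.
from azure.identity import DefaultAzureCredential
from azure.storage.blob import BlobServiceClient, CorsRule

account_url = "https://<your-storage-account>.blob.core.windows.net"
client = BlobServiceClient(account_url, credential=DefaultAzureCredential())

cors_rule = CorsRule(
    allowed_origins=["https://formrecognizer.appliedai.azure.com"],
    allowed_methods=["GET", "HEAD", "PUT", "POST", "DELETE", "OPTIONS"],
    allowed_headers=["*"],
    exposed_headers=["*"],
    max_age_in_seconds=120,
)
client.set_service_properties(cors=[cors_rule])
print("CORS rule applied for the Form Recognizer Studio origin.")
```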
-
-## Next steps
-
-> [!div class="nextstepaction"]
-> [Access Azure Storage from a web app using managed identities](../../app-service/scenario-secure-app-access-storage.md?bc=%2fazure%2fapplied-ai-services%2fform-recognizer%2fbreadcrumb%2ftoc.json&toc=%2fazure%2fapplied-ai-services%2fform-recognizer%2ftoc.json)
applied-ai-services Managed Identities https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/managed-identities.md
- Title: Create and use managed identities with Form Recognizer-
-description: Understand how to create and use managed identity with Form Recognizer
----- Previously updated : 03/17/2023-
-monikerRange: '>=form-recog-2.1.0'
--
-# Managed identities for Form Recognizer
--
-Managed identities for Azure resources are service principals that create an Azure Active Directory (Azure AD) identity and specific permissions for Azure managed resources:
--
-* You can use managed identities to grant access to any resource that supports Azure AD authentication, including your own applications. Unlike security keys and authentication tokens, managed identities eliminate the need for developers to manage credentials.
-
-* To grant access to an Azure resource, assign an Azure role to a managed identity using [Azure role-based access control (Azure RBAC)](../../role-based-access-control/overview.md).
-
-* There's no added cost to use managed identities in Azure.
-
-> [!IMPORTANT]
->
-> * Managed identities eliminate the need for you to manage credentials, including Shared Access Signature (SAS) tokens.
->
-> * Managed identities are a safer way to grant access to data without having credentials in your code.
-
-## Private storage account access
-
- Private Azure storage account access and authentication support [managed identities for Azure resources](../../active-directory/managed-identities-azure-resources/overview.md). If you have an Azure storage account, protected by a Virtual Network (VNet) or firewall, Form Recognizer can't directly access your storage account data. However, once a managed identity is enabled, Form Recognizer can access your storage account using an assigned managed identity credential.
-
-> [!NOTE]
->
-> * If you intend to analyze your storage data with the [**Form Recognizer Sample Labeling tool (FOTT)**](https://fott-2-1.azurewebsites.net/), you must deploy the tool behind your VNet or firewall.
->
-> * The Analyze [**Receipt**](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-v2-1/operations/AnalyzeReceiptAsync), [**Business Card**](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-v2-1/operations/AnalyzeBusinessCardAsync), [**Invoice**](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-v2-1/operations/5ed8c9843c2794cbb1a96291), [**ID document**](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-v2-1/operations/5f74a7738978e467c5fb8707), and [**Custom Form**](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-v2-1/operations/AnalyzeWithCustomForm) APIs can extract data from a single document by posting requests as raw binary content. In these scenarios, there is no requirement for a managed identity credential.
-
-## Prerequisites
-
-To get started, you need:
-
-* An active [**Azure account**](https://azure.microsoft.com/free/cognitive-services/). If you don't have one, you can [**create a free account**](https://azure.microsoft.com/free/).
-
-* A [**Form Recognizer**](https://portal.azure.com/#create/Microsoft.CognitiveServicesFormRecognizer) or [**Cognitive Services**](https://portal.azure.com/#create/Microsoft.CognitiveServicesAllInOne) resource in the Azure portal. For detailed steps, _see_ [Create a Cognitive Services resource using the Azure portal](../../cognitive-services/cognitive-services-apis-create-account.md?tabs=multiservice%2cwindows).
-
-* An [**Azure blob storage account**](https://portal.azure.com/#create/Microsoft.StorageAccount-ARM) in the same region as your Form Recognizer resource. You also need to create containers to store and organize your blob data within your storage account.
-
- * If your storage account is behind a firewall, **you must enable the following configuration**: </br></br>
-
- * On your storage account page, select **Security + networking** → **Networking** from the left menu.
- :::image type="content" source="media/managed-identities/security-and-networking-node.png" alt-text="Screenshot: security + networking tab.":::
-
- * In the main window, select **Allow access from selected networks**.
- :::image type="content" source="media/managed-identities/firewalls-and-virtual-networks.png" alt-text="Screenshot: Selected networks radio button selected.":::
-
- * On the selected networks page, navigate to the **Exceptions** category and make certain that the [**Allow Azure services on the trusted services list to access this storage account**](../../storage/common/storage-network-security.md?tabs=azure-portal#manage-exceptions) checkbox is enabled.
-
- :::image type="content" source="media/managed-identities/allow-trusted-services-checkbox-portal-view.png" alt-text="Screenshot: allow trusted services checkbox, portal view":::
-* A brief understanding of [**Azure role-based access control (Azure RBAC)**](../../role-based-access-control/role-assignments-portal.md) using the Azure portal.
-
-## Managed identity assignments
-
-There are two types of managed identity: **system-assigned** and **user-assigned**. Currently, Form Recognizer only supports system-assigned managed identity:
-
-* A system-assigned managed identity is **enabled** directly on a service instance. It isn't enabled by default; you must go to your resource and update the identity setting.
-
-* The system-assigned managed identity is tied to your resource throughout its lifecycle. If you delete your resource, the managed identity is deleted as well.
-
-In the following steps, we enable a system-assigned managed identity and grant Form Recognizer limited access to your Azure blob storage account.
-
-## Enable a system-assigned managed identity
-
->[!IMPORTANT]
->
-> To enable a system-assigned managed identity, you need **Microsoft.Authorization/roleAssignments/write** permissions, such as [**Owner**](../../role-based-access-control/built-in-roles.md#owner) or [**User Access Administrator**](../../role-based-access-control/built-in-roles.md#user-access-administrator). You can specify a scope at four levels: management group, subscription, resource group, or resource.
-
-1. Sign in to the [Azure portal](https://portal.azure.com) using an account associated with your Azure subscription.
-
-1. Navigate to your **Form Recognizer** resource page in the Azure portal.
-
-1. In the left rail, select **Identity** from the **Resource Management** list:
-
- :::image type="content" source="media/managed-identities/resource-management-identity-tab.png" alt-text="Screenshot: resource management identity tab in the Azure portal.":::
-
-1. In the main window, toggle the **System assigned Status** tab to **On**.
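
If you want to script this step instead of using the portal, the following hedged sketch sends the equivalent ARM request. The resource names and api-version are assumptions; any recent Microsoft.CognitiveServices api-version that supports the identity property should work.

```python
# Sketch: enable the system-assigned managed identity on a Form Recognizer account via ARM.
# The api-version is an assumption; replace the placeholders with your own values.
import requests
from azure.identity import DefaultAzureCredential

subscription_id = "<subscription-id>"
resource_group = "<resource-group>"
account_name = "<form-recognizer-resource-name>"

token = DefaultAzureCredential().get_token("https://management.azure.com/.default").token
url = (
    f"https://management.azure.com/subscriptions/{subscription_id}"
    f"/resourceGroups/{resource_group}/providers/Microsoft.CognitiveServices"
    f"/accounts/{account_name}?api-version=2023-05-01"
)

response = requests.patch(
    url,
    json={"identity": {"type": "SystemAssigned"}},
    headers={"Authorization": f"Bearer {token}"},
)
response.raise_for_status()

# The principal ID of the new identity is what you grant the Storage Blob Data Reader role to.
identity = response.json().get("identity", {})
print("Managed identity principal ID:", identity.get("principalId"))
```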
-
-## Grant access to your storage account
-
-You need to grant Form Recognizer access to your storage account before it can create, read, or delete blobs. Now that you've enabled Form Recognizer with a system-assigned managed identity, you can use Azure role-based access control (Azure RBAC), to give Form Recognizer access to Azure storage. The **Storage Blob Data Reader** role gives Form Recognizer (represented by the system-assigned managed identity) read and list access to the blob container and data.
-
-1. Under **Permissions** select **Azure role assignments**:
-
- :::image type="content" source="media/managed-identities/enable-system-assigned-managed-identity-portal.png" alt-text="Screenshot: enable system-assigned managed identity in Azure portal.":::
-
-1. On the Azure role assignments page that opens, choose your subscription from the drop-down menu then select **&plus; Add role assignment**.
-
- :::image type="content" source="media/managed-identities/azure-role-assignments-page-portal.png" alt-text="Screenshot: Azure role assignments page in the Azure portal.":::
-
- > [!NOTE]
- >
- > If you're unable to assign a role in the Azure portal because the Add > Add role assignment option is disabled, or you get the permissions error "you do not have permissions to add role assignment at this scope", check that you're currently signed in as a user with an assigned role that has Microsoft.Authorization/roleAssignments/write permissions, such as Owner or User Access Administrator, at the storage scope for the storage resource.
-
-1. Next, you're going to assign a **Storage Blob Data Reader** role to your Form Recognizer service resource. In the **Add role assignment** pop-up window, complete the fields as follows and select **Save**:
-
- | Field | Value|
- ||--|
- |**Scope**| **_Storage_**|
- |**Subscription**| **_The subscription associated with your storage resource_**.|
- |**Resource**| **_The name of your storage resource_**|
- |**Role** | **_Storage Blob Data Reader_**: allows for read access to Azure Storage blob containers and data.|
-
- :::image type="content" source="media/managed-identities/add-role-assignment-window.png" alt-text="Screenshot: add role assignments page in the Azure portal.":::
-
-1. After you've received the _Added Role assignment_ confirmation message, refresh the page to see the added role assignment.
-
- :::image type="content" source="media/managed-identities/add-role-assignment-confirmation.png" alt-text="Screenshot: Added role assignment confirmation pop-up message.":::
-
-1. If you don't see the change right away, wait and try refreshing the page once more. When you assign or remove role assignments, it can take up to 30 minutes for changes to take effect.
-
- :::image type="content" source="media/managed-identities/assigned-roles-window.png" alt-text="Screenshot: Azure role assignments window.":::
-
- That's it! You've completed the steps to enable a system-assigned managed identity. With managed identity and Azure RBAC, you granted Form Recognizer specific access rights to your storage resource without having to manage credentials such as SAS tokens.
-
-## Next steps
-> [!div class="nextstepaction"]
-> [Configure secure access with managed identities and private endpoints](managed-identities-secured-access.md)
applied-ai-services Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/overview.md
- Title: What is Azure Form Recognizer?-
-description: Azure Form Recognizer is a machine-learning based OCR and intelligent document processing service to automate extraction of key data from forms and documents.
----- Previously updated : 05/23/2023---
-<!-- markdownlint-disable MD033 -->
-<!-- markdownlint-disable MD024 -->
-<!-- markdownlint-disable MD036 -->
-<!-- markdownlint-disable MD001 -->
-
-# What is Azure Form Recognizer?
---
-Azure Form Recognizer is a cloud-based [Azure Applied AI Service](../../applied-ai-services/index.yml) that enables you to build intelligent document processing solutions. Massive amounts of data, spanning a wide variety of data types, are stored in forms and documents. Form Recognizer enables you to effectively manage the velocity at which data is collected and processed and is key to improved operations, informed data-driven decisions, and enlightened innovation. </br></br>
-
-| ✔️ [**Document analysis models**](#document-analysis-models) | ✔️ [**Prebuilt models**](#prebuilt-models) | ✔️ [**Custom models**](#custom-model-overview) | ✔️[**Gated preview models**](#gated-preview-models) |
-
-### Document analysis models
-
-Document analysis models enable text extraction from forms and documents and return structured business-ready content ready for your organization's action, use, or progress.
-
- :::column:::
- :::image type="icon" source="media/overview/icon-read.png" link="#read":::</br>
- [**Read**](#read) | Extract printed </br>and handwritten text.
- :::column-end:::
- :::column span="":::
- :::image type="icon" source="media/overview/icon-layout.png" link="#layout":::</br>
- [**Layout**](#layout) | Extract text </br>and document structure.
- :::column-end:::
- :::column span="":::
- :::image type="icon" source="media/overview/icon-general-document.png" link="#general-document":::</br>
- [**General document**](#general-document) | Extract text, </br>structure, and key-value pairs.
- :::column-end:::
-
-### Prebuilt models
-
-Prebuilt models enable you to add intelligent document processing to your apps and flows without having to train and build your own models.
-
- :::column span="":::
- :::image type="icon" source="media/overview/icon-invoice.png" link="#invoice":::</br>
- [**Invoice**](#invoice) | Extract customer </br>and vendor details.
- :::column-end:::
- :::column span="":::
- :::image type="icon" source="media/overview/icon-receipt.png" link="#receipt":::</br>
- [**Receipt**](#receipt) | Extract sales </br>transaction details.
- :::column-end:::
- :::column span="":::
- :::image type="icon" source="media/overview/icon-id-document.png" link="#identity-id":::</br>
- [**Identity**](#identity-id) | Extract identification </br>and verification details.
- :::column-end:::
- :::column span="":::
- :::image type="icon" source="media/overview/icon-insurance-card.png" link="#w-2":::</br>
- [🆕 **Insurance card**](#w-2) | Extract health insurance details.
- :::column-end:::
- :::column span="":::
- :::image type="icon" source="media/overview/icon-w2.png" link="#w-2":::</br>
- [**W2**](#w-2) | Extract taxable </br>compensation details.
- :::column-end:::
- :::column span="":::
- :::image type="icon" source="media/overview/icon-business-card.png" link="#business-card":::</br>
- [**Business card**](#business-card) | Extract business contact details.
- :::column-end:::
- :::column span="":::
- :::image type="icon" source="media/overview/icon-contract.png" link="#contract-model-preview":::</br>
- [**Contract**](#contract-model-preview) | Extract agreement</br> and party details.
- :::column-end:::
-
-### Custom models
-
-Custom models are trained using your labeled datasets to extract distinct data from forms and documents, specific to your use cases. Standalone custom models can be combined to create composed models.
-
- :::column:::
- **Extraction models**</br>
- Custom extraction models are trained to extract labeled fields from documents.
- :::column-end:::
-
- :::column:::
- :::image type="icon" source="media/overview/icon-custom-template.png" link="#custom-template":::</br>
- [**Custom template**](#custom-template) | Extract data from static layouts.
- :::column-end:::
- :::column span="":::
- :::image type="icon" source="media/overview/icon-custom-neural.png" link="#custom-neural":::</br>
- [**Custom neural**](#custom-neural) | Extract data from mixed-type documents.
- :::column-end:::
- :::column span="":::
- :::image type="icon" source="media/overview/icon-custom-composed.png" link="#custom-composed":::</br>
- [**Custom composed**](#custom-composed) | Extract data using a collection of models.
- :::column-end:::
-
- :::column:::
- **Classification model**</br>
- Custom classifiers analyze input documents to identify document types prior to invoking an extraction model.
- :::column-end:::
-
- :::column span="":::
- :::image type="icon" source="media/overview/icon-custom-classifier.png" link="#custom-classification-model":::</br>
- [**Custom classifier**](#custom-classification-model) | Identify designated document types (classes) prior to invoking an extraction model.
- :::column-end:::
-
-### Gated preview models
-
-Form Recognizer Studio preview features are currently in gated preview. Features, approaches, and processes may change prior to general availability (GA) based on user feedback. Complete and submit the [**Form Recognizer private preview request form**](https://aka.ms/form-recognizer/preview/survey) to request access.
-
- :::column span="":::
- :::image type="icon" source="media/overview/icon-1098e.png" link="#us-tax-1098-e-form-preview":::</br>
- [**US Tax 1098-E form**](#us-tax-1098-e-form-preview) | Extract student loan interest details
- :::column-end:::
- :::column span="":::
- :::image type="icon" source="media/overview/icon-1098.png" link="#us-tax-1098-form-preview":::</br>
- [**US Tax 1098 form**](#us-tax-1098-form-preview) | Extract mortgage interest details.
- :::column-end:::
- :::column span="":::
- :::image type="icon" source="media/overview/icon-1098t.png" link="#us-tax-1098-t-form-preview":::</br>
- [**US Tax 1098-T form**](#us-tax-1098-t-form-preview) | Extract qualified tuition details.
- :::column-end:::
-
-## Models and development options
-
-> [!NOTE]
-> The following document understanding models and development options are supported by the Form Recognizer service v3.0.
-
-You can use Form Recognizer to automate document processing in applications and workflows, enhance data-driven strategies, and enrich document search capabilities. Use the links in the table to learn more about each model and browse development options.
-
-### Read
--
-|About| Description |Automation use cases | Development options |
-|-|--|-|--|
-|[**Read OCR model**](concept-read.md)|&#9679; Extract **text** from documents.</br>&#9679; [Data and field extraction](concept-read.md#data-extraction)| &#9679; Contract processing. </br>&#9679; Financial or medical report processing.|&#9679; [**Form Recognizer Studio**](https://formrecognizer.appliedai.azure.com/studio/read)</br>&#9679; [**REST API**](how-to-guides/use-prebuilt-read.md?pivots=programming-language-rest-api)</br>&#9679; [**C# SDK**](how-to-guides/use-prebuilt-read.md?pivots=programming-language-csharp)</br>&#9679; [**Python SDK**](how-to-guides/use-prebuilt-read.md?pivots=programming-language-python)</br>&#9679; [**Java SDK**](how-to-guides/use-prebuilt-read.md?pivots=programming-language-java)</br>&#9679; [**JavaScript**](how-to-guides/use-prebuilt-read.md?pivots=programming-language-javascript) |
-
-> [!div class="nextstepaction"]
-> [Return to model types](#document-analysis-models)
-
-### Layout
--
-| About | Description |Automation use cases | Development options |
-|-|--|-|--|
-|[**Layout analysis model**](concept-layout.md) |&#9679; Extract **text and layout** information from documents.</br>&#9679; [Data and field extraction](concept-layout.md#data-extraction)</br>&#9679; Layout API has been updated to a prebuilt model. |&#9679; Document indexing and retrieval by structure.</br>&#9679; Preprocessing prior to OCR analysis. |&#9679; [**Form Recognizer Studio**](https://formrecognizer.appliedai.azure.com/studio/layout)</br>&#9679; [**REST API**](quickstarts/get-started-v3-sdk-rest-api.md)</br>&#9679; [**C# SDK**](quickstarts/get-started-v3-sdk-rest-api.md#layout-model)</br>&#9679; [**Python SDK**](quickstarts/get-started-v3-sdk-rest-api.md#layout-model)</br>&#9679; [**Java SDK**](quickstarts/get-started-v3-sdk-rest-api.md#layout-model)</br>&#9679; [**JavaScript**](quickstarts/get-started-v3-sdk-rest-api.md#layout-model)|
-
-> [!div class="nextstepaction"]
-> [Return to model types](#document-analysis-models)
-
-### General document
--
-| About | Description |Automation use cases | Development options |
-|-|--|-|--|
-|[**General document model**](concept-general-document.md)|&#9679; Extract **text, layout, and key-value pairs** from documents.</br>&#9679; [Data and field extraction](concept-general-document.md#data-extraction)|&#9679; Key-value pair extraction.</br>&#9679; Form processing.</br>&#9679; Survey data collection and analysis.|&#9679; [**Form Recognizer Studio**](https://formrecognizer.appliedai.azure.com/studio/document)</br>&#9679; [**REST API**](quickstarts/get-started-v3-sdk-rest-api.md)</br>&#9679; [**C# SDK**](quickstarts/get-started-v3-sdk-rest-api.md#general-document-model)</br>&#9679; [**Python SDK**](quickstarts/get-started-v3-sdk-rest-api.md#general-document-model)</br>&#9679; [**Java SDK**](quickstarts/get-started-v3-sdk-rest-api.md#general-document-model)</br>&#9679; [**JavaScript**](quickstarts/get-started-v3-sdk-rest-api.md#general-document-model) |
-
-> [!div class="nextstepaction"]
-> [Return to model types](#document-analysis-models)
-
-### Invoice
--
-| About | Description |Automation use cases | Development options |
-|-|--|-|--|
-|[**Invoice model**](concept-invoice.md) |&#9679; Extract key information from invoices.</br>&#9679; [Data and field extraction](concept-invoice.md#field-extraction) |&#9679; Accounts payable processing.</br>&#9679; Automated tax recording and reporting. |&#9679; [**Form Recognizer Studio**](https://formrecognizer.appliedai.azure.com/studio/prebuilt?formType=invoice)</br>&#9679; [**REST API**](quickstarts/get-started-sdks-rest-api.md?view=form-recog-3.0.0&preserve-view=true&pivots=programming-language-rest-api#analyze-document-post-request)</br>&#9679; [**C# SDK**](quickstarts/get-started-v3-sdk-rest-api.md#prebuilt-model)</br>&#9679; [**Python SDK**](quickstarts/get-started-v3-sdk-rest-api.md#prebuilt-model)</br>&#9679; [**Java SDK**](quickstarts/get-started-v3-sdk-rest-api.md#prebuilt-model)</br>&#9679; [**JavaScript**](quickstarts/get-started-v3-sdk-rest-api.md#prebuilt-model)|
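
As a flavor of what the invoice model returns, here's a minimal Python sketch that prints a few extracted fields. It assumes the `azure-ai-formrecognizer` package (3.2 or later); the endpoint, key, and invoice URL are placeholders.

```python
# Sketch: extract a few key fields with the prebuilt invoice model
# (assumes azure-ai-formrecognizer >= 3.2; endpoint, key, and URL are placeholders).
from azure.core.credentials import AzureKeyCredential
from azure.ai.formrecognizer import DocumentAnalysisClient

client = DocumentAnalysisClient(
    endpoint="https://<your-resource>.cognitiveservices.azure.com/",
    credential=AzureKeyCredential("<your-key>"),
)

poller = client.begin_analyze_document_from_url("prebuilt-invoice", "<invoice-url>")
result = poller.result()

for invoice in result.documents:
    for name in ("VendorName", "CustomerName", "InvoiceTotal", "DueDate"):
        field = invoice.fields.get(name)
        if field:
            print(f"{name}: {field.content} (confidence {field.confidence:.2f})")
```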
-
-> [!div class="nextstepaction"]
-> [Return to model types](#prebuilt-models)
-
-### Receipt
--
-| About | Description |Automation use cases | Development options |
-|-|--|-|--|
-|[**Receipt model**](concept-receipt.md) |&#9679; Extract key information from receipts.</br>&#9679; [Data and field extraction](concept-receipt.md#field-extraction)</br>&#9679; Receipt model v3.0 supports processing of **single-page hotel receipts**.|&#9679; Expense management.</br>&#9679; Consumer behavior data analysis.</br>&#9679; Customer loyalty program.</br>&#9679; Merchandise return processing.</br>&#9679; Automated tax recording and reporting. |&#9679; [**Form Recognizer Studio**](https://formrecognizer.appliedai.azure.com/studio/prebuilt?formType=receipt)</br>&#9679; [**REST API**](quickstarts/get-started-sdks-rest-api.md?view=form-recog-3.0.0&preserve-view=true&pivots=programming-language-rest-api#analyze-document-post-request)</br>&#9679; [**C# SDK**](quickstarts/get-started-v3-sdk-rest-api.md#prebuilt-model)</br>&#9679; [**Python SDK**](quickstarts/get-started-v3-sdk-rest-api.md#prebuilt-model)</br>&#9679; [**Java SDK**](quickstarts/get-started-v3-sdk-rest-api.md#prebuilt-model)</br>&#9679; [**JavaScript**](quickstarts/get-started-v3-sdk-rest-api.md#prebuilt-model)|
-
-> [!div class="nextstepaction"]
-> [Return to model types](#prebuilt-models)
-
-### Identity (ID)
--
-| About | Description |Automation use cases | Development options |
-|-|--|-|--|
-|[**Identity document (ID) model**](concept-id-document.md) |&#9679; Extract key information from passports and ID cards.</br>&#9679; [Document types](concept-id-document.md#document-types)</br>&#9679; Extract endorsements, restrictions, and vehicle classifications from US driver's licenses. |&#9679; Know your customer (KYC) financial services guidelines compliance.</br>&#9679; Medical account management.</br>&#9679; Identity checkpoints and gateways.</br>&#9679; Hotel registration. |&#9679; [**Form Recognizer Studio**](https://formrecognizer.appliedai.azure.com/studio/prebuilt?formType=idDocument)</br>&#9679; [**REST API**](quickstarts/get-started-sdks-rest-api.md?view=form-recog-3.0.0&preserve-view=true&pivots=programming-language-rest-api#analyze-document-post-request)</br>&#9679; [**C# SDK**](quickstarts/get-started-v3-sdk-rest-api.md#prebuilt-model)</br>&#9679; [**Python SDK**](quickstarts/get-started-v3-sdk-rest-api.md#prebuilt-model)</br>&#9679; [**Java SDK**](quickstarts/get-started-v3-sdk-rest-api.md#prebuilt-model)</br>&#9679; [**JavaScript**](quickstarts/get-started-v3-sdk-rest-api.md#prebuilt-model)|
-
-> [!div class="nextstepaction"]
-> [Return to model types](#prebuilt-models)
-
-### Health insurance card
--
-| About | Description |Automation use cases | Development options |
-|-|--|-|--|
-| [**Health insurance card**](concept-insurance-card.md)|&#9679; Extract key information from US health insurance cards.</br>&#9679; [Data and field extraction](concept-insurance-card.md#field-extraction)|&#9679; Coverage and eligibility verification. </br>&#9679; Predictive modeling.</br>&#9679; Value-based analytics.|&#9679; [**Form Recognizer Studio**](https://formrecognizer.appliedai.azure.com/studio/prebuilt?formType=healthInsuranceCard.us)</br>&#9679; [**REST API**](quickstarts/get-started-sdks-rest-api.md?view=form-recog-3.0.0&preserve-view=true&pivots=programming-language-rest-api#analyze-document-post-request)</br>&#9679; [**C# SDK**](quickstarts/get-started-v3-sdk-rest-api.md#prebuilt-model)</br>&#9679; [**Python SDK**](quickstarts/get-started-v3-sdk-rest-api.md#prebuilt-model)</br>&#9679; [**Java SDK**](quickstarts/get-started-v3-sdk-rest-api.md#prebuilt-model)</br>&#9679; [**JavaScript**](quickstarts/get-started-v3-sdk-rest-api.md#prebuilt-model)
-
-> [!div class="nextstepaction"]
-> [Return to model types](#prebuilt-models)
-
-### W-2
--
-| About | Description |Automation use cases | Development options |
-|-|--|-|--|
-|[**W-2 Form**](concept-w2.md) |&#9679; Extract key information from IRS US W2 tax forms (year 2018-2021).</br>&#9679; [Data and field extraction](concept-w2.md#field-extraction)|&#9679; Automated tax document management.</br>&#9679; Mortgage loan application processing. |&#9679; [**Form Recognizer Studio**](https://formrecognizer.appliedai.azure.com/studio/prebuilt?formType=tax.us.w2)</br>&#9679; [**REST API**](quickstarts/get-started-sdks-rest-api.md?view=form-recog-3.0.0&preserve-view=true&pivots=programming-language-rest-api#analyze-document-post-request)</br>&#9679; [**C# SDK**](quickstarts/get-started-v3-sdk-rest-api.md#prebuilt-model)</br>&#9679; [**Python SDK**](quickstarts/get-started-v3-sdk-rest-api.md#prebuilt-model)</br>&#9679; [**Java SDK**](quickstarts/get-started-v3-sdk-rest-api.md#prebuilt-model)</br>&#9679; [**JavaScript**](quickstarts/get-started-v3-sdk-rest-api.md#prebuilt-model) |
-
-> [!div class="nextstepaction"]
-> [Return to model types](#prebuilt-models)
-
-### Business card
--
-| About | Description |Automation use cases | Development options |
-|-|--|-|--|
-|[**Business card model**](concept-business-card.md) |&#9679; Extract key information from business cards.</br>&#9679; [Data and field extraction](concept-business-card.md#field-extractions) |&#9679; Sales lead and marketing management. |&#9679; [**Form Recognizer Studio**](https://formrecognizer.appliedai.azure.com/studio/prebuilt?formType=businessCard)</br>&#9679; [**REST API**](quickstarts/get-started-sdks-rest-api.md?view=form-recog-3.0.0&preserve-view=true&pivots=programming-language-rest-api#analyze-document-post-request)</br>&#9679; [**C# SDK**](quickstarts/get-started-v3-sdk-rest-api.md#prebuilt-model)</br>&#9679; [**Python SDK**](quickstarts/get-started-v3-sdk-rest-api.md#prebuilt-model)</br>&#9679; [**Java SDK**](quickstarts/get-started-v3-sdk-rest-api.md#prebuilt-model)</br>&#9679; [**JavaScript**](quickstarts/get-started-v3-sdk-rest-api.md#prebuilt-model)|
-
-> [!div class="nextstepaction"]
-> [Return to model types](#prebuilt-models)
-
-### Custom model overview
--
-| About | Description |Automation use cases |Development options |
-|-|--|--|--|
-|[**Custom model**](concept-custom.md) | Extracts information from forms and documents into structured data based on a model created from a set of representative training document sets.|Extract distinct data from forms and documents specific to your business and use cases.|&#9679; [**Form Recognizer Studio**](https://formrecognizer.appliedai.azure.com/studio/custommodel/projects)</br>&#9679; [**REST API**](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-2023-02-28-preview/operations/BuildDocumentModel)</br>&#9679; [**C# SDK**](quickstarts/get-started-sdks-rest-api.md?view=form-recog-3.0.0&preserve-view=true)</br>&#9679; [**Java SDK**](quickstarts/get-started-sdks-rest-api.md?view=form-recog-3.0.0&preserve-view=true)</br>&#9679; [**JavaScript SDK**](quickstarts/get-started-sdks-rest-api.md?view=form-recog-3.0.0&preserve-view=true)</br>&#9679; [**Python SDK**](quickstarts/get-started-sdks-rest-api.md?view=form-recog-3.0.0&preserve-view=true)|
-
-> [!div class="nextstepaction"]
-> [Return to custom model types](#custom-models)
-
-#### Custom template
--
- > [!NOTE]
- > To train a custom template model, set the ```buildMode``` property to ```template```.
- > For more information, *see* [Training a template model](concept-custom-template.md#training-a-model)
-
-| About | Description |Automation use cases | Development options |
-|-|--|-|--|
-|[**Custom Template model**](concept-custom-template.md) | The custom template model extracts labeled values and fields from structured and semi-structured documents.</br> | Extract key data from highly structured documents with defined visual templates or common visual layouts, forms.| &#9679; [**Form Recognizer Studio**](https://formrecognizer.appliedai.azure.com/studio/custommodel/projects)</br>&#9679; [**REST API**](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-2023-02-28-preview/operations/BuildDocumentModel)</br>&#9679; [**C# SDK**](quickstarts/get-started-sdks-rest-api.md?view=form-recog-3.0.0&preserve-view=true)</br>&#9679; [**Python SDK**](quickstarts/get-started-sdks-rest-api.md?view=form-recog-3.0.0&preserve-view=true)</br>&#9679; [**Java SDK**](quickstarts/get-started-sdks-rest-api.md?view=form-recog-3.0.0&preserve-view=true)</br>&#9679; [**JavaScript SDK**](quickstarts/get-started-sdks-rest-api.md?view=form-recog-3.0.0&preserve-view=true)
-
-> [!div class="nextstepaction"]
-> [Return to model types](#custom-models)
-
-#### Custom neural
--
- > [!NOTE]
- > To train a custom neural model, set the ```buildMode``` property to ```neural```.
- > For more information, *see* [Training a neural model](concept-custom-neural.md#training-a-model)
-
-| About | Description |Automation use cases | Development options |
-|-|--|-|--|
- |[**Custom Neural model**](concept-custom-neural.md)| The custom neural model is used to extract labeled data from structured (surveys, questionnaires), semi-structured (invoices, purchase orders), and unstructured documents (contracts, letters).|Extract text data, checkboxes, and tabular fields from structured and unstructured documents.|[**Form Recognizer Studio**](https://formrecognizer.appliedai.azure.com/studio/custommodel/projects)</br>&#9679; [**REST API**](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-2023-02-28-preview/operations/BuildDocumentModel)</br>&#9679; [**C# SDK**](quickstarts/get-started-sdks-rest-api.md?view=form-recog-3.0.0&preserve-view=true)</br>&#9679; [**Java SDK**](quickstarts/get-started-sdks-rest-api.md?view=form-recog-3.0.0&preserve-view=true)</br>&#9679; [**JavaScript SDK**](quickstarts/get-started-sdks-rest-api.md?view=form-recog-3.0.0&preserve-view=true)</br>&#9679; [**Python SDK**](quickstarts/get-started-sdks-rest-api.md?view=form-recog-3.0.0&preserve-view=true)
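
To make the `buildMode` property from the notes above concrete, here's a hedged Python sketch that starts a custom model build over the REST API. It assumes the v3.0 GA api-version and a SAS URL to your labeled training container; swap `"template"` for `"neural"` to build a custom neural model.

```python
# Sketch: start a custom model build and set the buildMode property ("template" or "neural").
# Assumes the v3.0 REST API; the api-version and SAS URL are placeholders to adapt.
import requests

endpoint = "https://<your-resource>.cognitiveservices.azure.com"
key = "<your-key>"
container_sas_url = "<sas-url-to-your-labeled-training-data-container>"

body = {
    "modelId": "my-custom-model",
    "buildMode": "template",  # or "neural" for the custom neural model
    "azureBlobSource": {"containerUrl": container_sas_url},
}

response = requests.post(
    f"{endpoint}/formrecognizer/documentModels:build?api-version=2022-08-31",
    json=body,
    headers={"Ocp-Apim-Subscription-Key": key},
)
response.raise_for_status()

# The build runs asynchronously; poll the Operation-Location URL for completion status.
print("Poll for status at:", response.headers["Operation-Location"])
```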
-
-> [!div class="nextstepaction"]
-> [Return to model types](#custom-models)
-
-#### Custom composed
--
-| About | Description |Automation use cases | Development options |
-|-|--|-|--|
-|[**Composed custom models**](concept-composed-models.md)| A composed model is created by taking a collection of custom models and assigning them to a single model built from your form types.| Useful when you've trained several models and want to group them to analyze similar form types like purchase orders.|&#9679; [**Form Recognizer Studio**](https://formrecognizer.appliedai.azure.com/studio/custommodel/projects)</br>&#9679; [**REST API**](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-2023-02-28-preview/operations/ComposeDocumentModel)</br>&#9679; [**C# SDK**](/dotnet/api/azure.ai.formrecognizer.training.formtrainingclient.startcreatecomposedmodel)</br>&#9679; [**Java SDK**](/jav?view=form-recog-3.0.0&preserve-view=true)
-
-> [!div class="nextstepaction"]
-> [Return to model types](#custom-models)
-
-#### Custom classification model
--
-| About | Description |Automation use cases | Development options |
-|-|--|-|--|
-|[**Custom classification model**](concept-custom-classifier.md)| Custom classification models combine layout and language features to detect, identify, and classify documents within an input file.|&#9679; A loan application package containing an application form, payslip, and bank statement.</br>&#9679; A collection of scanned invoices. |&#9679; [Form Recognizer Studio](https://formrecognizer.appliedai.azure.com/studio/custommodel/projects)</br>&#9679; [REST API](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-2023-02-28-preview/operations/BuildDocumentClassifier)</br>
-
-> [!div class="nextstepaction"]
-> [Return to model types](#custom-models)
-
-### Contract model (preview)
--
-| About | Development options |
-|-|--|
-|Extract contract agreement and party details.|&#9679; [**Form Recognizer Studio**](https://formrecognizer.appliedai.azure.com/studio/prebuilt?formType=contract)</br>&#9679; [**REST API**](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-2023-02-28-preview/operations/AnalyzeDocument)
-
-> [!div class="nextstepaction"]
-> [Return to model types](#gated-preview-models)
-
-### US tax 1098 form (preview)
--
-| About | Development options |
-|-|--|
-|Extract mortgage interest information and details.|&#9679; [**Form Recognizer Studio**](https://formrecognizer.appliedai.azure.com/studio/prebuilt?formType=tax.us.1098)</br>&#9679; [**REST API**](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-2023-02-28-preview/operations/AnalyzeDocument)
-
-> [!div class="nextstepaction"]
-> [Return to model types](#gated-preview-models)
-
-### US tax 1098-E form (preview)
--
-| About | Development options |
-|-|--|
-|Extract student loan information and details.|&#9679; [**Form Recognizer Studio**](https://formrecognizer.appliedai.azure.com/studio/prebuilt?formType=tax.us.1098E)</br>&#9679; [**REST API**](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-2023-02-28-preview/operations/AnalyzeDocument)
-
-> [!div class="nextstepaction"]
-> [Return to model types](#gated-preview-models)
-
-### US tax 1098-T form (preview)
--
-| About | Development options |
-|-|--|
-|Extract tuition information and details.|&#9679; [**Form Recognizer Studio**](https://formrecognizer.appliedai.azure.com/studio/prebuilt?formType=tax.us.1098T)</br>&#9679; [**REST API**](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-2023-02-28-preview/operations/AnalyzeDocument)
-
-> [!div class="nextstepaction"]
-> [Return to model types](#gated-preview-models)
---
-Azure Form Recognizer is a cloud-based [Azure Applied AI Service](../../applied-ai-services/index.yml) for developers to build intelligent document processing solutions. Form Recognizer applies machine-learning-based optical character recognition (OCR) and document understanding technologies to extract text, tables, structure, and key-value pairs from documents. You can also label and train custom models to automate data extraction from structured, semi-structured, and unstructured documents. To learn more about each model, *see* the Concepts articles:
-
-| Model type | Model name |
-||--|
-|**Document analysis model**| &#9679; [**Layout analysis model**](concept-layout.md?view=form-recog-2.1.0&preserve-view=true) </br> |
-| **Prebuilt models** | &#9679; [**Invoice model**](concept-invoice.md?view=form-recog-2.1.0&preserve-view=true)</br>&#9679; [**Receipt model**](concept-receipt.md?view=form-recog-2.1.0&preserve-view=true) </br>&#9679; [**Identity document (ID) model**](concept-id-document.md?view=form-recog-2.1.0&preserve-view=true) </br>&#9679; [**Business card model**](concept-business-card.md?view=form-recog-2.1.0&preserve-view=true) </br>
-| **Custom models** | &#9679; [**Custom model**](concept-custom.md) </br>&#9679; [**Composed model**](concept-model-overview.md?view=form-recog-2.1.0&preserve-view=true)|
----
-## Form Recognizer models and development options
-
- >[!TIP]
- >
- > * For an enhanced experience and advanced model quality, try the [Form Recognizer v3.0 Studio](https://formrecognizer.appliedai.azure.com/studio).
- > * The v3.0 Studio supports any model trained with v2.1 labeled data.
- > * You can refer to the API migration guide for detailed information about migrating from v2.1 to v3.0.
-
-> [!NOTE]
-> The following models and development options are supported by the Form Recognizer service v2.1.
-
-Use the links in the table to learn more about each model and browse the API references:
-
-| Model| Description | Development options |
-|-|--|-|
-|[**Layout analysis**](concept-layout.md?view=form-recog-2.1.0&preserve-view=true) | Extraction and analysis of text, selection marks, tables, and bounding box coordinates, from forms and documents. | &#9679; [**Form Recognizer labeling tool**](quickstarts/try-sample-label-tool.md#analyze-layout)</br>&#9679; [**REST API**](quickstarts/get-started-v2-1-sdk-rest-api.md#try-it-layout-model)</br>&#9679; [**Client-library SDK**](quickstarts/get-started-sdks-rest-api.md)</br>&#9679; [**Form Recognizer Docker container**](containers/form-recognizer-container-install-run.md?branch=main&tabs=layout#run-the-container-with-the-docker-compose-up-command)|
-|[**Custom model**](concept-custom.md?view=form-recog-2.1.0&preserve-view=true) | Extraction and analysis of data from forms and documents specific to distinct business data and use cases.| &#9679; [**Form Recognizer labeling tool**](quickstarts/try-sample-label-tool.md#train-a-custom-form-model)</br>&#9679; [**REST API**](quickstarts/get-started-sdks-rest-api.md)</br>&#9679; [**Sample Labeling Tool**](concept-custom.md?view=form-recog-2.1.0&preserve-view=true#build-a-custom-model)</br>&#9679; [**Form Recognizer Docker container**](containers/form-recognizer-container-install-run.md?tabs=custom#run-the-container-with-the-docker-compose-up-command)|
-|[**Invoice model**](concept-invoice.md?view=form-recog-2.1.0&preserve-view=true) | Automated data processing and extraction of key information from sales invoices. | &#9679; [**Form Recognizer labeling tool**](quickstarts/try-sample-label-tool.md#analyze-using-a-prebuilt-model)</br>&#9679; [**REST API**](quickstarts/get-started-v2-1-sdk-rest-api.md#try-it-prebuilt-model)</br>&#9679; [**Client-library SDK**](quickstarts/get-started-sdks-rest-api.md#try-it-prebuilt-model)</br>&#9679; [**Form Recognizer Docker container**](containers/form-recognizer-container-install-run.md?tabs=invoice#run-the-container-with-the-docker-compose-up-command)|
-|[**Receipt model**](concept-receipt.md?view=form-recog-2.1.0&preserve-view=true) | Automated data processing and extraction of key information from sales receipts.| &#9679; [**Form Recognizer labeling tool**](quickstarts/try-sample-label-tool.md#analyze-using-a-prebuilt-model)</br>&#9679; [**REST API**](quickstarts/get-started-v2-1-sdk-rest-api.md#try-it-prebuilt-model)</br>&#9679; [**Client-library SDK**](quickstarts/get-started-sdks-rest-api.md)</br>&#9679; [**Form Recognizer Docker container**](containers/form-recognizer-container-install-run.md?tabs=receipt#run-the-container-with-the-docker-compose-up-command)|
-|[**Identity document (ID) model**](concept-id-document.md?view=form-recog-2.1.0&preserve-view=true) | Automated data processing and extraction of key information from US driver's licenses and international passports.| &#9679; [**Form Recognizer labeling tool**](quickstarts/try-sample-label-tool.md#analyze-using-a-prebuilt-model)</br>&#9679; [**REST API**](quickstarts/get-started-v2-1-sdk-rest-api.md#try-it-prebuilt-model)</br>&#9679; [**Client-library SDK**](quickstarts/get-started-sdks-rest-api.md)</br>&#9679; [**Form Recognizer Docker container**](containers/form-recognizer-container-install-run.md?tabs=id-document#run-the-container-with-the-docker-compose-up-command)|
-|[**Business card model**](concept-business-card.md?view=form-recog-2.1.0&preserve-view=true) | Automated data processing and extraction of key information from business cards.| &#9679; [**Form Recognizer labeling tool**](quickstarts/try-sample-label-tool.md#analyze-using-a-prebuilt-model)</br>&#9679; [**REST API**](quickstarts/get-started-v2-1-sdk-rest-api.md#try-it-prebuilt-model)</br>&#9679; [**Client-library SDK**](quickstarts/get-started-sdks-rest-api.md)</br>&#9679; [**Form Recognizer Docker container**](containers/form-recognizer-container-install-run.md?tabs=business-card#run-the-container-with-the-docker-compose-up-command)|
--
-## Data privacy and security
-
- As with all AI services, developers using the Form Recognizer service should be aware of Microsoft policies on customer data. See our [Data, privacy, and security for Form Recognizer](/legal/cognitive-services/form-recognizer/fr-data-privacy-security) page.
-
-## Next steps
--
-* [Choose a Form Recognizer model](choose-model-feature.md)
-
-* Try processing your own forms and documents with the [Form Recognizer Studio](https://formrecognizer.appliedai.azure.com/studio)
-
-* Complete a [Form Recognizer quickstart](quickstarts/get-started-sdks-rest-api.md?view=form-recog-3.0.0&preserve-view=true) and get started creating a document processing app in the development language of your choice.
---
-* Try processing your own forms and documents with the [Form Recognizer Sample Labeling tool](https://fott-2-1.azurewebsites.net/)
-
-* Complete a [Form Recognizer quickstart](quickstarts/get-started-sdks-rest-api.md?view=form-recog-2.1.0&preserve-view=true) and get started creating a document processing app in the development language of your choice.
-
applied-ai-services Get Started Sdks Rest Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/quickstarts/get-started-sdks-rest-api.md
- Title: "Quickstart: Form Recognizer SDKs | REST API "-
-description: Use a Form Recognizer SDK or the REST API to create a forms processing app that extracts key data and structure elements from your documents.
------ Previously updated : 06/02/2023-
-zone_pivot_groups: programming-languages-set-formre
--
-# Get started with Form Recognizer
---
-Get started with the latest version of Azure Form Recognizer. Azure Form Recognizer is a cloud-based Azure Applied AI Service that uses machine learning to extract key-value pairs, text, tables and key data from your documents. You can easily integrate Form Recognizer models into your workflows and applications by using an SDK in the programming language of your choice or calling the REST API. For this quickstart, we recommend that you use the free service while you're learning the technology. Remember that the number of free pages is limited to 500 per month.
-
-To learn more about Form Recognizer features and development options, visit our [Overview](../overview.md) page.
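
As a taste of what the SDK quickstarts walk through, the following minimal Python sketch analyzes a document with the prebuilt layout model. It assumes the `azure-ai-formrecognizer` package (3.2 or later); the endpoint, key, and document URL are placeholders, and the full samples live in the language-specific quickstarts.

```python
# Sketch: analyze a document with the prebuilt layout model and print table dimensions.
# Endpoint, key, and document URL are placeholders; see the SDK quickstarts for full samples.
from azure.core.credentials import AzureKeyCredential
from azure.ai.formrecognizer import DocumentAnalysisClient

client = DocumentAnalysisClient(
    endpoint="https://<your-resource>.cognitiveservices.azure.com/",
    credential=AzureKeyCredential("<your-key>"),
)

poller = client.begin_analyze_document_from_url("prebuilt-layout", "<document-url>")
result = poller.result()

print(f"Pages analyzed: {len(result.pages)}")
for index, table in enumerate(result.tables):
    print(f"Table {index}: {table.row_count} rows x {table.column_count} columns")
```
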
-----------------
-That's it, congratulations!
-
-In this quickstart, you used a Form Recognizer model to analyze various forms and documents. Next, explore the Form Recognizer Studio and reference documentation to learn about the Form Recognizer API in depth.
-
-## Next steps
-
->[!div class="nextstepaction"]
-> [**Try the Form Recognizer Studio**](https://formrecognizer.appliedai.azure.com/studio)
-
-> [!div class="nextstepaction"]
-> [**Explore our how-to documentation and take a deeper dive into Form Recognizer models**](../how-to-guides/use-sdk-rest-api.md?view=form-recog-3.0.0&preserve-view=true)
---
-Get started with Azure Form Recognizer using the programming language of your choice or the REST API. Azure Form Recognizer is a cloud-based Azure Applied AI Service that uses machine learning to extract key-value pairs, text, and tables from your documents. You can easily call Form Recognizer models by integrating our client library SDKs into your workflows and applications. We recommend that you use the free service when you're learning the technology. Remember that the number of free pages is limited to 500 per month.
-
-To learn more about Form Recognizer features and development options, visit our [Overview](../overview.md) page.
------------------
-That's it, congratulations! In this quickstart, you used Form Recognizer models to analyze various forms in different ways.
-
-## Next steps
-
-* For an enhanced experience and advanced model quality, try the [Form Recognizer v3.0 Studio](https://formrecognizer.appliedai.azure.com/studio).
-
-* The v3.0 Studio supports any model trained with v2.1 labeled data.
-
-* You can refer to the API migration guide for detailed information about migrating from v2.1 to v3.0.
-* *See* our [**REST API**](get-started-sdks-rest-api.md?view=form-recog-3.0.0&preserve-view=true) or [**C#**](get-started-sdks-rest-api.md?view=form-recog-3.0.0&preserve-view=true), [**Java**](get-started-sdks-rest-api.md?view=form-recog-3.0.0&preserve-view=true), [**JavaScript**](get-started-sdks-rest-api.md?view=form-recog-3.0.0&preserve-view=true), or [**Python**](get-started-sdks-rest-api.md?view=form-recog-3.0.0&preserve-view=true) SDK quickstarts to get started with the v3.0 version.
-
applied-ai-services Try Form Recognizer Studio https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/quickstarts/try-form-recognizer-studio.md
- Title: "Quickstart: Form Recognizer Studio | v3.0"-
-description: Form and document processing, data extraction, and analysis using Form Recognizer Studio
----- Previously updated : 03/03/2023-
-monikerRange: 'form-recog-3.0.0'
--
-<!-- markdownlint-disable MD001 -->
-
-# Get started: Form Recognizer Studio
--
-[Form Recognizer Studio](https://formrecognizer.appliedai.azure.com/) is an online tool for visually exploring, understanding, and integrating features from the Form Recognizer service in your applications. You can get started by exploring the pretrained models with sample documents or your own documents. You can also create projects to build custom template models and reference the models in your applications using the [Python SDK](get-started-sdks-rest-api.md?view=form-recog-3.0.0&preserve-view=true) and other quickstarts.
-
-> [!VIDEO https://www.microsoft.com/en-us/videoplayer/embed/RE56n49]
-
-## Prerequisites for new users
-
-* An active [**Azure account**](https://azure.microsoft.com/free/cognitive-services/). If you don't have one, you can [**create a free account**](https://azure.microsoft.com/free/).
-* A [**Form Recognizer**](https://portal.azure.com/#create/Microsoft.CognitiveServicesFormRecognizer) or [**Cognitive Services multi-service**](https://portal.azure.com/#create/Microsoft.CognitiveServicesAllInOne) resource.
-
-> [!TIP]
-> Create a Cognitive Services resource if you plan to access multiple cognitive services under a single endpoint/key. For Form Recognizer access only, create a Form Recognizer resource. Note that you need a single-service resource if you intend to use [Azure Active Directory authentication](../../../active-directory/authentication/overview-authentication.md); a minimal authentication sketch follows this tip.
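
If you choose Azure AD authentication with a single-service Form Recognizer resource, a minimal sketch looks like the following. It assumes the `azure-identity` and `azure-ai-formrecognizer` packages, a resource configured with a custom subdomain endpoint, and that your signed-in identity holds an appropriate Cognitive Services role (for example, Cognitive Services User).

```python
# Sketch: authenticate to Form Recognizer with Azure AD instead of a key.
# Assumes a single-service resource with a custom subdomain endpoint and an identity
# that has an appropriate Cognitive Services role assigned.
from azure.identity import DefaultAzureCredential
from azure.ai.formrecognizer import DocumentAnalysisClient

client = DocumentAnalysisClient(
    endpoint="https://<your-custom-subdomain>.cognitiveservices.azure.com/",
    credential=DefaultAzureCredential(),
)

poller = client.begin_analyze_document_from_url("prebuilt-read", "<document-url>")
print(poller.result().content[:200])  # print the first part of the extracted text
```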
-
-## Models
-
-Prebuilt models help you add Form Recognizer features to your apps without having to build, train, and publish your own models. You can choose from several prebuilt models, each of which has its own set of supported data fields. The choice of model to use for the analyze operation depends on the type of document to be analyzed. Form Recognizer currently supports the following prebuilt models:
-
-#### Document analysis
-
-* [**General document**](https://formrecognizer.appliedai.azure.com/studio/prebuilt?formType=document): extract text, tables, structure, and key-value pairs.
-* [**Layout**](https://formrecognizer.appliedai.azure.com/studio/layout): extract text, tables, selection marks, and structure information from documents (PDF, TIFF) and images (JPG, PNG, BMP).
-* [**Read**](https://formrecognizer.appliedai.azure.com/studio/read): extract text lines, words, their locations, detected languages, and handwritten style if detected from documents (PDF, TIFF) and images (JPG, PNG, BMP).
-
-#### Prebuilt
-
-* [**Invoice**](https://formrecognizer.appliedai.azure.com/studio/prebuilt?formType=invoice): extract text, selection marks, tables, key-value pairs, and key information from invoices.
-* [**Receipt**](https://formrecognizer.appliedai.azure.com/studio/prebuilt?formType=receipt): extract text and key information from receipts.
-* [**Health insurance card**](https://formrecognizer.appliedai.azure.com/studio): extract insurer, member, prescription, group number and other key information from US health insurance cards.
-* [**W-2**](https://formrecognizer.appliedai.azure.com/studio/prebuilt?formType=tax.us.w2): extract text and key information from W-2 tax forms.
-* [**ID document**](https://formrecognizer.appliedai.azure.com/studio/prebuilt?formType=idDocument): extract text and key information from driver licenses and international passports.
-* [**Business card**](https://formrecognizer.appliedai.azure.com/studio/prebuilt?formType=businessCard): extract text and key information from business cards.
-
-#### Custom
-
-* [**Custom extraction models**](https://formrecognizer.appliedai.azure.com/studio): extract information from forms and documents with custom extraction models. Quickly train a model by labeling as few as five sample documents.
-* [**Custom classification model**](https://formrecognizer.appliedai.azure.com/studio): train a custom classifier to distinguish between the different document types within your applications. Quickly train a model with as few as two classes and five samples per class.
-
-#### Gated preview models
-
-> [!NOTE]
-> To request access for gated preview models in Form Recognizer Studio, complete and submit the [**Form Recognizer private preview request form**](https://aka.ms/form-recognizer/preview/survey).
-
-* [**General document with query fields**](https://formrecognizer.appliedai.azure.com/studio): extract labels and values, such as names, dates, and amounts, from documents.
-* [**Contract**](https://formrecognizer.appliedai.azure.com/studio): extract the title and signatory party information (including names, references, and addresses) from contracts.
-* [**Vaccination card**](https://formrecognizer.appliedai.azure.com/studio): extract card holder name, health provider, and vaccination records from US COVID-19 vaccination cards.
-* [**US 1098 tax form**](https://formrecognizer.appliedai.azure.com/studio): extract mortgage interest information from US 1098 tax forms.
-* [**US 1098-E tax form**](https://formrecognizer.appliedai.azure.com/studio): extract student loan information from US 1098-E tax forms.
-* [**US 1098-T tax form**](https://formrecognizer.appliedai.azure.com/studio): extract tuition information from US 1098-T forms.
-
-After you've completed the prerequisites, navigate to [Form Recognizer Studio General Documents](https://formrecognizer.appliedai.azure.com/studio/document).
-
-In the following example, we use the General Documents feature. The steps to use other pretrained features like [W2 tax form](https://formrecognizer.appliedai.azure.com/studio/prebuilt?formType=tax.us.w2), [Read](https://formrecognizer.appliedai.azure.com/studio/read), [Layout](https://formrecognizer.appliedai.azure.com/studio/layout), [Invoice](https://formrecognizer.appliedai.azure.com/studio/prebuilt?formType=invoice), [Receipt](https://formrecognizer.appliedai.azure.com/studio/prebuilt?formType=receipt), [Business card](https://formrecognizer.appliedai.azure.com/studio/prebuilt?formType=businessCard), and [ID documents](https://formrecognizer.appliedai.azure.com/studio/prebuilt?formType=idDocument) models are similar.
-
- :::image border="true" type="content" source="../media/quickstarts/form-recognizer-general-document-demo-preview3.gif" alt-text="Selecting the General Document API to analyze a document in the Form Recognizer Studio.":::
-
-1. Select a Form Recognizer service feature from the Studio home page.
-
-1. This step is a one-time process unless you've already selected the service resource from prior use. Select your Azure subscription, resource group, and resource. (You can change the resources anytime in "Settings" in the top menu.) Review and confirm your selections.
-
-1. Select the Analyze button to run analysis on the sample document, or try your own document by using the Add command.
-
-1. Use the controls at the bottom of the screen to zoom in and out and rotate the document view.
-
-1. Observe the highlighted extracted content in the document view. Hover your mouse over the keys and values to see details.
-
-1. In the output section's Result tab, browse the JSON output to understand the service response format.
-
-1. In the Code tab, browse the sample code for integration. Copy and download to get started.
-
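-The sample code in the Studio's Code tab targets the v3.0 SDKs. As a rough illustration only, here's a minimal Python sketch (assuming the `azure-ai-formrecognizer` 3.2.0 package and placeholder endpoint, key, and document URL) that calls the same prebuilt general document model from your own code:
-
-```python
-from azure.ai.formrecognizer import DocumentAnalysisClient
-from azure.core.credentials import AzureKeyCredential
-
-# Placeholder values - replace with your resource's endpoint and key.
-endpoint = "<your-endpoint>"
-key = "<your-key>"
-document_url = "<publicly-accessible-document-url>"
-
-client = DocumentAnalysisClient(endpoint, AzureKeyCredential(key))
-
-# "prebuilt-document" is the general document model used on the Studio page.
-poller = client.begin_analyze_document_from_url("prebuilt-document", document_url)
-result = poller.result()
-
-# Print the extracted key-value pairs, mirroring what the Studio highlights.
-for pair in result.key_value_pairs:
-    key_text = pair.key.content if pair.key else ""
-    value_text = pair.value.content if pair.value else ""
-    print(f"{key_text}: {value_text}")
-```
-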
-## Added prerequisites for custom projects
-
-In addition to the Azure account and a Form Recognizer or Cognitive Services resource, you need:
-
-### Azure Blob Storage container
-
-A **standard performance** [**Azure Blob Storage account**](https://portal.azure.com/#create/Microsoft.StorageAccount-ARM). You create containers to store and organize your training documents within your storage account. If you don't know how to create an Azure storage account with a container, follow these quickstarts:
-
-* [**Create a storage account**](../../../storage/common/storage-account-create.md). When creating your storage account, make sure to select **Standard** performance in the **Instance details → Performance** field.
-* [**Create a container**](../../../storage/blobs/storage-quickstart-blobs-portal.md#create-a-container). When creating your container, set the **Public access level** field to **Container** (anonymous read access for containers and blobs) in the **New Container** window.
-
-### Configure CORS
-
-[CORS (Cross Origin Resource Sharing)](/rest/api/storageservices/cross-origin-resource-sharing--cors--support-for-the-azure-storage-services) needs to be configured on your Azure storage account for it to be accessible from the Form Recognizer Studio. To configure CORS in the Azure portal, you need access to the CORS tab of your storage account.
-
-1. Select the CORS tab for the storage account.
-
- :::image type="content" source="../media/quickstarts/cors-setting-menu.png" alt-text="Screenshot of the CORS setting menu in the Azure portal.":::
-
-1. Start by creating a new CORS entry in the Blob service.
-
-1. Set the **Allowed origins** to `https://formrecognizer.appliedai.azure.com`.
-
- :::image type="content" source="../media/quickstarts/cors-updated-image.png" alt-text="Screenshot that shows CORS configuration for a storage account.":::
-
- > [!TIP]
- > You can use the wildcard character '*' rather than a specified domain to allow all origin domains to make requests via CORS.
-
-1. Select all eight of the available options for **Allowed methods**.
-
-1. Approve all **Allowed headers** and **Exposed headers** by entering an * in each field.
-
-1. Set the **Max Age** to 120 seconds or any acceptable value.
-
-1. Select the save button at the top of the page to save the changes.
-
-CORS should now be configured to use the storage account from Form Recognizer Studio.
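-
-If you prefer to script this configuration rather than use the portal, the following is only a hedged sketch using the `azure-storage-blob` Python package (the account URL and key are placeholders, and the values mirror the portal steps above):
-
-```python
-from azure.storage.blob import BlobServiceClient, CorsRule
-
-# Placeholder values - replace with your storage account's URL and key.
-account_url = "https://<storage-account>.blob.core.windows.net"
-account_key = "<storage-account-key>"
-
-service = BlobServiceClient(account_url, credential=account_key)
-
-# Allow the Form Recognizer Studio origin to call the Blob service, matching the portal steps.
-cors_rule = CorsRule(
-    allowed_origins=["https://formrecognizer.appliedai.azure.com"],
-    allowed_methods=["DELETE", "GET", "HEAD", "MERGE", "OPTIONS", "PATCH", "POST", "PUT"],
-    allowed_headers=["*"],
-    exposed_headers=["*"],
-    max_age_in_seconds=120,
-)
-
-service.set_service_properties(cors=[cors_rule])
-```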
-
-### Sample documents set
-
-1. Go to the [Azure portal](https://portal.azure.com/#home) and navigate as follows: **Your storage account** → **Data storage** → **Containers**
-
- :::image border="true" type="content" source="../media/sas-tokens/data-storage-menu.png" alt-text="Screenshot: Data storage menu in the Azure portal.":::
-
-1. Select a **container** from the list.
-
-1. Select **Upload** from the menu at the top of the page.
-
- :::image border="true" type="content" source="../media/sas-tokens/container-upload-button.png" alt-text="Screenshot: container upload button in the Azure portal.":::
-
-1. The **Upload blob** window appears.
-
-1. Select your file(s) to upload.
-
- :::image border="true" type="content" source="../media/sas-tokens/upload-blob-window.png" alt-text="Screenshot: upload blob window in the Azure portal.":::
-
-> [!NOTE]
-> By default, the Studio will use form documents that are located at the root of your container. However, you can use data organized in folders by specifying the folder path in the Custom form project creation steps. *See* [**Organize your data in subfolders**](../build-training-data-set.md#organize-your-data-in-subfolders-optional)
-
-## Custom models
-
-To create custom models, you start with configuring your project:
-
-1. From the Studio home, select the Custom model card to open the Custom models page.
-
-1. Use the "Create a project" command to start the new project configuration wizard.
-
-1. Enter project details, select the Azure subscription and resource, and the Azure Blob storage container that contains your data.
-
-1. Review and submit your settings to create the project.
-
-1. From the labeling view, define the labels and their types that you're interested in extracting.
-
-1. Select the text in the document and select the label from the drop-down list or the labels pane.
-
-1. Label four more documents to get at least five documents labeled.
-
-1. Select the Train command, enter a model name, and choose whether to train a custom template (form) model or a custom neural (document) model.
-
-1. Once the model is ready, use the Test command to validate it with your test documents and observe the results.
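-
-Once a custom model has been built in the Studio, you can reference it by its model ID from your applications. The following is only a hedged Python sketch (placeholder endpoint, key, model ID, and document URL; `azure-ai-formrecognizer` 3.2.0 assumed):
-
-```python
-from azure.ai.formrecognizer import DocumentAnalysisClient
-from azure.core.credentials import AzureKeyCredential
-
-# Placeholder values - replace with your own.
-endpoint = "<your-endpoint>"
-key = "<your-key>"
-model_id = "<your-custom-model-id>"   # the model ID assigned when you trained in the Studio
-document_url = "<test-document-url>"
-
-client = DocumentAnalysisClient(endpoint, AzureKeyCredential(key))
-poller = client.begin_analyze_document_from_url(model_id, document_url)
-result = poller.result()
-
-# Each analyzed document exposes the labeled fields you defined during training.
-for document in result.documents:
-    for name, field in document.fields.items():
-        print(f"{name}: {field.value} (confidence {field.confidence})")
-```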
--
-### Labeling as tables
-
-> [!NOTE]
-> * With the release of API versions 2022-06-30-preview and later, custom template models will add support for [cross page tabular fields (tables)](../concept-custom-template.md#tabular-fields).
-> * With the release of API versions 2022-06-30-preview and later, custom neural models will support [tabular fields (tables)](../concept-custom-template.md#tabular-fields) and models trained with API version 2022-08-31, or later will accept tabular field labels.
-
-1. Use the Delete command to delete models that aren't required.
-
-1. Download model details for offline viewing.
-
-1. Select multiple models and compose them into a new model to be used in your applications.
-
-For custom form models, you may need to extract data collections from your documents. Data collections may appear in a couple of formats. Using tables as the visual pattern:
-
-* Dynamic or variable count of values (rows) for a given set of fields (columns)
-
-* Specific collection of values for a given set of fields (columns and/or rows)
-
-**Label as dynamic table**
-
-Use dynamic tables to extract a variable count of values (rows) for a given set of fields (columns):
-
-1. Add a new "Table" type label, select "Dynamic table" type, and name your label.
-
-1. Add the number of columns (fields) and rows (for data) that you need.
-
-1. Select the text in your page and then choose the cell to assign to the text. Repeat for all rows and columns in all pages in all documents.
--
-**Label as fixed table**
-
-Use fixed tables to extract a specific collection of values for a given set of fields (columns and/or rows):
-
-1. Create a new "Table" type label, select "Fixed table" type, and name it.
-
-1. Add the number of columns and rows that you need corresponding to the two sets of fields.
-
-1. Select the text in your page and then choose the cell to assign to the text. Repeat for other documents.
--
-### Signature detection
-
->[!NOTE]
-> Signature fields are currently only supported for custom template models. When training a custom neural model, labeled signature fields are ignored.
-
-To label for signature detection (custom template models only):
-
-1. In the labeling view, create a new "Signature" type label and name it.
-
-1. Use the Region command to create a rectangular region at the expected location of the signature.
-
-1. Select the drawn region and choose the Signature type label to assign it to your drawn region. Repeat for other documents.
--
-## Next steps
-
-* Follow our [**Form Recognizer v3.0 migration guide**](../v3-migration-guide.md) to learn the differences from the previous version of the REST API.
-* Explore our [**v3.0 SDK quickstarts**](get-started-sdks-rest-api.md?view=form-recog-3.0.0&preserve-view=true) to try the v3.0 features in your applications using the new SDKs.
-* Refer to our [**v3.0 REST API quickstarts**](get-started-sdks-rest-api.md?view=form-recog-3.0.0&preserve-view=true) to try the v3.0 features using the new REST API.
-
-[Get started with the Form Recognizer Studio](https://formrecognizer.appliedai.azure.com).
applied-ai-services Try Sample Label Tool https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/quickstarts/try-sample-label-tool.md
- Title: "Quickstart: Label forms, train a model, and analyze forms using the Sample Labeling tool - Form Recognizer"-
-description: In this quickstart, you'll learn to use the Form Recognizer Sample Labeling tool to manually label form documents. Then you'll train a custom document processing model with the labeled documents and use the model to extract key/value pairs.
----- Previously updated : 10/10/2022-
-monikerRange: 'form-recog-2.1.0'
-
-<!-- markdownlint-disable MD001 -->
-<!-- markdownlint-disable MD024 -->
-<!-- markdownlint-disable MD033 -->
-<!-- markdownlint-disable MD034 -->
-<!-- markdownlint-disable MD029 -->
-# Get started with the Form Recognizer Sample Labeling tool
-
-**This article applies to:** ![Form Recognizer v2.1 checkmark](../media/yes-icon.png) **Form Recognizer v2.1**.
-
->[!TIP]
->
-> * For an enhanced experience and advanced model quality, try the [Form Recognizer v3.0 Studio](https://formrecognizer.appliedai.azure.com/studio).
-> * The v3.0 Studio supports any model trained with v2.1 labeled data.
-> * You can refer to the API migration guide for detailed information about migrating from v2.1 to v3.0.
-> * *See* our [**REST API**](get-started-sdks-rest-api.md?view=form-recog-3.0.0&preserve-view=true) or [**C#**](get-started-sdks-rest-api.md?view=form-recog-3.0.0&preserve-view=true), [**Java**](get-started-sdks-rest-api.md?view=form-recog-3.0.0&preserve-view=true), [**JavaScript**](get-started-sdks-rest-api.md?view=form-recog-3.0.0&preserve-view=true), or [Python](get-started-sdks-rest-api.md?view=form-recog-3.0.0&preserve-view=true) SDK quickstarts to get started with the v3.0 version.
-
-The Form Recognizer Sample Labeling tool is an open source tool that enables you to test the latest features of Azure Form Recognizer and Optical Character Recognition (OCR):
-
-* [Analyze documents with the Layout API](#analyze-layout). Try the Layout API to extract text, tables, selection marks, and structure from documents.
-
-* [Analyze documents using a prebuilt model](#analyze-using-a-prebuilt-model). Start with a prebuilt model to extract data from invoices, receipts, identity documents, or business cards.
-
-* [Train and analyze a custom Form](#train-a-custom-form-model). Use a custom model to extract data from documents specific to distinct business data and use cases.
-
-## Prerequisites
-
-You'll need the following to get started:
-
-* An Azure subscription. You can [create one for free](https://azure.microsoft.com/free/cognitive-services/).
-
-* A Cognitive Services or Form Recognizer resource. Once you have your Azure subscription, create a [single-service](https://portal.azure.com/#create/Microsoft.CognitiveServicesFormRecognizer), or [multi-service](https://portal.azure.com/#create/Microsoft.CognitiveServicesAllInOne) Form Recognizer resource in the Azure portal to get your key and endpoint. You can use the free pricing tier (`F0`) to try the service, and upgrade later to a paid tier for production.
-
- > [!TIP]
- > Create a Cognitive Services resource if you plan to access multiple cognitive services under a single endpoint/key. For Form Recognizer access only, create a Form Recognizer resource. Please note that you'll need a single-service resource if you intend to use [Azure Active Directory authentication](../../../active-directory/authentication/overview-authentication.md).
-
-## Create a Form Recognizer resource
--
- :::image type="content" source="../media/containers/keys-and-endpoint.png" alt-text="Screenshot: keys and endpoint location in the Azure portal.":::
-
-## Analyze using a Prebuilt model
-
-Form Recognizer offers several prebuilt models to choose from. Each model has its own set of supported fields. The model to use for the analyze operation depends on the type of document to be analyzed. Here are the prebuilt models currently supported by the Form Recognizer service:
-
-* [**Invoice**](../concept-invoice.md): extracts text, selection marks, tables, key-value pairs, and key information from invoices.
-* [**Receipt**](../concept-receipt.md): extracts text and key information from receipts.
-* [**ID document**](../concept-id-document.md): extracts text and key information from driver licenses and international passports.
-* [**Business-card**](../concept-business-card.md): extracts text and key information from business cards.
-
-1. Navigate to the [Form Recognizer Sample Tool](https://fott-2-1.azurewebsites.net/).
-
-1. On the sample tool home page, select the **Use prebuilt model to get data** tile.
-
- :::image type="content" source="../media/label-tool/prebuilt-1.jpg" alt-text="Screenshot of the layout model analyze results operation.":::
-
-1. Select the **Form Type** to analyze from the dropdown menu.
-
-1. Choose a URL for the file you would like to analyze from the following options:
-
- * [**Sample invoice document**](https://raw.githubusercontent.com/Azure-Samples/cognitive-services-REST-api-samples/master/curl/form-recognizer/invoice_sample.jpg).
- * [**Sample ID document**](https://raw.githubusercontent.com/Azure-Samples/cognitive-services-REST-api-samples/master/curl/form-recognizer/DriverLicense.png).
- * [**Sample receipt image**](https://raw.githubusercontent.com/Azure-Samples/cognitive-services-REST-api-samples/master/curl/form-recognizer/contoso-allinone.jpg).
- * [**Sample business card image**](https://raw.githubusercontent.com/Azure/azure-sdk-for-python/master/sdk/formrecognizer/azure-ai-formrecognizer/samples/sample_forms/business_cards/business-card-english.jpg).
-
-1. In the **Source** field, select **URL** from the dropdown menu, paste the selected URL, and select the **Fetch** button.
-
- :::image type="content" source="../media/label-tool/fott-select-url.png" alt-text="Screenshot of source location dropdown menu.":::
-
-1. In the **Form recognizer service endpoint** field, paste the endpoint that you obtained with your Form Recognizer subscription.
-
-1. In the **key** field, paste the key you obtained from your Form Recognizer resource.
-
- :::image type="content" source="../media/fott-select-form-type.png" alt-text="Screenshot of the 'select-form-type' dropdown menu.":::
-
-1. Select **Run analysis**. The Form Recognizer Sample Labeling tool will call the Analyze Prebuilt API and analyze the document.
-
-1. View the results: see the extracted key-value pairs, line items, highlighted extracted text, and detected tables.
-
- :::image type="content" source="../media/label-tool/prebuilt-2.jpg" alt-text="Analyze Results of Form Recognizer invoice model":::
-
-1. Download the JSON output file to view the detailed results.
-
- * The "readResults" node contains every line of text with its respective bounding box placement on the page.
- * The "selectionMarks" node shows every selection mark (checkbox, radio mark) and whether its status is "selected" or "unselected".
- * The "pageResults" section includes the tables extracted. For each table, the text, row, and column index, row and column spanning, bounding box, and more are extracted.
- * The "documentResults" field contains key/value pairs information and line items information for the most relevant parts of the document.
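-
-The Sample Labeling tool calls the v2.1 API under the hood. If you want the same prebuilt invoice results in code, here's a hedged Python sketch using the v2.1-compatible `azure-ai-formrecognizer` 3.1.x package (the endpoint and key are placeholders; the invoice URL is the sample linked above):
-
-```python
-from azure.ai.formrecognizer import FormRecognizerClient
-from azure.core.credentials import AzureKeyCredential
-
-# Placeholder values - replace with your own endpoint and key.
-endpoint = "<your-endpoint>"
-key = "<your-key>"
-invoice_url = "https://raw.githubusercontent.com/Azure-Samples/cognitive-services-REST-api-samples/master/curl/form-recognizer/invoice_sample.jpg"
-
-client = FormRecognizerClient(endpoint, AzureKeyCredential(key))
-poller = client.begin_recognize_invoices_from_url(invoice_url)
-invoices = poller.result()
-
-# Each recognized invoice exposes typed fields, similar to the "documentResults" section.
-for invoice in invoices:
-    for name, field in invoice.fields.items():
-        print(f"{name}: {field.value} (confidence {field.confidence})")
-```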
-
-## Analyze Layout
-
-The Azure Form Recognizer Layout API extracts text, tables, selection marks, and structure information from documents (PDF, TIFF) and images (JPG, PNG, BMP).
-
-1. Navigate to the [Form Recognizer Sample Tool](https://fott-2-1.azurewebsites.net/).
-
-1. On the sample tool home page, select **Use Layout to get text, tables and selection marks**.
-
- :::image type="content" source="../media/label-tool/layout-1.jpg" alt-text="Connection settings for Layout Form Recognizer tool.":::
-
-1. In the **Form recognizer service endpoint** field, paste the endpoint that you obtained with your Form Recognizer subscription.
-
-1. In the **key** field, paste the key you obtained from your Form Recognizer resource.
-
-1. In the **Source** field, select **URL** from the dropdown menu, paste the following URL `https://raw.githubusercontent.com/Azure-Samples/cognitive-services-REST-api-samples/master/curl/form-recognizer/layout-page-001.jpg`, and select the **Fetch** button.
-
-1. Select **Run Layout**. The Form Recognizer Sample Labeling tool will call the Analyze Layout API and analyze the document.
-
- :::image type="content" source="../media/fott-layout.png" alt-text="Screenshot: Layout dropdown menu.":::
-
-1. View the results: see the highlighted extracted text, detected selection marks, and detected tables.
-
- :::image type="content" source="../media/label-tool/layout-3.jpg" alt-text="Connection settings for Form Recognizer tool.":::
-
-1. Download the JSON output file to view the detailed Layout Results.
- * The `readResults` node contains every line of text with its respective bounding box placement on the page.
- * The `selectionMarks` node shows every selection mark (checkbox, radio mark) and whether its status is `selected` or `unselected`.
- * The `pageResults` section includes the tables extracted. For each table, the text, row, and column index, row and column spanning, bounding box, and more are extracted.
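-
-To retrieve the same layout information programmatically, a hedged sketch with the v2.1-compatible Python SDK (`azure-ai-formrecognizer` 3.1.x; placeholder endpoint and key) might look like this:
-
-```python
-from azure.ai.formrecognizer import FormRecognizerClient
-from azure.core.credentials import AzureKeyCredential
-
-endpoint = "<your-endpoint>"   # placeholder
-key = "<your-key>"             # placeholder
-form_url = "https://raw.githubusercontent.com/Azure-Samples/cognitive-services-REST-api-samples/master/curl/form-recognizer/layout-page-001.jpg"
-
-client = FormRecognizerClient(endpoint, AzureKeyCredential(key))
-poller = client.begin_recognize_content_from_url(form_url)
-pages = poller.result()
-
-# FormPage objects correspond to the readResults/pageResults nodes in the JSON output.
-for page in pages:
-    print(f"Page {page.page_number}: {len(page.lines)} lines, {len(page.tables)} tables")
-    for mark in page.selection_marks:
-        print(f"  Selection mark is {mark.state}")
-```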
-
-## Train a custom form model
-
-Train a custom model to analyze and extract data from forms and documents specific to your business. The API is a machine-learning program trained to recognize form fields within your distinct content and extract key-value pairs and table data. You'll need at least five examples of the same form type to get started and your custom model can be trained with or without labeled datasets.
-
-### Prerequisites for training a custom form model
-
-* An Azure Storage blob container that contains a set of training data. Make sure all the training documents are of the same format. If you have forms in multiple formats, organize them into subfolders based on common format. For this project, you can use our [sample data set](https://github.com/Azure-Samples/cognitive-services-REST-api-samples/blob/master/curl/form-recognizer/sample_data_without_labels.zip).
-
-* If you don't know how to create an Azure storage account with a container, follow the [Azure Storage quickstart for Azure portal](../../../storage/blobs/storage-quickstart-blobs-portal.md).
-
-* Configure CORS
-
- [CORS (Cross Origin Resource Sharing)](/rest/api/storageservices/cross-origin-resource-sharing--cors--support-for-the-azure-storage-services) needs to be configured on your Azure storage account for it to be accessible from the Form Recognizer Studio. To configure CORS in the Azure portal, you'll need access to the CORS tab of your storage account.
-
- 1. Select the CORS tab for the storage account.
-
- :::image type="content" source="../media/quickstarts/cors-setting-menu.png" alt-text="Screenshot of the CORS setting menu in the Azure portal.":::
-
- 1. Start by creating a new CORS entry in the Blob service.
-
- 1. Set the **Allowed origins** to `https://fott-2-1.azurewebsites.net`.
-
- :::image type="content" source="../media/quickstarts/storage-cors-example.png" alt-text="Screenshot that shows CORS configuration for a storage account.":::
-
- > [!TIP]
- > You can use the wildcard character '*' rather than a specified domain to allow all origin domains to make requests via CORS.
-
-    1. Select all eight of the available options for **Allowed methods**.
-
- 1. Approve all **Allowed headers** and **Exposed headers** by entering an * in each field.
-
- 1. Set the **Max Age** to 120 seconds or any acceptable value.
-
- 1. Select the save button at the top of the page to save the changes.
-
-### Use the Sample Labeling tool
-
-1. Navigate to the [Form Recognizer Sample Tool](https://fott-2-1.azurewebsites.net/).
-
-1. On the sample tool home page, select **Use custom form to train a model with labels and get key-value pairs**.
-
- :::image type="content" source="../media/label-tool/custom-1.jpg" alt-text="Train a custom model.":::
-
-1. Select **New project**
-
- :::image type="content" source="../media/fott-new-project.png" alt-text="Screenshot: select a new project prompt.":::
-
-#### Create a new project
-
-Configure the **Project Settings** fields with the following values:
-
-1. **Display Name**. Name your project.
-
-1. **Security Token**. Each project will auto-generate a security token that can be used to encrypt/decrypt sensitive project settings. You can find security tokens in the Application Settings by selecting the gear icon at the bottom of the left navigation bar.
-
-1. **Source connection**. The Sample Labeling tool connects to a source (your original uploaded forms) and a target (created labels and output data). Connections can be set up and shared across projects. They use an extensible provider model, so you can easily add new source/target providers.
-
-    * To create a new connection, select the **Add Connection** button and complete the fields with the following values:
-
- > [!div class="checklist"]
- >
- > * **Display Name**. Name the connection.
- > * **Description**. Add a brief description.
- > * **SAS URL**. Paste the shared access signature (SAS) URL for your Azure Blob Storage container.
-
- * To retrieve the SAS URL for your custom model training data, go to your storage resource in the Azure portal and select the **Storage Explorer** tab. Navigate to your container, right-click, and select **Get shared access signature**. It's important to get the SAS for your container, not for the storage account itself. Make sure the **Read**, **Write**, **Delete** and **List** permissions are checked, and select **Create**. Then copy the value in the **URL** section to a temporary location. It should have the form: `https://<storage account>.blob.core.windows.net/<container name>?<SAS value>`.
-
- :::image type="content" source="../media/quickstarts/get-sas-url.png" alt-text="SAS location.":::
-
-1. **Folder Path** (optional). If your source forms are located within a folder in the blob container, specify the folder name.
-
-1. **Form Recognizer Service Uri** - Your Form Recognizer endpoint URL.
-
-1. **Key**. Your Form Recognizer key.
-
-1. **API version**. Keep the v2.1 (default) value.
-
-1. **Description** (optional). Describe your project.
-
- :::image type="content" source="../media/label-tool/connections.png" alt-text="Connection settings":::
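-
-The SAS URL requested in the **Source connection** step can also be generated with code instead of Storage Explorer. Here's a hedged Python sketch using the `azure-storage-blob` package (the account name, key, and container name are placeholders):
-
-```python
-from datetime import datetime, timedelta
-from azure.storage.blob import generate_container_sas, ContainerSasPermissions
-
-# Placeholder values - replace with your storage account details.
-account_name = "<storage-account>"
-account_key = "<storage-account-key>"
-container_name = "<container-name>"
-
-# Grant the Read, Write, Delete, and List permissions the labeling tool requires.
-sas_token = generate_container_sas(
-    account_name=account_name,
-    container_name=container_name,
-    account_key=account_key,
-    permission=ContainerSasPermissions(read=True, write=True, delete=True, list=True),
-    expiry=datetime.utcnow() + timedelta(hours=24),
-)
-
-sas_url = f"https://{account_name}.blob.core.windows.net/{container_name}?{sas_token}"
-print(sas_url)
-```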
-
-#### Label your forms
-
- :::image type="content" source="../media/label-tool/new-project.png" alt-text="New project page":::
-
-When you create or open a project, the main tag editor window opens. The tag editor consists of three parts:
-
-* A resizable preview pane that contains a scrollable list of forms from the source connection.
-* The main editor pane that allows you to apply tags.
-* The tags editor pane that allows users to modify, lock, reorder, and delete tags.
-
-##### Identify text and tables
-
-Select **Run Layout on unvisited documents** on the left pane to get the text and table layout information for each document. The labeling tool will draw bounding boxes around each text element.
-
-The labeling tool will also show which tables have been automatically extracted. Select the table/grid icon on the left-hand side of the document to see the extracted table. Because the table content is automatically extracted, we won't label the table content, but rather rely on the automated extraction.
-
- :::image type="content" source="../media/label-tool/table-extraction.png" alt-text="Table visualization in Sample Labeling tool.":::
-
-##### Apply labels to text
-
-Next, you'll create tags (labels) and apply them to the text elements that you want the model to analyze. Note that the sample label data set already includes labeled fields; we'll add another field.
-
-Use the tags editor pane to create a new tag you'd like to identify:
-
-1. Select the **+** plus sign to create a new tag.
-
-1. Enter the tag name "Total".
-
-1. Select **Enter** to save the tag.
-
-1. In the main editor, select the total value from the highlighted text elements.
-
-1. Select the Total tag to apply to the value, or press the corresponding keyboard key. The number keys are assigned as hotkeys for the first 10 tags. You can reorder your tags using the up and down arrow icons in the tag editor pane. Repeat these steps to label all five forms in the sample dataset.
-
- > [!Tip]
- > Keep the following tips in mind when you're labeling your forms:
- >
- > * You can only apply one tag to each selected text element.
- >
- > * Each tag can only be applied once per page. If a value appears multiple times on the same form, create different tags for each instance. For example: "invoice# 1", "invoice# 2" and so on.
- > * Tags cannot span across pages.
- > * Label values as they appear on the form; don't try to split a value into two parts with two different tags. For example, an address field should be labeled with a single tag even if it spans multiple lines.
- > * Don't include keys in your tagged fields&mdash;only the values.
- > * Table data should be detected automatically and will be available in the final output JSON file in the 'pageResults' section. However, if the model fails to detect all of your table data, you can also label and train a model to detect tables, see [Train a custom model | Label your forms](../label-tool.md#label-your-forms)
- > * Use the buttons to the right of the **+** to search, rename, reorder, and delete your tags.
- > * To remove an applied tag without deleting the tag itself, select the tagged rectangle on the document view and press the delete key.
- >
-
- :::image type="content" source="../media/label-tool/custom-1.jpg" alt-text="Label the samples.":::
-
-#### Train a custom model
-
-Choose the Train icon on the left pane to open the Training page. Then select the **Train** button to begin training the model. Once the training process completes, you'll see the following information:
-
-* **Model ID** - The ID of the model that was created and trained. Each training call creates a new model with its own ID. Copy this string to a secure location; you'll need it if you want to do prediction calls through the [REST API](./get-started-sdks-rest-api.md?pivots=programming-language-rest-api) or [client library](./get-started-sdks-rest-api.md).
-
-* **Average Accuracy** - The model's average accuracy. You can improve model accuracy by labeling more forms and retraining to create a new model. We recommend labeling five forms, analyzing and testing the results, and then adding more forms if needed.
-* The list of tags, and the estimated accuracy per tag. For more information, _see_ [Interpret and improve accuracy and confidence](../concept-accuracy-confidence.md).
-
- :::image type="content" source="../media/label-tool/custom-3.jpg" alt-text="Training view tool.":::
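-
-If you'd rather script training against the same labeled container, here's a hedged Python sketch using the v2.1-compatible SDK (`azure-ai-formrecognizer` 3.1.x; the endpoint, key, and container SAS URL are placeholders):
-
-```python
-from azure.ai.formrecognizer import FormTrainingClient
-from azure.core.credentials import AzureKeyCredential
-
-endpoint = "<your-endpoint>"                   # placeholder
-key = "<your-key>"                             # placeholder
-training_data_sas_url = "<container-sas-url>"  # container holding your labeled forms
-
-training_client = FormTrainingClient(endpoint, AzureKeyCredential(key))
-
-# use_training_labels=True trains with the labels created in the Sample Labeling tool.
-poller = training_client.begin_training(training_data_sas_url, use_training_labels=True)
-model = poller.result()
-
-print(f"Model ID: {model.model_id}")
-for submodel in model.submodels:
-    for name, field in submodel.fields.items():
-        print(f"  {name}: accuracy {field.accuracy}")
-```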
-
-#### Analyze a custom form
-
-1. Select the **Analyze** icon from the navigation bar to test your model.
-
-1. Select source **Local file** and browse for a file to select from the sample dataset that you unzipped in the test folder.
-
-1. Choose the **Run analysis** button to get key/value pairs, text and tables predictions for the form. The tool will apply tags in bounding boxes and will report the confidence of each tag.
-
- :::image type="content" source="../media/analyze.png" alt-text="Training view.":::
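-
-To run the same prediction from code with the model ID returned by training, a hedged sketch (same placeholder endpoint and key; v2.1-compatible SDK) could be:
-
-```python
-from azure.ai.formrecognizer import FormRecognizerClient
-from azure.core.credentials import AzureKeyCredential
-
-client = FormRecognizerClient("<your-endpoint>", AzureKeyCredential("<your-key>"))
-
-# Use the Model ID reported on the Training page.
-poller = client.begin_recognize_custom_forms_from_url(
-    model_id="<your-model-id>", form_url="<test-form-url>"
-)
-
-for form in poller.result():
-    for name, field in form.fields.items():
-        print(f"{name}: {field.value} (confidence {field.confidence})")
-```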
-
-That's it! You've learned how to use the Form Recognizer sample tool for Form Recognizer prebuilt, layout and custom models. You've also learned to analyze a custom form with manually labeled data.
-
-## Next steps
-
->[!div class="nextstepaction"]
-> [**Form Recognizer Studio**](https://formrecognizer.appliedai.azure.com/studio)
applied-ai-services Resource Customer Stories https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/resource-customer-stories.md
- Title: Customer spotlight-
-description: Highlight customer stories with Form Recognizer.
----- Previously updated : 05/25/2022----
-# Customer spotlight
-
-The following customers and partners have adopted Form Recognizer across a wide range of business and technical scenarios.
-
-| Customer/Partner | Description | Link |
-||-|-|
-| **Acumatica** | [**Acumatica**](https://www.acumatica.com/) is a technology provider that develops cloud and browser-based enterprise resource planning (ERP) software for small and medium-sized businesses (SMBs). To bring expense claims into the modern age, Acumatica incorporated Form Recognizer into its native application. The Form Recognizer's prebuilt-receipt API and machine learning capabilities are used to automatically extract data from receipts. Acumatica's customers can file multiple, error-free claims in a matter of seconds, freeing up more time to focus on other important tasks. | |
- | **Air Canada** | In September 2021, [**Air Canada**](https://www.aircanada.com/) was tasked with verifying the COVID-19 vaccination status of thousands of worldwide employees in only two months. After realizing manual verification would be too costly and complex within the time constraint, Air Canada turned to its internal AI team for an automated solution. The AI team partnered with Microsoft and used Form Recognizer to roll out a fully functional, accurate solution within weeks. This partnership met the government mandate on time and saved thousands of hours of manual work. | [Customer story](https://customers.microsoft.com/story/1505667713938806113-air-canada-travel-transportation-azure-form-recognizer)|
-|**Arkas Logistics** | [**Arkas Logistics**](http://www.arkaslojistik.com.tr/) operates under the umbrella of Arkas Holding, Türkiye's leading holding institution, and operates in 23 countries/regions. During the COVID-19 crisis, the company has been able to provide outstanding, complete logistical services thanks to its focus on contactless operation and digitalization steps. Form Recognizer powers a solution that maintains the continuity of the supply chain and allows for uninterrupted service. ||
-|**Automation Anywhere**| [**Automation Anywhere**](https://www.automationanywhere.com/) is on a singular and unwavering mission to democratize automation by liberating teams from mundane, repetitive tasks, and allowing more time for innovation and creativity with cloud-native robotic process automation (RPA)software. To protect the citizens of the United Kingdom, healthcare providers must process tens of thousands of COVID-19 tests daily, each one accompanied by a form for the World Health Organization (WHO). Manually completing and processing these forms would potentially slow testing and divert resources away from patient care. In response, Automation Anywhere built an AI-powered bot to help a healthcare provider automatically process and submit the COVID-19 test forms at scale. | |
-|**AvidXchange**| [**AvidXchange**](https://www.avidxchange.com/) has developed an accounts payable automation solution applying Form Recognizer. AvidXchange partners with Azure Cognitive Services to deliver an accounts payable automation solution for the middle market. Customers benefit from faster invoice processing times and increased accuracy to ensure their suppliers are paid the right amount, at the right time. ||
-|**Blue Prism**| [**Blue Prism**](https://www.blueprism.com/) Decipher is an AI-powered document processing capability that's directly embedded into the company's connected-RPA platform. Decipher works with Form Recognizer to help organizations process forms faster and with less human effort. One of Blue Prism's customers has been testing the solution to automate invoice handling as part of its procurement process. ||
-|**Chevron**| [**Chevron**](https://www.chevron.com//) Canada Business Unit is now using Form Recognizer with UiPath's robotic process automation platform to automate the extraction of data and move it into back-end systems for analysis. Subject matter experts have more time to focus on higher-value activities and information flows more rapidly. Accelerated operational control enables the company to analyze its business with greater speed, accuracy, and depth. | [Customer story](https://customers.microsoft.com/story/chevron-mining-oil-gas-azure-cognitive-services)|
-|**Cross Masters**|[**Cross Masters**](https://crossmasters.com/), uses cutting-edge AI technologies not only as a passion, but as an essential part of a work culture requiring continuous innovation. One of the latest success stories is automation of manual paperwork required to process thousands of invoices. Cross Masters used Form Recognizer to develop a unique, customized solution, to provide clients with market insights from a large set of collected invoices. Most impressive is the extraction quality and continuous introduction of new features, such as model composing and table labeling. ||
-|**Element**| [**Element**](https://www.element.com/) is a global business that provides specialist testing, inspection, and certification services to a diverse range of businesses. Element is one of the fastest growing companies in the global testing, inspection and certification sector having over 6,500 engaged experts working in more than 200 facilities across the globe. When the finance team for the Americas was forced to work from home during the COVID-19 pandemic, it needed to digitalize its paper processes fast. The creativity of the team and its use of Azure Form Recognizer delivered more than business as usual; it delivered significant efficiencies. The Element team used the tools in Azure so the next phase could be expedited. Rather than coding from scratch, they saw the opportunity to use the Azure Form Recognizer. This integration quickly gave them the functionality they needed, together with the agility and security of Azure. Azure Logic Apps is used to automate the process of extracting the documents from email, storing them, and updating the system with the extracted data. Computer Vision, part of Azure Cognitive Services, partners with Azure Form Recognizer to extract the right data points from the invoice documents, whether they're a pdf or scanned images. | [Customer story](https://customers.microsoft.com/story/1414941527887021413-element)|
-|**Emaar Properties**| [**Emaar Properties**](https://www.emaar.com/en/), operates Dubai Mall, the world's most-visited retail and entertainment destination. Each year, the Dubai Mall draws more than 80 million visitors. To enrich the shopping experience, Emaar Properties offers a unique rewards program through a dedicated mobile app. Loyalty program points are earned via submitted receipts. Emaar Properties uses Azure Form Recognizer to process submitted receipts and has achieved 92 percent reading accuracy.| [Customer story](https://customers.microsoft.com/story/1459754150957690925-emaar-retailers-azure-en-united-arab-emirates)|
-|**EY**| [**EY**](https://ey.com/) (Ernst & Young Global Limited) is a multinational professional services network that helps to create long-term value for clients and build trust in the capital markets. Enabled by data and technology, diverse EY teams in over 150 countries/regions help clients grow, transform, and operate. EY teams work across assurance, consulting, law, strategy, tax, and transactions to find solutions for complex issues facing our world today. The EY Technology team collaborated with Microsoft to build a platform that hastens invoice extraction and contract comparison processes. Azure Form Recognizer and Custom Vision partnered to enable EY teams to automate and improve the OCR and document handling processes for its transactions services clients. | [Customer story](https://customers.microsoft.com/story/1404985164224935715-ey-professional-services-azure-form-recognizer)|
-|**Financial Fabric**| [**Financial Fabric**](https://www.financialfabric.com/), a Microsoft Cloud Solution Provider, delivers data architecture, science, and analytics services to investment managers at hedge funds, family offices, and corporate treasuries. Its daily processes involve extracting and normalizing data from thousands of complex financial documents, such as bank statements and legal agreements. The company then provides custom analytics to help its clients make better investment decisions. Extracting this data previously took days or weeks. By using Form Recognizer, Financial Fabric has reduced the time it takes to go from extraction to analysis to just minutes. ||
-|**Fujitsu**| [**Fujitsu**](https://scanners.us.fujitsu.com/about-us) is the world leader in document scanning technology, with more than 50 percent of global market share, but that doesn't stop the company from constantly innovating. To improve the performance and accuracy of its cloud scanning solution, Fujitsu incorporated Azure Form Recognizer. It took only a few months to deploy the new technologies, and they have boosted character recognition rates as high as 99.9 percent. This collaboration helps Fujitsu deliver market-leading innovation and give its customers powerful and flexible tools for end-to-end document management. | [Customer story](https://customers.microsoft.com/en-us/story/1504311236437869486-fujitsu-document-scanning-azure-form-recognizer)|
-|**GEP**| [**GEP**](https://www.gep.com/) has developed an invoice processing solution for a client using Form Recognizer. "GEP combined their AI solution with Azure Form Recognizer to automate the processing of 4,000 invoices a day for a client, saving them tens of thousands of hours of manual effort. This collaborative effort improved accuracy, controls, and compliance on a global scale," says Sarateudu Sethi, GEP's Vice President of Artificial Intelligence. ||
-|**HCA Healthcare**| [**HCA Healthcare**](https://hcahealthcare.com/) is one of the nation's leading providers of healthcare with over 180 hospitals and 2,000 sites-of-care located throughout the United States and serving approximately 35 million patients each year. Currently, they're using Azure Form Recognizer to simplify and improve the patient onboarding experience and reduce administrative time spent entering repetitive data into the care center's system. | [Customer story](https://customers.microsoft.com/story/1404891793134114534-hca-healthcare-healthcare-provider-azure)|
-|**Icertis**| [**Icertis**](https://www.icertis.com/), is a Software as a Service (SaaS) provider headquartered in Bellevue, Washington. Icertis digitally transforms the contract management process with a cloud-based, AI-powered, contract lifecycle management solution. Azure Form Recognizer enables Icertis Contract Intelligence to take key-value pairs embedded in contracts and create structured data understood and operated upon by machine algorithms. Through these and other powerful Azure Cognitive and AI services, Icertis empowers customers in every industry to improve business in multiple ways: optimized manufacturing operations, added agility to retail strategies, reduced risk in IT services, and faster delivery of life-saving pharmaceutical products. ||
-|**Instabase**| [**Instabase**](https://instabase.com/) is a horizontal application platform that provides best-in-class machine learning processes to help retrieve, organize, identify, and understand complex masses of unorganized data. The application platform then brings this data into business workflows as organized information. This workflow provides a repository of integrative applications to orchestrate and harness that information with the means to rapidly extend and enhance them as required. The applications are fully containerized for widespread, infrastructure-agnostic deployment. | [Customer story](https://customers.microsoft.com/en-gb/story/1376278902865681018-instabase-partner-professional-services-azure)|
-|**Northern Trust**| [**Northern Trust**](https://www.northerntrust.com/) is a leading provider of wealth management, asset servicing, asset management, and banking to corporations, institutions, families, and individuals. As part of its initiative to digitize alternative asset servicing, Northern Trust has launched an AI-powered solution to extract unstructured investment data from alternative asset documents and make it accessible and actionable for asset-owner clients. Azure Applied AI services accelerate time-to-value for enterprises building AI solutions. This proprietary solution transforms crucial information from various unstructured formats into digital, actionable insights for investment teams. | [Customer story](https://www.businesswire.com/news/home/20210914005449/en/Northern-Trust-Automates-Data-Extraction-from-Alternative-Asset-Documentation)|
-|**Old Mutual**| [**Old Mutual**](https://www.oldmutual.co.za/) is Africa's leading financial services group with a comprehensive range of investment capabilities. They're the industry leader in retirement fund solutions, investments, asset management, group risk benefits, insurance, and multi-fund management. The Old Mutual team used Microsoft Natural Language Processing and Optical Character Recognition to provide the basis for automating key customer transactions received via emails. It also offered an opportunity to identify incomplete customer requests in order to nudge customers to the correct digital channels. Old Mutual's extensible solution technology was further developed as a microservice to be consumed by any enterprise application through a secure API management layer. | [Customer story](https://customers.microsoft.com/en-us/story/1507561807660098567-old-mutual-banking-capital-markets-azure-en-south-africa)|
-|**Standard Bank**| [**Standard Bank of South Africa**](https://www.standardbank.co.za/southafrica/personal/home) is Africa's largest bank by assets. Standard Bank is headquartered in Johannesburg, South Africa, and has more than 150 years of trade experience in Africa and beyond. When manual due diligence in cross-border transactions began absorbing too much staff time, the bank decided it needed a new way forward. Standard Bank uses Form Recognizer to significantly reduce its cross-border payments registration and processing time. | [Customer story](https://customers.microsoft.com/en-hk/story/1395059149522299983-standard-bank-of-south-africa-banking-capital-markets-azure-en-south-africa)|
-| **WEX**| [**WEX**](https://www.wexinc.com/) has developed a tool to process Explanation of Benefits documents using Form Recognizer. "The technology is truly amazing. I was initially worried that this type of solution wouldn't be feasible, but I soon realized that Form Recognizer can read virtually any document with accuracy," says Matt Dallahan, Senior Vice President of Product Management and Strategy. ||
-|**Wilson Allen** | [**Wilson Allen**](https://wilsonallen.com/) took advantage of AI container support for Azure Cognitive Services and created a powerful AI solution that helps firms around the world find unprecedented levels of insight in previously siloed and unstructured data. Its clients can use this data to support business development and foster client relationships. ||
-|**Zelros**| [**Zelros**](http://www.zelros.com/) offers AI-powered software for the insurance industry. Insurers use the platform to take in forms and seamlessly manage customer enrollment and claims filing. The company combined its technology with Form Recognizer to automatically pull key-value pairs and text out of documents. When insurers use the platform, they can quickly process paperwork, ensure high accuracy, and redirect thousands of hours previously spent on manual data extraction toward better service. | [Customer story](https://customers.microsoft.com/story/816397-zelros-insurance-azure)|
applied-ai-services Sdk Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/sdk-overview.md
- Title: Form Recognizer SDKs -
-description: Form Recognizer software development kits (SDKs) expose Form Recognizer models, features, and capabilities, using the C#, Java, JavaScript, and Python programming languages.
------ Previously updated : 04/25/2023---
-<!-- markdownlint-disable MD024 -->
-<!-- markdownlint-disable MD036 -->
-<!-- markdownlint-disable MD001 -->
-<!-- markdownlint-disable MD051 -->
-
-# Form Recognizer SDK (GA)
--
-> [!IMPORTANT]
-> For more information on the latest public preview version (**2023-02-28-preview**), *see* [Form Recognizer SDK (preview)](sdk-preview.md)
-
-Azure Cognitive Services Form Recognizer is a cloud service that uses machine learning to analyze text and structured data from documents. The Form Recognizer software development kit (SDK) is a set of libraries and tools that enable you to easily integrate Form Recognizer models and capabilities into your applications. Form Recognizer SDK is available across platforms in C#/.NET, Java, JavaScript, and Python programming languages.
-
-## Supported languages
-
-Form Recognizer SDK supports the following languages and platforms:
-
-| Language → Azure Form Recognizer SDK version | Package| Supported API version| Platform support |
-|:-:|:-|:-| :-|
-| [.NET/C# → 4.0.0 (latest GA release)](https://azuresdkdocs.blob.core.windows.net/$web/dotnet/Azure.AI.FormRecognizer/4.0.0/index.html)|[NuGet](https://www.nuget.org/packages/Azure.AI.FormRecognizer)|[v3.0](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-2022-08-31/operations/AnalyzeDocument)</br> [v2.1](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-v2-1/operations/AnalyzeBusinessCardAsync)</br>[v2.0](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-v2/operations/AnalyzeLayoutAsync) |[Windows, macOS, Linux, Docker](https://dotnet.microsoft.com/download)|
-|[Java → 4.0.6 (latest GA release)](https://azuresdkdocs.blob.core.windows.net/$web/java/azure-ai-formrecognizer/4.0.0/index.html) |[MVN repository](https://mvnrepository.com/artifact/com.azure/azure-ai-formrecognizer/4.0.0-beta.6) |[v3.0](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-2022-08-31/operations/AnalyzeDocument)</br> [v2.1](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-v2-1/operations/AnalyzeBusinessCardAsync)</br>[v2.0](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-v2/operations/AnalyzeLayoutAsync) |[Windows, macOS, Linux](/java/openjdk/install)|
-|[JavaScript → 4.0.0 (latest GA release)](https://azuresdkdocs.blob.core.windows.net/$web/javascript/azure-ai-form-recognizer/4.0.0/index.html)| [npm](https://www.npmjs.com/package/@azure/ai-form-recognizer)| [v3.0](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-2022-08-31/operations/AnalyzeDocument)</br> [v2.1](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-v2-1/operations/AnalyzeBusinessCardAsync)</br>[v2.0](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-v2/operations/AnalyzeLayoutAsync) | [Browser, Windows, macOS, Linux](https://nodejs.org/en/download/) |
-|[Python → 3.2.0 (latest GA release)](https://azuresdkdocs.blob.core.windows.net/$web/python/azure-ai-formrecognizer/3.2.0/index.html) | [PyPI](https://pypi.org/project/azure-ai-formrecognizer/3.2.0/)| [v3.0](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-2022-08-31/operations/AnalyzeDocument)</br> [v2.1](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-v2-1/operations/AnalyzeBusinessCardAsync)</br>[v2.0](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-v2/operations/AnalyzeLayoutAsync) |[Windows, macOS, Linux](/azure/developer/python/configure-local-development-environment?tabs=windows%2Capt%2Ccmd#use-the-azure-cli)
-
-## Supported Clients
-
-| Language| SDK version | API version | Supported clients|
-| : | :--|:- | :--|
-|.NET/C#</br> Java</br> JavaScript</br>| 4.0.0 (latest GA release)| v3.0 / 2022-08-31 (default)| **DocumentAnalysisClient**</br>**DocumentModelAdministrationClient** |
-|.NET/C#</br> Java</br> JavaScript</br>| 3.1.x | v2.1 (default)</br>v2.0 | **FormRecognizerClient**</br>**FormTrainingClient** |
-|.NET/C#</br> Java</br> JavaScript</br>| 3.0.x| v2.0 | **FormRecognizerClient**</br>**FormTrainingClient** |
| Python| 3.2.x (latest GA release) | v3.0 / 2022-08-31 (default)| **DocumentAnalysisClient**</br>**DocumentModelAdministrationClient**|
-| Python | 3.1.x | v2.1 (default)</br>v2.0 |**FormRecognizerClient**</br>**FormTrainingClient** |
-| Python | 3.0.0 | v2.0 |**FormRecognizerClient**</br>**FormTrainingClient** |
-
-## Use Form Recognizer SDK in your applications
-
-The Form Recognizer SDK enables the use and management of the Form Recognizer service in your application. The SDK builds on the underlying Form Recognizer REST API, allowing you to easily use those APIs within your programming language paradigm. Here's how you use the Form Recognizer SDK for your preferred language:
-
-### 1. Install the SDK client library
-
-### [C#/.NET](#tab/csharp)
-
-```dotnetcli
-dotnet add package Azure.AI.FormRecognizer --version 4.0.0
-```
-
-```powershell
-Install-Package Azure.AI.FormRecognizer -Version 4.0.0
-```
-
-### [Java](#tab/java)
-
-```xml
-<dependency>
-<groupId>com.azure</groupId>
-<artifactId>azure-ai-formrecognizer</artifactId>
-<version>4.0.6</version>
-</dependency>
-```
-
-```kotlin
-implementation("com.azure:azure-ai-formrecognizer:4.0.6")
-```
-
-### [JavaScript](#tab/javascript)
-
-```console
-npm i @azure/ai-form-recognizer@4.0.0
-```
-
-### [Python](#tab/python)
-
-```console
-pip install azure-ai-formrecognizer==3.2.0
-```
--
-### 2. Import the SDK client library into your application
-
-### [C#/.NET](#tab/csharp)
-
-```csharp
-using Azure;
-using Azure.AI.FormRecognizer.DocumentAnalysis;
-```
-
-### [Java](#tab/java)
-
-```java
-import com.azure.ai.formrecognizer.*;
-import com.azure.ai.formrecognizer.models.*;
-import com.azure.ai.formrecognizer.DocumentAnalysisClient.*;
-
-import com.azure.core.credential.AzureKeyCredential;
-```
-
-### [JavaScript](#tab/javascript)
-
-```javascript
-const { AzureKeyCredential, DocumentAnalysisClient } = require("@azure/ai-form-recognizer");
-```
-
-### [Python](#tab/python)
-
-```python
-from azure.ai.formrecognizer import DocumentAnalysisClient
-from azure.core.credentials import AzureKeyCredential
-```
---
-### 3. Set up authentication
-
-There are two supported methods for authentication:
-
-* Use a [Form Recognizer API key](#use-your-api-key) with AzureKeyCredential from azure.core.credentials.
-
-* Use a [token credential from azure-identity](#use-an-azure-active-directory-azure-ad-token-credential) to authenticate with [Azure Active Directory](../../active-directory/fundamentals/active-directory-whatis.md).
-
-#### Use your API key
-
-Here's where to find your Form Recognizer API key in the Azure portal:
--
-### [C#/.NET](#tab/csharp)
-
-```csharp
-
-//set `<your-endpoint>` and `<your-key>` variables with the values from the Azure portal to create your `AzureKeyCredential` and `DocumentAnalysisClient` instance
-string key = "<your-key>";
-string endpoint = "<your-endpoint>";
-AzureKeyCredential credential = new AzureKeyCredential(key);
-DocumentAnalysisClient client = new DocumentAnalysisClient(new Uri(endpoint), credential);
-```
-
-### [Java](#tab/java)
-
-```java
-
-// create your `DocumentAnalysisClient` instance and `AzureKeyCredential` variable
-DocumentAnalysisClient client = new DocumentAnalysisClientBuilder()
- .credential(new AzureKeyCredential("<your-key>"))
- .endpoint("<your-endpoint>")
- .buildClient();
-```
-
-### [JavaScript](#tab/javascript)
-
-```javascript
-
-// create your `DocumentAnalysisClient` instance and `AzureKeyCredential` variable
-async function main() {
- const client = new DocumentAnalysisClient("<your-endpoint>", new AzureKeyCredential("<your-key>"));
-```
-
-### [Python](#tab/python)
-
-```python
-
-# create your `DocumentAnalysisClient` instance and `AzureKeyCredential` variable
- document_analysis_client = DocumentAnalysisClient(endpoint="<your-endpoint>", credential=AzureKeyCredential("<your-key>"))
-```
---
-#### Use an Azure Active Directory (Azure AD) token credential
-
-> [!NOTE]
-> Regional endpoints do not support AAD authentication. Create a [custom subdomain](../../cognitive-services/authentication.md?tabs=powershell#create-a-resource-with-a-custom-subdomain) for your resource in order to use this type of authentication.
-
-Authorization is easiest using the `DefaultAzureCredential`. It provides a default token credential, based upon the running environment, capable of handling most Azure authentication scenarios.
-
-### [C#/.NET](#tab/csharp)
-
-Here's how to acquire and use the [DefaultAzureCredential](/dotnet/api/azure.identity.defaultazurecredential?view=azure-dotnet&preserve-view=true) for .NET applications:
-
-1. Install the [Azure Identity library for .NET](/dotnet/api/overview/azure/identity-readme):
-
- ```console
- dotnet add package Azure.Identity
- ```
-
- ```powershell
- Install-Package Azure.Identity
- ```
-
-1. [Register an Azure AD application and create a new service principal](../../cognitive-services/authentication.md?tabs=powershell#assign-a-role-to-a-service-principal).
-
-1. Grant access to Form Recognizer by assigning the **`Cognitive Services User`** role to your service principal.
-
-1. Set the values of the client ID, tenant ID, and client secret in the Azure AD application as environment variables: **`AZURE_CLIENT_ID`**, **`AZURE_TENANT_ID`**, and **`AZURE_CLIENT_SECRET`**, respectively.
-
-1. Create your **`DocumentAnalysisClient`** instance including the **`DefaultAzureCredential`**:
-
- ```csharp
- string endpoint = "<your-endpoint>";
- var client = new DocumentAnalysisClient(new Uri(endpoint), new DefaultAzureCredential());
- ```
-
-For more information, *see* [Authenticate the client](https://github.com/Azure/azure-sdk-for-net/tree/Azure.AI.FormRecognizer_4.0.0-beta.4/sdk/formrecognizer/Azure.AI.FormRecognizer#authenticate-the-client)
-
-### [Java](#tab/java)
-
-Here's how to acquire and use the [DefaultAzureCredential](/java/api/com.azure.identity.defaultazurecredential?view=azure-java-stable&preserve-view=true) for Java applications:
-
-1. Install the [Azure Identity library for Java](/java/api/overview/azure/identity-readme?view=azure-java-stable&preserve-view=true):
-
- ```xml
- <dependency>
- <groupId>com.azure</groupId>
- <artifactId>azure-identity</artifactId>
- <version>1.5.3</version>
- </dependency>
- ```
-
-1. [Register an Azure AD application and create a new service principal](../../cognitive-services/authentication.md?tabs=powershell#assign-a-role-to-a-service-principal).
-
-1. Grant access to Form Recognizer by assigning the **`Cognitive Services User`** role to your service principal.
-
-1. Set the values of the client ID, tenant ID, and client secret of the Azure AD application as environment variables: **`AZURE_CLIENT_ID`**, **`AZURE_TENANT_ID`**, and **`AZURE_CLIENT_SECRET`**, respectively.
-
-1. Create your **`DocumentAnalysisClient`** instance and **`TokenCredential`** variable:
-
- ```java
- TokenCredential credential = new DefaultAzureCredentialBuilder().build();
- DocumentAnalysisClient documentAnalysisClient = new DocumentAnalysisClientBuilder()
- .endpoint("{your-endpoint}")
- .credential(credential)
- .buildClient();
- ```
-
-For more information, *see* [Authenticate the client](https://github.com/Azure/azure-sdk-for-java/tree/main/sdk/formrecognizer/azure-ai-formrecognizer#authenticate-the-client)
-
-### [JavaScript](#tab/javascript)
-
-Here's how to acquire and use the [DefaultAzureCredential](/javascript/api/@azure/identity/defaultazurecredential?view=azure-node-latest&preserve-view=true) for JavaScript applications:
-
-1. Install the [Azure Identity library for JavaScript](/javascript/api/overview/azure/identity-readme?view=azure-node-latest&preserve-view=true):
-
- ```javascript
- npm install @azure/identity
- ```
-
-1. [Register an Azure AD application and create a new service principal](../../cognitive-services/authentication.md?tabs=powershell#assign-a-role-to-a-service-principal).
-
-1. Grant access to Form Recognizer by assigning the **`Cognitive Services User`** role to your service principal.
-
-1. Set the values of the client ID, tenant ID, and client secret of the Azure AD application as environment variables: **`AZURE_CLIENT_ID`**, **`AZURE_TENANT_ID`**, and **`AZURE_CLIENT_SECRET`**, respectively.
-
-1. Create your **`DocumentAnalysisClient`** instance including the **`DefaultAzureCredential`**:
-
- ```javascript
- const { DocumentAnalysisClient } = require("@azure/ai-form-recognizer");
- const { DefaultAzureCredential } = require("@azure/identity");
-
- const client = new DocumentAnalysisClient("<your-endpoint>", new DefaultAzureCredential());
- ```
-
-For more information, *see* [Create and authenticate a client](https://github.com/Azure/azure-sdk-for-js/tree/main/sdk/formrecognizer/ai-form-recognizer#create-and-authenticate-a-client).
-
-### [Python](#tab/python)
-
-Here's how to acquire and use the [DefaultAzureCredential](/python/api/azure-identity/azure.identity.defaultazurecredential?view=azure-python&preserve-view=true) for Python applications.
-
-1. Install the [Azure Identity library for Python](/python/api/overview/azure/identity-readme?view=azure-python&preserve-view=true):
-
- ```python
- pip install azure-identity
- ```
-
-1. [Register an Azure AD application and create a new service principal](../../cognitive-services/authentication.md?tabs=powershell#assign-a-role-to-a-service-principal).
-
-1. Grant access to Form Recognizer by assigning the **`Cognitive Services User`** role to your service principal.
-
-1. Set the values of the client ID, tenant ID, and client secret of the Azure AD application as environment variables: **`AZURE_CLIENT_ID`**, **`AZURE_TENANT_ID`**, and **`AZURE_CLIENT_SECRET`**, respectively.
-
-1. Create your **`DocumentAnalysisClient`** instance including the **`DefaultAzureCredential`**:
-
- ```python
- from azure.identity import DefaultAzureCredential
- from azure.ai.formrecognizer import DocumentAnalysisClient
-
- credential = DefaultAzureCredential()
- document_analysis_client = DocumentAnalysisClient(
- endpoint="https://<my-custom-subdomain>.cognitiveservices.azure.com/",
- credential=credential
- )
- ```
-
-For more information, *see* [Authenticate the client](https://github.com/Azure/azure-sdk-for-python/tree/azure-ai-formrecognizer_3.2.0b5/sdk/formrecognizer/azure-ai-formrecognizer#authenticate-the-client)
---
-### 4. Build your application
-
-Create a client object to interact with the Form Recognizer SDK, and then call methods on that client object to interact with the service. The SDKs provide both synchronous and asynchronous methods. For more insight, try a [quickstart](quickstarts/get-started-sdks-rest-api.md?view=form-recog-3.0.0&preserve-view=true) in a language of your choice.
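-
-For example, here's a minimal Python sketch of that pattern, using the GA `azure-ai-formrecognizer` (3.2.x) package to analyze a local file with the prebuilt layout model; the endpoint, key, and file path are placeholders you supply.
-
-```python
-from azure.core.credentials import AzureKeyCredential
-from azure.ai.formrecognizer import DocumentAnalysisClient
-
-# Placeholders: supply your own endpoint, key, and document path.
-client = DocumentAnalysisClient("<your-endpoint>", AzureKeyCredential("<your-key>"))
-
-with open("<path-to-your-document>", "rb") as f:
-    # begin_analyze_document starts a long-running operation and returns a poller.
-    poller = client.begin_analyze_document("prebuilt-layout", document=f)
-    result = poller.result()
-
-for page in result.pages:
-    print(f"Page {page.page_number} has {len(page.lines)} lines.")
-```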
-
-## Help options
-
-The [Microsoft Q&A](/answers/topics/azure-form-recognizer.html) and [Stack Overflow](https://stackoverflow.com/questions/tagged/azure-form-recognizer) forums are available for the developer community to ask and answer questions about Azure Form Recognizer and other services. Microsoft monitors the forums and replies to questions that the community has yet to answer. To make sure that we see your question, tag it with **`azure-form-recognizer`**.
-
-## Next steps
-
->[!div class="nextstepaction"]
-> [**Explore Form Recognizer REST API v3.0**](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-2022-08-31/operations/AnalyzeDocument)
-
-> [!div class="nextstepaction"]
-> [**Try a Form Recognizer quickstart**](quickstarts/get-started-sdks-rest-api.md?view=form-recog-3.0.0&preserve-view=true)
applied-ai-services Sdk Preview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/sdk-preview.md
- Title: Form Recognizer SDKs (preview)-
-description: The preview Form Recognizer software development kits (SDKs) expose Form Recognizer models, features and capabilities that are in active development for C#, Java, JavaScript, or Python programming language.
------ Previously updated : 04/25/2023-
-monikerRange: 'form-recog-3.0.0'
--
-<!-- markdownlint-disable MD024 -->
-<!-- markdownlint-disable MD036 -->
-<!-- markdownlint-disable MD001 -->
-<!-- markdownlint-disable MD051 -->
-
-# Form Recognizer SDK (public preview)
-
-**The SDKs referenced in this article are supported by:** ![Form Recognizer checkmark](media/yes-icon.png) **Form Recognizer REST API version 2023-02-28-preview**.
-
-> [!IMPORTANT]
->
-> * Form Recognizer public preview releases provide early access to features that are in active development.
-> * Features, approaches, and processes may change, prior to General Availability (GA), based on user feedback.
-> * The public preview version of Form Recognizer client libraries default to service version [**Form Recognizer 2023-02-28-preview REST API**](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-2023-02-28-preview/operations/AnalyzeDocument).
-
-Azure Cognitive Services Form Recognizer is a cloud service that uses machine learning to analyze text and structured data from documents. The Form Recognizer software development kit (SDK) is a set of libraries and tools that enable you to easily integrate Form Recognizer models and capabilities into your applications. Form Recognizer SDK is available across platforms in C#/.NET, Java, JavaScript, and Python programming languages.
-
-## Supported languages
-
-Form Recognizer SDK supports the following languages and platforms:
-
-| Language → Azure Form Recognizer SDK version | Package| Supported API version| Platform support |
-|:-:|:-|:-| :-|
-| [.NET/C# → 4.1.0-beta.1 (preview)](https://azuresdkdocs.blob.core.windows.net/$web/dotnet/Azure.AI.FormRecognizer/4.1.0-beta.1/index.html)|[NuGet](https://www.nuget.org/packages/Azure.AI.FormRecognizer/4.1.0-beta.1)|[**2023-02-28-preview**](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-2023-02-28-preview/operations/AnalyzeDocument)</br> [**2022-08-31 (GA)**](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-2022-08-31/operations/AnalyzeDocument)</br> [**v2.1**](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-v2-1/operations/AnalyzeBusinessCardAsync)</br>[**v2.0**](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-v2/operations/AnalyzeLayoutAsync) |[Windows, macOS, Linux, Docker](https://dotnet.microsoft.com/download)|
-|[Java → 4.1.0-beta.1 (preview)](https://azuresdkdocs.blob.core.windows.net/$web/java/azure-ai-formrecognizer/4.1.0-beta.1/index.html) |[MVN repository](https://mvnrepository.com/artifact/com.azure/azure-ai-formrecognizer/4.1.0-beta.1) |[**2023-02-28-preview**](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-2023-02-28-preview/operations/AnalyzeDocument)</br> [**2022-08-31 (GA)**](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-2022-08-31/operations/AnalyzeDocument)</br> [**v2.1**](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-v2-1/operations/AnalyzeBusinessCardAsync)</br>[**v2.0**](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-v2/operations/AnalyzeLayoutAsync) |[Windows, macOS, Linux](/java/openjdk/install)|
-|[JavaScript → 4.1.0-beta.1 (preview)](https://azuresdkdocs.blob.core.windows.net/$web/javascript/azure-ai-form-recognizer/4.1.0-beta.1/index.html)| [npm](https://www.npmjs.com/package/@azure/ai-form-recognizer/v/4.1.0-beta.1)| [**2023-02-28-preview**](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-2023-02-28-preview/operations/AnalyzeDocument)</br> [**2022-08-31 (GA)**](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-2022-08-31/operations/AnalyzeDocument)</br> [**v2.1**](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-v2-1/operations/AnalyzeBusinessCardAsync)</br>[**v2.0**](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-v2/operations/AnalyzeLayoutAsync) | [Browser, Windows, macOS, Linux](https://nodejs.org/en/download/) |
-|[Python → 3.3.0b1 (preview)](https://azuresdkdocs.blob.core.windows.net/$web/python/azure-ai-formrecognizer/3.3.0b1/index.html) | [PyPI](https://pypi.org/project/azure-ai-formrecognizer/3.3.0b1/)| [**2023-02-28-preview**](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-2023-02-28-preview/operations/AnalyzeDocument)</br> [**2022-08-31 (GA)**](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-2022-08-31/operations/AnalyzeDocument)</br> [**v2.1**](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-v2-1/operations/AnalyzeBusinessCardAsync)</br>[**v2.0**](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-v2/operations/AnalyzeLayoutAsync) |[Windows, macOS, Linux](/azure/developer/python/configure-local-development-environment?tabs=windows%2Capt%2Ccmd#use-the-azure-cli)
-
-## Supported Clients
-
-| Language| SDK version | API version (default) | Supported clients|
-|:--|:--|:--|:--|
-|**.NET/C#**</br> **Java**</br> **JavaScript**</br>| 4.1.0-beta.1 (preview)| 2023-02-28-preview|**DocumentAnalysisClient**</br>**DocumentModelAdministrationClient** |
-|**.NET/C#**</br> **Java**</br> **JavaScript**</br>| 4.0.0 (GA)| v3.0 / 2022-08-31| **DocumentAnalysisClient**</br>**DocumentModelAdministrationClient** |
-|**.NET/C#**</br> **Java**</br> **JavaScript**</br>| 3.1.x | v2.1 | **FormRecognizerClient**</br>**FormTrainingClient** |
-|**.NET/C#**</br> **Java**</br> **JavaScript**</br>| 3.0.x| v2.0 | **FormRecognizerClient**</br>**FormTrainingClient** |
-| **Python**| 3.3.0bx (preview) | 2023-02-28-preview | **DocumentAnalysisClient**</br>**DocumentModelAdministrationClient**|
-| **Python**| 3.2.x (GA) | v3.0 / 2022-08-31| **DocumentAnalysisClient**</br>**DocumentModelAdministrationClient**|
-| **Python**| 3.1.x | v2.1 | **FormRecognizerClient**</br>**FormTrainingClient** |
-| **Python** | 3.0.0 | v2.0 |**FormRecognizerClient**</br>**FormTrainingClient** |
-
-## Use Form Recognizer SDK in your applications
-
-The Form Recognizer SDK enables the use and management of the Form Recognizer service in your application. The SDK builds on the underlying Form Recognizer REST API allowing you to easily use those APIs within your programming language paradigm. Here's how you use the Form Recognizer SDK for your preferred language:
-
-### 1. Install the SDK client library
-
-### [C#/.NET](#tab/csharp)
-
-```dotnetcli
-dotnet add package Azure.AI.FormRecognizer --version 4.1.0-beta.1
-```
-
-```powershell
-Install-Package Azure.AI.FormRecognizer -Version 4.1.0-beta.1
-```
-
-### [Java](#tab/java)
-
-```xml
- <dependency>
- <groupId>com.azure</groupId>
- <artifactId>azure-ai-formrecognizer</artifactId>
- <version>4.1.0-beta.1</version>
- </dependency>
-```
-
-```kotlin
-implementation("com.azure:azure-ai-formrecognizer:4.1.0-beta.1")
-```
-
-### [JavaScript](#tab/javascript)
-
-```console
-npm i @azure/ai-form-recognizer@4.1.0-beta.1
-```
-
-### [Python](#tab/python)
-
-```console
-pip install azure-ai-formrecognizer==3.3.0b1
-```
---
-### 2. Import the SDK client library into your application
-
-### [C#/.NET](#tab/csharp)
-
-```csharp
-using Azure;
-using Azure.AI.FormRecognizer.DocumentAnalysis;
-```
-
-### [Java](#tab/java)
-
-```java
-import com.azure.ai.formrecognizer.*;
-import com.azure.ai.formrecognizer.models.*;
-import com.azure.ai.formrecognizer.DocumentAnalysisClient.*;
-
-import com.azure.core.credential.AzureKeyCredential;
-```
-
-### [JavaScript](#tab/javascript)
-
-```javascript
-const { AzureKeyCredential, DocumentAnalysisClient } = require("@azure/ai-form-recognizer");
-```
-
-### [Python](#tab/python)
-
-```python
-from azure.ai.formrecognizer import DocumentAnalysisClient
-from azure.core.credentials import AzureKeyCredential
-```
---
-### 3. Set up authentication
-
-There are two supported methods for authentication:
-
-* Use a [Form Recognizer API key](#use-your-api-key) with AzureKeyCredential from azure.core.credentials.
-
-* Use a [token credential from azure-identity](#use-an-azure-active-directory-azure-ad-token-credential) to authenticate with [Azure Active Directory](../../active-directory/fundamentals/active-directory-whatis.md).
-
-#### Use your API key
-
-You can find your Form Recognizer API key and endpoint on the **Keys and Endpoint** page of your resource in the Azure portal. Use them to create your client:
--
-### [C#/.NET](#tab/csharp)
-
-```csharp
-
-//set `<your-endpoint>` and `<your-key>` variables with the values from the Azure portal to create your `AzureKeyCredential` and `DocumentAnalysisClient` instance
-string key = "<your-key>";
-string endpoint = "<your-endpoint>";
-AzureKeyCredential credential = new AzureKeyCredential(key);
-DocumentAnalysisClient client = new DocumentAnalysisClient(new Uri(endpoint), credential);
-```
-
-### [Java](#tab/java)
-
-```java
-
-// create your `DocumentAnalysisClient` instance and `AzureKeyCredential` variable
-DocumentAnalysisClient client = new DocumentAnalysisClientBuilder()
- .credential(new AzureKeyCredential("<your-key>"))
- .endpoint("<your-endpoint>")
- .buildClient();
-```
-
-### [JavaScript](#tab/javascript)
-
-```javascript
-
-// create your `DocumentAnalysisClient` instance and `AzureKeyCredential` variable
-async function main() {
- const client = new DocumentAnalysisClient("<your-endpoint>", new AzureKeyCredential("<your-key>"));
-```
-
-### [Python](#tab/python)
-
-```python
-
-# create your `DocumentAnalysisClient` instance and `AzureKeyCredential` variable
- document_analysis_client = DocumentAnalysisClient(endpoint="<your-endpoint>", credential=AzureKeyCredential("<your-key>"))
-```
---
-#### Use an Azure Active Directory (Azure AD) token credential
-
-> [!NOTE]
-> Regional endpoints do not support AAD authentication. Create a [custom subdomain](../../cognitive-services/authentication.md?tabs=powershell#create-a-resource-with-a-custom-subdomain) for your resource in order to use this type of authentication.
-
-Authorization is easiest using the `DefaultAzureCredential`. It provides a default token credential, based upon the running environment, capable of handling most Azure authentication scenarios.
-
-### [C#/.NET](#tab/csharp)
-
-Here's how to acquire and use the [DefaultAzureCredential](/dotnet/api/azure.identity.defaultazurecredential?view=azure-dotnet&preserve-view=true) for .NET applications:
-
-1. Install the [Azure Identity library for .NET](/dotnet/api/overview/azure/identity-readme):
-
- ```console
- dotnet add package Azure.Identity
- ```
-
- ```powershell
- Install-Package Azure.Identity
- ```
-
-1. [Register an Azure AD application and create a new service principal](../../cognitive-services/authentication.md?tabs=powershell#assign-a-role-to-a-service-principal).
-
-1. Grant access to Form Recognizer by assigning the **`Cognitive Services User`** role to your service principal.
-
-1. Set the values of the client ID, tenant ID, and client secret in the Azure AD application as environment variables: **`AZURE_CLIENT_ID`**, **`AZURE_TENANT_ID`**, and **`AZURE_CLIENT_SECRET`**, respectively.
-
-1. Create your **`DocumentAnalysisClient`** instance including the **`DefaultAzureCredential`**:
-
- ```csharp
- string endpoint = "<your-endpoint>";
- var client = new DocumentAnalysisClient(new Uri(endpoint), new DefaultAzureCredential());
- ```
-
-For more information, *see* [Authenticate the client](https://github.com/Azure/azure-sdk-for-net/tree/Azure.AI.FormRecognizer_4.0.0-beta.4/sdk/formrecognizer/Azure.AI.FormRecognizer#authenticate-the-client)
-
-### [Java](#tab/java)
-
-Here's how to acquire and use the [DefaultAzureCredential](/java/api/com.azure.identity.defaultazurecredential?view=azure-java-stable&preserve-view=true) for Java applications:
-
-1. Install the [Azure Identity library for Java](/java/api/overview/azure/identity-readme?view=azure-java-stable&preserve-view=true):
-
- ```xml
- <dependency>
- <groupId>com.azure</groupId>
- <artifactId>azure-identity</artifactId>
- <version>1.5.3</version>
- </dependency>
- ```
-
-1. [Register an Azure AD application and create a new service principal](../../cognitive-services/authentication.md?tabs=powershell#assign-a-role-to-a-service-principal).
-
-1. Grant access to Form Recognizer by assigning the **`Cognitive Services User`** role to your service principal.
-
-1. Set the values of the client ID, tenant ID, and client secret of the Azure AD application as environment variables: **`AZURE_CLIENT_ID`**, **`AZURE_TENANT_ID`**, and **`AZURE_CLIENT_SECRET`**, respectively.
-
-1. Create your **`DocumentAnalysisClient`** instance and **`TokenCredential`** variable:
-
- ```java
- TokenCredential credential = new DefaultAzureCredentialBuilder().build();
- DocumentAnalysisClient documentAnalysisClient = new DocumentAnalysisClientBuilder()
- .endpoint("{your-endpoint}")
- .credential(credential)
- .buildClient();
- ```
-
-For more information, *see* [Authenticate the client](https://github.com/Azure/azure-sdk-for-java/tree/main/sdk/formrecognizer/azure-ai-formrecognizer#authenticate-the-client)
-
-### [JavaScript](#tab/javascript)
-
-Here's how to acquire and use the [DefaultAzureCredential](/javascript/api/@azure/identity/defaultazurecredential?view=azure-node-latest&preserve-view=true) for JavaScript applications:
-
-1. Install the [Azure Identity library for JavaScript](/javascript/api/overview/azure/identity-readme?view=azure-node-latest&preserve-view=true):
-
- ```javascript
- npm install @azure/identity
- ```
-
-1. [Register an Azure AD application and create a new service principal](../../cognitive-services/authentication.md?tabs=powershell#assign-a-role-to-a-service-principal).
-
-1. Grant access to Form Recognizer by assigning the **`Cognitive Services User`** role to your service principal.
-
-1. Set the values of the client ID, tenant ID, and client secret of the Azure AD application as environment variables: **`AZURE_CLIENT_ID`**, **`AZURE_TENANT_ID`**, and **`AZURE_CLIENT_SECRET`**, respectively.
-
-1. Create your **`DocumentAnalysisClient`** instance including the **`DefaultAzureCredential`**:
-
- ```javascript
- const { DocumentAnalysisClient } = require("@azure/ai-form-recognizer");
- const { DefaultAzureCredential } = require("@azure/identity");
-
- const client = new DocumentAnalysisClient("<your-endpoint>", new DefaultAzureCredential());
- ```
-
-For more information, *see* [Create and authenticate a client](https://github.com/Azure/azure-sdk-for-js/tree/main/sdk/formrecognizer/ai-form-recognizer#create-and-authenticate-a-client).
-
-### [Python](#tab/python)
-
-Here's how to acquire and use the [DefaultAzureCredential](/python/api/azure-identity/azure.identity.defaultazurecredential?view=azure-python&preserve-view=true) for Python applications.
-
-1. Install the [Azure Identity library for Python](/python/api/overview/azure/identity-readme?view=azure-python&preserve-view=true):
-
- ```python
- pip install azure-identity
- ```
-
-1. [Register an Azure AD application and create a new service principal](../../cognitive-services/authentication.md?tabs=powershell#assign-a-role-to-a-service-principal).
-
-1. Grant access to Form Recognizer by assigning the **`Cognitive Services User`** role to your service principal.
-
-1. Set the values of the client ID, tenant ID, and client secret of the Azure AD application as environment variables: **`AZURE_CLIENT_ID`**, **`AZURE_TENANT_ID`**, and **`AZURE_CLIENT_SECRET`**, respectively.
-
-1. Create your **`DocumentAnalysisClient`** instance including the **`DefaultAzureCredential`**:
-
- ```python
- from azure.identity import DefaultAzureCredential
- from azure.ai.formrecognizer import DocumentAnalysisClient
-
- credential = DefaultAzureCredential()
- document_analysis_client = DocumentAnalysisClient(
- endpoint="https://<my-custom-subdomain>.cognitiveservices.azure.com/",
- credential=credential
- )
- ```
-
-For more information, *see* [Authenticate the client](https://github.com/Azure/azure-sdk-for-python/tree/azure-ai-formrecognizer_3.2.0b5/sdk/formrecognizer/azure-ai-formrecognizer#authenticate-the-client)
---
-### 4. Build your application
-
-Create a client object to interact with the Form Recognizer SDK, and then call methods on that client object to interact with the service. The SDKs provide both synchronous and asynchronous methods. For more insight, try a [quickstart](quickstarts/get-started-sdks-rest-api.md?view=form-recog-3.0.0&preserve-view=true) in a language of your choice.
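-
-To illustrate the asynchronous pattern, here's a minimal Python sketch that uses the preview package's async client to analyze a document from a URL with the prebuilt layout model; the endpoint, key, and document URL are placeholders you supply.
-
-```python
-import asyncio
-
-from azure.core.credentials import AzureKeyCredential
-from azure.ai.formrecognizer.aio import DocumentAnalysisClient  # async client; requires the aiohttp package
-
-async def analyze_layout():
-    # Placeholders: supply your own endpoint, key, and a reachable document URL.
-    async with DocumentAnalysisClient("<your-endpoint>", AzureKeyCredential("<your-key>")) as client:
-        poller = await client.begin_analyze_document_from_url("prebuilt-layout", "<your-document-url>")
-        result = await poller.result()
-
-    for table in result.tables:
-        print(f"Found a table with {table.row_count} rows and {table.column_count} columns.")
-
-asyncio.run(analyze_layout())
-```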
-
-## Help options
-
-The [Microsoft Q&A](/answers/topics/azure-form-recognizer.html) and [Stack Overflow](https://stackoverflow.com/questions/tagged/azure-form-recognizer) forums are available for the developer community to ask and answer questions about Azure Form Recognizer and other services. Microsoft monitors the forums and replies to questions that the community has yet to answer. To make sure that we see your question, tag it with **`azure-form-recognizer`**.
-
-## Next steps
-
-> [!div class="nextstepaction"]
-> [**Explore Form Recognizer REST API 2023-02-28-preview**](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-2023-02-28-preview/operations/AnalyzeDocument)
applied-ai-services Service Limits https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/service-limits.md
- Title: Form Recognizer quotas and limits-
-description: Quick reference, detailed description, and best practices on Azure Form Recognizer service Quotas and Limits
------ Previously updated : 06/23/2023---
-# Form Recognizer service quotas and limits
-<!-- markdownlint-disable MD033 -->
---
-This article contains both a quick reference and detailed description of Azure Form Recognizer service Quotas and Limits for all [pricing tiers](https://azure.microsoft.com/pricing/details/form-recognizer/). It also contains some best practices to avoid request throttling.
-
-## Model usage
--
-> [!div class="checklist"]
->
-> * [**Form Recognizer SDKs**](quickstarts/get-started-sdks-rest-api.md)
-> * [**Form Recognizer REST API**](quickstarts/get-started-sdks-rest-api.md)
-> * [**Form Recognizer Studio v3.0**](quickstarts/try-form-recognizer-studio.md)
--
-> [!div class="checklist"]
->
-> * [**Form Recognizer SDKs**](quickstarts/get-started-sdks-rest-api.md)
-> * [**Form Recognizer REST API**](quickstarts/get-started-sdks-rest-api.md)
-> * [**Sample Labeling Tool v2.1**](https://fott-2-1.azurewebsites.net/)
--
-|Quota|Free (F0)<sup>1</sup>|Standard (S0)|
-|--|--|--|
-| **Transactions Per Second limit** | 1 | 15 (default value) |
-| Adjustable | No | Yes <sup>2</sup> |
-| **Max document size** | 4 MB | 500 MB |
-| Adjustable | No | No |
-| **Max number of pages (Analysis)** | 2 | 2000 |
-| Adjustable | No | No |
-| **Max size of labels file** | 10 MB | 10 MB |
-| Adjustable | No | No |
-| **Max size of OCR json response** | 500 MB | 500 MB |
-| Adjustable | No | No |
-| **Max number of Template models** | 500 | 5000 |
-| Adjustable | No | No |
-| **Max number of Neural models** | 100 | 500 |
-| Adjustable | No | No |
--
-## Custom model usage
-
-> [!div class="checklist"]
->
-> * [**Custom template model**](concept-custom-template.md)
-> * [**Custom neural model**](concept-custom-neural.md)
-> * [**Composed classification models**](concept-custom-classifier.md)
-> * [**Composed custom models**](concept-composed-models.md)
-
-|Quota|Free (F0) <sup>1</sup>|Standard (S0)|
-|--|--|--|
-| **Compose Model limit** | 5 | 200 (default value) |
-| Adjustable | No | No |
-| **Training dataset size * Neural** | 1 GB <sup>3</sup> | 1 GB (default value) |
-| Adjustable | No | No |
-| **Training dataset size * Template** | 50 MB <sup>4</sup> | 50 MB (default value) |
-| Adjustable | No | No |
-| **Max number of pages (Training) * Template** | 500 | 500 (default value) |
-| Adjustable | No | No |
-| **Max number of pages (Training) * Neural** | 50,000 | 50,000 (default value) |
-| Adjustable | No | No |
-| **Custom neural model train** | 10 per month | 20 per month |
-| Adjustable | No |Yes <sup>3</sup>|
-| **Max number of pages (Training) * Classifier** | 10,000 | 10,000 (default value) |
-| Adjustable | No | No |
-| **Training dataset size * Classifier** | 1 GB | 1 GB (default value) |
-| Adjustable | No | No |
---
-## Custom model limits
-
-> [!div class="checklist"]
->
-> * [**Custom template model**](concept-custom-template.md)
-> * [**Composed custom models**](concept-composed-models.md)
-
-| Quota | Free (F0) <sup>1</sup> | Standard (S0) |
-|--|--|--|
-| **Compose Model limit** | 5 | 200 (default value) |
-| Adjustable | No | No |
-| **Training dataset size** | 50 MB | 50 MB (default value) |
-| Adjustable | No | No |
-| **Max number of pages (Training)** | 500 | 500 (default value) |
-| Adjustable | No | No |
---
-> <sup>1</sup> For **Free (F0)** pricing tier see also monthly allowances at the [pricing page](https://azure.microsoft.com/pricing/details/form-recognizer/).</br>
-> <sup>2</sup> See [best practices](#example-of-a-workload-pattern-best-practice) and [adjustment instructions](#create-and-submit-support-request).</br>
-> <sup>3</sup> Neural models training count is reset every calendar month. Open a support request to increase the monthly training limit.
-> <sup>4</sup> This limit applies to all documents found in your training dataset folder prior to any labeling-related updates.
-
-## Detailed description, Quota adjustment, and best practices
-
-Before requesting a quota increase (where applicable), ensure that it's necessary. The Form Recognizer service uses autoscaling to bring in the required computational resources on demand while keeping customer costs low; it deprovisions unused resources rather than maintaining an excessive amount of hardware capacity.
-
-If your application returns Response Code 429 (*Too many requests*) and your workload is within the defined limits, the service is most likely scaling up to meet your demand but hasn't yet reached the required scale, so it doesn't immediately have enough resources to serve the request. This state is transient and shouldn't last long. For more information, *see* [Quotas and Limits quick reference](#form-recognizer-service-quotas-and-limits).
-
-### General best practices to mitigate throttling during autoscaling
-
-To minimize issues related to throttling (Response Code 429), we recommend using the following techniques:
-
-* Implement retry logic in your application
-* Avoid sharp changes in the workload. Increase the workload gradually <br/>
-*Example.* Your application is using Form Recognizer and your current workload is 10 TPS (transactions per second). The next second, you increase the load to 40 TPS (that is, four times more). The service immediately starts scaling up to fulfill the new load, but it likely can't do so within a second, so some of the requests get Response Code 429.
-
-The next sections describe specific cases of adjusting quotas.
-Jump to [Form Recognizer: increasing concurrent request limit](#create-and-submit-support-request)
-
-### Increasing transactions per second request limit
-
-By default, a Form Recognizer resource is limited to 15 transactions per second (TPS). For the Standard pricing tier, this limit can be increased. Before submitting the request, ensure you're familiar with the material in [this section](#detailed-description-quota-adjustment-and-best-practices) and aware of these [best practices](#example-of-a-workload-pattern-best-practice).
-
-Increasing the Concurrent Request limit does **not** directly affect your costs. The Form Recognizer service uses a "pay only for what you use" model. The limit defines how high the service may scale before it starts throttling your requests.
-
-The existing value of the Concurrent Request limit parameter is **not** visible via the Azure portal, command-line tools, or API requests. To verify the existing value, create an Azure support request.
-
-If you would like to increase your transactions per second, you can enable autoscaling on your resource. To do so, follow the steps in [enable autoscaling](../../cognitive-services/autoscale.md). You can also submit a support request to increase your TPS limit.
-
-#### Have the required information ready
-
-* Form Recognizer Resource ID
-* Region
-
-* **How to get information (Base model)**:
- * Go to [Azure portal](https://portal.azure.com/)
- * Select the Form Recognizer Resource for which you would like to increase the transaction limit
- * Select *Properties* (*Resource Management* group)
- * Copy and save the values of the following fields:
- * **Resource ID**
- * **Location** (your endpoint Region)
-
-#### Create and submit support request
-
-Initiate the increase of the transactions per second (TPS) limit for your resource by submitting a support request:
-
-* Ensure you have the [required information](#have-the-required-information-ready)
-* Go to [Azure portal](https://portal.azure.com/)
-* Select the Form Recognizer Resource for which you would like to increase the TPS limit
-* Select *New support request* (*Support + troubleshooting* group)
-* A new window appears with autopopulated information about your Azure Subscription and Azure Resource
-* Enter *Summary* (like "Increase Form Recognizer TPS limit")
-* In *Problem type*, select "Quota or usage validation"
-* Select *Next: Solutions*
-* Proceed further with the request creation
-* Under the *Details* tab, enter the following information in the *Description* field:
- * A note that the request is about **Form Recognizer** quota.
- * Provide a TPS expectation you would like to scale to meet.
- * Azure resource information you [collected](#have-the-required-information-ready).
- * Complete entering the required information and select *Create* button in *Review + create* tab
- * Note the support request number in Azure portal notifications. You're contacted shortly for further processing
-
-## Example of a workload pattern best practice
-
-This example presents the approach we recommend following to mitigate possible request throttling while [autoscaling is in progress](#detailed-description-quota-adjustment-and-best-practices). It isn't an *exact recipe*, but merely a template we invite you to follow and adjust as necessary.
-
- Let us suppose that a Form Recognizer resource has the default limit set. Start the workload to submit your analyze requests. If you find that you're seeing frequent throttling with response code 429, start by implementing an exponential backoff on the GET analyze response request: use a progressively longer wait time between retries for consecutive error responses, for example, a 2-5-13-34 pattern of delays between requests. In general, don't call the GET analyze response endpoint more than once every 2 seconds for a corresponding POST request.
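-
-To make that pattern concrete, here's a minimal Python sketch of the backoff, assuming you're polling the operation URL returned by the POST analyze call with the `requests` library; the operation URL and key are placeholders you supply.
-
-```python
-import time
-import requests
-
-def get_analyze_result(get_url, key, max_retries=4):
-    """Poll an analyze operation; back off on HTTP 429 with a 2-5-13-34 second pattern."""
-    delays = [2, 5, 13, 34]
-    throttled = 0
-    while True:
-        resp = requests.get(get_url, headers={"Ocp-Apim-Subscription-Key": key})
-        if resp.status_code == 429:
-            if throttled >= max_retries:
-                raise RuntimeError("Still throttled after repeated retries")
-            time.sleep(delays[throttled])  # progressively longer waits: 2, 5, 13, 34 seconds
-            throttled += 1
-            continue
-        resp.raise_for_status()
-        body = resp.json()
-        if body.get("status") in ("succeeded", "failed"):
-            return body
-        time.sleep(2)  # still running: poll at most once every 2 seconds
-```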
-
-If you find that you're being throttled on the number of POST requests for documents being submitted, consider adding a delay between the requests. If your workload requires a higher degree of concurrent processing, you then need to create a support request to increase your service limits on transactions per second.
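-
-If you want to smooth submissions proactively, a simple client-side pacing loop can help. This is a sketch only; `post_one_document` stands in for whatever function issues your POST analyze request, and the TPS budget is a placeholder.
-
-```python
-import time
-
-def submit_paced(documents, post_one_document, max_tps=10):
-    """Submit analyze requests no faster than max_tps requests per second."""
-    min_interval = 1.0 / max_tps
-    for doc in documents:
-        started = time.monotonic()
-        post_one_document(doc)  # your POST /analyze call
-        elapsed = time.monotonic() - started
-        if elapsed < min_interval:
-            time.sleep(min_interval - elapsed)
-```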
-
-Generally, it's highly recommended to test the workload and the workload patterns before going to production.
-
-## Next steps
-
-> [!div class="nextstepaction"]
-> [Learn about error codes and troubleshooting](v3-error-guide.md)
applied-ai-services Studio Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/studio-overview.md
- Title: What is Form Recognizer Studio?-
-description: Learn how to set up and use Form Recognizer Studio to test features of Azure Form Recognizer on the web.
----- Previously updated : 02/14/2023-
-monikerRange: 'form-recog-3.0.0'
--
-<!-- markdownlint-disable MD033 -->
-# What is Form Recognizer Studio?
-
-**This article applies to:** ![Form Recognizer v3.0 checkmark](media/yes-icon.png) **Form Recognizer v3.0**.
-
-Form Recognizer Studio is an online tool to visually explore, understand, train, and integrate features from the Form Recognizer service into your applications. The studio provides a platform for you to experiment with the different Form Recognizer models and sample their returned data in an interactive manner without the need to write code.
-
-The studio supports Form Recognizer v3.0 models and v3.0 model training. Previously trained v2.1 models with labeled data are supported, but not v2.1 model training. Refer to the [REST API migration guide](v3-migration-guide.md) for detailed information about migrating from v2.1 to v3.0.
-
-## Get started using Form Recognizer Studio
-
-1. To use Form Recognizer Studio, you need the following assets:
-
- * **Azure subscription** - [Create one for free](https://azure.microsoft.com/free/cognitive-services/).
-
- * **Cognitive Services or Form Recognizer resource**. Once you have your Azure subscription, create a [single-service](https://portal.azure.com/#create/Microsoft.CognitiveServicesFormRecognizer) or [multi-service](https://portal.azure.com/#create/Microsoft.CognitiveServicesAllInOne) resource, in the Azure portal to get your key and endpoint. Use the free pricing tier (`F0`) to try the service, and upgrade later to a paid tier for production.
-
-1. Navigate to the [Form Recognizer Studio](https://formrecognizer.appliedai.azure.com/). If it's your first time logging in, a popup window appears prompting you to configure your service resource. You have two options:
-
- **a. Access by Resource**.
-
- * Choose your existing subscription.
- * Select an existing resource group within your subscription or create a new one.
- * Select your existing Form Recognizer or Cognitive services resource.
-
- :::image type="content" source="media/studio/welcome-to-studio.png" alt-text="Screenshot of the configure service resource window.":::
-
- **b. Access by API endpoint and key**.
-
- * Retrieve your endpoint and key from the Azure portal.
- * Go to the overview page for your resource and select **Keys and Endpoint** from the left navigation bar.
- * Enter the values in the appropriate fields.
-
- :::image type="content" source="media/containers/keys-and-endpoint.png" alt-text="Screenshot: keys and endpoint location in the Azure portal.":::
-
-1. Once you've completed configuring your resource, you're able to try the different models offered by Form Recognizer Studio. From the front page, select any Form Recognizer model to try using with a no-code approach.
-
- :::image type="content" source="media/studio/form-recognizer-studio-front.png" alt-text="Screenshot of Form Recognizer Studio front page.":::
-
-1. After you've tried Form Recognizer Studio, use the [**C#**](quickstarts/get-started-sdks-rest-api.md?view=form-recog-3.0.0&preserve-view=true), [**Java**](quickstarts/get-started-sdks-rest-api.md?view=form-recog-3.0.0&preserve-view=true), [**JavaScript**](quickstarts/get-started-sdks-rest-api.md?view=form-recog-3.0.0&preserve-view=true) or [**Python**](quickstarts/get-started-sdks-rest-api.md?view=form-recog-3.0.0&preserve-view=true) client libraries or the [**REST API**](quickstarts/get-started-sdks-rest-api.md?view=form-recog-3.0.0&preserve-view=true) to get started incorporating Form Recognizer models into your own applications.
-
-To learn more about each model, *see* concept pages.
--
-### Manage your resource
-
- To view resource details such as name and pricing tier, select the **Settings** icon in the top-right corner of the Form Recognizer Studio home page and select the **Resource** tab. If you have access to other resources, you can switch resources as well.
--
-With Form Recognizer, you can quickly automate your data processing in applications and workflows, easily enhance data-driven strategies, and skillfully enrich document search capabilities.
-
-## Next steps
-
-* Visit [Form Recognizer Studio](https://formrecognizer.appliedai.azure.com/studio) to begin using the models presented by the service.
-
-* For more information on Form Recognizer capabilities, see [Azure Form Recognizer overview](overview.md).
applied-ai-services Supervised Table Tags https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/supervised-table-tags.md
- Title: "Train your custom template model with the sample-labeling tool and table tags"-
-description: Learn how to effectively use supervised table tag labeling.
----- Previously updated : 01/09/2023-
-#Customer intent: As a user of the Form Recognizer custom model service, I want to ensure I'm training my model in the best way.
-monikerRange: 'form-recog-2.1.0'
--
-# Train models with the sample-labeling tool
-
-**This article applies to:** ![Form Recognizer v2.1 checkmark](media/yes-icon.png) **Form Recognizer v2.1**.
-
->[!TIP]
->
-> * For an enhanced experience and advanced model quality, try the [Form Recognizer v3.0 Studio](https://formrecognizer.appliedai.azure.com/studio).
-> * The v3.0 Studio supports any model trained with v2.1 labeled data.
-> * You can refer to the [API migration guide](v3-migration-guide.md) for detailed information about migrating from v2.1 to v3.0.
-> * *See* our [**REST API**](quickstarts/get-started-sdks-rest-api.md?view=form-recog-3.0.0&preserve-view=true) or [**C#**](quickstarts/get-started-sdks-rest-api.md?view=form-recog-3.0.0&preserve-view=true), [**Java**](quickstarts/get-started-sdks-rest-api.md?view=form-recog-3.0.0&preserve-view=true), [**JavaScript**](quickstarts/get-started-sdks-rest-api.md?view=form-recog-3.0.0&preserve-view=true), or [Python](quickstarts/get-started-sdks-rest-api.md?view=form-recog-3.0.0&preserve-view=true) SDK quickstarts to get started with version v3.0.
-
-In this article, you'll learn how to train your custom template model with table tags (labels). Some scenarios require more complex labeling than simply aligning key-value pairs. Such scenarios include extracting information from forms with complex hierarchical structures or encountering items that aren't automatically detected and extracted by the service. In these cases, you can use table tags to train your custom template model.
-
-## When should I use table tags?
-
-Here are some examples of when using table tags would be appropriate:
-
-- The data that you wish to extract is presented as tables in your forms, and the structure of the tables is meaningful. For instance, each row of the table represents one item and each column of the row represents a specific feature of that item. In this case, you could use a table tag where a column represents features and a row represents information about each feature.
-- The data you wish to extract isn't presented in specific form fields, but semantically it could fit in a two-dimensional grid. For instance, your form has a list of people that includes a first name, a last name, and an email address, and you would like to extract this information. In this case, you could use a table tag with first name, last name, and email address as columns, and each row is populated with information about a person from your list.
-
-> [!NOTE]
-> Form Recognizer automatically finds and extracts all tables in your documents, whether the tables are tagged or not. Therefore, you don't have to label every table from your form with a table tag, and your table tags don't have to replicate the structure of every table found in your form. Tables extracted automatically by Form Recognizer are included in the pageResults section of the JSON output.
-
-## Create a table tag with the Form Recognizer Sample Labeling tool
-<!-- markdownlint-disable MD004 -->
-* Determine whether you want a **dynamic** or **fixed-size** table tag. If the number of rows varies from document to document, use a dynamic table tag. If the number of rows is consistent across your documents, use a fixed-size table tag.
-* If your table tag is dynamic, define the column names and the data type and format for each column.
-* If your table is fixed-size, define the column name, row name, data type, and format for each tag.
-
-## Label your table tag data
-
-* If your project has a table tag, you can open the labeling panel, and populate the tag as you would label key-value fields.
-
-## Next steps
-
-Follow our quickstart to train and use your custom Form Recognizer model:
-
-> [!div class="nextstepaction"]
-> [Train with labels using the Sample Labeling tool](label-tool.md)
-
-## See also
-
-* [Train a custom model with the Sample Labeling tool](label-tool.md)
->
applied-ai-services Tutorial Azure Function https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/tutorial-azure-function.md
- Title: "Tutorial: Use an Azure Function to process stored documents"-
-description: This guide shows you how to use an Azure function to trigger the processing of documents that are uploaded to an Azure blob storage container.
------ Previously updated : 10/31/2022----
-# Tutorial: Use Azure Functions and Python to process stored documents
-
-Form Recognizer can be used as part of an automated data processing pipeline built with Azure Functions. This guide will show you how to use Azure Functions to process documents that are uploaded to an Azure blob storage container. This workflow extracts table data from stored documents using the Form Recognizer layout model and saves the table data in a .csv file in Azure. You can then display the data using Microsoft Power BI (not covered here).
--
-In this tutorial, you learn how to:
-
-> [!div class="checklist"]
->
-> * Create an Azure Storage account.
-> * Create an Azure Functions project.
-> * Extract layout data from uploaded forms.
-> * Upload extracted layout data to Azure Storage.
-
-## Prerequisites
-
-* **Azure subscription** - [Create one for free](https://azure.microsoft.com/free/cognitive-services)
-
-* **A Form Recognizer resource**. Once you have your Azure subscription, create a [Form Recognizer resource](https://portal.azure.com/#create/Microsoft.CognitiveServicesFormRecognizer) in the Azure portal to get your key and endpoint. You can use the free pricing tier (`F0`) to try the service, and upgrade later to a paid tier for production.
-
- * After your resource deploys, select **Go to resource**. You need the key and endpoint from the resource you create to connect your application to the Form Recognizer API. You'll paste your key and endpoint into the code below later in the tutorial:
-
- :::image type="content" source="media/containers/keys-and-endpoint.png" alt-text="Screenshot: keys and endpoint location in the Azure portal.":::
-
-* [**Python 3.6.x, 3.7.x, 3.8.x or 3.9.x**](https://www.python.org/downloads/) (Python 3.10.x isn't supported for this project).
-
-* The latest version of [**Visual Studio Code**](https://code.visualstudio.com/) (VS Code) with the following extensions installed:
-
- * [**Azure Functions extension**](https://marketplace.visualstudio.com/items?itemName=ms-azuretools.vscode-azurefunctions). Once it's installed, you should see the Azure logo in the left-navigation pane.
-
- * [**Azure Functions Core Tools**](../../azure-functions/functions-run-local.md?tabs=v3%2cwindows%2ccsharp%2cportal%2cbash) version 3.x (Version 4.x isn't supported for this project).
-
- * [**Python Extension**](https://marketplace.visualstudio.com/items?itemName=ms-python.python) for Visual Studio code. For more information, *see* [Getting Started with Python in VS Code](https://code.visualstudio.com/docs/python/python-tutorial)
-
-* [**Azure Storage Explorer**](https://azure.microsoft.com/features/storage-explorer/) installed.
-
-* **A local PDF document to analyze**. You can use our [sample pdf document](https://github.com/Azure-Samples/cognitive-services-REST-api-samples/blob/master/curl/form-recognizer/sample-layout.pdf) for this project.
-
-## Create an Azure Storage account
-
-1. [Create a general-purpose v2 Azure Storage account](https://portal.azure.com/#create/Microsoft.StorageAccount-ARM) in the Azure portal. If you don't know how to create an Azure storage account with a storage container, follow these quickstarts:
-
- * [Create a storage account](../../storage/common/storage-account-create.md). When you create your storage account, select **Standard** performance in the **Instance details** > **Performance** field.
- * [Create a container](../../storage/blobs/storage-quickstart-blobs-portal.md#create-a-container). When you create your container, set **Public access level** to **Container** (anonymous read access for containers and files) in the **New Container** window.
-
-1. On the left pane, select the **Resource sharing (CORS)** tab, and remove the existing CORS policy if any exists.
-
-1. Once your storage account has deployed, create two empty blob storage containers, named **input** and **output**.
-
-## Create an Azure Functions project
-
-1. Create a new folder named **functions-app** to contain the project and choose **Select**.
-
-1. Open Visual Studio Code and open the Command Palette (Ctrl+Shift+P). Search for and choose **Python: Select Interpreter** → choose an installed Python interpreter that is version 3.6.x, 3.7.x, 3.8.x or 3.9.x. This selection adds the Python interpreter path you selected to your project.
-
-1. Select the Azure logo from the left-navigation pane.
-
- * You'll see your existing Azure resources in the Resources view.
-
- * Select the Azure subscription that you're using for this project and below you should see the Azure Function App.
-
- :::image type="content" source="media/tutorial-azure-function/azure-extensions-visual-studio-code.png" alt-text="Screenshot of a list showing your Azure resources in a single, unified view.":::
-
-1. Select the Workspace (Local) section located below your listed resources. Select the plus symbol and choose the **Create Function** button.
-
- :::image type="content" source="media/tutorial-azure-function/workspace-create-function.png" alt-text="Screenshot showing where to begin creating an Azure function.":::
-
-1. When prompted, choose **Create new project** and navigate to the **function-app** directory. Choose **Select**.
-
-1. You'll be prompted to configure several settings:
-
- * **Select a language** → choose Python.
-
- * **Select a Python interpreter to create a virtual environment** → select the interpreter you set as the default earlier.
-
- * **Select a template** → choose **Azure Blob Storage trigger** and give the trigger a name or accept the default name. Press **Enter** to confirm.
-
- * **Select setting** → choose **➕Create new local app setting** from the dropdown menu.
-
- * **Select subscription** → choose your Azure subscription with the storage account you created → select your storage account → then select the name of the storage input container (in this case, `input/{name}`). Press **Enter** to confirm.
-
-    * **Select how you would like to open your project** → choose **Open the project in the current window** from the dropdown menu.
-
-1. Once you've completed these steps, VS Code will add a new Azure Function project with a *\_\_init\_\_.py* Python script. This script will be triggered when a file is uploaded to the **input** storage container:
-
- ```python
- import logging
-
- import azure.functions as func
--
- def main(myblob: func.InputStream):
- logging.info(f"Python blob trigger function processed blob \n"
- f"Name: {myblob.name}\n"
- f"Blob Size: {myblob.length} bytes")
- ```
-
-## Test the function
-
-1. Press F5 to run the basic function. VS Code will prompt you to select a storage account to interface with.
-
-1. Select the storage account you created and continue.
-
-1. Open Azure Storage Explorer and upload the sample PDF document to the **input** container. Then check the VS Code terminal. The script should log that it was triggered by the PDF upload.
-
- :::image type="content" source="media/tutorial-azure-function/visual-studio-code-terminal-test.png" alt-text="Screenshot of the VS Code terminal after uploading a new document.":::
-
-1. Stop the script before continuing.
-
-## Add document processing code
-
-Next, you'll add your own code to the Python script to call the Form Recognizer service and parse the uploaded documents using the Form Recognizer [layout model](concept-layout.md).
-
-1. In VS Code, navigate to the function's *requirements.txt* file. This file defines the dependencies for your script. Add the following Python packages to the file:
-
- ```txt
- cryptography
- azure-functions
- azure-storage-blob
- azure-identity
- requests
- pandas
- numpy
- ```
-
-1. Then, open the *\_\_init\_\_.py* script. Add the following `import` statements:
-
- ```Python
- import logging
- from azure.storage.blob import BlobServiceClient
- import azure.functions as func
- import json
- import time
- from requests import get, post
- import os
- import requests
- from collections import OrderedDict
- import numpy as np
- import pandas as pd
- ```
-
-1. You can leave the generated `main` function as-is. You'll add your custom code inside this function.
-
- ```python
- # This part is automatically generated
- def main(myblob: func.InputStream):
- logging.info(f"Python blob trigger function processed blob \n"
- f"Name: {myblob.name}\n"
- f"Blob Size: {myblob.length} bytes")
- ```
-
-1. The following code block calls the Form Recognizer [Analyze Layout](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-v2-1/operations/AnalyzeLayoutAsync) API on the uploaded document. Fill in your endpoint and key values.
-
- ```Python
- # This is the call to the Form Recognizer endpoint
- endpoint = r"Your Form Recognizer Endpoint"
- apim_key = "Your Form Recognizer Key"
- post_url = endpoint + "/formrecognizer/v2.1/layout/analyze"
- source = myblob.read()
-
- headers = {
- # Request headers
- 'Content-Type': 'application/pdf',
- 'Ocp-Apim-Subscription-Key': apim_key,
- }
-
- text1=os.path.basename(myblob.name)
- ```
-
- > [!IMPORTANT]
- > Remember to remove the key from your code when you're done, and never post it publicly. For production, use a secure way of storing and accessing your credentials like [Azure Key Vault](../../key-vault/general/overview.md). For more information, *see* Cognitive Services [security](../../cognitive-services/security-features.md).
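-
-    For example, rather than hard-coding these values, you could read them from the Function App's application settings, which are exposed to your code as environment variables. This is a sketch only; the setting names `FORM_RECOGNIZER_ENDPOINT` and `FORM_RECOGNIZER_KEY` are placeholders you define yourself.
-
-    ```python
-    import os
-
-    # Application settings of the Function App are exposed as environment variables
-    # (use local.settings.json to supply them when running locally).
-    endpoint = os.environ["FORM_RECOGNIZER_ENDPOINT"]
-    apim_key = os.environ["FORM_RECOGNIZER_KEY"]
-    ```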
-
-1. Next, add code to query the service and get the returned data.
-
- ```python
- resp = requests.post(url=post_url, data=source, headers=headers)
-
- if resp.status_code != 202:
- print("POST analyze failed:\n%s" % resp.text)
- quit()
- print("POST analyze succeeded:\n%s" % resp.headers)
- get_url = resp.headers["operation-location"]
-
- wait_sec = 25
-
- time.sleep(wait_sec)
- # The layout API is async therefore the wait statement
-
- resp = requests.get(url=get_url, headers={"Ocp-Apim-Subscription-Key": apim_key})
-
- resp_json = json.loads(resp.text)
-
- status = resp_json["status"]
-
- if status == "succeeded":
- print("POST Layout Analysis succeeded:\n%s")
- results = resp_json
- else:
- print("GET Layout results failed:\n%s")
- quit()
-
- results = resp_json
-
- ```
-
-1. Add the following code to connect to the Azure Storage **output** container. Fill in your own values for the storage account name and key. You can get the key on the **Access keys** tab of your storage resource in the Azure portal.
-
- ```Python
- # This is the connection to the blob storage, with the Azure Python SDK
-    blob_service_client = BlobServiceClient.from_connection_string("DefaultEndpointsProtocol=https;AccountName=<storage-account-name>;AccountKey=<storage-account-key>;EndpointSuffix=core.windows.net")
- container_client=blob_service_client.get_container_client("output")
- ```
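-
-    If you prefer not to embed the account key, a minimal alternative sketch is to authenticate with `DefaultAzureCredential` (the `azure-identity` package is already in *requirements.txt*). This assumes the function's identity has been granted a data role such as **Storage Blob Data Contributor** on the storage account; the account URL below is a placeholder.
-
-    ```python
-    from azure.identity import DefaultAzureCredential
-
-    # Connect to the storage account with Azure AD credentials instead of an account key
-    blob_service_client = BlobServiceClient(
-        account_url="https://<storage-account-name>.blob.core.windows.net",
-        credential=DefaultAzureCredential())
-    container_client = blob_service_client.get_container_client("output")
-    ```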
-
- The following code parses the returned Form Recognizer response, constructs a .csv file, and uploads it to the **output** container.
-
- > [!IMPORTANT]
- > You will likely need to edit this code to match the structure of your own form documents.
-
- ```python
- # The code below extracts the json format into tabular data.
- # Please note that you need to adjust the code below to your form structure.
- # It probably won't work out-of-the-box for your specific form.
- pages = results["analyzeResult"]["pageResults"]
-
- def make_page(p):
- res=[]
- res_table=[]
- y=0
- page = pages[p]
- for tab in page["tables"]:
- for cell in tab["cells"]:
- res.append(cell)
- res_table.append(y)
- y=y+1
-
- res_table=pd.DataFrame(res_table)
- res=pd.DataFrame(res)
- res["table_num"]=res_table[0]
- h=res.drop(columns=["boundingBox","elements"])
- h.loc[:,"rownum"]=range(0,len(h))
- num_table=max(h["table_num"])
- return h, num_table, p
-
- h, num_table, p= make_page(0)
-
- for k in range(num_table+1):
- new_table=h[h.table_num==k]
- new_table.loc[:,"rownum"]=range(0,len(new_table))
- row_table=pages[p]["tables"][k]["rows"]
- col_table=pages[p]["tables"][k]["columns"]
- b=np.zeros((row_table,col_table))
- b=pd.DataFrame(b)
- s=0
- for i,j in zip(new_table["rowIndex"],new_table["columnIndex"]):
- b.loc[i,j]=new_table.loc[new_table.loc[s,"rownum"],"text"]
- s=s+1
-
- ```
-
-1. Finally, the last block of code uploads the extracted table and text data to your blob storage element.
-
- ```Python
- # Here is the upload to the blob storage
- tab1_csv=b.to_csv(header=False,index=False,mode='w')
- name1=(os.path.splitext(text1)[0]) +'.csv'
- container_client.upload_blob(name=name1,data=tab1_csv)
- ```
-
-## Run the function
-
-1. Press F5 to run the function again.
-
-1. Use Azure Storage Explorer to upload a sample PDF form to the **input** storage container. This action should trigger the script to run, and you should then see the resulting .csv file (displayed as a table) in the **output** container.
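-
-    If you'd rather trigger the function from code than from Storage Explorer, the following is a minimal sketch that uploads a local PDF to the **input** container; the connection string and file name are placeholders.
-
-    ```python
-    from azure.storage.blob import BlobServiceClient
-
-    # Upload a sample PDF to the "input" container to trigger the blob-triggered function
-    blob_service_client = BlobServiceClient.from_connection_string("<your-storage-connection-string>")
-    with open("sample-form.pdf", "rb") as data:
-        blob_service_client.get_container_client("input").upload_blob(name="sample-form.pdf", data=data)
-    ```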
-
-You can connect this container to Power BI to create rich visualizations of the data it contains.
-
-## Next steps
-
-In this tutorial, you learned how to use an Azure Function written in Python to automatically process uploaded PDF documents and output their contents in a more data-friendly format. Next, learn how to use Power BI to display the data.
-
-> [!div class="nextstepaction"]
-> [Microsoft Power BI](https://powerbi.microsoft.com/integrations/azure-table-storage/)
-
-* [What is Form Recognizer?](overview.md)
-* Learn more about the [layout model](concept-layout.md)
applied-ai-services Tutorial Logic Apps https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/tutorial-logic-apps.md
- Title: Use Azure Logic Apps with Form Recognizer-
-description: A tutorial outlining how to use Form Recognizer with Logic Apps.
----- Previously updated : 08/22/2022-
-monikerRange: 'form-recog-2.1.0'
--
-# Tutorial: Use Azure Logic Apps with Form Recognizer
-
-**This article applies to:** ![Form Recognizer v2.1 checkmark](media/yes-icon.png) **Form Recognizer v2.1**.
-
-> [!IMPORTANT]
->
-> This tutorial and the Logic App Form Recognizer connector target Form Recognizer REST API v2.1 and must be used in conjunction with the [FOTT Sample Labeling tool](https://fott-2-1.azurewebsites.net/).
-
-Azure Logic Apps is a cloud-based platform that can be used to automate workflows without writing a single line of code. The platform enables you to easily integrate Microsoft and third-party applications with your apps, data, services, and systems. A Logic App is the Azure resource you create when you want to develop a workflow. Here are a few examples of what you can do with a Logic App:
-
-* Create business processes and workflows visually.
-* Integrate workflows with software as a service (SaaS) and enterprise applications.
-* Automate enterprise application integration (EAI), business-to-business (B2B), and electronic data interchange (EDI) tasks.
-
-For more information, *see* [Logic Apps Overview](../../logic-apps/logic-apps-overview.md).
-
- In this tutorial, you'll learn how to build a Logic App connector flow to automate the following tasks:
-
-> [!div class="checklist"]
->
-> * Detect when an invoice has been added to a OneDrive folder.
-> * Process the invoice using the Form Recognizer prebuilt-invoice model.
-> * Send the extracted information from the invoice to a pre-specified email address.
-
-## Prerequisites
-
-To complete this tutorial, you'll need the following resources:
-
-* **An Azure subscription**. You can [create a free Azure subscription](https://azure.microsoft.com/free/cognitive-services/).
-
-* **A Form Recognizer resource**. Once you have your Azure subscription, [create a Form Recognizer resource](https://portal.azure.com/#create/Microsoft.CognitiveServicesFormRecognizer) in the Azure portal to get your key and endpoint. If you have an existing Form Recognizer resource, navigate directly to your resource page. You can use the free pricing tier (F0) to try the service, and upgrade later to a paid tier for production.
-
- 1. After the resource deploys, select **Go to resource**.
-
- 1. Copy the **Keys and Endpoint** values from your resource in the Azure portal and paste them in a convenient location, such as *Microsoft Notepad*. You'll need the key and endpoint values to connect your application to the Form Recognizer API.
-
- :::image border="true" type="content" source="media/containers/keys-and-endpoint.png" alt-text="Still photo showing how to access resource key and endpoint URL.":::
-
- > [!TIP]
- > For more information, *see* [**create a Form Recognizer resource**](create-a-form-recognizer-resource.md).
-
-* A free [**OneDrive**](https://onedrive.live.com/signup) or [**OneDrive for Business**](https://www.microsoft.com/microsoft-365/onedrive/onedrive-for-business) cloud storage account.
-
- > [!NOTE]
- >
- > * OneDrive is intended for personal storage.
- > * OneDrive for Business is part of Office 365 and is designed for organizations. It provides cloud storage where you can store, share, and sync all work files.
- >
-
-* A free [**Outlook online**](https://signup.live.com/signup.aspx?lic=1&mkt=en-ca) or [**Office 365**](https://www.microsoft.com/microsoft-365/outlook/email-and-calendar-software-microsoft-outlook) email account.
-
-* **A sample invoice to test your Logic App**. You can download and use our [sample invoice document](https://raw.githubusercontent.com/Azure-Samples/cognitive-services-REST-api-samples/master/curl/form-recognizer/invoice-logic-apps-tutorial.pdf) for this tutorial.
-
-## Create a OneDrive folder
-
-Before we jump into creating the Logic App, we have to set up a OneDrive folder.
-
-1. Go to your [OneDrive](https://onedrive.live.com/) or [OneDrive for Business](https://www.microsoft.com/microsoft-365/onedrive/onedrive-for-business) home page.
-
-1. Select the **+ New** drop-down menu in the upper-left corner and select **Folder**.
-
-1. Enter a name for your new folder and select **Create**.
-
-1. You should see the new folder in your files. For now, we're done with OneDrive, but you'll need to access this folder later.
--
-## Create a Logic App resource
-
-At this point, you should have a Form Recognizer resource and a OneDrive folder all set. Now, it's time to create a Logic App resource.
-
-1. Select **Create a resource** from the Azure home page.
-
-1. Search for and choose **Logic App** from the search bar.
-
-1. Select the **Create** button.
-
- :::image border="true" type="content" source="media/logic-apps-tutorial/logic-app-connector-demo-five.gif" alt-text="GIF showing how to create a Logic App resource.":::
-
-1. Next, you're going to fill out the **Create Logic App** fields with the following values:
-
- * **Subscription**. Select your current subscription.
- * **Resource group**. The [Azure resource group](/azure/cloud-adoption-framework/govern/resource-consistency/resource-access-management#what-is-an-azure-resource-group) that will contain your resource. Choose the same resource group you have for your Form Recognizer resource.
- * **Type**. Select **Consumption**. The Consumption resource type runs in global, multi-tenant Azure Logic Apps and uses the [Consumption billing model](../../logic-apps/logic-apps-pricing.md#consumption-pricing).
- * **Logic App name**. Enter a name for your resource. We recommend using a descriptive name, for example *YourNameLogicApp*.
- * **Region**. Select your local region.
- * **Enable log analytics**. For this project, select **No**.
-
-1. When you're done, you should have something similar to the image below (Resource group, Logic App name, and Region may be different). After checking these values, select **Review + create** in the bottom-left corner.
-
- :::image border="true" type="content" source="media/logic-apps-tutorial/logic-app-connector-demo-six.png" alt-text="Image showing correct values to create a Logic App resource.":::
-
-1. A short validation check should run. After it completes successfully, select **Create** in the bottom-left corner.
-
-1. You'll be redirected to a screen that says **Deployment in progress**. Give Azure some time to deploy; it can take a few minutes. After the deployment is complete, you should see a banner that says, **Your deployment is complete**. When you reach this screen, select **Go to resource**.
-
- :::image border="true" type="content" source="media/logic-apps-tutorial/logic-app-connector-demo-seven.gif" alt-text="GIF showing how to get to newly created Logic App resource.":::
-
-1. You'll be redirected to the **Logic Apps Designer** page. There's a short video for a quick introduction to Logic Apps available on the home screen. When you're ready to begin designing your Logic App, select the **Blank Logic App** button.
-
- :::image border="true" type="content" source="media/logic-apps-tutorial/logic-app-connector-demo-eight.png" alt-text="Image showing how to enter the Logic App Designer.":::
-
-1. You should see a screen that looks like the one below. Now, you're ready to start designing and implementing your Logic App.
-
- :::image border="true" type="content" source="media/logic-apps-tutorial/logic-app-connector-demo-nine.png" alt-text="Image of the Logic App Designer.":::
-
-## Create automation flow
-
-Now that you have the Logic App connector resource set up and configured, the only thing left is to create the automation flow and test it out!
-
-1. Search for and select **OneDrive** or **OneDrive for Business** in the search bar.
-
-1. Select the **When a file is created** trigger.
-
-1. You'll see a OneDrive pop-up window and be prompted to log into your OneDrive account. Select **Sign in** and follow the prompts to connect your account.
-
- > [!TIP]
- > If you try to sign into the OneDrive connector using an Office 365 account, you may receive the following error: ***Sorry, we can't sign you in here with your @MICROSOFT.COM account.***
- >
-    > * This error happens because OneDrive is cloud-based storage for personal use that can be accessed with an Outlook.com or Microsoft Live account, not with an Office 365 account.
- > * You can use the OneDrive for Business connector if you want to use an Office 365 account. Make sure that you have [created a OneDrive Folder](#create-a-onedrive-folder) for this project in your OneDrive for Business account.
-
-1. After your account is connected, select the folder you created earlier in your OneDrive or OneDrive for Business account. Leave the other default values in place. Your window should look similar to the one below.
-
- :::image border="true" type="content" source="media/logic-apps-tutorial/logic-app-connector-demo-ten.gif" alt-text="GIF showing how to add the first node to workflow.":::
-
-1. Next, we're going to add a new step to the workflow. Select the plus button underneath the newly created OneDrive node.
-
-1. A new node should be added to the Logic App designer view. Search for "Form Recognizer" in the search bar and select **Analyze invoice** from the list.
-
-1. Now, you should see a window where you'll create your connection. Specifically, you're going to connect your Form Recognizer resource to the Logic Apps Designer Studio:
-
- * Enter a **Connection name**. It should be something easy to remember.
- * Enter the Form Recognizer resource **Endpoint URL** and **Account Key** that you copied previously. If you skipped this step earlier or lost the strings, you can navigate back to your Form Recognizer resource and copy them again. When you're done, select **Create**.
-
- :::image border="true" type="content" source="media/logic-apps-tutorial/logic-app-connector-demo-eleven.gif" alt-text="GIF showing how to add second node to workflow.":::
-
-1. You should see the parameters tab for the **Analyze Invoice** connector.
-
-1. Select the **Document/Image File Content** field. A dynamic content pop-up should appear. If it doesn't, select the **Add dynamic content** button below the field.
-
-1. Select **File content** from the pop-up list. This step essentially sends the file(s) to be analyzed to the Form Recognizer prebuilt-invoice model. Once you see the **File content** badge appear in the **Document/Image File Content** field, you've completed this step correctly.
-
- :::image border="true" type="content" source="media/logic-apps-tutorial/logic-app-connector-demo-twelve.gif" alt-text="GIF showing how to add dynamic content to second node.":::
-
-1. We need to add the last step. Once again, select the **+ New step** button to add another action.
-
-1. In the search bar, enter *Outlook* and select **Outlook.com** (personal) or **Office 365 Outlook** (work).
-
-1. In the actions bar, scroll down until you find **Send an email (V2)** and select this action.
-
-1. Just like with OneDrive, you'll be asked to sign into your Outlook or Office 365 Outlook account. After you sign in, you should see a window like the one pictured below. In this window, we're going to format the email to be sent with the dynamic content that Form Recognizer will extract from the invoice.
-
- :::image border="true" type="content" source="media/logic-apps-tutorial/logic-app-connector-demo-thirteen.gif" alt-text="GIF showing how to add final step to workflow.":::
-
-1. We're almost done! Make the following changes to the following fields:
-
- * **To**. Enter your personal or business email address or any other email address you have access to.
-
- * **Subject**. Enter ***Invoice received from:*** and then append dynamic content **Vendor name field Vendor name**.
-
- * **Body**. We're going to add specific information about the invoice:
-
- 1. Type ***Invoice ID:*** and append the dynamic content **Invoice ID field Invoice ID**.
-
- 1. On a new line type ***Invoice due date:*** and append the dynamic content **Invoice date field invoice date (date)**.
-
- 1. Type ***Amount due:*** and append the dynamic content **Amount due field Amount due (number)**.
-
-      1. Lastly, because the amount due is an important number, we also want to send the confidence score for this extraction in the email. To do this, type ***Amount due (confidence):*** and add the dynamic content **Amount due field confidence of amount due**. When you're done, the window should look similar to the screen below.
-
- :::image border="true" type="content" source="media/logic-apps-tutorial/logic-app-connector-demo-fifteen.png" alt-text="Image of completed Outlook node.":::
-
-1. Select **Save** in the upper-left corner.
-
- :::image border="true" type="content" source="media/logic-apps-tutorial/logic-app-connector-demo-sixteen.png" alt-text="Image of completed connector flow.":::
-
-> [!NOTE]
->
-> * The Logic App designer automatically adds a "for each" loop around the send email action. This is normal because the output format may return more than one invoice per PDF in the future.
-> * The current version only returns a single invoice per PDF.
-
-## Test automation flow
-
-Let's quickly review what we've done before we test our flow:
-
-> [!div class="checklist"]
->
-> * We created a trigger. In this scenario, the trigger is when a file is created in a pre-specified folder in our OneDrive account.
-> * We added a Form Recognizer action to our flow. In this scenario, we decided to use the invoice API to automatically analyze the invoices from the OneDrive folder.
-> * We added an Outlook.com action to our flow. For this scenario, we sent some of the analyzed invoice data to a pre-determined email address.
-
-Now that we've created the flow, the last thing to do is to test it and make sure that we're getting the expected behavior.
-
-1. To test the Logic App, open a new tab and navigate to the OneDrive folder you set up at the beginning of this tutorial. Add this [sample invoice](https://raw.githubusercontent.com/Azure-Samples/cognitive-services-REST-api-samples/master/curl/form-recognizer/invoice-logic-apps-tutorial.pdf) to the OneDrive folder.
-
-1. Return to the Logic App designer tab and select the **Run trigger** button and select **Run** from the drop-down menu.
-
-1. You should see a sample run of your Logic App. If all the steps have green check marks, the run was successful.
-
- :::image border="true" type="content" source="media/logic-apps-tutorial/logic-app-connector-demo-seventeen.gif" alt-text="GIF of sample run of Logic App.":::
-
-1. Check your email and you should see a new email with the information we pre-specified.
-
-1. Be sure to [disable or delete](../../logic-apps/manage-logic-apps-with-azure-portal.md#disable-or-enable-a-single-logic-app) your Logic App after you're done so usage stops.
-
-Congratulations! You've officially completed this tutorial.
-
-## Next steps
-
-> [!div class="nextstepaction"]
-> [Use the invoice processing prebuilt model in Power Automate](/ai-builder/flow-invoice-processing?toc=/azure/applied-ai-services/form-recognizer/toc.json&bc=/azure/applied-ai-services/form-recognizer/breadcrumb/toc.json)
applied-ai-services V3 Error Guide https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/v3-error-guide.md
- Title: "Reference: Form Recognizer Errors"-
-description: Learn how errors are represented in Form Recognizer and find a list of possible errors returned by the service.
----- Previously updated : 10/07/2022-
-monikerRange: 'form-recog-3.0.0'
--
-# Form Recognizer error guide v3.0
-
-Form Recognizer uses a unified design to represent all errors encountered in the REST APIs. Whenever an API operation returns a 4xx or 5xx status code, additional information about the error is returned in the response JSON body as follows:
-
-```json
-{
- "error": {
- "code": "InvalidRequest",
- "message": "Invalid request.",
- "innererror": {
- "code": "InvalidContent",
- "message": "The file format is unsupported or corrupted. Refer to documentation for the list of supported formats."
- }
- }
-}
-```
-
-For long-running operations where multiple errors may be encountered, the top-level error code is set to the most severe error, with the individual errors listed under the *error.details* property. In such scenarios, the *target* property of each individual error specifies the trigger of the error.
-
-```json
-{
- "status": "failed",
- "createdDateTime": "2021-07-14T10:17:51Z",
- "lastUpdatedDateTime": "2021-07-14T10:17:51Z",
- "error": {
- "code": "InternalServerError",
- "message": "An unexpected error occurred.",
- "details": [
- {
- "code": "InternalServerError",
- "message": "An unexpected error occurred."
- },
- {
- "code": "InvalidContentDimensions",
- "message": "The input image dimensions are out of range. Refer to documentation for supported image dimensions.",
- "target": "2"
- }
- ]
- }
-}
-```
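-
-The following is a minimal sketch (not part of the REST API reference) of how a client might surface these error fields when a request or long-running operation fails; the result URL and key values are placeholders.
-
-```python
-import requests
-
-result_url = "<operation-or-analyze-result-url>"  # placeholder
-key = "<your-form-recognizer-key>"                # placeholder
-
-resp = requests.get(result_url, headers={"Ocp-Apim-Subscription-Key": key})
-body = resp.json()
-
-if resp.status_code >= 400 or body.get("status") == "failed":
-    error = body.get("error", {})
-    print(f"{error.get('code')}: {error.get('message')}")
-    inner = error.get("innererror")
-    if inner:
-        print(f"  inner {inner.get('code')}: {inner.get('message')}")
-    for detail in error.get("details", []):
-        print(f"  detail {detail.get('code')} (target={detail.get('target')}): {detail.get('message')}")
-```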
-
-The top-level *error.code* property can be one of the following error code messages:
-
-| Error Code | Message | Http Status |
-| -- | | -- |
-| InvalidRequest | Invalid request. | 400 |
-| InvalidArgument | Invalid argument. | 400 |
-| Forbidden | Access forbidden due to policy or other configuration. | 403 |
-| NotFound | Resource not found. | 404 |
-| MethodNotAllowed | The requested HTTP method is not allowed. | 405 |
-| Conflict | The request could not be completed due to a conflict. | 409 |
-| UnsupportedMediaType | Request content type is not supported. | 415 |
-| InternalServerError | An unexpected error occurred. | 500 |
-| ServiceUnavailable | A transient error has occurred. Try again. | 503 |
-
-When possible, more details are specified in the *inner error* property.
-
-| Top Error Code | Inner Error Code | Message |
-| -- | - | - |
-| Conflict | ModelExists | A model with the provided name already exists. |
-| Forbidden | AuthorizationFailed | Authorization failed: {details} |
-| Forbidden | InvalidDataProtectionKey | Data protection key is invalid: {details} |
-| Forbidden | OutboundAccessForbidden | The request contains a domain name that is not allowed by the current access control policy. |
-| InternalServerError | Unknown | Unknown error. |
-| InvalidArgument | InvalidContentSourceFormat | Invalid content source: {details} |
-| InvalidArgument | InvalidParameter | The parameter {parameterName} is invalid: {details} |
-| InvalidArgument | InvalidParameterLength | Parameter {parameterName} length must not exceed {maxChars} characters. |
-| InvalidArgument | InvalidSasToken | The shared access signature (SAS) is invalid: {details} |
-| InvalidArgument | ParameterMissing | The parameter {parameterName} is required. |
-| InvalidRequest | ContentSourceNotAccessible | Content is not accessible: {details} |
-| InvalidRequest | ContentSourceTimeout | Timeout while receiving the file from client. |
-| InvalidRequest | DocumentModelLimit | Account cannot create more than {maximumModels} models. |
-| InvalidRequest | DocumentModelLimitNeural | Account cannot create more than 10 custom neural models per month. Please contact support to request additional capacity. |
-| InvalidRequest | DocumentModelLimitComposed | Account cannot create a model with more than {details} component models. |
-| InvalidRequest | InvalidContent | The file is corrupted or format is unsupported. Refer to documentation for the list of supported formats. |
-| InvalidRequest | InvalidContentDimensions | The input image dimensions are out of range. Refer to documentation for supported image dimensions. |
-| InvalidRequest | InvalidContentLength | The input image is too large. Refer to documentation for the maximum file size. |
-| InvalidRequest | InvalidFieldsDefinition | Invalid fields: {details} |
-| InvalidRequest | InvalidTrainingContentLength | Training content contains {bytes} bytes. Training is limited to {maxBytes} bytes. |
-| InvalidRequest | InvalidTrainingContentPageCount | Training content contains {pages} pages. Training is limited to {pages} pages. |
-| InvalidRequest | ModelAnalyzeError | Could not analyze using a custom model: {details} |
-| InvalidRequest | ModelBuildError | Could not build the model: {details} |
-| InvalidRequest | ModelComposeError | Could not compose the model: {details} |
-| InvalidRequest | ModelNotReady | Model is not ready for the requested operation. Wait for training to complete or check for operation errors. |
-| InvalidRequest | ModelReadOnly | The requested model is read-only. |
-| InvalidRequest | NotSupportedApiVersion | The requested operation requires {minimumApiVersion} or later. |
-| InvalidRequest | OperationNotCancellable | The operation can no longer be canceled. |
-| InvalidRequest | TrainingContentMissing | Training data is missing: {details} |
-| InvalidRequest | UnsupportedContent | Content is not supported: {details} |
-| NotFound | ModelNotFound | The requested model was not found. It may have been deleted or is still building. |
-| NotFound | OperationNotFound | The requested operation was not found. The identifier may be invalid or the operation may have expired. |
applied-ai-services V3 Migration Guide https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/v3-migration-guide.md
- Title: "How-to: Migrate your application from Form Recognizer v2.1 to v3.0."-
-description: In this how-to guide, you'll learn the differences between Form Recognizer API v2.1 and v3.0. You'll also learn the changes you need to move to the newer version of the API.
----- Previously updated : 10/20/2022-
-monikerRange: '>=form-recog-2.1.0'
--
-# Form Recognizer v3.0 migration
-
-> [!IMPORTANT]
->
-> Form Recognizer REST API v3.0 introduces breaking changes in the REST API request and analyze response JSON.
-
-## Migrating from a v3.0 preview API version
-
-Preview APIs are periodically deprecated. If you're using a preview API version, please update your application to target the GA API version. To migrate from the 2021-09-30-preview, 2022-01-30-preview or the 2022-06-30-preview API versions to the `2022-08-31` (GA) API version using the SDK, update to the [current version of the language specific SDK](sdk-overview.md).
-
-> [!IMPORTANT]
->
-> Preview API versions 2021-09-30-preview, 2022-01-30-preview and 2022-06-30-preview are being retired July 31st 2023. All analyze requests that use these API versions will fail. Custom neural models trained with any of these API versions will no longer be usable once the API versions are deprecated. All custom neural models trained with preview API versions will need to be retrained with the GA API version.
-
-The `2022-08-31` (GA) API has a few updates from the preview API versions:
-
-* Field rename: boundingBox to polygon to support non-quadrilateral polygon regions.
-* Field deleted: entities removed from the result of the general document model.
-* Field rename: documentLanguage.languageCode to locale
-* Added support for HEIF format
-* Added paragraph detection, with role classification for layout and general document models.
-* Added support for parsed address fields.
-
-## Migrating from v2.1
-
-Form Recognizer v3.0 introduces several new features and capabilities:
-
-* [Form Recognizer REST API](quickstarts/get-started-sdks-rest-api.md?view=form-recog-3.0.0&preserve-view=true) has been redesigned for better usability.
-* [**General document (v3.0)**](concept-general-document.md) model is a new API that extracts text, tables, structure, and key-value pairs, from forms and documents.
-* [**Custom neural model (v3.0)**](concept-custom-neural.md) is a new custom model type to extract fields from structured and unstructured documents.
-* [**Receipt (v3.0)**](concept-receipt.md) model supports single-page hotel receipt processing.
-* [**ID document (v3.0)**](concept-id-document.md) model supports endorsements, restrictions, and vehicle classification extraction from US driver's licenses.
-* [**Custom model API (v3.0)**](concept-custom.md) supports signature detection for custom template models.
-* [**Custom model API (v3.0)**](overview.md) supports analysis of all the newly added prebuilt models. For a complete list of prebuilt models, see the [overview](overview.md) page.
-
-In this article, you'll learn the differences between Form Recognizer v2.1 and v3.0 and how to move to the newer version of the API.
-
-> [!CAUTION]
->
-> * REST API **2022-08-31** release includes a breaking change in the REST API analyze response JSON.
-> * The `boundingBox` property is renamed to `polygon` in each instance.
-
-## Changes to the REST API endpoints
-
- The v3.0 REST API combines the analysis operations for layout analysis, prebuilt models, and custom models into a single pair of operations by assigning **`documentModels`** and **`modelId`** to the layout analysis (prebuilt-layout) and prebuilt models.
-
-### POST request
-
-```http
-https://{your-form-recognizer-endpoint}/formrecognizer/documentModels/{modelId}?api-version=2022-08-31
-
-```
-
-### GET request
-
-```http
-https://{your-form-recognizer-endpoint}/formrecognizer/documentModels/{modelId}/AnalyzeResult/{resultId}?api-version=2022-08-31
-```
-
-### Analyze operation
-
-* The request payload and call pattern remain unchanged.
-* The Analyze operation specifies the input document and content-specific configurations. It returns the analyze result URL via the Operation-Location header in the response.
-* Poll this analyze result URL via a GET request to check the status of the analyze operation (the minimum recommended interval between requests is 1 second).
-* Upon success, the status is set to `succeeded` and [analyzeResult](#changes-to-analyze-result) is returned in the response body. If errors are encountered, the status is set to `failed`, and an error is returned.
-
-| Model | v2.1 | v3.0 |
-|:--| :--| :--|
-| **Request URL prefix**| **https://{your-form-recognizer-endpoint}/formrecognizer/v2.1** | **https://{your-form-recognizer-endpoint}/formrecognizer** |
-| **General document**|N/A|`/documentModels/prebuilt-document:analyze` |
-| **Layout**| /layout/analyze |`/documentModels/prebuilt-layout:analyze`|
-|**Custom**| /custom/{modelId}/analyze |`/documentModels/{modelId}:analyze` |
-| **Invoice** | /prebuilt/invoice/analyze | `/documentModels/prebuilt-invoice:analyze` |
-| **Receipt** | /prebuilt/receipt/analyze | `/documentModels/prebuilt-receipt:analyze` |
-| **ID document** | /prebuilt/idDocument/analyze | `/documentModels/prebuilt-idDocument:analyze` |
-|**Business card**| /prebuilt/businessCard/analyze| `/documentModels/prebuilt-businessCard:analyze`|
-|**W-2**| /prebuilt/w-2/analyze| `/documentModels/prebuilt-w-2:analyze`|
-
-### Analyze request body
-
-The content to be analyzed is provided via the request body. Either a URL or base64-encoded data can be used to construct the request.
-
-To specify a publicly accessible web URL, set the Content-Type to **application/json** and send the following JSON body:
-
- ```json
- {
- "urlSource": "{urlPath}"
- }
- ```
-
-Base64 encoding is also supported in Form Recognizer v3.0:
-
-```json
-{
- "base64Source": "{base64EncodedContent}"
-}
-```
-
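-The following is a minimal sketch (not an official sample) of this call pattern using Python and the `requests` library; the endpoint, key, and document URL values are placeholders.
-
-```python
-import time
-import requests
-
-endpoint = "https://<your-form-recognizer-endpoint>"
-key = "<your-form-recognizer-key>"
-
-# Submit the document by URL (a base64Source body works the same way).
-post_url = f"{endpoint}/formrecognizer/documentModels/prebuilt-layout:analyze?api-version=2022-08-31"
-resp = requests.post(
-    post_url,
-    headers={"Ocp-Apim-Subscription-Key": key, "Content-Type": "application/json"},
-    json={"urlSource": "https://<publicly-accessible-url>/sample.pdf"})
-resp.raise_for_status()
-
-# Poll the analyze result URL returned in the Operation-Location header.
-result_url = resp.headers["Operation-Location"]
-while True:
-    result = requests.get(result_url, headers={"Ocp-Apim-Subscription-Key": key}).json()
-    if result["status"] in ("succeeded", "failed"):
-        break
-    time.sleep(1)  # minimum recommended polling interval
-
-if result["status"] == "succeeded":
-    analyze_result = result["analyzeResult"]
-```
-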
-### Additionally supported parameters
-
-Parameters that continue to be supported:
-
-* `pages` : Analyze only a specific subset of pages in the document. List of page numbers indexed from the number `1` to analyze. Ex. "1-3,5,7-9"
-* `locale` : Locale hint for text recognition and document analysis. Value may contain only the language code (ex. "en", "fr") or BCP 47 language tag (ex. "en-US").
-
-Parameters no longer supported:
-
-* includeTextDetails
-
-The new response format is more compact and the full output is always returned.
-
-## Changes to analyze result
-
-The analyze response has been refactored into the following top-level results to support multi-page elements.
-
-* `pages`
-* `tables`
-* `keyValuePairs`
-* `entities`
-* `styles`
-* `documents`
-
-> [!NOTE]
->
-> The analyzeResult response includes a number of changes, such as elements that move up from being a property of pages to a top-level property within analyzeResult.
-
-```json
-
-{
-  // Basic analyze result metadata
-  "apiVersion": "2022-08-31", // REST API version used
-  "modelId": "prebuilt-invoice", // ModelId used
-  "stringIndexType": "textElements", // Character unit used for string offsets and lengths: textElements, unicodeCodePoint, utf16CodeUnit
-  // Concatenated content in global reading order across pages.
-  // Words are generally delimited by space, except CJK (Chinese, Japanese, Korean) characters.
-  // Lines and selection marks are generally delimited by newline character.
-  // Selection marks are represented in Markdown emoji syntax (:selected:, :unselected:).
-  "content": "CONTOSO LTD.\nINVOICE\nContoso Headquarters...",
-  "pages": [ // List of pages analyzed
-    {
-      // Basic page metadata
-      "pageNumber": 1, // 1-indexed page number
-      "angle": 0, // Orientation of content in clockwise direction (degree)
-      "width": 0, // Page width
-      "height": 0, // Page height
-      "unit": "pixel", // Unit for width, height, and polygon coordinates
-      "spans": [ // Parts of top-level content covered by page
-        {
-          "offset": 0, // Offset in content
-          "length": 7 // Length in content
-        }
-      ],
-      "words": [ // List of words in page
-        {
-          "text": "CONTOSO", // Equivalent to $.content.Substring(span.offset, span.length)
-          "boundingBox": [ ... ], // Position in page
-          "confidence": 0.99, // Extraction confidence
-          "span": { ... } // Part of top-level content covered by word
-        }, ...
-      ],
-      "selectionMarks": [ // List of selectionMarks in page
-        {
-          "state": "selected", // Selection state: selected, unselected
-          "boundingBox": [ ... ], // Position in page
-          "confidence": 0.95, // Extraction confidence
-          "span": { ... } // Part of top-level content covered by selection mark
-        }, ...
-      ],
-      "lines": [ // List of lines in page
-        {
-          "content": "CONTOSO LTD.", // Concatenated content of line (may contain both words and selectionMarks)
-          "boundingBox": [ ... ], // Position in page
-          "spans": [ ... ] // Parts of top-level content covered by line
-        }, ...
-      ]
-    }, ...
-  ],
-  "tables": [ // List of extracted tables
-    {
-      "rowCount": 1, // Number of rows in table
-      "columnCount": 1, // Number of columns in table
-      "boundingRegions": [ // Polygons or bounding boxes potentially across pages covered by table
-        {
-          "pageNumber": 1, // 1-indexed page number
-          "polygon": [ ... ] // Previously bounding box, renamed to polygon in the 2022-08-31 API
-        }
-      ],
-      "spans": [ ... ], // Parts of top-level content covered by table
-      "cells": [ // List of cells in table
-        {
-          "kind": "stub", // Cell kind: content (default), rowHeader, columnHeader, stub, description
-          "rowIndex": 0, // 0-indexed row position of cell
-          "columnIndex": 0, // 0-indexed column position of cell
-          "rowSpan": 1, // Number of rows spanned by cell (default=1)
-          "columnSpan": 1, // Number of columns spanned by cell (default=1)
-          "content": "SALESPERSON", // Concatenated content of cell
-          "boundingRegions": [ ... ], // Bounding regions covered by cell
-          "spans": [ ... ] // Parts of top-level content covered by cell
-        }, ...
-      ]
-    }, ...
-  ],
-  "keyValuePairs": [ // List of extracted key-value pairs
-    {
-      "key": { // Extracted key
-        "content": "INVOICE:", // Key content
-        "boundingRegions": [ ... ], // Key bounding regions
-        "spans": [ ... ] // Key spans
-      },
-      "value": { // Extracted value corresponding to key, if any
-        "content": "INV-100", // Value content
-        "boundingRegions": [ ... ], // Value bounding regions
-        "spans": [ ... ] // Value spans
-      },
-      "confidence": 0.95 // Extraction confidence
-    }, ...
-  ],
-  "styles": [
-    {
-      "isHandwritten": true, // Is content in this style handwritten?
-      "spans": [ ... ], // Spans covered by this style
-      "confidence": 0.95 // Detection confidence
-    }, ...
-  ],
-  "documents": [ // List of extracted documents
-    {
-      "docType": "prebuilt-invoice", // Classified document type (model dependent)
-      "boundingRegions": [ ... ], // Document bounding regions
-      "spans": [ ... ], // Document spans
-      "confidence": 0.99, // Document splitting/classification confidence
-      "fields": { // List of extracted fields
-        "VendorName": { // Field name (docType dependent)
-          "type": "string", // Field value type: string, number, array, object, ...
-          "valueString": "CONTOSO LTD.", // Normalized field value
-          "content": "CONTOSO LTD.", // Raw extracted field content
-          "boundingRegions": [ ... ], // Field bounding regions
-          "spans": [ ... ], // Field spans
-          "confidence": 0.99 // Extraction confidence
-        }, ...
-      }
-    }, ...
-  ]
-}
-
-```
-
-## Build or train model
-
-The model object has three updates in the new API:
-
-* ```modelId``` is now a property that can be set on a model to give it a human-readable name.
-* ```modelName``` has been renamed to ```description```
-* ```buildMode``` is a new property with values of ```template``` for custom form models or ```neural``` for custom neural models.
-
-The ```build``` operation is invoked to train a model. The request payload and call pattern remain unchanged. The build operation specifies the model and training dataset; it returns the result via the Operation-Location header in the response. Poll this model operation URL via a GET request to check the status of the build operation (the minimum recommended interval between requests is 1 second). Unlike v2.1, this URL isn't the resource location of the model. Instead, the model URL can be constructed from the given modelId; it's also retrieved from the resourceLocation property in the response. Upon success, status is set to ```succeeded``` and the result contains the custom model info. If errors are encountered, status is set to ```failed```, and the error is returned.
-
-The following code is a sample build request using a SAS token. Note the trailing slash when setting the prefix or folder path.
-
-```json
-POST https://{your-form-recognizer-endpoint}/formrecognizer/documentModels:build?api-version=2022-08-31
-
-{
- "modelId": {modelId},
- "description": "Sample model",
- "buildMode": "template",
- "azureBlobSource": {
- "containerUrl": "https://{storageAccount}.blob.core.windows.net/{containerName}?{sasToken}",
- "prefix": "{folderName/}"
- }
-}
-```
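-
-As a minimal sketch (not an official sample) of the polling step described above, the following Python snippet checks the build operation status and then constructs the model URL from the modelId; the endpoint, key, and operation URL are placeholders.
-
-```python
-import time
-import requests
-
-endpoint = "https://<your-form-recognizer-endpoint>"
-headers = {"Ocp-Apim-Subscription-Key": "<your-form-recognizer-key>"}
-
-# Operation-Location header value returned by the build request above.
-operation_url = "<operation-location-url>"
-
-while True:
-    operation = requests.get(operation_url, headers=headers).json()
-    if operation["status"] in ("succeeded", "failed"):
-        break
-    time.sleep(1)  # minimum recommended polling interval
-
-if operation["status"] == "succeeded":
-    model_id = operation["result"]["modelId"]  # result contains the custom model info
-    model_url = f"{endpoint}/formrecognizer/documentModels/{model_id}?api-version=2022-08-31"
-```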
-
-## Changes to compose model
-
-Model compose is now limited to a single level of nesting. Composed models are now consistent with custom models with the addition of ```modelId``` and ```description``` properties.
-
-```json
-POST https://{your-form-recognizer-endpoint}/formrecognizer/documentModels:compose?api-version=2022-08-31
-{
- "modelId": "{composedModelId}",
- "description": "{composedModelDescription}",
- "componentModels": [
- { "modelId": "{modelId1}" },
-    { "modelId": "{modelId2}" }
- ]
-}
-
-```
-
-## Changes to copy model
-
-The call pattern for copy model remains unchanged:
-
-* Authorize the copy operation on the target resource by calling ```authorizeCopy```. This is now a POST request.
-* Submit the authorization to the source resource to copy the model by calling ```copyTo```.
-* Poll the returned operation to validate that the operation completed successfully.
-
-The only changes to the copy model function are:
-
-* HTTP action on the ```authorizeCopy``` is now a POST request.
-* The authorization payload contains all the information needed to submit the copy request.
-
-***Authorize the copy***
-
-```json
-POST https://{targetHost}/formrecognizer/documentModels:authorizeCopy?api-version=2022-08-31
-{
- "modelId": "{targetModelId}",
-    "description": "{targetModelDescription}"
-}
-```
-
-Use the response body from the authorize action to construct the request for the copy.
-
-```json
-POST https://{sourceHost}/formrecognizer/documentModels/{sourceModelId}:copyTo?api-version=2022-08-31
-{
- "targetResourceId": "{targetResourceId}",
- "targetResourceRegion": "{targetResourceRegion}",
- "targetModelId": "{targetModelId}",
- "targetModelLocation": "https://{targetHost}/formrecognizer/documentModels/{targetModelId}",
- "accessToken": "{accessToken}",
- "expirationDateTime": "2021-08-02T03:56:11Z"
-}
-```
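-
-The following is a minimal sketch (not an official sample) chaining the two calls described above in Python; the endpoints, keys, and model IDs are placeholders.
-
-```python
-import requests
-
-target_endpoint = "https://<target-form-recognizer-endpoint>"
-source_endpoint = "https://<source-form-recognizer-endpoint>"
-
-# 1. Authorize the copy on the target resource.
-authorization = requests.post(
-    f"{target_endpoint}/formrecognizer/documentModels:authorizeCopy?api-version=2022-08-31",
-    headers={"Ocp-Apim-Subscription-Key": "<target-resource-key>"},
-    json={"modelId": "<targetModelId>", "description": "<targetModelDescription>"}).json()
-
-# 2. Submit the authorization payload to the source resource as the copyTo body.
-copy_response = requests.post(
-    f"{source_endpoint}/formrecognizer/documentModels/<sourceModelId>:copyTo?api-version=2022-08-31",
-    headers={"Ocp-Apim-Subscription-Key": "<source-resource-key>"},
-    json=authorization)
-
-# 3. Poll the Operation-Location header to validate that the copy completed.
-operation_url = copy_response.headers["Operation-Location"]
-```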
-
-## Changes to list models
-
-The list models operation has been extended to return both prebuilt and custom models. All prebuilt model names start with ```prebuilt-```. Only models with a status of succeeded are returned. To list models that either failed or are in progress, see [List Operations](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-v3-0-preview-1/operations/GetModels).
-
-***Sample list models request***
-
-```http
-GET https://{your-form-recognizer-endpoint}/formrecognizer/documentModels?api-version=2022-08-31
-```
-
-## Change to get model
-
-As get model now includes prebuilt models, the get operation returns a ```docTypes``` dictionary. Each document type is described by its name, optional description, field schema, and optional field confidence. The field schema describes the list of fields potentially returned with the document type.
-
-```http
-GET https://{your-form-recognizer-endpoint}/formrecognizer/documentModels/{modelId}?api-version=2022-08-31
-```
-
-## New get info operation
-
-The ```info``` operation on the service returns the custom model count and custom model limit.
-
-```http
-GET https://{your-form-recognizer-endpoint}/formrecognizer/info?api-version=2022-08-31
-```
-
-***Sample response***
-
-```json
-{
- "customDocumentModels": {
- "count": 5,
- "limit": 100
- }
-}
-```
-
-## Next steps
-
-In this migration guide, you've learned how to upgrade your existing Form Recognizer application to use the v3.0 APIs.
-
-* [Review the new REST API](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-2022-08-31/operations/AnalyzeDocument)
-* [What is Form Recognizer?](overview.md)
-* [Form Recognizer quickstart](quickstarts/get-started-sdks-rest-api.md)
applied-ai-services Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/whats-new.md
- Title: What's new in Form Recognizer?-
-description: Learn the latest changes to the Form Recognizer API.
----- Previously updated : 06/23/2023-
-monikerRange: '>=form-recog-2.1.0'
--
-<!-- markdownlint-disable MD024 -->
-<!-- markdownlint-disable MD036 -->
-<!-- markdownlint-disable MD001 -->
-<!-- markdownlint-disable MD051 -->
-
-# What's new in Azure Form Recognizer
--
-Form Recognizer service is updated on an ongoing basis. Bookmark this page to stay up to date with release notes, feature enhancements, and our newest documentation.
-
->[!NOTE]
-> With the release of the 2022-08-31 GA API, the associated preview APIs are being deprecated. If you are using the 2021-09-30-preview or the 2022-01-30-preview API versions, please update your applications to target the 2022-08-31 API version. There are a few minor changes involved; for more information, _see_ the [migration guide](v3-migration-guide.md).
-
-## May 2023
-
-**Introducing refreshed documentation for Build 2023**
-
-* [🆕 Form Recognizer Overview](overview.md?view=form-recog-3.0.0&preserve-view=true) has enhanced navigation, structured access points, and enriched images.
-
-* [🆕 Choose a Form Recognizer model](choose-model-feature.md?view=form-recog-3.0.0&preserve-view=true) provides guidance for choosing the best Form Recognizer solution for your projects and workflows.
-
-## April 2023
-
-**Announcing the latest Azure Form Recognizer client-library public preview release**
-
-* Form Recognizer REST API Version [2023-02-28-preview](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-2023-02-28-preview/operations/AnalyzeDocument) supports the public preview release SDKs. This release includes the following new features and capabilities available for .NET/C# (4.1.0-beta-1), Java (4.1.0-beta-1), JavaScript (4.1.0-beta-1), and Python (3.3.0b.1) SDKs:
-
- * [**Custom classification model**](concept-custom-classifier.md)
-
- * [**Query fields extraction**](concept-query-fields.md)
-
- * [**Add-on capabilities**](concept-add-on-capabilities.md)
-
-* For more information, _see_ [**Form Recognizer SDK (public preview)**](./sdk-preview.md) and [March 2023 release](#march-2023) notes.
-
-## March 2023
-
-> [!IMPORTANT]
-> [**`2023-02-28-preview`**](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-2023-02-28-preview/operations/AnalyzeDocument) capabilities are currently only available in the following regions:
->
-> * West Europe
-> * West US2
-> * East US
-
-* [**Custom classification model**](concept-custom-classifier.md) is a new capability within Form Recognizer starting with the ```2023-02-28-preview``` API. Try the document classification capability using the [Form Recognizer Studio](https://formrecognizer.appliedai.azure.com/studio/document-classifier/projects) or the [REST API](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-2023-02-28-preview/operations/GetClassifyDocumentResult).
-* [**Query fields**](concept-query-fields.md) capabilities, added to the General Document model, use Azure OpenAI models to extract specific fields from documents. Try the **General documents with query fields** feature using the [Form Recognizer Studio](https://formrecognizer.appliedai.azure.com/studio). Query fields are currently only active for resources in the `East US` region.
-* [**Read**](concept-read.md#barcode-extraction) and [**Layout**](concept-layout.md#barcode-extraction) models support **barcode** extraction with the ```2023-02-28-preview``` API.
-* [**Add-on capabilities**](concept-add-on-capabilities.md)
- * [**Font extraction**](concept-add-on-capabilities.md#font-property-extraction) is now recognized with the ```2023-02-28-preview``` API.
- * [**Formula extraction**](concept-add-on-capabilities.md#formula-extraction) is now recognized with the ```2023-02-28-preview``` API.
- * [**High resolution extraction**](concept-add-on-capabilities.md#high-resolution-extraction) is now recognized with the ```2023-02-28-preview``` API.
-* [**Common name key normalization**](concept-general-document.md#key-normalization-common-name) capabilities are added to the General Document model to improve processing forms with variations in key names.
-* [**Custom extraction model updates**](concept-custom.md)
- * [**Custom neural model**](concept-custom-neural.md) now supports added languages for training and analysis. Train neural models for Dutch, French, German, Italian and Spanish.
- * [**Custom template model**](concept-custom-template.md) now has an improved signature detection capability.
-* [**Form Recognizer Studio**](https://formrecognizer.appliedai.azure.com/studio) updates
- * In addition to support for all the new features like classification and query fields, the Studio now enables project sharing for custom model projects.
- * New model additions in gated preview: **Vaccination cards**, **Contracts**, **US Tax 1098**, **US Tax 1098-E**, and **US Tax 1098-T**. To request access to gated preview models, complete and submit the [**Form Recognizer private preview request form**](https://aka.ms/form-recognizer/preview/survey).
-* [**Receipt model updates**](concept-receipt.md)
- * Receipt model has added support for thermal receipts.
- * Receipt model now has added language support for 18 languages and three regional languages (English, French, Portuguese).
- * Receipt model now supports `TaxDetails` extraction.
-* [**Layout model**](concept-layout.md) now has improved table recognition.
-* [**Read model**](concept-read.md) now has added improvement for single-digit character recognition.
---
-## February 2023
-
-* Select Form Recognizer containers for v3.0 are now available for use!
-* Currently **Read v3.0** and **Layout v3.0** containers are available.
-
- For more information, _see_ [Install and run Form Recognizer containers](containers/form-recognizer-container-install-run.md?view=form-recog-3.0.0&preserve-view=true)
---
-## January 2023
-
-* Prebuilt receipt model - added languages supported. The receipt model now supports these added languages and locales
- * Japanese - Japan (ja-JP)
- * French - Canada (fr-CA)
- * Dutch - Netherlands (nl-NL)
- * English - United Arab Emirates (en-AE)
- * Portuguese - Brazil (pt-BR)
-
-* Prebuilt invoice model - added languages supported. The invoice model now supports these added languages and locales
- * English - United States (en-US), Australia (en-AU), Canada (en-CA), United Kingdom (en-UK), India (en-IN)
- * Spanish - Spain (es-ES)
- * French - France (fr-FR)
- * Italian - Italy (it-IT)
- * Portuguese - Portugal (pt-PT)
- * Dutch - Netherlands (nl-NL)
-
-* Prebuilt invoice model - added fields recognized. The invoice model now recognizes these added fields
- * Currency code
- * Payment options
- * Total discount
- * Tax items (en-IN only)
-
-* Prebuilt ID model - added document types supported. The ID model now supports these added document types
- * US Military ID
-
-> [!TIP]
-> All January 2023 updates are available with [REST API version **2022-08-31 (GA)**](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-2022-08-31/operations/AnalyzeDocument).
-
-* **[Prebuilt receipt model](concept-receipt.md#supported-languages-and-locales) - additional language support**:
-
- The **prebuilt receipt model** now has added support for the following languages:
-
- * English - United Arab Emirates (en-AE)
- * Dutch - Netherlands (nl-NL)
- * French - Canada (fr-CA)
- * German - (de-DE)
- * Italian - (it-IT)
- * Japanese - Japan (ja-JP)
- * Portuguese - Brazil (pt-BR)
-
-* **[Prebuilt invoice model](concept-invoice.md) - additional language support and field extractions**
-
- The **prebuilt invoice model** now has added support for the following languages:
-
- * English - Australia (en-AU), Canada (en-CA), United Kingdom (en-UK), India (en-IN)
- * Portuguese - Brazil (pt-BR)
-
- The **prebuilt invoice model** now has added support for the following field extractions:
-
- * Currency code
- * Payment options
- * Total discount
- * Tax items (en-IN only)
-
-* **[Prebuilt ID document model](concept-id-document.md#document-types) - additional document types support**
-
- The **prebuilt ID document model** now has added support for the following document types:
-
- * Driver's license expansion supporting India, Canada, United Kingdom and Australia
- * US military ID cards and documents
- * India ID cards and documents (PAN and Aadhaar)
- * Australia ID cards and documents (photo card, Key-pass ID)
- * Canada ID cards and documents (identification card, Maple card)
- * United Kingdom ID cards and documents (national/regional identity card)
---
-## December 2022
-
-* [**Form Recognizer Studio updates**](https://formrecognizer.appliedai.azure.com/studio)
-
- The December Form Recognizer Studio release includes the latest updates to Form Recognizer Studio. There are significant improvements to user experience, primarily with custom model labeling support.
-
- * **Page range**. The Studio now supports analyzing specified pages from a document.
-
- * **Custom model labeling**:
-
- * **Run Layout API automatically**. You can opt to run the Layout API for all documents automatically in your blob storage during the setup process for custom model.
-
- * **Search**. The Studio now includes search functionality to locate words within a document. This improvement allows for easier navigation while labeling.
-
- * **Navigation**. You can select labels to target labeled words within a document.
-
- * **Auto table labeling**. After you select the table icon within a document, you can opt to autolabel the extracted table in the labeling view.
-
- * **Label subtypes and second-level subtypes** The Studio now supports subtypes for table columns, table rows, and second-level subtypes for types such as dates and numbers.
-
-* Building custom neural models is now supported in the US Gov Virginia region.
-
-* Preview API versions ```2022-01-30-preview``` and ```2021-09-30-preview``` will be retired January 31, 2023. Update to the ```2022-08-31``` API version to avoid any service disruptions.
---
-## November 2022
-
-* **Announcing the latest stable release of Azure Form Recognizer libraries**
- * This release includes important changes and updates for .NET, Java, JavaScript, and Python SDKs. For more information, _see_ [**Azure SDK DevBlog**](https://devblogs.microsoft.com/azure-sdk/announcing-new-stable-release-of-azure-form-recognizer-libraries/).
- * The most significant enhancements are the introduction of two new clients, the **`DocumentAnalysisClient`** and the **`DocumentModelAdministrationClient`**.
---
-## October 2022
-
-* **Form Recognizer versioned content**
- * Form Recognizer documentation has been updated to present a versioned experience. Now, you can choose to view content targeting the v3.0 GA experience or the v2.1 GA experience. The v3.0 experience is the default.
-
- :::image type="content" source="media/versioning-and-monikers.png" alt-text="Screenshot of the Form Recognizer landing page denoting the version dropdown menu.":::
-
-* **Form Recognizer Studio Sample Code**
- * Sample code for the [Form Recognizer Studio labeling experience](https://github.com/microsoft/Form-Recognizer-Toolkit/tree/main/SampleCode/LabelingUX) is now available on GitHub. Customers can develop and integrate Form Recognizer into their own UX or build their own new UX using the Form Recognizer Studio sample code.
-
-* **Language expansion**
- * With the latest preview release, Form Recognizer's Read (OCR), Layout, and Custom template models support 134 new languages. These language additions include Greek, Latvian, Serbian, Thai, Ukrainian, and Vietnamese, along with several Latin and Cyrillic languages. Form Recognizer now has a total of 299 supported languages across the most recent GA and new preview versions. Refer to the [supported languages](language-support.md) page to see all supported languages.
- * Use the REST API parameter `api-version=2022-06-30-preview` when using the API or the corresponding SDK to support the new languages in your applications.
-
-* **New Prebuilt Contract model**
-  * A new prebuilt model that extracts information from contracts such as parties, title, contract ID, execution date, and more. The contracts model is currently in preview; request access [here](https://customervoice.microsoft.com/Pages/ResponsePage.aspx?id=v4j5cvGGr0GRqy180BHbR7en2Ais5pxKtso_Pz4b1_xUQTRDQUdHMTBWUDRBQ01QUVNWNlNYMVFDViQlQCN0PWcu_).
-
-* **Region expansion for training custom neural models**
-  * Training custom neural models is now supported in additional regions.
- > [!div class="checklist"]
- >
- > * East US
- > * East US2
- > * US Gov Arizona
---
-## September 2022
-
->[!NOTE]
-> Starting with version 4.0.0, a new set of clients has been introduced to leverage the newest features of the Form Recognizer service.
-
-**SDK version 4.0.0 GA release includes the following updates:**
-
-### [**C#**](#tab/csharp)
-
-* **Version 4.0.0 GA (2022-09-08)**
-* **Supports REST API v3.0 and v2.0 clients**
-
-[**Package (NuGet)**](https://www.nuget.org/packages/Azure.AI.FormRecognizer/4.0.0)
-
-[**Changelog/Release History**](https://github.com/Azure/azure-sdk-for-net/blob/main/sdk/formrecognizer/Azure.AI.FormRecognizer/CHANGELOG.md)
-
-[**Migration guide**](https://github.com/Azure/azure-sdk-for-net/blob/Azure.AI.FormRecognizer_4.0.0/sdk/formrecognizer/Azure.AI.FormRecognizer/MigrationGuide.md)
-
-[**ReadMe**](https://github.com/Azure/azure-sdk-for-net/blob/Azure.AI.FormRecognizer_4.0.0/sdk/formrecognizer/Azure.AI.FormRecognizer/README.md)
-
-[**Samples**](https://github.com/Azure/azure-sdk-for-net/blob/Azure.AI.FormRecognizer_4.0.0/sdk/formrecognizer/Azure.AI.FormRecognizer/samples/README.md)
-
-### [**Java**](#tab/java)
-
-* **Version 4.0.0 GA (2022-09-08)**
-* **Supports REST API v3.0 and v2.0 clients**
-
-[**Package (Maven)**](https://oss.sonatype.org/#nexus-search;quick~azure-ai-formrecognizer)
-
-[**Changelog/Release History**](https://github.com/Azure/azure-sdk-for-jav)
-
-[**Migration guide**](https://github.com/Azure/azure-sdk-for-jav)
-
-[**ReadMe**](https://github.com/Azure/azure-sdk-for-jav)
-
-[**Samples**](https://github.com/Azure/azure-sdk-for-jav)
-
-### [**JavaScript**](#tab/javascript)
-
-* **Version 4.0.0 GA (2022-09-08)**
-* **Supports REST API v3.0 and v2.0 clients**
-
-[**Package (npm)**](https://www.npmjs.com/package/@azure/ai-form-recognizer)
-
-[**Changelog/Release History**](https://github.com/Azure/azure-sdk-for-js/blob/%40azure/ai-form-recognizer_4.0.0/sdk/formrecognizer/ai-form-recognizer/CHANGELOG.md)
-
-[**Migration guide**](https://github.com/Azure/azure-sdk-for-js/blob/%40azure/ai-form-recognizer_4.0.0/sdk/formrecognizer/ai-form-recognizer/MIGRATION-v3_v4.md)
-
-[**ReadMe**](https://github.com/Azure/azure-sdk-for-js/blob/%40azure/ai-form-recognizer_4.0.0/sdk/formrecognizer/ai-form-recognizer/README.md)
-
-[**Samples**](https://github.com/witemple-msft/azure-sdk-for-js/blob/7e3196f7e529212a6bc329f5f06b0831bf4cc174/sdk/formrecognizer/ai-form-recognizer/samples/v4/javascript/README.md)
-
-### [Python](#tab/python)
-
-> [!NOTE]
-> Python 3.7 or later is required to use this package.
-
-* **Version 3.2.0 GA (2022-09-08)**
-* **Supports REST API v3.0 and v2.0 clients**
-
-[**Package (PyPi)**](https://pypi.org/project/azure-ai-formrecognizer/3.2.0/)
-
-[**Changelog/Release History**](https://github.com/Azure/azure-sdk-for-python/blob/azure-ai-formrecognizer_3.2.0/sdk/formrecognizer/azure-ai-formrecognizer/CHANGELOG.md)
-
-[**Migration guide**](https://github.com/Azure/azure-sdk-for-python/blob/azure-ai-formrecognizer_3.2.0/sdk/formrecognizer/azure-ai-formrecognizer/MIGRATION_GUIDE.md)
-
-[**ReadMe**](https://github.com/Azure/azure-sdk-for-python/blob/azure-ai-formrecognizer_3.2.0/sdk/formrecognizer/azure-ai-formrecognizer/README.md)
-
-[**Samples**](https://github.com/Azure/azure-sdk-for-python/blob/azure-ai-formrecognizer_3.2.0/sdk/formrecognizer/azure-ai-formrecognizer/samples/README.md)
---
-* **Region expansion for training custom neural models now supported in six new regions**
- > [!div class="checklist"]
- >
- > * Australia East
- > * Central US
- > * East Asia
- > * France Central
- > * UK South
- > * West US2
-
-  * For a complete list of regions where training is supported, see [custom neural models](concept-custom-neural.md).
-
- * Form Recognizer SDK version 4.0.0 GA release
- * **Form Recognizer SDKs version 4.0.0 (.NET/C#, Java, JavaScript) and version 3.2.0 (Python) are generally available and ready for use in production applications!**
- * For more information on Form Recognizer SDKs, see the [**SDK overview**](sdk-overview.md).
-  * Update your applications using your programming language's **migration guide**. A minimal .NET usage sketch follows this list.
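
The following is a minimal sketch of calling the GA .NET client, assuming a Form Recognizer endpoint and key supplied through environment variables; the variable names, document URL, and choice of the `prebuilt-read` model are illustrative, not part of the release notes.

```csharp
using System;
using Azure;
using Azure.AI.FormRecognizer.DocumentAnalysis;

// FR_ENDPOINT and FR_KEY are placeholder environment variable names.
var endpoint = new Uri(Environment.GetEnvironmentVariable("FR_ENDPOINT"));
var credential = new AzureKeyCredential(Environment.GetEnvironmentVariable("FR_KEY"));
var client = new DocumentAnalysisClient(endpoint, credential);

// Analyze a publicly reachable document with the prebuilt-read model.
var documentUri = new Uri("https://example.com/sample.pdf"); // placeholder URL
AnalyzeDocumentOperation operation = await client.AnalyzeDocumentFromUriAsync(
    WaitUntil.Completed, "prebuilt-read", documentUri);
AnalyzeResult result = operation.Value;

foreach (DocumentPage page in result.Pages)
{
    Console.WriteLine($"Page {page.PageNumber}: {page.Lines.Count} lines");
}
```
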
---
-## August 2022
-
-**Form Recognizer SDK beta August 2022 preview release includes the following updates:**
-
-### [**C#**](#tab/csharp)
-
-**Version 4.0.0-beta.5 (2022-08-09)**
-
-[**Changelog/Release History**](https://github.com/Azure/azure-sdk-for-net/blob/main/sdk/formrecognizer/Azure.AI.FormRecognizer/CHANGELOG.md#400-beta5-2022-08-09)
-
-[**Package (NuGet)**](https://www.nuget.org/packages/Azure.AI.FormRecognizer/4.0.0-beta.5)
-
-[**SDK reference documentation**](/dotnet/api/overview/azure/ai.formrecognizer-readme?view=azure-dotnet-preview&preserve-view=true)
-
-### [**Java**](#tab/java)
-
-**Version 4.0.0-beta.6 (2022-08-10)**
-
-[**Changelog/Release History**](https://github.com/Azure/azure-sdk-for-jav#400-beta6-2022-08-10)
-
- [**Package (Maven)**](https://oss.sonatype.org/#nexus-search;quick~azure-ai-formrecognizer)
-
- [**SDK reference documentation**](/java/api/overview/azure/ai-formrecognizer-readme?view=azure-java-preview&preserve-view=true)
-
-### [**JavaScript**](#tab/javascript)
-
-**Version 4.0.0-beta.6 (2022-08-09)**
-
- [**Changelog/Release History**](https://github.com/Azure/azure-sdk-for-js/blob/%40azure/ai-form-recognizer_4.0.0-beta.6/sdk/formrecognizer/ai-form-recognizer/CHANGELOG.md)
-
- [**Package (npm)**](https://www.npmjs.com/package/@azure/ai-form-recognizer/v/4.0.0-beta.6)
-
- [**SDK reference documentation**](/javascript/api/overview/azure/ai-form-recognizer-readme?view=azure-node-preview&preserve-view=true)
-
-### [Python](#tab/python)
-
-> [!IMPORTANT]
-> Python 3.6 is no longer supported in this release. Use Python 3.7 or later.
-
-**Version 3.2.0b6 (2022-08-09)**
-
- [**Changelog/Release History**](https://github.com/Azure/azure-sdk-for-python/blob/azure-ai-formrecognizer_3.2.0b6/sdk/formrecognizer/azure-ai-formrecognizer/CHANGELOG.md)
-
- [**Package (PyPi)**](https://pypi.org/project/azure-ai-formrecognizer/3.2.0b6/)
-
- [**SDK reference documentation**](https://pypi.org/project/azure-ai-formrecognizer/3.2.0b6/)
---
-* Form Recognizer v3.0 generally available
-
- * **Form Recognizer REST API v3.0 is now generally available and ready for use in production applications!** Update your applications with [**REST API version 2022-08-31**](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-2022-08-31/operations/AnalyzeDocument).
-
-* Form Recognizer Studio updates
- > [!div class="checklist"]
- >
- > * **Next steps**. Under each model page, the Studio now has a next steps section. Users can quickly reference sample code, troubleshooting guidelines, and pricing information.
- > * **Custom models**. The Studio now includes the ability to reorder labels in custom model projects to improve labeling efficiency.
-  > * **Copy Models**. Custom models can be copied across Form Recognizer services from within the Studio. The operation enables the promotion of a trained model to other environments and regions.
- > * **Delete documents**. The Studio now supports deleting documents from labeled dataset within custom projects.
-
-* Form Recognizer service updates
-
- * [**prebuilt-read**](concept-read.md). Read OCR model is now also available in Form Recognizer with paragraphs and language detection as the two new features. Form Recognizer Read targets advanced document scenarios aligned with the broader document intelligence capabilities in Form Recognizer.
- * [**prebuilt-layout**](concept-layout.md). The Layout model extracts paragraphs and whether the extracted text is a paragraph, title, section heading, footnote, page header, page footer, or page number.
-  * [**prebuilt-invoice**](concept-invoice.md). The TotalVAT and Line/VAT fields now resolve to the existing fields TotalTax and Line/Tax, respectively.
- * [**prebuilt-idDocument**](concept-id-document.md). Data extraction support for US state ID, social security, and green cards. Support for passport visa information.
- * [**prebuilt-receipt**](concept-receipt.md). Expanded locale support for French (fr-FR), Spanish (es-ES), Portuguese (pt-PT), Italian (it-IT) and German (de-DE).
- * [**prebuilt-businessCard**](concept-business-card.md). Address parse support to extract subfields for address components like address, city, state, country/region, and zip code.
-
-* **AI quality improvements**
-
-  * [**prebuilt-read**](concept-read.md). Enhanced support for single characters, handwritten dates, amounts, names, and other key data commonly found in receipts and invoices, plus improved processing of digital PDF documents.
- * [**prebuilt-layout**](concept-layout.md). Support for better detection of cropped tables, borderless tables, and improved recognition of long spanning cells.
- * [**prebuilt-document**](concept-general-document.md). Improved value and check box detection.
- * [**custom-neural**](concept-custom-neural.md). Improved accuracy for table detection and extraction.
---
-## June 2022
-
-* Form Recognizer SDK beta June 2022 preview release includes the following updates:
-
-### [**C#**](#tab/csharp)
-
-**Version 4.0.0-beta.4 (2022-06-08)**
-
-[**Changelog/Release History**](https://github.com/Azure/azure-sdk-for-net/blob/Azure.AI.FormRecognizer_4.0.0-beta.4/sdk/formrecognizer/Azure.AI.FormRecognizer/CHANGELOG.md)
-
-[**Package (NuGet)**](https://www.nuget.org/packages/Azure.AI.FormRecognizer/4.0.0-beta.4)
-
-[**SDK reference documentation**](/dotnet/api/azure.ai.formrecognizer?view=azure-dotnet-preview&preserve-view=true)
-
-### [**Java**](#tab/java)
-
-**Version 4.0.0-beta.5 (2022-06-07)**
-
-[**Changelog/Release History**](https://github.com/Azure/azure-sdk-for-jav)
-
- [**Package (Maven)**](https://search.maven.org/artifact/com.azure/azure-ai-formrecognizer/4.0.0-beta.5/jar)
-
- [**SDK reference documentation**](/java/api/overview/azure/ai-formrecognizer-readme?view=azure-java-preview&preserve-view=true)
-
-### [**JavaScript**](#tab/javascript)
-
-**Version 4.0.0-beta.4 (2022-06-07)**
-
- [**Changelog/Release History**](https://github.com/Azure/azure-sdk-for-js/blob/%40azure/ai-form-recognizer_4.0.0-beta.4/sdk/formrecognizer/ai-form-recognizer/CHANGELOG.md)
-
- [**Package (npm)**](https://www.npmjs.com/package/@azure/ai-form-recognizer/v/4.0.0-beta.4)
-
- [**SDK reference documentation**](/javascript/api/@azure/ai-form-recognizer/?view=azure-node-preview&preserve-view=true)
-
-### [Python](#tab/python)
-
-**Version 3.2.0b5 (2022-06-07)**
-
- [**Changelog/Release History**](https://github.com/Azure/azure-sdk-for-python/blob/azure-ai-formrecognizer_3.2.0b5/sdk/formrecognizer/azure-ai-formrecognizer/CHANGELOG.md)
-
- [**Package (PyPi)**](https://pypi.org/project/azure-ai-formrecognizer/3.2.0b5/)
-
- [**SDK reference documentation**](/python/api/azure-ai-formrecognizer/azure.ai.formrecognizer?view=azure-python-preview&preserve-view=true)
--
-* The [Form Recognizer Studio](https://formrecognizer.appliedai.azure.com/studio) June release is the latest update to the Form Recognizer Studio and includes considerable user experience and accessibility improvements:
-
- * **Code sample for JavaScript and C#**. The Studio code tab now adds JavaScript and C# code samples in addition to the existing Python one.
- * **New document upload UI**. Studio now supports uploading a document with drag & drop into the new upload user interface.
-  * **New feature for custom projects**. Custom projects now support creating a storage account and blobs when configuring the project. In addition, custom projects now support uploading training files directly within the Studio and copying an existing custom model.
-
-* Form Recognizer v3.0 **2022-06-30-preview** release presents extensive updates across the feature APIs:
-
- * [**Layout extends structure extraction**](concept-layout.md). Layout now includes added structure elements including sections, section headers, and paragraphs. This update enables finer grain document segmentation scenarios. For a complete list of structure elements identified, _see_ [enhanced structure](concept-layout.md#data-extraction).
- * [**Custom neural model tabular fields support**](concept-custom-neural.md). Custom document models now support tabular fields. Tabular fields by default are also multi page. To learn more about tabular fields in custom neural models, _see_ [tabular fields](concept-custom-neural.md#tabular-fields).
- * [**Custom template model tabular fields support for cross page tables**](concept-custom-template.md). Custom form models now support tabular fields across pages. To learn more about tabular fields in custom template models, _see_ [tabular fields](concept-custom-neural.md#tabular-fields).
- * [**Invoice model output now includes general document key-value pairs**](concept-invoice.md). Where invoices contain required fields beyond the fields included in the prebuilt model, the general document model supplements the output with key-value pairs. _See_ [key value pairs](concept-invoice.md#key-value-pairs).
- * [**Invoice language expansion**](concept-invoice.md). The invoice model includes expanded language support. _See_ [supported languages](concept-invoice.md#supported-languages-and-locales).
- * [**Prebuilt business card**](concept-business-card.md) now includes Japanese language support. _See_ [supported languages](concept-business-card.md#supported-languages-and-locales).
- * [**Prebuilt ID document model**](concept-id-document.md). The ID document model now extracts DateOfIssue, Height, Weight, EyeColor, HairColor, and DocumentDiscriminator from US driver's licenses. _See_ [field extraction](concept-id-document.md).
- * [**Read model now supports common Microsoft Office document types**](concept-read.md). Document types like Word (docx) and PowerPoint (ppt) are now supported with the Read API. See [Microsoft Office and HTML text extraction](concept-read.md#microsoft-office-and-html-text-extraction).
---
-## February 2022
-
-### [**C#**](#tab/csharp)
-
-**Version 4.0.0-beta.3 (2022-02-10)**
-
- [**Changelog/Release History**](https://github.com/Azure/azure-sdk-for-net/blob/main/sdk/formrecognizer/Azure.AI.FormRecognizer/CHANGELOG.md#400-beta3-2022-02-10)
-
- [**Package (NuGet)**](https://www.nuget.org/packages/Azure.AI.FormRecognizer/4.0.0-beta.3)
-
- [**SDK reference documentation**](/dotnet/api/azure.ai.formrecognizer.documentanalysis?view=azure-dotnet-preview&preserve-view=true)
-
-### [**Java**](#tab/java)
-
-**Version 4.0.0-beta.4 (2022-02-10)**
-
- [**Changelog/Release History**](https://github.com/Azure/azure-sdk-for-jav#400-beta4-2022-02-10)
-
- [**Package (Maven)**](https://search.maven.org/artifact/com.azure/azure-ai-formrecognizer/4.0.0-beta.4/jar)
-
- [**SDK reference documentation**](/java/api/overview/azure/ai-formrecognizer-readme?view=azure-java-preview&preserve-view=true)
-
-### [**JavaScript**](#tab/javascript)
-
-**Version 4.0.0-beta.3 (2022-02-10)**
-
- [**Changelog/Release History**](https://github.com/Azure/azure-sdk-for-js/blob/main/sdk/formrecognizer/ai-form-recognizer/CHANGELOG.md#400-beta3-2022-02-10)
-
- [**Package (npm)**](https://www.npmjs.com/package/@azure/ai-form-recognizer/v/4.0.0-beta.3)
-
- [**SDK reference documentation**](/javascript/api/@azure/ai-form-recognizer/?view=azure-node-preview&preserve-view=true)
-
-### [Python](#tab/python)
-
-**Version 3.2.0b3 (2022-02-10)**
-
- [**Changelog/Release History**](https://github.com/Azure/azure-sdk-for-python/blob/azure-ai-formrecognizer_3.2.0b3/sdk/formrecognizer/azure-ai-formrecognizer/CHANGELOG.md#320b3-2022-02-10)
-
- [**Package (PyPI)**](https://pypi.org/project/azure-ai-formrecognizer/3.2.0b3/)
-
- [**SDK reference documentation**](/python/api/azure-ai-formrecognizer/azure.ai.formrecognizer?view=azure-python-preview&preserve-view=true)
---
-* Form Recognizer v3.0 preview release introduces several new features, capabilities and enhancements:
-
- * [**Custom neural model**](concept-custom-neural.md) or custom document model is a new custom model to extract text and selection marks from structured forms, semi-structured and **unstructured documents**.
- * [**W-2 prebuilt model**](concept-w2.md) is a new prebuilt model to extract fields from W-2 forms for tax reporting and income verification scenarios.
- * [**Read**](concept-read.md) API extracts printed text lines, words, text locations, detected languages, and handwritten text, if detected.
-  * [**General document**](concept-general-document.md) pretrained model is now updated to support selection marks in addition to text, tables, structure, and key-value pairs from forms and documents.
- * [**Invoice API**](language-support.md#invoice-model) Invoice prebuilt model expands support to Spanish invoices.
- * [**Form Recognizer Studio**](https://formrecognizer.appliedai.azure.com) adds new demos for Read, W2, Hotel receipt samples, and support for training the new custom neural models.
- * [**Language Expansion**](language-support.md) Form Recognizer Read, Layout, and Custom Form add support for 42 new languages including Arabic, Hindi, and other languages using Arabic and Devanagari scripts to expand the coverage to 164 languages. Handwritten language support expands to Japanese and Korean.
-
-* Get started with the new [REST API](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-v3-0-preview-2/operations/AnalyzeDocument), [Python](quickstarts/get-started-sdks-rest-api.md?view=form-recog-3.0.0&preserve-view=true), or [.NET](quickstarts/get-started-sdks-rest-api.md?view=form-recog-3.0.0&preserve-view=true) SDK for the v3.0 preview API.
-
-* Form Recognizer model data extraction
-
-  | **Model** | **Text extraction** | **Key-Value pairs** | **Selection Marks** | **Tables** | **Signatures** |
-  |---|:-:|:-:|:-:|:-:|:-:|
-  | Read | ✓ | | | | |
-  | General document | ✓ | ✓ | ✓ | ✓ | |
-  | Layout | ✓ | | ✓ | ✓ | |
-  | Invoice | ✓ | ✓ | ✓ | ✓ | |
-  | Receipt | ✓ | ✓ | | | ✓ |
-  | ID document | ✓ | ✓ | | | |
-  | Business card | ✓ | ✓ | | | |
-  | Custom template | ✓ | ✓ | ✓ | ✓ | ✓ |
-  | Custom neural | ✓ | ✓ | ✓ | ✓ | |
-
-* Form Recognizer SDK beta preview release includes the following updates:
-
- * [Custom Document models and modes](concept-custom.md):
- * [Custom template](concept-custom-template.md) (formerly custom form)
- * [Custom neural](concept-custom-neural.md).
-    * [Custom model build mode](concept-custom.md#build-mode).
-
- * [W-2 prebuilt model](concept-w2.md) (prebuilt-tax.us.w2).
- * [Read prebuilt model](concept-read.md) (prebuilt-read).
- * [Invoice prebuilt model (Spanish)](concept-invoice.md#supported-languages-and-locales) (prebuilt-invoice).
---
-## November 2021
-
-### [**C#**](#tab/csharp)
-
-**Version 4.0.0-beta.2 (2021-11-09)**
-
-| [**Package (NuGet)**](https://www.nuget.org/packages/Azure.AI.FormRecognizer/4.0.0-beta.2) | [**Changelog/Release History**](https://github.com/Azure/azure-sdk-for-net/blob/main/sdk/formrecognizer/Azure.AI.FormRecognizer/CHANGELOG.md) | [**API reference documentation**](/dotnet/api/azure.ai.formrecognizer?view=azure-dotnet-preview&preserve-view=true)
-
-### [**Java**](#tab/java)
-
- | [**Package (Maven)**](https://mvnrepository.com/artifact/com.azure/azure-ai-formrecognizer/4.0.0-beta.2) | [**Changelog/Release History**](https://oss.sonatype.org/service/local/repositories/releases/content/com/azure/azure-ai-formrecognizer/4.0.0-beta.2/azure-ai-formrecognizer-4.0.0-beta.2-changelog.md) | [**API reference documentation**](https://azuresdkdocs.blob.core.windows.net/$web/java/azure-ai-formrecognizer/4.0.0-beta.2/https://docsupdatetracker.net/index.html)
-
-### [**JavaScript**](#tab/javascript)
-
-| [**Package (NPM)**](https://www.npmjs.com/package/@azure/ai-form-recognizer/v/4.0.0-beta.2) | [**Changelog/Release History**](https://github.com/Azure/azure-sdk-for-js/blob/main/sdk/formrecognizer/ai-form-recognizer/CHANGELOG.md) | [**API reference documentation**](/javascript/api/overview/azure/ai-form-recognizer-readme?view=azure-node-preview&preserve-view=true) |
-
-### [**Python**](#tab/python)
-
-| [**Package (PyPI)**](https://pypi.org/project/azure-ai-formrecognizer/3.2.0b2/) | [**Changelog/Release History**](https://github.com/Azure/azure-sdk-for-python/blob/azure-ai-formrecognizer_3.2.0b2/sdk/formrecognizer/azure-ai-formrecognizer/CHANGELOG.md) | [**API reference documentation**](https://azuresdkdocs.blob.core.windows.net/$web/python/azure-ai-formrecognizer/latest/azure.ai.formrecognizer.html)
---
-* **Form Recognizer v3.0 preview SDK release update (beta.2) incorporates bug fixes and minor feature updates.**
---
-## October 2021
-
-* **Form Recognizer v3.0 preview release version 4.0.0-beta.1 (2021-10-07) introduces several new features and capabilities:**
-
- * [**General document**](concept-general-document.md) model is a new API that uses a pretrained model to extract text, tables, structure, and key-value pairs from forms and documents.
- * [**Hotel receipt**](concept-receipt.md) model added to prebuilt receipt processing.
-  * [**Expanded fields for ID document**](concept-id-document.md) - The ID model supports endorsements, restrictions, and vehicle classification extraction from US driver's licenses.
- * [**Signature field**](concept-custom.md) is a new field type in custom forms to detect the presence of a signature in a form field.
- * [**Language Expansion**](language-support.md) Support for 122 languages (print) and 7 languages (handwritten). Form Recognizer Layout and Custom Form expand [supported languages](language-support.md) to 122 with its latest preview. The preview includes text extraction for print text in 49 new languages including Russian, Bulgarian, and other Cyrillic and more Latin languages. In addition extraction of handwritten text now supports seven languages that include English, and new previews of Chinese Simplified, French, German, Italian, Portuguese, and Spanish.
-  * **Tables and text extraction enhancements** Layout now supports extracting single-row tables, also called key-value tables. Text extraction enhancements include better processing of digital PDFs and Machine Readable Zone (MRZ) text in identity documents, along with general performance improvements.
-  * [**Form Recognizer Studio**](https://formrecognizer.appliedai.azure.com) To simplify use of the service, you can now access the Form Recognizer Studio to test the different prebuilt models or label and train a custom model.
-
- * Get started with the new [REST API](https://westus2.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-v2-1/operations/AnalyzeWithCustomForm), [Python](quickstarts/get-started-sdks-rest-api.md?view=form-recog-3.0.0&preserve-view=true), or [.NET](quickstarts/get-started-sdks-rest-api.md?view=form-recog-3.0.0&preserve-view=true) SDK for the v3.0 preview API.
-
-* Form Recognizer model data extraction
-
-  | **Model** | **Text extraction** | **Key-Value pairs** | **Selection Marks** | **Tables** |
-  |---|:-:|:-:|:-:|:-:|
-  | General document | ✓ | ✓ | ✓ | ✓ |
-  | Layout | ✓ | | ✓ | ✓ |
-  | Invoice | ✓ | ✓ | ✓ | ✓ |
-  | Receipt | ✓ | ✓ | | |
-  | ID document | ✓ | ✓ | | |
-  | Business card | ✓ | ✓ | | |
-  | Custom | ✓ | ✓ | ✓ | ✓ |
---
-## September 2021
-
-* [Azure metrics explorer advanced features](../../azure-monitor/essentials/metrics-charts.md) are available on your Form Recognizer resource overview page in the Azure portal.
-
-* Monitoring menu
-
- :::image type="content" source="media/portal-metrics.png" alt-text="Screenshot showing the monitoring menu in the Azure portal":::
-
-* Charts
-
- :::image type="content" source="media/portal-metrics-charts.png" alt-text="Screenshot showing an example metric chart in the Azure portal.":::
-
-* **ID document** model update: given names that include a suffix, with or without a period (full stop), are now processed successfully:
-
-  | Input Text | Result with update |
-  |--|--|
-  | William Isaac Kirby Jr. | **FirstName**: William Isaac<br><br>**LastName**: Kirby Jr. |
-  | Henry Caleb Ross Sr | **FirstName**: Henry Caleb<br><br>**LastName**: Ross Sr |
---
-## July 2021
-
-* System-assigned managed identity support: You can now enable a system-assigned managed identity to grant Form Recognizer limited access to private storage accounts including accounts protected by a Virtual Network (VNet) or firewall or have enabled bring-your-own-storage (BYOS). _See_ [Create and use managed identity for your Form Recognizer resource](managed-identities.md) to learn more.
---
-## June 2021
-
-### [**C#**](#tab/csharp)
-
-| [Reference documentation](/dotnet/api/azure.ai.formrecognizer?view=azure-dotnet&preserve-view=true) | [NuGet package version 3.1.1](https://www.nuget.org/packages/Azure.AI.FormRecognizer) |
-
-### [**Java**](#tab/java)
-
- | [Reference documentation](/java/api/com.azure.ai.formrecognizer.models?view=azure-java-stable&preserve-view=true)| [Maven artifact package dependency version 3.1.1](https://mvnrepository.com/artifact/com.azure/azure-ai-formrecognizer/3.1.1) |
-
-### [**JavaScript**](#tab/javascript)
-
-> [!NOTE]
-> There are no updates to JavaScript SDK v3.1.0.
-
-| [Reference documentation](/javascript/api/@azure/cognitiveservices-formrecognizer/formrecognizerclient?view=azure-node-latest&preserve-view=true)| [npm package dependency form-recognizer 3.1.0](https://www.npmjs.com/package/@azure/ai-form-recognizer) |
-
-### [**Python**](#tab/python)
-
-| [Reference documentation](/python/api/azure-ai-formrecognizer/azure.ai.formrecognizer?view=azure-python&preserve-view=true)| [PyPi azure-ai-formrecognizer 3.1.1](https://pypi.org/project/azure-ai-formrecognizer/) |
---
-* Form Recognizer containers v2.1 released in gated preview with support for six feature containers: **Layout**, **Business Card**, **ID Document**, **Receipt**, **Invoice**, and **Custom**. To use them, you must submit an [online request](https://customervoice.microsoft.com/Pages/ResponsePage.aspx?id=v4j5cvGGr0GRqy180BHbR7en2Ais5pxKtso_Pz4b1_xUNlpBU1lFSjJUMFhKNzVHUUVLN1NIOEZETiQlQCN0PWcu) and receive approval.
-
- * *See* [**Install and run Docker containers for Form Recognizer**](containers/form-recognizer-container-install-run.md?branch=main&tabs=layout) and [**Configure Form Recognizer containers**](containers/form-recognizer-container-configuration.md?branch=main)
-
-* Form Recognizer connector released in preview: The [**Form Recognizer connector**](/connectors/formrecognizer) integrates with [Azure Logic Apps](../../logic-apps/logic-apps-overview.md), [Microsoft Power Automate](/power-automate/getting-started), and [Microsoft Power Apps](/powerapps/powerapps-overview). The connector supports workflow actions and triggers to extract and analyze document data and structure from custom and prebuilt forms, invoices, receipts, business cards and ID documents.
-
-* Form Recognizer SDK v3.1.0 patched to v3.1.1 for C#, Java, and Python. The patch addresses invoices that don't have subline item fields detected such as a `FormField` with `Text` but no `BoundingBox` or `Page` information.
---
-## May 2021
-
-### [**C#**](#tab/csharp)
-
-* **Version 3.1.0 (2021-05-26)**
-
-[**Changelog/Release History**](https://github.com/Azure/azure-sdk-for-net/blob/main/sdk/formrecognizer/Azure.AI.FormRecognizer/CHANGELOG.md#310-2021-05-26)| [Reference documentation](/dotnet/api/azure.ai.formrecognizer?view=azure-dotnet&preserve-view=true) | [NuGet package version 3.0.1](https://www.nuget.org/packages/Azure.AI.FormRecognizer) |
-
-### [**Java**](#tab/java)
-
-* **Version 3.1.0 (2021-05-26)**
-
- [**Changelog/Release History**](https://github.com/Azure/azure-sdk-for-jav#310-2021-05-26) | [Reference documentation](/java/api/com.azure.ai.formrecognizer.models?view=azure-java-stable&preserve-view=true)| [Maven artifact package dependency version 3.1.0](https://mvnrepository.com/artifact/com.azure/azure-ai-formrecognizer) |
-
-### [**JavaScript**](#tab/javascript)
-
-* **Version 3.1.0 (2021-05-26)**
-
-[**Changelog/Release History**](https://github.com/Azure/azure-sdk-for-js/blob/@azure/ai-form-recognizer_4.0.0/sdk/formrecognizer/ai-form-recognizer/CHANGELOG.md#310-2021-05-26)| [Reference documentation](/javascript/api/@azure/cognitiveservices-formrecognizer/formrecognizerclient?view=azure-node-latest&preserve-view=true)| [npm package dependency form-recognizer 3.1.0](https://www.npmjs.com/package/@azure/ai-form-recognizer) |
-
-### [**Python**](#tab/python)
-
-* **Version 3.1.0 (2021-05-26)**
-
-[**Changelog/Release History**](https://github.com/Azure/azure-sdk-for-python/blob/azure-ai-formrecognizer_3.2.0/sdk/formrecognizer/azure-ai-formrecognizer/CHANGELOG.md#310-2021-05-26)| [Reference documentation](/python/api/azure-ai-formrecognizer/azure.ai.formrecognizer?view=azure-python&preserve-view=true)| [PyPi azure-ai-formrecognizer 3.1.0](https://pypi.org/project/azure-ai-formrecognizer/) |
---
-* Form Recognizer 2.1 is generally available. The GA release marks the stability of the changes introduced in prior 2.1 preview package versions. This release enables you to detect and extract information and data from the following document types:
- > [!div class="checklist"]
- >
- > * [Documents](concept-layout.md)
- > * [Receipts](./concept-receipt.md)
- > * [Business cards](./concept-business-card.md)
- > * [Invoices](./concept-invoice.md)
- > * [Identity documents](./concept-id-document.md)
- > * [Custom forms](concept-custom.md)
-
-* To get started, try the [Form Recognizer Sample Tool](https://fott-2-1.azurewebsites.net/) and follow the [quickstart](./quickstarts/try-sample-label-tool.md).
-
-* The updated Layout API table feature adds header recognition with column headers that can span multiple rows. Each table cell has an attribute that indicates whether it's part of a header or not. This update can be used to identify which rows make up the table header.
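
As a rough illustration of where this shows up in the .NET SDK, the following sketch assumes Azure.AI.FormRecognizer 3.1.x, where the header attribute surfaces as `FormTableCell.IsHeader`; the endpoint, key, and document URL are placeholders.

```csharp
using System;
using Azure;
using Azure.AI.FormRecognizer;
using Azure.AI.FormRecognizer.Models;

var client = new FormRecognizerClient(
    new Uri("https://<resource-name>.cognitiveservices.azure.com/"), // placeholder endpoint
    new AzureKeyCredential("<api-key>"));                            // placeholder key

var operation = await client.StartRecognizeContentFromUriAsync(
    new Uri("https://example.com/table-sample.pdf"));                // placeholder URL
Response<FormPageCollection> response = await operation.WaitForCompletionAsync();

foreach (FormPage page in response.Value)
{
    foreach (FormTable table in page.Tables)
    {
        foreach (FormTableCell cell in table.Cells)
        {
            // IsHeader reflects the service's header recognition for each cell.
            if (cell.IsHeader)
            {
                Console.WriteLine($"Header cell [{cell.RowIndex},{cell.ColumnIndex}]: {cell.Text}");
            }
        }
    }
}
```
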
---
-## April 2021
-<!-- markdownlint-disable MD029 -->
-
-### [**C#**](#tab/csharp)
-
-* **NuGet package version 3.1.0-beta.4**
-
-* [**Changelog/Release History**](https://github.com/Azure/azure-sdk-for-net/blob/main/sdk/formrecognizer/Azure.AI.FormRecognizer/CHANGELOG.md#310-beta4-2021-04-06)
-
-* **New methods to analyze data from identity documents**:
-
- **StartRecognizeIdDocumentsFromUriAsync**
-
- **StartRecognizeIdDocumentsAsync**
-
- For a list of field values, _see_ [Fields extracted](./concept-id-document.md) in our Form Recognizer documentation.
-
-* Expanded the set of document languages that can be provided to the **[StartRecognizeContent](/dotnet/api/azure.ai.formrecognizer.formrecognizerclient.startrecognizecontent?view=azure-dotnet-preview&preserve-view=true)** method.
-
-* **New property `Pages` supported by the following classes**:
-
- **[RecognizeBusinessCardsOptions](/dotnet/api/azure.ai.formrecognizer.recognizebusinesscardsoptions?view=azure-dotnet-preview&preserve-view=true)**</br>
- **[RecognizeCustomFormsOptions](/dotnet/api/azure.ai.formrecognizer.recognizecustomformsoptions?view=azure-dotnet-preview&preserve-view=true)**</br>
- **[RecognizeInvoicesOptions](/dotnet/api/azure.ai.formrecognizer.recognizeinvoicesoptions?view=azure-dotnet-preview&preserve-view=true)**</br>
- **[RecognizeReceiptsOptions](/dotnet/api/azure.ai.formrecognizer.recognizereceiptsoptions?view=azure-dotnet-preview&preserve-view=true)**</br>
-
-  The `Pages` property allows you to select individual or a range of pages for multi-page PDF and TIFF documents. For individual pages, enter the page number, for example, `3`. For a range of pages (like page 2 and pages 5-7) enter the page numbers and ranges separated by commas: `2, 5-7`.
-
-* **New property `ReadingOrder` supported for the following class**:
-
- **[RecognizeContentOptions](/dotnet/api/azure.ai.formrecognizer.recognizecontentoptions?view=azure-dotnet-preview&preserve-view=true)**
-
-  The `ReadingOrder` property is an optional parameter that allows you to specify which reading order algorithm (`basic` or `natural`) should be applied to order the extraction of text elements. If not specified, the default value is `basic`. A combined sketch of the `Pages` and `ReadingOrder` options follows.
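
The following combined sketch shows both options on the 3.1.x-era .NET client described above; the endpoint, key, and document URLs are placeholders, and waiting for the results is omitted.

```csharp
using System;
using Azure;
using Azure.AI.FormRecognizer;
using Azure.AI.FormRecognizer.Models;

var client = new FormRecognizerClient(
    new Uri("https://<resource-name>.cognitiveservices.azure.com/"), // placeholder endpoint
    new AzureKeyCredential("<api-key>"));                            // placeholder key

// Pages: analyze only page 2 and pages 5-7 of a multi-page invoice.
var invoiceOptions = new RecognizeInvoicesOptions();
invoiceOptions.Pages.Add("2");
invoiceOptions.Pages.Add("5-7");
var invoiceOperation = await client.StartRecognizeInvoicesFromUriAsync(
    new Uri("https://example.com/invoice.pdf"), invoiceOptions);     // placeholder URL

// ReadingOrder: request the natural reading order for content (layout) extraction.
var contentOptions = new RecognizeContentOptions { ReadingOrder = FormReadingOrder.Natural };
var contentOperation = await client.StartRecognizeContentFromUriAsync(
    new Uri("https://example.com/form.pdf"), contentOptions);        // placeholder URL

// Waiting for completion and reading results is omitted for brevity.
Console.WriteLine($"Started operations {invoiceOperation.Id} and {contentOperation.Id}.");
```
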
-
-### [**Java**](#tab/java)
-
-**Maven artifact package dependency version 3.1.0-beta.3**
-
-* **New methods to analyze data from identity documents**:
-
- **[beginRecognizeIdDocumentsFromUrl]**
-
- **[beginRecognizeIdDocuments]**
-
- For a list of field values, _see_ [Fields extracted](./concept-id-document.md) in our Form Recognizer documentation.
-
-* **Bitmap Image file (.bmp) support for custom forms and training methods in the [FormContentType](/java/api/com.azure.ai.formrecognizer.models.formcontenttype?view=azure-java-preview&preserve-view=true#fields) fields**:
-
- * `image/bmp`
-
- * **New property `Pages` supported by the following classes**:
-
- **[RecognizeBusinessCardsOptions](/java/api/com.azure.ai.formrecognizer.models.recognizebusinesscardsoptions?view=azure-java-preview&preserve-view=true)**</br>
- **[RecognizeCustomFormOptions](/java/api/com.azure.ai.formrecognizer.models.recognizecustomformsoptions?view=azure-java-preview&preserve-view=true)**</br>
- **[RecognizeInvoicesOptions](/java/api/com.azure.ai.formrecognizer.models.recognizeinvoicesoptions?view=azure-java-preview&preserve-view=true)**</br>
- **[RecognizeReceiptsOptions](/java/api/com.azure.ai.formrecognizer.models.recognizereceiptsoptions?view=azure-java-preview&preserve-view=true)**</br>
-
- * The `Pages` property allows you to select individual or a range of pages for multi-page PDF and TIFF documents. For individual pages, enter the page number, for example, `3`. For a range of pages (like page 2 and pages 5-7) enter the page numbers and ranges separated by commas: `2, 5-7`.
-
-* **New keyword argument `ReadingOrder` supported for the following methods**:
-
- * **[beginRecognizeContent](/java/api/com.azure.ai.formrecognizer.formrecognizerclient.beginrecognizecontent?preserve-view=true&view=azure-java-preview)**</br>
- * **[beginRecognizeContentFromUrl](/java/api/com.azure.ai.formrecognizer.formrecognizerclient.beginrecognizecontentfromurl?view=azure-java-preview&preserve-view=true)**</br>
-  * The `ReadingOrder` keyword argument is an optional parameter that allows you to specify which reading order algorithm (`basic` or `natural`) should be applied to order the extraction of text elements. If not specified, the default value is `basic`.
-
-* The client defaults to the latest supported service version, which currently is **2.1-preview.3**.
-
-### [**JavaScript**](#tab/javascript)
-
-**npm package version 3.1.0-beta.3**
-
-* **New methods to analyze data from identity documents**:
-
-  **beginRecognizeIdDocumentsFromUrl**
-
- **beginRecognizeIdDocuments**
-
- For a list of field values, _see_ [Fields extracted](./concept-id-document.md) in our Form Recognizer documentation.
-
-* **New field values added to the FieldValue interface**:
-
-  `gender`: possible values are `M`, `F`, or `X`.</br>
-  `country`: possible values follow the [ISO alpha-3](https://www.iso.org/obp/ui/#search) three-letter country code format.
-
-* New option `pages` supported by all form recognition methods (custom forms and all prebuilt models). The argument allows you to select individual or a range of pages for multi-page PDF and TIFF documents. For individual pages, enter the page number, for example, `3`. For a range of pages (like page 2 and pages 5-7) enter the page numbers and ranges separated by commas: `2, 5-7`.
-
-* Added support for a **[ReadingOrder](/javascript/api/@azure/ai-form-recognizer/formreadingorder?view=azure-node-latest&preserve-view=true)** type to the content recognition methods. This option enables you to control the algorithm that the service uses to determine how recognized lines of text should be ordered. You can specify which reading order algorithm (`basic` or `natural`) should be applied to order the extraction of text elements. If not specified, the default value is `basic`.
-
-* Split **FormField** type into several different interfaces. This update shouldn't cause any API compatibility issues except in certain edge cases (undefined valueType).
-
-* Migrated to the **`2.1-preview.3`** Form Recognizer service endpoint for all REST API calls.
-
-### [**Python**](#tab/python)
-
-**pip package version 3.1.0b4**
-
-* **New methods to analyze data from identity documents**:
-
- **[begin_recognize_id_documents_from_url](/python/api/azure-ai-formrecognizer/azure.ai.formrecognizer.formrecognizerclient?view=azure-python&preserve-view=true)**
-
- **[begin_recognize_id_documents](/python/api/azure-ai-formrecognizer/azure.ai.formrecognizer.formrecognizerclient?view=azure-python&preserve-view=true)**
-
- For a list of field values, _see_ [Fields extracted](./concept-id-document.md) in our Form Recognizer documentation.
-
-* **New field values added to the [FieldValueType](/python/api/azure-ai-formrecognizer/azure.ai.formrecognizer.fieldvaluetype?view=azure-python-preview&preserve-view=true) enum**:
-
-  gender: possible values are `M`, `F`, or `X`.
-
-  country: possible values follow [ISO alpha-3 Country Codes](https://www.iso.org/obp/ui/#search).
-
-* **Bitmap Image file (.bmp) support for custom forms and training methods in the [FormContentType](/python/api/azure-ai-formrecognizer/azure.ai.formrecognizer.formcontenttype?view=azure-python-preview&preserve-view=true) enum**:
-
- image/bmp
-
-* **New keyword argument `pages` supported by the following methods**:
-
- **[begin_recognize_receipts](/python/api/azure-ai-formrecognizer/azure.ai.formrecognizer.formrecognizerclient?view=azure-python-preview&preserve-view=true&branch=main#azure-ai-formrecognizer-formrecognizerclient-begin-recognize-receipts)**
-
- **[begin_recognize_receipts_from_url](/python/api/azure-ai-formrecognizer/azure.ai.formrecognizer.formrecognizerclient?view=azure-python-preview&preserve-view=true#azure-ai-formrecognizer-formrecognizerclient-begin-recognize-receipts-from-url)**
-
- **[begin_recognize_business_cards](/python/api/azure-ai-formrecognizer/azure.ai.formrecognizer.formrecognizerclient?view=azure-python-preview&preserve-view=true#azure-ai-formrecognizer-formrecognizerclient-begin-recognize-business-cards)**
-
- **[begin_recognize_business_cards_from_url](/python/api/azure-ai-formrecognizer/azure.ai.formrecognizer.formrecognizerclient?view=azure-python-preview&preserve-view=true#azure-ai-formrecognizer-formrecognizerclient-begin-recognize-business-cards-from-url)**
-
- **[begin_recognize_invoices](/python/api/azure-ai-formrecognizer/azure.ai.formrecognizer.formrecognizerclient?view=azure-python-preview&preserve-view=true#azure-ai-formrecognizer-formrecognizerclient-begin-recognize-invoices)**
-
- **[begin_recognize_invoices_from_url](/python/api/azure-ai-formrecognizer/azure.ai.formrecognizer.formrecognizerclient?view=azure-python-preview&preserve-view=true#azure-ai-formrecognizer-formrecognizerclient-begin-recognize-invoices-from-url)**
-
- **[begin_recognize_content](/python/api/azure-ai-formrecognizer/azure.ai.formrecognizer.formrecognizerclient?view=azure-python-preview&preserve-view=true#azure-ai-formrecognizer-formrecognizerclient-begin-recognize-content)**
-
- **[begin_recognize_content_from_url](/python/api/azure-ai-formrecognizer/azure.ai.formrecognizer.formrecognizerclient?view=azure-python-preview&preserve-view=true#azure-ai-formrecognizer-formrecognizerclient-begin-recognize-content-from-url-)**
-
- The `pages` keyword argument allows you to select individual or a range of pages for multi-page PDF and TIFF documents. For individual pages, enter the page number, for example, `3`. For a range of pages (like page 2 and pages 5-7) enter the page numbers and ranges separated by commas: `2, 5-7`.
-
-* **New keyword argument `readingOrder` supported for the following methods**:
-
- **[begin_recognize_content](/python/api/azure-ai-formrecognizer/azure.ai.formrecognizer.formrecognizerclient?view=azure-python-preview&preserve-view=true#azure-ai-formrecognizer-formrecognizerclient-begin-recognize-content)**
-
- **[begin_recognize_content_from_url](/python/api/azure-ai-formrecognizer/azure.ai.formrecognizer.formrecognizerclient?view=azure-python-preview&preserve-view=true#azure-ai-formrecognizer-formrecognizerclient-begin-recognize-content-from-url)**
-
-  The `readingOrder` keyword argument is an optional parameter that allows you to specify which reading order algorithm (`basic` or `natural`) should be applied to order the extraction of text elements. If not specified, the default value is `basic`.
---
-* **SDK preview updates for API version `2.1-preview.3` introduces feature updates and enhancements.**
---
-## March 2021
-
- **Form Recognizer v2.1 public preview v2.1-preview.3 has been released and includes the following features:**
-
-* **New prebuilt ID model** The new prebuilt ID model enables customers to take IDs and return structured data to automate processing. It combines our powerful Optical Character Recognition (OCR) capabilities with ID understanding models to extract key information from passports and U.S. driver licenses.
-
- [Learn more about the prebuilt ID model](./concept-id-document.md)
-
- :::image type="content" source="./media/id-canada-passport-example.png" alt-text="Screenshot of a sample passport." lightbox="./media/id-canada-passport-example.png":::
-
-* **Line-item extraction for invoice model** - The prebuilt invoice model now supports line-item extraction; it extracts full items and their parts, such as description, amount, quantity, product ID, and date. With a simple API/SDK call, you can extract useful data from your invoices: text, tables, key-value pairs, and line items (see the sketch after this item).
-
- [Learn more about the invoice model](./concept-invoice.md)
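
As a rough sketch of consuming line items from the SDK output, the following assumes the Azure.AI.FormRecognizer .NET client; the endpoint, key, and invoice URL are placeholders, and the `Items`/`Description` field names are used here as typical prebuilt invoice fields rather than guaranteed output.

```csharp
using System;
using Azure;
using Azure.AI.FormRecognizer;
using Azure.AI.FormRecognizer.Models;

var client = new FormRecognizerClient(
    new Uri("https://<resource-name>.cognitiveservices.azure.com/"), // placeholder endpoint
    new AzureKeyCredential("<api-key>"));                            // placeholder key

var operation = await client.StartRecognizeInvoicesFromUriAsync(
    new Uri("https://example.com/invoice.pdf"));                     // placeholder URL
Response<RecognizedFormCollection> response = await operation.WaitForCompletionAsync();

foreach (RecognizedForm invoice in response.Value)
{
    // Line items are returned as a list-valued "Items" field; each entry is a dictionary
    // of sub-fields such as "Description", "Quantity", and "Amount".
    if (invoice.Fields.TryGetValue("Items", out FormField items) &&
        items.Value.ValueType == FieldValueType.List)
    {
        foreach (FormField item in items.Value.AsList())
        {
            if (item.Value.AsDictionary().TryGetValue("Description", out FormField description))
            {
                Console.WriteLine($"Line item: {description.Value.AsString()}");
            }
        }
    }
}
```
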
-
-* **Supervised table labeling and training, empty-value labeling** - In addition to Form Recognizer's [state-of-the-art deep learning automatic table extraction capabilities](https://techcommunity.microsoft.com/t5/azure-ai/enhanced-table-extraction-from-documents-with-form-recognizer/ba-p/2058011), it now enables customers to label and train on tables. This new release includes the ability to label and train on line items/tables (dynamic and fixed) and train a custom model to extract key-value pairs and line items. Once a model is trained, the model extracts line items as part of the JSON output in the documentResults section.
-
- :::image type="content" source="./media/table-labeling.png" alt-text="Screenshot of the table labeling feature." lightbox="./media/table-labeling.png":::
-
- In addition to labeling tables, you can now label empty values and regions. If some documents in your training set don't have values for certain fields, you can label them so that your model knows to extract values properly from analyzed documents.
-
-* **Support for 66 new languages** - The Layout API and Custom Models for Form Recognizer now support 73 languages.
-
- [Learn more about Form Recognizer's language support](language-support.md)
-
-* **Natural reading order, handwriting classification, and page selection** - With this update, you can choose to get the text line outputs in the natural reading order instead of the default left-to-right and top-to-bottom ordering. Use the new readingOrder query parameter and set it to the "natural" value for a more human-friendly reading order output. In addition, for Latin languages, Form Recognizer classifies text lines as handwritten style or not and gives a confidence score.
-
-* **Prebuilt receipt model quality improvements** This update includes many quality improvements for the prebuilt Receipt model, especially around line item extraction.
---
-## November 2020
-
-* **Form Recognizer v2.1-preview.2 has been released and includes the following features:**
-
- * **New prebuilt invoice model** - The new prebuilt Invoice model enables customers to take invoices in various formats and return structured data to automate the invoice processing. It combines our powerful Optical Character Recognition (OCR) capabilities with invoice understanding deep learning models to extract key information from invoices in English. It extracts key text, tables, and information such as customer, vendor, invoice ID, invoice due date, total, amount due, tax amount, ship to, and bill to.
-
- > [Learn more about the prebuilt invoice model](./concept-invoice.md)
-
- :::image type="content" source="./media/invoice-example.jpg" alt-text="Screenshot of a sample invoice." lightbox="./media/invoice-example.jpg":::
-
- * **Enhanced table extraction** - Form Recognizer now provides enhanced table extraction, which combines our powerful Optical Character Recognition (OCR) capabilities with a deep learning table extraction model. Form Recognizer can extract data from tables, including complex tables with merged columns, rows, no borders and more.
-
- :::image type="content" source="./media/tables-example.jpg" alt-text="Screenshot of tables analysis." lightbox="./media/tables-example.jpg":::
-
- > [Learn more about Layout extraction](concept-layout.md)
-
- * **Client library update** - The latest versions of the [client libraries](/azure/applied-ai-services/form-recognizer/how-to-guides/v2-1-sdk-rest-api) for .NET, Python, Java, and JavaScript support the Form Recognizer 2.1 API.
-  * **New language supported: Japanese** - Japanese (`ja`) is now supported for `AnalyzeLayout` and `AnalyzeCustomForm`. [Language support](language-support.md)
- * **Text line style indication (handwritten/other) (Latin languages only)** - Form Recognizer now outputs an `appearance` object classifying whether each text line is handwritten style or not, along with a confidence score. This feature is supported only for Latin languages.
- * **Quality improvements** - Extraction improvements including single digit extraction improvements.
- * **New try-it-out feature in the Form Recognizer Sample and Labeling Tool** - Ability to try out prebuilt Invoice, Receipt, and Business Card models and the Layout API using the Form Recognizer Sample Labeling tool. See how your data is extracted without writing any code.
-
- * [**Try the Form Recognizer Sample Labeling tool**](https://fott-2-1.azurewebsites.net)
-
- :::image type="content" source="media/ui-preview.jpg" alt-text="Screenshot of the Sample Labeling tool homepage.":::
-
-  * **Feedback Loop** - When analyzing files via the Sample Labeling tool, you can now add them to the training set, adjust the labels if necessary, and train to improve the model.
-  * **Auto Label Documents** - Automatically labels added documents based on previously labeled documents in the project.
---
-## August 2020
-
-* **Form Recognizer `v2.1-preview.1` has been released and includes the following features:**
-
- * **REST API reference is available** - View the [`v2.1-preview.1 reference`](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-v2-1/operations/AnalyzeBusinessCardAsync)
-  * **New languages supported** - In addition to English, the following [languages](language-support.md) are now supported for `Layout` and `Train Custom Model`: English (`en`), Chinese (Simplified) (`zh-Hans`), Dutch (`nl`), French (`fr`), German (`de`), Italian (`it`), Portuguese (`pt`), and Spanish (`es`).
-  * **Checkbox / Selection Mark detection** - Form Recognizer supports detection and extraction of selection marks such as check boxes and radio buttons. Selection Marks are extracted in `Layout` and you can now also label and train in `Train Custom Model` - _Train with Labels_ to extract key-value pairs for selection marks.
- * **Model Compose** - allows multiple models to be composed and called with a single model ID. When you submit a document to be analyzed with a composed model ID, a classification step is first performed to route it to the correct custom model. Model Compose is available for `Train Custom Model` - _Train with labels_.
- * **Model name** - add a friendly name to your custom models for easier management and tracking.
-  * **[New prebuilt model for Business Cards](./concept-business-card.md)** for extracting common fields in English-language business cards.
-  * **[New locales for prebuilt Receipts](./concept-receipt.md)** in addition to EN-US, support is now available for EN-AU, EN-CA, EN-GB, and EN-IN.
- * **Quality improvements** for `Layout`, `Train Custom Model` - _Train without Labels_ and _Train with Labels_.
-
-* **v2.0** includes the following update:
-
-  * The [client libraries](/azure/applied-ai-services/form-recognizer/how-to-guides/v2-1-sdk-rest-api) for .NET, Python, Java, and JavaScript have entered General Availability.
-
- **New samples** are available on GitHub.
-
- * The [Knowledge Extraction Recipes - Forms Playbook](https://github.com/microsoft/knowledge-extraction-recipes-forms) collects best practices from real Form Recognizer customer engagements and provides usable code samples, checklists, and sample pipelines used in developing these projects.
- * The [Sample Labeling tool](https://github.com/microsoft/OCR-Form-Tools) has been updated to support the new v2.1 functionality. See this [quickstart](label-tool.md) for getting started with the tool.
- * The [Intelligent Kiosk](https://github.com/microsoft/Cognitive-Samples-IntelligentKiosk/blob/master/Documentation/FormRecognizer.md) Form Recognizer sample shows how to integrate `Analyze Receipt` and `Train Custom Model` - _Train without Labels_.
---
-## July 2020
-
-<!-- markdownlint-disable MD004 -->
-* **Form Recognizer v2.0 reference available** - View the [v2.0 API Reference](https://westus2.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-v2/operations/AnalyzeWithCustomForm) and the updated SDKs for [.NET](/dotnet/api/overview/azure/ai.formrecognizer-readme), [Python](/python/api/overview/azure/), [Java](/java/api/overview/azure/ai-formrecognizer-readme), and [JavaScript](/javascript/api/overview/azure/).
-  * **Table enhancements and Extraction enhancements** - includes accuracy improvements and table extraction enhancements, specifically the capability to learn table headers and structures in _custom train without labels_.
-
- * **Currency support** - Detection and extraction of global currency symbols.
- * **Azure Gov** - Form Recognizer is now also available in Azure Gov.
- * **Enhanced security features**:
- * **Bring your own key** - Form Recognizer automatically encrypts your data when persisted to the cloud to protect it and to help you to meet your organizational security and compliance commitments. By default, your subscription uses Microsoft-managed encryption keys. You can now also manage your subscription with your own encryption keys. [Customer-managed keys, also known as bring your own key (BYOK)](./encrypt-data-at-rest.md), offer greater flexibility to create, rotate, disable, and revoke access controls. You can also audit the encryption keys used to protect your data.
-    * **Private endpoints** - Enables clients on a virtual network to [securely access data over a Private Link](../../private-link/private-link-overview.md).
---
-## June 2020
-
-* **CopyModel API added to client SDKs** - You can now use the client SDKs to copy models from one subscription to another. See [Back up and recover models](./disaster-recovery.md) for general information on this feature.
-* **Azure Active Directory integration** - You can now use your Azure AD credentials to authenticate your Form Recognizer client objects in the SDKs.
-* **SDK-specific changes** - This change includes both minor feature additions and breaking changes. For more information, _see_ the SDK changelogs.
- * [C# SDK Preview 3 changelog](https://github.com/Azure/azure-sdk-for-net/blob/master/sdk/formrecognizer/Azure.AI.FormRecognizer/CHANGELOG.md)
- * [Python SDK Preview 3 changelog](https://github.com/Azure/azure-sdk-for-python/blob/master/sdk/formrecognizer/azure-ai-formrecognizer/CHANGELOG.md)
- * [Java SDK Preview 3 changelog](https://github.com/Azure/azure-sdk-for-jav)
- * [JavaScript SDK Preview 3 changelog](https://github.com/Azure/azure-sdk-for-js/blob/%40azure/ai-form-recognizer_1.0.0-preview.3/sdk/formrecognizer/ai-form-recognizer/CHANGELOG.md)
---
-## April 2020
-
-* **SDK support for Form Recognizer API v2.0 Public Preview** - This month we expanded our service support to include a preview SDK for Form Recognizer v2.0 release. Use these links to get started with your language of choice:
-* [.NET SDK](/dotnet/api/overview/azure/ai.formrecognizer-readme)
-* [Java SDK](/java/api/overview/azure/ai-formrecognizer-readme)
-* [Python SDK](/python/api/overview/azure/ai-formrecognizer-readme)
-* [JavaScript SDK](/javascript/api/overview/azure/ai-form-recognizer-readme)
-
-The new SDK supports all the features of the v2.0 REST API for Form Recognizer. You can share your feedback on the SDKs through the [SDK Feedback form](https://aka.ms/FR_SDK_v1_feedback).
-
-* **Copy Custom Model** - You can now copy models between regions and subscriptions using the new Copy Custom Model feature. Before invoking the Copy Custom Model API, you must first obtain authorization to copy into the target resource. This authorization is secured by calling the Copy Authorization operation against the target resource endpoint. A sketch of the two-step flow follows the REST API links below.
-
-* [Generate a copy authorization](https://westus2.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-v2/operations/CopyCustomFormModelAuthorization) REST API
-* [Copy a custom model](https://westus2.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-v2/operations/CopyCustomFormModel) REST API
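
For reference, the following sketch shows the same two-step flow as it surfaces in the .NET `FormTrainingClient` from the 3.x client library; the endpoints, keys, resource ID, region, and model ID are placeholders, and this is an outline of the flow rather than a verified sample.

```csharp
using System;
using Azure;
using Azure.AI.FormRecognizer.Training;

// Source and target training clients (placeholder endpoints and keys).
var sourceClient = new FormTrainingClient(
    new Uri("https://<source-resource>.cognitiveservices.azure.com/"),
    new AzureKeyCredential("<source-key>"));
var targetClient = new FormTrainingClient(
    new Uri("https://<target-resource>.cognitiveservices.azure.com/"),
    new AzureKeyCredential("<target-key>"));

// 1. Ask the target resource for authorization to receive the copied model.
CopyAuthorization authorization = (await targetClient.GetCopyAuthorizationAsync(
    "<target-resource-id>",        // Azure resource ID of the target Form Recognizer resource
    "<target-resource-region>"))   // for example, "westus2"
    .Value;

// 2. Start the copy from the source resource into the target, then wait for it to finish.
CopyModelOperation copyOperation = await sourceClient.StartCopyModelAsync("<model-id>", authorization);
Response<CustomFormModelInfo> copied = await copyOperation.WaitForCompletionAsync();
Console.WriteLine($"Copied model ID on target: {copied.Value.ModelId}");
```
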
-
-* Security improvements
-
-* Customer-Managed Keys are now available for Form Recognizer. For more information, see [Data encryption at rest for Form Recognizer](./encrypt-data-at-rest.md).
-* Use Managed Identities for access to Azure resources with Azure Active Directory. For more information, see [Authorize access to managed identities](../../cognitive-services/authentication.md#authorize-access-to-managed-identities).
---
-## March 2020
-
-* **Value types for labeling** You can now specify the types of values you're labeling with the Form Recognizer Sample Labeling tool. The following value types and variations are currently supported:
-* `string`
- * default, `no-whitespaces`, `alphanumeric`
-* `number`
- * default, `currency`
-* `date`
- * default, `dmy`, `mdy`, `ymd`
-* `time`
-* `integer`
-
-See the [Sample Labeling tool](label-tool.md#specify-tag-value-types) guide to learn how to use this feature.
-
-* **Table visualization** The Sample Labeling tool now displays tables that were recognized in the document. This feature lets you view recognized and extracted tables from the document prior to labeling and analyzing. This feature can be toggled on/off using the layers option.
-
-* The following image is an example of how tables are recognized and extracted:
-
- :::image type="content" source="media/whats-new/table-viz.png" alt-text="Screenshot of table visualization using the Sample Labeling tool.":::
-
-* The extracted tables are available in the JSON output under `"pageResults"`.
-
- > [!IMPORTANT]
- > Labeling tables isn't supported. If tables are not recognized and extracted automatically, you can only label them as key/value pairs. When labeling tables as key/value pairs, label each cell as a unique value.
-
-* Extraction enhancements
-
-* This release includes extraction enhancements and accuracy improvements, specifically, the capability to label and extract multiple key/value pairs in the same line of text.
-
-* Sample Labeling tool is now open-source
-
-* The Form Recognizer Sample Labeling tool is now available as an open-source project. You can integrate it within your solutions and make customer-specific changes to meet your needs.
-
-* For more information about the Form Recognizer Sample Labeling tool, review the documentation available on [GitHub](https://github.com/microsoft/OCR-Form-Tools/blob/master/README.md).
-
-* TLS 1.2 enforcement
-
-* TLS 1.2 is now enforced for all HTTP requests to this service. For more information, see [Azure Cognitive Services security](../../cognitive-services/security-features.md).
---
-## January 2020
-
-This release introduces Form Recognizer 2.0. In the next sections, you'll find more information about new features, enhancements, and changes.
-
-* New features
-
- * **Custom model**
- * **Train with labels** You can now train a custom model with manually labeled data. This method results in better-performing models and can produce models that work with complex forms or forms containing values without keys.
- * **Asynchronous API** You can use async API calls to train with and analyze large data sets and files.
- * **TIFF file support** You can now train with and extract data from TIFF documents.
- * **Extraction accuracy improvements**
-
- * **Prebuilt receipt model**
- * **Tip amounts** You can now extract tip amounts and other handwritten values.
- * **Line item extraction** You can extract line item values from receipts.
- * **Confidence values** You can view the model's confidence for each extracted value.
- * **Extraction accuracy improvements**
-
- * **Layout extraction** You can now use the Layout API to extract text data and table data from your forms.
-
-* Custom model API changes
-
-  All of the APIs for training and using custom models have been renamed, and some synchronous methods are now asynchronous. The following are major changes (a polling sketch follows this list):
-
- * The process of training a model is now asynchronous. You initiate training through the **/custom/models** API call. This call returns an operation ID, which you can pass into **custom/models/{modelID}** to return the training results.
- * Key/value extraction is now initiated by the **/custom/models/{modelID}/analyze** API call. This call returns an operation ID, which you can pass into **custom/models/{modelID}/analyzeResults/{resultID}** to return the extraction results.
- * Operation IDs for the Train operation are now found in the **Location** header of HTTP responses, not the **Operation-Location** header.
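
A rough `HttpClient` sketch of that analyze-and-poll pattern follows; the endpoint, key, model ID, and file name are placeholders, and the `Operation-Location` header and status values reflect the v2.0 REST behavior as commonly documented, so treat this as an outline rather than a verified sample.

```csharp
using System;
using System.IO;
using System.Linq;
using System.Net.Http;
using System.Net.Http.Headers;
using System.Text.Json;
using System.Threading.Tasks;

var endpoint = "https://<resource-name>.cognitiveservices.azure.com"; // placeholder endpoint
var modelId = "<model-id>";                                           // placeholder model ID

using var http = new HttpClient();
http.DefaultRequestHeaders.Add("Ocp-Apim-Subscription-Key", "<api-key>"); // placeholder key

// 1. Start the analysis. The service answers with 202 Accepted and an Operation-Location
//    header pointing at .../custom/models/{modelId}/analyzeResults/{resultId}.
var content = new ByteArrayContent(await File.ReadAllBytesAsync("form.pdf")); // placeholder file
content.Headers.ContentType = new MediaTypeHeaderValue("application/pdf");
HttpResponseMessage start = await http.PostAsync(
    $"{endpoint}/formrecognizer/v2.0/custom/models/{modelId}/analyze", content);
string resultUrl = start.Headers.GetValues("Operation-Location").First();

// 2. Poll the result URL until the operation reaches a terminal status.
string json;
string status;
do
{
    await Task.Delay(TimeSpan.FromSeconds(1));
    json = await http.GetStringAsync(resultUrl);
    status = JsonDocument.Parse(json).RootElement.GetProperty("status").GetString();
}
while (status == "notStarted" || status == "running");

Console.WriteLine(json);
```
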
-
-* Receipt API changes
-
- * The APIs for reading sales receipts have been renamed.
-
- * Receipt data extraction is now initiated by the **/prebuilt/receipt/analyze** API call. This call returns an operation ID, which you can pass into **/prebuilt/receipt/analyzeResults/{resultID}** to return the extraction results.
-
-* Output format changes
-
- * The JSON responses for all API calls have new formats. Some keys and values have been added, removed, or renamed. See the quickstarts for examples of the current JSON formats.
---
-## Next steps
--
-* Try processing your own forms and documents with the [Form Recognizer Studio](https://formrecognizer.appliedai.azure.com/studio)
-
-* Complete a [Form Recognizer quickstart](quickstarts/get-started-sdks-rest-api.md?view=form-recog-3.0.0&preserve-view=true) and get started creating a document processing app in the development language of your choice.
---
-* Try processing your own forms and documents with the [Form Recognizer Sample Labeling tool](https://fott-2-1.azurewebsites.net/)
-
-* Complete a [Form Recognizer quickstart](quickstarts/get-started-sdks-rest-api.md?view=form-recog-2.1.0&preserve-view=true) and get started creating a document processing app in the development language of your choice.
--
applied-ai-services How To Cache Token https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/immersive-reader/how-to-cache-token.md
- Title: "Cache the authentication token"-
-description: This article will show you how to cache the authentication token.
------ Previously updated : 01/14/2020----
-# How to cache the authentication token
-
-This article demonstrates how to cache the authentication token in order to improve performance of your application.
-
-## Using ASP.NET
-
-Import the **Microsoft.Identity.Client** NuGet package, which is used to acquire a token.
-
-Create a confidential client application property.
-
-```csharp
-private IConfidentialClientApplication _confidentialClientApplication;
-private IConfidentialClientApplication ConfidentialClientApplication
-{
- get {
- if (_confidentialClientApplication == null) {
- _confidentialClientApplication = ConfidentialClientApplicationBuilder.Create(ClientId)
- .WithClientSecret(ClientSecret)
- .WithAuthority($"https://login.windows.net/{TenantId}")
- .Build();
- }
-
- return _confidentialClientApplication;
- }
-}
-```
-
-Next, use the following code to acquire an `AuthenticationResult`, using the authentication values you got when you [created the Immersive Reader resource](./how-to-create-immersive-reader.md).
-
-> [!IMPORTANT]
-> The [Microsoft.IdentityModel.Clients.ActiveDirectory](https://www.nuget.org/packages/Microsoft.IdentityModel.Clients.ActiveDirectory) NuGet package and Azure AD Authentication Library (ADAL) have been deprecated. No new features have been added since June 30, 2020. We strongly encourage you to upgrade. For more information, see the [migration guide](../../active-directory/develop/msal-migration.md).
--
-```csharp
-public async Task<string> GetTokenAsync()
-{
- const string resource = "https://cognitiveservices.azure.com/";
-
- var authResult = await ConfidentialClientApplication.AcquireTokenForClient(
- new[] { $"{resource}/.default" })
- .ExecuteAsync()
- .ConfigureAwait(false);
-
- return authResult.AccessToken;
-}
-```
-
-The `AuthenticationResult` object has an `AccessToken` property, which is the token you use when launching the Immersive Reader with the SDK. It also has an `ExpiresOn` property that indicates when the token expires. Before launching the Immersive Reader, check whether the cached token has expired, and acquire a new one only if it has.
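-
-For example, you could keep the most recent `AuthenticationResult` in a field and call Azure AD again only when the cached token is close to expiring. The following is a minimal sketch that builds on the `ConfidentialClientApplication` property and `GetTokenAsync` method above; the member names and the five-minute refresh buffer are illustrative choices, not part of the SDK.
-
-```csharp
-private AuthenticationResult _cachedAuthResult;
-
-public async Task<string> GetCachedTokenAsync()
-{
-    // Refresh a few minutes early so the token doesn't expire mid-request.
-    // (_cachedAuthResult and the 5-minute buffer are illustrative choices.)
-    if (_cachedAuthResult == null || _cachedAuthResult.ExpiresOn <= DateTimeOffset.UtcNow.AddMinutes(5))
-    {
-        const string resource = "https://cognitiveservices.azure.com/";
-
-        _cachedAuthResult = await ConfidentialClientApplication.AcquireTokenForClient(
-            new[] { $"{resource}/.default" })
-            .ExecuteAsync()
-            .ConfigureAwait(false);
-    }
-
-    return _cachedAuthResult.AccessToken;
-}
-```
-
-If multiple requests can refresh the token at the same time, consider guarding this cache against concurrent access (for example, with a `SemaphoreSlim`).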
-
-## Using Node.js
-
-Add the [**request**](https://www.npmjs.com/package/request) npm package to your project. Use the following code to acquire a token, using the authentication values you got when you [created the Immersive Reader resource](./how-to-create-immersive-reader.md).
-
-```javascript
-router.get('/token', function(req, res) {
- request.post(
- {
- headers: { 'content-type': 'application/x-www-form-urlencoded' },
- url: `https://login.windows.net/${TENANT_ID}/oauth2/token`,
- form: {
- grant_type: 'client_credentials',
- client_id: CLIENT_ID,
- client_secret: CLIENT_SECRET,
- resource: 'https://cognitiveservices.azure.com/'
- }
- },
- function(err, resp, json) {
- const result = JSON.parse(json);
- return res.send({
- access_token: result.access_token,
- expires_on: result.expires_on
- });
- }
- );
-});
-```
-
-The `expires_on` property is the date and time at which the token expires, expressed as the number of seconds since January 1, 1970 UTC. Use this value to determine whether your token has expired before attempting to acquire a new one.
-
-```javascript
-// CREDENTIALS and refreshCredentials() are defined by your app: CREDENTIALS holds the
-// latest token response ({ access_token, expires_on }) and refreshCredentials() fetches
-// a new one (for example, by calling the /token route above).
-async function getToken() {
-    // expires_on is in seconds since the Unix epoch, so compare against Date.now() / 1000.
-    if (Date.now() / 1000 > CREDENTIALS.expires_on) {
-        CREDENTIALS = await refreshCredentials();
-    }
-    return CREDENTIALS.access_token;
-}
-```
-
-## Next steps
-
-* Explore the [Immersive Reader SDK Reference](./reference.md)
applied-ai-services How To Configure Read Aloud https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/immersive-reader/how-to-configure-read-aloud.md
- Title: "Configure Read Aloud"-
-description: This article will show you how to configure the various options for Read Aloud.
------ Previously updated : 06/29/2020---
-# How to configure Read Aloud
-
-This article demonstrates how to configure the various options for Read Aloud in the Immersive Reader.
-
-## Automatically start Read Aloud
-
-The `options` parameter contains all of the flags that can be used to configure Read Aloud. Set `autoplay` to `true` to enable automatically starting Read Aloud after launching the Immersive Reader.
-
-```typescript
-const options = {
- readAloudOptions: {
- autoplay: true
- }
-};
-
-ImmersiveReader.launchAsync(YOUR_TOKEN, YOUR_SUBDOMAIN, YOUR_DATA, options);
-```
-
-> [!NOTE]
-> Due to browser limitations, automatic playback is not supported in Safari.
-
-## Configure the voice
-
-Set `voice` to either `male` or `female`. Not all languages support both voices. For more information, see the [Language Support](./language-support.md) page.
-
-```typescript
-const options = {
- readAloudOptions: {
- voice: 'female'
- }
-};
-```
-
-## Configure playback speed
-
-Set `speed` to a number between `0.5` (50%) and `2.5` (250%) inclusive. Values outside this range will get clamped to either 0.5 or 2.5.
-
-```typescript
-const options = {
- readAloudOptions: {
- speed: 1.5
- }
-};
-```
-
-## Next steps
-
-* Explore the [Immersive Reader SDK Reference](./reference.md)
applied-ai-services How To Configure Translation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/immersive-reader/how-to-configure-translation.md
- Title: "Configure translation"-
-description: This article will show you how to configure the various options for translation.
------ Previously updated : 01/06/2022---
-# How to configure Translation
-
-This article demonstrates how to configure the various options for Translation in the Immersive Reader.
-
-## Configure Translation language
-
-The `options` parameter contains all of the flags that can be used to configure Translation. Set the `language` parameter to the language you wish to translate to. See the [Language Support](./language-support.md) page for the full list of supported languages.
-
-```typescript
-const options = {
- translationOptions: {
- language: 'fr-FR'
- }
-};
-
-ImmersiveReader.launchAsync(YOUR_TOKEN, YOUR_SUBDOMAIN, YOUR_DATA, options);
-```
-
-## Automatically translate the document on load
-
-Set `autoEnableDocumentTranslation` to `true` to enable automatically translating the entire document when the Immersive Reader loads.
-
-```typescript
-const options = {
- translationOptions: {
- autoEnableDocumentTranslation: true
- }
-};
-```
-
-## Automatically enable word translation
-
-Set `autoEnableWordTranslation` to `true` to enable single word translation.
-
-```typescript
-const options = {
- translationOptions: {
- autoEnableWordTranslation: true
- }
-};
-```
-
-## Next steps
-
-* Explore the [Immersive Reader SDK Reference](./reference.md)
applied-ai-services How To Create Immersive Reader https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/immersive-reader/how-to-create-immersive-reader.md
- Title: "Create an Immersive Reader Resource"-
-description: This article shows you how to create a new Immersive Reader resource with a custom subdomain and then configure Azure AD in your Azure tenant.
------- Previously updated : 03/31/2023---
-# Create an Immersive Reader resource and configure Azure Active Directory authentication
-
-In this article, we provide a script that creates an Immersive Reader resource and configures Azure Active Directory (Azure AD) authentication. Each time an Immersive Reader resource is created, whether with this script or in the portal, it must also be configured with Azure AD permissions.
-
-The script is designed to create and configure all the necessary Immersive Reader and Azure AD resources for you in one step. However, you can also use it just to configure Azure AD authentication for an existing Immersive Reader resource, if, for instance, you already created one in the Azure portal.
-
-Some customers need to create multiple Immersive Reader resources, for example for development and production environments, or for the different regions where their service is deployed. In those cases, you can run the script again to create more Immersive Reader resources and configure them with the Azure AD permissions.
-
-The script is designed to be flexible. It first looks for existing Immersive Reader and Azure AD resources in your subscription, and creates them only if they don't already exist. If it's your first time creating an Immersive Reader resource, the script does everything you need. If you only want to configure Azure AD for an existing Immersive Reader resource that was created in the portal, it does that too. You can also use it to create and configure multiple Immersive Reader resources.
-
-## Permissions
-
-The listed **Owner** of your Azure subscription has all the required permissions to create an Immersive Reader resource and configure Azure AD authentication.
-
-If you aren't an owner, the following scope-specific permissions are required:
-
-* **Contributor**. You need to have at least a Contributor role associated with the Azure subscription:
-
- :::image type="content" source="media/contributor-role.png" alt-text="Screenshot of contributor built-in role description.":::
-
-* **Application Developer**. You need to have at least an Application Developer role associated in Azure AD:
-
- :::image type="content" source="media/application-developer-role.png" alt-text="{alt-text}":::
-
-For more information, see [Azure AD built-in roles](../../active-directory/roles/permissions-reference.md#application-developer).
-
-## Set up PowerShell environment
-
-1. Start by opening the [Azure Cloud Shell](../../cloud-shell/overview.md). Ensure that Cloud Shell is set to PowerShell in the upper-left hand dropdown or by typing `pwsh`.
-
-1. Copy and paste the following code snippet into the shell.
-
- ```azurepowershell-interactive
- function Create-ImmersiveReaderResource(
- [Parameter(Mandatory=$true, Position=0)] [String] $SubscriptionName,
- [Parameter(Mandatory=$true)] [String] $ResourceName,
- [Parameter(Mandatory=$true)] [String] $ResourceSubdomain,
- [Parameter(Mandatory=$true)] [String] $ResourceSKU,
- [Parameter(Mandatory=$true)] [String] $ResourceLocation,
- [Parameter(Mandatory=$true)] [String] $ResourceGroupName,
- [Parameter(Mandatory=$true)] [String] $ResourceGroupLocation,
- [Parameter(Mandatory=$true)] [String] $AADAppDisplayName,
- [Parameter(Mandatory=$true)] [String] $AADAppIdentifierUri,
- [Parameter(Mandatory=$true)] [String] $AADAppClientSecretExpiration
- )
- {
- $unused = ''
- if (-not [System.Uri]::TryCreate($AADAppIdentifierUri, [System.UriKind]::Absolute, [ref] $unused)) {
- throw "Error: AADAppIdentifierUri must be a valid URI"
- }
-
- Write-Host "Setting the active subscription to '$SubscriptionName'"
- $subscriptionExists = Get-AzSubscription -SubscriptionName $SubscriptionName
- if (-not $subscriptionExists) {
- throw "Error: Subscription does not exist"
- }
- az account set --subscription $SubscriptionName
-
- $resourceGroupExists = az group exists --name $ResourceGroupName
- if ($resourceGroupExists -eq "false") {
- Write-Host "Resource group does not exist. Creating resource group"
- $groupResult = az group create --name $ResourceGroupName --location $ResourceGroupLocation
- if (-not $groupResult) {
- throw "Error: Failed to create resource group"
- }
- Write-Host "Resource group created successfully"
- }
-
- # Create an Immersive Reader resource if it doesn't already exist
- $resourceId = az cognitiveservices account show --resource-group $ResourceGroupName --name $ResourceName --query "id" -o tsv
- if (-not $resourceId) {
- Write-Host "Creating the new Immersive Reader resource '$ResourceName' (SKU '$ResourceSKU') in '$ResourceLocation' with subdomain '$ResourceSubdomain'"
- $resourceId = az cognitiveservices account create `
- --name $ResourceName `
- --resource-group $ResourceGroupName `
- --kind ImmersiveReader `
- --sku $ResourceSKU `
- --location $ResourceLocation `
- --custom-domain $ResourceSubdomain `
- --query "id" `
- -o tsv
-
- if (-not $resourceId) {
- throw "Error: Failed to create Immersive Reader resource"
- }
- Write-Host "Immersive Reader resource created successfully"
- }
-
- # Create an Azure Active Directory app if it doesn't already exist
- $clientId = az ad app show --id $AADAppIdentifierUri --query "appId" -o tsv
- if (-not $clientId) {
- Write-Host "Creating new Azure Active Directory app"
- $clientId = az ad app create --display-name $AADAppDisplayName --identifier-uris $AADAppIdentifierUri --query "appId" -o tsv
- if (-not $clientId) {
- throw "Error: Failed to create Azure Active Directory application"
- }
- Write-Host "Azure Active Directory application created successfully."
-
- $clientSecret = az ad app credential reset --id $clientId --end-date "$AADAppClientSecretExpiration" --query "password" | % { $_.Trim('"') }
- if (-not $clientSecret) {
- throw "Error: Failed to create Azure Active Directory application client secret"
- }
- Write-Host "Azure Active Directory application client secret created successfully."
-
- Write-Host "NOTE: To manage your Active Directory application client secrets after this Immersive Reader Resource has been created please visit https://portal.azure.com and go to Home -> Azure Active Directory -> App Registrations -> (your app) '$AADAppDisplayName' -> Certificates and Secrets blade -> Client Secrets section" -ForegroundColor Yellow
- }
-
- # Create a service principal if it doesn't already exist
- $principalId = az ad sp show --id $AADAppIdentifierUri --query "id" -o tsv
- if (-not $principalId) {
- Write-Host "Creating new service principal"
- az ad sp create --id $clientId | Out-Null
- $principalId = az ad sp show --id $AADAppIdentifierUri --query "id" -o tsv
-
- if (-not $principalId) {
- throw "Error: Failed to create new service principal"
- }
- Write-Host "New service principal created successfully"
-
- # Sleep for 5 seconds to allow the new service principal to propagate
- Write-Host "Sleeping for 5 seconds"
- Start-Sleep -Seconds 5
- }
-
- Write-Host "Granting service principal access to the newly created Immersive Reader resource"
- $accessResult = az role assignment create --assignee $principalId --scope $resourceId --role "Cognitive Services Immersive Reader User"
- if (-not $accessResult) {
- throw "Error: Failed to grant service principal access"
- }
- Write-Host "Service principal access granted successfully"
-
- # Grab the tenant ID, which is needed when obtaining an Azure AD token
- $tenantId = az account show --query "tenantId" -o tsv
-
- # Collect the information needed to obtain an Azure AD token into one object
- $result = @{}
- $result.TenantId = $tenantId
- $result.ClientId = $clientId
- $result.ClientSecret = $clientSecret
- $result.Subdomain = $ResourceSubdomain
-
- Write-Host "`nSuccess! " -ForegroundColor Green -NoNewline
- Write-Host "Save the following JSON object to a text file for future reference."
- Write-Host "*****"
- if($clientSecret -ne $null) {
-
- Write-Host "This function has created a client secret (password) for you. This secret is used when calling Azure Active Directory to fetch access tokens."
- Write-Host "This is the only time you will ever see the client secret for your Azure Active Directory application, so save it now." -ForegroundColor Yellow
- }
- else{
- Write-Host "You will need to retrieve the ClientSecret from your original run of this function that created it. If you don't have it, you will need to go create a new client secret for your Azure Active Directory application. Please visit https://portal.azure.com and go to Home -> Azure Active Directory -> App Registrations -> (your app) '$AADAppDisplayName' -> Certificates and Secrets blade -> Client Secrets section." -ForegroundColor Yellow
- }
- Write-Host "*****`n"
- Write-Output (ConvertTo-Json $result)
- }
- ```
-
-1. Run the function `Create-ImmersiveReaderResource`, replacing the '<PARAMETER_VALUES>' placeholders with your own values as appropriate.
-
- ```azurepowershell-interactive
- Create-ImmersiveReaderResource -SubscriptionName '<SUBSCRIPTION_NAME>' -ResourceName '<RESOURCE_NAME>' -ResourceSubdomain '<RESOURCE_SUBDOMAIN>' -ResourceSKU '<RESOURCE_SKU>' -ResourceLocation '<RESOURCE_LOCATION>' -ResourceGroupName '<RESOURCE_GROUP_NAME>' -ResourceGroupLocation '<RESOURCE_GROUP_LOCATION>' -AADAppDisplayName '<AAD_APP_DISPLAY_NAME>' -AADAppIdentifierUri '<AAD_APP_IDENTIFIER_URI>' -AADAppClientSecretExpiration '<AAD_APP_CLIENT_SECRET_EXPIRATION>'
- ```
-
-    The full command looks something like the following. Here, each parameter is on its own line for clarity, so you can see the whole command. __Do not copy or use this command as-is.__ Copy and use the command with your own values. This example uses dummy values for the '<PARAMETER_VALUES>'; yours will differ because you choose your own names for these values.
-
- ```
- Create-ImmersiveReaderResource
- -SubscriptionName 'MyOrganizationSubscriptionName'
- -ResourceName 'MyOrganizationImmersiveReader'
- -ResourceSubdomain 'MyOrganizationImmersiveReader'
- -ResourceSKU 'S0'
- -ResourceLocation 'westus2'
- -ResourceGroupName 'MyResourceGroupName'
- -ResourceGroupLocation 'westus2'
- -AADAppDisplayName 'MyOrganizationImmersiveReaderAADApp'
- -AADAppIdentifierUri 'api://MyOrganizationImmersiveReaderAADApp'
- -AADAppClientSecretExpiration '2021-12-31'
- ```
-
- | Parameter | Comments |
- | | |
- | SubscriptionName |Name of the Azure subscription to use for your Immersive Reader resource. You must have a subscription in order to create a resource. |
- | ResourceName | Must be alphanumeric, and may contain '-', as long as the '-' isn't the first or last character. Length may not exceed 63 characters.|
- | ResourceSubdomain |A custom subdomain is needed for your Immersive Reader resource. The subdomain is used by the SDK when calling the Immersive Reader service to launch the Reader. The subdomain must be globally unique. The subdomain must be alphanumeric, and may contain '-', as long as the '-' isn't the first or last character. Length may not exceed 63 characters. This parameter is optional if the resource already exists. |
- | ResourceSKU |Options: `S0` (Standard tier) or `S1` (Education/Nonprofit organizations). Visit our [Cognitive Services pricing page](https://azure.microsoft.com/pricing/details/cognitive-services/immersive-reader/) to learn more about each available SKU. This parameter is optional if the resource already exists. |
- | ResourceLocation |Options: `australiaeast`, `brazilsouth`, `canadacentral`, `centralindia`, `centralus`, `eastasia`, `eastus`, `eastus2`, `francecentral`, `germanywestcentral`, `japaneast`, `japanwest`, `jioindiawest`, `koreacentral`, `northcentralus`, `northeurope`, `norwayeast`, `southafricanorth`, `southcentralus`, `southeastasia`, `swedencentral`, `switzerlandnorth`, `switzerlandwest`, `uaenorth`, `uksouth`, `westcentralus`, `westeurope`, `westus`, `westus2`, `westus3`. This parameter is optional if the resource already exists. |
- | ResourceGroupName |Resources are created in resource groups within subscriptions. Supply the name of an existing resource group. If the resource group doesn't already exist, a new one with this name is created. |
- | ResourceGroupLocation |If your resource group doesn't exist, you need to supply a location in which to create the group. To find a list of locations, run `az account list-locations`. Use the *name* property (without spaces) of the returned result. This parameter is optional if your resource group already exists. |
- | AADAppDisplayName |The Azure Active Directory application display name. If an existing Azure AD application isn't found, a new one with this name is created. This parameter is optional if the Azure AD application already exists. |
- | AADAppIdentifierUri |The URI for the Azure AD application. If an existing Azure AD application isn't found, a new one with this URI is created. For example, `api://MyOrganizationImmersiveReaderAADApp`. Here we're using the default Azure AD URI scheme prefix of `api://` for compatibility with the [Azure AD policy of using verified domains](../../active-directory/develop/reference-breaking-changes.md#appid-uri-in-single-tenant-applications-will-require-use-of-default-scheme-or-verified-domains). |
- | AADAppClientSecretExpiration |The date or datetime after which your Azure AD Application Client Secret (password) will expire (for example, '2020-12-31T11:59:59+00:00' or '2020-12-31'). This function creates a client secret for you. To manage Azure AD application client secrets after you've created this resource, visit https://portal.azure.com and go to Home -> Azure Active Directory -> App Registrations -> (your app) `[AADAppDisplayName]` -> Certificates and Secrets section -> Client Secrets section (as shown in the "Manage your Azure AD application secrets" screenshot).|
-
- Manage your Azure AD application secrets
-
- ![Azure Portal Certificates and Secrets blade](./media/client-secrets-blade.png)
-
-1. Copy the JSON output into a text file for later use. The output should look like the following.
-
- ```json
- {
- "TenantId": "...",
- "ClientId": "...",
- "ClientSecret": "...",
- "Subdomain": "..."
- }
- ```
-
-## Next steps
-
-* View the [Node.js quickstart](./quickstarts/client-libraries.md?pivots=programming-language-nodejs) to see what else you can do with the Immersive Reader SDK using Node.js
-* View the [Android tutorial](./how-to-launch-immersive-reader.md) to see what else you can do with the Immersive Reader SDK using Java or Kotlin for Android
-* View the [iOS tutorial](./how-to-launch-immersive-reader.md) to see what else you can do with the Immersive Reader SDK using Swift for iOS
-* View the [Python tutorial](./how-to-launch-immersive-reader.md) to see what else you can do with the Immersive Reader SDK using Python
-* Explore the [Immersive Reader SDK](https://github.com/microsoft/immersive-reader-sdk) and the [Immersive Reader SDK Reference](./reference.md)
applied-ai-services How To Customize Launch Button https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/immersive-reader/how-to-customize-launch-button.md
- Title: "Edit the Immersive Reader launch button"-
-description: This article will show you how to customize the button that launches the Immersive Reader.
------- Previously updated : 03/08/2021---
-# How to customize the Immersive Reader button
-
-This article demonstrates how to customize the button that launches the Immersive Reader to fit the needs of your application.
-
-## Add the Immersive Reader button
-
-The Immersive Reader SDK provides default styling for the button that launches the Immersive Reader. Use the `immersive-reader-button` class attribute to enable this styling.
-
-```html
-<div class='immersive-reader-button'></div>
-```
-
-## Customize the button style
-
-Use the `data-button-style` attribute to set the style of the button. The allowed values are `icon`, `text`, and `iconAndText`. The default value is `icon`.
-
-### Icon button
-
-```html
-<div class='immersive-reader-button' data-button-style='icon'></div>
-```
-
-This renders the following:
-
-![This is the rendered icon button.](./media/button-icon.png)
-
-### Text button
-
-```html
-<div class='immersive-reader-button' data-button-style='text'></div>
-```
-
-This renders the following:
-
-![This is the rendered text button.](./media/button-text.png)
-
-### Icon and text button
-
-```html
-<div class='immersive-reader-button' data-button-style='iconAndText'></div>
-```
-
-This renders the following:
-
-![This is the rendered icon and text button.](./media/button-icon-and-text.png)
-
-## Customize the button text
-
-Configure the language and the alt text for the button using the `data-locale` attribute. The default language is English.
-
-```html
-<div class='immersive-reader-button' data-locale='fr-FR'></div>
-```
-
-## Customize the size of the icon
-
-The size of the Immersive Reader icon can be configured using the `data-icon-px-size` attribute. This sets the size of the icon in pixels. The default size is 20px.
-
-```html
-<div class='immersive-reader-button' data-icon-px-size='50'></div>
-```
-
-## Next steps
-
-* Explore the [Immersive Reader SDK Reference](./reference.md)
applied-ai-services How To Launch Immersive Reader https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/immersive-reader/how-to-launch-immersive-reader.md
- Title: "How to launch the Immersive Reader"-
-description: Learn how to launch the Immersive reader using JavaScript, Python, Android, or iOS. Immersive Reader uses proven techniques to improve reading comprehension for language learners, emerging readers, and students with learning differences.
----- Previously updated : 03/04/2021--
-zone_pivot_groups: immersive-reader-how-to-guides
--
-# How to launch the Immersive Reader
-
-In the [overview](./overview.md), you learned about what the Immersive Reader is and how it implements proven techniques to improve reading comprehension for language learners, emerging readers, and students with learning differences. This article demonstrates how to launch the Immersive Reader using JavaScript, Python, Android, or iOS.
------------
applied-ai-services How To Multiple Resources https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/immersive-reader/how-to-multiple-resources.md
- Title: "Integrate multiple Immersive Reader resources"-
-description: In this tutorial, you'll create a Node.js application that launches the Immersive Reader using multiple Immersive Reader resources.
------ Previously updated : 01/14/2020--
-#Customer intent: As a developer, I want to learn more about the Immersive Reader SDK so that I can fully utilize all that the SDK has to offer.
--
-# Integrate multiple Immersive Reader resources
-
-In the [overview](./overview.md), you learned about what the Immersive Reader is and how it implements proven techniques to improve reading comprehension for language learners, emerging readers, and students with learning differences. In the [quickstart](./quickstarts/client-libraries.md), you learned how to use Immersive Reader with a single resource. This tutorial covers how to integrate multiple Immersive Reader resources in the same application. In this tutorial, you learn how to:
-
-> [!div class="checklist"]
-> * Create multiple Immersive Reader resources under an existing resource group
-> * Launch the Immersive Reader using multiple resources
-
-If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/cognitive-services/) before you begin.
-
-## Prerequisites
-
-* Follow the [quickstart](./quickstarts/client-libraries.md?pivots=programming-language-nodejs) to create a web app that launches the Immersive Reader with Node.js. In that quickstart, you configure a single Immersive Reader resource. This tutorial builds on that work.
-
-## Create the Immersive Reader resources
-
-Follow [these instructions](./how-to-create-immersive-reader.md) to create each Immersive Reader resource. The **Create-ImmersiveReaderResource** script has `ResourceName`, `ResourceSubdomain`, and `ResourceLocation` as parameters. These should be unique for each resource being created. The remaining parameters should be the same as what you used when setting up your first Immersive Reader resource. This way, each resource can be linked to the same Azure resource group and Azure AD application.
-
-The example below shows how to create two resources, one in WestUS, and another in EastUS. Notice the unique values for `ResourceName`, `ResourceSubdomain`, and `ResourceLocation`.
-
-```azurepowershell-interactive
-Create-ImmersiveReaderResource `
- -SubscriptionName <SUBSCRIPTION_NAME> `
- -ResourceName Resource_name_wus `
- -ResourceSubdomain resource-subdomain-wus `
- -ResourceSKU <RESOURCE_SKU> `
- -ResourceLocation westus `
- -ResourceGroupName <RESOURCE_GROUP_NAME> `
- -ResourceGroupLocation <RESOURCE_GROUP_LOCATION> `
- -AADAppDisplayName <AAD_APP_DISPLAY_NAME> `
- -AADAppIdentifierUri <AAD_APP_IDENTIFIER_URI> `
-  -AADAppClientSecretExpiration <AAD_APP_CLIENT_SECRET_EXPIRATION>
-
-Create-ImmersiveReaderResource `
- -SubscriptionName <SUBSCRIPTION_NAME> `
- -ResourceName Resource_name_eus `
- -ResourceSubdomain resource-subdomain-eus `
- -ResourceSKU <RESOURCE_SKU> `
- -ResourceLocation eastus `
- -ResourceGroupName <RESOURCE_GROUP_NAME> `
- -ResourceGroupLocation <RESOURCE_GROUP_LOCATION> `
- -AADAppDisplayName <AAD_APP_DISPLAY_NAME> `
- -AADAppIdentifierUri <AAD_APP_IDENTIFIER_URI> `
-  -AADAppClientSecretExpiration <AAD_APP_CLIENT_SECRET_EXPIRATION>
-```
-
-## Add resources to environment configuration
-
-In the quickstart, you created an environment configuration file that contains the `TenantId`, `ClientId`, `ClientSecret`, and `Subdomain` parameters. Since all of your resources use the same Azure AD application, we can use the same values for the `TenantId`, `ClientId`, and `ClientSecret`. The only change that needs to be made is to list each subdomain for each resource.
-
-Your new __.env__ file should now look something like the following:
-
-```text
-TENANT_ID={YOUR_TENANT_ID}
-CLIENT_ID={YOUR_CLIENT_ID}
-CLIENT_SECRET={YOUR_CLIENT_SECRET}
-SUBDOMAIN_WUS={YOUR_WESTUS_SUBDOMAIN}
-SUBDOMAIN_EUS={YOUR_EASTUS_SUBDOMAIN}
-```
-
-Be sure not to commit this file into source control, as it contains secrets that should not be made public.
-
-Next, we're going to modify the _routes\index.js_ file that we created to support our multiple resources. Replace its content with the following code.
-
-As before, this code creates an API endpoint that acquires an Azure AD authentication token using your service principal password. This time, it allows the user to specify a resource location and pass it in as a query parameter. It then returns an object containing the token and the corresponding subdomain.
-
-```javascript
-var express = require('express');
-var router = express.Router();
-var request = require('request');
-
-/* GET home page. */
-router.get('/', function(req, res, next) {
- res.render('index', { Title: 'Express' });
-});
-
-router.get('/GetTokenAndSubdomain', function(req, res) {
- try {
- request.post({
- headers: {
- 'content-type': 'application/x-www-form-urlencoded'
- },
- url: `https://login.windows.net/${process.env.TENANT_ID}/oauth2/token`,
- form: {
- grant_type: 'client_credentials',
- client_id: process.env.CLIENT_ID,
- client_secret: process.env.CLIENT_SECRET,
- resource: 'https://cognitiveservices.azure.com/'
- }
- },
- function(err, resp, tokenResult) {
- if (err) {
- console.log(err);
- return res.status(500).send('CogSvcs IssueToken error');
- }
-
- var tokenResultParsed = JSON.parse(tokenResult);
-
- if (tokenResultParsed.error) {
- console.log(tokenResult);
- return res.send({error : "Unable to acquire Azure AD token. Check the debugger for more information."})
- }
-
- var token = tokenResultParsed.access_token;
-
- var subdomain = "";
- var region = req.query && req.query.region;
- switch (region) {
- case "eus":
- subdomain = process.env.SUBDOMAIN_EUS
- break;
- case "wus":
- default:
- subdomain = process.env.SUBDOMAIN_WUS
- }
-
- return res.send({token, subdomain});
- });
- } catch (err) {
- console.log(err);
- return res.status(500).send('CogSvcs IssueToken error');
- }
-});
-
-module.exports = router;
-```
-
-The **GetTokenAndSubdomain** API endpoint should be secured behind some form of authentication (for example, [OAuth](https://oauth.net/2/)) to prevent unauthorized users from obtaining tokens and using them against your Immersive Reader service and billing; that work is beyond the scope of this tutorial.
-
-## Launch the Immersive Reader with sample content
-
-1. Open _views\index.pug_, and replace its content with the following code. This code populates the page with some sample content, and adds two buttons that launch the Immersive Reader: one for the EastUS resource, and another for the WestUS resource.
-
- ```pug
- doctype html
- html
- head
- title Immersive Reader Quickstart Node.js
-
- link(rel='stylesheet', href='https://stackpath.bootstrapcdn.com/bootstrap/3.4.1/css/bootstrap.min.css')
-
- // A polyfill for Promise is needed for IE11 support.
- script(src='https://cdn.jsdelivr.net/npm/promise-polyfill@8/dist/polyfill.min.js')
-
- script(src='https://ircdname.azureedge.net/immersivereadersdk/immersive-reader-sdk.1.2.0.js')
- script(src='https://code.jquery.com/jquery-3.3.1.min.js')
-
- style(type="text/css").
- .immersive-reader-button {
- background-color: white;
- margin-top: 5px;
- border: 1px solid black;
- float: right;
- }
- body
- div(class="container")
- button(class="immersive-reader-button" data-button-style="icon" data-locale="en" onclick='handleLaunchImmersiveReader("wus")') WestUS Immersive Reader
- button(class="immersive-reader-button" data-button-style="icon" data-locale="en" onclick='handleLaunchImmersiveReader("eus")') EastUS Immersive Reader
-
- h1(id="ir-title") About Immersive Reader
- div(id="ir-content" lang="en-us")
- p Immersive Reader is a tool that implements proven techniques to improve reading comprehension for emerging readers, language learners, and people with learning differences. The Immersive Reader is designed to make reading more accessible for everyone. The Immersive Reader
-
- ul
- li Shows content in a minimal reading view
- li Displays pictures of commonly used words
- li Highlights nouns, verbs, adjectives, and adverbs
- li Reads your content out loud to you
- li Translates your content into another language
- li Breaks down words into syllables
-
- h3 The Immersive Reader is available in many languages.
-
- p(lang="es-es") El Lector inmersivo está disponible en varios idiomas.
- p(lang="zh-cn") 沉浸式阅读器支持许多语言
-      p(lang="de-de") Der plastische Reader ist in vielen Sprachen verfügbar.
- p(lang="ar-eg" dir="rtl" style="text-align:right") يتوفر \"القارئ الشامل\" في العديد من اللغات.
-
- script(type="text/javascript").
- function getTokenAndSubdomainAsync(region) {
- return new Promise(function (resolve, reject) {
- $.ajax({
- url: "/GetTokenAndSubdomain",
- type: "GET",
- data: {
- region: region
- },
- success: function (data) {
- if (data.error) {
- reject(data.error);
- } else {
- resolve(data);
- }
- },
- error: function (err) {
- reject(err);
- }
- });
- });
- }
-
- function handleLaunchImmersiveReader(region) {
- getTokenAndSubdomainAsync(region)
- .then(function (response) {
- const token = response["token"];
- const subdomain = response["subdomain"];
- // Learn more about chunk usage and supported MIME types https://learn.microsoft.com/azure/cognitive-services/immersive-reader/reference#chunk
- const data = {
- Title: $("#ir-title").text(),
- chunks: [{
- content: $("#ir-content").html(),
- mimeType: "text/html"
- }]
- };
- // Learn more about options https://learn.microsoft.com/azure/cognitive-services/immersive-reader/reference#options
- const options = {
- "onExit": exitCallback,
- "uiZIndex": 2000
- };
- ImmersiveReader.launchAsync(token, subdomain, data, options)
- .catch(function (error) {
- alert("Error in launching the Immersive Reader. Check the console.");
- console.log(error);
- });
- })
- .catch(function (error) {
- alert("Error in getting the Immersive Reader token and subdomain. Check the console.");
- console.log(error);
- });
- }
-
- function exitCallback() {
- console.log("This is the callback function. It is executed when the Immersive Reader closes.");
- }
- ```
-
-3. Our web app is now ready. Start the app by running:
-
- ```bash
- npm start
- ```
-
-4. Open your browser and navigate to `http://localhost:3000`. You should see the above content on the page. Click on either the **EastUS Immersive Reader** button or the **WestUS Immersive Reader** button to launch the Immersive Reader using those respective resources.
-
-## Next steps
-
-* Explore the [Immersive Reader SDK](https://github.com/microsoft/immersive-reader-sdk) and the [Immersive Reader SDK Reference](./reference.md)
-* View code samples on [GitHub](https://github.com/microsoft/immersive-reader-sdk/tree/master/js/samples/advanced-csharp)
applied-ai-services How To Prepare Html https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/immersive-reader/how-to-prepare-html.md
- Title: "How to prepare HTML content for Immersive Reader"-
-description: Learn how to launch the Immersive reader using HTML, JavaScript, Python, Android, or iOS. Immersive Reader uses proven techniques to improve reading comprehension for language learners, emerging readers, and students with learning differences.
------ Previously updated : 03/04/2021---
-# How to prepare HTML content for Immersive Reader
-
-This article shows you how to structure your HTML and retrieve the content, so that it can be used by Immersive Reader.
-
-## Prepare the HTML content
-
-Place the content that you want to render in the Immersive Reader inside of a container element. Be sure that the container element has a unique `id`. The Immersive Reader provides support for basic HTML elements; see the [reference](reference.md#html-support) for more information.
-
-```html
-<div id='immersive-reader-content'>
- <b>Bold</b>
- <i>Italic</i>
- <u>Underline</u>
- <strike>Strikethrough</strike>
- <code>Code</code>
- <sup>Superscript</sup>
- <sub>Subscript</sub>
- <ul><li>Unordered lists</li></ul>
- <ol><li>Ordered lists</li></ol>
-</div>
-```
-
-## Get the HTML content in JavaScript
-
-Use the `id` of the container element to get the HTML content in your JavaScript code.
-
-```javascript
-const htmlContent = document.getElementById('immersive-reader-content').innerHTML;
-```
-
-## Launch the Immersive Reader with your HTML content
-
-When calling `ImmersiveReader.launchAsync`, set the chunk's `mimeType` property to `text/html` to enable rendering HTML.
-
-```javascript
-const data = {
- chunks: [{
- content: htmlContent,
- mimeType: 'text/html'
- }]
-};
-
-ImmersiveReader.launchAsync(YOUR_TOKEN, YOUR_SUBDOMAIN, data, YOUR_OPTIONS);
-```
-
-## Next steps
-
-* Explore the [Immersive Reader SDK Reference](reference.md)
applied-ai-services How To Store User Preferences https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/immersive-reader/how-to-store-user-preferences.md
- Title: "Store user preferences"-
-description: This article will show you how to store the user's preferences.
------ Previously updated : 06/29/2020---
-# How to store user preferences
-
-This article demonstrates how to store the user's UI settings, formally known as **user preferences**, via the [-preferences](./reference.md#options) and [-onPreferencesChanged](./reference.md#options) Immersive Reader SDK options.
-
-When the [CookiePolicy](./reference.md#cookiepolicy-options) SDK option is set to *Enabled*, the Immersive Reader application stores the **user preferences** (text size, theme color, font, and so on) in cookies, which are local to a specific browser and device. Each time the user launches the Immersive Reader on the same browser and device, it opens with the user's preferences from their last session on that device. However, if the user opens the Immersive Reader on a different browser or device, the settings initially use the Immersive Reader defaults, and the user has to set their preferences again on each new browser or device. The `-preferences` and `-onPreferencesChanged` Immersive Reader SDK options let applications roam a user's preferences across browsers and devices, so that the user has a consistent experience wherever they use the application.
-
-First, by supplying the `-onPreferencesChanged` callback SDK option when launching the Immersive Reader application, the Immersive Reader will send a `-preferences` string back to the host application each time the user changes their preferences during the Immersive Reader session. The host application is then responsible for storing the user preferences in their own system. Then, when that same user launches the Immersive Reader again, the host application can retrieve that user's preferences from storage, and supply them as the `-preferences` string SDK option when launching the Immersive Reader application, so that the user's preferences are restored.
-
-You can use this functionality as an alternative to storing **user preferences** in cookies, when using cookies isn't desirable or feasible.
-
-> [!CAUTION]
-> **IMPORTANT** Do not attempt to programmatically change the values of the `-preferences` string sent to and from the Immersive Reader application as this may cause unexpected behavior resulting in a degraded user experience for your customers. Host applications should never assign a custom value to or manipulate the `-preferences` string. When using the `-preferences` string option, use only the exact value that was returned from the `-onPreferencesChanged` callback option.
-
-## How to enable storing user preferences
-
-The Immersive Reader SDK [launchAsync](./reference.md#launchasync) `options` parameter contains an `-onPreferencesChanged` callback. This function is called anytime the user changes their preferences. The `value` parameter contains a string that represents the user's current preferences. The host application then stores this string for that user.
-
-```typescript
-const options = {
- onPreferencesChanged: (value: string) => {
- // Store user preferences here
- }
-};
-
-ImmersiveReader.launchAsync(YOUR_TOKEN, YOUR_SUBDOMAIN, YOUR_DATA, options);
-```
-
-## How to load user preferences into the Immersive Reader
-
-Pass in the user's preferences to the Immersive Reader using the `-preferences` option. A trivial example to store and load the user's preferences is as follows:
-
-```typescript
-const storedUserPreferences = localStorage.getItem("USER_PREFERENCES");
-let userPreferences = storedUserPreferences === null ? null : storedUserPreferences;
-const options = {
- preferences: userPreferences,
- onPreferencesChanged: (value: string) => {
- userPreferences = value;
- localStorage.setItem("USER_PREFERENCES", userPreferences);
- }
-};
-```
-
-## Next steps
-
-* Explore the [Immersive Reader SDK Reference](./reference.md)
applied-ai-services Display Math https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/immersive-reader/how-to/display-math.md
- Title: "Display math in the Immersive Reader"-
-description: This article will show you how to display math in the Immersive Reader.
------ Previously updated : 01/14/2020----
-# How to display math in the Immersive Reader
-
-The Immersive Reader can display math when provided in the form of Mathematical Markup Language ([MathML](https://developer.mozilla.org/docs/Web/MathML)).
-The MIME type can be set through the Immersive Reader [chunk](../reference.md#chunk). See [supported MIME types](../reference.md#supported-mime-types) for more information.
-
-## Send Math to the Immersive Reader
-To send math to the Immersive Reader, supply a chunk containing MathML and set the MIME type to `application/mathml+xml`.
-
-For example, if your content were the following:
-
-```html
-<div id='ir-content'>
- <math xmlns='http://www.w3.org/1998/Math/MathML'>
- <mfrac>
- <mrow>
- <msup>
- <mi>x</mi>
- <mn>2</mn>
- </msup>
- <mo>+</mo>
- <mn>3</mn>
- <mi>x</mi>
- <mo>+</mo>
- <mn>2</mn>
- </mrow>
- <mrow>
- <mi>x</mi>
- <mo>−</mo>
- <mn>3</mn>
- </mrow>
- </mfrac>
- <mo>=</mo>
- <mn>4</mn>
- </math>
-</div>
-```
-
-Then you could display your content by using the following JavaScript.
-
-```javascript
-const data = {
- Title: 'My Math',
- chunks: [{
- content: document.getElementById('ir-content').innerHTML.trim(),
- mimeType: 'application/mathml+xml'
- }]
-};
-
-ImmersiveReader.launchAsync(YOUR_TOKEN, YOUR_SUBDOMAIN, data, YOUR_OPTIONS);
-```
-
-When you launch the Immersive Reader, you should see:
-
-![Math in Immersive Reader](../media/how-tos/1-math.png)
-
-## Next steps
-
-* Explore the [Immersive Reader SDK](https://github.com/microsoft/immersive-reader-sdk) and the [Immersive Reader SDK Reference](../reference.md)
applied-ai-services Set Cookie Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/immersive-reader/how-to/set-cookie-policy.md
- Title: "Set Immersive Reader Cookie Policy"-
-description: This article will show you how to set the cookie policy for the Immersive Reader.
------- Previously updated : 01/06/2020----
-# How to set the cookie policy for the Immersive Reader
-
-The Immersive Reader disables cookie usage by default. If you enable cookie usage, the Immersive Reader may use cookies to maintain user preferences and track feature usage. If you enable cookie usage, consider the requirements of the EU Cookie Compliance Policy; it's the host application's responsibility to obtain any necessary user consent in accordance with that policy.
-
-The cookie policy can be set through the Immersive Reader [options](../reference.md#options).
-
-## Enable Cookie Usage
-
-```javascript
-var options = {
- 'cookiePolicy': ImmersiveReader.CookiePolicy.Enable
-};
-
-ImmersiveReader.launchAsync(YOUR_TOKEN, YOUR_SUBDOMAIN, YOUR_DATA, options);
-```
-
-## Disable Cookie Usage
-
-```javascript
-var options = {
- 'cookiePolicy': ImmersiveReader.CookiePolicy.Disable
-};
-
-ImmersiveReader.launchAsync(YOUR_TOKEN, YOUR_SUBDOMAIN, YOUR_DATA, options);
-```
-
-## Next steps
-
-* View the [Node.js quickstart](../quickstarts/client-libraries.md?pivots=programming-language-nodejs) to see what else you can do with the Immersive Reader SDK using Node.js
-* View the [Android tutorial](../how-to-launch-immersive-reader.md) to see what else you can do with the Immersive Reader SDK using Java or Kotlin for Android
-* View the [iOS tutorial](../how-to-launch-immersive-reader.md) to see what else you can do with the Immersive Reader SDK using Swift for iOS
-* View the [Python tutorial](../how-to-launch-immersive-reader.md) to see what else you can do with the Immersive Reader SDK using Python
-* Explore the [Immersive Reader SDK](https://github.com/microsoft/immersive-reader-sdk) and the [Immersive Reader SDK Reference](../reference.md)
applied-ai-services Language Support https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/immersive-reader/language-support.md
- Title: Language support - Immersive Reader-
-description: Learn more about the human languages that are available with Immersive Reader.
------ Previously updated : 11/15/2021---
-# Language support for Immersive Reader
-
-This article lists supported human languages for Immersive Reader features.
--
-## Text to speech
-
-| Language | Tag |
-|-|--|
-| Afrikaans | af |
-| Afrikaans (South Africa) | af-ZA |
-| Albanian | sq |
-| Albanian (Albania) | sq-AL |
-| Amharic | am |
-| Amharic (Ethiopia) | am-ET |
-| Arabic (Egyptian) | ar-EG |
-| Arabic (Lebanon) | ar-LB |
-| Arabic (Oman) | ar-OM |
-| Arabic (Saudi Arabia) | ar-SA |
-| Azerbaijani | az |
-| Azerbaijani (Azerbaijan) | az-AZ |
-| Bangla | bn |
-| Bangla (Bangladesh) | bn-BD |
-| Bangla (India) | bn-IN |
-| Bosnian | bs |
-| Bosnian (Bosnia & Herzegovina) | bs-BA |
-| Bulgarian | bg |
-| Bulgarian (Bulgaria) | bg-BG |
-| Burmese | my |
-| Burmese (Myanmar) | my-MM |
-| Catalan | ca |
-| Catalan (Catalan) | ca-ES |
-| Chinese | zh |
-| Chinese (China) | zh-CN |
-| Chinese (Hong Kong SAR) | zh-HK |
-| Chinese (Macao SAR) | zh-MO |
-| Chinese (Singapore) | zh-SG |
-| Chinese (Taiwan) | zh-TW |
-| Chinese Simplified | zh-Hans |
-| Chinese Simplified (China) | zh-Hans-CN |
-| Chinese Simplified (Singapore) | zh-Hans-SG |
-| Chinese Traditional | zh-Hant |
-| Chinese Traditional (China) | zh-Hant-CN |
-| Chinese Traditional (Hong Kong SAR) | zh-Hant-HK |
-| Chinese Traditional (Macao SAR) | zh-Hant-MO |
-| Chinese Traditional (Taiwan) | zh-Hant-TW |
-| Chinese (Literary) | lzh |
-| Chinese (Literary, China) | lzh-CN |
-| Croatian | hr |
-| Croatian (Croatia) | hr-HR |
-| Czech | cs |
-| Czech (Czech Republic) | cs-CZ |
-| Danish | da |
-| Danish (Denmark) | da-DK |
-| Dutch | nl |
-| Dutch (Belgium) | nl-BE |
-| Dutch (Netherlands) | nl-NL |
-| English | en |
-| English (Australia) | en-AU |
-| English (Canada) | en-CA |
-| English (Hong Kong SAR) | en-HK |
-| English (India) | en-IN |
-| English (Ireland) | en-IE |
-| English (Kenya) | en-KE |
-| English (New Zealand) | en-NZ |
-| English (Nigeria) | en-NG |
-| English (Philippines) | en-PH |
-| English (Singapore) | en-SG |
-| English (South Africa) | en-ZA |
-| English (Tanzania) | en-TZ |
-| English (United Kingdom) | en-GB |
-| English (United States) | en-US |
-| Estonian | et-EE |
-| Filipino | fil |
-| Filipino (Philippines) | fil-PH |
-| Finnish | fi |
-| Finnish (Finland) | fi-FI |
-| French | fr |
-| French (Belgium) | fr-BE |
-| French (Canada) | fr-CA |
-| French (France) | fr-FR |
-| French (Switzerland) | fr-CH |
-| Galician | gl |
-| Galician (Spain) | gl-ES |
-| Georgian | ka |
-| Georgian (Georgia) | ka-GE |
-| German | de |
-| German (Austria) | de-AT |
-| German (Germany) | de-DE |
-| German (Switzerland)| de-CH |
-| Greek | el |
-| Greek (Greece) | el-GR |
-| Gujarati | gu |
-| Gujarati (India) | gu-IN |
-| Hebrew | he |
-| Hebrew (Israel) | he-IL |
-| Hindi | hi |
-| Hindi (India) | hi-IN |
-| Hungarian | hu |
-| Hungarian (Hungary) | hu-HU |
-| Icelandic | is |
-| Icelandic (Iceland) | is-IS |
-| Indonesian | id |
-| Indonesian (Indonesia) | id-ID |
-| Irish | ga |
-| Irish (Ireland) | ga-IE |
-| Italian | it |
-| Italian (Italy) | it-IT |
-| Japanese | ja |
-| Japanese (Japan) | ja-JP |
-| Javanese | jv |
-| Javanese (Indonesia) | jv-ID |
-| Kannada | kn |
-| Kannada (India) | kn-IN |
-| Kazakh | kk |
-| Kazakh (Kazakhstan) | kk-KZ |
-| Khmer | km |
-| Khmer (Cambodia) | km-KH |
-| Korean | ko |
-| Korean (Korea) | ko-KR |
-| Lao | lo |
-| Lao (Laos) | lo-LA |
-| Latvian | lv |
-| Latvian (Latvia) | lv-LV |
-| Lithuanian | lt |
-| Lithuanian (Lithuania) | lt-LT |
-| Macedonian | mk |
-| Macedonian (North Macedonia) | mk-MK |
-| Malay | ms |
-| Malay (Malaysia) | ms-MY |
-| Malayalam | ml |
-| Malayalam (India) | ml-IN |
-| Maltese | mt |
-| Maltese (Malta) | mt-MT |
-| Marathi | mr |
-| Marathi (India) | mr-IN |
-| Mongolian | mn |
-| Mongolian (Mongolia) | mn-MN |
-| Nepali | ne |
-| Nepali (Nepal) | ne-NP |
-| Norwegian Bokmal| nb |
-| Norwegian Bokmal (Norway) | nb-NO |
-| Pashto | ps |
-| Pashto (Afghanistan) | ps-AF |
-| Persian | fa |
-| Persian (Iran) | fa-IR |
-| Polish | pl |
-| Polish (Poland) | pl-PL |
-| Portuguese | pt |
-| Portuguese (Brazil) | pt-BR |
-| Portuguese (Portugal) | pt-PT |
-| Romanian | ro |
-| Romanian (Romania) | ro-RO |
-| Russian | ru |
-| Russian (Russia) | ru-RU |
-| Serbian (Cyrillic) | sr-Cyrl |
-| Serbian (Cyrillic, Serbia) | sr-Cyrl-RS |
-| Sinhala | si |
-| Sinhala (Sri Lanka) | si-LK |
-| Slovak | sk |
-| Slovak (Slovakia) | sk-SK |
-| Slovenian | sl |
-| Slovenian (Slovenia) | sl-SI |
-| Somali | so |
-| Somali (Somalia) | so-SO |
-| Spanish | es |
-| Spanish (Argentina) | es-AR |
-| Spanish (Colombia) | es-CO |
-| Spanish (Latin America) | es-419 |
-| Spanish (Mexico) | es-MX |
-| Spanish (Spain) | es-ES |
-| Spanish (United States) | es-US |
-| Sundanese | su |
-| Sundanese (Indonesia) | su-ID |
-| Swahili | sw |
-| Swahili (Kenya) | sw-KE |
-| Swedish | sv |
-| Swedish (Sweden) | sv-SE |
-| Tamil | ta |
-| Tamil (India) | ta-IN |
-| Tamil (Malaysia) | ta-MY |
-| Telugu | te |
-| Telugu (India) | te-IN |
-| Thai | th |
-| Thai (Thailand) | th-TH |
-| Turkish | tr |
-| Turkish (Türkiye) | tr-TR |
-| Ukrainian | uk |
-| Ukrainian (Ukraine) | uk-UA |
-| Urdu | ur |
-| Urdu (India) | ur-IN |
-| Uzbek | uz |
-| Uzbek (Uzbekistan) | uz-UZ |
-| Vietnamese | vi |
-| Vietnamese (Vietnam) | vi-VN |
-| Welsh | cy |
-| Welsh (United Kingdom) | cy-GB |
-| Zulu | zu |
-| Zulu (South Africa) | zu-ZA |
-
-## Translation
-
-| Language | Tag |
-|-|--|
-| Afrikaans | af |
-| Albanian | sq |
-| Amharic | am |
-| Arabic | ar |
-| Arabic (Egyptian) | ar-EG |
-| Arabic (Saudi Arabia) | ar-SA |
-| Armenian | hy |
-| Assamese | as |
-| Azerbaijani | az |
-| Bangla | bn |
-| Bashkir | ba |
-| Bosnian | bs |
-| Bulgarian | bg |
-| Bulgarian (Bulgaria) | bg-BG |
-| Burmese | my |
-| Catalan | ca |
-| Catalan (Catalan) | ca-ES |
-| Chinese | zh |
-| Chinese (China) | zh-CN |
-| Chinese (Hong Kong SAR) | zh-HK |
-| Chinese (Macao SAR) | zh-MO |
-| Chinese (Singapore) | zh-SG |
-| Chinese (Taiwan) | zh-TW |
-| Chinese Simplified | zh-Hans |
-| Chinese Simplified (China) | zh-Hans-CN |
-| Chinese Simplified (Singapore) | zh-Hans-SG |
-| Chinese Traditional | zh-Hant |
-| Chinese Traditional (China) | zh-Hant-CN |
-| Chinese Traditional (Hong Kong SAR) | zh-Hant-HK |
-| Chinese Traditional (Macao SAR) | zh-Hant-MO |
-| Chinese Traditional (Taiwan) | zh-Hant-TW |
-| Chinese (Literary) | lzh |
-| Croatian | hr |
-| Croatian (Croatia) | hr-HR |
-| Czech | cs |
-| Czech (Czech Republic) | cs-CZ |
-| Danish | da |
-| Danish (Denmark) | da-DK |
-| Dari (Afghanistan) | prs |
-| Divehi | dv |
-| Dutch | nl |
-| Dutch (Netherlands) | nl-NL |
-| English | en |
-| English (Australia) | en-AU |
-| English (Canada) | en-CA |
-| English (Hong Kong SAR) | en-HK |
-| English (India) | en-IN |
-| English (Ireland) | en-IE |
-| English (New Zealand) | en-NZ |
-| English (United Kingdom) | en-GB |
-| English (United States) | en-US |
-| Estonian | et |
-| Faroese | fo |
-| Fijian | fj |
-| Filipino | fil |
-| Finnish | fi |
-| Finnish (Finland) | fi-FI |
-| French | fr |
-| French (Canada) | fr-CA |
-| French (France) | fr-FR |
-| French (Switzerland) | fr-CH |
-| Georgian | ka |
-| German | de |
-| German (Austria) | de-AT |
-| German (Germany) | de-DE |
-| German (Switzerland)| de-CH |
-| Greek | el |
-| Greek (Greece) | el-GR |
-| Gujarati | gu |
-| Haitian (Creole) | ht |
-| Hebrew | he |
-| Hebrew (Israel) | he-IL |
-| Hindi | hi |
-| Hindi (India) | hi-IN |
-| Hmong Daw | mww |
-| Hungarian | hu |
-| Hungarian (Hungary) | hu-HU |
-| Icelandic | is |
-| Indonesian | id |
-| Indonesian (Indonesia) | id-ID |
-| Inuinnaqtun | ikt |
-| Inuktitut | iu |
-| Inuktitut (Latin) | iu-Latn |
-| Irish | ga |
-| Italian | it |
-| Italian (Italy) | it-IT |
-| Japanese | ja |
-| Japanese (Japan) | ja-JP |
-| Kannada | kn |
-| Kazakh | kk |
-| Khmer | km |
-| Kiswahili | sw |
-| Korean | ko |
-| Korean (Korea) | ko-KR |
-| Kurdish (Central) | ku |
-| Kurdish (Northern) | kmr |
-| KurdishCentral | ckb |
-| Kyrgyz | ky |
-| Lao | lo |
-| Latvian | lv |
-| Lithuanian | lt |
-| Macedonian | mk |
-| Malagasy | mg |
-| Malay | ms |
-| Malay (Malaysia) | ms-MY |
-| Malayalam | ml |
-| Maltese | mt |
-| Maori | mi |
-| Marathi | mr |
-| Mongolian (Cyrillic) | mn-Cyrl |
-| Mongolian (Traditional) | mn-Mong |
-| Nepali | ne |
-| Norwegian Bokmal| nb |
-| Norwegian Bokmal (Norway) | nb-NO |
-| Odia | or |
-| Pashto (Afghanistan) | ps |
-| Persian | fa |
-| Polish | pl |
-| Polish (Poland) | pl-PL |
-| Portuguese | pt |
-| Portuguese (Brazil) | pt-BR |
-| Portuguese (Portugal) | pt-PT |
-| Punjabi | pa |
-| Querétaro Otomi | otq |
-| Romanian | ro |
-| Romanian (Romania) | ro-RO |
-| Russian | ru |
-| Russian (Russia) | ru-RU |
-| Samoan | sm |
-| Serbian | sr |
-| Serbian (Cyrillic) | sr-Cyrl |
-| Serbian (Latin) | sr-Latn |
-| Slovak | sk |
-| Slovak (Slovakia) | sk-SK |
-| Slovenian | sl |
-| Slovenian (Slovenia) | sl-SI |
-| Somali | so |
-| Spanish | es |
-| Spanish (Latin America) | es-419 |
-| Spanish (Mexico) | es-MX |
-| Spanish (Spain) | es-ES |
-| Swedish | sv |
-| Swedish (Sweden) | sv-SE |
-| Tahitian | ty |
-| Tamil | ta |
-| Tamil (India) | ta-IN |
-| Tatar | tt |
-| Telugu | te |
-| Telugu (India) | te-IN |
-| Thai | th |
-| Thai (Thailand) | th-TH |
-| Tibetan | bo |
-| Tigrinya | ti |
-| Tongan | to |
-| Turkish | tr |
-| Turkish (Türkiye) | tr-TR |
-| Turkmen | tk |
-| Ukrainian | uk |
-| Upper Sorbian | hsb |
-| Urdu | ur |
-| Uyghur | ug |
-| Vietnamese | vi |
-| Vietnamese (Vietnam) | vi-VN |
-| Welsh | cy |
-| Yucatec Maya | yua |
-| Yue Chinese | yue |
-| Zulu | zu |
-
-## Language detection
-
-| Language | Tag |
-|-|--|
-| Arabic | ar |
-| Arabic (Egyptian) | ar-EG |
-| Arabic (Saudi Arabia) | ar-SA |
-| Basque | eu |
-| Bulgarian | bg |
-| Bulgarian (Bulgaria) | bg-BG |
-| Catalan | ca |
-| Catalan (Catalan) | ca-ES |
-| Chinese Simplified | zh-Hans |
-| Chinese Simplified (China) | zh-Hans-CN |
-| Chinese Simplified (Singapore) | zh-Hans-SG |
-| Chinese Traditional | zh-Hant-CN |
-| Chinese Traditional (Hong Kong SAR) | zh-Hant-HK |
-| Chinese Traditional (Macao SAR) | zh-Hant-MO |
-| Chinese Traditional (Taiwan) | zh-Hant-TW |
-| Croatian | hr |
-| Croatian (Croatia) | hr-HR |
-| Czech | cs |
-| Czech (Czech Republic) | cs-CZ |
-| Danish | da |
-| Danish (Denmark) | da-DK |
-| Dutch | nl |
-| Dutch (Netherlands) | nl-NL |
-| English | en |
-| English (Australia) | en-AU |
-| English (Canada) | en-CA |
-| English (Hong Kong SAR) | en-HK |
-| English (India) | en-IN |
-| English (Ireland) | en-IE |
-| English (New Zealand) | en-NZ |
-| English (United Kingdom) | en-GB |
-| English (United States) | en-US |
-| Estonian | et |
-| Finnish | fi |
-| Finnish (Finland) | fi-FI |
-| French | fr |
-| French (Canada) | fr-CA |
-| French (France) | fr-FR |
-| French (Switzerland) | fr-CH |
-| Galician | gl |
-| German | de |
-| German (Austria) | de-AT |
-| German (Germany) | de-DE |
-| German (Switzerland) | de-CH |
-| Greek | el |
-| Greek (Greece) | el-GR |
-| Hebrew | he |
-| Hebrew (Israel) | he-IL |
-| Hindi | hi |
-| Hindi (India) | hi-IN |
-| Hungarian | hu |
-| Hungarian (Hungary) | hu-HU |
-| Icelandic | is |
-| Indonesian | id |
-| Indonesian (Indonesia) | id-ID |
-| Italian | it |
-| Italian (Italy) | it-IT |
-| Japanese | ja |
-| Japanese (Japan) | ja-JP |
-| Kazakh | kk |
-| Korean | ko |
-| Korean (Korea) | ko-KR |
-| Latvian | lv |
-| Lithuanian | lt |
-| Malay | ms |
-| Malay (Malaysia) | ms-MY |
-| Norwegian Bokmal| nb |
-| Norwegian Bokmal (Norway) | nb-NO |
-| Norwegian Nynorsk | nn |
-| Polish | pl |
-| Polish (Poland) | pl-PL |
-| Portuguese | pt |
-| Portuguese (Brazil) | pt-BR |
-| Portuguese (Portugal) | pt-PT |
-| Romanian | ro |
-| Romanian (Romania) | ro-RO |
-| Russian | ru |
-| Russian (Russia) | ru-RU |
-| Serbian (Cyrillic) | sr-Cyrl |
-| Serbian (Latin) | sr-Latn |
-| Slovak | sk |
-| Slovak (Slovakia) | sk-SK |
-| Slovenian | sl |
-| Slovenian (Slovenia) | sl-SI |
-| Spanish | es |
-| Spanish (Latin America) | es-419 |
-| Spanish (Mexico) | es-MX |
-| Spanish (Spain) | es-ES |
-| Swedish | sv |
-| Swedish (Sweden) | sv-SE |
-| Tamil | ta |
-| Tamil (India) | ta-IN |
-| Telugu | te |
-| Telugu (India) | te-IN |
-| Thai | th |
-| Thai (Thailand) | th-TH |
-| Turkish | tr |
-| Turkish (Türkiye) | tr-TR |
-| Ukrainian | uk |
-| Vietnamese | vi |
-| Vietnamese (Vietnam) | vi-VN |
-| Welsh | cy |
-
-## Syllabification
-
-| Language | Tag |
-|-|--|
-| Basque | eu |
-| Bulgarian | bg |
-| Bulgarian (Bulgaria) | bg-BG |
-| Catalan | ca |
-| Catalan (Catalan) | ca-ES |
-| Croatian | hr |
-| Croatian (Croatia) | hr-HR |
-| Czech | cs |
-| Czech (Czech Republic) | cs-CZ |
-| Danish | da |
-| Danish (Denmark) | da-DK |
-| Dutch | nl |
-| Dutch (Netherlands) | nl-NL |
-| English | en |
-| English (Australia) | en-AU |
-| English (Canada) | en-CA |
-| English (Hong Kong SAR) | en-HK |
-| English (India) | en-IN |
-| English (Ireland) | en-IE |
-| English (New Zealand) | en-NZ |
-| English (United Kingdom) | en-GB |
-| English (United States) | en-US |
-| Estonian | et |
-| Finnish | fi |
-| Finnish (Finland) | fi-FI |
-| French | fr |
-| French (Canada) | fr-CA |
-| French (France) | fr-FR |
-| French (Switzerland) | fr-CH |
-| Galician | gl |
-| German | de |
-| German (Austria) | de-AT |
-| German (Germany) | de-DE |
-| German (Switzerland) | de-CH |
-| Greek | el |
-| Greek (Greece) | el-GR |
-| Hungarian | hu |
-| Hungarian (Hungary) | hu-HU |
-| Icelandic | is |
-| Italian | it |
-| Italian (Italy) | it-IT |
-| Kazakh | kk |
-| Latvian | lv |
-| Lithuanian | lt |
-| Norwegian Bokmal| nb |
-| Norwegian Bokmal (Norway) | nb-NO |
-| Norwegian Nynorsk | nn |
-| Polish | pl |
-| Polish (Poland) | pl-PL |
-| Portuguese | pt |
-| Portuguese (Brazil) | pt-BR |
-| Portuguese (Portugal) | pt-PT |
-| Romanian | ro |
-| Romanian (Romania) | ro-RO |
-| Russian | ru |
-| Russian (Russia) | ru-RU |
-| Serbian | sr |
-| Serbian (Cyrillic) | sr-Cyrl |
-| Serbian (Latin) | sr-Latn |
-| Slovak | sk |
-| Slovak (Slovakia) | sk-SK |
-| Slovenian | sl |
-| Slovenian (Slovenia) | sl-SI |
-| Spanish | es |
-| Spanish (Latin America) | es-419 |
-| Spanish (Mexico) | es-MX |
-| Spanish (Spain) | es-ES |
-| Swedish | sv |
-| Swedish (Sweden) | sv-SE |
-| Turkish | tr |
-| Turkish (Türkiye) | tr-TR |
-| Ukrainian | uk |
-| Welsh | cy |
-
-## Picture dictionary
-
-| Language | Tag |
-|-|--|
-| Danish | da |
-| Danish (Denmark) | da-DK |
-| Dutch | nl |
-| Dutch (Netherlands) | nl-NL |
-| English | en |
-| English (Australia) | en-AU |
-| English (Canada) | en-CA |
-| English (Hong Kong SAR) | en-HK |
-| English (India) | en-IN |
-| English (Ireland) | en-IE |
-| English (New Zealand) | en-NZ |
-| English (United Kingdom) | en-GB |
-| English (United States) | en-US |
-| Finnish | fi |
-| Finnish (Finland) | fi-FI |
-| French | fr |
-| French (Canada) | fr-CA |
-| French (France) | fr-FR |
-| French (Switzerland) | fr-CH |
-| German | de |
-| German (Austria) | de-AT |
-| German (Germany) | de-DE |
-| German (Switzerland) | de-CH |
-| Italian | it |
-| Italian (Italy) | it-IT |
-| Japanese | ja |
-| Japanese (Japan) | ja-JP |
-| Korean | ko |
-| Korean (Korea) | ko-KR |
-| Norwegian Bokmal| nb |
-| Norwegian Bokmal (Norway) | nb-NO |
-| Polish | pl |
-| Polish (Poland) | pl-PL |
-| Portuguese | pt |
-| Portuguese (Brazil) | pt-BR |
-| Russian | ru |
-| Russian (Russia) | ru-RU |
-| Spanish | es |
-| Spanish (Latin America) | es-419 |
-| Spanish (Mexico) | es-MX |
-| Spanish (Spain) | es-ES |
-| Swedish | sv |
-| Swedish (Sweden) | sv-SE |
-
-## Parts of speech
-
-| Language | Tag |
-|-|--|
-| Arabic (Egyptian) | ar-EG |
-| Arabic (Saudi Arabia) | ar-SA |
-| Danish | da |
-| Danish (Denmark) | da-DK |
-| Dutch | nl |
-| Dutch (Netherlands) | nl-NL |
-| English | en |
-| English (Australia) | en-AU |
-| English (Canada) | en-CA |
-| English (Hong Kong SAR) | en-HK |
-| English (India) | en-IN |
-| English (Ireland) | en-IE |
-| English (New Zealand) | en-NZ |
-| English (United Kingdom) | en-GB |
-| English (United States) | en-US |
-| Finnish | fi |
-| Finnish (Finland) | fi-FI |
-| French | fr |
-| French (Canada) | fr-CA |
-| French (France) | fr-FR |
-| French (Switzerland) | fr-CH |
-| German | de |
-| German (Austria) | de-AT |
-| German (Germany) | de-DE |
-| German (Switzerland) | de-CH |
-| Italian | it |
-| Italian (Italy) | it-IT |
-| Japanese | ja |
-| Japanese (Japan) | ja-JP |
-| Korean | ko |
-| Korean (Korea) | ko-KR |
-| Norwegian Bokmal| nb |
-| Norwegian Bokmal (Norway) | nb-NO |
-| Norwegian Nynorsk | nn |
-| Polish | pl |
-| Polish (Poland) | pl-PL |
-| Portuguese | pt |
-| Portuguese (Brazil) | pt-BR |
-| Russian | ru |
-| Russian (Russia) | ru-RU |
-| Spanish | es |
-| Spanish (Latin America) | es-419 |
-| Spanish (Mexico) | es-MX |
-| Spanish (Spain) | es-ES |
-| Swedish | sv |
-| Swedish (Sweden) | sv-SE |
applied-ai-services Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/immersive-reader/overview.md
- Title: What is Azure Immersive Reader?
-description: Immersive Reader is a tool that is designed to help people with learning differences or help new readers and language learners with reading comprehension.
- Previously updated : 11/15/2021
-keywords: readers, language learners, display pictures, improve reading, read content, translate
-#Customer intent: As a developer, I want to learn more about the Immersive Reader, which is a new offering in Cognitive Services, so that I can embed this package of content into a document to accommodate users with reading differences.
--
-# What is Azure Immersive Reader?
-
-[Immersive Reader](https://www.onenote.com/learningtools) is part of [Azure Applied AI Services](../../applied-ai-services/what-are-applied-ai-services.md), and is an inclusively designed tool that implements proven techniques to improve reading comprehension for new readers, language learners, and people with learning differences such as dyslexia. With the Immersive Reader client library, you can leverage the same technology used in Microsoft Word and Microsoft OneNote to improve your web applications.
-
-This documentation contains the following types of articles:
-
-* **[Quickstarts](quickstarts/client-libraries.md)** are getting-started instructions to guide you through making requests to the service.
-* **[How-to guides](how-to-create-immersive-reader.md)** contain instructions for using the service in more specific or customized ways.
-
-## Use Immersive Reader to improve reading accessibility
-
-Immersive Reader is designed to make reading easier and more accessible for everyone. Let's take a look at a few of Immersive Reader's core features.
-
-### Isolate content for improved readability
-
-Immersive Reader isolates content to improve readability.
-
- ![Isolate content for improved readability with Immersive Reader](./media/immersive-reader.png)
-
-### Display pictures for common words
-
-For commonly used terms, the Immersive Reader will display a picture.
-
- ![Picture Dictionary with Immersive Reader](./media/picture-dictionary.png)
-
-### Highlight parts of speech
-
-Immersive Reader can be used to help learners understand parts of speech and grammar by highlighting verbs, nouns, pronouns, and more.
-
- ![Show parts of speech with Immersive Reader](./media/parts-of-speech.png)
-
-### Read content aloud
-
-Speech synthesis (or text-to-speech) is baked into the Immersive Reader service, which lets your readers select text to be read aloud.
-
- ![Read text aloud with Immersive Reader](./media/read-aloud.png)
-
-### Translate content in real-time
-
-Immersive Reader can translate text into many languages in real-time. This is helpful to improve comprehension for readers learning a new language.
-
- ![Translate text with Immersive Reader](./media/translation.png)
-
-### Split words into syllables
-
-With Immersive Reader you can break words into syllables to improve readability or to sound out new words.
-
- ![Break words into syllables with Immersive Reader](./media/syllabification.png)
-
-## How does Immersive Reader work?
-
-Immersive Reader is a standalone web application. When invoked with the Immersive Reader client library, it's displayed on top of your existing web application in an `iframe`. When your web application calls the Immersive Reader service, you specify the content to show the reader. The Immersive Reader client library handles the creation and styling of the `iframe` and communication with the Immersive Reader backend service. The Immersive Reader service processes the content for parts of speech, text to speech, translation, and more.
-
-## Get started with Immersive Reader
-
-The Immersive Reader client library is available in C#, JavaScript, Java (Android), Kotlin (Android), and Swift (iOS). Get started with:
-
-* [Quickstart: Use the Immersive Reader client library](quickstarts/client-libraries.md)
applied-ai-services Client Libraries https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/immersive-reader/quickstarts/client-libraries.md
- Title: "Quickstart: Immersive Reader client library"
-description: "The Immersive Reader client library makes it easy to integrate the Immersive Reader service into your web applications to improve reading comprehension. In this quickstart, you'll learn how to use Immersive Reader for text selection, recognizing parts of speech, reading selected text out loud, translation, and more."
---
-zone_pivot_groups: programming-languages-set-twenty
- Previously updated : 03/08/2021
-keywords: display pictures, parts of speech, read selected text, translate words, reading comprehension
--
-# Quickstart: Get started with Immersive Reader
applied-ai-services Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/immersive-reader/reference.md
- Title: "Immersive Reader SDK Reference"
-description: The Immersive Reader SDK contains a JavaScript library that allows you to integrate the Immersive Reader into your application.
- Previously updated : 11/15/2021
-# Immersive Reader JavaScript SDK Reference (v1.2)
-
-The Immersive Reader SDK contains a JavaScript library that allows you to integrate the Immersive Reader into your application.
-
-You may use `npm`, `yarn`, or an `HTML` `<script>` element to include the library of the latest stable build in your web application:
-
-```html
-<script type='text/javascript' src='https://ircdname.azureedge.net/immersivereadersdk/immersive-reader-sdk.1.2.0.js'></script>
-```
-
-```bash
-npm install @microsoft/immersive-reader-sdk
-```
-
-```bash
-yarn add @microsoft/immersive-reader-sdk
-```
-
-## Functions
-
-The SDK exposes the functions:
-- [`ImmersiveReader.launchAsync(token, subdomain, content, options)`](#launchasync)
-- [`ImmersiveReader.close()`](#close)
-- [`ImmersiveReader.renderButtons(options)`](#renderbuttons)
-<br>
-
-## launchAsync
-
-Launches the Immersive Reader within an `HTML` `iframe` element in your web application. The size of your content is limited to a maximum of 50 MB.
-
-```typescript
-launchAsync(token: string, subdomain: string, content: Content, options?: Options): Promise<LaunchResponse>;
-```
-
-#### launchAsync Parameters
-
-| Name | Type | Description |
-| - | - | |
-| `token` | string | The Azure AD authentication token. For more information, see [How-To Create an Immersive Reader Resource](./how-to-create-immersive-reader.md). |
-| `subdomain` | string | The custom subdomain of your Immersive Reader resource in Azure. For more information, see [How-To Create an Immersive Reader Resource](./how-to-create-immersive-reader.md). |
-| `content` | [Content](#content) | An object containing the content to be shown in the Immersive Reader. |
-| `options` | [Options](#options) | Options for configuring certain behaviors of the Immersive Reader. Optional. |
-
-#### Returns
-
-Returns a `Promise<LaunchResponse>`, which resolves when the Immersive Reader is loaded. The `Promise` resolves to a [`LaunchResponse`](#launchresponse) object.
-
-#### Exceptions
-
-The returned `Promise` will be rejected with an [`Error`](#error) object if the Immersive Reader fails to load. For more information, see the [error codes](#error-codes).
-
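-The following is a minimal sketch of a typical call. It assumes `token` and `subdomain` have already been obtained as described above; the sample text and option values are illustrative only:
-
-```typescript
-// Minimal sketch: launch the reader with a single plain-text chunk.
-// 'token' and 'subdomain' are placeholders supplied by your own backend.
-const content: Content = {
-    title: 'Geography',
-    chunks: [{
-        content: 'The study of landforms is called physical geography.',
-        lang: 'en'
-    }]
-};
-
-ImmersiveReader.launchAsync(token, subdomain, content, { uiZIndex: 2000 })
-    .then((launchResponse: LaunchResponse) => {
-        console.log(`Immersive Reader session ${launchResponse.sessionId} launched`);
-    })
-    .catch((error) => {
-        console.log(`Launch failed: ${error.code} - ${error.message}`);
-    });
-```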
-<br>
-
-## close
-
-Closes the Immersive Reader.
-
-An example use case for this function is if the exit button is hidden by setting ```hideExitButton: true``` in [options](#options). Then, a different button (for example, a mobile header's back arrow) can call this ```close``` function when it's clicked.
-
-```typescript
-close(): void;
-```
-
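-As a sketch of that pattern, assuming the host page contains a back-button element (the `myBackButton` ID is hypothetical):
-
-```typescript
-// Sketch: hide the built-in exit arrow and close the reader from a
-// custom host-page button instead. 'myBackButton' is a hypothetical ID.
-ImmersiveReader.launchAsync(token, subdomain, content, { hideExitButton: true });
-
-document.getElementById('myBackButton')?.addEventListener('click', () => {
-    ImmersiveReader.close();
-});
-```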
-<br>
-
-## Immersive Reader Launch Button
-
-The SDK provides default styling for the button for launching the Immersive Reader. Use the `immersive-reader-button` class attribute to enable this styling. For more information, see [How-To Customize the Immersive Reader button](./how-to-customize-launch-button.md).
-
-```html
-<div class='immersive-reader-button'></div>
-```
-
-#### Optional attributes
-
-Use the following attributes to configure the look and feel of the button.
-
-| Attribute | Description |
-| | -- |
-| `data-button-style` | Sets the style of the button. Can be `icon`, `text`, or `iconAndText`. Defaults to `icon`. |
-| `data-locale` | Sets the locale. For example, `en-US` or `fr-FR`. Defaults to English `en`. |
-| `data-icon-px-size` | Sets the size of the icon in pixels. Defaults to 20px. |
-
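-As an illustrative sketch, the same attributes can be set programmatically before the button is rendered with the [renderButtons](#renderbuttons) function described below (the element and attribute values here are arbitrary):
-
-```typescript
-// Sketch: create a launch button that shows icon and text, uses a French
-// UI locale and a 24px icon, then let the SDK render it.
-const buttonDiv = document.createElement('div');
-buttonDiv.className = 'immersive-reader-button';
-buttonDiv.setAttribute('data-button-style', 'iconAndText');
-buttonDiv.setAttribute('data-locale', 'fr-FR');
-buttonDiv.setAttribute('data-icon-px-size', '24');
-document.body.appendChild(buttonDiv);
-
-ImmersiveReader.renderButtons({ elements: [buttonDiv] });
-```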
-<br>
-
-## renderButtons
-
-The ```renderButtons``` function isn't necessary if you're using the [How-To Customize the Immersive Reader button](./how-to-customize-launch-button.md) guidance.
-
-This function styles and updates the document's Immersive Reader button elements. If ```options.elements``` is provided, then the buttons will be rendered within each element provided in ```options.elements```. Using the ```options.elements``` parameter is useful when you have multiple sections in your document on which to launch the Immersive Reader, and want a simplified way to render multiple buttons with the same styling, or want to render the buttons with a simple and consistent design pattern. To use this function with the [renderButtons options](#renderbuttons-options) parameter, call ```ImmersiveReader.renderButtons(options: RenderButtonsOptions);``` on page load as demonstrated in the below code snippet. Otherwise, the buttons will be rendered within the document's elements that have the class ```immersive-reader-button``` as shown in [How-To Customize the Immersive Reader button](./how-to-customize-launch-button.md).
-
-```typescript
-// This snippet assumes there are two empty div elements in
-// the page HTML, button1 and button2.
-const btn1: HTMLDivElement = document.getElementById('button1');
-const btn2: HTMLDivElement = document.getElementById('button2');
-const btns: HTMLDivElement[] = [btn1, btn2];
-ImmersiveReader.renderButtons({elements: btns});
-```
-
-See the above [Optional Attributes](#optional-attributes) for more rendering options. To use these options, add any of the option attributes to each ```HTMLDivElement``` in your page HTML.
-
-```typescript
-renderButtons(options?: RenderButtonsOptions): void;
-```
-
-#### renderButtons Parameters
-
-| Name | Type | Description |
-| - | - | |
-| `options` | [renderButtons options](#renderbuttons-options) | Options for configuring certain behaviors of the renderButtons function. Optional. |
-
-### renderButtons Options
-
-Options for rendering the Immersive Reader buttons.
-
-```typescript
-{
- elements: HTMLDivElement[];
-}
-```
-
-#### renderButtons Options Parameters
-
-| Setting | Type | Description |
-| - | - | -- |
-| elements | HTMLDivElement[] | Elements to render the Immersive Reader buttons in. |
-
-##### `elements`
-```Parameters
-Type: HTMLDivElement[]
-Required: false
-```
-
-<br>
-
-## LaunchResponse
-
-Contains the response from the call to `ImmersiveReader.launchAsync`. A reference to the `HTML` `iframe` element that contains the Immersive Reader can be accessed via `container.firstChild`.
-
-```typescript
-{
- container: HTMLDivElement;
- sessionId: string;
- charactersProcessed: number;
-}
-```
-
-#### LaunchResponse Parameters
-
-| Setting | Type | Description |
-| - | - | -- |
-| container | HTMLDivElement | HTML element that contains the Immersive Reader `iframe` element. |
-| sessionId | String | Globally unique identifier for this session, used for debugging. |
-| charactersProcessed | number | Total number of characters processed |
-
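-For illustration, a sketch of reading these values after launch (the `iframe` cast assumes the default `iframe` mode rather than `useWebview`):
-
-```typescript
-// Sketch: inspect the LaunchResponse once the reader has loaded.
-ImmersiveReader.launchAsync(token, subdomain, content)
-    .then((response: LaunchResponse) => {
-        const readerIframe = response.container.firstChild as HTMLIFrameElement;
-        console.log(`Session ${response.sessionId} processed ${response.charactersProcessed} characters`);
-        console.log(`Reader is hosted in a <${readerIframe.tagName.toLowerCase()}> element`);
-    });
-```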
-## Error
-
-Contains information about an error.
-
-```typescript
-{
- code: string;
- message: string;
-}
-```
-
-#### Error Parameters
-
-| Setting | Type | Description |
-| - | - | -- |
-| code | String | One of a set of error codes. For more information, see [Error codes](#error-codes). |
-| message | String | Human-readable representation of the error. |
-
-#### Error codes
-
-| Code | Description |
-| - | -- |
-| BadArgument | Supplied argument is invalid, see `message` parameter of the [Error](#error). |
-| Timeout | The Immersive Reader failed to load within the specified timeout. |
-| TokenExpired | The supplied token is expired. |
-| Throttled | The call rate limit has been exceeded. |
-
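-A sketch of branching on these codes in the rejection handler (the recovery behavior shown is purely illustrative, and `refreshTokenAndRetry` is a hypothetical helper in your own application):
-
-```typescript
-// Sketch: inspect the SDK error code returned from a failed launch.
-ImmersiveReader.launchAsync(token, subdomain, content).catch((error) => {
-    switch (error.code) {
-        case 'TokenExpired':
-            refreshTokenAndRetry(); // hypothetical helper that gets a fresh Azure AD token
-            break;
-        case 'Throttled':
-            console.log('Call rate limit exceeded; try again later.');
-            break;
-        default:
-            console.log(`Immersive Reader error: ${error.code} - ${error.message}`);
-    }
-});
-```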
-<br>
-
-## Types
-
-### Content
-
-Contains the content to be shown in the Immersive Reader.
-
-```typescript
-{
- title?: string;
- chunks: Chunk[];
-}
-```
-
-#### Content Parameters
-
-| Name | Type | Description |
-| - | - | |
-| title | String | Title text shown at the top of the Immersive Reader (optional) |
-| chunks | [Chunk[]](#chunk) | Array of chunks |
-
-##### `title`
-```Parameters
-Type: String
-Required: false
-Default value: "Immersive Reader"
-```
-
-##### `chunks`
-```Parameters
-Type: Chunk[]
-Required: true
-Default value: null
-```
-
-<br>
-
-### Chunk
-
-A single chunk of data, which will be passed into the Content of the Immersive Reader.
-
-```typescript
-{
- content: string;
- lang?: string;
- mimeType?: string;
-}
-```
-
-#### Chunk Parameters
-
-| Name | Type | Description |
-| - | - | |
-| content | String | The string that contains the content sent to the Immersive Reader. |
-| lang | String | Language of the text, the value is in IETF BCP 47-language tag format, for example, en, es-ES. Language will be detected automatically if not specified. For more information, see [Supported Languages](#supported-languages). |
-| mimeType | string | Plain text, MathML, HTML & Microsoft Word DOCX formats are supported. For more information, see [Supported MIME types](#supported-mime-types). |
-
-##### `content`
-```Parameters
-Type: String
-Required: true
-Default value: null
-```
-
-##### `lang`
-```Parameters
-Type: String
-Required: false
-Default value: Automatically detected
-```
-
-##### `mimeType`
-```Parameters
-Type: String
-Required: false
-Default value: "text/plain"
-```
-
-#### Supported MIME types
-
-| MIME Type | Description |
-| | -- |
-| text/plain | Plain text. |
-| text/html | HTML content. [Learn more](#html-support)|
-| application/mathml+xml | Mathematical Markup Language (MathML). [Learn more](./how-to/display-math.md).
-| application/vnd.openxmlformats-officedocument.wordprocessingml.document | Microsoft Word .docx format document.
--
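-As a sketch, a single launch can mix MIME types across chunks (the text below is placeholder content):
-
-```typescript
-// Sketch: one plain-text chunk and one HTML chunk in the same Content object.
-const content: Content = {
-    title: 'Sample document',
-    chunks: [
-        { content: 'Hello world.', lang: 'en', mimeType: 'text/plain' },
-        { content: '<p>Hello <b>world</b>.</p>', lang: 'en', mimeType: 'text/html' }
-    ]
-};
-```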
-<br>
-
-## Options
-
-Contains properties that configure certain behaviors of the Immersive Reader.
-
-```typescript
-{
- uiLang?: string;
- timeout?: number;
- uiZIndex?: number;
- useWebview?: boolean;
- onExit?: () => any;
- customDomain?: string;
- allowFullscreen?: boolean;
- parent?: Node;
- hideExitButton?: boolean;
- cookiePolicy?: CookiePolicy;
- disableFirstRun?: boolean;
- readAloudOptions?: ReadAloudOptions;
- translationOptions?: TranslationOptions;
- displayOptions?: DisplayOptions;
- preferences?: string;
- onPreferencesChanged?: (value: string) => any;
- disableGrammar?: boolean;
- disableTranslation?: boolean;
- disableLanguageDetection?: boolean;
-}
-```
-
-#### Options Parameters
-
-| Name | Type | Description |
-| - | - | |
-| uiLang | String | Language of the UI, the value is in IETF BCP 47-language tag format, for example, en, es-ES. Defaults to browser language if not specified. |
-| timeout | Number | Duration (in milliseconds) before [launchAsync](#launchasync) fails with a timeout error (default is 15,000 ms). This timeout only applies to the initial launch of the Reader page, when the Reader page opens successfully and the spinner starts. Adjusting the timeout shouldn't be necessary. |
-| uiZIndex | Number | Z-index of the `HTML` `iframe` element that will be created (default is 1000). |
-| useWebview | Boolean| Use a webview tag instead of an `HTML` `iframe` element, for compatibility with Chrome Apps (default is false). |
-| onExit | Function | Executes when the Immersive Reader exits. |
-| customDomain | String | Reserved for internal use. Custom domain where the Immersive Reader webapp is hosted (default is null). |
-| allowFullscreen | Boolean | The ability to toggle fullscreen (default is true). |
-| parent | Node | Node in which the `HTML` `iframe` element or `Webview` container is placed. If the element doesn't exist, iframe is placed in `body`. |
-| hideExitButton | Boolean | Hides the Immersive Reader's exit button arrow (default is false). This value should only be true if there's an alternative mechanism provided to exit the Immersive Reader (for example, a mobile toolbar's back arrow). |
-| cookiePolicy | [CookiePolicy](#cookiepolicy-options) | Setting for the Immersive Reader's cookie usage (default is *CookiePolicy.Disable*). It's the responsibility of the host application to obtain any necessary user consent following EU Cookie Compliance Policy. For more information, see [Cookie Policy Options](#cookiepolicy-options). |
-| disableFirstRun | Boolean | Disable the first run experience. |
-| readAloudOptions | [ReadAloudOptions](#readaloudoptions) | Options to configure Read Aloud. |
-| translationOptions | [TranslationOptions](#translationoptions) | Options to configure translation. |
-| displayOptions | [DisplayOptions](#displayoptions) | Options to configure text size, font, theme, and so on. |
-| preferences | String | String returned from onPreferencesChanged representing the user's preferences in the Immersive Reader. For more information, see [Settings Parameters](#settings-parameters) and [How-To Store User Preferences](./how-to-store-user-preferences.md). |
-| onPreferencesChanged | Function | Executes when the user's preferences have changed. For more information, see [How-To Store User Preferences](./how-to-store-user-preferences.md). |
-| disableTranslation | Boolean | Disable the word and document translation experience. |
-| disableGrammar | Boolean | Disable the Grammar experience. This option will also disable Syllables, Parts of Speech and Picture Dictionary, which depends on Parts of Speech. |
-| disableLanguageDetection | Boolean | Disable Language Detection to ensure the Immersive Reader only uses the language that is explicitly specified on the [Content](#content)/[Chunk[]](#chunk). This option should be used sparingly, primarily in situations where language detection isn't working; for instance, detection is more likely to fail on short passages of fewer than 100 characters. You should be certain about the language you're sending; if it's wrong, text-to-speech won't use the correct voice, and Syllables, Parts of Speech, and Picture Dictionary won't work correctly. |
-
-##### `uiLang`
-```Parameters
-Type: String
-Required: false
-Default value: User's browser language
-```
-
-##### `timeout`
-```Parameters
-Type: Number
-Required: false
-Default value: 15000
-```
-
-##### `uiZIndex`
-```Parameters
-Type: Number
-Required: false
-Default value: 1000
-```
-
-##### `onExit`
-```Parameters
-Type: Function
-Required: false
-Default value: null
-```
-
-##### `preferences`
-
-> [!CAUTION]
-> **IMPORTANT** Do not attempt to programmatically change the values of the `preferences` string sent to and from the Immersive Reader application as this may cause unexpected behavior resulting in a degraded user experience for your customers. Host applications should never assign a custom value to or manipulate the `preferences` string. When using the `preferences` string option, use only the exact value that was returned from the `onPreferencesChanged` callback option.
-
-```Parameters
-Type: String
-Required: false
-Default value: null
-```
-
-##### `onPreferencesChanged`
-```Parameters
-Type: Function
-Required: false
-Default value: null
-```
-
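-A minimal sketch of the intended round-trip, using `localStorage` purely for illustration (a production app might instead store the string server-side per user); the value from `onPreferencesChanged` is saved as-is and passed back unmodified on the next launch:
-
-```typescript
-// Sketch: persist the reader's preferences string and send it back verbatim.
-const savedPreferences = localStorage.getItem('immersive-reader-preferences');
-
-const options: Options = {
-    preferences: savedPreferences ?? undefined,
-    onPreferencesChanged: (value: string) => {
-        localStorage.setItem('immersive-reader-preferences', value);
-    }
-};
-
-ImmersiveReader.launchAsync(token, subdomain, content, options);
-```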
-##### `customDomain`
-```Parameters
-Type: String
-Required: false
-Default value: null
-```
-
-<br>
-
-## ReadAloudOptions
-
-```typescript
-type ReadAloudOptions = {
- voice?: string;
- speed?: number;
- autoplay?: boolean;
-};
-```
-
-#### ReadAloudOptions Parameters
-
-| Name | Type | Description |
-| - | - | |
-| voice | String | Voice, either "Female" or "Male". Not all languages support both genders. |
-| speed | Number | Playback speed, must be between 0.5 and 2.5, inclusive. |
-| autoplay | Boolean | Automatically start Read Aloud when the Immersive Reader loads. |
-
-##### `voice`
-```Parameters
-Type: String
-Required: false
-Default value: "Female" or "Male" (determined by language)
-Values available: "Female", "Male"
-```
-
-##### `speed`
-```Parameters
-Type: Number
-Required: false
-Default value: 1
-Values available: 0.5, 0.75, 1, 1.25, 1.5, 1.75, 2, 2.25, 2.5
-```
-
-> [!NOTE]
-> Due to browser limitations, autoplay is not supported in Safari.
-
-<br>
-
-## TranslationOptions
-
-```typescript
-type TranslationOptions = {
- language: string;
- autoEnableDocumentTranslation?: boolean;
- autoEnableWordTranslation?: boolean;
-};
-```
-
-#### TranslationOptions Parameters
-
-| Name | Type | Description |
-| - | - | |
-| language | String | Sets the translation language, the value is in IETF BCP 47-language tag format, for example, fr-FR, es-MX, zh-Hans-CN. Required to automatically enable word or document translation. |
-| autoEnableDocumentTranslation | Boolean | Automatically translate the entire document. |
-| autoEnableWordTranslation | Boolean | Automatically enable word translation. |
-
-##### `language`
-```Parameters
-Type: String
-Required: true
-Default value: null
-Values available: For more information, see the Supported Languages section
-```
-
-<br>
-
-## ThemeOption
-
-```typescript
-enum ThemeOption { Light, Dark }
-```
-
-## DisplayOptions
-
-```typescript
-type DisplayOptions = {
- textSize?: number;
- increaseSpacing?: boolean;
- fontFamily?: string;
- themeOption?: ThemeOption
-};
-```
-
-#### DisplayOptions Parameters
-
-| Name | Type | Description |
-| - | - | |
-| textSize | Number | Sets the chosen text size. |
-| increaseSpacing | Boolean | Sets whether text spacing is toggled on or off. |
-| fontFamily | String | Sets the chosen font ("Calibri", "ComicSans", or "Sitka"). |
-| themeOption | ThemeOption | Sets the chosen Theme of the reader ("Light", "Dark"). |
-
-##### `textSize`
-```Parameters
-Type: Number
-Required: false
-Default value: 20, 36 or 42 (Determined by screen size)
-Values available: 14, 20, 28, 36, 42, 48, 56, 64, 72, 84, 96
-```
-
-##### `fontFamily`
-```Parameters
-Type: String
-Required: false
-Default value: "Calibri"
-Values available: "Calibri", "Sitka", "ComicSans"
-```
-
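-As an illustrative sketch combining the option groups above (the values are arbitrary, and it's assumed that `Options` and `ThemeOption` are exported by the SDK alongside the other types):
-
-```typescript
-// Sketch: configure Read Aloud, translation, and display in one launch call.
-const options: Options = {
-    readAloudOptions: { voice: 'Female', speed: 1.25, autoplay: true },
-    translationOptions: { language: 'fr-FR', autoEnableWordTranslation: true },
-    displayOptions: { textSize: 36, fontFamily: 'Sitka', themeOption: ThemeOption.Dark }
-};
-
-ImmersiveReader.launchAsync(token, subdomain, content, options);
-```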
-<br>
-
-## CookiePolicy Options
-
-```typescript
-enum CookiePolicy { Disable, Enable }
-```
-
-**The settings listed below are for informational purposes only**. The Immersive Reader stores its settings, or user preferences, in cookies. This *cookiePolicy* option **disables** the use of cookies by default to follow EU Cookie Compliance laws. If you want to re-enable cookies and restore the default functionality for Immersive Reader user preferences, your website or application will need proper consent from the user to enable cookies. Then, to re-enable cookies in the Immersive Reader, you must explicitly set the *cookiePolicy* option to *CookiePolicy.Enable* when launching the Immersive Reader. The table below describes what settings the Immersive Reader stores in its cookie when the *cookiePolicy* option is enabled.
-
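-As a sketch, and assuming the `CookiePolicy` enum is imported from the SDK and that your application has already obtained the user's consent:
-
-```typescript
-// Sketch: opt back in to cookie-based storage of user preferences after consent.
-ImmersiveReader.launchAsync(token, subdomain, content, {
-    cookiePolicy: CookiePolicy.Enable
-});
-```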
-#### Settings Parameters
-
-| Setting | Type | Description |
-| - | - | -- |
-| textSize | Number | Sets the chosen text size. |
-| fontFamily | String | Sets the chosen font ("Calibri", "ComicSans", or "Sitka"). |
-| textSpacing | Number | Sets whether text spacing is toggled on or off. |
-| formattingEnabled | Boolean | Sets whether HTML formatting is toggled on or off. |
-| theme | String | Sets the chosen theme (for example, "Light" or "Dark"). |
-| syllabificationEnabled | Boolean | Sets whether syllabification is toggled on or off. |
-| nounHighlightingEnabled | Boolean | Sets whether noun-highlighting is toggled on or off. |
-| nounHighlightingColor | String | Sets the chosen noun-highlighting color. |
-| verbHighlightingEnabled | Boolean | Sets whether verb-highlighting is toggled on or off. |
-| verbHighlightingColor | String | Sets the chosen verb-highlighting color. |
-| adjectiveHighlightingEnabled | Boolean | Sets whether adjective-highlighting is toggled on or off. |
-| adjectiveHighlightingColor | String | Sets the chosen adjective-highlighting color. |
-| adverbHighlightingEnabled | Boolean | Sets whether adverb-highlighting is toggled on or off. |
-| adverbHighlightingColor | String | Sets the chosen adverb-highlighting color. |
-| pictureDictionaryEnabled | Boolean | Sets whether Picture Dictionary is toggled on or off. |
-| posLabelsEnabled | Boolean | Sets whether the superscript text label of each highlighted Part of Speech is toggled on or off. |
-
-<br>
-
-## Supported Languages
-
-The translation feature of Immersive Reader supports many languages. For more information, see [Language Support](./language-support.md).
-
-<br>
-
-## HTML support
-
-When formatting is enabled, the following content will be rendered as HTML in the Immersive Reader.
-
-| HTML | Supported Content |
-| | -- |
-| Font Styles | Bold, Italic, Underline, Code, Strikethrough, Superscript, Subscript |
-| Unordered Lists | Disc, Circle, Square |
-| Ordered Lists | Decimal, Upper-Alpha, Lower-Alpha, Upper-Roman, Lower-Roman |
-
-Unsupported tags will be rendered comparably. Images and tables are currently not supported.
-
-<br>
-
-## Browser support
-
-Use the most recent versions of the following browsers for the best experience with the Immersive Reader.
-
-* Microsoft Edge
-* Internet Explorer 11
-* Google Chrome
-* Mozilla Firefox
-* Apple Safari
-
-<br>
-
-## Next steps
-
-* Explore the [Immersive Reader SDK on GitHub](https://github.com/microsoft/immersive-reader-sdk)
-* [Quickstart: Create a web app that launches the Immersive Reader (C#)](./quickstarts/client-libraries.md?pivots=programming-language-csharp)
applied-ai-services Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/immersive-reader/release-notes.md
- Title: "Immersive Reader SDK Release Notes"
-description: Learn more about what's new in the Immersive Reader JavaScript SDK.
- Previously updated : 11/15/2021
-# Immersive Reader JavaScript SDK Release Notes
-
-## Version 1.3.0
-
-This release contains new features, security vulnerability fixes, and updates to code samples.
-
-#### New Features
-
-* Added the capability for the Immersive Reader iframe to request microphone permissions for Reading Coach
-
-#### Improvements
-
-* Update code samples to use v1.3.0
-* Update code samples to demonstrate the usage of latest options from v1.2.0
-
-## Version 1.2.0
-
-This release contains new features, security vulnerability fixes, bug fixes, updates to code samples, and configuration options.
-
-#### New Features
-
-* Add option to set the theme to light or dark
-* Add option to set the parent node where the iframe/webview container is placed
-* Add option to disable the Grammar experience
-* Add option to disable the Translation experience
-* Add option to disable Language Detection
-
-#### Improvements
-
-* Add title and aria modal attributes to the iframe
-* Set isLoading to false when exiting
-* Update code samples to use v1.2.0
-* Adds React code sample
-* Adds Ember code sample
-* Adds Azure function code sample
-* Adds C# code sample demonstrating how to call the Azure Function for authentication
-* Adds Android Kotlin code sample demonstrating how to call the Azure Function for authentication
-* Updates the Swift code sample to be Objective C compliant
-* Updates Advanced C# code sample to demonstrate the usage of new options: parent node, disableGrammar, disableTranslation, and disableLanguageDetection
-
-#### Fixes
-
-* Fixes multiple security vulnerabilities by upgrading TypeScript packages
-* Fixes bug where renderButton rendered a duplicate icon and label in the button
-
-## Version 1.1.0
-
-This release contains new features, security vulnerability fixes, bug fixes, updates to code samples, and configuration options.
-
-#### New Features
-
-* Enable saving and loading user preferences across different browsers and devices
-* Enable configuring default display options
-* Add option to set the translation language, enable word translation, and enable document translation when launching Immersive Reader
-* Add support for configuring Read Aloud via options
-* Add ability to disable first run experience
-* Add ImmersiveReaderView for UWP
-
-#### Improvements
-
-* Update the Android code sample HTML to work with the latest SDK
-* Update launch response to return the number of characters processed
-* Update code samples to use v1.1.0
-* Do not allow launchAsync to be called when already loading
-* Check for invalid content by ignoring messages where the data is not a string
-* Wrap call to window in an if clause to check browser support of Promise
-
-#### Fixes
-
-* Fix dependabot by removing yarn.lock from gitignore
-* Fix security vulnerability by upgrading pug to v3.0.0 in quickstart-nodejs code sample
-* Fix multiple security vulnerabilities by upgrading Jest and TypeScript packages
-* Fix a security vulnerability by upgrading Microsoft.IdentityModel.Clients.ActiveDirectory to v5.2.0
-
-<br>
-
-## Version 1.0.0
-
-This release contains breaking changes, new features, code sample improvements, and bug fixes.
-
-#### Breaking Changes
-
-* Require an Azure AD token and subdomain, and deprecate tokens used in previous versions.
-* Set CookiePolicy to disabled. Retention of user preferences is disabled by default. The Reader launches with default settings every time, unless the CookiePolicy is set to enabled.
-
-#### New Features
-
-* Add support to enable or disable cookies
-* Add Android Kotlin quick start code sample
-* Add Android Java quick start code sample
-* Add Node quick start code sample
-
-#### Improvements
-
-* Update Node.js advanced README.md
-* Change Python code sample from advanced to quick start
-* Move iOS Swift code sample into js/samples
-* Update code samples to use v1.0.0
-
-#### Fixes
-
-* Fix for Node.js advanced code sample
-* Add missing files for advanced-csharp-multiple-resources
-* Remove en-us from hyperlinks
-
-<br>
-
-## Version 0.0.3
-
-This release contains new features, improvements to code samples, security vulnerability fixes, and bug fixes.
-
-#### New Features
-
-* Add iOS Swift code sample
-* Add C# advanced code sample demonstrating use of multiple resources
-* Add support to disable the full screen toggle feature
-* Add support to hide the Immersive Reader application exit button
-* Add a callback function that may be used by the host application upon exiting the Immersive Reader
-* Update code samples to use Azure Active Directory Authentication
-
-#### Improvements
-
-* Update C# advanced code sample to include Word document
-* Update code samples to use v0.0.3
-
-#### Fixes
-
-* Upgrade lodash to version 4.17.14 to fix security vulnerability
-* Update C# MSAL library to fix security vulnerability
-* Upgrade mixin-deep to version 1.3.2 to fix security vulnerability
-* Upgrade jest, webpack and webpack-cli which were using vulnerable versions of set-value and mixin-deep to fix security vulnerability
-
-<br>
-
-## Version 0.0.2
-
-This release contains new features, improvements to code samples, security vulnerability fixes, and bug fixes.
-
-#### New Features
-
-* Add Python advanced code sample
-* Add Java quick start code sample
-* Add simple code sample
-
-#### Improvements
-
-* Rename resourceName to cogSvcsSubdomain
-* Move secrets out of code and use environment variables
-* Update code samples to use v0.0.2
-
-#### Fixes
-
-* Fix Immersive Reader button accessibility bugs
-* Fix broken scrolling
-* Upgrade handlebars package to version 4.1.2 to fix security vulnerability
-* Fixes bugs in SDK unit tests
-* Fixes JavaScript Internet Explorer 11 compatibility bugs
-* Updates SDK urls
-
-<br>
-
-## Version 0.0.1
-
-The initial release of the Immersive Reader JavaScript SDK.
-
-* Add Immersive Reader JavaScript SDK
-* Add support to specify the UI language
-* Add a timeout to determine when the launchAsync function should fail with a timeout error
-* Add support to specify the z-index of the Immersive Reader iframe
-* Add support to use a webview tag instead of an iframe, for compatibility with Chrome Apps
-* Add SDK unit tests
-* Add Node.js advanced code sample
-* Add C# advanced code sample
-* Add C# quick start code sample
-* Add package configuration, Yarn and other build files
-* Add git configuration files
-* Add README.md files to code samples and SDK
-* Add MIT License
-* Add Contributor instructions
-* Add static icon button SVG assets
-
-## Next steps
-
-Get started with Immersive Reader:
-
-* Read the [Immersive Reader client library Reference](./reference.md)
-* Explore the [Immersive Reader client library on GitHub](https://github.com/microsoft/immersive-reader-sdk)
applied-ai-services Security How To Update Role Assignment https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/immersive-reader/security-how-to-update-role-assignment.md
- Title: "Security Advisory: Update Role Assignment for Azure Active Directory authentication permissions"
-description: This article will show you how to update the role assignment on existing Immersive Reader resources due to a security bug discovered in November 2021
- Previously updated : 01/06/2022
-# Security Advisory: Update Role Assignment for Azure Active Directory authentication permissions
-
-A security bug has been discovered with Immersive Reader Azure Active Directory (Azure AD) authentication configuration. We are advising that you change the permissions on your Immersive Reader resources as described below.
-
-## Background
-
-A security bug was discovered that relates to Azure AD authentication for Immersive Reader. When initially creating your Immersive Reader resources and configuring them for Azure AD authentication, it is necessary to grant permissions for the Azure AD application identity to access your Immersive Reader resource. This is known as a Role Assignment. The Azure role that was previously used for permissions was the [Cognitive Services User](../../role-based-access-control/built-in-roles.md#cognitive-services-user) role.
-
-During a security audit, it was discovered that this `Cognitive Services User` role has permissions to [List Keys](/rest/api/cognitiveservices/accountmanagement/accounts/list-keys). This is slightly concerning because Immersive Reader integrations involve the use of this Azure AD access token in client web apps and browsers, and if the access token were to be stolen by a bad actor or attacker, there is a concern that this access token could be used to `list keys` of your Immersive Reader resource. If an attacker could `list keys` for your resource, then they would obtain the `Subscription Key` for your resource. The `Subscription Key` for your resource is used as an authentication mechanism and is considered a secret. If an attacker had the resource's `Subscription Key`, it would allow them to make valid and authenticated API calls to your Immersive Reader resource endpoint, which could lead to Denial of Service due to the increased usage and throttling on your endpoint. It would also allow unauthorized use of your Immersive Reader resource, which would lead to increased charges on your bill.
-
-In practice however, this attack or exploit is not likely to occur or may not even be possible. For Immersive Reader scenarios, customers obtain Azure AD access tokens with an audience of `https://cognitiveservices.azure.com`. In order to successfully `list keys` for your resource, the Azure AD access token would need to have an audience of `https://management.azure.com`. Generally speaking, this is not too much of a concern, since the access tokens used for Immersive Reader scenarios would not work to `list keys`, as they do not have the required audience. In order to change the audience on the access token, an attacker would have to hijack the token acquisition code and change the audience before the call is made to Azure AD to acquire the token. Again, this is not likely to be exploited because, as an Immersive Reader authentication best practice, we advise that customers create Azure AD access tokens on the web application backend, not in the client or browser. In those cases, since the token acquisition happens on the backend service, it's not as likely or perhaps even possible that an attacker could compromise that process and change the audience.
-
-The real concern comes when or if any customer were to acquire tokens from Azure AD directly in client code. We strongly advise against this, but since customers are free to implement as they see fit, it is possible that some customers are doing this.
-
-To mitigate the concerns about any possibility of using the Azure AD access token to `list keys`, we have created a new built-in Azure role called `Cognitive Services Immersive Reader User` that does not have the permissions to `list keys`. This new role is not a shared role for the Cognitive Services platform like `Cognitive Services User` role is. This new role is specific to Immersive Reader and will only allow calls to Immersive Reader APIs.
-
-We are advising that ALL customers migrate to using the new `Cognitive Services Immersive Reader User` role instead of the original `Cognitive Services User` role. We have provided a script below that you can run on each of your resources to switch over the role assignment permissions.
-
-This recommendation applies to ALL customers, to ensure that this vulnerability is patched for everyone, no matter what the implementation scenario or likelihood of attack.
-
-If you do NOT do this, nothing will break. The old role will continue to function. The security impact for most customers is minimal. However, it is advised that you migrate to the new role to mitigate the security concerns discussed above. Applying this update is a security advisory recommendation; it is not a mandate.
-
-Any new Immersive Reader resources you create with our script at [How to: Create an Immersive Reader resource](./how-to-create-immersive-reader.md) will automatically use the new role.
--
-## Call to action
-
-If you created and configured an Immersive Reader resource using the instructions at [How to: Create an Immersive Reader resource](./how-to-create-immersive-reader.md) prior to February 2022, it is advised that you perform the operation below to update the role assignment permissions on ALL of your Immersive Reader resources. The operation involves running a script to update the role assignment on a single resource. If you have multiple resources, run this script multiple times, once for each resource.
-
-After you have updated the role using the script below, it is also advised that you rotate the subscription keys on your resource. This is in case your keys have been compromised by the exploit above, and somebody is actually using your resource with subscription key authentication without your consent. Rotating the keys will render the previous keys invalid and deny any further access. For customers using Azure AD authentication, which should be everyone per current Immersive Reader SDK implementation, rotating the keys will have no impact on the Immersive Reader service, since Azure AD access tokens are used for authentication, not the subscription key. Rotating the subscription keys is just another precaution.
-
-You can rotate the subscription keys on the [Azure portal](https://portal.azure.com). Navigate to your resource and then to the `Keys and Endpoint` blade. At the top, there are buttons to `Regenerate Key1` and `Regenerate Key2`.
-
-### Use Azure PowerShell environment to update your Immersive Reader resource Role assignment
-
-1. Start by opening the [Azure Cloud Shell](../../cloud-shell/overview.md). Ensure that Cloud Shell is set to PowerShell in the upper-left hand dropdown or by typing `pwsh`.
-
-1. Copy and paste the following code snippet into the shell.
-
- ```azurepowershell-interactive
- function Update-ImmersiveReaderRoleAssignment(
- [Parameter(Mandatory=$true, Position=0)] [String] $SubscriptionName,
- [Parameter(Mandatory=$true)] [String] $ResourceGroupName,
- [Parameter(Mandatory=$true)] [String] $ResourceName,
- [Parameter(Mandatory=$true)] [String] $AADAppIdentifierUri
- )
- {
- $unused = ''
- if (-not [System.Uri]::TryCreate($AADAppIdentifierUri, [System.UriKind]::Absolute, [ref] $unused)) {
- throw "Error: AADAppIdentifierUri must be a valid URI"
- }
-
- Write-Host "Setting the active subscription to '$SubscriptionName'"
- $subscriptionExists = Get-AzSubscription -SubscriptionName $SubscriptionName
- if (-not $subscriptionExists) {
- throw "Error: Subscription does not exist"
- }
- az account set --subscription $SubscriptionName
-
- # Get the Immersive Reader resource
- $resourceId = az cognitiveservices account show --resource-group $ResourceGroupName --name $ResourceName --query "id" -o tsv
- if (-not $resourceId) {
- throw "Error: Failed to find Immersive Reader resource"
- }
-
- # Get the Azure AD application service principal
- $principalId = az ad sp show --id $AADAppIdentifierUri --query "objectId" -o tsv
- if (-not $principalId) {
- throw "Error: Failed to find Azure AD application service principal"
- }
-
- $newRoleName = "Cognitive Services Immersive Reader User"
- $newRoleExists = az role assignment list --assignee $principalId --scope $resourceId --role $newRoleName --query "[].id" -o tsv
- if ($newRoleExists) {
- Write-Host "New role assignment for '$newRoleName' role already exists on resource"
- }
- else {
- Write-Host "Creating new role assignment for '$newRoleName' role"
- $roleCreateResult = az role assignment create --assignee $principalId --scope $resourceId --role $newRoleName
- if (-not $roleCreateResult) {
- throw "Error: Failed to add new role assignment"
- }
- Write-Host "New role assignment created successfully"
- }
-
- $oldRoleName = "Cognitive Services User"
- $oldRoleExists = az role assignment list --assignee $principalId --scope $resourceId --role $oldRoleName --query "[].id" -o tsv
- if (-not $oldRoleExists) {
- Write-Host "Old role assignment for '$oldRoleName' role does not exist on resource"
- }
- else {
- Write-Host "Deleting old role assignment for '$oldRoleName' role"
- az role assignment delete --assignee $principalId --scope $resourceId --role $oldRoleName
- $oldRoleExists = az role assignment list --assignee $principalId --scope $resourceId --role $oldRoleName --query "[].id" -o tsv
- if ($oldRoleExists) {
- throw "Error: Failed to delete old role assignment"
- }
- Write-Host "Old role assignment deleted successfully"
- }
- }
- ```
-
-1. Run the function `Update-ImmersiveReaderRoleAssignment`, replacing the '<PARAMETER_VALUES>' placeholders below with your own values as appropriate.
-
- ```azurepowershell-interactive
- Update-ImmersiveReaderRoleAssignment -SubscriptionName '<SUBSCRIPTION_NAME>' -ResourceGroupName '<RESOURCE_GROUP_NAME>' -ResourceName '<RESOURCE_NAME>' -AADAppIdentifierUri '<AAD_APP_IDENTIFIER_URI>'
- ```
-
- The full command will look something like the following. Here we have put each parameter on its own line for clarity, so you can see the whole command. Do not copy or use this command as-is. Copy and use the command above with your own values. This example has dummy values for the '<PARAMETER_VALUES>' above. Yours will be different, as you will come up with your own names for these values.
-
- ```Update-ImmersiveReaderRoleAssignment```<br>
- ``` -SubscriptionName 'MyOrganizationSubscriptionName'```<br>
- ``` -ResourceGroupName 'MyResourceGroupName'```<br>
- ``` -ResourceName 'MyOrganizationImmersiveReader'```<br>
- ``` -AADAppIdentifierUri 'https://MyOrganizationImmersiveReaderAADApp'```<br>
-
- | Parameter | Comments |
- | | |
- | SubscriptionName |The name of your Azure subscription. |
- | ResourceGroupName |The name of the Resource Group that contains your Immersive Reader resource. |
- | ResourceName |The name of your Immersive Reader resource. |
- | AADAppIdentifierUri |The URI for your Azure AD app. |
--
-## Next steps
-
-* View the [Node.js quickstart](./quickstarts/client-libraries.md?pivots=programming-language-nodejs) to see what else you can do with the Immersive Reader SDK using Node.js
-* View the [Android tutorial](./how-to-launch-immersive-reader.md) to see what else you can do with the Immersive Reader SDK using Java or Kotlin for Android
-* View the [iOS tutorial](./how-to-launch-immersive-reader.md) to see what else you can do with the Immersive Reader SDK using Swift for iOS
-* View the [Python tutorial](./how-to-launch-immersive-reader.md) to see what else you can do with the Immersive Reader SDK using Python
-* Explore the [Immersive Reader SDK](https://github.com/microsoft/immersive-reader-sdk) and the [Immersive Reader SDK Reference](./reference.md)
applied-ai-services Tutorial Ios Picture Immersive Reader https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/immersive-reader/tutorial-ios-picture-immersive-reader.md
- Title: "Tutorial: Create an iOS app that takes a photo and launches it in the Immersive Reader (Swift)"
-description: In this tutorial, you will build an iOS app from scratch and add the Picture to Immersive Reader functionality.
- Previously updated : 01/14/2020
-#Customer intent: As a developer, I want to integrate two Cognitive Services, the Immersive Reader and the Read API into my iOS application so that I can view any text from a photo in the Immersive Reader.
--
-# Tutorial: Create an iOS app that launches the Immersive Reader with content from a photo (Swift)
-
-The [Immersive Reader](https://www.onenote.com/learningtools) is an inclusively designed tool that implements proven techniques to improve reading comprehension.
-
-The [Computer Vision Cognitive Services Read API](../../cognitive-services/computer-vision/overview-ocr.md) detects text content in an image using Microsoft's latest recognition models and converts the identified text into a machine-readable character stream.
-
-In this tutorial, you will build an iOS app from scratch and integrate the Read API and the Immersive Reader by using the Immersive Reader SDK. A full working sample of this tutorial is available [here](https://github.com/microsoft/immersive-reader-sdk/tree/master/js/samples/ios).
-
-If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/cognitive-services/) before you begin.
-
-## Prerequisites
-
-* [Xcode](https://apps.apple.com/us/app/xcode/id497799835?mt=12)
-* An Immersive Reader resource configured for Azure Active Directory authentication. Follow [these instructions](./how-to-create-immersive-reader.md) to get set up. You will need some of the values created here when configuring the sample project properties. Save the output of your session into a text file for future reference.
-* Usage of this sample requires an Azure subscription to the Computer Vision Cognitive Service. [Create a Computer Vision Cognitive Service resource in the Azure portal](https://portal.azure.com/#create/Microsoft.CognitiveServicesComputerVision).
-
-## Create an Xcode project
-
-Create a new project in Xcode.
-
-![New Project](./media/ios/xcode-create-project.png)
-
-Choose **Single View App**.
-
-![New Single View App](./media/ios/xcode-single-view-app.png)
-
-## Get the SDK CocoaPod
-
-The easiest way to use the Immersive Reader SDK is via CocoaPods. To install via CocoaPods:
-
-1. [Install CocoaPods](http://guides.cocoapods.org/using/getting-started.html) - Follow the getting started guide to install CocoaPods.
-
-2. Create a Podfile by running `pod init` in your Xcode project's root directory.
-
-3. Add the CocoaPod to your Podfile by adding `pod 'immersive-reader-sdk', :git => 'https://github.com/microsoft/immersive-reader-sdk.git'`. Your Podfile should look like the following, with your target's name replacing picture-to-immersive-reader-swift:
-
- ```ruby
- platform :ios, '9.0'
-
- target 'picture-to-immersive-reader-swift' do
- use_frameworks!
- # Pods for picture-to-immersive-reader-swift
- pod 'immersive-reader-sdk', :git => 'https://github.com/microsoft/immersive-reader-sdk.git'
- end
- ```
-
-4. In the terminal, in the directory of your Xcode project, run the command `pod install` to install the Immersive Reader SDK pod.
-
-5. Add `import immersive_reader_sdk` to all files that need to reference the SDK.
-
-6. Be sure to open the project by opening the `.xcworkspace` file, not the `.xcodeproj` file.
-
-## Acquire an Azure AD authentication token
-
-You need some values from the Azure AD authentication configuration prerequisite step above for this part. Refer back to the text file you saved of that session.
-
-````text
-TenantId => Azure subscription TenantId
-ClientId => Azure AD ApplicationId
-ClientSecret => Azure AD Application Service Principal password
-Subdomain => Immersive Reader resource subdomain (resource 'Name' if the resource was created in the Azure portal, or 'CustomSubDomain' option if the resource was created with Azure CLI PowerShell. Check the Azure portal for the subdomain on the Endpoint in the resource Overview page, for example, 'https://[SUBDOMAIN].cognitiveservices.azure.com/')
-````
-
-In the main project folder, which contains the ViewController.swift file, create a Swift class file called Constants.swift. Replace the class with the following code, adding in your values where applicable. Keep this file as a local file that only exists on your machine and be sure not to commit this file into source control, as it contains secrets that should not be made public. It is recommended that you do not keep secrets in your app. Instead, we recommend using a backend service to obtain the token, where the secrets can be kept outside of the app and off of the device. The backend API endpoint should be secured behind some form of authentication (for example, [OAuth](https://oauth.net/2/)) to prevent unauthorized users from obtaining tokens to use against your Immersive Reader service and billing; that work is beyond the scope of this tutorial.
-
-## Set up the app to run without a storyboard
-
-Open AppDelegate.swift and replace the file with the following code.
-
-```swift
-import UIKit
-
-@UIApplicationMain
-class AppDelegate: UIResponder, UIApplicationDelegate {
-
- var window: UIWindow?
-
- var navigationController: UINavigationController?
-
- func application(_ application: UIApplication, didFinishLaunchingWithOptions launchOptions: [UIApplication.LaunchOptionsKey: Any]?) -> Bool {
- // Override point for customization after application launch.
-
- window = UIWindow(frame: UIScreen.main.bounds)
-
- // Allow the app to run without a storyboard
- if let window = window {
- let mainViewController = PictureLaunchViewController()
- navigationController = UINavigationController(rootViewController: mainViewController)
- window.rootViewController = navigationController
- window.makeKeyAndVisible()
- }
- return true
- }
-
- func applicationWillResignActive(_ application: UIApplication) {
- // Sent when the application is about to move from active to inactive state. This can occur for certain types of temporary interruptions (such as an incoming phone call or SMS message) or when the user quits the application and it begins the transition to the background state.
- // Use this method to pause ongoing tasks, disable timers, and invalidate graphics rendering callbacks. Games should use this method to pause the game.
- }
-
- func applicationDidEnterBackground(_ application: UIApplication) {
- // Use this method to release shared resources, save user data, invalidate timers, and store enough application state information to restore your application to its current state in case it is terminated later.
- // If your application supports background execution, this method is called instead of applicationWillTerminate: when the user quits.
- }
-
- func applicationWillEnterForeground(_ application: UIApplication) {
- // Called as part of the transition from the background to the active state; here you can undo many of the changes made on entering the background.
- }
-
- func applicationDidBecomeActive(_ application: UIApplication) {
- // Restart any tasks that were paused (or not yet started) while the application was inactive. If the application was previously in the background, optionally refresh the user interface.
- }
-
- func applicationWillTerminate(_ application: UIApplication) {
- // Called when the application is about to terminate. Save data if appropriate. See also applicationDidEnterBackground:.
- }
-
-}
-```
-
-## Add functionality for taking and uploading photos
-
-Rename ViewController.swift to PictureLaunchViewController.swift and replace the file with the following code.
-
-```swift
-import UIKit
-import immersive_reader_sdk
-
-class PictureLaunchViewController: UIViewController, UINavigationControllerDelegate, UIImagePickerControllerDelegate {
-
- private var photoButton: UIButton!
- private var cameraButton: UIButton!
- private var titleText: UILabel!
- private var bodyText: UILabel!
- private var sampleContent: Content!
- private var sampleChunk: Chunk!
- private var sampleOptions: Options!
- private var imagePicker: UIImagePickerController!
- private var spinner: UIActivityIndicatorView!
- private var activityIndicatorBackground: UIView!
- private var textURL = "vision/v2.0/read/core/asyncBatchAnalyze";
-
- override func viewDidLoad() {
- super.viewDidLoad()
-
- view.backgroundColor = .white
-
- titleText = UILabel()
- titleText.text = "Picture to Immersive Reader with OCR"
- titleText.font = UIFont.boldSystemFont(ofSize: 32)
- titleText.textAlignment = .center
- titleText.lineBreakMode = .byWordWrapping
- titleText.numberOfLines = 0
- view.addSubview(titleText)
-
- bodyText = UILabel()
- bodyText.text = "Capture or upload a photo of handprinted text on a piece of paper, handwriting, typed text, text on a computer screen, writing on a white board and many more, and watch it be presented to you in the Immersive Reader!"
- bodyText.font = UIFont.systemFont(ofSize: 18)
- bodyText.lineBreakMode = .byWordWrapping
- bodyText.numberOfLines = 0
- let screenSize = self.view.frame.height
- if screenSize <= 667 {
- // Font size for smaller iPhones.
- bodyText.font = bodyText.font.withSize(16)
-
- } else if screenSize <= 812.0 {
- // Font size for medium iPhones.
- bodyText.font = bodyText.font.withSize(18)
-
- } else if screenSize <= 896 {
- // Font size for larger iPhones.
- bodyText.font = bodyText.font.withSize(20)
-
- } else {
- // Font size for iPads.
- bodyText.font = bodyText.font.withSize(26)
- }
- view.addSubview(bodyText)
-
- photoButton = UIButton()
- photoButton.backgroundColor = .darkGray
- photoButton.contentEdgeInsets = UIEdgeInsets(top: 10, left: 5, bottom: 10, right: 5)
- photoButton.layer.cornerRadius = 5
- photoButton.setTitleColor(.white, for: .normal)
- photoButton.setTitle("Choose Photo from Library", for: .normal)
- photoButton.titleLabel?.font = UIFont.systemFont(ofSize: 18, weight: .bold)
- photoButton.addTarget(self, action: #selector(selectPhotoButton(sender:)), for: .touchUpInside)
- view.addSubview(photoButton)
-
- cameraButton = UIButton()
- cameraButton.backgroundColor = .darkGray
- cameraButton.contentEdgeInsets = UIEdgeInsets(top: 10, left: 5, bottom: 10, right: 5)
- cameraButton.layer.cornerRadius = 5
- cameraButton.setTitleColor(.white, for: .normal)
- cameraButton.setTitle("Take Photo", for: .normal)
- cameraButton.titleLabel?.font = UIFont.systemFont(ofSize: 18, weight: .bold)
- cameraButton.addTarget(self, action: #selector(takePhotoButton(sender:)), for: .touchUpInside)
- view.addSubview(cameraButton)
-
- activityIndicatorBackground = UIView()
- activityIndicatorBackground.backgroundColor = UIColor.black
- activityIndicatorBackground.alpha = 0
- view.addSubview(activityIndicatorBackground)
- view.bringSubviewToFront(_: activityIndicatorBackground)
-
- spinner = UIActivityIndicatorView(style: .whiteLarge)
- view.addSubview(spinner)
-
- let layoutGuide = view.safeAreaLayoutGuide
-
- titleText.translatesAutoresizingMaskIntoConstraints = false
- titleText.topAnchor.constraint(equalTo: layoutGuide.topAnchor, constant: 25).isActive = true
- titleText.leadingAnchor.constraint(equalTo: layoutGuide.leadingAnchor, constant: 20).isActive = true
- titleText.trailingAnchor.constraint(equalTo: layoutGuide.trailingAnchor, constant: -20).isActive = true
-
- bodyText.translatesAutoresizingMaskIntoConstraints = false
- bodyText.topAnchor.constraint(equalTo: titleText.bottomAnchor, constant: 35).isActive = true
- bodyText.leadingAnchor.constraint(equalTo: layoutGuide.leadingAnchor, constant: 20).isActive = true
- bodyText.trailingAnchor.constraint(equalTo: layoutGuide.trailingAnchor, constant: -20).isActive = true
-
- cameraButton.translatesAutoresizingMaskIntoConstraints = false
- if screenSize > 896 {
- // Constraints for iPads.
- cameraButton.heightAnchor.constraint(equalToConstant: 150).isActive = true
- cameraButton.leadingAnchor.constraint(equalTo: layoutGuide.leadingAnchor, constant: 60).isActive = true
- cameraButton.trailingAnchor.constraint(equalTo: layoutGuide.trailingAnchor, constant: -60).isActive = true
- cameraButton.topAnchor.constraint(equalTo: bodyText.bottomAnchor, constant: 150).isActive = true
- } else {
- // Constraints for iPhones.
- cameraButton.heightAnchor.constraint(equalToConstant: 100).isActive = true
- cameraButton.leadingAnchor.constraint(equalTo: layoutGuide.leadingAnchor, constant: 30).isActive = true
- cameraButton.trailingAnchor.constraint(equalTo: layoutGuide.trailingAnchor, constant: -30).isActive = true
- cameraButton.topAnchor.constraint(equalTo: bodyText.bottomAnchor, constant: 100).isActive = true
- }
- cameraButton.bottomAnchor.constraint(equalTo: photoButton.topAnchor, constant: -40).isActive = true
-
- photoButton.translatesAutoresizingMaskIntoConstraints = false
- if screenSize > 896 {
- // Constraints for iPads.
- photoButton.heightAnchor.constraint(equalToConstant: 150).isActive = true
- photoButton.leadingAnchor.constraint(equalTo: layoutGuide.leadingAnchor, constant: 60).isActive = true
- photoButton.trailingAnchor.constraint(equalTo: layoutGuide.trailingAnchor, constant: -60).isActive = true
- } else {
- // Constraints for iPhones.
- photoButton.heightAnchor.constraint(equalToConstant: 100).isActive = true
- photoButton.leadingAnchor.constraint(equalTo: layoutGuide.leadingAnchor, constant: 30).isActive = true
- photoButton.trailingAnchor.constraint(equalTo: layoutGuide.trailingAnchor, constant: -30).isActive = true
- }
-
- spinner.translatesAutoresizingMaskIntoConstraints = false
- spinner.centerXAnchor.constraint(equalTo: view.centerXAnchor).isActive = true
- spinner.centerYAnchor.constraint(equalTo: view.centerYAnchor).isActive = true
-
- activityIndicatorBackground.translatesAutoresizingMaskIntoConstraints = false
- activityIndicatorBackground.topAnchor.constraint(equalTo: layoutGuide.topAnchor).isActive = true
- activityIndicatorBackground.bottomAnchor.constraint(equalTo: layoutGuide.bottomAnchor).isActive = true
- activityIndicatorBackground.leadingAnchor.constraint(equalTo: layoutGuide.leadingAnchor).isActive = true
- activityIndicatorBackground.trailingAnchor.constraint(equalTo: layoutGuide.trailingAnchor).isActive = true
-
- // Create content and options.
- sampleChunk = Chunk(content: bodyText.text!, lang: nil, mimeType: nil)
- sampleContent = Content( Title: titleText.text!, chunks: [sampleChunk])
- sampleOptions = Options(uiLang: nil, timeout: nil, uiZIndex: nil)
- }
-
- @IBAction func selectPhotoButton(sender: AnyObject) {
- // Launch the photo picker.
- imagePicker = UIImagePickerController()
- imagePicker.delegate = self
- self.imagePicker.sourceType = .photoLibrary
- self.imagePicker.allowsEditing = true
- self.present(self.imagePicker, animated: true, completion: nil)
- self.photoButton.isEnabled = true
- }
-
- @IBAction func takePhotoButton(sender: AnyObject) {
- if !UIImagePickerController.isSourceTypeAvailable(.camera) {
- // If there is no camera on the device, disable the button
- self.cameraButton.backgroundColor = .gray
- self.cameraButton.isEnabled = false
-
- } else {
- // Launch the camera.
- imagePicker = UIImagePickerController()
- imagePicker.delegate = self
- self.imagePicker.sourceType = .camera
- self.present(self.imagePicker, animated: true, completion: nil)
- self.cameraButton.isEnabled = true
- }
- }
-
- func imagePickerController(_ picker: UIImagePickerController, didFinishPickingMediaWithInfo info: [UIImagePickerController.InfoKey : Any]) {
- imagePicker.dismiss(animated: true, completion: nil)
- photoButton.isEnabled = false
- cameraButton.isEnabled = false
- self.spinner.startAnimating()
- activityIndicatorBackground.alpha = 0.6
-
- // Retrieve the image.
- let image = (info[.originalImage] as? UIImage)!
-
- // Retrieve the byte array from image.
- let imageByteArray = image.jpegData(compressionQuality: 1.0)
-
- // Call the getTextFromImage function passing in the image the user takes or chooses.
- getTextFromImage(subscriptionKey: Constants.computerVisionSubscriptionKey, getTextUrl: Constants.computerVisionEndPoint + textURL, pngImage: imageByteArray!, onSuccess: { cognitiveText in
- print("cognitive text is: \(cognitiveText)")
- DispatchQueue.main.async {
- self.photoButton.isEnabled = true
- self.cameraButton.isEnabled = true
- }
-
- // Create content and options with the text from the image.
- let sampleImageChunk = Chunk(content: cognitiveText, lang: nil, mimeType: nil)
- let sampleImageContent = Content( Title: "Text from image", chunks: [sampleImageChunk])
- let sampleImageOptions = Options(uiLang: nil, timeout: nil, uiZIndex: nil)
-
- // Callback to get token for Immersive Reader.
- self.getToken(onSuccess: {cognitiveToken in
-
- DispatchQueue.main.async {
-
- launchImmersiveReader(navController: self.navigationController!, token: cognitiveToken, subdomain: Constants.subdomain, content: sampleImageContent, options: sampleImageOptions, onSuccess: {
- self.spinner.stopAnimating()
- self.activityIndicatorBackground.alpha = 0
- self.photoButton.isEnabled = true
- self.cameraButton.isEnabled = true
-
- }, onFailure: { error in
- print("An error occurred launching the Immersive Reader: \(error)")
- self.spinner.stopAnimating()
- self.activityIndicatorBackground.alpha = 0
- self.photoButton.isEnabled = true
- self.cameraButton.isEnabled = true
-
- })
- }
-
- }, onFailure: { error in
- DispatchQueue.main.async {
- self.photoButton.isEnabled = true
- self.cameraButton.isEnabled = true
-
- }
- print("An error occurred retrieving the token: \(error)")
- })
-
- }, onFailure: { error in
- DispatchQueue.main.async {
- self.photoButton.isEnabled = true
- self.cameraButton.isEnabled = true
- }
-
- })
- }
-
- /// Retrieves the token for the Immersive Reader using Azure Active Directory authentication
- ///
- /// - Parameters:
- /// - onSuccess: A closure that gets called when the token is successfully received using Azure Active Directory authentication.
- /// - theToken: The token for the Immersive Reader received using Azure Active Directory authentication.
- /// - onFailure: A closure that gets called when the token fails to be obtained from Azure Active Directory authentication.
- /// - theError: The error that occurred when the token fails to be obtained from Azure Active Directory authentication.
- func getToken(onSuccess: @escaping (_ theToken: String) -> Void, onFailure: @escaping ( _ theError: String) -> Void) {
-
- let tokenForm = "grant_type=client_credentials&resource=https://cognitiveservices.azure.com/&client_id=" + Constants.clientId + "&client_secret=" + Constants.clientSecret
- let tokenUrl = "https://login.windows.net/" + Constants.tenantId + "/oauth2/token"
-
- var responseTokenString: String = "0"
-
- let url = URL(string: tokenUrl)!
- var request = URLRequest(url: url)
- request.httpBody = tokenForm.data(using: .utf8)
- request.httpMethod = "POST"
-
- let task = URLSession.shared.dataTask(with: request) { data, response, error in
- guard let data = data,
- let response = response as? HTTPURLResponse,
- // Check for networking errors.
- error == nil else {
- print("error", error ?? "Unknown error")
- onFailure("Error")
- return
- }
-
- // Check for http errors.
- guard (200 ... 299) ~= response.statusCode else {
- print("statusCode should be 2xx, but is \(response.statusCode)")
- print("response = \(response)")
- onFailure(String(response.statusCode))
- return
- }
-
- let responseString = String(data: data, encoding: .utf8)
- print("responseString = \(String(describing: responseString!))")
-
- let jsonResponse = try? JSONSerialization.jsonObject(with: data, options: [])
- guard let jsonDictionary = jsonResponse as? [String: Any] else {
- onFailure("Error parsing JSON response.")
- return
- }
- guard let responseToken = jsonDictionary["access_token"] as? String else {
- onFailure("Error retrieving token from JSON response.")
- return
- }
- responseTokenString = responseToken
- onSuccess(responseTokenString)
- }
-
- task.resume()
- }
-
- /// Extracts the text from an image input and returns it through the onSuccess closure.
- ///
- /// - Parameters:
- /// - subscriptionKey: The Computer Vision subscription key.
- /// - getTextUrl: The Computer Vision Read API URL that the image is submitted to.
- /// - pngImage: The image data to analyze.
- /// - onSuccess: A closure that gets called with the text extracted from the image.
- /// - onFailure: A closure that gets called with an error string when text extraction fails.
- func getTextFromImage(subscriptionKey: String, getTextUrl: String, pngImage: Data, onSuccess: @escaping (_ theToken: String) -> Void, onFailure: @escaping ( _ theError: String) -> Void) {
-
- let url = URL(string: getTextUrl)!
- var request = URLRequest(url: url)
- request.setValue(subscriptionKey, forHTTPHeaderField: "Ocp-Apim-Subscription-Key")
- request.setValue("application/octet-stream", forHTTPHeaderField: "Content-Type")
-
- // Two REST API calls are required to extract text. The first call is to submit the image for processing, and the next call is to retrieve the text found in the image.
-
- // Set the body to the image in byte array format.
- request.httpBody = pngImage
-
- request.httpMethod = "POST"
-
- let task = URLSession.shared.dataTask(with: request) { data, response, error in
- guard let data = data,
- let response = response as? HTTPURLResponse,
- // Check for networking errors.
- error == nil else {
- print("error", error ?? "Unknown error")
- onFailure("Error")
- return
- }
-
- // Check for http errors.
- guard (200 ... 299) ~= response.statusCode else {
- print("statusCode should be 2xx, but is \(response.statusCode)")
- print("response = \(response)")
- onFailure(String(response.statusCode))
- return
- }
-
- let responseString = String(data: data, encoding: .utf8)
- print("responseString = \(String(describing: responseString!))")
-
- // Send the second call to the API. The first API call returns operationLocation which stores the URI for the second REST API call.
- let operationLocation = response.allHeaderFields["Operation-Location"] as? String
-
- if (operationLocation == nil) {
- print("Error retrieving operation location")
- return
- }
-
- // Wait 10 seconds for text recognition to be available as suggested by the Text API documentation.
- print("Text submitted. Waiting 10 seconds to retrieve the recognized text.")
- sleep(10)
-
- // HTTP GET request with the operationLocation url to retrieve the text.
- let getTextUrl = URL(string: operationLocation!)!
- var getTextRequest = URLRequest(url: getTextUrl)
- getTextRequest.setValue(subscriptionKey, forHTTPHeaderField: "Ocp-Apim-Subscription-Key")
- getTextRequest.httpMethod = "GET"
-
- // Send the GET request to retrieve the text.
- let taskGetText = URLSession.shared.dataTask(with: getTextRequest) { data, response, error in
- guard let data = data,
- let response = response as? HTTPURLResponse,
- // Check for networking errors.
- error == nil else {
- print("error", error ?? "Unknown error")
- onFailure("Error")
- return
- }
-
- // Check for http errors.
- guard (200 ... 299) ~= response.statusCode else {
- print("statusCode should be 2xx, but is \(response.statusCode)")
- print("response = \(response)")
- onFailure(String(response.statusCode))
- return
- }
-
- // Decode the JSON data into an object.
- let customDecoding = try! JSONDecoder().decode(TextApiResponse.self, from: data)
-
- // Loop through the lines to get all lines of text and concatenate them together.
- var textFromImage = ""
- for textLine in customDecoding.recognitionResults[0].lines {
- textFromImage = textFromImage + textLine.text + " "
- }
-
- onSuccess(textFromImage)
- }
- taskGetText.resume()
-
- }
-
- task.resume()
- }
-
- // Structs used for decoding the Text API JSON response.
- struct TextApiResponse: Codable {
- let status: String
- let recognitionResults: [RecognitionResult]
- }
-
- struct RecognitionResult: Codable {
- let page: Int
- let clockwiseOrientation: Double
- let width, height: Int
- let unit: String
- let lines: [Line]
- }
-
- struct Line: Codable {
- let boundingBox: [Int]
- let text: String
- let words: [Word]
- }
-
- struct Word: Codable {
- let boundingBox: [Int]
- let text: String
- let confidence: String?
- }
-
-}
-```
-
-## Build and run the app
-
-Set the archive scheme in Xcode by selecting a simulator or device target.
-![Archive scheme](./media/ios/xcode-archive-scheme.png)<br/>
-![Select Target](./media/ios/xcode-select-target.png)
-
-In Xcode, press Cmd + R or select the play button to run the project. The app should launch on the specified simulator or device.
-
-In your app, you should see:
-
-![Sample app](./media/ios/picture-to-immersive-reader-ipad-app.png)
-
-Inside the app, take or upload a photo of text by pressing the 'Take Photo' button or the 'Choose Photo from Library' button. The Immersive Reader then launches and displays the text from the photo.
-
-![Immersive Reader](./media/ios/picture-to-immersive-reader-ipad.png)
-
-## Next steps
-
-* Explore the [Immersive Reader SDK Reference](./reference.md)
applied-ai-services Cost Management https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/metrics-advisor/cost-management.md
- Title: Cost management with Azure Metrics Advisor-
-description: Learn about cost management and pricing for Azure Metrics Advisor
----- Previously updated : 09/06/2022---
-# Azure Metrics Advisor cost management
-
-Azure Metrics Advisor monitors the performance of your organization's growth engines, including sales revenue and manufacturing operations. Quickly identify and fix problems through a powerful combination of monitoring in near-real time, adapting models to your scenario, and offering granular analysis with diagnostics and alerting. You will only be charged for the time series that are analyzed by the service. There's no up-front commitment or minimum fee.
-
-> [!NOTE]
-> This article discusses how pricing is calculated to assist you with planning and cost management when using Azure Metric Advisor. The prices in this article do not reflect actual prices and are for example purposes only. For the latest pricing information please refer to the [official pricing page for Metrics Advisor](https://azure.microsoft.com/pricing/details/metrics-advisor/).
-
-## Key points about cost management and pricing
-- You're charged for the number of **distinct [time series](glossary.md#time-series)** analyzed during a month. A time series is counted even if only a single data point is analyzed for it.
-- The charge is **irrespective** of a time series' [granularity](glossary.md#granularity). An hourly time series and a daily time series are charged at the same price.
-- The number of distinct time series is **highly related** to the data schema (choice of timestamp, dimension, and measure) chosen during onboarding. Don't choose **timestamps or any IDs** as a [dimension](glossary.md#dimension), to avoid dimension explosion and unexpected cost.
-- You're charged based on the tiered pricing structure listed below. The first day of the next month starts a new statistics window.
-- The more time series you onboard to the service for analysis, the lower the price you pay for each time series.
-**Again, keep in mind that the prices below are for example purposes only**. For the latest pricing information, consult the [official pricing page for Metrics Advisor](https://azure.microsoft.com/pricing/details/metrics-advisor/).
-
-| Analyzed time series / month | $ per time series |
-|--|--|
-| Free: first 25 time series | $- |
-| 26 time series - 1k time series | $0.75 |
-| 1k time series - 5k time series | $0.50 |
-| 5k time series - 20k time series | $0.25|
-| 20k time series - 50k time series| $0.10|
-| >50k time series | $0.05 |
--
-To help you get a basic understanding of Metrics Advisor and start to explore the service, the first 25 analyzed time series each month are included for free.
-
-## Pricing examples
-
-### Example 1
-<!-- introduce statistic window-->
-
-In month 1, a customer onboards a data feed with 25 time series during the first week. In the second week, they onboard another data feed with 30 time series. In the third week, they delete the 30 time series that were onboarded during the second week. That still makes **55** distinct time series analyzed in month 1. The customer is charged for **30** of them (excluding the 25 time series in the free tier), which fall under tier 1. The monthly cost is: 30 * $0.75 = **$22.5**.
-
-| Volume tier | $ per time series | $ per month |
-| | -- | -- |
-| First 30 (55-25) time series | $0.75 | $22.5 |
-| **Total = 30 time series** | | **$22.5 per month** |
-
-In month 2, the customer doesn't onboard or delete any time series, so only the remaining 25 time series are analyzed. They're covered by the free tier, so no cost is incurred in month 2.
-
-### Example 2
-<!-- introduce how time series is calculated-->
-
-A business planner needs to track the company's revenue as an indicator of business health. The data usually follows a week-by-week pattern, so the customer onboards the metric into Metrics Advisor to analyze anomalies. Metrics Advisor learns the pattern from historical data and performs detection on follow-up data points. A sudden drop might be detected as an anomaly, which may indicate an underlying issue, like a service outage or a promotional offer not working as expected. An unexpected spike might also be detected as an anomaly, which may indicate a highly successful marketing campaign or a significant customer win.
-
-The metric is analyzed across **100 product categories** and **10 regions**, so the number of distinct time series analyzed is calculated as:
-
-```
-1(Revenue) * 100 product categories * 10 regions = 1,000 analyzed time series
-```
-
-Based on the tiered pricing model described above, 1,000 analyzed time series per month is charged at (1,000 - 25) * $0.75 = **$731.25**.
-
-| Volume tier | $ per time series | $ per month |
-| | -- | -- |
-| First 975 (1,000-25) time series | $0.75 | $731.25 |
-| **Total = 975 time series** | | **$731.25 per month** |
-
-### Example 3
-<!-- introduce cost for multiple metrics and -->
-
-After validating detection results on the revenue metric, the customer would like to onboard two more metrics to be analyzed: one for cost, and one for the DAU (daily active users) of their website. They would also like to add a new dimension with **20 channels**. Within the month, 10 of the 100 product categories are discontinued after the first week and aren't analyzed further. In addition, 10 new product categories are introduced in the third week of the month, and the corresponding time series are analyzed for half of the month. The number of distinct time series analyzed is calculated as:
-
-```
-3(Revenue, cost and DAU) * 110 product categories * 10 regions * 20 channels = 66,000 analyzed time series
-```
-
-Based on the tiered pricing model described above, 66,000 analyzed time series per month fall into tier 5 and are charged **$10,281.25**.
-
-| Volume tier | $ per time series | $ per month |
-| | -- | -- |
-| First 975 (1,000-25) time series | $0.75 | $731.25 |
-| Next 4,000 time series | $0.50 | $2,000 |
-| Next 15,000 time series | $0.25 | $3,750 |
-| Next 30,000 time series | $0.10 | $3,000 |
-| Next 16,000 time series | $0.05 | $800 |
-| **Total = 65,975 time series** | | **$10,281.25 per month** |
-
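-To make the tier arithmetic concrete, here's a minimal Swift sketch (illustrative only, using the example prices above rather than actual prices; the function name and structure are assumptions) that reproduces the totals from the examples:
-
-```swift
-// Compute the monthly charge for a number of analyzed time series,
-// using the example tier prices listed above (not actual prices).
-func monthlyCost(analyzedTimeSeries: Int) -> Double {
-    // (upper bound of the tier, example price per time series)
-    let tiers: [(limit: Int, price: Double)] = [
-        (25, 0.0),       // free tier
-        (1_000, 0.75),
-        (5_000, 0.50),
-        (20_000, 0.25),
-        (50_000, 0.10),
-        (Int.max, 0.05)
-    ]
-    var remaining = analyzedTimeSeries
-    var previousLimit = 0
-    var total = 0.0
-    for tier in tiers {
-        let countInTier = min(remaining, tier.limit - previousLimit)
-        if countInTier <= 0 { break }
-        total += Double(countInTier) * tier.price
-        remaining -= countInTier
-        previousLimit = tier.limit
-    }
-    return total
-}
-
-print(monthlyCost(analyzedTimeSeries: 55))      // Example 1: 22.5
-print(monthlyCost(analyzedTimeSeries: 1_000))   // Example 2: 731.25
-print(monthlyCost(analyzedTimeSeries: 66_000))  // Example 3: 10281.25
-```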
-## Next steps
-- [Manage your data feeds](how-tos/manage-data-feeds.md)
-- [Configurations for different data sources](data-feeds-from-different-sources.md)
-- [Configure metrics and fine tune detection configuration](how-tos/configure-metrics.md)
applied-ai-services Data Feeds From Different Sources https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/metrics-advisor/data-feeds-from-different-sources.md
- Title: Connect different data sources to Metrics Advisor-
-description: Add different data feeds to Metrics Advisor
------ Previously updated : 05/26/2021----
-# How-to: Connect different data sources
-
-Use this article to find the settings and requirements for connecting different types of data sources to Azure Metrics Advisor. To learn about using your data with Metrics Advisor, see [Onboard your data](how-tos/onboard-your-data.md).
-
-## Supported authentication types
-
-| Authentication types | Description |
-| |-|
-|**Basic** | You need to provide basic parameters for accessing data sources. For example, you can use a connection string or a password. Data feed admins can view these credentials. |
-| **Azure managed identity** | [Managed identities](../../active-directory/managed-identities-azure-resources/overview.md) for Azure resources is a feature of Azure Active Directory (Azure AD). It provides Azure services with an automatically managed identity in Azure AD. You can use the identity to authenticate to any service that supports Azure AD authentication.|
-| **Azure SQL connection string**| Store your Azure SQL connection string as a credential entity in Metrics Advisor, and use it directly each time you import metrics data. Only admins of the credential entity can view these credentials, but authorized viewers can create data feeds without needing to know details for the credentials. |
-| **Azure Data Lake Storage Gen2 shared key**| Store your data lake account key as a credential entity in Metrics Advisor, and use it directly each time you import metrics data. Only admins of the credential entity can view these credentials, but authorized viewers can create data feeds without needing to know details for the credentials.|
-| **Service principal**| Store your [service principal](../../active-directory/develop/app-objects-and-service-principals.md) as a credential entity in Metrics Advisor, and use it directly each time you import metrics data. Only admins of the credential entity can view the credentials, but authorized viewers can create data feeds without needing to know details for the credentials.|
-| **Service principal from key vault**|Store your [service principal in a key vault](/azure-stack/user/azure-stack-key-vault-store-credentials) as a credential entity in Metrics Advisor, and use it directly each time you import metrics data. Only admins of a credential entity can view the credentials, but viewers can create data feeds without needing to know details for the credentials. |
--
-## Data sources and corresponding authentication types
-
-| Data sources | Authentication types |
-|-| |
-|[Application Insights](#appinsights) | Basic |
-|[Azure Blob Storage (JSON)](#blob) | Basic<br>Managed identity |
-|[Azure Cosmos DB (SQL)](#cosmosdb) | Basic |
-|[Azure Data Explorer (Kusto)](#kusto) | Basic<br>Managed identity<br>Service principal<br>Service principal from key vault |
-|[Azure Data Lake Storage Gen2](#adl) | Basic<br>Data Lake Storage Gen2 shared key<br>Service principal<br>Service principal from key vault |
-|[Azure Event Hubs](#eventhubs) | Basic |
-|[Azure Monitor Logs](#log) | Basic<br>Service principal<br>Service principal from key vault |
-|[Azure SQL Database / SQL Server](#sql) | Basic<br>Managed identity<br>Service principal<br>Service principal from key vault<br>Azure SQL connection string |
-|[Azure Table Storage](#table) | Basic |
-|[InfluxDB (InfluxQL)](#influxdb) | Basic |
-|[MongoDB](#mongodb) | Basic |
-|[MySQL](#mysql) | Basic |
-|[PostgreSQL](#pgsql) | Basic|
-
-The following sections specify the parameters required for all authentication types within different data source scenarios.
-
-## <span id="appinsights">Application Insights</span>
-
-* **Application ID**: This is used to identify this application when you're using the Application Insights API. To get the application ID, follow these steps:
-
- 1. From your Application Insights resource, select **API Access**.
-
- ![Screenshot that shows how to get the application ID from your Application Insights resource.](media/portal-app-insights-app-id.png)
-
- 2. Copy the application ID generated into the **Application ID** field in Metrics Advisor.
-
-* **API key**: API keys are used by applications outside the browser to access this resource. To get the API key, follow these steps:
-
- 1. From the Application Insights resource, select **API Access**.
-
- 2. Select **Create API key**.
-
- 3. Enter a short description, select the **Read telemetry** option, and select **Generate key**.
-
- ![Screenshot that shows how to get the API key in the Azure portal.](media/portal-app-insights-app-id-api-key.png)
-
- > [!IMPORTANT]
- > Copy and save this API key. It will never be shown to you again. If you lose this key, you have to create a new one.
-
- 4. Copy the API key to the **API key** field in Metrics Advisor.
-
-* **Query**: Application Insights logs are built on Azure Data Explorer, and Azure Monitor log queries use a version of the same Kusto query language. The [Kusto query language documentation](/azure/data-explorer/kusto/query) should be your primary resource for writing a query against Application Insights.
-
- Sample query:
-
- ``` Kusto
- [TableName] | where [TimestampColumn] >= datetime(@IntervalStart) and [TimestampColumn] < datetime(@IntervalEnd);
- ```
- You can also refer to the [Tutorial: Write a valid query](tutorials/write-a-valid-query.md) for more specific examples.
-
-## <span id="blob">Azure Blob Storage (JSON)</span>
-
-* **Connection string**: There are two authentication types for Azure Blob Storage (JSON):
-
- * **Basic**: See [Configure Azure Storage connection strings](../../storage/common/storage-configure-connection-string.md#configure-a-connection-string-for-an-azure-storage-account) for information on retrieving this string. Also, you can visit the Azure portal for your Azure Blob Storage resource, and find the connection string directly in **Settings** > **Access keys**.
-
- * **Managed identity**: Managed identities for Azure resources can authorize access to blob and queue data. The feature uses Azure AD credentials from applications running in Azure virtual machines (VMs), function apps, virtual machine scale sets, and other services.
-
- You can create a managed identity in the Azure portal for your Azure Blob Storage resource. In **Access Control (IAM)**, select **Role assignments**, and then select **Add**. A suggested role type is: **Storage Blob Data Reader**. For more details, refer to [Use managed identity to access Azure Storage](../../active-directory/managed-identities-azure-resources/tutorial-vm-windows-access-storage.md#grant-access-1).
-
- ![Screenshot that shows a managed identity blob.](media/managed-identity-blob.png)
-
-
-* **Container**: Metrics Advisor expects time series data to be stored as blob files (one blob per timestamp), under a single container. This is the container name field.
-
-* **Blob template**: Metrics Advisor uses a path template to find the JSON files in Blob Storage. For example: `%Y/%m/FileName_%Y-%m-%d-%h-%M.json`, where `%Y/%m` is the path. If you have `%d` in your path, you can add it after `%m`. If your JSON file is named by date, you can also use `%Y-%m-%d-%h-%M.json`. A sketch of how such a template expands for a given timestamp appears after this list.
-
- The following parameters are supported:
-
- * `%Y` is the year, formatted as `yyyy`.
- * `%m` is the month, formatted as `MM`.
- * `%d` is the day, formatted as `dd`.
- * `%h` is the hour, formatted as `HH`.
- * `%M` is the minute, formatted as `mm`.
-
- For example, in the following dataset, the blob template should be `%Y/%m/%d/00/JsonFormatV2.json`.
-
- ![Screenshot that shows the blob template.](media/blob-template.png)
-
-
-* **JSON format version**: Defines the data schema in the JSON files. Metrics Advisor supports the following versions. You can choose one to fill in the field:
-
- * **v1**
-
- Only the metrics *Name* and *Value* are accepted. For example:
-
- ```json
- {"count":11, "revenue":1.23}
- ```
-
- * **v2**
-
- The metrics *Dimensions* and *timestamp* are also accepted. For example:
-
- ```json
- [
- {"date": "2018-01-01T00:00:00Z", "market":"en-us", "count":11, "revenue":1.23},
- {"date": "2018-01-01T00:00:00Z", "market":"zh-cn", "count":22, "revenue":4.56}
- ]
- ```
-
- Only one timestamp is allowed per JSON file.
-
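-The template parameters behave like date format placeholders. As a minimal, illustrative sketch (not part of Metrics Advisor itself), the following Swift snippet shows how such a blob template might expand for a given UTC timestamp, using the parameter meanings listed above:
-
-```swift
-import Foundation
-
-// Hypothetical helper: expand a blob template such as "%Y/%m/FileName_%Y-%m-%d-%h-%M.json"
-// for a given UTC timestamp.
-func expandBlobTemplate(_ template: String, for date: Date) -> String {
-    let formatter = DateFormatter()
-    formatter.locale = Locale(identifier: "en_US_POSIX")
-    formatter.timeZone = TimeZone(identifier: "UTC")
-    // Map each template token to its date format pattern.
-    let tokens = ["%Y": "yyyy", "%m": "MM", "%d": "dd", "%h": "HH", "%M": "mm"]
-    var result = template
-    for (token, pattern) in tokens {
-        formatter.dateFormat = pattern
-        result = result.replacingOccurrences(of: token, with: formatter.string(from: date))
-    }
-    return result
-}
-
-let iso = ISO8601DateFormatter()
-let timestamp = iso.date(from: "2018-01-01T03:00:00Z")!
-print(expandBlobTemplate("%Y/%m/%d/%h/JsonFormatV2.json", for: timestamp))
-// Prints: 2018/01/01/03/JsonFormatV2.json
-```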
-## <span id="cosmosdb">Azure Cosmos DB (SQL)</span>
-
-* **Connection string**: The connection string to access your Azure Cosmos DB instance. This can be found in the Azure Cosmos DB resource in the Azure portal, in **Keys**. For more information, see [Secure access to data in Azure Cosmos DB](../../cosmos-db/secure-access-to-data.md).
-* **Database**: The database to query against. In the Azure portal, under **Containers**, go to **Browse** to find the database.
-* **Collection ID**: The collection ID to query against. In the Azure portal, under **Containers**, go to **Browse** to find the collection ID.
-* **SQL query**: A SQL query to get and formulate data into multi-dimensional time series data. You can use the `@IntervalStart` and `@IntervalEnd` variables in your query. They should be formatted as follows: `yyyy-MM-ddTHH:mm:ssZ`.
-
- Sample query:
-
- ```SQL
- SELECT [TimestampColumn], [DimensionColumn], [MetricColumn] FROM [TableName] WHERE [TimestampColumn] >= @IntervalStart and [TimestampColumn] < @IntervalEnd
- ```
-
- For more information, refer to the [tutorial on writing a valid query](tutorials/write-a-valid-query.md).
-
-## <span id="kusto">Azure Data Explorer (Kusto)</span>
-
-* **Connection string**: There are four authentication types for Azure Data Explorer (Kusto): basic, service principal, service principal from key vault, and managed identity. The data source in the connection string should be in the URI format (starts with "https"). You can find the URI in the Azure portal.
-
- * **Basic**: Metrics Advisor supports accessing Azure Data Explorer (Kusto) by using Azure AD application authentication. You need to create and register an Azure AD application, and then authorize it to access an Azure Data Explorer database. For more information, see [Create an Azure AD app registration in Azure Data Explorer](/azure/data-explorer/provision-azure-ad-app). Here's an example of a connection string:
-
- ```
- Data Source=<URI Server>;Initial Catalog=<Database>;AAD Federated Security=True;Application Client ID=<Application Client ID>;Application Key=<Application Key>;Authority ID=<Tenant ID>
- ```
-
- * **Service principal**: A service principal is a concrete instance created from the application object. The service principal inherits certain properties from that application object. The service principal object defines what the app can actually do in the specific tenant, who can access the app, and what resources the app can access. To use a service principal in Metrics Advisor:
-
- 1. Create the Azure AD application registration. For more information, see [Create an Azure AD app registration in Azure Data Explorer](/azure/data-explorer/provision-azure-ad-app).
-
- 1. Manage Azure Data Explorer database permissions. For more information, see [Manage Azure Data Explorer database permissions](/azure/data-explorer/manage-database-permissions).
-
- 1. Create a credential entity in Metrics Advisor. See how to [create a credential entity](how-tos/credential-entity.md) in Metrics Advisor, so that you can choose that entity when you're adding a data feed for the service principal authentication type.
-
- Here's an example of a connection string:
-
- ```
- Data Source=<URI Server>;Initial Catalog=<Database>
- ```
-
- * **Service principal from key vault**: Azure Key Vault helps to safeguard cryptographic keys and secret values that cloud apps and services use. By using Key Vault, you can encrypt keys and secret values. You should create a service principal first, and then store the service principal inside Key Vault. For the detailed procedure, see [Create a credential entity for service principal from Key Vault](how-tos/credential-entity.md#sp-from-kv). Here's an example of a connection string:
- ```
- Data Source=<URI Server>;Initial Catalog=<Database>
- ```
-
- * **Managed identity**: Managed identity for Azure resources can authorize access to blob and queue data. Managed identity uses Azure AD credentials from applications running in Azure virtual machines, function apps, virtual machine scale sets, and other services. By using managed identity for Azure resources and Azure AD authentication, you can avoid storing credentials with your applications that run in the cloud. Learn how to [authorize with a managed identity](../../storage/blobs/authorize-managed-identity.md#enable-managed-identities-on-a-vm).
-
- You can create a managed identity in the Azure portal for your Azure Data Explorer (Kusto). Select **Permissions** > **Add**. The suggested role type is: **admin / viewer**.
-
- ![Screenshot that shows managed identity for Kusto.](media/managed-identity-kusto.png)
-
- Here's an example of a connection string:
- ```
- Data Source=<URI Server>;Initial Catalog=<Database>
- ```
-
-
-* **Query**: To get and formulate data into multi-dimensional time series data, see [Kusto Query Language](/azure/data-explorer/kusto/query). You can use the `@IntervalStart` and `@IntervalEnd` variables in your query. They should be formatted as follows: `yyyy-MM-ddTHH:mm:ssZ`.
-
- Sample query:
-
- ```kusto
- [TableName] | where [TimestampColumn] >= datetime(@IntervalStart) and [TimestampColumn] < datetime(@IntervalEnd);
- ```
-
- For more information, refer to the [tutorial on writing a valid query](tutorials/write-a-valid-query.md).
-
-## <span id="adl">Azure Data Lake Storage Gen2</span>
-
-* **Account Name**: The authentication types for Azure Data Lake Storage Gen2 are basic, Azure Data Lake Storage Gen2 shared key, service principal, and service principal from Key Vault.
-
- * **Basic**: The **Account Name** of your Azure Data Lake Storage Gen2. You can find this in your Azure storage account (Azure Data Lake Storage Gen2) resource, in **Access keys**.
-
- * **Azure Data Lake Storage Gen2 shared key**: First, you specify the account key to access your Azure Data Lake Storage Gen2 (this is the same as the account key in the basic authentication type). You can find the key in your Azure storage account (Azure Data Lake Storage Gen2) resource, in **Access keys**. Then, you [create a credential entity](how-tos/credential-entity.md) for the Azure Data Lake Storage Gen2 shared key type, and fill in the account key.
-
- The account name is the same as the basic authentication type.
-
- * **Service principal**: A *service principal* is a concrete instance created from the application object, and it inherits certain properties from that application object. A service principal is created in each tenant where the application is used, and it references the globally unique app object. The service principal object defines what the app can actually do in the specific tenant, who can access the app, and what resources the app can access.
-
- The account name is the same as the basic authentication type.
-
- **Step 1:** Create and register an Azure AD application, and then authorize it to access the database. For more information, see [Create an Azure AD app registration](/azure/data-explorer/provision-azure-ad-app).
-
- **Step 2:** Assign roles.
-
- 1. In the Azure portal, go to the **Storage accounts** service.
-
- 2. Select the Azure Data Lake Storage Gen2 account to use with this application registration.
-
- 3. Select **Access Control (IAM)**.
-
- 4. Select **+ Add**, and select **Add role assignment** from the menu.
-
- 5. Set the **Select** field to the Azure AD application name, and set the role to **Storage Blob Data Contributor**. Then select **Save**.
-
- ![Screenshot that shows the steps to assign roles.](media/datafeeds/adls-gen-2-app-reg-assign-roles.png)
-
- **Step 3:** [Create a credential entity](how-tos/credential-entity.md) in Metrics Advisor, so that you can choose that entity when you're adding a data feed for the service principal authentication type.
-
- * **Service principal from Key Vault**: Key Vault helps to safeguard cryptographic keys and secret values that cloud apps and services use. By using Key Vault, you can encrypt keys and secret values. Create a service principal first, and then store the service principal inside a key vault. For more details, see [Create a credential entity for service principal from Key Vault](how-tos/credential-entity.md#sp-from-kv). The account name is the same as the basic authentication type.
-
-* **Account Key** (only necessary for the basic authentication type): Specify the account key to access your Azure Data Lake Storage Gen2. You can find this in your Azure storage account (Azure Data Lake Storage Gen2) resource, in **Access keys**.
-
-* **File System Name (Container)**: For Metrics Advisor, you store your time series data as blob files (one blob per timestamp), under a single container. This is the container name field. You can find this in your Azure storage account (Azure Data Lake Storage Gen2) instance. In **Data Lake Storage**, select **Containers**, and then you see the container name.
-
-* **Directory Template**: This is the directory template of the blob file. The following parameters are supported:
-
- * `%Y` is the year, formatted as `yyyy`.
- * `%m` is the month, formatted as `MM`.
- * `%d` is the day, formatted as `dd`.
- * `%h` is the hour, formatted as `HH`.
- * `%M` is the minute, formatted as `mm`.
-
- Query sample for a daily metric: `%Y/%m/%d`.
-
- Query sample for an hourly metric: `%Y/%m/%d/%h`.
-
-* **File Template**:
- Metrics Advisor uses a path to find the JSON file in Blob Storage. The following is an example of a blob file template, which is used to find the JSON file in Blob Storage: `%Y/%m/FileName_%Y-%m-%d-%h-%M.json`. `%Y/%m` is the path, and if you have `%d` in your path, you can add it after `%m`.
-
- The following parameters are supported:
-
- * `%Y` is the year, formatted as `yyyy`.
- * `%m` is the month, formatted as `MM`.
- * `%d` is the day, formatted as `dd`.
- * `%h` is the hour, formatted as `HH`.
- * `%M` is the minute, formatted as `mm`.
-
- Metrics Advisor supports the data schema in the JSON files, as in the following example:
-
- ```json
- [
- {"date": "2018-01-01T00:00:00Z", "market":"en-us", "count":11, "revenue":1.23},
- {"date": "2018-01-01T00:00:00Z", "market":"zh-cn", "count":22, "revenue":4.56}
- ]
- ```
-
-## <span id="eventhubs">Azure Event Hubs</span>
-
-* **Limitations**: Be aware of the following limitations with integration.
-
- * Metrics Advisor integration with Event Hubs doesn't currently support more than three active data feeds in one Metrics Advisor instance in public preview.
- * Metrics Advisor will always start consuming messages from the latest offset, including when reactivating a paused data feed.
-
- * Messages during the data feed pause period will be lost.
- * The data feed ingestion start time is automatically set to the current Coordinated Universal Time (UTC) timestamp when the data feed is created. This time is only for reference purposes.
-
- * Only one data feed can be used per consumer group. To reuse a consumer group from another deleted data feed, you need to wait at least ten minutes after deletion.
- * The connection string and consumer group can't be modified after the data feed is created.
- * For Event Hubs messages, only JSON is supported, and the JSON values can't be a nested JSON object. The top-level element can be a JSON object or a JSON array.
-
- Valid messages are as follows:
-
- Single JSON object:
-
- ```json
- {
- "metric_1": 234,
- "metric_2": 344,
- "dimension_1": "name_1",
- "dimension_2": "name_2"
- }
- ```
-
- JSON array:
-
- ```json
- [
- {
- "timestamp": "2020-12-12T12:00:00", "temperature": 12.4,
- "location": "outdoor"
- },
- {
- "timestamp": "2020-12-12T12:00:00", "temperature": 24.8,
- "location": "indoor"
- }
- ]
- ```
--
-* **Connection String**: Go to the instance of Event Hubs. Then add a new policy or choose an existing shared access policy. Copy the connection string in the pop-up panel.
- ![Screenshot of Event Hubs.](media/datafeeds/entities-eventhubs.jpg)
-
- ![Screenshot of shared access policies.](media/datafeeds/shared-access-policies.jpg)
-
- Here's an example of a connection string:
- ```
- Endpoint=<Server>;SharedAccessKeyName=<SharedAccessKeyName>;SharedAccessKey=<SharedAccess Key>;EntityPath=<EntityPath>
- ```
-
-* **Consumer Group**: A [consumer group](../../event-hubs/event-hubs-features.md#consumer-groups) is a view (state, position, or offset) of an entire event hub.
-You find this on the **Consumer Groups** menu of an instance of Azure Event Hubs. A consumer group can only serve one data feed. Create a new consumer group for each data feed.
-* **Timestamp** (optional): Metrics Advisor uses the Event Hubs timestamp as the event timestamp, if the user data source doesn't contain a timestamp field. The timestamp field is optional. If no timestamp column is chosen, the service uses the enqueued time as the timestamp.
-
- The timestamp field must match one of these two formats:
-
- * `YYYY-MM-DDTHH:MM:SSZ`
- * The number of seconds or milliseconds from the epoch of `1970-01-01T00:00:00Z`.
-
- The timestamp is left-aligned to the granularity. For example, if the timestamp is `2019-01-01T00:03:00Z` and the granularity is 5 minutes, Metrics Advisor aligns the timestamp to `2019-01-01T00:00:00Z`. If the event timestamp is `2019-01-01T00:10:00Z`, Metrics Advisor uses the timestamp directly, without any alignment. A sketch of this alignment is shown below.
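-
- Here's a minimal sketch of this left-alignment behavior (illustrative only, not service code), assuming the granularity is expressed in seconds:
-
- ```swift
- import Foundation
-
- // Floor a timestamp down to the start of its granularity bucket,
- // e.g. 2019-01-01T00:03:00Z with a 5-minute granularity -> 2019-01-01T00:00:00Z.
- func leftAlign(_ timestamp: Date, granularitySeconds: TimeInterval) -> Date {
-     let seconds = timestamp.timeIntervalSince1970
-     return Date(timeIntervalSince1970: floor(seconds / granularitySeconds) * granularitySeconds)
- }
-
- let iso = ISO8601DateFormatter()
- let event = iso.date(from: "2019-01-01T00:03:00Z")!
- print(iso.string(from: leftAlign(event, granularitySeconds: 300)))  // 2019-01-01T00:00:00Z
- ```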
--
-## <span id="log">Azure Monitor Logs</span>
-
-Azure Monitor Logs has the following authentication types: basic, service principal, and service principal from Key Vault.
-* **Basic**: You need to fill in **Tenant ID**, **Client ID**, **Client Secret**, and **Workspace ID**.
- To get **Tenant ID**, **Client ID**, and **Client Secret**, see [Register app or web API](../../active-directory/develop/quickstart-register-app.md). You can find **Workspace ID** in the Azure portal.
-
- ![Screenshot that shows where to find the Workspace ID in the Azure portal.](media/workspace-id.png)
-
-* **Service principal**: A service principal is a concrete instance created from the application object, and it inherits certain properties from that application object. A service principal is created in each tenant where the application is used, and it references the globally unique app object. The service principal object defines what the app can actually do in the specific tenant, who can access the app, and what resources the app can access.
-
- **Step 1:** Create and register an Azure AD application, and then authorize it to access a database. For more information, see [Create an Azure AD app registration](/azure/data-explorer/provision-azure-ad-app).
-
- **Step 2:** Assign roles.
- 1. In the Azure portal, go to the **Storage accounts** service.
- 2. Select **Access Control (IAM)**.
- 3. Select **+ Add**, and then select **Add role assignment** from the menu.
- 4. Set the **Select** field to the Azure AD application name, and set the role to **Storage Blob Data Contributor**. Then select **Save**.
-
- ![Screenshot that shows how to assign roles.](media/datafeeds/adls-gen-2-app-reg-assign-roles.png)
-
-
- **Step 3:** [Create a credential entity](how-tos/credential-entity.md) in Metrics Advisor, so that you can choose that entity when you're adding a data feed for the service principal authentication type.
-
-* **Service principal from Key Vault**: Key Vault helps to safeguard cryptographic keys and secret values that cloud apps and services use. By using Key Vault, you can encrypt keys and secret values. Create a service principal first, and then store the service principal inside a key vault. For more details, see [Create a credential entity for service principal from Key Vault](how-tos/credential-entity.md#sp-from-kv).
-
-* **Query**: Specify the query. For more information, see [Log queries in Azure Monitor](../../azure-monitor/logs/log-query-overview.md).
-
- Sample query:
-
- ``` Kusto
- [TableName]
- | where [TimestampColumn] >= datetime(@IntervalStart) and [TimestampColumn] < datetime(@IntervalEnd)
- | summarize [count_per_dimension]=count() by [Dimension]
- ```
-
- For more information, refer to the [tutorial on writing a valid query](tutorials/write-a-valid-query.md).
-
-## <span id="sql">Azure SQL Database | SQL Server</span>
-
-* **Connection String**: The authentication types for Azure SQL Database and SQL Server are basic, managed identity, Azure SQL connection string, service principal, and service principal from key vault.
-
- * **Basic**: Metrics Advisor accepts an [ADO.NET style connection string](/dotnet/framework/data/adonet/connection-string-syntax) for a SQL Server data source.
- Here's an example of a connection string:
-
- ```
- Data Source=<Server>;Initial Catalog=<db-name>;User ID=<user-name>;Password=<password>
- ```
-
- * <span id='jump'>**Managed identity**</span>: Managed identity for Azure resources can authorize access to blob and queue data. It does so by using Azure AD credentials from applications running in Azure virtual machines, function apps, virtual machine scale sets, and other services. By using managed identity for Azure resources and Azure AD authentication, you can avoid storing credentials with your applications that run in the cloud. To [enable your managed identity](../../active-directory/managed-identities-azure-resources/tutorial-windows-vm-access-sql.md), follow these steps:
- 1. Enabling a system-assigned managed identity is a one-click experience. In the Azure portal, for your Metrics Advisor workspace, go to **Settings** > **Identity** > **System assigned**. Then set the status as **on**.
-
- ![Screenshot that shows how to set the status as on.](media/datafeeds/set-identity-status.png)
-
- 1. Enable Azure AD authentication. In the Azure portal, for your data source, go to **Settings** > **Active Directory admin**. Select **Set admin**, and select an **Azure AD user account** to be made an administrator of the server. Then, choose **Select**.
-
- ![Screenshot that shows how to set the admin.](media/datafeeds/set-admin.png)
-
- 1. Enable managed identity in Metrics Advisor. You can edit a query in the database management tool or in the Azure portal.
-
- **Management tool**: In your database management tool, select **Active Directory - Universal with MFA support** in the authentication field. In the **User name** field, enter the name of the Azure AD account that you set as the server administrator in step 2. For example, this might be `test@contoso.com`.
-
- ![Screenshot that shows how to set connection details.](media/datafeeds/connection-details.png)
-
- **Azure portal**: In your SQL database, select **Query editor**, and sign in with the admin account.
- ![Screenshot that shows how to edit your query in the Azure portal.](media/datafeeds/query-editor.png)
-
- Then in the query window, run the following (note that this is the same for the management tool method):
-
- ```
- CREATE USER [MI Name] FROM EXTERNAL PROVIDER
- ALTER ROLE db_datareader ADD MEMBER [MI Name]
- ```
-
- > [!NOTE]
- > The `MI Name` is the managed identity name in Metrics Advisor (for service principal, it should be replaced with the service principal name). For more information, see [Authorize with a managed identity](../../storage/blobs/authorize-managed-identity.md#enable-managed-identities-on-a-vm).
-
- Here's an example of a connection string:
-
- ```
- Data Source=<Server>;Initial Catalog=<Database>
- ```
-
- * **Azure SQL connection string**:
-
-
- Here's an example of a connection string:
-
- ```
- Data Source=<Server>;Initial Catalog=<Database>;User ID=<user-name>;Password=<password>
- ```
-
-
- * **Service principal**: A service principal is a concrete instance created from the application object, and it inherits certain properties from that application object. A service principal is created in each tenant where the application is used, and it references the globally unique app object. The service principal object defines what the app can actually do in the specific tenant, who can access the app, and what resources the app can access.
-
- **Step 1:** Create and register an Azure AD application, and then authorize it to access a database. For more information, see [Create an Azure AD app registration](/azure/data-explorer/provision-azure-ad-app).
-
- **Step 2:** Follow the steps documented previously, in [managed identity in SQL Server](#jump).
-
- **Step 3:** [Create a credential entity](how-tos/credential-entity.md) in Metrics Advisor, so that you can choose that entity when you're adding a data feed for the service principal authentication type.
-
- Here's an example of a connection string:
-
- ```
- Data Source=<Server>;Initial Catalog=<Database>
- ```
-
- * **Service principal from Key Vault**: Key Vault helps to safeguard cryptographic keys and secret values that cloud apps and services use. By using Key Vault, you can encrypt keys and secret values. Create a service principal first, and then store the service principal inside a key vault. For more details, see [Create a credential entity for service principal from Key Vault](how-tos/credential-entity.md#sp-from-kv). You can also find your connection string in your Azure SQL Server resource, in **Settings** > **Connection strings**.
-
- Here's an example of a connection string:
-
- ```
- Data Source=<Server>;Initial Catalog=<Database>
- ```
-
-* **Query**: Use a SQL query to get and formulate data into multi-dimensional time series data. You can use `@IntervalStart` and `@IntervalEnd` in your query to help with getting an expected metrics value in an interval. They should be formatted as follows: `yyyy-MM-ddTHH:mm:ssZ`.
--
- Sample query:
-
- ```SQL
- SELECT [TimestampColumn], [DimensionColumn], [MetricColumn] FROM [TableName] WHERE [TimestampColumn] >= @IntervalStart and [TimestampColumn] < @IntervalEnd
- ```
-
-## <span id="table">Azure Table Storage</span>
-
-* **Connection String**: Create a shared access signature (SAS) URL, and fill it in here. The most straightforward way to generate a SAS URL is by using the Azure portal. First, go to the storage account you want to access. Then, under **Settings**, select **Shared access signature**. Select the **Table** and **Object** checkboxes, and then select **Generate SAS and connection string**. In the Metrics Advisor workspace, copy and paste the **Table service SAS URL** into the text box.
-
- ![Screenshot that shows how to generate the shared access signature in Azure Table Storage.](media/azure-table-generate-sas.png)
-
-* **Table Name**: Specify a table to query against. You can find this in your Azure storage account instance. In the **Table Service** section, select **Tables**.
-
-* **Query**: You can use `@IntervalStart` and `@IntervalEnd` in your query to help with getting an expected metrics value in an interval. They should be formatted as follows: `yyyy-MM-ddTHH:mm:ssZ`.
-
- Sample query:
-
- ``` mssql
- PartitionKey ge '@IntervalStart' and PartitionKey lt '@IntervalEnd'
- ```
-
- For more information, see the [tutorial on writing a valid query](tutorials/write-a-valid-query.md).
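
If you want to sanity-check the SAS URL and filter outside of Metrics Advisor, the sketch below queries the Table service REST endpoint directly from Python. It's an illustration only: `MetricsTable` and the SAS URL are placeholders, and the filter mirrors the sample query with the interval placeholders already substituted.

```python
import requests

# Placeholder: the "Table service SAS URL" copied from the portal,
# for example "https://<account>.table.core.windows.net/?sv=...&sig=..."
table_sas_url = "https://<account>.table.core.windows.net/?<sas-token>"
base, _, sas_token = table_sas_url.partition("?")

# Same filter as the sample query, with the placeholders already substituted.
odata_filter = "PartitionKey ge '2020-06-01T00:00:00Z' and PartitionKey lt '2020-06-02T00:00:00Z'"

resp = requests.get(
    f"{base}MetricsTable()?{sas_token}",            # MetricsTable is a placeholder table name
    params={"$filter": odata_filter},
    headers={"Accept": "application/json;odata=nometadata"},
    timeout=30,
)
resp.raise_for_status()
for entity in resp.json().get("value", []):
    print(entity)
```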
--
-## <span id="influxdb">InfluxDB (InfluxQL)</span>
-
-* **Connection String**: The connection string to access InfluxDB.
-* **Database**: The database to query against.
-* **Query**: A query to get and formulate data into multi-dimensional time series data for ingestion.
-
- Sample query:
-
- ``` SQL
- SELECT [TimestampColumn], [DimensionColumn], [MetricColumn] FROM [TableName] WHERE [TimestampColumn] >= @IntervalStart and [TimestampColumn] < @IntervalEnd
- ```
-
-For more information, refer to the [tutorial on writing a valid query](tutorials/write-a-valid-query.md).
-
-* **User name**: This is optional for authentication.
-* **Password**: This is optional for authentication.
-
-## <span id="mongodb">MongoDB</span>
-
-* **Connection String**: The connection string to access MongoDB.
-* **Database**: The database to query against.
-* **Query**: A command to get and formulate data into multi-dimensional time series data for ingestion. Verify the command on [db.runCommand()](https://docs.mongodb.com/manual/reference/method/db.runCommand/).
-
- Sample query:
-
- ``` MongoDB
- {"find": "[TableName]","filter": { [Timestamp]: { $gte: ISODate(@IntervalStart) , $lt: ISODate(@IntervalEnd) }},"singleBatch": true}
- ```
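
To try the same command outside of Metrics Advisor, you can run it with `pymongo`'s `Database.command`. This is a hedged sketch: the connection string, database name, `MetricsTable` collection, and `Timestamp` field are placeholders, and the `ISODate(...)` placeholders are replaced with real `datetime` values.

```python
from datetime import datetime, timezone
from pymongo import MongoClient

client = MongoClient("mongodb://<user>:<password>@<host>:27017")  # placeholder connection string
db = client["<Database>"]

interval_start = datetime(2020, 6, 1, tzinfo=timezone.utc)
interval_end = datetime(2020, 6, 2, tzinfo=timezone.utc)

# Equivalent of the sample command, with ISODate(@IntervalStart/@IntervalEnd) substituted.
result = db.command({
    "find": "MetricsTable",                        # placeholder collection name
    "filter": {"Timestamp": {"$gte": interval_start, "$lt": interval_end}},
    "singleBatch": True,
})
for doc in result["cursor"]["firstBatch"]:
    print(doc)
```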
-
-
-## <span id="mysql">MySQL</span>
-
-* **Connection String**: The connection string to access MySQL DB.
-* **Query**: A query to get and formulate data into multi-dimensional time series data for ingestion.
-
- Sample query:
-
- ``` SQL
- SELECT [TimestampColumn], [DimensionColumn], [MetricColumn] FROM [TableName] WHERE [TimestampColumn] >= @IntervalStart and [TimestampColumn]< @IntervalEnd
- ```
-
- For more information, refer to the [tutorial on writing a valid query](tutorials/write-a-valid-query.md).
-
-## <span id="pgsql">PostgreSQL</span>
-
-* **Connection String**: The connection string to access PostgreSQL DB.
-* **Query**: A query to get and formulate data into multi-dimensional time series data for ingestion.
-
- Sample query:
-
- ``` SQL
- SELECT [TimestampColumn], [DimensionColumn], [MetricColumn] FROM [TableName] WHERE [TimestampColumn] >= @IntervalStart and [TimestampColumn] < @IntervalEnd
- ```
- For more information, refer to the [tutorial on writing a valid query](tutorials/write-a-valid-query.md).
-
-## Next steps
-
-* While you're waiting for your metric data to be ingested into the system, read about [how to manage data feed configurations](how-tos/manage-data-feeds.md).
-* When your metric data is ingested, you can [configure metrics and fine tune detection configuration](how-tos/configure-metrics.md).
applied-ai-services Encryption https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/metrics-advisor/encryption.md
- Title: Metrics Advisor service encryption-
-description: Metrics Advisor service encryption of data at rest.
------ Previously updated : 07/02/2021-
-#Customer intent: As a user of the Metrics Advisor service, I want to learn how encryption at rest works.
--
-# Metrics Advisor service encryption of data at rest
-
-Metrics Advisor service automatically encrypts your data when it is persisted to the cloud. The Metrics Advisor service encryption protects your data and helps you to meet your organizational security and compliance commitments.
--
-Metrics Advisor supports CMK and double encryption by using BYOS (bring your own storage).
-
-## Steps to create a Metrics Advisor with BYOS
-
-### Step 1. Create an Azure Database for PostgreSQL and set admin
--- Create an Azure Database for PostgreSQL-
- Log in to the Azure portal and create an Azure Database for PostgreSQL resource. A few things to note:
-
- 1. Select the **'Single Server'** deployment option.
- 2. When choosing 'Datasource', specify **'None'**.
- 3. For 'Location', make sure to create it in the **same location** as your Metrics Advisor resource.
- 4. 'Version' should be set to **11**.
- 5. For 'Compute + storage', choose a 'Memory Optimized' SKU with at least **32 vCores**.
-
- ![Create an Azure Database for PostgreSQL](media/cmk-create.png)
--- Set Active Directory Admin for newly created PG-
- After successfully creating your Azure Database for PostgreSQL, go to the resource page of the newly created resource, select the 'Active Directory admin' tab, and set yourself as the admin.
--
-### Step 2. Create a Metrics Advisor resource and enable Managed Identity
--- Create a Metrics Advisor resource in the Azure portal-
- Go to the Azure portal again and search for 'Metrics Advisor'. When creating the Metrics Advisor resource, remember the following:
-
- 1. Choose the **same region** as the Azure Database for PostgreSQL you created.
- 2. Mark 'Bring your own storage' as **'Yes'** and select the Azure Database for PostgreSQL you just created in the dropdown list.
--- Enable the Managed Identity for Metrics Advisor-
- After creating the Metrics Advisor resource, select 'Identity' and set 'Status' to **'On'** to enable Managed Identity.
--- Get Application ID of Managed Identity-
- Go to Azure Active Directory and select 'Enterprise applications'. Change 'Application type' to **'Managed Identity'**, enter the resource name of your Metrics Advisor resource in the search box, and search. From the query result, you can view the 'Application ID'; copy it.
-
-### Step 3. Grant Metrics Advisor access permission to your Azure Database for PostgreSQL
-- Grant the **'Owner'** role to the Managed Identity on your Azure Database for PostgreSQL.
-- Set firewall rules:
- 1. Set 'Allow access to Azure services' as 'Yes'.
- 2. Add your clientIP address to log in to Azure Database for PostgreSQL.
-- Get an access token for your account with the resource type 'https://ossrdbms-aad.database.windows.net'. The access token is the password you use to log in to the Azure Database for PostgreSQL with your account. An example using the `az` CLI:
- ```
- az login
- az account get-access-token --resource https://ossrdbms-aad.database.windows.net
- ```
-- After getting the token, use it to log in to your Azure Database for PostgreSQL. Replace 'servername' with the server name shown on the 'Overview' page of your Azure Database for PostgreSQL.
- ```
- export PGPASSWORD=<access-token>
- psql -h <servername> -U <adminaccount@servername> -d postgres
- ```
-- After logging in, execute the following commands to grant Metrics Advisor access permission to the Azure Database for PostgreSQL. Replace 'appid' with the Application ID you got in Step 2.
- ```
- SET aad_validate_oids_in_tenant = off;
- CREATE ROLE metricsadvisor WITH LOGIN PASSWORD '<appid>' IN ROLE azure_ad_user;
- ALTER ROLE metricsadvisor CREATEDB;
- GRANT azure_pg_admin TO metricsadvisor;
- ```
-
-By completing all the above steps, you've successfully created a Metrics Advisor resource with CMK support. Wait a couple of minutes until your Metrics Advisor is accessible.
-
-## Next steps
-
-* [Metrics Advisor Service Customer-Managed Key Request Form](https://aka.ms/cogsvc-cmk)
-* [Learn more about Azure Key Vault](../../key-vault/general/overview.md)
applied-ai-services Glossary https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/metrics-advisor/glossary.md
- Title: Metrics Advisor glossary-
-description: Key ideas and concepts for the Metrics Advisor service
------ Previously updated : 09/14/2020---
-# Metrics Advisor glossary of common vocabulary and concepts
-
-This document explains the technical terms used in Metrics Advisor. Use this article to learn about common concepts and objects you might encounter when using the service.
-
-## Data feed
-
-> [!NOTE]
-> Multiple metrics can share the same data source, and even the same data feed.
-
-A data feed is what Metrics Advisor ingests from your data source, such as Azure Cosmos DB or a SQL server. A data feed contains rows of:
-* timestamps
-* zero or more dimensions
-* one or more measures.
-
-## Interval
-Metrics need to be monitored at a certain granularity according to business requirements. For example, business Key Performance Indicators (KPIs) are monitored at daily granularity. However, service performance metrics are often monitored at minute/hourly granularity. So the frequency at which metric data is collected from sources differs.
-
-Metrics Advisor continuously grabs metric data at each time interval; **the interval is equal to the granularity of the metric.** At every interval, Metrics Advisor runs the query you have written and ingests data for that specific interval. Because of this data ingestion mechanism, the query script **should not return all metric data that exists in the database, but needs to limit the result to a single interval.**
-
-<!-- ![What is interval](media/tutorial/what-is-interval.png) -->
-
-## Metric
-
-A metric is a quantifiable measure that is used to monitor and assess the status of a specific business process. It can be a combination of multiple time series values divided into dimensions. For example, a *web health* metric might contain dimensions for *user count* and the *en-us market*.
-
-## Dimension
-
-A dimension is one or more categorical values. The combination of those values identifies a particular univariate time series, for example: country/region, language, tenant, and so on.
-
-## Multi-dimensional metric
-
-What is a multi-dimensional metric? Let's use two examples.
-
-**Revenue of a business**
-
-Suppose you have data for the revenue of your business. Your time series data might look something like this:
-
-| Timestamp | Category | Market | Revenue |
-| -|-|--|-- |
-| 2020-6-1 | Food | US | 1000 |
-| 2020-6-1 | Apparel | US | 2000 |
-| 2020-6-2 | Food | UK | 800 |
-| ... | ... |... | ... |
-
-In this example, *Category* and *Market* are dimensions. *Revenue* is the Key Performance Indicator (KPI) which could be sliced into different categories and/or markets, and could also be aggregated. For example, the revenue of *food* for all markets.
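
To make slicing and aggregation concrete, here's a small pandas sketch over the revenue rows above (an illustration only, outside of Metrics Advisor):

```python
import pandas as pd

df = pd.DataFrame(
    [
        ("2020-06-01", "Food", "US", 1000),
        ("2020-06-01", "Apparel", "US", 2000),
        ("2020-06-02", "Food", "UK", 800),
    ],
    columns=["Timestamp", "Category", "Market", "Revenue"],
)

# Slice: the single time series for (Category=Food, Market=US).
food_us = df[(df["Category"] == "Food") & (df["Market"] == "US")]

# Aggregate: revenue of Food across all markets, per timestamp.
food_all_markets = df[df["Category"] == "Food"].groupby("Timestamp")["Revenue"].sum()

print(food_us)
print(food_all_markets)
```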
-
-**Error counts for a complex application**
-
-Suppose you have data for the number of errors logged in an application. Your time series data might look something like this:
-
-| Timestamp | Application component | Region | Error count |
-| -|-|--|-- |
-| 2020-6-1 | Employee database | WEST EU | 9000 |
-| 2020-6-1 | Message queue | EAST US | 1000 |
-| 2020-6-2 | Message queue | EAST US | 8000|
-| ... | ... | ... | ...|
-
-In this example, *Application component* and *Region* are dimensions. *Error count* is the KPI which could be sliced into different categories and/or markets, and could also be aggregated. For example, the error count of *Message queue* in all regions.
-
-## Measure
-
-A measure is a fundamental or unit-specific term and a quantifiable value of the metric.
-
-## Time series
-
-A time series is a series of data points indexed (or listed or graphed) in chronological order. Most commonly, a time series is a sequence taken at successive, equally spaced points in time. It is a sequence of discrete-time data.
-
-In Metrics Advisor, values of one metric on a specific dimension combination are called one series.
-
-## Granularity
-
-Granularity indicates how frequently data points are generated at the data source, for example, daily or hourly.
-
-## Ingest data since (UTC)
-
-Ingest data since (UTC) is the time that you want Metrics Advisor to begin ingesting data from your data source. Your data source must have data at the specified ingestion start time.
-
-## Confidence boundaries
-
-> [!NOTE]
-> Confidence boundaries are not the only measurement used to find anomalies. It's possible for data points outside of this boundary to be flagged as normal by the detection model.
-
-In Metrics Advisor, confidence boundaries represent the sensitivity of the algorithm used, and are used to filter out overly sensitive anomalies. On the web portal, confidence bounds appear as a transparent blue band. All the points within the band are treated as normal points.
-
-Metrics Advisor provides tools to adjust the sensitivity of the algorithms used. See [How to: Configure metrics and fine tune detection configuration](how-tos/configure-metrics.md) for more information.
-
-![Confidence bounds](media/confidence-bounds.png)
--
-## Hook
-
-Metrics Advisor lets you create and subscribe to real-time alerts. These alerts are sent over the internet, [using a hook](how-tos/alerts.md).
-
-## Anomaly incident
-
-After a detection configuration is applied to metrics, incidents are generated whenever any series within it has an anomaly. In large data sets this can be overwhelming, so Metrics Advisor groups series of anomalies within a metric into an incident. The service will also evaluate the severity and provide tools for [diagnosing an incident](how-tos/diagnose-an-incident.md).
-
-### Diagnostic tree
-
-In Metrics Advisor, you can apply anomaly detection on metrics, then Metrics Advisor automatically monitors all time series of all dimension combinations. Whenever there is any anomaly detected, Metrics Advisor aggregates anomalies into incidents.
-After an incident occurs, Metrics Advisor will provide a diagnostic tree with a hierarchy of contributing anomalies, and identify ones with the biggest impact. Each incident has a root cause anomaly, which is the top node of the tree.
-
-### Anomaly grouping
-
-Metrics Advisor provides the capability to find related time series with similar patterns. It can also provide deeper insights into the impact on other dimensions, and correlate the anomalies.
-
-### Time series comparison
-
-You can pick multiple time series to compare trends in a single visualization. This provides a clear and insightful way to view and compare related series.
-
-## Detection configuration
-
->[!Note]
->Detection configurations are only applied within an individual metric.
-
-On the Metrics Advisor web portal, a detection configuration (such as sensitivity, auto snooze, and direction) is listed on the left panel when viewing a metric. Parameters can be tuned and applied to all series within this metric.
-
-A detection configuration is required for every time series, and determines whether a point in the time series is an anomaly. Metrics Advisor will set up a default configuration for the whole metric when you first onboard data.
-
-You can additionally refine the configuration by applying tuning parameters on a group of series, or a specific one. Only one configuration will be applied to a time series:
-* Configurations applied to a specific series will overwrite configurations for a group
-* Configurations for a group will overwrite configurations applied to the whole metric. (See the sketch below for how this precedence resolves.)
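
A minimal sketch of that precedence (an illustration of the rule, not the service's code), where the most specific configuration wins:

```python
def resolve_detection_config(series_configs, group_configs, metric_config, series_dims):
    """Illustration only: pick the single configuration applied to one series.

    series_dims: dict of dimension name -> value identifying the series.
    series_configs: dict keyed by frozenset(series_dims.items()).
    group_configs: list of (partial_dims_dict, config) pairs.
    """
    key = frozenset(series_dims.items())
    if key in series_configs:                      # series-level overrides group-level
        return series_configs[key]
    for group_dims, config in group_configs:       # group-level overrides metric-level
        if all(series_dims.get(d) == v for d, v in group_dims.items()):
            return config
    return metric_config                           # metric-level default
```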
-
-Metrics Advisor provides several [detection methods](how-tos/configure-metrics.md#anomaly-detection-methods), and you can combine them using logical operators.
-
-### Smart detection
-
-Anomaly detection using multiple machine learning algorithms.
-
-**Sensitivity**: A numerical value to adjust the tolerance of the anomaly detection. Visually, the higher the value, the narrower the upper and lower boundaries around the time series.
-
-### Hard threshold
-
-Values outside of upper or lower bounds are anomalies.
-
-**Min**: The lower bound
-
-**Max**: The upper bound
-
-### Change threshold
-
-Use the previous point value to determine if this point is an anomaly.
-
-**Change percentage**: Compared to the previous point, the current point is an anomaly if the percentage of change is more than this parameter.
-
-**Change over points**: How many points to look back.
-
-### Common parameters
-
-**Direction**: A point is an anomaly only when the deviation occurs in the direction *up*, *down*, or *both*.
-
-**Not valid anomaly until**: A data point is only an anomaly if a specified percentage of previous points are also anomalies.
-
-## Alert settings
-
-Alert settings determine which anomalies should trigger an alert. You can set multiple alerts with different settings. For example, you could create an alert for anomalies with lower business impact, and another for more important alerts.
-
-You can also create an alert across metrics. For example, an alert that only gets triggered if two specified metrics have anomalies.
-
-### Alert scope
-
-Alert scope refers to the scope that the alert applies to. There are four options:
-
-**Anomalies of all series**: Alerts will be triggered for anomalies in all series within the metric.
-
-**Anomalies in series group**: Alerts will only be triggered for anomalies in specific dimensions of the series group. The number of specified dimensions should be smaller than the total number of dimensions.
-
-**Anomalies in favorite series**: Alerts will only be triggered for anomalies that are added as favorites. You can choose a group of series as a favorite for each detecting config.
-
-**Anomalies in top N of all series**: Alerts will only be triggered for anomalies in the top N series. You can set parameters to specify the number of timestamps to take into account, and how many anomalies must be in them to send the alert.
-
-### Severity
-
-Severity is a grade that Metrics Advisor uses to describe the severity of an incident, including *High*, *Medium*, and *Low*.
-
-Currently, Metrics Advisor uses the following factors to measure the alert severity:
-1. The value proportion and the quantity proportion of anomalies in the metric.
-1. Confidence of anomalies.
-1. Your favorite settings also contribute to the severity.
-
-### Auto snooze
-
-Some anomalies are transient issues, especially for small granularity metrics. You can *snooze* an alert for a specific number of time points. If anomalies are found within that specified number of points, no alert will be triggered. The behavior of auto snooze can be set on either metric level or series level.
-
-
-## Data feed settings
-
-### Ingestion Time Offset
-
-By default, data is ingested according to the granularity (such as *daily*). By using a positive integer, you can delay ingestion of the data by the specified value. Using a negative number, you can advance the ingestion by the specified value.
-
-### Max Ingestion per Minute
-
-Set this parameter if your data source supports limited concurrency. Otherwise leave the default settings.
-
-### Stop retry after
-
-If data ingestion has failed, Metrics Advisor will retry automatically after a period of time. The beginning of the period is the time when the first data ingestion occurred. The length of the retry period is defined according to the granularity. If you use the default value (`-1`), the retry period will be determined according to the granularity:
-
-| Granularity | Stop Retry After |
-| : | : |
-| Daily, Custom (>= 1 Day), Weekly, Monthly, Yearly | 7 days |
-| Hourly, Custom (< 1 Day) | 72 hours |
-
-### Min retry interval
-
-You can specify the minimum interval when retrying to pull data from the source. If you use the default value (`-1`), the retry interval will be determined according to the granularity:
-
-| Granularity | Minimum Retry Interval |
-| : | : |
-| Daily, Custom (>= 1 Day), Weekly, Monthly | 30 minutes |
-| Hourly, Custom (< 1 Day) | 10 minutes |
-| Yearly | 1 day |
-
-### Grace period
-
-> [!Note]
-> The grace period begins at the regular ingestion time, plus the specified ingestion time offset.
-
-A grace period is a period of time where Metrics Advisor will continue fetching data from the data source, but won't fire any alerts. If no data was ingested after the grace period, a *Data feed not available* alert will be triggered.
-
-### Snooze alerts in
-
-When this option is set to zero, each timestamp with *Not Available* will trigger an alert. When set to a value other than zero, the specified number of *Not available* alerts will be snoozed if no data was fetched.
-
-## Data feed permissions
-
-There are two roles to manage data feed permissions: *Administrator*, and *Viewer*.
-
-* An *Administrator* has full control of the data feed and metrics within it. They can activate, pause, delete the data feed, and make updates to feeds and configurations. An *Administrator* is typically the owner of the metrics.
-
-* A *Viewer* is able to view the data feed or metrics, but is not able to make changes.
-
-## Next steps
-- [Metrics Advisor overview](overview.md)-- [Use the web portal](quickstarts/web-portal.md)
applied-ai-services Alerts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/metrics-advisor/how-tos/alerts.md
- Title: Configure Metrics Advisor alerts-
-description: How to configure your Metrics Advisor alerts using hooks for email, web and Azure DevOps.
------ Previously updated : 09/14/2020---
-# How-to: Configure alerts and get notifications using a hook
-
-After an anomaly is detected by Metrics Advisor, an alert notification will be triggered based on alert settings, using a hook. An alert setting can be used with multiple detection configurations, and various parameters are available to customize your alert rule.
-
-## Create a hook
-
-Metrics Advisor supports four different types of hooks: email, Teams, webhook, and Azure DevOps. You can choose the one that works for your specific scenario.
-
-### Email hook
-
-> [!Note]
-> Metrics Advisor resource administrators need to configure the Email settings, and input **SMTP related information** into Metrics Advisor before anomaly alerts can be sent. The resource group admin or subscription admin needs to assign at least one *Cognitive Services Metrics Advisor Administrator* role in the Access control tab of the Metrics Advisor resource. [Learn more about e-mail settings configuration](../faq.yml#how-to-set-up-email-settings-and-enable-alerting-by-email-).
--
-An email hook is the channel for anomaly alerts to be sent to email addresses specified in the **Email to** section. Two types of alert emails will be sent: **Data feed not available** alerts, and **Incident reports**, which contain one or multiple anomalies.
-
-To create an email hook, the following parameters are available:
-
-|Parameter |Description |
-|||
-| Name | Name of the email hook |
-| Email to| Email addresses to send alerts to|
-| External link | Optional field, which enables a customized redirect, such as for troubleshooting notes. |
-| Customized anomaly alert title | Title template supports `${severity}`, `${alertSettingName}`, `${datafeedName}`, `${metricName}`, `${detectConfigName}`, `${timestamp}`, `${topDimension}`, `${incidentCount}`, `${anomalyCount}`
-
-After you select **OK**, an email hook will be created. You can use it in any alert settings to receive anomaly alerts. Refer to the [enable anomaly notification in Metrics Advisor](../tutorials/enable-anomaly-notification.md#send-notifications-with-azure-logic-apps-teams-and-smtp) tutorial for detailed steps.
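
The `${...}` placeholders in the title template are expanded per alert. As a rough client-side illustration (with hypothetical values, not the service's code), Python's `string.Template` uses the same syntax:

```python
from string import Template

title_template = Template("[${severity}] ${anomalyCount} anomalies on ${metricName} at ${timestamp}")

# Hypothetical alert values, just to show how the placeholders expand.
title = title_template.substitute(
    severity="High",
    anomalyCount=3,
    metricName="revenue",
    timestamp="2020-05-27T00:00:00Z",
)
print(title)  # [High] 3 anomalies on revenue at 2020-05-27T00:00:00Z
```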
-
-### Teams hook
-
-A Teams hook is the channel for anomaly alerts to be sent to a channel in Microsoft Teams. A Teams hook is implemented through an "Incoming webhook" connector. You may need to create an "Incoming webhook" connector in your target Teams channel ahead of time and copy its URL. Then go back to your Metrics Advisor workspace.
-
-Select "Hooks" tab in left navigation bar, and select "Create hook" button at top right of the page. Choose hook type of "Teams", following parameters are provided:
-
-|Parameter |Description |
-|||
-| Name | Name of the Teams hook |
-| Connector URL | The URL that you copied from the "Incoming webhook" connector that you created in the target Teams channel. |
-
-After you select **OK**, a Teams hook will be created. You can use it in any alert settings to send anomaly alerts to the target Teams channel. Refer to the [enable anomaly notification in Metrics Advisor](../tutorials/enable-anomaly-notification.md#send-notifications-with-azure-logic-apps-teams-and-smtp) tutorial for detailed steps.
-
-### Web hook
-
-A web hook is another notification channel that uses an endpoint provided by the customer. Any anomaly detected on the time series will be notified through a web hook. There are several steps to enable a web hook as an alert notification channel within Metrics Advisor.
-
-**Step 1.** Enable Managed Identity in your Metrics Advisor resource
-
-A system assigned managed identity is restricted to one per resource and is tied to the lifecycle of this resource. You can grant permissions to the managed identity by using Azure role-based access control (Azure RBAC). The managed identity is authenticated with Azure AD, so you don't have to store any credentials in code.
-
-Go to your Metrics Advisor resource in the Azure portal, select "Identity", and turn the status to "On" to enable Managed Identity.
-
-**Step 2.** Create a web hook in the Metrics Advisor workspace
-
-Log in to your workspace and select the "Hooks" tab, then select the "Create hook" button.
--
-To create a web hook, you will need to add the following information:
-
-|Parameter |Description |
-|||
-|Endpoint | The API address to be called when an alert is triggered. **MUST be Https**. |
-|Username / Password | For authenticating to the API address. Leave this blank if authentication isn't needed. |
-|Header | Custom headers in the API call. |
-|Certificate identifier in Azure Key vaults| If accessing the endpoint needs to be authenticated by a certificate, the certificate should be stored in Azure Key vaults. Input the identifier here.
-
-> [!Note]
-> When a web hook is created or modified, the endpoint will be called as a test with **an empty request body**. Your API needs to return a 200 HTTP code to successfully pass the validation.
-- Request method is **POST**
-- Timeout: 30 seconds
-- Retries on 5xx errors; other errors are ignored. 301/302 redirect requests will not be followed.
-- Request body:
-```
-{
-"value": [{
- "hookId": "b0f27e91-28cf-4aa2-aa66-ac0275df14dd",
- "alertType": "Anomaly",
- "alertInfo": {
- "anomalyAlertingConfigurationId": "1bc6052e-9a2a-430b-9cbd-80cd07a78c64",
- "alertId": "172536dbc00",
- "timestamp": "2020-05-27T00:00:00Z",
- "createdTime": "2020-05-29T10:04:45.590Z",
- "modifiedTime": "2020-05-29T10:04:45.590Z"
- },
- "callBackUrl": "https://kensho2-api.azurewebsites.net/alert/anomaly/configurations/1bc6052e-9a2a-430b-9cbd-80cd07a78c64/alerts/172536dbc00/incidents"
-}]
-}
-```
-
-**Step 3 (optional).** Store your certificate in Azure Key Vault and get its identifier
-As mentioned, if accessing the endpoint needs to be authenticated by a certificate, the certificate should be stored in Azure Key vaults.
-- Check [Set and retrieve a certificate from Azure Key Vault using the Azure portal](../../../key-vault/certificates/quick-create-portal.md).
-- Select the certificate you've added, then copy the "Certificate identifier".
-- Then select "Access policies" and "Add access policy". Grant "get" permission for "Key permissions", "Secret permissions", and "Certificate permissions". Select the name of your Metrics Advisor resource as the principal, then select "Add" and "Save" on the "Access policies" page.
-**Step 4.** Receive anomaly notifications
-When a notification is pushed through a web hook, you can fetch incident data by calling the "callBackUrl" in the webhook request body. Details for this API:
--- [/alert/anomaly/configurations/{configurationId}/alerts/{alertId}/incidents](https://westus2.dev.cognitive.microsoft.com/docs/services/MetricsAdvisor/operations/getIncidentsFromAlertByAnomalyAlertingConfiguration)-
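
Here's a minimal sketch of a receiving endpoint using Flask. Assumptions to verify for your environment: HTTPS termination is handled in front of the app, and the callback's authentication header (shown here as `Ocp-Apim-Subscription-Key`) should be confirmed against the API reference above.

```python
import requests
from flask import Flask, jsonify, request

app = Flask(__name__)
MA_SUBSCRIPTION_KEY = "<your-metrics-advisor-key>"   # placeholder

@app.route("/metrics-advisor/alert", methods=["POST"])
def receive_alert():
    payload = request.get_json(silent=True)
    # The validation call has an empty body: just return 200 so the hook passes validation.
    if not payload or "value" not in payload:
        return jsonify({"status": "ok"}), 200

    for alert in payload["value"]:
        # Fetch the incidents behind this alert. The auth header below is an assumption;
        # confirm the required headers against the API reference linked above.
        resp = requests.get(
            alert["callBackUrl"],
            headers={"Ocp-Apim-Subscription-Key": MA_SUBSCRIPTION_KEY},
            timeout=30,
        )
        resp.raise_for_status()
        print(alert["alertInfo"]["alertId"], resp.json())

    return jsonify({"status": "ok"}), 200
```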
-By using web hook and Azure Logic Apps, it's possible to send email notification **without an SMTP server configured**. Refer to the tutorial of [enable anomaly notification in Metrics Advisor](../tutorials/enable-anomaly-notification.md#send-notifications-with-azure-logic-apps-teams-and-smtp) for detailed steps.
-
-### Azure DevOps
-
-Metrics Advisor also supports automatically creating a work item in Azure DevOps to track issues/bugs when any anomaly is detected. All alerts can be sent through Azure DevOps hooks.
-
-To create an Azure DevOps hook, you will need to add the following information:
-
-|Parameter |Description |
-|||
-| Name | A name for the hook |
-| Organization | The organization that your DevOps belongs to |
-| Project | The specific project in DevOps. |
-| Access Token | A token for authenticating to DevOps. |
-
-> [!Note]
-> You need to grant write permissions if you want Metrics Advisor to create work items based on anomaly alerts.
-> After creating hooks, you can use them in any of your alert settings. Manage your hooks in the **hook settings** page.
-
-## Add or edit alert settings
-
-Go to metrics detail page to find the **Alert settings** section, in the bottom-left corner of the metrics detail page. It lists all alert settings that apply to the selected detection configuration. When a new detection configuration is created, there's no alert setting, and no alerts will be sent.
-You can use the **add**, **edit** and **delete** icons to modify alert settings.
--
-Select the **add** or **edit** buttons to get a window to add or edit your alert settings.
--
-**Alert setting name**: The name of the alert setting. It will be displayed in the alert email title.
-
-**Hooks**: The list of hooks to send alerts to.
-
-The section marked in the screenshot above contains the settings for one detection configuration. You can set different alert settings for different detection configurations. Choose the target configuration using the third drop-down list in this window.
-
-### Filter settings
-
-The following are filter settings for one detection configuration.
-
-**Alert For** has four options for filtering anomalies:
-
-* **Anomalies in all series**: All anomalies will be included in the alert.
-* **Anomalies in the series group**: Filter series by dimension values. Set specific values for some dimensions. Anomalies will only be included in the alert when the series matches the specified value.
-* **Anomalies in favorite series**: Only the series marked as favorite will be included in the alert.
-* **Anomalies in top N of all series**: This filter is for cases where you only care about series whose values are in the top N. Metrics Advisor will look back over previous timestamps and check whether values of the series at these timestamps are in the top N. If the "in top N" count is larger than the specified number, the anomaly will be included in an alert (see the sketch after this list).
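
To make the top-N rule concrete, here's a rough Python sketch of the check described above; the exact comparison the service uses may differ, so treat this as an illustration only:

```python
def include_in_alert(series_values, all_series_values, top_n,
                     lookback_timestamps, threshold_count):
    """series_values: {timestamp: value} for the series that has the anomaly.
    all_series_values: {timestamp: [values of every series at that timestamp]}.
    """
    recent = sorted(series_values)[-lookback_timestamps:]
    in_top_count = 0
    for ts in recent:
        top_values = sorted(all_series_values[ts], reverse=True)[:top_n]
        if series_values[ts] >= min(top_values):   # this series is in the top N at ts
            in_top_count += 1
    # "If the 'in top N' count is larger than the specified number, include the anomaly."
    return in_top_count > threshold_count
```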
-
-**Filter anomaly options** are an extra filter with the following options:
--- **Severity**: The anomaly will only be included when the anomaly severity is within the specified range.-- **Snooze**: Stop alerts temporarily for anomalies in the next N points (period), when triggered in an alert.
- - **snooze type**: When set to **Series**, a triggered anomaly will only snooze its series. For **Metric**, one triggered anomaly will snooze all the series in this metric.
- - **snooze number**: the number of points (period) to snooze.
- - **reset for non-successive**: When selected, a triggered anomaly will only snooze the next n successive anomalies. If one of the following data points isn't an anomaly, the snooze will be reset from that point; When unselected, one triggered anomaly will snooze next n points (period), even if successive data points aren't anomalies.
-- **value** (optional): Filter by value. Only anomalies on points whose values meet the condition will be included. If you use the corresponding value of another metric, the dimension names of the two metrics should be consistent.
-Anomalies not filtered out will be sent in an alert.
-
-### Add cross-metric settings
-
-Select **+ Add cross-metric settings** in the alert settings page to add another section.
-
-The **Operator** selector is the logical relationship between the sections, and determines whether they trigger an alert (see the sketch after the table).
--
-|Operator |Description |
-|||
-|AND | Only send an alert if a series matches each alert section, and all data points are anomalies. If the metrics have different dimension names, an alert will never be triggered. |
-|OR | Send the alert if at least one section contains anomalies. |
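
The two operators can be summarized in a few lines (a sketch of the rule in the table above, not service code):

```python
def should_alert(section_results, operator):
    """section_results: list of booleans, one per cross-metric alert section,
    where True means that section found qualifying anomalies for the series."""
    if operator == "AND":
        # Alert only if every section matches (and the series share dimension names).
        return all(section_results)
    if operator == "OR":
        # Alert if at least one section contains anomalies.
        return any(section_results)
    raise ValueError(f"Unknown operator: {operator}")
```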
--
-## Next steps
--- [Adjust anomaly detection using feedback](anomaly-feedback.md)-- [Diagnose an incident](diagnose-an-incident.md).-- [Configure metrics and fine tune detection configuration](configure-metrics.md)
applied-ai-services Anomaly Feedback https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/metrics-advisor/how-tos/anomaly-feedback.md
- Title: Provide anomaly feedback to the Metrics Advisor service-
-description: Learn how to send feedback on anomalies found by your Metrics Advisor instance, and tune the results.
----- Previously updated : 11/24/2020---
-# Provide anomaly feedback
-
-User feedback is one of the most important methods to discover defects within the anomaly detection system. Here we provide a way for users to mark incorrect detection results directly on a time series, and apply the feedback immediately. In this way, a user can teach the anomaly detection system how to do anomaly detection for a specific time series through active interactions.
-
-> [!NOTE]
-> Currently feedback will only affect anomaly detection results by **Smart detection** but not **Hard threshold** and **Change threshold**.
-
-## How to give time series feedback
-
-You can provide feedback from the metric detail page on any series. Just select any point, and you will see the below feedback dialog. It shows you the dimensions of the series you've chosen. You can reselect dimension values, or even remove some of them to get a batch of time series data. After choosing a time series, select the **Add** button to add the feedback; there are four kinds of feedback you can give. To append multiple feedback items, select the **Save** button once you complete your annotations.
---
-### Mark the anomaly point type
-
-As shown in the image below, the feedback dialog will fill the timestamp of your chosen point automatically, though you can edit this value. You then select whether you want to identify this item as an `Anomaly`, `NotAnomaly`, or `AutoDetect`.
--
-The selection will apply your feedback to the future anomaly detection processing of the same series. The processed points will not be recalculated. That means if you marked an Anomaly as NotAnomaly, we will suppress similar anomalies in the future, and if you marked a `NotAnomaly` point as `Anomaly`, we will tend to detect similar points as `Anomaly` in the future. If `AutoDetect` is chosen, any previous feedback on the same point will be ignored in the future.
-
-## Provide feedback for multiple continuous points
-
-If you would like to give anomaly feedback for multiple continuous points at the same time, select the group of points you want to annotate. You will see the chosen time-range automatically filled when you provide anomaly feedback.
--
-To view if an individual point is affected by your anomaly feedback, when browsing a time series, select a single point. If its anomaly detection result has been changed by feedback, the tooltip will show **Affected by feedback: true**. If it shows **Affected by feedback: false**, this means an anomaly feedback calculation was performed for this point, but the anomaly detection result should not be changed.
--
-There are some situations where we do not suggest giving feedback:
-- The anomaly is caused by a holiday. It's suggested to use a preset event to solve this kind of false alarm, as it will be more precise.
-- The anomaly is caused by a known data source change. For example, an upstream system change happened at that time. In this situation, it is expected to give an anomaly alert since our system didn't know what caused the value change and when similar value changes will happen again. Thus we don't suggest annotating this kind of issue as `NotAnomaly`.
-## Change points
-
-Sometimes the trend change of data will affect anomaly detection results. When a decision is made as to whether a point is an anomaly or not, the latest window of history data will be taken into consideration. When your time series has a trend change, you can mark the exact change point; this will help our anomaly detector in future analysis.
-
-As the figure below shows, you could select `ChangePoint` for the feedback Type, and select `ChangePoint`, `NotChangePoint`, or `AutoDetect` from the pull-down list.
--
-> [!NOTE]
-> If your data keeps changing, you will only need to mark one point as a `ChangePoint`, so if you marked a `timerange`, we will fill the last point's timestamp and time automatically. In this case, your annotation will only affect anomaly detection results after 12 points.
-
-## Seasonality
-
-For seasonal data, when we perform anomaly detection, one step is to estimate the period (seasonality) of the time series and apply it to the anomaly detection phase. Sometimes it's hard to identify a precise period, and the period may also change. An incorrectly defined period may have side effects on your anomaly detection results. You can find the current period from a tooltip; its name is `Min Period`. `Window` is a recommended window size to detect the last point within the window. `Window` is usually (`Min Period` &times; 4) + 1.
--
-You can provide feedback for period to fix this kind of anomaly detection error. As the figure shows, you can set a period value. The unit `interval` means one granularity. Here zero intervals means the data is non-seasonal. You could also select `AutoDetect` if you want to cancel previous feedback and let the pipeline detect period automatically.
-
-> [!NOTE]
-> When setting period you do not need to assign a timestamp or timerange, the period will affect future anomaly detections on whole timeseries from the moment you give feedback.
---
-## Provide comment feedback
-
-You can also add comments to annotate and provide context to your data. To add comments, select a time range and add the text for your comment.
--
-## Time series batch feedback
-
-As previously described, the feedback modal allows you to reselect or remove dimension values, to get a batch of time series defined by a dimension filter. You can also open this modal by clicking the "+" button for Feedback from the left panel, and select dimensions and dimension values.
---
-## How to view feedback history
-
-There are two ways to view feedback history. You can select the feedback history button from the left panel, and will see a feedback list modal. It lists all the feedback you've given before either for single series or dimension filters.
--
-Another way to view feedback history is from a series. You will see several buttons on the upper right corner of each series. Select the show feedback button, and the line will switch from showing anomaly points to showing feedback entries. The green flag represents a change point, and the blue points are other feedback points. You could also select them, and will get a feedback list modal that lists the details of the feedback given for this point.
---
-> [!NOTE]
-> Anyone who has access to the metric is permitted to give feedback, so you may see feedback given by other datafeed owners. If you edit the same point as someone else, your feedback will overwrite the previous feedback entry.
-
-## Next steps
-- [Diagnose an incident](diagnose-an-incident.md).-- [Configure metrics and fine tune detection configuration](configure-metrics.md)-- [Configure alerts and get notifications using a hook](../how-tos/alerts.md)
applied-ai-services Configure Metrics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/metrics-advisor/how-tos/configure-metrics.md
- Title: Configure your Metrics Advisor instance using the web portal-
-description: How to configure your Metrics Advisor instance and fine-tune the anomaly detection results.
----- Previously updated : 05/12/2022---
-# Configure metrics and fine tune detection configuration
-
-Use this article to start configuring your Metrics Advisor instance using the web portal and fine-tune the anomaly detection results.
-
-## Metrics
-
-To browse the metrics for a specific data feed, go to the **Data feeds** page and select one of the feeds. This will display a list of metrics associated with it.
--
-Select one of the metric names to see its details. In this view, you can switch to another metric in the same data feed using the drop-down list in the top right corner of the screen.
-
-When you first view a metric's details, you can load a time series by letting Metrics Advisor choose one for you, or by specifying values to be included for each dimension.
-
-You can also select time ranges, and change the layout of the page.
-
-> [!NOTE]
-> - The start time is inclusive.
-> - The end time is exclusive.
-
-You can select the **Incidents** tab to view anomalies, and find a link to the [Incident hub](diagnose-an-incident.md).
-
-## Tune the detection configuration
-
-A metric can apply one or more detection configurations. There's a default configuration for each metric, which you can edit or add to, according to your monitoring needs.
-
-### Detection configuration auto-tuning based on anomaly preference
-
-Detection configuration auto-tuning is a new feature released in Metrics Advisor, to help address the following scenarios:
--- Depending on the use case, certain types of anomalies may be of more significant interest. Sometimes you may be interested in sudden spikes or dips, but in other cases, spikes/dips or transient anomalies aren't critical. Previously it was hard to distinguish the configuration for different anomaly types. The new auto-tuning feature makes this distinction between anomaly types possible. As of now, there are five supported anomaly patterns:
- * Spike
- * Dip
- * Increase
- * Decrease
- * Steady
--- Sometimes, there may be many dimensions within one metric, which will split the metric into hundreds, thousands, or even more time series to be monitored. However, some of these dimensions often aren't equally important. Take revenue as an example: the numbers for small regions or a niche product category might be quite small, and therefore not that stable, but at the same time not necessarily critical. The new auto-tuning feature has made it possible to fine tune the configuration based on the series value range.
-This means you don't have to spend as much effort fine tuning your configuration again and again, and it also reduces alert noise.
-
-> [!NOTE]
-> The auto-tuning feature is only applied on the 'Smart detection' method.
-
-#### Prerequisite for triggering auto-tuning
-
-After the metrics are onboarded to Metrics Advisor, the system will try to perform statistics on the metrics to categorize **anomaly pattern** types and **series value** distribution. By providing this functionality, you can further fine tune the configuration based on your specific preferences. At the beginning, it will show a status of **Initializing**.
--
-#### Choose to enable auto-tuning on anomaly pattern and series value
-
-The feature enables you to tune the detection configuration from two perspectives: **anomaly pattern** and **series value**. Based on your specific use case, you can choose which one to enable, or enable both.
-- For the **anomaly pattern** option, the system lists the different anomaly patterns that were observed for the metric. You can select the ones you're interested in; the unselected patterns will have their sensitivity **reduced** by default.
-- For the **series value** option, your selection depends on your specific use case. You have to decide whether to use a higher sensitivity for series with higher values and decrease sensitivity on low-value ones, or vice versa. Then check the checkbox.
-#### Tune the configuration for selected anomaly patterns
-
-If specific anomaly patterns are chosen, the next step is to fine tune the configuration for each. There's a global **sensitivity** that is applied for all series. For each anomaly pattern, you can tune the **adjustment**, which is based on the global **sensitivity**.
-
-You must tune each anomaly pattern that has been chosen individually.
--
-#### Tune the configuration for each series value group
-
-After the system generates statistics on all time series within the metric, several series value groups are created automatically. As described above, you can fine tune the **adjustment** for each series value group according to your specific business needs.
-
-There will be a default adjustment configured to get the best detection results, but it can be further tuned.
--
-#### Set up alert rules
-
-Even once the detection configuration on capturing valid anomalies is tuned, it's still important to input **alert
-rules** to make sure the final alert rules can meet eventual business needs. There are a number of rules that can be set, like **filter rules** or **snooze continuous alert rules**.
--
-After configuring all the settings described in the section above, the system will orchestrate them together and automatically detect anomalies based on the preferences you entered. The goal is to get the best configuration that works for each metric, which can be achieved much more easily by using the new **auto-tuning** capability.
-
-### Tune the configuration for all series in current metric
-
-This configuration will be applied to all the series in this metric, except for ones with a separate configuration. A metric-level configuration is applied by default when data is onboarded, and is shown on the left panel. You can directly edit the metric-level configuration on the metric page.
-
-There are additional parameters like **Direction**, and **Valid anomaly** that can be used to further tune the configuration. You can combine different detection methods as well.
--
-### Tune the configuration for a specific series or group
-
-Select **Advanced configuration** below the metric-level configuration options to see the group-level configuration. You can add a configuration for an individual series or a group of series by selecting the **+** icon in this window. The parameters are similar to the metric-level configuration parameters, but you may need to specify at least one dimension value for a group-level configuration to identify a group of series, and specify all dimension values for a series-level configuration to identify a specific series.
-
-This configuration will be applied to the group of series or specific series instead of the metric level configuration. After setting the conditions for this group, save it.
--
-### Anomaly detection methods
-
-Metrics Advisor offers multiple anomaly detection methods: **Hard threshold, Smart detection, Change threshold**. You can use one or combine them using logical operators by clicking the **'+'** button.
-
-**Hard threshold**
-
- Hard threshold is a basic method for anomaly detection. You can set an upper and/or lower bound to determine the expected value range. Any point that falls outside the boundary will be identified as an anomaly.
-
-**Smart detection**
-
-Smart detection is powered by machine learning that learns patterns from historical data, and uses them for future detection. When using this method, the **Sensitivity** is the most important parameter for tuning the detection results. You can drag it to a smaller or larger value to affect the visualization on the right side of the page. Choose one that fits your data and save it.
--
-In smart detection mode, the sensitivity and boundary version parameters are used to fine-tune the anomaly detection result.
-
-Sensitivity can affect the width of the expected value range of each point. When increased, the expected value range will be tighter, and more anomalies will be reported:
--
-When the sensitivity is turned down, the expected value range will be wider, and fewer anomalies will be reported:
--
-**Change threshold**
-
-Change threshold is normally used when metric data generally stays around a certain range. The threshold is set according to **Change percentage**. The **Change threshold** mode is able to detect anomalies in the following scenarios:
-
-* Your data is normally stable and smooth. You want to be notified when there are fluctuations.
-* Your data is normally unstable and fluctuates a lot. You want to be notified when it becomes too stable or flat.
-
-Use the following steps to use this mode:
-
-1. Select **Change threshold** as your anomaly detection method when you set the anomaly detection configurations for your metrics or time series.
-
- :::image type="content" source="../media/metrics/change-threshold.png" alt-text="change threshold":::
-
-2. Select the **out of the range** or **in the range** parameter based on your scenario.
-
- If you want to detect fluctuations, select **out of the range**. For example, with the settings below, any data point that changes over 10% compared to the previous one will be detected as an outlier.
- :::image type="content" source="../media/metrics/out-of-the-range.png" alt-text="out of range parameter":::
-
- If you want to detect flat lines in your data, select **in the range**. For example, with the settings below, any data point that changes within 0.01% compared to the previous one will be detected as an outlier. Because the threshold is so small (0.01%), it detects flat lines in the data as outliers.
-
- :::image type="content" source="../media/metrics/in-the-range.png" alt-text="In range parameter":::
-
-3. Set the percentage of change that will count as an anomaly, and which previously captured data points will be used for comparison. This comparison is always between the current data point, and a single data point N points before it.
-
- **Direction** is only valid if you're using the **out of the range** mode:
-
- * **Up** configures detection to only detect anomalies when (current data point) - (comparison data point) > **+** threshold percentage.
- * **Down** configures detection to only detect anomalies when (current data point) - (comparison data point) < **-** threshold percentage. (See the sketch below.)
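
A compact Python sketch of that comparison, following the descriptions above (assumed semantics, for illustration only):

```python
def is_change_threshold_anomaly(current, comparison, change_percentage,
                                mode="out of the range", direction="both"):
    """current: the current data point; comparison: the data point N points before it.
    change_percentage: threshold expressed as a percentage, e.g. 10 for 10%."""
    if comparison == 0:
        raise ValueError("comparison point must be nonzero to compute a percentage change")
    change = (current - comparison) / abs(comparison) * 100

    if mode == "in the range":
        # Flat-line detection: the change stays within the (small) threshold.
        return abs(change) <= change_percentage

    # "out of the range": fluctuation detection, optionally restricted by direction.
    if direction == "up":
        return change > change_percentage
    if direction == "down":
        return change < -change_percentage
    return abs(change) > change_percentage  # direction == "both"
```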
---
-## Preset events
-
-Sometimes, expected events and occurrences (such as holidays) can generate anomalous data. Using preset events, you can add flags to the anomaly detection output, during specified times. This feature should be configured after your data feed is onboarded. Each metric can only have one preset event configuration.
-
-> [!Note]
-> Preset event configuration will take holidays into consideration during anomaly detection, and may change your results. It will be applied to the data points ingested after you save the configuration.
-
-Select the **Configure Preset Event** button next to the metrics drop-down list on each metric details page.
--
-In the window that appears, configure the options according to your usage. Make sure **Enable holiday event** is selected to use the configuration.
-
-The **Holiday event** section helps you suppress unnecessary anomalies detected during holidays. There are two options for the **Strategy** option that you can apply:
-
-* **Suppress holiday**: Suppresses all anomalies and alerts in anomaly detection results during holiday period.
-* **Holiday as weekend**: Calculates the average expected values of several corresponding weekends before the holiday, and bases the anomaly status off of these values.
-
-There are several other values you can configure:
-
-|Option |Description |
-|||
-|**Choose one dimension as country** | Choose a dimension that contains country information. For example, a country code. |
-|**Country code mapping** | The mapping between a standard [country code](https://wikipedia.org/wiki/ISO_3166-1_alpha-2), and chosen dimension's country data. |
-|**Holiday options** | Whether to take into account all holidays, only PTO (Paid Time Off) holidays, or only Non-PTO holidays. |
-|**Days to expand** | The impacted days before and after a holiday. |
-
-The **Cycle event** section can be used in some scenarios to help reduce unnecessary alerts by using cyclic patterns in the data. For example:
--- Metrics that have multiple patterns or cycles, such as both a weekly and monthly pattern.-- Metrics that don't have a clear pattern, but the data is comparable Year over Year (YoY), Month over Month (MoM), Week Over Week (WoW), or Day Over Day (DoD).-
-Not all options are selectable for every granularity. The available options per granularity are below (✔ for available, X for unavailable):
-
-| Granularity | YoY | MoM | WoW | DoD |
-|:-|:-|:-|:-|:-|
-| Yearly | X | X | X | X |
-| Monthly | X | X | X | X |
-| Weekly | ✔ | X | X | X |
-| Daily | ✔ | ✔ | ✔ | X |
-| Hourly | ✔ | ✔ | ✔ | ✔ |
-| Minutely | X | X | X | X |
-| Secondly | X | X | X | X |
-| Custom* | ✔ | ✔ | ✔ | ✔ |
-
-When using a custom granularity in seconds, these options are only available if the granularity is longer than one hour and less than one day.
-
-Cycle event is used to reduce anomalies if they follow a cyclic pattern, but it will report an anomaly if multiple data points don't follow the pattern. **Strict mode** is used to enable anomaly reporting if even one data point doesn't follow the pattern.
--
-## View recent incidents
-
-Metrics Advisor detects anomalies on all your time series data as they're ingested. However, not all anomalies need to be escalated, because they might not have a significant impact. Aggregation will be performed on anomalies to group related ones into incidents. You can view these incidents from the **Incident** tab in metrics details page.
-
-Select an incident to go to the **Incidents analysis** page where you can see more details about it. Select **Manage incidents in new Incident hub**, to find the [Incident hub](diagnose-an-incident.md) page where you can find all incidents under the specific metric.
-
-## Subscribe anomalies for notification
-
-If you'd like to get notified whenever an anomaly is detected, you can subscribe to alerts for the metric using a hook. For more information, see [configure alerts and get notifications using a hook](alerts.md).
-
-## Next steps
-- [Configure alerts and get notifications using a hook](alerts.md)-- [Adjust anomaly detection using feedback](anomaly-feedback.md)-- [Diagnose an incident](diagnose-an-incident.md).-
applied-ai-services Credential Entity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/metrics-advisor/how-tos/credential-entity.md
- Title: Create a credential entity-
-description: How to create a credential entity to manage your credentials securely.
----- Previously updated : 06/22/2021---
-# How-to: Create a credential entity
-
-When onboarding a data feed, you should select an authentication type. Some authentication types, like *Azure SQL Connection String* and *Service Principal*, need a credential entity to store credential-related information, so that your credentials are managed securely. This article explains how to create a credential entity for different credential types in Metrics Advisor.
-
-
-## Basic procedure: Create a credential entity
-
-You can create a **credential entity** to store credential-related information and use it to authenticate to your data sources. You can share the credential entity with others and enable them to connect to your data sources without sharing the actual credentials. A credential entity can be created in the 'Adding data feed' tab or the 'Credential entity' tab. After creating a credential entity for a specific authentication type, you can simply choose that credential entity when adding a new data feed, which is helpful when creating multiple data feeds. The general procedure for creating and using a credential entity is shown below:
-
-1. Select '+' to create a new credential entity in the 'Adding data feed' tab (you can also create one in the 'Credential entity' tab).
-
- ![create credential entity](../media/create-credential-entity.png)
-
-2. Set the credential entity name, description (if needed), credential type (equal to the *authentication type*), and other settings.
-
- ![set credential entity](../media/set-credential-entity.png)
-
-3. After creating a credential entity, you can choose it when specifying authentication type.
-
- ![choose credential entity](../media/choose-credential-entity.png)
-
-There are **four credential types** in Metrics Advisor: Azure SQL Connection String, Azure Data Lake Storage Gen2 Shared Key Entity, Service Principal, and Service Principal from Key Vault. For the settings of each credential type, see the following instructions.
-
-## Azure SQL Connection String
-
-You should set the **Name** and **Connection String**, then select 'create'.
-
-![set credential entity for sql connection string](../media/credential-entity/credential-entity-sql-connection-string.png)
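The exact connection string format depends on your database, but for Azure SQL Database it is typically an ADO.NET-style string similar to the following (the server, database, and sign-in values are placeholders):

```
Data Source=<your-server-name>.database.windows.net;Initial Catalog=<your-database-name>;User ID=<your-user-name>;Password=<your-password>
```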
-
-## Azure Data Lake Storage Gen2 Shared Key Entity
-
-You should set the **Name** and **Account Key**, then select 'create'. The account key can be found in your Azure Storage Account (Azure Data Lake Storage Gen2) resource, under the **Access keys** setting.
-
-![set credential entity for data lake](../media/credential-entity/credential-entity-data-lake.png)
-
-## Service principal
-
-To create a service principal for your data source, you can follow the detailed instructions in [Connect different data sources](../data-feeds-from-different-sources.md). After creating a service principal, you need to fill in the following configurations in the credential entity.
-
-![sp credential entity](../media/credential-entity/credential-entity-service-principal.png)
-
-* **Name:** Set a name for your service principal credential entity.
-* **Tenant ID & Client ID:** After creating a service principal in Azure portal, you can find `Tenant ID` and `Client ID` in **Overview**.
-
- ![sp client ID and tenant ID](../media/credential-entity/sp-client-tenant-id.png)
-
-* **Client Secret:** After creating a service principal in the Azure portal, go to **Certificates & Secrets** to create a new client secret, and use its **value** as the `Client Secret` in the credential entity. (Note: the value only appears once, so store it somewhere safe.)
--
- ![sp Client secret value](../media/credential-entity/sp-secret-value.png)
-
-## <span id="sp-from-kv">Service principal from Key Vault</span>
-
-There are several steps to create a service principal from key vault.
-
-**Step 1. Create a Service Principal and grant it access to your database.** You can follow detailed instructions in [Connect different data sources](../data-feeds-from-different-sources.md), in creating service principal section for each data source.
-
-After creating a service principal in Azure portal, you can find `Tenant ID` and `Client ID` in **Overview**. The **Directory (tenant) ID** should be `Tenant ID` in credential entity configurations.
-
-![sp client ID and tenant ID](../media/credential-entity/sp-client-tenant-id.png)
-
-**Step 2. Create a new client secret.** Go to **Certificates & Secrets** to create a new client secret; the **value** will be used in the next steps. (Note: the value only appears once, so store it somewhere safe.)
-
-![sp Client secret value](../media/credential-entity/sp-secret-value.png)
-
-**Step 3. Create a key vault.** In [Azure portal](https://portal.azure.com/#home), select **Key vaults** to create one.
-
-![create a key vault in azure portal](../media/credential-entity/create-key-vault.png)
-
-After creating a key vault, the **Vault URI** is the `Key Vault Endpoint` in MA (Metrics Advisor) credential entity.
-
-![key vault endpoint](../media/credential-entity/key-vault-endpoint.png)
-
-**Step 4. Create secrets for Key Vault.** In the Azure portal for your key vault, generate two secrets in **Settings->Secrets**.
-The first is for the `Service Principal Client Id`, and the other is for the `Service Principal Client Secret`; both of their names will be used in the credential entity configuration.
-
-![generate secrets](../media/credential-entity/generate-secrets.png)
-
-* **Service Principal Client ID:** Set a `Name` for this secret. The name will be used in the credential entity configuration, and the value should be your service principal's `Client ID` from **Step 1**.
-
- ![secret1: sp client id](../media/credential-entity/secret-1-sp-client-id.png)
-
-* **Service Principal Client Secret:** Set a `Name` for this secret. The name will be used in the credential entity configuration, and the value should be your service principal's `Client Secret Value` from **Step 2**.
-
- ![secret2: sp client secret](../media/credential-entity/secret-2-sp-secret-value.png)
-
-At this point, the *client ID* and *client secret* of the first service principal are stored in Key Vault. Next, you need to create another service principal that is used to access the key vault. Therefore, you should **create two service principals** in total: one whose client ID and client secret are stored in the key vault, and another that is used to access the key vault.
-
-**Step 5. Create a service principal to store the key vault.**
-
-1. Go to [Azure portal AAD (Azure Active Directory)](https://portal.azure.com/?trace=diagnostics&feature.customportal=false#blade/Microsoft_AAD_IAM/ActiveDirectoryMenuBlade/Overview) and create a new registration.
-
- ![create a new registration](../media/credential-entity/create-registration.png)
-
- After creating the service principal, the **Application (client) ID** in Overview will be the `Key Vault Client ID` in credential entity configuration.
-
-2. In **Manage->Certificates & Secrets**, create a client secret by selecting 'New client secret'. Then you should **copy down the value**, because it appears only once. The value is `Key Vault Client Secret` in credential entity configuration.
-
- ![add client secret](../media/credential-entity/add-client-secret.png)
-
-**Step 6. Grant the service principal access to Key Vault.** Go to the key vault resource you created. In **Settings->Access policies**, select 'Add Access Policy' to create a connection between the key vault and the second service principal from **Step 5**, and then select 'Save'.
-
-![grant sp to key vault](../media/credential-entity/grant-sp-to-kv.png)
--
-## Configurations conclusion
-To conclude, the credential entity configurations in Metrics Advisor for *Service Principal from Key Vault*, and where to get each of them, are shown in the table below:
-
-| Configuration | How to get |
-| --- | --- |
-| Key Vault Endpoint | **Step 3:** Vault URI of key vault. |
-| Tenant ID | **Step 1:** Directory (tenant) ID of your first service principal. |
-| Key Vault Client ID | **Step 5:** The Application (client) ID of your second service principal. |
-| Key Vault Client Secret | **Step 5:** The client secret value of your second service principal. |
-| Service Principal Client ID Name | **Step 4:** The secret name you set for Client ID. |
-| Service Principal Client Secret Name | **Step 4:** The secret name you set for Client Secret Value. |
--
-## Next steps
-
-- [Onboard your data](onboard-your-data.md)
-- [Connect different data sources](../data-feeds-from-different-sources.md)
applied-ai-services Diagnose An Incident https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/metrics-advisor/how-tos/diagnose-an-incident.md
- Title: Diagnose an incident using Metrics Advisor-
-description: Learn how to diagnose an incident using Metrics Advisor, and get detailed views of anomalies in your data.
----- Previously updated : 04/15/2021---
-# Diagnose an incident using Metrics Advisor
-
-## What is an incident?
-
-When there are anomalies detected on multiple time series within one metric at a particular timestamp, Metrics Advisor will automatically group anomalies that **share the same root cause** into one incident. An incident usually indicates a real issue; Metrics Advisor performs analysis on top of it and provides automatic root-cause analysis insights.
-
-This significantly reduces the effort needed to view each individual anomaly, and helps you quickly find the most important contributing factor to an issue.
-
-An alert generated by Metrics Advisor may contain multiple incidents and each incident may contain multiple anomalies captured on different time series at the same timestamp.
-
-## Paths to diagnose an incident
--- **Diagnose from an alert notification**-
- If you've configured a hook of the email/Teams type and applied at least one alerting configuration, you'll receive continuous alert notifications escalating incidents that are analyzed by Metrics Advisor. Within the notification, there's an incident list and a brief description. For each incident, there's a **"Diagnose"** button; selecting it will direct you to the incident detail page to view diagnostic insights.
-
- :::image type="content" source="../media/diagnostics/alert-notification.png" alt-text="Diagnose from an alert notification":::
--- **Diagnose from an incident in "Incident hub"**-
- There's a central place in Metrics Advisor that gathers all incidents that have been captured and makes it easy to track any ongoing issues. Selecting the **Incident hub** tab in the left navigation bar lists all incidents within the selected metrics. Within the incident list, select one of them to view detailed diagnostic insights.
-
- :::image type="content" source="../media/diagnostics/incident-list.png" alt-text="Diagnose from an incident in Incident hub":::
--- **Diagnose from an incident listed in metrics page**-
- Within the metrics detail page, there's a tab named **Incidents** which lists the latest incidents captured for this metric. The list can be filtered by the severity of the incidents or the dimension value of the metrics.
-
- Selecting one incident in the list will direct you to the incident detail page to view diagnostic insights.
-
- :::image type="content" source="../media/diagnostics/incident-in-metrics.png" alt-text="Diagnose from an incident listed in metrics page":::
-
-## Typical diagnostic flow
-
-After being directed to the incident detail page, you're able to take advantage of the insights that are automatically analyzed by Metrics Advisor to quickly locate the root cause of an issue, or use the analysis tool to further evaluate the issue's impact. There are three sections in the incident detail page, which correspond to three major steps to diagnosing an incident.
-
-### Step 1. Check summary of current incident
-
-The first section lists a summary of the current incident, including basic information, actions & tracings, and an analyzed root cause.
-- Basic information includes the "top impacted series" with a diagram, "impact start & end time", "incident severity", and "total anomalies included". By reading this, you can get a basic understanding of an ongoing issue and its impact.
-- Actions & tracings are used to facilitate team collaboration on an ongoing incident. Sometimes one incident may need cross-team members' effort to analyze and resolve it. Everyone who has permission to view the incident can add an action or a tracing event.
-
- For example, after the incident is diagnosed and the root cause is identified, an engineer can add a tracing item with the type "customized" and enter the root cause in the comment section. Leave the status as "Active". Then other teammates can share the same info and know there's someone working on the fix. You can also add an "Azure DevOps" item to track the incident with a specific task or bug.
-- Analyzed root cause is an automatically analyzed result. Metrics Advisor analyzes all anomalies that are captured on time series within one metric, with different dimension values, at the same timestamp. It then performs correlation and clustering to group related anomalies together and generates root cause advice.
-
-
-For metrics with multiple dimensions, it's common for multiple anomalies to be detected at the same time. However, those anomalies may share the same root cause. Instead of analyzing all anomalies one by one, leveraging the **Analyzed root cause** is the most efficient way to diagnose the current incident.
--
-### Step 2. View cross-dimension diagnostic insights
-
-After getting basic info and automatic analysis insights, you can get more detailed info on abnormal status on other dimensions within the same metric in a holistic way using the **"Diagnostic tree"**.
-
-For metrics with multiple dimensions, Metrics Advisor categorizes the time series into a hierarchy, which is named the **Diagnostic tree**. For example, a "revenue" metric is monitored by two dimensions: "region" and "category". In addition to the concrete dimension values, there also needs to be an **aggregated** dimension value, like **"SUM"**. The time series of "region" = **"SUM"** and "category" = **"SUM"** is categorized as the root node within the tree. Whenever there's an anomaly captured at the **"SUM"** dimension, it can be drilled down and analyzed to locate which specific dimension value has contributed the most to the parent node anomaly. Select each node to expand it and see detailed information.
---- To enable an "aggregated" dimension value in your metrics-
- Metrics Advisor supports performing "Roll-up" on dimensions to calculate an "aggregated" dimension value. The diagnostic tree supports diagnosing on the **"SUM", "AVG", "MAX", "MIN", and "COUNT"** aggregations. To enable an "aggregated" dimension value, enable the "Roll-up" function during data onboarding. Make sure your metric is **mathematically computable** and that the aggregated dimension has real business value.
-
- :::image type="content" source="../media/diagnostics/automatic-roll-up.png" alt-text="Roll-up settings":::
--- If there's no "aggregated" dimension value in your metrics-
- If there's no "aggregated" dimension value in your metrics and the "Roll-up" function isn't enabled during data onboarding, no metric value is calculated for the "aggregated" dimension. It shows up as a gray node in the tree and can be expanded to view its child nodes.
-
-#### Legend of diagnostic tree
-
-There are three kinds of nodes in the diagnostic tree:
-- **Blue node**, which corresponds to a time series with a real metric value.
-- **Gray node**, which corresponds to a virtual time series with no metric value; it's a logical node.
-- **Red node**, which corresponds to the top impacted time series of the current incident.
-
-For each node, the abnormal status is described by the color of the node border:
-- **Red border** means there's an anomaly captured on the time series corresponding to the incident timestamp.
-- **Non-red border** means there's no anomaly captured on the time series corresponding to the incident timestamp.
-
-#### Display mode
-
-There are two display modes for a diagnostic tree: only show anomaly series or show major proportions.
-
-- **Only show anomaly series mode** enables customers to focus on the current anomalies captured on different series and diagnose the root cause of the top impacted series.
-- **Show major proportions** enables customers to check the abnormal status of major proportions of the top impacted series. In this mode, the tree shows both series with anomalies detected and series with no anomalies, but focuses more on important series.
-
-#### Analyze options
--- **Show delta ratio**-
- "Delta ratio" is the percentage of current node delta compared to parent node delta. Here's the formula:
-
- (real value of current node - expected value of current node) / (real value of parent node - expected value of parent node) * 100%
-
- This is used to analyze the major contribution of parent node delta.
--- **Show value proportion**-
- "Value proportion" is the percentage of current node value compared to parent node value. Here's the formula:
-
- (real value of current node / real value of parent node) * 100%
-
- This is used to evaluate the proportion of current node within the whole.
-
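Written compactly (this is only a restatement of the two formulas above, where $v$ is the real value and $\hat{v}$ is the expected value of a node):

```latex
\text{delta ratio} = \frac{v_{\text{node}} - \hat{v}_{\text{node}}}{v_{\text{parent}} - \hat{v}_{\text{parent}}} \times 100\%
\qquad
\text{value proportion} = \frac{v_{\text{node}}}{v_{\text{parent}}} \times 100\%
```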
-By using the "Diagnostic tree", customers can locate the root cause of the current incident in a specific dimension. This significantly reduces the effort needed to view each individual anomaly or to pivot through different dimensions to find the major anomaly contribution.
-
-### Step 3. View cross-metrics diagnostic insights using "Metrics graph"
-
-Sometimes it's hard to analyze an issue by checking the abnormal status of a single metric, and you need to correlate multiple metrics together. Customers are able to configure a **Metrics graph**, which indicates the relationships between metrics. Refer to [How to build a metrics graph](metrics-graph.md) to get started.
-
-#### Check anomaly status on root cause dimension within "Metrics graph"
-
-By using the above cross-dimension diagnostic result, the root cause is limited to a specific dimension value. Then use the "Metrics graph" and filter by the analyzed root cause dimension to check anomaly status on other metrics.
-
-For example, suppose there's an incident captured on the "revenue" metric. The top impacted series is at the global region with "region" = "SUM". By using cross-dimension diagnostics, the root cause has been located at "region" = "Karachi". There's a pre-configured metrics graph including the metrics "revenue", "cost", "DAU", "PLT (page load time)", and "CHR (cache hit rate)".
-
-Metrics Advisor will automatically filter the metrics graph by the root cause dimension of "region" = "Karachi" and display the anomaly status of each metric. By analyzing the relationship between the metrics and their anomaly status, customers can gain further insight into the final root cause.
--
-#### Auto related anomalies
-
-By applying the root cause dimension filter on the metrics graph, anomalies on each metric at the timestamp of the current incident are automatically related. Those anomalies should be related to the identified root cause of the current incident.
--
-## Next steps
-
-- [Adjust anomaly detection using feedback](anomaly-feedback.md)
-- [Configure metrics and fine tune detection configuration](configure-metrics.md)
applied-ai-services Further Analysis https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/metrics-advisor/how-tos/further-analysis.md
- Title: Further analyze an incident and evaluate impact-
-description: Learn how to leverage analysis tools to further analyze an incident.
----- Previously updated : 04/15/2021---
-# Further analyze an incident and evaluate impact
-
-## Metrics drill down by dimensions
-
-When you're viewing incident information, you may need to get more detailed information, for example, for different dimensions, and timestamps. If your data has one or more dimensions, you can use the drill down function to get a more detailed view.
-
-To use the drill down function, click on the **Metric drilling** tab in the **Incident hub**.
--
-The **Dimensions** setting is a list of dimensions for an incident; you can select other available dimension values for each one, and the view is updated after the dimension values are changed. The **Timestamp** setting lets you view the current incident at different moments in time.
-
-### Select drilling options and choose a dimension
-
-There are two types of drill down options: **Drill down** and **Horizontal comparison**.
-
-> [!Note]
-> - For drill down, you can explore the data from different dimension values, except the currently selected dimensions.
-> - For horizontal comparison, you can explore the data from different dimension values, except the all-up dimensions.
--
-### Value comparison for different dimension values
-
-The second section of the drill down tab is a table with comparisons for different dimension values. It includes the value, baseline value, difference value, delta value and whether it is an anomaly.
--
-### Value and expected value comparisons for different dimension value
-
-The third section of the drill-down tab is a histogram with the values and expected values for different dimension values. The histogram is sorted by the difference between value and expected value, so you can easily find the unexpected value with the biggest impact. For example, in the above picture, we can see that, apart from the all-up value, **US7** contributes the most to the anomaly.
--
-### Raw value visualization
-The last part of the drill-down tab is a line chart of the raw values. With this chart, you don't need to navigate to the metric page to view details.
--
-## Compare time series
-
-Sometimes when an anomaly is detected on a specific time series, it's helpful to compare it with multiple other series in a single visualization.
-Click on the **Compare tools** tab, and then click on the blue **+ Add** button.
--
-Select a series from your data feed. You can choose the same granularity or a different one. Select the target dimensions and load the series trend, then click **Ok** to compare it with a previous series. The series will be put together in one visualization. You can continue to add more series for comparison and get further insights. Click the drop down menu at the top of the **Compare tools** tab to compare the time series data over a time-shifted period.
-
-> [!Warning]
-> To make a comparison, time series data analysis may require shifts in data points so the granularity of your data must support it. For example, if your data is weekly and you use the **Day over day** comparison, you will get no results. In this example, you would use the **Month over month** comparison instead.
-
-After selecting a time-shifted comparison, you can select whether you want to compare the data values, the delta values, or the percentage delta.
-
-## View similar anomalies using Time Series Clustering
-
-When viewing an incident, you can use the **Similar time-series-clustering** tab to see the various series associated with it. Series in one group are summarized together. From the above picture, we can see that there are at least two series groups. This feature is only available if the following requirements are met:
--- Metrics must have one or more dimensions or dimension values.-- The series within one metric must have a similar trend.-
-Available dimensions are listed at the top of the tab, and you can make a selection to specify the series.
-
applied-ai-services Manage Data Feeds https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/metrics-advisor/how-tos/manage-data-feeds.md
- Title: Manage data feeds in Metrics Advisor-
-description: Learn how to manage data feeds that you've added to Metrics Advisor.
----- Previously updated : 10/25/2022---
-# How to: Manage your data feeds
-
-This article guides you through managing your onboarded data feeds in Metrics Advisor.
-
-## Edit a data feed
-
-> [!NOTE]
-> The following details cannot be changed after a data feed has been created.
-> * Data feed ID
-> * Created Time
-> * Dimension
-> * Source Type
-> * Granularity
-
-Only the administrator of a data feed is allowed to make changes to it.
-
-On the data feed list page, you can **pause, reactivate, delete** a data feed:
-
-* **Pause/Reactivate**: Select the **Pause/Play** button to pause/reactivate a data feed.
-
-* **Delete**: Select the **Delete** button to delete a data feed.
-
-If you change the ingestion start time, you need to verify the schema again. You can change it by clicking **Edit** in the data feed detail page.
-
-## Backfill your data feed
-
-Select the **Backfill** button to trigger an immediate ingestion on a time-stamp, to fix a failed ingestion or override the existing data.
-- The start time is inclusive.
-- The end time is exclusive.
-- Anomaly detection is re-triggered on selected range only.
-
-
-## Manage permission of a data feed
-
-Azure operations can be divided into two categories - control plane and data plane. You use the control plane to manage resources in your subscription. You use the data plane to use capabilities exposed by your instance of a resource type.
-Metrics Advisor requires at least a 'Reader' role to use its capabilities, but a 'Reader' cannot perform edit/delete actions on the resource itself.
-
-Within Metrics Advisor, there are other fine-grained roles to enable permission control on specific entities, like data feeds, hooks, credentials, etc. There are two types of roles:
-
-- **Administrator**: Has full permissions to manage a data feed, hook, credentials, etc. including modify and delete.
-- **Viewer**: Has access to a read-only view of the data feed, hook, credentials, etc.
-
-## Advanced settings
-
-There are several optional advanced settings when creating a new data feed; they can be modified in the data feed detail page.
-
-### Ingestion options
-
-* **Ingestion time offset**: By default, data is ingested according to the specified granularity. For example, a metric with a *daily* timestamp will be ingested one day after its timestamp. You can use the offset to delay the time of ingestion with a *positive* number, or advance it with a *negative* number.
-
-* **Max concurrency**: Set this parameter if your data source supports limited concurrency. Otherwise leave at the default setting.
-
-* **Stop retry after**: If data ingestion has failed, it will be retried automatically within a period. The beginning of the period is the time when the first data ingestion happened. The length of the period is defined according to the granularity. If you leave the default value (-1), the value will be determined according to the granularity as below.
-
- | Granularity | Stop Retry After |
- | :--- | :--- |
- | Daily, Custom (>= 1 Day), Weekly, Monthly, Yearly | 7 days |
- | Hourly, Custom (< 1 Day) | 72 hours |
-
-* **Min retry interval**: You can specify the minimum interval to wait before retrying to pull data from the source. If you leave the default value (-1), the retry interval will be determined according to the granularity as below.
-
- | Granularity | Minimum Retry Interval |
- | :--- | :--- |
- | Daily, Custom (>= 1 Day), Weekly, Monthly | 30 minutes |
- | Hourly, Custom (< 1 Day) | 10 minutes |
- | Yearly | 1 day |
-
-### Fill gap when detecting:
-
-> [!NOTE]
-> This setting won't affect your data source and will not affect the data charts displayed on the portal. The auto-filling only occurs during anomaly detection.
-
-Sometimes series are not continuous. When there are missing data points, Metrics Advisor will use the specified value to fill them before anomaly detection to improve accuracy.
-The options are:
-
-* Using the value from the previous actual data point. This is used by default.
-* Using a specific value.
-
-### Action link template:
-
-Action link templates are used to predefine actionable HTTP URLs, which consist of the placeholders `%datafeed`, `%metric`, `%timestamp`, `%detect_config`, and `%tagset`. You can use the template to redirect from an anomaly or an incident to a specific URL to drill down.
--
-Once you've filled in the action link, click **Go to action link** in the incident list's action option or in the diagnostic tree's right-click menu. The placeholders in the action link template are replaced with the corresponding values of the anomaly or incident.
-
-| Placeholder | Examples | Comment |
-| - | -- | - |
-| `%datafeed` | - | Data feed ID |
-| `%metric` | - | Metric ID |
-| `%detect_config` | - | Detect config ID |
-| `%timestamp` | - | Timestamp of an anomaly or end time of a persistent incident |
-| `%tagset` | `%tagset`, <br> `[%tagset.get("Dim1")]`, <br> `[ %tagset.get("Dim1", "filterVal")]` | Dimension values of an anomaly or top anomaly of an incident. <br> The `filterVal` is used to filter out matching values within the square brackets. |
-
-Examples:
-
-* If the action link template is `https://action-link/metric/%metric?detectConfigId=%detect_config`:
- * The action link `https://action-link/metric/1234?detectConfigId=2345` would go to anomalies or incidents under metric `1234` and detect config `2345`.
-
-* If the action link template is `https://action-link?[Dim1=%tagset.get('Dim1','')&][Dim2=%tagset.get('Dim2','')]`:
- * The action link would be `https://action-link?Dim1=Val1&Dim2=Val2` when the anomaly is `{ "Dim1": "Val1", "Dim2": "Val2" }`.
- * The action link would be `https://action-link?Dim2=Val2` when the anomaly is `{ "Dim1": "", "Dim2": "Val2" }`, since `[Dim1=***&]` is skipped for the dimension value empty string.
-
-* If the action link template is `https://action-link?filter=[Name/Dim1 eq '%tagset.get('Dim1','')' and ][Name/Dim2 eq '%tagset.get('Dim2','')']`:
- * The action link would be `https://action-link?filter=Name/Dim1 eq 'Val1' and Name/Dim2 eq 'Val2'` when the anomaly is `{ "Dim1": "Val1", "Dim2": "Val2" }`,
- * The action link would be `https://action-link?filter=Name/Dim2 eq 'Val2'` when anomaly is `{ "Dim1": "", "Dim2": "Val2" }` since `[Name/Dim1 eq '***' and ]` is skipped for the dimension value empty string.
-
-### "Data feed not available" alert settings
-
-A data feed is considered not available if no data is ingested from the source within the grace period, counted from the time the data feed starts ingestion. An alert is triggered in this case.
-
-To configure an alert, you need to [create a hook](alerts.md#create-a-hook) first. Alerts will be sent through the hook configured.
-
-* **Grace period**: The Grace period setting is used to determine when to send an alert if no data points are ingested. The reference point is the time of first ingestion. If an ingestion fails, Metrics Advisor will keep trying at a regular interval specified by the granularity. If it continues to fail past the grace period, an alert will be sent.
-
-* **Auto snooze**: When this option is set to zero, each timestamp with *Not Available* triggers an alert. When a setting other than zero is specified, continuous timestamps after the first *not available* timestamp don't trigger alerts, according to the setting specified.
-
-## Next steps
-- [Configure metrics and fine tune detection configuration](configure-metrics.md)
-- [Adjust anomaly detection using feedback](anomaly-feedback.md)
-- [Diagnose an incident](diagnose-an-incident.md).
applied-ai-services Metrics Graph https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/metrics-advisor/how-tos/metrics-graph.md
- Title: Metrics Advisor metrics graph-
-description: How to configure your Metrics graph and visualize related anomalies in your data.
----- Previously updated : 09/08/2020---
-# How-to: Build a metrics graph to analyze related metrics
-
-Each time series in Metrics Advisor is monitored separately by a model that learns from historical data to predict future trends. Anomalies will be detected if any data point falls out of the historical pattern. In some cases, however, several metrics may relate to each other, and anomalies need to be analyzed across multiple metrics. **Metrics graph** is just the tool that helps with this.
-
-For example, if you have several metrics that monitor your business from different perspectives, anomaly detection will be applied to each of them respectively. However, in a real business case, anomalies detected on multiple metrics may be related to each other, and discovering those relations and analyzing the root cause based on them is helpful when addressing real issues. The metrics graph helps automatically correlate anomalies detected on related metrics to accelerate the troubleshooting process.
-
-## Select a metric to put the first node to the graph
-
-Click the **Metrics graph** tab in the navigation bar. The first step for building a metrics graph is to put a node onto the graph. Select a data feed and a metric at the top of the page. A node will appear in the bottom panel.
--
-## Add a node/relation on existing node
-
-Next, you need to add another node and specify a relation to an existing node(s). Select an existing node and right-click on it. A context menu will appear with several options.
-
-Select **Add relation**, and you will be able to choose another metric and specify the relation type between the two nodes. You can also apply specific dimension filters.
--
-After repeating the above steps, you will have a metrics graph describing the relations between all related metrics.
-
-There are other actions you can take on the graph:
-1. Delete a node
-2. Go to metrics
-3. Go to Incident Hub
-4. Expand
-5. Delete relation
-
-## Legend of metrics graph
-
-Each node on the graph represents a metric. There are four kinds of nodes in the metrics graph:
-
-- **Green node**: Represents a metric whose current incident severity is low.
-- **Orange node**: Represents a metric whose current incident severity is medium.
-- **Red node**: Represents a metric whose current incident severity is high.
-- **Blue node**: Represents a metric that doesn't have any anomaly severity.
-
-
-## View related metrics anomaly status in incident hub
-
-When the metrics graph is built, whenever an anomaly is detected on metrics within the graph, you will be able to view related anomaly statuses and get a high-level view of the incident.
-
-Click into an incident within the graph and scroll down to **cross metrics analysis**, below the diagnostic information.
--
-## Next steps
-
-- [Adjust anomaly detection using feedback](anomaly-feedback.md)
-- [Diagnose an incident](diagnose-an-incident.md).
-- [Configure metrics and fine tune detection configuration](configure-metrics.md)
applied-ai-services Onboard Your Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/metrics-advisor/how-tos/onboard-your-data.md
- Title: Onboard your data feed to Metrics Advisor-
-description: How to get started with onboarding your data feeds to Metrics Advisor.
------ Previously updated : 04/20/2021---
-# How-to: Onboard your metric data to Metrics Advisor
-
-Use this article to learn about onboarding your data to Metrics Advisor.
-
-## Data schema requirements and configuration
-
-If you are not sure about some of the terms, refer to [Glossary](../glossary.md).
-
-## Avoid loading partial data
-
-Partial data is caused by inconsistencies between the data stored in Metrics Advisor and the data source. This can happen when the data source is updated after Metrics Advisor has finished pulling data. Metrics Advisor only pulls data from a given data source once.
-
-For example, suppose a metric has been onboarded to Metrics Advisor for monitoring, and Metrics Advisor successfully grabs the metric data at timestamp A and performs anomaly detection on it. If the metric data for that particular timestamp A is refreshed after it has already been ingested, the new data value won't be retrieved.
-
-You can try to [backfill](manage-data-feeds.md#backfill-your-data-feed) historical data (described later) to mitigate inconsistencies, but this won't trigger new anomaly alerts if alerts for those time points have already been triggered. This process may add additional workload to the system, and is not automatic.
-
-To avoid loading partial data, we recommend two approaches:
-
-* Generate data in one transaction:
-
- Ensure the metric values for all dimension combinations at the same timestamp are stored in the data source in one transaction. In the above example, wait until the data from all sources is ready, and then load it into Metrics Advisor in one transaction. Metrics Advisor can poll the data feed regularly until data is successfully (or partially) retrieved. (A minimal T-SQL sketch of this approach follows this list.)
-
-* Delay data ingestion by setting a proper value for the **Ingestion time offset** parameter:
-
- Set the **Ingestion time offset** parameter for your data feed to delay the ingestion until the data is fully prepared. This can be useful for some data sources which don't support transactions such as Azure Table Storage. See [advanced settings](manage-data-feeds.md#advanced-settings) for details.
-
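For example, the following is a minimal T-SQL sketch of the single-transaction approach. The table name `dbo.SalesData` and its columns are placeholders for illustration only; adapt it to your own schema:

```mssql
-- Write every dimension combination for one timestamp in a single transaction,
-- so Metrics Advisor never ingests a half-written interval.
BEGIN TRANSACTION;
    INSERT INTO dbo.SalesData ([Timestamp], Country, Language, Income)
    VALUES ('2023-07-18T00:00:00', 'China', 'ZH-CN', 10000),
           ('2023-07-18T00:00:00', 'China', 'EN-US', 1000),
           ('2023-07-18T00:00:00', 'US',    'EN-US', 23000);
COMMIT TRANSACTION;
```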
-## Start by adding a data feed
-
-After signing into your Metrics Advisor portal and choosing your workspace, click **Get started**. Then, on the main page of the workspace, click **Add data feed** from the left menu.
-
-### Add connection settings
-
-#### 1. Basic settings
-Next you'll input a set of parameters to connect your time-series data source.
-* **Source Type**: The type of data source where your time series data is stored.
-* **Granularity**: The interval between consecutive data points in your time series data. Currently Metrics Advisor supports: Yearly, Monthly, Weekly, Daily, Hourly, per minute, and Custom. The lowest interval the customization option supports is 60 seconds.
- * **Seconds**: The number of seconds when *granularityName* is set to *Customize*.
-* **Ingest data since (UTC)**: The baseline start time for data ingestion. `startOffsetInSeconds` is often used to add an offset to help with data consistency.
-
-#### 2. Specify connection string
-Next, you'll need to specify the connection information for the data source. For details on the other fields and connecting different types of data sources, see [How-to: Connect different data sources](../data-feeds-from-different-sources.md).
-
-#### 3. Specify query for a single timestamp
-<!-- Next, you'll need to specify a query to convert the data into the required schema, see [how to write a valid query](../tutorials/write-a-valid-query.md) for more information. -->
-
-For details of different types of data sources, see [How-to: Connect different data sources](../data-feeds-from-different-sources.md).
-
-### Load data
-
-After the connection string and query string are entered, select **Load data**. During this operation, Metrics Advisor will check the connection and permission to load data, check that the necessary parameters (@IntervalStart and @IntervalEnd) are used in the query, and check the column names from the data source. A minimal example query is shown after the checklist below.
-
-If there's an error at this step:
-1. First check if the connection string is valid.
-2. Then check if there are sufficient permissions and that the ingestion worker IP address is granted access.
-3. Then check if required parameters (@IntervalStart and @IntervalEnd) are used in your query.
-
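The following is a minimal example of an ingestion query that uses both required parameters. The table and column names are placeholders; your own query will differ:

```mssql
-- Return exactly one granularity interval per execution.
-- @IntervalStart and @IntervalEnd are supplied by Metrics Advisor at ingestion time.
SELECT [Timestamp], Country, Language, Income
FROM dbo.SalesData
WHERE [Timestamp] >= @IntervalStart AND [Timestamp] < @IntervalEnd;
```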
-### Schema configuration
-
-Once the data schema is loaded, select the appropriate fields.
-
-If the timestamp of a data point is omitted, Metrics Advisor will use the timestamp when the data point is ingested instead. For each data feed, you can specify at most one column as a timestamp. If you get a message that a column cannot be specified as a timestamp, check your query or data source, and whether there are multiple timestamps in the query result - not only in the preview data. When performing data ingestion, Metrics Advisor can only consume one chunk (for example one day, one hour - according to the granularity) of time-series data from the given source each time.
-
-|Selection |Description |Notes |
-|---|---|---|
-| **Display Name** | Name to be displayed in your workspace instead of the original column name. | Optional.|
-|**Timestamp** | The timestamp of a data point. If omitted, Metrics Advisor will use the timestamp when the data point is ingested instead. For each data feed, you can specify at most one column as timestamp. | Optional. Should be specified with at most one column. If you get a **column cannot be specified as Timestamp** error, check your query or data source for duplicate timestamps. |
-|**Measure** | The numeric values in the data feed. For each data feed, you can specify multiple measures but at least one column should be selected as measure. | Should be specified with at least one column. |
-|**Dimension** | Categorical values. A combination of different values identifies a particular single-dimension time series, for example: country/region, language, tenant. You can select zero or more columns as dimensions. Note: be cautious when selecting a non-string column as a dimension. | Optional. |
-|**Ignore** | Ignore the selected column. | Optional. For data sources that support using a query to get data, there is no 'Ignore' option. |
-
-If you want to ignore columns, we recommend updating your query or data source to exclude those columns. You can also ignore columns using **Ignore columns** and then **Ignore** on the specific columns. If a column should be a dimension and is mistakenly set as *Ignored*, Metrics Advisor may end up ingesting partial data. For example, assume the data from your query is as below:
-
-| Row ID | Timestamp | Country/Region | Language | Income |
-| --- | --- | --- | --- | --- |
-| 1 | 2019/11/10 | China | ZH-CN | 10000 |
-| 2 | 2019/11/10 | China | EN-US | 1000 |
-| 3 | 2019/11/10 | US | ZH-CN | 12000 |
-| 4 | 2019/11/11 | US | EN-US | 23000 |
-| ... | ...| ... | ... | ... |
-
-If *Country* is a dimension and *Language* is set as *Ignored*, then the first and second rows will have the same dimensions for a timestamp. Metrics Advisor will arbitrarily use one value from the two rows. Metrics Advisor will not aggregate the rows in this case.
-
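If you can't remove the extra column from the source itself, one option is to aggregate it away in the ingestion query instead of marking it as *Ignored*, so that each remaining dimension combination stays unique. A sketch, again with placeholder table and column names:

```mssql
-- Collapse the Language column by summing Income per (Timestamp, Country).
SELECT [Timestamp], Country, SUM(Income) AS Income
FROM dbo.SalesData
WHERE [Timestamp] >= @IntervalStart AND [Timestamp] < @IntervalEnd
GROUP BY [Timestamp], Country;
```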
-After configuring the schema, select **Verify schema**. Within this operation, Metrics Advisor will perform the following checks:
-- Whether the timestamp of the queried data falls into one single interval.
-- Whether there are duplicate values returned for the same dimension combination within one metric interval.
-
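If the duplicate-value check fails, a quick query like the following against your own source can help find the offending rows. The table, columns, and interval values are placeholders:

```mssql
-- Find dimension combinations that return more than one row in the same interval.
DECLARE @CheckStart DATETIME = '2023-07-18T00:00:00';
DECLARE @CheckEnd   DATETIME = '2023-07-19T00:00:00';

SELECT [Timestamp], Country, Language, COUNT(*) AS RowCount
FROM dbo.SalesData
WHERE [Timestamp] >= @CheckStart AND [Timestamp] < @CheckEnd
GROUP BY [Timestamp], Country, Language
HAVING COUNT(*) > 1;
```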
-### Automatic roll up settings
-
-> [!IMPORTANT]
-> If you'd like to enable root cause analysis and other diagnostic capabilities, the **Automatic roll up settings** need to be configured.
-> Once enabled, the automatic roll-up settings cannot be changed.
-
-Metrics Advisor can automatically perform aggregation (for example SUM, MAX, MIN) on each dimension during ingestion, and then builds a hierarchy that will be used in root cause analysis and other diagnostic features.
-
-Consider the following scenarios:
-
-* *"I do not need to include the roll-up analysis for my data."*
-
- You do not need to use the Metrics Advisor roll-up.
-
-* *"My data has already rolled up and the dimension value is represented by: NULL or Empty (Default), NULL only, Others."*
-
- This option means Metrics Advisor doesn't need to roll up the data because the rows are already summed. For example, if you select *NULL only*, then the second data row in the below example will be seen as an aggregation of all countries and language *EN-US*; the fourth data row which has an empty value for *Country* however will be seen as an ordinary row which might indicate incomplete data.
-
- | Country/Region | Language | Income |
- | --- | --- | --- |
- | China | ZH-CN | 10000 |
- | (NULL) | EN-US | 999999 |
- | US | EN-US | 12000 |
- | | EN-US | 5000 |
-
-* *"I need Metrics Advisor to roll up my data by calculating Sum/Max/Min/Avg/Count and represent it by {some string}."*
-
- Some data sources such as Azure Cosmos DB or Azure Blob Storage do not support certain calculations like *group by* or *cube*. Metrics Advisor provides the roll up option to automatically generate a data cube during ingestion.
- This option means you need Metrics Advisor to calculate the roll-up using the algorithm you've selected and use the specified string to represent the roll-up in Metrics Advisor. This won't change any data in your data source.
- For example, suppose you have a set of time series which stands for Sales metrics with the dimension (Country, Region). For a given timestamp, it might look like the following:
--
- | Country | Region | Sales |
- | --- | --- | --- |
- | Canada | Alberta | 100 |
- | Canada | British Columbia | 500 |
- | United States | Montana | 100 |
--
- After enabling Auto Roll Up with *Sum*, Metrics Advisor will calculate the dimension combinations, and sum the metrics during data ingestion. The result might be:
-
- | Country | Region | Sales |
- | --- | --- | --- |
- | Canada | Alberta | 100 |
- | NULL | Alberta | 100 |
- | Canada | British Columbia | 500 |
- | NULL | British Columbia | 500 |
- | United States | Montana | 100 |
- | NULL | Montana | 100 |
- | NULL | NULL | 700 |
- | Canada | NULL | 600 |
- | United States | NULL | 100 |
-
- `(Country=Canada, Region=NULL, Sales=600)` means the sum of Sales in Canada (all regions) is 600.
-
- The following is the equivalent transformation in SQL:
-
- ```mssql
- SELECT
- dimension_1,
- dimension_2,
- ...
- dimension_n,
- sum (metrics_1) AS metrics_1,
- sum (metrics_2) AS metrics_2,
- ...
- sum (metrics_n) AS metrics_n
- FROM
- each_timestamp_data
- GROUP BY
- CUBE (dimension_1, dimension_2, ..., dimension_n);
- ```
-
- Consider the following before using the Auto roll up feature:
-
- * If you want to use *SUM* to aggregate your data, make sure your metrics are additive in each dimension. Here are some examples of *non-additive* metrics:
- - Fraction-based metrics. This includes ratio, percentage, etc. For example, you should not add the unemployment rate of each state to calculate the unemployment rate of the entire country/region.
- Overlap in dimension. For example, you should not add the number of people in each sport to calculate the number of people who like sports, because there is an overlap between them; one person can like multiple sports.
- * To ensure the health of the whole system, the size of cube is limited. Currently, the limit is **100,000**. If your data exceeds that limit, ingestion will fail for that timestamp.
-
-## Advanced settings
-
-There are several advanced settings to enable data ingested in a customized way, such as specifying ingestion offset, or concurrency. For more information, see the [advanced settings](manage-data-feeds.md#advanced-settings) section in the data feed management article.
-
-## Specify a name for the data feed and check the ingestion progress
-
-Give a custom name for the data feed, which will be displayed in your workspace. Then click on **Submit**. In the data feed details page, you can use the ingestion progress bar to view status information.
---
-To check ingestion failure details:
-
-1. Click **Show Details**.
-2. Click **Status** then choose **Failed** or **Error**.
-3. Hover over a failed ingestion, and view the details message that appears.
--
-A *failed* status indicates the ingestion for this data source will be retried later.
-An *Error* status indicates Metrics Advisor won't retry for the data source. To reload data, you need to trigger a backfill/reload manually.
-
-You can also refresh the progress of an ingestion by clicking **Refresh Progress**. After data ingestion completes, you're free to click into metrics and check anomaly detection results.
-
-## Next steps
-- [Manage your data feeds](manage-data-feeds.md)
-- [Configurations for different data sources](../data-feeds-from-different-sources.md)
-- [Configure metrics and fine tune detection configuration](configure-metrics.md)
applied-ai-services Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/metrics-advisor/overview.md
- Title: What is the Azure Metrics Advisor service?-
-description: What is Metrics Advisor?
----- Previously updated : 07/06/2021---
-# What is Azure Metrics Advisor?
-
-Metrics Advisor is a part of [Azure Applied AI Services](../../applied-ai-services/what-are-applied-ai-services.md) that uses AI to perform data monitoring and anomaly detection in time series data. The service automates the process of applying models to your data, and provides a set of APIs and a web-based workspace for data ingestion, anomaly detection, and diagnostics - without needing to know machine learning. Developers can build AIOps, predictive maintenance, and business monitoring applications on top of the service. Use Metrics Advisor to:
-
-* Analyze multi-dimensional data from multiple data sources
-* Identify and correlate anomalies
-* Configure and fine-tune the anomaly detection model used on your data
-* Diagnose anomalies and help with root cause analysis
--
-This documentation contains the following types of articles:
-* The [quickstarts](./Quickstarts/web-portal.md) are step-by-step instructions that let you make calls to the service and get results in a short period of time.
-* The [how-to guides](./how-tos/onboard-your-data.md) contain instructions for using the service in more specific or customized ways.
-* The [conceptual articles](glossary.md) provide in-depth explanations of the service's functionality and features.
-
-## Connect to a variety of data sources
-
-Metrics Advisor can connect to, and [ingest multi-dimensional metric](how-tos/onboard-your-data.md) data from many data stores, including: SQL Server, Azure Blob Storage, MongoDB and more.
-
-## Easy-to-use and customizable anomaly detection
-
-* Metrics Advisor automatically selects the best model for your data, without needing to know any machine learning.
-* Automatically monitor every time series within [multi-dimensional metrics](glossary.md#multi-dimensional-metric).
-* Use [parameter tuning](how-tos/configure-metrics.md) and [interactive feedback](how-tos/anomaly-feedback.md) to customize the model applied on your data, and future anomaly detection results.
-
-## Real-time notification through multiple channels
-
-Whenever anomalies are detected, Metrics Advisor is able to [send real time notification](how-tos/alerts.md) through multiple channels using hooks, such as: email hooks, web hooks, Teams hooks and Azure DevOps hooks. Flexible alert configuration lets you customize when and where to send a notification.
-
-## Smart diagnostic insights by analyzing anomalies
-
-### Analyze root cause into specific dimension
-
-Metrics Advisor combines anomalies detected on the same multi-dimensional metric into a diagnostic tree to help you analyze the root cause down to a specific dimension. Automated insights are also available by analyzing the greatest contribution of each dimension.
-
-### Cross-metrics analysis using Metrics graph
-
-A [Metrics graph](./how-tos/metrics-graph.md) indicates the relationships between metrics. Cross-metrics analysis can be enabled to help you check the abnormal status of all related metrics in a holistic view, and eventually locate the final root cause.
-
-Refer to [how to diagnose an incident](./how-tos/diagnose-an-incident.md) for more detail.
-
-## Typical workflow
-
-The workflow is simple: after onboarding your data, you can fine-tune the anomaly detection, and create configurations to fit your scenario.
-
-1. [Create an Azure resource](https://go.microsoft.com/fwlink/?linkid=2142156) for Metrics Advisor.
-2. Build your first monitor using the web portal.
- 1. [Onboard your data](./how-tos/onboard-your-data.md)
- 2. [Fine-tune anomaly detection configuration](./how-tos/configure-metrics.md)
- 3. [Subscribe anomalies for notification](./how-tos/alerts.md)
- 4. [View diagnostic insights](./how-tos/diagnose-an-incident.md)
-3. Use the REST API to customize your instance.
-
-## Video
-* [Introducing Metrics Advisor](https://www.youtube.com/watch?v=0Y26cJqZMIM)
-* [New to Cognitive Services](https://www.youtube.com/watch?v=7tCLJHdBZgM)
-
-## Data retention & limitation:
-
-Metrics Advisor keeps at most **10,000** time intervals ([what is an interval?](tutorials/write-a-valid-query.md#what-is-an-interval)) counting forward from the current timestamp, regardless of whether data is available or not. Data that falls out of the window will be deleted. The data retention mapped to a count of days for different metric granularities is:
-
-| Granularity(min) | Retention(day) |
-| --- | --- |
-| 1 | 6.94 |
-| 5 | 34.72|
-| 15 | 104.1|
-| 60(=hourly) | 416.67 |
-| 1440(=daily)|10000.00|
-
-There are also further limitations. Refer to [FAQ](faq.yml#what-are-the-data-retention-and-limitations-of-metrics-advisor-) for more details.
-
-## Use cases for Metrics Advisor
-
-* [Protect your organization's growth by using Azure Metrics Advisor](https://techcommunity.microsoft.com/t5/azure-ai/protect-your-organization-s-growth-by-using-azure-metrics/ba-p/2564682)
-* [Supply chain anomaly detection and root cause analysis with Azure Metric Advisor](https://techcommunity.microsoft.com/t5/azure-ai/supply-chain-anomaly-detection-and-root-cause-analysis-with/ba-p/2871920)
-* [Customer support: How Azure Metrics Advisor can help improve customer satisfaction](https://techcommunity.microsoft.com/t5/azure-ai-blog/customer-support-how-azure-metrics-advisor-can-help-improve/ba-p/3038907)
-* [Detecting methane leaks using Azure Metrics Advisor](https://techcommunity.microsoft.com/t5/ai-cognitive-services-blog/detecting-methane-leaks-using-azure-metrics-advisor/ba-p/3254005)
-* [AIOps with Azure Metrics Advisor - OpenDataScience.com](https://opendatascience.com/aiops-with-azure-metrics-advisor/)
-
-## Next steps
-
-* Explore a quickstart: [Monitor your first metric on web](quickstarts/web-portal.md).
-* Explore a quickstart: [Use the REST APIs to customize your solution](./quickstarts/rest-api-and-client-library.md).
applied-ai-services Rest Api And Client Library https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/metrics-advisor/quickstarts/rest-api-and-client-library.md
- Title: Metrics Advisor client libraries REST API-
-description: Use this quickstart to connect your applications to the Metrics Advisor API from Azure Cognitive Services.
----- Previously updated : 11/07/2022-
-zone_pivot_groups: programming-languages-metrics-monitor
---
-# Quickstart: Use the client libraries or REST APIs to customize your solution
-
-Get started with the Metrics Advisor REST API or client libraries. Follow these steps to install the package and try out the example code for basic tasks.
-
-Use Metrics Advisor to:
-
-* Add a data feed from a data source
-* Check ingestion status
-* Configure detection and alerts
-* Query the anomaly detection results
-* Diagnose anomalies
-----------------
-## Clean up resources
-
-If you want to clean up and remove a Cognitive Services subscription, you can delete the resource or resource group. Deleting the resource group also deletes any other resources associated with it.
-
-* [Portal](../../../cognitive-services/cognitive-services-apis-create-account.md#clean-up-resources)
-* [Azure CLI](../../../cognitive-services/cognitive-services-apis-create-account-cli.md#clean-up-resources)
-
-## Next steps
-
-- [Use the web portal](web-portal.md)
-- [Onboard your data feeds](../how-tos/onboard-your-data.md)
- - [Manage data feeds](../how-tos/manage-data-feeds.md)
- - [Configurations for different data sources](../data-feeds-from-different-sources.md)
-- [Configure metrics and fine tune detection configuration](../how-tos/configure-metrics.md)
-- [Adjust anomaly detection using feedback](../how-tos/anomaly-feedback.md)
applied-ai-services Web Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/metrics-advisor/quickstarts/web-portal.md
- Title: 'Quickstart: Metrics Advisor web portal'-
-description: Learn how to start using the Metrics Advisor web portal.
--- Previously updated : 11/07/2022------
-# Quickstart: Monitor your first metric by using the web portal
-
-When you provision an instance of Azure Metrics Advisor, you can use the APIs and web-based workspace to interact with the service. The web-based workspace can be used as a straightforward way to quickly get started with the service. It also provides a visual way to configure settings, customize your model, and perform root cause analysis.
-
-## Prerequisites
-
-* An Azure subscription. [Create one for free](https://azure.microsoft.com/free/cognitive-services).
-* When you have your Azure subscription, <a href="https://go.microsoft.com/fwlink/?linkid=2142156" title="Create a Metrics Advisor resource" target="_blank">create a Metrics Advisor resource </a> in the Azure portal to deploy your instance of Metrics Advisor.
-
-
-> [!TIP]
-> * It can take 10 to 30 minutes for your Metrics Advisor resource to deploy. Select **Go to resource** after it successfully deploys.
-> * If you want to use the REST API to interact with the service, you need the key and endpoint from the resource you create. You can find them on the **Keys and endpoints** tab in the created resource.
--
-This document uses a SQL database as an example for creating your first monitor.
-
-## Sign in to your workspace
-
-After your resource is created, sign in to the [Metrics Advisor portal](https://go.microsoft.com/fwlink/?linkid=2143774) with your Active Directory account. From the landing page, select your **Directory**, **Subscription**, and **Workspace** that you just created, and then select **Get started**. To use time series data, select **Add data feed** from the left menu.
-
-
-Currently you can create one Metrics Advisor resource at each available region. You can switch workspaces in the Metrics Advisor portal at any time.
--
-## Time series data
-
-Metrics Advisor provides connectors for different data sources, such as Azure SQL Database, Azure Data Explorer, and Azure Table Storage. The steps for connecting data are similar for different connectors, although some configuration parameters might vary. For more information, see [Connect different data sources](../data-feeds-from-different-sources.md).
-
-This quickstart uses a SQL database as an example. You can also ingest your own data by following the same steps.
--
-### Data schema requirements and configuration
--
-### Configure connection settings and query
-
-[Add the data feeds](../how-tos/onboard-your-data.md) by connecting to your time series data source. Start by selecting the following parameters:
-
-* **Source Type**: The type of data source where your time series data is stored.
-* **Granularity**: The interval between consecutive data points in your time series data (for example, yearly, monthly, or daily). The shortest interval supported is 60 seconds.
-* **Ingest data since (UTC)**: The start time for the first timestamp to be ingested.
----
-### Load data
-
-After you input the connection and query strings, select **Load data**. Metrics Advisor checks the connection and permission to load data, checks the necessary parameters used in the query, and checks the column name from the data source.
-
-If there's an error at this step:
-1. Check if the connection string is valid.
-1. Confirm that there are sufficient permissions and that the ingestion worker IP address is granted access.
-1. Check if the required parameters (`@IntervalStart` and `@IntervalEnd`) are used in your query, for example: `WHERE [timestamp] >= @IntervalStart AND [timestamp] < @IntervalEnd`.
-
-### Schema configuration
-
-After the data is loaded by running the query, select the appropriate fields.
--
-|Selection |Description |Notes |
-|---------|---------|---------|
-|**Timestamp** | The timestamp of a data point. If the timestamp is omitted, Metrics Advisor uses the timestamp when the data point is ingested instead. For each data feed, you can specify at most one column as timestamp. | Optional. Should be specified with at most one column. |
-|**Measure** | The numeric values in the data feed. For each data feed, you can specify multiple measures, but at least one column should be selected as measure. | Should be specified with at least one column. |
-|**Dimension** | Categorical values. A combination of different values identifies a particular single-dimension time series. Examples include country/region, language, and tenant. You can select none, or an arbitrary number of columns as dimensions. If you're selecting a non-string column as dimension, be cautious with dimension explosion. | Optional. |
-|**Ignore** | Ignore the selected column. | Optional. For data sources that support using a query to get data, there's no ignore option. |
---
-After configuring the schema, select **Verify schema**. Metrics Advisor performs the following checks:
-- Whether the timestamp of the queried data falls into one single interval. 
-- Whether there are duplicate values returned for the same dimension combination within one metric interval. 
-
-### Automatic roll-up settings
-
-> [!IMPORTANT]
-> If you want to enable root cause analysis and other diagnostic capabilities, configure the automatic roll-up settings. After you enable the analysis, you can't change the automatic roll-up settings.
-
-Metrics Advisor can automatically perform aggregation on each dimension during ingestion. Then the service builds a hierarchy that you can use in root cause analysis and other diagnostic features. For more information, see [Automatic roll-up settings](../how-tos/onboard-your-data.md#automatic-roll-up-settings).
-
-Give a custom name for the data feed, which will be shown in your workspace. Select **Submit**.
-
-## Tune detection configuration
-
-After the data feed is added, Metrics Advisor attempts to ingest metric data from the specified start date. It will take some time for data to be fully ingested, and you can view the ingestion status by selecting **Ingestion progress** at the top of the data feed page. If data is ingested, Metrics Advisor will apply detection, and continue to monitor the source for new data.
-
-When detection is applied, select one of the metrics listed in the data feed to find the **Metric detail page**. Here, you can:
-- View visualizations of all time series' slices under this metric.
-- Update detection configuration to meet expected results.
-- Set up notification for detected anomalies.
-
-## View diagnostic insights
-
-After tuning the detection configuration, you should find that detected anomalies reflect actual anomalies in your data. Metrics Advisor performs analysis on multidimensional metrics to locate the root cause to a specific dimension. The service also performs cross-metrics analysis by using the metrics graph feature.
-
-To view diagnostic insights, select the red dots on time series visualizations. These red dots represent detected anomalies. A window will appear with a link to the incident analysis page.
--
-On the incident analysis page, you see a group of related anomalies and diagnostic insights. The following sections cover the major steps to diagnose an incident.
-
-### Check the summary of the current incident
-
-You can find the summary at the top of the incident analysis page. This summary includes basic information, actions and tracings, and an analyzed root cause. Basic information includes the top impacted series with a diagram, the impact start and end time, the severity, and the total anomalies included.
-
-The analyzed root cause is an automatically analyzed result. Metrics Advisor analyzes all anomalies that are captured on a time series, within one metric with different dimension values at the same timestamp. Then the service performs correlation and clustering to group related anomalies together, and generates advice about a root cause.
--
-Based on these, you can already get a straightforward view of the current abnormal status, the impact of the incident, and the most likely root cause. You can then take immediate action to resolve the incident.
-
-### View cross-dimension diagnostic insights
-
-You can also get more detailed info on abnormal status on other dimensions within the same metric in a holistic way, by using the diagnostic tree feature.
-
-For metrics with multiple dimensions, Metrics Advisor categorizes the time series into a hierarchy (called a diagnostic tree). For example, a revenue metric is monitored by two dimensions: region and category. You need to have an aggregated dimension value, such as `SUM`. Then, the time series of `region = SUM` and `category = SUM` is categorized as the root node within the tree. Whenever there's an anomaly captured at the `SUM` dimension, you can analyze it to locate which specific dimension value has contributed the most to the parent node anomaly. Select each node to expand it for detailed information.
--
-### View cross-metrics diagnostic insights
-
-Sometimes, it's hard to analyze an issue by checking the abnormal status of a single metric, and you need to correlate multiple metrics together. To do this, configure a metrics graph, which indicates the relationships between metrics.
-
-By using the cross-dimension diagnostic result described in the previous section, you can identify that the root cause is limited to a specific dimension value. Then use a metrics graph to filter by the analyzed root cause dimension, to check the anomaly status on other metrics.
--
-You can also pivot across more diagnostic insights by using additional features. These features help you drill down on dimensions of anomalies, view similar anomalies, and compare across metrics. For more information, see [Diagnose an incident](../how-tos/diagnose-an-incident.md).
-
-## Get notified when new anomalies are found
-
-If you want to get alerted when an anomaly is detected in your data, you can create a subscription for one or more of your metrics. Metrics Advisor uses hooks to send alerts. Three types of hooks are supported: email hook, web hook, and Azure DevOps. We'll use web hook as an example.
-
-### Create a web hook
-
-In Metrics Advisor, you can use a web hook to surface an anomaly programmatically. The service calls a user-provided API when an alert is triggered, so all alerts can be delivered through a web hook. For more information, see [Create a hook](../how-tos/alerts.md#create-a-hook).
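If you want to see what a hook call looks like before you stand up a real endpoint, a small local listener is enough for experimentation. The following PowerShell sketch is only an illustration: the port and route are arbitrary placeholders, not Metrics Advisor requirements, and in production the hook endpoint must be an HTTPS URL that the service can reach.

```powershell
# Minimal local receiver for experimenting with webhook payloads.
# The port and path below are arbitrary placeholders.
$listener = [System.Net.HttpListener]::new()
$listener.Prefixes.Add("http://localhost:8080/alert/")
$listener.Start()
Write-Host "Waiting for one webhook call on http://localhost:8080/alert/ ..."

$context = $listener.GetContext()                                  # blocks until a request arrives
$reader  = [System.IO.StreamReader]::new($context.Request.InputStream)
$payload = $reader.ReadToEnd()
Write-Host "Received alert payload:`n$payload"

$context.Response.StatusCode = 200                                 # acknowledge the call
$context.Response.Close()
$listener.Stop()
```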
-
-### Configure alert settings
-
-After you create a hook, an alert setting determines how and which alert notifications should be sent. You can set multiple alert settings for each metric. Two important settings are **Alert for**, which specifies the scope of anomalies to include, and **Filter anomaly options**, which further filters which of those anomalies trigger the alert. For more information, see [Add or edit alert settings](../how-tos/alerts.md#add-or-edit-alert-settings).
--
-## Next steps
-
-- [Add your metric data to Metrics Advisor](../how-tos/onboard-your-data.md)
- - [Manage your data feeds](../how-tos/manage-data-feeds.md)
- - [Connect different data sources](../data-feeds-from-different-sources.md)
-- [Use the client libraries or REST APIs to customize your solution](./rest-api-and-client-library.md)
-- [Configure metrics and fine-tune detection configuration](../how-tos/configure-metrics.md)
applied-ai-services Enable Anomaly Notification https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/metrics-advisor/tutorials/enable-anomaly-notification.md
- Title: Metrics Advisor anomaly notification e-mails with Azure Logic Apps
-description: Learn how to automate sending e-mail alerts in response to Metric Advisor anomalies
----- Previously updated : 05/20/2021--
-# Tutorial: Enable anomaly notification in Metrics Advisor
-
-In this tutorial, you learn how to:
-
-> [!div class="checklist"]
-> * Create a hook in Metrics Advisor
-> * Send Notifications with Azure Logic Apps
-> * Send Notifications to Microsoft Teams
-> * Send Notifications via SMTP server
-
-## Prerequisites
-### Create a Metrics Advisor resource
-
-To explore capabilities of Metrics Advisor, you may need to <a href="https://go.microsoft.com/fwlink/?linkid=2142156" title="Create a Metrics Advisor resource" target="_blank">create a Metrics Advisor resource </a> in the Azure portal to deploy your Metrics Advisor instance.
-
-### Create a hook in Metrics Advisor
-A hook in Metrics Advisor is a bridge that enables customers to subscribe to metric anomalies and send notifications through different channels. There are four types of hooks in Metrics Advisor:
-
-- Email hook
-- Webhook
-- Teams hook
-- Azure DevOps hook
-
-Each hook type corresponds to a specific channel through which anomalies are sent.
-
-## Send notifications with Azure Logic Apps, Teams, and SMTP
-
-#### [Logic Apps](#tab/logic)
-
-### Send email notification by using Azure Logic Apps
-
-There are two common options for sending email notifications that are supported in Metrics Advisor. One is to use webhooks and Azure Logic Apps to send email alerts; the other is to set up an SMTP server and use it to send email alerts directly. This section focuses on the first option, which is easier for customers who don't have an available SMTP server.
-
-**Step 1.** Create a webhook in Metrics Advisor
-
-A webhook is the entry point for all the information available from the Metrics Advisor service, and calls a user-provided API when an alert is triggered. All alerts can be sent through a webhook.
-
-Select the **Hooks** tab in your Metrics Advisor workspace, and select the **Create hook** button. Choose a hook type of **web hook**. Fill in the required parameters and select **OK**. For detailed steps, refer to [create a webhook](../how-tos/alerts.md#web-hook).
-
-There's one extra parameter, **Endpoint**, that needs to be filled out. You can do this after completing Step 3 below.
--
-**Step 2.** Create a Consumption logic app resource
-
-In the [Azure portal](https://portal.azure.com), create a Consumption logic app resource with a blank workflow by following the instructions in [Create an example Consumption logic app workflow](../../../logic-apps/quickstart-create-example-consumption-workflow.md). When the workflow designer opens, return to this tutorial.
--
-**Step 3.** Add a trigger of **When an HTTP request is received**
-
-- Azure Logic Apps provides various triggers to start the workflows you define. For this use case, use the trigger named **When an HTTP request is received**. 
-
-- In the dialog for **When an HTTP request is received**, select **Use sample payload to generate schema**.
- ![Screenshot that shows the When an HTTP request dialog box and the Use sample payload to generate schema option selected.](../media/tutorial/logic-apps-generate-schema.png)
-
- Copy the following sample JSON into the textbox and select **Done**.
-
- ```json
- {
- "properties": {
- "value": {
- "items": {
- "properties": {
- "alertInfo": {
- "properties": {
- "alertId": {
- "type": "string"
- },
- "anomalyAlertingConfigurationId": {
- "type": "string"
- },
- "createdTime": {
- "type": "string"
- },
- "modifiedTime": {
- "type": "string"
- },
- "timestamp": {
- "type": "string"
- }
- },
- "type": "object"
- },
- "alertType": {
- "type": "string"
- },
- "callBackUrl": {
- "type": "string"
- },
- "hookId": {
- "type": "string"
- }
- },
- "required": [
- "hookId",
- "alertType",
- "alertInfo",
- "callBackUrl"
- ],
- "type": "object"
- },
- "type": "array"
- }
- },
- "type": "object"
- }
- ```
-
-- Choose 'POST' as the method and select **Save**. You can now see the URL of your HTTP request trigger. Select the copy icon to copy it, and paste it into the **Endpoint** field from Step 1. A quick way to test the trigger is sketched below. 
- ![Screenshot that highlights the copy icon to copy the URL of your HTTP request trigger.](../media/tutorial/logic-apps-copy-url.png)
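Before the first real alert fires, you can optionally confirm that the trigger accepts the expected shape by posting a small test payload to the copied URL. This is only a sketch: the trigger URL and all IDs below are placeholders, and a real alert from Metrics Advisor populates these fields automatically.

```powershell
# Placeholder: the HTTP trigger URL copied from the Logic App designer in this step.
$triggerUrl = "https://<your-logic-app-trigger-url>"

# Minimal test body shaped like the sample schema above; every value is a placeholder.
$payload = @{
    value = @(
        @{
            hookId      = "00000000-0000-0000-0000-000000000000"
            alertType   = "Anomaly"
            alertInfo   = @{
                alertId                        = "0000000000000"
                anomalyAlertingConfigurationId = "00000000-0000-0000-0000-000000000000"
                createdTime                    = "2021-05-20T00:00:00Z"
                modifiedTime                   = "2021-05-20T00:00:00Z"
                timestamp                      = "2021-05-19T00:00:00Z"
            }
            callBackUrl = "https://<callback-url-from-a-real-alert>"
        }
    )
} | ConvertTo-Json -Depth 5

# Fire the test request; a new run should appear in the Logic App run history.
Invoke-RestMethod -Method Post -Uri $triggerUrl -Body $payload -ContentType 'application/json'
```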
-
-**Step 4.** Add a next step using 'HTTP' action
-
-Signals that are pushed through the webhook only contain limited information like timestamp, alertID, configurationID, etc. Detailed information needs to be queried using the callback URL provided in the signal. This step is to query detailed alert info.
-
-- Choose a method of 'GET'.
-- Select 'callBackURL' from the 'Dynamic content' list in 'URI'.
-- Enter a key of 'Content-Type' in 'Headers' and input a value of 'application/json'.
-- Enter a key of 'x-api-key' in 'Headers'. To get the value, select the **'API keys'** tab in your Metrics Advisor workspace. This step ensures the workflow has sufficient permissions for API calls. (The same call is sketched in PowerShell after these steps.)
- ![Screenshot that highlights the api-keys](../media/tutorial/logic-apps-api-key.png)
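If you'd like to verify the callback outside of Logic Apps, the same GET request can be issued from PowerShell. The sketch below assumes you already have a callback URL from a real alert signal and an API key from the 'API keys' tab; both values here are placeholders.

```powershell
# Placeholders: take callBackUrl from a real alert signal and the key from the
# 'API keys' tab in your Metrics Advisor workspace.
$callBackUrl = "https://<callback-url-from-the-alert-signal>"
$apiKey      = "<your-metrics-advisor-api-key>"

# Query the detailed alert information referenced by the signal.
$alertDetail = Invoke-RestMethod -Method Get -Uri $callBackUrl `
    -Headers @{ 'x-api-key' = $apiKey } -ContentType 'application/json'

$alertDetail.value | Format-List
```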
-
-**Step 5.** Add a next step to 'parse JSON'
-
-You need to parse the response of the API for easier formatting of email content.
-
-> [!NOTE]
-> This tutorial only shares a quick example; the final email format needs to be further designed.
-
-- Select 'Body' from the 'Dynamic content' list in 'Content'.
-- Select **Use sample payload to generate schema**. Copy the following sample JSON into the textbox and select **Done**.
-```json
-{
- "properties": {
- "@@nextLink": {},
- "value": {
- "items": {
- "properties": {
- "properties": {
- "properties": {
- "IncidentSeverity": {
- "type": "string"
- },
- "IncidentStatus": {
- "type": "string"
- }
- },
- "type": "object"
- },
- "rootNode": {
- "properties": {
- "createdTime": {
- "type": "string"
- },
- "detectConfigGuid": {
- "type": "string"
- },
- "dimensions": {
- "properties": {
- },
- "type": "object"
- },
- "metricGuid": {
- "type": "string"
- },
- "modifiedTime": {
- "type": "string"
- },
- "properties": {
- "properties": {
- "AnomalySeverity": {
- "type": "string"
- },
- "ExpectedValue": {}
- },
- "type": "object"
- },
- "seriesId": {
- "type": "string"
- },
- "timestamp": {
- "type": "string"
- },
- "value": {
- "type": "number"
- }
- },
- "type": "object"
- }
- },
- "required": [
- "rootNode",
- "properties"
- ],
- "type": "object"
- },
- "type": "array"
- }
- },
- "type": "object"
-}
-```
-
-**Step 6.** Add a next step to 'create HTML table'
-
-The API call returns a large amount of information, but depending on your scenario not all of it may be useful. Choose the items that you care about and want to include in the alert email.
-
-Below is an example of an HTML table that chooses 'timestamp', 'metricGUID' and 'dimension' to be included in the alert email.
-
-![Screenshot of html table example](../media/tutorial/logic-apps-html-table.png)
-
-**Step 7.** Add the final step to 'send an email'
-
-There are several options for sending email, including both Microsoft-hosted and third-party offerings. You may need a tenant or account for your chosen option. For example, if you choose 'Office 365 Outlook' as the server, you're prompted to sign in to establish the connection and authorization. An API connection is then created so the email server can be used to send alerts.
-
-Fill in the content that you'd like to include in 'Body', the 'Subject' of the email, and an email address in 'To'.
-
-![Screenshot of send an email](../media/tutorial/logic-apps-send-email.png)
-
-#### [Teams Channel](#tab/teams)
-
-### Send anomaly notification through a Microsoft Teams channel
-This section will walk through the practice of sending anomaly notifications through a Microsoft Teams channel. This can help enable scenarios where team members are collaborating on analyzing anomalies that are detected by Metrics Advisor. The workflow is easy to configure and doesn't have a large number of prerequisites.
-
-**Step 1.** Add an 'Incoming Webhook' connector to your Teams channel
-
-- Navigate to the Teams channel that you'd like to send notifications to, and select '•••' (More options). 
-- In the dropdown list, select 'Connectors'. In the new dialog, search for 'Incoming Webhook' and select 'Add'.
- ![Screenshot to create an incoming webhook](../media/tutorial/add-webhook.png)
-
-- If you're not able to view the 'Connectors' option, contact your Teams group owners. Select 'Manage team', then select the 'Settings' tab at the top and check whether the setting 'Allow members to create, update and remove connectors' is checked.
- ![Screenshot to check teams settings](../media/tutorial/teams-settings.png)
-
-- Input a name for the connector; you can also upload an image to use as its avatar. Select 'Create', and the Incoming Webhook connector is added to your channel. A URL is generated at the bottom of the dialog. **Be sure to select 'Copy'**, then select 'Done'. A quick test of the webhook is sketched below. 
- ![Screenshot to copy URL](../media/tutorial/webhook-url.png)
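Before wiring the hook into Metrics Advisor, you can optionally confirm that the incoming webhook works by posting a simple test message to the copied URL. This is a sketch only; the URL below is a placeholder for the one you copied.

```powershell
# Placeholder: paste the incoming webhook URL you copied from Teams.
$teamsWebhookUrl = "https://<your-teams-incoming-webhook-url>"

# Incoming webhooks accept a simple JSON body with a 'text' field.
$body = @{ text = "Test message: the Metrics Advisor Teams hook is reachable." } | ConvertTo-Json

Invoke-RestMethod -Method Post -Uri $teamsWebhookUrl -Body $body -ContentType 'application/json'
```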
-
-**Step 2.** Create a new 'Teams hook' in Metrics Advisor
-
-- Select the 'Hooks' tab in the left navigation bar, and select the 'Create hook' button at the top right of the page. 
-- Choose a hook type of 'Teams', then input a name and paste the URL that you copied in the previous step. 
-- Select 'Save'. 
- ![Screenshot to create a Teams hook](../media/tutorial/teams-hook.png)
-
-**Step 3.** Apply the Teams hook to an alert configuration
-
-Go and select one of the data feeds that you have onboarded. Select a metric within the feed and open the metrics detail page. You can create an 'alerting configuration' to subscribe to anomalies that are detected and notify through a Teams channel.
-
-Select the '+' button and choose the hook that you created, fill in the other fields, and select 'Save'. The Teams hook is now applied to the alert configuration, and any new anomalies will be sent as notifications through the Teams channel.
-
-![Screenshot that applies an Teams hook to an alert configuration](../media/tutorial/teams-hook-in-alert.png)
--
-#### [SMTP E-mail](#tab/smtp)
-
-### Send email notification by configuring an SMTP server
-
-This section will share the practice of using an SMTP server to send email notifications on anomalies that are detected. Make sure you have a usable SMTP server and have sufficient permission to get parameters like account name and password.
-
-**Step 1.** Assign your account the 'Cognitive Services Metrics Advisor Administrator' role
-
-- A user with subscription administrator or resource group administrator privileges needs to navigate to the Metrics Advisor resource that was created in the Azure portal, and select the Access control (IAM) tab.
-- Select 'Add role assignments'.
-- Pick the role 'Cognitive Services Metrics Advisor Administrator', and select your account as in the image below.
-- Select the 'Save' button; you've now been added as an administrator of the Metrics Advisor resource. All the above actions need to be performed by a subscription administrator or resource group administrator. It might take up to one minute for the permissions to propagate. 
-![Screenshot that shows how to assign admin role to a specific role](../media/tutorial/access-control.png)
-
-**Step 2.** Configure SMTP server in Metrics Advisor workspace
-
-After you've completed the above steps and have been added as an administrator of the Metrics Advisor resource, wait several minutes for the permissions to propagate. Then sign in to your Metrics Advisor workspace, where you should see a new tab named 'Email setting' in the left navigation panel. Select it to continue the configuration.
-
-Parameters to be filled out:
-- SMTP server name (**required**): Fill in the name of your SMTP server provider. Most server names are written in the form "smtp.domain.com" or "mail.domain.com". Taking Office 365 as an example, it should be set to 'smtp.office365.com'. 
-- SMTP server port (**required**): Port 587 is the default port for SMTP submission on the modern web. While you can use other ports for submission, you should always start with port 587 as the default and only use a different port if circumstances dictate (for example, if your host blocks port 587). 
-- Email sender(s) (**required**): This is the real email account that is responsible for sending emails. You may need to fill in the account name and password of the sender. You can set a quota threshold for the maximum number of alert emails to be sent within one minute for one account. You can set multiple senders if there's a possibility of a large volume of alerts being sent in one minute, but at least one account must be set. 
-- Send on behalf of (optional): If you have multiple senders configured but want alert emails to appear to be sent from one account, you can use this field to align them. Note that you may need to grant the senders permission to send emails on behalf of that account. 
-- Default CC (optional): Set a default email address that will be cc'd on all email alerts. 
-Below is an example of a configured SMTP server:
-
-![Screenshot that shows an example of a configured SMTP server](../media/tutorial/email-setting.png)
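You can optionally verify the SMTP values before entering them on the 'Email setting' page. The following sketch uses the Office 365 example above; the credential and recipient address are placeholders, and `Send-MailMessage` is used here only as a quick connectivity check.

```powershell
# Quick connectivity check for the SMTP values shown above.
# The credential and recipient below are placeholders.
$cred = Get-Credential   # the sender account configured on your SMTP server

Send-MailMessage `
  -SmtpServer 'smtp.office365.com' `
  -Port 587 `
  -UseSsl `
  -Credential $cred `
  -From $cred.UserName `
  -To 'alert-recipient@contoso.com' `
  -Subject 'Metrics Advisor SMTP test' `
  -Body 'If you received this message, the SMTP settings are valid.'
```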
-
-**Step 3.** Create an email hook in Metrics Advisor
-
-After successfully configuring an SMTP server, you're set to create an 'email hook' in the 'Hooks' tab in Metrics Advisor. For more about creating an 'email hook', refer to [article on alerts](../how-tos/alerts.md#email-hook) and follow the steps to completion.
-
-**Step 4.** Apply the email hook to an alert configuration
-
- Go and select one of the data feeds that you onboarded, select a metric within the feed, and open the metric detail page. You can create an 'alerting configuration' to subscribe to the anomalies that are detected and have them sent through email.
-
-Select the '+' button and choose the hook that you created, fill in the other fields, and select 'Save'. You have now successfully set up an email hook with a custom alert configuration, and any new anomalies will be escalated through the hook using the SMTP server.
-
-![Screenshot that applies an email hook to an alert configuration](../media/tutorial/apply-hook.png)
---
-## Next steps
-
-Advance to the next article to learn how to write a valid query to onboard your metrics data.
-> [!div class="nextstepaction"]
-> [Write a valid query](write-a-valid-query.md)
-
applied-ai-services Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/metrics-advisor/whats-new.md
- Title: Metrics Advisor what's new-
-description: Learn about what is new with Metrics Advisor
----- Previously updated : 12/16/2022---
-# Metrics Advisor: what's new in the docs
-
-Welcome! This page covers what's new in the Metrics Advisor docs. Check back every month for information on service changes, doc additions and updates this month.
-
-## December 2022
-
-**Metrics Advisor new portal experience** has been released in preview! [Metrics Advisor Studio](https://metricsadvisor.appliedai.azure.com/) is designed to align the look and feel with the Cognitive Services studio experiences with better usability and significant accessibility improvements. Both the classic portal and the new portal will be available for a period of time, until we officially retire the classic portal.
-
-We **strongly encourage** you to try out the new experience and share your feedback through the smiley face on the top banner.
-
-## May 2022
-
- **Detection configuration auto-tuning** has been released. This feature enables you to customize the service to better surface and personalize anomalies. Instead of the traditional way of setting configurations for each time series or group of time series, a guided experience is provided to capture your detection preferences, such as the level of sensitivity and the types of anomaly patterns, which allows you to tailor the model to your own needs on the back end. Those preferences can then be applied to all the time series you're monitoring. This allows you to reduce configuration costs while achieving better detection results.
-
-Check out [this article](how-tos/configure-metrics.md#tune-the-detection-configuration) to learn how to take advantage of the new feature.
-
-## SDK updates
-
-If you want to learn about the latest updates to Metrics Advisor client SDKs see:
-
-* [.NET SDK change log](https://github.com/Azure/azure-sdk-for-net/blob/master/sdk/metricsadvisor/Azure.AI.MetricsAdvisor/CHANGELOG.md)
-* [Java SDK change log](https://github.com/Azure/azure-sdk-for-jav)
-* [Python SDK change log](https://github.com/Azure/azure-sdk-for-python/blob/master/sdk/metricsadvisor/azure-ai-metricsadvisor/CHANGELOG.md)
-* [JavaScript SDK change log](https://github.com/Azure/azure-sdk-for-js/blob/master/sdk/metricsadvisor/ai-metrics-advisor/CHANGELOG.md)
-
-## June 2021
-
-### New articles
-
-* [Tutorial: Write a valid query to onboard metrics data](tutorials/write-a-valid-query.md)
-* [Tutorial: Enable anomaly notification in Metrics Advisor](tutorials/enable-anomaly-notification.md)
-
-### Updated articles
-
-* [Updated metrics onboarding flow](how-tos/onboard-your-data.md)
-* [Enriched guidance when adding data feeds from different sources](data-feeds-from-different-sources.md)
-* [Updated new notification channel using Microsoft Teams](how-tos/alerts.md#teams-hook)
-* [Updated incident diagnostic experience](how-tos/diagnose-an-incident.md)
-
-## October 2020
-
-### New articles
-
-* [Quickstarts for Metrics Advisor client SDKs for .NET, Java, Python, and JavaScript](quickstarts/rest-api-and-client-library.md)
-
-### Updated articles
-
-* [Update on how Metric Advisor builds an incident tree for multi-dimensional metrics](./faq.yml)
applied-ai-services What Are Applied Ai Services https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/what-are-applied-ai-services.md
- Title: What are Azure Applied AI Services?-
-description: Applied AI Services description.
-keywords: applied ai services, artificial intelligence, applied ai, ai services, cognitive search, applied ai features
---- Previously updated : 05/04/2021---
-# What are Azure Applied AI Services?
-
-Azure Applied AI Services are high-level services focused on empowering developers to quickly unlock the value of data by applying AI to their key business scenarios. Built on top of the AI APIs of Azure Cognitive Services, Azure Applied AI Services are optimized for critical tasks ranging from monitoring and diagnosing metric anomalies, mining knowledge from documents, enhancing the customer experience through transcription analysis, boosting literacy in the classroom, document understanding, and more. Previously, companies would have to orchestrate multiple AI skills, add business logic, and create a UI to go from development to deployment for their scenario, all of which would consume time, expertise, and resources. These "scenario-specific" services provide developers these benefits "out of the box".
-
-## Azure Form Recognizer
-
-Enabling organizations in all industries to consume information hidden within documents to increase productivity, automate business processes and generate knowledge and insights. Azure Form Recognizer is a service that lets you build automated data processing software using machine learning technology. Identify and extract text, key/value pairs, selection marks, tables, and structure from your documents. The service outputs structured data that includes the relationships in the original file, bounding boxes, confidence and more. You quickly get accurate results that are tailored to your specific content without heavy manual intervention or extensive data science expertise. Use Form Recognizer to automate data entry in your applications and enrich your documents' search capabilities. Azure Form Recognizer is built using OCR, Text Analytics and Custom Text from Azure Cognitive Services.
-
-Form Recognizer is composed of custom document processing models, prebuilt models for invoices, receipts, IDs and business cards, and the layout model.
-
-[Learn more about Azure Form Recognizer](./form-recognizer/index.yml)
-
-## Azure Metrics Advisor
-
-Protecting organizations' growth by enabling them to make the right decisions based on intelligence from metrics of businesses, services, and physical assets. Azure Metrics Advisor uses AI to perform data monitoring and anomaly detection in time series data. The service automates the process of applying models to your data, and provides a set of APIs and a web-based workspace for data ingestion, anomaly detection, and diagnostics - without needing to know machine learning. Developers can build AIOps, predictive maintenance, and business monitoring applications on top of the service. Azure Metrics Advisor is built using Anomaly Detector from Azure Cognitive Services.
-
-[Learn more about Azure Metrics Advisor](./metrics-advisor/index.yml)
-
-## Azure Cognitive Search
-
-Unlock valuable information lying latent in all your content in order to perform an action or make decisions. Azure Cognitive Search is the only cloud search service with built-in AI capabilities that enrich all types of information to help you identify and explore relevant content at scale. Use cognitive skills for vision, language, and speech, or use custom machine learning models to uncover insights from all types of content. Azure Cognitive Search also offers semantic search capability, which uses advanced machine learning techniques to understand user intent and contextually rank the most relevant search results. Spend more time innovating and less time maintaining a complex cloud search solution. Azure Cognitive Search is built using Computer Vision and Text Analytics from Azure Cognitive Services.
-
-[Learn more about Azure Cognitive Search](../search/index.yml)
-
-## Azure Immersive Reader
-
-Enhance reading comprehension and achievement with AI. Azure Immersive Reader is an inclusively designed tool that implements proven techniques to improve reading comprehension for new readers, language learners, and people with learning differences such as dyslexia. With the Immersive Reader client library, you can leverage the same technology used in Microsoft Word and Microsoft OneNote to improve your web applications. Azure Immersive Reader is built using Translation and Text to Speech from Azure Cognitive Services.
-
-[Learn more about Azure Immersive Reader](./immersive-reader/index.yml)
-
-## Azure Bot Service
-
-Enable rapid creation of customizable, sophisticated, conversational experiences with pre-built conversational components enabling business value right out of the box. Azure Bot Service Composer is an open-source visual authoring canvas for developers and multidisciplinary teams to build bots. Composer integrates language understanding services such as LUIS and QnA Maker and allows sophisticated composition of bot replies using language generation. Azure Bot Service is built using Speech/Telephony, LUIS, and QnA Maker from Azure Cognitive Services.
-
-[Learn more about Azure Bot Service](/composer/)
-
-## Azure Video Analyzer
-
-Enabling businesses to build automated apps powered by video intelligence without being a video or AI expert. Azure Video Analyzer is a service for building AI-based video solutions and applications. You can generate real-time business insights from video streams, processing data near the source and applying the AI of your choice. Record videos of interest on the edge or in the cloud and combine them with other data to power your business decisions. Azure Video Analyzer is built using Spatial Analysis from Azure Cognitive Services. Azure Video Analyzer for Media is built using Face, Speech, Translation, Text analytics, Custom vision, and textual content moderation from Azure Cognitive Services.
-
-[Learn more about Azure Video Analyzer](../azure-video-analyzer/index.yml)
-
-## Certifications and compliance
-
-Applied AI Services has been awarded certifications such as CSA STAR Certification, FedRAMP Moderate, and HIPAA BAA. You can [download](/samples/browse/?redirectedfrom=TechNet-Gallery "download") certifications for your own audits and security reviews.
-
-To understand privacy and data management, go to the [Trust Center](https://servicetrust.microsoft.com/ "Trust Center").
-
-## Support
-
-Applied AI Services provides several support options to help you move forward with creating intelligent applications. Applied AI Services also has a strong community of developers that can help answer your specific questions. For a full list of options available to you, see:
-- [Submit Feedback on UserVoice](https://aka.ms/AppliedAIUserVoice)
-- [Ask Questions on Microsoft Q&A](/answers/topics/azure-applied-ai-services.html)
-- [Troubleshoot on StackOverflow](https://aka.ms/AppliedAIStackOverflow)
applied-ai-services Why Applied Ai Services https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/why-applied-ai-services.md
- Title: Why Azure Applied AI Services?-
-description: Why Applied AI Services description.
---- Previously updated : 05/05/2021---
-# Why Azure Applied AI Services?
-
-Azure Applied AI Services reduces the time developers need to modernize business processes from months to days. These services help you accelerate time to value for specific business scenarios through a combination of Azure Cognitive Services, task-specific AI, and business logic.
-
-Each Azure Applied AI service addresses a common need and generates new opportunities across organizations, such as analyzing conversations for improved customer experiences, automating document processing for operational productivity, understanding the root cause of anomalies for protecting your organization's growth, and extracting insights from content ranging from documents to videos.
-
-By building on top of the AI models from Azure Cognitive Services as well as providing task-specific AI models and built-in business logic, Azure Applied AI Services enable developers to quickly deploy common scenarios versus building from scratch.
-
-## Benefits
-- Modernize business process: Use task-specific AI to solve your scenario
-- Accelerate development: Go live with your AI solutions quickly
-- Run responsibly anywhere: Enterprise-grade responsible and secure services from the cloud to the edge
-
-
-## What is the difference between Applied AI Services and Cognitive Services?
-
-Both Applied AI Services and Cognitive Services are designed to help developers create intelligent apps. Cognitive Services provides general purpose AI services that serve as the core engine for Applied AI Services.
-
-Applied AI Services builds on top of Cognitive Services while also adding task-specific AI and business logic to optimize for specific use cases so that developers spend less time designing solutions or setting up pipelines.
-
-If there isn't an Applied AI Service available to meet a specific use case, developers can also build their own solutions from scratch, using Cognitive Services as building blocks.
attestation Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/attestation/policy-reference.md
Title: Built-in policy definitions for Azure Attestation description: Lists Azure Policy built-in policy definitions for Azure Attestation. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 07/06/2023 Last updated : 07/18/2023
automanage Automanage Hotpatch https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automanage/automanage-hotpatch.md
- Title: Hotpatch for Windows Server Azure Edition
-description: Learn how hotpatch for Windows Server Azure Edition works and how to enable it
---- Previously updated : 04/18/2023----
-# Hotpatch for new virtual machines
-
-Hotpatching is a new way to install updates on supported _Windows Server Azure Edition_ virtual machines (VMs) that doesn't require a reboot after installation. This article covers information about hotpatch for supported _Windows Server Azure Edition_ VMs, which has the following benefits:
-* Lower workload impact with fewer reboots
-* Faster deployment of updates as the packages are smaller, install faster, and have easier patch orchestration with Azure Update Manager
-* Better protection, as the hotpatch update packages are scoped to Windows security updates that install faster without rebooting
-
-## Supported platforms
-
-> [!IMPORTANT]
-> Hotpatch is currently in PREVIEW for certain platforms in the following table.
-> See the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) for legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
-
-| Operating system | Azure | Azure Stack HCI |
-| -- | -- | -- |
-| Windows Server 2022 Datacenter: Azure Edition Server Core | Generally available (GA) | Public preview |
-| Windows Server 2022 Datacenter: Azure Edition with Desktop Experience | Public preview | Public preview |
-
-> [!NOTE]
-> You can set Hotpatch in Windows Server 2022 Datacenter: Azure Edition with Desktop Experience in the Azure portal by getting the VM preview image from [this link](https://ms.portal.azure.com/#view/Microsoft_Azure_Marketplace/GalleryItemDetailsBladeNopdl/id/microsoftwindowsserver.windowsserverhotpatch-previews/resourceGroupId//resourceGroupLocation//dontDiscardJourney~/false/_provisioningContext~/%7B%22initialValues%22%3A%7B%22subscriptionIds%22%3A%5B%222389fa6a-fd51-4c60-8a9e-95e7e17b76b6%22%2C%221b3510f5-e7cd-457d-a84e-1a0a61013375%22%5D%2C%22resourceGroupNames%22%3A%5B%5D%2C%22locationNames%22%3A%5B%22westus2%22%2C%22centralus%22%2C%22eastus%22%5D%7D%2C%22telemetryId%22%3A%22b73b4782-5eee-41b0-ad74-9a0b98365009%22%2C%22marketplaceItem%22%3A%7B%22categoryIds%22%3A%5B%5D%2C%22id%22%3A%22Microsoft.Portal%22%2C%22itemDisplayName%22%3A%22NoMarketplace%22%2C%22products%22%3A%5B%5D%2C%22version%22%3A%22%22%2C%22productsWithNoPricing%22%3A%5B%5D%2C%22publisherDisplayName%22%3A%22Microsoft.Portal%22%2C%22deploymentName%22%3A%22NoMarketplace%22%2C%22launchingContext%22%3A%7B%22telemetryId%22%3A%22b73b4782-5eee-41b0-ad74-9a0b98365009%22%2C%22source%22%3A%5B%5D%2C%22galleryItemId%22%3A%22%22%7D%2C%22deploymentTemplateFileUris%22%3A%7B%7D%2C%22uiMetadata%22%3Anull%7D%7D). For related information, please refer [this](https://azure.microsoft.com/updates/hotpatch-is-now-available-on-preview-images-of-windows-server-vms-on-azure-with-the-desktop-experience-installation-mode/) Azure update.
-
-## How hotpatch works
-
-Hotpatch works by first establishing a baseline with a Windows Update Latest Cumulative Update. Hotpatches are released periodically (for example, on the second Tuesday of the month) and build on that baseline. A hotpatch updates the VM without requiring a reboot. Periodically (starting at every three months), the baseline is refreshed with a new Latest Cumulative Update.
--
-There are two types of baselines: **Planned baselines** and **Unplanned baselines**.
-* **Planned baselines** are released on a regular cadence, with hotpatch releases in between. Planned baselines include all the updates in a comparable _Latest Cumulative Update_ for that month, and require a reboot.
- * The sample schedule illustrates four planned baseline releases in a calendar year (five total in the diagram), and eight hotpatch releases.
-* **Unplanned baselines** are released when an important update (such as a zero-day fix) is released, and that particular update can't be released as a hotpatch. When unplanned baselines are released, they replace a hotpatch update in that month. Unplanned baselines also include all the updates in a comparable _Latest Cumulative Update_ for that month, and also require a reboot.
- * The sample schedule illustrates two unplanned baselines that would replace the hotpatch releases for those months (the actual number of unplanned baselines in a year isn't known in advance).
-
-## Regional availability
-Hotpatch is available in all global Azure regions.
-
-## How to get started
-
-> [!NOTE]
-> You can preview onboarding Automanage machine best practices during VM creation in the Azure portal using [this link](https://aka.ms/AzureEdition).
-
-To start using hotpatch on a new VM, follow these steps:
-1. Start creating a new VM from the Azure portal
- * You can preview onboarding Automanage machine best practices during VM creation in the Azure portal by visiting the [Azure Marketplace](https://aka.ms/AzureEdition).
-1. Supply details during VM creation
- * Ensure that a supported _Windows Server Azure Edition_ image is selected in the Image dropdown. See [automanage windows server services](automanage-windows-server-services-overview.md#getting-started-with-windows-server-azure-edition) to determine which images are supported.
- * On the Management tab under section 'Guest OS updates', the checkbox for 'Enable hotpatch' is selected. Patch orchestration options are set to 'Azure-orchestrated'.
- * If you create a VM by visiting the [Azure Marketplace](https://aka.ms/AzureEdition), on the Management tab under section 'Azure Automanage', select 'Dev/Test' or 'Production' for 'Azure Automanage environment' to evaluate Automanage machine best practices while in preview.
-
-1. Create your new VM. (To confirm the resulting patch settings from the command line, see the sketch below.)
-
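After the VM is created, you can confirm from the command line that hotpatch and platform-orchestrated patching ended up enabled. This is a hedged sketch that assumes the Az PowerShell module and an existing VM created from a supported image; the resource names are placeholders.

```powershell
# Sign in and inspect the patch settings of the new VM (names are placeholders).
Connect-AzAccount

$vm = Get-AzVM -ResourceGroupName 'my-resource-group' -Name 'my-azure-edition-vm'
$vm.OSProfile.WindowsConfiguration.PatchSettings

# For a hotpatch-enabled VM, the expectation (assumption) is
# PatchMode = AutomaticByPlatform and EnableHotpatching = True.
```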
-<!--
-## Enabling preview access
-
-### REST API
-
-The following example describes how to enable the preview for your subscription:
-
-```
-POST on `/subscriptions/{subscriptionId}/providers/Microsoft.Features/providers/Microsoft.Compute/features/InGuestHotPatchVMPreview/register?api-version=2015-12-01`
-POST on `/subscriptions/{subscriptionId}/providers/Microsoft.Features/providers/Microsoft.Compute/features/InGuestAutoPatchVMPreview/register?api-version=2015-12-01`
-POST on `/subscriptions/{subscriptionId}/providers/Microsoft.Features/providers/Microsoft.Compute/features/InGuestPatchVMPreview/register?api-version=2015-12-01`
-```
-
-Feature registration can take up to 15 minutes. To check the registration status:
-
-```
-GET on `/subscriptions/{subscriptionId}/providers/Microsoft.Features/providers/Microsoft.Compute/features/InGuestHotPatchVMPreview?api-version=2015-12-01`
-GET on `/subscriptions/{subscriptionId}/providers/Microsoft.Features/providers/Microsoft.Compute/features/InGuestAutoPatchVMPreview?api-version=2015-12-01`
-GET on `/subscriptions/{subscriptionId}/providers/Microsoft.Features/providers/Microsoft.Compute/features/InGuestPatchVMPreview?api-version=2015-12-01`
-```
-
-Once the feature has been registered for your subscription, complete the opt-in process by propagating the change into the Compute resource provider.
-
-```
-POST on `/subscriptions/{subscriptionId}/providers/Microsoft.Compute/register?api-version=2019-12-01`
-```
-
-### Azure PowerShell
-
-Use the ```Register-AzProviderFeature``` cmdlet to enable the preview for your subscription.
-
-``` PowerShell
-Register-AzProviderFeature -FeatureName InGuestHotPatchVMPreview -ProviderNamespace Microsoft.Compute
-Register-AzProviderFeature -FeatureName InGuestAutoPatchVMPreview -ProviderNamespace Microsoft.Compute
-Register-AzProviderFeature -FeatureName InGuestPatchVMPreview -ProviderNamespace Microsoft.Compute
-```
-
-Feature registration can take up to 15 minutes. To check the registration status:
-
-``` PowerShell
-Get-AzProviderFeature -FeatureName InGuestHotPatchVMPreview -ProviderNamespace Microsoft.Compute
-Get-AzProviderFeature -FeatureName InGuestAutoPatchVMPreview -ProviderNamespace Microsoft.Compute
-Get-AzProviderFeature -FeatureName InGuestPatchVMPreview -ProviderNamespace Microsoft.Compute
-```
-
-Once the feature has been registered for your subscription, complete the opt-in process by propagating the change into the Compute resource provider.
-
-``` PowerShell
-Register-AzResourceProvider -ProviderNamespace Microsoft.Compute
-```
-
-### Azure CLI
-
-Use ```az feature register``` to enable the preview for your subscription.
-
-```
-az feature register --namespace Microsoft.Compute --name InGuestHotPatchVMPreview
-az feature register --namespace Microsoft.Compute --name InGuestAutoPatchVMPreview
-az feature register --namespace Microsoft.Compute --name InGuestPatchVMPreview
-```
-
-Feature registration can take up to 15 minutes. To check the registration status:
-```
-az feature show --namespace Microsoft.Compute --name InGuestHotPatchVMPreview
-az feature show --namespace Microsoft.Compute --name InGuestAutoPatchVMPreview
-az feature show --namespace Microsoft.Compute --name InGuestPatchVMPreview
-```
-
-Once the feature has been registered for your subscription, complete the opt-in process by propagating the change into the Compute resource provider.
-
-```
-az provider register --namespace Microsoft.Compute
-```
>-
-## Patch installation
-
-[Automatic VM Guest Patching](../virtual-machines/automatic-vm-guest-patching.md) is enabled automatically for all VMs created with a supported _Windows Server Azure Edition_ image. With automatic VM guest patching enabled:
-* Patches classified as Critical or Security are automatically downloaded and applied on the VM.
-* Patches are applied during off-peak hours in the VM's time zone.
-* Azure manages the patch orchestration and patches are applied following [availability-first principles](../virtual-machines/automatic-vm-guest-patching.md#availability-first-updates).
-* Virtual machine health, as determined through platform health signals, is monitored to detect patching failures.
-
-## How does automatic VM guest patching work?
-
-When [Automatic VM Guest Patching](../virtual-machines/automatic-vm-guest-patching.md) is enabled on a VM, the available Critical and Security patches are downloaded and applied automatically. This process kicks off automatically every month when new patches are released. Patch assessment and installation are automatic, and the process includes rebooting the VM as required.
-
-With hotpatch enabled on supported _Windows Server Azure Edition_ VMs, most monthly security updates are delivered as hotpatches that don't require reboots. Latest Cumulative Updates sent on planned or unplanned baseline months require VM reboots. Other Critical or Security patches may also be available periodically, which may require VM reboots.
-
-The VM is assessed automatically every few days and multiple times within any 30-day period to determine the applicable patches for that VM. This automatic assessment ensures that any missing patches are discovered at the earliest possible opportunity.
-
-Patches are installed within 30 days of the monthly patch releases, following [availability-first principles](../virtual-machines/automatic-vm-guest-patching.md#availability-first-updates). Patches are installed only during off-peak hours for the VM, depending on the time zone of the VM. The VM must be running during the off-peak hours for patches to be automatically installed. If a VM is powered off during a periodic assessment, the VM is assessed and applicable patches are installed automatically during the next periodic assessment when the VM is powered on. The next periodic assessment usually happens within a few days.
-
-Definition updates and other patches not classified as Critical or Security won't be installed through Automatic VM Guest Patching.
-
-## Understanding the patch status for your VM
-
-To view the patch status for your VM, navigate to the **Guest + host updates** section for your VM in the Azure portal. Under the **Guest OS updates** section, select 'Go to Hotpatch (Preview)' to view the latest patch status for your VM.
-
-The Hotpatch status associated with your VM is displayed on the page. You can also review whether there are any available patches for your VM that haven't been installed. As described in the 'Patch installation' section, all security and critical updates are automatically installed on your VM using [Automatic VM Guest Patching](../virtual-machines/automatic-vm-guest-patching.md) and no extra actions are required. Patches with other update classifications aren't automatically installed. Instead, they're viewable in the list of available patches under the 'Update compliance' tab. You can also view the history of update deployments on your VM through the 'Update history'. Update history from the past 30 days is displayed, along with patch installation details.
---
-With automatic VM guest patching, your VM is periodically and automatically assessed for available updates. These periodic assessments ensure that available patches are detected. You can view the results of the assessment on the Updates page, including the time of the last assessment. You can also choose to trigger an on-demand patch assessment for your VM at any time using the 'Assess now' option and review the results after the assessment completes.
-
-Similar to on-demand assessment, you can also install patches on-demand for your VM using the 'Install updates now' option. Here you can choose to install all updates under specific patch classifications. You can also specify updates to include or exclude by providing a list of individual knowledge base articles. Patches installed on-demand aren't installed using availability-first principles and may require more reboots and VM downtime for update installation.
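The portal options described above also have command-line equivalents. The following Azure PowerShell sketch assumes the Az.Compute module; the resource names are placeholders, and the cmdlet parameters are worth double-checking against the Az.Compute reference for your module version.

```powershell
$rg     = 'my-resource-group'      # placeholder resource group
$vmName = 'my-azure-edition-vm'    # placeholder VM name

# Trigger an on-demand patch assessment (mirrors 'Assess now' in the portal).
Invoke-AzVmPatchAssessment -ResourceGroupName $rg -VMName $vmName

# Install Critical and Security patches on demand (mirrors 'Install updates now'),
# rebooting only if required and capping the operation at two hours.
Invoke-AzVmInstallPatch -ResourceGroupName $rg -VMName $vmName -Windows `
    -ClassificationToIncludeForWindows 'Critical','Security' `
    -RebootSetting 'IfRequired' -MaximumDuration 'PT2H'
```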
-
-## Supported updates
-
-Hotpatch covers Windows security updates and maintains parity with the content of security updates issued in the regular (non-hotpatch) Windows update channel.
-
-There are some important considerations to running a supported _Windows Server Azure Edition_ VM with hotpatch enabled. Reboots are still required to install updates that aren't included in the hotpatch program. Reboots are also required periodically after a new baseline has been installed. The reboots keep the VM in sync with non-security patches included in the latest cumulative update.
-* Patches that are currently not included in the hotpatch program include non-security updates released for Windows, and non-Windows updates (such as .NET patches). These types of patches need to be installed during a baseline month, and require a reboot.
-
-## Frequently asked questions
-
-### What is hotpatching?
-
-* Hotpatching is a new way to install updates on a supported _Windows Server Azure Edition_ VM in Azure that doesn't require a reboot after installation. It works by patching the in-memory code of running processes without the need to restart the process.
-
-### How does hotpatching work?
-
-* Hotpatching works by establishing a baseline with a Windows Update Latest Cumulative Update, then builds upon that baseline with updates that don't require a reboot to take effect. The baseline is updated periodically with a new cumulative update. The cumulative update includes all security and quality updates and requires a reboot.
-
-### Why should I use hotpatch?
-
-* When you use hotpatch on a supported _Windows Server Azure Edition_ image, your VM will have higher availability (fewer reboots), and faster updates (smaller packages that are installed faster without the need to restart processes). This process results in a VM that is always up to date and secure.
-
-### What types of updates are covered by hotpatch?
-
-* Hotpatch currently covers Windows security updates.
-
-### When will I receive the first hotpatch update?
-
-* Hotpatch updates are typically released on the second Tuesday of each month.
-
-### What will the hotpatch schedule look like?
-
-* Hotpatching works by establishing a baseline with a Windows Update Latest Cumulative Update, then builds upon that baseline with hotpatch updates released monthly. A typical baseline update is released every three months. See the image below for an example of an annual three-month schedule (including example unplanned baselines due to zero-day fixes).
-
- :::image type="content" source="media\automanage-hotpatch\hotpatch-sample-schedule.png" alt-text="Hotpatch Sample Schedule.":::
-
-### Do VMs need a reboot after enrolling in hotpatch?
-
-* Reboots are still required to install updates not included in the hotpatch program, and are required periodically after a baseline (Windows Update Latest Cumulative Update) has been installed. This reboot will keep your VM in sync with all the patches included in the cumulative update. Baselines (which require a reboot) will start out on a three-month cadence and increase over time.
-
-### Are my applications affected when a hotpatch update is installed?
-
-* Because hotpatch patches the in-memory code of running processes without the need to restart the process, your applications are unaffected by the patching process. This is separate from any potential performance and functionality implications of the patch itself.
-
-### Can I turn off hotpatch on my VM?
-
-* You can turn off hotpatch on a VM via the Azure portal. Turning off hotpatch will unenroll the VM from hotpatch, which reverts the VM to typical update behavior for Windows Server. Once you unenroll from hotpatch on a VM, you can re-enroll that VM when the next hotpatch baseline is released.
-
-### Can I upgrade from my existing Windows Server OS?
-
-* Yes, upgrading from existing versions of Windows Server (such as Windows Server 2016 or Windows Server 2019) to _Windows Server 2022 Datacenter: Azure Edition_ is supported.
-
-### How can I get troubleshooting support for hotpatching?
-
-* You can file a [technical support case ticket](https://portal.azure.com/#blade/Microsoft_Azure_Support/HelpAndSupportBlade/newsupportrequest). For the Service option, search for and select **Virtual Machine running Windows** under Compute. Select **Azure Features** for the problem type and **Automatic VM Guest Patching** for the problem subtype.
-
-### Are Azure Virtual Machine Scale Sets Uniform Orchestration Supported on Azure-Edition images?
-
-* The [Windows Server 2022 Azure Edition Images](https://azuremarketplace.microsoft.com/marketplace/apps/microsoftwindowsserver.windowsserver?tab=PlansAndPrice) provide the best in class operating system that includes the innovation built into Windows Server 2022 images plus additional features. Since Azure Edition images support Hotpatching, VM scale sets (VMSS) with Uniform Orchestration can't be created on these images. The blockade on using VMSS Uniform Orchestration on these images will be lifted once [Auto Guest Patching](https://learn.microsoft.com/azure/virtual-machines/automatic-vm-guest-patching?toc=https%3A%2F%2Flearn.microsoft.com%2F%2Fazure%2Fvirtual-machine-scale-sets%2Ftoc.json&bc=https%3A%2F%2Flearn.microsoft.com%2F%2Fazure%2Fbread%2Ftoc.json) and Hotpatching are supported.
-
-## Next steps
-
-* Learn about [Azure Update Management](../automation/update-management/overview.md)
-* Learn more about [Automatic VM Guest Patching](../virtual-machines/automatic-vm-guest-patching.md)
-* Learn more about [Automanage for Windows Server](automanage-windows-server-services-overview.md)
automanage Automanage Windows Server Services Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automanage/automanage-windows-server-services-overview.md
- Title: Automanage for Windows Server
-description: Overview of Azure Automanage for Windows Server capabilities with Windows Server Azure Edition
---- Previously updated : 04/18/2023---
-# Azure Automanage for Windows Server
-
-Azure Automanage for Windows Server brings new capabilities specifically to _Windows Server Azure Edition_. These capabilities include:
-- Hotpatch
-- SMB over QUIC
-- Extended network for Azure
-
-Automanage for Windows Server capabilities can be found in one or more of these _Windows Server Azure Edition_ images:
-
-- Windows Server 2022 Datacenter: Azure Edition (Desktop Experience)
-- Windows Server 2022 Datacenter: Azure Edition (Core)
-
-Capabilities vary by image; see [getting started](#getting-started-with-windows-server-azure-edition) for more detail.
-
-## Automanage for Windows Server capabilities
-
-### Hotpatch
-
-Hotpatch is available on the following images:
-- Windows Server 2022 Datacenter: Azure Edition (Desktop Experience)
-- Windows Server 2022 Datacenter: Azure Edition (Core)
-
-Hotpatch gives you the ability to apply security updates on your VM without rebooting. Additionally, Automanage for Windows Server automates the onboarding, configuration, and orchestration of hot patching. To learn more, see [Hotpatch](automanage-hotpatch.md).
-
-#### Supported platforms
-
-> [!IMPORTANT]
-> Hotpatch is currently in PREVIEW for certain platforms in the following table.
-> See the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) for legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
-
-| Operating system | Azure | Azure Stack HCI |
-| -- | -- | -- |
-| Windows Server 2022 Datacenter: Azure Edition (Core) | Generally available (GA) | Public preview |
-| Windows Server 2022 Datacenter: Azure Edition (Desktop Experience) | Public preview | Public preview |
-
-### SMB over QUIC
-
-SMB over QUIC is available on the following images:
-- Windows Server 2022 Datacenter: Azure Edition (Desktop experience)
-- Windows Server 2022 Datacenter: Azure Edition (Core)
-
-SMB over QUIC offers an "SMB VPN" for telecommuters, mobile device users, and branch offices, providing secure, reliable connectivity to edge file servers over untrusted networks like the Internet. [QUIC](https://datatracker.ietf.org/doc/rfc9000/) is an IETF-standardized protocol used in HTTP/3, designed for maximum data protection with TLS 1.3 and requires encryption that cannot be disabled. SMB behaves normally within the QUIC tunnel, meaning the user experience doesn't change. SMB features like multichannel, signing, compression, continuous availability, and directory leasing work normally.
-
-SMB over QUIC is also integrated with [Automanage machine best practices for Windows Server](automanage-windows-server.md) to help make SMB over QUIC management easier. QUIC uses certificates to provide its encryption and organizations often struggle to maintain complex public key infrastructures. Automanage machine best practices ensure that certificates do not expire without warning and that SMB over QUIC stays enabled for maximum continuity of service.
-
-To learn more, see [SMB over QUIC](/windows-server/storage/file-server/smb-over-quic) and [SMB over QUIC management with Automanage machine best practices](automanage-smb-over-quic.md).
-
-### Extended network for Azure
-
-Extended Network for Azure is available on the following images:
-- Windows Server 2022 Datacenter: Azure Edition (Desktop experience)
-- Windows Server 2022 Datacenter: Azure Edition (Core)
-
-Azure Extended Network enables you to stretch an on-premises subnet into Azure to let on-premises virtual machines keep their original on-premises private IP addresses when migrating to Azure. To learn more, see [Azure Extended Network](/windows-server/manage/windows-admin-center/azure/azure-extended-network).
-
-## Getting started with Windows Server Azure Edition
-
-It's important to consider up front which Automanage for Windows Server capabilities you would like to use, and then choose a corresponding VM image that supports all of those capabilities. Some of the _Windows Server Azure Edition_ images support only a subset of capabilities.
-
-> [!NOTE]
-> If you would like to preview the upcoming version of **Windows Server Azure Edition**, see [Windows Server VNext Datacenter: Azure Edition](windows-server-azure-edition-vnext.md).
-
-### Deciding which image to use
-
-| Image | Capabilities |
-| - | - |
-| Windows Server 2022 Datacenter: Azure Edition (Desktop experience) | SMB over QUIC, Extended network for Azure |
-| Windows Server 2022 Datacenter: Azure Edition (Core) | Hotpatch, SMB over QUIC, Extended network for Azure |
-
-### Creating a VM
-
-To start using Automanage for Windows Server capabilities on a new VM, use your preferred method to create an Azure VM, and select the _Windows Server Azure Edition_ image that corresponds to the set of [capabilities](#getting-started-with-windows-server-azure-edition) that you would like to use.
-
-> [!IMPORTANT]
-> Some capabilities have specific configuration steps to perform during VM creation, and some capabilities that are in preview have specific opt-in and portal viewing requirements. See the individual capability topics above to learn more about using that capability with your VM.
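
As an illustration only, here's a minimal Azure CLI sketch for creating such a VM with hotpatch enabled; the image URN, resource names, and credentials are placeholders and aren't taken from this article.

```console
# Minimal sketch: create a VM from a Windows Server Azure Edition image with hotpatch enabled.
# The image URN, names, and credentials are placeholders; verify the current SKU in the Azure Marketplace.
az group create --name myResourceGroup --location eastus

az vm create \
  --resource-group myResourceGroup \
  --name myAzureEditionVm \
  --image MicrosoftWindowsServer:WindowsServer:2022-datacenter-azure-edition-core:latest \
  --admin-username azureuser \
  --admin-password '<secure-password>' \
  --patch-mode AutomaticByPlatform \
  --enable-hotpatching true
```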
-
-## Next steps
-
-> [!div class="nextstepaction"]
-> [Learn more about Azure Automanage](overview-about.md)
automanage Windows Server Azure Edition Vnext https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automanage/windows-server-azure-edition-vnext.md
- Title: "Preview: Windows Server VNext Datacenter (Azure Edition) for Azure Automanage"
-description: Learn how to preview the upcoming version of Windows Server Azure Edition and know what to expect.
---- Previously updated : 04/02/2022---
-# Preview: Windows Server VNext Datacenter (Azure Edition) for Azure Automanage
-
-> [!IMPORTANT]
-> **Windows Server VNext Datacenter Azure Edition** is currently in public preview.
-> This preview version is provided without a service-level agreement, and we don't recommend it for production workloads. Certain features might not be supported or might have constrained capabilities.
-> For more information, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
-
-[Windows Server: Azure Edition (WSAE)](./automanage-windows-server-services-overview.md) is a new edition of Windows Server focused on innovation and efficiency. Featuring an annual release cadence and optimized to run on Azure properties, WSAE brings new functionality to Windows Server users faster than the traditional Long-Term Servicing Channel (LTSC) editions of Windows Server (2016, 2019, 2022, and so on). The first version of this new variant is Windows Server 2022 Datacenter: Azure Edition, announced at Microsoft Ignite in November 2021.
-
-The annual WSAE releases are delivered using Windows Update, rather than a full OS upgrade. As part of this annual release cadence, the WSAE Insider preview program will spin up each spring with the opportunity to access early builds of the next release - leading to general availability in the fall. Install the preview to get early access to all the new features and functionality prior to general availability. If you are a registered Microsoft Server Insider, you have access to create and use virtual machine images from this preview. For more information and to manage your Insider membership, visit the [Windows Insider home page](https://insider.windows.com/) or [Windows Insiders for Business home page.](https://insider.windows.com/for-business/)
-
-## What's in the preview?
-
-Much like the [Dev Channel of the insider program for Windows 11](https://insider.windows.com/understand-flighting), the WSAE Preview comes directly from an active, still-in-development codebase. As a result, you may encounter some features that are not yet complete, and in some cases, you may see features & functionality that will not appear in the Azure Edition annual release.
-Details regarding each preview will be shared in release announcements posted to the [Windows Server Insiders](https://techcommunity.microsoft.com/t5/windows-server-insiders/bd-p/WindowsServerInsiders) space on Microsoft Tech Community.
-
-## How do I get started?
-
-To create a Windows Server: Azure Edition preview virtual machine in Azure, visit the [Microsoft Server Operating Systems Preview](https://aka.ms/createWSAEpreview) offer in the Azure marketplace.
-
-## Next steps
-
-To learn more about the features and functionality in the Windows Server: Azure Edition preview, [view the most recent preview announcement here.](https://aka.ms/currentWSAEpreview)
automation Configure Alerts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/change-tracking/configure-alerts.md
The following example shows that the file **c:\windows\system32\drivers\etc\host
Let's use this example to discuss the steps for creating alerts on a change.
-1. On the **Change tracking** page from your Automation account, select **Log Analytics**.
+1. On the **Change tracking** page from your Virtual Machine, select **Log Analytics**.
2. In the Logs search, look for content changes to the **hosts** file with the query `ConfigurationChange | where FieldsChanged contains "FileContentChecksum" and FileSystemPath contains "hosts"`. This query looks for content changes for files with fully qualified path names containing the word `hosts`. You can also ask for a specific file by changing the path portion to its fully qualified form, for example, using `FileSystemPath == "c:\windows\system32\drivers\etc\hosts"`.
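
If you prefer the command line, the same query can be run with the Azure CLI; the following is a minimal sketch in which the workspace GUID and timespan are placeholders.

```console
# Minimal sketch: run the same change-tracking query from the Azure CLI (workspace GUID is a placeholder).
az monitor log-analytics query \
  --workspace "00000000-0000-0000-0000-000000000000" \
  --analytics-query 'ConfigurationChange | where FieldsChanged contains "FileContentChecksum" and FileSystemPath contains "hosts"' \
  --timespan P1D
```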
Once you have your alerts configured, you can set up an action group, which is a
* Learn about [log queries](../../azure-monitor/logs/log-query-overview.md) to retrieve and analyze data from a Log Analytics workspace.
-* [Analyze usage in Log Analytics workspace](../../azure-monitor/logs/analyze-usage.md) describes how to analyze and alert on your data usage.
+* [Analyze usage in Log Analytics workspace](../../azure-monitor/logs/analyze-usage.md) describes how to analyze and alert on your data usage.
automation Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/policy-reference.md
Title: Built-in policy definitions for Azure Automation description: Lists Azure Policy built-in policy definitions for Azure Automation. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 07/06/2023 Last updated : 07/18/2023
azure-app-configuration Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-app-configuration/policy-reference.md
Title: Built-in policy definitions for Azure App Configuration description: Lists Azure Policy built-in policy definitions for Azure App Configuration. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 07/06/2023 Last updated : 07/18/2023
azure-app-configuration Pull Key Value Devops Pipeline https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-app-configuration/pull-key-value-devops-pipeline.md
Title: Pull settings to App Configuration with Azure Pipelines
-description: Learn to use Azure Pipelines to pull key-values to an App Configuration Store
+ Title: Pull settings from App Configuration with Azure Pipelines
+description: Learn how to use Azure Pipelines to pull key-values from an App Configuration Store
Last updated 11/17/2020
-# Pull settings to App Configuration with Azure Pipelines
+# Pull settings from App Configuration with Azure Pipelines
The [Azure App Configuration](https://marketplace.visualstudio.com/items?itemName=AzureAppConfiguration.azure-app-configuration-task) task pulls key-values from your App Configuration store and sets them as Azure pipeline variables, which can be consumed by subsequent tasks. This task complements the [Azure App Configuration Push](https://marketplace.visualstudio.com/items?itemName=AzureAppConfiguration.azure-app-configuration-task-push) task that pushes key-values from a configuration file into your App Configuration store. For more information, see [Push settings to App Configuration with Azure Pipelines](push-kv-devops-pipeline.md).
azure-arc Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/kubernetes/policy-reference.md
Title: Built-in policy definitions for Azure Arc-enabled Kubernetes description: Lists Azure Policy built-in policy definitions for Azure Arc-enabled Kubernetes. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 07/06/2023 Last updated : 07/18/2023 #
azure-arc Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/servers/policy-reference.md
Title: Built-in policy definitions for Azure Arc-enabled servers description: Lists Azure Policy built-in policy definitions for Azure Arc-enabled servers (preview). These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 07/06/2023 Last updated : 07/18/2023
azure-arc Prepare Extended Security Updates https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/servers/prepare-extended-security-updates.md
+
+ Title: How to prepare to deliver Extended Security Updates for Windows Server 2012 through Azure Arc
+description: Learn how to prepare to deliver Extended Security Updates for Windows Server 2012 through Azure Arc.
Last updated : 07/12/2023+++
+# Prepare to deliver Extended Security Updates for Windows Server 2012
+
+With Windows Server 2012 and Windows Server 2012 R2 reaching end of support on October 10, 2023, Azure Arc-enabled servers let you enroll your existing Windows Server 2012/2012 R2 machines in [Extended Security Updates (ESUs)](/windows-server/get-started/extended-security-updates-overview). By affording both cost flexibility and an enhanced delivery experience, Azure Arc also better positions you to migrate to Azure.
+
+This article helps you understand the benefits of delivering ESUs through Arc-enabled servers and how to prepare for it.
+
+## Key benefits
+
+Delivering ESUs to your Windows Server 2012/2012 R2 machines provides the following key benefits:
+
+- **Pay-as-you-go:** Flexibility to sign up for a monthly subscription service with the ability to migrate mid-year.
+
+- **Azure billed:** You can draw down from your existing [Microsoft Azure Consumption Commitment](/marketplace/azure-consumption-commitment-benefit) (MACC) and analyze your costs using [Azure Cost Management and Billing](../../cost-management-billing/cost-management-billing-overview.md).
+
+- **Built-in inventory:** The coverage and enrollment status of Windows Server 2012/2012 R2 ESUs on eligible Arc-enabled servers are identified in the Azure portal, highlighting gaps and status changes.
+
+- **Keyless delivery:** The enrollment of ESUs on Azure Arc-enabled Windows Server 2012/2012 R2 machines won't require the acquisition or activation of keys.
+
+Other Azure services are also available through Azure Arc-enabled servers, such as:
+
+* [Microsoft Defender for Cloud](../../defender-for-cloud/defender-for-cloud-introduction.md) - As part of the cloud security posture management (CSPM) pillar, it provides server protections through [Microsoft Defender for Servers](../../defender-for-cloud/plan-defender-for-servers.md) to help protect you from various cyber threats and vulnerabilities.
+* [Update management center (preview)](../../update-center/overview.md) - Unified management and governance of update compliance that includes not only Azure and hybrid machines, but also ESU update compliance for all your Windows Server 2012/2012 R2 machines.
+* [Azure Policy](../../governance/policy/overview.md) helps to enforce organizational standards and to assess compliance at-scale. Beyond providing an aggregated view to evaluate the overall state of the environment, Azure Policy helps to bring your resources to compliance through bulk and automatic remediation.
+
+ >[!NOTE]
+ >Activation of ESU is planned for the third quarter of 2023. Support for managing ESU-eligible Windows Server 2012/2012 R2 machines through Azure services such as Update management center (preview) and Azure Policy is also planned for the third quarter.
+
+## Prepare delivery of ESUs
+
+To prepare for this new offer, plan to onboard your machines to Azure Arc-enabled servers by installing the [Azure Connected Machine agent](agent-overview.md) and establishing a connection to Azure.
+
+- **Deployment options:** There are several at-scale onboarding options for Azure Arc-enabled servers, including running a [Custom Task Sequence](onboard-configuration-manager-custom-task.md) through Configuration Manager and deploying a [Scheduled Task through Group Policy](onboard-group-policy-powershell.md).
+
+- **Networking:** Connectivity options include public endpoint, proxy server, and private link or Azure ExpressRoute. Review the [networking prerequisites](network-requirements.md) to prepare your non-Azure environment for deployment to Azure Arc.
+
+We recommend you deploy your machines to Azure Arc in preparation for when the related Azure services deliver supported functionality to manage ESU. Once these machines are onboarded to Azure Arc-enabled servers, you'll have visibility into their ESU coverage and can enroll them through the Azure portal or by using Azure Policy one month before Windows Server 2012 end of support. Billing for this service starts in October 2023, after Windows Server 2012 end of support.
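
For illustration, a minimal sketch of what connecting a single machine with the Azure Connected Machine agent looks like; the service principal, resource group, and region values are placeholders, and the at-scale methods linked above generate an equivalent script for you.

```console
# Minimal sketch: connect a single Windows Server 2012/2012 R2 machine to Azure Arc.
# Run in an elevated shell on the machine after installing the Connected Machine agent; all values are placeholders.
azcmagent connect \
  --service-principal-id "<app-id>" \
  --service-principal-secret "<secret>" \
  --subscription-id "<subscription-id>" \
  --tenant-id "<tenant-id>" \
  --resource-group "myArcServers" \
  --location "eastus"
```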
+
+## Next steps
+
+* Learn about best practices and design patterns through the [Azure Arc landing zone accelerator for hybrid and multicloud](/azure/cloud-adoption-framework/scenarios/hybrid/arc-enabled-servers/eslz-identity-and-access-management).
+* Learn more about [Arc-enabled servers](overview.md) and how they work with Azure through the Azure Connected Machine agent.
+* Explore options for [onboarding your machines](plan-at-scale-deployment.md) to Azure Arc-enabled servers.
azure-arc Enable Group Management https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/system-center-virtual-machine-manager/enable-group-management.md
+
+ Title: Enable guest management
+description: The article provides information about how to enable guest management and includes supported environments and prerequisites.
Last updated : 07/18/2023+
+ms.
+++
+keywords: "VMM, Arc, Azure"
+++
+# Enable guest management
+
+This article provides information about how to enable guest management, including supported environments and prerequisites.
+
+## Supported environments
+
+- **VMM version**: VMs managed by VMM 2019 UR5 or later and VMM 2022 UR1 or later.
+
+- **Host OS version**: VMs running on Windows Server 2016, 2019, 2022, Azure Stack HCI 21H2, and Azure Stack HCI 22H2.
+
+- **Guest OS version**: Windows Server 2012 R2, 2016, 2019, 2022, Windows 10, and Windows 11.
+
+- Linux OS versions aren't supported.
+
+## Prerequisites
+
+Ensure the following prerequisites are met:
+
+- VM is running in a supported environment.
+- VM can connect through the firewall to communicate over the internet.
+- [These URLs](/azure/azure-arc/servers/network-requirements#urls) aren't blocked.
+- The VM is powered on, and the resource bridge has network connectivity to the host running the VM.
+
+## Enable guest management
+
+To enable guest management, do the following:
+
+1. Sign in to the [Azure portal](https://portal.azure.com/), search for your SCVMM VM, and select it.
+2. Select **Configuration**.
+3. Select **Enable guest management** and provide the administrator username and password to enable guest management.
+4. Select **Apply**.
azure-cache-for-redis Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-cache-for-redis/policy-reference.md
Title: Built-in policy definitions for Azure Cache for Redis description: Lists Azure Policy built-in policy definitions for Azure Cache for Redis. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 07/06/2023 Last updated : 07/18/2023
azure-functions Durable Functions Extension Upgrade https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/durable/durable-functions-extension-upgrade.md
func extensions install
However, if you **only** wish to install the latest Durable Functions extension release, you would run the following command: ```console
-func extensions install Microsoft.Azure.WebJobs.Extensions.DurableTask -v <version>
+func extensions install -p Microsoft.Azure.WebJobs.Extensions.DurableTask -v <version>
``` For example: ```console
-func extensions install Microsoft.Azure.WebJobs.Extensions.DurableTask -v 2.9.1
+func extensions install -p Microsoft.Azure.WebJobs.Extensions.DurableTask -v 2.9.1
```
azure-government Azure Secure Isolation Guidance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-government/azure-secure-isolation-guidance.md
recommendations: false Previously updated : 05/06/2023 Last updated : 07/14/2023 # Azure guidance for secure isolation
Azure provides extensive options for [data encryption at rest](../security/funda
In general, controlling key access and ensuring efficient bulk encryption and decryption of data is accomplished via the following types of encryption keys (as shown in Figure 16), although other encryption keys can be used as described in *[Storage service encryption](#storage-service-encryption)* section. -- **Data Encryption Key (DEK)** is a symmetric AES-256 key that is used for bulk encryption and decryption of a partition or a block of data. The cryptographic modules are FIPS 140 validated as part of the [Windows FIPS validation program](/windows/security/threat-protection/fips-140-validation#modules-used-by-windows-server). Access to DEKs is needed by the resource provider or application instance that is responsible for encrypting and decrypting a specific block of data. A single resource may have many partitions and many DEKs. When a DEK is replaced with a new key, only the data in its associated block must be re-encrypted with the new key. DEK is encrypted by the Key Encryption Key (KEK) and is never stored unencrypted.-- **Key Encryption Key (KEK)** is an asymmetric RSA key that is optionally provided by you. This key is utilized to encrypt the Data Encryption Key (DEK) using Azure Key Vault and exists only in Azure Key Vault. As mentioned previously in *[Data encryption key management](#data-encryption-key-management)* section, Azure Key Vault can use FIPS 140 validated hardware security modules (HSMs) to safeguard encryption keys. These keys aren't exportable and there can be no clear-text version of the KEK outside the HSMs ΓÇô the binding is enforced by the underlying HSM. KEK is never exposed directly to the resource provider or other services. Access to KEK is controlled by permissions in Azure Key Vault and access to Azure Key Vault must be authenticated through Azure Active Directory. These permissions can be revoked to block access to this key and, by extension, the data that is encrypted using this key as the root of the key chain.
+- **Data Encryption Key (DEK)** is a symmetric AES-256 key that is used for bulk encryption and decryption of a partition or a block of data. The cryptographic modules are FIPS 140 validated as part of the [Windows FIPS validation program](/windows/security/threat-protection/fips-140-validation#modules-used-by-windows-server). Access to DEKs is needed by the resource provider or application instance that is responsible for encrypting and decrypting a specific block of data. A single resource may have many partitions and many DEKs. When a DEK is replaced with a new key, only the data in its associated block must be re-encrypted with the new key. The DEK is always stored encrypted by the Key Encryption Key (KEK).
+- **Key Encryption Key (KEK)** is an asymmetric RSA key that is optionally provided by you. This key encryption key is utilized to encrypt the Data Encryption Key (DEK) using Azure Key Vault or Managed HSM. As mentioned previously in *[Data encryption key management](#data-encryption-key-management)* section, Azure Key Vault can use FIPS 140 validated hardware security modules (HSMs) to safeguard encryption keys; Managed HSM always uses FIPS 140 validated hardware security modules. These keys aren't exportable and there can be no clear-text version of the KEK outside the HSMs ΓÇô the binding is enforced by the underlying HSM. KEK is never exposed directly to the resource provider or other services. Access to KEK is controlled by permissions in Azure Key Vault and access to Azure Key Vault must be authenticated through Azure Active Directory. These permissions can be revoked to block access to this key and, by extension, the data that is encrypted using this key as the root of the key chain.
:::image type="content" source="./media/secure-isolation-fig16.png" alt-text="Data Encryption Keys are encrypted using your key stored in Azure Key Vault":::

**Figure 16.** Data Encryption Keys are encrypted using your key stored in Azure Key Vault

Therefore, the encryption key hierarchy involves both DEK and KEK. DEK is encrypted with KEK and stored separately for efficient access by resource providers in bulk encryption and decryption operations. However, only an entity with access to the KEK can decrypt the DEK. The entity that has access to the KEK may be different than the entity that requires the DEK. Since the KEK is required to decrypt the DEK, the KEK is effectively a single point by which DEK can be deleted via deletion of the KEK.
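To make the KEK portion of this hierarchy concrete, the following is a minimal Azure CLI sketch that creates an RSA key in a key vault for use as a customer-managed KEK; the vault and key names are placeholders.

```console
# Minimal sketch: create an RSA key in Azure Key Vault to use as a customer-managed KEK (names are placeholders).
az keyvault create \
  --resource-group myResourceGroup \
  --name myKekVault \
  --location eastus \
  --enable-purge-protection true

az keyvault key create --vault-name myKekVault --name myKek --kty RSA --size 2048
```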
-Detailed information about various [data encryption models](../security/fundamentals/encryption-models.md) and specifics on key management for many Azure platform services is available in online documentation. Moreover, some Azure services provide other [encryption models](../security/fundamentals/encryption-overview.md#azure-encryption-models), including client-side encryption, to further encrypt their data using more granular controls. The rest of this section covers encryption implementation for key Azure storage scenarios such as Storage service encryption and Azure Disk encryption for IaaS Virtual Machines, including server-side encryption for managed disks.
+Detailed information about various [data encryption models](../security/fundamentals/encryption-models.md) and specifics on key management for many Azure platform services is available in online documentation. Moreover, some Azure services provide other [encryption models](../security/fundamentals/encryption-overview.md#azure-encryption-models), including client-side encryption, to further encrypt their data using more granular controls. The rest of this section covers encryption implementation for key Azure storage scenarios such as Storage service encryption and disk encryption for IaaS Virtual Machines.
> [!TIP] > You should review published Azure data encryption documentation for guidance on how to protect your data.
However, you can also choose to manage encryption with your own keys by specifyi
Storage service encryption is enabled by default for all new and existing storage accounts and it [can't be disabled](../storage/common/storage-service-encryption.md#about-azure-storage-service-side-encryption). As shown in Figure 17, the encryption process uses the following keys to help ensure cryptographic certainty of data isolation at rest: -- *Data Encryption Key (DEK)* is a symmetric AES-256 key that is used for bulk encryption, and it's unique per storage account in Azure Storage. It's generated by the Azure Storage service as part of the storage account creation. This key is encrypted by the Key Encryption Key (KEK) and is never stored unencrypted.-- *Key Encryption Key (KEK)* is an asymmetric RSA-2048 key that is used to encrypt the Data Encryption Key (DEK) using Azure Key Vault and exists only in Azure Key Vault. It's never exposed directly to the Azure Storage service or other services. You must use Azure Key Vault to store your customer-managed keys for Storage service encryption.-- *Stamp Key (SK)* is a symmetric AES-256 key that provides a third layer of encryption key security and is unique to each Azure Storage stamp, that is, cluster of storage hardware. This key is used to perform the final wrap of the DEK that results in the following key chain hierarchy: SK(KEK(DEK)).
+- *Data Encryption Key (DEK)* is a symmetric AES-256 key that is used for bulk encryption, and it's unique per storage account in Azure Storage. It's generated by the Azure Storage service as part of the storage account creation and is used to derive a unique key for each block of data. The Storage service always encrypts the DEK using either the Stamp Key or, if the customer has configured customer-managed key encryption, a Key Encryption Key.
+- *Key Encryption Key (KEK)* is an asymmetric RSA (2048 or greater) key managed by the customer and is used to encrypt the Data Encryption Key (DEK) using Azure Key Vault or Managed HSM. It's never exposed directly to the Azure Storage service or other services.
+- *Stamp Key (SK)* is a symmetric AES-256 key managed by Azure Storage. This key is used to protect the DEK when not using a customer-managed key.
-These three keys are combined to protect any data that is written to Azure Storage and provide cryptographic certainty for logical data isolation in Azure Storage. As mentioned previously, Azure Storage service encryption is enabled by default and it can't be disabled.
+These keys protect any data that is written to Azure Storage and provide cryptographic certainty for logical data isolation in Azure Storage. As mentioned previously, Azure Storage service encryption is enabled by default and it can't be disabled.
:::image type="content" source="./media/secure-isolation-fig17.png" alt-text="Encryption flow for Storage service encryption":::

**Figure 17.** Encryption flow for Storage service encryption

Storage accounts are encrypted regardless of their performance tier (standard or premium) or deployment model (Azure Resource Manager or classic). All Azure Storage [redundancy options](../storage/common/storage-redundancy.md) support encryption and all copies of a storage account are encrypted. All Azure Storage resources are encrypted, including blobs, disks, files, queues, and tables. All object metadata is also encrypted.
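As a minimal sketch, the following Azure CLI commands point a storage account at a customer-managed key for Storage service encryption; all names are placeholders, and the storage account's managed identity must be granted key permissions before the second command succeeds.

```console
# Minimal sketch: configure customer-managed keys for Storage service encryption (all names are placeholders).
# Give the storage account a system-assigned identity, then grant that identity get/wrap/unwrap permissions on the key.
az storage account update \
  --resource-group myResourceGroup \
  --name mystorageaccount \
  --assign-identity

az storage account update \
  --resource-group myResourceGroup \
  --name mystorageaccount \
  --encryption-key-source Microsoft.Keyvault \
  --encryption-key-vault "https://myKekVault.vault.azure.net" \
  --encryption-key-name myKek
# In Azure Government, the Key Vault DNS suffix is vault.usgovcloudapi.net rather than vault.azure.net.
```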
-Because data encryption is performed by the Storage service, server-side encryption with CMK enables you to use any operating system types and images for your VMs. For your Windows and Linux IaaS VMs, Azure also provides Azure Disk encryption that enables you to encrypt managed disks with CMK within the Guest VM, as described in the next section. Combining Azure Storage service encryption and Disk encryption effectively enables [double encryption of data at rest](../virtual-machines/disks-enable-double-encryption-at-rest-portal.md).
+Because data encryption is performed by the Storage service, server-side encryption with CMK enables you to use any operating system types and images for your VMs. For your Windows and Linux IaaS VMs, Azure also provides Azure Disk encryption that enables you to encrypt managed disks with CMK within the Guest VM or EncryptionAtHost that encrypts disk data right at the host, as described in the next sections. Azure Storage service encryption also offers [double encryption of disk data at rest](../virtual-machines/disks-enable-double-encryption-at-rest-portal.md).
#### Azure Disk encryption

Azure Storage service encryption encrypts the page blobs that store Azure Virtual Machine disks. Moreover, you may optionally use [Azure Disk encryption](../virtual-machines/disk-encryption-overview.md) to encrypt Azure [Windows](../virtual-machines/windows/disk-encryption-overview.md) and [Linux](../virtual-machines/linux/disk-encryption-overview.md) IaaS Virtual Machine disks to increase storage isolation and assure cryptographic certainty of your data stored in Azure. This encryption includes [managed disks](../virtual-machines/managed-disks-overview.md), as described later in this section. Azure disk encryption uses the industry standard [BitLocker](/windows/security/information-protection/bitlocker/bitlocker-overview) feature of Windows and the [DM-Crypt](https://en.wikipedia.org/wiki/Dm-crypt) feature of Linux to provide OS-based volume encryption that is integrated with Azure Key Vault.
Drive encryption through BitLocker and DM-Crypt is a data protection feature tha
For managed disks, Azure Disk encryption allows you to encrypt the OS and Data disks used by an IaaS virtual machine; however, Data can't be encrypted without first encrypting the OS volume. The solution relies on Azure Key Vault to help you control and manage the disk encryption keys in key vaults. You can supply your own encryption keys, which are safeguarded in Azure Key Vault to support *bring your own key (BYOK)* scenarios, as described previously in *[Data encryption key management](#data-encryption-key-management)* section.
-Azure Disk encryption isn't supported by Managed HSM or an on-premises key management service. Only key vaults managed by the Azure Key Vault service can be used to safeguard customer-managed encryption keys for Azure Disk encryption. See [Encryption at host](#encryption-at-host) for other options involving Managed HSM.
+Azure Disk encryption does not support Managed HSM or an on-premises key management service. Only key vaults managed by the Azure Key Vault service can be used to safeguard customer-managed encryption keys for Azure Disk encryption. See [Encryption at host](#encryption-at-host) for other options involving Managed HSM.
> [!NOTE]
> Detailed instructions are available for creating and configuring a key vault for Azure Disk encryption with both **[Windows](../virtual-machines/windows/disk-encryption-key-vault.md)** and **[Linux](../virtual-machines/linux/disk-encryption-key-vault.md)** VMs.
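
For illustration, a minimal Azure CLI sketch of enabling Azure Disk encryption on an existing Windows VM, assuming a key vault that has already been enabled for disk encryption as described in the linked instructions; all names are placeholders.

```console
# Minimal sketch: enable Azure Disk encryption on an existing Windows VM (names are placeholders).
# The key vault must already be enabled for disk encryption.
az vm encryption enable \
  --resource-group myResourceGroup \
  --name myVm \
  --disk-encryption-keyvault myAdeVault \
  --volume-type ALL
```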
For [Windows VMs](../virtual-machines/windows/disk-encryption-faq.yml), Azure Di
[Azure managed disks](../virtual-machines/managed-disks-overview.md) are block-level storage volumes that are managed by Azure and used with Azure Windows and Linux virtual machines. They simplify disk management for Azure IaaS VMs by handling storage account management transparently for you. Azure managed disks automatically encrypt your data by default using [256-bit AES encryption](../virtual-machines/disk-encryption.md) that is FIPS 140 validated. For encryption key management, you have the following choices: - [Platform-managed keys](../virtual-machines/disk-encryption.md#platform-managed-keys) is the default choice that provides transparent data encryption at rest for managed disks whereby keys are managed by Microsoft.-- [Customer-managed keys](../virtual-machines/disk-encryption.md#customer-managed-keys) enables you to have control over your own keys that can be imported into Azure Key Vault or generated inside Azure Key Vault. This approach relies on two sets of keys as described previously: DEK and KEK. DEK encrypts the data using an AES-256 based encryption and is in turn encrypted by an RSA-2048 KEK that is stored in Azure Key Vault. Only key vaults can be used to safeguard customer-managed keys; managed HSMs don't support Azure Disk encryption.
+- [Customer-managed keys](../virtual-machines/disk-encryption.md#customer-managed-keys) enables you to have control over your own keys that can be imported into or generated inside Azure Key Vault or Managed HSM. This approach relies on two sets of keys as described previously: DEK and KEK. DEK encrypts the data using an AES-256 based encryption and is in turn encrypted by an RSA KEK that is stored in Azure Key Vault or Managed HSM.
Customer-managed keys (CMK) enable you to have [full control](../virtual-machines/disk-encryption.md#full-control-of-your-keys) over your encryption keys. You can grant access to managed disks in your Azure Key Vault so that your keys can be used for encrypting and decrypting the DEK. You can also disable your keys or revoke access to managed disks at any time. Finally, you have full audit control over key usage with Azure Key Vault monitoring to ensure that only managed disks or other authorized resources are accessing your encryption keys.

##### *Encryption at host*
-Encryption at host ensures that data stored on the VM host is encrypted at rest and flows encrypted to the Storage service. Disks with encryption at host enabled aren't encrypted with Azure Storage encryption; instead, the server hosting your VM provides the encryption for your data, and that encrypted data flows into Azure Storage. For more information, see [Encryption at host - End-to-end encryption for your VM data](../virtual-machines/disk-encryption.md#encryption-at-hostend-to-end-encryption-for-your-vm-data). As mentioned previously, [Azure Disk encryption](../virtual-machines/disk-encryption-overview.md) for virtual machines and virtual machine scale sets isn't supported by Managed HSM. However, encryption at host with CMK is supported by Managed HSM.
+Encryption at host works with server-side encryption to ensure data stored on the VM host is encrypted at rest and flows encrypted to the Storage service. The server hosting your VM encrypts your data with no performance impact or requirement for code running in your guest VM, and that encrypted data flows into Azure Storage using the keys configured for server-side encryption. For more information, see [Encryption at host - End-to-end encryption for your VM data](../virtual-machines/disk-encryption.md#encryption-at-hostend-to-end-encryption-for-your-vm-data). Encryption at host with CMK can use keys stored in Managed HSM or Key Vault, just like server-side encryption for managed disks.
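
As an illustration, the following sketch shows the subscription feature registration and the VM creation flag involved in encryption at host; all names and the image are placeholders, and registration can take time to propagate.

```console
# Minimal sketch: register the feature once per subscription, then create a VM with encryption at host enabled.
# Names and image are placeholders; the chosen VM size must support encryption at host.
az feature register --namespace Microsoft.Compute --name EncryptionAtHost
az provider register --namespace Microsoft.Compute

az vm create \
  --resource-group myResourceGroup \
  --name myEncryptedHostVm \
  --image "<os-image-urn>" \
  --admin-username azureuser \
  --admin-password '<secure-password>' \
  --encryption-at-host true
```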
You're [always in control of your customer data](https://www.microsoft.com/trust-center/privacy/data-management) in Azure. You can access, extract, and delete your customer data stored in Azure at will. When you terminate your Azure subscription, Microsoft takes the necessary steps to ensure that you continue to own your customer data. A common concern upon data deletion or subscription termination is whether another customer or Azure administrator can access your deleted data. The following sections explain how data deletion, retention, and destruction work in Azure.
azure-government Compare Azure Government Global Azure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-government/compare-azure-government-global-azure.md
Table below lists API endpoints in Azure vs. Azure Government for accessing and
||Custom Vision|cognitiveservices.azure.com|cognitiveservices.azure.us </br>[Portal](https://www.customvision.azure.us/)|| ||Content Moderator|cognitiveservices.azure.com|cognitiveservices.azure.us|| ||Face API|cognitiveservices.azure.com|cognitiveservices.azure.us||
-||Language Understanding|cognitiveservices.azure.com|cognitiveservices.azure.us </br>[Portal](https://luis.azure.us/)|Part of [Cognitive Services for Language](../cognitive-services/language-service/index.yml)|
+||Language Understanding|cognitiveservices.azure.com|cognitiveservices.azure.us </br>[Portal](https://luis.azure.us/)|Part of [Cognitive Services for Language](../ai-services/language-service/index.yml)|
||Personalizer|cognitiveservices.azure.com|cognitiveservices.azure.us||
-||QnA Maker|cognitiveservices.azure.com|cognitiveservices.azure.us|Part of [Cognitive Services for Language](../cognitive-services/language-service/index.yml)|
-||Speech service|See [STT API docs](../cognitive-services/speech-service/rest-speech-to-text-short.md#regions-and-endpoints)|[Speech Studio](https://speech.azure.us/)</br></br>See [Speech service endpoints](../cognitive-services/Speech-Service/sovereign-clouds.md)</br></br>**Speech translation endpoints**</br>Virginia: `https://usgovvirginia.s2s.speech.azure.us`</br>Arizona: `https://usgovarizona.s2s.speech.azure.us`</br>||
-||Text Analytics|cognitiveservices.azure.com|cognitiveservices.azure.us|Part of [Cognitive Services for Language](../cognitive-services/language-service/index.yml)|
-||Translator|See [Translator API docs](../cognitive-services/translator/reference/v3-0-reference.md#base-urls)|cognitiveservices.azure.us||
+||QnA Maker|cognitiveservices.azure.com|cognitiveservices.azure.us|Part of [Cognitive Services for Language](../ai-services/language-service/index.yml)|
+||Speech service|See [STT API docs](../ai-services/speech-service/rest-speech-to-text-short.md#regions-and-endpoints)|[Speech Studio](https://speech.azure.us/)</br></br>See [Speech service endpoints](../ai-services/Speech-Service/sovereign-clouds.md)</br></br>**Speech translation endpoints**</br>Virginia: `https://usgovvirginia.s2s.speech.azure.us`</br>Arizona: `https://usgovarizona.s2s.speech.azure.us`</br>||
+||Text Analytics|cognitiveservices.azure.com|cognitiveservices.azure.us|Part of [Cognitive Services for Language](../ai-services/language-service/index.yml)|
+||Translator|See [Translator API docs](../ai-services/translator/reference/v3-0-reference.md#base-urls)|cognitiveservices.azure.us||
|**Analytics**|Azure HDInsight|azurehdinsight.net|azurehdinsight.us|| ||Event Hubs|servicebus.windows.net|servicebus.usgovcloudapi.net|| ||Power BI|app.powerbi.com|app.powerbigov.us|[Power BI US Gov](https://powerbi.microsoft.com/documentation/powerbi-service-govus-overview/)|
For information on how to deploy Bot Framework and Azure Bot Service bots to Azu
For feature variations and limitations, see [Azure Machine Learning feature availability across cloud regions](../machine-learning/reference-machine-learning-cloud-parity.md).
-### [Cognitive
+### [Cognitive
The following Content Moderator **features aren't currently available** in Azure Government: - Review UI and Review APIs.
-### [Cognitive
+### [Cognitive
The following Language Understanding **features aren't currently available** in Azure Government: - Speech Requests - Prebuilt Domains
-Cognitive Services Language Understanding (LUIS) is part of [Cognitive Services for Language](../cognitive-services/language-service/index.yml).
+Cognitive Services Language Understanding (LUIS) is part of [Cognitive Services for Language](../ai-services/language-service/index.yml).
-### [Cognitive
+### [Cognitive
-For feature variations and limitations, including API endpoints, see [Speech service in sovereign clouds](../cognitive-services/speech-service/sovereign-clouds.md).
+For feature variations and limitations, including API endpoints, see [Speech service in sovereign clouds](../ai-services/speech-service/sovereign-clouds.md).
-### [Cognitive
+### [Cognitive
-For feature variations and limitations, including API endpoints, see [Translator in sovereign clouds](../cognitive-services/translator/sovereign-clouds.md).
+For feature variations and limitations, including API endpoints, see [Translator in sovereign clouds](../ai-services/translator/sovereign-clouds.md).
## Analytics
azure-government Azure Services In Fedramp Auditscope https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-government/compliance/azure-services-in-fedramp-auditscope.md
This article provides a detailed list of Azure, Dynamics 365, Microsoft 365, and
| [Cloud Services](../../cloud-services/index.yml) | &#x2705; | &#x2705; | | [Cloud Shell](../../cloud-shell/overview.md) | &#x2705; | &#x2705; | | [Cognitive Search](../../search/index.yml) (formerly Azure Search) | &#x2705; | &#x2705; |
-| [Cognitive
-| [Cognitive
-| [Cognitive
-| [Cognitive Services Containers](../../cognitive-services/cognitive-services-container-support.md) | &#x2705; | &#x2705; |
-| [Cognitive
-| [Cognitive
-| [Cognitive
-| [Cognitive
-| [Cognitive
+| [Cognitive
+| [Cognitive
+| [Cognitive
+| [Cognitive Services Containers](../../ai-services/cognitive-services-container-support.md) | &#x2705; | &#x2705; |
+| [Cognitive
+| [Cognitive
+| [Cognitive
+| [Cognitive
+| [Cognitive
| **Service** | **FedRAMP High** | **DoD IL2** |
-| [Cognitive
-| [Cognitive
-| [Cognitive
+| [Cognitive
+| [Cognitive
+| [Cognitive
| [Container Instances](../../container-instances/index.yml) | &#x2705; | &#x2705; | | [Container Registry](../../container-registry/index.yml) | &#x2705; | &#x2705; | | [Content Delivery Network (CDN)](../../cdn/index.yml) | &#x2705; | &#x2705; |
This article provides a detailed list of Azure, Dynamics 365, Microsoft 365, and
| [File Sync](../../storage/file-sync/index.yml) | &#x2705; | &#x2705; | | [Firewall](../../firewall/index.yml) | &#x2705; | &#x2705; | | [Firewall Manager](../../firewall-manager/index.yml) | &#x2705; | &#x2705; |
-| [Form Recognizer](../../applied-ai-services/form-recognizer/index.yml) | &#x2705; | &#x2705; |
+| [Form Recognizer](../../ai-services/document-intelligence/index.yml) | &#x2705; | &#x2705; |
| [Front Door](../../frontdoor/index.yml) | &#x2705; | &#x2705; | | [Functions](../../azure-functions/index.yml) | &#x2705; | &#x2705; | | [GitHub AE](https://docs.github.com/github-ae@latest/admin/overview/about-github-ae) | &#x2705; | &#x2705; | | [Health Bot](/healthbot/) | &#x2705; | &#x2705; | | [HDInsight](../../hdinsight/index.yml) | &#x2705; | &#x2705; | | [HPC Cache](../../hpc-cache/index.yml) | &#x2705; | &#x2705; |
-| [Immersive Reader](../../applied-ai-services/immersive-reader/index.yml) | &#x2705; | &#x2705; |
+| [Immersive Reader](../../ai-services/immersive-reader/index.yml) | &#x2705; | &#x2705; |
| [Import/Export](../../import-export/index.yml) | &#x2705; | &#x2705; | | [Internet Analyzer](../../internet-analyzer/index.yml) | &#x2705; | &#x2705; | | [IoT Hub](../../iot-hub/index.yml) | &#x2705; | &#x2705; |
This article provides a detailed list of Azure, Dynamics 365, Microsoft 365, and
| [Machine Learning](../../machine-learning/index.yml) | &#x2705; | &#x2705; | | [Managed Applications](../../azure-resource-manager/managed-applications/index.yml) | &#x2705; | &#x2705; | | [Media Services](/azure/media-services/) | &#x2705; | &#x2705; |
-| [Metrics Advisor](../../applied-ai-services/metrics-advisor/index.yml) | &#x2705; | &#x2705; |
+| [Metrics Advisor](../../ai-services/metrics-advisor/index.yml) | &#x2705; | &#x2705; |
| [Microsoft 365 Defender](/microsoft-365/security/defender/) (formerly Microsoft Threat Protection) | &#x2705; | &#x2705; | | [Microsoft Azure Attestation](../../attestation/index.yml)| &#x2705; | &#x2705; | | [Microsoft Azure portal](https://azure.microsoft.com/features/azure-portal/)| &#x2705; | &#x2705; |
This article provides a detailed list of Azure, Dynamics 365, Microsoft 365, and
| [Cloud Shell](../../cloud-shell/overview.md) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | &#x2705; | | **Service** | **FedRAMP High** | **DoD IL2** | **DoD IL4** | **DoD IL5** | **DoD IL6** | | [Cognitive Search](../../search/index.yml) (formerly Azure Search) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | &#x2705; |
-| [Cognitive
-| [Cognitive
-| [Cognitive Services Containers](../../cognitive-services/cognitive-services-container-support.md) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | |
-| [Cognitive
-| [Cognitive
-| [Cognitive
-| [Cognitive
-| [Cognitive
-| [Cognitive
-| [Cognitive
-| [Cognitive
+| [Cognitive
+| [Cognitive
+| [Cognitive Services Containers](../../ai-services/cognitive-services-container-support.md) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | |
+| [Cognitive
+| [Cognitive
+| [Cognitive
+| [Cognitive
+| [Cognitive
+| [Cognitive
+| [Cognitive
+| [Cognitive
| [Container Instances](../../container-instances/index.yml)| &#x2705; | &#x2705; | &#x2705; | &#x2705; | &#x2705; | | [Container Registry](../../container-registry/index.yml) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | &#x2705; | | [Content Delivery Network (CDN)](../../cdn/index.yml) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | &#x2705; |
This article provides a detailed list of Azure, Dynamics 365, Microsoft 365, and
| [File Sync](../../storage/file-sync/index.yml) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | | | [Firewall](../../firewall/index.yml) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | &#x2705; | | [Firewall Manager](../../firewall-manager/index.yml) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | |
-| [Form Recognizer](../../applied-ai-services/form-recognizer/index.yml) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | |
+| [Form Recognizer](../../ai-services/document-intelligence/index.yml) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | |
| [Front Door](../../frontdoor/index.yml) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | &#x2705; | | [Functions](../../azure-functions/index.yml) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | &#x2705; | | **Service** | **FedRAMP High** | **DoD IL2** | **DoD IL4** | **DoD IL5** | **DoD IL6** |
azure-government Documentation Government Cognitiveservices https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-government/documentation-government-cognitiveservices.md
With the [Analyze Image method](https://westcentralus.dev.cognitive.microsoft.co
- The coordinates, gender, and age of any faces contained in the image.
- The ImageType (clip art or a line drawing).
- The dominant color, the accent color, or whether an image is black & white.
-- The category defined in this [taxonomy](../cognitive-services/computer-vision/category-taxonomy.md).
+- The category defined in this [taxonomy](../ai-services/computer-vision/category-taxonomy.md).
- Does the image contain adult or sexually suggestive content?

### Analyze an image C# example request
A successful response is returned in JSON. Shown below is an example of a succes
} } ```
-For more information, see [public documentation](../cognitive-services/computer-vision/index.yml) and [public API documentation](https://westus.dev.cognitive.microsoft.com/docs/services/56f91f2d778daf23d8ec6739/operations/56f91f2e778daf14a499e1fa) for Computer Vision.
+For more information, see [public documentation](../ai-services/computer-vision/index.yml) and [public API documentation](https://westus.dev.cognitive.microsoft.com/docs/services/56f91f2d778daf23d8ec6739/operations/56f91f2e778daf14a499e1fa) for Computer Vision.
## Face API
Response:
} ] ```
-For more information, see [public documentation](../cognitive-services/computer-vision/index-identity.yml), and [public API documentation](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f30395236) for Face API.
+For more information, see [public documentation](../ai-services/computer-vision/overview-identity.md), and [public API documentation](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f30395236) for Face API.
## Text Analytics
-For instructions on how to use Text Analytics, see [Quickstart: Use the Text Analytics client library and REST API](../cognitive-services/text-analytics/quickstarts/client-libraries-rest-api.md?tabs=version-3-1&pivots=programming-language-csharp).
+For instructions on how to use Text Analytics, see [Quickstart: Use the Text Analytics client library and REST API](../ai-services/language-service/language-detection/overview.md?tabs=version-3-1&pivots=programming-language-csharp).
### Variations
For instructions on how to use Text Analytics, see [Quickstart: Use the Text Ana
### Variations - The URI for accessing Translator in Azure Government is different than in Azure. For a list of Azure Government endpoints, see [Compare Azure Government and global Azure](./compare-azure-government-global-azure.md#guidance-for-developers).-- [Virtual Network support](../cognitive-services/cognitive-services-virtual-networks.md) for Translator service is limited to only `US Gov Virginia` region.
+- [Virtual Network support](../ai-services/cognitive-services-virtual-networks.md) for the Translator service is limited to the `US Gov Virginia` region.
The URI for accessing the API is: - `https://<your-custom-domain>.cognitiveservices.azure.us/translator/text/v3.0` - You can find your custom domain endpoint in the overview blade on the Azure Government portal once the resource is created.
For instructions on how to use Text Analytics, see [Quickstart: Use the Text Ana
### Text translation method
-The below example uses [Text Translation - Translate method](../cognitive-services/translator/reference/v3-0-translate.md) to translate a string of text from a language into another specified language. There are multiple [language codes](https://api.cognitive.microsofttranslator.com/languages?api-version=3.0&scope=translation) that can be used with Translator.
+The example below uses the [Text Translation - Translate method](../ai-services/translator/reference/v3-0-translate.md) to translate a string of text from one language into another specified language. There are multiple [language codes](https://api.cognitive.microsofttranslator.com/languages?api-version=3.0&scope=translation) that can be used with Translator.
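
Before the C# sample, here's a minimal curl sketch against the Azure Government endpoint shown earlier; the custom domain, resource key, and language codes are placeholders.

```console
# Minimal sketch: call the Translate operation on the Azure Government endpoint (key and custom domain are placeholders).
curl -X POST "https://<your-custom-domain>.cognitiveservices.azure.us/translator/text/v3.0/translate?api-version=3.0&from=en&to=de" \
  -H "Ocp-Apim-Subscription-Key: <your-key>" \
  -H "Content-Type: application/json" \
  -d '[{ "Text": "Hello, what is your name?" }]'
```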
### Text translation C# example request
namespace TextTranslator
} } ```
-For more information, see [public documentation](../cognitive-services/translator/translator-overview.md) and [public API documentation](../cognitive-services/translator/reference/v3-0-reference.md) for Translator.
+For more information, see [public documentation](../ai-services/translator/translator-overview.md) and [public API documentation](../ai-services/translator/reference/v3-0-reference.md) for Translator.
### Next Steps
azure-government Documentation Government Impact Level 5 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-government/documentation-government-impact-level-5.md
For AI and machine learning services availability in Azure Government, see [Prod
- Configure encryption at rest of content in Azure Machine Learning by using customer-managed keys in Azure Key Vault. Azure Machine Learning stores snapshots, output, and logs in the Azure Blob Storage account that's associated with the Azure Machine Learning workspace and customer subscription. All the data stored in Azure Blob Storage is [encrypted at rest with Microsoft-managed keys](../machine-learning/concept-enterprise-security.md). Customers can use their own keys for data stored in Azure Blob Storage. See [Configure encryption with customer-managed keys stored in Azure Key Vault](../storage/common/customer-managed-keys-configure-key-vault.md).
-### [Cognitive
+### [Cognitive
-- Configure encryption at rest of content in the Content Moderator service by [using customer-managed keys in Azure Key Vault](../cognitive-services/content-moderator/encrypt-data-at-rest.md#customer-managed-keys-with-azure-key-vault).
+- Configure encryption at rest of content in the Content Moderator service by [using customer-managed keys in Azure Key Vault](../ai-services/content-moderator/encrypt-data-at-rest.md#customer-managed-keys-with-azure-key-vault).
-### [Cognitive
+### [Cognitive
-- Configure encryption at rest of content in Cognitive Services Custom Vision [using customer-managed keys in Azure Key Vault](../cognitive-services/custom-vision-service/encrypt-data-at-rest.md#customer-managed-keys-with-azure-key-vault).
+- Configure encryption at rest of content in Cognitive Services Custom Vision [using customer-managed keys in Azure Key Vault](../ai-services/custom-vision-service/encrypt-data-at-rest.md#customer-managed-keys-with-azure-key-vault).
-### [Cognitive
+### [Cognitive
-- Configure encryption at rest of content in the Face service by [using customer-managed keys in Azure Key Vault](../cognitive-services/face/encrypt-data-at-rest.md#customer-managed-keys-with-azure-key-vault).
+- Configure encryption at rest of content in the Face service by [using customer-managed keys in Azure Key Vault](../ai-services/computer-vision/identity-encrypt-data-at-rest.md#customer-managed-keys-with-azure-key-vault).
-### [Cognitive
+### [Cognitive
-- Configure encryption at rest of content in the Language Understanding service by [using customer-managed keys in Azure Key Vault](../cognitive-services/luis/encrypt-data-at-rest.md#customer-managed-keys-with-azure-key-vault).
+- Configure encryption at rest of content in the Language Understanding service by [using customer-managed keys in Azure Key Vault](../ai-services/luis/encrypt-data-at-rest.md#customer-managed-keys-with-azure-key-vault).
-Cognitive Services Language Understanding (LUIS) is part of [Cognitive Services for Language](../cognitive-services/language-service/index.yml).
+Cognitive Services Language Understanding (LUIS) is part of [Cognitive Services for Language](../ai-services/language-service/index.yml).
-### [Cognitive
+### [Cognitive
-- Configure encryption at rest of content in Cognitive Services Personalizer [using customer-managed keys in Azure Key Vault](../cognitive-services/personalizer/encrypt-data-at-rest.md#customer-managed-keys-with-azure-key-vault).
+- Configure encryption at rest of content in Cognitive Services Personalizer [using customer-managed keys in Azure Key Vault](../ai-services/personalizer/encrypt-data-at-rest.md#customer-managed-keys-with-azure-key-vault).
-### [Cognitive
+### [Cognitive
-- Configure encryption at rest of content in Cognitive Services QnA Maker [using customer-managed keys in Azure Key Vault](../cognitive-services/qnamaker/encrypt-data-at-rest.md).
+- Configure encryption at rest of content in Cognitive Services QnA Maker [using customer-managed keys in Azure Key Vault](../ai-services/qnamaker/encrypt-data-at-rest.md).
-Cognitive Services QnA Maker is part of [Cognitive Services for Language](../cognitive-services/language-service/index.yml).
+Cognitive Services QnA Maker is part of [Cognitive Services for Language](../ai-services/language-service/index.yml).
-### [Cognitive
+### [Cognitive
-- Configure encryption at rest of content in Speech Services by [using customer-managed keys in Azure Key Vault](../cognitive-services/speech-service/speech-encryption-of-data-at-rest.md).
+- Configure encryption at rest of content in Speech Services by [using customer-managed keys in Azure Key Vault](../ai-services/speech-service/speech-encryption-of-data-at-rest.md).
-### [Cognitive
+### [Cognitive
-- Configure encryption at rest of content in the Translator service by [using customer-managed keys in Azure Key Vault](../cognitive-services/translator/encrypt-data-at-rest.md).
+- Configure encryption at rest of content in the Translator service by [using customer-managed keys in Azure Key Vault](../ai-services/translator/encrypt-data-at-rest.md).
## Analytics
azure-maps How To Use Map Control https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/how-to-use-map-control.md
For a list of samples showing how to integrate Azure AD with Azure Maps, see:
[Authentication with Azure Maps]: azure-maps-authentication.md [Azure Maps & Azure Active Directory Samples]: https://github.com/Azure-Samples/Azure-Maps-AzureAD-Samples [Azure Maps account]: quick-demo-map-app.md#create-an-azure-maps-account
-[Azure Maps community - Open-source projects]: open-source-projects.md#third-part-map-control-plugins
+[Azure Maps community - Open-source projects]: open-source-projects.md#third-party-map-control-plugins
[Azure Maps React Component]: https://github.com/WiredSolutions/react-azure-maps [AzureMapsControl.Components]: https://github.com/arnaudleclerc/AzureMapsControl.Components [azure-maps-control]: https://www.npmjs.com/package/azure-maps-control
azure-maps Open Source Projects https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/open-source-projects.md
These open-source, community-driven initiatives are created and maintained by the Azure Maps team. They're not part of the standard product or service offerings.
-The following lists some of the most popular Azure Maps open-source projects and samples.
-
-**Bots**
-
-| Project Name | Description |
-|-|-|
-| [Bot Framework - Point of Interest skill](https://github.com/microsoft/botframework-solutions/tree/488093ac2fddf16096171f6a926315aa45e199e7/skills/csharp/pointofinterestskill) | The Point of Interest Skill provides POI related capabilities to a Virtual Assistant using Azure Maps with Azure Bot Service and Bot Framework. |
-| [BotBuilder Location](https://github.com/Microsoft/BotBuilder-Location) | An open-source location picker control for Microsoft Bot Framework powered by Bing Maps REST services. |
-
-<a name="open-web-sdk-modules"></a>
-
-**Open Web SDK modules**
-
-The following is a list of open-source projects that extend the capabilities of the Azure Maps Web SDK.
-
-| Project Name | Description |
-|-|-|
-| [Azure Maps Animation module](https://github.com/Azure-Samples/azure-maps-animations) | A rich library of animations for use with the Azure Maps Web SDK. |
-| [Azure Maps Bring Data Into View Control module](https://github.com/Azure-Samples/azure-maps-bring-data-into-view-control) | An Azure Maps Web SDK module that provides a control that makes it easy to bring any data loaded on the map into view. |
-| [Azure Maps Geolocation Control module](https://github.com/Azure-Samples/azure-maps-geolocation-control) | An Azure Maps Web SDK module that provides a control that uses the browser's geolocation API to locate the user on the map. |
-| [Azure Maps Gridded Data Source module](https://github.com/Azure-Samples/azure-maps-gridded-data-source) | A module for the Azure Maps Web SDK that provides a data source that clusters data points into cells of a grid area. This operation is also known by many names such as tessellations, data binning, or hex bins. |
-| [Azure Maps Fullscreen Control module](https://github.com/Azure-Samples/azure-maps-fullscreen-control) | An Azure Maps Web SDK module that provides a control to display the map in fullscreen mode. |
-| [Azure Maps HTML Marker Layer module](https://github.com/Azure-Samples/azure-maps-html-marker-layer) | An Azure Maps Web SDK module that provides a layer that renders point data from a data source as HTML elements on the map. |
-| [Azure Maps Image Exporter module](https://github.com/Azure-Samples/azure-maps-image-exporter) | A module for the Azure Maps Web SDK that generates screenshots of the map. |
-| [Azure Maps Overview Map module](https://github.com/Azure-Samples/azure-maps-overview-map) | An Azure Maps Web SDK module that provides a control that displays an overview map of the area the main map is focused on. |
-| [Azure Maps Scale Bar Control module](https://github.com/Azure-Samples/azure-maps-scale-bar-control) | An Azure Maps Web SDK module that provides a control that displays a scale bar relative to the pixel resolution at the center of the map. |
-| [Azure Maps Selection Control module](https://github.com/Azure-Samples/azure-maps-selection-control) | An Azure Maps Web SDK module that provides controls for selecting data in a data source using drawing tools or by requesting a route range polygon. |
-| [Azure Maps Services UI module](https://github.com/Azure-Samples/azure-maps-services-ui) | A set of web UI controls that wrap the Azure Maps REST services. |
-| [Azure Maps Spider Clusters module](https://github.com/Azure-Samples/azure-maps-spider-clusters) | A module for the Azure Maps Web SDK that adds a visualization to the map which expands clusters into a spiral spider layout. |
-| [Azure Maps Spyglass Control module](https://github.com/Azure-Samples/azure-maps-spyglass-control) | An Azure Maps Web SDK module that provides a window that displays a data set inside of a spyglass on the map. |
-| [Azure Maps Swipe Map module](https://github.com/Azure-Samples/azure-maps-swipe-map) | A module for the Azure Maps Web SDK that allows swiping between two overlapping maps, ideal for comparing two overlapping data sets. |
-| [Azure Maps Sync Map module](https://github.com/Azure-Samples/azure-maps-sync-maps) | An Azure Maps Web SDK module that synchronizes the cameras of two or more maps. |
-
-**Samples**
-
-| Project Name | Description |
-|-|-|
-| [Azure Maps Code Samples](https://github.com/Azure-Samples/AzureMapsCodeSamples) | A collection of code samples for using Azure Maps in web-based apps. |
-| [Azure Maps Gov Cloud Code Samples](https://github.com/Azure-Samples/AzureMapsCodeSamples) | A collection of code samples for using Azure Maps through Azure Government Cloud. |
-| [Azure Maps & Azure Active Directory Samples](https://github.com/Azure-Samples/Azure-Maps-AzureAD-Samples) | A collection of samples that show how to use Azure Active Directory with Azure Maps. |
-| [LiveMaps](https://github.com/Azure-Samples/LiveMaps) | Sample application to provide live indoor maps visualization of IoT data on top of Azure Maps using Azure Maps Creator |
-| [Azure Maps Jupyter Notebook samples](https://github.com/Azure-Samples/Azure-Maps-Jupyter-Notebook) | A collection of Python samples using the Azure Maps REST services. |
-| [Azure Maps .NET UWP IoT Remote Control](https://github.com/Azure-Samples/azure-maps-dotnet-webgl-uwp-iot-remote-control) | This is a sample application that shows how to build a remotely controlled map using Azure Maps and IoT hub services. |
-| [Implement IoT spatial analytics using Azure Maps](https://github.com/Azure-Samples/iothub-to-azure-maps-geofencing) | Tracking and capturing relevant events that occur in space and time is a common IoT scenario. |
-
-**Third party map control plugins**
+## Open-source projects
+
+The following tables list some of the most popular Azure Maps open-source projects and samples.
+
+### Bots
+
+| Project Name | Description |
+|--|-|
+| [Bot Framework - Point of Interest skill] | The Point of Interest Skill provides POI related capabilities to a Virtual Assistant using Azure Maps with Azure Bot Service and Bot Framework. |
+| [BotBuilder Location] | An open-source location picker control for Microsoft Bot Framework powered by Bing Maps REST services. |
+
+### Open Web SDK modules
+
+The following table lists the open-source projects that extend the capabilities of the Azure Maps Web SDK.
+
+| Project Name | Description |
+|--|-|
+| [Azure Maps Animation module] | A rich library of animations for use with the Azure Maps Web SDK. |
+| [Azure Maps Bring Data Into View Control module] | An Azure Maps Web SDK module that provides a control that makes it easy to bring any data loaded on the map into view. |
+| [Azure Maps Geolocation Control module] | An Azure Maps Web SDK module that provides a control that uses the browser's geolocation API to locate the user on the map. |
+| [Azure Maps Gridded Data Source module] | A module for the Azure Maps Web SDK that provides a data source that clusters data points into cells of a grid area. This operation is also known by many names such as tessellations, data binning, or hex bins. |
+| [Azure Maps Fullscreen Control module] | An Azure Maps Web SDK module that provides a control to display the map in full screen mode. |
+| [Azure Maps HTML Marker Layer module] | An Azure Maps Web SDK module that provides a layer that renders point data from a data source as HTML elements on the map. |
+| [Azure Maps Image Exporter module] | A module for the Azure Maps Web SDK that generates screenshots of the map. |
+| [Azure Maps Overview Map module] | An Azure Maps Web SDK module that provides a control that displays an overview map of the area the main map is focused on. |
+| [Azure Maps Scale Bar Control module] | An Azure Maps Web SDK module that provides a control that displays a scale bar relative to the pixel resolution at the center of the map. |
+| [Azure Maps Selection Control module] | An Azure Maps Web SDK module that provides controls for selecting data in a data source using drawing tools or by requesting a route range polygon. |
+| [Azure Maps Services UI module] | A set of web UI controls wrapping the Azure Maps REST services. |
+| [Azure Maps Spider Clusters module] | A module for the Azure Maps Web SDK that adds a visualization to the map that expands clusters into a spiral spider layout. |
+| [Azure Maps Spyglass Control module] | An Azure Maps Web SDK module that provides a window that displays a data set inside of a spyglass on the map. |
+| [Azure Maps Swipe Map module] | A module for the Azure Maps Web SDK that allows swiping between two overlapping maps, ideal for comparing two overlapping data sets. |
+| [Azure Maps Sync Map module] | An Azure Maps Web SDK module that synchronizes the cameras of two or more maps. |
+
+### Samples
+
+| Project Name | Description |
+|--|-|
+| [Azure Maps Code Samples] | A collection of code samples for using Azure Maps in web-based apps. |
+| [Azure Maps Gov Cloud Code Samples] | A collection of code samples for using Azure Maps through Azure Government Cloud. |
+| [Azure Maps & Azure Active Directory Samples] | A collection of samples that show how to use Azure Active Directory with Azure Maps. |
+| [LiveMaps] | A sample application that provides live indoor maps visualization of IoT data on top of Azure Maps using Azure Maps Creator. |
+| [Azure Maps Jupyter Notebook samples] | A collection of Python samples using the Azure Maps REST services. |
+| [Azure Maps .NET UWP IoT Remote Control] | A sample application that shows how to build a remotely controlled map using Azure Maps and IoT hub services. |
+| [Implement IoT spatial analytics using Azure Maps] | Tracking and capturing relevant events that occur in space and time is a common IoT scenario. |
+ <a name="third-part-map-control-plugins"></a>
-| Project Name | Description |
-|-|-|
-| [Azure Maps Cesium plugin](https://github.com/azure-samples/azure-maps-cesium) | A [Cesium JS](https://cesium.com/cesiumjs/) plugin that makes it easy to integrate Azure Maps services such as [tile layers](/rest/api/maps/render-v2/get-map-tile) and [geocoding services](/rest/api/maps/search). |
-| [Azure Maps Leaflet plugin](https://github.com/azure-samples/azure-maps-leaflet) | A [leaflet](https://leafletjs.com/) JavaScript plugin that makes it easy to overlay tile layers from the [Azure Maps tile services](/rest/api/maps/render-v2/get-map-tile). |
- | [Azure Maps OpenLayers plugin](https://github.com/azure-samples/azure-maps-openlayers) | A [OpenLayers](https://www.openlayers.org/) JavaScript plugin that makes it easy to overlay tile layers from the [Azure Maps tile services](/rest/api/maps/render-v2/get-map-tile). |
+### Third party map control plugins
+
+| Project Name | Description |
+|--|-|
+| [Azure Maps Cesium plugin] | A [Cesium JS] plugin that makes it easy to integrate Azure Maps services such as [tile layers] and [geocoding services]. |
+| [Azure Maps Leaflet plugin] | A [leaflet] JavaScript plugin that makes it easy to overlay tile layers from the [Azure Maps tile services]. |
+| [Azure Maps OpenLayers plugin] | An [OpenLayers] JavaScript plugin that makes it easy to overlay tile layers from the [Azure Maps tile services]. |
-**Tools and resources**
+### Tools and resources
-| Project Name | Description |
-|-|-|
-| [Azure Maps Docs](https://github.com/MicrosoftDocs/azure-docs/tree/master/articles/azure-maps) | Source for all Azure Location Based Services documentation. |
-| [Azure Maps Creator Tools](https://github.com/Azure-Samples/AzureMapsCreator) | Python tools for Azure Maps Creator Tools. |
+| Project Name | Description |
+|-|-|
+| [Azure Maps Docs] | Source for all Azure Location Based Services documentation. |
+| [Azure Maps Creator Tools] | Python tools for Azure Maps Creator Tools. |
-For a more complete list of open-source projects for Azure Maps that includes community created projects, see [Azure Maps Open Source Projects](https://github.com/microsoft/Maps/blob/master/AzureMaps.md) in GitHub.
+For a more complete list of open-source projects for Azure Maps, including community-created projects, see [Azure Maps Open Source Projects] on GitHub.
## Supportability of open-source projects
-The following statements apply across all Azure Maps open-source projects and samples:
+All Azure Maps open-source projects and samples use supported and recommended techniques and are:
-- Azure Maps open-source projects and samples are created by Microsoft and the community.
-- Azure Maps open-source projects and samples are maintained by Microsoft and the community.
-- Azure Maps open-source projects and samples use supported and recommended techniques.
-- Azure Maps open-source projects and samples are a community initiative – people who work on the initiative for the benefit of others, and have their normal day job as well.
-- Azure Maps open-source projects and samples are NOT a product, and therefore it's not supported by Premier Support or other official support channels.
-- Azure Maps open-source projects and samples are supported in similar ways as other open-source projects done by Microsoft with support from the community by the community.
+- Created and maintained by Microsoft and the community.
+- A community initiative – people who work on the initiative for the benefit of others, and have their normal day job as well.
+- NOT a product, and not supported by Premier Support or other official support channels.
+- Supported in similar ways as other open-source projects done by Microsoft with support from the community by the community.
## Next steps Find more open-source Azure Maps projects. > [!div class="nextstepaction"]
-> [Code samples](/samples/browse/?products=azure-maps)
+> [Code samples]
+
+[Azure Maps & Azure Active Directory Samples]: https://github.com/Azure-Samples/Azure-Maps-AzureAD-Samples
+[Azure Maps .NET UWP IoT Remote Control]: https://github.com/Azure-Samples/azure-maps-dotnet-webgl-uwp-iot-remote-control
+[Azure Maps Animation module]: https://github.com/Azure-Samples/azure-maps-animations
+[Azure Maps Bring Data Into View Control module]: https://github.com/Azure-Samples/azure-maps-bring-data-into-view-control
+[Azure Maps Cesium plugin]: https://github.com/azure-samples/azure-maps-cesium
+[Azure Maps Code Samples]: https://github.com/Azure-Samples/AzureMapsCodeSamples
+[Azure Maps Creator Tools]: https://github.com/Azure-Samples/AzureMapsCreator
+[Azure Maps Docs]: https://github.com/MicrosoftDocs/azure-docs/tree/master/articles/azure-maps
+[Azure Maps Fullscreen Control module]: https://github.com/Azure-Samples/azure-maps-fullscreen-control
+[Azure Maps Geolocation Control module]: https://github.com/Azure-Samples/azure-maps-geolocation-control
+[Azure Maps Gov Cloud Code Samples]: https://github.com/Azure-Samples/AzureMapsCodeSamples
+[Azure Maps Gridded Data Source module]: https://github.com/Azure-Samples/azure-maps-gridded-data-source
+[Azure Maps HTML Marker Layer module]: https://github.com/Azure-Samples/azure-maps-html-marker-layer
+[Azure Maps Image Exporter module]: https://github.com/Azure-Samples/azure-maps-image-exporter
+[Azure Maps Jupyter Notebook samples]: https://github.com/Azure-Samples/Azure-Maps-Jupyter-Notebook
+[Azure Maps Leaflet plugin]: https://github.com/azure-samples/azure-maps-leaflet
+[Azure Maps OpenLayers plugin]: https://github.com/azure-samples/azure-maps-openlayers
+[Azure Maps Open Source Projects]: https://github.com/microsoft/Maps/blob/master/AzureMaps.md
+[Azure Maps Overview Map module]: https://github.com/Azure-Samples/azure-maps-overview-map
+[Azure Maps Scale Bar Control module]: https://github.com/Azure-Samples/azure-maps-scale-bar-control
+[Azure Maps Selection Control module]: https://github.com/Azure-Samples/azure-maps-selection-control
+[Azure Maps Services UI module]: https://github.com/Azure-Samples/azure-maps-services-ui
+[Azure Maps Spider Clusters module]: https://github.com/Azure-Samples/azure-maps-spider-clusters
+[Azure Maps Spyglass Control module]: https://github.com/Azure-Samples/azure-maps-spyglass-control
+[Azure Maps Swipe Map module]: https://github.com/Azure-Samples/azure-maps-swipe-map
+[Azure Maps Sync Map module]: https://github.com/Azure-Samples/azure-maps-sync-maps
+[Azure Maps tile services]: /rest/api/maps/render-v2/get-map-tile
+[Bot Framework - Point of Interest skill]: https://github.com/microsoft/botframework-solutions/tree/488093ac2fddf16096171f6a926315aa45e199e7/skills/csharp/pointofinterestskill
+[BotBuilder Location]: https://github.com/Microsoft/BotBuilder-Location
+[Cesium JS]: https://cesium.com/cesiumjs/
+[Code samples]: /samples/browse/?products=azure-maps
+[geocoding services]: /rest/api/maps/search
+[Implement IoT spatial analytics using Azure Maps]: https://github.com/Azure-Samples/iothub-to-azure-maps-geofencing
+[leaflet]: https://leafletjs.com
+[LiveMaps]: https://github.com/Azure-Samples/LiveMaps
+[OpenLayers]: https://www.openlayers.org/
+[tile layers]: /rest/api/maps/render-v2/get-map-tile
azure-maps Power Bi Visual Add Heat Map Layer https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/power-bi-visual-add-heat-map-layer.md
Title: Add a heat map layer to an Azure Maps Power BI visual-
-description: In this article, you will learn how to use the heat map layer in an Azure Maps Power BI visual.
+
+description: This article describes how to use the heat map layer in an Azure Maps Power BI visual.
Last updated 05/23/2023
# Add a heat map layer
-In this article, you will learn how to add a heat map layer to an Azure Maps Power BI visual.
+This article describes how to add a heat map layer to an Azure Maps Power BI visual.
:::image type="content" source="media/power-bi-visual/heat-map.png" alt-text="Heat map layer in an Azure Maps Power BI visual.":::
-Heat maps, also known as density maps, are a type of overlay on a map used to represent the density of data using different colors. Heat maps are often used to show the data "hot spots" on a map. Heat maps are a great way to render datasets with large number of points. Displaying a large number of data points on a map will result in a degradation in performance and can cover it with overlapping symbols, making it unusable. Rendering the data as a heat map results not only in better performance, it helps you make better sense of the data by making it easy to see the relative density of each data point.
+Heat maps, also known as density maps, are a type of overlay on a map used to represent the density of data using different colors. Heat maps are often used to show the data "hot spots" on a map. Heat maps are a great way to render datasets with a large number of points. Displaying a large number of data points on a map results in degraded performance and can cover the map with overlapping symbols, making it unusable. Rendering the data as a heat map not only results in better performance, it also helps you make better sense of the data by making it easy to see the relative density of each data point.
A heat map is useful when users want to visualize vast comparative data:
A heat map is useful when users want to visualize vast comparative data:
## Prerequisites -- [Get started with Azure Maps Power BI visual](./power-bi-visual-get-started.md).-- Understand [layers in the Azure Maps Power BI visual](./power-bi-visual-understanding-layers.md).
+- [Get started with Azure Maps Power BI visual].
+- Understand [layers in the Azure Maps Power BI visual].
## Add the heat map layer
The **Heat map** section of the **Format** pane provides flexibility to customiz
- Specify if the value in size field should be used as the weight of each data point. - Pick different colors from color pickers. - Set the minimum and maximum zoom level for heat map layer to be visible.-- Decide the heat map layer position amongst different layers, e.g., 3D column layer and bubble layer.
+- Decide the heat map layer position amongst different layers, such as the 3D column and bubble layer.
The following table shows the primary settings that are available in the **Heat map** section of the **Format** pane: | Setting | Description | |-||
-| Size | The radius of each data point in the heat map.<br /><br />Valid values when Unit = 'pixels': 1 - 200. Default: **20**<br />Valid values when Unit = 'meters': 1 - 4,000,000|
-
+| Size | The radius of each data point in the heat map.<br><br>Valid values when Unit = 'pixels': 1 - 200. Default: **20**<br>Valid values when Unit = 'meters': 1 - 4,000,000|
| Setting | Description | |-||
-| Radius | The radius of each data point in the heat map.<br /><br />Valid values when Unit = 'pixels': 1 - 200. Default: **20**<br />Valid values when Unit = 'meters': 1 - 4,000,000|
-| Units | The distance units of the radius. Possible values are:<br /><br />**pixels**. When set to pixels the size of each data point will always be the same, regardless of zoom level.<br />**meters**. When set to meters, the size of the data points will scale based on zoom level based on the equivalent pixel distance at the equator, providing better relativity between neighboring points. However, due to the Mercator projection, the actual radius of coverage in meters at a given latitude will be smaller than this specified radius.<br /><br /> Default: **pixels** |
-| Transparency | Sets the Transparency of the heat map layer. Default: **1**<br/>Value should be from 0% to 100%. |
+| Radius | The radius of each data point in the heat map.<br><br>Valid values when Unit = 'pixels': 1 - 200. Default: **20**<br>Valid values when Unit = 'meters': 1 - 4,000,000|
+| Units | The distance units of the radius. Possible values are:<br><br>**pixels**. When set to pixels, the size of each data point is the same, regardless of zoom level.<br>**meters**. When set to meters, the size of the data points scales with the zoom level, based on the equivalent pixel distance at the equator, providing better relativity between neighboring points. However, due to the Mercator projection, the actual radius of coverage in meters at a given latitude is smaller than this specified radius (see the ground-resolution sketch after this table).<br><br> Default: **pixels** |
+| Transparency | Sets the Transparency of the heat map layer. Default: **1**<br>Value should be from 0% to 100%. |
| Intensity | The intensity of each heat point. Intensity is a decimal value between 0 and 1, used to specify how "hot" a single data point should be. Default: **0.5** |
-| Use size as weight | A boolean value that determines if the size field value should be used as the weight of each data point. If on, this causes the layer to render as a weighted heat map. Default: **Off** |
-| Gradient |Color pick for users to pick 3 colors for low (0%), center (50%) and high (100%) gradient colors. |
+| Use size as weight | A boolean value that determines if the size field value should be used as the weight of each data point. When `On`, the layer renders as a weighted heat map. Default: `Off` |
+| Gradient |A color picker for choosing the low (0%), center (50%), and high (100%) gradient colors. |
| Min zoom |Minimum zoom level the layer is visible at. Valid values are 1 to 22. Default: **0** | |Max zoom |Maximum zoom level the layer is visible at. Valid values are 1 to 22. Default: **22**| |Layer position |Specify the position of the layer relative to other map layers. Valid values include **Above labels**, **Below labels** and **Below roads** |
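The note in the **Units** row about meters being scaled to the equivalent pixel distance at the equator follows from Web Mercator's latitude distortion. Here's a minimal sketch of the standard ground-resolution formula, for illustration only; the `Get-GroundResolution` function name is hypothetical and not part of the visual's settings.

```powershell
# Approximate meters-per-pixel for 256-pixel Web Mercator tiles (Earth radius 6378137 m).
# Ground resolution shrinks with cos(latitude), which is the distortion the Units note
# refers to: a radius converted at the equator covers less ground at higher latitudes.
function Get-GroundResolution {
    param([double]$LatitudeDegrees, [int]$Zoom)
    [math]::Cos($LatitudeDegrees * [math]::PI / 180) * 2 * [math]::PI * 6378137 / (256 * [math]::Pow(2, $Zoom))
}

Get-GroundResolution -LatitudeDegrees 0 -Zoom 10     # ~152.9 m per pixel at the equator
Get-GroundResolution -LatitudeDegrees 47.6 -Zoom 10  # ~103 m per pixel near Seattle
```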
The following table shows the primary settings that are available in the **Heat
Change how your data is displayed on the map: > [!div class="nextstepaction"]
-> [Add a 3D column layer](power-bi-visual-add-3d-column-layer.md)
+> [Add a 3D column layer]
Add more context to the map: > [!div class="nextstepaction"]
-> [Add a reference layer](power-bi-visual-add-reference-layer.md)
+> [Add a reference layer]
> [!div class="nextstepaction"]
-> [Add a tile layer](power-bi-visual-add-tile-layer.md)
+> [Add a tile layer]
> [!div class="nextstepaction"]
-> [Show real-time traffic](power-bi-visual-show-real-time-traffic.md)
+> [Show real-time traffic]
Customize the visual: > [!div class="nextstepaction"]
-> [Tips and tricks for color formatting in Power BI](/power-bi/visuals/service-tips-and-tricks-for-color-formatting)
+> [Tips and tricks for color formatting in Power BI]
> [!div class="nextstepaction"]
-> [Customize visualization titles, backgrounds, and legends](/power-bi/visuals/power-bi-visualization-customize-title-background-and-legend)
+> [Customize visualization titles, backgrounds, and legends]
+
+[Add a 3D column layer]: power-bi-visual-add-3d-column-layer.md
+[Add a reference layer]: power-bi-visual-add-reference-layer.md
+[Add a tile layer]: power-bi-visual-add-tile-layer.md
+[Customize visualization titles, backgrounds, and legends]: /power-bi/visuals/power-bi-visualization-customize-title-background-and-legend
+[Get started with Azure Maps Power BI visual]: power-bi-visual-get-started.md
+[layers in the Azure Maps Power BI visual]: power-bi-visual-understanding-layers.md
+[Show real-time traffic]: power-bi-visual-show-real-time-traffic.md
+[Tips and tricks for color formatting in Power BI]: /power-bi/visuals/service-tips-and-tricks-for-color-formatting
azure-maps Power Bi Visual Add Pie Chart Layer https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/power-bi-visual-add-pie-chart-layer.md
Title: Add a pie chart layer to an Azure Maps Power BI visual
-description: In this article, you will learn how to use the pie chart layer in an Azure Maps Power BI visual.
+description: This article describes how to use the pie chart layer in an Azure Maps Power BI visual.
Previously updated : 03/15/2022 Last updated : 07/17/2023
# Add a pie chart layer
-In this article, you will learn how to add a pie chart layer to an Azure Maps Power BI visual.
+This article describes how to add a pie chart layer to an Azure Maps Power BI visual.
A pie chart is a visual representation of data in the form of a circular chart or *pie* where each slice represents an element of the dataset that is shown as a percentage of the whole. A list of numerical variables along with categorical (location) variables are required to represent data in the form of a pie chart. :::image type="content" source="./media/power-bi-visual/pie-chart-layer.png" alt-text="A Power B I visual showing the pie chart layer."::: > [!NOTE]
-> The data used in this article comes from the [Power BI Sales and Marketing Sample](/power-bi/create-reports/sample-datasets#download-original-sample-power-bi-files).
+> The data used in this article comes from the [Power BI Sales and Marketing Sample].
## Prerequisites -- [Get started with Azure Maps Power BI visual](./power-bi-visual-get-started.md).-- Understand [layers in the Azure Maps Power BI visual](./power-bi-visual-understanding-layers.md).
+- [Get started with Azure Maps Power BI visual].
+- Understand [Layers in the Azure Maps Power BI visual].
## Add the pie chart layer
The pie chart layer is added automatically based on what fields in the **Visuali
:::image type="content" source="./media/power-bi-visual/visualizations-settings-pie-chart.png" alt-text="A screenshot showing the fields required for the pie chart layer Power B I.":::
-The following steps will walk you through creating a pie chart layer.
+The following steps walk you through creating a pie chart layer.
1. Select two location sources from the **Fields** pane, such as city/state, to add to the **Location** field. 1. Select a numerical field from your table, such as sales, and add it to the **Size** field in the **Visualizations** pane. This field must contain the numerical values used in the pie chart.
-1. Select a data field from your table that can be used as the category that the numerical field applies to, such as *manufacturer*, and add it to the **Legend** field in the **Visualizations** pane. This will appear as the slices of the pie, the size of each slice is a percentage of the whole based on the value in the size field, such as the number of sales broken out by manufacturer.
+1. Select a data field from your table that can be used as the category that the numerical field applies to, such as *manufacturer*, and add it to the **Legend** field in the **Visualizations** pane. This appears as the slices of the pie; the size of each slice is a percentage of the whole based on the value in the size field, such as the number of sales broken out by manufacturer.
1. Next, in the **Format** tab of the **Visualizations** pane, switch the **Bubbles** toggle to **On**. The pie chart layer should now appear. Next you can adjust the Pie chart settings such as size and transparency. ## Pie chart layer settings
-Pie Chart layer is an extension of the bubbles layer, so all settings are made in the **Bubbles** section. If a field is passed into the **Legend** bucket of the **Fields** pane, the pie charts will be populated and will be colored based on their categorization. The outline of the pie chart is white by default but can be changed to a new color. The following are the settings in the **Format** tab of the **Visualizations** pane that are available to a **Pie Chart layer**.
+The pie chart layer is an extension of the bubbles layer, so all settings are made in the **Bubbles** section. If a field is passed into the **Legend** bucket of the **Fields** pane, the pie charts are populated and colored based on their categorization. The outline of the pie chart is white by default but can be changed to a new color. The following are the settings in the **Format** tab of the **Visualizations** pane that are available to a **Pie Chart layer**.
:::image type="content" source="./media/power-bi-visual/visualizations-settings-bubbles.png" alt-text="A screenshot showing the pie chart settings that appear in the bubbles section when the format tab is selected in the visualization pane in power B I.":::
Pie Chart layer is an extension of the bubbles layer, so all settings are made i
Change how your data is displayed on the map: > [!div class="nextstepaction"]
-> [Add a 3D column layer](power-bi-visual-add-3d-column-layer.md)
+> [Add a 3D column layer]
> [!div class="nextstepaction"]
-> [Add a heat map layer](power-bi-visual-add-heat-map-layer.md)
+> [Add a heat map layer]
+
+[Power BI Sales and Marketing Sample]: /power-bi/create-reports/sample-datasets#download-original-sample-power-bi-files
+[Get started with Azure Maps Power BI visual]: power-bi-visual-get-started.md
+[Layers in the Azure Maps Power BI visual]: power-bi-visual-understanding-layers.md
+[Add a 3D column layer]: power-bi-visual-add-3d-column-layer.md
+[Add a heat map layer]: power-bi-visual-add-heat-map-layer.md
azure-maps Power Bi Visual Add Tile Layer https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/power-bi-visual-add-tile-layer.md
Title: Add a tile layer to an Azure Maps Power BI visual-
-description: In this article, you will learn how to use the tile layer in Azure Maps Power BI visual.
+
+description: This article demonstrates how to use the tile layer in Azure Maps Power BI visual.
Previously updated : 11/29/2021 Last updated : 07/18/2023
# Add a tile layer
-The tile layer feature, like the reference layer feature, allows additional data to be overlaid on the map to provide more context. Tile layers allow you to superimpose images on top of the Azure Maps base map tiles. This is a great way to overlay large or complex datasets such as imagery from drones, or millions of rows of data.
+The tile layer feature, like the reference layer feature, allows additional data to be overlaid on the map to provide more context. Tile layers allow you to superimpose images on top of the Azure Maps base map tiles. Superimposing images is a great way to overlay large or complex datasets such as imagery from drones, or millions of rows of data.
-> [!div class="mx-imgBorder"]
-> ![A map displaying a bubble layer above a tile layer showing current infrared weather data from Azure Maps](media/power-bi-visual/radar-tile-layer-with-bubbles.png)
-A tile layer loads in tiles from a server. These images can either be pre-rendered or dynamically rendered. Pre-rendered images are stored like any other image on a server using a naming convention that the tile layer understands. Dynamically rendered images use a service to load the images close to real time. Tile layers are a great way to visualize large datasets on the map. Not only can a tile layer be generated from an image, vector data can also be rendered as a tile layer too.
+A tile layer loads in tiles from a server. These images can either be prerendered or dynamically rendered. Prerendered images are stored like any other image on a server using a naming convention that the tile layer understands. Dynamically rendered images use a service to load the images close to real time. Tile layers are a great way to visualize large datasets on the map. Not only can a tile layer be generated from an image, vector data can also be rendered as a tile layer.
-The bounding box and zoom range of where a tile service is available can be passed as settings to limit where tiles are requested. This is both a performance enhancement for both the visual and the tile service. Below is an overview of all settings available in the **Format** pane that are available in the **Tile layer** section.
+The bounding box and zoom range of where a tile service is available can be passed as settings to limit where tiles are requested, a performance enhancement for both the visual and the tile service. The following table gives an overview of the settings available in the **Tile layer** section of the **Format** pane.
| Setting | Description | |-||
There are three different tile service naming conventions supported by the Azure
* **X, Y, Zoom notation** - X is the column, Y is the row position of the tile in the tile grid, and the Zoom notation a value based on the zoom level. * **Quadkey notation** - Combines x, y, and zoom information into a single string value. This string value becomes a unique identifier for a single tile.
-* **Bounding Box** - Specify an image in the Bounding box coordinates format: `{west},{south},{east},{north}`. This format is commonly used by [Web Mapping Services (WMS)](https://www.opengeospatial.org/standards/wms).
+* **Bounding Box** - Specify an image in the Bounding box coordinates format: `{west},{south},{east},{north}`. This format is commonly used by [Web Mapping Services (WMS)].
The tile URL is an https URL to a tile URL template that uses the following parameters:
parameters:
* `{quadkey}` - Tile `quadkey` identifier based on the Bing Maps tile system naming convention. * `{bbox-epsg-3857}` - A bounding box string with the format `{west},{south},{east},{north}` in the EPSG 3857 spatial reference system.
-As an example, the following is a formatted tile URL for the [weather radar tile service](/rest/api/maps/render-v2/get-map-tile) in Azure Maps. Note that `[subscription-key]` is a placeholder for your Azure Maps subscription key.
+As an example, here's a formatted tile URL for the [weather radar tile service] in Azure Maps.
-> `https://atlas.microsoft.com/map/tile?zoom={z}&x={x}&y={y}&tilesetId=microsoft.weather.radar.main&api-version=2.0&subscription-key={Your-Azure-Maps-Subscription-key}`
+```html
+https://atlas.microsoft.com/map/tile?zoom={z}&x={x}&y={y}&tilesetId=microsoft.weather.radar.main&api-version=2.0&subscription-key={Your-Azure-Maps-Subscription-key}
+```
-For more information on Azure Maps tiling system, see [Zoom levels and tile grid](zoom-levels-and-tile-grid.md).
+For more information on Azure Maps tiling system, see [Zoom levels and tile grid].
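The quadkey notation listed earlier is just the x, y, and zoom values interleaved into a single string, following the Bing Maps tile system convention mentioned for the `{quadkey}` parameter. Here's a minimal illustrative sketch, not part of the article; the `ConvertTo-QuadKey` function name is hypothetical.

```powershell
# Hypothetical helper: build a quadkey from x/y/zoom tile coordinates by
# interleaving one digit per zoom level (x contributes bit 0, y contributes bit 1).
function ConvertTo-QuadKey {
    param([int]$X, [int]$Y, [int]$Zoom)
    $quadKey = ""
    for ($i = $Zoom; $i -gt 0; $i--) {
        $digit = 0
        $mask = 1 -shl ($i - 1)
        if (($X -band $mask) -ne 0) { $digit += 1 }
        if (($Y -band $mask) -ne 0) { $digit += 2 }
        $quadKey += $digit
    }
    return $quadKey
}

ConvertTo-QuadKey -X 3 -Y 5 -Zoom 3   # "213" identifies the same tile as {x}=3, {y}=5, {z}=3
```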
## Next steps Add more context to the map: > [!div class="nextstepaction"]
-> [Show real-time traffic](power-bi-visual-show-real-time-traffic.md)
+> [Show real-time traffic]
+
+[Web Mapping Services (WMS)]: https://www.opengeospatial.org/standards/wms
+[Show real-time traffic]: power-bi-visual-show-real-time-traffic.md
+[Zoom levels and tile grid]: zoom-levels-and-tile-grid.md
+[weather radar tile service]: /rest/api/maps/render-v2/get-map-tile
azure-maps Power Bi Visual Conversion https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/power-bi-visual-conversion.md
The Azure Maps visual is now Generally Available, providing a streamlined and in
A conversion function is available in Power BI desktop to convert any existing Map and Filled map visuals to the new Azure Maps visual.
-When opening a report with Map and Filled map visuals, you will see the following dialog giving you the option to upgrade to the new Azure Maps visual:
+When opening a report with Map and Filled map visuals, you'll see the following dialog giving you the option to upgrade to the new Azure Maps visual:
:::image type="content" source="media/power-bi-visual/introducing-azure-map-visual.png" alt-text="Screenshot showing the option to upgrade maps to the Azure Maps visual.":::
azure-maps Quick Demo Map App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/quick-demo-map-app.md
In this quickstart, you will learn how to use Azure Maps to create a map that gi
* Get your Azure Maps subscription key to use in the demo web application. * Download and open the demo map application.
-This quickstart uses the Azure Maps Web SDK, however the Azure Maps service can be used with any map control, such as these popular [open-source map controls](open-source-projects.md#third-part-map-control-plugins) that the Azure Maps team has created plugin's for.
+This quickstart uses the Azure Maps Web SDK; however, the Azure Maps service can be used with any map control, such as these popular [open-source map controls](open-source-projects.md#third-party-map-control-plugins) that the Azure Maps team has created plugins for.
## Prerequisites
azure-maps Supported Browsers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/supported-browsers.md
Learn more about the Azure Maps Web SDK:
[Render Azure Maps in Leaflet sample source code]: https://github.com/Azure-Samples/AzureMapsCodeSamples/blob/main/Samples/Third%20Party%20Map%20Controls/Render%20Azure%20Maps%20in%20Leaflet/Render%20Azure%20Maps%20in%20Leaflet.html [Azure Maps Leaflet plugin]: https://github.com/azure-samples/azure-maps-leaflet [Azure Maps Samples]: https://samples.azuremaps.com/?search=leaflet
-[Azure Maps community - Open-source projects]: open-source-projects.md#third-part-map-control-plugins
+[Azure Maps community - Open-source projects]: open-source-projects.md#third-party-map-control-plugins
azure-monitor Agent Data Sources https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/agents/agent-data-sources.md
Title: Log Analytics agent data sources in Azure Monitor description: Data sources define the log data that Azure Monitor collects from agents and other connected sources. This article describes how Azure Monitor uses data sources, explains how to configure them, and summarizes the different data sources available. --++ Last updated 05/10/2022
azure-monitor Azure Monitor Agent Manage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/agents/azure-monitor-agent-manage.md
Title: Manage Azure Monitor Agent description: Options for managing Azure Monitor Agent on Azure virtual machines and Azure Arc-enabled servers. --++ Last updated 1/30/2022
azure-monitor Data Sources Performance Counters https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/agents/data-sources-performance-counters.md
Title: Collect Windows and Linux performance data sources with the Log Analytics agent in Azure Monitor description: Learn how to configure collection of performance counters for Windows and Linux agents, how they're stored in the workspace, and how to analyze them. --++ Last updated 06/28/2022
azure-monitor Resource Manager Agent https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/agents/resource-manager-agent.md
Title: Resource Manager template samples for agents
description: Sample Azure Resource Manager templates to deploy and configure virtual machine agents in Azure Monitor. --++ Last updated 04/26/2022
azure-monitor Resource Manager Data Collection Rules https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/agents/resource-manager-data-collection-rules.md
Title: Resource Manager template samples for data collection rules
description: Sample Azure Resource Manager templates to create associations between data collection rules and virtual machines in Azure Monitor. --++ Last updated 06/22/2022
azure-monitor Alerts Troubleshoot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/alerts/alerts-troubleshoot.md
If you have received a notification for an alert (such as an email or an SMS) mo
## Action or notification has an unexpected content Action Groups uses two different email providers to ensure email notification delivery. The primary email provider is very resilient and quick but occasionally suffers outages. In this case, the secondary email provider handles email requests. The secondary provider is only a fallback solution. Due to provider differences, an email sent from our secondary provider may have a degraded email experience. The degradation results in slightly different email formatting and content. Since email templates differ in the two systems, maintaining parity across the two systems is not feasible. You can tell that you're receiving a degraded experience if there's a note at the top of your email notification that says:
-"This is a degraded email experience. That means the formatting may be off or details could be missing. For more infomration on teh degraded emaiol experience, read here."
+"This is a degraded email experience. That means the formatting may be off or details could be missing. For more infomration on the degraded email experience, read here."
If your notification does not contain this note and you have received the alert, but believe some of its fields are missing or incorrect, follow these steps:
azure-monitor Resource Manager Action Groups https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/alerts/resource-manager-action-groups.md
Title: Resource Manager template samples for action groups
description: Sample Azure Resource Manager templates to deploy Azure Monitor action groups. -- Last updated 04/27/2022
azure-monitor Tutorial Log Alert https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/alerts/tutorial-log-alert.md
Title: Tutorial - Create a log query alert for an Azure resource description: Tutorial to create a log query alert for an Azure resource. -- Last updated 09/16/2021
azure-monitor Tutorial Metric Alert https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/alerts/tutorial-metric-alert.md
Title: Tutorial - Create a metric alert for an Azure resource description: Learn how to create a metric chart with Azure metrics explorer.-- Last updated 11/08/2021
azure-monitor Availability Test Migration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/availability-test-migration.md
The following steps walk you through the process of creating [standard tests](av
1. Connect to your subscription with Azure PowerShell (Connect-AzAccount + Set-AzContext). 2. List all URL ping tests in a resource group:
-```azurepowershell
-$resourceGroup = "myResourceGroup";
-Get-AzApplicationInsightsWebTest -ResourceGroupName $resourceGroup | `
- Where-Object { $_.WebTestKind -eq "ping" };
-```
+ ```azurepowershell
+ $resourceGroup = "myResourceGroup";
+ Get-AzApplicationInsightsWebTest -ResourceGroupName $resourceGroup | `
+ Where-Object { $_.WebTestKind -eq "ping" };
+ ```
3. Find the URL ping Test you want to migrate and record its name. 4. The following commands create a standard test with the same logic as the URL ping test:
-```azurepowershell
-$appInsightsComponent = "componentName";
-$pingTestName = "pingTestName";
-$newStandardTestName = "newStandardTestName";
-
-$componentId = (Get-AzApplicationInsights -ResourceGroupName $resourceGroup -Name $appInsightsComponent).Id;
-$pingTest = Get-AzApplicationInsightsWebTest -ResourceGroupName $resourceGroup -Name $pingTestName;
-$pingTestRequest = ([xml]$pingTest.ConfigurationWebTest).WebTest.Items.Request;
-$pingTestValidationRule = ([xml]$pingTest.ConfigurationWebTest).WebTest.ValidationRules.ValidationRule;
-
-$dynamicParameters = @{};
-
-if ($pingTestRequest.IgnoreHttpStatusCode -eq [bool]::FalseString) {
-
-$dynamicParameters["RuleExpectedHttpStatusCode"] = [convert]::ToInt32($pingTestRequest.ExpectedHttpStatusCode, 10);
-
-}
-
-if ($pingTestValidationRule -and $pingTestValidationRule.DisplayName -eq "Find Text" `
-
--and $pingTestValidationRule.RuleParameters.RuleParameter[0].Name -eq "FindText" `
--and $pingTestValidationRule.RuleParameters.RuleParameter[0].Value) {
-$dynamicParameters["ContentMatch"] = $pingTestValidationRule.RuleParameters.RuleParameter[0].Value;
-$dynamicParameters["ContentPassIfTextFound"] = $true;
-}
-
-New-AzApplicationInsightsWebTest @dynamicParameters -ResourceGroupName $resourceGroup -Name $newStandardTestName `
--Location $pingTest.Location -Kind 'standard' -Tag @{ "hidden-link:$componentId" = "Resource" } -TestName $newStandardTestName `
--RequestUrl $pingTestRequest.Url -RequestHttpVerb "GET" -GeoLocation $pingTest.PropertiesLocations -Frequency $pingTest.Frequency `
--Timeout $pingTest.Timeout -RetryEnabled:$pingTest.RetryEnabled -Enabled:$pingTest.Enabled `
--RequestParseDependent:($pingTestRequest.ParseDependentRequests -eq [bool]::TrueString);
-
-```
+ ```azurepowershell
+ $appInsightsComponent = "componentName";
+ $pingTestName = "pingTestName";
+ $newStandardTestName = "newStandardTestName";
+
+ $componentId = (Get-AzApplicationInsights -ResourceGroupName $resourceGroup -Name $appInsightsComponent).Id;
+ $pingTest = Get-AzApplicationInsightsWebTest -ResourceGroupName $resourceGroup -Name $pingTestName;
+ $pingTestRequest = ([xml]$pingTest.ConfigurationWebTest).WebTest.Items.Request;
+ $pingTestValidationRule = ([xml]$pingTest.ConfigurationWebTest).WebTest.ValidationRules.ValidationRule;
+
+ $dynamicParameters = @{};
+
+ if ($pingTestRequest.IgnoreHttpStatusCode -eq [bool]::FalseString) {
+
+ $dynamicParameters["RuleExpectedHttpStatusCode"] = [convert]::ToInt32($pingTestRequest.ExpectedHttpStatusCode, 10);
+
+ }
+
+ if ($pingTestValidationRule -and $pingTestValidationRule.DisplayName -eq "Find Text" `
+ -and $pingTestValidationRule.RuleParameters.RuleParameter[0].Name -eq "FindText" `
+ -and $pingTestValidationRule.RuleParameters.RuleParameter[0].Value) {
+ $dynamicParameters["ContentMatch"] = $pingTestValidationRule.RuleParameters.RuleParameter[0].Value;
+ $dynamicParameters["ContentPassIfTextFound"] = $true;
+ }
+
+ New-AzApplicationInsightsWebTest @dynamicParameters -ResourceGroupName $resourceGroup -Name $newStandardTestName `
+ -Location $pingTest.Location -Kind 'standard' -Tag @{ "hidden-link:$componentId" = "Resource" } -TestName $newStandardTestName `
+ -RequestUrl $pingTestRequest.Url -RequestHttpVerb "GET" -GeoLocation $pingTest.PropertiesLocations -Frequency $pingTest.Frequency `
+ -Timeout $pingTest.Timeout -RetryEnabled:$pingTest.RetryEnabled -Enabled:$pingTest.Enabled `
+ -RequestParseDependent:($pingTestRequest.ParseDependentRequests -eq [bool]::TrueString);
+
+ ```
5. The new standard test doesn't have alert rules by default, so it doesn't create noisy alerts. No changes are made to your URL ping test so you can continue to rely on it for alerts. 6. Once you have validated the functionality of the new standard test, [update your alert rules](/azure/azure-monitor/alerts/alerts-manage-alert-rules) that reference the URL ping test to reference the standard test instead. Then you disable or delete the URL ping test. 7. To delete a URL ping test with Azure PowerShell, you can use this command:
-```azurepowershell
-Remove-AzApplicationInsightsWebTest -ResourceGroupName $resourceGroup -Name $pingTestName;
-```
+ ```azurepowershell
+ Remove-AzApplicationInsightsWebTest -ResourceGroupName $resourceGroup -Name $pingTestName;
+ ```
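Before deleting the URL ping test, it can help to confirm the new standard test exists and is enabled. The following is a minimal sketch that reuses the variables from the script above; the `standard` kind filter mirrors the `ping` filter used in step 2.

```azurepowershell
# Sanity check (sketch): confirm the new standard test was created before removing the ping test
Get-AzApplicationInsightsWebTest -ResourceGroupName $resourceGroup |
    Where-Object { $_.WebTestKind -eq "standard" -and $_.Name -eq $newStandardTestName } |
    Select-Object Name, Enabled, Frequency, Timeout
```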
## FAQ
azure-monitor Azure Web Apps https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/azure-web-apps.md
There are two ways to enable monitoring for applications hosted on App Service:
This method is the easiest to enable, and no code change or advanced configurations are required. It's often referred to as "runtime" monitoring. For App Service, we recommend that at a minimum you enable this level of monitoring. Based on your specific scenario, you can evaluate whether more advanced monitoring through manual instrumentation is needed.
+ When you enable auto-instrumentation, Application Insights is enabled with a default configuration that includes sampling. Even if you set Sampling to **All Data 100%** in Application Insights, this setting is ignored.
+ For a complete list of supported autoinstrumentation scenarios, see [Supported environments, languages, and resource providers](codeless-overview.md#supported-environments-languages-and-resource-providers). The following platforms are supported for autoinstrumentation monitoring:
azure-monitor Data Collection Endpoint Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/essentials/data-collection-endpoint-overview.md
description: Overview of data collection endpoints (DCEs) in Azure Monitor, incl
Previously updated : 03/16/2022 Last updated : 07/17/2023 ms.reviwer: nikeist
azure-monitor Data Collection Endpoint Sample https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/essentials/data-collection-endpoint-sample.md
description: Sample data collection endpoint below is for virtual machines with
Previously updated : 03/16/2022 Last updated : 07/17/2023
azure-monitor Data Collection Rule Edit https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/essentials/data-collection-rule-edit.md
Previously updated : 05/31/2022 Last updated : 07/17/2023 # Tutorial: Editing Data Collection Rules
-This tutorial will describe how to edit the definition of Data Collection Rule (DCR) that has been already provisioned using command line tools.
+This tutorial describes how to edit the definition of a Data Collection Rule (DCR) that has already been provisioned, using command line tools.
In this tutorial, you learn how to: > [!div class="checklist"]
While going through the wizard on the portal is the simplest way to set up the i
- Update data parsing or filtering logic for your data stream - Change data destination (e.g. send data to an Azure table, as this option is not directly offered as part of the DCR-based custom log wizard)
-In this tutorial, you will, first, set up ingestion of a custom log, then. you will modify the KQL transformation for your custom log to include additional filtering and apply the changes to your DCR. Finally, we are going to combine all editing operations into a single PowerShell script, which can be used to edit any DCR for any of the above mentioned reasons.
+In this tutorial, you first set up ingestion of a custom log. Then you modify the KQL transformation for your custom log to include additional filtering and apply the changes to your DCR. Finally, we're going to combine all editing operations into a single PowerShell script, which can be used to edit any DCR for any of the above mentioned reasons.
## Set up new custom log Start by setting up a new custom log. Follow [Tutorial: Send custom logs to Azure Monitor Logs using the Azure portal (preview)]( ../logs/tutorial-logs-ingestion-portal.md). Note the resource ID of the DCR created.
Remove-Item $FilePath
.\DCREditor.ps1 "/subscriptions/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx/resourceGroups/foo/providers/Microsoft.Insights/dataCollectionRules/bar" ```
-DCR content will open in embedded code editor. Once editing is complete, entering "Y" on script prompt will apply changes back to the DCR.
+The DCR content opens in the embedded code editor. Once editing is complete, entering "Y" at the script prompt applies the changes back to the DCR.
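If you want to inspect the DCR payload before (or instead of) running the editor script, the retrieval step can be approximated as follows. This is a sketch rather than the tutorial's DCREditor.ps1; the api-version shown is an assumption, and the resource ID is the same placeholder used in the example above.

```azurepowershell
# Sketch: fetch the DCR definition as JSON and look at the parts you typically edit
$dcrResourceId = "/subscriptions/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx/resourceGroups/foo/providers/Microsoft.Insights/dataCollectionRules/bar"
$response = Invoke-AzRestMethod -Path "${dcrResourceId}?api-version=2022-06-01" -Method GET
$dcr = $response.Content | ConvertFrom-Json

$dcr.properties.streamDeclarations                                   # custom stream schemas
$dcr.properties.dataFlows | Select-Object streams, transformKql, destinations
```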
## Next steps
azure-monitor Data Collection Transformations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/essentials/data-collection-transformations.md
description: Use transformations in a data collection rule in Azure Monitor to f
Previously updated : 06/29/2022 Last updated : 07/17/2023 ms.reviwer: nikeist
azure-monitor Metrics Getting Started https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/essentials/metrics-getting-started.md
Title: Get started with Azure Monitor metrics explorer description: Learn how to create your first metric chart with Azure Monitor metrics explorer. ++ Previously updated : 02/21/2022 Last updated : 07/20/2023
Azure Monitor metrics explorer is a component of the Azure portal that you can use to plot charts, visually correlate trends, and investigate spikes and dips in metrics' values. Use metrics explorer to investigate the health and utilization of your resources.
-## Where do I start?
-Start in the following order:
+## Create a metric chart
-1. [Pick a resource and a metric](#create-your-first-metric-chart) and you see a basic chart. Then [select a time range](#select-a-time-range) that's relevant for your investigation.
+Create a metric chart using the following steps:
-1. Try [applying dimension filters and splitting](#apply-dimension-filters-and-splitting). The filters and splitting allow you to analyze which segments of the metric contribute to the overall metric value and identify possible outliers.
+- Open the Metrics explorer
+
+- [Pick a resource and a metric](#create-your-first-metric-chart) and you see a basic chart.
+
+- [Select a time range](#select-a-time-range) that's relevant for your investigation.
+
+- [Apply dimension filters and splitting](#apply-dimension-filters-and-splitting). The filters and splitting allow you to analyze which segments of the metric contribute to the overall metric value and identify possible outliers.
+
+- Use [advanced settings](#advanced-chart-settings) to customize the chart before you pin it to dashboards. [Configure alerts](../alerts/alerts-metric-overview.md) to receive notifications when the metric value exceeds or drops below a threshold.
+- Add more metrics to the chart. You can also [view multiple resources in the same view](./metrics-dynamic-scope.md).
-1. Use [advanced settings](#advanced-chart-settings) to customize the chart before you pin it to dashboards. [Configure alerts](../alerts/alerts-metric-overview.md) to receive notifications when the metric value exceeds or drops below a threshold.
## Create your first metric chart
-To create a metric chart, from your resource, resource group, subscription, or Azure Monitor view, open the **Metrics** tab and follow these steps:
+1. Open the metrics explorer from the monitor overview page, or from the monitoring section of any resource.
-1. Select the **Select a scope** button to open the resource scope picker. You can use the picker to select the resources you want to see metrics for. The resource should already be populated if you opened metrics explorer from the resource's menu. To learn how to view metrics across multiple resources, see [View multiple resources in Azure Monitor metrics explorer](./metrics-dynamic-scope.md).
+ :::image type="content" source="./media/metrics-getting-started/metrics-menu.png" alt-text="A screenshot showing the monitoring page menu.":::
+
+1. Select the **Select a scope** button to open the resource scope picker. You can use the picker to select the resource you want to see metrics for. If you opened the metrics explorer from the resource's menu, the scope is already populated for you.
+
+ To learn how to view metrics across multiple resources, see [View multiple resources in Azure Monitor metrics explorer](./metrics-dynamic-scope.md).
> ![Screenshot that shows selecting a resource.](./media/metrics-getting-started/scope-picker.png)
-1. For some resources, you must pick a namespace. The namespace is a way to organize metrics so that you can easily find them. For example, storage accounts have separate namespaces for storing metrics for files, tables, blobs, and queues. Many resource types have only one namespace.
+1. For some resources, you must pick a namespace. The namespace is a way to organize metrics so that you can easily find them. For example, storage accounts have separate namespaces for storing metrics for files, tables, blobs, and queues. Most resource types have only one namespace.
1. Select a metric from a list of available metrics.
- > ![Screenshot that shows selecting a metric.](./media/metrics-getting-started/metrics-dropdown.png)
+1. Select the metric aggregation. Available aggregations include the minimum, maximum, or average value of the metric, or a count of the number of samples. For more information on aggregations, see [Advanced features of Metrics Explorer](../essentials/metrics-charts.md#aggregation).
+
+ [ ![Screenshot that shows selecting a metric.](./media/metrics-getting-started/metrics-dropdown.png) ](./media/metrics-getting-started/metrics-dropdown.png#lightbox)
+
-1. Optionally, you can [change the metric aggregation](../essentials/metrics-charts.md#aggregation). For example, you might want your chart to show minimum, maximum, or average values of the metric.
> [!TIP] > Select **Add metric** and repeat these steps to see multiple metrics plotted in the same chart. For multiple charts in one view, select **Add chart**. ## Select a time range
-> [!WARNING]
+> [!NOTE]
> [Most metrics in Azure are stored for 93 days](../essentials/data-platform-metrics.md#retention-of-metrics). You can query no more than 30 days' worth of data on any single chart. You can [pan](metrics-charts.md#pan) the chart to view the full retention. The 30-day limitation doesn't apply to [log-based metrics](../app/pre-aggregated-metrics-log-metrics.md#log-based-metrics). By default, the chart shows the most recent 24 hours of metrics data. Use the **time picker** panel to change the time range, zoom in, or zoom out on your chart.
-![Screenshot that shows changing the time range panel.](./media/metrics-getting-started/time.png)
+[ ![Screenshot that shows changing the time range panel.](./media/metrics-getting-started/time.png) ](./media/metrics-getting-started/time.png#lightbox)
> [!TIP]
-> Use the **time brush** to investigate an interesting area of the chart like a spike or a dip. Position the mouse pointer at the beginning of the area, select and hold the left mouse button, drag to the other side of the area, and then release the button. The chart will zoom in on that time range.
+> Use the **time brush** to investigate an interesting area of the chart, like a spike or a dip. Select an area on the chart, and the chart zooms in to show more detail for that area.
## Apply dimension filters and splitting
-[Filtering](../essentials/metrics-charts.md#filters) and [splitting](../essentials/metrics-charts.md#apply-splitting) are powerful diagnostic tools for the metrics that have dimensions. These features show how various metric segments ("dimension values") affect the overall value of the metric. You can use them to identify possible outliers.
+[Filtering](../essentials/metrics-charts.md#filters) and [splitting](../essentials/metrics-charts.md#apply-splitting) are powerful diagnostic tools for metrics that have dimensions. These features show how various metric segments, or dimensions, affect the overall value of the metric. You can use them to identify possible outliers. For example:
- **Filtering** lets you choose which dimension values are included in the chart. For example, you might want to show successful requests when you chart the *server response time* metric. You apply the filter on the *success of request* dimension.-- **Splitting** controls whether the chart displays separate lines for each value of a dimension or aggregates the values into a single line. For example, you can see one line for an average response time across all server instances. Or you can see separate lines for each server. You apply splitting on the *server instance* dimension to see separate lines.
+- **Splitting** controls whether the chart displays separate lines for each value of a dimension or aggregates the values into a single line. For example, you can see one line for average CPU usage across all server instances, or you can see separate lines for each server. The image below shows splitting a Virtual Machine Scale Set to see each virtual machine separately.
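If you want to reproduce filtering and splitting outside the portal, the following hedged PowerShell sketch uses `Get-AzMetric` with a metric filter to return one time series per dimension value. The resource ID is a placeholder, and the metric and dimension names (`Percentage CPU` and `VMName` on a Virtual Machine Scale Set) are assumptions that must match what your resource actually emits.

```powershell
# Placeholder resource ID for a Virtual Machine Scale Set; assumes Connect-AzAccount has been run.
$resourceId = "/subscriptions/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx/resourceGroups/my-rg/providers/Microsoft.Compute/virtualMachineScaleSets/my-vmss"

# Request 'Percentage CPU' split per VM instance; the wildcard filter returns a series for every dimension value.
Get-AzMetric -ResourceId $resourceId -MetricName "Percentage CPU" -AggregationType Average `
    -StartTime (Get-Date).AddHours(-1) -EndTime (Get-Date) `
    -MetricFilter "VMName eq '*'"
```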
+ For examples that have filtering and splitting applied, see [Metric chart examples](../essentials/metric-chart-samples.md). The article shows the steps that were used to configure the charts. ## Share your metric chart
-There are three ways to share your metric chart. See the following instructions on how to share information from your metric charts by using Excel, a link, or a workbook.
-
-### Download to Excel
-
-Select **Share** > **Download to Excel**. Your download should start immediately.
--
-### Share a link
-
-Select **Share** > **Copy link**. You should get a notification that the link was copied successfully.
--
-### Send to workbook
+Share your metric chart in any of the following ways: download it to Excel, copy a link to it, send it to a workbook, or pin it to a Grafana dashboard.
-Select **Share** > **Send to Workbook**. In the **Send to Workbook** window, you can send the metric chart to a new or existing workbook.
++ Download to Excel. Select **Share** > **Download to Excel**. Your download starts immediately.++ Share a link. Select **Share** > **Copy link**. You should get a notification that the link was copied successfully.++ Send to workbook. Select **Share** > **Send to Workbook**. In the **Send to Workbook** window, you can send the metric chart to a new or existing workbook.++ Pin to Grafana. Select **Share** > **Pin to Grafana**. In the **Pin to Grafana** window, you can send the metric chart to a new or existing Grafana dashboard. ## Advanced chart settings
-You can customize the chart style and title, and modify advanced chart settings. When you're finished with customization, pin the chart to a dashboard or save it to a workbook. You can also configure metrics alerts. Follow [product documentation](../essentials/metrics-charts.md) to learn about these and other advanced features of Azure Monitor metrics explorer.
+You can customize the chart style and title, and modify advanced chart settings. When you're finished with customization, pin the chart to a dashboard or save it to a workbook. You can also configure metrics alerts. See [Advanced features of Metrics Explorer](../essentials/metrics-charts.md) to learn about these and other advanced features of Azure Monitor metrics explorer.
## Next steps
azure-monitor Metrics Supported https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/essentials/metrics-supported.md
Previously updated : 07/03/2023 Last updated : 07/18/2023
> [!NOTE] > This list is largely auto-generated. Any modification made to this list via GitHub might be written over without warning. Contact the author of this article for details on how to make permanent updates.
-Date list was last updated: 07/03/2023.
+Date list was last updated: 07/18/2023.
Azure Monitor provides several ways to interact with metrics, including charting them in the Azure portal, accessing them through the REST API, or querying them by using PowerShell or the Azure CLI (Command Line Interface).
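As a small illustration of the PowerShell option, the following sketch lists the metric definitions a resource supports and then retrieves one of them; the resource ID and metric name are placeholders.

```powershell
# Placeholder resource ID; any Azure resource that emits platform metrics works here.
$resourceId = "/subscriptions/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx/resourceGroups/my-rg/providers/Microsoft.Compute/virtualMachines/my-vm"

# List the metrics the resource supports, then pull the last hour of one of them.
Get-AzMetricDefinition -ResourceId $resourceId | Select-Object Name, Unit, PrimaryAggregationType
Get-AzMetric -ResourceId $resourceId -MetricName "Percentage CPU" -AggregationType Average `
    -StartTime (Get-Date).AddHours(-1) -EndTime (Get-Date)
```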
This latest update adds a new column and reorders the metrics to be alphabetical
|StateStoreWriteRequests |Yes |State store write requests |Count |Total |Number of write requests to state store |ExtensionName, IsSuccessful, FailureCategory | ## Microsoft.ContainerInstance/containerGroups
-<!-- Data source : arm-->
+<!-- Data source : naam-->
|Metric|Exportable via Diagnostic Settings?|Metric Display Name|Unit|Aggregation Type|Description|Dimensions| ||||||||
This latest update adds a new column and reorders the metrics to be alphabetical
|NetworkBytesReceivedPerSecond |Yes |Network Bytes Received Per Second |Bytes |Average |The network bytes received per second. |No Dimensions | |NetworkBytesTransmittedPerSecond |Yes |Network Bytes Transmitted Per Second |Bytes |Average |The network bytes transmitted per second. |No Dimensions |
+## Microsoft.ContainerInstance/containerScaleSets
+<!-- Data source : naam-->
+
+|Metric|Exportable via Diagnostic Settings?|Metric Display Name|Unit|Aggregation Type|Description|Dimensions|
+||||||||
+|CpuPercentage |Yes |Percentage CPU |Percent |Average |Average of the CPU percentages consumed by individual Container Groups in this Scale Set |OrchestratorId |
+|CpuUsage |Yes |CPU usage |MilliCores |Average |Average of the CPU utilizations in millicores consumed by Container Groups in this Scale Set |OrchestratorId |
+|MemoryPercentage |Yes |Memory percentage |Percent |Average |Average of the memory percentages consumed ((usedMemory/allocatedMemory) * 100) by Container Groups in this Scale Set |OrchestratorId |
+|MemoryUsage |Yes |Memory usage |Bytes |Total |Total memory used by all the Container Groups in this Scale Set |OrchestratorId |
+ ## Microsoft.ContainerRegistry/registries <!-- Data source : naam-->
This latest update adds a new column and reorders the metrics to be alphabetical
|AirflowIntegrationRuntimeJobHeartbeatFailure |No |Airflow Integration Runtime Heartbeat Failure |Count |Total |Airflow Integration Runtime Heartbeat Failure |IntegrationRuntimeName, Job | |AirflowIntegrationRuntimeJobStart |No |Airflow Integration Runtime Job Start |Count |Total |Airflow Integration Runtime Job Start |IntegrationRuntimeName, Job | |AirflowIntegrationRuntimeMemoryPercentage |Yes |Airflow Integration Runtime Memory Percentage |Percent |Average |Airflow Integration Runtime Memory Percentage |IntegrationRuntimeName, ContainerName |
+|AirflowIntegrationRuntimeNodeCount |Yes |Airflow Integration Runtime Node Count |Count |Average |Airflow Integration Runtime Node Count |IntegrationRuntimeName |
|AirflowIntegrationRuntimeOperatorFailures |No |Airflow Integration Runtime Operator Failures |Count |Total |Airflow Integration Runtime Operator Failures |IntegrationRuntimeName, Operator | |AirflowIntegrationRuntimeOperatorSuccesses |No |Airflow Integration Runtime Operator Successes |Count |Total |Airflow Integration Runtime Operator Successes |IntegrationRuntimeName, Operator | |AirflowIntegrationRuntimePoolOpenSlots |No |Airflow Integration Runtime Pool Open Slots |Count |Total |Airflow Integration Runtime Pool Open Slots |IntegrationRuntimeName, Pool |
This latest update adds a new column and reorders the metrics to be alphabetical
|IntegrationRuntimeAvailableMemory |Yes |Integration runtime available memory |Bytes |Average |Integration runtime available memory |IntegrationRuntimeName, NodeName | |IntegrationRuntimeAvailableNodeNumber |Yes |Integration runtime available node count |Count |Average |Integration runtime available node count |IntegrationRuntimeName | |IntegrationRuntimeAverageTaskPickupDelay |Yes |Integration runtime queue duration |Seconds |Average |Integration runtime queue duration |IntegrationRuntimeName |
-|IntegrationRuntimeCopyAvailableCapacityPercentage |Yes |Integration runtime copy available capacity percentage |Percent |Maximum |Integration runtime copy available capacity percentage |IntegrationRuntimeName |
-|IntegrationRuntimeCopyCapacityUtilization |Yes |Integration runtime copy capacity utilization |Percent |Maximum |Integration runtime copy capacity utilization |IntegrationRuntimeName |
-|IntegrationRuntimeCopyWaitingQueueLength |Yes |Integration runtime copy waiting queue length |Count |Average |Integration runtime copy waiting queue length |IntegrationRuntimeName |
|IntegrationRuntimeCpuPercentage |Yes |Integration runtime CPU utilization |Percent |Average |Integration runtime CPU utilization |IntegrationRuntimeName, NodeName |
-|IntegrationRuntimeExternalAvailableCapacityPercentage |Yes |Integration runtime external available capacity percentage |Percent |Maximum |Integration runtime external available capacity percentage |IntegrationRuntimeName |
-|IntegrationRuntimeExternalCapacityUtilization |Yes |Integration runtime external capacity utilization |Percent |Maximum |Integration runtime external capacity utilization |IntegrationRuntimeName |
-|IntegrationRuntimeExternalWaitingQueueLength |Yes |Integration runtime external waiting queue length |Count |Average |Integration runtime external waiting queue length |IntegrationRuntimeName |
-|IntegrationRuntimePipelineAvailableCapacityPercentage |Yes |Integration runtime pipeline available capacity percentage |Percent |Maximum |Integration runtime pipeline available capacity percentage |IntegrationRuntimeName |
-|IntegrationRuntimePipelineCapacityUtilization |Yes |Integration runtime pipeline capacity utilization |Percent |Maximum |Integration runtime pipeline capacity utilization |IntegrationRuntimeName |
-|IntegrationRuntimePipelineWaitingQueueLength |Yes |Integration runtime pipeline waiting queue length |Count |Average |Integration runtime pipeline waiting queue length |IntegrationRuntimeName |
|IntegrationRuntimeQueueLength |Yes |Integration runtime queue length |Count |Average |Integration runtime queue length |IntegrationRuntimeName | |MaxAllowedFactorySizeInGbUnits |Yes |Maximum allowed factory size (GB unit) |Count |Maximum |Maximum allowed factory size (GB unit) |No Dimensions | |MaxAllowedResourceCount |Yes |Maximum allowed entities count |Count |Maximum |Maximum allowed entities count |No Dimensions |
+|MVNetIRCopyAvailableCapacityPCT |Yes |Copy available capacity percentage of MVNet integration runtime |Percent |Maximum |Copy available capacity percentage of MVNet integration runtime |IntegrationRuntimeName |
+|MVNetIRCopyCapacityUtilization |Yes |Copy capacity utilization of MVNet integration runtime |Percent |Maximum |Copy capacity utilization of MVNet integration runtime |IntegrationRuntimeName |
+|MVNetIRCopyWaitingQueueLength |Yes |Copy waiting queue length of MVNet integration runtime |Count |Average |Copy waiting queue length of MVNet integration runtime |IntegrationRuntimeName |
+|MVNetIRExternalAvailableCapacityPCT |Yes |External available capacity percentage of MVNet integration runtime |Percent |Maximum |External available capacity percentage of MVNet integration runtime |IntegrationRuntimeName |
+|MVNetIRExternalCapacityUtilization |Yes |External capacity utilization of MVNet integration runtime |Percent |Maximum |External capacity utilization of MVNet integration runtime |IntegrationRuntimeName |
+|MVNetIRExternalWaitingQueueLength |Yes |External waiting queue length of MVNet integration runtime |Count |Average |External waiting queue length of MVNet integration runtime |IntegrationRuntimeName |
+|MVNetIRPipelineAvailableCapacityPCT |Yes |Pipeline available capacity percentage of MVNet integration runtime |Percent |Maximum |Pipeline available capacity percentage of MVNet integration runtime |IntegrationRuntimeName |
+|MVNetIRPipelineCapacityUtilization |Yes |Pipeline capacity utilization of MVNet integration runtime |Percent |Maximum |Pipeline capacity utilization of MVNet integration runtime |IntegrationRuntimeName |
+|MVNetIRPipelineWaitingQueueLength |Yes |Pipeline waiting queue length of MVNet integration runtime |Count |Average |Pipeline waiting queue length of MVNet integration runtime |IntegrationRuntimeName |
|PipelineCancelledRuns |Yes |Cancelled pipeline runs metrics |Count |Total |Cancelled pipeline runs metrics |FailureType, CancelledBy, Name | |PipelineElapsedTimeRuns |Yes |Elapsed Time Pipeline Runs Metrics |Count |Total |Elapsed Time Pipeline Runs Metrics |RunId, Name | |PipelineFailedRuns |Yes |Failed pipeline runs metrics |Count |Total |Failed pipeline runs metrics |FailureType, Name |
This latest update adds a new column and reorders the metrics to be alphabetical
|BillingApiOperations |Yes |Billing API Operations |Count |Total |Billing metric for the count of all API requests made against the Azure Digital Twins service. |MeterId | |BillingMessagesProcessed |Yes |Billing Messages Processed |Count |Total |Billing metric for the number of messages sent out from Azure Digital Twins to external endpoints. |MeterId | |BillingQueryUnits |Yes |Billing Query Units |Count |Total |The number of Query Units, an internally computed measure of service resource usage, consumed to execute queries. |MeterId |
-|BulkOperationEntityCount |Yes |Bulk Operation Entity Count (preview) |Count |Total |The number of twins, models, or relationships processed by a bulk operation. |Operation, Result |
-|BulkOperationLatency |Yes |Bulk Operation Latency (preview) |Milliseconds |Average |Total time taken for a bulk operation to complete. |Operation, Authentication, Protocol |
|DataHistoryRouting |Yes |Data History Messages Routed (preview) |Count |Total |The number of messages routed to a time series database. |EndpointType, Result | |DataHistoryRoutingFailureRate |Yes |Data History Routing Failure Rate (preview) |Percent |Average |The percentage of events that result in an error as they are routed from Azure Digital Twins to a time series database. |EndpointType | |DataHistoryRoutingLatency |Yes |Data History Routing Latency (preview) |Milliseconds |Average |Time elapsed between an event getting routed from Azure Digital Twins to when it is posted to a time series database. |EndpointType, Result |
+|ImportJobEntityCount |Yes |Import Job Entity Count |Count |Total |The number of twins, models, or relationships processed by an import job. |Operation, Result |
+|ImportJobLatency |Yes |Import Job Latency |Milliseconds |Average |Total time taken for an import job to complete. |Operation, Authentication, Protocol |
|IngressEvents |Yes |Ingress Events |Count |Total |The number of incoming telemetry events into Azure Digital Twins. |Result | |IngressEventsFailureRate |Yes |Ingress Events Failure Rate |Percent |Average |The percentage of incoming telemetry events for which the service returns an internal error (500) response code. |No Dimensions | |IngressEventsLatency |Yes |Ingress Events Latency |Milliseconds |Average |The time from when an event arrives to when it is ready to be egressed by Azure Digital Twins, at which point the service sends a success/fail result. |Result |
This latest update adds a new column and reorders the metrics to be alphabetical
|Metric|Exportable via Diagnostic Settings?|Metric Display Name|Unit|Aggregation Type|Description|Dimensions| ||||||||
-|BackendConnectionTimeouts |Yes |Backend Connection Timeouts |Count |Total |Count of requests that timed out waiting for a response from the backend target (includes all retry requests initiated from Traffic Controller to the backend target) |Microsoft.regionName, BackendService |
+|BackendConnectionTimeouts |Yes |Backend Connection Timeouts |Count |Total |Count of requests that timed out waiting for a response from the backend target (includes all retry requests initiated from Application Gateway for Containers to the backend target) |Microsoft.regionName, BackendService |
|BackendHealthyTargets |Yes |Backend Healthy Targets |Count |Average |Count of healthy backend targets |Microsoft.regionName, BackendService |
-|BackendHTTPResponseStatus |Yes |Backend HTTP Response Status |Count |Total |HTTP response status returned by the backend target to Traffic Controller |Microsoft.regionName, BackendService, HttpResponseCode |
-|ClientConnectionIdleTimeouts |Yes |Total Connection Idle Timeouts |Count |Total |Count of connections closed, between client and Traffic Controller frontend, due to exceeding idle timeout |Microsoft.regionName, Frontend |
-|ConnectionTimeouts |Yes |Connection Timeouts |Count |Total |Count of connections closed due to timeout between clients and Traffic Controller |Microsoft.regionName, Frontend |
-|HTTPResponseStatus |Yes |HTTP Response Status |Count |Total |HTTP response status returned by Traffic Controller |Microsoft.regionName, Frontend, HttpResponseCode |
-|TotalRequests |Yes |Total Requests |Count |Total |Count of requests Traffic Controller has served |Microsoft.regionName, Frontend |
+|BackendHTTPResponseStatus |Yes |Backend HTTP Response Status |Count |Total |HTTP response status returned by the backend target to Application Gateway for Containers |Microsoft.regionName, BackendService, HttpResponseCode |
+|ClientConnectionIdleTimeouts |Yes |Total Connection Idle Timeouts |Count |Total |Count of connections closed, between client and Application Gateway for Containers frontend, due to exceeding idle timeout |Microsoft.regionName, Frontend |
+|ConnectionTimeouts |Yes |Connection Timeouts |Count |Total |Count of connections closed due to timeout between clients and Application Gateway for Containers |Microsoft.regionName, Frontend |
+|HTTPResponseStatus |Yes |HTTP Response Status |Count |Total |HTTP response status returned by Application Gateway for Containers |Microsoft.regionName, Frontend, HttpResponseCode |
+|TotalRequests |Yes |Total Requests |Count |Total |Count of requests Application Gateway for Containers has served |Microsoft.regionName, Frontend |
## Microsoft.SignalRService/SignalR <!-- Data source : naam-->
This latest update adds a new column and reorders the metrics to be alphabetical
|cache_used_percent |Yes |Cache used percentage |Percent |Maximum |Cache used percentage. Applies only to data warehouses. |No Dimensions | |connection_failed |Yes |Failed Connections : System Errors |Count |Total |Failed Connections |Error | |connection_failed_user_error |Yes |Failed Connections : User Errors |Count |Total |Failed Connections : User Errors |Error |
-|connection_successful |Yes |Successful Connections |Count |Total |Successful Connections |SslProtocol, ConnectionPolicyResult |
+|connection_successful |Yes |Successful Connections |Count |Total |Successful Connections |SslProtocol |
|cpu_limit |Yes |CPU limit |Count |Average |CPU limit. Applies to vCore-based databases. |No Dimensions | |cpu_percent |Yes |CPU percentage |Percent |Average |CPU percentage |No Dimensions | |cpu_used |Yes |CPU used |Count |Average |CPU used. Applies to vCore-based databases. |No Dimensions |
This latest update adds a new column and reorders the metrics to be alphabetical
|sessions_count |Yes |Sessions count |Count |Average |Number of active sessions. Not applicable to Synapse DW Analytics. |No Dimensions | |sessions_percent |Yes |Sessions percentage |Percent |Average |Sessions percentage. Not applicable to data warehouses. |No Dimensions | |snapshot_backup_size_bytes |Yes |Data backup storage size |Bytes |Maximum |Cumulative data backup storage size. Applies to Hyperscale databases. |No Dimensions |
-|sqlserver_process_core_percent |Yes |SQL Server process core percent |Percent |Maximum |CPU usage as a percentage of the SQL DB process. Not applicable to data warehouses. |No Dimensions |
-|sqlserver_process_memory_percent |Yes |SQL Server process memory percent |Percent |Maximum |Memory usage as a percentage of the SQL DB process. Not applicable to data warehouses. |No Dimensions |
+|sql_instance_cpu_percent |Yes |SQL instance CPU percent |Percent |Maximum |CPU usage by all user and system workloads. Not applicable to data warehouses. |No Dimensions |
+|sql_instance_memory_percent |Yes |SQL instance memory percent |Percent |Maximum |Memory usage by the database engine instance. Not applicable to data warehouses. |No Dimensions |
+|sqlserver_process_core_percent |Yes |SQL Server process core percent |Percent |Maximum |CPU usage as a percentage of the SQL DB process. Not applicable to data warehouses. (This metric is equivalent to sql_instance_cpu_percent, and will be removed in the future.) |No Dimensions |
+|sqlserver_process_memory_percent |Yes |SQL Server process memory percent |Percent |Maximum |Memory usage as a percentage of the SQL DB process. Not applicable to data warehouses. (This metric is equivalent to sql_instance_memory_percent, and will be removed in the future.) |No Dimensions |
|storage |Yes |Data space used |Bytes |Maximum |Data space used. Not applicable to data warehouses. |No Dimensions | |storage_percent |Yes |Data space used percent |Percent |Maximum |Data space used percent. Not applicable to data warehouses or hyperscale databases. |No Dimensions | |tempdb_data_size |Yes |Tempdb Data File Size Kilobytes |Count |Maximum |Space used in tempdb data files in kilobytes. Not applicable to data warehouses. |No Dimensions |
This latest update adds a new column and reorders the metrics to be alphabetical
|physical_data_read_percent |Yes |Data IO percentage |Percent |Average |Data IO percentage |No Dimensions | |sessions_count |Yes |Sessions Count |Count |Average |Number of active sessions |No Dimensions | |sessions_percent |Yes |Sessions percentage |Percent |Average |Sessions percentage |No Dimensions |
-|sqlserver_process_core_percent |Yes |SQL Server process core percent |Percent |Maximum |CPU usage as a percentage of the SQL DB process. Applies to elastic pools. |No Dimensions |
-|sqlserver_process_memory_percent |Yes |SQL Server process memory percent |Percent |Maximum |Memory usage as a percentage of the SQL DB process. Applies to elastic pools. |No Dimensions |
+|sql_instance_cpu_percent |Yes |SQL instance CPU percent |Percent |Maximum |CPU usage by all user and system workloads. Applies to elastic pools. |No Dimensions |
+|sql_instance_memory_percent |Yes |SQL instance memory percent |Percent |Maximum |Memory usage by the database engine instance. Applies to elastic pools. |No Dimensions |
+|sqlserver_process_core_percent |Yes |SQL Server process core percent |Percent |Maximum |CPU usage as a percentage of the SQL DB process. Applies to elastic pools. (This metric is equivalent to sql_instance_cpu_percent, and will be removed in the future.) |No Dimensions |
+|sqlserver_process_memory_percent |Yes |SQL Server process memory percent |Percent |Maximum |Memory usage as a percentage of the SQL DB process. Applies to elastic pools. (This metric is equivalent to sql_instance_memory_percent, and will be removed in the future.) |No Dimensions |
|storage_limit |Yes |Data max size |Bytes |Average |Data max size. Not applicable to hyperscale |No Dimensions | |storage_percent |Yes |Data space used percent |Percent |Average |Data space used percent. Not applicable to hyperscale |No Dimensions | |storage_used |Yes |Data space used |Bytes |Average |Data space used. Not applicable to hyperscale |No Dimensions |
This latest update adds a new column and reorders the metrics to be alphabetical
- [Export metrics to storage, Event Hub, or Log Analytics](../essentials/platform-logs-overview.md)
-<!--Gen Date: Mon Jul 03 2023 13:34:26 GMT+0800 (China Standard Time)-->
+<!--Gen Date: Tue Jul 18 2023 10:25:51 GMT+0300 (Israel Daylight Time)-->
azure-monitor Monitor Azure Resource https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/essentials/monitor-azure-resource.md
description: This article describes how to collect and analyze monitoring data f
Previously updated : 09/15/2021 Last updated : 07/17/2023
azure-monitor Resource Logs Categories https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/essentials/resource-logs-categories.md
Title: Supported categories for Azure Monitor resource logs
description: Understand the supported services and event schemas for Azure Monitor resource logs. Previously updated : 07/03/2023 Last updated : 07/18/2023
If you think something is missing, you can open a GitHub comment at the bottom o
|Audit |MCVP Audit Logs |Yes | |Logs |MCVP Logs |Yes |
+## Microsoft.ContainerInstance/containerGroups
+<!-- Data source : naam-->
+
+|Category|Category Display Name|Costs To Export|
+||||
+|ContainerEvent |Container events |Yes |
+|ContainerInstanceLog |Standard output logs |Yes |
+ ## Microsoft.ContainerRegistry/registries <!-- Data source : naam-->
If you think something is missing, you can open a GitHub comment at the bottom o
|Category|Category Display Name|Costs To Export| |||| |AmlOnlineEndpointConsoleLog |AmlOnlineEndpointConsoleLog |Yes |
-|AmlOnlineEndpointEventLog |AmlOnlineEndpointEventLog (preview) |Yes |
-|AmlOnlineEndpointTrafficLog |AmlOnlineEndpointTrafficLog (preview) |Yes |
+|AmlOnlineEndpointEventLog |AmlOnlineEndpointEventLog |Yes |
+|AmlOnlineEndpointTrafficLog |AmlOnlineEndpointTrafficLog |Yes |
## Microsoft.ManagedNetworkFabric/networkDevices <!-- Data source : naam-->
If you think something is missing, you can open a GitHub comment at the bottom o
|Category|Category Display Name|Costs To Export| ||||
-|TrafficControllerAccessLog |Traffic Controller Access Log |Yes |
+|TrafficControllerAccessLog |Application Gateway for Containers Access Log |Yes |
## Microsoft.SignalRService/SignalR <!-- Data source : naam-->
If you think something is missing, you can open a GitHub comment at the bottom o
* [Analyze logs from Azure storage with Log Analytics](./resource-logs.md#send-to-log-analytics-workspace)
-<!--Gen Date: Mon Jul 03 2023 13:34:26 GMT+0800 (China Standard Time)-->
+<!--Gen Date: Tue Jul 18 2023 10:25:51 GMT+0300 (Israel Daylight Time)-->
azure-monitor Resource Logs Schema https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/essentials/resource-logs-schema.md
The schema for resource logs varies depending on the resource and log category.
| Azure Automation |[Log Analytics for Azure Automation](../../automation/automation-manage-send-joblogs-log-analytics.md) | | Azure Batch |[Azure Batch logging](../../batch/batch-diagnostics.md) | | Azure Cognitive Search | [Cognitive Search monitoring data reference (schemas)](../../search/monitor-azure-cognitive-search-data-reference.md#schemas) |
-| Azure Cognitive Services | [Logging for Azure Cognitive Services](../../cognitive-services/diagnostic-logging.md) |
+| Azure Cognitive Services | [Logging for Azure Cognitive Services](../../ai-services/diagnostic-logging.md) |
| Azure Container Instances | [Logging for Azure Container Instances](../../container-instances/container-instances-log-analytics.md#log-schema) | | Azure Container Registry | [Logging for Azure Container Registry](../../container-registry/monitor-service.md) | | Azure Content Delivery Network | [Diagnostic logs for Azure Content Delivery Network](../../cdn/cdn-azure-diagnostic-logs.md) |
azure-monitor Logs Ingestion Api Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/logs-ingestion-api-overview.md
The following tables can receive data from the ingestion API.
Authentication for the Logs Ingestion API is performed at the DCE, which uses standard Azure Resource Manager authentication. A common strategy is to use an application ID and application key as described in [Tutorial: Add ingestion-time transformation to Azure Monitor Logs](tutorial-logs-ingestion-portal.md).
+### Token audience
+
+When you develop a custom client that obtains an access token from Azure AD to submit telemetry to the Logs Ingestion API in Azure Monitor, use the following table to determine the appropriate token audience string for your host environment.
+
+| Azure cloud version | Token audience value |
+| | |
+| Azure public cloud | `https://monitor.azure.com` |
+| Azure China cloud | `https://monitor.azure.cn` |
+| Azure US Government cloud | `https://monitor.azure.us` |
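For example, when testing with Azure PowerShell, you can request a token with the matching audience as in the following sketch. The public cloud audience value is taken from the table above, and `Get-AzAccessToken` is shown only as one convenient way to obtain a token.

```powershell
# Request a token whose audience matches the target cloud (Azure public cloud shown); assumes Connect-AzAccount.
$token   = Get-AzAccessToken -ResourceUrl "https://monitor.azure.com"
$headers = @{ Authorization = "Bearer $($token.Token)" }
```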
+ ## Source data The source data sent by your application is formatted in JSON and must match the structure expected by the DCR. It doesn't necessarily need to match the structure of the target table because the DCR can include a [transformation](../essentials//data-collection-transformations.md) to convert the data to match the table's structure.
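As a hedged sketch of an upload, the following example posts a small JSON array of records to a logs ingestion endpoint. The DCE URI, DCR immutable ID, stream name, column names, and API version are placeholder assumptions; the record shape must match the stream declaration in your DCR.

```powershell
# Placeholder values; replace with your DCE endpoint, DCR immutable ID, and stream name.
$dceEndpoint    = "https://my-dce-abcd.eastus-1.ingest.monitor.azure.com"
$dcrImmutableId = "dcr-00000000000000000000000000000000"
$streamName     = "Custom-MyTable_CL"

# Obtain a token for the public cloud audience (see the previous sketch); assumes Connect-AzAccount.
$token   = Get-AzAccessToken -ResourceUrl "https://monitor.azure.com"
$headers = @{ Authorization = "Bearer $($token.Token)" }

# Source data: a JSON array whose columns match what the DCR's stream declaration expects.
$records = @(
    @{ TimeGenerated = (Get-Date).ToUniversalTime().ToString("o"); Computer = "web01"; Message = "Test record 1" },
    @{ TimeGenerated = (Get-Date).ToUniversalTime().ToString("o"); Computer = "web02"; Message = "Test record 2" }
)
$body = ConvertTo-Json -InputObject $records

$uri = "$dceEndpoint/dataCollectionRules/$dcrImmutableId/streams/$($streamName)?api-version=2023-01-01"
Invoke-RestMethod -Uri $uri -Method Post -Headers $headers -Body $body -ContentType "application/json"
```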
azure-monitor Private Link Configure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/private-link-configure.md
Follow the steps in this section to review and validate your private link setup.
### Review your endpoint's DNS settings The private endpoint you created should now have five DNS zones configured:
-* `privatelink-monitor-azure-com`
-* `privatelink-oms-opinsights-azure-com`
-* `privatelink-ods-opinsights-azure-com`
-* `privatelink-agentsvc-azure-automation-net`
-* `privatelink-blob-core-windows-net`
+* `privatelink.monitor.azure.com`
+* `privatelink.oms.opinsights.azure.com`
+* `privatelink.ods.opinsights.azure.com`
+* `privatelink.agentsvc.azure-automation.net`
+* `privatelink.blob.core.windows.net`
Each of these zones maps specific Azure Monitor endpoints to private IPs from the virtual network's pool of IPs. The IP addresses shown in the following images are only examples. Your configuration should instead show private IPs from your own network.
azure-monitor Resource Manager Log Queries https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/resource-manager-log-queries.md
Title: Resource Manager template samples for log queries
description: Sample Azure Resource Manager templates to deploy Azure Monitor log queries. --++ Last updated 06/13/2022
azure-monitor Tutorial Workspace Transformations Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/tutorial-workspace-transformations-api.md
description: Describes how to add a custom transformation to data flowing throug
Previously updated : 07/01/2022 Last updated : 07/17/2023 # Tutorial: Add transformation in workspace data collection rule to Azure Monitor using Resource Manager templates
azure-monitor Tutorial Workspace Transformations Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/tutorial-workspace-transformations-portal.md
description: Describes how to add a custom transformation to data flowing throug
Previously updated : 07/01/2022 Last updated : 07/17/2023 # Tutorial: Add a transformation in a workspace data collection rule by using the Azure portal
azure-monitor Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/policy-reference.md
Title: Built-in policy definitions for Azure Monitor description: Lists Azure Policy built-in policy definitions for Azure Monitor. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 07/06/2023 Last updated : 07/18/2023
azure-monitor Vminsights Configure Workspace https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/vm/vminsights-configure-workspace.md
Title: Configure a Log Analytics workspace for VM insights
description: This article describes how to create and configure a Log Analytics workspace used by VM insights. --++ Last updated 06/22/2022
azure-monitor Vminsights Dependency Agent Maintenance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/vm/vminsights-dependency-agent-maintenance.md
Title: VM Insights Dependency Agent description: This article describes how to upgrade the VM insights Dependency agent using command-line, setup wizard, and other methods. --++ Last updated 04/16/2020
azure-monitor Vminsights Enable Hybrid https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/vm/vminsights-enable-hybrid.md
Title: Enable Azure Monitor for a hybrid environment description: This article describes how you enable VM insights for a hybrid cloud environment that contains one or more virtual machines. --++ Last updated 06/08/2022
azure-monitor Vminsights Enable Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/vm/vminsights-enable-overview.md
Title: Enable VM insights overview description: Learn how to deploy and configure VM insights and find out about the system requirements. --++ Last updated 06/24/2022
azure-monitor Vminsights Enable Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/vm/vminsights-enable-powershell.md
Title: Enable VM insights by using PowerShell
description: This article describes how to enable VM insights for Azure virtual machines or virtual machine scale sets by using Azure PowerShell. --++ Last updated 06/08/2022
azure-monitor Vminsights Enable Resource Manager https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/vm/vminsights-enable-resource-manager.md
Title: Enable VM insights using Resource Manager templates
description: This article describes how you enable VM insights for one or more Azure virtual machines or Virtual Machine Scale Sets by using Azure PowerShell or Azure Resource Manager templates. --++ Last updated 06/08/2022
azure-monitor Vminsights Log Query https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/vm/vminsights-log-query.md
Title: How to Query Logs from VM insights description: VM insights solution collects metrics and log data to and this article describes the records and includes sample queries. --++ Last updated 06/08/2022
azure-monitor Vminsights Maps https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/vm/vminsights-maps.md
Title: View app dependencies with VM insights description: This article shows how to use the VM insights Map feature. It discovers application components on Windows and Linux systems and maps the communication between services. --++ Last updated 06/08/2022
azure-monitor Vminsights Optout https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/vm/vminsights-optout.md
Title: Disable monitoring in VM insights description: This article describes how to stop monitoring your virtual machines in VM insights. --++ Last updated 06/08/2022
azure-monitor Vminsights Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/vm/vminsights-overview.md
Title: What is VM insights? description: Overview of VM insights, which monitors the health and performance of Azure VMs and automatically discovers and maps application components and their dependencies. --++ Last updated 06/08/2023
azure-monitor Vminsights Performance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/vm/vminsights-performance.md
Title: Chart performance with VM insights description: This article discusses the VM insights Performance feature that discovers application components on Windows and Linux systems and maps the communication between services. --++ Last updated 06/08/2022
azure-monitor Vminsights Troubleshoot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/vm/vminsights-troubleshoot.md
Title: Troubleshoot VM insights description: Troubleshooting information for VM insights installation. --++ Last updated 06/08/2022
azure-monitor Vminsights Workbooks https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/vm/vminsights-workbooks.md
Title: Create interactive reports with VM insights workbooks description: Simplify complex reporting with predefined and custom parameterized workbooks for VM insights. --++ Last updated 05/27/2022
azure-portal Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-portal/policy-reference.md
Title: Built-in policy definitions for Azure portal description: Lists Azure Policy built-in policy definitions for Azure portal. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 07/06/2023 Last updated : 07/18/2023
azure-resource-manager Deployment Stacks https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/deployment-stacks.md
az stack group create \
``` > [!NOTE]
-> Azure CLI doesn't have a deployment stack set command. Use the new command instead.
+> Azure CLI doesn't have a deployment stack set command. Use the New command instead.
# [Portal](#tab/azure-portal)
Currently not implemented.
-### Use the new command
+### Use the New command
You get a warning similar to the following:
azure-resource-manager Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/custom-providers/overview.md
Azure Custom Resource Providers are made by creating a contract between Azure an
## How to build custom resource providers
-Custom resource providers are a list of contracts between Azure and endpoints. This contract describes how Azure should interact with an endpoint. The resource provider acts like a proxy and will forward requests and responses to and from the specified **endpoint**. A resource provider can specify two types of contracts: [**resourceTypes**](./custom-providers-resources-endpoint-how-to.md) and [**actions**](./custom-providers-action-endpoint-how-to.md). These are enabled through endpoint definitions. An endpoint definition is comprised of three fields: **name**, **routingType**, and **endpoint**.
+Custom resource providers are a list of contracts between Azure and endpoints. These contracts describe how Azure should interact with their endpoints. The resource providers act like a proxy and forward requests and responses to and from their specified **endpoint**. A resource provider can specify two types of contracts: [**resourceTypes**](./custom-providers-resources-endpoint-how-to.md) and [**actions**](./custom-providers-action-endpoint-how-to.md). These are enabled through endpoint definitions. An endpoint definition consists of three fields: **name**, **routingType**, and **endpoint**.
Sample Endpoint:
azure-resource-manager Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/custom-providers/policy-reference.md
Title: Built-in policy definitions for Azure Custom Resource Providers description: Lists Azure Policy built-in policy definitions for Azure Custom Resource Providers. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 07/06/2023 Last updated : 07/18/2023
azure-resource-manager Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/managed-applications/policy-reference.md
Title: Built-in policy definitions for Azure Managed Applications description: Lists Azure Policy built-in policy definitions for Azure Managed Applications. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 07/06/2023 Last updated : 07/18/2023
azure-resource-manager Azure Services Resource Providers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/management/azure-services-resource-providers.md
The resources providers that are marked with **- registered** are registered by
| Microsoft.ClassicNetwork | Classic deployment model virtual network | | Microsoft.ClassicStorage | Classic deployment model storage | | Microsoft.ClassicSubscription - [registered](#registration) | Classic deployment model |
-| Microsoft.CognitiveServices | [Cognitive Services](../../cognitive-services/index.yml) |
+| Microsoft.CognitiveServices | [Cognitive Services](../../ai-services/index.yml) |
| Microsoft.Commerce - [registered](#registration) | core | | Microsoft.Compute | [Virtual Machines](../../virtual-machines/index.yml)<br />[Virtual Machine Scale Sets](../../virtual-machine-scale-sets/index.yml) | | Microsoft.Consumption - [registered](#registration) | [Cost Management](/azure/cost-management/) |
azure-resource-manager Control Plane And Data Plane https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/management/control-plane-and-data-plane.md
The control plane includes two scenarios for handling requests - "green field" a
## Data plane
-Requests for data plane operations are sent to an endpoint that's specific to your instance. For example, the [Detect Language operation](../../cognitive-services/text-analytics/how-tos/text-analytics-how-to-language-detection.md) in Cognitive Services is a data plane operation because the request URL is:
+Requests for data plane operations are sent to an endpoint that's specific to your instance. For example, the [Detect Language operation](../../ai-services/language-service/language-detection/overview.md) in Cognitive Services is a data plane operation because the request URL is:
```http POST {Endpoint}/text/analytics/v2.0/languages
azure-resource-manager Lock Resources https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/management/lock-resources.md
Applying locks can lead to unexpected results. Some operations, which don't seem
- A read-only lock on a **resource group** that contains a **virtual machine** prevents all users from starting or restarting a virtual machine. These operations require a POST method request.
+- A read-only lock on a **resource group** that contains a **virtual machine** prevents users from moving the VM out of the resource group.
+
+- A read-only lock on a **resource group** prevents users from moving any new **resource** into that resource group.
+ - A read-only lock on a **resource group** that contains an **automation account** prevents all runbooks from starting. These operations require a POST method request. - A cannot-delete lock on a **resource** or **resource group** prevents the deletion of Azure RBAC assignments.
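To try these behaviors safely, you can apply and then remove a read-only lock at resource-group scope with Azure PowerShell; the lock and resource group names below are placeholders.

```powershell
# Apply a read-only lock at resource-group scope (placeholder names); assumes Connect-AzAccount.
New-AzResourceLock -LockName "rg-read-only" -LockLevel ReadOnly -ResourceGroupName "my-rg" `
    -LockNotes "Blocks POST operations such as VM start/restart" -Force

# Remove the lock when you're done testing.
Remove-AzResourceLock -LockName "rg-read-only" -ResourceGroupName "my-rg" -Force
```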
azure-resource-manager Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/management/policy-reference.md
Title: Built-in policy definitions for Azure Resource Manager description: Lists Azure Policy built-in policy definitions for Azure Resource Manager. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 07/06/2023 Last updated : 07/18/2023
azure-resource-manager Resource Name Rules https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/management/resource-name-rules.md
In the following tables, the term alphanumeric refers to:
> | domains | resource group | 3-50 | Alphanumerics and hyphens. | > | domains / topics | domain | 3-50 | Alphanumerics and hyphens. | > | eventSubscriptions | resource group | 3-64 | Alphanumerics and hyphens. |
-> | topics | resource group | 3-50 | Alphanumerics and hyphens. |
+> | topics | region | 3-50 | Alphanumerics and hyphens. |
## Microsoft.EventHub
azure-resource-manager Common Deployment Errors https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/troubleshooting/common-deployment-errors.md
If your error code isn't listed, submit a GitHub issue. On the right side of the
| MissingSubscriptionRegistration | Register your subscription with the resource provider. | [Resolve registration](error-register-resource-provider.md) | | NoRegisteredProviderFound | Check resource provider registration status. | [Resolve registration](error-register-resource-provider.md) | | NotFound | You might be attempting to deploy a dependent resource in parallel with a parent resource. Check if you need to add a dependency. | [Resolve dependencies](error-not-found.md) |
-| OperationNotAllowed | The deployment is attempting an operation that exceeds the quota for the subscription, resource group, or region. If possible, revise your deployment to stay within the quotas. Otherwise, consider requesting a change to your quotas. | [Resolve quotas](error-resource-quota.md) |
+| OperationNotAllowed | There can be several reasons for this error message.<br><br>1. The deployment is attempting an operation that isn't allowed on the specified SKU.<br><br>2. The deployment is attempting an operation that exceeds the quota for the subscription, resource group, or region. If possible, revise your deployment to stay within the quotas. Otherwise, consider requesting a change to your quotas. | [Resolve quotas](error-resource-quota.md) |
| OperationNotAllowedOnVMImageAsVMsBeingProvisioned | You might be attempting to delete an image that is currently being used to provision VMs. You cannot delete an image that is being used by any virtual machine during the deployment process. Retry the image delete operation after the deployment of the VM is complete. | | | ParentResourceNotFound | Make sure a parent resource exists before creating the child resources. | [Resolve parent resource](error-parent-resource.md) | | PasswordTooLong | You might have selected a password with too many characters, or converted your password value to a secure string before passing it as a parameter. If the template includes a **secure string** parameter, you don't need to convert the value to a secure string. Provide the password value as text. | |
azure-signalr Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-signalr/policy-reference.md
Title: Built-in policy definitions for Azure SignalR description: Lists Azure Policy built-in policy definitions for Azure SignalR. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 07/06/2023 Last updated : 07/18/2023
azure-sql-edge Onnx Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql-edge/onnx-overview.md
To infer machine learning models in Azure SQL Edge, you will first need to get a
To obtain a model in the ONNX format: -- **Model Building Services**: Services such as the [automated Machine Learning feature in Azure Machine Learning](https://github.com/Azure/MachineLearningNotebooks/blob/master/how-to-use-azureml/automated-machine-learning/classification-bank-marketing-all-features/auto-ml-classification-bank-marketing-all-features.ipynb) and [Azure Custom Vision Service](../cognitive-services/custom-vision-service/getting-started-build-a-classifier.md) support directly exporting the trained model in the ONNX format.
+- **Model Building Services**: Services such as the [automated Machine Learning feature in Azure Machine Learning](https://github.com/Azure/MachineLearningNotebooks/blob/master/how-to-use-azureml/automated-machine-learning/classification-bank-marketing-all-features/auto-ml-classification-bank-marketing-all-features.ipynb) and [Azure Custom Vision Service](../ai-services/custom-vision-service/getting-started-build-a-classifier.md) support directly exporting the trained model in the ONNX format.
- [**Convert and/or export existing models**](https://github.com/onnx/tutorials#converting-to-onnx-format): Several training frameworks (for example, [PyTorch](https://pytorch.org/docs/stable/onnx.html), Chainer, and Caffe2) support native export functionality to ONNX, which allows you to save your trained model to a specific version of the ONNX format. For frameworks that do not support native export, there are standalone ONNX Converter installable packages that enable you to convert models trained from different machine learning frameworks to the ONNX format.
azure-video-indexer Accounts Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/accounts-overview.md
Title: Azure Video Indexer accounts
-description: This article gives an overview of Azure Video Indexer accounts and provides links to other articles for more details.
+ Title: Azure AI Video Indexer accounts
+description: This article gives an overview of Azure AI Video Indexer accounts and provides links to other articles for more details.
Last updated 01/25/2023
-# Azure Video Indexer account types
+# Azure AI Video Indexer account types
-This article gives an overview of Azure Video Indexer accounts types and provides links to other articles for more details.
+This article gives an overview of Azure AI Video Indexer accounts types and provides links to other articles for more details.
## Trial account
-When starting out with [Azure Video Indexer](https://www.videoindexer.ai/), click **start free** to kick off a quick and easy process of creating a trial account. No Azure subscription is required and this is a great way to explore Azure Video Indexer and try it out with your content. Keep in mind that the trial Azure Video Indexer account has a limitation on the number of indexing minutes, support, and SLA.
+When starting out with [Azure AI Video Indexer](https://www.videoindexer.ai/), select **start free** to quickly create a trial account. No Azure subscription is required, and this is a great way to explore Azure AI Video Indexer and try it out with your content. Keep in mind that the trial Azure AI Video Indexer account has a limitation on the number of indexing minutes, support, and SLA.
-With a trial account, Azure Video Indexer provides:
+With a trial account, Azure AI Video Indexer provides:
-* up to 600 minutes of free indexing to the [Azure Video Indexer](https://www.videoindexer.ai/) website users and
-* up to 2400 minutes of free indexing to users that subscribe to the Azure Video Indexer API on the [developer portal](https://api-portal.videoindexer.ai/).
+* up to 600 minutes of free indexing to the [Azure AI Video Indexer](https://www.videoindexer.ai/) website users and
+* up to 2400 minutes of free indexing to users that subscribe to the Azure AI Video Indexer API on the [developer portal](https://api-portal.videoindexer.ai/).
-The trial account option is not available on the Azure Government cloud. For other Azure Government limitations, see [Limitations of Azure Video Indexer on Azure Government](connect-to-azure.md#limitations-of-azure-video-indexer-on-azure-government).
+The trial account option is not available on the Azure Government cloud. For other Azure Government limitations, see [Limitations of Azure AI Video Indexer on Azure Government](connect-to-azure.md#limitations-of-azure-ai-video-indexer-on-azure-government).
## Paid (unlimited) account When you have used up the free trial minutes or are ready to start using Video Indexer for production workloads, you can create a regular paid account which doesn't have minute, support, or SLA limitations. Account creation can be performed through the Azure portal (see [Create an account with the Azure portal](create-account-portal.md)) or API (see [Create accounts with API](/rest/api/videoindexer/stable/accounts)).
-Azure Video Indexer unlimited accounts are Azure Resource Manager (ARM) based and unlike trial accounts, are created in your Azure subscription. Moving to an unlimited ARM based account unlocks many security and management capabilities, such as [RBAC user management](../role-based-access-control/overview.md), [Azure Monitor integration](../azure-monitor/overview.md), deployment through ARM templates, and much more.
+Azure AI Video Indexer unlimited accounts are Azure Resource Manager (ARM) based and unlike trial accounts, are created in your Azure subscription. Moving to an unlimited ARM based account unlocks many security and management capabilities, such as [RBAC user management](../role-based-access-control/overview.md), [Azure Monitor integration](../azure-monitor/overview.md), deployment through ARM templates, and much more.
-Billing is per indexed minute, with the per minute cost determined by the selected preset. For more information regarding pricing, see [Azure Video Indexer pricing](https://azure.microsoft.com/pricing/details/video-indexer/).
+Billing is per indexed minute, with the per minute cost determined by the selected preset. For more information regarding pricing, see [Azure AI Video Indexer pricing](https://azure.microsoft.com/pricing/details/video-indexer/).
## Create accounts
Billing is per indexed minute, with the per minute cost determined by the select
* To create an account with an API, see [Create accounts](/rest/api/videoindexer/stable/accounts)

> [!TIP]
- > Make sure you are signed in with the correct domain to the [Azure Video Indexer website](https://www.videoindexer.ai/). For details, see [Switch tenants](switch-tenants-portal.md).
+ > Make sure you are signed in with the correct domain to the [Azure AI Video Indexer website](https://www.videoindexer.ai/). For details, see [Switch tenants](switch-tenants-portal.md).
* [Upgrade a trial account to an ARM-based (paid) account and import your content for free](import-content-from-trial.md).

## Classic accounts
-Before ARM based accounts were added to Azure Video Indexer, there was a "classic" account type (where the accounts management plane is built on API Management.) The classic account type is still used by some users.
+Before ARM based accounts were added to Azure AI Video Indexer, there was a "classic" account type (where the accounts management plane is built on API Management.) The classic account type is still used by some users.
-* If you are using a classic (paid) account and interested in moving to an ARM-based account, see [connect an existing classic Azure Video Indexer account to an ARM-based account](connect-classic-account-to-arm.md).
+* If you are using a classic (paid) account and interested in moving to an ARM-based account, see [connect an existing classic Azure AI Video Indexer account to an ARM-based account](connect-classic-account-to-arm.md).
-For more information on the difference between regular unlimited accounts and classic accounts, see [Azure Video Indexer as an Azure resource](https://techcommunity.microsoft.com/t5/ai-applied-ai-blog/azure-video-indexer-is-now-available-as-an-azure-resource/ba-p/2912422).
+For more information on the difference between regular unlimited accounts and classic accounts, see [Azure AI Video Indexer as an Azure resource](https://techcommunity.microsoft.com/t5/ai-applied-ai-blog/azure-video-indexer-is-now-available-as-an-azure-resource/ba-p/2912422).
## Limited access features

[!INCLUDE [limited access](./includes/limited-access-account-types.md)]
-For more information, see [Azure Video Indexer limited access features](limited-access-features.md).
+For more information, see [Azure AI Video Indexer limited access features](limited-access-features.md).
## Next steps
azure-video-indexer Add Contributor Role On The Media Service https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/add-contributor-role-on-the-media-service.md
This article describes how to assign contributor role on the Media Services account.

> [!NOTE]
-> If you are creating your Azure Video Indexer through the Azure portal UI, the selected Managed identity will be automatically assigned with a contributor permission on the selected Media Service account.
+> If you are creating your Azure AI Video Indexer through the Azure portal UI, the selected Managed identity will be automatically assigned with a contributor permission on the selected Media Service account.
## Prerequisites
This article describes how to assign contributor role on the Media Services acco
2. User-assigned managed identity

> [!NOTE]
-> You need an Azure subscription with access to both the [Contributor][docs-role-contributor] role and the [User Access Administrator][docs-role-administrator] role to the Azure Media Services and the User-assigned managed identity. If you don't have the right permissions, ask your account administrator to grant you those permissions. The associated Azure Media Services must be in the same region as the Azure Video Indexer account.
+> You need an Azure subscription with access to both the [Contributor][docs-role-contributor] role and the [User Access Administrator][docs-role-administrator] role to the Azure Media Services and the User-assigned managed identity. If you don't have the right permissions, ask your account administrator to grant you those permissions. The associated Azure Media Services must be in the same region as the Azure AI Video Indexer account.
## Add Contributor role on the Media Services

### [Azure portal](#tab/portal/)
azure-video-indexer Audio Effects Detection Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/audio-effects-detection-overview.md
Title: Introduction to Azure Video Indexer audio effects detection-
-description: An introduction to Azure Video Indexer audio effects detection component responsibly.
+ Title: Introduction to Azure AI Video Indexer audio effects detection
+
+description: An introduction to Azure AI Video Indexer audio effects detection component responsibly.
# Audio effects detection
-Audio effects detection is an Azure Video Indexer feature that detects insights on various acoustic events and classifies them into acoustic categories. Audio effect detection can detect and classify different categories such as laughter, crowd reactions, alarms and/or sirens.
+Audio effects detection is an Azure AI Video Indexer feature that detects insights on various acoustic events and classifies them into acoustic categories. Audio effect detection can detect and classify different categories such as laughter, crowd reactions, alarms and/or sirens.
When working on the website, the instances are displayed in the Insights tab. They can also be generated in a categorized list in a JSON file that includes the category ID, type, name, and instances per category together with the specific timeframes and confidence score.
To display the JSON file, do the following:
], ```
-To download the JSON file via the API, use the [Azure Video Indexer developer portal](https://api-portal.videoindexer.ai/).
+To download the JSON file via the API, use the [Azure AI Video Indexer developer portal](https://api-portal.videoindexer.ai/).
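
For a rough idea of the API route, here's a minimal Python sketch (using the `requests` library) that fetches a video's insights JSON. The endpoint path and the shape of the returned JSON are assumptions based on the common Video Indexer API layout; confirm them in the developer portal before relying on them.

```python
import requests

# Hypothetical placeholders -- replace with your own values.
location = "trial"                       # or your Azure region
account_id = "<your-account-id>"
video_id = "<your-video-id>"
access_token = "<your-account-access-token>"

# Assumed endpoint shape for retrieving a video's insights JSON.
url = f"https://api.videoindexer.ai/{location}/Accounts/{account_id}/Videos/{video_id}/Index"

response = requests.get(url, params={"accessToken": access_token})
response.raise_for_status()
index = response.json()

# Assumed structure: audio effects appear as one of the insight types.
print(index["videos"][0]["insights"].get("audioEffects"))
```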
## Audio effects detection components
During the audio effects detection procedure, audio in a media file is processed
- A group of people laughing might be classified as both laughter and crowd.
- Natural and nonsynthetic gunshot and explosion sounds are supported.
-When used responsibly and carefully, Azure Video Indexer is a valuable tool for many industries. To respect the privacy and safety of others, and to comply with local and global regulations, we recommend the following:  
+When used responsibly and carefully, Azure AI Video Indexer is a valuable tool for many industries. To respect the privacy and safety of others, and to comply with local and global regulations, we recommend the following:  
- Always respect an individual’s right to privacy, and only ingest audio for lawful and justifiable purposes.
- Don't purposely disclose inappropriate audio of young children or family members of celebrities or other content that may be detrimental or pose a threat to an individual’s personal freedom.
When used responsibly and carefully, Azure Video Indexer is a valuable tool for
`visupport@microsoft.com`
-## Azure Video Indexer insights
+## Azure AI Video Indexer insights
- [Face detection](face-detection.md)
- [OCR](ocr.md)
azure-video-indexer Audio Effects Detection https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/audio-effects-detection.md
Title: Enable audio effects detection
-description: Audio Effects Detection is one of Azure Video Indexer AI capabilities that detects various acoustics events and classifies them into different acoustic categories (for example, gunshot, screaming, crowd reaction and more).
+description: Audio Effects Detection is one of the Azure AI Video Indexer AI capabilities that detects various acoustic events and classifies them into different acoustic categories (for example, gunshot, screaming, crowd reaction and more).
Last updated 05/24/2023
# Enable audio effects detection (preview)
-**Audio effects detection** is one of Azure Video Indexer AI capabilities that detects various acoustics events and classifies them into different acoustic categories (such as dog barking, crowd reactions, laugher and more).
+**Audio effects detection** is one of the Azure AI Video Indexer AI capabilities that detects various acoustic events and classifies them into different acoustic categories (such as dog barking, crowd reactions, laughter and more).
Some scenarios where this feature is useful:
Audio Effects in closed captions file is retrieved with the following logic empl
## Adding audio effects in closed caption files
-Audio effects can be added to the closed captions files supported by Azure Video Indexer via the [Get video captions API](https://api-portal.videoindexer.ai/api-details#api=Operations&operation=Get-Video-Captions) by choosing true in the `includeAudioEffects` parameter or via the video.ai website experience by selecting **Download** -> **Closed Captions** -> **Include Audio Effects**.
+Audio effects can be added to the closed captions files supported by Azure AI Video Indexer via the [Get video captions API](https://api-portal.videoindexer.ai/api-details#api=Operations&operation=Get-Video-Captions) by choosing true in the `includeAudioEffects` parameter or via the video.ai website experience by selecting **Download** -> **Closed Captions** -> **Include Audio Effects**.
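
As an illustration, here's a minimal Python sketch (using `requests`) of calling the Get video captions API with `includeAudioEffects` set to true. The account ID, video ID, location, and access token are placeholders, and the exact endpoint path is an assumption based on the common Video Indexer API layout; treat the operation page in the API portal as authoritative.

```python
import requests

# All values below are hypothetical placeholders -- substitute your own.
location = "trial"                 # or your Azure region, e.g. "eastus"
account_id = "<your-account-id>"
video_id = "<your-video-id>"
access_token = "<your-account-access-token>"

# Assumed endpoint shape for the Get video captions operation.
url = f"https://api.videoindexer.ai/{location}/Accounts/{account_id}/Videos/{video_id}/Captions"

params = {
    "format": "Vtt",               # captions format
    "includeAudioEffects": "true", # adds detected audio effects to the captions
    "accessToken": access_token,
}

response = requests.get(url, params=params)
response.raise_for_status()
print(response.text)  # the captions file content, with audio effects included
```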
> [!div class="mx-imgBorder"]
> :::image type="content" source="./media/audio-effects-detection/close-caption.jpg" alt-text="Audio Effects in CC":::
azure-video-indexer Clapperboard Metadata https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/clapperboard-metadata.md
A clapperboard insight is used to detect clapperboard instances and information written on each. For example, *head* or *tail* (the board is upside-down), *production*, *roll*, *scene*, *take*, *date*, etc. The [clapperboard](https://en.wikipedia.org/wiki/Clapperboard)'s extracted metadata is most useful to customers involved in the movie post-production process.
-When the movie is being edited, a clapperboard is removed from the scene; however, the information that was written on the clapperboard is important. Azure Video Indexer extracts the data from clapperboards, preserves, and presents the metadata.
+When the movie is being edited, a clapperboard is removed from the scene; however, the information that was written on the clapperboard is important. Azure AI Video Indexer extracts the data from clapperboards, preserves, and presents the metadata.
This article shows how to enable the post-production insight and view clapperboard instances with extracted metadata.
azure-video-indexer Concepts Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/concepts-overview.md
Title: Azure Video Indexer terminology & concepts overview
-description: This article gives a brief overview of Azure Video Indexer terminology and concepts.
+ Title: Azure AI Video Indexer terminology & concepts overview
+description: This article gives a brief overview of Azure AI Video Indexer terminology and concepts.
Last updated 12/01/2022
-# Azure Video Indexer terminology & concepts
+# Azure AI Video Indexer terminology & concepts
-This article gives a brief overview of Azure Video Indexer terminology and concepts. Also, review [transparency note overview](/legal/azure-video-indexer/transparency-note?context=/azure/azure-video-indexer/context/context)
+This article gives a brief overview of Azure AI Video Indexer terminology and concepts. Also, review [transparency note overview](/legal/azure-video-indexer/transparency-note?context=/azure/azure-video-indexer/context/context)
## Artifact files
Use textual and visual content moderation models to keep your users safe from in
## Insights
-Insights contain an aggregated view of the data: faces, topics, emotions. Azure Video Indexer analyzes the video and audio content by running 30+ AI models, generating rich insights.
+Insights contain an aggregated view of the data: faces, topics, emotions. Azure AI Video Indexer analyzes the video and audio content by running 30+ AI models, generating rich insights.
-For detailed explanation of insights, see [Azure Video Indexer insights](insights-overview.md).
+For detailed explanation of insights, see [Azure AI Video Indexer insights](insights-overview.md).
## Keyframes
-Azure Video Indexer selects the frame(s) that best represent each shot. Keyframes are the representative frames selected from the entire video based on aesthetic properties (for example, contrast and stableness). For more information, see [Scenes, shots, and keyframes](scenes-shots-keyframes.md).
+Azure AI Video Indexer selects the frame(s) that best represent each shot. Keyframes are the representative frames selected from the entire video based on aesthetic properties (for example, contrast and stableness). For more information, see [Scenes, shots, and keyframes](scenes-shots-keyframes.md).
## Time range vs. adjusted time range
Time range is the time period in the original video. Adjusted time range is the
## Widgets
-Azure Video Indexer supports embedding widgets in your apps. For more information, see [Embed Azure Video Indexer widgets in your apps](video-indexer-embed-widgets.md).
+Azure AI Video Indexer supports embedding widgets in your apps. For more information, see [Embed Azure AI Video Indexer widgets in your apps](video-indexer-embed-widgets.md).
## Next steps
azure-video-indexer Connect Classic Account To Arm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/connect-classic-account-to-arm.md
Title: Connect a classic Azure Video Indexer account to ARM
-description: This topic explains how to connect an existing classic paid Azure Video Indexer account to an ARM-based account
+ Title: Connect a classic Azure AI Video Indexer account to ARM
+description: This topic explains how to connect an existing classic paid Azure AI Video Indexer account to an ARM-based account
Last updated 03/20/2023
-# Connect an existing classic paid Azure Video Indexer account to ARM-based account
+# Connect an existing classic paid Azure AI Video Indexer account to ARM-based account
-This article shows how to connect an existing classic paid Azure Video Indexer account to an Azure Resource Manager (ARM)-based (recommended) account. To create a new ARM-based account, see [create a new account](create-account-portal.md). To understand the Azure Video Indexer account types, review [account types](accounts-overview.md).
+This article shows how to connect an existing classic paid Azure AI Video Indexer account to an Azure Resource Manager (ARM)-based (recommended) account. To create a new ARM-based account, see [create a new account](create-account-portal.md). To understand the Azure AI Video Indexer account types, review [account types](accounts-overview.md).
-In this article, we demonstrate options of connecting your **existing** Azure Video Indexer account to an [ARM][docs-arm-overview]-based account. You can also view the following video.
+In this article, we demonstrate options of connecting your **existing** Azure AI Video Indexer account to an [ARM][docs-arm-overview]-based account. You can also view the following video.
> [!VIDEO https://www.microsoft.com/videoplayer/embed/RW10iby]

## Prerequisites
-1. Unlimited paid Azure Video Indexer account (classic account).
+1. Unlimited paid Azure AI Video Indexer account (classic account).
- 1. To perform the connect to the ARM (Azure Resource Manager) action, you should have owner's permissions on the Azure Video Indexer classic account.
+ 1. To perform the connect to the ARM (Azure Resource Manager) action, you should have owner's permissions on the Azure AI Video Indexer classic account.
1. Azure Subscription with Owner permissions or Contributor with Administrator Role assignment.
- 1. Same level of permission for the Azure Media Service associated with the existing Azure Video Indexer Classic account.
+ 1. Same level of permission for the Azure Media Service associated with the existing Azure AI Video Indexer Classic account.
1. User assigned managed identity (can be created along the flow).

## Transition state
Connecting a classic account to be ARM-based triggers a 30 days of a transition
The transition state moves all account management functionality to be managed by ARM and will be handled by [Azure RBAC][docs-rbac-overview].
-The [invite users](restricted-viewer-role.md#share-the-account) feature in the [Azure Video Indexer website](https://www.videoindexer.ai/) gets disabled. The invited users on this account lose their access to the Azure Video Indexer account Media in the portal.
+The [invite users](restricted-viewer-role.md#share-the-account) feature in the [Azure AI Video Indexer website](https://www.videoindexer.ai/) gets disabled. The invited users on this account lose their access to the Azure AI Video Indexer account Media in the portal.
However, this can be resolved by assigning the right role-assignment to these users through Azure RBAC, see [How to assign RBAC][docs-rbac-assignment]. Only the account owner, who performed the connect action, is automatically assigned as the owner on the connected account. When [Azure policies][docs-governance-policy] are enforced, they override the settings on the account.
-If users are not added through Azure RBAC to the account after 30 days, they will lose access through API as well as the [Azure Video Indexer website](https://www.videoindexer.ai/).
+If users are not added through Azure RBAC to the account after 30 days, they will lose access through API as well as the [Azure AI Video Indexer website](https://www.videoindexer.ai/).
After the transition state ends, users will only be able to generate a valid access token through ARM, making Azure RBAC the exclusive way to manage role-based access control on the account.

> [!NOTE]
> If there are invited users you wish to remove access from, do it before connecting the account to ARM.
-Before the end of the 30 days of transition state, you can remove access from users through the [Azure Video Indexer website](https://www.videoindexer.ai/) account settings page.
+Before the end of the 30 days of transition state, you can remove access from users through the [Azure AI Video Indexer website](https://www.videoindexer.ai/) account settings page.
## Get started
-### Browse to the [Azure Video Indexer website](https://aka.ms/vi-portal-link)
+### Browse to the [Azure AI Video Indexer website](https://aka.ms/vi-portal-link)
1. Sign in using your Azure AD account.
1. On the top right bar press *User account* to open the side pane account list.
-1. Select the Azure Video Indexer classic account you wish to connect to ARM (classic accounts will be tagged with a *classic tag*).
+1. Select the Azure AI Video Indexer classic account you wish to connect to ARM (classic accounts will be tagged with a *classic tag*).
1. Click **Settings**.
- :::image type="content" alt-text="Screenshot that shows the Azure Video Indexer website settings." source="./media/connect-classic-account-to-arm/classic-account-settings.png":::
+ :::image type="content" alt-text="Screenshot that shows the Azure AI Video Indexer website settings." source="./media/connect-classic-account-to-arm/classic-account-settings.png":::
1. Click **Connect to an ARM-based account**.

   :::image type="content" alt-text="Screenshot that shows the connect to an ARM-based account dialog." source="./media/connect-classic-account-to-arm/connect-classic-to-arm.png":::
1. Sign in to the Azure portal.
-1. The Azure Video Indexer create blade will open.
-1. In the **Create Azure Video Indexer account** section enter required values.
+1. The Azure AI Video Indexer create blade will open.
+1. In the **Create Azure AI Video Indexer account** section enter required values.
If you followed the steps, the fields should be auto-populated; make sure to validate the eligible values.
- :::image type="content" alt-text="Screenshot that shows the create Azure Video Indexer account dialog." source="./media/connect-classic-account-to-arm/connect-blade.png":::
+ :::image type="content" alt-text="Screenshot that shows the create Azure AI Video Indexer account dialog." source="./media/connect-classic-account-to-arm/connect-blade.png":::
Here are the descriptions for the resource fields:
Before the end of the 30 days of transition state, you can remove access from us
| ||
|**Subscription**| The subscription currently contains the classic account and other related resources such as the Media Services.|
|**Resource Group**|Select an existing resource or create a new one. The resource group must be in the same location as the classic account being connected.|
- |**Azure Video Indexer account** (radio button)| Select the *"Connecting an existing classic account"*.|
- |**Existing account ID**|Select an existing Azure Video Indexer account from the dropdown.|
- |**Resource name**|Enter the name of the new Azure Video Indexer account. Default value would be the same name the account had as classic.|
+ |**Azure AI Video Indexer account** (radio button)| Select the *"Connecting an existing classic account"*.|
+ |**Existing account ID**|Select an existing Azure AI Video Indexer account from the dropdown.|
+ |**Resource name**|Enter the name of the new Azure AI Video Indexer account. Default value would be the same name the account had as classic.|
|**Location**|The geographic region can't be changed in the connect process; the connected account must stay in the same region.|
|**Media Services account name**|The original Media Services account name that was associated with the classic account.|
- |**User-assigned managed identity**|Select a user-assigned managed identity, or create a new one. Azure Video Indexer account will use it to access the Media services. The user-assignment managed identity will be assigned the roles of Contributor for the Media Service account.|
+ |**User-assigned managed identity**|Select a user-assigned managed identity, or create a new one. Azure AI Video Indexer account will use it to access the Media services. The user-assignment managed identity will be assigned the roles of Contributor for the Media Service account.|
1. Click **Review + create** at the bottom of the form.

## After connecting to ARM is complete
-After successfully connecting your account to ARM, it is recommended to make sure your account management APIs are replaced with [Azure Video Indexer REST API](/rest/api/videoindexer/preview/accounts).
+After successfully connecting your account to ARM, it is recommended to make sure your account management APIs are replaced with [Azure AI Video Indexer REST API](/rest/api/videoindexer/preview/accounts).
As mentioned in the beginning of this article, during the 30 days of the transition state, "[Get-access-token](https://api-portal.videoindexer.ai/api-details#api=Operations&operation=Get-Account-Access-Token)" will be supported side by side with the ARM-based "[Generate-Access token](/rest/api/videoindexer/preview/generate/access-token)". Make sure to change to the new "Generate-Access token" by updating all your solutions that use the API.
APIs to be changed:
- Get accounts – lists all accounts in a region.
- Create paid account – would create a classic account.
-For a full description of [Azure Video Indexer REST API](/rest/api/videoindexer/preview/accounts) calls and documentation, follow the link.
+For a full description of [Azure AI Video Indexer REST API](/rest/api/videoindexer/preview/accounts) calls and documentation, follow the link.
For code sample generating an access token through ARM see [C# code sample](https://github.com/Azure-Samples/media-services-video-indexer/blob/master/API-Samples/C%23/ArmBased/Program.cs).
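
For orientation only, here's a minimal Python sketch (using `requests`) of generating an access token through ARM. The resource provider path, API version, request body fields, and response field shown here are assumptions; the ARM-based Generate-Access token reference linked above is the authoritative source.

```python
import requests

# Hypothetical placeholders -- replace with your own values.
subscription_id = "<subscription-id>"
resource_group = "<resource-group>"
vi_account_name = "<video-indexer-account-name>"
arm_bearer_token = "<azure-ad-token-for-management.azure.com>"

# Assumed ARM action path and API version for the Generate Access Token operation.
url = (
    "https://management.azure.com"
    f"/subscriptions/{subscription_id}/resourceGroups/{resource_group}"
    f"/providers/Microsoft.VideoIndexer/accounts/{vi_account_name}"
    "/generateAccessToken?api-version=2022-08-01"
)

# Assumed body fields: the permission level and scope of the requested token.
body = {"permissionType": "Contributor", "scope": "Account"}

response = requests.post(
    url,
    json=body,
    headers={"Authorization": f"Bearer {arm_bearer_token}"},
)
response.raise_for_status()
vi_access_token = response.json()["accessToken"]  # assumed response field
```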
azure-video-indexer Connect To Azure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/connect-to-azure.md
Title: Create a classic Azure Video Indexer account connected to Azure
-description: Learn how to create a classic Azure Video Indexer account connected to Azure.
+ Title: Create a classic Azure AI Video Indexer account connected to Azure
+description: Learn how to create a classic Azure AI Video Indexer account connected to Azure.
Last updated 08/24/2022
-# Create a classic Azure Video Indexer account
+# Create a classic Azure AI Video Indexer account
[!INCLUDE [Gate notice](./includes/face-limited-access.md)]
-This topic shows how to create a new classic account connected to Azure using the [Azure Video Indexer website](https://aka.ms/vi-portal-link). You can also create an Azure Video Indexer classic account through our [API](https://aka.ms/avam-dev-portal).
+This topic shows how to create a new classic account connected to Azure using the [Azure AI Video Indexer website](https://aka.ms/vi-portal-link). You can also create an Azure AI Video Indexer classic account through our [API](https://aka.ms/avam-dev-portal).
The topic discusses prerequisites that you need to connect to your Azure subscription and how to configure an Azure Media Services account.
-A few Azure Video Indexer account types are available to you. For detailed explanation, review [Account types](accounts-overview.md).
+A few Azure AI Video Indexer account types are available to you. For detailed explanation, review [Account types](accounts-overview.md).
For the pricing details, see [pricing](https://azure.microsoft.com/pricing/details/video-indexer/).
For the pricing details, see [pricing](https://azure.microsoft.com/pricing/detai
* An Azure Active Directory (Azure AD) domain. If you don't have an Azure AD domain, create this domain with your Azure subscription. For more information, see [Managing custom domain names in your Azure AD](../active-directory/enterprise-users/domains-manage.md)
-* A user in your Azure AD domain with an **Application administrator** role. You'll use this member when connecting your Azure Video Indexer account to Azure.
+* A user in your Azure AD domain with an **Application administrator** role. You'll use this member when connecting your Azure AI Video Indexer account to Azure.
This user should be an Azure AD user with a work or school account. Don't use a personal account, such as outlook.com, live.com, or hotmail.com.

:::image type="content" alt-text="Screenshot that shows how to choose a user in your Azure A D domain." source="./media/create-account/all-aad-users.png":::

* A user and member in your Azure AD domain.
- You'll use this member when connecting your Azure Video Indexer account to Azure.
+ You'll use this member when connecting your Azure AI Video Indexer account to Azure.
This user should be a member in your Azure subscription with either an **Owner** role, or both **Contributor** and **User Access Administrator** roles. A user can be added twice, with two roles: once with Contributor and once with User Access Administrator. For more information, see [View the access a user has to Azure resources](../role-based-access-control/check-access.md).
For the pricing details, see [pricing](https://azure.microsoft.com/pricing/detai
It's strongly recommended to have the following three accounts located in the same region:
-* The Azure Video Indexer account that you're creating.
-* The Azure Video Indexer account that you're connecting with the Media Services account.
+* The Azure AI Video Indexer account that you're creating.
+* The Azure AI Video Indexer account that you're connecting with the Media Services account.
* The Azure storage account connected to the same Media Services account.
- When you create an Azure Video Indexer account and connect it to Media Services, the media and metadata files are stored in the Azure storage account associated with that Media Services account.
+ When you create an Azure AI Video Indexer account and connect it to Media Services, the media and metadata files are stored in the Azure storage account associated with that Media Services account.
If your storage account is behind a firewall, see [storage account that is behind a firewall](faq.yml#can-a-storage-account-connected-to-the-media-services-account-be-behind-a-firewall).
If your storage account is behind a firewall, see [storage account that is behin
> [!NOTE]
> Make sure to write down the Media Services resource and account names.
-1. Before you can play your videos in the [Azure Video Indexer](https://www.videoindexer.ai/) website, you must start the default **Streaming Endpoint** of the new Media Services account.
+1. Before you can play your videos in the [Azure AI Video Indexer](https://www.videoindexer.ai/) website, you must start the default **Streaming Endpoint** of the new Media Services account.
In the new Media Services account, select **Streaming endpoints**. Then select the streaming endpoint and press start.

:::image type="content" alt-text="Screenshot that shows how to specify streaming endpoints." source="./media/create-account/create-ams-account-se.png":::
-1. For Azure Video Indexer to authenticate with Media Services API, an AD app needs to be created. The following steps guide you through the Azure AD authentication process described in [Get started with Azure AD authentication by using the Azure portal](/azure/media-services/previous/media-services-portal-get-started-with-aad):
+1. For Azure AI Video Indexer to authenticate with Media Services API, an AD app needs to be created. The following steps guide you through the Azure AD authentication process described in [Get started with Azure AD authentication by using the Azure portal](/azure/media-services/previous/media-services-portal-get-started-with-aad):
1. In the new Media Services account, select **API access**.
2. Select [Service principal authentication method](/azure/media-services/previous/media-services-portal-get-started-with-aad).
If your storage account is behind a firewall, see [storage account that is behin
After you select **Settings**->**Keys**, add **Description**, press **Save**, and the key value gets populated.
- If the key expires, the account owner will have to contact Azure Video Indexer support to renew the key.
+ If the key expires, the account owner will have to contact Azure AI Video Indexer support to renew the key.
> [!NOTE]
> Make sure to write down the key value and the Application ID. You'll need them for the steps in the next section.
If your storage account is behind a firewall, see [storage account that is behin
The following Azure Media Services related considerations apply:
-* If you connect to a new Media Services account, Azure Video Indexer automatically starts the default **Streaming Endpoint** in it:
+* If you connect to a new Media Services account, Azure AI Video Indexer automatically starts the default **Streaming Endpoint** in it:
![Media Services streaming endpoint](./media/create-account/ams-streaming-endpoint.png)
- Streaming endpoints have a considerable startup time. Therefore, it may take several minutes from the time you connected your account to Azure until your videos can be streamed and watched in the [Azure Video Indexer](https://www.videoindexer.ai/) website.
-* If you connect to an existing Media Services account, Azure Video Indexer doesn't change the default Streaming Endpoint configuration. If there's no running **Streaming Endpoint**, you can't watch videos from this Media Services account or in Azure Video Indexer.
+ Streaming endpoints have a considerable startup time. Therefore, it may take several minutes from the time you connected your account to Azure until your videos can be streamed and watched in the [Azure AI Video Indexer](https://www.videoindexer.ai/) website.
+* If you connect to an existing Media Services account, Azure AI Video Indexer doesn't change the default Streaming Endpoint configuration. If there's no running **Streaming Endpoint**, you can't watch videos from this Media Services account or in Azure AI Video Indexer.
## Create a classic account
-1. On the [Azure Video Indexer website](https://aka.ms/vi-portal-link), select **Create unlimited account** (the paid account).
+1. On the [Azure AI Video Indexer website](https://aka.ms/vi-portal-link), select **Create unlimited account** (the paid account).
2. To create a classic account, select **Switch to manual configuration**.

In the dialog, provide the following information:

|Setting|Description|
|---|---|
-|Azure Video Indexer account region|The name of the Azure Video Indexer account region. For better performance and lower costs, it's highly recommended to specify the name of the region where the Azure Media Services resource and Azure Storage account are located. |
+|Azure AI Video Indexer account region|The name of the Azure AI Video Indexer account region. For better performance and lower costs, it's highly recommended to specify the name of the region where the Azure Media Services resource and Azure Storage account are located. |
|Azure AD tenant|The name of the Azure AD tenant, for example "contoso.onmicrosoft.com". The tenant information can be retrieved from the Azure portal. Place your cursor over the name of the signed-in user in the top-right corner. Find the name to the right of **Domain**.|
|Subscription ID|The Azure subscription under which this connection should be created. The subscription ID can be retrieved from the Azure portal. Select **All services** in the left panel, and search for "subscriptions". Select **Subscriptions** and choose the desired ID from the list of your subscriptions.|
|Azure Media Services resource group name|The name for the resource group in which you created the Media Services account.|
In the dialog, provide the following information:
See [Import your content from the trial account](import-content-from-trial.md).
-## Automate creation of the Azure Video Indexer account
+## Automate creation of the Azure AI Video Indexer account
To automate the creation of the account is a two steps process:
To automate the creation of the account is a two steps process:
See an example of the [Media Services account creation template](https://github.com/Azure-Samples/media-services-v3-arm-templates).

1. Call [Create-Account with the Media Services and Azure AD application](https://videoindexer.ai.azure.us/account/login?source=apim).
-## Azure Video Indexer in Azure Government
+## Azure AI Video Indexer in Azure Government
### Prerequisites for connecting to Azure Government
To automate the creation of the account is a two steps process:
### Create new account via the Azure Government portal

> [!NOTE]
-> The Azure Government cloud does not include a *trial* experience of Azure Video Indexer.
+> The Azure Government cloud does not include a *trial* experience of Azure AI Video Indexer.
-To create a paid account via the Azure Video Indexer website:
+To create a paid account via the Azure AI Video Indexer website:
1. Go to https://videoindexer.ai.azure.us
1. Sign in with your Azure Government Azure AD account.
-1.If you don't have any Azure Video Indexer accounts in Azure Government that you're an owner or a contributor to, you'll get an empty experience from which you can start creating your account.
+1. If you don't have any Azure AI Video Indexer accounts in Azure Government that you're an owner or a contributor to, you'll get an empty experience from which you can start creating your account.
- The rest of the flow is as described in above, only the regions to select from will be Government regions in which Azure Video Indexer is available
+ The rest of the flow is as described above, only the regions to select from will be Government regions in which Azure AI Video Indexer is available.
- If you already are a contributor or an admin of an existing one or more Azure Video Indexer accounts in Azure Government, you'll be taken to that account and from there you can start a following steps for creating an additional account if needed, as described above.
+ If you're already a contributor or an admin of one or more existing Azure AI Video Indexer accounts in Azure Government, you'll be taken to that account. From there, you can follow the steps for creating an additional account if needed, as described above.
### Create new account via the API on Azure Government

To create a paid account in Azure Government, follow the instructions in [Create-Paid-Account](https://api-portal.videoindexer.ai/api-details#api=Operations&operation=Create-Paid-Account). This API endpoint only includes Government cloud regions.
-### Limitations of Azure Video Indexer on Azure Government
+### Limitations of Azure AI Video Indexer on Azure Government
* Only paid accounts (ARM or classic) are available on Azure Government.
* No manual content moderation available in Azure Government.
To create a paid account in Azure Government, follow the instructions in [Create
After you're done with this tutorial, delete resources that you aren't planning to use.
-### Delete an Azure Video Indexer account
+### Delete an Azure AI Video Indexer account
-If you want to delete an Azure Video Indexer account, you can delete the account from the Azure Video Indexer website. To delete the account, you must be the owner.
+If you want to delete an Azure AI Video Indexer account, you can delete the account from the Azure AI Video Indexer website. To delete the account, you must be the owner.
Select the account -> **Settings** -> **Delete this account**.
The account will be permanently deleted in 90 days.
## Next steps
-You can programmatically interact with your trial account and/or with your Azure Video Indexer accounts that are connected to Azure by following the instructions in: [Use APIs](video-indexer-use-apis.md).
+You can programmatically interact with your trial account and/or with your Azure AI Video Indexer accounts that are connected to Azure by following the instructions in: [Use APIs](video-indexer-use-apis.md).
azure-video-indexer Considerations When Use At Scale https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/considerations-when-use-at-scale.md
Title: Things to consider when using Azure Video Indexer at scale - Azure
-description: This topic explains what things to consider when using Azure Video Indexer at scale.
+ Title: Things to consider when using Azure AI Video Indexer at scale - Azure
+description: This topic explains what things to consider when using Azure AI Video Indexer at scale.
Last updated 07/03/2023
-# Things to consider when using Azure Video Indexer at scale
+# Things to consider when using Azure AI Video Indexer at scale
-When using Azure Video Indexer to index videos and your archive of videos is growing, consider scaling.
+When you use Azure AI Video Indexer to index videos and your archive of videos keeps growing, consider scaling.
This article answers questions like:
This article answers questions like:
* Is there a smart and efficient way of doing it?
* Can I prevent spending excess money in the process?
-The article provides six best practices of how to use Azure Video Indexer at scale.
+The article provides six best practices of how to use Azure AI Video Indexer at scale.
## When uploading videos consider using a URL over byte array
-Azure Video Indexer does give you the choice to upload videos from URL or directly by sending the file as a byte array, the latter comes with some constraints. For more information, see [uploading considerations and limitations)](upload-index-videos.md)
+Azure AI Video Indexer gives you the choice to upload videos from a URL or directly by sending the file as a byte array; the latter comes with some constraints. For more information, see [uploading considerations and limitations](upload-index-videos.md).
First, it has file size limitations. The size of the byte array file is limited to 2 GB compared to the 30-GB upload size limitation while using URL.
Second, consider just some of the issues that can affect your performance and he
* upload speed,
* lost packets somewhere in the world wide web.

When you upload videos using URL, you just need to provide a path to the location of a media file and Video Indexer takes care of the rest (see the `videoUrl` field in the [upload video](https://api-portal.videoindexer.ai/api-details#api=Operations&operation=Upload-Video) API).

> [!TIP]
> Use the `videoUrl` optional parameter of the upload video API.
-To see an example of how to upload videos using URL, check out [this example](upload-index-videos.md). Or, you can use [AzCopy](../storage/common/storage-use-azcopy-v10.md) for a fast and reliable way to get your content to a storage account from which you can submit it to Azure Video Indexer using [SAS URL](../storage/common/storage-sas-overview.md). Azure Video Indexer recommends using *readonly* SAS URLs.
+To see an example of how to upload videos using URL, check out [this example](upload-index-videos.md). Or, you can use [AzCopy](../storage/common/storage-use-azcopy-v10.md) for a fast and reliable way to get your content to a storage account from which you can submit it to Azure AI Video Indexer using [SAS URL](../storage/common/storage-sas-overview.md). Azure AI Video Indexer recommends using *readonly* SAS URLs.
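
As a rough illustration, here's a minimal Python sketch (using `requests`) of uploading by URL with the `videoUrl` parameter. The endpoint path, the parameter names other than `videoUrl`, and the response field are assumptions based on the common Video Indexer API layout; treat the upload video API reference as authoritative.

```python
import requests

# Hypothetical placeholders -- replace with your own values.
location = "trial"                        # or your Azure region
account_id = "<your-account-id>"
access_token = "<your-account-access-token>"
sas_url = "<read-only-sas-url-to-your-video-blob>"

# Assumed endpoint shape for the Upload Video operation.
url = f"https://api.videoindexer.ai/{location}/Accounts/{account_id}/Videos"

params = {
    "name": "my-video",
    "videoUrl": sas_url,        # upload by URL instead of sending the bytes
    "accessToken": access_token,
}

# POST with no file body: the service pulls the video from the URL.
response = requests.post(url, params=params)
response.raise_for_status()
print(response.json()["id"])    # assumed response field: the new video ID
```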
## Automatic Scaling of Media Reserved Units
Azure Video Indexer adds a `retry-after` header in the HTTP response, the header
## Use callback URL
-We recommend that instead of polling the status of your request constantly from the second you sent the upload request, you can add a callback URL and wait for Azure Video Indexer to update you. As soon as there is any status change in your upload request, you get a POST notification to the URL you specified.
+We recommend that, instead of constantly polling the status of your request from the second you send the upload request, you add a callback URL and wait for Azure AI Video Indexer to update you. As soon as there is any status change in your upload request, you get a POST notification to the URL you specified.
You can add a callback URL as one of the parameters of the [upload video API](https://api-portal.videoindexer.ai/api-details#api=Operations&operation=Upload-Video). Check out the code samples in the [GitHub repo](https://github.com/Azure-Samples/media-services-video-indexer/tree/master/).
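
A minimal sketch of passing a callback URL on upload follows; the endpoint path and the `callbackUrl` parameter name are assumptions, so verify them against the upload video API reference.

```python
import requests

# Hypothetical placeholders -- replace with your own values.
location = "trial"
account_id = "<your-account-id>"
access_token = "<your-account-access-token>"

# Assumed Upload Video endpoint, now with a callback URL so the service
# POSTs status updates to you instead of you polling for them.
url = f"https://api.videoindexer.ai/{location}/Accounts/{account_id}/Videos"
params = {
    "name": "my-video",
    "videoUrl": "<read-only-sas-url-to-your-video-blob>",
    # Assumed parameter name for the callback; verify it in the API portal.
    "callbackUrl": "https://example.com/video-indexer/notifications",
    "accessToken": access_token,
}

response = requests.post(url, params=params)
response.raise_for_status()
```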
For callback URL you can also use Azure Functions, a serverless event-driven pla
## Use the right indexing parameters for you
-When making decisions related to using Azure Video Indexer at scale, look at how to get the most out of it with the right parameters for your needs. Think about your use case, by defining different parameters you can save money and make the indexing process for your videos faster.
+When making decisions related to using Azure AI Video Indexer at scale, look at how to get the most out of it with the right parameters for your needs. Think about your use case; by defining different parameters, you can save money and make the indexing process for your videos faster.
Before uploading and indexing your video, read the [documentation](upload-index-videos.md) to get a better idea of what your options are.
Therefore, we recommend you to verify that you get the right results for your us
## Next steps
-[Examine the Azure Video Indexer output produced by API](video-indexer-output-json-v2.md)
+[Examine the Azure AI Video Indexer output produced by API](video-indexer-output-json-v2.md)
azure-video-indexer Create Account Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/create-account-portal.md
Title: Create an Azure Video Indexer account
-description: This article explains how to create an account for Azure Video Indexer.
+ Title: Create an Azure AI Video Indexer account
+description: This article explains how to create an account for Azure AI Video Indexer.
Last updated 06/10/2022
Last updated 06/10/2022
[!INCLUDE [Gate notice](./includes/face-limited-access.md)]
-To start using unlimited features and robust capabilities of Azure Video Indexer, you need to create an Azure Video Indexer unlimited account.
+To start using unlimited features and robust capabilities of Azure AI Video Indexer, you need to create an Azure AI Video Indexer unlimited account.
-This tutorial walks you through the steps of creating the Azure Video Indexer account and its accompanying resources by using the Azure portal. The account that gets created is ARM (Azure Resource Manager) account. For information about different account types, see [Overview of account types](accounts-overview.md).
+This tutorial walks you through the steps of creating the Azure AI Video Indexer account and its accompanying resources by using the Azure portal. The account that gets created is ARM (Azure Resource Manager) account. For information about different account types, see [Overview of account types](accounts-overview.md).
## Prerequisites
This tutorial walks you through the steps of creating the Azure Video Indexer ac
In the [Azure portal](https://portal.azure.com), go to **Subscriptions**->[<*subscription*>]->**ResourceProviders**. Search for **Microsoft.Media** and **Microsoft.EventGrid**. If not in the registered state, select **Register**. It takes a couple of minutes to register.
-* Have an **Owner** role (or **Contributor** and **User Access Administrator** roles) assignment on the associated Azure Media Services (AMS). You select the AMS account during the Azure Video Indexer account creation, as described below.
+* Have an **Owner** role (or **Contributor** and **User Access Administrator** roles) assignment on the associated Azure Media Services (AMS). You select the AMS account during the Azure AI Video Indexer account creation, as described below.
* Have an **Owner** role (or **Contributor** and **User Access Administrator** roles) assignment on the related managed identity.
-## Use the Azure portal to create an Azure Video Indexer account
+## Use the Azure portal to create an Azure AI Video Indexer account
1. Sign into the [Azure portal](https://portal.azure.com/).
Search for **Microsoft.Media** and **Microsoft.EventGrid**. If not in the regist
1. Using the search bar at the top, enter **"Video Indexer"**.
1. Select **Video Indexer** under **Services**.
1. Select **Create**.
-1. In the Create an Azure Video Indexer resource section, enter required values (the descriptions follow after the image).
+1. In the Create an Azure AI Video Indexer resource section, enter required values (the descriptions follow after the image).
> [!div class="mx-imgBorder"]
- > :::image type="content" source="./media/create-account-portal/avi-create-blade.png" alt-text="Screenshot showing how to create an Azure Video Indexer resource.":::
+ > :::image type="content" source="./media/create-account-portal/avi-create-blade.png" alt-text="Screenshot showing how to create an Azure AI Video Indexer resource.":::
Here are the definitions:
Search for **Microsoft.Media** and **Microsoft.EventGrid**. If not in the regist
|||
|**Subscription**|Choose the subscription to use. If you're a member of only one subscription, you'll see that name. If there are multiple choices, choose a subscription in which your user has the required role.|
|**Resource group**|Select an existing resource group or create a new one. A resource group is a collection of resources that share lifecycle, permissions, and policies. Learn more [here](../azure-resource-manager/management/overview.md#resource-groups).|
- |**Resource name**|This will be the name of the new Azure Video Indexer account. The name can contain letters, numbers and dashes with no spaces.|
- |**Region**|Select the Azure region that will be used to deploy the Azure Video Indexer account. The region matches the resource group region you chose. If you'd like to change the selected region, change the selected resource group or create a new one in the preferred region. [Azure region in which Azure Video Indexer is available](https://azure.microsoft.com/global-infrastructure/services/?products=cognitive-services&regions=all)|
+ |**Resource name**|This will be the name of the new Azure AI Video Indexer account. The name can contain letters, numbers and dashes with no spaces.|
+ |**Region**|Select the Azure region that will be used to deploy the Azure AI Video Indexer account. The region matches the resource group region you chose. If you'd like to change the selected region, change the selected resource group or create a new one in the preferred region. [Azure region in which Azure AI Video Indexer is available](https://azure.microsoft.com/global-infrastructure/services/?products=cognitive-services&regions=all)|
|**Existing content**|If you have existing classic Video Indexer accounts, you can choose to have the videos, files, and data associated with an existing classic account connected to the new account. To learn more, see [Connect the classic account to ARM](connect-classic-account-to-arm.md).|
|**Available classic accounts**|Classic accounts available in the chosen subscription, resource group, and region.|
- |**Media Services account name**|Select a Media Services that the new Azure Video Indexer account will use to process the videos. You can select an existing Media Services or you can create a new one. The Media Services must be in the same region you selected for your Azure Video Indexer account.|
+ |**Media Services account name**|Select a Media Services that the new Azure AI Video Indexer account will use to process the videos. You can select an existing Media Services or you can create a new one. The Media Services must be in the same region you selected for your Azure AI Video Indexer account.|
|**Storage account** (appears when creating a new AMS account)|Choose or create a new storage account in the same resource group.|
- |**Managed identity**|Select an existing user-assigned managed identity or system-assigned managed identity or both when creating the account. The new Azure Video Indexer account will use the selected managed identity to access the Media Services associated with the account. If both user-assigned and system assigned managed identities will be selected during the account creation the **default** managed identity is the user-assigned managed identity. A contributor role should be assigned on the Media Services.|
+ |**Managed identity**|Select an existing user-assigned managed identity or system-assigned managed identity or both when creating the account. The new Azure AI Video Indexer account will use the selected managed identity to access the Media Services associated with the account. If both user-assigned and system assigned managed identities will be selected during the account creation the **default** managed identity is the user-assigned managed identity. A contributor role should be assigned on the Media Services.|
1. Select **Review + create** at the bottom of the form.

### Review deployed resource
-You can use the Azure portal to validate the Azure Video Indexer account and other resources that were created. After the deployment is finished, select **Go to resource** to see your new Azure Video Indexer account.
+You can use the Azure portal to validate the Azure AI Video Indexer account and other resources that were created. After the deployment is finished, select **Go to resource** to see your new Azure AI Video Indexer account.
## The Overview tab of the account
This tab enables you to view details about your account.
> [!div class="mx-imgBorder"]
> :::image type="content" source="./media/create-account-portal/avi-overview.png" alt-text="Screenshot showing the Overview tab.":::
-Select **Explore Azure Video Indexer's portal** to view your new account on the [Azure Video Indexer website](https://aka.ms/vi-portal-link).
+Select **Explore Azure AI Video Indexer's portal** to view your new account on the [Azure AI Video Indexer website](https://aka.ms/vi-portal-link).
### Essential details
Choose the following:
### Identity
-Use the **Identity** tab to manually update the managed identities associated with the Azure Video Indexer resource.
+Use the **Identity** tab to manually update the managed identities associated with the Azure AI Video Indexer resource.
Add new managed identities, switch the default managed identity between user-assigned and system-assigned or set a new user-assigned managed identity.
azure-video-indexer Customize Brands Model Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/customize-brands-model-overview.md
Title: Customize a Brands model in Azure Video Indexer - Azure
-description: This article gives an overview of what is a Brands model in Azure Video Indexer and how to customize it.
+ Title: Customize a Brands model in Azure AI Video Indexer - Azure
+description: This article gives an overview of what is a Brands model in Azure AI Video Indexer and how to customize it.
Last updated 12/15/2019
-# Customize a Brands model in Azure Video Indexer
+# Customize a Brands model in Azure AI Video Indexer
-Azure Video Indexer supports brand detection from speech and visual text during indexing and reindexing of video and audio content. The brand detection feature identifies mentions of products, services, and companies suggested by Bing's brands database. For example, if Microsoft is mentioned in a video or audio content or if it shows up in visual text in a video, Azure Video Indexer detects it as a brand in the content. Brands are disambiguated from other terms using context.
+Azure AI Video Indexer supports brand detection from speech and visual text during indexing and reindexing of video and audio content. The brand detection feature identifies mentions of products, services, and companies suggested by Bing's brands database. For example, if Microsoft is mentioned in a video or audio content or if it shows up in visual text in a video, Azure AI Video Indexer detects it as a brand in the content. Brands are disambiguated from other terms using context.
-Brand detection is useful in a wide variety of business scenarios such as contents archive and discovery, contextual advertising, social media analysis, retail compete analysis, and many more. Azure Video Indexer brand detection enables you to index brand mentions in speech and visual text, using Bing's brands database as well as with customization by building a custom Brands model for each Azure Video Indexer account. The custom Brands model feature allows you to select whether or not Azure Video Indexer will detect brands from the Bing brands database, exclude certain brands from being detected (essentially creating a list of unapproved brands), and include brands that should be part of your model that might not be in Bing's brands database (essentially creating a list of approved brands). The custom Brands model that you create will only be available in the account in which you created the model.
+Brand detection is useful in a wide variety of business scenarios such as contents archive and discovery, contextual advertising, social media analysis, retail compete analysis, and many more. Azure AI Video Indexer brand detection enables you to index brand mentions in speech and visual text, using Bing's brands database as well as with customization by building a custom Brands model for each Azure AI Video Indexer account. The custom Brands model feature allows you to select whether or not Azure AI Video Indexer will detect brands from the Bing brands database, exclude certain brands from being detected (essentially creating a list of unapproved brands), and include brands that should be part of your model that might not be in Bing's brands database (essentially creating a list of approved brands). The custom Brands model that you create will only be available in the account in which you created the model.
## Out of the box detection example
-In the "Microsoft Build 2017 Day 2" presentation, the brand "Microsoft Windows" appears multiple times. Sometimes in the transcript, sometimes as visual text and never as verbatim. Azure Video Indexer detects with high precision that a term is indeed brand based on the context, covering over 90k brands out of the box, and constantly updating. At 02:25, Azure Video Indexer detects the brand from speech and then again at 02:40 from visual text, which is part of the Windows logo.
+In the "Microsoft Build 2017 Day 2" presentation, the brand "Microsoft Windows" appears multiple times. Sometimes in the transcript, sometimes as visual text and never as verbatim. Azure AI Video Indexer detects with high precision that a term is indeed brand based on the context, covering over 90k brands out of the box, and constantly updating. At 02:25, Azure AI Video Indexer detects the brand from speech and then again at 02:40 from visual text, which is part of the Windows logo.
![Brands overview](./media/content-model-customization/brands-overview.png)
azure-video-indexer Customize Brands Model With Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/customize-brands-model-with-api.md
Title: Customize a Brands model with Azure Video Indexer API
-description: Learn how to customize a Brands model with the Azure Video Indexer API.
+ Title: Customize a Brands model with Azure AI Video Indexer API
+description: Learn how to customize a Brands model with the Azure AI Video Indexer API.
Last updated 01/14/2020
-# Customize a Brands model with the Azure Video Indexer API
+# Customize a Brands model with the Azure AI Video Indexer API
-Azure Video Indexer supports brand detection from speech and visual text during indexing and reindexing of video and audio content. The brand detection feature identifies mentions of products, services, and companies suggested by Bing's brands database. For example, if Microsoft is mentioned in video or audio content or if it shows up in visual text in a video, Azure Video Indexer detects it as a brand in the content. A custom Brands model allows you to exclude certain brands from being detected and include brands that should be part of your model that might not be in Bing's brands database. For more information, see [Overview](customize-brands-model-overview.md).
+Azure AI Video Indexer supports brand detection from speech and visual text during indexing and reindexing of video and audio content. The brand detection feature identifies mentions of products, services, and companies suggested by Bing's brands database. For example, if Microsoft is mentioned in video or audio content or if it shows up in visual text in a video, Azure AI Video Indexer detects it as a brand in the content. A custom Brands model allows you to exclude certain brands from being detected and include brands that should be part of your model that might not be in Bing's brands database. For more information, see [Overview](customize-brands-model-overview.md).
> [!NOTE]
> If your video was indexed prior to adding a brand, you need to reindex it.
-You can use the Azure Video Indexer APIs to create, use, and edit custom Brands models detected in a video, as described in this topic. You can also use the Azure Video Indexer website, as described in [Customize Brands model using the Azure Video Indexer website](customize-brands-model-with-api.md).
+You can use the Azure AI Video Indexer APIs to create, use, and edit custom Brands models detected in a video, as described in this topic. You can also use the Azure AI Video Indexer website, as described in [Customize Brands model using the Azure AI Video Indexer website](customize-brands-model-with-api.md).
## Create a Brand
The [create a brand](https://api-portal.videoindexer.ai/api-details#api=Operations&operation=Create-Brand) API creates a new custom brand and adds it to the custom Brands model for the specified account.
> [!NOTE]
-> Setting `enabled` (in the body) to true puts the brand in the *Include* list for Azure Video Indexer to detect. Setting `enabled` to false puts the brand in the *Exclude* list, so Azure Video Indexer won't detect it.
+> Setting `enabled` (in the body) to true puts the brand in the *Include* list for Azure AI Video Indexer to detect. Setting `enabled` to false puts the brand in the *Exclude* list, so Azure AI Video Indexer won't detect it.
Some other parameters that you can set in the body (see the sample request body after this list):
* The `referenceUrl` value can be any reference website for the brand, such as a link to its Wikipedia page.
-* The `tags` value is a list of tags for the brand. This tag shows up in the brand's *Category* field in the Azure Video Indexer website. For example, the brand "Azure" can be tagged or categorized as "Cloud".
+* The `tags` value is a list of tags for the brand. This tag shows up in the brand's *Category* field in the Azure AI Video Indexer website. For example, the brand "Azure" can be tagged or categorized as "Cloud".
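For illustration only, here's a minimal sketch of a request body combining the parameters described above; the `name` field is an assumption based on the create-brand flow in the Video Indexer website, so check the linked create-a-brand API reference for the exact schema:

```json
{
  "name": "Azure",
  "enabled": true,
  "tags": [ "Cloud" ],
  "referenceUrl": "https://en.wikipedia.org/wiki/Microsoft_Azure"
}
```

With `enabled` set to true, this brand lands in the *Include* list.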
### Response
The response provides information on the brand that you searched (using brand ID
```
> [!NOTE]
-> `enabled` being set to `true` signifies that the brand is in the *Include* list for Azure Video Indexer to detect, and `enabled` being false signifies that the brand is in the *Exclude* list, so Azure Video Indexer won't detect it.
+> `enabled` being set to `true` signifies that the brand is in the *Include* list for Azure AI Video Indexer to detect, and `enabled` being false signifies that the brand is in the *Exclude* list, so Azure AI Video Indexer won't detect it.
## Update a specific brand
The response provides a list of all of the brands in your account and each of th
```
> [!NOTE]
-> The brand named *Example* is in the *Include* list for Azure Video Indexer to detect, and the brand named *Example2* is in the *Exclude* list, so Azure Video Indexer won't detect it.
+> The brand named *Example* is in the *Include* list for Azure AI Video Indexer to detect, and the brand named *Example2* is in the *Exclude* list, so Azure AI Video Indexer won't detect it.
## Get Brands model settings
-The [get brands settings](https://api-portal.videoindexer.ai/api-details#api=Operations&operation=Get-Brands) API returns the Brands model settings in the specified account. The Brands model settings represent whether detection from the Bing brands database is enabled or not. If Bing brands aren't enabled, Azure Video Indexer will only detect brands from the custom Brands model of the specified account.
+The [get brands settings](https://api-portal.videoindexer.ai/api-details#api=Operations&operation=Get-Brands) API returns the Brands model settings in the specified account. The Brands model settings represent whether detection from the Bing brands database is enabled or not. If Bing brands aren't enabled, Azure AI Video Indexer will only detect brands from the custom Brands model of the specified account.
### Response
The response shows whether Bing brands are enabled following the format of the e
## Update Brands model settings
-The [update brands](https://api-portal.videoindexer.ai/api-details#api=Operations&operation=Update-Brands-Model-Settings) API updates the Brands model settings in the specified account. The Brands model settings represent whether detection from the Bing brands database is enabled or not. If Bing brands aren't enabled, Azure Video Indexer will only detect brands from the custom Brands model of the specified account.
+The [update brands](https://api-portal.videoindexer.ai/api-details#api=Operations&operation=Update-Brands-Model-Settings) API updates the Brands model settings in the specified account. The Brands model settings represent whether detection from the Bing brands database is enabled or not. If Bing brands aren't enabled, Azure AI Video Indexer will only detect brands from the custom Brands model of the specified account.
The `useBuiltIn` flag set to true means that Bing brands are enabled. If `useBuiltIn` is false, Bing brands are disabled.
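As a sketch only, and assuming the settings body carries nothing beyond the flag described above, a request that enables Bing brand detection might look like this (the actual payload may include other fields):

```json
{
  "useBuiltIn": true
}
```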
azure-video-indexer Customize Brands Model With Website https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/customize-brands-model-with-website.md
Title: Customize a Brands model with the Azure Video Indexer website
-description: Learn how to customize a Brands model with the Azure Video Indexer website.
+ Title: Customize a Brands model with the Azure AI Video Indexer website
+description: Learn how to customize a Brands model with the Azure AI Video Indexer website.
Last updated 12/15/2019
-# Customize a Brands model with the Azure Video Indexer website
+# Customize a Brands model with the Azure AI Video Indexer website
-Azure Video Indexer supports brand detection from speech and visual text during indexing and reindexing of video and audio content. The brand detection feature identifies mentions of products, services, and companies suggested by Bing's brands database. For example, if Microsoft is mentioned in video or audio content or if it shows up in visual text in a video, Azure Video Indexer detects it as a brand in the content.
+Azure AI Video Indexer supports brand detection from speech and visual text during indexing and reindexing of video and audio content. The brand detection feature identifies mentions of products, services, and companies suggested by Bing's brands database. For example, if Microsoft is mentioned in video or audio content or if it shows up in visual text in a video, Azure AI Video Indexer detects it as a brand in the content.
A custom Brands model allows you to:
-- select if you want Azure Video Indexer to detect brands from the Bing brands database.
-- select if you want Azure Video Indexer to exclude certain brands from being detected (essentially creating a blocklist of brands).
-- select if you want Azure Video Indexer to include brands that should be part of your model that might not be in Bing's brands database (essentially creating an accept list of brands).
+- select if you want Azure AI Video Indexer to detect brands from the Bing brands database.
+- select if you want Azure AI Video Indexer to exclude certain brands from being detected (essentially creating a blocklist of brands).
+- select if you want Azure AI Video Indexer to include brands that should be part of your model that might not be in Bing's brands database (essentially creating an accept list of brands).
For a detailed overview, see this [Overview](customize-brands-model-overview.md).
-You can use the Azure Video Indexer website to create, use, and edit custom Brands models detected in a video, as described in this article. You can also use the API, as described in [Customize Brands model using APIs](customize-brands-model-with-api.md).
+You can use the Azure AI Video Indexer website to create, use, and edit custom Brands models detected in a video, as described in this article. You can also use the API, as described in [Customize Brands model using APIs](customize-brands-model-with-api.md).
> [!NOTE]
> If your video was indexed prior to adding a brand, you need to reindex it. You will find the **Re-index** item in the drop-down menu associated with the video. Select **Advanced options** -> **Brand categories** and check **All brands**.
You can use the Azure Video Indexer website to create, use, and edit custom Bran
You have the option to set whether or not you want brands from the Bing brands database to be detected. To set this option, you need to edit the settings of your Brands model. Follow these steps:
-1. Go to the [Azure Video Indexer](https://www.videoindexer.ai/) website and sign in.
+1. Go to the [Azure AI Video Indexer](https://www.videoindexer.ai/) website and sign in.
1. To customize a model in your account, select the **Content model customization** button on the left of the page.
> [!div class="mx-imgBorder"]
- > :::image type="content" source="./media/content-model-customization/content-model-customization.png" alt-text="Customize content model in Azure Video Indexer ":::
+ > :::image type="content" source="./media/content-model-customization/content-model-customization.png" alt-text="Customize content model in Azure AI Video Indexer ":::
1. To edit brands, select the **Brands** tab.
> [!div class="mx-imgBorder"]
> :::image type="content" source="./media/customize-brand-model/customize-brand-model.png" alt-text="Screenshot shows the Brands tab of the Content model customization dialog box":::
-1. Check the **Show brands suggested by Bing** option if you want Azure Video Indexer to detect brands suggested by BingΓÇöleave the option unchecked if you don't.
+1. Check the **Show brands suggested by Bing** option if you want Azure AI Video Indexer to detect brands suggested by Bing; leave the option unchecked if you don't.
## Include brands in the model
-The **Include brands** section represents custom brands that you want Azure Video Indexer to detect, even if they aren't suggested by Bing.
+The **Include brands** section represents custom brands that you want Azure AI Video Indexer to detect, even if they aren't suggested by Bing.
### Add a brand to include list
1. Select **+ Create new brand**. Provide a name (required), category (optional), description (optional), and reference URL (optional).
- The category field is meant to help you tag your brands. This field shows up as the brand's *tags* when using the Azure Video Indexer APIs. For example, the brand "Azure" can be tagged or categorized as "Cloud".
+ The category field is meant to help you tag your brands. This field shows up as the brand's *tags* when using the Azure AI Video Indexer APIs. For example, the brand "Azure" can be tagged or categorized as "Cloud".
The reference URL field can be any reference website for the brand (like a link to its Wikipedia page).
The **Include brands** section represents custom brands that you want Azure Vide
## Exclude brands from the model
-The **Exclude brands** section represents the brands that you don't want Azure Video Indexer to detect.
+The **Exclude brands** section represents the brands that you don't want Azure AI Video Indexer to detect.
### Add a brand to exclude list
azure-video-indexer Customize Content Models Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/customize-content-models-overview.md
Title: Customizing content models in Azure Video Indexer
+ Title: Customizing content models in Azure AI Video Indexer
description: This article gives links to the conceptual articles that explain the benefits of each type of customization. This article also links to how-to guides that show how you can implement the customization of each model.
Last updated 06/26/2019
-# Customizing content models in Azure Video Indexer
+# Customizing content models in Azure AI Video Indexer
[!INCLUDE [Gate notice](./includes/face-limited-access.md)]
-Azure Video Indexer allows you to customize some of its models to be adapted to your specific use case. These models include [brands](customize-brands-model-overview.md), [language](customize-language-model-overview.md), and [person](customize-person-model-overview.md). You can easily customize these models using the Azure Video Indexer website or API.
+Azure AI Video Indexer allows you to customize some of its models to be adapted to your specific use case. These models include [brands](customize-brands-model-overview.md), [language](customize-language-model-overview.md), and [person](customize-person-model-overview.md). You can easily customize these models using the Azure AI Video Indexer website or API.
This article gives links to articles that explain the benefits of each type of customization. The article also links to how-to guides that show how you can implement the customization of each model.
## Brands model
* [Customizing the brands model overview](customize-brands-model-overview.md)
-* [Customizing the brands model using the Azure Video Indexer website](customize-brands-model-with-website.md)
-* [Customizing the brands model using the Azure Video Indexer API](customize-brands-model-with-api.md)
+* [Customizing the brands model using the Azure AI Video Indexer website](customize-brands-model-with-website.md)
+* [Customizing the brands model using the Azure AI Video Indexer API](customize-brands-model-with-api.md)
## Language model
* [Customizing language models overview](customize-language-model-overview.md)
-* [Customizing language models using the Azure Video Indexer website](customize-language-model-with-website.md)
-* [Customizing language models using the Azure Video Indexer API](customize-language-model-with-api.md)
+* [Customizing language models using the Azure AI Video Indexer website](customize-language-model-with-website.md)
+* [Customizing language models using the Azure AI Video Indexer API](customize-language-model-with-api.md)
## Person model
* [Customizing person models overview](customize-person-model-overview.md)
-* [Customizing person models using the Azure Video Indexer website](customize-person-model-with-website.md)
-* [Customizing person models using the Azure Video Indexer API](customize-person-model-with-api.md)
+* [Customizing person models using the Azure AI Video Indexer website](customize-person-model-with-website.md)
+* [Customizing person models using the Azure AI Video Indexer API](customize-person-model-with-api.md)
## Next steps
-[Azure Video Indexer overview](video-indexer-overview.md)
+[Azure AI Video Indexer overview](video-indexer-overview.md)
azure-video-indexer Customize Language Model Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/customize-language-model-overview.md
Title: Customize a Language model in Azure Video Indexer - Azure
-description: This article gives an overview of what is a Language model in Azure Video Indexer and how to customize it.
+ Title: Customize a Language model in Azure AI Video Indexer - Azure
+description: This article gives an overview of what a Language model in Azure AI Video Indexer is and how to customize it.
Last updated 11/23/2022
-# Customize a Language model with Azure Video Indexer
+# Customize a Language model with Azure AI Video Indexer
-Azure Video Indexer supports automatic speech recognition through integration with the Microsoft [Custom Speech Service](https://azure.microsoft.com/services/cognitive-services/custom-speech-service/). You can customize the Language model by uploading adaptation text, namely text from the domain whose vocabulary you'd like the engine to adapt to. Once you train your model, new words appearing in the adaptation text will be recognized, assuming default pronunciation, and the Language model will learn new probable sequences of words. See the list of supported by Azure Video Indexer languages in [supported langues](language-support.md).
+Azure AI Video Indexer supports automatic speech recognition through integration with the Microsoft [Custom Speech Service](https://azure.microsoft.com/services/cognitive-services/custom-speech-service/). You can customize the Language model by uploading adaptation text, namely text from the domain whose vocabulary you'd like the engine to adapt to. Once you train your model, new words appearing in the adaptation text will be recognized, assuming default pronunciation, and the Language model will learn new probable sequences of words. See the list of languages supported by Azure AI Video Indexer in [supported languages](language-support.md).
-Let's take a word that is highly specific, like *"Kubernetes"* (in the context of Azure Kubernetes service), as an example. Since the word is new to Azure Video Indexer, it is recognized as *"communities"*. You need to train the model to recognize it as *"Kubernetes"*. In other cases, the words exist, but the Language model is not expecting them to appear in a certain context. For example, *"container service"* is not a 2-word sequence that a non-specialized Language model would recognize as a specific set of words.
+Let's take a word that is highly specific, like *"Kubernetes"* (in the context of Azure Kubernetes service), as an example. Since the word is new to Azure AI Video Indexer, it is recognized as *"communities"*. You need to train the model to recognize it as *"Kubernetes"*. In other cases, the words exist, but the Language model is not expecting them to appear in a certain context. For example, *"container service"* is not a 2-word sequence that a non-specialized Language model would recognize as a specific set of words.
There are 2 ways to customize a language model:
-- **Option 1**: Edit the transcript that was generated by Azure Video Indexer. By editing and correcting the transcript, you are training a language model to provide improved results in the future.
+- **Option 1**: Edit the transcript that was generated by Azure AI Video Indexer. By editing and correcting the transcript, you are training a language model to provide improved results in the future.
- **Option 2**: Upload text file(s) to train the language model. The upload file can either contain a list of words as you would like them to appear in the Video Indexer transcript or the relevant words included naturally in sentences and paragraphs. As better results are achieved with the latter approach, it's recommended for the upload file to contain full sentences or paragraphs related to your content.
> [!Important]
> Do not include in the upload file the words or sentences as currently incorrectly transcribed (for example, *"communities"*) as this will negate the intended impact.
> Only include the words as you would like them to appear (for example, *"Kubernetes"*).
-You can use the Azure Video Indexer APIs or the website to create and edit custom Language models, as described in topics in the [Next steps](#next-steps) section of this topic.
+You can use the Azure AI Video Indexer APIs or the website to create and edit custom Language models, as described in topics in the [Next steps](#next-steps) section of this topic.
## Best practices for custom Language models
-Azure Video Indexer learns based on probabilities of word combinations, so to learn best:
+Azure AI Video Indexer learns based on probabilities of word combinations, so to learn best:
* Give enough real examples of sentences as they would be spoken.
* Put only one sentence per line, not more; otherwise the system will learn probabilities across sentences. An example adaptation file follows this list.
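As an illustrative sketch, an adaptation file that follows these practices could look like the following, reusing the *"Kubernetes"* and *"container service"* terms from the overview, with one full sentence per line:

```
We deploy the application to a Kubernetes cluster in Azure.
The container service manages the Kubernetes nodes for you.
You can scale the Kubernetes deployment from the container service dashboard.
```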
azure-video-indexer Customize Language Model With Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/customize-language-model-with-api.md
Title: Customize a Language model with Azure Video Indexer API
-description: Learn how to customize a Language model with the Azure Video Indexer API.
+ Title: Customize a Language model with Azure AI Video Indexer API
+description: Learn how to customize a Language model with the Azure AI Video Indexer API.
Last updated 02/04/2020
-# Customize a Language model with the Azure Video Indexer API
+# Customize a Language model with the Azure AI Video Indexer API
-Azure Video Indexer lets you create custom Language models to customize speech recognition by uploading adaptation text, namely text from the domain whose vocabulary you'd like the engine to adapt to. Once you train your model, new words appearing in the adaptation text will be recognized.
+Azure AI Video Indexer lets you create custom Language models to customize speech recognition by uploading adaptation text, namely text from the domain whose vocabulary you'd like the engine to adapt to. Once you train your model, new words appearing in the adaptation text will be recognized.
-For a detailed overview and best practices for custom Language models, see [Customize a Language model with Azure Video Indexer](customize-language-model-overview.md).
+For a detailed overview and best practices for custom Language models, see [Customize a Language model with Azure AI Video Indexer](customize-language-model-overview.md).
-You can use the Azure Video Indexer APIs to create and edit custom Language models in your account, as described in this topic. You can also use the website, as described in [Customize Language model using the Azure Video Indexer website](customize-language-model-with-api.md).
+You can use the Azure AI Video Indexer APIs to create and edit custom Language models in your account, as described in this topic. You can also use the website, as described in [Customize Language model using the Azure AI Video Indexer website](customize-language-model-with-api.md).
## Create a Language model
The response provides metadata on the newly trained Language model along with me
}
```
-The returned `id` is a unique ID used to distinguish between language models, while `languageModelId` is used both for [uploading a video to index](https://api-portal.videoindexer.ai/api-details#api=Operations&operation=Upload-Video) and [reindexing a video](https://api-portal.videoindexer.ai/api-details#api=Operations&operation=Re-Index-Video) APIs (also known as `linguisticModelId` in Azure Video Indexer upload/reindex APIs).
+The returned `id` is a unique ID used to distinguish between language models, while `languageModelId` is used both for [uploading a video to index](https://api-portal.videoindexer.ai/api-details#api=Operations&operation=Upload-Video) and [reindexing a video](https://api-portal.videoindexer.ai/api-details#api=Operations&operation=Re-Index-Video) APIs (also known as `linguisticModelId` in Azure AI Video Indexer upload/reindex APIs).
## Delete a Language model
-The [delete a language model](https://api-portal.videoindexer.ai/api-details#api=Operations&operation=Delete-Language-Model) API deletes a custom Language model from the specified account. Any video that was using the deleted Language model will keep the same index until you reindex the video. If you reindex the video, you can assign a new Language model to the video. Otherwise, Azure Video Indexer will use its default model to reindex the video.
+The [delete a language model](https://api-portal.videoindexer.ai/api-details#api=Operations&operation=Delete-Language-Model) API deletes a custom Language model from the specified account. Any video that was using the deleted Language model will keep the same index until you reindex the video. If you reindex the video, you can assign a new Language model to the video. Otherwise, Azure AI Video Indexer will use its default model to reindex the video.
### Response
azure-video-indexer Customize Language Model With Website https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/customize-language-model-with-website.md
Title: Customize Language model with Azure Video Indexer website
-description: Learn how to customize a Language model with the Azure Video Indexer website.
+ Title: Customize Language model with Azure AI Video Indexer website
+description: Learn how to customize a Language model with the Azure AI Video Indexer website.
Last updated 08/10/2020
-# Customize a Language model with the Azure Video Indexer website
+# Customize a Language model with the Azure AI Video Indexer website
-Azure Video Indexer lets you create custom Language models to customize speech recognition by uploading adaptation text, namely text from the domain whose vocabulary you'd like the engine to adapt to. Once you train your model, new words appearing in the adaptation text will be recognized.
+Azure AI Video Indexer lets you create custom Language models to customize speech recognition by uploading adaptation text, namely text from the domain whose vocabulary you'd like the engine to adapt to. Once you train your model, new words appearing in the adaptation text will be recognized.
-For a detailed overview and best practices for custom language models, see [Customize a Language model with Azure Video Indexer](customize-language-model-overview.md).
+For a detailed overview and best practices for custom language models, see [Customize a Language model with Azure AI Video Indexer](customize-language-model-overview.md).
-You can use the Azure Video Indexer website to create and edit custom Language models in your account, as described in this topic. You can also use the API, as described in [Customize Language model using APIs](customize-language-model-with-api.md).
+You can use the Azure AI Video Indexer website to create and edit custom Language models in your account, as described in this topic. You can also use the API, as described in [Customize Language model using APIs](customize-language-model-with-api.md).
## Create a Language model
-1. Go to the [Azure Video Indexer](https://www.videoindexer.ai/) website and sign in.
+1. Go to the [Azure AI Video Indexer](https://www.videoindexer.ai/) website and sign in.
1. To customize a model in your account, select the **Content model customization** button on the left of the page.
> [!div class="mx-imgBorder"]
- > :::image type="content" source="./media/customize-language-model/model-customization.png" alt-text="Customize content model in Azure Video Indexer ":::
+ > :::image type="content" source="./media/customize-language-model/model-customization.png" alt-text="Customize content model in Azure AI Video Indexer ":::
1. Select the **Language** tab. You see a list of supported languages.
To use your Language model on a new video, do one of the following actions:
* Select the **Upload** button on the top of the page.
- ![Upload button Azure Video Indexer](./media/customize-language-model/upload.png)
+ ![Upload button Azure AI Video Indexer](./media/customize-language-model/upload.png)
* Drop your audio or video file or browse for your file. You're given the option to select the **Video source language**. Select the drop-down and select a Language model that you created from the list. It should say the language of your Language model and the name that you gave it in parentheses. For example:
-![Choose video source languageΓÇöReindex a video with Azure Video Indexer](./media/customize-language-model/reindex.png)
+![Choose video source language - Reindex a video with Azure AI Video Indexer](./media/customize-language-model/reindex.png)
Select the **Upload** option in the bottom of the page, and your new video will be indexed using your Language model.
Select the **Upload** option in the bottom of the page, and your new video will
To use your Language model to reindex a video in your collection, follow these steps:
-1. Sign in to the [Azure Video Indexer](https://www.videoindexer.ai/) home page.
+1. Sign in to the [Azure AI Video Indexer](https://www.videoindexer.ai/) home page.
1. Click the **...** button on the video and select **Re-index**.
1. You're given the option to select the **Video source language** to reindex your video with. Select the drop-down and select a Language model that you created from the list. It should say the language of your language model and the name that you gave it in parentheses.
1. Select the **Re-index** button and your video will be reindexed using your Language model.
To delete a Language model from your account, select the ellipsis (**...**) butt
A new window pops up telling you that the deletion can't be undone. Select the **Delete** option in the new window.
-This action removes the Language model completely from your account. Any video that was using the deleted Language model will keep the same index until you reindex the video. If you reindex the video, you can assign a new Language model to the video. Otherwise, Azure Video Indexer will use its default model to reindex the video.
+This action removes the Language model completely from your account. Any video that was using the deleted Language model will keep the same index until you reindex the video. If you reindex the video, you can assign a new Language model to the video. Otherwise, Azure AI Video Indexer will use its default model to reindex the video.
## Customize Language models by correcting transcripts
-Azure Video Indexer supports automatic customization of Language models based on the actual corrections users make to the transcriptions of their videos.
+Azure AI Video Indexer supports automatic customization of Language models based on the actual corrections users make to the transcriptions of their videos.
1. To make corrections to a transcript, open up the video that you want to edit from your Account Videos. Select the **Timeline** tab.
- ![Customize language model timeline tabΓÇöAzure Video Indexer](./media/customize-language-model/timeline.png)
+ ![Customize language model timeline tab - Azure AI Video Indexer](./media/customize-language-model/timeline.png)
1. Select the pencil icon to edit the transcript.
- ![Customize language model edit transcriptionΓÇöAzure Video Indexer](./media/customize-language-model/edits.png)
+ ![Customize language model edit transcription - Azure AI Video Indexer](./media/customize-language-model/edits.png)
- Azure Video Indexer captures all lines that are corrected by you in the transcription of your video and adds them automatically to a text file called "From transcript edits". These edits are used to retrain the specific Language model that was used to index this video.
+ Azure AI Video Indexer captures all lines that are corrected by you in the transcription of your video and adds them automatically to a text file called "From transcript edits". These edits are used to retrain the specific Language model that was used to index this video.
The edits that were done in the [widget's](video-indexer-embed-widgets.md) timeline are also included.
Azure Video Indexer supports automatic customization of Language models based on
To look at the "From transcript edits" file for each of your Language models, select it to open it.
- ![From transcript editsΓÇöAzure Video Indexer](./media/customize-language-model/from-transcript-edits.png)
+ ![From transcript edits - Azure AI Video Indexer](./media/customize-language-model/from-transcript-edits.png)
## Next steps
azure-video-indexer Customize Person Model Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/customize-person-model-overview.md
Title: Customize a Person model in Azure Video Indexer - Azure
-description: This article gives an overview of what is a Person model in Azure Video Indexer and how to customize it.
+ Title: Customize a Person model in Azure AI Video Indexer - Azure
+description: This article gives an overview of what a Person model in Azure AI Video Indexer is and how to customize it.
Last updated 05/15/2019
-# Customize a Person model in Azure Video Indexer
+# Customize a Person model in Azure AI Video Indexer
[!INCLUDE [Gate notice](./includes/face-limited-access.md)]
-Azure Video Indexer supports celebrity recognition in your videos. The celebrity recognition feature covers approximately one million faces based on commonly requested data source such as IMDB, Wikipedia, and top LinkedIn influencers. Faces that are not recognized by Azure Video Indexer are still detected but are left unnamed. Customers can build custom Person models and enable Azure Video Indexer to recognize faces that are not recognized by default. Customers can build these Person models by pairing a person's name with image files of the person's face.
+Azure AI Video Indexer supports celebrity recognition in your videos. The celebrity recognition feature covers approximately one million faces based on commonly requested data sources such as IMDB, Wikipedia, and top LinkedIn influencers. Faces that are not recognized by Azure AI Video Indexer are still detected but are left unnamed. Customers can build custom Person models and enable Azure AI Video Indexer to recognize faces that are not recognized by default. Customers can build these Person models by pairing a person's name with image files of the person's face.
If your account caters to different use-cases, you can benefit from being able to create multiple Person models per account. For example, if the content in your account is meant to be sorted into different channels, you might want to create a separate Person model for each channel.
If your account caters to different use-cases, you can benefit from being able t
Once a model is created, you can use it by providing the model ID of a specific Person model when uploading/indexing or reindexing a video. Training a new face for a video updates the specific custom model that the video was associated with.
-If you do not need the multiple Person model support, do not assign a Person model ID to your video when uploading/indexing or reindexing. In this case, Azure Video Indexer will use the default Person model in your account.
+If you do not need the multiple Person model support, do not assign a Person model ID to your video when uploading/indexing or reindexing. In this case, Azure AI Video Indexer will use the default Person model in your account.
-You can use the Azure Video Indexer website to edit faces that were detected in a video and to manage multiple custom Person models in your account, as described in the [Customize a Person model using a website](customize-person-model-with-website.md) topic. You can also use the API, as described inΓÇ»[Customize a Person model using APIs](customize-person-model-with-api.md).
+You can use the Azure AI Video Indexer website to edit faces that were detected in a video and to manage multiple custom Person models in your account, as described in the [Customize a Person model using a website](customize-person-model-with-website.md) topic. You can also use the API, as described in [Customize a Person model using APIs](customize-person-model-with-api.md).
azure-video-indexer Customize Person Model With Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/customize-person-model-with-api.md
Title: Customize a Person model with Azure Video Indexer API
-description: Learn how to customize a Person model with the Azure Video Indexer API.
+ Title: Customize a Person model with Azure AI Video Indexer API
+description: Learn how to customize a Person model with the Azure AI Video Indexer API.
Last updated 01/14/2020
-# Customize a Person model with the Azure Video Indexer API
+# Customize a Person model with the Azure AI Video Indexer API
[!INCLUDE [Gate notice](./includes/face-limited-access.md)]
-Azure Video Indexer supports face detection and celebrity recognition for video content. The celebrity recognition feature covers about one million faces based on commonly requested data source such as IMDB, Wikipedia, and top LinkedIn influencers. Faces that aren't recognized by the celebrity recognition feature are detected but left unnamed. After you upload your video to Azure Video Indexer and get results back, you can go back and name the faces that weren't recognized. Once you label a face with a name, the face and name get added to your account's Person model. Azure Video Indexer will then recognize this face in your future videos and past videos.
+Azure AI Video Indexer supports face detection and celebrity recognition for video content. The celebrity recognition feature covers about one million faces based on commonly requested data sources such as IMDB, Wikipedia, and top LinkedIn influencers. Faces that aren't recognized by the celebrity recognition feature are detected but left unnamed. After you upload your video to Azure AI Video Indexer and get results back, you can go back and name the faces that weren't recognized. Once you label a face with a name, the face and name get added to your account's Person model. Azure AI Video Indexer will then recognize this face in your future videos and past videos.
-You can use the Azure Video Indexer API to edit faces that were detected in a video, as described in this topic. You can also use the Azure Video Indexer website, as described in [Customize Person model using the Azure Video Indexer website](customize-person-model-with-api.md).
+You can use the Azure AI Video Indexer API to edit faces that were detected in a video, as described in this topic. You can also use the Azure AI Video Indexer website, as described in [Customize Person model using the Azure AI Video Indexer website](customize-person-model-with-api.md).
## Managing multiple Person models
-Azure Video Indexer supports multiple Person models per account. This feature is currently available only through the Azure Video Indexer APIs.
+Azure AI Video Indexer supports multiple Person models per account. This feature is currently available only through the Azure AI Video Indexer APIs.
If your account caters to different use-case scenarios, you might want to create multiple Person models per account. For example, if your content is related to sports, you can then create a separate Person model for each sport (football, basketball, soccer, and so on). Once a model is created, you can use it by providing the model ID of a specific Person model when uploading/indexing or reindexing a video. Training a new face for a video updates the specific custom model that the video was associated with.
-Each account has a limit of 50 Person models. If you don't need the multiple Person model support, don't assign a Person model ID to your video when uploading/indexing or reindexing. In this case, Azure Video Indexer uses the default custom Person model in your account.
+Each account has a limit of 50 Person models. If you don't need the multiple Person model support, don't assign a Person model ID to your video when uploading/indexing or reindexing. In this case, Azure AI Video Indexer uses the default custom Person model in your account.
## Create a new Person model
You then use the **id** value for the **personModelId** parameter when [uploadin
To delete a custom Person model from the specified account, use the [delete a person model](https://api-portal.videoindexer.ai/api-details#api=Operations&operation=Delete-Person-Model) API.
-Once the Person model is deleted successfully, the index of your current videos that were using the deleted model will remain unchanged until you reindex them. Upon reindexing, the faces that were named in the deleted model won't be recognized by Azure Video Indexer in your current videos that were indexed using that model but the faces will still be detected. Your current videos that were indexed using the deleted model will now use your account's default Person model. If faces from the deleted model are also named in your account's default model, those faces will continue to be recognized in the videos.
+Once the Person model is deleted successfully, the index of your current videos that were using the deleted model will remain unchanged until you reindex them. Upon reindexing, the faces that were named in the deleted model won't be recognized by Azure AI Video Indexer in your current videos that were indexed using that model but the faces will still be detected. Your current videos that were indexed using the deleted model will now use your account's default Person model. If faces from the deleted model are also named in your account's default model, those faces will continue to be recognized in the videos.
There's no returned content when the Person model is deleted successfully.
This command allows you to update a face in your video with a name using the ID
The system then recognizes the occurrences of the same face in your other current videos that share the same Person model. Recognition of the face in your other current videos might take some time to take effect as this is a batch process.
-You can update a face that Azure Video Indexer recognized as a celebrity with a new name. The new name that you give will take precedence over the built-in celebrity recognition.
+You can update a face that Azure AI Video Indexer recognized as a celebrity with a new name. The new name that you give will take precedence over the built-in celebrity recognition.
To update the face, use the [update a video face](https://api-portal.videoindexer.ai/api-details#api=Operations&operation=Update-Video-Face) API.
-Names are unique for Person models, so if you give two different faces in the same Person model the same `name` parameter value, Azure Video Indexer views the faces as the same person and converges them once you reindex your video.
+Names are unique for Person models, so if you give two different faces in the same Person model the same `name` parameter value, Azure AI Video Indexer views the faces as the same person and converges them once you reindex your video.
## Next steps
-[Customize Person model using the Azure Video Indexer website](customize-person-model-with-website.md)
+[Customize Person model using the Azure AI Video Indexer website](customize-person-model-with-website.md)
azure-video-indexer Customize Person Model With Website https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/customize-person-model-with-website.md
Title: Customize a Person model with Azure Video Indexer website
-description: Learn how to customize a Person model with the Azure Video Indexer website.
+ Title: Customize a Person model with Azure AI Video Indexer website
+description: Learn how to customize a Person model with the Azure AI Video Indexer website.
Last updated 05/31/2022
-# Customize a Person model with the Azure Video Indexer website
+# Customize a Person model with the Azure AI Video Indexer website
[!INCLUDE [Gate notice](./includes/face-limited-access.md)]
-Azure Video Indexer supports celebrity recognition for video content. The celebrity recognition feature covers approximately one million faces based on commonly requested data source such as IMDB, Wikipedia, and top LinkedIn influencers. For a detailed overview, see [Customize a Person model in Azure Video Indexer](customize-person-model-overview.md).
+Azure AI Video Indexer supports celebrity recognition for video content. The celebrity recognition feature covers approximately one million faces based on commonly requested data sources such as IMDB, Wikipedia, and top LinkedIn influencers. For a detailed overview, see [Customize a Person model in Azure AI Video Indexer](customize-person-model-overview.md).
-You can use the Azure Video Indexer website to edit faces that were detected in a video, as described in this article. You can also use the API, as described in [Customize a Person model using APIs](customize-person-model-with-api.md).
+You can use the Azure AI Video Indexer website to edit faces that were detected in a video, as described in this article. You can also use the API, as described in [Customize a Person model using APIs](customize-person-model-with-api.md).
## Central management of Person models in your account
-1. To view, edit, and delete the Person models in your account, browse to the Azure Video Indexer website and sign in.
+1. To view, edit, and delete the Person models in your account, browse to the Azure AI Video Indexer website and sign in.
1. Select the content model customization button on the left of the page.
> [!div class="mx-imgBorder"]
You can use the Azure Video Indexer website to edit faces that were detected in
## Add a new person to a Person model
> [!NOTE]
-> Azure Video Indexer allows you to add multiple people with the same name in a Person model. However, it's recommended you give unique names to each person in your model for usability and clarity.
+> Azure AI Video Indexer allows you to add multiple people with the same name in a Person model. However, it's recommended you give unique names to each person in your model for usability and clarity.
1. To add a new face to a Person model, select the list menu button next to the Person model that you want to add the face to.
1. Select **+ Add person** from the menu. A pop-up will prompt you to fill out the Person's details. Type in the name of the person and select the check button.
- You can then choose from your file explorer or drag and drop the face images of the face. Azure Video Indexer will take all standard image file types (ex: JPG, PNG, and more).
+ You can then choose face images from your file explorer or drag and drop them. Azure AI Video Indexer will take all standard image file types (for example, JPG and PNG).
- Azure Video Indexer can detect occurrences of this person in the future videos that you index and the current videos that you had already indexed, using the Person model to which you added this new face. Recognition of the person in your current videos might take some time to take effect, as this is a batch process.
+ Azure AI Video Indexer can detect occurrences of this person in the future videos that you index and the current videos that you had already indexed, using the Person model to which you added this new face. Recognition of the person in your current videos might take some time to take effect, as this is a batch process.
## Rename a Person model
To use your Person model on a new video, do the following steps:
1. Select the drop-down and select the Person model that you created.
1. Select the **Upload** option at the bottom of the page, and your new video will be indexed using your Person model.
-If you don't specify a Person model during the upload, Azure Video Indexer will index the video using the Default Person model in your account.
+If you don't specify a Person model during the upload, Azure AI Video Indexer will index the video using the Default Person model in your account.
## Use a Person model to reindex a video
-To use a Person model to reindex a video in your collection, go to your account videos on the Azure Video Indexer home page, and hover over the name of the video that you want to reindex.
+To use a Person model to reindex a video in your collection, go to your account videos on the Azure AI Video Indexer home page, and hover over the name of the video that you want to reindex.
You see options to edit, delete, and reindex your video.
If you don't assign a Person model to the video during upload, your edit is save
### Edit a face
> [!NOTE]
-> If a Person model has two or more different people with the same name, you won't be able to tag that name within the videos that use that Person model. You'll only be able to make changes to people that share that name in the People tab of the content model customization page in Azure Video Indexer. For this reason, it's recommended that you give unique names to each person in your Person model.
+> If a Person model has two or more different people with the same name, you won't be able to tag that name within the videos that use that Person model. You'll only be able to make changes to people that share that name in the People tab of the content model customization page in Azure AI Video Indexer. For this reason, it's recommended that you give unique names to each person in your Person model.
-1. Browse to the Azure Video Indexer website and sign in.
+1. Browse to the Azure AI Video Indexer website and sign in.
1. Search for a video you want to view and edit in your account.
1. To edit a face in your video, go to the Insights tab and select the pencil icon in the top-right corner of the window.
azure-video-indexer Customize Speech Model Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/customize-speech-model-overview.md
Title: Customize a speech model in Azure Video Indexer
-description: This article gives an overview of what is a speech model in Azure Video Indexer.
+ Title: Customize a speech model in Azure AI Video Indexer
+description: This article gives an overview of what a speech model in Azure AI Video Indexer is.
Last updated 03/06/2023
Last updated 03/06/2023
[!INCLUDE [speech model](./includes/speech-model.md)]
-Through Azure Video Indexer integration with [Azure speech services](../cognitive-services/speech-service/captioning-concepts.md), a Universal Language Model is utilized as a base model that is trained with Microsoft-owned data and reflects commonly used spoken language. The base model is pretrained with dialects and phonetics representing various common domains. The base model works well in most speech recognition scenarios.
+Through Azure AI Video Indexer integration with [Azure AI Speech services](../ai-services/speech-service/captioning-concepts.md), a Universal Language Model is utilized as a base model that is trained with Microsoft-owned data and reflects commonly used spoken language. The base model is pretrained with dialects and phonetics representing various common domains. The base model works well in most speech recognition scenarios.
However, sometimes the base model's transcription doesn't accurately handle some content. In these situations, a customized speech model can be used to improve recognition of domain-specific vocabulary or pronunciation that is specific to your content by providing text data to train the model. Through the process of creating and adapting speech customization models, your content can be properly transcribed. There is no additional charge for using Video Indexer's speech customization.
However, sometimes the base modelΓÇÖs transcription doesn't accurately handle so
If your content contains industry-specific terminology, or if you notice inaccuracies when reviewing Video Indexer transcription results, you can create and train a custom speech model to recognize the terms and improve the transcription quality. It may only be worthwhile to create a custom model if the relevant words and names are expected to appear repeatedly in the content you plan to index. Training a model is sometimes an iterative process; you might find that after the initial training, results could still use improvement and would benefit from additional training. See the [How to Improve your custom model](#how-to-improve-your-custom-models) section for guidance.
-However, if you notice a few words or names transcribed incorrectly in the transcript, a custom speech model might not be needed, especially if the words or names arenΓÇÖt expected to be commonly used in content you plan on indexing in the future. You can just edit and correct the transcript in the Video Indexer website (see [View and update transcriptions in Azure Video Indexer website](edit-transcript-lines-portal.md)) and donΓÇÖt have to address it through a custom speech model.
+However, if you notice a few words or names transcribed incorrectly in the transcript, a custom speech model might not be needed, especially if the words or names aren't expected to be commonly used in content you plan on indexing in the future. You can just edit and correct the transcript in the Video Indexer website (see [View and update transcriptions in Azure AI Video Indexer website](edit-transcript-lines-portal.md)) and don't have to address it through a custom speech model.
-For a list of languages that support custom models and pronunciation, see the Customization and Pronunciation columns of the language support table in [Language support in Azure Video Indexer](language-support.md).
+For a list of languages that support custom models and pronunciation, see the Customization and Pronunciation columns of the language support table in [Language support in Azure AI Video Indexer](language-support.md).
## Train datasets
-When indexing a video, you can use a customized speech model to improve the transcription. Models are trained by loading them with [datasets](../cognitive-services/speech-service/how-to-custom-speech-test-and-train.md) that can include plain text data and pronunciation data.
+When indexing a video, you can use a customized speech model to improve the transcription. Models are trained by loading them with [datasets](../ai-services/speech-service/how-to-custom-speech-test-and-train.md) that can include plain text data and pronunciation data.
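As a rough illustration only, a pronunciation dataset is a plain text file that maps a display form to its spoken form, one tab-separated entry per line; the exact format is defined in the linked dataset documentation, and the entries below are made-up examples:

```
3CPO	three c p o
CNTK	c n t k
IEEE	i triple e
```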
Text used to test and train a custom model should include samples from a diverse set of content and scenarios that you want your model to recognize. Consider the following factors when creating and training your datasets:
azure-video-indexer Customize Speech Model With Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/customize-speech-model-with-api.md
Title: Customize a speech model with the Azure Video Indexer API
-description: Learn how to customize a speech model with the Azure Video Indexer API.
+ Title: Customize a speech model with the Azure AI Video Indexer API
+description: Learn how to customize a speech model with the Azure AI Video Indexer API.
Last updated 03/06/2023
Last updated 03/06/2023
[!INCLUDE [speech model](./includes/speech-model.md)]
-Azure Video Indexer lets you create custom language models to customize speech recognition by uploading adaptation text, namely text from the domain whose vocabulary you'd like the engine to adapt to or aligning word or name pronunciation with how it should be written.
+Azure AI Video Indexer lets you create custom language models to customize speech recognition by uploading adaptation text, namely text from the domain whose vocabulary you'd like the engine to adapt to, or by aligning word or name pronunciation with how it should be written.
-For a detailed overview and best practices for custom speech models, see [Customize a speech model with Azure Video Indexer](customize-speech-model-overview.md).
+For a detailed overview and best practices for custom speech models, see [Customize a speech model with Azure AI Video Indexer](customize-speech-model-overview.md).
-You can use the Azure Video Indexer APIs to create and edit custom language models in your account. You can also use the website, as described in [Customize speech model using the Azure Video Indexer website](customize-speech-model-with-website.md).
+You can use the Azure AI Video Indexer APIs to create and edit custom language models in your account. You can also use the website, as described in [Customize speech model using the Azure AI Video Indexer website](customize-speech-model-with-website.md).
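The following is a minimal sketch of calling a customization operation with PowerShell. The request URL and body fields are placeholders; copy the exact request URL and parameter names from the operation you're using in the developer portal.

```powershell
# Minimal sketch of calling the customization API with Invoke-RestMethod.
# $requestUrl and the body fields below are placeholders - copy the exact request
# URL and parameter names from the developer portal operation you're using.
$accessToken = '<access-token-with-contributor-permission>'
$requestUrl  = '<request-URL-from-the-developer-portal>'

$body = @{
    displayName = 'Contoso adaptation text'   # hypothetical parameter name
    locale      = 'en-US'                     # hypothetical parameter name
} | ConvertTo-Json

Invoke-RestMethod -Uri "$($requestUrl)?accessToken=$accessToken" `
    -Method Post -ContentType 'application/json' -Body $body
```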
The following are descriptions of some of the parameters:
azure-video-indexer Customize Speech Model With Website https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/customize-speech-model-with-website.md
Title: Customize a speech model with Azure Video Indexer website
-description: Learn how to customize a speech model with the Azure Video Indexer website.
+ Title: Customize a speech model with Azure AI Video Indexer website
+description: Learn how to customize a speech model with the Azure AI Video Indexer website.
Last updated 03/06/2023
[!INCLUDE [speech model](./includes/speech-model.md)]
-Azure Video Indexer lets you create custom speech models to customize speech recognition by uploading datasets that are used to create a speech model. This article goes through the steps to do so through the Video Indexer website. You can also use the API, as described in [Customize speech model using API](customize-speech-model-with-api.md).
+Azure AI Video Indexer lets you create custom speech models to customize speech recognition by uploading datasets that are used to create a speech model. This article goes through the steps to do so through the Video Indexer website. You can also use the API, as described in [Customize speech model using API](customize-speech-model-with-api.md).
-For a detailed overview and best practices for custom speech models, see [Customize a speech model with Azure Video Indexer](customize-speech-model-overview.md).
+For a detailed overview and best practices for custom speech models, see [Customize a speech model with Azure AI Video Indexer](customize-speech-model-overview.md).
## Create a dataset
As all custom models must contain a dataset, we'll start with the process of how to create and manage datasets.
-1. Go to the [Azure Video Indexer website](https://www.videoindexer.ai/) and sign in.
+1. Go to the [Azure AI Video Indexer website](https://www.videoindexer.ai/) and sign in.
1. Select the Model customization button on the left of the page.
1. Select the Speech (new) tab. Here you'll begin the process of uploading datasets that are used to train the speech models.
   > [!div class="mx-imgBorder"]
   > :::image type="content" source="./media/customize-speech-model/speech-model.png" alt-text="Screenshot of uploading datasets which are used to train the speech models.":::
1. Select Upload dataset.
-1. Select either Plain text or Pronunciation from the Dataset type dropdown menu. Every speech model must have a plain text dataset and can optionally have a pronunciation dataset. To learn more about each type, see Customize a speech model with Azure Video Indexer.
+1. Select either Plain text or Pronunciation from the Dataset type dropdown menu. Every speech model must have a plain text dataset and can optionally have a pronunciation dataset. To learn more about each type, see Customize a speech model with Azure AI Video Indexer.
1. Select Browse which will open the File Explorer. You can only use one file in each dataset. Choose the relevant text file.
1. Select a Language for the model. Choose the language that is spoken in the media files you plan on indexing with this model.
1. The Dataset name is pre-populated with the name of the file but you can modify the name.
You'll then see in the Details tab the name, description, language and status of
## How to use a custom language model when indexing a video
-A custom language model isn't used by default for indexing jobs and must be selected during the index upload process. To learn how to index a video, see Upload and index videos with Azure Video Indexer - Azure Video Indexer | Microsoft Learn.
+A custom language model isn't used by default for indexing jobs and must be selected during the index upload process. To learn how to index a video, see Upload and index videos with Azure AI Video Indexer - Azure AI Video Indexer | Microsoft Learn.
During the upload process, you can select the source language of the video. In the Video source language drop-down menu, you'll see your custom model among the language list. The naming of the model is the language of your Language model and the name that you gave it in parentheses. For example:
azure-video-indexer Deploy With Arm Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/deploy-with-arm-template.md
Title: Deploy Azure Video Indexer by using an ARM template
-description: Learn how to create an Azure Video Indexer account by using an Azure Resource Manager (ARM) template.
+ Title: Deploy Azure AI Video Indexer by using an ARM template
+description: Learn how to create an Azure AI Video Indexer account by using an Azure Resource Manager (ARM) template.
Last updated 05/23/2022
-# Tutorial: Deploy Azure Video Indexer by using an ARM template
+# Tutorial: Deploy Azure AI Video Indexer by using an ARM template
[!INCLUDE [Gate notice](./includes/face-limited-access.md)]
-In this tutorial, you'll create an Azure Video Indexer account by using the Azure Resource Manager template (ARM template, which is in preview). The resource will be deployed to your subscription and will create the Azure Video Indexer resource based on parameters defined in the *avam.template* file.
+In this tutorial, you'll create an Azure AI Video Indexer account by using the Azure Resource Manager template (ARM template, which is in preview). The resource will be deployed to your subscription and will create the Azure AI Video Indexer resource based on parameters defined in the *avam.template* file.
> [!NOTE]
-> This sample is *not* for connecting an existing Azure Video Indexer classic account to a Resource Manager-based Azure Video Indexer account.
+> This sample is *not* for connecting an existing Azure AI Video Indexer classic account to a Resource Manager-based Azure AI Video Indexer account.
>
-> For full documentation on the Azure Video Indexer API, visit the [developer portal](https://aka.ms/avam-dev-portal). For the latest API version for *Microsoft.VideoIndexer*, see the [template reference](/azure/templates/microsoft.videoindexer/accounts?tabs=bicep).
+> For full documentation on the Azure AI Video Indexer API, visit the [developer portal](https://aka.ms/avam-dev-portal). For the latest API version for *Microsoft.VideoIndexer*, see the [template reference](/azure/templates/microsoft.videoindexer/accounts?tabs=bicep).
## Prerequisites
You need an Azure Media Services account. You can create one for free through [C
2. Fill in the required parameters. 3. Run the following PowerShell commands:
- * Create a new resource group on the same location as your Azure Video Indexer account by using the [New-AzResourceGroup](/powershell/module/az.resources/new-azresourcegroup) cmdlet.
+ * Create a new resource group on the same location as your Azure AI Video Indexer account by using the [New-AzResourceGroup](/powershell/module/az.resources/new-azresourcegroup) cmdlet.
```powershell
New-AzResourceGroup -Name myResourceGroup -Location eastus
```
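Once the resource group exists, you can deploy the template into it. The following sketch assumes the parameters from the *avam.template* file mentioned earlier; the file name and parameter values shown here are assumptions, so substitute your own.

```powershell
# Minimal sketch: deploy the ARM template into the resource group created above.
# File name and parameter values are assumptions - use the parameters defined
# in your copy of the avam.template file.
New-AzResourceGroupDeployment `
    -Name avam-deployment `
    -ResourceGroupName myResourceGroup `
    -TemplateFile .\avam.template.json `
    -TemplateParameterObject @{
        name                          = '<video-indexer-account-name>'
        mediaServiceAccountResourceId = '<media-services-resource-id>'
    }
```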
You need an Azure Media Services account. You can create one for free through [C
### name
* Type: string
-* Description: The name of the new Azure Video Indexer account.
+* Description: The name of the new Azure AI Video Indexer account.
* Required: true
### location
* Type: string
-* Description: The Azure location where the Azure Video Indexer account should be created.
+* Description: The Azure location where the Azure AI Video Indexer account should be created.
* Required: false
> [!NOTE]
-> You need to deploy your Azure Video Indexer account in the same location (region) as the associated Azure Media Services resource.
+> You need to deploy your Azure AI Video Indexer account in the same location (region) as the associated Azure Media Services resource.
### mediaServiceAccountResourceId
You need an Azure Media Services account. You can create one for free through [C
> A user-assigned managed identity must have at least the Contributor role on the Media Services account before deployment; when using a system-assigned managed identity, the Contributor role should be assigned after deployment.
* Type: string
-* Description: The resource ID of the managed identity that's used to grant access between Azure Media Services resource and the Azure Video Indexer account.
+* Description: The resource ID of the managed identity that's used to grant access between Azure Media Services resource and the Azure AI Video Indexer account.
* Required: true
### tags
* Type: object
-* Description: The array of objects that represents custom user tags on the Azure Video Indexer account.
+* Description: The array of objects that represents custom user tags on the Azure AI Video Indexer account.
* Required: false
## Reference documentation
-If you're new to Azure Video Indexer, see:
+If you're new to Azure AI Video Indexer, see:
-* [The Azure Video Indexer documentation](./index.yml)
-* [The Azure Video Indexer API developer portal](https://api-portal.videoindexer.ai/)
+* [The Azure AI Video Indexer documentation](./index.yml)
+* [The Azure AI Video Indexer API developer portal](https://api-portal.videoindexer.ai/)
-After you complete this tutorial, head to other Azure Video Indexer samples described in [README.md](https://github.com/Azure-Samples/media-services-video-indexer/blob/master/README.md).
+After you complete this tutorial, head to other Azure AI Video Indexer samples described in [README.md](https://github.com/Azure-Samples/media-services-video-indexer/blob/master/README.md).
If you're new to template deployment, see:
## Next steps
-Connect a [classic paid Azure Video Indexer account to a Resource Manager-based account](connect-classic-account-to-arm.md).
+Connect a [classic paid Azure AI Video Indexer account to a Resource Manager-based account](connect-classic-account-to-arm.md).
azure-video-indexer Deploy With Bicep https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/deploy-with-bicep.md
Title: Deploy Azure Video Indexer by using Bicep
-description: Learn how to create an Azure Video Indexer account by using a Bicep file.
+ Title: Deploy Azure AI Video Indexer by using Bicep
+description: Learn how to create an Azure AI Video Indexer account by using a Bicep file.
Last updated 06/06/2022
-# Tutorial: deploy Azure Video Indexer by using Bicep
+# Tutorial: deploy Azure AI Video Indexer by using Bicep
-In this tutorial, you create an Azure Video Indexer account by using [Bicep](../azure-resource-manager/bicep/overview.md).
+In this tutorial, you create an Azure AI Video Indexer account by using [Bicep](../azure-resource-manager/bicep/overview.md).
> [!NOTE]
-> This sample is *not* for connecting an existing Azure Video Indexer classic account to an ARM-based Azure Video Indexer account.
-> For full documentation on Azure Video Indexer API, visit the [developer portal](https://aka.ms/avam-dev-portal) page.
+> This sample is *not* for connecting an existing Azure AI Video Indexer classic account to an ARM-based Azure AI Video Indexer account.
+> For full documentation on Azure AI Video Indexer API, visit the [developer portal](https://aka.ms/avam-dev-portal) page.
> For the latest API version for Microsoft.VideoIndexer, see the [template reference](/azure/templates/microsoft.videoindexer/accounts?tabs=bicep).
## Prerequisites
Check [Azure Quickstart Templates](https://github.com/Azure/azure-quickstart-tem
The location must be the same location as the existing Azure media service. You need to provide values for the parameters:
- * Replace **\<account-name\>** with the name of the new Azure video indexer account.
+ * Replace **\<account-name\>** with the name of the new Azure AI Video Indexer account.
 * Replace **\<managed-identity\>** with the managed identity used to grant access between Azure Media Services (AMS) and the Azure AI Video Indexer account.
 * Replace **\<media-service-account-resource-id\>** with the resource ID of the existing Azure Media Services account.
## Reference documentation
-If you're new to Azure Video Indexer, see:
+If you're new to Azure AI Video Indexer, see:
-* [The Azure Video Indexer documentation](./index.yml)
-* [The Azure Video Indexer developer portal](https://api-portal.videoindexer.ai/)
-* After completing this tutorial, head to other Azure Video Indexer samples, described on [README.md](https://github.com/Azure-Samples/media-services-video-indexer/blob/master/README.md)
+* [The Azure AI Video Indexer documentation](./index.yml)
+* [The Azure AI Video Indexer developer portal](https://api-portal.videoindexer.ai/)
+* After completing this tutorial, head to other Azure AI Video Indexer samples, described on [README.md](https://github.com/Azure-Samples/media-services-video-indexer/blob/master/README.md)
If you're new to Bicep deployment, see:
## Next steps
-[Connect an existing classic paid Azure Video Indexer account to ARM-based account](connect-classic-account-to-arm.md)
+[Connect an existing classic paid Azure AI Video Indexer account to ARM-based account](connect-classic-account-to-arm.md)
azure-video-indexer Detect Textual Logo https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/detect-textual-logo.md
Title: Detect textual logo with Azure Video Indexer
-description: This article gives an overview of Azure Video Indexer textual logo detection.
+ Title: Detect textual logo with Azure AI Video Indexer
+description: This article gives an overview of Azure AI Video Indexer textual logo detection.
Last updated 01/22/2023
# How to detect textual logo (preview) > [!NOTE]
-> Textual logo detection (preview) creation process is currently available through API. The result can be viewed through the Azure Video Indexer [website](https://www.videoindexer.ai/).
+> Textual logo detection (preview) creation process is currently available through API. The result can be viewed through the Azure AI Video Indexer [website](https://www.videoindexer.ai/).
**Textual logo detection** insights are based on the OCR textual detection, which matches a specific predefined text.
In this tutorial we use the example supplied as default:
Insert the following:
-* `Location`: The location of the Azure Video Indexer account.
-* `Account ID`: The ID of the Azure Video Indexer account.
+* `Location`: The location of the Azure AI Video Indexer account.
+* `Account ID`: The ID of the Azure AI Video Indexer account.
* `Access token`: The token, at least at a contributor level permission. The default body is:
The default body is:
|Key|Value|
|||
-|Name|Name of the logo, would be used in the Azure Video Indexer website.|
+|Name|Name of the logo, would be used in the Azure AI Video Indexer website.|
|wikipediaSearchTerm|Used to create a description in the Video Indexer website.|
|text|The text the model will compare to; make sure to add the obvious name as part of the variations (for example, Microsoft).|
|caseSensitive|true/false according to the variation.|
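For illustration, the following sketch builds a logo definition that matches the keys described above and posts it. The request URL and the name of the variations collection are assumptions; copy the exact request URL and body schema from the Create Logo operation in the developer portal.

```powershell
# Minimal sketch: build a logo definition using the keys described above and post it.
# $createLogoUrl and the 'textVariations' wrapper are placeholders/assumptions - copy
# the exact request URL and body schema from the Create Logo operation in the portal.
$accessToken   = '<access-token-with-contributor-permission>'
$createLogoUrl = '<Create-Logo-request-URL-from-the-developer-portal>'

$logo = @{
    name                = 'Microsoft'
    wikipediaSearchTerm = 'Microsoft'
    textVariations      = @(
        @{ text = 'Microsoft'; caseSensitive = $true },
        @{ text = 'microsoft'; caseSensitive = $false }
    )
} | ConvertTo-Json -Depth 5

Invoke-RestMethod -Uri "$($createLogoUrl)?accessToken=$accessToken" `
    -Method Post -ContentType 'application/json' -Body $logo
```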
Use the [Create Logo Group](https://api-portal.videoindexer.ai/api-details#api=O
Insert the following:
-* `Location`: The location of the Azure Video Indexer account.
-* `Account ID`: The ID of the Azure Video Indexer account.
+* `Location`: The location of the Azure AI Video Indexer account.
+* `Account ID`: The ID of the Azure AI Video Indexer account.
* `Access token`: The token, at least at a contributor level permission. > [!div class="mx-imgBorder"]
Use the upload API call:
Specify the following:
-* `Location`: The location of the Azure Video Indexer account.
-* `Account`: The ID of the Azure Video Indexer account.
+* `Location`: The location of the Azure AI Video Indexer account.
+* `Account`: The ID of the Azure AI Video Indexer account.
* `Name`: The name of the media file you're indexing.
* `Language`: `en-US`. For more information, see [Language support](language-support.md).
* `IndexingPreset`: Select **Advanced Video/Audio+video**.
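A minimal sketch of the upload call follows. The query parameter names, including the logo group parameter, are assumptions based on the public Upload Video operation; verify them in the developer portal.

```powershell
# Minimal sketch: upload and index a video with the advanced preset so the
# textual logo model runs. Query parameter names (including logoGroupId) are
# assumptions - confirm them against the Upload Video operation in the portal.
$location    = '<account-location>'
$accountId   = '<account-id>'
$accessToken = '<access-token>'

$uri = "https://api.videoindexer.ai/$location/Accounts/$accountId/Videos" +
       "?accessToken=$accessToken&name=contoso-ad&videoUrl=<SAS-URL-to-video>" +
       "&indexingPreset=AdvancedVideo&language=en-US&logoGroupId=<logo-group-id>"

Invoke-RestMethod -Uri $uri -Method Post
```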
Specify the following:
## Inspect the output
-Assuming the textual logo model has found a match, you'll be able to view the result in the [Azure Video Indexer website](https://www.videoindexer.ai/).
+Assuming the textual logo model has found a match, you'll be able to view the result in the [Azure AI Video Indexer website](https://www.videoindexer.ai/).
### Insights
azure-video-indexer Detected Clothing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/detected-clothing.md
Title: Enable detected clothing feature
-description: Azure Video Indexer detects clothing associated with the person wearing it in the video and provides information such as the type of clothing detected and the timestamp of the appearance (start, end). The API returns the detection confidence level.
+description: Azure AI Video Indexer detects clothing associated with the person wearing it in the video and provides information such as the type of clothing detected and the timestamp of the appearance (start, end). The API returns the detection confidence level.
Last updated 11/15/2021
# Enable detected clothing feature (preview)
-Azure Video Indexer detects clothing associated with the person wearing it in the video and provides information such as the type of clothing detected and the timestamp of the appearance (start, end). The API returns the detection confidence level.
+Azure AI Video Indexer detects clothing associated with the person wearing it in the video and provides information such as the type of clothing detected and the timestamp of the appearance (start, end). The API returns the detection confidence level.
Two examples where this feature could be useful:
The newly added clothing detection feature is available when indexing your file
:::image type="content" source="./media/detected-clothing/index-video.png" alt-text="This screenshot represents an indexing video option":::
-When you choose to see **Insights** of your video on the [Azure Video Indexer](https://www.videoindexer.ai/) website, the People's detected clothing could be viewed from the **Observed People** tracing insight. When choosing a thumbnail of a person the detected clothing became available.
+When you choose to see **Insights** of your video on the [Azure AI Video Indexer](https://www.videoindexer.ai/) website, people's detected clothing can be viewed from the **Observed People** tracing insight. When you choose a thumbnail of a person, the detected clothing becomes available.
:::image type="content" source="./media/detected-clothing/observed-people.png" alt-text="Observed people screenshot":::
-If you are interested to view People's detected clothing in the Timeline of your video on the Azure Video Indexer website, go to **View** -> **Show Insights** and select the **All** option or **View** -> **Custom View** and select **Observed People**.
+If you want to view people's detected clothing in the Timeline of your video on the Azure AI Video Indexer website, go to **View** -> **Show Insights** and select the **All** option, or **View** -> **Custom View** and select **Observed People**.
:::image type="content" source="./media/detected-clothing/observed-person.png" alt-text="Observed person screenshot":::
-Searching for a specific clothing to return all the observed people wearing it is enabled using the search bar of either the **Insights** or from the **Timeline** of your video on the Azure Video Indexer website .
+To return all the observed people wearing a specific item of clothing, use the search bar in either the **Insights** or the **Timeline** of your video on the Azure AI Video Indexer website.
-The following JSON response illustrates what Azure Video Indexer returns when tracing observed people having detected clothing associated:
+The following JSON response illustrates what Azure AI Video Indexer returns when tracing observed people having detected clothing associated:
```json "observedPeople": [
azure-video-indexer Edit Speakers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/edit-speakers.md
Title: Edit speakers in the Azure Video Indexer website
-description: The article demonstrates how to edit speakers with the Azure Video Indexer website.
+ Title: Edit speakers in the Azure AI Video Indexer website
+description: The article demonstrates how to edit speakers with the Azure AI Video Indexer website.
Last updated 11/01/2022
-# Edit speakers with the Azure Video Indexer website
+# Edit speakers with the Azure AI Video Indexer website
-Azure Video Indexer identifies each speaker in a video and attributes each transcribed line to a speaker. The speakers are given a unique identity such as `Speaker #1` and `Speaker #2`. To provide clarity and enrich the transcript quality, you may want to replace the assigned identity with each speaker's actual name. To edit speakers' names, use the edit actions as described in the article.
+Azure AI Video Indexer identifies each speaker in a video and attributes each transcribed line to a speaker. The speakers are given a unique identity such as `Speaker #1` and `Speaker #2`. To provide clarity and enrich the transcript quality, you may want to replace the assigned identity with each speaker's actual name. To edit speakers' names, use the edit actions as described in the article.
-The article demonstrates how to edit speakers with the [Azure Video Indexer website](https://www.videoindexer.ai/). The same editing operations are possible with an API. To use API, call [update video index](https://api-portal.videoindexer.ai/api-details#api=Operations&operation=Update-Video-Index).
+The article demonstrates how to edit speakers with the [Azure AI Video Indexer website](https://www.videoindexer.ai/). The same editing operations are possible with an API. To use the API, call [update video index](https://api-portal.videoindexer.ai/api-details#api=Operations&operation=Update-Video-Index).
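For illustration, the following sketch pulls the index, renames a speaker locally, and pushes the edited index back with the update video index operation. The property path for speaker names is an assumption; inspect your own index JSON before editing.

```powershell
# Minimal sketch: get the current index, rename a speaker locally, and push the
# edited index back with the Update Video Index operation. The property path for
# speaker names is an assumption - inspect your own index JSON before editing.
$location    = '<account-location>'
$accountId   = '<account-id>'
$videoId     = '<video-id>'
$accessToken = '<access-token>'
$base        = "https://api.videoindexer.ai/$location/Accounts/$accountId/Videos/$videoId/Index"

$index = Invoke-RestMethod -Uri "${base}?accessToken=$accessToken" -Method Get

# Rename "Speaker #1" (assumed structure: videos[0].insights.speakers[].name).
foreach ($speaker in $index.videos[0].insights.speakers) {
    if ($speaker.name -eq 'Speaker #1') { $speaker.name = 'Jane Doe' }
}

Invoke-RestMethod -Uri "${base}?accessToken=$accessToken" -Method Put `
    -ContentType 'application/json' -Body ($index | ConvertTo-Json -Depth 20)
```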
> [!NOTE]
-> The addition or editing of a speaker name is applied throughout the transcript of the video but is not applied to other videos in your Azure Video Indexer account.
+> The addition or editing of a speaker name is applied throughout the transcript of the video but is not applied to other videos in your Azure AI Video Indexer account.
## Start editing
-1. Sign in to the [Azure Video Indexer website](https://www.videoindexer.ai/).
+1. Sign in to the [Azure AI Video Indexer website](https://www.videoindexer.ai/).
2. Select a video. 3. Select the **Timeline** tab. 4. Choose to view speakers.
The article demonstrates how to edit speakers with the [Azure Video Indexer webs
## Add a new speaker
-This action allows adding new speakers that were not identified by Azure Video Indexer. To add a new speaker from the website for the selected video, do the following:
+This action allows adding new speakers that were not identified by Azure AI Video Indexer. To add a new speaker from the website for the selected video, do the following:
1. Select the edit mode.
This action allows adding new speakers that were not identified by Azure Video I
## Rename an existing speaker
-This action allows renaming an existing speaker that was identified by Azure Video Indexer. The update applies to all speakers identified by this name.
+This action allows renaming an existing speaker that was identified by Azure AI Video Indexer. The update applies to all speakers identified by this name.
To rename a speaker from the website for the selected video, do the following:
When adding a new speaker or renaming a speaker, the new name should be unique.
## Next steps
-[Insert or remove transcript lines in the Azure Video Indexer website](edit-transcript-lines-portal.md)
+[Insert or remove transcript lines in the Azure AI Video Indexer website](edit-transcript-lines-portal.md)
azure-video-indexer Edit Transcript Lines Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/edit-transcript-lines-portal.md
Title: View and update transcriptions in Azure Video Indexer website
-description: This article explains how to insert or remove a transcript line in the Azure Video Indexer website. It also shows how to view word-level information.
+ Title: View and update transcriptions in Azure AI Video Indexer website
+description: This article explains how to insert or remove a transcript line in the Azure AI Video Indexer website. It also shows how to view word-level information.
Last updated 05/03/2022
# View and update transcriptions
-This article explains how to insert or remove a transcript line in the Azure Video Indexer website. It also shows how to view word-level information.
+This article explains how to insert or remove a transcript line in the Azure AI Video Indexer website. It also shows how to view word-level information.
-## Insert or remove transcript lines in the Azure Video Indexer website
+## Insert or remove transcript lines in the Azure AI Video Indexer website
-This section explains how to insert or remove a transcript line in the [Azure Video Indexer website](https://www.videoindexer.ai/).
+This section explains how to insert or remove a transcript line in the [Azure AI Video Indexer website](https://www.videoindexer.ai/).
### Add new line to the transcript timeline
To consolidate two lines, which you believe should appear as one.
## Examine word-level transcription information
-This section shows how to examine word-level transcription information based on sentences and phrases that Azure Video Indexer identified. Each phrase is broken into words and each word has the following information associated with it
+This section shows how to examine word-level transcription information based on sentences and phrases that Azure AI Video Indexer identified. Each phrase is broken into words and each word has the following information associated with it
|Name|Description|Example|
||||
|Word|A word from a phrase.|"thanks"|
-|Confidence|How confident the Azure Video Indexer that the word is correct.|0.80127704|
+|Confidence|How confident Azure AI Video Indexer is that the word is correct.|0.80127704|
|Offset|The time offset from the beginning of the video to where the word starts.|PT0.86S|
|Duration|The duration of the word.|PT0.28S|
### Get and view the transcript
-1. Sign in on the [Azure Video Indexer website](https://www.videoindexer.ai).
+1. Sign in on the [Azure AI Video Indexer website](https://www.videoindexer.ai).
1. Select a video.
1. In the top-right corner, press arrow down and select **Artifacts (ZIP)**.
1. Download the artifacts.
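As a rough sketch, you can expand the downloaded ZIP and inspect the word-level data programmatically. The artifact file name and JSON property names below are assumptions; open the extracted files to confirm them.

```powershell
# Minimal sketch: expand the downloaded artifacts ZIP and inspect word-level data.
# The file name 'transcript.json' and the $data.words property are assumptions -
# open the extracted files to confirm the actual artifact name and JSON structure.
Expand-Archive -Path .\artifacts.zip -DestinationPath .\artifacts -Force

$data = Get-Content '.\artifacts\transcript.json' -Raw | ConvertFrom-Json

# The documented word-level fields are Word, Confidence, Offset, and Duration.
foreach ($word in $data.words) {
    '{0}  confidence={1}  offset={2}  duration={3}' -f `
        $word.Word, $word.Confidence, $word.Offset, $word.Duration
}
```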
This section shows how to examine word-level transcription information based on
## Next steps
-For updating transcript lines and text using API visit the [Azure Video Indexer API developer portal](https://aka.ms/avam-dev-portal)
+For updating transcript lines and text using API visit the [Azure AI Video Indexer API developer portal](https://aka.ms/avam-dev-portal)
azure-video-indexer Emotions Detection https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/emotions-detection.md
Title: Azure Video Indexer emotions detection overview-
-description: This article gives an overview of an Azure Video Indexer emotions detection.
+ Title: Azure AI Video Indexer emotions detection overview
+
+description: This article gives an overview of an Azure AI Video Indexer emotions detection.
# Emotions detection
-Emotions detection is an Azure Video Indexer AI feature that automatically detects emotions in video's transcript lines. Each sentence can either be detected as "Anger", "Fear", "Joy", "Sad", or none of the above if no other emotion was detected.
+Emotions detection is an Azure AI Video Indexer AI feature that automatically detects emotions in video's transcript lines. Each sentence can either be detected as "Anger", "Fear", "Joy", "Sad", or none of the above if no other emotion was detected.
The model works on text only (labeling emotions in video transcripts). This model doesn't infer the emotional state of people and may not perform well where input is ambiguous or unclear, such as sarcastic remarks. Thus, the model shouldn't be used for things like assessing employee performance or the emotional state of a person.
To display the instances in a JSON file, do the following:
```
-To download the JSON file via the API, use the [Azure Video Indexer developer portal](https://api-portal.videoindexer.ai/).
+To download the JSON file via the API, use the [Azure AI Video Indexer developer portal](https://api-portal.videoindexer.ai/).
> [!NOTE]
> Emotions detection is language independent. However, if the transcript is not in English, it is first translated to English and only then is the model applied. This may reduce the accuracy of emotions detection for non-English languages.
During the emotions detection procedure, the transcript of the video is processe
|Component |Definition |
|||
|Source language |The user uploads the source file for indexing. |
-|Transcription API |The audio file is sent to Cognitive Services and the translated transcribed output is returned. If a language has been specified, it is processed. |
+|Transcription API |The audio file is sent to Azure AI services and the translated transcribed output is returned. If a language has been specified, it is processed. |
|Emotions detection |Each sentence is sent to the emotions detection model. The model produces the confidence level of each emotion. If the confidence level exceeds a specific threshold, and there is no ambiguity between positive and negative emotions, the emotion is detected. In any other case, the sentence is labeled as neutral.|
|Confidence level |The estimated confidence level of the detected emotions is calculated as a range of 0 to 1. The confidence score represents the certainty in the accuracy of the result. For example, an 82% certainty is represented as an 0.82 score. |
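For illustration, the following sketch reads a downloaded insights JSON file and lists emotion instances above a confidence threshold, mirroring the confidence logic described in the table. The property names are assumptions; confirm them against your own insights file.

```powershell
# Minimal sketch: read a downloaded insights JSON file and list emotion instances.
# The property names (videos[0].insights.emotions, type, instances, confidence)
# are assumptions - confirm them against your own insights file.
$insights = Get-Content '.\insights.json' -Raw | ConvertFrom-Json

foreach ($emotion in $insights.videos[0].insights.emotions) {
    foreach ($instance in $emotion.instances) {
        if ($instance.confidence -ge 0.5) {
            '{0}: {1} -> {2} (confidence {3})' -f `
                $emotion.type, $instance.start, $instance.end, $instance.confidence
        }
    }
}
```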
Don't purposely disclose inappropriate media showing young children or family me
`visupport@microsoft.com`
-## Azure Video Indexer insights
+## Azure AI Video Indexer insights
View some other Azure Video Insights:
azure-video-indexer Face Detection https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/face-detection.md
Title: Azure Video Indexer face detection overview-
-description: This article gives an overview of an Azure Video Indexer face detection.
+ Title: Azure AI Video Indexer face detection overview
+
+description: This article gives an overview of an Azure AI Video Indexer face detection.
> [!IMPORTANT] > Face identification, customization and celebrity recognition features access is limited based on eligibility and usage criteria in order to support our Responsible AI principles. Face identification, customization and celebrity recognition features are only available to Microsoft managed customers and partners. Use the [Face Recognition intake form](https://customervoice.microsoft.com/Pages/ResponsePage.aspx?id=v4j5cvGGr0GRqy180BHbR7en2Ais5pxKtso_Pz4b1_xUQjA5SkYzNDM4TkcwQzNEOE1NVEdKUUlRRCQlQCN0PWcu) to apply for access.
-Face detection is an Azure Video Indexer AI feature that automatically detects faces in a media file and aggregates instances of similar faces into the same group. The celebrities recognition module is then run to recognize celebrities. This module covers approximately one million faces and is based on commonly requested data sources. Faces that aren't recognized by Azure Video Indexer are still detected but are left unnamed. Customers can build their own custom [Person modules](/azure/azure-video-indexer/customize-person-model-overview) whereby the Azure Video Indexer recognizes faces that aren't recognized by default.
+Face detection is an Azure AI Video Indexer AI feature that automatically detects faces in a media file and aggregates instances of similar faces into the same group. The celebrities recognition module is then run to recognize celebrities. This module covers approximately one million faces and is based on commonly requested data sources. Faces that aren't recognized by Azure AI Video Indexer are still detected but are left unnamed. Customers can build their own custom [Person modules](/azure/azure-video-indexer/customize-person-model-overview) whereby the Azure AI Video Indexer recognizes faces that aren't recognized by default.
The resulting insights are generated in a categorized list in a JSON file that includes a thumbnail and either name or ID of each face. Clicking a face's thumbnail displays information like the name of the person (if they were recognized), the % of appearances in the video, and their biography if they're a celebrity. It also enables scrolling between the instances in the video.
This article discusses faces detection and the key considerations for making use
|Term|Definition|
|||
|Insight |The information and knowledge derived from the processing and analysis of video and audio files that generate different types of insights and can include detected objects, people, faces, keyframes and translations or transcriptions. |
-|Face recognition  |The analysis of images to identify the faces that appear in the images. This process is implemented via the Azure Cognitive Services Face API. |
+|Face recognition  |The analysis of images to identify the faces that appear in the images. This process is implemented via the Azure AI Face API. |
|Template |Enrolled images of people are converted to templates, which are then used for facial recognition. Machine-interpretable features are extracted from one or more images of an individual to create that individual's template. The enrollment or probe images aren't stored by Face API and the original images can't be reconstructed based on a template. Template quality is a key determinant on the accuracy of your results. |
|Enrollment |The process of enrolling images of individuals for template creation so they can be recognized. When a person is enrolled to a verification system used for authentication, their template is also associated with a primary identifier2 that is used to determine which template to compare with the probe template. High-quality images and images representing natural variations in how a person looks (for instance wearing glasses, not wearing glasses) generate high-quality enrollment templates. |
|Deep search |The ability to retrieve only relevant video and audio files from a video library by searching for specific terms within the extracted insights.|
To see face detection insight in the JSON file, do the following:
] ```
-To download the JSON file via the API, [Azure Video Indexer developer portal](https://api-portal.videoindexer.ai/).
+To download the JSON file via the API, use the [Azure AI Video Indexer developer portal](https://api-portal.videoindexer.ai/).
> [!IMPORTANT]
> When reviewing face detections in the UI, you may not see all faces. We expose only face groups with a confidence of more than 0.5, and the face must appear for a minimum of 4 seconds or 10% of the video duration. Only when these conditions are met will we show the face in the UI and the Insights.json. You can always retrieve all face instances from the Face Artifact file using the API `https://api.videoindexer.ai/{location}/Accounts/{accountId}/Videos/{videoId}/ArtifactUrl[?Faces][&accessToken]`
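For illustration, the following sketch requests a download URL for the Faces artifact using the URL pattern above and then fetches the file. The exact spelling of the artifact-type query parameter is an assumption; confirm it in the developer portal.

```powershell
# Minimal sketch: get a download URL for the Faces artifact and fetch it.
# The artifact-type query parameter is written here as 'type=Faces'; confirm the
# exact parameter name against the developer portal before relying on it.
$location    = '<account-location>'
$accountId   = '<account-id>'
$videoId     = '<video-id>'
$accessToken = '<access-token>'

$artifactUrl = Invoke-RestMethod -Method Get -Uri `
    "https://api.videoindexer.ai/$location/Accounts/$accountId/Videos/$videoId/ArtifactUrl?type=Faces&accessToken=$accessToken"

# The call returns a time-limited URL to the artifact file itself.
Invoke-WebRequest -Uri $artifactUrl -OutFile .\faces-artifact.json
```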
During the Faces Detection procedure, images in a media file are processed, as f
* Summarizing where an actor appears in a movie or reusing footage by deep searching for specific faces in organizational archives for insight on a specific celebrity. * Improved efficiency when creating feature stories at a news or sports agency, for example deep searching for a celebrity or football player in organizational archives.
-* Using faces appearing in the video to create promos, trailers or highlights. Azure Video Indexer can assist by adding keyframes, scene markers, timestamps and labeling so that content editors invest less time reviewing numerous files.  
+* Using faces appearing in the video to create promos, trailers or highlights. Azure AI Video Indexer can assist by adding keyframes, scene markers, timestamps and labeling so that content editors invest less time reviewing numerous files.  
## Considerations when choosing a use case
When used responsibly and carefully face detection is a valuable tool for many i
`visupport@microsoft.com`
-## Azure Video Indexer insights
+## Azure AI Video Indexer insights
- [Audio effects detection](audio-effects-detection.md) - [OCR](ocr.md)
azure-video-indexer Import Content From Trial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/import-content-from-trial.md
When might you want to switch from a trial to a regular account?
## Create a new ARM account for the import
-* First you need to create an account. The regular account needs to have been already created and available before performing the import. Azure Video Indexer accounts are Azure Resource Manager (ARM) based and account creation can be performed through the Azure portal (see [Create an account with the Azure portal](create-account-portal.md)) or API (see [Create accounts with API](/rest/api/videoindexer/stable/accounts)).
+* First you need to create an account. The regular account needs to have been already created and available before performing the import. Azure AI Video Indexer accounts are Azure Resource Manager (ARM) based and account creation can be performed through the Azure portal (see [Create an account with the Azure portal](create-account-portal.md)) or API (see [Create accounts with API](/rest/api/videoindexer/stable/accounts)).
* The target ARM-based account has to be an empty account that has not yet been used to index any media files.
* Import from trial can be performed only once per trial account.
When might you want to switch from a trial to a regular account?
To import your data, follow the steps:
- 1. Go to the [Azure Video Indexer website](https://aka.ms/vi-portal-link)
+ 1. Go to the [Azure AI Video Indexer website](https://aka.ms/vi-portal-link)
 2. Select your trial account and go to the **Account settings** page.
 3. Click the **Import content to an ARM-based account**.
 4. From the dropdown menu choose the ARM-based account you wish to import the data to.
- * If the account ID isn't showing, you can copy and paste the account ID from the Azure portal or from the list of accounts under the User account blade at the top right of the Azure Video Indexer Portal.
+ * If the account ID isn't showing, you can copy and paste the account ID from the Azure portal or from the list of accounts under the User account blade at the top right of the Azure AI Video Indexer Portal.
5. Click **Import content**
azure-video-indexer Indexing Configuration Guide https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/indexing-configuration-guide.md
Title: Indexing configuration guide
-description: This article explains the configuration options of indexing process with Azure Video Indexer.
+description: This article explains the configuration options of indexing process with Azure AI Video Indexer.
Last updated 04/27/2023
# The indexing configuration guide
-It's important to understand the configuration options to index efficiently while ensuring you meet your indexing objectives. When indexing videos, users can use the default settings or adjust many of the settings. Azure Video Indexer allows you to choose between a range of language, indexing, custom models, and streaming settings that have implications on the insights generated, cost, and performance.
+It's important to understand the configuration options to index efficiently while ensuring you meet your indexing objectives. When indexing videos, users can use the default settings or adjust many of the settings. Azure AI Video Indexer allows you to choose between a range of language, indexing, custom models, and streaming settings that have implications on the insights generated, cost, and performance.
-This article explains each of the options and the impact of each option to enable informed decisions when indexing. The article discusses the [Azure Video Indexer website](https://www.videoindexer.ai/) experience but the same options apply when submitting jobs through the API (see the [API guide](video-indexer-use-apis.md)). When indexing large volumes, follow the [at-scale guide](considerations-when-use-at-scale.md).
+This article explains each of the options and the impact of each option to enable informed decisions when indexing. The article discusses the [Azure AI Video Indexer website](https://www.videoindexer.ai/) experience but the same options apply when submitting jobs through the API (see the [API guide](video-indexer-use-apis.md)). When indexing large volumes, follow the [at-scale guide](considerations-when-use-at-scale.md).
The initial upload screen presents options to define the video name, source language, and privacy settings.
All the other setting options appear if you select Advanced options.
## Default settings
-By default, Azure Video Indexer is configured to a **Video source language** of English, **Privacy** of private, **Standard** audio and video setting, and **Streaming quality** of single bitrate.
+By default, Azure AI Video Indexer is configured to a **Video source language** of English, **Privacy** of private, **Standard** audio and video setting, and **Streaming quality** of single bitrate.
> [!TIP] > This topic describes each indexing option in detail.
By default, Azure Video Indexer is configured to a **Video source language** of
Below are a few examples of when using the default setting might not be a good fit:
- If you need insights such as observed people or matched person that are only available through Advanced Video.
-- If you're only using Azure Video Indexer for transcription and translation, indexing of both audio and video isn't required, **Basic** for audio should suffice.
-- If you're consuming Azure Video Indexer insights but have no need to generate a new media file, streaming isn't necessary and **No streaming** should be selected to avoid the encoding job and its associated cost.
+- If you're only using Azure AI Video Indexer for transcription and translation, indexing of both audio and video isn't required, **Basic** for audio should suffice.
+- If you're consuming Azure AI Video Indexer insights but have no need to generate a new media file, streaming isn't necessary and **No streaming** should be selected to avoid the encoding job and its associated cost.
- If a video is primarily in a language that isn't English.
### Video source language
-If you're aware of the language spoken in the video, select the language from the video source language list. If you're unsure of the language of the video, choose **Auto-detect single language**. When uploading and indexing your video, Azure Video Indexer will use language identification (LID) to detect the videos language and generate transcription and insights with the detected language.
+If you're aware of the language spoken in the video, select the language from the video source language list. If you're unsure of the language of the video, choose **Auto-detect single language**. When uploading and indexing your video, Azure AI Video Indexer will use language identification (LID) to detect the video's language and generate transcription and insights with the detected language.
If the video may contain multiple languages and you aren't sure which ones, select **Auto-detect multi-language**. In this case, multi-language (MLID) detection will be applied when uploading and indexing your video. While auto-detect is a great option when the language in your videos varies, there are two points to consider when using LID or MLID: -- LID/MLID don't support all the languages supported by Azure Video Indexer.
+- LID/MLID don't support all the languages supported by Azure AI Video Indexer.
- The transcription is of a higher quality when you pre-select the video's appropriate language. Learn more about [language support and supported languages](language-support.md).
### Privacy
-This option allows you to determine if the insights should only be accessible to users in your Azure Video Indexer account or to anyone with a link.
+This option allows you to determine if the insights should only be accessible to users in your Azure AI Video Indexer account or to anyone with a link.
### Indexing options
-When indexing a video with the default settings, beware each of the audio and video indexing options may be priced differently. See [Azure Video Indexer pricing](https://azure.microsoft.com/pricing/details/video-indexer/) for details.
+When indexing a video with the default settings, be aware that each of the audio and video indexing options may be priced differently. See [Azure AI Video Indexer pricing](https://azure.microsoft.com/pricing/details/video-indexer/) for details.
Below are the indexing type options with details of their insights provided. To modify the indexing type, select **Advanced settings**.
Encoding and streaming operations are performed by and billed by Azure Media Ser
- The creation of a Streaming Endpoint.
- Egress traffic: the volume depends on the number of video playbacks, video playback length, and the video quality (bitrate).
-There are several aspects that influence the total costs of the encoding job. The first is if the encoding is with single or adaptive streaming. This will create either a single output or multiple encoding quality outputs. Each output is billed separately and depends on the source quality of the video you uploaded to Azure Video Indexer.
+There are several aspects that influence the total costs of the encoding job. The first is if the encoding is with single or adaptive streaming. This will create either a single output or multiple encoding quality outputs. Each output is billed separately and depends on the source quality of the video you uploaded to Azure AI Video Indexer.
For Media Services encoding pricing details, see [pricing](https://azure.microsoft.com/pricing/details/media-services/#pricing).
When indexing a video, default streaming settings are applied. Below are the str
|Single bitrate|Adaptive bitrate| No streaming | |||| -- **Single bitrate**: With Single Bitrate, the standard Media Services encoder cost will apply for the output. If the video height is greater than or equal to 720p HD, Azure Video Indexer encodes it with a resolution of 1280 x 720. Otherwise, it's encoded as 640 x 468. The default setting is content-aware encoding. -- **Adaptive bitrate**: With Adaptive Bitrate, if you upload a video in 720p HD single bitrate to Azure Video Indexer and select Adaptive Bitrate, the encoder will use the [AdaptiveStreaming](/rest/api/media/transforms/create-or-update?tabs=HTTP#encodernamedpreset) preset. An output of 720p HD (no output exceeding 720p HD is created) and several lower quality outputs are created (for playback on smaller screens/low bandwidth environments). Each output will use the Media Encoder Standard base price and apply a multiplier for each output. The multiplier is 2x for HD, 1x for non-HD, and 0.25 for audio and billing is per minute of the input video.
+- **Single bitrate**: With Single Bitrate, the standard Media Services encoder cost will apply for the output. If the video height is greater than or equal to 720p HD, Azure AI Video Indexer encodes it with a resolution of 1280 x 720. Otherwise, it's encoded as 640 x 468. The default setting is content-aware encoding.
+- **Adaptive bitrate**: With Adaptive Bitrate, if you upload a video in 720p HD single bitrate to Azure AI Video Indexer and select Adaptive Bitrate, the encoder will use the [AdaptiveStreaming](/rest/api/media/transforms/create-or-update?tabs=HTTP#encodernamedpreset) preset. An output of 720p HD (no output exceeding 720p HD is created) and several lower quality outputs are created (for playback on smaller screens/low bandwidth environments). Each output will use the Media Encoder Standard base price and apply a multiplier for each output. The multiplier is 2x for HD, 1x for non-HD, and 0.25 for audio and billing is per minute of the input video.
**Example**: If you index a video in the US East region that is 40 minutes in length and is 720p HD and you have selected the streaming option of Adaptive Bitrate, 3 outputs will be created - 1 HD (multiplied by 2), 1 SD (multiplied by 1) and 1 audio track (multiplied by 0.25). This will total to (2+1+0.25) * 40 = 130 billable output minutes. Output minutes (standard encoder): 130 x $0.015/minute = $1.95.
-- **No streaming**: Insights are generated but no streaming operation is performed and the video isn't available on the Azure Video Indexer website. When No streaming is selected, you aren't billed for encoding.
+- **No streaming**: Insights are generated but no streaming operation is performed and the video isn't available on the Azure AI Video Indexer website. When No streaming is selected, you aren't billed for encoding.
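As a quick check of the adaptive bitrate example above, the same arithmetic can be scripted. The rate is the example value shown; always confirm current rates on the pricing page.

```powershell
# A small sketch reproducing the adaptive bitrate example above:
# three outputs (HD x2, SD x1, audio x0.25) for a 40-minute input at $0.015/min.
$inputMinutes   = 40
$multipliers    = @(2, 1, 0.25)          # HD, SD, audio track
$pricePerMinute = 0.015                  # standard encoder, example rate

$billableMinutes = ($multipliers | Measure-Object -Sum).Sum * $inputMinutes
$cost            = $billableMinutes * $pricePerMinute

"Billable output minutes: $billableMinutes"   # 130
"Estimated encoding cost: `$$cost"            # 1.95
```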
### Customizing content models
-Azure Video Indexer allows you to customize some of its models to be adapted to your specific use case. These models include brands, language, and person. If you have customized models, this section enables you to configure if one of the created models should be used for the indexing.
+Azure AI Video Indexer allows you to customize some of its models to be adapted to your specific use case. These models include brands, language, and person. If you have customized models, this section enables you to configure if one of the created models should be used for the indexing.
## Next steps
azure-video-indexer Insights Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/insights-overview.md
Title: Azure Video Indexer insights overview
-description: This article gives a brief overview of Azure Video Indexer insights.
+ Title: Azure AI Video Indexer insights overview
+description: This article gives a brief overview of Azure AI Video Indexer insights.
Last updated 03/03/2023
-# Azure Video Indexer insights
+# Azure AI Video Indexer insights
-When a video is indexed, Azure Video Indexer analyzes the video and audio content by running 30+ AI models, generating rich insights. Insights contain an aggregated view of the data: transcripts, optical character recognition elements (OCRs), face, topics, emotions, etc. Once the video is indexed and analyzed, Azure Video Indexer produces a JSON content that contains details of the video insights. For example, each insight type includes instances of time ranges that show when the insight appears in the video.
+When a video is indexed, Azure AI Video Indexer analyzes the video and audio content by running 30+ AI models, generating rich insights. Insights contain an aggregated view of the data: transcripts, optical character recognition elements (OCRs), face, topics, emotions, etc. Once the video is indexed and analyzed, Azure AI Video Indexer produces a JSON content that contains details of the video insights. For example, each insight type includes instances of time ranges that show when the insight appears in the video.
Read details about the following insights here:
Read details about the following insights here:
For information about features and other insights, see: -- [Azure Video Indexer overview](video-indexer-overview.md)
+- [Azure AI Video Indexer overview](video-indexer-overview.md)
- [Transparency note](/legal/azure-video-indexer/transparency-note?context=/azure/azure-video-indexer/context/context)
-Once you [set up](video-indexer-get-started.md) an Azure Video Indexer account (see [account types](accounts-overview.md)) and [upload a video](upload-index-videos.md), you can view insights as described below.
+Once you [set up](video-indexer-get-started.md) an Azure AI Video Indexer account (see [account types](accounts-overview.md)) and [upload a video](upload-index-videos.md), you can view insights as described below.
## Get the insights using the website
-To visually examine the video's insights, press the **Play** button on the video on the [Azure Video Indexer](https://www.videoindexer.ai/) website.
+To visually examine the video's insights, press the **Play** button on the video on the [Azure AI Video Indexer](https://www.videoindexer.ai/) website.
-![Screenshot of the Insights tab in Azure Video Indexer.](./media/video-indexer-output-json/video-indexer-summarized-insights.png)
+![Screenshot of the Insights tab in Azure AI Video Indexer.](./media/video-indexer-output-json/video-indexer-summarized-insights.png)
To get insights produced on the website or the Azure portal:
-1. Browse to the [Azure Video Indexer](https://www.videoindexer.ai/) website and sign in.
+1. Browse to the [Azure AI Video Indexer](https://www.videoindexer.ai/) website and sign in.
1. Find a video whose output you want to examine.
1. Press **Play**.
1. Choose the **Insights** tab.
This API returns a URL only with a link to the specific resource type you reques
[!INCLUDE [artifacts](./includes/artifacts.md)]
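If you prefer to pull the insights programmatically rather than through the website, the following is a minimal sketch using the Get Video Index operation. The property names under the returned object are assumptions; inspect the output for your account.

```powershell
# Minimal sketch: retrieve the full insights JSON for one video with the
# Get Video Index operation, then list the insight types it contains.
# Property names under .videos[0].insights are assumptions - inspect the output.
$location    = '<account-location>'
$accountId   = '<account-id>'
$videoId     = '<video-id>'
$accessToken = '<access-token>'

$index = Invoke-RestMethod -Method Get -Uri `
    "https://api.videoindexer.ai/$location/Accounts/$accountId/Videos/$videoId/Index?accessToken=$accessToken"

# Show which insight collections were produced for this video.
$index.videos[0].insights.PSObject.Properties.Name
```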
-## Examine the Azure Video Indexer output
+## Examine the Azure AI Video Indexer output
-For more information, see [Examine the Azure Video Indexer output]( video-indexer-output-json-v2.md).
+For more information, see [Examine the Azure AI Video Indexer output]( video-indexer-output-json-v2.md).
## Next steps
azure-video-indexer Keywords https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/keywords.md
Title: Azure Video Indexer keywords extraction overview -
-description: An introduction to Azure Video Indexer keywords extraction component responsibly.
+ Title: Azure AI Video Indexer keywords extraction overview
+
+description: An introduction to Azure AI Video Indexer keywords extraction component responsibly.
# Keywords extraction
-Keywords extraction is an Azure Video Indexer AI feature that automatically detects insights on the different keywords discussed in media files. Keywords extraction can extract insights in both single language and multi-language media files. The total number of extracted keywords and their categories are listed in the Insights tab, where clicking a Keyword and then clicking Play Previous or Play Next jumps to the keyword in the media file.
+Keywords extraction is an Azure AI Video Indexer AI feature that automatically detects insights on the different keywords discussed in media files. Keywords extraction can extract insights in both single language and multi-language media files. The total number of extracted keywords and their categories are listed in the Insights tab, where clicking a Keyword and then clicking Play Previous or Play Next jumps to the keyword in the media file.
## Prerequisites
To display the instances in a JSON file, do the following:
```
-To download the JSON file via the API, use the [Azure Video Indexer developer portal](https://api-portal.videoindexer.ai/).
+To download the JSON file via the API, use the [Azure AI Video Indexer developer portal](https://api-portal.videoindexer.ai/).
> [!NOTE] > Keywords extraction is language independent.
During the Keywords procedure, audio and images in a media file are processed, a
|Component|Definition|
|||
|Source language | The user uploads the source file for indexing. |
-|Transcription API |The audio file is sent to Cognitive Services and the translated transcribed output is returned. If a language has been specified it is processed.|
-|OCR of video |Images in a media file are processed using the Computer Vision Read API to extract text, its location, and other insights. |
+|Transcription API |The audio file is sent to Azure AI services and the translated transcribed output is returned. If a language has been specified it is processed.|
+|OCR of video |Images in a media file are processed using the Azure AI Vision Read API to extract text, its location, and other insights. |
|Keywords extraction |An extraction algorithm processes the transcribed audio. The results are then combined with the insights detected in the video during the OCR process. The keywords, and where they appear in the media, are then detected and identified. |
|Confidence level| The estimated confidence level of each keyword is calculated as a range of 0 to 1. The confidence score represents the certainty in the accuracy of the result. For example, an 82% certainty will be represented as an 0.82 score.|
When used responsibly and carefully Keywords is a valuable tool for many industr
`visupport@microsoft.com`
-## Azure Video Indexer insights
+## Azure AI Video Indexer insights
- [Audio effects detection](audio-effects-detection.md) - [Face detection](face-detection.md)
azure-video-indexer Labels Identification https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/labels-identification.md
Title: Azure Video Indexer labels identification overview-
-description: This article gives an overview of an Azure Video Indexer labels identification.
+ Title: Azure AI Video Indexer labels identification overview
+
+description: This article gives an overview of an Azure AI Video Indexer labels identification.
# Labels identification
-Labels identification is an Azure Video Indexer AI feature that identifies visual objects like sunglasses or actions like swimming, appearing in the video footage of a media file. There are many labels identification categories and once extracted, labels identification instances are displayed in the Insights tab and can be translated into over 50 languages. Clicking a Label opens the instance in the media file, select Play Previous or Play Next to see more instances.
+Labels identification is an Azure AI Video Indexer AI feature that identifies visual objects like sunglasses or actions like swimming, appearing in the video footage of a media file. There are many labels identification categories and once extracted, labels identification instances are displayed in the Insights tab and can be translated into over 50 languages. Clicking a Label opens the instance in the media file, select Play Previous or Play Next to see more instances.
## Prerequisites
To display labels identification insights in a JSON file, do the following:
}, ```
-To download the JSON file via the API, [Azure Video Indexer developer portal](https://api-portal.videoindexer.ai/).
+To download the JSON file via the API, [Azure AI Video Indexer developer portal](https://api-portal.videoindexer.ai/).
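As a point of reference, a trimmed `labels` fragment of the downloaded insights JSON looks roughly like the following. The values are illustrative, and real output typically contains more labels, fields, and instances:

```json
"labels": [
  {
    "id": 1,
    "name": "sunglasses",
    "language": "en-US",
    "instances": [
      {
        "confidence": 0.91,
        "adjustedStart": "0:00:07.2",
        "adjustedEnd": "0:00:10.5",
        "start": "0:00:07.2",
        "end": "0:00:10.5"
      }
    ]
  }
]
```

Each label carries a `name` (the detected object or action) and a list of `instances` with the time ranges where it was identified.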
## Labels components
During the Labels procedure, objects in a media file are processed, as follows:
- Carefully consider when using for law enforcement that Labels potentially cannot detect parts of the video. To ensure fair and high-quality decisions, combine Labels with human oversight. - Don't use labels identification for decisions that may have serious adverse impacts. Machine learning models can result in undetected or incorrect classification output. Decisions based on incorrect output could have serious adverse impacts. Additionally, it's advisable to include human review of decisions that have the potential for serious impacts on individuals.
-When used responsibly and carefully, Azure Video Indexer is a valuable tool for many industries. To respect the privacy and safety of others, and to comply with local and global regulations, we recommend the following:
+When used responsibly and carefully, Azure AI Video Indexer is a valuable tool for many industries. To respect the privacy and safety of others, and to comply with local and global regulations, we recommend the following:
- Always respect an individual's right to privacy, and only ingest videos for lawful and justifiable purposes. - Don't purposely disclose inappropriate content about young children or family members of celebrities or other content that may be detrimental or pose a threat to an individual's personal freedom.
When used responsibly and carefully, Azure Video Indexer is a valuable tool for
`visupport@microsoft.com`
-## Azure Video Indexer insights
+## Azure AI Video Indexer insights
- [Audio effects detection](audio-effects-detection.md) - [Face detection](face-detection.md)
azure-video-indexer Language Identification Model https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/language-identification-model.md
Title: Use Azure Video Indexer to auto identify spoken languages
-description: This article describes how the Azure Video Indexer language identification model is used to automatically identifying the spoken language in a video.
+ Title: Use Azure AI Video Indexer to auto identify spoken languages
+description: This article describes how the Azure AI Video Indexer language identification model is used to automatically identify the spoken language in a video.
Last updated 04/12/2020
# Automatically identify the spoken language with language identification model
-Azure Video Indexer supports automatic language identification (LID), which is the process of automatically identifying the spoken language content from audio and sending the media file to be transcribed in the dominant identified language.
+Azure AI Video Indexer supports automatic language identification (LID), which is the process of automatically identifying the spoken language content from audio and sending the media file to be transcribed in the dominant identified language.
-See the list of supported by Azure Video Indexer languages in [supported langues](language-support.md).
+See the list of languages supported by Azure AI Video Indexer in [supported languages](language-support.md).
Make sure to review the [Guidelines and limitations](#guidelines-and-limitations) section below.
Make sure to review the [Guidelines and limitations](#guidelines-and-limitations
When indexing or [re-indexing](https://api-portal.videoindexer.ai/api-details#api=Operations&operation=Re-Index-Video) a video using the API, choose the `auto detect` option in the `sourceLanguage` parameter.
-When using portal, go to your **Account videos** on the [Azure Video Indexer](https://www.videoindexer.ai/) home page and hover over the name of the video that you want to re-index. On the right-bottom corner click the re-index button. In the **Re-index video** dialog, choose *Auto detect* from the **Video source language** drop-down box.
+When using the portal, go to your **Account videos** on the [Azure AI Video Indexer](https://www.videoindexer.ai/) home page and hover over the name of the video that you want to re-index. In the bottom-right corner, click the re-index button. In the **Re-index video** dialog, choose *Auto detect* from the **Video source language** drop-down box.
![auto detect](./media/language-identification-model/auto-detect.png) ## Model output
-Azure Video Indexer transcribes the video according to the most likely language if the confidence for that language is `> 0.6`. If the language can't be identified with confidence, it assumes the spoken language is English.
+Azure AI Video Indexer transcribes the video according to the most likely language if the confidence for that language is `> 0.6`. If the language can't be identified with confidence, it assumes the spoken language is English.
Model dominant language is available in the insights JSON as the `sourceLanguage` attribute (under root/videos/insights). A corresponding confidence score is also available under the `sourceLanguageConfidence` attribute.
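Trimmed to just the two attributes discussed here, the relevant part of the insights JSON looks like this (the values are illustrative):

```json
"videos": [
  {
    "insights": {
      "sourceLanguage": "es-ES",
      "sourceLanguageConfidence": 0.84
    }
  }
]
```

A `sourceLanguageConfidence` above 0.6 means the file is transcribed in the detected `sourceLanguage`; otherwise the transcription falls back to English, as described above.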
Model dominant language is available in the insights JSON as the `sourceLanguage
* Automatic language identification (LID) supports the following languages:
- See the list of supported by Azure Video Indexer languages in [supported langues](language-support.md).
-* Even though Azure Video Indexer supports Arabic (Modern Standard and Levantine), Hindi, and Korean, these languages are not supported in LID.
+ See the list of languages supported by Azure AI Video Indexer in [supported languages](language-support.md).
+* Even though Azure AI Video Indexer supports Arabic (Modern Standard and Levantine), Hindi, and Korean, these languages are not supported in LID.
* If the audio contains languages other than the supported list above, the result is unexpected.
-* If Azure Video Indexer can't identify the language with a high enough confidence (`>0.6`), the fallback language is English.
+* If Azure AI Video Indexer can't identify the language with a high enough confidence (`>0.6`), the fallback language is English.
* Currently, there isn't support for files with mixed-language audio. If the audio contains mixed languages, the result is unexpected. * Low-quality audio may impact the model results. * The model requires at least one minute of speech in the audio.
azure-video-indexer Language Support https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/language-support.md
Title: Language support in Azure Video Indexer
-description: This article provides a comprehensive list of language support by service features in Azure Video Indexer.
+ Title: Language support in Azure AI Video Indexer
+description: This article provides a comprehensive list of language support by service features in Azure AI Video Indexer.
Last updated 03/10/2023
-# Language support in Azure Video Indexer
+# Language support in Azure AI Video Indexer
This article explains Video Indexer's language options and provides a list of language support for each one. It includes the language support for Video Indexer features, translation, language identification, customization, and the language settings of the Video Indexer website.
This section explains the Video Indexer language options and has a table of the
### Column explanations - **Supported source language** – The language spoken in the media file supported for transcription, translation, and search.-- **Language identification** - Whether the language can be automatically detected by Video Indexer when language identification is used for indexing. To learn more, see [Use Azure Video Indexer to auto identify spoken languages](language-identification-model.md) and the **Language Identification** section.-- **Customization (language model)** - Whether the language can be used when customizing language models in Video Indexer. To learn more, see [Customize a language model in Azure Video Indexer](customize-language-model-overview.md).-- **Pronunciation (language model)** - Whether the language can be used to create a pronunciation dataset as part of a custom speech model. To learn more, see [Customize a speech model with Azure Video Indexer](customize-speech-model-overview.md).-- **Website Translation** – Whether the language is supported for translation when using the [Azure Video Indexer website](https://aka.ms/vi-portal-link). Select the translated language in the language drop-down menu.
+- **Language identification** - Whether the language can be automatically detected by Video Indexer when language identification is used for indexing. To learn more, see [Use Azure AI Video Indexer to auto identify spoken languages](language-identification-model.md) and the **Language Identification** section.
+- **Customization (language model)** - Whether the language can be used when customizing language models in Video Indexer. To learn more, see [Customize a language model in Azure AI Video Indexer](customize-language-model-overview.md).
+- **Pronunciation (language model)** - Whether the language can be used to create a pronunciation dataset as part of a custom speech model. To learn more, see [Customize a speech model with Azure AI Video Indexer](customize-speech-model-overview.md).
+- **Website Translation** – Whether the language is supported for translation when using the [Azure AI Video Indexer website](https://aka.ms/vi-portal-link). Select the translated language in the language drop-down menu.
:::image type="content" source="media/language-support/website-translation.png" alt-text="Screenshot showing a menu with download, English and views as menu items. A tooltip is shown as mouseover on the English item and says Translation is set to English." lightbox="media/language-support/website-translation.png":::
This section explains the Video Indexer language options and has a table of the
- Frame patterns (Only to Hebrew as of now) All other insights appear in English when using translation.-- **Website Language** - Whether the language can be selected for use on the [Azure Video Indexer website](https://aka.ms/vi-portal-link). Select the **Settings icon** then select the language in the **Language settings** dropdown.
+- **Website Language** - Whether the language can be selected for use on the [Azure AI Video Indexer website](https://aka.ms/vi-portal-link). Select the **Settings icon** then select the language in the **Language settings** dropdown.
:::image type="content" source="media/language-support/website-language.jpg" alt-text="Screenshot showing a menu with user settings show them all toggled to on." lightbox="media/language-support/website-language.jpg":::
The API returns a list of supported languages with the following values:
When uploading a media file to Video Indexer, you can specify the media file's source language. If indexing a file through the Video Indexer website, this can be done by selecting a language during the file upload. If you're submitting the indexing job through the API, it's done by using the language parameter. The selected language is then used to generate the transcription of the file.
-If you aren't sure of the source language of the media file or it may contain multiple languages, Video Indexer can detect the spoken languages. If you select either Auto-detect single language (LID) or multi-language (MLID) for the media file's source language, the detected language or languages will be used to transcribe the media file. To learn more about LID and MLID, see Use Azure Video Indexer to auto identify spoken languages, see [Automatically identify the spoken language with language identification model](language-identification-model.md) and [Automatically identify and transcribe multi-language content](multi-language-identification-transcription.md)
+If you aren't sure of the source language of the media file, or it may contain multiple languages, Video Indexer can detect the spoken languages. If you select either Auto-detect single language (LID) or multi-language (MLID) for the media file's source language, the detected language or languages will be used to transcribe the media file. To learn more about LID and MLID, see [Automatically identify the spoken language with language identification model](language-identification-model.md) and [Automatically identify and transcribe multi-language content](multi-language-identification-transcription.md).
There's a limit of 10 languages allowed for identification during the indexing of a media file for both LID and MLID. The following are the 9 *default* languages of Language identification (LID) and Multi-language identification (MLID):
If you need to use languages for identification that aren't used by default, you
When you upload a file, the Video Indexer language model cross references 9 languages by default. If there's a match, the model generates the transcription for the file with the detected language.
-Use the language parameter to specify `multi` (MLID) or `auto` (LID) parameters. Use the `customLanguages` parameter to specify up to 10 languages. (The parameter is used only when the language parameter is set to `multi` or `auto`.) To learn more about using the API, see [Use the Azure Video Indexer API](video-indexer-use-apis.md).
+Use the language parameter to specify `multi` (MLID) or `auto` (LID) parameters. Use the `customLanguages` parameter to specify up to 10 languages. (The parameter is used only when the language parameter is set to `multi` or `auto`.) To learn more about using the API, see [Use the Azure AI Video Indexer API](video-indexer-use-apis.md).
## Next steps
azure-video-indexer Limited Access Features https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/limited-access-features.md
Title: Limited Access features of Azure Video Indexer
-description: This article talks about the limited access features of Azure Video Indexer.
+ Title: Limited Access features of Azure AI Video Indexer
+description: This article talks about the limited access features of Azure AI Video Indexer.
Last updated 06/17/2022
-# Limited Access features of Azure Video Indexer
+# Limited Access features of Azure AI Video Indexer
-Our vision is to empower developers and organizations to leverage AI to transform society in positive ways. We encourage responsible AI practices to protect the rights and safety of individuals. Microsoft facial recognition services are Limited Access in order to help prevent the misuse of the services in accordance with our [AI Principles](https://www.microsoft.com/ai/responsible-ai?SilentAuth=1&wa=wsignin1.0&activetab=pivot1%3aprimaryr6) and [facial recognition](https://blogs.microsoft.com/on-the-issues/2018/12/17/six-principles-to-guide-microsofts-facial-recognition-work/) principles. The Face Identify and Celebrity Recognition operations in Azure Video Indexer are Limited Access features that require registration.
+Our vision is to empower developers and organizations to leverage AI to transform society in positive ways. We encourage responsible AI practices to protect the rights and safety of individuals. Microsoft facial recognition services are Limited Access in order to help prevent the misuse of the services in accordance with our [AI Principles](https://www.microsoft.com/ai/responsible-ai?SilentAuth=1&wa=wsignin1.0&activetab=pivot1%3aprimaryr6) and [facial recognition](https://blogs.microsoft.com/on-the-issues/2018/12/17/six-principles-to-guide-microsofts-facial-recognition-work/) principles. The Face Identify and Celebrity Recognition operations in Azure AI Video Indexer are Limited Access features that require registration.
Since the announcement on June 11th, 2020, customers may not use, or allow use of, any Azure facial recognition service by or for a police department in the United States. ## Application process
-Limited Access features of Azure Video Indexer are only available to customers managed by Microsoft, and only for use cases selected at the time of registration. Other Azure Video Indexer features do not require registration to use.
+Limited Access features of Azure AI Video Indexer are only available to customers managed by Microsoft, and only for use cases selected at the time of registration. Other Azure AI Video Indexer features do not require registration to use.
-Customers and partners who wish to use Limited Access features of Azure Video Indexer are required to [submit an intake form](https://aka.ms/facerecognition). Access is subject to Microsoft's sole discretion based on eligibility criteria and a vetting process. Microsoft may require customers and partners to reverify this information periodically.
+Customers and partners who wish to use Limited Access features of Azure AI Video Indexer are required to [submit an intake form](https://aka.ms/facerecognition). Access is subject to Microsoft's sole discretion based on eligibility criteria and a vetting process. Microsoft may require customers and partners to reverify this information periodically.
-The Azure Video Indexer service is made available to customers and partners under the terms governing their subscription to Microsoft Azure Services (including the [Service Specific Terms](https://www.microsoft.com/licensing/terms/productoffering/MicrosoftAzure/MCA#ServiceSpecificTerms)). Please review these terms carefully as they contain important conditions and obligations governing your use of Azure Video Indexer.
+The Azure AI Video Indexer service is made available to customers and partners under the terms governing their subscription to Microsoft Azure Services (including the [Service Specific Terms](https://www.microsoft.com/licensing/terms/productoffering/MicrosoftAzure/MCA#ServiceSpecificTerms)). Please review these terms carefully as they contain important conditions and obligations governing your use of Azure AI Video Indexer.
## Limited access features
The Azure Video Indexer service is made available to customers and partners unde
FAQ about Limited Access can be found [here](https://aka.ms/limitedaccesscogservices).
-If you need help with Azure Video Indexer, find support [here](../cognitive-services/cognitive-services-support-options.md).
+If you need help with Azure AI Video Indexer, find support [here](../ai-services/cognitive-services-support-options.md).
-[Report Abuse](https://msrc.microsoft.com/report/abuse) of Azure Video Indexer.
+[Report Abuse](https://msrc.microsoft.com/report/abuse) of Azure AI Video Indexer.
## Next steps
azure-video-indexer Logic Apps Connector Arm Accounts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/logic-apps-connector-arm-accounts.md
Title: Logic Apps connector with ARM-based AVI accounts
-description: This article shows how to unlock new experiences and monetization opportunities Azure Video Indexer connectors with Logic App and Power Automate with AVI ARM accounts.
+description: This article shows how to unlock new experiences and monetization opportunities using Azure AI Video Indexer connectors with Logic Apps and Power Automate with AVI ARM accounts.
Last updated 11/16/2022
Last updated 11/16/2022
# Logic Apps connector with ARM-based AVI accounts
-Azure Video Indexer (AVI) [REST API](https://api-portal.videoindexer.ai/api-details#api=Operations&operation=Upload-Video) supports both server-to-server and client-to-server communication. The API enables you to integrate video and audio insights into your application logic.
+Azure AI Video Indexer (AVI) [REST API](https://api-portal.videoindexer.ai/api-details#api=Operations&operation=Upload-Video) supports both server-to-server and client-to-server communication. The API enables you to integrate video and audio insights into your application logic.
> [!TIP] > For the latest `api-version`, choose the latest stable version in [our REST documentation](/rest/api/videoindexer/stable/generate).
-To make the integration easier, we support [Logic Apps](https://azure.microsoft.com/services/logic-apps/) and [Power Automate](https://make.powerautomate.com/connectors/shared_videoindexer-v2/video-indexer-v2/) connectors that are compatible with the Azure Video Indexer API.
+To make the integration easier, we support [Logic Apps](https://azure.microsoft.com/services/logic-apps/) and [Power Automate](https://make.powerautomate.com/connectors/shared_videoindexer-v2/video-indexer-v2/) connectors that are compatible with the Azure AI Video Indexer API.
You can use the connectors to set up custom workflows to effectively index and extract insights from a large amount of video and audio files, without writing a single line of code. Furthermore, using the connectors for the integration gives you better visibility on the health of your workflow and an easy way to debug it. > [!TIP] > If you are using a classic AVI account, see [Logic Apps connector with classic-based AVI accounts](logic-apps-connector-tutorial.md).
-## Get started with the Azure Video Indexer connectors
+## Get started with the Azure AI Video Indexer connectors
-To help you get started quickly with the Azure Video Indexer connectors, the example in this article creates Logic App flows. The Logic App and Power Automate capabilities and their editors are almost identical, thus the diagrams and explanations are applicable to both. The example in this article is based on the ARM AVI account. If you're working with a classic account, see [Logic App connectors with classic-based AVI accounts](logic-apps-connector-tutorial.md).
+To help you get started quickly with the Azure AI Video Indexer connectors, the example in this article creates Logic App flows. The Logic App and Power Automate capabilities and their editors are almost identical, thus the diagrams and explanations are applicable to both. The example in this article is based on the ARM AVI account. If you're working with a classic account, see [Logic App connectors with classic-based AVI accounts](logic-apps-connector-tutorial.md).
The "upload and index your video automatically" scenario covered in this article is composed of two different flows that work together. The "two flow" approach is used to support async upload and indexing of larger files effectively.
-* The first flow is triggered when a blob is added or modified in an Azure Storage account. It uploads the new file to Azure Video Indexer with a callback URL to send a notification once the indexing operation completes.
+* The first flow is triggered when a blob is added or modified in an Azure Storage account. It uploads the new file to Azure AI Video Indexer with a callback URL to send a notification once the indexing operation completes.
* The second flow is triggered based on the callback URL and saves the extracted insights back to a JSON file in Azure Storage. The logic apps that you create in this article contain one flow per app. The second section (**Create a new logic app of type consumption**) explains how to connect the two. The second flow stands alone and is triggered by the first one (the section with the callback URL).
The logic apps that you create in this article, contain one flow per app. The se
## Prerequisites - [!INCLUDE [quickstarts-free-trial-note](../../includes/quickstarts-free-trial-note.md)]-- Create an ARM-based [Azure Video Indexer account](create-account-portal.md).
+- Create an ARM-based [Azure AI Video Indexer account](create-account-portal.md).
- Create an Azure Storage account. Keep note of the access key for your Storage account.
- Create two containers: one to store the media files, second to store the insights generated by Azure Video Indexer. In this article, the containers are `videos` and `insights`.
+ Create two containers: one to store the media files, the second to store the insights generated by Azure AI Video Indexer. In this article, the containers are `videos` and `insights`.
## Set up the file upload flow (the first flow)
-This section describes how to set up the first ("file upload") flow. The first flow is triggered when a blob is added or modified in an Azure Storage account. It uploads the new file to Azure Video Indexer with a callback URL to send a notification once the indexing operation completes.
+This section describes how to set up the first ("file upload") flow. The first flow is triggered when a blob is added or modified in an Azure Storage account. It uploads the new file to Azure AI Video Indexer with a callback URL to send a notification once the indexing operation completes.
The following image shows the first flow:
The following image shows the first flow:
1. <a name="access_token"></a>Generate an access token. > [!NOTE]
- > For details about the ARM API and the request/response examples, see [Generate an Azure Video Indexer access token](/rest/api/videoindexer/preview/generate/access-token).
+ > For details about the ARM API and the request/response examples, see [Generate an Azure AI Video Indexer access token](/rest/api/videoindexer/preview/generate/access-token).
> > Press **Try it** to get the correct values for your account.
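For orientation only, the request body and response of the access token call typically look like the sketch below. The `permissionType` and `scope` values are assumptions for a Contributor-level, account-scoped token; use the **Try it** experience mentioned above to confirm the exact contract for your account:

```json
{
  "permissionType": "Contributor",
  "scope": "Account"
}
```

A successful call returns a JSON body with an `accessToken` property, which is what the `body('HTTP')['accessToken']` expression later in this flow extracts:

```json
{
  "accessToken": "<JWT access token returned by the API>"
}
```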
The following image shows the first flow:
Select **Save**. > [!TIP]
- > Before moving to the next step, set up the right permission between the Logic app and the Azure Video Indexer account.
+ > Before moving to the next step, set up the right permission between the Logic app and the Azure AI Video Indexer account.
> > Make sure you have followed the steps to enable the system-assigned managed identity of your Logic Apps. > [!div class="mx-imgBorder"] > :::image type="content" source="./media/logic-apps-connector-arm-accounts/enable-system.png" alt-text="Screenshot of how to enable the system assigned managed identity." lightbox="./media/logic-apps-connector-arm-accounts/enable-system.png":::
- 1. Set up system assigned managed identity for permission on Azure Video Indexer resource.
+ 1. Set up the system-assigned managed identity permission on the Azure AI Video Indexer resource.
- In the Azure portal, go to your Azure Video Indexer resource/account.
+ In the Azure portal, go to your Azure AI Video Indexer resource/account.
1. On the left side blade, select **Access control**. 1. Select **Add** -> **Add role assignment** -> **Contributor** -> **Next** -> **User, group, or service principal** -> **+Select members**.
The following image shows the first flow:
|Key| Value| |-|-|
- |Location| Location of the associated the Azure Video Indexer account.|
- | Account ID| Account ID of the associated Azure Video Indexer account. You can find the **Account ID** in the **Overview** page of your account, in the Azure portal. Or, the **Account settings** tab, left of the [Azure Video Indexer website](https://www.videoindexer.ai/).|
+ |Location| Location of the associated Azure AI Video Indexer account.|
+ | Account ID| Account ID of the associated Azure AI Video Indexer account. You can find the **Account ID** in the **Overview** page of your account, in the Azure portal. Or, the **Account settings** tab, left of the [Azure AI Video Indexer website](https://www.videoindexer.ai/).|
|Access Token| Use the `body('HTTP')['accessToken']` expression to extract the access token in the right format from the previous HTTP call.| | Video Name| Select **List of Files Name** from the dynamic content of **When a blob is added or modified** action. | |Video URL|Select **Web Url** from the dynamic content of **Create SAS URI by path** action.|
The following image shows the first flow:
Select **Save**.
-The completion of the uploading and indexing from the first flow will send an HTTP request with the correct callback URL to trigger the second flow. Then, it will retrieve the insights generated by Azure Video Indexer. In this example, it will store the output of your indexing job in your Azure Storage. However, it's up to you what you do with the output.
+The completion of the uploading and indexing from the first flow will send an HTTP request with the correct callback URL to trigger the second flow. Then, it will retrieve the insights generated by Azure AI Video Indexer. In this example, it will store the output of your indexing job in your Azure Storage. However, it's up to you what you do with the output.
## Create a new logic app of type consumption (the second flow)
Create the second flow, Logic Apps of type consumption. The second flow is t
|Key| Value| |-|-|
- |Location| The Location of the Azure Video Indexer account.|
+ |Location| The Location of the Azure AI Video Indexer account.|
| Account ID| The Video Indexer account ID can be copied from the resource/account **Overview** page in the Azure portal.| | Video ID\*| For Video ID, add dynamic content of type **Expression** and put in the following expression: **triggerOutputs()['queries']['id']**. | | Access Token| From the dynamic content, under the **Parse JSON** section select the **accessToken** that is the output of the parse JSON action. |
azure-video-indexer Logic Apps Connector Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/logic-apps-connector-tutorial.md
Title: The Azure Video Indexer connectors with Logic App and Power Automate.
-description: This tutorial shows how to unlock new experiences and monetization opportunities Azure Video Indexer connectors with Logic App and Power Automate.
+ Title: The Azure AI Video Indexer connectors with Logic App and Power Automate.
+description: This tutorial shows how to unlock new experiences and monetization opportunities using Azure AI Video Indexer connectors with Logic Apps and Power Automate.
Last updated 09/21/2020
-# Use Azure Video Indexer with Logic App and Power Automate
+# Use Azure AI Video Indexer with Logic App and Power Automate
-Azure Video Indexer [REST API](https://api-portal.videoindexer.ai/api-details#api=Operations&operation=Delete-Video) supports both server-to-server and client-to-server communication and enables Azure Video Indexer users to integrate video and audio insights easily into their application logic, unlocking new experiences and monetization opportunities.
+Azure AI Video Indexer [REST API](https://api-portal.videoindexer.ai/api-details#api=Operations&operation=Delete-Video) supports both server-to-server and client-to-server communication and enables Azure AI Video Indexer users to integrate video and audio insights easily into their application logic, unlocking new experiences and monetization opportunities.
To make the integration even easier, we support [Logic Apps](https://azure.microsoft.com/services/logic-apps/) and [Power Automate](https://make.powerautomate.com/connectors/shared_videoindexer-v2/video-indexer-v2/) connectors that are compatible with our API. You can use the connectors to set up custom workflows to effectively index and extract insights from a large amount of video and audio files, without writing a single line of code. Furthermore, using the connectors for your integration gives you better visibility on the health of your workflow and an easy way to debug it.
-To help you get started quickly with the Azure Video Indexer connectors, we will do a walkthrough of an example Logic App and Power Automate solution you can set up. This tutorial shows how to set up flows using Logic Apps. However, the editors and capabilities are almost identical in both solutions, thus the diagrams and explanations are applicable to both Logic Apps and Power Automate.
+To help you get started quickly with the Azure AI Video Indexer connectors, we will do a walkthrough of an example Logic App and Power Automate solution you can set up. This tutorial shows how to set up flows using Logic Apps. However, the editors and capabilities are almost identical in both solutions, thus the diagrams and explanations are applicable to both Logic Apps and Power Automate.
The "upload and index your video automatically" scenario covered in this tutorial is comprised of two different flows that work together.
-* The first flow is triggered when a blob is added or modified in an Azure Storage account. It uploads the new file to Azure Video Indexer with a callback URL to send a notification once the indexing operation completes.
+* The first flow is triggered when a blob is added or modified in an Azure Storage account. It uploads the new file to Azure AI Video Indexer with a callback URL to send a notification once the indexing operation completes.
* The second flow is triggered based on the callback URL and saves the extracted insights back to a JSON file in Azure Storage. This two flow approach is used to support async upload and indexing of larger files effectively. This tutorial is using Logic App to show how to:
This tutorial is using Logic App to show how to:
## Prerequisites
-* To begin with, you will need an Azure Video Indexer account along with [access to the APIs via API key](video-indexer-use-apis.md).
-* You will also need an Azure Storage account. Keep note of the access key for your Storage account. Create two containers – one to store videos in and one to store insights generated by Azure Video Indexer in.
+* To begin with, you will need an Azure AI Video Indexer account along with [access to the APIs via API key](video-indexer-use-apis.md).
+* You will also need an Azure Storage account. Keep note of the access key for your Storage account. Create two containers – one to store videos and one to store the insights generated by Azure AI Video Indexer.
* Next, you will need to open two separate flows on either Logic Apps or Power Automate (depending on which you are using). ## Set up the first flow - file upload
-The first flow is triggered whenever a blob is added in your Azure Storage container. Once triggered, it will create a SAS URI that you can use to upload and index the video in Azure Video Indexer. In this section you will create the following flow.
+The first flow is triggered whenever a blob is added in your Azure Storage container. Once triggered, it will create a SAS URI that you can use to upload and index the video in Azure AI Video Indexer. In this section you will create the following flow.
![File upload flow](./media/logic-apps-connector-tutorial/file-upload-flow.png)
-To set up the first flow, you will need to provide your Azure Video Indexer API Key and Azure Storage credentials.
+To set up the first flow, you will need to provide your Azure AI Video Indexer API Key and Azure Storage credentials.
![Azure blob storage](./media/logic-apps-connector-tutorial/azure-blob-storage.png) ![Connection name and API key](./media/logic-apps-connector-tutorial/connection-name-api-key.png) > [!TIP]
-> If you previously connected an Azure Storage account or Azure Video Indexer account to a Logic App, your connection details are stored and you will be connected automatically. <br/>You can edit the connection by clicking on **Change connection** at the bottom of an Azure Storage (the storage window) or Azure Video Indexer (the player window) action.
+> If you previously connected an Azure Storage account or Azure AI Video Indexer account to a Logic App, your connection details are stored and you will be connected automatically. <br/>You can edit the connection by clicking on **Change connection** at the bottom of an Azure Storage (the storage window) or Azure AI Video Indexer (the player window) action.
-Once you can connect to your Azure Storage and Azure Video Indexer accounts, find and choose the "When a blob is added or modified" trigger in **Logic Apps Designer**.
+Once you can connect to your Azure Storage and Azure AI Video Indexer accounts, find and choose the "When a blob is added or modified" trigger in **Logic Apps Designer**.
Select the container that you will place your video files in.
Also, add a new "Shared Access Protocol" parameter. Choose HttpsOnly for the val
![SAS uri by path](./media/logic-apps-connector-tutorial/sas-uri-by-path.jpg)
-Fill out [your account location](regions.md) and [account ID](./video-indexer-use-apis.md#operational-api-calls) to get the Azure Video Indexer account token.
+Fill out [your account location](regions.md) and [account ID](./video-indexer-use-apis.md#operational-api-calls) to get the Azure AI Video Indexer account token.
![Get account access token](./media/logic-apps-connector-tutorial/account-access-token.png)
Click **Save**, and let's move on to configure the second flow, to extract the
## Set up the second flow - JSON extraction
-The completion of the uploading and indexing from the first flow will send an HTTP request with the correct callback URL to trigger the second flow. Then, it will retrieve the insights generated by Azure Video Indexer. In this example, it will store the output of your indexing job in your Azure Storage. However, it is up to you what you can do with the output.
+The completion of the uploading and indexing from the first flow will send an HTTP request with the correct callback URL to trigger the second flow. Then, it will retrieve the insights generated by Azure AI Video Indexer. In this example, it will store the output of your indexing job in your Azure Storage. However, it is up to you what you can do with the output.
Create the second flow separate from the first one. ![JSON extraction flow](./media/logic-apps-connector-tutorial/json-extraction-flow.png)
-To set up this flow, you will need to provide your Azure Video Indexer API Key and Azure Storage credentials again. You will need to update the same parameters as you did for the first flow.
+To set up this flow, you will need to provide your Azure AI Video Indexer API Key and Azure Storage credentials again. You will need to update the same parameters as you did for the first flow.
For your trigger, you will see an HTTP POST URL field. The URL won't be generated until after you save your flow; however, you will need the URL eventually. We will come back to this.
-Fill out [your account location](regions.md) and [account ID](./video-indexer-use-apis.md#operational-api-calls) to get the Azure Video Indexer account token.
+Fill out [your account location](regions.md) and [account ID](./video-indexer-use-apis.md#operational-api-calls) to get the Azure AI Video Indexer account token.
Go to the "Get Video Index" action and fill out the required parameters. For Video ID, put in the following expression: triggerOutputs()['queries']['id']
-![Azure Video Indexer action info](./media/logic-apps-connector-tutorial/video-indexer-action-info.jpg)
+![Azure AI Video Indexer action info](./media/logic-apps-connector-tutorial/video-indexer-action-info.jpg)
This expression tells the connector to get the Video ID from the output of your trigger. In this case, the output of your trigger will be the output of "Upload video and index" in your first trigger.
Try out your newly created Logic App or Power Automate solution by adding a vide
## Generate captions
-See the following blog for the steps that show [how to generate captions with Azure Video Indexer and Logic Apps](https://techcommunity.microsoft.com/t5/azure-media-services/generating-captions-with-video-indexer-and-logic-apps/ba-p/1672198).
+See the following blog for the steps that show [how to generate captions with Azure AI Video Indexer and Logic Apps](https://techcommunity.microsoft.com/t5/azure-media-services/generating-captions-with-video-indexer-and-logic-apps/ba-p/1672198).
-The article also shows how to index a video automatically by copying it to OneDrive and how to store the captions generated by Azure Video Indexer in OneDrive.
+The article also shows how to index a video automatically by copying it to OneDrive and how to store the captions generated by Azure AI Video Indexer in OneDrive.
## Clean up resources
After you are done with this tutorial, feel free to keep this Logic App or Power
## Next steps
-This tutorial showed just one Azure Video Indexer connectors example. You can use the Azure Video Indexer connectors for any API call provided by Azure Video Indexer. For example: upload and retrieve insights, translate the results, get embeddable widgets and even customize your models. Additionally, you can choose to trigger those actions based on different sources like updates to file repositories or emails sent. You can then choose to have the results update to our relevant infrastructure or application or generate any number of action items.
+This tutorial showed just one Azure AI Video Indexer connectors example. You can use the Azure AI Video Indexer connectors for any API call provided by Azure AI Video Indexer. For example: upload and retrieve insights, translate the results, get embeddable widgets and even customize your models. Additionally, you can choose to trigger those actions based on different sources like updates to file repositories or emails sent. You can then choose to have the results update to our relevant infrastructure or application or generate any number of action items.
> [!div class="nextstepaction"]
-> [Use the Azure Video Indexer API](video-indexer-use-apis.md)
+> [Use the Azure AI Video Indexer API](video-indexer-use-apis.md)
-For additional resources, refer to [Azure Video Indexer](/connectors/videoindexer-v2/)
+For additional resources, refer to [Azure AI Video Indexer](/connectors/videoindexer-v2/)
azure-video-indexer Manage Account Connected To Azure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/manage-account-connected-to-azure.md
Title: Repair the connection to Azure, check errors/warnings
-description: Learn how to manage an Azure Video Indexer account connected to Azure repair the connection, examine errors/warnings.
+description: Learn how to manage an Azure AI Video Indexer account connected to Azure: repair the connection and examine errors/warnings.
Last updated 01/14/2021
# Repair the connection to Azure, examine errors/warnings
-This article demonstrates how to manage an Azure Video Indexer account that's connected to your Azure subscription and an Azure Media Services account.
+This article demonstrates how to manage an Azure AI Video Indexer account that's connected to your Azure subscription and an Azure Media Services account.
> [!NOTE]
-> You have to be the Azure Video Indexer account owner to do account configuration adjustments discussed in this topic.
+> You have to be the Azure AI Video Indexer account owner to do account configuration adjustments discussed in this topic.
## Prerequisites
-Connect your Azure Video Indexer account to Azure, as described in [Connected to Azure](connect-to-azure.md).
+Connect your Azure AI Video Indexer account to Azure, as described in [Connected to Azure](connect-to-azure.md).
Make sure to follow [Prerequisites](connect-to-azure.md#prerequisites-for-connecting-to-azure) and review [Considerations](connect-to-azure.md#azure-media-services-considerations) in the article. ## Examine account settings
-This section examines settings of your Azure Video Indexer account.
+This section examines settings of your Azure AI Video Indexer account.
To view settings: 1. Click on the user icon in the top-right corner and select **Settings**.
- ![Settings in Azure Video Indexer](./media/manage-account-connected-to-azure/select-settings.png)
+ ![Settings in Azure AI Video Indexer](./media/manage-account-connected-to-azure/select-settings.png)
2. On the **Settings** page, select the **Account** tab.
If your account needs some adjustments, you'll see relevant errors and warnings
## Repair the connection to Azure
-In the **Update connection to Azure Media Services** dialog of your [Azure Video Indexer](https://www.videoindexer.ai/) page, you're asked to provide values for the following settings:
+In the **Update connection to Azure Media Services** dialog of your [Azure AI Video Indexer](https://www.videoindexer.ai/) page, you're asked to provide values for the following settings:
|Setting|Description| ||| |Azure subscription ID|The subscription ID can be retrieved from the Azure portal. Click on **All services** in the left panel and search for "subscriptions". Select **Subscriptions** and choose the desired ID from the list of your subscriptions.| |Azure Media Services resource group name|The name for the resource group in which you created the Media Services account.|
-|Application ID|The Azure AD application ID (with permissions for the specified Media Services account) that you created for this Azure Video Indexer account. <br/><br/>To get the app ID, navigate to Azure portal. Under the Media Services account, choose your account and go to **API Access**. Select **Connect to Media Services API with service principal** -> **Azure AD App**. Copy the relevant parameters.|
+|Application ID|The Azure AD application ID (with permissions for the specified Media Services account) that you created for this Azure AI Video Indexer account. <br/><br/>To get the app ID, navigate to Azure portal. Under the Media Services account, choose your account and go to **API Access**. Select **Connect to Media Services API with service principal** -> **Azure AD App**. Copy the relevant parameters.|
|Application key|The Azure AD application key associated with your Media Services account that you specified above. <br/><br/>To get the app key, navigate to Azure portal. Under the Media Services account, choose your account and go to **API Access**. Select **Connect to Media Services API with service principal** -> **Manage application** -> **Certificates & secrets**. Copy the relevant parameters.| ## Errors and warnings
If your account needs some adjustments, you see relevant errors and warnings abo
* Streaming endpoint
- Make sure the underlying Media Services account has the default **Streaming Endpoint** in a started state. Otherwise, you can't watch videos from this Media Services account or in Azure Video Indexer.
+ Make sure the underlying Media Services account has the default **Streaming Endpoint** in a started state. Otherwise, you can't watch videos from this Media Services account or in Azure AI Video Indexer.
* Media reserved units
If your account needs some adjustments, you see relevant errors and warnings abo
## Next steps
-You can programmatically interact with your trial account or Azure Video Indexer accounts that are connected to Azure by following the instructions in: [Use APIs](video-indexer-use-apis.md).
+You can programmatically interact with your trial account or Azure AI Video Indexer accounts that are connected to Azure by following the instructions in: [Use APIs](video-indexer-use-apis.md).
Use the same Azure AD user you used when connecting to Azure.
azure-video-indexer Manage Multiple Tenants https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/manage-multiple-tenants.md
Title: Manage multiple tenants with Azure Video Indexer - Azure
-description: This article suggests different integration options for managing multiple tenants with Azure Video Indexer.
+ Title: Manage multiple tenants with Azure AI Video Indexer - Azure
+description: This article suggests different integration options for managing multiple tenants with Azure AI Video Indexer.
Last updated 05/15/2019
# Manage multiple tenants
-This article discusses different options for managing multiple tenants with Azure Video Indexer. Choose a method that is most suitable for your scenario:
+This article discusses different options for managing multiple tenants with Azure AI Video Indexer. Choose a method that is most suitable for your scenario:
-* Azure Video Indexer account per tenant
-* Single Azure Video Indexer account for all tenants
+* Azure AI Video Indexer account per tenant
+* Single Azure AI Video Indexer account for all tenants
* Azure subscription per tenant
-## Azure Video Indexer account per tenant
+## Azure AI Video Indexer account per tenant
-When using this architecture, an Azure Video Indexer account is created for each tenant. The tenants have full isolation in the persistent and compute layer.
+When using this architecture, an Azure AI Video Indexer account is created for each tenant. The tenants have full isolation in the persistent and compute layer.
-![Azure Video Indexer account per tenant](./media/manage-multiple-tenants/video-indexer-account-per-tenant.png)
+![Azure AI Video Indexer account per tenant](./media/manage-multiple-tenants/video-indexer-account-per-tenant.png)
### Considerations * Customers don't share storage accounts (unless manually configured by the customer). * Customers don't share compute (reserved units) and don't impact processing jobs times of one another.
-* You can easily remove a tenant from the system by deleting the Azure Video Indexer account.
+* You can easily remove a tenant from the system by deleting the Azure AI Video Indexer account.
* There's no ability to share custom models between tenants. Make sure there's no business requirement to share custom models.
-* Harder to manage due to multiple Azure Video Indexer (and associated Media Services) accounts per tenant.
+* Harder to manage due to multiple Azure AI Video Indexer (and associated Media Services) accounts per tenant.
> [!TIP]
-> Create an admin user for your system in [the Azure Video Indexer developer portal](https://api-portal.videoindexer.ai/) and use the Authorization API to provide your tenants the relevant [account access token](https://api-portal.videoindexer.ai/api-details#api=Operations&operation=Get-Account-Access-Token).
+> Create an admin user for your system in [the Azure AI Video Indexer developer portal](https://api-portal.videoindexer.ai/) and use the Authorization API to provide your tenants the relevant [account access token](https://api-portal.videoindexer.ai/api-details#api=Operations&operation=Get-Account-Access-Token).
-## Single Azure Video Indexer account for all users
+## Single Azure AI Video Indexer account for all users
-When using this architecture, the customer is responsible for tenants isolation. All tenants have to use a single Azure Video Indexer account with a single Azure Media Service account. When uploading, searching, or deleting content, the customer will need to filter the proper results for that tenant.
+When using this architecture, the customer is responsible for tenant isolation. All tenants have to use a single Azure AI Video Indexer account with a single Azure Media Services account. When uploading, searching, or deleting content, the customer will need to filter the proper results for that tenant.
-![Single Azure Video Indexer account for all users](./media/manage-multiple-tenants/single-video-indexer-account-for-all-users.png)
+![Single Azure AI Video Indexer account for all users](./media/manage-multiple-tenants/single-video-indexer-account-for-all-users.png)
With this option, customization models (Person, Language, and Brands) can be shared or isolated between tenants by filtering the models by tenant.
When [uploading videos](https://api-portal.videoindexer.ai/api-details#api=Opera
* Ability to share content and customization models between tenants. * One tenant impacts the performance of other tenants.
-* Customer needs to build a complex management layer on top of Azure Video Indexer.
+* Customer needs to build a complex management layer on top of Azure AI Video Indexer.
> [!TIP] > You can use the [priority](upload-index-videos.md) attribute to prioritize tenants jobs. ## Azure subscription per tenant
-When using this architecture, each tenant will have their own Azure subscription. For each user, you'll create a new Azure Video Indexer account in the tenant subscription.
+When using this architecture, each tenant will have their own Azure subscription. For each user, you'll create a new Azure AI Video Indexer account in the tenant subscription.
![Azure subscription per tenant](./media/manage-multiple-tenants/azure-subscription-per-tenant.png) ### Considerations * This is the only option that enables billing separation.
-* This integration has more management overhead than Azure Video Indexer account per tenant. If billing isn't a requirement, it's recommended to use one of the other options described in this article.
+* This integration has more management overhead than Azure AI Video Indexer account per tenant. If billing isn't a requirement, it's recommended to use one of the other options described in this article.
## Next steps
azure-video-indexer Matched Person https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/matched-person.md
[!INCLUDE [Gate notice](./includes/face-limited-access.md)]
-Azure Video Indexer matches observed people that were detected in the video with the corresponding faces ("People" insight). To produce the matching algorithm, the bounding boxes for both the faces and the observed people are assigned spatially along the video. The API returns the confidence level of each matching.
+Azure AI Video Indexer matches observed people that were detected in the video with the corresponding faces ("People" insight). To produce the matching algorithm, the bounding boxes for both the faces and the observed people are assigned spatially along the video. The API returns the confidence level of each matching.
The following are some scenarios that benefit from this feature:
The **Matched person** feature is available when indexing your file by choosing
> [!div class="mx-imgBorder"] > :::image type="content" source="./media/matched-person/index-matched-person-feature.png" alt-text="Advanced video or Advanced video + audio preset":::
-To view the Matched person on the [Azure Video Indexer](https://www.videoindexer.ai/) website, go to **View** -> **Show Insights** -> select the **All** option or **View** -> **Custom View** -> **Mapped Faces**.
+To view the Matched person on the [Azure AI Video Indexer](https://www.videoindexer.ai/) website, go to **View** -> **Show Insights** -> select the **All** option or **View** -> **Custom View** -> **Mapped Faces**.
-When you choose to see insights of your video on the [Azure Video Indexer](https://www.videoindexer.ai/) website, the matched person could be viewed from the **Observed People tracing** insight. When choosing a thumbnail of a person the matched person became available.
+When you choose to see insights of your video on the [Azure AI Video Indexer](https://www.videoindexer.ai/) website, the matched person can be viewed from the **Observed People tracing** insight. When you choose a thumbnail of a person, the matched person becomes available.
> [!div class="mx-imgBorder"] > :::image type="content" source="./media/matched-person/from-observed-people.png" alt-text="View matched people from the Observed People insight"::: If you would like to view people's detected clothing in the **Timeline** of your video on the [Video Indexer website](https://www.videoindexer.ai/), go to **View** -> **Show Insights** and select the **All option** or **View** -> **Custom View** -> **Observed People**.
-Searching for a specific person by name, returning all the appearances of the specific person is enables using the search bar of the Insights of your video on the Azure Video Indexer.
+Searching for a specific person by name and returning all of their appearances is enabled by using the search bar in the Insights of your video on the Azure AI Video Indexer website.
## JSON code sample
-The following JSON response illustrates what Azure Video Indexer returns when tracing observed people having Mapped person associated:
+The following JSON response illustrates what Azure AI Video Indexer returns when tracing observed people that have an associated matched person:
```json "observedPeople": [
azure-video-indexer Monitor Video Indexer Data Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/monitor-video-indexer-data-reference.md
Title: Monitoring Azure Video Indexer data reference
-description: Important reference material needed when you monitor Azure Video Indexer
+ Title: Monitoring Azure AI Video Indexer data reference
+description: Important reference material needed when you monitor Azure AI Video Indexer
Last updated 04/17/2023
<!-- VERSION 2.3 Template for monitoring data reference article for Azure services. This article is support for the main "Monitoring [servicename]" article for the service. -->
-<!-- IMPORTANT STEP 1. Do a search and replace of Azure Video Indexer with the name of your service. That will make the template easier to read -->
+<!-- IMPORTANT STEP 1. Do a search and replace of Azure AI Video Indexer with the name of your service. That will make the template easier to read -->
-# Monitor Azure Video Indexer data reference
+# Monitor Azure AI Video Indexer data reference
-See [Monitoring Azure Video Indexer](monitor-video-indexer.md) for details on collecting and analyzing monitoring data for Azure Video Indexer.
+See [Monitoring Azure AI Video Indexer](monitor-video-indexer.md) for details on collecting and analyzing monitoring data for Azure AI Video Indexer.
## Metrics
-Azure Video Indexer currently does not support any metrics monitoring.
+Azure AI Video Indexer currently does not support any metrics monitoring.
<!-- REQUIRED if you support Metrics. If you don't, keep the section but call that out. Some services are only onboarded to logs. <!-- Please keep headings in this order -->
Azure Video Indexer currently does not support any metrics monitoring.
<!-- Example format. There should be AT LEAST one Resource Provider/Resource Type here. -->
-<!--This section lists all the automatically collected platform metrics collected for Azure Video Indexer.
+<!--This section lists all the automatically collected platform metrics collected for Azure AI Video Indexer.
|Metric Type | Resource Provider / Type Namespace<br/> and link to individual metrics | |-|--|
For more information, see a list of [all platform metrics supported in Azure Mon
## Metric dimensions
-Azure Video Indexer currently does not support any metrics monitoring.
+Azure AI Video Indexer currently does not support any metrics monitoring.
<!-- REQUIRED. Please keep headings in this order --> <!-- If you have metrics with dimensions, outline it here. If you have no dimensions, say so. Questions email azmondocs@microsoft.com --> <!--For more information on what metric dimensions are, see [Multi-dimensional metrics](/azure/azure-monitor/platform/data-platform-metrics#multi-dimensional-metrics).
-Azure Video Indexer does not have any metrics that contain dimensions.
+Azure AI Video Indexer does not have any metrics that contain dimensions.
*OR*
-Azure Video Indexer has the following dimensions associated with its metrics.
+Azure AI Video Indexer has the following dimensions associated with its metrics.
<!-- See https://learn.microsoft.com/azure/storage/common/monitor-storage-reference#metrics-dimensions for an example. Part is copied below. -->
Azure Storage supports following dimensions for metrics in Azure Monitor.
## Resource logs <!-- REQUIRED. Please keep headings in this order -->
-This section lists the types of resource logs you can collect for Azure Video Indexer.
+This section lists the types of resource logs you can collect for Azure AI Video Indexer.
<!-- List all the resource log types you can have and what they are for -->
For reference, see a list of [all resource logs category types supported in Azur
<!-- Example format. There should be AT LEAST one Resource Provider/Resource Type here. -->
-<!--This section lists all the resource log category types collected for Azure Video Indexer.
+<!--This section lists all the resource log category types collected for Azure AI Video Indexer.
|Resource Log Type | Resource Provider / Type Namespace<br/> and link to individual metrics | |-|--|
Resource Provider and Type: [Microsoft.videoindexer/accounts](/azure/azure-monit
| AppServiceAuditLogs | Access Audit Logs | *TODO other important information about this type* | | etc. | | | -->
-### Azure Video Indexer
+### Azure AI Video Indexer
Resource Provider and Type: [Microsoft.VideoIndexer/accounts](/azure/azure-monitor/platform/resource-logs-categories#microsoftvideoindexeraccounts) | Category | Display Name | Additional information | |:|:-||
-| VIAudit | Azure Video Indexer Audit Logs | Logs are produced from both the [Azure Video Indexer website](https://www.videoindexer.ai/) and the REST API. |
-| IndexingLogs | Indexing Logs | Azure Video Indexer indexing logs to monitor all files uploads, indexing and reindexing jobs. |
+| VIAudit | Azure AI Video Indexer Audit Logs | Logs are produced from both the [Azure AI Video Indexer website](https://www.videoindexer.ai/) and the REST API. |
+| IndexingLogs | Indexing Logs | Azure AI Video Indexer indexing logs to monitor all files uploads, indexing and reindexing jobs. |
<!-- --**END Examples** - --> ## Azure Monitor Logs tables <!-- REQUIRED. Please keep heading in this order -->
-This section refers to all of the Azure Monitor Logs Kusto tables relevant to Azure Video Indexer and available for query by Log Analytics.
+This section refers to all of the Azure Monitor Logs Kusto tables relevant to Azure AI Video Indexer and available for query by Log Analytics.
<!--**OPTION 1 EXAMPLE**
This section refers to all of the Azure Monitor Logs Kusto tables relevant to Az
|Resource Type | Notes | |-|--|
-| [Azure Video Indexer](/azure/azure-monitor/reference/tables/tables-resourcetype#azure-video-indexer) | |
+| [Azure AI Video Indexer](/azure/azure-monitor/reference/tables/tables-resourcetype#azure-video-indexer) | |
<!-**OPTION 2 EXAMPLE** -
This section refers to all of the Azure Monitor Logs Kusto tables relevant to Az
NOTE: YOU WILL NOW HAVE TO MANUALLY MAINTAIN THIS SECTION to make sure it stays in sync with the automatically generated list. You can group these sections however you want provided you include the proper links back to the proper tables. -->
-### Azure Video Indexer
+### Azure AI Video Indexer
| Table | Description | Additional information | |:|:-||
-| [VIAudit](/azure/azure-monitor/reference/tables/tables-resourcetype#azure-video-indexer)<!-- (S/azure/azure-monitor/reference/tables/viaudit)--> | <!-- description copied from previous link --> Events produced using the Azure Video Indexer [website](https://aka.ms/VIportal) or the [REST API portal](https://aka.ms/vi-dev-portal). | |
-|VIIndexing| Events produced using the Azure Video Indexer [upload](https://api-portal.videoindexer.ai/api-details#api=Operations&operation=Upload-Video) and [re-index](https://api-portal.videoindexer.ai/api-details#api=Operations&operation=Re-Index-Video) APIs. |
+| [VIAudit](/azure/azure-monitor/reference/tables/tables-resourcetype#azure-video-indexer)<!-- (S/azure/azure-monitor/reference/tables/viaudit)--> | <!-- description copied from previous link --> Events produced using the Azure AI Video Indexer [website](https://aka.ms/VIportal) or the [REST API portal](https://aka.ms/vi-dev-portal). | |
+|VIIndexing| Events produced using the Azure AI Video Indexer [upload](https://api-portal.videoindexer.ai/api-details#api=Operations&operation=Upload-Video) and [re-index](https://api-portal.videoindexer.ai/api-details#api=Operations&operation=Re-Index-Video) APIs. |
<!--| [AzureMetrics](/azure/azure-monitor/reference/tables/azuremetrics) | <!-- description copied from previous link --> <!--Metric data emitted by Azure services that measure their health and performance. | *TODO other important information about this type | | etc. | | |
For a reference of all Azure Monitor Logs / Log Analytics tables, see the [Azure
<!-- REQUIRED. Please keep heading in this order --> <!-- If your service uses the AzureDiagnostics table in Azure Monitor Logs / Log Analytics, list what fields you use and what they are for. Azure Diagnostics is over 500 columns wide with all services using the fields that are consistent across Azure Monitor and then adding extra ones just for themselves. If it uses service specific diagnostic table, refers to that table. If it uses both, put both types of information in. Most services in the future will have their own specific table. If you have questions, contact azmondocs@microsoft.com -->
-<!-- Azure Video Indexer uses the [Azure Diagnostics](/azure/azure-monitor/reference/tables/azurediagnostics) table and the [TODO whatever additional] table to store resource log information. The following columns are relevant.
+<!-- Azure AI Video Indexer uses the [Azure Diagnostics](/azure/azure-monitor/reference/tables/azurediagnostics) table and the [TODO whatever additional] table to store resource log information. The following columns are relevant.
**Azure Diagnostics**
For a reference of all Azure Monitor Logs / Log Analytics tables, see the [Azure
## Activity log <!-- REQUIRED. Please keep heading in this order -->
-The following table lists the operations related to Azure Video Indexer that may be created in the Activity log.
+The following table lists the operations related to Azure AI Video Indexer that may be created in the Activity log.
<!-- Fill in the table with the operations that can be created in the Activity log for the service. --> | Operation | Description |
For more information on the schema of Activity Log entries, see [Activity Log s
## Schemas <!-- REQUIRED. Please keep heading in this order -->
-The following schemas are in use by Azure Video Indexer
+The following schemas are in use by Azure AI Video Indexer.
<!-- List the schema and their usage. This can be for resource logs, alerts, event hub formats, etc depending on what you think is important. -->
The following schemas are in use by Azure Video Indexer
## Next steps <!-- replace below with the proper link to your main monitoring service article -->-- See [Monitoring Azure Video Indexer](monitor-video-indexer.md) for a description of monitoring Azure Video Indexer.
+- See [Monitoring Azure AI Video Indexer](monitor-video-indexer.md) for a description of monitoring Azure AI Video Indexer.
- See [Monitoring Azure resources with Azure Monitor](../azure-monitor/essentials/monitor-azure-resource.md) for details on monitoring Azure resources.
azure-video-indexer Monitor Video Indexer https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/monitor-video-indexer.md
Title: Monitoring Azure Video Indexer
-description: Start here to learn how to monitor Azure Video Indexer
+ Title: Monitoring Azure AI Video Indexer
+description: Start here to learn how to monitor Azure AI Video Indexer
Last updated 12/19/2022
<!-- VERSION 2.2 Template for the main monitoring article for Azure services. Keep the required sections and add/modify any content for any information specific to your service.
-This article should be in your TOC with the name *monitor-Azure Video Indexer.md* and the TOC title "Monitor Azure Video Indexer".
-Put accompanying reference information into an article in the Reference section of your TOC with the name *monitor-Azure Video Indexer-reference.md* and the TOC title "Monitoring data".
+This article should be in your TOC with the name *monitor-Azure AI Video Indexer.md* and the TOC title "Monitor Azure AI Video Indexer".
+Put accompanying reference information into an article in the Reference section of your TOC with the name *monitor-Azure AI Video Indexer-reference.md* and the TOC title "Monitoring data".
Keep the headings in this order. -->
-<!-- IMPORTANT STEP 1. Do a search and replace of Azure Video Indexer with the name of your service. That will make the template easier to read -->
+<!-- IMPORTANT STEP 1. Do a search and replace of Azure AI Video Indexer with the name of your service. That will make the template easier to read -->
-# Monitoring Azure Video Indexer
+# Monitoring Azure AI Video Indexer
<!-- REQUIRED. Please keep headings in this order --> <!-- Most services can use this section unchanged. Add to it if there are any unique charges if your service has significant monitoring beyond Azure Monitor. --> When you have critical applications and business processes relying on Azure resources, you want to monitor those resources for their availability, performance, and operation.
-This article describes the monitoring data generated by Azure Video Indexer. Azure Video Indexer uses [Azure Monitor](../azure-monitor/overview.md). If you are unfamiliar with the features of Azure Monitor common to all Azure services that use it, read [Monitoring Azure resources with Azure Monitor](../azure-monitor/essentials/monitor-azure-resource.md).
+This article describes the monitoring data generated by Azure AI Video Indexer. Azure AI Video Indexer uses [Azure Monitor](../azure-monitor/overview.md). If you are unfamiliar with the features of Azure Monitor common to all Azure services that use it, read [Monitoring Azure resources with Azure Monitor](../azure-monitor/essentials/monitor-azure-resource.md).
<!-- Optional diagram showing monitoring for your service. -->
This article describes the monitoring data generated by Azure Video Indexer. Azu
<!-- OPTIONAL. Please keep headings in this order --> <!-- If you don't have an over page, remove this section. If you keep it, edit it if there are any unique charges if your service has significant monitoring beyond Azure Monitor. -->
-<!--The **Overview** page in the Azure portal for each *Azure Video Indexer account* includes *[provide a description of the data in the Overview page.]*.
+<!--The **Overview** page in the Azure portal for each *Azure AI Video Indexer account* includes *[provide a description of the data in the Overview page.]*.
-## *Azure Video Indexer* insights -->
+## *Azure AI Video Indexer* insights -->
Some services in Azure have a special focused pre-built monitoring dashboard in the Azure portal that provides a starting point for monitoring your service. These special dashboards are called "insights".
Some services in Azure have a special focused pre-built monitoring dashboard in
## Monitoring data <!-- REQUIRED. Please keep headings in this order -->
-Azure Video Indexer collects the same kinds of monitoring data as other Azure resources that are described in [Monitoring data from Azure resources](../azure-monitor/essentials/monitor-azure-resource.md#monitoring-data-from-azure-resources).
+Azure AI Video Indexer collects the same kinds of monitoring data as other Azure resources that are described in [Monitoring data from Azure resources](../azure-monitor/essentials/monitor-azure-resource.md#monitoring-data-from-azure-resources).
-See [Monitoring *Azure Video Indexer* data reference](monitor-video-indexer-data-reference.md) for detailed information on the metrics and logs metrics created by Azure Video Indexer.
+See [Monitoring *Azure AI Video Indexer* data reference](monitor-video-indexer-data-reference.md) for detailed information on the metrics and logs metrics created by Azure AI Video Indexer.
<!-- If your service has additional non-Azure Monitor monitoring data then outline and refer to that here. Also include that information in the data reference as appropriate. -->
Resource Logs are not collected and stored until you create a diagnostic setting
<!-- Include any additional information on collecting logs. The number of things that diagnostics settings control is expanding -->
-See [Create diagnostic setting to collect platform logs and metrics in Azure](/azure/azure-monitor/platform/diagnostic-settings) for the detailed process for creating a diagnostic setting using the Azure portal, CLI, or PowerShell. When you create a diagnostic setting, you specify which categories of logs to collect. The categories for *Azure Video Indexer* are listed in [Azure Video Indexer monitoring data reference](monitor-video-indexer-data-reference.md#resource-logs).
+See [Create diagnostic setting to collect platform logs and metrics in Azure](/azure/azure-monitor/platform/diagnostic-settings) for the detailed process for creating a diagnostic setting using the Azure portal, CLI, or PowerShell. When you create a diagnostic setting, you specify which categories of logs to collect. The categories for *Azure AI Video Indexer* are listed in [Azure AI Video Indexer monitoring data reference](monitor-video-indexer-data-reference.md#resource-logs).
| Category | Description | |:|:|
The metrics and logs you can collect are discussed in the following sections.
## Analyzing metrics
-Currently Azure Video Indexer does not support monitoring of metrics.
+Currently Azure AI Video Indexer does not support monitoring of metrics.
<!-- REQUIRED. Please keep headings in this order If you don't support metrics, say so. Some services may be only onboarded to logs -->
-<!--You can analyze metrics for *Azure Video Indexer* with metrics from other Azure services using metrics explorer by opening **Metrics** from the **Azure Monitor** menu. See [Getting started with Azure Metrics Explorer](../azure-monitor/essentials/metrics-getting-started.md) for details on using this tool.
+<!--You can analyze metrics for *Azure AI Video Indexer* with metrics from other Azure services using metrics explorer by opening **Metrics** from the **Azure Monitor** menu. See [Getting started with Azure Metrics Explorer](../azure-monitor/essentials/metrics-getting-started.md) for details on using this tool.
<!-- Point to the list of metrics available in your monitor-service-reference article. -->
-<!--For a list of the platform metrics collected for Azure Video Indexer, see [Monitoring *Azure Video Indexer* data reference metrics](monitor-service-reference.md#metrics)
+<!--For a list of the platform metrics collected for Azure AI Video Indexer, see [Monitoring *Azure AI Video Indexer* data reference metrics](monitor-service-reference.md#metrics)
<!-- REQUIRED for services that use a Guest OS. That includes agent based services like Virtual Machines, Service Fabric, Cloud Services, and perhaps others. Delete the section otherwise --> <!--Guest OS metrics must be collected by agents running on the virtual machines hosting your service. <!-- Add additional information as appropriate -->
If you don't support resource logs, say so. Some services may be only onboarded
Data in Azure Monitor Logs is stored in tables where each table has its own set of unique properties.
-All resource logs in Azure Monitor have the same fields followed by service-specific fields. The common schema is outlined in [Azure Monitor resource log schema](../azure-monitor/essentials/resource-logs-schema.md) The schema for Azure Video Indexer resource logs is found in the [Azure Video Indexer Data Reference](monitor-video-indexer-data-reference.md#schemas)
+All resource logs in Azure Monitor have the same fields followed by service-specific fields. The common schema is outlined in [Azure Monitor resource log schema](../azure-monitor/essentials/resource-logs-schema.md). The schema for Azure AI Video Indexer resource logs is found in the [Azure AI Video Indexer Data Reference](monitor-video-indexer-data-reference.md#schemas).
The [Activity log](../azure-monitor/essentials/activity-log.md) is a type of platform log in Azure that provides insight into subscription-level events. You can view it independently or route it to Azure Monitor Logs, where you can do much more complex queries using Log Analytics.
-For a list of the types of resource logs collected for Azure Video Indexer, see [Monitoring Azure Video Indexer data reference](monitor-video-indexer-data-reference.md#resource-logs)
+For a list of the types of resource logs collected for Azure AI Video Indexer, see [Monitoring Azure AI Video Indexer data reference](monitor-video-indexer-data-reference.md#resource-logs)
-For a list of the tables used by Azure Monitor Logs and queryable by Log Analytics, see [Monitoring Azure Video Indexer data reference](monitor-video-indexer-data-reference.md#azure-monitor-logs-tables)
+For a list of the tables used by Azure Monitor Logs and queryable by Log Analytics, see [Monitoring Azure AI Video Indexer data reference](monitor-video-indexer-data-reference.md#azure-monitor-logs-tables)
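If you prefer to query these tables programmatically rather than in the Log Analytics UI, the following is a minimal sketch assuming the `azure-monitor-query` and `azure-identity` Python packages and a Log Analytics workspace that already receives your diagnostic logs; the `OperationName` column is an assumption, so adjust the query to the schema you see in your workspace.

```python
from datetime import timedelta

from azure.identity import DefaultAzureCredential
from azure.monitor.query import LogsQueryClient

# Placeholder workspace ID; use the Log Analytics workspace that receives
# your Azure AI Video Indexer diagnostic logs.
WORKSPACE_ID = "<log-analytics-workspace-id>"

client = LogsQueryClient(DefaultAzureCredential())

# Count recent audit events by operation. The OperationName column is an
# assumption; check the VIAudit schema in your workspace before relying on it.
QUERY = """
VIAudit
| summarize Count = count() by OperationName
| order by Count desc
"""

response = client.query_workspace(WORKSPACE_ID, QUERY, timespan=timedelta(days=7))
for table in response.tables:
    for row in table.rows:
        print(row)
```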
<!-- Optional: Call out additional information to help your customers. For example, you can include additional information here about log usage or what logs are most important. Remember that the UI is subject to change quite often so you will need to maintain these screenshots yourself if you add them in. -->
For a list of the tables used by Azure Monitor Logs and queryable by Log Analyti
<!-- Add sample Log Analytics Kusto queries for your service. --> > [!IMPORTANT]
-> When you select **Logs** from the Azure Video Indexer account menu, Log Analytics is opened with the query scope set to the current Azure Video Indexer account. This means that log queries will only include data from that resource. If you want to run a query that includes data from other Azure Video Indexer account or data from other Azure services, select **Logs** from the **Azure Monitor** menu. See [Log query scope and time range in Azure Monitor Log Analytics](../azure-monitor/logs/scope.md) for details.
+> When you select **Logs** from the Azure AI Video Indexer account menu, Log Analytics is opened with the query scope set to the current Azure AI Video Indexer account. This means that log queries will only include data from that resource. If you want to run a query that includes data from other Azure AI Video Indexer accounts or data from other Azure services, select **Logs** from the **Azure Monitor** menu. See [Log query scope and time range in Azure Monitor Log Analytics](../azure-monitor/logs/scope.md) for details.
<!-- REQUIRED: Include queries that are helpful for figuring out the health and state of your service. Ideally, use some of these queries in the alerts section. It's possible that some of your queries may be in the Log Analytics UI (sample or example queries). Check if so. -->
-Following are queries that you can use to help you monitor your Azure Video Indexer account.
+Following are queries that you can use to help you monitor your Azure AI Video Indexer account.
<!-- Put in a code section here. --> ```kusto
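// Illustrative example only: count Azure AI Video Indexer audit events per day
// over the last 30 days. Only the built-in TimeGenerated column is used here;
// adjust the query to the VIAudit schema you see in your Log Analytics workspace.
VIAudit
| where TimeGenerated > ago(30d)
| summarize NumberOfAuditEvents = count() by bin(TimeGenerated, 1d)
| order by TimeGenerated asc
```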
Azure Monitor alerts proactively notify you when important conditions are found
<!-- If you are creating or running an application which run on <*service*> [Azure Monitor Application Insights](../azure-monitor/overview.md#application-insights) may offer additional types of alerts. <!-- end -->
-The following table lists common and recommended alert rules for Azure Video Indexer.
+The following table lists common and recommended alert rules for Azure AI Video Indexer.
<!-- Fill in the table with metric and log alerts that would be valuable for your service. Change the format as necessary to make it more readable --> | Alert type | Condition | Description |
VIAudit
<!-- Add additional links. You can change the wording of these and add more if useful. --> -- See [Monitoring Azure Video Indexer data reference](monitor-video-indexer-data-reference.md) for a reference of the metrics, logs, and other important values created by Azure Video Indexer account.
+- See [Monitoring Azure AI Video Indexer data reference](monitor-video-indexer-data-reference.md) for a reference of the metrics, logs, and other important values created by your Azure AI Video Indexer account.
- See [Monitoring Azure resources with Azure Monitor](../azure-monitor/essentials/monitor-azure-resource.md) for details on monitoring Azure resources.
azure-video-indexer Multi Language Identification Transcription https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/multi-language-identification-transcription.md
Title: Automatically identify and transcribe multi-language content with Azure Video Indexer
-description: This topic demonstrates how to automatically identify and transcribe multi-language content with Azure Video Indexer.
+ Title: Automatically identify and transcribe multi-language content with Azure AI Video Indexer
+description: This topic demonstrates how to automatically identify and transcribe multi-language content with Azure AI Video Indexer.
# Automatically identify and transcribe multi-language content
-Azure Video Indexer supports automatic language identification and transcription in multi-language content. This process involves automatically identifying the spoken language in different segments from audio, sending each segment of the media file to be transcribed and combine the transcription back to one unified transcription.
+Azure AI Video Indexer supports automatic language identification and transcription in multi-language content. This process involves automatically identifying the spoken language in different segments of the audio, sending each segment of the media file to be transcribed, and combining the transcriptions back into one unified transcription.
## Choosing multilingual identification when indexing with the portal You can choose **multi-language detection** when uploading and indexing your video. Alternatively, you can choose **multi-language detection** when re-indexing your video. The following steps describe how to reindex:
-1. Browse to the [Azure Video Indexer](https://vi.microsoft.com/) website and sign in.
+1. Browse to the [Azure AI Video Indexer](https://vi.microsoft.com/) website and sign in.
1. Go to the **Library** page and hover over the name of the video that you want to reindex. 1. On the right-bottom corner, click the **Re-index video** button. 1. In the **Re-index video** dialog, choose **multi-language detection** from the **Video source language** drop-down box.
Additionally, each instance in the transcription section will include the langua
## Next steps
-[Azure Video Indexer overview](video-indexer-overview.md)
+[Azure AI Video Indexer overview](video-indexer-overview.md)
azure-video-indexer Named Entities https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/named-entities.md
Title: Azure Video Indexer named entities extraction overview -
-description: An introduction to Azure Video Indexer named entities extraction component responsibly.
+ Title: Azure AI Video Indexer named entities extraction overview
+
+description: An introduction to Azure AI Video Indexer named entities extraction component responsibly.
# Named entities extraction
-Named entities extraction is an Azure Video Indexer AI feature that uses Natural Language Processing (NLP) to extract insights on the locations, people and brands appearing in audio and images in media files. Named entities extraction is automatically used with Transcription and OCR and its insights are based on those extracted during these processes. The resulting insights are displayed in the **Insights** tab and are filtered into locations, people and brand categories. Clicking a named entity, displays its instance in the media file. It also displays a description of the entity and a Find on Bing link of recognizable entities.
+Named entities extraction is an Azure AI Video Indexer AI feature that uses Natural Language Processing (NLP) to extract insights on the locations, people, and brands appearing in audio and images in media files. Named entities extraction is automatically used with Transcription and OCR, and its insights are based on those extracted during these processes. The resulting insights are displayed in the **Insights** tab and are filtered into locations, people, and brand categories. Clicking a named entity displays its instances in the media file. It also displays a description of the entity and a Find on Bing link for recognizable entities.
## Prerequisites
To display named entities extraction insights in a JSON file, do the following:
}, ```
-To download the JSON file via the API, use the [Azure Video Indexer developer portal](https://api-portal.videoindexer.ai/).
+To download the JSON file via the API, use the [Azure AI Video Indexer developer portal](https://api-portal.videoindexer.ai/).
## Named entities extraction components
During the named entities extraction procedure, the media file is processed, as
|Component|Definition| ||| |Source file | The user uploads the source file for indexing. |
-|Text extraction |- The audio file is sent to Speech Services API to extract the transcription.<br/>- Sampled frames are sent to the Computer Vision API to extract OCR. |
+|Text extraction |- The audio file is sent to Speech Services API to extract the transcription.<br/>- Sampled frames are sent to the Azure AI Vision API to extract OCR. |
|Analytics |The insights are then sent to the Text Analytics API to extract the entities. For example, Microsoft, Paris, or a person's name like Paul or Sarah. |
|Processing and consolidation | The results are then processed. Where applicable, Wikipedia links are added and brands are identified via the Video Indexer built-in and customizable branding lists. |
|Confidence value | The estimated confidence level of each named entity is calculated as a range of 0 to 1. The confidence score represents the certainty in the accuracy of the result. For example, an 82% certainty is represented as a 0.82 score.|
Confidence value The estimated confidence level of each named entity is calculat
- Carefully consider that, when used for law enforcement, named entities extraction may not always detect parts of the audio. To ensure fair and high-quality decisions, combine named entities with human oversight. - Don't use named entities for decisions that may have serious adverse impacts. Machine learning models that extract text can result in undetected or incorrect text output. Decisions based on incorrect output could have serious adverse impacts. Additionally, it's advisable to include human review of decisions that have the potential for serious impacts on individuals.
-When used responsibly and carefully Azure Video Indexer is a valuable tool for many industries. To respect the privacy and safety of others, and to comply with local and global regulations, we recommend the following:
+When used responsibly and carefully, Azure AI Video Indexer is a valuable tool for many industries. To respect the privacy and safety of others, and to comply with local and global regulations, we recommend the following:
- Always respect an individual's right to privacy, and only ingest videos for lawful and justifiable purposes. - Don't purposely disclose inappropriate content about young children or family members of celebrities or other content that may be detrimental or pose a threat to an individual's personal freedom.
When used responsibly and carefully Azure Video Indexer is a valuable tool for m
`visupport@microsoft.com`
-## Azure Video Indexer insights
+## Azure AI Video Indexer insights
- [Audio effects detection](audio-effects-detection.md) - [Face detection](face-detection.md)
azure-video-indexer Network Security https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/network-security.md
Title: How to enable network security
-description: This article gives an overview of the Azure Video Indexer network security options.
+description: This article gives an overview of the Azure AI Video Indexer network security options.
Last updated 12/19/2022
-# NSG service tags for Azure Video Indexer
+# NSG service tags for Azure AI Video Indexer
-Azure Video Indexer is a service hosted on Azure. In some cases the service needs to interact with other services in order to index video files (for example, a Storage account) or when you orchestrate indexing jobs against Azure Video Indexer API endpoint using your own service hosted on Azure (for example, AKS, Web Apps, Logic Apps, Functions).
+Azure AI Video Indexer is a service hosted on Azure. In some cases the service needs to interact with other services in order to index video files (for example, a Storage account) or when you orchestrate indexing jobs against Azure AI Video Indexer API endpoint using your own service hosted on Azure (for example, AKS, Web Apps, Logic Apps, Functions).
> [!NOTE] > If you are already using the "AzureVideoAnalyzerForMedia" Network Service Tag, you may experience issues with your networking security group starting 9 January 2023. This is because we are moving to a new Security Tag label "VideoIndexer". The mitigation is to remove the old "AzureVideoAnalyzerForMedia" tag from your configuration and deployment scripts and start using the "VideoIndexer" tag going forward.
-Use [Network Security Groups with Service Tags](../virtual-network/service-tags-overview.md) to limit access to your resources on a network level. A service tag represents a group of IP address prefixes from a given Azure service, in this case Azure Video Indexer. Microsoft manages the address prefixes grouped by the service tag and automatically updates the service tag as addresses change in our backend, minimizing the complexity of frequent updates to network security rules by the customer.
+Use [Network Security Groups with Service Tags](../virtual-network/service-tags-overview.md) to limit access to your resources on a network level. A service tag represents a group of IP address prefixes from a given Azure service, in this case Azure AI Video Indexer. Microsoft manages the address prefixes grouped by the service tag and automatically updates the service tag as addresses change in our backend, minimizing the complexity of frequent updates to network security rules by the customer.
> [!NOTE] > The NSG service tags feature is not available for trial and classic accounts. To update to an ARM account, see [Connect a classic account to ARM](connect-classic-account-to-arm.md) or [Import content from a trial account](import-content-from-trial.md).
Use [Network Security Groups with Service Tags](../virtual-network/service-tags-
Currently we support the global service tag option for using service tags in your network security groups:
-**Use a single global VideoIndexer service tag**: This option opens your virtual network to all IP addresses that the Azure Video Indexer service uses across all regions we offer our service. This method will allow for all IP addresses owned and used by Azure Video Indexer to reach your network resources behind the NSG.
+**Use a single global VideoIndexer service tag**: This option opens your virtual network to all IP addresses that the Azure AI Video Indexer service uses across all regions where we offer our service. This method allows all IP addresses owned and used by Azure AI Video Indexer to reach your network resources behind the NSG.
> [!NOTE] > Currently we do not support IPs allocated to our services in the Switzerland North Region. These will be added soon. If your account is located in this region you cannot use Service Tags in your NSG today since these IPs are not in the Service Tag list and will be rejected by the NSG rule.
-## Use a single global Azure Video Indexer service tag
+## Use a single global Azure AI Video Indexer service tag
-The easiest way to begin using service tags with your Azure Video Indexer account is to add the global tag `VideoIndexer` to an NSG rule.
+The easiest way to begin using service tags with your Azure AI Video Indexer account is to add the global tag `VideoIndexer` to an NSG rule.
1. From the [Azure portal](https://portal.azure.com/), select your network security group. 1. Under **Settings**, select **Inbound security rules**, and then select **+ Add**.
The easiest way to begin using service tags with your Azure Video Indexer accoun
:::image type="content" source="./media/network-security/nsg-service-tag-videoindexer.png" alt-text="Add a service tag from the Azure portal":::
-This tag contains the IP addresses of Azure Video Indexer services for all regions where available. The tag will ensure that your resource can communicate with the Azure Video Indexer services no matter where it's created.
+This tag contains the IP addresses of Azure AI Video Indexer services for all regions where available. The tag will ensure that your resource can communicate with the Azure AI Video Indexer services no matter where it's created.
## Using Azure CLI
azure-video-indexer Observed Matched People https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/observed-matched-people.md
Title: Azure Video Indexer observed people tracking & matched faces overview -
-description: An introduction to Azure Video Indexer observed people tracking & matched faces component responsibly.
+ Title: Azure AI Video Indexer observed people tracking & matched faces overview
+
+description: An introduction to Azure AI Video Indexer observed people tracking & matched faces component responsibly.
> [!IMPORTANT] > Face identification, customization and celebrity recognition features access is limited based on eligibility and usage criteria in order to support our Responsible AI principles. Face identification, customization and celebrity recognition features are only available to Microsoft managed customers and partners. Use the [Face Recognition intake form](https://customervoice.microsoft.com/Pages/ResponsePage.aspx?id=v4j5cvGGr0GRqy180BHbR7en2Ais5pxKtso_Pz4b1_xUQjA5SkYzNDM4TkcwQzNEOE1NVEdKUUlRRCQlQCN0PWcu) to apply for access.
-Observed people tracking and matched faces are Azure Video Indexer AI features that automatically detect and match people in media files. Observed people tracking and matched faces can be set to display insights on people, their clothing, and the exact timeframe of their appearance.
+Observed people tracking and matched faces are Azure AI Video Indexer AI features that automatically detect and match people in media files. Observed people tracking and matched faces can be set to display insights on people, their clothing, and the exact timeframe of their appearance.
The resulting insights are displayed in a categorized list in the Insights tab; the tab includes a thumbnail of each person and their ID. Clicking the thumbnail of a person displays the matched person (the corresponding face in the People insight). Insights are also generated in a categorized list in a JSON file that includes the thumbnail ID of the person, the percentage of time they appear in the file, a Wiki link (if they're a celebrity), and the confidence level.
To see the insights in a JSON file, do the following:
}, ```
-To download the JSON file via the API, use the [Azure Video Indexer developer portal](https://api-portal.videoindexer.ai/).
+To download the JSON file via the API, use the [Azure AI Video Indexer developer portal](https://api-portal.videoindexer.ai/).
## Observed people tracking and matched faces components
It's important to note the limitations of observed people tracing, to avoid or m
### Other considerations
-When used responsibly and carefully, Azure Video Indexer is a valuable tool for many industries. To respect the privacy and safety of others, and to comply with local and global regulations, we recommend the following:
+When used responsibly and carefully, Azure AI Video Indexer is a valuable tool for many industries. To respect the privacy and safety of others, and to comply with local and global regulations, we recommend the following:
- Always respect an individual's right to privacy, and only ingest videos for lawful and justifiable purposes. - Don't purposely disclose inappropriate media showing young children or family members of celebrities or other content that may be detrimental or pose a threat to an individual's personal freedom.
When used responsibly and carefully, Azure Video Indexer is a valuable tool for
`visupport@microsoft.com`
-## Azure Video Indexer insights
+## Azure AI Video Indexer insights
- [Audio effects detection](audio-effects-detection.md) - [Face detection](face-detection.md)
azure-video-indexer Observed People Featured Clothing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/observed-people-featured-clothing.md
Title: Enable featured clothing of an observed person
-description: When indexing a video using Azure Video Indexer advanced video settings, you can view the featured clothing of an observed person.
+description: When indexing a video using Azure AI Video Indexer advanced video settings, you can view the featured clothing of an observed person.
Last updated 10/10/2022
# Enable featured clothing of an observed person (preview)
-When indexing a video using Azure Video Indexer advanced video settings, you can view the featured clothing of an observed person. The insight provides information of key items worn by individuals within a video and the timestamp in which the clothing appears. This allows high-quality in-video contextual advertising, where relevant clothing ads are matched with the specific time within the video in which they are viewed.
+When indexing a video using Azure AI Video Indexer advanced video settings, you can view the featured clothing of an observed person. The insight provides information about key items worn by individuals within a video and the timestamp at which the clothing appears. This allows high-quality in-video contextual advertising, where relevant clothing ads are matched with the specific time within the video in which they are viewed.
This article discusses how to view the featured clothing insight and how the featured clothing images are ranked.
azure-video-indexer Observed People Tracing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/observed-people-tracing.md
# Trace observed people in a video (preview)
-Azure Video Indexer detects observed people in videos and provides information such as the location of the person in the video frame and the exact timestamp (start, end) when a person appears. The API returns the bounding box coordinates (in pixels) for each person instance detected, including detection confidence.
+Azure AI Video Indexer detects observed people in videos and provides information such as the location of the person in the video frame and the exact timestamp (start, end) when a person appears. The API returns the bounding box coordinates (in pixels) for each person instance detected, including detection confidence.
Some scenarios where this feature could be useful:
azure-video-indexer Ocr https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/ocr.md
Title: Azure Video Indexer optical character recognition (OCR) overview -
-description: An introduction to Azure Video Indexer optical character recognition (OCR) component responsibly.
+ Title: Azure AI Video Indexer optical character recognition (OCR) overview
+
+description: An introduction to Azure AI Video Indexer optical character recognition (OCR) component responsibly.
# Optical character recognition (OCR)
-Optical character recognition (OCR) is an Azure Video Indexer AI feature that extracts text from images like pictures, street signs and products in media files to create insights.
+Optical character recognition (OCR) is an Azure AI Video Indexer AI feature that extracts text from images like pictures, street signs and products in media files to create insights.
-OCR currently extracts insights from printed and handwritten text in over 50 languages, including from an image with text in multiple languages. For more information, see [OCR supported languages](/azure/cognitive-services/computer-vision/language-support#optical-character-recognition-ocr).
+OCR currently extracts insights from printed and handwritten text in over 50 languages, including from an image with text in multiple languages. For more information, see [OCR supported languages](../ai-services/computer-vision/language-support.md#optical-character-recognition-ocr).
## Prerequisites
To see the insights in a JSON file, do the following:
}, ```
-To download the JSON file via the API, use the [Azure Video Indexer developer portal](https://api-portal.videoindexer.ai/).
+To download the JSON file via the API, use the [Azure AI Video Indexer developer portal](https://api-portal.videoindexer.ai/).
## OCR components
During the OCR procedure, text images in a media file are processed, as follows:
|Component|Definition| ||| |Source file| The user uploads the source file for indexing.|
-|Read model |Images are detected in the media file and text is then extracted and analyzed by Azure Cognitive Services. |
+|Read model |Images are detected in the media file and text is then extracted and analyzed by Azure AI services. |
|Get read results model |The output of the extracted text is displayed in a JSON file.| |Confidence value| The estimated confidence level of each word is calculated as a range of 0 to 1. The confidence score represents the certainty in the accuracy of the result. For example, an 82% certainty will be represented as an 0.82 score.|
-For more information, seeΓÇ»[OCR technology](/azure/cognitive-services/computer-vision/overview-ocr).
+For more information, see [OCR technology](../ai-services/computer-vision/overview-ocr.md).
## Example use cases
For more information, seeΓÇ»[OCR technology](/azure/cognitive-services/computer-
- When extracting handwritten text, avoid using the OCR results of signatures that are hard to read for both humans and machines. A better way to use OCR is to use it for detecting the presence of a signature for further analysis. - Don't use OCR for decisions that may have serious adverse impacts. Machine learning models that extract text can result in undetected or incorrect text output. Decisions based on incorrect output could have serious adverse impacts. Additionally, it's advisable to include human review of decisions that have the potential for serious impacts on individuals.
-When used responsibly and carefully, Azure Video Indexer is a valuable tool for many industries. To respect the privacy and safety of others, and to comply with local and global regulations, we recommend the following:  
+When used responsibly and carefully, Azure AI Video Indexer is a valuable tool for many industries. To respect the privacy and safety of others, and to comply with local and global regulations, we recommend the following:
- Always respect an individual’s right to privacy, and only ingest videos for lawful and justifiable purposes. - Don't purposely disclose inappropriate content about young children or family members of celebrities or other content that may be detrimental or pose a threat to an individual’s personal freedom.
When used responsibly and carefully, Azure Video Indexer is a valuable tool for
`visupport@microsoft.com`
-## Azure Video Indexer insights
+## Azure AI Video Indexer insights
- [Audio effects detection](audio-effects-detection.md) - [Face detection](face-detection.md)
azure-video-indexer Odrv Download https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/odrv-download.md
Title: Index videos stored on OneDrive - Azure Video Indexer
-description: Learn how to index videos stored on OneDrive by using Azure Video Indexer.
+ Title: Index videos stored on OneDrive - Azure AI Video Indexer
+description: Learn how to index videos stored on OneDrive by using Azure AI Video Indexer.
Last updated 12/17/2021 # Index your videos stored on OneDrive
-This article shows how to index videos stored on OneDrive by using the Azure Video Indexer website.
+This article shows how to index videos stored on OneDrive by using the Azure AI Video Indexer website.
## Supported file formats
-For a list of file formats that you can use with Azure Video Indexer, see [Standard Encoder formats and codecs](/azure/media-services/latest/encode-media-encoder-standard-formats-reference).
+For a list of file formats that you can use with Azure AI Video Indexer, see [Standard Encoder formats and codecs](/azure/media-services/latest/encode-media-encoder-standard-formats-reference).
## Index a video by using the website
-1. Sign into the [Azure Video Indexer](https://www.videoindexer.ai/) website, and then select **Upload**.
+1. Sign into the [Azure AI Video Indexer](https://www.videoindexer.ai/) website, and then select **Upload**.
> [!div class="mx-imgBorder"] > :::image type="content" source="./media/video-indexer-get-started/video-indexer-upload.png" alt-text="Screenshot that shows the Upload button.":::
For a list of file formats that you can use with Azure Video Indexer, see [Stand
`https://onedrive.live.com/download?cid=5BC591B7C713B04F&resid=5DC518B6B713C40F%2110126&authkey=HnsodidN_50oA3lLfk`
-1. Now enter this URL in the Azure Video Indexer website in the URL field.
+1. Now enter this URL in the Azure AI Video Indexer website in the URL field.
> [!div class="mx-imgBorder"] > :::image type="content" source="./media/video-indexer-get-started/avam-odrv-url.png" alt-text="Screenshot that shows the onedrive url field.":::
-After your video is downloaded from OneDrive, Azure Video Indexer starts indexing and analyzing the video.
+After your video is downloaded from OneDrive, Azure AI Video Indexer starts indexing and analyzing the video.
> [!div class="mx-imgBorder"] > :::image type="content" source="./media/video-indexer-get-started/progress.png" alt-text="Screenshot that shows the progress of an upload.":::
-Once Azure Video Indexer is done analyzing, you will receive an email with a link to your indexed video. The email also includes a short description of what was found in your video (for example: people, topics, optical character recognition).
+Once Azure AI Video Indexer is done analyzing, you will receive an email with a link to your indexed video. The email also includes a short description of what was found in your video (for example: people, topics, optical character recognition).
## Upload and index a video by using the API
You can use the [Upload Video](https://api-portal.videoindexer.ai/api-details#ap
### Configurations and parameters
-This section describes some of the optional parameters and when to set them. For the most up-to-date info about parameters, see the [Azure Video Indexer API developer portal](https://api-portal.videoindexer.ai/api-details#api=Operations&operation=Upload-Video).
+This section describes some of the optional parameters and when to set them. For the most up-to-date info about parameters, see the [Azure AI Video Indexer API developer portal](https://api-portal.videoindexer.ai/api-details#api=Operations&operation=Upload-Video).
#### externalID
-Use this parameter to specify an ID that will be associated with the video. The ID can be applied to integration into an external video content management (VCM) system. The videos that are in the Azure Video Indexer website can be searched via the specified external ID.
+Use this parameter to specify an ID that will be associated with the video. The ID can be used to integrate with an external video content management (VCM) system. The videos that are in the Azure AI Video Indexer website can be searched via the specified external ID.
#### callbackUrl
Use this parameter to specify a callback URL.
[!INCLUDE [callback url](./includes/callback-url.md)]
-Azure Video Indexer returns any existing parameters provided in the original URL. The URL must be encoded.
+Azure AI Video Indexer returns any existing parameters provided in the original URL. The URL must be encoded.
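Because the callback URL has to be passed as an encoded query-string value, a small helper like the following Python sketch can be used; the callback endpoint and its `correlationId` parameter are hypothetical placeholders.

```python
from urllib.parse import quote

# Hypothetical callback endpoint; any query parameters you include here are
# returned to you when Azure AI Video Indexer invokes the callback.
callback_url = "https://example.com/vi/notify?correlationId=12345"

# The callbackUrl value must be URL-encoded before it is added to the request.
encoded_callback_url = quote(callback_url, safe="")
print(encoded_callback_url)
```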
#### indexingPreset
Use this parameter to define an AI bundle that you want to apply on your audio o
> [!NOTE] > The preceding advanced presets include models that are in public preview. When these models reach general availability, there might be implications for the price.
-Azure Video Indexer covers up to two tracks of audio. If the file has more audio tracks, they're treated as one track. If you want to index the tracks separately, you need to extract the relevant audio file and index it as `AudioOnly`.
+Azure AI Video Indexer covers up to two tracks of audio. If the file has more audio tracks, they're treated as one track. If you want to index the tracks separately, you need to extract the relevant audio file and index it as `AudioOnly`.
Price depends on the selected indexing option. For more information, see [Media Services pricing](https://azure.microsoft.com/pricing/details/media-services/). #### priority
-Azure Video Indexer indexes videos according to their priority. Use the `priority` parameter to specify the index priority. The following values are valid: `Low`, `Normal` (default), and `High`.
+Azure AI Video Indexer indexes videos according to their priority. Use the `priority` parameter to specify the index priority. The following values are valid: `Low`, `Normal` (default), and `High`.
This parameter is supported only for paid accounts. #### streamingPreset
-After your video is uploaded, Azure Video Indexer optionally encodes the video. It then proceeds to indexing and analyzing the video. When Azure Video Indexer is done analyzing, you get a notification with the video ID.
+After your video is uploaded, Azure AI Video Indexer optionally encodes the video. It then proceeds to indexing and analyzing the video. When Azure AI Video Indexer is done analyzing, you get a notification with the video ID.
When you're using the [Upload Video](https://api-portal.videoindexer.ai/api-details#api=Operations&operation=Upload-Video) or [Re-Index Video](https://api-portal.videoindexer.ai/api-details#api=Operations&operation=Re-Index-Video) API, one of the optional parameters is `streamingPreset`. If you set `streamingPreset` to `Default`, `SingleBitrate`, or `AdaptiveBitrate`, the encoding process is triggered. After the indexing and encoding jobs are done, the video is published so you can also stream your video. The streaming endpoint from which you want to stream the video must be in the **Running** state.
-For `SingleBitrate`, the standard encoder cost will apply for the output. If the video height is greater than or equal to 720, Azure Video Indexer encodes it as 1280 x 720. Otherwise, it's encoded as 640 x 468.
+For `SingleBitrate`, the standard encoder cost will apply for the output. If the video height is greater than or equal to 720, Azure AI Video Indexer encodes it as 1280 x 720. Otherwise, it's encoded as 640 x 468.
The default setting is [content-aware encoding](/azure/media-services/latest/encode-content-aware-concept). If you only want to index your video and not encode it, set `streamingPreset` to `NoStreaming`. #### videoUrl
-This parameter specifies the URL of the video or audio file to be indexed. If the `videoUrl` parameter is not specified, Azure Video Indexer expects you to pass the file as multipart/form body content.
+This parameter specifies the URL of the video or audio file to be indexed. If the `videoUrl` parameter is not specified, Azure AI Video Indexer expects you to pass the file as multipart/form body content.
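Before the full C# sample that follows, here is a minimal Python sketch (using the `requests` package) of how the optional parameters above might be combined in a single Upload Video call. The account ID, access token, and URLs are placeholders, the access token is assumed to have been obtained separately, and the endpoint shape follows the pattern used by the developer portal, so treat this as illustrative rather than a drop-in sample.

```python
import requests

# Placeholders for illustration only; take the real values from your account.
LOCATION = "trial"                       # or your account's Azure region code name
ACCOUNT_ID = "<account-id>"
ACCESS_TOKEN = "<account-access-token>"  # obtained separately (developer portal or ARM)

params = {
    "accessToken": ACCESS_TOKEN,
    "name": "my-video",
    "videoUrl": "https://example.com/video.mp4",     # must be publicly accessible
    "externalId": "vcm-12345",                       # optional: ID from an external VCM system
    "callbackUrl": "https://example.com/vi/notify",  # optional: requests encodes this for you
    "indexingPreset": "Default",                     # optional: which AI bundle to apply
    "streamingPreset": "NoStreaming",                # optional: index only, skip encoding
    "priority": "Normal",                            # optional: Low, Normal, or High (paid accounts)
}

response = requests.post(
    f"https://api.videoindexer.ai/{LOCATION}/Accounts/{ACCOUNT_ID}/Videos",
    params=params,
)
response.raise_for_status()
print(response.json().get("id"))  # video ID, used to track indexing progress
```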
### Code sample > [!NOTE] > The following sample is intended for Classic accounts only and isn't compatible with ARM accounts. For an updated sample for ARM, see [this ARM sample repo](https://github.com/Azure-Samples/media-services-video-indexer/blob/master/API-Samples/C%23/ArmBased/Program.cs).
-The following C# code snippets demonstrate the usage of all the Azure Video Indexer APIs together.
+The following C# code snippets demonstrate the usage of all the Azure AI Video Indexer APIs together.
### [Classic account](#tab/With-classic-account/) After you copy the following code into your development platform, you'll need to provide two parameters:
-* API key (`apiKey`): Your personal API management subscription key. It allows you to get an access token in order to perform operations on your Azure Video Indexer account.
+* API key (`apiKey`): Your personal API management subscription key. It allows you to get an access token in order to perform operations on your Azure AI Video Indexer account.
To get your API key:
- 1. Go to the [Azure Video Indexer API developer portal](https://api-portal.videoindexer.ai/).
+ 1. Go to the [Azure AI Video Indexer API developer portal](https://api-portal.videoindexer.ai/).
1. Sign in. 1. Go to **Products** > **Authorization** > **Authorization subscription**. 1. Copy the **Primary key** value.
namespace VideoIndexerArm
public static async Task Main(string[] args) {
- // Build Azure Video Indexer resource provider client that has access token through Azure Resource Manager
+ // Build Azure AI Video Indexer resource provider client that has access token through Azure Resource Manager
var videoIndexerResourceProviderClient = await VideoIndexerResourceProviderClient.BuildVideoIndexerResourceProviderClient(); // Get account details
namespace VideoIndexerArm
Console.WriteLine($"account id: {accountId}"); Console.WriteLine($"account location: {accountLocation}");
- // Get account-level access token for Azure Video Indexer
+ // Get account-level access token for Azure AI Video Indexer
var accessTokenRequest = new AccessTokenRequest { PermissionType = AccessTokenPermission.Contributor,
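The fragment above builds the account-level access-token request. As a rough standalone sketch, the corresponding ARM `generateAccessToken` call might look like the following; the subscription, resource group, account name, and `api-version` values are assumptions to check against the ARM sample linked earlier.

```csharp
using System;
using System.Net.Http;
using System.Net.Http.Headers;
using System.Text;
using System.Threading.Tasks;

public static class ArmAccessTokenSample
{
    public static async Task<string> GetAccountAccessTokenAsync(string armToken)
    {
        // Placeholder identifiers -- replace with your own values.
        const string subscriptionId = "<SUBSCRIPTION_ID>";
        const string resourceGroup = "<RESOURCE_GROUP>";
        const string accountName = "<VI_ACCOUNT_NAME>";
        const string apiVersion = "2022-08-01"; // assumption; confirm against the ARM sample

        var requestUri =
            $"https://management.azure.com/subscriptions/{subscriptionId}" +
            $"/resourceGroups/{resourceGroup}" +
            $"/providers/Microsoft.VideoIndexer/accounts/{accountName}" +
            $"/generateAccessToken?api-version={apiVersion}";

        using var client = new HttpClient();
        client.DefaultRequestHeaders.Authorization =
            new AuthenticationHeaderValue("Bearer", armToken); // ARM token from Azure AD

        // Same shape as the AccessTokenRequest object in the snippet above.
        var body = new StringContent(
            "{\"permissionType\":\"Contributor\",\"scope\":\"Account\"}",
            Encoding.UTF8, "application/json");

        var response = await client.PostAsync(requestUri, body);
        response.EnsureSuccessStatusCode();
        return await response.Content.ReadAsStringAsync(); // JSON containing the account access token
    }
}
```

The returned token is then passed as the `accessToken` query parameter on subsequent data-plane calls.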
The upload operation might return the following status codes:
- The byte array option times out after 30 minutes.
- The URL provided in the `videoURL` parameter must be encoded.
- Indexing Media Services assets has the same limitation as indexing from a URL.
-- Azure Video Indexer has a duration limit of 4 hours for a single file.
+- Azure AI Video Indexer has a duration limit of 4 hours for a single file.
- The URL must be accessible (for example, a public URL). If it's a private URL, the access token must be provided in the request.
For information about a storage account that's behind a firewall, see the [FAQ](
## Next steps
-[Examine the Azure Video Indexer output produced by an API](video-indexer-output-json-v2.md)
+[Examine the Azure AI Video Indexer output produced by an API](video-indexer-output-json-v2.md)
azure-video-indexer Regions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/regions.md
Title: Regions in which Azure Video Indexer is available
-description: This article talks about Azure regions in which Azure Video Indexer is available.
+ Title: Regions in which Azure AI Video Indexer is available
+description: This article talks about Azure regions in which Azure AI Video Indexer is available.
Last updated 09/14/2020
-# Azure regions in which Azure Video Indexer exists
+# Azure regions in which Azure AI Video Indexer exists
-Azure Video Indexer APIs contain a **location** parameter that you should set to the Azure region to which the call should be routed. This must be an [Azure region in which Azure Video Indexer is available](https://azure.microsoft.com/global-infrastructure/services/?products=cognitive-services&regions=all).
+Azure AI Video Indexer APIs contain a **location** parameter that you should set to the Azure region to which the call should be routed. This must be an [Azure region in which Azure AI Video Indexer is available](https://azure.microsoft.com/global-infrastructure/services/?products=cognitive-services&regions=all).
## Locations
-The `location` parameter must be given the Azure region code name as its value. If you are using Azure Video Indexer in preview mode, you should put `"trial"` as the value. `trial` is the default value for the `location` parameter. Otherwise, to get the code name of the Azure region that your account is in and that your call should be routed to, you can use the Azure portal or run a [Azure CLI](/cli/azure) command.
+The `location` parameter must be given the Azure region code name as its value. If you are using Azure AI Video Indexer in preview mode, you should put `"trial"` as the value. `trial` is the default value for the `location` parameter. Otherwise, to get the code name of the Azure region that your account is in and that your call should be routed to, you can use the Azure portal or run an [Azure CLI](/cli/azure) command.
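In practice, the location value (`trial` or a region code) is the first path segment of every data-plane call. A minimal sketch, where the account ID and access token are placeholder assumptions:

```csharp
using System;

// The location value ("trial" or a region code such as "eastus") is the first
// path segment of every Azure AI Video Indexer API call.
var location = "trial";              // or a region code for a paid account
var accountId = "<ACCOUNT_ID>";      // placeholder
var accessToken = "<ACCESS_TOKEN>";  // placeholder

var listVideosUri =
    $"https://api.videoindexer.ai/{location}/Accounts/{accountId}/Videos" +
    $"?accessToken={accessToken}";

Console.WriteLine(listVideosUri);
```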
### Azure portal
-1. Sign in on the [Azure Video Indexer](https://www.videoindexer.ai/) website.
+1. Sign in on the [Azure AI Video Indexer](https://www.videoindexer.ai/) website.
1. Select **User accounts** from the top-right corner of the page. 1. Find the location of your account in the top-right corner.
azure-video-indexer Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/release-notes.md
Title: Azure Video Indexer release notes | Microsoft Docs
-description: To stay up-to-date with the most recent developments, this article provides you with the latest updates on Azure Video Indexer.
+ Title: Azure AI Video Indexer release notes | Microsoft Docs
+description: To stay up-to-date with the most recent developments, this article provides you with the latest updates on Azure AI Video Indexer.
Last updated 07/03/2023
-# Azure Video Indexer release notes
+# Azure AI Video Indexer release notes
->Get notified about when to revisit this page for updates by copying and pasting this URL: `https://learn.microsoft.com/api/search/rss?search=%22Azure+Media+Services+Video+Indexer+release+notes%22&locale=en-us` into your RSS feed reader.
+Revisit this page to view the latest updates.
-To stay up-to-date with the most recent Azure Video Indexer developments, this article provides you with information about:
+To stay up-to-date with the most recent Azure AI Video Indexer developments, this article provides you with information about:
* The latest releases * Known issues
Guidelines to customers: We're working on a new implementation without AMS and w
### API updates
-We're introducing a change in behavior that may requires a change to your existing query logic. The change is in the **List** and **Search** APIs, find a detailed change between the current and the new behavior in a table that follows. You may need to update your code to utilize the [new APIs](https://api-portal.videoindexer.ai/).
+We're introducing a change in behavior that may require a change to your existing query logic. The change affects the **List** and **Search** APIs; the table that follows details the differences between the current and the new behavior. You may need to update your code to utilize the [new APIs](https://api-portal.videoindexer.ai/).
|API |Current|New|The update| |||||
We're introducing a change in behavior that may requires a change to your existi
### Support for HTTP/2
-Added support for HTTP/2 for our [Data Plane API](https://api-portal.videoindexer.ai/). [HTTP/2](https://en.wikipedia.org/wiki/HTTP/2) offers several benefits over HTTP/1.1, which continues to be supported for backwards compatibility. One of the main benefits of HTTP/2 is increased performance, better reliability and reduced system resource requirements over HTTP/1.1. With this change we now support HTTP/2 for both the Video Indexer [Portal](https://videoindexer.ai/) and our Data Plane API. We advise to update your code to take advantage of this change.
+Added support for HTTP/2 for our [Data Plane API](https://api-portal.videoindexer.ai/). [HTTP/2](https://en.wikipedia.org/wiki/HTTP/2) offers several benefits over HTTP/1.1, which continues to be supported for backwards compatibility. The main benefits of HTTP/2 are increased performance, better reliability, and reduced system resource requirements compared with HTTP/1.1. With this change, we now support HTTP/2 for both the Video Indexer [Portal](https://videoindexer.ai/) and our Data Plane API. We advise you to update your code to take advantage of this change.
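For .NET callers, a minimal sketch of opting into HTTP/2 while keeping HTTP/1.1 as a fallback (the request shown is only a probe against the data-plane host):

```csharp
using System;
using System.Net;
using System.Net.Http;

var client = new HttpClient
{
    // Ask for HTTP/2 but let the handler fall back to HTTP/1.1,
    // which remains supported for backwards compatibility.
    DefaultRequestVersion = HttpVersion.Version20,
    DefaultVersionPolicy = HttpVersionPolicy.RequestVersionOrLower
};

var response = await client.GetAsync("https://api.videoindexer.ai/");
Console.WriteLine($"Negotiated HTTP version: {response.Version}");
```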
### Topics insight improvements
We now support all five levels of IPTC ontology.
### Resource Health support
-Azure Video Indexer is now integrated with Azure Resource Health enabling you to see the health and availability of each of your Azure Video Indexer resources. Azure Resource Health also helps with diagnosing and solving problems and you can set alerts to be notified whenever your resources are affected. For more information, see [Azure Resource Health overview](../service-health/resource-health-overview.md).
+Azure AI Video Indexer is now integrated with Azure Resource Health enabling you to see the health and availability of each of your Azure AI Video Indexer resources. Azure Resource Health also helps with diagnosing and solving problems and you can set alerts to be notified whenever your resources are affected. For more information, see [Azure Resource Health overview](../service-health/resource-health-overview.md).
### The animation character recognition model has been retired
The **animation character recognition** model has been retired on March 1, 2023.
### Excluding sensitive AI models
-Following the Microsoft Responsible AI agenda, Azure Video Indexer now allows you to exclude specific AI models when indexing media files. The list of sensitive AI models includes: face detection, observed people, emotions, labels identification.
+Following the Microsoft Responsible AI agenda, Azure AI Video Indexer now allows you to exclude specific AI models when indexing media files. The list of sensitive AI models includes: face detection, observed people, emotions, labels identification.
This feature is currently available through the API, and is available in all presets except the Advanced preset.
For more information, see [Considerations and limitations when choosing a use ca
### Support for storage behind firewall
-It's good practice to lock storage accounts and disable public access to enhance or comply with enterprise security policy. Video Indexer can now access non-public accessible storage accounts using the [Azure Trusted Service](https://learn.microsoft.com/azure/storage/common/storage-network-security?tabs=azure-portal#trusted-access-based-on-a-managed-identity) exception using Managed Identities. You can read more how to set it up in our [how-to](storage-behind-firewall.md).
+It's good practice to lock storage accounts and disable public access to enhance or comply with enterprise security policy. Video Indexer can now access storage accounts that aren't publicly accessible by using the [Azure Trusted Service](/azure/storage/common/storage-network-security?tabs=azure-portal#trusted-access-based-on-a-managed-identity) exception with Managed Identities. You can read more about how to set it up in our [how-to](storage-behind-firewall.md).
### New custom speech and pronunciation training
-Azure Video Indexer has added a new custom speech model experience. The experience includes ability to use custom pronunciation datasets to improve recognition of mispronounced words, phrases, or names. The custom models can be used to improve the transcription quality of content with industry specific terminology. To learn more, see [Customize speech model overview](customize-speech-model-overview.md).
+Azure AI Video Indexer has added a new custom speech model experience. The experience includes the ability to use custom pronunciation datasets to improve recognition of mispronounced words, phrases, or names. The custom models can be used to improve the transcription quality of content with industry-specific terminology. To learn more, see [Customize speech model overview](customize-speech-model-overview.md).
### Observed people quality improvements
Video Indexer supports the use of Network Security Tag to allow network traffic
### Notification experience
-The [Azure Video Indexer website](https://www.videoindexer.ai/) now has a notification panel where you can stay informed of important product updates, such as service impacting events, new releases, and more.
+The [Azure AI Video Indexer website](https://www.videoindexer.ai/) now has a notification panel where you can stay informed of important product updates, such as service impacting events, new releases, and more.
### Textual logo detection
The [Azure Video Indexer website](https://www.videoindexer.ai/) now has a notifi
### Switching directories
-You can now switch Azure AD directories and manage Azure Video Indexer accounts across tenants using the [Azure Video Indexer website](https://www.videoindexer.ai/).
+You can now switch Azure AD directories and manage Azure AI Video Indexer accounts across tenants using the [Azure AI Video Indexer website](https://www.videoindexer.ai/).
### Language support
Significantly reduced number of low-quality face detection occurrences in the UI
## November 2022
-### Speakers' names can now be edited from the Azure Video Indexer website
+### Speakers' names can now be edited from the Azure AI Video Indexer website
-You can now add new speakers, rename identified speakers and modify speakers assigned to a particular transcript line using the [Azure Video Indexer website](https://www.videoindexer.ai/). For details on how to edit speakers from the **Timeline** pane, see [Edit speakers with the Azure Video Indexer website](edit-speakers.md).
+You can now add new speakers, rename identified speakers and modify speakers assigned to a particular transcript line using the [Azure AI Video Indexer website](https://www.videoindexer.ai/). For details on how to edit speakers from the **Timeline** pane, see [Edit speakers with the Azure AI Video Indexer website](edit-speakers.md).
-The same capabilities are available from the Azure Video Indexer [upload video index](https://api-portal.videoindexer.ai/api-details#api=Operations&operation=Update-Video-Index) API.
+The same capabilities are available from the Azure AI Video Indexer [update video index](https://api-portal.videoindexer.ai/api-details#api=Operations&operation=Update-Video-Index) API.
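As a rough illustration of that flow, the sketch below downloads the video index, edits it, and uploads it back. The URL pattern follows the standard data-plane shape; the account, video, and token values are placeholders, and the speaker edit itself is left as a comment because it depends on the index schema of your video.

```csharp
using System;
using System.Net.Http;
using System.Text;
using System.Threading.Tasks;

public static class EditSpeakersSample
{
    public static async Task RenameSpeakerAsync()
    {
        // Placeholder values -- replace with your own account, video, and token.
        const string location = "trial";
        const string accountId = "<ACCOUNT_ID>";
        const string videoId = "<VIDEO_ID>";
        const string accessToken = "<ACCESS_TOKEN>";

        var indexUri =
            $"https://api.videoindexer.ai/{location}/Accounts/{accountId}/Videos/{videoId}/Index" +
            $"?accessToken={accessToken}";

        using var client = new HttpClient();

        // 1. Download the current video index JSON.
        var indexJson = await client.GetStringAsync(indexUri);

        // 2. Edit the speaker names inside the index JSON here
        //    (omitted: depends on the index schema of your video).
        var updatedIndexJson = indexJson;

        // 3. Upload the edited index back with the update (PUT) call.
        var response = await client.PutAsync(
            indexUri,
            new StringContent(updatedIndexJson, Encoding.UTF8, "application/json"));
        Console.WriteLine($"Update returned {(int)response.StatusCode}");
    }
}
```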
## October 2022 ### A new built-in role: Video Indexer Restricted Viewer
-The limited access **Video Indexer Restricted Viewer** role is intended for the [Azure Video Indexer website](https://www.videoindexer.ai/) users. The role's permitted actions relate to the [Azure Video Indexer website](https://www.videoindexer.ai/) experience.
+The limited access **Video Indexer Restricted Viewer** role is intended for the [Azure AI Video Indexer website](https://www.videoindexer.ai/) users. The role's permitted actions relate to the [Azure AI Video Indexer website](https://www.videoindexer.ai/) experience.
For more information, see [Manage access with the Video Indexer Restricted Viewer role](restricted-viewer-role.md).
For details, see [Slate detection](slate-detection-insight.md).
### New source languages support for STT, translation, and search
-Now supporting source languages for STT (speech-to-text), translation, and search in Ukraine and Vietnamese. It means transcription, translation, and search features are also supported for these languages in the [Azure Video Indexer](https://www.videoindexer.ai/) website, widgets and APIs.
+Now supporting source languages for STT (speech-to-text), translation, and search in Ukrainian and Vietnamese. This means the transcription, translation, and search features are also supported for these languages in the [Azure AI Video Indexer](https://www.videoindexer.ai/) website, widgets, and APIs.
For more information, see [supported languages](language-support.md). ### Edit a speaker's name in the transcription through the API
-You can now edit the name of the speakers in the transcription using the Azure Video Indexer API.
+You can now edit the name of the speakers in the transcription using the Azure AI Video Indexer API.
### Word level time annotation with confidence score
For more information, see [Examine word-level transcription information](edit-tr
The new set of logs, described below, enables you to better monitor your indexing pipeline.
-Azure Video Indexer now supports Diagnostics settings for indexing events. You can now export logs monitoring upload, and re-indexing of media files through diagnostics settings to Azure Log Analytics, Storage, Event Hubs, or a third-party solution.
+Azure AI Video Indexer now supports Diagnostics settings for indexing events. You can now export logs that monitor the upload and re-indexing of media files through diagnostics settings to Azure Log Analytics, Storage, Event Hubs, or a third-party solution.
-### Expanded supported languages in LID and MLID through Azure Video Indexer API
+### Expanded supported languages in LID and MLID through Azure AI Video Indexer API
-Expanded the languages supported in LID (language identification) and MLID (multi language Identification) using the Azure Video Indexer API.
+Expanded the languages supported in LID (language identification) and MLID (multi-language identification) using the Azure AI Video Indexer API.
The following languages are now supported through the API: Arabic (United Arab Emirates), Arabic Modern Standard, Arabic Egypt, Arabic (Iraq), Arabic (Jordan), Arabic (Kuwait), Arabic (Oman), Arabic (Qatar), Arabic (Saudi Arabia), Arabic Syrian Arab Republic, Czech, Danish, German, English Australia, English United Kingdom, English United States, Spanish, Spanish (Mexico), Finnish, French (Canada), French, Hebrew, Hindi, Italian, Japanese, Korean, Norwegian, Dutch, Polish, Portuguese, Portuguese (Portugal), Russian, Swedish, Thai, Turkish, Ukrainian, Vietnamese, Chinese (Simplified), Chinese (Cantonese, Traditional).
Use the [Patch person model](https://api-portal.videoindexer.ai/api-details#api=
### View speakers in closed captions
-You can now view speakers in closed captions of the Azure Video Indexer media player. For more information, see [View closed captions in the Azure Video Indexer website](view-closed-captions.md).
+You can now view speakers in closed captions of the Azure AI Video Indexer media player. For more information, see [View closed captions in the Azure AI Video Indexer website](view-closed-captions.md).
### Control face and people bounding boxes using parameters
The new `boundingBoxes` URL parameter controls the option to set bounding boxes
### Control autoplay from the account settings
-Control whether a media file will autoplay when opened using the webapp is through the user settings. Navigate to the [Azure Video Indexer website](https://www.videoindexer.ai/) -> the **Gear** icon (the top-right corner) -> **User settings** -> **Auto-play media files**.
+You control whether a media file autoplays when it's opened in the web app through the user settings. Navigate to the [Azure AI Video Indexer website](https://www.videoindexer.ai/) -> the **Gear** icon (the top-right corner) -> **User settings** -> **Auto-play media files**.
### Copy video ID from the player view
-**Copy video ID** is available when you select the video in the [Azure Video Indexer website](https://www.videoindexer.ai/)
+**Copy video ID** is available when you select the video in the [Azure AI Video Indexer website](https://www.videoindexer.ai/)
### New dark theme in native Azure colors
-Select the desired theme in the [Azure Video Indexer website](https://www.videoindexer.ai/). Select the **Gear** icon (the top-right corner) -> **User settings**.
+Select the desired theme in the [Azure AI Video Indexer website](https://www.videoindexer.ai/). Select the **Gear** icon (the top-right corner) -> **User settings**.
### Search or filter the account list
-You can search or filter the account list using the account name or region. Select **User accounts** in the top-right corner of the [Azure Video Indexer website](https://www.videoindexer.ai/).
+You can search or filter the account list using the account name or region. Select **User accounts** in the top-right corner of the [Azure AI Video Indexer website](https://www.videoindexer.ai/).
## September 2022
You can search or filter the account list using the account name or region. Sele
With an Azure Resource Manager (ARM) based [paid (unlimited)](accounts-overview.md) account, you are able to use: - [Azure role-based access control (RBAC)](../role-based-access-control/overview.md).-- Managed Identity to better secure the communication between your Azure Media Services and Azure Video Indexer account, Network Service Tags, and native integration with Azure Monitor to monitor your account (audit and indexing logs).
+- Managed Identity to better secure the communication between your Azure Media Services and Azure AI Video Indexer account, Network Service Tags, and native integration with Azure Monitor to monitor your account (audit and indexing logs).
- Scale and automate your [deployment with ARM-template](deploy-with-arm-template.md), [bicep](deploy-with-bicep.md) or terraform. - [Create logic apps connector for ARM-based accounts](logic-apps-connector-arm-accounts.md).
To create an ARM-based account, see [create an account](create-account-portal.md
### Update topic inferencing model
-Azure Video Indexer topic inferencing model was updated and now we extract more than 6.5 million topics (for example, covering topics such as Covid virus). To benefit from recent model updates you need to re-index your video files.
+The Azure AI Video Indexer topic inferencing model was updated and now extracts more than 6.5 million topics (for example, covering topics such as the Covid virus). To benefit from recent model updates, you need to re-index your video files.
### Topic inferencing model is now available on Azure Government
-You can now leverage topic inferencing model in your Azure Video Indexer paid account on [Azure Government](../azure-government/documentation-government-welcome.md) in Virginia and Arizona regions. With this release we completed the AI parity between Azure global and Azure Government.
+You can now leverage the topic inferencing model in your Azure AI Video Indexer paid account on [Azure Government](../azure-government/documentation-government-welcome.md) in the Virginia and Arizona regions. With this release, we completed AI parity between Azure global and Azure Government.
To benefit from the model updates you need to re-index your video files.
-### Session length is now 30 days in the Azure Video Indexer website
+### Session length is now 30 days in the Azure AI Video Indexer website
-The [Azure Video Indexer website](https://vi.microsoft.com) session length was extended to 30 days. You can preserve your session without having to re-login every 1 hour.
+The [Azure AI Video Indexer website](https://vi.microsoft.com) session length was extended to 30 days. You can preserve your session without having to sign in again every hour.
## July 2022
The featured clothing insight enables more targeted ads placement.
The insight provides information about key items worn by individuals within a video and the timestamp at which the clothing appears. This allows high-quality in-video contextual advertising, where relevant clothing ads are matched with the specific time within the video at which they are viewed.
-To view the featured clothing of an observed person, you have to index the video using Azure Video Indexer advanced video settings. For details on how featured clothing images are ranked and how to view this insight, see [featured clothing](observed-people-featured-clothing.md).
+To view the featured clothing of an observed person, you have to index the video using Azure AI Video Indexer advanced video settings. For details on how featured clothing images are ranked and how to view this insight, see [featured clothing](observed-people-featured-clothing.md).
## June 2022 ### Create Video Indexer blade improvements in Azure portal
-Azure Video Indexer now supports the creation of new resource using system-assigned managed identity or system and user assigned managed identity for the same resource.
+Azure AI Video Indexer now supports the creation of a new resource using a system-assigned managed identity, or both system-assigned and user-assigned managed identities, for the same resource.
You can also change the primary managed identity using the **Identity** tab in the [Azure portal](https://portal.azure.com/#home). ### Limited access of celebrity recognition and face identification features
-As part of Microsoft's commitment to responsible AI, we are designing and releasing Azure Video Indexer ΓÇô identification and celebrity recognition features. These features are designed to protect the rights of individuals and society and fostering transparent human-computer interaction. Thus, there is a limited access and use of Azure Video Indexer ΓÇô identification and celebrity recognition features.
+As part of Microsoft's commitment to responsible AI, we are designing and releasing the Azure AI Video Indexer identification and celebrity recognition features. These features are designed to protect the rights of individuals and society and to foster transparent human-computer interaction. Thus, access to and use of the Azure AI Video Indexer identification and celebrity recognition features is limited.
Identification and celebrity recognition features require registration and are only available to Microsoft managed customers and partners.
-Customers who wish to use this feature are required to apply and submit an [intake form](https://aka.ms/facerecognition). For more information, read [Azure Video Indexer limited access](limited-access-features.md).
+Customers who wish to use this feature are required to apply and submit an [intake form](https://aka.ms/facerecognition). For more information, read [Azure AI Video Indexer limited access](limited-access-features.md).
Also, see the following: the [announcement blog post](https://aka.ms/AAh91ff) and [investment and safeguard for facial recognition](https://aka.ms/AAh9oye).
Also, see the following: the [announcement blog post](https://aka.ms/AAh91ff) an
### Line breaking in transcripts
-Improved line break logic to better split transcript into sentences. New editing capabilities are now available through the Azure Video Indexer website, such as adding a new line and editing the lineΓÇÖs timestamp. For more information, see [Insert or remove transcript lines](edit-transcript-lines-portal.md).
+Improved line break logic to better split the transcript into sentences. New editing capabilities are now available through the Azure AI Video Indexer website, such as adding a new line and editing the line's timestamp. For more information, see [Insert or remove transcript lines](edit-transcript-lines-portal.md).
### Azure Monitor integration
-Azure Video Indexer now supports Diagnostics settings for Audit events. Logs of Audit events can now be exported through diagnostics settings to Azure Log Analytics, Storage, Event Hubs, or a third-party solution.
+Azure AI Video Indexer now supports Diagnostics settings for Audit events. Logs of Audit events can now be exported through diagnostics settings to Azure Log Analytics, Storage, Event Hubs, or a third-party solution.
-The additions enable easier access to analyze the data, monitor resource operation, and create automatically flows to act on an event. For more information, see [Monitor Azure Video Indexer](monitor-video-indexer.md).
+The additions make it easier to analyze the data, monitor resource operation, and automatically create flows that act on an event. For more information, see [Monitor Azure AI Video Indexer](monitor-video-indexer.md).
### Video Insights improvements
Optical character recognition (OCR) is improved by 60%. Face Detection is improved by
### Service tag
-Azure Video Indexer is now part of [Network Service Tags](network-security.md). Video Indexer often needs to access other Azure resources (for example, Storage). If you secure your inbound traffic to your resources with a Network Security Group you can now select Video Indexer as part of the built-in Service Tags. This will simplify security management as we populate the Service Tag with our public IPs.
+Azure AI Video Indexer is now part of [Network Service Tags](network-security.md). Video Indexer often needs to access other Azure resources (for example, Storage). If you secure your inbound traffic to your resources with a Network Security Group, you can now select Video Indexer as part of the built-in Service Tags. This simplifies security management because we populate the Service Tag with our public IPs.
### Celebrity recognition toggle
You can now enable or disable the celebrity recognition model on the account lev
:::image type="content" source="./media/release-notes/celebrity-recognition.png" alt-text="Screenshot showing the celebrity recognition toggle.":::
-### Azure Video Indexer repository name
+### Azure AI Video Indexer repository name
-As of May 1st, our new updated repository of Azure Video Indexer widget was renamed. Use https://www.npmjs.com/package/@azure/video-indexer-widgets instead
+As of May 1st, the Azure AI Video Indexer widget repository was renamed. Use https://www.npmjs.com/package/@azure/video-indexer-widgets instead.
## April 2022
-### Renamed **Azure Video Analyzer for Media** back to **Azure Video Indexer**
+### Renamed **Azure Video Analyzer for Media** back to **Azure AI Video Indexer**
-As of today, Azure Video analyzer for Media product name is **Azure Video Indexer** and all product related assets (web portal, marketing materials). It is a backward compatible change that has no implication on APIs and links. **Azure Video Indexer**'s new logo:
+As of today, the Azure Video Analyzer for Media product name is **Azure AI Video Indexer**, and the change applies to all product-related assets (web portal, marketing materials). It's a backward-compatible change that has no implications for APIs and links. **Azure AI Video Indexer**'s new logo:
## March 2022 ### Closed Captioning files now support including speakersΓÇÖ attributes
-Azure Video Indexer enables you to include speakers' characteristic based on a closed captioning file that you choose to download. To include the speakersΓÇÖ attributes, select Downloads -> Closed Captions -> choose the closed captioning downloadable file format (SRT, VTT, TTML, TXT, or CSV) and check **Include speakers** checkbox.
+Azure AI Video Indexer enables you to include speakers' characteristics in a closed captioning file that you choose to download. To include the speakers' attributes, select Downloads -> Closed Captions -> choose the closed captioning downloadable file format (SRT, VTT, TTML, TXT, or CSV) and check the **Include speakers** checkbox.
### Improvements to the widget offering The following improvements were made:
-* Azure Video Indexer widgets support more than 1 locale in a widget's parameter.
+* Azure AI Video Indexer widgets support more than 1 locale in a widget's parameter.
* The Insights widgets support initial search parameters and multiple sorting options. * The Insights widgets also include a confirmation step before deleting a face to avoid mistakes. * The widget customization now supports width as strings (for example 100%, 100vw). ## February 2022
-### Public preview of Azure Video Indexer account management based on ARM in Government cloud
+### Public preview of Azure AI Video Indexer account management based on ARM in Government cloud
-Azure Video Indexer website is now supporting account management based on ARM in public preview (see, [November 2021 release note](#november-2021)).
+The Azure AI Video Indexer website now supports ARM-based account management in public preview (see the [November 2021 release note](#november-2021)).
### Leverage open-source code to create ARM based account
-Added new code samples including HTTP calls to use Azure Video Indexer create, read, update and delete (CRUD) ARM API for solution developers.
+Added new code samples, including HTTP calls, that show solution developers how to use the Azure AI Video Indexer create, read, update, and delete (CRUD) ARM API.
## January 2022
For more information, see [Audio effects detection](audio-effects-detection.md).
### New source languages support for STT, translation, and search on the website
-Azure Video Indexer introduces source languages support for STT (speech-to-text), translation, and search in Hebrew (he-IL), Portuguese (pt-PT), and Persian (fa-IR) on the [Azure Video Indexer](https://www.videoindexer.ai/) website.
-It means transcription, translation, and search features are also supported for these languages in the [Azure Video Indexer](https://www.videoindexer.ai/) website and widgets.
+Azure AI Video Indexer introduces source languages support for STT (speech-to-text), translation, and search in Hebrew (he-IL), Portuguese (pt-PT), and Persian (fa-IR) on the [Azure AI Video Indexer](https://www.videoindexer.ai/) website.
+It means transcription, translation, and search features are also supported for these languages in the [Azure AI Video Indexer](https://www.videoindexer.ai/) website and widgets.
## December 2021
The projects feature is now GA and ready for productive use. There is no pricing
### New source languages support for STT, translation, and search on API level
-Azure Video Indexer introduces source languages support for STT (speech-to-text), translation, and search in Hebrew (he-IL), Portuguese (pt-PT), and Persian (fa-IR) on the API level.
+Azure AI Video Indexer introduces source languages support for STT (speech-to-text), translation, and search in Hebrew (he-IL), Portuguese (pt-PT), and Persian (fa-IR) on the API level.
### Matched person detection capability
-When indexing a video with Azure Video Indexer advanced video settings, you can view the new matched person detection capability. If there are people observed in your media file, you can now view the specific person who matched each of them through the media player.
+When indexing a video with Azure AI Video Indexer advanced video settings, you can view the new matched person detection capability. If there are people observed in your media file, you can now view the specific person who matched each of them through the media player.
## November 2021
-### Public preview of Azure Video Indexer account management based on ARM
+### Public preview of Azure AI Video Indexer account management based on ARM
-Azure Video Indexer introduces a public preview of Azure Resource Manager (ARM) based account management. You can leverage ARM-based Azure Video Indexer APIs to create, edit, and delete an account from the [Azure portal](https://portal.azure.com/#home).
+Azure AI Video Indexer introduces a public preview of Azure Resource Manager (ARM) based account management. You can leverage ARM-based Azure AI Video Indexer APIs to create, edit, and delete an account from the [Azure portal](https://portal.azure.com/#home).
> [!NOTE]
-> The Government cloud includes support for CRUD ARM based accounts from Azure Video Indexer API and from the Azure portal.
+> The Government cloud includes support for CRUD ARM based accounts from Azure AI Video Indexer API and from the Azure portal.
>
-> There is currently no support from the Azure Video Indexer [website](https://www.videoindexer.ai).
+> There is currently no support from the Azure AI Video Indexer [website](https://www.videoindexer.ai).
-For more information go to [create an Azure Video Indexer account](https://techcommunity.microsoft.com/t5/azure-ai/azure-video-analyzer-for-media-is-now-available-as-an-azure/ba-p/2912422).
+For more information go to [create an Azure AI Video Indexer account](https://techcommunity.microsoft.com/t5/azure-ai/azure-video-analyzer-for-media-is-now-available-as-an-azure/ba-p/2912422).
### People's clothing detection
-When indexing a video with Azure Video Indexer advanced video settings, you can view the new peopleΓÇÖs clothing detection capability. If there are people detected in your media file, you can now view the clothing type they are wearing through the media player.
+When indexing a video with Azure AI Video Indexer advanced video settings, you can view the new people's clothing detection capability. If there are people detected in your media file, you can now view the clothing type they are wearing through the media player.
### Face bounding box (preview)
You can enable the bounding boxes through the player.
## October 2021
-### Embed widgets in your app using Azure Video Indexer package
+### Embed widgets in your app using Azure AI Video Indexer package
-Use the new Azure Video Indexer (AVAM) `@azure/video-analyzer-for-media-widgets` npm package to add `insights` widgets to your app and customize it according to your needs.
+Use the new Azure AI Video Indexer (AVAM) `@azure/video-analyzer-for-media-widgets` npm package to add `insights` widgets to your app and customize it according to your needs.
-The new AVAM package enables you to easily embed and communicate between our widgets and your app, instead of adding an `iframe` element to embed the insights widget. Learn more in [Embed and customize Azure Video Indexer widgets in your app](https://techcommunity.microsoft.com/t5/azure-media-services/embed-and-customize-azure-video-analyzer-for-media-widgets-in/ba-p/2847063). 
+The new AVAM package enables you to easily embed and communicate between our widgets and your app, instead of adding an `iframe` element to embed the insights widget. Learn more in [Embed and customize Azure AI Video Indexer widgets in your app](https://techcommunity.microsoft.com/t5/azure-media-services/embed-and-customize-azure-video-analyzer-for-media-widgets-in/ba-p/2847063).
## August 2021
Fixed bugs related to CSS, theming and accessibility:
### Automatic Scaling of Media Reserved Units
-Starting August 1st 2021, Azure Video Indexer enabled [Media Reserved Units (MRUs)](/azure/media-services/latest/concept-media-reserved-units) auto scaling by [Azure Media Services](/azure/media-services/latest/media-services-overview), as a result you do not need to manage them through Azure Video Indexer. That will allow price optimization, for example price reduction in many cases, based on your business needs as it is being auto scaled.
+Starting August 1st 2021, Azure AI Video Indexer enabled [Media Reserved Units (MRUs)](/azure/media-services/latest/concept-media-reserved-units) auto scaling by [Azure Media Services](/azure/media-services/latest/media-services-overview). As a result, you don't need to manage MRUs through Azure AI Video Indexer. Because MRUs are auto scaled, this allows price optimization, for example price reduction in many cases, based on your business needs.
## June 2021
-### Azure Video Indexer deployed in six new regions
+### Azure AI Video Indexer deployed in six new regions
-You can now create an Azure Video Indexer paid account in France Central, Central US, Brazil South, West Central US, Korea Central, and Japan West regions.
+You can now create an Azure AI Video Indexer paid account in France Central, Central US, Brazil South, West Central US, Korea Central, and Japan West regions.
## May 2021 ### New source languages support for speech-to-text (STT), translation, and search
-Azure Video Indexer now supports STT, translation, and search in Chinese (Cantonese) ('zh-HK'), Dutch (Netherlands) ('Nl-NL'), Czech ('Cs-CZ'), Polish ('Pl-PL'), Swedish (Sweden) ('Sv-SE'), Norwegian('nb-NO'), Finnish('fi-FI'), Canadian French ('fr-CA'), Thai('th-TH'),
+Azure AI Video Indexer now supports STT, translation, and search in Chinese (Cantonese) ('zh-HK'), Dutch (Netherlands) ('Nl-NL'), Czech ('Cs-CZ'), Polish ('Pl-PL'), Swedish (Sweden) ('Sv-SE'), Norwegian('nb-NO'), Finnish('fi-FI'), Canadian French ('fr-CA'), Thai('th-TH'),
Arabic: (United Arab Emirates) ('ar-AE', 'ar-EG'), (Iraq) ('ar-IQ'), (Jordan) ('ar-JO'), (Kuwait) ('ar-KW'), (Lebanon) ('ar-LB'), (Oman) ('ar-OM'), (Qatar) ('ar-QA'), (Palestinian Authority) ('ar-PS'), (Syria) ('ar-SY'), and Turkish('tr-TR').
-These languages are available in both API and Azure Video Indexer website. Select the language from the combobox under **Video source language**.
+These languages are available in both API and Azure AI Video Indexer website. Select the language from the combobox under **Video source language**.
-### New theme for Azure Video Indexer
+### New theme for Azure AI Video Indexer
A new 'Azure' theme is available, along with the 'light' and 'dark' themes. To select a theme, click the gear icon in the top-right corner of the website, and find themes under **User settings**.
When indexing a video through our advanced video settings, you can view our new
## April 2021
-The Video Indexer service was renamed to Azure Video Indexer.
+The Video Indexer service was renamed to Azure AI Video Indexer.
### Improved upload experience in the portal
-Azure Video Indexer has a new upload experience in the [website](https://www.videoindexer.ai). To upload your media file, press the **Upload** button from the **Media files** tab.
+Azure AI Video Indexer has a new upload experience in the [website](https://www.videoindexer.ai). To upload your media file, press the **Upload** button from the **Media files** tab.
### New developer portal is available in gov-cloud
-The [Azure Video Indexer API developer portal](https://api-portal.videoindexer.ai) is now also available in Azure for US Government.
+The [Azure AI Video Indexer API developer portal](https://api-portal.videoindexer.ai) is now also available in Azure for US Government.
### Observed people tracing (preview)
-Azure Video Indexer now detects observed people in videos and provides information such as the location of the person in the video frame and the exact timestamp (start, end) when a person appears. The API returns the bounding box coordinates (in pixels) for each person instance detected, including its confidence.
+Azure AI Video Indexer now detects observed people in videos and provides information such as the location of the person in the video frame and the exact timestamp (start, end) when a person appears. The API returns the bounding box coordinates (in pixels) for each person instance detected, including its confidence.
For example, if a video contains a person, the detect operation will list the person's appearances together with their coordinates in the video frames. You can use this functionality to determine the person's path in a video. It also lets you determine whether there are multiple instances of the same person in a video. The newly added observed people tracing feature is available when indexing your file by choosing the **Advanced option** -> **Advanced video** or **Advanced video + audio** preset (under Video + audio indexing). Standard and basic indexing presets will not include this new advanced model.
-When you choose to see Insights of your video on the Azure Video Indexer website, the Observed People Tracing will show up on the page with all detected people thumbnails. You can choose a thumbnail of a person and see where the person appears in the video player.
+When you choose to see Insights of your video on the Azure AI Video Indexer website, the Observed People Tracing will show up on the page with all detected people thumbnails. You can choose a thumbnail of a person and see where the person appears in the video player.
-The feature is also available in the JSON file generated by Azure Video Indexer. For more information, see [Trace observed people in a video](observed-people-tracing.md).
+The feature is also available in the JSON file generated by Azure AI Video Indexer. For more information, see [Trace observed people in a video](observed-people-tracing.md).
### Detected acoustic events with **Audio Effects Detection** (preview)
-You can now see the detected acoustic events in the closed captions file. The file can be downloaded from the Azure Video Indexer website and is available as an artifact in the GetArtifact API.
+You can now see the detected acoustic events in the closed captions file. The file can be downloaded from the Azure AI Video Indexer website and is available as an artifact in the GetArtifact API.
**Audio Effects Detection** (preview) component detects various acoustics events and classifies them into different acoustic categories (such as Gunshot, Screaming, Crowd Reaction and more). For more information, see [Audio effects detection](audio-effects-detection.md).
The newly added bundle is available when indexing or re-indexing your file by ch
### New developer portal
-Azure Video Indexer has a new [developer portal](https://api-portal.videoindexer.ai/), try out the new Azure Video Indexer APIs and find all the relevant resources in one place: [GitHub repository](https://github.com/Azure-Samples/media-services-video-indexer), [Stack overflow](https://stackoverflow.com/questions/tagged/video-indexer), [Azure Video Indexer tech community](https://techcommunity.microsoft.com/t5/azure-media-services/bg-p/AzureMediaServices/label-name/Video%20Indexer) with relevant blog posts, [Azure Video Indexer FAQs](faq.yml), [User Voice](https://feedback.azure.com/d365community/forum/09041fae-0b25-ec11-b6e6-000d3a4f0858) to provide your feedback and suggest features, and ['CodePen' link](https://codepen.io/videoindexer) with widgets code samples.
+Azure AI Video Indexer has a new [developer portal](https://api-portal.videoindexer.ai/). Try out the new Azure AI Video Indexer APIs and find all the relevant resources in one place: [GitHub repository](https://github.com/Azure-Samples/media-services-video-indexer), [Stack overflow](https://stackoverflow.com/questions/tagged/video-indexer), [Azure AI Video Indexer tech community](https://techcommunity.microsoft.com/t5/azure-media-services/bg-p/AzureMediaServices/label-name/Video%20Indexer) with relevant blog posts, [Azure AI Video Indexer FAQs](faq.yml), [User Voice](https://feedback.azure.com/d365community/forum/09041fae-0b25-ec11-b6e6-000d3a4f0858) to provide your feedback and suggest features, and ['CodePen' link](https://codepen.io/videoindexer) with widgets code samples.
### Advanced customization capabilities for insight widget
-SDK is now available to embed Azure Video Indexer's insights widget in your own service and customize its style and data. The SDK supports the standard Azure Video Indexer insights widget and a fully customizable insights widget. Code sample is available in [Azure Video Indexer GitHub repository](https://github.com/Azure-Samples/media-services-video-indexer/tree/master/Embedding%20widgets/widget-customization). With this advanced customization capabilities, solution developer can apply custom styling and bring customerΓÇÖs own AI data and present that in the insight widget (with or without Azure Video Indexer insights).
+An SDK is now available to embed Azure AI Video Indexer's insights widget in your own service and customize its style and data. The SDK supports the standard Azure AI Video Indexer insights widget and a fully customizable insights widget. A code sample is available in the [Azure AI Video Indexer GitHub repository](https://github.com/Azure-Samples/media-services-video-indexer/tree/master/Embedding%20widgets/widget-customization). With these advanced customization capabilities, solution developers can apply custom styling and bring a customer's own AI data and present it in the insight widget (with or without Azure AI Video Indexer insights).
-### Azure Video Indexer deployed in the US North Central, US West and Canada Central
+### Azure AI Video Indexer deployed in the US North Central, US West and Canada Central
-You can now create an Azure Video Indexer paid account in the US North Central, US West and Canada Central regions
+You can now create an Azure AI Video Indexer paid account in the US North Central, US West, and Canada Central regions.
### New source languages support for speech-to-text (STT), translation and search
-Azure Video Indexer now supports STT, translation and search in Danish ('da-DK'), Norwegian('nb-NO'), Swedish('sv-SE'), Finnish('fi-FI'), Canadian French ('fr-CA'), Thai('th-TH'), Arabic ('ar-BH', 'ar-EG', 'ar-IQ', 'ar-JO', 'ar-KW', 'ar-LB', 'ar-OM', 'ar-QA', 'ar-S', and 'ar-SY'), and Turkish('tr-TR'). Those languages are available in both API and Azure Video Indexer website.
+Azure AI Video Indexer now supports STT, translation and search in Danish ('da-DK'), Norwegian('nb-NO'), Swedish('sv-SE'), Finnish('fi-FI'), Canadian French ('fr-CA'), Thai('th-TH'), Arabic ('ar-BH', 'ar-EG', 'ar-IQ', 'ar-JO', 'ar-KW', 'ar-LB', 'ar-OM', 'ar-QA', 'ar-S', and 'ar-SY'), and Turkish('tr-TR'). Those languages are available in both API and Azure AI Video Indexer website.
-### Search by Topic in Azure Video Indexer Website
+### Search by Topic in Azure AI Video Indexer Website
-You can now use the search feature, at the top of the [Azure Video Indexer website](https://www.videoindexer.ai/account/login) page, to search for videos with specific topics.
+You can now use the search feature, at the top of the [Azure AI Video Indexer website](https://www.videoindexer.ai/account/login) page, to search for videos with specific topics.
## February 2021 ### Multiple account owners
-Account owner role was added to Azure Video Indexer. You can add, change, and remove users; change their role. For details on how to share an account, see [Invite users](restricted-viewer-role.md#share-the-account).
+The account owner role was added to Azure AI Video Indexer. You can add, change, and remove users, and change their roles. For details on how to share an account, see [Invite users](restricted-viewer-role.md#share-the-account).
### Audio event detection (public preview) > [!NOTE] > This feature is only available in trial accounts.
-Azure Video Indexer now detects the following audio effects in the non-speech segments of the content: gunshot, glass shatter, alarm, siren, explosion, dog bark, screaming, laughter, crowd reactions (cheering, clapping, and booing) and Silence.
+Azure AI Video Indexer now detects the following audio effects in the non-speech segments of the content: gunshot, glass shatter, alarm, siren, explosion, dog bark, screaming, laughter, crowd reactions (cheering, clapping, and booing), and silence.
The newly added audio effects feature is available when indexing your file by choosing the **Advanced option** -> **Advanced audio** preset (under Video + audio indexing). Standard indexing will only include **silence** and **crowd reaction**. The **clapping** event type that was included in the previous audio effects model is now extracted as part of the **crowd reaction** event type.
-When you choose to see **Insights** of your video on the [Azure Video Indexer](https://www.videoindexer.ai/) website, the Audio Effects show up on the page.
+When you choose to see **Insights** of your video on the [Azure AI Video Indexer](https://www.videoindexer.ai/) website, the Audio Effects show up on the page.
:::image type="content" source="./media/release-notes/audio-detection.png" alt-text="Audio event detection":::
In addition, the model now includes people and locations in-context which are no
## January 2021
-### Azure Video Indexer is deployed on US Government cloud
+### Azure AI Video Indexer is deployed on US Government cloud
-You can now create an Azure Video Indexer paid account on US government cloud in Virginia and Arizona regions.
-Azure Video Indexer trial offering isn't available in the mentioned region. For more information go to Azure Video Indexer Documentation.
+You can now create an Azure AI Video Indexer paid account on US government cloud in Virginia and Arizona regions.
+The Azure AI Video Indexer trial offering isn't available in the mentioned regions. For more information, go to the Azure AI Video Indexer documentation.
-### Azure Video Indexer deployed in the India Central region
+### Azure AI Video Indexer deployed in the India Central region
-You can now create an Azure Video Indexer paid account in the India Central region.
+You can now create an Azure AI Video Indexer paid account in the India Central region.
-### New Dark Mode for the Azure Video Indexer website experience
+### New Dark Mode for the Azure AI Video Indexer website experience
-The Azure Video Indexer website experience is now available in dark mode.
+The Azure AI Video Indexer website experience is now available in dark mode.
To enable the dark mode open the settings panel and toggle on the **Dark Mode** option. :::image type="content" source="./media/release-notes/dark-mode.png" alt-text="Dark mode setting"::: ## December 2020
-### Azure Video Indexer deployed in the Switzerland West and Switzerland North
+### Azure AI Video Indexer deployed in the Switzerland West and Switzerland North
-You can now create an Azure Video Indexer paid account in the Switzerland West and Switzerland North regions.
+You can now create an Azure AI Video Indexer paid account in the Switzerland West and Switzerland North regions.
## October 2020
-### Planned Azure Video Indexer website authenticatication changes
+### Planned Azure AI Video Indexer website authentication changes
-Starting March 1st 2021, you no longer will be able to sign up and sign in to the [Azure Video Indexer website](https://www.videoindexer.ai/) [developer portal](video-indexer-use-apis.md) using Facebook or LinkedIn.
+Starting March 1st 2021, you will no longer be able to sign up and sign in to the [Azure AI Video Indexer website](https://www.videoindexer.ai/) [developer portal](video-indexer-use-apis.md) using Facebook or LinkedIn.
You will be able to sign up and sign in using one of these providers: Azure AD, Microsoft, and Google. > [!NOTE]
-> The Azure Video Indexer accounts connected to LinkedIn and Facebook will not be accessible after March 1st 2021.
+> The Azure AI Video Indexer accounts connected to LinkedIn and Facebook will not be accessible after March 1st 2021.
>
-> You should [invite](restricted-viewer-role.md#share-the-account) an Azure AD, Microsoft, or Google email you own to the Azure Video Indexer account so you will still have access. You can add an additional owner of supported providers, as described in [invite](restricted-viewer-role.md#share-the-account). <br/>
+> You should [invite](restricted-viewer-role.md#share-the-account) an Azure AD, Microsoft, or Google email you own to the Azure AI Video Indexer account so you will still have access. You can add an additional owner of supported providers, as described in [invite](restricted-viewer-role.md#share-the-account). <br/>
> Alternatively, you can create a paid account and migrate the data. ## August 2020
-### Mobile design for the Azure Video Indexer website
+### Mobile design for the Azure AI Video Indexer website
-The Azure Video Indexer website experience is now supporting mobile devices. The user experience is responsive to adapt to your mobile screen size (excluding customization UIs).
+The Azure AI Video Indexer website experience now supports mobile devices. The user experience is responsive and adapts to your mobile screen size (excluding customization UIs).
### Accessibility improvements and bug fixes
-As part of WCAG (Web Content Accessibility guidelines), the Azure Video Indexer website experience is aligned with grade C, as part of Microsoft Accessibility standards. Several bugs and improvements related to keyboard navigation, programmatic access, and screen reader were solved.
+As part of WCAG (Web Content Accessibility Guidelines), the Azure AI Video Indexer website experience is aligned with grade C of the Microsoft Accessibility standards. Several bugs related to keyboard navigation, programmatic access, and screen readers were fixed, and related improvements were made.
## July 2020
Multi-language identification is moved from preview to GA and ready for producti
There is no pricing impact related to the "Preview to GA" transition.
-### Azure Video Indexer website improvements
+### Azure AI Video Indexer website improvements
#### Adjustments in the video gallery
The label tagger was upgraded and now includes more visual labels that can be id
## May 2020
-### Azure Video Indexer deployed in the East US
+### Azure AI Video Indexer deployed in the East US
-You can now create an Azure Video Indexer paid account in the East US region.
+You can now create an Azure AI Video Indexer paid account in the East US region.
-### Azure Video Indexer URL
+### Azure AI Video Indexer URL
-Azure Video Indexer regional endpoints were all unified to start only with www. No action item is required.
+Azure AI Video Indexer regional endpoints were all unified to start only with www. No action item is required.
-From now on, you reach www.videoindexer.ai whether it is for embedding widgets or logging into the [Azure Video Indexer](https://www.videoindexer.ai/) website.
+From now on, you reach www.videoindexer.ai whether it is for embedding widgets or logging into the [Azure AI Video Indexer](https://www.videoindexer.ai/) website.
-Also wus.videoindexer.ai would be redirected to www. More information is available in [Embed Azure Video Indexer widgets in your apps](video-indexer-embed-widgets.md).
+Also, wus.videoindexer.ai will be redirected to www. More information is available in [Embed Azure AI Video Indexer widgets in your apps](video-indexer-embed-widgets.md).
## April 2020
A new player skin launched with updated design.
* [Get-Accounts-Authorization](https://api-portal.videoindexer.ai/api-details#api=Operations&operation=Get-Accounts-Authorization) * [Get-Accounts-With-Token](https://api-portal.videoindexer.ai/api-details#api=Operations&operation=Get-Accounts-With-Token)
- The Account object has a `Url` field pointing to the location of the [Azure Video Indexer website](https://www.videoindexer.ai/).
+ The Account object has a `Url` field pointing to the location of the [Azure AI Video Indexer website](https://www.videoindexer.ai/).
For paid accounts, the `Url` field currently points to an internal URL instead of the public website.
-In the coming weeks we will change it and return the [Azure Video Indexer website](https://www.videoindexer.ai/) URL for all accounts (trial and paid).
+In the coming weeks, we'll change it to return the [Azure AI Video Indexer website](https://www.videoindexer.ai/) URL for all accounts (trial and paid).
- Do not use the internal URLs, you should be using the [Azure Video Indexer public APIs](https://api-portal.videoindexer.ai/).
-* If you are embedding Azure Video Indexer URLs in your applications and the URLs are not pointing to the [Azure Video Indexer website](https://www.videoindexer.ai/) or the Azure Video Indexer API endpoint (`https://api.videoindexer.ai`) but rather to a regional endpoint (for example, `https://wus2.videoindexer.ai`), regenerate the URLs.
+ Do not use the internal URLs; use the [Azure AI Video Indexer public APIs](https://api-portal.videoindexer.ai/) instead.
+* If you are embedding Azure AI Video Indexer URLs in your applications and the URLs are not pointing to the [Azure AI Video Indexer website](https://www.videoindexer.ai/) or the Azure AI Video Indexer API endpoint (`https://api.videoindexer.ai`) but rather to a regional endpoint (for example, `https://wus2.videoindexer.ai`), regenerate the URLs.
You can do it by either:
- * Replacing the URL with a URL pointing to the Azure Video Indexer widget APIs (for example, the [insights widget](https://api-portal.videoindexer.ai/api-details#api=Operations&operation=Get-Video-Insights-Widget))
- * Using the Azure Video Indexer website to generate a new embedded URL:
+ * Replacing the URL with a URL pointing to the Azure AI Video Indexer widget APIs (for example, the [insights widget](https://api-portal.videoindexer.ai/api-details#api=Operations&operation=Get-Video-Insights-Widget))
+ * Using the Azure AI Video Indexer website to generate a new embedded URL:
Press **Play** to get to your video's page -> click the **&lt;/&gt; Embed** button -> copy the URL into your application:
In the coming weeks we will change it and return the [Azure Video Indexer websit
### Custom language support for additional languages
-Azure Video Indexer now supports custom language models for `ar-SY` , `en-UK`, and `en-AU` (API only).
+Azure AI Video Indexer now supports custom language models for `ar-SY`, `en-UK`, and `en-AU` (API only).
### Delete account timeframe action update Delete account action now deletes the account within 90 days instead of 48 hours.
-### New Azure Video Indexer GitHub repository
+### New Azure AI Video Indexer GitHub repository
-A new Azure Video Indexer GitHub with different projects, getting started guides and code samples is now available:
+A new Azure AI Video Indexer GitHub repository, with various projects, getting-started guides, and code samples, is now available:
https://github.com/Azure-Samples/media-services-video-indexer ### Swagger update
-Azure Video Indexer unified **authentications** and **operations** into a single [Azure Video Indexer OpenAPI Specification (swagger)](https://api-portal.videoindexer.ai/api-details#api=Operations&operation). Developers can find the APIs in the [Azure Video Indexer developer portal](https://api-portal.videoindexer.ai/).
+Azure AI Video Indexer unified **authentications** and **operations** into a single [Azure AI Video Indexer OpenAPI Specification (swagger)](https://api-portal.videoindexer.ai/api-details#api=Operations&operation). Developers can find the APIs in the [Azure AI Video Indexer developer portal](https://api-portal.videoindexer.ai/).
## December 2019
Azure Video Indexer unified **authentications** and **operations** into a single
Update a specific section in the transcript using the [Update-Video-Index](https://api-portal.videoindexer.ai/api-details#api=Operations&operation=Update-Video-Index) API.
-### Fix account configuration from the Azure Video Indexer website
+### Fix account configuration from the Azure AI Video Indexer website
You can now update Media Services connection configuration in order to self-help with issues like:
You can now update Media Services connection configuration in order to self-help
* password changes * Media Services resources were moved between subscriptions
-To fix the account configuration, in the Azure Video Indexer website, navigate to Settings > Account tab (as owner).
+To fix the account configuration, in the Azure AI Video Indexer website, navigate to Settings > Account tab (as owner).
### Configure the custom vision account
-Configure the custom vision account on paid accounts using the Azure Video Indexer website (previously, this was only supported by API). To do that, sign in to the Azure Video Indexer website, choose Model Customization > <*model*> > Configure.
+Configure the custom vision account on paid accounts using the Azure AI Video Indexer website (previously, this was only supported by API). To do that, sign in to the Azure AI Video Indexer website, choose Model Customization > <*model*> > Configure.
### Scenes, shots and keyframes – now in one insight pane
Scenes, shots, and keyframes are now merged into one insight for easier consumpt
### Notification about a long video name
-When a video name is longer than 80 characters, Azure Video Indexer shows a descriptive error on upload.
+When a video name is longer than 80 characters, Azure AI Video Indexer shows a descriptive error on upload.
### Streaming endpoint is disabled notification
-When streaming endpoint is disabled, Azure Video Indexer will show a descriptive error on the player page.
+When the streaming endpoint is disabled, Azure AI Video Indexer shows a descriptive error on the player page.
### Error handling improvement
Status code 409 will now be returned from [Re-Index Video](https://api-portal.vi
* Korean custom language models support
- Azure Video Indexer now supports custom language models in Korean (`ko-KR`) in both the API and portal.
+ Azure AI Video Indexer now supports custom language models in Korean (`ko-KR`) in both the API and portal.
* New languages supported for speech-to-text (STT)
- Azure Video Indexer APIs now support STT in Arabic Levantine (ar-SY), English UK regional language (en-GB), and English Australian regional language (en-AU).
+ Azure AI Video Indexer APIs now support STT in Arabic Levantine (ar-SY), English UK regional language (en-GB), and English Australian regional language (en-AU).
For video upload, we replaced zh-HANS with zh-CN; both are supported, but zh-CN is recommended and more accurate.
Multiple advancements announced at IBC 2019:
## August 2019 updates
-### Azure Video Indexer deployed in UK South
+### Azure AI Video Indexer deployed in UK South
-You can now create an Azure Video Indexer paid account in the UK south region.
+You can now create an Azure AI Video Indexer paid account in the UK South region.
### New Editorial Shot Type insights available
New tags added to video shots provide editorial "shot types" to identify th
### New People and Locations entities extraction available
-Azure Video Indexer identifies named locations and people via natural language processing (NLP) from the videoΓÇÖs OCR and transcription. Azure Video Indexer uses machine learning algorithm to recognize when specific locations (for example, the Eiffel Tower) or people (for example, John Doe) are being called out in a video.
+Azure AI Video Indexer identifies named locations and people via natural language processing (NLP) from the video's OCR and transcription. Azure AI Video Indexer uses a machine learning algorithm to recognize when specific locations (for example, the Eiffel Tower) or people (for example, John Doe) are being called out in a video.
### Keyframes extraction in native resolution
-Keyframes extracted by Azure Video Indexer are available in the original resolution of the video.
+Keyframes extracted by Azure AI Video Indexer are available in the original resolution of the video.
### GA for training custom face models from images
Projects can now be created based on videos indexed in different languages (API
### Editor as a widget
-The Azure Video Indexer AI-editor is now available as a widget to be embedded in customer applications.
+The Azure AI Video Indexer AI-editor is now available as a widget to be embedded in customer applications.
### Update custom language model from closed caption file from the portal
Customers can provide VTT, SRT, and TTML file formats as input for language mode
## June 2019
-### Azure Video Indexer deployed to Japan East
+### Azure AI Video Indexer deployed to Japan East
-You can now create an Azure Video Indexer paid account in the Japan East region.
+You can now create an Azure AI Video Indexer paid account in the Japan East region.
### Create and repair account API (Preview)
You can now see a preview of all the insights that are selected as a result of c
[Create custom language model](https://api-portal.videoindexer.ai/api-details#api=Operations&operation=Create-Language-Model) and [Update custom language models](https://api-portal.videoindexer.ai/api-details#api=Operations&operation=Update-Language-Model) APIs now support VTT, SRT, and TTML file formats as input for language models.
-When calling the [Update Video transcript API](https://api-portal.videoindexer.ai/api-details#api=Operations&operation=Update-Video-Transcript), the transcript is added automatically. The training model associated with the video is updated automatically as well. For information on how to customize and train your language models, see [Customize a Language model with Azure Video Indexer](customize-language-model-overview.md).
+When calling the [Update Video transcript API](https://api-portal.videoindexer.ai/api-details#api=Operations&operation=Update-Video-Transcript), the transcript is added automatically. The training model associated with the video is updated automatically as well. For information on how to customize and train your language models, see [Customize a Language model with Azure AI Video Indexer](customize-language-model-overview.md).
### New download transcript formats – TXT and CSV
-In addition to the closed captioning format already supported (SRT, VTT, and TTML), Azure Video Indexer now supports downloading the transcript in TXT and CSV formats.
+In addition to the closed captioning formats already supported (SRT, VTT, and TTML), Azure AI Video Indexer now supports downloading the transcript in TXT and CSV formats.
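For example, a minimal sketch of requesting the TXT transcript over REST might look like the following. The endpoint shape, the `format` values, and the placeholder region, IDs, and access token are assumptions based on the Get video captions operation, not verified sample code:

```javascript
// Minimal sketch (not an official sample): download the transcript in TXT format
// via the REST API. The endpoint shape, the `format` values, and the placeholder
// values below are assumptions.
const location = "<region>";          // placeholder, for example your account's region
const accountId = "<account-id>";     // placeholder
const videoId = "<video-id>";         // placeholder
const accessToken = "<access-token>"; // placeholder, obtained from the authorization APIs

const url = `https://api.videoindexer.ai/${location}/Accounts/${accountId}` +
            `/Videos/${videoId}/Captions?format=Txt&accessToken=${accessToken}`;

fetch(url)
  .then((response) => response.text())
  .then((transcript) => console.log(transcript))
  .catch((error) => console.error("Transcript download failed:", error));
```

Under the same assumption, changing `format=Txt` to `Csv` (or one of the captioning formats) would request the other supported outputs.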
## Next steps
azure-video-indexer Resource Health https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/resource-health.md
Last updated 05/12/2023
# Diagnose Video Indexer resource issues with Azure Resource Health
-[Azure Resource Health](../service-health/resource-health-overview.md) can help you diagnose and get support for service problems that affect your Azure Video Indexer resources. Resource health is updated every 1-2 minutes and reports the current and past health of your resources. For additional details on how health is assessed, review the [full list of resource types and health checks](../service-health/resource-health-checks-resource-types.md#microsoftnetworkapplicationgateways) in Azure Resource Health.
+[Azure Resource Health](../service-health/resource-health-overview.md) can help you diagnose and get support for service problems that affect your Azure AI Video Indexer resources. Resource health is updated every 1-2 minutes and reports the current and past health of your resources. For additional details on how health is assessed, review the [full list of resource types and health checks](../service-health/resource-health-checks-resource-types.md#microsoftnetworkapplicationgateways) in Azure Resource Health.
## Get started
The health status is displayed as one of the following statuses:
An **Available** status means the service hasn't detected any events that affect the health of the resource. You see the **Recently resolved** notification in cases where the resource has recovered from unplanned downtime during the last 24 hours. > [!div class="mx-imgBorder"]
-> :::image type="content" source="./media/resource-health/available-status.png" alt-text="Diagram of Azure Video Indexer resource health." :::
+> :::image type="content" source="./media/resource-health/available-status.png" alt-text="Diagram of Azure AI Video Indexer resource health." :::
### Unavailable
azure-video-indexer Restricted Viewer Role https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/restricted-viewer-role.md
Title: Manage access to an Azure Video Indexer account
+ Title: Manage access to an Azure AI Video Indexer account
description: This article talks about Video Indexer restricted viewer built-in role. This role is an account level permission, which allows users to grant restricted access to a specific user or security group. Last updated 12/14/2022
-# Manage access to an Azure Video Indexer account
+# Manage access to an Azure AI Video Indexer account
-In this article, you'll learn how to manage access (authorization) to an Azure Video Indexer account. As Azure Video IndexerΓÇÖs role management differs depending on the Video Indexer Account type, this document will first cover access management of regular accounts (ARM-based) and then of Classic and Trial accounts.
+In this article, you'll learn how to manage access (authorization) to an Azure AI Video Indexer account. Because Azure AI Video Indexer's role management differs depending on the Video Indexer account type, this article first covers access management for regular (ARM-based) accounts and then for Classic and Trial accounts.
-To see your accounts, select **User Accounts** at the top-right of the [Azure Video Indexer website](https://videoindexer.ai/). Classic and Trial accounts will have a label with the account type to the right of the account name.
+To see your accounts, select **User Accounts** at the top-right of the [Azure AI Video Indexer website](https://videoindexer.ai/). Classic and Trial accounts will have a label with the account type to the right of the account name.
> [!div class="mx-imgBorder"] > :::image type="content" source="./media/restricted-viewer-role/accounts.png" alt-text="Image of accounts.":::
To see your accounts, select **User Accounts** at the top-right of the [Azure Vi
Users with owner or administrator Azure Active Directory (Azure AD) permissions can assign roles to Azure AD users or security groups for an account. For information on how to assign roles, see [Assign Azure roles using the Azure portal](../role-based-access-control/role-assignments-portal.md).
-Azure Video Indexer provides three built-in roles. You can learn more about [Azure built-in roles](../role-based-access-control/built-in-roles.md). Azure Video Indexer doesn't support the creation of custom roles.
+Azure AI Video Indexer provides three built-in roles. You can learn more about [Azure built-in roles](../role-based-access-control/built-in-roles.md). Azure AI Video Indexer doesn't support the creation of custom roles.
**Owner** - This role grants full access to manage all resources, including the ability to assign roles to determine who has access to resources. **Contributor** - This role has the same permissions as an owner, except that it can't control who has access to resources.
-**Video Indexer Restricted Viewer** - This role is unique to Azure Video Indexer and has permissions to view videos and their insights but can't perform edits or changes or user management operations. This role enables collaboration and user access to insights through the Video Indexer website while limiting their ability to make changes to the environment.
+**Video Indexer Restricted Viewer** - This role is unique to Azure AI Video Indexer and has permissions to view videos and their insights but can't perform edits or changes or user management operations. This role enables collaboration and user access to insights through the Video Indexer website while limiting their ability to make changes to the environment.
Users with this role can perform the following tasks:
Users with this role are unable to perform the following tasks:
Disabled features appear greyed out to users with **Restricted Viewer** access. When a user navigates to an unauthorized page, they receive a pop-up message that they don't have access. > [!Important]
-> The Restricted Viewer role is only available in Azure Video Indexer ARM accounts.
+> The Restricted Viewer role is only available in Azure AI Video Indexer ARM accounts.
> ### Manage account access (for account owners)
azure-video-indexer Scenes Shots Keyframes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/scenes-shots-keyframes.md
Title: Azure Video Indexer scenes, shots, and keyframes
-description: This topic gives an overview of the Azure Video Indexer scenes, shots, and keyframes.
+ Title: Azure AI Video Indexer scenes, shots, and keyframes
+description: This topic gives an overview of the Azure AI Video Indexer scenes, shots, and keyframes.
Last updated 06/07/2022
# Scenes, shots, and keyframes
-Azure Video Indexer supports segmenting videos into temporal units based on structural and semantic properties. This capability enables customers to easily browse, manage, and edit their video content based on varying granularities. For example, based on scenes, shots, and keyframes, described in this topic.
+Azure AI Video Indexer supports segmenting videos into temporal units based on structural and semantic properties. This capability enables customers to easily browse, manage, and edit their video content at varying levels of granularity, for example, by scenes, shots, and keyframes, as described in this topic.
![Scenes, shots, and keyframes](./media/scenes-shots-keyframes/scenes-shots-keyframes.png) ## Scene detection
-Azure Video Indexer determines when a scene changes in video based on visual cues. A scene depicts a single event and it is composed of a series of consecutive shots, which are semantically related. A scene thumbnail is the first keyframe of its underlying shot. Azure Video Indexer segments a video into scenes based on color coherence across consecutive shots and retrieves the beginning and end time of each scene. Scene detection is considered a challenging task as it involves quantifying semantic aspects of videos.
+Azure AI Video Indexer determines when a scene changes in a video based on visual cues. A scene depicts a single event and is composed of a series of consecutive shots, which are semantically related. A scene thumbnail is the first keyframe of its underlying shot. Azure AI Video Indexer segments a video into scenes based on color coherence across consecutive shots and retrieves the beginning and end time of each scene. Scene detection is considered a challenging task as it involves quantifying semantic aspects of videos.
> [!NOTE] > Applicable to videos that contain at least 3 scenes. ## Shot detection
-Azure Video Indexer determines when a shot changes in the video based on visual cues, by tracking both abrupt and gradual transitions in the color scheme of adjacent frames. The shot's metadata includes a start and end time, as well as the list of keyframes included in that shot. The shots are consecutive frames taken from the same camera at the same time.
+Azure AI Video Indexer determines when a shot changes in the video based on visual cues, by tracking both abrupt and gradual transitions in the color scheme of adjacent frames. The shot's metadata includes a start and end time, as well as the list of keyframes included in that shot. The shots are consecutive frames taken from the same camera at the same time.
## Keyframe detection
-Azure Video Indexer selects the frame(s) that best represent each shot. Keyframes are the representative frames selected from the entire video based on aesthetic properties (for example, contrast and stableness). Azure Video Indexer retrieves a list of keyframe IDs as part of the shot's metadata, based on which customers can extract the keyframe as a high resolution image.
+Azure AI Video Indexer selects the frame(s) that best represent each shot. Keyframes are the representative frames selected from the entire video based on aesthetic properties (for example, contrast and stableness). Azure AI Video Indexer retrieves a list of keyframe IDs as part of the shot's metadata, based on which customers can extract the keyframe as a high resolution image.
### Extracting Keyframes
To extract high-resolution keyframes for your video, you must first upload and i
![Keyframes](./media/scenes-shots-keyframes/extracting-keyframes.png)
-#### With the Azure Video Indexer website
+#### With the Azure AI Video Indexer website
-To extract keyframes using the Azure Video Indexer website, upload and index your video. Once the indexing job is complete, click on the **Download** button and select **Artifacts (ZIP)**. This will download the artifacts folder to your computer (make sure to view the warning regarding artifacts below). Unzip and open the folder. In the *_KeyframeThumbnail* folder, and you will find all of the keyframes that were extracted from your video.
+To extract keyframes using the Azure AI Video Indexer website, upload and index your video. Once the indexing job is complete, click on the **Download** button and select **Artifacts (ZIP)**. This will download the artifacts folder to your computer (make sure to view the warning regarding artifacts below). Unzip and open the folder. In the *_KeyframeThumbnail* folder, you'll find all of the keyframes that were extracted from your video.
![Screenshot that shows the "Download" drop-down with "Artifacts" selected.](./media/scenes-shots-keyframes/extracting-keyframes2.png) [!INCLUDE [artifacts](./includes/artifacts.md)]
-#### With the Azure Video Indexer API
+#### With the Azure AI Video Indexer API
To get keyframes using the Video Indexer API, upload and index your video using the [Upload Video](https://api-portal.videoindexer.ai/api-details#api=Operations&operation=Upload-Video) call. Once the indexing job is complete, call [Get Video Index](https://api-portal.videoindexer.ai/api-details#api=Operations&operation=Get-Video-Index). This will give you all of the insights that Video Indexer extracted from your content in a JSON file.
Keyframes are associated with shots in the output JSON.
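As an illustration, a minimal sketch of walking the index JSON for keyframe thumbnail IDs might look like the following. The JSON path (`videos[0].insights.shots[].keyFrames[].instances[].thumbnailId`), the thumbnail endpoint shape, and the placeholder values are assumptions, not verified sample code:

```javascript
// Minimal sketch (not an official sample): list keyframe thumbnail IDs from the
// index JSON and print a download URL for each keyframe image. The JSON path and
// the thumbnail endpoint shape used here are assumptions; the values are placeholders.
const location = "<region>";          // placeholder
const accountId = "<account-id>";     // placeholder
const videoId = "<video-id>";         // placeholder
const accessToken = "<access-token>"; // placeholder

const base = `https://api.videoindexer.ai/${location}/Accounts/${accountId}/Videos/${videoId}`;

async function listKeyframeThumbnails() {
  // Get the full video index (insights) as JSON.
  const index = await (await fetch(`${base}/Index?accessToken=${accessToken}`)).json();
  const shots = index.videos?.[0]?.insights?.shots ?? [];

  // Collect every thumbnail ID referenced by the shots' keyframes.
  const thumbnailIds = shots.flatMap((shot) =>
    (shot.keyFrames ?? []).flatMap((kf) =>
      (kf.instances ?? []).map((instance) => instance.thumbnailId)));

  // Each ID could then be fetched as an image (assumed thumbnail endpoint).
  for (const id of thumbnailIds) {
    console.log(`${base}/Thumbnails/${id}?accessToken=${accessToken}&format=Jpeg`);
  }
}

listKeyframeThumbnails().catch(console.error);
```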
The shot type associated with an individual shot in the insights JSON represents its editorial type. You may find these shot type characteristics useful when editing videos into clips, trailers, or when searching for a specific style of keyframe for artistic purposes. The different types are determined based on analysis of the first keyframe of each shot. Shots are identified by the scale, size, and location of the faces appearing in their first keyframe.
-The shot size and scale are determined based on the distance between the camera and the faces appearing in the frame. Using these properties, Azure Video Indexer detects the following shot types:
+The shot size and scale are determined based on the distance between the camera and the faces appearing in the frame. Using these properties, Azure AI Video Indexer detects the following shot types:
* Wide: shows an entire person's body. * Medium: shows a person's upper-body and face. * Close up: mainly shows a person's face. * Extreme close-up: shows a person's face filling the screen.
-Shot types can also be determined by location of the subject characters with respect to the center of the frame. This property defines the following shot types in Azure Video Indexer:
+Shot types can also be determined by location of the subject characters with respect to the center of the frame. This property defines the following shot types in Azure AI Video Indexer:
* Left face: a person appears in the left side of the frame. * Center face: a person appears in the central region of the frame.
Additional characteristics:
## Next steps
-[Examine the Azure Video Indexer output produced by the API](video-indexer-output-json-v2.md#scenes)
+[Examine the Azure AI Video Indexer output produced by the API](video-indexer-output-json-v2.md#scenes)
azure-video-indexer Storage Behind Firewall https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/storage-behind-firewall.md
Title: Use Video Indexer with storage behind firewall
-description: This article gives an overview how to configure Azure Video Indexer to use storage behind firewall.
+description: This article gives an overview of how to configure Azure AI Video Indexer to use storage behind a firewall.
Last updated 03/21/2023
azure-video-indexer Switch Tenants Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/switch-tenants-portal.md
Title: Switch between tenants on the Azure Video Indexer website
-description: This article shows how to switch between tenants in the Azure Video Indexer website.
+ Title: Switch between tenants on the Azure AI Video Indexer website
+description: This article shows how to switch between tenants in the Azure AI Video Indexer website.
Last updated 01/24/2023
Last updated 01/24/2023
When working with multiple tenants/directories in the Azure environment, a user might need to switch between the different directories.
-When logging in the Azure Video Indexer website, a default directory will load and the relevant accounts and list them in the **Account list**.
+When signing in to the Azure AI Video Indexer website, a default directory loads and the relevant accounts are listed in the **Account list**.
> [!Note] > Trial accounts and Classic accounts are global and not tenant-specific. Hence, the tenant switching described in this article only applies to your ARM accounts.
When logging in the Azure Video Indexer website, a default directory will load a
This article shows two options to solve the same problem - how to switch tenants: -- When starting [from within the Azure Video Indexer website](#switch-tenants-from-within-the-azure-video-indexer-website).-- When starting [from outside of the Azure Video Indexer website](#switch-tenants-from-outside-the-azure-video-indexer-website).
+- When starting [from within the Azure AI Video Indexer website](#switch-tenants-from-within-the-azure-ai-video-indexer-website).
+- When starting [from outside of the Azure AI Video Indexer website](#switch-tenants-from-outside-the-azure-ai-video-indexer-website).
-## Switch tenants from within the Azure Video Indexer website
+## Switch tenants from within the Azure AI Video Indexer website
-1. To switch between directories in the [Azure Video Indexer](https://www.videoindexer.ai/), open the **User menu** > select **Switch directory**.
+1. To switch between directories in the [Azure AI Video Indexer](https://www.videoindexer.ai/), open the **User menu** > select **Switch directory**.
> [!div class="mx-imgBorder"] > ![Screenshot of a user name.](./media/switch-directory/avi-user-switch.png)
This article shows two options to solve the same problem - how to switch tenants
> [!div class="mx-imgBorder"] > ![Screenshot of a tenant list.](./media/switch-directory/tenants.png)
- Once clicked, the logged-in credentials will be used to relog-in to the Azure Video Indexer website with the new directory.
+ Once clicked, the signed-in credentials are used to sign in to the Azure AI Video Indexer website again with the new directory.
-## Switch tenants from outside the Azure Video Indexer website
+## Switch tenants from outside the Azure AI Video Indexer website
-This section shows how to get the domain name from the Azure portal. You can then sign in with it into th the [Azure Video Indexer](https://www.videoindexer.ai/) website.
+This section shows how to get the domain name from the Azure portal. You can then use it to sign in to the [Azure AI Video Indexer](https://www.videoindexer.ai/) website.
### Get the domain name
-1. In the [Azure portal](https://portal.azure.com/), sign in with the same subscription tenant in which your Azure Video Indexer Azure Resource Manager (ARM) account was created.
+1. In the [Azure portal](https://portal.azure.com/), sign in with the same subscription tenant in which your Azure AI Video Indexer Azure Resource Manager (ARM) account was created.
1. Hover over your account name (in the top-right corner). > [!div class="mx-imgBorder"]
If you want to see domains for all of your directories and switch between them,
### Sign in with the correct domain name on the AVI website
-1. Go to the [Azure Video Indexer](https://www.videoindexer.ai/) website.
+1. Go to the [Azure AI Video Indexer](https://www.videoindexer.ai/) website.
1. Press **Sign out** after pressing the button in the top-right corner. 1. On the AVI website, press **Sign in** and choose the Azure AD account.
azure-video-indexer Topics Inference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/topics-inference.md
Title: Azure Video Indexer topics inference overview -
-description: An introduction to Azure Video Indexer topics inference component responsibly.
+ Title: Azure AI Video Indexer topics inference overview
+
+description: An introduction to Azure AI Video Indexer topics inference component responsibly.
# Topics inference
-Topics inference is an Azure Video Indexer AI feature that automatically creates inferred insights derived from the transcribed audio, OCR content in visual text, and celebrities recognized in the video using the Video Indexer facial recognition model. The extracted Topics and categories (when available) are listed in the Insights tab. To jump to the topic in the media file, click a Topic -> Play Previous or Play Next.
+Topics inference is an Azure AI Video Indexer AI feature that automatically creates inferred insights derived from the transcribed audio, OCR content in visual text, and celebrities recognized in the video using the Video Indexer facial recognition model. The extracted Topics and categories (when available) are listed in the Insights tab. To jump to the topic in the media file, click a Topic -> Play Previous or Play Next.
The resulting insights are also generated in a categorized list in a JSON file which includes the topic name, timeframe and confidence score.
To display the instances in a JSON file, do the following:
}, ```
-To download the JSON file via the API, use the [Azure Video Indexer developer portal](https://api-portal.videoindexer.ai/).
+To download the JSON file via the API, use the [Azure AI Video Indexer developer portal](https://api-portal.videoindexer.ai/).
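As an illustration, a minimal sketch of reading topics from a downloaded index JSON might look like the following. The JSON path (`videos[0].insights.topics`), the field names, and the local `insights.json` copy are assumptions based on the description above, not verified sample code:

```javascript
// Minimal sketch (not an official sample): print topic names, confidence scores,
// and timeframes from an index JSON that has already been downloaded. The JSON
// path and field names are assumptions.
const fs = require("fs");

// Assumed local copy of the downloaded index/insights JSON.
const index = JSON.parse(fs.readFileSync("insights.json", "utf8"));
const topics = index.videos?.[0]?.insights?.topics ?? [];

for (const topic of topics) {
  const ranges = (topic.instances ?? [])
    .map((instance) => `${instance.start}-${instance.end}`)
    .join(", ");
  console.log(`${topic.name} (confidence ${topic.confidence}): ${ranges}`);
}
```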
For more information, see [about topics](https://azure.microsoft.com/blog/multi-modal-topic-inferencing-from-videos/).
Below are some considerations to keep in mind when using topics:
- When uploading a file, always use high-quality audio and video content. At least 1 minute of spontaneous conversational speech is required to perform analysis. Audio effects are detected in non-speech segments only. The minimal duration of a non-speech section is 2 seconds. Voice commands and singing aren't supported. - Typically, small people or objects under 200 pixels and people who are seated may not be detected. People wearing similar clothes or uniforms might be detected as being the same person and will be given the same ID number. People or objects that are obstructed may not be detected. Tracks of people with front and back poses may be split into different instances.
-When used responsibly and carefully, Azure Video Indexer is a valuable tool for many industries. To respect the privacy and safety of others, and to comply with local and global regulations, we recommend the following:
+When used responsibly and carefully, Azure AI Video Indexer is a valuable tool for many industries. To respect the privacy and safety of others, and to comply with local and global regulations, we recommend the following:
- Always respect an individual's right to privacy, and only ingest videos for lawful and justifiable purposes. - Don't purposely disclose inappropriate media showing young children or family members of celebrities or other content that may be detrimental or pose a threat to an individual's personal freedom.
When used responsibly and carefully, Azure Video Indexer is a valuable tool for
`visupport@microsoft.com`
-## Azure Video Indexer insights
+## Azure AI Video Indexer insights
- [Audio effects detection](audio-effects-detection.md) - [Face detection](face-detection.md)
azure-video-indexer Transcription Translation Lid https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/transcription-translation-lid.md
Title: Azure Video Indexer media transcription, translation and language identification overview -
-description: An introduction to Azure Video Indexer media transcription, translation and language identification components responsibly.
+ Title: Azure AI Video Indexer media transcription, translation and language identification overview
+
+description: An introduction to Azure AI Video Indexer media transcription, translation and language identification components responsibly.
# Media transcription, translation and language identification
-Azure Video Indexer transcription, translation and language identification automatically detects, transcribes, and translates the speech in media files into over 50 languages.
+Azure AI Video Indexer transcription, translation and language identification automatically detects, transcribes, and translates the speech in media files into over 50 languages.
-- Azure Video Indexer processes the speech in the audio file to extract the transcription that is then translated into many languages. When selecting to translate into a specific language, both the transcription and the insights like keywords, topics, labels or OCR are translated into the specified language. Transcription can be used as is or be combined with speaker insights that map and assign the transcripts into speakers. Multiple speakers can be detected in an audio file. An ID is assigned to each speaker and is displayed under their transcribed speech. -- Azure Video Indexer language identification (LID) automatically recognizes the supported dominant spoken language in the video file. For more information, see [Applying LID](/azure/azure-video-indexer/language-identification-model). -- Azure Video Indexer multi-language identification (MLID) automatically recognizes the spoken languages in different segments in the audio file and sends each segment to be transcribed in the identified languages. At the end of this process, all transcriptions are combined into the same file. For more information, see [Applying MLID](/azure/azure-video-indexer/multi-language-identification-transcription).
+- Azure AI Video Indexer processes the speech in the audio file to extract the transcription that is then translated into many languages. When selecting to translate into a specific language, both the transcription and the insights like keywords, topics, labels or OCR are translated into the specified language. Transcription can be used as is or be combined with speaker insights that map and assign the transcripts into speakers. Multiple speakers can be detected in an audio file. An ID is assigned to each speaker and is displayed under their transcribed speech.
+- Azure AI Video Indexer language identification (LID) automatically recognizes the supported dominant spoken language in the video file. For more information, see [Applying LID](/azure/azure-video-indexer/language-identification-model).
+- Azure AI Video Indexer multi-language identification (MLID) automatically recognizes the spoken languages in different segments in the audio file and sends each segment to be transcribed in the identified languages. At the end of this process, all transcriptions are combined into the same file. For more information, see [Applying MLID](/azure/azure-video-indexer/multi-language-identification-transcription).
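As an illustration, a minimal sketch of uploading a video with automatic language identification might look like the following. The `language=auto` (LID) and `multi` (MLID) parameter values, the endpoint shape, and the placeholder values are assumptions, not verified sample code:

```javascript
// Minimal sketch (not an official sample): upload a video from a URL and let the
// service detect the spoken language. Passing "auto" (LID) or "multi" (MLID) in
// the `language` query parameter is an assumption; all values are placeholders.
const location = "<region>";          // placeholder
const accountId = "<account-id>";     // placeholder
const accessToken = "<access-token>"; // placeholder
const videoUrl = encodeURIComponent("<publicly-accessible-video-url>"); // placeholder

const uploadUrl =
  `https://api.videoindexer.ai/${location}/Accounts/${accountId}/Videos` +
  `?accessToken=${accessToken}&name=lid-sample&videoUrl=${videoUrl}&language=auto`;

fetch(uploadUrl, { method: "POST" })
  .then((response) => response.json())
  .then((video) => console.log("Upload started, video id:", video.id))
  .catch((error) => console.error("Upload failed:", error));
```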
The resulting insights are generated in a categorized list in a JSON file that includes the ID, language, transcribed text, duration and confidence score. ## Prerequisites
To view language insights in `insights.json`, do the following:
}, ```
-To download the JSON file via the API, use the [Azure Video Indexer developer portal](https://api-portal.videoindexer.ai/).
+To download the JSON file via the API, use the [Azure AI Video Indexer developer portal](https://api-portal.videoindexer.ai/).
## Transcription, translation and language identification components
During the transcription, translation and language identification procedure, spe
|Component|Definition| ||| |Source language | The user uploads the source file for indexing, and either:<br/>- Specifies the video source language.<br/>- Selects auto detect single language (LID) to identify the language of the file. The output is saved separately.<br/>- Selects auto detect multi language (MLID) to identify multiple languages in the file. The output of each language is saved separately.|
-|Transcription API| The audio file is sent to Cognitive Services to get the transcribed and translated output. If a language has been specified, it's processed accordingly. If no language is specified, a LID or MLID process is run to identify the language after which the file is processed. |
+|Transcription API| The audio file is sent to Azure AI services to get the transcribed and translated output. If a language has been specified, it's processed accordingly. If no language is specified, a LID or MLID process is run to identify the language after which the file is processed. |
|Output unification |The transcribed and translated files are unified into the same file. The outputted data includes the speaker ID of each extracted sentence together with its confidence level.| |Confidence value |The estimated confidence level of each sentence is calculated as a range of 0 to 1. The confidence score represents the certainty in the accuracy of the result. For example, an 82% certainty is represented as an 0.82 score.| ## Example use cases -- Promoting accessibility by making content available for people with hearing disabilities using Azure Video Indexer to generate speech to text transcription and translation into multiple languages.-- Improving content distribution to a diverse audience in different regions and languages by delivering content in multiple languages using Azure Video IndexerΓÇÖs transcription and translation capabilities. -- Enhancing and improving manual closed captioning and subtitles generation by leveraging Azure Video IndexerΓÇÖs transcription and translation capabilities and by using the closed captions generated by Azure Video Indexer in one of the supported formats.-- Using language identification (LID) or multi language identification (MLID) to transcribe videos in unknown languages to allow Azure Video Indexer to automatically identify the languages appearing in the video and generate the transcription accordingly.
+- Promoting accessibility by making content available for people with hearing disabilities using Azure AI Video Indexer to generate speech to text transcription and translation into multiple languages.
+- Improving content distribution to a diverse audience in different regions and languages by delivering content in multiple languages using Azure AI Video Indexer's transcription and translation capabilities.
+- Enhancing and improving manual closed captioning and subtitle generation by leveraging Azure AI Video Indexer's transcription and translation capabilities and by using the closed captions generated by Azure AI Video Indexer in one of the supported formats.
+- Using language identification (LID) or multi language identification (MLID) to transcribe videos in unknown languages to allow Azure AI Video Indexer to automatically identify the languages appearing in the video and generate the transcription accordingly.
## Considerations and limitations when choosing a use case
-When used responsibly and carefully, Azure Video Indexer is a valuable tool for many industries. To respect the privacy and safety of others, and to comply with local and global regulations, we recommend the following:
+When used responsibly and carefully, Azure AI Video Indexer is a valuable tool for many industries. To respect the privacy and safety of others, and to comply with local and global regulations, we recommend the following:
- Carefully consider the accuracy of the results. To promote more accurate data, check the quality of the audio; low-quality audio might impact the detected insights. - Always respect an individual's right to privacy, and only ingest videos for lawful and justifiable purposes.
For more information, see: guidelines and limitations in [language detection and
`visupport@microsoft.com`
-## Azure Video Indexer insights
+## Azure AI Video Indexer insights
- [Audio effects detection](audio-effects-detection.md) - [Face detection](face-detection.md)
azure-video-indexer Upload Index Videos https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/upload-index-videos.md
Title: Upload and index videos with Azure Video Indexer using the Video Indexer website
-description: Learn how to upload videos by using Azure Video Indexer.
+ Title: Upload and index videos with Azure AI Video Indexer using the Video Indexer website
+description: Learn how to upload videos by using Azure AI Video Indexer.
Last updated 05/10/2023
Last updated 05/10/2023
You can upload media files from your file system or from a URL. You can also configure basic or advanced settings for indexing, such as privacy, streaming quality, language, presets, people and brands models, custom logos and metadata.
-This article shows how to upload and index media files (audio or video) using the [Azure Video Indexer website](https://aka.ms/vi-portal-link).
+This article shows how to upload and index media files (audio or video) using the [Azure AI Video Indexer website](https://aka.ms/vi-portal-link).
You can also view a video that shows [how to upload and index media files](https://www.youtube.com/watch?v=H-SHX8N65vM&t=34s&ab_channel=AzureVideoIndexer). ## Prerequisites -- To upload media files, you need an active Azure Video Indexer account. If you don't have one, [sign up](https://aka.ms/vi-portal-link) for a free trial account, or create an [unlimited paid account](https://aka.ms/avam-arm-docs).
+- To upload media files, you need an active Azure AI Video Indexer account. If you don't have one, [sign up](https://aka.ms/vi-portal-link) for a free trial account, or create an [unlimited paid account](https://aka.ms/avam-arm-docs).
- To upload media files, you need at least contributor-level permission for your account. To manage permissions, see [Manage users and groups](restricted-viewer-role.md).-- To upload media files from a URL, you need a publicly accessible URL for the media file. For example, if the file is hosted in an Azure storage account, you need to [generate a SAS token URL](https://learn.microsoft.com/azure/applied-ai-services/form-recognizer/create-sas-tokens?view=form-recog-3.0.0) and paste it in the input box. You can't use URLs from streaming services such as YouTube.
+- To upload media files from a URL, you need a publicly accessible URL for the media file. For example, if the file is hosted in an Azure storage account, you need to [generate a SAS token URL](../ai-services/document-intelligence/create-sas-tokens.md?view=form-recog-3.0.0&preserve-view=true) and paste it in the input box. You can't use URLs from streaming services such as YouTube.
## Quick upload
If you encounter any issues while uploading media files, try the following solut
- If the **Upload** button is disabled, hover over the button and check for the indication of the problem. Try to refresh the page. If you're using a trial account, check if you have reached the account quota for daily count, daily duration, or total duration. To view your quota and usage, see the Account settings.-- If the upload from URL failed, make sure that the URL is valid and accessible by Video Indexer. Make sure that the URL isn't from a streaming service such as YouTube. Make sure that the media file isn't encrypted, protected by DRM, corrupted, or damaged. Make sure that the media file format is supported by Video Indexer. For a list of supported formats, see [supported media formats](https://learn.microsoft.com/azure/media-services/latest/encode-media-encoder-standard-formats-reference).
+- If the upload from URL failed, make sure that the URL is valid and accessible by Video Indexer. Make sure that the URL isn't from a streaming service such as YouTube. Make sure that the media file isn't encrypted, protected by DRM, corrupted, or damaged. Make sure that the media file format is supported by Video Indexer. For a list of supported formats, see [supported media formats](/azure/media-services/latest/encode-media-encoder-standard-formats-reference).
- If the upload from file system failed, make sure that the file size isn't larger than 2 GB. Make sure that you have a stable internet connection. ## Next steps
-[Supported media formats](https://learn.microsoft.com/azure/azure-video-indexer/upload-index-videos?tabs=with-arm-account-account#supported-file-formats)
+[Supported media formats](/azure/azure-video-indexer/upload-index-videos?tabs=with-arm-account-account#supported-file-formats)
azure-video-indexer Use Editor Create Project https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/use-editor-create-project.md
Title: Use the Azure Video Indexer editor to create projects and add video clips
-description: This topic demonstrates how to use the Azure Video Indexer editor to create projects and add video clips.
+ Title: Use the Azure AI Video Indexer editor to create projects and add video clips
+description: This topic demonstrates how to use the Azure AI Video Indexer editor to create projects and add video clips.
# Add video clips to your projects
-The [Azure Video Indexer](https://www.videoindexer.ai/) website enables you to use your video's deep insights to: find the right media content, locate the parts that youΓÇÖre interested in, and use the results to create an entirely new project.
+The [Azure AI Video Indexer](https://www.videoindexer.ai/) website enables you to use your video's deep insights to: find the right media content, locate the parts that you're interested in, and use the results to create an entirely new project.
-Once created, the project can be rendered and downloaded from Azure Video Indexer and be used in your own editing applications or downstream workflows.
+Once created, the project can be rendered and downloaded from Azure AI Video Indexer and be used in your own editing applications or downstream workflows.
Some scenarios where you may find this feature useful are:
This article shows how to create a project and add selected clips from the video
## Create new project and manage videos
-1. Browse to the [Azure Video Indexer](https://www.videoindexer.ai/) website and sign in.
+1. Browse to the [Azure AI Video Indexer](https://www.videoindexer.ai/) website and sign in.
1. Select the **Projects** tab. If you have created projects before, you will see all of your other projects here. 1. Click **Create new project**.
If you click on the downward arrow on the right side of each video, you will ope
1. To create queries for specific clips, use the search box that says "Search in transcript, visual text, people, and labels". 1. Select **View Insights** to customize which insights you want to see and which you don't want to see.
- :::image type="content" source="./media/video-indexer-view-edit/search-try-cognitive-services.png" alt-text="Screenshot shows searching for videos that say Try cognitive services":::
+ :::image type="content" source="./media/video-indexer-view-edit/search-try-cognitive-services.png" alt-text="Screenshot shows searching for videos that say Try Azure AI services":::
1. Add filters to further specify details on what scenes you are looking for by selecting **Filter options**. You can add multiple filters.
As you are selecting and ordering your clips, you can preview the video in the p
### Render and download the project > [!NOTE]
-> For Azure Video Indexer paid accounts, rendering your project has encoding costs. Azure Video Indexer trial accounts are limited to 5 hours of rendering.
+> For Azure AI Video Indexer paid accounts, rendering your project has encoding costs. Azure AI Video Indexer trial accounts are limited to 5 hours of rendering.
-1. Once you are done, make sure that your project has been saved. You can now render this project. Click **Render**, a popup dialog comes up that tells you that Azure Video Indexer will render a file and then the download link will be sent to your email. Select Proceed.
+1. Once you are done, make sure that your project has been saved. You can now render this project. Click **Render**; a popup dialog appears telling you that Azure AI Video Indexer will render a file and then send the download link to your email. Select **Proceed**.
- :::image type="content" source="./media/video-indexer-view-edit/render-download.png" alt-text="Screenshot shows Azure Video Indexer with the option to Render and download your project":::
+ :::image type="content" source="./media/video-indexer-view-edit/render-download.png" alt-text="Screenshot shows Azure AI Video Indexer with the option to Render and download your project":::
You will also see a notification at the top of the page that the project is being rendered. Once rendering is complete, you will see a new notification that the project has been successfully rendered. Click the notification to download the project; it downloads in MP4 format. 1. You can access saved projects from the **Projects** tab.
As you are selecting and ordering your clips, you can preview the video in the p
You can create a new project directly from a video in your account.
-1. Go to the **Library** tab of the Azure Video Indexer website.
+1. Go to the **Library** tab of the Azure AI Video Indexer website.
1. Open the video that you want to use to create your project. On the insights and timeline page, select the **Video editor** button. This takes you to the same page that you used to create a new project. Unlike a new project, you see the timestamped insights segments of the video that you had started editing previously. ## See also
-[Azure Video Indexer overview](video-indexer-overview.md)
+[Azure AI Video Indexer overview](video-indexer-overview.md)
azure-video-indexer Video Indexer Disaster Recovery https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/video-indexer-disaster-recovery.md
Title: Azure Video Indexer failover and disaster recovery
-description: Learn how to fail over to a secondary Azure Video Indexer account if a regional datacenter failure or disaster occurs.
+ Title: Azure AI Video Indexer failover and disaster recovery
+description: Learn how to fail over to a secondary Azure AI Video Indexer account if a regional datacenter failure or disaster occurs.
editor: ''
Last updated 07/29/2019
-# Azure Video Indexer failover and disaster recovery
+# Azure AI Video Indexer failover and disaster recovery
-Azure Video Indexer doesn't provide instant failover of the service if there's a regional datacenter outage or failure. This article explains how to configure your environment for a failover to ensure optimal availability for apps and minimized recovery time if a disaster occurs.
+Azure AI Video Indexer doesn't provide instant failover of the service if there's a regional datacenter outage or failure. This article explains how to configure your environment for a failover to ensure optimal availability for apps and minimized recovery time if a disaster occurs.
We recommend that you configure business continuity disaster recovery (BCDR) across regional pairs to benefit from Azure's isolation and availability policies. For more information, see [Azure paired regions](../availability-zones/cross-region-replication-azure.md).
An Azure subscription. If you don't have an Azure subscription yet, sign up for
## Fail over to a secondary account
-To implement BCDR, you need to have two Azure Video Indexer accounts to handle redundancy.
+To implement BCDR, you need to have two Azure AI Video Indexer accounts to handle redundancy.
-1. Create two Azure Video Indexer accounts connected to Azure (see [Create an Azure Video Indexer account](connect-to-azure.md)). Create one account for your primary region and the other to the paired Azure region.
+1. Create two Azure AI Video Indexer accounts connected to Azure (see [Create an Azure AI Video Indexer account](connect-to-azure.md)). Create one account for your primary region and the other in the paired Azure region.
1. If there's a failure in your primary region, switch to indexing using the secondary account. > [!TIP] > You can automate BCDR by setting up activity log alerts for service health notifications as per [Create activity log alerts on service notifications](../service-health/alerts-activity-log-service-notifications-portal.md).
-For information about using multiple tenants, see [Manage multiple tenants](manage-multiple-tenants.md). To implement BCDR, choose one of these two options: [Azure Video Indexer account per tenant](./manage-multiple-tenants.md#azure-video-indexer-account-per-tenant) or [Azure subscription per tenant](./manage-multiple-tenants.md#azure-subscription-per-tenant).
+For information about using multiple tenants, see [Manage multiple tenants](manage-multiple-tenants.md). To implement BCDR, choose one of these two options: [Azure AI Video Indexer account per tenant](./manage-multiple-tenants.md#azure-ai-video-indexer-account-per-tenant) or [Azure subscription per tenant](./manage-multiple-tenants.md#azure-subscription-per-tenant).
## Next steps
-[Manage an Azure Video Indexer account connected to Azure](manage-account-connected-to-azure.md).
+[Manage an Azure AI Video Indexer account connected to Azure](manage-account-connected-to-azure.md).
azure-video-indexer Video Indexer Embed Widgets https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/video-indexer-embed-widgets.md
Title: Embed Azure Video Indexer widgets in your apps
-description: Learn how to embed Azure Video Indexer widgets in your apps.
+ Title: Embed Azure AI Video Indexer widgets in your apps
+description: Learn how to embed Azure AI Video Indexer widgets in your apps.
Last updated 01/10/2023
-# Embed Azure Video Indexer widgets in your apps
+# Embed Azure AI Video Indexer widgets in your apps
-This article shows how you can embed Azure Video Indexer widgets in your apps. Azure Video Indexer supports embedding three types of widgets into your apps: *Cognitive Insights*, *Player*, and *Editor*.
+This article shows how you can embed Azure AI Video Indexer widgets in your apps. Azure AI Video Indexer supports embedding three types of widgets into your apps: *Cognitive Insights*, *Player*, and *Editor*.
Starting with version 2, the widget base URL includes the region of the specified account. For example, an account in the West US region generates: `https://www.videoindexer.ai/embed/insights/.../?location=westus2`.
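As an illustration, a minimal sketch of building such a widget URL and embedding it might look like the following. The account and video IDs are hypothetical placeholders; only the base URL shape and the `location` parameter come from the example above:

```javascript
// Minimal sketch: build an insights-widget URL that includes the required
// `location` query parameter and embed it in an iframe. The IDs below are
// hypothetical placeholders.
const accountId = "<account-id>";   // placeholder
const videoId = "<video-id>";       // placeholder
const region = "westus2";           // use your account's region

const widgetUrl =
  `https://www.videoindexer.ai/embed/insights/${accountId}/${videoId}/?location=${region}`;

const iframe = document.createElement("iframe");
iframe.src = widgetUrl;
iframe.width = "580";
iframe.height = "780";
iframe.allowFullscreen = true;
document.body.appendChild(iframe);
```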
The `location` parameter must be included in the embedded links, see [how to get
To embed a video, use the website as described below:
-1. Sign in to the [Azure Video Indexer](https://www.videoindexer.ai/) website.
+1. Sign in to the [Azure AI Video Indexer](https://www.videoindexer.ai/) website.
1. Select the video that you want to work with and press **Play**. 1. Select the type of widget that you want (**Cognitive Insights**, **Player**, or **Editor**). 1. Click **&lt;/&gt; Embed**.
The Cognitive Insights widget can interact with a video on your app. This sectio
When you edit the transcripts, the following flow occurs: 1. You edit the transcript in the timeline.
-1. Azure Video Indexer gets these updates and saves them in the [from transcript edits](customize-language-model-with-website.md#customize-language-models-by-correcting-transcripts) in the language model.
+1. Azure AI Video Indexer gets these updates and saves them in the [from transcript edits](customize-language-model-with-website.md#customize-language-models-by-correcting-transcripts) in the language model.
1. The captions are updated:
- * If you are using Azure Video Indexer's player widget - it's automatically updated.
+ * If you are using Azure AI Video Indexer's player widget - it's automatically updated.
* If you are using an external player - you get a new captions file using the **Get video captions** call. ### Cross-origin communications
-To get Azure Video Indexer widgets to communicate with other components:
+To get Azure AI Video Indexer widgets to communicate with other components:
- Uses the cross-origin communication HTML5 method `postMessage`. - Validates the message across VideoIndexer.ai origin.
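A minimal sketch of that listener, assuming the widget posts an object that carries a `time` value in seconds (the exact payload shape depends on the widget; adapt it to the messages you actually receive):

```javascript
// Hedged sketch: receive postMessage events from the insights widget and seek a
// local HTML5 player. The payload shape (a `time` property) is an assumption.
const player = document.querySelector("video"); // your own player element

window.addEventListener("message", (event) => {
  // Only trust messages coming from the VideoIndexer.ai origin.
  if (!event.origin.endsWith("videoindexer.ai")) {
    return;
  }
  const data = event.data;
  if (data && typeof data.time === "number") {
    player.currentTime = data.time; // jump to the selected moment
  }
});
```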
If you implement your own player code and integrate with Cognitive Insights widg
### Embed widgets in your app or blog (recommended)
-This section shows how to achieve interaction between two Azure Video Indexer widgets so that when a user selects the insight control on your app, the player jumps to the relevant moment.
+This section shows how to achieve interaction between two Azure AI Video Indexer widgets so that when a user selects the insight control on your app, the player jumps to the relevant moment.
1. Copy the Player widget embed code. 2. Copy the Cognitive Insights embed code.
This section shows how to achieve interaction between two Azure Video Indexer wi
Now when a user selects the insight control on your app, the player jumps to the relevant moment.
-For more information, see the [Azure Video Indexer - Embed both Widgets demo](https://codepen.io/videoindexer/pen/NzJeOb).
+For more information, see the [Azure AI Video Indexer - Embed both Widgets demo](https://codepen.io/videoindexer/pen/NzJeOb).
### Embed the Cognitive Insights widget and use Azure Media Player to play the content This section shows how to achieve interaction between a Cognitive Insights widget and an Azure Media Player instance by using the [AMP plug-in](https://breakdown.blob.core.windows.net/public/amp-vb.plugin.js).
-1. Add an Azure Video Indexer plug-in for the AMP player:<br/> `<script src="https://breakdown.blob.core.windows.net/public/amp-vb.plugin.js"></script>`
-2. Instantiate Azure Media Player with the Azure Video Indexer plug-in.
+1. Add an Azure AI Video Indexer plug-in for the AMP player:<br/> `<script src="https://breakdown.blob.core.windows.net/public/amp-vb.plugin.js"></script>`
+2. Instantiate Azure Media Player with the Azure AI Video Indexer plug-in.
```javascript // Init the source.
You can now communicate with Azure Media Player.
For more information, see the [Azure Media Player + VI Insights demo](https://codepen.io/videoindexer/pen/rYONrO).
-### Embed the Azure Video Indexer Cognitive Insights widget and use a different video player
+### Embed the Azure AI Video Indexer Cognitive Insights widget and use a different video player
If you use a video player other than Azure Media Player, you must manually manipulate the video player to achieve the communication.
For more information, see the [Azure Media Player + VI Insights demo](https://co
## Adding subtitles
-If you embed Azure Video Indexer insights with your own [Azure Media Player](https://aka.ms/azuremediaplayer), you can use the `GetVttUrl` method to get closed captions (subtitles). You can also call a JavaScript method from the Azure Video Indexer AMP plug-in `getSubtitlesUrl` (as shown earlier).
+If you embed Azure AI Video Indexer insights with your own [Azure Media Player](https://aka.ms/azuremediaplayer), you can use the `GetVttUrl` method to get closed captions (subtitles). You can also call a JavaScript method from the Azure AI Video Indexer AMP plug-in `getSubtitlesUrl` (as shown earlier).
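As an illustration, a hedged sketch of reading the captions URL through the plug-in; the plug-in namespace on the player instance shown here is an assumption, so check the plug-in source for the exact property name:

```javascript
// Hedged sketch: assumes `myPlayer` is an Azure Media Player instance created with
// the Video Indexer plug-in, and that the plug-in is exposed under a property such
// as `videobreakdown` (the property name is an assumption, not a documented API).
const vttUrl = myPlayer.videobreakdown.getSubtitlesUrl();
console.log("Closed captions (VTT) URL:", vttUrl);
```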
## Customizing embeddable widgets ### Cognitive Insights widget
-You can choose the types of insights that you want. To do this, specify them as a value to the following URL parameter that's added to the embed code that you get (from the [API](https://aka.ms/avam-dev-portal) or from the [Azure Video Indexer](https://www.videoindexer.ai/) website): `&widgets=<list of wanted widgets>`.
+You can choose the types of insights that you want. To do this, specify them as a value to the following URL parameter that's added to the embed code that you get (from the [API](https://aka.ms/avam-dev-portal) or from the [Azure AI Video Indexer](https://www.videoindexer.ai/) website): `&widgets=<list of wanted widgets>`.
The possible values are: `people`, `keywords`, `labels`, `sentiments`, `emotions`, `topics`, `keyframes`, `transcript`, `ocr`, `speakers`, `scenes`, `namedEntities`, `logos`.
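For example, a short sketch that limits the embed to just the people and keywords insights; the IDs and region are placeholders:

```javascript
// Hedged sketch: request only selected insight types in the Cognitive Insights widget.
// <ACCOUNT_ID> and <VIDEO_ID> are placeholders.
const wantedWidgets = ["people", "keywords"].join(",");
const insightsUrl =
  `https://www.videoindexer.ai/embed/insights/<ACCOUNT_ID>/<VIDEO_ID>/` +
  `?location=westus2&widgets=${wantedWidgets}`;
console.log(insightsUrl);
```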
Notice that this option is relevant only in cases when you need to open the insi
### Player widget
-If you embed Azure Video Indexer player, you can choose the size of the player by specifying the size of the iframe.
+If you embed Azure AI Video Indexer player, you can choose the size of the player by specifying the size of the iframe.
For example: `<iframe width="640" height="360" src="https://www.videoindexer.ai/embed/player/<accountId>/<videoId>/" frameborder="0" allowfullscreen />`
-By default, Azure Video Indexer player has autogenerated closed captions that are based on the transcript of the video. The transcript is extracted from the video with the source language that was selected when the video was uploaded.
+By default, Azure AI Video Indexer player has autogenerated closed captions that are based on the transcript of the video. The transcript is extracted from the video with the source language that was selected when the video was uploaded.
If you want to embed with a different language, you can add `&captions=<Language Code>` to the embed player URL. If you want the captions to be displayed by default, you can pass &showCaptions=true.
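For example, a sketch of a player embed URL with default-on captions; the IDs are placeholders and "fr-FR" is an assumed language-code format:

```javascript
// Hedged sketch: player embed URL that shows French captions by default.
// <ACCOUNT_ID> and <VIDEO_ID> are placeholders; add &location=<region> as needed.
const playerUrl =
  `https://www.videoindexer.ai/embed/player/<ACCOUNT_ID>/<VIDEO_ID>/` +
  `?captions=fr-FR&showCaptions=true`;
```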
By default, the player will start playing the video. you can choose not to by pa
## Code samples
-See the [code samples](https://github.com/Azure-Samples/media-services-video-indexer/tree/master/Embedding%20widgets) repo that contains samples for Azure Video Indexer API and widgets:
+See the [code samples](https://github.com/Azure-Samples/media-services-video-indexer/tree/master/Embedding%20widgets) repo that contains samples for Azure AI Video Indexer API and widgets:
| File/folder | Description | |--|--|
-| `azure-media-player` | Load an Azure Video Indexer video in a custom Azure Media Player. |
+| `azure-media-player` | Load an Azure AI Video Indexer video in a custom Azure Media Player. |
| `azure-media-player-vi-insights` | Embed VI Insights with a custom Azure Media Player. | | `control-vi-embedded-player` | Embed VI Player and control it from outside. | | `custom-index-location` | Embed VI Insights from a custom external location (can be a customer blob). |
See the [code samples](https://github.com/Azure-Samples/media-services-video-ind
For more information, see [supported browsers](video-indexer-get-started.md#supported-browsers).
-## Embed and customize Azure Video Indexer widgets in your app using npmjs package
+## Embed and customize Azure AI Video Indexer widgets in your app using npmjs package
Using our [@azure/video-analyzer-for-media-widgets](https://www.npmjs.com/package/@azure/video-analyzer-for-media-widgets) package, you can add the insights widgets to your app and customize it according to your needs.
For more information, see our official [GitHub](https://github.com/Azure-Samples
## Next steps
-For information about how to view and edit Azure Video Indexer insights, see [View and edit Azure Video Indexer insights](video-indexer-view-edit.md).
+For information about how to view and edit Azure AI Video Indexer insights, see [View and edit Azure AI Video Indexer insights](video-indexer-view-edit.md).
-Also, check out [Azure Video Indexer CodePen](https://codepen.io/videoindexer/pen/eGxebZ).
+Also, check out [Azure AI Video Indexer CodePen](https://codepen.io/videoindexer/pen/eGxebZ).
azure-video-indexer Video Indexer Get Started https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/video-indexer-get-started.md
Title: Sign up for Azure Video Indexer and upload your first video - Azure
-description: Learn how to sign up and upload your first video using the Azure Video Indexer website.
+ Title: Sign up for Azure AI Video Indexer and upload your first video - Azure
+description: Learn how to sign up and upload your first video using the Azure AI Video Indexer website.
Last updated 08/24/2022
[!INCLUDE [Gate notice](./includes/face-limited-access.md)]
-You can access Azure Video Indexer capabilities in three ways:
+You can access Azure AI Video Indexer capabilities in three ways:
-* The [Azure Video Indexer website](https://www.videoindexer.ai/): An easy-to-use solution that lets you evaluate the product, manage the account, and customize models (as described in this article).
-* API integration: All of Azure Video Indexer's capabilities are available through a REST API, which lets you integrate the solution into your apps and infrastructure. To get started, see [Use Azure Video Indexer REST API](video-indexer-use-apis.md).
-* Embeddable widget: Lets you embed the Azure Video Indexer insights, player, and editor experiences into your app. For more information, see [Embed visual widgets in your application](video-indexer-embed-widgets.md).
+* The [Azure AI Video Indexer website](https://www.videoindexer.ai/): An easy-to-use solution that lets you evaluate the product, manage the account, and customize models (as described in this article).
+* API integration: All of Azure AI Video Indexer's capabilities are available through a REST API, which lets you integrate the solution into your apps and infrastructure. To get started, see [Use Azure AI Video Indexer REST API](video-indexer-use-apis.md).
+* Embeddable widget: Lets you embed the Azure AI Video Indexer insights, player, and editor experiences into your app. For more information, see [Embed visual widgets in your application](video-indexer-embed-widgets.md).
-Once you start using Azure Video Indexer, all your stored data and uploaded content are encrypted at rest with a Microsoft managed key.
+Once you start using Azure AI Video Indexer, all your stored data and uploaded content are encrypted at rest with a Microsoft managed key.
> [!NOTE]
-> Review [planned Azure Video Indexer website authentication changes](./release-notes.md#planned-azure-video-indexer-website-authenticatication-changes).
+> Review [planned Azure AI Video Indexer website authentication changes](./release-notes.md#planned-azure-ai-video-indexer-website-authenticatication-changes).
-This quickstart shows you how to sign in to the Azure Video Indexer [website](https://www.videoindexer.ai/) and how to upload your first video.
+This quickstart shows you how to sign in to the Azure AI Video Indexer [website](https://www.videoindexer.ai/) and how to upload your first video.
[!INCLUDE [accounts](./includes/create-accounts-intro.md)]
This quickstart shows you how to sign in to the Azure Video Indexer [website](ht
### Supported browsers
-The following list shows the supported browsers that you can use for the Azure Video Indexer website and for your apps that embed the widgets. The list also shows the minimum supported browser version:
+The following list shows the supported browsers that you can use for the Azure AI Video Indexer website and for your apps that embed the widgets. The list also shows the minimum supported browser version:
- Edge, version: 16 - Firefox, version: 54
The following list shows the supported browsers that you can use for the Azure V
- Chrome for Android, version: 87 - Firefox for Android, version: 83
-### Supported file formats for Azure Video Indexer
+### Supported file formats for Azure AI Video Indexer
-See the [input container/file formats](/azure/media-services/latest/encode-media-encoder-standard-formats-reference) article for a list of file formats that you can use with Azure Video Indexer.
+See the [input container/file formats](/azure/media-services/latest/encode-media-encoder-standard-formats-reference) article for a list of file formats that you can use with Azure AI Video Indexer.
### Upload
-1. Sign in on the [Azure Video Indexer](https://www.videoindexer.ai/) website.
+1. Sign in on the [Azure AI Video Indexer](https://www.videoindexer.ai/) website.
1. To upload a video, press the **Upload** button or link. > [!NOTE]
See the [input container/file formats](/azure/media-services/latest/encode-media
> [!div class="mx-imgBorder"] > :::image type="content" source="./media/video-indexer-get-started/video-indexer-upload.png" alt-text="Upload":::
-1. Once your video has been uploaded, Azure Video Indexer starts indexing and analyzing the video. As a result, a JSON output with insights is produced.
+1. Once your video has been uploaded, Azure AI Video Indexer starts indexing and analyzing the video. As a result, a JSON output with insights is produced.
You see the progress.
See the [input container/file formats](/azure/media-services/latest/encode-media
The produced JSON output contains `Insights` and `SummarizedInsights` elements. We highly recommend using `Insights` and not using `SummarizedInsights` (which is present for backward compatibility).
-1. Once Azure Video Indexer is done analyzing, you'll get an email with a link to your video and a short description of what was found in your video. For example: people, spoken and written words, topics, and named entities.
+1. Once Azure AI Video Indexer is done analyzing, you'll get an email with a link to your video and a short description of what was found in your video. For example: people, spoken and written words, topics, and named entities.
1. You can later find your video in the library list and perform different operations. For example: search, reindex, edit. > [!div class="mx-imgBorder"] > :::image type="content" source="./media/video-indexer-get-started/uploaded.png" alt-text="Uploaded the upload":::
-After you upload and index a video, you can continue using [Azure Video Indexer website](video-indexer-view-edit.md) or [Azure Video Indexer API developer portal](video-indexer-use-apis.md) to see the insights of the video (see [Examine the Azure Video Indexer output](video-indexer-output-json-v2.md)).
+After you upload and index a video, you can continue using [Azure AI Video Indexer website](video-indexer-view-edit.md) or [Azure AI Video Indexer API developer portal](video-indexer-use-apis.md) to see the insights of the video (see [Examine the Azure AI Video Indexer output](video-indexer-output-json-v2.md)).
## Start using insights
For more details, see [Upload and index videos](upload-index-videos.md) and chec
## Next steps * To embed widgets, see [Embed visual widgets in your application](video-indexer-embed-widgets.md).
-* For the API integration, see [Use Azure Video Indexer REST API](video-indexer-use-apis.md).
+* For the API integration, see [Use Azure AI Video Indexer REST API](video-indexer-use-apis.md).
* Check out our [introduction lab](https://github.com/Azure-Samples/media-services-video-indexer/blob/master/IntroToVideoIndexer.md).
- At the end of the workshop, you'll have a good understanding of the kind of information that can be extracted from video and audio content, and you'll be more prepared to identify opportunities related to content intelligence, pitch video AI on Azure, and demo several scenarios on Azure Video Indexer.
+ At the end of the workshop, you'll have a good understanding of the kind of information that can be extracted from video and audio content, and you'll be more prepared to identify opportunities related to content intelligence, pitch video AI on Azure, and demo several scenarios on Azure AI Video Indexer.
azure-video-indexer Video Indexer Output Json V2 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/video-indexer-output-json-v2.md
Title: Examine the Azure Video Indexer output
-description: This topic examines the Azure Video Indexer output produced by the Get Video Index API.
+ Title: Examine the Azure AI Video Indexer output
+description: This topic examines the Azure AI Video Indexer output produced by the Get Video Index API.
Last updated 05/19/2022
-# Examine the Azure Video Indexer output
+# Examine the Azure AI Video Indexer output
-When a video is indexed, Azure Video Indexer produces the JSON content that contains details of the specified video insights. The insights include transcripts, optical character recognition elements (OCRs), faces, topics, and similar details. Each insight type includes instances of time ranges that show when the insight appears in the video.
+When a video is indexed, Azure AI Video Indexer produces the JSON content that contains details of the specified video insights. The insights include transcripts, optical character recognition elements (OCRs), faces, topics, and similar details. Each insight type includes instances of time ranges that show when the insight appears in the video.
-For information, see [Azure Video Indexer insights](insights-overview.md).
+For information, see [Azure AI Video Indexer insights](insights-overview.md).
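A hedged sketch of retrieving that JSON with the Get Video Index call; the endpoint shape shown is the classic-account form and every value is a placeholder, so confirm the exact route in the API developer portal:

```javascript
// Hedged sketch: fetch the index (insights) JSON for an indexed video.
// <LOCATION>, <ACCOUNT_ID>, <VIDEO_ID>, and <ACCESS_TOKEN> are placeholders.
async function getVideoIndex() {
  const url =
    "https://api.videoindexer.ai/<LOCATION>/Accounts/<ACCOUNT_ID>/Videos/<VIDEO_ID>/Index" +
    "?accessToken=<ACCESS_TOKEN>";
  const response = await fetch(url);
  return response.json(); // the insights JSON described in this article
}
```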
## Root elements of the insights
Example:
#### faces
-If faces are present, Azure Video Indexer uses the Face API on all the video's frames to detect faces and celebrities.
+If faces are present, Azure AI Video Indexer uses the Face API on all the video's frames to detect faces and celebrities.
|Name|Description| |||
If faces are present, Azure Video Indexer uses the Face API on all the video's f
#### brands
-Azure Video Indexer detects business and product brand names in the speech-to-text transcript and/or video OCR. This information does not include visual recognition of brands or logo detection.
+Azure AI Video Indexer detects business and product brand names in the speech-to-text transcript and/or video OCR. This information does not include visual recognition of brands or logo detection.
|Name|Description| |||
Azure Video Indexer detects business and product brand names in the speech-to-te
|`referenceUrl` | The brand's Wikipedia URL, if it exists. For example: [https://en.wikipedia.org/wiki/Target_Corporation](https://en.wikipedia.org/wiki/Target_Corporation). |`description`|The brand's description.| |`tags`|A list of predefined tags that were associated with this brand.|
-|`confidence`|The confidence value of the Azure Video Indexer brand detector (`0`-`1`).|
+|`confidence`|The confidence value of the Azure AI Video Indexer brand detector (`0`-`1`).|
|`instances`|A list of time ranges for this brand. Each instance has a `brandType` value, which indicates whether this brand appeared in the transcript or in an OCR.| ```json
Sentiments are aggregated by their `sentimentType` field (`Positive`, `Neutral`,
#### visualContentModeration
-The `visualContentModeration` transcript contains time ranges that Azure Video Indexer found to potentially have adult content. If `visualContentModeration` is empty, no adult content was identified.
+The `visualContentModeration` transcript contains time ranges that Azure AI Video Indexer found to potentially have adult content. If `visualContentModeration` is empty, no adult content was identified.
Videos that contain adult or racy content might be available for private view only. Users have the option to submit a request for a human review of the content. In that case, the `IsAdult` attribute will contain the result of the human review.
Videos that contain adult or racy content might be available for private view on
#### emotions
-Azure Video Indexer identifies emotions based on speech and audio cues.
+Azure AI Video Indexer identifies emotions based on speech and audio cues.
|Name|Description| |||
Azure Video Indexer identifies emotions based on speech and audio cues.
#### topics
-Azure Video Indexer makes an inference of main topics from transcripts. When possible, the second-level [IPTC](https://iptc.org/standards/media-topics/) taxonomy is included.
+Azure AI Video Indexer makes an inference of main topics from transcripts. When possible, the second-level [IPTC](https://iptc.org/standards/media-topics/) taxonomy is included.
|Name|Description| |||
Azure Video Indexer makes an inference of main topics from transcripts. When pos
|`confidence`|The confidence score in the range `0`-`1`. Higher is more confident.| |`language`|The language used in the topic.| |`iptcName`|The IPTC media code name, if detected.|
-|`instances` |Currently, Azure Video Indexer does not index a topic to time intervals. The whole video is used as the interval.|
+|`instances` |Currently, Azure AI Video Indexer does not index a topic to time intervals. The whole video is used as the interval.|
```json "topics": [{
Azure Video Indexer makes an inference of main topics from transcripts. When pos
## Next steps
-Explore the [Azure Video Indexer API developer portal](https://api-portal.videoindexer.ai).
+Explore the [Azure AI Video Indexer API developer portal](https://api-portal.videoindexer.ai).
-For information about how to embed widgets in your application, see [Embed Azure Video Indexer widgets into your applications](video-indexer-embed-widgets.md).
+For information about how to embed widgets in your application, see [Embed Azure AI Video Indexer widgets into your applications](video-indexer-embed-widgets.md).
azure-video-indexer Video Indexer Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/video-indexer-overview.md
Title: What is Azure Video Indexer?
-description: This article gives an overview of the Azure Video Indexer service.
+ Title: What is Azure AI Video Indexer?
+description: This article gives an overview of the Azure AI Video Indexer service.
Last updated 12/19/2022
-# What is Azure Video Indexer?
+# What is Azure AI Video Indexer?
> [!IMPORTANT] > Following [Azure Media Services retirement announcement](https://aka.ms/ams-retirement), Azure Video Indexer makes the following announcements: [June release notes](release-notes.md#june-2023).
[!INCLUDE [Gate notice](./includes/face-limited-access.md)]
-Azure Video Indexer is a cloud application, part of Azure Applied AI Services, built on Azure Media Services and Azure Cognitive Services (such as the Face, Translator, Computer Vision, and Speech). It enables you to extract the insights from your videos using Azure Video Indexer video and audio models.
+Azure AI Video Indexer is a cloud application, part of Azure AI services, built on Azure Media Services and Azure AI services (such as the Face, Translator, Azure AI Vision, and Speech). It enables you to extract the insights from your videos using Azure AI Video Indexer video and audio models.
-Azure Video Indexer analyzes the video and audio content by running 30+ AI models, generating rich insights. Below is an illustration of the audio and video analysis performed by Azure Video Indexer in the background.
+Azure AI Video Indexer analyzes the video and audio content by running 30+ AI models, generating rich insights. Below is an illustration of the audio and video analysis performed by Azure AI Video Indexer in the background.
> [!div class="mx-imgBorder"]
-> :::image type="content" source="./media/video-indexer-overview/model-chart.png" alt-text="Diagram of Azure Video Indexer flow." lightbox="./media/video-indexer-overview/model-chart.png":::
+> :::image type="content" source="./media/video-indexer-overview/model-chart.png" alt-text="Diagram of Azure AI Video Indexer flow." lightbox="./media/video-indexer-overview/model-chart.png":::
-To start extracting insights with Azure Video Indexer, see the [how can I get started](#how-can-i-get-started-with-azure-video-indexer) section below.
+To start extracting insights with Azure AI Video Indexer, see the [how can I get started](#how-can-i-get-started-with-azure-ai-video-indexer) section below.
-## What can I do with Azure Video Indexer?
+## What can I do with Azure AI Video Indexer?
-Azure Video Indexer's insights can be applied to many scenarios, among them are:
+Azure AI Video Indexer's insights can be applied to many scenarios, among them are:
* Deep search: Use the insights extracted from the video to enhance the search experience across a video library. For example, indexing spoken words and faces can enable the search experience of finding moments in a video where a person spoke certain words or when two people were seen together. Search based on such insights from videos is applicable to news agencies, educational institutes, broadcasters, entertainment content owners, enterprise LOB apps, and in general to any industry that has a video library that users need to search against.
-* Content creation: Create trailers, highlight reels, social media content, or news clips based on the insights Azure Video Indexer extracts from your content. Keyframes, scene markers, and timestamps of the people and label appearances make the creation process smoother and easier, enabling you to easily get to the parts of the video you need when creating content.
-* Accessibility: Whether you want to make your content available for people with disabilities or if you want your content to be distributed to different regions using different languages, you can use the transcription and translation provided by Azure Video Indexer in multiple languages.
-* Monetization: Azure Video Indexer can help increase the value of videos. For example, industries that rely on ad revenue (news media, social media, and so on) can deliver relevant ads by using the extracted insights as additional signals to the ad server.
+* Content creation: Create trailers, highlight reels, social media content, or news clips based on the insights Azure AI Video Indexer extracts from your content. Keyframes, scene markers, and timestamps of the people and label appearances make the creation process smoother and easier, enabling you to easily get to the parts of the video you need when creating content.
+* Accessibility: Whether you want to make your content available for people with disabilities or if you want your content to be distributed to different regions using different languages, you can use the transcription and translation provided by Azure AI Video Indexer in multiple languages.
+* Monetization: Azure AI Video Indexer can help increase the value of videos. For example, industries that rely on ad revenue (news media, social media, and so on) can deliver relevant ads by using the extracted insights as additional signals to the ad server.
* Content moderation: Use textual and visual content moderation models to keep your users safe from inappropriate content and validate that the content you publish matches your organization's values. You can automatically block certain videos or alert your users about the content. * Recommendations: Video insights can be used to improve user engagement by highlighting the relevant video moments to users. By tagging each video with additional metadata, you can recommend to users the most relevant videos and highlight the parts of the video that will match their needs. ## Video/audio AI features
-The following list shows the insights you can retrieve from your video/audio files using Azure Video Indexer video and audio AI features (models).
+The following list shows the insights you can retrieve from your video/audio files using Azure AI Video Indexer video and audio AI features (models).
Unless specified otherwise, a model is generally available.
Unless specified otherwise, a model is generally available.
* **Face detection**: Detects and groups faces appearing in the video. * **Celebrity identification**: Identifies over 1 million celebrities - like world leaders, actors, artists, athletes, researchers, business, and tech leaders across the globe. The data about these celebrities can also be found on various websites (IMDB, Wikipedia, and so on).
-* **Account-based face identification**: Trains a model for a specific account. It then recognizes faces in the video based on the trained model. For more information, see [Customize a Person model from the Azure Video Indexer website](customize-person-model-with-website.md) and [Customize a Person model with the Azure Video Indexer API](customize-person-model-with-api.md).
+* **Account-based face identification**: Trains a model for a specific account. It then recognizes faces in the video based on the trained model. For more information, see [Customize a Person model from the Azure AI Video Indexer website](customize-person-model-with-website.md) and [Customize a Person model with the Azure AI Video Indexer API](customize-person-model-with-api.md).
* **Thumbnail extraction for faces**: Identifies the best captured face in each group of faces (based on quality, size, and frontal position) and extracts it as an image asset. * **Optical character recognition (OCR)**: Extracts text from images like pictures, street signs and products in media files to create insights. * **Visual content moderation**: Detects adult and/or racy visuals.
Unless specified otherwise, a model is generally available.
* Textless slate detection, including scene matching. For details, see [Slate detection](slate-detection-insight.md).
-* **Textual logo detection** (preview): Matches a specific predefined text using Azure Video Indexer OCR. For example, if a user created a textual logo: "Microsoft", different appearances of the word *Microsoft* will be detected as the "Microsoft" logo. For more information, see [Detect textual logo](detect-textual-logo.md).
+* **Textual logo detection** (preview): Matches a specific predefined text using Azure AI Video Indexer OCR. For example, if a user created a textual logo: "Microsoft", different appearances of the word *Microsoft* will be detected as the "Microsoft" logo. For more information, see [Detect textual logo](detect-textual-logo.md).
### Audio models
-* **Audio transcription**: Converts speech to text over 50 languages and allows extensions. For more information, see [Azure Video Indexer language support](language-support.md).
-* **Automatic language detection**: Identifies the dominant spoken language. For more information, see [Azure Video Indexer language support](language-support.md). If the language can't be identified with confidence, Azure Video Indexer assumes the spoken language is English. For more information, see [Language identification model](language-identification-model.md).
+* **Audio transcription**: Converts speech to text over 50 languages and allows extensions. For more information, see [Azure AI Video Indexer language support](language-support.md).
+* **Automatic language detection**: Identifies the dominant spoken language. For more information, see [Azure AI Video Indexer language support](language-support.md). If the language can't be identified with confidence, Azure AI Video Indexer assumes the spoken language is English. For more information, see [Language identification model](language-identification-model.md).
* **Multi-language speech identification and transcription**: Identifies the spoken language in different segments from audio. It sends each segment of the media file to be transcribed and then combines the transcription back to one unified transcription. For more information, see [Automatically identify and transcribe multi-language content](multi-language-identification-transcription.md). * **Closed captioning**: Creates closed captioning in three formats: VTT, TTML, SRT. * **Two channel processing**: Auto detects separate transcript and merges to single timeline. * **Noise reduction**: Clears up telephony audio or noisy recordings (based on Skype filters).
-* **Transcript customization** (CRIS): Trains custom speech to text models to create industry-specific transcripts. For more information, see [Customize a Language model from the Azure Video Indexer website](customize-language-model-with-website.md) and [Customize a Language model with the Azure Video Indexer APIs](customize-language-model-with-api.md).
+* **Transcript customization** (CRIS): Trains custom speech to text models to create industry-specific transcripts. For more information, see [Customize a Language model from the Azure AI Video Indexer website](customize-language-model-with-website.md) and [Customize a Language model with the Azure AI Video Indexer APIs](customize-language-model-with-api.md).
* **Speaker enumeration**: Maps and understands which speaker spoke which words and when. Sixteen speakers can be detected in a single audio-file. * **Speaker statistics**: Provides statistics for speakers' speech ratios. * **Textual content moderation**: Detects explicit text in the audio transcript. * **Emotion detection**: Identifies emotions based on speech (what's being said) and voice tonality (how it's being said). The emotion could be joy, sadness, anger, or fear.
-* **Translation**: Creates translations of the audio transcript to many different languages. For more information, see [Azure Video Indexer language support](language-support.md).
+* **Translation**: Creates translations of the audio transcript to many different languages. For more information, see [Azure AI Video Indexer language support](language-support.md).
* **Audio effects detection** (preview): Detects the following audio effects in the non-speech segments of the content: alarm or siren, dog barking, crowd reactions (cheering, clapping, and booing), gunshot or explosion, laughter, breaking glass, and silence.
- The detected acoustic events are in the closed captions file. The file can be downloaded from the Azure Video Indexer website. For more information, see [Audio effects detection](audio-effects-detection.md).
+ The detected acoustic events are in the closed captions file. The file can be downloaded from the Azure AI Video Indexer website. For more information, see [Audio effects detection](audio-effects-detection.md).
> [!NOTE] > The full set of events is available only when you choose **Advanced Audio Analysis** when uploading a file, in upload preset. By default, only silence is detected.
When indexing by one channel, partial result for those models will be available.
* **Artifacts**: Extracts rich set of "next level of details" artifacts for each of the models. * **Sentiment analysis**: Identifies positive, negative, and neutral sentiments from speech and visual text.
-## How can I get started with Azure Video Indexer?
+## How can I get started with Azure AI Video Indexer?
-Learn how to [get started with Azure Video Indexer](video-indexer-get-started.md).
+Learn how to [get started with Azure AI Video Indexer](video-indexer-get-started.md).
Once you set up, start using [insights](video-indexer-output-json-v2.md) and check out other **How to guides**. ## Compliance, Privacy and Security
-As an important reminder, you must comply with all applicable laws in your use of Azure Video Indexer, and you may not use Azure Video Indexer or any Azure service in a manner that violates the rights of others, or that may be harmful to others.
+As an important reminder, you must comply with all applicable laws in your use of Azure AI Video Indexer, and you may not use Azure AI Video Indexer or any Azure service in a manner that violates the rights of others, or that may be harmful to others.
-Before uploading any video/image to Azure Video Indexer, You must have all the proper rights to use the video/image, including, where required by law, all the necessary consents from individuals (if any) in the video/image, for the use, processing, and storage of their data in Azure Video Indexer and Azure. Some jurisdictions may impose special legal requirements for the collection, online processing and storage of certain categories of data, such as biometric data. Before using Azure Video Indexer and Azure for the processing and storage of any data subject to special legal requirements, You must ensure compliance with any such legal requirements that may apply to You.
+Before uploading any video/image to Azure AI Video Indexer, You must have all the proper rights to use the video/image, including, where required by law, all the necessary consents from individuals (if any) in the video/image, for the use, processing, and storage of their data in Azure AI Video Indexer and Azure. Some jurisdictions may impose special legal requirements for the collection, online processing and storage of certain categories of data, such as biometric data. Before using Azure AI Video Indexer and Azure for the processing and storage of any data subject to special legal requirements, You must ensure compliance with any such legal requirements that may apply to You.
-To learn about compliance, privacy and security in Azure Video Indexer please visit the Microsoft [Trust Center](https://www.microsoft.com/TrustCenter/CloudServices/Azure/default.aspx). For Microsoft's privacy obligations, data handling and retention practices, including how to delete your data, please review Microsoft's [Privacy Statement](https://privacy.microsoft.com/PrivacyStatement), the [Online Services Terms](https://www.microsoft.com/licensing/product-licensing/products?rtc=1) ("OST") and [Data Processing Addendum](https://www.microsoftvolumelicensing.com/DocumentSearch.aspx?Mode=3&DocumentTypeId=67) ("DPA"). By using Azure Video Indexer, you agree to be bound by the OST, DPA and the Privacy Statement.
+To learn about compliance, privacy and security in Azure AI Video Indexer please visit the Microsoft [Trust Center](https://www.microsoft.com/TrustCenter/CloudServices/Azure/default.aspx). For Microsoft's privacy obligations, data handling and retention practices, including how to delete your data, please review Microsoft's [Privacy Statement](https://privacy.microsoft.com/PrivacyStatement), the [Online Services Terms](https://www.microsoft.com/licensing/product-licensing/products?rtc=1) ("OST") and [Data Processing Addendum](https://www.microsoftvolumelicensing.com/DocumentSearch.aspx?Mode=3&DocumentTypeId=67) ("DPA"). By using Azure AI Video Indexer, you agree to be bound by the OST, DPA and the Privacy Statement.
## Next steps
-You're ready to get started with Azure Video Indexer. For more information, see the following articles:
+You're ready to get started with Azure AI Video Indexer. For more information, see the following articles:
- [Indexing and configuration guide](indexing-configuration-guide.md) - [Pricing](https://azure.microsoft.com/pricing/details/video-indexer/)-- [Get started with the Azure Video Indexer website](video-indexer-get-started.md).-- [Process content with Azure Video Indexer REST API](video-indexer-use-apis.md).
+- [Get started with the Azure AI Video Indexer website](video-indexer-get-started.md).
+- [Process content with Azure AI Video Indexer REST API](video-indexer-use-apis.md).
- [Embed visual widgets in your application](video-indexer-embed-widgets.md).
-For the latest updates, see [Azure Video Indexer release notes](release-notes.md).
+For the latest updates, see [Azure AI Video Indexer release notes](release-notes.md).
azure-video-indexer Video Indexer Search https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/video-indexer-search.md
Title: Search for exact moments in videos with Azure Video Indexer
-description: Learn how to search for exact moments in videos using Azure Video Indexer.
+ Title: Search for exact moments in videos with Azure AI Video Indexer
+description: Learn how to search for exact moments in videos using Azure AI Video Indexer.
Last updated 11/23/2019
-# Search for exact moments in videos with Azure Video Indexer
+# Search for exact moments in videos with Azure AI Video Indexer
-This topic shows you how to use the Azure Video Indexer website to search for exact moments in videos.
+This topic shows you how to use the Azure AI Video Indexer website to search for exact moments in videos.
-1. Go to the [Azure Video Indexer](https://www.videoindexer.ai/) website and sign in.
+1. Go to the [Azure AI Video Indexer](https://www.videoindexer.ai/) website and sign in.
1. Specify the search keywords and the search will be performed among all videos in your account's library. You can filter your search by selecting **Filters**. In the example below, we search for "Microsoft" where it appears as on-screen text only (OCR).
This topic shows you how to use the Azure Video Indexer website to search for ex
:::image type="content" source="./media/video-indexer-search/insights.png" alt-text="View, search and edit the insights of the video":::
- If you embed the video through Azure Video Indexer widgets, you can achieve the player/insights view and synchronization in your app. For more information, see [Embed Azure Video Indexer widgets into your app](video-indexer-embed-widgets.md).
+ If you embed the video through Azure AI Video Indexer widgets, you can achieve the player/insights view and synchronization in your app. For more information, see [Embed Azure AI Video Indexer widgets into your app](video-indexer-embed-widgets.md).
1. You can view, search, and edit the transcripts by clicking on the **Timeline** tab. :::image type="content" source="./media/video-indexer-search/timeline.png" alt-text="View, search and edit the transcripts of the video":::
You can create a clip based on your video of specific lines and moments by click
## Next steps
-[Process content with Azure Video Indexer REST API](video-indexer-use-apis.md)
+[Process content with Azure AI Video Indexer REST API](video-indexer-use-apis.md)
azure-video-indexer Video Indexer Use Apis https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/video-indexer-use-apis.md
Title: Use the Azure Video Indexer API
-description: This article describes how to get started with Azure Video Indexer API.
+ Title: Use the Azure AI Video Indexer API
+description: This article describes how to get started with Azure AI Video Indexer API.
Last updated 07/03/2023
-# Tutorial: Use the Azure Video Indexer API
+# Tutorial: Use the Azure AI Video Indexer API
-Azure Video Indexer consolidates various audio and video artificial intelligence (AI) technologies offered by Microsoft into one integrated service, making development simpler. The APIs are designed to enable developers to focus on consuming Media AI technologies without worrying about scale, global reach, availability, and reliability of cloud platforms. You can use the API to upload your files, get detailed video insights, get URLs of embeddable insight and player widgets, and more.
+Azure AI Video Indexer consolidates various audio and video artificial intelligence (AI) technologies offered by Microsoft into one integrated service, making development simpler. The APIs are designed to enable developers to focus on consuming Media AI technologies without worrying about scale, global reach, availability, and reliability of cloud platforms. You can use the API to upload your files, get detailed video insights, get URLs of embeddable insight and player widgets, and more.
[!INCLUDE [accounts](./includes/create-accounts-intro.md)]
-This article shows how developers can take advantage of the [Azure Video Indexer API](https://api-portal.videoindexer.ai/).
+This article shows how developers can take advantage of the [Azure AI Video Indexer API](https://api-portal.videoindexer.ai/).
## Prerequisite
Before you start, see the [Recommendations](#recommendations) section (that foll
## Subscribe to the API
-1. Sign in to the [Azure Video Indexer API developer portal](https://api-portal.videoindexer.ai/).
+1. Sign in to the [Azure AI Video Indexer API developer portal](https://api-portal.videoindexer.ai/).
> [!Important]
- > * You must use the same provider you used when you signed up for Azure Video Indexer.
+ > * You must use the same provider you used when you signed up for Azure AI Video Indexer.
> * Personal Google and Microsoft (Outlook/Live) accounts can only be used for trial accounts. Accounts connected to Azure require Azure AD. > * There can be only one active account per email. If a user tries to sign in with user@gmail.com for LinkedIn and later with user@gmail.com for Google, the latter will display an error page, saying the user already exists.
- ![Sign in to the Azure Video Indexer API developer portal](./media/video-indexer-use-apis/sign-in.png)
+ ![Sign in to the Azure AI Video Indexer API developer portal](./media/video-indexer-use-apis/sign-in.png)
1. Subscribe. Select the [Products](https://api-portal.videoindexer.ai/products) tab. Then, select **Authorization** and subscribe.
Before you start, see the [Recommendations](#recommendations) section (that foll
After you subscribe, you can find your subscription under **[Products](https://api-portal.videoindexer.ai/products)** -> **Profile**. In the subscriptions section, you'll find the primary and secondary keys. The keys should be protected. The keys should only be used by your server code. They shouldn't be available on the client side (.js, .html, and so on).
- ![Subscription and keys in the Azure Video Indexer API developer portal](./media/video-indexer-use-apis/subscriptions.png)
+ ![Subscription and keys in the Azure AI Video Indexer API developer portal](./media/video-indexer-use-apis/subscriptions.png)
-An Azure Video Indexer user can use a single subscription key to connect to multiple Azure Video Indexer accounts. You can then link these Azure Video Indexer accounts to different Media Services accounts.
+An Azure AI Video Indexer user can use a single subscription key to connect to multiple Azure AI Video Indexer accounts. You can then link these Azure AI Video Indexer accounts to different Media Services accounts.
## Obtain access token using the Authorization API
You can control the permission level of tokens in two ways:
* For **Account** tokens, you can use the **Get Account Access Token With Permission** API and specify the permission type (**Reader**/**Contributor**/**MyAccessManager**/**Owner**). * For all types of tokens (including **Account** tokens), you can specify **allowEdit=true/false**. **false** is the equivalent of a **Reader** permission (read-only) and **true** is the equivalent of a **Contributor** permission (read-write).
-For most server-to-server scenarios, you'll probably use the same **account** token since it covers both **account** operations and **video** operations. However, if you're planning to make client side calls to Azure Video Indexer (for example, from JavaScript), you would want to use a **video** access token to prevent clients from getting access to the entire account. That's also the reason that when embedding Azure Video Indexer client code in your client (for example, using **Get Insights Widget** or **Get Player Widget**), you must provide a **video** access token.
+For most server-to-server scenarios, you'll probably use the same **account** token since it covers both **account** operations and **video** operations. However, if you're planning to make client side calls to Azure AI Video Indexer (for example, from JavaScript), you would want to use a **video** access token to prevent clients from getting access to the entire account. That's also the reason that when embedding Azure AI Video Indexer client code in your client (for example, using **Get Insights Widget** or **Get Player Widget**), you must provide a **video** access token.
To make things easier, you can use the **Authorization** API > **GetAccounts** to get your accounts without obtaining a user token first. You can also ask to get the accounts with valid tokens, enabling you to skip an additional call to get an account token. Access tokens expire after 1 hour. Make sure your access token is valid before using the Operations API. If it expires, call the Authorization API again to get a new access token.
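As an illustration, a hedged sketch of requesting an account access token with the primary subscription key; the endpoint shape is the classic-account form and all values are placeholders, so confirm the route and parameters in the developer portal:

```javascript
// Hedged sketch: get an account access token using the APIM subscription key.
// <LOCATION> and <ACCOUNT_ID> are placeholders; pass your primary key as the argument.
async function getAccountAccessToken(subscriptionKey) {
  const url =
    "https://api.videoindexer.ai/Auth/<LOCATION>/Accounts/<ACCOUNT_ID>/AccessToken" +
    "?allowEdit=true";
  const response = await fetch(url, {
    headers: { "Ocp-Apim-Subscription-Key": subscriptionKey },
  });
  return response.json(); // the token is returned as a JSON-encoded string
}
```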
-You're ready to start integrating with the API. Find [the detailed description of each Azure Video Indexer REST API](https://api-portal.videoindexer.ai/).
+You're ready to start integrating with the API. Find [the detailed description of each Azure AI Video Indexer REST API](https://api-portal.videoindexer.ai/).
## Operational API calls The Account ID parameter is required in all operational API calls. Account ID is a GUID that can be obtained in one of the following ways:
-* Use the **Azure Video Indexer website** to get the Account ID:
+* Use the **Azure AI Video Indexer website** to get the Account ID:
- 1. Browse to the [Azure Video Indexer](https://www.videoindexer.ai/) website and sign in.
+ 1. Browse to the [Azure AI Video Indexer](https://www.videoindexer.ai/) website and sign in.
2. Browse to the **Settings** page. 3. Copy the account ID.
- ![Azure Video Indexer settings and account ID](./media/video-indexer-use-apis/account-id.png)
+ ![Azure AI Video Indexer settings and account ID](./media/video-indexer-use-apis/account-id.png)
-* Use **Azure Video Indexer Developer Portal** to programmatically get the Account ID.
+* Use **Azure AI Video Indexer Developer Portal** to programmatically get the Account ID.
Use the [Get account](https://api-portal.videoindexer.ai/api-details#api=Operations&operation=Get-Account) API.
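A hedged sketch of doing that programmatically by listing the accounts visible to your subscription key; the endpoint shape and property name are assumptions based on the Authorization API, so verify them in the developer portal:

```javascript
// Hedged sketch: list accounts for the signed-in user and read their GUIDs.
// <LOCATION> is a placeholder; the `id` property name is an assumption.
async function listAccountIds(subscriptionKey) {
  const response = await fetch("https://api.videoindexer.ai/Auth/<LOCATION>/Accounts", {
    headers: { "Ocp-Apim-Subscription-Key": subscriptionKey },
  });
  const accounts = await response.json();
  return accounts.map((account) => account.id);
}
```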
The Account ID parameter is required in all operational API calls. Account ID is
## Recommendations
-This section lists some recommendations when using Azure Video Indexer API.
+This section lists some recommendations when using Azure AI Video Indexer API.
### Uploading - If you're planning to upload a video, it's recommended to place the file in some public network location (for example, an Azure Blob Storage account). Get the link to the video and provide the URL as the upload file param.
- The URL provided to Azure Video Indexer must point to a media (audio or video) file. An easy verification for the URL (or SAS URL) is to paste it into a browser, if the file starts playing/downloading, it's likely a good URL. If the browser is rendering some visualization, it's likely not a link to a file but to an HTML page.
+ The URL provided to Azure AI Video Indexer must point to a media (audio or video) file. An easy verification for the URL (or SAS URL) is to paste it into a browser, if the file starts playing/downloading, it's likely a good URL. If the browser is rendering some visualization, it's likely not a link to a file but to an HTML page.
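A hedged sketch of the upload-from-URL call that this recommendation maps to; the endpoint shape is the classic-account form and every value (location, IDs, token, SAS URL) is a placeholder, so verify the parameters in the developer portal:

```javascript
// Hedged sketch: index a video by passing a publicly reachable (or SAS) URL.
// All values below are placeholders.
async function uploadByUrl() {
  const params = new URLSearchParams({
    accessToken: "<ACCOUNT_ACCESS_TOKEN>",
    name: "my-video",
    videoUrl: "https://<STORAGE_ACCOUNT>.blob.core.windows.net/videos/sample.mp4?<SAS>",
  });
  const url = `https://api.videoindexer.ai/<LOCATION>/Accounts/<ACCOUNT_ID>/Videos?${params}`;
  const response = await fetch(url, { method: "POST" });
  return response.json(); // contains the new video's ID and processing state
}
```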
When you're uploading videos by using the API, you have the following options: * Upload your video from a URL (preferred).
When you're uploading videos by using the API, you have the following options:
## Code sample
-The following C# code snippet demonstrates the usage of all the Azure Video Indexer APIs together.
+The following C# code snippet demonstrates the usage of all the Azure AI Video Indexer APIs together.
> [!NOTE] > The following sample is intended for classic accounts only and not compatible with ARM-based accounts. For an updated sample for ARM (recommended), see [this ARM sample repo](https://github.com/Azure-Samples/media-services-video-indexer/blob/master/API-Samples/C%23/ArmBased/Program.cs).
After you're done with this tutorial, delete resources that you aren't planning
## See also -- [Azure Video Indexer overview](video-indexer-overview.md)
+- [Azure AI Video Indexer overview](video-indexer-overview.md)
- [Regions](https://azure.microsoft.com/global-infrastructure/services/?products=cognitive-services) ## Next steps
azure-video-indexer Video Indexer View Edit https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/video-indexer-view-edit.md
Title: View Azure Video Indexer insights
-description: This article demonstrates how to view Azure Video Indexer insights.
+ Title: View Azure AI Video Indexer insights
+description: This article demonstrates how to view Azure AI Video Indexer insights.
Last updated 04/12/2023
-# View Azure Video Indexer insights
+# View Azure AI Video Indexer insights
-This article shows you how to view the Azure Video Indexer insights of a video.
+This article shows you how to view the Azure AI Video Indexer insights of a video.
-1. Browse to the [Azure Video Indexer](https://www.videoindexer.ai/) website and sign in.
-2. Find a video from which you want to create your Azure Video Indexer insights. For more information, see [Find exact moments within videos](video-indexer-search.md).
+1. Browse to the [Azure AI Video Indexer](https://www.videoindexer.ai/) website and sign in.
+2. Find a video from which you want to create your Azure AI Video Indexer insights. For more information, see [Find exact moments within videos](video-indexer-search.md).
3. Press **Play**. The page shows the video's insights.
This article shows you how to view the Azure Video Indexer insights of a video.
## See also
-[Azure Video Indexer overview](video-indexer-overview.md)
+[Azure AI Video Indexer overview](video-indexer-overview.md)
azure-video-indexer View Closed Captions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/view-closed-captions.md
Title: View closed captions
-description: Learn how to view captions using the Azure Video Indexer website.
+description: Learn how to view captions using the Azure AI Video Indexer website.
Last updated 10/24/2022
-# View closed captions in the Azure Video Indexer website
+# View closed captions in the Azure AI Video Indexer website
-This article shows how to view closed captions in the [Azure Video Indexer video player](https://www.videoindexer.ai).
+This article shows how to view closed captions in the [Azure AI Video Indexer video player](https://www.videoindexer.ai).
## View closed captions
-1. Go to the [Azure Video Indexer](https://www.videoindexer.ai/) website and sign in.
+1. Go to the [Azure AI Video Indexer](https://www.videoindexer.ai/) website and sign in.
1. Select a video for which you want to view captions.
-1. On the bottom of the Azure Video Indexer video player, select **Closed Captioning** (in some browsers located under the **Captions** menu, in some located under the **gear** icon).
+1. On the bottom of the Azure AI Video Indexer video player, select **Closed Captioning** (in some browsers located under the **Captions** menu, in some located under the **gear** icon).
1. Under **Closed Captioning**, select a language in which you want to view captions. For example, **English**. Once checked, you see the captions in English. 1. To see a speaker in front of the caption, select **Settings** under **Closed Captioning** and check **Show speakers** (under **Configurations**) -> press **Done**. ## Next steps
-See how to [Insert or remove transcript lines in the Azure Video Indexer website](edit-transcript-lines-portal.md) and other how to articles that demonstrate how to navigate in the Azure Video Indexer website.
+See how to [Insert or remove transcript lines in the Azure AI Video Indexer website](edit-transcript-lines-portal.md) and other how to articles that demonstrate how to navigate in the Azure AI Video Indexer website.
backup Backup Azure Sap Hana Database Troubleshoot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/backup-azure-sap-hana-database-troubleshoot.md
Title: Troubleshoot SAP HANA databases backup errors
+ Title: Troubleshoot SAP HANA databases back up errors
description: Describes how to troubleshoot common errors that might occur when you use Azure Backup to back up SAP HANA databases. Previously updated : 11/02/2022 Last updated : 07/18/2023 --++ # Troubleshoot backup of SAP HANA databases on Azure
Refer to the [prerequisites](tutorial-backup-sap-hana-db.md#prerequisites) and [
| **Error message** | `Backup log chain is broken` | | | |
-| **Possible causes** | HANA LSN Log chain break can be triggered for various reasons, including:<ul><li>Azure Storage call failure to commit backup.</li><li>The Tenant DB is offline.</li><li>Extension upgrade has terminated an in-progress Backup job.</li><li>Unable to connect to Azure Storage during backup.</li><li>SAP HANA has rolled back a transaction in the backup process.</li><li>A backup is complete, but catalog is not yet updated with success in HANA system.</li><li>Backup failed from Azure Backup perspective, but success from the perspective of HANA - the log backup/catalog destination might have been updated from backint-to-file system, or the backint executable might have been changed.</li></ul> |
+| **Possible causes** | HANA LSN Log chain break can be triggered for various reasons, including:<ul><li>Azure Storage call failure to commit backup.</li><li>The Tenant DB is offline.</li><li>Extension upgrade has terminated an in-progress Backup job.</li><li>Unable to connect to Azure Storage during backup.</li><li>SAP HANA has rolled back a transaction in the backup process.</li><li>A backup is complete, but catalog is not yet updated with success in HANA system.</li><li>Backup failed from Azure Backup perspective, but success from the perspective of HANA - the log backup/catalog destination might have been updated from Backint-to-file system, or the Backint executable might have been changed.</li></ul> |
| **Recommended action** | To resolve this issue, Azure Backup triggers an auto-heal Full backup. While this auto-heal backup is in progress, all log backups triggered by HANA fail with **OperationCancelledBecauseConflictingAutohealOperationRunningUserError**. Once the auto-heal Full backup is complete, logs and all other backups start working as expected.<br>If you don't see an auto-heal Full backup triggered or any successful backup (Full/Differential/Incremental) in 24 hours, contact Microsoft support.</br> | ### UserErrorSDCtoMDCUpgradeDetected
Refer to the [prerequisites](tutorial-backup-sap-hana-db.md#prerequisites) and [
| **Error message** | `Backups will fail with this error when the Backint Configuration is incorrectly updated.` | | | |
-| **Possible causes** | The backint configuration updated during the Configure Protection flow by Azure Backup is either altered/updated by the customer. |
-| **Recommended action** | Check if the following (backint) parameters are set:<br><ul><li>[catalog_backup_using_backint:true]</li><li>[enable_accumulated_catalog_backup:false]</li><li>[parallel_data_backup_backint_channels:1]</li><li>[log_backup_timeout_s:900)]</li><li>[backint_response_timeout:7200]</li></ul>If backint-based parameters are present at the HOST level, remove them. However, if the parameters aren't present at the HOST level, but are manually modified at a database level, ensure that the database level values are set. Or, run [stop protection with retain backup data](./sap-hana-db-manage.md#stop-protection-for-an-sap-hana-database) from the Azure portal, and then select Resume backup. |
+| **Possible causes** | The Backint configuration that Azure Backup sets during the Configure Protection flow was altered or updated by the customer. |
+| **Recommended action** | Check if the following (Backint) parameters are set:<br><ul><li> `[catalog_backup_using_backint:true]` </li><li> `[enable_accumulated_catalog_backup:false]` </li><li> `[parallel_data_backup_backint_channels:1]` </li><li> `[log_backup_timeout_s:900]` </li><li> `[backint_response_timeout:7200]` </li></ul>If Backint-based parameters are present at the HOST level, remove them. However, if the parameters aren't present at the HOST level, but are manually modified at a database level, ensure that the database-level values are set. Or, run [stop protection with retain backup data](./sap-hana-db-manage.md#stop-protection-for-an-sap-hana-database) from the Azure portal, and then select Resume backup. |
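If you prefer the CLI over the portal for that stop-and-resume sequence, a minimal sketch follows. The angle-bracketed values are placeholders, and the exact required parameters of `az backup protection disable` and `az backup protection resume` can vary by CLI version, so treat this as an outline rather than the documented procedure:

```azurecli
# Stop protection but retain the existing backup data for the HANA database item.
az backup protection disable \
    --resource-group <resource-group> \
    --vault-name <vault-name> \
    --container-name <container-name> \
    --item-name <item-name> \
    --backup-management-type AzureWorkload \
    --delete-backup-data false \
    --yes

# Resume protection with the policy you want to apply.
az backup protection resume \
    --resource-group <resource-group> \
    --vault-name <vault-name> \
    --container-name <container-name> \
    --item-name <item-name> \
    --policy-name <policy-name> \
    --backup-management-type AzureWorkload
```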
### UserErrorIncompatibleSrcTargetSystemsForRestore
Refer to the [prerequisites](tutorial-backup-sap-hana-db.md#prerequisites) and [
**Error message** | `Unable to connect to the AAD service from the HANA system.` | --
-**Possible causes** | Firewall or proxy settings as Backup extension's plugin service account is not allowing the outbound connection to AAD.
-**Recommended action** | Fix the firewall or proxy settings for the outbound connection to AAD to succeed.
+**Possible causes** | Firewall or proxy settings are preventing the Backup extension's plugin service account from making the outbound connection to Azure Active Directory.
+**Recommended action** | Fix the firewall or proxy settings so that the outbound connection to Azure Active Directory succeeds.
### UserErrorMisConfiguredSslCaStore
Refer to the [prerequisites](tutorial-backup-sap-hana-db.md#prerequisites) and [
**Error message** | `Operation is blocked as the vault has reached its maximum limit for such operations permitted in a span of 24 hours.` | --
-**Possible causes** | When you've reached the maximum permissible limit for an operation in a span of 24 hours, this error appears. This error usually appears when there are at-scale operations such as modify policy or auto-protection. Unlike the case of CloudDosAbsoluteLimitReached, there isn't much you can do to resolve this state. In fact, Azure Backup service will retry the operations internally for all the items in question.<br><br> For example, if you've a large number of datasources protected with a policy and you try to modify that policy, it will trigger configure protection jobs for each of the protected items and sometimes may hit the maximum limit permissible for such operations per day.
+**Possible causes** | When you've reached the maximum permissible limit for an operation in a span of 24 hours, this error appears. This error usually appears when there are at-scale operations such as modify policy or auto-protection. Unlike the case of CloudDosAbsoluteLimitReached, there isn't much you can do to resolve this state. In fact, the Azure Backup service will retry the operations internally for all the items in question.<br><br> For example, if you have a large number of datasources protected with a policy and you try to modify that policy, it will trigger the configure protection jobs for each of the protected items and may sometimes hit the maximum limit permissible for such operations per day.
**Recommended action** | Azure Backup service will automatically retry this operation after 24 hours. ### UserErrorInvalidBackint **Error message** | Found invalid hdbbackint executable. |
-**Possible case** | 1. The operation to change Backint path from `/opt/msawb/bin` to `/usr/sap/<sid>/SYS/global/hdb/opt/hdbbackint` failed due to insufficient storage space in the new location. <br><br> 2. The *hdbbackint utility* located on `/usr/sap/<sid>/SYS/global/hdb/opt/hdbbackint` doesn't have executable permissions or correct ownership.
+**Possible cause** | 1. The operation to change the Backint path from `/opt/msawb/bin` to `/usr/sap/<sid>/SYS/global/hdb/opt/hdbbackint` failed due to insufficient storage space in the new location. <br><br> 2. The *hdbbackint utility* located at `/usr/sap/<sid>/SYS/global/hdb/opt/hdbbackint` doesn't have executable permissions or correct ownership.
**Recommended action** | 1. Ensure that there's free space available on `/usr/sap/<sid>/SYS/global/hdb/opt/hdbbackint` or the path where you want to save backups. <br><br> 2. Ensure that the *sapsys* group has appropriate permissions on the `/usr/sap/<sid>/SYS/global/hdb/opt/hdbbackint` file by running the command `chmod 755`.
+### UserErrorHanaSQLQueryFailed
+
+**Error message** | Operation failed while running a query on the HANA server. <br><br> All operations that fail with this user error are due to an issue on the HANA side while running the query. The additional details contain the exact error message.
+ |
+**Possible causes** | - Disk corruption issue. <br> - Memory allocation issues. <br> - Too many databases in use. <br> - Topology update issue.
+**Recommended action** | Work with the SAP HANA team to fix this issue. If the issue persists, contact Microsoft support for further assistance.
+ ## Restore checks ### Single Container Database (SDC) restore
-Take care of inputs while restoring a single container database (SDC) for HANA to another SDC machine. The database name should be given with lowercase and with "sdc" appended in brackets. The HANA instance will be displayed in capitals.
+Pay close attention to the inputs when you restore a single container database (SDC) for HANA to another SDC machine. The database name should be provided in lowercase with `sdc` appended in brackets. The HANA instance is displayed in capitals.
-Assume an SDC HANA instance "H21" is backed up. The backup items page will show the backup item name as **"h21(sdc)"**. If you attempt to restore this database to another target SDC, say H11, then following inputs need to be provided.
+Assume an SDC HANA instance "H21" is backed up. The backup items page will show the backup item name as `h21(sdc)`. If you attempt to restore this database to another target SDC, say H11, then the following inputs need to be provided.
![Restored SDC database name](media/backup-azure-sap-hana-database/hana-sdc-restore.png) Note the following points: -- By default, the restored database name will be populated with the backup item name. In this case, h21(sdc).-- Select the target as H11 won't change the restored database name automatically. **It should be edited to h11(sdc)**. Regarding SDC, the restored db name will be the target instance ID with lowercase letters and 'sdc' appended in brackets.
+- By default, the restored database name will be populated with the backup item name. In this case, `h21(sdc)`.
+- Selecting the target as H11 won't change the restored database name automatically. It should be edited to `h11(sdc)`. For SDC, the restored database name is the target instance ID in lowercase letters with `sdc` appended in brackets.
- Since SDC can have only a single database, you also need to select the checkbox to allow overriding the existing database data with the recovery point data. - Linux is case-sensitive, so be careful to preserve the case.
Upgrades from SDC to MDC that don't cause a SID change can be handled as follows
- Rerun the [pre-registration script](https://aka.ms/scriptforpermsonhana) - Re-register the extension for the same machine in the Azure portal (**Backup** -> **View details** -> Select the relevant Azure VM -> Re-register) - Select **Rediscover DBs** for the same VM. This action should show the new DBs in step 3 as SYSTEMDB and Tenant DB, not SDC-- The older SDC database continues to exist in the vault and have the old backed-up data retained according to the policy
+- The older SDC database continues to exist in the vault and has the old backed-up data retained according to the policy.
- Configure backup for these databases ## SDC to MDC upgrade with a change in SID
Upgrades from SDC to MDC that cause a SID change can be handled as follows:
- Rerun the [pre-registration script](https://aka.ms/scriptforpermsonhana) with correct details (new SID and MDC). Due to a change in SID, you might face issues with successful execution of the script. Contact Azure Backup support if you face issues. - Re-register the extension for the same machine in the Azure portal (**Backup** -> **View details** -> Select the relevant Azure VM -> Re-register). - Select **Rediscover DBs** for the same VM. This action should show the new DBs in step 3 as SYSTEMDB and Tenant DB, not SDC.-- The older SDC database continues to exist in the vault and have old backed up data retained according to the policy.
+- The older SDC database continues to exist in the vault and has old backed-up data retained according to the policy.
- Configure backup for these databases. ## Re-registration failures
In the preceding scenarios, we recommend that you trigger a re-register operatio
## Next steps -- Review the [frequently asked questions](./sap-hana-faq-backup-azure-vm.yml) about the back up of SAP HANA databases on Azure VMs.
+- Review the [frequently asked questions](./sap-hana-faq-backup-azure-vm.yml) about the backup of SAP HANA databases on Azure VMs.
backup Backup Azure Sql Restore Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/backup-azure-sql-restore-cli.md
Title: Restore SQL server databases in Azure VMs using Azure Backup via CLI description: Learn how to use CLI to restore SQL server databases in Azure VMs in the Recovery Services vault. Previously updated : 08/11/2022 Last updated : 07/18/2023 --++ # Restore SQL databases in an Azure VM using Azure CLI
In this article, you'll learn how to:
This article assumes you've an SQL database running on Azure VM that's backed-up using Azure Backup. If you've used [Back up an SQL database in Azure using CLI](backup-azure-sql-backup-cli.md) to back up your SQL database, then you're using the following resources:
-* A resource group named *SQLResourceGroup*
-* A vault named *SQLVault*
-* Protected container named *VMAppContainer;Compute;SQLResourceGroup;testSQLVM*
-* Backed-up database/item named *sqldatabase;mssqlserver;master*
-* Resources in the *westus2* region
+* A resource group named `SQLResourceGroup`.
+* A vault named `SQLVault`.
+* Protected container named `VMAppContainer;Compute;SQLResourceGroup;testSQLVM`.
+* Backed-up database/item named `sqldatabase;mssqlserver;master`.
+* Resources in the `westus` region.
>[!Note] >See the [SQL backup support matrix](sql-support-matrix.md) to know more about the supported configurations and scenarios.
Name Operation Status Item Name
be7ea4a4-0752-4763-8570-a306b0a0106f Restore InProgress master [testSQLVM] AzureWorkload 2022-06-21T03:51:06.898981+00:00 0:00:05.652967 ```
-The response provides you the job name. You can use this job name to track the job status using [az backup job show](/cli/azure/backup/job#az-backup-job-show) command.
+The response provides you with the job name. You can use this job name to track the job status using the [az backup job show](/cli/azure/backup/job#az-backup-job-show) command.
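For instance, with the resource names listed in the prerequisites above and the job name returned in the sample output, the tracking command might look like this (your job name will differ):

```azurecli
az backup job show \
    --resource-group SQLResourceGroup \
    --vault-name SQLVault \
    --name be7ea4a4-0752-4763-8570-a306b0a0106f \
    --output table
```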
## Restore and overwrite
Name Operation Status Item Name
1730ec49-166a-4bfd-99d5-93027c2d8480 Restore InProgress master [testSQLVM] AzureWorkload 2022-06-21T04:04:11.161411+00:00 0:00:03.118076 ```
-The response provides you the job name. You can use this job name to track the job status using the [az backup job show](/cli/azure/backup/job#az-backup-job-show) command.
+The response provides you with the job name. You can use this job name to track the job status using the [az backup job show](/cli/azure/backup/job#az-backup-job-show) command.
## Restore to a secondary region
The output appears as:
} ```
-The response provides you the job name. You can use this job name to track the job status using the [az backup job show](/cli/azure/backup/job#az-backup-job-show) command.
+The response provides you with the job name. You can use this job name to track the job status using the [az backup job show](/cli/azure/backup/job#az-backup-job-show) command.
> [!NOTE] > If you don't want to restore the entire chain but only a subset of files, follow the steps as documented [here](restore-sql-database-azure-vm.md#partial-restore-as-files).
+## Cross Subscription Restore
+
+With Cross Subscription Restore (CSR), you have the flexibility of restoring to any subscription and any vault under your tenant if restore permissions are available. By default, CSR is enabled on all Recovery Services vaults (existing and newly created vaults).
+
+>[!Note]
+>- You can trigger Cross Subscription Restore from Recovery Services vault.
+>- CSR is supported only for streaming-based backup and is not supported for snapshot-based backup.
+>- Cross Regional Restore (CRR) with CSR is not supported.
++
+```azurecli
+az backup vault create
+
+```
+
+Add the parameter `cross-subscription-restore-state` that enables you to set the CSR state of the vault during vault creation and updating.
+
+```azurecli
+az backup recoveryconfig show
+
+```
+
+Add the parameter `--target-subscription-id` that enables you to provide the target subscription as the input while triggering Cross Subscription Restore for SQL or HANA datasources.
+
+**Example**:
+
+```azurecli
+ az backup vault create -g {rg_name} -n {vault_name} -l {location} --cross-subscription-restore-state Disable
+ az backup recoveryconfig show --restore-mode alternateworkloadrestore --backup-management-type azureworkload -r {rp} --target-container-name {target_container} --target-item-name {target_item} --target-resource-group {target_rg} --target-server-name {target_server} --target-server-type SQLInstance --target-subscription-id {target_subscription} --target-vault-name {target_vault} --workload-type SQLDataBase --ids {source_item_id}
+
+```
+ ## Next steps * Learn how to [manage SQL databases that are backed up using Azure CLI](backup-azure-sql-manage-cli.md).
backup Backup Support Matrix Iaas https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/backup-support-matrix-iaas.md
Adding a disk to a protected VM | Supported.
Resizing a disk on a protected VM | Supported. Shared storage| Backing up VMs by using Cluster Shared Volumes (CSV) or Scale-Out File Server isn't supported. CSV writers are likely to fail during backup. On restore, disks that contain CSV volumes might not come up. [Shared disks](../virtual-machines/disks-shared-enable.md) | Not supported.
-<a name="ultra-disk-backup">Ultra SSD disks</a> | Supported with [Enhanced policy](backup-azure-vms-enhanced-policy.md). The support is currently in preview. <br><br> Supported region(s) - Sweden Central, South Central US, East US, East US 2, West US 2, West Europe and North Europe. <br><br> To enroll your subscription for this feature, [fill this form](https://forms.office.com/r/1GLRnNCntU). <br><br> - Configuration of Ultra disk protection is supported via Recovery Services vault only. This configuration is currently not supported via virtual machine blade. <br><br> - Cross-region restore is currently not supported for machines using Ultra disks.
-<a name="premium-ssd-v2-backup">Premium SSD v2 disks</a> | Supported with [Enhanced policy](backup-azure-vms-enhanced-policy.md). The support is currently in preview. <br><br> Supported region(s) - East US, West Europe, South Central US, East US 2, West US 2 and North Europe. <br><br> To enroll your subscription for this feature, [fill this form](https://forms.office.com/r/h56TpTc773). <br><br> - Configuration of Premium v2 disk protection is supported via Recovery Services vault only. This configuration is currently not supported via virtual machine blade. <br><br> - Cross-region restore is currently not supported for machines using Premium v2 disks.
+<a name="ultra-disk-backup">Ultra disks</a> | Supported with [Enhanced policy](backup-azure-vms-enhanced-policy.md). The support is currently in preview. <br><br> Supported region(s) - Sweden Central, Central US, North Central US, South Central US, East US, East US 2, West US 2, West Europe and North Europe. <br><br> To enroll your subscription for this feature, [fill this form](https://forms.office.com/r/1GLRnNCntU). <br><br> - Configuration of Ultra disk protection is supported via Recovery Services vault only. This configuration is currently not supported via virtual machine blade. <br><br> - Cross-region restore is currently not supported for machines using Ultra disks.
+<a name="premium-ssd-v2-backup">Premium SSD v2</a> | Supported with [Enhanced policy](backup-azure-vms-enhanced-policy.md). The support is currently in preview. <br><br> Supported region(s) - East US, West Europe, Central US, South Central US, East US 2, West US 2 and North Europe. <br><br> To enroll your subscription for this feature, [fill this form](https://forms.office.com/r/h56TpTc773). <br><br> - Configuration of Premium v2 disk protection is supported via Recovery Services vault only. This configuration is currently not supported via virtual machine blade. <br><br> - Cross-region restore is currently not supported for machines using Premium v2 disks.
[Temporary disks](../virtual-machines/managed-disks-overview.md#temporary-disk) | Azure Backup doesn't back up temporary disks. NVMe/[ephemeral disks](../virtual-machines/ephemeral-os-disks.md) | Not supported. [Resilient File System (ReFS)](/windows-server/storage/refs/refs-overview) restore | Supported. Volume Shadow Copy Service (VSS) supports app-consistent backups on ReFS.
backup Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/policy-reference.md
Title: Built-in policy definitions for Azure Backup description: Lists Azure Policy built-in policy definitions for Azure Backup. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 07/06/2023 Last updated : 07/18/2023
backup Restore Sql Database Azure Vm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/restore-sql-database-azure-vm.md
Title: Restore SQL Server databases on an Azure VM description: This article describes how to restore SQL Server databases that are running on an Azure VM and that are backed up with Azure Backup. You can also use Cross Region Restore to restore your databases to a secondary region. Previously updated : 02/20/2023 Last updated : 07/18/2023 --++ # Restore SQL Server databases on Azure VMs
Azure Backup can restore SQL Server databases that are running on Azure VMs as f
Before you restore a database, note the following: - You can restore the database to an instance of a SQL Server in the same Azure region.-- The destination server must be registered to the same vault as the source.
+- The destination server must be registered to the same vault as the source. If you want to restore backups to a different vault, [enable Cross Subscription Restore](#cross-subscription-restore).
- If you have multiple instances running on a server, all the instances should be up and running. Otherwise the server won't appear in the list of destination servers for you to restore the database to. For more information, refer to [the troubleshooting steps](backup-sql-server-azure-troubleshoot.md#faulty-instance-in-a-vm-with-multiple-sql-server-instances). - To restore a TDE-encrypted database to another SQL Server, you need to first [restore the certificate to the destination server](/sql/relational-databases/security/encryption/move-a-tde-protected-database-to-another-sql-server). - [CDC](/sql/relational-databases/track-changes/enable-and-disable-change-data-capture-sql-server) enabled databases should be restored using the [Restore as files](#restore-as-files) option.
To restore the database files to the *original path on the source server*, remov
``` >[!Note]
->You shouldn’t have the same database files on the target server (restore with replace).  Also, you can [enable instant file initialization on the target server to reduce the file initialization time overhead]( /sql/relational-databases/databases/database-instant-file-initialization?view=sql-server-ver16).
+>You shouldn’t have the same database files on the target server (restore with replace).  Also, you can [enable instant file initialization on the target server to reduce the file initialization time overhead]( /sql/relational-databases/databases/database-instant-file-initialization?view=sql-server-ver16&preserve-view=true).
To relocate the database files from the target restore server, you can frame a TSQL command using the `MOVE` clauses.
To relocate the database files from the target restore server, you can frame a T
GO ```
-If there are more than two files for the database, you can add additional `MOVE` clauses to the restore query. You can also use SSMS for database recovery using `.bak` files. [Learn more](/sql/relational-databases/backup-restore/restore-a-database-backup-using-ssms?view=sql-server-ver16).
+If there are more than two files for the database, you can add additional `MOVE` clauses to the restore query. You can also use SSMS for database recovery using `.bak` files. [Learn more](/sql/relational-databases/backup-restore/restore-a-database-backup-using-ssms?view=sql-server-ver16&preserve-view=true).
>[!Note] >For large database recovery, we recommend you to use TSQL statements. If you want to relocate the specific database files, see the list of database files in the JSON format created during the **Restore as Files** operation.
The secondary region restore user experience will be similar to the primary regi
:::image type="content" source="./media/backup-azure-sql-database/backup-center-jobs-inline.png" alt-text="Screenshot showing the filtered Backup jobs." lightbox="./media/backup-azure-sql-database/backup-center-jobs-expanded.png"::: ++
+## Cross Subscription Restore
+
+Azure Backup now allows you to restore a SQL database to any subscription (as per the following Azure RBAC requirements) from the restore point. By default, Azure Backup restores to the same subscription where the restore points are available.
+
+With Cross Subscription Restore (CSR), you have the flexibility of restoring to any subscription and any vault under your tenant if restore permissions are available. By default, CSR is enabled on all Recovery Services vaults (existing and newly created vaults).
+
+>[!Note]
+>- You can trigger Cross Subscription Restore from Recovery Services vault.
+>- CSR is supported only for streaming-based backup and is not supported for snapshot-based backup.
+>- Cross Regional Restore (CRR) with CSR is not supported.
+
+**Azure RBAC requirements**
+
+| Operation type | Backup operator | Recovery Services vault | Alternate operator |
+| | | | |
+| Restore database or restore as files | `Virtual Machine Contributor` | Source VM that got backed up | Instead of a built-in role, you can consider a custom role which has the following permissions: <br><br> - `Microsoft.Compute/virtualMachines/write` <br> - `Microsoft.Compute/virtualMachines/read` |
+| | `Virtual Machine Contributor` | Target VM in which the database will be restored or files are created. | Instead of a built-in role, you can consider a custom role that has the following permissions: <br><br> - `Microsoft.Compute/virtualMachines/write` <br> - `Microsoft.Compute/virtualMachines/read` |
+| | `Backup Operator` | Target Recovery Services vault | |
+
+By default, CSR is enabled on the Recovery Services vault. To update the Recovery Services vault restore settings, go to **Properties** > **Cross Subscription Restore** and make the required changes.
+++ ## Next steps [Manage and monitor](manage-monitor-sql-database-backup.md) SQL Server databases that are backed up by Azure Backup.
backup Sap Hana Backup Support Matrix https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/sap-hana-backup-support-matrix.md
Title: SAP HANA Backup support matrix description: In this article, learn about the supported scenarios and limitations when you use Azure Backup to back up SAP HANA databases on Azure VMs. Previously updated : 06/27/2023 Last updated : 07/18/2023 --++ # Support matrix for backup of SAP HANA databases on Azure VMs
Azure Backup supports the backup of SAP HANA databases to Azure. This article su
| **HANA database size** | HANA databases of size <= 8 TB (this isn't the memory size of the HANA system) | | | **Backup types** | Full, Differential, Incremental and Log backups | Snapshots | | **Restore types** | Refer to the SAP HANA Note [1642148](https://launchpad.support.sap.com/#/notes/1642148) to learn about the supported restore types | |
+| **Cross Subscription Restore** | Supported via the Azure portal and Azure CLI. [Learn more](sap-hana-database-restore.md#cross-subscription-restore). | Not supported |
| **Backup limits** | Up to 8 TB of full backup size per SAP HANA instance (soft limit) | | | **Number of full backups per day** | One scheduled backup. <br><br> Three on-demand backups. <br><br> We recommend not to trigger more than three backups per day. However, to allow user retries in case of failed attempts, hard limit for on-demand backups is set to nine attempts. | | **HANA deployments** | HANA System Replication (HSR) | |
backup Sap Hana Database Restore https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/sap-hana-database-restore.md
Title: Restore SAP HANA databases on Azure VMs description: In this article, you'll learn how to restore SAP HANA databases that are running on Azure virtual machines. You can also use Cross Region Restore to restore your databases to a secondary region. Previously updated : 07/14/2023 Last updated : 07/18/2023 --++ # Restore SAP HANA databases on Azure VMs
To restore the backup data as files instead of a database, select **Restore as F
* `<LogFilesDir>`: The folder that contains the log backups, differential backups, and incremental backups. For Full BackUp Restore, because the log folder isn't created, add an empty directory. * `<PathToPlaceCatalogFile>`: The folder where the generated catalog file must be placed.
- d. Restore by using the newly generated catalog file through HANA Studio, or run the SAP HANA HDBSQL tool restore query with this newly generated catalog. The HDBSQL queries are listed here:
+ d. You can restore by using the newly generated catalog file through HANA Studio, or you can run the SAP HANA HDBSQL tool restore query with this newly generated catalog. The HDBSQL queries are listed here:
* To open the HDBSQL prompt, run the following command:
The secondary region restore user experience is similar to the primary region re
:::image type="content" source="./media/sap-hana-db-restore/hana-view-jobs-inline.png" alt-text="Screenshot that shows filtered backup jobs." lightbox="./media/sap-hana-db-restore/hana-view-jobs-expanded.png":::
+## Cross Subscription Restore
+
+Azure Backup now allows you to restore an SAP HANA database to any subscription (as per the following Azure RBAC requirements) from the restore point. By default, Azure Backup restores to the same subscription where the restore points are available.
+
+With Cross Subscription Restore (CSR), you have the flexibility of restoring to any subscription and any vault under your tenant if restore permissions are available. By default, CSR is enabled on all Recovery Services vaults (existing and newly created vaults).
+
+>[!Note]
+>- You can trigger Cross Subscription Restore from Recovery Services vault.
+>- CSR is supported only for streaming/Backint-based backups and is not supported for snapshot-based backup.
+>- Cross Regional Restore (CRR) with CSR is not supported.
+
+**Azure RBAC requirements**
+
+| Operation type | Backup operator | Recovery Services vault | Alternate operator |
+| | | | |
+| Restore database or restore as files | `Virtual Machine Contributor` | Source VM that got backed up | Instead of a built-in role, you can consider a custom role which has the following permissions: <br><br> - `Microsoft.Compute/virtualMachines/write` <br> - `Microsoft.Compute/virtualMachines/read` |
+| | `Virtual Machine Contributor` | Target VM in which the database will be restored or files are created. | Instead of a built-in role, you can consider a custom role that has the following permissions: <br><br> - `Microsoft.Compute/virtualMachines/write` <br> - `Microsoft.Compute/virtualMachines/read` |
+| | `Backup Operator` | Target Recovery Services vault | |
+
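To illustrate how the role assignments in this table could be granted with the Azure CLI, here's a rough sketch; the operator object ID, VM resource IDs, and vault resource ID are placeholders for your own identities and resources:

```azurecli
# Virtual Machine Contributor on the source (backed-up) VM and on the target VM.
az role assignment create \
    --assignee <operator-object-id> \
    --role "Virtual Machine Contributor" \
    --scope /subscriptions/<subscription-id>/resourceGroups/<vm-rg>/providers/Microsoft.Compute/virtualMachines/<source-or-target-vm>

# Backup Operator on the target Recovery Services vault.
az role assignment create \
    --assignee <operator-object-id> \
    --role "Backup Operator" \
    --scope /subscriptions/<target-subscription-id>/resourceGroups/<vault-rg>/providers/Microsoft.RecoveryServices/vaults/<target-vault>
```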
+By default, CSR is enabled on the Recovery Services vault. To update the Recovery Services vault restore settings, go to **Properties** > **Cross Subscription Restore** and make the required changes.
++
+**Cross Subscription Restore using Azure CLI**
+
+```azurecli
+az backup vault create
+
+```
+
+Add the parameter `cross-subscription-restore-state` that enables you to set the CSR state of the vault during vault creation and updating.
+
+```azurecli
+az backup recoveryconfig show
+
+```
+
+Add the parameter `--target-subscription-id` that enables you to provide the target subscription as the input while triggering Cross Subscription Restore for SQL or HANA datasources.
+
+**Example**:
+
+```azurecli
+ az backup vault create -g {rg_name} -n {vault_name} -l {location} --cross-subscription-restore-state Disable
+ az backup recoveryconfig show --restore-mode alternateworkloadrestore --backup-management-type azureworkload -r {rp} --target-container-name {target_container} --target-item-name {target_item} --target-resource-group {target_rg} --target-server-name {target_server} --target-server-type SQLInstance --target-subscription-id {target_subscription} --target-vault-name {target_vault} --workload-type SQLDataBase --ids {source_item_id}
+
+```
+ ## Next steps - [Manage SAP HANA databases by using Azure Backup](sap-hana-db-manage.md)
backup Sql Support Matrix https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/sql-support-matrix.md
Title: Azure Backup support matrix for SQL Server Backup in Azure VMs description: Provides a summary of support settings and limitations when backing up SQL Server in Azure VMs with the Azure Backup service. Previously updated : 06/19/2023 Last updated : 07/18/2023 --++ # Support matrix for SQL Server Backup in Azure VMs
You can use Azure Backup to back up SQL Server databases in Azure VMs hosted on
**Supported SQL Server versions** | SQL Server 2019, SQL Server 2017 as detailed on the [Search product lifecycle page](https://support.microsoft.com/lifecycle/search?alpha=SQL%20server%202017), SQL Server 2016 and SPs as detailed on the [Search product lifecycle page](https://support.microsoft.com/lifecycle/search?alpha=SQL%20server%202016%20service%20pack), SQL Server 2014, SQL Server 2012, SQL Server 2008 R2, SQL Server 2008 <br/><br/> Enterprise, Standard, Web, Developer, Express.<br><br>Express Local DB versions aren't supported. **Supported .NET versions** | .NET Framework 4.5.2 or later installed on the VM **Supported deployments** | SQL Marketplace Azure VMs and non-Marketplace (SQL Server that is manually installed) VMs are supported. Support for standalone instances is always on [availability groups](backup-sql-server-on-availability-groups.md).
+**Cross Region Restore** | Supported. [Learn more](restore-sql-database-azure-vm.md#cross-region-restore).
+**Cross Subscription Restore** | Supported via the Azure portal and Azure CLI. [Learn more](restore-sql-database-azure-vm.md#cross-subscription-restore).
+ ## Feature considerations and limitations
backup Tutorial Sap Hana Backup Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/tutorial-sap-hana-backup-cli.md
# Tutorial: Back up SAP HANA databases in an Azure VM using Azure CLI
-This tutorial describes how to back up SAP HANA database instance and SAP HANA System Replication (HSR) instance (preview) using Azure CLI.
+This tutorial describes how to back up an SAP HANA database instance and an SAP HANA System Replication (HSR) instance using Azure CLI.
Azure CLI is used to create and manage Azure resources from the Command Line or through scripts. This documentation details how to back up an SAP HANA database and trigger on-demand backups - all using Azure CLI. You can also perform these steps using the [Azure portal](./backup-azure-sap-hana-database.md).
Location Name ResourceGroup
westus2 saphanaVault saphanaResourceGroup ```
-# [HSR (preview)](#tab/hsr)
+# [HSR](#tab/hsr)
To create the Recovery Services vault for HSR instance protection, run the following command:
To register and protect database instance, follow these steps:
> The column "name" in the above output refers to the container name. This container name will be used in the next sections to enable backups and trigger them. In this case, it's *VMAppContainer;Compute;saphanaResourceGroup;saphanaVM*.
-# [HSR (preview)](#tab/hsr)
+# [HSR](#tab/hsr)
To register and protect database instance, follow these steps:
To get container name, run the following command. [Learn about this CLI command]
```
-# [HSR (preview)](#tab/hsr)
+# [HSR](#tab/hsr)
To enable database instance backup, follow these steps:
The response will give you the job name. This job name can be used to track the
>[!NOTE] >Log backups are automatically triggered and managed by SAP HANA internally.
-# [HSR (preview)](#tab/hsr)
+# [HSR](#tab/hsr)
To run an on-demand backup, run the following command:
backup Tutorial Sap Hana Restore Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/tutorial-sap-hana-restore-cli.md
Title: Tutorial - SAP HANA DB restore on Azure using CLI description: In this tutorial, learn how to restore SAP HANA databases running on an Azure VM from an Azure Backup Recovery Services vault using Azure CLI. Previously updated : 06/20/2023 Last updated : 07/18/2023 --++ # Tutorial: Restore SAP HANA databases in an Azure VM using Azure CLI
-This tutorial describes how to restore SAP HANA database instance and SAP HANA System Replication (HSR) instance (preview) using Azure CLI.
+This tutorial describes how to restore an SAP HANA database instance and an SAP HANA System Replication (HSR) instance using Azure CLI.
Azure CLI is used to create and manage Azure resources from the command line or through scripts. This documentation details how to restore a backed-up SAP HANA database on an Azure VM - using Azure CLI. You can also perform these steps using the [Azure portal](./sap-hana-db-restore.md).
Use [Azure Cloud Shell](tutorial-sap-hana-backup-cli.md) to run CLI commands.
This tutorial assumes you have an SAP HANA database running on Azure VM that's backed-up using Azure Backup. If you've used [Back up an SAP HANA database in Azure using CLI](tutorial-sap-hana-backup-cli.md) to back up your SAP HANA database, then you're using the following resources:
-* A resource group named *saphanaResourceGroup*
-* A vault named *saphanaVault*
-* Protected container named `VMAppContainer;Compute;saphanaResourceGroup;saphanaVM`
-* Backed-up database/item named `saphanadatabase;hxe;hxe`
-* Resources in the *westus2* region
+* A resource group named `saphanaResourceGroup`.
+* A vault named `saphanaVault`.
+* Protected container named `VMAppContainer;Compute;saphanaResourceGroup;saphanaVM`.
+* Backed-up database/item named `saphanadatabase;hxe;hxe`.
+* Resources in the `westus2` region.
For more information on the supported configurations and scenarios, see the [SAP HANA backup support matrix](sap-hana-backup-support-matrix.md).
DefaultRangeRecoveryPoint AzureWorkload
As you can see, the list above contains three recovery points: one each for full, differential, and log backup.
-# [HSR (preview)](#tab/hsr)
+# [HSR](#tab/hsr)
To view the available recovery points, run the following command:
Name Resource
The response will give you the job name. This job name can be used to track the job status using [az backup job show](/cli/azure/backup/job#az-backup-job-show) cmdlet.
-# [HSR (preview)](#tab/hsr)
+# [HSR](#tab/hsr)
To start the restore operation, run the following command:
Move these restored files to the SAP HANA server where you want to restore them
* `<LogFilesDir>` - the folder that contains the log backups, differential and incremental backups (if any) * `<BackupIdFromJsonFile>` - the **BackupId** extracted in **Step 3**
+## Cross Subscription Restore
+
+With Cross Subscription Restore (CSR), you have the flexibility of restoring to any subscription and any vault under your tenant if restore permissions are available. By default, CSR is enabled on all Recovery Services vaults (existing and newly created vaults).
+
+>[!Note]
+>- You can trigger Cross Subscription Restore from Recovery Services vault.
+>- CSR is supported only for streaming/Backint-based backups and is not supported for snapshot-based backup.
+>- Cross Regional Restore (CRR) with CSR is not supported.
+
+```azurecli
+az backup vault create
+
+```
+
+Add the parameter `cross-subscription-restore-state` that enables you to set the CSR state of the vault during vault creation and updating.
+
+```azurecli
+az backup recoveryconfig show
+
+```
+
+Add the parameter `--target-subscription-id` that enables you to provide the target subscription as the input while triggering Cross Subscription Restore for SQL or HANA datasources.
+
+**Example**:
+
+```azurecli
+ az backup vault create -g {rg_name} -n {vault_name} -l {location} --cross-subscription-restore-state Disable
+ az backup recoveryconfig show --restore-mode alternateworkloadrestore --backup-management-type azureworkload -r {rp} --target-container-name {target_container} --target-item-name {target_item} --target-resource-group {target_rg} --target-server-name {target_server} --target-server-type SQLInstance --target-subscription-id {target_subscription} --target-vault-name {target_vault} --workload-type SQLDataBase --ids {source_item_id}
+
+```
+ ## Next steps * To learn how to manage SAP HANA databases that are backed up using Azure CLI, continue to the tutorial [Manage an SAP HANA database in Azure VM using CLI](tutorial-sap-hana-backup-cli.md)
bastion Shareable Link https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/bastion/shareable-link.md
By default, users in your org will have only read access to shared links. If a u
* Shareable Links does not support connection to on-premises or non-Azure VMs and VMSS.  * The Standard SKU is required for this feature. * Bastion only supports 50 requests, including creates and deletes, for shareable links at a time.
+* Bastion only supports 500 shareable links per Bastion resource.
## Prerequisites
batch Batch Docker Container Workloads https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/batch/batch-docker-container-workloads.md
Title: Container workloads on Azure Batch description: Learn how to run and scale apps from container images on Azure Batch. Create a pool of compute nodes that support running container tasks. Previously updated : 05/01/2023 Last updated : 07/14/2023 ms.devlang: csharp, python
You should be familiar with container concepts and how to create a Batch pool an
- **A supported virtual machine (VM) image**: Containers are only supported in pools created with the Virtual Machine Configuration, from a supported image (listed in the next section). If you provide a custom image, see the considerations in the following section and the requirements in [Use a managed image to create a custom image pool](batch-custom-images.md).
+> [!NOTE]
+> Starting with the following Batch SDK versions:
+> - Batch .NET SDK version 16.0.0
+> - Batch Python SDK version 14.0.0
+> - Batch Java SDK version 11.0.0
+> - Batch Node.js SDK version 11.0.0
+> the `containerConfiguration` property requires the `Type` property to be passed. The supported values are `ContainerType.DockerCompatible` and `ContainerType.CriCompatible`.
+ Keep in mind the following limitations: - Batch provides remote direct memory access (RDMA) support only for containers that run on Linux pools.
batch Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/batch/policy-reference.md
Title: Built-in policy definitions for Azure Batch description: Lists Azure Policy built-in policy definitions for Azure Batch. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 07/06/2023 Last updated : 07/18/2023
cognitive-services Batch Inference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Anomaly-Detector/How-to/batch-inference.md
- Title: Trigger batch inference with trained model-
-description: Trigger batch inference with trained model
------ Previously updated : 11/01/2022---
-# Trigger batch inference with trained model
-
-You could choose the batch inference API, or the streaming inference API for detection.
-
-| Batch inference API | Streaming inference API |
-| - | - |
-| More suitable for batch use cases when customers don't need to get inference results immediately and want to detect anomalies and get results over a longer time period.| When customers want to get inference immediately and want to detect multivariate anomalies in real-time, this API is recommended. Also suitable for customers having difficulties conducting the previous compressing and uploading process for inference. |
-
-|API Name| Method | Path | Description |
-| | - | -- | |
-|**Batch Inference**| POST | `{endpoint}`/anomalydetector/v1.1/multivariate/models/`{modelId}`: detect-batch | Trigger an asynchronous inference with `modelId`, which works in a batch scenario |
-|**Get Batch Inference Results**| GET | `{endpoint}`/anomalydetector/v1.1/multivariate/detect-batch/`{resultId}` | Get batch inference results with `resultId` |
-|**Streaming Inference**| POST | `{endpoint}`/anomalydetector/v1.1/multivariate/models/`{modelId}`: detect-last | Trigger a synchronous inference with `modelId`, which works in a streaming scenario |
-
-## Trigger a batch inference
-
-To perform batch inference, provide the blob URL containing the inference data, the start time, and end time. For inference data volume, at least `1 sliding window` length and at most **20000** timestamps.
-
-To get better performance, we recommend you send out no more than 150,000 data points per batch inference. *(Data points = Number of variables * Number of timestamps)*
-
-This inference is asynchronous, so the results aren't returned immediately. Notice that you need to save in a variable the link of the results in the **response header** which contains the `resultId`, so that you may know where to get the results afterwards.
-
-Failures are usually caused by model issues or data issues. You can't perform inference if the model isn't ready or the data link is invalid. Make sure that the training data and inference data are consistent, meaning they should be **exactly** the same variables but with different timestamps. More variables, fewer variables, or inference with a different set of variables won't pass the data verification phase and errors will occur. Data verification is deferred so that you'll get error messages only when you query the results.
-
-### Request
-
-A sample request:
-
-```json
-{
- "dataSource": "{{dataSource}}",
- "topContributorCount": 3,
- "startTime": "2021-01-02T12:00:00Z",
- "endTime": "2021-01-03T00:00:00Z"
-}
-```
-#### Required parameters
-
-* **dataSource**: This is the Blob URL that linked to your folder or CSV file located in Azure Blob Storage. The schema should be the same as your training data, either OneTable or MultiTable, and the variable number and name should be exactly the same as well.
-* **startTime**: The start time of data used for inference. If it's earlier than the actual earliest timestamp in the data, the actual earliest timestamp will be used as the starting point.
-* **endTime**: The end time of data used for inference, which must be later than or equal to `startTime`. If `endTime` is later than the actual latest timestamp in the data, the actual latest timestamp will be used as the ending point.
-
-#### Optional parameters
-
-* **topContributorCount**: This is a number that you could specify N from **1 to 30**, which will give you the details of top N contributed variables in the anomaly results. For example, if you have 100 variables in the model, but you only care the top five contributed variables in detection results, then you should fill this field with 5. The default number is **10**.
-
-### Response
-
-A sample response:
-
-```json
-{
- "resultId": "aaaaaaaa-5555-1111-85bb-36f8cdfb3365",
- "summary": {
- "status": "CREATED",
- "errors": [],
- "variableStates": [],
- "setupInfo": {
- "dataSource": "https://mvaddataset.blob.core.windows.net/sample-onetable/sample_data_5_3000.csv",
- "topContributorCount": 3,
- "startTime": "2021-01-02T12:00:00Z",
- "endTime": "2021-01-03T00:00:00Z"
- }
- },
- "results": []
-}
-```
-* **resultId**: This is the information that you'll need to trigger **Get Batch Inference Results API**.
-* **status**: This indicates whether you trigger a batch inference task successfully. If you see **CREATED**, then you don't need to trigger this API again, you should use the **Get Batch Inference Results API** to get the detection status and anomaly results.
-
-## Get batch detection results
-
-There's no content in the request body, what's required only is to put the resultId in the API path, which will be in a format of:
-**{{endpoint}}anomalydetector/v1.1/multivariate/detect-batch/{{resultId}}**
-
-### Response
-
-A sample response:
-
-```json
-{
- "resultId": "aaaaaaaa-5555-1111-85bb-36f8cdfb3365",
- "summary": {
- "status": "READY",
- "errors": [],
- "variableStates": [
- {
- "variable": "series_0",
- "filledNARatio": 0.0,
- "effectiveCount": 721,
- "firstTimestamp": "2021-01-02T12:00:00Z",
- "lastTimestamp": "2021-01-03T00:00:00Z"
- },
- {
- "variable": "series_1",
- "filledNARatio": 0.0,
- "effectiveCount": 721,
- "firstTimestamp": "2021-01-02T12:00:00Z",
- "lastTimestamp": "2021-01-03T00:00:00Z"
- },
- {
- "variable": "series_2",
- "filledNARatio": 0.0,
- "effectiveCount": 721,
- "firstTimestamp": "2021-01-02T12:00:00Z",
- "lastTimestamp": "2021-01-03T00:00:00Z"
- },
- {
- "variable": "series_3",
- "filledNARatio": 0.0,
- "effectiveCount": 721,
- "firstTimestamp": "2021-01-02T12:00:00Z",
- "lastTimestamp": "2021-01-03T00:00:00Z"
- },
- {
- "variable": "series_4",
- "filledNARatio": 0.0,
- "effectiveCount": 721,
- "firstTimestamp": "2021-01-02T12:00:00Z",
- "lastTimestamp": "2021-01-03T00:00:00Z"
- }
- ],
- "setupInfo": {
- "dataSource": "https://mvaddataset.blob.core.windows.net/sample-onetable/sample_data_5_3000.csv",
- "topContributorCount": 3,
- "startTime": "2021-01-02T12:00:00Z",
- "endTime": "2021-01-03T00:00:00Z"
- }
- },
- "results": [
- {
- "timestamp": "2021-01-02T12:00:00Z",
- "value": {
- "isAnomaly": false,
- "severity": 0.0,
- "score": 0.3377174139022827,
- "interpretation": []
- },
- "errors": []
- },
- {
- "timestamp": "2021-01-02T12:01:00Z",
- "value": {
- "isAnomaly": false,
- "severity": 0.0,
- "score": 0.24631972312927247,
- "interpretation": []
- },
- "errors": []
- },
- {
- "timestamp": "2021-01-02T12:02:00Z",
- "value": {
- "isAnomaly": false,
- "severity": 0.0,
- "score": 0.16678125858306886,
- "interpretation": []
- },
- "errors": []
- },
- {
- "timestamp": "2021-01-02T12:03:00Z",
- "value": {
- "isAnomaly": false,
- "severity": 0.0,
- "score": 0.23783254623413086,
- "interpretation": []
- },
- "errors": []
- },
- {
- "timestamp": "2021-01-02T12:04:00Z",
- "value": {
- "isAnomaly": false,
- "severity": 0.0,
- "score": 0.24804904460906982,
- "interpretation": []
- },
- "errors": []
- },
- {
- "timestamp": "2021-01-02T12:05:00Z",
- "value": {
- "isAnomaly": false,
- "severity": 0.0,
- "score": 0.11487171649932862,
- "interpretation": []
- },
- "errors": []
- },
- {
- "timestamp": "2021-01-02T12:06:00Z",
- "value": {
- "isAnomaly": true,
- "severity": 0.32980116622958083,
- "score": 0.5666913509368896,
- "interpretation": [
- {
- "variable": "series_2",
- "contributionScore": 0.4130149677604554,
- "correlationChanges": {
- "changedVariables": [
- "series_0",
- "series_4",
- "series_3"
- ]
- }
- },
- {
- "variable": "series_3",
- "contributionScore": 0.2993065960239115,
- "correlationChanges": {
- "changedVariables": [
- "series_0",
- "series_4",
- "series_3"
- ]
- }
- },
- {
- "variable": "series_1",
- "contributionScore": 0.287678436215633,
- "correlationChanges": {
- "changedVariables": [
- "series_0",
- "series_4",
- "series_3"
- ]
- }
- }
- ]
- },
- "errors": []
- }
- ]
-}
-```
-
-The response contains the result status, variable information, inference parameters, and inference results.
-
-* **variableStates**: This lists the information of each variable in the inference request.
-* **setupInfo**: This is the request body submitted for this inference.
-* **results**: This contains the detection results. There are three typical types of detection results.
-
-* Error code `InsufficientHistoricalData`. This usually happens only with the first few timestamps because the model inferences data in a window-based manner and it needs historical data to make a decision. For the first few timestamps, there's insufficient historical data, so inference can't be performed on them. In this case, the error message can be ignored.
-
-* **isAnomaly**: `false` indicates the current timestamp isn't an anomaly.`true` indicates an anomaly at the current timestamp.
- * `severity` indicates the relative severity of the anomaly and for abnormal data it's always greater than 0.
- * `score` is the raw output of the model on which the model makes a decision. `severity` is a derived value from `score`. Every data point has a `score`.
-
-* **interpretation**: This field only appears when a timestamp is detected as anomalous, which contains `variables`, `contributionScore`, `correlationChanges`.
-
-* **contributors**: This is a list containing the contribution score of each variable. Higher contribution scores indicate higher possibility of the root cause. This list is often used for interpreting anomalies and diagnosing the root causes.
-
-* **correlationChanges**: This field only appears when a timestamp is detected as anomalous, which included in interpretation. It contains `changedVariables` and `changedValues` that interpret which correlations between variables changed.
-
-* **changedVariables**: This field will show which variables that have significant change in correlation with `variable`. The variables in this list are ranked by the extent of correlation changes.
-
-> [!NOTE]
-> A common pitfall is taking all data points with `isAnomaly`=`true` as anomalies. That may end up with too many false positives.
-> You should use both `isAnomaly` and `severity` (or `score`) to sift out anomalies that are not severe and (optionally) use grouping to check the duration of the anomalies to suppress random noise.
-> Please refer to the [FAQ](../concepts/best-practices-multivariate.md#faq) in the best practices document for the difference between `severity` and `score`.
-
-## Next steps
-
-* [Best practices of multivariate anomaly detection](../concepts/best-practices-multivariate.md)
cognitive-services Create Resource https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Anomaly-Detector/How-to/create-resource.md
- Title: Create an Anomaly Detector resource-
-description: Create an Anomaly Detector resource
------ Previously updated : 11/01/2022----
-# Create and Anomaly Detector resource
-
-Anomaly Detector service is a cloud-based Cognitive Service that uses machine-learning models to detect anomalies in your time series data. Here, you'll learn how to create an Anomaly Detector resource in the Azure portal.
-
-## Create an Anomaly Detector resource in Azure portal
-
-1. Create an Azure subscription if you don't have one - [Create one for free](https://azure.microsoft.com/free/cognitive-services)
-1. Once you have your Azure subscription, [create an Anomaly Detector resource](https://portal.azure.com/#create/Microsoft.CognitiveServicesAnomalyDetector) in the Azure portal, and fill out the following fields:
-
- - **Subscription**: Select your current subscription.
- - **Resource group**: The [Azure resource group](/azure/cloud-adoption-framework/govern/resource-consistency/resource-access-management#what-is-an-azure-resource-group) that will contain your resource. You can create a new group or add it to a pre-existing group.
- - **Region**: Select your local region, see supported [Regions](../regions.md).
- - **Name**: Enter a name for your resource. We recommend using a descriptive name, for example *multivariate-msft-test*.
- - **Pricing tier**: The cost of your resource depends on the pricing tier you choose and your usage. For more information, see [pricing details](https://azure.microsoft.com/pricing/details/cognitive-services/anomaly-detector/). You can use the free pricing tier (F0) to try the service, and upgrade later to a paid tier for production.
-
-> [!div class="mx-imgBorder"]
-> ![Screenshot of create a resource user experience](../media/create-resource/create-resource.png)
-
-1. Select **Identity** in the banner above and make sure you set the status as **On** which enables Anomaly Detector to visit your data in Azure in a secure way, then select **Review + create.**
-
-> [!div class="mx-imgBorder"]
-> ![Screenshot of enable managed identity](../media/create-resource/enable-managed-identity.png)
-
-1. Wait a few seconds until validation passed, and select **Create** button from the bottom-left corner.
-1. After you select create, you'll be redirected to a new page that says Deployment in progress. After a few seconds, you'll see a message that says, Your deployment is complete, then select **Go to resource**.
-
-## Get Endpoint URL and keys
-
-In your resource, select **Keys and Endpoint** on the left navigation bar, copy the **key** (both key1 and key2 will work) and **endpoint** values from your Anomaly Detector resource.. You'll need the key and endpoint values to connect your application to the Anomaly Detector API.
-
-> [!div class="mx-imgBorder"]
-> ![Screenshot of copy key and endpoint user experience](../media/create-resource/copy-key-endpoint.png)
-
-That's it! You could start preparing your data for further steps!
-
-## Next steps
-
-* [Join us to get more supports!](https://aka.ms/adadvisorsjoin)
cognitive-services Deploy Anomaly Detection On Container Instances https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Anomaly-Detector/How-to/deploy-anomaly-detection-on-container-instances.md
- Title: Run Anomaly Detector Container in Azure Container Instances-
-description: Deploy the Anomaly Detector container to an Azure Container Instance, and test it in a web browser.
------- Previously updated : 04/01/2020---
-# Deploy an Anomaly Detector univariate container to Azure Container Instances
-
-Learn how to deploy the Cognitive Services [Anomaly Detector](../anomaly-detector-container-howto.md) container to Azure [Container Instances](../../../container-instances/index.yml). This procedure demonstrates the creation of an Anomaly Detector resource. Then we discuss pulling the associated container image. Finally, we highlight the ability to exercise the orchestration of the two from a browser. Using containers can shift the developers' attention away from managing infrastructure to instead focusing on application development.
-----
-## Next steps
-
-* Review [Install and run containers](../anomaly-detector-container-configuration.md) for pulling the container image and running the container
-* Review [Configure containers](../anomaly-detector-container-configuration.md) for configuration settings
-* [Learn more about Anomaly Detector API service](https://go.microsoft.com/fwlink/?linkid=2080698&clcid=0x409)
cognitive-services Deploy Anomaly Detection On Iot Edge https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Anomaly-Detector/How-to/deploy-anomaly-detection-on-iot-edge.md
- Title: Run Anomaly Detector on IoT Edge
-description: Deploy the Anomaly Detector module to IoT Edge.
- Previously updated : 12/03/2020
-# Deploy an Anomaly Detector univariate module to IoT Edge
-
-Learn how to deploy the Cognitive Services [Anomaly Detector](../anomaly-detector-container-howto.md) module to an IoT Edge device. Once it's deployed into IoT Edge, the module runs in IoT Edge together with other modules as container instances. It exposes the exact same APIs as an Anomaly Detector container instance running in a standard docker container environment.
-
-## Prerequisites
-
-* An Azure subscription. If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free) before you begin.
-* Install the [Azure CLI](/cli/azure/install-azure-cli).
-* An [IoT Hub](../../../iot-hub/iot-hub-create-through-portal.md) and an [IoT Edge](../../../iot-edge/quickstart-linux.md) device.
--
-## Deploy the Anomaly Detection module to the edge
-
-1. In the Azure portal, enter **Anomaly Detector on IoT Edge** into the search and open the Azure Marketplace result.
-2. It will take you to the Azure portal's [Target Devices for IoT Edge Module page](https://portal.azure.com/#create/azure-cognitive-service.edge-anomaly-detector). Provide the following required information.
-
- 1. Select your subscription.
-
- 1. Select your IoT Hub.
-
- 1. Select **Find device** and find an IoT Edge device.
-
-3. Select the **Create** button.
-
-4. Select the **AnomalyDetectoronIoTEdge** module.
-
- :::image type="content" source="../media/deploy-anomaly-detection-on-iot-edge/iot-edge-modules.png" alt-text="Image of IoT Edge Modules user interface with AnomalyDetectoronIoTEdge link highlighted with a red box to indicate that this is the item to select.":::
-
-5. Navigate to **Environment Variables** and provide the following information.
-
- 1. Keep the value **accept** for **Eula**.
-
- 1. Fill out **Billing** with your Cognitive Services endpoint.
-
- 1. Fill out **ApiKey** with your Cognitive Services API key.
-
- :::image type="content" source="../media/deploy-anomaly-detection-on-iot-edge/environment-variables.png" alt-text="Environment variables with red boxes around the areas that need values to be filled in for endpoint and API key":::
-
-6. Select **Update**
-
-7. Select **Next: Routes** to define your route. You define all messages from all modules to go to Azure IoT Hub. To learn how to declare a route, see [Establish routes in IoT Edge](../../../iot-edge/module-composition.md?view=iotedge-2020-11&preserve-view=true).
-
-8. Select **Next: Review + create**. You can preview the JSON file that defines all the modules that get deployed to your IoT Edge device.
-
-9. Select **Create** to start the module deployment.
-
-10. After you complete module deployment, you'll go back to the IoT Edge page of your IoT hub. Select your device from the list of IoT Edge devices to see its details.
-
-11. Scroll down and see the modules listed. Check that the runtime status is running for your new module.
-
-To troubleshoot the runtime status of your IoT Edge device, consult the [troubleshooting guide](../../../iot-edge/troubleshoot.md).
-
-## Test Anomaly Detector on an IoT Edge device
-
-You'll make an HTTP call to the Azure IoT Edge device that has the Azure Cognitive Services container running. The container provides REST-based endpoint APIs. Use the host, `http://<your-edge-device-ipaddress>:5000`, for module APIs.
-
-Alternatively, you can [create a module client by using the Anomaly Detector client library](../quickstarts/client-libraries.md?tabs=linux&pivots=programming-language-python) on the Azure IoT Edge device, and then call the running Azure Cognitive Services container on the edge. Use the host endpoint `http://<your-edge-device-ipaddress>:5000` and leave the host key empty.
-
-If your edge device does not already allow inbound communication on port 5000, you will need to create a new **inbound port rule**.
-
-For an Azure VM, this can be set under **Virtual Machine** > **Settings** > **Networking** > **Inbound port rule** > **Add inbound port rule**.
-
-There are several ways to validate that the module is running. Locate the *External IP* address and exposed port of the edge device in question, and open your favorite web browser. Use the various request URLs below to validate that the container is running. The example request URLs listed below use `http://<your-edge-device-ipaddress>:5000`, but your specific container may vary. Keep in mind that you need to use your edge device's *External IP* address.
-
-| Request URL | Purpose |
-|:--|:--|
-| `http://<your-edge-device-ipaddress>:5000/` | The container provides a home page. |
-| `http://<your-edge-device-ipaddress>:5000/status` | Also requested with GET, this verifies if the api-key used to start the container is valid without causing an endpoint query. This request can be used for Kubernetes [liveness and readiness probes](https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-probes/). |
-| `http://<your-edge-device-ipaddress>:5000/swagger` | The container provides a full set of documentation for the endpoints and a **Try it out** feature. With this feature, you can enter your settings into a web-based HTML form and make the query without having to write any code. After the query returns, an example CURL command is provided to demonstrate the HTTP headers and body format that's required. |
-
-![Container's home page](../../../../includes/media/cognitive-services-containers-api-documentation/container-webpage.png)
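-
-For example, you can check the `/status` endpoint from a script instead of a browser. This is a minimal sketch; the device IP address is a placeholder and the `requests` package is an assumption.
-
-```python
-# Verify the module is reachable and its API key is valid by calling /status.
-import requests
-
-edge_device_ip = "<your-edge-device-ipaddress>"  # replace with your device's external IP
-
-response = requests.get(f"http://{edge_device_ip}:5000/status", timeout=10)
-print(response.status_code, response.text)
-```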
-
-## Next steps
-
-* Review [Install and run containers](../anomaly-detector-container-configuration.md) for pulling the container image and running the container
-* Review [Configure containers](../anomaly-detector-container-configuration.md) for configuration settings
-* [Learn more about Anomaly Detector API service](https://go.microsoft.com/fwlink/?linkid=2080698&clcid=0x409)
cognitive-services Identify Anomalies https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Anomaly-Detector/How-to/identify-anomalies.md
- Title: How to use the Anomaly Detector API on your time series data
-description: Learn how to detect anomalies in your data either as a batch, or on streaming data.
- Previously updated : 10/01/2019
-# How to: Use the Anomaly Detector univariate API on your time series data
-
-The [Anomaly Detector API](https://westus2.dev.cognitive.microsoft.com/docs/services/AnomalyDetector/operations/post-timeseries-entire-detect) provides two methods of anomaly detection. You can either detect anomalies as a batch throughout your time series, or detect the anomaly status of the latest data point as your data is generated. The detection model returns anomaly results along with each data point's expected value, and the upper and lower anomaly detection boundaries. You can use these values to visualize the range of normal values, and anomalies in the data.
-
-## Anomaly detection modes
-
-The Anomaly Detector API provides two detection modes: batch and streaming.
-
-> [!NOTE]
-> The following request URLs must be combined with the appropriate endpoint for your subscription. For example:
-> `https://<your-custom-subdomain>.api.cognitive.microsoft.com/anomalydetector/v1.0/timeseries/entire/detect`
--
-### Batch detection
-
-To detect anomalies throughout a batch of data points over a given time range, use the following request URI with your time series data:
-
-`/timeseries/entire/detect`.
-
-When you send your time series data all at once, the API generates a model using the entire series and analyzes each data point with it.
-
-### Streaming detection
-
-To continuously detect anomalies on streaming data, use the following request URI with your latest data point:
-
-`/timeseries/last/detect`.
-
-When you send new data points as you generate them, you can monitor your data in real time. A model is generated with the data points you send, and the API determines whether the latest point in the time series is an anomaly.
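-
-For example, a streaming-style call might POST the latest window of points to the `last/detect` path. This is a hedged sketch rather than the full quickstart: the `requests` package, environment variable names, and the short illustrative series are assumptions, and in practice you send your real, recent history.
-
-```python
-# Sketch of a streaming detection call against the univariate API (v1.0).
-import os
-
-import requests
-
-endpoint = os.environ["ANOMALY_DETECTOR_ENDPOINT"].rstrip("/")
-key = os.environ["ANOMALY_DETECTOR_KEY"]
-
-body = {
-    "granularity": "hourly",
-    "series": [
-        {"timestamp": "2023-01-01T00:00:00Z", "value": 100.0},
-        {"timestamp": "2023-01-01T01:00:00Z", "value": 101.5},
-        # ...the rest of your recent history...
-        {"timestamp": "2023-01-01T23:00:00Z", "value": 250.0},
-    ],
-}
-
-response = requests.post(
-    f"{endpoint}/anomalydetector/v1.0/timeseries/last/detect",
-    headers={"Ocp-Apim-Subscription-Key": key},
-    json=body,
-)
-response.raise_for_status()
-print(response.json())
-```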
-
-## Adjusting lower and upper anomaly detection boundaries
-
-By default, the upper and lower boundaries for anomaly detection are calculated using `expectedValue`, `upperMargin`, and `lowerMargin`. If you require different boundaries, we recommend applying a `marginScale` to `upperMargin` or `lowerMargin`. The boundaries would be calculated as follows:
-
-|Boundary |Calculation |
-|--|--|
-|`upperBoundary` | `expectedValue + (100 - marginScale) * upperMargin` |
-|`lowerBoundary` | `expectedValue - (100 - marginScale) * lowerMargin` |
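-
-As a worked example of these formulas (a minimal sketch; the function name and sample numbers are illustrative), a `marginScale` of 95 multiplies each margin by (100 - 95) = 5 to form the band around the expected value:
-
-```python
-# Compute detection boundaries from the values returned with each data point.
-def detection_boundaries(expected_value, upper_margin, lower_margin, margin_scale):
-    upper_boundary = expected_value + (100 - margin_scale) * upper_margin
-    lower_boundary = expected_value - (100 - margin_scale) * lower_margin
-    return lower_boundary, upper_boundary
-
-# Example: expectedValue 32.2 with margins of 0.5 at marginScale 95.
-print(detection_boundaries(32.2, 0.5, 0.5, 95))  # (29.7, 34.7)
-```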
-
-The following examples show an Anomaly Detector API result at different sensitivities.
-
-### Example with sensitivity at 99
-
-![Default Sensitivity](../media/sensitivity_99.png)
-
-### Example with sensitivity at 95
-
-![95 Sensitivity](../media/sensitivity_95.png)
-
-### Example with sensitivity at 85
-
-![85 Sensitivity](../media/sensitivity_85.png)
-
-## Next Steps
-
-* [What is the Anomaly Detector API?](../overview.md)
-* [Quickstart: Detect anomalies in your time series data using the Anomaly Detector](../quickstarts/client-libraries.md)
cognitive-services Postman https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Anomaly-Detector/How-to/postman.md
- Title: How to run Multivariate Anomaly Detector API (GA version) in Postman?
-description: Learn how to detect anomalies in your data either as a batch, or on streaming data with Postman.
- Previously updated : 12/20/2022
-# How to run Multivariate Anomaly Detector API in Postman?
-
-This article will walk you through the process of using Postman to access the Multivariate Anomaly Detection REST API.
-
-## Getting started
-
-Select this button to fork the API collection in Postman and follow the steps in this article to test.
-
-[![Run in Postman](../media/postman/button.svg)](https://app.getpostman.com/run-collection/18763802-b90da6d8-0f98-4200-976f-546342abcade?action=collection%2Ffork&collection-url=entityId%3D18763802-b90da6d8-0f98-4200-976f-546342abcade%26entityType%3Dcollection%26workspaceId%3De1370b45-5076-4885-884f-e9a97136ddbc#?env%5BMVAD%5D=W3sia2V5IjoibW9kZWxJZCIsInZhbHVlIjoiIiwiZW5hYmxlZCI6dHJ1ZSwidHlwZSI6ImRlZmF1bHQiLCJzZXNzaW9uVmFsdWUiOiJlNjQxZTJlYy01Mzg5LTExZWQtYTkyMC01MjcyNGM4YTZkZmEiLCJzZXNzaW9uSW5kZXgiOjB9LHsia2V5IjoicmVzdWx0SWQiLCJ2YWx1ZSI6IiIsImVuYWJsZWQiOnRydWUsInR5cGUiOiJkZWZhdWx0Iiwic2Vzc2lvblZhbHVlIjoiOGZkZTAwNDItNTM4YS0xMWVkLTlhNDEtMGUxMGNkOTEwZmZhIiwic2Vzc2lvbkluZGV4IjoxfSx7ImtleSI6Ik9jcC1BcGltLVN1YnNjcmlwdGlvbi1LZXkiLCJ2YWx1ZSI6IiIsImVuYWJsZWQiOnRydWUsInR5cGUiOiJzZWNyZXQiLCJzZXNzaW9uVmFsdWUiOiJjNzNjMGRhMzlhOTA0MjgzODA4ZjBmY2E0Zjc3MTFkOCIsInNlc3Npb25JbmRleCI6Mn0seyJrZXkiOiJlbmRwb2ludCIsInZhbHVlIjoiIiwiZW5hYmxlZCI6dHJ1ZSwidHlwZSI6ImRlZmF1bHQiLCJzZXNzaW9uVmFsdWUiOiJodHRwczovL211bHRpLWFkLXRlc3QtdXNjeC5jb2duaXRpdmVzZXJ2aWNlcy5henVyZS5jb20vIiwic2Vzc2lvbkluZGV4IjozfSx7ImtleSI6ImRhdGFTb3VyY2UiLCJ2YWx1ZSI6IiIsImVuYWJsZWQiOnRydWUsInR5cGUiOiJkZWZhdWx0Iiwic2Vzc2lvblZhbHVlIjoiaHR0cHM6Ly9tdmFkZGF0YXNldC5ibG9iLmNvcmUud2luZG93cy5uZXQvc2FtcGxlLW9uZXRhYmxlL3NhbXBsZV9kYXRhXzVfMzAwMC5jc3YiLCJzZXNzaW9uSW5kZXgiOjR9XQ==)
-
-## Multivariate Anomaly Detector API
-
-1. Select environment as **MVAD**.
-
- :::image type="content" source="../media/postman/postman-initial.png" alt-text="Screenshot of Postman UI with MVAD selected." lightbox="../media/postman/postman-initial.png":::
-
-2. Select **Environment**, paste your Anomaly Detector `endpoint`, `key`, and dataSource `url` into the **CURRENT VALUE** column, then select **Save** to let the variables take effect.
-
- :::image type="content" source="../media/postman/postman-key.png" alt-text="Screenshot of Postman UI with key, endpoint, and datasource filled in." lightbox="../media/postman/postman-key.png":::
-
-3. Select **Collections**, and select the first API - **Create and train a model**, then select **Send**.
-
- > [!NOTE]
- > If your data is one CSV file, set the dataSchema to **OneTable**. If your data is multiple CSV files in a folder, set the dataSchema to **MultiTable**.
-
- :::image type="content" source="../media/postman/create-and-train.png" alt-text="Screenshot of create and train POST request." lightbox="../media/postman/create-and-train.png":::
-
-4. In the response of the first API, copy the `modelId` and paste it into the `modelId` variable in **Environments**, then select **Save**. Then go to **Collections**, select **Get model status**, and select **Send**.
- ![GIF of process of copying model identifier](../media/postman/model.gif)
-
-5. Select **Batch Detection**, and select **Send**. This API triggers an asynchronous inference task; call the **Get batch detection results** API several times to get the status and the final results.
-
- :::image type="content" source="../media/postman/result.png" alt-text="Screenshot of batch detection POST request." lightbox="../media/postman/result.png":::
-
-6. In the response, copy the `resultId` and paste it into the `resultId` variable in **Environments**, then select **Save**. Then go to **Collections**, select **Get batch detection results**, and select **Send**.
-
- ![GIF of process of copying result identifier](../media/postman/result.gif)
-
-7. For the rest of the API calls, select each one and then select **Send** to test its request and response.
-
- :::image type="content" source="../media/postman/detection.png" alt-text="Screenshot of detect last POST result." lightbox="../media/postman/detection.png":::
-
-## Next Steps
-
-* [Create an Anomaly Detector resource](create-resource.md)
-* [Quickstart: Detect anomalies in your time series data using the Anomaly Detector](../quickstarts/client-libraries.md)
cognitive-services Prepare Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Anomaly-Detector/How-to/prepare-data.md
- Title: Prepare your data and upload to Storage Account
-description: Prepare your data and upload to Storage Account
- Previously updated : 11/01/2022
-# Prepare your data and upload to Storage Account
-
-Multivariate Anomaly Detection requires training to process your data, and an Azure Storage Account to store your data for further training and inference steps.
-
-## Data preparation
-
-First you need to prepare your data for training and inference.
-
-### Input data schema
-
-Multivariate Anomaly Detection supports two types of data schemas: **OneTable** and **MultiTable**. You can use either schema to prepare your data and upload it to your Storage Account for further training and inference.
--
-#### Schema 1: OneTable
-**OneTable** is one CSV file that contains all the variables that you want to use to train a Multivariate Anomaly Detection model, plus one `timestamp` column. Download the [One Table sample data](https://mvaddataset.blob.core.windows.net/public-sample-data/sample_data_5_3000.csv).
-* The `timestamp` values should conform to *ISO 8601*; the values of other variables in other columns could be *integers* or *decimals* with any number of decimal places.
-
-* Variables for training and variables for inference should be consistent. For example, if you're using `series_1`, `series_2`, `series_3`, `series_4`, and `series_5` for training, you should provide exactly the same variables for inference.
-
- ***Example:***
-
-![Diagram of one table schema.](../media/prepare-data/onetable-schema.png)
-
-#### Schema 2: MultiTable
-
-**MultiTable** is multiple CSV files in one file folder, and each CSV file contains only two columns of one variable, with the exact column names of: **timestamp** and **value**. Download [Multiple Tables sample data](https://mvaddataset.blob.core.windows.net/public-sample-data/sample_data_5_3000.zip) and unzip it.
-
-* The `timestamp` values should conform to *ISO 8601*; the `value` could be *integers* or *decimals* with any number of decimal places.
-
-* The name of the CSV file is used as the variable name and should be unique. For example, *temperature.csv* and *humidity.csv*.
-
-* Variables for training and variables for inference should be consistent. For example, if you're using `series_1`, `series_2`, `series_3`, `series_4`, and `series_5` for training, you should provide exactly the same variables for inference.
-
- ***Example:***
-
-> [!div class="mx-imgBorder"]
-> ![Diagram of multi table schema.](../media/prepare-data/multitable.png)
-
-> [!NOTE]
-> If your timestamps have hours, minutes, and/or seconds, ensure that they're properly rounded to your data frequency before calling the APIs.
-> For example, if your data frequency is supposed to be one data point every 30 seconds, but you're seeing timestamps like "12:00:01" and "12:00:28", it's a strong signal that you should pre-process the timestamps to new values like "12:00:00" and "12:00:30".
-> For details, please refer to the ["Timestamp round-up" section](../concepts/best-practices-multivariate.md#timestamp-round-up) in the best practices document.
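-
-If you need to do this pre-processing yourself, one possible approach (a sketch using only the Python standard library; the function name and the 30-second frequency are illustrative) is:
-
-```python
-# Round ISO 8601 timestamps to the nearest multiple of the data frequency.
-from datetime import datetime, timezone
-
-def round_timestamp(ts: str, frequency_seconds: int = 30) -> str:
-    dt = datetime.fromisoformat(ts.replace("Z", "+00:00"))
-    rounded = round(dt.timestamp() / frequency_seconds) * frequency_seconds
-    return datetime.fromtimestamp(rounded, tz=timezone.utc).strftime("%Y-%m-%dT%H:%M:%SZ")
-
-print(round_timestamp("2021-01-01T12:00:01Z"))  # 2021-01-01T12:00:00Z
-print(round_timestamp("2021-01-01T12:00:28Z"))  # 2021-01-01T12:00:30Z
-```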
-
-## Upload your data to Storage Account
-
-Once you prepare your data with either of the two schemas above, you could upload your CSV file (OneTable) or your data folder (MultiTable) to your Storage Account.
-
-1. [Create a Storage Account](https://portal.azure.com/#create/Microsoft.StorageAccount-ARM) and fill out the fields, which are similar to the steps for creating an Anomaly Detector resource.
-
- > [!div class="mx-imgBorder"]
- > ![Screenshot of Azure Storage account setup page.](../media/prepare-data/create-blob.png)
-
-2. Select **Container** in the left navigation of your Storage Account resource, and select **+ Container** to create one that will store your data.
-
-3. Upload your data to the container. If you'd rather script this step, see the sketch after these steps.
-
- **Upload *OneTable* data**
-
- Go to the container that you created, and select **Upload**, then choose your prepared CSV file and upload.
-
- Once your data is uploaded, select your CSV file and copy the **blob URL** by using the small blue copy button. (Paste the URL somewhere convenient for the later steps.)
-
- > [!div class="mx-imgBorder"]
- > ![Screenshot of copy blob url for one table.](../media/prepare-data/onetable-copy-url.png)
-
- **Upload *MultiTable* data**
-
- Go to the container that you created, select **Upload**, then select **Advanced**, enter a folder name in **Upload to folder**, and then select all the variables' CSV files and upload them.
-
- Once your data is uploaded, go into the folder, select one CSV file in the folder, and copy the **blob URL**, keeping only the part before the name of that CSV file, so that the final blob URL ***links to the folder***. (Paste the URL somewhere convenient for the later steps.)
-
- > [!div class="mx-imgBorder"]
- > ![Screenshot of copy blob url for multi table.](../media/prepare-data/multitable-copy-url.png)
-
-4. Grant Anomaly Detector access to read the data in your Storage Account.
- * In your container, select **Access Control (IAM)** on the left, then select **+ Add** > **Add role assignment**. If **Add role assignment** is disabled, contact your Storage Account owner to add the Owner role to your container.
-
- > [!div class="mx-imgBorder"]
- > ![Screenshot of set access control UI.](../media/prepare-data/add-role-assignment.png)
-
- * Search for the **Storage Blob Data Reader** role, select it, and then select **Next**. Technically, the roles highlighted below and the *Owner* role should all work.
-
- > [!div class="mx-imgBorder"]
- > ![Screenshot of add role assignment with reader roles selected.](../media/prepare-data/add-reader-role.png)
-
- * Set **Assign access to** to **Managed identity**, select **Select members**, choose the Anomaly Detector resource that you created earlier, and then select **Review + assign**.
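-
-If you'd rather script the upload in step 3 than use the portal, a sketch with the `azure-storage-blob` package might look like the following. The connection string variable, container name, and file name are placeholders for your own values, and this covers the *OneTable* case only.
-
-```python
-# Upload a OneTable CSV file to the container created above.
-import os
-
-from azure.storage.blob import BlobServiceClient
-
-service = BlobServiceClient.from_connection_string(os.environ["STORAGE_CONNECTION_STRING"])
-container = service.get_container_client("<your-container-name>")
-
-with open("sample_data_5_3000.csv", "rb") as data:
-    container.upload_blob(name="sample_data_5_3000.csv", data=data, overwrite=True)
-```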
-
-## Next steps
-
-* [Train a multivariate anomaly detection model](train-model.md)
-* [Best practices of multivariate anomaly detection](../concepts/best-practices-multivariate.md)
cognitive-services Streaming Inference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Anomaly-Detector/How-to/streaming-inference.md
- Title: Streaming inference with trained model
-description: Streaming inference with trained model
- Previously updated : 11/01/2022
-# Streaming inference with trained model
-
-You can choose either the batch inference API or the streaming inference API for detection.
-
-| Batch inference API | Streaming inference API |
-| - | - |
-| More suitable for batch use cases, when customers don't need inference results immediately and want to detect anomalies and get results over a longer time period.| Recommended when customers want to get inference results immediately and detect multivariate anomalies in real time. Also suitable for customers who have difficulty with the compressing and uploading process required for batch inference. |
-
-|API Name| Method | Path | Description |
-|--|--|--|--|
-|**Batch Inference**| POST | `{endpoint}`/anomalydetector/v1.1/multivariate/models/`{modelId}`:detect-batch | Trigger an asynchronous inference with `modelId`, which works in a batch scenario |
-|**Get Batch Inference Results**| GET | `{endpoint}`/anomalydetector/v1.1/multivariate/detect-batch/`{resultId}` | Get batch inference results with `resultId` |
-|**Streaming Inference**| POST | `{endpoint}`/anomalydetector/v1.1/multivariate/models/`{modelId}`:detect-last | Trigger a synchronous inference with `modelId`, which works in a streaming scenario |
-
-## Trigger a streaming inference API
-
-### Request
-
-With the synchronous API, you can get inference results point by point in real time, with no need for the compressing and uploading tasks required for training and asynchronous inference. Here are some requirements for the synchronous API:
-* You need to put data in **JSON format** into the API request body.
-* Due to payload limitations, the size of inference data in the request body is limited: it supports at most `2880` timestamps * `300` variables, and requires at least `1 sliding window length` of data.
-
-You can submit a window of timestamps for multiple variables in JSON format in the request body, with an API call like this:
-
-**{{endpoint}}/anomalydetector/v1.1/multivariate/models/{modelId}:detect-last**
-
-A sample request:
-
-```json
-{
- "variables": [
- {
- "variable": "Variable_1",
- "timestamps": [
- "2021-01-01T00:00:00Z",
- "2021-01-01T00:01:00Z",
- "2021-01-01T00:02:00Z"
- //more timestamps
- ],
- "values": [
- 0.4551378545933972,
- 0.7388603950488748,
- 0.201088255984052
- //more values
- ]
- },
- {
- "variable": "Variable_2",
- "timestamps": [
- "2021-01-01T00:00:00Z",
- "2021-01-01T00:01:00Z",
- "2021-01-01T00:02:00Z"
- //more timestamps
- ],
- "values": [
- 0.9617871613964145,
- 0.24903311574778408,
- 0.4920561254118613
- //more values
- ]
- },
- {
- "variable": "Variable_3",
- "timestamps": [
- "2021-01-01T00:00:00Z",
- "2021-01-01T00:01:00Z",
- "2021-01-01T00:02:00Z"
- //more timestamps
- ],
- "values": [
- 0.4030756879437628,
- 0.15526889968448554,
- 0.36352226408981103
- //more values
- ]
- }
- ],
- "topContributorCount": 2
-}
-```
-
-#### Required parameters
-
-* **variable**: The variable name, which should be exactly the same as in your training data.
-* **timestamps**: The length of the timestamps should be equal to **1 sliding window**, since every streaming inference call will use 1 sliding window to detect the last point in the sliding window.
-* **values**: The values of each variable in every timestamp that was inputted above.
-
-#### Optional parameters
-
-* **topContributorCount**: A number N from **1 to 30** that specifies how many top contributing variables to include in the anomaly results. For example, if you have 100 variables in the model but you only care about the top five contributing variables in the detection results, fill this field with 5. The default number is **10**.
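-
-Putting the request together, a minimal client sketch with the `requests` package might look like this. The environment variable names and the placeholder model ID are assumptions; the body follows the sample request above.
-
-```python
-# POST the streaming (detect-last) inference request for a trained model.
-import os
-
-import requests
-
-endpoint = os.environ["ANOMALY_DETECTOR_ENDPOINT"].rstrip("/")
-key = os.environ["ANOMALY_DETECTOR_KEY"]
-model_id = "<your-model-id>"  # returned by the Train Model API
-
-body = {
-    "variables": [
-        # One entry per variable, exactly as in the sample request above,
-        # each with one sliding window of timestamps and values.
-    ],
-    "topContributorCount": 2,
-}
-
-response = requests.post(
-    f"{endpoint}/anomalydetector/v1.1/multivariate/models/{model_id}:detect-last",
-    headers={"Ocp-Apim-Subscription-Key": key},
-    json=body,
-)
-response.raise_for_status()
-print(response.json())
-```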
-
-### Response
-
-A sample response:
-
-```json
-{
- "variableStates": [
- {
- "variable": "series_0",
- "filledNARatio": 0.0,
- "effectiveCount": 1,
- "firstTimestamp": "2021-01-03T01:59:00Z",
- "lastTimestamp": "2021-01-03T01:59:00Z"
- },
- {
- "variable": "series_1",
- "filledNARatio": 0.0,
- "effectiveCount": 1,
- "firstTimestamp": "2021-01-03T01:59:00Z",
- "lastTimestamp": "2021-01-03T01:59:00Z"
- },
- {
- "variable": "series_2",
- "filledNARatio": 0.0,
- "effectiveCount": 1,
- "firstTimestamp": "2021-01-03T01:59:00Z",
- "lastTimestamp": "2021-01-03T01:59:00Z"
- },
- {
- "variable": "series_3",
- "filledNARatio": 0.0,
- "effectiveCount": 1,
- "firstTimestamp": "2021-01-03T01:59:00Z",
- "lastTimestamp": "2021-01-03T01:59:00Z"
- },
- {
- "variable": "series_4",
- "filledNARatio": 0.0,
- "effectiveCount": 1,
- "firstTimestamp": "2021-01-03T01:59:00Z",
- "lastTimestamp": "2021-01-03T01:59:00Z"
- }
- ],
- "results": [
- {
- "timestamp": "2021-01-03T01:59:00Z",
- "value": {
- "isAnomaly": false,
- "severity": 0.0,
- "score": 0.2675322890281677,
- "interpretation": []
- },
- "errors": []
- }
- ]
-}
-```
-
-The response contains the result status, variable information, inference parameters, and inference results.
-
-* **variableStates**: This lists the information of each variable in the inference request.
-* **setupInfo**: This is the request body submitted for this inference.
-* **results**: This contains the detection results. There are three typical types of detection results.
-
-* **isAnomaly**: `false` indicates the current timestamp isn't an anomaly. `true` indicates an anomaly at the current timestamp.
- * `severity` indicates the relative severity of the anomaly and for abnormal data it's always greater than 0.
- * `score` is the raw output of the model on which the model makes a decision. `severity` is a derived value from `score`. Every data point has a `score`.
-
-* **interpretation**: This field only appears when a timestamp is detected as anomalous. It contains `variables`, `contributionScore`, and `correlationChanges`.
-
-* **contributors**: This is a list containing the contribution score of each variable. Higher contribution scores indicate higher possibility of the root cause. This list is often used for interpreting anomalies and diagnosing the root causes.
-
-* **correlationChanges**: This field only appears when a timestamp is detected as anomalous, and it's included in `interpretation`. It contains `changedVariables` and `changedValues` that interpret which correlations between variables changed.
-
-* **changedVariables**: This field shows which variables have a significant change in correlation with `variable`. The variables in this list are ranked by the extent of correlation changes.
-
-> [!NOTE]
-> A common pitfall is taking all data points with `isAnomaly`=`true` as anomalies. That may end up with too many false positives.
-> You should use both `isAnomaly` and `severity` (or `score`) to sift out anomalies that are not severe and (optionally) use grouping to check the duration of the anomalies to suppress random noise.
-> Please refer to the [FAQ](../concepts/best-practices-multivariate.md#faq) in the best practices document for the difference between `severity` and `score`.
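-
-A sketch of that post-processing step (the threshold value and function name are illustrative) might look like:
-
-```python
-# Keep only points that are both flagged and sufficiently severe.
-def significant_anomalies(results, severity_threshold=0.3):
-    return [
-        r for r in results
-        if r["value"]["isAnomaly"] and r["value"]["severity"] >= severity_threshold
-    ]
-```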
-
-## Next steps
-
-* [Multivariate Anomaly Detection reference architecture](../concepts/multivariate-architecture.md)
-* [Best practices of multivariate anomaly detection](../concepts/best-practices-multivariate.md)
cognitive-services Train Model https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Anomaly-Detector/How-to/train-model.md
- Title: Train a Multivariate Anomaly Detection model
-description: Train a Multivariate Anomaly Detection model
- Previously updated : 11/01/2022
-# Train a Multivariate Anomaly Detection model
-
-To test out Multivariate Anomaly Detection quickly, try the [Code Sample](https://github.com/Azure-Samples/AnomalyDetector)! For more instructions on how to run a Jupyter notebook, please refer to [Install and Run a Jupyter Notebook](https://jupyter-notebook-beginner-guide.readthedocs.io/en/latest/install.html#).
-
-## API Overview
-
-There are 7 APIs provided in Multivariate Anomaly Detection:
-* **Training**: Use `Train Model API` to create and train a model, then use `Get Model Status API` to get the status and model metadata.
-* **Inference**:
- * Use `Async Inference API` to trigger an asynchronous inference process and use `Get Inference results API` to get detection results on a batch of data.
- * You can also use the `Sync Inference API` to trigger a detection on one timestamp at a time.
-* **Other operations**: `List Model API` and `Delete Model API` are supported in Multivariate Anomaly Detection for model management.
-
-![Diagram of model training workflow and inference workflow](../media/train-model/api-workflow.png)
-
-|API Name| Method | Path | Description |
-|--|--|--|--|
-|**Train Model**| POST | `{endpoint}`/anomalydetector/v1.1/multivariate/models | Create and train a model |
-|**Get Model Status**| GET | `{endpoint}`/anomalydetector/v1.1/multivariate/models/`{modelId}` | Get model status and model metadata with `modelId` |
-|**Batch Inference**| POST | `{endpoint}`/anomalydetector/v1.1/multivariate/models/`{modelId}`:detect-batch | Trigger an asynchronous inference with `modelId`, which works in a batch scenario |
-|**Get Batch Inference Results**| GET | `{endpoint}`/anomalydetector/v1.1/multivariate/detect-batch/`{resultId}` | Get batch inference results with `resultId` |
-|**Streaming Inference**| POST | `{endpoint}`/anomalydetector/v1.1/multivariate/models/`{modelId}`:detect-last | Trigger a synchronous inference with `modelId`, which works in a streaming scenario |
-|**List Model**| GET | `{endpoint}`/anomalydetector/v1.1/multivariate/models | List all models |
-|**Delete Model**| DELETE | `{endpoint}`/anomalydetector/v1.1/multivariate/models/`{modelId}` | Delete model with `modelId` |
-
-## Train a model
-
-In this process, you'll use the following information that you created previously:
-
-* **Key** of Anomaly Detector resource
-* **Endpoint** of Anomaly Detector resource
-* **Blob URL** of your data in Storage Account
-
-For training data size, the maximum number of timestamps is **1000000**, and a recommended minimum number is **5000** timestamps.
-
-### Request
-
-Here's a sample request body to train a Multivariate Anomaly Detection model.
-
-```json
-{
- "slidingWindow": 200,
- "alignPolicy": {
- "alignMode": "Outer",
- "fillNAMethod": "Linear",
- "paddingValue": 0
- },
- "dataSource": "{{dataSource}}", //Example: https://mvaddataset.blob.core.windows.net/sample-onetable/sample_data_5_3000.csv
- "dataSchema": "OneTable",
- "startTime": "2021-01-01T00:00:00Z",
- "endTime": "2021-01-02T09:19:00Z",
- "displayName": "SampleRequest"
-}
-```
-
-#### Required parameters
-
-The following parameters are required in training and inference API requests:
-
-* **dataSource**: This is the Blob URL that links to your folder or CSV file located in Azure Blob Storage.
-* **dataSchema**: This indicates the schema that you're using: `OneTable` or `MultiTable`.
-* **startTime**: The start time of data used for training or inference. If it's earlier than the actual earliest timestamp in the data, the actual earliest timestamp will be used as the starting point.
-* **endTime**: The end time of data used for training or inference, which must be later than or equal to `startTime`. If `endTime` is later than the actual latest timestamp in the data, the actual latest timestamp will be used as the ending point. If `endTime` equals `startTime`, it means inference of a single data point, which is often used in streaming scenarios.
-
-#### Optional parameters
-
-Other parameters for training API are optional:
-
-* **slidingWindow**: How many data points are used to determine anomalies. An integer between 28 and 2,880. The default value is 300. If `slidingWindow` is `k` for model training, then at least `k` points should be accessible from the source file during inference to get valid results.
-
- Multivariate Anomaly Detection takes a segment of data points to decide if the next data point is an anomaly. The length of the segment is the `slidingWindow`.
- Please keep two things in mind when choosing a `slidingWindow` value:
- 1. The properties of your data: whether it's periodic and the sampling rate. When your data is periodic, you could set the length of 1 - 3 cycles as the `slidingWindow`. When your data is at a high frequency (small granularity) like minute-level or second-level, you could set a relatively higher value of `slidingWindow`.
- 1. The trade-off between training/inference time and potential performance impact. A larger `slidingWindow` may cause longer training/inference time. There's **no guarantee** that larger `slidingWindow`s will lead to accuracy gains. A small `slidingWindow` may make it difficult for the model to converge on an optimal solution. For example, it's hard to detect anomalies when `slidingWindow` has only two points.
-
-* **alignMode**: How to align multiple variables (time series) on timestamps. There are two options for this parameter, `Inner` and `Outer`, and the default value is `Outer`.
-
- This parameter is critical when there's misalignment between timestamp sequences of the variables. The model needs to align the variables onto the same timestamp sequence before further processing.
-
- `Inner` means the model will report detection results only on timestamps on which **every variable** has a value, that is, the intersection of all variables. `Outer` means the model will report detection results on timestamps on which **any variable** has a value, that is, the union of all variables.
-
- Here's an example to explain different `alignMode` values.
-
- *Variable-1*
-
- |timestamp | value|
- |--|--|
- |2020-11-01| 1
- |2020-11-02| 2
- |2020-11-04| 4
- |2020-11-05| 5
-
- *Variable-2*
-
- timestamp | value
- |--|--|
- 2020-11-01| 1
- 2020-11-02| 2
- 2020-11-03| 3
- 2020-11-04| 4
-
- *`Inner` join two variables*
-
- timestamp | Variable-1 | Variable-2
- |--|--|--|
- 2020-11-01| 1 | 1
- 2020-11-02| 2 | 2
- 2020-11-04| 4 | 4
-
- *`Outer` join two variables*
-
- timestamp | Variable-1 | Variable-2
- |--|--|--|
- 2020-11-01| 1 | 1
- 2020-11-02| 2 | 2
- 2020-11-03| `nan` | 3
- 2020-11-04| 4 | 4
- 2020-11-05| 5 | `nan`
-
-* **fillNAMethod**: How to fill `nan` in the merged table. There might be missing values in the merged table and they should be properly handled. We provide several methods to fill them up. The options are `Linear`, `Previous`, `Subsequent`, `Zero`, and `Fixed` and the default value is `Linear`.
-
- | Option | Method |
- | - | -|
- | `Linear` | Fill `nan` values by linear interpolation |
- | `Previous` | Propagate last valid value to fill gaps. Example: `[1, 2, nan, 3, nan, 4]` -> `[1, 2, 2, 3, 3, 4]` |
- | `Subsequent` | Use next valid value to fill gaps. Example: `[1, 2, nan, 3, nan, 4]` -> `[1, 2, 3, 3, 4, 4]` |
- | `Zero` | Fill `nan` values with 0. |
- | `Fixed` | Fill `nan` values with a specified valid value that should be provided in `paddingValue`. |
-
-* **paddingValue**: Padding value is used to fill `nan` when `fillNAMethod` is `Fixed` and must be provided in that case. In other cases, it's optional.
-
-* **displayName**: This is an optional parameter, which is used to identify models. For example, you can use it to mark parameters, data sources, and any other metadata about the model and its input data. The default value is an empty string.
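-
-Putting the pieces together, a minimal training call with the `requests` package might look like the following sketch. The environment variable names and the placeholder blob URL are assumptions; the body mirrors the sample request above.
-
-```python
-# Submit a training request; the response (described below) contains the modelId.
-import os
-
-import requests
-
-endpoint = os.environ["ANOMALY_DETECTOR_ENDPOINT"].rstrip("/")
-key = os.environ["ANOMALY_DETECTOR_KEY"]
-
-body = {
-    "slidingWindow": 200,
-    "alignPolicy": {"alignMode": "Outer", "fillNAMethod": "Linear", "paddingValue": 0},
-    "dataSource": "<your-blob-url>",  # blob URL copied when you uploaded your data
-    "dataSchema": "OneTable",
-    "startTime": "2021-01-01T00:00:00Z",
-    "endTime": "2021-01-02T09:19:00Z",
-    "displayName": "SampleRequest",
-}
-
-response = requests.post(
-    f"{endpoint}/anomalydetector/v1.1/multivariate/models",
-    headers={"Ocp-Apim-Subscription-Key": key},
-    json=body,
-)
-response.raise_for_status()
-print(response.json()["modelId"])
-```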
-
-### Response
-
-Within the response, the most important thing is the `modelId`, which you'll use to trigger the Get Model Status API.
-
-A response sample:
-
-```json
-{
- "modelId": "09c01f3e-5558-11ed-bd35-36f8cdfb3365",
- "createdTime": "2022-11-01T00:00:00Z",
- "lastUpdatedTime": "2022-11-01T00:00:00Z",
- "modelInfo": {
- "dataSource": "https://mvaddataset.blob.core.windows.net/sample-onetable/sample_data_5_3000.csv",
- "dataSchema": "OneTable",
- "startTime": "2021-01-01T00:00:00Z",
- "endTime": "2021-01-02T09:19:00Z",
- "displayName": "SampleRequest",
- "slidingWindow": 200,
- "alignPolicy": {
- "alignMode": "Outer",
- "fillNAMethod": "Linear",
- "paddingValue": 0.0
- },
- "status": "CREATED",
- "errors": [],
- "diagnosticsInfo": {
- "modelState": {
- "epochIds": [],
- "trainLosses": [],
- "validationLosses": [],
- "latenciesInSeconds": []
- },
- "variableStates": []
- }
- }
-}
-```
-
-## Get model status
-
-You can use the above API to trigger a training job, and use the **Get model status API** to check whether the model was trained successfully.
-
-### Request
-
-There's no content in the request body; all that's required is to put the `modelId` in the API path, which has the following format:
-**{{endpoint}}/anomalydetector/v1.1/multivariate/models/{{modelId}}**
-
-### Response
-
-* **status**: The `status` in the response body indicates the model status with one of these categories: *CREATED, RUNNING, READY, FAILED*.
-* **trainLosses & validationLosses**: These are two machine learning metrics indicating the model performance. If the numbers are decreasing and finally reach a relatively small number like 0.2 or 0.3, it means the model performance is good to some extent. However, the model performance still needs to be validated through inference and comparison with labels, if any.
-* **epochIds**: indicates how many epochs the model has been trained out of a total of 100 epochs. For example, if the model is still in training status, `epochId` might be `[10, 20, 30, 40, 50]` , which means that it has completed its 50th training epoch, and therefore is halfway complete.
-* **latenciesInSeconds**: contains the time cost for each epoch and is recorded every 10 epochs. In the sample response below, the 10th epoch takes approximately 2.17 seconds. This is helpful for estimating the completion time of training.
-* **variableStates**: summarizes information about each variable. It's a list ranked by `filledNARatio` in descending order. It tells how many data points are used for each variable and `filledNARatio` tells how many points are missing. Usually we need to reduce `filledNARatio` as much as possible.
-Too many missing data points will deteriorate model accuracy.
-* **errors**: Errors during data processing will be included in the `errors` field.
-
-A response sample:
-
-```json
-{
- "modelId": "09c01f3e-5558-11ed-bd35-36f8cdfb3365",
- "createdTime": "2022-11-01T00:00:12Z",
- "lastUpdatedTime": "2022-11-01T00:00:12Z",
- "modelInfo": {
- "dataSource": "https://mvaddataset.blob.core.windows.net/sample-onetable/sample_data_5_3000.csv",
- "dataSchema": "OneTable",
- "startTime": "2021-01-01T00:00:00Z",
- "endTime": "2021-01-02T09:19:00Z",
- "displayName": "SampleRequest",
- "slidingWindow": 200,
- "alignPolicy": {
- "alignMode": "Outer",
- "fillNAMethod": "Linear",
- "paddingValue": 0.0
- },
- "status": "READY",
- "errors": [],
- "diagnosticsInfo": {
- "modelState": {
- "epochIds": [
- 10,
- 20,
- 30,
- 40,
- 50,
- 60,
- 70,
- 80,
- 90,
- 100
- ],
- "trainLosses": [
- 0.30325182933699,
- 0.24335388161919333,
- 0.22876543213020673,
- 0.2439815090461211,
- 0.22489577260884372,
- 0.22305156764659015,
- 0.22466289590705524,
- 0.22133831883018668,
- 0.2214335961775346,
- 0.22268397090109912
- ],
- "validationLosses": [
- 0.29047123109451445,
- 0.263965221366497,
- 0.2510373182971068,
- 0.27116744686858824,
- 0.2518718700216274,
- 0.24802495975687047,
- 0.24790137705176768,
- 0.24640804830223623,
- 0.2463938973166726,
- 0.24831805566344597
- ],
- "latenciesInSeconds": [
- 2.1662967205047607,
- 2.0658926963806152,
- 2.112030029296875,
- 2.130472183227539,
- 2.183091640472412,
- 2.1442034244537354,
- 2.117824077606201,
- 2.1345198154449463,
- 2.0993552207946777,
- 2.1198465824127197
- ]
- },
- "variableStates": [
- {
- "variable": "series_0",
- "filledNARatio": 0.0004999999999999449,
- "effectiveCount": 1999,
- "firstTimestamp": "2021-01-01T00:01:00Z",
- "lastTimestamp": "2021-01-02T09:19:00Z"
- },
- {
- "variable": "series_1",
- "filledNARatio": 0.0004999999999999449,
- "effectiveCount": 1999,
- "firstTimestamp": "2021-01-01T00:01:00Z",
- "lastTimestamp": "2021-01-02T09:19:00Z"
- },
- {
- "variable": "series_2",
- "filledNARatio": 0.0004999999999999449,
- "effectiveCount": 1999,
- "firstTimestamp": "2021-01-01T00:01:00Z",
- "lastTimestamp": "2021-01-02T09:19:00Z"
- },
- {
- "variable": "series_3",
- "filledNARatio": 0.0004999999999999449,
- "effectiveCount": 1999,
- "firstTimestamp": "2021-01-01T00:01:00Z",
- "lastTimestamp": "2021-01-02T09:19:00Z"
- },
- {
- "variable": "series_4",
- "filledNARatio": 0.0004999999999999449,
- "effectiveCount": 1999,
- "firstTimestamp": "2021-01-01T00:01:00Z",
- "lastTimestamp": "2021-01-02T09:19:00Z"
- }
- ]
- }
- }
-}
-```
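-
-In practice you typically poll this API until `status` leaves *CREATED*/*RUNNING*. A minimal polling sketch (the function name, interval, and variable names are assumptions) is shown below; it reuses the same endpoint and key as the earlier sketches.
-
-```python
-# Poll Get Model Status until training finishes (READY) or fails (FAILED).
-import time
-
-import requests
-
-def wait_until_trained(endpoint, key, model_id, interval_seconds=30):
-    url = f"{endpoint.rstrip('/')}/anomalydetector/v1.1/multivariate/models/{model_id}"
-    while True:
-        info = requests.get(url, headers={"Ocp-Apim-Subscription-Key": key}).json()
-        status = info["modelInfo"]["status"]
-        if status in ("READY", "FAILED"):
-            return info
-        time.sleep(interval_seconds)
-```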
-
-## List models
-
-You may refer to [this page](https://westus2.dev.cognitive.microsoft.com/docs/services/AnomalyDetector-v1-1/operations/ListMultivariateModel) for information about the request URL and request headers. Notice that we only return 10 models ordered by update time, but you can retrieve other models by setting the `$skip` and `$top` parameters in the request URL. For example, if your request URL is `https://{endpoint}/anomalydetector/v1.1/multivariate/models?$skip=10&$top=20`, then we'll skip the latest 10 models and return the next 20 models.
-
-A sample response is
-
-```json
-{
- "models": [
- {
- "modelId": "09c01f3e-5558-11ed-bd35-36f8cdfb3365",
- "createdTime": "2022-10-26T18:00:12Z",
- "lastUpdatedTime": "2022-10-26T18:03:53Z",
- "modelInfo": {
- "dataSource": "https://mvaddataset.blob.core.windows.net/sample-onetable/sample_data_5_3000.csv",
- "dataSchema": "OneTable",
- "startTime": "2021-01-01T00:00:00Z",
- "endTime": "2021-01-02T09:19:00Z",
- "displayName": "SampleRequest",
- "slidingWindow": 200,
- "alignPolicy": {
- "alignMode": "Outer",
- "fillNAMethod": "Linear",
- "paddingValue": 0.0
- },
- "status": "READY",
- "errors": [],
- "diagnosticsInfo": {
- "modelState": {
- "epochIds": [
- 10,
- 20,
- 30,
- 40,
- 50,
- 60,
- 70,
- 80,
- 90,
- 100
- ],
- "trainLosses": [
- 0.30325182933699,
- 0.24335388161919333,
- 0.22876543213020673,
- 0.2439815090461211,
- 0.22489577260884372,
- 0.22305156764659015,
- 0.22466289590705524,
- 0.22133831883018668,
- 0.2214335961775346,
- 0.22268397090109912
- ],
- "validationLosses": [
- 0.29047123109451445,
- 0.263965221366497,
- 0.2510373182971068,
- 0.27116744686858824,
- 0.2518718700216274,
- 0.24802495975687047,
- 0.24790137705176768,
- 0.24640804830223623,
- 0.2463938973166726,
- 0.24831805566344597
- ],
- "latenciesInSeconds": [
- 2.1662967205047607,
- 2.0658926963806152,
- 2.112030029296875,
- 2.130472183227539,
- 2.183091640472412,
- 2.1442034244537354,
- 2.117824077606201,
- 2.1345198154449463,
- 2.0993552207946777,
- 2.1198465824127197
- ]
- },
- "variableStates": [
- {
- "variable": "series_0",
- "filledNARatio": 0.0004999999999999449,
- "effectiveCount": 1999,
- "firstTimestamp": "2021-01-01T00:01:00Z",
- "lastTimestamp": "2021-01-02T09:19:00Z"
- },
- {
- "variable": "series_1",
- "filledNARatio": 0.0004999999999999449,
- "effectiveCount": 1999,
- "firstTimestamp": "2021-01-01T00:01:00Z",
- "lastTimestamp": "2021-01-02T09:19:00Z"
- },
- {
- "variable": "series_2",
- "filledNARatio": 0.0004999999999999449,
- "effectiveCount": 1999,
- "firstTimestamp": "2021-01-01T00:01:00Z",
- "lastTimestamp": "2021-01-02T09:19:00Z"
- },
- {
- "variable": "series_3",
- "filledNARatio": 0.0004999999999999449,
- "effectiveCount": 1999,
- "firstTimestamp": "2021-01-01T00:01:00Z",
- "lastTimestamp": "2021-01-02T09:19:00Z"
- },
- {
- "variable": "series_4",
- "filledNARatio": 0.0004999999999999449,
- "effectiveCount": 1999,
- "firstTimestamp": "2021-01-01T00:01:00Z",
- "lastTimestamp": "2021-01-02T09:19:00Z"
- }
- ]
- }
- }
- }
- ],
- "currentCount": 42,
- "maxCount": 1000,
- "nextLink": ""
-}
-```
-
-The response contains four fields, `models`, `currentCount`, `maxCount`, and `nextLink`.
-
-* **models**: This contains the created time, last updated time, model ID, display name, variable counts, and the status of each model.
-* **currentCount**: This contains the number of trained multivariate models in your Anomaly Detector resource.
-* **maxCount**: The maximum number of models supported by your Anomaly Detector resource, which will be differentiated by the pricing tier that you choose.
-* **nextLink**: This can be used to fetch more models, since the maximum number of models listed in one API response is **10**.
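-
-A paging sketch (the function name and stop condition are illustrative; it assumes the `requests` package and the same endpoint and key as the earlier sketches) could look like:
-
-```python
-# Page through all models, 10 at a time, using $skip and $top.
-import requests
-
-def list_all_models(endpoint, key, page_size=10):
-    base = f"{endpoint.rstrip('/')}/anomalydetector/v1.1/multivariate/models"
-    models, skip = [], 0
-    while True:
-        page = requests.get(
-            base,
-            params={"$skip": skip, "$top": page_size},
-            headers={"Ocp-Apim-Subscription-Key": key},
-        ).json()
-        models.extend(page["models"])
-        if not page["models"] or not page.get("nextLink"):
-            return models
-        skip += page_size
-```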
-
-## Next steps
-
-* [Best practices of multivariate anomaly detection](../concepts/best-practices-multivariate.md)
cognitive-services Anomaly Detector Container Configuration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Anomaly-Detector/anomaly-detector-container-configuration.md
- Title: How to configure a container for Anomaly Detector API
-description: The Anomaly Detector API container runtime environment is configured using the `docker run` command arguments. This container has several required settings, along with a few optional settings.
- Previously updated : 05/07/2020
-# Configure Anomaly Detector univariate containers
-
-The **Anomaly Detector** container runtime environment is configured using the `docker run` command arguments. This container has several required settings, along with a few optional settings. Several [examples](#example-docker-run-commands) of the command are available. The container-specific settings are the billing settings.
-
-## Configuration settings
-
-This container has the following configuration settings:
-
-|Required|Setting|Purpose|
-|--|--|--|
-|Yes|[ApiKey](#apikey-configuration-setting)|Used to track billing information.|
-|No|[ApplicationInsights](#applicationinsights-setting)|Allows you to add [Azure Application Insights](/azure/application-insights) telemetry support to your container.|
-|Yes|[Billing](#billing-configuration-setting)|Specifies the endpoint URI of the service resource on Azure.|
-|Yes|[Eula](#eula-setting)| Indicates that you've accepted the license for the container.|
-|No|[Fluentd](#fluentd-settings)|Write log and, optionally, metric data to a Fluentd server.|
-|No|[Http Proxy](#http-proxy-credentials-settings)|Configure an HTTP proxy for making outbound requests.|
-|No|[Logging](#logging-settings)|Provides ASP.NET Core logging support for your container. |
-|No|[Mounts](#mount-settings)|Read and write data from host computer to container and from container back to host computer.|
-
-> [!IMPORTANT]
-> The [`ApiKey`](#apikey-configuration-setting), [`Billing`](#billing-configuration-setting), and [`Eula`](#eula-setting) settings are used together, and you must provide valid values for all three of them; otherwise your container won't start. For more information about using these configuration settings to instantiate a container, see [Billing](anomaly-detector-container-howto.md#billing).
-
-## ApiKey configuration setting
-
-The `ApiKey` setting specifies the Azure resource key used to track billing information for the container. You must specify a value for the ApiKey and the value must be a valid key for the _Anomaly Detector_ resource specified for the [`Billing`](#billing-configuration-setting) configuration setting.
-
-This setting can be found in the following place:
-
-* Azure portal: **Anomaly Detector's** Resource Management, under **Keys**
-
-## ApplicationInsights setting
--
-## Billing configuration setting
-
-The `Billing` setting specifies the endpoint URI of the _Anomaly Detector_ resource on Azure used to meter billing information for the container. You must specify a value for this configuration setting, and the value must be a valid endpoint URI for an _Anomaly Detector_ resource on Azure.
-
-This setting can be found in the following place:
-
-* Azure portal: **Anomaly Detector's** Overview, labeled `Endpoint`
-
-|Required| Name | Data type | Description |
-|--|--|--|--|
-|Yes| `Billing` | String | Billing endpoint URI. For more information on obtaining the billing URI, see [gather required parameters](anomaly-detector-container-howto.md#gather-required-parameters). For more information and a complete list of regional endpoints, see [Custom subdomain names for Cognitive Services](../cognitive-services-custom-subdomains.md). |
-
-## Eula setting
--
-## Fluentd settings
--
-## Http proxy credentials settings
--
-## Logging settings
-
--
-## Mount settings
-
-Use bind mounts to read and write data to and from the container. You can specify an input mount or output mount by specifying the `--mount` option in the [docker run](https://docs.docker.com/engine/reference/commandline/run/) command.
-
-The Anomaly Detector containers don't use input or output mounts to store training or service data.
-
-The exact syntax of the host mount location varies depending on the host operating system. Additionally, the [host computer](anomaly-detector-container-howto.md#the-host-computer)'s mount location may not be accessible due to a conflict between permissions used by the Docker service account and the host mount location permissions.
-
-|Optional| Name | Data type | Description |
-|--|--|--|--|
-|Not allowed| `Input` | String | Anomaly Detector containers do not use this.|
-|Optional| `Output` | String | The target of the output mount. The default value is `/output`. This is the location of the logs. This includes container logs. <br><br>Example:<br>`--mount type=bind,src=c:\output,target=/output`|
-
-## Example docker run commands
-
-The following examples use the configuration settings to illustrate how to write and use `docker run` commands. Once running, the container continues to run until you [stop](anomaly-detector-container-howto.md#stop-the-container) it.
-
-* **Line-continuation character**: The Docker commands in the following sections use the backslash, `\`, as a line continuation character for a bash shell. Replace or remove this based on your host operating system's requirements. For example, the line continuation character for Windows is a caret, `^`. Replace the backslash with the caret.
-* **Argument order**: Do not change the order of the arguments unless you are very familiar with Docker containers.
-
-Replace values in brackets, `{}`, with your own values:
-
-| Placeholder | Value | Format or example |
-|--|--|--|
-| **{API_KEY}** | The endpoint key of the `Anomaly Detector` resource on the Azure `Anomaly Detector` Keys page. | `xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx` |
-| **{ENDPOINT_URI}** | The billing endpoint value is available on the Azure `Anomaly Detector` Overview page.| See [gather required parameters](anomaly-detector-container-howto.md#gather-required-parameters) for explicit examples. |
--
-> [!IMPORTANT]
-> The `Eula`, `Billing`, and `ApiKey` options must be specified to run the container; otherwise, the container won't start. For more information, see [Billing](anomaly-detector-container-howto.md#billing).
-> The ApiKey value is the **Key** from the Azure Anomaly Detector Resource keys page.
-
-## Anomaly Detector container Docker examples
-
-The following Docker examples are for the Anomaly Detector container.
-
-### Basic example
-
- ```Docker
- docker run --rm -it -p 5000:5000 --memory 4g --cpus 1 \
- mcr.microsoft.com/azure-cognitive-services/decision/anomaly-detector \
- Eula=accept \
- Billing={ENDPOINT_URI} \
- ApiKey={API_KEY}
- ```
-
-### Logging example with command-line arguments
-
- ```Docker
- docker run --rm -it -p 5000:5000 --memory 4g --cpus 1 \
- mcr.microsoft.com/azure-cognitive-services/decision/anomaly-detector \
- Eula=accept \
- Billing={ENDPOINT_URI} ApiKey={API_KEY} \
- Logging:Console:LogLevel:Default=Information
- ```
-
-## Next steps
-
-* [Deploy an Anomaly Detector container to Azure Container Instances](how-to/deploy-anomaly-detection-on-container-instances.md)
-* [Learn more about Anomaly Detector API service](https://go.microsoft.com/fwlink/?linkid=2080698&clcid=0x409)
cognitive-services Anomaly Detector Container Howto https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Anomaly-Detector/anomaly-detector-container-howto.md
- Title: Install and run Docker containers for the Anomaly Detector API
-description: Use the Anomaly Detector API's algorithms to find anomalies in your data, on-premises using a Docker container.
- Previously updated : 01/27/2023
-keywords: on-premises, Docker, container, streaming, algorithms
--
-# Install and run Docker containers for the Anomaly Detector API
--
-Containers enable you to use the Anomaly Detector API in your own environment. Containers are great for specific security and data governance requirements. In this article you'll learn how to download, install, and run an Anomaly Detector container.
-
-Anomaly Detector offers a single Docker container for using the API on-premises. Use the container to:
-* Use the Anomaly Detector's algorithms on your data
-* Monitor streaming data, and detect anomalies as they occur in real-time.
-* Detect anomalies throughout your data set as a batch.
-* Detect trend change points in your data set as a batch.
-* Adjust the anomaly detection algorithm's sensitivity to better fit your data.
-
-For detailed information about the API, please see:
-* [Learn more about Anomaly Detector API service](https://go.microsoft.com/fwlink/?linkid=2080698&clcid=0x409)
-
-If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/cognitive-services/) before you begin.
-
-## Prerequisites
-
-You must meet the following prerequisites before using Anomaly Detector containers:
-
-|Required|Purpose|
-|--|--|
-|Docker Engine| You need the Docker Engine installed on a [host computer](#the-host-computer). Docker provides packages that configure the Docker environment on [macOS](https://docs.docker.com/docker-for-mac/), [Windows](https://docs.docker.com/docker-for-windows/), and [Linux](https://docs.docker.com/engine/installation/#supported-platforms). For a primer on Docker and container basics, see the [Docker overview](https://docs.docker.com/engine/docker-overview/).<br><br> Docker must be configured to allow the containers to connect with and send billing data to Azure. <br><br> **On Windows**, Docker must also be configured to support Linux containers.<br><br>|
-|Familiarity with Docker | You should have a basic understanding of Docker concepts, like registries, repositories, containers, and container images, as well as knowledge of basic `docker` commands.|
-|Anomaly Detector resource |In order to use these containers, you must have:<br><br>An Azure _Anomaly Detector_ resource to get the associated API key and endpoint URI. Both values are available on the Azure portal's **Anomaly Detector** Overview and Keys pages and are required to start the container.<br><br>**{API_KEY}**: One of the two available resource keys on the **Keys** page<br><br>**{ENDPOINT_URI}**: The endpoint as provided on the **Overview** page|
--
-## The host computer
--
-<!--* [Azure IoT Edge](../../iot-edge/index.yml). For instructions of deploying Anomaly Detector module in IoT Edge, see [How to deploy Anomaly Detector module in IoT Edge](how-to-deploy-anomaly-detector-module-in-iot-edge.md).-->
-
-### Container requirements and recommendations
-
-The following table describes the minimum and recommended CPU cores and memory to allocate for Anomaly Detector container.
-
-| QPS (Queries per second) | Minimum | Recommended |
-|--|--|--|
-| 10 QPS | 4 core, 1-GB memory | 8 core, 2-GB memory |
-| 20 QPS | 8 core, 2-GB memory | 16 core, 4-GB memory |
-
-Each core must be at least 2.6 gigahertz (GHz) or faster.
-
-Core and memory correspond to the `--cpus` and `--memory` settings, which are used as part of the `docker run` command.
-
-## Get the container image with `docker pull`
-
-The Anomaly Detector container image can be found on the `mcr.microsoft.com` container registry syndicate. It resides within the `azure-cognitive-services/decision` repository and is named `anomaly-detector`. The fully qualified container image name is `mcr.microsoft.com/azure-cognitive-services/decision/anomaly-detector`.
-
-To use the latest version of the container, you can use the `latest` tag. You can also find a full list of [image tags on the MCR](https://mcr.microsoft.com/product/azure-cognitive-services/decision/anomaly-detector/tags).
-
-Use the [`docker pull`](https://docs.docker.com/engine/reference/commandline/pull/) command to download a container image.
-
-| Container | Repository |
-|--|--|
-| cognitive-services-anomaly-detector | `mcr.microsoft.com/azure-cognitive-services/decision/anomaly-detector:latest` |
-
-> [!TIP]
-> When using [`docker pull`](https://docs.docker.com/engine/reference/commandline/pull/), pay close attention to the casing of the container registry, repository, container image name and corresponding tag. They are case sensitive.
-
-<!--
-For a full description of available tags, such as `latest` used in the preceding command, see [anomaly-detector](https://go.microsoft.com/fwlink/?linkid=2083827&clcid=0x409) on Docker Hub.
-->
-### Docker pull for the Anomaly Detector container
-
-```Docker
-docker pull mcr.microsoft.com/azure-cognitive-services/decision/anomaly-detector:latest
-```
-
-## How to use the container
-
-Once the container is on the [host computer](#the-host-computer), use the following process to work with the container.
-
-1. [Run the container](#run-the-container-with-docker-run), with the required billing settings. More [examples](anomaly-detector-container-configuration.md#example-docker-run-commands) of the `docker run` command are available.
-1. [Query the container's prediction endpoint](#query-the-containers-prediction-endpoint).
-
-## Run the container with `docker run`
-
-Use the [docker run](https://docs.docker.com/engine/reference/commandline/run/) command to run the container. Refer to [gather required parameters](#gather-required-parameters) for details on how to get the `{ENDPOINT_URI}` and `{API_KEY}` values.
-
-[Examples](anomaly-detector-container-configuration.md#example-docker-run-commands) of the `docker run` command are available.
-
-```bash
-docker run --rm -it -p 5000:5000 --memory 4g --cpus 1 \
-mcr.microsoft.com/azure-cognitive-services/decision/anomaly-detector:latest \
-Eula=accept \
-Billing={ENDPOINT_URI} \
-ApiKey={API_KEY}
-```
-
-This command:
-
-* Runs an Anomaly Detector container from the container image
-* Allocates one CPU core and 4 gigabytes (GB) of memory
-* Exposes TCP port 5000 and allocates a pseudo-TTY for the container
-* Automatically removes the container after it exits. The container image is still available on the host computer.
-
-> [!IMPORTANT]
-> The `Eula`, `Billing`, and `ApiKey` options must be specified to run the container; otherwise, the container won't start. For more information, see [Billing](#billing).
-
-### Running multiple containers on the same host
-
-If you intend to run multiple containers with exposed ports, make sure to run each container with a different port. For example, run the first container on port 5000 and the second container on port 5001.
-
-Replace `<container-registry>` and `<container-name>` with the values for the containers you use. They don't have to be the same container: you can run the Anomaly Detector container and the LUIS container on the host together, or run multiple Anomaly Detector containers.
-
-Run the first container on host port 5000.
-
-```bash
-docker run --rm -it -p 5000:5000 --memory 4g --cpus 1 \
-<container-registry>/microsoft/<container-name> \
-Eula=accept \
-Billing={ENDPOINT_URI} \
-ApiKey={API_KEY}
-```
-
-Run the second container on host port 5001.
--
-```bash
-docker run --rm -it -p 5001:5000 --memory 4g --cpus 1 \
-<container-registry>/microsoft/<container-name> \
-Eula=accept \
-Billing={ENDPOINT_URI} \
-ApiKey={API_KEY}
-```
-
-Each subsequent container should be on a different port.
-
-## Query the container's prediction endpoint
-
-The container provides REST-based query prediction endpoint APIs.
-
-Use the host, http://localhost:5000, for container APIs.
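For example, here's a minimal Python sketch of calling the local container's last-point detection route, assuming the container exposes the same `/anomalydetector/v1.0/timeseries/last/detect` route as the cloud Univariate Anomaly Detector REST API and that the placeholder series below stands in for your own data:

```python
import requests

# Placeholder series: 24 hourly points; the univariate API requires at least 12.
payload = {
    "granularity": "hourly",
    "series": [
        {"timestamp": f"2023-01-01T{h:02d}:00:00Z", "value": 100.0 + h}
        for h in range(24)
    ],
}

# Assumed route: the container is expected to mirror the cloud v1.0 REST path.
url = "http://localhost:5000/anomalydetector/v1.0/timeseries/last/detect"

response = requests.post(url, json=payload, timeout=30)
response.raise_for_status()
print(response.json())  # detection result for the last point, such as isAnomaly and expected value
```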
-
-<!-- ## Validate container is running -->
--
-## Stop the container
--
-## Troubleshooting
-
-If you run the container with an output [mount](anomaly-detector-container-configuration.md#mount-settings) and logging enabled, the container generates log files that are helpful to troubleshoot issues that happen while starting or running the container.
----
-## Billing
-
-The Anomaly Detector containers send billing information to Azure, using an _Anomaly Detector_ resource on your Azure account.
--
-For more information about these options, see [Configure containers](anomaly-detector-container-configuration.md).
-
-## Summary
-
-In this article, you learned concepts and workflow for downloading, installing, and running Anomaly Detector containers. In summary:
-
-* Anomaly Detector provides one Linux container for Docker, encapsulating anomaly detection with batch vs streaming, expected range inference, and sensitivity tuning.
-* Container images are downloaded from the Microsoft Container Registry (MCR).
-* Container images run in Docker.
-* You can use either the REST API or SDK to call operations in Anomaly Detector containers by specifying the host URI of the container.
-* You must specify billing information when instantiating a container.
-
-> [!IMPORTANT]
-> Cognitive Services containers are not licensed to run without being connected to Azure for metering. Customers need to enable the containers to communicate billing information with the metering service at all times. Cognitive Services containers do not send customer data (e.g., the time series data that is being analyzed) to Microsoft.
-
-## Next steps
-
-* Review [Configure containers](anomaly-detector-container-configuration.md) for configuration settings
-* [Deploy an Anomaly Detector container to Azure Container Instances](how-to/deploy-anomaly-detection-on-container-instances.md)
-* [Learn more about Anomaly Detector API service](https://go.microsoft.com/fwlink/?linkid=2080698&clcid=0x409)
cognitive-services Anomaly Detection Best Practices https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Anomaly-Detector/concepts/anomaly-detection-best-practices.md
- Title: Best practices when using the Anomaly Detector univariate API-
-description: Learn about best practices when detecting anomalies with the Anomaly Detector API.
------- Previously updated : 01/22/2021---
-# Best practices for using the Anomaly Detector univariate API
-
-The Anomaly Detector API is a stateless anomaly detection service. The accuracy and performance of its results can be impacted by:
-
-* How your time series data is prepared.
-* The Anomaly Detector API parameters that were used.
-* The number of data points in your API request.
-
-Use this article to learn about best practices for using the API to get the best results for your data.
-
-## When to use batch (entire) or latest (last) point anomaly detection
-
-The Anomaly Detector API's batch detection endpoint lets you detect anomalies throughout your entire time series data. In this detection mode, a single statistical model is created and applied to each point in the data set. If your time series has the following characteristics, we recommend using batch detection to preview your data in one API call.
-
-* A seasonal time series, with occasional anomalies.
-* A flat trend time series, with occasional spikes/dips.
-
-We don't recommend using batch anomaly detection for real-time data monitoring, or using it on time series data that doesn't have the above characteristics.
-
-* Batch detection creates and applies only one model; the detection for each point is done in the context of the whole series. If the time series data trends up and down without seasonality, some points of change (dips and spikes in the data) may be missed by the model. Similarly, some points of change that are less significant than ones later in the data set may not be counted as significant enough to be incorporated into the model.
-
-* Batch detection is slower than detecting the anomaly status of the latest point when doing real-time data monitoring, because of the number of points being analyzed.
-
-For real-time data monitoring, we recommend detecting the anomaly status of your latest data point only. By continuously applying latest point detection, streaming data monitoring can be done more efficiently and accurately.
-
-The example below describes the impact these detection modes can have on performance. The first picture shows the result of continuously detecting the anomaly status of the latest point along with 28 previously seen data points. The red points are anomalies.
-
-![An image showing anomaly detection using the latest point](../media/last.png)
-
-Below is the same data set using batch anomaly detection. The model built for the operation has ignored several anomalies, marked by rectangles.
-
-![An image showing anomaly detection using the batch method](../media/entire.png)
-
-## Data preparation
-
-The Anomaly Detector API accepts time series data formatted into a JSON request object. A time series can be any numerical data recorded over time in sequential order. You can send windows of your time series data to the Anomaly Detector API endpoint to improve the API's performance. The minimum number of data points you can send is 12, and the maximum is 8640 points. [Granularity](/dotnet/api/microsoft.azure.cognitiveservices.anomalydetector.models.granularity) is defined as the rate that your data is sampled at.
-
-Data points sent to the Anomaly Detector API must have a valid Coordinated Universal Time (UTC) timestamp, and a numerical value.
-
-```json
-{
- "granularity": "daily",
- "series": [
- {
- "timestamp": "2018-03-01T00:00:00Z",
- "value": 32858923
- },
- {
- "timestamp": "2018-03-02T00:00:00Z",
- "value": 29615278
-      }
- ]
-}
-```
-
-If your data is sampled at a non-standard time interval, you can specify it by adding the `customInterval` attribute in your request. For example, if your series is sampled every 5 minutes, you can add the following to your JSON request:
-
-```json
-{
- "granularity" : "minutely",
- "customInterval" : 5
-}
-```
-
-### Missing data points
-
-Missing data points are common in evenly distributed time series data sets, especially ones with a fine granularity (a small sampling interval, such as data sampled every few minutes). Missing less than 10% of the expected number of points shouldn't have a negative impact on your detection results. Consider filling gaps in your data based on its characteristics, for example by substituting data points from an earlier period, using linear interpolation, or applying a moving average.
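As an illustration, here's a small pandas sketch of filling gaps before sending a series to the API; the file name, column names, and daily frequency are assumptions:

```python
import pandas as pd

# Load a daily series and expose missing days as NaN rows.
df = pd.read_csv("series.csv", parse_dates=["timestamp"])  # assumed file and columns
df = df.set_index("timestamp").asfreq("D")

# Fill small gaps; pick the strategy that matches your data's characteristics.
df["value"] = df["value"].interpolate(method="linear")  # or .ffill(), or a rolling mean
```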
-
-### Aggregate distributed data
-
-The Anomaly Detector API works best on an evenly distributed time series. If your data is irregularly sampled, aggregate it by a unit of time, such as per minute, hourly, or daily.
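For example, a hedged pandas sketch that aggregates irregularly timed records into evenly spaced hourly buckets (the file and column names are assumptions):

```python
import pandas as pd

raw = pd.read_csv("events.csv", parse_dates=["timestamp"])  # assumed file and columns

# Aggregate irregular records into evenly spaced hourly buckets.
hourly = (
    raw.set_index("timestamp")["value"]
       .resample("1H")
       .mean()          # or .sum() / .count(), depending on the metric
       .reset_index()
)
```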
-
-## Anomaly detection on data with seasonal patterns
-
-If you know that your time series data has a seasonal pattern (one that occurs at regular intervals), you can improve the accuracy and API response time.
-
-Specifying a `period` when you construct your JSON request can reduce anomaly detection latency by up to 50%. The `period` is an integer that specifies roughly how many data points the time series takes to repeat a pattern. For example, a time series with one data point per day would have a `period` of `7`, and a time series with one point per hour (with the same weekly pattern) would have a `period` of `7*24`. If you're unsure of your data's patterns, you don't have to specify this parameter.
-
-For best results, provide four periods' worth of data points, plus one additional point. For example, hourly data with a weekly pattern as described above should provide 673 data points in the request body (`7 * 24 * 4 + 1`).
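As a sketch, a request body for the hourly, weekly-patterned series above could be built as follows, assuming the optional `period` field of the detect request; the values are placeholders, not real measurements:

```python
from datetime import datetime, timedelta, timezone

start = datetime(2023, 1, 2, tzinfo=timezone.utc)  # an arbitrary starting hour

# 7 * 24 * 4 + 1 = 673 hourly points: four weekly periods plus one extra point.
points = [
    {
        "timestamp": (start + timedelta(hours=h)).strftime("%Y-%m-%dT%H:%M:%SZ"),
        "value": 0.0,  # placeholder values only
    }
    for h in range(7 * 24 * 4 + 1)
]

body = {
    "granularity": "hourly",
    "period": 7 * 24,  # one data point per hour, repeating weekly
    "series": points,
}
```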
-
-### Sampling data for real-time monitoring
-
-If your streaming data is sampled at a short interval (for example seconds or minutes), sending the recommended number of data points may exceed the Anomaly Detector API's maximum number allowed (8640 data points). If your data shows a stable seasonal pattern, consider sending a sample of your time series data at a larger time interval, like hours. Sampling your data in this way can also noticeably improve the API response time.
-
-## Next steps
-
-* [What is the Anomaly Detector API?](../overview.md)
-* [Quickstart: Detect anomalies in your time series data using the Anomaly Detector](../quickstarts/client-libraries.md)
cognitive-services Best Practices Multivariate https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Anomaly-Detector/concepts/best-practices-multivariate.md
- Title: Best practices for using the Multivariate Anomaly Detector API-
-description: Best practices for using the Anomaly Detector Multivariate API's to apply anomaly detection to your time series data.
------ Previously updated : 06/07/2022--
-keywords: anomaly detection, machine learning, algorithms
--
-# Best practices for using the Multivariate Anomaly Detector API
-
-This article provides guidance around recommended practices to follow when using the multivariate Anomaly Detector (MVAD) APIs.
-In this tutorial, you'll:
-
-> [!div class="checklist"]
-> * **API usage**: Learn how to use MVAD without errors.
-> * **Data engineering**: Learn how to best prepare your data so that MVAD performs with better accuracy.
-> * **Common pitfalls**: Learn how to avoid common pitfalls that customers encounter.
-> * **FAQ**: Learn answers to frequently asked questions.
-
-## API usage
-
-Follow the instructions in this section to avoid errors while using MVAD. If you still get errors, refer to the [full list of error codes](./troubleshoot.md) for explanations and actions to take.
----
-## Data engineering
-
-Now that you can call the MVAD APIs without errors, what can you do to improve your model's accuracy?
-
-### Data quality
-
-* As the model learns normal patterns from historical data, the training data should represent the **overall normal** state of the system. It's hard for the model to learn these types of patterns if the training data is full of anomalies. Empirically, keeping the abnormal rate at or below **1%** gives good accuracy.
-* In general, the **missing value ratio of training data should be under 20%**. Too much missing data can cause automatically filled values (usually linear or constant values) to be learned as normal patterns. That may result in real (not missing) data points being detected as anomalies. A rough way to check this ratio is sketched below.
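The following pre-flight check is only a sketch; it assumes one CSV per variable with `timestamp` and `value` columns and a known one-minute sampling frequency:

```python
import pandas as pd

# Load one variable's training series; file name, column names, and the
# one-minute frequency are assumptions.
s = (
    pd.read_csv("variable.csv", parse_dates=["timestamp"])
      .set_index("timestamp")["value"]
)

# Compare the observed timestamps against the grid the frequency implies.
expected = pd.date_range(s.index.min(), s.index.max(), freq="1min")
missing_ratio = 1 - s.index.nunique() / len(expected)
print(f"missing ratio: {missing_ratio:.1%}")  # aim to stay under 20%
```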
--
-### Data quantity
-
-* The underlying model of MVAD has millions of parameters. It needs a minimum number of data points to learn an optimal set of parameters. The empirical rule is that you need to provide **5,000 or more data points (timestamps) per variable** to train the model for good accuracy. In general, the more training data, the better the accuracy. However, if you can't accrue that much data, we still encourage you to experiment with less data and see whether the compromised accuracy is still acceptable.
-* Every time you call the inference API, ensure that the source data file contains just enough data points: normally `slidingWindow` plus the number of data points that **really** need inference results. For example, in a streaming case where you want to run inference on **ONE** new timestamp each time, the data file could contain only the leading `slidingWindow` plus **ONE** data point; you could then create another zip file with the same number of data points (`slidingWindow` + 1) shifted **ONE** step to the "right", and submit it for another inference job.
-
-  Anything beyond that, or "before" the leading sliding window, doesn't affect the inference result at all and may only degrade performance. Anything below that may lead to a `NotEnoughInput` error.
--
-### Timestamp round-up
-
-In a group of variables (time series), each variable may be collected from an independent source. The timestamps of different variables may be inconsistent with each other and with the known frequencies. Here's a simple example.
-
-*Variable-1*
-
-| timestamp | value |
-| -- | -- |
-| 12:00:01 | 1.0 |
-| 12:00:35 | 1.5 |
-| 12:01:02 | 0.9 |
-| 12:01:31 | 2.2 |
-| 12:02:08 | 1.3 |
-
-*Variable-2*
-
-| timestamp | value |
-| -- | -- |
-| 12:00:03 | 2.2 |
-| 12:00:37 | 2.6 |
-| 12:01:09 | 1.4 |
-| 12:01:34 | 1.7 |
-| 12:02:04 | 2.0 |
-
-We have two variables collected from two sensors, each of which sends one data point every 30 seconds. However, the sensors aren't sending data points at a strictly even frequency; they sometimes report earlier and sometimes later. Because MVAD takes into consideration correlations between different variables, timestamps must be properly aligned so that the metrics can correctly reflect the condition of the system. In the above example, the timestamps of variable 1 and variable 2 must be properly 'rounded' to their frequency before alignment.
-
-Let's see what happens if they're not pre-processed. If we set `alignMode` to be `Outer` (which means union of two sets), the merged table is:
-
-| timestamp | Variable-1 | Variable-2 |
-| -- | -- | -- |
-| 12:00:01 | 1.0 | `nan` |
-| 12:00:03 | `nan` | 2.2 |
-| 12:00:35 | 1.5 | `nan` |
-| 12:00:37 | `nan` | 2.6 |
-| 12:01:02 | 0.9 | `nan` |
-| 12:01:09 | `nan` | 1.4 |
-| 12:01:31 | 2.2 | `nan` |
-| 12:01:34 | `nan` | 1.7 |
-| 12:02:04 | `nan` | 2.0 |
-| 12:02:08 | 1.3 | `nan` |
-
-`nan` indicates missing values. Obviously, the merged table isn't what you might have expected. Variable 1 and variable 2 interleave, and the MVAD model can't extract information about correlations between them. If we set `alignMode` to `Inner`, the merged table is empty as there's no common timestamp in variable 1 and variable 2.
-
-Therefore, the timestamps of variable 1 and variable 2 should be pre-processed (rounded to the nearest 30-second timestamps) and the new time series are:
-
-*Variable-1*
-
-| timestamp | value |
-| -- | -- |
-| 12:00:00 | 1.0 |
-| 12:00:30 | 1.5 |
-| 12:01:00 | 0.9 |
-| 12:01:30 | 2.2 |
-| 12:02:00 | 1.3 |
-
-*Variable-2*
-
-| timestamp | value |
-| -- | -- |
-| 12:00:00 | 2.2 |
-| 12:00:30 | 2.6 |
-| 12:01:00 | 1.4 |
-| 12:01:30 | 1.7 |
-| 12:02:00 | 2.0 |
-
-Now the merged table is more reasonable.
-
-| timestamp | Variable-1 | Variable-2 |
-| -- | -- | -- |
-| 12:00:00 | 1.0 | 2.2 |
-| 12:00:30 | 1.5 | 2.6 |
-| 12:01:00 | 0.9 | 1.4 |
-| 12:01:30 | 2.2 | 1.7 |
-| 12:02:00 | 1.3 | 2.0 |
-
-Values of different variables at close timestamps are well aligned, and the MVAD model can now extract correlation information.
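A minimal pandas sketch of this pre-processing step, assuming each variable is loaded into a DataFrame with a `timestamp` column, might look like this:

```python
import pandas as pd

# Variable-1 from the example above, before alignment.
v1 = pd.DataFrame(
    {
        "timestamp": pd.to_datetime(
            ["12:00:01", "12:00:35", "12:01:02", "12:01:31", "12:02:08"]
        ),
        "value": [1.0, 1.5, 0.9, 2.2, 1.3],
    }
)

# Round each timestamp to the 30-second grid before merging variables.
v1["timestamp"] = v1["timestamp"].dt.round("30s")
```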
-
-### Limitations
-
-There are some limitations in both the training and inference APIs. Be aware of these limitations to avoid errors.
-
-#### General Limitations
-* Sliding window: 28-2880 timestamps, default is 300. For periodic data, set the length of 2-4 cycles as the sliding window.
-* Variable numbers: For training and batch inference, at most 301 variables.
-#### Training Limitations
-* Timestamps: At most 1,000,000. Too few timestamps may decrease model quality. We recommend having more than 5,000 timestamps.
-* Granularity: The minimum granularity is `per_second`.
-
-#### Batch inference limitations
-* Timestamps: At most 20000, at least 1 sliding window length.
-#### Streaming inference limitations
-* Timestamps: At most 2880, at least 1 sliding window length.
-* Detecting timestamps: From 1 to 10.
-
-## Model quality
-
-### How to deal with false positive and false negative in real scenarios?
-We provide a severity field that indicates the significance of each anomaly. False positives may be filtered out by setting a threshold on the severity. Sometimes too many false positives may appear when there are pattern shifts in the inference data. In such cases a model may need to be retrained on new data. If the training data contains too many anomalies, there could be false negatives in the detection results. This is because the model learns patterns from the training data and anomalies may bring bias to the model. Thus, proper data cleaning may help reduce false negatives.
-
-### How to estimate which model is best to use according to training loss and validation loss?
-Generally speaking, it's hard to decide which model is the best without a labeled dataset. However, we can leverage the training and validation losses to have a rough estimation and discard those bad models. First, we need to observe whether training losses converge. Divergent losses often indicate poor quality of the model. Second, loss values may help identify whether underfitting or overfitting occurs. Models that are underfitting or overfitting may not have desired performance. Third, although the definition of the loss function doesn't reflect the detection performance directly, loss values may be an auxiliary tool to estimate model quality. Low loss value is a necessary condition for a good model, thus we may discard models with high loss values.
--
-## Common pitfalls
-
-Apart from the [error code table](./troubleshoot.md), we've learned from customers about some common pitfalls when using the MVAD APIs. The following table will help you avoid these issues.
-
-| Pitfall | Consequence | Explanation and solution |
-| -- | -- | -- |
-| Timestamps in training data and/or inference data weren't rounded up to align with the respective data frequency of each variable. | The timestamps of the inference results aren't as expected: either too few timestamps or too many timestamps. | Please refer to [Timestamp round-up](#timestamp-round-up). |
-| Too many anomalous data points in the training data | Model accuracy is impacted negatively because it treats anomalous data points as normal patterns during training. | Empirically, keeping the abnormal rate at or below **1%** helps. |
-| Too little training data | Model accuracy is compromised. | Empirically, training an MVAD model requires 15,000 or more data points (timestamps) per variable to keep a good accuracy.|
-| Taking all data points with `isAnomaly`=`true` as anomalies | Too many false positives | You should use both `isAnomaly` and `severity` (or `score`) to sift out anomalies that aren't severe and (optionally) use grouping to check the duration of the anomalies to suppress random noises. Please refer to the [FAQ](#faq) section below for the difference between `severity` and `score`. |
-| Sub-folders are zipped into the data file for training or inference. | The csv data files inside sub-folders are ignored during training and/or inference. | No sub-folders are allowed in the zip file. Please refer to [Folder structure](#folder-structure) for details. |
-| Too much data in the inference data file: for example, compressing all historical data in the inference data zip file | You may not see any errors but you'll experience degraded performance when you try to upload the zip file to Azure Blob as well as when you try to run inference. | Please refer to [Data quantity](#data-quantity) for details. |
-| Creating Anomaly Detector resources on Azure regions that don't support MVAD yet and calling MVAD APIs | You'll get a "resource not found" error while calling the MVAD APIs. | During preview stage, MVAD is available on limited regions only. Please bookmark [What's new in Anomaly Detector](../whats-new.md) to keep up to date with MVAD region roll-outs. You could also file a GitHub issue or contact us at AnomalyDetector@microsoft.com to request for specific regions. |
-
-## FAQ
-
-### How does MVAD sliding window work?
-
-Let's use two examples to learn how MVAD's sliding window works. Suppose you have set `slidingWindow` = 1,440, and your input data is at one-minute granularity.
-
-* **Streaming scenario**: You want to predict whether the ONE data point at "2021-01-02T00:00:00Z" is anomalous. Your `startTime` and `endTime` will be the same value ("2021-01-02T00:00:00Z"). Your inference data source, however, must contain at least 1,440 + 1 timestamps, because MVAD takes the leading data before the target data point ("2021-01-02T00:00:00Z") to decide whether the target is an anomaly. The length of the needed leading data is `slidingWindow`, or 1,440 in this case. Because 1,440 = 60 * 24, your input data must start no later than "2021-01-01T00:00:00Z".
-
-* **Batch scenario**: You have multiple target data points to predict. Your `endTime` will be greater than your `startTime`. Inference in such scenarios is performed in a "moving window" manner. For example, MVAD uses data from `2021-01-01T00:00:00Z` to `2021-01-01T23:59:00Z` (inclusive) to determine whether data at `2021-01-02T00:00:00Z` is anomalous. Then it moves forward and uses data from `2021-01-01T00:01:00Z` to `2021-01-02T00:00:00Z` (inclusive) to determine whether data at `2021-01-02T00:01:00Z` is anomalous. It moves on in the same manner (taking 1,440 data points to compare) until the last timestamp specified by `endTime` (or the actual latest timestamp). Therefore, your inference data source must contain data starting from `startTime` - `slidingWindow`, and ideally contain a total of `slidingWindow` + (`endTime` - `startTime`) data points.
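The following small sketch only restates the arithmetic above for the streaming case (`slidingWindow` = 1,440 one-minute points); the concrete dates mirror the example and are otherwise arbitrary:

```python
from datetime import datetime, timedelta, timezone

sliding_window = 1440                                     # one-minute granularity, as above
start_time = datetime(2021, 1, 2, tzinfo=timezone.utc)
end_time = datetime(2021, 1, 2, tzinfo=timezone.utc)      # same as startTime when streaming

# The source file must reach back at least slidingWindow points before startTime ...
earliest_needed = start_time - timedelta(minutes=sliding_window)

# ... and should contain roughly slidingWindow + (endTime - startTime) + 1 points.
total_points = sliding_window + int((end_time - start_time).total_seconds() // 60) + 1

print(earliest_needed.isoformat(), total_points)  # 2021-01-01T00:00:00+00:00 1441
```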
-
-### What's the difference between `severity` and `score`?
-
-Normally, we recommend using `severity` as the filter to sift out 'anomalies' that aren't important to your business. Depending on your scenario and data pattern, those anomalies that are less important often have relatively lower `severity` values or standalone (discontinuous) high `severity` values like random spikes.
-
-If you need more sophisticated rules than thresholds against `severity`, or against the duration of continuously high `severity` values, you may want to use `score` to build more powerful filters. Understanding how MVAD uses `score` to determine anomalies may help:
-
-We consider whether a data point is anomalous from both a global and a local perspective. If `score` at a timestamp is higher than a certain threshold, the timestamp is marked as an anomaly. If `score` is lower than the threshold but is relatively higher within its segment, it's also marked as an anomaly.
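As an illustration, here's a hedged post-processing sketch that keeps only anomalies whose `severity` clears a threshold; the threshold value and the flattened per-timestamp result shape (top-level `isAnomaly` and `severity` keys) are assumptions:

```python
SEVERITY_THRESHOLD = 0.3  # assumed starting point; tune for your scenario


def significant_anomalies(results, threshold=SEVERITY_THRESHOLD):
    """Keep anomalies whose severity clears the threshold.

    `results` is assumed to be an iterable of per-timestamp dicts that have been
    flattened to top-level `isAnomaly` and `severity` keys.
    """
    return [
        r for r in results
        if r.get("isAnomaly") and r.get("severity", 0.0) >= threshold
    ]
```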
--
-## Next steps
-
-* [Quickstarts: Use the Anomaly Detector multivariate client library](../quickstarts/client-libraries-multivariate.md).
-* [Learn about the underlying algorithms that power Anomaly Detector Multivariate](https://arxiv.org/abs/2009.02040)
cognitive-services Multivariate Architecture https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Anomaly-Detector/concepts/multivariate-architecture.md
- Title: Predictive maintenance architecture for using the Anomaly Detector Multivariate API-
-description: Reference architecture for using the Anomaly Detector Multivariate APIs to apply anomaly detection to your time series data for predictive maintenance.
------ Previously updated : 12/15/2022--
-keywords: anomaly detection, machine learning, algorithms
--
-# Predictive maintenance solution with Multivariate Anomaly Detector
-
-Many different industries need predictive maintenance solutions to reduce risks and gain actionable insights through processing data from their equipment. Predictive maintenance evaluates the condition of equipment by performing online monitoring. The goal is to perform maintenance before the equipment degrades or breaks down.
-
-Monitoring the health status of equipment can be challenging, as each component inside the equipment can generate dozens of signals, for example, vibration, orientation, and rotation. The task is even more complex when those signals have an implicit relationship and need to be monitored and analyzed together. Defining different rules for those signals and correlating them with each other manually can be costly. Anomaly Detector's multivariate feature allows:
-
-* Multiple correlated signals to be monitored together, and the inter-correlations between them are accounted for in the model.
-* In each captured anomaly, the contribution rank of different signals can help with anomaly explanation, and incident root cause analysis.
-* The multivariate anomaly detection model is built in an unsupervised manner. Models can be trained specifically for different types of equipment.
-
-Here, we provide a reference architecture for a predictive maintenance solution based on Multivariate Anomaly Detector.
-
-## Reference architecture
-
-[ ![Architectural diagram that starts at sensor data being collected at the edge with a piece of industrial equipment and tracks the processing/analysis pipeline to an end output of an incident alert being generated after Anomaly Detector runs.](../media/multivariate-architecture/multivariate-architecture.png) ](../media/multivariate-architecture/multivariate-architecture.png#lightbox)
-
-In the above architecture, streaming events coming from sensor data will be stored in Azure Data Lake and then processed by a data transforming module to be converted into a time-series format. Meanwhile, the streaming event will trigger real-time detection with the trained model. In general, there will be a module to manage the multivariate model life cycle, like *Bridge Service* in this architecture.
-
-**Model training**: Before using Multivariate Anomaly Detector to detect anomalies for a component or piece of equipment, we need to train a model on the specific signals (time series) generated by that entity. The *Bridge Service* fetches historical data, submits a training job to Anomaly Detector, and then keeps the Model ID in the *Model Meta* storage.
-
-**Model validation**: The training time of a model can vary based on the training data volume. The *Bridge Service* can query the model status and diagnostic info on a regular basis. Validating model quality may be necessary before putting it online. If there are labels in the scenario, those labels can be used to verify the model quality. Otherwise, the diagnostic info can be used to evaluate the model quality, and you can also perform detection on historical data with the trained model and evaluate the result to backtest the validity of the model.
-
-**Model inference**: Online detection will be performed with the valid model, and the result ID can be stored in the *Inference table*. Both the training process and the inference process are done in an asynchronous manner. In general, a detection task can be completed within seconds. Signals used for detection should be the same ones that have been used for training. For example, if we use vibration, orientation, and rotation for training, in detection the three signals should be included as an input.
-
-**Incident alerting**: The detection results can be queried with result IDs. Each result contains the severity of each anomaly and a contribution rank. Contribution rank can be used to understand why the anomaly happened and which signal caused the incident. Different thresholds can be set on the severity to generate alerts and notifications to be sent to field engineers to conduct maintenance work.
-
-## Next steps
--- [Quickstarts](../quickstarts/client-libraries-multivariate.md).-- [Best Practices](../concepts/best-practices-multivariate.md): This article is about recommended patterns to use with the multivariate APIs.
cognitive-services Troubleshoot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Anomaly-Detector/concepts/troubleshoot.md
- Title: Troubleshoot the Anomaly Detector multivariate API-
-description: Learn how to remediate common error codes when you use the Azure Anomaly Detector multivariate API.
------ Previously updated : 04/01/2021-
-keywords: anomaly detection, machine learning, algorithms
--
-# Troubleshoot the multivariate API
-
-This article provides guidance on how to troubleshoot and remediate common error messages when you use the Azure Cognitive Services Anomaly Detector multivariate API.
-
-## Multivariate error codes
-
-The following tables list multivariate error codes.
-
-### Common errors
-
-| Error code | HTTP error code | Error message | Comment |
-| -- | -- | -- | -- |
-| `SubscriptionNotInHeaders` | 400 | apim-subscription-id is not found in headers. | Add your APIM subscription ID in the header. An example header is `{"apim-subscription-id": <Your Subscription ID>}`. |
-| `FileNotExist` | 400 | File \<source> does not exist. | Check the validity of your blob shared access signature. Make sure that it hasn't expired. |
-| `InvalidBlobURL` | 400 | | Your blob shared access signature isn't a valid shared access signature. |
-| `StorageWriteError` | 403 | | This error is possibly caused by permission issues. Our service isn't allowed to write the data to the blob encrypted by a customer-managed key. Either remove the customer-managed key or grant access to our service again. For more information, see [Configure customer-managed keys with Azure Key Vault for Cognitive Services](../../encryption/cognitive-services-encryption-keys-portal.md). |
-| `StorageReadError` | 403 | | Same as `StorageWriteError`. |
-| `UnexpectedError` | 500 | | Contact us with detailed error information. You could take the support options from [Azure Cognitive Services support and help options](../../cognitive-services-support-options.md?context=%2fazure%2fcognitive-services%2fanomaly-detector%2fcontext%2fcontext) or email us at [AnomalyDetector@microsoft.com](mailto:AnomalyDetector@microsoft.com). |
-
-### Train a multivariate anomaly detection model
-
-| Error code | HTTP error code | Error message | Comment |
-| -- | -- | -- | -- |
-| `TooManyModels` | 400 | This subscription has reached the maximum number of models. | Each APIM subscription ID is allowed to have 300 active models. Delete unused models before you train a new model. |
-| `TooManyRunningModels` | 400 | This subscription has reached the maximum number of running models. | Each APIM subscription ID is allowed to train five models concurrently. Train a new model after previous models have completed their training process. |
-| `InvalidJsonFormat` | 400 | Invalid JSON format. | Training request isn't a valid JSON. |
-| `InvalidAlignMode` | 400 | The `'alignMode'` field must be one of the following: `'Inner'` or `'Outer'` . | Check the value of `'alignMode'`, which should be either `'Inner'` or `'Outer'` (case sensitive). |
-| `InvalidFillNAMethod` | 400 | The `'fillNAMethod'` field must be one of the following: `'Previous'`, `'Subsequent'`, `'Linear'`, `'Zero'`, `'Fixed'`, `'NotFill'`. It cannot be `'NotFill'` when `'alignMode'` is `'Outer'`. | Check the value of `'fillNAMethod'`. For more information, see [Best practices for using the Anomaly Detector multivariate API](./best-practices-multivariate.md#optional-parameters-for-training-api). |
-| `RequiredPaddingValue` | 400 | The `'paddingValue'` field is required in the request when `'fillNAMethod'` is `'Fixed'`. | You need to provide a valid padding value when `'fillNAMethod'` is `'Fixed'`. For more information, see [Best practices for using the Anomaly Detector multivariate API](./best-practices-multivariate.md#optional-parameters-for-training-api). |
-| `RequiredSource` | 400 | The `'source'` field is required in the request. | Your training request hasn't specified a value for the `'source'` field. An example is `{"source": <Your Blob SAS>}`. |
-| `RequiredStartTime` | 400 | The `'startTime'` field is required in the request. | Your training request hasn't specified a value for the `'startTime'` field. An example is `{"startTime": "2021-01-01T00:00:00Z"}`. |
-| `InvalidTimestampFormat` | 400 | Invalid timestamp format. The `<timestamp>` format is not a valid format. | The format of timestamp in the request body isn't correct. Try `import pandas as pd; pd.to_datetime(timestamp)` to verify. |
-| `RequiredEndTime` | 400 | The `'endTime'` field is required in the request. | Your training request hasn't specified a value for the `'endTime'` field. An example is `{"endTime": "2021-01-01T00:00:00Z"}`. |
-| `InvalidSlidingWindow` | 400 | The `'slidingWindow'` field must be an integer between 28 and 2880. | The `'slidingWindow'` field must be an integer between 28 and 2880 (inclusive). |
-
-### Get a multivariate model with a model ID
-
-| Error code | HTTP error code | Error message | Comment |
-| -- | -- | -- | -- |
-| `ModelNotExist` | 404 | The model does not exist. | The model with corresponding model ID doesn't exist. Check the model ID in the request URL. |
-
-### List multivariate models
-
-| Error code | HTTP error code | Error message | Comment |
-| -- | -- | -- | -- |
-|`InvalidRequestParameterError`| 400 | Invalid values for $skip or $top. | Check whether the values for the two parameters are numerical. The values $skip and $top are used to list the models with pagination. Because the API only returns the 10 most recently updated models, you could use $skip and $top to get models updated earlier. |
-
-### Anomaly detection with a trained model
-
-| Error code | HTTP error code | Error message | Comment |
-| -- | -- | -- | -- |
-| `ModelNotExist` | 404 | The model does not exist. | The model used for inference doesn't exist. Check the model ID in the request URL. |
-| `ModelFailed` | 400 | Model failed to be trained. | The model isn't successfully trained. Get detailed information by getting the model with model ID. |
-| `ModelNotReady` | 400 | The model is not ready yet. | The model isn't ready yet. Wait for a while until the training process completes. |
-| `InvalidFileSize` | 413 | File \<file> exceeds the file size limit (\<size limit> bytes). | The size of inference data exceeds the upper limit, which is currently 2 GB. Use less data for inference. |
-
-### Get detection results
-
-| Error code | HTTP error code | Error message | Comment |
-| -- | -- | -- | -- |
-| `ResultNotExist` | 404 | The result does not exist. | The result per request doesn't exist. Either inference hasn't completed or the result has expired. The expiration time is seven days. |
-
-### Data processing errors
-
-The following error codes don't have associated HTTP error codes.
-
-| Error code | Error message | Comment |
-| -- | -- | -- |
-| `NoVariablesFound` | No variables found. Check that your files are organized as per instruction. | No CSV files could be found from the data source. This error is typically caused by incorrect organization of files. See the sample data for the desired structure. |
-| `DuplicatedVariables` | There are multiple variables with the same name. | There are duplicated variable names. |
-| `FileNotExist` | File \<filename> does not exist. | This error usually happens during inference. The variable has appeared in the training data but is missing in the inference data. |
-| `RedundantFile` | File \<filename> is redundant. | This error usually happens during inference. The variable wasn't in the training data but appeared in the inference data. |
-| `FileSizeTooLarge` | The size of file \<filename> is too large. | The size of the single CSV file \<filename> exceeds the limit. Train with less data. |
-| `ReadingFileError` | Errors occurred when reading \<filename>. \<error messages> | Failed to read the file \<filename>. For more information, see the \<error messages> or verify with `pd.read_csv(filename)` in a local environment. |
-| `FileColumnsNotExist` | Columns timestamp or value in file \<filename> do not exist. | Each CSV file must have two columns with the names **timestamp** and **value** (case sensitive). |
-| `VariableParseError` | Variable \<variable> parse \<error message> error. | Can't process the \<variable> because of runtime errors. For more information, see the \<error message> or contact us with the \<error message>. |
-| `MergeDataFailed` | Failed to merge data. Check data format. | Data merge failed. This error is possibly because of the wrong data format or the incorrect organization of files. See the sample data for the current file structure. |
-| `ColumnNotFound` | Column \<column> cannot be found in the merged data. | A column is missing after merge. Verify the data. |
-| `NumColumnsMismatch` | Number of columns of merged data does not match the number of variables. | Verify the data. |
-| `TooManyData` | Too many data points. Maximum number is 1000000 per variable. | Reduce the size of input data. |
-| `NoData` | There is no effective data. | There's no data to train/inference after processing. Check the start time and end time. |
-| `DataExceedsLimit` | The length of data whose timestamp is between `startTime` and `endTime` exceeds limit(\<limit>). | The size of data after processing exceeds the limit. Currently, there's no limit on processed data. |
-| `NotEnoughInput` | Not enough data. The length of data is \<data length>, but the minimum length should be larger than sliding window, which is \<sliding window size>. | The minimum number of data points for inference is the size of the sliding window. Try to provide more data for inference. |
cognitive-services Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Anomaly-Detector/overview.md
- Title: What is Anomaly Detector?-
-description: Use the Anomaly Detector API's algorithms to apply anomaly detection on your time series data.
------ Previously updated : 10/27/2022-
-keywords: anomaly detection, machine learning, algorithms
---
-# What is Anomaly Detector?
-
-Anomaly Detector is an AI service with a set of APIs that enables you to monitor and detect anomalies in your time series data with little machine learning (ML) knowledge, through either batch validation or real-time inference.
-
-This documentation contains the following types of articles:
-* [**Quickstarts**](./Quickstarts/client-libraries.md) are step-by-step instructions that let you make calls to the service and get results in a short period of time.
-* [**Interactive demo**](https://aka.ms/adDemo) can help you understand how Anomaly Detector works through simple operations.
-* [**How-to guides**](./how-to/identify-anomalies.md) contain instructions for using the service in more specific or customized ways.
-* [**Tutorials**](./tutorials/batch-anomaly-detection-powerbi.md) are longer guides that show you how to use this service as a component in broader business solutions.
-* [**Code samples**](https://github.com/Azure-Samples/AnomalyDetector/tree/master/ipython-notebook) demonstrate how to use Anomaly Detector.
-* [**Conceptual articles**](./concepts/anomaly-detection-best-practices.md) provide in-depth explanations of the service's functionality and features.
-
-## Anomaly Detector capabilities
-
-With Anomaly Detector, you can either detect anomalies in one variable using Univariate Anomaly Detector, or detect anomalies in multiple variables with Multivariate Anomaly Detector.
-
-|Feature |Description |
-|--|--|
-|Univariate Anomaly Detection | Detect anomalies in one variable, like revenue, cost, etc. The model is selected automatically based on your data pattern. |
-|Multivariate Anomaly Detection| Detect anomalies in multiple variables with correlations, which are usually gathered from equipment or other complex systems. The underlying model used is a Graph Attention Network.|
-
-### Univariate Anomaly Detection
-
-The Univariate Anomaly Detector API enables you to monitor and detect abnormalities in your time series data without having to know machine learning. The algorithms adapt by automatically identifying and applying the best-fitting models to your data, regardless of industry, scenario, or data volume. Using your time series data, the API determines boundaries for anomaly detection, expected values, and which data points are anomalies.
-
-![Line graph of detect pattern changes in service requests.](./media/anomaly_detection2.png)
-
-Using the Anomaly Detector doesn't require any prior experience in machine learning, and the REST API enables you to easily integrate the service into your applications and processes.
-
-With the Univariate Anomaly Detector, you can automatically detect anomalies throughout your time series data, or as they occur in real-time.
-
-|Feature |Description |
-|--|--|
-| Streaming detection| Detect anomalies in your streaming data by using previously seen data points to determine if your latest one is an anomaly. This operation generates a model using the data points you send, and determines if the target point is an anomaly. By calling the API with each new data point you generate, you can monitor your data as it's created. |
-| Batch detection | Use your time series to detect any anomalies that might exist throughout your data. This operation generates a model using your entire time series data, with each point analyzed with the same model. |
-| Change points detection | Use your time series to detect any trend change points that exist in your data. This operation generates a model using your entire time series data, with each point analyzed with the same model. |
-
-### Multivariate Anomaly Detection
-
-The **Multivariate Anomaly Detection** APIs further enable developers by easily integrating advanced AI for detecting anomalies from groups of metrics, without the need for machine learning knowledge or labeled data. Dependencies and inter-correlations between up to 300 different signals are now automatically counted as key factors. This new capability helps you to proactively protect your complex systems such as software applications, servers, factory machines, spacecraft, or even your business, from failures.
-
-![Line graph for multiple variables including: rotation, optical filter, pressure, bearing with anomalies highlighted in orange.](./media/multivariate-graph.png)
-
-Imagine 20 sensors from an auto engine generating 20 different signals like rotation, fuel pressure, bearing, etc. The readings of those signals individually may not tell you much about system level issues, but together they can represent the health of the engine. When the interaction of those signals deviates outside the usual range, the multivariate anomaly detection feature can sense the anomaly like a seasoned expert. The underlying AI models are trained and customized using your data such that it understands the unique needs of your business. With the new APIs in Anomaly Detector, developers can now easily integrate the multivariate time series anomaly detection capabilities into predictive maintenance solutions, AIOps monitoring solutions for complex enterprise software, or business intelligence tools.
-
-## Join the Anomaly Detector community
-
-Join the [Anomaly Detector Advisors group on Microsoft Teams](https://aka.ms/AdAdvisorsJoin) for better support and any updates!
-
-## Algorithms
-
-* Blogs and papers:
- * [Introducing Azure Anomaly Detector API](https://techcommunity.microsoft.com/t5/AI-Customer-Engineering-Team/Introducing-Azure-Anomaly-Detector-API/ba-p/490162)
- * [Overview of SR-CNN algorithm in Azure Anomaly Detector](https://techcommunity.microsoft.com/t5/AI-Customer-Engineering-Team/Overview-of-SR-CNN-algorithm-in-Azure-Anomaly-Detector/ba-p/982798)
- * [Introducing Multivariate Anomaly Detection](https://techcommunity.microsoft.com/t5/azure-ai/introducing-multivariate-anomaly-detection/ba-p/2260679)
- * [Multivariate time series Anomaly Detection via Graph Attention Network](https://arxiv.org/abs/2009.02040)
- * [Time-Series Anomaly Detection Service at Microsoft](https://arxiv.org/abs/1906.03821) (accepted by KDD 2019)
-
-* Videos:
- > [!VIDEO https://www.youtube.com/embed/ERTaAnwCarM]
-
- > [!VIDEO https://www.youtube.com/embed/FwuI02edclQ]
-
-## Next steps
-
-* [Quickstart: Detect anomalies in your time series data using the Univariate Anomaly Detection](quickstarts/client-libraries.md)
-* [Quickstart: Detect anomalies in your time series data using the Multivariate Anomaly Detection](quickstarts/client-libraries-multivariate.md)
-* The Anomaly Detector [REST API reference](https://aka.ms/ad-api)
cognitive-services Client Libraries Multivariate https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Anomaly-Detector/quickstarts/client-libraries-multivariate.md
- Title: 'Quickstart: Anomaly detection using the Anomaly Detector client library for multivariate anomaly detection'-
-description: The Anomaly Detector multivariate offers client libraries to detect abnormalities in your data series either as a batch or on streaming data.
---
-zone_pivot_groups: anomaly-detector-quickstart-multivariate
--- Previously updated : 10/27/2022-
-keywords: anomaly detection, algorithms
---
-# Quickstart: Use the Multivariate Anomaly Detector client library
------------
cognitive-services Client Libraries https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Anomaly-Detector/quickstarts/client-libraries.md
- Title: 'Quickstart: Anomaly detection using the Anomaly Detector client library'-
-description: The Anomaly Detector API offers client libraries to detect abnormalities in your data series either as a batch or on streaming data.
---
-zone_pivot_groups: anomaly-detector-quickstart
--- Previously updated : 10/27/2022-
-keywords: anomaly detection, algorithms
-recommendations: false
---
-# Quickstart: Use the Univariate Anomaly Detector client library
------------
cognitive-services Regions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Anomaly-Detector/regions.md
- Title: Regions - Anomaly Detector service-
-description: A list of available regions and endpoints for the Anomaly Detector service, including Univariate Anomaly Detection and Multivariate Anomaly Detection.
------ Previously updated : 11/1/2022----
-# Anomaly Detector service supported regions
-
-The Anomaly Detector service provides anomaly detection technology on your time series data. The service is available in multiple regions with unique endpoints for the Anomaly Detector SDK and REST APIs.
-
-Keep in mind the following points:
-
-* If your application uses one of the Anomaly Detector service REST APIs, the region is part of the endpoint URI you use when making requests.
-* Keys created for a region are valid only in that region. If you attempt to use them with other regions, you will get authentication errors.
-
-> [!NOTE]
-> The Anomaly Detector service doesn't store or process customer data outside the region the customer deploys the service instance in.
-
-## Univariate Anomaly Detection
-
-The following regions are supported for Univariate Anomaly Detection. The geographies are listed in alphabetical order.
-
-| Geography | Region | Region identifier |
-| -- | -- | -- |
-| Africa | South Africa North | `southafricanorth` |
-| Asia Pacific | East Asia | `eastasia` |
-| Asia Pacific | Southeast Asia | `southeastasia` |
-| Asia Pacific | Australia East | `australiaeast` |
-| Asia Pacific | Central India | `centralindia` |
-| Asia Pacific | Japan East | `japaneast` |
-| Asia Pacific | Japan West | `japanwest` |
-| Asia Pacific | Jio India West | `jioindiawest` |
-| Asia Pacific | Korea Central | `koreacentral` |
-| Canada | Canada Central | `canadacentral` |
-| China | China East 2 | `chinaeast2` |
-| China | China North 2 | `chinanorth2` |
-| Europe | North Europe | `northeurope` |
-| Europe | West Europe | `westeurope` |
-| Europe | France Central | `francecentral` |
-| Europe | Germany West Central | `germanywestcentral` |
-| Europe | Norway East | `norwayeast` |
-| Europe | Switzerland North | `switzerlandnorth` |
-| Europe | UK South | `uksouth` |
-| Middle East | UAE North | `uaenorth` |
-| Qatar | Qatar Central | `qatarcentral` |
-| South America | Brazil South | `brazilsouth` |
-| Sweden | Sweden Central | `swedencentral` |
-| US | Central US | `centralus` |
-| US | East US | `eastus` |
-| US | East US 2 | `eastus2` |
-| US | North Central US | `northcentralus` |
-| US | South Central US | `southcentralus` |
-| US | West Central US | `westcentralus` |
-| US | West US | `westus`|
-| US | West US 2 | `westus2` |
-| US | West US 3 | `westus3` |
-
-## Multivariate Anomaly Detection
-
-The following regions are supported for Multivariate Anomaly Detection. The geographies are listed in alphabetical order.
-
-| Geography | Region | Region identifier |
-| -- | -- | -- |
-| Africa | South Africa North | `southafricanorth` |
-| Asia Pacific | East Asia | `eastasia` |
-| Asia Pacific | Southeast Asia | `southeastasia` |
-| Asia Pacific | Australia East | `australiaeast` |
-| Asia Pacific | Central India | `centralindia` |
-| Asia Pacific | Japan East | `japaneast` |
-| Asia Pacific | Jio India West | `jioindiawest` |
-| Asia Pacific | Korea Central | `koreacentral` |
-| Canada | Canada Central | `canadacentral` |
-| Europe | North Europe | `northeurope` |
-| Europe | West Europe | `westeurope` |
-| Europe | France Central | `francecentral` |
-| Europe | Germany West Central | `germanywestcentral` |
-| Europe | Norway East | `norwayeast` |
-| Europe | Switzerland North | `switzerlandnorth` |
-| Europe | UK South | `uksouth` |
-| Middle East | UAE North | `uaenorth` |
-| South America | Brazil South | `brazilsouth` |
-| US | Central US | `centralus` |
-| US | East US | `eastus` |
-| US | East US 2 | `eastus2` |
-| US | North Central US | `northcentralus` |
-| US | South Central US | `southcentralus` |
-| US | West Central US | `westcentralus` |
-| US | West US | `westus`|
-| US | West US 2 | `westus2` |
-| US | West US 3 | `westus3` |
-
-## Next steps
-
-* [Quickstart: Detect anomalies in your time series data using the Univariate Anomaly Detection](quickstarts/client-libraries.md)
-* [Quickstart: Detect anomalies in your time series data using the Multivariate Anomaly Detection](quickstarts/client-libraries-multivariate.md)
-* The Anomaly Detector [REST API reference](https://aka.ms/ad-api)
cognitive-services Service Limits https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Anomaly-Detector/service-limits.md
- Title: Service limits - Anomaly Detector service-
-description: Service limits for Anomaly Detector service, including Univariate Anomaly Detection and Multivariate Anomaly Detection.
------ Previously updated : 1/31/2023----
-# Anomaly Detector service quotas and limits
-
-This article contains both a quick reference and detailed description of Azure Anomaly Detector service quotas and limits for all pricing tiers. It also contains some best practices to help avoid request throttling.
-
-The quotas and limits apply to all the versions within Azure Anomaly Detector service.
-
-## Univariate Anomaly Detection
-
-|Quota<sup>1</sup>|Free (F0)|Standard (S0)|
-|--|--|--|
-| **All APIs per second** | 10 | 500 |
-
-<sup>1</sup> All quotas and limits are defined for one Anomaly Detector resource.
-
-## Multivariate Anomaly Detection
-
-### API call per minute
-
-|Quota<sup>1</sup>|Free (F0)<sup>2</sup>|Standard (S0)|
-|--|--|--|
-| **Training API per minute** | 1 | 20 |
-| **Get model API per minute** | 1 | 20 |
-| **Batch(async) inference API per minute** | 10 | 60 |
-| **Get inference results API per minute** | 10 | 60 |
-| **Last(sync) inference API per minute** | 10 | 60 |
-| **List model API per minute** | 1 | 20 |
-| **Delete model API per minute** | 1 | 20 |
-
-<sup>1</sup> All quotas and limits are defined for one Anomaly Detector resource.
-
-<sup>2</sup> For **Free (F0)** pricing tier see also monthly allowances at the [pricing page](https://azure.microsoft.com/pricing/details/cognitive-services/anomaly-detector/)
-
-### Concurrent models and inference tasks
-|Quota<sup>1</sup>|Free (F0)|Standard (S0)|
-|--|--|--|
-| **Maximum models** *(created, running, ready, failed)*| 20 | 1000 |
-| **Maximum running models** *(created, running)* | 1 | 20 |
-| **Maximum running inference** *(created, running)* | 10 | 60 |
-
-<sup>1</sup> All quotas and limits are defined for one Anomaly Detector resource. If you want to increase the limit, please contact AnomalyDetector@microsoft.com for further communication.
-
-## How to increase the limit for your resource?
-
-For the Standard pricing tier, this limit can be increased. Increasing the **concurrent request limit** doesn't directly affect your costs. Anomaly Detector service uses a "pay only for what you use" model. The limit defines how high the service may scale before it starts to throttle your requests.
-
-The **concurrent request limit parameter** isn't visible via Azure portal, Command-Line tools, or API requests. To verify the current value, create an Azure Support Request.
-
-If you would like to increase your limit, you can enable auto scaling on your resource. To do so, see [enable auto scaling](../autoscale.md). You can also submit a support request to increase your Transactions Per Second (TPS) limit.
-
-### Have the required information ready
-
-* Anomaly Detector resource ID
-
-* Region
-
-#### Retrieve resource ID and region
-
-* Go to [Azure portal](https://portal.azure.com/)
-* Select the Anomaly Detector Resource for which you would like to increase the transaction limit
-* Select Properties (Resource Management group)
-* Copy and save the values of the following fields:
- * Resource ID
- * Location (your endpoint Region)
-
-### Create and submit support request
-
-To request a limit increase for your resource, submit a **Support Request**:
-
-1. Go to [Azure portal](https://portal.azure.com/)
-2. Select the Anomaly Detector Resource for which you would like to increase the limit
-3. Select New support request (Support + troubleshooting group)
-4. A new window will appear with auto-populated information about your Azure Subscription and Azure Resource
-5. Enter Summary (like "Increase Anomaly Detector TPS limit")
-6. In Problem type, select *"Quota or usage validation"*
-7. Select Next: Solutions
-8. Proceed further with the request creation
-9. Under the Details tab, enter the following in the Description field:
-    * A note that the request is about the Anomaly Detector quota.
-    * The TPS expectation you would like to scale to meet.
-    * The Azure resource information you collected.
-    * Complete the required information and select the Create button on the *Review + create* tab
- * Note the support request number in Azure portal notifications. You'll be contacted shortly for further processing.
cognitive-services Azure Data Explorer https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Anomaly-Detector/tutorials/azure-data-explorer.md
- Title: "Tutorial: Use Univariate Anomaly Detector in Azure Data Explorer"-
-description: Learn how to use the Univariate Anomaly Detector with Azure Data Explorer.
- Previously updated : 12/19/2022
-# Tutorial: Use Univariate Anomaly Detector in Azure Data Explorer
-
-## Introduction
-
-The [Anomaly Detector API](/azure/cognitive-services/anomaly-detector/overview-multivariate) enables you to check and detect abnormalities in your time series data without having to know machine learning. The Anomaly Detector API's algorithms adapt by automatically finding and applying the best-fitting models to your data, regardless of industry, scenario, or data volume. Using your time series data, the API decides boundaries for anomaly detection, expected values, and which data points are anomalies.
-
-[Azure Data Explorer](/azure/data-explorer/data-explorer-overview) is a fully managed, high-performance, big data analytics platform that makes it easy to analyze high volumes of data in near real-time. The Azure Data Explorer toolbox gives you an end-to-end solution for data ingestion, query, visualization, and management.
-
-## Anomaly Detection functions in Azure Data Explorer
-
-### Function 1: series_uv_anomalies_fl()
-
-The function **[series_uv_anomalies_fl()](/azure/data-explorer/kusto/functions-library/series-uv-anomalies-fl?tabs=adhoc)** detects anomalies in time series by calling the [Univariate Anomaly Detector API](../overview.md). The function accepts a limited set of time series as numerical dynamic arrays and the required anomaly detection sensitivity level. Each time series is converted into the required JSON (JavaScript Object Notation) format and posted to the Anomaly Detector service endpoint. The service response has dynamic arrays of high/low/all anomalies, the modeled baseline time series, its normal high/low boundaries (a value above or below the high/low boundary is an anomaly), and the detected seasonality.
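
For orientation, here's a minimal Python sketch of the per-series request body this function assembles, based on the Kusto example later in this tutorial; the values are made up, and the optional `period` and `maxAnomalyRatio` fields mentioned in that example are noted in comments rather than invented here.

```python
import json

# Illustrative payload for one time series (values are made up; a real request
# carries the full series, not just three points).
# 'sensitivity' is the function parameter; a fixed 'period' or a 'maxAnomalyRatio'
# cap can optionally be added, as the Kusto example's comment notes.
payload = {
    "series": [{"value": 121.0}, {"value": 118.5}, {"value": 357.2}],
    "sensitivity": 85,
}
print(json.dumps(payload))
```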
-
-### Function 2: series_uv_change_points_fl()
-
-The function **[series_uv_change_points_fl()](/azure/data-explorer/kusto/functions-library/series-uv-change-points-fl?tabs=adhoc)** finds change points in time series by calling the Univariate Anomaly Detector API. The function accepts a limited set of time series as numerical dynamic arrays, the change point detection threshold, and the minimum size of the stable trend window. Each time series is converted into the required JSON format and posted to the Anomaly Detector service endpoint. The service response has dynamic arrays of change points, their respective confidence, and the detected seasonality.
-
-These two functions are user-defined [tabular functions](/azure/data-explorer/kusto/query/functions/user-defined-functions#tabular-function) applied using the [invoke operator](/azure/data-explorer/kusto/query/invokeoperator). You can either embed their code in your query or define them as stored functions in your database.
-
-## Where to use these new capabilities?
-
-These two functions can be used either on the Azure Data Explorer web UI or in the Kusto Explorer application.
-
-![Screenshot of Azure Data Explorer and Kusto Explorer](../media/data-explorer/way-of-use.png)
-
-## Create resources
-
-1. [Create an Azure Data Explorer Cluster](https://portal.azure.com/#create/Microsoft.AzureKusto) in the Azure portal. After the resource is created successfully, go to the resource and create a database.
-2. [Create an Anomaly Detector](https://portal.azure.com/#create/Microsoft.CognitiveServicesAnomalyDetector) resource in the Azure portal and note the keys and endpoint that you'll need later.
-3. Enable plugins in Azure Data Explorer
- * These new functions have inline Python and require [enabling the python() plugin](/azure/data-explorer/kusto/query/pythonplugin#enable-the-plugin) on the cluster.
- * These new functions call the anomaly detection service endpoint and require:
- * Enable the [http_request plugin / http_request_post plugin](/azure/data-explorer/kusto/query/http-request-plugin) on the cluster.
- * Modify the [callout policy](/azure/data-explorer/kusto/management/calloutpolicy) for type `webapi` to allow accessing the service endpoint.
-
-## Download sample data
-
-This tutorial uses the `request-data.csv` file, which can be downloaded from our [GitHub sample data](https://github.com/Azure/azure-sdk-for-python/blob/main/sdk/anomalydetector/azure-ai-anomalydetector/samples/sample_data/request-data.csv).
-
- You can also download the sample data by running:
-
-```cmd
-curl "https://raw.githubusercontent.com/Azure/azure-sdk-for-python/main/sdk/anomalydetector/azure-ai-anomalydetector/samples/sample_data/request-data.csv" --output request-data.csv
-```
-
-Then ingest the sample data to Azure Data Explorer by following the [ingestion guide](/azure/data-explorer/ingest-sample-data?tabs=ingestion-wizard). Name the new table for the ingested data **univariate**.
-
-Once ingested, your data should look as follows:
--
-## Detect anomalies in an entire time series
-
-In Azure Data Explorer, run the following query to make an anomaly detection chart with your onboarded data. You could also [create a function](/azure/data-explorer/kusto/functions-library/series-uv-change-points-fl?tabs=persistent) to add the code to a stored function for persistent usage.
-
-```kusto
-let series_uv_anomalies_fl=(tbl:(*), y_series:string, sensitivity:int=85, tsid:string='_tsid')
-{
-    let uri = '[Your-Endpoint]anomalydetector/v1.0/timeseries/entire/detect';
-    let headers=dynamic({'Ocp-Apim-Subscription-Key': h'[Your-key]'});
-    let kwargs = pack('y_series', y_series, 'sensitivity', sensitivity);
-    let code = ```if 1:
-        import json
-        y_series = kargs["y_series"]
-        sensitivity = kargs["sensitivity"]
-        json_str = []
-        for i in range(len(df)):
-            row = df.iloc[i, :]
-            ts = [{'value':row[y_series][j]} for j in range(len(row[y_series]))]
-            json_data = {'series': ts, "sensitivity":sensitivity}     # auto-detect period, or we can force 'period': 84. We can also add 'maxAnomalyRatio':0.25 for maximum 25% anomalies
-            json_str = json_str + [json.dumps(json_data)]
-        result = df
-        result['json_str'] = json_str
-    ```;
-    tbl
-    | evaluate python(typeof(*, json_str:string), code, kwargs)
-    | extend _tsid = column_ifexists(tsid, 1)
-    | partition by _tsid (
-       project json_str
-       | evaluate http_request_post(uri, headers, dynamic(null))
-       | project period=ResponseBody.period, baseline_ama=ResponseBody.expectedValues, ad_ama=series_add(0, ResponseBody.isAnomaly), pos_ad_ama=series_add(0, ResponseBody.isPositiveAnomaly)
-       , neg_ad_ama=series_add(0, ResponseBody.isNegativeAnomaly), upper_ama=series_add(ResponseBody.expectedValues, ResponseBody.upperMargins), lower_ama=series_subtract(ResponseBody.expectedValues, ResponseBody.lowerMargins)
-       | extend _tsid=toscalar(_tsid)
-      )
-}
-;
-let stime=datetime(2018-03-01);
-let etime=datetime(2018-04-16);
-let dt=1d;
-let ts = univariate
-| make-series value=avg(Column2) on Column1 from stime to etime step dt
-| extend _tsid='TS1';
-ts
-| invoke series_uv_anomalies_fl('value')
-| lookup ts on _tsid
-| render anomalychart with(xcolumn=Column1, ycolumns=value, anomalycolumns=ad_ama)
-```
-
-After you run the code, you'll render a chart like this:
--
-## Next steps
-
-* [Best practices of Univariate Anomaly Detection](../concepts/anomaly-detection-best-practices.md)
cognitive-services Batch Anomaly Detection Powerbi https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Anomaly-Detector/tutorials/batch-anomaly-detection-powerbi.md
- Title: "Tutorial: Visualize anomalies using batch detection and Power BI"-
-description: Learn how to use the Anomaly Detector API and Power BI to visualize anomalies throughout your time series data.
- Previously updated : 09/10/2020
-# Tutorial: Visualize anomalies using batch detection and Power BI (univariate)
-
-Use this tutorial to find anomalies within a time series data set as a batch. Using Power BI Desktop, you will take an Excel file, prepare the data for the Anomaly Detector API, and visualize statistical anomalies throughout it.
-
-In this tutorial, you'll learn how to:
-
-> [!div class="checklist"]
-> * Use Power BI Desktop to import and transform a time series data set
-> * Integrate Power BI Desktop with the Anomaly Detector API for batch anomaly detection
-> * Visualize anomalies found within your data, including expected and seen values, and anomaly detection boundaries.
-
-## Prerequisites
-* An [Azure subscription](https://azure.microsoft.com/free/cognitive-services)
-* [Microsoft Power BI Desktop](https://powerbi.microsoft.com/get-started/), available for free.
-* An Excel file (.xlsx) containing time series data points.
-* Once you have your Azure subscription, <a href="https://portal.azure.com/#create/Microsoft.CognitiveServicesAnomalyDetector" title="Create an Anomaly Detector resource" target="_blank">create an Anomaly Detector resource </a> in the Azure portal to get your key and endpoint.
- * You will need the key and endpoint from the resource you create to connect your application to the Anomaly Detector API. You'll do this later in the quickstart.
--
-## Load and format the time series data
-
-To get started, open Power BI Desktop and load the time series data you downloaded from the prerequisites. This Excel file contains a series of Coordinated Universal Time (UTC) timestamp and value pairs.
-
-> [!NOTE]
-> Power BI can use data from a wide variety of sources, such as .csv files, SQL databases, Azure blob storage, and more.
-
-In the main Power BI Desktop window, click the **Home** ribbon. In the **External data** group of the ribbon, open the **Get Data** drop-down menu and click **Excel**.
-
-![An image of the "Get Data" button in Power BI](../media/tutorials/power-bi-get-data-button.png)
-
-After the dialog appears, navigate to the folder where you downloaded the example .xlsx file and select it. After the **Navigator** dialog box appears, click **Sheet1**, and then **Edit**.
-
-![An image of the data source "Navigator" screen in Power BI](../media/tutorials/navigator-dialog-box.png)
-
-Power BI will convert the timestamps in the first column to a `Date/Time` data type. These timestamps must be converted to text in order to be sent to the Anomaly Detector API. If the Power Query editor doesn't automatically open, click **Edit Queries** on the home tab.
-
-Click the **Transform** ribbon in the Power Query Editor. In the **Any Column** group, open the **Data Type:** drop-down menu, and select **Text**.
-
-![An image of the data type drop down](../media/tutorials/data-type-drop-down.png)
-
-When you get a notice about changing the column type, click **Replace Current**. Afterwards, click **Close & Apply** or **Apply** in the **Home** ribbon.
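
If you'd rather inspect this preparation step outside Power BI, here's a minimal pandas sketch of the equivalent transformation; the file name and the `Timestamp`/`Value` column names are assumptions based on this tutorial's sample workbook, not values taken from it.

```python
import pandas as pd

# Hypothetical file name; substitute the sample .xlsx from the prerequisites.
df = pd.read_excel("sample-data.xlsx")

# The Anomaly Detector API expects the timestamps as text, so format them as ISO 8601 strings.
df["Timestamp"] = pd.to_datetime(df["Timestamp"]).dt.strftime("%Y-%m-%dT%H:%M:%SZ")
df["Value"] = df["Value"].astype(float)

print(df.dtypes)
print(df.head())
```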
-
-## Create a function to send the data and format the response
-
-To format and send the data file to the Anomaly Detector API, you can invoke a query on the table created above. In the Power Query Editor, from the **Home** ribbon, open the **New Source** drop-down menu and click **Blank Query**.
-
-Make sure your new query is selected, then click **Advanced Editor**.
-
-![An image of the "Advanced Editor" screen](../media/tutorials/advanced-editor-screen.png)
-
-Within the Advanced Editor, use the following Power Query M snippet to extract the columns from the table and send them to the API. Afterwards, the query will create a table from the JSON response and return it. Replace the `apiKey` variable with your valid Anomaly Detector API key, and `endpoint` with your endpoint. After you've entered the query into the Advanced Editor, click **Done**.
-
-```M
-(table as table) => let
-
- apikey = "[Placeholder: Your Anomaly Detector resource access key]",
- endpoint = "[Placeholder: Your Anomaly Detector resource endpoint]/anomalydetector/v1.0/timeseries/entire/detect",
- inputTable = Table.TransformColumnTypes(table,{{"Timestamp", type text},{"Value", type number}}),
- jsontext = Text.FromBinary(Json.FromValue(inputTable)),
- jsonbody = "{ ""Granularity"": ""daily"", ""Sensitivity"": 95, ""Series"": "& jsontext &" }",
- bytesbody = Text.ToBinary(jsonbody),
- headers = [#"Content-Type" = "application/json", #"Ocp-Apim-Subscription-Key" = apikey],
- bytesresp = Web.Contents(endpoint, [Headers=headers, Content=bytesbody, ManualStatusHandling={400}]),
- jsonresp = Json.Document(bytesresp),
-
- respTable = Table.FromColumns({
-
- Table.Column(inputTable, "Timestamp")
- ,Table.Column(inputTable, "Value")
- , Record.Field(jsonresp, "IsAnomaly") as list
- , Record.Field(jsonresp, "ExpectedValues") as list
- , Record.Field(jsonresp, "UpperMargins") as list
- , Record.Field(jsonresp, "LowerMargins") as list
- , Record.Field(jsonresp, "IsPositiveAnomaly") as list
- , Record.Field(jsonresp, "IsNegativeAnomaly") as list
-
- }, {"Timestamp", "Value", "IsAnomaly", "ExpectedValues", "UpperMargin", "LowerMargin", "IsPositiveAnomaly", "IsNegativeAnomaly"}
- ),
-
- respTable1 = Table.AddColumn(respTable , "UpperMargins", (row) => row[ExpectedValues] + row[UpperMargin]),
- respTable2 = Table.AddColumn(respTable1 , "LowerMargins", (row) => row[ExpectedValues] - row[LowerMargin]),
- respTable3 = Table.RemoveColumns(respTable2, "UpperMargin"),
- respTable4 = Table.RemoveColumns(respTable3, "LowerMargin"),
-
- results = Table.TransformColumnTypes(
-
- respTable4,
- {{"Timestamp", type datetime}, {"Value", type number}, {"IsAnomaly", type logical}, {"IsPositiveAnomaly", type logical}, {"IsNegativeAnomaly", type logical},
- {"ExpectedValues", type number}, {"UpperMargins", type number}, {"LowerMargins", type number}}
- )
-
- in results
-```
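
For readers less familiar with Power Query M, the request the snippet above builds can be sketched in Python roughly as follows; the endpoint and key placeholders are the same ones used in the M code, and the two data points are made up for illustration.

```python
import requests

# Same placeholders as in the M snippet above.
api_key = "[Placeholder: Your Anomaly Detector resource access key]"
endpoint = "[Placeholder: Your Anomaly Detector resource endpoint]/anomalydetector/v1.0/timeseries/entire/detect"

# Mirrors the jsonbody built in the M query (data points are made up;
# a real request carries the whole series).
body = {
    "Granularity": "daily",
    "Sensitivity": 95,
    "Series": [
        {"Timestamp": "2018-03-01T00:00:00Z", "Value": 1.0},
        {"Timestamp": "2018-03-02T00:00:00Z", "Value": 1.2},
    ],
}

response = requests.post(
    endpoint,
    headers={"Ocp-Apim-Subscription-Key": api_key, "Content-Type": "application/json"},
    json=body,
)
response.raise_for_status()

# The response carries the expected values, upper/lower margins, and anomaly
# flags that the M code above turns into table columns.
print(response.json())
```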
-
-Invoke the query on your data sheet by selecting `Sheet1` below **Enter Parameter**, and click **Invoke**.
-
-![An image of the invoke function](../media/tutorials/invoke-function-screenshot.png)
-
-> [!IMPORTANT]
-> Remember to remove the key from your code when you're done, and never post it publicly. For production, use a secure way of storing and accessing your credentials like [Azure Key Vault](../../../key-vault/general/overview.md). See the Cognitive Services [security](../../cognitive-services-security.md) article for more information.
-
-## Data source privacy and authentication
-
-> [!NOTE]
-> Be aware of your organization's policies for data privacy and access. See [Power BI Desktop privacy levels](/power-bi/desktop-privacy-levels) for more information.
-
-You may get a warning message when you attempt to run the query since it utilizes an external data source.
-
-![An image showing a warning created by Power BI](../media/tutorials/blocked-function.png)
-
-To fix this, click **File**, and **Options and settings**. Then click **Options**. Below **Current File**, select **Privacy**, and **Ignore the Privacy Levels and potentially improve performance**.
-
-Additionally, you may get a message asking you to specify how you want to connect to the API.
-
-![An image showing a request to specify access credentials](../media/tutorials/edit-credentials-message.png)
-
-To fix this, click **Edit Credentials** in the message. After the dialog box appears, select **Anonymous** to connect to the API anonymously. Then click **Connect**.
-
-Afterwards, click **Close & Apply** in the **Home** ribbon to apply the changes.
-
-## Visualize the Anomaly Detector API response
-
-In the main Power BI screen, begin using the queries created above to visualize the data. First select **Line Chart** in **Visualizations**. Then add the timestamp from the invoked function to the line chart's **Axis**. Right-click on it, and select **Timestamp**.
-
-![Right-clicking the Timestamp value](../media/tutorials/timestamp-right-click.png)
-
-Add the following fields from the **Invoked Function** to the chart's **Values** field. Use the below screenshot to help build your chart.
-
-* Value
-* UpperMargins
-* LowerMargins
-* ExpectedValues
-
-![An image of the chart settings](../media/tutorials/chart-settings.png)
-
-After adding the fields, click on the chart and resize it to show all of the data points. Your chart will look similar to the below screenshot:
-
-![An image of the chart visualization](../media/tutorials/chart-visualization.png)
-
-### Display anomaly data points
-
-On the right side of the Power BI window, below the **FIELDS** pane, right-click on **Value** under the **Invoked Function query**, and click **New quick measure**.
-
-![An image of the new quick measure screen](../media/tutorials/new-quick-measure.png)
-
-On the screen that appears, select **Filtered value** as the calculation. Set **Base value** to `Sum of Value`. Then drag `IsAnomaly` from the **Invoked Function** fields to **Filter**. Select `True` from the **Filter** drop-down menu.
-
-![A second image of the new quick measure screen](../media/tutorials/new-quick-measure-2.png)
-
-After clicking **OK**, you will have a `Value for True` field at the bottom of the list of your fields. Right-click it and rename it to **Anomaly**. Add it to the chart's **Values**. Then select the **Format** tool, and set the X-axis type to **Categorical**.
-
-![An image of the format x axis](../media/tutorials/format-x-axis.png)
-
-Apply colors to your chart by clicking on the **Format** tool and **Data colors**. Your chart should look something like the following:
-
-![An image of the final chart](../media/tutorials/final-chart.png)
cognitive-services Multivariate Anomaly Detection Synapse https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Anomaly-Detector/tutorials/multivariate-anomaly-detection-synapse.md
- Title: "Tutorial: Use Multivariate Anomaly Detector in Azure Synapse Analytics"-
-description: Learn how to use the Multivariate Anomaly Detector with Azure Synapse Analytics.
- Previously updated : 08/03/2022
-# Tutorial: Use Multivariate Anomaly Detector in Azure Synapse Analytics
-
-Use this tutorial to detect anomalies among multiple variables in very large datasets and databases by using Azure Synapse Analytics. This solution is well suited for scenarios like equipment predictive maintenance. The underlying power comes from the integration with [SynapseML](https://microsoft.github.io/SynapseML/), an open-source library that aims to simplify the creation of massively scalable machine learning pipelines. It can be installed and used on any Spark 3 infrastructure, including your **local machine**, **Databricks**, **Synapse Analytics**, and others.
-
-For more information, see [SynapseML estimator for Multivariate Anomaly Detector](https://microsoft.github.io/SynapseML/docs/documentation/estimators/estimators_cognitive/#fitmultivariateanomaly).
-
-In this tutorial, you'll learn how to:
-
-> [!div class="checklist"]
-> * Use Azure Synapse Analytics to detect anomalies among multiple variables.
-> * Train a multivariate anomaly detector model and run inference in separate notebooks in Synapse Analytics.
-> * Get anomaly detection results and root cause analysis for each anomaly.
-
-## Prerequisites
-
-In this section, you'll create the following resources in the Azure portal:
-
-* An **Anomaly Detector** resource to get access to the capability of Multivariate Anomaly Detector.
-* An **Azure Synapse Analytics** resource to use the Synapse Studio.
-* A **Storage account** to upload your data for model training and anomaly detection.
-* A **Key Vault** resource to hold the key of Anomaly Detector and the connection string of the Storage Account.
-
-### Create Anomaly Detector and Azure Synapse Analytics resources
-
-* [Create a resource for Azure Synapse Analytics](https://portal.azure.com/#create/Microsoft.Synapse) in the Azure portal, fill in all the required items.
-* [Create an Anomaly Detector](https://portal.azure.com/#create/Microsoft.CognitiveServicesAnomalyDetector) resource in the Azure portal.
-* Sign in to [Azure Synapse Analytics](https://web.azuresynapse.net/) using your subscription and Workspace name.
-
- ![A screenshot of the Synapse Analytics landing page.](../media/multivariate-anomaly-detector-synapse/synapse-workspace-welcome-page.png)
-
-### Create a storage account resource
-
-* [Create a storage account resource](https://portal.azure.com/#create/Microsoft.StorageAccount) in the Azure portal. After your storage account is built, **create a container** to store intermediate data, since SynapseML will transform your original data to a schema that Multivariate Anomaly Detector supports. (Refer to Multivariate Anomaly Detector [input schema](../how-to/multivariate-how-to.md#input-data-schema))
-
- > [!NOTE]
- > For the purposes of this example only, we're setting the security on the container to allow anonymous read access for containers and blobs, since it will only contain our example .csv data. For anything other than demo purposes, this is **not recommended**.
-
- ![A screenshot of the creating a container in a storage account.](../media/multivariate-anomaly-detector-synapse/create-a-container.png)
-
-### Create a Key Vault to hold Anomaly Detector Key and storage account connection string
-
-* Create a key vault and configure secrets and access
- 1. Create a [key vault](https://portal.azure.com/#create/Microsoft.KeyVault) in the Azure portal.
- 2. Go to Key Vault > Access policies, and grant the [Azure Synapse workspace](../../../data-factory/data-factory-service-identity.md?context=%2fazure%2fsynapse-analytics%2fcontext%2fcontext&tabs=synapse-analytics) permission to read secrets from Azure Key Vault.
-
- ![A screenshot of granting permission to Synapse.](../media/multivariate-anomaly-detector-synapse/grant-synapse-permission.png)
-
-* Create a secret in Key Vault to hold the Anomaly Detector key
- 1. Go to your Anomaly Detector resource, **Anomaly Detector** > **Keys and Endpoint**. Then copy either of the two keys to the clipboard.
- 2. Go to **Key Vault** > **Secret** to create a new secret. Specify the name of the secret, and then paste the key from the previous step into the **Value** field. Finally, select **Create**.
-
- ![A screenshot of the creating a secret.](../media/multivariate-anomaly-detector-synapse/create-a-secret.png)
-
-* Create a secret in Key Vault to hold Connection String of Storage account
- 1. Go to your Storage account resource, select **Access keys** to copy one of your Connection strings.
-
- ![A screenshot of copying connection string.](../media/multivariate-anomaly-detector-synapse/copy-connection-string.png)
-
- 2. Then go to **Key Vault** > **Secret** to create a new secret. Specify the name of the secret (like *myconnectionstring*), and then paste the Connection string from the previous step into the **Value** field. Finally, select **Create**.
-
-## Using a notebook to conduct Multivariate Anomaly Detection in Synapse Analytics
-
-### Create a notebook and a Spark pool
-
-1. Sign in to [Azure Synapse Analytics](https://web.azuresynapse.net/) and create a new notebook for coding.
-
- ![A screenshot of creating notebook in Synapse.](../media/multivariate-anomaly-detector-synapse/create-a-notebook.png)
-
-2. Select **Manage pools** on the notebook page to create a new Apache Spark pool if you don't have one.
-
- ![A screenshot of creating spark pool.](../media/multivariate-anomaly-detector-synapse/create-spark-pool.png)
-
-### Writing code in the notebook
-
-1. Install the latest version of SynapseML with the Anomaly Detection Spark models. You can also install SynapseML in Spark Packages, Databricks, Docker, and other environments. For more information, see the [SynapseML homepage](https://microsoft.github.io/SynapseML/).
-
- If you're using **Spark 3.1**, please use the following code:
-
- ```python
- %%configure -f
- {
- "name": "synapseml",
- "conf": {
- "spark.jars.packages": "com.microsoft.azure:synapseml_2.12:0.9.5-13-d1b51517-SNAPSHOT",
- "spark.jars.repositories": "https://mmlspark.azureedge.net/maven",
- "spark.jars.excludes": "org.scala-lang:scala-reflect,org.apache.spark:spark-tags_2.12,org.scalactic:scalactic_2.12,org.scalatest:scalatest_2.12",
- "spark.yarn.user.classpath.first": "true"
- }
- }
- ```
-
- If you're using **Spark 3.2**, please use the following code:
-
- ```python
- %%configure -f
- {
- "name": "synapseml",
- "conf": {
-         "spark.jars.packages": "com.microsoft.azure:synapseml_2.12:0.9.5",
- "spark.jars.repositories": "https://mmlspark.azureedge.net/maven",
- "spark.jars.excludes": "org.scala-lang:scala-reflect,org.apache.spark:spark-tags_2.12,org.scalactic:scalactic_2.12,org.scalatest:scalatest_2.12,io.netty:netty-tcnative-boringssl-static",
- "spark.yarn.user.classpath.first": "true"
- }
- }
- ```
-
-2. Import the necessary modules and libraries.
-
- ```python
- from synapse.ml.cognitive import *
- from notebookutils import mssparkutils
- import numpy as np
- import pandas as pd
- import pyspark
- from pyspark.sql.functions import col
- from pyspark.sql.functions import lit
- from pyspark.sql.types import DoubleType
- import synapse.ml
- ```
-
-3. Load your data. Compose your data in the following format, and upload it to cloud storage that Spark supports, such as an Azure Storage Account. The timestamp column should be in `ISO8601` format, and the feature columns should be `string` type. **Download sample data [here](https://sparkdemostorage.blob.core.windows.net/mvadcsvdata/spark-demo-data.csv)**.
-
- ```python
- df = spark.read.format("csv").option("header", True).load("wasbs://[container_name]@[storage_account_name].blob.core.windows.net/[csv_file_name].csv")
-
- df = df.withColumn("sensor_1", col("sensor_1").cast(DoubleType())) \
- .withColumn("sensor_2", col("sensor_2").cast(DoubleType())) \
- .withColumn("sensor_3", col("sensor_3").cast(DoubleType()))
-
- df.show(10)
- ```
-
- ![A screenshot of raw data.](../media/multivariate-anomaly-detector-synapse/raw-data.png)
-
-4. Train a multivariate anomaly detection model.
-
- ![A screenshot of training parameter.](../media/multivariate-anomaly-detector-synapse/training-parameter.png)
-
- ```python
- #Input your key vault name and anomaly key name in key vault.
- anomalyKey = mssparkutils.credentials.getSecret("[key_vault_name]","[anomaly_key_secret_name]")
- #Input your key vault name and connection string name in key vault.
- connectionString = mssparkutils.credentials.getSecret("[key_vault_name]", "[connection_string_secret_name]")
-
- #Specify information about your data.
- startTime = "2021-01-01T00:00:00Z"
- endTime = "2021-01-02T09:18:00Z"
- timestampColumn = "timestamp"
- inputColumns = ["sensor_1", "sensor_2", "sensor_3"]
-    #Specify the container you created in your storage account. You can also provide a new name here, and Synapse will create that container automatically.
- containerName = "[container_name]"
- #Set a folder name in Storage account to store the intermediate data.
- intermediateSaveDir = "intermediateData"
-
- simpleMultiAnomalyEstimator = (FitMultivariateAnomaly()
- .setSubscriptionKey(anomalyKey)
-    #In .setLocation, specify the region of your Anomaly Detector resource in lowercase, for example: eastus.
- .setLocation("[anomaly_detector_region]")
- .setStartTime(startTime)
- .setEndTime(endTime)
- .setContainerName(containerName)
- .setIntermediateSaveDir(intermediateSaveDir)
- .setTimestampCol(timestampColumn)
- .setInputCols(inputColumns)
- .setSlidingWindow(200)
- .setConnectionString(connectionString))
- ```
-
-    Trigger the training process with the following code.
-
- ```python
- model = simpleMultiAnomalyEstimator.fit(df)
- type(model)
- ```
-
-5. Trigger inference process.
-
- ```python
- startInferenceTime = "2021-01-02T09:19:00Z"
- endInferenceTime = "2021-01-03T01:59:00Z"
- result = (model
- .setStartTime(startInferenceTime)
- .setEndTime(endInferenceTime)
- .setOutputCol("results")
- .setErrorCol("errors")
- .setTimestampCol(timestampColumn)
- .setInputCols(inputColumns)
- .transform(df))
- ```
-
-6. Get inference results.
-
- ```python
- rdf = (result.select("timestamp",*inputColumns, "results.contributors", "results.isAnomaly", "results.severity").orderBy('timestamp', ascending=True).filter(col('timestamp') >= lit(startInferenceTime)).toPandas())
-
- def parse(x):
- if type(x) is list:
- return dict([item[::-1] for item in x])
- else:
- return {'series_0': 0, 'series_1': 0, 'series_2': 0}
-
- rdf['contributors'] = rdf['contributors'].apply(parse)
- rdf = pd.concat([rdf.drop(['contributors'], axis=1), pd.json_normalize(rdf['contributors'])], axis=1)
- rdf
- ```
-
-    The inference results will look as follows. The `severity` is a number between 0 and 1 that indicates how severe an anomaly is. The last three columns indicate the `contribution score` of each sensor; the higher the number, the more that sensor contributed to the anomaly. A small filtering sketch follows the screenshot below.
- ![A screenshot of inference result.](../media/multivariate-anomaly-detector-synapse/inference-result.png)
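
As a follow-on, you might keep only the rows flagged as anomalies above a chosen severity and see which sensor contributed most. This is a sketch that assumes the `rdf` pandas DataFrame produced above, that the contribution columns are named `series_0`/`series_1`/`series_2` as in the default of the `parse` step, and an illustrative 0.3 threshold.

```python
# Keep only rows flagged as anomalies with severity above an illustrative threshold.
threshold = 0.3
severe = rdf[(rdf["isAnomaly"]) & (rdf["severity"] > threshold)].copy()

# Contribution column names follow the default used in the parse step above;
# adjust them if your output uses different variable names.
contribution_cols = ["series_0", "series_1", "series_2"]
severe["top_contributor"] = severe[contribution_cols].idxmax(axis=1)

print(severe[["timestamp", "severity", "top_contributor"]])
```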
-
-## Clean up intermediate data (optional)
-
-By default, the anomaly detector automatically uploads data to a storage account so that the service can process it. To clean up the intermediate data, you can run the following code.
-
-```python
-simpleMultiAnomalyEstimator.cleanUpIntermediateData()
-model.cleanUpIntermediateData()
-```
-
-## Use trained model in another notebook with model ID (optional)
-
-If you need to run the training code and the inference code in separate notebooks in Synapse, you can first get the model ID and then use that ID to load the model in another notebook by creating a new object, as shown in the following steps and the short usage sketch after them.
-
-1. Get the model ID in the training notebook.
-
- ```python
- model.getModelId()
- ```
-
-2. Load the model in the inference notebook.
-
- ```python
- retrievedModel = (DetectMultivariateAnomaly()
- .setSubscriptionKey(anomalyKey)
- .setLocation("eastus")
- .setOutputCol("result")
- .setStartTime(startTime)
- .setEndTime(endTime)
- .setContainerName(containerName)
- .setIntermediateSaveDir(intermediateSaveDir)
- .setTimestampCol(timestampColumn)
- .setInputCols(inputColumns)
- .setConnectionString(connectionString)
- .setModelId('5bXXXXXX-XXXX-XXXX-XXXX-XXXXXXXXXXe9'))
- ```
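
Once the model object is reconstructed this way, inference can be run in this notebook the same way as in the earlier inference step. A one-line sketch, assuming `df` is loaded here as well:

```python
# Run inference with the reloaded model, mirroring the earlier inference step.
result = retrievedModel.transform(df)
```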
-
-## Learn more
-
-### About Anomaly Detector
-
-* Learn about [what is Multivariate Anomaly Detector](../overview.md).
-* SynapseML documentation with [Multivariate Anomaly Detector feature](https://microsoft.github.io/SynapseML/docs/documentation/estimators/estimators_cognitive/#fitmultivariateanomaly).
-* Recipe: [Cognitive Services - Multivariate Anomaly Detector](https://microsoft.github.io/SynapseML/docs/features/cognitive_services/CognitiveServices%20-%20Multivariate%20Anomaly%20Detection/).
-* Need support? [Join the Anomaly Detector Community](https://forms.office.com/pages/responsepage.aspx?id=v4j5cvGGr0GRqy180BHbR2Ci-wb6-iNDoBoNxrnEk9VURjNXUU1VREpOT0U1UEdURkc0OVRLSkZBNC4u).
-
-### About Synapse
-
-* Quick start: [Configure prerequisites for using Cognitive Services in Azure Synapse Analytics](../../../synapse-analytics/machine-learning/tutorial-configure-cognitive-services-synapse.md#create-a-key-vault-and-configure-secrets-and-access).
-* Visit the new [SynapseML website](https://microsoft.github.io/SynapseML/) for the latest docs, demos, and examples.
-* Learn more about [Synapse Analytics](../../../synapse-analytics/index.yml).
-* Read about the [SynapseML v0.9.5 release](https://github.com/microsoft/SynapseML/releases/tag/v0.9.5) on GitHub.
cognitive-services Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Anomaly-Detector/whats-new.md
- Title: What's New - Anomaly Detector
-description: This article is regularly updated with news about the Azure Cognitive Services Anomaly Detector.
- Previously updated : 12/15/2022
-# What's new in Anomaly Detector
-
-Learn what's new in the service. These items include release notes, videos, blog posts, papers, and other types of information. Bookmark this page to keep up to date with the service.
-
-We have also added links to some user-generated content. Those items are marked with the **[UGC]** tag. Some of them are hosted on websites that are external to Microsoft, and Microsoft isn't responsible for the content there. Use discretion when you refer to these resources. Contact AnomalyDetector@microsoft.com or raise an issue on GitHub if you'd like us to remove the content.
-
-## Release notes
-
-### Jan 2023
-* Multivariate Anomaly Detection will begin charging as of January 10th, 2023. For pricing details, see the [pricing page](https://azure.microsoft.com/pricing/details/cognitive-services/anomaly-detector/).
-
-### Dec 2022
-* The following SDKs for Multivariate Anomaly Detection are updated to match the generally available REST API.
-
- |SDK Package |Sample Code |
- |||
- | [Python](https://pypi.org/project/azure-ai-anomalydetector/3.0.0b6/)|[sample_multivariate_detect.py](https://github.com/Azure/azure-sdk-for-python/blob/main/sdk/anomalydetector/azure-ai-anomalydetector/samples/sample_multivariate_detect.py)|
- | [.NET](https://www.nuget.org/packages/Azure.AI.AnomalyDetector/3.0.0-preview.6) | [Sample4_MultivariateDetect.cs](https://github.com/Azure/azure-sdk-for-net/blob/40a7d122ac99a3a8a7c62afa16898b7acf82c03d/sdk/anomalydetector/Azure.AI.AnomalyDetector/tests/samples/Sample4_MultivariateDetect.cs)|
- | [JAVA](https://search.maven.org/artifact/com.azure/azure-ai-anomalydetector/3.0.0-beta.5/jar) | [MultivariateSample.java](https://github.com/Azure/azure-sdk-for-java/blob/e845677d919d47a2c4837153306b37e5f4ecd795/sdk/anomalydetector/azure-ai-anomalydetector/src/samples/java/com/azure/ai/anomalydetector/MultivariateSample.java)|
- | [JS/TS](https://www.npmjs.com/package/@azure-rest/ai-anomaly-detector/v/1.0.0-beta.1) |[sample_multivariate_detection.ts](https://github.com/Azure/azure-sdk-for-js/blob/main/sdk/anomalydetector/ai-anomaly-detector-rest/samples-dev/sample_multivariate_detection.ts)|
-
-* Check out this AI Show video to learn more about the GA version of Multivariate Anomaly Detection: [AI Show | Multivariate Anomaly Detection is Generally Available](/shows/ai-show/ep-70-the-latest-from-azure-multivariate-anomaly-detection).
-
-### Nov 2022
-
-* Multivariate Anomaly Detection is now a generally available feature in Anomaly Detector service, with a better user experience and better model performance. Learn more about [how to get started using the latest release of Multivariate Anomaly Detection](how-to/create-resource.md).
-
-### June 2022
-
-* New blog released: [Four sets of best practices to use Multivariate Anomaly Detector when monitoring your equipment](https://techcommunity.microsoft.com/t5/ai-cognitive-services-blog/4-sets-of-best-practices-to-use-multivariate-anomaly-detector/ba-p/3490848#footerContent).
-
-### May 2022
-
-* New blog released: [Detect anomalies in equipment with Multivariate Anomaly Detector in Azure Databricks](https://techcommunity.microsoft.com/t5/ai-cognitive-services-blog/detect-anomalies-in-equipment-with-anomaly-detector-in-azure/ba-p/3390688).
-
-### April 2022
-* Univariate Anomaly Detector is now integrated in Azure Data Explorer(ADX). Check out this [announcement blog post](https://techcommunity.microsoft.com/t5/ai-cognitive-services-blog/announcing-univariate-anomaly-detector-in-azure-data-explorer/ba-p/3285400) to learn more!
-
-### March 2022
-* Anomaly Detector (univariate) available in Sweden Central.
-
-### February 2022
-* **Multivariate Anomaly Detector API has been integrated with Synapse.** Check out this [blog](https://techcommunity.microsoft.com/t5/ai-cognitive-services-blog/announcing-multivariate-anomaly-detector-in-synapseml/ba-p/3122486) to learn more!
-
-### January 2022
-* **Multivariate Anomaly Detector API v1.1-preview.1 public preview on 1/18.** In this version, Multivariate Anomaly Detector supports a synchronous API for inference and adds new fields to the API output that interpret the correlation change of variables.
-* Univariate Anomaly Detector added new fields in API output.
-
-### November 2021
-* Multivariate Anomaly Detector available in six more regions: UAE North, France Central, North Central US, Switzerland North, South Africa North, Jio India West. Now in total 26 regions are supported.
-
-### September 2021
-* Anomaly Detector (univariate) available in Jio India West.
-* Multivariate anomaly detector APIs deployed in five more regions: East Asia, West US, Central India, Korea Central, Germany West Central.
-
-### August 2021
-
-* Multivariate anomaly detector APIs deployed in five more regions: West US 3, Japan East, Brazil South, Central US, Norway East. Now in total 15 regions are supported.
-
-### July 2021
-
-* Multivariate anomaly detector APIs deployed in four more regions: Australia East, Canada Central, North Europe, and Southeast Asia. Now in total 10 regions are supported.
-* Anomaly Detector (univariate) available in West US 3 and Norway East.
--
-### June 2021
-
-* Multivariate anomaly detector APIs available in more regions: West US 2, West Europe, East US 2, South Central US, East US, and UK South.
-* Anomaly Detector (univariate) available in Azure cloud for US Government.
-* Anomaly Detector (univariate) available in Azure China (China North 2).
-
-### April 2021
-
-* [IoT Edge module](https://azuremarketplace.microsoft.com/marketplace/apps/azure-cognitive-service.edge-anomaly-detector) (univariate) published.
-* Anomaly Detector (univariate) available in Azure China (China East 2).
-* Multivariate anomaly detector APIs preview in selected regions (West US 2, West Europe).
-
-### September 2020
-
-* Anomaly Detector (univariate) generally available.
-
-### March 2019
-
-* Anomaly Detector announced preview with univariate anomaly detection support.
-
-## Technical articles
-
-* March 12, 2021 [Introducing Multivariate Anomaly Detection](https://techcommunity.microsoft.com/t5/azure-ai/introducing-multivariate-anomaly-detection/ba-p/2260679) - Technical blog on the new multivariate APIs
-* September 2020 [Multivariate Time-series Anomaly Detection via Graph Attention Network](https://arxiv.org/abs/2009.02040) - Paper on multivariate anomaly detection accepted by ICDM 2020
-* November 5, 2019 [Overview of SR-CNN algorithm in Azure Anomaly Detector](https://techcommunity.microsoft.com/t5/ai-customer-engineering-team/overview-of-sr-cnn-algorithm-in-azure-anomaly-detector/ba-p/982798) - Technical blog on SR-CNN
-* June 10, 2019 [Time-Series Anomaly Detection Service at Microsoft](https://arxiv.org/abs/1906.03821) - Paper on SR-CNN accepted by KDD 2019
-* April 25, 2019 **[UGC]** [Trying the Cognitive Service: Anomaly Detector API (in Japanese)](/ja-jp/azure/cognitive-services/anomaly-detector/)
-* April 20, 2019 [Introducing Azure Anomaly Detector API](https://techcommunity.microsoft.com/t5/ai-customer-engineering-team/introducing-azure-anomaly-detector-api/ba-p/490162) - Announcement blog
-
-## Videos
-
-* Nov 12, 2022 AI Show: [Multivariate Anomaly Detection is GA](/shows/ai-show/ep-70-the-latest-from-azure-multivariate-anomaly-detection) (Seth with Louise Han).
-* May 7, 2021 [New to Anomaly Detector: Multivariate Capabilities](/shows/AI-Show/New-to-Anomaly-Detector-Multivariate-Capabilities) - AI Show on the new multivariate anomaly detector APIs with Tony Xing and Seth Juarez
-* April 20, 2021 AI Show Live | Episode 11| New to Anomaly Detector: Multivariate Capabilities - AI Show live recording with Tony Xing and Seth Juarez
-* May 18, 2020 [Inside Anomaly Detector](/shows/AI-Show/Inside-Anomaly-Detector) - AI Show with Qun Ying and Seth Juarez
-* September 19, 2019 **[UGC]** [Detect Anomalies in Your Data with the Anomaly Detector](https://www.youtube.com/watch?v=gfb63wvjnYQ) - Video by Jon Wood
-* September 3, 2019 [Anomaly detection on streaming data using Azure Databricks](/shows/AI-Show/Anomaly-detection-on-streaming-data-using-Azure-Databricks) - AI Show with Qun Ying
-* August 27, 2019 [Anomaly Detector v1.0 Best Practices](/shows/AI-Show/Anomaly-Detector-v10-Best-Practices) - AI Show on univariate anomaly detection best practices with Qun Ying
-* August 20, 2019 [Bring Anomaly Detector on-premises with containers support](/shows/AI-Show/Bring-Anomaly-Detector-on-premise-with-containers-support) - AI Show with Qun Ying and Seth Juarez
-* August 13, 2019 [Introducing Azure Anomaly Detector](/shows/AI-Show/Introducing-Azure-Anomaly-Detector?WT.mc_id=ai-c9-niner) - AI Show with Qun Ying and Seth Juarez
--
-## Service updates
-
-[Azure update announcements for Cognitive Services](https://azure.microsoft.com/updates/?product=cognitive-services)
cognitive-services Bing Autosuggest Upgrade Guide V5 To V7 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Bing-Autosuggest/bing-autosuggest-upgrade-guide-v5-to-v7.md
Last updated 02/20/2019
# Autosuggest API upgrade guide This upgrade guide identifies the changes between version 5 and version 7 of the Bing Autosuggest API. Use this guide to help update your application to use version 7.
cognitive-services Get Suggestions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Bing-Autosuggest/concepts/get-suggestions.md
# Suggesting query terms Typically, you'd call the Bing Autosuggest API each time a user types a new character in your application's search box. The completeness of the query string impacts the relevance of the suggested query terms that the API returns. The more complete the query string, the more relevant the list of suggested query terms is. For example, the suggestions that the API may return for `s` are likely to be less relevant than the queries it returns for `sailing dinghies`.
cognitive-services Sending Requests https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Bing-Autosuggest/concepts/sending-requests.md
Last updated 06/27/2019
# Sending requests to the Bing Autosuggest API. If your application sends queries to any of the Bing Search APIs, you can use the Bing Autosuggest API to improve your users' search experience. The Bing Autosuggest API returns a list of suggested queries based on the partial query string in the search box. As characters are entered into a search box in your application, you can display suggestions in a drop-down list. Use this article to learn more about sending requests to this API.
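
For a concrete picture of such a request, here's a minimal Python sketch that sends a partial query to the Autosuggest endpoint. The global endpoint and the `Ocp-Apim-Subscription-Key` header follow the quickstarts later in this listing; the key value and the `en-US` market are placeholder assumptions.

```python
import requests

# Global endpoint from the quickstarts; a custom subdomain endpoint also works.
endpoint = "https://api.cognitive.microsoft.com/bing/v7.0/Suggestions"
subscription_key = "<your-bing-autosuggest-key>"  # placeholder

params = {"mkt": "en-US", "q": "sail"}  # the partial query typed so far
headers = {"Ocp-Apim-Subscription-Key": subscription_key}

response = requests.get(endpoint, headers=headers, params=params)
response.raise_for_status()

# Print the suggested query strings from each suggestion group in the JSON response.
data = response.json()
for group in data.get("suggestionGroups", []):
    for suggestion in group.get("searchSuggestions", []):
        print(suggestion.get("displayText"))
```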
The **Bing** APIs support search actions that return results according to their
All endpoints support queries that return a specific language and/or location by longitude, latitude, and search radius. For complete information about the parameters supported by each endpoint, see the reference pages for each type.
-For examples of basic requests using the Autosuggest API, see [Autosuggest Quickstarts](/azure/cognitive-services/Bing-Autosuggest).
+For examples of basic requests using the Autosuggest API, see [Autosuggest Quickstarts](/azure/cognitive-services/bing-autosuggest).
## Bing Autosuggest API requests
cognitive-services Get Suggested Search Terms https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Bing-Autosuggest/get-suggested-search-terms.md
Last updated 12/18/2019
# What is Bing Autosuggest? If your application sends queries to any of the Bing Search APIs, you can use the Bing Autosuggest API to improve your users' search experience. The Bing Autosuggest API returns a list of suggested queries based on the partial query string in the search box. As characters are entered into the search box, you can display suggestions in a drop-down list.
Visit the [Bing Search API hub page](../bing-web-search/overview.md) to explore
Learn how to search the web by using the [Bing Web Search API](../bing-web-search/overview.md), and explore the other [Bing Search APIs](../bing-web-search/index.yml).
-Be sure to read [Bing Use and Display Requirements](../bing-web-search/use-display-requirements.md) so you don't break any of the rules about using the search results.
+Be sure to read [Bing Use and Display Requirements](../bing-web-search/use-display-requirements.md) so you don't break any of the rules about using the search results.
cognitive-services Language Support https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Bing-Autosuggest/language-support.md
Last updated 02/20/2019
# Language and region support for the Bing Autosuggest API The following lists the languages supported by Bing Autosuggest API.
The following lists the languages supported by Bing Autosuggest API.
## See also - [Azure Cognitive Services Documentation page](../index.yml)-- [Azure Cognitive Services Product page](https://azure.microsoft.com/services/cognitive-services/)
+- [Azure Cognitive Services Product page](https://azure.microsoft.com/services/cognitive-services/)
cognitive-services Client Libraries https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Bing-Autosuggest/quickstarts/client-libraries.md
# Quickstart: Use the Bing Autosuggest client library ::: zone pivot="programming-language-csharp"
cognitive-services Csharp https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Bing-Autosuggest/quickstarts/csharp.md
# Quickstart: Suggest search queries with the Bing Autosuggest REST API and C# Follow this quickstart to learn how to make calls to the Bing Autosuggest API and read the JSON response. This simple C# application sends a partial search query to the API, and returns suggestions for searches. While this application is written in C#, the API is a RESTful Web service compatible with most programming languages. The source code for this sample is available on [GitHub](https://github.com/Azure-Samples/cognitive-services-REST-api-samples/blob/master/dotnet/Search/BingAutosuggestv7.cs).
Follow this quickstart to learn how to make calls to the Bing Autosuggest API an
using System.Text; ```
-2. In a new class, create variables for your API host and path, [market code](/rest/api/cognitiveservices-bingsearch/bing-autosuggest-api-v7-reference#market-codes), and a partial search query. Use the global endpoint in the following code, or use the [custom subdomain](../../../cognitive-services/cognitive-services-custom-subdomains.md) endpoint displayed in the Azure portal for your resource.
+2. In a new class, create variables for your API host and path, [market code](/rest/api/cognitiveservices-bingsearch/bing-autosuggest-api-v7-reference#market-codes), and a partial search query. Use the global endpoint in the following code, or use the [custom subdomain](../../../ai-services/cognitive-services-custom-subdomains.md) endpoint displayed in the Azure portal for your resource.
```csharp static string host = "https://api.cognitive.microsoft.com";
cognitive-services Java https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Bing-Autosuggest/quickstarts/java.md
# Quickstart: Suggest search queries with the Bing Autosuggest REST API and Java Follow this quickstart to learn how to make calls to the Bing Autosuggest API and read the JSON response. This simple Java application sends a partial search query to the API, and returns suggestions for searches. While this application is written in Java, the API is a RESTful Web service compatible with most programming languages. The source code for this sample is available on [GitHub](https://github.com/Azure-Samples/cognitive-services-REST-api-samples/blob/master/java/Search/BingAutosuggestv7.java)
Follow this quickstart to learn how to make calls to the Bing Autosuggest API an
import com.google.gson.JsonParser; ```
-2. Create variables for your subscription key, the API host and path, your [market code](/rest/api/cognitiveservices-bingsearch/bing-autosuggest-api-v7-reference#market-codes), and a search query. Use the global endpoint below, or use the [custom subdomain](../../../cognitive-services/cognitive-services-custom-subdomains.md) endpoint displayed in the Azure portal for your resource.
+2. Create variables for your subscription key, the API host and path, your [market code](/rest/api/cognitiveservices-bingsearch/bing-autosuggest-api-v7-reference#market-codes), and a search query. Use the global endpoint below, or use the [custom subdomain](../../../ai-services/cognitive-services-custom-subdomains.md) endpoint displayed in the Azure portal for your resource.
```java static String subscriptionKey = "enter key here";
cognitive-services Nodejs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Bing-Autosuggest/quickstarts/nodejs.md
# Quickstart: Suggest search queries with the Bing Autosuggest REST API and Node.js Follow this quickstart to learn how to make calls to the Bing Autosuggest API and read the JSON response. This simple Node.js application sends a partial search query to the API, and returns suggestions for searches. While this application is written in JavaScript, the API is a RESTful Web service compatible with most programming languages. The source code for this sample is available on [GitHub](https://github.com/Azure-Samples/cognitive-services-REST-api-samples/blob/master/nodejs/Search/BingAutosuggestv7.js)
Follow this quickstart to learn how to make calls to the Bing Autosuggest API an
let https = require ('https'); ```
-2. Create variables for the API endpoint host and path, your subscription key, [market code](/rest/api/cognitiveservices-bingsearch/bing-autosuggest-api-v7-reference#market-codes), and a search term. Use the global endpoint in the following code, or use the [custom subdomain](../../../cognitive-services/cognitive-services-custom-subdomains.md) endpoint displayed in the Azure portal for your resource.
+2. Create variables for the API endpoint host and path, your subscription key, [market code](/rest/api/cognitiveservices-bingsearch/bing-autosuggest-api-v7-reference#market-codes), and a search term. Use the global endpoint in the following code, or use the [custom subdomain](../../../ai-services/cognitive-services-custom-subdomains.md) endpoint displayed in the Azure portal for your resource.
```javascript // Replace the subscriptionKey string value with your valid subscription key.
cognitive-services Php https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Bing-Autosuggest/quickstarts/php.md
# Quickstart: Suggest search queries with the Bing Autosuggest REST API and PHP Follow this quickstart to learn how to make calls to the Bing Autosuggest API and read the JSON response. This simple PHP application sends a partial search query to the API, and returns suggestions for searches. While this application is written in PHP, the API is a RESTful Web service compatible with most programming languages.
Follow this quickstart to learn how to make calls to the Bing Autosuggest API an
1. Create a new PHP project in your favorite IDE. 2. Add the code provided below. 3. Replace the `subscriptionKey` value with an access key that's valid for your subscription.
-4. Use the global endpoint in the code, or use the [custom subdomain](../../../cognitive-services/cognitive-services-custom-subdomains.md) endpoint displayed in the Azure portal for your resource.
+4. Use the global endpoint in the code, or use the [custom subdomain](../../../ai-services/cognitive-services-custom-subdomains.md) endpoint displayed in the Azure portal for your resource.
5. Run the program. ```php
cognitive-services Python https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Bing-Autosuggest/quickstarts/python.md
# Quickstart: Suggest search queries with the Bing Autosuggest REST API and Python Follow this quickstart to learn how to make calls to the Bing Autosuggest API and read the JSON response. This simple Python application sends a partial search query to the API, and returns suggestions for searches. While this application is written in Python, the API is a RESTful Web service compatible with most programming languages. The source code for this sample is available on [GitHub](https://github.com/Azure-Samples/cognitive-services-REST-api-samples/blob/master/python/Search/BingAutosuggestv7.py)
Follow this quickstart to learn how to make calls to the Bing Autosuggest API an
import http.client, urllib.parse, json ```
-2. Create variables for your API host and path, [market code](/rest/api/cognitiveservices-bingsearch/bing-autosuggest-api-v7-reference#market-codes), and partial search query. Use the global endpoint in the following code, or use the [custom subdomain](../../../cognitive-services/cognitive-services-custom-subdomains.md) endpoint displayed in the Azure portal for your resource.
+2. Create variables for your API host and path, [market code](/rest/api/cognitiveservices-bingsearch/bing-autosuggest-api-v7-reference#market-codes), and partial search query. Use the global endpoint in the following code, or use the [custom subdomain](../../../ai-services/cognitive-services-custom-subdomains.md) endpoint displayed in the Azure portal for your resource.
```python subscriptionKey = 'enter key here'
cognitive-services Ruby https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Bing-Autosuggest/quickstarts/ruby.md
# Quickstart: Suggest search queries with the Bing Autosuggest REST API and Ruby Follow this quickstart to learn how to make calls to the Bing Autosuggest API and read the JSON response. This simple Ruby application sends a partial search query to the API, and returns suggestions for searches. While this application is written in Ruby, the API is a RESTful Web service compatible with most programming languages.
Follow this quickstart to learn how to make calls to the Bing Autosuggest API an
require 'json' ```
-2. Create variables for your API host and path, [market code](/rest/api/cognitiveservices-bingsearch/bing-autosuggest-api-v7-reference#market-codes), and partial search query. Use the global endpoint in the following code, or use the [custom subdomain](../../../cognitive-services/cognitive-services-custom-subdomains.md) endpoint displayed in the Azure portal for your resource.
+2. Create variables for your API host and path, [market code](/rest/api/cognitiveservices-bingsearch/bing-autosuggest-api-v7-reference#market-codes), and partial search query. Use the global endpoint in the following code, or use the [custom subdomain](../../../ai-services/cognitive-services-custom-subdomains.md) endpoint displayed in the Azure portal for your resource.
```ruby subscriptionKey = 'enter your key here'
cognitive-services Autosuggest https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Bing-Autosuggest/tutorials/autosuggest.md
# Tutorial: Get search suggestions on a web page In this tutorial, we'll build a Web page that allows users to query the Bing Autosuggest API.
function bingAutosuggest(query, key) {
``` Specify the Bing Autosuggest API endpoint and declare an XMLHttpRequest object, which we will
-use to send requests. You can use the global endpoint below, or the [custom subdomain](../../../cognitive-services/cognitive-services-custom-subdomains.md) endpoint displayed in the Azure portal for your resource.
+use to send requests. You can use the global endpoint below, or the [custom subdomain](../../../ai-services/cognitive-services-custom-subdomains.md) endpoint displayed in the Azure portal for your resource.
```html var endpoint = "https://api.cognitive.microsoft.com/bing/v7.0/Suggestions";
cognitive-services Call Endpoint Csharp https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Bing-Custom-Search/call-endpoint-csharp.md
# Quickstart: Call your Bing Custom Search endpoint using C# Use this quickstart to learn how to request search results from your Bing Custom Search instance. Although this application is written in C#, the Bing Custom Search API is a RESTful web service compatible with most programming languages. The source code for this sample is available on [GitHub](https://github.com/Azure-Samples/cognitive-services-REST-api-samples/blob/master/dotnet/Search/BingCustomSearchv7.cs).
Use this quickstart to learn how to request search results from your Bing Custom
var searchTerm = args.Length > 0 ? args[0]:"microsoft"; ```
-4. Construct the request URL by appending your search term to the `q=` query parameter, and your search instance's custom configuration ID to the `customconfig=` parameter. Separate the parameters with an ampersand (`&`). For the `url` variable value, you can use the global endpoint in the following code, or use the [custom subdomain](../../cognitive-services/cognitive-services-custom-subdomains.md) endpoint displayed in the Azure portal for your resource.
+4. Construct the request URL by appending your search term to the `q=` query parameter, and your search instance's custom configuration ID to the `customconfig=` parameter. Separate the parameters with an ampersand (`&`). For the `url` variable value, you can use the global endpoint in the following code, or use the [custom subdomain](../../ai-services/cognitive-services-custom-subdomains.md) endpoint displayed in the Azure portal for your resource.
```csharp var url = "https://api.cognitive.microsoft.com/bingcustomsearch/v7.0/search?" +
cognitive-services Call Endpoint Java https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Bing-Custom-Search/call-endpoint-java.md
# Quickstart: Call your Bing Custom Search endpoint using Java Use this quickstart to learn how to request search results from your Bing Custom Search instance. Although this application is written in Java, the Bing Custom Search API is a RESTful web service compatible with most programming languages. The source code for this sample is available on [GitHub](https://github.com/Azure-Samples/cognitive-services-REST-api-samples/blob/master/java/Search/BingCustomSearchv7.java).
Use this quickstart to learn how to request search results from your Bing Custom
import com.google.gson.JsonParser; ```
-2. Create a class named `CustomSrchJava`, and then create variables for your subscription key, custom search endpoint, and search instance's custom configuration ID. You can use the global endpoint in the following code, or use the [custom subdomain](../../cognitive-services/cognitive-services-custom-subdomains.md) endpoint displayed in the Azure portal for your resource.
+2. Create a class named `CustomSrchJava`, and then create variables for your subscription key, custom search endpoint, and search instance's custom configuration ID. You can use the global endpoint in the following code, or use the [custom subdomain](../../ai-services/cognitive-services-custom-subdomains.md) endpoint displayed in the Azure portal for your resource.
```java public class CustomSrchJava { static String host = "https://api.cognitive.microsoft.com";
cognitive-services Call Endpoint Nodejs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Bing-Custom-Search/call-endpoint-nodejs.md
# Quickstart: Call your Bing Custom Search endpoint using Node.js Use this quickstart to learn how to request search results from your Bing Custom Search instance. Although this application is written in JavaScript, the Bing Custom Search API is a RESTful web service compatible with most programming languages. The source code for this sample is available on [GitHub](https://github.com/Azure-Samples/cognitive-services-REST-api-samples/blob/master/nodejs/Search/BingCustomSearchv7.js).
Use this quickstart to learn how to request search results from your Bing Custom
## Send and receive a search request
-1. Create a variable to store the information being sent in your request. Construct the request URL by appending your search term to the `q=` query parameter, and your search instance's custom configuration ID to the `customconfig=` parameter. Separate the parameters with an ampersand (`&`). You can use the global endpoint in the following code, or use the [custom subdomain](../../cognitive-services/cognitive-services-custom-subdomains.md) endpoint displayed in the Azure portal for your resource.
+1. Create a variable to store the information being sent in your request. Construct the request URL by appending your search term to the `q=` query parameter, and your search instance's custom configuration ID to the `customconfig=` parameter. Separate the parameters with an ampersand (`&`). You can use the global endpoint in the following code, or use the [custom subdomain](../../ai-services/cognitive-services-custom-subdomains.md) endpoint displayed in the Azure portal for your resource.
```javascript var info = {
cognitive-services Call Endpoint Python https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Bing-Custom-Search/call-endpoint-python.md
# Quickstart: Call your Bing Custom Search endpoint using Python Use this quickstart to learn how to request search results from your Bing Custom Search instance. Although this application is written in Python, the Bing Custom Search API is a RESTful web service compatible with most programming languages. The source code for this sample is available on [GitHub](https://github.com/Azure-Samples/cognitive-services-REST-api-samples/blob/master/python/Search/BingCustomSearchv7.py).
Use this quickstart to learn how to request search results from your Bing Custom
## Send and receive a search request
-1. Construct the request URL by appending your search term to the `q=` query parameter, and your search instance's custom configuration ID to the `customconfig=` parameter. Separate the parameters with an ampersand (`&`). You can use the global endpoint in the following code, or use the [custom subdomain](../../cognitive-services/cognitive-services-custom-subdomains.md) endpoint displayed in the Azure portal for your resource.
+1. Construct the request URL by appending your search term to the `q=` query parameter, and your search instance's custom configuration ID to the `customconfig=` parameter. Separate the parameters with an ampersand (`&`). You can use the global endpoint in the following code, or use the [custom subdomain](../../ai-services/cognitive-services-custom-subdomains.md) endpoint displayed in the Azure portal for your resource.
```python url = 'https://api.cognitive.microsoft.com/bingcustomsearch/v7.0/search?' + 'q=' + searchTerm + '&' + 'customconfig=' + customConfigId
cognitive-services Define Custom Suggestions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Bing-Custom-Search/define-custom-suggestions.md
# Configure your custom autosuggest experience Custom Autosuggest returns a list of suggested search query strings that are relevant to your search experience. The suggested query strings are based on a partial query string that the user provides in the search box. The list will contain a maximum of 10 suggestions.
If the user selects a suggested query string from the dropdown list, use the que
- [Get custom suggestions]() - [Search your custom instance](./search-your-custom-view.md)-- [Configure and consume custom hosted UI](./hosted-ui.md)
+- [Configure and consume custom hosted UI](./hosted-ui.md)
cognitive-services Define Your Custom View https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Bing-Custom-Search/define-your-custom-view.md
# Configure your Bing Custom Search experience A Custom Search instance lets you tailor the search experience to include content only from websites that your users care about. Instead of performing a web-wide search, Bing searches only the slices of the web that interest you. To create your custom view of the web, use the Bing Custom Search [portal](https://www.customsearch.ai).
After adding web slices to the **Active** list, the Bing Custom Search portal wi
You can search for images and videos similarly to web content by using the [Bing Custom Image Search API](/rest/api/cognitiveservices-bingsearch/bing-custom-images-api-v7-reference) or the [Bing Custom Video Search API](/rest/api/cognitiveservices-bingsearch/bing-custom-videos-api-v7-reference). You can display these results with the [hosted UI](hosted-ui.md), or the APIs.
-These APIs are similar to the non-custom [Bing Image Search](../Bing-Image-Search/overview.md) and [Bing Video Search](../bing-video-search/overview.md) APIs, but search the entire web, and do not require the `customConfig` query parameter. See these documentation sets for more information on working with images and videos.
+These APIs are similar to the non-custom [Bing Image Search](../bing-image-search/overview.md) and [Bing Video Search](../bing-video-search/overview.md) APIs, but search the entire web, and do not require the `customConfig` query parameter. See these documentation sets for more information on working with images and videos.
## Test your search instance with the Preview pane
If you subscribed to Custom Search at the appropriate level (see the [pricing pa
- [Call your custom search](./search-your-custom-view.md) - [Configure your hosted UI experience](./hosted-ui.md) - [Use decoration markers to highlight text](../bing-web-search/hit-highlighting.md)-- [Page webpages](../bing-web-search/paging-search-results.md)
+- [Page webpages](../bing-web-search/paging-search-results.md)
cognitive-services Endpoint Custom https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Bing-Custom-Search/endpoint-custom.md
# Custom Search Bing Custom Search enables you to create tailored search experiences for topics that you care about. Your users see search results tailored to the content they care about instead of having to page through search results that have irrelevant content. ## Custom Search Endpoint
For information about configuring a Custom Search instance, see [Configure your
The **Bing** APIs support search actions that return results according to their type. All search endpoints return results as JSON response objects. All endpoints support queries that return a specific language and/or location by longitude, latitude, and search radius. For complete information about the parameters supported by each endpoint, see the reference pages for each type.
-For examples of basic requests using the Custom Search API, see [Custom Search Quick-starts](/azure/cognitive-services/bing-custom-search/)
+For examples of basic requests using the Custom Search API, see [Custom Search Quick-starts](/azure/cognitive-services/bing-custom-search/)
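To make the request shape above concrete, the following is a minimal Python sketch of a Custom Search call that passes the `q` and `customconfig` parameters described in the quickstarts. The key, configuration ID, and the `webPages.value` fields it prints are placeholders and assumed response fields, so rely on the linked quickstarts for the supported samples.

```python
import requests

# Placeholder values; use your own key and the configuration ID from the Custom Search portal.
subscription_key = 'enter key here'
custom_config_id = 'your-custom-config-id'
search_term = 'microsoft'

# Global endpoint as shown in the quickstarts; a custom subdomain endpoint also works.
url = 'https://api.cognitive.microsoft.com/bingcustomsearch/v7.0/search'
headers = {'Ocp-Apim-Subscription-Key': subscription_key}
params = {'q': search_term, 'customconfig': custom_config_id}

response = requests.get(url, headers=headers, params=params)
response.raise_for_status()

# Web results are expected under webPages.value (assumed response shape).
for page in response.json().get('webPages', {}).get('value', []):
    print(page['name'], page['url'])
```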
cognitive-services Get Images From Instance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Bing-Custom-Search/get-images-from-instance.md
Last updated 09/10/2018
# Get images from your custom view Bing Custom Images Search lets you enrich your custom search experience with images. Similar to web results, custom search supports searching for images in your instance's list of websites. You can get the images using the Bing Custom Images Search API or through the Hosted UI feature. The Hosted UI feature is simple to use and recommended for getting your search experience up and running in short order. For information about configuring your Hosted UI to include images, see [Configure your hosted UI experience](hosted-ui.md).
-If you want more control over displaying the search results, you can use the Bing Custom Images Search API. Because calling the API is similar to calling the Bing Image Search API, checkout [Bing Image Search](../Bing-Image-Search/overview.md) for examples calling the API. But before you do that, familiarize yourself with the [Custom Images Search API reference](/rest/api/cognitiveservices-bingsearch/bing-custom-images-api-v7-reference) content. The main differences are the supported query parameters (you must include the customConfig query parameter) and the endpoint you send requests to.
+If you want more control over displaying the search results, you can use the Bing Custom Images Search API. Because calling the API is similar to calling the Bing Image Search API, check out [Bing Image Search](../bing-image-search/overview.md) for examples of calling the API. But before you do that, familiarize yourself with the [Custom Images Search API reference](/rest/api/cognitiveservices-bingsearch/bing-custom-images-api-v7-reference) content. The main differences are the supported query parameters (you must include the customConfig query parameter) and the endpoint you send requests to.
<!-- ## Next steps [Call your custom view](search-your-custom-view.md)>
+-->
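The call pattern is the same as the Custom Search sketch shown earlier in this listing: add the customConfig ID as a query parameter and send the request to the Custom Images Search endpoint. The exact endpoint path is not shown in this excerpt, so it is left as a placeholder in the rough Python sketch below, and the response fields are assumptions; take both from the API reference linked above.

```python
import requests

subscription_key = 'enter key here'
custom_config_id = 'your-custom-config-id'

# Placeholder: paste the Custom Images Search endpoint from the API reference here.
endpoint = '<custom images search endpoint>'

headers = {'Ocp-Apim-Subscription-Key': subscription_key}
params = {'q': 'surfing', 'customConfig': custom_config_id}

response = requests.get(endpoint, headers=headers, params=params)
response.raise_for_status()

# Image results are assumed to arrive under value[] with contentUrl/thumbnailUrl fields,
# mirroring the regular Image Search response.
for image in response.json().get('value', []):
    print(image.get('contentUrl'))
```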
cognitive-services Get Videos From Instance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Bing-Custom-Search/get-videos-from-instance.md
Last updated 09/10/2018
# Get videos from your custom view Bing Custom Videos Search lets you enrich your custom search experience with videos. Similar to web results, custom search supports searching for videos in your instance's list of websites. You can get the videos using the Bing Custom Videos Search API or through the Hosted UI feature. The Hosted UI feature is simple to use and recommended for getting your search experience up and running in short order. For information about configuring your Hosted UI to include videos, see [Configure your hosted UI experience](hosted-ui.md).
If you want more control over displaying the search results, you can use the Bin
## Next steps [Call your custom view](search-your-custom-view.md)>
+-->
cognitive-services Hosted Ui https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Bing-Custom-Search/hosted-ui.md
# Configure your hosted UI experience Bing Custom Search provides a hosted UI that you can easily integrate into your webpages and web applications as a JavaScript code snippet. Using the Bing Custom Search portal, you can configure the layout, color, and search options of the UI.
cognitive-services Language Support https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Bing-Custom-Search/language-support.md
# Language and region support for the Bing Custom Search API The Bing Custom Search API supports more than three dozen countries/regions, many with more than one language.
The `Accept-Language` header and the `setLang` query parameter are mutually excl
|Türkiye|Turkish|tr-TR| |United Kingdom|English|en-GB| |United States|English|en-US|
-|United States|Spanish|es-US|
+|United States|Spanish|es-US|
cognitive-services Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Bing-Custom-Search/overview.md
# What is the Bing Custom Search API? The Bing Custom Search API enables you to create tailored ad-free search experiences for topics that you care about. You can specify the domains and webpages for Bing to search, as well as pin, boost, or demote specific content to create a custom view of the web and help your users quickly find relevant search results.
Familiarize yourself with the reference content for each of the custom search en
- [Custom Search API](/rest/api/cognitiveservices-bingsearch/bing-custom-search-api-v7-reference) - [Custom Image API](/rest/api/cognitiveservices-bingsearch/bing-custom-images-api-v7-reference) - [Custom Video API](/rest/api/cognitiveservices-bingsearch/bing-custom-videos-api-v7-reference)-- [Custom Autosuggest API](/rest/api/cognitiveservices-bingsearch/bing-custom-autosuggest-api-v7-reference)
+- [Custom Autosuggest API](/rest/api/cognitiveservices-bingsearch/bing-custom-autosuggest-api-v7-reference)
cognitive-services Quick Start https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Bing-Custom-Search/quick-start.md
# Quickstart: Create your first Bing Custom Search instance To use Bing Custom Search, you need to create a custom search instance that defines your view or slice of the web. This instance contains the public domains, websites, and webpages that you want to search, along with any ranking adjustments you may want.
cognitive-services Client Libraries https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Bing-Custom-Search/quickstarts/client-libraries.md
# Quickstart: Use the Bing Custom Search client library ::: zone pivot="programming-language-csharp"
cognitive-services Search Your Custom View https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Bing-Custom-Search/search-your-custom-view.md
# Call your Bing Custom Search instance from the Portal After you've configured your custom search experience, you can test it from within the Bing Custom Search [portal](https://customsearch.ai).
You can change the subscription associated with your Bing Custom Search instance
- [Call your custom view with NodeJs](./call-endpoint-nodejs.md) - [Call your custom view with Python](./call-endpoint-python.md) -- [Call your custom view with the C# SDK](./quickstarts/client-libraries.md?pivots=programming-language-csharp%253fpivots%253dprogramming-language-csharp)
+- [Call your custom view with the C# SDK](./quickstarts/client-libraries.md?pivots=programming-language-csharp%253fpivots%253dprogramming-language-csharp)
cognitive-services Share Your Custom Search https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Bing-Custom-Search/share-your-custom-search.md
# Share your Custom Search instance You can easily allow collaborative editing and testing of your instance by sharing it with members of your team. You can share your instance with anyone using just their email address. To share an instance:
To stop sharing an instance with someone, use the remove icon to remove their em
## Next steps -- [Configure your Custom Autosuggest experience](define-custom-suggestions.md)
+- [Configure your Custom Autosuggest experience](define-custom-suggestions.md)
cognitive-services Custom Search Web Page https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Bing-Custom-Search/tutorials/custom-search-web-page.md
# Tutorial: Build a Custom Search web page Bing Custom Search enables you to create tailored search experiences for topics that you care about. For example, if you own a martial arts website that provides a search experience, you can specify the domains, sub-sites, and webpages that Bing searches. Your users see search results tailored to the content they care about instead of paging through general search results that may contain irrelevant content.
Performing a search renders results like this:
## Next steps > [!div class="nextstepaction"]
-> [Call Bing Custom Search endpoint (C#)](../call-endpoint-csharp.md)
+> [Call Bing Custom Search endpoint (C#)](../call-endpoint-csharp.md)
cognitive-services Search For Entities https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Bing-Entities-Search/concepts/search-for-entities.md
# Searching for entities with the Bing Entity API ## Suggest search terms with the Bing Autosuggest API
If you are not sure if your experience can be considered a search-like experienc
## Next steps
-* Try a [Quickstart](../quickstarts/csharp.md) to get started searching for entities with the Bing Entity Search API.
+* Try a [Quickstart](../quickstarts/csharp.md) to get started searching for entities with the Bing Entity Search API.
cognitive-services Sending Requests https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Bing-Entities-Search/concepts/sending-requests.md
Title: "Sending search requests to the Bing Entity Search API"-+ description: The Bing Entity Search API sends a search query to Bing and gets results that include entities and places.
# Sending search requests to the Bing Entity Search API The Bing Entity Search API sends a search query to Bing and gets results that include entities and places. Place results include restaurants, hotels, or other local businesses. For places, the query can specify the name of a local business or it can ask for a list (for example, restaurants near me). Entity results include persons, places, or things. Places in this context include tourist attractions, states, countries/regions, and so on.
BingAPIs-Market: en-US
## Next steps * [Searching for entities with the Bing Entity API](search-for-entities.md)
-* [Bing API Use and Display Requirements](../../bing-web-search/use-display-requirements.md)
+* [Bing API Use and Display Requirements](../../bing-web-search/use-display-requirements.md)
cognitive-services Entity Search Endpoint https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Bing-Entities-Search/entity-search-endpoint.md
# Bing Entity Search API endpoint The Bing Entity Search API has one endpoint that returns entities from the Web based on a query. These search results are returned in JSON.
To get entity results using the **Bing API**, send a `GET` request to the follow
## See also
-For more information about headers, parameters, market codes, response objects, errors and more, see the [Bing Entity Search API v7](/rest/api/cognitiveservices-bingsearch/bing-entities-api-v7-reference) reference article.
+For more information about headers, parameters, market codes, response objects, errors and more, see the [Bing Entity Search API v7](/rest/api/cognitiveservices-bingsearch/bing-entities-api-v7-reference) reference article.
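As a rough illustration of the endpoint described above, the sketch below sends an entity query with Python. The host matches the one used in the Ruby quickstart later in this listing; the `/v7.0/entities` path, the `mkt` parameter, and the `entities.value`/`places.value` response fields are assumptions to verify against the v7 reference.

```python
import requests

subscription_key = 'enter key here'

# Host as used in the quickstarts; path and parameters below are assumptions to check
# against the Bing Entity Search API v7 reference.
url = 'https://api.bing.microsoft.com/v7.0/entities'
headers = {'Ocp-Apim-Subscription-Key': subscription_key}
params = {'q': 'italian restaurants near me', 'mkt': 'en-US'}

response = requests.get(url, headers=headers, params=params)
response.raise_for_status()
data = response.json()

# Entity and place answers are expected under entities.value and places.value.
for entity in data.get('entities', {}).get('value', []):
    print(entity.get('name'), '-', entity.get('description', ''))
for place in data.get('places', {}).get('value', []):
    print(place.get('name'))
```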
cognitive-services Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Bing-Entities-Search/overview.md
Last updated 12/18/2019
# What is Bing Entity Search API? The Bing Entity Search API sends a search query to Bing and gets results that include entities and places. Place results include restaurants, hotels, or other local businesses. Bing returns places if the query specifies the name of a local business or asks for a type of business (for example, restaurants near me). Bing returns entities if the query specifies well-known people, places (tourist attractions, states, countries/regions, etc.), or things.
The Bing Entity Search API is a RESTful web service, making it easy to call from
## Next steps
-* Try the [interactive demo](https://azure.microsoft.com/services/cognitive-services/bing-entity-search-api/) for the Bing Entity Search API.
+* Try the [interactive demo](https://azure.microsoft.com/services/cognitive-services/Bing-entity-search-api/) for the Bing Entity Search API.
* To get started quickly with your first request, try a [Quickstart](quickstarts/csharp.md). * The [Bing Entity Search API v7](/rest/api/cognitiveservices-bingsearch/bing-entities-api-v7-reference) reference section. * The [Bing Use and Display Requirements](../bing-web-search/use-display-requirements.md) specify acceptable uses of the content and information gained through the Bing search APIs.
-* Visit the [Bing Search API hub page](../bing-web-search/overview.md) to explore the other available APIs.
+* Visit the [Bing Search API hub page](../bing-web-search/overview.md) to explore the other available APIs.
cognitive-services Client Libraries https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Bing-Entities-Search/quickstarts/client-libraries.md
# Quickstart: Use the Bing Entity Search client library ::: zone pivot="programming-language-csharp"
cognitive-services Csharp https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Bing-Entities-Search/quickstarts/csharp.md
# Quickstart: Send a search request to the Bing Entity Search REST API using C# Use this quickstart to make your first call to the Bing Entity Search API and view the JSON response. This simple C# application sends a search query to the API, and displays the response. The source code for this application is available on [GitHub](https://github.com/Azure-Samples/cognitive-services-REST-api-samples/blob/master/dotnet/Search/BingEntitySearchv7.cs).
Although this application is written in C#, the API is a RESTful Web service com
using System.Text; ```
-2. Create a new class, and add variables for the API endpoint, your subscription key, and the query you want to search. You can use the global endpoint in the following code, or use the [custom subdomain](../../../cognitive-services/cognitive-services-custom-subdomains.md) endpoint displayed in the Azure portal for your resource.
+2. Create a new class, and add variables for the API endpoint, your subscription key, and the query you want to search. You can use the global endpoint in the following code, or use the [custom subdomain](../../../ai-services/cognitive-services-custom-subdomains.md) endpoint displayed in the Azure portal for your resource.
```csharp namespace EntitySearchSample
cognitive-services Java https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Bing-Entities-Search/quickstarts/java.md
# Quickstart: Send a search request to the Bing Entity Search REST API using Java Use this quickstart to make your first call to the Bing Entity Search API and view the JSON response. This simple Java application sends a search query to the API, and displays the response.
Although this application is written in Java, the API is a RESTful Web service c
import com.google.gson.JsonParser; ```
-2. In a new class, create variables for the API endpoint, your subscription key, and a search query. You can use the global endpoint in the following code, or use the [custom subdomain](../../../cognitive-services/cognitive-services-custom-subdomains.md) endpoint displayed in the Azure portal for your resource.
+2. In a new class, create variables for the API endpoint, your subscription key, and a search query. You can use the global endpoint in the following code, or use the [custom subdomain](../../../ai-services/cognitive-services-custom-subdomains.md) endpoint displayed in the Azure portal for your resource.
```java public class EntitySearch {
cognitive-services Nodejs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Bing-Entities-Search/quickstarts/nodejs.md
# Quickstart: Send a search request to the Bing Entity Search REST API using Node.js Use this quickstart to make your first call to the Bing Entity Search API and view the JSON response. This simple JavaScript application sends a search query to the API, and displays the response. The source code for this sample is available on [GitHub](https://github.com/Azure-Samples/cognitive-services-REST-api-samples/blob/master/nodejs/Search/BingEntitySearchv7.js).
Although this application is written in JavaScript, the API is a RESTful Web ser
let https = require ('https'); ```
-2. Create variables for the API endpoint, your subscription key, and search query. You can use the global endpoint in the following code, or use the [custom subdomain](../../../cognitive-services/cognitive-services-custom-subdomains.md) endpoint displayed in the Azure portal for your resource.
+2. Create variables for the API endpoint, your subscription key, and search query. You can use the global endpoint in the following code, or use the [custom subdomain](../../../ai-services/cognitive-services-custom-subdomains.md) endpoint displayed in the Azure portal for your resource.
```javascript let subscriptionKey = 'ENTER YOUR KEY HERE';
cognitive-services Php https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Bing-Entities-Search/quickstarts/php.md
# Quickstart: Send a search request to the Bing Entity Search REST API using PHP Use this quickstart to make your first call to the Bing Entity Search API and view the JSON response. This simple PHP application sends a search query to the API, and displays the response.
To run this application, follow these steps:
1. Create a new PHP project in your favorite IDE. 2. Add the code provided below. 3. Replace the `key` value with an access key valid for your subscription.
-4. You can use the global endpoint in the following code, or use the [custom subdomain](../../../cognitive-services/cognitive-services-custom-subdomains.md) endpoint displayed in the Azure portal for your resource.
+4. You can use the global endpoint in the following code, or use the [custom subdomain](../../../ai-services/cognitive-services-custom-subdomains.md) endpoint displayed in the Azure portal for your resource.
5. Run the program. ```php
cognitive-services Python https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Bing-Entities-Search/quickstarts/python.md
# Quickstart: Send a search request to the Bing Entity Search REST API using Python Use this quickstart to make your first call to the Bing Entity Search API and view the JSON response. This simple Python application sends a search query to the API, and displays the response. The source code for this sample is available on [GitHub](https://github.com/Azure-Samples/cognitive-services-REST-api-samples/blob/master/python/Search/BingEntitySearchv7.py).
Although this application is written in Python, the API is a RESTful Web service
## Create and initialize the application
-1. Create a new Python file in your favorite IDE or editor, and add the following imports. Create variables for your subscription key, endpoint, market, and search query. You can use the global endpoint in the following code, or use the [custom subdomain](../../../cognitive-services/cognitive-services-custom-subdomains.md) endpoint displayed in the Azure portal for your resource.
+1. Create a new Python file in your favorite IDE or editor, and add the following imports. Create variables for your subscription key, endpoint, market, and search query. You can use the global endpoint in the following code, or use the [custom subdomain](../../../ai-services/cognitive-services-custom-subdomains.md) endpoint displayed in the Azure portal for your resource.
```python import http.client, urllib.parse
cognitive-services Ruby https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Bing-Entities-Search/quickstarts/ruby.md
# Quickstart: Send a search request to the Bing Entity Search REST API using Ruby Use this quickstart to make your first call to the Bing Entity Search API and view the JSON response. This simple Ruby application sends a search query to the API, and displays the response. The source code for this application is available on [GitHub](https://github.com/Azure-Samples/cognitive-services-REST-api-samples/blob/master/ruby/Search/BingEntitySearchv7.rb).
Although this application is written in Ruby, the API is a RESTful Web service c
require 'json' ```
-2. Create variables for your API endpoint, News search URL, your subscription key, and search query. You can use the global endpoint in the following code, or use the [custom subdomain](../../../cognitive-services/cognitive-services-custom-subdomains.md) endpoint displayed in the Azure portal for your resource.
+2. Create variables for your API endpoint, News search URL, your subscription key, and search query. You can use the global endpoint in the following code, or use the [custom subdomain](../../../ai-services/cognitive-services-custom-subdomains.md) endpoint displayed in the Azure portal for your resource.
```ruby host = 'https://api.bing.microsoft.com'
cognitive-services Rank Results https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Bing-Entities-Search/rank-results.md
# Using ranking to display entity search results Each entity search response includes a [RankingResponse](/rest/api/cognitiveservices/bing-web-api-v7-reference#rankingresponse) answer that specifies how you must display search results returned by the Bing Entity Search API. The ranking response groups results into pole, mainline, and sidebar content. The pole result is the most important or prominent result and should be displayed first. If you do not display the remaining results in a traditional mainline and sidebar format, you must give the mainline content higher visibility than the sidebar content.
Based on this ranking response, the sidebar would display the two entity results
## Next steps > [!div class="nextstepaction"]
-> [Create a single-page web app](tutorial-bing-entities-search-single-page-app.md)
+> [Create a single-page web app](tutorial-bing-entities-search-single-page-app.md)
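The ranking rules above can be applied mechanically once the JSON response is parsed. The sketch below assumes the response has already been loaded into a dict and that ranking items expose `answerType` and `resultIndex` fields as described in the RankingResponse reference; it simply flattens pole, mainline, and sidebar into a single display order.

```python
def display_order(response):
    """Return (section, answerType, resultIndex) tuples in display order:
    pole first, then mainline, then sidebar."""
    ranking = response.get('rankingResponse', {})
    ordered = []
    for section in ('pole', 'mainline', 'sidebar'):
        for item in ranking.get(section, {}).get('items', []):
            # resultIndex may be absent, meaning the whole answer should be shown.
            ordered.append((section, item.get('answerType'), item.get('resultIndex')))
    return ordered

# Usage with an already-parsed response dict:
# for section, answer_type, index in display_order(json_response):
#     print(section, answer_type, index)
```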
cognitive-services Tutorial Bing Entities Search Single Page App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Bing-Entities-Search/tutorial-bing-entities-search-single-page-app.md
# Tutorial: Single-page web app The Bing Entity Search API lets you search the Web for information about *entities* and *places.* You may request either kind of result, or both, in a given query. The definitions of places and entities are provided below.
The tutorial page is entirely self-contained; it does not use any external frame
In this tutorial, we discuss only selected portions of the source code. The full source code is available [on a separate page](). Copy and paste this code into a text editor and save it as `bing.html`. > [!NOTE]
-> This tutorial is substantially similar to the [single-page Bing Web Search app tutorial](../Bing-Web-Search/tutorial-bing-web-search-single-page-app.md), but deals only with entity search results.
+> This tutorial is substantially similar to the [single-page Bing Web Search app tutorial](../bing-web-search/tutorial-bing-web-search-single-page-app.md), but deals only with entity search results.
## Prerequisites
The HTML also contains the divisions (HTML `<div>` tags) where the search result
To avoid having to include the Bing Search and Bing Maps API subscription keys in the code, we use the browser's persistent storage to store them. If either key has not been stored, we prompt for it and store it for later use. If the key is later rejected by the API, we invalidate the stored key so the user is asked for it upon their next search.
-We define `storeValue` and `retrieveValue` functions that use either the `localStorage` object (if the browser supports it) or a cookie. Our `getSubscriptionKey()` function uses these functions to store and retrieve the user's key. You can use the global endpoint below, or the [custom subdomain](../../cognitive-services/cognitive-services-custom-subdomains.md) endpoint displayed in the Azure portal for your resource.
+We define `storeValue` and `retrieveValue` functions that use either the `localStorage` object (if the browser supports it) or a cookie. Our `getSubscriptionKey()` function uses these functions to store and retrieve the user's key. You can use the global endpoint below, or the [custom subdomain](../../ai-services/cognitive-services-custom-subdomains.md) endpoint displayed in the Azure portal for your resource.
```javascript // cookie names for data we store
cognitive-services Bing Image Search Resource Faq https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Bing-Image-Search/bing-image-search-resource-faq.md
# Frequently asked questions (FAQ) about the Bing Image Search API Find answers to commonly asked questions about concepts, code, and scenarios related to the Bing Image Search API for Azure Cognitive Services.
cognitive-services Bing Image Upgrade Guide V5 To V7 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Bing-Image-Search/bing-image-upgrade-guide-v5-to-v7.md
Last updated 01/05/2022
# Bing Image Search API v7 upgrade guide This upgrade guide identifies the changes between version 5 and version 7 of the Bing Image Search API. Use this guide to help you identify the parts of your application that you need to update to use version 7.
Blocked|InvalidRequest.Blocked
- Added the `name` field to the [ImageGallery](/rest/api/cognitiveservices-bingsearch/bing-images-api-v7-reference#imagegallery) object. -- Added `similarTerms` to the [Images](/rest/api/cognitiveservices-bingsearch/bing-images-api-v7-reference#images) object. This field contains a list of terms that are similar in meaning to the user's query string.
+- Added `similarTerms` to the [Images](/rest/api/cognitiveservices-bingsearch/bing-images-api-v7-reference#images) object. This field contains a list of terms that are similar in meaning to the user's query string.
cognitive-services Bing Image Search Get Images https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Bing-Image-Search/concepts/bing-image-search-get-images.md
# Get images from the web with the Bing Image Search API When you use the Bing Image Search REST API, you can get images from the web that are related to your search term by sending the following GET request:
When you call the Bing Image Search API, Bing returns a list of results. The lis
## Next steps
-If you haven't tried the Bing Image Search API before, try a [quickstart](../quickstarts/csharp.md). If you're looking for something more complex, try the tutorial to create a [single-page web app](../tutorial-bing-image-search-single-page-app.md).
+If you haven't tried the Bing Image Search API before, try a [quickstart](../quickstarts/csharp.md). If you're looking for something more complex, try the tutorial to create a [single-page web app](../tutorial-bing-image-search-single-page-app.md).
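Before jumping to a quickstart, the following compact Python sketch shows the GET request described above, using the global endpoint that appears in the PHP quickstart later in this listing; the key and query values are placeholders, and the `value[].contentUrl` field it reads is the assumed response shape.

```python
import requests

subscription_key = 'enter key here'
search_url = 'https://api.cognitive.microsoft.com/bing/v7.0/images/search'

headers = {'Ocp-Apim-Subscription-Key': subscription_key}
params = {'q': 'puppies', 'count': 10, 'mkt': 'en-US'}

response = requests.get(search_url, headers=headers, params=params)
response.raise_for_status()

# Image results are returned under value[]; print the URL of the first image,
# as the quickstarts in this section do.
results = response.json().get('value', [])
if results:
    print(results[0]['contentUrl'])
```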
cognitive-services Bing Image Search Sending Queries https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Bing-Image-Search/concepts/bing-image-search-sending-queries.md
# Customize and suggest image search queries Use this article to learn how to customize queries and suggest search terms to send to the Bing Image Search API.
The following shows an example Bing implementation that uses expanded queries. I
## Next steps
-If you haven't tried the Bing Image Search API before, try a [quickstart](../quickstarts/csharp.md). If you're looking for something more complex, try the tutorial to create a [single-page web app](../tutorial-bing-image-search-single-page-app.md).
+If you haven't tried the Bing Image Search API before, try a [quickstart](../quickstarts/csharp.md). If you're looking for something more complex, try the tutorial to create a [single-page web app](../tutorial-bing-image-search-single-page-app.md).
cognitive-services Gif Images https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Bing-Image-Search/gif-images.md
# Search for GIF images The Bing Image Search API also enables you to search across the entire web for the most relevant .gif images. Developers can integrate engaging GIFs in various conversation scenarios.
cognitive-services Image Insights https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Bing-Image-Search/image-insights.md
Last updated 01/05/2022
# Get image insights with the Bing Image Search API > [!IMPORTANT] > Instead of using the /images/details endpoint to get image insights, you should use [Visual Search](../bing-visual-search/overview.md) since it provides more comprehensive insights.
The following is the response to the previous request.
}] } }
-```
+```
cognitive-services Image Search Endpoint https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Bing-Image-Search/image-search-endpoint.md
# Endpoints for the Bing Image Search API The **Image Search API** includes three endpoints. Endpoint 1 returns images from the Web based on a query. Endpoint 2 returns [ImageInsights](/rest/api/cognitiveservices-bingsearch/bing-images-api-v7-reference#imageinsightsresponse). Endpoint 3 returns trending images.
The response to an image search request includes results as JSON objects. For ex
The **Bing** APIs support search actions that return results according to their type. All search endpoints return results as JSON response objects. All endpoints support queries that return a specific language and/or location by longitude, latitude, and search radius. For complete information about the parameters supported by each endpoint, see the reference pages for each type.
-For examples of basic requests using the Image search API, see [Image Search Quick-starts](./overview.md).
+For examples of basic requests using the Image search API, see [Image Search Quick-starts](./overview.md).
cognitive-services Language Support https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Bing-Image-Search/language-support.md
# Language and region support for the Bing Image Search API The Bing Image Search API supports more than three dozen countries/regions, many with more than one language. Specifying a country/region with a query serves primarily to refine search results based on interests in that country/region. Additionally, the results may contain links to Bing, and these links may localize the Bing user experience according to the specified country/regions or language.
Alternatively, you can specify the country/region using the `cc` query parameter
|United States|Spanish|es-US| ## Next steps
-For more information about the Bing News Search endpoints, see [News Image Search API v7 reference](/rest/api/cognitiveservices-bingsearch/bing-images-api-v7-reference).
+For more information about the Bing Image Search endpoints, see [Bing Image Search API v7 reference](/rest/api/cognitiveservices-bingsearch/bing-images-api-v7-reference).
cognitive-services Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Bing-Image-Search/overview.md
# What is the Bing Image Search API? The Bing Image Search API enables you to use Bing's image search capabilities in your application. By sending search queries to the API, you can get high-quality images similar to [bing.com/images](https://www.bing.com/images).
To quickly get started with your first API request, you can learn to:
* The [Sending and working with search queries](./concepts/bing-image-search-sending-queries.md) article describes how to make, customize, and pivot search queries.
-* Visit the [Bing Search API hub page](../bing-web-search/overview.md) to explore the other available APIs.
+* Visit the [Bing Search API hub page](../bing-web-search/overview.md) to explore the other available APIs.
cognitive-services Client Libraries https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Bing-Image-Search/quickstarts/client-libraries.md
# Quickstart: Use the Bing Image Search client library ::: zone pivot="programming-language-csharp"
cognitive-services Csharp https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Bing-Image-Search/quickstarts/csharp.md
# Quickstart: Search for images using the Bing Image Search REST API and C# Use this quickstart to learn how to send search requests to the Bing Image Search API. This C# application sends a search query to the API, and displays the URL of the first image in the results. Although this application is written in C#, the API is a RESTful web service compatible with most programming languages.
Use this quickstart to learn how to send search requests to the Bing Image Searc
using Newtonsoft.Json.Linq; ```
-2. Create variables for the API endpoint, your subscription key, and search term. For `uriBase`, you can use the global endpoint in the following code, or use the [custom subdomain](../../../cognitive-services/cognitive-services-custom-subdomains.md) endpoint displayed in the Azure portal for your resource.
+2. Create variables for the API endpoint, your subscription key, and search term. For `uriBase`, you can use the global endpoint in the following code, or use the [custom subdomain](../../../ai-services/cognitive-services-custom-subdomains.md) endpoint displayed in the Azure portal for your resource.
```csharp //...
cognitive-services Java https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Bing-Image-Search/quickstarts/java.md
# Quickstart: Search for images with the Bing Image Search API and Java Use this quickstart to learn how to send search requests to the Bing Image Search API in Azure Cognitive Services. This Java application sends a search query to the API, and displays the URL of the first image in the results. Although this application is written in Java, the API is a RESTful web service compatible with most programming languages.
Use this quickstart to learn how to send search requests to the Bing Image Searc
import com.google.gson.JsonParser; ```
-2. Create variables for the API endpoint, your subscription key, and search term. For `host`, you can use the global endpoint in the following code, or use the [custom subdomain](../../../cognitive-services/cognitive-services-custom-subdomains.md) endpoint displayed in the Azure portal for your resource.
+2. Create variables for the API endpoint, your subscription key, and search term. For `host`, you can use the global endpoint in the following code, or use the [custom subdomain](../../../ai-services/cognitive-services-custom-subdomains.md) endpoint displayed in the Azure portal for your resource.
```java static String subscriptionKey = "enter key here";
cognitive-services Nodejs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Bing-Image-Search/quickstarts/nodejs.md
# Quickstart: Search for images using the Bing Image Search REST API and Node.js Use this quickstart to learn how to send search requests to the Bing Image Search API. This JavaScript application sends a search query to the API, and displays the URL of the first image in the results. Although this application is written in JavaScript, the API is a RESTful web service compatible with most programming languages.
For more information, see [Cognitive Services Pricing - Bing Search API](https:/
let https = require('https'); ```
-2. Create variables for the API endpoint, image API search path, your subscription key, and search term. For `host`, you can use the global endpoint in the following code, or use the [custom subdomain](../../../cognitive-services/cognitive-services-custom-subdomains.md) endpoint displayed in the Azure portal for your resource.
+2. Create variables for the API endpoint, image API search path, your subscription key, and search term. For `host`, you can use the global endpoint in the following code, or use the [custom subdomain](../../../ai-services/cognitive-services-custom-subdomains.md) endpoint displayed in the Azure portal for your resource.
```javascript let subscriptionKey = 'enter key here';
cognitive-services Php https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Bing-Image-Search/quickstarts/php.md
# Quickstart: Search for images using the Bing Image Search REST API and PHP Use this quickstart to make your first call to the Bing Image Search API and receive a JSON response. The simple application in this article sends a search query and displays the raw results.
To run this application, follow these steps:
1. Make sure secure HTTP support is enabled in your `php.ini` file. For Windows, this file is located in *C:\windows*. 2. Create a new PHP project in your favorite IDE or editor.
-3. Define the API endpoint, your subscription key, and search term. The endpoint can be the global endpoint in the following code, or the [custom subdomain](../../../cognitive-services/cognitive-services-custom-subdomains.md) endpoint displayed in the Azure portal for your resource.
+3. Define the API endpoint, your subscription key, and search term. The endpoint can be the global endpoint in the following code, or the [custom subdomain](../../../ai-services/cognitive-services-custom-subdomains.md) endpoint displayed in the Azure portal for your resource.
```php $endpoint = 'https://api.cognitive.microsoft.com/bing/v7.0/images/search';
cognitive-services Python https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Bing-Image-Search/quickstarts/python.md
# Quickstart: Search for images using the Bing Image Search REST API and Python Use this quickstart to learn how to send search requests to the Bing Image Search API. This Python application sends a search query to the API, and displays the URL of the first image in the results. Although this application is written in Python, the API is a RESTful web service compatible with most programming languages.
Use this quickstart to learn how to send search requests to the Bing Image Searc
## Create and initialize the application
-1. Create a new Python file in your favorite IDE or editor, and import the following modules. Create a variable for your subscription key, search endpoint, and search term. For `search_url`, you can use the global endpoint in the following code, or use the [custom subdomain](../../../cognitive-services/cognitive-services-custom-subdomains.md) endpoint displayed in the Azure portal for your resource.
+1. Create a new Python file in your favorite IDE or editor, and import the following modules. Create a variable for your subscription key, search endpoint, and search term. For `search_url`, you can use the global endpoint in the following code, or use the [custom subdomain](../../../ai-services/cognitive-services-custom-subdomains.md) endpoint displayed in the Azure portal for your resource.
```python import requests
cognitive-services Ruby https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Bing-Image-Search/quickstarts/ruby.md
# Quickstart: Search for images using the Bing Image Search REST API and Ruby Use this quickstart to make your first call to the Bing Image Search API and receive a JSON response. This simple Ruby application sends a search query to the API and displays the raw results.
For more information, see [Cognitive Services Pricing - Bing Search API](https:/
require 'json' ```
-2. Create variables for the API endpoint, image API search path, your subscription key, and search term. For `uri`, you can use the global endpoint in the following code, or use the [custom subdomain](../../../cognitive-services/cognitive-services-custom-subdomains.md) endpoint displayed in the Azure portal for your resource.
+2. Create variables for the API endpoint, image API search path, your subscription key, and search term. For `uri`, you can use the global endpoint in the following code, or use the [custom subdomain](../../../ai-services/cognitive-services-custom-subdomains.md) endpoint displayed in the Azure portal for your resource.
```ruby uri = "https://api.cognitive.microsoft.com"
cognitive-services Trending Images https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Bing-Image-Search/trending-images.md
# Get trending images from the web To get today's trending images, send the following GET request:
X-MSEdge-ClientIP: 999.999.999.999
X-Search-Location: lat:47.60357;long:-122.3295;re:100 X-MSEdge-ClientID: <blobFromPriorResponseGoesHere> Host: api.cognitive.microsoft.com
-```
+```
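The extra headers in the request above (client IP, search location, client ID) are optional hints that you can forward from your own front end. A minimal Python sketch follows; the `/bing/v7.0/images/trending` path is assumed from the endpoints overview earlier in this listing, and the header values are placeholders copied from the raw request.

```python
import requests

subscription_key = 'enter key here'
# Path assumed from the Image Search endpoints overview; verify against the v7 reference.
trending_url = 'https://api.cognitive.microsoft.com/bing/v7.0/images/trending'

headers = {
    'Ocp-Apim-Subscription-Key': subscription_key,
    # Optional hints mirroring the raw request above (placeholder values).
    'X-MSEdge-ClientIP': '999.999.999.999',
    'X-Search-Location': 'lat:47.60357;long:-122.3295;re:100',
}

response = requests.get(trending_url, headers=headers, params={'mkt': 'en-US'})
response.raise_for_status()

# Trending results are assumed to be grouped into categories with a title and tiles.
for category in response.json().get('categories', []):
    print(category.get('title'))
```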
cognitive-services Tutorial Bing Image Search Single Page App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Bing-Image-Search/tutorial-bing-image-search-single-page-app.md
Title: "Tutorial: Create a single-page web app - Bing Image Search API"-+ description: The Bing Image Search API enables you to search the web for high-quality, relevant images. Use this tutorial to build a single-page web application that can send search queries to the API, and display the results within the webpage.
# Tutorial: Create a single-page app using the Bing Image Search API
-The Bing Image Search API enables you to search the web for high-quality, relevant images. Use this tutorial to build a single-page web application that can send search queries to the API, and display the results within the webpage. This tutorial is similar to the [corresponding tutorial](../Bing-Web-Search/tutorial-bing-web-search-single-page-app.md) for Bing Web Search.
+The Bing Image Search API enables you to search the web for high-quality, relevant images. Use this tutorial to build a single-page web application that can send search queries to the API, and display the results within the webpage. This tutorial is similar to the [corresponding tutorial](../bing-web-search/tutorial-bing-web-search-single-page-app.md) for Bing Web Search.
The tutorial app illustrates how to:
The tutorial app illustrates how to:
## Manage and store user subscription keys
-This application uses web browsers' persistent storage to store API subscription keys. If no key is stored, the webpage will prompt the user for their key and store it for later use. If the key is later rejected by the API, The app will remove it from storage. This sample uses the global endpoint. You can also use the [custom subdomain](../../cognitive-services/cognitive-services-custom-subdomains.md) endpoint displayed in the Azure portal for your resource.
+This application uses web browsers' persistent storage to store API subscription keys. If no key is stored, the webpage will prompt the user for their key and store it for later use. If the key is later rejected by the API, the app will remove it from storage. This sample uses the global endpoint. You can also use the [custom subdomain](../../ai-services/cognitive-services-custom-subdomains.md) endpoint displayed in the Azure portal for your resource.
Define `storeValue` and `retrieveValue` functions to use either the `localStorage` object (if the browser supports it) or a cookie.
cognitive-services Tutorial Image Post https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Bing-Image-Search/tutorial-image-post.md
# Tutorial: Extract image details using the Bing Image Search API and C# There are multiple [endpoints](./image-search-endpoint.md) available through the Bing Image Search API. The `/details` endpoint accepts a POST request with an image, and can return a variety of details about the image. This C# application sends an image using this API, and displays the details returned by Bing, which are JSON objects, such as the following:
This tutorial explains how to:
## Construct an image details search request
-The following is the `/details` endpoint, which accepts POST requests with image data in the body of the request. You can use the global endpoint below, or the [custom subdomain](../../cognitive-services/cognitive-services-custom-subdomains.md) endpoint displayed in the Azure portal for your resource.
+The following is the `/details` endpoint, which accepts POST requests with image data in the body of the request. You can use the global endpoint below, or the [custom subdomain](../../ai-services/cognitive-services-custom-subdomains.md) endpoint displayed in the Azure portal for your resource.
``` https://api.cognitive.microsoft.com/bing/v7.0/images/details ```
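A rough Python equivalent of the POST described above is sketched below. The multipart field name (`image`) and the way the insights are read back are assumptions, so treat this as a starting point to adapt from the full C# tutorial rather than a drop-in sample.

```python
import requests

subscription_key = 'enter key here'
details_url = 'https://api.cognitive.microsoft.com/bing/v7.0/images/details'

headers = {'Ocp-Apim-Subscription-Key': subscription_key}

# Post the image bytes as multipart form data; the field name 'image' is an assumption.
with open('my_image.jpg', 'rb') as image_file:
    response = requests.post(details_url, headers=headers, files={'image': image_file})

response.raise_for_status()
details = response.json()

# The insight groups that come back vary by image; list the top-level keys returned.
print(list(details.keys()))
```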
cognitive-services Bing News Upgrade Guide V5 To V7 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Bing-News-Search/bing-news-upgrade-guide-v5-to-v7.md
Last updated 01/10/2019
# News Search API upgrade guide This upgrade guide identifies the changes between version 5 and version 7 of the Bing News Search API. Use this guide to help you identify the parts of your application that you need to update to use version 7.
Blocked|InvalidRequest.Blocked
- Added the `sort` field to the [News](/rest/api/cognitiveservices-bingsearch/bing-news-api-v7-reference#news) object. The `sort` field shows the sort order of the articles. For example, the articles are sorted by relevance (default) or date. -- Added the [SortValue](/rest/api/cognitiveservices-bingsearch/bing-news-api-v7-reference#sortvalue) object, which defines a sort order. The `isSelected` field indicates whether the response used the sort order. If **true**, the response used the sort order. If `isSelected` is **false**, you can use the URL in the `url` field to request a different sort order.
+- Added the [SortValue](/rest/api/cognitiveservices-bingsearch/bing-news-api-v7-reference#sortvalue) object, which defines a sort order. The `isSelected` field indicates whether the response used the sort order. If **true**, the response used the sort order. If `isSelected` is **false**, you can use the URL in the `url` field to request a different sort order.
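A small Python sketch of how the new `sort`/`SortValue` fields can be used: inspect the sort options returned with a News answer and, when the wanted order is not the selected one, re-request using the `url` it carries, as described above. The exact field names are taken from that description, so treat them as assumptions to verify against the v7 reference.

```python
import requests

def resort(news_answer, wanted_name, subscription_key):
    """Return the news answer sorted by wanted_name (for example 'Date'), re-requesting
    via the SortValue url when that order was not the one the response used."""
    for option in news_answer.get('sort', []):
        if option.get('name') == wanted_name:
            if option.get('isSelected'):
                return news_answer  # already in the wanted order
            headers = {'Ocp-Apim-Subscription-Key': subscription_key}
            return requests.get(option['url'], headers=headers).json()
    return news_answer  # the wanted order was not offered
```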
cognitive-services Search For News https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Bing-News-Search/concepts/search-for-news.md
Last updated 12/18/2019
# Search for news with the Bing News Search API The Bing News Search API makes it easy to integrate Bing's cognitive news searching capabilities into your applications.
If there are other articles that are related to a news article, the news article
## Next steps > [!div class="nextstepaction"]
-> [How to page through Bing News Search results](../../bing-web-search/paging-search-results.md)
+> [How to page through Bing News Search results](../../bing-web-search/paging-search-results.md)
cognitive-services Send Search Queries https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Bing-News-Search/concepts/send-search-queries.md
# Sending queries to the Bing News Search API The Bing News Search API enables you to search the web for relevant news items. Use this article to learn more about sending search queries to the API.
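For orientation, a minimal query might look like the following TypeScript sketch; it assumes the global v7 news search endpoint and the standard `Ocp-Apim-Subscription-Key` header, and it is not code from this article:

```typescript
// Minimal sketch: send a news search query to the global v7 endpoint.
const endpoint = "https://api.cognitive.microsoft.com/bing/v7.0/news/search";
const subscriptionKey = process.env.BING_SEARCH_KEY ?? "enter key here";

async function searchNews(query: string) {
  const url = `${endpoint}?q=${encodeURIComponent(query)}&mkt=en-US`;
  const response = await fetch(url, {
    headers: { "Ocp-Apim-Subscription-Key": subscriptionKey },
  });
  const body = await response.json();
  // Diagnostic response headers include BingAPIs-Market (shown in the fragment below).
  console.log(response.headers.get("BingAPIs-Market"));
  return body;
}

searchNews("Azure Cognitive Services").then((result) => console.log(result.value?.length));
```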
BingAPIs-Market: en-US
* [What is Bing News Search?](../search-the-web.md). * [Get today's top news](search-for-news.md#get-todays-top-news) * [Get news by category](search-for-news.md#get-news-by-category)
-* [Get trending news](search-for-news.md#get-trending-news)
+* [Get trending news](search-for-news.md#get-trending-news)
cognitive-services Csharp https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Bing-News-Search/csharp.md
# Quickstart: Search for news using C# and the Bing News Search REST API Use this quickstart to make your first call to the Bing News Search API. This simple C# application sends a news search query to the API, and displays the JSON response.
The full code to this sample can be found on [GitHub](https://github.com/Azure-S
using System.Collections.Generic; ```
-2. Create variables for the API endpoint, your subscription key, and search term. You can use the global endpoint in the following code, or use the [custom subdomain](../../cognitive-services/cognitive-services-custom-subdomains.md) endpoint displayed in the Azure portal for your resource.
+2. Create variables for the API endpoint, your subscription key, and search term. You can use the global endpoint in the following code, or use the [custom subdomain](../../ai-services/cognitive-services-custom-subdomains.md) endpoint displayed in the Azure portal for your resource.
```csharp const string accessKey = "enter key here";
cognitive-services Endpoint News https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Bing-News-Search/endpoint-news.md
# Bing News Search API endpoints The **News Search API** returns news articles, Web pages, images, videos, and [entities](../bing-entities-search/overview.md). Entities contain summary information about a person, place, or topic.
Returns news topics that are currently trending on social networks. When the `/t
For details about headers, parameters, market codes, response objects, errors, etc., see the [Bing News search API v7](/rest/api/cognitiveservices-bingsearch/bing-news-api-v7-reference) reference. For complete information about the parameters supported by each endpoint, see the reference pages for each type.
-For examples of basic requests using the News search API, see [Bing News Search Quick-starts](/azure/cognitive-services/bing-news-search).
+For examples of basic requests using the News search API, see [Bing News Search Quick-starts](/azure/cognitive-services/bing-news-search).
cognitive-services Go https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Bing-News-Search/go.md
# Quickstart: Get news results using the Bing News Search REST API and Go This quickstart uses the Go language to call the Bing News Search API. The results include names and URLs of news sources identified by the query string.
type NewsAnswer struct {
## Declare the main function and define variables
-The following code declares the main function and assigns the required variables. Confirm that the endpoint is correct, and then replace the `token` value with a valid subscription key from your Azure account. You can use the global endpoint in the following code, or use the [custom subdomain](../../cognitive-services/cognitive-services-custom-subdomains.md) endpoint displayed in the Azure portal for your resource.
+The following code declares the main function and assigns the required variables. Confirm that the endpoint is correct, and then replace the `token` value with a valid subscription key from your Azure account. You can use the global endpoint in the following code, or use the [custom subdomain](../../ai-services/cognitive-services-custom-subdomains.md) endpoint displayed in the Azure portal for your resource.
```go func main() {
cognitive-services Java https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Bing-News-Search/java.md
# Quickstart: Perform a news search using Java and the Bing News Search REST API Use this quickstart to make your first call to the Bing News Search API. This simple Java application sends a news search query to the API, and displays the JSON response.
The source code for this sample is available [on GitHub](https://github.com/Azur
import com.google.gson.JsonParser; ```
-2. Create a new class. Add variables for the API endpoint, your subscription key, and search term. You can use the global endpoint in the following code, or use the [custom subdomain](../../cognitive-services/cognitive-services-custom-subdomains.md) endpoint displayed in the Azure portal for your resource.
+2. Create a new class. Add variables for the API endpoint, your subscription key, and search term. You can use the global endpoint in the following code, or use the [custom subdomain](../../ai-services/cognitive-services-custom-subdomains.md) endpoint displayed in the Azure portal for your resource.
```java public static SearchResults SearchNews (String searchQuery) throws Exception {
cognitive-services Language Support https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Bing-News-Search/language-support.md
# Language and region support for the Bing News Search API The Bing News Search API supports numerous countries/regions, many with more than one language. Specifying a country/region with a query serves primarily to refine search results based on interests in that country/region. Additionally, the results may contain links to Bing, and these links may localize the Bing user experience according to the specified country/region or language.
The following are the country/region codes that you may specify in the `cc` quer
|United States|US| ## Next steps
-For more information about the Bing News Search endpoints, see [News Search API v7 reference](/rest/api/cognitiveservices-bingsearch/bing-news-api-v7-reference).
+For more information about the Bing News Search endpoints, see [News Search API v7 reference](/rest/api/cognitiveservices-bingsearch/bing-news-api-v7-reference).
cognitive-services Nodejs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Bing-News-Search/nodejs.md
# Quickstart: Perform a news search using Node.js and the Bing News Search REST API Use this quickstart to make your first call to the Bing News Search API. This simple JavaScript application sends a search query to the API and displays the JSON response.
The source code for this sample is available on [GitHub](https://github.com/Azur
let https = require('https'); ```
-2. Create variables for the API endpoint, news API search path, your subscription key, and search term. You can use the global endpoint in the following code, or use the [custom subdomain](../../cognitive-services/cognitive-services-custom-subdomains.md) endpoint displayed in the Azure portal for your resource.
+2. Create variables for the API endpoint, news API search path, your subscription key, and search term. You can use the global endpoint in the following code, or use the [custom subdomain](../../ai-services/cognitive-services-custom-subdomains.md) endpoint displayed in the Azure portal for your resource.
```javascript let subscriptionKey = 'enter key here';
cognitive-services Php https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Bing-News-Search/php.md
# Quickstart: Perform a news search using PHP and the Bing News Search REST API Use this quickstart to make your first call to the Bing News Search API. This simple PHP application sends a search query to the API and displays the JSON response.
To run this application, follow these steps:
2. Create a new PHP project in your favorite IDE or editor. 3. Add the code provided below. 4. Replace the `accessKey` value with an access key valid for your subscription.
-5. You can use the global endpoint in the following code, or use the [custom subdomain](../../cognitive-services/cognitive-services-custom-subdomains.md) endpoint displayed in the Azure portal for your resource.
+5. You can use the global endpoint in the following code, or use the [custom subdomain](../../ai-services/cognitive-services-custom-subdomains.md) endpoint displayed in the Azure portal for your resource.
6. Run the program. ```php
cognitive-services Python https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Bing-News-Search/python.md
# Quickstart: Perform a news search using Python and the Bing News Search REST API Use this quickstart to make your first call to the Bing News Search API. This simple Python application sends a search query to the API and processes the JSON result.
The source code for this sample is also available on [GitHub](https://github.com
## Create and initialize the application
-Create a new Python file in your favorite IDE or editor, and import the request module. Create variables for your subscription key, endpoint, and search term. You can use the global endpoint in the following code, or use the [custom subdomain](../../cognitive-services/cognitive-services-custom-subdomains.md) endpoint displayed in the Azure portal for your resource.
+Create a new Python file in your favorite IDE or editor, and import the request module. Create variables for your subscription key, endpoint, and search term. You can use the global endpoint in the following code, or use the [custom subdomain](../../ai-services/cognitive-services-custom-subdomains.md) endpoint displayed in the Azure portal for your resource.
```python import requests
cognitive-services Client Libraries https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Bing-News-Search/quickstarts/client-libraries.md
# Quickstart: Use the Bing News Search client library ::: zone pivot="programming-language-csharp"
cognitive-services Ruby https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Bing-News-Search/ruby.md
# Quickstart: Perform a news search using Ruby and the Bing News Search REST API Use this quickstart to make your first call to the Bing News Search API. This simple Ruby application sends a search query to the API and processes the JSON response.
The source code for this sample is available on [GitHub](https://github.com/Azur
require 'json' ```
-2. Create variables for the API endpoint, news search URL, your subscription key, and search term. You can use the global endpoint in the following code, or use the [custom subdomain](../../cognitive-services/cognitive-services-custom-subdomains.md) endpoint displayed in the Azure portal for your resource.
+2. Create variables for the API endpoint, news search URL, your subscription key, and search term. You can use the global endpoint in the following code, or use the [custom subdomain](../../ai-services/cognitive-services-custom-subdomains.md) endpoint displayed in the Azure portal for your resource.
```ruby accessKey = "enter key here"
cognitive-services Search The Web https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Bing-News-Search/search-the-web.md
# What is the Bing News Search API? The Bing News Search API makes it easy to integrate Bing's cognitive news searching capabilities into your applications. The API provides a similar experience to [Bing News](https://www.bing.com/news), letting you send search queries and receive relevant news articles.
To quickly get started with your first API request, try a quickstart for the [RE
* The [Bing News Search API v7](/rest/api/cognitiveservices-bingsearch/bing-news-api-v7-reference) reference section contains definitions and information on the endpoints, headers, API responses, and query parameters that you can use to request image-based search results. * The [Bing Use and Display Requirements](../bing-web-search/use-display-requirements.md) specify acceptable uses of the content and information gained through the Bing search APIs.
-* Visit the [Bing Search API hub page](../bing-web-search/overview.md) to explore the other available APIs.
+* Visit the [Bing Search API hub page](../bing-web-search/overview.md) to explore the other available APIs.
cognitive-services Tutorial Bing News Search Single Page App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Bing-News-Search/tutorial-bing-news-search-single-page-app.md
# Tutorial: Create a single-page web app The Bing News Search API lets you search the Web and obtain results of the news type relevant to a search query. In this tutorial, we build a single-page Web application that uses the Bing News Search API to display search results on the page. The application includes HTML, CSS, and JavaScript components. The source code for this sample is available on [GitHub](https://github.com/Azure-Samples/cognitive-services-REST-api-samples/blob/master/Tutorials/BingNewsSearchApp.html).
The HTML also contains the divisions (HTML `<div>` tags) where the search result
To avoid having to include the Bing Search API subscription key in the code, we use the browser's persistent storage to store the key. Before the key is stored, we prompt for the user's key. If the key is later rejected by the API, we invalidate the stored key so the user will be prompted again.
-We define `storeValue` and `retrieveValue` functions that use either the `localStorage` object (not all browsers support it) or a cookie. The `getSubscriptionKey()` function uses these functions to store and retrieve the user's key. You can use the global endpoint below, or the [custom subdomain](../../cognitive-services/cognitive-services-custom-subdomains.md) endpoint displayed in the Azure portal for your resource.
+We define `storeValue` and `retrieveValue` functions that use either the `localStorage` object (not all browsers support it) or a cookie. The `getSubscriptionKey()` function uses these functions to store and retrieve the user's key. You can use the global endpoint below, or the [custom subdomain](../../ai-services/cognitive-services-custom-subdomains.md) endpoint displayed in the Azure portal for your resource.
``` javascript // Cookie names for data we store
cognitive-services Bing Spell Check Upgrade Guide V5 To V7 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Bing-Spell-Check/bing-spell-check-upgrade-guide-v5-to-v7.md
Last updated 02/20/2019
# Spell Check API upgrade guide This upgrade guide identifies the changes between version 5 and version 7 of the Bing Spell Check API. Use this guide to help you identify the parts of your application that you need to update to use version 7.
Blocked|InvalidRequest.Blocked
## Next steps > [!div class="nextstepaction"]
-> [Use and display requirements](../bing-web-search/use-display-requirements.md)
+> [Use and display requirements](../bing-web-search/use-display-requirements.md)
cognitive-services Sending Requests https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Bing-Spell-Check/concepts/sending-requests.md
# Sending requests to the Bing Spell Check API To check a text string for spelling and grammar errors, you'd send a GET request to the following endpoint:
BingAPIs-Market: en-US
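A minimal TypeScript sketch of such a request, assuming the global v7 `spellcheck` endpoint and the standard `Ocp-Apim-Subscription-Key` header (parameter names follow the v7 reference):

```typescript
// Hedged sketch: spell-check a short text string with a GET request.
const endpoint = "https://api.cognitive.microsoft.com/bing/v7.0/spellcheck";
const subscriptionKey = process.env.BING_SPELLCHECK_KEY ?? "enter key here";

async function spellCheck(text: string) {
  const params = new URLSearchParams({ text, mode: "spell", mkt: "en-US" });
  const response = await fetch(`${endpoint}?${params}`, {
    headers: { "Ocp-Apim-Subscription-Key": subscriptionKey },
  });
  return response.json(); // contains a flaggedTokens array with suggestions
}

spellCheck("Bing serch apis").then((result) => console.log(JSON.stringify(result, null, 2)));
```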
## Next steps - [What is the Bing Spell Check API?](../overview.md)-- [Bing Spell Check API v7 Reference](/rest/api/cognitiveservices-bingsearch/bing-spell-check-api-v7-reference)
+- [Bing Spell Check API v7 Reference](/rest/api/cognitiveservices-bingsearch/bing-spell-check-api-v7-reference)
cognitive-services Using Spell Check https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Bing-Spell-Check/concepts/using-spell-check.md
# Using the Bing Spell Check API Use this article to learn about using the Bing Spell Check API to perform contextual grammar and spell checking. While most spell-checkers rely on dictionary-based rule sets, the Bing spell-checker leverages machine learning and statistical machine translation to provide accurate and contextual corrections.
If the `type` field is RepeatedToken, you'd still replace the token with `sugges
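To make the token-replacement idea concrete, here is a hedged TypeScript sketch; the `flaggedTokens` shape follows the v7 reference, and the handling of the trailing space for `RepeatedToken` is an assumption rather than something stated in this excerpt:

```typescript
// Hedged sketch: apply the top suggestion for each flagged token to the original text.
interface FlaggedToken {
  offset: number;
  token: string;
  type: string; // e.g. "UnknownToken" or "RepeatedToken"
  suggestions: { suggestion: string; score: number }[];
}

function applyCorrections(text: string, flaggedTokens: FlaggedToken[]): string {
  let corrected = text;
  // Apply from the end of the string so earlier offsets stay valid.
  for (const flagged of [...flaggedTokens].sort((a, b) => b.offset - a.offset)) {
    const top = flagged.suggestions[0];
    if (!top) continue;
    let end = flagged.offset + flagged.token.length;
    // For RepeatedToken the suggestion is typically empty; also drop one following space
    // so "the the" collapses to "the" (assumption about the intended cleanup).
    if (flagged.type === "RepeatedToken" && corrected[end] === " ") {
      end += 1;
    }
    corrected = corrected.slice(0, flagged.offset) + top.suggestion + corrected.slice(end);
  }
  return corrected;
}

console.log(
  applyCorrections("the the quick brown fox", [
    { offset: 4, token: "the", type: "RepeatedToken", suggestions: [{ suggestion: "", score: 1 }] },
  ])
);
```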
## Next steps - [What is the Bing Spell Check API?](../overview.md)-- [Bing Spell Check API v7 Reference](/rest/api/cognitiveservices-bingsearch/bing-spell-check-api-v7-reference)
+- [Bing Spell Check API v7 Reference](/rest/api/cognitiveservices-bingsearch/bing-spell-check-api-v7-reference)
cognitive-services Language Support https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Bing-Spell-Check/language-support.md
# Language and region support for Bing Spell Check API These languages are supported by the Bing Spell Check API (only in `spell` mode).
Please note that to work with any other language than `en-US`, the `mkt` should
## See also - [Cognitive Services Documentation page](../index.yml)-- [Cognitive Services Product page](https://azure.microsoft.com/services/cognitive-services/)
+- [Cognitive Services Product page](https://azure.microsoft.com/services/cognitive-services/)
cognitive-services Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Bing-Spell-Check/overview.md
# What is the Bing Spell Check API? The Bing Spell Check API enables you to perform contextual grammar and spell checking on text. While most spell-checkers rely on dictionary-based rule sets, the Bing spell-checker leverages machine learning and statistical machine translation to provide accurate and contextual corrections.
The Bing Spell Check API is easy to call from any programming language that can
First, try the Bing Spell Check Search API [interactive demo](https://azure.microsoft.com/services/cognitive-services/spell-check/) to see how you can quickly check a variety of texts.
-When you are ready to call the API, create a [Cognitive services API account](../../cognitive-services/cognitive-services-apis-create-account.md). If you don't have an Azure subscription, you can [create an account](https://azure.microsoft.com/free/cognitive-services/) for free.
+When you are ready to call the API, create an [Azure AI services resource](../../ai-services/multi-service-resource.md?pivots=azportal). If you don't have an Azure subscription, you can [create an account](https://azure.microsoft.com/free/cognitive-services/) for free.
-You can also visit the [Bing Search API hub page](../bing-web-search/overview.md) to explore the other available APIs.
+You can also visit the [Bing Search API hub page](../bing-web-search/overview.md) to explore the other available APIs.
cognitive-services Csharp https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Bing-Spell-Check/quickstarts/csharp.md
# Quickstart: Check spelling with the Bing Spell Check REST API and C# Use this quickstart to make your first call to the Bing Spell Check REST API. This simple C# application sends a request to the API and returns a list of suggested corrections.
Although this application is written in C#, the API is a RESTful Web service com
using Newtonsoft.Json; ```
-2. Create variables for the API endpoint, your subscription key, and the text to be spell checked. You can use the global endpoint in the following code, or use the [custom subdomain](../../../cognitive-services/cognitive-services-custom-subdomains.md) endpoint displayed in the Azure portal for your resource.
+2. Create variables for the API endpoint, your subscription key, and the text to be spell checked. You can use the global endpoint in the following code, or use the [custom subdomain](../../../ai-services/cognitive-services-custom-subdomains.md) endpoint displayed in the Azure portal for your resource.
```csharp namespace SpellCheckSample
cognitive-services Java https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Bing-Spell-Check/quickstarts/java.md
# Quickstart: Check spelling with the Bing Spell Check REST API and Java Use this quickstart to make your first call to the Bing Spell Check REST API. This simple Java application sends a request to the API and returns a list of suggested corrections.
Although this application is written in Java, the API is a RESTful web service c
import javax.net.ssl.HttpsURLConnection; ```
-2. Create variables for the API endpoint's host, path, and your subscription key. Then, create variables for your market, the text you want to spell check, and a string for the spell check mode. You can use the global endpoint in the following code, or use the [custom subdomain](../../../cognitive-services/cognitive-services-custom-subdomains.md) endpoint displayed in the Azure portal for your resource.
+2. Create variables for the API endpoint's host, path, and your subscription key. Then, create variables for your market, the text you want to spell check, and a string for the spell check mode. You can use the global endpoint in the following code, or use the [custom subdomain](../../../ai-services/cognitive-services-custom-subdomains.md) endpoint displayed in the Azure portal for your resource.
```java static String host = "https://api.cognitive.microsoft.com";
cognitive-services Nodejs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Bing-Spell-Check/quickstarts/nodejs.md
# Quickstart: Check spelling with the Bing Spell Check REST API and Node.js Use this quickstart to make your first call to the Bing Spell Check REST API. This simple JavaScript application sends a request to the API and returns a list of suggested corrections.
Although this application is written in JavaScript, the API is a RESTful Web ser
## Create and initialize a project
-1. Create a new JavaScript file in your favorite IDE or editor. Set the strictness, and require `https`. Then, create variables for your API endpoint's host, path, and your subscription key. You can use the global endpoint in the following code, or use the [custom subdomain](../../../cognitive-services/cognitive-services-custom-subdomains.md) endpoint displayed in the Azure portal for your resource.
+1. Create a new JavaScript file in your favorite IDE or editor. Set the strictness, and require `https`. Then, create variables for your API endpoint's host, path, and your subscription key. You can use the global endpoint in the following code, or use the [custom subdomain](../../../ai-services/cognitive-services-custom-subdomains.md) endpoint displayed in the Azure portal for your resource.
```javascript 'use strict';
cognitive-services Php https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Bing-Spell-Check/quickstarts/php.md
# Quickstart: Check spelling with the Bing Spell Check REST API and PHP Use this quickstart to make your first call to the Bing Spell Check REST API. This simple PHP application sends a request to the API and returns a list of suggested corrections.
Although this application is written in PHP, the API is a RESTful Web service co
1. Create a new PHP project in your favorite IDE. 2. Add the code provided below. 3. Replace the `subscriptionKey` value with an access key valid for your subscription.
-4. You can use the global endpoint in the following code, or use the [custom subdomain](../../../cognitive-services/cognitive-services-custom-subdomains.md) endpoint displayed in the Azure portal for your resource.
+4. You can use the global endpoint in the following code, or use the [custom subdomain](../../../ai-services/cognitive-services-custom-subdomains.md) endpoint displayed in the Azure portal for your resource.
5. Run the program. ```php
cognitive-services Python https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Bing-Spell-Check/quickstarts/python.md
# Quickstart: Check spelling with the Bing Spell Check REST API and Python Use this quickstart to make your first call to the Bing Spell Check REST API. This simple Python application sends a request to the API and returns a list of suggested corrections.
Although this application is written in Python, the API is a RESTful Web service
import json ```
-2. Create variables for the text you want to spell check, your subscription key, and your Bing Spell Check endpoint. You can use the global endpoint in the following code, or use the [custom subdomain](../../../cognitive-services/cognitive-services-custom-subdomains.md) endpoint displayed in the Azure portal for your resource.
+2. Create variables for the text you want to spell check, your subscription key, and your Bing Spell Check endpoint. You can use the global endpoint in the following code, or use the [custom subdomain](../../../ai-services/cognitive-services-custom-subdomains.md) endpoint displayed in the Azure portal for your resource.
```python api_key = "<ENTER-KEY-HERE>"
cognitive-services Ruby https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Bing-Spell-Check/quickstarts/ruby.md
# Quickstart: Check spelling with the Bing Spell Check REST API and Ruby Use this quickstart to make your first call to the Bing Spell Check REST API using Ruby. This simple application sends a request to the API and returns a list of suggested corrections.
Although this application is written in Ruby, the API is a RESTful Web service c
require 'json' ```
-2. Create variables for your subscription key, endpoint URI, and path. You can use the global endpoint in the following code, or use the [custom subdomain](../../../cognitive-services/cognitive-services-custom-subdomains.md) endpoint displayed in the Azure portal for your resource. Create your request parameters:
+2. Create variables for your subscription key, endpoint URI, and path. You can use the global endpoint in the following code, or use the [custom subdomain](../../../ai-services/cognitive-services-custom-subdomains.md) endpoint displayed in the Azure portal for your resource. Create your request parameters:
1. Assign your market code to the `mkt` parameter with the `=` operator. The market code is the code of the country/region you make the request from.
cognitive-services Sdk Quickstart Spell Check https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Bing-Spell-Check/sdk-quickstart-spell-check.md
# Quickstart: Check spelling with the Bing Spell Check SDK for C# Use this quickstart to begin spell checking with the Bing Spell Check SDK for C#. While Bing Spell Check has a REST API compatible with most programming languages, the SDK provides an easy way to integrate the service into your applications. The source code for this sample can be found on [GitHub](https://github.com/Azure-Samples/cognitive-services-dotnet-sdk-samples/tree/master/samples/SpellCheck).
cognitive-services Spellcheck https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Bing-Spell-Check/tutorials/spellcheck.md
# Tutorial: Build a Web page Spell Check client In this tutorial, we'll build a Web page that allows users to query the Bing Spell Check API. The source code for this application is available on [GitHub](https://github.com/Azure-Samples/cognitive-services-REST-api-samples/blob/master/Tutorials/BingSpellCheckApp.html).
cognitive-services Bing Video Upgrade Guide V5 To V7 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Bing-Video-Search/bing-video-upgrade-guide-v5-to-v7.md
Last updated 01/31/2019
# Video Search API upgrade guide This upgrade guide identifies the changes between version 5 and version 7 of the Bing Video Search API. Use this guide to help you identify the parts of your application that you need to update to use version 7.
Blocked|InvalidRequest.Blocked
- Renamed the `nextOffsetAddCount` field of [Videos](/rest/api/cognitiveservices-bingsearch/bing-video-api-v7-reference#videos) to `nextOffset`. The way you use the offset has also changed. Previously, you would set the [offset](/rest/api/cognitiveservices-bingsearch/bing-video-api-v7-reference#offset) query parameter to the `nextOffset` value plus the previous offset value plus the number of videos in the result. Now, you simply set the `offset` query parameter to the `nextOffset` value. -- Changed the data type of the `relatedVideos` field from `Video[]` to [VideosModule](/rest/api/cognitiveservices-bingsearch/bing-video-api-v7-reference#videosmodule) (see [VideoDetails](/rest/api/cognitiveservices-bingsearch/bing-video-api-v7-reference#videodetails)).
+- Changed the data type of the `relatedVideos` field from `Video[]` to [VideosModule](/rest/api/cognitiveservices-bingsearch/bing-video-api-v7-reference#videosmodule) (see [VideoDetails](/rest/api/cognitiveservices-bingsearch/bing-video-api-v7-reference#videodetails)).
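A hedged TypeScript sketch of that v7 paging pattern, assuming the global Video Search endpoint and subscription-key header (not code from the upgrade guide):

```typescript
// Hedged sketch: page through video results by feeding nextOffset back into offset.
const endpoint = "https://api.cognitive.microsoft.com/bing/v7.0/videos/search";
const subscriptionKey = process.env.BING_SEARCH_KEY ?? "enter key here";

async function getVideoPages(query: string, pages = 3) {
  let offset = 0;
  for (let page = 0; page < pages; page++) {
    const params = new URLSearchParams({
      q: query,
      count: "20",
      offset: String(offset),
      mkt: "en-US",
    });
    const response = await fetch(`${endpoint}?${params}`, {
      headers: { "Ocp-Apim-Subscription-Key": subscriptionKey },
    });
    const body = await response.json();
    console.log(`page ${page}: ${body.value?.length ?? 0} videos`);
    if (body.nextOffset === undefined) break;
    // In v7 you simply set offset to nextOffset for the next request.
    offset = body.nextOffset;
  }
}

getVideoPages("sailing dinghies");
```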
cognitive-services Get Videos https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Bing-Video-Search/concepts/get-videos.md
# Search for videos with the Bing Video Search API The Bing Video Search API makes it easy to integrate Bing's cognitive video searching capabilities into your applications. While the API primarily finds and returns relevant videos from the web, it provides several features for intelligent and focused video retrieval.
You can use the `text` and `thumbnail` fields to display the expanded query stri
## Throttling requests
cognitive-services Sending Requests https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Bing-Video-Search/concepts/sending-requests.md
# Sending search requests to the Bing Video Search API This article describes the parameters and attributes of requests sent to the Bing Video Search API, as well as the JSON response object it returns.
For details about consuming the response objects, see [Searching the Web for Vid
For details about getting insights about a video such as related searches, see [Video Insights](../video-insights.md).
-For details about videos that are trending on social media, see [Trending Videos](../trending-videos.md).
+For details about videos that are trending on social media, see [Trending Videos](../trending-videos.md).
cognitive-services Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Bing-Video-Search/overview.md
Last updated 12/18/2019
# What is the Bing Video Search API? The Bing Video Search API makes it easy to add video searching capabilities to your services and applications. By sending user search queries with the API, you can get and display relevant and high-quality videos similar to [Bing Video](https://www.bing.com/video). Use this API for search results that only contain videos. The [Bing Web Search API](../bing-web-search/overview.md) can return other types of web content, including webpages, videos, news and images.
Use the [quickstart](./quickstarts/csharp.md) to quickly get started with your f
* The [Bing Use and Display Requirements](../bing-web-search/use-display-requirements.md) specify acceptable uses of the content and information gained through the Bing search APIs.
-* Visit the [Bing Search API hub page](../bing-web-search/overview.md) to explore the other available APIs.
+* Visit the [Bing Search API hub page](../bing-web-search/overview.md) to explore the other available APIs.
cognitive-services Client Libraries https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Bing-Video-Search/quickstarts/client-libraries.md
# Quickstart: Use the Bing Video Search client library ::: zone pivot="programming-language-csharp"
cognitive-services Csharp https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Bing-Video-Search/quickstarts/csharp.md
# Quickstart: Search for videos using the Bing Video Search REST API and C# Use this quickstart to make your first call to the Bing Video Search API. This simple C# application sends an HTTP video search query to the API and displays the JSON response. Although this application is written in C#, the API is a RESTful Web service compatible with most programming languages.
using System.Text.Json;
using System.Text.Json.Serialization; ```
-Add variables for your subscription key, endpoint, and search term. For the `uriBase` value, you can use the global endpoint in the following code or use the [custom subdomain](../../../cognitive-services/cognitive-services-custom-subdomains.md) endpoint displayed in the Azure portal for your resource.
+Add variables for your subscription key, endpoint, and search term. For the `uriBase` value, you can use the global endpoint in the following code or use the [custom subdomain](../../../ai-services/cognitive-services-custom-subdomains.md) endpoint displayed in the Azure portal for your resource.
```csharp // Replace the accessKey string value with your valid access key.
cognitive-services Java https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Bing-Video-Search/quickstarts/java.md
# Quickstart: Search for videos using the Bing Video Search REST API and Java Use this quickstart to make your first call to the Bing Video Search API. This simple Java application sends an HTTP video search query to the API and displays the JSON response. Although this application is written in Java, the API is a RESTful Web service compatible with most programming languages.
The source code for this sample is available [on GitHub](https://github.com/Azur
} ```
-3. Create a new method named `SearchVideos()` with variables for your API endpoint host and path, your subscription key, and search term. This method returns a `SearchResults` object. For the `host` value, you can use the global endpoint in the following code, or use the [custom subdomain](../../../cognitive-services/cognitive-services-custom-subdomains.md) endpoint displayed in the Azure portal for your resource.
+3. Create a new method named `SearchVideos()` with variables for your API endpoint host and path, your subscription key, and search term. This method returns a `SearchResults` object. For the `host` value, you can use the global endpoint in the following code, or use the [custom subdomain](../../../ai-services/cognitive-services-custom-subdomains.md) endpoint displayed in the Azure portal for your resource.
```java public static SearchResults SearchVideos (String searchQuery) throws Exception {
cognitive-services Nodejs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Bing-Video-Search/quickstarts/nodejs.md
# Quickstart: Search for videos using the Bing Video Search REST API and Node.js Use this quickstart to make your first call to the Bing Video Search API. This simple JavaScript application sends an HTTP video search query to the API, and displays the JSON response. Although this application is written in JavaScript and uses Node.js, the API is a RESTful Web service compatible with most programming languages.
The source code for this sample is available [on GitHub](https://github.com/Azur
let https = require('https'); ```
-2. Create variables for your API endpoint, subscription key, and search term. For the `host` value, you can use the global endpoint in the following code, or use the [custom subdomain](../../../cognitive-services/cognitive-services-custom-subdomains.md) endpoint displayed in the Azure portal for your resource.
+2. Create variables for your API endpoint, subscription key, and search term. For the `host` value, you can use the global endpoint in the following code, or use the [custom subdomain](../../../ai-services/cognitive-services-custom-subdomains.md) endpoint displayed in the Azure portal for your resource.
```javascript let subscriptionKey = 'enter key here';
cognitive-services Php https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Bing-Video-Search/quickstarts/php.md
# Quickstart: Search for videos using the Bing Video Search REST API and PHP Use this quickstart to make your first call to the Bing Video Search API. This simple PHP application sends an HTTP video search query to the API, and displays the JSON response. The example code is written to work under PHP 5.6.
The [Bing Video Search API](/rest/api/cognitiveservices-bingsearch/bing-web-api-
1. Enable secure HTTP support in your `php.ini` file by uncommenting the `;extension=php_openssl.dll` line, as described in the following code. 2. Create a new PHP project in your favorite IDE or editor. 3. Add the code provided below.
-4. Replace the `$accessKey` value with an access key valid for your subscription. For the `$endpoint` value, you can use the global endpoint in the following code, or use the [custom subdomain](../../../cognitive-services/cognitive-services-custom-subdomains.md) endpoint displayed in the Azure portal for your resource.
+4. Replace the `$accessKey` value with an access key valid for your subscription. For the `$endpoint` value, you can use the global endpoint in the following code, or use the [custom subdomain](../../../ai-services/cognitive-services-custom-subdomains.md) endpoint displayed in the Azure portal for your resource.
5. Run the program. ```php
cognitive-services Python https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Bing-Video-Search/quickstarts/python.md
# Quickstart: Search for videos using the Bing Video Search REST API and Python Use this quickstart to make your first call to the Bing Video Search API. This simple Python application sends an HTTP video search query to the API, and displays the JSON response. Although this application is written in Python, the API is a RESTful Web service compatible with most programming languages.
You can run this example as a Jupyter notebook on [MyBinder](https://mybinder.or
import requests from IPython.display import HTML ```
-2. Create variables for your subscription key, search endpoint, and search term. For the `search_url` value, you can use the global endpoint in the following code, or use the [custom subdomain](../../../cognitive-services/cognitive-services-custom-subdomains.md) endpoint displayed in the Azure portal for your resource.
+2. Create variables for your subscription key, search endpoint, and search term. For the `search_url` value, you can use the global endpoint in the following code, or use the [custom subdomain](../../../ai-services/cognitive-services-custom-subdomains.md) endpoint displayed in the Azure portal for your resource.
```python subscription_key = None
cognitive-services Ruby https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Bing-Video-Search/quickstarts/ruby.md
# Quickstart: Search for videos using the Bing Video Search REST API and Ruby Use this quickstart to make your first call to the Bing Video Search API. This simple Ruby application sends an HTTP video search query to the API, and displays the JSON response. Although this application is written in Ruby, the API is a RESTful Web service compatible with most programming languages.
The source code for this sample is available [on GitHub](https://github.com/Azur
require 'json' ```
-2. Create variables for the API endpoint, video API search path, your subscription key, and search term. For the `url` value, you can use the global endpoint in the following code, or use the [custom subdomain](../../../cognitive-services/cognitive-services-custom-subdomains.md) endpoint displayed in the Azure portal for your resource.
+2. Create variables for the API endpoint, video API search path, your subscription key, and search term. For the `url` value, you can use the global endpoint in the following code, or use the [custom subdomain](../../../ai-services/cognitive-services-custom-subdomains.md) endpoint displayed in the Azure portal for your resource.
```ruby uri = "https://api.cognitive.microsoft.com"
cognitive-services Trending Videos https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Bing-Video-Search/trending-videos.md
Last updated 01/31/2019
# Get trending videos with the Bing Video Search API The Bing Video Search API enables you to find today's trending videos from across the web, and in different categories.
The following example shows an API response that contains trending videos, which
## Next steps > [!div class="nextstepaction"]
-> [Get video insights](video-insights.md)
+> [Get video insights](video-insights.md)
cognitive-services Tutorial Bing Video Search Single Page App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Bing-Video-Search/tutorial-bing-video-search-single-page-app.md
# Tutorial: Single-page Video Search app The Bing Video Search API lets you search the Web and get video results relevant to a search query. In this tutorial, we build a single-page Web application that uses the Bing search API to display search results on the page. The application includes HTML, CSS, and JavaScript components. <!-- Remove until it can be replaced with a sanitized version.
function bingSearchOptions(form) {
For example, the `SafeSearch` parameter in an actual API call can be `strict`, or `moderate`, with `moderate` being the default. ## Performing the request
-Given the query, the options string, and the API key, the `BingWebSearch` function uses an `XMLHttpRequest` object to make the request to the Bing Search endpoint. You can use the global endpoint below, or the [custom subdomain](../../cognitive-services/cognitive-services-custom-subdomains.md) endpoint displayed in the Azure portal for your resource.
+Given the query, the options string, and the API key, the `BingWebSearch` function uses an `XMLHttpRequest` object to make the request to the Bing Search endpoint. You can use the global endpoint below, or the [custom subdomain](../../ai-services/cognitive-services-custom-subdomains.md) endpoint displayed in the Azure portal for your resource.
```javascript // Search on the query, using search options, authenticated by the key.
cognitive-services Video Insights https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Bing-Video-Search/video-insights.md
Last updated 01/31/2019
# Get insights about a video Each video returned by the Bing Video Search API includes a video ID that you can use to get more information about it, such as related videos. To get insights about a video, get its [videoId](/rest/api/cognitiveservices-bingsearch/bing-video-api-v7-reference#video-videoid) token in the API response.
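As a rough illustration, a details request might look like the following TypeScript sketch; the global endpoint, and the `id` and `modules` parameter names, are assumptions based on the v7 reference rather than this excerpt:

```typescript
// Hedged sketch: request insights for a video using its videoId token.
const endpoint = "https://api.cognitive.microsoft.com/bing/v7.0/videos/details";
const subscriptionKey = process.env.BING_SEARCH_KEY ?? "enter key here";

async function getVideoInsights(videoId: string) {
  const params = new URLSearchParams({ id: videoId, modules: "All", mkt: "en-US" });
  const response = await fetch(`${endpoint}?${params}`, {
    headers: { "Ocp-Apim-Subscription-Key": subscriptionKey },
  });
  return response.json(); // returns a response with a top-level VideoDetails object
}

getVideoInsights("<videoId token from a previous /videos/search response>").then(console.log);
```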
The response to this request will have a top-level [VideoDetails](/rest/api/cogn
## Next steps > [!div class="nextstepaction"]
-> [Search for trending videos](trending-videos.md)
+> [Search for trending videos](trending-videos.md)
cognitive-services Autosuggest Bing Search Terms https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Bing-Web-Search/autosuggest-bing-search-terms.md
# Autosuggest Bing search terms in your application If you provide a search box where the user enters their search term, use the [Bing Autosuggest API](../bing-autosuggest/get-suggested-search-terms.md) to improve the experience. The API returns suggested query strings based on partial search terms as the user types.
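A hedged TypeScript sketch of wiring that up, assuming the global v7 `Suggestions` endpoint and the standard subscription-key header; the response shape is taken from the Autosuggest reference, not this article:

```typescript
// Hedged sketch: fetch suggested query strings for a partial search term.
const endpoint = "https://api.cognitive.microsoft.com/bing/v7.0/Suggestions";
const subscriptionKey = process.env.BING_AUTOSUGGEST_KEY ?? "enter key here";

async function getSuggestions(partialQuery: string): Promise<string[]> {
  const params = new URLSearchParams({ q: partialQuery, mkt: "en-US" });
  const response = await fetch(`${endpoint}?${params}`, {
    headers: { "Ocp-Apim-Subscription-Key": subscriptionKey },
  });
  const body = await response.json();
  // Flatten the suggestion groups into plain display strings (shape per the reference).
  return (body.suggestionGroups ?? []).flatMap(
    (group: { searchSuggestions: { displayText: string }[] }) =>
      group.searchSuggestions.map((s) => s.displayText)
  );
}

getSuggestions("sail").then((suggestions) => console.log(suggestions));
```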
You can use this information to let the user know that you modified their query
## See also
-* [Bing Web Search API reference](/rest/api/cognitiveservices-bingsearch/bing-web-api-v7-reference)
+* [Bing Web Search API reference](/rest/api/cognitiveservices-bingsearch/bing-web-api-v7-reference)
cognitive-services Bing Api Comparison https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Bing-Web-Search/bing-api-comparison.md
# What are the Bing Search APIs? The Bing Search APIs let you build web-connected apps and services that find webpages, images, news, locations, and more without advertisements. By sending search requests using the Bing Search REST APIs or SDKs, you can get relevant information and content for web searches. Use this article to learn about the different Bing search APIs and how you can integrate cognitive searches into your applications and services. Pricing and rate limits may vary between APIs. ## The Bing Web Search API
-The [Bing Web Search API](../Bing-Web-Search/overview.md) returns webpages, images, video, news, and more. You can filter the search queries sent to this API to include or exclude certain content types.
+The [Bing Web Search API](../bing-web-search/overview.md) returns webpages, images, video, news, and more. You can filter the search queries sent to this API to include or exclude certain content types.
Consider using the Bing Web Search API in applications that may need to search for all types of relevant web content. If your application searches for a specific type of online content, consider one of the search APIs below:
The following Bing search APIs return specific content from the web like images,
| Bing API | Description | | -- | -- |
-| [Entity Search](../Bing-Entities-Search/overview.md) | The Bing Entity Search API returns search results containing entities, which can be people, places, or things. Depending on the query, the API will return one or more entities that satisfy the search query. The search query can include noteworthy individuals, local businesses, landmarks, destinations, and more. |
-| [Image Search](../Bing-Image-Search/overview.md) | The Bing Image Search API lets you search for and find high-quality static and animated images similar to [Bing.com/images](https://www.Bing.com/images). You can refine searches to include or exclude images by attribute, including size, color, license, and freshness. You can also search for trending images, upload images to gain insights about them, and display thumbnail previews. |
-| [News Search](../Bing-News-Search/search-the-web.md) | The Bing News Search API lets you find news stories similar to [Bing.com/news](https://www.Bing.com/news). The API returns news articles from either multiple sources or specific domains. You can search across categories to get trending articles, top stories, and headlines. |
-| [Video Search](../Bing-Video-Search/overview.md) | The Bing Video Search API lets you find videos across the web. Get trending videos, related content, and thumbnail previews. |
+| [Entity Search](../bing-entities-search/overview.md) | The Bing Entity Search API returns search results containing entities, which can be people, places, or things. Depending on the query, the API will return one or more entities that satisfy the search query. The search query can include noteworthy individuals, local businesses, landmarks, destinations, and more. |
+| [Image Search](../bing-image-search/overview.md) | The Bing Image Search API lets you search for and find high-quality static and animated images similar to [Bing.com/images](https://www.Bing.com/images). You can refine searches to include or exclude images by attribute, including size, color, license, and freshness. You can also search for trending images, upload images to gain insights about them, and display thumbnail previews. |
+| [News Search](../bing-news-search/search-the-web.md) | The Bing News Search API lets you find news stories similar to [Bing.com/news](https://www.Bing.com/news). The API returns news articles from either multiple sources or specific domains. You can search across categories to get trending articles, top stories, and headlines. |
+| [Video Search](../bing-video-search/overview.md) | The Bing Video Search API lets you find videos across the web. Get trending videos, related content, and thumbnail previews. |
| [Visual Search](../Bing-visual-search/overview.md) | Upload an image or use a URL to get insightful information about it, like visually similar products, images, and related searches. | [Local Business Search](../bing-local-business-search/overview.md) | The Bing Local Business Search API lets your applications find contact and location information about local businesses based on search queries. | ## The Bing Custom Search API
-Creating a custom search instance with the [Bing Custom Search](../Bing-Custom-Search/overview.md) API lets you create a search experience focused only on content and topics you care about. For example, after you specify the domains, websites, and specific webpages that Bing will search, Bing Custom Search will tailor the results to that specific content. You can incorporate the Bing Custom Autosuggest, Image, and Video Search APIs to further customize your search experience.
+Creating a custom search instance with the [Bing Custom Search](../bing-custom-search/overview.md) API lets you create a search experience focused only on content and topics you care about. For example, after you specify the domains, websites, and specific webpages that Bing will search, Bing Custom Search will tailor the results to that specific content. You can incorporate the Bing Custom Autosuggest, Image, and Video Search APIs to further customize your search experience.
## Additional Bing Search APIs
The following Bing Search APIs let you improve your search experience by combini
| API | Description | | -- | -- |
-| [Bing Autosuggest](../Bing-Autosuggest/get-suggested-search-terms.md) | Improve your application's search experience with the Bing Autosuggest API by returning suggested searches in real time. |
+| [Bing Autosuggest](../bing-autosuggest/get-suggested-search-terms.md) | Improve your application's search experience with the Bing Autosuggest API by returning suggested searches in real time. |
| [Bing Statistics](bing-web-stats.md) | Bing Statistics provides analytics for the Bing Search APIs your application uses. Some of the available analytics include call volume, top query strings, and geographic distribution. | ## Next steps * Bing Search API [pricing details](https://azure.microsoft.com/pricing/details/cognitive-services/search-api/)
-* The [Bing Use and Display Requirements](./use-display-requirements.md) specify acceptable uses of the content and information gained through the Bing search APIs.
+* The [Bing Use and Display Requirements](./use-display-requirements.md) specify acceptable uses of the content and information gained through the Bing search APIs.
cognitive-services Bing Web Stats https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Bing-Web-Search/bing-web-stats.md
# Add analytics to the Bing Search APIs Bing Statistics provides analytics for the Bing Search APIs. These analytics include call volume, top query strings, geographic distribution, and more. You can enable Bing Statistics in the [Azure portal](https://portal.azure.com) by navigating to your Azure resource and clicking **Enable Bing Statistics**.
The following are possible metrics and endpoint restrictions.
## Next steps * [What are the Bing Search APIs?](bing-api-comparison.md)
-* [Bing Search API use and display requirements](use-display-requirements.md)
+* [Bing Search API use and display requirements](use-display-requirements.md)
cognitive-services Bing Web Upgrade Guide V5 To V7 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Bing-Web-Search/bing-web-upgrade-guide-v5-to-v7.md
Last updated 02/12/2019
# Upgrade from Bing Web Search API v5 to v7 This upgrade guide identifies the changes between version 5 and version 7 of the Bing Web Search API. Use this guide to help you identify the parts of your application that you need to update to use version 7.
Blocked|InvalidRequest.Blocked
### Object changes -- Added the `someResultsRemoved` field to the [WebAnswer](/rest/api/cognitiveservices-bingsearch/bing-web-api-v7-reference#webanswer) object. The field contains a Boolean value that indicates whether the response excluded some results from the web answer.
+- Added the `someResultsRemoved` field to the [WebAnswer](/rest/api/cognitiveservices-bingsearch/bing-web-api-v7-reference#webanswer) object. The field contains a Boolean value that indicates whether the response excluded some results from the web answer.
cognitive-services Csharp Ranking Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Bing-Web-Search/csharp-ranking-tutorial.md
# Build a console app search client in C# This tutorial shows how to build a simple .NET Core console app that allows users to query the Bing Web Search API and display ranked results.
WebPage:
## Next steps
-Read more about [using ranking to display results](rank-results.md).
+Read more about [using ranking to display results](rank-results.md).
cognitive-services Filter Answers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Bing-Web-Search/filter-answers.md
Last updated 07/08/2019
# Filtering the answers that the search response includes When you query the web, Bing returns all the relevant content it finds for the search. For example, if the search query is "sailing+dinghies", the response might contain the following answers:
If you set `promote` to news, the response doesn't include the news answer becau
The answers that you want to promote do not count against the `answerCount` limit. For example, if the ranked answers are news, images, and videos, and you set `answerCount` to 1 and `promote` to news, the response contains news and images. Or, if the ranked answers are videos, images, and news, the response contains videos and news.
-You may use `promote` only if you specify the `answerCount` query parameter.
+You may use `promote` only if you specify the `answerCount` query parameter.
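For reference, a hedged TypeScript sketch of a request that combines these parameters; the global v7 web search endpoint and header are assumptions consistent with the quickstarts above:

```typescript
// Hedged sketch: limit the response to the top ranked answer, but always promote news.
const endpoint = "https://api.cognitive.microsoft.com/bing/v7.0/search";
const subscriptionKey = process.env.BING_SEARCH_KEY ?? "enter key here";

async function searchWithPromotedNews(query: string) {
  const params = new URLSearchParams({
    q: query,
    answerCount: "1", // promote is only honored when answerCount is specified
    promote: "news",
    mkt: "en-US",
  });
  const response = await fetch(`${endpoint}?${params}`, {
    headers: { "Ocp-Apim-Subscription-Key": subscriptionKey },
  });
  return response.json();
}

searchWithPromotedNews("sailing dinghies").then((result) => console.log(Object.keys(result)));
```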
cognitive-services Hit Highlighting https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Bing-Web-Search/hit-highlighting.md
Last updated 07/30/2019
# Using decoration markers to highlight text Bing supports hit highlighting, which marks query terms (or other terms that Bing finds relevant) in the display strings of some answers. For example, a webpage result's `name`, `displayUrl`, and `snippet` fields might contain marked query terms.
If `textDecorations` is `true`, Bing may include the following markers in the di
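To make that concrete, here is a hedged TypeScript sketch that converts the raw decoration markers into HTML bold tags; the specific code points (U+E000 to open a highlight, U+E001 to close it) are an assumption based on the hit-highlighting reference rather than the excerpt above:

```typescript
// Hedged sketch: convert Bing's raw hit-highlighting markers into HTML <b> tags.
// Assumes textDecorations=true with the default raw text format, where U+E000 marks
// the start of a highlighted term and U+E001 marks the end.
function renderHighlights(displayString: string): string {
  return displayString.replace(/\uE000/g, "<b>").replace(/\uE001/g, "</b>");
}

// Or strip the markers entirely if you don't want highlighting.
function stripHighlights(displayString: string): string {
  return displayString.replace(/[\uE000\uE001]/g, "");
}

console.log(renderHighlights("Learn about \uE000sailing dinghies\uE001 and rigging."));
```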
## Next steps * [What is the Bing Web Search API?](overview.md)
-* [Resize and crop thumbnails](resize-and-crop-thumbnails.md)
+* [Resize and crop thumbnails](resize-and-crop-thumbnails.md)
cognitive-services Language Support https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Bing-Web-Search/language-support.md
# Language and region support for the Bing Web Search API The Bing Web Search API supports over three dozen countries or regions, many with more than one language. Specifying a country or region with a query helps refine search results based on that country's or region's interests. The results may include links to Bing, and these links may localize the Bing user experience according to the specified country/region or language.
Alternatively, you can specify the market with the `mkt` query parameter, and a
## Next steps
-* [Bing Image Search API reference](/rest/api/cognitiveservices/bing-images-api-v7-reference)
+* [Bing Image Search API reference](/rest/api/cognitiveservices/bing-images-api-v7-reference)
cognitive-services Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Bing-Web-Search/overview.md
# What is the Bing Web Search API? The Bing Web Search API is a RESTful service that provides instant answers to user queries. Search results are easily configured to include web pages, images, videos, news, translations, and more. Bing Web Search provides the results as JSON based on search relevance and your Bing Web Search subscriptions.
-This API is optimal for applications that need access to all content that is relevant to a user's search query. If you're building an application that requires only a specific type of result, consider using the [Bing Image Search API](../Bing-Image-Search/overview.md), [Bing Video Search API](../bing-video-search/overview.md), or [Bing News Search API](../Bing-News-Search/search-the-web.md). See [Cognitive Services APIs](../index.yml) for a complete list of Bing Search APIs.
+This API is optimal for applications that need access to all content that is relevant to a user's search query. If you're building an application that requires only a specific type of result, consider using the [Bing Image Search API](../bing-image-search/overview.md), [Bing Video Search API](../bing-video-search/overview.md), or [Bing News Search API](../bing-news-search/search-the-web.md). See [Cognitive Services APIs](../index.yml) for a complete list of Bing Search APIs.
Want to see how it works? Try our [Bing Web Search API demo](https://azure.microsoft.com/services/cognitive-services/bing-web-search-api/).
The Bing Web Search API is easy to call from any programming language that can m
* Use our [Python quickstart](./quickstarts/client-libraries.md?pivots=programming-language-python) to make your first call to the Bing Web Search API. * [Build a single-page web app](tutorial-bing-web-search-single-page-app.md). * Review [Web Search API v7 reference](/rest/api/cognitiveservices-bingsearch/bing-web-api-v7-reference) documentation.
-* Learn more about [use and display requirements](./use-display-requirements.md) for Bing Web Search.
+* Learn more about [use and display requirements](./use-display-requirements.md) for Bing Web Search.
cognitive-services Paging Search Results https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Bing-Web-Search/paging-search-results.md
# How to page through results from the Bing Search APIs When you send a call to the Bing Web, Custom, Image, News, or Video Search APIs, Bing returns a subset of the total number of results that may be relevant to the query. To get the estimated total number of available results, access the answer object's `totalEstimatedMatches` field.
When using the Bing Image and Video APIs, you can use the `nextOffset` value to
* [Bing Custom Search API v7 reference](/rest/api/cognitiveservices-bingsearch/bing-custom-search-api-v7-reference) * [Bing News Search API v7 reference](/rest/api/cognitiveservices-bingsearch/bing-news-api-v7-reference) * [Bing Video Search API v7 reference](/rest/api/cognitiveservices-bingsearch/bing-video-api-v7-reference)
-* [Bing Image Search API v7 reference](/rest/api/cognitiveservices-bingsearch/bing-images-api-v7-reference)
+* [Bing Image Search API v7 reference](/rest/api/cognitiveservices-bingsearch/bing-images-api-v7-reference)
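The following hedged sketch pages through web results by advancing the `offset` query parameter by the page size set in `count`, as described above. The environment variable name is an assumption; in a real app you would stop paging as `offset` approaches `totalEstimatedMatches`.

```csharp
using System;
using System.Net.Http;
using System.Threading.Tasks;

class PagingExample
{
    static async Task Main()
    {
        string key = Environment.GetEnvironmentVariable("BING_SEARCH_V7_KEY"); // assumed variable name
        using var client = new HttpClient();
        client.DefaultRequestHeaders.Add("Ocp-Apim-Subscription-Key", key);

        const int count = 10;                 // results requested per page
        for (int page = 0; page < 3; page++)  // fetch the first three pages
        {
            int offset = page * count;        // skip the results already shown
            string url = "https://api.bing.microsoft.com/v7.0/search" +
                         $"?q=sailing+dinghies&count={count}&offset={offset}";
            string json = await client.GetStringAsync(url);
            Console.WriteLine($"Page {page + 1}: received {json.Length} characters of JSON");
        }
    }
}
```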
cognitive-services Client Libraries https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Bing-Web-Search/quickstarts/client-libraries.md
# Quickstart: Use a Bing Web Search client library ::: zone pivot="programming-language-csharp"
cognitive-services Csharp https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Bing-Web-Search/quickstarts/csharp.md
# Quickstart: Search the web using the Bing Web Search REST API and C# Use this quickstart to make your first call to the Bing Web Search API. This C# application sends a search request to the API, and shows the JSON response. Although this application is written in C#, the API is a RESTful Web service compatible with most programming languages.
namespace BingSearchApisQuickstart
A few variables must be set before we can continue. Add this code to the `Program` class you created in the previous section:
-1. For the `uriBase` value, you can use the global endpoint in the following code, or use the [custom subdomain](../../../cognitive-services/cognitive-services-custom-subdomains.md) endpoint displayed in the Azure portal for your resource.
+1. For the `uriBase` value, you can use the global endpoint in the following code, or use the [custom subdomain](../../../ai-services/cognitive-services-custom-subdomains.md) endpoint displayed in the Azure portal for your resource.
2. Confirm that `uriBase` is valid and replace the `accessKey` value with a subscription key from your Azure account.
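As a rough, hedged sketch of what steps 1 and 2 amount to (the `uriBase` and `accessKey` names come from the steps above; the `searchTerm` field and overall shape are illustrative, not the article's exact code):

```csharp
using System;

class Program
{
    // Step 1: keep the global endpoint, or substitute the custom-subdomain endpoint
    // shown in the Azure portal for your resource.
    const string uriBase = "https://api.bing.microsoft.com/v7.0/search";

    // Step 2: replace with a subscription key from your Azure account; avoid
    // committing real keys to source control.
    const string accessKey = "enter your key here";

    // Illustrative query term (an assumption of this sketch).
    const string searchTerm = "Microsoft Cognitive Services";

    static void Main() =>
        Console.WriteLine($"{uriBase}?q={Uri.EscapeDataString(searchTerm)}");
}
```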
cognitive-services Go https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Bing-Web-Search/quickstarts/go.md
# Quickstart: Search the web using the Bing Web Search REST API and Go Use this quickstart to make your first call to the Bing Web Search API. This Go application sends a search request to the API, and shows the JSON response. Although this application is written in Go, the API is a RESTful Web service compatible with most programming languages.
type BingAnswer struct {
This code declares the main function and sets the required variables:
-1. For the `endpoint` value, you can use the global endpoint in the following code, or use the [custom subdomain](../../../cognitive-services/cognitive-services-custom-subdomains.md) endpoint displayed in the Azure portal for your resource.
+1. For the `endpoint` value, you can use the global endpoint in the following code, or use the [custom subdomain](../../../ai-services/cognitive-services-custom-subdomains.md) endpoint displayed in the Azure portal for your resource.
2. Confirm that the endpoint is correct and replace the `token` value with a valid subscription key from your Azure account.
Responses from the Bing Web Search API are returned as JSON. This sample respons
```go Microsoft Cognitive Services || https://www.microsoft.com/cognitive-services Cognitive Services | Microsoft Azure || https://azure.microsoft.com/services/cognitive-services/
-What is Microsoft Cognitive Services? | Microsoft Docs || https://learn.microsoft.com/azure/cognitive-services/Welcome
+What is Microsoft Cognitive Services? | Microsoft Docs || https://learn.microsoft.com/azure/ai-services/Welcome
Microsoft Cognitive Toolkit || https://www.microsoft.com/en-us/cognitive-toolkit/ Microsoft Customers || https://customers.microsoft.com/en-us/search?sq=%22Microsoft%20Cognitive%20Services%22&ff=&p=0&so=story_publish_date%20desc Microsoft Enterprise Services - Microsoft Enterprise || https://enterprise.microsoft.com/en-us/services/
cognitive-services Java https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Bing-Web-Search/quickstarts/java.md
# Quickstart: Use Java to search the web with the Bing Web Search REST API, an Azure cognitive service In this quickstart, you'll use a Java application to make your first call to the Bing Web Search API. This Java application sends a search request to the API, and shows the JSON response. Although this application is written in Java, the API is a RESTful Web service compatible with most programming languages.
public class BingWebSearch {
The following code sets the `subscriptionKey`, `host`, `path`, and `searchTerm`. Add this code to the `BingWebSearch` class described in the previous section:
-1. For the `host` value, you can use the global endpoint in the following code, or use the [custom subdomain](../../../cognitive-services/cognitive-services-custom-subdomains.md) endpoint displayed in the Azure portal for your resource.
+1. For the `host` value, you can use the global endpoint in the following code, or use the [custom subdomain](../../../ai-services/cognitive-services-custom-subdomains.md) endpoint displayed in the Azure portal for your resource.
2. Replace the `subscriptionKey` value with a valid subscription key from your Azure account.
cognitive-services Nodejs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Bing-Web-Search/quickstarts/nodejs.md
# Quickstart: Search the web using the Bing Web Search REST API and Node.js Use this quickstart to make your first call to the Bing Web Search API. This Node.js application sends a search request to the API, and shows the JSON response. Although this application is written in JavaScript, the API is a RESTful Web service compatible with most programming languages.
if (!SUBSCRIPTION_KEY) {
This function makes a secure GET request and saves the search query as a query parameter in the path.
-1. For the `hostname` value, you can use the global endpoint in the following code, or use the [custom subdomain](../../../cognitive-services/cognitive-services-custom-subdomains.md) endpoint displayed in the Azure portal for your resource.
+1. For the `hostname` value, you can use the global endpoint in the following code, or use the [custom subdomain](../../../ai-services/cognitive-services-custom-subdomains.md) endpoint displayed in the Azure portal for your resource.
2. Use `encodeURIComponent` to escape invalid characters. The subscription key is passed in a header.
cognitive-services Php https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Bing-Web-Search/quickstarts/php.md
# Quickstart: Use PHP to call the Bing Web Search API Use this quickstart to make your first call to the Bing Web Search API. This PHP application sends a search request to the API, and shows the JSON response. Although this application is written in PHP, the API is a RESTful Web service compatible with most programming languages.
Before we get started, locate php.ini and uncomment this line:
1. Create a new PHP project in your favorite IDE or editor. Add opening and closing tags: `<?php` and `?>`.
-2. For the `$endpoint` value, you can use the global endpoint in the following code, or use the [custom subdomain](../../../cognitive-services/cognitive-services-custom-subdomains.md) endpoint displayed in the Azure portal for your resource.
+2. For the `$endpoint` value, you can use the global endpoint in the following code, or use the [custom subdomain](../../../ai-services/cognitive-services-custom-subdomains.md) endpoint displayed in the Azure portal for your resource.
3. Confirm that the `$endpoint` value is correct and replace the `$accesskey` value with a valid subscription key from your Azure account.
cognitive-services Python https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Bing-Web-Search/quickstarts/python.md
# Quickstart: Use Python to call the Bing Web Search API Use this quickstart to make your first call to the Bing Web Search API. This Python application sends a search request to the API, and shows the JSON response. Although this application is written in Python, the API is a RESTful Web service compatible with most programming languages.
This example is run as a Jupyter notebook on [MyBinder](https://mybinder.org). T
assert subscription_key ```
-2. Declare the Bing Web Search API endpoint. You can use the global endpoint in the following code, or use the [custom subdomain](../../../cognitive-services/cognitive-services-custom-subdomains.md) endpoint displayed in the Azure portal for your resource.
+2. Declare the Bing Web Search API endpoint. You can use the global endpoint in the following code, or use the [custom subdomain](../../../ai-services/cognitive-services-custom-subdomains.md) endpoint displayed in the Azure portal for your resource.
```python search_url = "https://api.bing.microsoft.com/v7.0/search"
cognitive-services Ruby https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Bing-Web-Search/quickstarts/ruby.md
# Quickstart: Use Ruby to call the Bing Web Search API Use this quickstart to make your first call to the Bing Web Search API. This Ruby application sends a search request to the API, and shows the JSON response. Although this application is written in Ruby, the API is a RESTful Web service compatible with most programming languages.
require 'json'
A few variables must be set before we can continue:
-1. For the `uri` value, you can use the global endpoint in the following code, or use the [custom subdomain](../../../cognitive-services/cognitive-services-custom-subdomains.md) endpoint displayed in the Azure portal for your resource.
+1. For the `uri` value, you can use the global endpoint in the following code, or use the [custom subdomain](../../../ai-services/cognitive-services-custom-subdomains.md) endpoint displayed in the Azure portal for your resource.
2. Confirm that the `uri` and `path` values are valid and replace the `accessKey` value with a subscription key from your Azure account.
cognitive-services Rank Results https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Bing-Web-Search/rank-results.md
Last updated 03/17/2019
# How to use ranking to display Bing Web Search API results Each search response includes a [RankingResponse](/rest/api/cognitiveservices-bingsearch/bing-web-api-v7-reference#rankingresponse) answer that specifies how you must display the search results. The ranking response groups results into mainline content and sidebar content for a traditional search results page. If you don't display the results in a traditional mainline and sidebar format, you must give the mainline content higher visibility than the sidebar content.
For information about promoting unranked results, see [Promoting answers that ar
For information about limiting the number of ranked answers in the response, see [Limiting the number of answers in the response](./filter-answers.md#limiting-the-number-of-answers-in-the-response).
-For a C# example that uses ranking to display results, see [C# ranking tutorial](./csharp-ranking-tutorial.md).
+For a C# example that uses ranking to display results, see [C# ranking tutorial](./csharp-ranking-tutorial.md).
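As a hedged sketch of consuming the ranking response, the snippet below walks the mainline items of a trimmed, hypothetical payload. It assumes each item carries an `answerType`, an optional `resultIndex`, and a `value.id`, per the RankingResponse reference linked above.

```csharp
using System;
using System.Text.Json;

class RankingExample
{
    static void Main()
    {
        // Trimmed, hypothetical payload; a real one comes from the Web Search endpoint.
        const string json = @"{
          ""rankingResponse"": {
            ""mainline"": { ""items"": [
              { ""answerType"": ""WebPages"", ""resultIndex"": 0,
                ""value"": { ""id"": ""https://api.bing.microsoft.com/api/v7/#WebPages.0"" } },
              { ""answerType"": ""News"",
                ""value"": { ""id"": ""https://api.bing.microsoft.com/api/v7/#News"" } }
            ] },
            ""sidebar"": { ""items"": [] }
          }
        }";

        using JsonDocument doc = JsonDocument.Parse(json);
        JsonElement mainline = doc.RootElement
            .GetProperty("rankingResponse")
            .GetProperty("mainline")
            .GetProperty("items");

        // Mainline items must be rendered with more visibility than sidebar items.
        foreach (JsonElement item in mainline.EnumerateArray())
        {
            string answerType = item.GetProperty("answerType").GetString();
            bool hasIndex = item.TryGetProperty("resultIndex", out JsonElement index);
            Console.WriteLine(hasIndex
                ? $"Display result {index.GetInt32()} of the {answerType} answer"
                : $"Display the entire {answerType} answer");
        }
    }
}
```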
cognitive-services Resize And Crop Thumbnails https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Bing-Web-Search/resize-and-crop-thumbnails.md
# Resize and crop thumbnail images Some answers from the Bing Search APIs include URLs to thumbnail images served by Bing. You can resize and crop these thumbnails, and their URLs may contain query parameters. For example:
The following shows the image reduced to 100x200 using Blind Ratio cropping. The
## Next steps * [What are the Bing Search APIs?](bing-api-comparison.md)
-* [Bing Search API use and display requirements](use-display-requirements.md)
+* [Bing Search API use and display requirements](use-display-requirements.md)
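A minimal sketch of the resizing technique described above: append width and height query parameters to the thumbnail URL returned by Bing. The URL shown is hypothetical; the article's crop parameters can be appended in the same way.

```csharp
using System;

class ThumbnailResizeExample
{
    static void Main()
    {
        // Hypothetical thumbnail URL of the kind returned in image answers.
        string thumbnailUrl = "https://tse1.mm.bing.net/th?id=OIP.Example&pid=Api";

        // Ask Bing to resize the thumbnail by appending width and height parameters.
        string resized = thumbnailUrl + "&w=100&h=200";
        Console.WriteLine(resized);
    }
}
```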
cognitive-services Sdk Samples https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Bing-Web-Search/sdk-samples.md
# Bing Web Search SDK samples The Bing Web Search SDK is available in Python, Node.js, C#, and Java. Code samples, prerequisites, and build instructions are provided on GitHub. The following scenarios are covered:
cognitive-services Search Responses https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Bing-Web-Search/search-responses.md
# Bing Web Search API response structure and answer types When you send Bing Web Search a search request, it returns a [`SearchResponse`](/rest/api/cognitiveservices-bingsearch/bing-web-api-v7-reference#searchresponse) object in the response body. The object includes a field for each answer that Bing determined was relevant to the query. This example illustrates a response object if Bing returned all answers:
The following shows how Bing uses the spelling suggestion.
## See also
-* [Bing Web Search API reference](/rest/api/cognitiveservices-bingsearch/bing-web-api-v7-reference)
+* [Bing Web Search API reference](/rest/api/cognitiveservices-bingsearch/bing-web-api-v7-reference)
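Because an answer appears in the response only when Bing considers it relevant, client code typically probes for each answer field before rendering it. A minimal sketch, using a trimmed, hypothetical response body:

```csharp
using System;
using System.Text.Json;

class AnswerInspection
{
    static void Main()
    {
        // Trimmed, hypothetical response; a real SearchResponse has many more fields.
        const string json = @"{ ""webPages"": { ""value"": [] }, ""news"": { ""value"": [] } }";

        using JsonDocument doc = JsonDocument.Parse(json);

        // Render only the answers that Bing actually included for this query.
        foreach (string answer in new[] { "webPages", "images", "videos", "news" })
        {
            if (doc.RootElement.TryGetProperty(answer, out _))
            {
                Console.WriteLine($"The response contains the {answer} answer.");
            }
        }
    }
}
```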
cognitive-services Throttling Requests https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Bing-Web-Search/throttling-requests.md
# Throttling requests to the Bing Web Search API [!INCLUDE [cognitive-services-bing-throttling-requests](../../../includes/cognitive-services-bing-throttling-requests.md)] ## Next steps
-* [Bing Web Search API reference](/rest/api/cognitiveservices-bingsearch/bing-web-api-v7-reference)
+* [Bing Web Search API reference](/rest/api/cognitiveservices-bingsearch/bing-web-api-v7-reference)
cognitive-services Tutorial Bing Web Search Single Page App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Bing-Web-Search/tutorial-bing-web-search-single-page-app.md
# Tutorial: Create a single-page app using the Bing Web Search API
-This single-page app demonstrates how to retrieve, parse, and display search results from the Bing Web Search API. The tutorial uses boilerplate HTML and CSS, and focuses on the JavaScript code. HTML, CSS, and JS files are available on [GitHub](https://github.com/Azure-Samples/cognitive-services-REST-api-samples/tree/master/Tutorials/Bing-Web-Search) with quickstart instructions.
+This single-page app demonstrates how to retrieve, parse, and display search results from the Bing Web Search API. The tutorial uses boilerplate HTML and CSS, and focuses on the JavaScript code. HTML, CSS, and JS files are available on [GitHub](https://github.com/Azure-Samples/cognitive-services-REST-api-samples/tree/master/Tutorials/bing-web-search) with quickstart instructions.
This sample app can:
git clone https://github.com/Azure-Samples/cognitive-services-REST-api-samples.g
Then run `npm install`. For this tutorial, Express.js is the only dependency. ```console
-cd <path-to-repo>/cognitive-services-REST-api-samples/Tutorials/Bing-Web-Search
+cd <path-to-repo>/cognitive-services-REST-api-samples/Tutorials/bing-web-search
npm install ```
cognitive-services Use Display Requirements https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Bing-Web-Search/use-display-requirements.md
# Bing Search API use and display requirements These use and display requirements apply to any implementation of the content and associated information from the following Bing Search APIs, including relationships, metadata, and other signals.
cognitive-services Web Search Endpoints https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Bing-Web-Search/web-search-endpoints.md
# Web Search endpoint The **Web Search API** returns Web pages, news, images, videos, and [entities](../bing-entities-search/overview.md). Entities have summary information about a person, place, or topic.
Endpoint: For details about headers, parameters, market codes, response objects,
## Response JSON
-The response to a Web search request includes all results as JSON objects. Parsing the result requires procedures that handle the elements of each type. See the [tutorial](./tutorial-bing-web-search-single-page-app.md) and [source code](https://github.com/Azure-Samples/cognitive-services-REST-api-samples/tree/master/Tutorials/Bing-Web-Search) for examples.
+The response to a Web search request includes all results as JSON objects. Parsing the result requires procedures that handle the elements of each type. See the [tutorial](./tutorial-bing-web-search-single-page-app.md) and [source code](https://github.com/Azure-Samples/cognitive-services-REST-api-samples/tree/master/Tutorials/bing-web-search) for examples.
## Next steps The **Bing** APIs support search actions that return results according to their type. All search endpoints return results as JSON response objects. All endpoints support queries that return results in a specific language and for a location specified by longitude, latitude, and search radius. For complete information about the parameters supported by each endpoint, see the reference pages for each type.
-For examples of basic requests using the Web search API, see [Search the Web Quick-starts](./overview.md).
+For examples of basic requests using the Web search API, see [Search the Web Quick-starts](./overview.md).
cognitive-services Category Taxonomy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Computer-vision/Category-Taxonomy.md
- Title: Taxonomy of image categories - Computer Vision-
-description: Get the 86 categories of taxonomy for the Computer Vision API in Azure Cognitive Services.
------ Previously updated : 04/17/2019---
-# Computer Vision 86-category taxonomy
-
-abstract_
-
-abstract_net
-
-abstract_nonphoto
-
-abstract_rect
-
-abstract_shape
-
-abstract_texture
-
-animal_
-
-animal_bird
-
-animal_cat
-
-animal_dog
-
-animal_horse
-
-animal_panda
-
-building_
-
-building_arch
-
-building_brickwall
-
-building_church
-
-building_corner
-
-building_doorwindows
-
-building_pillar
-
-building_stair
-
-building_street
-
-dark_
-
-drink_
-
-drink_can
-
-dark_fire
-
-dark_fireworks
-
-dark_light
-
-food_
-
-food_bread
-
-food_fastfood
-
-food_grilled
-
-food_pizza
-
-indoor_
-
-indoor_churchwindow
-
-indoor_court
-
-indoor_doorwindows
-
-indoor_marketstore
-
-indoor_room
-
-indoor_venue
-
-object_screen
-
-object_sculpture
-
-others_
-
-outdoor_
-
-outdoor_city
-
-outdoor_field
-
-outdoor_grass
-
-outdoor_house
-
-outdoor_mountain
-
-outdoor_oceanbeach
-
-outdoor_playground
-
-outdoor_pool
-
-outdoor_railway
-
-outdoor_road
-
-outdoor_sportsfield
-
-outdoor_stonerock
-
-outdoor_street
-
-outdoor_water
-
-outdoor_waterside
-
-people_
-
-people_baby
-
-people_crowd
-
-people_group
-
-people_hand
-
-people_many
-
-people_portrait
-
-people_show
-
-people_swimming
-
-people_tattoo
-
-people_young
-
-plant_
-
-plant_branch
-
-plant_flower
-
-plant_leaves
-
-plant_tree
-
-sky_cloud
-
-sky_object
-
-sky_sun
-
-text_
-
-text_mag
-
-text_map
-
-text_menu
-
-text_sign
-
-trans_bicycle
-
-trans_bus
-
-trans_car
-
-trans_trainstation
cognitive-services Build Enrollment App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Computer-vision/Tutorials/build-enrollment-app.md
- Title: Build a React app to add users to a Face service-
-description: Learn how to set up your development environment and deploy a Face app to get consent from customers.
------ Previously updated : 11/17/2020---
-# Build a React app to add users to a Face service
-
-This guide will show you how to get started with the sample Face enrollment application. The app demonstrates best practices for obtaining meaningful consent to add users into a face recognition service and for acquiring high-accuracy face data. An integrated system could use an app like this to provide touchless access control, identity verification, attendance tracking, or a personalization kiosk based on users' face data.
-
-When launched, the application shows users a detailed consent screen. If the user gives consent, the app prompts for a username and password and then captures a high-quality face image using the device's camera.
-
-The sample app is written using JavaScript and the React Native framework. It can currently be deployed on Android and iOS devices; more deployment options are coming in the future.
-
-## Prerequisites
-
-* An Azure subscription - [Create one for free](https://azure.microsoft.com/free/cognitive-services/).
-* Once you have your Azure subscription, [create a Face resource](https://portal.azure.com/#create/Microsoft.CognitiveServicesFace) in the Azure portal to get your key and endpoint. After it deploys, select **Go to resource**.
- * You'll need the key and endpoint from the resource you created to connect your application to Face API.
-
-### Important Security Considerations
-* For local development and initial limited testing, it is acceptable (although not best practice) to use environment variables to hold the API key and endpoint. For pilot and final deployments, the API key should be stored securely - which likely involves using an intermediate service to validate a user token generated during login.
-* Never store the API key or endpoint in code or commit them to a version control system (e.g. Git). If that happens by mistake, you should immediately generate a new API key/endpoint and revoke the previous ones.
-* As a best practice, consider having separate API keys for development and production.
-
-## Set up the development environment
-
-#### [Android](#tab/android)
-
-1. Clone the git repository for the [sample app](https://github.com/azure-samples/cognitive-services-FaceAPIEnrollmentSample).
-1. To set up your development environment, follow the <a href="https://reactnative.dev/docs/environment-setup" title="React Native documentation" target="_blank">React Native documentation <span class="docon docon-navigate-external x-hidden-focus"></span></a>. Select **React Native CLI Quickstart**. Select your development OS and **Android** as the target OS. Complete the sections **Installing dependencies** and **Android development environment**.
-1. Download your preferred text editor such as [Visual Studio Code](https://code.visualstudio.com/).
-1. Retrieve your FaceAPI endpoint and key in the Azure portal under the **Overview** tab of your resource. Don't check in your Face API key to your remote repository.
-
- > [!WARNING]
- > For local development and testing only, you can enter the API key and endpoint as environment variables. For final deployment, store the API key in a secure location and never in the code or environment variables. See the [Cognitive Services Authentication guide](../../authentication.md) for other ways to authenticate the service.
-
-1. Run the app using either the Android Virtual Device emulator from Android Studio, or your own Android device. To test your app on a physical device, follow the relevant <a href="https://reactnative.dev/docs/running-on-device" title="React Native documentation" target="_blank">React Native documentation <span class="docon docon-navigate-external x-hidden-focus"></span></a>.
-
-#### [iOS](#tab/ios)
-
-1. Clone the git repository for the [sample app](https://github.com/azure-samples/cognitive-services-FaceAPIEnrollmentSample).
-1. To set up your development environment, follow the <a href="https://reactnative.dev/docs/environment-setup" title="React Native documentation" target="_blank">React Native documentation <span class="docon docon-navigate-external x-hidden-focus"></span></a>. Select **React Native CLI Quickstart**. Select **macOS** as your development OS and **iOS** as the target OS. Complete the section **Installing dependencies**.
-1. Download your preferred text editor such as [Visual Studio Code](https://code.visualstudio.com/). You will also need to download Xcode.
-1. Retrieve your FaceAPI endpoint and key in the Azure portal under the **Overview** tab of your resource.
-
- > [!WARNING]
- > For local development and testing only, you can enter the API key and endpoint as environment variables. For final deployment, store the API key in a secure location and never in the code or environment variables. See the [Cognitive Services Authentication guide](../../authentication.md) for other ways to authenticate the service.
-
-1. Run the app using either a simulated device from Xcode, or your own iOS device. To test your app on a physical device, follow the relevant <a href="https://reactnative.dev/docs/running-on-device" title="React Native documentation" target="_blank">React Native documentation <span class="docon docon-navigate-external x-hidden-focus"></span></a>.
---
-## Customize the app for your business
-
-Now that you have set up the sample app, you can tailor it to your own needs.
-
-For example, you may want to add situation-specific information on your consent page:
-
-> [!div class="mx-imgBorder"]
-> ![app consent page](../media/enrollment-app/1-consent-1.jpg)
-
-1. Add more instructions to improve verification accuracy.
-
- Many face recognition issues are caused by low-quality reference images. Some factors that can degrade model performance are:
- * Face size (faces that are distant from the camera)
- * Face orientation (faces turned or tilted away from camera)
- * Poor lighting conditions (either low light or backlighting) where the image may be poorly exposed or have too much noise
-    * Occlusion (partially hidden or obstructed faces, including by accessories like hats or thick-rimmed glasses)
- * Blur (such as by rapid face movement when the photograph was taken).
-
-   The service provides image quality checks to help you decide whether an image is of sufficient quality, based on the factors above, to add the customer or to attempt face recognition. This app demonstrates how to access frames from the device's camera, check image quality, show user interface messages that help the user capture a higher-quality image, select the highest-quality frames, and add the detected face into the Face API service.
--
-> [!div class="mx-imgBorder"]
-> ![app image capture instruction page](../media/enrollment-app/4-instruction.jpg)
-
-1. The sample app offers functionality for deleting the user's information and the option to re-add it. You can enable or disable these operations based on your business requirements.
-
-> [!div class="mx-imgBorder"]
-> ![profile management page](../media/enrollment-app/10-manage-2.jpg)
-
-To extend the app's functionality to cover the full experience, read the [overview](../enrollment-overview.md) for additional features to implement and best practices.
-
-1. Configure your database to map each person to their face ID
-
-   You need to use a database to store the face image along with user metadata. The social security number or other unique person identifier can be used as a key to look up their face ID. A minimal sketch of this mapping follows this list.
-
-1. For secure methods of passing your subscription key and endpoint to Face service, see the Azure Cognitive Services [Security](/azure/cognitive-services/cognitive-services-security?tabs=command-line%2Ccsharp) guide.
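A minimal hedged sketch of the person-to-face-ID mapping described in the list above, using an in-memory dictionary as a stand-in for a real database table; the key format and types are illustrative assumptions.

```csharp
using System;
using System.Collections.Generic;

class PersonDirectory
{
    // Stand-in for a real database table: unique person identifier -> persisted face ID.
    private readonly Dictionary<string, Guid> _faceIdsByPersonKey = new Dictionary<string, Guid>();

    public void Enroll(string personKey, Guid persistedFaceId) =>
        _faceIdsByPersonKey[personKey] = persistedFaceId;

    public bool TryLookUp(string personKey, out Guid persistedFaceId) =>
        _faceIdsByPersonKey.TryGetValue(personKey, out persistedFaceId);
}

class Demo
{
    static void Main()
    {
        var directory = new PersonDirectory();
        Guid faceId = Guid.NewGuid();                 // in practice this comes from the Face service
        directory.Enroll("employee-12345", faceId);   // key = your unique person identifier

        if (directory.TryLookUp("employee-12345", out Guid found))
        {
            Console.WriteLine($"Face ID for employee-12345: {found}");
        }
    }
}
```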
--
-## Deploy the app
-
-#### [Android](#tab/android)
-
-First, make sure that your app is ready for production deployment: remove any keys or secrets from the app code and make sure you have followed the [security best practices](../../cognitive-services-security.md?tabs=command-line%2ccsharp).
-
-When you're ready to release your app for production, you'll generate a release-ready APK file, which is the package file format for Android apps. This APK file must be signed with a private key. With this release build, you can begin distributing the app to your devices directly.
-
-Follow the <a href="https://developer.android.com/studio/publish/preparing#publishing-build" title="Prepare for release" target="_blank">Prepare for release <span class="docon docon-navigate-external x-hidden-focus"></span></a> documentation to learn how to generate a private key, sign your application, and generate a release APK.
-
-Once you've created a signed APK, see the <a href="https://developer.android.com/studio/publish" title="Publish your app" target="_blank">Publish your app <span class="docon docon-navigate-external x-hidden-focus"></span></a> documentation to learn more about how to release your app.
-
-#### [iOS](#tab/ios)
-
-First, make sure that your app is ready for production deployment: remove any keys or secrets from the app code and make sure you have followed the [security best practices](../../cognitive-services-security.md?tabs=command-line%2ccsharp). To prepare for distribution, you will need to create an app icon, a launch screen, and configure deployment info settings. Follow the [documentation from Xcode](https://developer.apple.com/documentation/Xcode/preparing_your_app_for_distribution) to prepare your app for distribution.
-
-When you're ready to release your app for production, you'll build an archive of your app. Follow the [Xcode documentation](https://developer.apple.com/documentation/Xcode/distributing_your_app_for_beta_testing_and_releases) on how to create an archive build and options for distributing your app.
---
-## Next steps
-
-In this guide, you learned how to set up your development environment and get started with the sample app. If you're new to React Native, you can read their [getting started docs](https://reactnative.dev/docs/getting-started) to learn more background information. It also may be helpful to familiarize yourself with [Face API](../overview-identity.md). Read the other sections on adding users before you begin development.
cognitive-services Storage Lab Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Computer-vision/Tutorials/storage-lab-tutorial.md
- Title: "Tutorial: Generate metadata for Azure images"-
-description: In this tutorial, you'll learn how to integrate the Azure Computer Vision service into a web app to generate metadata for images.
------- Previously updated : 12/29/2022--
-#Customer intent: As a developer of an image-intensive web app, I want to be able to automatically generate captions and search keywords for each of my images.
--
-# Tutorial: Use Computer Vision to generate image metadata in Azure Storage
-
-In this tutorial, you'll learn how to integrate the Azure Computer Vision service into a web app to generate metadata for uploaded images. This is useful for [digital asset management (DAM)](../overview.md#computer-vision-for-digital-asset-management) scenarios, for example when a company wants to quickly generate descriptive captions or searchable keywords for all of its images.
-
-You'll use Visual Studio to write an MVC Web app that accepts images uploaded by users and stores the images in Azure blob storage. You'll learn how to read and write blobs in C# and use blob metadata to attach additional information to the blobs you create. Then you'll submit each image uploaded by the user to the Computer Vision API to generate a caption and search metadata for the image. Finally, you can deploy the app to the cloud using Visual Studio.
-
-This tutorial shows you how to:
-
-> [!div class="checklist"]
-> * Create a storage account and storage containers using the Azure portal
-> * Create a Web app in Visual Studio and deploy it to Azure
-> * Use the Computer Vision API to extract information from images
-> * Attach metadata to Azure Storage images
-> * Check image metadata using [Azure Storage Explorer](http://storageexplorer.com/)
-
-> [!TIP]
-> The section [Use Computer Vision to generate metadata](#Exercise5) is most relevant to Image Analysis. Skip to that section if you just want to see how Image Analysis is integrated into an established application.
-
-If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/cognitive-services) before you begin.
-
-## Prerequisites
-- [Visual Studio 2017 Community edition](https://www.visualstudio.com/products/visual-studio-community-vs.aspx) or higher, with the "ASP.NET and web development" and "Azure development" workloads installed.
-- The [Azure Storage Explorer](http://storageexplorer.com/) tool installed.
-
-<a name="Exercise1"></a>
-## Create a storage account
-
-In this section, you'll use the [Azure portal](https://portal.azure.com?WT.mc_id=academiccontent-github-cxa) to create a storage account. Then you'll create a pair of containers: one to store images uploaded by the user, and another to store image thumbnails generated from the uploaded images.
-
-1. Open the [Azure portal](https://portal.azure.com?WT.mc_id=academiccontent-github-cxa) in your browser. If you're asked to sign in, do so using your Microsoft account.
-1. To create a storage account, click **+ Create a resource** in the ribbon on the left. Then click **Storage**, followed by **Storage account**.
-
- ![Creating a storage account](Images/new-storage-account.png)
-1. Enter a unique name for the storage account in the **Name** field and make sure a green check mark appears next to it. The name is important, because it forms one part of the URL through which blobs created under this account are accessed. Place the storage account in a new resource group named "IntellipixResources," and select the region nearest you. Finish by clicking the **Review + create** button at the bottom of the screen to create the new storage account.
- > [!NOTE]
- > Storage account names can be 3 to 24 characters in length and can only contain numbers and lowercase letters. In addition, the name you enter must be unique within Azure. If someone else has chosen the same name, you'll be notified that the name isn't available with a red exclamation mark in the **Name** field.
-
- ![Specifying parameters for a new storage account](Images/create-storage-account.png)
-1. Click **Resource groups** in the ribbon on the left. Then click the "IntellipixResources" resource group.
-
- ![Opening the resource group](Images/open-resource-group.png)
-1. In the tab that opens for the resource group, click the storage account you created. If the storage account isn't there yet, you can click **Refresh** at the top of the tab until it appears.
-
- ![Opening the new storage account](Images/open-storage-account.png)
-1. In the tab for the storage account, click **Blobs** to view a list of containers associated with this account.
-
- ![Viewing blobs button](Images/view-containers.png)
-
-1. The storage account currently has no containers. Before you can create a blob, you must create a container to store it in. Click **+ Container** to create a new container. Type `photos` into the **Name** field and select **Blob** as the **Public access level**. Then click **OK** to create a container named "photos."
-
- > By default, containers and their contents are private. Selecting **Blob** as the access level makes the blobs in the "photos" container publicly accessible, but doesn't make the container itself public. This is what you want because the images stored in the "photos" container will be linked to from a Web app.
-
- ![Creating a "photos" container](Images/create-photos-container.png)
-
-1. Repeat the previous step to create a container named "thumbnails," once more ensuring that the container's **Public access level** is set to **Blob**.
-1. Confirm that both containers appear in the list of containers for this storage account, and that the names are spelled correctly.
-
- ![The new containers](Images/new-containers.png)
-
-1. Close the "Blob service" screen. Click **Access keys** in the menu on the left side of the storage-account screen, and then click the **Copy** button next to **KEY** for **key1**. Paste this access key into your favorite text editor for later use.
-
- ![Copying the access key](Images/copy-storage-account-access-key.png)
-
-You've now created a storage account to hold images uploaded to the app you're going to build, and containers to store the images in.
-
-<a name="Exercise2"></a>
-## Run Azure Storage Explorer
-
-[Azure Storage Explorer](http://storageexplorer.com/) is a free tool that provides a graphical interface for working with Azure Storage on PCs running Windows, macOS, and Linux. It provides most of the same functionality as the Azure portal and offers other features like the ability to view blob metadata. In this section, you'll use the Microsoft Azure Storage Explorer to view the containers you created in the previous section.
-
-1. If you haven't installed Storage Explorer or would like to make sure you're running the latest version, go to http://storageexplorer.com/ and download and install it.
-1. Start Storage Explorer. If you're asked to sign in, do so using your Microsoft account&mdash;the same one that you used to sign in to the Azure portal. If you don't see the storage account in Storage Explorer's left pane, click the **Manage Accounts** button highlighted below and make sure both your Microsoft account and the subscription used to create the storage account have been added to Storage Explorer.
-
- ![Managing accounts in Storage Explorer](Images/add-account.png)
-
-1. Click the small arrow next to the storage account to display its contents, and then click the arrow next to **Blob Containers**. Confirm that the containers you created appear in the list.
-
- ![Viewing blob containers](Images/storage-explorer.png)
-
-The containers are currently empty, but that will change once your app is deployed and you start uploading photos. Having Storage Explorer installed will make it easy for you to see what your app writes to blob storage.
-
-<a name="Exercise3"></a>
-## Create a new Web app in Visual Studio
-
-In this section, you'll create a new Web app in Visual Studio and add code to implement the basic functionality required to upload images, write them to blob storage, and display them in a Web page.
-
-1. Start Visual Studio and use the **File -> New -> Project** command to create a new Visual C# **ASP.NET Web Application** project named "Intellipix" (short for "Intelligent Pictures").
-
- ![Creating a new Web Application project](Images/new-web-app.png)
-
-1. In the "New ASP.NET Web Application" dialog, make sure **MVC** is selected. Then click **OK**.
-
- ![Creating a new ASP.NET MVC project](Images/new-mvc-project.png)
-
-1. Take a moment to review the project structure in Solution Explorer. Among other things, there's a folder named **Controllers** that holds the project's MVC controllers, and a folder named **Views** that holds the project's views. You'll be working with assets in these folders and others as you implement the application.
-
- ![The project shown in Solution Explorer](Images/project-structure.png)
-
-1. Use Visual Studio's **Debug -> Start Without Debugging** command (or press **Ctrl+F5**) to launch the application in your browser. Here's how the application looks in its present state:
-
- ![The initial application](Images/initial-application.png)
-
-1. Close the browser and return to Visual Studio. In Solution Explorer, right-click the **Intellipix** project and select **Manage NuGet Packages...**. Click **Browse**. Then type `imageresizer` into the search box and select the NuGet package named **ImageResizer**. Finally, click **Install** to install the latest stable version of the package. ImageResizer contains APIs that you'll use to create image thumbnails from the images uploaded to the app. OK any changes and accept any licenses presented to you.
-
- ![Installing ImageResizer](Images/install-image-resizer.png)
-
-1. Repeat this process to add the NuGet package named **WindowsAzure.Storage** to the project. This package contains APIs for accessing Azure Storage from .NET applications. OK any changes and accept any licenses presented to you.
-
- ![Installing WindowsAzure.Storage](Images/install-storage-package.png)
-
-1. Open _Web.config_ and add the following statement to the ```<appSettings>``` section, replacing ACCOUNT_NAME with the name of the storage account you created in the first section, and ACCOUNT_KEY with the access key you saved.
-
- ```xml
- <add key="StorageConnectionString" value="DefaultEndpointsProtocol=https;AccountName=ACCOUNT_NAME;AccountKey=ACCOUNT_KEY" />
- ```
-
- > [!IMPORTANT]
- > The _Web.config_ file is meant to hold sensitive information like your subscription keys, and any HTTP request to a file with the _.config_ extension is handled by the ASP.NET engine, which returns a "This type of page is not served" message. However, if an attacker is able to find some other exploit that allows them to view your _Web.config_ contents, then they'll be able to expose that information. See [Protecting Connection Strings and Other Configuration Information](/aspnet/web-forms/overview/data-access/advanced-data-access-scenarios/protecting-connection-strings-and-other-configuration-information-cs) for extra steps you can take to further secure your _Web.config_ data.
-
-1. Open the file named *_Layout.cshtml* in the project's **Views/Shared** folder. On line 19, change "Application name" to "Intellipix." The line should look like this:
-
- ```C#
- @Html.ActionLink("Intellipix", "Index", "Home", new { area = "" }, new { @class = "navbar-brand" })
- ```
-
- > [!NOTE]
- > In an ASP.NET MVC project, *_Layout.cshtml* is a special view that serves as a template for other views. You typically define header and footer content that is common to all views in this file.
-
-1. Right-click the project's **Models** folder and use the **Add -> Class...** command to add a class file named *BlobInfo.cs* to the folder. Then replace the empty **BlobInfo** class with the following class definition:
-
- ```C#
- public class BlobInfo
- {
- public string ImageUri { get; set; }
- public string ThumbnailUri { get; set; }
- public string Caption { get; set; }
- }
- ```
-
-1. Open *HomeController.cs* from the project's **Controllers** folder and add the following `using` statements to the top of the file:
-
- ```C#
- using ImageResizer;
- using Intellipix.Models;
- using Microsoft.WindowsAzure.Storage;
- using Microsoft.WindowsAzure.Storage.Blob;
- using System.Configuration;
- using System.Threading.Tasks;
- using System.IO;
- ```
-
-1. Replace the **Index** method in *HomeController.cs* with the following implementation:
-
- ```C#
- public ActionResult Index()
- {
- // Pass a list of blob URIs in ViewBag
- CloudStorageAccount account = CloudStorageAccount.Parse(ConfigurationManager.AppSettings["StorageConnectionString"]);
- CloudBlobClient client = account.CreateCloudBlobClient();
- CloudBlobContainer container = client.GetContainerReference("photos");
- List<BlobInfo> blobs = new List<BlobInfo>();
-
- foreach (IListBlobItem item in container.ListBlobs())
- {
- var blob = item as CloudBlockBlob;
-
- if (blob != null)
- {
- blobs.Add(new BlobInfo()
- {
- ImageUri = blob.Uri.ToString(),
- ThumbnailUri = blob.Uri.ToString().Replace("/photos/", "/thumbnails/")
- });
- }
- }
-
- ViewBag.Blobs = blobs.ToArray();
- return View();
- }
- ```
-
- The new **Index** method enumerates the blobs in the `"photos"` container and passes an array of **BlobInfo** objects representing those blobs to the view through ASP.NET MVC's **ViewBag** property. Later, you'll modify the view to enumerate these objects and display a collection of photo thumbnails. The classes you'll use to access your storage account and enumerate the blobs&mdash;**[CloudStorageAccount](/dotnet/api/microsoft.azure.storage.cloudstorageaccount?view=azure-dotnet&preserve-view=true)**, **[CloudBlobClient](/dotnet/api/microsoft.azure.storage.blob.cloudblobclient?view=azure-dotnet-legacy&preserve-view=true)**, and **[CloudBlobContainer](/dotnet/api/microsoft.azure.storage.blob.cloudblobcontainer?view=azure-dotnet-legacy&preserve-view=true)**&mdash;come from the **WindowsAzure.Storage** package you installed through NuGet.
-
-1. Add the following method to the **HomeController** class in *HomeController.cs*:
-
- ```C#
- [HttpPost]
- public async Task<ActionResult> Upload(HttpPostedFileBase file)
- {
- if (file != null && file.ContentLength > 0)
- {
- // Make sure the user selected an image file
- if (!file.ContentType.StartsWith("image"))
- {
- TempData["Message"] = "Only image files may be uploaded";
- }
- else
- {
- try
- {
- // Save the original image in the "photos" container
- CloudStorageAccount account = CloudStorageAccount.Parse(ConfigurationManager.AppSettings["StorageConnectionString"]);
- CloudBlobClient client = account.CreateCloudBlobClient();
- CloudBlobContainer container = client.GetContainerReference("photos");
- CloudBlockBlob photo = container.GetBlockBlobReference(Path.GetFileName(file.FileName));
- await photo.UploadFromStreamAsync(file.InputStream);
-
- // Generate a thumbnail and save it in the "thumbnails" container
- using (var outputStream = new MemoryStream())
- {
- file.InputStream.Seek(0L, SeekOrigin.Begin);
- var settings = new ResizeSettings { MaxWidth = 192 };
- ImageBuilder.Current.Build(file.InputStream, outputStream, settings);
- outputStream.Seek(0L, SeekOrigin.Begin);
- container = client.GetContainerReference("thumbnails");
- CloudBlockBlob thumbnail = container.GetBlockBlobReference(Path.GetFileName(file.FileName));
- await thumbnail.UploadFromStreamAsync(outputStream);
- }
- }
- catch (Exception ex)
- {
- // In case something goes wrong
- TempData["Message"] = ex.Message;
- }
- }
- }
-
- return RedirectToAction("Index");
- }
- ```
-
- This is the method that's called when you upload a photo. It stores each uploaded image as a blob in the `"photos"` container, creates a thumbnail image from the original image using the `ImageResizer` package, and stores the thumbnail image as a blob in the `"thumbnails"` container.
-
-1. Open *Index.cshtml* in the project's **Views/Home** folder and replace its contents with the following code and markup:
-
- ```HTML
- @{
- ViewBag.Title = "Intellipix Home Page";
- }
-
- @using Intellipix.Models
-
- <div class="container" style="padding-top: 24px">
- <div class="row">
- <div class="col-sm-8">
- @using (Html.BeginForm("Upload", "Home", FormMethod.Post, new { enctype = "multipart/form-data" }))
- {
- <input type="file" name="file" id="upload" style="display: none" onchange="$('#submit').click();" />
- <input type="button" value="Upload a Photo" class="btn btn-primary btn-lg" onclick="$('#upload').click();" />
- <input type="submit" id="submit" style="display: none" />
- }
- </div>
- <div class="col-sm-4 pull-right">
- </div>
- </div>
-
- <hr />
-
- <div class="row">
- <div class="col-sm-12">
- @foreach (BlobInfo blob in ViewBag.Blobs)
- {
- <img src="@blob.ThumbnailUri" width="192" title="@blob.Caption" style="padding-right: 16px; padding-bottom: 16px" />
- }
- </div>
- </div>
- </div>
-
- @section scripts
- {
- <script type="text/javascript" language="javascript">
- if ("@TempData["Message"]" !== "") {
- alert("@TempData["Message"]");
- }
- </script>
- }
- ```
-
- The language used here is [Razor](https://www.asp.net/web-pages/overview/getting-started/introducing-razor-syntax-c), which lets you embed executable code in HTML markup. The ```@foreach``` statement in the middle of the file enumerates the **BlobInfo** objects passed from the controller in **ViewBag** and creates HTML ```<img>``` elements from them. The ```src``` property of each element is initialized with the URI of the blob containing the image thumbnail.
-
-1. Download and unzip the _photos.zip_ file from the [GitHub sample data repository](https://github.com/Azure-Samples/cognitive-services-sample-data-files/tree/master/ComputerVision/storage-lab-tutorial). This is an assortment of different photos you can use to test the app.
-
-1. Save your changes and press **Ctrl+F5** to launch the application in your browser. Then click **Upload a Photo** and upload one of the images you downloaded. Confirm that a thumbnail version of the photo appears on the page.
-
- ![Intellipix with one photo uploaded](Images/one-photo-uploaded.png)
-
-1. Upload a few more images from your **photos** folder. Confirm that they appear on the page, too:
-
- ![Intellipix with three photos uploaded](Images/three-photos-uploaded.png)
-
-1. Right-click in your browser and select **View page source** to view the source code for the page. Find the ```<img>``` elements representing the image thumbnails. Observe that the URLs assigned to the images refer directly to blobs in blob storage. This is because you set the containers' **Public access level** to **Blob**, which makes the blobs inside publicly accessible.
-
-1. Return to Azure Storage Explorer (or restart it if you didn't leave it running) and select the `"photos"` container under your storage account. The number of blobs in the container should equal the number of photos you uploaded. Double-click one of the blobs to download it and see the image stored in the blob.
-
- ![Contents of the "photos" container](Images/photos-container.png)
-
-1. Open the `"thumbnails"` container in Storage Explorer. Open one of the blobs to view the thumbnail images generated from the image uploads.
-
-The app doesn't yet offer a way to view the original images that you uploaded. Ideally, clicking an image thumbnail should display the original image. You'll add that feature next.
-
-<a name="Exercise4"></a>
-## Add a lightbox for viewing photos
-
-In this section, you'll use a free, open-source JavaScript library to add a lightbox viewer that enables users to see the original images they've uploaded (rather than just the image thumbnails). The files are provided for you. All you have to do is integrate them into the project and make a minor modification to *Index.cshtml*.
-
-1. Download the _lightbox.css_ and _lightbox.js_ files from the [GitHub code repository](https://github.com/Azure-Samples/cognitive-services-quickstart-code/tree/master/dotnet/ComputerVision/storage-lab-tutorial).
-1. In Solution Explorer, right-click your project's **Scripts** folder and use the **Add -> New Item...** command to create a *lightbox.js* file. Paste in the contents from the example file in the [GitHub code repository](https://github.com/Azure-Samples/cognitive-services-quickstart-code/blob/master/dotnet/ComputerVision/storage-lab-tutorial/scripts/lightbox.js).
-
-1. Right-click the project's "Content" folder and use the **Add -> New Item...** command to create a *lightbox.css* file. Paste in the contents from the example file in the [GitHub code repository](https://github.com/Azure-Samples/cognitive-services-quickstart-code/blob/master/dotnet/ComputerVision/storage-lab-tutorial/css/lightbox.css).
-1. Download and unzip the _buttons.zip_ file from the GitHub data files repository: https://github.com/Azure-Samples/cognitive-services-sample-data-files/tree/master/ComputerVision/storage-lab-tutorial. You should have four button images.
-
-1. Right-click the Intellipix project in Solution Explorer and use the **Add -> New Folder** command to add a folder named "Images" to the project.
-
-1. Right-click the **Images** folder and use the **Add -> Existing Item...** command to import the four images you downloaded.
-
-1. Open *BundleConfig.cs* in the project's "App_Start" folder. Add the following statement to the ```RegisterBundles``` method in **BundleConfig.cs**:
-
- ```C#
- bundles.Add(new ScriptBundle("~/bundles/lightbox").Include(
- "~/Scripts/lightbox.js"));
- ```
-
-1. In the same method, find the statement that creates a ```StyleBundle``` from "~/Content/css" and add *lightbox.css* to the list of style sheets in the bundle. Here is the modified statement:
-
- ```C#
- bundles.Add(new StyleBundle("~/Content/css").Include(
- "~/Content/bootstrap.css",
- "~/Content/site.css",
- "~/Content/lightbox.css"));
- ```
-
-1. Open *_Layout.cshtml* in the project's **Views/Shared** folder and add the following statement just before the ```@RenderSection``` statement near the bottom:
-
- ```C#
- @Scripts.Render("~/bundles/lightbox")
- ```
-
-1. The final task is to incorporate the lightbox viewer into the home page. To do that, open *Index.cshtml* (it's in the project's **Views/Home** folder) and replace the ```@foreach``` loop with this one:
-
- ```HTML
- @foreach (BlobInfo blob in ViewBag.Blobs)
- {
- <a href="@blob.ImageUri" rel="lightbox" title="@blob.Caption">
- <img src="@blob.ThumbnailUri" width="192" title="@blob.Caption" style="padding-right: 16px; padding-bottom: 16px" />
- </a>
- }
- ```
-
-1. Save your changes and press **Ctrl+F5** to launch the application in your browser. Then click one of the images you uploaded earlier. Confirm that a lightbox appears and shows an enlarged view of the image.
-
- ![An enlarged image](Images/lightbox-image.png)
-
-1. Click the **X** in the lower-right corner of the lightbox to dismiss it.
-
-Now you have a way to view the images you uploaded. The next step is to do more with those images.
-
-<a name="Exercise5"></a>
-## Use Computer Vision to generate metadata
-
-### Create a Computer Vision resource
-
-You'll need to create a Computer Vision resource for your Azure account; this resource manages your access to Azure's Computer Vision service.
-
-1. Follow the instructions in [Create an Azure Cognitive Services resource](../../cognitive-services-apis-create-account.md) to create a Computer Vision resource.
-
-1. Then go to the menu for your resource group and click the Computer Vision API subscription that you created. Copy the URL under **Endpoint** to somewhere you can easily retrieve it in a moment. Then click **Show access keys**.
-
- ![Azure portal page with the endpoint URL and access keys link outlined](Images/copy-vision-endpoint.png)
-
- [!INCLUDE [Custom subdomains notice](../../../../includes/cognitive-services-custom-subdomains-note.md)]
--
-1. In the next window, copy the value of **KEY 1** to the clipboard.
-
- ![Manage keys dialog, with the copy button outlined](Images/copy-vision-key.png)
-
-### Add Computer Vision credentials
-
-Next, you'll add the required credentials to your app so that it can access Computer Vision resources.
-
-Navigate to the *Web.config* file at the root of the project. Add the following statements to the `<appSettings>` section of the file, replacing `VISION_KEY` with the key you copied in the previous step, and `VISION_ENDPOINT` with the URL you saved in the step before.
-
-```xml
-<add key="SubscriptionKey" value="VISION_KEY" />
-<add key="VisionEndpoint" value="VISION_ENDPOINT" />
-```
-
-Then in the Solution Explorer, right-click the project and use the **Manage NuGet Packages** command to install the package **Microsoft.Azure.CognitiveServices.Vision.ComputerVision**. This package contains the types needed to call the Computer Vision API.
-
-### Add metadata generation code
-
-Next, you'll add the code that actually uses the Computer Vision service to create metadata for images.
-
-1. Open the *HomeController.cs* file in the project's **Controllers** folder and add the following `using` statements at the top of the file:
-
- ```csharp
- using Microsoft.Azure.CognitiveServices.Vision.ComputerVision;
- using Microsoft.Azure.CognitiveServices.Vision.ComputerVision.Models;
- ```
-
-1. Then, go to the **Upload** method; this method converts and uploads images to blob storage. Add the following code immediately after the block that begins with `// Generate a thumbnail` (or at the end of your image-blob-creation process). This code takes the blob containing the image (`photo`), and uses Computer Vision to generate a description for that image. The Computer Vision API also generates a list of keywords that apply to the image. The generated description and keywords are stored in the blob's metadata so that they can be retrieved later on.
-
- ```csharp
- // Submit the image to the Azure Computer Vision API
- ComputerVisionClient vision = new ComputerVisionClient(
- new ApiKeyServiceClientCredentials(ConfigurationManager.AppSettings["SubscriptionKey"]),
- new System.Net.Http.DelegatingHandler[] { });
- vision.Endpoint = ConfigurationManager.AppSettings["VisionEndpoint"];
-
- List<VisualFeatureTypes?> features = new List<VisualFeatureTypes?>() { VisualFeatureTypes.Description };
- var result = await vision.AnalyzeImageAsync(photo.Uri.ToString(), features);
-
- // Record the image description and tags in blob metadata
- photo.Metadata.Add("Caption", result.Description.Captions[0].Text);
-
- for (int i = 0; i < result.Description.Tags.Count; i++)
- {
- string key = String.Format("Tag{0}", i);
- photo.Metadata.Add(key, result.Description.Tags[i]);
- }
-
- await photo.SetMetadataAsync();
- ```
-
-1. Next, go to the **Index** method in the same file. This method enumerates the stored image blobs in the targeted blob container (as **IListBlobItem** instances) and passes them to the application view. Replace the `foreach` block in this method with the following code. This code calls **CloudBlockBlob.FetchAttributes** to get each blob's attached metadata. It extracts the computer-generated description (`caption`) from the metadata and adds it to the **BlobInfo** object, which gets passed to the view.
-
- ```csharp
- foreach (IListBlobItem item in container.ListBlobs())
- {
- var blob = item as CloudBlockBlob;
-
- if (blob != null)
- {
- blob.FetchAttributes(); // Get blob metadata
- var caption = blob.Metadata.ContainsKey("Caption") ? blob.Metadata["Caption"] : blob.Name;
-
- blobs.Add(new BlobInfo()
- {
- ImageUri = blob.Uri.ToString(),
- ThumbnailUri = blob.Uri.ToString().Replace("/photos/", "/thumbnails/"),
- Caption = caption
- });
- }
- }
- ```
-
-### Test the app
-
-Save your changes in Visual Studio and press **Ctrl+F5** to launch the application in your browser. Use the app to upload a few more images, either from the photo set you downloaded or from your own folder. When you hover the cursor over one of the new images in the view, a tooltip window should appear and display the computer-generated caption for the image.
-
-![The computer-generated caption](Images/thumbnail-with-tooltip.png)
-
-To view all of the attached metadata, use Azure Storage Explorer to view the storage container you're using for images. Right-click any of the blobs in the container and select **Properties**. In the dialog, you'll see a list of key-value pairs. The computer-generated image description is stored in the item `Caption`, and the search keywords are stored in `Tag0`, `Tag1`, and so on. When you're finished, click **Cancel** to close the dialog.
-
-![Image properties dialog window, with metadata tags listed](Images/blob-metadata.png)
-
-<a name="Exercise6"></a>
-## Add search to the app
-
-In this section, you will add a search box to the home page, enabling users to do keyword searches on the images that they've uploaded. The keywords are the ones generated by the Computer Vision API and stored in blob metadata.
-
-1. Open *Index.cshtml* in the project's **Views/Home** folder and add the following statements to the empty ```<div>``` element with the ```class="col-sm-4 pull-right"``` attribute:
-
- ```HTML
- @using (Html.BeginForm("Search", "Home", FormMethod.Post, new { enctype = "multipart/form-data", @class = "navbar-form" }))
- {
- <div class="input-group">
- <input type="text" class="form-control" placeholder="Search photos" name="term" value="@ViewBag.Search" style="max-width: 800px">
- <span class="input-group-btn">
- <button class="btn btn-primary" type="submit">
- <i class="glyphicon glyphicon-search"></i>
- </button>
- </span>
- </div>
- }
- ```
-
- This code and markup adds a search box and a **Search** button to the home page.
-
-1. Open *HomeController.cs* in the project's **Controllers** folder and add the following method to the **HomeController** class:
-
- ```C#
- [HttpPost]
- public ActionResult Search(string term)
- {
- return RedirectToAction("Index", new { id = term });
- }
- ```
-
- This is the method that's called when the user clicks the **Search** button added in the previous step. It refreshes the page and includes a search parameter in the URL.
-
-1. Replace the **Index** method with the following implementation:
-
- ```C#
- public ActionResult Index(string id)
- {
- // Pass a list of blob URIs and captions in ViewBag
- CloudStorageAccount account = CloudStorageAccount.Parse(ConfigurationManager.AppSettings["StorageConnectionString"]);
- CloudBlobClient client = account.CreateCloudBlobClient();
- CloudBlobContainer container = client.GetContainerReference("photos");
- List<BlobInfo> blobs = new List<BlobInfo>();
-
- foreach (IListBlobItem item in container.ListBlobs())
- {
- var blob = item as CloudBlockBlob;
-
- if (blob != null)
- {
- blob.FetchAttributes(); // Get blob metadata
-
- if (String.IsNullOrEmpty(id) || HasMatchingMetadata(blob, id))
- {
- var caption = blob.Metadata.ContainsKey("Caption") ? blob.Metadata["Caption"] : blob.Name;
-
- blobs.Add(new BlobInfo()
- {
- ImageUri = blob.Uri.ToString(),
- ThumbnailUri = blob.Uri.ToString().Replace("/photos/", "/thumbnails/"),
- Caption = caption
- });
- }
- }
- }
-
- ViewBag.Blobs = blobs.ToArray();
- ViewBag.Search = id; // Prevent search box from losing its content
- return View();
- }
- ```
-
- Observe that the **Index** method now accepts a parameter _id_ that contains the value the user typed into the search box. An empty or missing _id_ parameter indicates that all the photos should be displayed.
-
-1. Add the following helper method to the **HomeController** class:
-
- ```C#
- private bool HasMatchingMetadata(CloudBlockBlob blob, string term)
- {
- foreach (var item in blob.Metadata)
- {
- if (item.Key.StartsWith("Tag") && item.Value.Equals(term, StringComparison.InvariantCultureIgnoreCase))
- return true;
- }
-
- return false;
- }
- ```
-
- This method is called by the **Index** method to determine whether the metadata keywords attached to a given image blob contain the search term that the user entered.
-
-1. Launch the application again and upload several photos. Feel free to use your own photos, not just the ones provided with the tutorial.
-
-1. Type a keyword such as "river" into the search box. Then click the **Search** button.
-
- ![Performing a search](Images/enter-search-term.png)
-
-1. Search results will vary depending on what you typed and what images you uploaded, but the result should be a filtered list of images whose metadata keywords match the search term that you typed.
-
- ![Search results](Images/search-results.png)
-
-1. Click the browser's back button to display all of the images again.
-
-You're almost finished. It's time to deploy the app to the cloud.
-
-<a name="Exercise7"></a>
-## Deploy the app to Azure
-
-In this section, you'll deploy the app to Azure from Visual Studio. You'll let Visual Studio create an Azure Web App for you, so you don't have to go into the Azure portal and create it separately.
-
-1. Right-click the project in Solution Explorer and select **Publish...** from the context menu. Make sure **Microsoft Azure App Service** and **Create New** are selected, and then click the **Publish** button.
-
- ![Publishing the app](Images/publish-1.png)
-
-1. In the next dialog, select the "IntellipixResources" resource group under **Resource Group**. Click the **New...** button next to "App Service Plan" and create a new App Service Plan in the same location you selected for the storage account in [Create a storage account](#Exercise1), accepting the defaults everywhere else. Finish by clicking the **Create** button.
-
- ![Creating an Azure Web App](Images/publish-2.png)
-
-1. After a few moments, the app will appear in a browser window. Note the URL in the address bar. The app is no longer running locally; it's on the Web, where it's publicly reachable.
-
- ![The finished product!](Images/vs-intellipix.png)
-
-If you make changes to the app and want to push the changes out to the Web, go through the publish process again. You can still test your changes locally before publishing to the Web.
-
-## Clean up resources
-
-If you'd like to keep working on your web app, see the [Next steps](#next-steps) section. If you don't plan to continue using this application, you should delete all app-specific resources. To delete them, you can delete the resource group that contains your Azure Storage account and Computer Vision resource. This removes the storage account, the blobs uploaded to it, and the App Service resources that support the ASP.NET web app.
-
-To delete the resource group, open the **Resource groups** tab in the portal, navigate to the resource group you used for this project, and click **Delete resource group** at the top of the view. You'll be asked to type the resource group's name to confirm you want to delete it. Once deleted, a resource group can't be recovered.
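-
-If you prefer the command line, the same cleanup can be done with the Azure CLI. The following is a minimal sketch, not part of the original tutorial steps; it assumes the resource group is named "IntellipixResources" as in the publish step above:
-
-```bash
-# Deletes the resource group and every resource inside it (storage account, blobs, App Service).
-# The command prompts for confirmation before deleting anything.
-az group delete --name IntellipixResources
-```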
-
-## Next steps
-
-There's much more that you could do to use Azure and develop your Intellipix app even further. For example, you could add support for authenticating users and deleting photos, and rather than force the user to wait for Cognitive Services to process a photo following an upload, you could use [Azure Functions](https://azure.microsoft.com/services/functions/?WT.mc_id=academiccontent-github-cxa) to call the Computer Vision API asynchronously each time an image is added to blob storage. You could also do any number of other Image Analysis operations on the image, outlined in the overview.
-
-> [!div class="nextstepaction"]
-> [Image Analysis overview](../overview-image-analysis.md)
cognitive-services Computer Vision How To Install Containers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Computer-vision/computer-vision-how-to-install-containers.md
- Title: Computer Vision 3.2 GA Read OCR container-
-description: Use the Read 3.2 OCR containers from Computer Vision to extract text from images and documents, on-premises.
------ Previously updated : 03/02/2023--
-keywords: on-premises, OCR, Docker, container
--
-# Install Computer Vision 3.2 GA Read OCR container
--
-Containers enable you to run the Computer Vision APIs in your own environment. Containers are great for specific security and data governance requirements. In this article you'll learn how to download, install, and run the Read (OCR) container.
-
-The Read container allows you to extract printed and handwritten text from images and documents with support for JPEG, PNG, BMP, PDF, and TIFF file formats. For more information, see the [Read API how-to guide](how-to/call-read-api.md).
-
-## What's new
-The `3.2-model-2022-04-30` GA version of the Read container is available with support for [164 languages and other enhancements](./whats-new.md#may-2022). If you're an existing customer, follow the [download instructions](#get-the-container-image) to get started.
-
-The Read 3.2 OCR container is the latest GA model and provides:
-* New models for enhanced accuracy.
-* Support for multiple languages within the same document.
-* Support for a total of 164 languages. See the full list of [OCR-supported languages](./language-support.md#optical-character-recognition-ocr).
-* A single operation for both documents and images.
-* Support for larger documents and images.
-* Confidence scores.
-* Support for documents with both print and handwritten text.
-* Ability to extract text from only selected page(s) in a document.
-* Choose text line output order from default to a more natural reading order for Latin languages only.
-* Text line classification as handwritten style or not for Latin languages only.
-
-If you're using Read 2.0 containers today, see the [migration guide](read-container-migration-guide.md) to learn about changes in the new versions.
-
-## Prerequisites
-
-You must meet the following prerequisites before using the containers:
-
-|Required|Purpose|
-|--|--|
-|Docker Engine| You need the Docker Engine installed on a [host computer](#host-computer-requirements). Docker provides packages that configure the Docker environment on [macOS](https://docs.docker.com/docker-for-mac/), [Windows](https://docs.docker.com/docker-for-windows/), and [Linux](https://docs.docker.com/engine/install/#server). For a primer on Docker and container basics, see the [Docker overview](https://docs.docker.com/engine/docker-overview/).<br><br> Docker must be configured to allow the containers to connect with and send billing data to Azure. <br><br> **On Windows**, Docker must also be configured to support Linux containers.<br><br>|
-|Familiarity with Docker | You should have a basic understanding of Docker concepts, like registries, repositories, containers, and container images, as well as knowledge of basic `docker` commands.|
-|Computer Vision resource |In order to use the container, you must have:<br><br>An Azure **Computer Vision** resource and the associated API key and endpoint URI. Both values are available on the Overview and Keys pages for the resource and are required to start the container.<br><br>**{API_KEY}**: One of the two available resource keys on the **Keys** page<br><br>**{ENDPOINT_URI}**: The endpoint as provided on the **Overview** page|
-
-If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/cognitive-services/) before you begin.
-
-## Request approval to run the container
-
-Fill out and submit the [request form](https://aka.ms/csgate) to request approval to run the container.
---
-### Host computer requirements
--
-### Advanced Vector Extension support
-
-The **host** computer is the computer that runs the docker container. The host *must support* [Advanced Vector Extensions](https://en.wikipedia.org/wiki/Advanced_Vector_Extensions#CPUs_with_AVX2) (AVX2). You can check for AVX2 support on Linux hosts with the following command:
-
-```console
-grep -q avx2 /proc/cpuinfo && echo AVX2 supported || echo No AVX2 support detected
-```
-
-> [!WARNING]
-> The host computer is *required* to support AVX2. The container *will not* function correctly without AVX2 support.
-
-### Container requirements and recommendations
--
-## Get the container image
-
-The Computer Vision Read OCR container image can be found on the `mcr.microsoft.com` container registry syndicate. It resides within the `azure-cognitive-services` repository and is named `read`. The fully qualified container image name is `mcr.microsoft.com/azure-cognitive-services/vision/read`.
-
-To use the latest version of the container, you can use the `latest` tag. You can also find a full list of [tags on the MCR](https://mcr.microsoft.com/product/azure-cognitive-services/vision/read/tags).
-
-The following container images for Read are available.
-
-| Container | Container Registry / Repository / Image Name | Tags |
-|--||--|
-| Read 3.2 GA | `mcr.microsoft.com/azure-cognitive-services/vision/read:3.2-model-2022-04-30` | latest, 3.2, 3.2-model-2022-04-30 |
-
-Use the [`docker pull`](https://docs.docker.com/engine/reference/commandline/pull/) command to download a container image.
-
-```bash
-docker pull mcr.microsoft.com/azure-cognitive-services/vision/read:3.2-model-2022-04-30
-```
----
-## How to use the container
-
-Once the container is on the [host computer](#host-computer-requirements), use the following process to work with the container.
-
-1. [Run the container](#run-the-container), with the required billing settings. More [examples](computer-vision-resource-container-config.md) of the `docker run` command are available.
-1. [Query the container's prediction endpoint](#query-the-containers-prediction-endpoint).
-
-## Run the container
-
-Use the [docker run](https://docs.docker.com/engine/reference/commandline/run/) command to run the container. Refer to [gather required parameters](#gather-required-parameters) for details on how to get the `{ENDPOINT_URI}` and `{API_KEY}` values.
-
-[Examples](computer-vision-resource-container-config.md#example-docker-run-commands) of the `docker run` command are available.
-
-```bash
-docker run --rm -it -p 5000:5000 --memory 16g --cpus 8 \
-mcr.microsoft.com/azure-cognitive-services/vision/read:3.2-model-2022-04-30 \
-Eula=accept \
-Billing={ENDPOINT_URI} \
-ApiKey={API_KEY}
-```
-
-The above command:
-
-* Runs the Read OCR latest GA container from the container image.
-* Allocates 8 CPU cores and 16 gigabytes (GB) of memory.
-* Exposes TCP port 5000 and allocates a pseudo-TTY for the container.
-* Automatically removes the container after it exits. The container image is still available on the host computer.
-
-You can alternatively run the container using environment variables:
-
-```bash
-docker run --rm -it -p 5000:5000 --memory 16g --cpus 8 \
---env Eula=accept \
---env Billing={ENDPOINT_URI} \
---env ApiKey={API_KEY} \
-mcr.microsoft.com/azure-cognitive-services/vision/read:3.2-model-2022-04-30
-```
--
-More [examples](./computer-vision-resource-container-config.md#example-docker-run-commands) of the `docker run` command are available.
-
-> [!IMPORTANT]
-> The `Eula`, `Billing`, and `ApiKey` options must be specified to run the container; otherwise, the container won't start. For more information, see [Billing](#billing).
-
-If you need higher throughput (for example, when processing multi-page files), consider deploying multiple containers [on a Kubernetes cluster](deploy-computer-vision-on-premises.md), using [Azure Storage](../../storage/common/storage-account-create.md) and [Azure Queue](../../storage/queues/storage-queues-introduction.md).
-
-If you're using Azure Storage to store images for processing, you can create a [connection string](../../storage/common/storage-configure-connection-string.md) to use when calling the container.
-
-To find your connection string:
-
-1. Navigate to **Storage accounts** on the Azure portal, and find your account.
-2. Select **Access keys** in the left navigation list.
-3. Your connection string is located under **Connection string**.
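-
-Once you have the connection string, you can pass it to a v3.x container through the `Storage:ObjectStore:AzureBlob:ConnectionString` setting described in [Configure Computer Vision Docker containers](./computer-vision-resource-container-config.md). The following is a hedged sketch rather than an exact command from this article; the endpoint, key, and storage account values are placeholders:
-
-```bash
-# Start the Read container and point it at Azure blob storage for result storage.
-# {ENDPOINT_URI}, {API_KEY}, {ACCOUNT_NAME}, and {ACCOUNT_KEY} are placeholders.
-docker run --rm -it -p 5000:5000 --memory 16g --cpus 8 \
-mcr.microsoft.com/azure-cognitive-services/vision/read:3.2-model-2022-04-30 \
-Eula=accept \
-Billing={ENDPOINT_URI} \
-ApiKey={API_KEY} \
-Storage:ObjectStore:AzureBlob:ConnectionString="DefaultEndpointsProtocol=https;AccountName={ACCOUNT_NAME};AccountKey={ACCOUNT_KEY};EndpointSuffix=core.windows.net"
-```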
--
-<!-- ## Validate container is running -->
--
-## Query the container's prediction endpoint
-
-The container provides REST-based query prediction endpoint APIs.
-
-Use the host, `http://localhost:5000`, for container APIs. You can view the Swagger path at: `http://localhost:5000/swagger/`.
-
-### Asynchronous Read
-
-You can use the `POST /vision/v3.2/read/analyze` and `GET /vision/v3.2/read/operations/{operationId}` operations in concert to asynchronously read an image, similar to how the Computer Vision service uses those corresponding REST operations. The asynchronous POST method will return an `operationId` that is used as the identifier to the HTTP GET request.
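-
-As a hedged sketch (not part of the original walkthrough), the same asynchronous flow can be driven with `curl` against a local container; the image file name is a placeholder:
-
-```bash
-# Submit an image for asynchronous analysis. The response body is empty, but the
-# operation-location response header contains the URL of the result endpoint.
-curl -i -X POST "http://localhost:5000/vision/v3.2/read/analyze" \
-  -H "Content-Type: application/octet-stream" \
-  --data-binary @sample-image.png
-
-# Poll the operation endpoint (using the operationId from the header above)
-# until the returned "status" is "succeeded".
-curl "http://localhost:5000/vision/v3.2/read/operations/{operationId}"
-```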
-
-From the swagger UI, select `Analyze` to expand it in the browser. Then select **Try it out** > **Choose file**. In this example, we'll use the following image:
-
-![tabs vs spaces](media/tabs-vs-spaces.png)
-
-When the asynchronous POST has run successfully, it returns an **HTTP 202** status code. As part of the response, there is an `operation-location` header that holds the result endpoint for the request.
-
-```http
- content-length: 0
- date: Fri, 04 Sep 2020 16:23:01 GMT
- operation-location: http://localhost:5000/vision/v3.2/read/operations/a527d445-8a74-4482-8cb3-c98a65ec7ef9
- server: Kestrel
-```
-
-The `operation-location` is the fully qualified URL and is accessed via an HTTP GET. Here is the JSON response from executing the `operation-location` URL from the preceding image:
-
-```json
-{
- "status": "succeeded",
- "createdDateTime": "2021-02-04T06:32:08.2752706+00:00",
- "lastUpdatedDateTime": "2021-02-04T06:32:08.7706172+00:00",
- "analyzeResult": {
- "version": "3.2.0",
- "readResults": [
- {
- "page": 1,
- "angle": 2.1243,
- "width": 502,
- "height": 252,
- "unit": "pixel",
- "lines": [
- {
- "boundingBox": [
- 58,
- 42,
- 314,
- 59,
- 311,
- 123,
- 56,
- 121
- ],
- "text": "Tabs vs",
- "appearance": {
- "style": {
- "name": "handwriting",
- "confidence": 0.96
- }
- },
- "words": [
- {
- "boundingBox": [
- 68,
- 44,
- 225,
- 59,
- 224,
- 122,
- 66,
- 123
- ],
- "text": "Tabs",
- "confidence": 0.933
- },
- {
- "boundingBox": [
- 241,
- 61,
- 314,
- 72,
- 314,
- 123,
- 239,
- 122
- ],
- "text": "vs",
- "confidence": 0.977
- }
- ]
- },
- {
- "boundingBox": [
- 286,
- 171,
- 415,
- 165,
- 417,
- 197,
- 287,
- 201
- ],
- "text": "paces",
- "appearance": {
- "style": {
- "name": "handwriting",
- "confidence": 0.746
- }
- },
- "words": [
- {
- "boundingBox": [
- 286,
- 179,
- 404,
- 166,
- 405,
- 198,
- 290,
- 201
- ],
- "text": "paces",
- "confidence": 0.938
- }
- ]
- }
- ]
- }
- ]
- }
-}
-```
--
-> [!IMPORTANT]
-> If you deploy multiple Read OCR containers behind a load balancer, for example, under Docker Compose or Kubernetes, you must have an external cache. Because the processing container and the GET request container might not be the same, an external cache stores the results and shares them across containers. For details about cache settings, see [Configure Computer Vision Docker containers](./computer-vision-resource-container-config.md).
-
-### Synchronous read
-
-You can use the following operation to synchronously read an image.
-
-`POST /vision/v3.2/read/syncAnalyze`
-
-When the image is read in its entirety, then and only then does the API return a JSON response. The only exception to this behavior is if an error occurs. If an error occurs, the following JSON is returned:
-
-```json
-{
- "status": "Failed"
-}
-```
-
-The JSON response object has the same object graph as the asynchronous version. If you're a JavaScript user and want type safety, consider using TypeScript to cast the JSON response.
-
-For an example use-case, see the <a href="https://aka.ms/ts-read-api-types" target="_blank" rel="noopener noreferrer">TypeScript sandbox here </a> and select **Run** to visualize its ease-of-use.
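-
-For reference, here's a hedged `curl` sketch of the synchronous call against a local container (the image file name is a placeholder):
-
-```bash
-# Synchronous read: the JSON result is returned directly in the response body
-# once the entire image has been processed.
-curl -X POST "http://localhost:5000/vision/v3.2/read/syncAnalyze" \
-  -H "Content-Type: application/octet-stream" \
-  --data-binary @sample-image.png
-```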
-
-## Run the container disconnected from the internet
--
-## Stop the container
--
-## Troubleshooting
-
-If you run the container with an output [mount](./computer-vision-resource-container-config.md#mount-settings) and logging enabled, the container generates log files that are helpful to troubleshoot issues that happen while starting or running the container.
---
-## Billing
-
-The Cognitive Services containers send billing information to Azure, using the corresponding resource on your Azure account.
--
-For more information about these options, see [Configure containers](./computer-vision-resource-container-config.md).
-
-## Summary
-
-In this article, you learned concepts and workflow for downloading, installing, and running Computer Vision containers. In summary:
-
-* Computer Vision provides a Linux container for Docker, encapsulating Read.
-* The read container image requires an application to run it.
-* Container images run in Docker.
-* You can use either the REST API or SDK to call operations in Read OCR containers by specifying the host URI of the container.
-* You must specify billing information when instantiating a container.
-
-> [!IMPORTANT]
-> Cognitive Services containers are not licensed to run without being connected to Azure for metering. Customers need to enable the containers to communicate billing information with the metering service at all times. Cognitive Services containers do not send customer data (for example, the image or text that is being analyzed) to Microsoft.
-
-## Next steps
-
-* Review [Configure containers](computer-vision-resource-container-config.md) for configuration settings
-* Review the [OCR overview](overview-ocr.md) to learn more about recognizing printed and handwritten text
-* Refer to the [Read API](https://westus.dev.cognitive.microsoft.com/docs/services/computer-vision-v3-2/operations/5d986960601faab4bf452005) for details about the methods supported by the container.
-* Refer to [Frequently asked questions (FAQ)](FAQ.yml) to resolve issues related to Computer Vision functionality.
-* Use more [Cognitive Services Containers](../cognitive-services-container-support.md)
cognitive-services Computer Vision Resource Container Config https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Computer-vision/computer-vision-resource-container-config.md
- Title: Configure Read OCR containers - Computer Vision-
-description: This article shows you how to configure both required and optional settings for Read OCR containers in Computer Vision.
------ Previously updated : 04/09/2021----
-# Configure Read OCR Docker containers
-
-You configure the Computer Vision Read OCR container's runtime environment by using the `docker run` command arguments. This container has several required settings, along with a few optional settings. Several [examples](#example-docker-run-commands) of the command are available. The container-specific settings are the billing settings.
-
-## Configuration settings
--
-> [!IMPORTANT]
-> The [`ApiKey`](#apikey-configuration-setting), [`Billing`](#billing-configuration-setting), and [`Eula`](#eula-setting) settings are used together, and you must provide valid values for all three of them; otherwise your container won't start. For more information about using these configuration settings to instantiate a container, see [Billing](computer-vision-how-to-install-containers.md).
-
-The container also has the following container-specific configuration settings:
-
-|Required|Setting|Purpose|
-|--|--|--|
-|No|ReadEngineConfig:ResultExpirationPeriod| v2.0 containers only. Result expiration period in hours. The default is 48 hours. The setting specifies when the system should clear recognition results. For example, if `resultExpirationPeriod=1`, the system clears the recognition result 1 hour after the process. If `resultExpirationPeriod=0`, the system clears the recognition result after the result is retrieved.|
-|No|Cache:Redis| v2.0 containers only. Enables Redis storage for storing results. A cache is *required* if multiple read OCR containers are placed behind a load balancer.|
-|No|Queue:RabbitMQ|v2.0 containers only. Enables RabbitMQ for dispatching tasks. The setting is useful when multiple read OCR containers are placed behind a load balancer.|
-|No|Queue:Azure:QueueVisibilityTimeoutInMilliseconds | v3.x containers only. The time for a message to be invisible when another worker is processing it. |
-|No|Storage::DocumentStore::MongoDB|v2.0 containers only. Enables MongoDB for permanent result storage. |
-|No|Storage:ObjectStore:AzureBlob:ConnectionString| v3.x containers only. Azure blob storage connection string. |
-|No|Storage:TimeToLiveInDays| v3.x containers only. Result expiration period in days. The setting specifies when the system should clear recognition results. The default is 2 days, which means any result that lives longer than that period isn't guaranteed to be retrieved successfully. The value is an integer and must be between 1 and 7 days.|
-|No|StorageTimeToLiveInMinutes| v3.2-model-2021-09-30-preview and newer containers. Result expiration period in minutes. The setting specifies when the system should clear recognition results. The default is 2 days (2880 minutes), which means any result that lives longer than that period isn't guaranteed to be retrieved successfully. The value is an integer and must be between 60 minutes and 7 days (10080 minutes).|
-|No|Task:MaxRunningTimeSpanInMinutes| v3.x containers only. Maximum running time for a single request. The default is 60 minutes. |
-|No|EnableSyncNTPServer| v3.x containers only, except for v3.2-model-2021-09-30-preview and newer containers. Enables the NTP server synchronization mechanism, which ensures synchronization between the system time and expected task runtime. Note that this requires external network traffic. The default is `true`. |
-|No|NTPServerAddress| v3.x containers only, except for v3.2-model-2021-09-30-preview and newer containers. NTP server for the time sync-up. The default is `time.windows.com`. |
-|No|Mounts:Shared| v3.x containers only. Local folder for storing recognition results. The default is `/share`. When running the container without Azure blob storage, we recommend mounting a volume to this folder to ensure you have enough space for the recognition results. |
-
-## ApiKey configuration setting
-
-The `ApiKey` setting specifies the Azure `Cognitive Services` resource key used to track billing information for the container. You must specify a value for the ApiKey and the value must be a valid key for the _Cognitive Services_ resource specified for the [`Billing`](#billing-configuration-setting) configuration setting.
-
-This setting can be found in the following place:
-
-* Azure portal: **Cognitive Services** Resource Management, under **Keys**
-
-## ApplicationInsights setting
--
-## Billing configuration setting
-
-The `Billing` setting specifies the endpoint URI of the _Cognitive Services_ resource on Azure used to meter billing information for the container. You must specify a value for this configuration setting, and the value must be a valid endpoint URI for a _Cognitive Services_ resource on Azure. The container reports usage about every 10 to 15 minutes.
-
-This setting can be found in the following place:
-
-* Azure portal: **Cognitive Services** Overview, labeled `Endpoint`
-
-Remember to add the `vision/<version>` routing to the endpoint URI as shown in the following table.
-
-|Required| Name | Data type | Description |
-|--||--|-|
-|Yes| `Billing` | String | Billing endpoint URI<br><br>Example:<br>`Billing=https://westcentralus.api.cognitive.microsoft.com/vision/v3.2` |
-
-## Eula setting
--
-## Fluentd settings
--
-## HTTP proxy credentials settings
--
-## Logging settings
-
-
-## Mount settings
-
-Use bind mounts to read and write data to and from the container. You can specify an input mount or output mount by specifying the `--mount` option in the [docker run](https://docs.docker.com/engine/reference/commandline/run/) command.
-
-The Computer Vision containers don't use input or output mounts to store training or service data.
-
-The exact syntax of the host mount location varies depending on the host operating system. Additionally, the [host computer](computer-vision-how-to-install-containers.md#host-computer-requirements)'s mount location may not be accessible due to a conflict between permissions used by the Docker service account and the host mount location permissions.
-
-|Optional| Name | Data type | Description |
-|-||--|-|
-|Not allowed| `Input` | String | Computer Vision containers do not use this.|
-|Optional| `Output` | String | The target of the output mount. The default value is `/output`. This is the location of the logs. This includes container logs. <br><br>Example:<br>`--mount type=bind,src=c:\output,target=/output`|
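-
-As an illustration, here's a hedged sketch of a `docker run` command that combines the output mount with the required billing settings; the mount source folder, endpoint, and key are placeholders:
-
-```bash
-# Bind-mount a host folder as the output mount so container logs are written to the host.
-# {OUTPUT_FOLDER}, {ENDPOINT_URI}, and {API_KEY} are placeholders.
-docker run --rm -it -p 5000:5000 --memory 16g --cpus 8 \
---mount type=bind,src={OUTPUT_FOLDER},target=/output \
-mcr.microsoft.com/azure-cognitive-services/vision/read:3.2-model-2022-04-30 \
-Eula=accept \
-Billing={ENDPOINT_URI} \
-ApiKey={API_KEY}
-```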
-
-## Example docker run commands
-
-The following examples use the configuration settings to illustrate how to write and use `docker run` commands. Once running, the container continues to run until you [stop](computer-vision-how-to-install-containers.md#stop-the-container) it.
-
-* **Line-continuation character**: The Docker commands in the following sections use the back slash, `\`, as a line continuation character. Replace or remove this based on your host operating system's requirements.
-* **Argument order**: Do not change the order of the arguments unless you are very familiar with Docker containers.
-
-Replace {_argument_name_} with your own values:
-
-| Placeholder | Value | Format or example |
-|-|-||
-| **{API_KEY}** | The key of the `Computer Vision` resource, available on the Azure `Computer Vision` **Keys** page. | `xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx` |
-| **{ENDPOINT_URI}** | The billing endpoint value is available on the Azure `Computer Vision` Overview page.| See [gather required parameters](computer-vision-how-to-install-containers.md#gather-required-parameters) for explicit examples. |
--
-> [!IMPORTANT]
-> The `Eula`, `Billing`, and `ApiKey` options must be specified to run the container; otherwise, the container won't start. For more information, see [Billing](computer-vision-how-to-install-containers.md#billing).
-> The ApiKey value is the **Key** from the Azure `Cognitive Services` Resource keys page.
-
-## Container Docker examples
-
-The following Docker examples are for the Read OCR container.
--
-# [Version 3.2](#tab/version-3-2)
-
-### Basic example
-
-```bash
-docker run --rm -it -p 5000:5000 --memory 16g --cpus 8 \
-mcr.microsoft.com/azure-cognitive-services/vision/read:3.2-model-2022-04-30 \
-Eula=accept \
-Billing={ENDPOINT_URI} \
-ApiKey={API_KEY}
-
-```
-
-### Logging example
-
-```bash
-docker run --rm -it -p 5000:5000 --memory 16g --cpus 8 \
-mcr.microsoft.com/azure-cognitive-services/vision/read:3.2-model-2022-04-30 \
-Eula=accept \
-Billing={ENDPOINT_URI} \
-ApiKey={API_KEY} \
-Logging:Console:LogLevel:Default=Information
-```
-
-# [Version 2.0-preview](#tab/version-2)
-
-### Basic example
-
-```bash
-docker run --rm -it -p 5000:5000 --memory 16g --cpus 8 \
-mcr.microsoft.com/azure-cognitive-services/vision/read:2.0-preview \
-Eula=accept \
-Billing={ENDPOINT_URI} \
-ApiKey={API_KEY}
-
-```
-
-### Logging example
-
-```bash
-docker run --rm -it -p 5000:5000 --memory 16g --cpus 8 \
-mcr.microsoft.com/azure-cognitive-services/vision/read:2.0-preview \
-Eula=accept \
-Billing={ENDPOINT_URI} \
-ApiKey={API_KEY} \
-Logging:Console:LogLevel:Default=Information
-```
---
-## Next steps
-
-* Review [How to install and run containers](computer-vision-how-to-install-containers.md).
cognitive-services Concept Background Removal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Computer-vision/concept-background-removal.md
- Title: Background removal - Image Analysis-
-description: Learn about background removal, an operation of Image Analysis
------- Previously updated : 03/02/2023----
-# Background removal (version 4.0 preview)
-
-The Image Analysis service can divide images into multiple segments or regions to help the user identify different objects or parts of the image. Background removal creates an alpha matte that separates the foreground object from the background in an image.
-
-> [!div class="nextstepaction"]
-> [Call the Background removal API](./how-to/background-removal.md)
-
-This feature provides two possible outputs based on the customer's needs:
-- The foreground object of the image without the background. This edited image shows the foreground object and makes the background transparent, allowing the foreground to be placed on a new background.
-- An alpha matte that shows the opacity of the detected foreground object. This matte can be used to separate the foreground object from the background for further processing.
-
-This service is currently in preview, and the API may change in the future.
-
-## Background removal examples
-
-The following example images illustrate what the Image Analysis service returns when removing the background of an image and creating an alpha matte.
--
-|Original image |With background removed |Alpha matte |
-|:--:|:--:|:--:|
-| :::image type="content" source="media/background-removal/building-1.png" alt-text="Photo of a city near water."::: | :::image type="content" source="media/background-removal/building-1-result.png" alt-text="Photo of a city near water; sky is transparent."::: | :::image type="content" source="media/background-removal/building-1-matte.png" alt-text="Alpha matte of a city skyline."::: |
-| :::image type="content" source="media/background-removal/person-5.png" alt-text="Photo of a group of people using a tablet."::: | :::image type="content" source="media/background-removal/person-5-result.png" alt-text="Photo of a group of people using a tablet; background is transparent."::: | :::image type="content" source="media/background-removal/person-5-matte.png" alt-text="Alpha matte of a group of people."::: |
-| :::image type="content" source="media/background-removal/bears.png" alt-text="Photo of a group of bears in the woods."::: | :::image type="content" source="media/background-removal/bears-result.png" alt-text="Photo of a group of bears; background is transparent."::: | :::image type="content" source="media/background-removal/bears-alpha.png" alt-text="Alpha matte of a group of bears."::: |
--
-## Limitations
-
-It's important to note the limitations of background removal:
-
-* Background removal works best for categories such as people and animals, buildings and environmental structures, furniture, vehicles, food, text and graphics, and personal belongings.
-* Objects that aren't prominent in the foreground may not be identified as part of the foreground.
-* Images with thin and detailed structures, like hair or fur, may show some artifacts when overlaid on backgrounds with strong contrast to the original background.
-* The latency of the background removal operation will be higher, up to several seconds, for large images. We suggest you experiment with integrating both modes into your workflow to find the best usage for your needs (for instance, calling background removal on the original image versus calling foreground matting on a downsampled version of the image, then resizing the alpha matte to the original size and applying it to the original image).
-
-## Use the API
-
-The background removal feature is available through the [Segment](https://centraluseuap.dev.cognitive.microsoft.com/docs/services/unified-vision-apis-public-preview-2023-02-01-preview/operations/63e6b6d9217d201194bbecbd) API (`imageanalysis:segment`). You can call this API through the REST API or the Vision SDK. See the [Background removal how-to guide](./how-to/background-removal.md) for more information.
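-
-As a hedged illustration (the endpoint format, `mode` query parameter, and image URL are assumptions based on the preview API and may change; see the how-to guide for the authoritative request), a REST call could look like this:
-
-```bash
-# Request background removal; mode=foregroundMatting would return the alpha matte instead.
-# {ENDPOINT}, {API_KEY}, and the image URL are placeholders; the mode parameter is an assumption.
-curl -X POST "https://{ENDPOINT}/computervision/imageanalysis:segment?api-version=2023-02-01-preview&mode=backgroundRemoval" \
-  -H "Ocp-Apim-Subscription-Key: {API_KEY}" \
-  -H "Content-Type: application/json" \
-  -d '{"url":"https://example.com/sample.jpg"}' \
-  --output no-background.png
-```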
-
-## Next steps
-
-* [Call the background removal API](./how-to/background-removal.md)
cognitive-services Concept Brand Detection https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Computer-vision/concept-brand-detection.md
- Title: Brand detection - Computer Vision-
-description: Learn about brand and logo detection, a specialized mode of object detection, using the Computer Vision API.
------- Previously updated : 07/05/2022---
-# Brand detection
-
-Brand detection is a specialized mode of [object detection](concept-object-detection.md) that uses a database of thousands of global logos to identify commercial brands in images or video. You can use this feature, for example, to discover which brands are most popular on social media or most prevalent in media product placement.
-
-The Computer Vision service detects whether there are brand logos in a given image; if there are, it returns the brand name, a confidence score, and the coordinates of a bounding box around the logo.
-
-The built-in logo database covers popular brands in consumer electronics, clothing, and more. If you find that the brand you're looking for is not detected by the Computer Vision service, you could also try creating and training your own logo detector using the [Custom Vision](../custom-vision-service/index.yml) service.
-
-## Brand detection example
-
-The following JSON responses illustrate what Computer Vision returns when detecting brands in the example images.
-
-![A red shirt with a Microsoft label and logo on it](./Images/red-shirt-logo.jpg)
-
-```json
-"brands":[
- {
- "name":"Microsoft",
- "rectangle":{
- "x":20,
- "y":97,
- "w":62,
- "h":52
- }
- }
-]
-```
-
-In some cases, the brand detector will pick up both the logo image and the stylized brand name as two separate logos.
-
-![A gray sweatshirt with a Microsoft label and logo on it](./Images/gray-shirt-logo.jpg)
-
-```json
-"brands":[
- {
- "name":"Microsoft",
- "rectangle":{
- "x":58,
- "y":106,
- "w":55,
- "h":46
- }
- },
- {
- "name":"Microsoft",
- "rectangle":{
- "x":58,
- "y":86,
- "w":202,
- "h":63
- }
- }
-]
-```
-
-## Use the API
-
-The brand detection feature is part of the [Analyze Image](https://westcentralus.dev.cognitive.microsoft.com/docs/services/computer-vision-v3-2/operations/56f91f2e778daf14a499f21b) API. You can call this API through a native SDK or through REST calls. Include `Brands` in the **visualFeatures** query parameter. Then, when you get the full JSON response, simply parse the string for the contents of the `"brands"` section.
-
-* [Quickstart: Computer Vision REST API or client libraries](./quickstarts-sdk/image-analysis-client-library.md?pivots=programming-language-csharp)
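-
-For illustration, here's a hedged `curl` sketch of a REST call (the endpoint, key, and image URL are placeholders):
-
-```bash
-# Analyze an image for brand logos only; the "brands" array in the JSON response
-# lists each detected brand with its bounding rectangle.
-curl -X POST "https://{ENDPOINT}/vision/v3.2/analyze?visualFeatures=Brands" \
-  -H "Ocp-Apim-Subscription-Key: {API_KEY}" \
-  -H "Content-Type: application/json" \
-  -d '{"url":"https://example.com/red-shirt-logo.jpg"}'
-```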
cognitive-services Concept Categorizing Images https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Computer-vision/concept-categorizing-images.md
- Title: Image categorization - Computer Vision-
-description: Learn concepts related to the image categorization feature of the Image Analysis API.
------- Previously updated : 07/05/2022----
-# Image categorization
-
-In addition to tags and a description, Image Analysis can return the taxonomy-based categories detected in an image. Unlike tags, categories are organized in a parent/child hierarchy, and there are fewer of them (86, as opposed to thousands of tags). All category names are in English. Categorization can be done by itself or alongside the newer tags model.
-
-## The 86-category hierarchy
-
-Computer vision can categorize an image broadly or specifically, using the list of 86 categories in the following diagram. For the full taxonomy in text format, see [Category Taxonomy](category-taxonomy.md).
-
-![Grouped lists of all the categories in the category taxonomy](./Images/analyze_categories-v2.png)
-
-## Image categorization examples
-
-The following JSON response illustrates what Computer Vision returns when categorizing the example image based on its visual features.
-
-![A woman on the roof of an apartment building](./Images/woman_roof.png)
-
-```json
-{
- "categories": [
- {
- "name": "people_",
- "score": 0.81640625
- }
- ],
- "requestId": "bae7f76a-1cc7-4479-8d29-48a694974705",
- "metadata": {
- "height": 200,
- "width": 300,
- "format": "Jpeg"
- }
-}
-```
-
-The following table illustrates a typical image set and the category returned by Computer Vision for each image.
-
-| Image | Category |
-|-|-|
-| ![Four people posed together as a family](./Images/family_photo.png) | people_group |
-| ![A puppy sitting in a grassy field](./Images/cute_dog.png) | animal_dog |
-| ![A person standing on a mountain rock at sunset](./Images/mountain_vista.png) | outdoor_mountain |
-| ![A pile of bread rolls on a table](./Images/bread.png) | food_bread |
-
-## Use the API
-
-The categorization feature is part of the [Analyze Image](https://westcentralus.dev.cognitive.microsoft.com/docs/services/computer-vision-v3-2/operations/56f91f2e778daf14a499f21b) API. You can call this API through a native SDK or through REST calls. Include `Categories` in the **visualFeatures** query parameter. Then, when you get the full JSON response, simply parse the string for the contents of the `"categories"` section.
-
-* [Quickstart: Computer Vision REST API or client libraries](./quickstarts-sdk/image-analysis-client-library.md?pivots=programming-language-csharp)
-
-## Next steps
-
-Learn the related concepts of [tagging images](concept-tagging-images.md) and [describing images](concept-describing-images.md).
cognitive-services Concept Describe Images 40 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Computer-vision/concept-describe-images-40.md
- Title: Image captions - Image Analysis 4.0-
-description: Concepts related to the image captioning feature of the Image Analysis 4.0 API.
------- Previously updated : 01/24/2023----
-# Image captions (version 4.0 preview)
-Image captions in Image Analysis 4.0 (preview) are available through the **Caption** and **Dense Captions** features.
-
-Caption generates a one-sentence description of all image contents. Dense Captions provides more detail by generating one-sentence descriptions for up to 10 regions of the image, in addition to describing the whole image. Dense Captions also returns the bounding box coordinates of the described image regions. Both features use the latest groundbreaking Florence-based AI models.
-
-At this time, image captioning is available in English language only.
-
-### Gender-neutral captions
-By default, all captions contain the gender terms "man", "woman", "boy", and "girl". You have the option to replace these terms with "person" in your results and receive gender-neutral captions. You can do so by setting the optional API request parameter **gender-neutral-caption** to `true` in the request URL.
-
-> [!IMPORTANT]
-> Image captioning in Image Analysis 4.0 is only available in the following Azure data center regions at this time: East US, France Central, Korea Central, North Europe, Southeast Asia, West Europe, West US, East Asia. You must use a Computer Vision resource located in one of these regions to get results from Caption and Dense Captions features.
->
-> If you have to use a Computer Vision resource outside these regions to generate image captions, please use [Image Analysis 3.2](concept-describing-images.md) which is available in all Computer Vision regions.
--
-Try out the image captioning features quickly and easily in your browser using Vision Studio.
-
-> [!div class="nextstepaction"]
-> [Try Vision Studio](https://portal.vision.cognitive.azure.com/)
-
-## Caption and Dense Captions examples
-
-#### [Caption](#tab/image)
-
-The following JSON response illustrates what the Analysis 4.0 API returns when describing the example image based on its visual features.
-
-![Photo of a man pointing at a screen](./Media/quickstarts/presentation.png)
-
-```json
-"captions": [
- {
- "text": "a man pointing at a screen",
- "confidence": 0.4891590476036072
- }
-]
-```
-
-#### [Dense Captions](#tab/dense)
-
-The following JSON response illustrates what the Analysis 4.0 API returns when generating dense captions for the example image.
-
-![Photo of a tractor on a farm](./Images/farm.png)
-
-```json
-{
- "denseCaptionsResult": {
- "values": [
- {
- "text": "a man driving a tractor in a farm",
- "confidence": 0.535620927810669,
- "boundingBox": {
- "x": 0,
- "y": 0,
- "w": 850,
- "h": 567
- }
- },
- {
- "text": "a man driving a tractor in a field",
- "confidence": 0.5428450107574463,
- "boundingBox": {
- "x": 132,
- "y": 266,
- "w": 209,
- "h": 219
- }
- },
- {
- "text": "a blurry image of a tree",
- "confidence": 0.5139822363853455,
- "boundingBox": {
- "x": 147,
- "y": 126,
- "w": 76,
- "h": 131
- }
- },
- {
- "text": "a man riding a tractor",
- "confidence": 0.4799223840236664,
- "boundingBox": {
- "x": 206,
- "y": 264,
- "w": 64,
- "h": 97
- }
- },
- {
- "text": "a blue sky above a hill",
- "confidence": 0.35495415329933167,
- "boundingBox": {
- "x": 0,
- "y": 0,
- "w": 837,
- "h": 166
- }
- },
- {
- "text": "a tractor in a field",
- "confidence": 0.47338250279426575,
- "boundingBox": {
- "x": 0,
- "y": 243,
- "w": 838,
- "h": 311
- }
- }
- ]
- },
- "modelVersion": "2023-02-01-preview",
- "metadata": {
- "width": 850,
- "height": 567
- }
-}
-```
---
-## Use the API
-
-#### [Image captions](#tab/image)
-
-The image captioning feature is part of the [Analyze Image](https://aka.ms/vision-4-0-ref) API. Include `Caption` in the **features** query parameter. Then, when you get the full JSON response, parse the string for the contents of the `"captionResult"` section.
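-
-For illustration, here's a hedged `curl` sketch of a Caption request; the endpoint, key, image URL, and api-version value are placeholders or assumptions for the 4.0 preview:
-
-```bash
-# Request a caption; set gender-neutral-caption=true to replace gendered terms with "person".
-# {ENDPOINT}, {API_KEY}, the image URL, and the api-version value are placeholders/assumptions.
-curl -X POST "https://{ENDPOINT}/computervision/imageanalysis:analyze?api-version=2023-02-01-preview&features=caption&gender-neutral-caption=true" \
-  -H "Ocp-Apim-Subscription-Key: {API_KEY}" \
-  -H "Content-Type: application/json" \
-  -d '{"url":"https://example.com/presentation.png"}'
-```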
-
-#### [Dense captions](#tab/dense)
-
-The dense captioning feature is part of the [Analyze Image](https://aka.ms/vision-4-0-ref) API. You can call this API using REST. Include `denseCaptions` in the **features** query parameter. Then, when you get the full JSON response, parse the string for the contents of the `"denseCaptionsResult"` section.
---
-## Next steps
-
-* Learn the related concept of [object detection](concept-object-detection-40.md).
-* [Quickstart: Image Analysis REST API or client libraries](./quickstarts-sdk/image-analysis-client-library-40.md?pivots=programming-language-csharp)
-* [Call the Analyze Image API](./how-to/call-analyze-image-40.md)
cognitive-services Concept Describing Images https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Computer-vision/concept-describing-images.md
- Title: Image descriptions - Computer Vision-
-description: Concepts related to the image description feature of the Computer Vision API.
------- Previously updated : 07/04/2023----
-# Image descriptions
-
-Computer Vision can analyze an image and generate a human-readable phrase that describes its contents. The algorithm returns several descriptions based on different visual features, and each description is given a confidence score. The final output is a list of descriptions ordered from highest to lowest confidence.
-
-At this time, English is the only supported language for image description.
-
-Try out the image captioning features quickly and easily in your browser using Vision Studio.
-
-> [!div class="nextstepaction"]
-> [Try Vision Studio](https://portal.vision.cognitive.azure.com/)
-
-## Image description example
-
-The following JSON response illustrates what the Analyze API returns when describing the example image based on its visual features.
-
-![A black and white picture of buildings in Manhattan](./Images/bw_buildings.png)
-
-```json
-{
- "description":{
- "tags":[
- "outdoor",
- "city",
- "white"
- ],
- "captions":[
- {
- "text":"a city with tall buildings",
- "confidence":0.48468858003616333
- }
- ]
- },
- "requestId":"7e5e5cac-ef16-43ca-a0c4-02bd49d379e9",
- "metadata":{
- "height":300,
- "width":239,
- "format":"Png"
- },
- "modelVersion":"2021-05-01"
-}
-```
-
-## Use the API
-
-The image description feature is part of the [Analyze Image](https://westcentralus.dev.cognitive.microsoft.com/docs/services/computer-vision-v3-2/operations/56f91f2e778daf14a499f21b) API. You can call this API through a native SDK or through REST calls. Include `Description` in the **visualFeatures** query parameter. Then, when you get the full JSON response, parse the string for the contents of the `"description"` section.
-
-* [Quickstart: Image Analysis REST API or client libraries](./quickstarts-sdk/image-analysis-client-library.md?pivots=programming-language-csharp)
-
-## Next steps
-
-Learn the related concepts of [tagging images](concept-tagging-images.md) and [categorizing images](concept-categorizing-images.md).
cognitive-services Concept Detecting Adult Content https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Computer-vision/concept-detecting-adult-content.md
- Title: Adult, racy, gory content - Computer Vision-
-description: Concepts related to detecting adult content in images using the Computer Vision API.
------- Previously updated : 12/27/2022----
-# Adult content detection
-
-Computer Vision can detect adult material in images so that developers can restrict the display of these images in their software. Content flags are applied with a score between zero and one so developers can interpret the results according to their own preferences.
-
-Try out the adult content detection features quickly and easily in your browser using Vision Studio.
-
-> [!div class="nextstepaction"]
-> [Try Vision Studio](https://portal.vision.cognitive.azure.com/)
-
-## Content flag definitions
-
-The "adult" classification contains several different categories:
-- **Adult** images are explicitly sexual in nature and often show nudity and sexual acts.
-- **Racy** images are sexually suggestive in nature and often contain less sexually explicit content than images tagged as **Adult**.
-- **Gory** images show blood/gore.
-
-## Use the API
-
-You can detect adult content with the [Analyze Image 3.2](https://westcentralus.dev.cognitive.microsoft.com/docs/services/computer-vision-v3-2/operations/56f91f2e778daf14a499f21b) API. When you add the value of `Adult` to the **visualFeatures** query parameter, the API returns three boolean properties&mdash;`isAdultContent`, `isRacyContent`, and `isGoryContent`&mdash;in its JSON response. The method also returns corresponding properties&mdash;`adultScore`, `racyScore`, and `goreScore`&mdash;which represent confidence scores between zero and one for each respective category.
--- [Quickstart: Computer Vision REST API or client libraries](./quickstarts-sdk/image-analysis-client-library.md?pivots=programming-language-csharp)
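-
-For illustration, here's a hedged `curl` sketch (the endpoint, key, and image URL are placeholders; `jq` is optional and only used to show the relevant section of the response):
-
-```bash
-# Request the Adult feature and print the "adult" section, which contains the
-# isAdultContent/isRacyContent/isGoryContent flags and their confidence scores.
-curl -s -X POST "https://{ENDPOINT}/vision/v3.2/analyze?visualFeatures=Adult" \
-  -H "Ocp-Apim-Subscription-Key: {API_KEY}" \
-  -H "Content-Type: application/json" \
-  -d '{"url":"https://example.com/sample.jpg"}' | jq '.adult'
-```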
cognitive-services Concept Detecting Color Schemes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Computer-vision/concept-detecting-color-schemes.md
- Title: Color scheme detection - Computer Vision-
-description: Concepts related to detecting the color scheme in images using the Computer Vision API.
------- Previously updated : 11/17/2021----
-# Color schemes detection
-
-Computer Vision analyzes the colors in an image to provide three different attributes: the dominant foreground color, the dominant background color, and the larger set of dominant colors in the image. The set of possible returned colors is: black, blue, brown, gray, green, orange, pink, purple, red, teal, white, and yellow.
-
-Computer Vision also extracts an accent color, which represents the most vibrant color in the image, based on a combination of the dominant color set and saturation. The accent color is returned as a hexadecimal HTML color code (for example, `#00CC00`).
-
-Computer Vision also returns a boolean value indicating whether the image is a black-and-white image.
-
-## Color scheme detection examples
-
-The following example illustrates the JSON response returned by Computer Vision when it detects the color scheme of an image.
-
-> [!NOTE]
-> In this case, the example image is not a black and white image, but the dominant foreground and background colors are black, and the dominant colors for the image as a whole are black and white.
-
-![Outdoor Mountain at sunset, with a person's silhouette](./Images/mountain_vista.png)
-
-```json
-{
- "color": {
- "dominantColorForeground": "Black",
- "dominantColorBackground": "Black",
- "dominantColors": ["Black", "White"],
- "accentColor": "BB6D10",
- "isBwImg": false
- },
- "requestId": "0dc394bf-db50-4871-bdcc-13707d9405ea",
- "metadata": {
- "height": 202,
- "width": 300,
- "format": "Jpeg"
- }
-}
-```
-
-### Dominant color examples
-
-The following table shows the returned foreground, background, and image colors for each sample image.
-
-| Image | Dominant colors |
-|-|--|
-|![A white flower with a green background](./Images/flower.png)| Foreground: Black<br/>Background: White<br/>Colors: Black, White, Green|
-|![A train running through a station](./Images/train_station.png) | Foreground: Black<br/>Background: Black<br/>Colors: Black |
-
-### Accent color examples
-
- The following table shows the returned accent color, as a hexadecimal HTML color value, for each example image.
-
-| Image | Accent color |
-|-|--|
-|![A person standing on a mountain rock at sunset](./Images/mountain_vista.png) | #BB6D10 |
-|![A white flower with a green background](./Images/flower.png) | #C6A205 |
-|![A train running through a station](./Images/train_station.png) | #474A84 |
-
-### Black and white detection examples
-
-The following table shows Computer Vision's black and white evaluation in the sample images.
-
-| Image | Black & white? |
-|-|-|
-|![A black and white picture of buildings in Manhattan](./Images/bw_buildings.png) | true |
-|![A blue house and the front yard](./Images/house_yard.png) | false |
-
-## Use the API
-
-The color scheme detection feature is part of the [Analyze Image](https://westcentralus.dev.cognitive.microsoft.com/docs/services/computer-vision-v3-2/operations/56f91f2e778daf14a499f21b) API. You can call this API through a native SDK or through REST calls. Include `Color` in the **visualFeatures** query parameter. Then, when you get the full JSON response, simply parse the string for the contents of the `"color"` section.
-
-* [Quickstart: Computer Vision REST API or client libraries](./quickstarts-sdk/image-analysis-client-library.md?pivots=programming-language-csharp)
cognitive-services Concept Detecting Domain Content https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Computer-vision/concept-detecting-domain-content.md
- Title: Domain-specific content - Computer Vision
-description: Learn how to specify an image categorization domain to return more detailed information about an image.
- Previously updated : 02/08/2019
-# Domain-specific content detection
-
-In addition to tagging and high-level categorization, Computer Vision also supports further domain-specific analysis using models that have been trained on specialized data.
-
-There are two ways to use the domain-specific models: by themselves (scoped analysis) or as an enhancement to the categorization feature.
-
-### Scoped analysis
-
-You can analyze an image using only the chosen domain-specific model by calling the [Models/\<model\>/Analyze](https://westcentralus.dev.cognitive.microsoft.com/docs/services/computer-vision-v3-2/operations/56f91f2e778daf14a499f21b) API.
-
-The following is a sample JSON response returned by the **models/celebrities/analyze** API for the given image:
-
-![Satya Nadella standing, smiling](./images/satya.jpeg)
-
-```json
-{
- "result": {
- "celebrities": [{
- "faceRectangle": {
- "top": 391,
- "left": 318,
- "width": 184,
- "height": 184
- },
- "name": "Satya Nadella",
- "confidence": 0.99999856948852539
- }]
- },
- "requestId": "8217262a-1a90-4498-a242-68376a4b956b",
- "metadata": {
- "width": 800,
- "height": 1200,
- "format": "Jpeg"
- }
-}
-```
-
-### Enhanced categorization analysis
-
-You can also use domain-specific models to supplement general image analysis. You do this as part of [high-level categorization](concept-categorizing-images.md) by specifying domain-specific models in the *details* parameter of the [Analyze](https://westcentralus.dev.cognitive.microsoft.com/docs/services/computer-vision-v3-2/operations/56f91f2e778daf14a499f21b) API call.
-
-In this case, the 86-category taxonomy classifier is called first. If any of the detected categories have a matching domain-specific model, the image is passed through that model as well and the results are added.
-
-The following JSON response shows how domain-specific analysis can be included as the `detail` node in a broader categorization analysis.
-
-```json
-"categories":[
- {
- "name":"abstract_",
- "score":0.00390625
- },
- {
- "name":"people_",
- "score":0.83984375,
- "detail":{
- "celebrities":[
- {
- "name":"Satya Nadella",
- "faceRectangle":{
- "left":597,
- "top":162,
- "width":248,
- "height":248
- },
- "confidence":0.999028444
- }
- ],
- "landmarks":[
- {
- "name":"Forbidden City",
- "confidence":0.9978346
- }
- ]
- }
- }
-]
-```
-
-## List the domain-specific models
-
-Currently, Computer Vision supports the following domain-specific models:
-
-| Name | Description |
-||-|
-| celebrities | Celebrity recognition, supported for images classified in the `people_` category |
-| landmarks | Landmark recognition, supported for images classified in the `outdoor_` or `building_` categories |
-
-Calling the [Models](https://westcentralus.dev.cognitive.microsoft.com/docs/services/computer-vision-v3-2/operations/56f91f2e778daf14a499f20e) API will return this information along with the categories to which each model can apply:
-
-```json
-{
- "models":[
- {
- "name":"celebrities",
- "categories":[
- "people_",
-            "人_",
- "pessoas_",
- "gente_"
- ]
- },
- {
- "name":"landmarks",
- "categories":[
- "outdoor_",
- "户外_",
- "屋外_",
- "aoarlivre_",
- "alairelibre_",
- "building_",
-            "建筑_",
- "建物_",
- "edifício_"
- ]
- }
- ]
-}
-```
-
-## Use the API
-
-This feature is available through the [Analyze Image](https://westcentralus.dev.cognitive.microsoft.com/docs/services/computer-vision-v3-2/operations/56f91f2e778daf14a499f21b) API. You can call this API through a native SDK or through REST calls. Include `Celebrities` or `Landmarks` in the **details** query parameter. Then, when you get the full JSON response, simply parse the string for the contents of the `"details"` section.
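As an illustrative sketch, the following Python snippet runs enhanced categorization with the `landmarks` domain model by setting the **details** query parameter and then walks the `detail` node shown in the earlier example. Endpoint, key, and image URL are placeholders.

```python
# Minimal sketch: enhanced categorization with the landmarks domain model.
import requests

endpoint = "https://<your-resource>.cognitiveservices.azure.com"  # placeholder
key = "<your-key>"  # placeholder

resp = requests.post(
    f"{endpoint}/vision/v3.2/analyze",
    params={"visualFeatures": "Categories", "details": "Landmarks"},
    headers={"Ocp-Apim-Subscription-Key": key, "Content-Type": "application/json"},
    json={"url": "https://<your-images>/landmark-photo.jpg"},  # placeholder image URL
)
resp.raise_for_status()
for category in resp.json().get("categories", []):
    for landmark in category.get("detail", {}).get("landmarks", []):
        print(category["name"], "->", landmark["name"], landmark["confidence"])
```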
-
-* [Quickstart: Computer Vision REST API or client libraries](./quickstarts-sdk/image-analysis-client-library.md?pivots=programming-language-csharp)
cognitive-services Concept Detecting Faces https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Computer-vision/concept-detecting-faces.md
- Title: Face detection - Computer Vision
-description: Learn concepts related to the face detection feature of the Computer Vision API.
- Previously updated : 12/27/2022
-# Face detection with Image Analysis
-
-Image Analysis can detect human faces within an image and generate rectangle coordinates for each detected face.
-
-> [!NOTE]
-> This feature is also offered by the dedicated [Face](./overview-identity.md) service. Use this alternative for more detailed face analysis, including face identification and head pose detection.
--
-Try out the face detection features quickly and easily in your browser using Vision Studio.
-
-> [!div class="nextstepaction"]
-> [Try Vision Studio](https://portal.vision.cognitive.azure.com/)
-
-## Face detection examples
-
-The following example demonstrates the JSON response returned by Analyze API for an image containing a single human face.
-
-![Vision Analyze Woman Roof Face](./Images/woman_roof_face.png)
-
-```json
-{
- "faces": [
- {
- "age": 23,
- "gender": "Female",
- "faceRectangle": {
- "top": 45,
- "left": 194,
- "width": 44,
- "height": 44
- }
- }
- ],
- "requestId": "8439ba87-de65-441b-a0f1-c85913157ecd",
- "metadata": {
- "height": 200,
- "width": 300,
- "format": "Png"
- }
-}
-```
-
-The next example demonstrates the JSON response returned for an image containing multiple human faces.
-
-![Vision Analyze Family Photo Face](./Images/family_photo_face.png)
-
-```json
-{
- "faces": [
- {
- "age": 11,
- "gender": "Male",
- "faceRectangle": {
- "top": 62,
- "left": 22,
- "width": 45,
- "height": 45
- }
- },
- {
- "age": 11,
- "gender": "Female",
- "faceRectangle": {
- "top": 127,
- "left": 240,
- "width": 42,
- "height": 42
- }
- },
- {
- "age": 37,
- "gender": "Female",
- "faceRectangle": {
- "top": 55,
- "left": 200,
- "width": 41,
- "height": 41
- }
- },
- {
- "age": 41,
- "gender": "Male",
- "faceRectangle": {
- "top": 45,
- "left": 103,
- "width": 39,
- "height": 39
- }
- }
- ],
- "requestId": "3a383cbe-1a05-4104-9ce7-1b5cf352b239",
- "metadata": {
- "height": 230,
- "width": 300,
- "format": "Png"
- }
-}
-```
-
-## Use the API
-
-The face detection feature is part of the [Analyze Image 3.2](https://westcentralus.dev.cognitive.microsoft.com/docs/services/computer-vision-v3-2/operations/56f91f2e778daf14a499f21b) API. You can call this API through a native SDK or through REST calls. Include `Faces` in the **visualFeatures** query parameter. Then, when you get the full JSON response, simply parse the string for the contents of the `"faces"` section.
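If it helps to see the parsing step, here's a minimal Python sketch that requests the `Faces` feature and prints each returned face rectangle. Endpoint, key, and image URL are placeholders.

```python
# Minimal sketch: request the Faces feature and list each face rectangle.
import requests

endpoint = "https://<your-resource>.cognitiveservices.azure.com"  # placeholder
key = "<your-key>"  # placeholder

resp = requests.post(
    f"{endpoint}/vision/v3.2/analyze",
    params={"visualFeatures": "Faces"},
    headers={"Ocp-Apim-Subscription-Key": key, "Content-Type": "application/json"},
    json={"url": "https://<your-images>/family_photo.png"},  # placeholder image URL
)
resp.raise_for_status()
for face in resp.json().get("faces", []):
    r = face["faceRectangle"]
    print(f"face at left={r['left']}, top={r['top']}, size {r['width']}x{r['height']}")
```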
-
-* [Quickstart: Computer Vision REST API or client libraries](./quickstarts-sdk/image-analysis-client-library.md?pivots=programming-language-csharp)
cognitive-services Concept Detecting Image Types https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Computer-vision/concept-detecting-image-types.md
- Title: Image type detection - Computer Vision
-description: Concepts related to the image type detection feature of the Computer Vision API.
- Previously updated : 03/11/2019
-# Image type detection
-
-With the [Analyze Image](https://westcentralus.dev.cognitive.microsoft.com/docs/services/computer-vision-v3-2/operations/56f91f2e778daf14a499f21b) API, Computer Vision can analyze the content type of images, indicating whether an image is clip art or a line drawing.
-
-## Detecting clip art
-
-Computer Vision analyzes an image and rates the likelihood of the image being clip art on a scale of 0 to 3, as described in the following table.
-
-| Value | Meaning |
-|-||
-| 0 | Non-clip-art |
-| 1 | Ambiguous |
-| 2 | Normal-clip-art |
-| 3 | Good-clip-art |
-
-### Clip art detection examples
-
-The following JSON responses illustrate what Computer Vision returns when rating the likelihood of the example images being clip art.
-
-![A clip art image of a slice of cheese](./Images/cheese_clipart.png)
-
-```json
-{
- "imageType": {
- "clipArtType": 3,
- "lineDrawingType": 0
- },
- "requestId": "88c48d8c-80f3-449f-878f-6947f3b35a27",
- "metadata": {
- "height": 225,
- "width": 300,
- "format": "Jpeg"
- }
-}
-```
-
-![A blue house and the front yard](./Images/house_yard.png)
-
-```json
-{
- "imageType": {
- "clipArtType": 0,
- "lineDrawingType": 0
- },
- "requestId": "a9c8490a-2740-4e04-923b-e8f4830d0e47",
- "metadata": {
- "height": 200,
- "width": 300,
- "format": "Jpeg"
- }
-}
-```
-
-## Detecting line drawings
-
-Computer Vision analyzes an image and returns a boolean value indicating whether the image is a line drawing.
-
-### Line drawing detection examples
-
-The following JSON responses illustrate what Computer Vision returns when indicating whether the example images are line drawings.
-
-![A line drawing image of a lion](./Images/lion_drawing.png)
-
-```json
-{
- "imageType": {
- "clipArtType": 2,
- "lineDrawingType": 1
- },
- "requestId": "6442dc22-476a-41c4-aa3d-9ceb15172f01",
- "metadata": {
- "height": 268,
- "width": 300,
- "format": "Jpeg"
- }
-}
-```
-
-![A white flower with a green background](./Images/flower.png)
-
-```json
-{
- "imageType": {
- "clipArtType": 0,
- "lineDrawingType": 0
- },
- "requestId": "98437d65-1b05-4ab7-b439-7098b5dfdcbf",
- "metadata": {
- "height": 200,
- "width": 300,
- "format": "Jpeg"
- }
-}
-```
-
-## Use the API
-
-The image type detection feature is part of the [Analyze Image](https://westcentralus.dev.cognitive.microsoft.com/docs/services/computer-vision-v3-2/operations/56f91f2e778daf14a499f21b) API. You can call this API through a native SDK or through REST calls. Include `ImageType` in the **visualFeatures** query parameter. Then, when you get the full JSON response, simply parse the string for the contents of the `"imageType"` section.
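The following minimal Python sketch requests the `ImageType` feature and maps the 0-3 clip-art rating from the table above to a label. Endpoint, key, and image URL are placeholders.

```python
# Minimal sketch: request the ImageType feature and interpret the ratings.
import requests

CLIP_ART_LABELS = {0: "non-clip-art", 1: "ambiguous", 2: "normal-clip-art", 3: "good-clip-art"}

endpoint = "https://<your-resource>.cognitiveservices.azure.com"  # placeholder
key = "<your-key>"  # placeholder

resp = requests.post(
    f"{endpoint}/vision/v3.2/analyze",
    params={"visualFeatures": "ImageType"},
    headers={"Ocp-Apim-Subscription-Key": key, "Content-Type": "application/json"},
    json={"url": "https://<your-images>/cheese_clipart.png"},  # placeholder image URL
)
resp.raise_for_status()
image_type = resp.json()["imageType"]
print("clip art:", CLIP_ART_LABELS[image_type["clipArtType"]])
print("line drawing:", bool(image_type["lineDrawingType"]))
```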
-
-* [Quickstart: Computer Vision REST API or client libraries](./quickstarts-sdk/image-analysis-client-library.md?pivots=programming-language-csharp)
cognitive-services Concept Face Detection https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Computer-vision/concept-face-detection.md
- Title: "Face detection and attributes - Face"-
-description: Learn more about face detection; face detection is the action of locating human faces in an image and optionally returning different kinds of face-related data.
------- Previously updated : 07/04/2023---
-# Face detection and attributes
--
-This article explains the concepts of face detection and face attribute data. Face detection is the process of locating human faces in an image and optionally returning different kinds of face-related data.
-
-You use the [Face - Detect](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f30395236) API to detect faces in an image. To get started using the REST API or a client SDK, follow a [quickstart](./quickstarts-sdk/identity-client-library.md). Or, for a more in-depth guide, see [Call the detect API](./how-to/identity-detect-faces.md).
-
-## Face rectangle
-
-Each detected face corresponds to a `faceRectangle` field in the response. This is a set of pixel coordinates for the left, top, width, and height of the detected face. Using these coordinates, you can get the location and size of the face. In the API response, faces are listed in size order from largest to smallest.
-
-Try out the capabilities of face detection quickly and easily using Vision Studio.
-> [!div class="nextstepaction"]
-> [Try Vision Studio](https://portal.vision.cognitive.azure.com/)
-
-## Face ID
-
-The face ID is a unique identifier string for each detected face in an image. Face ID requires limited access approval, which you can apply for by filling out the [intake form](https://aka.ms/facerecognition). For more information, see the Face [limited access page](/legal/cognitive-services/computer-vision/limited-access-identity?context=%2Fazure%2Fcognitive-services%2Fcomputer-vision%2Fcontext%2Fcontext). You can request a face ID in your [Face - Detect](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f30395236) API call.
-
-## Face landmarks
-
-Face landmarks are a set of easy-to-find points on a face, such as the pupils or the tip of the nose. By default, there are 27 predefined landmark points. The following figure shows all 27 points:
-
-![A face diagram with all 27 landmarks labeled](./media/landmarks.1.jpg)
-
-The coordinates of the points are returned in units of pixels.
-
-The Detection_03 model currently has the most accurate landmark detection. The eye and pupil landmarks it returns are precise enough to enable gaze tracking of the face.
-
-## Attributes
--
-Attributes are a set of features that can optionally be detected by the [Face - Detect](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f30395236) API. The following attributes can be detected:
-
-* **Accessories**. Indicates whether the given face has accessories. This attribute returns possible accessories including headwear, glasses, and mask, with confidence score between zero and one for each accessory.
-* **Age**. The estimated age in years of a particular face.
-* **Blur**. The blurriness of the face in the image. This attribute returns a value between zero and one and an informal rating of low, medium, or high.
-* **Emotion**. A list of emotions with their detection confidence for the given face. Confidence scores are normalized, and the scores across all emotions add up to one. The emotions returned are happiness, sadness, neutral, anger, contempt, disgust, surprise, and fear.
-* **Exposure**. The exposure of the face in the image. This attribute returns a value between zero and one and an informal rating of underExposure, goodExposure, or overExposure.
-* **Facial hair**. The estimated facial hair presence and the length for the given face.
-* **Gender**. The estimated gender of the given face. Possible values are male, female, and genderless.
-* **Glasses**. Whether the given face has eyeglasses. Possible values are NoGlasses, ReadingGlasses, Sunglasses, and Swimming Goggles.
-* **Hair**. The hair type of the face. This attribute shows whether the hair is visible, whether baldness is detected, and what hair colors are detected.
-* **Head pose**. The face's orientation in 3D space. This attribute is described by the roll, yaw, and pitch angles in degrees, which are defined according to the [right-hand rule](https://en.wikipedia.org/wiki/Right-hand_rule). The order of three angles is roll-yaw-pitch, and each angle's value range is from -180 degrees to 180 degrees. 3D orientation of the face is estimated by the roll, yaw, and pitch angles in order. See the following diagram for angle mappings:
-
- ![A head with the pitch, roll, and yaw axes labeled](./media/headpose.1.jpg)
-
- For more information on how to use these values, see the [Head pose how-to guide](./how-to/use-headpose.md).
-* **Makeup**. Indicates whether the face has makeup. This attribute returns a Boolean value for eyeMakeup and lipMakeup.
-* **Mask**. Indicates whether the face is wearing a mask. This attribute returns a possible mask type, and a Boolean value to indicate whether nose and mouth are covered.
-* **Noise**. The visual noise detected in the face image. This attribute returns a value between zero and one and an informal rating of low, medium, or high.
-* **Occlusion**. Indicates whether there are objects blocking parts of the face. This attribute returns a Boolean value for eyeOccluded, foreheadOccluded, and mouthOccluded.
-* **Smile**. The smile expression of the given face. This value is between zero for no smile and one for a clear smile.
-* **QualityForRecognition**. Indicates whether the image is of sufficient quality to attempt face recognition on. The value is an informal rating of low, medium, or high. Only "high" quality images are recommended for person enrollment, and quality at or above "medium" is recommended for identification scenarios.
- >[!NOTE]
- > The availability of each attribute depends on the detection model specified. QualityForRecognition attribute also depends on the recognition model, as it is currently only available when using a combination of detection model detection_01 or detection_03, and recognition model recognition_03 or recognition_04.
-
-> [!IMPORTANT]
-> Face attributes are predicted through the use of statistical algorithms. They might not always be accurate. Use caution when you make decisions based on attribute data.
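As a hedged sketch of requesting attributes, the following Python snippet calls the Face - Detect REST operation with a small attribute set. The attribute list and model choices are illustrative only; as noted above, attribute availability depends on the detection and recognition models, and the endpoint, key, and image URL are placeholders.

```python
# Minimal sketch: detect faces and request a few attributes (illustrative choices).
import requests

endpoint = "https://<your-face-resource>.cognitiveservices.azure.com"  # placeholder
key = "<your-key>"  # placeholder

resp = requests.post(
    f"{endpoint}/face/v1.0/detect",
    params={
        "returnFaceId": "false",                 # face IDs require Limited Access approval
        "detectionModel": "detection_03",
        "recognitionModel": "recognition_04",
        "returnFaceAttributes": "headPose,mask,qualityForRecognition",
    },
    headers={"Ocp-Apim-Subscription-Key": key, "Content-Type": "application/json"},
    json={"url": "https://<your-images>/person.jpg"},  # placeholder image URL
)
resp.raise_for_status()
for face in resp.json():
    attrs = face.get("faceAttributes", {})
    print(face["faceRectangle"], attrs.get("qualityForRecognition"), attrs.get("headPose"))
```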
-
-## Input data
-
-Use the following tips to make sure that your input images give the most accurate detection results:
-
-* The supported input image formats are JPEG, PNG, GIF (the first frame), BMP.
-* The image file size should be no larger than 6 MB.
-* The minimum detectable face size is 36 x 36 pixels in an image that is no larger than 1920 x 1080 pixels. Images larger than 1920 x 1080 pixels have a proportionally larger minimum face size. Reducing the face size might cause some faces not to be detected, even if they're larger than the minimum detectable face size.
-* The maximum detectable face size is 4096 x 4096 pixels.
-* Faces outside the size range of 36 x 36 to 4096 x 4096 pixels will not be detected.
-* Some faces might not be recognized because of technical challenges, such as:
- * Images with extreme lighting, for example, severe backlighting.
- * Obstructions that block one or both eyes.
- * Differences in hair type or facial hair.
- * Changes in facial appearance because of age.
- * Extreme facial expressions.
-
-### Input data with orientation information:
-
-Some input images with JPEG format might contain orientation information in Exchangeable image file format (EXIF) metadata. If EXIF orientation is available, images are automatically rotated to the correct orientation before sending for face detection. The face rectangle, landmarks, and head pose for each detected face are estimated based on the rotated image.
-
-To properly display the face rectangle and landmarks, you need to make sure the image is rotated correctly. Most of the image visualization tools automatically rotate the image according to its EXIF orientation by default. For other tools, you might need to apply the rotation using your own code. The following examples show a face rectangle on a rotated image (left) and a non-rotated image (right).
-
-![Two face images with and without rotation](./media/image-rotation.png)
-
-### Video input
-
-If you're detecting faces from a video feed, you may be able to improve performance by adjusting certain settings on your video camera:
-
-* **Smoothing**: Many video cameras apply a smoothing effect. You should turn this off if you can because it creates a blur between frames and reduces clarity.
-* **Shutter Speed**: A faster shutter speed reduces the amount of motion between frames and makes each frame clearer. We recommend shutter speeds of 1/60 second or faster.
-* **Shutter Angle**: Some cameras specify shutter angle instead of shutter speed. You should use a lower shutter angle if possible. This results in clearer video frames.
-
- >[!NOTE]
- > A camera with a lower shutter angle will receive less light in each frame, so the image will be darker. You'll need to determine the right level to use.
-
-## Next steps
-
-Now that you're familiar with face detection concepts, learn how to write a script that detects faces in a given image.
-
-* [Call the detect API](./how-to/identity-detect-faces.md)
cognitive-services Concept Face Recognition https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Computer-vision/concept-face-recognition.md
- Title: "Face recognition - Face"-
-description: Learn the concept of Face recognition, its related operations, and the underlying data structures.
------- Previously updated : 12/27/2022---
-# Face recognition
-
-This article explains the concept of Face recognition, its related operations, and the underlying data structures. Broadly, face recognition is the act of verifying or identifying individuals by their faces. Face recognition is important in implementing the identity verification scenario, which enterprises and apps can use to verify that a (remote) user is who they claim to be.
-
-You can try out the capabilities of face recognition quickly and easily using Vision Studio.
-> [!div class="nextstepaction"]
-> [Try Vision Studio](https://portal.vision.cognitive.azure.com/)
--
-## Face recognition operations
--
-This section details how the underlying operations use the related data structures (described later in this article) to identify and verify a face.
-
-### PersonGroup creation and training
-
-You need to create a [PersonGroup](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f30395244) or [LargePersonGroup](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/599acdee6ac60f11b48b5a9d) to store the set of people to match against. PersonGroups hold [Person](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f3039523c) objects, which each represent an individual person and hold a set of face data belonging to that person.
-
-The [Train](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f30395249) operation prepares the data set to be used in face data comparisons.
-
-### Identification
-
-The [Identify](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f30395239) operation takes one or several source face IDs (from a DetectedFace or PersistedFace object) and a PersonGroup or LargePersonGroup. It returns a list of the Person objects that each source face might belong to. Returned Person objects are wrapped as Candidate objects, which have a prediction confidence value.
-
-### Verification
-
-The [Verify](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f3039523a) operation takes a single face ID (from a DetectedFace or PersistedFace object) and a Person object. It determines whether the face belongs to that same person. Verification is one-to-one matching and can be used as a final check on the results from the Identify API call. However, you can optionally pass in the PersonGroup to which the candidate Person belongs to improve the API performance.
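A minimal sketch of the Identify call is shown below. It assumes you already have a face ID from a Detect call made with `returnFaceId=true` (which requires Limited Access approval) and a trained PersonGroup; the endpoint, key, group name, and face ID are placeholders.

```python
# Minimal sketch: identify a detected face against a trained PersonGroup.
import requests

endpoint = "https://<your-face-resource>.cognitiveservices.azure.com"  # placeholder
key = "<your-key>"  # placeholder

resp = requests.post(
    f"{endpoint}/face/v1.0/identify",
    headers={"Ocp-Apim-Subscription-Key": key, "Content-Type": "application/json"},
    json={
        "faceIds": ["<face-id-from-detect>"],       # placeholder
        "personGroupId": "<your-person-group>",     # placeholder
        "maxNumOfCandidatesReturned": 1,
        "confidenceThreshold": 0.7,
    },
)
resp.raise_for_status()
for result in resp.json():
    for candidate in result["candidates"]:
        print(result["faceId"], "->", candidate["personId"], candidate["confidence"])
```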
-
-## Related data structures
-
-The recognition operations use mainly the following data structures. These objects are stored in the cloud and can be referenced by their ID strings. ID strings are always unique within a subscription, but name fields may be duplicated.
-
-|Name|Description|
-|:--|:--|
-|DetectedFace| This single face representation is retrieved by the [face detection](./how-to/identity-detect-faces.md) operation. Its ID expires 24 hours after it's created.|
-|PersistedFace| When DetectedFace objects are added to a group, such as FaceList or Person, they become PersistedFace objects. They can be [retrieved](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f3039524c) at any time and don't expire.|
-|[FaceList](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f3039524b) or [LargeFaceList](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/5a157b68d2de3616c086f2cc)| This data structure is an assorted list of PersistedFace objects. A FaceList has a unique ID, a name string, and optionally a user data string.|
-|[Person](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f3039523c)| This data structure is a list of PersistedFace objects that belong to the same person. It has a unique ID, a name string, and optionally a user data string.|
-|[PersonGroup](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f30395244) or [LargePersonGroup](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/599acdee6ac60f11b48b5a9d)| This data structure is an assorted list of Person objects. It has a unique ID, a name string, and optionally a user data string. A PersonGroup must be [trained](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f30395249) before it can be used in recognition operations.|
-|PersonDirectory | This data structure is like **LargePersonGroup** but offers additional storage capacity and other added features. For more information, see [Use the PersonDirectory structure (preview)](./how-to/use-persondirectory.md).|
--
-## Input data
-
-Use the following tips to ensure that your input images give the most accurate recognition results:
-
-* The supported input image formats are JPEG, PNG, GIF (the first frame), BMP.
-* Image file size should be no larger than 6 MB.
-* When you create Person objects, use photos that feature different kinds of angles and lighting.
-* Some faces might not be recognized because of technical challenges, such as:
- * Images with extreme lighting, for example, severe backlighting.
- * Obstructions that block one or both eyes.
- * Differences in hair type or facial hair.
- * Changes in facial appearance because of age.
- * Extreme facial expressions.
-* You can utilize the qualityForRecognition attribute in the [face detection](./how-to/identity-detect-faces.md) operation when using applicable detection models as a general guideline of whether the image is likely of sufficient quality to attempt face recognition on. Only "high" quality images are recommended for person enrollment and quality at or above "medium" is recommended for identification scenarios.
-
-## Next steps
-
-Now that you're familiar with face recognition concepts, write a script that identifies faces against a trained PersonGroup.
-
-* [Face quickstart](./quickstarts-sdk/identity-client-library.md)
cognitive-services Concept Generate Thumbnails 40 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Computer-vision/concept-generate-thumbnails-40.md
- Title: Smart-cropped thumbnails - Image Analysis 4.0
-description: Concepts related to generating thumbnails for images using the Image Analysis 4.0 API.
- Previously updated : 01/24/2023
-# Smart-cropped thumbnails (version 4.0 preview)
-
-A thumbnail is a reduced-size representation of an image. Thumbnails are used to represent images and other data in a more economical, layout-friendly way. The Computer Vision API uses smart cropping to create intuitive image thumbnails that include the most important regions of an image with priority given to any detected faces.
-
-The Computer Vision smart-cropping utility takes one or more aspect ratios in the range [0.75, 1.80] and returns the bounding box coordinates (in pixels) of the region(s) identified. Your app can then crop and return the image using those coordinates.
-
-> [!IMPORTANT]
-> This feature uses face detection to help determine important regions in the image. The detection does not involve distinguishing one face from another face, predicting or classifying facial attributes, or creating a facial template (a unique set of numbers generated from an image that represents the distinctive features of a face).
-
-## Examples
-
-The generated bounding box can vary widely depending on what you specify for aspect ratio, as shown in the following images.
-
-| Aspect ratio | Bounding box |
-|-|--|
-| original | :::image type="content" source="Images/cropped-original.png" alt-text="Photo of a man with a dog at a table."::: |
-| 0.75 | :::image type="content" source="Images/cropped-075-bb.png" alt-text="Photo of a man with a dog at a table. A 0.75 ratio bounding box is drawn."::: |
-| 1.00 | :::image type="content" source="Images/cropped-1-0-bb.png" alt-text="Photo of a man with a dog at a table. A 1.00 ratio bounding box is drawn."::: |
-| 1.50 | :::image type="content" source="Images/cropped-150-bb.png" alt-text="Photo of a man with a dog at a table. A 1.50 ratio bounding box is drawn."::: |
--
-## Use the API
-
-The smart cropping feature is available through the [Analyze Image API](https://aka.ms/vision-4-0-ref). Include `SmartCrops` in the **features** query parameter. Also include a **smartcrops-aspect-ratios** query parameter, and set it to a decimal value for the aspect ratio you want (defined as width / height) in the range [0.75, 1.80]. Multiple aspect ratio values should be comma-separated. If no aspect ratio value is provided, the API returns a crop with an aspect ratio that best preserves the image's most important region.
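The following Python sketch shows one way to make that call. The `api-version` value and the `smartCropsResult` field name are assumptions based on the 4.0 preview reference and may change; endpoint, key, and image URL are placeholders.

```python
# Minimal sketch: request smart crops from the Image Analysis 4.0 preview endpoint.
import requests

endpoint = "https://<your-resource>.cognitiveservices.azure.com"  # placeholder
key = "<your-key>"  # placeholder

resp = requests.post(
    f"{endpoint}/computervision/imageanalysis:analyze",
    params={
        "api-version": "2023-02-01-preview",   # assumption; use the current preview version
        "features": "smartCrops",
        "smartcrops-aspect-ratios": "0.75,1.5",
    },
    headers={"Ocp-Apim-Subscription-Key": key, "Content-Type": "application/json"},
    json={"url": "https://<your-images>/photo.jpg"},  # placeholder image URL
)
resp.raise_for_status()
for crop in resp.json()["smartCropsResult"]["values"]:   # field name is an assumption
    print(crop["aspectRatio"], crop["boundingBox"])
```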
-
-## Next steps
-
-* [Call the Analyze Image API](./how-to/call-analyze-image-40.md)
cognitive-services Concept Generating Thumbnails https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Computer-vision/concept-generating-thumbnails.md
- Title: Smart-cropped thumbnails - Computer Vision
-description: Concepts related to generating thumbnails for images using the Computer Vision API.
- Previously updated : 11/09/2022
-# Smart-cropped thumbnails
-
-A thumbnail is a reduced-size representation of an image. Thumbnails are used to represent images and other data in a more economical, layout-friendly way. The Computer Vision API uses smart cropping to create intuitive image thumbnails that include the most important regions of an image with priority given to any detected faces.
-
-The Computer Vision thumbnail generation algorithm works as follows:
-
-1. Remove distracting elements from the image and identify the _area of interest_&mdash;the area of the image in which the main object(s) appears.
-1. Crop the image based on the identified _area of interest_.
-1. Change the aspect ratio to fit the target thumbnail dimensions.
-
-## Area of interest
-
-When you upload an image, the Computer Vision API analyzes it to determine the *area of interest*. It can then use this region to determine how to crop the image. The cropping operation, however, will always match the desired aspect ratio if one is specified.
-
-You can also get the raw bounding box coordinates of this same *area of interest* by calling the **areaOfInterest** API instead. You can then use this information to modify the original image however you wish.
-
-## Examples
-
-The generated thumbnail can vary widely depending on what you specify for height, width, and smart cropping, as shown in the following image.
-
-![A mountain image next to various cropping configurations](./Images/thumbnail-demo.png)
-
-The following table illustrates thumbnails defined by smart-cropping for the example images. The thumbnails were generated for a specified target height and width of 50 pixels, with smart cropping enabled.
-
-| Image | Thumbnail |
-|-|--|
-|![Outdoor Mountain at sunset, with a person's silhouette](./Images/mountain_vista.png) | ![Thumbnail of Outdoor Mountain at sunset, with a person's silhouette](./Images/mountain_vista_thumbnail.png) |
-|![A white flower with a green background](./Images/flower.png) | ![Vision Analyze Flower thumbnail](./Images/flower_thumbnail.png) |
-|![A woman on the roof of an apartment building](./Images/woman_roof.png) | ![thumbnail of a woman on the roof of an apartment building](./Images/woman_roof_thumbnail.png) |
--
-## Use the API
--
-The generate thumbnail feature is available through the [Get Thumbnail](https://westus.dev.cognitive.microsoft.com/docs/services/computer-vision-v3-2/operations/56f91f2e778daf14a499f20c) and [Get Area of Interest](https://westus.dev.cognitive.microsoft.com/docs/services/computer-vision-v3-2/operations/b156d0f5e11e492d9f64418d) APIs. You can call this API through a native SDK or through REST calls.
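As a minimal sketch, the following Python snippet calls the 3.2 Get Thumbnail operation with smart cropping enabled and saves the binary image it returns. Endpoint, key, and image URL are placeholders.

```python
# Minimal sketch: generate a 50x50 smart-cropped thumbnail and save it to disk.
import requests

endpoint = "https://<your-resource>.cognitiveservices.azure.com"  # placeholder
key = "<your-key>"  # placeholder

resp = requests.post(
    f"{endpoint}/vision/v3.2/generateThumbnail",
    params={"width": 50, "height": 50, "smartCropping": "true"},
    headers={"Ocp-Apim-Subscription-Key": key, "Content-Type": "application/json"},
    json={"url": "https://<your-images>/mountain_vista.png"},  # placeholder image URL
)
resp.raise_for_status()
with open("thumbnail.png", "wb") as f:
    f.write(resp.content)   # the response body is the thumbnail image itself
```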
--
-* [Generate a thumbnail (how-to)](./how-to/generate-thumbnail.md)
cognitive-services Concept Image Retrieval https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Computer-vision/concept-image-retrieval.md
- Title: Image Retrieval concepts - Image Analysis 4.0
-description: Concepts related to image vectorization using the Image Analysis 4.0 API.
- Previously updated : 03/06/2023
-# Image retrieval (version 4.0 preview)
-
-Image retrieval is the process of searching a large collection of images to find those that are most similar to a given query image. Image retrieval systems have traditionally used features extracted from the images, such as content labels, tags, and image descriptors, to compare images and rank them by similarity. However, vector similarity search is gaining more popularity due to a number of benefits over traditional keyword-based search and is becoming a vital component in popular content search services.
-
-## What's the difference between vector search and keyword-based search?
-
-Keyword search is the most basic and traditional method of information retrieval. In this approach, the search engine looks for the exact match of the keywords or phrases entered by the user in the search query and compares with labels and tags provided for the images. The search engine then returns images that contain those exact keywords as content tags and image labels. Keyword search relies heavily on the user's ability to input relevant and specific search terms.
-
-Vector search, on the other hand, searches large collections of vectors in high-dimensional space to find vectors that are similar to a given query. Vector search looks for semantic similarities by capturing the context and meaning of the search query. This approach is often more efficient than traditional image retrieval techniques, as it can reduce search space and improve the accuracy of the results.
-
-## Business Applications
-
-Image retrieval has a variety of applications in different fields, including:
-- Digital asset management: Image retrieval can be used to manage large collections of digital images, such as in museums, archives, or online galleries. Users can search for images based on visual features and retrieve the images that match their criteria.
-- Medical image retrieval: Image retrieval can be used in medical imaging to search for images based on their diagnostic features or disease patterns. This can help doctors or researchers to identify similar cases or track disease progression.
-- Security and surveillance: Image retrieval can be used in security and surveillance systems to search for images based on specific features or patterns, such as people and object tracking or threat detection.
-- Forensic image retrieval: Image retrieval can be used in forensic investigations to search for images based on their visual content or metadata, such as in cases of cyber-crime.
-- E-commerce: Image retrieval can be used in online shopping applications to search for similar products based on their features or descriptions, or to provide recommendations based on previous purchases.
-- Fashion and design: Image retrieval can be used in fashion and design to search for images based on their visual features, such as color, pattern, or texture. This can help designers or retailers to identify similar products or trends.
-## What are vector embeddings?
-
-Vector embeddings are a way of representing content&mdash;text or images&mdash;as vectors of real numbers in a high-dimensional space. Vector embeddings are often learned from large amounts of textual and visual data using machine learning algorithms, such as neural networks. Each dimension of the vector corresponds to a different feature or attribute of the content, such as its semantic meaning, syntactic role, or context in which it commonly appears.
-
-> [!NOTE]
-> Vector embeddings can only be meaningfully compared if they are from the same model type.
-
-## How does it work?
--
-1. Vectorize Images and Text: the Image Retrieval APIs, **VectorizeImage** and **VectorizeText**, can be used to extract feature vectors out of an image or text respectively. The APIs return a single feature vector representing the entire input.
-1. Measure similarity: Vector search systems typically use distance metrics, such as cosine distance or Euclidean distance, to compare vectors and rank them by similarity. The [Vision studio](https://portal.vision.cognitive.azure.com/) demo uses [cosine distance](./how-to/image-retrieval.md#calculate-vector-similarity) to measure similarity.
-1. Retrieve Images: Use the top _N_ vectors similar to the search query and retrieve images corresponding to those vectors from your photo library to provide as the final result.
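As a minimal sketch of the similarity step in the list above, the following Python function computes cosine similarity between two embedding vectors; the vectors shown are made-up placeholders standing in for output from the vectorize APIs.

```python
# Minimal sketch: cosine similarity between two embedding vectors.
import math

def cosine_similarity(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

image_vector = [0.12, -0.48, 0.33, 0.90]   # placeholder embedding from VectorizeImage
text_vector = [0.10, -0.51, 0.30, 0.88]    # placeholder embedding from VectorizeText

# Higher similarity means a closer match; cosine *distance* is 1 - cosine similarity.
print(cosine_similarity(image_vector, text_vector))
```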
-
-## Next steps
-
-Enable image retrieval for your search service and follow the steps to generate vector embeddings for text and images.
-* [Call the Image retrieval APIs](./how-to/image-retrieval.md)
-
cognitive-services Concept Model Customization https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Computer-vision/concept-model-customization.md
- Title: Model customization concepts - Image Analysis 4.0
-description: Concepts related to the custom model feature of the Image Analysis 4.0 API.
- Previously updated : 02/06/2023
-# Model customization (version 4.0 preview)
-
-Model customization lets you train a specialized Image Analysis model for your own use case. Custom models can do either image classification (tags apply to the whole image) or object detection (tags apply to specific areas of the image). Once your custom model is created and trained, it belongs to your Computer Vision resource, and you can call it using the [Analyze Image API](./how-to/call-analyze-image-40.md).
-
-> [!div class="nextstepaction"]
-> [Vision Studio quickstart](./how-to/model-customization.md?tabs=studio)
-
-> [!div class="nextstepaction"]
-> [Python SDK quickstart](./how-to/model-customization.md?tabs=python)
--
-## Scenario components
-
-The main components of a model customization system are the training images, COCO file, dataset object, and model object.
-
-### Training images
-
-Your set of training images should include several examples of each of the labels you want to detect. You'll also want to collect a few extra images to test your model with once it's trained. The images need to be stored in an Azure Storage container in order to be accessible to the model.
-
-In order to train your model effectively, use images with visual variety. Select images that vary by:
-- camera angle
-- lighting
-- background
-- visual style
-- individual/grouped subject(s)
-- size
-- type
-Additionally, make sure all of your training images meet the following criteria:
-- The image must be presented in JPEG, PNG, GIF, BMP, WEBP, ICO, TIFF, or MPO format.
-- The file size of the image must be less than 20 megabytes (MB).
-- The dimensions of the image must be greater than 50 x 50 pixels and less than 16,000 x 16,000 pixels.
-### COCO file
-
-The COCO file references all of the training images and associates them with their labeling information. In the case of object detection, it specifies the bounding box coordinates of each tag on each image. This file must be in the COCO format, which is a specific type of JSON file. The COCO file should be stored in the same Azure Storage container as the training images.
-
-> [!TIP]
-> [!INCLUDE [coco-files](includes/coco-files.md)]
-
-### Dataset object
-
-The **Dataset** object is a data structure stored by the Image Analysis service that references the association file. You need to create a **Dataset** object before you can create and train a model.
-
-### Model object
-
-The **Model** object is a data structure stored by the Image Analysis service that represents a custom model. It must be associated with a **Dataset** in order to do initial training. Once it's trained, you can query your model by entering its name in the `model-name` query parameter of the [Analyze Image API call](./how-to/call-analyze-image-40.md).
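As a rough sketch only: once trained, the custom model can be queried by passing its name in the `model-name` query parameter of the 4.0 preview analyze call. The endpoint path, `api-version` value, and response handling below are assumptions based on the preview reference; endpoint, key, model name, and image URL are placeholders.

```python
# Minimal sketch: query a trained custom model via the model-name query parameter.
import requests

endpoint = "https://<your-resource>.cognitiveservices.azure.com"  # placeholder
key = "<your-key>"  # placeholder

resp = requests.post(
    f"{endpoint}/computervision/imageanalysis:analyze",
    params={
        "api-version": "2023-02-01-preview",    # assumption; use the current preview version
        "model-name": "<your-custom-model>",    # placeholder custom model name
    },
    headers={"Ocp-Apim-Subscription-Key": key, "Content-Type": "application/json"},
    json={"url": "https://<your-images>/test-image.jpg"},  # placeholder image URL
)
resp.raise_for_status()
print(resp.json())   # contains custom tags or detected objects, depending on the model type
```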
-
-## Quota limits
-
-The following table describes the limits on the scale of your custom model projects.
-
-| Category | Generic image classifier | Generic object detector |
-| - | - | - |
-| Max # training hours | 288 (12 days) | 288 (12 days) |
-| Max # training images | 1,000,000 | 200,000 |
-| Max # evaluation images | 100,000 | 100,000 |
-| Min # training images per category | 2 | 2 |
-| Max # tags per image | multiclass: 1 | NA |
-| Max # regions per image | NA | 1,000 |
-| Max # categories | 2,500 | 1,000 |
-| Min # categories | 2 | 1 |
-| Max image size (Training) | 20 MB | 20 MB |
-| Max image size (Prediction) | Sync: 6 MB, Batch: 20 MB | Sync: 6 MB, Batch: 20 MB |
-| Max image width/height (Training) | 10,240 | 10,240 |
-| Min image width/height (Prediction) | 50 | 50 |
-| Available regions | West US 2, East US, West Europe | West US 2, East US, West Europe |
-| Accepted image types | jpg, png, bmp, gif, jpeg | jpg, png, bmp, gif, jpeg |
-
-## Frequently asked questions
-
-### Why is my COCO file import failing when importing from blob storage?
-
-Currently, Microsoft is addressing an issue that causes COCO file import to fail with large datasets when initiated in Vision Studio. To train using a large dataset, it's recommended to use the REST API instead.
-
-### Why does training take longer/shorter than my specified budget?
-
-The specified training budget is the calibrated **compute time**, not the **wall-clock time**. Some common reasons for the difference are listed:
--- **Longer than specified budget:**
- - Image Analysis experiences a high training traffic, and GPU resources may be tight. Your job may wait in the queue or be put on hold during training.
- - The backend training process ran into unexpected failures, which resulted in retrying logic. The failed runs don't consume your budget, but this can lead to longer training time in general.
- - Your data is stored in a different region than your created Computer Vision resource, which will lead to longer data transmission time.
-- **Shorter than specified budget:** The following factors speed up training at the cost of using more budget in a given amount of wall-clock time.
- - Image Analysis sometimes trains with multiple GPUs depending on your data.
- - Image Analysis sometimes trains multiple exploration trials on multiple GPUs at the same time.
- - Image Analysis sometimes uses premier (faster) GPU SKUs to train.
-
-### Why does my training fail, and what should I do?
-
-The following are some common reasons for training failure:
--- `diverged`: The training can't learn meaningful things from your data. Some common causes are:
 - Not enough data: providing more data should help.
 - Data is of poor quality: check whether your images have low resolution or extreme aspect ratios, or whether any annotations are wrong.
-- `notEnoughBudget`: Your specified budget isn't enough for the size of the dataset and the model type you're training. Specify a larger budget.
-- `datasetCorrupt`: Usually this means your provided images aren't accessible or the annotation file is in the wrong format.
-- `datasetNotFound`: The dataset can't be found.
-- `unknown`: This could be a backend issue. Reach out to support for investigation.
-### What metrics are used for evaluating the models?
-
-The following metrics are used:
-- Image classification: Average Precision, Accuracy Top 1, Accuracy Top 5
-- Object detection: Mean Average Precision @ 30, Mean Average Precision @ 50, Mean Average Precision @ 75
-### Why does my dataset registration fail?
-
-The API responses should be informative enough. They are:
-- `DatasetAlreadyExists`: A dataset with the same name exists.
-- `DatasetInvalidAnnotationUri`: An invalid URI was provided among the annotation URIs at dataset registration time.
-### How many images are required for reasonable/good/best model quality?
-
-Although Florence models have strong few-shot capability (achieving good model performance with limited data availability), in general more data makes your trained model better and more robust. Some scenarios require little data (like classifying an apple against a banana), but others require more (like detecting 200 kinds of insects in a rainforest). This makes it difficult to give a single recommendation.
-
-If your data labeling budget is constrained, our recommended workflow is to repeat the following steps:
-
-1. Collect `N` images per class, where `N` is a number that's easy for you to collect (for example, `N=3`).
-1. Train a model and test it on your evaluation set.
-1. If the model performance is:
-
- - **Good enough** (performance is better than your expectation or performance close to your previous experiment with less data collected): Stop here and use this model.
- - **Not good** (performance is still below your expectation or better than your previous experiment with less data collected at a reasonable margin):
- - Collect more images for each class&mdash;a number that's easy for you to collect&mdash;and go back to Step 2.
- - If you notice the performance is not improving any more after a few iterations, it could be because:
- - this problem isn't well defined or is too hard. Reach out to us for case-by-case analysis.
- - the training data might be of low quality: check if there are wrong annotations or very low-pixel images.
--
-### How much training budget should I specify?
-
-You should specify the upper limit of budget that you're willing to consume. Image Analysis uses an AutoML system in its backend to try out different models and training recipes to find the best model for your use case. The more budget that's given, the higher the chance of finding a better model.
-
-The AutoML system also stops automatically if it concludes there's no need to try more, even if there is still remaining budget. So, it doesn't always exhaust your specified budget. You're guaranteed not to be billed over your specified budget.
-
-### Can I control the hyper-parameters or use my own models in training?
-
-No. The Image Analysis model customization service uses a low-code AutoML training system that handles hyperparameter search and base model selection in the backend.
-
-### Can I export my model after training?
-
-The prediction API is only supported through the cloud service.
-
-### Why does the evaluation fail for my object detection model?
-
-Below are the possible reasons:
-- `internalServerError`: An unknown error occurred. Please try again later.
-- `modelNotFound`: The specified model was not found.
-- `datasetNotFound`: The specified dataset was not found.
-- `datasetAnnotationsInvalid`: An error occurred while trying to download or parse the ground truth annotations associated with the test dataset.
-- `datasetEmpty`: The test dataset did not contain any "ground truth" annotations.
-### What is the expected latency for predictions with custom models?
-
-We do not recommend you use custom models for business critical environments due to potential high latency. When customers train custom models in Vision Studio, those custom models belong to the Computer Vision resource that they were trained under, and the customer is able to make calls to those models using the **Analyze Image** API. When they make these calls, the custom model is loaded in memory, and the prediction infrastructure is initialized. While this happens, customers might experience longer than expected latency to receive prediction results.
-
-## Data privacy and security
-
-As with all of the Cognitive Services, developers using Image Analysis model customization should be aware of Microsoft's policies on customer data. See the [Cognitive Services page](https://www.microsoft.com/trustcenter/cloudservices/cognitiveservices) on the Microsoft Trust Center to learn more.
-
-## Next steps
-
-[Create and train a custom model](./how-to/model-customization.md)
cognitive-services Concept Object Detection 40 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Computer-vision/concept-object-detection-40.md
- Title: Object detection - Image Analysis 4.0
-description: Learn concepts related to the object detection feature of the Image Analysis 4.0 API - usage and limits.
- Previously updated : 01/24/2023
-# Object detection (version 4.0 preview)
-
-Object detection is similar to [tagging](concept-tag-images-40.md), but the API returns the bounding box coordinates (in pixels) for each object found in the image. For example, if an image contains a dog, cat and person, the object detection operation will list those objects with their coordinates in the image. You can use this functionality to process the relationships between the objects in an image. It also lets you determine whether there are multiple instances of the same object in an image.
-
-The object detection function applies tags based on the objects or living things identified in the image. There is currently no formal relationship between the tagging taxonomy and the object detection taxonomy. At a conceptual level, the object detection function only finds objects and living things, while the tag function can also include contextual terms like "indoor", which can't be localized with bounding boxes.
-
-Try out the capabilities of object detection quickly and easily in your browser using Vision Studio.
-
-> [!div class="nextstepaction"]
-> [Try Vision Studio](https://portal.vision.cognitive.azure.com/)
-
-## Object detection example
-
-The following JSON response illustrates what the Analysis 4.0 API returns when detecting objects in the example image.
-
-![A woman using a Microsoft Surface device in a kitchen](./Images/windows-kitchen.jpg)
---
-```json
-{
- "metadata":
- {
- "width": 1260,
- "height": 473
- },
- "objectsResult":
- {
- "values":
- [
- {
- "name": "kitchen appliance",
- "confidence": 0.501,
- "boundingBox": {"x":730,"y":66,"w":135,"h":85}
- },
- {
- "name": "computer keyboard",
- "confidence": 0.51,
- "boundingBox": {"x":523,"y":377,"w":185,"h":46}
- },
- {
- "name": "Laptop",
- "confidence": 0.85,
- "boundingBox": {"x":471,"y":218,"w":289,"h":226}
- },
- {
- "name": "person",
- "confidence": 0.855,
- "boundingBox": {"x":654,"y":0,"w":584,"h":473}
- }
- ]
- }
-}
-```
-
-## Limitations
-
-It's important to note the limitations of object detection so you can avoid or mitigate the effects of false negatives (missed objects) and limited detail.
-
-* Objects are generally not detected if they're small (less than 5% of the image).
-* Objects are generally not detected if they're arranged closely together (a stack of plates, for example).
-* Objects are not differentiated by brand or product names (different types of sodas on a store shelf, for example). However, you can get brand information from an image by using the [Brand detection](concept-brand-detection.md) feature.
-
-## Use the API
-
-The object detection feature is part of the [Analyze Image](https://aka.ms/vision-4-0-ref) API. You can call this API using REST. Include `Objects` in the **features** query parameter. Then, when you get the full JSON response, parse the string for the contents of the `"objectsResult"` section.
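As a minimal sketch, the following Python snippet requests the `objects` feature and walks the `objectsResult` values shown in the example above. The `api-version` value is an assumption; endpoint, key, and image URL are placeholders.

```python
# Minimal sketch: object detection with the Image Analysis 4.0 preview endpoint.
import requests

endpoint = "https://<your-resource>.cognitiveservices.azure.com"  # placeholder
key = "<your-key>"  # placeholder

resp = requests.post(
    f"{endpoint}/computervision/imageanalysis:analyze",
    params={"api-version": "2023-02-01-preview", "features": "objects"},  # api-version is an assumption
    headers={"Ocp-Apim-Subscription-Key": key, "Content-Type": "application/json"},
    json={"url": "https://<your-images>/windows-kitchen.jpg"},  # placeholder image URL
)
resp.raise_for_status()
for obj in resp.json()["objectsResult"]["values"]:
    box = obj["boundingBox"]
    print(f"{obj['name']} ({obj['confidence']:.2f}) at x={box['x']}, y={box['y']}, {box['w']}x{box['h']}")
```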
-
-## Next steps
-
-* [Call the Analyze Image API](./how-to/call-analyze-image-40.md)
-
cognitive-services Concept Object Detection https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Computer-vision/concept-object-detection.md
- Title: Object detection - Computer Vision
-description: Learn concepts related to the object detection feature of the Computer Vision API - usage and limits.
- Previously updated : 11/03/2022
-# Object detection
-
-Object detection is similar to [tagging](concept-tagging-images.md), but the API returns the bounding box coordinates (in pixels) for each object found in the image. For example, if an image contains a dog, cat and person, the Detect operation will list those objects with their coordinates in the image. You can use this functionality to process the relationships between the objects in an image. It also lets you determine whether there are multiple instances of the same object in an image.
-
-The object detection function applies tags based on the objects or living things identified in the image. There is currently no formal relationship between the tagging taxonomy and the object detection taxonomy. At a conceptual level, the object detection function only finds objects and living things, while the tag function can also include contextual terms like "indoor", which can't be localized with bounding boxes.
-
-Try out the capabilities of object detection quickly and easily in your browser using Vision Studio.
-
-> [!div class="nextstepaction"]
-> [Try Vision Studio](https://portal.vision.cognitive.azure.com/)
-
-## Object detection example
-
-The following JSON response illustrates what the Analyze API returns when detecting objects in the example image.
-
-![A woman using a Microsoft Surface device in a kitchen](./Images/windows-kitchen.jpg)
--
-```json
-{
- "objects":[
- {
- "rectangle":{
- "x":730,
- "y":66,
- "w":135,
- "h":85
- },
- "object":"kitchen appliance",
- "confidence":0.501
- },
- {
- "rectangle":{
- "x":523,
- "y":377,
- "w":185,
- "h":46
- },
- "object":"computer keyboard",
- "confidence":0.51
- },
- {
- "rectangle":{
- "x":471,
- "y":218,
- "w":289,
- "h":226
- },
- "object":"Laptop",
- "confidence":0.85,
- "parent":{
- "object":"computer",
- "confidence":0.851
- }
- },
- {
- "rectangle":{
- "x":654,
- "y":0,
- "w":584,
- "h":473
- },
- "object":"person",
- "confidence":0.855
- }
- ],
- "requestId":"25018882-a494-4e64-8196-f627a35c1135",
- "metadata":{
- "height":473,
- "width":1260,
- "format":"Jpeg"
- },
- "modelVersion":"2021-05-01"
-}
-```
--
-## Limitations
-
-It's important to note the limitations of object detection so you can avoid or mitigate the effects of false negatives (missed objects) and limited detail.
-
-* Objects are generally not detected if they're small (less than 5% of the image).
-* Objects are generally not detected if they're arranged closely together (a stack of plates, for example).
-* Objects are not differentiated by brand or product names (different types of sodas on a store shelf, for example). However, you can get brand information from an image by using the [Brand detection](concept-brand-detection.md) feature.
-
-## Use the API
-
-The object detection feature is part of the [Analyze Image](https://westcentralus.dev.cognitive.microsoft.com/docs/services/computer-vision-v3-2/operations/56f91f2e778daf14a499f21b) API. You can call this API through a native SDK or through REST calls. Include `Objects` in the **visualFeatures** query parameter. Then, when you get the full JSON response, parse the string for the contents of the `"objects"` section.
--
-* [Quickstart: Computer Vision REST API or client libraries](./quickstarts-sdk/image-analysis-client-library.md?pivots=programming-language-csharp)
cognitive-services Concept Ocr https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Computer-vision/concept-ocr.md
- Title: OCR for images - Computer Vision
-description: Extract text from in-the-wild and non-document images with a fast and synchronous Computer Vision Image Analysis 4.0 API.
- Previously updated : 07/04/2023
-# OCR for images (version 4.0 preview)
-
-> [!NOTE]
->
-> For extracting text from PDF, Office, and HTML documents and document images, use the [Form Recognizer Read OCR model](../../applied-ai-services/form-recognizer/concept-read.md) optimized for text-heavy digital and scanned documents with an asynchronous API that makes it easy to power your intelligent document processing scenarios.
-
-OCR traditionally started as a machine-learning-based technique for extracting text from in-the-wild and non-document images like product labels, user-generated images, screenshots, street signs, and posters. For several scenarios, such as single images that aren't text-heavy, you need a fast, synchronous API or service. This allows OCR to be embedded in near real-time user experiences to enrich content understanding and follow-up user actions with fast turn-around times.
-
-## What is Computer Vision v4.0 Read OCR (preview)?
-
-The new Computer Vision Image Analysis 4.0 REST API offers the ability to extract printed or handwritten text from images in a unified performance-enhanced synchronous API that makes it easy to get all image insights including OCR results in a single API operation. The Read OCR engine is built on top of multiple deep learning models supported by universal script-based models for [global language support](./language-support.md).
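As a minimal sketch, the following Python snippet requests the `read` feature and prints the recognized text and per-word confidences using the `readResult` fields shown in the example below. The `api-version` value is an assumption; endpoint, key, and image URL are placeholders.

```python
# Minimal sketch: OCR with the read feature of the Image Analysis 4.0 preview API.
import requests

endpoint = "https://<your-resource>.cognitiveservices.azure.com"  # placeholder
key = "<your-key>"  # placeholder

resp = requests.post(
    f"{endpoint}/computervision/imageanalysis:analyze",
    params={"api-version": "2023-02-01-preview", "features": "read"},  # api-version is an assumption
    headers={"Ocp-Apim-Subscription-Key": key, "Content-Type": "application/json"},
    json={"url": "https://<your-images>/handwritten-note.jpg"},  # placeholder image URL
)
resp.raise_for_status()
read_result = resp.json()["readResult"]
print(read_result["content"])                      # all recognized text, newline separated
for page in read_result["pages"]:
    for word in page["words"]:
        print(word["content"], word["confidence"])
```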
-
-## Text extraction example
-
-The following JSON response illustrates what the Image Analysis 4.0 API returns when extracting text from the given image.
-
-![Photo of a sticky note with writing on it.](./Images/handwritten-note.jpg)
-
-```json
-{
- "metadata":
- {
- "width": 1000,
- "height": 945
- },
- "readResult":
- {
- "stringIndexType": "TextElements",
- "content": "You must be the change you\nWish to see in the world !\nEverything has its beauty , but\nnot everyone sees it !",
- "pages":
- [
- {
- "height": 945,
- "width": 1000,
- "angle": -1.099,
- "pageNumber": 1,
- "words":
- [
- {
- "content": "You",
- "boundingBox": [253,268,301,267,304,318,256,318],
- "confidence": 0.998,
- "span": {"offset":0,"length":3}
- },
- {
- "content": "must",
- "boundingBox": [310,266,376,265,378,316,313,317],
- "confidence": 0.988,
- "span": {"offset":4,"length":4}
- },
- {
- "content": "be",
- "boundingBox": [385,264,426,264,428,314,388,316],
- "confidence": 0.928,
- "span": {"offset":9,"length":2}
- },
- {
- "content": "the",
- "boundingBox": [435,263,494,263,496,311,437,314],
- "confidence": 0.997,
- "span": {"offset":12,"length":3}
- },
- {
- "content": "change",
- "boundingBox": [503,263,600,262,602,306,506,311],
- "confidence": 0.995,
- "span": {"offset":16,"length":6}
- },
- {
- "content": "you",
- "boundingBox": [609,262,665,263,666,302,611,305],
- "confidence": 0.998,
- "span": {"offset":23,"length":3}
- },
- {
- "content": "Wish",
- "boundingBox": [327,348,391,343,392,380,328,382],
- "confidence": 0.98,
- "span": {"offset":27,"length":4}
- },
- {
- "content": "to",
- "boundingBox": [406,342,438,340,439,378,407,379],
- "confidence": 0.997,
- "span": {"offset":32,"length":2}
- },
- {
- "content": "see",
- "boundingBox": [446,340,492,337,494,376,447,378],
- "confidence": 0.998,
- "span": {"offset":35,"length":3}
- },
- {
- "content": "in",
- "boundingBox": [500,337,527,336,529,375,501,376],
- "confidence": 0.983,
- "span": {"offset":39,"length":2}
- },
- {
- "content": "the",
- "boundingBox": [534,336,588,334,590,373,536,375],
- "confidence": 0.993,
- "span": {"offset":42,"length":3}
- },
- {
- "content": "world",
- "boundingBox": [599,334,655,333,658,371,601,373],
- "confidence": 0.998,
- "span": {"offset":46,"length":5}
- },
- {
- "content": "!",
- "boundingBox": [663,333,687,333,690,370,666,371],
- "confidence": 0.915,
- "span": {"offset":52,"length":1}
- },
- {
- "content": "Everything",
- "boundingBox": [255,446,371,441,372,490,256,494],
- "confidence": 0.97,
- "span": {"offset":54,"length":10}
- },
- {
- "content": "has",
- "boundingBox": [380,441,421,440,421,488,381,489],
- "confidence": 0.793,
- "span": {"offset":65,"length":3}
- },
- {
- "content": "its",
- "boundingBox": [430,440,471,439,471,487,431,488],
- "confidence": 0.998,
- "span": {"offset":69,"length":3}
- },
- {
- "content": "beauty",
- "boundingBox": [480,439,552,439,552,485,481,487],
- "confidence": 0.296,
- "span": {"offset":73,"length":6}
- },
- {
- "content": ",",
- "boundingBox": [561,439,571,439,571,485,562,485],
- "confidence": 0.742,
- "span": {"offset":80,"length":1}
- },
- {
- "content": "but",
- "boundingBox": [580,439,636,439,636,485,580,485],
- "confidence": 0.885,
- "span": {"offset":82,"length":3}
- },
- {
- "content": "not",
- "boundingBox": [364,516,412,512,413,546,366,549],
- "confidence": 0.994,
- "span": {"offset":86,"length":3}
- },
- {
- "content": "everyone",
- "boundingBox": [422,511,520,504,521,540,423,545],
- "confidence": 0.993,
- "span": {"offset":90,"length":8}
- },
- {
- "content": "sees",
- "boundingBox": [530,503,586,500,588,538,531,540],
- "confidence": 0.988,
- "span": {"offset":99,"length":4}
- },
- {
- "content": "it",
- "boundingBox": [596,500,627,498,628,536,598,537],
- "confidence": 0.998,
- "span": {"offset":104,"length":2}
- },
- {
- "content": "!",
- "boundingBox": [634,498,657,497,659,536,635,536],
- "confidence": 0.994,
- "span": {"offset":107,"length":1}
- }
- ],
- "spans":
- [
- {
- "offset": 0,
- "length": 108
- }
- ],
- "lines":
- [
- {
- "content": "You must be the change you",
- "boundingBox": [253,267,670,262,671,307,254,318],
- "spans": [{"offset":0,"length":26}]
- },
- {
- "content": "Wish to see in the world !",
- "boundingBox": [326,343,691,332,693,369,327,382],
- "spans": [{"offset":27,"length":26}]
- },
- {
- "content": "Everything has its beauty , but",
- "boundingBox": [254,443,640,438,641,485,255,493],
- "spans": [{"offset":54,"length":31}]
- },
- {
- "content": "not everyone sees it !",
- "boundingBox": [364,512,658,496,660,534,365,549],
- "spans": [{"offset":86,"length":22}]
- }
- ]
- }
- ],
- "styles":
- [
- {
- "isHandwritten": true,
- "spans":
- [
- {
- "offset": 0,
- "length": 26
- }
- ],
- "confidence": 0.95
- },
- {
- "isHandwritten": true,
- "spans":
- [
- {
- "offset": 27,
- "length": 58
- }
- ],
- "confidence": 1
- },
- {
- "isHandwritten": true,
- "spans":
- [
- {
- "offset": 86,
- "length": 22
- }
- ],
- "confidence": 0.9
- }
- ]
- }
-}
-```
-
-## Use the API
-
-The text extraction feature is part of the [Analyze Image API](https://aka.ms/vision-4-0-ref). Include `Read` in the **features** query parameter. Then, when you get the full JSON response, parse the string for the contents of the `"readResult"` section.
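-
-As a rough illustration of that request, the following C# sketch calls the Image Analysis 4.0 preview endpoint with `features=Read` and prints the recognized text from `readResult.content`. The endpoint, key, image URL, and `api-version` value are assumptions to adjust for your resource.
-
-```csharp
-// Hedged sketch: call Image Analysis 4.0 (preview) with features=Read and print the recognized text.
-// The endpoint, key, image URL, and api-version value are assumptions; adjust them to your resource.
-using System;
-using System.Net.Http;
-using System.Text;
-using System.Text.Json;
-using System.Threading.Tasks;
-
-class ReadTextSample
-{
-    static async Task Main()
-    {
-        string endpoint = "https://<your-resource>.cognitiveservices.azure.com"; // assumption
-        string key = "<your-key>";                                               // assumption
-        string url = $"{endpoint}/computervision/imageanalysis:analyze?api-version=2023-02-01-preview&features=Read";
-
-        using var client = new HttpClient();
-        client.DefaultRequestHeaders.Add("Ocp-Apim-Subscription-Key", key);
-
-        var body = new StringContent("{\"url\":\"https://example.com/handwritten-note.jpg\"}", Encoding.UTF8, "application/json");
-        var response = await client.PostAsync(url, body);
-        response.EnsureSuccessStatusCode();
-
-        // The full recognized text (lines separated by "\n") is in readResult.content.
-        using var doc = JsonDocument.Parse(await response.Content.ReadAsStringAsync());
-        Console.WriteLine(doc.RootElement.GetProperty("readResult").GetProperty("content").GetString());
-    }
-}
-```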
--
-## Next steps
-
-Follow the [Image Analysis quickstart](./quickstarts-sdk/image-analysis-client-library-40.md) to extract text from an image using the Image Analysis 4.0 API.
cognitive-services Concept People Detection https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Computer-vision/concept-people-detection.md
- Title: People detection - Computer Vision-
-description: Learn concepts related to the people detection feature of the Computer Vision API - usage and limits.
-------- Previously updated : 09/12/2022---
-# People detection (version 4.0 preview)
-
-Version 4.0 of Image Analysis offers the ability to detect people appearing in images. The bounding box coordinates of each detected person are returned, along with a confidence score.
-
-> [!IMPORTANT]
-> We built this model by enhancing our object detection model for person detection scenarios. People detection does not involve distinguishing one face from another face, predicting or classifying facial attributes, or creating a facial template (a unique set of numbers generated from an image that represents the distinctive features of a face).
-
-## People detection example
-
-The following JSON response illustrates what the Analysis 4.0 API returns when describing the example image based on its visual features.
-
-![Photo of four people.](./Images/family_photo.png)
-
-```json
-{
- "modelVersion": "2023-02-01-preview",
- "metadata": {
- "width": 300,
- "height": 231
- },
- "peopleResult": {
- "values": [
- {
- "boundingBox": {
- "x": 0,
- "y": 41,
- "w": 95,
- "h": 189
- },
- "confidence": 0.9474349617958069
- },
- {
- "boundingBox": {
- "x": 204,
- "y": 96,
- "w": 95,
- "h": 134
- },
- "confidence": 0.9470965266227722
- },
- {
- "boundingBox": {
- "x": 53,
- "y": 20,
- "w": 136,
- "h": 210
- },
- "confidence": 0.8943784832954407
- },
- {
- "boundingBox": {
- "x": 170,
- "y": 31,
- "w": 91,
- "h": 199
- },
- "confidence": 0.2713555097579956
- }
- ]
- }
-}
-```
-
-## Use the API
-
-The people detection feature is part of the [Image Analysis 4.0 API](https://aka.ms/vision-4-0-ref). Include `People` in the **features** query parameter. Then, when you get the full JSON response, parse the string for the contents of the `"people"` section.
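-
-To illustrate the parsing step, the following C# sketch reads a saved response shaped like the sample above and prints each detected person above a confidence threshold. The file name and the 0.75 threshold are arbitrary assumptions for illustration only.
-
-```csharp
-// Hedged sketch: parse a saved Image Analysis 4.0 response shaped like the sample above and
-// keep only confident detections. The file name and 0.75 threshold are arbitrary assumptions.
-using System;
-using System.IO;
-using System.Text.Json;
-
-class PeopleResultSample
-{
-    static void Main()
-    {
-        string json = File.ReadAllText("people-response.json"); // assumption: response saved to disk
-        using var doc = JsonDocument.Parse(json);
-
-        foreach (var person in doc.RootElement.GetProperty("peopleResult").GetProperty("values").EnumerateArray())
-        {
-            double confidence = person.GetProperty("confidence").GetDouble();
-            if (confidence < 0.75) continue; // assumption: skip low-confidence detections
-
-            var box = person.GetProperty("boundingBox");
-            int x = box.GetProperty("x").GetInt32();
-            int y = box.GetProperty("y").GetInt32();
-            int w = box.GetProperty("w").GetInt32();
-            int h = box.GetProperty("h").GetInt32();
-            Console.WriteLine($"Person at ({x},{y}) size {w}x{h}, confidence {confidence:0.00}");
-        }
-    }
-}
-```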
-
-## Next steps
-
-* [Call the Analyze Image API](./how-to/call-analyze-image-40.md)
cognitive-services Concept Shelf Analysis https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Computer-vision/concept-shelf-analysis.md
- Title: Product Recognition - Image Analysis 4.0-
-description: Learn concepts related to the Product Recognition feature set of Image Analysis 4.0 - usage and limits.
------- Previously updated : 05/03/2023----
-# Product Recognition (version 4.0 preview)
-
-The Product Recognition APIs let you analyze photos of shelves in a retail store. You can detect the presence of products and get their bounding box coordinates. Use it in combination with model customization to train a model to identify your specific products. You can also compare Product Recognition results to your store's planogram document.
-
-Try out the capabilities of Product Recognition quickly and easily in your browser using Vision Studio.
-
-> [!div class="nextstepaction"]
-> [Try Vision Studio](https://portal.vision.cognitive.azure.com/)
--
-> [!NOTE]
-> The brands shown in the images are not affiliated with Microsoft and do not indicate any form of endorsement of Microsoft or Microsoft products by the brand owners, or an endorsement of the brand owners or their products by Microsoft.
-
-## Product Recognition features
-
-### Image modification
-
-The [stitching and rectification APIs](./how-to/shelf-modify-images.md) let you modify images to improve the accuracy of the Product Understanding results. You can use these APIs to:
-* Stitch together multiple images of a shelf to create a single image.
-* Rectify an image to remove perspective distortion.
-
-### Product Understanding (pretrained)
-
-The [Product Understanding API](./how-to/shelf-analyze.md) lets you analyze a shelf image using the out-of-box pretrained model. This operation detects products and gaps in the shelf image and returns the bounding box coordinates of each product and gap, along with a confidence score for each.
-
-The following JSON response illustrates what the Product Understanding API returns.
-
-```json
-{
- "imageMetadata": {
- "width": 2000,
- "height": 1500
- },
- "products": [
- {
- "id": "string",
- "boundingBox": {
- "x": 1234,
- "y": 1234,
- "w": 12,
- "h": 12
- },
- "classifications": [
- {
- "confidence": 0.9,
- "label": "string"
- }
- ]
- }
- ],
- "gaps": [
- {
- "id": "string",
- "boundingBox": {
- "x": 1234,
- "y": 1234,
- "w": 123,
- "h": 123
- },
- "classifications": [
- {
- "confidence": 0.8,
- "label": "string"
- }
- ]
- }
- ]
-}
-```
-
-### Product Understanding (custom)
-
-The Product Understanding API can also be used with a [custom trained model](./how-to/shelf-model-customization.md) to detect your specific products. This operation returns the bounding box coordinates of each product and gap, along with the label of each product.
-
-The following JSON response illustrates what the Product Understanding API returns when used with a custom model.
-
-```json
-"detectedProducts": {
- "imageMetadata": {
- "width": 21,
- "height": 25
- },
- "products": [
- {
- "id": "01",
- "boundingBox": {
- "x": 123,
- "y": 234,
- "w": 34,
- "h": 45
- },
- "classifications": [
- {
- "confidence": 0.8,
- "label": "Product1"
- }
- ]
- }
- ],
- "gaps": [
- {
- "id": "02",
- "boundingBox": {
- "x": 12,
- "y": 123,
- "w": 1234,
- "h": 123
- },
- "classifications": [
- {
- "confidence": 0.9,
- "label": "Product1"
- }
- ]
- }
- ]
-}
-```
-
-### Planogram matching
-
-The [Planogram matching API](./how-to/shelf-planogram.md) lets you compare the results of the Product Understanding API to a planogram document. This operation matches each detected product and gap to its corresponding position in the planogram document.
-
-It returns a JSON response that accounts for each position in the planogram document, whether it's occupied by a product or gap.
-
-```json
-{
- "matchedResultsPerPosition": [
- {
- "positionId": "01",
- "detectedObject": {
- "id": "01",
- "boundingBox": {
- "x": 12,
- "y": 1234,
- "w": 123,
- "h": 12345
- },
- "classifications": [
- {
- "confidence": 0.9,
- "label": "Product1"
- }
- ]
- }
- }
- ]
-}
-```
-
-## Limitations
-
-* Product Recognition is only available in the **East US** and **West US 2** Azure regions.
-* Shelf images can be up to 20 MB in size. The recommended size is 4 MB.
-* We recommend you do [stitching and rectification](./how-to/shelf-modify-images.md) on the shelf images before uploading them for analysis.
-* Using a [custom model](./how-to/shelf-model-customization.md) is optional in Product Recognition, but it's required for the [planogram matching](./how-to/shelf-planogram.md) function.
--
-## Next steps
-
-Get started with Product Recognition by trying out the stitching and rectification APIs. Then do basic analysis with the Product Understanding API.
-* [Prepare images for Product Recognition](./how-to/shelf-modify-images.md)
-* [Analyze a shelf image](./how-to/shelf-analyze.md)
cognitive-services Concept Tag Images 40 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Computer-vision/concept-tag-images-40.md
- Title: Content tags - Image Analysis 4.0-
-description: Learn concepts related to the images tagging feature of the Image Analysis 4.0 API.
------- Previously updated : 01/24/2023----
-# Image tagging (version 4.0 preview)
-
-Image Analysis can return content tags for thousands of recognizable objects, living beings, scenery, and actions that appear in images. Tagging is not limited to the main subject, such as a person in the foreground, but also includes the setting (indoor or outdoor), furniture, tools, plants, animals, accessories, gadgets, and so on. Tags are not organized as a taxonomy and do not have inheritance hierarchies. When tags are ambiguous or not common knowledge, the API response provides hints to clarify the meaning of the tag in context of a known setting.
-
-Try out the image tagging features quickly and easily in your browser using Vision Studio.
-
-> [!div class="nextstepaction"]
-> [Try Vision Studio](https://portal.vision.cognitive.azure.com/)
-
-## Image tagging example
-
-The following JSON response illustrates what Computer Vision returns when tagging visual features detected in the example image.
-
-![A blue house and the front yard](./Images/house_yard.png)
--
-```json
-{
- "metadata":
- {
- "width": 300,
- "height": 200
- },
- "tagsResult":
- {
- "values":
- [
- {
- "name": "grass",
- "confidence": 0.9960499405860901
- },
- {
- "name": "outdoor",
- "confidence": 0.9956876635551453
- },
- {
- "name": "building",
- "confidence": 0.9893627166748047
- },
- {
- "name": "property",
- "confidence": 0.9853052496910095
- },
- {
- "name": "plant",
- "confidence": 0.9791355729103088
- },
- {
- "name": "sky",
- "confidence": 0.976455569267273
- },
- {
- "name": "home",
- "confidence": 0.9732913374900818
- },
- {
- "name": "house",
- "confidence": 0.9726771116256714
- },
- {
- "name": "real estate",
- "confidence": 0.972320556640625
- },
- {
- "name": "yard",
- "confidence": 0.9480281472206116
- },
- {
- "name": "siding",
- "confidence": 0.945357620716095
- },
- {
- "name": "porch",
- "confidence": 0.9410697221755981
- },
- {
- "name": "cottage",
- "confidence": 0.9143695831298828
- },
- {
- "name": "tree",
- "confidence": 0.9111745357513428
- },
- {
- "name": "farmhouse",
- "confidence": 0.8988940119743347
- },
- {
- "name": "window",
- "confidence": 0.894851803779602
- },
- {
- "name": "lawn",
- "confidence": 0.894050121307373
- },
- {
- "name": "backyard",
- "confidence": 0.8931854963302612
- },
- {
- "name": "garden buildings",
- "confidence": 0.8859137296676636
- },
- {
- "name": "roof",
- "confidence": 0.8695330619812012
- },
- {
- "name": "driveway",
- "confidence": 0.8670969009399414
- },
- {
- "name": "land lot",
- "confidence": 0.856428861618042
- },
- {
- "name": "landscaping",
- "confidence": 0.8540748357772827
- }
- ]
- }
-}
-```
-
-## Use the API
-
-The tagging feature is part of the [Analyze Image](https://aka.ms/vision-4-0-ref) API. You can call this API using REST. Include `Tags` in the **features** query parameter. Then, when you get the full JSON response, parse the string for the contents of the `"tags"` section.
--
-* [Quickstart: Image Analysis REST API or client libraries](./quickstarts-sdk/image-analysis-client-library-40.md?pivots=programming-language-csharp)
-
-## Next steps
-
-* Learn the related concept of [describing images](concept-describe-images-40.md).
-* [Call the Analyze Image API](./how-to/call-analyze-image-40.md)
-
cognitive-services Concept Tagging Images https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Computer-vision/concept-tagging-images.md
- Title: Content tags - Computer Vision-
-description: Learn concepts related to the images tagging feature of the Computer Vision API.
------- Previously updated : 09/20/2022----
-# Image tagging
-
-Image Analysis can return content tags for thousands of recognizable objects, living beings, scenery, and actions that appear in images. Tags are not organized as a taxonomy and do not have inheritance hierarchies. A collection of content tags forms the foundation for an image [description](./concept-describing-images.md) displayed as human readable language formatted in complete sentences. When tags are ambiguous or not common knowledge, the API response provides hints to clarify the meaning of the tag in context of a known setting.
-
-After you upload an image or specify an image URL, the Analyze API can output tags based on the objects, living beings, and actions identified in the image. Tagging is not limited to the main subject, such as a person in the foreground, but also includes the setting (indoor or outdoor), furniture, tools, plants, animals, accessories, gadgets, and so on.
-
-Try out the image tagging features quickly and easily in your browser using Vision Studio.
-
-> [!div class="nextstepaction"]
-> [Try Vision Studio](https://portal.vision.cognitive.azure.com/)
-
-## Image tagging example
-
-The following JSON response illustrates what Computer Vision returns when tagging visual features detected in the example image.
-
-![A blue house and the front yard](./Images/house_yard.png)
-
-```json
-{
- "tags":[
- {
- "name":"grass",
- "confidence":0.9960499405860901
- },
- {
- "name":"outdoor",
- "confidence":0.9956876635551453
- },
- {
- "name":"building",
- "confidence":0.9893627166748047
- },
- {
- "name":"property",
- "confidence":0.9853052496910095
- },
- {
- "name":"plant",
- "confidence":0.9791355133056641
- },
- {
- "name":"sky",
- "confidence":0.9764555096626282
- },
- {
- "name":"home",
- "confidence":0.9732913970947266
- },
- {
- "name":"house",
- "confidence":0.9726772904396057
- },
- {
- "name":"real estate",
- "confidence":0.972320556640625
- },
- {
- "name":"yard",
- "confidence":0.9480282068252563
- },
- {
- "name":"siding",
- "confidence":0.945357620716095
- },
- {
- "name":"porch",
- "confidence":0.9410697221755981
- },
- {
- "name":"cottage",
- "confidence":0.9143695831298828
- },
- {
- "name":"tree",
- "confidence":0.9111741185188293
- },
- {
- "name":"farmhouse",
- "confidence":0.8988939523696899
- },
- {
- "name":"window",
- "confidence":0.894851565361023
- },
- {
- "name":"lawn",
- "confidence":0.8940501809120178
- },
- {
- "name":"backyard",
- "confidence":0.8931854963302612
- },
- {
- "name":"garden buildings",
- "confidence":0.885913610458374
- },
- {
- "name":"roof",
- "confidence":0.8695329427719116
- },
- {
- "name":"driveway",
- "confidence":0.8670971393585205
- },
- {
- "name":"land lot",
- "confidence":0.8564285039901733
- },
- {
- "name":"landscaping",
- "confidence":0.8540750741958618
- }
- ],
- "requestId":"d60ac02b-966d-4f62-bc24-fbb1fec8bd5d",
- "metadata":{
- "height":200,
- "width":300,
- "format":"Png"
- },
- "modelVersion":"2021-05-01"
-}
-```
-
-## Use the API
-
-The tagging feature is part of the [Analyze Image](https://westcentralus.dev.cognitive.microsoft.com/docs/services/computer-vision-v3-2/operations/56f91f2e778daf14a499f21b) API. You can call this API through a native SDK or through REST calls. Include `Tags` in the **visualFeatures** query parameter. Then, when you get the full JSON response, parse the string for the contents of the `"tags"` section.
--
-* [Quickstart: Image Analysis REST API or client libraries](./quickstarts-sdk/image-analysis-client-library.md?pivots=programming-language-csharp)
-
-## Next steps
-
-Learn the related concepts of [categorizing images](concept-categorizing-images.md) and [describing images](concept-describing-images.md).
cognitive-services Deploy Computer Vision On Premises https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Computer-vision/deploy-computer-vision-on-premises.md
- Title: Use Computer Vision container with Kubernetes and Helm-
-description: Learn how to deploy the Computer Vision container using Kubernetes and Helm.
------ Previously updated : 05/09/2022----
-# Use Computer Vision container with Kubernetes and Helm
-
-One option to manage your Computer Vision containers on-premises is to use Kubernetes and Helm. You use Kubernetes and Helm to define a Computer Vision container image and create a Kubernetes package, and then deploy that package to an on-premises Kubernetes cluster. Finally, you explore how to test the deployed services. For more information about running Docker containers without Kubernetes orchestration, see [install and run Computer Vision containers](computer-vision-how-to-install-containers.md).
-
-## Prerequisites
-
-The following prerequisites are required before you use Computer Vision containers on-premises:
-
-| Required | Purpose |
-|-||
-| Azure Account | If you don't have an Azure subscription, create a [free account][free-azure-account] before you begin. |
-| Kubernetes CLI | The [Kubernetes CLI][kubernetes-cli] is required for managing the shared credentials from the container registry. Kubernetes is also needed before Helm, which is the Kubernetes package manager. |
-| Helm CLI | Install the [Helm CLI][helm-install], which is used to install a helm chart (container package definition). |
-| Computer Vision resource |In order to use the container, you must have:<br><br>An Azure **Computer Vision** resource and the associated API key and endpoint URI. Both values are available on the Overview and Keys pages for the resource and are required to start the container.<br><br>**{API_KEY}**: One of the two available resource keys on the **Keys** page<br><br>**{ENDPOINT_URI}**: The endpoint as provided on the **Overview** page|
--
-### The host computer
--
-### Container requirements and recommendations
--
-## Connect to the Kubernetes cluster
-
-The host computer is expected to have an available Kubernetes cluster. See this tutorial on [deploying a Kubernetes cluster](../../aks/tutorial-kubernetes-deploy-cluster.md) for a conceptual understanding of how to deploy a Kubernetes cluster to a host computer. You can find more information on deployments in the [Kubernetes documentation](https://kubernetes.io/docs/concepts/workloads/controllers/deployment/).
-
-## Configure Helm chart values for deployment
-
-Begin by creating a folder named *read*. Then, paste the following YAML content in a new file named `chart.yaml`:
-
-```yaml
-apiVersion: v2
-name: read
-version: 1.0.0
-description: A Helm chart to deploy the Read OCR container to a Kubernetes cluster
-dependencies:
-- name: rabbitmq
- condition: read.image.args.rabbitmq.enabled
- version: ^6.12.0
- repository: https://kubernetes-charts.storage.googleapis.com/
-- name: redis
- condition: read.image.args.redis.enabled
- version: ^6.0.0
- repository: https://kubernetes-charts.storage.googleapis.com/
-```
-
-To configure the Helm chart default values, copy and paste the following YAML into a file named `values.yaml`. Replace the `# {ENDPOINT_URI}` and `# {API_KEY}` comments with your own values. Configure resultExpirationPeriod, Redis, and RabbitMQ if needed.
-
-```yaml
-# These settings are deployment specific and users can provide customizations
-read:
- enabled: true
- image:
- name: cognitive-services-read
- registry: mcr.microsoft.com/
- repository: azure-cognitive-services/vision/read
- tag: 3.2-preview.1
- args:
- eula: accept
- billing: # {ENDPOINT_URI}
- apikey: # {API_KEY}
-
- # Result expiration period setting. Specify when the system should clean up recognition results.
- # For example, resultExpirationPeriod=1, the system will clear the recognition result 1hr after the process.
- # resultExpirationPeriod=0, the system will clear the recognition result after result retrieval.
- resultExpirationPeriod: 1
-
- # Redis storage, if configured, will be used by read OCR container to store result records.
- # A cache is required if multiple read OCR containers are placed behind load balancer.
- redis:
- enabled: false # {true/false}
- password: password
-
- # RabbitMQ is used for dispatching tasks. This can be useful when multiple read OCR containers are
- # placed behind load balancer.
- rabbitmq:
- enabled: false # {true/false}
- rabbitmq:
- username: user
- password: password
-```
-
-> [!IMPORTANT]
-> - If the `billing` and `apikey` values aren't provided, the services expire after 15 minutes. Likewise, verification fails because the services aren't available.
->
-> - If you deploy multiple Read OCR containers behind a load balancer, for example, under Docker Compose or Kubernetes, you must have an external cache. Because the processing container and the GET request container might not be the same, an external cache stores the results and shares them across containers. For details about cache settings, see [Configure Computer Vision Docker containers](./computer-vision-resource-container-config.md).
->
-
-Create a *templates* folder under the *read* directory. Copy and paste the following YAML into a file named `deployment.yaml`. The `deployment.yaml` file will serve as a Helm template.
-
-> Templates generate manifest files, which are YAML-formatted resource descriptions that Kubernetes can understand. [- Helm Chart Template Guide][chart-template-guide]
-
-```yaml
-apiVersion: apps/v1
-kind: Deployment
-metadata:
- name: read
- labels:
- app: read-deployment
-spec:
- selector:
- matchLabels:
- app: read-app
- template:
- metadata:
- labels:
- app: read-app
- spec:
- containers:
- - name: {{.Values.read.image.name}}
- image: {{.Values.read.image.registry}}{{.Values.read.image.repository}}
- ports:
- - containerPort: 5000
- env:
- - name: EULA
- value: {{.Values.read.image.args.eula}}
- - name: billing
- value: {{.Values.read.image.args.billing}}
- - name: apikey
- value: {{.Values.read.image.args.apikey}}
- args:
- - ReadEngineConfig:ResultExpirationPeriod={{ .Values.read.image.args.resultExpirationPeriod }}
- {{- if .Values.read.image.args.rabbitmq.enabled }}
- - Queue:RabbitMQ:HostName={{ include "rabbitmq.hostname" . }}
- - Queue:RabbitMQ:Username={{ .Values.read.image.args.rabbitmq.rabbitmq.username }}
- - Queue:RabbitMQ:Password={{ .Values.read.image.args.rabbitmq.rabbitmq.password }}
- {{- end }}
- {{- if .Values.read.image.args.redis.enabled }}
- - Cache:Redis:Configuration={{ include "redis.connStr" . }}
- {{- end }}
- imagePullSecrets:
- - name: {{.Values.read.image.pullSecret}}
-
----
-apiVersion: v1
-kind: Service
-metadata:
- name: read-service
-spec:
- type: LoadBalancer
- ports:
- - port: 5000
- selector:
- app: read-app
-```
-
-In the same *templates* folder, copy and paste the following helper functions into `helpers.tpl`. `helpers.tpl` defines useful functions that help generate the Helm template.
-
-> [!NOTE]
-> This article contains references to the term *slave*, a term that Microsoft no longer uses. When the term is removed from the software, we'll remove it from this article.
-
-```yaml
-{{- define "rabbitmq.hostname" -}}
-{{- printf "%s-rabbitmq" .Release.Name -}}
-{{- end -}}
-
-{{- define "redis.connStr" -}}
-{{- $hostMain := printf "%s-redis-master:6379" .Release.Name }}
-{{- $hostReplica := printf "%s-redis-slave:6379" .Release.Name -}}
-{{- $passWord := printf "password=%s" .Values.read.image.args.redis.password -}}
-{{- $connTail := "ssl=False,abortConnect=False" -}}
-{{- printf "%s,%s,%s,%s" $hostMain $hostReplica $passWord $connTail -}}
-{{- end -}}
-```
-The template specifies a load balancer service and the deployment of your container/image for Read.
-
-### The Kubernetes package (Helm chart)
-
-The *Helm chart* contains the configuration of which docker image(s) to pull from the `mcr.microsoft.com` container registry.
-
-> A [Helm chart][helm-charts] is a collection of files that describe a related set of Kubernetes resources. A single chart might be used to deploy something simple, like a memcached pod, or something complex, like a full web app stack with HTTP servers, databases, caches, and so on.
-
-The provided *Helm charts* pull the Docker images of the Computer Vision service from the `mcr.microsoft.com` container registry.
-
-## Install the Helm chart on the Kubernetes cluster
-
-To install the *Helm chart*, run the [`helm install`][helm-install-cmd] command. Make sure to run the install command from the directory above the `read` folder.
-
-```console
-helm install read ./read
-```
-
-Here is an example output you might expect to see from a successful install execution:
-
-```console
-NAME: read
-LAST DEPLOYED: Thu Sep 04 13:24:06 2019
-NAMESPACE: default
-STATUS: DEPLOYED
-
-RESOURCES:
-==> v1/Pod(related)
-NAME READY STATUS RESTARTS AGE
-read-57cb76bcf7-45sdh 0/1 ContainerCreating 0 0s
-
-==> v1/Service
-NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
-read LoadBalancer 10.110.44.86 localhost 5000:31301/TCP 0s
-
-==> v1beta1/Deployment
-NAME READY UP-TO-DATE AVAILABLE AGE
-read 0/1 1 0 0s
-```
-
-The Kubernetes deployment can take several minutes to complete. To confirm that both pods and services are properly deployed and available, execute the following command:
-
-```console
-kubectl get all
-```
-
-You should expect to see something similar to the following output:
-
-```console
-kubectl get all
-NAME READY STATUS RESTARTS AGE
-pod/read-57cb76bcf7-45sdh 1/1 Running 0 17s
-
-NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
-service/kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 45h
-service/read LoadBalancer 10.110.44.86 localhost 5000:31301/TCP 17s
-
-NAME READY UP-TO-DATE AVAILABLE AGE
-deployment.apps/read 1/1 1 1 17s
-
-NAME DESIRED CURRENT READY AGE
-replicaset.apps/read-57cb76bcf7 1 1 1 17s
-```
-
-## Deploy multiple v3 containers on the Kubernetes cluster
-
-Starting with v3 of the container, you can run the containers in parallel at both the task and page level.
-
-By design, each v3 container has a dispatcher and a recognition worker. The dispatcher is responsible for splitting a multi-page task into multiple single page sub-tasks. The recognition worker is optimized for recognizing a single page document. To achieve page level parallelism, deploy multiple v3 containers behind a load balancer and let the containers share a universal storage and queue.
-
-> [!NOTE]
-> Currently only Azure Storage and Azure Queue are supported.
-
-The container receiving the request can split the task into single page sub-tasks, and add them to the universal queue. Any recognition worker from a less busy container can consume single page sub-tasks from the queue, perform recognition, and upload the result to the storage. The throughput can be improved up to `n` times, depending on the number of containers that are deployed.
-
-The v3 container exposes the liveness probe API under the `/ContainerLiveness` path. Use the following deployment example to configure a liveness probe for Kubernetes.
-
-Copy and paste the following YAML into a file named `deployment.yaml`. Replace the `# {ENDPOINT_URI}` and `# {API_KEY}` comments with your own values. Replace the `# {AZURE_STORAGE_CONNECTION_STRING}` comment with your Azure Storage Connection String. Configure `replicas` to the number you want, which is set to `3` in the following example.
-
-```yaml
-apiVersion: apps/v1
-kind: Deployment
-metadata:
- name: read
- labels:
- app: read-deployment
-spec:
- selector:
- matchLabels:
- app: read-app
- replicas: # {NUMBER_OF_READ_CONTAINERS}
- template:
- metadata:
- labels:
- app: read-app
- spec:
- containers:
- - name: cognitive-services-read
- image: mcr.microsoft.com/azure-cognitive-services/vision/read
- ports:
- - containerPort: 5000
- env:
- - name: EULA
- value: accept
- - name: billing
- value: # {ENDPOINT_URI}
- - name: apikey
- value: # {API_KEY}
- - name: Storage__ObjectStore__AzureBlob__ConnectionString
- value: # {AZURE_STORAGE_CONNECTION_STRING}
- - name: Queue__Azure__ConnectionString
- value: # {AZURE_STORAGE_CONNECTION_STRING}
- livenessProbe:
- httpGet:
- path: /ContainerLiveness
- port: 5000
- initialDelaySeconds: 60
- periodSeconds: 60
- timeoutSeconds: 20
-
----
-apiVersion: v1
-kind: Service
-metadata:
- name: azure-cognitive-service-read
-spec:
- type: LoadBalancer
- ports:
- - port: 5000
- targetPort: 5000
- selector:
- app: read-app
-```
-
-Run the following command.
-
-```console
-kubectl apply -f deployment.yaml
-```
-
-Below is an example output you might see from a successful deployment execution:
-
-```console
-deployment.apps/read created
-service/azure-cognitive-service-read created
-```
-
-The Kubernetes deployment can take several minutes to complete. To confirm that both pods and services are properly deployed and available, execute the following command:
-
-```console
-kubectl get all
-```
-
-You should see console output similar to the following:
-
-```console
-kubectl get all
-NAME READY STATUS RESTARTS AGE
-pod/read-6cbbb6678-58s9t 1/1 Running 0 3s
-pod/read-6cbbb6678-kz7v4 1/1 Running 0 3s
-pod/read-6cbbb6678-s2pct 1/1 Running 0 3s
-
-NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
-service/azure-cognitive-service-read LoadBalancer 10.0.134.0 <none> 5000:30846/TCP 17h
-service/kubernetes ClusterIP 10.0.0.1 <none> 443/TCP 78d
-
-NAME READY UP-TO-DATE AVAILABLE AGE
-deployment.apps/read 3/3 3 3 3s
-
-NAME DESIRED CURRENT READY AGE
-replicaset.apps/read-6cbbb6678 3 3 3 3s
-```
-
-<!-- ## Validate container is running -->
--
-## Next steps
-
-For more details on installing applications with Helm in Azure Kubernetes Service (AKS), [visit here][installing-helm-apps-in-aks].
-
-> [!div class="nextstepaction"]
-> [Cognitive Services Containers][cog-svcs-containers]
-
-<!-- LINKS - external -->
-[free-azure-account]: https://azure.microsoft.com/free
-[git-download]: https://git-scm.com/downloads
-[azure-cli]: /cli/azure/install-azure-cli
-[docker-engine]: https://www.docker.com/products/docker-engine
-[kubernetes-cli]: https://kubernetes.io/docs/tasks/tools/install-kubectl
-[helm-install]: https://helm.sh/docs/intro/install/
-[helm-install-cmd]: https://helm.sh/docs/intro/using_helm/#helm-install-installing-a-package
-[helm-charts]: https://helm.sh/docs/topics/charts/
-[kubectl-create]: https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#create
-[kubectl-get]: https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#get
-[helm-test]: https://helm.sh/docs/helm/#helm-test
-[chart-template-guide]: https://helm.sh/docs/chart_template_guide
-
-<!-- LINKS - internal -->
-[vision-container-host-computer]: computer-vision-how-to-install-containers.md#the-host-computer
-[installing-helm-apps-in-aks]: ../../aks/kubernetes-helm.md
-[cog-svcs-containers]: ../cognitive-services-container-support.md
cognitive-services Enrollment Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Computer-vision/enrollment-overview.md
- Title: Best practices for adding users to a Face service-
-description: Learn about the process of Face enrollment to register users in a face recognition service.
------ Previously updated : 09/27/2021---
-# Best practices for adding users to a Face service
-
-In order to use the Cognitive Services Face API for face verification or identification, you need to enroll faces into a **LargePersonGroup** or similar data structure. This deep-dive demonstrates best practices for gathering meaningful consent from users and example logic to create high-quality enrollments that will optimize recognition accuracy.
-
-## Meaningful consent
-
-One of the key purposes of an enrollment application for facial recognition is to give users the opportunity to consent to the use of images of their face for specific purposes, such as access to a worksite. Because facial recognition technologies may be perceived as collecting sensitive personal data, it's especially important to ask for consent in a way that is both transparent and respectful. Consent is meaningful to users when it empowers them to make the decision that they feel is best for them.
-
-Based on Microsoft user research, Microsoft's Responsible AI principles, and [external research](ftp://ftp.cs.washington.edu/tr/2000/12/UW-CSE-00-12-02.pdf), we have found that consent is meaningful when it offers the following to users enrolling in the technology:
-
-* Awareness: Users should have no doubt when they are being asked to provide their face template or enrollment photos.
-* Understanding: Users should be able to accurately describe in their own words what they were being asked for, by whom, to what end, and with what assurances.
-* Freedom of choice: Users should not feel coerced or manipulated when choosing whether to consent and enroll in facial recognition.
-* Control: Users should be able to revoke their consent and delete their data at any time.
-
-This section offers guidance for developing an enrollment application for facial recognition. This guidance has been developed based on Microsoft user research in the context of enrolling individuals in facial recognition for building entry. Therefore, these recommendations might not apply to all facial recognition solutions. Responsible use for Face API depends strongly on the specific context in which it's integrated, so the prioritization and application of these recommendations should be adapted to your scenario.
-
-> [!NOTE]
-> It is your responsibility to align your enrollment application with applicable legal requirements in your jurisdiction and accurately reflect all of your data collection and processing practices.
-
-## Application development
-
-Before you design an enrollment flow, think about how the application you're building can uphold the promises you make to users about how their data is protected. The following recommendations can help you build an enrollment experience that includes responsible approaches to securing personal data, managing users' privacy, and ensuring that the application is accessible to all users.
-
-|Category | Recommendations |
-|||
-|Hardware | Consider the camera quality of the enrollment device. |
-|Recommended enrollment features | Include a log-on step with multi-factor authentication. </br></br>Link user information like an alias or identification number with their face template ID from the Face API (known as person ID). This mapping is necessary to retrieve and manage a user's enrollment. Note: person ID should be treated as a secret in the application.</br></br>Set up an automated process to delete all enrollment data, including the face templates and enrollment photos of people who are no longer users of facial recognition technology, such as former employees. </br></br>Avoid auto-enrollment, as it does not give the user the awareness, understanding, freedom of choice, or control that is recommended for obtaining consent. </br></br>Ask users for permission to save the images used for enrollment. This is useful when there is a model update since new enrollment photos will be required to re-enroll in the new model about every 10 months. If the original images aren't saved, users will need to go through the enrollment process from the beginning.</br></br>Allow users to opt out of storing photos in the system. To make the choice clearer, you can add a second consent request screen for saving the enrollment photos. </br></br>If photos are saved, create an automated process to re-enroll all users when there is a model update. Users who saved their enrollment photos will not have to enroll themselves again. </br></br>Create an app feature that allows designated administrators to override certain quality filters if a user has trouble enrolling. |
-|Security | Cognitive Services follow [best practices](../cognitive-services-virtual-networks.md?tabs=portal) for encrypting user data at rest and in transit. The following are other practices that can help uphold the security promises you make to users during the enrollment experience. </br></br>Take security measures to ensure that no one has access to the person ID at any point during enrollment. Note: PersonID should be treated as a secret in the enrollment system. </br></br>Use [role-based access control](../../role-based-access-control/overview.md) with Cognitive Services. </br></br>Use token-based authentication and/or shared access signatures (SAS) over keys and secrets to access resources like databases. By using request or SAS tokens, you can grant limited access to data without compromising your account keys, and you can specify an expiry time on the token. </br></br>Never store any secrets, keys, or passwords in your app. |
-|User privacy |Provide a range of enrollment options to address different levels of privacy concerns. Do not mandate that people use their personal devices to enroll into a facial recognition system. </br></br>Allow users to re-enroll, revoke consent, and delete data from the enrollment application at any time and for any reason. |
-|Accessibility |Follow accessibility standards (for example, [ADA](https://www.ada.gov/regs2010/2010ADAStandards/2010ADAstandards.htm) or [W3C](https://www.w3.org/TR/WCAG21/)) to ensure the application is usable by people with mobility or visual impairments. |
-
-## Next steps
-
-Follow the [Build an enrollment app](Tutorials/build-enrollment-app.md) guide to get started with a sample enrollment app. Then customize it or write your own app to suit the needs of your product.
cognitive-services Add Faces https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Computer-vision/how-to/add-faces.md
- Title: "Example: Add faces to a PersonGroup - Face"-
-description: This guide demonstrates how to add a large number of persons and faces to a PersonGroup object with the Azure Cognitive Services Face service.
------- Previously updated : 04/10/2019----
-# Add faces to a PersonGroup
--
-This guide demonstrates how to add a large number of persons and faces to a PersonGroup object. The same strategy also applies to LargePersonGroup, FaceList, and LargeFaceList objects. This sample is written in C# by using the Azure Cognitive Services Face .NET client library.
-
-## Step 1: Initialization
-
-The following code declares several variables and implements a helper function to schedule the face add requests:
-
-- `PersonCount` is the total number of persons.
-- `CallLimitPerSecond` is the maximum calls per second according to the subscription tier.
-- `_timeStampQueue` is a Queue to record the request timestamps.
-- `await WaitCallLimitPerSecondAsync()` waits until it's valid to send the next request.
-
-```csharp
-const int PersonCount = 10000;
-const int CallLimitPerSecond = 10;
-static Queue<DateTime> _timeStampQueue = new Queue<DateTime>(CallLimitPerSecond);
-
-static async Task WaitCallLimitPerSecondAsync()
-{
- Monitor.Enter(_timeStampQueue);
- try
- {
- if (_timeStampQueue.Count >= CallLimitPerSecond)
- {
- TimeSpan timeInterval = DateTime.UtcNow - _timeStampQueue.Peek();
- if (timeInterval < TimeSpan.FromSeconds(1))
- {
- await Task.Delay(TimeSpan.FromSeconds(1) - timeInterval);
- }
- _timeStampQueue.Dequeue();
- }
- _timeStampQueue.Enqueue(DateTime.UtcNow);
- }
- finally
- {
- Monitor.Exit(_timeStampQueue);
- }
-}
-```
-
-## Step 2: Authorize the API call
-
-When you use the Face client library, the key and subscription endpoint are passed in through the constructor of the FaceClient class. See the [quickstart](/azure/cognitive-services/computer-vision/quickstarts-sdk/identity-client-library?pivots=programming-language-csharp&tabs=visual-studio) for instructions on creating a Face client object.
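-
-As a quick illustration, the following C# sketch shows one way to construct the client. It mirrors the FaceClient constructor pattern used in the video-analysis sample elsewhere in these articles; the key and endpoint values are placeholders.
-
-```csharp
-// Minimal sketch of constructing an authenticated client; this mirrors the FaceClient constructor
-// pattern used in the video-analysis sample elsewhere in these articles. Key and endpoint are placeholders.
-using Microsoft.Azure.CognitiveServices.Vision.Face;
-
-const string ApiKey = "<your-face-key>";                                        // assumption
-const string Endpoint = "https://<your-resource>.cognitiveservices.azure.com";  // assumption
-
-IFaceClient faceClient = new FaceClient(new ApiKeyServiceClientCredentials(ApiKey))
-{
-    Endpoint = Endpoint
-};
-```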
--
-## Step 3: Create the PersonGroup
-
-A PersonGroup named "MyPersonGroup" is created to save the persons.
-The request time is enqueued to `_timeStampQueue` to ensure the overall validation.
-
-```csharp
-const string personGroupId = "mypersongroupid";
-const string personGroupName = "MyPersonGroup";
-_timeStampQueue.Enqueue(DateTime.UtcNow);
-await faceClient.LargePersonGroup.CreateAsync(personGroupId, personGroupName);
-```
-
-## Step 4: Create the persons for the PersonGroup
-
-Persons are created concurrently, and `await WaitCallLimitPerSecondAsync()` is also applied to avoid exceeding the call limit.
-
-```csharp
-Person[] persons = new Person[PersonCount];
-Parallel.For(0, PersonCount, async i =>
-{
- await WaitCallLimitPerSecondAsync();
-
- string personName = $"PersonName#{i}";
- persons[i] = await faceClient.PersonGroupPerson.CreateAsync(personGroupId, personName);
-});
-```
-
-## Step 5: Add faces to the persons
-
-Faces added to different persons are processed concurrently. Faces added for one specific person are processed sequentially.
-Again, `await WaitCallLimitPerSecondAsync()` is invoked to ensure that the request frequency is within the scope of limitation.
-
-```csharp
-Parallel.For(0, PersonCount, async i =>
-{
- Guid personId = persons[i].PersonId;
- string personImageDir = @"/path/to/person/i/images";
-
- foreach (string imagePath in Directory.GetFiles(personImageDir, "*.jpg"))
- {
- await WaitCallLimitPerSecondAsync();
-
- using (Stream stream = File.OpenRead(imagePath))
- {
- await faceClient.PersonGroupPerson.AddFaceFromStreamAsync(personGroupId, personId, stream);
- }
- }
-});
-```
-
-## Summary
-
-In this guide, you learned the process of creating a PersonGroup with a massive number of persons and faces. Several reminders:
-
-- This strategy also applies to FaceLists and LargePersonGroups.
-- Adding or deleting faces to different FaceLists or persons in LargePersonGroups is processed concurrently.
-- Adding or deleting faces to one specific FaceList or person in a LargePersonGroup is done sequentially.
-- For simplicity, this guide omits exception handling. For more robustness, apply a proper retry policy.
-
-The following features were explained and demonstrated:
-
-- Create PersonGroups by using the [PersonGroup - Create](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f30395244) API.
-- Create persons by using the [PersonGroup Person - Create](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f3039523c) API.
-- Add faces to persons by using the [PersonGroup Person - Add Face](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f3039523b) API.
-
-## Next steps
-
-In this guide, you learned how to add face data to a **PersonGroup**. Next, learn how to use the enhanced data structure **PersonDirectory** to do more with your face data.
-
-- [Use the PersonDirectory structure (preview)](use-persondirectory.md)
cognitive-services Analyze Video https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Computer-vision/how-to/analyze-video.md
- Title: Analyze videos in near real time - Computer Vision-
-description: Learn how to perform near real-time analysis on frames that are taken from a live video stream by using the Computer Vision API.
------ Previously updated : 07/05/2022---
-# Analyze videos in near real time
-
-This article demonstrates how to perform near real-time analysis on frames that are taken from a live video stream by using the Computer Vision API. The basic elements of such an analysis are:
-
-- Acquiring frames from a video source.
-- Selecting which frames to analyze.
-- Submitting these frames to the API.
-- Consuming each analysis result that's returned from the API call.
-
-The samples in this article are written in C#. To access the code, go to the [Video frame analysis sample](https://github.com/Microsoft/Cognitive-Samples-VideoFrameAnalysis/) page on GitHub.
-
-## Approaches to running near real-time analysis
-
-You can solve the problem of running near real-time analysis on video streams by using a variety of approaches. This article outlines three of them, in increasing levels of sophistication.
-
-### Design an infinite loop
-
-The simplest design for near real-time analysis is an infinite loop. In each iteration of this loop, you grab a frame, analyze it, and then consume the result:
-
-```csharp
-while (true)
-{
- Frame f = GrabFrame();
- if (ShouldAnalyze(f))
- {
- AnalysisResult r = await Analyze(f);
- ConsumeResult(r);
- }
-}
-```
-
-If your analysis were to consist of a lightweight, client-side algorithm, this approach would be suitable. However, when the analysis occurs in the cloud, the resulting latency means that an API call might take several seconds. During this time, you're not capturing images, and your thread is essentially doing nothing. Your maximum frame rate is limited by the latency of the API calls.
-
-### Allow the API calls to run in parallel
-
-Although a simple, single-threaded loop makes sense for a lightweight, client-side algorithm, it doesn't fit well with the latency of a cloud API call. The solution to this problem is to allow the long-running API call to run in parallel with the frame-grabbing. In C#, you could do this by using task-based parallelism. For example, you can run the following code:
-
-```csharp
-while (true)
-{
- Frame f = GrabFrame();
- if (ShouldAnalyze(f))
- {
- var t = Task.Run(async () =>
- {
- AnalysisResult r = await Analyze(f);
- ConsumeResult(r);
-    });
- }
-}
-```
-
-With this approach, you launch each analysis in a separate task. The task can run in the background while you continue grabbing new frames. The approach avoids blocking the main thread as you wait for an API call to return. However, the approach can present certain disadvantages:
-* It costs you some of the guarantees that the simple version provided. That is, multiple API calls might occur in parallel, and the results might get returned in the wrong order.
-* It could also cause multiple threads to enter the ConsumeResult() function simultaneously, which might be dangerous if the function isn't thread-safe.
-* Finally, this simple code doesn't keep track of the tasks that get created, so exceptions silently disappear. Thus, you need to add a "consumer" thread that tracks the analysis tasks, raises exceptions, kills long-running tasks, and ensures that the results get consumed in the correct order, one at a time.
-
-### Design a producer-consumer system
-
-For your final approach, designing a "producer-consumer" system, you build a producer thread that looks similar to your previously mentioned infinite loop. However, instead of consuming the analysis results as soon as they're available, the producer simply places the tasks in a queue to keep track of them.
-
-```csharp
-// Queue that will contain the API call tasks.
-var taskQueue = new BlockingCollection<Task<ResultWrapper>>();
-
-// Producer thread.
-while (true)
-{
- // Grab a frame.
- Frame f = GrabFrame();
-
- // Decide whether to analyze the frame.
- if (ShouldAnalyze(f))
- {
- // Start a task that will run in parallel with this thread.
- var analysisTask = Task.Run(async () =>
- {
- // Put the frame, and the result/exception into a wrapper object.
- var output = new ResultWrapper(f);
- try
- {
- output.Analysis = await Analyze(f);
- }
- catch (Exception e)
- {
- output.Exception = e;
- }
- return output;
-        });
-
- // Push the task onto the queue.
- taskQueue.Add(analysisTask);
- }
-}
-```
-
-You also create a consumer thread, which takes tasks off the queue, waits for them to finish, and either displays the result or raises the exception that was thrown. By using the queue, you can guarantee that the results get consumed one at a time, in the correct order, without limiting the maximum frame rate of the system.
-
-```csharp
-// Consumer thread.
-while (true)
-{
- // Get the oldest task.
- Task<ResultWrapper> analysisTask = taskQueue.Take();
-
- // Wait until the task is completed.
- var output = await analysisTask;
-
- // Consume the exception or result.
- if (output.Exception != null)
- {
- throw output.Exception;
- }
- else
- {
- ConsumeResult(output.Analysis);
- }
-}
-```
-
-## Implement the solution
-
-### Get sample code
-
-To help get your app up and running as quickly as possible, we've implemented the system that's described in the preceding section. It's intended to be flexible enough to accommodate many scenarios, while being easy to use. To access the code, go to the [Video frame analysis sample](https://github.com/Microsoft/Cognitive-Samples-VideoFrameAnalysis/) page on GitHub.
-
-The library contains the `FrameGrabber` class, which implements the previously discussed producer-consumer system to process video frames from a webcam. Users can specify the exact form of the API call, and the class uses events to let the calling code know when a new frame is acquired, or when a new analysis result is available.
-
-To illustrate some of the possibilities, we've provided two sample apps that use the library.
-
-The first sample app is a simple console app that grabs frames from the default webcam and then submits them to the Face service for face detection. A simplified version of the app is reproduced in the following code:
-
-```csharp
-using System;
-using System.Linq;
-using Microsoft.Azure.CognitiveServices.Vision.Face;
-using Microsoft.Azure.CognitiveServices.Vision.Face.Models;
-using VideoFrameAnalyzer;
-
-namespace BasicConsoleSample
-{
- internal class Program
- {
- const string ApiKey = "<your API key>";
- const string Endpoint = "https://<your API region>.api.cognitive.microsoft.com";
-
- private static async Task Main(string[] args)
- {
- // Create grabber.
- FrameGrabber<DetectedFace[]> grabber = new FrameGrabber<DetectedFace[]>();
-
- // Create Face Client.
- FaceClient faceClient = new FaceClient(new ApiKeyServiceClientCredentials(ApiKey))
- {
- Endpoint = Endpoint
- };
-
- // Set up a listener for when we acquire a new frame.
- grabber.NewFrameProvided += (s, e) =>
- {
- Console.WriteLine($"New frame acquired at {e.Frame.Metadata.Timestamp}");
- };
-
- // Set up a Face API call.
- grabber.AnalysisFunction = async frame =>
- {
- Console.WriteLine($"Submitting frame acquired at {frame.Metadata.Timestamp}");
- // Encode image and submit to Face service.
- return (await faceClient.Face.DetectWithStreamAsync(frame.Image.ToMemoryStream(".jpg"))).ToArray();
- };
-
- // Set up a listener for when we receive a new result from an API call.
- grabber.NewResultAvailable += (s, e) =>
- {
- if (e.TimedOut)
- Console.WriteLine("API call timed out.");
- else if (e.Exception != null)
- Console.WriteLine("API call threw an exception.");
- else
- Console.WriteLine($"New result received for frame acquired at {e.Frame.Metadata.Timestamp}. {e.Analysis.Length} faces detected");
- };
-
- // Tell grabber when to call the API.
- // See also TriggerAnalysisOnPredicate
- grabber.TriggerAnalysisOnInterval(TimeSpan.FromMilliseconds(3000));
-
- // Start running in the background.
- await grabber.StartProcessingCameraAsync();
-
- // Wait for key press to stop.
- Console.WriteLine("Press any key to stop...");
- Console.ReadKey();
-
- // Stop, blocking until done.
- await grabber.StopProcessingAsync();
- }
- }
-}
-```
-
-The second sample app is a bit more interesting. It allows you to choose which API to call on the video frames. On the left side, the app shows a preview of the live video. On the right, it overlays the most recent API result on the corresponding frame.
-
-In most modes, there's a visible delay between the live video on the left and the visualized analysis on the right. This delay is the time that it takes to make the API call. An exception is in the "EmotionsWithClientFaceDetect" mode, which performs face detection locally on the client computer by using OpenCV before it submits any images to Azure Cognitive Services.
-
-By using this approach, you can visualize the detected face immediately. You can then update the emotions later, after the API call returns. This demonstrates the possibility of a "hybrid" approach. That is, some simple processing can be performed on the client, and then Cognitive Services APIs can be used to augment this processing with more advanced analysis when necessary.
-
-![The LiveCameraSample app displaying an image with tags](../../Video/Images/FramebyFrame.jpg)
-
-### Integrate samples into your codebase
-
-To get started with this sample, do the following:
-
-1. Create an [Azure account](https://azure.microsoft.com/free/cognitive-services/). If you already have one, you can skip to the next step.
-2. Create resources for Computer Vision and Face in the Azure portal to get your key and endpoint. Make sure to select the free tier (F0) during setup.
- - [Computer Vision](https://portal.azure.com/#create/Microsoft.CognitiveServicesComputerVision)
- - [Face](https://portal.azure.com/#create/Microsoft.CognitiveServicesFace)
- After the resources are deployed, click **Go to resource** to collect your key and endpoint for each resource.
-3. Clone the [Cognitive-Samples-VideoFrameAnalysis](https://github.com/Microsoft/Cognitive-Samples-VideoFrameAnalysis/) GitHub repo.
-4. Open the sample in Visual Studio 2015 or later, and then build and run the sample applications:
- - For BasicConsoleSample, the Face key is hard-coded directly in [BasicConsoleSample/Program.cs](https://github.com/Microsoft/Cognitive-Samples-VideoFrameAnalysis/blob/master/Windows/BasicConsoleSample/Program.cs).
- - For LiveCameraSample, enter the keys in the **Settings** pane of the app. The keys are persisted across sessions as user data.
-
-When you're ready to integrate the samples, reference the VideoFrameAnalyzer library from your own projects.
-
-The image-, voice-, video-, and text-understanding capabilities of VideoFrameAnalyzer use Azure Cognitive Services. Microsoft receives the images, audio, video, and other data that you upload (via this app) and might use them for service-improvement purposes. We ask for your help in protecting the people whose data your app sends to Azure Cognitive Services.
-
-## Next steps
-
-In this article, you learned how to run near real-time analysis on live video streams by using the Face and Computer Vision services. You also learned how you can use our sample code to get started.
-
-Feel free to provide feedback and suggestions in the [GitHub repository](https://github.com/Microsoft/Cognitive-Samples-VideoFrameAnalysis/). To provide broader API feedback, go to our [UserVoice](https://feedback.azure.com/d365community/forum/09041fae-0b25-ec11-b6e6-000d3a4f0858) site.
--- [Call the Image Analysis API (how to)](call-analyze-image.md)
cognitive-services Background Removal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Computer-vision/how-to/background-removal.md
- Title: Remove the background in images-
-description: Learn how to call the Segment API to isolate and remove the background from images.
------ Previously updated : 03/03/2023---
-# Remove the background from images
-
-This article demonstrates how to call the Image Analysis 4.0 API to segment an image. It also shows you how to parse the returned information.
-
-## Prerequisites
-
-This guide assumes you have successfully followed the steps mentioned in the [quickstart](../quickstarts-sdk/image-analysis-client-library-40.md) page. This means:
-
-* You have <a href="https://portal.azure.com/#create/Microsoft.CognitiveServicesComputerVision" title="created a Computer Vision resource" target="_blank">created a Computer Vision resource </a> and obtained a key and endpoint URL.
-* If you're using the client SDK, you have the appropriate SDK package installed and you have a running quickstart application. You modify this quickstart application based on code examples here.
-* If you're using 4.0 REST API calls directly, you have successfully made a `curl.exe` call to the service (or used an alternative tool). You modify the `curl.exe` call based on the examples here.
-
-The quickstart shows you how to extract visual features from an image; however, the concepts are similar to background removal, so you benefit from starting with the quickstart and modifying it.
-
-> [!IMPORTANT]
-> Background removal is only available in the following Azure regions: East US, France Central, Korea Central, North Europe, Southeast Asia, West Europe, West US.
-
-## Authenticate against the service
-
-To authenticate against the Image Analysis service, you need a Computer Vision key and endpoint URL.
-
-> [!TIP]
-> Don't include the key directly in your code, and never post it publicly. See the Cognitive Services [security](/azure/cognitive-services/security-features) article for more authentication options like [Azure Key Vault](/azure/cognitive-services/use-key-vault).
-
-The SDK example assumes that you defined the environment variables `VISION_KEY` and `VISION_ENDPOINT` with your key and endpoint.
-
-#### [C#](#tab/csharp)
-
-Start by creating a [VisionServiceOptions](/dotnet/api/azure.ai.vision.core.options.visionserviceoptions) object using one of the constructors. For example:
-
-[!code-csharp[](~/azure-ai-vision-sdk/docs/learn.microsoft.com/csharp/image-analysis/1/Program.cs?name=vision_service_options)]
-
-#### [Python](#tab/python)
-
-Start by creating a [VisionServiceOptions](/python/api/azure-ai-vision/azure.ai.vision.visionserviceoptions) object using one of the constructors. For example:
-
-[!code-python[](~/azure-ai-vision-sdk/docs/learn.microsoft.com/python/image-analysis/1/main.py?name=vision_service_options)]
-
-#### [C++](#tab/cpp)
-
-At the start of your code, use one of the static constructor methods [VisionServiceOptions::FromEndpoint](/cpp/cognitive-services/vision/service-visionserviceoptions#fromendpoint-1) to create a *VisionServiceOptions* object. For example:
-
-[!code-cpp[](~/azure-ai-vision-sdk/docs/learn.microsoft.com/cpp/image-analysis/1/1.cpp?name=vision_service_options)]
-
-Where we used this helper function to read the value of an environment variable:
-
-[!code-cpp[](~/azure-ai-vision-sdk/docs/learn.microsoft.com/cpp/image-analysis/1/1.cpp?name=get_env_var)]
-
-#### [REST API](#tab/rest)
-
-Authentication is done by adding the HTTP request header **Ocp-Apim-Subscription-Key** and setting it to your vision key. The call is made to the URL `https://<endpoint>/computervision/imageanalysis:segment?api-version=2023-02-01-preview`, where `<endpoint>` is your unique computer vision endpoint URL. See the [Select a mode](./background-removal.md#select-a-mode) section for another query string that you add to this URL.
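-
-As a rough illustration only (not part of the quickstart code), the following C# sketch configures an `HttpClient` this way; it assumes the `VISION_ENDPOINT` and `VISION_KEY` environment variables described earlier:
-
-```csharp
-using System;
-using System.Net.Http;
-
-// Minimal sketch: authenticate REST calls by adding the Ocp-Apim-Subscription-Key header.
-string endpoint = Environment.GetEnvironmentVariable("VISION_ENDPOINT");
-string key = Environment.GetEnvironmentVariable("VISION_KEY");
-
-using var client = new HttpClient();
-client.DefaultRequestHeaders.Add("Ocp-Apim-Subscription-Key", key);
-
-// Base URL for the segment operation; a mode query string is appended later.
-string segmentUrl = $"{endpoint}/computervision/imageanalysis:segment?api-version=2023-02-01-preview";
-```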
---
-## Select the image to analyze
-
-The code in this guide uses remote images referenced by URL. You may want to try different images on your own to see the full capability of the Image Analysis features.
-
-#### [C#](#tab/csharp)
-
-Create a new **VisionSource** object from the URL of the image you want to analyze, using the static constructor [VisionSource.FromUrl](/dotnet/api/azure.ai.vision.core.input.visionsource.fromurl).
-
-**VisionSource** implements **IDisposable**; therefore, create the object with a **using** statement, or explicitly call the **Dispose** method after analysis completes.
-
-[!code-csharp[](~/azure-ai-vision-sdk/docs/learn.microsoft.com/csharp/image-analysis/1/Program.cs?name=vision_source)]
-
-> [!TIP]
-> You can also analyze a local image by passing in the full-path image file name. See [VisionSource.FromFile](/dotnet/api/azure.ai.vision.core.input.visionsource.fromfile).
-
-#### [Python](#tab/python)
-
-In your script, create a new [VisionSource](/python/api/azure-ai-vision/azure.ai.vision.visionsource) object from the URL of the image you want to analyze.
-
-[!code-python[](~/azure-ai-vision-sdk/docs/learn.microsoft.com/python/image-analysis/1/main.py?name=vision_source)]
-
-> [!TIP]
-> You can also analyze a local image by passing in the full-path image file name to the **VisionSource** constructor instead of the image URL.
-
-#### [C++](#tab/cpp)
-
-Create a new **VisionSource** object from the URL of the image you want to analyze, using the static constructor [VisionSource::FromUrl](/cpp/cognitive-services/vision/input-visionsource#fromurl).
-
-[!code-cpp[](~/azure-ai-vision-sdk/docs/learn.microsoft.com/cpp/image-analysis/1/1.cpp?name=vision_source)]
-
-> [!TIP]
-> You can also analyze a local image by passing in the full-path image file name. See [VisionSource::FromFile](/cpp/cognitive-services/vision/input-visionsource#fromfile).
-
-#### [REST API](#tab/rest)
-
-When analyzing a remote image, you specify the image's URL by formatting the request body like this: `{"url":"https://learn.microsoft.com/azure/cognitive-services/computer-vision/images/windows-kitchen.jpg"}`. The **Content-Type** should be `application/json`.
-
-To analyze a local image, you'd put the binary image data in the HTTP request body. The **Content-Type** should be `application/octet-stream` or `multipart/form-data`.
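-
-For instance, a hypothetical C# sketch (the local file name is a placeholder, and the environment variables are the ones described earlier) that sends a local image as binary data could look like this:
-
-```csharp
-using System;
-using System.IO;
-using System.Net.Http;
-using System.Net.Http.Headers;
-
-// Sketch: send a local image as the request body with Content-Type application/octet-stream.
-string endpoint = Environment.GetEnvironmentVariable("VISION_ENDPOINT");
-string key = Environment.GetEnvironmentVariable("VISION_KEY");
-// The mode query string described in the next section still needs to be appended.
-string url = $"{endpoint}/computervision/imageanalysis:segment?api-version=2023-02-01-preview";
-
-using var client = new HttpClient();
-client.DefaultRequestHeaders.Add("Ocp-Apim-Subscription-Key", key);
-
-// "sample.jpg" is a placeholder path to a local image file.
-byte[] imageBytes = await File.ReadAllBytesAsync("sample.jpg");
-using var content = new ByteArrayContent(imageBytes);
-content.Headers.ContentType = new MediaTypeHeaderValue("application/octet-stream");
-
-HttpResponseMessage response = await client.PostAsync(url, content);
-Console.WriteLine($"Status: {(int)response.StatusCode}");
-```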
---
-## Select a mode
-
-### [C#](#tab/csharp)
-
-<!-- TODO: After C# ref-docs get published, add link to SegmentationMode (/dotnet/api/azure.ai.vision.imageanalysis.imageanalysisoptions.segmentationmode) & ImageSegmentationMode (/dotnet/api/azure.ai.vision.imageanalysis.imagesegmentationmode) -->
-
-Create a new [ImageAnalysisOptions](/dotnet/api/azure.ai.vision.imageanalysis.imageanalysisoptions) object and set the property `SegmentationMode`. This property must be set if you want to do segmentation. See `ImageSegmentationMode` for supported values.
-
-[!code-csharp[](~/azure-ai-vision-sdk/docs/learn.microsoft.com/csharp/image-analysis/segmentation/Program.cs?name=segmentation_mode)]
-
-### [Python](#tab/python)
-
-<!-- TODO: Where Python ref-docs get published, add link to SegmentationMode (/python/api/azure-ai-vision/azure.ai.vision.imageanalysisoptions#azure-ai-vision-imageanalysisoptions-segmentation-mode) & ImageSegmentationMode (/python/api/azure-ai-vision/azure.ai.vision.enums.imagesegmentationmode)> -->
-
-Create a new [ImageAnalysisOptions](/python/api/azure-ai-vision/azure.ai.vision.imageanalysisoptions) object and set the property `segmentation_mode`. This property must be set if you want to do segmentation. See `ImageSegmentationMode` for supported values.
-
-[!code-python[](~/azure-ai-vision-sdk/docs/learn.microsoft.com/python/image-analysis/segmentation/main.py?name=segmentation_mode)]
-
-### [C++](#tab/cpp)
-
-Create a new [ImageAnalysisOptions](/cpp/cognitive-services/vision/imageanalysis-imageanalysisoptions) object and call the [SetSegmentationMode](/cpp/cognitive-services/vision/imageanalysis-imageanalysisoptions#setsegmentationmode) method. You must call this method if you want to do segmentation. See [ImageSegmentationMode](/cpp/cognitive-services/vision/azure-ai-vision-imageanalysis-namespace#enum-imagesegmentationmode) for supported values.
-
-[!code-cpp[](~/azure-ai-vision-sdk/docs/learn.microsoft.com/cpp/image-analysis/segmentation/segmentation.cpp?name=segmentation_mode)]
-
-### [REST](#tab/rest)
-
-Set the query string **mode** to one of these two values. This query string is mandatory if you want to do image segmentation.
-
-|URL parameter | Value |Description |
-|---|---|---|
-| `mode` | `backgroundRemoval` | Outputs an image of the detected foreground object with a transparent background. |
-| `mode` | `foregroundMatting` | Outputs a gray-scale alpha matte image showing the opacity of the detected foreground object. |
-
-A populated URL for backgroundRemoval would look like this: `https://<endpoint>/computervision/imageanalysis:segment?api-version=2023-02-01-preview&mode=backgroundRemoval`
---
-## Get results from the service
-
-This section shows you how to make the API call and parse the results.
-
-#### [C#](#tab/csharp)
-
-The following code calls the Image Analysis API and saves the resulting segmented image to a file named **output.png**. It also displays some metadata about the segmented image.
-
-**SegmentationResult** implements **IDisposable**; therefore, create the object with a **using** statement, or explicitly call the **Dispose** method after analysis completes.
-
-[!code-csharp[](~/azure-ai-vision-sdk/docs/learn.microsoft.com/csharp/image-analysis/segmentation/Program.cs?name=segment)]
-
-#### [Python](#tab/python)
-
-The following code calls the Image Analysis API and saves the resulting segmented image to a file named **output.png**. It also displays some metadata about the segmented image.
-
-[!code-python[](~/azure-ai-vision-sdk/docs/learn.microsoft.com/python/image-analysis/segmentation/main.py?name=segment)]
-
-#### [C++](#tab/cpp)
-
-The following code calls the Image Analysis API and saves the resulting segmented image to a file named **output.png**. It also displays some metadata about the segmented image.
-
-[!code-cpp[](~/azure-ai-vision-sdk/docs/learn.microsoft.com/cpp/image-analysis/segmentation/segmentation.cpp?name=segment)]
-
-#### [REST](#tab/rest)
-
-The service returns a `200` HTTP response on success with `Content-Type: image/png`, and the body contains the returned PNG image in the form of a binary stream.
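-
-Putting the pieces together, a rough end-to-end C# sketch (again assuming the `VISION_ENDPOINT` and `VISION_KEY` environment variables, and reusing the sample image URL shown earlier in this guide) could call the endpoint and save the returned PNG like this:
-
-```csharp
-using System;
-using System.IO;
-using System.Net.Http;
-using System.Text;
-
-// Sketch: background removal through the REST API, saving the returned PNG.
-string endpoint = Environment.GetEnvironmentVariable("VISION_ENDPOINT");
-string key = Environment.GetEnvironmentVariable("VISION_KEY");
-string url = $"{endpoint}/computervision/imageanalysis:segment?api-version=2023-02-01-preview&mode=backgroundRemoval";
-
-using var client = new HttpClient();
-client.DefaultRequestHeaders.Add("Ocp-Apim-Subscription-Key", key);
-
-// JSON body that points the service at a remote image.
-var body = new StringContent(
-    "{\"url\":\"https://learn.microsoft.com/azure/cognitive-services/computer-vision/images/windows-kitchen.jpg\"}",
-    Encoding.UTF8, "application/json");
-
-HttpResponseMessage response = await client.PostAsync(url, body);
-response.EnsureSuccessStatusCode();
-
-// The response body is the segmented image as a binary PNG stream.
-byte[] png = await response.Content.ReadAsByteArrayAsync();
-await File.WriteAllBytesAsync("output.png", png);
-Console.WriteLine($"Saved output.png ({png.Length} bytes)");
-```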
---
-As an example, assume background removal is run on the following image:
--
-On a successful background removal call, the following four-channel PNG image is the response for the `backgroundRemoval` mode:
--
-The following one-channel PNG image is the response for the `foregroundMatting` mode:
--
-The API returns an image the same size as the original for the `foregroundMatting` mode, but at most 16 megapixels (preserving image aspect ratio) for the `backgroundRemoval` mode.
--
-## Error codes
----
-## Next steps
-
-[Background removal concepts](../concept-background-removal.md)
--
cognitive-services Blob Storage Search https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Computer-vision/how-to/blob-storage-search.md
- Title: Configure your blob storage for image retrieval and video search in Vision Studio-
-description: To get started with the **Search photos with natural language** or with **Video summary and frame locator** in Vision Studio, you will need to select or create a new storage account.
------- Previously updated : 03/06/2023----
-# Configure your blob storage for image retrieval and video search in Vision Studio
-
-To get started with the **Search photos with natural language** or **Video summary and frame locator** scenario in Vision Studio, you need to select or create a new Azure storage account. Your storage account can be in any region, but creating it in the same region as your Azure Computer Vision resource is more efficient and reduces cost.
-
-> [!IMPORTANT]
-> You need to create your storage account on the same Azure subscription as the Computer Vision resource you're using in the **Search photos with natural language** or **Video summary and frame locator** scenarios as shown below.
--
-## Create a new storage account
-
-To get started, <a href="https://ms.portal.azure.com/#create/Microsoft.StorageAccount" title="create a new storage account" target="_blank">create a new storage account</a>.
---
-Fill in the required parameters to configure your storage account, then select `Review` and `Create`.
-
-Once your storage account has been deployed, select `Go to resource` to open the storage account overview.
---
-## Configure CORS rule on the storage account
-
-In your storage account overview, find the **Settings** section in the left-hand navigation and select `Resource sharing (CORS)`, shown below.
---
-Create a CORS rule by setting the **Allowed Origins** field to `https://portal.vision.cognitive.azure.com`.
-
-In the **Allowed Methods** field, select the `GET` checkbox to allow an authenticated request from a different domain. In the **Max age** field, enter the value `9999`, and click `Save`.
-
-[Learn more about CORS support for Azure Storage](/rest/api/storageservices/cross-origin-resource-sharing--cors--support-for-the-azure-storage-services).
----
-This will allow Vision Studio to access images and videos in your blob storage container to extract insights on your data.
-
-## Upload images and videos in Vision Studio
-
-In the **Try with your own video** or **Try with your own image** section in Vision Studio, select the storage account that you configured with the CORS rule. Select the container in which your images or videos are stored. If you don't have a container, you can create one and upload the images or videos from your local device. If you have updated the CORS rules on the storage account, refresh the **Blob container** or **Video files on container** sections.
--------
cognitive-services Call Analyze Image 40 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Computer-vision/how-to/call-analyze-image-40.md
- Title: Call the Image Analysis 4.0 Analyze API-
-description: Learn how to call the Image Analysis 4.0 API and configure its behavior.
------ Previously updated : 01/24/2023---
-# Call the Image Analysis 4.0 Analyze API (preview)
-
-This article demonstrates how to call the Image Analysis 4.0 API to return information about an image's visual features. It also shows you how to parse the returned information.
-
-## Prerequisites
-
-This guide assumes you have successfully followed the steps mentioned in the [quickstart](../quickstarts-sdk/image-analysis-client-library-40.md) page. This means:
-
-* You have <a href="https://portal.azure.com/#create/Microsoft.CognitiveServicesComputerVision" title="created a Computer Vision resource" target="_blank">created a Computer Vision resource </a> and obtained a key and endpoint URL.
-* If you're using the client SDK, you have the appropriate SDK package installed and you have a running [quickstart](../quickstarts-sdk/image-analysis-client-library-40.md) application. You can modify this quickstart application based on code examples here.
-* If you're using 4.0 REST API calls directly, you have successfully made a `curl.exe` call to the service (or used an alternative tool). You modify the `curl.exe` call based on the examples here.
-
-## Authenticate against the service
-
-To authenticate against the Image Analysis service, you need a Computer Vision key and endpoint URL.
-
-> [!TIP]
-> Don't include the key directly in your code, and never post it publicly. See the Cognitive Services [security](/azure/cognitive-services/security-features) article for more authentication options like [Azure Key Vault](/azure/cognitive-services/use-key-vault).
-
-The SDK example assumes that you defined the environment variables `VISION_KEY` and `VISION_ENDPOINT` with your key and endpoint.
-
-#### [C#](#tab/csharp)
-
-Start by creating a [VisionServiceOptions](/dotnet/api/azure.ai.vision.core.options.visionserviceoptions) object using one of the constructors. For example:
-
-[!code-csharp[](~/azure-ai-vision-sdk/docs/learn.microsoft.com/csharp/image-analysis/1/Program.cs?name=vision_service_options)]
-
-#### [Python](#tab/python)
-
-Start by creating a [VisionServiceOptions](/python/api/azure-ai-vision/azure.ai.vision.visionserviceoptions) object using one of the constructors. For example:
-
-[!code-python[](~/azure-ai-vision-sdk/docs/learn.microsoft.com/python/image-analysis/1/main.py?name=vision_service_options)]
-
-#### [C++](#tab/cpp)
-
-At the start of your code, use one of the static constructor methods [VisionServiceOptions::FromEndpoint](/cpp/cognitive-services/vision/service-visionserviceoptions#fromendpoint-1) to create a *VisionServiceOptions* object. For example:
-
-[!code-cpp[](~/azure-ai-vision-sdk/docs/learn.microsoft.com/cpp/image-analysis/1/1.cpp?name=vision_service_options)]
-
-Where we used this helper function to read the value of an environment variable:
-
-[!code-cpp[](~/azure-ai-vision-sdk/docs/learn.microsoft.com/cpp/image-analysis/1/1.cpp?name=get_env_var)]
-
-#### [REST API](#tab/rest)
-
-Authentication is done by adding the HTTP request header **Ocp-Apim-Subscription-Key** and setting it to your vision key. The call is made to the URL `https://<endpoint>/computervision/imageanalysis:analyze?api-version=2023-02-01-preview`, where `<endpoint>` is your unique computer vision endpoint URL. You add query strings based on your analysis options.
---
-## Select the image to analyze
-
-The code in this guide uses remote images referenced by URL. You may want to try different images on your own to see the full capability of the Image Analysis features.
-
-#### [C#](#tab/csharp)
-
-Create a new **VisionSource** object from the URL of the image you want to analyze, using the static constructor [VisionSource.FromUrl](/dotnet/api/azure.ai.vision.core.input.visionsource.fromurl).
-
-**VisionSource** implements **IDisposable**; therefore, create the object with a **using** statement, or explicitly call the **Dispose** method after analysis completes.
-
-[!code-csharp[](~/azure-ai-vision-sdk/docs/learn.microsoft.com/csharp/image-analysis/1/Program.cs?name=vision_source)]
-
-> [!TIP]
-> You can also analyze a local image by passing in the full-path image file name. See [VisionSource.FromFile](/dotnet/api/azure.ai.vision.core.input.visionsource.fromfile).
-
-#### [Python](#tab/python)
-
-In your script, create a new [VisionSource](/python/api/azure-ai-vision/azure.ai.vision.visionsource) object from the URL of the image you want to analyze.
-
-[!code-python[](~/azure-ai-vision-sdk/docs/learn.microsoft.com/python/image-analysis/1/main.py?name=vision_source)]
-
-> [!TIP]
-> You can also analyze a local image by passing in the full-path image file name to the **VisionSource** constructor instead of the image URL.
-
-#### [C++](#tab/cpp)
-
-Create a new **VisionSource** object from the URL of the image you want to analyze, using the static constructor [VisionSource::FromUrl](/cpp/cognitive-services/vision/input-visionsource#fromurl).
-
-[!code-cpp[](~/azure-ai-vision-sdk/docs/learn.microsoft.com/cpp/image-analysis/1/1.cpp?name=vision_source)]
-
-> [!TIP]
-> You can also analyze a local image by passing in the full-path image file name. See [VisionSource::FromFile](/cpp/cognitive-services/vision/input-visionsource#fromfile).
-
-#### [REST API](#tab/rest)
-
-When analyzing a remote image, you specify the image's URL by formatting the request body like this: `{"url":"https://learn.microsoft.com/azure/cognitive-services/computer-vision/images/windows-kitchen.jpg"}`. The **Content-Type** should be `application/json`.
-
-To analyze a local image, you'd put the binary image data in the HTTP request body. The **Content-Type** should be `application/octet-stream` or `multipart/form-data`.
---
-## Select analysis options
-
-### Select visual features when using the standard model
-
-The Analysis 4.0 API gives you access to all of the service's image analysis features. Choose which operations to do based on your own use case. See the [overview](../overview.md) for a description of each feature. The example in this section adds all of the available visual features, but for practical usage you likely need fewer.
-
-Visual features 'Captions' and 'DenseCaptions' are only supported in the following Azure regions: East US, France Central, Korea Central, North Europe, Southeast Asia, West Europe, West US.
-
-> [!NOTE]
-> The REST API uses the terms **Smart Crops** and **Smart Crops Aspect Ratios**. The SDK uses the terms **Crop Suggestions** and **Cropping Aspect Ratios**. They both refer to the same service operation. Similarly, the REST API uses the term **Read** for detecting text in the image, whereas the SDK uses the term **Text** for the same operation.
-
-#### [C#](#tab/csharp)
-
-Create a new [ImageAnalysisOptions](/dotnet/api/azure.ai.vision.imageanalysis.imageanalysisoptions) object and specify the visual features you'd like to extract, by setting the [Features](/dotnet/api/azure.ai.vision.imageanalysis.imageanalysisoptions.features#azure-ai-vision-imageanalysis-imageanalysisoptions-features) property. [ImageAnalysisFeature](/dotnet/api/azure.ai.vision.imageanalysis.imageanalysisfeature) enum defines the supported values.
-
-[!code-csharp[](~/azure-ai-vision-sdk/docs/learn.microsoft.com/csharp/image-analysis/1/Program.cs?name=visual_features)]
-
-#### [Python](#tab/python)
-
-Create a new [ImageAnalysisOptions](/python/api/azure-ai-vision/azure.ai.vision.imageanalysisoptions) object and specify the visual features you'd like to extract, by setting the [features](/python/api/azure-ai-vision/azure.ai.vision.imageanalysisoptions#azure-ai-vision-imageanalysisoptions-features) property. [ImageAnalysisFeature](/python/api/azure-ai-vision/azure.ai.vision.enums.imageanalysisfeature) enum defines the supported values.
-
-[!code-python[](~/azure-ai-vision-sdk/docs/learn.microsoft.com/python/image-analysis/1/main.py?name=visual_features)]
-
-#### [C++](#tab/cpp)
-
-Create a new [ImageAnalysisOptions](/cpp/cognitive-services/vision/imageanalysis-imageanalysisoptions) object. Then specify an `std::vector` of visual features you'd like to extract, by calling the [SetFeatures](/cpp/cognitive-services/vision/imageanalysis-imageanalysisoptions#setfeatures) method. [ImageAnalysisFeature](/cpp/cognitive-services/vision/azure-ai-vision-imageanalysis-namespace#enum-imageanalysisfeature) enum defines the supported values.
-
-[!code-cpp[](~/azure-ai-vision-sdk/docs/learn.microsoft.com/cpp/image-analysis/1/1.cpp?name=visual_features)]
-
-#### [REST API](#tab/rest)
-
-You can specify which features you want to use by setting the URL query parameters of the [Analysis 4.0 API](https://aka.ms/vision-4-0-ref). A parameter can have multiple values, separated by commas.
-
-|URL parameter | Value | Description|
-|---|---|---|
-|`features`|`read` | Reads the visible text in the image and outputs it as structured JSON data.|
-|`features`|`caption` | Describes the image content with a complete sentence in supported languages.|
-|`features`|`denseCaptions` | Generates detailed captions for up to 10 prominent image regions. |
-|`features`|`smartCrops` | Finds the rectangle coordinates that would crop the image to a desired aspect ratio while preserving the area of interest.|
-|`features`|`objects` | Detects various objects within an image, including the approximate location. The Objects argument is only available in English.|
-|`features`|`tags` | Tags the image with a detailed list of words related to the image content.|
-|`features`|`people` | Detects people appearing in images, including the approximate locations. |
-
-A populated URL might look like this:
-
-`https://<endpoint>/computervision/imageanalysis:analyze?api-version=2023-02-01-preview&features=tags,read,caption,denseCaptions,smartCrops,objects,people`
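-
-As a rough illustration (not the official sample; it assumes the `VISION_ENDPOINT` and `VISION_KEY` environment variables and the sample image URL used elsewhere in this guide), a C# call to this URL could look like the following:
-
-```csharp
-using System;
-using System.Net.Http;
-using System.Text;
-
-// Sketch: call the Image Analysis 4.0 analyze operation with several visual features.
-string endpoint = Environment.GetEnvironmentVariable("VISION_ENDPOINT");
-string key = Environment.GetEnvironmentVariable("VISION_KEY");
-string url = $"{endpoint}/computervision/imageanalysis:analyze" +
-             "?api-version=2023-02-01-preview&features=tags,caption,smartCrops,objects,people";
-
-using var client = new HttpClient();
-client.DefaultRequestHeaders.Add("Ocp-Apim-Subscription-Key", key);
-
-var body = new StringContent(
-    "{\"url\":\"https://learn.microsoft.com/azure/cognitive-services/computer-vision/images/windows-kitchen.jpg\"}",
-    Encoding.UTF8, "application/json");
-
-HttpResponseMessage response = await client.PostAsync(url, body);
-string json = await response.Content.ReadAsStringAsync();
-Console.WriteLine(json);
-```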
---
-### Set model name when using a custom model
-
-You can also do image analysis with a custom trained model. To create and train a model, see [Create a custom Image Analysis model](./model-customization.md). Once your model is trained, all you need is the model's name. You do not need to specify visual features if you use a custom model.
-
-### [C#](#tab/csharp)
-
-To use a custom model, create the [ImageAnalysisOptions](/dotnet/api/azure.ai.vision.imageanalysis.imageanalysisoptions) object and set the [ModelName](/dotnet/api/azure.ai.vision.imageanalysis.imageanalysisoptions.modelname#azure-ai-vision-imageanalysis-imageanalysisoptions-modelname) property. You don't need to set any other properties on **ImageAnalysisOptions**. There's no need to set the [Features](/dotnet/api/azure.ai.vision.imageanalysis.imageanalysisoptions.features#azure-ai-vision-imageanalysis-imageanalysisoptions-features) property, as you do with the standard model, since your custom model already implies the visual features the service extracts.
-
-[!code-csharp[](~/azure-ai-vision-sdk/docs/learn.microsoft.com/csharp/image-analysis/3/Program.cs?name=model_name)]
-
-### [Python](#tab/python)
-
-To use a custom model, create the [ImageAnalysisOptions](/python/api/azure-ai-vision/azure.ai.vision.imageanalysisoptions) object and set the [model_name](/python/api/azure-ai-vision/azure.ai.vision.imageanalysisoptions#azure-ai-vision-imageanalysisoptions-model-name) property. You don't need to set any other properties on **ImageAnalysisOptions**. There's no need to set the [features](/python/api/azure-ai-vision/azure.ai.vision.imageanalysisoptions#azure-ai-vision-imageanalysisoptions-features) property, as you do with the standard model, since your custom model already implies the visual features the service extracts.
-
-[!code-python[](~/azure-ai-vision-sdk/docs/learn.microsoft.com/python/image-analysis/3/main.py?name=model_name)]
-
-### [C++](#tab/cpp)
-
-To use a custom model, create the [ImageAnalysisOptions](/cpp/cognitive-services/vision/imageanalysis-imageanalysisoptions) object and call the [SetModelName](/cpp/cognitive-services/vision/imageanalysis-imageanalysisoptions#setmodelname) method. You don't need to call any other methods on **ImageAnalysisOptions**. There's no need to call [SetFeatures](/cpp/cognitive-services/vision/imageanalysis-imageanalysisoptions#setfeatures) as you do with standard model, since your custom model already implies the visual features the service extracts.
-
-[!code-cpp[](~/azure-ai-vision-sdk/docs/learn.microsoft.com/cpp/image-analysis/3/3.cpp?name=model_name)]
-
-### [REST API](#tab/rest)
-
-To use a custom model, don't use the features query parameter. Instead, set the `model-name` parameter to the name of your model as shown here. Replace `MyCustomModelName` with your custom model name.
-
-`https://<endpoint>/computervision/imageanalysis:analyze?api-version=2023-02-01-preview&model-name=MyCustomModelName`
---
-### Specify languages
-
-You can specify the language of the returned data. The language is optional, with the default being English. See [Language support](https://aka.ms/cv-languages) for a list of supported language codes and which visual features are supported for each language.
-
-The language option only applies when you're using the standard model.
-
-#### [C#](#tab/csharp)
-
-Use the [Language](/dotnet/api/azure.ai.vision.imageanalysis.imageanalysisoptions.language) property of your **ImageAnalysisOptions** object to specify a language.
-
-[!code-csharp[](~/azure-ai-vision-sdk/docs/learn.microsoft.com/csharp/image-analysis/1/Program.cs?name=language)]
-
-#### [Python](#tab/python)
-
-Use the [language](/python/api/azure-ai-vision/azure.ai.vision.imageanalysisoptions#azure-ai-vision-imageanalysisoptions-language) property of your **ImageAnalysisOptions** object to specify a language.
-
-[!code-python[](~/azure-ai-vision-sdk/docs/learn.microsoft.com/python/image-analysis/1/main.py?name=language)]
-
-#### [C++](#tab/cpp)
-
-Call the [SetLanguage](/cpp/cognitive-services/vision/imageanalysis-imageanalysisoptions#setlanguage) method on your **ImageAnalysisOptions** object to specify a language.
-
-[!code-cpp[](~/azure-ai-vision-sdk/docs/learn.microsoft.com/cpp/image-analysis/1/1.cpp?name=language)]
-
-#### [REST API](#tab/rest)
-
-The following URL query parameter specifies the language. The default value is `en`.
-
-|URL parameter | Value | Description|
-|---|---|---|
-|`language`|`en` | English|
-|`language`|`es` | Spanish|
-|`language`|`ja` | Japanese|
-|`language`|`pt` | Portuguese|
-|`language`|`zh` | Simplified Chinese|
-
-A populated URL might look like this:
-
-`https://<endpoint>/computervision/imageanalysis:analyze?api-version=2023-02-01-preview&features=caption&language=en`
---
-### Select gender neutral captions
-
-If you're extracting captions or dense captions, you can ask for gender neutral captions. Gender neutral captions is optional, with the default being gendered captions. For example, in English, when you select gender neutral captions, terms like **woman** or **man** are replaced with **person**, and **boy** or **girl** are replaced with **child**.
-
-The gender neutral caption option only applies when you're using the standard model.
-
-#### [C#](#tab/csharp)
-
-Set the [GenderNeutralCaption](/dotnet/api/azure.ai.vision.imageanalysis.imageanalysisoptions.genderneutralcaption) property of your **ImageAnalysisOptions** object to true to enable gender neutral captions.
-
-[!code-csharp[](~/azure-ai-vision-sdk/docs/learn.microsoft.com/csharp/image-analysis/1/Program.cs?name=gender_neutral_caption)]
-
-#### [Python](#tab/python)
-
-Set the [gender_neutral_caption](/python/api/azure-ai-vision/azure.ai.vision.imageanalysisoptions#azure-ai-vision-imageanalysisoptions-gender-neutral-caption) property of your **ImageAnalysisOptions** object to true to enable gender neutral captions.
-
-[!code-python[](~/azure-ai-vision-sdk/docs/learn.microsoft.com/python/image-analysis/1/main.py?name=gender_neutral_caption)]
-
-#### [C++](#tab/cpp)
-
-Call the [SetGenderNeutralCaption](/cpp/cognitive-services/vision/imageanalysis-imageanalysisoptions#setgenderneutralcaption) method of your **ImageAnalysisOptions** object with **true** as the argument, to enable gender neutral captions.
-
-[!code-cpp[](~/azure-ai-vision-sdk/docs/learn.microsoft.com/cpp/image-analysis/1/1.cpp?name=gender_neutral_caption)]
-
-#### [REST API](#tab/rest)
-
-Add the optional query string `gender-neutral-caption` with values `true` or `false` (the default).
-
-A populated URL might look like this:
-
-`https://<endpoint>/computervision/imageanalysis:analyze?api-version=2023-02-01-preview&features=caption&gender-neutral-caption=true`
---
-### Select smart cropping aspect ratios
-
-An aspect ratio is calculated by dividing the target crop width by the height. Supported values are from 0.75 to 1.8 (inclusive). Setting this property is only relevant when the **smartCrop** option (REST API) or **CropSuggestions** (SDK) was selected as part of the visual feature list. If you select smartCrop/CropSuggestions but don't specify aspect ratios, the service returns one crop suggestion with an aspect ratio that it sees fit. In this case, the aspect ratio is between 0.5 and 2.0 (inclusive).
-
-The smart cropping aspect ratios option only applies when you're using the standard model.
-
-#### [C#](#tab/csharp)
-
-Set the [CroppingAspectRatios](/dotnet/api/azure.ai.vision.imageanalysis.imageanalysisoptions.croppingaspectratios) property of your **ImageAnalysisOptions** to a list of aspect ratios. For example, to set aspect ratios of 0.9 and 1.33:
-
-[!code-csharp[](~/azure-ai-vision-sdk/docs/learn.microsoft.com/csharp/image-analysis/1/Program.cs?name=cropping_aspect_rations)]
-
-#### [Python](#tab/python)
-
-Set the [cropping_aspect_ratios](/python/api/azure-ai-vision/azure.ai.vision.imageanalysisoptions#azure-ai-vision-imageanalysisoptions-cropping-aspect-ratios) property of your **ImageAnalysisOptions** to a list of aspect ratios. For example, to set aspect ratios of 0.9 and 1.33:
-
-[!code-python[](~/azure-ai-vision-sdk/docs/learn.microsoft.com/python/image-analysis/1/main.py?name=cropping_aspect_rations)]
-
-#### [C++](#tab/cpp)
-
-Call the [SetCroppingAspectRatios](/cpp/cognitive-services/vision/imageanalysis-imageanalysisoptions#setcroppingaspectratios) method of your **ImageAnalysisOptions** with an `std::vector` of aspect ratios. For example, to set aspect ratios of 0.9 and 1.33:
-
-[!code-cpp[](~/azure-ai-vision-sdk/docs/learn.microsoft.com/cpp/image-analysis/1/1.cpp?name=cropping_aspect_rations)]
-
-#### [REST API](#tab/rest)
-
-Add the optional query string `smartcrops-aspect-ratios`, with one or more aspect ratios separated by a comma.
-
-A populated URL might look like this:
-
-`https://<endpoint>/computervision/imageanalysis:analyze?api-version=2023-02-01-preview&features=smartCrops&smartcrops-aspect-ratios=0.8,1.2`
---
-## Get results from the service
-
-### Get results using the standard model
-
-This section shows you how to make an analysis call to the service using the standard model, and get the results.
-
-#### [C#](#tab/csharp)
-
-1. Using the **VisionServiceOptions**, **VisionSource** and **ImageAnalysisOptions** objects, construct a new [ImageAnalyzer](/dotnet/api/azure.ai.vision.imageanalysis.imageanalyzer) object. **ImageAnalyzer** implements **IDisposable**; therefore, create the object with a **using** statement, or explicitly call the **Dispose** method after analysis completes.
-
-1. Call the **Analyze** method on the **ImageAnalyzer** object, as shown here. This is a blocking (synchronous) call until the service returns the results or an error occurs. Alternatively, you can call the nonblocking **AnalyzeAsync** method.
-
-1. Check the **Reason** property on the [ImageAnalysisResult](/dotnet/api/azure.ai.vision.imageanalysis.imageanalysisresult) object, to determine if analysis succeeded or failed.
-
-1. If succeeded, proceed to access the relevant result properties based on your selected visual features, as shown here. Additional information (not commonly needed) can be obtained by constructing the [ImageAnalysisResultDetails](/dotnet/api/azure.ai.vision.imageanalysis.imageanalysisresultdetails) object.
-
-1. If failed, you can construct the [ImageAnalysisErrorDetails](/dotnet/api/azure.ai.vision.imageanalysis.imageanalysisresultdetails) object to get information on the failure.
-
-[!code-csharp[](~/azure-ai-vision-sdk/docs/learn.microsoft.com/csharp/image-analysis/1/Program.cs?name=analyze)]
-
-#### [Python](#tab/python)
-
-1. Using the **VisionServiceOptions**, **VisionSource** and **ImageAnalysisOptions** objects, construct a new [ImageAnalyzer](/python/api/azure-ai-vision/azure.ai.vision.imageanalyzer) object.
-
-1. Call the **analyze** method on the **ImageAnalyzer** object, as shown here. This is a blocking (synchronous) call until the service returns the results or an error occurs. Alternatively, you can call the nonblocking **analyze_async** method.
-
-1. Check the **reason** property on the [ImageAnalysisResult](/python/api/azure-ai-vision/azure.ai.vision.imageanalysisresult) object, to determine if analysis succeeded or failed.
-
-1. If succeeded, proceed to access the relevant result properties based on your selected visual features, as shown here. Additional information (not commonly needed) can be obtained by constructing the [ImageAnalysisResultDetails](/python/api/azure-ai-vision/azure.ai.vision.imageanalysisresultdetails) object.
-
-1. If failed, you can construct the [ImageAnalysisErrorDetails](/python/api/azure-ai-vision/azure.ai.vision.imageanalysiserrordetails) object to get information on the failure.
-
-[!code-python[](~/azure-ai-vision-sdk/docs/learn.microsoft.com/python/image-analysis/1/main.py?name=analyze)]
-
-#### [C++](#tab/cpp)
-
-1. Using the **VisionServiceOptions**, **VisionSource** and **ImageAnalysisOptions** objects, construct a new [ImageAnalyzer](/cpp/cognitive-services/vision/imageanalysis-imageanalyzer) object.
-
-1. Call the **Analyze** method on the **ImageAnalyzer** object, as shown here. This is a blocking (synchronous) call until the service returns the results or an error occurs. Alternatively, you can call the nonblocking **AnalyzeAsync** method.
-
-1. Call **GetReason** method on the [ImageAnalysisResult](/cpp/cognitive-services/vision/imageanalysis-imageanalysisresult) object, to determine if analysis succeeded or failed.
-
-1. If succeeded, proceed to call the relevant **Get** methods on the result based on your selected visual features, as shown here. Additional information (not commonly needed) can be obtained by constructing the [ImageAnalysisResultDetails](/cpp/cognitive-services/vision/imageanalysis-imageanalysisresultdetails) object.
-
-1. If failed, you can construct the [ImageAnalysisErrorDetails](/cpp/cognitive-services/vision/imageanalysis-imageanalysiserrordetails) object to get information on the failure.
-
-[!code-cpp[](~/azure-ai-vision-sdk/docs/learn.microsoft.com/cpp/image-analysis/1/1.cpp?name=analyze)]
-
-The code uses the following helper method to display the coordinates of a bounding polygon:
-
-[!code-cpp[](~/azure-ai-vision-sdk/docs/learn.microsoft.com/cpp/image-analysis/1/1.cpp?name=polygon_to_string)]
-
-#### [REST API](#tab/rest)
-
-The service returns a `200` HTTP response, and the body contains the returned data in the form of a JSON string. The following text is an example of a JSON response.
-
-```json
-{
- "captionResult":
- {
- "text": "a person using a laptop",
- "confidence": 0.55291348695755
- },
- "objectsResult":
- {
- "values":
- [
- {"boundingBox":{"x":730,"y":66,"w":135,"h":85},"tags":[{"name":"kitchen appliance","confidence":0.501}]},
- {"boundingBox":{"x":523,"y":377,"w":185,"h":46},"tags":[{"name":"computer keyboard","confidence":0.51}]},
- {"boundingBox":{"x":471,"y":218,"w":289,"h":226},"tags":[{"name":"Laptop","confidence":0.85}]},
- {"boundingBox":{"x":654,"y":0,"w":584,"h":473},"tags":[{"name":"person","confidence":0.855}]}
- ]
- },
- "modelVersion": "2023-02-01-preview",
- "metadata":
- {
- "width": 1260,
- "height": 473
- },
- "tagsResult":
- {
- "values":
- [
- {"name":"computer","confidence":0.9865934252738953},
- {"name":"clothing","confidence":0.9695653915405273},
- {"name":"laptop","confidence":0.9658201932907104},
- {"name":"person","confidence":0.9536289572715759},
- {"name":"indoor","confidence":0.9420197010040283},
- {"name":"wall","confidence":0.8871886730194092},
- {"name":"woman","confidence":0.8632704019546509},
- {"name":"using","confidence":0.5603535771369934}
- ]
- },
- "readResult":
- {
- "stringIndexType": "TextElements",
- "content": "",
- "pages":
- [
- {"height":473,"width":1260,"angle":0,"pageNumber":1,"words":[],"spans":[{"offset":0,"length":0}],"lines":[]}
- ],
- "styles": [],
- "modelVersion": "2022-04-30"
- },
- "smartCropsResult":
- {
- "values":
- [
- {"aspectRatio":1.94,"boundingBox":{"x":158,"y":20,"w":840,"h":433}}
- ]
- },
- "peopleResult":
- {
- "values":
- [
- {"boundingBox":{"x":660,"y":0,"w":584,"h":471},"confidence":0.9698998332023621},
- {"boundingBox":{"x":566,"y":276,"w":24,"h":30},"confidence":0.022009700536727905},
- {"boundingBox":{"x":587,"y":273,"w":20,"h":28},"confidence":0.01859394833445549},
- {"boundingBox":{"x":609,"y":271,"w":19,"h":30},"confidence":0.003902678843587637},
- {"boundingBox":{"x":563,"y":279,"w":15,"h":28},"confidence":0.0034854013938456774},
- {"boundingBox":{"x":566,"y":299,"w":22,"h":41},"confidence":0.0031260766554623842},
- {"boundingBox":{"x":570,"y":311,"w":29,"h":38},"confidence":0.0026493810582906008},
- {"boundingBox":{"x":588,"y":275,"w":24,"h":54},"confidence":0.001754675293341279},
- {"boundingBox":{"x":574,"y":274,"w":53,"h":64},"confidence":0.0012078586732968688},
- {"boundingBox":{"x":608,"y":270,"w":32,"h":59},"confidence":0.0011869356967508793},
- {"boundingBox":{"x":591,"y":305,"w":29,"h":42},"confidence":0.0010676260571926832}
- ]
- }
-}
-```
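-
-If you call the REST API directly, you parse this JSON yourself. The following is a minimal C# sketch using `System.Text.Json`; it assumes the raw response body was saved to a placeholder file named `response.json`:
-
-```csharp
-using System;
-using System.IO;
-using System.Text.Json;
-
-// Sketch: read the caption and tags out of an Image Analysis 4.0 JSON response.
-// "response.json" is a placeholder for a saved response body like the one shown above.
-string json = File.ReadAllText("response.json");
-using JsonDocument doc = JsonDocument.Parse(json);
-JsonElement root = doc.RootElement;
-
-if (root.TryGetProperty("captionResult", out JsonElement caption))
-{
-    Console.WriteLine($"Caption: {caption.GetProperty("text").GetString()} " +
-                      $"(confidence {caption.GetProperty("confidence").GetDouble():0.00})");
-}
-
-if (root.TryGetProperty("tagsResult", out JsonElement tags))
-{
-    foreach (JsonElement tag in tags.GetProperty("values").EnumerateArray())
-    {
-        Console.WriteLine($"  Tag: {tag.GetProperty("name").GetString()} " +
-                          $"(confidence {tag.GetProperty("confidence").GetDouble():0.00})");
-    }
-}
-```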
---
-### Get results using custom model
-
-This section shows you how to make an analysis call to the service, when using a custom model.
-
-#### [C#](#tab/csharp)
-
-The code is similar to the standard model case. The only difference is that results from the custom model are available on the **CustomTags** and/or **CustomObjects** properties of the [ImageAnalysisResult](/dotnet/api/azure.ai.vision.imageanalysis.imageanalysisresult) object.
-
-[!code-csharp[](~/azure-ai-vision-sdk/docs/learn.microsoft.com/csharp/image-analysis/3/Program.cs?name=analyze)]
-
-#### [Python](#tab/python)
-
-The code is similar to the standard model case. The only difference is that results from the custom model are available on the **custom_tags** and/or **custom_objects** properties of the [ImageAnalysisResult](/python/api/azure-ai-vision/azure.ai.vision.imageanalysisresult) object.
-
-[!code-python[](~/azure-ai-vision-sdk/docs/learn.microsoft.com/python/image-analysis/3/main.py?name=analyze)]
-
-#### [C++](#tab/cpp)
-
-The code is similar to the standard model case. The only difference is that results from the custom model are available by calling the **GetCustomTags** and/or **GetCustomObjects** methods of the [ImageAnalysisResult](/cpp/cognitive-services/vision/imageanalysis-imageanalysisresult) object.
-
-[!code-cpp[](~/azure-ai-vision-sdk/docs/learn.microsoft.com/cpp/image-analysis/3/3.cpp?name=analyze)]
-
-#### [REST API](#tab/rest)
---
-## Error codes
--
-## Next steps
-
-* Explore the [concept articles](../concept-describe-images-40.md) to learn more about each feature.
-* Explore the [SDK code samples on GitHub](https://github.com/Azure-Samples/azure-ai-vision-sdk).
-* See the [REST API reference](https://aka.ms/vision-4-0-ref) to learn more about the API functionality.
cognitive-services Call Analyze Image https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Computer-vision/how-to/call-analyze-image.md
- Title: Call the Image Analysis API-
-description: Learn how to call the Image Analysis API and configure its behavior.
------ Previously updated : 12/27/2022---
-# Call the Image Analysis API
-
-This article demonstrates how to call the Image Analysis API to return information about an image's visual features. It also shows you how to parse the returned information using the client SDKs or REST API.
-
-This guide assumes you've already <a href="https://portal.azure.com/#create/Microsoft.CognitiveServicesComputerVision" title="created a Computer Vision resource" target="_blank">created a Computer Vision resource </a> and obtained a key and endpoint URL. If you're using a client SDK, you'll also need to authenticate a client object. If you haven't done these steps, follow the [quickstart](../quickstarts-sdk/image-analysis-client-library.md) to get started.
-
-## Submit data to the service
-
-The code in this guide uses remote images referenced by URL. You may want to try different images on your own to see the full capability of the Image Analysis features.
-
-#### [REST](#tab/rest)
-
-When analyzing a remote image, you specify the image's URL by formatting the request body like this: `{"url":"http://example.com/images/test.jpg"}`.
-
-To analyze a local image, you'd put the binary image data in the HTTP request body.
-
-#### [C#](#tab/csharp)
-
-In your main class, save a reference to the URL of the image you want to analyze.
-
-[!code-csharp[](~/cognitive-services-quickstart-code/dotnet/ComputerVision/ImageAnalysisQuickstart.cs?name=snippet_analyze_url)]
-
-> [!TIP]
-> You can also analyze a local image. See the [ComputerVisionClient](/dotnet/api/microsoft.azure.cognitiveservices.vision.computervision.computervisionclient) methods, such as **AnalyzeImageInStreamAsync**. Or, see the sample code on [GitHub](https://github.com/Azure-Samples/cognitive-services-quickstart-code/blob/master/dotnet/ComputerVision/ImageAnalysisQuickstart.cs) for scenarios involving local images.
-
-#### [Java](#tab/java)
-
-In your main class, save a reference to the URL of the image you want to analyze.
-
-[!code-java[](~/cognitive-services-quickstart-code/java/ComputerVision/src/main/java/ImageAnalysisQuickstart.java?name=snippet_urlimage)]
-
-> [!TIP]
-> You can also analyze a local image. See the [ComputerVision](/java/api/com.microsoft.azure.cognitiveservices.vision.computervision.computervision) methods, such as **AnalyzeImage**. Or, see the sample code on [GitHub](https://github.com/Azure-Samples/cognitive-services-quickstart-code/blob/master/java/ComputerVision/src/main/java/ImageAnalysisQuickstart.java) for scenarios involving local images.
-
-#### [JavaScript](#tab/javascript)
-
-In your main function, save a reference to the URL of the image you want to analyze.
-
-[!code-javascript[](~/cognitive-services-quickstart-code/javascript/ComputerVision/ImageAnalysisQuickstart.js?name=snippet_describe_image)]
-
-> [!TIP]
-> You can also analyze a local image. See the [ComputerVisionClient](/javascript/api/@azure/cognitiveservices-computervision/computervisionclient) methods, such as **describeImageInStream**. Or, see the sample code on [GitHub](https://github.com/Azure-Samples/cognitive-services-quickstart-code/blob/master/javascript/ComputerVision/ImageAnalysisQuickstart.js) for scenarios involving local images.
-
-#### [Python](#tab/python)
-
-Save a reference to the URL of the image you want to analyze.
-
-[!code-python[](~/cognitive-services-quickstart-code/python/ComputerVision/ImageAnalysisQuickstart.py?name=snippet_remoteimage)]
-
-> [!TIP]
-> You can also analyze a local image. See the [ComputerVisionClientOperationsMixin](/python/api/azure-cognitiveservices-vision-computervision/azure.cognitiveservices.vision.computervision.operations.computervisionclientoperationsmixin) methods, such as **analyze_image_in_stream**. Or, see the sample code on [GitHub](https://github.com/Azure-Samples/cognitive-services-quickstart-code/blob/master/python/ComputerVision/ImageAnalysisQuickstart.py) for scenarios involving local images.
----
-## Determine how to process the data
-
-### Select visual features
-
-The Analyze API gives you access to all of the service's image analysis features. Choose which operations to do based on your own use case. See the [overview](../overview.md) for a description of each feature. The examples in the sections below add all of the available visual features, but for practical usage you'll likely only need one or two.
-
-#### [REST](#tab/rest)
-
-You can specify which features you want to use by setting the URL query parameters of the [Analyze API](https://aka.ms/vision-4-0-ref). A parameter can have multiple values, separated by commas. Each feature you specify will require more computation time, so only specify what you need.
-
-|URL parameter | Value | Description|
-|---|---|---|
-|`features`|`Read` | reads the visible text in the image and outputs it as structured JSON data.|
-|`features`|`Description` | describes the image content with a complete sentence in supported languages.|
-|`features`|`SmartCrops` | finds the rectangle coordinates that would crop the image to a desired aspect ratio while preserving the area of interest.|
-|`features`|`Objects` | detects various objects within an image, including the approximate location. The Objects argument is only available in English.|
-|`features`|`Tags` | tags the image with a detailed list of words related to the image content.|
-
-A populated URL might look like this:
-
-`https://<endpoint>/computervision/imageanalysis:analyze?api-version=2023-02-01-preview&features=Tags`
-
-#### [C#](#tab/csharp)
-
-Define your new method for image analysis. Add the code below, which specifies visual features you'd like to extract in your analysis. See the **[VisualFeatureTypes](/dotnet/api/microsoft.azure.cognitiveservices.vision.computervision.models.visualfeaturetypes)** enum for a complete list.
-
-[!code-csharp[](~/cognitive-services-quickstart-code/dotnet/ComputerVision/ImageAnalysisQuickstart.cs?name=snippet_visualfeatures)]
--
-#### [Java](#tab/java)
-
-Specify which visual features you'd like to extract in your analysis. See the [VisualFeatureTypes](/java/api/com.microsoft.azure.cognitiveservices.vision.computervision.models.visualfeaturetypes) enum for a complete list.
-
-[!code-java[](~/cognitive-services-quickstart-code/java/ComputerVision/src/main/java/ImageAnalysisQuickstart.java?name=snippet_features_remote)]
-
-#### [JavaScript](#tab/javascript)
-
-Specify which visual features you'd like to extract in your analysis. See the [VisualFeatureTypes](/javascript/api/@azure/cognitiveservices-computervision/computervisionmodels.visualfeaturetypes) enum for a complete list.
-
-[!code-javascript[](~/cognitive-services-quickstart-code/javascript/ComputerVision/ImageAnalysisQuickstart.js?name=snippet_features_remote)]
-
-#### [Python](#tab/python)
-
-Specify which visual features you'd like to extract in your analysis. See the [VisualFeatureTypes](/python/api/azure-cognitiveservices-vision-computervision/azure.cognitiveservices.vision.computervision.models.visualfeaturetypes?view=azure-python&preserve-view=true) enum for a complete list.
-
-[!code-python[](~/cognitive-services-quickstart-code/python/ComputerVision/ImageAnalysisQuickstart.py?name=snippet_features_remote)]
-----
-### Specify languages
-
-You can also specify the language of the returned data.
-
-#### [REST](#tab/rest)
-
-The following URL query parameter specifies the language. The default value is `en`.
-
-|URL parameter | Value | Description|
-|---|---|---|
-|`language`|`en` | English|
-|`language`|`es` | Spanish|
-|`language`|`ja` | Japanese|
-|`language`|`pt` | Portuguese|
-|`language`|`zh` | Simplified Chinese|
-
-A populated URL might look like this:
-
-`https://<endpoint>/computervision/imageanalysis:analyze?api-version=2023-02-01-preview&features=Tags&language=en`
-
-#### [C#](#tab/csharp)
-
-Use the *language* parameter of [AnalyzeImageAsync](/dotnet/api/microsoft.azure.cognitiveservices.vision.computervision.computervisionclientextensions.analyzeimageasync?view=azure-dotnet#microsoft-azure-cognitiveservices-vision-computervision-computervisionclientextensions-analyzeimageasync(microsoft-azure-cognitiveservices-vision-computervision-icomputervisionclient-system-string-system-collections-generic-ilist((system-nullable((microsoft-azure-cognitiveservices-vision-computervision-models-visualfeaturetypes))))-system-collections-generic-ilist((system-nullable((microsoft-azure-cognitiveservices-vision-computervision-models-details))))-system-string-system-collections-generic-ilist((system-nullable((microsoft-azure-cognitiveservices-vision-computervision-models-descriptionexclude))))-system-string-system-threading-cancellationtoken&preserve-view=true)) call to specify a language. A method call that specifies a language might look like the following.
-
-```csharp
-ImageAnalysis results = await client.AnalyzeImageAsync(imageUrl, visualFeatures: features, language: "en");
-```
-
-#### [Java](#tab/java)
-
-Use the [AnalyzeImageOptionalParameter](/java/api/com.microsoft.azure.cognitiveservices.vision.computervision.models.analyzeimageoptionalparameter) input in your Analyze call to specify a language. A method call that specifies a language might look like the following.
--
-```java
-ImageAnalysis analysis = compVisClient.computerVision().analyzeImage().withUrl(pathToRemoteImage)
- .withVisualFeatures(featuresToExtractFromLocalImage)
- .language("en")
- .execute();
-```
-
-#### [JavaScript](#tab/javascript)
-
-Use the **language** property of the [ComputerVisionClientAnalyzeImageOptionalParams](/javascript/api/@azure/cognitiveservices-computervision/computervisionmodels.computervisionclientanalyzeimageoptionalparams) input in your Analyze call to specify a language. A method call that specifies a language might look like the following.
-
-```javascript
-const result = (await computerVisionClient.analyzeImage(imageURL,{visualFeatures: features, language: 'en'}));
-```
-
-#### [Python](#tab/python)
-
-Use the *language* parameter of your [analyze_image](/python/api/azure-cognitiveservices-vision-computervision/azure.cognitiveservices.vision.computervision.operations.computervisionclientoperationsmixin?view=azure-python#azure-cognitiveservices-vision-computervision-operations-computervisionclientoperationsmixin-analyze-image&preserve-view=true) call to specify a language. A method call that specifies a language might look like the following.
-
-```python
-results_remote = computervision_client.analyze_image(remote_image_url , remote_image_features, remote_image_details, 'en')
-```
----
-## Get results from the service
-
-This section shows you how to parse the results of the API call. It includes the API call itself.
-
-> [!NOTE]
-> **Scoped API calls**
->
-> Some of the features in Image Analysis can be called directly as well as through the Analyze API call. For example, you can do a scoped analysis of only image tags by making a request to `https://<endpoint>/vision/v3.2/tag` (or to the corresponding method in the SDK). See the [reference documentation](https://westus.dev.cognitive.microsoft.com/docs/services/computer-vision-v3-2/operations/56f91f2e778daf14a499f21b) for other features that can be called separately.
-
-#### [REST](#tab/rest)
-
-The service returns a `200` HTTP response, and the body contains the returned data in the form of a JSON string. The following text is an example of a JSON response.
-
-```json
-{
- "metadata":
- {
- "width": 300,
- "height": 200
- },
- "tagsResult":
- {
- "values":
- [
- {
- "name": "grass",
- "confidence": 0.9960499405860901
- },
- {
- "name": "outdoor",
- "confidence": 0.9956876635551453
- },
- {
- "name": "building",
- "confidence": 0.9893627166748047
- },
- {
- "name": "property",
- "confidence": 0.9853052496910095
- },
- {
- "name": "plant",
- "confidence": 0.9791355729103088
- }
- ]
- }
-}
-```
-
-### Error codes
-
-See the following list of possible errors and their causes:
-
-* 400
- * `InvalidImageUrl` - Image URL is badly formatted or not accessible.
- * `InvalidImageFormat` - Input data is not a valid image.
- * `InvalidImageSize` - Input image is too large.
- * `NotSupportedVisualFeature` - Specified feature type isn't valid.
- * `NotSupportedImage` - Unsupported image, for example child pornography.
- * `InvalidDetails` - Unsupported `detail` parameter value.
- * `NotSupportedLanguage` - The requested operation isn't supported in the language specified.
- * `BadArgument` - More details are provided in the error message.
-* 415 - Unsupported media type error. The Content-Type isn't in the allowed types:
- * For an image URL, Content-Type should be `application/json`
- * For a binary image data, Content-Type should be `application/octet-stream` or `multipart/form-data`
-* 500
- * `FailedToProcess`
- * `Timeout` - Image processing timed out.
- * `InternalServerError`
--
-#### [C#](#tab/csharp)
-
-The following code calls the Image Analysis API and prints the results to the console.
-
-[!code-csharp[](~/cognitive-services-quickstart-code/dotnet/ComputerVision/ImageAnalysisQuickstart.cs?name=snippet_analyze)]
-
-#### [Java](#tab/java)
-
-The following code calls the Image Analysis API and prints the results to the console.
-
-[!code-java[](~/cognitive-services-quickstart-code/java/ComputerVision/src/main/java/ImageAnalysisQuickstart.java?name=snippet_analyze)]
-
-#### [JavaScript](#tab/javascript)
-
-The following code calls the Image Analysis API and prints the results to the console.
-
-[!code-javascript[](~/cognitive-services-quickstart-code/javascript/ComputerVision/ImageAnalysisQuickstart.js?name=snippet_analyze)]
-
-#### [Python](#tab/python)
-
-The following code calls the Image Analysis API and prints the results to the console.
-
-[!code-python[](~/cognitive-services-quickstart-code/python/ComputerVision/ImageAnalysisQuickstart.py?name=snippet_analyze)]
----
-> [!TIP]
-> While working with Computer Vision, you might encounter transient failures caused by [rate limits](https://azure.microsoft.com/pricing/details/cognitive-services/computer-vision/) enforced by the service, or other transient problems like network outages. For information about handling these types of failures, see [Retry pattern](/azure/architecture/patterns/retry) in the Cloud Design Patterns guide, and the related [Circuit Breaker pattern](/azure/architecture/patterns/circuit-breaker).
--
-## Next steps
-
-* Explore the [concept articles](../concept-object-detection.md) to learn more about each feature.
-* See the [API reference](https://aka.ms/vision-4-0-ref) to learn more about the API functionality.
cognitive-services Call Read Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Computer-vision/how-to/call-read-api.md
- Title: How to call the Read API-
-description: Learn how to call the Read API and configure its behavior in detail.
-------- Previously updated : 11/03/2022---
-# Call the Computer Vision 3.2 GA Read API
-
-In this guide, you'll learn how to call the v3.2 GA Read API to extract text from images. You'll learn the different ways you can configure the behavior of this API to meet your needs. This guide assumes you have already <a href="https://portal.azure.com/#create/Microsoft.CognitiveServicesComputerVision" title="created a Computer Vision resource" target="_blank">created a Computer Vision resource </a> and obtained a key and endpoint URL. If you haven't, follow a [quickstart](../quickstarts-sdk/client-library.md) to get started.
--
-## Input requirements
-
-The **Read** call takes images and documents as its input. They have the following requirements:
-
-* Supported file formats: JPEG, PNG, BMP, PDF, and TIFF
-* For PDF and TIFF files, up to 2000 pages (only the first two pages for the free tier) are processed.
-* The file size of images must be less than 500 MB (4 MB for the free tier), and image dimensions must be at least 50 x 50 pixels and at most 10000 x 10000 pixels. PDF files do not have a size limit.
-* The minimum height of the text to be extracted is 12 pixels for a 1024 x 768 image. This corresponds to about 8-point font text at 150 DPI.
-
->[!NOTE]
-> You don't need to crop an image into text lines. Send the whole image to the Read API and it will recognize all text.
-
-## Determine how to process the data (optional)
-
-### Specify the OCR model
-
-By default, the service will use the latest generally available (GA) model to extract text. Starting with Read 3.2, a `model-version` parameter allows choosing between the GA and preview models for a given API version. The model you specify will be used to extract text with the Read operation.
-
-When using the Read operation, use the following values for the optional `model-version` parameter.
-
-|Value| Model used |
-|:--|:-|
-| Not provided | Latest GA model |
-| latest | Latest GA model|
-| [2022-04-30](../whats-new.md#may-2022) | Latest GA model. 164 languages for print text and 9 languages for handwritten text along with several enhancements on quality and performance |
-| [2022-01-30-preview](../whats-new.md#february-2022) | Preview model adds print text support for Hindi, Arabic and related languages. For handwritten text, adds support for Japanese and Korean. |
-| [2021-09-30-preview](../whats-new.md#september-2021) | Preview model adds print text support for Russian and other Cyrillic languages. For handwritten text, adds support for Chinese Simplified, French, German, Italian, Portuguese, and Spanish. |
-| 2021-04-12 | 2021 GA model |
-
-### Input language
-
-By default, the service extracts all text from your images or documents including mixed languages. The [Read operation](https://westus.dev.cognitive.microsoft.com/docs/services/computer-vision-v3-2/operations/5d986960601faab4bf452005) has an optional request parameter for language. Only provide a language code if you want to force the document to be processed as that specific language. Otherwise, the service may return incomplete and incorrect text.
-
-### Natural reading order output (Latin languages only)
-
-By default, the service outputs the text lines in the left to right order. Optionally, with the `readingOrder` request parameter, use `natural` for a more human-friendly reading order output as shown in the following example. This feature is only supported for Latin languages.
--
-### Select page(s) or page ranges for text extraction
-
-By default, the service extracts text from all pages in the documents. Optionally, use the `pages` request parameter to specify page numbers or page ranges to extract text from only those pages. The following example shows a document with 10 pages, with text extracted for both cases - all pages (1-10) and selected pages (3-6).
--
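-
-A minimal Python sketch that gathers these optional query parameters (the values shown are illustrative placeholders; any of them can be omitted) might look like the following. The dictionary is passed as the query string when you submit the request in the next section.
-
-```python
-# Illustrative values only; omit any parameter you don't need.
-params = {
-    "model-version": "latest",   # or a dated model version from the table above
-    "language": "en",            # set only to force processing in a specific language
-    "readingOrder": "natural",   # human-friendly reading order; Latin languages only
-    "pages": "3-6",              # extract text from pages 3 through 6 only
-}
-```
-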
-## Submit data to the service
-
-You submit either a local image or a remote image to the Read API. For local, you put the binary image data in the HTTP request body. For remote, you specify the image's URL by formatting the request body like the following: `{"url":"http://example.com/images/test.jpg"}`.
-
-The Read API's [Read call](https://centraluseuap.dev.cognitive.microsoft.com/docs/services/computer-vision-v3-2/operations/5d986960601faab4bf452005) takes an image or PDF document as the input and extracts text asynchronously.
-
-`https://{endpoint}/vision/v3.2/read/analyze[?language][&pages][&readingOrder]`
-
-The call returns with a response header field called `Operation-Location`. The `Operation-Location` value is a URL that contains the Operation ID to be used in the next step.
-
-|Response header| Example value |
-|:--|:-|
-|Operation-Location | `https://cognitiveservice/vision/v3.2/read/analyzeResults/49a36324-fc4b-4387-aa06-090cfbf0064f` |
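-
-As an illustration, a hedged Python sketch of this submission step (using the `requests` package; the endpoint, key, and image URL are placeholders):
-
-```python
-import requests
-
-# Placeholders; use your own resource values.
-endpoint = "https://<your-computer-vision-resource>.cognitiveservices.azure.com"
-key = "<your-key>"
-
-# Submit a remote image for asynchronous text extraction.
-submit = requests.post(
-    f"{endpoint}/vision/v3.2/read/analyze",
-    headers={"Ocp-Apim-Subscription-Key": key, "Content-Type": "application/json"},
-    params={"model-version": "latest"},  # optional parameters from the previous section
-    json={"url": "http://example.com/images/test.jpg"},
-)
-submit.raise_for_status()
-
-# The Operation-Location header holds the results URL; its last segment is the operation ID.
-operation_url = submit.headers["Operation-Location"]
-operation_id = operation_url.rstrip("/").split("/")[-1]
-```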
-
-> [!NOTE]
-> **Billing**
->
-> The [Computer Vision pricing](https://azure.microsoft.com/pricing/details/cognitive-services/computer-vision/) page includes the pricing tier for Read. Each analyzed image or page is one transaction. If you call the operation with a PDF or TIFF document containing 100 pages, the Read operation will count it as 100 transactions and you will be billed for 100 transactions. If you made 50 calls to the operation and each call submitted a document with 100 pages, you will be billed for 50 X 100 = 5000 transactions.
--
-## Get results from the service
-
-The second step is to call [Get Read Results](https://centraluseuap.dev.cognitive.microsoft.com/docs/services/computer-vision-v3-2/operations/5d9869604be85dee480c8750) operation. This operation takes as input the operation ID that was created by the Read operation.
-
-`https://{endpoint}/vision/v3.2/read/analyzeResults/{operationId}`
-
-It returns a JSON response that contains a **status** field with the following possible values.
-
-|Value | Meaning |
-|:--|:-|
-| `notStarted`| The operation has not started. |
-| `running`| The operation is being processed. |
-| `failed`| The operation has failed. |
-| `succeeded`| The operation has succeeded. |
-
-You call this operation iteratively until it returns with the **succeeded** value. Use an interval of 1 to 2 seconds to avoid exceeding the requests per second (RPS) rate.
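-
-For example, a minimal polling sketch in Python (continuing from the previous section; the key and operation URL placeholders stand in for your own values):
-
-```python
-import time
-import requests
-
-key = "<your-key>"  # same key used for the submission
-operation_url = "https://<endpoint>/vision/v3.2/read/analyzeResults/<operationId>"  # from Operation-Location
-
-while True:
-    result = requests.get(operation_url, headers={"Ocp-Apim-Subscription-Key": key}).json()
-    if result["status"] in ("succeeded", "failed"):
-        break
-    time.sleep(1)  # a 1-2 second interval keeps you under the request rate limit
-
-if result["status"] == "succeeded":
-    # The structure matches the sample JSON output shown later in this article.
-    for page in result["analyzeResult"]["readResults"]:
-        for line in page["lines"]:
-            print(line["text"])
-```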
-
-> [!NOTE]
-> The free tier limits the request rate to 20 calls per minute. The paid tier allows 30 requests per second (RPS), which can be increased upon request. Note your Azure resource identifier and region, and open an Azure support ticket or contact your account team to request a higher RPS rate.
-
-When the **status** field has the `succeeded` value, the JSON response contains the extracted text content from your image or document. The JSON response maintains the original line groupings of recognized words. It includes the extracted text lines and their bounding box coordinates. Each text line includes all extracted words with their coordinates and confidence scores.
-
-> [!NOTE]
-> The data submitted to the `Read` operation are temporarily encrypted and stored at rest for a short duration, and then deleted. This lets your applications retrieve the extracted text as part of the service response.
-
-### Sample JSON output
-
-See the following example of a successful JSON response:
-
-```json
-{
- "status": "succeeded",
- "createdDateTime": "2021-02-04T06:32:08.2752706+00:00",
- "lastUpdatedDateTime": "2021-02-04T06:32:08.7706172+00:00",
- "analyzeResult": {
- "version": "3.2",
- "readResults": [
- {
- "page": 1,
- "angle": 2.1243,
- "width": 502,
- "height": 252,
- "unit": "pixel",
- "lines": [
- {
- "boundingBox": [
- 58,
- 42,
- 314,
- 59,
- 311,
- 123,
- 56,
- 121
- ],
- "text": "Tabs vs",
- "appearance": {
- "style": {
- "name": "handwriting",
- "confidence": 0.96
- }
- },
- "words": [
- {
- "boundingBox": [
- 68,
- 44,
- 225,
- 59,
- 224,
- 122,
- 66,
- 123
- ],
- "text": "Tabs",
- "confidence": 0.933
- },
- {
- "boundingBox": [
- 241,
- 61,
- 314,
- 72,
- 314,
- 123,
- 239,
- 122
- ],
- "text": "vs",
- "confidence": 0.977
- }
- ]
- }
- ]
- }
- ]
- }
-}
-```
-
-### Handwritten classification for text lines (Latin languages only)
-
-The response includes a classification of whether each text line is in handwriting style, along with a confidence score. This feature is only supported for Latin languages. The following example shows the handwritten classification for the text in the image.
--
-## Next steps
-
-- Get started with the [OCR (Read) REST API or client library quickstarts](../quickstarts-sdk/client-library.md).
-- Learn about the [Read 3.2 REST API](https://westus.dev.cognitive.microsoft.com/docs/services/computer-vision-v3-2/operations/5d986960601faab4bf452005).
cognitive-services Coco Verification https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Computer-vision/how-to/coco-verification.md
- Title: Verify a COCO annotation file-
-description: Use a Python script to verify your COCO file for custom model training.
------ Previously updated : 03/21/2023---
-# Check the format of your COCO annotation file
-
-<!-- nbstart https://raw.githubusercontent.com/Azure-Samples/cognitive-service-vision-model-customization-python-samples/main/docs/check_coco_annotation.ipynb -->
-
-> [!TIP]
-> Contents of _check_coco_annotation.ipynb_. **[Open in GitHub](https://github.com/Azure-Samples/cognitive-service-vision-model-customization-python-samples/blob/main/docs/check_coco_annotation.ipynb)**.
-
-This notebook demonstrates how to check if the format of your annotation file is correct. First, install the Python samples package from the command line:
-
-```bash
-pip install cognitive-service-vision-model-customization-python-samples
-```
-
-Then, run the following Python code to check the file's format. You can either enter this code in a Python script, or run the [Jupyter Notebook](https://github.com/Azure-Samples/cognitive-service-vision-model-customization-python-samples/blob/main/docs/check_coco_annotation.ipynb) on a compatible platform.
-
-```python
-from cognitive_service_vision_model_customization_python_samples import check_coco_annotation_file, AnnotationKind, Purpose
-import pathlib
-import json
-
-coco_file_path = pathlib.Path("{your_coco_file_path}")
-annotation_kind = AnnotationKind.MULTICLASS_CLASSIFICATION # or AnnotationKind.OBJECT_DETECTION
-purpose = Purpose.TRAINING # or Purpose.EVALUATION
-
-check_coco_annotation_file(json.loads(coco_file_path.read_text()), annotation_kind, purpose)
-```
-
-<!-- nbend -->
-
-## Use COCO file in a new project
-
-Once your COCO file is verified, you're ready to import it to your model customization project. See [Create and train a custom model](model-customization.md) and go to the section on selecting/importing a COCO file&mdash;you can follow the guide from there to the end.
-
-## Next steps
-
-* [Create and train a custom model](model-customization.md)
cognitive-services Find Similar Faces https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Computer-vision/how-to/find-similar-faces.md
- Title: "Find similar faces"-
-description: Use the Face service to find similar faces (face search by image).
------- Previously updated : 11/07/2022----
-# Find similar faces
--
-The Find Similar operation does face matching between a target face and a set of candidate faces, finding a smaller set of faces that look similar to the target face. This is useful for doing a face search by image.
-
-This guide demonstrates how to use the Find Similar feature in the different language SDKs. The following sample code assumes you have already authenticated a Face client object. For details on how to do this, follow a [quickstart](../quickstarts-sdk/identity-client-library.md).
-
-## Set up sample URL
-
-This guide uses remote images that are accessed by URL. Save a reference to the following URL string. All of the images accessed in this guide are located at this URL path.
-
-```
-https://raw.githubusercontent.com/Azure-Samples/cognitive-services-sample-data-files/master/Face/images/
-```
-
-## Detect faces for comparison
-
-You need to detect faces in images before you can compare them. In this guide, the following remote image, called *findsimilar.jpg*, will be used as the source:
-
-![Photo of a man who is smiling.](../media/quickstarts/find-similar.jpg)
-
-#### [C#](#tab/csharp)
-
-The following face detection method is optimized for comparison operations. It doesn't extract detailed face attributes, and it uses an optimized recognition model.
-
-[!code-csharp[](~/cognitive-services-quickstart-code/dotnet/Face/FaceQuickstart.cs?name=snippet_face_detect_recognize)]
-
-The following code uses the above method to get face data from a series of images.
-
-[!code-csharp[](~/cognitive-services-quickstart-code/dotnet/Face/FaceQuickstart.cs?name=snippet_loadfaces)]
--
-#### [JavaScript](#tab/javascript)
-
-The following face detection method is optimized for comparison operations. It doesn't extract detailed face attributes, and it uses an optimized recognition model.
--
-The following code uses the above method to get face data from a series of images.
---
-#### [REST API](#tab/rest)
-
-Copy the following cURL command and insert your key and endpoint where appropriate. Then run the command to detect one of the target faces.
--
-Find the `"faceId"` value in the JSON response and save it to a temporary location. Then, call the above command again for these other image URLs, and save their face IDs as well. You'll use these IDs as the target group of faces from which to find a similar face.
--
-Finally, detect the single source face that you'll use for matching, and save its ID. Keep this ID separate from the others.
----
-## Find and print matches
-
-In this guide, the face detected in the *Family1-Dad1.jpg* image should be returned as the face that's similar to the source image face.
-
-![Photo of a man who is smiling; this is the same person as the previous image.](../media/quickstarts/family-1-dad-1.jpg)
-
-#### [C#](#tab/csharp)
-
-The following code calls the Find Similar API on the saved list of faces.
-
-[!code-csharp[](~/cognitive-services-quickstart-code/dotnet/Face/FaceQuickstart.cs?name=snippet_find_similar)]
-
-The following code prints the match details to the console:
-
-[!code-csharp[](~/cognitive-services-quickstart-code/dotnet/Face/FaceQuickstart.cs?name=snippet_find_similar_print)]
-
-#### [JavaScript](#tab/javascript)
-
-The following method takes a set of target faces and a single source face. Then, it compares them and finds all the target faces that are similar to the source face. Finally, it prints the match details to the console.
---
-#### [REST API](#tab/rest)
-
-Copy the following cURL command and insert your key and endpoint where appropriate.
--
-Paste in the following JSON content for the `body` value:
--
-Copy the source face ID value into the `"faceId"` field. Then copy the other face IDs, separated by commas, as terms in the `"faceIds"` array.
-
-Run the command, and the returned JSON should show the correct face ID as a similar match.
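-
-If you'd rather script these REST calls, the following is a hedged Python sketch of the Find Similar request. The endpoint, key, and face IDs are placeholders for the values you collected in the previous steps, and the candidate-count and mode values are illustrative.
-
-```python
-import requests
-
-endpoint = "https://<your-face-resource>.cognitiveservices.azure.com"  # placeholder
-key = "<your-face-key>"                                                 # placeholder
-
-# Compare the saved source face ID against the saved target face IDs.
-response = requests.post(
-    f"{endpoint}/face/v1.0/findsimilars",
-    headers={"Ocp-Apim-Subscription-Key": key, "Content-Type": "application/json"},
-    json={
-        "faceId": "<source-face-id>",                               # placeholder
-        "faceIds": ["<target-face-id-1>", "<target-face-id-2>"],    # placeholders
-        "maxNumOfCandidatesReturned": 10,
-        "mode": "matchPerson",
-    },
-)
-response.raise_for_status()
-print(response.json())  # each match includes a faceId and a confidence score
-```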
---
-## Next steps
-
-In this guide, you learned how to call the Find Similar API to do a face search by similarity in a larger group of faces. Next, learn more about the different recognition models available for face comparison operations.
-
-* [Specify a face recognition model](specify-recognition-model.md)
cognitive-services Generate Thumbnail https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Computer-vision/how-to/generate-thumbnail.md
- Title: "Generate a smart-cropped thumbnail - Image Analysis"-
-description: Use the Image Analysis REST API to generate a thumbnail with smart cropping.
------- Previously updated : 07/20/2022---
-# Generate a smart-cropped thumbnail
-
-You can use Image Analysis to generate a thumbnail with smart cropping. You specify the desired height and width, which can differ in aspect ratio from the input image. Image Analysis uses smart cropping to intelligently identify the area of interest and generate cropping coordinates around that region.
-
-## Call the Generate Thumbnail API
-
-To call the API, do the following steps:
-
-1. Copy the following command into a text editor.
-1. Make the following changes in the command where needed:
- 1. Replace the value of `<subscriptionKey>` with your key.
- 1. Replace the value of `<thumbnailFile>` with the path and name of the file in which to save the returned thumbnail image.
-    1. Replace the first part of the request URL (`westus`) with the text in your own endpoint URL.
- [!INCLUDE [Custom subdomains notice](../../../../includes/cognitive-services-custom-subdomains-note.md)]
- 1. Optionally, change the image URL in the request body (`https://upload.wikimedia.org/wikipedia/commons/thumb/5/56/Shorkie_Poo_Puppy.jpg/1280px-Shorkie_Poo_Puppy.jpg\`) to the URL of a different image from which to generate a thumbnail.
-1. Open a command prompt window.
-1. Paste the command from the text editor into the command prompt window.
-1. Press enter to run the program.
-
- ```bash
- curl -H "Ocp-Apim-Subscription-Key: <subscriptionKey>" -o <thumbnailFile> -H "Content-Type: application/json" "https://westus.api.cognitive.microsoft.com/vision/v3.2/generateThumbnail?width=100&height=100&smartCropping=true" -d "{\"url\":\"https://upload.wikimedia.org/wikipedia/commons/thumb/5/56/Shorkie_Poo_Puppy.jpg/1280px-Shorkie_Poo_Puppy.jpg\"}"
- ```
-
-## Examine the response
-
-A successful response writes the thumbnail image to the file specified in `<thumbnailFile>`. If the request fails, the response contains an error code and a message to help determine what went wrong. If the request seems to succeed but the created thumbnail isn't a valid image file, it's possible that your key is not valid.
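-
-If you'd rather script the call than run cURL, the following is a hedged Python sketch that makes the same request and writes the returned thumbnail to disk (the key, endpoint region, and output file name are placeholders):
-
-```python
-import requests
-
-key = "<subscriptionKey>"                                # placeholder
-endpoint = "https://westus.api.cognitive.microsoft.com"  # use your own endpoint region
-
-response = requests.post(
-    f"{endpoint}/vision/v3.2/generateThumbnail",
-    params={"width": 100, "height": 100, "smartCropping": "true"},
-    headers={"Ocp-Apim-Subscription-Key": key, "Content-Type": "application/json"},
-    json={"url": "https://upload.wikimedia.org/wikipedia/commons/thumb/5/56/Shorkie_Poo_Puppy.jpg/1280px-Shorkie_Poo_Puppy.jpg"},
-)
-response.raise_for_status()
-
-# The body of a successful response is the binary thumbnail image.
-with open("thumbnail.jpg", "wb") as f:
-    f.write(response.content)
-```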
-
-## Next steps
-
-If you'd like to call Image Analysis APIs using a native SDK in the language of your choice, follow the quickstart to get set up.
-
-[Quickstart (Image Analysis)](../quickstarts-sdk/image-analysis-client-library.md)
cognitive-services Identity Access Token https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Computer-vision/how-to/identity-access-token.md
- Title: "Use limited access tokens - Face"-
-description: Learn how ISVs can manage the Face API usage of their clients by issuing access tokens that grant access to Face features which are normally gated.
------- Previously updated : 05/11/2023---
-# Use limited access tokens for Face
-
-Independent software vendors (ISVs) can manage the Face API usage of their clients by issuing access tokens that grant access to Face features which are normally gated. This allows client companies to use the Face API without having to go through the formal approval process.
-
-This guide shows you how to generate the access tokens, if you're an approved ISV, and how to use the tokens if you're a client.
-
-The LimitedAccessToken feature is a part of the existing [Cognitive Services token service](https://westus.dev.cognitive.microsoft.com/docs/services/57346a70b4769d2694911369/operations/issueScopedToken). We have added a new operation for the purpose of bypassing the Limited Access gate for approved scenarios. Only ISVs that pass the gating requirements will be given access to this feature.
-
-## Example use case
-
-A company sells software that uses the Azure Face service to operate door access security systems. Their clients, individual manufacturers of door devices, subscribe to the software and run it on their devices. These client companies want to make Face API calls from their devices to perform Limited Access operations like face identification. By relying on access tokens from the ISV, they can bypass the formal approval process for face identification. The ISV, which has already been approved, can grant the client just-in-time access tokens.
-
-## Expectation of responsibility
-
-The issuing ISV is responsible for ensuring that the tokens are used only for the approved purpose.
-
-If the ISV learns that a client is using the LimitedAccessToken for non-approved purposes, the ISV should stop generating tokens for that customer. Microsoft can track the issuance and usage of LimitedAccessTokens, and we reserve the right to revoke an ISV's access to the **issueLimitedAccessToken** API if abuse is not addressed.
-
-## Prerequisites
-
-* [cURL](https://curl.haxx.se/) installed (or another tool that can make HTTP requests).
-* The ISV needs to have either an [Azure Face](https://ms.portal.azure.com/#view/Microsoft_Azure_ProjectOxford/CognitiveServicesHub/~/Face) resource or a [Cognitive Services multi-service](https://ms.portal.azure.com/#view/Microsoft_Azure_ProjectOxford/CognitiveServicesHub/~/AllInOne) resource.
-* The client needs to have an [Azure Face](https://ms.portal.azure.com/#view/Microsoft_Azure_ProjectOxford/CognitiveServicesHub/~/Face) resource.
-
-## Step 1: ISV obtains client's Face resource ID
-
-The ISV should set up a communication channel between their own secure cloud service (which will generate the access token) and their application running on the client's device. The client's Face resource ID must be known prior to generating the LimitedAccessToken.
-
-The Face resource ID has the following format:
-
-`/subscriptions/<subscription-id>/resourceGroups/<resource-group-name>/providers/Microsoft.CognitiveServices/accounts/<face-resource-name>`
-
-For example:
-
-`/subscriptions/dc4d27d9-ea49-4921-938f-7782a774e151/resourceGroups/client-rg/providers/Microsoft.CognitiveServices/accounts/client-face-api`
-
-## Step 2: ISV generates a token
-
-The ISV's cloud service, running in a secure environment, calls the **issueLimitedAccessToken** API using their end customer's known Face resource ID.
-
-To call the **issueLimitedAccessToken** API, copy the following cURL command to a text editor.
-
-```bash
-curl -X POST 'https://<isv-endpoint>/sts/v1.0/issueLimitedAccessToken?expiredTime=3600' \
--H 'Ocp-Apim-Subscription-Key: <client-face-key>' \
--H 'Content-Type: application/json' \
--d '{
- "resourceId": "/subscriptions/<subscription-id>/resourceGroups/<resource-group-name>/providers/Microsoft.CognitiveServices/accounts/<face-resource-name>",
- "featureFlags": ["Face.Identification", "Face.Verification"]
-}'
-```
-
-Then, make the following changes:
-1. Replace `<isv-endpoint>` with the endpoint of the ISV's resource. For example, **westus.api.cognitive.microsoft.com**.
-1. Optionally set the `expiredTime` parameter to set the expiration time of the token in seconds. It must be between 60 and 86400. The default value is 3600 (one hour).
-1. Replace `<client-face-key>` with the key of the client's Face resource.
-1. Replace `<subscription-id>` with the subscription ID of the client's Azure subscription.
-1. Replace `<resource-group-name>` with the name of the client's resource group.
-1. Replace `<face-resource-name>` with the name of the client's Face resource.
-1. Set `"featureFlags"` to the set of access roles you want to grant. The available flags are `"Face.Identification"`, `"Face.Verification"`, and `"LimitedAccess.HighRisk"`. An ISV can only grant permissions that it has been granted itself by Microsoft. For example, if the ISV has been granted access to face identification, it can create a LimitedAccessToken for **Face.Identification** for the client. All token creations and uses are logged for usage and security purposes.
-
-Then, paste the command into a terminal window and run it.
-
-The API should return a `200` response with the token in the form of a JSON web token (`application/jwt`). If you want to inspect the LimitedAccessToken, you can do so using [JWT](https://jwt.io/).
-
-## Step 3: Client application uses the token
-
-The ISV's application can then pass the LimitedAccessToken as an HTTP request header for future Face API requests on behalf of the client. The token works independently of other authentication mechanisms, so none of the client's personal information is ever exposed to the ISV.
-
-> [!CAUTION]
-> The client doesn't need to be aware of the token value, as it can be passed in the background. If the client were to use a web monitoring tool to intercept the traffic, they'd be able to view the LimitedAccessToken header. However, because the token expires after a short period of time, they are limited in what they can do with it. This risk is known and considered acceptable.
->
-> It's for each ISV to decide how exactly it passes the token from its cloud service to the client application.
-
-#### [REST API](#tab/rest)
-
-An example Face API request using the access token looks like this:
-
-```bash
-curl -X POST 'https://<client-endpoint>/face/v1.0/identify' \
--H 'Ocp-Apim-Subscription-Key: <client-face-key>' \
--H 'LimitedAccessToken: Bearer <token>' \
--H 'Content-Type: application/json' \
--d '{
- "largePersonGroupId": "sample_group",
- "faceIds": [
- "c5c24a82-6845-4031-9d5d-978df9175426",
- "65d083d4-9447-47d1-af30-b626144bf0fb"
- ],
- "maxNumOfCandidatesReturned": 1,
- "confidenceThreshold": 0.5
-}'
-```
-
-> [!NOTE]
-> The endpoint URL and Face key belong to the client's Face resource, not the ISV's resource. The `<token>` is passed as an HTTP request header.
-
-#### [C#](#tab/csharp)
-
-The following code snippets show you how to use an access token with the [Face SDK for C#](https://www.nuget.org/packages/Microsoft.Azure.CognitiveServices.Vision.Face).
-
-The following class uses an access token to create a **ServiceClientCredentials** object that can be used to authenticate a Face API client object. It automatically adds the access token as a header in every request that the Face client will make.
-
-```csharp
-public class LimitedAccessTokenWithApiKeyClientCredential : ServiceClientCredentials
-{
- /// <summary>
- /// Creates a new instance of the LimitedAccessTokenWithApiKeyClientCredential class
- /// </summary>
- /// <param name="apiKey">API Key for the Face API or CognitiveService endpoint</param>
- /// <param name="limitedAccessToken">LimitedAccessToken to bypass the limited access program, requires ISV sponsership.</param>
-
- public LimitedAccessTokenWithApiKeyClientCredential(string apiKey, string limitedAccessToken)
- {
- this.ApiKey = apiKey;
- this.LimitedAccessToken = limitedAccessToken;
- }
-
- private readonly string ApiKey;
-    private readonly string LimitedAccessToken;
-
- /// <summary>
-    /// Add the subscription key and LimitedAccessToken headers to each outgoing request
- /// </summary>
- /// <param name="request">The outgoing request</param>
- /// <param name="cancellationToken">A token to cancel the operation</param>
- public override Task ProcessHttpRequestAsync(HttpRequestMessage request, CancellationToken cancellationToken)
- {
- if (request == null)
- throw new ArgumentNullException("request");
- request.Headers.Add("Ocp-Apim-Subscription-Key", ApiKey);
- request.Headers.Add("LimitedAccessToken", $"Bearer {LimitedAccesToken}");
-
- return Task.FromResult<object>(null);
- }
-}
-```
-
-In the client-side application, the helper class can be used like in this example:
-
-```csharp
-static void Main(string[] args)
-{
- // create Face client object
- var faceClient = new FaceClient(new LimitedAccessTokenWithApiKeyClientCredential(apiKey: "<client-face-key>", limitedAccessToken: "<token>"));
-
-    faceClient.Endpoint = "https://<client-endpoint>";
-
- // use Face client in an API call
- using (var stream = File.OpenRead("photo.jpg"))
- {
- var result = faceClient.Face.DetectWithStreamAsync(stream, detectionModel: "Detection_03", recognitionModel: "Recognition_04", returnFaceId: true).Result;
-
- Console.WriteLine(JsonConvert.SerializeObject(result));
- }
-}
-```
--
-## Next steps
-* [LimitedAccessToken API reference](https://westus.dev.cognitive.microsoft.com/docs/services/57346a70b4769d2694911369/operations/issueLimitedAccessToken)
cognitive-services Identity Detect Faces https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Computer-vision/how-to/identity-detect-faces.md
- Title: "Call the Detect API - Face"-
-description: This guide demonstrates how to use face detection to extract attributes like age, emotion, or head pose from a given image.
------- Previously updated : 12/27/2022----
-# Call the Detect API
---
-This guide demonstrates how to use the face detection API to extract attributes like age, emotion, or head pose from a given image. You'll learn the different ways to configure the behavior of this API to meet your needs.
-
-The code snippets in this guide are written in C# by using the Azure Cognitive Services Face client library. The same functionality is available through the [REST API](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f30395236).
--
-## Setup
-
-This guide assumes that you already constructed a [FaceClient](/dotnet/api/microsoft.azure.cognitiveservices.vision.face.faceclient) object, named `faceClient`, using a Face key and endpoint URL. For instructions on how to set up this feature, follow one of the quickstarts.
-
-## Submit data to the service
-
-To find faces and get their locations in an image, call the [DetectWithUrlAsync](/dotnet/api/microsoft.azure.cognitiveservices.vision.face.faceoperationsextensions.detectwithurlasync) or [DetectWithStreamAsync](/dotnet/api/microsoft.azure.cognitiveservices.vision.face.faceoperationsextensions.detectwithstreamasync) method. **DetectWithUrlAsync** takes a URL string as input, and **DetectWithStreamAsync** takes the raw byte stream of an image as input.
--
-You can query the returned [DetectedFace](/dotnet/api/microsoft.azure.cognitiveservices.vision.face.models.detectedface) objects for the rectangles that give the pixel coordinates of each face. If you set _returnFaceId_ to `true` (approved customers only), you can get the unique ID for each face, which you can use in later face recognition tasks.
--
-For information on how to parse the location and dimensions of the face, see [FaceRectangle](/dotnet/api/microsoft.azure.cognitiveservices.vision.face.models.facerectangle). Usually, this rectangle contains the eyes, eyebrows, nose, and mouth. The top of the head, ears, and chin aren't necessarily included. To use the face rectangle to crop a complete head or get a mid-shot portrait, you should expand the rectangle in each direction.
-
-## Determine how to process the data
-
-This guide focuses on the specifics of the Detect call, such as what arguments you can pass and what you can do with the returned data. We recommend that you query only for the features you need, because each additional operation takes more time to complete.
-
-### Get face landmarks
-
-[Face landmarks](../concept-face-detection.md#face-landmarks) are a set of easy-to-find points on a face, such as the pupils or the tip of the nose. To get face landmark data, set the _detectionModel_ parameter to `DetectionModel.Detection01` and the _returnFaceLandmarks_ parameter to `true`.
--
-### Get face attributes
-
-Besides face rectangles and landmarks, the face detection API can analyze several conceptual attributes of a face. For a full list, see the [Face attributes](../concept-face-detection.md#attributes) conceptual section.
-
-To analyze face attributes, set the _detectionModel_ parameter to `DetectionModel.Detection01` and the _returnFaceAttributes_ parameter to a list of [FaceAttributeType Enum](/dotnet/api/microsoft.azure.cognitiveservices.vision.face.models.faceattributetype) values.
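-
-The same options are available through the REST API mentioned earlier. The following hedged Python sketch shows one way to request landmarks and attributes in a single detection call; the endpoint, key, image URL, and attribute list are illustrative placeholders.
-
-```python
-import requests
-
-endpoint = "https://<your-face-resource>.cognitiveservices.azure.com"  # placeholder
-key = "<your-face-key>"                                                 # placeholder
-
-response = requests.post(
-    f"{endpoint}/face/v1.0/detect",
-    params={
-        "detectionModel": "detection_01",
-        "returnFaceId": "false",                      # face IDs require Limited Access approval
-        "returnFaceLandmarks": "true",
-        "returnFaceAttributes": "headPose,glasses",   # illustrative attribute list
-    },
-    headers={"Ocp-Apim-Subscription-Key": key, "Content-Type": "application/json"},
-    json={"url": "https://example.com/photo.jpg"},    # placeholder image URL
-)
-response.raise_for_status()
-for face in response.json():
-    print(face["faceRectangle"], face.get("faceLandmarks"), face.get("faceAttributes"))
-```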
---
-## Get results from the service
-
-### Face landmark results
-
-The following code demonstrates how you might retrieve the locations of the nose and pupils:
--
-You also can use face landmark data to accurately calculate the direction of the face. For example, you can define the rotation of the face as a vector from the center of the mouth to the center of the eyes. The following code calculates this vector:
--
-When you know the direction of the face, you can rotate the rectangular face frame to align it more properly. To crop faces in an image, you can programmatically rotate the image so the faces always appear upright.
--
-### Face attribute results
-
-The following code shows how you might retrieve the face attribute data that you requested in the original call.
--
-To learn more about each of the attributes, see the [Face detection and attributes](../concept-face-detection.md) conceptual guide.
-
-## Next steps
-
-In this guide, you learned how to use the various functionalities of face detection and analysis. Next, integrate these features into an app to add face data from users.
-
-- [Tutorial: Add users to a Face service](../enrollment-overview.md)
-
-## Related articles
-
-- [Reference documentation (REST)](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f30395236)
-- [Reference documentation (.NET SDK)](/dotnet/api/overview/azure/cognitiveservices/face-readme)
cognitive-services Image Retrieval https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Computer-vision/how-to/image-retrieval.md
- Title: Do image retrieval using vectorization - Image Analysis 4.0-
-description: Learn how to call the image retrieval API to vectorize image and search terms.
------- Previously updated : 02/21/2023----
-# Do image retrieval using vectorization (version 4.0 preview)
-
-The Image Retrieval APIs enable the _vectorization_ of images and text queries. They convert images to coordinates in a multi-dimensional vector space. Then, incoming text queries can also be converted to vectors, and images can be matched to the text based on semantic closeness. This allows the user to search a set of images using text, without the need to use image tags or other metadata. Semantic closeness often produces better results in search.
-
-> [!IMPORTANT]
-> These APIs are only available in the following geographic regions: East US, France Central, Korea Central, North Europe, Southeast Asia, West Europe, West US.
-
-## Prerequisites
-
-* Azure subscription - [Create one for free](https://azure.microsoft.com/free/cognitive-services)
-* Once you have your Azure subscription, <a href="https://portal.azure.com/#create/Microsoft.CognitiveServicesComputerVision" title="Create a Computer Vision resource" target="_blank">create a Computer Vision resource </a> in the Azure portal to get your key and endpoint. Be sure to create it in one of the permitted geographic regions: East US, France Central, Korea Central, North Europe, Southeast Asia, West Europe, West US.
- * After it deploys, select **Go to resource**. Copy the key and endpoint to a temporary location to use later on.
-
-## Try out Image Retrieval
-
-You can try out the Image Retrieval feature quickly and easily in your browser using Vision Studio.
-
-> [!IMPORTANT]
-> The Vision Studio experience is limited to 500 images. To use a larger image set, create your own search application using the APIs in this guide.
-
-> [!div class="nextstepaction"]
-> [Try Vision Studio](https://portal.vision.cognitive.azure.com/)
-
-## Call the Vectorize Image API
-
-The `retrieval:vectorizeImage` API lets you convert an image's data to a vector. To call it, make the following changes to the cURL command below:
-
-1. Replace `<endpoint>` with your Computer Vision endpoint.
-1. Replace `<subscription-key>` with your Computer Vision key.
-1. In the request body, set `"url"` to the URL of a remote image you want to use.
-
-```bash
-curl.exe -v -X POST "https://<endpoint>/computervision/retrieval:vectorizeImage?api-version=2023-02-01-preview&modelVersion=latest" -H "Content-Type: application/json" -H "Ocp-Apim-Subscription-Key: <subscription-key>" --data-ascii "
-{
-'url':'https://learn.microsoft.com/azure/cognitive-services/computer-vision/media/quickstarts/presentation.png'
-}"
-```
-
-To vectorize a local image, you'd put the binary image data in the HTTP request body.
-
-The API call returns a **vector** JSON object, which defines the image's coordinates in the high-dimensional vector space.
-
-```json
-{
- "modelVersion": "2022-04-11",
- "vector": [ -0.09442752, -0.00067171326, -0.010985051, ... ]
-}
-```
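-
-To illustrate the local-image case mentioned above, here's a hedged Python sketch that sends the binary image data instead of a URL. The endpoint, key, and file path are placeholders, and `application/octet-stream` is used as the content type for the raw bytes.
-
-```python
-import requests
-
-endpoint = "https://<endpoint>"     # placeholder, e.g. https://<your-resource>.cognitiveservices.azure.com
-key = "<subscription-key>"          # placeholder
-
-with open("local-image.png", "rb") as f:  # placeholder file path
-    image_bytes = f.read()
-
-response = requests.post(
-    f"{endpoint}/computervision/retrieval:vectorizeImage",
-    params={"api-version": "2023-02-01-preview", "modelVersion": "latest"},
-    headers={"Ocp-Apim-Subscription-Key": key, "Content-Type": "application/octet-stream"},
-    data=image_bytes,
-)
-response.raise_for_status()
-vector = response.json()["vector"]
-```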
-
-## Call the Vectorize Text API
-
-The `retrieval:vectorizeText` API lets you convert a text string to a vector. To call it, make the following changes to the cURL command below:
-
-1. Replace `<endpoint>` with your Computer Vision endpoint.
-1. Replace `<subscription-key>` with your Computer Vision key.
-1. In the request body, set `"text"` to the example search term you want to use.
-
-```bash
-curl.exe -v -X POST "https://<endpoint>/computervision/retrieval:vectorizeText?api-version=2023-02-01-preview&modelVersion=latest" -H "Content-Type: application/json" -H "Ocp-Apim-Subscription-Key: <subscription-key>" --data-ascii "
-{
-'text':'cat jumping'
-}"
-```
-
-The API call returns a **vector** JSON object, which defines the text string's coordinates in the high-dimensional vector space.
-
-```json
-{
- "modelVersion": "2022-04-11",
- "vector": [ -0.09442752, -0.00067171326, -0.010985051, ... ]
-}
-```
-
-## Calculate vector similarity
-
-Cosine similarity is a method for measuring the similarity of two vectors. In an Image Retrieval scenario, you'll compare the search query vector with each image's vector. Images that are above a certain threshold of similarity can then be returned as search results.
-
-The following example C# code calculates the cosine similarity between two vectors. It's up to you to decide what similarity threshold to use for returning images as search results.
-
-```csharp
-public static float GetCosineSimilarity(float[] vector1, float[] vector2)
-{
- float dotProduct = 0;
- int length = Math.Min(vector1.Length, vector2.Length);
- for (int i = 0; i < length; i++)
- {
- dotProduct += vector1[i] * vector2[i];
- }
-    float magnitude1 = (float)Math.Sqrt(vector1.Select(x => x * x).Sum());
-    float magnitude2 = (float)Math.Sqrt(vector2.Select(x => x * x).Sum());
-
- return dotProduct / (magnitude1 * magnitude2);
-}
-```
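-
-In a search scenario, you'd typically apply the same calculation to every image vector and rank the results. The following is a hedged Python sketch of that step; it assumes you've already built a mapping of image names to vectors by calling the Vectorize Image API, and the threshold value is purely illustrative.
-
-```python
-import math
-
-def cosine_similarity(v1, v2):
-    dot = sum(a * b for a, b in zip(v1, v2))
-    norm1 = math.sqrt(sum(a * a for a in v1))
-    norm2 = math.sqrt(sum(a * a for a in v2))
-    return dot / (norm1 * norm2)
-
-def rank_images(image_vectors, query_vector, threshold=0.2):
-    """Return (image, score) pairs above the threshold, best match first.
-
-    image_vectors: dict mapping image name/URL -> vector from the Vectorize Image API
-    query_vector: vector returned by the Vectorize Text API for the search term
-    """
-    scored = ((name, cosine_similarity(vec, query_vector)) for name, vec in image_vectors.items())
-    return sorted((pair for pair in scored if pair[1] >= threshold), key=lambda p: p[1], reverse=True)
-```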
-
-## Next steps
-
-[Image retrieval concepts](../concept-image-retrieval.md)
cognitive-services Migrate Face Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Computer-vision/how-to/migrate-face-data.md
- Title: "Migrate your face data across subscriptions - Face"-
-description: This guide shows you how to migrate your stored face data from one Face subscription to another.
------- Previously updated : 02/22/2021----
-# Migrate your face data to a different Face subscription
-
-> [!CAUTION]
-> The Snapshot API will be retired for all users June 30 2023.
-
-This guide shows you how to move face data, such as a saved PersonGroup object with faces, to a different Azure Cognitive Services Face subscription. To move the data, you use the Snapshot feature. This way you avoid having to repeatedly build and train a PersonGroup or FaceList object when you move or expand your operations. For example, perhaps you created a PersonGroup object with a free subscription and now want to migrate it to your paid subscription. Or you might need to sync face data across subscriptions in different regions for a large enterprise operation.
-
-This same migration strategy also applies to LargePersonGroup and LargeFaceList objects. If you aren't familiar with the concepts in this guide, see their definitions in the [Face recognition concepts](../concept-face-recognition.md) guide. This guide uses the Face .NET client library with C#.
-
-> [!WARNING]
-> The Snapshot feature might move your data outside the geographic region you originally selected. Data might move to West US, West Europe, and Southeast Asia regions.
-
-## Prerequisites
-
-You need the following items:
-
-- Two Face keys, one with the existing data and one to migrate to. To subscribe to the Face service and get your key, follow the instructions in [Create a Cognitive Services account](../../cognitive-services-apis-create-account.md).
-- The Face subscription ID string that corresponds to the target subscription. To find it, select **Overview** in the Azure portal.
-- Any edition of [Visual Studio 2015 or 2017](https://www.visualstudio.com/downloads/).
-
-## Create the Visual Studio project
-
-This guide uses a simple console app to run the face data migration. For a full implementation, see the [Face snapshot sample](https://github.com/Azure-Samples/cognitive-services-dotnet-sdk-samples/tree/master/app-samples/FaceApiSnapshotSample/FaceApiSnapshotSample) on GitHub.
-
-1. In Visual Studio, create a new Console app .NET Framework project. Name it **FaceApiSnapshotSample**.
-1. Get the required NuGet packages. Right-click your project in the Solution Explorer, and select **Manage NuGet Packages**. Select the **Browse** tab, and select **Include prerelease**. Find and install the following package:
- - [Microsoft.Azure.CognitiveServices.Vision.Face 2.3.0-preview](https://www.nuget.org/packages/Microsoft.Azure.CognitiveServices.Vision.Face/2.2.0-preview)
-
-## Create face clients
-
-In the **Main** method in *Program.cs*, create two [FaceClient](/dotnet/api/microsoft.azure.cognitiveservices.vision.face.faceclient) instances for your source and target subscriptions. This example uses a Face subscription in the East Asia region as the source and a West US subscription as the target. This example demonstrates how to migrate data from one Azure region to another.
--
-```csharp
-var FaceClientEastAsia = new FaceClient(new ApiKeyServiceClientCredentials("<East Asia Key>"))
- {
- Endpoint = "https://southeastasia.api.cognitive.microsoft.com/>"
- };
-
-var FaceClientWestUS = new FaceClient(new ApiKeyServiceClientCredentials("<West US Key>"))
- {
- Endpoint = "https://westus.api.cognitive.microsoft.com/"
- };
-```
-
-Fill in the key values and endpoint URLs for your source and target subscriptions.
--
-## Prepare a PersonGroup for migration
-
-You need the ID of the PersonGroup in your source subscription to migrate it to the target subscription. Use the [PersonGroupOperationsExtensions.ListAsync](/dotnet/api/microsoft.azure.cognitiveservices.vision.face.persongroupoperationsextensions.listasync) method to retrieve a list of your PersonGroup objects. Then get the [PersonGroup.PersonGroupId](/dotnet/api/microsoft.azure.cognitiveservices.vision.face.models.persongroup.persongroupid#Microsoft_Azure_CognitiveServices_Vision_Face_Models_PersonGroup_PersonGroupId) property. This process looks different based on what PersonGroup objects you have. In this guide, the source PersonGroup ID is stored in `personGroupId`.
-
-> [!NOTE]
-> The [sample code](https://github.com/Azure-Samples/cognitive-services-dotnet-sdk-samples/tree/master/app-samples/FaceApiSnapshotSample/FaceApiSnapshotSample) creates and trains a new PersonGroup to migrate. In most cases, you should already have a PersonGroup to use.
-
-## Take a snapshot of a PersonGroup
-
-A snapshot is temporary remote storage for certain Face data types. It functions as a kind of clipboard to copy data from one subscription to another. First, you take a snapshot of the data in the source subscription. Then you apply it to a new data object in the target subscription.
-
-Use the source subscription's FaceClient instance to take a snapshot of the PersonGroup. Use [TakeAsync](/dotnet/api/microsoft.azure.cognitiveservices.vision.face.snapshotoperationsextensions.takeasync) with the PersonGroup ID and the target subscription's ID. If you have multiple target subscriptions, add them as array entries in the third parameter.
-
-```csharp
-var takeSnapshotResult = await FaceClientEastAsia.Snapshot.TakeAsync(
- SnapshotObjectType.PersonGroup,
- personGroupId,
- new[] { "<Azure West US Subscription ID>" /* Put other IDs here, if multiple target subscriptions wanted */ });
-```
-
-> [!NOTE]
-> The process of taking and applying snapshots doesn't disrupt any regular calls to the source or target PersonGroups or FaceLists. Don't make simultaneous calls that change the source object, such as [FaceList management calls](/dotnet/api/microsoft.azure.cognitiveservices.vision.face.facelistoperations) or the [PersonGroup Train](/dotnet/api/microsoft.azure.cognitiveservices.vision.face.persongroupoperations) call, for example. The snapshot operation might run before or after those operations or might encounter errors.
-
-## Retrieve the snapshot ID
-
-The method used to take snapshots is asynchronous, so you must wait for its completion. Snapshot operations can't be canceled. In this code, the `WaitForOperation` method monitors the asynchronous call. It checks the status every 100 ms. After the operation finishes, retrieve an operation ID by parsing the `OperationLocation` field.
-
-```csharp
-var takeOperationId = Guid.Parse(takeSnapshotResult.OperationLocation.Split('/')[2]);
-var operationStatus = await WaitForOperation(FaceClientEastAsia, takeOperationId);
-```
-
-A typical `OperationLocation` value looks like this:
-
-```csharp
-"/operations/a63a3bdd-a1db-4d05-87b8-dbad6850062a"
-```
-
-The `WaitForOperation` helper method is here:
-
-```csharp
-/// <summary>
-/// Waits for the take/apply operation to complete and returns the final operation status.
-/// </summary>
-/// <returns>The final operation status.</returns>
-private static async Task<OperationStatus> WaitForOperation(IFaceClient client, Guid operationId)
-{
- OperationStatus operationStatus = null;
- do
- {
- if (operationStatus != null)
- {
- Thread.Sleep(TimeSpan.FromMilliseconds(100));
- }
-
- // Get the status of the operation.
- operationStatus = await client.Snapshot.GetOperationStatusAsync(operationId);
-
- Console.WriteLine($"Operation Status: {operationStatus.Status}");
- }
- while (operationStatus.Status != OperationStatusType.Succeeded
- && operationStatus.Status != OperationStatusType.Failed);
-
- return operationStatus;
-}
-```
-
-After the operation status shows `Succeeded`, get the snapshot ID by parsing the `ResourceLocation` field of the returned OperationStatus instance.
-
-```csharp
-var snapshotId = Guid.Parse(operationStatus.ResourceLocation.Split('/')[2]);
-```
-
-A typical `resourceLocation` value looks like this:
-
-```csharp
-"/snapshots/e58b3f08-1e8b-4165-81df-aa9858f233dc"
-```
-
-## Apply a snapshot to a target subscription
-
-Next, create the new PersonGroup in the target subscription by using a randomly generated ID. Then use the target subscription's FaceClient instance to apply the snapshot to this PersonGroup. Pass in the snapshot ID and the new PersonGroup ID.
-
-```csharp
-var newPersonGroupId = Guid.NewGuid().ToString();
-var applySnapshotResult = await FaceClientWestUS.Snapshot.ApplyAsync(snapshotId, newPersonGroupId);
-```
--
-> [!NOTE]
-> A Snapshot object is valid for only 48 hours. Only take a snapshot if you intend to use it for data migration soon after.
-
-A snapshot apply request returns another operation ID. To get this ID, parse the `OperationLocation` field of the returned applySnapshotResult instance.
-
-```csharp
-var applyOperationId = Guid.Parse(applySnapshotResult.OperationLocation.Split('/')[2]);
-```
-
-The snapshot application process is also asynchronous, so again use `WaitForOperation` to wait for it to finish.
-
-```csharp
-operationStatus = await WaitForOperation(FaceClientWestUS, applyOperationId);
-```
-
-## Test the data migration
-
-After you apply the snapshot, the new PersonGroup in the target subscription populates with the original face data. By default, training results are also copied. The new PersonGroup is ready for face identification calls without needing retraining.
-
-To test the data migration, run the following operations and compare the results they print to the console:
-
-```csharp
-await DisplayPersonGroup(FaceClientEastAsia, personGroupId);
-await IdentifyInPersonGroup(FaceClientEastAsia, personGroupId);
-
-await DisplayPersonGroup(FaceClientWestUS, newPersonGroupId);
-// No need to retrain the PersonGroup before identification,
-// training results are copied by snapshot as well.
-await IdentifyInPersonGroup(FaceClientWestUS, newPersonGroupId);
-```
-
-Use the following helper methods:
-
-```csharp
-private static async Task DisplayPersonGroup(IFaceClient client, string personGroupId)
-{
- var personGroup = await client.PersonGroup.GetAsync(personGroupId);
- Console.WriteLine("PersonGroup:");
- Console.WriteLine(JsonConvert.SerializeObject(personGroup));
-
- // List persons.
- var persons = await client.PersonGroupPerson.ListAsync(personGroupId);
-
- foreach (var person in persons)
- {
- Console.WriteLine(JsonConvert.SerializeObject(person));
- }
-
- Console.WriteLine();
-}
-```
-
-```csharp
-private static async Task IdentifyInPersonGroup(IFaceClient client, string personGroupId)
-{
- using (var fileStream = new FileStream("data\\PersonGroup\\Daughter\\Daughter1.jpg", FileMode.Open, FileAccess.Read))
- {
- var detectedFaces = await client.Face.DetectWithStreamAsync(fileStream);
-
- var result = await client.Face.IdentifyAsync(detectedFaces.Select(face => face.FaceId.Value).ToList(), personGroupId);
- Console.WriteLine("Test identify against PersonGroup");
- Console.WriteLine(JsonConvert.SerializeObject(result));
- Console.WriteLine();
- }
-}
-```
-
-Now you can use the new PersonGroup in the target subscription.
-
-To update the target PersonGroup again in the future, create a new PersonGroup to receive the snapshot. To do this, follow the steps in this guide. A single PersonGroup object can have a snapshot applied to it only one time.
-
-## Clean up resources
-
-After you finish migrating face data, manually delete the snapshot object.
-
-```csharp
-await FaceClientEastAsia.Snapshot.DeleteAsync(snapshotId);
-```
-
-## Next steps
-
-Next, see the relevant API reference documentation, explore a sample app that uses the Snapshot feature, or follow a how-to guide to start using the other API operations mentioned here:
-
-- [Snapshot reference documentation (.NET SDK)](/dotnet/api/microsoft.azure.cognitiveservices.vision.face.snapshotoperations)
-- [Face snapshot sample](https://github.com/Azure-Samples/cognitive-services-dotnet-sdk-samples/tree/master/app-samples/FaceApiSnapshotSample/FaceApiSnapshotSample)
-- [Add faces](add-faces.md)
-- [Call the detect API](identity-detect-faces.md)
cognitive-services Migrate From Custom Vision https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Computer-vision/how-to/migrate-from-custom-vision.md
- Title: "Migrate a Custom Vision project to Image Analysis 4.0"-
-description: Learn how to generate an annotation file from an old Custom Vision project, so you can train a custom Image Analysis model on previous training data.
------ Previously updated : 02/06/2023---
-# Migrate a Custom Vision project to Image Analysis 4.0 preview
-
-You can migrate an existing Azure Custom Vision project to the new Image Analysis 4.0 system. [Custom Vision](../../custom-vision-service/overview.md) is a model customization service that existed before Image Analysis 4.0.
-
-This guide uses Python code to take all of the training data from an existing Custom Vision project (images and their label data) and convert it to a COCO file. You can then import the COCO file into Vision Studio to train a custom model. See [Create and train a custom model](model-customization.md) and go to the section on importing a COCO file&mdash;you can follow the guide from there to the end.
-
-## Prerequisites
-
-* An Azure subscription - [Create one for free](https://azure.microsoft.com/free/cognitive-services/)
-* [Python 3.x](https://www.python.org/)
-* A Custom Vision resource where an existing project is stored.
-* An Azure Storage resource - [Create one](../../../storage/common/storage-account-create.md?tabs=azure-portal)
-
-#### [Jupyter Notebook](#tab/notebook)
-
-This notebook exports your image data and annotations from the workspace of a Custom Vision Service project to your own COCO file in a storage blob, ready for training with Image Analysis Model Customization. You can run the code in this section using a custom Python script, or you can download and run the [Notebook](https://github.com/Azure-Samples/cognitive-service-vision-model-customization-python-samples/blob/main/docs/export_cvs_data_to_blob_storage.ipynb) on a compatible platform.
-
-<!-- nbstart https://raw.githubusercontent.com/Azure-Samples/cognitive-service-vision-model-customization-python-samples/main/docs/export_cvs_data_to_blob_storage.ipynb -->
-
-> [!TIP]
-> Contents of _export_cvs_data_to_blob_storage.ipynb_. **[Open in GitHub](https://github.com/Azure-Samples/cognitive-service-vision-model-customization-python-samples/blob/main/docs/export_cvs_data_to_blob_storage.ipynb)**.
--
-## Install the python samples package
-
-Run the following command to install the required python samples package:
-
-```bash
-pip install cognitive-service-vision-model-customization-python-samples
-```
-
-## Authentication
-
-Next, provide the credentials of your Custom Vision project and your blob storage container.
-
-You need to fill in the correct parameter values. You need the following information:
-
-- The name of the Azure Storage account you want to use with your new custom model project
-- The key for that storage account
-- The name of the container you want to use in that storage account
-- Your Custom Vision training key
-- Your Custom Vision endpoint URL
-- The project ID of your Custom Vision project
-
-The Azure Storage credentials can be found on that resource's page in the Azure portal. The Custom Vision credentials can be found in the Custom Vision project settings page on the [Custom Vision web portal](https://customvision.ai).
--
-```python
-azure_storage_account_name = ''
-azure_storage_account_key = ''
-azure_storage_container_name = ''
-
-custom_vision_training_key = ''
-custom_vision_endpoint = ''
-custom_vision_project_id = ''
-```
-
-## Run the migration
-
-When you run the migration code, the Custom Vision training images will be saved to a `{project_name}_{project_id}/images` folder in your specified Azure blob storage container, and the COCO file will be saved to `{project_name}_{project_id}/train.json` in that same container. Both tagged and untagged images will be exported, including any **Negative**-tagged images.
-
-> [!IMPORTANT]
-> Image Analysis Model Customization does not currently support **multilabel** classification training, but you can still export data from a Custom Vision multilabel classification project.
-
-```python
-from cognitive_service_vision_model_customization_python_samples import export_data
-import logging
-logging.getLogger().setLevel(logging.INFO)
-logging.getLogger('azure.core.pipeline.policies.http_logging_policy').setLevel(logging.WARNING)
-
-n_process = 8
-export_data(azure_storage_account_name, azure_storage_account_key, azure_storage_container_name, custom_vision_endpoint, custom_vision_training_key, custom_vision_project_id, n_process)
-```
-
-<!-- nbend -->
-
-#### [Python](#tab/python)
-
-## Install libraries
-
-This script requires certain Python libraries. Install them in your project directory with the following command.
-
-```bash
-pip install azure-storage-blob azure-cognitiveservices-vision-customvision cffi
-```
-
-## Prepare the migration script
-
-Create a new Python file&mdash;_export-cvs-data-to-coco.py_, for example. Then open it in a text editor and paste in the following contents.
-
-```python
-from typing import List, Union
-from azure.cognitiveservices.vision.customvision.training import CustomVisionTrainingClient
-from azure.cognitiveservices.vision.customvision.training.models import Image, ImageTag, ImageRegion, Project
-from msrest.authentication import ApiKeyCredentials
-import argparse
-import time
-import json
-import pathlib
-import logging
-from azure.storage.blob import ContainerClient, BlobClient
-import multiprocessing
--
-N_PROCESS = 8
--
-def get_file_name(sub_folder, image_id):
- return f'{sub_folder}/images/{image_id}'
--
-def blob_copy(params):
- container_client, sub_folder, image = params
- blob_client: BlobClient = container_client.get_blob_client(get_file_name(sub_folder, image.id))
- blob_client.start_copy_from_url(image.original_image_uri)
- return blob_client
--
-def wait_for_completion(blobs, time_out=5):
- pendings = blobs
- time_break = 0.5
- while pendings and time_out > 0:
- pendings = [b for b in pendings if b.get_blob_properties().copy.status == 'pending']
- if pendings:
- logging.info(f'{len(pendings)} pending copies. wait for {time_break} seconds.')
- time.sleep(time_break)
- time_out -= time_break
--
-def copy_images_with_retry(pool, container_client, sub_folder, images: List, batch_id, n_retries=5):
- retry_limit = n_retries
- urls = []
- while images and n_retries > 0:
- params = [(container_client, sub_folder, image) for image in images]
- img_and_blobs = zip(images, pool.map(blob_copy, params))
- logging.info(f'Batch {batch_id}: Copied {len(images)} images.')
- urls = urls or [b.url for _, b in img_and_blobs]
-
- wait_for_completion([b for _, b in img_and_blobs])
- images = [image for image, b in img_and_blobs if b.get_blob_properties().copy.status in ['failed', 'aborted']]
- n_retries -= 1
- if images:
- time.sleep(0.5 * (retry_limit - n_retries))
-
- if images:
- raise RuntimeError(f'Copy failed for some images in batch {batch_id}')
-
- return urls
--
-class CocoOperator:
- def __init__(self):
- self._images = []
- self._annotations = []
- self._categories = []
- self._category_name_to_id = {}
-
- @property
- def num_images(self):
- return len(self._images)
-
- @property
- def num_categories(self):
- return len(self._categories)
-
- @property
- def num_annotations(self):
- return len(self._annotations)
-
- def add_image(self, width, height, coco_url, file_name):
- self._images.append(
- {
- 'id': len(self._images) + 1,
- 'width': width,
- 'height': height,
- 'coco_url': coco_url,
- 'file_name': file_name,
- })
-
- def add_annotation(self, image_id, category_id_or_name: Union[int, str], bbox: List[float] = None):
- self._annotations.append({
- 'id': len(self._annotations) + 1,
- 'image_id': image_id,
- 'category_id': category_id_or_name if isinstance(category_id_or_name, int) else self._category_name_to_id[category_id_or_name]})
-
- if bbox:
- self._annotations[-1]['bbox'] = bbox
-
- def add_category(self, name):
- self._categories.append({
- 'id': len(self._categories) + 1,
- 'name': name
- })
-
- self._category_name_to_id[name] = len(self._categories)
-
- def to_json(self) -> str:
- coco_dict = {
- 'images': self._images,
- 'categories': self._categories,
- 'annotations': self._annotations,
- }
-
- return json.dumps(coco_dict, ensure_ascii=False, indent=2)
--
-def log_project_info(training_client: CustomVisionTrainingClient, project_id):
- project: Project = training_client.get_project(project_id)
- proj_settings = project.settings
- project.settings = None
- logging.info(f'Project info dict: {project.__dict__}')
- logging.info(f'Project setting dict: {proj_settings.__dict__}')
- logging.info(f'Project info: n tags: {len(training_client.get_tags(project_id))},'
- f' n images: {training_client.get_image_count(project_id)} (tagged: {training_client.get_tagged_image_count(project_id)},'
- f' untagged: {training_client.get_untagged_image_count(project_id)})')
--
-def export_data(azure_storage_account_name, azure_storage_key, azure_storage_container_name, custom_vision_endpoint, custom_vision_training_key, custom_vision_project_id, n_process):
- azure_storage_account_url = f"https://{azure_storage_account_name}.blob.core.windows.net"
- container_client = ContainerClient(azure_storage_account_url, azure_storage_container_name, credential=azure_storage_key)
- credentials = ApiKeyCredentials(in_headers={"Training-key": custom_vision_training_key})
- trainer = CustomVisionTrainingClient(custom_vision_endpoint, credentials)
-
- coco_operator = CocoOperator()
- for tag in trainer.get_tags(custom_vision_project_id):
- coco_operator.add_category(tag.name)
-
- skip = 0
- batch_id = 0
- project_name = trainer.get_project(custom_vision_project_id).name
- log_project_info(trainer, custom_vision_project_id)
- sub_folder = f'{project_name}_{custom_vision_project_id}'
- with multiprocessing.Pool(n_process) as pool:
- while True:
- images: List[Image] = trainer.get_images(project_id=custom_vision_project_id, skip=skip)
- if not images:
- break
- urls = copy_images_with_retry(pool, container_client, sub_folder, images, batch_id)
- for i, image in enumerate(images):
- coco_operator.add_image(image.width, image.height, urls[i], get_file_name(sub_folder, image.id))
- image_tags: List[ImageTag] = image.tags
- image_regions: List[ImageRegion] = image.regions
- if image_regions:
- for img_region in image_regions:
- coco_operator.add_annotation(coco_operator.num_images, img_region.tag_name, [img_region.left, img_region.top, img_region.width, img_region.height])
- elif image_tags:
- for img_tag in image_tags:
- coco_operator.add_annotation(coco_operator.num_images, img_tag.tag_name)
-
- skip += len(images)
- batch_id += 1
-
- coco_json_file_name = 'train.json'
- local_json = pathlib.Path(coco_json_file_name)
- local_json.write_text(coco_operator.to_json(), encoding='utf-8')
- coco_json_blob_client: BlobClient = container_client.get_blob_client(f'{sub_folder}/{coco_json_file_name}')
- if coco_json_blob_client.exists():
- logging.warning(f'coco json file exists in blob. Skipped uploading. If existing one is outdated, please manually upload your new coco json from ./train.json to {coco_json_blob_client.url}')
- else:
- coco_json_blob_client.upload_blob(local_json.read_bytes())
- logging.info(f'coco file train.json uploaded to {coco_json_blob_client.url}.')
--
-def parse_args():
- parser = argparse.ArgumentParser('Export Custom Vision workspace data to blob storage.')
-
- parser.add_argument('--custom_vision_project_id', '-p', type=str, required=True, help='Custom Vision Project Id.')
- parser.add_argument('--custom_vision_training_key', '-k', type=str, required=True, help='Custom Vision training key.')
- parser.add_argument('--custom_vision_endpoint', '-e', type=str, required=True, help='Custom Vision endpoint.')
-
- parser.add_argument('--azure_storage_account_name', '-a', type=str, required=True, help='Azure storage account name.')
- parser.add_argument('--azure_storage_account_key', '-t', type=str, required=True, help='Azure storage account key.')
- parser.add_argument('--azure_storage_container_name', '-c', type=str, required=True, help='Azure storage container name.')
-
- parser.add_argument('--n_process', '-n', type=int, required=False, default=8, help='Number of processes used in exporting data.')
-
- return parser.parse_args()
--
-def main():
- args = parse_args()
-
- export_data(args.azure_storage_account_name, args.azure_storage_account_key, args.azure_storage_container_name,
- args.custom_vision_endpoint, args.custom_vision_training_key, args.custom_vision_project_id, args.n_process)
--
-if __name__ == '__main__':
- main()
-```
-
-## Run the script
-
-Run the script using the `python` command.
-
-```console
-python export-cvs-data-to-coco.py -p <project ID> -k <training key> -e <endpoint url> -a <storage account> -t <storage key> -c <container name>
-```
-
-You need to fill in the correct parameter values. You need the following information:
-
-- The project ID of your Custom Vision project
-- Your Custom Vision training key
-- Your Custom Vision endpoint URL
-- The name of the Azure Storage account you want to use with your new custom model project
-- The key for that storage account
-- The name of the container you want to use in that storage account
---
-## Use COCO file in a new project
-
-The script generates a COCO file and uploads it to the blob storage location you specified. You can now import it to your Model Customization project. See [Create and train a custom model](model-customization.md) and go to the section on selecting/importing a COCO file&mdash;you can follow the guide from there to the end.
-
-## Next steps
-
-* [Create and train a custom model](model-customization.md)
cognitive-services Mitigate Latency https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Computer-vision/how-to/mitigate-latency.md
- Title: How to mitigate latency when using the Face service-
-description: Learn how to mitigate latency when using the Face service.
----- Previously updated : 11/07/2021----
-# How to: mitigate latency when using the Face service
-
-You may encounter latency when using the Face service. Latency refers to any kind of delay that occurs when communicating over a network. In general, possible causes of latency include:
-- The physical distance each packet must travel from source to destination.
-- Problems with the transmission medium.
-- Errors in routers or switches along the transmission path.
-- The time required by antivirus applications, firewalls, and other security mechanisms to inspect packets.
-- Malfunctions in client or server applications.
-
-This article talks about possible causes of latency specific to using the Azure Cognitive Services, and how you can mitigate these causes.
-
-> [!NOTE]
-> Azure Cognitive Services does not provide any Service Level Agreement (SLA) regarding latency.
-
-## Possible causes of latency
-
-### Slow connection between the Cognitive Service and a remote URL
-
-Some Azure services provide methods that obtain data from a remote URL that you provide. For example, when you call the [DetectWithUrlAsync method](/dotnet/api/microsoft.azure.cognitiveservices.vision.face.faceoperationsextensions.detectwithurlasync#Microsoft_Azure_CognitiveServices_Vision_Face_FaceOperationsExtensions_DetectWithUrlAsync_Microsoft_Azure_CognitiveServices_Vision_Face_IFaceOperations_System_String_System_Nullable_System_Boolean__System_Nullable_System_Boolean__System_Collections_Generic_IList_System_Nullable_Microsoft_Azure_CognitiveServices_Vision_Face_Models_FaceAttributeType___System_String_System_Nullable_System_Boolean__System_String_System_Threading_CancellationToken_) of the Face service, you can specify the URL of an image in which the service tries to detect faces.
-
-```csharp
-var faces = await client.Face.DetectWithUrlAsync("https://www.biography.com/.image/t_share/MTQ1MzAyNzYzOTgxNTE0NTEz/john-f-kennedymini-biography.jpg");
-```
-
-The Face service must then download the image from the remote server. If the connection from the Face service to the remote server is slow, that will affect the response time of the Detect method.
-
-To mitigate this situation, consider [storing the image in Azure Premium Blob Storage](../../../storage/blobs/storage-upload-process-images.md?tabs=dotnet). For example:
-
-``` csharp
-var faces = await client.Face.DetectWithUrlAsync("https://raw.githubusercontent.com/Azure-Samples/cognitive-services-sample-data-files/master/Face/images/Family1-Daughter1.jpg");
-```
-
-Be sure to use a storage account in the same region as the Face resource. This will reduce the latency of the connection between the Face service and the storage account.
-
-### Large upload size
-
-Some Azure services provide methods that obtain data from a file that you upload. For example, when you call the [DetectWithStreamAsync method](/dotnet/api/microsoft.azure.cognitiveservices.vision.face.faceoperationsextensions.detectwithstreamasync#Microsoft_Azure_CognitiveServices_Vision_Face_FaceOperationsExtensions_DetectWithStreamAsync_Microsoft_Azure_CognitiveServices_Vision_Face_IFaceOperations_System_IO_Stream_System_Nullable_System_Boolean__System_Nullable_System_Boolean__System_Collections_Generic_IList_System_Nullable_Microsoft_Azure_CognitiveServices_Vision_Face_Models_FaceAttributeType___System_String_System_Nullable_System_Boolean__System_String_System_Threading_CancellationToken_) of the Face service, you can upload an image in which the service tries to detect faces.
-
-```csharp
-using FileStream fs = File.OpenRead(@"C:\images\face.jpg");
-System.Collections.Generic.IList<DetectedFace> faces = await client.Face.DetectWithStreamAsync(fs, detectionModel: DetectionModel.Detection02);
-```
-
-If the file to upload is large, that will impact the response time of the `DetectWithStreamAsync` method, for the following reasons:
-- It takes longer to upload the file.
-- It takes the service longer to process the file, in proportion to the file size.
-
-Mitigations:
-- Consider [storing the image in Azure Premium Blob Storage](../../../storage/blobs/storage-upload-process-images.md?tabs=dotnet). For example:
-``` csharp
-var faces = await client.Face.DetectWithUrlAsync("https://raw.githubusercontent.com/Azure-Samples/cognitive-services-sample-data-files/master/Face/images/Family1-Daughter1.jpg");
-```
-- Consider uploading a smaller file (a minimal resize sketch follows this list).
- - See the guidelines regarding [input data for face detection](../concept-face-detection.md#input-data) and [input data for face recognition](../concept-face-recognition.md#input-data).
- - For face detection, when using detection model `DetectionModel.Detection01`, reducing the image file size will increase processing speed. When you use detection model `DetectionModel.Detection02`, reducing the image file size will only increase processing speed if the image file is smaller than 1920x1080.
- - For face recognition, reducing the face size to 200x200 pixels doesn't affect the accuracy of the recognition model.
- - The performance of the `DetectWithUrlAsync` and `DetectWithStreamAsync` methods also depends on how many faces are in an image. The Face service can return up to 100 faces for an image. Faces are ranked by face rectangle size from large to small.
- - If you need to call multiple service methods, consider calling them in parallel if your application design allows for it. For example, if you need to detect faces in two images to perform a face comparison:
-```csharp
-var faces_1 = client.Face.DetectWithUrlAsync("https://www.biography.com/.image/t_share/MTQ1MzAyNzYzOTgxNTE0NTEz/john-f-kennedymini-biography.jpg");
-var faces_2 = client.Face.DetectWithUrlAsync("https://www.biography.com/.image/t_share/MTQ1NDY3OTIxMzExNzM3NjE3/john-f-kennedydebating-richard-nixon.jpg");
-Task.WaitAll (new Task<IList<DetectedFace>>[] { faces_1, faces_2 });
-IEnumerable<DetectedFace> results = faces_1.Result.Concat (faces_2.Result);
-```
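-
-Returning to the "upload a smaller file" mitigation: the following is a minimal sketch of downscaling an image before sending it to the service. It uses Python and the Pillow library for brevity (the article's samples use C#); the file paths and the 1920x1080 target are illustrative assumptions based on the guidance above, not values required by the Face service.
-
-```python
-# A minimal sketch: downscale an image before sending it for face detection.
-# Assumes Pillow is installed (pip install Pillow); the paths are placeholders.
-from PIL import Image
-
-def shrink_for_detection(src_path: str, dst_path: str, max_size=(1920, 1080)) -> None:
-    with Image.open(src_path) as img:
-        # thumbnail() resizes in place, preserves the aspect ratio,
-        # and never enlarges an image that is already small enough.
-        img.thumbnail(max_size)
-        img.save(dst_path, quality=90)
-
-shrink_for_detection(r"C:\images\face.jpg", r"C:\images\face-small.jpg")
-```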
-
-### Slow connection between your compute resource and the Face service
-
-If your computer has a slow connection to the Face service, this will affect the response time of service methods.
-
-Mitigations:
-- When you create your Face subscription, make sure to choose the region closest to where your application is hosted.
-- If you need to call multiple service methods, consider calling them in parallel if your application design allows for it. See the previous section for an example.
-- If longer latencies affect the user experience, choose a timeout threshold (for example, maximum 5 seconds) before retrying the API call. A minimal retry sketch follows this list.
-
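-The following is a minimal sketch of that timeout-and-retry pattern. It's written in Python with the generic `requests` library for brevity (the article's samples use C#); the endpoint URL, headers, and payload are placeholders rather than an exact Face REST contract.
-
-```python
-# A minimal sketch: fail fast on a slow call, then retry with backoff.
-# The URL, headers, and payload passed in are placeholders.
-import time
-import requests
-
-def call_with_retry(url, headers, payload, timeout_s=5, max_retries=3):
-    for attempt in range(1, max_retries + 1):
-        try:
-            # Give up on this attempt if the service doesn't answer within timeout_s seconds.
-            response = requests.post(url, headers=headers, json=payload, timeout=timeout_s)
-            response.raise_for_status()
-            return response.json()
-        except (requests.Timeout, requests.ConnectionError) as err:
-            if attempt == max_retries:
-                raise
-            wait = 2 ** attempt  # simple exponential backoff between attempts
-            print(f"Attempt {attempt} failed ({err}); retrying in {wait}s")
-            time.sleep(wait)
-```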
-## Next steps
-
-In this guide, you learned how to mitigate latency when using the Face service. Next, learn how to scale up from existing PersonGroup and FaceList objects to LargePersonGroup and LargeFaceList objects, respectively.
-
-> [!div class="nextstepaction"]
-> [Example: Use the large-scale feature](use-large-scale.md)
-
-## Related topics
-
-- [Reference documentation (REST)](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f30395236)
-- [Reference documentation (.NET SDK)](/dotnet/api/overview/azure/cognitiveservices/face-readme)
cognitive-services Model Customization https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Computer-vision/how-to/model-customization.md
- Title: Create a custom Image Analysis model-
-description: Learn how to create and train a custom model to do image classification and object detection that's specific to your use case.
----- Previously updated : 02/06/2023----
-# Create a custom Image Analysis model (preview)
-
-Image Analysis 4.0 allows you to train a custom model using your own training images. By manually labeling your images, you can train a model to apply custom tags to the images (image classification) or detect custom objects (object detection). Image Analysis 4.0 models are especially effective at few-shot learning, so you can get accurate models with less training data.
-
-This guide shows you how to create and train a custom image classification model. The few differences between training an image classification model and object detection model are noted.
-
-## Prerequisites
-
-* Azure subscription - [Create one for free](https://azure.microsoft.com/free/cognitive-services)
-* Once you have your Azure subscription, <a href="https://portal.azure.com/#create/Microsoft.CognitiveServicesComputerVision" title="Create a Computer Vision resource" target="_blank">create a Computer Vision resource </a> in the Azure portal to get your key and endpoint. If you're following this guide using Vision Studio, you must create your resource in the East US region. If you're using the Python library, you can create it in the East US, West US 2, or West Europe region. After it deploys, select **Go to resource**. Copy the key and endpoint to a temporary location to use later on.
-* An Azure Storage resource - [Create one](/azure/storage/common/storage-account-create?tabs=azure-portal)
-* A set of images with which to train your classification model. You can use the set of [sample images on GitHub](https://github.com/Azure-Samples/cognitive-services-sample-data-files/tree/master/CustomVision/ImageClassification/Images). Or, you can use your own images. You only need about 3-5 images per class.
-
-> [!NOTE]
-> We do not recommend you use custom models for business critical environments due to potential high latency. When customers train custom models in Vision Studio, those custom models belong to the Computer Vision resource that they were trained under and the customer is able to make calls to those models using the **Analyze Image** API. When they make these calls, the custom model is loaded in memory and the prediction infrastructure is initialized. While this happens, customers might experience longer than expected latency to receive prediction results.
-
-#### [Python](#tab/python)
-
-Train your own image classifier (IC) or object detector (OD) with your own data using Image Analysis model customization and Python.
-
-You can run through all of the model customization steps using a Python sample package. You can run the code in this section using a Python script, or you can download and run the Notebook on a compatible platform.
-
-<!-- nbstart https://raw.githubusercontent.com/Azure-Samples/cognitive-service-vision-model-customization-python-samples/main/docs/cognitive_service_vision_model_customization.ipynb -->
-
-> [!TIP]
-> Contents of _cognitive_service_vision_model_customization.ipynb_. **[Open in GitHub](https://github.com/Azure-Samples/cognitive-service-vision-model-customization-python-samples/blob/main/docs/cognitive_service_vision_model_customization.ipynb)**.
-
-## Install the Python samples package
-
-Install the [sample code](https://pypi.org/project/cognitive-service-vision-model-customization-python-samples/) to train/predict custom models with Python:
-
-```bash
-pip install cognitive-service-vision-model-customization-python-samples
-```
-
-## Authentication
-
-Enter your Computer Vision endpoint URL, key, and resource name into the code below.
-
-```python
-# Resource and key
-import logging
-logging.getLogger().setLevel(logging.INFO)
-from cognitive_service_vision_model_customization_python_samples import ResourceType
-
-resource_type = ResourceType.SINGLE_SERVICE_RESOURCE # or ResourceType.MULTI_SERVICE_RESOURCE
-
-resource_name = None
-multi_service_endpoint = None
-
-if resource_type == ResourceType.SINGLE_SERVICE_RESOURCE:
- resource_name = '{specify_your_resource_name}'
- assert resource_name
-else:
- multi_service_endpoint = '{specify_your_service_endpoint}'
- assert multi_service_endpoint
-
-resource_key = '{specify_your_resource_key}'
-```
-
-## Prepare a dataset from Azure blob storage
-
-To train a model with your own dataset, the dataset should be arranged in the COCO format described below, hosted on Azure blob storage, and accessible from your Computer Vision resource.
-
-### Dataset annotation format
-
-Image Analysis uses the COCO file format for indexing/organizing the training images and their annotations. Below are examples and explanations of what specific format is needed for multiclass classification and object detection.
-
-Image Analysis model customization for classification is different from other kinds of vision training, as we utilize your class names, as well as image data, in training. So, be sure to provide meaningful category names in the annotations.
-
-> [!NOTE]
-> In the example dataset, there are few images for the sake of simplicity. Although [Florence models](https://www.microsoft.com/research/publication/florence-a-new-foundation-model-for-computer-vision/) achieve great few-shot performance (high model quality even with little data available), it's good to have more data for the model to learn. Our recommendation is to have at least five images per class, and the more the better.
-
-Once your COCO annotation file is prepared, you can use the [COCO file verification script](coco-verification.md) to check the format.
-
-#### Multiclass classification example
-
-```json
-{
- "images": [{"id": 1, "width": 224.0, "height": 224.0, "file_name": "images/siberian-kitten.jpg", "absolute_url": "https://{your_blob}.blob.core.windows.net/datasets/cat_dog/images/siberian-kitten.jpg"},
- {"id": 2, "width": 224.0, "height": 224.0, "file_name": "images/kitten-3.jpg", "absolute_url": "https://{your_blob}.blob.core.windows.net/datasets/cat_dog/images/kitten-3.jpg"}],
- "annotations": [
- {"id": 1, "category_id": 1, "image_id": 1},
- {"id": 2, "category_id": 1, "image_id": 2},
- ],
- "categories": [{"id": 1, "name": "cat"}, {"id": 2, "name": "dog"}]
-}
-```
-
-Besides `absolute_url`, you can also use `coco_url` (the system accepts either field name).
-
-#### Object detection example
-
-```json
-{
- "images": [{"id": 1, "width": 224.0, "height": 224.0, "file_name": "images/siberian-kitten.jpg", "absolute_url": "https://{your_blob}.blob.core.windows.net/datasets/cat_dog/images/siberian-kitten.jpg"},
- {"id": 2, "width": 224.0, "height": 224.0, "file_name": "images/kitten-3.jpg", "absolute_url": "https://{your_blob}.blob.core.windows.net/datasets/cat_dog/images/kitten-3.jpg"}],
- "annotations": [
- {"id": 1, "category_id": 1, "image_id": 1, "bbox": [0.1, 0.1, 0.3, 0.3]},
- {"id": 2, "category_id": 1, "image_id": 2, "bbox": [0.3, 0.3, 0.6, 0.6]},
- {"id": 3, "category_id": 2, "image_id": 2, "bbox": [0.2, 0.2, 0.7, 0.7]}
- ],
- "categories": [{"id": 1, "name": "cat"}, {"id": 2, "name": "dog"}]
-}
-```
-
-The values in `bbox: [left, top, width, height]` are relative to the image width and height.
-
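-If your labels are measured in absolute pixels, divide by the image dimensions to get these relative values. A minimal sketch (the pixel values are made up for illustration):
-
-```python
-# Convert an absolute pixel bounding box to the relative
-# [left, top, width, height] format used in the annotations above.
-def to_relative_bbox(left_px, top_px, width_px, height_px, image_width, image_height):
-    return [left_px / image_width, top_px / image_height,
-            width_px / image_width, height_px / image_height]
-
-# Example: a 67x67 pixel box at (22, 22) in a 224x224 image.
-print(to_relative_bbox(22, 22, 67, 67, 224, 224))  # ~[0.098, 0.098, 0.299, 0.299]
-```
-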
-### Blob storage directory structure
-
-Following the examples above, the data directory in your Azure Blob Container `https://{your_blob}.blob.core.windows.net/datasets/` should be arranged like below, where `train_coco.json` is the annotation file.
-
-```
-cat_dog/
- images/
- 1.jpg
- 2.jpg
- train_coco.json
-```
-
-> [!TIP]
-> Quota limit information, including the maximum number of images and categories supported, maximum image size, and so on, can be found on the [concept page](../concept-model-customization.md).
-
-### Grant Computer Vision access to your Azure data blob
-
-You need to take an extra step to give your Computer Vision resource access to read the contents of your Azure blob storage container. There are two ways to do this.
-
-#### Option 1: Shared access signature (SAS)
-
-You can generate a SAS token with at least `read` permission on your Azure Blob Container. This is the option used in the code below. For instructions on acquiring a SAS token, see [Create SAS tokens](/azure/cognitive-services/translator/document-translation/how-to-guides/create-sas-tokens?tabs=Containers).
-
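-If you prefer to create the token programmatically instead of in the portal, the following is a minimal sketch using the `azure-storage-blob` Python package. The account, container, key, and expiry values are placeholders; adjust the permissions to your needs (the service requires at least `read`).
-
-```python
-# A minimal sketch: generate a container-level SAS token for the dataset container.
-# Requires: pip install azure-storage-blob. The values below are placeholders.
-from datetime import datetime, timedelta, timezone
-from azure.storage.blob import generate_container_sas, ContainerSasPermissions
-
-sas_token = generate_container_sas(
-    account_name="<your-storage-account>",
-    container_name="datasets",
-    account_key="<your-storage-account-key>",
-    permission=ContainerSasPermissions(read=True, list=True),
-    expiry=datetime.now(timezone.utc) + timedelta(hours=24),
-)
-
-# Pass only the token/query string (not a full URL) to Authentication(AuthenticationKind.SAS, ...).
-print(sas_token)
-```
-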
-#### Option 2: Managed Identity or public accessible
-
-You can also use [Managed Identity](/azure/active-directory/managed-identities-azure-resources/overview) to grant access.
-
-Below is a series of steps for allowing the system-assigned Managed Identity of your Computer Vision resource to access your blob storage. In the Azure portal:
-
-1. Go to the **Identity / System assigned** tab of your Computer Vision resource, and change the **Status** to **On**.
-1. Go to the **Access Control (IAM) / Role assignment** tab of your blob storage resource, select **Add / Add role assignment**, and choose either **Storage Blob Data Contributor** or **Storage Blob Data Reader**.
-1. Select **Next**, and choose **Managed Identity** under **Assign access to**, and then select **Select members**.
-1. Choose your subscription, select **Computer Vision** as the managed identity type, and then select the identity that matches your Computer Vision resource name.
-
-### Register the dataset
-
-Once your dataset has been prepared and hosted on your Azure blob storage container, with access granted to your Computer Vision resource, you can register it with the service.
-
-> [!NOTE]
-> The service only accesses your storage data during training. It doesn't keep copies of your data beyond the training cycle.
-
-```python
-from cognitive_service_vision_model_customization_python_samples import DatasetClient, Dataset, AnnotationKind, AuthenticationKind, Authentication
-
-dataset_name = '{specify_your_dataset_name}'
-auth_kind = AuthenticationKind.SAS # or AuthenticationKind.MI
-
-dataset_client = DatasetClient(resource_type, resource_name, multi_service_endpoint, resource_key)
-annotation_file_uris = ['{specify_your_annotation_uri}'] # example: https://example_data.blob.core.windows.net/datasets/cat_dog/train_coco.json
-# register dataset
-if auth_kind == AuthenticationKind.SAS:
- # option 1: sas
- sas_auth = Authentication(AuthenticationKind.SAS, '{your_sas_token}') # note the token/query string is needed, not the full url
- dataset = Dataset(name=dataset_name,
- annotation_kind=AnnotationKind.MULTICLASS_CLASSIFICATION, # checkout AnnotationKind for all annotation kinds
- annotation_file_uris=annotation_file_uris,
- authentication=sas_auth)
-else:
- # option 2: managed identity or publicly accessible. Make sure your storage is accessible via the managed identity if it is not publicly accessible.
- dataset = Dataset(name=dataset_name,
- annotation_kind=AnnotationKind.MULTICLASS_CLASSIFICATION, # checkout AnnotationKind for all annotation kinds
- annotation_file_uris=annotation_file_uris)
-
-reg_dataset = dataset_client.register_dataset(dataset)
-logging.info(f'Register dataset: {reg_dataset.__dict__}')
-
-# specify your evaluation dataset here; you can follow the same registration process as the training dataset
-eval_dataset = None
-if eval_dataset:
- reg_eval_dataset = dataset_client.register_dataset(eval_dataset)
- logging.info(f'Register eval dataset: {reg_eval_dataset.__dict__}')
-```
-
-## Train a model
-
-After you register the dataset, use it to train a custom model:
-
-```python
-from cognitive_service_vision_model_customization_python_samples import TrainingClient, Model, ModelKind, TrainingParameters, EvaluationParameters
-
-model_name = '{specify_your_model_name}'
-
-training_client = TrainingClient(resource_type, resource_name, multi_service_endpoint, resource_key)
-train_params = TrainingParameters(training_dataset_name=dataset_name, time_budget_in_hours=1, model_kind=ModelKind.GENERIC_IC) # checkout ModelKind for all valid model kinds
-eval_params = EvaluationParameters(test_dataset_name=eval_dataset.name) if eval_dataset else None
-model = Model(model_name, train_params, eval_params)
-model = training_client.train_model(model)
-logging.info(f'Start training: {model.__dict__}')
-```
-
-## Check the training status
-
-Use the following code to check the status of the asynchronous training operation.
-
-```python
-from cognitive_service_vision_model_customization_python_samples import TrainingClient
-
-training_client = TrainingClient(resource_type, resource_name, multi_service_endpoint, resource_key)
-model = training_client.wait_for_completion(model_name, 30)
-```
-
-## Predict with a sample image
-
-Use the following code to get a prediction with a new sample image.
-
-```python
-from cognitive_service_vision_model_customization_python_samples import PredictionClient
-prediction_client = PredictionClient(resource_type, resource_name, multi_service_endpoint, resource_key)
-
-with open('path_to_your_test_image.png', 'rb') as f:
- img = f.read()
-
-prediction = prediction_client.predict(model_name, img, content_type='image/png')
-logging.info(f'Prediction: {prediction}')
-```
-
-<!-- nbend -->
--
-#### [Vision Studio](#tab/studio)
-
-## Create a new custom model
-
-Begin by going to [Vision Studio](https://portal.vision.cognitive.azure.com/) and selecting the **Image analysis** tab. Then select the **Customize models** tile.
--
-Then, sign in with your Azure account and select your Computer Vision resource. If you don't have one, you can create one from this screen.
-
-> [!IMPORTANT]
-> To train a custom model in Vision Studio, your Azure subscription needs to be approved for access. Please request access using [this form](https://aka.ms/visionaipublicpreview).
--
-## Prepare training images
-
-You need to upload your training images to an Azure Blob Storage container. Go to your storage resource in the Azure portal and navigate to the **Storage browser** tab. Here you can create a blob container and upload your images. Put them all at the root of the container.
-
-## Add a dataset
-
-To train a custom model, you need to associate it with a **Dataset** where you provide images and their label information as training data. In Vision Studio, select the **Datasets** tab to view your datasets.
-
-To create a new dataset, select **add new dataset**. Enter a name and select a dataset type: If you'd like to do image classification, select `Multi-class image classification`. If you'd like to do object detection, select `Object detection`.
---
-![Choose Blob Storage]( ../media/customization/create-dataset.png)
-
-Then, select the container from the Azure Blob Storage account where you stored the training images. Check the box to allow Vision Studio to read and write to the blob storage container. This is a necessary step to import labeled data. Create the dataset.
-
-## Create an Azure Machine Learning labeling project
-
-You need a COCO file to convey the labeling information. An easy way to generate a COCO file is to create an Azure Machine Learning project, which comes with a data-labeling workflow.
-
-In the dataset details page, select **Add a new Data Labeling project**. Name it and select **Create a new workspace**. That opens a new Azure portal tab where you can create the Azure Machine Learning project.
-
-![Choose Azure Machine Learning]( ../media/customization/dataset-details.png)
-
-Once the Azure Machine Learning project is created, return to the Vision Studio tab and select it under **Workspace**. The Azure Machine Learning portal will then open in a new browser tab.
-
-## Azure Machine Learning: Create labels
-
-To start labeling, follow the **Please add label classes** prompt to add label classes.
-
-![Label classes]( ../media/customization/azure-machine-learning-home-page.png)
-
-![Add class labels]( ../media/customization/azure-machine-learning-label-categories.png)
-
-Once you've added all the class labels, save them, select **start** on the project, and then select **Label data** at the top.
-
-![Start labeling]( ../media/customization/azure-machine-learning-start.png)
--
-### Azure Machine Learning: Manually label training data
-
-Choose **Start labeling** and follow the prompts to label all of your images. When you're finished, return to the Vision Studio tab in your browser.
-
-Now select **Add COCO file**, then select **Import COCO file from an Azure ML Data Labeling project**. This imports the labeled data from Azure Machine Learning.
-
-The COCO file you just created is now stored in the Azure Storage container that you linked to this project. You can now import it into the model customization workflow. Select it from the drop-down list. Once the COCO file is imported into the dataset, the dataset can be used for training a model.
-
-> [!NOTE]
-> ## Import COCO files from elsewhere
->
-> If you have a ready-made COCO file you want to import, go to the **Datasets** tab and select `Add COCO files to this dataset`. You can choose to add a specific COCO file from a Blob storage account or import from the Azure Machine Learning labeling project.
->
-> Currently, Microsoft is addressing an issue which causes COCO file import to fail with large datasets when initiated in Vision Studio. To train using a large dataset, it's recommended to use the REST API instead.
->
-> ![Choose COCO]( ../media/customization/import-coco.png)
->
-> [!INCLUDE [coco-files](../includes/coco-files.md)]
-
-## Train the custom model
-
-To start training a model with your COCO file, go to the **Custom models** tab and select **Add a new model**. Enter a name for the model and select `Image classification` or `Object detection` as the model type.
-
-![Create custom model]( ../media/customization/start-model-training.png)
-
-Select your dataset, which is now associated with the COCO file containing the labeling information.
-
-Then select a time budget and train the model. For small examples, you can use a `1 hour` budget.
-
-![Review training details]( ../media/customization/train-model.png)
-
-It may take some time for the training to complete. Image Analysis 4.0 models can be accurate with only a small set of training data, but they take longer to train than previous models.
-
-## Evaluate the trained model
-
-After training has completed, you can view the model's performance evaluation. The following metrics are used:
-
-- Image classification: Average Precision, Accuracy Top 1, Accuracy Top 5
-- Object detection: Mean Average Precision @ 30, Mean Average Precision @ 50, Mean Average Precision @ 75
-
-If an evaluation set isn't provided when training the model, the reported performance is estimated based on part of the training set. We strongly recommend you use an evaluation dataset (using the same process as above) to have a reliable estimation of your model performance.
-
-![Screenshot of evaluation]( ../media/customization/training-result.png)
-
-## Test custom model in Vision Studio
-
-Once you've built a custom model, you can test it by selecting the **Try it out** button on the model evaluation screen.
--
-This takes you to the **Extract common tags from images** page. Choose your custom model from the drop-down menu and upload a test image.
-
-![Screenshot of selecting the test model in Vision Studio.]( ../media/customization/quick-test.png)
-
-The prediction results appear in the right column.
-
-#### [REST API](#tab/rest)
-
-## Prepare training data
-
-The first thing you need to do is create a COCO file from your training data. You can create a COCO file by converting an old Custom Vision project using the [migration script](migrate-from-custom-vision.md). Or, you can create a COCO file from scratch using some other labeling tool. See the following specification:
--
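-If you're writing the COCO file from scratch, it only needs the `images`, `annotations`, and `categories` lists in the format shown earlier in this article. The following is a minimal sketch for a classification dataset; the file name and blob URL are placeholders.
-
-```python
-# A minimal sketch: write a classification COCO file in the format described
-# in this article. Replace the URL and file names with your own data.
-import json
-
-coco = {
-    "images": [
-        {"id": 1, "width": 224.0, "height": 224.0, "file_name": "images/image-1.jpg",
-         "absolute_url": "https://{your_blob}.blob.core.windows.net/datasets/my_project/images/image-1.jpg"},
-    ],
-    "annotations": [
-        {"id": 1, "category_id": 1, "image_id": 1},
-    ],
-    "categories": [
-        {"id": 1, "name": "cat"},
-    ],
-}
-
-with open("train_coco.json", "w", encoding="utf-8") as f:
-    json.dump(coco, f, ensure_ascii=False, indent=2)
-```
-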
-## Upload to storage
-
-Upload your COCO file to a blob storage container, ideally the same blob container that holds the training images themselves.
-
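-For example, you can upload it with the `azure-storage-blob` Python package. This is a minimal sketch; the connection string, container, and blob names are placeholders.
-
-```python
-# A minimal sketch: upload the COCO file to the blob container that also holds
-# the training images. Requires: pip install azure-storage-blob
-from azure.storage.blob import BlobClient
-
-blob = BlobClient.from_connection_string(
-    conn_str="<your-storage-connection-string>",
-    container_name="datasets",
-    blob_name="my_project/train_coco.json",
-)
-
-with open("train_coco.json", "rb") as data:
-    blob.upload_blob(data, overwrite=True)
-
-print(f"Uploaded COCO file to {blob.url}")
-```
-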
-## Create your training dataset
-
-The `datasets/<dataset-name>` API lets you create a new dataset object that references the training data. Make the following changes to the cURL command below:
-
-1. Replace `<endpoint>` with your Computer Vision endpoint.
-1. Replace `<dataset-name>` with a name for your dataset.
-1. Replace `<subscription-key>` with your Computer Vision key.
-1. In the request body, set `"annotationKind"` to either `"imageClassification"` or `"imageObjectDetection"`, depending on your project.
-1. In the request body, set the `"annotationFileUris"` array to an array of string(s) that show the URI location(s) of your COCO file(s) in blob storage.
-
-```bash
-curl.exe -v -X PUT "https://<endpoint>/computervision/datasets/<dataset-name>?api-version=2023-02-01-preview" -H "Content-Type: application/json" -H "Ocp-Apim-Subscription-Key: <subscription-key>" --data-ascii "
-{
-'annotationKind':'imageClassification',
-'annotationFileUris':['<URI>']
-}"
-```
-
-## Create and train a model
-
-The `models/<model-name>` API lets you create a new custom model and associate it with an existing dataset. It also starts the training process. Make the following changes to the cURL command below:
-
-1. Replace `<endpoint>` with your Computer Vision endpoint.
-1. Replace `<model-name>` with a name for your model.
-1. Replace `<subscription-key>` with your Computer Vision key.
-1. In the request body, set `"trainingDatasetName"` to the name of the dataset from the previous step.
-1. In the request body, set `"modelKind"` to either `"Generic-Classifier"` or `"Generic-Detector"`, depending on your project.
-
-```bash
-curl.exe -v -X PUT "https://<endpoint>/computervision/models/<model-name>?api-version=2023-02-01-preview" -H "Content-Type: application/json" -H "Ocp-Apim-Subscription-Key: <subscription-key>" --data-ascii "
-{
-'trainingParameters': {
- 'trainingDatasetName':'<dataset-name>',
- 'timeBudgetInHours':1,
- 'modelKind':'Generic-Classifier',
- }
-}"
-```
-
-## Evaluate the model's performance on a dataset
-
-The `models/<model-name>/evaluations/<eval-name>` API evaluates the performance of an existing model. Make the following changes to the cURL command below:
-
-1. Replace `<endpoint>` with your Computer Vision endpoint.
-1. Replace `<model-name>` with the name of your model.
-1. Replace `<eval-name>` with a name that can be used to uniquely identify the evaluation.
-1. Replace `<subscription-key>` with your Computer Vision key.
-1. In the request body, set `"testDatasetName"` to the name of the dataset you want to use for evaluation. If you don't have a dedicated dataset, you can use the same dataset you used for training.
-
-```bash
-curl.exe -v -X PUT "https://<endpoint>/computervision/models/<model-name>/evaluations/<eval-name>?api-version=2023-02-01-preview" -H "Content-Type: application/json" -H "Ocp-Apim-Subscription-Key: <subscription-key>" --data-ascii "
-{
-'evaluationParameters':{
- 'testDatasetName':'<dataset-name>'
- },
-}"
-```
-
-The API call returns a **ModelPerformance** JSON object, which lists the model's scores in several categories. The following metrics are used:
-
-- Image classification: Average Precision, Accuracy Top 1, Accuracy Top 5
-- Object detection: Mean Average Precision @ 30, Mean Average Precision @ 50, Mean Average Precision @ 75
-
-## Test the custom model on an image
-
-The `imageanalysis:analyze` API does ordinary Image Analysis operations. By specifying some parameters, you can use this API to query your own custom model instead of the prebuilt Image Analysis models. Make the following changes to the cURL command below:
-
-1. Replace `<endpoint>` with your Computer Vision endpoint.
-1. Replace `<model-name>` with the name of your model.
-1. Replace `<subscription-key>` with your Computer Vision key.
-1. In the request body, set `"url"` to the URL of a remote image you want to test your model on.
-
-```bash
-curl.exe -v -X POST "https://<endpoint>/computervision/imageanalysis:analyze?model-name=<model-name>&api-version=2023-02-01-preview" -H "Content-Type: application/json" -H "Ocp-Apim-Subscription-Key: <subscription-key>" --data-ascii "
-{'url':'https://learn.microsoft.com/azure/cognitive-services/computer-vision/media/quickstarts/presentation.png'
-}"
-```
-
-The API call returns an **ImageAnalysisResult** JSON object, which contains all the detected tags for an image classifier, or objects for an object detector, with their confidence scores.
-
-```json
-{
- "kind": "imageAnalysisResult",
- "metadata": {
- "height": 900,
- "width": 1260
- },
- "customModelResult": {
- "classifications": [
- {
- "confidence": 0.97970027,
- "label": "hemlock"
- },
- {
- "confidence": 0.020299695,
- "label": "japanese-cherry"
- }
- ],
- "objects": [],
- "imageMetadata": {
- "width": 1260,
- "height": 900
- }
- }
-}
-```
---
-## Next steps
-
-In this guide, you created and trained a custom image classification model using Image Analysis. Next, learn more about the Analyze Image 4.0 API, so you can call your custom model from an application using REST or library SDKs.
-
-* See the [Model customization concepts](../concept-model-customization.md) guide for a broad overview of this feature and a list of frequently asked questions.
-* [Call the Analyze Image API](./call-analyze-image-40.md). Note the sections [Set model name when using a custom model](./call-analyze-image-40.md#set-model-name-when-using-a-custom-model) and [Get results using custom model](./call-analyze-image-40.md#get-results-using-custom-model).
cognitive-services Shelf Analyze https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Computer-vision/how-to/shelf-analyze.md
- Title: Analyze a shelf image using pretrained models-
-description: Use the Product Understanding API to analyze a shelf image and receive rich product data.
------ Previously updated : 04/26/2023----
-# Analyze a shelf image using pretrained models
-
-The fastest way to start using Product Recognition is to use the built-in pretrained AI models. With the Product Understanding API, you can upload a shelf image and get the locations of products and gaps.
--
-> [!NOTE]
-> The brands shown in the images are not affiliated with Microsoft and do not indicate any form of endorsement of Microsoft or Microsoft products by the brand owners, or an endorsement of the brand owners or their products by Microsoft.
-
-## Prerequisites
-* An Azure subscription - [Create one for free](https://azure.microsoft.com/free/cognitive-services/)
-* Once you have your Azure subscription, <a href="https://portal.azure.com/#create/Microsoft.CognitiveServicesComputerVision" title="Create a Computer Vision resource" target="_blank">create a Computer Vision resource </a> in the Azure portal. It must be deployed in the **East US** or **West US 2** region. After it deploys, select **Go to resource**.
- * You'll need the key and endpoint from the resource you create to connect your application to the Computer Vision service. You'll paste your key and endpoint into the code below later in the guide.
-* An Azure Storage resource with a blob storage container. [Create one](/azure/storage/common/storage-account-create?tabs=azure-portal)
-* [cURL](https://curl.haxx.se/) installed. Or, you can use a different REST platform, like Postman, Swagger, or the REST Client extension for VS Code.
-* A shelf image. You can download our [sample image](https://github.com/Azure-Samples/cognitive-services-sample-data-files/blob/master/ComputerVision/shelf-analysis/shelf.png) or bring your own images. The maximum file size per image is 20 MB.
-
-## Analyze shelf images
-
-To analyze a shelf image, do the following steps:
-
-1. Upload the images you'd like to analyze to your blob storage container, and get the absolute URL.
-1. Copy the following `curl` command into a text editor.
-
- ```bash
- curl.exe -H "Ocp-Apim-Subscription-Key: <subscriptionKey>" -H "Content-Type: application/json" "https://<endpoint>/vision/v4.0-preview.1/operations/shelfanalysis-productunderstanding:analyze" -d "{
- 'url':'<your_url_string>'
- }"
- ```
-1. Make the following changes in the command where needed:
- 1. Replace the value of `<subscriptionKey>` with your Computer Vision resource key.
- 1. Replace the value of `<endpoint>` with your Computer Vision resource endpoint. For example: `https://YourResourceName.cognitiveservices.azure.com`.
- 1. Replace the `<your_url_string>` contents with the blob URL of the image
-1. Open a command prompt window.
-1. Paste your edited `curl` command from the text editor into the command prompt window, and then run the command.
--
-## Examine the response
-
-A successful response is returned in JSON. The product understanding API results are returned in a `ProductUnderstandingResultApiModel` JSON field:
-
-```json
-{
- "imageMetadata": {
- "width": 2000,
- "height": 1500
- },
- "products": [
- {
- "id": "string",
- "boundingBox": {
- "x": 1234,
- "y": 1234,
- "w": 12,
- "h": 12
- },
- "classifications": [
- {
- "confidence": 0.9,
- "label": "string"
- }
- ]
- }
- ],
- "gaps": [
- {
- "id": "string",
- "boundingBox": {
- "x": 1234,
- "y": 1234,
- "w": 123,
- "h": 123
- },
- "classifications": [
- {
- "confidence": 0.8,
- "label": "string"
- }
- ]
- }
- ]
-}
-```
-
-See the following sections for definitions of each JSON field.
-
-### Product Understanding Result API model
-
-Results from the product understanding operation.
-
-| Name | Type | Description | Required |
-| - | - | -- | -- |
-| `imageMetadata` | [ImageMetadataApiModel](#image-metadata-api-model) | The image metadata information such as height, width and format. | Yes |
-| `products` |[DetectedObjectApiModel](#detected-object-api-model) | Products detected in the image. | Yes |
-| `gaps` | [DetectedObjectApiModel](#detected-object-api-model) | Gaps detected in the image. | Yes |
-
-### Image Metadata API model
-
-The image metadata information such as height, width and format.
-
-| Name | Type | Description | Required |
-| - | - | -- | -- |
-| `width` | integer | The width of the image in pixels. | Yes |
-| `height` | integer | The height of the image in pixels. | Yes |
-
-### Detected Object API model
-
-Describes a detected object in an image.
-
-| Name | Type | Description | Required |
-| - | - | -- | -- |
-| `id` | string | ID of the detected object. | No |
-| `boundingBox` | [BoundingBoxApiModel](#bounding-box-api-model) | A bounding box for an area inside an image. | Yes |
-| `classifications` | [ImageClassificationApiModel](#image-classification-api-model) | Classification confidences of the detected object. | Yes |
-
-### Bounding Box API model
-
-A bounding box for an area inside an image.
-
-| Name | Type | Description | Required |
-| - | - | -- | -- |
-| `x` | integer | Left-coordinate of the top left point of the area, in pixels. | Yes |
-| `y` | integer | Top-coordinate of the top left point of the area, in pixels. | Yes |
-| `w` | integer | Width measured from the top-left point of the area, in pixels. | Yes |
-| `h` | integer | Height measured from the top-left point of the area, in pixels. | Yes |
-
-### Image Classification API model
-
-Describes the image classification confidence of a label.
-
-| Name | Type | Description | Required |
-| - | - | -- | -- |
-| `confidence` | float | Confidence of the classification prediction. | Yes |
-| `label` | string | Label of the classification prediction. | Yes |
-
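-The following is a minimal Python sketch that reads these documented fields from a parsed response; `response_json` is assumed to be the JSON body returned by the analyze call above.
-
-```python
-# A minimal sketch: walk the ProductUnderstandingResultApiModel fields documented above.
-def summarize(response_json):
-    meta = response_json["imageMetadata"]
-    print(f"Image size: {meta['width']}x{meta['height']} px")
-    for kind in ("products", "gaps"):
-        for item in response_json.get(kind, []):
-            box = item["boundingBox"]
-            # Each detected object carries one or more classification labels.
-            best = max(item["classifications"], key=lambda c: c["confidence"])
-            print(f"{kind[:-1]}: {best['label']} ({best['confidence']:.2f}) at "
-                  f"x={box['x']}, y={box['y']}, w={box['w']}, h={box['h']}")
-```
-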
-## Next steps
-
-In this guide, you learned how to make a basic analysis call using the pretrained Product Understanding REST API. Next, learn how to use a custom Product Recognition model to better meet your business needs.
-
-> [!div class="nextstepaction"]
-> [Train a custom model for Product Recognition](../how-to/shelf-model-customization.md)
-
-* [Image Analysis overview](../overview-image-analysis.md)
cognitive-services Shelf Model Customization https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Computer-vision/how-to/shelf-model-customization.md
- Title: Train a custom model for Product Recognition-
-description: Learn how to use the Image Analysis model customization feature to train a model to recognize specific products in a Product Recognition task.
------- Previously updated : 05/02/2023---
-# Train a custom Product Recognition model
-
-You can train a custom model to recognize specific retail products for use in a Product Recognition scenario. The out-of-box [Analyze](shelf-analyze.md) operation doesn't differentiate between products, but you can build this capability into your app through custom labeling and training.
--
-> [!NOTE]
-> The brands shown in the images are not affiliated with Microsoft and do not indicate any form of endorsement of Microsoft or Microsoft products by the brand owners, or an endorsement of the brand owners or their products by Microsoft.
-
-## Use the model customization feature
-
-The [Model customization how-to guide](./model-customization.md) shows you how to train and publish a custom Image Analysis model. You can follow that guide, with a few specifications, to make a model for Product Recognition.
-
-> [!div class="nextstepaction"]
-> [Model customization](model-customization.md)
--
-## Dataset specifications
-
-Your training dataset should consist of images of the retail shelves. When you first create the model, you need to set the _ModelKind_ parameter to **ProductRecognitionModel**.
-
-Also, save the value of the _ModelName_ parameter, so you can use it as a reference later.
-
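-For reference, the model creation call itself is the same `models/<model-name>` REST call shown in the model customization guide, with the model kind switched. The following is a minimal Python sketch using `requests`; the API version and payload shape are assumed to match the cURL example in that guide, and the endpoint, key, and names are placeholders.
-
-```python
-# A minimal sketch (not an official sample): create a Product Recognition model.
-# The api-version and body shape are assumed from the model customization guide.
-import requests
-
-endpoint = "https://<your-resource-name>.cognitiveservices.azure.com"
-key = "<your-computer-vision-key>"
-model_name = "myModelName"          # save this value; it's used later as PRODUCT_CLASSIFIER_MODEL
-dataset_name = "<your-dataset-name>"
-
-response = requests.put(
-    f"{endpoint}/computervision/models/{model_name}",
-    params={"api-version": "2023-02-01-preview"},
-    headers={"Ocp-Apim-Subscription-Key": key},
-    json={
-        "trainingParameters": {
-            "trainingDatasetName": dataset_name,
-            "timeBudgetInHours": 1,
-            "modelKind": "ProductRecognitionModel",
-        }
-    },
-)
-print(response.status_code, response.json())
-```
-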
-## Custom labeling
-
-When you go through the labeling workflow, create labels for each of the products you want to recognize. Then label each product's bounding box, in each image.
-
-## Analyze shelves with a custom model
-
-When your custom model is trained and ready (you've completed the steps in the [Model customization guide](./model-customization.md)), you can use it through the Shelf Analyze operation. Set the _PRODUCT_CLASSIFIER_MODEL_ URL parameter to the name of your custom model (the _ModelName_ value you used in the creation step).
-
-The API call will look like this:
-
-```bash
-curl.exe -H "Ocp-Apim-Subscription-Key: <subscriptionKey>" -H "Content-Type: application/json" "https://<endpoint>/vision/v4.0-preview.1/operations/shelfanalysis-productunderstanding:analyze?PRODUCT_CLASSIFIER_MODEL=myModelName" -d "{
- 'url':'<your_url_string>'
-}"
-```
-
-## Next steps
-
-In this guide, you learned how to use a custom Product Recognition model to better meet your business needs. Next, set up planogram matching, which works in conjunction with custom Product Recognition.
-
-> [!div class="nextstepaction"]
-> [Planogram matching](shelf-planogram.md)
-
-* [Image Analysis overview](../overview-image-analysis.md)
cognitive-services Shelf Modify Images https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Computer-vision/how-to/shelf-modify-images.md
- Title: Prepare images for Product Recognition-
-description: Use the stitching and rectification APIs to prepare organic photos of retail shelves for accurate image analysis.
------ Previously updated : 07/10/2023----
-# Prepare images for Product Recognition
-
-Part of the Product Recognition workflow involves fixing and modifying the input images so the service can perform correctly.
-
-This guide shows you how to use the Stitching API to combine multiple images of the same physical shelf. This gives you a composite image of the entire retail shelf, even if it's viewed only partially by several different cameras.
-
-This guide also shows you how to use the Rectification API to correct for perspective distortion when you stitch together different images.
-
-## Prerequisites
-* An Azure subscription - [Create one for free](https://azure.microsoft.com/free/cognitive-services/)
-* Once you have your Azure subscription, <a href="https://portal.azure.com/#create/Microsoft.CognitiveServicesComputerVision" title="Create a Computer Vision resource" target="_blank">create a Computer Vision resource </a> in the Azure portal. It must be deployed in the **East US** or **West US 2** region. After it deploys, select **Go to resource**.
- * You'll need the key and endpoint from the resource you create to connect your application to the Computer Vision service. You'll paste your key and endpoint into the code below later in the quickstart.
-* An Azure Storage resource with a blob storage container. [Create one](/azure/storage/common/storage-account-create?tabs=azure-portal)
-* [cURL](https://curl.haxx.se/) installed. Or, you can use a different REST platform, like Postman, Swagger, or the REST Client extension for VS Code.
-* A set of photos that show adjacent parts of the same shelf. A 50% overlap between images is recommended. You can download and use the sample "unstitched" images from [GitHub](https://github.com/Azure-Samples/cognitive-services-sample-data-files/tree/master/ComputerVision/shelf-analysis).
--
-## Use the Stitching API
-
-The Stitching API combines multiple images of the same physical shelf.
--
-> [!NOTE]
-> The brands shown in the images are not affiliated with Microsoft and do not indicate any form of endorsement of Microsoft or Microsoft products by the brand owners, or an endorsement of the brand owners or their products by Microsoft.
-
-To run the image stitching operation on a set of images, follow these steps:
-
-1. Upload the images you'd like to stitch together to your blob storage container, and get the absolute URL for each image. You can stitch up to 10 images at once.
-1. Copy the following `curl` command into a text editor.
-
- ```bash
- curl.exe -H "Ocp-Apim-Subscription-Key: <subscriptionKey>" -H "Content-Type: application/json" "<endpoint>/computervision/imagecomposition:stitch?api-version=2023-04-01-preview" --output <your_filename> -d "{
- 'images': [
- {
- 'url':'<your_url_string>'
- },
- {
- 'url':'<your_url_string_2>'
- },
- ...
- ]
- }"
- ```
-1. Make the following changes in the command where needed:
- 1. Replace the value of `<subscriptionKey>` with your Computer Vision resource key.
- 1. Replace the value of `<endpoint>` with your Computer Vision resource endpoint. For example: `https://YourResourceName.cognitiveservices.azure.com`.
- 1. Replace the `<your_url_string>` contents with the blob URLs of the images. The images should be ordered left to right and top to bottom, according to the physical spaces they show.
- 1. Replace `<your_filename>` with the name and extension of the file where you'd like to get the result (for example, `download.jpg`).
-1. Open a command prompt window.
-1. Paste your edited `curl` command from the text editor into the command prompt window, and then run the command.
-
-## Examine the stitching response
-
-The API returns a `200` response, and the new file is downloaded to the location you specified.
-
-## Use the Rectification API
-
-After you complete the stitching operation, we recommend you do the rectification operation for optimal analysis results.
--
-> [!NOTE]
-> The brands shown in the images are not affiliated with Microsoft and do not indicate any form of endorsement of Microsoft or Microsoft products by the brand owners, or an endorsement of the brand owners or their products by Microsoft.
-
-To correct the perspective distortion in the composite image, follow these steps:
-
-1. Upload the image you'd like to rectify to your blob storage container, and get the absolute URL.
-1. Copy the following `curl` command into a text editor.
-
- ```bash
- curl.exe -H "Ocp-Apim-Subscription-Key: <subscriptionKey>" -H "Content-Type: application/json" "<endpoint>/computervision/imagecomposition:rectify?api-version=2023-04-01-preview" --output <your_filename> -d "{
- 'url': '<your_url_string>',
- 'controlPoints': {
- 'topLeft': {
- 'x': 0.1,
- 'y': 0.1
- },
- 'topRight': {
- 'x': 0.2,
- 'y': 0.2
- },
- 'bottomLeft': {
- 'x': 0.3,
- 'y': 0.3
- },
- 'bottomRight': {
- 'x': 0.4,
- 'y': 0.4
- }
- }
- }"
- ```
-1. Make the following changes in the command where needed:
- 1. Replace the value of `<subscriptionKey>` with your Computer Vision resource key.
- 1. Replace the value of `<endpoint>` with your Computer Vision resource endpoint. For example: `https://YourResourceName.cognitiveservices.azure.com`.
- 1. Replace `<your_url_string>` with the blob storage URL of the image.
- 1. Replace the four control point coordinates in the request body. X is the horizontal coordinate and Y is the vertical coordinate. The coordinates are normalized: for example, 0.5,0.5 indicates the center of the image, and 1,1 indicates the bottom right corner. Set the coordinates to define the four corners of the shelf fixture as it appears in the image (see the conversion sketch after these steps).
-
- :::image type="content" source="../media/shelf/rectify.png" alt-text="Photo of a shelf with its four corners outlined.":::
-
- > [!NOTE]
- > The brands shown in the images are not affiliated with Microsoft and do not indicate any form of endorsement of Microsoft or Microsoft products by the brand owners, or an endorsement of the brand owners or their products by Microsoft.
-
- 1. Replace `<your_filename>` with the name and extension of the file where you'd like to get the result (for example, `download.jpg`).
-1. Open a command prompt window.
-1. Paste your edited `curl` command from the text editor into the command prompt window, and then run the command.
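-
-If you have the corner positions in pixels, you can derive the normalized control point values by dividing by the image dimensions. The following C# sketch is illustrative only; the pixel values and variable names are hypothetical.
-
-```csharp
-// Hypothetical pixel values: the top-left fixture corner at pixel (250, 180) in a 2500 x 1800 photo.
-int pixelX = 250, pixelY = 180, imageWidth = 2500, imageHeight = 1800;
-
-// Normalize against the image dimensions: (0,0) is the top-left corner, (1,1) is the bottom-right corner.
-double x = (double)pixelX / imageWidth;   // 0.1
-double y = (double)pixelY / imageHeight;  // 0.1
-Console.WriteLine($"'topLeft': {{ 'x': {x}, 'y': {y} }}");
-```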
--
-## Examine the rectification response
-
-The API returns a `200` response, and the new file is downloaded to the location you specified.
-
-## Next steps
-
-In this guide, you learned how to prepare shelf photos for analysis. Next, call the Product Understanding API to get analysis results.
-
-> [!div class="nextstepaction"]
-> [Analyze a shelf image](./shelf-analyze.md)
-
-* [Image Analysis overview](../overview-image-analysis.md)
cognitive-services Shelf Planogram https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Computer-vision/how-to/shelf-planogram.md
- Title: Check planogram compliance with Image Analysis
-description: Learn how to use the Planogram Matching API to check that a retail shelf in a photo matches its planogram layout.
- Previously updated : 05/02/2023
-# Check planogram compliance with Image Analysis
-
-A planogram is a diagram that indicates the correct placement of retail products on shelves. The Image Analysis Planogram Matching API lets you compare analysis results from a photo to the store's planogram input. It returns an account of all the positions in the planogram, and whether a product was found in each position.
--
-> [!NOTE]
-> The brands shown in the images are not affiliated with Microsoft and do not indicate any form of endorsement of Microsoft or Microsoft products by the brand owners, or an endorsement of the brand owners or their products by Microsoft.
-
-## Prerequisites
-* You must have already set up and run basic [Product Understanding analysis](./shelf-analyze.md) with the Product Understanding API.
-* [cURL](https://curl.haxx.se/) installed. Or, you can use a different REST platform, like Postman, Swagger, or the REST Client extension for VS Code.
-
-## Prepare a planogram schema
-
-You need to have your planogram data in a specific JSON format. See the sections below for field definitions.
-
-```json
-"planogram": {
- "width": 100.0,
- "height": 50.0,
- "products": [
- {
- "id": "string",
- "name": "string",
- "w": 12.34,
- "h": 123.4
- }
- ],
- "fixtures": [
- {
- "id": "string",
- "w": 2.0,
- "h": 10.0,
- "x": 0.0,
- "y": 3.0
- }
- ],
- "positions": [
- {
- "id": "string",
- "productId": "string",
- "fixtureId": "string",
- "x": 12.0,
- "y": 34.0
- }
- ]
-}
-```
-
-The X and Y coordinates are relative to a top-left origin, and the width and height extend each bounding box down and to the right. The following diagram shows examples of the coordinate system.
--
-> [!NOTE]
-> The brands shown in the images are not affiliated with Microsoft and do not indicate any form of endorsement of Microsoft or Microsoft products by the brand owners, or an endorsement of the brand owners or their products by Microsoft.
-
-Quantities in the planogram schema are in nonspecific units. They can correspond to inches, centimeters, or any other unit of measurement. The matching algorithm calculates the relationship between the photo analysis units (pixels) and the planogram units.
-
-### Planogram API Model
-
-Describes the planogram for planogram matching operations.
-
-| Name | Type | Description | Required |
-| - | - | -- | -- |
-| `width` | double | Width of the planogram. | Yes |
-| `height` | double | Height of the planogram. | Yes |
-| `products` | [ProductApiModel](#product-api-model) | List of products in the planogram. | Yes |
-| `fixtures` | [FixtureApiModel](#fixture-api-model) | List of fixtures in the planogram. | Yes |
-| `positions` | [PositionApiModel](#position-api-model)| List of positions in the planogram. | Yes |
-
-### Product API Model
-
-Describes a product in the planogram.
-
-| Name | Type | Description | Required |
-| - | - | -- | -- |
-| `id` | string | ID of the product. | Yes |
-| `name` | string | Name of the product. | Yes |
-| `w` | double | Width of the product. | Yes |
-| `h` | double | Height of the product. | Yes |
-
-### Fixture API Model
-
-Describes a fixture (shelf or similar hardware) in a planogram.
-
-| Name | Type | Description | Required |
-| - | - | -- | -- |
-| `id` | string | ID of the fixture. | Yes |
-| `w` | double | Width of the fixture. | Yes |
-| `h` | double | Height of the fixture. | Yes |
-| `x` | double | Left offset from the origin, in units of inches or centimeters. | Yes |
-| `y` | double | Top offset from the origin, in units of inches or centimeters. | Yes |
-
-### Position API Model
-
-Describes a product's position in a planogram.
-
-| Name | Type | Description | Required |
-| - | - | -- | -- |
-| `id` | string | ID of the position. | Yes |
-| `productId` | string | ID of the product. | Yes |
-| `fixtureId` | string | ID of the fixture that the product is on. | Yes |
-| `x` | double | Left offset from the origin, in units of inches or centimeters. | Yes |
-| `y` | double | Top offset from the origin, in units of inches or centimeters. | Yes |
-
-## Get analysis results
-
-Next, you need to do a [Product Understanding](./shelf-analyze.md) API call with a [custom model](./shelf-model-customization.md).
-
-The returned JSON includes a `"detectedProducts"` structure. It shows all the products that were detected on the shelf, with the product-specific labels you used in the training stage.
-
-```json
-"detectedProducts": {
- "imageMetadata": {
- "width": 21,
- "height": 25
- },
- "products": [
- {
- "id": "01",
- "boundingBox": {
- "x": 123,
- "y": 234,
- "w": 34,
- "h": 45
- },
- "classifications": [
- {
- "confidence": 0.8,
- "label": "Product1"
- }
- ]
- }
- ],
- "gaps": [
- {
- "id": "02",
- "boundingBox": {
- "x": 12,
- "y": 123,
- "w": 1234,
- "h": 123
- },
- "classifications": [
- {
- "confidence": 0.9,
- "label": "Product1"
- }
- ]
- }
- ]
-}
-```
-
-## Prepare the matching request
-
-Combine the JSON content of your planogram schema and the JSON content of the analysis results into a single request body, like this:
-
-```json
-"planogram": {
- "width": 100.0,
- "height": 50.0,
- "products": [
- {
- "id": "string",
- "name": "string",
- "w": 12.34,
- "h": 123.4
- }
- ],
- "fixtures": [
- {
- "id": "string",
- "w": 2.0,
- "h": 10.0,
- "x": 0.0,
- "y": 3.0
- }
- ],
- "positions": [
- {
- "id": "string",
- "productId": "string",
- "fixtureId": "string",
- "x": 12.0,
- "y": 34.0
- }
- ]
-},
-"detectedProducts": {
- "imageMetadata": {
- "width": 21,
- "height": 25
- },
- "products": [
- {
- "id": "01",
- "boundingBox": {
- "x": 123,
- "y": 234,
- "w": 34,
- "h": 45
- },
- "classifications": [
- {
- "confidence": 0.8,
- "label": "Product1"
- }
- ]
- }
- ],
- "gaps": [
- {
- "id": "02",
- "boundingBox": {
- "x": 12,
- "y": 123,
- "w": 1234,
- "h": 123
- },
- "classifications": [
- {
- "confidence": 0.9,
- "label": "Product1"
- }
- ]
- }
- ]
-}
-```
-
-This is the text you'll use in your API request body.
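-
-If you're scripting the request, one way to build this body is to merge the two JSON documents programmatically. The following C# sketch uses Newtonsoft.Json, assumes hypothetical file names for the planogram schema and the analysis response, and assumes the service accepts the combined content as a single JSON object; it isn't part of the service API.
-
-```csharp
-using Newtonsoft.Json.Linq;
-using System.IO;
-
-// Hypothetical files: planogram.json contains { "planogram": { ... } } and
-// analysis-result.json contains the Product Understanding response with "detectedProducts".
-JObject planogramDoc = JObject.Parse(File.ReadAllText("planogram.json"));
-JObject analysisDoc = JObject.Parse(File.ReadAllText("analysis-result.json"));
-
-// Combine both parts into the single JSON object used as the request body.
-var requestBody = new JObject
-{
-    ["planogram"] = planogramDoc["planogram"],
-    ["detectedProducts"] = analysisDoc["detectedProducts"]
-};
-
-string body = requestBody.ToString();
-```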
-
-## Call the planogram matching API
-
-1. Copy the following `curl` command into a text editor.
-
- ```bash
- curl.exe -H "Ocp-Apim-Subscription-Key: <subscriptionKey>" -H "Content-Type: application/json" "<endpoint>/vision/v4.0-preview.1/operations/shelfanalysis-planogrammatching:analyze" -d "<body>"
- ```
-1. Make the following changes in the command where needed:
- 1. Replace the value of `<subscriptionKey>` with your Computer Vision resource key.
- 1. Replace the value of `<endpoint>` with your Computer Vision resource endpoint. For example: `https://YourResourceName.cognitiveservices.azure.com`.
- 1. Replace the value of `<body>` with the joined JSON string you prepared in the previous section.
-1. Open a command prompt window.
-1. Paste your edited `curl` command from the text editor into the command prompt window, and then run the command.
-
-## Examine the response
-
-A successful response is returned in JSON, showing the products (or gaps) detected at each planogram position. See the sections below for field definitions.
-
-```json
-{
- "matchedResultsPerPosition": [
- {
- "positionId": "01",
- "detectedObject": {
- "id": "01",
- "boundingBox": {
- "x": 12,
- "y": 1234,
- "w": 123,
- "h": 12345
- },
- "classifications": [
- {
- "confidence": 0.9,
- "label": "Product1"
- }
- ]
- }
- }
- ]
-}
-```
-
-### Planogram Matching Position API Model
-
-Paired planogram position ID and corresponding detected object from product understanding result.
-
-| Name | Type | Description | Required |
-| - | - | -- | -- |
-| `positionId` | string | The position ID from the planogram matched to the corresponding detected object. | No |
-| `detectedObject` | DetectedObjectApiModel | Describes a detected object in an image. | No |
-
-## Next steps
-
-* [Image Analysis overview](../overview-image-analysis.md)
cognitive-services Specify Detection Model https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Computer-vision/how-to/specify-detection-model.md
- Title: How to specify a detection model - Face
-description: This article will show you how to choose which face detection model to use with your Azure Face application.
- Previously updated : 03/05/2021
-# Specify a face detection model
-
-This guide shows you how to specify a face detection model for the Azure Face service.
-
-The Face service uses machine learning models to perform operations on human faces in images. We continue to improve the accuracy of our models based on customer feedback and advances in research, and we deliver these improvements as model updates. Developers have the option to specify which version of the face detection model they'd like to use; they can choose the model that best fits their use case.
-
-Read on to learn how to specify the face detection model in certain face operations. The Face service uses face detection whenever it converts an image of a face into some other form of data.
-
-If you aren't sure whether you should use the latest model, skip to the [Evaluate different models](#evaluate-different-models) section to evaluate the new model and compare results using your current data set.
-
-## Prerequisites
-
-You should be familiar with the concept of AI face detection. If you aren't, see the face detection conceptual guide or how-to guide:
-
-* [Face detection concepts](../concept-face-detection.md)
-* [Call the detect API](identity-detect-faces.md)
--
-## Evaluate different models
-
-The different face detection models are optimized for different tasks. See the following table for an overview of the differences.
-
-|**detection_01** |**detection_02** |**detection_03**
-|---|---|---|
-|Default choice for all face detection operations. | Released in May 2019 and available optionally in all face detection operations. | Released in February 2021 and available optionally in all face detection operations.
-|Not optimized for small, side-view, or blurry faces. | Improved accuracy on small, side-view, and blurry faces. | Further improved accuracy, including on smaller faces (64x64 pixels) and rotated face orientations.
-|Returns main face attributes (head pose, age, emotion, and so on) if they're specified in the detect call. | Does not return face attributes. | Returns mask and head pose attributes if they're specified in the detect call.
-|Returns face landmarks if they're specified in the detect call. | Does not return face landmarks. | Returns face landmarks if they're specified in the detect call.
-
-The best way to compare the performances of the detection models is to use them on a sample dataset. We recommend calling the [Face - Detect] API on a variety of images, especially images of many faces or of faces that are difficult to see, using each detection model. Pay attention to the number of faces that each model returns.
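-
-For example, the following .NET sketch detects faces in the same test image with each model and prints the face counts. It assumes an authenticated `faceClient` object and a hypothetical image URL.
-
-```csharp
-string testImageUrl = "<your test image url>";
-
-foreach (var model in new[] { "detection_01", "detection_02", "detection_03" })
-{
-    // Detect with the given model; face IDs aren't needed for a simple count comparison.
-    var faces = await faceClient.Face.DetectWithUrlAsync(testImageUrl, returnFaceId: false, detectionModel: model);
-    Console.WriteLine($"{model}: {faces.Count} face(s) detected");
-}
-```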
-
-## Detect faces with specified model
-
-Face detection finds the bounding-box locations of human faces and identifies their visual landmarks. It extracts the face's features and stores them for later use in [recognition](../concept-face-recognition.md) operations.
-
-When you use the [Face - Detect] API, you can assign the model version with the `detectionModel` parameter. The available values are:
-
-* `detection_01`
-* `detection_02`
-* `detection_03`
-
-A request URL for the [Face - Detect] REST API will look like this:
-
-`https://westus.api.cognitive.microsoft.com/face/v1.0/detect[?returnFaceId][&returnFaceLandmarks][&returnFaceAttributes][&recognitionModel][&returnRecognitionModel][&detectionModel]&subscription-key=<Subscription key>`
-
-If you are using the client library, you can assign the value for `detectionModel` by passing in an appropriate string. If you leave it unassigned, the API will use the default model version (`detection_01`). See the following code example for the .NET client library.
-
-```csharp
-string imageUrl = "https://news.microsoft.com/ceo/assets/photos/06_web.jpg";
-var faces = await faceClient.Face.DetectWithUrlAsync(url: imageUrl, returnFaceId: false, returnFaceLandmarks: false, recognitionModel: "recognition_04", detectionModel: "detection_03");
-```
-
-## Add face to Person with specified model
-
-The Face service can extract face data from an image and associate it with a **Person** object through the [PersonGroup Person - Add Face](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f3039523b) API. In this API call, you can specify the detection model in the same way as in [Face - Detect].
-
-See the following code example for the .NET client library.
-
-```csharp
-// Create a PersonGroup and add a person with face detected by "detection_03" model
-string personGroupId = "mypersongroupid";
-await faceClient.PersonGroup.CreateAsync(personGroupId, "My Person Group Name", recognitionModel: "recognition_04");
-
-var personId = (await faceClient.PersonGroupPerson.CreateAsync(personGroupId, "My Person Name")).PersonId;
-
-string imageUrl = "https://news.microsoft.com/ceo/assets/photos/06_web.jpg";
-await faceClient.PersonGroupPerson.AddFaceFromUrlAsync(personGroupId, personId, imageUrl, detectionModel: "detection_03");
-```
-
-This code creates a **PersonGroup** with ID `mypersongroupid` and adds a **Person** to it. Then it adds a Face to this **Person** using the `detection_03` model. If you don't specify the *detectionModel* parameter, the API will use the default model, `detection_01`.
-
-> [!NOTE]
-> You don't need to use the same detection model for all faces in a **Person** object, and you don't need to use the same detection model when detecting new faces to compare with a **Person** object (in the [Face - Identify] API, for example).
-
-## Add face to FaceList with specified model
-
-You can also specify a detection model when you add a face to an existing **FaceList** object. See the following code example for the .NET client library.
-
-```csharp
-await faceClient.FaceList.CreateAsync(faceListId, "My face collection", recognitionModel: "recognition_04");
-
-string imageUrl = "https://news.microsoft.com/ceo/assets/photos/06_web.jpg";
-await faceClient.FaceList.AddFaceFromUrlAsync(faceListId, imageUrl, detectionModel: "detection_03");
-```
-
-This code creates a **FaceList** called `My face collection` and adds a Face to it with the `detection_03` model. If you don't specify the *detectionModel* parameter, the API will use the default model, `detection_01`.
-
-> [!NOTE]
-> You don't need to use the same detection model for all faces in a **FaceList** object, and you don't need to use the same detection model when detecting new faces to compare with a **FaceList** object.
--
-## Next steps
-
-In this article, you learned how to specify the detection model to use with different Face APIs. Next, follow a quickstart to get started with face detection and analysis.
-
-* [Face .NET SDK](../quickstarts-sdk/identity-client-library.md?pivots=programming-language-csharp)
-* [Face Python SDK](../quickstarts-sdk/identity-client-library.md?pivots=programming-language-python)
-
-[Face - Detect]: https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d
-[Face - Find Similar]: https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f30395237
-[Face - Identify]: https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f30395239
-[Face - Verify]: https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f3039523a
-[PersonGroup - Create]: https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f30395244
-[PersonGroup - Get]: https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f30395246
-[PersonGroup Person - Add Face]: https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f3039523b
-[PersonGroup - Train]: https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f30395249
-[LargePersonGroup - Create]: https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/599acdee6ac60f11b48b5a9d
-[FaceList - Create]: https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f3039524b
-[FaceList - Get]: https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f3039524c
-[FaceList - Add Face]: https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f30395250
-[LargeFaceList - Create]: https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/5a157b68d2de3616c086f2cc
cognitive-services Specify Recognition Model https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Computer-vision/how-to/specify-recognition-model.md
- Title: How to specify a recognition model - Face
-description: This article will show you how to choose which recognition model to use with your Azure Face application.
- Previously updated : 03/05/2021
-# Specify a face recognition model
--
-This guide shows you how to specify a face recognition model for face detection, identification and similarity search using the Azure Face service.
-
-The Face service uses machine learning models to perform operations on human faces in images. We continue to improve the accuracy of our models based on customer feedback and advances in research, and we deliver these improvements as model updates. Developers can specify which version of the face recognition model they'd like to use. They can choose the model that best fits their use case.
-
-The Azure Face service has four recognition models available. The models _recognition_01_ (published 2017), _recognition_02_ (published 2019), and _recognition_03_ (published 2020) are continually supported to ensure backwards compatibility for customers using FaceLists or **PersonGroup**s created with these models. A **FaceList** or **PersonGroup** will always use the recognition model it was created with, and new faces will become associated with this model when they're added. This can't be changed after creation and customers will need to use the corresponding recognition model with the corresponding **FaceList** or **PersonGroup**.
-
-You can move to later recognition models at your own convenience; however, you'll need to create new FaceLists and PersonGroups with the recognition model of your choice.
-
-The _recognition_04_ model (published 2021) is the most accurate model currently available. If you're a new customer, we recommend using this model. _Recognition_04_ will provide improved accuracy for both similarity comparisons and person-matching comparisons. _Recognition_04_ improves recognition for enrolled users wearing face covers (surgical masks, N95 masks, cloth masks). Now you can build safe and seamless user experiences that use the latest _detection_03_ model to detect whether an enrolled user is wearing a face cover. Then you can use the latest _recognition_04_ model to recognize their identity. Each model operates independently of the others, and a confidence threshold set for one model isn't meant to be compared across the other recognition models.
-
-Read on to learn how to specify a selected model in different Face operations while avoiding model conflicts. If you're an advanced user and would like to determine whether you should switch to the latest model, skip to the [Evaluate different models](#evaluate-different-models) section. You can evaluate the new model and compare results using your current data set.
--
-## Prerequisites
-
-You should be familiar with the concepts of AI face detection and identification. If you aren't, see these guides first:
-
-* [Face detection concepts](../concept-face-detection.md)
-* [Face recognition concepts](../concept-face-recognition.md)
-* [Call the detect API](identity-detect-faces.md)
-
-## Detect faces with specified model
-
-Face detection identifies the visual landmarks of human faces and finds their bounding-box locations. It also extracts the face's features and stores them temporarily for up to 24 hours for use in identification. All of this information forms the representation of one face.
-
-The recognition model is used when the face features are extracted, so you can specify a model version when performing the Detect operation.
-
-When using the [Face - Detect] API, assign the model version with the `recognitionModel` parameter. The available values are:
-* recognition_01
-* recognition_02
-* recognition_03
-* recognition_04
--
-Optionally, you can specify the _returnRecognitionModel_ parameter (default **false**) to indicate whether _recognitionModel_ should be returned in response. So, a request URL for the [Face - Detect] REST API will look like this:
-
-`https://westus.api.cognitive.microsoft.com/face/v1.0/detect[?returnFaceId][&returnFaceLandmarks][&returnFaceAttributes][&recognitionModel][&returnRecognitionModel]&subscription-key=<Subscription key>`
-
-If you're using the client library, you can assign the value for `recognitionModel` by passing a string representing the version. If you leave it unassigned, a default model version of `recognition_01` will be used. See the following code example for the .NET client library.
-
-```csharp
-string imageUrl = "https://news.microsoft.com/ceo/assets/photos/06_web.jpg";
-var faces = await faceClient.Face.DetectWithUrlAsync(url: imageUrl, returnFaceId: true, returnFaceLandmarks: true, recognitionModel: "recognition_01", returnRecognitionModel: true);
-```
-
-> [!NOTE]
-> The _returnFaceId_ parameter must be set to `true` in order to enable the face recognition scenarios in later steps.
-
-## Identify faces with specified model
-
-The Face service can extract face data from an image and associate it with a **Person** object (through the [Add face](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f3039523b) API call, for example), and multiple **Person** objects can be stored together in a **PersonGroup**. Then, a new face can be compared against a **PersonGroup** (with the [Face - Identify] call), and the matching person within that group can be identified.
-
-A **PersonGroup** should have one unique recognition model for all of the **Person**s, and you can specify this using the `recognitionModel` parameter when you create the group ([PersonGroup - Create] or [LargePersonGroup - Create]). If you don't specify this parameter, the original `recognition_01` model is used. A group will always use the recognition model it was created with, and new faces will become associated with this model when they're added to it. This can't be changed after a group's creation. To see what model a **PersonGroup** is configured with, use the [PersonGroup - Get] API with the _returnRecognitionModel_ parameter set as **true**.
-
-See the following code example for the .NET client library.
-
-```csharp
-// Create an empty PersonGroup with "recognition_04" model
-string personGroupId = "mypersongroupid";
-await faceClient.PersonGroup.CreateAsync(personGroupId, "My Person Group Name", recognitionModel: "recognition_04");
-```
-
-In this code, a **PersonGroup** with ID `mypersongroupid` is created, and it's set up to use the _recognition_04_ model to extract face features.
-
-Correspondingly, you need to specify which model to use when detecting faces to compare against this **PersonGroup** (through the [Face - Detect] API). The model you use should always be consistent with the **PersonGroup**'s configuration; otherwise, the operation will fail due to incompatible models.
-
-There is no change in the [Face - Identify] API; you only need to specify the model version in detection.
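-
-As a quick check, the following sketch retrieves an existing group's configuration with _returnRecognitionModel_ set to **true** (assuming an authenticated `faceClient` and the group created above).
-
-```csharp
-// Read back the PersonGroup and confirm which recognition model it was created with.
-var personGroup = await faceClient.PersonGroup.GetAsync("mypersongroupid", returnRecognitionModel: true);
-Console.WriteLine(personGroup.RecognitionModel); // for example, "recognition_04"
-```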
-
-## Find similar faces with specified model
-
-You can also specify a recognition model for similarity search. You can assign the model version with `recognitionModel` when creating the **FaceList** with [FaceList - Create] API or [LargeFaceList - Create]. If you don't specify this parameter, the `recognition_01` model is used by default. A **FaceList** will always use the recognition model it was created with, and new faces will become associated with this model when they're added to the list; you can't change this after creation. To see what model a **FaceList** is configured with, use the [FaceList - Get] API with the _returnRecognitionModel_ parameter set as **true**.
-
-See the following code example for the .NET client library.
-
-```csharp
-await faceClient.FaceList.CreateAsync(faceListId, "My face collection", recognitionModel: "recognition_04");
-```
-
-This code creates a **FaceList** called `My face collection`, using the _recognition_04_ model for feature extraction. When you search this **FaceList** for similar faces to a new detected face, that face must have been detected ([Face - Detect]) using the _recognition_04_ model. As in the previous section, the model needs to be consistent.
-
-There is no change in the [Face - Find Similar] API; you only specify the model version in detection.
-
-## Verify faces with specified model
-
-The [Face - Verify] API checks whether two faces belong to the same person. There is no change in the Verify API with regard to recognition models, but you can only compare faces that were detected with the same model.
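-
-The following sketch illustrates that constraint: both faces are detected with the same recognition model before they're compared. The image URLs are placeholders, and an authenticated `faceClient` is assumed.
-
-```csharp
-var face1 = (await faceClient.Face.DetectWithUrlAsync("<image url 1>", returnFaceId: true, recognitionModel: "recognition_04")).First();
-var face2 = (await faceClient.Face.DetectWithUrlAsync("<image url 2>", returnFaceId: true, recognitionModel: "recognition_04")).First();
-
-// Both face IDs come from detections that used recognition_04, so they can be compared.
-var result = await faceClient.Face.VerifyFaceToFaceAsync(face1.FaceId.Value, face2.FaceId.Value);
-Console.WriteLine($"Identical: {result.IsIdentical}, confidence: {result.Confidence}");
-```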
-
-## Evaluate different models
-
-If you'd like to compare the performances of different recognition models on your own data (a short code sketch follows these steps), you'll need to:
-1. Create four PersonGroups using _recognition_01_, _recognition_02_, _recognition_03_, and _recognition_04_ respectively.
-1. Use your image data to detect faces and register them to **Person**s within these four **PersonGroup**s.
-1. Train your PersonGroups using the PersonGroup - Train API.
-1. Test with Face - Identify on all four **PersonGroup**s and compare the results.
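-
-A minimal sketch of that loop follows. The group IDs are illustrative, the data-loading step is omitted, and an authenticated `faceClient` is assumed.
-
-```csharp
-string[] models = { "recognition_01", "recognition_02", "recognition_03", "recognition_04" };
-
-foreach (var model in models)
-{
-    string groupId = $"evaluation-{model}";
-
-    // One PersonGroup per recognition model.
-    await faceClient.PersonGroup.CreateAsync(groupId, $"Evaluation group for {model}", recognitionModel: model);
-
-    // ... create Persons and add faces from your data set here ...
-
-    // Train before running Face - Identify against the group.
-    await faceClient.PersonGroup.TrainAsync(groupId);
-}
-```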
--
-If you normally specify a confidence threshold (a value between zero and one that determines how confident the model must be to identify a face), you may need to use different thresholds for different models. A threshold for one model isn't meant to be shared with another and won't necessarily produce the same results.
-
-## Next steps
-
-In this article, you learned how to specify the recognition model to use with different Face service APIs. Next, follow a quickstart to get started with face detection.
-
-* [Face .NET SDK](../quickstarts-sdk/identity-client-library.md?pivots=programming-language-csharp)
-* [Face Python SDK](../quickstarts-sdk/identity-client-library.md?pivots=programming-language-python)
-
-[Face - Detect]: https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d
-[Face - Find Similar]: https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f30395237
-[Face - Identify]: https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f30395239
-[Face - Verify]: https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f3039523a
-[PersonGroup - Create]: https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f30395244
-[PersonGroup - Get]: https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f30395246
-[PersonGroup Person - Add Face]: https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f3039523b
-[PersonGroup - Train]: https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f30395249
-[LargePersonGroup - Create]: https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/599acdee6ac60f11b48b5a9d
-[FaceList - Create]: https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f3039524b
-[FaceList - Get]: https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f3039524c
-[LargeFaceList - Create]: https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/5a157b68d2de3616c086f2cc
cognitive-services Use Headpose https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Computer-vision/how-to/use-headpose.md
- Title: Use the HeadPose attribute
-description: Learn how to use the HeadPose attribute to automatically rotate the face rectangle or detect head gestures in a video feed.
- Previously updated : 02/23/2021
-# Use the HeadPose attribute
-
-In this guide, you'll see how you can use the HeadPose attribute of a detected face to enable some key scenarios.
-
-## Rotate the face rectangle
-
-The face rectangle, returned with every detected face, marks the location and size of the face in the image. By default, the rectangle is always aligned with the image (its sides are vertical and horizontal); this can be inefficient for framing angled faces. In situations where you want to programmatically crop faces in an image, it's better to be able to rotate the rectangle to crop.
-
-The [Cognitive Services Face WPF (Windows Presentation Foundation)](https://github.com/Azure-Samples/cognitive-services-dotnet-sdk-samples/tree/master/app-samples/Cognitive-Services-Face-WPF) sample app uses the HeadPose attribute to rotate its detected face rectangles.
-
-### Explore the sample code
-
-You can programmatically rotate the face rectangle by using the HeadPose attribute. If you specify this attribute when detecting faces (see [Call the detect API](identity-detect-faces.md)), you will be able to query it later. The following method from the [Cognitive Services Face WPF](https://github.com/Azure-Samples/cognitive-services-dotnet-sdk-samples/tree/master/app-samples/Cognitive-Services-Face-WPF) app takes a list of **DetectedFace** objects and returns a list of **[Face](https://github.com/Azure-Samples/cognitive-services-dotnet-sdk-samples/blob/master/app-samples/Cognitive-Services-Face-WPF/Sample-WPF/Controls/Face.cs)** objects. **Face** here is a custom class that stores face data, including the updated rectangle coordinates. New values are calculated for **top**, **left**, **width**, and **height**, and a new field **FaceAngle** specifies the rotation.
-
-```csharp
-/// <summary>
-/// Calculate the rendering face rectangle
-/// </summary>
-/// <param name="faces">Detected face from service</param>
-/// <param name="maxSize">Image rendering size</param>
-/// <param name="imageInfo">Image width and height</param>
-/// <returns>Face structure for rendering</returns>
-public static IEnumerable<Face> CalculateFaceRectangleForRendering(IList<DetectedFace> faces, int maxSize, Tuple<int, int> imageInfo)
-{
- var imageWidth = imageInfo.Item1;
- var imageHeight = imageInfo.Item2;
- var ratio = (float)imageWidth / imageHeight;
- int uiWidth = 0;
- int uiHeight = 0;
- if (ratio > 1.0)
- {
- uiWidth = maxSize;
- uiHeight = (int)(maxSize / ratio);
- }
- else
- {
- uiHeight = maxSize;
- uiWidth = (int)(ratio * uiHeight);
- }
-
- var uiXOffset = (maxSize - uiWidth) / 2;
- var uiYOffset = (maxSize - uiHeight) / 2;
- var scale = (float)uiWidth / imageWidth;
-
- foreach (var face in faces)
- {
- var left = (int)(face.FaceRectangle.Left * scale + uiXOffset);
- var top = (int)(face.FaceRectangle.Top * scale + uiYOffset);
-
- // Angle of face rectangles, default value is 0 (not rotated).
- double faceAngle = 0;
-
- // If head pose attributes have been obtained, re-calculate the left & top (X & Y) positions.
- if (face.FaceAttributes?.HeadPose != null)
- {
- // Head pose's roll value acts directly as the face angle.
- faceAngle = face.FaceAttributes.HeadPose.Roll;
- var angleToPi = Math.Abs((faceAngle / 180) * Math.PI);
-
- // _____ | / \ |
- // |____| => |/ /|
- // | \ / |
- // Re-calculate the face rectangle's left & top (X & Y) positions.
- var newLeft = face.FaceRectangle.Left +
- face.FaceRectangle.Width / 2 -
- (face.FaceRectangle.Width * Math.Sin(angleToPi) + face.FaceRectangle.Height * Math.Cos(angleToPi)) / 2;
-
- var newTop = face.FaceRectangle.Top +
- face.FaceRectangle.Height / 2 -
- (face.FaceRectangle.Height * Math.Sin(angleToPi) + face.FaceRectangle.Width * Math.Cos(angleToPi)) / 2;
-
- left = (int)(newLeft * scale + uiXOffset);
- top = (int)(newTop * scale + uiYOffset);
- }
-
- yield return new Face()
- {
- FaceId = face.FaceId?.ToString(),
- Left = left,
- Top = top,
- OriginalLeft = (int)(face.FaceRectangle.Left * scale + uiXOffset),
- OriginalTop = (int)(face.FaceRectangle.Top * scale + uiYOffset),
- Height = (int)(face.FaceRectangle.Height * scale),
- Width = (int)(face.FaceRectangle.Width * scale),
- FaceAngle = faceAngle,
- };
- }
-}
-```
-
-### Display the updated rectangle
-
-From here, you can use the returned **Face** objects in your display. The following lines from [FaceDetectionPage.xaml](https://github.com/Azure-Samples/cognitive-services-dotnet-sdk-samples/blob/master/app-samples/Cognitive-Services-Face-WPF/Sample-WPF/Controls/FaceDetectionPage.xaml) show how the new rectangle is rendered from this data:
-
-```xaml
- <DataTemplate>
- <Rectangle Width="{Binding Width}" Height="{Binding Height}" Stroke="#FF26B8F4" StrokeThickness="1">
- <Rectangle.LayoutTransform>
- <RotateTransform Angle="{Binding FaceAngle}"/>
- </Rectangle.LayoutTransform>
- </Rectangle>
-</DataTemplate>
-```
-
-## Detect head gestures
-
-You can detect head gestures like nodding and head shaking by tracking HeadPose changes in real time. You can use this feature as a custom liveness detector.
-
-Liveness detection is the task of determining that a subject is a real person and not an image or video representation. A head gesture detector could serve as one way to help verify liveness, especially as opposed to an image representation of a person.
-
-> [!CAUTION]
-> To detect head gestures in real time, you'll need to call the Face API at a high rate (more than once per second). If you have a free-tier (f0) subscription, this will not be possible. If you have a paid-tier subscription, make sure you've calculated the costs of making rapid API calls for head gesture detection.
-
-See the [Face HeadPose Sample](https://github.com/Azure-Samples/cognitive-services-dotnet-sdk-samples/tree/master/app-samples/FaceAPIHeadPoseSample) on GitHub for a working example of head gesture detection.
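-
-As a rough illustration (not taken from the sample), one way to flag a nod is to keep a short window of recent pitch values and watch for a large swing. The window size and threshold below are arbitrary placeholders.
-
-```csharp
-private static readonly Queue<double> recentPitches = new Queue<double>();
-private const int WindowSize = 15;               // roughly a few seconds of frames
-private const double NodThresholdDegrees = 10.0; // tune for your scenario
-
-// Call this with each new HeadPose reading from your video loop.
-private static void TrackPitch(HeadPose headPose)
-{
-    recentPitches.Enqueue(headPose.Pitch);
-    if (recentPitches.Count > WindowSize)
-    {
-        recentPitches.Dequeue();
-    }
-
-    if (recentPitches.Count == WindowSize &&
-        recentPitches.Max() - recentPitches.Min() > NodThresholdDegrees)
-    {
-        Console.WriteLine("Possible nod detected");
-    }
-}
-```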
-
-## Next steps
-
-See the [Cognitive Services Face WPF](https://github.com/Azure-Samples/cognitive-services-dotnet-sdk-samples/tree/master/app-samples/Cognitive-Services-Face-WPF) app on GitHub for a working example of rotated face rectangles. Or, see the [Face HeadPose Sample](https://github.com/Azure-Samples/cognitive-services-dotnet-sdk-samples/tree/master/app-samples) app, which tracks the HeadPose attribute in real time to detect head movements.
cognitive-services Use Large Scale https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Computer-vision/how-to/use-large-scale.md
- Title: "Example: Use the Large-Scale feature - Face"-
-description: This guide is an article on how to scale up from existing PersonGroup and FaceList objects to LargePersonGroup and LargeFaceList objects.
------- Previously updated : 05/01/2019----
-# Example: Use the large-scale feature
--
-This guide is an advanced article on how to scale up from existing PersonGroup and FaceList objects to LargePersonGroup and LargeFaceList objects, respectively. PersonGroups can hold up to 1000 persons in the free tier and 10,000 in the paid tier, while LargePersonGroups can hold up to one million persons in the paid tier.
-
-This guide demonstrates the migration process. It assumes a basic familiarity with PersonGroup and FaceList objects, the [Train](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/599ae2d16ac60f11b48b5aa4) operation, and the face recognition functions. To learn more about these subjects, see the [face recognition](../concept-face-recognition.md) conceptual guide.
-
-LargePersonGroup and LargeFaceList are collectively referred to as large-scale operations. LargePersonGroup can contain up to 1 million persons, each with a maximum of 248 faces. LargeFaceList can contain up to 1 million faces. The large-scale operations are similar to the conventional PersonGroup and FaceList but have some differences because of the new architecture.
-
-The samples are written in C# by using the Azure Cognitive Services Face client library.
-
-> [!NOTE]
-> To keep Identification and FindSimilar performant at large scale, the Train operation preprocesses the LargeFaceList and LargePersonGroup. The training time varies from seconds to about half an hour, based on the actual capacity. During the training period, it's possible to perform Identification and FindSimilar if a successful training operation was completed before. The drawback is that newly added persons and faces don't appear in the results until a new training operation is completed.
-
-## Step 1: Initialize the client object
-
-When you use the Face client library, the key and subscription endpoint are passed in through the constructor of the FaceClient class. See the [quickstart](/azure/cognitive-services/computer-vision/quickstarts-sdk/identity-client-library?pivots=programming-language-csharp&tabs=visual-studio) for instructions on creating a Face client object.
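-
-For reference, a minimal initialization with the .NET client library looks like the following sketch; the key and endpoint placeholders are yours to fill in.
-
-```csharp
-using Microsoft.Azure.CognitiveServices.Vision.Face;
-
-// Replace with your Face resource key and endpoint.
-var faceClient = new FaceClient(new ApiKeyServiceClientCredentials("<your-key>"))
-{
-    Endpoint = "https://<your-resource-name>.cognitiveservices.azure.com"
-};
-```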
-
-## Step 2: Code migration
-
-This section focuses on how to migrate PersonGroup or FaceList implementation to LargePersonGroup or LargeFaceList. Although LargePersonGroup or LargeFaceList differs from PersonGroup or FaceList in design and internal implementation, the API interfaces are similar for backward compatibility.
-
-Data migration isn't supported. You re-create the LargePersonGroup or LargeFaceList instead.
-
-### Migrate a PersonGroup to a LargePersonGroup
-
-Migration from a PersonGroup to a LargePersonGroup is simple. They share exactly the same group-level operations.
-
-For PersonGroup- or person-related implementation, it's necessary to change only the API paths or SDK class/module to LargePersonGroup and LargePersonGroup Person.
-
-Add all of the faces and persons from the PersonGroup to the new LargePersonGroup. For more information, see [Add faces](add-faces.md).
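-
-A minimal sketch of the re-creation step with the .NET client library, assuming an authenticated `faceClient` and illustrative IDs and URLs:
-
-```csharp
-const string LargePersonGroupId = "mylargepersongroupid_001";
-await faceClient.LargePersonGroup.CreateAsync(LargePersonGroupId, "My Large Person Group");
-
-// Re-create each person and re-add its face images from your own data source.
-var personId = (await faceClient.LargePersonGroupPerson.CreateAsync(LargePersonGroupId, "Person Name")).PersonId;
-await faceClient.LargePersonGroupPerson.AddFaceFromUrlAsync(LargePersonGroupId, personId, "<face image url>");
-
-// Train before calling Identify against the LargePersonGroup.
-await faceClient.LargePersonGroup.TrainAsync(LargePersonGroupId);
-```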
-
-### Migrate a FaceList to a LargeFaceList
-
-| FaceList APIs | LargeFaceList APIs |
-|:--:|:--:|
-| Create | Create |
-| Delete | Delete |
-| Get | Get |
-| List | List |
-| Update | Update |
-| - | Train |
-| - | Get Training Status |
-
-The preceding table is a comparison of list-level operations between FaceList and LargeFaceList. As is shown, LargeFaceList comes with new operations, Train and Get Training Status, when compared with FaceList. Training the LargeFaceList is a precondition of the
-[FindSimilar](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f30395237) operation. Training isn't required for FaceList. The following snippet is a helper function to wait for the training of a LargeFaceList:
-
-```csharp
-/// <summary>
-/// Helper function to train LargeFaceList and wait for finish.
-/// </summary>
-/// <remarks>
-/// The time interval can be adjusted considering the following factors:
-/// - The training time which depends on the capacity of the LargeFaceList.
-/// - The acceptable latency for getting the training status.
-/// - The call frequency and cost.
-///
-/// Estimated training time for LargeFaceList in different scale:
-/// - 1,000 faces cost about 1 to 2 seconds.
-/// - 10,000 faces cost about 5 to 10 seconds.
-/// - 100,000 faces cost about 1 to 2 minutes.
-/// - 1,000,000 faces cost about 10 to 30 minutes.
-/// </remarks>
-/// <param name="largeFaceListId">The Id of the LargeFaceList for training.</param>
-/// <param name="timeIntervalInMilliseconds">The time interval for getting training status in milliseconds.</param>
-/// <returns>A task of waiting for LargeFaceList training finish.</returns>
-private static async Task TrainLargeFaceList(
- string largeFaceListId,
- int timeIntervalInMilliseconds = 1000)
-{
- // Trigger a train call.
- await faceClient.LargeFaceList.TrainAsync(largeFaceListId);
-
- // Wait for training finish.
- while (true)
- {
- Task.Delay(timeIntervalInMilliseconds).Wait();
- var status = await faceClient.LargeFaceList.GetTrainingStatusAsync(largeFaceListId);
-
- if (status.Status == TrainingStatusType.Running)
- {
- continue;
- }
- else if (status.Status == TrainingStatusType.Succeeded)
- {
- break;
- }
- else
- {
- throw new Exception("The train operation failed.");
- }
- }
-}
-```
-
-Previously, a typical use of FaceList with added faces and FindSimilar looked like the following:
-
-```csharp
-// Create a FaceList.
-const string FaceListId = "myfacelistid_001";
-const string FaceListName = "MyFaceListDisplayName";
-const string ImageDir = @"/path/to/FaceList/images";
-faceClient.FaceList.CreateAsync(FaceListId, FaceListName).Wait();
-
-// Add Faces to the FaceList.
-Parallel.ForEach(
- Directory.GetFiles(ImageDir, "*.jpg"),
- async imagePath =>
- {
- using (Stream stream = File.OpenRead(imagePath))
- {
- await faceClient.FaceList.AddFaceFromStreamAsync(FaceListId, stream);
- }
- });
-
-// Perform FindSimilar.
-const string QueryImagePath = @"/path/to/query/image";
-var results = new List<SimilarPersistedFace[]>();
-using (Stream stream = File.OpenRead(QueryImagePath))
-{
- var faces = faceClient.Face.DetectWithStreamAsync(stream).Result;
- foreach (var face in faces)
- {
- results.Add(await faceClient.Face.FindSimilarAsync(face.FaceId.Value, faceListId: FaceListId, maxNumOfCandidatesReturned: 20));
- }
-}
-```
-
-When migrating it to LargeFaceList, it becomes the following:
-
-```csharp
-// Create a LargeFaceList.
-const string LargeFaceListId = "mylargefacelistid_001";
-const string LargeFaceListName = "MyLargeFaceListDisplayName";
-const string ImageDir = @"/path/to/FaceList/images";
-faceClient.LargeFaceList.CreateAsync(LargeFaceListId, LargeFaceListName).Wait();
-
-// Add Faces to the LargeFaceList.
-Parallel.ForEach(
- Directory.GetFiles(ImageDir, "*.jpg"),
- async imagePath =>
- {
- using (Stream stream = File.OpenRead(imagePath))
- {
- await faceClient.LargeFaceList.AddFaceFromStreamAsync(LargeFaceListId, stream);
- }
- });
-
-// Train() is a newly added operation for LargeFaceList.
-// You must call it before FindSimilarAsync() to ensure the newly added faces are searchable.
-await TrainLargeFaceList(LargeFaceListId);
-
-// Perform FindSimilar.
-const string QueryImagePath = @"/path/to/query/image";
-var results = new List<SimilarPersistedFace[]>();
-using (Stream stream = File.OpenRead(QueryImagePath))
-{
- var faces = faceClient.Face.DetectWithStreamAsync(stream).Result;
- foreach (var face in faces)
- {
- results.Add(await faceClient.Face.FindSimilarAsync(face.FaceId.Value, largeFaceListId: LargeFaceListId));
- }
-}
-```
-
-As previously shown, the data management and the FindSimilar part are almost the same. The only exception is that a fresh preprocessing Train operation must complete in the LargeFaceList before FindSimilar works.
-
-## Step 3: Train suggestions
-
-Although the Train operation speeds up [FindSimilar](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f30395237)
-and [Identification](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f30395239), the training time increases, especially at large scale. The estimated training time for different scales is listed in the following table.
-
-| Scale for faces or persons | Estimated training time |
-|:--:|:--:|
-| 1,000 | 1-2 sec |
-| 10,000 | 5-10 sec |
-| 100,000 | 1-2 min |
-| 1,000,000 | 10-30 min |
-
-To better utilize the large-scale feature, we recommend the following strategies.
-
-### Step 3.1: Customize time interval
-
-As is shown in `TrainLargeFaceList()`, there's a time interval in milliseconds that controls how often the training status is polled. For a LargeFaceList with more faces, a larger interval reduces the call count and cost. Customize the time interval according to the expected capacity of the LargeFaceList.
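-
-For example, a LargeFaceList near the upper end of that range could be trained with the helper above and a sparser, 1-minute polling interval (the ID is illustrative):
-
-```csharp
-// Poll the training status once per minute for a very large LargeFaceList.
-await TrainLargeFaceList("mylargefacelistid_001", timeIntervalInMilliseconds: 60 * 1000);
-```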
-
-The same strategy also applies to LargePersonGroup. For example, when you train a LargePersonGroup with 1 million persons, `timeIntervalInMilliseconds` might be 60,000, which is a 1-minute interval.
-
-### Step 3.2: Small-scale buffer
-
-Persons or faces in a LargePersonGroup or a LargeFaceList are searchable only after being trained. In a dynamic scenario, new persons or faces are constantly added and must be immediately searchable, yet training might take longer than desired.
-
-To mitigate this problem, use an extra small-scale LargePersonGroup or LargeFaceList as a buffer only for the newly added entries. Because of its smaller size, this buffer trains quickly, so new entries become searchable almost immediately. Use this buffer in combination with the master LargePersonGroup or LargeFaceList by running the master training on a sparser schedule, for example in the middle of the night or once a day.
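-
-The dual-query step from the workflow below might look like the following sketch, which identifies a detected face against both collections and merges the candidates. The group IDs and `detectedFaceId` are hypothetical, and an authenticated `faceClient` is assumed.
-
-```csharp
-const string MasterGroupId = "master-largepersongroup";
-const string BufferGroupId = "buffer-largepersongroup";
-
-// detectedFaceId comes from a prior Face - Detect call.
-var faceIds = new List<Guid> { detectedFaceId };
-
-// Query the master and the buffer collections, then merge the candidate lists.
-var masterResults = await faceClient.Face.IdentifyAsync(faceIds, largePersonGroupId: MasterGroupId);
-var bufferResults = await faceClient.Face.IdentifyAsync(faceIds, largePersonGroupId: BufferGroupId);
-
-var allCandidates = masterResults.SelectMany(r => r.Candidates)
-    .Concat(bufferResults.SelectMany(r => r.Candidates))
-    .OrderByDescending(c => c.Confidence)
-    .ToList();
-```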
-
-An example workflow:
-
-1. Create a master LargePersonGroup or LargeFaceList, which is the master collection. Create a buffer LargePersonGroup or LargeFaceList, which is the buffer collection. The buffer collection is only for newly added persons or faces.
-1. Add new persons or faces to both the master collection and the buffer collection.
-1. Only train the buffer collection with a short time interval to ensure that the newly added entries take effect.
-1. Call Identification or FindSimilar against both the master collection and the buffer collection. Merge the results.
-1. When the buffer collection size increases to a threshold or at a system idle time, create a new buffer collection. Trigger the Train operation on the master collection.
-1. Delete the old buffer collection after the Train operation finishes on the master collection.
-
-### Step 3.3: Standalone training
-
-If a relatively long latency is acceptable, it isn't necessary to trigger the Train operation right after you add new data. Instead, the Train operation can be split from the main logic and triggered regularly. This strategy is suitable for dynamic scenarios with acceptable latency. It can be applied to static scenarios to further reduce the Train frequency.
-
-Suppose there's a `TrainLargePersonGroup` function similar to `TrainLargeFaceList`. A typical implementation of the standalone training on a LargePersonGroup by invoking the [`Timer`](/dotnet/api/system.timers.timer) class in `System.Timers` is:
-
-```csharp
-private static void Main()
-{
- // Create a LargePersonGroup.
- const string LargePersonGroupId = "mylargepersongroupid_001";
- const string LargePersonGroupName = "MyLargePersonGroupDisplayName";
- faceClient.LargePersonGroup.CreateAsync(LargePersonGroupId, LargePersonGroupName).Wait();
-
- // Set up standalone training at regular intervals.
- const int TimeIntervalForStatus = 1000 * 60; // 1-minute interval for getting training status.
- const double TimeIntervalForTrain = 1000 * 60 * 60; // 1-hour interval for training.
- var trainTimer = new Timer(TimeIntervalForTrain);
- trainTimer.Elapsed += (sender, args) => TrainTimerOnElapsed(LargePersonGroupId, TimeIntervalForStatus);
- trainTimer.AutoReset = true;
- trainTimer.Enabled = true;
-
- // Other operations like creating persons, adding faces, and identification, except for Train.
- // ...
-}
-
-private static void TrainTimerOnElapsed(string largePersonGroupId, int timeIntervalInMilliseconds)
-{
- TrainLargePersonGroup(largePersonGroupId, timeIntervalInMilliseconds).Wait();
-}
-```
-
-For more information about data management and identification-related implementations, see [Add faces](add-faces.md).
-
-## Summary
-
-In this guide, you learned how to migrate the existing PersonGroup or FaceList code, not data, to the LargePersonGroup or LargeFaceList:
-
-- LargePersonGroup and LargeFaceList work similarly to PersonGroup and FaceList, except that LargeFaceList requires the Train operation.
-- Choose an appropriate Train strategy for dynamic data updates to large-scale data sets.
-
-## Next steps
-
-Follow a how-to guide to learn how to add faces to a PersonGroup or write a script to do the Identify operation on a PersonGroup.
-
-- [Add faces](add-faces.md)
-- [Face client library quickstart](../quickstarts-sdk/identity-client-library.md)
cognitive-services Use Persondirectory https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Computer-vision/how-to/use-persondirectory.md
- Title: "Example: Use the PersonDirectory data structure - Face"-
-description: Learn how to use the PersonDirectory data structure to store face and person data at greater capacity and with other new features.
------- Previously updated : 07/20/2022----
-# Use the PersonDirectory data structure (preview)
--
-To perform face recognition operations such as Identify and Find Similar, Face API customers need to create an assorted list of **Person** objects. **PersonDirectory** is a data structure in Public Preview that contains unique IDs, optional name strings, and optional user metadata strings for each **Person** identity added to the directory. Follow this guide to learn how to do basic tasks with **PersonDirectory**.
-
-## Advantages of PersonDirectory
-
-Currently, the Face API offers the **LargePersonGroup** structure, which has similar functionality but is limited to 1 million identities. The **PersonDirectory** structure can scale up to 75 million identities.
-
-Another major difference between **PersonDirectory** and previous data structures is that you'll no longer need to make any Train API calls after adding faces to a **Person** object&mdash;the update process happens automatically.
-
-## Prerequisites
-* Azure subscription - [Create one for free](https://azure.microsoft.com/free/cognitive-services/).
-* Once you have your Azure subscription, [create a Face resource](https://portal.azure.com/#create/Microsoft.CognitiveServicesFace) in the Azure portal to get your key and endpoint. After it deploys, select **Go to resource**.
- * You'll need the key and endpoint from the resource you create to connect your application to the Face API. You'll paste your key and endpoint into the code below.
- * You can use the free pricing tier (F0) to try the service, and upgrade later to a paid tier for production.
-
-## Add Persons to the PersonDirectory
-
-**Persons** are the base enrollment units in the **PersonDirectory**. Once you add a **Person** to the directory, you can add up to 248 face images to that **Person**, per recognition model. Then you can identify faces against them using varying scopes.
-
-### Create the Person
-To create a **Person**, you need to call the **CreatePerson** API and provide a name or userData property value.
-
-```csharp
-using Newtonsoft.Json;
-using System;
-using System.Collections.Generic;
-using System.Linq;
-using System.Net.Http;
-using System.Net.Http.Headers;
-using System.Text;
-using System.Threading.Tasks;
-
-var client = new HttpClient();
-
-// Request headers
-client.DefaultRequestHeaders.Add("Ocp-Apim-Subscription-Key", "{subscription key}");
-
-var addPersonUri = "https://{endpoint}/face/v1.0-preview/persons";
-
-HttpResponseMessage response;
-
-// Request body
-var body = new Dictionary<string, object>();
-body.Add("name", "Example Person");
-body.Add("userData", "User defined data");
-byte[] byteData = Encoding.UTF8.GetBytes(JsonConvert.SerializeObject(body));
-
-using (var content = new ByteArrayContent(byteData))
-{
- content.Headers.ContentType = new MediaTypeHeaderValue("application/json");
- response = await client.PostAsync(addPersonUri, content);
-}
-```
-
-The CreatePerson call will return a generated ID for the **Person** and an operation location. The **Person** data will be processed asynchronously, so you use the operation location to fetch the results.
-
-### Wait for asynchronous operation completion
-You'll need to query the async operation status using the returned operation location string to check the progress.
-
-First, you should define a data model like the following to handle the status response.
-
-```csharp
-using System.Runtime.Serialization;
-
-[Serializable]
-public class AsyncStatus
-{
- [DataMember(Name = "status")]
- public string Status { get; set; }
-
- [DataMember(Name = "createdTime")]
- public DateTime CreatedTime { get; set; }
-
- [DataMember(Name = "lastActionTime")]
- public DateTime? LastActionTime { get; set; }
-
- [DataMember(Name = "finishedTime", EmitDefaultValue = false)]
- public DateTime? FinishedTime { get; set; }
-
- [DataMember(Name = "resourceLocation", EmitDefaultValue = false)]
- public string ResourceLocation { get; set; }
-
- [DataMember(Name = "message", EmitDefaultValue = false)]
- public string Message { get; set; }
-}
-```
-
-Using the HttpResponseMessage from above, you can then poll the URL and wait for results.
-
-```csharp
-string operationLocation = response.Headers.GetValues("Operation-Location").FirstOrDefault();
-
-Stopwatch s = Stopwatch.StartNew();
-string status = "notstarted";
-do
-{
- // Wait briefly before polling the operation status again.
- if (status == "running" || status == "notstarted")
- {
- await Task.Delay(500);
- }
-
- var operationResponseMessage = await client.GetAsync(operationLocation);
-
- var asyncOperationObj = JsonConvert.DeserializeObject<AsyncStatus>(await operationResponseMessage.Content.ReadAsStringAsync());
- status = asyncOperationObj.Status;
-
-} while ((status == "running" || status == "notstarted") && s.Elapsed < TimeSpan.FromSeconds(30));
-```
--
-Once the status returns as "succeeded", the **Person** object is considered added to the directory.
-
-> [!NOTE]
-> The asynchronous operation from the Create **Person** call doesn't have to show "succeeded" status before faces can be added to it, but it does need to be completed before the **Person** can be added to a **DynamicPersonGroup** (see Create and update a **DynamicPersonGroup** below) or compared during an Identify call. Verify calls will work immediately after faces are successfully added to the **Person**.
--
-### Add faces to Persons
-
-Once you have the **Person** ID from the Create Person call, you can add up to 248 face images to a **Person** per recognition model. Specify the recognition model (and optionally the detection model) to use in the call, as data under each recognition model will be processed separately inside the **PersonDirectory**.
-
-The currently supported recognition models are:
-* `Recognition_02`
-* `Recognition_03`
-* `Recognition_04`
-
-Additionally, if the image contains multiple faces, you'll need to specify the rectangle bounding box for the face that is the intended target. The following code adds faces to a **Person** object.
-
-```csharp
-var client = new HttpClient();
-
-// Request headers
-client.DefaultRequestHeaders.Add("Ocp-Apim-Subscription-Key", "{subscription key}");
-
-// Optional query strings for more fine grained face control
-var queryString = "userData={userDefinedData}&targetFace={left,top,width,height}&detectionModel={detectionModel}";
-var uri = "https://{endpoint}/face/v1.0-preview/persons/{personId}/recognitionModels/{recognitionModel}/persistedFaces?" + queryString;
-
-HttpResponseMessage response;
-
-// Request body
-var body = new Dictionary<string, object>();
-body.Add("url", "{image url}");
-byte[] byteData = Encoding.UTF8.GetBytes(JsonConvert.SerializeObject(body));
-
-using (var content = new ByteArrayContent(byteData))
-{
- content.Headers.ContentType = new MediaTypeHeaderValue("application/json");
- response = await client.PostAsync(uri, content);
-}
-```
-
-After the Add Faces call, the face data will be processed asynchronously, and you'll need to wait for the success of the operation in the same manner as before.
-
-When the operation for the face addition finishes, the data will be ready for use in Identify calls.
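-
-The following sketch shows one way to do that polling, reusing the `AsyncStatus` model defined earlier and the `response` returned by the Add Faces call; the 30-second timeout is an arbitrary choice for illustration.
-
-```csharp
-// Sketch: poll the Add Faces operation until it completes (reuses the AsyncStatus model above).
-string faceOperationLocation = response.Headers.GetValues("Operation-Location").FirstOrDefault();
-
-string faceOpStatus = "notstarted";
-var timer = System.Diagnostics.Stopwatch.StartNew();
-do
-{
-    await Task.Delay(500);
-    var opResponse = await client.GetAsync(faceOperationLocation);
-    var op = JsonConvert.DeserializeObject<AsyncStatus>(await opResponse.Content.ReadAsStringAsync());
-    faceOpStatus = op.Status;
-} while ((faceOpStatus == "running" || faceOpStatus == "notstarted") && timer.Elapsed < TimeSpan.FromSeconds(30));
-```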
-
-## Create and update a **DynamicPersonGroup**
-
-**DynamicPersonGroups** are collections of references to **Person** objects within a **PersonDirectory**; they're used to create subsets of the directory. A common use is to reduce false positives and increase accuracy in an Identify operation by limiting the scope to just the **Person** objects you expect to match. Practical use cases include controlling access to a specific building within a larger campus or organization. The organization directory may contain 5 million individuals, but you only need to search a specific 800 people for a particular building, so you would create a **DynamicPersonGroup** containing those specific individuals.
-
-If you've used a **PersonGroup** before, take note of two major differences:
-* Each **Person** inside a **DynamicPersonGroup** is a reference to the actual **Person** in the **PersonDirectory**, meaning that it's not necessary to recreate a **Person** in each group.
-* As mentioned in previous sections, there's no need to make Train calls, as the face data is processed at the Directory level automatically.
-
-### Create the group
-
-To create a **DynamicPersonGroup**, you need to provide a group ID made up of alphanumeric or dash characters. This ID functions as the group's unique identifier in all subsequent operations.
-
-There are two ways to initialize a group collection. You can create an empty group initially, and populate it later:
-
-```csharp
-var client = new HttpClient();
-
-// Request headers
-client.DefaultRequestHeaders.Add("Ocp-Apim-Subscription-Key", "{subscription key}");
-
-var uri = "https://{endpoint}/face/v1.0-preview/dynamicpersongroups/{dynamicPersonGroupId}";
-
-HttpResponseMessage response;
-
-// Request body
-var body = new Dictionary<string, object>();
-body.Add("name", "Example DynamicPersonGroup");
-body.Add("userData", "User defined data");
-byte[] byteData = Encoding.UTF8.GetBytes(JsonConvert.SerializeObject(body));
-
-using (var content = new ByteArrayContent(byteData))
-{
- content.Headers.ContentType = new MediaTypeHeaderValue("application/json");
- response = await client.PutAsync(uri, content);
-}
-```
-
-This process is immediate and there's no need to wait for any asynchronous operations to succeed.
-
-Alternatively, you can create the group with a set of **Person** references from the beginning by providing their IDs in the _addPersonIds_ argument:
-
-```csharp
-var client = new HttpClient();
-
-// Request headers
-client.DefaultRequestHeaders.Add("Ocp-Apim-Subscription-Key", "{subscription key}");
-
-var uri = "https://{endpoint}/face/v1.0-preview/dynamicpersongroups/{dynamicPersonGroupId}";
-
-HttpResponseMessage response;
-
-// Request body
-var body = new Dictionary<string, object>();
-body.Add("name", "Example DynamicPersonGroup");
-body.Add("userData", "User defined data");
-body.Add("addPersonIds", new List<string>{"{guid1}", "{guid2}", …});
-byte[] byteData = Encoding.UTF8.GetBytes(JsonConvert.SerializeObject(body));
-
-using (var content = new ByteArrayContent(byteData))
-{
- content.Headers.ContentType = new MediaTypeHeaderValue("application/json");
- response = await client.PutAsync(uri, content);
-
- // Async operation location to query the completion status from
-    var operationLocation = response.Headers.GetValues("Operation-Location").FirstOrDefault();
-}
-```
-
-> [!NOTE]
-> As soon as the call returns, the created **DynamicPersonGroup** will be ready to use in an Identify call, with any **Person** references provided in the process. The completion status of the returned operation ID, on the other hand, indicates the update status of the person-to-group relationship.
-
-### Update the DynamicPersonGroup
-
-After the initial creation, you can add and remove **Person** references from the **DynamicPersonGroup** with the Update Dynamic Person Group API. To add **Person** objects to the group, list the **Person** IDs in the _addPersonIds_ argument. To remove **Person** objects, list them in the _removePersonIds_ argument. Both adding and removing can be performed in a single call:
-
-```csharp
-var client = new HttpClient();
-
-// Request headers
-client.DefaultRequestHeaders.Add("Ocp-Apim-Subscription-Key", "{subscription key}");
-
-var uri = "https://{endpoint}/face/v1.0-preview/dynamicpersongroups/{dynamicPersonGroupId}";
-
-HttpResponseMessage response;
-
-// Request body
-var body = new Dictionary<string, object>();
-body.Add("name", "Example Dynamic Person Group updated");
-body.Add("userData", "User defined data updated");
-body.Add("addPersonIds", new List<string>{"{guid1}", "{guid2}", …});
-body.Add("removePersonIds", new List<string>{"{guid1}", "{guid2}", …});
-byte[] byteData = Encoding.UTF8.GetBytes(JsonConvert.SerializeObject(body));
-
-using (var content = new ByteArrayContent(byteData))
-{
- content.Headers.ContentType = new MediaTypeHeaderValue("application/json");
- response = await client.PatchAsync(uri, content);
-
- // Async operation location to query the completion status from
-    var operationLocation = response.Headers.GetValues("Operation-Location").FirstOrDefault();
-}
-```
-
-Once the call returns, the updates to the collection will be reflected when the group is queried. As with the creation API, the returned operation indicates the update status of person-to-group relationship for any **Person** that's involved in the update. You don't need to wait for the completion of the operation before making further Update calls to the group.
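-
-As a sketch of how you might confirm the membership changes, the following call lists the **Person** references in the group. The `/persons` path segment is an assumption about the preview API surface, so check the current reference documentation for the exact route.
-
-```csharp
-// Sketch: list the Person references currently in the DynamicPersonGroup.
-// Assumption: a GET endpoint of the form .../dynamicpersongroups/{id}/persons exists in the preview API.
-var listUri = "https://{endpoint}/face/v1.0-preview/dynamicpersongroups/{dynamicPersonGroupId}/persons";
-var listResponse = await client.GetAsync(listUri);
-string membersJson = await listResponse.Content.ReadAsStringAsync();
-```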
-
-## Identify faces in a PersonDirectory
-
-The most common way to use face data in a **PersonDirectory** is to compare the enrolled **Person** objects against a given face and identify the most likely candidate it belongs to. Multiple faces can be provided in the request, and each will receive its own set of comparison results in the response.
-
-In **PersonDirectory**, there are three types of scopes each face can be identified against:
-
-### Scenario 1: Identify against a DynamicPersonGroup
-
-Specifying the _dynamicPersonGroupId_ property in the request compares the face against every **Person** referenced in the group. Only a single **DynamicPersonGroup** can be identified against in a call.
-
-```csharp
-var client = new HttpClient();
-
-// Request headers
-client.DefaultRequestHeaders.Add("Ocp-Apim-Subscription-Key", "{subscription key}");
-
-var uri = "https://{endpoint}/face/v1.0-preview/identify";
-
-HttpResponseMessage response;
-
-// Request body
-var body = new Dictionary<string, object>();
-body.Add("faceIds", new List<string>{"{guid1}", "{guid2}", …});
-body.Add("dynamicPersonGroupId", "{dynamicPersonGroupIdToIdentifyIn}");
-byte[] byteData = Encoding.UTF8.GetBytes(JsonConvert.SerializeObject(body));
-
-using (var content = new ByteArrayContent(byteData))
-{
- content.Headers.ContentType = new MediaTypeHeaderValue("application/json");
- response = await client.PostAsync(uri, content);
-}
-```
-
-### Scenario 2: Identify against a specific list of persons
-
-You can also specify a list of **Person** IDs in the _personIds_ property to compare the face against each of them.
-
-```csharp
-var client = new HttpClient();
-
-// Request headers
-client.DefaultRequestHeaders.Add("Ocp-Apim-Subscription-Key", "{subscription key}");
-
-var uri = "https://{endpoint}/face/v1.0-preview/identify";
-
-HttpResponseMessage response;
-
-// Request body
-var body = new Dictionary<string, object>();
-body.Add("faceIds", new List<string>{"{guid1}", "{guid2}", …});
-body.Add("personIds", new List<string>{"{guid1}", "{guid2}", …});
-byte[] byteData = Encoding.UTF8.GetBytes(JsonConvert.SerializeObject(body));
-
-using (var content = new ByteArrayContent(byteData))
-{
- content.Headers.ContentType = new MediaTypeHeaderValue("application/json");
- response = await client.PostAsync(uri, content);
-}
-```
-
-### Scenario 3: Identify against the entire **PersonDirectory**
-
-Providing a single asterisk in the _personIds_ property in the request compares the face against every single **Person** enrolled in the **PersonDirectory**.
-
-
-```csharp
-var client = new HttpClient();
-
-// Request headers
-client.DefaultRequestHeaders.Add("Ocp-Apim-Subscription-Key", "{subscription key}");
-
-var uri = "https://{endpoint}/face/v1.0-preview/identify";
-
-HttpResponseMessage response;
-
-// Request body
-var body = new Dictionary<string, object>();
-body.Add("faceIds", new List<string>{"{guid1}", "{guid2}", …});
-body.Add("personIds", new List<string>{"*"});
-byte[] byteData = Encoding.UTF8.GetBytes(JsonConvert.SerializeObject(body));
-
-using (var content = new ByteArrayContent(byteData))
-{
- content.Headers.ContentType = new MediaTypeHeaderValue("application/json");
- response = await client.PostAsync(uri, content);
-}
-```
-For all three scenarios, the identification only compares the incoming face against faces whose AddPersonFace call has returned with a "succeeded" response.
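-
-Each face in the request gets its own candidate list in the response. A minimal sketch of reading that list is shown below; the exact response shape should be confirmed against the Identify API reference.
-
-```csharp
-// Sketch: read the Identify response, which contains one entry per query face,
-// each with a list of candidate persons and confidence scores.
-var identifyJson = await response.Content.ReadAsStringAsync();
-dynamic results = JsonConvert.DeserializeObject(identifyJson);
-foreach (var result in results)
-{
-    Console.WriteLine($"Face {result.faceId}:");
-    foreach (var candidate in result.candidates)
-    {
-        Console.WriteLine($"  candidate {candidate.personId}, confidence {candidate.confidence}");
-    }
-}
-```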
-
-## Verify faces against persons in the **PersonDirectory**
-
-With a face ID returned from a detection call, you can verify if the face belongs to a specific **Person** enrolled inside the **PersonDirectory**. Specify the **Person** using the _personId_ property.
-
-```csharp
-var client = new HttpClient();
-
-// Request headers
-client.DefaultRequestHeaders.Add("Ocp-Apim-Subscription-Key", "{subscription key}");
-
-var uri = "https://{endpoint}/face/v1.0-preview/verify";
-
-HttpResponseMessage response;
-
-// Request body
-var body = new Dictionary<string, object>();
-body.Add("faceId", "{guid1}");
-body.Add("personId", "{guid1}");
-byte[] byteData = Encoding.UTF8.GetBytes(JsonConvert.SerializeObject(body));
-
-using (var content = new ByteArrayContent(byteData))
-{
- content.Headers.ContentType = new MediaTypeHeaderValue("application/json");
- response = await client.PostAsync(uri, content);
-}
-```
-
-The response will contain a Boolean value indicating whether the service considers the new face to belong to the same **Person**, and a confidence score for the prediction.
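-
-A minimal sketch of reading that result is shown below; the `isIdentical` and `confidence` property names reflect the Verify response schema and should be confirmed against the reference documentation.
-
-```csharp
-// Sketch: read the Verify result (a Boolean plus a confidence score).
-dynamic verifyResult = JsonConvert.DeserializeObject(await response.Content.ReadAsStringAsync());
-bool isIdentical = (bool)verifyResult.isIdentical;
-double confidence = (double)verifyResult.confidence;
-Console.WriteLine($"Same person: {isIdentical} (confidence {confidence})");
-```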
-
-## Next steps
-
-In this guide, you learned how to use the **PersonDirectory** structure to store face and person data for your Face app. Next, learn the best practices for adding your users' face data.
-
-* [Best practices for adding users](../enrollment-overview.md)
cognitive-services Identity Api Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Computer-vision/identity-api-reference.md
- Title: API Reference - Face-
-description: API reference provides information about the Person, LargePersonGroup/PersonGroup, LargeFaceList/FaceList, and Face Algorithms APIs.
------- Previously updated : 02/17/2021---
-# Face API reference list
-
-Azure Face is a cloud-based service that provides algorithms for face detection and recognition. The Face APIs comprise the following categories:
-- Face Algorithm APIs: Cover core functions such as [Detection](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f30395236), [Find Similar](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f30395237), [Verification](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f3039523a), [Identification](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f30395239), and [Group](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f30395238).
-- [FaceList APIs](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f3039524b): Used to manage a FaceList for [Find Similar](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f30395237).
-- [LargePersonGroup Person APIs](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/599adcba3a7b9412a4d53f40): Used to manage LargePersonGroup Person Faces for [Identification](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f30395239).
-- [LargePersonGroup APIs](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/599acdee6ac60f11b48b5a9d): Used to manage a LargePersonGroup dataset for [Identification](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f30395239).
-- [LargeFaceList APIs](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/5a157b68d2de3616c086f2cc): Used to manage a LargeFaceList for [Find Similar](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f30395237).
-- [PersonGroup Person APIs](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f3039523c): Used to manage PersonGroup Person Faces for [Identification](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f30395239).
-- [PersonGroup APIs](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f30395244): Used to manage a PersonGroup dataset for [Identification](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f30395239).
-- [Snapshot APIs](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/snapshot-take): Used to manage a Snapshot for data migration across subscriptions.
cognitive-services Identity Encrypt Data At Rest https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Computer-vision/identity-encrypt-data-at-rest.md
- Title: Face service encryption of data at rest-
-description: Microsoft offers Microsoft-managed encryption keys, and also lets you manage your Cognitive Services subscriptions with your own keys, called customer-managed keys (CMK). This article covers data encryption at rest for Face, and how to enable and manage CMK.
------ Previously updated : 08/28/2020--
-#Customer intent: As a user of the Face service, I want to learn how encryption at rest works.
--
-# Face service encryption of data at rest
-
-The Face service automatically encrypts your data when persisted to the cloud. The Face service encryption protects your data and helps you to meet your organizational security and compliance commitments.
--
-> [!IMPORTANT]
-> Customer-managed keys are only available on the E0 pricing tier. To request the ability to use customer-managed keys, fill out and submit the [Face Service Customer-Managed Key Request Form](https://aka.ms/cogsvc-cmk). It will take approximately 3-5 business days to hear back on the status of your request. Depending on demand, you may be placed in a queue and approved as space becomes available. Once approved for using CMK with the Face service, you will need to create a new Face resource and select E0 as the Pricing Tier. Once your Face resource with the E0 pricing tier is created, you can use Azure Key Vault to set up your managed identity.
--
-## Next steps
-
-* For a full list of services that support CMK, see [Customer-Managed Keys for Cognitive Services](../encryption/cognitive-services-encryption-keys-portal.md)
-* [What is Azure Key Vault](../../key-vault/general/overview.md)?
-* [Cognitive Services Customer-Managed Key Request Form](https://aka.ms/cogsvc-cmk)
cognitive-services Intro To Spatial Analysis Public Preview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Computer-vision/intro-to-spatial-analysis-public-preview.md
- Title: What is Spatial Analysis?-
-description: This document explains the basic concepts and features of the Azure Spatial Analysis container.
------- Previously updated : 12/27/2022---
-# What is Spatial Analysis?
-
-You can use Computer Vision Spatial Analysis to detect the presence and movements of people in video. Ingest video streams from cameras, extract insights, and generate events to be used by other systems. The service can do things like count the number of people entering a space or measure compliance with face mask and social distancing guidelines. By processing video streams from physical spaces, you're able to learn how people use them and maximize the space's value to your organization.
-
-Try out the capabilities of Spatial Analysis quickly and easily in your browser using Vision Studio.
-
-> [!div class="nextstepaction"]
-> [Try Vision Studio](https://portal.vision.cognitive.azure.com/)
-
-<!--This documentation contains the following types of articles:
-* The [quickstarts](./quickstarts-sdk/analyze-image-client-library.md) are step-by-step instructions that let you make calls to the service and get results in a short period of time.
-* The [how-to guides](./how-to/call-analyze-image.md) contain instructions for using the service in more specific or customized ways.
-* The [conceptual articles]() provide in-depth explanations of the service's functionality and features.
-* The [tutorials](./tutorials/storage-lab-tutorial.md) are longer guides that show you how to use this service as a component in broader business solutions.-->
-
-## What it does
-Spatial Analysis ingests video and then detects people in it. After people are detected, the system tracks them as they move around over time and generates events as they interact with regions of interest. All operations give insights from a single camera's field of view.
-
-### People counting
-This operation counts the number of people in a specific zone over time using the PersonCount operation. It generates an independent count for each frame processed without attempting to track people across frames. This operation can be used to estimate the number of people in a space or generate an alert when a person appears.
-
-![Spatial Analysis counts the number of people in the camera's field of view](https://user-images.githubusercontent.com/11428131/139924111-58637f2e-f2f6-42d8-8812-ab42fece92b4.gif)
-
-### Entrance Counting
-This feature monitors how long people stay in an area or when they enter through a doorway. This monitoring can be done using the PersonCrossingPolygon or PersonCrossingLine operations. In retail scenarios, these operations can be used to measure wait times for a checkout line or engagement at a display. Also, these operations could measure foot traffic in a lobby or a specific floor in other commercial building scenarios.
-
-![Video frames of people moving in and out of a bordered space, with rectangles drawn around them.](https://user-images.githubusercontent.com/11428131/137016574-0d180d9b-fb9a-42a9-94b7-fbc0dbc18560.gif)
-
-### Social distancing and face mask detection
-This feature analyzes how well people follow social distancing requirements in a space. The system uses the PersonDistance operation to automatically calibrate itself as people walk around in the space. Then it identifies when people violate a specific distance threshold (6 ft. or 10 ft.).
-
-![Spatial Analysis visualizes social distance violation events showing lines between people showing the distance](https://user-images.githubusercontent.com/11428131/139924062-b5e10c0f-3cf8-4ff1-bb58-478571c022d7.gif)
-
-Spatial Analysis can also be configured to detect if a person is wearing a protective face covering such as a mask. A mask classifier can be enabled for the PersonCount, PersonCrossingLine, and PersonCrossingPolygon operations by configuring the `ENABLE_FACE_MASK_CLASSIFIER` parameter.
-
-![Spatial Analysis classifies whether people have facemasks in an elevator](https://user-images.githubusercontent.com/11428131/137015842-ce524f52-3ac4-4e42-9067-25d19b395803.png)
-
-## Get started
-
-Follow the [quickstart](spatial-analysis-container.md) to set up the Spatial Analysis container and begin analyzing video.
-
-## Responsible use of Spatial Analysis technology
-
-To learn how to use Spatial Analysis technology responsibly, see the [transparency note](/legal/cognitive-services/computer-vision/transparency-note-spatial-analysis?context=%2fazure%2fcognitive-services%2fComputer-vision%2fcontext%2fcontext). Microsoft's transparency notes help you understand how our AI technology works and the choices system owners can make that influence system performance and behavior. They focus on the importance of thinking about the whole system including the technology, people, and environment.
-
-## Next steps
-
-> [!div class="nextstepaction"]
-> [Quickstart: Spatial Analysis container](spatial-analysis-container.md)
cognitive-services Language Support https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Computer-vision/language-support.md
- Title: Language support - Computer Vision-
-description: This article provides a list of natural languages supported by Computer Vision features; OCR, Image analysis.
------- Previously updated : 12/27/2022---
-# Language support for Computer Vision
-
-Some capabilities of Computer Vision support multiple languages; any capabilities not mentioned here only support English.
-
-## Optical Character Recognition (OCR)
-
-The Computer Vision [Read API](./overview-ocr.md) supports many languages. The `Read` API can extract text from images and documents with mixed languages, including from the same text line, without requiring a language parameter.
-
-> [!NOTE]
-> **Language code optional**
->
-> `Read` OCR's deep-learning-based universal models extract all multi-lingual text in your documents, including text lines with mixed languages, and do not require specifying a language code. Do not provide the language code as the parameter unless you are sure about the language and want to force the service to apply only the relevant model. Otherwise, the service may return incomplete and incorrect text.
-
-See [How to specify the `Read` model](./how-to/call-read-api.md#determine-how-to-process-the-data-optional) to use the new languages.
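-
-As a sketch of what that looks like in practice, the following v3.2 `Read` call omits the `language` parameter entirely and lets the universal model pick up the languages in the image; the endpoint path should be confirmed against the Read API reference.
-
-```csharp
-// Sketch: submit an image to the v3.2 Read API without specifying a language code.
-using System.Collections.Generic;
-using System.Linq;
-using System.Net.Http;
-using System.Text;
-using Newtonsoft.Json;
-
-var client = new HttpClient();
-client.DefaultRequestHeaders.Add("Ocp-Apim-Subscription-Key", "{subscription key}");
-
-var readUri = "https://{endpoint}/vision/v3.2/read/analyze"; // no ?language= parameter
-var body = new Dictionary<string, object> { { "url", "{image url}" } };
-var content = new StringContent(JsonConvert.SerializeObject(body), Encoding.UTF8, "application/json");
-var response = await client.PostAsync(readUri, content);
-
-// The result is fetched later from the URL in the Operation-Location header.
-string operationLocation = response.Headers.GetValues("Operation-Location").FirstOrDefault();
-```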
-
-### Handwritten text
-
-The following table lists the OCR supported languages for handwritten text by the most recent `Read` GA model.
-
-|Language| Language code (optional) | Language| Language code (optional) |
-|:--|:-:|:--|:-:|
-|English|`en`|Japanese |`ja`|
-|Chinese Simplified |`zh-Hans`|Korean|`ko`|
-|French |`fr`|Portuguese |`pt`|
-|German |`de`|Spanish |`es`|
-|Italian |`it`|
-
-### Print text
-
-The following table lists the OCR supported languages for print text by the most recent `Read` GA model.
-
-|Language| Code (optional) |Language| Code (optional) |
-|:--|:-:|:--|:-:|
-|Afrikaans|`af`|Khasi | `kha` |
-|Albanian |`sq`|K'iche' | `quc` |
-|Angika (Devanagiri) | `anp`| Korean | `ko` |
-|Arabic | `ar` | Korku | `kfq`|
-|Asturian |`ast`| Koryak | `kpy`|
-|Awadhi-Hindi (Devanagiri) | `awa`| Kosraean | `kos`|
-|Azerbaijani (Latin) | `az`| Kumyk (Cyrillic) | `kum`|
-|Bagheli | `bfy`| Kurdish (Arabic) | `ku-arab`|
-|Basque |`eu`| Kurdish (Latin) | `ku-latn`
-|Belarusian (Cyrillic) | `be`, `be-cyrl`|Kurukh (Devanagiri) | `kru`|
-|Belarusian (Latin) | `be`, `be-latn`| Kyrgyz (Cyrillic) | `ky`
-|Bhojpuri-Hindi (Devanagiri) | `bho`| Lakota | `lkt` |
-|Bislama |`bi`| Latin | `la` |
-|Bodo (Devanagiri) | `brx`| Lithuanian | `lt` |
-|Bosnian (Latin) | `bs`| Lower Sorbian | `dsb` |
-|Brajbha | `bra`|Lule Sami | `smj`|
-|Breton |`br`|Luxembourgish | `lb` |
-|Bulgarian | `bg`|Mahasu Pahari (Devanagiri) | `bfz`|
-|Bundeli | `bns`|Malay (Latin) | `ms` |
-|Buryat (Cyrillic) | `bua`|Maltese | `mt`
-|Catalan |`ca`|Malto (Devanagiri) | `kmj`
-|Cebuano |`ceb`|Manx | `gv` |
-|Chamling | `rab`|Maori | `mi`|
-|Chamorro |`ch`|Marathi | `mr`|
-|Chhattisgarhi (Devanagiri)| `hne`| Mongolian (Cyrillic) | `mn`|
-|Chinese Simplified | `zh-Hans`|Montenegrin (Cyrillic) | `cnr-cyrl`|
-|Chinese Traditional | `zh-Hant`|Montenegrin (Latin) | `cnr-latn`|
-|Cornish |`kw`|Neapolitan | `nap` |
-|Corsican |`co`|Nepali | `ne`|
-|Crimean Tatar (Latin)|`crh`|Niuean | `niu`|
-|Croatian | `hr`|Nogay | `nog`
-|Czech | `cs` |Northern Sami (Latin) | `sme`|
-|Danish | `da` |Norwegian | `no` |
-|Dari | `prs`|Occitan | `oc` |
-|Dhimal (Devanagiri) | `dhi`| Ossetic | `os`|
-|Dogri (Devanagiri) | `doi`|Pashto | `ps`|
-|Dutch | `nl` |Persian | `fa`|
-|English | `en` |Polish | `pl` |
-|Erzya (Cyrillic) | `myv`|Portuguese | `pt` |
-|Estonian |`et`|Punjabi (Arabic) | `pa`|
-|Faroese | `fo`|Ripuarian | `ksh`|
-|Fijian |`fj`|Romanian | `ro` |
-|Filipino |`fil`|Romansh | `rm` |
-|Finnish | `fi` | Russian | `ru` |
-|French | `fr` |Sadri (Devanagiri) | `sck` |
-|Friulian | `fur` | Samoan (Latin) | `sm`
-|Gagauz (Latin) | `gag`|Sanskrit (Devanagari) | `sa`|
-|Galician | `gl` |Santali(Devanagiri) | `sat` |
-|German | `de` | Scots | `sco` |
-|Gilbertese | `gil` | Scottish Gaelic | `gd` |
-|Gondi (Devanagiri) | `gon`| Serbian (Latin) | `sr`, `sr-latn`|
-|Greenlandic | `kl` | Sherpa (Devanagiri) | `xsr` |
-|Gurung (Devanagiri) | `gvr`| Sirmauri (Devanagiri) | `srx`|
-|Haitian Creole | `ht` | Skolt Sami | `sms` |
-|Halbi (Devanagiri) | `hlb`| Slovak | `sk`|
-|Hani | `hni` | Slovenian | `sl` |
-|Haryanvi | `bgc`|Somali (Arabic) | `so`|
-|Hawaiian | `haw`|Southern Sami | `sma`
-|Hindi | `hi`|Spanish | `es` |
-|Hmong Daw (Latin)| `mww` | Swahili (Latin) | `sw` |
-|Ho(Devanagiri) | `hoc`|Swedish | `sv` |
-|Hungarian | `hu` |Tajik (Cyrillic) | `tg` |
-|Icelandic | `is`| Tatar (Latin) | `tt` |
-|Inari Sami | `smn`|Tetum | `tet` |
-|Indonesian | `id` | Thangmi | `thf` |
-|Interlingua | `ia` |Tongan | `to`|
-|Inuktitut (Latin) | `iu` | Turkish | `tr` |
-|Irish | `ga` |Turkmen (Latin) | `tk`|
-|Italian | `it` |Tuvan | `tyv`|
-|Japanese | `ja` |Upper Sorbian | `hsb` |
-|Jaunsari (Devanagiri) | `Jns`|Urdu | `ur`|
-|Javanese | `jv` |Uyghur (Arabic) | `ug`|
-|Kabuverdianu | `kea` |Uzbek (Arabic) | `uz-arab`|
-|Kachin (Latin) | `kac` |Uzbek (Cyrillic) | `uz-cyrl`|
-|Kangri (Devanagiri) | `xnr`|Uzbek (Latin) | `uz` |
-|Karachay-Balkar | `krc`|Volapük | `vo` |
-|Kara-Kalpak (Cyrillic) | `kaa-cyrl`|Walser | `wae` |
-|Kara-Kalpak (Latin) | `kaa` |Welsh | `cy` |
-|Kashubian | `csb` |Western Frisian | `fy` |
-|Kazakh (Cyrillic) | `kk-cyrl`|Yucatec Maya | `yua` |
-|Kazakh (Latin) | `kk-latn`|Zhuang | `za` |
-|Khaling | `klr`|Zulu | `zu` |
-
-## Image analysis
-
-Some features of the [Analyze - Image](https://westcentralus.dev.cognitive.microsoft.com/docs/services/computer-vision-v3-1-g) API support results in multiple languages; see the linked reference for a list of all the actions you can do with image analysis. Languages for tagging are only available in API version 3.2 or later.
-
-|Language | Language code | Categories | Tags | Description | Adult | Brands | Color | Faces | ImageType | Objects | Celebrities | Landmarks | Captions/Dense captions|
-|:--|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:--:|
-|Arabic |`ar`| | ✅| |||||| ||||
-|Azeri (Azerbaijani) |`az`| | ✅| |||||| ||||
-|Bulgarian |`bg`| | ✅| |||||| ||||
-|Bosnian Latin |`bs`| | ✅| |||||| ||||
-|Catalan |`ca`| | ✅| |||||| ||||
-|Czech |`cs`| | ✅| |||||| ||||
-|Welsh |`cy`| | ✅| |||||| ||||
-|Danish |`da`| | ✅| |||||| ||||
-|German |`de`| | ✅| |||||| ||||
-|Greek |`el`| | ✅| |||||| ||||
-|English |`en`|✅ | ✅| ✅|✅|✅|✅|✅|✅|✅|✅|✅|✅|
-|Spanish |`es`|✅ | ✅| ✅|||||| |✅|✅||
-|Estonian |`et`| | ✅| |||||| ||||
-|Basque |`eu`| | ✅| |||||| ||||
-|Finnish |`fi`| | ✅| |||||| ||||
-|French |`fr`| | ✅| |||||| ||||
-|Irish |`ga`| | ✅| |||||| ||||
-|Galician |`gl`| | ✅| |||||| ||||
-|Hebrew |`he`| | ✅| |||||| ||||
-|Hindi |`hi`| | ✅| |||||| ||||
-|Croatian |`hr`| | ✅| |||||| ||||
-|Hungarian |`hu`| | ✅| |||||| ||||
-|Indonesian |`id`| | ✅| |||||| ||||
-|Italian |`it`| | ✅| |||||| ||||
-|Japanese |`ja`|✅ | ✅| ✅|||||| |✅|✅||
-|Kazakh |`kk`| | ✅| |||||| ||||
-|Korean |`ko`| | ✅| |||||| ||||
-|Lithuanian |`lt`| | ✅| |||||| ||||
-|Latvian |`lv`| | ✅| |||||| ||||
-|Macedonian |`mk`| | ✅| |||||| ||||
-|Malay Malaysia |`ms`| | ✅| |||||| ||||
-|Norwegian (Bokmal) |`nb`| | ✅| |||||| ||||
-|Dutch |`nl`| | ✅| |||||| ||||
-|Polish |`pl`| | ✅| |||||| ||||
-|Dari |`prs`| | ✅| |||||| ||||
-| Portuguese-Brazil|`pt-BR`| | ✅| |||||| ||||
-| Portuguese-Portugal |`pt`|✅ | ✅| ✅|||||| |✅|✅||
-| Portuguese-Portugal |`pt-PT`| | ✅| |||||| ||||
-|Romanian |`ro`| | ✅| |||||| ||||
-|Russian |`ru`| | ✅| |||||| ||||
-|Slovak |`sk`| | ✅| |||||| ||||
-|Slovenian |`sl`| | ✅| |||||| ||||
-|Serbian - Cyrillic RS |`sr-Cryl`| | ✅| |||||| ||||
-|Serbian - Latin RS |`sr-Latn`| | ✅| |||||| ||||
-|Swedish |`sv`| | ✅| |||||| ||||
-|Thai |`th`| | ✅| |||||| ||||
-|Turkish |`tr`| | ✅| |||||| ||||
-|Ukrainian |`uk`| | ✅| |||||| ||||
-|Vietnamese |`vi`| | ✅| |||||| ||||
-|Chinese Simplified |`zh`|✅ | ✅| ✅|||||| |✅|✅||
-|Chinese Simplified |`zh-Hans`| | ✅| |||||| ||||
-|Chinese Traditional |`zh-Hant`| | ✅| |||||| ||||
cognitive-services Overview Identity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Computer-vision/overview-identity.md
- Title: What is the Azure Face service?-
-description: The Azure Face service provides AI algorithms that you use to detect, recognize, and analyze human faces in images.
------ Previously updated : 07/04/2023--
-keywords: facial recognition, facial recognition software, facial analysis, face matching, face recognition app, face search by image, facial recognition search
-#Customer intent: As the developer of an app that deals with images of humans, I want to learn what the Face service does so I can determine if I should use its features.
--
-# What is the Azure Face service?
-
-> [!WARNING]
-> On June 11, 2020, Microsoft announced that it will not sell facial recognition technology to police departments in the United States until strong regulation, grounded in human rights, has been enacted. As such, customers may not use facial recognition features or functionality included in Azure Services, such as Face or Video Indexer, if a customer is, or is allowing use of such services by or for, a police department in the United States. When you create a new Face resource, you must acknowledge and agree in the Azure Portal that you will not use the service by or for a police department in the United States and that you have reviewed the Responsible AI documentation and will use this service in accordance with it.
--
-The Azure Face service provides AI algorithms that detect, recognize, and analyze human faces in images. Facial recognition software is important in many different scenarios, such as identity verification, touchless access control, and face blurring for privacy.
-
-You can use the Face service through a client library SDK or by calling the REST API directly. Follow the quickstart to get started.
-
-> [!div class="nextstepaction"]
-> [Quickstart](quickstarts-sdk/identity-client-library.md)
-
-Or, you can try out the capabilities of Face service quickly and easily in your browser using Vision Studio.
-
-> [!div class="nextstepaction"]
-> [Try Vision Studio](https://portal.vision.cognitive.azure.com/)
-
-This documentation contains the following types of articles:
-* The [quickstarts](./quickstarts-sdk/identity-client-library.md) are step-by-step instructions that let you make calls to the service and get results in a short period of time.
-* The [how-to guides](./how-to/identity-detect-faces.md) contain instructions for using the service in more specific or customized ways.
-* The [conceptual articles](./concept-face-detection.md) provide in-depth explanations of the service's functionality and features.
-* The [tutorials](./enrollment-overview.md) are longer guides that show you how to use this service as a component in broader business solutions.
-
-For a more structured approach, follow a Training module for Face.
-* [Detect and analyze faces with the Face service](/training/modules/detect-analyze-faces/)
-
-## Example use cases
-
-**Identity verification**: Verify someone's identity against a government-issued ID card like a passport or driver's license or other enrollment image. You can use this verification to grant access to digital or physical services or to recover an account. Specific access scenarios include opening a new account, verifying a worker, or administering an online assessment. Identity verification can be done once when a person is onboarded, and repeated when they access a digital or physical service.
-
-**Touchless access control**: Compared to today's methods like cards or tickets, opt-in face identification enables an enhanced access control experience while reducing the hygiene and security risks from card sharing, loss, or theft. Facial recognition assists the check-in process with a human in the loop for check-ins in airports, stadiums, theme parks, buildings, reception kiosks at offices, hospitals, gyms, clubs, or schools.
-
-**Face redaction**: Redact or blur detected faces of people recorded in a video to protect their privacy.
--
-## Face detection and analysis
-
-Face detection is required as a first step in all the other scenarios. The Detect API detects human faces in an image and returns the rectangle coordinates of their locations. It also returns a unique ID that represents the stored face data. This is used in later operations to identify or verify faces.
-
-Optionally, face detection can extract a set of face-related attributes, such as head pose, age, emotion, facial hair, and glasses. These attributes are general predictions, not actual classifications. Some attributes are useful to ensure that your application is getting high-quality face data when users add themselves to a Face service. For example, your application could advise users to take off their sunglasses if they're wearing sunglasses.
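-
-The following sketch shows one way to call the Detect operation over REST and read back the face IDs and rectangles; the query parameters shown are illustrative, so confirm them against the Detect API reference.
-
-```csharp
-// Sketch: detect faces in an image by URL and print the face IDs and rectangles.
-using System;
-using System.Collections.Generic;
-using System.Net.Http;
-using System.Text;
-using Newtonsoft.Json;
-
-var client = new HttpClient();
-client.DefaultRequestHeaders.Add("Ocp-Apim-Subscription-Key", "{subscription key}");
-
-// returnFaceId is needed when the detected faces will be used in Identify or Verify calls.
-var detectUri = "https://{endpoint}/face/v1.0/detect?returnFaceId=true&detectionModel=detection_03&recognitionModel=recognition_04";
-
-var body = new Dictionary<string, object> { { "url", "{image url}" } };
-var content = new StringContent(JsonConvert.SerializeObject(body), Encoding.UTF8, "application/json");
-var response = await client.PostAsync(detectUri, content);
-
-dynamic faces = JsonConvert.DeserializeObject(await response.Content.ReadAsStringAsync());
-foreach (var face in faces)
-{
-    Console.WriteLine($"faceId {face.faceId}, rectangle {face.faceRectangle.left},{face.faceRectangle.top}");
-}
-```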
--
-For more information on face detection and analysis, see the [Face detection](concept-face-detection.md) concepts article. Also see the [Detect API](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f30395236) reference documentation.
--
-## Identity verification
-
-Modern enterprises and apps can use the Face identification and Face verification operations to verify that a user is who they claim to be.
-
-### Identification
-
-Face identification can address "one-to-many" matching of one face in an image to a set of faces in a secure repository. Match candidates are returned based on how closely their face data matches the query face. This scenario is used in granting building or airport access to a certain group of people or verifying the user of a device.
-
-The following image shows an example of a database named `"myfriends"`. Each group can contain up to 1 million different person objects. Each person object can have up to 248 faces registered.
-
-![A grid with three columns for different people, each with three rows of face images](./media/person.group.clare.jpg)
-
-After you create and train a group, you can do identification against the group with a new detected face. If the face is identified as a person in the group, the person object is returned.
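-
-A minimal sketch of an Identify call against a trained group is shown below, reusing the HttpClient pattern from the previous sketch; the group name is just an example, and the request shape should be checked against the Identify API reference.
-
-```csharp
-// Sketch: identify detected faces against the "myfriends" PersonGroup.
-var identifyBody = new Dictionary<string, object>
-{
-    { "personGroupId", "myfriends" },
-    { "faceIds", new List<string> { "{detected face id}" } },
-    { "maxNumOfCandidatesReturned", 1 }
-};
-var identifyContent = new StringContent(JsonConvert.SerializeObject(identifyBody), Encoding.UTF8, "application/json");
-var identifyResponse = await client.PostAsync("https://{endpoint}/face/v1.0/identify", identifyContent);
-```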
-
-Try out the capabilities of face identification quickly and easily using Vision Studio.
-> [!div class="nextstepaction"]
-> [Try Vision Studio](https://portal.vision.cognitive.azure.com/)
-
-### Verification
-
-The verification operation answers the question, "Do these two faces belong to the same person?".
-
-Verification is also a "one-to-one" matching of a face in an image to a single face from a secure repository or photo to verify that they're the same individual. Verification can be used for Identity Verification, such as a banking app that enables users to open a credit account remotely by taking a new picture of themselves and sending it with a picture of their photo ID.
-
-For more information about identity verification, see the [Facial recognition](concept-face-recognition.md) concepts guide or the [Identify](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f30395239) and [Verify](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f3039523a) API reference documentation.
-
-Try out the capabilities of face verification quickly and easily using Vision Studio.
-> [!div class="nextstepaction"]
-> [Try Vision Studio](https://portal.vision.cognitive.azure.com/)
-
-## Find similar faces
-
-The Find Similar operation does face matching between a target face and a set of candidate faces, finding a smaller set of faces that look similar to the target face. This is useful for doing a face search by image.
-
-The service supports two working modes, **matchPerson** and **matchFace**. The **matchPerson** mode returns similar faces after filtering for the same person by using the [Verify API](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f3039523a). The **matchFace** mode ignores the same-person filter. It returns a list of similar candidate faces that may or may not belong to the same person.
-
-The following example shows the target face:
-
-![A woman smiling](./media/FaceFindSimilar.QueryFace.jpg)
-
-And these images are the candidate faces:
-
-![Five images of people smiling. Images A and B show the same person.](./media/FaceFindSimilar.Candidates.jpg)
-
-To find four similar faces, the **matchPerson** mode returns A and B, which show the same person as the target face. The **matchFace** mode returns A, B, C, and D, which is exactly four candidates, even if some aren't the same person as the target or have low similarity. For more information, see the [Facial recognition](concept-face-recognition.md) concepts guide or the [Find Similar API](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f30395237) reference documentation.
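-
-As a sketch, a Find Similar request posts a target face ID plus candidate face IDs and the desired mode. Reusing the HttpClient pattern from the earlier sketches, it might look like the following; confirm the property names against the Find Similar reference.
-
-```csharp
-// Sketch: find faces similar to a target face among a set of candidate face IDs.
-var findSimilarBody = new Dictionary<string, object>
-{
-    { "faceId", "{target face id}" },
-    { "faceIds", new List<string> { "{candidate face id 1}", "{candidate face id 2}" } },
-    { "maxNumOfCandidatesReturned", 4 },
-    { "mode", "matchPerson" } // or "matchFace" to skip the same-person filter
-};
-var findSimilarContent = new StringContent(JsonConvert.SerializeObject(findSimilarBody), Encoding.UTF8, "application/json");
-var findSimilarResponse = await client.PostAsync("https://{endpoint}/face/v1.0/findsimilars", findSimilarContent);
-```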
-
-## Group faces
-
-The Group operation divides a set of unknown faces into several smaller groups based on similarity. Each group is a disjoint proper subset of the original set of faces. It also returns a single "messyGroup" array that contains the face IDs for which no similarities were found.
-
-All of the faces in a returned group are likely to belong to the same person, but there can be several different groups for a single person. Those groups are differentiated by another factor, such as expression, for example. For more information, see the [Facial recognition](concept-face-recognition.md) concepts guide or the [Group API](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f30395238) reference documentation.
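-
-A sketch of a Group request, again reusing the HttpClient pattern from the earlier sketches, is shown below; the response contains `groups` and `messyGroup` arrays of face IDs.
-
-```csharp
-// Sketch: divide a set of unknown face IDs into groups of similar faces.
-var groupBody = new Dictionary<string, object>
-{
-    { "faceIds", new List<string> { "{face id 1}", "{face id 2}", "{face id 3}" } }
-};
-var groupContent = new StringContent(JsonConvert.SerializeObject(groupBody), Encoding.UTF8, "application/json");
-var groupResponse = await client.PostAsync("https://{endpoint}/face/v1.0/group", groupContent);
-
-dynamic groupResult = JsonConvert.DeserializeObject(await groupResponse.Content.ReadAsStringAsync());
-foreach (var similarGroup in groupResult.groups)
-{
-    Console.WriteLine($"Group of similar faces: {similarGroup}");
-}
-```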
-
-## Data privacy and security
-
-As with all of the Cognitive Services resources, developers who use the Face service must be aware of Microsoft's policies on customer data. For more information, see the [Cognitive Services page](https://www.microsoft.com/trustcenter/cloudservices/cognitiveservices) on the Microsoft Trust Center.
-
-## Next steps
-
-Follow a quickstart to code the basic components of a face recognition app in the language of your choice.
--- [Face quickstart](quickstarts-sdk/identity-client-library.md).
cognitive-services Overview Image Analysis https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Computer-vision/overview-image-analysis.md
- Title: What is Image Analysis?-
-description: The Image Analysis service uses pretrained AI models to extract many different visual features from images.
-------- Previously updated : 07/04/2023-
-keywords: computer vision, computer vision applications, computer vision service
--
-# What is Image Analysis?
-
-The Computer Vision Image Analysis service can extract a wide variety of visual features from your images. For example, it can determine whether an image contains adult content, find specific brands or objects, or find human faces.
-
-The latest version of Image Analysis, 4.0, which is now in public preview, has new features like synchronous OCR and people detection. We recommend you use this version going forward.
-
-You can use Image Analysis through a client library SDK or by calling the [REST API](https://aka.ms/vision-4-0-ref) directly. Follow the [quickstart](quickstarts-sdk/image-analysis-client-library-40.md) to get started.
-
-> [!div class="nextstepaction"]
-> [Quickstart](quickstarts-sdk/image-analysis-client-library-40.md)
-
-Or, you can try out the capabilities of Image Analysis quickly and easily in your browser using Vision Studio.
-
-> [!div class="nextstepaction"]
-> [Try Vision Studio](https://portal.vision.cognitive.azure.com/)
-
-This documentation contains the following types of articles:
-* The [quickstarts](./quickstarts-sdk/image-analysis-client-library.md) are step-by-step instructions that let you make calls to the service and get results in a short period of time.
-* The [how-to guides](./how-to/call-analyze-image.md) contain instructions for using the service in more specific or customized ways.
-* The [conceptual articles](concept-tagging-images.md) provide in-depth explanations of the service's functionality and features.
-* The [tutorials](./tutorials/storage-lab-tutorial.md) are longer guides that show you how to use this service as a component in broader business solutions.
-
-For a more structured approach, follow a Training module for Image Analysis.
-* [Analyze images with the Computer Vision service](/training/modules/analyze-images-computer-vision/)
--
-## Analyze Image
-
-You can analyze images to provide insights about their visual features and characteristics. All of the features in this list are provided by the Analyze Image API. Follow a [quickstart](./quickstarts-sdk/image-analysis-client-library-40.md) to get started.
-
-| Name | Description | Concept page |
-||||
-|**Model customization** (v4.0 preview only)|You can create and train custom models to do image classification or object detection. Bring your own images, label them with custom tags, and Image Analysis trains a model customized for your use case.|[Model customization](./concept-model-customization.md)|
-|**Read text from images** (v4.0 preview only)| Version 4.0 preview of Image Analysis offers the ability to extract readable text from images. Compared with the async Computer Vision 3.2 Read API, the new version offers the familiar Read OCR engine in a unified performance-enhanced synchronous API that makes it easy to get OCR along with other insights in a single API call. |[OCR for images](concept-ocr.md)|
-|**Detect people in images** (v4.0 preview only)|Version 4.0 preview of Image Analysis offers the ability to detect people appearing in images. The bounding box coordinates of each detected person are returned, along with a confidence score. |[People detection](concept-people-detection.md)|
-|**Generate image captions** | Generate a caption of an image in human-readable language, using complete sentences. Computer Vision's algorithms generate captions based on the objects identified in the image. <br/><br/>The version 4.0 image captioning model is a more advanced implementation and works with a wider range of input images. It's only available in the following geographic regions: East US, France Central, Korea Central, North Europe, Southeast Asia, West Europe, West US. <br/><br/>Version 4.0 also lets you use dense captioning, which generates detailed captions for individual objects that are found in the image. The API returns the bounding box coordinates (in pixels) of each object found in the image, plus a caption. You can use this functionality to generate descriptions of separate parts of an image.<br/><br/>:::image type="content" source="Images/description.png" alt-text="Photo of cows with a simple description on the right.":::| [Generate image captions (v3.2)](concept-describing-images.md)<br/>[(v4.0 preview)](concept-describe-images-40.md)|
-|**Detect objects** |Object detection is similar to tagging, but the API returns the bounding box coordinates for each tag applied. For example, if an image contains a dog, cat and person, the Detect operation lists those objects together with their coordinates in the image. You can use this functionality to process further relationships between the objects in an image. It also lets you know when there are multiple instances of the same tag in an image. <br/><br/>:::image type="content" source="Images/detect-objects.png" alt-text="Photo of an office with a rectangle drawn around a laptop.":::| [Detect objects (v3.2)](concept-object-detection.md)<br/>[(v4.0 preview)](concept-object-detection-40.md)
-|**Tag visual features**| Identify and tag visual features in an image, from a set of thousands of recognizable objects, living things, scenery, and actions. When the tags are ambiguous or not common knowledge, the API response provides hints to clarify the context of the tag. Tagging isn't limited to the main subject, such as a person in the foreground, but also includes the setting (indoor or outdoor), furniture, tools, plants, animals, accessories, gadgets, and so on.<br/><br/>:::image type="content" source="Images/tagging.png" alt-text="Photo of a skateboarder with tags listed on the right.":::|[Tag visual features (v3.2)](concept-tagging-images.md)<br/>[(v4.0 preview)](concept-tag-images-40.md)|
-|**Get the area of interest / smart crop** |Analyze the contents of an image to return the coordinates of the *area of interest* that matches a specified aspect ratio. Computer Vision returns the bounding box coordinates of the region, so the calling application can modify the original image as desired. <br/><br/>The version 4.0 smart cropping model is a more advanced implementation and works with a wider range of input images. It's only available in the following geographic regions: East US, France Central, Korea Central, North Europe, Southeast Asia, West Europe, West US. | [Generate a thumbnail (v3.2)](concept-generating-thumbnails.md)<br/>[(v4.0 preview)](concept-generate-thumbnails-40.md)|
-|**Detect brands** (v3.2 only) | Identify commercial brands in images or videos from a database of thousands of global logos. You can use this feature, for example, to discover which brands are most popular on social media or most prevalent in media product placement. |[Detect brands](concept-brand-detection.md)|
-|**Categorize an image** (v3.2 only)|Identify and categorize an entire image, using a [category taxonomy](Category-Taxonomy.md) with parent/child hereditary hierarchies. Categories can be used alone, or with our new tagging models.<br/><br/>Currently, English is the only supported language for tagging and categorizing images. |[Categorize an image](concept-categorizing-images.md)|
-| **Detect faces** (v3.2 only) |Detect faces in an image and provide information about each detected face. Computer Vision returns the coordinates, rectangle, gender, and age for each detected face.<br/><br/>You can also use the dedicated [Face API](./index-identity.yml) for these purposes. It provides more detailed analysis, such as facial identification and pose detection.|[Detect faces](concept-detecting-faces.md)|
-|**Detect image types** (v3.2 only)|Detect characteristics about an image, such as whether an image is a line drawing or the likelihood of whether an image is clip art.| [Detect image types](concept-detecting-image-types.md)|
-| **Detect domain-specific content** (v3.2 only)|Use domain models to detect and identify domain-specific content in an image, such as celebrities and landmarks. For example, if an image contains people, Computer Vision can use a domain model for celebrities to determine if the people detected in the image are known celebrities.| [Detect domain-specific content](concept-detecting-domain-content.md)|
-|**Detect the color scheme** (v3.2 only) |Analyze color usage within an image. Computer Vision can determine whether an image is black & white or color and, for color images, identify the dominant and accent colors.| [Detect the color scheme](concept-detecting-color-schemes.md)|
-|**Moderate content in images** (v3.2 only) |You can use Computer Vision to detect adult content in an image and return confidence scores for different classifications. The threshold for flagging content can be set on a sliding scale to accommodate your preferences.|[Detect adult content](concept-detecting-adult-content.md)|
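-
-As an illustrative sketch of calling the Analyze Image API described above, a v3.2 request for tags and a description might look like the following; the `visualFeatures` values shown are assumptions to adapt to your scenario.
-
-```csharp
-// Sketch: call the v3.2 Analyze Image REST endpoint for tags and a description.
-using System.Collections.Generic;
-using System.Net.Http;
-using System.Text;
-using Newtonsoft.Json;
-
-var client = new HttpClient();
-client.DefaultRequestHeaders.Add("Ocp-Apim-Subscription-Key", "{subscription key}");
-
-var analyzeUri = "https://{endpoint}/vision/v3.2/analyze?visualFeatures=Tags,Description";
-var body = new Dictionary<string, object> { { "url", "{image url}" } };
-var content = new StringContent(JsonConvert.SerializeObject(body), Encoding.UTF8, "application/json");
-var response = await client.PostAsync(analyzeUri, content);
-string analysisJson = await response.Content.ReadAsStringAsync();
-```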
--
-## Product Recognition (v4.0 preview only)
-
-The Product Recognition APIs let you analyze photos of shelves in a retail store. You can detect the presence or absence of products and get their bounding box coordinates. Use it in combination with model customization to train a model to identify your specific products. You can also compare Product Recognition results to your store's planogram document.
-
-[Product Recognition](./concept-shelf-analysis.md)
-
-## Image Retrieval (v4.0 preview only)
-
-The Image Retrieval APIs enable the _vectorization_ of images and text queries. They convert images to coordinates in a multi-dimensional vector space. Then, incoming text queries can also be converted to vectors, and images can be matched to the text based on semantic closeness. This allows the user to search a set of images using text, without the need to use image tags or other metadata. Semantic closeness often produces better results in search.
-
-These APIs are only available in the following geographic regions: East US, France Central, Korea Central, North Europe, Southeast Asia, West Europe, West US.
-
-[Image Retrieval](./concept-image-retrieval.md)
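-
-The matching step itself reduces to comparing vectors. The following sketch (not the service API) shows how semantic closeness could be scored with cosine similarity once you have a text vector and image vectors from the vectorization APIs.
-
-```csharp
-using System;
-
-// Sketch: cosine similarity between a text query vector and an image vector.
-// The vectors themselves are assumed to come from the Image Retrieval vectorization APIs.
-static double CosineSimilarity(float[] a, float[] b)
-{
-    double dot = 0, magA = 0, magB = 0;
-    for (int i = 0; i < a.Length; i++)
-    {
-        dot += a[i] * b[i];
-        magA += a[i] * a[i];
-        magB += b[i] * b[i];
-    }
-    return dot / (Math.Sqrt(magA) * Math.Sqrt(magB));
-}
-
-// Higher scores mean the image is semantically closer to the text query.
-double score = CosineSimilarity(new float[] { 0.1f, 0.9f }, new float[] { 0.2f, 0.8f });
-Console.WriteLine($"Similarity: {score}");
-```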
-
-## Background removal (v4.0 preview only)
-
-Image Analysis 4.0 (preview) offers the ability to remove the background of an image. This feature can either output an image of the detected foreground object with a transparent background, or a grayscale alpha matte image showing the opacity of the detected foreground object. [Background removal](./concept-background-removal.md)
-
-|Original image |With background removed |Alpha matte |
-|:--:|:--:|:--:|
-| :::image type="content" source="media/background-removal/person-5.png" alt-text="Photo of a group of people using a tablet."::: | :::image type="content" source="media/background-removal/person-5-result.png" alt-text="Photo of a group of people using a tablet; background is transparent."::: | :::image type="content" source="media/background-removal/person-5-matte.png" alt-text="Alpha matte of a group of people."::: |
-
-## Image requirements
-
-#### [Version 4.0](#tab/4-0)
-
-Image Analysis works on images that meet the following requirements:
-- The image must be presented in JPEG, PNG, GIF, BMP, WEBP, ICO, TIFF, or MPO format
-- The file size of the image must be less than 20 megabytes (MB)
-- The dimensions of the image must be greater than 50 x 50 pixels and less than 16,000 x 16,000 pixels
--
-#### [Version 3.2](#tab/3-2)
-
-Image Analysis works on images that meet the following requirements:
-- The image must be presented in JPEG, PNG, GIF, or BMP format
-- The file size of the image must be less than 4 megabytes (MB)
-- The dimensions of the image must be greater than 50 x 50 pixels and less than 16,000 x 16,000 pixels
---
-## Data privacy and security
-
-As with all of the Cognitive Services, developers using the Computer Vision service should be aware of Microsoft's policies on customer data. See the [Cognitive Services page](https://www.microsoft.com/trustcenter/cloudservices/cognitiveservices) on the Microsoft Trust Center to learn more.
-
-## Next steps
-
-Get started with Image Analysis by following the quickstart guide in your preferred development language:
-- [Quickstart (v4.0 preview): Computer Vision REST API or client libraries](./quickstarts-sdk/image-analysis-client-library-40.md)
-- [Quickstart (v3.2): Computer Vision REST API or client libraries](./quickstarts-sdk/image-analysis-client-library.md)
cognitive-services Overview Ocr https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Computer-vision/overview-ocr.md
- Title: OCR - Optical Character Recognition-
-description: Learn how the optical character recognition (OCR) services extract print and handwritten text from images and documents in global languages.
------- Previously updated : 07/04/2023----
-# OCR - Optical Character Recognition
-
-OCR or Optical Character Recognition is also referred to as text recognition or text extraction. Machine-learning-based OCR techniques allow you to extract printed or handwritten text from images such as posters, street signs, and product labels, as well as from documents like articles, reports, forms, and invoices. The text is typically extracted as words, text lines, and paragraphs or text blocks, enabling access to a digital version of the scanned text. This eliminates or significantly reduces the need for manual data entry.
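-
-As an illustrative sketch, the following shows how that word and line structure could be walked once a v3.2 `Read` operation has completed; the property names follow the v3.2 result schema and should be checked against the Read API reference.
-
-```csharp
-// Sketch: walk the lines and words of a completed v3.2 Read result.
-using System;
-using Newtonsoft.Json;
-
-dynamic readResult = JsonConvert.DeserializeObject("{read result JSON}");
-foreach (var page in readResult.analyzeResult.readResults)
-{
-    foreach (var line in page.lines)
-    {
-        Console.WriteLine($"Line: {line.text}");
-        foreach (var word in line.words)
-        {
-            Console.WriteLine($"  Word: {word.text} (confidence {word.confidence})");
-        }
-    }
-}
-```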
-
-## How is OCR related to Intelligent Document Processing (IDP)?
-
-Intelligent Document Processing (IDP) uses OCR as its foundational technology to additionally extract structure, relationships, key-values, entities, and other document-centric insights with an advanced machine-learning based AI service like [Form Recognizer](../../applied-ai-services/form-recognizer/overview.md). Form Recognizer includes a document-optimized version of **Read** as its OCR engine while delegating to other models for higher-end insights. If you're extracting text from scanned and digital documents, use [Form Recognizer Read OCR](../../applied-ai-services/form-recognizer/concept-read.md).
-
-## OCR engine
-
-Microsoft's **Read** OCR engine is composed of multiple advanced machine-learning based models supporting [global languages](./language-support.md). It can extract printed and handwritten text including mixed languages and writing styles. **Read** is available as a cloud service and an on-premises container for deployment flexibility. With the latest preview, it's also available as a synchronous API for single, non-document, image-only scenarios with performance enhancements that make it easier to implement OCR-assisted user experiences.
-
-> [!WARNING]
-> The Computer Vision legacy [OCR API in v3.2](https://westus.dev.cognitive.microsoft.com/docs/services/computer-vision-v3-2/operations/56f91f2e778daf14a499f20d) and [RecognizeText API in v2.1](https://westus.dev.cognitive.microsoft.com/docs/services/5cd27ec07268f6c679a3e641/operations/587f2c6a1540550560080311) operations are not recommended for use.
--
-## How to use OCR
-
-Try out OCR by using Vision Studio. Then follow one of the links to the Read edition that best meets your requirements.
-
-> [!div class="nextstepaction"]
-> [Try Vision Studio](https://portal.vision.cognitive.azure.com/)
--
-## OCR supported languages
-
-Both **Read** versions available today in Computer Vision support several languages for printed and handwritten text. OCR for printed text supports English, French, German, Italian, Portuguese, Spanish, Chinese, Japanese, Korean, Russian, Arabic, Hindi, and other international languages that use Latin, Cyrillic, Arabic, and Devanagari scripts. OCR for handwritten text supports English, Chinese Simplified, French, German, Italian, Japanese, Korean, Portuguese, and Spanish languages.
-
-Refer to the full list of [OCR-supported languages](./language-support.md#optical-character-recognition-ocr).
-
-## OCR common features
-
-The Read OCR model is available in Computer Vision and Form Recognizer with common baseline capabilities while optimizing for respective scenarios. The following list summarizes the common features:
-
-* Printed and handwritten text extraction in supported languages
-* Pages, text lines and words with location and confidence scores
-* Support for mixed languages, mixed mode (print and handwritten)
-* Available as Distroless Docker container for on-premises deployment
-
-## Use the OCR cloud APIs or deploy on-premises
-
-The cloud APIs are the preferred option for most customers because of their ease of integration and fast productivity out of the box. Azure and the Computer Vision service handle scale, performance, data security, and compliance needs while you focus on meeting your customers' needs.
-
-For on-premises deployment, the [Read Docker container](./computer-vision-how-to-install-containers.md) enables you to deploy the Computer Vision v3.2 generally available OCR capabilities in your own local environment. Containers are great for specific security and data governance requirements.
-
-## OCR data privacy and security
-
-As with all of the Cognitive Services, developers using the Computer Vision service should be aware of Microsoft's policies on customer data. See the [Cognitive Services page](https://www.microsoft.com/trustcenter/cloudservices/cognitiveservices) on the Microsoft Trust Center to learn more.
-
-## Next steps
-- OCR for general (non-document) images: try the [Computer Vision 4.0 preview Image Analysis REST API quickstart](./concept-ocr.md).
-- OCR for PDF, Office and HTML documents and document images: start with [Form Recognizer Read](../../applied-ai-services/form-recognizer/concept-read.md).
-- Looking for the previous GA version? Refer to the [Computer Vision 3.2 GA SDK or REST API quickstarts](./quickstarts-sdk/client-library.md).
cognitive-services Overview Vision Studio https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Computer-vision/overview-vision-studio.md
- Title: What is Vision Studio?-
-description: Learn how to set up and use Vision Studio to test features of Azure Computer Vision on the web.
------ Previously updated : 12/27/2022----
-# What is Vision Studio?
-
-[Vision Studio](https://portal.vision.cognitive.azure.com/) is a set of UI-based tools that lets you explore, build, and integrate features from Azure Computer Vision.
-
-Vision Studio provides you with a platform to try several service features and sample their returned data in a quick, straightforward manner. Using Studio, you can start experimenting with the services and learning what they offer without needing to write any code. Then, use the available client libraries and REST APIs to get started embedding these services into your own applications.
-
-## Get started using Vision Studio
-
-To use Vision Studio, you'll need an Azure subscription and a Cognitive Services resource for authentication. You can also use this resource to call the services in the try-it-out experiences. Follow these steps to get started.
-
-1. Create an Azure Subscription if you don't have one already. You can [create one for free](https://azure.microsoft.com/free/ai/).
-
-1. Go to the [Vision Studio website](https://portal.vision.cognitive.azure.com/). If it's your first time logging in, you'll see a popup window that prompts you to sign in to Azure and then choose or create a Vision resource. You can skip this step and do it later.
-
- :::image type="content" source="./Images/vision-studio-wizard-1.png" alt-text="Screenshot of Vision Studio startup wizard.":::
-
-1. Select **Choose resource**, then select an existing resource within your subscription. If you'd like to create a new one, select **Create a new resource**. Then enter information for your new resource, such as a name, location, and resource group.
-
- :::image type="content" source="./Images/vision-studio-wizard-2.png" alt-text="Screenshot of Vision Studio resource selection panel.":::
-
- > [!TIP]
- > * When you select a location for your Azure resource, choose one that's closest to you for lower latency.
- > * If you use the free pricing tier, you can keep using the Vision service even after your Azure free trial or service credit expires.
-
-1. Select **Create resource**. Your resource will be created, and you'll be able to try the different features offered by Vision Studio.
-
- :::image type="content" source="./Images/vision-studio-home-page.png" alt-text="Screenshot of Vision Studio home page.":::
-
-1. From here, you can select any of the different features offered by Vision Studio. Some of them are outlined in the service quickstarts:
- * [OCR quickstart](quickstarts-sdk/client-library.md?pivots=vision-studio)
- * Image Analysis [4.0 quickstart](quickstarts-sdk/image-analysis-client-library-40.md?pivots=vision-studio) and [3.2 quickstart](quickstarts-sdk/image-analysis-client-library.md?pivots=vision-studio)
- * [Face quickstart](quickstarts-sdk/identity-client-library.md?pivots=vision-studio)
-
-## Pre-configured features
-
-Computer Vision offers multiple features that use prebuilt, pre-configured models for performing various tasks, such as understanding how people move through a space, detecting faces in images, and extracting text from images. See the [Computer Vision overview](overview.md) for a list of features offered by the Vision service.
-
-Each of these features has one or more try-it-out experiences in Vision Studio that allow you to upload images and receive JSON and text responses. These experiences help you quickly test the features using a no-code approach.
-
-## Cleaning up resources
-
-If you want to remove a Cognitive Services resource after using Vision Studio, you can delete the resource or resource group. Deleting the resource group also deletes any other resources associated with it. You can't delete your resource directly from Vision Studio, so use one of the following methods:
-* [Using the Azure portal](../cognitive-services-apis-create-account.md?tabs=multiservice%2cwindows#clean-up-resources)
-* [Using the Azure CLI](../cognitive-services-apis-create-account-cli.md?tabs=windows#clean-up-resources)
-
-> [!TIP]
-> In Vision Studio, you can find your resource's details (such as its name and pricing tier) as well as switch resources by selecting the Settings icon in the top-right corner of the Vision Studio screen.
-
-## Next steps
-
-* Go to [Vision Studio](https://portal.vision.cognitive.azure.com/) to begin using features offered by the service.
-* For more information on the features offered, see the [Azure Computer Vision overview](overview.md).
cognitive-services Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Computer-vision/overview.md
- Title: What is Computer Vision?-
-description: The Computer Vision service provides you with access to advanced algorithms for processing images and returning information.
------- Previously updated : 07/04/2023--
-keywords: computer vision, computer vision applications, computer vision service
-#Customer intent: As a developer, I want to evaluate image processing functionality, so that I can determine if it will work for my information extraction or object detection scenarios.
--
-# What is Computer Vision?
-
-Azure's Computer Vision service gives you access to advanced algorithms that process images and return information based on the visual features you're interested in.
-
-| Service|Description|
-|||
-| [Optical Character Recognition (OCR)](overview-ocr.md)|The Optical Character Recognition (OCR) service extracts text from images. You can use the new Read API to extract printed and handwritten text from photos and documents. It uses deep-learning-based models and works with text on various surfaces and backgrounds. These include business documents, invoices, receipts, posters, business cards, letters, and whiteboards. The OCR APIs support extracting printed text in [several languages](./language-support.md). Follow the [OCR quickstart](quickstarts-sdk/client-library.md) to get started.|
-|[Image Analysis](overview-image-analysis.md)| The Image Analysis service extracts many visual features from images, such as objects, faces, adult content, and auto-generated text descriptions. Follow the [Image Analysis quickstart](quickstarts-sdk/image-analysis-client-library-40.md) to get started.|
-| [Face](overview-identity.md) | The Face service provides AI algorithms that detect, recognize, and analyze human faces in images. Facial recognition software is important in many different scenarios, such as identity verification, touchless access control, and face blurring for privacy. Follow the [Face quickstart](quickstarts-sdk/identity-client-library.md) to get started. |
-| [Spatial Analysis](intro-to-spatial-analysis-public-preview.md)| The Spatial Analysis service analyzes the presence and movement of people on a video feed and produces events that other systems can respond to. Install the [Spatial Analysis container](spatial-analysis-container.md) to get started.|
-
-## Computer Vision for digital asset management
-
-Computer Vision can power many digital asset management (DAM) scenarios. DAM is the business process of organizing, storing, and retrieving rich media assets and managing digital rights and permissions. For example, a company may want to group and identify images based on visible logos, faces, objects, colors, and so on. Or, you might want to automatically [generate captions for images](./Tutorials/storage-lab-tutorial.md) and attach keywords so they're searchable. For an all-in-one DAM solution using Cognitive Services, Azure Cognitive Search, and intelligent reporting, see the [Knowledge Mining Solution Accelerator Guide](https://github.com/Azure-Samples/azure-search-knowledge-mining) on GitHub. For other DAM examples, see the [Computer Vision Solution Templates](https://github.com/Azure-Samples/Cognitive-Services-Vision-Solution-Templates) repository.
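-
-As a rough sketch, requesting an auto-generated caption and tags for a single asset with the Image Analysis 3.2 REST API could look like this; the resource name, key, and image URL are placeholders.
-
-```bash
-# Request an auto-generated description and tags for an image (Image Analysis v3.2).
-curl -X POST "https://<your-resource>.cognitiveservices.azure.com/vision/v3.2/analyze?visualFeatures=Description,Tags" \
-  -H "Ocp-Apim-Subscription-Key: <your-key>" \
-  -H "Content-Type: application/json" \
-  -d '{"url": "https://example.com/product-photo.jpg"}'
-```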
-
-## Getting started
-
-Use [Vision Studio](https://portal.vision.cognitive.azure.com/) to try out Computer Vision features quickly in your web browser.
-
-To get started building Computer Vision into your app, follow a quickstart.
-* [Quickstart: Optical character recognition (OCR)](quickstarts-sdk/client-library.md)
-* [Quickstart: Image Analysis](quickstarts-sdk/image-analysis-client-library.md)
-* [Quickstart: Spatial Analysis container](spatial-analysis-container.md)
-
-## Image requirements
-
-Computer Vision can analyze images that meet the following requirements:
-- The image must be presented in JPEG, PNG, GIF, or BMP format
-- The file size of the image must be less than 4 megabytes (MB)
-- The dimensions of the image must be greater than 50 x 50 pixels
- - For the Read API, the dimensions of the image must be between 50 x 50 and 10,000 x 10,000 pixels.
-
-## Data privacy and security
-
-As with all of the Cognitive Services, developers using the Computer Vision service should be aware of Microsoft's policies on customer data. See the [Cognitive Services page](https://www.microsoft.com/trustcenter/cloudservices/cognitiveservices) on the Microsoft Trust Center to learn more.
-
-## Next steps
-
-Follow a quickstart to implement and run a service in your preferred development language.
-
-* [Quickstart: Optical character recognition (OCR)](quickstarts-sdk/client-library.md)
-* [Quickstart: Image Analysis](quickstarts-sdk/image-analysis-client-library-40.md)
-* [Quickstart: Face](quickstarts-sdk/identity-client-library.md)
-* [Quickstart: Spatial Analysis container](spatial-analysis-container.md)
cognitive-services Client Library https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Computer-vision/quickstarts-sdk/client-library.md
- Title: "Quickstart: Optical character recognition (OCR)"-
-description: Learn how to use Optical character recognition (OCR) in your application through a native client library in the language of your choice.
------ Previously updated : 07/04/2023--
-zone_pivot_groups: programming-languages-ocr
-keywords: computer vision, computer vision service
--
-# Quickstart: Computer Vision v3.2 GA Read
--
-Get started with the Computer Vision Read REST API or client libraries. The Read API provides you with AI algorithms for extracting text from images and returning it as structured strings. Follow these steps to install a package to your application and try out the sample code for basic tasks.
---------------
cognitive-services Identity Client Library https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Computer-vision/quickstarts-sdk/identity-client-library.md
- Title: 'Quickstart: Use the Face service'-
-description: The Face API offers client libraries that make it easy to detect, find similar, identify, verify and more.
---
-zone_pivot_groups: programming-languages-set-face
--- Previously updated : 07/04/2023--
-keywords: face search by image, facial recognition search, facial recognition, face recognition app
--
-# Quickstart: Use the Face service
---------------
cognitive-services Image Analysis Client Library 40 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Computer-vision/quickstarts-sdk/image-analysis-client-library-40.md
- Title: "Quickstart: Image Analysis 4.0"-
-description: Learn how to tag images in your application using Image Analysis 4.0 through a native client SDK in the language of your choice.
------ Previously updated : 01/24/2023--
-zone_pivot_groups: programming-languages-computer-vision-40
-keywords: computer vision, computer vision service
--
-# Quickstart: Image Analysis 4.0
-
-Get started with the Image Analysis 4.0 REST API or client SDK to set up a basic image analysis application. The Image Analysis service provides you with AI algorithms for processing images and returning information on their visual features. Follow these steps to install a package to your application and try out the sample code.
------------
cognitive-services Image Analysis Client Library https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Computer-vision/quickstarts-sdk/image-analysis-client-library.md
- Title: "Quickstart: Image Analysis"-
-description: Learn how to tag images in your application using Image Analysis through a native client library in the language of your choice.
------ Previously updated : 12/27/2022--
-zone_pivot_groups: programming-languages-computer-vision
-keywords: computer vision, computer vision service
--
-# Quickstart: Image Analysis
-
-Get started with the Image Analysis REST API or client libraries to set up a basic image tagging script. The Analyze Image service provides you with AI algorithms for processing images and returning information on their visual features. Follow these steps to install a package to your application and try out the sample code.
------------------
cognitive-services Read Container Migration Guide https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Computer-vision/read-container-migration-guide.md
- Title: Migrating to the Read v3.x containers-
-description: Learn how to migrate to the v3 Read OCR containers
------ Previously updated : 09/28/2021----
-# Migrate to the Read v3.x OCR containers
-
-If you're using version 2 of the Computer Vision Read OCR container, use this article to learn about upgrading your application to use version 3.x of the container.
--
-## Configuration changes
-
-* `ReadEngineConfig:ResultExpirationPeriod` is no longer supported. The Read OCR container has a built-in cron job that removes the results and metadata associated with a request after 48 hours.
-* `Cache:Redis:Configuration` is no longer supported. The Cache isn't used in the v3.x containers, so you don't need to set it.
-
-## API changes
-
-The Read v3.2 container uses version 3 of the Computer Vision API and has the following endpoints:
-
-* `/vision/v3.2/read/analyzeResults/{operationId}`
-* `/vision/v3.2/read/analyze`
-* `/vision/v3.2/read/syncAnalyze`
-
-See the [Computer Vision v3 REST API migration guide](./upgrade-api-versions.md) for detailed information on updating your applications to use version 3 of the cloud-based Read API. This information applies to the container as well. Sync operations are only supported in containers.
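-
-For example, a synchronous request against a locally running Read 3.2 container might look like the following sketch. The host port (5000 here) depends on how you mapped it in your `docker run` command.
-
-```bash
-# Send a local image to the container's synchronous Read endpoint and print the JSON result.
-curl -X POST "http://localhost:5000/vision/v3.2/read/syncAnalyze" \
-  -H "Content-Type: application/octet-stream" \
-  --data-binary @scanned-letter.jpg
-```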
-
-## Memory requirements
-
-The requirements and recommendations are based on benchmarks with a single request per second, using a 523-KB image of a scanned business letter that contains 29 lines and a total of 803 characters. The following table describes the minimum and recommended allocations of resources for each Read OCR container.
-
-|Container |Minimum | Recommended |
-||||
-|Read 3.2 **2022-04-30** | 4 cores, 8-GB memory | 8 cores, 16-GB memory |
-
-Each core must be 2.6 gigahertz (GHz) or faster.
-
-Core and memory correspond to the `--cpus` and `--memory` settings, which are used as part of the `docker run` command.
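-
-As a sketch, a `docker run` command that applies the recommended allocation could look like the following; the billing endpoint and key are placeholders, and the image tag corresponds to the 2022-04-30 model listed below.
-
-```bash
-# Run the Read 3.2 container with the recommended 8 cores and 16 GB of memory.
-docker run --rm -it -p 5000:5000 --cpus 8 --memory 16g \
-  mcr.microsoft.com/azure-cognitive-services/vision/read:3.2-model-2022-04-30 \
-  Eula=accept \
-  Billing=<your-computer-vision-endpoint> \
-  ApiKey=<your-key>
-```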
-
-## Storage implementations
-
->[!NOTE]
-> MongoDB is no longer supported in 3.x versions of the container. Instead, the containers support Azure Storage and offline file systems.
-
-| Implementation | Required runtime argument(s) |
-|||
-|File level (default) | No runtime arguments required. `/share` directory will be used. |
-|Azure Blob | `Storage:ObjectStore:AzureBlob:ConnectionString={AzureStorageConnectionString}` |
-
-## Queue implementations
-
-In v3.x of the container, RabbitMQ is currently not supported. The supported backing implementations are:
-
-| Implementation | Runtime Argument(s) | Intended use |
-|||-|
-| In Memory (default) | No runtime arguments required. | Development and testing |
-| Azure Queues | `Queue:Azure:ConnectionString={AzureStorageConnectionString}` | Production |
-| RabbitMQ | Unavailable | Production |
-
-For added redundancy, the Read v3.x container uses a visibility timer to ensure requests can be successfully processed if a crash occurs when running in a multi-container setup.
-
-Set the timer with `Queue:Azure:QueueVisibilityTimeoutInMilliseconds`, which sets the time for a message to be invisible when another worker is processing it. To prevent pages from being processed redundantly, we recommend setting the timeout period to 120 seconds. The default value is 30 seconds.
-
-| Default value | Recommended value |
-|||
-| 30000 | 120000 |
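-
-The following sketch shows how these settings might be passed as runtime arguments when starting the container, assuming Azure Storage backs both the object store and the queue; the connection strings, endpoint, and key are placeholders.
-
-```bash
-# Start the container with Azure Blob storage, Azure Queues, and a 120-second visibility timeout.
-docker run --rm -it -p 5000:5000 --cpus 8 --memory 16g \
-  mcr.microsoft.com/azure-cognitive-services/vision/read:3.2-model-2022-04-30 \
-  Eula=accept \
-  Billing=<your-computer-vision-endpoint> \
-  ApiKey=<your-key> \
-  Storage:ObjectStore:AzureBlob:ConnectionString="<azure-storage-connection-string>" \
-  Queue:Azure:ConnectionString="<azure-storage-connection-string>" \
-  Queue:Azure:QueueVisibilityTimeoutInMilliseconds=120000
-```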
--
-## Next steps
-
-* Review [Configure containers](computer-vision-resource-container-config.md) for configuration settings
-* Review [OCR overview](overview-ocr.md) to learn more about recognizing printed and handwritten text
-* Refer to the [Read API](//westus.dev.cognitive.microsoft.com/docs/services/5adf991815e1060e6355ad44/operations/56f91f2e778daf14a499e1fa) for details about the methods supported by the container.
-* Refer to [Frequently asked questions (FAQ)](FAQ.yml) to resolve issues related to Computer Vision functionality.
-* Use more [Cognitive Services Containers](../cognitive-services-container-support.md)
cognitive-services Spatial Analysis Camera Placement https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Computer-vision/spatial-analysis-camera-placement.md
- Title: Spatial Analysis camera placement-
-description: Learn how to set up a camera for use with Spatial Analysis
------ Previously updated : 06/08/2021----
-# Where to place the camera?
-
-This article provides camera placement recommendations for Spatial Analysis (public preview). It includes general guidelines as well as specific recommendations for height, angle, and camera-to-focal-point-distance for all the included operations.
-
-> [!NOTE]
-> This guide is designed for the Axis M3045-V camera. This camera uses a resolution of 1920 x 1080, a 106-degree horizontal field of view, a 59-degree vertical field of view, and a fixed 2.8 mm focal length. The principles below apply to all cameras, but specific guidelines around camera height and camera-to-focal-point distance will need to be adjusted for use with other cameras.
-
-## General guidelines
-
-Consider the following general guidelines when positioning cameras for Spatial Analysis:
-
-* **Lighting height.** Place cameras below lighting fixtures so the fixtures don't block the cameras.
-* **Obstructions.** To avoid obstructing camera views, take note of obstructions such as poles, signage, shelving, walls, and existing LP cameras.
-* **Environmental backlighting.** Outdoor backlighting affects camera image quality. To avoid severe backlighting conditions, avoid directing cameras at external-facing windows and glass doors.
-* **Local privacy rules and regulations.** Local regulations may restrict what cameras can capture. Make sure that you understand local rules and regulations before placing cameras.
-* **Building structure.** HVAC, sprinklers, and existing wiring may limit hard mounting of cameras.
-* **Cable management.** Make sure you can route an Ethernet cable from planned camera mounting locations to the Power over Ethernet (PoE) switch.
-
-## Height, focal-point distance, and angle
-
-You need to consider three things when deciding how to install a camera for Spatial Analysis:
-- Camera height
-- Camera-to-focal-point distance
-- The angle of the camera relative to the floor plane
-
-It's also important to know the direction that the majority of people walk (person walking direction) in relation to the camera field of view if possible. This direction is important for system performance.
-
-![Image of a person walking in a direction](./media/spatial-analysis/person-walking-direction.png)
-
-The following illustration shows the elevation view for person walking direction.
-
-![Elevation and plan view](./media/spatial-analysis/person-walking-direction-diagram.png)
-
-## Camera height
-
-Generally, cameras should be mounted 12-14 feet from the ground. For Face mask detection, we recommend mounting cameras 8-12 feet from the ground. When planning your camera mounting in this range, consider obstructions (for example: shelving, hanging lights, hanging signage, and displays) that might affect the camera view, and then adjust the height as necessary.
-
-## Camera-to-focal-point distance
-
-_Camera-to-focal-point distance_ is the linear distance from the focal point (or center of the camera image) to the camera measured on the ground.
-
-![Camera-to-focal-point-distance](./media/spatial-analysis/camera-focal-point.png)
-
-This distance is measured on the floor plane.
-
-![How camera-to-focal-point-distance is measured from the floor](./media/spatial-analysis/camera-focal-point-floor-plane.png)
-
-From above, it looks like this:
-
-![How camera-to-focal-point-distance is measured from above](./media/spatial-analysis/camera-focal-point-above.png)
-
-Use the table below to determine the camera's distance from the focal point based on specific mounting heights. These distances are for optimal placement. Note that the table provides guidance below the 12'-14' recommendation since some ceilings can limit height. For Face mask detection, the recommended camera-to-focal-point distance (min/max) is 4'-10' for camera heights between 8' and 12'.
-
-| Camera height | Camera-to-focal-point distance (min/max) |
-| - | - |
-| 8' | 4.6'-8' |
-| 10' | 5.8'-10' |
-| 12' | 7'-12' |
-| 14' | 8'-14' |
-| 16' | 9.2'-16' |
-| 20' | 11.5'-20' |
-
-The following illustration simulates camera views from the closest and farthest camera-to-focal-point distances.
-
-| Closest | Farthest |
-| -- | |
-| ![Closest camera-to-focal-point distance](./media/spatial-analysis/focal-point-closest.png) | ![Farthest camera-to-focal-point distance](./media/spatial-analysis/focal-point-farthest.png) |
-
-## Camera angle mounting ranges
-
-This section describes acceptable camera angle mounting ranges. These mounting ranges show the acceptable range for optimal placement.
-
-### Line configuration
-
-For the **cognitiveservices.vision.spatialanalysis-personcrossingline** operation, +/-5° is the optimal camera mounting angle to maximize accuracy.
-
-For Face mask detection, +/-30 degrees is the optimal camera mounting angle for camera heights between 8' and 12'.
-
-The following illustration simulates camera views using the leftmost (-) and rightmost (+) mounting angle recommendations for using **cognitiveservices.vision.spatialanalysis-personcrossingline** to do entrance counting in a door way.
-
-| Leftmost view | Rightmost view |
-| - | -- |
-| ![Leftmost camera angle](./media/spatial-analysis/camera-angle-left.png) | ![Rightmost camera angle](./media/spatial-analysis/camera-angle-right.png) |
-
-The following illustration shows camera placement and mounting angles from a birds-eye view.
-
-![Bird's eye view](./media/spatial-analysis/camera-angle-top.png)
-
-### Zone configuration
-
-We recommend that you place cameras at 10 feet or more above ground to guarantee the covered area is big enough.
-
-When the zone is next to an obstacle like a wall or shelf, mount cameras at the specified distance from the target, within the acceptable 120-degree angle range, as shown in the following illustration.
-
-![Acceptable camera angle](./media/spatial-analysis/camera-angle-acceptable.png)
-
-The following illustration provides simulations for the left and right camera views of an area next to a shelf.
-
-| Left view | Right view |
-| - | -- |
-| ![Left view](./media/spatial-analysis/end-cap-left.png) | ![Right view](./media/spatial-analysis/end-cap-right.png) |
-
-#### Queues
-
-The **cognitiveservices.vision.spatialanalysis-personcount**, **cognitiveservices.vision.spatialanalysis-persondistance**, and **cognitiveservices.vision.spatialanalysis-personcrossingpolygon** skills may be used to monitor queues. For optimal queue data quality, retractable belt barriers are preferred to minimize occlusion of the people in the queue and to ensure the queue's location is consistent over time.
-
-![Retractable belt queue](./media/spatial-analysis/retractable-belt-queue.png)
-
-This type of barrier is preferred over opaque barriers for queue formation to maximize the accuracy of the insights from the system.
-
-There are two types of queues: linear and zig-zag.
-
-The following illustration shows recommendations for linear queues:
-
-![Linear queue recommendation](./media/spatial-analysis/camera-angle-linear-queue.png)
-
-The following illustration provides simulations for the left and right camera views of linear queues. Note that you can mount the camera on the opposite side of the queue.
-
-| Left view | Right view |
-| - | -- |
-| ![Left angle for linear queue](./media/spatial-analysis/camera-angle-linear-left.png) | ![Right angle for linear queue](./media/spatial-analysis/camera-angle-linear-right.png) |
-
-For zig-zag queues, it's best to avoid placing the camera directly facing the queue line direction, as shown in the following illustration. Note that each of the four example camera positions in the illustration provides the ideal view with an acceptable deviation of +/- 15 degrees in each direction.
-
-The following illustrations simulate the view from a camera placed in the ideal locations for a zig-zag queue.
-
-| View 1 | View 2 |
-| - | - |
-| ![View 1](./media/spatial-analysis/camera-angle-ideal-location-right.png) | ![View 2](./media/spatial-analysis/camera-angle-ideal-location-left.png) |
-
-| View 3 | View 4 |
-| - | - |
-| ![View 3](./media/spatial-analysis/camera-angle-ideal-location-back.png) | ![View 4](./media/spatial-analysis/camera-angle-ideal-location-back-left.png) |
-
-##### Organic queues
-
-Organic queue lines form organically. This style of queue is acceptable if queues don't form beyond 2-3 people and the line forms within the zone definition. If the queue length is typically more than 2-3 people, we recommend using a retractable belt barrier to help guide the queue direction and ensure the line forms within the zone definition.
-
-## Next steps
-
-* [Deploy a People Counting web application](spatial-analysis-web-app.md)
-* [Configure Spatial Analysis operations](./spatial-analysis-operations.md)
-* [Logging and troubleshooting](spatial-analysis-logging.md)
-* [Zone and line placement guide](spatial-analysis-zone-line-placement.md)
cognitive-services Spatial Analysis Container https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Computer-vision/spatial-analysis-container.md
- Title: How to install and run the Spatial Analysis container - Computer Vision-
-description: The Spatial Analysis container lets you can detect people and distances.
------ Previously updated : 12/27/2022----
-# Install and run the Spatial Analysis container (Preview)
-
-The Spatial Analysis container enables you to analyze real-time streaming video to understand spatial relationships between people, their movement, and interactions with objects in physical environments. Containers are great for specific security and data governance requirements.
-
-## Prerequisites
-
-* Azure subscription - [Create one for free](https://azure.microsoft.com/free/cognitive-services)
-* [!INCLUDE [contributor-requirement](../includes/quickstarts/contributor-requirement.md)]
-* Once you have your Azure subscription, <a href="https://portal.azure.com/#create/Microsoft.CognitiveServicesComputerVision" title="Create a Computer Vision resource" target="_blank">create a Computer Vision resource </a> for the Standard S1 tier in the Azure portal to get your key and endpoint. After it deploys, select **Go to resource**.
- * You'll need the key and endpoint from the resource you create to run the Spatial Analysis container. You'll use your key and endpoint later.
-
-### Spatial Analysis container requirements
-
-To run the Spatial Analysis container, you need a compute device with an NVIDIA CUDA Compute Capable GPU 6.0 or higher (for example, [NVIDIA Tesla T4](https://www.nvidia.com/en-us/data-center/tesla-t4/), A2, 1080Ti, or 2080Ti). We recommend that you use [Azure Stack Edge](https://azure.microsoft.com/products/azure-stack/edge/) with GPU acceleration; however, the container runs on any other desktop machine that meets the minimum requirements. We'll refer to this device as the host computer.
-
-#### [Azure Stack Edge device](#tab/azure-stack-edge)
-
-Azure Stack Edge is a Hardware-as-a-Service solution and an AI-enabled edge computing device with network data transfer capabilities. For detailed preparation and setup instructions, see the [Azure Stack Edge documentation](../../databox-online/azure-stack-edge-deploy-prep.md).
-
-#### [Desktop machine](#tab/desktop-machine)
-
-#### Minimum hardware requirements
-
-* 4 GB of system RAM
-* 4 GB of GPU RAM
-* 8 core CPU
-* One NVIDIA CUDA Compute Capable GPU 6.0 or higher (for example, [NVIDIA Tesla T4](https://www.nvidia.com/en-us/data-center/tesla-t4/), A2, 1080Ti, or 2080Ti)
-* 20 GB of HDD space
-
-#### Recommended hardware
-
-* 32 GB of system RAM
-* 16 GB of GPU RAM
-* 8 core CPU
-* Two NVIDIA CUDA Compute Capable GPUs 6.0 or higher (for example, [NVIDIA Tesla T4](https://www.nvidia.com/en-us/data-center/tesla-t4/), A2, 1080Ti, or 2080Ti)
-* 50 GB of SSD space
-
-In this article, you'll download and install the following software packages. The host computer must be able to run the following (see below for instructions):
-
-* [NVIDIA graphics drivers](https://docs.nvidia.com/datacenter/tesla/tesla-installation-notes/index.html) and [NVIDIA CUDA Toolkit](https://docs.nvidia.com/cuda/cuda-installation-guide-linux/index.html). The minimum GPU driver version is 460 with CUDA 11.1.
-* Configurations for [NVIDIA MPS](https://docs.nvidia.com/deploy/pdf/CUDA_Multi_Process_Service_Overview.pdf) (Multi-Process Service).
-* [Docker CE](https://docs.docker.com/install/linux/docker-ce/ubuntu/#install-docker-enginecommunity-1) and [NVIDIA-Docker2](https://github.com/NVIDIA/nvidia-docker)
-* [Azure IoT Edge](../../iot-edge/how-to-provision-single-device-linux-symmetric.md) runtime.
-
-#### [Azure VM with GPU](#tab/virtual-machine)
-In our example, we'll utilize an [NCv3 series VM](../../virtual-machines/ncv3-series.md) that has one v100 GPU.
---
-| Requirement | Description |
-|--|--|
-| Camera | The Spatial Analysis container isn't tied to a specific camera brand. The camera device needs to support Real-Time Streaming Protocol (RTSP) and H.264 encoding, be accessible to the host computer, and be capable of streaming at 15 FPS and 1080p resolution. |
-| Linux OS | [Ubuntu Desktop 18.04 LTS](http://releases.ubuntu.com/18.04/) must be installed on the host computer. |
-
-## Set up the host computer
-
-We recommend that you use an Azure Stack Edge device for your host computer. Select **Desktop Machine** if you're configuring a different device, or **Virtual Machine** if you're utilizing a VM.
-
-#### [Azure Stack Edge device](#tab/azure-stack-edge)
-
-### Configure compute on the Azure Stack Edge portal
-
-Spatial Analysis uses the compute features of the Azure Stack Edge to run an AI solution. To enable the compute features, make sure that:
-
-* You've [connected and activated](../../databox-online/azure-stack-edge-deploy-connect-setup-activate.md) your Azure Stack Edge device.
-* You have a Windows client system running PowerShell 5.0 or later, to access the device.
-* To deploy a Kubernetes cluster, you need to configure your Azure Stack Edge device via the **Local UI** on the [Azure portal](https://portal.azure.com/):
- 1. Enable the compute feature on your Azure Stack Edge device. To enable compute, go to the **Compute** page in the web interface for your device.
- 2. Select a network interface that you want to enable for compute, then select **Enable**. This will create a virtual switch on your device, on that network interface.
- 3. Leave the Kubernetes test node IP addresses and the Kubernetes external services IP addresses blank.
- 4. Select **Apply**. This operation may take about two minutes.
-
-![Configure compute](media/spatial-analysis/configure-compute.png)
-
-### Set up Azure Stack Edge role and create an IoT Hub resource
-
-In the [Azure portal](https://portal.azure.com/), navigate to your Azure Stack Edge resource. On the **Overview** page or navigation list, select the Edge compute **Get started** button. In the **Configure Edge compute** tile, select **Configure**.
-
-![Link](media/spatial-analysis/configure-edge-compute-tile.png)
-
-On the **Configure Edge compute** page, choose an existing IoT Hub, or choose to create a new one. By default, a Standard (S1) pricing tier is used to create an IoT Hub resource. To use a free tier IoT Hub resource, create one and then select it. The IoT Hub resource uses the same subscription and resource group that is used by the Azure Stack Edge resource.
-
-Select **Create**. The IoT Hub resource creation may take a couple of minutes. After the IoT Hub resource is created, the **Configure Edge compute** tile will update to show the new configuration. To confirm that the Edge compute role has been configured, select **View config** on the **Configure compute** tile.
-
-When the Edge compute role is set up on the Edge device, it creates two devices: an IoT device and an IoT Edge device. Both devices can be viewed in the IoT Hub resource. The Azure IoT Edge Runtime will already be running on the IoT Edge device.
-
-> [!NOTE]
-> * Currently only the Linux platform is supported for IoT Edge devices. For help troubleshooting the Azure Stack Edge device, see the [logging and troubleshooting](spatial-analysis-logging.md) article.
-> * To learn more about how to configure an IoT Edge device to communicate through a proxy server, see [Configure an IoT Edge device to communicate through a proxy server](../../iot-edge/how-to-configure-proxy-support.md#azure-portal)
-
-### Enable MPS on Azure Stack Edge
-
-Follow these steps to remotely connect from a Windows client.
-
-1. Run a Windows PowerShell session as an administrator.
-2. Make sure that the Windows Remote Management service is running on your client. At the command prompt, type:
-
- ```powershell
- winrm quickconfig
- ```
-
- For more information, see [Installation and configuration for Windows Remote Management](/windows/win32/winrm/installation-and-configuration-for-windows-remote-management#quick-default-configuration).
-
-3. Assign a variable to the connection string used in the `hosts` file.
-
- ```powershell
- $Name = "<Node serial number>.<DNS domain of the device>"
- ```
-
- Replace `<Node serial number>` and `<DNS domain of the device>` with the node serial number and DNS domain of your device. You can get the values for node serial number from the **Certificates** page and DNS domain from the **Device** page in the local web UI of your device.
-
-4. To add this connection string for your device to the client's trusted hosts list, type the following command:
-
- ```powershell
- Set-Item WSMan:\localhost\Client\TrustedHosts $Name -Concatenate -Force
- ```
-
-5. Start a Windows PowerShell session on the device:
-
- ```powershell
- Enter-PSSession -ComputerName $Name -Credential ~\EdgeUser -ConfigurationName Minishell -UseSSL
- ```
-
- If you see an error related to trust relationship, then check if the signing chain of the node certificate uploaded to your device is also installed on the client accessing your device.
-
-6. Provide the password when prompted. Use the same password that is used to sign into the local web UI. The default local web UI password is *Password1*. When you successfully connect to the device using remote PowerShell, you see the following sample output:
-
- ```
- Windows PowerShell
- Copyright (C) Microsoft Corporation. All rights reserved.
-
- PS C:\WINDOWS\system32> winrm quickconfig
- WinRM service is already running on this machine.
- PS C:\WINDOWS\system32> $Name = "1HXQG13.wdshcsso.com"
- PS C:\WINDOWS\system32> Set-Item WSMan:\localhost\Client\TrustedHosts $Name -Concatenate -Force
- PS C:\WINDOWS\system32> Enter-PSSession -ComputerName $Name -Credential ~\EdgeUser -ConfigurationName Minishell -UseSSL
-
- WARNING: The Windows PowerShell interface of your device is intended to be used only for the initial network configuration. Please engage Microsoft Support if you need to access this interface to troubleshoot any potential issues you may be experiencing. Changes made through this interface without involving Microsoft Support could result in an unsupported configuration.
- [1HXQG13.wdshcsso.com]: PS>
- ```
-
-#### [Desktop machine](#tab/desktop-machine)
-
-Follow these instructions if your host computer isn't an Azure Stack Edge device.
-
-#### Install NVIDIA CUDA Toolkit and Nvidia graphics drivers on the host computer
-
-Use the following bash script to install the required Nvidia graphics drivers, and CUDA Toolkit.
-
-```bash
-wget https://developer.download.nvidia.com/compute/cuda/repos/ubuntu1804/x86_64/cuda-ubuntu1804.pin
-sudo mv cuda-ubuntu1804.pin /etc/apt/preferences.d/cuda-repository-pin-600
-sudo apt-key adv --fetch-keys https://developer.download.nvidia.com/compute/cuda/repos/ubuntu1804/x86_64/3bf863cc.pub
-sudo add-apt-repository "deb http://developer.download.nvidia.com/compute/cuda/repos/ubuntu1804/x86_64/ /"
-sudo apt-get update
-sudo apt-get -y install cuda
-```
-
-Reboot the machine, and run the following command.
-
-```bash
-nvidia-smi
-```
-
-You should see the following output.
-
-![NVIDIA driver output](media/spatial-analysis/nvidia-driver-output.png)
-
-### Install Docker CE and nvidia-docker2 on the host computer
-
-Install Docker CE on the host computer.
-
-```bash
-sudo apt-get update
-sudo apt-get install -y apt-transport-https ca-certificates curl gnupg-agent software-properties-common
-curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -
-sudo add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable"
-sudo apt-get update
-sudo apt-get install -y docker-ce docker-ce-cli containerd.io
-```
-
-Install the *nvidia-docker-2* software package.
-
-```bash
-distribution=$(. /etc/os-release;echo $ID$VERSION_ID)
-curl -s -L https://nvidia.github.io/nvidia-docker/gpgkey | sudo apt-key add -
-curl -s -L https://nvidia.github.io/nvidia-docker/$distribution/nvidia-docker.list | sudo tee /etc/apt/sources.list.d/nvidia-docker.list
-sudo apt-get update
-sudo apt-get install -y docker-ce nvidia-docker2
-sudo systemctl restart docker
-```
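-
-As an optional sanity check, you can confirm that containers can access the GPU; the CUDA image tag below is only an example, and any CUDA base image available to you works.
-
-```bash
-# Verify that Docker can reach the GPU through the NVIDIA runtime.
-sudo docker run --rm --runtime=nvidia nvidia/cuda:11.1.1-base-ubuntu18.04 nvidia-smi
-```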
-
-## Enable NVIDIA MPS on the host computer
-
-> [!TIP]
-> * Don't install MPS if your GPU compute capability is less than 7.x (pre-Volta). See [CUDA Compatibility](https://docs.nvidia.com/deploy/cuda-compatibility/index.html#support-title) for reference.
-> * Run the MPS instructions from a terminal window on the host computer, not inside your Docker container instance.
-
-For best performance and utilization, configure the host computer's GPU(s) for [NVIDIA Multiprocess Service (MPS)](https://docs.nvidia.com/deploy/pdf/CUDA_Multi_Process_Service_Overview.pdf). Run the MPS instructions from a terminal window on the host computer.
-
-```bash
-# Set GPU(s) compute mode to EXCLUSIVE_PROCESS
-sudo nvidia-smi --compute-mode=EXCLUSIVE_PROCESS
-
-# Cronjob for setting GPU(s) compute mode to EXCLUSIVE_PROCESS on boot
-echo "SHELL=/bin/bash" > /tmp/nvidia-mps-cronjob
-echo "PATH=/usr/local/sbin:/usr/local/bin:/sbin:/bin:/usr/sbin:/usr/bin" >> /tmp/nvidia-mps-cronjob
-echo "@reboot root nvidia-smi --compute-mode=EXCLUSIVE_PROCESS" >> /tmp/nvidia-mps-cronjob
-
-sudo chown root:root /tmp/nvidia-mps-cronjob
-sudo mv /tmp/nvidia-mps-cronjob /etc/cron.d/
-
-# Service entry for automatically starting MPS control daemon
-echo "[Unit]" > /tmp/nvidia-mps.service
-echo "Description=NVIDIA MPS control service" >> /tmp/nvidia-mps.service
-echo "After=cron.service" >> /tmp/nvidia-mps.service
-echo "" >> /tmp/nvidia-mps.service
-echo "[Service]" >> /tmp/nvidia-mps.service
-echo "Restart=on-failure" >> /tmp/nvidia-mps.service
-echo "ExecStart=/usr/bin/nvidia-cuda-mps-control -f" >> /tmp/nvidia-mps.service
-echo "" >> /tmp/nvidia-mps.service
-echo "[Install]" >> /tmp/nvidia-mps.service
-echo "WantedBy=multi-user.target" >> /tmp/nvidia-mps.service
-
-sudo chown root:root /tmp/nvidia-mps.service
-sudo mv /tmp/nvidia-mps.service /etc/systemd/system/
-
-sudo systemctl --now enable nvidia-mps.service
-```
-
-## Configure Azure IoT Edge on the host computer
-
-To deploy the Spatial Analysis container on the host computer, create an instance of an [Azure IoT Hub](../../iot-hub/iot-hub-create-through-portal.md) service using the Standard (S1) or Free (F0) pricing tier.
-
-Use the Azure CLI to create an instance of Azure IoT Hub. Replace the parameters where appropriate. Alternatively, you can create the Azure IoT Hub on the [Azure portal](https://portal.azure.com/).
-
-```bash
-curl -sL https://aka.ms/InstallAzureCLIDeb | sudo bash
-```
-```azurecli
-sudo az login
-```
-```azurecli
-sudo az account set --subscription "<name or ID of Azure Subscription>"
-```
-```azurecli
-sudo az group create --name "<resource-group-name>" --location "<your-region>"
-```
-See [Region Support](https://azure.microsoft.com/global-infrastructure/services/?products=cognitive-services) for available regions.
-```azurecli
-sudo az iot hub create --name "<iothub-group-name>" --sku S1 --resource-group "<resource-group-name>"
-```
-```azurecli
-sudo az iot hub device-identity create --hub-name "<iothub-name>" --device-id "<device-name>" --edge-enabled
-```
-
-You'll need to install [Azure IoT Edge](../../iot-edge/how-to-provision-single-device-linux-symmetric.md) version 1.0.9. Follow these steps to download the correct version:
-
-Ubuntu Server 18.04:
-```bash
-curl https://packages.microsoft.com/config/ubuntu/18.04/multiarch/prod.list > ./microsoft-prod.list
-```
-
-Copy the generated list.
-```bash
-sudo cp ./microsoft-prod.list /etc/apt/sources.list.d/
-```
-
-Install the Microsoft GPG public key.
-
-```bash
-curl https://packages.microsoft.com/keys/microsoft.asc | gpg --dearmor > microsoft.gpg
-```
-```bash
-sudo cp ./microsoft.gpg /etc/apt/trusted.gpg.d/
-```
-
-Update the package lists on your device.
-
-```bash
-sudo apt-get update
-```
-
-Install the 1.0.9 release:
-
-```bash
-sudo apt-get install iotedge=1.1* libiothsm-std=1.1*
-```
-
-Next, register the host computer as an IoT Edge device in your IoT Hub instance, using a [connection string](../../iot-edge/how-to-provision-single-device-linux-symmetric.md#register-your-device).
-
-You need to connect the IoT Edge device to your Azure IoT Hub. You need to copy the connection string from the IoT Edge device you created earlier. Alternatively, you can run the below command in the Azure CLI.
-
-```azurecli
-sudo az iot hub device-identity connection-string show --device-id my-edge-device --hub-name test-iot-hub-123
-```
-
-On the host computer open `/etc/iotedge/config.yaml` for editing. Replace `ADD DEVICE CONNECTION STRING HERE` with the connection string. Save and close the file.
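-
-If you prefer to script the edit, the following sketch uses `sed` and assumes the default placeholder text; check your `config.yaml` if the placeholder differs.
-
-```bash
-# Back up config.yaml, then replace the placeholder with your device connection string.
-sudo cp /etc/iotedge/config.yaml /etc/iotedge/config.yaml.bak
-sudo sed -i 's|<ADD DEVICE CONNECTION STRING HERE>|HostName=<iothub-name>.azure-devices.net;DeviceId=<device-name>;SharedAccessKey=<key>|' /etc/iotedge/config.yaml
-```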
-Run this command to restart the IoT Edge service on the host computer.
-
-```bash
-sudo systemctl restart iotedge
-```
-
-Deploy the Spatial Analysis container as an IoT Module on the host computer, either from the [Azure portal](../../iot-edge/how-to-deploy-modules-portal.md) or [Azure CLI](../cognitive-services-apis-create-account-cli.md?tabs=windows). If you're using the portal, set the image URI to the location of your Azure Container Registry.
-
-Use the below steps to deploy the container using the Azure CLI.
-
-#### [Azure VM with GPU](#tab/virtual-machine)
-
-An Azure Virtual Machine with a GPU can also be used to run Spatial Analysis. The example below will use a [NCv3 series VM](../../virtual-machines/ncv3-series.md) that has one v100 GPU.
-
-#### Create the VM
-
-Open the [Create a Virtual Machine](https://portal.azure.com/#create/Microsoft.VirtualMachine) wizard in the Azure portal.
-
-Give your VM a name and select **(US) West US 2** as the region.
-
-> [!IMPORTANT]
-> Be sure to set `Availability Options` to "No infrastructure redundancy required". Refer to the below figure for the complete configuration and the next step for help locating the correct VM size.
--
-To locate the VM size, select "See all sizes" and then view the list for "N-Series" and select **NC6s_v3**, shown below.
--
-Next, create the VM. Once it's created, navigate to the VM resource in the Azure portal and select `Extensions` from the left pane. Select **Add** to bring up the extensions window with all available extensions. Search for and select `NVIDIA GPU Driver Extension`, select **Create**, and complete the wizard.
-
-Once the extension is successfully applied, navigate to the VM main page in the Azure portal and select `Connect`. The VM can be accessed either through SSH or RDP. RDP will be helpful as it will enable viewing of the visualizer window (explained later). Configure the RDP access by following [these steps](../../virtual-machines/linux/use-remote-desktop.md) and opening a remote desktop connection to the VM.
-
-### Verify Graphics Drivers are Installed
-
-Run the following command to verify that the graphics drivers have been successfully installed.
-
-```bash
-nvidia-smi
-```
-
-You should see the following output.
-
-![NVIDIA driver output](media/spatial-analysis/nvidia-driver-output.png)
-
-### Install Docker CE and nvidia-docker2 on the VM
-
-Run the following commands one at a time in order to install Docker CE and nvidia-docker2 on the VM.
-
-Install Docker CE on the host computer.
-
-```bash
-sudo apt-get update
-```
-```bash
-sudo apt-get install -y apt-transport-https ca-certificates curl gnupg-agent software-properties-common
-```
-```bash
-curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -
-```
-```bash
-sudo add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable"
-```
-```bash
-sudo apt-get update
-```
-```bash
-sudo apt-get install -y docker-ce docker-ce-cli containerd.io
-```
--
-Install the *nvidia-docker-2* software package.
-
-```bash
-DISTRIBUTION=$(. /etc/os-release;echo $ID$VERSION_ID)
-```
-```bash
-curl -s -L https://nvidia.github.io/nvidia-docker/gpgkey | sudo apt-key add -
-```
-```bash
-curl -s -L https://nvidia.github.io/nvidia-docker/$DISTRIBUTION/nvidia-docker.list | sudo tee /etc/apt/sources.list.d/nvidia-docker.list
-```
-```bash
-sudo apt-get update
-```
-```bash
-sudo apt-get install -y docker-ce nvidia-docker2
-```
-```bash
-sudo systemctl restart docker
-```
-
-Now that you have set up and configured your VM, follow the steps below to configure Azure IoT Edge.
-
-## Configure Azure IoT Edge on the VM
-
-To deploy the Spatial Analysis container on the VM, create an instance of an [Azure IoT Hub](../../iot-hub/iot-hub-create-through-portal.md) service using the Standard (S1) or Free (F0) pricing tier.
-
-Use the Azure CLI to create an instance of Azure IoT Hub. Replace the parameters where appropriate. Alternatively, you can create the Azure IoT Hub on the [Azure portal](https://portal.azure.com/).
-
-```bash
-curl -sL https://aka.ms/InstallAzureCLIDeb | sudo bash
-```
-```azurecli
-sudo az login
-```
-```azurecli
-sudo az account set --subscription "<name or ID of Azure Subscription>"
-```
-```azurecli
-sudo az group create --name "<resource-group-name>" --location "<your-region>"
-```
-See [Region Support](https://azure.microsoft.com/global-infrastructure/services/?products=cognitive-services) for available regions.
-```azurecli
-sudo az iot hub create --name "<iothub-name>" --sku S1 --resource-group "<resource-group-name>"
-```
-```azurecli
-sudo az iot hub device-identity create --hub-name "<iothub-name>" --device-id "<device-name>" --edge-enabled
-```
-
-You'll need to install [Azure IoT Edge](../../iot-edge/how-to-provision-single-device-linux-symmetric.md) version 1.0.9. Follow these steps to download the correct version:
-
-Ubuntu Server 18.04:
-```bash
-curl https://packages.microsoft.com/config/ubuntu/18.04/multiarch/prod.list > ./microsoft-prod.list
-```
-
-Copy the generated list.
-```bash
-sudo cp ./microsoft-prod.list /etc/apt/sources.list.d/
-```
-
-Install the Microsoft GPG public key.
-
-```bash
-curl https://packages.microsoft.com/keys/microsoft.asc | gpg --dearmor > microsoft.gpg
-```
-```bash
-sudo cp ./microsoft.gpg /etc/apt/trusted.gpg.d/
-```
-
-Update the package lists on your device.
-
-```bash
-sudo apt-get update
-```
-
-Install the 1.0.9 release:
-
-```bash
-sudo apt-get install iotedge=1.1* libiothsm-std=1.1*
-```
-
-Next, register the VM as an IoT Edge device in your IoT Hub instance, using a [connection string](../../iot-edge/how-to-provision-single-device-linux-symmetric.md#register-your-device).
-
-You need to connect the IoT Edge device to your Azure IoT Hub. You need to copy the connection string from the IoT Edge device you created earlier. Alternatively, you can run the below command in the Azure CLI.
-
-```azurecli
-sudo az iot hub device-identity connection-string show --device-id my-edge-device --hub-name test-iot-hub-123
-```
-
-On the VM open `/etc/iotedge/config.yaml` for editing. Replace `ADD DEVICE CONNECTION STRING HERE` with the connection string. Save and close the file.
-Run this command to restart the IoT Edge service on the VM.
-
-```bash
-sudo systemctl restart iotedge
-```
-
-Deploy the Spatial Analysis container as an IoT Module on the VM, either from the [Azure portal](../../iot-edge/how-to-deploy-modules-portal.md) or [Azure CLI](../cognitive-services-apis-create-account-cli.md?tabs=windows). If you're using the portal, set the image URI to the location of your Azure Container Registry.
-
-Use the below steps to deploy the container using the Azure CLI.
---
-### IoT Deployment manifest
-
-To streamline container deployment on multiple host computers, you can create a deployment manifest file to specify the container creation options, and environment variables. You can find an example of a deployment manifest [for Azure Stack Edge](https://go.microsoft.com/fwlink/?linkid=2142179), [other desktop machines](https://go.microsoft.com/fwlink/?linkid=2152270), and [Azure VM with GPU](https://go.microsoft.com/fwlink/?linkid=2152189) on GitHub.
-
-The following table shows the various Environment Variables used by the IoT Edge Module. You can also set them in the deployment manifest linked above, using the `env` attribute in `spatialanalysis`:
-
-| Setting Name | Value | Description|
-||||
-| ARCHON_LOG_LEVEL | Info; Verbose | Logging level, select one of the two values|
-| ARCHON_SHARED_BUFFER_LIMIT | 377487360 | Do not modify|
-| ARCHON_PERF_MARKER| false| Set this to true for performance logging, otherwise this should be false|
-| ARCHON_NODES_LOG_LEVEL | Info; Verbose | Logging level, select one of the two values|
-| OMP_WAIT_POLICY | PASSIVE | Do not modify|
-| QT_X11_NO_MITSHM | 1 | Do not modify|
-| APIKEY | your API Key| Collect this value from Azure portal from your Computer Vision resource. You can find it in the **Key and endpoint** section for your resource. |
-| BILLING | your Endpoint URI| Collect this value from Azure portal from your Computer Vision resource. You can find it in the **Key and endpoint** section for your resource.|
-| EULA | accept | This value needs to be set to *accept* for the container to run |
-| DISPLAY | :1 | This value needs to be same as the output of `echo $DISPLAY` on the host computer. Azure Stack Edge devices do not have a display. This setting is not applicable|
-| KEY_ENV | ASE Encryption key | Add this environment variable if Video_URL is an obfuscated string |
-| IV_ENV | Initialization vector | Add this environment variable if Video_URL is an obfuscated string|
-
-> [!IMPORTANT]
-> The `Eula`, `Billing`, and `ApiKey` options must be specified to run the container; otherwise, the container won't start. For more information, see [Billing](#billing).
-
-Once you update the Deployment manifest for [Azure Stack Edge devices](https://go.microsoft.com/fwlink/?linkid=2142179), [a desktop machine](https://go.microsoft.com/fwlink/?linkid=2152270) or [Azure VM with GPU](https://go.microsoft.com/fwlink/?linkid=2152189) with your own settings and selection of operations, you can use the below [Azure CLI](../cognitive-services-apis-create-account-cli.md?tabs=windows) command to deploy the container on the host computer, as an IoT Edge Module.
-
-```azurecli
-sudo az login
-sudo az extension add --name azure-iot
-sudo az iot edge set-modules --hub-name "<iothub-name>" --device-id "<device-name>" --content DeploymentManifest.json --subscription "<name or ID of Azure Subscription>"
-```
-
-|Parameter |Description |
-|||
-| `--hub-name` | Your Azure IoT Hub name. |
-| `--content` | The name of the deployment file. |
-| `--target-condition` | Your IoT Edge device name for the host computer. |
-| `--subscription` | Subscription ID or name. |
-
-This command will start the deployment. Navigate to the page of your Azure IoT Hub instance in the Azure portal to see the deployment status. The status may show as *417 – The device's deployment configuration is not set* until the device finishes downloading the container images and starts running.
-
-## Validate that the deployment is successful
-
-There are several ways to validate that the container is running. Locate the *Runtime Status* in the **IoT Edge Module Settings** for the Spatial Analysis module in your Azure IoT Hub instance on the Azure portal. Validate that the **Desired Value** and **Reported Value** for the *Runtime Status* is *Running*.
-
-![Example deployment verification](./media/spatial-analysis/deployment-verification.png)
-
-Once the deployment is complete and the container is running, the **host computer** will start sending events to the Azure IoT Hub. If you used the `.debug` version of the operations, you'll see a visualizer window for each camera you configured in the deployment manifest. You can now define the lines and zones you want to monitor in the deployment manifest and follow the instructions to deploy again.
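-
-Another quick check, assuming you have shell access to the host computer, is to list the IoT Edge modules directly on the device:
-
-```bash
-# List IoT Edge modules and their status; the Spatial Analysis module should show as "running".
-sudo iotedge list
-```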
-
-## Configure the operations performed by Spatial Analysis
-
-You'll need to use [Spatial Analysis operations](spatial-analysis-operations.md) to configure the container to use connected cameras, configure the operations, and more. For each camera device you configure, the operations for Spatial Analysis will generate an output stream of JSON messages, sent to your instance of Azure IoT Hub.
-
-## Use the output generated by the container
-
-If you want to start consuming the output generated by the container, see the following articles:
-
-* Use the Azure Event Hubs SDK for your chosen programming language to connect to the Azure IoT Hub endpoint and receive the events. For more information, see [Read device-to-cloud messages from the built-in endpoint](../../iot-hub/iot-hub-devguide-messages-read-builtin.md).
-* Set up Message Routing on your Azure IoT Hub to send the events to other endpoints or save the events to Azure Blob Storage, etc. See [IoT Hub Message Routing](../../iot-hub/iot-hub-devguide-messages-d2c.md) for more information.
-
-## Troubleshooting
-
-If you encounter issues when starting or running the container, see [Telemetry and troubleshooting](spatial-analysis-logging.md) for steps to resolve common issues. That article also contains information on generating and collecting logs and on collecting system health telemetry.
--
-## Billing
-
-The Spatial Analysis container sends billing information to Azure, using a Computer Vision resource on your Azure account. The use of Spatial Analysis in public preview is currently free.
-
-Azure Cognitive Services containers aren't licensed to run without being connected to the metering / billing endpoint. You must always enable the containers to communicate billing information with the billing endpoint. Cognitive Services containers don't send customer data, such as the video or image that's being analyzed, to Microsoft.
--
-## Summary
-
-In this article, you learned the concepts and workflow for downloading, installing, and running the Spatial Analysis container. In summary:
-
-* Spatial Analysis is a Linux container for Docker.
-* Container images are downloaded from the Microsoft Container Registry.
-* Container images run as IoT Modules in Azure IoT Edge.
-* Configure the container and deploy it on a host machine.
-
-## Next steps
-
-* [Deploy a People Counting web application](spatial-analysis-web-app.md)
-* [Configure Spatial Analysis operations](spatial-analysis-operations.md)
-* [Logging and troubleshooting](spatial-analysis-logging.md)
-* [Camera placement guide](spatial-analysis-camera-placement.md)
-* [Zone and line placement guide](spatial-analysis-zone-line-placement.md)
cognitive-services Spatial Analysis Local https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Computer-vision/spatial-analysis-local.md
- Title: Run Spatial Analysis on a local video file-
-description: Use this guide to learn how to run Spatial Analysis on a recorded local video.
------ Previously updated : 06/28/2022---
-# Run Spatial Analysis on a local video file
-
-You can use Spatial Analysis with either recorded or live video. Use this guide to learn how to run Spatial Analysis on a recorded local video.
-
-## Prerequisites
-
-* Set up a Spatial Analysis container by following the steps in [Set up the host machine and run the container](spatial-analysis-container.md).
-
-## Analyze a video file
-
-To use Spatial Analysis for recorded video, record a video file and save it as a .mp4 file. Then take the following steps:
-
-1. Create a blob storage account in Azure, or use an existing one. Then update the following blob storage settings in the Azure portal:
- 1. Change **Secure transfer required** to **Disabled**
- 1. Change **Allow Blob public access** to **Enabled**
-
-1. Navigate to the **Container** section, and either create a new container or use an existing one. Then upload the video file to the container. Expand the file settings for the uploaded file, and select **Generate SAS**. Be sure to set the **Expiry Date** long enough to cover the testing period. Set **Allowed Protocols** to *HTTP* (*HTTPS* is not supported).
-
-1. Select **Generate SAS Token and URL** and copy the Blob SAS URL. Replace the starting `https` with `http` and test the URL in a browser that supports video playback.
-
-1. Replace `VIDEO_URL` in the deployment manifest for your [Azure Stack Edge device](https://go.microsoft.com/fwlink/?linkid=2142179), [desktop machine](https://go.microsoft.com/fwlink/?linkid=2152270), or [Azure VM with GPU](https://go.microsoft.com/fwlink/?linkid=2152189) with the URL you created, for all of the graphs. Set `VIDEO_IS_LIVE` to `false`, and redeploy the Spatial Analysis container with the updated manifest. See the example below.
-
-The Spatial Analysis module will start consuming the video file and will automatically replay it continuously.
--
-```json
-"zonecrossing": {
- "operationId" : "cognitiveservices.vision.spatialanalysis-personcrossingpolygon",
- "version": 1,
- "enabled": true,
- "parameters": {
- "VIDEO_URL": "Replace http url here",
- "VIDEO_SOURCE_ID": "personcountgraph",
- "VIDEO_IS_LIVE": false,
- "VIDEO_DECODE_GPU_INDEX": 0,
- "DETECTOR_NODE_CONFIG": "{ \"gpu_index\": 0, \"do_calibration\": true }",
- "SPACEANALYTICS_CONFIG": "{\"zones\":[{\"name\":\"queue\",\"polygon\":[[0.3,0.3],[0.3,0.9],[0.6,0.9],[0.6,0.3],[0.3,0.3]], \"events\": [{\"type\": \"zonecrossing\", \"config\": {\"threshold\": 16.0, \"focus\": \"footprint\"}}]}]}"
- }
- },
-
-```
-
-## Next steps
-
-* [Deploy a People Counting web application](spatial-analysis-web-app.md)
-* [Configure Spatial Analysis operations](spatial-analysis-operations.md)
-* [Logging and troubleshooting](spatial-analysis-logging.md)
-* [Camera placement guide](spatial-analysis-camera-placement.md)
-* [Zone and line placement guide](spatial-analysis-zone-line-placement.md)
cognitive-services Spatial Analysis Logging https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Computer-vision/spatial-analysis-logging.md
- Title: Telemetry and logging for Spatial Analysis containers-
-description: Spatial Analysis provides each container with a common configuration framework for insights, logging, and security settings.
------ Previously updated : 06/08/2021----
-# Telemetry and troubleshooting
-
-Spatial Analysis includes a set of features to monitor the health of the system and help with diagnosing issues.
-
-## Enable visualizations
-
-To enable a visualization of AI Insight events in a video frame, you need to use the `.debug` version of a [Spatial Analysis operation](spatial-analysis-operations.md) on a desktop machine or Azure VM. The visualization is not possible on Azure Stack Edge devices. There are four debug operations available.
-
-If your device is a local desktop machine or an Azure GPU VM (with remote desktop enabled), then you can switch to the `.debug` version of any operation and visualize the output.
-
-1. Open the desktop either locally or by using a remote desktop client on the host computer running Spatial Analysis.
-2. In the terminal run `xhost +`
-3. Update the [deployment manifest](https://github.com/Azure-Samples/cognitive-services-sample-data-files/blob/master/ComputerVision/spatial-analysis/DeploymentManifest_for_non_ASE_devices.json) under the `spaceanalytics` module with the value of the `DISPLAY` environment variable. You can find its value by running `echo $DISPLAY` in the terminal on the host computer.
- ```
- "env": {
- "DISPLAY": {
- "value": ":11"
- }
- }
- ```
-4. Update the graph in the deployment manifest you want to run in debug mode. In the example below, we update the operationId to cognitiveservices.vision.spatialanalysis-personcrossingpolygon.debug. A new parameter `VISUALIZER_NODE_CONFIG` is required to enable the visualizer window. All operations are available in debug flavor. When using shared nodes, use the cognitiveservices.vision.spatialanalysis.debug operation and add `VISUALIZER_NODE_CONFIG` to the instance parameters.
-
- ```
- "zonecrossing": {
- "operationId" : "cognitiveservices.vision.spatialanalysis-personcrossingpolygon.debug",
- "version": 1,
- "enabled": true,
- "parameters": {
- "VIDEO_URL": "Replace http url here",
- "VIDEO_SOURCE_ID": "zonecrossingcamera",
- "VIDEO_IS_LIVE": false,
- "VIDEO_DECODE_GPU_INDEX": 0,
- "DETECTOR_NODE_CONFIG": "{ \"gpu_index\": 0 }",
- "CAMERACALIBRATOR_NODE_CONFIG": "{ \"gpu_index\": 0}",
- "VISUALIZER_NODE_CONFIG": "{ \"show_debug_video\": true }",
- "SPACEANALYTICS_CONFIG": "{\"zones\":[{\"name\":\"queue\",\"polygon\":[[0.3,0.3],[0.3,0.9],[0.6,0.9],[0.6,0.3],[0.3,0.3]], \"threshold\":35.0}]}"
- }
- }
- ```
-
-5. Redeploy and you will see the visualizer window on the host computer.
-6. After the deployment has completed, you might have to copy the `.Xauthority` file from the host computer to the container and restart it. In the sample below, `peopleanalytics` is the name of the container on the host computer.
-
- ```bash
- sudo docker cp $XAUTHORITY peopleanalytics:/root/.Xauthority
- sudo docker stop peopleanalytics
- sudo docker start peopleanalytics
- xhost +
- ```
-
-## Collect system health telemetry
-
-Telegraf is an open source image that works with Spatial Analysis, and is available in the Microsoft Container Registry. It takes the following inputs and sends them to Azure Monitor. The telegraf module can be built with desired custom inputs and outputs. The telegraf module configuration in Spatial Analysis is part of the deployment manifest (linked above). This module is optional and can be removed from the manifest if you don't need it.
-
-Inputs:
-1. Spatial Analysis Metrics
-2. Disk Metrics
-3. CPU Metrics
-4. Docker Metrics
-5. GPU Metrics
-
-Outputs:
-1. Azure Monitor
-
-The supplied Spatial Analysis telegraf module will publish all the telemetry data emitted by the Spatial Analysis container to Azure Monitor. See the [Azure Monitor](../../azure-monitor/overview.md) overview for information on adding Azure Monitor to your subscription.
-
-After setting up Azure Monitor, you will need to create credentials that enable the module to send telemetry. You can use the Azure portal to create a new Service Principal, or use the Azure CLI command below to create one.
-
-> [!NOTE]
-> This command requires you to have Owner privileges on the subscription.
-
-```azurecli
-# Find your Azure IoT Hub resource ID by running this command. The resource ID should start with something like
-# "/subscriptions/b60d6458-1234-4be4-9885-c7e73af9ced8/resourceGroups/..."
-az iot hub list
-
-# Create a Service Principal with `Monitoring Metrics Publisher` role in the IoTHub resource:
-# Save the output from this command. The values will be used in the deployment manifest. The password won't be shown again so make sure to write it down
-az ad sp create-for-rbac --role="Monitoring Metrics Publisher" --name "<principal name>" --scopes="<resource ID of IoT Hub>"
-```
-
-In the deployment manifest for your [Azure Stack Edge device](https://go.microsoft.com/fwlink/?linkid=2142179), [desktop machine](https://go.microsoft.com/fwlink/?linkid=2152270), or [Azure VM with GPU](https://go.microsoft.com/fwlink/?linkid=2152189), look for the *telegraf* module, and replace the following values with the Service Principal information from the previous step and redeploy.
-
-```json
-
-"telegraf": {
-  "settings": {
-  "image": "mcr.microsoft.com/azure-cognitive-services/vision/spatial-analysis/telegraf:1.0",
-  "createOptions": "{\"HostConfig\":{\"Runtime\":\"nvidia\",\"NetworkMode\":\"azure-iot-edge\",\"Memory\":33554432,\"Binds\":[\"/var/run/docker.sock:/var/run/docker.sock\"]}}"
-},
-"type": "docker",
-"env": {
- "AZURE_TENANT_ID": {
- "value": "<Tenant Id>"
- },
- "AZURE_CLIENT_ID": {
- "value": "Application Id"
- },
- "AZURE_CLIENT_SECRET": {
- "value": "<Password>"
- },
- "region": {
- "value": "<Region>"
- },
- "resource_id": {
- "value": "/subscriptions/{subscriptionId}/resourceGroups/{resoureGroupName}/providers/Microsoft.Devices/IotHubs/{IotHub}"
- },
-...
-```
-
-Once the telegraf module is deployed, the reported metrics can be accessed either through the Azure Monitor service, or by selecting **Monitoring** in the IoT Hub on the Azure portal.
--
-### System health events
-
-| Event Name | Description |
-|--|-|
-| archon_exit | Sent when a user changes the Spatial Analysis module status from *running* to *stopped*. |
-| archon_error | Sent when any of the processes inside the container crash. This is a critical error. |
-| InputRate | The rate at which the graph processes video input. Reported every 5 minutes. |
-| OutputRate | The rate at which the graph outputs AI insights. Reported every 5 minutes. |
-| archon_allGraphsStarted | Sent when all graphs have finished starting up. |
-| archon_configchange | Sent when a graph configuration has changed. |
-| archon_graphCreationFailed | Sent when the graph with the reported `graphId` fails to start. |
-| archon_graphCreationSuccess | Sent when the graph with the reported `graphId` starts successfully. |
-| archon_graphCleanup | Sent when the graph with the reported `graphId` cleans up and exits. |
-| archon_graphHeartbeat | Heartbeat sent every minute for every graph of a skill. |
-| archon_apiKeyAuthFail | Sent when the Computer Vision resource key fails to authenticate the container for more than 24 hours, due to the following reasons: Out of Quota, Invalid, Offline. |
-| VideoIngesterHeartbeat | Sent every hour to indicate that video is streamed from the Video source, with the number of errors in that hour. Reported for each graph. |
-| VideoIngesterState | Reports *Stopped* or *Started* for video streaming. Reported for each graph. |
-
-## Troubleshooting an IoT Edge Device
-
-You can use `iotedge` command line tool to check the status and logs of the running modules. For example:
-* `iotedge list`: Reports a list of running modules.
- You can further check for errors with `iotedge logs edgeAgent`. If `iotedge` gets stuck, you can try restarting it with `iotedge restart edgeAgent`.
-* `iotedge logs <module-name>`
-* `iotedge restart <module-name>` to restart a specific module
-
-## Collect log files with the diagnostics container
-
-Spatial Analysis generates Docker debugging logs that you can use to diagnose runtime issues, or include in support tickets. The Spatial Analysis diagnostics module is available in the Microsoft Container Registry for you to download. In the manifest deployment file for your [Azure Stack Edge Device](https://go.microsoft.com/fwlink/?linkid=2142179), [desktop machine](https://go.microsoft.com/fwlink/?linkid=2152270), or [Azure VM with GPU](https://go.microsoft.com/fwlink/?linkid=2152189) look for the *diagnostics* module.
-
-In the "env" section add the following configuration:
-
-```json
-"diagnostics": {  
-  "settings": {
-  "image": "mcr.microsoft.com/azure-cognitive-services/vision/spatial-analysis/diagnostics:1.0",
-  "createOptions": "{\"HostConfig\":{\"Mounts\":[{\"Target\":\"/usr/bin/docker\",\"Source\":\"/home/data/docker\",\"Type\":\"bind\"},{\"Target\":\"/var/run\",\"Source\":\"/run\",\"Type\":\"bind\"}],\"LogConfig\":{\"Config\":{\"max-size\":\"500m\"}}}}"
-  }
-```
-
-To optimize logs uploaded to a remote endpoint, such as Azure Blob Storage, we recommend maintaining a small file size. See the example below for the recommended Docker logs configuration.
-
-```json
-{
- "HostConfig": {
- "LogConfig": {
- "Config": {
- "max-size": "500m",
- "max-file": "1000"
- }
- }
- }
-}
-```
-
-### Configure the log level
-
-Log level configuration allows you to control the verbosity of the generated logs. Supported log levels are: `none`, `verbose`, `info`, `warning`, and `error`. The default log verbose level for both nodes and platform is `info`.
-
-Log levels can be modified globally by setting the `ARCHON_LOG_LEVEL` environment variable to one of the allowed values.
-It can also be set through the IoT Edge Module Twin document either globally, for all deployed skills, or for every specific skill by setting the values for `platformLogLevel` and `nodesLogLevel` as shown below.
-
-```json
-{
- "version": 1,
- "properties": {
- "desired": {
- "globalSettings": {
- "platformLogLevel": "verbose"
- },
- "graphs": {
- "samplegraph": {
- "nodesLogLevel": "verbose",
- "platformLogLevel": "verbose"
- }
- }
- }
- }
-}
-```
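-
-As a minimal sketch of the environment-variable approach (the log level shown is illustrative), the same global setting can be applied in a module's `env` section of the deployment manifest:
-
-```json
-"env": {
-  "ARCHON_LOG_LEVEL": {
-    "value": "verbose"
-  }
-}
-```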
-
-### Collecting Logs
-
-> [!NOTE]
-> The `diagnostics` module does not affect the logging content; it only assists in collecting, filtering, and uploading existing logs.
-> You must have Docker API version 1.40 or higher to use this module.
-
-The sample deployment manifest file for your [Azure Stack Edge device](https://go.microsoft.com/fwlink/?linkid=2142179), [desktop machine](https://go.microsoft.com/fwlink/?linkid=2152270), or [Azure VM with GPU](https://go.microsoft.com/fwlink/?linkid=2152189) includes a module named `diagnostics` that collects and uploads logs. This module is disabled by default and should be enabled through the IoT Edge module configuration when you need to access logs.
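-
-The exact fields depend on your manifest, but in a standard IoT Edge deployment manifest a module is typically enabled by setting its `status` to `running`. A hedged sketch for the `diagnostics` module (image name taken from the configuration above; the other values are the usual IoT Edge module defaults):
-
-```json
-"diagnostics": {
-  "version": "1.0",
-  "type": "docker",
-  "status": "running",
-  "restartPolicy": "always",
-  "settings": {
-    "image": "mcr.microsoft.com/azure-cognitive-services/vision/spatial-analysis/diagnostics:1.0"
-  }
-}
-```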
-
-The `diagnostics` collection is on-demand, controlled via an IoT Edge direct method, and can send logs to Azure Blob Storage.
-
-### Configure diagnostics upload targets
-
-From the IoT Edge portal, select your device and then the **diagnostics** module. In the sample Deployment manifest file for your [Azure Stack Edge device](https://go.microsoft.com/fwlink/?linkid=2142179), [desktop machines](https://go.microsoft.com/fwlink/?linkid=2152270), or [Azure VM with GPU](https://go.microsoft.com/fwlink/?linkid=2152189) look for the **Environment Variables** section for diagnostics, named `env`, and add the following information:
-
-**Configure Upload to Azure Blob Storage**
-
-1. Create your own Azure Blob Storage account, if you haven't already.
-2. Get the **Connection String** for your storage account from the Azure portal. It will be located in **Access Keys**.
-3. Spatial Analysis logs will be automatically uploaded into a Blob Storage container named *rtcvlogs* with the following file name format: `{CONTAINER_NAME}/{START_TIME}-{END_TIME}-{QUERY_TIME}.log`.
-
-```json
-"env":{
- "IOTEDGE_WORKLOADURI":"fd://iotedge.socket",
- "AZURE_STORAGE_CONNECTION_STRING":"XXXXXX", //from the Azure Blob Storage account
- "ARCHON_LOG_LEVEL":"info"
-}
-```
-
-### Uploading Spatial Analysis logs
-
-Logs are uploaded on-demand with the `getRTCVLogs` IoT Edge method, in the `diagnostics` module.
--
-1. Go to your IoT Hub portal page, select **Edge Devices**, then select your device and your diagnostics module.
-2. Go to the details page of the module and click on the ***direct method*** tab.
-3. Type `getRTCVLogs` in **Method Name**, and a JSON-format string in the payload. You can enter `{}`, which is an empty payload.
-4. Set the connection and method timeouts, and click **Invoke Method**.
-5. Select your target container, and build a payload json string using the parameters described in the **Logging syntax** section. Click **Invoke Method** to perform the request.
-
->[!NOTE]
-> Invoking the `getRTCVLogs` method with an empty payload will return a list of all containers deployed on the device. The method name is case sensitive. You will get a 501 error if an incorrect method name is given.
-
-![getRTCVLogs Direct method page](./media/spatial-analysis/direct-log-collection.png)
-
-
-### Logging syntax
-
-The below table lists the parameters you can use when querying logs.
-
-| Keyword | Description | Default Value |
-|--|--|--|
-| StartTime | Desired logs start time, in milliseconds UTC. | `-1`, the start of the container's runtime. When `[-1,-1]` is used as the time range, the API returns logs from the last one hour.|
-| EndTime | Desired logs end time, in milliseconds UTC. | `-1`, the current time. When `[-1,-1]` is used as the time range, the API returns logs from the last one hour. |
-| ContainerId | Target container for fetching logs.| `null`, when there is no container ID. The API returns information for all available containers, with their IDs.|
-| DoPost | Perform the upload operation. When this is set to `false`, it performs the requested operation and returns the upload size without performing the upload. When set to `true`, it initiates the asynchronous upload of the selected logs. | `false`, do not upload.|
-| Throttle | Indicates how many lines of logs to upload per batch. | `1000`. Use this parameter to adjust post speed. |
-| Filters | Filters the logs to be uploaded. | `null`. Filters can be specified as key-value pairs based on the Spatial Analysis logs structure: `[UTC, LocalTime, LOGLEVEL, PID, CLASS, DATA]`. For example: `{"TimeFilter":[-1,1573255761112]}, {"CLASS":["myNode"]}`|
-
-The following table lists the attributes in the query response.
-
-| Keyword | Description|
-|--|--|
-|DoPost| Either *true* or *false*. Indicates if logs have been uploaded or not. When you choose not to upload logs, the API returns information ***synchronously***. When you choose to upload logs, the API returns 200, if the request is valid, and starts uploading logs ***asynchronously***.|
-|TimeFilter| Time filter applied to the logs.|
-|ValueFilters| Keywords filters applied to the logs. |
-|TimeStamp| Method execution start time. |
-|ContainerId| Target container ID. |
-|FetchCounter| Total number of log lines. |
-|FetchSizeInByte| Total amount of log data in bytes. |
-|MatchCounter| Valid number of log lines. |
-|MatchSizeInByte| Valid amount of log data in bytes. |
-|FilterCount| Total number of log lines after applying filter. |
-|FilterSizeInByte| Total amount of log data in bytes after applying filter. |
-|FetchLogsDurationInMiliSec| Fetch operation duration. |
-|PaseLogsDurationInMiliSec| Filter operation duration. |
-|PostLogsDurationInMiliSec| Post operation duration. |
-
-#### Example request
-
-```json
-{
- "StartTime": -1,
- "EndTime": -1,
- "ContainerId": "5fa17e4d8056e8d16a5a998318716a77becc01b36fde25b3de9fde98a64bf29b",
- "DoPost": false,
- "Filters": null
-}
-```
-
-#### Example response
-
-```json
-{
- "status": 200,
- "payload": {
- "DoPost": false,
- "TimeFilter": [-1, 1581310339411],
- "ValueFilters": {},
- "Metas": {
- "TimeStamp": "2020-02-10T04:52:19.4365389+00:00",
- "ContainerId": "5fa17e4d8056e8d16a5a998318716a77becc01b36fde25b3de9fde98a64bf29b",
- "FetchCounter": 61,
- "FetchSizeInByte": 20470,
- "MatchCounter": 61,
- "MatchSizeInByte": 20470,
- "FilterCount": 61,
- "FilterSizeInByte": 20470,
- "FetchLogsDurationInMiliSec": 0,
- "PaseLogsDurationInMiliSec": 0,
- "PostLogsDurationInMiliSec": 0
- }
- }
-}
-```
-
-Check the fetched logs' line counts, times, and sizes. If those values look good, set ***DoPost*** to `true` to push the logs with the same filters to the destination.
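-
-For example, a hypothetical payload (the container ID and class filter are placeholders) that uploads only the log lines matching a `CLASS` filter, using the parameters described in the **Logging syntax** section:
-
-```json
-{
-  "StartTime": -1,
-  "EndTime": -1,
-  "ContainerId": "<container-id-from-the-initial-response>",
-  "DoPost": true,
-  "Throttle": 1000,
-  "Filters": {"CLASS": ["myNode"]}
-}
-```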
-
-You can export logs from Azure Blob Storage when troubleshooting issues.
-
-## Troubleshooting the Azure Stack Edge device
-
-The following section is provided for help with debugging and verification of the status of your Azure Stack Edge device.
-
-### Access the Kubernetes API endpoint
-
-1. In the local UI of your device, go to the **Devices** page.
-2. Under **Device endpoints**, copy the Kubernetes API service endpoint. This endpoint is a string in the following format: `https://compute..[device-IP-address]`.
-3. Save the endpoint string. You will use this later when configuring `kubectl` to access the Kubernetes cluster.
-
-### Connect to PowerShell interface
-
-After the Kubernetes cluster is created, you can manage the applications via this cluster. To do so, connect remotely to the PowerShell interface of the device from a client machine. Depending on the operating system of your client, the procedure to remotely connect to the device may differ. The following steps are for a Windows client running PowerShell.
-
-> [!TIP]
-> * Before you begin, make sure that your Windows client is running Windows PowerShell 5.0 or later.
-> * PowerShell is also [available on Linux](/powershell/scripting/install/installing-powershell-core-on-linux).
-
-1. Run a Windows PowerShell session as an Administrator.
- 1. Make sure that the Windows Remote Management service is running on your client. At the command prompt, type `winrm quickconfig`.
-
-2. Assign a variable for the device IP address. For example, `$ip = "<device-ip-address>"`.
-
-3. Use the following command to add the IP address of your device to the client's trusted hosts list.
-
- ```powershell
- Set-Item WSMan:\localhost\Client\TrustedHosts $ip -Concatenate -Force
- ```
-
-4. Start a Windows PowerShell session on the device.
-
- ```powershell
- Enter-PSSession -ComputerName $ip -Credential $ip\EdgeUser -ConfigurationName Minishell
- ```
-
-5. Provide the password when prompted. Use the same password that is used to sign into the local web interface. The default local web interface password is `Password1`.
-
-### Access the Kubernetes cluster
-
-After the Kubernetes cluster is created, you can use the `kubectl` command line tool to access the cluster.
-
-1. Create a new namespace.
-
- ```powershell
- New-HcsKubernetesNamespace -Namespace "<namespace-name>"
- ```
-
-2. Create a user and get a config file. This command will output configuration information for the Kubernetes cluster. Copy this information and save it in a file named *config*. Do not save the file with a file extension.
-
- ```powershell
- New-HcsKubernetesUser -UserName "<user-name>"
- ```
-
-3. Add the *config* file to the *.kube* folder in your user profile on the local machine.
-
-4. Associate the namespace with the user you created.
-
- ```powershell
- Grant-HcsKubernetesNamespaceAccess -Namespace "<namespace-name>" -UserName "<user-name>"
- ```
-
-5. Install `kubectl` on your Windows client using the following command:
-
- ```powershell
- curl https://storage.googleapis.com/kubernetes-release/release/v1.15.2/bin/windows/amd64/kubectl.exe -o kubectl.exe
- ```
-
-6. Add a DNS entry to the hosts file on your system.
- 1. Run Notepad as administrator and open the *hosts* file located at `C:\windows\system32\drivers\etc\hosts`.
- 2. Create an entry in the hosts file with the device IP address and DNS domain you got from the **Device** page in the local UI. The endpoint you should use will look similar to: `https://compute.asedevice.microsoftdatabox.com/10.100.10.10`.
-
-7. Verify you can connect to the Kubernetes pods.
-
- ```powershell
- kubectl get pods -n "iotedge"
- ```
-
-To get container logs, run the following command:
-
-```powershell
-kubectl logs <pod-name> -n <namespace> --all-containers
-```
-
-### Useful commands
-
-|Command |Description |
-|||
-|`Get-HcsKubernetesUserConfig -AseUser` | Generates a Kubernetes configuration file. When using the command, copy the information into a file named *config*. Do not save the file with a file extension. |
-| `Get-HcsApplianceInfo` | Returns information about your device. |
-| `Enable-HcsSupportAccess` | Generates access credentials to start a support session. |
--
-## How to file a support ticket for Spatial Analysis
-
-If you need more support in finding a solution to a problem you're having with the Spatial Analysis container, follow these steps to fill out and submit a support ticket. Our team will get back to you with additional guidance.
-
-### Fill out the basics
-Create a new support ticket at the [New support request](https://portal.azure.com/#blade/Microsoft_Azure_Support/HelpAndSupportBlade/newsupportrequest) page. Follow the prompts to fill in the following parameters:
-
-![Support basics](./media/support-ticket-page-1-final.png)
-
-1. Set **Issue Type** to be `Technical`.
-2. Select the subscription that you are utilizing to deploy the Spatial Analysis container.
-3. Select `My services` and select `Cognitive Services` as the service.
-4. Select the resource that you are utilizing to deploy the Spatial Analysis container.
-5. Write a brief description detailing the problem you are facing.
-6. Select `Spatial Analysis` as your problem type.
-7. Select the appropriate subtype from the drop down.
-8. Select **Next: Solutions** to move on to the next page.
-
-### Recommended solutions
-The next stage will offer recommended solutions for the problem type that you selected. These solutions will solve the most common problems, but if they aren't useful for your situation, select **Next: Details** to go to the next step.
-
-### Details
-On this page, add some additional details about the problem you've been facing. Be sure to include as much detail as possible, as this will help our engineers better narrow down the issue. Include your preferred contact method and the severity of the issue so we can contact you appropriately, and select **Next: Review + create** to move to the next step.
-
-### Review and create
-Review the details of your support request to ensure everything is accurate and represents the problem effectively. Once you are ready, select **Create** to send the ticket to our team! You will receive an email confirmation once your ticket is received, and our team will work to get back to you as soon as possible. You can view the status of your ticket in the Azure portal.
-
-## Next steps
-
-* [Deploy a People Counting web application](spatial-analysis-web-app.md)
-* [Configure Spatial Analysis operations](./spatial-analysis-operations.md)
-* [Camera placement guide](spatial-analysis-camera-placement.md)
-* [Zone and line placement guide](spatial-analysis-zone-line-placement.md)
cognitive-services Spatial Analysis Operations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Computer-vision/spatial-analysis-operations.md
- Title: Spatial Analysis operations-
-description: The Spatial Analysis operations.
------ Previously updated : 02/02/2022----
-# Spatial Analysis operations
-
-Spatial Analysis lets you analyze video streams from camera devices in real time. For each camera device you configure, the Spatial Analysis operations will generate an output stream of JSON messages sent to your instance of Azure IoT Hub.
-
-The Spatial Analysis container implements the following operations. You can configure these operations in the deployment manifest of your container.
-
-| Operation Identifier| Description|
-|||
-| cognitiveservices.vision.spatialanalysis-personcount | Counts people in a designated zone in the camera's field of view. The zone must be fully covered by a single camera in order for **PersonCount** to record an accurate total. <br> Emits an initial _personCountEvent_ event and then _personCountEvent_ events when the count changes. |
-| cognitiveservices.vision.spatialanalysis-personcrossingline | Tracks when a person crosses a designated line in the camera's field of view. <br>Emits a _personLineEvent_ event when the person crosses the line and provides directional info.
-| cognitiveservices.vision.spatialanalysis-personcrossingpolygon | Emits a _personZoneEnterExitEvent_ event when a person enters or exits the designated zone and provides directional info with the side of the zone that was crossed. Emits a _personZoneDwellTimeEvent_ when the person exits the zone and provides directional info as well as the number of milliseconds the person spent inside the zone. |
-| cognitiveservices.vision.spatialanalysis-persondistance | Tracks when people violate a minimum-distance rule. <br> Emits a _personDistanceEvent_ periodically with the location of each distance violation. |
-| cognitiveservices.vision.spatialanalysis | The generic operation, which can be used to run all scenarios mentioned above. This option is more useful when you want to run multiple scenarios on the same camera or use system resources (the GPU, for example) more efficiently. |
-
-All of the above operations are also available in the `.debug` version of the service (for example, `cognitiveservices.vision.spatialanalysis-personcount.debug`). Debug has the capability to visualize video frames as they're being processed. You'll need to run `xhost +` on the host computer to enable the visualization of video frames and events.
--
-> [!IMPORTANT]
-> The Computer Vision AI models detect and locate human presence in video footage and output a bounding box around the human body. The AI models do not attempt to discover the identities or demographics of individuals.
-
-## Operation parameters
-
-The following are the parameters required by each of the Spatial Analysis operations.
-
-| Operation parameters| Description|
-|||
-| `Operation ID` | The Operation Identifier from table above.|
-| `enabled` | Boolean: true or false|
-| `VIDEO_URL`| The RTSP URL for the camera device (example: `rtsp://username:password@url`). Spatial Analysis supports H.264-encoded streams through RTSP, HTTP, or MP4. Video_URL can be provided as an obfuscated base64 string value using AES encryption; if the video URL is obfuscated, then `KEY_ENV` and `IV_ENV` need to be provided as environment variables. A sample utility to generate keys and perform the encryption can be found [here](/dotnet/api/system.security.cryptography.aesmanaged). |
-| `VIDEO_SOURCE_ID` | A friendly name for the camera device or video stream. This will be returned with the event JSON output.|
-| `VIDEO_IS_LIVE`| True for camera devices; false for recorded videos.|
-| `VIDEO_DECODE_GPU_INDEX`| Which GPU to decode the video frame. By default it is 0. Should be the same as the `gpu_index` in other node config like `DETECTOR_NODE_CONFIG` and `CAMERACALIBRATOR_NODE_CONFIG`.|
-| `INPUT_VIDEO_WIDTH` | Input video/stream's frame width (for example, 1920). This is an optional field and if provided, the frame will be scaled to this dimension while preserving the aspect ratio.|
-| `DETECTOR_NODE_CONFIG` | JSON indicating which GPU to run the detector node on. It should be in the following format: `"{ \"gpu_index\": 0 }",`|
-| `TRACKER_NODE_CONFIG` | JSON indicating whether to compute speed in the tracker node or not. It should be in the following format: `"{ \"enable_speed\": true }",`|
-| `CAMERA_CONFIG` | JSON indicating the calibrated camera parameters for multiple cameras. If the skill you used requires calibration and you already have the camera parameters, you can use this config to provide them directly. It should be in the following format: `"{ \"cameras\": [{\"source_id\": \"endcomputer.0.persondistancegraph.detector+end_computer1\", \"camera_height\": 13.105561256408691, \"camera_focal_length\": 297.60003662109375, \"camera_tiltup_angle\": 0.9738943576812744}] }"`. The `source_id` is used to identify each camera and can be obtained from the `source_info` of the events we publish. It only takes effect when `do_calibration=false` in `DETECTOR_NODE_CONFIG`.|
-| `CAMERACALIBRATOR_NODE_CONFIG` | JSON indicating which GPU to run the camera calibrator node on and whether to use calibration or not. It should be in the following format: `"{ \"gpu_index\": 0, \"do_calibration\": true, \"enable_orientation\": true}",`|
-| `CALIBRATION_CONFIG` | JSON indicating parameters to control how the camera calibration works. It should be in the following format: `"{\"enable_recalibration\": true, \"quality_check_frequency_seconds\": 86400}",`|
-| `SPACEANALYTICS_CONFIG` | JSON configuration for zone and line as outlined below.|
-| `ENABLE_FACE_MASK_CLASSIFIER` | `True` to enable detecting people wearing face masks in the video stream, `False` to disable it. By default this is disabled. Face mask detection requires input video width parameter to be 1920 `"INPUT_VIDEO_WIDTH": 1920`. The face mask attribute won't be returned if detected people aren't facing the camera or are too far from it. For more information, see the [camera placement](spatial-analysis-camera-placement.md). |
-| `STATIONARY_TARGET_REMOVER_CONFIG` | JSON indicating the parameters for stationary target removal, which adds the capability to learn and ignore long-term stationary false positive targets such as mannequins or people in pictures. Configuration should be in the following format: `"{\"enable\": true, \"bbox_dist_threshold-in_pixels\": 5, \"buffer_length_in_seconds\": 3600, \"filter_ratio\": 0.2 }"`|
-
-### Detector node parameter settings
-
-The following is an example of the `DETECTOR_NODE_CONFIG` parameters for all Spatial Analysis operations.
-
-```json
-{
-"gpu_index": 0,
-"enable_breakpad": false
-}
-```
-
-| Name | Type| Description|
-||||
-| `gpu_index` | string| The GPU index on which this operation will run.|
-| `enable_breakpad`| bool | Indicates whether to enable breakpad, which is used to generate a crash dump for debug use. It is `false` by default. If you set it to `true`, you also need to add `"CapAdd": ["SYS_PTRACE"]` in the `HostConfig` part of the container `createOptions`. By default, the crash dump is uploaded to the [RealTimePersonTracking](https://appcenter.ms/orgs/Microsoft-Organization/apps/RealTimePersonTracking/crashes/errors?version=&appBuild=&period=last90Days&status=&errorType=all&sortCol=lastError&sortDir=desc) AppCenter app. If you want the crash dumps to be uploaded to your own AppCenter app, you can override the environment variable `RTPT_APPCENTER_APP_SECRET` with your app's app secret.|
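-
-As a hedged sketch of what enabling breakpad implies for a module's `createOptions` (based on the `HostConfig` format shown earlier in this guide; keep any other `HostConfig` settings your manifest already defines):
-
-```json
-"createOptions": "{\"HostConfig\":{\"Runtime\":\"nvidia\",\"CapAdd\":[\"SYS_PTRACE\"],\"NetworkMode\":\"azure-iot-edge\"}}"
-```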
-
-### Camera calibration node parameter settings
-The following is an example of the `CAMERACALIBRATOR_NODE_CONFIG` parameters for all spatial analysis operations.
-
-```json
-{
- "gpu_index": 0,
- "do_calibration": true,
- "enable_breakpad": false,
- "enable_orientation": true
-}
-```
-
-| Name | Type | Description |
-||||
-| `do_calibration` | string | Indicates that calibration is turned on. `do_calibration` must be true for **cognitiveservices.vision.spatialanalysis-persondistance** to function properly. `do_calibration` is set by default to `True`. |
-| `enable_breakpad`| bool | Indicates whether to enable breakpad, which is used to generate a crash dump for debug use. It is `false` by default. If you set it to `true`, you also need to add `"CapAdd": ["SYS_PTRACE"]` in the `HostConfig` part of the container `createOptions`. By default, the crash dump is uploaded to the [RealTimePersonTracking](https://appcenter.ms/orgs/Microsoft-Organization/apps/RealTimePersonTracking/crashes/errors?version=&appBuild=&period=last90Days&status=&errorType=all&sortCol=lastError&sortDir=desc) AppCenter app. If you want the crash dumps to be uploaded to your own AppCenter app, you can override the environment variable `RTPT_APPCENTER_APP_SECRET` with your app's app secret.|
-| `enable_orientation` | bool | Indicates whether you want to compute the orientation for the detected people or not. `enable_orientation` is set by default to `True`. |
-
-### Calibration config
-
-This is an example of the `CALIBRATION_CONFIG` parameters for all spatial analysis operations.
-
-```json
-{
- "enable_recalibration": true,
- "calibration_quality_check_frequency_seconds": 86400,
- "calibration_quality_check_sample_collect_frequency_seconds": 300,
- "calibration_quality_check_one_round_sample_collect_num": 10,
- "calibration_quality_check_queue_max_size": 1000,
- "calibration_event_frequency_seconds": -1
-}
-```
-
-| Name | Type| Description|
-||||
-| `enable_recalibration` | bool | Indicates whether automatic recalibration is turned on. Default is `true`.|
-| `calibration_quality_check_frequency_seconds` | int | Minimum number of seconds between each quality check to determine whether or not recalibration is needed. Default is `86400` (24 hours). Only used when `enable_recalibration=True`.|
-| `calibration_quality_check_sample_collect_frequency_seconds` | int | Minimum number of seconds between collecting new data samples for recalibration and quality checking. Default is `300` (5 minutes). Only used when `enable_recalibration=True`.|
-| `calibration_quality_check_one_round_sample_collect_num` | int | Minimum number of new data samples to collect per round of sample collection. Default is `10`. Only used when `enable_recalibration=True`.|
-| `calibration_quality_check_queue_max_size` | int | Maximum number of data samples to store when camera model is calibrated. Default is `1000`. Only used when `enable_recalibration=True`.|
-| `calibration_event_frequency_seconds` | int | Output frequency (seconds) of camera calibration events. A value of `-1` indicates that the camera calibration shouldn't be sent unless the camera calibration info has been changed. Default is `-1`.|
-
-### Camera calibration output
-
-The following is an example of the output from camera calibration if enabled. Ellipses indicate more of the same type of objects in a list.
-
-```json
-{
- "type": "cameraCalibrationEvent",
- "sourceInfo": {
- "id": "camera1",
- "timestamp": "2021-04-20T21:15:59.100Z",
- "width": 512,
- "height": 288,
- "frameId": 531,
- "cameraCalibrationInfo": {
- "status": "Calibrated",
- "cameraHeight": 13.294151306152344,
- "focalLength": 372.0000305175781,
- "tiltupAngle": 0.9581864476203918,
- "lastCalibratedTime": "2021-04-20T21:15:59.058"
- }
- },
- "zonePlacementInfo": {
- "optimalZoneRegion": {
- "type": "POLYGON",
- "points": [
- {
- "x": 0.8403755868544601,
- "y": 0.5515320334261838
- },
- {
- "x": 0.15805946791862285,
- "y": 0.5487465181058496
- }
- ],
- "name": "optimal_zone_region"
- },
- "fairZoneRegion": {
- "type": "POLYGON",
- "points": [
- {
- "x": 0.7871674491392802,
- "y": 0.7437325905292479
- },
- {
- "x": 0.22065727699530516,
- "y": 0.7325905292479109
- }
- ],
- "name": "fair_zone_region"
- },
- "uniformlySpacedPersonBoundingBoxes": [
- {
- "type": "RECTANGLE",
- "points": [
- {
- "x": 0.0297339593114241,
- "y": 0.0807799442896936
- },
- {
- "x": 0.10015649452269171,
- "y": 0.2757660167130919
- }
- ]
- }
- ],
- "personBoundingBoxGroundPoints": [
- {
- "x": -22.944068908691406,
- "y": 31.487680435180664
- }
- ]
- }
-}
-```
-
-See [Spatial analysis operation output](#spatial-analysis-operation-output) for details on `source_info`.
-
-| ZonePlacementInfo Field Name | Type| Description|
-||||
-| `optimalZoneRegion` | object| A polygon in the camera image where lines or zones for your operations can be placed for optimal results. <br/> Each value pair represents the x,y for vertices of a polygon. The polygon represents the areas in which people are tracked or counted and polygon points are based on normalized coordinates (0-1), where the top left corner is (0.0, 0.0) and the bottom right corner is (1.0, 1.0).|
-| `fairZoneRegion` | object| A polygon in the camera image where lines or zones for your operations can be placed for good, but possibly not optimal, results. <br/> See `optimalZoneRegion` above for an in-depth explanation of the contents. |
-| `uniformlySpacedPersonBoundingBoxes` | list | A list of bounding boxes of people within the camera image distributed uniformly in real space. Values are based on normalized coordinates (0-1).|
-| `personBoundingBoxGroundPoints` | list | A list of coordinates on the floor plane relative to the camera. Each coordinate corresponds to the bottom right of the bounding box in `uniformlySpacedPersonBoundingBoxes` with the same index. <br/> See the `centerGroundPointX/centerGroundPointY` fields under the [JSON format for cognitiveservices.vision.spatialanalysis-persondistance AI Insights](#json-format-for-cognitiveservicesvisionspatialanalysis-persondistance-ai-insights) section for more details on how coordinates on the floor plane are calculated. |
-
-Example of the zone placement info output visualized on a video frame:
-![Zone placement info visualization](./media/spatial-analysis/zone-placement-info-visualization.png)
-
-The zone placement info provides suggestions for your configurations, but the guidelines in [Camera configuration](#camera-configuration) must still be followed for best results.
-
-### Tracker node parameter settings
-You can configure the speed computation through the tracker node parameter settings.
-```json
-{
-  "enable_speed": true,
-  "remove_stationary_objects": true,
-  "stationary_objects_dist_threshold_in_pixels": 5,
-  "stationary_objects_buffer_length_in_seconds": 3600,
-  "stationary_objects_filter_ratio": 0.2
-}
-```
-| Name | Type| Description|
-||||
-| `enable_speed` | bool | Indicates whether you want to compute the speed for the detected people or not. `enable_speed` is set by default to `True`. It is highly recommended that you enable both speed and orientation to have the best estimated values. |
-| `remove_stationary_objects` | bool | Indicates whether you want to remove stationary objects. `remove_stationary_objects` is set by default to True. |
-| `stationary_objects_dist_threshold_in_pixels` | int | The neighborhood distance threshold to decide whether two detection boxes can be treated as the same detection. `stationary_objects_dist_threshold_in_pixels` is set by default to 5. |
-| `stationary_objects_buffer_length_in_seconds` | int | The minimum length of time in seconds that the system has to look back to decide whether a target is a stationary target or not. `stationary_objects_buffer_length_in_seconds` is set by default to 3600. |
-| `stationary_objects_filter_ratio` | float | If a target is repeatedly detected at the same location (defined in `stationary_objects_dist_threshold_in_pixels`) for more than `stationary_objects_filter_ratio` (0.2 means 20%) of the `stationary_objects_buffer_length_in_seconds` time interval, it will be treated as a stationary target. `stationary_objects_filter_ratio` is set by default to 0.2. |
-
-## Spatial Analysis operations configuration and output
-
-### Zone configuration for cognitiveservices.vision.spatialanalysis-personcount
-
-The following is an example of a JSON input for the SPACEANALYTICS_CONFIG parameter that configures a zone. You may configure multiple zones for this operation.
-
-```json
-{
- "zones": [
- {
- "name": "lobbycamera",
- "polygon": [[0.3,0.3], [0.3,0.9], [0.6,0.9], [0.6,0.3], [0.3,0.3]],
- "events": [
- {
- "type": "count",
- "config": {
- "trigger": "event",
- "focus": "footprint"
- }
- }
- ]
- }
- ]
-}
-```
-
-| Name | Type| Description|
-||||
-| `zones` | list| List of zones. |
-| `name` | string| Friendly name for this zone.|
-| `polygon` | list| Each value pair represents the x,y for vertices of a polygon. The polygon represents the areas in which people are tracked or counted. Polygon points are based on normalized coordinates (0-1), where the top left corner is (0.0, 0.0) and the bottom right corner is (1.0, 1.0).
-| `threshold` | float| Events are egressed when the person is greater than this number of pixels inside the zone. This is an optional field and value is in ratio (0-1). For example, value 0.0253 will be 13 pixels on a video with image width = 512 (0.0253 X 512 = ~13).|
-| `type` | string| For **cognitiveservices.vision.spatialanalysis-personcount**, this should be `count`.|
-| `trigger` | string| The type of trigger for sending an event. Supported values are `event` for sending events when the count changes or `interval` for sending events periodically, irrespective of whether the count has changed or not.
-| `output_frequency` | int | The rate at which events are egressed. When `output_frequency` = X, every Xth event is egressed; for example, `output_frequency` = 2 means every other event is output. The `output_frequency` is applicable to both `event` and `interval`. |
-| `focus` | string| The point location within person's bounding box used to calculate events. Focus's value can be `footprint` (the footprint of person), `bottom_center` (the bottom center of person's bounding box), `center` (the center of person's bounding box).|
-
-### Line configuration for cognitiveservices.vision.spatialanalysis-personcrossingline
-
-The following is an example of a JSON input for the `SPACEANALYTICS_CONFIG` parameter that configures a line. You may configure multiple crossing lines for this operation.
-
-```json
-{
- "lines": [
- {
- "name": "doorcamera",
- "line": {
- "start": {
- "x": 0,
- "y": 0.5
- },
- "end": {
- "x": 1,
- "y": 0.5
- }
- },
- "events": [
- {
- "type": "linecrossing",
- "config": {
- "trigger": "event",
- "focus": "footprint"
- }
- }
- ]
- }
- ]
-}
-```
-
-| Name | Type| Description|
-||||
-| `lines` | list| List of lines.|
-| `name` | string| Friendly name for this line.|
-| `line` | list| The definition of the line. This is a directional line allowing you to understand "entry" vs. "exit".|
-| `start` | value pair| x, y coordinates for line's starting point. The float values represent the position of the vertex relative to the top left corner. To calculate the absolute x, y values, you multiply these values with the frame size. |
-| `end` | value pair| x, y coordinates for line's ending point. The float values represent the position of the vertex relative to the top left corner. To calculate the absolute x, y values, you multiply these values with the frame size. |
-| `threshold` | float| Events are egressed when the person is greater than this number of pixels inside the zone. This is an optional field and value is in ratio (0-1). For example, value 0.0253 will be 13 pixels on a video with image width = 512 (0.0253 X 512 = ~13).|
-| `type` | string| For **cognitiveservices.vision.spatialanalysis-personcrossingline**, this should be `linecrossing`.|
-|`trigger`|string|The type of trigger for sending an event.<br>Supported Values: "event": fire when someone crosses the line.|
-| `focus` | string| The point location within person's bounding box used to calculate events. Focus's value can be `footprint` (the footprint of person), `bottom_center` (the bottom center of person's bounding box), `center` (the center of person's bounding box). The default value is footprint.|
-
-### Zone configuration for cognitiveservices.vision.spatialanalysis-personcrossingpolygon
-
-This is an example of a JSON input for the `SPACEANALYTICS_CONFIG` parameter that configures a zone. You may configure multiple zones for this operation.
-
-```json
-{
-"zones":[
- {
- "name": "queuecamera",
- "polygon": [[0.3,0.3], [0.3,0.9], [0.6,0.9], [0.6,0.3], [0.3,0.3]],
- "events":[{
- "type": "zonecrossing",
- "config":{
- "trigger": "event",
- "focus": "footprint"
- }
- }]
- },
- {
- "name": "queuecamera1",
- "polygon": [[0.3,0.3], [0.3,0.9], [0.6,0.9], [0.6,0.3], [0.3,0.3]],
- "events":[{
- "type": "zonedwelltime",
- "config":{
- "trigger": "event",
- "focus": "footprint"
- }
- }]
- }]
-}
-```
-
-| Name | Type| Description|
-||||
-| `zones` | list| List of zones. |
-| `name` | string| Friendly name for this zone.|
-| `polygon` | list| Each value pair represents the x,y for vertices of polygon. The polygon represents the areas in which people are tracked or counted. The float values represent the position of the vertex relative to the top left corner. To calculate the absolute x, y values, you multiply these values with the frame size.
-| `target_side` | int| Specifies a side of the zone defined by `polygon` to measure how long people face that side while in the zone. 'dwellTimeForTargetSide' will output that estimated time. Each side is a numbered edge between the two vertices of the polygon that represents your zone. For example, the edge between the first two vertices of the polygon represents the first side, 'side'=1. The value of `target_side` is between `[0,N-1]` where `N` is the number of sides of the `polygon`. This is an optional field. |
-| `threshold` | float| Events are egressed when the person is greater than this number of pixels inside the zone. This is an optional field and value is in ratio (0-1). For example, value 0.074 will be 38 pixels on a video with image width = 512 (0.074 X 512 = ~38).|
-| `type` | string| For **cognitiveservices.vision.spatialanalysis-personcrossingpolygon** this should be `zonecrossing` or `zonedwelltime`.|
-| `trigger`|string|The type of trigger for sending an event<br>Supported Values: "event": fire when someone enters or exits the zone.|
-| `focus` | string| The point location within person's bounding box used to calculate events. Focus's value can be `footprint` (the footprint of person), `bottom_center` (the bottom center of person's bounding box), `center` (the center of person's bounding box). The default value is footprint.|
-
-### Zone configuration for cognitiveservices.vision.spatialanalysis-persondistance
-
-This is an example of a JSON input for the `SPACEANALYTICS_CONFIG` parameter that configures a zone for **cognitiveservices.vision.spatialanalysis-persondistance**. You may configure multiple zones for this operation.
-
-```json
-{
-"zones":[{
- "name": "lobbycamera",
- "polygon": [[0.3,0.3], [0.3,0.9], [0.6,0.9], [0.6,0.3], [0.3,0.3]],
- "events":[{
- "type": "persondistance",
- "config":{
- "trigger": "event",
- "output_frequency":1,
- "minimum_distance_threshold":6.0,
- "maximum_distance_threshold":35.0,
- "aggregation_method": "average",
- "focus": "footprint"
- }
- }]
- }]
-}
-```
-
-| Name | Type| Description|
-||||
-| `zones` | list| List of zones. |
-| `name` | string| Friendly name for this zone.|
-| `polygon` | list| Each value pair represents the x,y for vertices of polygon. The polygon represents the areas in which people are counted and the distance between people is measured. The float values represent the position of the vertex relative to the top left corner. To calculate the absolute x, y values, you multiply these values with the frame size.
-| `threshold` | float| Events are egressed when the person is greater than this number of pixels inside the zone. This is an optional field and value is in ratio (0-1). For example, value 0.0253 will be 13 pixels on a video with image width = 512 (0.0253 X 512 = ~13).|
-| `type` | string| For **cognitiveservices.vision.spatialanalysis-persondistance**, this should be `persondistance`.|
-| `trigger` | string| The type of trigger for sending an event. Supported values are `event` for sending events when the count changes or `interval` for sending events periodically, irrespective of whether the count has changed or not.
-| `output_frequency` | int | The rate at which events are egressed. When `output_frequency` = X, every Xth event is egressed; for example, `output_frequency` = 2 means every other event is output. The `output_frequency` is applicable to both `event` and `interval`.|
-| `minimum_distance_threshold` | float| A distance in feet that will trigger a "TooClose" event when people are less than that distance apart.|
-| `maximum_distance_threshold` | float| A distance in feet that will trigger a "TooFar" event when people are greater than that distance apart.|
-| `aggregation_method` | string| The method used to aggregate the `persondistance` result. Supported values are `mode` and `average`.|
-| `focus` | string| The point location within person's bounding box used to calculate events. Focus's value can be `footprint` (the footprint of person), `bottom_center` (the bottom center of person's bounding box), `center` (the center of person's bounding box).|
-
-### Configuration for cognitiveservices.vision.spatialanalysis
-The following is an example of a JSON input for the `SPACEANALYTICS_CONFIG` parameter that configures a line and zone for **cognitiveservices.vision.spatialanalysis**. You may configure multiple lines/zones for this operation and each line/zone can have different events.
-
-```json
-{
- "lines": [
- {
- "name": "doorcamera",
- "line": {
- "start": {
- "x": 0,
- "y": 0.5
- },
- "end": {
- "x": 1,
- "y": 0.5
- }
- },
- "events": [
- {
- "type": "linecrossing",
- "config": {
- "trigger": "event",
- "focus": "footprint"
- }
- }
- ]
- }
- ],
- "zones": [
- {
- "name": "lobbycamera",
- "polygon": [[0.3, 0.3],[0.3, 0.9],[0.6, 0.9],[0.6, 0.3],[0.3, 0.3]],
- "events": [
- {
- "type": "persondistance",
- "config": {
- "trigger": "event",
- "output_frequency": 1,
- "minimum_distance_threshold": 6.0,
- "maximum_distance_threshold": 35.0,
- "focus": "footprint"
- }
- },
- {
- "type": "count",
- "config": {
- "trigger": "event",
- "output_frequency": 1,
- "focus": "footprint"
- }
- },
- {
- "type": "zonecrossing",
- "config": {
- "focus": "footprint"
- }
- },
- {
- "type": "zonedwelltime",
- "config": {
- "focus": "footprint"
- }
- }
- ]
- }
- ]
-}
-```
-
-## Camera configuration
-
-See the [camera placement](spatial-analysis-camera-placement.md) guidelines to learn more about how to configure zones and lines.
-
-## Spatial Analysis operation output
-
-The events from each operation are egressed to Azure IoT Hub in JSON format.
-
-### JSON format for cognitiveservices.vision.spatialanalysis-personcount AI Insights
-
-Sample JSON for an event output by this operation.
-
-```json
-{
- "events": [
- {
- "id": "b013c2059577418caa826844223bb50b",
- "type": "personCountEvent",
- "detectionIds": [
- "bc796b0fc2534bc59f13138af3dd7027",
- "60add228e5274158897c135905b5a019"
- ],
- "properties": {
- "personCount": 2
- },
- "zone": "lobbycamera",
- "trigger": "event"
- }
- ],
- "sourceInfo": {
- "id": "camera_id",
- "timestamp": "2020-08-24T06:06:57.224Z",
- "width": 608,
- "height": 342,
- "frameId": "1400",
- "cameraCalibrationInfo": {
- "status": "Calibrated",
- "cameraHeight": 10.306597709655762,
- "focalLength": 385.3199462890625,
- "tiltupAngle": 1.0969393253326416
- },
- "imagePath": ""
- },
- "detections": [
- {
- "type": "person",
- "id": "bc796b0fc2534bc59f13138af3dd7027",
- "region": {
- "type": "RECTANGLE",
- "points": [
- {
- "x": 0.612683747944079,
- "y": 0.25340268765276636
- },
- {
- "x": 0.7185954043739721,
- "y": 0.6425260577285499
- }
- ]
- },
- "confidence": 0.9559211134910583,
- "metadata": {
- "centerGroundPointX": "2.6310102939605713",
- "centerGroundPointY": "0.0",
- "groundOrientationAngle": "1.3",
- "footprintX": "0.7306610584259033",
- "footprintY": "0.8814966493381893"
- },
- "attributes": [
- {
- "label": "face_mask",
- "confidence": 0.99,
- "task": ""
- }
- ]
- },
- {
- "type": "person",
- "id": "60add228e5274158897c135905b5a019",
- "region": {
- "type": "RECTANGLE",
- "points": [
- {
- "x": 0.22326200886776573,
- "y": 0.17830915618361087
- },
- {
- "x": 0.34922296122500773,
- "y": 0.6297955429344847
- }
- ]
- },
- "confidence": 0.9389744400978088,
- "metadata": {
- "centerGroundPointX": "2.6310102939605713",
- "centerGroundPointY": "18.635927200317383",
- "groundOrientationAngle": "1.3",
- "footprintX": "0.7306610584259033",
- "footprintY": "0.8814966493381893"
- },
- "attributes": [
- {
- "label": "face_mask",
- "confidence": 0.99,
- "task": ""
- }
- ]
- }
- ],
- "schemaVersion": "2.0"
-}
-```
-
-| Event Field Name | Type| Description|
-||||
-| `id` | string| Event ID|
-| `type` | string| Event type|
-| `detectionIds` | array| Array (size 1) containing the unique identifier of the person detection that triggered this event|
-| `properties` | collection| Collection of values|
-| `trackingId` | string| Unique identifier of the person detected|
-| `zone` | string | The "name" field of the polygon that represents the zone that was crossed|
-| `trigger` | string| The trigger type is 'event' or 'interval' depending on the value of `trigger` in SPACEANALYTICS_CONFIG|
-
-| Detections Field Name | Type| Description|
-||||
-| `id` | string| Detection ID|
-| `type` | string| Detection type|
-| `region` | collection| Collection of values|
-| `type` | string| Type of region|
-| `points` | collection| Top left and bottom right points when the region type is RECTANGLE |
-| `confidence` | float| Algorithm confidence|
-| `attributes` | array| Array of attributes. Each attribute consists of label, task, and confidence |
-| `label` | string| The attribute value (for example, `{label: face_mask}` indicates the detected person is wearing a face mask) |
-| `confidence (attribute)` | float| The attribute confidence value with range of 0 to 1 (for example, `{confidence: 0.9, label: face_nomask}` indicates the detected person is *not* wearing a face mask) |
-| `task` | string | The attribute classification task/class |
--
-| SourceInfo Field Name | Type| Description|
-||||
-| `id` | string| Camera ID|
-| `timestamp` | date| UTC date when the JSON payload was emitted|
-| `width` | int | Video frame width|
-| `height` | int | Video frame height|
-| `frameId` | int | Frame identifier|
-| `cameraCalibrationInfo` | collection | Collection of values|
-| `status` | string | The status of the calibration in the format of `state[;progress description]`. The state can be `Calibrating`, `Recalibrating` (if recalibration is enabled), or `Calibrated`. The progress description is only present in the `Calibrating` and `Recalibrating` states, and shows the progress of the current calibration process.|
-| `cameraHeight` | float | The height of the camera above the ground in feet. This is inferred from autocalibration. |
-| `focalLength` | float | The focal length of the camera in pixels. This is inferred from autocalibration. |
-| `tiltUpAngle` | float | The camera tilt angle from vertical. This is inferred from autocalibration.|
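-
-To illustrate how these fields fit together, here's a minimal Python sketch (an illustration, not part of the operation output) that parses a personCountEvent payload like the sample above and tallies how many detected people carry a `face_mask` attribute. The 0.5 attribute-confidence cutoff is an arbitrary assumption for this example.
-
-```python
-import json
-
-def summarize_person_count(payload: str) -> dict:
-    """Summarize a personCountEvent message using the fields documented above."""
-    message = json.loads(payload)
-    event = message["events"][0]
-
-    # Count detections whose attributes include a reasonably confident face_mask label.
-    masked = sum(
-        1
-        for detection in message.get("detections", [])
-        if any(
-            attr["label"] == "face_mask" and attr["confidence"] >= 0.5
-            for attr in detection.get("attributes", [])
-        )
-    )
-    return {
-        "zone": event["zone"],
-        "personCount": event["properties"]["personCount"],
-        "peopleWithMask": masked,
-    }
-```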
--
-### JSON format for cognitiveservices.vision.spatialanalysis-personcrossingline AI Insights
-
-Sample JSON for detections output by this operation.
-
-```json
-{
- "events": [
- {
- "id": "3733eb36935e4d73800a9cf36185d5a2",
- "type": "personLineEvent",
- "detectionIds": [
- "90d55bfc64c54bfd98226697ad8445ca"
- ],
- "properties": {
- "trackingId": "90d55bfc64c54bfd98226697ad8445ca",
- "status": "CrossLeft"
- },
- "zone": "doorcamera"
- }
- ],
- "sourceInfo": {
- "id": "camera_id",
- "timestamp": "2020-08-24T06:06:53.261Z",
- "width": 608,
- "height": 342,
- "frameId": "1340",
- "imagePath": ""
- },
- "detections": [
- {
- "type": "person",
- "id": "90d55bfc64c54bfd98226697ad8445ca",
- "region": {
- "type": "RECTANGLE",
- "points": [
- {
- "x": 0.491627341822574,
- "y": 0.2385801348769874
- },
- {
- "x": 0.588894994635331,
- "y": 0.6395559924387793
- }
- ]
- },
- "confidence": 0.9005028605461121,
- "metadata": {
- "centerGroundPointX": "2.6310102939605713",
- "centerGroundPointY": "18.635927200317383",
- "groundOrientationAngle": "1.3",
- "trackingId": "90d55bfc64c54bfd98226697ad8445ca",
- "speed": "1.2",
- "footprintX": "0.7306610584259033",
- "footprintY": "0.8814966493381893"
- },
- "attributes": [
- {
- "label": "face_mask",
- "confidence": 0.99,
- "task": ""
- }
- ]
- }
- ],
- "schemaVersion": "2.0"
-}
-```
-| Event Field Name | Type| Description|
-||||
-| `id` | string| Event ID|
-| `type` | string| Event type|
-| `detectionIds` | array| Array (size 1) containing the unique identifier of the person detection that triggered this event|
-| `properties` | collection| Collection of values|
-| `trackingId` | string| Unique identifier of the person detected|
-| `status` | string| Direction of line crossings, either 'CrossLeft' or 'CrossRight'. Direction is based on imagining standing at the "start" facing the "end" of the line. CrossRight is crossing from left to right. CrossLeft is crossing from right to left.|
-| `orientationDirection` | string| The orientation direction of the detected person after crossing the line. The value can be 'Left', 'Right', or 'Straight'. This value is output if `enable_orientation` is set to `True` in `CAMERACALIBRATOR_NODE_CONFIG` |
-| `zone` | string | The "name" field of the line that was crossed|
-
-| Detections Field Name | Type| Description|
-||||
-| `id` | string| Detection ID|
-| `type` | string| Detection type|
-| `region` | collection| Collection of values|
-| `type` | string| Type of region|
-| `points` | collection| Top left and bottom right points when the region type is RECTANGLE |
-| `groundOrientationAngle` | float| The clockwise radian angle of the person's orientation on the inferred ground plane |
-| `mappedImageOrientation` | float| The projected clockwise radian angle of the person's orientation on the 2D image space |
-| `speed` | float| The estimated speed of the detected person. The unit is `foot per second (ft/s)`|
-| `confidence` | float| Algorithm confidence|
-| `attributes` | array| Array of attributes. Each attribute consists of label, task, and confidence |
-| `label` | string| The attribute value (for example, `{label: face_mask}` indicates the detected person is wearing a face mask) |
-| `confidence (attribute)` | float| The attribute confidence value with range of 0 to 1 (for example, `{confidence: 0.9, label: face_nomask}` indicates the detected person is *not* wearing a face mask) |
-| `task` | string | The attribute classification task/class |
-
-| SourceInfo Field Name | Type| Description|
-||||
-| `id` | string| Camera ID|
-| `timestamp` | date| UTC date when the JSON payload was emitted|
-| `width` | int | Video frame width|
-| `height` | int | Video frame height|
-| `frameId` | int | Frame identifier|
--
-> [!IMPORTANT]
-> The AI model detects a person irrespective of whether the person is facing towards or away from the camera. The AI model doesn't run face recognition and doesn't emit any biometric information.
-
-### JSON format for cognitiveservices.vision.spatialanalysis-personcrossingpolygon AI Insights
-
-Sample JSON for detections output by this operation with `zonecrossing` type SPACEANALYTICS_CONFIG.
-
-```json
-{
- "events": [
- {
- "id": "f095d6fe8cfb4ffaa8c934882fb257a5",
- "type": "personZoneEnterExitEvent",
- "detectionIds": [
- "afcc2e2a32a6480288e24381f9c5d00e"
- ],
- "properties": {
- "trackingId": "afcc2e2a32a6480288e24381f9c5d00e",
- "status": "Enter",
- "side": "1"
- },
- "zone": "queuecamera"
- }
- ],
- "sourceInfo": {
- "id": "camera_id",
- "timestamp": "2020-08-24T06:15:09.680Z",
- "width": 608,
- "height": 342,
- "frameId": "428",
- "imagePath": ""
- },
- "detections": [
- {
- "type": "person",
- "id": "afcc2e2a32a6480288e24381f9c5d00e",
- "region": {
- "type": "RECTANGLE",
- "points": [
- {
- "x": 0.8135572734631991,
- "y": 0.6653949670624315
- },
- {
- "x": 0.9937645761590255,
- "y": 0.9925406829655519
- }
- ]
- },
- "confidence": 0.6267998814582825,
- "metadata": {
- "centerGroundPointX": "2.6310102939605713",
- "centerGroundPointY": "18.635927200317383",
- "groundOrientationAngle": "1.3",
- "trackingId": "afcc2e2a32a6480288e24381f9c5d00e",
- "speed": "1.2",
- "footprintX": "0.7306610584259033",
- "footprintY": "0.8814966493381893"
- },
- "attributes": [
- {
- "label": "face_mask",
- "confidence": 0.99,
- "task": ""
- }
- ]
- }
- ],
- "schemaVersion": "2.0"
-}
-```
-
-Sample JSON for detections output by this operation with `zonedwelltime` type SPACEANALYTICS_CONFIG.
-
-```json
-{
- "events": [
- {
- "id": "f095d6fe8cfb4ffaa8c934882fb257a5",
- "type": "personZoneDwellTimeEvent",
- "detectionIds": [
- "afcc2e2a32a6480288e24381f9c5d00e"
- ],
- "properties": {
- "trackingId": "afcc2e2a32a6480288e24381f9c5d00e",
- "status": "Exit",
- "side": "1",
- "dwellTime": 7132.0,
- "dwellFrames": 20
- },
- "zone": "queuecamera"
- }
- ],
- "sourceInfo": {
- "id": "camera_id",
- "timestamp": "2020-08-24T06:15:09.680Z",
- "width": 608,
- "height": 342,
- "frameId": "428",
- "imagePath": ""
- },
- "detections": [
- {
- "type": "person",
- "id": "afcc2e2a32a6480288e24381f9c5d00e",
- "region": {
- "type": "RECTANGLE",
- "points": [
- {
- "x": 0.8135572734631991,
- "y": 0.6653949670624315
- },
- {
- "x": 0.9937645761590255,
- "y": 0.9925406829655519
- }
- ]
- },
- "confidence": 0.6267998814582825,
- "metadata": {
- "centerGroundPointX": "2.6310102939605713",
- "centerGroundPointY": "18.635927200317383",
- "groundOrientationAngle": "1.2",
- "mappedImageOrientation": "0.3",
- "speed": "1.2",
- "trackingId": "afcc2e2a32a6480288e24381f9c5d00e",
- "footprintX": "0.7306610584259033",
- "footprintY": "0.8814966493381893"
- }
- }
- ],
- "schemaVersion": "2.0"
-}
-```
-
-| Event Field Name | Type| Description|
-||||
-| `id` | string| Event ID|
-| `type` | string| Event type. The value can be either _personZoneDwellTimeEvent_ or _personZoneEnterExitEvent_|
-| `detectionIds` | array| Array (size 1) containing the unique identifier of the person detection that triggered this event|
-| `properties` | collection| Collection of values|
-| `trackingId` | string| Unique identifier of the person detected|
-| `status` | string| Direction of polygon crossings, either 'Enter' or 'Exit'|
-| `side` | int| The number of the side of the polygon that the person crossed. Each side is a numbered edge between two vertices of the polygon that represents your zone. The edge between the first two vertices of the polygon represents the first side. 'Side' is empty when the event isn't associated with a specific side due to occlusion. For example, an exit occurred when a person disappeared but wasn't seen crossing a side of the zone, or an enter occurred when a person appeared in the zone but wasn't seen crossing a side.|
-| `dwellTime` | float | The number of milliseconds that the person spent in the zone. This field is provided when the event type is personZoneDwellTimeEvent|
-| `dwellFrames` | int | The number of frames that the person spent in the zone. This field is provided when the event type is personZoneDwellTimeEvent|
-| `dwellTimeForTargetSide` | float | The number of milliseconds that the person spent in the zone while facing the `target_side`. This field is provided when `enable_orientation` is `True` in `CAMERACALIBRATOR_NODE_CONFIG` and the value of `target_side` is set in `SPACEANALYTICS_CONFIG`|
-| `avgSpeed` | float| The average speed of the person in the zone. The unit is `foot per second (ft/s)`|
-| `minSpeed` | float| The minimum speed of the person in the zone. The unit is `foot per second (ft/s)`|
-| `zone` | string | The "name" field of the polygon that represents the zone that was crossed|
-
-| Detections Field Name | Type| Description|
-||||
-| `id` | string| Detection ID|
-| `type` | string| Detection type|
-| `region` | collection| Collection of values|
-| `type` | string| Type of region|
-| `points` | collection| Top left and bottom right points when the region type is RECTANGLE |
-| `groundOrientationAngle` | float| The clockwise radian angle of the person's orientation on the inferred ground plane |
-| `mappedImageOrientation` | float| The projected clockwise radian angle of the person's orientation on the 2D image space |
-| `speed` | float| The estimated speed of the detected person. The unit is `foot per second (ft/s)`|
-| `confidence` | float| Algorithm confidence|
-| `attributes` | array| Array of attributes. Each attribute consists of label, task, and confidence |
-| `label` | string| The attribute value (for example, `{label: face_mask}` indicates the detected person is wearing a face mask) |
-| `confidence (attribute)` | float| The attribute confidence value with range of 0 to 1 (for example, `{confidence: 0.9, label: face_nomask}` indicates the detected person is *not* wearing a face mask) |
-| `task` | string | The attribute classification task/class |
-
-### JSON format for cognitiveservices.vision.spatialanalysis-persondistance AI Insights
-
-Sample JSON for detections output by this operation.
-
-```json
-{
- "events": [
- {
- "id": "9c15619926ef417aa93c1faf00717d36",
- "type": "personDistanceEvent",
- "detectionIds": [
- "9037c65fa3b74070869ee5110fcd23ca",
- "7ad7f43fd1a64971ae1a30dbeeffc38a"
- ],
- "properties": {
- "personCount": 5,
- "averageDistance": 20.807043981552123,
- "minimumDistanceThreshold": 6.0,
- "maximumDistanceThreshold": "Infinity",
- "eventName": "TooClose",
- "distanceViolationPersonCount": 2
- },
- "zone": "lobbycamera",
- "trigger": "event"
- }
- ],
- "sourceInfo": {
- "id": "camera_id",
- "timestamp": "2020-08-24T06:17:25.309Z",
- "width": 608,
- "height": 342,
- "frameId": "1199",
- "cameraCalibrationInfo": {
- "status": "Calibrated",
- "cameraHeight": 12.9940824508667,
- "focalLength": 401.2800598144531,
- "tiltupAngle": 1.057669997215271
- },
- "imagePath": ""
- },
- "detections": [
- {
- "type": "person",
- "id": "9037c65fa3b74070869ee5110fcd23ca",
- "region": {
- "type": "RECTANGLE",
- "points": [
- {
- "x": 0.39988183975219727,
- "y": 0.2719132942065858
- },
- {
- "x": 0.5051516984638414,
- "y": 0.6488402517218339
- }
- ]
- },
- "confidence": 0.948630690574646,
- "metadata": {
- "centerGroundPointX": "-1.4638760089874268",
- "centerGroundPointY": "18.29732322692871",
- "groundOrientationAngle": "1.3",
- "footprintX": "0.7306610584259033",
- "footprintY": "0.8814966493381893"
- }
- },
- {
- "type": "person",
- "id": "7ad7f43fd1a64971ae1a30dbeeffc38a",
- "region": {
- "type": "RECTANGLE",
- "points": [
- {
- "x": 0.5200299714740954,
- "y": 0.2875368218672903
- },
- {
- "x": 0.6457497446160567,
- "y": 0.6183311060855263
- }
- ]
- },
- "confidence": 0.8235412240028381,
- "metadata": {
- "centerGroundPointX": "2.6310102939605713",
- "centerGroundPointY": "18.635927200317383",
- "groundOrientationAngle": "1.3",
- "footprintX": "0.7306610584259033",
- "footprintY": "0.8814966493381893"
- }
- }
- ],
- "schemaVersion": "2.0"
-}
-```
-
-| Event Field Name | Type| Description|
-||||
-| `id` | string| Event ID|
-| `type` | string| Event type|
-| `detectionIds` | array| Array (size 1) containing the unique identifier of the person detection that triggered this event|
-| `properties` | collection| Collection of values|
-| `personCount` | int| Number of people detected when the event was emitted|
-| `averageDistance` | float| The average distance between all detected people in feet|
-| `minimumDistanceThreshold` | float| The distance in feet that will trigger a "TooClose" event when people are less than that distance apart.|
-| `maximumDistanceThreshold` | float| The distance in feet that will trigger a "TooFar" event when people are greater than that distance apart.|
-| `eventName` | string| Event name is `TooClose` when the `minimumDistanceThreshold` is violated, `TooFar` when `maximumDistanceThreshold` is violated, or `unknown` when autocalibration hasn't completed|
-| `distanceViolationPersonCount` | int| Number of people detected in violation of `minimumDistanceThreshold` or `maximumDistanceThreshold`|
-| `zone` | string | The "name" field of the polygon that represents the zone that was monitored for distancing between people|
-| `trigger` | string| The trigger type is 'event' or 'interval' depending on the value of `trigger` in SPACEANALYTICS_CONFIG|
-
-| Detections Field Name | Type| Description|
-||||
-| `id` | string| Detection ID|
-| `type` | string| Detection type|
-| `region` | collection| Collection of values|
-| `type` | string| Type of region|
-| `points` | collection| Top left and bottom right points when the region type is RECTANGLE |
-| `confidence` | float| Algorithm confidence|
-| `centerGroundPointX/centerGroundPointY` | 2 float values| `x`, `y` values with the coordinates of the person's inferred location on the ground in feet. `x` and `y` are coordinates on the floor plane, assuming the floor is level. The camera's location is the origin. |
-
-In `centerGroundPoint`, `x` is the component of distance from the camera to the person that is perpendicular to the camera image plane. `y` is the component of distance that is parallel to the camera image plane.
-
-![Example center ground point](./media/spatial-analysis/x-y-chart.png)
-
-In this example, `centerGroundPoint` is `{centerGroundPointX: 4, centerGroundPointY: 5}`. This means there's a person four feet ahead of the camera and five feet to the right, looking at the room top-down.
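-
-As a hypothetical illustration, the separation between two detections can be computed from this metadata. Note that the values arrive as strings in the payload, so they're converted to floats first:
-
-```python
-import math
-
-def ground_distance(detection_a: dict, detection_b: dict) -> float:
-    """Euclidean distance in feet between two detections' inferred ground points."""
-    ax = float(detection_a["metadata"]["centerGroundPointX"])
-    ay = float(detection_a["metadata"]["centerGroundPointY"])
-    bx = float(detection_b["metadata"]["centerGroundPointX"])
-    by = float(detection_b["metadata"]["centerGroundPointY"])
-    return math.hypot(ax - bx, ay - by)
-```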
--
-| SourceInfo Field Name | Type| Description|
-||||
-| `id` | string| Camera ID|
-| `timestamp` | date| UTC date when the JSON payload was emitted|
-| `width` | int | Video frame width|
-| `height` | int | Video frame height|
-| `frameId` | int | Frame identifier|
-| `cameraCalibrationInfo` | collection | Collection of values|
-| `status` | string | The status of the calibration in the format of `state[;progress description]`. The state can be `Calibrating`, `Recalibrating` (if recalibration is enabled), or `Calibrated`. The progress description is only present in the `Calibrating` and `Recalibrating` states, and shows the progress of the current calibration process.|
-| `cameraHeight` | float | The height of the camera above the ground in feet. This is inferred from autocalibration. |
-| `focalLength` | float | The focal length of the camera in pixels. This is inferred from autocalibration. |
-| `tiltUpAngle` | float | The camera tilt angle from vertical. This is inferred from autocalibration.|
-
-### JSON format for cognitiveservices.vision.spatialanalysis AI Insights
-
-The output of this operation depends on the configured `events`. For example, if a `zonecrossing` event is configured for this operation, the output is the same as `cognitiveservices.vision.spatialanalysis-personcrossingpolygon`.
-
-## Use the output generated by the container
-
-You may want to integrate Spatial Analysis detection or events into your application. Here are a few approaches to consider:
-
-* Use the Azure Event Hubs SDK for your chosen programming language to connect to the Azure IoT Hub endpoint and receive the events; a minimal sketch follows this list. For more information, see [Read device-to-cloud messages from the built-in endpoint](../../iot-hub/iot-hub-devguide-messages-read-builtin.md).
-* Set up **Message Routing** on your Azure IoT Hub to send the events to other endpoints or save the events to your data storage. For more information, see [IoT Hub Message Routing](../../iot-hub/iot-hub-devguide-messages-d2c.md).
-* Set up an Azure Stream Analytics job to process the events in real-time as they arrive and create visualizations.
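-
-For example, a minimal Python sketch of the first approach, assuming the `azure-eventhub` package and the Event Hub-compatible connection string from your IoT Hub's built-in endpoint (the placeholder below is an assumption you must replace):
-
-```python
-from azure.eventhub import EventHubConsumerClient
-
-# Assumption: the Event Hub-compatible connection string from the IoT Hub's
-# "Built-in endpoints" settings, read with the default consumer group.
-CONNECTION_STR = "<Event Hub-compatible connection string>"
-
-def on_event(partition_context, event):
-    # Each Spatial Analysis insight arrives as a JSON payload like the samples above.
-    print(event.body_as_str())
-    partition_context.update_checkpoint(event)
-
-client = EventHubConsumerClient.from_connection_string(
-    CONNECTION_STR, consumer_group="$Default"
-)
-with client:
-    client.receive(on_event=on_event, starting_position="-1")
-```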
-
-## Deploy Spatial Analysis operations at scale (multiple cameras)
-
-To get the best performance and utilization of the GPUs, you can deploy any Spatial Analysis operation on multiple cameras by using graph instances. The following is a sample configuration for running the `cognitiveservices.vision.spatialanalysis-personcrossingline` operation on 15 cameras.
-
-```json
- "properties.desired": {
- "globalSettings": {
- "PlatformTelemetryEnabled": false,
- "CustomerTelemetryEnabled": true
- },
- "graphs": {
- "personzonelinecrossing": {
- "operationId": "cognitiveservices.vision.spatialanalysis-personcrossingline",
- "version": 1,
- "enabled": true,
- "sharedNodes": {
- "shared_detector0": {
- "node": "PersonCrossingLineGraph.detector",
- "parameters": {
- "DETECTOR_NODE_CONFIG": "{ \"gpu_index\": 0, \"batch_size\": 7, \"do_calibration\": true}",
- }
- },
- "shared_calibrator0": {
- "node": "PersonCrossingLineGraph/cameracalibrator",
- "parameters": {
- "CAMERACALIBRATOR_NODE_CONFIG": "{ \"gpu_index\": 0, \"do_calibration\": true, \"enable_zone_placement\": true}",
- "CALIBRATION_CONFIG": "{\"enable_recalibration\": true, \"quality_check_frequency_seconds\": 86400}",
- }
- },
- "parameters": {
- "VIDEO_DECODE_GPU_INDEX": 0,
- "VIDEO_IS_LIVE": true
- },
- "instances": {
- "1": {
- "sharedNodeMap": {
- "PersonCrossingLineGraph/detector": "shared_detector0",
- "PersonCrossingLineGraph/cameracalibrator": "shared_calibrator0",
- },
- "parameters": {
- "VIDEO_URL": "<Replace RTSP URL for camera 1>",
- "VIDEO_SOURCE_ID": "camera 1",
- "SPACEANALYTICS_CONFIG": "{\"zones\":[{\"name\":\"queue\",\"polygon\":[[0,0],[1,0],[0,1],[1,1],[0,0]]}]}"
- }
- },
- "2": {
- "sharedNodeMap": {
- "PersonCrossingLineGraph/detector": "shared_detector0",
- "PersonCrossingLineGraph/cameracalibrator": "shared_calibrator0",
- },
- "parameters": {
- "VIDEO_URL": "<Replace RTSP URL for camera 2>",
- "VIDEO_SOURCE_ID": "camera 2",
- "SPACEANALYTICS_CONFIG": "<Replace the zone config value, same format as above>"
- }
- },
- "3": {
- "sharedNodeMap": {
- "PersonCrossingLineGraph/detector": "shared_detector0",
- "PersonCrossingLineGraph/cameracalibrator": "shared_calibrator0",
- },
- "parameters": {
- "VIDEO_URL": "<Replace RTSP URL for camera 3>",
- "VIDEO_SOURCE_ID": "camera 3",
- "SPACEANALYTICS_CONFIG": "<Replace the zone config value, same format as above>"
- }
- },
- "4": {
- "sharedNodeMap": {
- "PersonCrossingLineGraph/detector": "shared_detector0",
- "PersonCrossingLineGraph/cameracalibrator": "shared_calibrator0",
- },
- "parameters": {
- "VIDEO_URL": "<Replace RTSP URL for camera 4>",
- "VIDEO_SOURCE_ID": "camera 4",
- "SPACEANALYTICS_CONFIG": "<Replace the zone config value, same format as above>"
- }
- },
- "5": {
- "sharedNodeMap": {
- "PersonCrossingLineGraph/detector": "shared_detector0",
- "PersonCrossingLineGraph/cameracalibrator": "shared_calibrator0",
- },
- "parameters": {
- "VIDEO_URL": "<Replace RTSP URL for camera 5>",
- "VIDEO_SOURCE_ID": "camera 5",
- "SPACEANALYTICS_CONFIG": "<Replace the zone config value, same format as above>"
- }
- },
- "6": {
- "sharedNodeMap": {
- "PersonCrossingLineGraph/detector": "shared_detector0",
- "PersonCrossingLineGraph/cameracalibrator": "shared_calibrator0",
- },
- "parameters": {
- "VIDEO_URL": "<Replace RTSP URL for camera 6>",
- "VIDEO_SOURCE_ID": "camera 6",
- "SPACEANALYTICS_CONFIG": "<Replace the zone config value, same format as above>"
- }
- },
- "7": {
- "sharedNodeMap": {
- "PersonCrossingLineGraph/detector": "shared_detector0",
- "PersonCrossingLineGraph/cameracalibrator": "shared_calibrator0",
- },
- "parameters": {
- "VIDEO_URL": "<Replace RTSP URL for camera 7>",
- "VIDEO_SOURCE_ID": "camera 7",
- "SPACEANALYTICS_CONFIG": "<Replace the zone config value, same format as above>"
- }
- },
- "8": {
- "sharedNodeMap": {
- "PersonCrossingLineGraph/detector": "shared_detector0",
- "PersonCrossingLineGraph/cameracalibrator": "shared_calibrator0",
- },
- "parameters": {
- "VIDEO_URL": "<Replace RTSP URL for camera 8>",
- "VIDEO_SOURCE_ID": "camera 8",
- "SPACEANALYTICS_CONFIG": "<Replace the zone config value, same format as above>"
- }
- },
- "9": {
- "sharedNodeMap": {
- "PersonCrossingLineGraph/detector": "shared_detector0",
- "PersonCrossingLineGraph/cameracalibrator": "shared_calibrator0",
- },
- "parameters": {
- "VIDEO_URL": "<Replace RTSP URL for camera 9>",
- "VIDEO_SOURCE_ID": "camera 9",
- "SPACEANALYTICS_CONFIG": "<Replace the zone config value, same format as above>"
- }
- },
- "10": {
- "sharedNodeMap": {
- "PersonCrossingLineGraph/detector": "shared_detector0",
- "PersonCrossingLineGraph/cameracalibrator": "shared_calibrator0",
- },
- "parameters": {
- "VIDEO_URL": "<Replace RTSP URL for camera 10>",
- "VIDEO_SOURCE_ID": "camera 10",
- "SPACEANALYTICS_CONFIG": "<Replace the zone config value, same format as above>"
- }
- },
- "11": {
- "sharedNodeMap": {
- "PersonCrossingLineGraph/detector": "shared_detector0",
- "PersonCrossingLineGraph/cameracalibrator": "shared_calibrator0",
- },
- "parameters": {
- "VIDEO_URL": "<Replace RTSP URL for camera 11>",
- "VIDEO_SOURCE_ID": "camera 11",
- "SPACEANALYTICS_CONFIG": "<Replace the zone config value, same format as above>"
- }
- },
- "12": {
- "sharedNodeMap": {
- "PersonCrossingLineGraph/detector": "shared_detector0",
- "PersonCrossingLineGraph/cameracalibrator": "shared_calibrator0",
- },
- "parameters": {
- "VIDEO_URL": "<Replace RTSP URL for camera 12>",
- "VIDEO_SOURCE_ID": "camera 12",
- "SPACEANALYTICS_CONFIG": "<Replace the zone config value, same format as above>"
- }
- },
- "13": {
- "sharedNodeMap": {
- "PersonCrossingLineGraph/detector": "shared_detector0",
- "PersonCrossingLineGraph/cameracalibrator": "shared_calibrator0",
- },
- "parameters": {
- "VIDEO_URL": "<Replace RTSP URL for camera 13>",
- "VIDEO_SOURCE_ID": "camera 13",
- "SPACEANALYTICS_CONFIG": "<Replace the zone config value, same format as above>"
- }
- },
- "14": {
- "sharedNodeMap": {
- "PersonCrossingLineGraph/detector": "shared_detector0",
- "PersonCrossingLineGraph/cameracalibrator": "shared_calibrator0",
- },
- "parameters": {
- "VIDEO_URL": "<Replace RTSP URL for camera 14>",
- "VIDEO_SOURCE_ID": "camera 14",
- "SPACEANALYTICS_CONFIG": "<Replace the zone config value, same format as above>"
- }
- },
- "15": {
- "sharedNodeMap": {
- "PersonCrossingLineGraph/detector": "shared_detector0",
- "PersonCrossingLineGraph/cameracalibrator": "shared_calibrator0",
- },
- "parameters": {
- "VIDEO_URL": "<Replace RTSP URL for camera 15>",
- "VIDEO_SOURCE_ID": "camera 15",
- "SPACEANALYTICS_CONFIG": "<Replace the zone config value, same format as above>"
- }
- }
- }
- },
- }
- }
- ```
-| Name | Type| Description|
-||||
-| `batch_size` | int | If all of the cameras have the same resolution, set `batch_size` to the number of cameras that will be used in that operation. Otherwise, set `batch_size` to 1 or leave it at the default (1), which indicates that batching isn't supported. |
-
-## Next steps
-
-* [Deploy a People Counting web application](spatial-analysis-web-app.md)
-* [Logging and troubleshooting](spatial-analysis-logging.md)
-* [Camera placement guide](spatial-analysis-camera-placement.md)
-* [Zone and line placement guide](spatial-analysis-zone-line-placement.md)
cognitive-services Spatial Analysis Web App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Computer-vision/spatial-analysis-web-app.md
- Title: Deploy a Spatial Analysis web app-
-description: Learn how to use Spatial Analysis in a web application.
------ Previously updated : 06/08/2021---
-# How to: Deploy a Spatial Analysis web application
-
-Use this article to learn how to deploy a web app that collects Spatial Analysis data (insights) from Azure IoT Hub and visualizes it. This can have useful applications across a wide range of scenarios and industries. For example, a company that wants to optimize the use of its real estate space can quickly create a solution covering different scenarios.
-
-In this tutorial you will learn how to:
-
-* Deploy the Spatial Analysis container
-* Configure the operation and camera
-* Configure the IoT Hub connection in the Web Application
-* Deploy and test the Web Application
-
-This app showcases the following scenarios:
-
-* Count of people entering and exiting a space/store
-* Count of people entering and exiting a checkout area/zone and the time spent in the checkout line (dwell time)
-* Count of people wearing a face mask
-* Count of people violating social distancing guidelines
-
-## Prerequisites
-
-* Azure subscription - [create one for free](https://azure.microsoft.com/free/cognitive-services/)
-* Basic understanding of Azure IoT Edge deployment configurations, and an [Azure IoT Hub](../../iot-hub/index.yml)
-* A configured [host computer](spatial-analysis-container.md).
-
-## Deploy the Spatial Analysis container
-
-Follow [the Host Computer Setup](./spatial-analysis-container.md) to configure the host computer and connect an IoT Edge device to Azure IoT Hub.
-
-### Deploy an Azure IoT Hub service in your subscription
-
-First, create an instance of an Azure IoT Hub service with either the Standard pricing tier (S1) or the Free tier (F1). Follow these instructions to create this instance using the Azure CLI.
-
-Fill in the required parameters:
-* Subscription: The name or ID of your Azure Subscription
-* Resource group: Create a name for your resource group
-* IoT Hub Name: Create a name for your IoT Hub
-* Edge Device Name: Create a name for your Edge Device
-
-```azurecli
-az login
-az account set --subscription <name or ID of Azure Subscription>
-az group create --name "<Resource Group Name>" --location "WestUS"
-
-az iot hub create --name "<IoT Hub Name>" --sku S1 --resource-group "<Resource Group Name>"
-
-az iot hub device-identity create --hub-name "<IoT Hub Name>" --device-id "<Edge Device Name>" --edge-enabled
-```
-
-### Deploy the container on Azure IoT Edge on the host computer
-
-The next step is to deploy the **spatial analysis** container as an IoT Module on the host computer using the Azure CLI. The deployment process requires a Deployment Manifest file which outlines the required containers, variables, and configurations for your deployment. A sample Deployment Manifest can be found at [DeploymentManifest.json](https://github.com/Azure-Samples/cognitive-services-spatial-analysis/blob/main/deployment.json) which includes pre-built configurations for all scenarios.
-
-### Set environment variables
-
-Most of the **Environment Variables** for the IoT Edge Module are already set in the sample *DeploymentManifest.json* files linked above. In the file, search for the `BILLING` and `APIKEY` environment variables, shown below. Replace the values with the endpoint URI and the API key that you created earlier. Ensure that the EULA value is set to "accept".
-
-```json
-"EULA": {
- "value": "accept"
-},
-"BILLING":{
- "value": "<Use the endpoint from your Computer Vision resource>"
-},
-"APIKEY":{
- "value": "<Use a key from your Computer Vision resource>"
-}
-```
-
-### Configure the operation parameters
-
-If you're using the sample [DeploymentManifest.json](https://github.com/Azure-Samples/cognitive-services-spatial-analysis/blob/main/deployment.json), which already has all of the required configurations (operations, recorded video file URLs, zones, and so on), you can skip to the **Execute the deployment** section.
-
-Now that the initial configuration of the spatial analysis container is complete, the next step is to configure the operations parameters and add them to the deployment.
-
-The first step is to update the sample [DeploymentManifest.json](https://github.com/Azure-Samples/cognitive-services-spatial-analysis/blob/main/deployment.json) and configure the desired operation. For example, configuration for cognitiveservices.vision.spatialanalysis-personcount is shown below:
-
-```json
-"personcount": {
- "operationId": "cognitiveservices.vision.spatialanalysis-personcount",
- "version": 1,
- "enabled": true,
- "parameters": {
- "VIDEO_URL": "<Replace RTSP URL here>",
- "VIDEO_SOURCE_ID": "<Replace with friendly name>",
- "VIDEO_IS_LIVE":true,
- "DETECTOR_NODE_CONFIG": "{ \"gpu_index\": 0 }",
-        "SPACEANALYTICS_CONFIG": "{\"zones\":[{\"name\":\"queue\",\"polygon\":[<Replace with your values>], \"events\": [{\"type\":\"count\"}], \"threshold\":<use 0 for no threshold>}]}"
- }
-},
-```
-
-After the deployment manifest is updated, follow the camera manufacturer's instructions to install the camera, configure the camera URL, and configure the user name and password.
-
-Next, set `VIDEO_URL` to the RTSP URL of the camera, and the credentials for connecting to the camera.
-
-If the edge device has more than one GPU, select the GPU on which to run this operation. Make sure you load balance the operations so that no more than eight operations run on a single GPU at a time.
-
-Next, configure the zone in which you want to count people. To configure the zone polygon, first follow the manufacturer's instructions to retrieve a frame from the camera. To determine each vertex of the polygon, select a point on the frame, take the x,y pixel coordinates of the point relative to the top-left corner of the frame, and divide by the corresponding frame dimensions. Set the results as the x,y coordinates of the vertex. You can set the zone polygon configuration in the `SPACEANALYTICS_CONFIG` field.
-
-The following is a sample video frame that shows how the vertex coordinates are calculated for a frame of size 1920 x 1080.
-![Sample video frame](./media/spatial-analysis/sample-video-frame.jpg)
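-
-As a rough sketch of that arithmetic (the pixel coordinates below are placeholders, not values taken from the sample frame), the normalization for a 1920 x 1080 frame could look like this:
-
-```python
-FRAME_WIDTH, FRAME_HEIGHT = 1920, 1080
-
-# Placeholder pixel coordinates of the zone's corners, read off a camera frame.
-pixel_vertices = [(480, 540), (480, 970), (1150, 970), (1150, 540), (480, 540)]
-
-# Divide by the frame dimensions to get the normalized vertices that the
-# "polygon" field of SPACEANALYTICS_CONFIG expects.
-polygon = [[round(x / FRAME_WIDTH, 3), round(y / FRAME_HEIGHT, 3)] for x, y in pixel_vertices]
-print(polygon)  # [[0.25, 0.5], [0.25, 0.898], [0.599, 0.898], [0.599, 0.5], [0.25, 0.5]]
-```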
-
-You can also select a confidence threshold for when detected people are counted and events are generated. Set the threshold to 0 if you'd like all events to be output.
-
-### Execute the deployment
-
-Now that the deployment manifest is complete, use this command in the Azure CLI to deploy the container on the host computer as an IoT Edge Module.
-
-```azurecli
-az login
-az extension add --name azure-iot
-az iot edge set-modules --hub-name "<IoT Hub name>" --device-id "<IoT Edge device name>" --content DeploymentManifest.json --subscription "<subscriptionId>"
-```
-
-Fill in the required parameters:
-
-* IoT Hub Name: Your Azure IoT Hub name
-* DeploymentManifest.json: The name of your deployment file
-* IoT Edge device name: The IoT Edge device name of your host computer
-* Subscription: Your subscription ID or name
-
-This command will begin the deployment, and you can view the deployment status in your Azure IoT Hub instance in the Azure portal. The status may show as *417 - The device's deployment configuration is not set* until the device finishes downloading the container images and starts running.
-
-### Validate that the deployment was successful
-
-Locate the *Runtime Status* in the IoT Edge Module Settings for the spatial-analysis module in your IoT Hub instance on the Azure portal. The **Desired Value** and **Reported Value** for the *Runtime Status* should say `Running`. See below for what this will look like on the Azure portal.
-
-![Example deployment verification](./media/spatial-analysis/deployment-verification.png)
-
-At this point, the spatial analysis container is running the operation. It emits AI insights for the operations and routes these insights as telemetry to your Azure IoT Hub instance. To configure additional cameras, you can update the deployment manifest file and execute the deployment again.
-
-## Spatial Analysis Web Application
-
-The Spatial Analysis Web Application enables developers to quickly configure a sample web app, host it in their Azure environment, and use the app to validate E2E events.
-
-## Build Docker Image
-
-Follow the [guide](https://github.com/Azure-Samples/cognitive-services-spatial-analysis/blob/main/README.md#docker-image) to build and push the image to an Azure Container Registry in your subscription.
-
-## Setup Steps
-
-To install the container, create a new Azure App Service and fill in the required parameters. Then go to the **Docker** Tab and select **Single Container**, then **Azure Container Registry**. Use your instance of Azure Container Registry where you pushed the image above.
-
-![Enter image details](./media/spatial-analysis/solution-app-create-screen.png)
-
-After entering the above parameters, click on **Review+Create** and create the app.
-
-### Configure the app
-
-Wait for setup to complete, and navigate to your resource in the Azure portal. Go to the **configuration** section and add the following two **application settings**.
-
-* `EventHubConsumerGroup` - The string name of the consumer group from your Azure IoT Hub. You can create a new consumer group in your IoT Hub or use the default group.
-* `IotHubConnectionString` - The connection string to your Azure IoT Hub. This can be retrieved from the keys section of your Azure IoT Hub resource.
-![Configure Parameters](./media/spatial-analysis/solution-app-config-page.png)
-
-Once these two settings are added, select **Save**. Then select **Authentication/Authorization** in the left navigation menu, and update it with the desired level of authentication. We recommend Azure Active Directory (Azure AD) express.
-
-### Test the app
-
-Go to the Azure App Service and verify that the deployment was successful and the web app is running. Navigate to the configured URL: `<yourapp>.azurewebsites.net` to view the running app.
-
-![Test the deployment](./media/spatial-analysis/solution-app-output.png)
-
-## Get the PersonCount source code
-If you'd like to view or modify the source code for this application, you can find it [on GitHub](https://github.com/Azure-Samples/cognitive-services-spatial-analysis).
-
-## Next steps
-
-* [Configure Spatial Analysis operations](./spatial-analysis-operations.md)
-* [Logging and troubleshooting](spatial-analysis-logging.md)
-* [Camera placement guide](spatial-analysis-camera-placement.md)
-* [Zone and line placement guide](spatial-analysis-zone-line-placement.md)
cognitive-services Spatial Analysis Zone Line Placement https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Computer-vision/spatial-analysis-zone-line-placement.md
- Title: Spatial Analysis zone and line placement-
-description: Learn how to set up zones and lines with Spatial Analysis
------ Previously updated : 06/08/2021----
-# Zone and Line Placement Guide
-
-This article provides guidelines for how to define zones and lines for Spatial Analysis operations to achieve accurate analysis of people's movements in a space. This applies to all operations.
-
-Zones and lines are defined using the JSON `SPACEANALYTICS_CONFIG` parameter. See the [Spatial Analysis operations](spatial-analysis-operations.md) article for more information.
-
-## Guidelines for drawing zones
-
-Remember that every space is different; you'll need to update the position or size depending on your needs.
-
-If you want to see a specific section of your camera view, create the largest zone that you can, covering the specific floor area that you're interested in but not including other areas that you're not interested in. This increases the accuracy of the data collected and prevents false positives from areas you don't want to track. Be careful when placing the corners of your polygon and make sure they're not outside the area you want to track.
-
-### Example of a well-shaped zone
-
-The zone should be big enough to accommodate three people standing along each edge and focused on the area of interest. Spatial Analysis will identify people whose feet are placed in the zone, so when drawing zones on the 2D image, imagine the zone as a carpet lying on the floor.
-
-![Well-shaped zone](./media/spatial-analysis/zone-good-example.png)
-
-### Examples of zones that aren't well-shaped
-
-The following examples show poorly shaped zones. In these examples, the area of interest is the space in front of the *It's Game Time* display.
-
-**Zone is not on the floor.**
-
-![Poorly shaped zone](./media/spatial-analysis/zone-not-on-floor.png)
-
-**Zone is too small.**
-
-![zone is too small](./media/spatial-analysis/zone-too-small.png)
-
-**Zone doesn't fully capture the area around the display.**
-
-![zone doesn't fully capture the area around end cap](./media/spatial-analysis/zone-bad-capture.png)
-
-**Zone is too close to the edge of the camera image and doesn't capture the right display.**
-
-![zone is too close to the edge of the camera image and doesn't capture the right display](./media/spatial-analysis/zone-edge.png)
-
-**Zone is partially blocked by the shelf, so people and floor aren't fully visible.**
-
-![zone is partially blocked, so people aren't fully visible](./media/spatial-analysis/zone-partially-blocked.png)
-
-### Example of a well-shaped line
-
-The line should be long enough to accommodate the entire entrance. Spatial Analysis will identify people whose feet cross the line, so when drawing lines on the 2D image, imagine you're drawing them as if they lie on the floor.
-
-If possible, extend the line wider than the actual entrance, as long as doing so won't result in extra crossings (as in the image below, where the line is against a wall).
-
-![Well-shaped line](./media/spatial-analysis/zone-line-good-example.png)
-
-### Examples of lines that aren't well-shaped
-
-The following examples show poorly defined lines.
-
-**Line doesn't cover the entire entry way on the floor.**
-
-![Line doesn't cover the entire entry way on the floor](./media/spatial-analysis/zone-line-bad-coverage.png)
-
-**Line is too high and doesn't cover the entirety of the door.**
-
-![Line is too high and doesn't cover the entirety of the door](./media/spatial-analysis/zone-line-too-high.png)
-
-## Next steps
-
-* [Deploy a People Counting web application](spatial-analysis-web-app.md)
-* [Configure Spatial Analysis operations](./spatial-analysis-operations.md)
-* [Logging and troubleshooting](spatial-analysis-logging.md)
-* [Camera placement guide](spatial-analysis-camera-placement.md)
cognitive-services Upgrade Api Versions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Computer-vision/upgrade-api-versions.md
- Title: Upgrade to Read v3.0 of the Computer Vision API-
-description: Learn how to upgrade to Computer Vision v3.0 Read API from v2.0/v2.1.
------- Previously updated : 08/11/2020-----
-# Upgrade from Read v2.x to Read v3.x
-
-This guide shows how to upgrade your existing container or cloud API code from Read v2.x to Read v3.x.
-
-## Determine your API path
-Use the following table to determine the **version string** in the API path based on the Read 3.x version you're migrating to.
-
-|Product type| Version | Version string in 3.x API path |
-|:--|:-|:-|
-|Service | Read 3.0, 3.1, or 3.2 | **v3.0**, **v3.1**, or **v3.2** respectively |
-|Service | Read 3.2 preview | **v3.2-preview.1** |
-|Container | Read 3.0 preview or Read 3.1 preview | **v3.0** or **v3.1-preview.2** respectively |
--
-Next, use the following sections to narrow your operations and replace the **version string** in your API path with the value from the table. For example, for **Read v3.2 preview** cloud and container versions, update the API path to **https://{endpoint}/vision/v3.2-preview.1/read/analyze[?language]**.
-
-## Service/Container
-
-### `Batch Read File`
-
-|Read 2.x |Read 3.x |
-|-|--|
-|https://{endpoint}/vision/**v2.0/read/core/asyncBatchAnalyze** |https://{endpoint}/vision/<**version string**>/read/analyze[?language]|
-
-A new optional _language_ parameter is available. If you don't know the language of your document, or it may be multilingual, don't include it.
-
-### `Get Read Results`
-
-|Read 2.x |Read 3.x |
-|-|--|
-|https://{endpoint}/vision/**v2.0/read/operations**/{operationId} |https://{endpoint}/vision/<**version string**>/read/analyzeResults/{operationId}|
-
-### `Get Read Operation Result` status flag
-
-When the call to `Get Read Operation Result` is successful, it returns a status string field in the JSON body.
-
-|Read 2.x |Read 3.x |
-|-|--|
-|`"NotStarted"` | `"notStarted"`|
-|`"Running"` | `"running"`|
-|`"Failed"` | `"failed"`|
-|`"Succeeded"` | `"succeeded"`|
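-
-As a rough end-to-end sketch of the new flow (the endpoint, key, and image URL below are placeholder assumptions), submit the read call and then poll until one of the lowercase status values is returned:
-
-```python
-import time
-import requests
-
-ENDPOINT = "https://<your-computer-vision-resource>.cognitiveservices.azure.com"
-KEY = "<your-key>"
-headers = {"Ocp-Apim-Subscription-Key": KEY, "Content-Type": "application/json"}
-
-# Submit the image; the operation URL is returned in the Operation-Location header.
-submit = requests.post(
-    f"{ENDPOINT}/vision/v3.2/read/analyze",
-    headers=headers,
-    json={"url": "https://<a-publicly-reachable-image-url>"},
-)
-submit.raise_for_status()
-operation_url = submit.headers["Operation-Location"]
-
-# Poll with the new lowercase status values.
-result = requests.get(operation_url, headers=headers).json()
-while result["status"] not in ("succeeded", "failed"):
-    time.sleep(1)
-    result = requests.get(operation_url, headers=headers).json()
-
-if result["status"] == "succeeded":
-    for page in result["analyzeResult"]["readResults"]:
-        for line in page["lines"]:
-            print(line["text"])
-```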
-
-### API response (JSON)
-
-Note the following changes to the JSON:
-* In v2.x, `Get Read Operation Result` returns the OCR recognition JSON when the status is `Succeeded`. In v3.0, this value is `succeeded`.
-* To get the root for the page array, change the JSON hierarchy from `recognitionResults` to `analyzeResult`/`readResults`. The per-page line and words JSON hierarchy remains unchanged, so no code changes are required.
-* The page angle `clockwiseOrientation` has been renamed to `angle` and the range has been changed from 0 - 360 degrees to -180 to 180 degrees. Depending on your code, you may or may not have to make changes as most math functions can handle either range.
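-
-If downstream code still expects the old 0-360 convention, a small conversion (a sketch, not part of the API) maps between the two ranges:
-
-```python
-def to_v3_range(clockwise_orientation: float) -> float:
-    """Map a v2.x clockwiseOrientation in [0, 360) to the v3.x (-180, 180] range."""
-    return clockwise_orientation if clockwise_orientation <= 180 else clockwise_orientation - 360
-
-def to_legacy_range(angle: float) -> float:
-    """Map a v3.x angle back to the v2.x 0-360 convention."""
-    return angle % 360
-
-print(to_v3_range(349.59))  # -10.41
-```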
-
-The v3.0 API also introduces the following improvements you can optionally use:
-* `createdDateTime` and `lastUpdatedDateTime` are added so you can track the duration of processing.
-* `version` tells you the version of the API used to generate results
-* A per-word `confidence` has been added. This value is calibrated so that a value of 0.95 means that there is a 95% chance the recognition is correct. The confidence score can be used to select which text to send to human review.
-
-
-In 2.X, the output format is as follows:
-
-```json
- {
- {
- "status": "Succeeded",
- "recognitionResults": [
- {
- "page": 1,
- "language": "en",
- "clockwiseOrientation": 349.59,
- "width": 2661,
- "height": 1901,
- "unit": "pixel",
- "lines": [
- {
- "boundingBox": [
- 67,
- 646,
- 2582,
- 713,
- 2580,
- 876,
- 67,
- 821
- ],
- "text": "The quick brown fox jumps",
- "words": [
- {
- "boundingBox": [
- 143,
- 650,
- 435,
- 661,
- 436,
- 823,
- 144,
- 824
- ],
- "text": "The",
- },
- // The rest of result is omitted for brevity
-
-}
-```
-
-In v3.0, it has been adjusted:
-
-```json
- {
- {
- "status": "succeeded",
- "createdDateTime": "2020-05-28T05:13:21Z",
- "lastUpdatedDateTime": "2020-05-28T05:13:22Z",
- "analyzeResult": {
- "version": "3.0.0",
- "readResults": [
- {
- "page": 1,
- "language": "en",
- "angle": 0.8551,
- "width": 2661,
- "height": 1901,
- "unit": "pixel",
- "lines": [
- {
- "boundingBox": [
- 67,
- 646,
- 2582,
- 713,
- 2580,
- 876,
- 67,
- 821
- ],
- "text": "The quick brown fox jumps",
- "words": [
- {
- "boundingBox": [
- 143,
- 650,
- 435,
- 661,
- 436,
- 823,
- 144,
- 824
- ],
- "text": "The",
- "confidence": 0.958
- },
- // The rest of result is omitted for brevity
-
- }
-```
-
-## Service only
-
-### `Recognize Text`
-`Recognize Text` is a *preview* operation that is being *deprecated in all versions of Computer Vision API*. You must migrate from `Recognize Text` to `Read` (v3.0) or `Batch Read File` (v2.0, v2.1). v3.0 of `Read` includes newer, better models for text recognition and other features, so it's recommended. To upgrade from `Recognize Text` to `Read`:
-
-|Recognize Text 2.x |Read 3.x |
-|-|--|
-|https://{endpoint}/vision/**v2.0/recognizeText[?mode]**|https://{endpoint}/vision/<**version string**>/read/analyze[?language]|
-
-The _mode_ parameter isn't supported in `Read`. Both handwritten and printed text are automatically supported.
-
-A new optional _language_ parameter is available in v3.0. If you don't know the language of your document, or it may be multilingual, don't include it.
-
-### `Get Recognize Text Operation Result`
-
-|Recognize Text 2.x |Read 3.x |
-|-|--|
-|https://{endpoint}/vision/**v2.0/textOperations/**{operationId}|https://{endpoint}/vision/<**version string**>/read/analyzeResults/{operationId}|
-
-### `Get Recognize Text Operation Result` status flags
-When the call to `Get Recognize Text Operation Result` is successful, it returns a status string field in the JSON body.
-
-|Recognize Text 2.x |Read 3.x |
-|-|--|
-|`"NotStarted"` | `"notStarted"`|
-|`"Running"` | `"running"`|
-|`"Failed"` | `"failed"`|
-|`"Succeeded"` | `"succeeded"`|
-
-### API response (JSON)
-
-Note the following changes to the JSON:
-* In v2.x, `Get Read Operation Result` returns the OCR recognition JSON when the status is `Succeeded`. In v3.x, this value is `succeeded`.
-* To get the root for the page array, change the JSON hierarchy from `recognitionResult` to `analyzeResult`/`readResults`. The per-page line and words JSON hierarchy remains unchanged, so no code changes are required.
-
-The v3.0 API also introduces the following improvements you can optionally use. See the API reference for more details:
-* `createdDateTime` and `lastUpdatedDateTime` are added so you can track the duration of processing.
-* `version` tells you the version of the API used to generate results
-* A per-word `confidence` has been added. This value is calibrated so that a value of 0.95 means that there is a 95% chance the recognition is correct. The confidence score can be used to select which text to send to human review.
-* `angle` gives the general orientation of the text in the clockwise direction, measured in degrees in the range (-180, 180].
-* `width` and `height` give you the dimensions of your document, and `unit` provides the unit of those dimensions (pixels or inches, depending on the document type).
-* `page` indicates the page number; multipage documents are supported.
-* `language` is the input language of the document (from the optional _language_ parameter).
--
-In 2.X, the output format is as follows:
-
-```json
- {
- {
- "status": "Succeeded",
- "recognitionResult": [
- {
- "lines": [
- {
- "boundingBox": [
- 67,
- 646,
- 2582,
- 713,
- 2580,
- 876,
- 67,
- 821
- ],
- "text": "The quick brown fox jumps",
- "words": [
- {
- "boundingBox": [
- 143,
- 650,
- 435,
- 661,
- 436,
- 823,
- 144,
- 824
- ],
- "text": "The",
- },
- // The rest of result is omitted for brevity
-
- }
-```
-
-In v3.x, it has been adjusted:
-
-```json
- {
- {
- "status": "succeeded",
- "createdDateTime": "2020-05-28T05:13:21Z",
- "lastUpdatedDateTime": "2020-05-28T05:13:22Z",
- "analyzeResult": {
- "version": "3.0.0",
- "readResults": [
- {
- "page": 1,
- "angle": 0.8551,
- "width": 2661,
- "height": 1901,
- "unit": "pixel",
- "lines": [
- {
- "boundingBox": [
- 67,
- 646,
- 2582,
- 713,
- 2580,
- 876,
- 67,
- 821
- ],
- "text": "The quick brown fox jumps",
- "words": [
- {
- "boundingBox": [
- 143,
- 650,
- 435,
- 661,
- 436,
- 823,
- 144,
- 824
- ],
- "text": "The",
- "confidence": 0.958
- },
- // The rest of result is omitted for brevity
-
- }
-```
-
-## Container only
-
-### `Synchronous Read`
-
-|Read 2.0 |Read 3.x |
-|-|--|
-|https://{endpoint}/vision/**v2.0/read/core/Analyze** |https://{endpoint}/vision/<**version string**>/read/syncAnalyze[?language]|
cognitive-services Use Case Alt Text https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Computer-vision/use-case-alt-text.md
- Title: "Overview: Generate alt text of images with Image Analysis"-
-description: Grow your customer base by making your products and services more accessible. Generate a description of an image in human-readable language, using complete sentences.
------- Previously updated : 03/17/2023----
-# Overview: Generate image alt-text with Image Analysis
-
-## What is alt text?
-
-Alt text, or alternative text, is an HTML attribute added to the `<img>` tag that displays images on an application or web page. It looks like this in plain HTML code:
-
-`<img src="elephant.jpg" alt="An elephant in a grassland">`
-
-Alt text enables website owners to describe an image in plain text. These image descriptions improve accessibility by enabling screen readers such as Microsoft Narrator, JAWS, and NVDA to accurately communicate image content to their visually impaired and blind users.
-
-Alt text is also vital for image search engine optimization (SEO). It helps search engines understand the visual content in your images. The search engine is then better able to include and rank your website in search results when users search for the content in your website.
-
-## Auto-generate alt text with Image Analysis
-
-Image Analysis offers image captioning models that generate one-sentence descriptions of image visual content. You can use these AI generated captions as alt text for your images.
--
-Auto-generated caption: "An elephant in a grassland."
-
-Microsoft's own products such as PowerPoint, Word, and Edge browser use image captioning by Image Analysis to generate alt text.
--
-## Benefits for your website
-- **Improve accessibility and user experience for blind and low-vision users**. Alt text makes visual information in images available to screen readers used by blind and low-vision users.
-- **Meet legal compliance requirements**. Some websites may be legally required to remove all accessibility barriers. Using alt text for accessibility helps website owners minimize risk of legal action now and in the future.
-- **Make your website more discoverable and searchable**. Image alt text helps search engine crawlers find images on your website more easily and rank them higher in search results.
-## Frequently Asked Questions
-
-### What languages are image captions available in?
-
-Image captions are available in English, Chinese, Portuguese, Japanese, and Spanish in Image Analysis 3.2 API. In the Image Analysis 4.0 API (preview), image captions are only available in English.
-
-### What confidence threshold should I use?
-
-To ensure accurate alt text for all images, you can choose to only accept captions above a certain confidence level. The right confidence level varies for each user depending on the type of images and usage scenario.
-
-In general, we advise a confidence threshold of `0.4` for the Image Analysis 3.2 API and of `0.0` for the Image Analysis 4.0 API (preview).
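-
-As a rough illustration (assuming the Image Analysis 3.2 Describe REST endpoint, with placeholder endpoint, key, and image URL values), you might apply that threshold before accepting a caption as alt text:
-
-```python
-import requests
-
-ENDPOINT = "https://<your-computer-vision-resource>.cognitiveservices.azure.com"
-KEY = "<your-key>"
-CONFIDENCE_THRESHOLD = 0.4  # suggested threshold for the Image Analysis 3.2 API
-
-response = requests.post(
-    f"{ENDPOINT}/vision/v3.2/describe",
-    headers={"Ocp-Apim-Subscription-Key": KEY, "Content-Type": "application/json"},
-    json={"url": "https://<your-site>/images/elephant.jpg"},
-)
-response.raise_for_status()
-captions = response.json()["description"]["captions"]
-
-# Use the first caption that clears the threshold; otherwise fall back to a
-# human-written description.
-alt_text = next(
-    (c["text"] for c in captions if c["confidence"] >= CONFIDENCE_THRESHOLD),
-    None,
-)
-```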
-
-### What can I do about embarrassing or erroneous captions?
-
-On rare occasions, image captions can contain embarrassing errors, such as labeling a male-identifying person as a "woman" or labeling an adult woman as a "girl". We encourage users to consider using the latest Image Analysis 4.0 API (preview) which eliminates some errors by supporting gender-neutral captions.
-
-Please report any embarrassing or offensive captions by going to the [Azure portal](https://ms.portal.azure.com/#home) and navigating to the **Feedback** button in the top right.
-
-## Next Steps
-Follow a quickstart to begin automatically generating alt text by using image captioning on Image Analysis.
-
-> [!div class="nextstepaction"]
-> [Image Analysis quickstart](./quickstarts-sdk/image-analysis-client-library-40.md)
cognitive-services Use Case Dwell Time https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Computer-vision/use-case-dwell-time.md
- Title: "Overview: Monitor dwell time in front of displays"-
-description: Spatial Analysis can provide real-time information about how long customers spend in front of a display in a retail store.
------- Previously updated : 07/22/2022---
-# Overview: Monitor dwell time in front of displays with Spatial Analysis
-
-Spatial Analysis can provide real-time information about how long customers spend in front of a display in a retail store. The service monitors the length of time customers spend in a zone you specify. You can use this information to track customer engagement with promotions/displays within a store or understand customers' preference toward specific products.
--
-## Key features
-
-You can deploy an end-to-end solution to track customers' time spent in front of displays.
-- **Customize areas to monitor**: with Spatial Analysis, you can customize any space to monitor. It can be spaces in front of promotions or spaces in front of displays.
-- **Count individual entrances**: you can measure how long people spend within these spaces.
-- **Customize models to differentiate between shoppers and employees**: with the customization options offered, you can train and deploy models to differentiate shoppers and employees so that you only generate data about customer behavior.
-## Benefits for your business
-
-In-store dwell time is an important metric to help store managers track customers' behaviors and generate insights about customers to make data-driven decisions. Tracking in-store dwell time will help store managers in the following ways:
-* **Generate data-driven insights about customers**: by tracking and analyzing the time customers spend in front of specific displays, you can understand how and why customers react to your displays.
-* **Collect real-time customer reactions**: instead of sending and receiving feedback from customers, you have access to customer interaction with displays in real time.
-* **Measure display/promotion effectiveness**: You can track, analyze and compare footfall and time spent by your customers to measure the effectiveness of promotions or displays.
-* **Manage inventory**: You can track and monitor changes in foot traffic and time spent by your customers to modify shift plans and inventory.
-* **Make data-driven decisions**: with data about your customers, you're empowered to make decisions that improve customer experience and satisfy customer needs.
--
-## Next steps
-
-Get started using Spatial Analysis in a container.
-
-> [!div class="nextstepaction"]
-> [Install and run the Spatial Analysis container](./spatial-analysis-container.md)
-
cognitive-services Use Case Identity Verification https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Computer-vision/use-case-identity-verification.md
- Title: "Overview: Identity verification with Face"-
-description: Provide the best-in-class face verification experience in your business solution using Azure Face service. You can verify someone's identity against a government-issued ID card like a passport or driver's license.
- Previously updated : 07/22/2022
-# Overview: Identity verification with Face
-
-Provide the best-in-class face verification experience in your business solution using Azure Face service. You can verify someone's identity against a government-issued ID card like a passport or driver's license. Use this verification to grant access to digital or physical services or recover an account. Specific access scenarios include opening a new account, verifying a user, or proctoring an online assessment. Identity verification can be done when a person is onboarded to your service, and repeated when they access a digital or physical service.
--
-## Benefits for your business
-
-Identity verification verifies that the user is who they claim to be. Most organizations require some type of identity verification. Biometric identity verification with Face service provides the following benefits to your business:
-
-* Seamless end user experience: instead of entering a passcode manually, users can look at a camera to get access.
-* Improved security: biometric verification is more secure than alternative verification methods that request knowledge from users, since that knowledge can be more easily obtained by bad actors.
-
-## Key features
-
-Face service can power an end-to-end, low-friction, high-accuracy identity verification solution.
-
-* Face Detection ("Detection" / "Detect") answers the question, "Are there one or more human faces in this image?" Detection finds human faces in an image and returns bounding boxes indicating their locations. Face detection models alone don't find individually identifying features, only a bounding box. All of the other operations are dependent on Detection: before Face can identify or verify a person (see below), it must know the locations of the faces to be recognized.
-* Face Detection for attributes: The Detect API can optionally be used to analyze attributes about each face, such as head pose and facial landmarks, using other AI models. The attribute functionality is separate from the verification and identification functionality of Face. The full list of attributes is described in the [Face detection concept guide](concept-face-detection.md). The values returned by the API for each attribute are predictions of the perceived attributes and are best used to make aggregated approximations of attribute representation rather than individual assessments.
-* Face Verification ("Verification" / "Verify") builds on Detect and addresses the question, "Are these two images of the same person?" Verification is also called "one-to-one" matching because the probe image is compared to only one enrolled template. Verification can be used in identity verification or access control scenarios to verify that a picture matches a previously captured image (such as from a photo from a government-issued ID card).
-* Face Group ("Group") also builds on Detect and creates smaller groups of faces that look similar to each other from all enrollment templates.
-
-## Next steps
-
-Follow a quickstart to do identity verification with Face.
-
-> [!div class="nextstepaction"]
-> [Identity verification quickstart](./quickstarts-sdk/identity-client-library.md)
cognitive-services Use Case Queue Time https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Computer-vision/use-case-queue-time.md
- Title: "Overview: Measure retail queue wait time with Spatial Analysis"-
-description: Spatial Analysis can generate real-time information about how long users are waiting in a queue in a retail store. A store manager can use this information to manage open checkout stations and employee shifts.
- Previously updated : 07/22/2022
-# Overview: Measure queue wait time with Spatial Analysis
-
-Spatial Analysis can generate real-time information about how long users are waiting in a queue in a retail store. The service monitors the length of time customers spend standing inside a checkout zone you specify. A store manager can use this information to manage open checkout stations and employee shifts.
--
-## Key features
-
-You can deploy an end-to-end solution to track customers' queue wait time.
-* **Customize areas to monitor**: With Spatial Analysis, you can customize the space you want to monitor.
-* **Count individual entrances**: You can monitor how long people spend within these spaces.
-* **Customize models to differentiate between shoppers and employees**: with the customization options offered, you can train and deploy models to differentiate shoppers and employees so that you only generate data about customer behavior.
-
-## Benefits for your business
-
-By managing queue wait time and making informed decisions, you can help your business in the following ways:
-* **Improve customer experience**: customer needs are met effectively and quickly by equipping store managers and employees with actionable alerts triggered by long checkout lines.
-* **Reduce unnecessary operation costs**: optimize staffing needs by monitoring checkout lines and closing vacant checkout stations.
-* **Make long-term data-driven decisions**: predict ideal future store operations by analyzing the traffic at different times of the day and on different days.
-
-
-## Next steps
-
-Get started using Spatial Analysis in a container.
-
-> [!div class="nextstepaction"]
-> [Install and run the Spatial Analysis container](./spatial-analysis-container.md)
cognitive-services Vehicle Analysis https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Computer-vision/vehicle-analysis.md
- Title: Configure vehicle analysis containers
-description: Vehicle analysis provides each container with a common configuration framework, so that you can easily configure and manage compute, AI insight egress, logging, and security settings.
- Previously updated : 11/07/2022
-# Install and run vehicle analysis (preview)
-
-Vehicle analysis is a set of capabilities that, when used with the Spatial Analysis container, enable you to analyze real-time streaming video to understand vehicle characteristics and placement. In this article, you'll learn how to use the capabilities of the spatial analysis container to deploy vehicle analysis operations.
-
-## Prerequisites
-
-* To utilize the operations of vehicle analysis, you must first follow the steps to [install and run spatial analysis container](./spatial-analysis-container.md) including configuring your host machine, downloading and configuring your [DeploymentManifest.json](https://github.com/Azure-Samples/cognitive-services-sample-data-files/blob/master/ComputerVision/spatial-analysis/DeploymentManifest.json) file, executing the deployment, and setting up device [logging](spatial-analysis-logging.md).
- * When you configure your [DeploymentManifest.json](https://github.com/Azure-Samples/cognitive-services-sample-data-files/blob/master/ComputerVision/spatial-analysis/DeploymentManifest.json) file, refer to the steps below to add the graph configurations for vehicle analysis to your manifest prior to deploying the container. Or, once the spatial analysis container is up and running, you may add the graph configurations and follow the steps to redeploy. The steps below will outline how to properly configure your container.
-* If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin.
-
-> [!NOTE]
-> Make sure that the edge device has at least 50GB disk space available before deploying the Spatial Analysis module.
---
-## Vehicle analysis operations
-
-Similar to Spatial Analysis, vehicle analysis enables the analysis of real-time streaming video from camera devices. For each camera device you configure, the operations for vehicle analysis generate an output stream of JSON messages that are sent to your instance of Azure IoT Hub.
-
-The following operations for vehicle analysis are available in the current Spatial Analysis container. Vehicle analysis offers operations optimized for both GPU and CPU (CPU operations include the *.cpu* distinction).
-
-| Operation identifier | Description |
-| -- | - |
-| **cognitiveservices.vision.vehicleanalysis-vehiclecount-preview** and **cognitiveservices.vision.vehicleanalysis-vehiclecount.cpu-preview** | Counts vehicles parked in a designated zone in the camera's field of view. </br> Emits an initial _vehicleCountEvent_ event and then _vehicleCountEvent_ events when the count changes. |
-| **cognitiveservices.vision.vehicleanalysis-vehicleinpolygon-preview** and **cognitiveservices.vision.vehicleanalysis-vehicleinpolygon.cpu-preview** | Identifies when a vehicle parks in a designated parking region in the camera's field of view. </br> Emits a _vehicleInPolygonEvent_ event when the vehicle is parked inside a parking space. |
-
-In addition to exposing the vehicle location, other estimated attributes for **cognitiveservices.vision.vehicleanalysis-vehiclecount-preview**, **cognitiveservices.vision.vehicleanalysis-vehiclecount.cpu-preview**, **cognitiveservices.vision.vehicleanalysis-vehicleinpolygon-preview** and **cognitiveservices.vision.vehicleanalysis-vehicleinpolygon.cpu-preview** include vehicle color and vehicle type. All of the possible values for these attributes are found in the output section (below).
-
-### Operation parameters for vehicle analysis
-
-The following table shows the parameters required by each of the vehicle analysis operations. Many are shared with Spatial Analysis; the only one not shared is the `PARKING_REGIONS` setting. The full list of Spatial Analysis operation parameters can be found in the [Spatial Analysis container](./spatial-analysis-container.md?tabs=azure-stack-edge#iot-deployment-manifest) guide.
-
-| Operation parameters| Description|
-|||
-| Operation ID | The Operation Identifier from table above.|
-| enabled | Boolean: true or false|
-| VIDEO_URL| The RTSP URL for the camera device (for example: `rtsp://username:password@url`). Spatial Analysis supports H.264 encoded streams either through RTSP, HTTP, or MP4. |
-| VIDEO_SOURCE_ID | A friendly name for the camera device or video stream. This will be returned with the event JSON output.|
-| VIDEO_IS_LIVE| True for camera devices; false for recorded videos.|
-| VIDEO_DECODE_GPU_INDEX| Index specifying which GPU will decode the video frame. By default it is 0. This should be the same as the `gpu_index` in other node configurations like `VICA_NODE_CONFIG`, `DETECTOR_NODE_CONFIG`.|
-| PARKING_REGIONS | JSON configuration for zone and line as outlined below. </br> PARKING_REGIONS must contain four points in normalized coordinates ([0, 1]) that define a convex region (the points follow a clockwise or counterclockwise order).|
-| EVENT_OUTPUT_MODE | Can be ON_INPUT_RATE or ON_CHANGE. ON_INPUT_RATE will generate an output on every single frame received (one FPS). ON_CHANGE will generate an output when something changes (number of vehicles or parking spot occupancy). |
-| PARKING_SPOT_METHOD | Can be BOX or PROJECTION. BOX uses the overlap between the detected bounding box and a reference bounding box. PROJECTION projects the centroid point into the parking spot polygon drawn on the floor. This parameter is only used for the parking spot (vehicle in polygon) operation and can be omitted.|
-
-This is an example of a valid `PARKING_REGIONS` configuration:
-
-```json
-"{\"parking_slot1\": {\"type\": \"SingleSpot\", \"region\": [[0.20833333, 0.46203704], [0.3015625 , 0.66203704], [0.13229167, 0.7287037 ], [0.07395833, 0.51574074]]}}"
-```
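-
-Because `PARKING_REGIONS` is a JSON object serialized into a string, the quote escaping is easy to get wrong by hand. As an optional helper (not part of the container tooling), the following sketch builds the value with Python's `json` module:
-
-```python
-# Sketch: build the PARKING_REGIONS value (a JSON object serialized into a string).
-import json
-
-parking_regions = {
-    "parking_slot1": {
-        "type": "SingleSpot",
-        "region": [[0.20833333, 0.46203704], [0.3015625, 0.66203704],
-                   [0.13229167, 0.7287037], [0.07395833, 0.51574074]],
-    }
-}
-
-# Serialize once to get the string to assign to PARKING_REGIONS, and once more to
-# see the escaped form it takes when embedded inside a JSON deployment manifest.
-parking_regions_value = json.dumps(parking_regions)
-print(parking_regions_value)
-print(json.dumps(parking_regions_value))
-```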
-
-### Zone configuration for cognitiveservices.vision.vehicleanalysis-vehiclecount-preview and cognitiveservices.vision.vehicleanalysis-vehiclecount.cpu-preview
-
-This is an example of a JSON input for the `PARKING_REGIONS` parameter that configures a zone. You may configure multiple zones for this operation.
-
-```json
-{
- "zone1": {
- "type": "Queue",
- "region": [[x1, y1], [x2, y2], [x3, y3], [x4, y4]]
- }
-}
-```
-
-| Name | Type| Description|
-||||
-| `zones` | dictionary | Keys are the zone names and the values are a field with type and region.|
-| `name` | string| Friendly name for this zone.|
-| `region` | list| Each value pair represents the x,y coordinates of a vertex of the polygon. The polygon represents the area in which vehicles are tracked or counted. The float values represent the position of the vertex relative to the top-left corner. To calculate the absolute x, y values, multiply these values by the frame size (see the conversion sketch after this table).|
-| `type` | string| For **cognitiveservices.vision.vehicleanalysis-vehiclecount** this should be "Queue".|
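-
-As noted in the `region` row above, the vertices are normalized to the frame. A minimal conversion sketch, assuming a 1920x1080 frame:
-
-```python
-# Convert normalized vertices (values in [0, 1], relative to the top-left corner)
-# into absolute pixel coordinates for an assumed 1920x1080 frame.
-FRAME_WIDTH, FRAME_HEIGHT = 1920, 1080
-
-region = [[0.20833333, 0.46203704], [0.3015625, 0.66203704],
-          [0.13229167, 0.7287037], [0.07395833, 0.51574074]]
-
-pixels = [(round(x * FRAME_WIDTH), round(y * FRAME_HEIGHT)) for x, y in region]
-print(pixels)  # [(400, 499), (579, 715), (254, 787), (142, 557)]
-```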
--
-### Zone configuration for cognitiveservices.vision.vehicleanalysis-vehicleinpolygon-preview and cognitiveservices.vision.vehicleanalysis-vehicleinpolygon.cpu-preview
-
-This is an example of a JSON input for the `PARKING_REGIONS` parameter that configures a zone. You may configure multiple zones for this operation.
-
-```json
-{
- "zone1": {
- "type": "SingleSpot",
- "region": [[x1, y1], [x2, y2], [x3, y3], [x4, y4]]
- }
-}
-```
-
-| Name | Type| Description|
-||||
-| `zones` | dictionary | Keys are the zone names and the values are a field with type and region.|
-| `name` | string| Friendly name for this zone.|
-| `region` | list| Each value pair represents the x,y coordinates of a vertex of the polygon. The polygon represents the area in which vehicles are tracked or counted. The float values represent the position of the vertex relative to the top-left corner. To calculate the absolute x, y values, multiply these values by the frame size.|
-| `type` | string| For **cognitiveservices.vision.vehicleanalysis-vehicleinpolygon-preview** and **cognitiveservices.vision.vehicleanalysis-vehicleinpolygon.cpu-preview** this should be "SingleSpot".|
-
-## Configuring the vehicle analysis operations
-
-You must configure the graphs for vehicle analysis in your [DeploymentManifest.json](https://github.com/Azure-Samples/cognitive-services-sample-data-files/blob/master/ComputerVision/spatial-analysis/DeploymentManifest.json) file to enable the vehicle analysis operations. Below are sample graphs for vehicle analysis. You can add these JSON snippets to your deployment manifest in the "graphs" configuration section, configure the parameters for your video stream, and deploy the module. If you only intend to utilize the vehicle analysis capabilities, you may replace the existing graphs in the [DeploymentManifest.json](https://github.com/Azure-Samples/cognitive-services-sample-data-files/blob/master/ComputerVision/spatial-analysis/DeploymentManifest.json) with the vehicle analysis graphs.
-
-Below is the graph optimized for the **vehicle count** operation.
-
-```json
-"vehiclecount": {
- "operationId": "cognitiveservices.vision.vehicleanalysis-vehiclecount-preview",
- "version": 1,
- "enabled": true,
- "parameters": {
- "VIDEO_URL": "<Replace RTSP URL here>",
- "VIDEO_SOURCE_ID": "vehiclecountgraph",
- "VIDEO_DECODE_GPU_INDEX": 0,
- "VIDEO_IS_LIVE":true,
- "PARKING_REGIONS": "{\"1\": {\"type\": \"Queue\", \"region\": [[0.20833333, 0.46203704], [0.3015625, 0.66203704], [0.13229167, 0.7287037], [0.07395833, 0.51574074]]}}"
- }
-}
-```
-
-Below is the graph optimized for the **vehicle in polygon** operation, utilized for vehicles in a parking spot.
-
-```json
-"vehicleinpolygon": {
- "operationId": "cognitiveservices.vision.vehicleanalysis-vehicleinpolygon-preview",
- "version": 1,
- "enabled": true,
- "parameters": {
- "VIDEO_URL": "<Replace RTSP URL here>",
- "VIDEO_SOURCE_ID": "vehicleinpolygon",
- "VIDEO_DECODE_GPU_INDEX": 0,
- "VIDEO_IS_LIVE":true,
- "PARKING_REGIONS": "{\"1\": {\"type\": \"SingleSpot\", \"region\": [[0.20833333, 0.46203704], [0.3015625, 0.66203704], [0.13229167, 0.7287037], [0.07395833, 0.51574074]]}}"
- }
-}
-```
-
-## Sample cognitiveservices.vision.vehicleanalysis-vehiclecount-preview and cognitiveservices.vision.vehicleanalysis-vehiclecount.cpu-preview output
-
-The JSON below demonstrates an example of the vehicle count operation graph output.
-
-```json
-{
- "events": [
- {
- "id": "95144671-15f7-3816-95cf-2b62f7ff078b",
- "type": "vehicleCountEvent",
- "detectionIds": [
- "249acdb1-65a0-4aa4-9403-319c39e94817",
- "200900c1-9248-4487-8fed-4b0c48c0b78d"
- ],
- "properties": {
- "@type": "type.googleapis.com/microsoft.rtcv.insights.VehicleCountEventMetadata",
- "detectionCount": 3
- },
- "zone": "1",
- "trigger": ""
- }
- ],
- "sourceInfo": {
- "id": "vehiclecountTest",
- "timestamp": "2022-09-21T19:31:05.558Z",
- "frameId": "4",
- "width": 0,
- "height": 0,
- "imagePath": ""
- },
- "detections": [
- {
- "type": "vehicle",
- "id": "200900c1-9248-4487-8fed-4b0c48c0b78d",
- "region": {
- "type": "RECTANGLE",
- "points": [
- {
- "x": 0.5962499976158142,
- "y": 0.46250003576278687,
- "visible": false,
- "label": ""
- },
- {
- "x": 0.7544531226158142,
- "y": 0.64000004529953,
- "visible": false,
- "label": ""
- }
- ],
- "name": "",
- "normalizationType": "UNSPECIFIED_NORMALIZATION"
- },
- "confidence": 0.9934938549995422,
- "attributes": [
- {
- "task": "VehicleType",
- "label": "Bicycle",
- "confidence": 0.00012480001896619797
- },
- {
- "task": "VehicleType",
- "label": "Bus",
- "confidence": 1.4998147889855318e-05
- },
- {
- "task": "VehicleType",
- "label": "Car",
- "confidence": 0.9200984239578247
- },
- {
- "task": "VehicleType",
- "label": "Motorcycle",
- "confidence": 0.0058081308379769325
- },
- {
- "task": "VehicleType",
- "label": "Pickup_Truck",
- "confidence": 0.0001521655503893271
- },
- {
- "task": "VehicleType",
- "label": "SUV",
- "confidence": 0.04790870100259781
- },
- {
- "task": "VehicleType",
- "label": "Truck",
- "confidence": 1.346438511973247e-05
- },
- {
- "task": "VehicleType",
- "label": "Van/Minivan",
- "confidence": 0.02388562448322773
- },
- {
- "task": "VehicleType",
- "label": "type_other",
- "confidence": 0.0019937530159950256
- },
- {
- "task": "VehicleColor",
- "label": "Black",
- "confidence": 0.49258527159690857
- },
- {
- "task": "VehicleColor",
- "label": "Blue",
- "confidence": 0.47634875774383545
- },
- {
- "task": "VehicleColor",
- "label": "Brown/Beige",
- "confidence": 0.007451261859387159
- },
- {
- "task": "VehicleColor",
- "label": "Green",
- "confidence": 0.0002614705008454621
- },
- {
- "task": "VehicleColor",
- "label": "Grey",
- "confidence": 0.0005819533253088593
- },
- {
- "task": "VehicleColor",
- "label": "Red",
- "confidence": 0.0026496786158531904
- },
- {
- "task": "VehicleColor",
- "label": "Silver",
- "confidence": 0.012039118446409702
- },
- {
- "task": "VehicleColor",
- "label": "White",
- "confidence": 0.007863214239478111
- },
- {
- "task": "VehicleColor",
- "label": "Yellow/Gold",
- "confidence": 4.345366687630303e-05
- },
- {
- "task": "VehicleColor",
- "label": "color_other",
- "confidence": 0.0001758455764502287
- }
- ],
- "metadata": {
- "tracking_id": ""
- }
- },
- {
- "type": "vehicle",
- "id": "249acdb1-65a0-4aa4-9403-319c39e94817",
- "region": {
- "type": "RECTANGLE",
- "points": [
- {
- "x": 0.44859376549720764,
- "y": 0.5375000238418579,
- "visible": false,
- "label": ""
- },
- {
- "x": 0.6053906679153442,
- "y": 0.7537500262260437,
- "visible": false,
- "label": ""
- }
- ],
- "name": "",
- "normalizationType": "UNSPECIFIED_NORMALIZATION"
- },
- "confidence": 0.9893689751625061,
- "attributes": [
- {
- "task": "VehicleType",
- "label": "Bicycle",
- "confidence": 0.0003215899341739714
- },
- {
- "task": "VehicleType",
- "label": "Bus",
- "confidence": 3.258735432609683e-06
- },
- {
- "task": "VehicleType",
- "label": "Car",
- "confidence": 0.825579047203064
- },
- {
- "task": "VehicleType",
- "label": "Motorcycle",
- "confidence": 0.14065399765968323
- },
- {
- "task": "VehicleType",
- "label": "Pickup_Truck",
- "confidence": 0.00044341650209389627
- },
- {
- "task": "VehicleType",
- "label": "SUV",
- "confidence": 0.02949284389615059
- },
- {
- "task": "VehicleType",
- "label": "Truck",
- "confidence": 1.625348158995621e-05
- },
- {
- "task": "VehicleType",
- "label": "Van/Minivan",
- "confidence": 0.003406822681427002
- },
- {
- "task": "VehicleType",
- "label": "type_other",
- "confidence": 8.27941985335201e-05
- },
- {
- "task": "VehicleColor",
- "label": "Black",
- "confidence": 2.028317430813331e-05
- },
- {
- "task": "VehicleColor",
- "label": "Blue",
- "confidence": 0.00022600525699090213
- },
- {
- "task": "VehicleColor",
- "label": "Brown/Beige",
- "confidence": 3.327144668219262e-06
- },
- {
- "task": "VehicleColor",
- "label": "Green",
- "confidence": 5.160827640793286e-05
- },
- {
- "task": "VehicleColor",
- "label": "Grey",
- "confidence": 5.614096517092548e-05
- },
- {
- "task": "VehicleColor",
- "label": "Red",
- "confidence": 1.0396311012073056e-07
- },
- {
- "task": "VehicleColor",
- "label": "Silver",
- "confidence": 0.9996315240859985
- },
- {
- "task": "VehicleColor",
- "label": "White",
- "confidence": 1.0256461791868787e-05
- },
- {
- "task": "VehicleColor",
- "label": "Yellow/Gold",
- "confidence": 1.8006812751991674e-07
- },
- {
- "task": "VehicleColor",
- "label": "color_other",
- "confidence": 5.103976263853838e-07
- }
- ],
- "metadata": {
- "tracking_id": ""
- }
- }
- ],
- "schemaVersion": "2.0"
-}
-```
-
-| Event Field Name | Type| Description|
-||||
-| `id` | string| Event ID|
-| `type` | string| Event type|
-| `detectionIds` | array| Array of unique identifiers of the vehicle detections that triggered this event|
-| `properties` | collection| Collection of values [detectionCount]|
-| `zone` | string | The "name" field of the polygon that represents the zone that was crossed|
-| `trigger` | string| Not used |
-
-| Detections Field Name | Type| Description|
-||||
-| `id` | string| Detection ID|
-| `type` | string| Detection type|
-| `region` | collection| Collection of values|
-| `type` | string| Type of region|
-| `points` | collection| Top left and bottom right points when the region type is RECTANGLE |
-| `confidence` | float| Algorithm confidence|
-
-| Attribute | Type | Description |
-||||
-| `VehicleType` | float | Detected vehicle types. Possible detections include "VehicleType_Bicycle", "VehicleType_Bus", "VehicleType_Car", "VehicleType_Motorcycle", "VehicleType_Pickup_Truck", "VehicleType_SUV", "VehicleType_Truck", "VehicleType_Van/Minivan", "VehicleType_type_other" |
-| `VehicleColor` | float | Detected vehicle colors. Possible detections include "VehicleColor_Black", "VehicleColor_Blue", "VehicleColor_Brown/Beige", "VehicleColor_Green", "VehicleColor_Grey", "VehicleColor_Red", "VehicleColor_Silver", "VehicleColor_White", "VehicleColor_Yellow/Gold", "VehicleColor_color_other" |
-| `confidence` | float| Algorithm confidence|
-
-| SourceInfo Field Name | Type| Description|
-||||
-| `id` | string| Camera ID|
-| `timestamp` | date| UTC date when the JSON payload was emitted in format YYYY-MM-DDTHH:MM:SS.ssZ |
-| `width` | int | Video frame width|
-| `height` | int | Video frame height|
-| `frameId` | int | Frame identifier|
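-
-If you consume these messages from Azure IoT Hub, a small amount of post-processing produces a per-detection summary. The sketch below is illustrative only; it assumes the payload shape shown in the sample above and a locally saved copy of one message.
-
-```python
-# Sketch: summarize a vehicleCountEvent payload by picking the highest-confidence
-# VehicleType and VehicleColor attribute for each detection.
-import json
-from collections import defaultdict
-
-def summarize(payload):
-    """Return detection counts and the top VehicleType/VehicleColor per detection."""
-    counts = [event["properties"].get("detectionCount")
-              for event in payload.get("events", [])
-              if event.get("type") == "vehicleCountEvent"]
-    vehicles = []
-    for det in payload.get("detections", []):
-        best = defaultdict(lambda: {"label": None, "confidence": -1.0})
-        for attr in det.get("attributes", []):
-            if attr["confidence"] > best[attr["task"]]["confidence"]:
-                best[attr["task"]] = attr
-        vehicles.append({
-            "id": det["id"],
-            "type": best["VehicleType"]["label"],
-            "color": best["VehicleColor"]["label"],
-            "confidence": det.get("confidence"),
-        })
-    return {"detectionCounts": counts, "vehicles": vehicles}
-
-# Assumes a message from the output stream has been saved locally as JSON.
-with open("vehiclecount-event.json") as f:
-    print(json.dumps(summarize(json.load(f)), indent=2))
-```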
-
-## Sample cognitiveservices.vision.vehicleanalysis-vehicleinpolygon-preview and cognitiveservices.vision.vehicleanalysis-vehicleinpolygon.cpu-preview output
-
-The JSON below demonstrates an example of the vehicle in polygon operation graph output.
-
-```json
-{
- "events": [
- {
- "id": "1b812a6c-1fa3-3827-9769-7773aeae733f",
- "type": "vehicleInPolygonEvent",
- "detectionIds": [
- "de4256ea-8b38-4883-9394-bdbfbfa5bd41"
- ],
- "properties": {
- "@type": "type.googleapis.com/microsoft.rtcv.insights.VehicleInPolygonEventMetadata",
- "status": "PARKED"
- },
- "zone": "1",
- "trigger": ""
- }
- ],
- "sourceInfo": {
- "id": "vehicleInPolygonTest",
- "timestamp": "2022-09-21T20:36:47.737Z",
- "frameId": "3",
- "width": 0,
- "height": 0,
- "imagePath": ""
- },
- "detections": [
- {
- "type": "vehicle",
- "id": "de4256ea-8b38-4883-9394-bdbfbfa5bd41",
- "region": {
- "type": "RECTANGLE",
- "points": [
- {
- "x": 0.18703125417232513,
- "y": 0.32875001430511475,
- "visible": false,
- "label": ""
- },
- {
- "x": 0.2650781273841858,
- "y": 0.42500001192092896,
- "visible": false,
- "label": ""
- }
- ],
- "name": "",
- "normalizationType": "UNSPECIFIED_NORMALIZATION"
- },
- "confidence": 0.9583820700645447,
- "attributes": [
- {
- "task": "VehicleType",
- "label": "Bicycle",
- "confidence": 0.0005135730025358498
- },
- {
- "task": "VehicleType",
- "label": "Bus",
- "confidence": 2.502854385966202e-07
- },
- {
- "task": "VehicleType",
- "label": "Car",
- "confidence": 0.9575894474983215
- },
- {
- "task": "VehicleType",
- "label": "Motorcycle",
- "confidence": 0.03809007629752159
- },
- {
- "task": "VehicleType",
- "label": "Pickup_Truck",
- "confidence": 6.314369238680229e-05
- },
- {
- "task": "VehicleType",
- "label": "SUV",
- "confidence": 0.003204471431672573
- },
- {
- "task": "VehicleType",
- "label": "Truck",
- "confidence": 4.916510079056025e-07
- },
- {
- "task": "VehicleType",
- "label": "Van/Minivan",
- "confidence": 0.00029918691143393517
- },
- {
- "task": "VehicleType",
- "label": "type_other",
- "confidence": 0.00023934587079565972
- },
- {
- "task": "VehicleColor",
- "label": "Black",
- "confidence": 0.7501943111419678
- },
- {
- "task": "VehicleColor",
- "label": "Blue",
- "confidence": 0.02153826877474785
- },
- {
- "task": "VehicleColor",
- "label": "Brown/Beige",
- "confidence": 0.0013857109006494284
- },
- {
- "task": "VehicleColor",
- "label": "Green",
- "confidence": 0.0006621106876991689
- },
- {
- "task": "VehicleColor",
- "label": "Grey",
- "confidence": 0.007349356077611446
- },
- {
- "task": "VehicleColor",
- "label": "Red",
- "confidence": 0.1460476964712143
- },
- {
- "task": "VehicleColor",
- "label": "Silver",
- "confidence": 0.015320491977036
- },
- {
- "task": "VehicleColor",
- "label": "White",
- "confidence": 0.053948428481817245
- },
- {
- "task": "VehicleColor",
- "label": "Yellow/Gold",
- "confidence": 0.0030805091373622417
- },
- {
- "task": "VehicleColor",
- "label": "color_other",
- "confidence": 0.0004731453664135188
- }
- ],
- "metadata": {
- "tracking_id": ""
- }
- }
- ],
- "schemaVersion": "2.0"
-}
-```
-
-| Event Field Name | Type| Description|
-||||
-| `id` | string| Event ID|
-| `type` | string| Event type|
-| `detectionIds` | array| Array of unique identifiers of the vehicle detections that triggered this event|
-| `properties` | collection| Collection of values; `status` is either PARKED or EXITED|
-| `zone` | string | The "name" field of the polygon that represents the zone that was crossed|
-| `trigger` | string| Not used |
-
-| Detections Field Name | Type| Description|
-||||
-| `id` | string| Detection ID|
-| `type` | string| Detection type|
-| `region` | collection| Collection of values|
-| `type` | string| Type of region|
-| `points` | collection| Top left and bottom right points when the region type is RECTANGLE |
-| `confidence` | float| Algorithm confidence|
-
-| Attribute | Type | Description |
-||||
-| `VehicleType` | float | Detected vehicle types. Possible detections include "VehicleType_Bicycle", "VehicleType_Bus", "VehicleType_Car", "VehicleType_Motorcycle", "VehicleType_Pickup_Truck", "VehicleType_SUV", "VehicleType_Truck", "VehicleType_Van/Minivan", "VehicleType_type_other" |
-| `VehicleColor` | float | Detected vehicle colors. Possible detections include "VehicleColor_Black", "VehicleColor_Blue", "VehicleColor_Brown/Beige", "VehicleColor_Green", "VehicleColor_Grey", "VehicleColor_Red", "VehicleColor_Silver", "VehicleColor_White", "VehicleColor_Yellow/Gold", "VehicleColor_color_other" |
-| `confidence` | float| Algorithm confidence|
-
-| SourceInfo Field Name | Type| Description|
-||||
-| `id` | string| Camera ID|
-| `timestamp` | date| UTC date when the JSON payload was emitted in format YYYY-MM-DDTHH:MM:SS.ssZ |
-| `width` | int | Video frame width|
-| `height` | int | Video frame height|
-| `frameId` | int | Frame identifier|
-
-## Zone and line configuration for vehicle analysis
-
-For guidelines on where to place your zones for vehicle analysis, you can refer to the [zone and line placement](spatial-analysis-zone-line-placement.md) guide for spatial analysis. Configuring zones for vehicle analysis can be more straightforward than zones for spatial analysis if the parking spaces are already defined in the zone which you're analyzing.
-
-## Camera placement for vehicle analysis
-
-For guidelines on where and how to place your camera for vehicle analysis, refer to the [camera placement](spatial-analysis-camera-placement.md) guide found in the spatial analysis documentation. Other limitations to consider include the height of the camera mounted in the parking lot space. When you analyze vehicle patterns, a higher vantage point is ideal to ensure that the camera's field of view is wide enough to accommodate one or more vehicles, depending on your scenario.
-
-## Billing
-
-The vehicle analysis container sends billing information to Azure, using a Computer Vision resource on your Azure account. The use of vehicle analysis in public preview is currently free.
-
-Azure Cognitive Services containers aren't licensed to run without being connected to the metering / billing endpoint. You must enable the containers to communicate billing information with the billing endpoint at all times. Cognitive Services containers don't send customer data, such as the video or image that's being analyzed, to Microsoft.
-
-## Next steps
-
-* Set up a [Spatial Analysis container](spatial-analysis-container.md)
cognitive-services Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Computer-vision/whats-new.md
- Title: What's new in Computer Vision?
-description: Stay up to date on recent releases and updates to Azure Computer Vision.
- Previously updated : 12/27/2022
-# What's new in Computer Vision
-
-Learn what's new in the service. These items may be release notes, videos, blog posts, and other types of information. Bookmark this page to stay up to date with new features, enhancements, fixes, and documentation updates.
-
-## May 2023
-
-### Image Analysis 4.0 Product Recognition (public preview)
-
-The Product Recognition APIs let you analyze photos of shelves in a retail store. You can detect the presence and absence of products and get their bounding box coordinates. Use it in combination with model customization to train a model to identify your specific products. You can also compare Product Recognition results to your store's planogram document. [Product Recognition](./concept-shelf-analysis.md).
-
-## April 2023
-
-### Face limited access tokens
-
-Independent software vendors (ISVs) can manage the Face API usage of their clients by issuing access tokens that grant access to Face features which are normally gated. This allows client companies to use the Face API without having to go through the formal approval process. [Use limited access tokens](how-to/identity-access-token.md).
-
-## March 2023
-
-### Computer Vision Image Analysis 4.0 SDK public preview
-
-The [Florence foundation model](https://www.microsoft.com/en-us/research/project/projectflorence/) is now integrated into Azure Computer Vision. The improved Vision Services enable developers to create market-ready, responsible computer vision applications across various industries. Customers can now seamlessly digitize, analyze, and connect their data to natural language interactions, unlocking powerful insights from their image and video content to support accessibility, drive acquisition through SEO, protect users from harmful content, enhance security, and improve incident response times. For more information, see [Announcing Microsoft's Florence foundation model](https://aka.ms/florencemodel).
-
-### Image Analysis 4.0 SDK (public preview)
-
-Image Analysis 4.0 is now available through client library SDKs in C#, C++, and Python. This update also includes the Florence-powered image captioning and dense captioning at human parity performance.
-
-### Image Analysis V4.0 Captioning and Dense Captioning (public preview):
-
-"Caption" replaces "Describe" in V4.0 as the significantly improved image captioning feature rich with details and semantic understanding. Dense Captions provides more detail by generating one sentence descriptions of up to 10 regions of the image in addition to describing the whole image. Dense Captions also returns bounding box coordinates of the described image regions. There's also a new gender-neutral parameter to allow customers to choose whether to enable probabilistic gender inference for alt-text and Seeing AI applications. Automatically deliver rich captions, accessible alt-text, SEO optimization, and intelligent photo curation to support digital content. [Image captions](./concept-describe-images-40.md).
-
-### Video summary and frame locator (public preview):
-Search and interact with video content in the same intuitive way you think and write. Locate relevant content without the need for additional metadata. Available only in [Vision Studio](https://aka.ms/VisionStudio).
--
-### Image Analysis 4.0 model customization (public preview)
-
-You can now create and train your own [custom image classification and object detection models](./concept-model-customization.md), using Vision Studio or the v4.0 REST APIs.
-
-### Image Retrieval APIs (public preview)
-
-The [Image Retrieval APIs](./how-to/image-retrieval.md), part of the Image Analysis 4.0 API, enable the _vectorization_ of images and text queries. They let you convert images and text to coordinates in a multi-dimensional vector space. You can now search with natural language and find relevant images using vector similarity search.
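-
-As a rough illustration of the search step, once images and a text query have been vectorized, relevance is a vector-similarity computation. The embeddings below are hypothetical stand-ins for the much longer vectors returned by the vectorize operations.
-
-```python
-# Sketch: rank images against a text query by cosine similarity of their embeddings.
-import math
-
-def cosine_similarity(a, b):
-    dot = sum(x * y for x, y in zip(a, b))
-    return dot / (math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b)))
-
-# Hypothetical, pre-computed embeddings (real vectors come from the Image Retrieval APIs).
-image_vectors = {"beach.jpg": [0.12, 0.90, 0.05], "office.jpg": [0.80, 0.10, 0.45]}
-query_vector = [0.10, 0.88, 0.07]  # e.g. the vectorized text query "people at the seaside"
-
-ranked = sorted(image_vectors.items(),
-                key=lambda kv: cosine_similarity(query_vector, kv[1]), reverse=True)
-print(ranked[0][0])  # most relevant image for the query
-```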
-
-### Background removal APIs (public preview)
-
-As part of the Image Analysis 4.0 API, the [Background removal API](./concept-background-removal.md) lets you remove the background of an image. This operation can either output an image of the detected foreground object with a transparent background, or a grayscale alpha matte image showing the opacity of the detected foreground object.
-
-### Computer Vision 3.0 & 3.1 previews deprecation
-
-The preview versions of the Computer Vision 3.0 and 3.1 APIs are scheduled to be retired on September 30, 2023. Customers won't be able to make any calls to these APIs past this date. Customers are encouraged to migrate their workloads to the generally available (GA) 3.2 API instead. Mind the following changes when migrating from the preview versions to the 3.2 API:
-- The [Analyze Image](https://westus.dev.cognitive.microsoft.com/docs/services/computer-vision-v3-2/operations/56f91f2e778daf14a499f21b) and [Read](https://westus.dev.cognitive.microsoft.com/docs/services/computer-vision-v3-2/operations/5d986960601faab4bf452005) API calls take an optional _model-version_ parameter that you can use to specify which AI model to use. By default, they will use the latest model.
-- The [Analyze Image](https://westus.dev.cognitive.microsoft.com/docs/services/computer-vision-v3-2/operations/56f91f2e778daf14a499f21b) and [Read](https://westus.dev.cognitive.microsoft.com/docs/services/computer-vision-v3-2/operations/5d986960601faab4bf452005) API calls also return a `model-version` field in successful API responses. This field reports which model was used.
-- Computer Vision 3.2 API uses a different error-reporting format. See the [API reference documentation](https://westus.dev.cognitive.microsoft.com/docs/services/computer-vision-v3-2/operations/56f91f2e778daf14a499f21b) to learn how to adjust any error-handling code.
-
-## October 2022
-
-### Computer Vision Image Analysis 4.0 (public preview)
-
-Image Analysis 4.0 has been released in public preview. The new API includes image captioning, image tagging, object detection, smart crops, people detection, and Read OCR functionality, all available through one Analyze Image operation. The OCR is optimized for general, non-document images in a performance-enhanced synchronous API that makes it easier to embed OCR-powered experiences in your workflows.
-
-## September 2022
-
-### Computer Vision 3.0/3.1 Read previews deprecation
-
-The preview versions of the Computer Vision 3.0 and 3.1 Read API are scheduled to be retired on January 31, 2023. Customers are encouraged to refer to the [How-To](./how-to/call-read-api.md) and [QuickStarts](./quickstarts-sdk/client-library.md?tabs=visual-studio&pivots=programming-language-csharp) to get started with the generally available (GA) version of the Read API instead. The latest GA versions provide the following benefits:
-* 2022 latest generally available OCR model
-* Significant expansion of OCR language coverage including support for handwritten text
-* Significantly improved OCR quality
-
-## June 2022
-
-### Vision Studio launch
-
-Vision Studio is a UI tool that lets you explore, build, and integrate features from Azure Cognitive Services for Vision into your applications.
-
-Vision Studio provides you with a platform to try several service features, and see what they return in a visual manner. Using the Studio, you can get started without needing to write code, and then use the available client libraries and REST APIs in your application.
-
-### Responsible AI for Face
-
-#### Face transparency documentation
-* The [transparency documentation](https://aka.ms/faceraidocs) provides guidance to help our customers improve the accuracy and fairness of their systems by incorporating meaningful human review to detect and resolve cases of misidentification or other failures, by providing support to people who believe their results were incorrect, and by identifying and addressing fluctuations in accuracy due to variations in operational conditions.
-
-#### Retirement of sensitive attributes
-
-* We have retired facial analysis capabilities that purport to infer emotional states and identity attributes, such as gender, age, smile, facial hair, hair and makeup.
-* Facial detection capabilities (including detecting blur, exposure, glasses, head pose, landmarks, noise, occlusion, and the facial bounding box) will remain generally available and do not require an application.
-
-#### Fairlearn package and Microsoft's Fairness Dashboard
-
-* [The open-source Fairlearn package and Microsoft's Fairness Dashboard](https://github.com/microsoft/responsible-ai-toolbox/tree/main/notebooks/cognitive-services-examples/face-verification) aim to help customers measure the fairness of Microsoft's facial verification algorithms on their own data, allowing them to identify and address potential fairness issues that could affect different demographic groups before they deploy their technology.
-
-#### Limited Access policy
-
-* As a part of aligning Face to the updated Responsible AI Standard, a new [Limited Access policy](https://aka.ms/AAh91ff) has been implemented for the Face API and Computer Vision. Existing customers have one year to apply and receive approval for continued access to the facial recognition services based on their provided use cases. See details on Limited Access for Face [here](/legal/cognitive-services/computer-vision/limited-access-identity?context=/azure/cognitive-services/computer-vision/context/context) and for Computer Vision [here](/legal/cognitive-services/computer-vision/limited-access?context=/azure/cognitive-services/computer-vision/context/context).
-
-### Computer Vision 3.2-preview deprecation
-
-The preview versions of the 3.2 API are scheduled to be retired in December of 2022. Customers are encouraged to use the generally available (GA) version of the API instead. Mind the following changes when migrating from the 3.2-preview versions:
-1. The [Analyze Image](https://westus.dev.cognitive.microsoft.com/docs/services/computer-vision-v3-2/operations/56f91f2e778daf14a499f21b) and [Read](https://westus.dev.cognitive.microsoft.com/docs/services/computer-vision-v3-2/operations/5d986960601faab4bf452005) API calls now take an optional _model-version_ parameter that you can use to specify which AI model to use. By default, they will use the latest model.
-1. The [Analyze Image](https://westus.dev.cognitive.microsoft.com/docs/services/computer-vision-v3-2/operations/56f91f2e778daf14a499f21b) and [Read](https://westus.dev.cognitive.microsoft.com/docs/services/computer-vision-v3-2/operations/5d986960601faab4bf452005) API calls also return a `model-version` field in successful API responses. This field reports which model was used.
-1. Image Analysis APIs now use a different error-reporting format. See the [API reference documentation](https://westus.dev.cognitive.microsoft.com/docs/services/computer-vision-v3-2/operations/56f91f2e778daf14a499f21b) to learn how to adjust any error-handling code.
-
-## May 2022
-
-### OCR (Read) API model is generally available (GA)
-
-Computer Vision's [OCR (Read) API](overview-ocr.md) latest model with [164 supported languages](language-support.md) is now generally available as a cloud service and container.
-
-* OCR support for print text expands to 164 languages including Russian, Arabic, Hindi and other languages using Cyrillic, Arabic, and Devanagari scripts.
-* OCR support for handwritten text expands to 9 languages with English, Chinese Simplified, French, German, Italian, Japanese, Korean, Portuguese, and Spanish.
-* Enhanced support for single characters, handwritten dates, amounts, names, and other entities commonly found in receipts and invoices.
-* Improved processing of digital PDF documents.
-* Input file size limit increased 10x to 500 MB.
-* Performance and latency improvements.
-* Available as [cloud service](overview-ocr.md) and [Docker container](computer-vision-how-to-install-containers.md).
-
-See the [OCR how-to guide](how-to/call-read-api.md#determine-how-to-process-the-data-optional) to learn how to use the GA model.
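-
-For orientation, the Read API is asynchronous: you submit an image, then poll the returned operation URL for the result. The sketch below is a minimal illustration against the v3.2 endpoint; the endpoint, key, and image URL are placeholders, and you should confirm the details against the how-to guide linked above.
-
-```python
-# Sketch of the asynchronous Read (OCR) flow: submit an image, poll the
-# Operation-Location URL until done, then print the recognized lines.
-import os
-import time
-import requests
-
-ENDPOINT = os.environ["VISION_ENDPOINT"]  # for example, https://<resource>.cognitiveservices.azure.com
-HEADERS = {"Ocp-Apim-Subscription-Key": os.environ["VISION_KEY"],
-           "Content-Type": "application/json"}
-
-# Submit the image (placeholder URL) for text extraction.
-submit = requests.post(f"{ENDPOINT}/vision/v3.2/read/analyze", headers=HEADERS,
-                       json={"url": "https://example.com/receipt.jpg"})
-submit.raise_for_status()
-operation_url = submit.headers["Operation-Location"]
-
-# Poll until the operation finishes.
-while True:
-    result = requests.get(operation_url, headers=HEADERS).json()
-    if result.get("status") in ("succeeded", "failed"):
-        break
-    time.sleep(1)
-
-for page in result.get("analyzeResult", {}).get("readResults", []):
-    for line in page.get("lines", []):
-        print(line["text"])
-```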
-
-> [!div class="nextstepaction"]
-> [Get Started with the Read API](./quickstarts-sdk/client-library.md)
-
-## February 2022
-
-### OCR (Read) API Public Preview supports 164 languages
-
-Computer Vision's [OCR (Read) API](overview-ocr.md) expands [supported languages](language-support.md) to 164 with its latest preview:
-
-* OCR support for print text expands to 42 new languages including Arabic, Hindi and other languages using Arabic and Devanagari scripts.
-* OCR support for handwritten text expands to Japanese and Korean in addition to English, Chinese Simplified, French, German, Italian, Portuguese, and Spanish.
-* Enhancements including better support for extracting handwritten dates, amounts, names, and single character boxes.
-* General performance and AI quality improvements
-
-See the [OCR how-to guide](how-to/call-read-api.md#determine-how-to-process-the-data-optional) to learn how to use the new preview features.
-
-> [!div class="nextstepaction"]
-> [Get Started with the Read API](./quickstarts-sdk/client-library.md)
-
-### New Quality Attribute in Detection_01 and Detection_03
-* To help system builders and their customers capture high quality images which are necessary for high quality outputs from Face API, we're introducing a new quality attribute **QualityForRecognition** to help decide whether an image is of sufficient quality to attempt face recognition. The value is an informal rating of low, medium, or high. The new attribute is only available when using any combinations of detection models `detection_01` or `detection_03`, and recognition models `recognition_03` or `recognition_04`. Only "high" quality images are recommended for person enrollment and quality above "medium" is recommended for identification scenarios. To learn more about the new quality attribute, see [Face detection and attributes](concept-face-detection.md) and see how to use it with [QuickStart](./quickstarts-sdk/identity-client-library.md?pivots=programming-language-csharp&tabs=visual-studio).
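-
-As a rough sketch of using this attribute at enrollment time (the parameter and field names follow the description above and should be treated as assumptions to verify against the Face API reference):
-
-```python
-# Sketch: request QualityForRecognition at detection time and keep only faces
-# rated "high" for enrollment, per the guidance above.
-import os
-import requests
-
-ENDPOINT = os.environ["FACE_ENDPOINT"]
-HEADERS = {"Ocp-Apim-Subscription-Key": os.environ["FACE_KEY"],
-           "Content-Type": "application/json"}
-
-params = {
-    "detectionModel": "detection_03",
-    "recognitionModel": "recognition_04",
-    "returnFaceId": "true",
-    "returnFaceAttributes": "qualityForRecognition",
-}
-# Placeholder image URL for the enrollment photo.
-resp = requests.post(f"{ENDPOINT}/face/v1.0/detect", params=params, headers=HEADERS,
-                     json={"url": "https://example.com/enrollment-photo.jpg"})
-resp.raise_for_status()
-
-enrollable = []
-for face in resp.json():
-    quality = str(face.get("faceAttributes", {}).get("qualityForRecognition", "")).lower()
-    if quality == "high":  # enroll only "high" quality captures
-        enrollable.append(face["faceId"])
-print(enrollable)
-```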
-
-## September 2021
-
-### OCR (Read) API Public Preview supports 122 languages
-
-Computer Vision's [OCR (Read) API](overview-ocr.md) expands [supported languages](language-support.md) to 122 with its latest preview:
-
-* OCR support for print text in 49 new languages including Russian, Bulgarian, and other Cyrillic and more Latin languages.
-* OCR support for handwritten text in 6 new languages that include English, Chinese Simplified, French, German, Italian, Portuguese, and Spanish.
-* Enhancements for processing digital PDFs and Machine Readable Zone (MRZ) text in identity documents.
-* General performance and AI quality improvements
-
-See the [OCR how-to guide](how-to/call-read-api.md#determine-how-to-process-the-data-optional) to learn how to use the new preview features.
-
-> [!div class="nextstepaction"]
-> [Get Started with the Read API](./quickstarts-sdk/client-library.md)
-
-## August 2021
-
-### Image tagging language expansion
-
-The [latest version (v3.2)](https://westus.dev.cognitive.microsoft.com/docs/services/computer-vision-v3-2/operations/56f91f2e778daf14a499f200) of the Image tagger now supports tags in 50 languages. See the [language support](language-support.md) page for more information.
-
-## July 2021
-
-### New HeadPose and Landmarks improvements for Detection_03
-
-* The Detection_03 model has been updated to support facial landmarks.
-* The landmarks feature in Detection_03 is much more precise, especially in the eyeball landmarks which are crucial for gaze tracking.
-
-## May 2021
-
-### Spatial Analysis container update
-
-A new version of the [Spatial Analysis container](spatial-analysis-container.md) has been released with a new feature set. This Docker container lets you analyze real-time streaming video to understand spatial relationships between people and their movement through physical environments.
-
-* [Spatial Analysis operations](spatial-analysis-operations.md) can be now configured to detect the orientation that a person is facing.
- * An orientation classifier can be enabled for the `personcrossingline` and `personcrossingpolygon` operations by configuring the `enable_orientation` parameter. It is set to off by default.
-
-* [Spatial Analysis operations](spatial-analysis-operations.md) now also offer configuration to detect a person's speed while walking or running.
- * Speed can be detected for the `personcrossingline` and `personcrossingpolygon` operations by turning on the `enable_speed` classifier, which is off by default. The output is reflected in the `speed`, `avgSpeed`, and `minSpeed` outputs.
-
-## April 2021
-
-### Computer Vision v3.2 GA
-
-The Computer Vision API v3.2 is now generally available with the following updates:
-
-* Improved image tagging model: analyzes visual content and generates relevant tags based on objects, actions, and content displayed in the image. This model is available through the [Tag Image API](https://westus.dev.cognitive.microsoft.com/docs/services/computer-vision-v3-2/operations/56f91f2e778daf14a499f200). See the Image Analysis [how-to guide](./how-to/call-analyze-image.md) and [overview](./overview-image-analysis.md) to learn more.
-* Updated content moderation model: detects presence of adult content and provides flags to filter images containing adult, racy, and gory visual content. This model is available through the [Analyze API](https://westus.dev.cognitive.microsoft.com/docs/services/computer-vision-v3-2/operations/56f91f2e778daf14a499f21b). See the Image Analysis [how-to guide](./how-to/call-analyze-image.md) and [overview](./overview-image-analysis.md) to learn more.
-* [OCR (Read) available for 73 languages](./language-support.md#optical-character-recognition-ocr) including Simplified and Traditional Chinese, Japanese, Korean, and Latin languages.
-* [OCR (Read)](./overview-ocr.md) also available as a [Distroless container](./computer-vision-how-to-install-containers.md?tabs=version-3-2) for on-premises deployment.
-
-> [!div class="nextstepaction"]
-> [See Computer Vision v3.2 GA](https://westus.dev.cognitive.microsoft.com/docs/services/computer-vision-v3-2/operations/5d986960601faab4bf452005)
-
-### PersonDirectory data structure (preview)
-
-* In order to perform face recognition operations such as Identify and Find Similar, Face API customers need to create an assorted list of **Person** objects. The new **PersonDirectory** is a data structure that contains unique IDs, optional name strings, and optional user metadata strings for each **Person** identity added to the directory. Currently, the Face API offers the **LargePersonGroup** structure which has similar functionality but is limited to 1 million identities. The **PersonDirectory** structure can scale up to 75 million identities.
-* Another major difference between **PersonDirectory** and previous data structures is that you'll no longer need to make any Train calls after adding faces to a **Person** object&mdash;the update process happens automatically. For more details see [Use the PersonDirectory structure](how-to/use-persondirectory.md).
-
-## March 2021
-
-### Computer Vision 3.2 Public Preview update
-
-The Computer Vision API v3.2 public preview has been updated. The preview release has all Computer Vision features along with updated Read and Analyze APIs.
-
-> [!div class="nextstepaction"]
-> [See Computer Vision v3.2 public preview 3](https://westus.dev.cognitive.microsoft.com/docs/services/computer-vision-v3-2/operations/5d986960601faab4bf452005)
-
-## February 2021
-
-### Read API v3.2 Public Preview with OCR support for 73 languages
-
-The Computer Vision Read API v3.2 public preview, available as cloud service and Docker container, includes these updates:
-
-* [OCR for 73 languages](./language-support.md#optical-character-recognition-ocr) including Simplified and Traditional Chinese, Japanese, Korean, and Latin languages.
-* Natural reading order for the text line output (Latin languages only)
-* Handwriting style classification for text lines along with a confidence score (Latin languages only).
-* Extract text only for selected pages for a multi-page document.
-* Available as a [Distroless container](./computer-vision-how-to-install-containers.md?tabs=version-3-2) for on-premises deployment.
-
-See the [Read API how-to guide](how-to/call-read-api.md) to learn more.
-
-> [!div class="nextstepaction"]
-> [Use the Read API v3.2 Public Preview](https://westus.dev.cognitive.microsoft.com/docs/services/computer-vision-v3-2/operations/5d986960601faab4bf452005)
--
-### New Face API detection model
-* The new Detection 03 model is the most accurate detection model currently available. If you're a new customer, we recommend using this model. Detection 03 improves both recall and precision on smaller faces found within images (64x64 pixels). Additional improvements include an overall reduction in false positives and improved detection on rotated face orientations. Combining Detection 03 with the new Recognition 04 model will provide improved recognition accuracy as well. See [Specify a face detection model](./how-to/specify-detection-model.md) for more details.
-### New detectable Face attributes
-* The `faceMask` attribute is available with the latest Detection 03 model, along with the additional attribute `"noseAndMouthCovered"` which detects whether the face mask is worn as intended, covering both the nose and mouth. To use the latest mask detection capability, users need to specify the detection model in the API request: assign the model version with the _detectionModel_ parameter to `detection_03`. See [Specify a face detection model](./how-to/specify-detection-model.md) for more details.
-### New Face API Recognition Model
-* The new Recognition 04 model is the most accurate recognition model currently available. If you're a new customer, we recommend using this model for verification and identification. It improves upon the accuracy of Recognition 03, including improved recognition for users wearing face covers (surgical masks, N95 masks, cloth masks). Note that we recommend against enrolling images of users wearing face covers as this will lower recognition quality. Now customers can build safe and seamless user experiences that detect whether a user is wearing a face cover with the latest Detection 03 model, and recognize them with the latest Recognition 04 model. See [Specify a face recognition model](./how-to/specify-recognition-model.md) for more details.
-
-## January 2021
-
-### Spatial Analysis container update
-
-A new version of the [Spatial Analysis container](spatial-analysis-container.md) has been released with a new feature set. This Docker container lets you analyze real-time streaming video to understand spatial relationships between people and their movement through physical environments.
-
-* [Spatial Analysis operations](spatial-analysis-operations.md) can be now configured to detect if a person is wearing a protective face covering such as a mask.
- * A mask classifier can be enabled for the `personcount`, `personcrossingline` and `personcrossingpolygon` operations by configuring the `ENABLE_FACE_MASK_CLASSIFIER` parameter.
- * The attributes `face_mask` and `face_noMask` will be returned as metadata with confidence score for each person detected in the video stream
-* The *personcrossingpolygon* operation has been extended to allow the calculation of the dwell time a person spends in a zone. You can set the `type` parameter in the Zone configuration for the operation to `zonedwelltime` and a new event of type *personZoneDwellTimeEvent* will include the `durationMs` field populated with the number of milliseconds that the person spent in the zone.
-* **Breaking change**: The *personZoneEvent* event has been renamed to *personZoneEnterExitEvent*. This event is raised by the *personcrossingpolygon* operation when a person enters or exits the zone and provides directional info with the numbered side of the zone that was crossed.
-* The video URL can be provided as an obfuscated "private parameter" in all operations. Obfuscation is now optional and only works if `KEY` and `IV` are provided as environment variables.
-* Calibration is enabled by default for all operations. Set `do_calibration: false` to disable it.
-* Added support for auto recalibration (disabled by default) via the `enable_recalibration` parameter. Refer to [Spatial Analysis operations](./spatial-analysis-operations.md) for details.
-* Added camera calibration parameters to the `DETECTOR_NODE_CONFIG`. Refer to [Spatial Analysis operations](./spatial-analysis-operations.md) for details.
-
-### Mitigate latency
-* The Face team published a new article detailing potential causes of latency when using the service and possible mitigation strategies. See [Mitigate latency when using the Face service](./how-to/mitigate-latency.md).
-
-## December 2020
-### Customer configuration for Face ID storage
-* While the Face Service does not store customer images, the extracted face feature(s) are stored on the server. The Face ID is an identifier of the face feature and will be used in [Face - Identify](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f30395239), [Face - Verify](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f3039523a), and [Face - Find Similar](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f30395237). The stored face features expire and are deleted 24 hours after the original detection call. Customers can now determine the length of time these Face IDs are cached. The maximum value is still up to 24 hours, but a minimum value of 60 seconds can now be set. The new time range for cached Face IDs is any value between 60 seconds and 24 hours. More details can be found in the [Face - Detect](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f30395236) API reference (the *faceIdTimeToLive* parameter).
-
-## November 2020
-### Sample Face enrollment app
-* The team published a sample Face enrollment app to demonstrate best practices for establishing meaningful consent and creating high-accuracy face recognition systems through high-quality enrollments. The open-source sample can be found in the [Build an enrollment app](Tutorials/build-enrollment-app.md) guide and on [GitHub](https://github.com/Azure-Samples/cognitive-services-FaceAPIEnrollmentSample), ready for developers to deploy or customize.
-
-## October 2020
-
-### Computer Vision API v3.1 GA
-
-The Computer Vision API in General Availability has been upgraded to v3.1.
-
-## September 2020
-
-### Spatial Analysis container preview
-
-The [Spatial Analysis container](spatial-analysis-container.md) is now in preview. The Spatial Analysis feature of Computer Vision lets you analyze real-time streaming video to understand spatial relationships between people and their movement through physical environments. Spatial Analysis is a Docker container you can use on-premises.
-
-### Read API v3.1 Public Preview adds OCR for Japanese
-
-The Computer Vision Read API v3.1 public preview adds these capabilities:
-
-* OCR for Japanese language
-* For each text line, an indication of whether the text is in handwriting or print style, along with a confidence score (Latin languages only).
-* For a multi-page document, extract text from only selected pages or a page range.
-
-* This preview version of the Read API supports English, Dutch, French, German, Italian, Japanese, Portuguese, Simplified Chinese, and Spanish languages.
-
-See the [Read API how-to guide](how-to/call-read-api.md) to learn more.
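-
-As a rough sketch only (not the official quickstart; the endpoint, key, and image URL are placeholders, and the exact preview route may differ by preview version), a document can be submitted with the `language` query parameter and the result polled from the URL returned in the `Operation-Location` header:
-
-```csharp
-// Sketch: submit an image to the Read v3.1 preview with language=ja, then poll for the result.
-using System;
-using System.Linq;
-using System.Net.Http;
-using System.Text;
-using System.Threading.Tasks;
-
-class ReadApiSketch
-{
-    static async Task Main()
-    {
-        using var http = new HttpClient();
-        http.DefaultRequestHeaders.Add("Ocp-Apim-Subscription-Key", "YOUR_KEY");
-
-        var analyzeUri = "https://YOUR_ENDPOINT/vision/v3.1-preview.2/read/analyze?language=ja";
-        var body = new StringContent("{\"url\":\"https://example.com/page.png\"}",
-            Encoding.UTF8, "application/json");
-
-        var submit = await http.PostAsync(analyzeUri, body);
-        var operationLocation = submit.Headers.GetValues("Operation-Location").First();
-
-        // Assumed status values while the operation runs: notStarted, running.
-        string resultJson;
-        do
-        {
-            await Task.Delay(1000);   // wait before polling the operation again
-            resultJson = await http.GetStringAsync(operationLocation);
-        } while (resultJson.Contains("\"status\":\"running\"") ||
-                 resultJson.Contains("\"status\":\"notStarted\""));
-
-        Console.WriteLine(resultJson);
-    }
-}
-```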
-
-> [!div class="nextstepaction"]
-> [Learn more about Read API v3.1 Public Preview 2](https://westus2.dev.cognitive.microsoft.com/docs/services/computer-vision-v3-1-preview-2/operations/5d986960601faab4bf452005)
-
-## August 2020
-### Customer-managed encryption of data at rest
-* The Face service automatically encrypts your data when persisting it to the cloud. The Face service encryption protects your data to help you meet your organizational security and compliance commitments. By default, your subscription uses Microsoft-managed encryption keys. There is also a new option to manage your subscription with your own keys called customer-managed keys (CMK). More details can be found at [Customer-managed keys](./identity-encrypt-data-at-rest.md).
-
-## July 2020
-
-### Read API v3.1 Public Preview with OCR for Simplified Chinese
-
-The Computer Vision Read API v3.1 public preview adds support for Simplified Chinese.
-
-* This preview version of the Read API supports English, Dutch, French, German, Italian, Portuguese, Simplified Chinese, and Spanish languages.
-
-See the [Read API how-to guide](how-to/call-read-api.md) to learn more.
-
-> [!div class="nextstepaction"]
-> [Learn more about Read API v3.1 Public Preview 1](https://westus.dev.cognitive.microsoft.com/docs/services/computer-vision-v3-1-preview-1/operations/5d986960601faab4bf452005)
-
-## May 2020
-
-Computer Vision API v3.0 entered General Availability, with updates to the Read API:
-
-* Support for English, Dutch, French, German, Italian, Portuguese, and Spanish
-* Improved accuracy
-* Confidence score for each extracted word
-* New output format
-
-See the [OCR overview](overview-ocr.md) to learn more.
-
-## April 2020
-### New Face API Recognition Model
-* The new recognition 03 model is the most accurate model currently available. If you're a new customer, we recommend using this model. Recognition 03 will provide improved accuracy for both similarity comparisons and person-matching comparisons. More details can be found at [Specify a face recognition model](./how-to/specify-recognition-model.md).
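-
-For illustration only (the endpoint is a placeholder, and the request itself follows the same POST pattern as the detect sketch in the December 2020 entry above), the model is selected per detection call through the `recognitionModel` query parameter:
-
-```csharp
-// Sketch: build a Face - Detect URI that selects the recognition_03 model.
-// The detectionModel parameter (see the June 2019 entry below) is selected the same way.
-using System;
-
-class RecognitionModelSketch
-{
-    static void Main()
-    {
-        var detectUri = "https://YOUR_ENDPOINT/face/v1.0/detect" +
-                        "?returnFaceId=true" +
-                        "&recognitionModel=recognition_03" +
-                        "&returnRecognitionModel=true";
-        Console.WriteLine(detectUri);
-    }
-}
-```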
-
-## March 2020
-
-* TLS 1.2 is now enforced for all HTTP requests to this service. For more information, see [Azure Cognitive Services security](../cognitive-services-security.md).
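-
-Newer .NET runtimes negotiate TLS 1.2 automatically; as a hedged sketch for older .NET Framework clients only, the protocol can be pinned before any calls are made:
-
-```csharp
-// Sketch: force TLS 1.2 on an older .NET Framework client before calling the service.
-using System;
-using System.Net;
-
-class Tls12Sketch
-{
-    static void Main()
-    {
-        // Ensure outgoing HTTPS requests can negotiate TLS 1.2.
-        ServicePointManager.SecurityProtocol |= SecurityProtocolType.Tls12;
-        Console.WriteLine(ServicePointManager.SecurityProtocol);
-    }
-}
-```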
-
-## January 2020
-
-### Read API 3.0 Public Preview
-
-You now can use version 3.0 of the Read API to extract printed or handwritten text from images. Compared to earlier versions, 3.0 provides:
-
-* Improved accuracy
-* New output format
-* Confidence score for each extracted word
-* Support for both Spanish and English languages with the language parameter
-
-Follow an [Extract text quickstart](https://github.com/Azure-Samples/cognitive-services-quickstart-code/blob/master/dotnet/ComputerVision/REST/CSharp-hand-text.md?tabs=version-3) to get started using the 3.0 API.
--
-## June 2019
-
-### New Face API detection model
-* The new Detection 02 model features improved accuracy on small, side-view, occluded, and blurry faces. Use it through [Face - Detect](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f30395236), [FaceList - Add Face](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f30395250), [LargeFaceList - Add Face](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/5a158c10d2de3616c086f2d3), [PersonGroup Person - Add Face](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f3039523b) and [LargePersonGroup Person - Add Face](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/599adf2a3a7b9412a4d53f42) by specifying the new face detection model name `detection_02` in `detectionModel` parameter. More details in [How to specify a detection model](how-to/specify-detection-model.md).
-
-## April 2019
-
-### Improved attribute accuracy
-* Improved overall accuracy of the `age` and `headPose` attributes. The `headPose` attribute has also been updated, with the `pitch` value now enabled. Use these attributes by specifying them in the `returnFaceAttributes` parameter of [Face - Detect](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f30395236).
-### Improved processing speeds
-* Improved speeds of [Face - Detect](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f30395236), [FaceList - Add Face](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f30395250), [LargeFaceList - Add Face](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/5a158c10d2de3616c086f2d3), [PersonGroup Person - Add Face](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f3039523b) and [LargePersonGroup Person - Add Face](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/599adf2a3a7b9412a4d53f42) operations.
-
-## March 2019
-
-### New Face API recognition model
-* The Recognition 02 model has improved accuracy. Use it through [Face - Detect](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f30395236), [FaceList - Create](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f3039524b), [LargeFaceList - Create](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/5a157b68d2de3616c086f2cc), [PersonGroup - Create](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f30395244) and [LargePersonGroup - Create](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/599acdee6ac60f11b48b5a9d) by specifying the new face recognition model name `recognition_02` in `recognitionModel` parameter. More details in [How to specify a recognition model](how-to/specify-recognition-model.md).
-
-## January 2019
-
-### Face Snapshot feature
-* This feature allows the service to support data migration across subscriptions: [Snapshot](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/snapshot-get). More details in [How to Migrate your face data to a different Face subscription](how-to/migrate-face-data.md).
-
-## October 2018
-
-### API messages
-* Refined description for `status`, `createdDateTime`, `lastActionDateTime`, and `lastSuccessfulTrainingDateTime` in [PersonGroup - Get Training Status](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f30395247), [LargePersonGroup - Get Training Status](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/599ae32c6ac60f11b48b5aa5), and [LargeFaceList - Get Training Status](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/5a1582f8d2de3616c086f2cf).
-
-## May 2018
-
-### Improved attribute accuracy
-* Improved `gender` attribute significantly and also improved `age`, `glasses`, `facialHair`, `hair`, `makeup` attributes. Use them through [Face - Detect](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f30395236) `returnFaceAttributes` parameter.
-### Increased file size limit
-* Increased input image file size limit from 4 MB to 6 MB in [Face - Detect](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f30395236), [FaceList - Add Face](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f30395250), [LargeFaceList - Add Face](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/5a158c10d2de3616c086f2d3), [PersonGroup Person - Add Face](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f3039523b) and [LargePersonGroup Person - Add Face](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/599adf2a3a7b9412a4d53f42).
-
-## March 2018
-
-### New data structure
-* [LargeFaceList](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/5a157b68d2de3616c086f2cc) and [LargePersonGroup](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/599acdee6ac60f11b48b5a9d). More details in [How to use the large-scale feature](how-to/use-large-scale.md).
-* Increased the [Face - Identify](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f30395239) `maxNumOfCandidatesReturned` parameter range from [1, 5] to [1, 100], with a default of 10.
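-
-As a hedged sketch (the endpoint, key, group ID, and face ID are placeholders), the larger candidate count is passed in the Identify request body:
-
-```csharp
-// Sketch: Face - Identify request asking for up to 100 candidates per face.
-using System;
-using System.Net.Http;
-using System.Text;
-using System.Threading.Tasks;
-
-class IdentifySketch
-{
-    static async Task Main()
-    {
-        using var http = new HttpClient();
-        http.DefaultRequestHeaders.Add("Ocp-Apim-Subscription-Key", "YOUR_KEY");
-
-        var body = new StringContent(@"{
-            ""largePersonGroupId"": ""YOUR_GROUP_ID"",
-            ""faceIds"": [""YOUR_FACE_ID""],
-            ""maxNumOfCandidatesReturned"": 100
-        }", Encoding.UTF8, "application/json");
-
-        var response = await http.PostAsync("https://YOUR_ENDPOINT/face/v1.0/identify", body);
-        Console.WriteLine(await response.Content.ReadAsStringAsync());
-    }
-}
-```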
-
-## May 2017
-
-### New detectable Face attributes
-* Added `hair`, `makeup`, `accessory`, `occlusion`, `blur`, `exposure`, and `noise` attributes in [Face - Detect](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f30395236) `returnFaceAttributes` parameter.
-* Supported 10K persons in a PersonGroup and [Face - Identify](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f30395239).
-* Supported pagination in [PersonGroup Person - List](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f30395241) with optional parameters: `start` and `top`.
-* Supported concurrency in adding/deleting faces against different FaceLists and different persons in PersonGroup.
-
-## March 2017
-
-### New detectable Face attribute
-* Added `emotion` attribute in [Face - Detect](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f30395236) `returnFaceAttributes` parameter.
-### Fixed issues
-* Face could not be re-detected with rectangle returned from [Face - Detect](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f30395236) as `targetFace` in [FaceList - Add Face](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f30395250) and [PersonGroup Person - Add Face](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f3039523b).
-* The detectable face size is set to ensure it is strictly between 36x36 and 4096x4096 pixels.
-
-## November 2016
-### New subscription tier
-* Added Face Storage Standard subscription to store additional persisted faces when using [PersonGroup Person - Add Face](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f3039523b) or [FaceList - Add Face](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f30395250) for identification or similarity matching. The stored images are charged at $0.5 per 1000 faces and this rate is prorated on a daily basis. Free tier subscriptions continue to be limited to 1,000 total persons.
-
-## October 2016
-### API messages
-* Changed the error message of more than one face in the `targetFace` from 'There are more than one face in the image' to 'There is more than one face in the image' in [FaceList - Add Face](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f30395250) and [PersonGroup Person - Add Face](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f3039523b).
-
-## July 2016
-### New features
-* Supported Face to Person object authentication in [Face - Verify](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f3039523a).
-* Added optional `mode` parameter enabling selection of two working modes: `matchPerson` and `matchFace` in [Face - Find Similar](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f30395237) and default is `matchPerson`.
-* Added optional `confidenceThreshold` parameter for user to set the threshold of whether one face belongs to a Person object in [Face - Identify](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f30395239).
-* Added optional `start` and `top` parameters in [PersonGroup - List](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f30395248) to enable the user to specify the starting point and the number of PersonGroups to list.
-
-## V1.0 changes from V0
-
-* Updated service root endpoint from ```https://westus.api.cognitive.microsoft.com/face/v0/``` to ```https://westus.api.cognitive.microsoft.com/face/v1.0/```. Changes applied to:
- [Face - Detect](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f30395236), [Face - Identify](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f30395239), [Face - Find Similar](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f30395237) and [Face - Group](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f30395238).
-* Updated the minimal detectable face size to 36x36 pixels. Faces smaller than 36x36 pixels will not be detected.
-* Deprecated the PersonGroup and Person data in Face V0. Those data cannot be accessed with the Face V1.0 service.
-* Deprecated the V0 endpoint of Face API on June 30, 2016.
--
-## Cognitive Service updates
-
-[Azure update announcements for Cognitive Services](https://azure.microsoft.com/updates/?product=cognitive-services)
cognitive-services Api Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Content-Moderator/api-reference.md
- Title: API reference - Content Moderator-
-description: Learn about the content moderation APIs for Content Moderator.
------- Previously updated : 05/29/2019----
-# Content Moderator API reference
-
-You can get started with Azure Content Moderator APIs by doing the following:
--- In the Azure portal, [subscribe to the Content Moderator API](https://portal.azure.com/#create/Microsoft.CognitiveServicesContentModerator).-
-You can use the following **Content Moderator APIs** to set up your post-moderation workflows.
-
-| Description | Reference |
-| -- |-|
-| **Image Moderation API**<br /><br />Scan images and detect potential adult and racy content by using tags, confidence scores, and other extracted information. | [Image Moderation API reference](https://westus.dev.cognitive.microsoft.com/docs/services/57cf753a3f9b070c105bd2c1/operations/57cf753a3f9b070868a1f66c "Image Moderation API reference") |
-| **Text Moderation API**<br /><br />Scan text content. Profanity terms and personal data are returned. | [Text Moderation API reference](https://westus.dev.cognitive.microsoft.com/docs/services/57cf753a3f9b070c105bd2c1/operations/57cf753a3f9b070868a1f66f "Text Moderation API reference") |
-| **Video Moderation API**<br /><br />Scan videos and detect potential adult and racy content. | [Video Moderation API overview](video-moderation-api.md "Video Moderation API overview") |
-| **List Management API**<br /><br />Create and manage custom exclusion or inclusion lists of images and text. If enabled, the **Image - Match** and **Text - Screen** operations do fuzzy matching of the submitted content against your custom lists. <br /><br />For efficiency, you can skip the machine learning-based moderation step.<br /><br /> | [List Management API reference](https://westus.dev.cognitive.microsoft.com/docs/services/57cf755e3f9b070c105bd2c2/operations/57cf755e3f9b070868a1f675 "List Management API reference") |
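-
-As an indicative sketch only (the endpoint, key, and list ID are placeholders; see the linked references for the authoritative parameters), a **Text - Screen** call can fuzzy-match the submitted text against a custom term list by passing the list ID in the `listId` query parameter:
-
-```csharp
-// Sketch: screen a text snippet and fuzzy-match it against custom term list 123.
-using System;
-using System.Net.Http;
-using System.Text;
-using System.Threading.Tasks;
-
-class ScreenTextSketch
-{
-    static async Task Main()
-    {
-        using var http = new HttpClient();
-        http.DefaultRequestHeaders.Add("Ocp-Apim-Subscription-Key", "YOUR_KEY");
-
-        var uri = "https://YOUR_ENDPOINT/contentmoderator/moderate/v1.0/ProcessText/Screen" +
-                  "?language=eng&autocorrect=true&PII=true&listId=123";
-
-        var body = new StringContent("Sample text to screen.", Encoding.UTF8, "text/plain");
-        var response = await http.PostAsync(uri, body);
-        Console.WriteLine(await response.Content.ReadAsStringAsync());
-    }
-}
-```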
cognitive-services Client Libraries https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Content-Moderator/client-libraries.md
- Title: 'Quickstart: Use the Content Moderator client library'-
-description: The Content Moderator API offers client libraries that makes it easy to integrate Content Moderator into your applications.
---
-zone_pivot_groups: programming-languages-set-conmod
--- Previously updated : 09/28/2021--
-keywords: content moderator, azure content moderator, online moderator, content filtering software
--
-# Quickstart: Use the Content Moderator client library
------------
cognitive-services Encrypt Data At Rest https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Content-Moderator/encrypt-data-at-rest.md
- Title: Content Moderator encryption of data at rest-
-description: Content Moderator encryption of data at rest.
------ Previously updated : 03/13/2020-
-#Customer intent: As a user of the Content Moderator service, I want to learn how encryption at rest works.
--
-# Content Moderator encryption of data at rest
-
-Content Moderator automatically encrypts your data when it is persisted to the cloud, helping to meet your organizational security and compliance goals.
--
-> [!IMPORTANT]
-> Customer-managed keys are only available on the E0 pricing tier. To request the ability to use customer-managed keys, fill out and submit the [Content Moderator Customer-Managed Key Request Form](https://aka.ms/cogsvc-cmk). It will take approximately 3-5 business days to hear back on the status of your request. Depending on demand, you may be placed in a queue and approved as space becomes available. Once approved for using CMK with the Content Moderator service, you will need to create a new Content Moderator resource and select E0 as the Pricing Tier. Once your Content Moderator resource with the E0 pricing tier is created, you can use Azure Key Vault to set up your managed identity.
-
-Customer-managed keys are available in all Azure regions.
--
-## Next steps
-
-* For a full list of services that support CMK, see [Customer-Managed Keys for Cognitive Services](../encryption/cognitive-services-encryption-keys-portal.md)
-* [What is Azure Key Vault](../../key-vault/general/overview.md)?
-* [Cognitive Services Customer-Managed Key Request Form](https://aka.ms/cogsvc-cmk)
cognitive-services Export Delete Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Content-Moderator/export-delete-data.md
- Title: Export or delete user data - Content Moderator-
-description: You have full control over your data. Learn how to view, export or delete your data in Content Moderator.
------- Previously updated : 02/07/2019---
-# Export or delete user data in Content Moderator
--
-Content Moderator collects user data to operate the service, but customers have full control to view, export, and delete their data using the [Moderation APIs](./api-reference.md).
--
-For more information on how to export and delete user data in Content Moderator, see the following table.
-
-| Data | Export Operation | Delete Operation |
-| - | - | - |
-| Account Info (Subscription Keys) | N/A | Delete using the Azure portal (Azure Subscriptions). |
-| Images for custom matching | Call the [Get image IDs API](https://westus.dev.cognitive.microsoft.com/docs/services/57cf755e3f9b070c105bd2c2/operations/57cf755e3f9b070868a1f676). Images are stored in a one-way proprietary hash format, and there is no way to extract the actual images. | Call the [Delete all Images API](https://westus.dev.cognitive.microsoft.com/docs/services/57cf755e3f9b070c105bd2c2/operations/57cf755e3f9b070868a1f686). Or delete the Content Moderator resource using the Azure portal. |
-| Terms for custom matching | Call the [Get all terms API](https://westus.dev.cognitive.microsoft.com/docs/services/57cf755e3f9b070c105bd2c2/operations/57cf755e3f9b070868a1f67e). | Call the [Delete all terms API](https://westus.dev.cognitive.microsoft.com/docs/services/57cf755e3f9b070c105bd2c2/operations/57cf755e3f9b070868a1f67d). Or delete the Content Moderator resource using the Azure portal. |
cognitive-services Facebook Post Moderation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Content-Moderator/facebook-post-moderation.md
- Title: "Tutorial: Moderate Facebook content - Content Moderator"-
-description: In this tutorial, you will learn how to use machine-learning-based Content Moderator to help moderate Facebook posts and comments.
------- Previously updated : 01/29/2021-
-#Customer intent: As the moderator of a Facebook page, I want to use Azure's machine learning technology to automate and streamline the process of post moderation.
--
-# Tutorial: Moderate Facebook posts and comments with Azure Content Moderator
--
-In this tutorial, you will learn how to use Azure Content Moderator to help moderate the posts and comments on a Facebook page. Facebook will send the content posted by visitors to the Content Moderator service. Then your Content Moderator workflows will either publish the content or create reviews within the Review tool, depending on the content scores and thresholds.
-
-> [!IMPORTANT]
-> In 2018, Facebook implemented a stricter vetting policy for Facebook Apps. You will not be able to complete the steps of this tutorial if your app has not been reviewed and approved by the Facebook review team.
-
-This tutorial shows you how to:
-
-> [!div class="checklist"]
-> * Create a Content Moderator team.
-> * Create Azure Functions that listen for HTTP events from Content Moderator and Facebook.
-> * Link a Facebook page to Content Moderator using a Facebook application.
-
-If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/cognitive-services/) before you begin.
-
-This diagram illustrates each component of this scenario:
-
-![Diagram of Content Moderator receiving information from Facebook through "FBListener" and sending information through "CMListener"](images/tutorial-facebook-moderation.png)
-
-## Prerequisites
-- A Content Moderator subscription key. Follow the instructions in [Create a Cognitive Services account](../cognitive-services-apis-create-account.md) to subscribe to the Content Moderator service and get your key.
-- A [Facebook account](https://www.facebook.com/).
-
-## Create a review team
-
-Refer to the [Try Content Moderator on the web](quick-start.md) quickstart for instructions on how to sign up for the Content Moderator Review tool and create a review team. Take note of the **Team ID** value on the **Credentials** page.
-
-## Configure image moderation workflow
-
-Refer to the [Define, test, and use workflows](review-tool-user-guide/workflows.md) guide to create a custom image workflow. Content Moderator will use this workflow to automatically check images on Facebook and send some to the Review tool. Take note of the workflow **name**.
-
-## Configure text moderation workflow
-
-Again, refer to the [Define, test, and use workflows](review-tool-user-guide/workflows.md) guide; this time, create a custom text workflow. Content Moderator will use this workflow to automatically check text content. Take note of the workflow **name**.
-
-![Configure Text Workflow](images/text-workflow-configure.PNG)
-
-Test your workflow using the **Execute Workflow** button.
-
-![Test Text Workflow](images/text-workflow-test.PNG)
-
-## Create Azure Functions
-
-Sign in to the [Azure portal](https://portal.azure.com/) and follow these steps:
-
-1. Create an Azure Function App as shown on the [Azure Functions](../../azure-functions/functions-create-function-app-portal.md) page.
-1. Go to the newly created Function App.
-1. Within the App, go to the **Platform features** tab and select **Configuration**. In the **Application settings** section of the next page, select **New application setting** to add the following key/value pairs:
-
- | App Setting name | value |
- | -- |-|
- | `cm:TeamId` | Your Content Moderator TeamId |
- | `cm:SubscriptionKey` | Your Content Moderator subscription key - See [Credentials](./review-tool-user-guide/configure.md#credentials) |
- | `cm:Region` | Your Content Moderator region name, without the spaces. You can find this name in the **Location** field of the **Overview** tab of your Azure resource.|
- | `cm:ImageWorkflow` | Name of the workflow to run on Images |
- | `cm:TextWorkflow` | Name of the workflow to run on Text |
 | `cm:CallbackEndpoint` | URL for the CMListener Function App that you will create later in this guide |
- | `fb:VerificationToken` | A secret token that you create, used to subscribe to the Facebook feed events |
 | `fb:PageAccessToken` | A non-expiring Facebook Graph API access token that allows the function to hide or delete posts on your behalf. You will get this token in a later step. |
-
- Click the **Save** button at the top of the page.
-
-1. Go back to the **Platform features** tab. Use the **+** button on the left pane to bring up the **New function** pane. The function you are about to create will receive events from Facebook.
-
- ![Azure Functions pane with the Add Function button highlighted.](images/new-function.png)
-
- 1. Click on the tile that says **Http trigger**.
- 1. Enter the name **FBListener**. The **Authorization Level** field should be set to **Function**.
- 1. Click **Create**.
- 1. Replace the contents of the **run.csx** with the contents from **FbListener/run.csx**
-
- [!code-csharp[FBListener: csx file](~/samples-fbPageModeration/FbListener/run.csx?range=1-154)]
-
-1. Create a new **Http trigger** function named **CMListener**. This function receives events from Content Moderator. Replace the contents of the **run.csx** with the contents from **CMListener/run.csx**
-
 [!code-csharp[CMListener: csx file](~/samples-fbPageModeration/CmListener/run.csx?range=1-110)]
---
-## Configure the Facebook page and App
-
-1. Create a Facebook App.
-
- ![facebook developer page](images/facebook-developer-app.png)
-
- 1. Navigate to the [Facebook developer site](https://developers.facebook.com/)
- 1. Go to **My Apps**.
- 1. Add a New App.
- 1. Provide a name
- 1. Select **Webhooks -> Set Up**
- 1. Select **Page** in the dropdown menu and select **Subscribe to this object**
- 1. Provide the **FBListener Url** as the Callback URL and the **Verify Token** you configured under the **Function App Settings**
- 1. Once subscribed, scroll down to feed and select **subscribe**.
- 1. Select the **Test** button of the **feed** row to send a test message to your FBListener Azure Function, then hit the **Send to My Server** button. You should see the request being received on your FBListener.
-
-1. Create a Facebook Page.
-
- > [!IMPORTANT]
- > In 2018, Facebook implemented a more strict vetting of Facebook apps. You will not be able to execute sections 2, 3 and 4 if your app has not been reviewed and approved by the Facebook review team.
-
- 1. Navigate to [Facebook](https://www.facebook.com/pages) and create a **new Facebook Page**.
- 1. Allow the Facebook App to access this page by following these steps:
- 1. Navigate to the [Graph API Explorer](https://developers.facebook.com/tools/explorer/).
- 1. Select **Application**.
- 1. Select **Page Access Token**, Send a **Get** request.
- 1. Select the **Page ID** in the response.
 1. Now append **/subscribed_apps** to the URL and send a **Get** request (the response is empty).
 1. Then submit a **Post** request. You should get the response **success: true**.
-
-3. Create a non-expiring Graph API access token.
-
- 1. Navigate to the [Graph API Explorer](https://developers.facebook.com/tools/explorer/).
- 2. Select the **Application** option.
- 3. Select the **Get User Access Token** option.
- 4. Under the **Select Permissions**, select **manage_pages** and **publish_pages** options.
- 5. We will use the **access token** (Short Lived Token) in the next step.
-
-4. We use Postman for the next few steps.
-
- 1. Open **Postman** (or get it [here](https://www.getpostman.com/)).
- 2. Import these two files:
- 1. [Postman Collection](https://github.com/MicrosoftContentModerator/samples-fbPageModeration/blob/master/Facebook%20Permanant%20Page%20Access%20Token.postman_collection.json)
- 2. [Postman Environment](https://github.com/MicrosoftContentModerator/samples-fbPageModeration/blob/master/FB%20Page%20Access%20Token%20Environment.postman_environment.json)
- 3. Update these environment variables:
-
- | Key | Value |
- | -- |-|
- | appId | Insert your Facebook App Identifier here |
- | appSecret | Insert your Facebook App's secret here |
- | short_lived_token | Insert the short lived user access token you generated in the previous step |
- 4. Now run the 3 APIs listed in the collection:
- 1. Select **Generate Long-Lived Access Token** and click **Send**.
- 2. Select **Get User ID** and click **Send**.
- 3. Select **Get Permanent Page Access Token** and click **Send**.
- 5. Copy the **access_token** value from the response and assign it to the App setting, **fb:PageAccessToken**.
-
-The solution sends all images and text posted on your Facebook page to Content Moderator. Then the workflows that you configured earlier are invoked. Content that doesn't meet the criteria defined in your workflows is sent to reviews within the Review tool; the rest of the content is published automatically.
-
-## Next steps
-
-In this tutorial, you set up a solution that sends Facebook posts and comments to Content Moderator and routes flagged content to reviews within the Review tool. Next, learn more about the details of image moderation.
-
-> [!div class="nextstepaction"]
-> [Image moderation](./image-moderation-api.md)
cognitive-services Image Lists Quickstart Dotnet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Content-Moderator/image-lists-quickstart-dotnet.md
- Title: "Check images against custom lists in C# - Content Moderator"-
-description: How to moderate images with custom image lists using the Content Moderator SDK for C#.
------- Previously updated : 10/24/2019--
-#Customer intent: As a C# developer of content-providing software, I want to check images against a custom list of inappropriate images so that I can handle them more efficiently.
--
-# Moderate with custom image lists in C#
-
-This article provides information and code samples to help you get started using
-the [Content Moderator SDK for .NET](https://www.nuget.org/packages/Microsoft.Azure.CognitiveServices.ContentModerator/) to:
-- Create a custom image list
-- Add and remove images from the list
-- Get the IDs of all images in the list
-- Retrieve and update list metadata
-- Refresh the list search index
-- Screen images against images in the list
-- Delete all images from the list
-- Delete the custom list
-
-> [!NOTE]
-> There is a maximum limit of **5 image lists** with each list **not to exceed 10,000 images**.
-
-The console application for this guide simulates some of the tasks you
-can perform with the image list API.
-
-If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/cognitive-services/) before you begin.
-
-## Sign up for Content Moderator services
-
-Before you can use Content Moderator services through the REST API or the SDK, you'll need an API subscription key. Subscribe to Content Moderator service in the [Azure portal](https://portal.azure.com/#create/Microsoft.CognitiveServicesContentModerator) to obtain it.
-
-## Create your Visual Studio project
-
-1. Add a new **Console app (.NET Framework)** project to your solution.
-
- In the sample code, name the project **ImageLists**.
-
-1. Select this project as the single startup project for the solution.
-
-### Install required packages
-
-Install the following NuGet packages:
-- Microsoft.Azure.CognitiveServices.ContentModerator
-- Microsoft.Rest.ClientRuntime
-- Newtonsoft.Json
-
-### Update the program's using statements
-
-Add the following `using` statements:
-
-```csharp
-using Microsoft.Azure.CognitiveServices.ContentModerator;
-using Microsoft.Azure.CognitiveServices.ContentModerator.Models;
-using Newtonsoft.Json;
-using System;
-using System.Collections.Generic;
-using System.IO;
-using System.Threading;
-```
-
-### Create the Content Moderator client
-
-Add the following code to create a Content Moderator client for your subscription. Update the `AzureEndpoint` and `CMSubscriptionKey` fields with the values of your endpoint URL and subscription key. You can find these in the **Quick start** tab of your resource in the Azure portal.
-
-```csharp
-/// <summary>
-/// Wraps the creation and configuration of a Content Moderator client.
-/// </summary>
-/// <remarks>This class library contains insecure code. If you adapt this
-/// code for use in production, use a secure method of storing and using
-/// your Content Moderator subscription key.</remarks>
-public static class Clients
-{
- /// <summary>
- /// The base URL for Content Moderator calls.
- /// </summary>
- private static readonly string AzureEndpoint = "YOUR ENDPOINT URL";
-
- /// <summary>
- /// Your Content Moderator subscription key.
- /// </summary>
- private static readonly string CMSubscriptionKey = "YOUR API KEY";
-
- /// <summary>
- /// Returns a new Content Moderator client for your subscription.
- /// </summary>
- /// <returns>The new client.</returns>
- /// <remarks>The <see cref="ContentModeratorClient"/> is disposable.
- /// When you have finished using the client,
- /// you should dispose of it either directly or indirectly. </remarks>
- public static ContentModeratorClient NewClient()
- {
- // Create and initialize an instance of the Content Moderator API wrapper.
- ContentModeratorClient client = new ContentModeratorClient(new ApiKeyServiceClientCredentials(CMSubscriptionKey));
-
- client.Endpoint = AzureEndpoint;
- return client;
- }
-}
-```
-
-> [!IMPORTANT]
-> Remember to remove the key from your code when you're done, and never post it publicly. For production, use a secure way of storing and accessing your credentials like [Azure Key Vault](../../key-vault/general/overview.md). See the Cognitive Services [security](../cognitive-services-security.md) article for more information.
-
-### Initialize application-specific settings
-
-Add the following classes and static fields to the **Program** class in Program.cs.
-
-```csharp
-/// <summary>
-/// The minimum amount of time, in milliseconds, to wait between calls
-/// to the Image List API.
-/// </summary>
-private const int throttleRate = 3000;
-
-/// <summary>
-/// The number of minutes to delay after updating the search index before
-/// performing image match operations against the list.
-/// </summary>
-private const double latencyDelay = 0.5;
-
-/// <summary>
-/// Define constants for the labels to apply to the image list.
-/// </summary>
-private class Labels
-{
- public const string Sports = "Sports";
- public const string Swimsuit = "Swimsuit";
-}
-
-/// <summary>
-/// Define input data for images for this sample.
-/// </summary>
-private class Images
-{
- /// <summary>
- /// Represents a group of images that all share the same label.
- /// </summary>
- public class Data
- {
- /// <summary>
- /// The label for the images.
- /// </summary>
- public string Label;
-
- /// <summary>
- /// The URLs of the images.
- /// </summary>
- public string[] Urls;
- }
-
- /// <summary>
- /// The initial set of images to add to the list with the sports label.
- /// </summary>
- public static readonly Data Sports = new Data()
- {
- Label = Labels.Sports,
- Urls = new string[] {
- "https://moderatorsampleimages.blob.core.windows.net/samples/sample4.png",
- "https://moderatorsampleimages.blob.core.windows.net/samples/sample6.png",
- "https://moderatorsampleimages.blob.core.windows.net/samples/sample9.png"
- }
- };
-
- /// <summary>
- /// The initial set of images to add to the list with the swimsuit label.
- /// </summary>
- /// <remarks>We're adding sample16.png (image of a puppy), to simulate
- /// an improperly added image that we will later remove from the list.
- /// Note: each image can have only one entry in a list, so sample4.png
- /// will throw an exception when we try to add it with a new label.</remarks>
- public static readonly Data Swimsuit = new Data()
- {
- Label = Labels.Swimsuit,
- Urls = new string[] {
- "https://moderatorsampleimages.blob.core.windows.net/samples/sample1.jpg",
- "https://moderatorsampleimages.blob.core.windows.net/samples/sample3.png",
- "https://moderatorsampleimages.blob.core.windows.net/samples/sample4.png",
- "https://moderatorsampleimages.blob.core.windows.net/samples/sample16.png"
- }
- };
-
- /// <summary>
- /// The set of images to subsequently remove from the list.
- /// </summary>
- public static readonly string[] Corrections = new string[] {
- "https://moderatorsampleimages.blob.core.windows.net/samples/sample16.png"
- };
-}
-
-/// <summary>
-/// The images to match against the image list.
-/// </summary>
-/// <remarks>Samples 1 and 4 should match; sample 5 should not. Sample 16 matches until it is removed from the list.</remarks>
-private static readonly string[] ImagesToMatch = new string[] {
- "https://moderatorsampleimages.blob.core.windows.net/samples/sample1.jpg",
- "https://moderatorsampleimages.blob.core.windows.net/samples/sample4.png",
- "https://moderatorsampleimages.blob.core.windows.net/samples/sample5.png",
- "https://moderatorsampleimages.blob.core.windows.net/samples/sample16.png"
-};
-
-/// <summary>
-/// A dictionary that tracks the ID assigned to each image URL when
-/// the image is added to the list.
-/// </summary>
-/// <remarks>Indexed by URL.</remarks>
-private static readonly Dictionary<string, int> ImageIdMap =
- new Dictionary<string, int>();
-
-/// <summary>
-/// The name of the file to contain the output from the list management operations.
-/// </summary>
-/// <remarks>Relative paths are relative to the execution directory.</remarks>
-private static string OutputFile = "ListOutput.log";
-
-/// <summary>
-/// A static reference to the text writer to use for logging.
-/// </summary>
-private static TextWriter writer;
-
-/// <summary>
-/// A copy of the list details.
-/// </summary>
-/// <remarks>Used to initially create the list, and later to update the
-/// list details.</remarks>
-private static Body listDetails;
-```
-
-> [!NOTE]
-> Your Content Moderator service key has a requests-per-second (RPS)
-> rate limit, and if you exceed the limit, the SDK throws an exception with a 429 error code. A free tier key has a one-RPS rate limit.
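-
-The following is a hedged sketch, not part of the original sample, of one way to back off and retry when the rate limit is exceeded; it is a helper you could add to the **Program** class. It reuses the `throttleRate` constant defined earlier and catches the same `APIErrorException` type that the SDK raises elsewhere in this guide:
-
-```csharp
-/// <summary>
-/// Invokes a Content Moderator call and retries with a simple backoff
-/// if the SDK throws an APIErrorException (for example, on a 429 response).
-/// </summary>
-private static T CallWithBackoff<T>(Func<T> call, int maxAttempts = 3)
-{
-    for (int attempt = 1; ; attempt++)
-    {
-        try
-        {
-            return call();
-        }
-        catch (APIErrorException) when (attempt < maxAttempts)
-        {
-            // Assume the failure was throttling, and wait before retrying.
-            Thread.Sleep(throttleRate * attempt);
-        }
-    }
-}
-
-// Example usage:
-// var details = CallWithBackoff(() => client.ListManagementImageLists.GetDetails(listId.ToString()));
-```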
--
-## Create a method to write messages to the log file
-
-Add the following method to the **Program** class.
-
-```csharp
-/// <summary>
-/// Writes a message to the log file, and optionally to the console.
-/// </summary>
-/// <param name="message">The message.</param>
-/// <param name="echo">if set to <c>true</c>, write the message to the console.</param>
-private static void WriteLine(string message = null, bool echo = false)
-{
- writer.WriteLine(message ?? String.Empty);
-
- if (echo)
- {
- Console.WriteLine(message ?? String.Empty);
- }
-}
-```
-
-## Create a method to create the custom list
-
-Add the following method to the **Program** class.
-
-```csharp
-/// <summary>
-/// Creates the custom list.
-/// </summary>
-/// <param name="client">The Content Moderator client.</param>
-/// <returns>The response object from the operation.</returns>
-private static ImageList CreateCustomList(ContentModeratorClient client)
-{
- // Create the request body.
- listDetails = new Body("MyList", "A sample list",
- new BodyMetadata("Acceptable", "Potentially racy"));
-
- WriteLine($"Creating list {listDetails.Name}.", true);
-
- var result = client.ListManagementImageLists.Create(
- "application/json", listDetails);
- Thread.Sleep(throttleRate);
-
- WriteLine("Response:");
- WriteLine(JsonConvert.SerializeObject(result, Formatting.Indented));
-
- return result;
-}
-```
-
-## Create a method to add a collection of images to the list
-
-Add the following method to the **Program** class. This guide does not demonstrate how to apply tags to images in the list.
-
-```csharp
-/// <summary>
-/// Adds images to an image list.
-/// </summary>
-/// <param name="client">The Content Moderator client.</param>
-/// <param name="listId">The list identifier.</param>
-/// <param name="imagesToAdd">The images to add.</param>
-/// <param name="label">The label to apply to each image.</param>
-/// <remarks>Images are assigned content IDs when they are added to the list.
-/// Track the content ID assigned to each image.</remarks>
-private static void AddImages(
-ContentModeratorClient client, int listId,
-IEnumerable<string> imagesToAdd, string label)
-{
- foreach (var imageUrl in imagesToAdd)
- {
- WriteLine();
- WriteLine($"Adding {imageUrl} to list {listId} with label {label}.", true);
- try
- {
- var result = client.ListManagementImage.AddImageUrlInput(
- listId.ToString(), "application/json", new BodyModel("URL", imageUrl), null, label);
-
- ImageIdMap.Add(imageUrl, Int32.Parse(result.ContentId));
-
- WriteLine("Response:");
- WriteLine(JsonConvert.SerializeObject(result, Formatting.Indented));
- }
- catch (Exception ex)
- {
- WriteLine($"Unable to add image to list. Caught {ex.GetType().FullName}: {ex.Message}", true);
- }
- finally
- {
- Thread.Sleep(throttleRate);
- }
- }
-}
-```
-
-## Create a method to remove images from the list
-
-Add the following method to the **Program** class.
-
-```csharp
-/// <summary>
-/// Removes images from an image list.
-/// </summary>
-/// <param name="client">The Content Moderator client.</param>
-/// <param name="listId">The list identifier.</param>
-/// <param name="imagesToRemove">The images to remove.</param>
-/// <remarks>Images are assigned content IDs when they are added to the list.
-/// Use the content ID to remove the image.</remarks>
-private static void RemoveImages(
- ContentModeratorClient client, int listId,
- IEnumerable<string> imagesToRemove)
-{
- foreach (var imageUrl in imagesToRemove)
- {
- if (!ImageIdMap.ContainsKey(imageUrl)) continue;
- int imageId = ImageIdMap[imageUrl];
-
- WriteLine();
- WriteLine($"Removing entry for {imageUrl} (ID = {imageId}) from list {listId}.", true);
-
- var result = client.ListManagementImage.DeleteImage(
- listId.ToString(), imageId.ToString());
- Thread.Sleep(throttleRate);
-
- ImageIdMap.Remove(imageUrl);
-
- WriteLine("Response:");
- WriteLine(JsonConvert.SerializeObject(result, Formatting.Indented));
- }
-}
-```
-
-## Create a method to get all of the content IDs for images in the list
-
-Add the following method to the **Program** class.
-
-```csharp
-/// <summary>
-/// Gets all image IDs in an image list.
-/// </summary>
-/// <param name="client">The Content Moderator client.</param>
-/// <param name="listId">The list identifier.</param>
-/// <returns>The response object from the operation.</returns>
-private static ImageIds GetAllImageIds(
- ContentModeratorClient client, int listId)
-{
- WriteLine();
- WriteLine($"Getting all image IDs for list {listId}.", true);
-
- var result = client.ListManagementImage.GetAllImageIds(listId.ToString());
- Thread.Sleep(throttleRate);
-
- WriteLine("Response:");
- WriteLine(JsonConvert.SerializeObject(result, Formatting.Indented));
-
- return result;
-}
-```
-
-## Create a method to update the details of the list
-
-Add the following method to the **Program** class.
-
-```csharp
-/// <summary>
-/// Updates the details of an image list.
-/// </summary>
-/// <param name="client">The Content Moderator client.</param>
-/// <param name="listId">The list identifier.</param>
-/// <returns>The response object from the operation.</returns>
-private static ImageList UpdateListDetails(
- ContentModeratorClient client, int listId)
-{
- WriteLine();
- WriteLine($"Updating details for list {listId}.", true);
-
- listDetails.Name = "Swimsuits and sports";
-
- var result = client.ListManagementImageLists.Update(
- listId.ToString(), "application/json", listDetails);
- Thread.Sleep(throttleRate);
-
- WriteLine("Response:");
- WriteLine(JsonConvert.SerializeObject(result, Formatting.Indented));
-
- return result;
-}
-```
-
-## Create a method to retrieve the details of the list
-
-Add the following method to the **Program** class.
-
-```csharp
-/// <summary>
-/// Gets the details for an image list.
-/// </summary>
-/// <param name="client">The Content Moderator client.</param>
-/// <param name="listId">The list identifier.</param>
-/// <returns>The response object from the operation.</returns>
-private static ImageList GetListDetails(
- ContentModeratorClient client, int listId)
-{
- WriteLine();
- WriteLine($"Getting details for list {listId}.", true);
-
- var result = client.ListManagementImageLists.GetDetails(listId.ToString());
- Thread.Sleep(throttleRate);
-
- WriteLine("Response:");
- WriteLine(JsonConvert.SerializeObject(result, Formatting.Indented));
-
- return result;
-}
-```
-
-## Create a method to refresh the search index of the list
-
-Add the following method to the **Program** class. Any time you update a list, you need to refresh the search index before using the
-list to screen images.
-
-```csharp
-/// <summary>
-/// Refreshes the search index for an image list.
-/// </summary>
-/// <param name="client">The Content Moderator client.</param>
-/// <param name="listId">The list identifier.</param>
-/// <returns>The response object from the operation.</returns>
-private static RefreshIndex RefreshSearchIndex(
- ContentModeratorClient client, int listId)
-{
- WriteLine();
- WriteLine($"Refreshing the search index for list {listId}.", true);
-
- var result = client.ListManagementImageLists.RefreshIndexMethod(listId.ToString());
- Thread.Sleep(throttleRate);
-
- WriteLine("Response:");
- WriteLine(JsonConvert.SerializeObject(result, Formatting.Indented));
-
- return result;
-}
-```
-
-## Create a method to match images against the list
-
-Add the following method to the **Program** class.
-
-```csharp
-/// <summary>
-/// Matches images against an image list.
-/// </summary>
-/// <param name="client">The Content Moderator client.</param>
-/// <param name="listId">The list identifier.</param>
-/// <param name="imagesToMatch">The images to screen.</param>
-private static void MatchImages(
- ContentModeratorClient client, int listId,
- IEnumerable<string> imagesToMatch)
-{
- foreach (var imageUrl in imagesToMatch)
- {
- WriteLine();
- WriteLine($"Matching image {imageUrl} against list {listId}.", true);
-
- var result = client.ImageModeration.MatchUrlInput(
- "application/json", new BodyModel("URL", imageUrl), listId.ToString());
- Thread.Sleep(throttleRate);
-
- WriteLine("Response:");
- WriteLine(JsonConvert.SerializeObject(result, Formatting.Indented));
- }
-}
-```
-
-## Create a method to delete all images from the list
-
-Add the following method to the **Program** class.
-
-```csharp
-/// <summary>
-/// Deletes all images from an image list.
-/// </summary>
-/// <param name="client">The Content Modertor client.</param>
-/// <param name="listId">The list identifier.</param>
-private static void DeleteAllImages(
- ContentModeratorClient client, int listId)
-{
- WriteLine();
- WriteLine($"Deleting all images from list {listId}.", true);
-
- var result = client.ListManagementImage.DeleteAllImages(listId.ToString());
- Thread.Sleep(throttleRate);
-
- WriteLine("Response:");
- WriteLine(JsonConvert.SerializeObject(result, Formatting.Indented));
-}
-```
-
-## Create a method to delete the list
-
-Add the following method to the **Program** class.
-
-```csharp
-/// <summary>
-/// Deletes an image list.
-/// </summary>
-/// <param name="client">The Content Moderator client.</param>
-/// <param name="listId">The list identifier.</param>
-private static void DeleteCustomList(
- ContentModeratorClient client, int listId)
-{
- WriteLine();
- WriteLine($"Deleting list {listId}.", true);
-
- var result = client.ListManagementImageLists.Delete(listId.ToString());
- Thread.Sleep(throttleRate);
-
- WriteLine("Response:");
- WriteLine(JsonConvert.SerializeObject(result, Formatting.Indented));
-}
-```
-
-## Create a method to retrieve IDs for all image lists
-
-Add the following method to the **Program** class.
-
-```csharp
-/// <summary>
-/// Gets all list identifiers for the client.
-/// </summary>
-/// <param name="client">The Content Moderator client.</param>
-/// <returns>The response object from the operation.</returns>
-private static IList<ImageList> GetAllListIds(ContentModeratorClient client)
-{
- WriteLine();
- WriteLine($"Getting all image list IDs.", true);
-
- var result = client.ListManagementImageLists.GetAllImageLists();
- Thread.Sleep(throttleRate);
-
- WriteLine("Response:");
- WriteLine(JsonConvert.SerializeObject(result, Formatting.Indented));
-
- return result;
-}
-```
-
-## Add code to simulate the use of an image list
-
-Add the following code to the **Main** method. This code simulates many of the operations that you would perform in defining and managing the list, as well as using the list to screen images. The logging features allow you to see the response objects generated by the SDK calls to the Content Moderator service.
-
-```csharp
-// Create the text writer to use for logging, and cache a static reference to it.
-using (StreamWriter outputWriter = new StreamWriter(OutputFile))
-{
- writer = outputWriter;
-
- // Create a Content Moderator client.
- using (var client = Clients.NewClient())
- {
- // Create a custom image list and record the ID assigned to it.
- var creationResult = CreateCustomList(client);
- if (creationResult.Id.HasValue)
- {
- // Cache the ID of the new image list.
- int listId = creationResult.Id.Value;
-
- // Perform various operations using the image list.
- AddImages(client, listId, Images.Sports.Urls, Images.Sports.Label);
- AddImages(client, listId, Images.Swimsuit.Urls, Images.Swimsuit.Label);
-
- GetAllImageIds(client, listId);
- UpdateListDetails(client, listId);
- GetListDetails(client, listId);
-
- // Be sure to refresh search index
- RefreshSearchIndex(client, listId);
-
- // WriteLine();
- WriteLine($"Waiting {latencyDelay} minutes to allow the server time to propagate the index changes.", true);
- Thread.Sleep((int)(latencyDelay * 60 * 1000));
-
- // Match images against the image list.
- MatchImages(client, listId, ImagesToMatch);
-
- // Remove images
- RemoveImages(client, listId, Images.Corrections);
-
- // Be sure to refresh search index
- RefreshSearchIndex(client, listId);
-
- WriteLine();
- WriteLine($"Waiting {latencyDelay} minutes to allow the server time to propagate the index changes.", true);
- Thread.Sleep((int)(latencyDelay * 60 * 1000));
-
- // Match images again against the image list. The removed image should not get matched.
- MatchImages(client, listId, ImagesToMatch);
-
- // Delete all images from the list.
- DeleteAllImages(client, listId);
-
- // Delete the image list.
- DeleteCustomList(client, listId);
-
- // Verify that the list was deleted.
- GetAllListIds(client);
- }
- }
-
- writer.Flush();
- writer.Close();
- writer = null;
-}
-
-Console.WriteLine();
-Console.WriteLine("Press any key to exit...");
-Console.ReadKey();
-```
-
-## Run the program and review the output
-
-The list ID and the image content IDs are different each time you run the application.
-The log file written by the program has the following output:
-
-```json
-Creating list MyList.
-Response:
-{
- "Id": 169642,
- "Name": "MyList",
- "Description": "A sample list",
- "Metadata": {
- "Key One": "Acceptable",
- "Key Two": "Potentially racy"
- }
-}
-
-Adding https://moderatorsampleimages.blob.core.windows.net/samples/sample4.png to list 169642 with label Sports.
-Response:
-{
- "ContentId": "169490",
- "AdditionalInfo": [
- {
- "Key": "Source",
- "Value": "169642"
- },
- {
- "Key": "ImageDownloadTimeInMs",
- "Value": "233"
- },
- {
- "Key": "ImageSizeInBytes",
- "Value": "2945548"
- }
- ],
- "Status": {
- "Code": 3000,
- "Description": "OK",
- "Exception": null
- },
- "TrackingId": "WE_f0527c49616243c5ac65e1cc3482d390_ContentModerator.Preview_b4d3e20a-0751-4760-8829-475e5da33ce8"
-}
-
-Adding https://moderatorsampleimages.blob.core.windows.net/samples/sample6.png to list 169642 with label Sports.
-Response:
-{
- "ContentId": "169491",
- "AdditionalInfo": [
- {
- "Key": "Source",
- "Value": "169642"
- },
- {
- "Key": "ImageDownloadTimeInMs",
- "Value": "215"
- },
- {
- "Key": "ImageSizeInBytes",
- "Value": "2440050"
- }
- ],
- "Status": {
- "Code": 3000,
- "Description": "OK",
- "Exception": null
- },
- "TrackingId": "WE_f0527c49616243c5ac65e1cc3482d390_ContentModerator.Preview_cc1eb6af-2463-4e5e-9145-2a11dcecbc30"
-}
-
-Adding https://moderatorsampleimages.blob.core.windows.net/samples/sample9.png to list 169642 with label Sports.
-Response:
-{
- "ContentId": "169492",
- "AdditionalInfo": [
- {
- "Key": "Source",
- "Value": "169642"
- },
- {
- "Key": "ImageDownloadTimeInMs",
- "Value": "98"
- },
- {
- "Key": "ImageSizeInBytes",
- "Value": "1631958"
- }
- ],
- "Status": {
- "Code": 3000,
- "Description": "OK",
- "Exception": null
- },
- "TrackingId": "WE_f0527c49616243c5ac65e1cc3482d390_ContentModerator.Preview_01edc1f2-b448-48cf-b7f6-23b64d5040e9"
-}
-
-Adding https://moderatorsampleimages.blob.core.windows.net/samples/sample1.jpg to list 169642 with label Swimsuit.
-Response:
-{
- "ContentId": "169493",
- "AdditionalInfo": [
- {
- "Key": "Source",
- "Value": "169642"
- },
- {
- "Key": "ImageDownloadTimeInMs",
- "Value": "27"
- },
- {
- "Key": "ImageSizeInBytes",
- "Value": "17280"
- }
- ],
- "Status": {
- "Code": 3000,
- "Description": "OK",
- "Exception": null
- },
- "TrackingId": "WE_f0527c49616243c5ac65e1cc3482d390_ContentModerator.Preview_41f7bc6f-8778-4576-ba46-37b43a6c2434"
-}
-
-Adding https://moderatorsampleimages.blob.core.windows.net/samples/sample3.png to list 169642 with label Swimsuit.
-Response:
-{
- "ContentId": "169494",
- "AdditionalInfo": [
- {
- "Key": "Source",
- "Value": "169642"
- },
- {
- "Key": "ImageDownloadTimeInMs",
- "Value": "129"
- },
- {
- "Key": "ImageSizeInBytes",
- "Value": "1242855"
- }
- ],
- "Status": {
- "Code": 3000,
- "Description": "OK",
- "Exception": null
- },
- "TrackingId": "WE_f0527c49616243c5ac65e1cc3482d390_ContentModerator.Preview_61a48f33-eb55-4fd9-ac97-20eb0f3622a5"
-}
-
-Adding https://moderatorsampleimages.blob.core.windows.net/samples/sample4.png to list 169642 with label Swimsuit.
-Unable to add image to list. Caught Microsoft.CognitiveServices.ContentModerator.Models.APIErrorException: Operation returned an invalid status code 'Conflict'
-
-Adding https://moderatorsampleimages.blob.core.windows.net/samples/sample16.png to list 169642 with label Swimsuit.
-Response:
-{
- "ContentId": "169495",
- "AdditionalInfo": [
- {
- "Key": "Source",
- "Value": "169642"
- },
- {
- "Key": "ImageDownloadTimeInMs",
- "Value": "65"
- },
- {
- "Key": "ImageSizeInBytes",
- "Value": "1088127"
- }
- ],
- "Status": {
- "Code": 3000,
- "Description": "OK",
- "Exception": null
- },
- "TrackingId": "WE_f0527c49616243c5ac65e1cc3482d390_ContentModerator.Preview_1c1f3de4-58b9-4aa8-82fa-1b0f479f6d7c"
-}
-
-Getting all image IDs for list 169642.
-Response:
-{
- "ContentSource": "169642",
- "ContentIds": [
- 169490,
- 169491,
- 169492,
- 169493,
- 169494,
- 169495
-],
-"Status": {
- "Code": 3000,
- "Description": "OK",
- "Exception": null
- },
-"TrackingId": "WE_f0527c49616243c5ac65e1cc3482d390_ContentModerator.Preview_0d017deb-38fa-4701-a7b1-5b6608c79da2"
-}
-
-Updating details for list 169642.
-Response:
-{
- "Id": 169642,
- "Name": "Swimsuits and sports",
- "Description": "A sample list",
- "Metadata": {
- "Key One": "Acceptable",
- "Key Two": "Potentially racy"
- }
-}
-
-Getting details for list 169642.
-Response:
-{
- "Id": 169642,
- "Name": "Swimsuits and sports",
- "Description": "A sample list",
- "Metadata": {
- "Key One": "Acceptable",
- "Key Two": "Potentially racy"
- }
-}
-
-Refreshing the search index for list 169642.
-Response:
-{
- "ContentSourceId": "169642",
- "IsUpdateSuccess": true,
- "AdvancedInfo": [],
- "Status": {
- "Code": 3000,
- "Description": "RefreshIndex successfully completed.",
- "Exception": null
- },
- "TrackingId": "WE_f0527c49616243c5ac65e1cc3482d390_ContentModerator.Preview_c72255cd-55a0-415e-9c18-0b9c08a9f25b"
-}
-Waiting 0.5 minutes to allow the server time to propagate the index changes.
-
-Matching image https://moderatorsampleimages.blob.core.windows.net/samples/sample1.jpg against list 169642.
-Response:
-{
- "TrackingId": "WE_f0527c49616243c5ac65e1cc3482d390_ContentModerator.Preview_ec384878-dbaa-4999-9042-6ac986355967",
- "CacheID": null,
- "IsMatch": true,
- "Matches": [
- {
- "Score": 1.0,
- "MatchId": 169493,
- "Source": "169642",
- "Tags": [],
- "Label": "Swimsuit"
- }
- ],
- "Status": {
- "Code": 3000,
- "Description": "OK",
- "Exception": null
- }
-}
-
-Matching image https://moderatorsampleimages.blob.core.windows.net/samples/sample4.png against list 169642.
-Response:
-{
- "TrackingId": "WE_f0527c49616243c5ac65e1cc3482d390_ContentModerator.Preview_e9db4b8f-3067-400f-9552-d3e6af2474c0",
- "CacheID": null,
- "IsMatch": true,
- "Matches": [
- {
- "Score": 1.0,
- "MatchId": 169490,
- "Source": "169642",
- "Tags": [],
- "Label": "Sports"
- }
- ],
- "Status": {
- "Code": 3000,
- "Description": "OK",
- "Exception": null
- }
-}
-
-Matching image https://moderatorsampleimages.blob.core.windows.net/samples/sample5.png against list 169642.
-Response:
-{
- "TrackingId": "WE_f0527c49616243c5ac65e1cc3482d390_ContentModerator.Preview_25991575-05da-4904-89db-abe88270b403",
- "CacheID": null,
- "IsMatch": false,
- "Matches": [],
- "Status": {
- "Code": 3000,
- "Description": "OK",
- "Exception": null
- }
-}
-
-Matching image https://moderatorsampleimages.blob.core.windows.net/samples/sample16.png against list 169642.
-Response:
-{
- "TrackingId": "WE_f0527c49616243c5ac65e1cc3482d390_ContentModerator.Preview_c65d1c91-0d8a-4511-8ac6-814e04adc845",
- "CacheID": null,
- "IsMatch": true,
- "Matches": [
- {
- "Score": 1.0,
- "MatchId": 169495,
- "Source": "169642",
- "Tags": [],
- "Label": "Swimsuit"
- }
- ],
- "Status": {
- "Code": 3000,
- "Description": "OK",
- "Exception": null
- }
-}
-
-Removing entry for https://moderatorsampleimages.blob.core.windows.net/samples/sample16.png (ID = 169495) from list 169642.
-Response:
-""
-
-Refreshing the search index for list 169642.
-Response:
-{
- "ContentSourceId": "169642",
- "IsUpdateSuccess": true,
- "AdvancedInfo": [],
- "Status": {
- "Code": 3000,
- "Description": "RefreshIndex successfully completed.",
- "Exception": null
- },
- "TrackingId": "WE_f0527c49616243c5ac65e1cc3482d390_ContentModerator.Preview_b55a375e-30a1-4612-aa7b-81edcee5bffb"
-}
-
-Waiting 0.5 minutes to allow the server time to propagate the index changes.
-
-Matching image https://moderatorsampleimages.blob.core.windows.net/samples/sample1.jpg against list 169642.
-Response:
-{
- "TrackingId": "WE_f0527c49616243c5ac65e1cc3482d390_ContentModerator.Preview_00544948-2936-489c-98c8-b507b654bff5",
- "CacheID": null,
- "IsMatch": true,
- "Matches": [
- {
- "Score": 1.0,
- "MatchId": 169493,
- "Source": "169642",
- "Tags": [],
- "Label": "Swimsuit"
- }
- ],
- "Status": {
- "Code": 3000,
- "Description": "OK",
- "Exception": null
- }
-}
-
-Matching image https://moderatorsampleimages.blob.core.windows.net/samples/sample4.png against list 169642.
-Response:
-{
- "TrackingId": "WE_f0527c49616243c5ac65e1cc3482d390_ContentModerator.Preview_c36ec646-53c2-4705-86b2-d72b5c2273c7",
- "CacheID": null,
- "IsMatch": true,
- "Matches": [
- {
- "Score": 1.0,
- "MatchId": 169490,
- "Source": "169642",
- "Tags": [],
- "Label": "Sports"
- }
- ],
- "Status": {
- "Code": 3000,
- "Description": "OK",
- "Exception": null
- }
-}
-
-Matching image https://moderatorsampleimages.blob.core.windows.net/samples/sample5.png against list 169642.
-Response:
-{
-  "TrackingId": "WE_f0527c49616243c5ac65e1cc3482d390_ContentModerator.Preview_22edad74-690d-4fbc-b7d0-bf64867c4cb9",
- "CacheID": null,
- "IsMatch": false,
- "Matches": [],
- "Status": {
- "Code": 3000,
- "Description": "OK",
- "Exception": null
- }
-}
-
-Matching image https://moderatorsampleimages.blob.core.windows.net/samples/sample16.png against list 169642.
-Response:
-{
- "TrackingId": "WE_f0527c49616243c5ac65e1cc3482d390_ContentModerator.Preview_abd4a178-3238-4601-8e4f-cf9ee66f605a",
- "CacheID": null,
- "IsMatch": false,
- "Matches": [],
- "Status": {
- "Code": 3000,
- "Description": "OK",
- "Exception": null
- }
-}
-
-Deleting all images from list 169642.
-Response:
-"Reset Successful."
-
-Deleting list 169642.
-Response:
-""
-
-Getting all image list IDs.
-Response:
-[]
-```
-
-## Next steps
-
-Get the [Content Moderator .NET SDK](https://www.nuget.org/packages/Microsoft.Azure.CognitiveServices.ContentModerator/) and the [Visual Studio solution](https://github.com/Azure-Samples/cognitive-services-dotnet-sdk-samples/tree/master/ContentModerator) for this and other Content Moderator quickstarts for .NET, and get started on your integration.
cognitive-services Image Moderation Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Content-Moderator/image-moderation-api.md
- Title: Image Moderation - Content Moderator
-description: Use Content Moderator's machine-assisted image moderation to moderate images for adult and racy content.
- Previously updated : 10/27/2021
-# Learn image moderation concepts
-
-Use Content Moderator's machine-assisted image moderation to moderate images for adult and racy content. Scan images for text content and extract that text, and detect faces. You can match images against custom lists, and take further action.
-
-## Evaluating for adult and racy content
-
-The **Evaluate** operation returns a confidence score between 0 and 1, along with a Boolean value, for each category. Together these values predict whether the image contains potential adult or racy content. When you call the API with your image (file or URL), the response includes the following information:
-
-```json
-"ImageModeration": {
- .............
- "adultClassificationScore": 0.019196987152099609,
- "isImageAdultClassified": false,
- "racyClassificationScore": 0.032390203326940536,
- "isImageRacyClassified": false,
- ............
- },
-```
-
-> [!NOTE]
->
-> - `isImageAdultClassified` represents the potential presence of images that may be considered sexually explicit or adult in certain situations.
-> - `isImageRacyClassified` represents the potential presence of images that may be considered sexually suggestive or mature in certain situations.
-> - The scores are between 0 and 1. The higher the score, the higher the model is predicting that the category may be applicable. This preview relies on a statistical model rather than manually coded outcomes. We recommend testing with your own content to determine how each category aligns to your requirements.
-> - The boolean values are either true or false depending on the internal score thresholds. Customers should assess whether to use this value or decide on custom thresholds based on their content policies.
-
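If you call the operation from the .NET SDK instead of the REST API, a minimal sketch looks like the following. It assumes a `ContentModeratorClient` created as in the term-lists quickstart later in this document; `EvaluateUrlInput`, `BodyModel`, and the result property names are SDK details that may differ between SDK versions.

```csharp
// Minimal sketch: evaluate a publicly accessible image URL for adult and racy content.
// Assumes a ContentModeratorClient created as in the term-lists quickstart in this document.
// EvaluateUrlInput, BodyModel, and the result property names may differ between SDK versions.
static void EvaluateImage(ContentModeratorClient client, string imageUrl)
{
    var body = new BodyModel("URL", imageUrl);
    var result = client.ImageModeration.EvaluateUrlInput("application/json", body, true);

    Console.WriteLine("Adult score: {0}, flagged: {1}", result.AdultClassificationScore, result.IsImageAdultClassified);
    Console.WriteLine("Racy score: {0}, flagged: {1}", result.RacyClassificationScore, result.IsImageRacyClassified);
}
```
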
-## Detecting text with Optical Character Recognition (OCR)
-
-The **Optical Character Recognition (OCR)** operation predicts the presence of text content in an image and extracts it for text moderation, among other uses. You can specify the language. If you do not specify a language, the detection defaults to English.
-
-The response includes the following information:
-- The original text.
-- The detected text elements with their confidence scores.
-
-Example extract:
-
-```json
-"TextDetection": {
- "status": {
- "code": 3000.0,
- "description": "OK",
- "exception": null
- },
- .........
- "language": "eng",
- "text": "IF WE DID \r\nALL \r\nTHE THINGS \r\nWE ARE \r\nCAPABLE \r\nOF DOING, \r\nWE WOULD \r\nLITERALLY \r\nASTOUND \r\nOURSELVE \r\n",
- "candidates": []
-},
-```
-
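As a rough .NET SDK counterpart to the response above, the following sketch runs OCR on an image URL. It assumes the same client wrapper as the term-lists quickstart in this document; `OCRUrlInput` and the OCR result properties may vary by SDK version.

```csharp
// Minimal sketch: run OCR on an image URL and print the extracted text.
// Assumes a ContentModeratorClient as created in the term-lists quickstart in this document.
// OCRUrlInput and the OCR result properties may differ between SDK versions.
static void DetectText(ContentModeratorClient client, string imageUrl)
{
    var body = new BodyModel("URL", imageUrl);
    var ocr = client.ImageModeration.OCRUrlInput("eng", "application/json", body, true);

    Console.WriteLine("Detected language: {0}", ocr.Language);
    Console.WriteLine("Detected text: {0}", ocr.Text);
}
```
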
-## Detecting faces
-
-Face detection helps you find personal data, such as faces, in images. The operation returns the potential faces in each image and the number of faces found.
-
-A response includes this information:
-
-- Faces count
-- List of locations of faces detected
-
-Example extract:
-
-```json
-"FaceDetection": {
- ......
- "result": true,
- "count": 2,
- "advancedInfo": [
- .....
- ],
- "faces": [
- {
- "bottom": 598,
- "left": 44,
- "right": 268,
- "top": 374
- },
- {
- "bottom": 620,
- "left": 308,
- "right": 532,
- "top": 396
- }
- ]
-}
-```
-
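A hedged .NET SDK sketch of the same call follows; `FindFacesUrlInput` and the `FoundFaces` property names are assumptions based on the SDK and may differ between versions.

```csharp
// Minimal sketch: detect faces in an image URL and print their bounding boxes.
// Assumes a ContentModeratorClient as created in the term-lists quickstart in this document.
// FindFacesUrlInput and the FoundFaces/Face property names may differ between SDK versions.
static void DetectFaces(ContentModeratorClient client, string imageUrl)
{
    var body = new BodyModel("URL", imageUrl);
    var found = client.ImageModeration.FindFacesUrlInput("application/json", body, true);

    Console.WriteLine("Face count: {0}", found.Count);
    foreach (var face in found.Faces)
    {
        Console.WriteLine("Face at left {0}, top {1}, right {2}, bottom {3}", face.Left, face.Top, face.Right, face.Bottom);
    }
}
```
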
-## Creating and managing custom lists
-
-In many online communities, after users upload images or other types of content, offensive items may get shared multiple times over the following days, weeks, and months. Repeatedly scanning and filtering out the same image, or even slightly modified versions of it, from multiple places is expensive and error-prone.
-
-Instead of moderating the same image multiple times, you add the offensive images to your custom list of blocked content. That way, your content moderation system compares incoming images against your custom lists and stops any further processing.
-
-> [!NOTE]
-> There is a maximum limit of **5 image lists**, and each list must **not exceed 10,000 images**.
->
-
-The Content Moderator provides a complete [Image List Management API](try-image-list-api.md) with operations for managing lists of custom images. Start with the [Image Lists API Console](try-image-list-api.md) and use the REST API code samples. Also check out the [Image List .NET quickstart](image-lists-quickstart-dotnet.md) if you are familiar with Visual Studio and C#.
-
-## Matching against your custom lists
-
-The Match operation allows fuzzy matching of incoming images against any of your custom lists, created and managed using the List operations.
-
-If a match is found, the operation returns the identifier and the moderation tags of the matched image. The response includes this information:
-
-- Match score (between 0 and 1)
-- Matched image
-- Image tags (assigned during previous moderation)
-- Image labels
-
-Example extract:
-
-```json
-{
- ..............,
- "IsMatch": true,
- "Matches": [
- {
- "Score": 1.0,
- "MatchId": 169490,
- "Source": "169642",
- "Tags": [],
- "Label": "Sports"
- }
- ],
- ....
-}
-```
--
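To reproduce a response like this from code, a minimal .NET SDK sketch follows. It assumes the same client wrapper as the term-lists quickstart in this document and a list ID created through the Image List Management API; `MatchUrlInput` and the `MatchResponse` property names may differ between SDK versions.

```csharp
// Minimal sketch: match an image URL against a custom image list.
// Assumes a ContentModeratorClient as created in the term-lists quickstart in this document,
// and a list ID returned by the Image List Management API. MatchUrlInput and the
// MatchResponse property names may differ between SDK versions.
static void MatchImage(ContentModeratorClient client, string imageUrl, string listId)
{
    var body = new BodyModel("URL", imageUrl);
    var match = client.ImageModeration.MatchUrlInput("application/json", body, listId, true);

    Console.WriteLine("IsMatch: {0}", match.IsMatch);
    foreach (var m in match.Matches)
    {
        Console.WriteLine("Matched image {0} (label: {1}, score: {2})", m.MatchId, m.Label, m.Score);
    }
}
```
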
-## Next steps
-
-Test drive the [Image Moderation API console](try-image-api.md) and use the REST API code samples.
cognitive-services Language Support https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Content-Moderator/language-support.md
- Title: Language support - Content Moderator API
-description: This is a list of natural languages that the Azure Cognitive Services Content Moderator API supports.
- Previously updated : 10/27/2021
-# Language support for Content Moderator API
-
-> [!NOTE]
-> For the **language** parameter, assign `eng` or leave it empty to see the machine-assisted **classification** response (preview feature). **This feature supports English only**.
->
-> For **profanity terms** detection, use the [ISO 639-3 code](http://www-01.sil.org/iso639-3/codes.asp) of the supported languages listed in this article, or leave it empty.
--
-| Language detection | Profanity | OCR | Auto-correction |
-| -- |-|--||
-| Arabic (Romanized) | Afrikaans | Arabic | Arabic |
-| Balinese | Albanian | Chinese (Simplified) | Danish |
-| Bengali | Amharic | Chinese (Traditional) | Dutch |
-| Buginese | Arabic | Czech | English |
-| Buhid | Armenian | Danish | Finnish |
-| Carian | Assamese | Dutch | French |
-| Chinese (Simplified) | Azerbaijani | English | Greek (modern) |
-| Chinese (Traditional) | Bangla - Bangladesh | Finnish | Italian |
-| Church (Slavic) | Bangla - India | French | Korean |
-| Coptic | Basque | German | Norwegian |
-| Czech | Belarusian | Greek (modern) | Polish |
-| Dhivehi | Bosnian - Cyrillic | Hungarian | Portuguese |
-| Dutch | Bosnian - Latin | Italian | Romanian |
-| English (Creole) | Breton [non-GeoPol] | Japanese | Russian |
-| Farsi | Bulgarian | Korean | Slovak |
-| French | Catalan | Norwegian | Spanish |
-| German | Central Kurdish | Polish | Turkish |
-| Greek | Cherokee | Portuguese | |
-| Haitian | Chinese (Simplified) | Romanian | |
-| Hebrew | Chinese (Traditional) - Hong Kong SAR | Russian | |
-| Hindi | Chinese (Traditional) - Taiwan | Serbian Cyrillic | |
-| Hmong | Croatian | Serbian Latin | |
-| Hungarian | Czech | Slovak | |
-| Italian | Danish | Spanish | |
-| Japanese | Dari | Swedish | |
-| Korean | Dutch | Turkish | |
-| Kurdish (Arabic) | English | | |
-| Kurdish (Latin) | Estonian | | |
-| Lepcha | Filipino | | |
-| Limbu | Finnish | | |
-| Lu | French | | |
-| Lycian | | | |
-| Lydian | Georgian | | |
-| Mycenaean (Greek) | German | | |
-| Nko | Greek | | |
-| Norwegian (Bokmal) | Gujarati | | |
-| Norwegian (Nynorsk) | Hausa | | |
-| Old (Persian) | Hebrew | | |
-| Pashto | Hindi | | |
-| Polish | Hungarian | | |
-| Portuguese | Icelandic | | |
-| Punjabi | Igbo | | |
-| Rejang | Indonesian | | |
-| Russian | Inuktitut | | |
-| Santali | Irish | | |
-| Sasak | isiXhosa | | |
-| Saurashtra | isiZulu | | |
-| Serbian (Cyrillic) | Italian | | |
-| Serbian (Latin) | Japanese | | |
-| Sinhala | Kannada | | |
-| Slovenian | Kazakh | | |
-| Spanish | Khmer | | |
-| Swedish | K'iche | | |
-| Sylheti | Kinyarwanda | | |
-| Syriac | Kiswahili | | |
-| Tagbanwa | Konkani | | |
-| Tai (Nua) | Korean | | |
-| Tamashek | Kyrgyz | | |
-| Turkish | Lao | | |
-| Ugaritic | Latvian | | |
-| Uzbek (Cyrillic) | Lithuanian | | |
-| Uzbek (Latin) | Luxembourgish | | |
-| Vai | Macedonian | | |
-| Yi | Malay | | |
-| Zhuang, Chuang | Malayalam | | |
-| | Maltese | | |
-| | Maori | | |
-| | Marathi | | |
-| | Mongolian | | |
-| | Nepali | | |
-| | Norwegian (Bokmål) | | |
-| | Norwegian (Nynorsk) | | |
-| | Odia | | |
-| | Pashto | | |
-| | Persian | | |
-| | Polish | | |
-| | Portuguese - Brazil | | |
-| | Portuguese - Portugal | | |
-| | Pulaar | | |
-| | Punjabi | | |
-| | Punjabi (Pakistan) | | |
-| | Quechua (Peru) | | |
-| | Romanian | | |
-| | Russian | | |
-| | Serbian (Cyrillic) | | |
-| | Serbian (Cyrillic, Bosnia and Herzegovina) | | |
-| | Serbian (Latin) | | |
-| | Sesotho | | |
-| | Sesotho sa Leboa | | |
-| | Setswana | | |
-| | Sindhi | | |
-| | Sinhala | | |
-| | Slovak | | |
-| | Slovenian | | |
-| | Spanish | | |
-| | Swedish | | |
-| | Tajik | | |
-| | Tamil | | |
-| | Tatar | | |
-| | Telugu | | |
-| | Thai | | |
-| | Tigrinya | | |
-| | Turkish | | |
-| | Turkmen | | |
-| | Ukrainian | | |
-| | Urdu | | |
-| | Uyghur | | |
-| | Uzbek | | |
-| | Valencian | | |
-| | Vietnamese | | |
-| | Wolof | | |
-| | Yoruba | | |
cognitive-services Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Content-Moderator/overview.md
- Title: What is Azure Content Moderator?
-description: Learn how to use Content Moderator to track, flag, assess, and filter inappropriate material in user-generated content.
- Previously updated : 11/06/2021
-keywords: content moderator, azure content moderator, online moderator, content filtering software, content moderation service, content moderation
-
-#Customer intent: As a developer of content management software, I want to find out whether Azure Content Moderator is the right solution for my moderation needs.
--
-# What is Azure Content Moderator?
--
-Azure Content Moderator is an AI service that lets you handle content that is potentially offensive, risky, or otherwise undesirable. It includes the AI-powered content moderation service, which scans text, images, and videos and applies content flags automatically.
-
-You may want to build content filtering software into your app to comply with regulations or maintain the intended environment for your users.
-
-This documentation contains the following article types:
-
-* [**Quickstarts**](client-libraries.md) are getting-started instructions to guide you through making requests to the service.
-* [**How-to guides**](try-text-api.md) contain instructions for using the service in more specific or customized ways.
-* [**Concepts**](text-moderation-api.md) provide in-depth explanations of the service functionality and features.
-* [**Tutorials**](ecommerce-retail-catalog-moderation.md) are longer guides that show you how to use the service as a component in broader business solutions.
-
-For a more structured approach, follow a Training module for Content Moderator.
-* [Introduction to Content Moderator](/training/modules/intro-to-content-moderator/)
-* [Classify and moderate text with Azure Content Moderator](/training/modules/classify-and-moderate-text-with-azure-content-moderator/)
-
-## Where it's used
-
-The following are a few scenarios in which a software developer or team would require a content moderation service:
-
-- Online marketplaces that moderate product catalogs and other user-generated content.
-- Gaming companies that moderate user-generated game artifacts and chat rooms.
-- Social messaging platforms that moderate images, text, and videos added by their users.
-- Enterprise media companies that implement centralized moderation for their content.
-- K-12 education solution providers filtering out content that is inappropriate for students and educators.
-
-> [!IMPORTANT]
-> You cannot use Content Moderator to detect illegal child exploitation images. However, qualified organizations can use the [PhotoDNA Cloud Service](https://www.microsoft.com/photodna "Microsoft PhotoDNA Cloud Service") to screen for this type of content.
-
-## What it includes
-
-The Content Moderator service consists of several web service APIs available through both REST calls and a .NET SDK.
-
-## Moderation APIs
-
-The Content Moderator service includes Moderation APIs, which check content for material that is potentially inappropriate or objectionable.
-
-![block diagram for Content Moderator moderation APIs](images/content-moderator-mod-api.png)
-
-The following table describes the different types of moderation APIs.
-
-| API group | Description |
-| | -- |
-|[**Text moderation**](text-moderation-api.md)| Scans text for offensive content, sexually explicit or suggestive content, profanity, and personal data.|
-|[**Custom term lists**](try-terms-list-api.md)| Scans text against a custom list of terms along with the built-in terms. Use custom lists to block or allow content according to your own content policies.|
-|[**Image moderation**](image-moderation-api.md)| Scans images for adult or racy content, detects text in images with the Optical Character Recognition (OCR) capability, and detects faces.|
-|[**Custom image lists**](try-image-list-api.md)| Scans images against a custom list of images. Use custom image lists to filter out instances of commonly recurring content that you don't want to classify again.|
-|[**Video moderation**](video-moderation-api.md)| Scans videos for adult or racy content and returns time markers for said content.|
-
-## Data privacy and security
-
-As with all of the Cognitive Services, developers using the Content Moderator service should be aware of Microsoft's policies on customer data. See the [Cognitive Services page](https://www.microsoft.com/trustcenter/cloudservices/cognitiveservices) on the Microsoft Trust Center to learn more.
-
-## Next steps
-
-To get started using Content Moderator on the web portal, follow [Try Content Moderator on the web](quick-start.md). Or, complete a [client library or REST API quickstart](client-libraries.md) to implement the basic scenarios in code.
cognitive-services Samples Dotnet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Content-Moderator/samples-dotnet.md
- Title: Code samples - Content Moderator, .NET
-description: Learn how to use Azure Cognitive Services Content Moderator in your .NET applications through the SDK.
- Previously updated : 10/27/2021
-# Content Moderator .NET SDK samples
-
-The following list includes links to the code samples built using the Azure Content Moderator SDK for .NET.
-
-- **Image moderation**: [Evaluate an image for adult and racy content, text, and faces](https://github.com/Azure-Samples/cognitive-services-dotnet-sdk-samples/blob/master/ContentModerator/ImageModeration/Program.cs). See the [.NET SDK quickstart](./client-libraries.md?pivots=programming-language-csharp%253fpivots%253dprogramming-language-csharp).
-- **Custom images**: [Moderate with custom image lists](https://github.com/Azure-Samples/cognitive-services-dotnet-sdk-samples/blob/master/ContentModerator/ImageListManagement/Program.cs). See the [.NET SDK quickstart](./client-libraries.md?pivots=programming-language-csharp%253fpivots%253dprogramming-language-csharp).
-
-> [!NOTE]
-> There is a maximum limit of **5 image lists**, and each list must **not exceed 10,000 images**.
->
-
-- **Text moderation**: [Screen text for profanity and personal data](https://github.com/Azure-Samples/cognitive-services-dotnet-sdk-samples/blob/master/ContentModerator/TextModeration/Program.cs). See the [.NET SDK quickstart](./client-libraries.md?pivots=programming-language-csharp%253fpivots%253dprogramming-language-csharp).
-- **Custom terms**: [Moderate with custom term lists](https://github.com/Azure-Samples/cognitive-services-dotnet-sdk-samples/blob/master/ContentModerator/TermListManagement/Program.cs). See the [.NET SDK quickstart](./client-libraries.md?pivots=programming-language-csharp%253fpivots%253dprogramming-language-csharp).
-
-> [!NOTE]
-> There is a maximum limit of **5 term lists**, and each list must **not exceed 10,000 terms**.
->
-
-- **Video moderation**: [Scan a video for adult and racy content and get results](https://github.com/Azure-Samples/cognitive-services-dotnet-sdk-samples/blob/master/ContentModerator/VideoModeration/Program.cs). See [quickstart](video-moderation-api.md).
-
-See all .NET samples at the [Content Moderator .NET samples on GitHub](https://github.com/Azure-Samples/cognitive-services-dotnet-sdk-samples/tree/master/ContentModerator).
cognitive-services Samples Rest https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Content-Moderator/samples-rest.md
- Title: Code samples - Content Moderator, C#
-description: Use Azure Cognitive Services Content Moderator feature based samples in your applications through REST API calls.
- Previously updated : 01/10/2019
-# Content Moderator REST samples in C#
-
-The following list includes links to code samples built using the Azure Content Moderator API.
-
-- [Image moderation](https://github.com/MicrosoftContentModerator/ContentModerator-API-Samples/tree/master/ImageModeration)
-- [Text moderation](https://github.com/MicrosoftContentModerator/ContentModerator-API-Samples/tree/master/TextModeration)
-- [Video moderation](https://github.com/MicrosoftContentModerator/ContentModerator-API-Samples/tree/master/VideoModeration)
-
-For walkthroughs of these samples, check out the [on-demand webinar](https://info.microsoft.com/cognitive-services-content-moderator-ondemand.html).
cognitive-services Term Lists Quickstart Dotnet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Content-Moderator/term-lists-quickstart-dotnet.md
- Title: "Check text against a custom term list in C# - Content Moderator"-
-description: How to moderate text with custom term lists using the Content Moderator SDK for C#.
------- Previously updated : 10/24/2019--
-#Customer intent: As a C# developer of content-providing software, I want to analyze text content for terms that are particular to my product, so that I can categorize and handle it accordingly.
--
-# Check text against a custom term list in C#
-
-The default global list of terms in Azure Content Moderator is sufficient for most content moderation needs. However, you might need to screen for terms that are specific to your organization.
-
-You can use the [Content Moderator SDK for .NET](https://www.nuget.org/packages/Microsoft.Azure.CognitiveServices.ContentModerator/) to create custom lists of terms to use with the Text Moderation API.
-
-This article provides information and code samples to help you get started using
-the Content Moderator SDK for .NET to:
-- Create a list.
-- Add terms to a list.
-- Screen terms against the terms in a list.
-- Delete terms from a list.
-- Delete a list.
-- Edit list information.
-- Refresh the index so that changes to the list are included in a new scan.
-
-If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/cognitive-services/) before you begin.
-
-## Sign up for Content Moderator services
-
-Before you can use Content Moderator services through the REST API or the SDK, you'll need a subscription key. Subscribe to Content Moderator service in the [Azure portal](https://portal.azure.com/#create/Microsoft.CognitiveServicesContentModerator) to obtain one.
-
-## Create your Visual Studio project
-
-1. Add a new **Console app (.NET Framework)** project to your solution.
-
-1. Name the project **TermLists**. Select this project as the single startup project for the solution.
-
-### Install required packages
-
-Install the following NuGet packages for the TermLists project:
-
-- Microsoft.Azure.CognitiveServices.ContentModerator
-- Microsoft.Rest.ClientRuntime
-- Microsoft.Rest.ClientRuntime.Azure
-- Newtonsoft.Json
-
-### Update the program's using statements
-
-Add the following `using` statements.
-
-```csharp
-using Microsoft.Azure.CognitiveServices.ContentModerator;
-using Microsoft.Azure.CognitiveServices.ContentModerator.Models;
-using Newtonsoft.Json;
-using System;
-using System.Collections.Generic;
-using System.IO;
-using System.Threading;
-```
-
-### Create the Content Moderator client
-
-Add the following code to create a Content Moderator client for your subscription. Update the `AzureEndpoint` and `CMSubscriptionKey` fields with the values of your endpoint URL and subscription key. You can find these in the **Quick start** tab of your resource in the Azure portal.
-
-```csharp
-/// <summary>
-/// Wraps the creation and configuration of a Content Moderator client.
-/// </summary>
-/// <remarks>This class library contains insecure code. If you adapt this
-/// code for use in production, use a secure method of storing and using
-/// your Content Moderator subscription key.</remarks>
-public static class Clients
-{
- /// <summary>
- /// The base URL fragment for Content Moderator calls.
- /// </summary>
- private static readonly string AzureEndpoint = "YOUR ENDPOINT URL";
-
- /// <summary>
- /// Your Content Moderator subscription key.
- /// </summary>
- private static readonly string CMSubscriptionKey = "YOUR API KEY";
-
- /// <summary>
- /// Returns a new Content Moderator client for your subscription.
- /// </summary>
- /// <returns>The new client.</returns>
- /// <remarks>The <see cref="ContentModeratorClient"/> is disposable.
- /// When you have finished using the client,
- /// you should dispose of it either directly or indirectly. </remarks>
- public static ContentModeratorClient NewClient()
- {
- // Create and initialize an instance of the Content Moderator API wrapper.
- ContentModeratorClient client = new ContentModeratorClient(new ApiKeyServiceClientCredentials(CMSubscriptionKey));
-
- client.Endpoint = AzureEndpoint;
- return client;
- }
-}
-```
-
-> [!IMPORTANT]
-> Remember to remove the key from your code when you're done, and never post it publicly. For production, use a secure way of storing and accessing your credentials like [Azure Key Vault](../../key-vault/general/overview.md). See the Cognitive Services [security](../cognitive-services-security.md) article for more information.
-
-### Add private properties
-
-Add the following private properties to namespace TermLists, class Program.
-
-```csharp
-/// <summary>
-/// The language of the terms in the term lists.
-/// </summary>
-private const string lang = "eng";
-
-/// <summary>
-/// The minimum amount of time, in milliseconds, to wait between calls
-/// to the Content Moderator APIs.
-/// </summary>
-private const int throttleRate = 3000;
-
-/// <summary>
-/// The number of minutes to delay after updating the search index before
-/// screening text against the term list.
-/// </summary>
-private const double latencyDelay = 0.5;
-```
-
-## Create a term list
-
-You create a term list with **ContentModeratorClient.ListManagementTermLists.Create**. The first parameter to **Create** is a string that contains a MIME type, which should be "application/json". For more information, see the [API reference](https://westus2.dev.cognitive.microsoft.com/docs/services/57cf755e3f9b070c105bd2c2/operations/57cf755e3f9b070868a1f67f). The second parameter is a **Body** object that contains a name and description for the new term list.
-
-> [!NOTE]
-> There is a maximum limit of **5 term lists**, and each list must **not exceed 10,000 terms**.
-
-Add the following method definition to namespace TermLists, class Program.
-
-> [!NOTE]
-> Your Content Moderator service key has a requests-per-second (RPS)
-> rate limit, and if you exceed the limit, the SDK throws an exception with a 429 error code. A free tier key has a one-RPS rate limit.
-
-```csharp
-/// <summary>
-/// Creates a new term list.
-/// </summary>
-/// <param name="client">The Content Moderator client.</param>
-/// <returns>The term list ID.</returns>
-static string CreateTermList (ContentModeratorClient client)
-{
- Console.WriteLine("Creating term list.");
-
- Body body = new Body("Term list name", "Term list description");
- TermList list = client.ListManagementTermLists.Create("application/json", body);
- if (false == list.Id.HasValue)
- {
- throw new Exception("TermList.Id value missing.");
- }
- else
- {
- string list_id = list.Id.Value.ToString();
- Console.WriteLine("Term list created. ID: {0}.", list_id);
- Thread.Sleep(throttleRate);
- return list_id;
- }
-}
-```
-
-## Update term list name and description
-
-You update the term list information with **ContentModeratorClient.ListManagementTermLists.Update**. The first parameter to **Update** is the term list ID. The second parameter is a MIME type, which should be "application/json". For more information, see the [API reference](https://westus2.dev.cognitive.microsoft.com/docs/services/57cf755e3f9b070c105bd2c2/operations/57cf755e3f9b070868a1f685). The third parameter is a **Body** object, which contains the new name and description.
-
-Add the following method definition to namespace TermLists, class Program.
-
-```csharp
-/// <summary>
-/// Update the information for the indicated term list.
-/// </summary>
-/// <param name="client">The Content Moderator client.</param>
-/// <param name="list_id">The ID of the term list to update.</param>
-/// <param name="name">The new name for the term list.</param>
-/// <param name="description">The new description for the term list.</param>
-static void UpdateTermList (ContentModeratorClient client, string list_id, string name = null, string description = null)
-{
- Console.WriteLine("Updating information for term list with ID {0}.", list_id);
- Body body = new Body(name, description);
- client.ListManagementTermLists.Update(list_id, "application/json", body);
- Thread.Sleep(throttleRate);
-}
-```
-
-## Add a term to a term list
-
-Add the following method definition to namespace TermLists, class Program.
-
-```csharp
-/// <summary>
-/// Add a term to the indicated term list.
-/// </summary>
-/// <param name="client">The Content Moderator client.</param>
-/// <param name="list_id">The ID of the term list to update.</param>
-/// <param name="term">The term to add to the term list.</param>
-static void AddTerm (ContentModeratorClient client, string list_id, string term)
-{
- Console.WriteLine("Adding term \"{0}\" to term list with ID {1}.", term, list_id);
- client.ListManagementTerm.AddTerm(list_id, term, lang);
- Thread.Sleep(throttleRate);
-}
-```
-
-## Get all terms in a term list
-
-Add the following method definition to namespace TermLists, class Program.
-
-```csharp
-/// <summary>
-/// Get all terms in the indicated term list.
-/// </summary>
-/// <param name="client">The Content Moderator client.</param>
-/// <param name="list_id">The ID of the term list from which to get all terms.</param>
-static void GetAllTerms(ContentModeratorClient client, string list_id)
-{
- Console.WriteLine("Getting terms in term list with ID {0}.", list_id);
- Terms terms = client.ListManagementTerm.GetAllTerms(list_id, lang);
- TermsData data = terms.Data;
- foreach (TermsInList term in data.Terms)
- {
- Console.WriteLine(term.Term);
- }
- Thread.Sleep(throttleRate);
-}
-```
-
-## Add code to refresh the search index
-
-After you make changes to a term list, you refresh its search index for the changes to be included the next time you use the term list to screen text. This is similar to how a search engine on your desktop (if enabled) or a web search engine continually refreshes its index to include new files or pages.
-
-You refresh a term list search index with **ContentModeratorClient.ListManagementTermLists.RefreshIndexMethod**.
-
-Add the following method definition to namespace TermLists, class Program.
-
-```csharp
-/// <summary>
-/// Refresh the search index for the indicated term list.
-/// </summary>
-/// <param name="client">The Content Moderator client.</param>
-/// <param name="list_id">The ID of the term list to refresh.</param>
-static void RefreshSearchIndex (ContentModeratorClient client, string list_id)
-{
- Console.WriteLine("Refreshing search index for term list with ID {0}.", list_id);
- client.ListManagementTermLists.RefreshIndexMethod(list_id, lang);
- Thread.Sleep((int)(latencyDelay * 60 * 1000));
-}
-```
-
-## Screen text using a term list
-
-You screen text using a term list with **ContentModeratorClient.TextModeration.ScreenText**, which takes the following parameters.
-
-- The language of the terms in the term list.
-- A MIME type, which can be "text/html", "text/xml", "text/markdown", or "text/plain".
-- The text to screen.
-- A boolean value. Set this field to **true** to autocorrect the text before screening it.
-- A boolean value. Set this field to **true** to detect personal data in the text.
-- The term list ID.
-
-For more information, see the [API reference](https://westus2.dev.cognitive.microsoft.com/docs/services/57cf753a3f9b070c105bd2c1/operations/57cf753a3f9b070868a1f66f).
-
-**ScreenText** returns a **Screen** object, which has a **Terms** property that lists any terms that Content Moderator detected in the screening. Note that if Content Moderator did not detect any terms during the screening, the **Terms** property has value **null**.
-
-Add the following method definition to namespace TermLists, class Program.
-
-```csharp
-/// <summary>
-/// Screen the indicated text for terms in the indicated term list.
-/// </summary>
-/// <param name="client">The Content Moderator client.</param>
-/// <param name="list_id">The ID of the term list to use to screen the text.</param>
-/// <param name="text">The text to screen.</param>
-static void ScreenText (ContentModeratorClient client, string list_id, string text)
-{
- Console.WriteLine("Screening text: \"{0}\" using term list with ID {1}.", text, list_id);
- Screen screen = client.TextModeration.ScreenText(lang, "text/plain", text, false, false, list_id);
- if (null == screen.Terms)
- {
- Console.WriteLine("No terms from the term list were detected in the text.");
- }
- else
- {
- foreach (DetectedTerms term in screen.Terms)
- {
- Console.WriteLine(String.Format("Found term: \"{0}\" from list ID {1} at index {2}.", term.Term, term.ListId, term.Index));
- }
- }
- Thread.Sleep(throttleRate);
-}
-```
-
-## Delete terms and lists
-
-Deleting a term or a list is straightforward. You use the SDK to do the following tasks:
-
-- Delete a term. (**ContentModeratorClient.ListManagementTerm.DeleteTerm**)
-- Delete all the terms in a list without deleting the list. (**ContentModeratorClient.ListManagementTerm.DeleteAllTerms**)
-- Delete a list and all of its contents. (**ContentModeratorClient.ListManagementTermLists.Delete**)
-
-### Delete a term
-
-Add the following method definition to namespace TermLists, class Program.
-
-```csharp
-/// <summary>
-/// Delete a term from the indicated term list.
-/// </summary>
-/// <param name="client">The Content Moderator client.</param>
-/// <param name="list_id">The ID of the term list from which to delete the term.</param>
-/// <param name="term">The term to delete.</param>
-static void DeleteTerm (ContentModeratorClient client, string list_id, string term)
-{
- Console.WriteLine("Removed term \"{0}\" from term list with ID {1}.", term, list_id);
- client.ListManagementTerm.DeleteTerm(list_id, term, lang);
- Thread.Sleep(throttleRate);
-}
-```
-
-### Delete all terms in a term list
-
-Add the following method definition to namespace TermLists, class Program.
-
-```csharp
-/// <summary>
-/// Delete all terms from the indicated term list.
-/// </summary>
-/// <param name="client">The Content Moderator client.</param>
-/// <param name="list_id">The ID of the term list from which to delete all terms.</param>
-static void DeleteAllTerms (ContentModeratorClient client, string list_id)
-{
- Console.WriteLine("Removing all terms from term list with ID {0}.", list_id);
- client.ListManagementTerm.DeleteAllTerms(list_id, lang);
- Thread.Sleep(throttleRate);
-}
-```
-
-### Delete a term list
-
-Add the following method definition to namespace TermLists, class Program.
-
-```csharp
-/// <summary>
-/// Delete the indicated term list.
-/// </summary>
-/// <param name="client">The Content Moderator client.</param>
-/// <param name="list_id">The ID of the term list to delete.</param>
-static void DeleteTermList (ContentModeratorClient client, string list_id)
-{
- Console.WriteLine("Deleting term list with ID {0}.", list_id);
- client.ListManagementTermLists.Delete(list_id);
- Thread.Sleep(throttleRate);
-}
-```
-
-## Compose the Main method
-
-Add the **Main** method definition to namespace **TermLists**, class **Program**. Finally, close the **Program** class and the **TermLists** namespace.
-
-```csharp
-static void Main(string[] args)
-{
- using (var client = Clients.NewClient())
- {
- string list_id = CreateTermList(client);
-
- UpdateTermList(client, list_id, "name", "description");
- AddTerm(client, list_id, "term1");
- AddTerm(client, list_id, "term2");
-
- GetAllTerms(client, list_id);
-
- // Always remember to refresh the search index of your list
- RefreshSearchIndex(client, list_id);
-
- string text = "This text contains the terms \"term1\" and \"term2\".";
- ScreenText(client, list_id, text);
-
- DeleteTerm(client, list_id, "term1");
-
- // Always remember to refresh the search index of your list
- RefreshSearchIndex(client, list_id);
-
- text = "This text contains the terms \"term1\" and \"term2\".";
- ScreenText(client, list_id, text);
-
- DeleteAllTerms(client, list_id);
- DeleteTermList(client, list_id);
-
- Console.WriteLine("Press ENTER to close the application.");
- Console.ReadLine();
- }
-}
-```
-
-## Run the application to see the output
-
-Your console output will look like the following:
-
-```console
-Creating term list.
-Term list created. ID: 252.
-Updating information for term list with ID 252.
-
-Adding term "term1" to term list with ID 252.
-Adding term "term2" to term list with ID 252.
-
-Getting terms in term list with ID 252.
-term1
-term2
-
-Refreshing search index for term list with ID 252.
-
-Screening text: "This text contains the terms "term1" and "term2"." using term list with ID 252.
-Found term: "term1" from list ID 252 at index 32.
-Found term: "term2" from list ID 252 at index 46.
-
-Removed term "term1" from term list with ID 252.
-
-Refreshing search index for term list with ID 252.
-
-Screening text: "This text contains the terms "term1" and "term2"." using term list with ID 252.
-Found term: "term2" from list ID 252 at index 46.
-
-Removing all terms from term list with ID 252.
-Deleting term list with ID 252.
-Press ENTER to close the application.
-```
-
-## Next steps
-
-Get the [Content Moderator .NET SDK](https://www.nuget.org/packages/Microsoft.Azure.CognitiveServices.ContentModerator/) and the [Visual Studio solution](https://github.com/Azure-Samples/cognitive-services-dotnet-sdk-samples/tree/master/ContentModerator) for this and other Content Moderator quickstarts for .NET, and get started on your integration.
cognitive-services Text Moderation Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Content-Moderator/text-moderation-api.md
- Title: Text Moderation - Content Moderator
-description: Use text moderation for possible unwanted text, personal data, and custom lists of terms.
- Previously updated : 10/27/2021
-# Learn text moderation concepts
-
-Use Content Moderator's text moderation models to analyze text content, such as chat rooms, discussion boards, chatbots, e-commerce catalogs, and documents.
-
-The service response includes the following information:
-
-- Profanity: term-based matching with built-in list of profane terms in various languages
-- Classification: machine-assisted classification into three categories
-- Personal data
-- Auto-corrected text
-- Original text
-- Language
-
-## Profanity
-
-If the API detects any profane terms in any of the [supported languages](./language-support.md), those terms are included in the response. The response also contains their location (`Index`) in the original text. The `ListId` in the following sample JSON refers to terms found in [custom term lists](try-terms-list-api.md) if available.
-
-```json
-"Terms": [
- {
- "Index": 118,
- "OriginalIndex": 118,
- "ListId": 0,
- "Term": "<offensive word>"
- }
-```
-
-> [!NOTE]
-> For the **language** parameter, assign `eng` or leave it empty to see the machine-assisted **classification** response (preview feature). **This feature supports English only**.
->
-> For **profanity terms** detection, use the [ISO 639-3 code](http://www-01.sil.org/iso639-3/codes.asp) of the supported languages listed in this article, or leave it empty.
-
-## Classification
-
-Content Moderator's machine-assisted **text classification feature** supports **English only** and helps detect potentially undesired content. The flagged content may be assessed as inappropriate depending on context. The feature conveys the likelihood of each category and uses a trained model to identify possibly abusive, derogatory, or discriminatory language, including slang, abbreviated words, and offensive or intentionally misspelled words.
-
-The following JSON extract shows an example output:
-
-```json
-"Classification": {
- "ReviewRecommended": true,
- "Category1": {
- "Score": 1.5113095059859916E-06
- },
- "Category2": {
- "Score": 0.12747249007225037
- },
- "Category3": {
- "Score": 0.98799997568130493
- }
-}
-```
-
-### Explanation
-
-- `Category1` refers to potential presence of language that may be considered sexually explicit or adult in certain situations.
-- `Category2` refers to potential presence of language that may be considered sexually suggestive or mature in certain situations.
-- `Category3` refers to potential presence of language that may be considered offensive in certain situations.
-- `Score` is between 0 and 1. The higher the score, the higher the model is predicting that the category may be applicable. This feature relies on a statistical model rather than manually coded outcomes. We recommend testing with your own content to determine how each category aligns to your requirements.
-- `ReviewRecommended` is either true or false depending on the internal score thresholds. Customers should assess whether to use this value or decide on custom thresholds based on their content policies.
-
-## Personal data
-
-The personal data feature detects the potential presence of this information:
-
-- Email address
-- US mailing address
-- IP address
-- US phone number
-
-The following example shows a sample response:
-
-```json
-"pii":{
- "email":[
- {
- "detected":"abcdef@abcd.com",
- "sub_type":"Regular",
- "text":"abcdef@abcd.com",
- "index":32
- }
- ],
- "ssn":[
-
- ],
- "ipa":[
- {
- "sub_type":"IPV4",
- "text":"255.255.255.255",
- "index":72
- }
- ],
- "phone":[
- {
- "country_code":"US",
- "text":"6657789887",
- "index":56
- }
- ],
- "address":[
- {
- "text":"1 Microsoft Way, Redmond, WA 98052",
- "index":89
- }
- ]
-}
-```
-
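The following sketch shows one way to request and read these fields from the .NET SDK, reusing the `ScreenText` call pattern from the term-lists quickstart later in this document. The `PII` and `Classification` property names, and whether your SDK version exposes a separate classify option, are assumptions; verify them against the SDK you install.

```csharp
// Minimal sketch: screen a string with autocorrect and personal data detection enabled,
// then read back the detected personal data. Assumes a ContentModeratorClient as created
// in the term-lists quickstart in this document; the Screen.PII and Screen.Classification
// property names may differ between SDK versions, and classification results are only
// returned when the classify option is enabled on the Text - Screen operation.
static void ScreenAndInspect(ContentModeratorClient client, string text)
{
    Screen screen = client.TextModeration.ScreenText("eng", "text/plain", text, true, true, null);

    if (screen.PII != null && screen.PII.Email != null)
    {
        foreach (var email in screen.PII.Email)
        {
            Console.WriteLine("Detected email: {0} at index {1}", email.Text, email.Index);
        }
    }

    if (screen.Classification != null)
    {
        Console.WriteLine("ReviewRecommended: {0}", screen.Classification.ReviewRecommended);
    }
}
```
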
-## Auto-correction
-
-The text moderation response can optionally return the text with basic auto-correction applied.
-
-For example, the following input text has a misspelling.
-
-> The quick brown fox jumps over the lazzy dog.
-
-If you specify auto-correction, the response contains the corrected version of the text:
-
-> The quick brown fox jumps over the lazy dog.
-
-## Creating and managing your custom lists of terms
-
-While the default, global list of terms works great for most cases, you may want to screen against terms that are specific to your business needs. For example, you may want to filter out any competitive brand names from posts by users.
-
-> [!NOTE]
-> There is a maximum limit of **5 term lists**, and each list must **not exceed 10,000 terms**.
->
-
-The following example shows the matching List ID:
-
-```json
-"Terms": [
- {
- "Index": 118,
- "OriginalIndex": 118,
- "ListId": 231.
- "Term": "<offensive word>"
- }
-```
-
-The Content Moderator provides a [Term List API](https://westus.dev.cognitive.microsoft.com/docs/services/57cf755e3f9b070c105bd2c2/operations/57cf755e3f9b070868a1f67f) with operations for managing custom term lists. Start with the [Term Lists API Console](try-terms-list-api.md) and use the REST API code samples. Also check out the [Term Lists .NET quickstart](term-lists-quickstart-dotnet.md) if you are familiar with Visual Studio and C#.
-
-## Next steps
-
-Test out the APIs with the [Text moderation API console](try-text-api.md).
cognitive-services Try Image Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Content-Moderator/try-image-api.md
- Title: Moderate images with the API Console - Content Moderator
-description: Use the Image Moderation API in Azure Content Moderator to scan image content.
- Previously updated : 01/10/2019
-# Moderate images from the API console
-
-Use the [Image Moderation API](https://westus.dev.cognitive.microsoft.com/docs/services/57cf753a3f9b070c105bd2c1/operations/57cf753a3f9b070868a1f66c) in Azure Content Moderator to scan image content. The moderation operation evaluates images for adult and racy content, detects faces, extracts text, and can compare images against custom lists.
-
-## Use the API console
-Before you can test-drive the API in the online console, you need your subscription key. This is located on the **Settings** tab, in the **Ocp-Apim-Subscription-Key** box. For more information, see [Overview](overview.md).
-
-1. Go to [Image Moderation API reference](https://westus.dev.cognitive.microsoft.com/docs/services/57cf753a3f9b070c105bd2c1/operations/57cf753a3f9b070868a1f66c).
-
- The **Image - Evaluate** image moderation page opens.
-
-2. For **Open API testing console**, select the region that most closely describes your location.
-
- ![Try Image - Evaluate page region selection](images/test-drive-region.png)
-
- The **Image - Evaluate** API console opens.
-
-3. In the **Ocp-Apim-Subscription-Key** box, enter your subscription key.
-
- ![Try Image - Evaluate console subscription key](images/try-image-api-1.PNG)
-
-4. In the **Request body** box, use the default sample image, or specify an image to scan. You can submit the image itself as binary bit data, or specify a publicly accessible URL for an image.
-
- For this example, use the path provided in the **Request body** box, and then select **Send**.
-
- ![Try Image - Evaluate console Request body](images/try-image-api-2.PNG)
-
- This is the image at that URL:
-
- ![Try Image - Evaluate console sample image](images/sample-image.jpg)
-
-5. Select **Send**.
-
-6. The API returns a probability score for each classification. It also returns a determination of whether the image meets the conditions (**true** or **false**).
-
- ![Try Image - Evaluate console probability score and condition determination](images/try-image-api-3.PNG)
-
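If you prefer to make the same call from code rather than from the console, here is a minimal C# sketch using `HttpClient`. The host, path, header, and body shape mirror the request the API testing console generates; confirm them against the console for your region, and treat the sample values as placeholders.

```csharp
// Minimal sketch: call Image - Evaluate directly with HttpClient instead of the console.
// The host, path, header name, and body shape mirror the request the API testing console
// generates; verify them against the console for your region before relying on this.
using System;
using System.Net.Http;
using System.Text;
using System.Threading.Tasks;

class EvaluateImageSample
{
    static async Task Main()
    {
        const string endpoint = "https://westus.api.cognitive.microsoft.com"; // use your region
        const string subscriptionKey = "YOUR API KEY"; // from the Settings tab

        using var client = new HttpClient();
        client.DefaultRequestHeaders.Add("Ocp-Apim-Subscription-Key", subscriptionKey);

        string body = "{\"DataRepresentation\":\"URL\",\"Value\":\"https://moderatorsampleimages.blob.core.windows.net/samples/sample1.jpg\"}";
        using var content = new StringContent(body, Encoding.UTF8, "application/json");

        HttpResponseMessage response = await client.PostAsync(
            endpoint + "/contentmoderator/moderate/v1.0/ProcessImage/Evaluate", content);

        Console.WriteLine(await response.Content.ReadAsStringAsync());
    }
}
```
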
-## Face detection
-
-You can use the Image Moderation API to locate faces in an image. This option can be useful when you have privacy concerns and want to prevent a specific face from being posted on your platform.
-
-1. In the [Image Moderation API reference](https://westus.dev.cognitive.microsoft.com/docs/services/57cf753a3f9b070c105bd2c1/operations/57cf753a3f9b070868a1f66c), in the left menu, under **Image**, select **Find Faces**.
-
- The **Image - Find Faces** page opens.
-
-2. For **Open API testing console**, select the region that most closely describes your location.
-
- ![Try Image - Find Faces page region selection](images/test-drive-region.png)
-
- The **Image - Find Faces** API console opens.
-
-3. Specify an image to scan. You can submit the image itself as binary bit data, or specify a publicly accessible URL to an image. This example links to an image that's used in a CNN story.
-
- ![Try Image - Find Faces sample image](images/try-image-api-face-image.jpg)
-
- ![Try Image - Find Faces sample request](images/try-image-api-face-request.png)
-
-4. Select **Send**. In this example, the API finds two faces, and returns their coordinates in the image.
-
- ![Try Image - Find Faces sample Response content box](images/try-image-api-face-response.png)
-
-## Text detection via OCR capability
-
-You can use the Content Moderator OCR capability to detect text in images.
-
-1. In the [Image Moderation API reference](https://westus.dev.cognitive.microsoft.com/docs/services/57cf753a3f9b070c105bd2c1/operations/57cf753a3f9b070868a1f66c), in the left menu, under **Image**, select **OCR**.
-
- The **Image - OCR** page opens.
-
-2. For **Open API testing console**, select the region that most closely describes your location.
-
- ![Image - OCR page region selection](images/test-drive-region.png)
-
- The **Image - OCR** API console opens.
-
-3. In the **Ocp-Apim-Subscription-Key** box, enter your subscription key.
-
-4. In the **Request body** box, use the default sample image. This is the same image that's used in the preceding section.
-
-5. Select **Send**. The extracted text is displayed in JSON:
-
- ![Image - OCR sample Response content box](images/try-image-api-ocr.PNG)
-
-## Next steps
-
-Use the REST API in your code, or follow the [.NET SDK quickstart](./client-libraries.md?pivots=programming-language-csharp%253fpivots%253dprogramming-language-csharp) to add image moderation to your application.
cognitive-services Try Image List Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Content-Moderator/try-image-list-api.md
- Title: Moderate images with custom lists and the API console - Content Moderator
-description: You use the List Management API in Azure Content Moderator to create custom lists of images.
- Previously updated : 01/10/2019
-# Moderate with custom image lists in the API console
-
-You use the [List Management API](https://westus.dev.cognitive.microsoft.com/docs/services/57cf755e3f9b070c105bd2c2/operations/57cf755e3f9b070868a1f672) in Azure Content Moderator to create custom lists of images. Use the custom lists of images with the Image Moderation API. The image moderation operation evaluates your image. If you create custom lists, the operation also compares it to the images in your custom lists. You can use custom lists to block or allow the image.
-
-> [!NOTE]
-> There is a maximum limit of **5 image lists**, and each list must **not exceed 10,000 images**.
->
-
-You use the List Management API to do the following tasks:
-
-- Create a list.
-- Add images to a list.
-- Screen images against the images in a list.
-- Delete images from a list.
-- Delete a list.
-- Edit list information.
-- Refresh the index so that changes to the list are included in a new scan.
-
-## Use the API console
-Before you can test-drive the API in the online console, you need your subscription key. This is located on the **Settings** tab, in the **Ocp-Apim-Subscription-Key** box. For more information, see [Overview](overview.md).
-
-## Refresh search index
-
-After you make changes to an image list, you must refresh its index for changes to be included in future scans. This step is similar to how a search engine on your desktop (if enabled) or a web search engine continually refreshes its index to include new files or pages.
-
-1. In the [Image List Management API reference](https://westus.dev.cognitive.microsoft.com/docs/services/57cf755e3f9b070c105bd2c2/operations/57cf755e3f9b070868a1f672), in the left menu, select **Image Lists**, and then select **Refresh Search Index**.
-
- The **Image Lists - Refresh Search Index** page opens.
-
-2. For **Open API testing console**, select the region that most closely describes your location.
-
- ![Image Lists - Refresh Search Index page region selection](images/test-drive-region.png)
-
- The **Image Lists - Refresh Search Index** API console opens.
-
-3. In the **listId** box, enter the list ID. Enter your subscription key, and then select **Send**.
-
- ![Image Lists - Refresh Search Index console Response content box](images/try-image-list-refresh-1.png)
--
-## Create an image list
-
-1. Go to the [Image List Management API reference](https://westus.dev.cognitive.microsoft.com/docs/services/57cf755e3f9b070c105bd2c2/operations/57cf755e3f9b070868a1f672).
-
- The **Image Lists - Create** page opens.
-
-3. For **Open API testing console**, select the region that most closely describes your location.
-
- ![Image Lists - Create page region selection](images/test-drive-region.png)
-
- The **Image Lists - Create** API console opens.
-
-4. In the **Ocp-Apim-Subscription-Key** box, enter your subscription key.
-
-5. In the **Request body** box, enter values for **Name** (for example, MyList) and **Description**.
-
- ![Image Lists - Create console Request body name and description](images/try-terms-list-create-1.png)
-
-6. Use key-value pair placeholders to assign more descriptive metadata to your list.
-
- ```json
- {
- "Name": "MyExclusionList",
- "Description": "MyListDescription",
- "Metadata":
- {
- "Category": "Competitors",
- "Type": "Exclude"
- }
- }
- ```
-
- Add list metadata as key-value pairs, and not the actual images.
-
-7. Select **Send**. Your list is created. Note the **ID** value that is associated with the new list. You need this ID for other image list management functions.
-
- ![Image Lists - Create console Response content box shows the list ID](images/try-terms-list-create-2.png)
-
-8. Next, add images to MyList. In the left menu, select **Image**, and then select **Add Image**.
-
- The **Image - Add Image** page opens.
-
-9. For **Open API testing console**, select the region that most closely describes your location.
-
- ![Image - Add Image page region selection](images/test-drive-region.png)
-
- The **Image - Add Image** API console opens.
-
-10. In the **listId** box, enter the list ID that you generated, and then enter the URL of the image that you want to add. Enter your subscription key, and then select **Send**.
-
-11. To verify that the image has been added to the list, in the left menu, select **Image**, and then select **Get All Image Ids**.
-
- The **Image - Get All Image Ids** API console opens.
-
-12. In the **listId** box, enter the list ID, and then enter your subscription key. Select **Send**.
-
- ![Image - Get All Image Ids console Response content box lists the images that you entered](images/try-image-list-create-11.png)
-
-13. Add a few more images. Now that you have created a custom list of images, try [evaluating images](try-image-api.md) by using the custom image list. A programmatic sketch of these steps follows.
-
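The console steps above also have .NET SDK equivalents (see the [Image lists .NET quickstart](image-lists-quickstart-dotnet.md)). A minimal sketch, assuming the member names used by that SDK (which may differ between versions), looks like this:

```csharp
// Minimal sketch of the console workflow: create an image list, add an image by URL,
// and refresh the search index. Assumes a ContentModeratorClient as created in the
// term-lists quickstart in this document; ListManagementImageLists, ListManagementImage,
// AddImageUrlInput, and RefreshIndexMethod may differ between SDK versions.
static void BuildImageList(ContentModeratorClient client, string imageUrl)
{
    // Create the list with a name and description (metadata is optional).
    var listDetails = new Body("MyExclusionList", "MyListDescription");
    var imageList = client.ListManagementImageLists.Create("application/json", listDetails);
    string listId = imageList.Id.Value.ToString();

    // Add an image by URL, then refresh the search index so match operations pick it up.
    client.ListManagementImage.AddImageUrlInput(listId, "application/json", new BodyModel("URL", imageUrl));
    client.ListManagementImageLists.RefreshIndexMethod(listId);

    Console.WriteLine("Created list {0} and added {1}.", listId, imageUrl);
}
```
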
-## Delete images and lists
-
-Deleting an image or a list is straightforward. You can use the API to do the following tasks:
-
-- Delete an image. (**Image - Delete**)
-- Delete all the images in a list without deleting the list. (**Image - Delete All Images**)
-- Delete a list and all of its contents. (**Image Lists - Delete**)
-
-This example deletes a single image:
-
-1. In the [Image List Management API reference](https://westus.dev.cognitive.microsoft.com/docs/services/57cf755e3f9b070c105bd2c2/operations/57cf755e3f9b070868a1f672), in the left menu, select **Image**, and then select **Delete**.
-
- The **Image - Delete** page opens.
-
-2. For **Open API testing console**, select the region that most closely describes your location.
-
- ![Image - Delete page region selection](images/test-drive-region.png)
-
- The **Image - Delete** API console opens.
-
-3. In the **listId** box, enter the ID of the list to delete an image from. This is the number returned in the **Image - Get All Image Ids** console for MyList. Then, enter the **ImageId** of the image to delete.
-
- In our example, the list ID is **58953**, the value for **ContentSource**. The image ID is **59021**, the value for **ContentIds**.
-
-1. Enter your subscription key, and then select **Send**.
-
-1. To verify that the image has been deleted, use the **Image - Get All Image Ids** console.
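
A minimal Python sketch of the same delete call is shown below; the region, path, and the sample IDs are assumptions based on the example above.

```python
import requests

# Assumed values: the region, path, and the sample IDs from the example above.
endpoint = "https://westus.api.cognitive.microsoft.com"
headers = {"Ocp-Apim-Subscription-Key": "YOUR_SUBSCRIPTION_KEY"}
list_id = "58953"   # example list ID from above
image_id = "59021"  # example image ID from above

# Image - Delete: remove a single image from the list.
response = requests.delete(
    f"{endpoint}/contentmoderator/lists/v1.0/imagelists/{list_id}/images/{image_id}",
    headers=headers,
)
print(response.status_code)
```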
-
-## Change list information
-
-You can edit a list's name and description, and add metadata items.
-
-1. In the [Image List Management API reference](https://westus.dev.cognitive.microsoft.com/docs/services/57cf755e3f9b070c105bd2c2/operations/57cf755e3f9b070868a1f672), in the left menu, select **Image Lists**, and then select **Update Details**.
-
- The **Image Lists - Update Details** page opens.
-
-2. For **Open API testing console**, select the region that most closely describes your location.
-
- ![Image Lists - Update Details page region selection](images/test-drive-region.png)
-
- The **Image Lists - Update Details** API console opens.
-
-3. In the **listId** box, enter the list ID, and then enter your subscription key.
-
-4. In the **Request body** box, make your edits, and then select the **Send** button on the page.
-
- ![Image Lists - Update Details console Request body edits](images/try-terms-list-change-1.png)
-
-
-## Next steps
-
-Use the REST API in your code or start with the [Image lists .NET quickstart](image-lists-quickstart-dotnet.md) to integrate with your application.
cognitive-services Try Terms List Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Content-Moderator/try-terms-list-api.md
- Title: Moderate text with custom term lists - Content Moderator-
-description: Use the List Management API to create custom lists of terms to use with the Text Moderation API.
------- Previously updated : 01/10/2019----
-# Moderate with custom term lists in the API console
-
-The default global list of terms in Azure Content Moderator is sufficient for most content moderation needs. However, you might need to screen for terms that are specific to your organization. For example, you might want to tag competitor names for further review.
-
-Use the [List Management API](https://westus.dev.cognitive.microsoft.com/docs/services/57cf755e3f9b070c105bd2c2/operations/57cf755e3f9b070868a1f67f) to create custom lists of terms to use with the Text Moderation API. The **Text - Screen** operation scans your text for profanity, and also compares text against custom and shared blocklists.
-
-> [!NOTE]
-> There is a maximum limit of **5 term lists**, and each list must **not exceed 10,000 terms**.
->
-
-You can use the List Management API to do the following tasks:
-- Create a list.
-- Add terms to a list.
-- Screen terms against the terms in a list.
-- Delete terms from a list.
-- Delete a list.
-- Edit list information.
-- Refresh the index so that changes to the list are included in a new scan.
-
-## Use the API console
-
-Before you can test-drive the API in the online console, you need your subscription key. This key is located on the **Settings** tab, in the **Ocp-Apim-Subscription-Key** box. For more information, see [Overview](overview.md).
-
-## Refresh search index
-
-After you make changes to a term list, you must refresh its index for changes to be included in future scans. This step is similar to how a search engine on your desktop (if enabled) or a web search engine continually refreshes its index to include new files or pages.
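
If you automate term list updates, you can make the same refresh call directly against the REST API. The following is a minimal Python sketch; the endpoint region, path, and `language` parameter are assumptions to verify against the API console.

```python
import requests

# Assumed values: replace with your own resource's region, key, and list ID.
endpoint = "https://westus.api.cognitive.microsoft.com"
headers = {"Ocp-Apim-Subscription-Key": "YOUR_SUBSCRIPTION_KEY"}
list_id = "YOUR_LIST_ID"

# Term Lists - Refresh Search Index (path and language parameter assumed).
response = requests.post(
    f"{endpoint}/contentmoderator/lists/v1.0/termlists/{list_id}/RefreshIndex",
    headers=headers,
    params={"language": "eng"},
)
print(response.status_code, response.json())
```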
-
-1. In the [Term List Management API reference](https://westus.dev.cognitive.microsoft.com/docs/services/57cf755e3f9b070c105bd2c2/operations/57cf755e3f9b070868a1f67f), in the left menu, select **Term Lists**, and then select **Refresh Search Index**.
-
- The **Term Lists - Refresh Search Index** page opens.
-
-2. For **Open API testing console**, select the region that most closely describes your location.
-
- ![Term Lists - Refresh Search Index page region selection](images/test-drive-region.png)
-
- The **Term Lists - Refresh Search Index** API console opens.
-
-3. In the **listId** box, enter the list ID. Enter your subscription key, and then select **Send**.
-
- ![Term Lists API - Refresh Search Index console Response content box](images/try-terms-list-refresh-1.png)
-
-## Create a term list
-1. Go to the [Term List Management API reference](https://westus.dev.cognitive.microsoft.com/docs/services/57cf755e3f9b070c105bd2c2/operations/57cf755e3f9b070868a1f67f).
-
- The **Term Lists - Create** page opens.
-
-2. For **Open API testing console**, select the region that most closely describes your location.
-
- ![Term Lists - Create page region selection](images/test-drive-region.png)
-
- The **Term Lists - Create** API console opens.
-
-3. In the **Ocp-Apim-Subscription-Key** box, enter your subscription key.
-
-4. In the **Request body** box, enter values for **Name** (for example, MyList) and **Description**.
-
- ![Term Lists - Create console Request body name and description](images/try-terms-list-create-1.png)
-
-5. Use key-value pair placeholders to assign more descriptive metadata to your list.
-
- ```json
- {
- "Name": "MyExclusionList",
- "Description": "MyListDescription",
- "Metadata":
- {
- "Category": "Competitors",
- "Type": "Exclude"
- }
- }
- ```
-
- Add list metadata as key-value pairs, and not actual terms.
-
-6. Select **Send**. Your list is created. Note the **ID** value that is associated with the new list. You need this ID for other term list management functions.
-
- ![Term Lists - Create console Response content box shows the list ID](images/try-terms-list-create-2.png)
-
-7. Add terms to MyList. In the left menu, under **Term**, select **Add Term**.
-
- The **Term - Add Term** page opens.
-
-8. For **Open API testing console**, select the region that most closely describes your location.
-
- ![Term - Add Term page region selection](images/test-drive-region.png)
-
- The **Term - Add Term** API console opens.
-
-9. In the **listId** box, enter the list ID that you generated, and select a value for **language**. Enter your subscription key, and then select **Send**.
-
- ![Term - Add Term console query parameters](images/try-terms-list-create-3.png)
-
-10. To verify that the term has been added to the list, in the left menu, select **Term**, and then select **Get All Terms**.
-
- The **Term - Get All Terms** API console opens.
-
-11. In the **listId** box, enter the list ID, and then enter your subscription key. Select **Send**.
-
-12. In the **Response content** box, verify the terms you entered.
-
- ![Term - Get All Terms console Response content box lists the terms that you entered](images/try-terms-list-create-4.png)
-
-13. Add a few more terms. Now that you have created a custom list of terms, try [scanning some text](try-text-api.md) by using the custom term list.
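
To run the same steps from code instead of the console, a minimal Python sketch might look like this. The REST paths, the term-in-the-URL convention, and the response field names are assumptions based on the API console.

```python
import requests

# Assumed values: replace with your own resource's region and key.
endpoint = "https://westus.api.cognitive.microsoft.com"
headers = {"Ocp-Apim-Subscription-Key": "YOUR_SUBSCRIPTION_KEY"}

# Term Lists - Create: send the list name, description, and metadata.
create = requests.post(
    f"{endpoint}/contentmoderator/lists/v1.0/termlists",
    headers=headers,
    json={
        "Name": "MyList",
        "Description": "MyListDescription",
        "Metadata": {"Category": "Competitors", "Type": "Exclude"},
    },
)
list_id = create.json()["Id"]  # response field name assumed

# Term - Add Term: in this API the term itself is part of the URL path (assumed).
term = "exampleterm"
requests.post(
    f"{endpoint}/contentmoderator/lists/v1.0/termlists/{list_id}/terms/{term}",
    headers=headers,
    params={"language": "eng"},
)

# Term - Get All Terms: verify that the term was added.
terms = requests.get(
    f"{endpoint}/contentmoderator/lists/v1.0/termlists/{list_id}/terms",
    headers=headers,
    params={"language": "eng"},
)
print(terms.json())
```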
-
-## Delete terms and lists
-
-Deleting a term or a list is straightforward. You use the API to do the following tasks:
-
-- Delete a term. (**Term - Delete**)
-- Delete all the terms in a list without deleting the list. (**Term - Delete All Terms**)
-- Delete a list and all of its contents. (**Term Lists - Delete**)
-
-This example deletes a single term.
-
-1. In the [Term List Management API reference](https://westus.dev.cognitive.microsoft.com/docs/services/57cf755e3f9b070c105bd2c2/operations/57cf755e3f9b070868a1f67f), in the left menu, select **Term**, and then select **Delete**.
-
- The **Term - Delete** page opens.
-
-2. For **Open API testing console**, select the region that most closely describes your location.
-
- ![Term - Delete page region selection](images/test-drive-region.png)
-
- The **Term - Delete** API console opens.
-
-3. In the **listId** box, enter the ID of the list that you want to delete a term from. This ID is the number (in our example, **122**) that is returned in the **Term Lists - Get Details** console for MyList. Enter the term and select a language.
-
- ![Term - Delete console query parameters](images/try-terms-list-delete-1.png)
-
-4. Enter your subscription key, and then select **Send**.
-
-5. To verify that the term has been deleted, use the **Term Lists - Get All** console.
-
- ![Term Lists - Get All console Response content box shows that term is deleted](images/try-terms-list-delete-2.png)
-
-## Change list information
-
-You can edit a list's name and description, and add metadata items.
-
-1. In the [Term List Management API reference](https://westus.dev.cognitive.microsoft.com/docs/services/57cf755e3f9b070c105bd2c2/operations/57cf755e3f9b070868a1f67f), in the left menu, select **Term Lists**, and then select **Update Details**.
-
- The **Term Lists - Update Details** page opens.
-
-2. For **Open API testing console**, select the region that most closely describes your location.
-
- ![Term Lists - Update Details page region selection](images/test-drive-region.png)
-
- The **Term Lists - Update Details** API console opens.
-
-3. In the **listId** box, enter the list ID, and then enter your subscription key.
-
-4. In the **Request body** box, make your edits, and then select **Send**.
-
- ![Term Lists - Update Details console Request body edits](images/try-terms-list-change-1.png)
-
-
-## Next steps
-
-Use the REST API in your code or start with the [Term lists .NET quickstart](term-lists-quickstart-dotnet.md) to integrate with your application.
cognitive-services Try Text Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Content-Moderator/try-text-api.md
- Title: Moderate text by using the Text Moderation API - Content Moderator-
-description: Test-drive text moderation by using the Text Moderation API in the online console.
-------- Previously updated : 10/27/2021--
-# Moderate text from the API console
-
-Use the [Text Moderation API](https://westus.dev.cognitive.microsoft.com/docs/services/57cf753a3f9b070c105bd2c1/operations/57cf753a3f9b070868a1f66f) in Azure Content Moderator to scan your text content for profanity and compare it against custom and shared lists.
-
-## Get your API key
-
-Before you can test-drive the API in the online console, you need your subscription key. This is located on the **Settings** tab, in the **Ocp-Apim-Subscription-Key** box. For more information, see [Overview](overview.md).
-
-## Navigate to the API reference
-
-Go to the [Text Moderation API reference](https://westus.dev.cognitive.microsoft.com/docs/services/57cf753a3f9b070c105bd2c1/operations/57cf753a3f9b070868a1f66f).
-
- The **Text - Screen** page opens.
-
-## Open the API console
-
-For **Open API testing console**, select the region that most closely describes your location.
-
- ![Text - Screen page region selection](images/test-drive-region.png)
-
- The **Text - Screen** API console opens.
-
-## Select the inputs
-
-### Parameters
-
-Select the query parameters that you want to use in your text screen. For this example, use the default value for **language**. You can also leave it blank because the operation will automatically detect the likely language as part of its execution.
-
-> [!NOTE]
-> For the **language** parameter, assign `eng` or leave it empty to see the machine-assisted **classification** response (preview feature). **This feature supports English only**.
->
-> For **profanity terms** detection, use the [ISO 639-3 code](http://www-01.sil.org/iso639-3/codes.asp) of the supported languages listed in this article, or leave it empty.
-
-For **autocorrect**, **PII**, and **classify (preview)**, select **true**. Leave the **ListId** field empty.
-
- ![Text - Screen console query parameters](images/text-api-console-inputs.PNG)
-
-### Content type
-
-For **Content-Type**, select the type of content you want to screen. For this example, use the default **text/plain** content type. In the **Ocp-Apim-Subscription-Key** box, enter your subscription key.
-
-### Sample text to scan
-
-In the **Request body** box, enter some text. The following example shows an intentional typo in the text.
-
-```
-Is this a grabage or <offensive word> email abcdef@abcd.com, phone: 4255550111, IP:
-255.255.255.255, 1234 Main Boulevard, Panapolis WA 96555.
-```
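
To send the same request from code rather than the console, here's a minimal Python sketch. The endpoint region and query parameter values are assumptions taken from the console inputs above; substitute your own key.

```python
import requests

# Assumed values: replace the region and key with your own.
endpoint = "https://westus.api.cognitive.microsoft.com"
headers = {
    "Ocp-Apim-Subscription-Key": "YOUR_SUBSCRIPTION_KEY",
    "Content-Type": "text/plain",
}

sample_text = (
    "Is this a grabage or <offensive word> email abcdef@abcd.com, phone: 4255550111, "
    "IP: 255.255.255.255, 1234 Main Boulevard, Panapolis WA 96555."
)

# Text - Screen: check for profanity, PII, autocorrection, and classification (preview).
response = requests.post(
    f"{endpoint}/contentmoderator/moderate/v1.0/ProcessText/Screen",
    headers=headers,
    params={"autocorrect": "True", "PII": "True", "classify": "True", "language": "eng"},
    data=sample_text.encode("utf-8"),
)
print(response.json())
```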
-
-## Analyze the response
-
-The following response shows the various insights from the API. It contains potential profanity, personal data, classification (preview), and the auto-corrected version.
-
-> [!NOTE]
-> The machine-assisted 'Classification' feature is in preview and supports English only.
-
-```json
-{
- "original_text":"Is this a grabage or <offensive word> email abcdef@abcd.com, phone:
- 6657789887, IP: 255.255.255.255, 1 Microsoft Way, Redmond, WA 98052.",
- "normalized_text":" grabage <offensive word> email abcdef@abcd.com, phone:
- 6657789887, IP: 255.255.255.255, 1 Microsoft Way, Redmond, WA 98052.",
- "auto_corrected_text":"Is this a garbage or <offensive word> email abcdef@abcd.com, phone:
- 6657789887, IP: 255.255.255.255, 1 Microsoft Way, Redmond, WA 98052.",
- "status":{
- "code":3000,
- "description":"OK"
- },
- "pii":{
- "email":[
- {
- "detected":"abcdef@abcd.com",
- "sub_type":"Regular",
- "text":"abcdef@abcd.com",
- "index":32
- }
- ],
- "ssn":[
-
- ],
- "ipa":[
- {
- "sub_type":"IPV4",
- "text":"255.255.255.255",
- "index":72
- }
- ],
- "phone":[
- {
- "country_code":"US",
- "text":"6657789887",
- "index":56
- }
- ],
- "address":[
- {
- "text":"1 Microsoft Way, Redmond, WA 98052",
- "index":89
- }
- ]
- },
- "language":"eng",
- "terms":[
- {
- "index":12,
- "original_index":21,
- "list_id":0,
- "term":"<offensive word>"
- }
- ],
- "tracking_id":"WU_ibiza_65a1016d-0f67-45d2-b838-b8f373d6d52e_ContentModerator.
- F0_fe000d38-8ecd-47b5-a8b0-4764df00e3b5"
-}
-```
-
-For a detailed explanation of all sections in the JSON response, refer to the [Text moderation](text-moderation-api.md) conceptual guide.
-
-## Next steps
-
-Use the REST API in your code, or follow the [.NET SDK quickstart](./client-libraries.md?pivots=programming-language-csharp%253fpivots%253dprogramming-language-csharp) to integrate with your application.
cognitive-services Video Moderation Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Content-Moderator/video-moderation-api.md
- Title: "Analyze video content for objectionable material in C# - Content Moderator"-
-description: How to analyze video content for various objectionable material using the Content Moderator SDK for .NET
------- Previously updated : 10/27/2021--
-#Customer intent: As a C# developer of content management software, I want to analyze video content for offensive or inappropriate material so that I can categorize and handle it accordingly.
--
-# Analyze video content for objectionable material in C#
-
-This article provides information and code samples to help you get started using the [Content Moderator SDK for .NET](https://www.nuget.org/packages/Microsoft.Azure.CognitiveServices.ContentModerator/) to scan video content for adult or racy content.
-
-If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/cognitive-services/) before you begin.
-
-## Prerequisites
-- Any edition of [Visual Studio 2015 or 2017](https://www.visualstudio.com/downloads/)
-
-## Set up Azure resources
-
-The Content Moderator's video moderation capability is available as a free public preview **media processor** in Azure Media Services (AMS). Azure Media Services is a specialized Azure service for storing and streaming video content.
-
-### Create an Azure Media Services account
-
-Follow the instructions in [Create an Azure Media Services account](/azure/media-services/previous/media-services-portal-create-account) to subscribe to AMS and create an associated Azure storage account. In that storage account, create a new Blob storage container.
-
-### Create an Azure Active Directory application
-
-Navigate to your new AMS subscription in the Azure portal and select **API access** from the side menu. Select **Connect to Azure Media Services with service principal**. Note the value in the **REST API endpoint** field; you will need this later.
-
-In the **Azure AD app** section, select **Create New** and name your new Azure AD application registration (for example, "VideoModADApp"). Click **Save** and wait a few minutes while the application is configured. Then, you should see your new app registration under the **Azure AD app** section of the page.
-
-Select your app registration and click the **Manage application** button below it. Note the value in the **Application ID** field; you will need this later. Select **Settings** > **Keys**, and enter a description for a new key (such as "VideoModKey"). Click **Save**, and then notice the new key value. Copy this string and save it somewhere secure.
-
-For a more thorough walkthrough of the above process, see [Get started with Azure AD authentication](/azure/media-services/previous/media-services-portal-get-started-with-aad).
-
-Once you've done this, you can use the video moderation media processor in two different ways.
-
-## Use Azure Media Services Explorer
-
-The Azure Media Services Explorer is a user-friendly frontend for AMS. Use it to browse your AMS account, upload videos, and scan content with the Content Moderator media processor. Download and install it from [GitHub](https://github.com/Azure/Azure-Media-Services-Explorer/releases), or see the [Azure Media Services Explorer blog post](https://azure.microsoft.com/blog/managing-media-workflows-with-the-new-azure-media-services-explorer-tool/) for more information.
-
-![Azure Media Services explorer with Content Moderator](images/ams-explorer-content-moderator.PNG)
-
-## Create the Visual Studio project
-
-1. In Visual Studio, create a new **Console app (.NET Framework)** project and name it **VideoModeration**.
-1. If there are other projects in your solution, select this one as the single startup project.
-1. Get the required NuGet packages. Right-click on your project in the Solution Explorer and select **Manage NuGet Packages**; then find and install the following packages:
- - windowsazure.mediaservices
- - windowsazure.mediaservices.extensions
-
-## Add video moderation code
-
-Next, you'll copy and paste the code from this guide into your project to implement a basic content moderation scenario.
-
-### Update the program's using statements
-
-Add the following `using` statements to the top of your _Program.cs_ file.
-
-```csharp
-using System;
-using System.Linq;
-using System.Threading.Tasks;
-using Microsoft.WindowsAzure.MediaServices.Client;
-using System.IO;
-using System.Threading;
-using Microsoft.WindowsAzure.Storage.Blob;
-using Microsoft.WindowsAzure.Storage;
-using Microsoft.WindowsAzure.Storage.Auth;
-using System.Collections.Generic;
-```
-
-### Set up resource references
-
-Add the following static fields to the **Program** class in _Program.cs_. These fields hold the information necessary for connecting to your AMS subscription. Fill them in with the values you got in the steps above. Note that `CLIENT_ID` is the **Application ID** value of your Azure AD app, and `CLIENT_SECRET` is the value of the "VideoModKey" that you created for that app.
-
-```csharp
-// declare constants and globals
-private static CloudMediaContext _context = null;
-private static CloudStorageAccount _StorageAccount = null;
-
-// Azure Media Services (AMS) associated Storage Account, Key, and the Container that has
-// a list of Blobs to be processed.
-static string STORAGE_NAME = "YOUR AMS ASSOCIATED BLOB STORAGE NAME";
-static string STORAGE_KEY = "YOUR AMS ASSOCIATED BLOB STORAGE KEY";
-static string STORAGE_CONTAINER_NAME = "YOUR BLOB CONTAINER FOR VIDEO FILES";
-
-private static StorageCredentials _StorageCredentials = null;
-
-// Azure Media Services authentication.
-private const string AZURE_AD_TENANT_NAME = "microsoft.onmicrosoft.com";
-private const string CLIENT_ID = "YOUR CLIENT ID";
-private const string CLIENT_SECRET = "YOUR CLIENT SECRET";
-
-// REST API endpoint, for example "https://accountname.restv2.westcentralus.media.azure.net/API".
-private const string REST_API_ENDPOINT = "YOUR API ENDPOINT";
-
-// Content Moderator Media Processor Name
-private const string MEDIA_PROCESSOR = "Azure Media Content Moderator";
-
-// Input and Output files in the current directory of the executable
-private const string INPUT_FILE = "VIDEO FILE NAME";
-private const string OUTPUT_FOLDER = "";
-
-// JSON settings file
-private static readonly string CONTENT_MODERATOR_PRESET_FILE = "preset.json";
-
-```
-
-> [!IMPORTANT]
-> Remember to remove the keys from your code when you're done, and never post them publicly. For production, use a secure way of storing and accessing your credentials like [Azure Key Vault](../../key-vault/general/overview.md). See the Cognitive Services [security](../cognitive-services-security.md) article for more information.
-
-If you wish to use a local video file (simplest case), add it to the project and enter its path as the `INPUT_FILE` value (relative paths are relative to the execution directory).
-
-You will also need to create the _preset.json_ file in the current directory and use it to specify a version number. For example:
-
-```JSON
-{
- "version": "2.0"
-}
-```
-
-### Load the input video(s)
-
-The **Main** method of the **Program** class will create an Azure Media Context and then an Azure Storage Context (in case your videos are in blob storage). The remaining code scans a video from a local folder, blob, or multiple blobs within an Azure storage container. You can try all options by commenting out the other lines of code.
-
-```csharp
-// Create Azure Media Context
-CreateMediaContext();
-
-// Create Storage Context
-CreateStorageContext();
-
-// Use a file as the input.
-IAsset asset = CreateAssetfromFile();
-
-// -- OR
-
-// Or a blob as the input
-// IAsset asset = CreateAssetfromBlob((CloudBlockBlob)GetBlobsList().First());
-
-// Then submit the asset to Content Moderator
-RunContentModeratorJob(asset);
-
-//-- OR -
-
-// Just run the content moderator on all blobs in a list (from a Blob Container)
-// RunContentModeratorJobOnBlobs();
-```
-
-### Create an Azure Media Context
-
-Add the following method to the **Program** class. This uses your AMS credentials to allow communication with AMS.
-
-```csharp
-// Creates a media context from azure credentials
-static void CreateMediaContext()
-{
- // Get Azure AD credentials
- var tokenCredentials = new AzureAdTokenCredentials(AZURE_AD_TENANT_NAME,
- new AzureAdClientSymmetricKey(CLIENT_ID, CLIENT_SECRET),
- AzureEnvironments.AzureCloudEnvironment);
-
- // Initialize an Azure AD token
- var tokenProvider = new AzureAdTokenProvider(tokenCredentials);
-
- // Create a media context
- _context = new CloudMediaContext(new Uri(REST_API_ENDPOINT), tokenProvider);
-}
-```
-
-### Add the code to create an Azure Storage Context
-
-Add the following method to the **Program** class. You use the Storage Context, created from your storage credentials, to access your blob storage.
-
-```csharp
-// Creates a storage context from the AMS associated storage name and key
-static void CreateStorageContext()
-{
- // Get a reference to the storage account associated with a Media Services account.
- if (_StorageCredentials == null)
- {
- _StorageCredentials = new StorageCredentials(STORAGE_NAME, STORAGE_KEY);
- }
- _StorageAccount = new CloudStorageAccount(_StorageCredentials, false);
-}
-```
-
-### Add the code to create Azure Media Assets from local file and blob
-
-The Content Moderator media processor runs jobs on **Assets** within the Azure Media Services platform.
-These methods create the Assets from a local file or an associated blob.
-
-```csharp
-// Creates an Azure Media Services Asset from the video file
-static IAsset CreateAssetfromFile()
-{
-    return _context.Assets.CreateFromFile(INPUT_FILE, AssetCreationOptions.None);
-}
-
-// Creates an Azure Media Services asset from your blob storage
-static IAsset CreateAssetfromBlob(CloudBlockBlob Blob)
-{
- // Create asset from the FIRST blob in the list and return it
- return _context.Assets.CreateFromBlob(Blob, _StorageCredentials, AssetCreationOptions.None);
-}
-```
-
-### Add the code to scan a collection of videos (as blobs) within a container
-
-```csharp
-// Runs the Content Moderator Job on all Blobs in a given container name
-static void RunContentModeratorJobOnBlobs()
-{
- // Get the reference to the list of Blobs. See the following method.
- var blobList = GetBlobsList();
-
- // Iterate over the Blob list items or work on specific ones as needed
- foreach (var sourceBlob in blobList)
- {
- // Create an Asset
- IAsset asset = _context.Assets.CreateFromBlob((CloudBlockBlob)sourceBlob,
- _StorageCredentials, AssetCreationOptions.None);
- asset.Update();
-
- // Submit to Content Moderator
- RunContentModeratorJob(asset);
- }
-}
-
-// Get all blobs in your container
-static IEnumerable<IListBlobItem> GetBlobsList()
-{
- // Get a reference to the Container within the Storage Account
- // that has the files (blobs) for moderation
- CloudBlobClient CloudBlobClient = _StorageAccount.CreateCloudBlobClient();
- CloudBlobContainer MediaBlobContainer = CloudBlobClient.GetContainerReference(STORAGE_CONTAINER_NAME);
-
- // Get the reference to the list of Blobs
- var blobList = MediaBlobContainer.ListBlobs();
- return blobList;
-}
-```
-
-### Add the method to run the Content Moderator Job
-
-```csharp
-// Run the Content Moderator job on the designated Asset from local file or blob storage
-static void RunContentModeratorJob(IAsset asset)
-{
- // Grab the presets
- string configuration = File.ReadAllText(CONTENT_MODERATOR_PRESET_FILE);
-
- // grab instance of Azure Media Content Moderator MP
- IMediaProcessor mp = _context.MediaProcessors.GetLatestMediaProcessorByName(MEDIA_PROCESSOR);
-
- // create Job with Content Moderator task
- IJob job = _context.Jobs.Create(String.Format("Content Moderator {0}",
- asset.AssetFiles.First() + "_" + Guid.NewGuid()));
-
- ITask contentModeratorTask = job.Tasks.AddNew("Adult and racy classifier task",
- mp, configuration,
- TaskOptions.None);
- contentModeratorTask.InputAssets.Add(asset);
- contentModeratorTask.OutputAssets.AddNew("Adult and racy classifier output",
- AssetCreationOptions.None);
-
- job.Submit();
-
- // Create progress printing and querying tasks
- Task progressPrintTask = new Task(() =>
- {
- IJob jobQuery = null;
- do
- {
- var progressContext = _context;
- jobQuery = progressContext.Jobs
- .Where(j => j.Id == job.Id)
- .First();
- Console.WriteLine(string.Format("{0}\t{1}",
- DateTime.Now,
- jobQuery.State));
- Thread.Sleep(10000);
- }
- while (jobQuery.State != JobState.Finished &&
- jobQuery.State != JobState.Error &&
- jobQuery.State != JobState.Canceled);
- });
- progressPrintTask.Start();
-
- Task progressJobTask = job.GetExecutionProgressTask(
- CancellationToken.None);
- progressJobTask.Wait();
-
- // If job state is Error, the event handling
- // method for job progress should log errors. Here we check
- // for error state and exit if needed.
- if (job.State == JobState.Error)
- {
- ErrorDetail error = job.Tasks.First().ErrorDetails.First();
- Console.WriteLine(string.Format("Error: {0}. {1}",
- error.Code,
- error.Message));
- }
-
- DownloadAsset(job.OutputMediaAssets.First(), OUTPUT_FOLDER);
-}
-```
-
-### Add helper functions
-
-These methods download the Content Moderator output file (JSON) from the Azure Media Services asset, and help track the status of the moderation job so that the program can log a running status to the console.
-
-```csharp
-static void DownloadAsset(IAsset asset, string outputDirectory)
-{
- foreach (IAssetFile file in asset.AssetFiles)
- {
- file.Download(Path.Combine(outputDirectory, file.Name));
- }
-}
-
-// event handler for Job State
-static void StateChanged(object sender, JobStateChangedEventArgs e)
-{
- Console.WriteLine("Job state changed event:");
- Console.WriteLine(" Previous state: " + e.PreviousState);
- Console.WriteLine(" Current state: " + e.CurrentState);
- switch (e.CurrentState)
- {
- case JobState.Finished:
- Console.WriteLine();
- Console.WriteLine("Job finished.");
- break;
- case JobState.Canceling:
- case JobState.Queued:
- case JobState.Scheduled:
- case JobState.Processing:
- Console.WriteLine("Please wait...\n");
- break;
- case JobState.Canceled:
- Console.WriteLine("Job is canceled.\n");
- break;
- case JobState.Error:
- Console.WriteLine("Job failed.\n");
- break;
- default:
- break;
- }
-}
-```
-
-### Run the program and review the output
-
-After the Content Moderation job is completed, analyze the JSON response. It consists of these elements:
-
-- Video information summary
-- **Shots** as "**fragments**"
-- **Key frames** as "**events**" with a **reviewRecommended** (= true or false) flag based on **Adult** and **Racy** scores
-- **start**, **duration**, **totalDuration**, and **timestamp** are in "ticks". Divide by **timescale** to get the number in seconds.
-
-> [!NOTE]
-> - `adultScore` represents the potential presence and prediction score of content that may be considered sexually explicit or adult in certain situations.
-> - `racyScore` represents the potential presence and prediction score of content that may be considered sexually suggestive or mature in certain situations.
-> - `adultScore` and `racyScore` are between 0 and 1. The higher the score, the more likely the model considers the category to apply. This preview relies on a statistical model rather than manually coded outcomes. We recommend testing with your own content to determine how each category aligns to your requirements.
-> - `reviewRecommended` is either true or false depending on the internal score thresholds. Customers should assess whether to use this value or decide on custom thresholds based on their content policies.
-
-```json
-{
-"version": 2,
-"timescale": 90000,
-"offset": 0,
-"framerate": 50,
-"width": 1280,
-"height": 720,
-"totalDuration": 18696321,
-"fragments": [
-{
- "start": 0,
- "duration": 18000
-},
-{
- "start": 18000,
- "duration": 3600,
- "interval": 3600,
- "events": [
- [
- {
- "reviewRecommended": false,
- "adultScore": 0.00001,
- "racyScore": 0.03077,
- "index": 5,
- "timestamp": 18000,
- "shotIndex": 0
- }
- ]
- ]
-},
-{
- "start": 18386372,
- "duration": 119149,
- "interval": 119149,
- "events": [
- [
- {
- "reviewRecommended": true,
- "adultScore": 0.00000,
- "racyScore": 0.91902,
- "index": 5085,
- "timestamp": 18386372,
- "shotIndex": 62
- }
- ]
- ]
-}
-]
-}
-```
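
As a quick illustration of the tick math (not part of the quickstart solution), dividing the sample values above by the **timescale** gives the times in seconds:

```python
# Illustration only: convert the sample key frame's tick values to seconds
# by dividing by the "timescale" value from the JSON summary above.
timescale = 90000
timestamp_ticks = 18386372
duration_ticks = 119149

print(timestamp_ticks / timescale)  # ~204.3 seconds into the video
print(duration_ticks / timescale)   # the shot lasts ~1.3 seconds
```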
-
-## Next steps
-
-[Download the Visual Studio solution](https://github.com/Azure-Samples/cognitive-services-dotnet-sdk-samples/tree/master/ContentModerator) for this and other Content Moderator quickstarts for .NET.
cognitive-services Copy Move Projects https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Custom-Vision-Service/copy-move-projects.md
- Title: Copy and back up Custom Vision projects-
-description: Learn how to use the ExportProject and ImportProject APIs to copy and back up your Custom Vision projects.
----- Previously updated : 01/20/2022---
-# Copy and back up your Custom Vision projects
-
-After you've created and trained a Custom Vision project, you may want to copy your project to another resource. If your app or business depends on the use of a Custom Vision project, we recommend you copy your model to another Custom Vision account in another region. Then if a regional outage occurs, you can access your project in the region where it was copied.
-
-The **[ExportProject](https://westus2.dev.cognitive.microsoft.com/docs/services/Custom_Vision_Training_3.3/operations/5eb0bcc6548b571998fddeb3)** and **[ImportProject](https://westus2.dev.cognitive.microsoft.com/docs/services/Custom_Vision_Training_3.3/operations/5eb0bcc7548b571998fddee3)** APIs enable this scenario by allowing you to copy projects from one Custom Vision account into others. This guide shows you how to use these REST APIs with cURL. You can also use an HTTP request service like Postman to issue the requests.
-
-> [!TIP]
-> For an example of this scenario using the Python client library, see the [Move Custom Vision Project](https://github.com/Azure-Samples/custom-vision-move-project/tree/master/) repository on GitHub.
--
-## Prerequisites
-- Two Azure Custom Vision resources. If you don't have them, go to the Azure portal and [create a new Custom Vision resource](https://portal.azure.com/?microsoft_azure_marketplace_ItemHideKey=microsoft_azure_cognitiveservices_customvision#create/Microsoft.CognitiveServicesCustomVision?azure-portal=true).
-- The training keys and endpoint URLs of your Custom Vision resources. You can find these values on the resource's **Overview** tab on the Azure portal.
-- A created Custom Vision project. See [Build a classifier](./getting-started-build-a-classifier.md) for instructions on how to do this.
-* [PowerShell version 6.0+](/powershell/scripting/install/installing-powershell-core-on-windows), or a similar command-line utility.
-
-## Process overview
-
-The process for copying a project consists of the following steps:
-
-1. First, you get the ID of the project in your source account you want to copy.
-1. Then you call the **ExportProject** API using the project ID and the training key of your source account. You'll get a temporary token string.
-1. Then you call the **ImportProject** API using the token string and the training key of your target account. The project will then be listed under your target account.
-
-## Get the project ID
-
-First call **[GetProjects](https://westus2.dev.cognitive.microsoft.com/docs/services/Custom_Vision_Training_3.3/operations/5eb0bcc6548b571998fddead)** to see a list of your existing Custom Vision projects and their IDs. Use the training key and endpoint of your source account.
-
-```curl
-curl -v -X GET "{endpoint}/customvision/v3.3/Training/projects"
--H "Training-key: {training key}"
-```
-
-You'll get a `200/OK` response with a list of projects and their metadata in the body. The `"id"` value is the string to copy for the next steps.
-
-```json
-[
- {
- "id": "00000000-0000-0000-0000-000000000000",
- "name": "string",
- "description": "string",
- "settings": {
- "domainId": "00000000-0000-0000-0000-000000000000",
- "classificationType": "Multiclass",
- "targetExportPlatforms": [
- "CoreML"
- ],
- "useNegativeSet": true,
- "detectionParameters": "string",
- "imageProcessingSettings": {
- "augmentationMethods": {}
- }
- },
- "created": "string",
- "lastModified": "string",
- "thumbnailUri": "string",
- "drModeEnabled": true,
- "status": "Succeeded"
- }
-]
-```
-
-## Export the project
-
-Call **[ExportProject](https://westus2.dev.cognitive.microsoft.com/docs/services/Custom_Vision_Training_3.3/operations/5eb0bcc6548b571998fddeb3)** using the project ID and your source training key and endpoint.
-
-```curl
-curl -v -X GET "{endpoint}/customvision/v3.3/Training/projects/{projectId}/export"
--H "Training-key: {training key}"
-```
-
-You'll get a `200/OK` response with metadata about the exported project and a reference string `"token"`. Copy the value of the token.
-
-```json
-{
- "iterationCount": 0,
- "imageCount": 0,
- "tagCount": 0,
- "regionCount": 0,
- "estimatedImportTimeInMS": 0,
- "token": "string"
-}
-```
-
-> [!TIP]
-> If you get an "Invalid Token" error when you import your project, it could be that the token URL string isn't web encoded. You can encode the token using a [URL Encoder](https://meyerweb.com/eric/tools/dencoder/).
-
-## Import the project
-
-Call **[ImportProject](https://westus2.dev.cognitive.microsoft.com/docs/services/Custom_Vision_Training_3.3/operations/5eb0bcc7548b571998fddee3)** using your target training key and endpoint, along with the reference token. You can also give your project a name in its new account.
-
-```curl
-curl -v -G -X POST "{endpoint}/customvision/v3.3/Training/projects/import"
---data-urlencode "token={token}" --data-urlencode "name={name}"
--H "Training-key: {training key}" -H "Content-Length: 0"
-```
-
-You'll get a `200/OK` response with metadata about your newly imported project.
-
-```json
-{
- "id": "00000000-0000-0000-0000-000000000000",
- "name": "string",
- "description": "string",
- "settings": {
- "domainId": "00000000-0000-0000-0000-000000000000",
- "classificationType": "Multiclass",
- "targetExportPlatforms": [
- "CoreML"
- ],
- "useNegativeSet": true,
- "detectionParameters": "string",
- "imageProcessingSettings": {
- "augmentationMethods": {}
- }
- },
- "created": "string",
- "lastModified": "string",
- "thumbnailUri": "string",
- "drModeEnabled": true,
- "status": "Succeeded"
-}
-```
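
If you'd rather script the whole flow than run cURL by hand, the following minimal Python sketch issues the same three calls (GetProjects, ExportProject, ImportProject) with the `requests` library. The endpoints and headers mirror the cURL commands above; the resource values and project name are placeholders.

```python
import requests

# Placeholder values: replace with your own source and target resources.
src_endpoint = "https://YOUR_SOURCE_ENDPOINT"
src_key = "YOUR_SOURCE_TRAINING_KEY"
dst_endpoint = "https://YOUR_TARGET_ENDPOINT"
dst_key = "YOUR_TARGET_TRAINING_KEY"

# GetProjects: find the ID of the project you want to copy.
projects = requests.get(
    f"{src_endpoint}/customvision/v3.3/Training/projects",
    headers={"Training-key": src_key},
).json()
project_id = projects[0]["id"]

# ExportProject: returns a temporary reference token.
token = requests.get(
    f"{src_endpoint}/customvision/v3.3/Training/projects/{project_id}/export",
    headers={"Training-key": src_key},
).json()["token"]

# ImportProject: create the copy under the target account.
# requests URL-encodes the token and sets Content-Length automatically.
imported = requests.post(
    f"{dst_endpoint}/customvision/v3.3/Training/projects/import",
    headers={"Training-key": dst_key},
    params={"token": token, "name": "MyCopiedProject"},
)
print(imported.status_code, imported.json())
```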
-
-## Next steps
-
-In this guide, you learned how to copy and move a project between Custom Vision resources. Next, explore the API reference docs to see what else you can do with Custom Vision.
-* [REST API reference documentation](/rest/api/custom-vision/)
cognitive-services Custom Vision Onnx Windows Ml https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Custom-Vision-Service/custom-vision-onnx-windows-ml.md
- Title: "Use an ONNX model with Windows ML - Custom Vision Service"-
-description: Learn how to create a Windows UWP app that uses an ONNX model exported from Azure Cognitive Services.
------- Previously updated : 04/29/2020-
-#Customer intent: As a developer, I want to use a custom vision model with Windows ML.
--
-# Use an ONNX model from Custom Vision with Windows ML (preview)
-
-Learn how to use an ONNX model exported from the Custom Vision service with Windows ML (preview).
-
-In this guide, you'll learn how to use an ONNX file exported from the Custom Vision Service with Windows ML. You'll use the example UWP application with your own trained image classifier.
-
-## Prerequisites
-
-* Windows 10 version 1809 or higher
-* Windows SDK for build 17763 or higher
-* Visual Studio 2017 version 15.7 or later with the __Universal Windows Platform development__ workload enabled.
-* Developer mode enabled on your PC. For more information, see [Enable your device for development](/windows/uwp/get-started/enable-your-device-for-development).
-
-## About the example app
-
-The included application is a generic Windows UWP app. It allows you to select an image from your computer, which is then processed by a locally stored classification model. The tags and scores returned by the model are displayed next to the image.
-
-## Get the example code
-
-The example application is available at the [Cognitive Services ONNX Custom Vision Sample](https://github.com/Azure-Samples/cognitive-services-onnx-customvision-sample) repo on GitHub. Clone it to your local machine and open *SampleOnnxEvaluationApp.sln* in Visual Studio.
-
-## Test the application
-
-1. Use the `F5` key to start the application from Visual Studio. You may be prompted to enable Developer mode.
-1. When the application starts, use the button to select an image for scoring. The default ONNX model is trained to classify different types of plankton.
-
-## Use your own model
-
-To use your own image classifier model, follow these steps:
-
-1. Create and train a classifier with the Custom Vision Service. For instructions on how to do this, see [Create and train a classifier](./getting-started-build-a-classifier.md). Use one of the **compact** domains such as **General (compact)**.
- * If you have an existing classifier that uses a different domain, you can convert it to **compact** in the project settings. Then, re-train your project before continuing.
-1. Export your model. Switch to the Performance tab and select an iteration that was trained with a **compact** domain. Select the **Export** button that appears. Then select **ONNX**, and then **Export**. Once the file is ready, select the **Download** button. For more information on export options, see [Export your model](./export-your-model.md).
-1. Open the downloaded *.zip* file and extract the *model.onnx* file from it. This file contains your classifier model.
-1. In the Solution Explorer in Visual Studio, right-click the **Assets** Folder and select __Add Existing Item__. Select your ONNX file.
-1. In Solution Explorer, right-click the ONNX file and select **Properties**. Change the following properties for the file:
- * __Build Action__ -> __Content__
- * __Copy to Output Directory__ -> __Copy if newer__
-1. Then open _MainPage.xaml.cs_ and change the value of `_ourOnnxFileName` to the name of your ONNX file.
-1. Use the `F5` key to build and run the project.
-1. Click the button to select an image to evaluate.
-
-## Next steps
-
-To discover other ways to export and use a Custom Vision model, see the following documents:
-
-* [Export your model](./export-your-model.md)
-* [Use exported Tensorflow model in an Android application](https://github.com/Azure-Samples/cognitive-services-android-customvision-sample)
-* [Use exported CoreML model in a Swift iOS application](https://go.microsoft.com/fwlink/?linkid=857726)
-* [Use exported CoreML model in an iOS application with Xamarin](https://github.com/xamarin/ios-samples/tree/master/ios11/CoreMLAzureModel)
-
-For more information on using ONNX models with Windows ML, see [Integrate a model into your app with Windows ML](/windows/ai/windows-ml/integrate-model).
cognitive-services Encrypt Data At Rest https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Custom-Vision-Service/encrypt-data-at-rest.md
- Title: Custom Vision encryption of data at rest-
-description: Microsoft offers Microsoft-managed encryption keys, and also lets you manage your Cognitive Services subscriptions with your own keys, called customer-managed keys (CMK). This article covers data encryption at rest for Custom Vision, and how to enable and manage CMK.
------ Previously updated : 08/28/2020--
-#Customer intent: As a user of the Face service, I want to learn how encryption at rest works.
--
-# Custom Vision encryption of data at rest
-
-Azure Custom Vision automatically encrypts your data when it's persisted to the cloud. Custom Vision encryption protects your data and helps you meet your organizational security and compliance commitments.
--
-> [!IMPORTANT]
-> Customer-managed keys are only available for resources created after 11 May, 2020. To use CMK with Custom Vision, you will need to create a new Custom Vision resource. Once the resource is created, you can use Azure Key Vault to set up your managed identity.
--
-## Next steps
-
-* For a full list of services that support CMK, see [Customer-Managed Keys for Cognitive Services](../encryption/cognitive-services-encryption-keys-portal.md)
-* [What is Azure Key Vault](../../key-vault/general/overview.md)?
-* [Cognitive Services Customer-Managed Key Request Form](https://aka.ms/cogsvc-cmk)
cognitive-services Export Delete Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Custom-Vision-Service/export-delete-data.md
- Title: View or delete your data - Custom Vision Service-
-description: You maintain full control over your data. This article explains how you can view, export or delete your data in the Custom Vision Service.
------- Previously updated : 03/21/2019----
-# View or delete user data in Custom Vision
-
-Custom Vision collects user data to operate the service, but customers have full control over viewing and deleting their data using the Custom Vision [Training APIs](https://go.microsoft.com/fwlink/?linkid=865446).
--
-To learn how to view and delete user data in Custom Vision, see the following table.
-
-| Data | View operation | Delete operation |
-| - | - | - |
-| Account info (Keys) | [GetAccountInfo](https://go.microsoft.com/fwlink/?linkid=865446) | Delete using Azure portal (Azure Subscriptions). Or using "Delete Your Account" button in CustomVision.ai settings page (Microsoft Account Subscriptions) |
-| Iteration details | [GetIteration](https://go.microsoft.com/fwlink/?linkid=865446) | [DeleteIteration](https://go.microsoft.com/fwlink/?linkid=865446) |
-| Iteration performance details | [GetIterationPerformance](https://go.microsoft.com/fwlink/?linkid=865446) | [DeleteIteration](https://go.microsoft.com/fwlink/?linkid=865446) |
-| List of iterations | [GetIterations](https://go.microsoft.com/fwlink/?linkid=865446) | [DeleteIteration](https://go.microsoft.com/fwlink/?linkid=865446) |
-| Projects and project details | [GetProject](https://go.microsoft.com/fwlink/?linkid=865446) and [GetProjects](https://go.microsoft.com/fwlink/?linkid=865446) | [DeleteProject](https://go.microsoft.com/fwlink/?linkid=865446) |
-| Image tags | [GetTag](https://go.microsoft.com/fwlink/?linkid=865446) and [GetTags](https://go.microsoft.com/fwlink/?linkid=865446) | [DeleteTag](https://go.microsoft.com/fwlink/?linkid=865446) |
-| Images | [GetTaggedImages](https://go.microsoft.com/fwlink/?linkid=865446) (provides uri for image download) and [GetUntaggedImages](https://go.microsoft.com/fwlink/?linkid=865446) (provides uri for image download) | [DeleteImages](https://go.microsoft.com/fwlink/?linkid=865446) |
-| Exported iterations | [GetExports](https://go.microsoft.com/fwlink/?linkid=865446) | Deleted upon account deletion |
cognitive-services Export Model Python https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Custom-Vision-Service/export-model-python.md
- Title: "Tutorial: Run TensorFlow model in Python - Custom Vision Service"-
-description: Run a TensorFlow model in Python. This article only applies to models exported from image classification projects in the Custom Vision service.
------- Previously updated : 07/05/2022----
-# Tutorial: Run a TensorFlow model in Python
-
-After you've [exported your TensorFlow model](./export-your-model.md) from the Custom Vision Service, this tutorial will show you how to use the model locally to classify images.
-
-> [!NOTE]
-> This tutorial applies only to models exported from "General (compact)" image classification projects. If you exported other models, please visit our [sample code repository](https://github.com/Azure-Samples/customvision-export-samples).
-
-## Prerequisites
-
-To use the tutorial, first do the following:
-
-- Install either Python 2.7+ or Python 3.6+.
-- Install pip.
-
-Next, you'll need to install the following packages:
-
-```bash
-pip install tensorflow
-pip install pillow
-pip install numpy
-pip install opencv-python
-```
-
-## Load your model and tags
-
-The downloaded _.zip_ file contains a _model.pb_ and a _labels.txt_ file. These files represent the trained model and the classification labels. The first step is to load the model into your project. Add the following code to a new Python script.
-
-```Python
-import tensorflow as tf
-import os
-
-graph_def = tf.compat.v1.GraphDef()
-labels = []
-
-# These are set to the default names from exported models, update as needed.
-filename = "model.pb"
-labels_filename = "labels.txt"
-
-# Import the TF graph
-with tf.io.gfile.GFile(filename, 'rb') as f:
- graph_def.ParseFromString(f.read())
- tf.import_graph_def(graph_def, name='')
-
-# Create a list of labels.
-with open(labels_filename, 'rt') as lf:
- for l in lf:
- labels.append(l.strip())
-```
-
-## Prepare an image for prediction
-
-There are a few steps you need to take to prepare the image for prediction. These steps mimic the image manipulation performed during training.
-
-### Open the file and create an image in the BGR color space
-
-```Python
-from PIL import Image
-import numpy as np
-import cv2
-
-# Load from a file
-imageFile = "<path to your image file>"
-image = Image.open(imageFile)
-
-# Update orientation based on EXIF tags, if the file has orientation info.
-image = update_orientation(image)
-
-# Convert to OpenCV format
-image = convert_to_opencv(image)
-```
-
-### Handle images with a dimension >1600
-
-```Python
-# If the image has either w or h greater than 1600 we resize it down respecting
-# aspect ratio such that the largest dimension is 1600
-image = resize_down_to_1600_max_dim(image)
-```
-
-### Crop the largest center square
-
-```Python
-# We next get the largest center square
-h, w = image.shape[:2]
-min_dim = min(w,h)
-max_square_image = crop_center(image, min_dim, min_dim)
-```
-
-### Resize down to 256x256
-
-```Python
-# Resize that square down to 256x256
-augmented_image = resize_to_256_square(max_square_image)
-```
-
-### Crop the center for the specific input size for the model
-
-```Python
-# Get the input size of the model
-with tf.compat.v1.Session() as sess:
- input_tensor_shape = sess.graph.get_tensor_by_name('Placeholder:0').shape.as_list()
-network_input_size = input_tensor_shape[1]
-
-# Crop the center for the specified network_input_Size
-augmented_image = crop_center(augmented_image, network_input_size, network_input_size)
-
-```
-
-### Add helper functions
-
-The steps above use the following helper functions:
-
-```Python
-def convert_to_opencv(image):
- # RGB -> BGR conversion is performed as well.
- image = image.convert('RGB')
- r,g,b = np.array(image).T
- opencv_image = np.array([b,g,r]).transpose()
- return opencv_image
-
-def crop_center(img,cropx,cropy):
- h, w = img.shape[:2]
- startx = w//2-(cropx//2)
- starty = h//2-(cropy//2)
- return img[starty:starty+cropy, startx:startx+cropx]
-
-def resize_down_to_1600_max_dim(image):
- h, w = image.shape[:2]
- if (h < 1600 and w < 1600):
- return image
-
- new_size = (1600 * w // h, 1600) if (h > w) else (1600, 1600 * h // w)
- return cv2.resize(image, new_size, interpolation = cv2.INTER_LINEAR)
-
-def resize_to_256_square(image):
- h, w = image.shape[:2]
- return cv2.resize(image, (256, 256), interpolation = cv2.INTER_LINEAR)
-
-def update_orientation(image):
- exif_orientation_tag = 0x0112
- if hasattr(image, '_getexif'):
- exif = image._getexif()
- if (exif != None and exif_orientation_tag in exif):
- orientation = exif.get(exif_orientation_tag, 1)
- # orientation is 1 based, shift to zero based and flip/transpose based on 0-based values
- orientation -= 1
- if orientation >= 4:
- image = image.transpose(Image.TRANSPOSE)
- if orientation == 2 or orientation == 3 or orientation == 6 or orientation == 7:
- image = image.transpose(Image.FLIP_TOP_BOTTOM)
- if orientation == 1 or orientation == 2 or orientation == 5 or orientation == 6:
- image = image.transpose(Image.FLIP_LEFT_RIGHT)
- return image
-```
-
-## Classify an image
-
-Once the image is prepared as a tensor, we can send it through the model for a prediction.
-
-```Python
-
-# These names are part of the model and cannot be changed.
-output_layer = 'loss:0'
-input_node = 'Placeholder:0'
-
-with tf.compat.v1.Session() as sess:
- try:
- prob_tensor = sess.graph.get_tensor_by_name(output_layer)
- predictions = sess.run(prob_tensor, {input_node: [augmented_image] })
- except KeyError:
- print ("Couldn't find classification output layer: " + output_layer + ".")
- print ("Verify this a model exported from an Object Detection project.")
- exit(-1)
-```
-
-## Display the results
-
-The results of running the image tensor through the model will then need to be mapped back to the labels.
-
-```Python
- # Print the highest probability label
- highest_probability_index = np.argmax(predictions)
- print('Classified as: ' + labels[highest_probability_index])
- print()
-
- # Or you can print out all of the results mapping labels to probabilities.
- label_index = 0
-    for p in predictions[0]:
-        truncated_probability = np.float64(np.round(p,8))
-        print (labels[label_index], truncated_probability)
- label_index += 1
-```
-
-## Next steps
-
-Next, learn how to wrap your model into a mobile application:
-* [Use your exported Tensorflow model in an Android application](https://github.com/Azure-Samples/cognitive-services-android-customvision-sample)
-* [Use your exported CoreML model in an Swift iOS application](https://go.microsoft.com/fwlink/?linkid=857726)
-* [Use your exported CoreML model in an iOS application with Xamarin](https://github.com/xamarin/ios-samples/tree/master/ios11/CoreMLAzureModel)
cognitive-services Export Programmatically https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Custom-Vision-Service/export-programmatically.md
- Title: "Export a model programmatically"-
-description: Use the Custom Vision client library to export a trained model.
------- Previously updated : 06/28/2021---
-# Export a model programmatically
-
-All of the export options available on the [Custom Vision website](https://www.customvision.ai/) can be done programmatically through the client libraries as well. You may want to do this so you can fully automate the process of retraining and updating the model iteration you use on a local device.
-
-This guide shows you how to export your model to an ONNX file with the Python SDK.
-
-## Create a training client
-
-You need to have a [CustomVisionTrainingClient](/python/api/azure-cognitiveservices-vision-customvision/azure.cognitiveservices.vision.customvision.training.customvisiontrainingclient) object to export a model iteration. Create variables for your Custom Vision training resource's Azure endpoint and keys, and use them to create the client object.
-
-```python
-ENDPOINT = "PASTE_YOUR_CUSTOM_VISION_TRAINING_ENDPOINT_HERE"
-training_key = "PASTE_YOUR_CUSTOM_VISION_TRAINING_SUBSCRIPTION_KEY_HERE"
-
-credentials = ApiKeyCredentials(in_headers={"Training-key": training_key})
-trainer = CustomVisionTrainingClient(ENDPOINT, credentials)
-```
-
-> [!IMPORTANT]
-> Remember to remove the keys from your code when you're done, and never post them publicly. For production, consider using a secure way of storing and accessing your credentials. For more information, see the Cognitive Services [security](../cognitive-services-security.md) article.
-
-## Call the export method
-
-Call the **export_iteration** method.
-* Provide the project ID, iteration ID of the model you want to export.
-* The *platform* parameter specifies the platform to export to: allowed values are `CoreML`, `TensorFlow`, `DockerFile`, `ONNX`, `VAIDK`, and `OpenVino`.
-* The *flavor* parameter specifies the format of the exported model: allowed values are `Linux`, `Windows`, `ONNX10`, `ONNX12`, `ARM`, `TensorFlowNormal`, and `TensorFlowLite`.
-* The *raw* parameter gives you the option to retrieve the raw JSON response along with the object model response.
-
-```python
-project_id = "PASTE_YOUR_PROJECT_ID"
-iteration_id = "PASTE_YOUR_ITERATION_ID"
-platform = "ONNX"
-flavor = "ONNX10"
-export = trainer.export_iteration(project_id, iteration_id, platform, flavor, raw=False)
-```
-
-For more information, see the **[export_iteration](/python/api/azure-cognitiveservices-vision-customvision/azure.cognitiveservices.vision.customvision.training.operations.customvisiontrainingclientoperationsmixin#export-iteration-project-id--iteration-id--platform--flavor-none--custom-headers-none--raw-false-operation-config-)** method.
-
-> [!IMPORTANT]
-> If you've already exported a particular iteration, you cannot call the **export_iteration** method again. Instead, skip ahead to the **get_exports** method call to get a link to your existing exported model.
-
-## Download the exported model
-
-Next, you'll call the **get_exports** method to check the status of the export operation. The operation runs asynchronously, so you should poll this method until the operation completes. When it completes, you can retrieve the URI where you can download the model iteration to your device.
-
-```python
-while (export.status == "Exporting"):
- print ("Waiting 10 seconds...")
- time.sleep(10)
- exports = trainer.get_exports(project_id, iteration_id)
- # Locate the export for this iteration and check its status
- for e in exports:
- if e.platform == export.platform and e.flavor == export.flavor:
- export = e
- break
- print("Export status is: ", export.status)
-```
-
-For more information, see the **[get_exports](/python/api/azure-cognitiveservices-vision-customvision/azure.cognitiveservices.vision.customvision.training.operations.customvisiontrainingclientoperationsmixin#get-exports-project-id--iteration-id--custom-headers-none--raw-false-operation-config-)** method.
-
-Then, you can programmatically download the exported model to a location on your device.
-
-```python
-if export.status == "Done":
- # Success, now we can download it
- export_file = requests.get(export.download_uri)
- with open("export.zip", "wb") as file:
- file.write(export_file.content)
-```
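-
-The download is delivered as a zip archive (the snippet above saves it as *export.zip*). If it's helpful, here's a short follow-up sketch that unpacks the archive and lists its contents; the exact file names inside vary by export platform and flavor.
-
-```python
-import zipfile
-
-# Unpack the exported model next to the script and show what the archive contains.
-with zipfile.ZipFile("export.zip") as archive:
-    archive.extractall("exported_model")
-    print(archive.namelist())
-```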
-
-## Next steps
-
-Integrate your exported model into an application by exploring one of the following articles or samples:
-
-* [Use your TensorFlow model with Python](export-model-python.md)
-* [Use your ONNX model with Windows Machine Learning](custom-vision-onnx-windows-ml.md)
-* See the sample for [CoreML model in an iOS application](https://go.microsoft.com/fwlink/?linkid=857726) for real-time image classification with Swift.
-* See the sample for [Tensorflow model in an Android application](https://github.com/Azure-Samples/cognitive-services-android-customvision-sample) for real-time image classification on Android.
-* See the sample for [CoreML model with Xamarin](https://github.com/xamarin/ios-samples/tree/master/ios11/CoreMLAzureModel) for real-time image classification in a Xamarin iOS app.
cognitive-services Export Your Model https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Custom-Vision-Service/export-your-model.md
- Title: Export your model to mobile - Custom Vision Service-
-description: This article shows you how to export your model for use in mobile applications or to run it locally for real-time classification.
------- Previously updated : 07/05/2022---
-# Export your model for use with mobile devices
-
-Custom Vision Service lets you export your classifiers to be run offline. You can embed your exported classifier into an application and run it locally on a device for real-time classification.
-
-## Export options
-
-Custom Vision Service supports the following exports:
-
-* __TensorFlow__ for __Android__.
-* **TensorFlow.js** for JavaScript frameworks like React, Angular, and Vue. This will run on both **Android** and **iOS** devices.
-* __CoreML__ for __iOS11__.
-* __ONNX__ for __Windows ML__, **Android**, and **iOS**.
-* __[Vision AI Developer Kit](https://azure.github.io/Vision-AI-DevKit-Pages/)__.
-* A __Docker container__ for Windows, Linux, or ARM architecture. The container includes a TensorFlow model and service code to use the Custom Vision API.
-
-> [!IMPORTANT]
-> Custom Vision Service only exports __compact__ domains. The models generated by compact domains are optimized for the constraints of real-time classification on mobile devices. Classifiers built with a compact domain may be slightly less accurate than a standard domain with the same amount of training data.
->
-> For information on improving your classifiers, see the [Improving your classifier](getting-started-improving-your-classifier.md) document.
-
-## Convert to a compact domain
-
-> [!NOTE]
-> The steps in this section only apply if you have an existing model that is not set to compact domain.
-
-To convert the domain of an existing model, take the following steps (a programmatic alternative using the training SDK is sketched after the list):
-
-1. On the [Custom vision website](https://customvision.ai), select the __Home__ icon to view a list of your projects.
-
- ![Image of the home icon and projects list](./media/export-your-model/projects-list.png)
-
-1. Select a project, and then select the __Gear__ icon in the upper right of the page.
-
- ![Image of the gear icon](./media/export-your-model/gear-icon.png)
-
-1. In the __Domains__ section, select one of the __compact__ domains. Select __Save Changes__ to save the changes.
-
- > [!NOTE]
- > For Vision AI Dev Kit, the project must be created with the __General (Compact)__ domain, and you must specify the **Vision AI Dev Kit** option under the **Export Capabilities** section.
-
- ![Image of domains selection](./media/export-your-model/domains.png)
-
-1. From the top of the page, select __Train__ to retrain using the new domain.
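-
-If you manage projects programmatically, you can make the same change with the Python training SDK. The following is a minimal sketch rather than an official sample: it assumes a valid training endpoint, key, and project ID, and it filters the available domains for an exportable one whose display name contains "General". Adjust the filter for the compact domain you actually need.
-
-```python
-from msrest.authentication import ApiKeyCredentials
-from azure.cognitiveservices.vision.customvision.training import CustomVisionTrainingClient
-
-credentials = ApiKeyCredentials(in_headers={"Training-key": "PASTE_YOUR_TRAINING_KEY"})
-trainer = CustomVisionTrainingClient("PASTE_YOUR_TRAINING_ENDPOINT", credentials)
-project_id = "PASTE_YOUR_PROJECT_ID"
-
-# Compact domains are the exportable ones; pick the one that fits your scenario.
-compact = next(d for d in trainer.get_domains() if d.exportable and "General" in d.name)
-
-# Point the project at the compact domain and save the change.
-project = trainer.get_project(project_id)
-project.settings.domain_id = compact.id
-trainer.update_project(project_id, project)
-```
-
-As in the portal, retrain the project after changing the domain.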
-
-## Export your model
-
-To export the model after retraining, use the following steps:
-
-1. Go to the **Performance** tab and select __Export__.
-
- ![Image of the export icon](./media/export-your-model/export.png)
-
- > [!TIP]
- > If the __Export__ entry is not available, then the selected iteration does not use a compact domain. Use the __Iterations__ section of this page to select an iteration that uses a compact domain, and then select __Export__.
-
-1. Select your desired export format, and then select __Export__ to download the model.
-
-## Next steps
-
-Integrate your exported model into an application by exploring one of the following articles or samples:
-
-* [Use your TensorFlow model with Python](export-model-python.md)
-* [Use your ONNX model with Windows Machine Learning](custom-vision-onnx-windows-ml.md)
-* See the sample for [CoreML model in an iOS application](https://go.microsoft.com/fwlink/?linkid=857726) for real-time image classification with Swift.
-* See the sample for [TensorFlow model in an Android application](https://github.com/Azure-Samples/cognitive-services-android-customvision-sample) for real-time image classification on Android.
-* See the sample for [CoreML model with Xamarin](https://github.com/xamarin/ios-samples/tree/master/ios11/CoreMLAzureModel) for real-time image classification in a Xamarin iOS app.
cognitive-services Get Started Build Detector https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Custom-Vision-Service/get-started-build-detector.md
- Title: "Quickstart: Build an object detector with the Custom Vision website"-
-description: In this quickstart, you'll learn how to use the Custom Vision website to create, train, and test an object detector model.
------ Previously updated : 12/27/2022--
-keywords: image recognition, image recognition app, custom vision
--
-# Quickstart: Build an object detector with the Custom Vision website
-
-In this quickstart, you'll learn how to use the Custom Vision website to create an object detector model. Once you build a model, you can test it with new images and integrate it into your own image recognition app.
-
-If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/cognitive-services/) before you begin.
-
-## Prerequisites
-
-- A set of images with which to train your detector model. You can use the set of [sample images](https://github.com/Azure-Samples/cognitive-services-python-sdk-samples/tree/master/samples/vision/images) on GitHub. Or, you can choose your own images using the tips below.
-- A [supported web browser](overview.md#supported-browsers-for-custom-vision-web-portal)
-
-## Create Custom Vision resources
--
-## Create a new project
-
-In your web browser, navigate to the [Custom Vision web page](https://customvision.ai) and select __Sign in__. Sign in with the same account you used to sign into the Azure portal.
-
-![Image of the sign-in page](./media/browser-home.png)
--
-1. To create your first project, select **New Project**. The **Create new project** dialog box will appear.
-
- ![The new project dialog box has fields for name, description, and domains.](./media/get-started-build-detector/new-project.png)
-
-1. Enter a name and a description for the project. Then select your Custom Vision Training Resource. If your signed-in account is associated with an Azure account, the Resource dropdown will display all of your compatible Azure resources.
-
- > [!NOTE]
- > If no resource is available, please confirm that you have logged into [customvision.ai](https://customvision.ai) with the same account as you used to log into the [Azure portal](https://portal.azure.com/). Also, please confirm you have selected the same "Directory" in the Custom Vision website as the directory in the Azure portal where your Custom Vision resources are located. In both sites, you may select your directory from the drop down account menu at the top right corner of the screen.
-
-1. Under __Project Types__, select __Object Detection__.
-
-1. Next, select one of the available domains. Each domain optimizes the detector for specific types of images, as described in the following table. You can change the domain later if you want to.
-
- |Domain|Purpose|
- |||
- |__General__| Optimized for a broad range of object detection tasks. If none of the other domains are appropriate, or if you're unsure about which domain to choose, select the __General__ domain. |
- |__Logo__|Optimized for finding brand logos in images.|
- |__Products on shelves__|Optimized for detecting and classifying products on shelves.|
- |__Compact domains__| Optimized for the constraints of real-time object detection on mobile devices. The models generated by compact domains can be exported to run locally.|
-
-1. Finally, select __Create project__.
-
-## Choose training images
--
-## Upload and tag images
-
-In this section, you'll upload and manually tag images to help train the detector.
-
-1. To add images, select __Add images__ and then select __Browse local files__. Select __Open__ to upload the images.
-
- ![The add images control is shown in the upper left, and as a button at bottom center.](./media/get-started-build-detector/add-images.png)
-
-1. You'll see your uploaded images in the **Untagged** section of the UI. The next step is to manually tag the objects that you want the detector to learn to recognize. Select the first image to open the tagging dialog window.
-
- ![Images uploaded, in Untagged section](./media/get-started-build-detector/images-untagged.png)
-
-1. Click and drag a rectangle around the object in your image. Then, enter a new tag name with the **+** button, or select an existing tag from the drop-down list. It's important to tag every instance of the object(s) you want to detect, because the detector uses the untagged background area as a negative example in training. When you're done tagging, select the arrow on the right to save your tags and move on to the next image.
-
- ![Tagging an object with a rectangular selection](./media/get-started-build-detector/image-tagging.png)
-
-To upload another set of images, return to the top of this section and repeat the steps.
-
-## Train the detector
-
-To train the detector model, select the **Train** button. The detector uses all of the current images and their tags to create a model that identifies each tagged object. This process can take several minutes.
-
-![The train button in the top right of the web page's header toolbar](./media/getting-started-build-a-classifier/train01.png)
-
-The training process should only take a few minutes. During this time, information about the training process is displayed in the **Performance** tab.
-
-![The browser window with a training dialog in the main section](./media/get-started-build-detector/training.png)
-
-## Evaluate the detector
-
-After training has completed, the model's performance is calculated and displayed. The Custom Vision service uses the images that you submitted for training to calculate precision, recall, and mean average precision. Precision and recall are two different measurements of the effectiveness of a detector:
-
-- **Precision** indicates the fraction of identified classifications that were correct. For example, if the model identified 100 images as dogs, and 99 of them were actually of dogs, then the precision would be 99%.
-- **Recall** indicates the fraction of actual classifications that were correctly identified. For example, if there were actually 100 images of apples, and the model identified 80 as apples, the recall would be 80%.
-- **Mean average precision** is the average value of the average precision (AP). AP is the area under the precision/recall curve (precision plotted against recall for each prediction made).
-
-![The training results show the overall precision and recall, and mean average precision.](./media/get-started-build-detector/trained-performance.png)
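-
-If it helps to see the arithmetic behind these numbers, here's a small sketch that computes precision and recall from hypothetical outcome counts. The counts are invented for illustration; they don't come from the service.
-
-```python
-# Hypothetical outcome counts for a single "dog" tag.
-true_positives = 99     # predicted "dog" and the image really contained a dog
-false_positives = 1     # predicted "dog" but the image didn't contain one
-false_negatives = 20    # dogs that the detector missed
-
-precision = true_positives / (true_positives + false_positives)
-recall = true_positives / (true_positives + false_negatives)
-print(f"Precision: {precision:.0%}, Recall: {recall:.0%}")  # Precision: 99%, Recall: 83%
-```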
-
-### Probability threshold
--
-### Overlap threshold
-
-The **Overlap Threshold** slider sets how closely a predicted bounding box must match the user-drawn bounding box for a prediction to count as correct during training. It specifies the minimum allowed overlap between the predicted object's bounding box and the actual user-entered bounding box; if the boxes don't overlap to that degree, the prediction isn't considered correct.
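-
-The service computes this overlap for you; it's commonly measured as intersection over union (IoU). If you want a feel for what a given threshold means, here's a rough sketch of that calculation, assuming boxes expressed as normalized (left, top, width, height) values:
-
-```python
-def overlap(box_a, box_b):
-    """Intersection over union of two (left, top, width, height) boxes."""
-    ax, ay, aw, ah = box_a
-    bx, by, bw, bh = box_b
-    inter_w = max(0.0, min(ax + aw, bx + bw) - max(ax, bx))
-    inter_h = max(0.0, min(ay + ah, by + bh) - max(ay, by))
-    intersection = inter_w * inter_h
-    union = aw * ah + bw * bh - intersection
-    return intersection / union if union else 0.0
-
-# Two boxes that are close but not identical overlap at roughly 0.71.
-print(overlap((0.10, 0.10, 0.40, 0.40), (0.15, 0.12, 0.40, 0.40)))
-```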
-
-## Manage training iterations
-
-Each time you train your detector, you create a new _iteration_ with its own updated performance metrics. You can view all of your iterations in the left pane of the **Performance** tab. In the left pane you'll also find the **Delete** button, which you can use to delete an iteration if it's obsolete. When you delete an iteration, you delete any images that are uniquely associated with it.
-
-See [Use your model with the prediction API](./use-prediction-api.md) to learn how to access your trained models programmatically.
-
-## Next steps
-
-In this quickstart, you learned how to create and train an object detector model using the Custom Vision website. Next, get more information on the iterative process of improving your model.
-
-> [!div class="nextstepaction"]
-> [Test and retrain a model](test-your-model.md)
-
-* [What is Custom Vision?](./overview.md)
cognitive-services Getting Started Build A Classifier https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Custom-Vision-Service/getting-started-build-a-classifier.md
- Title: "Quickstart: Build an image classification model with the Custom Vision portal"-
-description: In this quickstart, you'll learn how to use the Custom Vision web portal to create, train, and test an image classification model.
------ Previously updated : 11/03/2022--
-keywords: image recognition, image recognition app, custom vision
--
-# Quickstart: Build an image classification model with the Custom Vision portal
-
-In this quickstart, you'll learn how to use the Custom Vision web portal to create an image classification model. Once you build a model, you can test it with new images and eventually integrate it into your own image recognition app.
-
-If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/cognitive-services/) before you begin.
-
-## Prerequisites
-
-- A set of images with which to train your classification model. You can use the set of [sample images](https://github.com/Azure-Samples/cognitive-services-sample-data-files/tree/master/CustomVision/ImageClassification/Images) on GitHub. Or, you can choose your own images using the tips below.
-- A [supported web browser](overview.md#supported-browsers-for-custom-vision-web-portal)
-
-
-## Create Custom Vision resources
--
-## Create a new project
-
-In your web browser, navigate to the [Custom Vision web page](https://customvision.ai) and select __Sign in__. Sign in with the same account you used to sign into the Azure portal.
-
-![Image of the sign-in page](./media/browser-home.png)
--
-1. To create your first project, select **New Project**. The **Create new project** dialog box will appear.
-
- ![The new project dialog box has fields for name, description, and domains.](./media/getting-started-build-a-classifier/new-project.png)
-
-1. Enter a name and a description for the project. Then select your Custom Vision Training Resource. If your signed-in account is associated with an Azure account, the Resource dropdown will display all of your compatible Azure resources.
-
- > [!NOTE]
- > If no resource is available, please confirm that you have logged into [customvision.ai](https://customvision.ai) with the same account as you used to log into the [Azure portal](https://portal.azure.com/). Also, please confirm you have selected the same "Directory" in the Custom Vision website as the directory in the Azure portal where your Custom Vision resources are located. In both sites, you may select your directory from the drop down account menu at the top right corner of the screen.
-
-1. Select __Classification__ under __Project Types__. Then, under __Classification Types__, choose either **Multilabel** or **Multiclass**, depending on your use case. Multilabel classification applies any number of your tags to an image (zero or more), while multiclass classification sorts images into single categories (every image you submit will be sorted into the most likely tag). You'll be able to change the classification type later if you want to.
-
-1. Next, select one of the available domains. Each domain optimizes the model for specific types of images, as described in the following table. You can change the domain later if you wish.
-
- |Domain|Purpose|
- |||
- |__Generic__| Optimized for a broad range of image classification tasks. If none of the other domains are appropriate, or you're unsure of which domain to choose, select the Generic domain. |
- |__Food__|Optimized for photographs of dishes as you would see them on a restaurant menu. If you want to classify photographs of individual fruits or vegetables, use the Food domain.|
- |__Landmarks__|Optimized for recognizable landmarks, both natural and artificial. This domain works best when the landmark is clearly visible in the photograph. This domain works even if the landmark is slightly obstructed by people in front of it.|
- |__Retail__|Optimized for images that are found in a shopping catalog or shopping website. If you want high precision classifying between dresses, pants, and shirts, use this domain.|
- |__Compact domains__| Optimized for the constraints of real-time classification on mobile devices. The models generated by compact domains can be exported to run locally.|
-
-1. Finally, select __Create project__.
-
-## Choose training images
--
-## Upload and tag images
-
-In this section, you'll upload and manually tag images to help train the classifier.
-
-1. To add images, select __Add images__ and then select __Browse local files__. Select __Open__ to move to tagging. Your tag selection will be applied to the entire group of images you've selected to upload, so it's easier to upload images in separate groups according to their applied tags. You can also change the tags for individual images after they've been uploaded.
-
- ![The add images control is shown in the upper left, and as a button at bottom center.](./media/getting-started-build-a-classifier/add-images01.png)
--
-1. To create a tag, enter text in the __My Tags__ field and press Enter. If the tag already exists, it will appear in a dropdown menu. In a multilabel project, you can add more than one tag to your images, but in a multiclass project you can add only one. To finish uploading the images, use the __Upload [number] files__ button.
-
- ![Image of the tag and upload page](./media/getting-started-build-a-classifier/add-images03.png)
-
-1. Select __Done__ once the images have been uploaded.
-
- ![The progress bar shows all tasks completed.](./media/getting-started-build-a-classifier/add-images04.png)
-
-To upload another set of images, return to the top of this section and repeat the steps.
-
-## Train the classifier
-
-To train the classifier, select the **Train** button. The classifier uses all of the current images to create a model that identifies the visual qualities of each tag. This process can take several minutes.
-
-![The train button in the top right of the web page's header toolbar](./media/getting-started-build-a-classifier/train01.png)
-
-The training process should only take a few minutes. During this time, information about the training process is displayed in the **Performance** tab.
-
-![The browser window with a training dialog in the main section](./media/getting-started-build-a-classifier/train02.png)
-
-## Evaluate the classifier
-
-After training has completed, the model's performance is estimated and displayed. The Custom Vision Service uses the images that you submitted for training to calculate precision and recall. Precision and recall are two different measurements of the effectiveness of a classifier:
-
-- **Precision** indicates the fraction of identified classifications that were correct. For example, if the model identified 100 images as dogs, and 99 of them were actually of dogs, then the precision would be 99%.
-- **Recall** indicates the fraction of actual classifications that were correctly identified. For example, if there were actually 100 images of apples, and the model identified 80 as apples, the recall would be 80%.
-
-![The training results show the overall precision and recall, and the precision and recall for each tag in the classifier.](./media/getting-started-build-a-classifier/train03.png)
-
-### Probability threshold
--
-## Manage training iterations
-
-Each time you train your classifier, you create a new _iteration_ with updated performance metrics. You can view all of your iterations in the left pane of the **Performance** tab. You'll also find the **Delete** button, which you can use to delete an iteration if it's obsolete. When you delete an iteration, you delete any images that are uniquely associated with it.
-
-See [Use your model with the prediction API](./use-prediction-api.md) to learn how to access your trained models programmatically.
-
-## Next steps
-
-In this quickstart, you learned how to create and train an image classification model using the Custom Vision web portal. Next, get more information on the iterative process of improving your model.
-
-> [!div class="nextstepaction"]
-> [Test and retrain a model](test-your-model.md)
-
-* [What is Custom Vision?](./overview.md)
cognitive-services Getting Started Improving Your Classifier https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Custom-Vision-Service/getting-started-improving-your-classifier.md
- Title: Improving your model - Custom Vision Service-
-description: In this article you'll learn how the amount, quality and variety of data can improve the quality of your model in the Custom Vision service.
------- Previously updated : 07/05/2022----
-# How to improve your Custom Vision model
-
-In this guide, you'll learn how to improve the quality of your Custom Vision Service model. The quality of your [classifier](./getting-started-build-a-classifier.md) or [object detector](./get-started-build-detector.md) depends on the amount, quality, and variety of the labeled data you provide it and how balanced the overall dataset is. A good model has a balanced training dataset that is representative of what will be submitted to it. The process of building such a model is iterative; it's common to take a few rounds of training to reach expected results.
-
-The following is a general pattern to help you train a more accurate model:
-
-1. First-round training
-1. Add more images and balance data; retrain
-1. Add images with varying background, lighting, object size, camera angle, and style; retrain
-1. Use new image(s) to test prediction
-1. Modify existing training data according to prediction results
-
-## Prevent overfitting
-
-Sometimes a model will learn to make predictions based on arbitrary characteristics that your images have in common. For example, if you're creating a classifier for apples vs. citrus, and you've used images of apples in hands and of citrus on white plates, the classifier may give undue importance to hands vs. plates, rather than apples vs. citrus.
-
-To correct this problem, provide images with different angles, backgrounds, object size, groups, and other variations. The following sections expand upon these concepts.
-
-## Data quantity
-
-The number of training images is the most important factor for your dataset. We recommend using at least 50 images per label as a starting point. With fewer images, there's a higher risk of overfitting, and while your performance numbers may suggest good quality, your model may struggle with real-world data.
-
-## Data balance
-
-It's also important to consider the relative quantities of your training data. For instance, using 500 images for one label and 50 images for another label makes for an imbalanced training dataset. This will cause the model to be more accurate in predicting one label than another. You're likely to see better results if you maintain at least a 1:2 ratio between the label with the fewest images and the label with the most images. For example, if the label with the most images has 500 images, the label with the least images should have at least 250 images for training.
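-
-If you use the Python training SDK, you can check your balance programmatically: the tags returned by the training client include a per-tag image count. The following is a minimal sketch, assuming a valid training endpoint, key, and project ID.
-
-```python
-from msrest.authentication import ApiKeyCredentials
-from azure.cognitiveservices.vision.customvision.training import CustomVisionTrainingClient
-
-credentials = ApiKeyCredentials(in_headers={"Training-key": "PASTE_YOUR_TRAINING_KEY"})
-trainer = CustomVisionTrainingClient("PASTE_YOUR_TRAINING_ENDPOINT", credentials)
-project_id = "PASTE_YOUR_PROJECT_ID"
-
-# Count the training images associated with each tag.
-counts = {tag.name: tag.image_count for tag in trainer.get_tags(project_id)}
-print(counts)
-
-smallest, largest = min(counts.values()), max(counts.values())
-if smallest * 2 < largest:
-    print("Consider adding images: aim for at least a 1:2 ratio between the smallest and largest tags.")
-```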
-
-## Data variety
-
-Be sure to use images that are representative of what will be submitted to the classifier during normal use. Otherwise, your model could learn to make predictions based on arbitrary characteristics that your images have in common. For example, if you're creating a classifier for apples vs. citrus, and you've used images of apples in hands and of citrus on white plates, the classifier may give undue importance to hands vs. plates, rather than apples vs. citrus.
-
-![Photo of fruits with unexpected matching.](./media/getting-started-improving-your-classifier/unexpected.png)
-
-To correct this problem, include a variety of images to ensure that your model can generalize well. Below are some ways you can make your training set more diverse:
-
-* __Background:__ Provide images of your object in front of different backgrounds. Photos in natural contexts are better than photos in front of neutral backgrounds as they provide more information for the classifier.
-
- ![Photo of background samples.](./media/getting-started-improving-your-classifier/background.png)
-
-* __Lighting:__ Provide images with varied lighting (that is, taken with flash, high exposure, and so on), especially if the images used for prediction have different lighting. It's also helpful to use images with varying saturation, hue, and brightness.
-
- ![Photo of lighting samples.](./media/getting-started-improving-your-classifier/lighting.png)
-
-* __Object Size:__ Provide images in which the objects vary in size and number (for example, a photo of bunches of bananas and a closeup of a single banana). Different sizing helps the classifier generalize better.
-
- ![Photo of size samples.](./media/getting-started-improving-your-classifier/size.png)
-
-* __Camera Angle:__ Provide images taken with different camera angles. Alternatively, if all of your photos must be taken with fixed cameras (such as surveillance cameras), be sure to assign a different label to every regularly occurring object to avoid overfitting&mdash;interpreting unrelated objects (such as lampposts) as the key feature.
-
- ![Photo of angle samples.](./media/getting-started-improving-your-classifier/angle.png)
-
-* __Style:__ Provide images of different styles of the same class (for example, different varieties of the same fruit). However, if you have objects of drastically different styles (such as Mickey Mouse vs. a real-life mouse), we recommend you label them as separate classes to better represent their distinct features.
-
- ![Photo of style samples.](./media/getting-started-improving-your-classifier/style.png)
-
-## Negative images (classifiers only)
-
-If you're using an image classifier, you may need to add _negative samples_ to help make your classifier more accurate. Negative samples are images that don't match any of the other tags. When you upload these images, apply the special **Negative** label to them.
-
-Object detectors handle negative samples automatically, because any image areas outside of the drawn bounding boxes are considered negative.
-
-> [!NOTE]
-> The Custom Vision Service supports some automatic negative image handling. For example, if you are building a grape vs. banana classifier and submit an image of a shoe for prediction, the classifier should score that image as close to 0% for both grape and banana.
->
-> On the other hand, in cases where the negative images are just a variation of the images used in training, it is likely that the model will classify the negative images as a labeled class due to the great similarities. For example, if you have an orange vs. grapefruit classifier, and you feed in an image of a clementine, it may score the clementine as an orange because many features of the clementine resemble those of oranges. If your negative images are of this nature, we recommend you create one or more additional tags (such as **Other**) and label the negative images with this tag during training to allow the model to better differentiate between these classes.
-
-## Occlusion and truncation (object detectors only)
-
-If you want your object detector to detect truncated objects (objects that are partially cut out of the image) or occluded objects (objects that are partially blocked by other objects in the image), you'll need to include training images that cover those cases.
-
-> [!NOTE]
-> The issue of objects being occluded by other objects is not to be confused with **Overlap Threshold**, a parameter for rating model performance. The **Overlap Threshold** slider on the [Custom Vision website](https://customvision.ai) deals with how much a predicted bounding box must overlap with the true bounding box to be considered correct.
-
-## Use prediction images for further training
-
-When you use or test the model by submitting images to the prediction endpoint, the Custom Vision service stores those images. You can then use them to improve the model.
-
-1. To view images submitted to the model, open the [Custom Vision web page](https://customvision.ai), go to your project, and select the __Predictions__ tab. The default view shows images from the current iteration. You can use the __Iteration__ drop down menu to view images submitted during previous iterations.
-
- ![screenshot of the predictions tab, with images in view](./media/getting-started-improving-your-classifier/predictions.png)
-
-2. Hover over an image to see the tags that were predicted by the model. Images are sorted so that the ones that can bring the most improvements to the model are listed at the top. To use a different sorting method, make a selection in the __Sort__ section.
-
- To add an image to your existing training data, select the image, set the correct tag(s), and select __Save and close__. The image will be removed from __Predictions__ and added to the set of training images. You can view it by selecting the __Training Images__ tab.
-
- ![Screenshot of the tagging page.](./media/getting-started-improving-your-classifier/tag.png)
-
-3. Then use the __Train__ button to retrain the model.
-
-## Visually inspect predictions
-
-To inspect image predictions, go to the __Training Images__ tab, select your previous training iteration in the **Iteration** drop-down menu, and check one or more tags under the **Tags** section. The view should now display a red box around each of the images for which the model failed to correctly predict the given tag.
-
-![Image of the iteration history](./media/getting-started-improving-your-classifier/iteration.png)
-
-Sometimes a visual inspection can identify patterns that you can then correct by adding more training data or modifying existing training data. For example, a classifier for apples vs. limes may incorrectly label all green apples as limes. You can then correct this problem by adding training data that contains tagged images of green apples.
-
-## Next steps
-
-In this guide, you learned several techniques to make your custom image classification model or object detector model more accurate. Next, learn how to test images programmatically by submitting them to the Prediction API.
-
-> [!div class="nextstepaction"]
-> [Use the prediction API](use-prediction-api.md)
cognitive-services Iot Visual Alerts Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Custom-Vision-Service/iot-visual-alerts-tutorial.md
- Title: "Tutorial: IoT Visual Alerts sample"-
-description: In this tutorial, you use Custom Vision with an IoT device to recognize and report visual states from a camera's video feed.
------- Previously updated : 11/23/2020---
-# Tutorial: Use Custom Vision with an IoT device to report visual states
-
-This sample app illustrates how to use Custom Vision to train a device with a camera to detect visual states. You can run this detection scenario on an IoT device by using an exported ONNX model.
-
-A visual state describes the content of an image: an empty room or a room with people, an empty driveway or a driveway with a truck, and so on. In the image below, you can see the app detect when a banana or an apple is placed in front of the camera.
-
-![Animation of a UI labeling fruit in front of the camera](./media/iot-visual-alerts-tutorial/scoring.gif)
-
-This tutorial will show you how to:
-> [!div class="checklist"]
-> * Configure the sample app to use your own Custom Vision and IoT Hub resources.
-> * Use the app to train your Custom Vision project.
-> * Use the app to score new images in real time and send the results to Azure.
-
-If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/cognitive-services) before you begin.
-
-## Prerequisites
-
-* [!INCLUDE [create-resources](includes/create-resources.md)]
- > [!IMPORTANT]
- > This project needs to be a **Compact** image classification project, because we will be exporting the model to ONNX later.
-* You'll also need to [create an IoT Hub resource](https://portal.azure.com/#create/Microsoft.IotHub) on Azure.
-* [Visual Studio 2015 or later](https://www.visualstudio.com/downloads/)
-* Optionally, an IoT device running Windows 10 IoT Core version 17763 or higher. You can also run the app directly from your PC.
- * For Raspberry Pi 2 and 3, you can set up Windows 10 directly from the IoT Dashboard app. For other devices such as DragonBoard, you'll need to flash it using the [eMMC method](/windows/iot-core/tutorials/quickstarter/devicesetup#flashing-with-emmc-for-dragonboard-410c-other-qualcomm-devices). If you need help with setting up a new device, see [Setting up your device](/windows/iot-core/tutorials/quickstarter/devicesetup) in the Windows IoT documentation.
-
-## About the Visual Alerts app
-
-The IoT Visual Alerts app runs in a continuous loop, switching between four different states as appropriate:
-
-* **No Model**: A no-op state. The app will continually sleep for one second and check the camera.
-* **Capturing Training Images**: In this state, the app captures a picture and uploads it as a training image to the target Custom Vision project. The app then sleeps for 500 ms and repeats the operation until the set target number of images are captured. Then it triggers the training of the Custom Vision model.
-* **Waiting For Trained Model**: In this state, the app calls the Custom Vision API every second to check whether the target project contains a trained iteration. When it finds one, it downloads the corresponding ONNX model to a local file and switches to the **Scoring** state.
-* **Scoring**: In this state, the app uses Windows ML to evaluate a single frame from the camera against the local ONNX model. The resulting image classification is displayed on the screen and sent as a message to the IoT Hub. The app then sleeps for one second before scoring a new image.
-
-## Examine the code structure
-
-The following files handle the main functionality of the app.
-
-| File | Description |
-|-|-|
-| [MainPage.xaml](https://github.com/Azure-Samples/Cognitive-Services-Vision-Solution-Templates/blob/master/IoTVisualAlerts/MainPage.xaml) | This file defines the XAML user interface. It hosts the web camera control and contains the labels used for status updates.|
-| [MainPage.xaml.cs](https://github.com/Azure-Samples/Cognitive-Services-Vision-Solution-Templates/blob/master/IoTVisualAlerts/MainPage.xaml.cs) | This code controls the behavior of the XAML UI. It contains the state machine processing code.|
-| [CustomVision\CustomVisionServiceWrapper.cs](https://github.com/Azure-Samples/Cognitive-Services-Vision-Solution-Templates/blob/master/IoTVisualAlerts/CustomVision/CustomVisionServiceWrapper.cs) | This class is a wrapper that handles integration with the Custom Vision Service.|
-| [CustomVision\CustomVisionONNXModel.cs](https://github.com/Azure-Samples/Cognitive-Services-Vision-Solution-Templates/blob/master/IoTVisualAlerts/CustomVision/CustomVisionONNXModel.cs) | This class is a wrapper that handles integration with Windows ML for loading the ONNX model and scoring images against it.|
-| [IoTHub\IotHubWrapper.cs](https://github.com/Azure-Samples/Cognitive-Services-Vision-Solution-Templates/blob/master/IoTVisualAlerts/IoTHub/IoTHubWrapper.cs) | This class is a wrapper that handles integration with IoT Hub for uploading scoring results to Azure.|
-
-## Set up the Visual Alerts app
-
-Follow these steps to get the IoT Visual Alerts app running on your PC or IoT device.
-
-1. Clone or download the [IoTVisualAlerts sample](https://github.com/Azure-Samples/Cognitive-Services-Vision-Solution-Templates/tree/master/IoTVisualAlerts) on GitHub.
-1. Open the solution _IoTVisualAlerts.sln_ in Visual Studio.
-1. Integrate your Custom Vision project:
- 1. In the _CustomVision\CustomVisionServiceWrapper.cs_ script, update the `ApiKey` variable with your training key.
- 1. Then update the `Endpoint` variable with the endpoint URL associated with your key.
- 1. Update the `targetCVSProjectGuid` variable with the corresponding ID of the Custom Vision project that you want to use.
-1. Set up the IoT Hub resource:
- 1. In the _IoTHub\IotHubWrapper.cs_ script, update the `s_connectionString` variable with the proper connection string for your device.
- 1. In the Azure portal, load your IoT Hub instance, select **IoT devices** under **Explorers**, select your target device (or create one if needed), and find the connection string under **Primary Connection String**. The string will contain your IoT Hub name, device ID, and shared access key; it has the following format: `{your iot hub name}.azure-devices.net;DeviceId={your device id};SharedAccessKey={your access key}`.
-
-## Run the app
-
-If you're running the app on your PC, select **Local Machine** for the target device in Visual Studio, and select **x64** or **x86** for the target platform. Then press F5 to run the program. The app should start and display the live feed from the camera and a status message.
-
-If you're deploying to an IoT device with an ARM processor, you'll need to select **ARM** as the target platform and **Remote Machine** as the target device. Provide the IP address of your device when prompted (it must be on the same network as your PC). You can get the IP Address from the Windows IoT default app once you boot the device and connect it to the network. Press F5 to run the program.
-
-When you run the app for the first time, it won't have any knowledge of visual states. It will display a status message that no model is available.
-
-## Capture training images
-
-To set up a model, you need to put the app in the **Capturing Training Images** state. Take one of the following steps:
-* If you're running the app on PC, use the button on the top-right corner of the UI.
-* If you're running the app on an IoT device, call the `EnterLearningMode` method on the device through the IoT Hub. You can call it through the device entry in the IoT Hub menu on the Azure portal, or with a tool such as [IoT Hub Device Explorer](https://github.com/Azure/azure-iot-sdk-csharp).
-
-When the app enters the **Capturing Training Images** state, it will capture about two images every second until it has reached the target number of images. By default, the target is 30 images, but you can set this parameter by passing the desired number as an argument to the `EnterLearningMode` IoT Hub method.
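-
-If you'd rather script this call than use the portal or Device Explorer, something along the following lines should work with the `azure-iot-hub` Python package. This is a sketch, not part of the sample: the method name comes from the app, but the payload shape (a plain number of images) is an assumption based on the description above, and the connection string and device ID are placeholders.
-
-```python
-from azure.iot.hub import IoTHubRegistryManager
-from azure.iot.hub.models import CloudToDeviceMethod
-
-# Use a service-level (hub) connection string here, not the device connection string.
-registry_manager = IoTHubRegistryManager("PASTE_YOUR_IOT_HUB_SERVICE_CONNECTION_STRING")
-
-# Ask the device to capture 50 training images instead of the default 30.
-method = CloudToDeviceMethod(method_name="EnterLearningMode", payload=50)
-response = registry_manager.invoke_device_method("PASTE_YOUR_DEVICE_ID", method)
-print(response.status, response.payload)
-```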
-
-While the app is capturing images, you must expose the camera to the types of visual states that you want to detect (for example, an empty room, a room with
-people, an empty desk, a desk with a toy truck, and so on).
-
-## Train the Custom Vision model
-
-Once the app has finished capturing images, it will upload them and then switch to the **Waiting For Trained Model** state. At this point, you need to go to the [Custom Vision website](https://www.customvision.ai/) and build a model based on the new training images. The following animation shows an example of this process.
-
-![Animation: tagging multiple images of bananas](./media/iot-visual-alerts-tutorial/labeling.gif)
-
-To repeat this process with your own scenario:
-
-1. Sign in to the [Custom Vision website](http://customvision.ai).
-1. Find your target project, which should now have all the training images that the app uploaded.
-1. For each visual state that you want to identify, select the appropriate images and manually apply the tag.
- * For example, if your goal is to distinguish between an empty room and a room with people in it, we recommend tagging five or more images with people as a new class, **People**, and tagging five or more images without people as the **Negative** tag. This will help the model differentiate between the two states.
- * As another example, if your goal is to approximate how full a shelf is, then you might use tags such as **EmptyShelf**, **PartiallyFullShelf**, and **FullShelf**.
-1. When you're finished, select the **Train** button.
-1. Once training is complete, the app will detect that a trained iteration is available. It will start the process of exporting the trained model to ONNX and downloading it to the device.
-
-## Use the trained model
-
-Once the app downloads the trained model, it will switch to the **Scoring** state and start scoring images from the camera in a continuous loop.
-
-For each captured image, the app will display the top tag on the screen. If it doesn't recognize the visual state, it will display **No Matches**. The app also sends these messages to the IoT Hub, and if there is a class being detected, the message will include the label, the confidence score, and a property called `detectedClassAlert`, which can be used by IoT Hub clients interested in doing fast message routing based on properties.
-
-In addition, the sample uses a [Sense HAT library](https://github.com/emmellsoft/RPi.SenseHat) to detect when it's running on a Raspberry Pi with a Sense HAT unit, so it can use the unit as an output display: it sets all display lights to red whenever it detects a class and turns them off when it doesn't detect anything.
-
-## Reuse the app
-
-If you'd like to reset the app back to its original state, you can do so by clicking on the button on the top-right corner of the UI, or by invoking the method `DeleteCurrentModel` through the IoT Hub.
-
-At any point, you can repeat the step of uploading training images by clicking the top-right UI button or calling the `EnterLearningMode` method again.
-
-If you're running the app on a device and need to retrieve the IP address again (to establish a remote connection through the [Windows IoT Remote Client](https://www.microsoft.com/p/windows-iot-remote-client/9nblggh5mnxz#activetab=pivot:overviewtab), for example), you can call the `GetIpAddress` method through IoT Hub.
-
-## Clean up resources
-
-Delete your Custom Vision project if you no longer want to maintain it. On the [Custom Vision website](https://customvision.ai), navigate to **Projects** and select the trash can under your new project.
-
-![Screenshot of a panel labeled My New Project with a trash can icon](./media/csharp-tutorial/delete_project.png)
-
-## Next steps
-
-In this tutorial, you set up and ran an application that detects visual state information on an IoT device and sends the results to the IoT Hub. Next, explore the source code further or make one of the suggested modifications below.
-
-> [!div class="nextstepaction"]
-> [IoTVisualAlerts sample (GitHub)](https://github.com/Azure-Samples/Cognitive-Services-Vision-Solution-Templates/tree/master/IoTVisualAlerts)
-
-* Add an IoT Hub method to switch the app directly to the **Waiting For Trained Model** state. This way, you can train the model with images that aren't captured by the device itself and then push the new model to the device on command.
-* Follow the [Visualize real-time sensor data](../../iot-hub/iot-hub-live-data-visualization-in-power-bi.md) tutorial to create a Power BI Dashboard to visualize the IoT Hub alerts sent by the sample.
-* Follow the [IoT remote monitoring](../../iot-hub/iot-hub-monitoring-notifications-with-azure-logic-apps.md) tutorial to create a Logic App that responds to the IoT Hub alerts when visual states are detected.
cognitive-services Limits And Quotas https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Custom-Vision-Service/limits-and-quotas.md
- Title: Limits and quotas - Custom Vision Service-
-description: This article explains the different types of licensing keys and about the limits and quotas for the Custom Vision Service.
------- Previously updated : 07/05/2022---
-# Limits and quotas
-
-There are two tiers of keys for the Custom Vision service. You can sign up for an F0 (free) or S0 (standard) subscription through the Azure portal. See the corresponding [Cognitive Services Pricing page](https://azure.microsoft.com/pricing/details/cognitive-services/custom-vision-service/) for details on pricing and transactions.
-
-The numbers of training images and tags per project are expected to increase over time for S0 projects.
-
-|Factor|**F0 (free)**|**S0 (standard)**|
-|--|--|--|
-|Projects|2|100|
-|Training images per project |5,000|100,000|
-|Predictions / month|10,000 |Unlimited|
-|Tags / project|50|500|
-|Iterations |20|20|
-|Min labeled images per Tag, Classification (50+ recommended) |5|5|
-|Min labeled images per Tag, Object Detection (50+ recommended)|15|15|
-|How long prediction images stored|30 days|30 days|
-|[Prediction](https://go.microsoft.com/fwlink/?linkid=865445) operations with storage (Transactions Per Second)|2|10|
-|[Prediction](https://go.microsoft.com/fwlink/?linkid=865445) operations without storage (Transactions Per Second)|2|20|
-|[TrainProject](https://go.microsoft.com/fwlink/?linkid=865446) (API calls Per Second)|2|10|
-|[Other API calls](https://go.microsoft.com/fwlink/?linkid=865446) (Transactions Per Second)|10|10|
-|Accepted image types|jpg, png, bmp, gif|jpg, png, bmp, gif|
-|Min image height/width in pixels|256 (see note)|256 (see note)|
-|Max image height/width in pixels|10,240|10,240|
-|Max image size (training image upload) |6 MB|6 MB|
-|Max image size (prediction)|4 MB|4 MB|
-|Max number of regions per image (object detection)|300|300|
-|Max number of tags per image (classification)|100|100|
-
-> [!NOTE]
-> Images smaller than 256 pixels will be accepted but upscaled.
-> Image aspect ratio should not be larger than 25:1.
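-
-If you want to screen images against these limits before uploading them, a quick local check can help. The following is just a sketch: it uses the third-party Pillow library, a hypothetical file name, and the training-upload limits from the table above (use 4 MB instead of 6 MB for prediction calls).
-
-```python
-from pathlib import Path
-from PIL import Image  # pip install pillow
-
-MAX_TRAINING_BYTES = 6 * 1024 * 1024
-MAX_SIDE, MIN_SIDE, MAX_ASPECT = 10240, 256, 25
-
-def check_training_image(path):
-    """Return a list of limit violations for a candidate training image."""
-    issues = []
-    if Path(path).stat().st_size > MAX_TRAINING_BYTES:
-        issues.append("file is larger than 6 MB")
-    with Image.open(path) as img:
-        width, height = img.size
-    if max(width, height) > MAX_SIDE:
-        issues.append("longer side exceeds 10,240 pixels")
-    if min(width, height) < MIN_SIDE:
-        issues.append("shorter side is under 256 pixels (accepted, but will be upscaled)")
-    if max(width, height) / min(width, height) > MAX_ASPECT:
-        issues.append("aspect ratio is larger than 25:1")
-    return issues
-
-print(check_training_image("my-training-image.jpg"))
-```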
cognitive-services Logo Detector Mobile https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Custom-Vision-Service/logo-detector-mobile.md
- Title: "Tutorial: Use custom logo detector to recognize Azure services - Custom Vision"-
-description: In this tutorial, you'll step through a sample app that uses Custom Vision as part of a logo detection scenario. Learn how Custom Vision is used with other components to deliver an end-to-end application.
------- Previously updated : 07/04/2023----
-# Tutorial: Recognize Azure service logos in camera pictures
-
-In this tutorial, you'll explore a sample app that uses Custom Vision as part of a larger scenario. The AI Visual Provision app, a Xamarin.Forms application for mobile platforms, analyzes photos of Azure service logos and then deploys those services to the user's Azure account. Here you'll learn how it uses Custom Vision in coordination with other components to deliver a useful end-to-end application. You can run the whole app scenario for yourself, or you can complete only the Custom Vision part of the setup and explore how the app uses it.
-
-This tutorial shows you how to:
-
-> [!div class="checklist"]
-> - Create a custom object detector to recognize Azure service logos.
-> - Connect your app to Azure Computer Vision and Custom Vision.
-> - Create an Azure service principal account to deploy Azure services from the app.
-
-If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/cognitive-services/) before you begin.
-
-## Prerequisites
-
-- [Visual Studio 2017 or later](https://www.visualstudio.com/downloads/)
-- The Xamarin workload for Visual Studio (see [Installing Xamarin](/xamarin/cross-platform/get-started/installation/windows))
-- An iOS or Android emulator for Visual Studio
-- The [Azure CLI](/cli/azure/install-azure-cli-windows) (optional)
-
-## Get the source code
-
-If you want to use the provided app, clone or download the app's source code from the [AI Visual Provision](https://github.com/Microsoft/AIVisualProvision) repository on GitHub. Open the *Source/VisualProvision.sln* file in Visual Studio. Later, you'll edit some of the project files so you can run the app yourself.
-
-## Create an object detector
-
-Sign in to the [Custom Vision web portal](https://customvision.ai/) and create a new project. Specify an Object Detection project and use the Logo domain; this will let the service use an algorithm optimized for logo detection.
-
-![New-project window on the Custom Vision website in the Chrome browser](media/azure-logo-tutorial/new-project.png)
-
-## Upload and tag images
-
-Next, train the logo detection algorithm by uploading images of Azure service logos and tagging them manually. The AIVisualProvision repository includes a set of training images that you can use. On the website, select the **Add images** button on the **Training Images** tab. Then go to the **Documents/Images/Training_DataSet** folder of the repository. You'll need to manually tag the logos in each image, so if you're only testing out this project, you might want to upload only a subset of the images. Upload at least 15 instances of each tag you plan to use.
-
-After you upload the training images, select the first one on the display. The tagging window will appear. Draw boxes and assign tags for each logo in each image.
-
-![Logo tagging on the Custom Vision website](media/azure-logo-tutorial/tag-logos.png)
-
-The app is configured to work with specific tag strings. You'll find the definitions in the *Source\VisualProvision\Services\Recognition\RecognitionService.cs* file:
-
-[!code-csharp[Tag definitions](~/AIVisualProvision/Source/VisualProvision/Services/Recognition/RecognitionService.cs?name=snippet_constants)]
-
-After you tag an image, go to the right to tag the next one. Close the tagging window when you finish.
-
-## Train the object detector
-
-In the left pane, set the **Tags** switch to **Tagged** to display your images. Then select the green button at the top of the page to train the model. The algorithm will train to recognize the same tags in new images. It will also test the model on some of your existing images to generate accuracy scores.
-
-![The Custom Vision website, on the Training Images tab. In this screenshot, the Train button is outlined](media/azure-logo-tutorial/train-model.png)
-
-## Get the prediction URL
-
-After your model is trained, you're ready to integrate it into your app. You'll need to get the endpoint URL (the address of your model, which the app will query) and the prediction key (to grant the app access to prediction requests). On the **Performance** tab, select the **Prediction URL** button at the top of the page.
-
-![The Custom Vision website, showing a Prediction API window that displays a URL address and API key](media/azure-logo-tutorial/cusvis-endpoint.png)
-
-Copy the endpoint URL and the **Prediction-Key** value to the appropriate fields in the *Source\VisualProvision\AppSettings.cs* file:
-
-[!code-csharp[Custom Vision fields](~/AIVisualProvision/Source/VisualProvision/AppSettings.cs?name=snippet_cusvis_keys)]
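-
-The app calls the prediction endpoint from C#, but you can sanity-check the same values from any HTTP client before wiring up the app. Here's a rough sketch using Python and `requests`; it assumes you copied the "image file" variant of the prediction URL from the dialog shown above, and the photo file name is a hypothetical placeholder.
-
-```python
-import requests
-
-prediction_url = "PASTE_YOUR_IMAGE_FILE_PREDICTION_URL"
-prediction_key = "PASTE_YOUR_PREDICTION_KEY"
-
-# Send the raw image bytes to the prediction endpoint.
-with open("azure-logos-photo.jpg", "rb") as image_file:
-    response = requests.post(
-        prediction_url,
-        headers={"Prediction-Key": prediction_key,
-                 "Content-Type": "application/octet-stream"},
-        data=image_file)
-response.raise_for_status()
-
-for prediction in response.json().get("predictions", []):
-    print(prediction["tagName"], prediction["probability"], prediction["boundingBox"])
-```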
-
-## Examine Custom Vision usage
-
-Open the *Source/VisualProvision/Services/Recognition/CustomVisionService.cs* file to see how the app uses your Custom Vision key and endpoint URL. The **PredictImageContentsAsync** method takes a byte stream of an image file along with a cancellation token (for asynchronous task management), calls the Custom Vision prediction API, and returns the result of the prediction.
-
-[!code-csharp[Custom Vision fields](~/AIVisualProvision/Source/VisualProvision/Services/Recognition/CustomVisionService.cs?name=snippet_prediction)]
-
-This result takes the form of a **PredictionResult** instance, which itself contains a list of **Prediction** instances. A **Prediction** contains a detected tag and its bounding box location in the image.
-
-[!code-csharp[Custom Vision fields](~/AIVisualProvision/Source/VisualProvision/Services/Recognition/Prediction.cs?name=snippet_prediction_class)]
-
-To learn more about how the app handles this data, start with the **GetResourcesAsync** method. This method is defined in the *Source/VisualProvision/Services/Recognition/RecognitionService.cs* file.
-
-## Add text recognition
-
-The Custom Vision portion of the tutorial is complete. If you want to run the app, you'll need to integrate the Computer Vision service as well. The app uses the Computer Vision text recognition feature to supplement the logo detection process. An Azure logo can be recognized by its appearance *or* by the text printed near it. Unlike Custom Vision models, Computer Vision is pretrained to perform certain operations on images or videos.
-
-Subscribe to the Computer Vision service to get a key and endpoint URL. For help on this step, see [How to obtain keys](../cognitive-services-apis-create-account.md?tabs=singleservice%2Cwindows).
-
-![The Computer Vision service in the Azure portal, with the Quickstart menu selected. A link for keys is outlined, as is the API endpoint URL](media/azure-logo-tutorial/comvis-keys.png)
-
-Next, open the *Source\VisualProvision\AppSettings.cs* file and populate the `ComputerVisionEndpoint` and `ComputerVisionKey` variables with the correct values.
-
-[!code-csharp[Computer Vision fields](~/AIVisualProvision/Source/VisualProvision/AppSettings.cs?name=snippet_comvis_keys)]
-
-## Create a service principal
-
-The app requires an Azure service principal account to deploy services to your Azure subscription. A service principal lets you delegate specific permissions to an app using Azure role-based access control. To learn more, see the [service principals guide](/azure-stack/operator/azure-stack-create-service-principals).
-
-You can create a service principal by using either Azure Cloud Shell or the Azure CLI, as shown here. To begin, sign in and select the subscription you want to use.
-
-```azurecli
-az login
-az account list
-az account set --subscription "<subscription name or subscription id>"
-```
-
-Then create your service principal. (This process might take some time to finish.)
-
-```azurecli
-az ad sp create-for-rbac --name <servicePrincipalName> --role Contributor --scopes /subscriptions/<subscription_id> --password <yourSPStrongPassword>
-```
-
-Upon successful completion, you should see the following JSON output, including the necessary credentials.
-
-```json
-{
- "clientId": "(...)",
- "clientSecret": "(...)",
- "subscriptionId": "(...)",
- "tenantId": "(...)",
- ...
-}
-```
-
-Take note of the `clientId` and `tenantId` values. Add them to the appropriate fields in the *Source\VisualProvision\AppSettings.cs* file.
-
-[!code-csharp[Computer Vision fields](~/AIVisualProvision/Source/VisualProvision/AppSettings.cs?name=snippet_serviceprincipal)]
-
-## Run the app
-
-At this point, you've given the app access to:
-
-- A trained Custom Vision model
-- The Computer Vision service
-- A service principal account
-
-Follow these steps to run the app:
-
-1. In Visual Studio Solution Explorer, select either the **VisualProvision.Android** project or the **VisualProvision.iOS** project. Choose a corresponding emulator or connected mobile device from the drop-down menu on the main toolbar. Then run the app.
-
- > [!NOTE]
- > You will need a MacOS device to run an iOS emulator.
-
-1. On the first screen, enter your service principal client ID, tenant ID, and password. Select the **Login** button.
-
- > [!NOTE]
- > On some emulators, the **Login** button might not be activated at this step. If this happens, stop the app, open the *Source/VisualProvision/Pages/LoginPage.xaml* file, find the `Button` element labeled **LOGIN BUTTON**, remove the following line, and then run the app again.
- > ```xaml
- > IsEnabled="{Binding IsValid}"
- > ```
-
- ![The app screen, showing fields for service principal credentials](media/azure-logo-tutorial/app-credentials.png)
-
-1. On the next screen, select your Azure subscription from the drop-down menu. (This menu should contain all of the subscriptions to which your service principal has access.) Select the **Continue** button. At this point, the app might prompt you to grant access to the device's camera and photo storage. Grant the access permissions.
-
- ![The app screen, showing a drop-down field for Target Azure subscription](media/azure-logo-tutorial/app-az-subscription.png)
--
-1. The camera on your device will be activated. Take a photo of one of the Azure service logos that you trained. A deployment window should prompt you to select a region and resource group for the new services (as you would do if you were deploying them from the Azure portal).
-
- ![A smartphone camera screen focused on two paper cutouts of Azure logos](media/azure-logo-tutorial/app-camera-capture.png)
-
- ![An app screen showing fields for the deployment region and resource group](media/azure-logo-tutorial/app-deployment-options.png)
-
-## Clean up resources
-
-If you've followed all of the steps of this scenario and used the app to deploy Azure services to your account, go to the [Azure portal](https://portal.azure.com/). There, cancel the services you don't want to use.
-
-If you plan to create your own object detection project with Custom Vision, you might want to delete the logo detection project you created in this tutorial. A free subscription for Custom Vision allows for only two projects. To delete the logo detection project, on the [Custom Vision website](https://customvision.ai), open **Projects** and then select the trash icon under **My New Project**.
-
-## Next steps
-
-In this tutorial, you set up and explored a full-featured Xamarin.Forms app that uses the Custom Vision service to detect logos in mobile camera images. Next, learn the best practices for building a Custom Vision model so that when you create one for your own app, you can make it powerful and accurate.
-
-> [!div class="nextstepaction"]
-> [How to improve your classifier](getting-started-improving-your-classifier.md)
cognitive-services Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Custom-Vision-Service/overview.md
- Title: What is Custom Vision?
-description: Learn how to use the Azure Custom Vision service to build custom AI models to detect objects or classify images.
- Previously updated : 07/04/2023
-keywords: image recognition, image identifier, image recognition app, custom vision
-#Customer intent: As a data scientist/developer, I want to understand what the Custom Vision service does so that I can determine if it's suitable for my project.
-
-# What is Custom Vision?
-
-Azure Custom Vision is an image recognition service that lets you build, deploy, and improve your own image identifier models. An image identifier applies labels to images, according to their visual characteristics. Each label represents a classification or object. Unlike the [Computer Vision](../computer-vision/overview.md) service, Custom Vision allows you to specify your own labels and train custom models to detect them.
-
-> [!TIP]
-> The Azure Computer Vision Image Analysis API now supports custom models. [Use Image Analysis 4.0](../computer-vision/how-to/model-customization.md) to create custom image identifier models using the latest technology from Azure. To migrate a Custom Vision project to the new Image Analysis 4.0 system, see the [Migration guide](../computer-vision/how-to/migrate-from-custom-vision.md).
-
-You can use Custom Vision through a client library SDK, REST API, or through the [Custom Vision web portal](https://customvision.ai/). Follow a quickstart to get started.
-
-> [!div class="nextstepaction"]
-> [Quickstart (web portal)](getting-started-build-a-classifier.md)
--
-This documentation contains the following types of articles:
-* The [quickstarts](./getting-started-build-a-classifier.md) are step-by-step instructions that let you make calls to the service and get results in a short period of time.
-* The [how-to guides](./test-your-model.md) contain instructions for using the service in more specific or customized ways.
-* The [tutorials](./iot-visual-alerts-tutorial.md) are longer guides that show you how to use this service as a component in broader business solutions.
-
-For a more structured approach, follow a Training module for Custom Vision:
-* [Classify images with the Custom Vision service](/training/modules/classify-images-custom-vision/)
-* [Classify endangered bird species with Custom Vision](/training/modules/cv-classify-bird-species/)
-
-## How it works
-
-The Custom Vision service uses a machine learning algorithm to analyze images. You submit sets of images that have and don't have the visual characteristics you're looking for. Then you label the images with your own custom labels (tags) at the time of submission. The algorithm trains on this data and calculates its own accuracy by testing itself on the same images. Once you've trained your model, you can test, retrain, and eventually use it in your image recognition app to [classify images](getting-started-build-a-classifier.md) or [detect objects](get-started-build-detector.md). You can also [export the model](export-your-model.md) for offline use.
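-
-If you prefer to script this workflow, the following is a minimal sketch using the .NET training client library (`Microsoft.Azure.CognitiveServices.Vision.CustomVision.Training`). The key, endpoint, project name, tag names, and image folders are placeholders, and error handling is omitted.
-
-```csharp
-using System;
-using System.Collections.Generic;
-using System.IO;
-using System.Threading;
-using Microsoft.Azure.CognitiveServices.Vision.CustomVision.Training;
-using Microsoft.Azure.CognitiveServices.Vision.CustomVision.Training.Models;
-
-// Placeholder key and endpoint from your Custom Vision training resource.
-var trainer = new CustomVisionTrainingClient(new ApiKeyServiceClientCredentials("<yourTrainingKey>"))
-{
-    Endpoint = "https://<yourResourceName>.cognitiveservices.azure.com/"
-};
-
-// Create a project and two tags. Custom Vision needs at least two tags,
-// each with at least five labeled images, before it can train.
-Project project = trainer.CreateProject("Sample classifier");
-Tag forkTag = trainer.CreateTag(project.Id, "fork");
-Tag scissorsTag = trainer.CreateTag(project.Id, "scissors");
-
-foreach (Tag tag in new[] { forkTag, scissorsTag })
-{
-    foreach (string file in Directory.GetFiles($@"images\{tag.Name}"))
-    {
-        using FileStream image = File.OpenRead(file);
-        trainer.CreateImagesFromData(project.Id, image, new List<Guid> { tag.Id });
-    }
-}
-
-// Train and poll until the iteration finishes.
-Iteration iteration = trainer.TrainProject(project.Id);
-while (iteration.Status == "Training")
-{
-    Thread.Sleep(1000);
-    iteration = trainer.GetIteration(project.Id, iteration.Id);
-}
-Console.WriteLine($"Training finished with status: {iteration.Status}");
-```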
-
-### Classification and object detection
-
-Custom Vision functionality can be divided into two features. **[Image classification](getting-started-build-a-classifier.md)** applies one or more labels to an entire image. **[Object detection](get-started-build-detector.md)** is similar, but it returns the coordinates in the image where the applied label(s) can be found.
-
-### Optimization
-
-The Custom Vision service is optimized to quickly recognize major differences between images, so you can start prototyping your model with a small amount of data. It's generally a good start to use 50 images per label. However, the service isn't optimal for detecting subtle differences in images (for example, detecting minor cracks or dents in quality assurance scenarios).
-
-Additionally, you can choose from several variations of the Custom Vision algorithm that are optimized for images with certain subject material&mdash;for example, landmarks or retail items. For more information, see [Select a domain](select-domain.md).
-
-## How to use it
-
-The Custom Vision Service is available as a set of native SDKs and through a web-based interface on the [Custom Vision portal](https://customvision.ai/). You can create, test, and train a model through either interface or use both together.
-
-### Supported browsers for Custom Vision web portal
-
-You can use the Custom Vision portal with the following web browsers:
-- Microsoft Edge (latest version)
-- Google Chrome (latest version)
-
-![Custom Vision website in a Chrome browser window](media/browser-home.png)
-
-## Backup and disaster recovery
-
-As a part of Azure, Custom Vision Service has components that are maintained across multiple regions. Service zones and regions are used by all of our services to provide continued service to our customers. For more information on zones and regions, see [Azure regions](../../availability-zones/az-overview.md). If you need additional information or have any issues, [contact support](/answers/topics/azure-custom-vision.html).
--
-## Data privacy and security
-
-As with all of the Cognitive Services, developers using the Custom Vision service should be aware of Microsoft's policies on customer data. See the [Cognitive Services page](https://www.microsoft.com/trustcenter/cloudservices/cognitiveservices) on the Microsoft Trust Center to learn more.
-
-## Data residency
-
-Custom Vision generally doesn't replicate data outside of the specified region. The exception is the `NorthCentralUS` region, where there is no local Azure support.
-
-## Next steps
-
-* Follow the [Build a classifier](getting-started-build-a-classifier.md) quickstart to get started using Custom Vision on the web portal.
-* Or, complete an [SDK quickstart](quickstarts/image-classification.md) to implement the basic scenarios with code.
cognitive-services Image Classification https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Custom-Vision-Service/quickstarts/image-classification.md
- Title: "Quickstart: Image classification with Custom Vision client library or REST API"
-description: "Quickstart: Create an image classification project, add tags, upload images, train your project, and make a prediction using the Custom Vision client library or the REST API"
- Previously updated : 11/03/2022
-keywords: custom vision, image recognition, image recognition app, image analysis, image recognition software
-zone_pivot_groups: programming-languages-set-cusvis
-
-# Quickstart: Create an image classification project with the Custom Vision client library or REST API
cognitive-services Object Detection https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Custom-Vision-Service/quickstarts/object-detection.md
- Title: "Quickstart: Object detection with Custom Vision client library"
-description: "Quickstart: Create an object detection project, add custom tags, upload images, train the model, and detect objects in images using the Custom Vision client library."
- Previously updated : 11/03/2022
-keywords: custom vision
-zone_pivot_groups: programming-languages-set-one
-
-# Quickstart: Create an object detection project with the Custom Vision client library
cognitive-services Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Custom-Vision-Service/release-notes.md
- Title: Release Notes - Custom Vision Service
-description: Get the latest information on new releases from the Custom Vision team.
- Previously updated : 04/03/2019
-# Custom Vision Service Release Notes
-
-## May 2, 2019 and May 10, 2019
-- Bugfixes and backend improvements
-
-## May 23, 2019
-- Improved the portal experience related to Azure subscriptions, making it easier to select your Azure directories.
-
-## April 18, 2019
-- Added Object Detection export for the Vision AI Dev Kit.
-- UI tweaks, including project search.
-
-## April 3, 2019
-- Increased limit on number of bounding boxes per image to 200.
-- Bugfixes, including substantial performance update for models exported to TensorFlow.
-
-## March 26, 2019
-- Custom Vision Service has entered General Availability on Azure!
-- Added Advanced Training feature with a new machine learning backend for improved performance, especially on challenging datasets and fine-grained classification. With advanced training, you can specify a compute time budget for training and Custom Vision will experimentally identify the best training and augmentation settings. For quick iterations, you can continue to use the existing fast training.
-- Introduced 3.0 APIs. Announced coming deprecation of pre-3.0 APIs on October 1, 2019. See the documentation [quickstarts](./quickstarts/image-classification.md) for examples on how to get started.
-- Replaced "Default Iterations" with Publish/Unpublish in the 3.0 APIs.
-- New model export targets have been added. Dockerfile export has been upgraded to support ARM for Raspberry Pi 3. Export support has been added to the [Vision AI Dev Kit](https://visionaidevkit.com/).
-- Increased limit of Tags per project to 500 for S0 tier. Increased limit of Images per project to 100,000 for S0 tier.
-- Removed Adult domain. General domain is recommended instead.
-- Announced [pricing](https://azure.microsoft.com/pricing/details/cognitive-services/custom-vision-service/) for General Availability.
-
-## February 25, 2019
-- Announced the end of Limited Trial projects (projects not associated with an Azure resource), as Custom Vision nears completion of its move to Azure public preview. Beginning March 25, 2019, the CustomVision.ai site will only support viewing projects associated with an Azure resource, such as the free Custom Vision resource. Through October 1, 2019, you'll still be able to access your existing limited trial projects via the Custom Vision APIs. This will give you time to update API keys for any apps you've written with Custom Vision. After October 1, 2019, any limited trial projects you haven't moved to Azure will be deleted.
-
-## January 22, 2019
-- Support added for new Azure regions: West US 2, East US, East US 2, West Europe, North Europe, Southeast Asia, Australia East, Central India, UK South, Japan East, and North Central US. Support continues for South Central US.
-
-## December 12, 2018
-- Support export for Object Detection models (introduced Object Detection Compact Domain).
-- Fixed a number of accessibility issues for improved screen reader and keyboard navigation support.
-- UX updates for image viewer and improved object detection tagging experience for faster tagging.
-- Updated base model for Object Detection Domain for better quality object detection.
-- Bug fixes.
-
-## November 6, 2018
-- Added support for Logo Domain in Object Detection.
-
-## October 9, 2018
-- Object Detection enters paid preview. You can now create Object Detection projects with an Azure resource.
-- Added "Move to Azure" feature to the website, to make it easier to upgrade a Limited Trial project to an Azure resource-linked project (F0 or S0). You can find this on the Settings page for your project.
-- Added export to ONNX 1.2, to support the Windows 10 October 2018 Update version of Windows ML.
-- Bug fixes, including for ONNX export with special characters.
-
-## August 14, 2018
-- Added "Get Started" widget to customvision.ai site to guide users through project training.
-- Further improvements to the machine learning pipeline to benefit multilabel projects (new loss layer).
-
-## June 28, 2018
-- Bug fixes & backend improvements.
-- Enabled multiclass classification, for projects where images have exactly one label. In Predictions for multiclass mode, probabilities will sum to one (all images are classified among your specified Tags).
-
-## June 13, 2018
-- UX refresh, focused on ease of use and accessibility.
-- Improvements to the machine learning pipeline to benefit multilabel projects with a large number of tags.
-- Fixed bug in TensorFlow export. Enabled exported model versioning, so iterations can be exported more than once.
-
-## May 7, 2018
-- Introduced preview Object Detection feature for Limited Trial projects.
-- Upgraded to the 2.0 APIs.
-- S0 tier expanded to up to 250 tags and 50,000 images.
-- Significant backend improvements to the machine learning pipeline for image classification projects. Projects trained after April 27, 2018 will benefit from these updates.
-- Added model export to ONNX, for use with Windows ML.
-- Added model export to Dockerfile. This allows you to download the artifacts to build your own Windows or Linux containers, including a Dockerfile, TensorFlow model, and service code.
-- For newly trained models exported to TensorFlow in the General (Compact) and Landmark (Compact) Domains, [Mean Values are now (0,0,0)](https://github.com/azure-samples/cognitive-services-android-customvision-sample), for consistency across all projects.
-
-## March 1, 2018
-- Entered paid preview and onboarded onto the Azure portal. Projects can now be attached to Azure resources with an F0 (Free) or S0 (Standard) tier. Introduced S0 tier projects, which allow up to 100 tags and 25,000 images.
-- Backend changes to the machine learning pipeline/normalization parameter. This will give customers better control of precision-recall tradeoffs when adjusting the Probability Threshold. As a part of these changes, the default Probability Threshold in the CustomVision.ai portal was set to be 50%.
-
-## December 19, 2017
-- Export to Android (TensorFlow) added, in addition to previously released export to iOS (CoreML). This allows export of a trained compact model to be run offline in an application.
-- Added Retail and Landmark "compact" domains to enable model export for these domains.
-- Released version [1.2 Training API](https://westus2.dev.cognitive.microsoft.com/docs/services/f2d62aa3b93843d79e948fe87fa89554/operations/5a3044ee08fa5e06b890f11f) and [1.1 Prediction API](https://westus2.dev.cognitive.microsoft.com/docs/services/57982f59b5964e36841e22dfbfe78fc1/operations/5a3044f608fa5e06b890f164). Updated APIs support model export, a new Prediction operation that does not save images to "Predictions," and introduced batch operations to the Training API.
-- UX tweaks, including the ability to see which domain was used to train an iteration.
-- Updated [C# SDK and sample](https://github.com/Microsoft/Cognitive-CustomVision-Windows).
cognitive-services Role Based Access Control https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Custom-Vision-Service/role-based-access-control.md
- Title: "Azure role-based access control - Custom Vision"
-description: This article will show you how to configure Azure role-based access control for your Custom Vision projects.
- Previously updated : 09/11/2020
-# Azure role-based access control
-
-Custom Vision supports Azure role-based access control (Azure RBAC), an authorization system for managing individual access to Azure resources. Using Azure RBAC, you assign different team members different levels of permissions for your Custom Vision projects. For more information on Azure RBAC, see the [Azure RBAC documentation](../../role-based-access-control/index.yml).
-
-## Add role assignment to Custom Vision resource
-
-Azure RBAC can be assigned to a Custom Vision resource. To grant access to an Azure resource, you add a role assignment.
-1. In the [Azure portal](https://portal.azure.com/), select **All services**.
-1. Then select **Cognitive Services** and navigate to your specific Custom Vision training resource.
- > [!NOTE]
- > You can also set up Azure RBAC for whole resource groups, subscriptions, or management groups. Do this by selecting the desired scope level and then navigating to the desired item (for example, selecting **Resource groups** and then clicking through to your wanted resource group).
-1. Select **Access control (IAM)** on the left navigation pane.
-1. Select **Add** -> **Add role assignment**.
-1. On the **Role** tab on the next screen, select a role you want to add.
-1. On the **Members** tab, select a user, group, service principal, or managed identity.
-1. On the **Review + assign** tab, select **Review + assign** to assign the role.
-
-Within a few minutes, the target will be assigned the selected role at the selected scope. For help with these steps, see [Assign Azure roles using the Azure portal](../../role-based-access-control/role-assignments-portal.md).
-
-## Custom Vision role types
-
-Use the following table to determine access needs for your Custom Vision resources.
-
-|Role |Permissions |
-|--|--|
-|`Cognitive Services Custom Vision Contributor` | Full access to the projects, including the ability to create, edit, or delete a project. |
-|`Cognitive Services Custom Vision Trainer` | Full access except the ability to create or delete a project. Trainers can view and edit projects and train, publish, unpublish, or export the models. |
-|`Cognitive Services Custom Vision Labeler` | Ability to upload, edit, or delete training images and create, add, remove, or delete tags. Labelers can view projects but can't update anything other than training images and tags. |
-|`Cognitive Services Custom Vision Deployment` | Ability to publish, unpublish, or export the models. Deployers can view projects but can't update a project, training images, or tags. |
-|`Cognitive Services Custom Vision Reader` | Ability to view projects. Readers can't make any changes. |
cognitive-services Select Domain https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Custom-Vision-Service/select-domain.md
- Title: "Select a domain for a Custom Vision project - Computer Vision"
-description: This article will show you how to select a domain for your project in the Custom Vision Service.
- Previously updated : 06/13/2022
-# Select a domain for a Custom Vision project
-
-From the **settings** tab of your project on the Custom Vision web portal, you can select a model domain for your project. You'll want to choose the domain that's closest to your scenario. If you're accessing Custom Vision through a client library or REST API, you'll need to specify a domain ID when creating the project. You can get a list of domain IDs with [Get Domains](https://westus2.dev.cognitive.microsoft.com/docs/services/Custom_Vision_Training_3.3/operations/5eb0bcc6548b571998fddeab), or use the table below.
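-
-For example, here's a minimal sketch with the .NET training client library (`Microsoft.Azure.CognitiveServices.Vision.CustomVision.Training`) that prints every available domain ID; the key and endpoint values are placeholders:
-
-```csharp
-using System;
-using Microsoft.Azure.CognitiveServices.Vision.CustomVision.Training;
-
-// Placeholder key and endpoint from your Custom Vision training resource.
-var trainer = new CustomVisionTrainingClient(new ApiKeyServiceClientCredentials("<yourTrainingKey>"))
-{
-    Endpoint = "https://<yourResourceName>.cognitiveservices.azure.com/"
-};
-
-// List each domain's type (Classification or ObjectDetection), name, and the ID to pass when creating a project.
-foreach (var domain in trainer.GetDomains())
-{
-    Console.WriteLine($"{domain.Type}\t{domain.Name}\t{domain.Id}");
-}
-```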
-
-## Image Classification
-
-|Domain|Purpose|
-|--|--|
-|__General__| Optimized for a broad range of image classification tasks. If none of the other specific domains are appropriate, or if you're unsure of which domain to choose, select one of the General domains. ID: `ee85a74c-405e-4adc-bb47-ffa8ca0c9f31`|
-|__General [A1]__| Optimized for better accuracy with comparable inference time as General domain. Recommended for larger datasets or more difficult user scenarios. This domain requires more training time. ID: `a8e3c40f-fb4a-466f-832a-5e457ae4a344`|
-|__General [A2]__| Optimized for better accuracy with faster inference time than General[A1] and General domains. Recommended for most datasets. This domain requires less training time than General and General [A1] domains. ID: `2e37d7fb-3a54-486a-b4d6-cfc369af0018` |
-|__Food__|Optimized for photographs of dishes as you would see them on a restaurant menu. If you want to classify photographs of individual fruits or vegetables, use the Food domain. ID: `c151d5b5-dd07-472a-acc8-15d29dea8518`|
-|__Landmarks__|Optimized for recognizable landmarks, both natural and artificial. This domain works best when the landmark is clearly visible in the photograph. This domain works even if the landmark is slightly obstructed by people in front of it. ID: `ca455789-012d-4b50-9fec-5bb63841c793`|
-|__Retail__|Optimized for images that are found in a shopping catalog or shopping website. If you want high-precision classifying between dresses, pants, and shirts, use this domain. ID: `b30a91ae-e3c1-4f73-a81e-c270bff27c39`|
-|__Compact domains__| Optimized for the constraints of real-time classification on edge devices.|
--
-> [!NOTE]
-> The General[A1] and General[A2] domains can be used for a broad set of scenarios and are optimized for accuracy. Use the General[A2] model for better inference speed and shorter training time. For larger datasets, you may want to use General[A1] to achieve better accuracy than General[A2], though it requires more training and inference time. The General model requires more inference time than both General[A1] and General[A2].
-
-## Object Detection
-
-|Domain|Purpose|
-|--|--|
-|__General__| Optimized for a broad range of object detection tasks. If none of the other domains are appropriate, or you are unsure of which domain to choose, select the General domain. ID: `da2e3a8a-40a5-4171-82f4-58522f70fbc1`|
-|__General [A1]__| Optimized for better accuracy with comparable inference time as General domain. Recommended for more accurate region location needs, larger datasets, or more difficult user scenarios. This domain requires more training time, and results are not deterministic: expect a +-1% mean Average Precision (mAP) difference with the same training data provided. ID: `9c616dff-2e7d-ea11-af59-1866da359ce6`|
-|__Logo__|Optimized for finding brand logos in images. ID: `1d8ffafe-ec40-4fb2-8f90-72b3b6cecea4`|
-|__Products on shelves__|Optimized for detecting and classifying products on shelves. ID: `3780a898-81c3-4516-81ae-3a139614e1f3`|
-|__Compact domains__| Optimized for the constraints of real-time object detection on edge devices.|
-
-## Compact domains
-
-The models generated by compact domains can be exported to run locally. In the Custom Vision 3.4 public preview API, you can get a list of the exportable platforms for compact domains by calling the GetDomains API.
-
-All of the following domains support export in ONNX, TensorFlow, TensorFlow Lite, TensorFlow.js, CoreML, and VAIDK formats, with the exception that the **Object Detection General (compact)** domain does not support VAIDK.
-
-Model performance varies by selected domain. In the table below, we report the model size and inference time on Intel Desktop CPU and NVidia GPU \[1\]. These numbers don't include preprocessing and postprocessing time.
-
-|Task|Domain|ID|Model Size|CPU inference time|GPU inference time|
-|--|--|--|--|--|--|
-|Classification|General (compact)|`0732100f-1a38-4e49-a514-c9b44c697ab5`|6 MB|10 ms|5 ms|
-|Classification|General (compact) [S1]|`a1db07ca-a19a-4830-bae8-e004a42dc863`|43 MB|50 ms|5 ms|
-|Object Detection|General (compact)|`a27d5ca5-bb19-49d8-a70a-fec086c47f5b`|45 MB|35 ms|5 ms|
-|Object Detection|General (compact) [S1]|`7ec2ac80-887b-48a6-8df9-8b1357765430`|14 MB|27 ms|7 ms|
-
->[!NOTE]
->The __General (compact)__ domain for Object Detection requires special postprocessing logic. For details, see the example script in the exported zip package. If you need a model without the postprocessing logic, use __General (compact) [S1]__.
-
->[!IMPORTANT]
->There is no guarantee that the exported models give exactly the same results as the prediction API in the cloud. Slight differences in the running platform or the preprocessing implementation can cause larger differences in the model outputs. For details on the preprocessing logic, see [this document](quickstarts/image-classification.md).
-
-\[1\] Intel Xeon E5-2690 CPU and NVIDIA Tesla M60
cognitive-services Storage Integration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Custom-Vision-Service/storage-integration.md
- Title: Integrate Azure storage for notifications and model backup
-description: Learn how to integrate Azure storage to receive push notifications when you train or export Custom Vision models. You can also save a backup of exported models.
- Previously updated : 06/25/2021
-# Integrate Azure storage for notifications and backup
-
-You can integrate your Custom Vision project with an Azure blob storage queue to get push notifications of project training/export activity and backup copies of published models. This feature is useful to avoid continually polling the service for results when long operations are running. Instead, you can integrate the storage queue notifications into your workflow.
-
-This guide shows you how to use these REST APIs with cURL. You can also use an HTTP request service like Postman to issue the requests.
-
-> [!NOTE]
-> Push notifications depend on the optional _notificationQueueUri_ parameter in the **CreateProject** API, and model backups require that you also use the optional _exportModelContainerUri_ parameter. This guide will use both for the full set of features.
-
-## Prerequisites
-- A Custom Vision resource in Azure. If you don't have one, go to the Azure portal and [create a new Custom Vision resource](https://portal.azure.com/?microsoft_azure_marketplace_ItemHideKey=microsoft_azure_cognitiveservices_customvision#create/Microsoft.CognitiveServicesCustomVision?azure-portal=true). This feature doesn't currently support the Cognitive Service resource (all in one key).
-- An Azure Storage account with a blob container. Follow the [Storage quickstart](../../storage/blobs/storage-quickstart-blobs-portal.md) if you need help with this step.
-- [PowerShell version 6.0+](/powershell/scripting/install/installing-powershell-core-on-windows), or a similar command-line application.
-
-## Set up Azure storage integration
-
-Go to your Custom Vision training resource on the Azure portal, select the **Identity** page, and enable system assigned managed identity.
-
-Next, go to your storage resource in the Azure portal. Go to the **Access control (IAM)** page and select **Add role assignment (Preview)**. Then add a role assignment for either integration feature, or both:
-* If you plan to use the model backup feature, select the **Storage Blob Data Contributor** role, and add your Custom Vision training resource as a member. Select **Review + assign** to complete.
-* If you plan to use the notification queue feature, then select the **Storage Queue Data Contributor** role, and add your Custom Vision training resource as a member. Select **Review + assign** to complete.
-
-For help with role assignments, see [Assign Azure roles using the Azure portal](../../role-based-access-control/role-assignments-portal.md).
-
-### Get integration URLs
-
-Next, you'll get the URLs that allow your Custom Vision resource to access these endpoints.
-
-For the notification queue integration URL, go to the **Queues** page of your storage account, add a new queue, and save its URL to a temporary location.
-
-> [!div class="mx-imgBorder"]
-> ![Azure storage queue page](./media/storage-integration/queue-url.png)
-
-For the model backup integration URL, go to the **Containers** page of your storage account and create a new container. Then select it and go to the **Properties** page. Copy the URL to a temporary location.
-
-> [!div class="mx-imgBorder"]
-> ![Azure storage container properties page](./media/storage-integration/container-url.png)
--
-## Integrate Custom Vision project
-
-Now that you have the integration URLs, you can create a new Custom Vision project that integrates the Azure Storage features. You can also update an existing project to add the features.
-
-### Create new project
-
-When you call the [CreateProject](https://westus2.dev.cognitive.microsoft.com/docs/services/Custom_Vision_Training_3.3/operations/5eb0bcc6548b571998fddeae) API, add the optional parameters _exportModelContainerUri_ and _notificationQueueUri_. Assign the URL values you got in the previous section.
-
-```curl
-curl -v -X POST "{endpoint}/customvision/v3.3/Training/projects?exportModelContainerUri={inputUri}&notificationQueueUri={inputUri}&name={inputName}" \
-  -H "Training-key: {subscription key}"
-```
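-
-If you'd rather make the same call from code, here's a minimal C# sketch using `HttpClient`. The endpoint, training key, container URL, queue URL, and project name are placeholders.
-
-```csharp
-using System;
-using System.Net.Http;
-
-// Placeholder values; substitute your training endpoint, key, and the URLs you copied earlier.
-string endpoint = "https://<yourResourceName>.cognitiveservices.azure.com";
-string trainingKey = "<yourTrainingKey>";
-string containerUri = Uri.EscapeDataString("<yourBlobContainerUrl>");
-string queueUri = Uri.EscapeDataString("<yourQueueUrl>");
-
-using var http = new HttpClient();
-http.DefaultRequestHeaders.Add("Training-key", trainingKey);
-
-// POST to CreateProject with the storage integration parameters in the query string.
-string url = $"{endpoint}/customvision/v3.3/Training/projects" +
-             $"?exportModelContainerUri={containerUri}&notificationQueueUri={queueUri}&name=StorageIntegrationDemo";
-HttpResponseMessage response = await http.PostAsync(url, new StringContent(string.Empty));
-Console.WriteLine($"{(int)response.StatusCode}: {await response.Content.ReadAsStringAsync()}");
-```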
-
-If you receive a `200/OK` response, that means the URLs have been set up successfully. You should see your URL values in the JSON response as well:
-
-```json
-{
-  "id": "00000000-0000-0000-0000-000000000000",
-  "name": "string",
-  "description": "string",
-  "settings": {
-    "domainId": "00000000-0000-0000-0000-000000000000",
-    "classificationType": "Multiclass",
-    "targetExportPlatforms": [
-      "CoreML"
-    ],
-    "useNegativeSet": true,
-    "detectionParameters": "string",
-    "imageProcessingSettings": {
-      "augmentationMethods": {}
-    },
-    "exportModelContainerUri": {url},
-    "notificationQueueUri": {url}
-  },
-  "created": "string",
-  "lastModified": "string",
-  "thumbnailUri": "string",
-  "drModeEnabled": true,
-  "status": "Succeeded"
-}
-```
-
-### Update existing project
-
-To update an existing project with Azure storage feature integration, call the [UpdateProject](https://westus2.dev.cognitive.microsoft.com/docs/services/Custom_Vision_Training_3.3/operations/5eb0bcc6548b571998fddeb1) API, using the ID of the project you want to update.
-
-```curl
-curl -v -X PATCH "{endpoint}/customvision/v3.3/Training/projects/{projectId}" \
-  -H "Content-Type: application/json" \
-  -H "Training-key: {subscription key}" \
-  --data-ascii "{body}"
-```
-
-Set the request body (`body`) to the following JSON format, filling in the appropriate values for _exportModelContainerUri_ and _notificationQueueUri_:
-
-```json
-{
-  "name": "string",
-  "description": "string",
-  "settings": {
-    "domainId": "00000000-0000-0000-0000-000000000000",
-    "classificationType": "Multiclass",
-    "targetExportPlatforms": [
-      "CoreML"
-    ],
-    "imageProcessingSettings": {
-      "augmentationMethods": {}
-    },
-    "exportModelContainerUri": {url},
-    "notificationQueueUri": {url}
-  },
-  "status": "Succeeded"
-}
-```
-
-If you receive a `200/OK` response, that means the URLs have been set up successfully.
-
-## Verify the connection
-
-Your API call in the previous section should have already triggered new information in your Azure storage account.
-
-In your designated container, there should be a test blob inside a **CustomVision-TestPermission** folder. This blob will only exist temporarily.
-
-In your notification queue, you should see a test notification in the following format:
-
-```json
-{
-  "version": "1.0",
-  "type": "ConnectionTest",
-  "Content": {
-    "projectId": "00000000-0000-0000-0000-000000000000"
-  }
-}
-```
-
-## Get event notifications
-
-When you're ready, call the [TrainProject](https://westus2.dev.cognitive.microsoft.com/docs/services/Custom_Vision_Training_3.3/operations/5eb0bcc7548b571998fddee1) API on your project to do an ordinary training operation.
-
-In your Storage notification queue, you'll receive a notification once training finishes:
-
-```json
-{
-  "version": "1.0",
-  "type": "Training",
-  "Content": {
-    "projectId": "00000000-0000-0000-0000-000000000000",
-    "iterationId": "00000000-0000-0000-0000-000000000000",
-    "trainingStatus": "TrainingCompleted"
-  }
-}
-```
-
-The `"trainingStatus"` field may be either `"TrainingCompleted"` or `"TrainingFailed"`. The `"iterationId"` field is the ID of the trained model.
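-
-As an illustration, the following sketch polls the queue with the `Azure.Storage.Queues` client library and reads the fields shown above. The connection string and queue name are placeholders; depending on how your queue is configured, you may also need to set `QueueClientOptions.MessageEncoding` to `Base64`.
-
-```csharp
-using System;
-using System.Text.Json;
-using Azure.Storage.Queues;
-
-// Placeholder connection string and the queue you registered as notificationQueueUri.
-var queueClient = new QueueClient("<yourStorageConnectionString>", "<yourNotificationQueueName>");
-
-// Read up to 10 notification messages and print the training status of each.
-var messages = await queueClient.ReceiveMessagesAsync(maxMessages: 10);
-foreach (var message in messages.Value)
-{
-    using JsonDocument doc = JsonDocument.Parse(message.Body.ToString());
-    string type = doc.RootElement.GetProperty("type").GetString();
-    JsonElement content = doc.RootElement.GetProperty("Content");
-    string iterationId = content.GetProperty("iterationId").GetString();
-    string status = content.GetProperty("trainingStatus").GetString();
-    Console.WriteLine($"{type}: iteration {iterationId} is {status}");
-
-    // Remove the message from the queue once it's been handled.
-    await queueClient.DeleteMessageAsync(message.MessageId, message.PopReceipt);
-}
-```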
-
-## Get model export backups
-
-When you're ready, call the [ExportIteration](https://westus2.dev.cognitive.microsoft.com/docs/services/Custom_Vision_Training_3.3/operations/5eb0bcc6548b571998fddece) API to export a trained model into a specified platform.
-
-In your designated storage container, a backup copy of the exported model will appear. The blob name will have the format:
-
-```
-{projectId} - {iterationId}.{platformType}
-```
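-
-Here's a short sketch of listing those backups with the `Azure.Storage.Blobs` client library, filtering on the project ID prefix; the connection string, container name, and project ID are placeholders:
-
-```csharp
-using System;
-using Azure.Storage.Blobs;
-
-// Placeholder values; point this at the container you registered as exportModelContainerUri.
-var container = new BlobContainerClient("<yourStorageConnectionString>", "<yourBackupContainerName>");
-string projectId = "<yourProjectId>";
-
-// Backup blobs are named "{projectId} - {iterationId}.{platformType}".
-foreach (var blob in container.GetBlobs(prefix: $"{projectId} - "))
-{
-    Console.WriteLine($"{blob.Name}\t{blob.Properties.ContentLength} bytes");
-}
-```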
-
-Also, you'll receive a notification in your queue when the export finishes.
-
-```json
-{
-  "version": "1.0",
-  "type": "Export",
-  "Content": {
-    "projectId": "00000000-0000-0000-0000-000000000000",
-    "iterationId": "00000000-0000-0000-0000-000000000000",
-    "exportStatus": "ExportCompleted",
-    "modelUri": {url}
-  }
-}
-```
-
-The `"exportStatus"` field may be either `"ExportCompleted"` or `"ExportFailed"`. The `"modelUri"` field will contain the URL of the backup model stored in your container, assuming you integrated queue notifications in the beginning. If you didn't, the `"modelUri"` field will show the SAS URL for your Custom Vision model blob.
-
-## Next steps
-
-In this guide, you learned how to integrate Azure storage with your Custom Vision project to receive push notifications and back up exported models. Next, explore the API reference docs to see what else you can do with Custom Vision.
-* [REST API reference documentation (training)](https://westus2.dev.cognitive.microsoft.com/docs/services/Custom_Vision_Training_3.3/operations/5eb0bcc6548b571998fddeb3)
-* [REST API reference documentation (prediction)](https://westus2.dev.cognitive.microsoft.com/docs/services/Custom_Vision_Prediction_3.1/operations/5eb37d24548b571998fde5f3)
cognitive-services Suggested Tags https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Custom-Vision-Service/suggested-tags.md
- Title: "Label images faster with Smart Labeler"
-description: In this guide, you'll learn how to use Smart Labeler to generate suggested tags for images. This lets you label a large number of images more quickly when training a Custom Vision model.
- Previously updated : 12/27/2022
-# Tag images faster with Smart Labeler
-
-In this guide, you'll learn how to use Smart Labeler to generate suggested tags for images. This lets you label a large number of images more quickly when you're training a Custom Vision model.
-
-When you tag images for a Custom Vision model, the service uses the latest trained iteration of the model to predict the labels of new images. It shows these predictions as suggested tags, based on the selected confidence threshold and prediction uncertainty. You can then either confirm or change the suggestions, speeding up the process of manually tagging the images for training.
-
-## When to use Smart Labeler
-
-Keep the following limitations in mind:
-
-* You should only request suggested tags for images whose tags have already been trained on once. Don't get suggestions for a new tag that you're just beginning to train.
-
-> [!IMPORTANT]
-> The Smart Labeler feature uses the same [pricing model](https://azure.microsoft.com/pricing/details/cognitive-services/custom-vision-service/) as regular predictions. The first time you trigger suggested tags for a set of images, you'll be charged the same as for prediction calls. After that, the service stores the results for the selected images in a database for 30 days, and you can access them anytime for free within that period. After 30 days, you'll be charged if you request their suggested tags again.
-
-## Smart Labeler workflow
-
-Follow these steps to use Smart Labeler:
-
-1. Upload all of your training images to your Custom Vision project.
-1. Label part of your data set, choosing an equal number of images for each tag.
- > [!TIP]
- > Make sure you use all of the tags for which you want suggestions later on.
-1. Start the training process.
-1. When training is complete, navigate to the **Untagged** view and select the **Get suggested tags** button on the left pane.
- > [!div class="mx-imgBorder"]
- > ![The suggested tags button is shown under the untagged images tab.](./media/suggested-tags/suggested-tags-button.png)
-1. In the popup window that appears, set the number of images for which you want suggestions. You should only get initial tag suggestions for a portion of the untagged images. You'll get better tag suggestions as you iterate through this process.
-1. Confirm the suggested tags, fixing any that aren't correct.
- > [!TIP]
- > Images with suggested tags are sorted by their prediction uncertainty (lower values indicate higher confidence). You can change the sorting order with the **Sort by uncertainty** option. If you set the order to **high to low**, you can correct the high-uncertainty predictions first and then quickly confirm the low-uncertainty ones.
- * In image classification projects, you can select and confirm tags in batches. Filter the view by a given suggested tag, deselect images that are tagged incorrectly, and then confirm the rest in a batch.
- > [!div class="mx-imgBorder"]
- > ![Suggested tags are displayed in batch mode for IC with filters.](./media/suggested-tags/ic-batch-mode.png)
-
- You can also use suggested tags in individual image mode by selecting an image from the gallery.
-
- ![Suggested tags are displayed in individual image mode for IC.](./media/suggested-tags/ic-individual-image-mode.png)
- * In object detection projects, batch confirmations aren't supported, but you can still filter and sort by suggested tags for a more organized labeling experience. Thumbnails of your untagged images will show an overlay of bounding boxes indicating the locations of suggested tags. If you don't select a suggested tag filter, all of your untagged images will appear without overlaying bounding boxes.
- > [!div class="mx-imgBorder"]
- > ![Suggested tags are displayed in batch mode for OD with filters.](./media/suggested-tags/od-batch-mode.png)
-
- To confirm object detection tags, you need to apply them to each individual image in the gallery.
-
- ![Suggested tags are displayed in individual image mode for OD.](./media/suggested-tags/od-individual-image-mode.png)
-1. Start the training process again.
-1. Repeat the preceding steps until you're satisfied with the suggestion quality.
-
-## Next steps
-
-Follow a quickstart to get started creating and training a Custom Vision project.
-
-* [Build a classifier](getting-started-build-a-classifier.md)
-* [Build an object detector](get-started-build-detector.md)
cognitive-services Test Your Model https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Custom-Vision-Service/test-your-model.md
- Title: Test and retrain a model - Custom Vision Service
-description: Learn how to test an image and then use it to retrain your model in the Custom Vision service.
- Previously updated : 07/05/2022
-# Test and retrain a model with Custom Vision Service
-
-After you train your Custom Vision model, you can quickly test it using a locally stored image or a URL pointing to a remote image. The test uses the most recently trained iteration of your model. Then you can decide whether further training is needed.
-
-## Test your model
-
-1. From the [Custom Vision web portal](https://customvision.ai), select your project. Select **Quick Test** on the right of the top menu bar. This action opens a window labeled **Quick Test**.
-
- ![The Quick Test button is shown in the upper right corner of the window.](./media/test-your-model/quick-test-button.png)
-
-1. In the **Quick Test** window, select in the **Submit Image** field and enter the URL of the image you want to use for your test. If you want to use a locally stored image instead, select the **Browse local files** button and select a local image file.
-
- ![Screenshot of the submit image page.](./media/test-your-model/submit-image.png)
-
-The image you select appears in the middle of the page. Then the prediction results appear below the image in the form of a table with two columns, labeled **Tags** and **Confidence**. After you view the results, you may close the **Quick Test** window.
-
-## Use the predicted image for training
-
-You can now take the image submitted previously for testing and use it to retrain your model.
-
-1. To view images submitted to the classifier, open the [Custom Vision web page](https://customvision.ai) and select the __Predictions__ tab.
-
- ![Image of the predictions tab](./media/test-your-model/predictions-tab.png)
-
- > [!TIP]
- > The default view shows images from the current iteration. You can use the __Iteration__ drop down field to view images submitted during previous iterations.
-
-1. Hover over an image to see the tags that were predicted by the classifier.
-
- > [!TIP]
- > Images are ranked, so that the images that can bring the most gains to the classifier are at the top. To select a different sorting, use the __Sort__ section.
-
- To add an image to your training data, select the image, manually select the tag(s), and then select __Save and close__. The image is removed from __Predictions__ and added to the training images. You can view it by selecting the __Training Images__ tab.
-
- ![Screenshot of the tagging page.](./media/test-your-model/tag-image.png)
-
-1. Use the __Train__ button to retrain the classifier.
-
-## Next steps
-
-[Improve your model](getting-started-improving-your-classifier.md)
cognitive-services Update Application To 3.0 Sdk https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Custom-Vision-Service/update-application-to-3.0-sdk.md
- Title: How to update your project to the 3.0 API
-description: Learn how to update Custom Vision projects from the previous version of the API to the 3.0 API.
- Previously updated : 12/27/2022
-# Update to the 3.0 API
-
-Custom Vision has now reached General Availability and has undergone an API update. This update includes a few new features and, importantly, a few breaking changes:
-
-* The Prediction API is now split into two APIs based on the project type.
-* The Vision AI Developer Kit (VAIDK) export option requires creating a project in a specific way.
-* Default iterations have been removed in favor of publishing / unpublishing a named iteration.
-
-See the [Release notes](release-notes.md) for a full list of the changes. This guide will show you how to update your projects to work with the new API version.
-
-## Use the updated Prediction API
-
-The 2.x APIs used the same prediction call for both image classification and object detection projects; both project types were accepted by the **PredictImage** and **PredictImageUrl** calls. Starting with 3.0, we have split this API so that you need to match the project type to the call (a short code sketch follows this list):
-
-* Use **[ClassifyImage](https://westus2.dev.cognitive.microsoft.com/docs/services/Custom_Vision_Prediction_3.0/operations/5c82db60bf6a2b11a8247c15)** and **[ClassifyImageUrl](https://westus2.dev.cognitive.microsoft.com/docs/services/Custom_Vision_Prediction_3.0/operations/5c82db60bf6a2b11a8247c14)** to get predictions for image classification projects.
-* Use **[DetectImage](https://westus2.dev.cognitive.microsoft.com/docs/services/Custom_Vision_Prediction_3.0/operations/5c82db60bf6a2b11a8247c19)** and **[DetectImageUrl](https://westus2.dev.cognitive.microsoft.com/docs/services/Custom_Vision_Prediction_3.0/operations/5c82db60bf6a2b11a8247c18)** to get predictions for object detection projects.
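-
-Here's a minimal sketch of both calls with the .NET prediction client library (`Microsoft.Azure.CognitiveServices.Vision.CustomVision.Prediction`); the key, endpoint, project ID, published iteration name, and image path are placeholders:
-
-```csharp
-using System;
-using System.IO;
-using Microsoft.Azure.CognitiveServices.Vision.CustomVision.Prediction;
-
-// Placeholder key and endpoint from your Custom Vision prediction resource.
-var predictionClient = new CustomVisionPredictionClient(new ApiKeyServiceClientCredentials("<yourPredictionKey>"))
-{
-    Endpoint = "https://<yourResourceName>.cognitiveservices.azure.com/"
-};
-Guid projectId = Guid.Parse("<yourProjectId>");
-
-using (FileStream image = File.OpenRead(@"test\image.jpg"))
-{
-    // For an image classification project:
-    var classification = predictionClient.ClassifyImage(projectId, "<publishedIterationName>", image);
-
-    // For an object detection project, call DetectImage instead (reopen the stream first):
-    // var detection = predictionClient.DetectImage(projectId, "<publishedIterationName>", image);
-
-    foreach (var prediction in classification.Predictions)
-    {
-        Console.WriteLine($"{prediction.TagName}: {prediction.Probability:P1}");
-    }
-}
-```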
-
-## Use the new iteration publishing workflow
-
-The 2.x APIs used the default iteration or a specified iteration ID to choose the iteration to use for prediction. Starting in 3.0, we have adopted a publishing flow whereby you first publish an iteration under a specified name from the training API. You then pass the name to the prediction methods to specify which iteration to use.
-
-> [!IMPORTANT]
-> The 3.0 APIs do not use the default iteration feature. Until we deprecate the older APIs, you can continue to use the 2.x APIs to toggle an iteration as the default. These APIs will be maintained for a period of time, and you can call the **[UpdateIteration](https://westus2.dev.cognitive.microsoft.com/docs/services/Custom_Vision_Training_3.0/operations/5c771cdcbf6a2b18a0c3b818)** method to mark an iteration as default.
-
-### Publish an iteration
-
-Once an iteration is trained, you can make it available for prediction using the **[PublishIteration](https://westus2.dev.cognitive.microsoft.com/docs/services/Custom_Vision_Training_3.0/operations/5c82db28bf6a2b11a8247bbc)** method. To publish an iteration, you'll need the prediction resource ID, which is available on the CustomVision website's settings page.
-
-![The Custom Vision website settings page with the prediction resource ID outlined.](./media/update-application-to-3.0-sdk/prediction-id.png)
-
-> [!TIP]
-> You can also get this information from the [Azure Portal](https://portal.azure.com) by going to the Custom Vision Prediction resource and selecting **Properties**.
-
-Once your iteration is published, apps can use it for prediction by specifying the name in their prediction API call. To make an iteration unavailable for prediction calls, use the **[UnpublishIteration](https://westus2.dev.cognitive.microsoft.com/docs/services/Custom_Vision_Training_3.0/operations/5c771cdcbf6a2b18a0c3b81a)** API.
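-
-For reference, here's a minimal sketch of publishing and unpublishing with the .NET training client library; every ID and name below is a placeholder:
-
-```csharp
-using System;
-using Microsoft.Azure.CognitiveServices.Vision.CustomVision.Training;
-
-// Placeholder key and endpoint from your Custom Vision training resource.
-var trainer = new CustomVisionTrainingClient(new ApiKeyServiceClientCredentials("<yourTrainingKey>"))
-{
-    Endpoint = "https://<yourResourceName>.cognitiveservices.azure.com/"
-};
-
-Guid projectId = Guid.Parse("<yourProjectId>");
-Guid iterationId = Guid.Parse("<yourTrainedIterationId>");
-string predictionResourceId = "<yourPredictionResourceId>"; // from the settings page shown above
-
-// Publish the trained iteration under a name that prediction calls can reference.
-trainer.PublishIteration(projectId, iterationId, "myFirstModel", predictionResourceId);
-
-// Later, remove it from the prediction endpoint when it's no longer needed.
-trainer.UnpublishIteration(projectId, iterationId);
-```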
-
-## Next steps
-
-* [Training API reference documentation (REST)](https://go.microsoft.com/fwlink/?linkid=865446)
-* [Prediction API reference documentation (REST)](https://go.microsoft.com/fwlink/?linkid=865445)
cognitive-services Use Prediction Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Custom-Vision-Service/use-prediction-api.md
- Title: "Use prediction endpoint to programmatically test images with classifier - Custom Vision"
-description: Learn how to use the API to programmatically test images with your Custom Vision Service classifier.
- Previously updated : 12/27/2022
-# Call the prediction API
-
-After you've trained your model, you can test images programmatically by submitting them to the prediction API endpoint. In this guide, you'll learn how to call the prediction API to score an image. You'll learn the different ways you can configure the behavior of this API to meet your needs.
-
-> [!NOTE]
-> This document demonstrates use of the .NET client library for C# to submit an image to the Prediction API. For more information and examples, see the [Prediction API reference](https://westus2.dev.cognitive.microsoft.com/docs/services/Custom_Vision_Prediction_3.0/operations/5c82db60bf6a2b11a8247c15).
-
-## Setup
-
-### Publish your trained iteration
-
-From the [Custom Vision web page](https://customvision.ai), select your project and then select the __Performance__ tab.
-
-To submit images to the Prediction API, you'll first need to publish your iteration for prediction, which can be done by selecting __Publish__ and specifying a name for the published iteration. This will make your model accessible to the Prediction API of your Custom Vision Azure resource.
-
-![The performance tab is shown, with a red rectangle surrounding the Publish button.](./media/use-prediction-api/unpublished-iteration.png)
-
-Once your model has been successfully published, you'll see a "Published" label appear next to your iteration in the left-hand sidebar, and its name will appear in the description of the iteration.
-
-![The performance tab is shown, with a red rectangle surrounding the Published label and the name of the published iteration.](./media/use-prediction-api/published-iteration.png)
-
-### Get the URL and prediction key
-
-Once your model has been published, you can retrieve the required information by selecting __Prediction URL__. This will open up a dialog with information for using the Prediction API, including the __Prediction URL__ and __Prediction-Key__.
-
-![The performance tab is shown with a red rectangle surrounding the Prediction URL button.](./media/use-prediction-api/published-iteration-prediction-url.png)
-
-![The performance tab is shown with a red rectangle surrounding the Prediction URL value for using an image file and the Prediction-Key value.](./media/use-prediction-api/prediction-api-info.png)
-
-## Submit data to the service
-
-This guide assumes that you already constructed a **[CustomVisionPredictionClient](/dotnet/api/microsoft.azure.cognitiveservices.vision.customvision.prediction.customvisionpredictionclient)** object, named `predictionClient`, with your Custom Vision prediction key and endpoint URL. For instructions on how to set up this feature, follow one of the [quickstarts](quickstarts/image-classification.md).
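-
-If you still need to create that client, a minimal sketch follows; the prediction key and endpoint values are placeholders taken from your prediction resource:
-
-```csharp
-using Microsoft.Azure.CognitiveServices.Vision.CustomVision.Prediction;
-
-// Placeholder key and endpoint from your Custom Vision prediction resource.
-CustomVisionPredictionClient predictionClient = new CustomVisionPredictionClient(
-    new ApiKeyServiceClientCredentials("<yourPredictionKey>"))
-{
-    Endpoint = "https://<yourResourceName>.cognitiveservices.azure.com/"
-};
-```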
-
-In this guide, you'll use a local image, so download an image you'd like to submit to your trained model. The following code prompts the user to specify a local path and gets the bytestream of the file at that path.
-
-```csharp
-Console.Write("Enter image file path: ");
-string imageFilePath = Console.ReadLine();
-byte[] byteData = GetImageAsByteArray(imageFilePath);
-```
-
-Include the following helper method:
-
-```csharp
-// Reads the image file into a byte array, disposing the streams when done.
-private static byte[] GetImageAsByteArray(string imageFilePath)
-{
-    using (FileStream fileStream = new FileStream(imageFilePath, FileMode.Open, FileAccess.Read))
-    using (BinaryReader binaryReader = new BinaryReader(fileStream))
-    {
-        return binaryReader.ReadBytes((int)fileStream.Length);
-    }
-}
-```
-
-The **[ClassifyImageAsync](/dotnet/api/microsoft.azure.cognitiveservices.vision.customvision.prediction.customvisionpredictionclientextensions.classifyimageasync#Microsoft_Azure_CognitiveServices_Vision_CustomVision_Prediction_CustomVisionPredictionClientExtensions_ClassifyImageAsync_Microsoft_Azure_CognitiveServices_Vision_CustomVision_Prediction_ICustomVisionPredictionClient_System_Guid_System_String_System_IO_Stream_System_String_System_Threading_CancellationToken_)** method takes the project ID and the locally stored image, and scores the image against the given model.
-
-```csharp
-// Make a prediction against the new project
-Console.WriteLine("Making a prediction:");
-var result = await predictionClient.ClassifyImageAsync(project.Id, publishedModelName, new MemoryStream(byteData));
-```
-
-## Determine how to process the data
-
-You can optionally configure how the service does the scoring operation by choosing alternate methods (see the methods of the **[CustomVisionPredictionClient](/dotnet/api/microsoft.azure.cognitiveservices.vision.customvision.prediction.customvisionpredictionclient)** class).
-
-You can use a non-async version of the method above for simplicity, but it may cause the program to lock up for a noticeable amount of time.
-
-The **-WithNoStore** methods require that the service does not retain the prediction image after prediction is complete. Normally, the service retains these images so you have the option of adding them as training data for future iterations of your model.
-
-The **-WithHttpMessages** methods return the raw HTTP response of the API call.
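-
-For example, assuming the same `predictionClient`, `project`, `publishedModelName`, and `byteData` values used above, the variants look roughly like this (check the exact method signatures in your installed package version):
-
-```csharp
-// Score the image without letting the service retain it for later training.
-var noStoreResult = await predictionClient.ClassifyImageWithNoStoreAsync(
-    project.Id, publishedModelName, new MemoryStream(byteData));
-
-// Get the raw HTTP response along with the parsed prediction.
-using (var httpResult = await predictionClient.ClassifyImageWithHttpMessagesAsync(
-    project.Id, publishedModelName, new MemoryStream(byteData)))
-{
-    Console.WriteLine($"Status code: {httpResult.Response.StatusCode}");
-    var prediction = httpResult.Body;
-}
-```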
-
-## Get results from the service
-
-The service returns results in the form of an **[ImagePrediction](/dotnet/api/microsoft.azure.cognitiveservices.vision.customvision.prediction.models.imageprediction)** object. The **Predictions** property contains a list of **[PredictionModel](/dotnet/api/microsoft.azure.cognitiveservices.vision.customvision.prediction.models.predictionmodel)** objects, each of which represents a single prediction. They include the name of the label and, for object detection projects, the bounding box coordinates where the object was detected in the image. Your app can then parse this data to, for example, display the image with labeled object fields on a screen.
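-
-As an illustration, assuming `result` holds the **ImagePrediction** returned by the call above, here's a minimal sketch of reading it:
-
-```csharp
-foreach (var prediction in result.Predictions)
-{
-    // TagName and Probability are returned for both classifiers and detectors.
-    Console.WriteLine($"{prediction.TagName}: {prediction.Probability:P1}");
-
-    // BoundingBox is populated for object detection projects (values are normalized to 0-1).
-    if (prediction.BoundingBox != null)
-    {
-        Console.WriteLine($"  left={prediction.BoundingBox.Left:0.00}, top={prediction.BoundingBox.Top:0.00}, " +
-            $"width={prediction.BoundingBox.Width:0.00}, height={prediction.BoundingBox.Height:0.00}");
-    }
-}
-```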
-
-## Next steps
-
-In this guide, you learned how to submit images to your custom image classifier/detector and receive a response programmatically with the C# SDK. Next, learn how to complete end-to-end scenarios with C#, or get started using a different language SDK.
-
-* [Quickstart: Custom Vision SDK](quickstarts/image-classification.md)
cognitive-services Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Custom-Vision-Service/whats-new.md
- Title: What's new in Custom Vision?
-description: This article contains news about Custom Vision.
- Previously updated : 09/27/2021
-# What's new in Custom Vision
-
-Learn what's new in the service. These items may be release notes, videos, blog posts, and other types of information. Bookmark this page to keep up to date with the service.
-
-## May 2022
-
-### Estimated Minimum Budget
-- In the Custom Vision portal, you can now view the minimum estimated budget needed to train your project. This estimate (shown in hours) is calculated based on the number of images you've uploaded and the domain you've selected.
-
-## October 2020
-
-### Custom base model
-- Some applications have a large amount of joint training data but need to fine-tune their models separately; this results in better performance for images from different sources with minor differences. In this case, you can train the first model as usual with a large volume of training data. Then call **TrainProject** in the 3.4 public preview API with _CustomBaseModelInfo_ in the request body to use the first-stage trained model as the base model for downstream projects. If the source project and the downstream target project have similar image characteristics, you can expect better performance.
-
-### New domain information
-- The domain information returned from **GetDomains** in the Custom Vision 3.4 public preview API now includes supported exportable platforms, a brief description of model architecture, and the size of the model for compact domains.
-
-### Training divergence feedback
-- The Custom Vision 3.4 public preview API now returns **TrainingErrorDetails** from the **GetIteration** call. On failed iterations, this reveals whether the failure was caused by training divergence, which can be remedied with more and higher-quality training data.
-
-## July 2020
-
-### Azure role-based access control
-
-* Custom Vision supports Azure role-based access control (Azure RBAC), an authorization system for managing individual access to Azure resources. To learn how to manage access to your Custom Vision projects, see [Azure role-based access control](./role-based-access-control.md).
-
-### Subset training
-
-* When training an object detection project, you can optionally train on only a subset of your applied tags. You may want to do this if you haven't applied enough of certain tags yet, but you do have enough of others. Follow the [Client library quickstart](./quickstarts/object-detection.md) for C# or Python to learn more.
-
-### Azure storage notifications
-
-* You can integrate your Custom Vision project with an Azure blob storage queue to get push notifications of project training/export activity and backup copies of published models. This feature is useful to avoid continually polling the service for results when long operations are running. Instead, you can integrate the storage queue notifications into your workflow. See the [Storage integration](./storage-integration.md) guide to learn more.
-
-### Copy and move projects
-
-* You can now copy projects from one Custom Vision account into others. You might want to move a project from a development to production environment, or back up a project to an account in a different Azure region for increased data security. See the [Copy and move projects](./copy-move-projects.md) guide to learn more.
-
-## September 2019
-
-### Suggested tags
-
-* The Smart Labeler tool on the [Custom Vision website](https://www.customvision.ai/) generates suggested tags for your training images. This lets you label a large number of images more quickly when training a Custom Vision model. For instructions on how to use this feature, see [Suggested tags](./suggested-tags.md).
-
-## Cognitive Service updates
-
-[Azure update announcements for Cognitive Services](https://azure.microsoft.com/updates/?product=cognitive-services)
cognitive-services Cognitive Services Encryption Keys Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Encryption/cognitive-services-encryption-keys-portal.md
- Title: Customer-Managed Keys for Cognitive Services-
-description: Learn how to use the Azure portal to configure customer-managed keys with Azure Key Vault. Customer-managed keys enable you to create, rotate, disable, and revoke access controls.
----- Previously updated : 04/07/2021---
-# Configure customer-managed keys with Azure Key Vault for Cognitive Services
-
-The process to enable Customer-Managed Keys with Azure Key Vault for Cognitive Services varies by product. Use these links for service-specific instructions:
-
-## Vision
-
-* [Custom Vision encryption of data at rest](../custom-vision-service/encrypt-data-at-rest.md)
-* [Face Services encryption of data at rest](../face/encrypt-data-at-rest.md)
-* [Form Recognizer encryption of data at rest](../../applied-ai-services/form-recognizer/encrypt-data-at-rest.md)
-
-## Language
-
-* [Language Understanding service encryption of data at rest](../LUIS/encrypt-data-at-rest.md)
-* [QnA Maker encryption of data at rest](../QnAMaker/encrypt-data-at-rest.md)
-* [Translator encryption of data at rest](../translator/encrypt-data-at-rest.md)
-* [Language service encryption of data at rest](../language-service/concepts/encryption-data-at-rest.md)
-
-## Speech
-
-* [Speech encryption of data at rest](../speech-service/speech-encryption-of-data-at-rest.md)
-
-## Decision
-
-* [Content Moderator encryption of data at rest](../Content-Moderator/encrypt-data-at-rest.md)
-* [Personalizer encryption of data at rest](../personalizer/encrypt-data-at-rest.md)
-
-## Azure OpenAI
-
-* [Azure OpenAI encryption of data at rest](../openai/encrypt-data-at-rest.md)
--
-## Next steps
-
-* [What is Azure Key Vault](../../key-vault/general/overview.md)?
-* [Cognitive Services Customer-Managed Key Request Form](https://aka.ms/cogsvc-cmk)
cognitive-services App Schema Definition https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/LUIS/app-schema-definition.md
- Title: App schema definition
-description: The LUIS app is represented in either the `.json` or `.lu` format and includes all intents, entities, example utterances, features, and settings.
--- Previously updated : 08/22/2020--
-# App schema definition
---
-The LUIS app is represented in either the `.json` or `.lu` format and includes all intents, entities, example utterances, features, and settings.
-
-## Format
-
-When you import and export the app, choose either `.json` or `.lu`.
-
-|Format|Information|
-|--|--|
-|`.json`| Standard programming format|
-|`.lu`|Supported by the Bot Framework's [Bot Builder tools](https://github.com/microsoft/botbuilder-tools/blob/master/packages/Ludown/docs/lu-file-format.md).|
-
-## Version 7.x
-
-* Moving to version 7.x, the entities are represented as nested machine-learning entities.
-* Support for authoring nested machine-learning entities with `enableNestedChildren` property on the following authoring APIs:
- * [Add label](https://westus.dev.cognitive.microsoft.com/docs/services/luis-programmatic-apis-v3-0-preview/operations/5890b47c39e2bb052c5b9c08)
- * [Add batch label](https://westus.dev.cognitive.microsoft.com/docs/services/luis-programmatic-apis-v3-0-preview/operations/5890b47c39e2bb052c5b9c09)
- * [Review labels](https://westus.dev.cognitive.microsoft.com/docs/services/luis-programmatic-apis-v3-0-preview/operations/5890b47c39e2bb052c5b9c0a)
- * [Suggest endpoint queries for entities](https://westus.dev.cognitive.microsoft.com/docs/services/luis-programmatic-apis-v3-0-preview/operations/5890b47c39e2bb052c5b9c2e)
- * [Suggest endpoint queries for intents](https://westus.dev.cognitive.microsoft.com/docs/services/luis-programmatic-apis-v3-0-preview/operations/5890b47c39e2bb052c5b9c2d)
-
-```json
-{
- "luis_schema_version": "7.0.0",
- "intents": [
- {
- "name": "None",
- "features": []
- }
- ],
- "entities": [],
- "hierarchicals": [],
- "composites": [],
- "closedLists": [],
- "prebuiltEntities": [],
- "utterances": [],
- "versionId": "0.1",
- "name": "example-app",
- "desc": "",
- "culture": "en-us",
- "tokenizerVersion": "1.0.0",
- "patternAnyEntities": [],
- "regex_entities": [],
- "phraselists": [
- ],
- "regex_features": [],
- "patterns": [],
- "settings": []
-}
-```
-
-| element | Comment |
-|--|--|
-| "hierarchicals": [], | Deprecated, use [machine-learning entities](concepts/entities.md). |
-| "composites": [], | Deprecated, use [machine-learning entities](concepts/entities.md). [Composite entity](./reference-entity-machine-learned-entity.md) reference. |
-| "closedLists": [], | [List entities](reference-entity-list.md) reference, primarily used as features to entities. |
-| "versionId": "0.1", | Version of a LUIS app.|
-| "name": "example-app", | Name of the LUIS app. |
-| "desc": "", | Optional description of the LUIS app. |
-| "culture": "en-us", | [Language](luis-language-support.md) of the app, impacts underlying features such as prebuilt entities, machine-learning, and tokenizer. |
-| "tokenizerVersion": "1.0.0", | [Tokenizer](luis-language-support.md#tokenization) |
-| "patternAnyEntities": [], | [Pattern.any entity](reference-entity-pattern-any.md) |
-| "regex_entities": [], | [Regular expression entity](reference-entity-regular-expression.md) |
-| "phraselists": [], | [Phrase lists (feature)](concepts/patterns-features.md#create-a-phrase-list-for-a-concept) |
-| "regex_features": [], | Deprecated, use [machine-learning entities](concepts/entities.md). |
-| "patterns": [], | [Patterns improve prediction accuracy](luis-concept-patterns.md) with [pattern syntax](reference-pattern-syntax.md) |
-| "settings": [] | [App settings](luis-reference-application-settings.md)|
-
-## Version 6.x
-
-* Moving to version 6.x, use the new [machine-learning entity](reference-entity-machine-learned-entity.md) to represent your entities.
-
-```json
-{
- "luis_schema_version": "6.0.0",
- "intents": [
- {
- "name": "None",
- "features": []
- }
- ],
- "entities": [],
- "hierarchicals": [],
- "composites": [],
- "closedLists": [],
- "prebuiltEntities": [],
- "utterances": [],
- "versionId": "0.1",
- "name": "example-app",
- "desc": "",
- "culture": "en-us",
- "tokenizerVersion": "1.0.0",
- "patternAnyEntities": [],
- "regex_entities": [],
- "phraselists": [],
- "regex_features": [],
- "patterns": [],
- "settings": []
-}
-```
-
-## Version 4.x
-
-```json
-{
- "luis_schema_version": "4.0.0",
- "versionId": "0.1",
- "name": "example-app",
- "desc": "",
- "culture": "en-us",
- "tokenizerVersion": "1.0.0",
- "intents": [
- {
- "name": "None"
- }
- ],
- "entities": [],
- "composites": [],
- "closedLists": [],
- "patternAnyEntities": [],
- "regex_entities": [],
- "prebuiltEntities": [],
- "model_features": [],
- "regex_features": [],
- "patterns": [],
- "utterances": [],
- "settings": []
-}
-```
-
-## Next steps
-
-* Migrate to the [V3 authoring APIs](luis-migration-authoring-entities.md)
cognitive-services Choose Natural Language Processing Service https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/LUIS/choose-natural-language-processing-service.md
- Title: Use NLP with QnA Maker for chat bots
-description: Cognitive Services provides two natural language processing services, Language Understanding and QnA Maker, each with a different purpose. Understand when to use each service and how they complement each other.
--- Previously updated : 10/20/2020--
-# Use Cognitive Services with natural language processing (NLP) to enrich chat bot conversations
----
-## Next steps
-
-* Learn [enterprise design strategies](luis-concept-enterprise.md)
cognitive-services Client Libraries Rest Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/LUIS/client-libraries-rest-api.md
- Title: "Quickstart: Language Understanding (LUIS) SDK client libraries and REST API"
-description: Create and query a LUIS app with the LUIS SDK client libraries and REST API.
- Previously updated : 03/07/2022-----
-keywords: Azure, artificial intelligence, ai, natural language processing, nlp, LUIS, azure luis, natural language understanding, ai chatbot, chatbot maker, understanding natural language
-
-zone_pivot_groups: programming-languages-set-luis
-
-# Quickstart: Language Understanding (LUIS) client libraries and REST API
--
-Create and query an Azure LUIS artificial intelligence (AI) app with the LUIS SDK client libraries with this quickstart using C#, Python, or JavaScript. You can also use cURL to send requests using the REST API.
-
-Language Understanding (LUIS) enables you to apply natural language processing (NLP) to a user's conversational, natural language text to predict overall meaning, and pull out relevant, detailed information.
-
-* The **authoring** client library and REST API allows you to create, edit, train, and publish your LUIS app.
-* The **prediction runtime** client library and REST API allows you to query the published app.
-----
-## Clean up resources
-
-You can delete the app from the [LUIS portal](https://www.luis.ai) and delete the Azure resources from the [Azure portal](https://portal.azure.com/).
-
-If you're using the REST API, delete the `ExampleUtterances.JSON` file from the file system when you're done with the quickstart.
-
-## Troubleshooting
-
-* Authenticating to the client library - authentication errors usually indicate that the wrong key & endpoint were used. This quickstart uses the authoring key and endpoint for the prediction runtime as a convenience, but will only work if you haven't already used the monthly quota. If you can't use the authoring key and endpoint, you need to use the prediction runtime key and endpoint when accessing the prediction runtime SDK client library.
-* Creating entities - if you get an error creating the nested machine-learning entity used in this tutorial, make sure you copied the code and didn't alter the code to create a different entity.
-* Creating example utterances - if you get an error creating the labeled example utterance used in this tutorial, make sure you copied the code and didn't alter the code to create a different labeled example.
-* Training - if you get a training error, this usually indicates an empty app (no intents with example utterances), or an app with intents or entities that are malformed.
-* Miscellaneous errors - because the code calls into the client libraries with text and JSON objects, make sure you haven't changed the code.
-
-Other errors - if you get an error not covered in the preceding list, let us know by giving feedback at the bottom of this page. Include the programming language and version of the client libraries you installed.
-
-## Next steps
--
-* [Iterative app development for LUIS](./luis-concept-app-iteration.md)
cognitive-services Application Design https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/LUIS/concepts/application-design.md
- Title: Application Design-
-description: Application design concepts
-------- Previously updated : 01/10/2022---
-# Plan your LUIS app
---
-A Language Understanding (LUIS) app schema contains [intents](../luis-glossary.md#intent) and [entities](../luis-glossary.md#entity) relevant to your subject [domain](../luis-glossary.md#domain). The intents classify user [utterances](../luis-glossary.md#utterance), and the entities extract data from the user utterances.
-
-A LUIS app learns and performs most efficiently when you iteratively develop it. Here's a typical iteration cycle:
-
-1. Create a new version
-2. Edit the LUIS app schema. This includes:
- * Intents with example utterances
- * Entities
- * Features
-3. Train, test, and publish
-4. Test for active learning by reviewing utterances sent to the prediction endpoint
-5. Gather data from endpoint queries
--
-## Identify your domain
-
-A LUIS app is centered around a subject domain. For example, you may have a travel app that handles booking of tickets, flights, hotels, and rental cars. Another app may provide content related to exercising, tracking fitness efforts and setting goals. Identifying the domain helps you find words or phrases that are relevant to your domain.
-
-> [!TIP]
-> LUIS offers [prebuilt domains](../howto-add-prebuilt-models.md) for many common scenarios. Check to see if you can use a prebuilt domain as a starting point for your app.
-
-## Identify your intents
-
-Think about the [intents](../luis-concept-intent.md) that are important to your application's task.
-
-Let's take the example of a travel app, with functions to book a flight and check the weather at the user's destination. You can define two intents, BookFlight and GetWeather, for these actions.
-
-In a more complex app with more functions, you likely would have more intents, and you should define them carefully so they aren't too specific. For example, BookFlight and BookHotel may need to be separate intents, but BookInternationalFlight and BookDomesticFlight may be too similar.
-
-> [!NOTE]
-> It is a best practice to use only as many intents as you need to perform the functions of your app. If you define too many intents, it becomes harder for LUIS to classify utterances correctly. If you define too few, they may be so general that they overlap.
-
-If you don't need to identify overall user intention, add all the example user utterances to the `None` intent. If your app grows into needing more intents, you can create them later.
-
-## Create example utterances for each intent
-
-To start, avoid creating too many utterances for each intent. Once you have determined the intents you need for your app, create 15 to 30 example utterances per intent. Each utterance should be different from the previously provided utterances. Include a variety of word counts, word choices, verb tenses, and [punctuation](../luis-reference-application-settings.md#punctuation-normalization).
-
-For more information, see [understanding good utterances for LUIS apps](../luis-concept-utterance.md).
-
-## Identify your entities
-
-In the example utterances, identify the entities you want extracted. To book a flight, you need information like the destination, date, airline, ticket category, and travel class. Create entities for these data types and then mark the [entities](entities.md) in the example utterances. Entities are important for accomplishing an intent.
-
-When determining which entities to use in your app, remember that there are different types of entities for capturing relationships between object types. See [Entities in LUIS](../luis-concept-entity-types.md) for more information about the different types.
-
-> [!TIP]
-> LUIS offers [prebuilt entities](../howto-add-prebuilt-models.md) for common, conversational user scenarios. Consider using prebuilt entities as a starting point for your application development.
-
-## Intents versus entities
-
-An intent is the desired outcome of the _whole_ utterance while entities are pieces of data extracted from the utterance. Usually intents are tied to actions that the client application should take. Entities are information needed to perform this action. From a programming perspective, an intent would trigger a method call and the entities would be used as parameters to that method call.
-
-This utterance _must_ have an intent and _may_ have entities:
-
-"*Buy an airline ticket from Seattle to Cairo*"
-
-This utterance has a single intention:
-
-* Buying a plane ticket
-
-This utterance may have several entities:
-
-* Locations of Seattle (origin) and Cairo (destination)
-* The quantity of a single ticket
-
-## Resolution in utterances with more than one function or intent
-
-In many cases, especially when working with natural conversation, users provide an utterance that can contain more than one function or intent. To address this, a general strategy is to understand that output can be represented by both intents and entities. This representation should be mappable to your client application's actions, and doesn't need to be limited to intents.
-
-**Int-ent-ties** is the concept that actions (usually understood as intents) might also be captured as entities in the app's output, and mapped to specific actions. _Negation_, for example, commonly relies on both intent and entity for full extraction. Consider the following two utterances, which are similar in word choice, but have different results:
-
-* "*Please schedule my flight from Cairo to Seattle*"
-* "*Cancel my flight from Cairo to Seattle*"
-
-Instead of having two separate intents, you should create a single intent with a FlightAction machine learning entity. This machine learning entity should extract the details of the action for both scheduling and canceling requests, and either an origin or destination location.
-
-This FlightAction entity would be structured with the following top-level machine learning entity, and subentities:
-
-* FlightAction
- * Action
- * Origin
- * Destination
-
-To help with extraction, you would add features to the subentities. You would choose features based on the vocabulary you expect to see in user utterances, and the values you want returned in the prediction response.
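-
-As a rough sketch (not taken from a real app), the FlightAction entity and its subentities might look something like the following in an exported `.json` app. The property names follow the version 7.x schema layout and should be verified against an app you export yourself:
-
-```json
-{
-  "entities": [
-    {
-      "name": "FlightAction",
-      "roles": [],
-      "features": [],
-      "children": [
-        { "name": "Action", "children": [], "features": [] },
-        { "name": "Origin", "children": [], "features": [] },
-        { "name": "Destination", "children": [], "features": [] }
-      ]
-    }
-  ]
-}
-```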
-
-## Best practices
-
-### Plan Your schema
-
-Before you start building your app's schema, you should identify how and where you plan to use this app. The more thorough and specific your planning, the better your app becomes.
-
-* Research targeted users
-* Define end-to-end personas to represent your app - voice, avatar, issue handling (proactive, reactive)
-* Identify channels of user interactions (such as text or speech), handing off to existing solutions or creating a new solution for this app
-* End-to-end user journey
- * What do you expect this app to do and not do? What are the priorities of what it should do?
- * What are the main use cases?
-* Collecting data - [learn about collecting and preparing data](../data-collection.md)
-
-### Don't train and publish with every single example utterance
-
-Add 10 or 15 utterances before training and publishing. That allows you to see the impact on prediction accuracy. Adding a single utterance may not have a visible impact on the score.
-
-### Don't use LUIS as a training platform
-
-LUIS is specific to a language model's domain. It isn't meant to work as a general natural language training platform.
-
-### Build your app iteratively with versions
-
-Each authoring cycle should be contained within a new [version](../luis-concept-app-iteration.md), cloned from an existing version.
-
-### Don't publish too quickly
-
-Publishing your app too quickly and without proper planning may lead to several issues such as:
-
-* Your app will not work in your actual scenario at an acceptable level of performance.
-* The schema (intents and entities) might not be appropriate, and if you have developed client app logic following the schema, you may need to redo it. This might cause unexpected delays and extra costs to the project you are working on.
-* Utterances you add to the model might cause biases towards example utterances that are hard to debug and identify. It will also make removing ambiguity difficult after you have committed to a certain schema.
-
-### Do monitor the performance of your app
-
-Monitor the prediction accuracy using a [batch test](../luis-how-to-batch-test.md) set.
-
-Keep a separate set of utterances that aren't used as [example utterances](utterances.md) or endpoint utterances. Keep improving the app for your test set. Adapt the test set to reflect real user utterances. Use this test set to evaluate each iteration or version of the app.
-
-### Don't create phrase lists with all possible values
-
-Provide a few examples in the [phrase lists](patterns-features.md#create-a-phrase-list-for-a-concept) but not every word or phrase. LUIS generalizes and takes context into account.
-
-## Next steps
-[Intents](intents.md)
cognitive-services Entities https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/LUIS/concepts/entities.md
- Title: Entities-
-description: Entities concepts
-------- Previously updated : 07/19/2022---
-# Entity types
---
-An entity is an item or an element that is relevant to the user's intent. Entities define data that can be extracted from the utterance and is essential to complete a user's required action. For example:
--
-| Utterance | Intent predicted | Entities extracted | Explanation |
-|--|--|--|--|
-| Hello, how are you? | Greeting | - | Nothing to extract. |
-| I want to order a small pizza | orderPizza | 'small' | 'Size' entity is extracted as 'small'. |
-| Turn off bedroom light | turnOff | 'bedroom' | 'Room' entity is extracted as 'bedroom'. |
-| Check balance in my savings account ending in 4406 | checkBalance | 'savings', '4406' | 'accountType' entity is extracted as 'savings' and 'accountNumber' entity is extracted as '4406'. |
-| Buy 3 tickets to New York | buyTickets | '3', 'New York' | 'ticketsCount' entity is extracted as '3' and 'Destination' entity is extracted as 'New York'. |
-
-Entities are optional but recommended. You don't need to create entities for every concept in your app, only when:
-
-* The client application needs the data, or
-* The entity acts as a hint or signal to another entity or intent. To learn more about entities as features, see [Entities as features](../luis-concept-entity-types.md#entities-as-features).
-
-## Entity types
-
-To create an entity, you have to give it a name and a type. There are several types of entities in LUIS.
-
-## List entity
-
-A list entity represents a fixed, closed set of related words along with their synonyms. You can use list entities to recognize multiple synonyms or variations and extract a normalized output for them. Use the _recommend_ option to see suggestions for new words based on the current list.
-
-A list entity isn't machine-learned, meaning that LUIS doesn't discover more values for list entities. LUIS marks any match to an item in any list as an entity in the response.
-
-Matching against a list entity is case sensitive and requires an exact match. Normalized values are also used when matching the list entity. For example:
--
-| Normalized value | Synonyms |
-|--|--|
-| Small | `sm`, `sml`, `tiny`, `smallest` |
-| Medium | `md`, `mdm`, `regular`, `average`, `middle` |
-| Large | `lg`, `lrg`, `big` |
-
-See the [list entities reference article](../reference-entity-list.md) for more information.
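-
-As an illustration only, the Size list from the preceding table might be represented in an exported app's `closedLists` section roughly as follows; treat the property names as an approximation and confirm them against an app you export:
-
-```json
-{
-  "closedLists": [
-    {
-      "name": "Size",
-      "subLists": [
-        { "canonicalForm": "Small", "list": [ "sm", "sml", "tiny", "smallest" ] },
-        { "canonicalForm": "Medium", "list": [ "md", "mdm", "regular", "average", "middle" ] },
-        { "canonicalForm": "Large", "list": [ "lg", "lrg", "big" ] }
-      ],
-      "roles": []
-    }
-  ]
-}
-```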
-
-## Regex entity
-
-A regular expression entity extracts an entity based on a regular expression pattern you provide. It ignores case and ignores cultural variant. Regular expression entities are best for structured text or a predefined sequence of alphanumeric values that are expected in a certain format. For example:
-
-| Entity | Regular expression | Example |
-|--|--|--|
-| Flight Number | `flight [A-Z]{2} [0-9]{4}` | `flight AS 1234` |
-| Credit Card Number | `[0-9]{16}` | `5478789865437632` |
-
-See the [regex entities reference article](../reference-entity-regular-expression.md) for more information.
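-
-For illustration, the Flight Number example above might appear in an exported app's `regex_entities` section roughly as shown below; the property names are an approximation, so confirm them against an exported app:
-
-```json
-{
-  "regex_entities": [
-    {
-      "name": "Flight Number",
-      "regexPattern": "flight [A-Z]{2} [0-9]{4}",
-      "roles": []
-    }
-  ]
-}
-```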
-
-## Prebuilt entities
-
-LUIS includes a set of prebuilt entities for recognizing common types of information, like dates, times, numbers, measurements, and currency. Prebuilt entity support varies by the culture of your LUIS app. For a full list of the prebuilt entities that LUIS supports, including support by culture, see the [prebuilt entity reference](../luis-reference-prebuilt-entities.md).
-
-When a prebuilt entity is included in your application, its predictions are included in your published application. The behavior of prebuilt entities is pre-trained and can't be modified.
--
-| Prebuilt entity | Example value |
-|--|--|
-| PersonName | James, Bill, Tom |
-| DatetimeV2 | `2019-05-02`, `May 2nd`, `8am on May 2nd 2019` |
-
-See the [prebuilt entities reference article](../luis-reference-prebuilt-entities.md) for more information.
-
-## Pattern.Any entity
-
-A pattern.Any entity is a variable-length placeholder used only in a pattern's template utterance to mark where the entity begins and ends. It follows a specific rule or pattern and is best used for sentences with a fixed lexical structure. For example:
--
-| Example utterance | Pattern | Entity |
-|--|--|--|
-| Can I have a burger please? | `Can I have a {meal} [please][?]` | burger |
-| Can I have a pizza? | `Can I have a {meal} [please][?]` | pizza |
-| Where can I find The Great Gatsby? | `Where can I find {bookName}?` | The Great Gatsby |
-
-See the [Pattern.Any entities reference article](../reference-entity-pattern-any.md) for more information.
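-
-As a sketch, the meal example above could be represented with a `patternAnyEntities` entry plus a pattern tied to an intent. The intent name OrderMeal is only an assumption for illustration, and the exact property names should be verified against an exported app:
-
-```json
-{
-  "patternAnyEntities": [
-    { "name": "meal", "explicitList": [], "roles": [] }
-  ],
-  "patterns": [
-    { "pattern": "Can I have a {meal} [please][?]", "intent": "OrderMeal" }
-  ]
-}
-```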
-
-## Machine learned (ML) entity
-
-A machine learned entity uses context to extract entities based on labeled examples. It is the preferred entity for building LUIS applications. It relies on machine-learning algorithms and requires labeling to be tailored to your application successfully. Use an ML entity to identify data that isn't always well formatted but has the same meaning.
--
-| Example utterance | Extracted product entity |
-|--|--|
-| I want to buy a book. | 'book' |
-| Can I get these shoes please? | 'shoes' |
-| Add those shorts to my basket. | 'shorts' |
-
-See [Machine learned entities](../reference-entity-machine-learned-entity.md) for more information.
-
-#### ML Entity with Structure
-
-An ML entity can be composed of smaller sub-entities, each of which can have its own properties. For example, an _Address_ entity could have the following structure:
-
-* Address: 4567 Main Street, NY, 98052, USA
- * Building Number: 4567
- * Street Name: Main Street
- * State: NY
- * Zip Code: 98052
- * Country: USA
-
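-In a V3 prediction response, a nested entity like this Address example is returned as an object keyed by subentity name. The following is only a rough sketch; the exact keys depend on how you name your entities:
-
-```json
-{
-  "prediction": {
-    "entities": {
-      "Address": [
-        {
-          "Building Number": [ "4567" ],
-          "Street Name": [ "Main Street" ],
-          "State": [ "NY" ],
-          "Zip Code": [ "98052" ],
-          "Country": [ "USA" ]
-        }
-      ]
-    }
-  }
-}
-```
-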
-## Building effective ML entities
-
-To build machine learned entities effectively, follow these best practices:
-
-* If you have a machine learned entity with sub-entities, make sure that the different orders and variants of the entity and sub-entities are presented in the labeled utterances. Labeled example utterances should include all valid forms, including utterances where entities appear, are absent, and are reordered.
-* Avoid overfitting the entities to a fixed set. Overfitting happens when the model doesn't generalize well, and is a common problem in machine-learning models. This implies the app wouldn't work adequately on new types of examples. In turn, you should vary the labeled example utterances so the app can generalize beyond the limited examples you provide.
-* Your labeling should be consistent across the intents. This includes even utterances you provide in the _None_ intent that include this entity. Otherwise the model won't be able to determine the sequences effectively.
-
-## Entities as features
-
-Another important function of entities is to use them as features or distinguishing traits for other intents or entities so that your system observes and learns through them.
-
-## Entities as features for intents
-
-You can use entities as a signal for an intent. For example, the presence of a certain entity in the utterance can distinguish which intent the utterance falls under.
-
-| Example utterance | Entity | Intent |
-|--|--|--|
-| Book me a _flight to New York_. | City | Book Flight |
-| Book me the _main conference room_. | Room | Reserve Room |
-
-## Entities as features for entities
-
-You can also use entities as an indicator of the presence of other entities. A common example of this is using a prebuilt entity as a feature for another ML entity. If you're building a flight booking system and your utterance looks like 'Book me a flight from Cairo to Seattle', you likely will have _Origin City_ and _Destination City_ as ML entities. A good practice would be to use the prebuilt GeographyV2 entity as a feature for both entities.
-
-For more information, see the [GeographyV2 entities reference article](../luis-reference-prebuilt-geographyv2.md).
-
-You can also use entities as required features for other entities. This helps in the resolution of extracted entities. For example, if you're creating a pizza-ordering application and you have a Size ML entity, you can create a SizeList list entity and use it as a required feature for the Size entity. Your application will return the normalized value as the extracted entity from the utterance.
-
-See [features](../concepts/patterns-features.md) for more information, and [prebuilt entities](../luis-reference-prebuilt-entities.md) to learn more about prebuilt entities resolution available in your culture.
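-
-As a heavily hedged sketch, a Size entity that uses SizeList as a required feature might look something like the following entry under `entities` in an exported app (with SizeList defined under `closedLists`). The `modelName` and `isRequired` property names are assumptions here, so check an app you export for the exact schema:
-
-```json
-{
-  "name": "Size",
-  "children": [],
-  "roles": [],
-  "features": [
-    { "modelName": "SizeList", "isRequired": true }
-  ]
-}
-```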
-
-## Data from entities
-
-Most chat bots and applications need more than the intent name. This additional, optional data comes from entities discovered in the utterance. Each type of entity returns different information about the match.
-
-A single word or phrase in an utterance can match more than one entity. In that case, each matching entity is returned with its score.
-
-All entities are returned in the entities array of the response from the endpoint.
-
-## Best practices for entities
-
-### Use machine-learning entities
-
-Machine learned entities are tailored to your app and require labeling to be successful. If you aren't using machine learned entities, you might be using the wrong entities.
-
-Machine learned entities can use other entities as features. These other entities can be custom entities such as regular expression entities or list entities, or you can use prebuilt entities as features.
-
-Learn about [effective machine learned entities](../luis-concept-entity-types.md#machine-learned-ml-entity).
-
-## Next steps
-
-* [How to use entities in your LUIS app](../how-to/entities.md)
-* [Utterances concepts](utterances.md)
cognitive-services Intents https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/LUIS/concepts/intents.md
- Title: What are intents in LUIS-
-description: Learn about intents and how they're used in LUIS
------- Previously updated : 07/19/2022--
-# Intents
---
-An intent represents a task or action the user wants to perform. It is a purpose or goal expressed in a user's [utterance](utterances.md).
-
-Define a set of intents that corresponds to actions users want to take in your application. For example, a travel app would have several intents:
-
-Travel app intents | Example utterances |
-||
- BookFlight | "Book me a flight to Rio next week" <br/> "Fly me to Rio on the 24th" <br/> "I need a plane ticket next Sunday to Rio de Janeiro" |
- Greeting | "Hi" <br/>"Hello" <br/>"Good morning" |
- CheckWeather | "What's the weather like in Boston?" <br/> "Show me the forecast for this weekend" |
- None | "Get me a cookie recipe"<br>"Did the Lakers win?" |
-
-All applications come with the predefined intent, "[None](#none-intent)", which is the fallback intent.
-
-## Prebuilt intents
-
-LUIS provides prebuilt intents and their utterances for each of its prebuilt domains. Intents can be added without adding the whole domain. Adding an intent is the process of adding an intent and its utterances to your app. Both the intent name and the utterance list can be modified.
-
-## Return all intents' scores
-
-You assign an utterance to a single intent. When LUIS receives an utterance, by default it returns the top intent for that utterance.
-
-If you want the scores for all intents for the utterance, you can provide a flag in the query string of the prediction API.
-
-|Prediction API version|Flag|
-|--|--|
-|V2|`verbose=true`|
-|V3|`show-all-intents=true`|
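-
-For example, a V3 prediction response with `show-all-intents=true` might look like the following sketch; the intent names and scores are placeholders rather than output from a real app:
-
-```json
-{
-  "query": "book me a flight to Rio next week",
-  "prediction": {
-    "topIntent": "BookFlight",
-    "intents": {
-      "BookFlight": { "score": 0.97 },
-      "CheckWeather": { "score": 0.04 },
-      "None": { "score": 0.02 }
-    },
-    "entities": {}
-  }
-}
-```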
-
-## Intent compared to entity
-
-The intent represents the action the application should take for the user, based on the entire utterance. An utterance can have only one top-scoring intent, but it can have many entities.
-
-Create an intent when the user's intention would trigger an action in your client application, like a call to the CheckWeather() function from the table above. Then create entities to represent parameters required to execute the action.
-
-|Intent | Entity | Example utterance |
-||||
-| CheckWeather | { "type": "location", "entity": "Seattle" }<br>{ "type": "builtin.datetimeV2.date","entity": "tomorrow","resolution":"2018-05-23" } | What's the weather like in `Seattle` `tomorrow`? |
-| CheckWeather | { "type": "date_range", "entity": "this weekend" } | Show me the forecast for `this weekend` |
-||||
--
-## None intent
-
-The **None** intent is created but left empty on purpose. The **None** intent is a required intent and can't be deleted or renamed. Fill it with utterances that are outside of your domain.
-
-The **None** intent is the fallback intent, and should have 10% of the total utterances. It is important in every app, because it's used to teach LUIS utterances that are not important in the app domain (subject area). If you do not add any utterances for the **None** intent, LUIS forces an utterance that is outside the domain into one of the domain intents. This will skew the prediction scores by teaching LUIS the wrong intent for the utterance.
-
-When an utterance is predicted as the None intent, the client application can ask more questions or provide a menu to direct the user to valid choices.
-
-## Negative intentions
-
-If you want to determine negative and positive intentions, such as "I **want** a car" and "I **don't** want a car", you can create two intents (one positive, and one negative) and add appropriate utterances for each. Or you can create a single intent and mark the two different positive and negative terms as an entity.
-
-## Intents and patterns
-
-If you have example utterances, which can be defined in part or whole as a regular expression, consider using the [regular expression entity](../luis-concept-entity-types.md#regex-entity) paired with a [pattern](../luis-concept-patterns.md).
-
-Using a regular expression entity guarantees the data extraction so that the pattern is matched. The pattern matching guarantees an exact intent is returned.
-
-## Intent balance
-
-The app domain intents should have a balance of utterances across each intent. For example, do not have most of your intents with 10 utterances and another intent with 500 utterances. This is not balanced. In this situation, you would want to review the intent with 500 utterances to see whether many of those utterances can be reorganized into a [pattern](../luis-concept-patterns.md).
-
-The **None** intent is not included in the balance. That intent should contain 10% of the total utterances in the app.
-
-### Intent limits
-
-Review the [limits](../luis-limits.md) to understand how many intents you can add to a model.
-
-> [!Tip]
-> If you need more than the maximum number of intents, consider whether your system is using too many intents and determine whether multiple intents can be combined into a single intent with entities.
-> Intents that are too similar can make it more difficult for LUIS to distinguish between them. Intents should be varied enough to capture the main tasks that the user is asking for, but they don't need to capture every path your code takes. For example, BookFlight() and FlightCustomerService() might be separate intents in a travel app, but BookInternationalFlight() and BookDomesticFlight() are too similar. If your system needs to distinguish them, use entities or other logic rather than intents.
--
-### Request help for apps with a significant number of intents
-
-If reducing the number of intents or dividing your intents into multiple apps doesn't work for you, contact support. If your Azure subscription includes support services, contact [Azure technical support](https://azure.microsoft.com/support/options/).
--
-## Best practices for intents
-
-### Define distinct intents
-
-Make sure the vocabulary for each intent is just for that intent and not overlapping with a different intent. For example, if you want to have an app that handles travel arrangements such as airline flights and hotels, you can choose to have these subject areas as separate intents or the same intent with entities for specific data inside the utterance.
-
-If the vocabulary between two intents is the same, combine the intents, and use entities.
-
-Consider the following example utterances:
-
-1. Book a flight
-2. Book a hotel
-
-"Book a flight" and "book a hotel" use the same vocabulary of "book a *\<noun\>*". This format is the same so it should be the same intent with the different words of flight and hotel as extracted entities.
-
-### Do add features to intents
-
-Features describe concepts for an intent. A feature can be a phrase list of words that are significant to that intent or an entity that is significant to that intent.
-
-### Do find the sweet spot for intents
-
-Use prediction data from LUIS to determine if your intents are overlapping. Overlapping intents confuse LUIS. The result is that the top scoring intent is too close to another intent. Because LUIS does not use the exact same path through the data for training each time, an overlapping intent has a chance of being first or second in training. You want the utterance's score for each intent to be farther apart, so this variance doesn't happen. Good distinction for intents should result in the expected top intent every time.
-
-### Balance utterances across intents
-
-For LUIS predictions to be accurate, the quantity of example utterances in each intent (except for the None intent) must be relatively equal.
-
-If you have an intent with 500 example utterances and all your other intents with 10 example utterances, the 500-utterance intent will have a higher rate of prediction.
-
-### Add example utterances to none intent
-
-This intent is the fallback intent, indicating everything outside your application. Add one example utterance to the None intent for every 10 example utterances in the rest of your LUIS app.
-
-### Don't add many example utterances to intents
-
-After the app is published, only add utterances from active learning in the development lifecycle process. If utterances are too similar, add a pattern.
-
-### Don't mix the definition of intents and entities
-
-Create an intent for any action your bot will take. Use entities as parameters that make that action possible.
-
-For example, for a bot that will book airline flights, create a **BookFlight** intent. Do not create an intent for every airline or every destination. Use those pieces of data as [entities](../luis-concept-entity-types.md) and mark them in the example utterances.
-
-## Next steps
-
-[How to use intents](../how-to/intents.md)
cognitive-services Patterns Features https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/LUIS/concepts/patterns-features.md
- Title: Patterns and features-
-description: Use this article to learn about patterns and features in LUIS
------- Previously updated : 07/19/2022---
-# Patterns in LUIS apps
---
-Patterns are designed to improve accuracy when multiple utterances are very similar. A pattern allows you to gain more accuracy for an intent without providing several more utterances.
-
-## Patterns solve low intent confidence
-
-Consider a Human Resources app that reports on the organizational chart in relation to an employee. Given an employee's name and relationship, LUIS returns the employees involved. Consider an employee, Tom, with a manager named Alice, and a team of subordinates named: Michael, Rebecca, and Carl.
---
-| Utterances | Intent predicted | Intent score |
-|--|--|--|
-| Who is Tom's subordinate? | GetOrgChart | 0.30 |
-| Who is the subordinate of Tom? | GetOrgChart | 0.30 |
-
-If an app has between 10 and 20 utterances with different lengths of sentence, different word order, and even different words (synonyms of "subordinate", "manage", "report"), LUIS may return a low confidence score. Create a pattern to help LUIS understand the importance of the word order.
-
-Patterns solve the following situations:
-
-* The intent score is low
-* The correct intent is not the top score, but its score is too close to the top score.
-
-## Patterns are not a guarantee of intent
-
-Patterns use a mix of prediction techniques. Setting an intent for a template utterance in a pattern is not a guarantee of the intent prediction, but it is a strong signal.
-
-## Patterns do not improve machine-learning entity detection
-
-A pattern is primarily meant to help the prediction of intents and roles. The _pattern.any_ entity is used to extract free-form entities. While patterns use entities, a pattern does not help detect a machine-learning entity.
-
-Do not expect to see improved entity prediction if you collapse multiple utterances into a single pattern. For simple entities to be utilized by your app, you need to add utterances or use list entities.
-
-## Patterns use entity roles
-
-If two or more entities in a pattern are contextually related, patterns use entity [roles](entities.md) to extract contextual information about entities.
-
-## Prediction scores with and without patterns
-
-Given enough example utterances, LUIS can increase prediction confidence without patterns. Patterns increase the confidence score without having to provide as many utterances.
-
-## Pattern matching
-
-A pattern is matched by detecting the entities inside the pattern first, then validating the rest of the words and word order of the pattern. Entities are required in the pattern for a pattern to match. The pattern is applied at the token level, not the character level.
-
-## Pattern.any entity
-
-The pattern.any entity allows you to find free-form data where the wording of the entity makes it difficult to determine the end of the entity from the rest of the utterance.
-
-For example, consider a Human Resources app that helps employees find company documents. This app might need to understand the following example utterances.
-
-* "_Where is **HRF-123456**?_"
-* "_Who authored **HRF-123234**?_"
-* "_Is **HRF-456098** published in French?_"
-
-However, each document has both a formatted name (used in the above list), and a human-readable name, such as Request relocation from employee new to the company 2018 version 5.
-
-Utterances with the human-readable name might look like:
-
-* "_Where is **Request relocation from employee new to the company 2018 version 5**?_"
-* _"Who authored **"Request relocation from employee new to the company 2018 version 5"**?_"
-* _Is **Request relocation from employee new to the company 2018 version 5** is published in French?_"
-
-The utterances include words that may confuse LUIS about where the entity ends. Using a Pattern.any entity in a pattern allows you to specify the beginning and end of the document name, so LUIS correctly extracts the form name. For example, the following template utterances:
-
-* Where is {FormName}[?]
-* Who authored {FormName}[?]
-* Is {FormName} published in French[?]
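-
-In an exported app, these template utterances and the Pattern.any entity might be represented roughly as follows. This is a sketch only: the FindForm intent name is an assumption, and the property names should be confirmed against an exported app:
-
-```json
-{
-  "patternAnyEntities": [
-    { "name": "FormName", "explicitList": [], "roles": [] }
-  ],
-  "patterns": [
-    { "pattern": "Where is {FormName}[?]", "intent": "FindForm" },
-    { "pattern": "Who authored {FormName}[?]", "intent": "FindForm" },
-    { "pattern": "Is {FormName} published in French[?]", "intent": "FindForm" }
-  ]
-}
-```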
-
-## Best practices for patterns
-
-#### Do add patterns in later iterations
-
-You should understand how the app behaves before adding patterns because patterns are weighted more heavily than example utterances and will skew confidence.
-
-Once you understand how your app behaves, add patterns as they apply to your app. You do not need to add them each time you iterate on the app's design.
-
-There is no harm in adding them in the beginning of your model design, but it is easier to see how each pattern changes the model after the model is tested with utterances.
-
-#### Don't add many patterns
-
-Don't add too many patterns. LUIS is meant to learn quickly with fewer examples. Don't overload the system unnecessarily.
-
-## Features
-
-In machine learning, a _feature_ is a distinguishing trait or attribute of data that your system observes and learns through.
-
-Machine-learning features give LUIS important cues for where to look for things that distinguish a concept. They're hints that LUIS can use, but they aren't hard rules. LUIS uses these hints with the labels to find the data.
-
-A feature can be described as a function, like `f(x) = y`. In the example utterance, the feature tells you where to look for the distinguishing trait. Use this information to help create your schema.
-
-## Types of features
-
-Features are a necessary part of your schema design. LUIS supports both phrase lists and models as features:
-
-* Phrase list feature
-* Model (intent or entity) as a feature
-
-## Find features in your example utterances
-
-Because LUIS is a language-based application, the features are text-based. Choose text that indicates the trait you want to distinguish. For LUIS, the smallest unit is the _token_. For the English language, a token is a contiguous span of letters and numbers that has no spaces or punctuation.
-
-Because spaces and punctuation aren't tokens, focus on the text clues that you can use as features. Remember to include variations of words, such as:
-
-* Plural forms
-* Verb tenses
-* Abbreviations
-* Spellings and misspellings
-
-Determine if the text needs the following because it distinguishes a trait:
-
-* Match an exact word or phrase: Consider adding a regular expression entity or a list entity as a feature to the entity or intent.
-* Match a well-known concept like dates, times, or people's names: Use a prebuilt entity as a feature to the entity or intent.
-* Learn new examples over time: Use a phrase list of some examples of the concept as a feature to the entity or intent.
-
-## Create a phrase list for a concept
-
-A phrase list is a list of words or phrases that describe a concept. A phrase list is applied as a case-insensitive match at the token level.
-
-When adding a phrase list, you can set the feature to [**global**](#global-features). A global feature applies to the entire app.
-
-## When to use a phrase list
-
-Use a phrase list when you need your LUIS app to generalize and identify new items for the concept. Phrase lists are like domain-specific vocabulary. They enhance the quality of understanding for intents and entities.
-
-## How to use a phrase list
-
-With a phrase list, LUIS considers context and generalizes to identify items that are similar to, but aren't, an exact text match. Follow these steps to use a phrase list:
-
-1. Start with a machine-learning entity:
- 1. Add example utterances.
- 2. Label with a machine-learning entity.
-2. Add a phrase list:
- 1. Add words with similar meaning. Don't add every possible word or phrase. Instead, add a few words or phrases at a time. Then retrain and publish.
- 2. Review and add suggested words.
-
-## A typical scenario for a phrase list
-
-A typical scenario for a phrase list is to boost words related to a specific idea.
-
-Medical terms are a good example of words that might need a phrase list to boost their significance. These terms can have specific physical, chemical, therapeutic, or abstract meanings. LUIS won't know the terms are important to your subject domain without a phrase list.
-
-For example, to extract the medical terms:
-
-1. Create example utterances and label medical terms within those utterances.
-2. Create a phrase list with examples of the terms within the subject domain. This phrase list should include the actual term you labeled and other terms that describe the same concept.
-3. Add the phrase list to the entity or subentity that extracts the concept used in the phrase list. The most common scenario is a component (child) of a machine-learning entity. If the phrase list should be applied across all intents or entities, mark the phrase list as a global phrase list. The **enabledForAllModels** flag controls this model scope in the API.
-
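-A `phraselists` entry for such a list might look roughly like the following in an exported app. The words are placeholders, and the exact set of properties (including `enabledForAllModels`) should be checked against an app you export:
-
-```json
-{
-  "phraselists": [
-    {
-      "name": "MedicalTerms",
-      "words": "mri,x-ray,ct scan,biopsy,hematology",
-      "activated": true,
-      "enabledForAllModels": false
-    }
-  ]
-}
-```
-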
-## Token matches for a phrase list
-
-A phrase list always applies at the token level. The following table shows how a phrase list that has the word **Ann** applies to variations of the same characters in that order.
-
-| Token variation of "Ann" | Phrase list match when the token is found |
-|--|--|
-| **ANN** <br> **aNN** | Yes - token is **Ann** |
-| **Ann's** | Yes - token is **Ann** |
-| **Anne** | No - token is **Anne** |
-
-## A model as a feature helps another model
-
-You can add a model (intent or entity) as a feature to another model (intent or entity). By adding an existing intent or entity as a feature, you're adding a well-defined concept that has labeled examples.
-
-When adding a model as a feature, you can set the feature as:
-
-* [**Required**](#required-features). A required feature must be found for the model to be returned from the prediction endpoint.
-* [**Global**](#global-features). A global feature applies to the entire app.
-
-## When to use an entity as a feature to an intent
-
-Add an entity as a feature to an intent when the detection of that entity is significant for the intent.
-
-For example, if the intent is for booking a flight, like **BookFlight**, and the entity is ticket information (such as the number of seats, origin, and destination), then finding the ticket-information entity should add significant weight to the prediction of the **BookFlight** intent.
-
-## When to use an entity as a feature to another entity
-
-An entity (A) should be added as a feature to another entity (B) when the detection of that entity (A) is significant for the prediction of entity (B).
-
-For example, if a shipping-address entity is contained in a street-address subentity, then finding the street-address subentity adds significant weight to the prediction for the shipping address entity.
-
-* Shipping address (machine-learning entity):
- * Street number (subentity)
- * Street address (subentity)
- * City (subentity)
- * State or Province (subentity)
- * Country/Region (subentity)
- * Postal code (subentity)
-
-## Nested subentities with features
-
-A machine-learning subentity indicates to its parent entity that a concept is present. The parent can be another subentity or the top entity. The value of the subentity acts as a feature to its parent.
-
-A subentity can have both a phrase list and a model (another entity) as a feature.
-
-When the subentity has a phrase list, it boosts the vocabulary of the concept but won't add any information to the JSON response of the prediction.
-
-When the subentity has a feature of another entity, the JSON response includes the extracted data of that other entity.
-
-## Required features
-
-A required feature has to be found in order for the model to be returned from the prediction endpoint. Use a required feature when you know your incoming data must match the feature.
-
-If the utterance text doesn't match the required feature, it won't be extracted.
-
-A required feature uses a non-machine-learning entity:
-
-* Regular-expression entity
-* List entity
-* Prebuilt entity
-
-If you're confident that your model will be found in the data, set the feature as required. A required feature doesn't return anything if it isn't found.
-
-Continuing with the example of the shipping address:
-
-Shipping address (machine learned entity)
-
-* Street number (subentity)
-* Street address (subentity)
-* Street name (subentity)
-* City (subentity)
-* State or Province (subentity)
-* Country/Region (subentity)
-* Postal code (subentity)
-
-## Required feature using prebuilt entities
-
-Prebuilt entities such as city, state, and country/region are generally a closed set of lists, meaning they don't change much over time. These entities could have the relevant recommended features and those features could be marked as required. However, the isRequired flag is only related to the entity it is assigned to and doesn't affect the hierarchy. If the prebuilt sub-entity feature is not found, this will not affect the detection and return of the parent entity.
-
-As an example of a required feature, consider you want to detect addresses. You might consider making a street number a requirement. This would allow a user to enter "1 Microsoft Way" or "One Microsoft Way", and both would resolve to the numeral "1" for the street number sub-entity. See the [prebuilt entity](../luis-reference-prebuilt-entities.md) article for more information.
-
-## Required feature using list entities
-
-A [list entity](../reference-entity-list.md) is used as a list of canonical names along with their synonyms. As a required feature, if the utterance doesn't include either the canonical name or a synonym, then the entity isn't returned as part of the prediction endpoint.
-
-Suppose that your company only ships to a limited set of countries/regions. You can create a list entity that includes several ways for your customer to reference the country/region. If LUIS doesn't find an exact match within the text of the utterance, then the entity (that has the required feature of the list entity) isn't returned in the prediction.
-
-| Canonical name | Synonyms |
-|--|--|
-| United States | U.S.<br> U.S.A <br> US <br> USA <br> 0 |
-
-A client application, such as a chat bot, can ask a follow-up question to help. This helps the customer understand that the country/region selection is limited and _required_.
-
-## Required feature using regular expression entities
-
-A [regular expression entity](../reference-entity-regular-expression.md) that's used as a required feature provides rich text-matching capabilities.
-
-In the shipping address example, you can create a regular expression that captures syntax rules of the country/region postal codes.
-
-## Global features
-
-While the most common use is to apply a feature to a specific model, you can configure the feature as a **global feature** to apply it to your entire application.
-
-The most common use for a global feature is to add additional vocabulary to the app. For example, if your customers use a primary language, but expect to be able to use another language within the same utterance, you can add a feature that includes words from the secondary language.
-
-Because the user expects to use the secondary language across any intent or entity, add words from the secondary language to the phrase list. Configure the phrase list as a global feature.
-
-## Combine features for added benefit
-
-You can use more than one feature to describe a trait or concept. A common pairing is to use:
-
-* A phrase list feature: You can use multiple phrase lists as features to the same model.
-* A model as a feature: [prebuilt entity](../luis-reference-prebuilt-entities.md), [regular expression entity](../reference-entity-regular-expression.md), [list entity](../reference-entity-list.md).
-
-## Example: ticket-booking entity features for a travel app
-
-As a basic example, consider an app for booking a flight with a flight-reservation _intent_ and a ticket-booking _entity_. The ticket-booking entity captures the information to book an airplane ticket in a reservation system.
-
-The machine-learning entity for ticket booking has two subentities to capture origin and destination. The features need to be added to each subentity, not the top-level entity.
--
-The ticket-booking entity is a machine-learning entity, with subentities including _Origin_ and _Destination_. These subentities both indicate a geographical location. To help extract the locations, and distinguish between _Origin_ and _Destination_, each subentity should have features.
-
-| Type | Origin subentity | Destination subentity |
-|--|--|--|
-| Model as a feature | [geographyV2](../luis-reference-prebuilt-geographyv2.md) prebuilt entity | [geographyV2](../luis-reference-prebuilt-geographyv2.md) prebuilt entity |
-| Phrase list | **Origin words** : start at, begin from, leave | **Destination words** : to, arrive, land at, go, going, stay, heading |
-| Phrase list | Airport codes - same list for both origin and destination | Airport codes - same list for both origin and destination |
-| Phrase list | Airport names - same list for both origin and destination | Airport names - same list for both origin and destination |
--
-If you anticipate that people use airport codes and airport names, then LUIS should have phrase lists that use both types of phrases. Airport codes may be more common with text entered in a chatbot while airport names may be more common with spoken conversation such as a speech-enabled chatbot.
-
-The matching details of the features are returned only for models, not for phrase lists because only models are returned in prediction JSON.
-
-## Ticket-booking labeling in the intent
-
-After you create the machine-learning entity, you need to add example utterances to an intent, and label the parent entity and all subentities.
-
-For the ticket booking example, label the example utterances in the intent with the TicketBooking entity and any subentities in the text.
---
-## Example: pizza ordering app
-
-For a second example, consider an app for a pizza restaurant, which receives pizza orders including the details of the type of pizza someone is ordering. Each detail of the pizza should be extracted, if possible, in order to complete the order processing.
-
-The machine-learning entity in this example is more complex with nested subentities, phrase lists, prebuilt entities, and custom entities.
--
-This example uses features at the subentity level and child of subentity level. Which level gets what kind of phrase list or model as a feature is an important part of your entity design.
-
-While subentities can have many phrase lists as features that help detect the entity, each subentity has only one model as a feature. In this [pizza app](https://github.com/Azure/pizza_luis_bot/blob/master/CognitiveModels/MicrosoftPizza.json), those models are primarily lists.
--
-The correctly labeled example utterances are displayed in a way that shows how the entities are nested.
-
-## Next steps
-
-* [LUIS application design](application-design.md)
-* [prebuilt models](../luis-concept-prebuilt-model.md)
-* [intents](intents.md)
-* [entities](entities.md).
cognitive-services Utterances https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/LUIS/concepts/utterances.md
- Title: Utterances
-description: Utterances concepts
---
-ms.
-- Previously updated : 07/19/2022-
-# Utterances
---
-Utterances are inputs from users that your app needs to interpret. To train LUIS to extract intents and entities from these inputs, it's important to capture a variety of example utterances for each intent. Active learning, or the process of continuing to train on new utterances, is essential to the machine-learning intelligence that LUIS provides.
-
-Collect utterances that you think users will enter. Include utterances that mean the same thing but are constructed in various ways:
-
-* Utterance length - short, medium, and long for your client application
-* Word and phrase length
-* Word placement - entity at beginning, middle, and end of utterance
-* Grammar
-* Pluralization
-* Stemming
-* Noun and verb choice
-* [Punctuation](../luis-reference-application-settings.md#punctuation-normalization) - using both correct and incorrect grammar
-
-## Choose varied utterances
-
-When you start [adding example utterances](/azure/cognitive-services/luis/luis-how-to-add-entities) to your LUIS model, there are several principles to keep in mind:
-
-## Utterances aren't always well formed
-
-Your app may need to process sentences, like "Book a ticket to Paris for me", or a fragment of a sentence, like "Booking" or "Paris flight". Users also often make spelling mistakes. When planning your app, consider whether or not you want to use [Bing Spell Check](../luis-tutorial-bing-spellcheck.md) to correct user input before passing it to LUIS.
-
-If you do not spell check user utterances, you should train LUIS on utterances that include typos and misspellings.
-
-### Use the representative language of the user
-
-When choosing utterances, be aware that what you think are common terms or phrases might not be common for the typical user of your client application. They might not have domain experience, or they might use different terminology. Be careful when using terms or phrases that a user would only say if they were an expert.
-
-### Choose varied terminology and phrasing
-
-You will find that even if you make efforts to create varied sentence patterns, you will still repeat some vocabulary. For example, the following utterances have similar meaning, but different terminology and phrasing:
-
-* "*How do I get a computer?*"
-* "*Where do I get a computer?*"
-* "*I want to get a computer, how do I go about it?*"
-* "*When can I have a computer?*"
-
-The core term here, _computer_, isn't varied. Use alternatives such as desktop computer, laptop, workstation, or even just machine. LUIS can intelligently infer synonyms from context, but when you create utterances for training, it's always better to vary them.
-
-## Example utterances in each intent
-
-Each intent needs to have example utterances - at least 15. If you have an intent that does not have any example utterances, you will not be able to train LUIS. If you have an intent with only one or a few example utterances, LUIS may not accurately predict the intent.
-
-## Add small groups of utterances
-
-Each time you iterate on your model to improve it, don't add large quantities of utterances. Consider adding utterances in quantities of 15. Then [Train](/azure/cognitive-services/luis/luis-how-to-train), [publish](/azure/cognitive-services/luis/luis-how-to-publish-app), and [test](/azure/cognitive-services/luis/luis-interactive-test) again.
-
-LUIS builds effective models with utterances that are carefully selected by the LUIS model author. Adding too many utterances isn't valuable because it introduces confusion.
-
-It is better to start with a few utterances, then [review the endpoint utterances](/azure/cognitive-services/luis/luis-how-to-review-endpoint-utterances) for correct intent prediction and entity extraction.
-
-## Utterance normalization
-
-Utterance normalization is the process of ignoring the effects of types of text, such as punctuation and diacritics, during training and prediction.
-
-Utterance normalization settings are turned off by default. These settings include:
-
-* Word forms
-* Diacritics
-* Punctuation
-
-If you turn on a normalization setting, scores in the **Test** pane, batch tests, and endpoint queries will change for all utterances for that normalization setting.
-
-When you clone a version in the LUIS portal, the version settings are kept in the new cloned version.
-
-Set your app's version settings in the LUIS portal by selecting **Manage** from the top navigation menu, then opening the **Application Settings** page. You can also use the [Update Version Settings API](https://westus.dev.cognitive.microsoft.com/docs/services/5890b47c39e2bb17b84a55ff/operations/versions-update-application-version-settings). See the [Reference](../luis-reference-application-settings.md) documentation for more information.
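As a rough sketch, the following call turns on punctuation normalization through the version settings API. The resource name, app ID, version, and key are placeholders, and the exact path and payload shape shown here are assumptions; verify them against the API reference linked above.

```python
import requests

# Placeholders - substitute your own authoring resource, app, version, and key.
AUTHORING_HOST = "https://<your-resource-name>.api.cognitive.microsoft.com"
APP_ID = "<app-id>"
VERSION = "0.1"

# Assumed payload shape: a list of name/value pairs for the version settings.
settings = [{"name": "NormalizePunctuation", "value": "true"}]

response = requests.put(
    f"{AUTHORING_HOST}/luis/authoring/v3.0-preview/apps/{APP_ID}/versions/{VERSION}/settings",
    headers={"Ocp-Apim-Subscription-Key": "<authoring-key>"},
    json=settings,
)
response.raise_for_status()
```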
-
-## Word forms
-
-Normalizing **word forms** ignores the differences in words that expand beyond the root.
-
-## Diacritics
-
-Diacritics are marks or signs within the text, such as:
-
-`İ ı Ş Ğ ş ğ ö ü`
-
-## Punctuation marks
-
-Normalizing **punctuation** means that before your models get trained and before your endpoint queries get predicted, punctuation will be removed from the utterances.
-
-Punctuation is a separate token in LUIS. An utterance that contains a period at the end is treated as a different utterance from one that doesn't, and the two may get different predictions.
-
-If punctuation is not normalized, LUIS doesn't ignore punctuation marks, because some client applications may place significance on these marks. Make sure to include example utterances that use punctuation and ones that don't, so that both styles return the same relative scores.
-
-Make sure the model handles punctuation either in the example utterances (both having and not having punctuation) or in [patterns](/azure/cognitive-services/luis/luis-concept-patterns), where it is easier to ignore punctuation. For example: `I am applying for the {Job} position[.]`
-
-If punctuation has no specific meaning in your client application, consider [ignoring punctuation](/azure/cognitive-services/luis/luis-concept-utterance#utterance-normalization) by normalizing punctuation.
-
-## Ignoring words and punctuation
-
-If you want to ignore specific words or punctuation in patterns, use a [pattern](/azure/cognitive-services/luis/luis-concept-patterns#pattern-syntax) with the _ignore_ syntax of square brackets, `[]`.
-
-## Training with all utterances
-
-Training is generally non-deterministic: utterance prediction can vary slightly across versions or apps. You can remove non-deterministic training by calling the [version settings API](https://westus.dev.cognitive.microsoft.com/docs/services/5890b47c39e2bb17b84a55ff/operations/versions-update-application-version-settings) with the `UseAllTrainingData` name/value pair to use all training data.
-
-## Testing utterances
-
-Developers should start testing their LUIS application with real data by sending utterances to the [prediction endpoint](../luis-how-to-azure-subscription.md) URL. These utterances are used to improve the performance of the intents and entities with [Review utterances](/azure/cognitive-services/luis/luis-how-to-review-endpoint-utterances). Tests submitted using the testing pane in the LUIS portal are not sent through the endpoint, and don't contribute to active learning.
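The sketch below sends a single test utterance to the V3 slot prediction endpoint with logging turned on, using the V3 slot prediction URL format; the resource name, app ID, and key are placeholders.

```python
import requests

# Placeholders - substitute your prediction resource, app ID, and prediction key.
PREDICTION_HOST = "https://<your-resource-name>.api.cognitive.microsoft.com"
APP_ID = "<app-id>"

response = requests.get(
    f"{PREDICTION_HOST}/luis/prediction/v3.0/apps/{APP_ID}/slots/production/predict",
    params={
        "query": "book 2 tickets to Paris next Monday",
        "show-all-intents": "true",
        "log": "true",  # log the utterance so it appears for review and active learning
    },
    headers={"Ocp-Apim-Subscription-Key": "<prediction-key>"},
)
print(response.json())
```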
-
-## Review utterances
-
-After your model is trained, published, and receiving [endpoint](../luis-glossary.md#endpoint) queries, [review the utterances](/azure/cognitive-services/luis/luis-how-to-review-endpoint-utterances) suggested by LUIS. LUIS selects endpoint utterances that have low scores for either the intent or entity.
-
-## Best practices
-
-### Label for word meaning
-
-If the word choice or word arrangement is the same but the meaning is different, do not label the text with the entity.
-
-In the following utterances, the word fair is a homograph, which means it's spelled the same but has a different meaning:
-* "*What kind of county fairs are happening in the Seattle area this summer?*"
-* "*Is the current 2-star rating for the restaurant fair?*
-
-If you want an event entity to find all event data, label the word fair in the first utterance, but not in the second.
-
-### Don't ignore possible utterance variations
-
-LUIS expects variations in an intent's utterances. The utterances can vary while having the same overall meaning. Variations can include utterance length, word choice, and word placement.
--
-| Don't use the same format | Do use varying formats |
-|--|--|
-| Buy a ticket to Seattle|Buy 1 ticket to Seattle|
-|Buy a ticket to Paris|Reserve two seats on the red eye to Paris next Monday|
-|Buy a ticket to Orlando |I would like to book 3 tickets to Orlando for spring break |
--
-The second column uses different verbs (buy, reserve, book), different quantities (1, "two", 3), and different arrangements of words, but all of the utterances have the same intent of purchasing airline tickets for travel.
-
-### Don't add too many example utterances to intents
-
-After the app is published, only add utterances from active learning in the development lifecycle process. If utterances are too similar, add a pattern.
-
-## Next steps
-
-* [Intents](intents.md)
-* [Patterns and features concepts](patterns-features.md)
cognitive-services Data Collection https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/LUIS/data-collection.md
- Title: Data collection
-description: Learn what example data to collect while developing your app
--- Previously updated : 05/06/2020--
-# Data collection for your app
---
-A Language Understanding (LUIS) app needs data as part of app development.
-
-## Data used in LUIS
-
-LUIS uses text as data to train and test your LUIS app for classification for [intents](luis-concept-intent.md) and for extraction of [entities](concepts/entities.md). You need a data set large enough to create separate training and test sets that have the diversity and distribution called out below. The data in the two sets should not overlap.
-
-## Training data selection for example utterances
-
-Select utterances for your training set based on the following criteria:
-
-* **Real data is best**:
- * **Real data from client application**: Select utterances that are real data from your client application. If the customer sends a web form with their inquiry today, and you're building a bot, you can start by using the web form data.
- * **Crowd-sourced data**: If you don't have any existing data, consider crowd sourcing utterances. Try to crowd-source utterances from your actual user population for your scenario to get the best approximation of the real data your application will see. Crowd-sourced human utterances are better than computer-generated utterances. When you build a data set of synthetic utterances generated on specific patterns, it will lack much of the natural variation you'll see with people creating the utterances and won't end up generalizing well in production.
-* **Data diversity**:
- * **Region diversity**: Make sure the data for each intent is as diverse as possible, including _phrasing_ (word choice) and _grammar_. If you are teaching an intent about HR policies about vacation days, make sure you have utterances that represent the terms that are used for all regions you're serving. For example, in Europe people might ask about `taking a holiday` and in the US people might ask about `taking vacation days`.
- * **Language diversity**: If you have users with various native languages that are communicating in a second language, make sure to have utterances that represent non-native speakers.
- * **Input diversity**: Consider your data input path. If you are collecting data from one person, department or input device (microphone) you are likely missing diversity that will be important for your app to learn about all input paths.
- * **Punctuation diversity**: Consider that people use varying levels of punctuation in text applications and make sure you have a diversity of how punctuation is used. If you're using data that comes from speech, it won't have any punctuation, so your data shouldn't either.
-* **Data distribution**: Make sure the data spread across intents represents the same spread of data your client application receives. If your LUIS app will classify utterances that are requests to schedule a leave (50%), but it will also see utterances about inquiring about leave days left (20%), approving leaves (20%) and some out of scope and chit chat (10%), then your data set should have the same percentages of each type of utterance (a quick distribution check is sketched after this list).
-* **Use all data forms**: If your LUIS app will take data in multiple forms, make sure to include those forms in your training utterances. For example, if your client application takes both speech and typed text input, you need to have speech to text generated utterances as well as typed utterances. You will see different variations in how people speak from how they type as well as different errors in speech recognition and typos. All of this variation should be represented in your training data.
-* **Positive and negative examples**: To teach a LUIS app, it must learn about what the intent is (positive) and what it is not (negative). In LUIS, utterances can only be positive for a single intent. When an utterance is added to an intent, LUIS automatically makes that same example utterance a negative example for all the other intents.
-* **Data outside of application scope**: If your application will see utterances that fall outside of your defined intents, make sure to provide those. The examples that aren't assigned to a particular defined intent will be labeled with the **None** intent. It's important to have realistic examples for the **None** intent to properly predict utterances that are outside the scope of the defined intents.
-
- For example, if you are creating an HR bot focused on leave time and you have three intents:
- * schedule or edit a leave
- * inquire about available leave days
- * approve/disapprove leave
-
- You want to make sure you have utterances that cover both of those intents, but also that cover potential utterances outside that scope that the application should serve like these:
- * `What are my medical benefits?`
- * `Who is my HR rep?`
- * `tell me a joke`
-* **Rare examples**: Your app will need to have rare examples as well as common examples. If your app has never seen rare examples, it won't be able to identify them in production. If you're using real data, you will be able to more accurately predict how your LUIS app will work in production.
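As a quick sanity check on distribution, the sketch below counts labeled utterances per intent so you can compare the spread against what you expect in production. The data and intent names are hypothetical.

```python
from collections import Counter

# Hypothetical labeled training data as (utterance, intent) pairs.
training_data = [
    ("schedule two days of leave next week", "ScheduleLeave"),
    ("how many vacation days do I have left", "InquireLeaveBalance"),
    ("approve Maria's leave request", "ApproveLeave"),
    ("tell me a joke", "None"),
]

counts = Counter(intent for _, intent in training_data)
total = sum(counts.values())
for intent, count in counts.most_common():
    print(f"{intent}: {count}/{total} ({count / total:.0%})")
```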
-
-### Quality instead of quantity
-
-Consider the quality of your existing data before you add more data. With LUIS, you're using Machine Teaching. The combination of your labels and the machine learning features you define is what your LUIS app uses. It doesn't simply rely on the quantity of labels to make the best prediction. The diversity of examples and their representation of what your LUIS app will see in production is the most important part.
-
-### Preprocessing data
-
-The following preprocessing steps will help build a better LUIS app:
-
-* **Remove duplicates**: Duplicate utterances won't hurt, but they don't help either, so removing them will save labeling time (a minimal dedup sketch follows this list).
-* **Apply same client-app preprocess**: If your client application, which calls the LUIS prediction endpoint, applies data processing at runtime before sending the text to LUIS, you should train the LUIS app on data that is processed in the same way. For example, if your client application uses [Bing Spell Check](../bing-spell-check/overview.md) to correct spelling on inputs to LUIS, correct your training and test utterances before adding to LUIS.
-* **Don't apply new cleanup processes that the client app doesn't use**: If your client app accepts speech-generated text directly without any cleanup such as grammar or punctuation, your utterances need to reflect the same, including any missing punctuation and any other misrecognition you'll need to account for.
-* **Don't clean up data**: Don't get rid of malformed input that you might get from garbled speech recognition, accidental keypresses, or mistyped/misspelled text. If your app will see inputs like these, it's important for it to be trained and tested on them. Add a _malformed input_ intent if you wouldn't expect your app to understand it. Label this data to help your LUIS app predict the correct response at runtime. Your client application can choose an appropriate response to unintelligible utterances such as `Please try again`.
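A minimal sketch of the "remove duplicates" step, assuming your utterances are plain strings; it drops repeats that differ only in case or surrounding whitespace while preserving order.

```python
def remove_duplicate_utterances(utterances):
    """Drop near-duplicate utterances (same text ignoring case and outer whitespace)."""
    seen = set()
    unique = []
    for text in utterances:
        key = text.strip().lower()
        if key not in seen:
            seen.add(key)
            unique.append(text)
    return unique

print(remove_duplicate_utterances(["Book a flight", "book a flight ", "Cancel my trip"]))
# ['Book a flight', 'Cancel my trip']
```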
-
-### Labeling data
-
-* **Label text as if it was correct**: The example utterances should have all forms of an entity labeled. This includes text that is misspelled, mistyped, and mistranslated.
-
-### Data review after LUIS app is in production
-
-[Review endpoint utterances](luis-concept-review-endpoint-utterances.md) to monitor real utterance traffic once you have deployed an app to production. This allows you to update your training utterances with real data, which will improve your app. Any app built with crowd-sourced or non-real scenario data will need to be improved based on its real use.
-
-## Test data selection for batch testing
-
-All of the principles listed above for training utterances apply to utterances you should use for your [test set](./luis-how-to-batch-test.md). Ensure that the distribution across intents and entities mirrors the real distribution as closely as possible.
-
-Don't reuse utterances from your training set in your test set. This improperly biases your results and won't give you the right indication of how your LUIS app will perform in production.
-
-Once the first version of your app is published, you should update your test set with utterances from real traffic to ensure your test set reflects your production distribution and you can monitor realistic performance over time.
-
-## Next steps
-
-[Learn how LUIS alters your data before prediction](luis-concept-data-alteration.md)
cognitive-services Developer Reference Resource https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/LUIS/developer-reference-resource.md
- Title: Developer resources - Language Understanding
-description: SDKs, REST APIs, CLI, help you develop Language Understanding (LUIS) apps in your programming language. Manage your Azure resources and LUIS predictions.
--- Previously updated : 01/12/2021---
-# SDK, REST, and CLI developer resources for Language Understanding (LUIS)
---
-SDKs, REST APIs, and the CLI help you develop Language Understanding (LUIS) apps in your programming language and manage your Azure resources and LUIS predictions.
-
-## Azure resource management
-
-Use the Azure Cognitive Services Management layer to create, edit, list, and delete the Language Understanding or Cognitive Service resource.
-
-Find reference documentation based on the tool:
-
-* [Azure CLI](/cli/azure/cognitiveservices#az-cognitiveservices-list)
-
-* [Azure RM PowerShell](/powershell/module/azurerm.cognitiveservices/#cognitive_services)
--
-## Language Understanding authoring and prediction requests
-
-The Language Understanding service is accessed from an Azure resource you need to create. There are two types of resources:
-
-* Use the **authoring** resource for training to create, edit, train, and publish.
-* Use the **prediction** resource at runtime to send the user's text and receive a prediction.
-
-Learn about the [V3 prediction endpoint](luis-migration-api-v3.md).
-
-Use [Cognitive Services sample code](https://github.com/Azure-Samples/cognitive-services-quickstart-code) to learn and use the most common tasks.
-
-### REST specifications
-
-The [LUIS REST specifications](https://github.com/Azure/azure-rest-api-specs/tree/master/specification/cognitiveservices/data-plane/LUIS), along with all [Azure REST specifications](https://github.com/Azure/azure-rest-api-specs), are publicly available on GitHub.
-
-### REST APIs
-
-Both the authoring and prediction endpoints are available as REST APIs:
-
-|Type|Version|
-|--|--|
-|Authoring|[V2](https://go.microsoft.com/fwlink/?linkid=2092087)<br>[preview V3](https://westeurope.dev.cognitive.microsoft.com/docs/services/luis-programmatic-apis-v3-0-preview)|
-|Prediction|[V2](https://go.microsoft.com/fwlink/?linkid=2092356)<br>[V3](https://westcentralus.dev.cognitive.microsoft.com/docs/services/luis-endpoint-api-v3-0/)|
-
-### REST Endpoints
-
-LUIS currently has two types of endpoints:
-
-* **authoring** on the training endpoint
-* query **prediction** on the runtime endpoint.
-
-|Purpose|URL|
-|--|--|
-|V2 Authoring on training endpoint|`https://{your-resource-name}.api.cognitive.microsoft.com/luis/api/v2.0/apps/{appID}/`|
-|V3 Authoring on training endpoint|`https://{your-resource-name}.api.cognitive.microsoft.com/luis/authoring/v3.0-preview/apps/{appID}/`|
-|V2 Prediction - all predictions on runtime endpoint|`https://{your-resource-name}.api.cognitive.microsoft.com/luis/v2.0/apps/{appId}?q={q}[&timezoneOffset][&verbose][&spellCheck][&staging][&bing-spell-check-subscription-key][&log]`|
-|V3 Prediction - versions prediction on runtime endpoint|`https://{your-resource-name}.api.cognitive.microsoft.com/luis/prediction/v3.0/apps/{appId}/versions/{versionId}/predict?query={query}[&verbose][&log][&show-all-intents]`|
-|V3 Prediction - slot prediction on runtime endpoint|`https://{your-resource-name}.api.cognitive.microsoft.com/luis/prediction/v3.0/apps/{appId}/slots/{slotName}/predict?query={query}[&verbose][&log][&show-all-intents]`|
-
-The following table explains the parameters, denoted with curly braces `{}`, in the previous table.
-
-|Parameter|Purpose|
-|--|--|
-|`your-resource-name`|Azure resource name|
-|`q` or `query`|utterance text sent from client application such as chat bot|
-|`version`|10 character version name|
-|`slot`| `production` or `staging`|
-
-### REST query string parameters
--
-## App schema
-
-The [app schema](app-schema-definition.md) is imported and exported in a `.json` or `.lu` format.
-
-### Language-based SDKs
-
-|Language |Reference documentation|Package|Quickstarts|
-|--|--|--|--|
-|C#|[Authoring](/dotnet/api/microsoft.azure.cognitiveservices.language.luis.authoring)</br>[Prediction](/dotnet/api/microsoft.azure.cognitiveservices.language.luis.runtime)|[NuGet authoring](https://www.nuget.org/packages/Microsoft.Azure.CognitiveServices.Language.LUIS.Authoring/)<br>[NuGet prediction](https://www.nuget.org/packages/Microsoft.Azure.CognitiveServices.Language.LUIS.Runtime/)|[Authoring](./client-libraries-rest-api.md?pivots=rest-api)<br>[Query prediction](./client-libraries-rest-api.md?pivots=rest-api)|
-|Go|[Authoring and prediction](https://godoc.org/github.com/Azure/azure-sdk-for-go/services/cognitiveservices/v2.0/luis)|[SDK](https://github.com/Azure-Samples/cognitive-services-dotnet-sdk-samples/tree/master/LUIS)||
-|Java|[Authoring and prediction](/java/api/overview/azure/cognitiveservices/client/languageunderstanding)|[Maven authoring](https://search.maven.org/artifact/com.microsoft.azure.cognitiveservices/azure-cognitiveservices-luis-authoring)<br>[Maven prediction](https://search.maven.org/artifact/com.microsoft.azure.cognitiveservices/azure-cognitiveservices-luis-runtime)|
-|JavaScript|[Authoring](/javascript/api/@azure/cognitiveservices-luis-authoring/)<br>[Prediction](/javascript/api/@azure/cognitiveservices-luis-runtime/)|[NPM authoring](https://www.npmjs.com/package/@azure/cognitiveservices-luis-authoring)<br>[NPM prediction](https://www.npmjs.com/package/@azure/cognitiveservices-luis-runtime)|[Authoring](./client-libraries-rest-api.md?pivots=rest-api)<br>[Prediction](./client-libraries-rest-api.md?pivots=rest-api)|
-|Python|[Authoring and prediction](./client-libraries-rest-api.md?pivots=rest-api)|[Pip](https://pypi.org/project/azure-cognitiveservices-language-luis/)|[Authoring](./client-libraries-rest-api.md?pivots=rest-api)<br>[Prediction](./client-libraries-rest-api.md?pivots=rest-api)|
--
-### Containers
-
-Language Understanding (LUIS) provides a [container](luis-container-howto.md) so you can run on-premises, contained versions of your app.
-
-### Export and import formats
-
-Language Understanding provides the ability to manage your app and its models in a JSON format, the `.LU` ([LUDown](https://github.com/microsoft/botbuilder-tools/blob/master/packages/Ludown)) format, and a compressed package for the Language Understanding container.
-
-Importing and exporting these formats is available from the APIs and from the LUIS portal. The portal provides import and export as part of the Apps list and Versions list.
-
-## Workshops
-
-* GitHub: (Workshop) [Conversational-AI : NLU using LUIS](https://github.com/GlobalAICommunity/Workshop-Conversational-AI)
-
-## Continuous integration tools
-
-* GitHub: (Preview) [Developing a LUIS app using DevOps practices](https://github.com/Azure-Samples/LUIS-DevOps-Template)
-* GitHub: [NLU.DevOps](https://github.com/microsoft/NLU.DevOps) - Tools supporting continuous integration and deployment for NLU services.
-
-## Bot Framework tools
-
-The bot framework is available as [an SDK](https://github.com/Microsoft/botframework) in a variety of languages and as a service using [Azure Bot Service](https://dev.botframework.com/).
-
-Bot framework provides [several tools](https://github.com/microsoft/botbuilder-tools) to help with Language Understanding, including:
-* [Bot Framework emulator](https://github.com/Microsoft/BotFramework-Emulator/releases) - a desktop application that allows bot developers to test and debug bots built using the Bot Framework SDK
-* [Bot Framework Composer](https://github.com/microsoft/BotFramework-Composer/blob/stable/README.md) - an integrated development tool for developers and multi-disciplinary teams to build bots and conversational experiences with the Microsoft Bot Framework
-* [Bot Framework Samples](https://github.com/microsoft/botbuilder-samples) - in C#, JavaScript, TypeScript, and Python
-
-## Next steps
-
-* Learn about the common [HTTP error codes](luis-reference-response-codes.md)
-* [Reference documentation](../../index.yml) for all APIs and SDKs
-* [Bot framework](https://github.com/Microsoft/botbuilder-dotnet) and [Azure Bot Service](https://dev.botframework.com/)
-* [LUDown](https://github.com/microsoft/botbuilder-tools/blob/master/packages/Ludown)
-* [Cognitive Containers](../cognitive-services-container-support.md)
cognitive-services Encrypt Data At Rest https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/LUIS/encrypt-data-at-rest.md
- Title: Language Understanding service encryption of data at rest-
-description: Microsoft offers Microsoft-managed encryption keys, and also lets you manage your Cognitive Services subscriptions with your own keys, called customer-managed keys (CMK). This article covers data encryption at rest for Language Understanding (LUIS), and how to enable and manage CMK.
------ Previously updated : 08/28/2020-
-#Customer intent: As a user of the Language Understanding (LUIS) service, I want to learn how encryption at rest works.
--
-# Language Understanding service encryption of data at rest
---
-The Language Understanding service automatically encrypts your data when it is persisted to the cloud. The Language Understanding service encryption protects your data and helps you meet your organizational security and compliance commitments.
-
-## About Cognitive Services encryption
-
-Data is encrypted and decrypted using [FIPS 140-2](https://en.wikipedia.org/wiki/FIPS_140-2) compliant [256-bit AES](https://en.wikipedia.org/wiki/Advanced_Encryption_Standard) encryption. Encryption and decryption are transparent, meaning encryption and access are managed for you. Your data is secure by default and you don't need to modify your code or applications to take advantage of encryption.
-
-## About encryption key management
-
-By default, your subscription uses Microsoft-managed encryption keys. There is also the option to manage your subscription with your own keys called customer-managed keys (CMK). CMK offer greater flexibility to create, rotate, disable, and revoke access controls. You can also audit the encryption keys used to protect your data.
-
-## Customer-managed keys with Azure Key Vault
-
-There is also an option to manage your subscription with your own keys. Customer-managed keys (CMK), also known as Bring your own key (BYOK), offer greater flexibility to create, rotate, disable, and revoke access controls. You can also audit the encryption keys used to protect your data.
-
-You must use Azure Key Vault to store your customer-managed keys. You can either create your own keys and store them in a key vault, or you can use the Azure Key Vault APIs to generate keys. The Cognitive Services resource and the key vault must be in the same region and in the same Azure Active Directory (Azure AD) tenant, but they can be in different subscriptions. For more information about Azure Key Vault, see [What is Azure Key Vault?](../../key-vault/general/overview.md).
-
-### Customer-managed keys for Language Understanding
-
-To request the ability to use customer-managed keys, fill out and submit the [LUIS Service Customer-Managed Key Request Form](https://aka.ms/cogsvc-cmk). It will take approximately 3-5 business days to hear back on the status of your request. Depending on demand, you may be placed in a queue and approved as space becomes available. Once approved for using CMK with LUIS, you'll need to create a new Language Understanding resource from the Azure portal and select E0 as the Pricing Tier. The new SKU will function the same as the F0 SKU that is already available except for CMK. Users won't be able to upgrade from the F0 to the new E0 SKU.
-
-![LUIS subscription image](../media/cognitive-services-encryption/luis-subscription.png)
-
-### Limitations
-
-There are some limitations when using the E0 tier with existing/previously created applications:
-
-* Migration to an E0 resource will be blocked. Users will only be able to migrate their apps to F0 resources. After you've migrated an existing resource to F0, you can create a new resource in the E0 tier. Learn more about [migration here](./luis-migration-authoring.md).
-* Moving applications to or from an E0 resource will be blocked. A workaround for this limitation is to export your existing application and import it into an E0 resource.
-* The Bing Spell Check feature isn't supported.
-* Logging end-user traffic is disabled if your application is E0.
-* The Speech priming capability from the Azure Bot service isn't supported for applications in the E0 tier. This feature is available via the Azure Bot Service, which doesn't support CMK.
-* The speech priming capability from the portal requires Azure Blob Storage. For more information, see [bring your own storage](../Speech-Service/speech-encryption-of-data-at-rest.md#bring-your-own-storage-byos-for-customization-and-logging).
-
-### Enable customer-managed keys
-
-A new Cognitive Services resource is always encrypted using Microsoft-managed keys. It's not possible to enable customer-managed keys at the time that the resource is created. Customer-managed keys are stored in Azure Key Vault, and the key vault must be provisioned with access policies that grant key permissions to the managed identity that is associated with the Cognitive Services resource. The managed identity is available only after the resource is created using the Pricing Tier for CMK.
-
-To learn how to use customer-managed keys with Azure Key Vault for Cognitive Services encryption, see:
--- [Configure customer-managed keys with Key Vault for Cognitive Services encryption from the Azure portal](../Encryption/cognitive-services-encryption-keys-portal.md)-
-Enabling customer managed keys will also enable a system assigned managed identity, a feature of Azure AD. Once the system assigned managed identity is enabled, this resource will be registered with Azure Active Directory. After being registered, the managed identity will be given access to the Key Vault selected during customer managed key setup. You can learn more about [Managed Identities](../../active-directory/managed-identities-azure-resources/overview.md).
-
-> [!IMPORTANT]
-> If you disable system assigned managed identities, access to the key vault will be removed and any data encrypted with the customer keys will no longer be accessible. Any features that depend on this data will stop working.
-
-> [!IMPORTANT]
-> Managed identities do not currently support cross-directory scenarios. When you configure customer-managed keys in the Azure portal, a managed identity is automatically assigned under the covers. If you subsequently move the subscription, resource group, or resource from one Azure AD directory to another, the managed identity associated with the resource is not transferred to the new tenant, so customer-managed keys may no longer work. For more information, see **Transferring a subscription between Azure AD directories** in [FAQs and known issues with managed identities for Azure resources](../../active-directory/managed-identities-azure-resources/known-issues.md#transferring-a-subscription-between-azure-ad-directories).
-
-### Store customer-managed keys in Azure Key Vault
-
-To enable customer-managed keys, you must use an Azure Key Vault to store your keys. You must enable both the **Soft Delete** and **Do Not Purge** properties on the key vault.
-
-Only RSA keys of size 2048 are supported with Cognitive Services encryption. For more information about keys, see **Key Vault keys** in [About Azure Key Vault keys, secrets and certificates](../../key-vault/general/about-keys-secrets-certificates.md).
-
-### Rotate customer-managed keys
-
-You can rotate a customer-managed key in Azure Key Vault according to your compliance policies. When the key is rotated, you must update the Cognitive Services resource to use the new key URI. To learn how to update the resource to use a new version of the key in the Azure portal, see the section titled **Update the key version** in [Configure customer-managed keys for Cognitive Services by using the Azure portal](../Encryption/cognitive-services-encryption-keys-portal.md).
-
-Rotating the key does not trigger re-encryption of data in the resource. There is no further action required from the user.
-
-### Revoke access to customer-managed keys
-
-To revoke access to customer-managed keys, use PowerShell or Azure CLI. For more information, see [Azure Key Vault PowerShell](/powershell/module/az.keyvault//) or [Azure Key Vault CLI](/cli/azure/keyvault). Revoking access effectively blocks access to all data in the Cognitive Services resource, as the encryption key is inaccessible by Cognitive Services.
-
-## Next steps
-
-* [LUIS Service Customer-Managed Key Request Form](https://aka.ms/cogsvc-cmk)
-* [Learn more about Azure Key Vault](../../key-vault/general/overview.md)
cognitive-services Faq https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/LUIS/faq.md
- Title: LUIS frequently asked questions
-description: Use this article to see frequently asked questions about LUIS, and troubleshooting information
---
-ms.
-- Previously updated : 07/19/2022--
-# Language Understanding Frequently Asked Questions (FAQ)
----
-## What are the maximum limits for LUIS application?
-
-LUIS has several limit areas. The first is the model limit, which controls intents, entities, and features in LUIS. The second area is quota limits based on key type. A third area of limits is the keyboard combination for controlling the LUIS website. A fourth area is the world region mapping between the LUIS authoring website and the LUIS endpoint APIs. See [LUIS limits](luis-limits.md) for more details.
-
-## What is the difference between Authoring and Prediction keys?
-
-An authoring resource lets you create, manage, train, test, and publish your applications. A prediction resource lets you query your prediction endpoint beyond the 1,000 requests provided by the authoring resource. See [Authoring and query prediction endpoint keys in LUIS](luis-how-to-azure-subscription.md) to learn about the differences between the authoring key and the prediction runtime key.
-
-## Does LUIS support speech to text?
-
-Yes, [speech to text](../speech-service/how-to-recognize-intents-from-speech-csharp.md#luis-and-speech) is provided as an integration with LUIS.
-
-## What are Synonyms and word variations?
-
-LUIS has little or no knowledge of the broader _NLP_ aspects, such as semantic similarity, without explicit identification in examples. For example, the following tokens (words) are three different things until they are used in similar contexts in the examples provided:
-
-* Buy
-* Buying
-* Bought
-
-For semantic similarity in Natural Language Understanding (NLU), you can use [conversational language understanding](../language-service/conversational-language-understanding/overview.md).
-
-## What is the pricing for authoring and prediction?
-Language Understanding has separate resources: one type for authoring and one type for querying the prediction endpoint. Each has its own pricing. See [Resource usage and limits](luis-limits.md#resource-usage-and-limits).
-
-## What are the supported regions?
-
-See [region support](https://azure.microsoft.com/global-infrastructure/services/?products=cognitive-services)
-
-## How does LUIS store data?
-
-LUIS stores data encrypted in an Azure data store corresponding to the region specified by the key. Data used to train the model such as entities, intents, and utterances will be saved in LUIS for the lifetime of the application. If an owner or contributor deletes the app, this data will be deleted with it. If an application hasn't been used in 90 days, it will be deleted. See [Data retention](luis-concept-data-storage.md) for more details about data storage.
-
-## Does LUIS support Customer-Managed Keys (CMK)?
-
-The Language Understanding service automatically encrypts your data when it is persisted to the cloud. The Language Understanding service encryption protects your data and helps you meet your organizational security and compliance commitments. See [the CMK article](encrypt-data-at-rest.md#customer-managed-keys-with-azure-key-vault) for more details about customer-managed keys.
-
-## Is it important to train the None intent?
-
-Yes, it is good to train your **None** intent with utterances, especially as you add more labels to other intents. See [none intent](concepts/intents.md#none-intent) for details.
-
-## How do I edit my LUIS app programmatically?
-
-To edit your LUIS app programmatically, use the [Authoring API](https://go.microsoft.com/fwlink/?linkid=2092087). See [Call LUIS authoring API](get-started-get-model-rest-apis.md) and [Build a LUIS app programmatically using Node.js](luis-tutorial-node-import-utterances-csv.md) for examples of how to call the Authoring API. The Authoring API requires that you use an [authoring key](luis-how-to-azure-subscription.md) rather than an endpoint key. Programmatic authoring allows up to 1,000,000 calls per month and five transactions per second. For more info on the keys you use with LUIS, see [Manage keys](luis-how-to-azure-subscription.md).
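As a hedged sketch of programmatic editing, the request below adds one example utterance to an intent through the V2 authoring endpoint. The resource name, app ID, version, key, and intent name are placeholders, and the exact path and payload shape are assumptions; check the Authoring API reference linked above before relying on them.

```python
import requests

# Placeholders - substitute your authoring resource, key, app, and version.
AUTHORING_HOST = "https://<your-resource-name>.api.cognitive.microsoft.com"
APP_ID = "<app-id>"
VERSION = "0.1"

# Assumed payload shape for adding a single example utterance to an intent.
example = {"text": "book a flight to Paris", "intentName": "BookFlight"}

response = requests.post(
    f"{AUTHORING_HOST}/luis/api/v2.0/apps/{APP_ID}/versions/{VERSION}/example",
    headers={"Ocp-Apim-Subscription-Key": "<authoring-key>"},
    json=example,
)
response.raise_for_status()
print(response.json())
```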
-
-## Should variations of an example utterance include punctuation?
-
-Use one of the following solutions:
-
-* Ignore [punctuation](luis-reference-application-settings.md#punctuation-normalization)
-* Add the different variations as example utterances to the intent
-* Add the pattern of the example utterance with the [syntax to ignore](concepts/utterances.md#utterance-normalization) the punctuation.
-
-## Why is my app getting different scores every time I train?
-
-Enable or disable the use of the non-deterministic training option. When it's disabled, training uses all available data. When it's enabled (the default), training uses a random sample of data each time the app is trained, to be used as negative data for the intent. To make sure that you are getting the same scores every time, train your LUIS app with all your data. See the [training article](how-to/train-test.md#change-deterministic-training-settings-using-the-version-settings-api) for more information.
-
-## I received an HTTP 403 error status code. How do I fix it? Can I handle more requests per second?
-
-You get 403 and 429 error status codes when you exceed the transactions per second or transactions per month for your pricing tier. Increase your pricing tier, or use Language Understanding Docker [containers](luis-container-howto.md).
-
-When you use all of the free 1000 endpoint queries or you exceed your pricing tier's monthly transactions quota, you will receive an HTTP 403 error status code.
-
-To fix this error, you need to either [change your pricing tier](luis-how-to-azure-subscription.md#change-the-pricing-tier) to a higher tier or [create a new resource](luis-get-started-create-app.md#sign-in-to-luis-portal) and assign it to your app.
-
-Solutions for this error include:
-
-* In the [Azure portal](https://portal.azure.com/), navigate to your Language Understanding resource, select **Resource Management** > **Pricing tier**, and change your pricing tier. You don't need to change anything in the Language Understanding portal if your resource is already assigned to your Language Understanding app.
-* If your usage exceeds the highest pricing tier, add more Language Understanding resources with a load balancer in front of them. The [Language Understanding container](luis-container-howto.md) with Kubernetes or Docker Compose can help with this.
-
-An HTTP 429 error code is returned when your transactions per second exceed your pricing tier.
-
-Solutions include:
-
-* You can [increase your pricing tier](luis-how-to-azure-subscription.md#change-the-pricing-tier), if you are not at the highest tier.
-* If your usage exceeds the highest pricing tier, add more Language Understanding resources with a load balancer in front of them. The [Language Understanding container](luis-container-howto.md) with Kubernetes or Docker Compose can help with this.
-* You can gate your client application requests with a [retry policy](/azure/architecture/best-practices/transient-faults#general-guidelines) that you implement yourself when you get this status code, as sketched below.
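A minimal retry sketch for the last option: it backs off when the endpoint returns HTTP 429 and honors a Retry-After header if one is present (whether the service sends that header is an assumption here). The URL, parameters, and key are placeholders.

```python
import time
import requests

def predict_with_retry(url, params, headers, max_attempts=5):
    """Call a LUIS prediction URL, backing off whenever HTTP 429 is returned."""
    for attempt in range(max_attempts):
        response = requests.get(url, params=params, headers=headers)
        if response.status_code != 429:
            response.raise_for_status()
            return response.json()
        # Use Retry-After if provided; otherwise fall back to exponential backoff.
        delay = float(response.headers.get("Retry-After", 2 ** attempt))
        time.sleep(delay)
    raise RuntimeError("Prediction endpoint kept returning 429 after retries.")
```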
-
-## Why does LUIS add spaces to the query around or in the middle of words?
-
-LUIS [tokenizes](luis-glossary.md#token) the utterance based on the [culture](luis-language-support.md#tokenization). Both the original value and the tokenized value are available for [data extraction](luis-concept-data-extraction.md#tokenized-entity-returned).
-
-## What do I do when I expect LUIS requests to go beyond the quota?
-
-LUIS has a monthly quota and a per-second quota, based on the pricing tier of the Azure resource.
-
-If your LUIS app request rate exceeds the allowed [quota rate](https://azure.microsoft.com/pricing/details/cognitive-services/language-understanding-intelligent-services/), you can:
-
-* Spread the load to more LUIS apps with the [same app definition](luis-concept-enterprise.md#use-multiple-apps-with-same-app-definition). This includes, optionally, running LUIS from a [container](./luis-container-howto.md).
-* Create and [assign multiple keys](luis-concept-enterprise.md#assign-multiple-luis-keys-to-same-app) to the app.
-
-## Can I Use multiple apps with same app definition?
-
-Yes, export the original LUIS app and import the app back into separate apps. Each app has its own app ID. When you publish, instead of using the same key across all apps, create a separate key for each app. Balance the load across all apps so that no single app is overwhelmed. Add [Application Insights](luis-csharp-tutorial-bf-v4.md) to monitor usage.
-
-To get the same top intent between all the apps, make sure the prediction score gap between the first and second intents is wide enough that LUIS is not confused and doesn't give different results between apps for minor variations in utterances.
-
-When training these apps, make sure to [train with all data](luis-how-to-train.md#train-with-all-data).
-
-Designate a single main app. Any utterances that are suggested for review should be added to the main app, then moved back to all the other apps. This is either a full export of the app, or loading the labeled utterances from the main app to the other apps. Loading can be done from either the [LUIS](./luis-reference-regions.md) website or the authoring API for a [single utterance](https://westus.dev.cognitive.microsoft.com/docs/services/5890b47c39e2bb17b84a55ff/operations/5890b47c39e2bb052c5b9c08) or for a [batch](https://westus.dev.cognitive.microsoft.com/docs/services/5890b47c39e2bb17b84a55ff/operations/5890b47c39e2bb052c5b9c09).
-
-Schedule a periodic review, such as every two weeks, of [endpoint utterances](luis-how-to-review-endpoint-utterances.md) for active learning, then retrain and republish the app.
-
-## How do I download a log of user utterances?
-
-By default, your LUIS app logs utterances from users. To download a log of utterances that users send to your LUIS app, go to **My Apps**, and select the app. In the contextual toolbar, select **Export Endpoint Logs**. The log is formatted as a comma-separated value (CSV) file.
-
-## How can I disable the logging of utterances?
-
-You can turn off the logging of user utterances by setting `log=false` in the Endpoint URL that your client application uses to query LUIS. However, turning off logging disables your LUIS app's ability to suggest utterances or improve performance that's based on [active learning](luis-concept-review-endpoint-utterances.md#what-is-active-learning). If you set `log=false` because of data-privacy concerns, you can't download a record of those user utterances from LUIS or use those utterances to improve your app.
-
-Logging is the only storage of utterances.
-
-## Why don't I want all my endpoint utterances logged?
-
-If you are using your log for prediction analysis, do not capture test utterances in your log.
-
-## What are the supported languages?
-
-See [supported languages](luis-language-support.md). For multilingual NLU, consider using the new [conversational language understanding (CLU)](../language-service/conversational-language-understanding/overview.md) feature of the Language service.
-
-## Is Language Understanding (LUIS) available on-premises or in a private cloud?
-
-Yes, you can use the LUIS [container](luis-container-howto.md) for these scenarios if you have the necessary connectivity to meter usage.
-
-## How do I integrate LUIS with Azure Bot Services?
-
-Use this [tutorial](/composer/how-to-add-luis) to integrate a LUIS app with a bot.
cognitive-services Get Started Get Model Rest Apis https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/LUIS/get-started-get-model-rest-apis.md
- Title: "How to update your LUIS model using the REST API"-
-description: In this article, add example utterances to change a model and train the app.
------- Previously updated : 11/30/2020-
-zone_pivot_groups: programming-languages-set-one
-#Customer intent: As an API developer familiar with REST but new to the LUIS service, I want to query the LUIS endpoint of a published model so that I can see the JSON prediction response.
--
-# How to update the LUIS model with REST APIs
---
-In this article, you will add example utterances to a Pizza app and train the app. Example utterances are conversational user text mapped to an intent. By providing example utterances for intents, you teach LUIS what kinds of user-supplied text belongs to which intent.
-----
cognitive-services How To Application Settings Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/LUIS/how-to-application-settings-portal.md
- Title: "Application settings"
-description: Configure your application and version settings in the LUIS portal such as utterance normalization and app privacy.
--- Previously updated : 11/30/2020---
-# Application and version settings
---
-Configure your application settings, such as utterance normalization and app privacy, in the LUIS portal.
-
-## View application name, description, and ID
-
-You can edit your application name and description, and copy your App ID. The culture can't be changed.
-
-1. Sign into the [LUIS portal](https://www.luis.ai).
-1. Select an app from the **My apps** list.
-.
-1. Select **Manage** from the top navigation bar, then **Settings** from the left navigation bar.
-
-> [!div class="mx-imgBorder"]
-> ![Screenshot of LUIS portal, Manage section, Application Settings page](media/app-settings/luis-portal-manage-section-application-settings.png)
--
-## Change application settings
-
-To change a setting, select the toggle on the page.
--
-## Change version settings
-
-To change a setting, select the toggle on the page.
--
-## Next steps
-
-* How to [collaborate](luis-how-to-collaborate.md) with other authors
-* [Publish settings](luis-how-to-publish-app.md#configuring-publish-settings)
cognitive-services Improve Application https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/LUIS/how-to/improve-application.md
- Title: How to improve LUIS application
-description: Learn how to improve LUIS application
---
-ms.
-- Previously updated : 01/07/2022--
-# How to improve a LUIS app
---
-Use this article to learn how you can improve your LUIS apps, such as by reviewing predictions for correctness and working with optional text in utterances.
-
-## Active Learning
-
-The process of reviewing endpoint utterances for correct predictions is called active learning. Active learning captures queries that are sent to the endpoint and selects the user utterances that it is unsure of. You review these utterances to select the intent and mark the entities for these real-world utterances. You can then accept these changes into your app's example utterances, and [train](./train-test.md) and [publish](./publish.md) the app. This helps LUIS identify utterances more accurately.
-
-## Log user queries to enable active learning
-
-To enable active learning, you must log user queries. This is accomplished by calling the [endpoint query](../luis-get-started-create-app.md#query-the-v3-api-prediction-endpoint) with the `log=true` query string parameter and value.
-
-> [!Note]
-> To disable active learning, don't log user queries. You can change the query parameters by setting `log=false` in the endpoint query, or omit the `log` parameter because the default value is false for the V3 endpoint.
-
-Use the LUIS portal to construct the correct endpoint query.
-
-1. Sign in to the [LUIS portal](https://www.luis.ai/), and select your **Subscription** and **Authoring resource** to see the apps assigned to that authoring resource.
-2. Open your app by selecting its name on **My Apps** page.
-3. Go to the **Manage** section, then select **Azure resources**.
-4. For the assigned prediction resource, select **Change query parameters**
--
-5. Toggle **Save logs** then save by selecting **Done**.
--
-This action changes the example URL by adding the `log=true` query string parameter. Copy and use the changed example query URL when making prediction queries to the runtime endpoint.
-
-## Correct predictions to align utterances
-
-Each utterance has a suggested intent displayed in the **Predicted Intent** column, and the suggested entities in dotted bounding boxes.
--
-If you agree with the predicted intent and entities, select the check mark next to the utterance. If the check mark is disabled, this means that there is nothing to confirm.
-If you disagree with the suggested intent, select the correct intent from the predicted intent's drop-down list. If you disagree with the suggested entities, start labeling them. After you are done, select the check mark next to the utterance to confirm what you labeled. Select **save utterance** to move it from the review list and add it to its respective intent.
-
-If you are unsure whether you should delete the utterance, either move it to the "*None*" intent, or create a new intent such as *miscellaneous* and move the utterance to it.
-
-## Working with optional text and prebuilt entities
-
-Suppose you have a Human Resources app that handles queries about an organization's personnel. It might allow for current and future dates in the utterance text - text that uses `s`, `'s`, and `?`.
-
-If you create an "*OrganizationChart*" intent, you might consider the following example utterances:
-
-|Intent|Example utterances with optional text and prebuilt entities|
-|:--|:--|
-|OrgChart-Manager|"Who was Jill Jones manager on March 3?"|
-|OrgChart-Manager|"Who is Jill Jones manager now?"|
-|OrgChart-Manager|"Who will be Jill Jones manager in a month?"|
-|OrgChart-Manager|"Who will be Jill Jones manager on March 3?"|
-
-Each of these examples uses:
-* A verb tense: "_was_", "_is_", "_will be_"
-* A date: "_March 3_", "_now_", "_in a month_"
-
-LUIS needs these to make predictions correctly. Notice that the last two examples in the table use almost the same text except for "_in_" and "_on_".
-
-Using patterns, the following example template utterances would allow for optional information:
-
-|Intent|Example utterances with optional text and prebuilt entities|
-|:--|:--|
-|OrgChart-Manager|Who was {EmployeeListEntity}['s] manager [[on]{datetimeV2}?]|
-|OrgChart-Manager|Who is {EmployeeListEntity}['s] manager [[on]{datetimeV2}?]|
-
-The optional square bracket syntax "*[ ]*" lets you add optional text to the template utterance. It can be nested up to a second level, "*[ [ ] ]*", and can include entities or text.
-
-> [!CAUTION]
-> Remember that entities are found first, then the pattern is matched.
-
-### Next Steps:
-
-To test how performance improves, you can access the test console by selecting **Test** in the top panel. For instructions on how to test your app using the test console, see [Train and test your app](/azure/cognitive-services/luis/luis-interactive-test).
cognitive-services Label Utterances https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/LUIS/how-to/label-utterances.md
- Title: How to label example utterances in LUIS
-description: Learn how to label example utterance in LUIS.
---
-ms.
-- Previously updated : 01/05/2022--
-# How to label example utterances
---
-Labeling an entity in an example utterance gives LUIS an example of what the entity is and where the entity can appear in the utterance. You can label machine-learned entities and subentities.
-
-You only label machine-learned entities and subentities. Other entity types can be added as features to them when applicable.
-
-## Label example utterances from the Intent detail page
-
-To label examples of entities within the utterance, select the utterance's intent.
-
-1. Sign in to the [LUIS portal](https://www.luis.ai/), and select your **Subscription** and **Authoring resource** to see the apps assigned to that authoring resource.
-2. Open your app by selecting its name on **My Apps** page.
-3. Select the Intent that has the example utterances you want to label for extraction with an entity.
-4. Select the text you want to label then select the entity.
-
-### Two techniques to label entities
-
-Two labeling techniques are supported on the Intent detail page.
-
-* Select entity or subentity from [Entity Palette](../label-entity-example-utterance.md#label-with-the-entity-palette-visible) then select within example utterance text. This is the recommended technique because you can visually verify you are working with the correct entity or subentity, according to your schema.
-* Select within the example utterance text first. A menu will appear with labeling choices.
-
-### Label with the Entity Palette visible
-
-After you've [planned your schema with entities](../concepts/application-design.md), keep the **Entity palette** visible while labeling. The **Entity palette** is a reminder of what entities you planned to extract.
-
-To access the **Entity Palette** , select the **@** symbol in the contextual toolbar above the example utterance list.
--
-### Label entity from Entity Palette
-
-The entity palette offers an alternative to the previous labeling experience. It allows you to brush over text to instantly label it with an entity.
-
-1. Open the entity palette by selecting the **@** symbol at the top right of the utterance table.
-2. Select the entity from the palette that you want to label. This action is visually indicated with a new cursor. The cursor follows the mouse as you move in the LUIS portal.
-3. In the example utterance, _paint_ the entity with the cursor.
-
- :::image type="content" source="../media/label-utterances/example-1-label-machine-learned-entity-palette-label-action.png" alt-text="A screenshot showing an entity painted with the cursor." lightbox="../media/label-utterances/example-1-label-machine-learned-entity-palette-label-action.png":::
-
-## Add entity as a feature from the Entity Palette
-
-The Entity Palette's lower section allows you to add features to the currently selected entity. You can select from all existing entities and phrase lists or create a new phrase list.
--
-### Label text with a role in an example utterance
-
-> [!TIP]
-> Roles can be replaced by labeling with subentities of a machine-learning entity.
-
-1. Go to the Intent details page, which has example utterances that use the role.
-2. To label with the role, select the entity label (solid line under text) in the example utterance, then select **View in entity pane** from the drop-down list.
-
- :::image type="content" source="../media/add-entities/view-in-entity-pane.png" alt-text="A screenshot showing the view in entity menu." lightbox="../media/add-entities/view-in-entity-pane.png":::
-
- The entity palette opens to the right.
-
-3. Select the entity, then go to the bottom of the palette and select the role.
-
- :::image type="content" source="../media/add-entities/select-role-in-entity-palette.png" alt-text="A screenshot showing where to select a role." lightbox="../media/add-entities/select-role-in-entity-palette.png":::
--
-## Label entity from in-place menu
-
-Labeling in-place allows you to quickly select the text within the utterance and label it. You can also create a machine learning entity or list entity from the labeled text.
-
-Consider the example utterance: "hi, please i want a cheese pizza in 20 minutes".
-
-Select the left-most text, then select the right-most text of the entity. In the menu that appears, pick the entity you want to label.
--
-## Review labeled text
-
-After labeling, review the example utterance and ensure the selected span of text has been underlined with the chosen entity. The solid line indicates the text has been labeled.
--
-## Confirm predicted entity
-
-If there is a dotted-lined box around the span of text, it indicates the text is predicted but _not labeled yet_. To turn the prediction into a label, select the utterance row, then select **Confirm entities** from the contextual toolbar.
-
-<!--:::image type="content" source="../media/add-entities/prediction-confirm.png" alt-text="A screenshot showing confirming prediction." lightbox="../media/add-entities/prediction-confirm.png":::-->
-
-> [!Note]
-> You do not need to label for punctuation. Use [application settings](../luis-reference-application-settings.md) to control how punctuation impacts utterance predictions.
--
-## Unlabel entities
-
-> [!NOTE]
-> Only machine learned entities can be unlabeled. You can't label or unlabel regular expression entities, list entities, or prebuilt entities.
-
-To unlabel an entity, select the entity and select **Unlabel** from the in-place menu.
--
-## Automatic labeling for parent and child entities
-
-If you are labeling for a subentity, the parent will be labeled automatically.
-
-## Automatic labeling for non-machine learned entities
-
-Non-machine learned entities include prebuilt entities, regular expression entities, list entities, and pattern.any entities. LUIS labels these automatically, so you don't need to label them manually.
-
-## Entity prediction errors
-
-Entity prediction errors indicate the predicted entity doesn't match the labeled entity. This is visualized with a caution indicator next to the utterance.
--
-## Next steps
-
-[Train and test your application](train-test.md)
cognitive-services Orchestration Projects https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/LUIS/how-to/orchestration-projects.md
- Title: Use LUIS and question answering
-description: Learn how to use LUIS and question answering using orchestration.
---
-ms.
-- Previously updated : 05/23/2022--
-# Combine LUIS and question answering capabilities
---
-Cognitive Services provides two natural language processing services, [Language Understanding](../what-is-luis.md) (LUIS) and question answering, each with a different purpose. Understand when to use each service and how they complement each other.
-
-Natural language processing (NLP) allows your client application, such as a chat bot, to work with your users' natural language.
-
-## When to use each feature
-
-LUIS and question answering solve different problems. LUIS determines the intent of a user's text (known as an utterance), while question answering determines the answer to a user's text (known as a query).
-
-To pick the correct service, you need to understand the user text coming from your client application, and what information it needs to get from the Cognitive Service features.
-
-As an example, if your chat bot receives the text "How do I get to the Human Resources building on the Seattle north campus", use the table below to understand how each service works with the text.
--
-| Service | Client application determines |
-|||
-| LUIS | Determines user's intention of text - the service doesn't return the answer to the question. For example, this text would be classified as matching a "FindLocation" intent.|
-| Question answering | Returns the answer to the question from a custom knowledge base. For example, this text would be determined as a question, with the static text answer being "Get on the #9 bus and get off at Franklin street". |
-
-## Create an orchestration project
-
-Orchestration helps you connect more than one project and service together. Each connection in the orchestration is represented by a type and relevant data. The intent needs to have a name, a project type (LUIS, question answering, or conversational language understanding), and the name of the project you want to connect to.
-
-You can use orchestration workflow to create new orchestration projects. See [orchestration workflow](../../language-service/orchestration-workflow/how-to/create-project.md) for more information.
-## Set up orchestration between Cognitive Services features
-
-To use an orchestration project to connect LUIS, question answering, and conversational language understanding, you need:
-
-* A language resource in [Language Studio](https://language.azure.com/) or the Azure portal.
-* To change your LUIS authoring resource to the Language resource. You can also optionally export your application from LUIS, and then [import it into conversational language understanding](../../language-service/orchestration-workflow/how-to/create-project.md#import-an-orchestration-workflow-project).
-
->[!Note]
->LUIS can be used with Orchestration projects in West Europe only, and requires the authoring resource to be a Language resource. You can either import the application in the West Europe Language resource or change the authoring resource from the portal.
-
-## Change a LUIS resource to a Language resource
-
-Follow these steps to change your LUIS authoring resource to a Language resource:
-
-1. Log in to the [LUIS portal](https://www.luis.ai/).
-2. From the list of LUIS applications, select the application you want to change to a Language resource.
-3. From the menu at the top of the screen, select **Manage**.
-4. From the left menu, select **Azure resource**.
-5. Select **Authoring resource**, then change your LUIS authoring resource to the Language resource.
--
-## Next steps
-
-* [Conversational language understanding documentation](../../language-service/conversational-language-understanding/how-to/create-project.md)
cognitive-services Publish https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/LUIS/how-to/publish.md
- Title: Publish
-description: Learn how to publish.
---
-ms.
-- Previously updated : 12/14/2021--
-# Publish your active, trained app
---
-When you finish building, training, and testing your active LUIS app, you make it available to your client application by publishing it to an endpoint.
-
-## Publishing
-
-1. Sign in to the [LUIS portal](https://www.luis.ai/), and select your **Subscription** and **Authoring resource** to see the apps assigned to that authoring resource.
-2. Open your app by selecting its name on **My Apps** page.
-3. To publish to the endpoint, select **Publish** in the top-right corner of the panel.
-
- :::image type="content" source="../media/luis-how-to-publish-app/publish-top-nav-bar.png" alt-text="A screenshot showing the navigation bar at the top of the screen, with the publish button in the top-right." lightbox="../media/luis-how-to-publish-app/publish-top-nav-bar.png":::
-
-1. Select your settings for the published prediction endpoint, then select **Publish**.
--
-## Publishing slots
-
-Select the correct slot when the pop-up window displays:
-
-* Staging
-* Production
-
-By using both publishing slots, you can have two different versions of your app available at the published endpoints, or the same version on two different endpoints.
-
-## Publish in more than one region
-
-The app is published to all regions associated with the LUIS prediction resources. You can find your LUIS prediction resources in the LUIS portal by clicking **Manage** from the top navigation menu, and selecting [Azure Resources](../luis-how-to-azure-subscription.md#assign-luis-resources).
-
-For example, if you add two prediction resources in two regions, **westus** and **eastus**, to the app, the app is published in both regions. For more information about LUIS regions, see [Regions](../luis-reference-regions.md).
-
-## Configure publish settings
-
-After you select the slot, configure the publish settings for:
-
-* Sentiment analysis:
-Sentiment analysis allows LUIS to integrate with the Language service to provide sentiment and key phrase analysis. You do not have to provide a Language service key and there is no billing charge for this service to your Azure account. See [Sentiment analysis](../luis-reference-prebuilt-sentiment.md) for more information about the sentiment analysis JSON endpoint response.
-
-* Speech priming:
-Speech priming is the process of sending the LUIS model output to the Speech service prior to converting the text to speech. This allows the Speech service to provide speech conversion more accurately for your model, and allows Speech and LUIS requests and responses in one call by making one speech call and getting back a LUIS response, which reduces overall latency.
-
-After you publish, these settings are available for review from the **Manage** section's **Publish settings** page. You can change the settings with every publish. If you cancel a publish, any changes you made during the publish are also canceled.
-
-## When your app is published
-
-When your app is successfully published, a success notification will appear at the top of the browser. The notification also includes a link to the endpoints.
-
-If you need the endpoint URL, select the link or select **Manage** in the top menu, then select **Azure Resources** in the left menu.
--
-## Next steps
-- See [Manage keys](../luis-how-to-azure-subscription.md) to add keys to LUIS.
-- See [Train and test your app](/azure/cognitive-services/luis/luis-interactive-test) for instructions on how to test your published app in the test console.
cognitive-services Sign In https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/LUIS/how-to/sign-in.md
- Title: Sign in to the LUIS portal and create an app
-description: Learn how to sign in to LUIS and create application.
---
-ms.
-- Previously updated : 07/19/2022-
-# Sign in to the LUIS portal and create an app
---
-Use this article to get started with the LUIS portal, and create an authoring resource. After completing the steps in this article, you will be able to create and publish LUIS apps.
-
-## Access the portal
-
-1. To get started with LUIS, go to the [LUIS Portal](https://www.luis.ai/). If you do not already have a subscription, you will be prompted to create a [free account](https://azure.microsoft.com/free/cognitive-services/) and return to the portal.
-2. Refresh the page to update it with your newly created subscription.
-3. Select your subscription from the dropdown list.
--
-4. If your subscription lives under another tenant, you will not be able to switch tenants from the existing window. You can switch tenants by closing this window and selecting the avatar containing your initials in the top-right section of the screen. Click on **Choose a different authoring resource** from the top to reopen the window.
--
-5. If you have an existing LUIS authoring resource associated with your subscription, choose it from the dropdown list. You can view all applications that are created under this authoring resource.
-6. If not, then click on **Create a new authoring resource** at the bottom of this modal.
-7. When creating a new authoring resource, provide the following information:
--
-* **Tenant Name** - the tenant your Azure subscription is associated with. You will not be able to switch tenants from the existing window. You can switch tenants by closing this window and selecting the avatar at the top-right corner of the screen, containing your initials. Select **Choose a different authoring resource** from the top to reopen the window.
-* **Azure Resource group name** - a custom resource group name you choose in your subscription. Resource groups allow you to group Azure resources for access and management. If you currently do not have a resource group in your subscription, you will not be allowed to create one in the LUIS portal. Go to [Azure portal](https://portal.azure.com/#create/Microsoft.ResourceGroup) to create one then go to LUIS to continue the sign-in process.
-* **Azure Resource name** - a custom name you choose, used as part of the URL for your authoring transactions. Your resource name can only include alphanumeric characters, `-`, and can't start or end with `-`. If any other symbols are included in the name, creating a resource will fail.
-* **Location** - Choose to author your applications in one of the [three authoring locations](../luis-reference-regions.md) currently supported by LUIS: West US, West Europe, and Australia East.
-* **Pricing tier** - By default, the F0 authoring pricing tier is selected, as it is the recommended tier. Create a [customer managed key](../encrypt-data-at-rest.md#customer-managed-keys-for-language-understanding) from the Azure portal if you are looking for an extra layer of security.
-
-8. You have successfully signed in to LUIS. You can now start creating applications.
-
->[!Note]
-> * When creating a new resource, make sure that the resource name only includes alphanumeric characters, '-', and can't start or end with '-'. Otherwise, it will fail.
--
-## Create a new LUIS app
-There are a couple of ways to create a LUIS app. You can create a LUIS app in the LUIS portal, or through the LUIS authoring [APIs](../developer-reference-resource.md).
-
-**Using the LUIS portal** You can create a new app in the portal in several ways:
-* Start with an empty app and create intents, utterances, and entities.
-* Start with an empty app and add a [prebuilt domain](../luis-concept-prebuilt-model.md?branch=pr-en-us-181263).
-* Import a LUIS app from a .lu or .json file that already contains intents, utterances, and entities.
-
-**Using the authoring APIs** You can create a new app with the authoring APIs in a couple of ways:
-* [Add application](https://westeurope.dev.cognitive.microsoft.com/docs/services/luis-programmatic-apis-v3-0-preview/operations/5890b47c39e2bb052c5b9c2f) - start with an empty app and create intents, utterances, and entities.
-* [Add prebuilt application](https://westeurope.dev.cognitive.microsoft.com/docs/services/luis-programmatic-apis-v3-0-preview/operations/59104e515aca2f0b48c76be5) - start with a prebuilt domain, including intents, utterances, and entities.
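-
-As a minimal sketch (not an official sample), the following C# console snippet shows how the **Add application** call above might be made with `HttpClient`. The endpoint host, key placeholder, and body fields (`name`, `culture`, `initialVersionId`) are illustrative assumptions; check the linked API reference for the exact contract.
-
-```csharp
-// Sketch: create an empty LUIS app with the authoring "Add application" API.
-// Replace the placeholder endpoint and key with your own authoring resource values.
-using System;
-using System.Net.Http;
-using System.Text;
-using System.Threading.Tasks;
-
-class CreateLuisApp
-{
-    static async Task Main()
-    {
-        var authoringKey = "<YOUR-AUTHORING-KEY>";
-        var authoringEndpoint = "https://<your-authoring-resource>.cognitiveservices.azure.com";
-
-        // Minimal app definition: a name, a culture, and an initial version ID (assumed fields).
-        var body = "{ \"name\": \"Pizza Tutorial\", \"culture\": \"en-us\", \"initialVersionId\": \"0.1\" }";
-
-        using var client = new HttpClient();
-        client.DefaultRequestHeaders.Add("Ocp-Apim-Subscription-Key", authoringKey);
-
-        // POST the app definition; the response body contains the new app ID.
-        var response = await client.PostAsync(
-            $"{authoringEndpoint}/luis/authoring/v3.0-preview/apps/",
-            new StringContent(body, Encoding.UTF8, "application/json"));
-
-        response.EnsureSuccessStatusCode();
-        Console.WriteLine(await response.Content.ReadAsStringAsync());
-    }
-}
-```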
-
-## Create new app in LUIS using portal
-1. On **My Apps** page, select your **Subscription** , and **Authoring resource** then click on **+ New App**.
-
-1. In the dialog box, enter the name of your application, such as Pizza Tutorial.
-2. Choose your application culture, and then select **Done**. The description and prediction resource are optional at this point. You can set them at any time in the **Manage** section of the portal.
- >[!NOTE]
- > The culture cannot be changed once the application is created.
-
- After the app is created, the LUIS portal shows the **Intents** list with the None intent already created for you. You now have an empty app.
-
- :::image type="content" source="../media/pizza-tutorial-new-app-empty-intent-list.png" alt-text="Intents list with a None intent and no example utterances" lightbox="../media/pizza-tutorial-new-app-empty-intent-list.png":::
-
-
-## Next steps
-
-If your app design includes intent detection, [create new intents](intents.md), and add example utterances. If your app design is only data extraction, add example utterances to the None intent, then [create entities](entities.md), and label the example utterances with those entities.
cognitive-services Train Test https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/LUIS/how-to/train-test.md
- Title: How to use train and test
-description: Learn how to train and test the application.
---
-ms.
-- Previously updated : 01/10/2022--
-# Train and test your LUIS app
---
-Training is the process of teaching your Language Understanding (LUIS) app to extract intent and entities from user utterances. Training comes after you make updates to the model, such as: adding, editing, labeling, or deleting entities, intents, or utterances.
-
-Training and testing an app is an iterative process. After you train your LUIS app, you test it with sample utterances to see if the intents and entities are recognized correctly. If they're not, you should make updates to the LUIS app, then train and test again.
-
-Training is applied to the active version in the LUIS portal.
-
-## How to train interactively
-
-Before you start training your app in the [LUIS portal](https://www.luis.ai/), make sure every intent has at least one utterance. You must train your LUIS app at least once to test it.
-
-1. Access your app by selecting its name on the **My Apps** page.
-2. In your app, select **Train** in the top-right part of the screen.
-3. When training is complete, a notification appears at the top of the browser.
-
->[!Note]
->The training dates and times are in GMT + 2.
-
-## Start the training process
-
-> [!TIP]
->You do not need to train after every single change. Training should be done after a group of changes is applied to the model, or if you want to test or publish the app.
-
-To train your app in the LUIS portal, you only need to select the **Train** button on the top-right corner of the screen.
-
-Training with the REST APIs is a two-step process.
-
-1. Send an HTTP POST [request for training](https://westus.dev.cognitive.microsoft.com/docs/services/5890b47c39e2bb17b84a55ff/operations/5890b47c39e2bb052c5b9c45).
-2. Request the [training status](https://westus.dev.cognitive.microsoft.com/docs/services/5890b47c39e2bb17b84a55ff/operations/5890b47c39e2bb052c5b9c46) with an HTTP GET request.
-
-In order to know when training is complete, you must poll the status until all models are successfully trained.
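-
-The following is a minimal C# sketch of that two-step flow, assuming a v3.0-preview authoring endpoint and placeholder values for the resource name, key, app ID, and version ID. The status check is a simplification (string matching rather than JSON parsing).
-
-```csharp
-// Sketch: start training (POST), then poll the training status (GET) until done.
-using System;
-using System.Net.Http;
-using System.Threading.Tasks;
-
-class TrainLuisApp
-{
-    static async Task Main()
-    {
-        var key = "<YOUR-AUTHORING-KEY>";
-        var trainUrl = "https://<your-authoring-resource>.cognitiveservices.azure.com" +
-                       "/luis/authoring/v3.0-preview/apps/<APP-ID>/versions/<VERSION-ID>/train";
-
-        using var client = new HttpClient();
-        client.DefaultRequestHeaders.Add("Ocp-Apim-Subscription-Key", key);
-
-        // Step 1: request training. The POST has no body.
-        var start = await client.PostAsync(trainUrl, null);
-        start.EnsureSuccessStatusCode();
-
-        // Step 2: poll the status until no model is still queued or in progress.
-        string status;
-        do
-        {
-            await Task.Delay(TimeSpan.FromSeconds(2));
-            status = await client.GetStringAsync(trainUrl);
-            Console.WriteLine(status);
-        }
-        while (status.Contains("InProgress") || status.Contains("Queued"));
-    }
-}
-```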
-
-## Test your application
-
-Testing is the process of providing sample utterances to LUIS and getting a response of recognized intents and entities. You can test your LUIS app interactively one utterance at a time, or provide a set of utterances. While testing, you can compare the current active model's prediction response to the published model's prediction response.
-
-Testing an app is an iterative process. After training your LUIS app, test it with sample utterances to see if the intents and entities are recognized correctly. If they're not, make updates to the LUIS app, train, and test again.
-
-## Interactive testing
-
-Interactive testing is done from the **Test** panel of the LUIS portal. You can enter an utterance to see how intents and entities are identified and scored. If LUIS isn't predicting an utterance's intents and entities as you would expect, copy the utterance to the **Intent** page as a new utterance. Then label parts of that utterance for entities to train your LUIS app.
-
-See [batch testing](../luis-how-to-batch-test.md) if you are testing more than one utterance at a time, and the [Prediction scores](../luis-concept-prediction-score.md) article to learn more about prediction scores.
--
-## Test an utterance
-
-The test utterance should not be exactly the same as any example utterances in the app. The test utterance should include the word choice, phrase length, and entity usage you expect from a user.
-
-1. Sign in to the [LUIS portal](https://www.luis.ai/), and select your **Subscription** and **Authoring resource** to see the apps assigned to that authoring resource.
-2. Open your app by selecting its name on **My Apps** page.
-3. Select **Test** in the top-right corner of the screen for your app, and a panel will slide into view.
--
-4. Enter an utterance in the text box and press the enter button on the keyboard. You can test a single utterance in the **Test** box, or multiple utterances as a batch in the **Batch testing panel**.
-5. The utterance, its top intent, and score are added to the list of utterances under the text box. In the above example, this is displayed as 'None (0.43)'.
-
-## Inspect the prediction
-
-Inspect the test result details in the **Inspect** panel.
-
-1. With the **Test** panel open, select **Inspect** for an utterance you want to compare. **Inspect** is located next to the utterance's top intent and score. Refer to the above image.
-
-2. The **Inspection** panel will appear. The panel includes the top scoring intent and any identified entities. The panel shows the prediction of the selected utterance.
--
-> [!TIP]
->From the inspection panel, you can add the test utterance to an intent by selecting **Add to example utterances**.
-
-## Change deterministic training settings using the version settings API
-
-Use the [Version settings API](https://westus.dev.cognitive.microsoft.com/docs/services/5890b47c39e2bb17b84a55ff/operations/versions-update-application-version-settings) with the UseAllTrainingData set to *true* to turn off deterministic training.
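-
-As a rough sketch, the request might look like the following. The endpoint path and the request body shape are assumptions, so confirm them against the Version settings API reference linked above; the resource name, key, app ID, and version ID are placeholders.
-
-```csharp
-// Sketch: set UseAllTrainingData to true with a PUT to the version settings API.
-// The path and body shape below are assumptions; verify against the API reference.
-using System;
-using System.Net.Http;
-using System.Text;
-using System.Threading.Tasks;
-
-class UpdateVersionSettings
-{
-    static async Task Main()
-    {
-        var key = "<YOUR-AUTHORING-KEY>";
-        var url = "https://<your-authoring-resource>.cognitiveservices.azure.com" +
-                  "/luis/authoring/v3.0-preview/apps/<APP-ID>/versions/<VERSION-ID>/settings";
-
-        using var client = new HttpClient();
-        client.DefaultRequestHeaders.Add("Ocp-Apim-Subscription-Key", key);
-
-        var body = "{ \"UseAllTrainingData\": true }";
-        var response = await client.PutAsync(url, new StringContent(body, Encoding.UTF8, "application/json"));
-
-        response.EnsureSuccessStatusCode();
-        Console.WriteLine((int)response.StatusCode);
-    }
-}
-```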
-
-## Change deterministic training settings using the LUIS portal
-
-Log into the [LUIS portal](https://www.luis.ai/) and click on your app. Select **Manage** at the top of the screen, then select **Settings**. Enable or disable the **use non-deterministic training** option. When it is enabled, training uses only a _random_ sample of data from other intents as negative data when training each intent. When it is disabled, training uses all available data.
--
-## View sentiment results
-
-If sentiment analysis is configured on the [**Publish**](/azure/cognitive-services/luis/luis-how-to-publish-app#enable-sentiment-analysis) page, the test results will include the sentiment found in the utterance.
-
-## Correct matched pattern's intent
-
-If you are using [Patterns](/azure/cognitive-services/luis/luis-concept-patterns) and the utterance matched a pattern, but the wrong intent was predicted, select the **Edit** link by the pattern and select the correct intent.
-
-## Compare with published version
-
-You can test the active version of your app with the published [endpoint](../luis-glossary.md#endpoint) version. In the **Inspect** panel, select **Compare with published**.
-> [!NOTE]
-> Any testing against the published model is deducted from your Azure subscription quota balance.
--
-## View endpoint JSON in test panel
-
-You can view the endpoint JSON returned for the comparison by selecting the **Show JSON view** in the top-right corner of the panel.
--
-## Next steps
-
-If you need to test a batch of utterances, see [batch testing](../luis-how-to-batch-test.md).
-
-If testing indicates that your LUIS app doesn't recognize the correct intents and entities, you can work to improve your LUIS app's accuracy by labeling more utterances or adding features.
-
-* [Improve your application](./improve-application.md?branch=pr-en-us-181263)
-* [Publishing your application](./publish.md?branch=pr-en-us-181263)
cognitive-services Howto Add Prebuilt Models https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/LUIS/howto-add-prebuilt-models.md
- Title: Prebuilt models for Language Understanding-
-description: LUIS includes a set of prebuilt models for quickly adding common, conversational user scenarios.
------ Previously updated : 05/17/2020---
-# Add prebuilt models for common usage scenarios
---
-LUIS includes a set of prebuilt models for quickly adding common, conversational user scenarios. This is a quick and easy way to add abilities to your conversational client application without having to design the models for those abilities.
-
-## Add a prebuilt domain
-
-1. Sign in to the [LUIS portal](https://www.luis.ai), and select your **Subscription** and **Authoring resource** to see the apps assigned to that authoring resource.
-1. Open your app by selecting its name on **My Apps** page.
-
-1. Select **Prebuilt Domains** from the left toolbar.
-
-1. Find the domain you want to add to the app, then select the **Add domain** button.
-
- > [!div class="mx-imgBorder"]
- > ![Add Calendar prebuilt domain](./media/luis-prebuilt-domains/add-prebuilt-domain.png)
-
-## Add a prebuilt intent
-
-1. Sign in to the [LUIS portal](https://www.luis.ai), and select your **Subscription** and **Authoring resource** to see the apps assigned to that authoring resource.
-1. Open your app by selecting its name on **My Apps** page.
-
-1. On the **Intents** page, select **Add prebuilt domain intent** from the toolbar above the intents list.
-
-1. Select an intent from the pop-up dialog.
-
- > [!div class="mx-imgBorder"]
- > ![Add prebuilt intent](./media/luis-prebuilt-domains/add-prebuilt-domain-intents.png)
-
-1. Select the **Done** button.
-
-## Add a prebuilt entity
-1. Sign in to the [LUIS portal](https://www.luis.ai), and select your **Subscription** and **Authoring resource** to see the apps assigned to that authoring resource.
-1. Open your app by selecting its name on **My Apps** page.
-1. Select **Entities** on the left side.
-
-1. On the **Entities** page, select **Add prebuilt entity**.
-
-1. In **Add prebuilt entities** dialog box, select the prebuilt entity.
-
- > [!div class="mx-imgBorder"]
- > ![Add prebuilt entity dialog box](./media/luis-prebuilt-domains/add-prebuilt-entity.png)
-
-1. Select **Done**. After the entity is added, you do not need to train the app.
-
-## Add a prebuilt domain entity
-1. Sign in to the [LUIS portal](https://www.luis.ai), and select your **Subscription** and **Authoring resource** to see the apps assigned to that authoring resource.
-1. Open your app by selecting its name on **My Apps** page.
-1. Select **Entities** on the left side.
-
-1. On the **Entities** page, select **Add prebuilt domain entity**.
-
-1. In **Add prebuilt domain models** dialog box, select the prebuilt domain entity.
-
-1. Select **Done**. After the entity is added, you do not need to train the app.
-
-## Publish to view prebuilt model from prediction endpoint
-
-The easiest way to view the value of a prebuilt model is to query from the published endpoint.
-
-## Entities containing a prebuilt entity token
-
-If you have a machine-learning entity that needs a required feature of a prebuilt entity, add a subentity to the machine-learning entity, then add a _required_ feature of a prebuilt entity.
-
-## Next steps
-> [!div class="nextstepaction"]
-> [Build model from .csv with REST APIs](./luis-tutorial-node-import-utterances-csv.md)
cognitive-services Luis Concept Data Alteration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/LUIS/luis-concept-data-alteration.md
- Title: Data alteration - LUIS
-description: Learn how data can be changed before predictions in Language Understanding (LUIS)
--- Previously updated : 05/06/2020---
-# Alter utterance data before or during prediction
--
-LUIS provides ways to manipulate the utterance before or during the prediction. These include [fixing spelling](luis-tutorial-bing-spellcheck.md), and fixing timezone issues for prebuilt [datetimeV2](luis-reference-prebuilt-datetimev2.md).
-
-## Correct spelling errors in utterance
--
-### V3 runtime
-
-Preprocess text for spelling corrections before you send the utterance to LUIS. Use example utterances with the correct spelling to ensure you get the correct predictions.
-
-Use [Bing Spell Check](../bing-spell-check/overview.md) to correct text before sending it to LUIS.
-
-### Prior to V3 runtime
-
-LUIS uses [Bing Spell Check API V7](../Bing-Spell-Check/overview.md) to correct spelling errors in the utterance. LUIS needs the key associated with that service. Create the key, then add the key as a querystring parameter at the [endpoint](https://go.microsoft.com/fwlink/?linkid=2092356).
-
-The endpoint requires two params for spelling corrections to work:
-
-|Param|Value|
-|--|--|
-|`spellCheck`|boolean|
-|`bing-spell-check-subscription-key`|[Bing Spell Check API V7](https://azure.microsoft.com/services/cognitive-services/spell-check/) endpoint key|
-
-When [Bing Spell Check API V7](https://azure.microsoft.com/services/cognitive-services/spell-check/) detects an error, the original utterance, and the corrected utterance are returned along with predictions from the endpoint.
-
-#### [V2 prediction endpoint response](#tab/V2)
-
-```JSON
-{
- "query": "Book a flite to London?",
- "alteredQuery": "Book a flight to London?",
- "topScoringIntent": {
- "intent": "BookFlight",
- "score": 0.780123
- },
- "entities": []
-}
-```
-
-#### [V3 prediction endpoint response](#tab/V3)
-
-```JSON
-{
- "query": "Book a flite to London?",
- "prediction": {
- "normalizedQuery": "book a flight to london?",
- "topIntent": "BookFlight",
- "intents": {
- "BookFlight": {
- "score": 0.780123
- }
- },
-        "entities": {}
- }
-}
-```
-
-* * *
-
-### List of allowed words
-The Bing spell check API used in LUIS does not support a list of words to ignore during the spell check alterations. If you need to allow a list of words or acronyms, process the utterance in the client application before sending the utterance to LUIS for intent prediction.
-
-## Change time zone of prebuilt datetimeV2 entity
-When a LUIS app uses the prebuilt [datetimeV2](luis-reference-prebuilt-datetimev2.md) entity, a datetime value can be returned in the prediction response. The timezone of the request is used to determine the correct datetime to return. If the request is coming from a bot or another centralized application before getting to LUIS, correct the timezone LUIS uses.
-
-### V3 prediction API to alter timezone
-
-In V3, the `datetimeReference` determines the timezone offset. Learn more about [V3 predictions](luis-migration-api-v3.md#v3-post-body).
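-
-As an illustrative sketch (placeholder endpoint, app ID, and key), a V3 POST request passes `datetimeReference` inside the `options` object of the request body:
-
-```csharp
-// Sketch: V3 prediction POST with options.datetimeReference so that relative
-// dates such as "tomorrow" resolve against the caller's reference time.
-using System;
-using System.Net.Http;
-using System.Text;
-using System.Threading.Tasks;
-
-class PredictWithDatetimeReference
-{
-    static async Task Main()
-    {
-        var predictionKey = "<YOUR-PREDICTION-KEY>";
-        var url = "https://westus.api.cognitive.microsoft.com/luis/v3.0-preview/apps/<APP-ID>" +
-                  "/slots/production/predict";
-
-        var body = @"{
-            ""query"": ""book a meeting for tomorrow at 9am"",
-            ""options"": { ""datetimeReference"": ""2020-05-05T12:00:00"" }
-        }";
-
-        using var client = new HttpClient();
-        client.DefaultRequestHeaders.Add("Ocp-Apim-Subscription-Key", predictionKey);
-
-        var response = await client.PostAsync(url, new StringContent(body, Encoding.UTF8, "application/json"));
-        response.EnsureSuccessStatusCode();
-        Console.WriteLine(await response.Content.ReadAsStringAsync());
-    }
-}
-```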
-
-### V2 prediction API to alter timezone
-The timezone is corrected by adding the user's timezone to the endpoint using the `timezoneOffset` parameter based on the API version. The value of the parameter should be the positive or negative number, in minutes, to alter the time.
-
-#### V2 prediction daylight savings example
-If you need the returned prebuilt datetimeV2 to adjust for daylight savings time, you should use the querystring parameter with a +/- value in minutes for the [endpoint](https://go.microsoft.com/fwlink/?linkid=2092356) query.
-
-Add 60 minutes:
-
-`https://{region}.api.cognitive.microsoft.com/luis/v2.0/apps/{appId}?q=Turn the lights on&timezoneOffset=60&verbose={boolean}&spellCheck={boolean}&staging={boolean}&bing-spell-check-subscription-key={string}&log={boolean}`
-
-Remove 60 minutes:
-
-`https://{region}.api.cognitive.microsoft.com/luis/v2.0/apps/{appId}?q=Turn the lights on&timezoneOffset=-60&verbose={boolean}&spellCheck={boolean}&staging={boolean}&bing-spell-check-subscription-key={string}&log={boolean}`
-
-#### V2 prediction C# code determines correct value of parameter
-
-The following C# code uses the [TimeZoneInfo](/dotnet/api/system.timezoneinfo) class's [FindSystemTimeZoneById](/dotnet/api/system.timezoneinfo.findsystemtimezonebyid#examples) method to determine the correct offset value based on system time:
-
-```csharp
-// Get CST zone id
-TimeZoneInfo targetZone = TimeZoneInfo.FindSystemTimeZoneById("Central Standard Time");
-
-// Get local machine's value of Now
-DateTime utcDatetime = DateTime.UtcNow;
-
-// Get Central Standard Time value of Now
-DateTime cstDatetime = TimeZoneInfo.ConvertTimeFromUtc(utcDatetime, targetZone);
-
-// Find timezoneOffset/datetimeReference
-int offset = (int)((cstDatetime - utcDatetime).TotalMinutes);
-```
-
-## Next steps
-
-[Correct spelling mistakes with this tutorial](luis-tutorial-bing-spellcheck.md)
cognitive-services Luis Concept Data Conversion https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/LUIS/luis-concept-data-conversion.md
- Title: Data conversion - LUIS-
-description: Learn how utterances can be changed before predictions in Language Understanding (LUIS)
------- Previously updated : 03/21/2022---
-# Convert data format of utterances
--
-LUIS provides the following conversions of a user utterance before prediction.
-
-* Speech to text using [Cognitive Services Speech](../Speech-Service/overview.md) service.
-
-## Speech to text
-
-Speech to text is provided as an integration with LUIS.
-
-### Intent conversion concepts
-Conversion of speech to text in LUIS allows you to send spoken utterances to an endpoint and receive a LUIS prediction response. The process is an integration of the [Speech](/azure/cognitive-services/Speech) service with LUIS. Learn more about Speech to Intent with a [tutorial](../speech-service/how-to-recognize-intents-from-speech-csharp.md).
-
-### Key requirements
-You do not need to create a **Bing Speech API** key for this integration. A **Language Understanding** key created in the Azure portal works for this integration. Do not use the LUIS starter key.
-
-### Pricing Tier
-This integration uses a different [pricing](luis-limits.md#resource-usage-and-limits) model than the usual Language Understanding pricing tiers.
-
-### Quota usage
-See [Key limits](luis-limits.md#resource-usage-and-limits) for information.
-
-## Next steps
-
-> [!div class="nextstepaction"]
-> [Extracting data](luis-concept-data-extraction.md)
cognitive-services Luis Concept Data Extraction https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/LUIS/luis-concept-data-extraction.md
- Title: Data extraction - LUIS
-description: Extract data from utterance text with intents and entities. Learn what kind of data can be extracted from Language Understanding (LUIS).
---- Previously updated : 03/21/2022--
-# Extract data from utterance text with intents and entities
--
-LUIS gives you the ability to get information from a user's natural language utterances. The information is extracted in a way that it can be used by a program, application, or chat bot to take action. In the following sections, learn what data is returned from intents and entities with examples of JSON.
-
-The hardest data to extract is the machine-learning data because it isn't an exact text match. Data extraction of the machine-learning [entities](concepts/entities.md) needs to be part of the [authoring cycle](luis-concept-app-iteration.md) until you're confident you receive the data you expect.
-
-## Data location and key usage
-LUIS extracts data from the user's utterance at the published [endpoint](luis-glossary.md#endpoint). The **HTTPS request** (POST or GET) contains the utterance as well as some optional configurations such as staging or production environments.
-
-**V2 prediction endpoint request**
-
-`https://westus.api.cognitive.microsoft.com/luis/v2.0/apps/<appID>?subscription-key=<subscription-key>&verbose=true&timezoneOffset=0&q=book 2 tickets to paris`
-
-**V3 prediction endpoint request**
-
-`https://westus.api.cognitive.microsoft.com/luis/v3.0-preview/apps/<appID>/slots/<slot-type>/predict?subscription-key=<subscription-key>&verbose=true&timezoneOffset=0&query=book 2 tickets to paris`
-
-The `appID` is available on the **Settings** page of your LUIS app as well as part of the URL (after `/apps/`) when you're editing that LUIS app. The `subscription-key` is the endpoint key used for querying your app. While you can use your free authoring/starter key while you're learning LUIS, it is important to change the endpoint key to a key that supports your [expected LUIS usage](luis-limits.md#resource-usage-and-limits). The `timezoneOffset` unit is minutes.
-
-The **HTTPS response** contains all the intent and entity information LUIS can determine based on the current published model of either the staging or production endpoint. The endpoint URL is found on the [LUIS](luis-reference-regions.md) website, in the **Manage** section, on the **Keys and endpoints** page.
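-
-As a minimal sketch from a client application (the endpoint key, app ID, and region are placeholders), the V3 request shown above can be issued as a simple GET:
-
-```csharp
-// Sketch: call the V3 prediction endpoint with a GET request and print the JSON response.
-using System;
-using System.Net.Http;
-using System.Threading.Tasks;
-
-class QueryPredictionEndpoint
-{
-    static async Task Main()
-    {
-        var endpointKey = "<ENDPOINT-KEY>";
-        var appId = "<APP-ID>";
-        var utterance = Uri.EscapeDataString("book 2 tickets to paris");
-
-        var url = $"https://westus.api.cognitive.microsoft.com/luis/v3.0-preview/apps/{appId}" +
-                  $"/slots/production/predict?subscription-key={endpointKey}&verbose=true&query={utterance}";
-
-        using var client = new HttpClient();
-
-        // The JSON response contains the top intent, all intents (verbose), and any entities.
-        Console.WriteLine(await client.GetStringAsync(url));
-    }
-}
-```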
-
-## Data from intents
-The primary data is the top scoring **intent name**. The endpoint response is:
-
-#### [V2 prediction endpoint response](#tab/V2)
-
-```JSON
-{
- "query": "when do you open next?",
- "topScoringIntent": {
- "intent": "GetStoreInfo",
- "score": 0.984749258
- },
- "entities": []
-}
-```
-
-#### [V3 prediction endpoint response](#tab/V3)
-
-```JSON
-{
- "query": "when do you open next?",
- "prediction": {
- "normalizedQuery": "when do you open next?",
- "topIntent": "GetStoreInfo",
- "intents": {
- "GetStoreInfo": {
- "score": 0.984749258
- }
- }
- },
- "entities": []
-}
-```
-
-Learn more about the [V3 prediction endpoint](luis-migration-api-v3.md).
-
-* * *
-
-|Data Object|Data Type|Data Location|Value|
-|--|--|--|--|
-|Intent|String|topScoringIntent.intent|"GetStoreInfo"|
-
-If your chatbot or LUIS-calling app makes a decision based on more than one intent score, return all the intents' scores.
--
-#### [V2 prediction endpoint response](#tab/V2)
-
-Set the querystring parameter, `verbose=true`. The endpoint response is:
-
-```JSON
-{
- "query": "when do you open next?",
- "topScoringIntent": {
- "intent": "GetStoreInfo",
- "score": 0.984749258
- },
- "intents": [
- {
- "intent": "GetStoreInfo",
- "score": 0.984749258
- },
- {
- "intent": "None",
- "score": 0.2040639
- }
- ],
- "entities": []
-}
-```
-
-#### [V3 prediction endpoint response](#tab/V3)
-
-Set the querystring parameter, `show-all-intents=true`. The endpoint response is:
-
-```JSON
-{
- "query": "when do you open next?",
- "prediction": {
- "normalizedQuery": "when do you open next?",
- "topIntent": "GetStoreInfo",
- "intents": {
- "GetStoreInfo": {
- "score": 0.984749258
- },
- "None": {
- "score": 0.2040639
- }
- },
- "entities": {
- }
- }
-}
-```
-
-Learn more about the [V3 prediction endpoint](luis-migration-api-v3.md).
-
-* * *
-
-The intents are ordered from highest to lowest score.
-
-|Data Object|Data Type|Data Location|Value|Score|
-|--|--|--|--|:--|
-|Intent|String|intents[0].intent|"GetStoreInfo"|0.984749258|
-|Intent|String|intents[1].intent|"None"|0.2040639|
-
-If you add prebuilt domains, the intent name indicates the domain, such as `Utilities` or `Communication`, as well as the intent:
-
-#### [V2 prediction endpoint response](#tab/V2)
-
-```JSON
-{
- "query": "Turn on the lights next monday at 9am",
- "topScoringIntent": {
- "intent": "Utilities.ShowNext",
- "score": 0.07842206
- },
- "intents": [
- {
- "intent": "Utilities.ShowNext",
- "score": 0.07842206
- },
- {
- "intent": "Communication.StartOver",
- "score": 0.0239675418
- },
- {
- "intent": "None",
- "score": 0.0168218873
- }],
- "entities": []
-}
-```
-
-#### [V3 prediction endpoint response](#tab/V3)
-
-```JSON
-{
- "query": "Turn on the lights next monday at 9am",
- "prediction": {
- "normalizedQuery": "Turn on the lights next monday at 9am",
- "topIntent": "Utilities.ShowNext",
- "intents": {
- "Utilities.ShowNext": {
- "score": 0.07842206
- },
- "Communication.StartOver": {
- "score": 0.0239675418
- },
- "None": {
- "score": 0.00085447653
- }
- },
- "entities": []
- }
-}
-```
-
-Learn more about the [V3 prediction endpoint](luis-migration-api-v3.md).
-
-* * *
-
-|Domain|Data Object|Data Type|Data Location|Value|
-|--|--|--|--|--|
-|Utilities|Intent|String|intents[0].intent|"<b>Utilities</b>.ShowNext"|
-|Communication|Intent|String|intents[1].intent|"<b>Communication</b>.StartOver"|
-||Intent|String|intents[2].intent|"None"|
--
-## Data from entities
-Most chat bots and applications need more than the intent name. This additional, optional data comes from entities discovered in the utterance. Each type of entity returns different information about the match.
-
-A single word or phrase in an utterance can match more than one entity. In that case, each matching entity is returned with its score.
-
-All entities are returned in the **entities** array of the response from the endpoint.
-
-## Tokenized entity returned
-
-Review the [token support](luis-language-support.md#tokenization) in LUIS.
--
-## Prebuilt entity data
-[Prebuilt](concepts/entities.md) entities are discovered based on a regular expression match using the open-source [Recognizers-Text](https://github.com/Microsoft/Recognizers-Text) project. Prebuilt entities are returned in the entities array and use the type name prefixed with `builtin::`.
-
-## List entity data
-
-[List entities](reference-entity-list.md) represent a fixed, closed set of related words along with their synonyms. LUIS does not discover additional values for list entities. Use the **Recommend** feature to see suggestions for new words based on the current list. If there is more than one list entity with the same value, each entity is returned in the endpoint query.
-
-## Regular expression entity data
-
-A [regular expression entity](reference-entity-regular-expression.md) extracts an entity based on a regular expression you provide.
-
-## Extracting names
-Getting names from an utterance is difficult because a name can be almost any combination of letters and words. Depending on what type of name you're extracting, you have several options. The following suggestions are not rules but more guidelines.
-
-### Add prebuilt PersonName and GeographyV2 entities
-
-[PersonName](luis-reference-prebuilt-person.md) and [GeographyV2](luis-reference-prebuilt-geographyV2.md) entities are available in some [language cultures](luis-reference-prebuilt-entities.md).
-
-### Names of people
-
-People's names can vary slightly in format depending on language and culture. Use either a prebuilt **[personName](luis-reference-prebuilt-person.md)** entity or a **[simple entity](concepts/entities.md)** with roles of first and last name.
-
-If you use the simple entity, make sure to give examples that use the first and last name in different parts of the utterance, in utterances of different lengths, and utterances across all intents including the None intent. [Review](./luis-how-to-review-endpoint-utterances.md) endpoint utterances on a regular basis to label any names that were not predicted correctly.
-
-### Names of places
-
-Location names are set and known such as cities, counties, states, provinces, and countries/regions. Use the prebuilt entity **[geographyV2](luis-reference-prebuilt-geographyv2.md)** to extract location information.
-
-### New and emerging names
-
-Some apps need to be able to find new and emerging names such as products or companies. These types of names are the most difficult type of data extraction. Begin with a **[simple entity](concepts/entities.md)** and add a [phrase list](concepts/patterns-features.md). [Review](./luis-how-to-review-endpoint-utterances.md) endpoint utterances on a regular basis to label any names that were not predicted correctly.
-
-## Pattern.any entity data
-
-[Pattern.any](reference-entity-pattern-any.md) is a variable-length placeholder used only in a pattern's template utterance to mark where the entity begins and ends. The entity used in the pattern must be found in order for the pattern to be applied.
-
-## Sentiment analysis
-If Sentiment analysis is configured while [publishing](luis-how-to-publish-app.md#sentiment-analysis), the LUIS json response includes sentiment analysis. Learn more about sentiment analysis in the [Language service](../language-service/sentiment-opinion-mining/overview.md) documentation.
-
-## Key phrase extraction entity data
-The [key phrase extraction entity](luis-reference-prebuilt-keyphrase.md) returns key phrases in the utterance, provided by the [Language service](../language-service/key-phrase-extraction/overview.md).
-
-## Data matching multiple entities
-
-LUIS returns all entities discovered in the utterance. As a result, your chat bot may need to make a decision based on the results.
-
-## Data matching multiple list entities
-
-If a word or phrase matches more than one list entity, the endpoint query returns each List entity.
-
-For the query `when is the best time to go to red rock?`, if the app has the word `red` in more than one list, LUIS recognizes all the entities and returns an array of entities as part of the JSON endpoint response.
-
-## Next steps
-
-See [Add entities](how-to/entities.md) to learn more about how to add entities to your LUIS app.
cognitive-services Luis Concept Data Storage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/LUIS/luis-concept-data-storage.md
- Title: Data storage - LUIS-
-description: LUIS stores data encrypted in an Azure data store corresponding to the region specified by the key.
------- Previously updated : 12/07/2020---
-# Data storage and removal in Language Understanding (LUIS) Cognitive Services
---
-LUIS stores data encrypted in an Azure data store corresponding to [the region](luis-reference-regions.md) specified by the key.
-
-* Data used to train the model such as entities, intents, and utterances will be saved in LUIS for the lifetime of the application. If an owner or contributor deletes the app, this data will be deleted with it. If an application hasn't been used in 90 days, it will be deleted.
-
-* Application authors can choose to [enable logging](luis-how-to-review-endpoint-utterances.md#log-user-queries-to-enable-active-learning) on the utterances that are sent to a published application. If enabled, utterances will be saved for 30 days, and can be viewed by the application author. If logging isn't enabled when the application is published, this data is not stored.
-
-## Export and delete app
-Users have full control over [exporting](luis-how-to-start-new-app.md#export-app) and [deleting](luis-how-to-start-new-app.md#delete-app) the app.
-
-## Utterances
-
-Utterances can be stored in two different places.
-
-* During **the authoring process**, utterances are created and stored in the Intent. Utterances in intents are required for a successful LUIS app. Once the app is published and receives queries at the endpoint, the endpoint request's querystring, `log=false`, determines if the endpoint utterance is stored. If the utterance is stored, it becomes part of the active learning utterances found in the **Build** section of the portal, in the **Review endpoint utterances** section.
-* When you **review endpoint utterances**, and add an utterance to an intent, the utterance is no longer stored as part of the endpoint utterances to be reviewed. It is added to the app's intents.
-
-<a name="utterances-in-an-intent"></a>
-
-### Delete example utterances from an intent
-
-Delete example utterances used for training [LUIS](luis-reference-regions.md). If you delete an example utterance from your LUIS app, it is removed from the LUIS web service and is unavailable for export.
-
-<a name="utterances-in-review"></a>
-
-### Delete utterances in review from active learning
-
-You can delete utterances from the list of user utterances that LUIS suggests in the **[Review endpoint utterances page](luis-how-to-review-endpoint-utterances.md)**. Deleting utterances from this list prevents them from being suggested, but doesn't delete them from logs.
-
-If you don't want active learning utterances, you can [disable active learning](luis-how-to-review-endpoint-utterances.md#disable-active-learning). Disabling active learning also disables logging.
-
-### Disable logging utterances
-[Disabling active learning](luis-how-to-review-endpoint-utterances.md#disable-active-learning) also disables logging.
--
-<a name="accounts"></a>
-
-## Delete an account
-If you have not migrated, you can delete your account, and all your apps will be deleted along with their example utterances and logs. The data is retained for 90 days before the account and data are deleted permanently.
-
-Deleting your account is available from the **Settings** page. Select your account name in the top-right navigation bar to get to the **Settings** page.
-
-## Delete an authoring resource
-If you have [migrated to an authoring resource](./luis-migration-authoring.md), deleting the resource itself from the Azure portal will delete all your applications associated with that resource, along with their example utterances and logs. The data is retained for 90 days before it is deleted permanently.
-
-To delete your resource, go to the [Azure portal](https://portal.azure.com/#home) and select your LUIS authoring resource. Go to the **Overview** tab and click on the **Delete** button on the top of the page. Then confirm your resource was deleted.
-
-## Data inactivity as an expired subscription
-For the purposes of data retention and deletion, an inactive LUIS app may at _Microsoft's discretion_ be treated as an expired subscription. An app is considered inactive if it meets the following criteria for the last 90 days:
-
-* Has had **no** calls made to it.
-* Has not been modified.
-* Does not have a current key assigned to it.
-* Has not had a user sign in to it.
-
-## Next steps
-
-[Learn about exporting and deleting an app](luis-how-to-start-new-app.md)
cognitive-services Luis Concept Devops Automation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/LUIS/luis-concept-devops-automation.md
- Title: Continuous Integration and Continuous Delivery workflows for LUIS apps
-description: How to implement CI/CD workflows for DevOps for Language Understanding (LUIS).
--- Previously updated : 06/01/2021--
-ms.
--
-# Continuous Integration and Continuous Delivery workflows for LUIS DevOps
---
-Software engineers who are developing a Language Understanding (LUIS) app can apply DevOps practices around [source control](luis-concept-devops-sourcecontrol.md), [automated builds](luis-concept-devops-automation.md), [testing](luis-concept-devops-testing.md), and [release management](luis-concept-devops-automation.md#release-management). This article describes concepts for implementing automated builds for LUIS.
-
-## Build automation workflows for LUIS
-
-![CI workflows](./media/luis-concept-devops-automation/luis-automation.png)
-
-In your source code management (SCM) system, configure automated build pipelines to run at the following events:
-
-1. **PR workflow** triggered when a [pull request](https://help.github.com/github/collaborating-with-issues-and-pull-requests/about-pull-requests) (PR) is raised. This workflow validates the contents of the PR *before* the updates get merged into the main branch.
-1. **CI/CD workflow** triggered when updates are pushed to the main branch, for example upon merging the changes from a PR. This workflow ensures the quality of all updates to the main branch.
-
-The **CI/CD workflow** combines two complementary development processes:
-
-* [Continuous Integration](/devops/develop/what-is-continuous-integration) (CI) is the engineering practice of frequently committing code in a shared repository, and performing an automated build on it. Paired with an automated [testing](luis-concept-devops-testing.md) approach, continuous integration allows us to verify not only that, for each update, the LUDown source is still valid and can be imported into a LUIS app, but also that it passes a group of tests that verify the trained app can recognize the intents and entities required for your solution.
-
-* [Continuous Delivery](/devops/deliver/what-is-continuous-delivery) (CD) takes the Continuous Integration concept further to automatically deploy the application to an environment where you can do more in-depth testing. CD enables us to learn as quickly as possible about any unforeseen issues that arise from our changes, and to learn about gaps in our test coverage.
-
-The goal of continuous integration and continuous delivery is to ensure that "main is always shippable." For a LUIS app, this means that we could, if we needed to, take any version of the LUIS app from the main branch and ship it to production.
-
-### Tools for building automation workflows for LUIS
-
-> [!TIP]
-> You can find a complete solution for implementing DevOps in the [LUIS DevOps template repo](#apply-devops-to-luis-app-development-using-github-actions).
-
-There are different build automation technologies available to create build automation workflows. All of them require that you can script steps using a command-line interface (CLI) or REST calls so that they can execute on a build server.
-
-Use the following tools for building automation workflows for LUIS:
-
-* [Bot Framework Tools LUIS CLI](https://github.com/microsoft/botbuilder-tools/tree/master/packages/LUIS) to work with LUIS apps and versions, train, test, and publish them within the LUIS service.
-
-* [Azure CLI](/cli/azure/) to query Azure subscriptions, fetch LUIS authoring and prediction keys, and to create an Azure [service principal](/cli/azure/ad/sp) used for automation authentication.
-
-* [NLU.DevOps](https://github.com/microsoft/NLU.DevOps) tool for [testing a LUIS app](luis-concept-devops-testing.md) and to analyze test results.
-
-### The PR workflow
-
-As mentioned, you configure this workflow to run when a developer raises a PR to propose changes to be merged from a feature branch into the main branch. Its purpose is to verify the quality of the changes in the PR before they're merged to the main branch.
-
-This workflow should:
-
-* Create a temporary LUIS app by importing the `.lu` source in the PR.
-* Train and publish the LUIS app version.
-* Run all the [unit tests](luis-concept-devops-testing.md) against it.
-* Pass the workflow if all the tests pass, otherwise fail it.
-* Clean up and delete the temporary app.
-
-If supported by your SCM, configure branch protection rules so that this workflow must complete successfully before the PR can be completed.
-
-### The main branch CI/CD workflow
-
-Configure this workflow to run after the updates in the PR have been merged into the main branch. Its purpose is to keep the quality bar for your main branch high by testing the updates. If the updates meet the quality bar, this workflow deploys the new LUIS app version to an environment where you can do more in-depth testing.
-
-This workflow should:
-
-* Build a new version in your primary LUIS app (the app you maintain for the main branch) using the updated source code.
-
-* Train and publish the LUIS app version.
-
- > [!NOTE]
 - > As explained in [Running tests in an automated build workflow](luis-concept-devops-testing.md#running-tests-in-an-automated-build-workflow), you must publish the LUIS app version under test so that tools such as NLU.DevOps can access it. LUIS only supports two named publication slots, *staging* and *production*, for a LUIS app, but you can also [publish a version directly](https://github.com/microsoft/botframework-cli/blob/master/packages/luis/README.md#bf-luisapplicationpublish) and [query by version](./luis-migration-api-v3.md#changes-by-slot-name-and-version-name). Use direct version publishing in your automation workflows to avoid being limited to using the named publishing slots.
-
-* Run all the [unit tests](luis-concept-devops-testing.md).
-
-* Optionally run [batch tests](luis-concept-devops-testing.md#how-to-do-unit-testing-and-batch-testing) to measure the quality and accuracy of the LUIS app version and compare it to some baseline.
-
-* If the tests complete successfully:
- * Tag the source in the repo.
- * Run the Continuous Delivery (CD) job to deploy the LUIS app version to environments for further testing.
-
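-For example, the main branch build steps might look something like the following sketch. The version number, paths, and flags are illustrative, and the direct-version publish option should be confirmed against the `bf luis:application:publish` reference linked in the note above:
-
-```console
-# Import the merged .lu as a new version of the main-branch LUIS app, then train it
-bf luis:version:import --appId <main-app-id> --versionId 1.2.0 --in ./luis-app/model.lu --endpoint <authoring-endpoint> --subscriptionKey <authoring-key>
-bf luis:train:run --appId <main-app-id> --versionId 1.2.0 --wait --endpoint <authoring-endpoint> --subscriptionKey <authoring-key>
-
-# Publish the version directly (assumed flag; see the bf CLI reference), then tag the source that produced it
-bf luis:application:publish --appId <main-app-id> --versionId 1.2.0 --direct --endpoint <authoring-endpoint> --subscriptionKey <authoring-key>
-git tag -a v1.2.0 -m "LUIS app version 1.2.0"
-git push origin v1.2.0
-```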
-### Continuous delivery (CD)
-
-The CD job in a CI/CD workflow runs conditionally on success of the build and automated unit tests. Its job is to automatically deploy the LUIS application to an environment where you can do more testing.
-
-There's no single recommended way to deploy your LUIS app; implement the process that's appropriate for your project. The [LUIS DevOps template](https://github.com/Azure-Samples/LUIS-DevOps-Template) repo implements a simple solution: it [publishes the new LUIS app version](./luis-how-to-publish-app.md) to the *production* publishing slot. This is fine for a simple setup. However, if you need to support several production environments at the same time, such as *development*, *staging*, and *UAT*, the limit of two named publishing slots per app will prove insufficient.
-
-Other options for deploying an app version include:
-
-* Leave the app version published to the direct version endpoint and implement a process to configure downstream production environments with the direct version endpoint as required.
-* Maintain a different LUIS app for each production environment and write automation steps to import the `.lu` into a new version in the LUIS app for the target production environment, then train and publish it.
-* Export the tested LUIS app version into a [LUIS docker container](./luis-container-howto.md?tabs=v3) and deploy the LUIS container to Azure [Container instances](../../container-instances/index.yml).
-
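-For instance, with the first option a downstream environment could be configured to call the direct version endpoint. A sketch of such a query against the V3 prediction API follows; the resource name, app ID, version, and key are placeholders:
-
-```console
-curl "https://<your-prediction-resource>.cognitiveservices.azure.com/luis/prediction/v3.0/apps/<APP_ID>/versions/1.2.0/predict?query=turn%20on%20the%20lights&subscription-key=<PREDICTION_KEY>"
-```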
-## Release management
-
-Generally we recommend that you do continuous delivery only to your non-production environments, such as to development and staging. Most teams require a manual review and approval process for deployment to a production environment. For a production deployment, you might want to make sure it happens when key people on the development team are available for support, or during low-traffic periods.
--
-## Apply DevOps to LUIS app development using GitHub Actions
-
-Go to the [LUIS DevOps template repo](https://github.com/Azure-Samples/LUIS-DevOps-Template) for a complete solution that implements DevOps and software engineering best practices for LUIS. You can use this template repo to create your own repository with built-in support for CI/CD workflows and practices that enable [source control](luis-concept-devops-sourcecontrol.md), automated builds, [testing](luis-concept-devops-testing.md), and release management with LUIS for your own project.
-
-The [LUIS DevOps template repo](https://github.com/Azure-Samples/LUIS-DevOps-Template) walks through how to:
-
-* **Clone the template repo** - Copy the template to your own GitHub repository.
-* **Configure LUIS resources** - Create the [LUIS authoring and prediction resources in Azure](./luis-how-to-azure-subscription.md) that will be used by the continuous integration workflows.
-* **Configure the CI/CD workflows** - Configure parameters for the CI/CD workflows and store them in [GitHub Secrets](https://help.github.com/actions/configuring-and-managing-workflows/creating-and-storing-encrypted-secrets).
-* **Walk through the ["dev inner loop"](/dotnet/architecture/containerized-lifecycle/design-develop-containerized-apps/docker-apps-inner-loop-workflow)** - Make updates to a sample LUIS app while working in a development branch, test the updates, and then raise a pull request to propose the changes and to seek review approval.
-* **Execute CI/CD workflows** - Execute [continuous integration workflows to build and test a LUIS app](#build-automation-workflows-for-luis) using GitHub Actions.
-* **Perform automated testing** - Perform [automated batch testing for a LUIS app](luis-concept-devops-testing.md) to evaluate the quality of the app.
-* **Deploy the LUIS app** - Execute a [continuous delivery (CD) job](#continuous-delivery-cd) to publish the LUIS app.
-* **Use the repo with your own project** - Explains how to use the repo with your own LUIS application.
-
-## Next steps
-
-* Learn how to write a [GitHub Actions workflow with NLU.DevOps](https://github.com/Azure-Samples/LUIS-DevOps-Template/blob/master/docs/4-pipeline.md)
-
-* Use the [LUIS DevOps template repo](https://github.com/Azure-Samples/LUIS-DevOps-Template) to apply DevOps with your own project.
-* [Source control and branch strategies for LUIS](luis-concept-devops-sourcecontrol.md)
-* [Testing for LUIS DevOps](luis-concept-devops-testing.md)
cognitive-services Luis Concept Devops Sourcecontrol https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/LUIS/luis-concept-devops-sourcecontrol.md
- Title: Source control and development branches - LUIS
-description: How to maintain your Language Understanding (LUIS) app under source control. How to apply updates to a LUIS app while working in a development branch.
-Previously updated : 06/14/2022
-# DevOps practices for LUIS
---
-Software engineers who are developing a Language Understanding (LUIS) app can apply DevOps practices around [source control](luis-concept-devops-sourcecontrol.md), [automated builds](luis-concept-devops-automation.md), [testing](luis-concept-devops-testing.md), and [release management](luis-concept-devops-automation.md#release-management) by following these guidelines.
-
-## Source control and branch strategies for LUIS
-
-One of the key factors that the success of DevOps depends upon is [source control](/azure/devops/user-guide/source-control). A source control system allows developers to collaborate on code and to track changes. The use of branches allows developers to switch between different versions of the code base, and to work independently from other members of the team. When developers raise a [pull request](https://help.github.com/github/collaborating-with-issues-and-pull-requests/about-pull-requests) (PR) to propose updates from one branch to another, or when changes are merged, these can be the trigger for [automated builds](luis-concept-devops-automation.md) to build and continuously test code.
-
-By using the concepts and guidance that are described in this document, you can develop a LUIS app while tracking changes in a source control system, and follow these software engineering best practices:
--- **Source Control**
- - Source code for your LUIS app is in a human-readable format.
- - The model can be built from source in a repeatable fashion.
- - The source code can be managed by a source code repository.
- - Credentials and secrets such as keys are never stored in source code.
--- **Branching and Merging**
- - Developers can work from independent branches.
- - Developers can work in multiple branches concurrently.
- - It's possible to integrate changes to a LUIS app from one branch into another through rebase or merge.
- - Developers can merge a PR to the parent branch.
--- **Versioning**
- - Each component in a large application should be versioned independently, allowing developers to detect breaking changes or updates just by looking at the version number.
--- **Code Reviews**
- - The changes in the PR are presented as human readable source code that can be reviewed before accepting the PR.
-
-## Source control
-
-To maintain the [App schema definition](./app-schema-definition.md) of a LUIS app in a source code management system, use the [LUDown format (`.lu`)](/azure/bot-service/file-format/bot-builder-lu-file-format) representation of the app. `.lu` format is preferred to `.json` format because it's human readable, which makes it easier to make and review changes in PRs.
-
-### Save a LUIS app using the LUDown format
-
-To save a LUIS app in `.lu` format and place it under source control:
-- EITHER: [Export the app version](./luis-how-to-manage-versions.md#other-actions) as `.lu` from the [LUIS portal](https://www.luis.ai/) and add it to your source control repository
-
-- OR: Use a text editor to create a `.lu` file for a LUIS app and add it to your source control repository
-> [!TIP]
-> If you are working with the JSON export of a LUIS app, you can [convert it to LUDown](https://github.com/microsoft/botframework-cli/tree/master/packages/luis#bf-luisconvert). Use the `--sort` option to ensure that intents and utterances are sorted alphabetically.
-> Note that the **.LU** export capability built into the LUIS portal already sorts the output.
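-For example, a conversion command might look like this sketch (file paths are placeholders):
-
-```console
-bf luis:convert --in ./exported-app.json --out ./luis-app/model.lu --sort
-```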
-
-### Build the LUIS app from source
-
-For a LUIS app, to *build from source* means to [create a new LUIS app version by importing the `.lu` source](./luis-how-to-manage-versions.md#import-version), to [train the version](./luis-how-to-train.md), and to [publish it](./luis-how-to-publish-app.md). You can do this in the LUIS portal, or at the command line:
-- Use the LUIS portal to [import the `.lu` version](./luis-how-to-manage-versions.md#import-version) of the app from source control, and [train](./luis-how-to-train.md) and [publish](./luis-how-to-publish-app.md) the app.
-
-- Use the [Bot Framework Command Line Interface for LUIS](https://github.com/microsoft/botbuilder-tools/tree/master/packages/LUIS) at the command line or in a CI/CD workflow to [import](https://github.com/microsoft/botframework-cli/blob/master/packages/luis/README.md#bf-luisversionimport) the `.lu` version of the app from source control into a LUIS application, and [train](https://github.com/microsoft/botframework-cli/blob/master/packages/luis/README.md#bf-luistrainrun) and [publish](https://github.com/microsoft/botframework-cli/blob/master/packages/luis/README.md#bf-luisapplicationpublish) the app.
-### Files to maintain under source control
-
-The following types of files for your LUIS application should be maintained under source control:
-- `.lu` file for the LUIS application
-
-- [Unit Test definition files](luis-concept-devops-testing.md#writing-tests) (utterances and expected results)
-
-- [Batch test files](./luis-how-to-batch-test.md#batch-test-file) (utterances and expected results) used for performance testing
-### Credentials and keys are not checked in
-
-Do not include keys or similar confidential values in files that you check in to your repo where they might be visible to unauthorized personnel. The keys and other values that you should prevent from check-in include:
-- LUIS Authoring and Prediction keys
-- LUIS Authoring and Prediction endpoints
-- Azure resource keys
-- Access tokens, such as the token for an Azure [service principal](/cli/azure/ad/sp) used for automation authentication
-#### Strategies for securely managing secrets
-
-Strategies for securely managing secrets include:
-- If you're using Git version control, you can store runtime secrets in a local file and prevent check-in of the file by adding a pattern to match the filename to a [.gitignore](https://git-scm.com/docs/gitignore) file
-- In an automation workflow, you can store secrets securely in the parameters configuration offered by that automation technology. For example, if you're using [GitHub Actions](https://github.com/features/actions), you can store secrets securely in [GitHub secrets](https://help.github.com/en/actions/configuring-and-managing-workflows/creating-and-storing-encrypted-secrets).
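-For example, a minimal sketch of keeping a local secrets file out of a Git repo (the filename is a placeholder):
-
-```console
-# Ignore the local secrets file and confirm that Git excludes it
-echo "luis.secrets.env" >> .gitignore
-git check-ignore --verbose luis.secrets.env
-```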
-## Branching and merging
-
-Distributed version control systems like Git give flexibility in how team members publish, share, review, and iterate on code changes through development branches shared with others. Adopt a [Git branching strategy](/azure/devops/repos/git/git-branching-guidance) that is appropriate for your team.
-
-Whichever branching strategy you adopt, a key principle of all of them is that team members can work on the solution within a *feature branch* independently from the work that is going on in other branches.
-
-To support independent working in branches with a LUIS project:
-- **The main branch has its own LUIS app.** This app represents the current state of your solution for your project and its current active version should always map to the `.lu` source that is in the main branch. All updates to the `.lu` source for this app should be reviewed and tested so that this app could be deployed to build environments such as Production at any time. When updates to the `.lu` are merged into main from a feature branch, you should create a new version in the LUIS app and [bump the version number](#versioning).
-
-- **Each feature branch must use its own instance of a LUIS app**. Developers work with this app in a feature branch without risk of affecting developers who are working in other branches. This 'dev branch' app is a working copy that should be deleted when the feature branch is deleted.
-![Git feature branch](./media/luis-concept-devops-sourcecontrol/feature-branch.png)
-
-### Developers can work from independent branches
-
-Developers can work on updates on a LUIS app independently from other branches by:
-
-1. Create a feature branch from the parent branch (depending on your branch strategy, usually main or develop). A command-line sketch of this flow is shown after this list.
-
-1. [Create a new LUIS app in the LUIS portal](./luis-how-to-start-new-app.md) (the "*dev branch app*") solely to support the work in the feature branch.
-
- * If the `.lu` source for your solution already exists in your branch, because it was saved after work done in another branch earlier in the project, create your dev branch LUIS app by importing the `.lu` file.
-
- * If you are starting work on a new project, you will not yet have the `.lu` source for your main LUIS app in the repo. You will create the `.lu` file by exporting your dev branch app from the portal when you have completed your feature branch work, and submit it as a part of your PR.
-
-1. Work on the active version of your dev branch app to implement the required changes. We recommend that you work only in a single version of your dev branch app for all the feature branch work. If you create more than one version in your dev branch app, be careful to track which version contains the changes you want to check in when you raise your PR.
-
-1. Test the updates - see [Testing for LUIS DevOps](luis-concept-devops-testing.md) for details on testing your dev branch app.
-
-1. Export the active version of your dev branch app as `.lu` from the [versions list](./luis-how-to-manage-versions.md).
-
-1. Check in your updates and invite peer review of your updates. If you're using GitHub, you'll raise a [pull request](https://help.github.com/en/github/collaborating-with-issues-and-pull-requests/about-pull-requests).
-
-1. When the changes are approved, merge the updates into the main branch. At this point, you will create a new [version](./luis-how-to-manage-versions.md) of the *main* LUIS app, using the updated `.lu` in main. See [Versioning](#versioning) for considerations on setting the version name.
-
-1. When the feature branch is deleted, it's a good idea to delete the dev branch LUIS app you created for the feature branch work.
-
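-As referenced in step 1, the following is a command-line sketch of the feature-branch flow; the branch name, app name, file paths, and flags are illustrative:
-
-```console
-# Create the feature branch and a matching dev branch LUIS app from the existing .lu source
-git checkout -b feature/handle-cancellations
-bf luis:application:import --name "feature-handle-cancellations" --in ./luis-app/model.lu --endpoint <authoring-endpoint> --subscriptionKey <authoring-key>
-
-# ...work and test in the dev branch app, then export its active version as .lu back into the repo...
-git add ./luis-app/model.lu
-git commit -m "Update LUIS model for cancellation handling"
-git push --set-upstream origin feature/handle-cancellations
-```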
-### Developers can work in multiple branches concurrently
-
-If you follow the pattern described above in [Developers can work from independent branches](#developers-can-work-from-independent-branches), then you will use a unique LUIS application in each feature branch. A single developer can work on multiple branches concurrently, as long as they switch to the correct dev branch LUIS app for the branch they're currently working on.
-
-We recommend that you use the same name for both the feature branch and for the dev branch LUIS app that you create for the feature branch work, to make it less likely that you'll accidentally work on the wrong app.
-
-As noted above, we recommend that for simplicity, you work in a single version in each dev branch app. If you are using multiple versions, take care to activate the correct version as you switch between dev branch apps.
-
-### Multiple developers can work on the same branch concurrently
-
-You can support multiple developers working on the same feature branch at the same time:
-- Developers check out the same feature branch and push and pull changes submitted by themselves and other developers while work proceeds, as normal.
-
-- If you follow the pattern described above in [Developers can work from independent branches](#developers-can-work-from-independent-branches), then this branch will use a unique LUIS application to support development. That 'dev branch' LUIS app will be created by the first member of the development team who begins work in the feature branch.
-
-- [Add team members as contributors](./luis-how-to-collaborate.md) to the dev branch LUIS app.
-
-- When the feature branch work is complete, export the active version of the dev branch LUIS app as `.lu` from the [versions list](./luis-how-to-manage-versions.md), save the updated `.lu` file in the repo, and check in and PR the changes.
-### Incorporating changes from one branch to another with rebase or merge
-
-Other developers on your team working in another branch may have made updates to the `.lu` source and merged them to the main branch after you created your feature branch. You may want to incorporate their changes into your working version before you continue to make your own changes within your feature branch. You can do this by [rebasing onto or merging from main](https://git-scm.com/book/en/v2/Git-Branching-Rebasing) in the same way as with any other code asset. Since the LUIS app in LUDown format is human readable, it supports merging using standard merge tools.
-
-Follow these tips if you're rebasing your LUIS app in a feature branch:
-- Before you rebase or merge, make sure your local copy of the `.lu` source for your app has all your latest changes that you've applied using the LUIS portal, by re-exporting your app from the portal first. That way, you can make sure that any changes you've made in the portal and not yet exported don't get lost.
-
-- During the merge, use standard tools to resolve any merge conflicts.
-
-- Don't forget after rebase or merge is complete to re-import the app back into the portal, so that you're working with the updated app as you continue to apply your own changes.
-### Merge PRs
-
-After your PR is approved, you can merge your changes to your main branch. No special considerations apply to the LUDown source for a LUIS app: it's human readable and so supports merging using standard Merge tools. Any merge conflicts may be resolved in the same way as with other source files.
-
-After your PR has been merged, it's recommended that you clean up:
-- Delete the branch in your repo
-
-- Delete the 'dev branch' LUIS app you created for the feature branch work.
-In the same way as with application code assets, you should write unit tests to accompany LUIS app updates. You should employ continuous integration workflows to test:
-- Updates in a PR before the PR is merged
-- The main branch LUIS app after a PR has been approved and the changes have been merged into main.
-For more information on testing for LUIS DevOps, see [Testing for DevOps for LUIS](luis-concept-devops-testing.md). For more details on implementing workflows, see [Automation workflows for LUIS DevOps](luis-concept-devops-automation.md).
-
-## Code reviews
-
-A LUIS app in LUDown format is human readable, which makes it possible to communicate changes in a PR in a form suitable for review. Unit test files are also written in LUDown format, so they're easily reviewable in a PR as well.
-
-## Versioning
-
-An application consists of multiple components that might include things such as a bot running in [Azure Bot Service](/azure/bot-service/bot-service-overview-introduction), [QnA Maker](https://www.qnamaker.ai/), [Azure Speech service](../speech-service/overview.md), and more. To achieve the goal of loosely coupled applications, use [version control](/devops/develop/git/what-is-version-control) so that each component of an application is versioned independently, allowing developers to detect breaking changes or updates just by looking at the version number. It's easier to version your LUIS app independently from other components if you maintain it in its own repo.
-
-The LUIS app for the main branch should have a versioning scheme applied. When you merge updates to the `.lu` for a LUIS app into main, you'll then import that updated source into a new version in the LUIS app for the main branch.
-
-It is recommended that you use a numeric versioning scheme for the main LUIS app version, for example:
-
-`major.minor[.build[.revision]]`
-
-With each update, increment the version number at the last digit.
-
-The major / minor version can be used to indicate the scope of the changes to the LUIS app functionality:
-
-* Major Version: A significant change, such as support for a new [Intent](./luis-concept-intent.md) or [Entity](concepts/entities.md)
-* Minor Version: A backwards-compatible minor change, such as after significant new training
-* Build: No functionality change, just a different build.
-
-Once you've determined the version number for the latest revision of your main LUIS app, you need to build and test the new app version, and publish it to an endpoint where it can be used in different build environments, such as Quality Assurance or Production. It's highly recommended that you automate all these steps in a continuous integration (CI) workflow.
-
-See:
-- [Automation workflows](luis-concept-devops-automation.md) for details on how to implement a CI workflow to test and release a LUIS app.
-- [Release Management](luis-concept-devops-automation.md#release-management) for information on how to deploy your LUIS app.
-### Versioning the 'feature branch' LUIS app
-
-When you are working with a 'dev branch' LUIS app that you've created to support work in a feature branch, you will be exporting your app when your work is complete and you will include the updated `.lu` in your PR. The branch in your repo and the 'dev branch' LUIS app should be deleted after the PR is merged into main. Since this app exists solely to support the work in the feature branch, there's no particular versioning scheme you need to apply within this app.
-
-When your changes in your PR are merged into main, that is when the versioning should be applied, so that all updates to main are versioned independently.
-
-## Next steps
-
-* Learn about [testing for LUIS DevOps](luis-concept-devops-testing.md)
-* Learn how to [implement DevOps for LUIS with GitHub](./luis-concept-devops-automation.md)
cognitive-services Luis Concept Devops Testing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/LUIS/luis-concept-devops-testing.md
- Title: Testing for DevOps for LUIS apps
-description: How to test your Language Understanding (LUIS) app in a DevOps environment.
-Previously updated : 06/03/2020
-# Testing for LUIS DevOps
---
-Software engineers who are developing a Language Understanding (LUIS) app can apply DevOps practices around [source control](luis-concept-devops-sourcecontrol.md), [automated builds](luis-concept-devops-automation.md), [testing](luis-concept-devops-testing.md), and [release management](luis-concept-devops-automation.md#release-management) by following these guidelines.
-
-In agile software development methodologies, testing plays an integral role in building quality software. Every significant change to a LUIS app should be accompanied by tests designed to test the new functionality the developer is building into the app. These tests are checked into your source code repository along with the `.lu` source of your LUIS app. The implementation of the change is finished when the app satisfies the tests.
-
-Tests are a critical part of [CI/CD workflows](luis-concept-devops-automation.md). When changes to a LUIS app are proposed in a pull request (PR) or after changes are merged into your main branch, then CI workflows should run the tests to verify that the updates haven't caused any regressions.
-
-## How to do Unit testing and Batch testing
-
-There are two different kinds of testing for a LUIS app that you need to perform in continuous integration workflows:
--- **Unit tests** - Relatively simple tests that verify the key functionality of your LUIS app. A unit test passes when the expected intent and the expected entities are returned for a given test utterance. All unit tests must pass for the test run to complete successfully.
-This kind of testing is similar to [Interactive testing](./luis-interactive-test.md) that you can do in the [LUIS portal](https://www.luis.ai/).
--- **Batch tests** - Batch testing is a comprehensive test on your current trained model to measure its performance. Unlike unit tests, batch testing isn't pass|fail testing. The expectation with batch testing is not that every test will return the expected intent and expected entities. Instead, a batch test helps you view the accuracy of each intent and entity in your app and helps you to compare over time as you make improvements.
-This kind of testing is the same as the [Batch testing](./luis-how-to-batch-test.md) that you can perform interactively in the LUIS portal.
-
-You can employ unit testing from the beginning of your project. Batch testing is only really of value once you've developed the schema of your LUIS app and you're working on improving its accuracy.
-
-For both unit tests and batch tests, make sure that your test utterances are kept separate from your training utterances. If you test on the same data you train on, you'll get the false impression your app is performing well when it's just overfitting to the testing data. Tests must be unseen by the model to test how well it is generalizing.
-
-### Writing tests
-
-When you write a set of tests, for each test you need to define:
-
-* Test utterance
-* Expected intent
-* Expected entities.
-
-Use the LUIS [batch file syntax](./luis-how-to-batch-test.md#batch-syntax-template-for-intents-with-entities) to define a group of tests in a JSON-formatted file. For example:
-
-```JSON
-[
- {
- "text": "example utterance goes here",
- "intent": "intent name goes here",
- "entities":
- [
- {
- "entity": "entity name 1 goes here",
- "startPos": 14,
- "endPos": 23
- },
- {
- "entity": "entity name 2 goes here",
- "startPos": 14,
- "endPos": 23
- }
- ]
- }
-]
-```
-
-Some test tools, such as [NLU.DevOps](https://github.com/microsoft/NLU.DevOps) also support LUDown-formatted test files.
-
-#### Designing unit tests
-
-Unit tests should be designed to test the core functionality of your LUIS app. In each iteration, or sprint, of your app development, you should write a sufficient number of tests to verify that the key functionality you are implementing in that iteration is working correctly.
-
-In each unit test, for a given test utterance, you can:
-
-* Test that the correct intent is returned
-* Test that the 'key' entities - those that are critical to your solution - are being returned.
-* Test that the [prediction score](./luis-concept-prediction-score.md) for intent and entities exceeds a threshold that you define. For example, you could decide that you will only consider that a test has passed if the prediction score for the intent and for your key entities exceeds 0.75.
-
-In unit tests, it's a good idea to test that your key entities have been returned in the prediction response, but to ignore any false positives. *False positives* are entities that are found in the prediction response but which are not defined in the expected results for your test. Ignoring false positives makes it less onerous to author unit tests, while still allowing you to focus on testing that the data that is key to your solution is returned in the prediction response.
-
-> [!TIP]
-> The [NLU.DevOps](https://github.com/microsoft/NLU.DevOps) tool supports all your LUIS testing needs. The `compare` command when used in [unit test mode](https://github.com/microsoft/NLU.DevOps/blob/master/docs/Analyze.md#unit-test-mode) will assert that all tests pass, and will ignore false positive results for entities that are not labeled in the expected results.
-
-#### Designing Batch tests
-
-Batch test sets should contain a large number of test cases, designed to test across all intents and all entities in your LUIS app. See [Batch testing in the LUIS portal](./luis-how-to-batch-test.md) for information on defining a batch test set.
-
-### Running tests
-
-The LUIS portal offers features to help with interactive testing:
-
-* [**Interactive testing**](./luis-interactive-test.md) allows you to submit a sample utterance and get a response of LUIS-recognized intents and entities. You verify the success of the test by visual inspection.
-
-* [**Batch testing**](./luis-how-to-batch-test.md) uses a batch test file as input to validate your active trained version to measure its prediction accuracy. A batch test helps you view the accuracy of each intent and entity in your active version, displaying results with a chart.
-
-#### Running tests in an automated build workflow
-
-The interactive testing features in the LUIS portal are useful, but for DevOps, automated testing performed in a CI/CD workflow brings certain requirements:
-
-* Test tools must run in a workflow step on a build server. This means the tools must be able to run on the command line.
-* The test tools must be able to execute a group of tests against an endpoint and automatically verify the expected results against the actual results.
-* If the tests fail, the test tools must return a status code to halt the workflow and "fail the build".
-
-LUIS doesn't provide a command-line tool or a high-level API with these capabilities. We recommend that you use the [NLU.DevOps](https://github.com/microsoft/NLU.DevOps) tool to run tests and verify results, both at the command line and during automated testing within a CI/CD workflow.
-
-The testing capabilities that are available in the LUIS portal don't require a published endpoint and are a part of the LUIS authoring capabilities. When you're implementing testing in an automated build workflow, you must publish the LUIS app version to be tested to an endpoint so that test tools such as NLU.DevOps can send prediction requests as part of testing.
-
-> [!TIP]
-> * If you're implementing your own testing solution and writing code to send test utterances to an endpoint, remember that if you are using the LUIS authoring key, the allowed transaction rate is limited to 5 transactions per second (TPS). Either throttle the sending rate or use a prediction key instead.
-> * When sending test queries to an endpoint, remember to use `log=false` in the query string of your prediction request. This ensures that your test utterances do not get logged by LUIS and end up in the endpoint utterances review list presented by the LUIS [active learning](./luis-concept-review-endpoint-utterances.md) feature and, as a result, accidentally get added to the training utterances of your app.
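-For example, a test query to a published slot with logging disabled might look like this sketch of a V3 prediction request (the resource name, app ID, and key are placeholders):
-
-```console
-curl "https://<your-prediction-resource>.cognitiveservices.azure.com/luis/prediction/v3.0/apps/<APP_ID>/slots/staging/predict?query=book%20a%20flight&log=false&subscription-key=<PREDICTION_KEY>"
-```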
-
-#### Running Unit tests at the command line and in CI/CD workflows
-
-You can use the [NLU.DevOps](https://github.com/microsoft/NLU.DevOps) package to run tests at the command line:
-
-* Use the NLU.DevOps [test command](https://github.com/microsoft/NLU.DevOps/blob/master/docs/Test.md) to submit tests from a test file to an endpoint and to capture the actual prediction results in a file.
-* Use the NLU.DevOps [compare command](https://github.com/microsoft/NLU.DevOps/blob/master/docs/Analyze.md) to compare the actual results with the expected results defined in the input test file. The `compare` command generates NUnit test output, and when used in [unit test mode](https://github.com/microsoft/NLU.DevOps/blob/master/docs/Analyze.md#unit-test-mode) by use of the `--unit-test` flag, will assert that all tests pass.
-
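-A sketch of these two steps at the command line follows; the service name and flag spellings are indicative, so check the NLU.DevOps documentation for the exact options:
-
-```console
-# Install the NLU.DevOps CLI once, then run the tests and assert the results
-dotnet tool install -g dotnet-nlu
-dotnet nlu test --service luisV3 --utterances ./tests/unittests.json --output results.json
-dotnet nlu compare --expected ./tests/unittests.json --actual results.json --unit-test
-```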
-### Running Batch tests at the command line and in CI/CD workflows
-
-You can also use the NLU.DevOps package to run batch tests at the command line.
-
-* Use the NLU.DevOps [test command](https://github.com/microsoft/NLU.DevOps/blob/master/docs/Test.md) to submit tests from a test file to an endpoint and to capture the actual prediction results in a file, same as with unit tests.
-* Use the NLU.DevOps [compare command](https://github.com/microsoft/NLU.DevOps/blob/master/docs/Analyze.md) in [Performance test mode](https://github.com/microsoft/NLU.DevOps/blob/master/docs/Analyze.md#performance-test-mode) to measure the performance of your app. You can also compare the performance of your app against a baseline performance benchmark, for example, the results from the latest commit to main or the current release. In Performance test mode, the `compare` command generates NUnit test output and [batch test results](./luis-glossary.md#batch-test) in JSON format.
-
-## LUIS non-deterministic training and the effect on testing
-
-When LUIS is training a model, such as an intent, it needs both positive data - the labeled training utterances that you've supplied to train the app for the model - and negative data - data that is *not* valid examples of the usage of that model. During training, LUIS builds the negative data of one model from all the positive data you've supplied for the other models, but in some cases that can produce a data imbalance. To avoid this imbalance, LUIS samples a subset of the negative data in a non-deterministic fashion to optimize for a better balanced training set, improved model performance, and faster training time.
-
-The result of this non-deterministic training is that you may get a slightly [different prediction response between different training sessions](./luis-concept-prediction-score.md), usually for intents and/or entities where the [prediction score](./luis-concept-prediction-score.md) is not high.
-
-If you want to disable non-deterministic training for those LUIS app versions that you're building for the purpose of testing, use the [Version settings API](https://westus.dev.cognitive.microsoft.com/docs/services/5890b47c39e2bb17b84a55ff/operations/versions-update-application-version-settings) with the `UseAllTrainingData` setting set to `true`.
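-The route and payload in the following sketch are assumptions for illustration only; confirm them against the Version settings API reference linked above before use:
-
-```console
-# Assumed authoring route and payload for updating version settings (placeholders for resource, app, version, and key)
-curl -X PUT "https://<your-authoring-resource>.cognitiveservices.azure.com/luis/authoring/v3.0-preview/apps/<APP_ID>/versions/<VERSION_ID>/settings" -H "Ocp-Apim-Subscription-Key: <AUTHORING_KEY>" -H "Content-Type: application/json" -d "[{\"name\": \"UseAllTrainingData\", \"value\": \"true\"}]"
-```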
-
-## Next steps
-
-* Learn about [implementing CI/CD workflows](luis-concept-devops-automation.md)
-* Learn how to [implement DevOps for LUIS with GitHub](./luis-concept-devops-automation.md)
cognitive-services Luis Concept Prebuilt Model https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/LUIS/luis-concept-prebuilt-model.md
- Title: Prebuilt models - LUIS-
-description: Prebuilt models provide domains, intents, utterances, and entities. You can start your app with a prebuilt domain or add a relevant domain to your app later.
-Previously updated : 10/10/2019
-# Prebuilt models
---
-Prebuilt models provide domains, intents, utterances, and entities. You can start your app with a prebuilt model or add a relevant model to your app later.
-
-## Types of prebuilt models
-
-LUIS provides three types of prebuilt models. Each model can be added to your app at any time.
-
-|Model type|Includes|
-|--|--|
-|[Domain](luis-reference-prebuilt-domains.md)|Intents, utterances, entities|
-|Intents|Intents, utterances|
-|[Entities](luis-reference-prebuilt-entities.md)|Entities only|
-
-## Prebuilt domains
-
-Language Understanding (LUIS) provides *prebuilt domains*, which are pre-trained models of [intents](luis-how-to-add-intents.md) and [entities](concepts/entities.md) that work together for domains or common categories of client applications.
-
-The prebuilt domains are trained and ready to add to your LUIS app. The intents and entities of a prebuilt domain are fully customizable once you've added them to your app.
-
-> [!TIP]
-> The intents and entities in a prebuilt domain work best together. It's better to combine intents and entities from the same domain when possible.
-> The Utilities prebuilt domain has intents that you can customize for use in any domain. For example, you can add `Utilities.Repeat` to your app and train it to recognize whatever actions the user might want to repeat in your application.
-
-### Changing the behavior of a prebuilt domain intent
-
-You might find that a prebuilt domain contains an intent that is similar to an intent you want to have in your LUIS app but you want it to behave differently. For example, the **Places** prebuilt domain provides a `MakeReservation` intent for making a restaurant reservation, but you want your app to use that intent to make hotel reservations. In that case, you can modify the behavior of that intent by adding example utterances to the intent about making hotel reservations and then retrain the app.
-
-You can find a full listing of the prebuilt domains in the [Prebuilt domains reference](./luis-reference-prebuilt-domains.md).
-
-## Prebuilt intents
-
-LUIS provides prebuilt intents and their utterances for each of its prebuilt domains. Intents can be added without adding the whole domain. Adding an intent is the process of adding an intent and its utterances to your app. Both the intent name and the utterance list can be modified.
-
-## Prebuilt entities
-
-LUIS includes a set of prebuilt entities for recognizing common types of information, like dates, times, numbers, measurements, and currency. Prebuilt entity support varies by the culture of your LUIS app. For a full list of the prebuilt entities that LUIS supports, including support by culture, see the [prebuilt entity reference](./luis-reference-prebuilt-entities.md).
-
-When a prebuilt entity is included in your application, its predictions are included in your published application. The behavior of prebuilt entities is pre-trained and **cannot** be modified.
-
-## Next steps
-
-Learn how to [add prebuilt entities](./howto-add-prebuilt-models.md) to your app.
cognitive-services Luis Concept Prediction Score https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/LUIS/luis-concept-prediction-score.md
- Title: Prediction scores - LUIS
-description: A prediction score indicates the degree of confidence the LUIS API service has for prediction results, based on a user utterance.
-Previously updated : 04/14/2020
-# Prediction scores indicate prediction accuracy for intent and entities
---
-A prediction score indicates the degree of confidence LUIS has for prediction results of a user utterance.
-
-A prediction score is between zero (0) and one (1). An example of a highly confident LUIS score is 0.99. An example of a score of low confidence is 0.01.
-
-|Score value|Confidence|
-|--|--|
-|1|definite match|
-|0.99|high confidence|
-|0.01|low confidence|
-|0|definite failure to match|
-
-## Top-scoring intent
-
-Every utterance prediction returns a top-scoring intent. This prediction is a numerical comparison of prediction scores.
-
-## Proximity of scores to each other
-
-The top 2 scores can have a very small difference between them. LUIS doesn't indicate this proximity other than returning the top score.
-
-## Return prediction score for all intents
-
-A test or endpoint result can include all intents. This configuration is set on the endpoint using the correct querystring name/value pair.
-
-|Prediction API|Querystring name|
-|--|--|
-|V3|`show-all-intents=true`|
-|V2|`verbose=true`|
-
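-For example, a V3 prediction request that returns scores for all intents might look like this sketch (the resource name, app ID, and key are placeholders):
-
-```console
-curl "https://<your-prediction-resource>.cognitiveservices.azure.com/luis/prediction/v3.0/apps/<APP_ID>/slots/production/predict?query=book%20a%20flight&show-all-intents=true&subscription-key=<PREDICTION_KEY>"
-```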
-## Review intents with similar scores
-
-Reviewing the scores for all intents is a good way to verify not only that the correct intent is identified, but also that the next identified intent's score is significantly and consistently lower across utterances.
-
-If multiple intents have close prediction scores, based on the context of an utterance, LUIS may switch between the intents. To fix this situation, continue to add utterances to each intent with a wider variety of contextual differences or you can have the client application, such as a chat bot, make programmatic choices about how to handle the 2 top intents.
-
-The two intents that are too-closely scored may invert due to **non-deterministic training**: the top score could become the second top score, and the second top score could become the top score. To prevent this situation, add example utterances to each of the top two intents for that utterance with word choice and context that differentiate the two intents. The two intents should have about the same number of example utterances. A rule of thumb for separation to prevent inversion due to training is a 15% difference in scores.
-
-You can turn off the **non-deterministic training** by [training with all data](luis-how-to-train.md#train-with-all-data).
-
-## Differences with predictions between different training sessions
-
-When you train the same model in a different app and the scores are not the same, the difference is caused by **non-deterministic training** (an element of randomness). Additionally, if an utterance overlaps more than one intent, the top intent for that utterance can change from one training session to the next.
-
-If your chat bot requires a specific LUIS score to indicate confidence in an intent, you should use the score difference between the top two intents. This situation provides flexibility for variations in training.
-
-You can turn off the **non-deterministic training** by [training with all data](luis-how-to-train.md#train-with-all-data).
-
-## E (exponent) notation
-
-Prediction scores can use exponent notation, _appearing_ above the 0-1 range, such as `9.910309E-07`. Such a score actually indicates a very **small** number.
-
-|E notation score |Actual score|
-|--|--|
-|9.910309E-07|0.0000009910309|
-
-<a name="punctuation"></a>
-
-## Application settings
-
-Use [application settings](luis-reference-application-settings.md) to control how diacritics and punctuation impact prediction scores.
-
-## Next steps
-
-See [Add entities](how-to/entities.md) to learn more about how to add entities to your LUIS app.
cognitive-services Luis Container Configuration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/LUIS/luis-container-configuration.md
- Title: Docker container settings - LUIS-
-description: The LUIS container runtime environment is configured using the `docker run` command arguments. LUIS has several required settings, along with a few optional settings.
-Previously updated : 04/01/2020
-# Configure Language Understanding Docker containers
---
-The **Language Understanding** (LUIS) container runtime environment is configured using the `docker run` command arguments. LUIS has several required settings, along with a few optional settings. Several [examples](#example-docker-run-commands) of the command are available. The container-specific settings are the input [mount settings](#mount-settings) and the billing settings.
-
-## Configuration settings
-
-This container has the following configuration settings:
-
-|Required|Setting|Purpose|
-|--|--|--|
-|Yes|[ApiKey](#apikey-setting)|Used to track billing information.|
-|No|[ApplicationInsights](#applicationinsights-setting)|Allows you to add [Azure Application Insights](/azure/application-insights) telemetry support to your container.|
-|Yes|[Billing](#billing-setting)|Specifies the endpoint URI of the service resource on Azure.|
-|Yes|[Eula](#eula-setting)| Indicates that you've accepted the license for the container.|
-|No|[Fluentd](#fluentd-settings)|Write log and, optionally, metric data to a Fluentd server.|
-|No|[Http Proxy](#http-proxy-credentials-settings)|Configure an HTTP proxy for making outbound requests.|
-|No|[Logging](#logging-settings)|Provides ASP.NET Core logging support for your container. |
-|Yes|[Mounts](#mount-settings)|Read and write data from host computer to container and from container back to host computer.|
-
-> [!IMPORTANT]
-> The [`ApiKey`](#apikey-setting), [`Billing`](#billing-setting), and [`Eula`](#eula-setting) settings are used together, and you must provide valid values for all three of them; otherwise your container won't start. For more information about using these configuration settings to instantiate a container, see [Billing](luis-container-howto.md#billing).
-
-## ApiKey setting
-
-The `ApiKey` setting specifies the Azure resource key used to track billing information for the container. You must specify a value for the ApiKey and the value must be a valid key for the _Cognitive Services_ resource specified for the [`Billing`](#billing-setting) configuration setting.
-
-This setting can be found in the following places:
-
-* Azure portal: **Cognitive Services** Resource Management, under **Keys**
-* LUIS portal: **Keys and Endpoint settings** page.
-
-Do not use the starter key or the authoring key.
-
-## ApplicationInsights setting
--
-## Billing setting
-
-The `Billing` setting specifies the endpoint URI of the _Cognitive Services_ resource on Azure used to meter billing information for the container. You must specify a value for this configuration setting, and the value must be a valid endpoint URI for a _Cognitive Services_ resource on Azure. The container reports usage about every 10 to 15 minutes.
-
-This setting can be found in the following places:
-
-* Azure portal: **Cognitive Services** Overview, labeled `Endpoint`
-* LUIS portal: **Keys and Endpoint settings** page, as part of the endpoint URI.
-
-| Required | Name | Data type | Description |
-|-||--|-|
-| Yes | `Billing` | string | Billing endpoint URI. For more information on obtaining the billing URI, see [gather required parameters](luis-container-howto.md#gather-required-parameters). For more information and a complete list of regional endpoints, see [Custom subdomain names for Cognitive Services](../cognitive-services-custom-subdomains.md). |
-
-## Eula setting
--
-## Fluentd settings
--
-## HTTP proxy credentials settings
--
-## Logging settings
-
-
-## Mount settings
-
-Use bind mounts to read and write data to and from the container. You can specify an input mount or output mount by specifying the `--mount` option in the [docker run](https://docs.docker.com/engine/reference/commandline/run/) command.
-
-The LUIS container doesn't use input or output mounts to store training or service data.
-
-The exact syntax of the host mount location varies depending on the host operating system. Additionally, the [host computer](luis-container-howto.md#the-host-computer)'s mount location may not be accessible due to a conflict between permissions used by the docker service account and the host mount location permissions.
-
-The following table describes the settings supported.
-
-|Required| Name | Data type | Description |
-|-||--|-|
-|Yes| `Input` | String | The target of the input mount. The default value is `/input`. This is the location of the LUIS package files. <br><br>Example:<br>`--mount type=bind,src=c:\input,target=/input`|
-|No| `Output` | String | The target of the output mount. The default value is `/output`. This is the location of the logs. This includes LUIS query logs and container logs. <br><br>Example:<br>`--mount type=bind,src=c:\output,target=/output`|
-
-## Example docker run commands
-
-The following examples use the configuration settings to illustrate how to write and use `docker run` commands. Once running, the container continues to run until you [stop](luis-container-howto.md#stop-the-container) it.
-
-* These examples use the directory off the `C:` drive to avoid any permission conflicts on Windows. If you need to use a specific directory as the input directory, you may need to grant the docker service permission.
-* Do not change the order of the arguments unless you are very familiar with docker containers.
-* If you are using a different operating system, use the correct console/terminal, folder syntax for mounts, and line continuation character for your system. These examples assume a Windows console with a line continuation character `^`. Because the container is a Linux operating system, the target mount uses a Linux-style folder syntax.
-
-Replace {_argument_name_} with your own values:
-
-| Placeholder | Value | Format or example |
-|-|-||
-| **{API_KEY}** | The endpoint key of the `LUIS` resource on the Azure `LUIS` Keys page. | `xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx` |
-| **{ENDPOINT_URI}** | The billing endpoint value is available on the Azure `LUIS` Overview page.| See [gather required parameters](luis-container-howto.md#gather-required-parameters) for explicit examples. |
--
-> [!IMPORTANT]
-> The `Eula`, `Billing`, and `ApiKey` options must be specified to run the container; otherwise, the container won't start. For more information, see [Billing](luis-container-howto.md#billing).
-> The ApiKey value is the **Key** from the Keys and Endpoints page in the LUIS portal and is also available on the Azure `Cognitive Services` resource keys page.
-
-### Basic example
-
-The following example has the fewest arguments possible to run the container:
-
-```console
-docker run --rm -it -p 5000:5000 --memory 4g --cpus 2 ^
---mount type=bind,src=c:\input,target=/input ^
---mount type=bind,src=c:\output,target=/output ^
-mcr.microsoft.com/azure-cognitive-services/luis:latest ^
-Eula=accept ^
-Billing={ENDPOINT_URI} ^
-ApiKey={API_KEY}
-```
-
-### ApplicationInsights example
-
-The following example sets the ApplicationInsights argument to send telemetry to Application Insights while the container is running:
-
-```console
-docker run --rm -it -p 5000:5000 --memory 6g --cpus 2 ^
---mount type=bind,src=c:\input,target=/input ^
---mount type=bind,src=c:\output,target=/output ^
-mcr.microsoft.com/azure-cognitive-services/luis:latest ^
-Eula=accept ^
-Billing={ENDPOINT_URI} ^
-ApiKey={API_KEY} ^
-InstrumentationKey={INSTRUMENTATION_KEY}
-```
-
-### Logging example
-
-The following command sets the logging level, `Logging:Console:LogLevel`, to [`Information`](https://msdn.microsoft.com).
-
-```console
-docker run --rm -it -p 5000:5000 --memory 6g --cpus 2 ^
---mount type=bind,src=c:\input,target=/input ^
---mount type=bind,src=c:\output,target=/output ^
-mcr.microsoft.com/azure-cognitive-services/luis:latest ^
-Eula=accept ^
-Billing={ENDPOINT_URI} ^
-ApiKey={API_KEY} ^
-Logging:Console:LogLevel:Default=Information
-```
-
-## Next steps
-
-* Review [How to install and run containers](luis-container-howto.md)
-* Refer to [Troubleshooting](troubleshooting.yml) to resolve issues related to LUIS functionality.
-* Use more [Cognitive Services Containers](../cognitive-services-container-support.md)
cognitive-services Luis Container Howto https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/LUIS/luis-container-howto.md
- Title: Install and run Docker containers for LUIS-
-description: Use the LUIS container to load your trained or published app, and gain access to its predictions on-premises.
-Previously updated : 03/02/2023
-keywords: on-premises, Docker, container
--
-# Install and run Docker containers for LUIS
----
-Containers enable you to use LUIS in your own environment. Containers are great for specific security and data governance requirements. In this article you'll learn how to download, install, and run a LUIS container.
-
-The Language Understanding (LUIS) container loads your trained or published Language Understanding model. As a [LUIS app](https://www.luis.ai), the docker container provides access to the query predictions from the container's API endpoints. You can collect query logs from the container and upload them back to the Language Understanding app to improve the app's prediction accuracy.
-
-The following video demonstrates using this container.
-
-[![Container demonstration for Cognitive Services](./media/luis-container-how-to/luis-containers-demo-video-still.png)](https://aka.ms/luis-container-demo)
-
-If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/cognitive-services/) before you begin.
-
-## Prerequisites
-
-To run the LUIS container, note the following prerequisites:
-
-* [Docker](https://docs.docker.com/) installed on a host computer. Docker must be configured to allow the containers to connect with and send billing data to Azure.
- * On Windows, Docker must also be configured to support Linux containers.
- * You should have a basic understanding of [Docker concepts](https://docs.docker.com/get-started/overview/).
-* A <a href="https://portal.azure.com/#create/Microsoft.CognitiveServicesLUISAllInOne" title="Create a LUIS resource" target="_blank">LUIS resource </a> with the free (F0) or standard (S) [pricing tier](https://azure.microsoft.com/pricing/details/cognitive-services/language-understanding-intelligent-services/).
-* A trained or published app packaged as a mounted input to the container with its associated App ID. You can get the packaged file from the LUIS portal or the Authoring APIs. If you are getting the packaged LUIS app from the [authoring APIs](#authoring-apis-for-package-file), you will also need your _Authoring Key_.
--
-### App ID `{APP_ID}`
-
-This ID is used to select the app. You can find the app ID in the [LUIS portal](https://www.luis.ai/) by clicking **Manage** at the top of the screen for your app, and then **Settings**.
--
-### Authoring key `{AUTHORING_KEY}`
-
-This key is used to get the packaged app from the LUIS service in the cloud and upload the query logs back to the cloud. You will need your authoring key if you [export your app using the REST API](#export-published-apps-package-from-api), described later in the article.
-
-You can get your authoring key from the [LUIS portal](https://www.luis.ai/) by clicking **Manage** at the top of the screen for your app, and then **Azure Resources**.
---
-### Authoring APIs for package file
-
-Authoring APIs for packaged apps:
-
-* [Published package API](https://westus.dev.cognitive.microsoft.com/docs/services/5890b47c39e2bb17b84a55ff/operations/apps-packagepublishedapplicationasgzip)
-* [Not-published, trained-only package API](https://westus.dev.cognitive.microsoft.com/docs/services/5890b47c39e2bb17b84a55ff/operations/apps-packagetrainedapplicationasgzip)
-
-### The host computer
--
-### Container requirements and recommendations
-
-The following table lists the minimum and recommended values for the container host. Your requirements may change depending on traffic volume.
-
-|Container| Minimum | Recommended | TPS<br>(Minimum, Maximum)|
-|--||-|--|
-|LUIS|1 core, 2-GB memory|1 core, 4-GB memory|20, 40|
-
-* Each core must be at least 2.6 gigahertz (GHz) or faster.
-* TPS - transactions per second
-
-Core and memory correspond to the `--cpus` and `--memory` settings, which are used as part of the `docker run` command.
-
-## Get the container image with `docker pull`
-
-The LUIS container image can be found on the `mcr.microsoft.com` container registry syndicate. It resides within the `azure-cognitive-services/language` repository and is named `luis`. The fully qualified container image name is `mcr.microsoft.com/azure-cognitive-services/language/luis`.
-
-To use the latest version of the container, you can use the `latest` tag. You can also find a full list of [tags on the MCR](https://mcr.microsoft.com/product/azure-cognitive-services/language/luis/tags).
-
-Use the [`docker pull`](https://docs.docker.com/engine/reference/commandline/pull/) command to download a container image from the `mcr.microsoft.com/azure-cognitive-services/language/luis` repository:
-
-```
-docker pull mcr.microsoft.com/azure-cognitive-services/language/luis:latest
-```
-
-For a full description of available tags, such as `latest` used in the preceding command, see [LUIS](https://go.microsoft.com/fwlink/?linkid=2043204) on Docker Hub.
--
-## How to use the container
-
-Once the container is on the [host computer](#the-host-computer), use the following process to work with the container.
-
-![Process for using Language Understanding (LUIS) container](./media/luis-container-how-to/luis-flow-with-containers-diagram.jpg)
-
-1. [Export the package](#export-packaged-app-from-luis) for the container from the LUIS portal or the LUIS APIs.
-1. Move the package file into the required **input** directory on the [host computer](#the-host-computer). Do not rename, alter, overwrite, or decompress the LUIS package file.
-1. [Run the container](#run-the-container-with-docker-run), with the required _input mount_ and billing settings. More [examples](luis-container-configuration.md#example-docker-run-commands) of the `docker run` command are available.
-1. [Query the container's prediction endpoint](#query-the-containers-prediction-endpoint).
-1. When you are done with the container, [import the endpoint logs](#import-the-endpoint-logs-for-active-learning) from the output mount in the LUIS portal and [stop](#stop-the-container) the container.
-1. Use LUIS portal's [active learning](luis-how-to-review-endpoint-utterances.md) on the **Review endpoint utterances** page to improve the app.
-
-The app running in the container can't be altered. To change the app in the container, change the app in the LUIS service using the [LUIS](https://www.luis.ai) portal or the LUIS [authoring APIs](https://westus.dev.cognitive.microsoft.com/docs/services/5890b47c39e2bb17b84a55ff/operations/5890b47c39e2bb052c5b9c2f). Then train and/or publish the app, download a new package, and run the container again.
-
-The LUIS app inside the container can't be exported back to the LUIS service. Only the query logs can be uploaded.
-
-## Export packaged app from LUIS
-
-The LUIS container requires a trained or published LUIS app to answer prediction queries of user utterances. To get the LUIS app, use either the trained or the published package API.
-
-The default location is the `input` subdirectory, relative to where you run the `docker run` command.
-
-Place the package file in a directory and reference this directory as the input mount when you run the docker container.
-
-### Package types
-
-The input mount directory can contain the **Production**, **Staging**, and **Versioned** models of the app simultaneously. All the packages are mounted.
-
-|Package Type|Query Endpoint API|Query availability|Package filename format|
-|--|--|--|--|
-|Versioned|GET, POST|Container only|`{APP_ID}_v{APP_VERSION}.gz`|
-|Staging|GET, POST|Azure and container|`{APP_ID}_STAGING.gz`|
-|Production|GET, POST|Azure and container|`{APP_ID}_PRODUCTION.gz`|
-
-> [!IMPORTANT]
-> Do not rename, alter, overwrite, or decompress the LUIS package files.
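-
-For example (a hypothetical listing that follows the filename formats above), an input mount containing all three package types for a single app might look like this:
-
-```
-/input/
-    {APP_ID}_PRODUCTION.gz
-    {APP_ID}_STAGING.gz
-    {APP_ID}_v{APP_VERSION}.gz
-```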
-
-### Packaging prerequisites
-
-Before packaging a LUIS application, you must have the following:
-
-|Packaging Requirements|Details|
-|--|--|
-|Azure _Cognitive Services_ resource instance|Supported regions include<br><br>West US (`westus`)<br>West Europe (`westeurope`)<br>Australia East (`australiaeast`)|
-|Trained or published LUIS app|With no [unsupported dependencies][unsupported-dependencies]. |
-|Access to the [host computer](#the-host-computer)'s file system |The host computer must allow an [input mount](luis-container-configuration.md#mount-settings).|
-
-### Export app package from LUIS portal
-
-The LUIS [portal](https://www.luis.ai) provides the ability to export the trained or published app's package.
-
-### Export published app's package from LUIS portal
-
-The published app's package is available from the **My Apps** list page.
-
-1. Sign on to the LUIS [portal](https://www.luis.ai).
-1. Select the checkbox to the left of the app name in the list.
-1. Select the **Export** item from the contextual toolbar above the list.
-1. Select **Export for container (GZIP)**.
-1. Select the environment of **Production slot** or **Staging slot**.
-1. The package is downloaded from the browser.
-
-![Export the published package for the container from the App page's Export menu](./media/luis-container-how-to/export-published-package-for-container.png)
-
-### Export versioned app's package from LUIS portal
-
-The versioned app's package is available from the **Versions** list page.
-
-1. Sign on to the LUIS [portal](https://www.luis.ai).
-1. Select the app in the list.
-1. Select **Manage** in the app's navigation bar.
-1. Select **Versions** in the left navigation bar.
-1. Select the checkbox to the left of the version name in the list.
-1. Select the **Export** item from the contextual toolbar above the list.
-1. Select **Export for container (GZIP)**.
-1. The package is downloaded from the browser.
-
-![Export the trained package for the container from the Versions page's Export menu](./media/luis-container-how-to/export-trained-package-for-container.png)
-
-### Export published app's package from API
-
-Use the following REST API method to package a LUIS app that you've already [published](luis-how-to-publish-app.md). Substitute your own values for the placeholders in the API call, using the table below the HTTP specification.
-
-```http
-GET /luis/api/v2.0/package/{APP_ID}/slot/{SLOT_NAME}/gzip HTTP/1.1
-Host: {AZURE_REGION}.api.cognitive.microsoft.com
-Ocp-Apim-Subscription-Key: {AUTHORING_KEY}
-```
-
-| Placeholder | Value |
-|-|-|
-| **{APP_ID}** | The application ID of the published LUIS app. |
-| **{SLOT_NAME}** | The environment of the published LUIS app. Use one of the following values:<br/>`PRODUCTION`<br/>`STAGING` |
-| **{AUTHORING_KEY}** | The authoring key of the LUIS account for the published LUIS app.<br/>You can get your authoring key from the **User Settings** page on the LUIS portal. |
-| **{AZURE_REGION}** | The appropriate Azure region:<br/><br/>`westus` - West US<br/>`westeurope` - West Europe<br/>`australiaeast` - Australia East |
-
-To download the published package, refer to the [API documentation here][download-published-package]. If successfully downloaded, the response is a LUIS package file. Save the file in the storage location specified for the input mount of the container.
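-
-For example, a cURL sketch of the same request (the `--output` filename is arbitrary, assumes the `PRODUCTION` slot, and should match the filename formats in the package types table; substitute the placeholder values from the table above):
-
-```bash
-curl -X GET \
-"https://{AZURE_REGION}.api.cognitive.microsoft.com/luis/api/v2.0/package/{APP_ID}/slot/{SLOT_NAME}/gzip" \
--H "Ocp-Apim-Subscription-Key: {AUTHORING_KEY}" \
---output {APP_ID}_PRODUCTION.gz
-```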
-
-### Export versioned app's package from API
-
-Use the following REST API method to package a LUIS application that you've already [trained](luis-how-to-train.md). Substitute your own values for the placeholders in the API call, using the table below the HTTP specification.
-
-```http
-GET /luis/api/v2.0/package/{APP_ID}/versions/{APP_VERSION}/gzip HTTP/1.1
-Host: {AZURE_REGION}.api.cognitive.microsoft.com
-Ocp-Apim-Subscription-Key: {AUTHORING_KEY}
-```
-
-| Placeholder | Value |
-|-|-|
-| **{APP_ID}** | The application ID of the trained LUIS app. |
-| **{APP_VERSION}** | The application version of the trained LUIS app. |
-| **{AUTHORING_KEY}** | The authoring key of the LUIS account for the trained LUIS app.<br/>You can get your authoring key from the **User Settings** page on the LUIS portal. |
-| **{AZURE_REGION}** | The appropriate Azure region:<br/><br/>`westus` - West US<br/>`westeurope` - West Europe<br/>`australiaeast` - Australia East |
-
-To download the versioned package, refer to the [API documentation here][download-versioned-package]. If successfully downloaded, the response is a LUIS package file. Save the file in the storage location specified for the input mount of the container.
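-
-Similarly, a cURL sketch for the versioned package (the `--output` filename follows the versioned package filename format and is an assumption):
-
-```bash
-curl -X GET \
-"https://{AZURE_REGION}.api.cognitive.microsoft.com/luis/api/v2.0/package/{APP_ID}/versions/{APP_VERSION}/gzip" \
--H "Ocp-Apim-Subscription-Key: {AUTHORING_KEY}" \
---output {APP_ID}_v{APP_VERSION}.gz
-```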
-
-## Run the container with `docker run`
-
-Use the [docker run](https://docs.docker.com/engine/reference/commandline/run/) command to run the container. Refer to [gather required parameters](#gather-required-parameters) for details on how to get the `{ENDPOINT_URI}` and `{API_KEY}` values.
-
-[Examples](luis-container-configuration.md#example-docker-run-commands) of the `docker run` command are available.
-
-```console
-docker run --rm -it -p 5000:5000 ^
---memory 4g ^
---cpus 2 ^
---mount type=bind,src=c:\input,target=/input ^
---mount type=bind,src=c:\output\,target=/output ^
-mcr.microsoft.com/azure-cognitive-services/language/luis ^
-Eula=accept ^
-Billing={ENDPOINT_URI} ^
-ApiKey={API_KEY}
-```
-
-* This example uses the directory off the `C:` drive to avoid any permission conflicts on Windows. If you need to use a specific directory as the input directory, you may need to grant the docker service permission.
-* Do not change the order of the arguments unless you are familiar with docker containers.
-* If you are using a different operating system, use the correct console/terminal, folder syntax for mounts, and line continuation character for your system. These examples assume a Windows console with a line continuation character `^`. Because the container is a Linux operating system, the target mount uses a Linux-style folder syntax.
-
-This command:
-
-* Runs a container from the LUIS container image
-* Loads LUIS app from input mount at *C:\input*, located on container host
-* Allocates two CPU cores and 4 gigabytes (GB) of memory
-* Exposes TCP port 5000 and allocates a pseudo-TTY for the container
-* Saves container and LUIS logs to output mount at *C:\output*, located on container host
-* Automatically removes the container after it exits. The container image is still available on the host computer.
-
-More [examples](luis-container-configuration.md#example-docker-run-commands) of the `docker run` command are available.
-
-> [!IMPORTANT]
-> The `Eula`, `Billing`, and `ApiKey` options must be specified to run the container; otherwise, the container won't start. For more information, see [Billing](#billing).
-> The ApiKey value is the **Key** from the **Azure Resources** page in the LUIS portal and is also available on the Azure `Cognitive Services` resource keys page.
--
-## Endpoint APIs supported by the container
-
-Both V2 and [V3](luis-migration-api-v3.md) versions of the API are available with the container.
-
-## Query the container's prediction endpoint
-
-The container provides REST-based query prediction endpoint APIs. Endpoints for published (staging or production) apps have a _different_ route than endpoints for versioned apps.
-
-Use the host, `http://localhost:5000`, for container APIs.
-
-# [V3 prediction endpoint](#tab/v3)
-
-|Package type|HTTP verb|Route|Query parameters|
-|--|--|--|--|
-|Published|GET, POST|`/luis/v3.0/apps/{appId}/slots/{slotName}/predict?`<br>`/luis/prediction/v3.0/apps/{appId}/slots/{slotName}/predict?`|`query={query}`<br>[`&verbose`]<br>[`&log`]<br>[`&show-all-intents`]|
-|Versioned|GET, POST|`/luis/v3.0/apps/{appId}/versions/{versionId}/predict?`<br>`/luis/prediction/v3.0/apps/{appId}/versions/{versionId}/predict?`|`query={query}`<br>[`&verbose`]<br>[`&log`]<br>[`&show-all-intents`]|
-
-The query parameters configure how and what is returned in the query response:
-
-|Query parameter|Type|Purpose|
-|--|--|--|
-|`query`|string|The user's utterance.|
-|`verbose`|boolean|A boolean value indicating whether to return all the metadata for the predicted models. Default is false.|
-|`log`|boolean|Logs queries, which can be used later for [active learning](luis-how-to-review-endpoint-utterances.md). Default is false.|
-|`show-all-intents`|boolean|A boolean value indicating whether to return all the intents or the top scoring intent only. Default is false.|
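-
-Putting the route and query parameters together, a GET request against the production slot might look like the following sketch (the app ID is a placeholder and the utterance is URL-encoded):
-
-```
-http://localhost:5000/luis/v3.0/apps/{APP_ID}/slots/production/predict?query=turn%20on%20the%20lights&verbose=true&show-all-intents=true&log=true
-```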
-
-# [V2 prediction endpoint](#tab/v2)
-
-|Package type|HTTP verb|Route|Query parameters|
-|--|--|--|--|
-|Published|[GET](https://westus.dev.cognitive.microsoft.com/docs/services/5819c76f40a6350ce09de1ac/operations/5819c77140a63516d81aee78), [POST](https://westus.dev.cognitive.microsoft.com/docs/services/5819c76f40a6350ce09de1ac/operations/5819c77140a63516d81aee79)|`/luis/v2.0/apps/{appId}?`|`q={q}`<br>`&staging`<br>[`&timezoneOffset`]<br>[`&verbose`]<br>[`&log`]<br>|
-|Versioned|GET, POST|`/luis/v2.0/apps/{appId}/versions/{versionId}?`|`q={q}`<br>[`&timezoneOffset`]<br>[`&verbose`]<br>[`&log`]|
-
-The query parameters configure how and what is returned in the query response:
-
-|Query parameter|Type|Purpose|
-|--|--|--|
-|`q`|string|The user's utterance.|
-|`timezoneOffset`|number|The timezoneOffset allows you to [change the timezone](luis-concept-data-alteration.md#change-time-zone-of-prebuilt-datetimev2-entity) used by the prebuilt entity datetimeV2.|
-|`verbose`|boolean|Returns all intents and their scores when set to true. Default is false, which returns only the top intent.|
-|`staging`|boolean|Returns results from the staging environment if set to true. |
-|`log`|boolean|Logs queries, which can be used later for [active learning](luis-how-to-review-endpoint-utterances.md). Default is true.|
-
-***
-
-### Query the LUIS app
-
-An example curl command for querying the container for a published app is:
-
-# [V3 prediction endpoint](#tab/v3)
-
-To query a model in a slot, use the following API:
-
-```bash
-curl -G \
--d verbose=false \
--d log=true \
---data-urlencode "query=turn the lights on" \
-"http://localhost:5000/luis/v3.0/apps/{APP_ID}/slots/production/predict"
-```
-
-To make queries to the **Staging** environment, replace `production` in the route with `staging`:
-
-`http://localhost:5000/luis/v3.0/apps/{APP_ID}/slots/staging/predict`
-
-To query a versioned model, use the following API:
-
-```bash
-curl -G \
--d verbose=false \
--d log=false \
---data-urlencode "query=turn the lights on" \
-"http://localhost:5000/luis/v3.0/apps/{APP_ID}/versions/{APP_VERSION}/predict"
-```
-
-# [V2 prediction endpoint](#tab/v2)
-
-To query a model in a slot, use the following API:
-
-```bash
-curl -X GET \
-"http://localhost:5000/luis/v2.0/apps/{APP_ID}?q=turn%20on%20the%20lights&staging=false&timezoneOffset=0&verbose=false&log=true" \
--H "accept: application/json"
-```
-To make queries to the **Staging** environment, change the **staging** query string parameter value to true:
-
-`staging=true`
-
-To query a versioned model, use the following API:
-
-```bash
-curl -X GET \
-"http://localhost:5000/luis/v2.0/apps/{APP_ID}/versions/{APP_VERSION}?q=turn%20on%20the%20lights&timezoneOffset=0&verbose=false&log=true" \
--H "accept: application/json"
-```
-The version name has a maximum of 10 characters and contains only characters allowed in a URL.
-
-***
-
-## Import the endpoint logs for active learning
-
-If an output mount is specified for the LUIS container, app query log files are saved in the output directory, where `{INSTANCE_ID}` is the container ID. The app query log contains the query, response, and timestamps for each prediction query submitted to the LUIS container.
-
-The following location shows the nested directory structure for the container's log files.
-```
-/output/luis/{INSTANCE_ID}/
-```
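-
-For example, if you started the container with the output mount at *C:\output* as shown earlier, a quick way to locate the log files from a Windows console (a sketch, not required by the import step) is:
-
-```console
-dir /s C:\output\luis
-```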
-
-From the LUIS portal, select your app, then select **Import endpoint logs** to upload these logs.
-
-![Import container's log files for active learning](./media/luis-container-how-to/upload-endpoint-log-files.png)
-
-After the log is uploaded, [review the endpoint](./luis-concept-review-endpoint-utterances.md) utterances in the LUIS portal.
-
-<!-- ## Validate container is running -->
--
-## Run the container disconnected from the internet
--
-## Stop the container
-
-To shut down the container, in the command-line environment where the container is running, press **Ctrl+C**.
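-
-If the container was started without an interactive terminal, a generic Docker alternative (not specific to LUIS) is to look up the container ID and stop it:
-
-```console
-docker ps
-docker stop <container-id>
-```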
-
-## Troubleshooting
-
-If you run the container with an output [mount](luis-container-configuration.md#mount-settings) and logging enabled, the container generates log files that are helpful to troubleshoot issues that happen while starting or running the container.
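-
-In addition to the mounted log files, a generic Docker sketch (not specific to this article) for viewing the container's console output is:
-
-```console
-docker logs <container-id>
-```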
---
-## Billing
-
-The LUIS container sends billing information to Azure, using a _Cognitive Services_ resource on your Azure account.
--
-For more information about these options, see [Configure containers](luis-container-configuration.md).
-
-## Summary
-
-In this article, you learned concepts and workflow for downloading, installing, and running Language Understanding (LUIS) containers. In summary:
-
-* Language Understanding (LUIS) provides one Linux container for Docker that serves endpoint query predictions of utterances.
-* Container images are downloaded from the Microsoft Container Registry (MCR).
-* Container images run in Docker.
-* You can use the REST API to query the container endpoints by specifying the host URI of the container.
-* You must specify billing information when instantiating a container.
-
-> [!IMPORTANT]
-> Cognitive Services containers are not licensed to run without being connected to Azure for metering. Customers need to enable the containers to communicate billing information with the metering service at all times. Cognitive Services containers do not send customer data (for example, the image or text that is being analyzed) to Microsoft.
-
-## Next steps
-
-* Review [Configure containers](luis-container-configuration.md) for configuration settings.
-* See [LUIS container limitations](luis-container-limitations.md) for known capability restrictions.
-* Refer to [Troubleshooting](troubleshooting.yml) to resolve issues related to LUIS functionality.
-* Use more [Cognitive Services Containers](../cognitive-services-container-support.md)
-
-<!-- Links - external -->
-[download-published-package]: https://westus.dev.cognitive.microsoft.com/docs/services/5890b47c39e2bb17b84a55ff/operations/apps-packagepublishedapplicationasgzip
-[download-versioned-package]: https://westus.dev.cognitive.microsoft.com/docs/services/5890b47c39e2bb17b84a55ff/operations/apps-packagetrainedapplicationasgzip
-
-[unsupported-dependencies]: luis-container-limitations.md#unsupported-dependencies-for-latest-container
cognitive-services Luis Container Limitations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/LUIS/luis-container-limitations.md
- Title: Container limitations - LUIS-
-description: The LUIS container languages that are supported.
------ Previously updated : 10/28/2021---
-# Language Understanding (LUIS) container limitations
---
-The LUIS containers have a few notable limitations, from unsupported dependencies to a subset of supported languages. This article details these restrictions.
-
-## Supported dependencies for `latest` container
-
-The latest LUIS container supports:
-
-* [New prebuilt domains](luis-reference-prebuilt-domains.md): these enterprise-focused domains include entities, example utterances, and patterns. Extend these domains for your own use.
-
-## Unsupported dependencies for `latest` container
-
-To [export for container](luis-container-howto.md#export-packaged-app-from-luis), you must remove unsupported dependencies from your LUIS app. When you attempt to export for container, the LUIS portal reports these unsupported features that you need to remove.
-
-You can use a LUIS application if it **doesn't include** any of the following dependencies:
-
-|Unsupported app configurations|Details|
-|--|--|
-|Unsupported container cultures| The Dutch (`nl-NL`), Japanese (`ja-JP`) and German (`de-DE`) languages are only supported with the [1.0.2 tokenizer](luis-language-support.md#custom-tokenizer-versions).|
-|Unsupported entities for all cultures|[KeyPhrase](luis-reference-prebuilt-keyphrase.md) prebuilt entity for all cultures|
-|Unsupported entities for English (`en-US`) culture|[GeographyV2](luis-reference-prebuilt-geographyV2.md) prebuilt entities|
-|Speech priming|External dependencies are not supported in the container.|
-|Sentiment analysis|External dependencies are not supported in the container.|
-|Bing spell check|External dependencies are not supported in the container.|
-
-## Languages supported
-
-LUIS containers support a subset of the [languages supported](luis-language-support.md#languages-supported) by LUIS proper. The LUIS containers are capable of understanding utterances in the following languages:
-
-| Language | Locale | Prebuilt domain | Prebuilt entity | Phrase list recommendations | [Sentiment analysis](../language-service/sentiment-opinion-mining/language-support.md) and [key phrase extraction](../language-service/key-phrase-extraction/language-support.md) |
-|--|--|:--:|:--:|:--:|:--:|
-| English (United States) | `en-US` | ✔️ | ✔️ | ✔️ | ✔️ |
-| Arabic (preview - modern standard Arabic) |`ar-AR`|❌|❌|❌|❌|
-| *[Chinese](#chinese-support-notes) |`zh-CN` | ✔️ | ✔️ | ✔️ | ❌ |
-| French (France) |`fr-FR` | ✔️ | ✔️ | ✔️ | ✔️ |
-| French (Canada) |`fr-CA` | ❌ | ❌ | ❌ | ✔️ |
-| German |`de-DE` | ✔️ | ✔️ | ✔️ | ✔️ |
-| Hindi | `hi-IN`| ❌ | ❌ | ❌ | ❌ |
-| Italian |`it-IT` | ✔️ | ✔️ | ✔️ | ✔️ |
-| Korean |`ko-KR` | ✔️ | ❌ | ❌ | *Key phrase* only |
-| Marathi | `mr-IN`|❌|❌|❌|❌|
-| Portuguese (Brazil) |`pt-BR` | ✔️ | ✔️ | ✔️ | not all sub-cultures |
-| Spanish (Spain) |`es-ES` | ✔️ | ✔️ |✔️|✔️|
-| Spanish (Mexico)|`es-MX` | ❌ | ❌ |✔️|✔️|
-| Tamil | `ta-IN`|❌|❌|❌|❌|
-| Telugu | `te-IN`|❌|❌|❌|❌|
-| Turkish | `tr-TR` |✔️| ❌ | ❌ | *Sentiment* only |
--
cognitive-services Luis Get Started Create App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/LUIS/luis-get-started-create-app.md
- Title: "Quickstart: Build your app in LUIS portal"
-description: This quickstart shows how to create a LUIS app that uses the prebuilt domain `HomeAutomation` for turning lights and appliances on and off. This prebuilt domain provides intents, entities, and example utterances for you. When you're finished, you'll have a LUIS endpoint running in the cloud.
----
-ms.
- Previously updated : 07/19/2022-
-#Customer intent: As a new user, I want to quickly get a LUIS app created so I can understand the model and actions to train, test, publish, and query.
--
-# Quickstart: Build your app in LUIS portal
--
-In this quickstart, create a LUIS app using the prebuilt home automation domain for turning lights and appliances on and off. This prebuilt domain provides intents, entities, and example utterances for you. Next, try customizing your app by adding more intents and entities. When you're finished, you'll have a LUIS endpoint running in the cloud.
---
-## Create a new app
-
-You can create and manage your applications on **My Apps**.
-
-### Create an application
-
-To create an application, click **+ New app**.
-
-In the window that appears, enter the following information:
-
-|Name |Description |
-|||
-|Name | A name for your app. For example, "home automation". |
-|Culture | The language that your app understands and speaks. |
-|Description | A description for your app. |
-|Prediction resource | The prediction resource that will receive queries. |
-
-Select **Done**.
-
->[!NOTE]
->The culture cannot be changed once the application is created.
-
-## Add prebuilt domain
-
-LUIS offers a set of prebuilt domains that can help you get started with your application. A prebuilt domain app is already populated with [intents](./luis-concept-intent.md), [entities](concepts/entities.md) and [utterances](concepts/utterances.md).
-
-1. In the left navigation, select **Prebuilt domains**.
-2. Search for **HomeAutomation**.
-3. Select **Add domain** on the HomeAutomation card.
-
- > [!div class="mx-imgBorder"]
- > ![Select 'Prebuilt domains' then search for 'HomeAutomation'. Select 'Add domain' on the HomeAutomation card.](media/luis-quickstart-new-app/home-automation.png)
-
- When the domain is successfully added, the prebuilt domain box displays a **Remove domain** button.
-
-## Check out intents and entities
-
-1. Select **Intents** in the left navigation menu to see the HomeAutomation domain intents. The domain includes intents such as `HomeAutomation.QueryState` and `HomeAutomation.SetDevice`, each with example utterances.
-
- > [!NOTE]
- > **None** is an intent provided by all LUIS apps. You use it to handle utterances that don't correspond to functionality your app provides.
-
-2. Select the **HomeAutomation.TurnOff** intent. The intent contains a list of example utterances that are labeled with entities.
-
- > [!div class="mx-imgBorder"]
- > [![Screenshot of HomeAutomation.TurnOff intent](media/luis-quickstart-new-app/home-automation-turnoff.png "Screenshot of HomeAutomation.TurnOff intent")](media/luis-quickstart-new-app/home-automation-turnoff.png)
-
-3. If you want to view the entities for the app, select **Entities**. If you select one of the entities, such as **HomeAutomation.DeviceName**, you will see a list of values associated with it.
-
- :::image type="content" source="media/luis-quickstart-new-app/entities-page.png" alt-text="Image alt text" lightbox="media/luis-quickstart-new-app/entities-page.png":::
-
-## Train the LUIS app
-After your application is populated with intents, entities, and utterances, you need to train the application so that the changes you made are reflected.
--
-## Test your app
-
-Once you've trained your app, you can test it.
-
-1. Select **Test** from the top-right navigation.
-
-1. Type a test utterance into the interactive test pane, and press Enter. For example, *Turn off the lights*.
-
- In this example, *Turn off the lights* is correctly identified as the top scoring intent of **HomeAutomation.TurnOff**.
-
- :::image type="content" source="media/luis-quickstart-new-app/review-test-inspection-pane-in-portal.png" alt-text="Screenshot of test panel with utterance highlighted" lightbox="media/luis-quickstart-new-app/review-test-inspection-pane-in-portal.png":::
-
-1. Select **Inspect** to view more information about the prediction.
-
- :::image type="content" source="media/luis-quickstart-new-app/test.png" alt-text="Screenshot of test panel with inspection information" lightbox="media/luis-quickstart-new-app/test.png":::
-
-1. Close the test pane.
-
-## Customize your application
-
-Besides the prebuilt domains, LUIS allows you to create your own custom applications or to customize on top of prebuilt ones.
-
-### Create Intents
-
-To add more intents to your app:
-
-1. Select **Intents** in the left navigation menu.
-2. Select **Create**.
-3. Enter the intent name, `HomeAutomation.AddDeviceAlias`, and then select **Done**.
-
-### Create Entities
-
-To add more entities to your app:
-
-1. Select **Entities** in the left navigation menu.
-2. Select **Create**.
-3. Enter the entity name, `HomeAutomation.DeviceAlias`, select **Machine learned** from **Type**, and then select **Create**.
-
-### Add example utterances
-
-Example utterances are text that a user enters in a chat bot or other client applications. They map the intention of the user's text to a LUIS intent.
-
-On the **Intents** page for `HomeAutomation.AddDeviceAlias`, add the following example utterances under **Example Utterance**:
-
-|#|Example utterances|
-|--|--|
-|1|`Add alias to my fan to be wind machine`|
-|2|`Alias lights to illumination`|
-|3|`nickname living room speakers to our speakers a new fan`|
-|4|`rename living room tv to main tv`|
---
-### Label example utterances
-
-You need to label your utterances because you added an ML entity. Your application uses the labels to learn how to extract the ML entities you created.
--
-## Create Prediction resource
-At this point, you have completed authoring your application. You need to create a prediction resource to publish your application and to receive predictions in a chat bot or other client applications through the prediction endpoint.
-
-To create a Prediction resource from the LUIS portal
---
-## Publish the app to get the endpoint URL
----
-## Query the V3 API prediction endpoint
--
-2. In the browser address bar, for the query string, make sure the following values are in the URL. If they are not in the query string, add them:
-
- * `verbose=true`
- * `show-all-intents=true`
-
-3. In the browser address bar, go to the end of the URL and enter *turn off the living room light* for the query string, then press Enter.
-
- ```json
- {
- "query": "turn off the living room light",
- "prediction": {
- "topIntent": "HomeAutomation.TurnOff",
- "intents": {
- "HomeAutomation.TurnOff": {
- "score": 0.969448864
- },
- "HomeAutomation.QueryState": {
- "score": 0.0122336326
- },
- "HomeAutomation.TurnUp": {
- "score": 0.006547436
- },
- "HomeAutomation.TurnDown": {
- "score": 0.0050634006
- },
- "HomeAutomation.SetDevice": {
- "score": 0.004951761
- },
- "HomeAutomation.TurnOn": {
- "score": 0.00312553928
- },
- "None": {
- "score": 0.000552945654
- }
- },
- "entities": {
- "HomeAutomation.Location": [
- "living room"
- ],
- "HomeAutomation.DeviceName": [
- [
- "living room light"
- ]
- ],
- "HomeAutomation.DeviceType": [
- [
- "light"
- ]
- ],
- "$instance": {
- "HomeAutomation.Location": [
- {
- "type": "HomeAutomation.Location",
- "text": "living room",
- "startIndex": 13,
- "length": 11,
- "score": 0.902181149,
- "modelTypeId": 1,
- "modelType": "Entity Extractor",
- "recognitionSources": [
- "model"
- ]
- }
- ],
- "HomeAutomation.DeviceName": [
- {
- "type": "HomeAutomation.DeviceName",
- "text": "living room light",
- "startIndex": 13,
- "length": 17,
- "modelTypeId": 5,
- "modelType": "List Entity Extractor",
- "recognitionSources": [
- "model"
- ]
- }
- ],
- "HomeAutomation.DeviceType": [
- {
- "type": "HomeAutomation.DeviceType",
- "text": "light",
- "startIndex": 25,
- "length": 5,
- "modelTypeId": 5,
- "modelType": "List Entity Extractor",
- "recognitionSources": [
- "model"
- ]
- }
- ]
- }
- }
- }
- }
- ```
-
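-For reference, the assembled request URL from the steps above follows the V3 prediction URL format and might look like this sketch (the endpoint subdomain, app ID, and prediction key are placeholders for your own values):
-
-```
-https://YOUR-ENDPOINT-SUBDOMAIN.api.cognitive.microsoft.com/luis/prediction/v3.0/apps/YOUR-APP-ID/slots/production/predict?subscription-key=YOUR-PREDICTION-KEY&verbose=true&show-all-intents=true&query=turn off the living room light
-```
-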
-Learn more about the [V3 prediction endpoint](luis-migration-api-v3.md).
--
-## Clean up resources
--
-## Next steps
-
-* [Iterative app design](./luis-concept-app-iteration.md)
-* [Best practices](./luis-concept-best-practices.md)
cognitive-services Luis Get Started Get Intent From Browser https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/LUIS/luis-get-started-get-intent-from-browser.md
- Title: "How to query for predictions using a browser - LUIS"
-description: In this article, use an available public LUIS app to determine a user's intention from conversational text in a browser.
--- Previously updated : 03/26/2021-
-#Customer intent: As a developer familiar with how to use a browser but new to the LUIS service, I want to query the LUIS endpoint of a published model so that I can see the JSON prediction response.
--
-# How to query the prediction runtime with user text
---
-To understand what a LUIS prediction endpoint returns, view a prediction result in a web browser.
-
-## Prerequisites
-
-In order to query a public app, you need:
-
-* Your Language Understanding (LUIS) resource information:
- * **Prediction key** - which can be obtained from [LUIS Portal](https://www.luis.ai/). If you do not already have a subscription to create a key, you can register for a [free account](https://azure.microsoft.com/free/cognitive-services).
- * **Prediction endpoint subdomain** - the subdomain is also the **name** of your LUIS resource.
-* A LUIS app ID - use the public IoT app ID of `df67dcdb-c37d-46af-88e1-8b97951ca1c2`. The user query used in the quickstart code is specific to that app. This app should work with any prediction resource other than the Europe or Australia regions, since it uses "westus" as the authoring region.
-
-## Use the browser to see predictions
-
-1. Open a web browser.
-1. Use the complete URLs below, replacing `YOUR-LUIS-ENDPOINT-SUBDOMAIN` and `YOUR-LUIS-PREDICTION-KEY` with your own values. The requests are GET requests and include the authorization, with your LUIS prediction key, as a query string parameter.
-
- #### [V3 prediction request](#tab/V3-1-1)
--
- The format of the V3 URL for a **GET** endpoint (by slots) request is:
-
- `
- https://YOUR-LUIS-ENDPOINT-SUBDOMAIN.api.cognitive.microsoft.com/luis/prediction/v3.0/apps/df67dcdb-c37d-46af-88e1-8b97951ca1c2/slots/production/predict?query=turn on all lights&subscription-key=YOUR-LUIS-PREDICTION-KEY
- `
-
- #### [V2 prediction request](#tab/V2-1-2)
-
- The format of the V2 URL for a **GET** endpoint request is:
-
- `
- https://YOUR-LUIS-ENDPOINT-SUBDOMAIN.api.cognitive.microsoft.com/luis/v2.0/apps/df67dcdb-c37d-46af-88e1-8b97951ca1c2?subscription-key=YOUR-LUIS-PREDICTION-KEY&q=turn on all lights
- `
-
-1. Paste the URL into a browser window and press Enter. The browser displays a JSON result that indicates that LUIS detects the `HomeAutomation.TurnOn` intent as the top intent and the `HomeAutomation.Operation` entity with the value `on`.
-
- #### [V3 prediction response](#tab/V3-2-1)
-
- ```JSON
- {
- "query": "turn on all lights",
- "prediction": {
- "topIntent": "HomeAutomation.TurnOn",
- "intents": {
- "HomeAutomation.TurnOn": {
- "score": 0.5375382
- }
- },
- "entities": {
- "HomeAutomation.Operation": [
- "on"
- ]
- }
- }
- }
- ```
-
- #### [V2 prediction response](#tab/V2-2-2)
-
- ```json
- {
- "query": "turn on all lights",
- "topScoringIntent": {
- "intent": "HomeAutomation.TurnOn",
- "score": 0.5375382
- },
- "entities": [
- {
- "entity": "on",
- "type": "HomeAutomation.Operation",
- "startIndex": 5,
- "endIndex": 6,
- "score": 0.724984169
- }
- ]
- }
- ```
-
- * * *
-
-1. To see all the intents, add the appropriate query string parameter.
-
- #### [V3 prediction endpoint](#tab/V3-3-1)
-
- Add `show-all-intents=true` to the end of the querystring to **show all intents**, and `verbose=true` to return all detailed information for entities.
-
- `
- https://YOUR-LUIS-ENDPOINT-SUBDOMAIN.api.cognitive.microsoft.com/luis/prediction/v3.0/apps/df67dcdb-c37d-46af-88e1-8b97951ca1c2/slots/production/predict?query=turn on all lights&subscription-key=YOUR-LUIS-PREDICTION-KEY&show-all-intents=true&verbose=true
- `
-
- ```JSON
- {
- "query": "turn off the living room light",
- "prediction": {
- "topIntent": "HomeAutomation.TurnOn",
- "intents": {
- "HomeAutomation.TurnOn": {
- "score": 0.5375382
- },
- "None": {
- "score": 0.08687421
- },
- "HomeAutomation.TurnOff": {
- "score": 0.0207554
- }
- },
- "entities": {
- "HomeAutomation.Operation": [
- "on"
- ]
- }
- }
- }
- ```
-
- #### [V2 prediction endpoint](#tab/V2)
-
- Add `verbose=true` to the end of the querystring to **show all intents**:
-
- `
- https://YOUR-LUIS-ENDPOINT-SUBDOMAIN.api.cognitive.microsoft.com/luis/v2.0/apps/df67dcdb-c37d-46af-88e1-8b97951ca1c2?q=turn on all lights&subscription-key=YOUR-LUIS-PREDICTION-KEY&verbose=true
- `
-
- ```json
- {
- "query": "turn on all lights",
- "topScoringIntent": {
- "intent": "HomeAutomation.TurnOn",
- "score": 0.5375382
- },
- "intents": [
- {
- "intent": "HomeAutomation.TurnOn",
- "score": 0.5375382
- },
- {
- "intent": "None",
- "score": 0.08687421
- },
- {
- "intent": "HomeAutomation.TurnOff",
- "score": 0.0207554
- }
- ],
- "entities": [
- {
- "entity": "on",
- "type": "HomeAutomation.Operation",
- "startIndex": 5,
- "endIndex": 6,
- "score": 0.724984169
- }
- ]
- }
- ```
-
-## Next steps
-
-* [V3 prediction endpoint](luis-migration-api-v3.md)
-* [Custom subdomains](../cognitive-services-custom-subdomains.md)
-* [Use the client libraries or REST API](client-libraries-rest-api.md)
cognitive-services Luis Glossary https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/LUIS/luis-glossary.md
- Title: Glossary - LUIS
-description: The glossary explains terms that you might encounter as you work with the LUIS API Service.
--- Previously updated : 03/21/2022-----
-# Language understanding glossary of common vocabulary and concepts
--
-The Language Understanding (LUIS) glossary explains terms that you might encounter as you work with the LUIS service.
-
-## Active version
-
-The active version is the [version](luis-how-to-manage-versions.md) of your app that is updated when you make changes to the model using the LUIS portal. In the LUIS portal, if you want to make changes to a version that is not the active version, you need to first set that version as active.
-
-## Active learning
-
-Active learning is a technique of machine learning in which the machine learned model is used to identify informative new examples to label. In LUIS, active learning refers to adding utterances from the endpoint traffic whose current predictions are unclear, in order to improve your model. Select **Review endpoint utterances** to view utterances to label.
-
-See also:
-* [Conceptual information](luis-concept-review-endpoint-utterances.md)
-* [Tutorial reviewing endpoint utterances](luis-tutorial-review-endpoint-utterances.md)
-* How to improve the LUIS app by [reviewing endpoint utterances](luis-how-to-review-endpoint-utterances.md)
-
-## Application (App)
-
-In LUIS, your application, or app, is a collection of machine learned models, built on the same data set, that work together to predict intents and entities for a particular scenario. Each application has a separate prediction endpoint.
-
-If you are building an HR bot, you might have a set of intents, such as "Schedule leave time", "Inquire about benefits", and "Update personal information", along with entities for each of those intents, that you group into a single application.
-
-## Authoring
-
-Authoring is the ability to create, manage and deploy a LUIS app, either using the LUIS portal or the authoring APIs.
-
-### Authoring Key
-
-The [authoring key](luis-how-to-azure-subscription.md) is used to author the app. It isn't used for production-level endpoint queries. For more information, see [resource limits](luis-limits.md#resource-usage-and-limits).
-
-### Authoring Resource
-
-Your LUIS [authoring resource](luis-how-to-azure-subscription.md) is a manageable item that is available through Azure. The resource is your access to the associated authoring, training, and publishing abilities of the Azure service. The resource includes authentication, authorization, and security information you need to access the associated Azure service.
-
-The authoring resource has an Azure "kind" of `LUIS-Authoring`.
-
-## Batch test
-
-Batch testing is the ability to validate a current LUIS app's models with a consistent and known test set of user utterances. The batch test is defined in a [JSON formatted file](./luis-how-to-batch-test.md#batch-test-file).
--
-See also:
-* [Concepts](./luis-how-to-batch-test.md)
-* [How-to](luis-how-to-batch-test.md) run a batch test
-* [Tutorial](./luis-how-to-batch-test.md) - create and run a batch test
-
-### F-measure
-
-In batch testing, a measure of the test's accuracy.
-
-### False negative (FN)
-
-In batch testing, the data points represent utterances in which your app incorrectly predicted the absence of the target intent/entity.
-
-### False positive (FP)
-
-In batch testing, the data points represent utterances in which your app incorrectly predicted the existence of the target intent/entity.
-
-### Precision
-In batch testing, precision (also called positive predictive value) is the fraction of relevant utterances among the retrieved utterances.
-
-An example for an animal batch test is the number of animals correctly predicted as sheep divided by the total number of animals predicted as sheep (whether or not they're actually sheep).
-
-### Recall
-
-In batch testing, recall (also known as sensitivity) is the ability for LUIS to generalize.
-
-An example for an animal batch test is the number of animals correctly predicted as sheep divided by the total number of sheep available.
-
-### True negative (TN)
-
-A true negative is when your app correctly predicts no match. In batch testing, a true negative occurs when your app does not predict an intent or entity for an example that has not been labeled with that intent or entity.
-
-### True positive (TP)
-
-A true positive is when your app correctly predicts a match. In batch testing, a true positive occurs when your app predicts an intent or entity for an example that has been labeled with that intent or entity.
-
-## Classifier
-
-A classifier is a machine learned model that predicts what category or class an input fits into.
-
-An [intent](#intent) is an example of a classifier.
-
-## Collaborator
-
-A collaborator is conceptually the same thing as a [contributor](#contributor). A collaborator is granted access when an owner adds the collaborator's email address to an app that isn't controlled with Azure role-based access control (Azure RBAC). If you are still using collaborators, you should migrate your LUIS account, and use LUIS authoring resources to manage contributors with Azure RBAC.
-
-## Contributor
-
-A contributor is not the [owner](#owner) of the app, but has the same permissions to add, edit, and delete the intents, entities, and utterances. Contributor access to a LUIS app is granted through Azure role-based access control (Azure RBAC).
-
-See also:
-* [How-to](luis-how-to-collaborate.md#add-contributor-to-azure-authoring-resource) add contributors
-
-## Descriptor
-
-A descriptor is the term formerly used for a machine learning [feature](#features).
-
-## Domain
-
-In the LUIS context, a domain is an area of knowledge. Your domain is specific to your scenario. Different domains use specific language and terminology that have meaning in the context of the domain. For example, if you are building an application to play music, your application would have terms and language specific to music: words like "song, track, album, lyrics, b-side, artist". For examples of domains, see [prebuilt domains](#prebuilt-domain).
-
-## Endpoint
-
-### Authoring endpoint
-
-The LUIS authoring endpoint URL is where you author, train, and publish your app. The endpoint URL contains the region or custom subdomain of the published app as well as the app ID.
-
-Learn more about authoring your app programmatically from the [Developer reference](developer-reference-resource.md#rest-endpoints)
-
-### Prediction endpoint
-
-The LUIS prediction endpoint URL is where you submit LUIS queries after the [LUIS app](#application-app) is authored and published. The endpoint URL contains the region or custom subdomain of the published app as well as the app ID. You can find the endpoint on the **[Azure resources](luis-how-to-azure-subscription.md)** page of your app, or you can get the endpoint URL from the [Get App Info](https://westus.dev.cognitive.microsoft.com/docs/services/5890b47c39e2bb17b84a55ff/operations/5890b47c39e2bb052c5b9c37) API.
-
-Your access to the prediction endpoint is authorized with the LUIS prediction key.
-
-## Entity
-
-[Entities](concepts/entities.md) are words in utterances that describe information used to fulfill or identify an intent. If your entity is complex and you would like your model to identify specific parts, you can break your model into subentities. For example, you might want your model to predict an address, but also the subentities of street, city, state, and zipcode. Entities can also be used as features to models. Your response from the LUIS app will include both the predicted intents and all the entities.
-
-### Entity extractor
-
-An entity extractor, sometimes known simply as an extractor, is the type of machine learned model that LUIS uses to predict entities.
-
-### Entity schema
-
-The entity schema is the structure you define for machine learned entities with subentities. The prediction endpoint returns all of the extracted entities and subentities defined in the schema.
-
-### Entity's subentity
-
-A subentity is a child entity of a machine-learning entity.
-
-### Non-machine-learning entity
-
-An entity that uses text matching to extract data:
-* List entity
-* Regular expression entity
-
-### List entity
-
-A [list entity](reference-entity-list.md) represents a fixed, closed set of related words along with their synonyms. List entities are exact matches, unlike machine learned entities.
-
-The entity will be predicted if a word in the list entity is included in the list. For example, if you have a list entity called "size" and you have the words "small, medium, large" in the list, then the size entity will be predicted for all utterances where the words "small", "medium", or "large" are used regardless of the context.
-
-### Regular expression
-
-A [regular expression entity](reference-entity-regular-expression.md) represents a regular expression. Regular expression entities are exact matches, unlike machine learned entities.
-
-### Prebuilt entity
-
-See Prebuilt model's entry for [prebuilt entity](#prebuilt-entity)
-
-## Features
-
-In machine learning, a feature is a characteristic that helps the model recognize a particular concept. It is a hint that LUIS can use, but not a hard rule.
-
-This term is also referred to as a **[machine-learning feature](concepts/patterns-features.md)**.
-
-These hints are used in conjunction with the labels to learn how to predict new data. LUIS supports both phrase lists and using other models as features.
-
-### Required feature
-
-A required feature is a way to constrain the output of a LUIS model. When a feature for an entity is marked as required, the feature must be present in the example for the entity to be predicted, regardless of what the machine learned model predicts.
-
-Consider an example where you have a prebuilt-number feature that you have marked as required on the quantity entity for a menu ordering bot. When your bot sees `I want a bajillion large pizzas?`, bajillion will not be predicted as a quantity regardless of the context in which it appears. Bajillion is not a valid number and won't be predicted by the number pre-built entity.
-
-## Intent
-
-An [intent](luis-concept-intent.md) represents a task or action the user wants to perform. It is a purpose or goal expressed in a user's input, such as booking a flight or paying a bill. In LUIS, an utterance as a whole is classified as an intent, but parts of the utterance are extracted as entities.
-
-## Labeling examples
-
-Labeling, or marking, is the process of associating a positive or negative example with a model.
-
-### Labeling for intents
-In LUIS, intents within an app are mutually exclusive. This means when you add an utterance to an intent, it is considered a _positive_ example for that intent and a _negative_ example for all other intents. Negative examples should not be confused with the "None" intent, which represents utterances that are outside the scope of the app.
-
-### Labeling for entities
-In LUIS, you [label](label-entity-example-utterance.md) a word or phrase in an intent's example utterance with an entity as a _positive_ example. Labeling shows the intent what it should predict for that utterance. The labeled utterances are used to train the intent.
-
-## LUIS app
-
-See the definition for [application (app)](#application-app).
-
-## Model
-
-A (machine learned) model is a function that makes a prediction on input data. In LUIS, we refer to intent classifiers and entity extractors generically as "models", and we refer to a collection of models that are trained, published, and queried together as an "app".
-
-## Normalized value
-
-You add values to your [list](#list-entity) entities. Each of those values can have a list of one or more synonyms. Only the normalized value is returned in the response.
-
-## Overfitting
-
-Overfitting happens when the model is fixated on the specific examples and is not able to generalize well.
-
-## Owner
-
-Each app has one owner who is the person that created the app. The owner manages permissions to the application in the Azure portal.
-
-## Phrase list
-
-A [phrase list](concepts/patterns-features.md) is a specific type of machine learning feature that includes a group of values (words or phrases) that belong to the same class and must be treated similarly (for example, names of cities or products).
-
-## Prebuilt model
-
-A [prebuilt model](luis-concept-prebuilt-model.md) is an intent, entity, or collection of both, along with labeled examples. These common prebuilt models can be added to your app to reduce the model development work required for your app.
-
-### Prebuilt domain
-
-A prebuilt domain is a LUIS app configured for a specific domain such as home automation (HomeAutomation) or restaurant reservations (RestaurantReservation). The intents, utterances, and entities are configured for this domain.
-
-### Prebuilt entity
-
-A prebuilt entity is an entity LUIS provides for common types of information such as number, URL, and email. These are created based on public data. You can choose to add a prebuilt entity as a stand-alone entity, or as a feature to an entity.
-
-### Prebuilt intent
-
-A prebuilt intent is an intent LUIS provides for common types of information. Prebuilt intents come with their own labeled example utterances.
-
-## Prediction
-
-A prediction is a REST request to the Azure LUIS prediction service that takes in new data (user utterance), and applies the trained and published application to that data to determine what intents and entities are found.
-
-### Prediction key
-
-The [prediction key](luis-how-to-azure-subscription.md) is the key associated with the LUIS service you created in Azure that authorizes your usage of the prediction endpoint.
-
-This key is not the authoring key. If you have a prediction endpoint key, it should be used for any endpoint requests instead of the authoring key. You can see your current prediction key inside the endpoint URL at the bottom of the **Azure resources** page on the LUIS website. It is the value of the `subscription-key` name/value pair.
-
-### Prediction resource
-
-Your LUIS prediction resource is a manageable item that is available through Azure. The resource is your access to prediction endpoint queries for the Azure service.
-
-The prediction resource has an Azure "kind" of `LUIS`.
-
-### Prediction score
-
-The [score](luis-concept-prediction-score.md) is a number from 0 to 1 that is a measure of how confident the system is that a particular input utterance matches a particular intent. A score closer to 1 means the system is very confident about its output, and a score closer to 0 means the system is confident that the input does not match a particular output. Scores in the middle mean the system is very unsure of how to make the decision.
-
-For example, take a model that is used to identify if some customer text includes a food order. It might give a score of 1 for "I'd like to order one coffee" (the system is very confident that this is an order) and a score of 0 for "my team won the game last night" (the system is very confident that this is NOT an order). It might give a score of 0.5 for "let's have some tea" (the system isn't sure whether this is an order).
-
-## Programmatic key
-
-Renamed to [authoring key](#authoring-key).
-
-## Publish
-
-[Publishing](luis-how-to-publish-app.md) means making a LUIS active version available on either the staging or production [endpoint](#endpoint).
-
-## Quota
-
-LUIS quota is the limitation of the Azure subscription tier. The LUIS quota can be limited by both requests per second (HTTP Status 429) and total requests in a month (HTTP Status 403).
-
-## Schema
-
-Your schema includes your intents and entities along with the subentities. The schema is planned initially and then iterated on over time. The schema doesn't include app settings, features, or example utterances.
-
-## Sentiment Analysis
-Sentiment analysis provides positive or negative values of the utterances provided by the [Language service](../language-service/sentiment-opinion-mining/overview.md).
-
-## Speech priming
-
-Speech priming improves the recognition of spoken words and phrases that are commonly used in your scenario with [Speech Services](../speech-service/overview.md). For speech priming enabled applications, all LUIS labeled examples are used to improve speech recognition accuracy by creating a customized speech model for this specific application. For example, in a chess game you want to make sure that when the user says "Move knight", it isn't interpreted as "Move night". The LUIS app should include examples in which "knight" is labeled as an entity.
-
-## Starter key
-
-A free key to use when first starting out using LUIS.
-
-## Synonyms
-
-In LUIS [list entities](reference-entity-list.md), you can create a normalized value, and each normalized value can have a list of synonyms. For example, you might create a size entity that has normalized values of small, medium, large, and extra-large. You could then create synonyms for each value like this:
-
-|Normalized value| Synonyms|
-|--|--|
-|Small| the little one, 8 ounce|
-|Medium| regular, 12 ounce|
-|Large| big, 16 ounce|
-|Xtra large| the biggest one, 24 ounce|
-
-The model will return the normalized value for the entity when any of the synonyms are seen in the input.
-
-## Test
-
-[Testing](./luis-interactive-test.md) a LUIS app means viewing model predictions.
-
-## Timezone offset
-
-The endpoint includes [timezoneOffset](luis-concept-data-alteration.md#change-time-zone-of-prebuilt-datetimev2-entity). This is the number of minutes you want to add to or subtract from the datetimeV2 prebuilt entity. For example, if the utterance is "what time is it now?", the datetimeV2 returned is the current time for the client request. If your client request is coming from a bot or other application that is not in the same time zone as your bot's user, you should pass in the offset between the bot and the user.
-
-See [Change time zone of prebuilt datetimeV2 entity](luis-concept-data-alteration.md?#change-time-zone-of-prebuilt-datetimev2-entity).
-
-## Token
-A [token](luis-language-support.md#tokenization) is the smallest unit of text that LUIS can recognize. This differs slightly across languages.
-
-For **English**, a token is a continuous span (no spaces or punctuation) of letters and numbers. A space is NOT a token.
-
-|Phrase|Token count|Explanation|
-|--|--|--|
-|`Dog`|1|A single word with no punctuation or spaces.|
-|`RMT33W`|1|A record locator number. It may have numbers and letters, but does not have any punctuation.|
-|`425-555-5555`|5|A phone number. Each punctuation mark is a single token so `425-555-5555` would be 5 tokens:<br>`425`<br>`-`<br>`555`<br>`-`<br>`5555` |
-|`https://luis.ai`|7|`https`<br>`:`<br>`/`<br>`/`<br>`luis`<br>`.`<br>`ai`<br>|
-
-## Train
-
-[Training](luis-how-to-train.md) is the process of teaching LUIS about any changes to the active version since the last training.
-
-### Training data
-
-Training data is the set of information that is needed to train a model. This includes the schema, labeled utterances, features, and application settings.
-
-### Training errors
-
-Training errors are predictions on your training data that do not match their labels.
-
-## Utterance
-
-An [utterance](concepts/utterances.md) is user input that is short text representative of a sentence in a conversation. It is a natural language phrase such as "book 2 tickets to Seattle next Tuesday". Example utterances are added to train the model, and the model predicts on new utterances at runtime.
-
-## Version
-
-A LUIS [version](luis-how-to-manage-versions.md) is a specific instance of a LUIS application associated with a LUIS app ID and the published endpoint. Every LUIS app has at least one version.
cognitive-services Luis How To Azure Subscription https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/LUIS/luis-how-to-azure-subscription.md
- Title: How to create and manage LUIS resources-
-description: Learn how to use and manage Azure resources for LUIS.the app.
------- Previously updated : 07/19/2022---
-# How to create and manage LUIS resources
---
-Use this article to learn about the types of Azure resources you can use with LUIS, and how to manage them.
-
-## Authoring Resource
-
-An authoring resource lets you create, manage, train, test, and publish your applications. One [pricing tier](https://azure.microsoft.com/pricing/details/cognitive-services/language-understanding-intelligent-services/) is available for the LUIS authoring resource - the free (F0) tier, which gives you:
-
-* 1 million authoring transactions
-* 1,000 testing prediction endpoint requests per month.
-
-You can use the [v3.0-preview LUIS Programmatic APIs](https://westus.dev.cognitive.microsoft.com/docs/services/luis-programmatic-apis-v3-0-preview/operations/5890b47c39e2bb052c5b9c2f) to manage authoring resources.
-
-## Prediction resource
-
-A prediction resource lets you query your prediction endpoint beyond the 1,000 requests provided by the authoring resource. Two [pricing tiers](https://azure.microsoft.com/pricing/details/cognitive-services/language-understanding-intelligent-services/) are available for the prediction resource:
-
-* The free (F0) prediction resource, which gives you 10,000 prediction endpoint requests monthly.
-* Standard (S0) prediction resource, which is the paid tier.
-
-You can use the [v3.0-preview LUIS Endpoint API](https://westus.dev.cognitive.microsoft.com/docs/services/luis-endpoint-api-v3-0-preview/operations/5f68f4d40a511ce5a7440859) to manage prediction resources.
-
-> [!Note]
-> * You can also use a [multi-service resource](../cognitive-services-apis-create-account-cli.md?tabs=multiservice) to get a single endpoint you can use for multiple Cognitive Services.
-> * LUIS provides two types of F0 (free tier) resources: one for authoring transactions and one for prediction transactions. If you're running out of free quota for prediction transactions, make sure you're using the F0 prediction resource, which gives you 10,000 free transactions monthly, and not the authoring resource, which gives you 1,000 prediction transactions monthly.
-> * You should author LUIS apps in the [regions](luis-reference-regions.md#publishing-regions) where you want to publish and query.
-
-## Create LUIS resources
-
-To create LUIS resources, you can use the LUIS portal, [Azure portal](https://portal.azure.com/#create/Microsoft.CognitiveServicesLUISAllInOne), or Azure CLI. After you've created your resources, you need to assign them to your apps before the apps can use them.
-
-# [LUIS portal](#tab/portal)
-
-### Create a LUIS authoring resource using the LUIS portal
-
-1. Sign in to the [LUIS portal](https://www.luis.ai), select your country/region and agree to the terms of use. If you see the **My Apps** section in the portal, a LUIS resource already exists and you can skip the next step.
-
-2. In the **Choose an authoring** window that appears, find your Azure subscription, and LUIS authoring resource. If you don't have a resource, you can create a new one.
-
- :::image type="content" source="./media/luis-how-to-azure-subscription/choose-authoring-resource.png" alt-text="Choose a type of Language Understanding authoring resource.":::
-
- When you create a new authoring resource, provide the following information:
- * **Tenant name**: the tenant your Azure subscription is associated with.
- * **Azure subscription name**: the subscription that will be billed for the resource.
- * **Azure resource group name**: a custom resource group name you choose or create. Resource groups allow you to group Azure resources for access and management.
- * **Azure resource name**: a custom name you choose, used as part of the URL for your authoring and prediction endpoint queries.
-    * **Pricing tier**: the pricing tier determines the maximum number of transactions per second and per month.
-
-### Create a LUIS Prediction resource using the LUIS portal
--
-# [Without LUIS portal](#tab/without-portal)
-
-### Create LUIS resources without using the LUIS portal
-
-Use the [Azure CLI](/cli/azure/install-azure-cli) to create each resource individually.
-
-> [!TIP]
-> * The authoring resource `kind` is `LUIS.Authoring`
-> * The prediction resource `kind` is `LUIS`
-
-1. Sign in to the Azure CLI:
-
- ```azurecli
- az login
- ```
-
- This command opens a browser so you can select the correct account and provide authentication.
-
-2. Create a LUIS authoring resource of kind `LUIS.Authoring`, named `my-luis-authoring-resource`. Create it in the _existing_ resource group named `my-resource-group` for the `westus` region.
-
- ```azurecli
- az cognitiveservices account create -n my-luis-authoring-resource -g my-resource-group --kind LUIS.Authoring --sku F0 -l westus --yes
- ```
-
-3. Create a LUIS prediction endpoint resource of kind `LUIS`, named `my-luis-prediction-resource`. Create it in the _existing_ resource group named `my-resource-group` for the `westus` region. If you want higher throughput than the free tier provides, change `F0` to `S0`. [Learn more about pricing tiers and throughput.](luis-limits.md#resource-usage-and-limits)
-
- ```azurecli
- az cognitiveservices account create -n my-luis-prediction-resource -g my-resource-group --kind LUIS --sku F0 -l westus --yes
- ```
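-
-    After a resource is created, you can retrieve its keys with the Azure CLI, for example:
-
-    ```azurecli
-    az cognitiveservices account keys list -n my-luis-prediction-resource -g my-resource-group
-    ```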
----
-## Assign LUIS resources
-
-Creating a resource doesn't put it to use; you need to assign the resource to your apps. You can assign an authoring resource to a single app or to all apps in LUIS.
-
-# [LUIS portal](#tab/portal)
-
-### Assign resources using the LUIS portal
-
-**Assign an authoring resource to all your apps**
-
- The following procedure assigns the authoring resource to all apps.
-
-1. Sign in to the [LUIS portal](https://www.luis.ai).
-1. In the upper-right corner, select your user account, and then select **Settings**.
-1. On the **User Settings** page, select **Add authoring resource**, and then select an existing authoring resource. Select **Save**.
-
-**Assign a resource to a specific app**
-
-The following procedure assigns a resource to a specific app.
-
-1. Sign in to the [LUIS portal](https://www.luis.ai). Select an app from the **My apps** list.
-1. Go to **Manage** > **Azure Resources**:
-
- :::image type="content" source="./media/luis-how-to-azure-subscription/manage-azure-resources-prediction.png" alt-text="Choose a type of Language Understanding prediction resource." lightbox="./media/luis-how-to-azure-subscription/manage-azure-resources-prediction.png":::
-
-1. On the **Prediction resource** or **Authoring resource** tab, select the **Add prediction resource** or **Add authoring resource** button.
-1. Use the fields in the form to find the correct resource, and then select **Save**.
-
-# [Without LUIS portal](#tab/without-portal)
-
-## Assign prediction resource without using the LUIS portal
-
-For automated processes like CI/CD pipelines, you can automate the assignment of a LUIS resource to a LUIS app with the following steps:
-
-1. Get an [Azure Resource Manager token](https://resources.azure.com/api/token?plaintext=true), which is an alphanumeric string of characters. This token expires, so use it right away. You can also get it with the following Azure CLI command.
-
- ```azurecli
- az account get-access-token --resource=https://management.core.windows.net/ --query accessToken --output tsv
- ```
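-
-    For scripting, you can capture the token in a shell variable (bash syntax) and reuse it in the calls that follow:
-
-    ```azurecli
-    # Store the Azure Resource Manager token for later use.
-    token=$(az account get-access-token --resource=https://management.core.windows.net/ --query accessToken --output tsv)
-    ```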
-
-1. Use the token to request the LUIS runtime resources across subscriptions. Use the API to [get the LUIS Azure account](https://westus.dev.cognitive.microsoft.com/docs/services/5890b47c39e2bb17b84a55ff/operations/5be313cec181ae720aa2b26c) that your user account has access to.
-
- This POST API requires the following values:
-
- |Header|Value|
- |--|--|
- |`Authorization`|The value of `Authorization` is `Bearer {token}`. The token value must be preceded by the word `Bearer` and a space.|
- |`Ocp-Apim-Subscription-Key`|Your authoring key.|
-
- The API returns an array of JSON objects that represent your LUIS subscriptions. Returned values include the subscription ID, resource group, and resource name, returned as `AccountName`. Find the item in the array that's the LUIS resource that you want to assign to the LUIS app.
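-
-    For example, the returned array resembles the following sketch. `AccountName` is described above; the other field names and all values shown here are illustrative placeholders:
-
-    ```json
-    [
-        {
-            "AzureSubscriptionId": "00000000-0000-0000-0000-000000000000",
-            "ResourceGroup": "my-resource-group",
-            "AccountName": "my-luis-prediction-resource"
-        }
-    ]
-    ```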
-
-1. Assign the LUIS resource to the app by using the [Assign a LUIS Azure accounts to an application](https://westus.dev.cognitive.microsoft.com/docs/services/5890b47c39e2bb17b84a55ff/operations/5be32228e8473de116325515) API.
-
- This POST API requires the following values:
-
- |Type|Setting|Value|
- |--|--|--|
- |Header|`Authorization`|The value of `Authorization` is `Bearer {token}`. The token value must be preceded by the word `Bearer` and a space.|
- |Header|`Ocp-Apim-Subscription-Key`|Your authoring key.|
- |Header|`Content-type`|`application/json`|
- |Querystring|`appid`|The LUIS app ID.
- |Body||{`AzureSubscriptionId`: Your Subscription ID,<br>`ResourceGroup`: Resource Group name that has your prediction resource,<br>`AccountName`: Name of your prediction resource}|
-
- When this API is successful, it returns `201 - created status`.
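-
-    For reference, the JSON body described in the table resembles the following sketch (all values are placeholders):
-
-    ```json
-    {
-        "AzureSubscriptionId": "00000000-0000-0000-0000-000000000000",
-        "ResourceGroup": "my-resource-group",
-        "AccountName": "my-luis-prediction-resource"
-    }
-    ```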
---
-## Unassign a resource
-
-When you unassign a resource, it's not deleted from Azure. It's only unlinked from LUIS.
-
-# [LUIS portal](#tab/portal)
-
-## Unassign resources using LUIS portal
-
-1. Sign in to the [LUIS portal](https://www.luis.ai), and then select an app from the **My apps** list.
-1. Go to **Manage** > **Azure Resources**.
-1. Select the **Unassign resource** button for the resource.
-
-# [Without LUIS portal](#tab/without-portal)
-
-## Unassign prediction resource without using the LUIS portal
-
-1. Get an [Azure Resource Manager token](https://resources.azure.com/api/token?plaintext=true), which is an alphanumeric string of characters. This token expires, so use it right away. You can also get it with the following Azure CLI command.
-
- ```azurecli
- az account get-access-token --resource=https://management.core.windows.net/ --query accessToken --output tsv
- ```
-
-1. Use the token to request the LUIS runtime resources across subscriptions. Use the [Get LUIS Azure accounts API](https://westus.dev.cognitive.microsoft.com/docs/services/5890b47c39e2bb17b84a55ff/operations/5be313cec181ae720aa2b26c) to list the accounts that your user account has access to.
-
- This POST API requires the following values:
-
- |Header|Value|
- |--|--|
- |`Authorization`|The value of `Authorization` is `Bearer {token}`. The token value must be preceded by the word `Bearer` and a space.|
- |`Ocp-Apim-Subscription-Key`|Your authoring key.|
-
-    The API returns an array of JSON objects that represent your LUIS subscriptions. Returned values include the subscription ID, resource group, and resource name, returned as `AccountName`. Find the item in the array that's the LUIS resource you want to unassign from the LUIS app.
-
-1. Remove the resource assignment by using the [Unassign a LUIS Azure account from an application](https://westus.dev.cognitive.microsoft.com/docs/services/5890b47c39e2bb17b84a55ff/operations/5be32554f8591db3a86232e1/console) API.
-
- This DELETE API requires the following values:
-
- |Type|Setting|Value|
- |--|--|--|
- |Header|`Authorization`|The value of `Authorization` is `Bearer {token}`. The token value must be preceded by the word `Bearer` and a space.|
- |Header|`Ocp-Apim-Subscription-Key`|Your authoring key.|
- |Header|`Content-type`|`application/json`|
- |Querystring|`appid`|The LUIS app ID.
- |Body||{`AzureSubscriptionId`: Your Subscription ID,<br>`ResourceGroup`: Resource Group name that has your prediction resource,<br>`AccountName`: Name of your prediction resource}|
-
- When this API is successful, it returns `200 - OK status`.
---
-## Resource ownership
-
-An Azure resource, like a LUIS resource, is owned by the subscription that contains the resource.
-
-To change the ownership of a resource, you can take one of these actions:
-* Transfer the [ownership](../../cost-management-billing/manage/billing-subscription-transfer.md) of your subscription.
-* Export the LUIS app as a file, and then import the app on a different subscription. Export is available on the **My apps** page in the LUIS portal.
-
-## Resource limits
-
-### Authoring key creation limits
-
-You can create as many as 10 authoring keys per region, per subscription. Publishing regions are different from authoring regions. Make sure you create an app in the authoring region that corresponds to the publishing region where you want your client application to be located. For information on how authoring regions map to publishing regions, see [Authoring and publishing regions](luis-reference-regions.md).
-
-See [resource limits](luis-limits.md#resource-usage-and-limits) for more information.
-
-### Errors for key usage limits
-
-Usage limits are based on the pricing tier.
-
-If you exceed your transactions-per-second (TPS) quota, you receive an HTTP 429 error. If you exceed your transaction-per-month (TPM) quota, you receive an HTTP 403 error.
-
-## Change the pricing tier
-
-1. In [the Azure portal](https://portal.azure.com), go to **All resources** and select your resource.
-
- :::image type="content" source="./media/luis-usage-tiers/find.png" alt-text="Screenshot that shows a LUIS subscription in the Azure portal." lightbox="./media/luis-usage-tiers/find.png":::
-
-1. From the left side menu, select **Pricing tier** to see the available pricing tiers.
-1. Select the pricing tier you want, and then choose **Select** to save your change. When the pricing change is complete, a notification appears in the top right confirming the pricing tier update.
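-
-You can also change the pricing tier from the Azure CLI. A command like the following updates the SKU; the resource and group names here are placeholders:
-
-```azurecli
-# Change the prediction resource to the Standard (S0) tier.
-az cognitiveservices account update -n my-luis-prediction-resource -g my-resource-group --sku S0
-```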
-
-## View Azure resource metrics
-
-### View a summary of Azure resource usage
-You can view LUIS usage information in the Azure portal. The **Overview** page shows a summary, including recent calls and errors. If you make a LUIS endpoint request, allow up to five minutes for the change to appear.
--
-### Customizing Azure resource usage charts
-The **Metrics** page provides a more detailed view of the data.
-You can configure your metrics charts for a specific **time period** and **metric**.
--
-### Total transactions threshold alert
-If you want to know when you reach a certain transaction threshold, for example 10,000 transactions, you can create an alert:
-
-1. From the left side menu, select **Alerts**.
-2. From the top menu, select **New alert rule**.
-
- :::image type="content" source="./media/luis-usage-tiers/alerts.png" alt-text="Screenshot that shows the alert rules page." lightbox="./media/luis-usage-tiers/alerts.png":::
-
-3. Select **Add condition**.
-
- :::image type="content" source="./media/luis-usage-tiers/alerts-2.png" alt-text="Screenshot that shows the add condition page for alert rules." lightbox="./media/luis-usage-tiers/alerts-2.png":::
-
-4. Select **Total calls**
-
- :::image type="content" source="./media/luis-usage-tiers/alerts-3.png" alt-text="Screenshot that shows the total calls page for alerts." lightbox="./media/luis-usage-tiers/alerts-3.png":::
-
-5. Scroll down to the **Alert logic** section, set the attributes as you want, and select **Done**.
-
- :::image type="content" source="./media/luis-usage-tiers/alerts-4.png" alt-text="Screenshot that shows the alert logic page." lightbox="./media/luis-usage-tiers/alerts-4.png":::
-
-6. To send notifications or invoke actions when the alert rule triggers, go to the **Actions** section and add your action group.
-
- :::image type="content" source="./media/luis-usage-tiers/alerts-5.png" alt-text="Screenshot that shows the actions page for alerts." lightbox="./media/luis-usage-tiers/alerts-5.png":::
-
-### Reset an authoring key
-
-For [migrated authoring resource](luis-migration-authoring.md) apps: If your authoring key is compromised, reset the key in the Azure portal, on the **Keys** page for the authoring resource.
-
-For apps that haven't been migrated: The key is reset on all your apps in the LUIS portal. If you author your apps via the authoring APIs, you need to change the value of `Ocp-Apim-Subscription-Key` to the new key.
-
-### Regenerate an Azure key
-
-You can regenerate an Azure key from the **Keys** page in the Azure portal.
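-
-If you prefer the Azure CLI, a command like the following regenerates a key; the resource and group names are placeholders, and `--key-name` selects `Key1` or `Key2`:
-
-```azurecli
-# Regenerate the first key of the prediction resource.
-az cognitiveservices account keys regenerate -n my-luis-prediction-resource -g my-resource-group --key-name Key1
-```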
-
-<a name="securing-the-endpoint"></a>
-
-## App ownership, access, and security
-
-An app is defined by its Azure resources, which are determined by the owner's subscription.
-
-You can move your LUIS app. Use the following resources to help you do so by using the Azure portal or Azure CLI:
-
-* [Move an app between LUIS authoring resources](https://westus.dev.cognitive.microsoft.com/docs/services/5890b47c39e2bb17b84a55ff/operations/apps-move-app-to-another-luis-authoring-azure-resource)
-* [Move a resource to a new resource group or subscription](../../azure-resource-manager/management/move-resource-group-and-subscription.md)
-* [Move a resource within the same subscription or across subscriptions](../../azure-resource-manager/management/move-limitations/app-service-move-limitations.md)
--
-## Next steps
-
-* Learn [how to use versions](luis-how-to-manage-versions.md) to control your app life cycle.
--
cognitive-services Luis How To Batch Test https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/LUIS/luis-how-to-batch-test.md
- Title: How to perform a batch test - LUIS-
-description: Use Language Understanding (LUIS) batch testing sets to find utterances with incorrect intents and entities.
-------- Previously updated : 01/19/2022---
-# Batch testing with a set of example utterances
---
-Batch testing validates your active trained version to measure its prediction accuracy. A batch test helps you view the accuracy of each intent and entity in your active version. Review the batch test results to take appropriate action to improve accuracy, such as adding more example utterances to an intent if your app frequently fails to identify the correct intent or labeling entities within the utterance.
-
-## Group data for batch test
-
-It is important that utterances used for batch testing are new to LUIS. If you have a data set of utterances, divide the utterances into three sets: example utterances added to an intent, utterances received from the published endpoint, and utterances used to batch test LUIS after it is trained.
-
-The batch JSON file you use should include utterances with top-level machine-learning entities labeled, including their start and end positions. The utterances should not be part of the examples already in the app. They should be utterances you want to positively predict for intent and entities.
-
-You can separate tests by intent and/or entity, or have all the tests (up to 1,000 utterances) in the same file.
-
-### Common errors importing a batch
-
-If you run into errors uploading your batch file to LUIS, check for the following common issues:
-
-* More than 1,000 utterances in a batch file
-* An utterance JSON object that doesn't have an entities property. The property can be an empty array.
-* Word(s) labeled in multiple entities
-* Entity labels starting or ending on a space.
-
-## Fixing batch errors
-
-If there are errors in the batch testing, you can add more utterances to an intent, and/or label more utterances with the entity to help LUIS discriminate between intents. If you have added utterances, and labeled them, and still get prediction errors in batch testing, consider adding a [phrase list](concepts/patterns-features.md) feature with domain-specific vocabulary to help LUIS learn faster.
--
-<a name="batch-testing"></a>
-
-# [LUIS portal](#tab/portal)
-
-## Batch testing using the LUIS portal
-
-### Import and train an example app
-
-Import an app that takes a pizza order such as `1 pepperoni pizza on thin crust`.
-
-1. Download and save [app JSON file](https://github.com/Azure-Samples/cognitive-services-sample-data-files/blob/master/luis/apps/pizza-with-machine-learned-entity.json?raw=true).
-
-1. Sign in to the [LUIS portal](https://www.luis.ai), and select your **Subscription** and **Authoring resource** to see the apps assigned to that authoring resource.
-1. Select the arrow next to **New app** and click **Import as JSON** to import the JSON into a new app. Name the app `Pizza app`.
--
-1. Select **Train** in the top-right corner of the navigation to train the app.
---
-### Batch test file
-
-The example JSON includes one utterance with a labeled entity to illustrate what a test file looks like. In your own tests, you should have many utterances with the correct intent and machine-learning entities labeled.
-
-1. Create `pizza-with-machine-learned-entity-test.json` in a text editor or [download](https://github.com/Azure-Samples/cognitive-services-sample-data-files/blob/master/luis/batch-tests/pizza-with-machine-learned-entity-test.json?raw=true) it.
-
-2. In the JSON-formatted batch file, add an utterance with the **Intent** you want predicted in the test.
-
- [!code-json[Add the intents to the batch test file](~/samples-cognitive-services-data-files/luis/batch-tests/pizza-with-machine-learned-entity-test.json "Add the intent to the batch test file")]
-
-## Run the batch
-
-1. Select **Test** in the top navigation bar.
-
-2. Select **Batch testing panel** in the right-side panel.
-
- ![Batch Testing Link](./media/luis-how-to-batch-test/batch-testing-link.png)
-
-3. Select **Import**. In the dialog box that appears, select **Choose File** and locate a JSON file with the correct JSON format that contains *no more than 1,000* utterances to test.
-
- Import errors are reported in a red notification bar at the top of the browser. When an import has errors, no dataset is created. For more information, see [Common errors](#common-errors-importing-a-batch).
-
-4. Choose the file location of the `pizza-with-machine-learned-entity-test.json` file.
-
-5. Name the dataset `pizza test` and select **Done**.
-
-6. Select the **Run** button.
-
-7. After the batch test completes, you can see the following columns:
-
- | Column | Description |
- | -- | - |
- | State | Status of the test. **See results** is only visible after the test is completed. |
- | Name | The name you have given to the test. |
- | Size | Number of tests in this batch test file. |
- | Last Run | Date of last run of this batch test file. |
- | Last result | Number of successful predictions in the test. |
-
-8. To view detailed results of the test, select **See results**.
-
- > [!TIP]
- > * Selecting **Download** will download the same file that you uploaded.
-    > * If the batch test failed, at least one utterance's intent did not match the prediction.
-
-<a name="access-batch-test-result-details-in-a-visualized-view"></a>
-
-### Review batch results for intents
-
-To review the batch test results, select **See results**. The test results show graphically how the test utterances were predicted against the active version.
-
-The batch chart displays four quadrants of results. To the right of the chart is a filter. The filter contains intents and entities. When you select a [section of the chart](#review-batch-results-for-intents) or a point within the chart, the associated utterance(s) display below the chart.
-
-While hovering over the chart, you can use the mouse wheel to enlarge or reduce the display. This is useful when there are many points on the chart clustered tightly together.
-
-The chart is in four quadrants, with two of the sections displayed in red.
-
-1. Select the **ModifyOrder** intent in the filter list. The utterance is predicted as a **True Positive** meaning the utterance successfully matched its positive prediction listed in the batch file.
-
- > [!div class="mx-imgBorder"]
- > ![Utterance successfully matched its positive prediction](./media/luis-tutorial-batch-testing/intent-predicted-true-positive.png)
-
- The green checkmarks in the filters list also indicate the success of the test for each intent. All the other intents are listed with a 1/1 positive score because the utterance was tested against each intent, as a negative test for any intents not listed in the batch test.
-
-1. Select the **Confirmation** intent. This intent isn't listed in the batch test so this is a negative test of the utterance that is listed in the batch test.
-
- > [!div class="mx-imgBorder"]
- > ![Utterance successfully predicted negative for unlisted intent in batch file](./media/luis-tutorial-batch-testing/true-negative-intent.png)
-
- The negative test was successful, as noted with the green text in the filter, and the grid.
-
-### Review batch test results for entities
-
-The ModifyOrder entity, as a machine-learning entity with subentities, displays whether the top-level entity matched and how the subentities are predicted.
-
-1. Select the **ModifyOrder** entity in the filter list then select the circle in the grid.
-
-1. The entity prediction displays below the chart. The display includes solid lines for predictions that match the expectation and dotted lines for predictions that don't match the expectation.
-
- > [!div class="mx-imgBorder"]
- > ![Entity parent successfully predicted in batch file](./media/luis-tutorial-batch-testing/labeled-entity-prediction.png)
-
-<a name="filter-chart-results-by-intent-or-entity"></a>
-
-#### Filter chart results
-
-To filter the chart by a specific intent or entity, select the intent or entity in the right-side filtering panel. The data points and their distribution update in the graph according to your selection.
-
-![Visualized Batch Test Result](./media/luis-how-to-batch-test/filter-by-entity.png)
-
-### Chart result examples
-
-In the chart in the LUIS portal, you can perform the following actions:
-
-#### View single-point utterance data
-
-In the chart, hover over a data point to see the certainty score of its prediction. Select a data point to retrieve its corresponding utterance in the utterances list at the bottom of the page.
-
-![Selected utterance](./media/luis-how-to-batch-test/selected-utterance.png)
--
-<a name="relabel-utterances-and-retrain"></a>
-<a name="false-test-results"></a>
-
-#### View section data
-
-In the four-section chart, select the section name, such as **False Positive** at the top-right of the chart. Below the chart, all utterances in that section display in a list.
-
-![Selected utterances by section](./media/luis-how-to-batch-test/selected-utterances-by-section.png)
-
-In the preceding image, the utterance `switch on` is labeled with the TurnAllOn intent, but received a prediction of the None intent. This indicates that the TurnAllOn intent needs more example utterances in order to make the expected prediction.
-
-The two sections of the chart in red indicate utterances that did not match the expected prediction. These are utterances for which LUIS needs more training.
-
-The two sections of the chart in green did match the expected prediction.
-
-# [REST API](#tab/rest)
-
-## Batch testing using the REST API
-
-LUIS lets you batch test using the LUIS portal and REST API. The endpoints for the REST API are listed below. For information on batch testing using the LUIS portal, see the **LUIS portal** tab. Use the complete URLs below, replacing the placeholder values with your own LUIS prediction key and endpoint.
-
-Remember to add your LUIS key to `Ocp-Apim-Subscription-Key` in the header, and set `Content-Type` to `application/json`.
-
-### Start a batch test
-
-Start a batch test using either an app version ID or a publishing slot. Send a **POST** request to one of the following endpoint formats. Include your batch file in the body of the request.
-
-Publishing slot
-* `<YOUR-PREDICTION-ENDPOINT>/luis/v3.0-preview/apps/<YOUR-APP-ID>/slots/<YOUR-SLOT-NAME>/evaluations`
-
-App version ID
-* `<YOUR-PREDICTION-ENDPOINT>/luis/v3.0-preview/apps/<YOUR-APP-ID>/versions/<YOUR-APP-VERSION-ID>/evaluations`
-
-These endpoints return an operation ID that you use to check the status and get the results.
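-
-For example, a request to start a batch test might look like the following curl sketch. The URL format comes from the list above, the headers are described earlier, and `batch-test.json` is a placeholder for your data set file:
-
-```bash
-# Sketch: start a batch test against a publishing slot.
-# Replace the placeholders with your prediction endpoint, app ID, slot name, and prediction key.
-curl -X POST "<YOUR-PREDICTION-ENDPOINT>/luis/v3.0-preview/apps/<YOUR-APP-ID>/slots/<YOUR-SLOT-NAME>/evaluations" \
-  -H "Ocp-Apim-Subscription-Key: <YOUR-PREDICTION-KEY>" \
-  -H "Content-Type: application/json" \
-  --data @batch-test.json
-```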
--
-### Get the status of an ongoing batch test
-
-Use the operation ID from the batch test you started to get its status from the following endpoint formats:
-
-Publishing slot
-* `<YOUR-PREDICTION-ENDPOINT>/luis/v3.0-preview/apps/<YOUR-APP-ID>/slots/<YOUR-SLOT-ID>/evaluations/<YOUR-OPERATION-ID>/status`
-
-App version ID
-* `<YOUR-PREDICTION-ENDPOINT>/luis/v3.0-preview/apps/<YOUR-APP-ID>/versions/<YOUR-APP-VERSION-ID>/evaluations/<YOUR-OPERATION-ID>/status`
-
-### Get the results from a batch test
-
-Use the operation ID from the batch test you started to get its results from the following endpoint formats:
-
-Publishing slot
-* `<YOUR-PREDICTION-ENDPOINT>/luis/v3.0-preview/apps/<YOUR-APP-ID>/slots/<YOUR-SLOT-ID>/evaluations/<YOUR-OPERATION-ID>/result`
-
-App version ID
-* `<YOUR-PREDICTION-ENDPOINT>/luis/v3.0-preview/apps/<YOUR-APP-ID>/versions/<YOUR-APP-VERSION-ID>/evaluations/<YOUR-OPERATION-ID>/result`
--
-### Batch file of utterances
-
-Submit a batch file of utterances, known as a *data set*, for batch testing. The data set is a JSON-formatted file containing a maximum of 1,000 labeled utterances. You can test up to 10 data sets in an app. If you need to test more, delete a data set and then add a new one. All custom entities in the model appear in the batch test entities filter even if there are no corresponding entities in the batch file data.
-
-The batch file consists of utterances. Each utterance must have an expected intent prediction along with any [machine-learning entities](concepts/entities.md#machine-learned-ml-entity) you expect to be detected.
-
-### Batch syntax template for intents with entities
-
-Use the following template to start your batch file:
-
-```JSON
-{
- "LabeledTestSetUtterances": [
- {
- "text": "play a song",
- "intent": "play_music",
- "entities": [
- {
- "entity": "song_parent",
- "startPos": 0,
-                "endPos": 10,
- "children": [
- {
- "entity": "pre_song",
- "startPos": 0,
- "endPos": 3
- },
- {
- "entity": "song_info",
- "startPos": 5,
-                        "endPos": 10
- }
- ]
- }
- ]
- }
- ]
-}
-
-```
-
-The batch file uses the **startPos** and **endPos** properties to note the beginning and end of an entity. The values are zero-based and should not begin or end on a space. This is different from the query logs, which use startIndex and endIndex properties.
-
-If you do not want to test entities, include the `entities` property and set the value as an empty array, `[]`.
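-
-For example, an intent-only test utterance in the data set might look like the following sketch; the utterance text and intent name are illustrative:
-
-```JSON
-{
-    "LabeledTestSetUtterances": [
-        {
-            "text": "play my favorite song",
-            "intent": "play_music",
-            "entities": []
-        }
-    ]
-}
-```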
-
-### REST API batch test results
-
-There are several objects returned by the API:
-
-* Information about the intents and entities models, such as precision, recall, and F-score.
-* Information about the entities models, such as precision, recall, and F-score, for each entity.
- * Using the `verbose` flag, you can get more information about the entity, such as `entityTextFScore` and `entityTypeFScore`.
-* Provided utterances with the predicted and labeled intent names
-* A list of false positive entities, and a list of false negative entities.
---
-## Next steps
-
-If testing indicates that your LUIS app doesn't recognize the correct intents and entities, you can work to improve your LUIS app's performance by labeling more utterances or adding features.
-
-* [Label suggested utterances with LUIS](luis-how-to-review-endpoint-utterances.md)
-* [Use features to improve your LUIS app's performance](luis-how-to-add-features.md)
cognitive-services Luis How To Collaborate https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/LUIS/luis-how-to-collaborate.md
- Title: Collaborate with others - LUIS-
-description: An app owner can add contributors to the authoring resource. These contributors can modify the model, train, and publish the app.
-------- Previously updated : 05/17/2021---
-# Add contributors to your app
---
-An app owner can add contributors to apps. These contributors can modify the model, train, and publish the app. Once you have [migrated](luis-migration-authoring.md) your account, _contributors_ are managed in the Azure portal for the authoring resource, using the **Access control (IAM)** page. Add a user, using the collaborator's email address and the _contributor_ role.
-
-## Add contributor to Azure authoring resource
-
-You have migrated if your LUIS authoring experience is tied to an Authoring resource on the **Manage -> Azure resources** page in the LUIS portal.
-
-In the Azure portal, find your Language Understanding (LUIS) authoring resource. It has the type `LUIS.Authoring`. In the resource's **Access Control (IAM)** page, add the role of **contributor** for the user that you want to contribute. For detailed steps, see [Assign Azure roles using the Azure portal](../../role-based-access-control/role-assignments-portal.md).
-
-## View the app as a contributor
-
-After you have been added as a contributor, [sign in to the LUIS portal](sign-in-luis-portal.md).
--
-### Users with multiple emails
-
-If you add contributors to a LUIS app, you are specifying the exact email address. While Azure Active Directory (Azure AD) allows a single user to have more than one email account used interchangeably, LUIS requires the user to sign in with the email address specified when adding the contributor.
-
-<a name="owner-and-collaborators"></a>
-
-### Azure Active Directory resources
-
-If you use [Azure Active Directory](../../active-directory/index.yml) (Azure AD) in your organization, Language Understanding (LUIS) needs permission to the information about your users' access when they want to use LUIS. The resources that LUIS requires are minimal.
-
-You see the detailed description of these permissions when you attempt to sign up with an account that has admin consent or does not require admin consent:
-
-* Allows you to sign in to the app with your organizational account and lets the app read your profile. It also allows the app to read basic company information. This gives LUIS permission to read basic profile data, such as user ID, email, and name.
-* Allows the app to see and update your data, even when you are not currently using the app. The permission is required to refresh the access token of the user.
--
-### Azure Active Directory tenant user
-
-LUIS uses standard Azure Active Directory (Azure AD) consent flow.
-
-The tenant admin should work directly with the user who needs access granted to use LUIS in the Azure AD.
-
-* First, the user signs into LUIS, and sees the pop-up dialog needing admin approval. The user contacts the tenant admin before continuing.
-* Second, the tenant admin signs into LUIS, and sees a consent flow pop-up dialog. In this dialog, the admin gives permission for the user. Once the admin accepts the permission, the user is able to continue with LUIS. If the tenant admin will not sign in to LUIS, the admin can access [consent](https://account.activedirectory.windowsazure.com/r#/applications) for LUIS. On this page, the admin can filter the list to items that include the name `LUIS`.
-
-If the tenant admin only wants certain users to use LUIS, there are a couple of possible solutions:
-* Give "admin consent" (consent to all users of the Azure AD), then set "User assignment required" to "Yes" under Enterprise Application Properties, and finally assign or add only the wanted users to the application. With this method, the administrator still provides "admin consent" to the app; however, it's possible to control the users that can access it.
-* A second solution is to use the [Azure AD identity and access management API in Microsoft Graph](/graph/azuread-identity-access-management-concept-overview) to provide consent to each specific user.
-
-Learn more about Azure Active Directory users and consent:
-* [Restrict your app](../../active-directory/develop/howto-restrict-your-app-to-a-set-of-users.md) to a set of users
-
-## Next steps
-
-* Learn [how to use versions](luis-how-to-manage-versions.md) to control your app life cycle.
-* Learn about [authoring resources](luis-how-to-azure-subscription.md) and [adding contributors](luis-how-to-collaborate.md) on that resource.
-* Learn [how to create](luis-how-to-azure-subscription.md) authoring and runtime resources
-* Migrate to the new [authoring resource](luis-migration-authoring.md)
cognitive-services Luis How To Manage Versions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/LUIS/luis-how-to-manage-versions.md
- Title: Manage versions - LUIS-
-description: Versions allow you to build and publish different models. A good practice is to clone the current active model to a different version of the app before making changes to the model.
------- Previously updated : 10/25/2021---
-# Use versions to edit and test without impacting staging or production apps
---
-Versions allow you to build and publish different models. A good practice is to clone the current active model to a different [version](./luis-concept-app-iteration.md) of the app before making changes to the model.
-
-The active version is the version you are editing in the LUIS portal **Build** section with intents, entities, features, and patterns. When using the authoring APIs, you don't need to set the active version because the version-specific REST API calls include the version in the route.
-
-To work with versions, open your app by selecting its name on the **My Apps** page, select **Manage** in the top bar, and then select **Versions** in the left navigation.
-
-The list of versions shows which versions are published, where they are published, and which version is currently active.
-
-## Clone a version
-
-1. Select the version you want to clone then select **Clone** from the toolbar.
-
-2. In the **Clone version** dialog box, type a name for the new version such as "0.2".
-
- ![Clone Version dialog box](./media/luis-how-to-manage-versions/version-clone-version-dialog.png)
-
- > [!NOTE]
-    > A version ID can consist only of letters, digits, or '.' and cannot be longer than 10 characters.
-
- A new version with the specified name is created and set as the active version.
-
-## Set active version
-
-Select a version from the list, then select **Activate** from the toolbar.
-
-## Import version
-
-You can import a `.json` or a `.lu` version of your application.
-
-1. Select **Import** from the toolbar, then select the format.
-
-2. In the **Import new version** pop-up window, enter the new version name (up to 10 characters). You only need to set a version ID if the version in the file already exists in the app.
-
- ![Manage section, versions page, importing new version](./media/luis-how-to-manage-versions/versions-import-pop-up.png)
-
- Once you import a version, the new version becomes the active version.
-
-### Import errors
-
-* Tokenizer errors: If you get a **tokenizer error** when importing, you are trying to import a version that uses a different [tokenizer](luis-language-support.md#custom-tokenizer-versions) than the app currently uses. To fix this, see [Migrating between tokenizer versions](luis-language-support.md#migrating-between-tokenizer-versions).
-
-<a name = "export-version"></a>
-
-## Other actions
-
-* To **delete** a version, select a version from the list, then select **Delete** from the toolbar. Select **Ok**.
-* To **rename** a version, select a version from the list, then select **Rename** from the toolbar. Enter a new name and select **Done**.
-* To **export** a version, select a version from the list, then select **Export app** from the toolbar. Choose JSON or LU to export for a backup or to save in source control. Choose **Export for container** to [use this app in a LUIS container](luis-container-howto.md).
-
-## See also
-
-See the following links to view the REST APIs for importing and exporting applications:
-
-* [Importing applications](https://westus.dev.cognitive.microsoft.com/docs/services/luis-programmatic-apis-v3-0-preview/operations/5892283039e2bb0d9c2805f5)
-* [Exporting applications](https://westus.dev.cognitive.microsoft.com/docs/services/luis-programmatic-apis-v3-0-preview/operations/5890b47c39e2bb052c5b9c40)
cognitive-services Luis How To Model Intent Pattern https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/LUIS/luis-how-to-model-intent-pattern.md
- Title: Patterns add accuracy - LUIS-
-description: Add pattern templates to improve prediction accuracy in Language Understanding (LUIS) applications.
-------- Previously updated : 01/07/2022--
-# How to add patterns to improve prediction accuracy
--
-After a LUIS app receives endpoint utterances, use a [pattern](luis-concept-patterns.md) to improve prediction accuracy for utterances that reveal a pattern in word order and word choice. Patterns use specific [syntax](luis-concept-patterns.md#pattern-syntax) to indicate the location of [entities](luis-concept-entity-types.md), entity [roles](./luis-concept-entity-types.md), and optional text.
-
->[!Note]
->* After you add, edit, remove, or reassign a pattern, [train](luis-how-to-train.md) and [publish](luis-how-to-publish-app.md) your app for your changes to affect endpoint queries.
->* Patterns only include machine-learning entity parents, not subentities.
-
-## Add template utterance using correct syntax
-
-1. Sign in to the [LUIS portal](https://www.luis.ai), and select your **Subscription** and **Authoring resource** to see the apps assigned to that authoring resource.
-1. Open your app by selecting its name on **My Apps** page.
-1. Select **Patterns** in the left panel, under **Improve app performance**.
-
-1. Select the correct intent for the pattern.
-
-1. In the template textbox, type the template utterance and select Enter. When you want to enter the entity name, use the correct pattern entity syntax. Begin the entity syntax with `{`. The list of entities displays. Select the correct entity.
-
- > [!div class="mx-imgBorder"]
- > ![Screenshot of entity for pattern](./media/luis-how-to-model-intent-pattern/patterns-3.png)
-
- If your entity includes a [role](./luis-concept-entity-types.md), indicate the role with a single colon, `:`, after the entity name, such as `{Location:Origin}`. The list of roles for the entities displays in a list. Select the role, and then select Enter.
-
- > [!div class="mx-imgBorder"]
- > ![Screenshot of entity with role](./media/luis-how-to-model-intent-pattern/patterns-4.png)
-
- After you select the correct entity, finish entering the pattern, and then select Enter. When you are done entering patterns, [train](luis-how-to-train.md) your app.
-
- > [!div class="mx-imgBorder"]
- > ![Screenshot of entered pattern with both types of entities](./media/luis-how-to-model-intent-pattern/patterns-5.png)
-
-## Create a pattern.any entity
-
-[Pattern.any](luis-concept-entity-types.md) entities are only valid in [patterns](luis-how-to-model-intent-pattern.md), not intents' example utterances. This type of entity helps LUIS find the end of entities of varying length and word choice. Because this entity is used in a pattern, LUIS knows where the end of the entity is in the utterance template.
-
-1. Sign in to the [LUIS portal](https://www.luis.ai), and select your **Subscription** and **Authoring resource** to see the apps assigned to that authoring resource.
-1. Open your app by selecting its name on **My Apps** page.
-1. From the **Build** section, select **Entities** in the left panel, and then select **+ Create**.
-
-1. In the **Choose an entity type** dialog box, enter the entity name in the **Name** box, and select **Pattern.Any** as the **Type** then select **Create**.
-
- Once you [create a pattern utterance](luis-how-to-model-intent-pattern.md) using this entity, the entity is extracted with a combined machine-learning and text-matching algorithm.
-
-## Adding example utterances as pattern
-
-If you want to add a pattern for an entity, the _easiest_ way is to create the pattern from the Intent details page. This ensures your syntax matches the example utterance.
-
-1. Sign in to the [LUIS portal](https://www.luis.ai), and select your **Subscription** and **Authoring resource** to see the apps assigned to that authoring resource.
-1. Open your app by selecting its name on **My Apps** page.
-1. On the **Intents** list page, select the intent name of the example utterance you want to create a template utterance from.
-1. On the Intent details page, select the row for the example utterance you want to use as the template utterance, then select **+ Add as pattern** from the context toolbar.
-
- > [!div class="mx-imgBorder"]
- > ![Screenshot of selecting example utterance as a template pattern on the Intent details page.](./media/luis-how-to-model-intent-pattern/add-example-utterances-as-pattern-template-utterance-from-intent-detail-page.png)
-
- The utterance must include an entity in order to create a pattern from the utterance.
-
-1. In the pop-up box, select **Done** on the **Confirm patterns** page. You don't need to define the entities' subentities, or features. You only need to list the machine-learning entity.
-
- > [!div class="mx-imgBorder"]
- > ![Screenshot of confirming example utterance as a template pattern on the Intent details page.](./media/luis-how-to-model-intent-pattern/confirm-patterns-from-example-utterance-intent-detail-page.png)
-
-1. If you need to edit the template, such as selecting text as optional, with the `[]` (square) brackets, you need to make this edit from the **Patterns** page.
-
-1. In the navigation bar, select **Train** to train the app with the new pattern.
-
-## Use the OR operator and groups
-
-The following two patterns can be combined into a single pattern using the group "_( )_" and OR "_|_" syntax.
-
-|Intent|Example utterances with optional text and prebuilt entities|
-|--|--|
-|OrgChart-Manager|"who will be {EmployeeListEntity}['s] manager [[in]{datetimeV2}?]"|
-|OrgChart-Manager|"who will be {EmployeeListEntity}['s] manager [[on]{datetimeV2}?]"|
-
-The new template utterance will be:
-
-"who ( was | is | will be ) {EmployeeListEntity}['s] manager [([in]|[on]){datetimeV2}?]" .
-
-This uses a **group** around the required verb tense and the optional 'in' and 'on' with an **or** pipe between them.
--
-## Template utterances
-
-Due to the nature of the Human Resources subject domain, there are a few common ways of asking about employee relationships in organizations, such as the following example utterances:
-
-* "Who does Jill Jones report to?"
-* "Who reports to Jill Jones?"
-
-These utterances are too close to determine the contextual uniqueness of each without providing _many_ utterance examples. By adding a pattern for an intent, LUIS learns common utterance patterns for an intent without needing to supply several more utterance examples.
-
->[!Tip]
->Each utterance can be deleted from the review list. Once deleted, it will not appear in the list again. This is true even if the user enters the same utterance from the endpoint.
-
-Template utterance examples for this intent would include:
-
-|Template utterances examples|syntax meaning|
-|--|--|
-|Who does {EmployeeListEntity} report to[?]|interchangeable: {EmployeeListEntity} <br> ignore: [?]|
-|Who reports to {EmployeeListEntity}[?]|interchangeable: {EmployeeListEntity} <br> ignore: [?]|
-
-The "_{EmployeeListEntity}_" syntax marks the entity location within the template utterance and which entity it is. The optional syntax, "_[?]_", marks words or [punctuation](luis-reference-application-settings.md) that is optional. LUIS matches the utterance, ignoring the optional text inside the brackets.
-
-> [!IMPORTANT]
-> While the syntax looks like a regular expression, it is not a regular expression. Only the curly bracket, "_{ }_", and square bracket, "_[ ]_", syntax is supported. They can be nested up to two levels.
-
-For a pattern to be matched to an utterance, _first_ the entities within the utterance must match the entities in the template utterance. This means the entities need to have enough examples in example utterances with a high degree of prediction before patterns with entities are successful. The template doesn't help predict entities, however. The template only predicts intents.
-
-> [!NOTE]
-> While patterns allow you to provide fewer example utterances, if the entities are not detected, the pattern will not match.
-
-## Add phrase list as a feature
-[Features](concepts/patterns-features.md) help LUIS by providing hints that certain words and phrases are part of an app domain vocabulary.
-
-1. Sign in to the [LUIS portal](https://www.luis.ai/), and select your **Subscription** and **Authoring resource** to see the apps assigned to that authoring resource.
-2. Open your app by selecting its name on **My Apps** page.
-3. Select **Build**, then select **Features** in your app's left panel.
-4. On the **Features** page, select **+ Create**.
-5. In the **Create new phrase list feature** dialog box, enter a name such as Pizza Toppings. In the **Value** box, enter examples of toppings, such as _Ham_. You can type one value at a time, or a set of values separated by commas, and then press **Enter**.
--
-6. Keep the **These values are interchangeable** selector enabled if the phrases can be used interchangeably. The interchangeable phrase list feature serves as a list of synonyms for training. Non-interchangeable phrase lists serve as separate features for training, meaning that features are similar but the intent changes when you swap phrases.
-7. The phrase list can apply to the entire app with the **Global** setting, or to a specific model (intent or entity). If you create the phrase list as a _feature_ from an intent or entity, the toggle is not set to global. In this case, the toggle specifies that the feature is local only to that model, therefore, _not global_ to the application.
-8. Select **Done**. The new feature is added to the **ML Features** page.
-
-> [!Note]
-> * You can delete, or deactivate a phrase list from the contextual toolbar on the **ML Features** page.
-> * A phrase list should be applied to the intent or entity it is intended to help but there may be times when a phrase list should be applied to the entire app as a **Global** feature. On the **Machine Learning** Features page, select the phrase list, then select **Make global** in the top contextual toolbar.
--
-## Add entity as a feature to an intent
-
-To add an entity as a feature to an intent, select the intent from the Intents page, then select **+ Add feature** above the contextual toolbar. The list will include all phrase lists and entities that can be applied as features.
-
-To add an entity as a feature to another entity, you can add the feature either on the Intent detail page using the [Entity Palette](/azure/cognitive-services/luis/label-entity-example-utterance#adding-entity-as-a-feature-from-the-entity-palette) or you can add the feature on the Entity detail page.
--
-## Next steps
-
-* [Train and test](luis-how-to-train.md) your app after improvement.
cognitive-services Luis How To Use Dashboard https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/LUIS/luis-how-to-use-dashboard.md
- Title: Dashboard - Language Understanding - LUIS-
-description: Fix intents and entities with your trained app's dashboard. The dashboard displays overall app information, with highlights of intents that should be fixed.
------- Previously updated : 01/07/2022---
-# How to use the Dashboard to improve your app
---
-Find and fix problems with your trained app's intents when you are using example utterances. The dashboard displays overall app information, with highlights of intents that should be fixed.
-
-Reviewing dashboard analysis is an iterative process; repeat it as you change and improve your model.
-
-This page will not have relevant analysis for apps that do not have any example utterances in the intents, known as _pattern-only_ apps.
-
-## What issues can be fixed from dashboard?
-
-The three problems addressed in the dashboard are:
-
-|Issue|Chart color|Explanation|
-|--|--|--|
-|Data imbalance|-|This occurs when the quantity of example utterances varies significantly. All intents need to have _roughly_ the same number of example utterances - except the None intent. It should only have 10%-15% of the total quantity of utterances in the app.<br><br> If the data is imbalanced but the intent accuracy is above a certain threshold, this imbalance is not reported as an issue.<br><br>**Start with this issue - it may be the root cause of the other issues.**|
-|Unclear predictions|Orange|This occurs when the top intent and the next intent's scores are close enough that they may flip on the next training, due to [negative sampling](luis-how-to-train.md#train-with-all-data) or more example utterances added to intent. |
-|Incorrect predictions|Red|This occurs when an example utterance is not predicted for the labeled intent (the intent it is in).|
-
-Correct predictions are represented with the color blue.
-
-The dashboard shows these issues and tells you which intents are affected and suggests what you should do to improve the app.
-
-## Before app is trained
-
-Before you train the app, the dashboard does not contain any suggestions for fixes. Train your app to see these suggestions.
-
-## Check your publishing status
-
-The **Publishing status** card contains information about the active version's last publish.
-
-Check that the active version is the version you want to fix.
-
-![Dashboard shows app's external services, published regions, and aggregated endpoint hits.](./media/luis-how-to-use-dashboard/analytics-card-1-shows-app-summary-and-endpoint-hits.png)
-
-This also shows any external services, published regions, and aggregated endpoint hits.
-
-## Review training evaluation
-
-The **Training evaluation** card contains the aggregated summary of your app's overall accuracy by area. The score indicates intent quality.
-
-![The Training evaluation card contains the first area of information about your app's overall accuracy.](./media/luis-how-to-use-dashboard/analytics-card-2-shows-app-overall-accuracy.png)
-
-The chart indicates the correctly predicted intents and the problem areas with different colors. As you improve the app with the suggestions, this score increases.
-
-The suggested fixes are separated out by problem type and are the most significant for your app. If you would prefer to review and fix issues per intent, use the **[Intents with errors](#intents-with-errors)** card at the bottom of the page.
-
-Each problem area has intents that need to be fixed. When you select the intent name, the **Intent** page opens with a filter applied to the utterances. This filter allows you to focus on the utterances that are causing the problem.
-
-### Compare changes across versions
-
-Create a new version before making changes to the app. In the new version, make the suggested changes to the intent's example utterances, then train again. On the Dashboard page's **Training evaluation** card, use the **Show change from trained version** to compare the changes.
-
-![Compare changes across versions](./media/luis-how-to-use-dashboard/compare-improvement-across-versions.png)
-
-### Fix version by adding or editing example utterances and retraining
-
-The primary method of fixing your app will be to add or edit example utterances and retrain. The new or changed utterances need to follow guidelines for [varied utterances](concepts/utterances.md).
-
-Adding example utterances should be done by someone who:
-
-* has a high degree of understanding of what utterances are in the different intents.
-* knows how utterances in one intent may be confused with another intent.
-* is able to decide if two intents, which are frequently confused with each other, should be collapsed into a single intent. If this is the case, the different data must be pulled out with entities.
-
-### Patterns and phrase lists
-
-The analytics page doesn't indicate when to use [patterns](luis-concept-patterns.md) or [phrase lists](concepts/patterns-features.md). If you do add them, it can help with incorrect or unclear predictions but won't help with data imbalance.
-
-### Review data imbalance
-
-Start with this issue - it may be the root cause of the other issues.
-
-The **data imbalance** intent list shows intents that need more utterances in order to correct the data imbalance.
-
-**To fix this issue**:
-
-* Add more utterances to the intent then train again.
-
-Do not add utterances to the None intent unless that is suggested on the dashboard.
-
-> [!Tip]
-> Use the third section on the page, **Utterances per intent** with the **Utterances (number)** setting, as a quick visual guide of which intents need more utterances.
- ![Use 'Utterances (number)' to find intents with data imbalance.](./media/luis-how-to-use-dashboard/predictions-per-intent-number-of-utterances.png)
-
-### Review incorrect predictions
-
-The **incorrect prediction** intent list shows intents that have utterances, which are used as examples for a specific intent, but are predicted for different intents.
-
-**To fix this issue**:
-
-* Edit utterances to be more specific to the intent and train again.
-* Combine intents if utterances are too closely aligned and train again.
-
-### Review unclear predictions
-
-The **unclear prediction** intent list shows intents with utterances whose prediction scores are so close to their nearest rival's that the top intent for the utterance may change on the next training, due to [negative sampling](luis-how-to-train.md#train-with-all-data).
-
-**To fix this issue**:
-
-* Edit utterances to be more specific to the intent and train again.
-* Combine intents if utterances are too closely aligned and train again.
-
-## Utterances per intent
-
-This card shows the overall app health across the intents. As you fix intents and retrain, continue to glance at this card for issues.
-
-The following chart shows a well-balanced app with almost no issues to fix.
-
-![The following chart shows a well-balanced app with almost no issues to fix.](./media/luis-how-to-use-dashboard/utterance-per-intent-shows-data-balance.png)
-
-The following chart shows a poorly balanced app with many issues to fix.
-
-![Screenshot shows Predictions per intent with several Unclear or Incorrectly predicted results.](./media/luis-how-to-use-dashboard/utterance-per-intent-shows-data-imbalance.png)
-
-Hover over each intent's bar to get information about the intent.
-
-![Screenshot shows Predictions per intent with details of Unclear or Incorrectly predicted results.](./media/luis-how-to-use-dashboard/utterances-per-intent-with-details-of-errors.png)
-
-Use the **Sort by** feature to arrange the intents by issue type so you can focus on the most problematic intents with that issue.
-
-## Intents with errors
-
-This card allows you to review issues for a specific intent. The default view of this card is the most problematic intents so you know where to focus your efforts.
-
-![The Intents with errors card allows you to review issues for a specific intent. The card is filtered to the most problematic intents, by default, so you know where to focus your efforts.](./media/luis-how-to-use-dashboard/most-problematic-intents-with-errors.png)
-
-The top donut chart shows the issues with the intent across the three problem types. If there are issues in the three problem types, each type has its own chart below, along with any rival intents.
-
-### Filter intents by issue and percentage
-
-This section of the card allows you to find example utterances that are falling outside your error threshold. Ideally you want correct predictions to be significant. That percentage is business and customer driven.
-
-Determine the threshold percentages that you are comfortable with for your business.
-
-The filter allows you to find intents with specific issue:
-
-|Filter|Suggested percentage|Purpose|
-|--|--|--|
-|Most problematic intents|-|**Start here** - Fixing the utterances in this intent will improve the app more than other fixes.|
-|Correct predictions below|60%|This is the percentage of utterances in the selected intent that are correct but have a confidence score below the threshold. |
-|Unclear predictions above|15%|This is the percentage of utterances in the selected intent that are confused with the nearest rival intent.|
-|Incorrect predictions above|15%|This is the percentage of utterances in the selected intent that are incorrectly predicted. |
-
-### Correct prediction threshold
-
-What is a confident prediction confidence score to you? At the beginning of app development, 60% may be your target. Use the **Correct predictions below** with the percentage of 60% to find any utterances in the selected intent that need to be fixed.
-
-### Unclear or incorrect prediction threshold
-
-These two filters allow you to find utterances in the selected intent beyond your threshold. You can think of these two percentages as error percentages. If you are comfortable with a 10-15% error rate for predictions, set the filter threshold to 15% to find all utterances above this value.
-
-## Next steps
-
-* [Manage your Azure resources](luis-how-to-azure-subscription.md)
cognitive-services Luis Language Support https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/LUIS/luis-language-support.md
- Title: Language support - LUIS-
-description: LUIS has a variety of features within the service. Not all features are at the same language parity. Make sure the features you are interested in are supported in the language culture you are targeting. A LUIS app is culture-specific and cannot be changed once it is set.
------- Previously updated : 01/18/2022---
-# Language and region support for LUIS
---
-LUIS has a variety of features within the service. Not all features are at the same language parity. Make sure the features you are interested in are supported in the language culture you are targeting. A LUIS app is culture-specific and cannot be changed once it is set.
-
-## Multilingual LUIS apps
-
-If you need a multilingual LUIS client application such as a chatbot, you have a few options. If LUIS supports all the languages, you develop a LUIS app for each language. Each LUIS app has a unique app ID, and endpoint log. If you need to provide language understanding for a language LUIS does not support, you can use the [Translator service](../translator/translator-overview.md) to translate the utterance into a supported language, submit the utterance to the LUIS endpoint, and receive the resulting scores.
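
The following is a minimal sketch of the translate-then-predict flow described above, using Node.js 18+ global `fetch`. The resource names, keys, and app ID are placeholders you must supply; the Translator request shape and the LUIS V3 prediction URL shape are assumptions based on their public documentation, so verify them against the API references before relying on this.

```nodejs
// Hypothetical placeholders - replace with your own values.
const translatorKey = "<TRANSLATOR-KEY>";
const translatorRegion = "<TRANSLATOR-REGION>";
const luisKey = "<LUIS-PREDICTION-KEY>";
const luisEndpoint = "https://<LUIS-RESOURCE-NAME>.cognitiveservices.azure.com";
const appId = "<LUIS-APP-ID>";

async function predictUnsupportedLanguage(utterance) {
  // 1. Translate the utterance into a LUIS-supported language (English here).
  const translateResponse = await fetch(
    "https://api.cognitive.microsofttranslator.com/translate?api-version=3.0&to=en",
    {
      method: "POST",
      headers: {
        "Ocp-Apim-Subscription-Key": translatorKey,
        "Ocp-Apim-Subscription-Region": translatorRegion,
        "Content-Type": "application/json"
      },
      body: JSON.stringify([{ Text: utterance }])
    }
  );
  const translated = (await translateResponse.json())[0].translations[0].text;

  // 2. Submit the translated text to the LUIS V3 prediction endpoint and return the scores.
  const luisResponse = await fetch(
    `${luisEndpoint}/luis/prediction/v3.0/apps/${appId}/slots/production/predict` +
      `?subscription-key=${luisKey}&query=${encodeURIComponent(translated)}`
  );
  return luisResponse.json(); // { query, prediction: { topIntent, intents, entities } }
}
```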
-
-> [!NOTE]
-> A newer version of Language Understanding capabilities is now available as part of Azure Cognitive Service for Language. For more information, see [Azure Cognitive Service for Language Documentation](../language-service/index.yml). For language understanding capabilities that support multiple languages within the Language Service, see [Conversational Language Understanding](../language-service/conversational-language-understanding/concepts/multiple-languages.md).
-
-## Languages supported
-
-LUIS understands utterances in the following languages:
-
-| Language |Locale | Prebuilt domain | Prebuilt entity | Phrase list recommendations | [Sentiment analysis](../language-service/sentiment-opinion-mining/overview.md) and [key phrase extraction](../language-service/key-phrase-extraction/overview.md)|
-|--|--|:--:|:--:|:--:|:--:|
-| Arabic (preview - modern standard Arabic) |`ar-AR`|-|-|-|-|
-| *[Chinese](#chinese-support-notes) |`zh-CN` | ✔ | ✔ |✔|-|
-| Dutch |`nl-NL` |✔|-|-|✔|
-| English (United States) |`en-US` | ✔ | ✔ |✔|✔|
-| English (UK) |`en-GB` | ✔ | ✔ |✔|✔|
-| French (Canada) |`fr-CA` |-|-|-|✔|
-| French (France) |`fr-FR` |✔| ✔ |✔ |✔|
-| German |`de-DE` |✔| ✔ |✔ |✔|
-| Gujarati (preview) | `gu-IN`|-|-|-|-|
-| Hindi (preview) | `hi-IN`|-|✔|-|-|
-| Italian |`it-IT` |✔| ✔ |✔|✔|
-| *[Japanese](#japanese-support-notes) |`ja-JP` |✔| ✔ |✔|Key phrase only|
-| Korean |`ko-KR` |✔|-|-|Key phrase only|
-| Marathi (preview) | `mr-IN`|-|-|-|-|
-| Portuguese (Brazil) |`pt-BR` |✔| ✔ |✔ |not all sub-cultures|
-| Spanish (Mexico)|`es-MX` |-|✔|✔|✔|
-| Spanish (Spain) |`es-ES` |✔| ✔ |✔|✔|
-| Tamil (preview) | `ta-IN`|-|-|-|-|
-| Telugu (preview) | `te-IN`|-|-|-|-|
-| Turkish | `tr-TR` |✔|✔|-|Sentiment only|
----
-Language support varies for [prebuilt entities](luis-reference-prebuilt-entities.md) and [prebuilt domains](luis-reference-prebuilt-domains.md).
--
-### *Japanese support notes
-
- - でございます is not the same as です.
- - です is not the same as だ.
--
-### Speech API supported languages
-See Speech [Supported languages](../speech-service/speech-to-text.md) for Speech dictation mode languages.
-
-### Bing Spell Check supported languages
-See Bing Spell Check [Supported languages](../bing-spell-check/language-support.md) for a list of supported languages and status.
-
-## Rare or foreign words in an application
-In the `en-us` culture, LUIS learns to distinguish most English words, including slang. In the `zh-cn` culture, LUIS learns to distinguish most Chinese characters. If you use a rare word in `en-us` or character in `zh-cn`, and you see that LUIS seems unable to distinguish that word or character, you can add that word or character to a [phrase-list feature](luis-how-to-add-features.md). For example, words outside of the culture of the application -- that is, foreign words -- should be added to a phrase-list feature.
-
-<!--This phrase list should be marked non-interchangeable, to indicate that the set of rare words forms a class that LUIS should learn to recognize, but they are not synonyms or interchangeable with each other.-->
-
-### Hybrid languages
-Hybrid languages combine words from two cultures such as English and Chinese. These languages are not supported in LUIS because an app is based on a single culture.
-
-## Tokenization
-To perform machine learning, LUIS breaks an utterance into [tokens](luis-glossary.md#token) based on culture.
-
-|Language| every space or special character | character level|compound words
-|--|:--:|:--:|:--:|
-|Arabic|✔|||
-|Chinese||✔||
-|Dutch|✔||✔|
-|English (en-us)|✔|||
-|English (en-GB)|✔|||
-|French (fr-FR)|✔|||
-|French (fr-CA)|✔|||
-|German|✔||✔|
-|Gujarati|✔|||
-|Hindi|✔|||
-|Italian|✔|||
-|Japanese|||✔|
-|Korean||✔||
-|Marathi|✔|||
-|Portuguese (Brazil)|✔|||
-|Spanish (es-ES)|✔|||
-|Spanish (es-MX)|✔|||
-|Tamil|✔|||
-|Telugu|✔|||
-|Turkish|✔|||
--
-### Custom tokenizer versions
-
-The following cultures have custom tokenizer versions:
-
-|Culture|Version|Purpose|
-|--|--|--|
-|German<br>`de-de`|1.0.0|Tokenizes words by splitting them using a machine learning-based tokenizer that tries to break down composite words into their single components.<br>If a user enters `Ich fahre einen krankenwagen` as an utterance, it is turned into `Ich fahre einen kranken wagen`, allowing `kranken` and `wagen` to be marked independently as different entities.|
-|German<br>`de-de`|1.0.2|Tokenizes words by splitting them on spaces.<br> If a user enters `Ich fahre einen krankenwagen` as an utterance, it remains a single token. Thus `krankenwagen` is marked as a single entity. |
-|Dutch<br>`nl-nl`|1.0.0|Tokenizes words by splitting them using a machine learning-based tokenizer that tries to break down composite words into their single components.<br>If a user enters `Ik ga naar de kleuterschool` as an utterance, it is turned into `Ik ga naar de kleuter school`, allowing `kleuter` and `school` to be marked independently as different entities.|
-|Dutch<br>`nl-nl`|1.0.1|Tokenizes words by splitting them on spaces.<br> If a user enters `Ik ga naar de kleuterschool` as an utterance, it remains a single token. Thus `kleuterschool` is marked as a single entity. |
--
-### Migrating between tokenizer versions
-<!--
-Your first choice is to change the tokenizer version in the app file, then import the version. This action changes how the utterances are tokenized but allows you to keep the same app ID.
-
-Tokenizer JSON for 1.0.0. Notice the property value for `tokenizerVersion`.
-
-```JSON
-{
- "luis_schema_version": "3.2.0",
- "versionId": "0.1",
- "name": "german_app_1.0.0",
- "desc": "",
- "culture": "de-de",
- "tokenizerVersion": "1.0.0",
- "intents": [
- {
- "name": "i1"
- },
- {
- "name": "None"
- }
- ],
- "entities": [
- {
- "name": "Fahrzeug",
- "roles": []
- }
- ],
- "composites": [],
- "closedLists": [],
- "patternAnyEntities": [],
- "regex_entities": [],
- "prebuiltEntities": [],
- "model_features": [],
- "regex_features": [],
- "patterns": [],
- "utterances": [
- {
- "text": "ich fahre einen krankenwagen",
- "intent": "i1",
- "entities": [
- {
- "entity": "Fahrzeug",
- "startPos": 23,
- "endPos": 27
- }
- ]
- }
- ],
- "settings": []
-}
-```
-
-Tokenizer JSON for version 1.0.1. Notice the property value for `tokenizerVersion`.
-
-```JSON
-{
- "luis_schema_version": "3.2.0",
- "versionId": "0.1",
- "name": "german_app_1.0.1",
- "desc": "",
- "culture": "de-de",
- "tokenizerVersion": "1.0.1",
- "intents": [
- {
- "name": "i1"
- },
- {
- "name": "None"
- }
- ],
- "entities": [
- {
- "name": "Fahrzeug",
- "roles": []
- }
- ],
- "composites": [],
- "closedLists": [],
- "patternAnyEntities": [],
- "regex_entities": [],
- "prebuiltEntities": [],
- "model_features": [],
- "regex_features": [],
- "patterns": [],
- "utterances": [
- {
- "text": "ich fahre einen krankenwagen",
- "intent": "i1",
- "entities": [
- {
- "entity": "Fahrzeug",
- "startPos": 16,
- "endPos": 27
- }
- ]
- }
- ],
- "settings": []
-}
-```
--->
-Tokenization happens at the app level. There is no support for version-level tokenization.
-
-[Import the file as a new app](luis-how-to-start-new-app.md), instead of a version. This action means the new app has a different app ID but uses the tokenizer version specified in the file.
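
A small sketch of preparing an exported app file for that import: it rewrites the `tokenizerVersion` property shown in the commented-out examples above and saves the result for import as a new app. The file names are hypothetical.

```nodejs
// Rewrite tokenizerVersion in an exported LUIS app file before importing it as a new app.
const fs = require("fs");

const app = JSON.parse(fs.readFileSync("german_app_1.0.0.json", "utf8"));
app.tokenizerVersion = "1.0.2"; // switch to space-based tokenization for de-de
app.name = "german_app_1.0.2";  // give the new app a distinct name

fs.writeFileSync("german_app_1.0.2.json", JSON.stringify(app, null, 2));
```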
cognitive-services Luis Limits https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/LUIS/luis-limits.md
- Title: Limits - LUIS
-description: This article contains the known limits of Azure Cognitive Services Language Understanding (LUIS). LUIS has several limits areas. Model limit controls intents, entities, and features in LUIS. Quota limits based on key type. Keyboard combination controls the LUIS website.
--- Previously updated : 03/21/2022--
-# Limits for your LUIS model and keys
---
-LUIS has several limit areas. The first is the [model limit](#model-limits), which controls intents, entities, and features in LUIS. The second area is [quota limits](#resource-usage-and-limits) based on resource type. A third area of limits is the [keyboard combination](#keyboard-controls) for controlling the LUIS website. A fourth area is the [world region mapping](luis-reference-regions.md) between the LUIS authoring website and the LUIS [endpoint](luis-glossary.md#endpoint) APIs.
-
-## Model limits
-
-If your app exceeds the LUIS model limits, consider using a [LUIS dispatch](luis-concept-enterprise.md#dispatch-tool-and-model) app or using a [LUIS container](luis-container-howto.md).
-
-| Area | Limit |
-| |: |
-| [App name][luis-get-started-create-app] | \*Default character max |
-| Applications | 500 applications per Azure authoring resource |
-| [Batch testing][batch-testing] | 10 datasets, 1000 utterances per dataset |
-| Explicit list | 50 per application |
-| External entities | no limits |
-| [Intents][intents] | 500 per application: 499 custom intents, and the required _None_ intent.<br>A [Dispatch-based](https://aka.ms/dispatch-tool) application has a corresponding limit of 500 dispatch sources. |
-| [List entities](concepts/entities.md) | Parent: 50, child: 20,000 items. Canonical name is \*default character max. Synonym values have no length restriction. |
-| [machine-learning entities + roles](concepts/entities.md):<br> composite,<br>simple,<br>entity role | A limit of either 100 parent entities or 330 entities, whichever limit the user hits first. A role counts as an entity for the purpose of this limit. For example, a composite entity that contains a simple entity with 2 roles counts as 1 composite + 1 simple + 2 roles = 4 of the 330 entities.<br>Subentities can be nested up to 5 levels, with a maximum of 20 children per level. |
-| Model as a feature | The maximum number of models that can be used as a feature for a specific model is 10. The maximum number of phrase lists that can be used as a feature for a specific model is 10. |
-| [Preview - Dynamic list entities](./luis-migration-api-v3.md) | 2 lists of \~1k per query prediction endpoint request |
-| [Patterns](luis-concept-patterns.md) | 500 patterns per application.<br>Maximum length of pattern is 400 characters.<br>3 Pattern.any entities per pattern<br>Maximum of 2 nested optional texts in pattern |
-| [Pattern.any](concepts/entities.md) | 100 per application, 3 pattern.any entities per pattern |
-| [Phrase list][phrase-list] | 500 phrase lists. 10 global phrase lists due to the model as a feature limit. Non-interchangeable phrase list has max of 5,000 phrases. Interchangeable phrase list has max of 50,000 phrases. Maximum number of total phrases per application of 500,000 phrases. |
-| [Prebuilt entities](./howto-add-prebuilt-models.md) | no limit |
-| [Regular expression entities](concepts/entities.md) | 20 entities<br>500 character max. per regular expression entity pattern |
-| [Roles](concepts/entities.md) | 300 roles per application. 10 roles per entity |
-| [Utterance][utterances] | 500 characters<br><br>If you have text longer than this character limit, you need to segment the utterance prior to input to LUIS, and you will receive individual intent responses per segment. There are obvious breaks you can work with, such as punctuation marks and long pauses in speech. A segmentation sketch follows this table. |
-| [Utterance examples][utterances] | 15,000 per application - there is no limit on the number of utterances per intent<br><br>If you need to train the application with more examples, use a [dispatch](https://github.com/Microsoft/botbuilder-tools/tree/master/packages/Dispatch) model approach. You train individual LUIS apps (known as child apps to the parent dispatch app) with one or more intents and then train a dispatch app that samples from each child LUIS app's utterances to direct the prediction request to the correct child app. |
-| [Versions](./luis-concept-app-iteration.md) | 100 versions per application |
-| [Version name][luis-how-to-manage-versions] | 128 characters |
-
-\*Default character max is 50 characters.
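
The following is a minimal sketch of the utterance segmentation mentioned in the **Utterance** row above: it splits long text at sentence-ending punctuation so that each segment stays under the 500-character limit before being sent to LUIS. The splitting heuristic is an assumption; adapt it to your own content.

```nodejs
// Split text into segments of at most maxLength characters, preferring sentence boundaries.
// Note: a single sentence longer than maxLength is kept whole in this sketch.
function segmentUtterance(text, maxLength = 500) {
  const sentences = text.match(/[^.!?]+[.!?]*\s*/g) || [text];
  const segments = [];
  let current = "";

  for (const sentence of sentences) {
    if ((current + sentence).length > maxLength && current.length > 0) {
      segments.push(current.trim());
      current = "";
    }
    current += sentence;
  }
  if (current.trim().length > 0) {
    segments.push(current.trim());
  }
  return segments; // send each segment to LUIS and handle one intent response per segment
}
```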
-
-## Name uniqueness
-
-Object names must be unique when compared to other objects of the same level.
-
-| Objects | Restrictions |
-| | |
-| Intent, entity | All intent and entity names must be unique in a version of an app. |
-| ML entity components | All machine-learning entity components (child entities) must be unique, within that entity for components at the same level. |
-| Features | All named features, such as phrase lists, must be unique within a version of an app. |
-| Entity roles | All roles on an entity or entity component must be unique when they are at the same entity level (parent, child, grandchild, etc.). |
-
-## Object naming
-
-Do not use the following characters in the following names.
-
-| Object | Exclude characters |
-| | |
-| Intent, entity, and role names | `:`, `$`, `&`, `%`, `*`, `(`, `)`, `+`, `?`, `~` |
-| Version name | `\`, `/`, `:`, `?`, `&`, `=`, `*`, `+`, `(`, `)`, `%`, `@`, `$`, `~`, `!`, `#` |
-
-## Resource usage and limits
-
-Language Understanding (LUIS) has separate resources: one type for authoring, and one type for querying the prediction endpoint. To learn more about the differences between key types, see [Authoring and query prediction endpoint keys in LUIS](luis-how-to-azure-subscription.md).
-
-### Authoring resource limits
-
-Use the _kind_ `LUIS.Authoring` when filtering resources in the Azure portal. LUIS limits you to 500 applications per Azure authoring resource.
-
-| Authoring resource | Authoring TPS |
-| | |
-| F0 - Free tier | 1 million/month, 5/second |
-
-* TPS = Transactions per second
-
-[Learn more about pricing.][pricing]
-
-### Query prediction resource limits
-
-Use the _kind_ `LUIS` when filtering resources in the Azure portal. The LUIS query prediction endpoint resource, used at runtime, is only valid for endpoint queries.
-
-| Query Prediction resource | Query TPS |
-| | |
-| F0 - Free tier | 10 thousand/month, 5/second |
-| S0 - Standard tier | 50/second |
-
-### Sentiment analysis
-
-[Sentiment analysis integration](luis-how-to-publish-app.md#enable-sentiment-analysis), which provides sentiment information, is provided without requiring another Azure resource.
-
-### Speech integration
-
-[Speech integration](../speech-service/how-to-recognize-intents-from-speech-csharp.md) provides 1 thousand endpoint requests per unit cost.
-
-[Learn more about pricing.][pricing]
-
-## Keyboard controls
-
-| Keyboard input | Description |
-| | |
-| Control+E | switches between tokens and entities on utterances list |
-
-## Website sign-in time period
-
-Your sign-in access is valid for **60 minutes**. After this time period, you will get an error and need to sign in again.
-
-<!-- TBD: fix this link -->
-[BATCH-TESTING]: ./luis-interactive-test.md#batch-testing
-[INTENTS]: ./luis-concept-intent.md
-[LUIS-GET-STARTED-CREATE-APP]: ./luis-get-started-create-app.md
-[LUIS-HOW-TO-MANAGE-VERSIONS]: ./luis-how-to-manage-versions.md
-[PHRASE-LIST]: ./concepts/patterns-features.md
-[PRICING]: https://azure.microsoft.com/pricing/details/cognitive-services/language-understanding-intelligent-services/
-[UTTERANCES]: ./concepts/utterances.md
cognitive-services Luis Migration Api V1 To V2 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/LUIS/luis-migration-api-v1-to-v2.md
- Title: v1 to v2 API Migration-
-description: The version 1 endpoint and authoring Language Understanding APIs are deprecated. Use this guide to understand how to migrate to version 2 endpoint and authoring APIs.
------- Previously updated : 04/02/2019---
-# API v1 to v2 Migration guide for LUIS apps
--
-The version 1 [endpoint](https://aka.ms/v1-endpoint-api-docs) and [authoring](https://aka.ms/v1-authoring-api-docs) APIs are deprecated. Use this guide to understand how to migrate to version 2 [endpoint](https://go.microsoft.com/fwlink/?linkid=2092356) and [authoring](https://go.microsoft.com/fwlink/?linkid=2092087) APIs.
-
-## New Azure regions
-LUIS has new [regions](./luis-reference-regions.md) provided for the LUIS APIs. LUIS provides a different portal for region groups. The application must be authored in the same region you expect to query. Applications do not automatically migrate regions. You export the app from one region then import into another for it to be available in a new region.
-
-## Authoring route changes
-The authoring API route changed from using the **prog** route to using the **api** route.
--
-| version | route |
-|--|--|
-|1|/luis/v1.0/**prog**/apps|
-|2|/luis/**api**/v2.0/apps|
--
-## Endpoint route changes
-The endpoint API has new query string parameters as well as a different response. If the verbose flag is true, all intents, regardless of score, are returned in an array named intents, in addition to the topScoringIntent.
-
-| version | GET route |
-|--|--|
-|1|/luis/v1/application?ID={appId}&q={q}|
-|2|/luis/v2.0/apps/{appId}?q={q}[&timezoneOffset][&verbose][&spellCheck][&staging][&bing-spell-check-subscription-key][&log]|
--
-v1 endpoint success response:
-```json
-{
- "odata.metadata":"https://dialogice.cloudapp.net/odata/$metadata#domain","value":[
- {
- "id":"bccb84ee-4bd6-4460-a340-0595b12db294","q":"turn on the camera","response":"[{\"intent\":\"OpenCamera\",\"score\":0.976928055},{\"intent\":\"None\",\"score\":0.0230718572}]"
- }
- ]
-}
-```
-
-v2 endpoint success response:
-```json
-{
- "query": "forward to frank 30 dollars through HSBC",
- "topScoringIntent": {
- "intent": "give",
- "score": 0.3964121
- },
- "entities": [
- {
- "entity": "30",
- "type": "builtin.number",
- "startIndex": 17,
- "endIndex": 18,
- "resolution": {
- "value": "30"
- }
- },
- {
- "entity": "frank",
- "type": "frank",
- "startIndex": 11,
- "endIndex": 15,
- "score": 0.935219169
- },
- {
- "entity": "30 dollars",
- "type": "builtin.currency",
- "startIndex": 17,
- "endIndex": 26,
- "resolution": {
- "unit": "Dollar",
- "value": "30"
- }
- },
- {
- "entity": "hsbc",
- "type": "Bank",
- "startIndex": 36,
- "endIndex": 39,
- "resolution": {
- "values": [
- "BankeName"
- ]
- }
- }
- ]
-}
-```
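
A hedged sketch of calling the V2 GET route shown above and reading `topScoringIntent` from a response shaped like the JSON just shown. The region, app ID, and key are placeholders; this assumes Node.js 18+ global `fetch`.

```nodejs
// Query the V2 prediction endpoint (route shape from the table above) and read the top intent.
const region = "<REGION>";          // for example, westus
const appId = "<LUIS-APP-ID>";
const endpointKey = "<ENDPOINT-KEY>";

async function queryV2(utterance) {
  const url = `https://${region}.api.cognitive.microsoft.com/luis/v2.0/apps/${appId}` +
    `?verbose=true&subscription-key=${endpointKey}&q=${encodeURIComponent(utterance)}`;
  const response = await fetch(url);
  const result = await response.json();

  // topScoringIntent is always present; intents lists all intents when verbose=true.
  console.log(result.topScoringIntent.intent, result.topScoringIntent.score);
  return result;
}
```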
-
-## Key management no longer in API
-The subscription endpoint key APIs are deprecated, returning 410 GONE.
-
-| version | route |
-|--|--|
-|1|/luis/v1.0/prog/subscriptions|
-|1|/luis/v1.0/prog/subscriptions/{subscriptionKey}|
-
-Azure [endpoint keys](luis-how-to-azure-subscription.md) are generated in the Azure portal. You assign the key to a LUIS app on the **[Publish](luis-how-to-azure-subscription.md)** page. You do not need to know the actual key value. LUIS uses the subscription name to make the assignment.
-
-## New versioning route
-The v2 model is now contained in a [version](luis-how-to-manage-versions.md). A version name is 10 characters in the route. The default version is "0.1".
-
-| version | route |
-|--|--|
-|1|/luis/v1.0/**prog**/apps/{appId}/entities|
-|2|/luis/**api**/v2.0/apps/{appId}/**versions**/{versionId}/entities|
-
-## Metadata renamed
-Several APIs that return LUIS metadata have new names.
-
-| v1 route name | v2 route name |
-|--|--|
-|PersonalAssistantApps |assistants|
-|applicationcultures|cultures|
-|applicationdomains|domains|
-|applicationusagescenarios|usagescenarios|
--
-## "Sample" renamed to "suggest"
-LUIS suggests utterances from existing [endpoint utterances](luis-how-to-review-endpoint-utterances.md) that may enhance the model. In the previous version, this was named **sample**. In the new version, the name is changed from sample to **suggest**. This is called **[Review endpoint utterances](luis-how-to-review-endpoint-utterances.md)** in the LUIS website.
-
-| version | route |
-|--|--|
-|1|/luis/v1.0/**prog**/apps/{appId}/entities/{entityId}/**sample**|
-|1|/luis/v1.0/**prog**/apps/{appId}/intents/{intentId}/**sample**|
-|2|/luis/**api**/v2.0/apps/{appId}/**versions**/{versionId}/entities/{entityId}/**suggest**|
-|2|/luis/**api**/v2.0/apps/{appId}/**versions**/{versionId}/intents/{intentId}/**suggest**|
--
-## Create app from prebuilt domains
-[Prebuilt domains](./howto-add-prebuilt-models.md) provide a predefined domain model. Prebuilt domains allow you to quickly develop your LUIS application for common domains. This API allows you to create a new app based on a prebuilt domain. The response is the new appID.
-
-|v2 route|verb|
-|--|--|
-|/luis/api/v2.0/apps/customprebuiltdomains |get, post|
-|/luis/api/v2.0/apps/customprebuiltdomains/{culture} |get|
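
A hedged sketch of calling the POST route in the table above to create an app from a prebuilt domain. The request body fields (`domainName`, `culture`) and the authoring host name are assumptions not confirmed by this changelog; verify them against the authoring API reference. The response is expected to be the new app ID.

```nodejs
// Create a new app from a prebuilt domain via the customprebuiltdomains route above.
const authoringKey = "<AUTHORING-KEY>";
const authoringEndpoint = "https://<AUTHORING-RESOURCE-NAME>.cognitiveservices.azure.com";

async function createFromPrebuiltDomain() {
  const response = await fetch(`${authoringEndpoint}/luis/api/v2.0/apps/customprebuiltdomains`, {
    method: "POST",
    headers: {
      "Ocp-Apim-Subscription-Key": authoringKey,
      "Content-Type": "application/json"
    },
    // Hypothetical body fields - verify against the authoring API reference.
    body: JSON.stringify({ domainName: "Calendar", culture: "en-us" })
  });
  return response.json(); // expected: the new app ID
}
```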
-
-## Importing 1.x app into 2.x
-The exported 1.x app's JSON has some areas that you need to change before importing into [LUIS][LUIS] 2.0.
-
-### Prebuilt entities
-The [prebuilt entities](./howto-add-prebuilt-models.md) have changed. Make sure you are using the V2 prebuilt entities. This includes using [datetimeV2](luis-reference-prebuilt-datetimev2.md), instead of datetime.
-
-### Actions
-The actions property is no longer valid. It should be an empty array.
-
-### Labeled utterances
-V1 allowed labeled utterances to include spaces at the beginning or end of the word or phrase. Remove these spaces before importing. A trimming sketch follows.
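
A minimal sketch of removing those spaces in an exported app file before import. It assumes the export stores example utterances in an `utterances` array with `entities` labels that use inclusive `startPos`/`endPos` character indexes, as in the exported-app examples elsewhere in this changelog. The file names are hypothetical.

```nodejs
// Shrink labeled entity spans so they no longer start or end on a space character.
const fs = require("fs");

const app = JSON.parse(fs.readFileSync("exported-v1-app.json", "utf8"));
for (const utterance of app.utterances || []) {
  for (const label of utterance.entities || []) {
    while (utterance.text[label.startPos] === " ") label.startPos++;
    while (utterance.text[label.endPos] === " ") label.endPos--;
  }
}
fs.writeFileSync("cleaned-app.json", JSON.stringify(app, null, 2));
```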
-
-## Common reasons for HTTP response status codes
-See [LUIS API response codes](luis-reference-response-codes.md).
-
-## Next steps
-
-Use the v2 API documentation to update existing REST calls to LUIS [endpoint](https://go.microsoft.com/fwlink/?linkid=2092356) and [authoring](https://go.microsoft.com/fwlink/?linkid=2092087) APIs.
-
-[LUIS]: ./luis-reference-regions.md
cognitive-services Luis Migration Api V3 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/LUIS/luis-migration-api-v3.md
- Title: Prediction endpoint changes in the V3 API
-description: The query prediction endpoint V3 APIs have changed. Use this guide to understand how to migrate to version 3 endpoint APIs.
--
-ms.
--- Previously updated : 05/28/2021----
-# Prediction endpoint changes for V3
---
-The query prediction endpoint V3 APIs have changed. Use this guide to understand how to migrate to version 3 endpoint APIs. There is currently no date by which migration needs to be completed.
-
-**Generally available status** - this V3 API includes significant JSON request and response changes from the V2 API.
-
-The V3 API provides the following new features:
-
-* [External entities](schema-change-prediction-runtime.md#external-entities-passed-in-at-prediction-time)
-* [Dynamic lists](schema-change-prediction-runtime.md#dynamic-lists-passed-in-at-prediction-time)
-* [Prebuilt entity JSON changes](#prebuilt-entity-changes)
-
-The prediction endpoint [request](#request-changes) and [response](#response-changes) have significant changes to support the new features listed above, including the following:
-
-* [Response object changes](#top-level-json-changes)
-* [Entity role name references instead of entity name](#entity-role-name-instead-of-entity-name)
-* [Properties to mark entities in utterances](#marking-placement-of-entities-in-utterances)
-
-[Reference documentation](https://aka.ms/luis-api-v3) is available for V3.
-
-## V3 changes from preview to GA
-
-V3 made the following changes as part of the move to GA:
-
-* The following prebuilt entities have different JSON responses:
- * [OrdinalV1](luis-reference-prebuilt-ordinal.md)
- * [GeographyV2](luis-reference-prebuilt-geographyv2.md)
- * [DatetimeV2](luis-reference-prebuilt-datetimev2.md)
- * Measurable unit key name from `units` to `unit`
-
-* Request body JSON change:
- * from `preferExternalEntities` to `preferExternalEntities`
- * optional `score` parameter for external entities
-
-* Response body JSON changes:
- * `normalizedQuery` removed
-
-## Suggested adoption strategy
-
-If you use Bot Framework, Bing Spell Check V7, or want to migrate your LUIS app authoring only, continue to use the V2 endpoint.
-
-If you know none of your client application or integrations (Bot Framework, and Bing Spell Check V7) are impacted and you are comfortable migrating your LUIS app authoring and your prediction endpoint at the same time, begin using the V3 prediction endpoint. The V2 prediction endpoint will still be available and is a good fall-back strategy.
-
-For information on using the Bing Spell Check API, see [How to correct misspelled words](luis-tutorial-bing-spellcheck.md).
--
-## Not supported
-
-### Bot Framework and Azure Bot Service client applications
-
-Continue to use the V2 API prediction endpoint until the V4.7 of the Bot Framework is released.
--
-## Endpoint URL changes
-
-### Changes by slot name and version name
-
-The [format of the V3 endpoint HTTP](developer-reference-resource.md#rest-endpoints) call has changed.
-
-If you want to query by version, you first need to [publish via API](https://westus.dev.cognitive.microsoft.com/docs/services/5890b47c39e2bb17b84a55ff/operations/5890b47c39e2bb052c5b9c3b) with `"directVersionPublish":true`. Query the endpoint referencing the version ID instead of the slot name.
-
-|Valid values for `SLOT-NAME`|
-|--|
-|`production`|
-|`staging`|
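
A sketch of the two V3 URL shapes this section describes: querying by slot name versus querying by version ID after publishing with `"directVersionPublish":true`. The URL patterns and the custom-subdomain host are assumptions based on the linked endpoint reference; confirm them there before use.

```nodejs
// Build V3 prediction URLs by slot name or by version ID (values are placeholders).
const endpoint = "https://<LUIS-RESOURCE-NAME>.cognitiveservices.azure.com";
const appId = "<LUIS-APP-ID>";
const key = "<PREDICTION-KEY>";
const query = encodeURIComponent("your utterance here");

// By slot name: production or staging.
const bySlot =
  `${endpoint}/luis/prediction/v3.0/apps/${appId}/slots/production/predict` +
  `?subscription-key=${key}&query=${query}`;

// By version ID: requires publishing with "directVersionPublish": true first.
const byVersion =
  `${endpoint}/luis/prediction/v3.0/apps/${appId}/versions/0.1/predict` +
  `?subscription-key=${key}&query=${query}`;
```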
-
-## Request changes
-
-### Query string changes
--
-### V3 POST body
-
-```JSON
-{
- "query":"your utterance here",
- "options":{
- "datetimeReference": "2019-05-05T12:00:00",
- "preferExternalEntities": true
- },
- "externalEntities":[],
- "dynamicLists":[]
-}
-```
-
-|Property|Type|Version|Default|Purpose|
-|--|--|--|--|--|
-|`dynamicLists`|array|V3 only|Not required.|[Dynamic lists](schema-change-prediction-runtime.md#dynamic-lists-passed-in-at-prediction-time) allow you to extend an existing trained and published list entity, already in the LUIS app.|
-|`externalEntities`|array|V3 only|Not required.|[External entities](schema-change-prediction-runtime.md#external-entities-passed-in-at-prediction-time) give your LUIS app the ability to identify and label entities during runtime, which can be used as features to existing entities. |
-|`options.datetimeReference`|string|V3 only|No default|Used to determine [datetimeV2 offset](luis-concept-data-alteration.md#change-time-zone-of-prebuilt-datetimev2-entity). The format for the datetimeReference is [ISO 8601](https://en.wikipedia.org/wiki/ISO_8601).|
-|`options.preferExternalEntities`|boolean|V3 only|false|Specifies if user's [external entity (with same name as existing entity)](schema-change-prediction-runtime.md#override-existing-model-predictions) is used or the existing entity in the model is used for prediction. |
-|`query`|string|V3 only|Required.|**In V2**, the utterance to be predicted is in the `q` parameter. <br><br>**In V3**, the functionality is passed in the `query` parameter.|
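
A sketch of sending the V3 POST body shown above, with the options described in the property table. The app ID, key, and endpoint host are placeholders, and the assumption is that the POST variant of the predict route accepts this body as-is; this uses Node.js 18+ global `fetch`.

```nodejs
// POST the V3 request body from the example above to the prediction endpoint.
const endpoint = "https://<LUIS-RESOURCE-NAME>.cognitiveservices.azure.com";
const appId = "<LUIS-APP-ID>";
const key = "<PREDICTION-KEY>";

async function predictV3(utterance) {
  const url = `${endpoint}/luis/prediction/v3.0/apps/${appId}/slots/production/predict?verbose=true`;
  const response = await fetch(url, {
    method: "POST",
    headers: {
      "Ocp-Apim-Subscription-Key": key,
      "Content-Type": "application/json"
    },
    body: JSON.stringify({
      query: utterance,
      options: {
        datetimeReference: "2019-05-05T12:00:00", // ISO 8601, see the table above
        preferExternalEntities: true
      },
      externalEntities: [],
      dynamicLists: []
    })
  });
  return response.json();
}
```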
-
-## Response changes
-
-The query response JSON changed to allow greater programmatic access to the data used most frequently.
-
-### Top level JSON changes
---
-The top-level JSON properties for V2, when `verbose` is set to true (which returns all intents and their scores in the `intents` property), are:
-
-```JSON
-{
- "query":"this is your utterance you want predicted",
- "topScoringIntent":{},
- "intents":[],
- "entities":[],
- "compositeEntities":[]
-}
-```
-
-The top JSON properties for V3 are:
-
-```JSON
-{
- "query": "this is your utterance you want predicted",
- "prediction":{
- "topIntent": "intent-name-1",
- "intents": {},
- "entities":{}
- }
-}
-```
-
-The `intents` object is an unordered list. Do not assume the first child in the `intents` corresponds to the `topIntent`. Instead, use the `topIntent` value to find the score:
-
-```nodejs
-// The intents object is keyed by intent name; use topIntent to look up its score.
-const topIntentName = response.prediction.topIntent;
-const score = response.prediction.intents[topIntentName].score;
-```
-
-The response JSON schema changes allow for:
-
-* Clear distinction between original utterance, `query`, and returned prediction, `prediction`.
-* Easier programmatic access to predicted data. Instead of enumerating through an array in V2, you can access values by **name** for both intents and entities. For predicted entity roles, the role name is returned because it is unique across the entire app.
-* Data types, if determined, are respected. Numerics are no longer returned as strings.
-* Distinction between first priority prediction information and additional metadata, returned in the `$instance` object.
-
-### Entity response changes
-
-#### Marking placement of entities in utterances
-
-**In V2**, an entity was marked in an utterance with the `startIndex` and `endIndex`.
-
-**In V3**, the entity is marked with `startIndex` and `entityLength`.
-
-#### Access `$instance` for entity metadata
-
-If you need entity metadata, the query string needs to use the `verbose=true` flag and the response contains the metadata in the `$instance` object. Examples are shown in the JSON responses in the following sections.
-
-#### Each predicted entity is represented as an array
-
-The `prediction.entities.<entity-name>` object contains an array because each entity can be predicted more than once in the utterance.
-
-<a name="prebuilt-entities-with-new-json"></a>
-
-#### Prebuilt entity changes
-
-The V3 response object includes changes to prebuilt entities. Review [specific prebuilt entities](luis-reference-prebuilt-entities.md) to learn more.
-
-#### List entity prediction changes
-
-The JSON for a list entity prediction has changed to be an array of arrays:
-
-```JSON
-"entities":{
- "my_list_entity":[
- ["canonical-form-1","canonical-form-2"],
- ["canonical-form-2"]
- ]
-}
-```
-Each interior array corresponds to text inside the utterance. Each interior array can contain more than one canonical form because the same text can appear in more than one sublist of a list entity.
-
-When mapping between the `entities` object and the `$instance` object, the order of objects is preserved for the list entity predictions.
-
-```nodejs
-const item = 0; // order preserved, use same enumeration for both
-const predictedCanonicalForm = entities.my_list_entity[item];
-const associatedMetadata = entities.$instance.my_list_entity[item];
-```
-
-#### Entity role name instead of entity name
-
-In V2, the `entities` array returned all the predicted entities with the entity name being the unique identifier. In V3, if the entity uses roles and the prediction is for an entity role, the primary identifier is the role name. This is possible because entity role names must be unique across the entire app including other model (intent, entity) names.
-
-In the following example: consider an utterance that includes the text, `Yellow Bird Lane`. This text is predicted as a custom `Location` entity's role of `Destination`.
-
-|Utterance text|Entity name|Role name|
-|--|--|--|
-|`Yellow Bird Lane`|`Location`|`Destination`|
-
-In V2, the entity is identified by the _entity name_ with the role as a property of the object:
-
-```JSON
-"entities":[
- {
- "entity": "Yellow Bird Lane",
- "type": "Location",
- "startIndex": 13,
- "endIndex": 20,
- "score": 0.786378264,
- "role": "Destination"
- }
-]
-```
-
-In V3, the entity is referenced by the _entity role_, if the prediction is for the role:
-
-```JSON
-"entities":{
- "Destination":[
- "Yellow Bird Lane"
- ]
-}
-```
-
-In V3, the same result with the `verbose` flag to return entity metadata:
-
-```JSON
-"entities":{
- "Destination":[
- "Yellow Bird Lane"
- ],
- "$instance":{
- "Destination": [
- {
- "role": "Destination",
- "type": "Location",
- "text": "Yellow Bird Lane",
- "startIndex": 25,
- "length":16,
- "score": 0.9837309,
- "modelTypeId": 1,
- "modelType": "Entity Extractor"
- }
- ]
- }
-}
-```
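
A small sketch of reading that `$instance` metadata: it uses the `startIndex` and `length` fields shown above to recover the matched text from the original query. The `response` variable is assumed to hold the parsed V3 prediction JSON shaped like the examples in this section.

```nodejs
// Recover the matched text for each Destination prediction from the $instance metadata.
const query = response.query;
const instances = response.prediction.entities.$instance.Destination || [];

for (const instance of instances) {
  // startIndex and length mark the entity's placement in the utterance (V3 behavior).
  const matchedText = query.slice(instance.startIndex, instance.startIndex + instance.length);
  console.log(matchedText, instance.score);
}
```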
-
-<a name="external-entities-passed-in-at-prediction-time"></a>
-<a name="override-existing-model-predictions"></a>
-
-## Extend the app at prediction time
-
-Learn [concepts](schema-change-prediction-runtime.md) about how to extend the app at prediction runtime.
--
-## Next steps
-
-Use the V3 API documentation to update existing REST calls to LUIS [endpoint](https://westcentralus.dev.cognitive.microsoft.com/docs/services/luis-endpoint-api-v3-0/operations/5cb0a9459a1fe8fa44c28dd8) APIs.
cognitive-services Luis Migration Authoring Entities https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/LUIS/luis-migration-authoring-entities.md
- Title: Migrate to V3 machine-learning entity
-description: The V3 authoring provides one new entity type, the machine-learning entity, along with the ability to add relationships to the machine-learning entity and other entities or features of the application.
--- Previously updated : 05/28/2021--
-# Migrate to V3 Authoring entity
---
-The V3 authoring provides one new entity type, the machine-learning entity, along with the ability to add relationships to the machine-learning entity and other entities or features of the application. There is currently no date by which migration needs to be completed.
-
-## Entities are decomposable in V3
-
-Entities created with the V3 authoring APIs, either using the [APIs](https://westeurope.dev.cognitive.microsoft.com/docs/services/luis-programmatic-apis-v3-0-preview) or with the portal, allow you to build a layered entity model with a parent and children. The parent is known as the **machine-learning entity** and the children are known as **subentities** of the machine-learning entity.
-
-Each subentity is also a machine-learning entity but with the added configuration options of features.
-
-* **Required features** are rules that guarantee an entity is extracted when it matches a feature. The rule is defined by adding a required feature to the model, which can be one of the following:
- * [Prebuilt entity](luis-reference-prebuilt-entities.md)
- * [Regular expression entity](reference-entity-regular-expression.md)
- * [List entity](reference-entity-list.md).
-
-## How do these new relationships compare to V2 authoring
-
-V2 authoring provided hierarchical and composite entities along with roles and features to accomplish this same task. Because the entities, features, and roles were not explicitly related to each other, it was difficult to understand how LUIS implied the relationships during prediction.
-
-With V3, the relationship is explicit and designed by the app authors. This allows you, as the app author, to:
-
-* Visually see how LUIS is predicting these relationships, in the example utterances
-* Test for these relationships either with the [interactive test pane](luis-interactive-test.md) or at the endpoint
-* Use these relationships in the client application, via a well-structured, named, nested [.json object](reference-entity-machine-learned-entity.md)
-
-## Planning
-
-When you migrate, consider the following in your migration plan:
-
-* Back up your LUIS app, and perform the migration on a separate app. Having a V2 and V3 app available at the same time allows you to validate the changes required and the impact on the prediction results.
-* Capture current prediction success metrics
-* Capture current dashboard information as a snapshot of app status
-* Review existing intents, entities, phrase lists, patterns, and batch tests
-* The following elements can be migrated **without change**:
- * Intents
- * Entities
- * Regular expression entity
- * List entity
- * Features
- * Phrase list
-* The following elements need to be migrated **with changes**:
- * Entities
- * Hierarchical entity
- * Composite entity
- * Roles - roles can only be applied to a machine-learning (parent) entity. Roles can't be applied to subentities
- * Batch tests and patterns that use the hierarchical and composite entities
-
-When you design your migration plan, leave time to review the final machine-learning entities, after all hierarchical and composite entities have been migrated. While a straight migration will work, after you make the change and review your batch test results and prediction JSON, the more unified JSON may lead you to make changes so that the final information delivered to the client-side app is organized differently. This is similar to code refactoring and should be treated with the same review process your organization has in place.
-
-If you don't have batch tests in place for your V2 model, and migrate the batch tests to the V3 model as part of the migration, you won't be able to validate how the migration will impact the endpoint prediction results.
-
-## Migrating from V2 entities
-
-As you begin to move to the V3 authoring model, you should consider how to move to the machine-learning entity, and its subentities and features.
-
-The following table notes which entities need to migrate from a V2 to a V3 entity design.
-
-|V2 authoring entity type|V3 authoring entity type|Example|
-|--|--|--|
-|Composite entity|Machine learned entity|[learn more](#migrate-v2-composite-entity)|
-|Hierarchical entity|machine-learning entity's role|[learn more](#migrate-v2-hierarchical-entity)|
-
-## Migrate V2 Composite entity
-
-Each child of the V2 composite should be represented with a subentity of the V3 machine-learning entity. If the composite child is a prebuilt, regular expression, or a list entity, this should be applied as a required feature on the subentity.
-
-Considerations when planning to migrate a composite entity to a machine-learning entity:
-* Child entities can't be used in patterns
-* Child entities are no longer shared
-* Child entities need to be labeled if they used to be non-machine-learned
-
-### Existing features
-
-Any phrase list used to boost words in the composite entity should be applied as a feature to either the machine-learning (parent) entity, the subentity (child) entity, or the intent (if the phrase list only applies to one intent). Plan to add the feature to the entity where it should boost most significantly. Do not add the feature generically to the machine-learning (parent) entity, if it will most significantly boost the prediction of a subentity (child).
-
-### New features
-
-In V3 authoring, add a planning step to evaluate entities as possible features for all the entities and intents.
-
-### Example entity
-
-This entity is an example only. Your own entity migration may require other considerations.
-
-Consider a V2 composite for modifying a pizza `order` that uses:
-* prebuilt datetimeV2 for delivery time
-* phrase list to boost certain words such as pizza, pie, crust, and topping
-* list entity to detect toppings such as mushrooms, olives, pepperoni.
-
-An example utterance for this entity is:
-
-`Change the toppings on my pie to mushrooms and delivery it 30 minutes later`
-
-The following table demonstrates the migration:
-
-|V2 models|V3 models|
-|--|--|
-|Parent - Component entity named `Order`|Parent - machine-learning entity named `Order`|
-|Child - Prebuilt datetimeV2|* Migrate prebuilt entity to new app.<br>* Add required feature on parent for prebuilt datetimeV2.|
-|Child - list entity for toppings|* Migrate list entity to new app.<br>* Then add a required feature on the parent for the list entity.|
--
-## Migrate V2 Hierarchical entity
-
-In V2 authoring, a hierarchical entity was provided before roles existed in LUIS. Both served the same purpose of extracting entities based on context usage. If you have hierarchical entities, you can think of them as simple entities with roles.
-
-In V3 authoring:
-* A role can be applied on the machine-learning (parent) entity.
-* A role can't be applied to any subentities.
-
-This entity is an example only. Your own entity migration may require other considerations.
-
-Consider a V2 hierarchical entity for modifying a pizza `order`:
-* where each child determines either an original topping or the final topping
-
-An example utterance for this entity is:
-
-`Change the topping from mushrooms to olives`
-
-The following table demonstrates the migration:
-
-|V2 models|V3 models|
-|--|--|
-|Parent - Component entity named `Order`|Parent - machine-learning entity named `Order`|
-|Child - Hierarchical entity with original and final pizza topping|* Add role to `Order` for each topping.|
-
-## API change constraint replaced with required feature
-
-This change was made in May 2020 at the //Build conference and only applies to the v3 authoring APIs where an app is using a constrained feature. If you are migrating from v2 authoring to v3 authoring, or have not used v3 constrained features, skip this section.
-
-**Functionality** - ability to require an existing entity as a feature to another model and only extract that model if the entity is detected. The functionality has not changed but the API and terminology have changed.
-
-|Previous terminology|New terminology|
-|--|--|
-|`constrained feature`<br>`constraint`<br>`instanceOf`|`required feature`<br>`isRequired`|
-
-#### Automatic migration
-
-Starting **June 19 2020**, you won't be allowed to create constraints programmatically using the previous authoring API that exposed this functionality.
-
-All existing constraint features will be automatically migrated to the required feature flag. No programmatic changes are required to your prediction API and no resulting change on the quality of the prediction accuracy.
-
-#### LUIS portal changes
-
-The LUIS preview portal referenced this functionality as a **constraint**. The current LUIS portal designates this functionality as a **required feature**.
-
-#### Previous authoring API
-
-This functionality was applied in the preview authoring **[Create Entity Child API](https://westus.dev.cognitive.microsoft.com/docs/services/luis-programmatic-apis-v3-0-preview/operations/5d86cf3c6a25a45529767d77)** as the part of an entity's definition, using the `instanceOf` property of an entity's child:
-
-```json
-{
- "name" : "dayOfWeek",
- "instanceOf": "datetimeV2",
- "children": [
- {
- "name": "dayNumber",
- "instanceOf": "number",
- "children": []
- }
- ]
-}
-```
-
-#### New authoring API
-
-This functionality is now applied with the **[Add entity feature relation API](https://westus.dev.cognitive.microsoft.com/docs/services/luis-programmatic-apis-v3-0-preview/operations/5d9dc1781e38aaec1c375f26)** using the `featureName` and `isRequired` properties. The value of the `featureName` property is the name of the model.
-
-```json
-{
- "featureName": "YOUR-MODEL-NAME-HERE",
- "isRequired" : true
-}
-```
--
-## Next steps
-
-* [Developer resources](developer-reference-resource.md)
cognitive-services Luis Migration Authoring https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/LUIS/luis-migration-authoring.md
- Title: Migrate to an Azure resource authoring key-
-description: This article describes how to migrate Language Understanding (LUIS) authoring authentication from an email account to an Azure resource.
-------- Previously updated : 03/21/2022--
-# Migrate to an Azure resource authoring key
---
-> [!IMPORTANT]
-> As of December 3rd 2020, existing LUIS users must have completed the migration process to continue authoring LUIS applications.
-
-Language Understanding (LUIS) authoring authentication has changed from an email account to an Azure resource. Use this article to learn how to migrate your account, if you haven't migrated yet.
--
-## What is migration?
-
-Migration is the process of changing authoring authentication from an email account to an Azure resource. Your account will be linked to an Azure subscription and an Azure authoring resource after you migrate.
-
-Migration has to be done from the [LUIS portal](https://www.luis.ai). If you create the authoring keys by using the LUIS CLI, for example, you'll need to complete the migration process in the LUIS portal. You can still have co-authors on your applications after migration, but these will be added on the Azure resource level instead of the application level. Migrating your account can't be reversed.
-
-> [!Note]
-> * If you need to create a prediction runtime resource, there's [a separate process](luis-how-to-azure-subscription.md#create-luis-resources) to create it.
-> * See the [migration notes](#migration-notes) section below for information on how your applications and contributors will be affected.
-> * Authoring your LUIS app is free, as indicated by the F0 tier. Learn [more about pricing tiers](luis-limits.md#resource-usage-and-limits).
-
-## Migration prerequisites
-
-* A valid Azure subscription. Ask your tenant admin to add you on the subscription, or [sign up for a free one](https://azure.microsoft.com/free/cognitive-services).
-* A LUIS Azure authoring resource from the LUIS portal or from the [Azure portal](https://portal.azure.com/#create/Microsoft.CognitiveServicesLUISAllInOne).
- * Creating an authoring resource from the LUIS portal is part of the migration process described in the next section.
-* If you're a collaborator on applications, applications won't automatically migrate. You will be prompted to export these apps while going through the migration flow. You can also use the [export API](https://westus.dev.cognitive.microsoft.com/docs/services/5890b47c39e2bb17b84a55ff/operations/5890b47c39e2bb052c5b9c40). You can import the app back into LUIS after migration. The import process creates a new app with a new app ID, for which you're the owner.
-* If you're the owner of the application, you won't need to export your apps because they'll migrate automatically. An email template with a list of all collaborators for each application is provided, so they can be notified of the migration process.
-
-## Migration steps
-
-1. When you sign-in to the [LUIS portal](https://www.luis.ai), an Azure migration window will open with the steps for migration. If you dismiss it, you won't be able to proceed with authoring your LUIS applications, and the only action displayed will be to continue with the migration.
-
- > [!div class="mx-imgBorder"]
- > ![Migration Window Intro](./media/migrate-authoring-key/notify-azure-migration.png)
-
-2. If you have collaborators on any of your apps, you will see a list of application names owned by you, along with the authoring region and collaborator emails on each application. We recommend sending your collaborators an email notifying them about the migration by clicking on the **send** symbol button on the left of the application name.
-A `*` symbol will appear next to the application name if a collaborator has a prediction resource assigned to your application. After migration, these apps will still have these prediction resources assigned to them even though the collaborators will not have access to author your applications. However, this assignment will be broken if the owner of the prediction resource [regenerates the keys](./luis-how-to-azure-subscription.md#regenerate-an-azure-key) from the Azure portal.
-
- > [!div class="mx-imgBorder"]
- > ![Notify collaborators](./media/migrate-authoring-key/notify-azure-migration-collabs.png)
--
- For each collaborator and app, the default email application opens with a lightly formatted email. You can edit the email before sending it. The email template includes the exact app ID and app name.
-
- ```html
- Dear Sir/Madam,
-
- I will be migrating my LUIS account to Azure. Consequently, you will no longer have access to the following app:
-
- App Id: <app-ID-omitted>
- App name: Human Resources
-
- Thank you
- ```
- > [!Note]
- > After you migrate your account to Azure, your apps will no longer be available to collaborators.
-
-3. If you're a collaborator on any apps, a list of application names shared with you is shown along with the authoring region and owner emails on each application. It is recommended that you export a copy of the apps by clicking on the export button on the left of the application name. You can import these apps back after you migrate, because they won't be automatically migrated with you.
-A `*` symbol will appear next to the application name if you have a prediction resource assigned to an application. After migration, your prediction resource will still be assigned to these applications even though you will no longer have access to author these apps. If you want to break the assignment between your prediction resource and the application, you will need to go to Azure portal and [regenerate the keys](./luis-how-to-azure-subscription.md#regenerate-an-azure-key).
-
- > [!div class="mx-imgBorder"]
- > ![Export your applications.](./media/migrate-authoring-key/migration-export-apps.png)
--
-4. In the window for migrating regions, you will be asked to migrate your applications to an Azure resource in the same region they were authored in. LUIS has three authoring regions [and portals](./luis-reference-regions.md#luis-authoring-regions). The window will show the regions where your owned applications were authored. The displayed migration regions may be different depending on the regional portal you use, and apps you've authored.
-
- > [!div class="mx-imgBorder"]
- > ![Multi region migration.](./media/migrate-authoring-key/migration-regional-flow.png)
-
-5. For each region, choose to create a new LUIS authoring resource, or to migrate to an existing one using the buttons.
-
- > [!div class="mx-imgBorder"]
- > ![choose to create or existing authoring resource](./media/migrate-authoring-key/migration-multiregional-resource.png)
-
- Provide the following information:
-
- * **Tenant Name**: The tenant that your Azure subscription is associated with. By default this is set to the tenant you're currently using. You can switch tenants by closing this window and selecting the avatar in the top right of the screen, containing your initials. Click on **Migrate to Azure** to re-open the window.
- * **Azure Subscription Name**: The subscription that will be associated with the resource. If you have more than one subscription that belongs to your tenant, select the one you want from the drop-down list.
- * **Authoring Resource Name**: A custom name that you choose. It's used as part of the URL for your authoring and prediction endpoint queries. If you are creating a new authoring resource, note that the resource name can only include alphanumeric characters, `-`, and can't start or end with `-`. If any other symbols are included in the name,
- resource creation and migration will fail.
- * **Azure Resource Group Name**: A custom resource group name that you choose from the drop-down list. Resource groups allow you to group Azure resources for access and management. If you currently do not have a resource group in your subscription, you will not be allowed to create one in the LUIS portal. Go to [Azure portal](https://portal.azure.com/#create/Microsoft.ResourceGroup) to create one then go to LUIS to continue the sign-in process.
-
-6. After you have successfully migrated in all regions, click on finish. You will now have access to your applications. You can continue authoring and maintaining all your applications in all regions within the portal.
-
-## Migration notes
-
-* Before migration, coauthors are known as _collaborators_ on the LUIS app level. After migration, the Azure role of _contributor_ is used for the same functionality on the Azure resource level.
-* If you have signed-in to more than one [LUIS regional portal](./luis-reference-regions.md#luis-authoring-regions), you will be asked to migrate in multiple regions at once.
-* Applications will automatically migrate with you if you're the owner of the application. Applications will not migrate with you if you're a collaborator on the application. However, collaborators will be prompted to export the apps they need.
-* Application owners can't choose a subset of apps to migrate and there is no way for an owner to know if collaborators have migrated.
-* Migration does not automatically move or add collaborators to the Azure authoring resource. The app owner is the one who needs to complete this step after migration. This step requires [permissions to the Azure authoring resource](./luis-how-to-collaborate.md).
-* After contributors are assigned to the Azure resource, they will need to migrate before they can access applications. Otherwise, they won't have access to author the applications.
--
-## Using apps after migration
-
-After the migration process, all your LUIS apps for which you're the owner will now be assigned to a single LUIS authoring resource.
-The **My Apps** list shows the apps migrated to the new authoring resource. Before you access your apps, select **Choose a different authoring resource** to select the subscription and authoring resource to view the apps that can be authored.
-
-> [!div class="mx-imgBorder"]
-> ![select subscription and authoring resource](./media/migrate-authoring-key/select-sub-and-resource.png)
--
-If you plan to edit your apps programmatically, you'll need the authoring key values. These values are displayed by clicking **Manage** at the top of the screen in the LUIS portal, and then selecting **Azure Resources**. They're also available in the Azure portal on the resource's **Key and endpoints** page. You can also create more authoring resources and assign them from the same page.
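
A hedged sketch of using one of those authoring key values programmatically: it lists the apps under the authoring resource with the V2 authoring route shown earlier in this changelog (`/luis/api/v2.0/apps`). The authoring host name is an assumption; use the endpoint shown on your resource's **Keys and Endpoint** page, and Node.js 18+ for global `fetch`.

```nodejs
// List the LUIS apps under an authoring resource using its authoring key.
const authoringKey = "<AUTHORING-KEY>";
// Use the endpoint shown for your authoring resource in the Azure portal.
const authoringEndpoint = "https://<AUTHORING-RESOURCE-NAME>.cognitiveservices.azure.com";

async function listApps() {
  const response = await fetch(`${authoringEndpoint}/luis/api/v2.0/apps/`, {
    headers: { "Ocp-Apim-Subscription-Key": authoringKey }
  });
  const apps = await response.json();
  for (const app of apps) {
    console.log(app.id, app.name);
  }
  return apps;
}
```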
-
-## Adding contributors to authoring resources
--
-Learn [how to add contributors](luis-how-to-collaborate.md) on your authoring resource. Contributors will have access to all applications under that resource.
-
-You can add contributors to the authoring resource from the Azure portal, on the **Access Control (IAM)** page for that resource. For more information, see [Add contributors to your app](luis-how-to-collaborate.md).
-
-> [!Note]
-> If the owner of the LUIS app migrated and added the collaborator as a contributor on the Azure resource, the collaborator will still have no access to the app unless they also migrate.
-
-## Troubleshooting the migration process
-
-If you cannot find your Azure subscription in the drop-down list:
-* Ensure that you have a valid Azure subscription that's authorized to create Cognitive Services resources. Go to the [Azure portal](https://portal.azure.com) and check the status of the subscription. If you don't have one, [create a free Azure account](https://azure.microsoft.com/free/cognitive-services/).
-* Ensure that you're in the proper tenant associated with your valid subscription. You can switch tenants by selecting the avatar containing your initials in the top right of the screen.
-
- > [!div class="mx-imgBorder"]
- > ![Page for switching directories](./media/migrate-authoring-key/switch-directories.png)
-
-If you have an existing authoring resource but can't find it when you select the **Use Existing Authoring Resource** option:
-* Your resource was probably created in a different region than the one you are trying to migrate in.
-* Create a new resource from the LUIS portal instead.
-
-If you select the **Create New Authoring Resource** option and migration fails with the error message "Failed retrieving user's Azure information, retry again later":
-* Your subscription might already have 10 or more authoring resources in that region. If that's the case, you won't be able to create a new authoring resource.
-* Migrate by selecting the **Use Existing Authoring Resource** option and selecting one of the existing resources under your subscription.
-
-## Create new support request
-
-If you are having any issues with the migration that are not addressed in the troubleshooting section, please [create a support request](https://portal.azure.com/#blade/Microsoft_Azure_Support/HelpAndSupportBlade/newsupportrequest) and fill in the following fields:
-
- * **Issue Type**: Technical
- * **Subscription**: Choose a subscription from the dropdown list
- * **Service**: Search and select "Cognitive Services"
- * **Resource**: Choose a LUIS resource if there is an existing one. If not, select General question.
-
-## Next steps
-
-* Review [concepts about authoring and runtime keys](luis-how-to-azure-subscription.md)
-* Review how to [assign keys](luis-how-to-azure-subscription.md) and [add contributors](luis-how-to-collaborate.md)
cognitive-services Luis Reference Application Settings https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/LUIS/luis-reference-application-settings.md
- Title: Application settings - LUIS
-description: Applications settings for Azure Cognitive Services language understanding apps are stored in the app and portal.
--- Previously updated : 05/04/2020--
-# App and version settings
---
-These settings are stored in the [exported](https://westus.dev.cognitive.microsoft.com/docs/services/5890b47c39e2bb17b84a55ff/operations/5890b47c39e2bb052c5b9c40) app and updated with the REST APIs or LUIS portal.
-
-Changing your app version settings resets your app training status to untrained.
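-
-As a rough sketch only (the `settings` array shape and the `NormalizeDiacritics` name are assumed from a typical exported LUIS app, not taken from this article), the two normalization settings covered below might appear in the exported app JSON like this:
-
-```json
-"settings": [
-  {
-    "name": "NormalizeDiacritics",
-    "value": "true"
-  },
-  {
-    "name": "NormalizePunctuation",
-    "value": "true"
-  }
-]
-```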
---
-Text reference and examples include:
-
-* [Punctuation](#punctuation-normalization)
-* [Diacritics](#diacritics-normalization)
-
-## Diacritics normalization
-
-The following example shows how diacritics normalization impacts an utterance:
-
-|With diacritics set to false|With diacritics set to true|
-|--|--|
-|`quiero tomar una piña colada`|`quiero tomar una pina colada`|
-|||
-
-### Language support for diacritics
-
-#### Brazilian portuguese `pt-br` diacritics
-
-|Diacritics set to false|Diacritics set to true|
-|-|-|
-|`á`|`a`|
-|`â`|`a`|
-|`ã`|`a`|
-|`à`|`a`|
-|`ç`|`c`|
-|`é`|`e`|
-|`ê`|`e`|
-|`í`|`i`|
-|`ó`|`o`|
-|`ô`|`o`|
-|`õ`|`o`|
-|`ú`|`u`|
-|||
-
-#### Dutch `nl-nl` diacritics
-
-|Diacritics set to false|Diacritics set to true|
-|-|-|
-|`á`|`a`|
-|`à`|`a`|
-|`é`|`e`|
-|`ë`|`e`|
-|`è`|`e`|
-|`ï`|`i`|
-|`í`|`i`|
-|`ó`|`o`|
-|`ö`|`o`|
-|`ú`|`u`|
-|`ü`|`u`|
-|||
-
-#### French `fr-` diacritics
-
-This includes both the French (France) and French (Canada) subcultures.
-
-|Diacritics set to false|Diacritics set to true|
-|--|--|
-|`é`|`e`|
-|`à`|`a`|
-|`è`|`e`|
-|`ù`|`u`|
-|`â`|`a`|
-|`ê`|`e`|
-|`î`|`i`|
-|`ô`|`o`|
-|`û`|`u`|
-|`ç`|`c`|
-|`ë`|`e`|
-|`ï`|`i`|
-|`ü`|`u`|
-|`ÿ`|`y`|
-
-#### German `de-de` diacritics
-
-|Diacritics set to false|Diacritics set to true|
-|--|--|
-|`ä`|`a`|
-|`ö`|`o`|
-|`ü`|`u`|
-
-#### Italian `it-it` diacritics
-
-|Diacritics set to false|Diacritics set to true|
-|--|--|
-|`à`|`a`|
-|`è`|`e`|
-|`é`|`e`|
-|`ì`|`i`|
-|`í`|`i`|
-|`î`|`i`|
-|`ò`|`o`|
-|`ó`|`o`|
-|`ù`|`u`|
-|`ú`|`u`|
-
-#### Spanish `es-` diacritics
-
-This includes both the Spanish (Spain) and Spanish (Mexico) subcultures.
-
-|Diacritics set to false|Diacritics set to true|
-|-|-|
-|`á`|`a`|
-|`é`|`e`|
-|`í`|`i`|
-|`ó`|`o`|
-|`ú`|`u`|
-|`ü`|`u`|
-|`ñ`|`n`|
-
-## Punctuation normalization
-
-The following example shows how punctuation normalization impacts an utterance:
-
-|With punctuation set to False|With punctuation set to True|
-|--|--|
-|`Hmm..... I will take the cappuccino`|`Hmm I will take the cappuccino`|
-|||
-
-### Punctuation removed
-
-The following punctuation is removed when `NormalizePunctuation` is set to true.
-
-|Punctuation|
-|--|
-|`-`|
-|`.`|
-|`'`|
-|`"`|
-|`\`|
-|`/`|
-|`?`|
-|`!`|
-|`_`|
-|`,`|
-|`;`|
-|`:`|
-|`(`|
-|`)`|
-|`[`|
-|`]`|
-|`{`|
-|`}`|
-|`+`|
-|`¡`|
-
-## Next steps
-
-* Learn [concepts](concepts/utterances.md#utterance-normalization) of diacritics and punctuation.
cognitive-services Luis Reference Prebuilt Age https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/LUIS/luis-reference-prebuilt-age.md
- Title: Age Prebuilt entity - LUIS-
-description: This article contains age prebuilt entity information in Language Understanding (LUIS).
------- Previously updated : 10/04/2019---
-# Age prebuilt entity for a LUIS app
--
-The prebuilt age entity captures the age value both numerically and in terms of days, weeks, months, and years. Because this entity is already trained, you do not need to add example utterances containing age to the application intents. Age entity is supported in [many cultures](luis-reference-prebuilt-entities.md).
-
-## Types of age
-Age is managed from the [Recognizers-text](https://github.com/Microsoft/Recognizers-Text/blob/master/Patterns/English/English-NumbersWithUnit.yaml#L3) GitHub repository.
-
-## Resolution for prebuilt age entity
---
-#### [V3 response](#tab/V3)
-
-The following JSON is with the `verbose` parameter set to `false`:
-
-```json
-"entities": {
- "age": [
- {
- "number": 90,
- "unit": "Day"
- }
- ]
-}
-```
-#### [V3 verbose response](#tab/V3-verbose)
-The following JSON is with the `verbose` parameter set to `true`:
-
-```json
-"entities": {
- "age": [
- {
- "number": 90,
- "unit": "Day"
- }
- ],
- "$instance": {
- "age": [
- {
- "type": "builtin.age",
- "text": "90 day old",
- "startIndex": 2,
- "length": 10,
- "modelTypeId": 2,
- "modelType": "Prebuilt Entity Extractor"
- }
- ]
- }
-}
-```
-#### [V2 response](#tab/V2)
-
-The following example shows the resolution of the **builtin.age** entity.
-
-```json
- "entities": [
- {
- "entity": "90 day old",
- "type": "builtin.age",
- "startIndex": 2,
- "endIndex": 11,
- "resolution": {
- "unit": "Day",
- "value": "90"
- }
- }
-```
-* * *
-
-## Next steps
-
-Learn more about the [V3 prediction endpoint](luis-migration-api-v3.md).
-
-Learn about the [currency](luis-reference-prebuilt-currency.md), [datetimeV2](luis-reference-prebuilt-datetimev2.md), and [dimension](luis-reference-prebuilt-dimension.md) entities.
cognitive-services Luis Reference Prebuilt Currency https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/LUIS/luis-reference-prebuilt-currency.md
- Title: Currency Prebuilt entity - LUIS-
-description: This article contains currency prebuilt entity information in Language Understanding (LUIS).
------- Previously updated : 10/14/2019---
-# Currency prebuilt entity for a LUIS app
--
-The prebuilt currency entity detects currency in many denominations and countries/regions, regardless of LUIS app culture. Because this entity is already trained, you do not need to add example utterances containing currency to the application intents. Currency entity is supported in [many cultures](luis-reference-prebuilt-entities.md).
-
-## Types of currency
-Currency is managed from the [Recognizers-text](https://github.com/Microsoft/Recognizers-Text/blob/master/Patterns/English/English-NumbersWithUnit.yaml#L26) GitHub repository.
-
-## Resolution for currency entity
-
-#### [V3 response](#tab/V3)
-
-The following JSON is with the `verbose` parameter set to `false`:
-
-```json
-"entities": {
- "money": [
- {
- "number": 10.99,
- "units": "Dollar"
- }
- ]
-}
-```
-#### [V3 verbose response](#tab/V3-verbose)
-The following JSON is with the `verbose` parameter set to `true`:
-
-```json
-"entities": {
- "money": [
- {
- "number": 10.99,
- "unit": "Dollar"
- }
- ],
- "$instance": {
- "money": [
- {
- "type": "builtin.currency",
- "text": "$10.99",
- "startIndex": 23,
- "length": 6,
- "modelTypeId": 2,
- "modelType": "Prebuilt Entity Extractor"
- }
- ]
- }
-}
-```
-
-#### [V2 response](#tab/V2)
-
-The following example shows the resolution of the **builtin.currency** entity.
-
-```json
-"entities": [
- {
- "entity": "$10.99",
- "type": "builtin.currency",
- "startIndex": 23,
- "endIndex": 28,
- "resolution": {
- "unit": "Dollar",
- "value": "10.99"
- }
- }
-]
-```
-* * *
-
-## Next steps
-
-Learn more about the [V3 prediction endpoint](luis-migration-api-v3.md).
-
-Learn about the [datetimeV2](luis-reference-prebuilt-datetimev2.md), [dimension](luis-reference-prebuilt-dimension.md), and [email](luis-reference-prebuilt-email.md) entities.
cognitive-services Luis Reference Prebuilt Datetimev2 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/LUIS/luis-reference-prebuilt-datetimev2.md
- Title: DatetimeV2 Prebuilt entities - LUIS-
-description: This article has datetimeV2 prebuilt entity information in Language Understanding (LUIS).
------- Previously updated : 04/13/2020---
-# DatetimeV2 prebuilt entity for a LUIS app
---
-The **datetimeV2** prebuilt entity extracts date and time values. These values resolve in a standardized format for client programs to consume. When an utterance has a date or time that isn't complete, LUIS includes _both past and future values_ in the endpoint response. Because this entity is already trained, you do not need to add example utterances containing datetimeV2 to the application intents.
-
-## Types of datetimeV2
-DatetimeV2 is managed from the [Recognizers-text](https://github.com/Microsoft/Recognizers-Text/blob/master/Patterns/English/English-DateTime.yaml) GitHub repository.
-
-## Example JSON
-
-The following utterance and its partial JSON response are shown below.
-
-`8am on may 2nd 2019`
-
-#### [V3 response](#tab/1-1)
-
-```json
-"entities": {
- "datetimeV2": [
- {
- "type": "datetime",
- "values": [
- {
- "timex": "2019-05-02T08",
- "resolution": [
- {
- "value": "2019-05-02 08:00:00"
- }
- ]
- }
- ]
- }
- ]
-}
-```
-
-#### [V3 verbose response](#tab/1-2)
-
-```json
-
-"entities": {
- "datetimeV2": [
- {
- "type": "datetime",
- "values": [
- {
- "timex": "2019-05-02T08",
- "resolution": [
- {
- "value": "2019-05-02 08:00:00"
- }
- ]
- }
- ]
- }
- ],
- "$instance": {
- "datetimeV2": [
- {
- "type": "builtin.datetimeV2.datetime",
- "text": "8am on may 2nd 2019",
- "startIndex": 0,
- "length": 19,
- "modelTypeId": 2,
- "modelType": "Prebuilt Entity Extractor",
- "recognitionSources": [
- "model"
- ]
- }
- ]
- }
-}
-```
-
-#### [V2 response](#tab/1-3)
-
-```json
-"entities": [
- {
- "entity": "8am on may 2nd 2019",
- "type": "builtin.datetimeV2.datetime",
- "startIndex": 0,
- "endIndex": 18,
- "resolution": {
- "values": [
- {
- "timex": "2019-05-02T08",
- "type": "datetime",
- "value": "2019-05-02 08:00:00"
- }
- ]
- }
- }
-]
- ```
-
-|Property name |Property type and description|
-|--|--|
-|Entity|**string** - Text extracted from the utterance with type of date, time, date range, or time range.|
-|type|**string** - One of the [subtypes of datetimeV2](#subtypes-of-datetimev2)|
-|startIndex|**int** - The index in the utterance at which the entity begins.|
-|endIndex|**int** - The index in the utterance at which the entity ends.|
-|resolution|Has a `values` array that has one, two, or four [values of resolution](#values-of-resolution).|
-|end|The end value of a time or date range, in the same format as `value`. Only used if `type` is `daterange`, `timerange`, or `datetimerange`.|
-
-* * *
-
-## Subtypes of datetimeV2
-
-The **datetimeV2** prebuilt entity has the following subtypes, and examples of each are provided in the sections that follow:
-* `date`
-* `time`
-* `daterange`
-* `timerange`
-* `datetimerange`
--
-## Values of resolution
-* The array has one element if the date or time in the utterance is fully specified and unambiguous.
-* The array has two elements if the datetimeV2 value is ambiguous. Ambiguity includes lack of specific year, time, or time range. See [Ambiguous dates](#ambiguous-dates) for examples. When the time is ambiguous for A.M. or P.M., both values are included.
-* The array has four elements if the utterance has two elements with ambiguity. This ambiguity includes elements that have:
- * A date or date range that is ambiguous as to year
- * A time or time range that is ambiguous as to A.M. or P.M. For example, 3:00 April 3rd.
-
-Each element of the `values` array may have the following fields:
-
-|Property name|Property description|
-|--|--|
-|timex|time, date, or date range expressed in TIMEX format that follows the [ISO 8601 standard](https://en.wikipedia.org/wiki/ISO_8601) and the TIMEX3 attributes for annotation using the TimeML language.|
-|mod|term used to describe how to use the value such as `before`, `after`.|
-|type|The subtype, which can be one of the following items: `datetime`, `date`, `time`, `daterange`, `timerange`, `datetimerange`, `duration`, `set`.|
-|value|**Optional.** A datetime object in the format yyyy-MM-dd (date), HH:mm:ss (time), or yyyy-MM-dd HH:mm:ss (datetime). If `type` is `duration`, the value is the number of seconds (duration).<br/> Only used if `type` is `datetime`, `date`, `time`, or `duration`.|
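-
-For instance, an utterance such as `3:00 April 3rd` is ambiguous in both the year and A.M./P.M., so its `values` array carries four elements. The following is only an illustrative sketch of a V2-style resolution (it assumes the request is made after April 3rd 2019 and is not captured from a live endpoint):
-
-```json
-"resolution": {
-  "values": [
-    { "timex": "XXXX-04-03T03", "type": "datetime", "value": "2019-04-03 03:00:00" },
-    { "timex": "XXXX-04-03T03", "type": "datetime", "value": "2020-04-03 03:00:00" },
-    { "timex": "XXXX-04-03T15", "type": "datetime", "value": "2019-04-03 15:00:00" },
-    { "timex": "XXXX-04-03T15", "type": "datetime", "value": "2020-04-03 15:00:00" }
-  ]
-}
-```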
-
-## Valid date values
-
-The **datetimeV2** supports dates between the following ranges:
-
-| Min | Max |
-|-|-|
-| 1st January 1900 | 31st December 2099 |
-
-## Ambiguous dates
-
-If the date can be in the past or future, LUIS provides both values. An example is an utterance that includes the month and date without the year.
-
-For example, given the following utterance:
-
-`May 2nd`
-
-* If today's date is May 3rd 2017, LUIS provides both "2017-05-02" and "2018-05-02" as values.
-* When today's date is May 1st 2017, LUIS provides both "2016-05-02" and "2017-05-02" as values.
-
-The following example shows the resolution of the entity "may 2nd". This resolution assumes that today's date is a date between May 2nd 2017 and May 1st 2018.
-Fields with `X` in the `timex` field are parts of the date that aren't explicitly specified in the utterance.
-
-## Date resolution example
--
-The following utterance and its partial JSON response are shown below.
-
-`May 2nd`
-
-#### [V3 response](#tab/2-1)
-
-```json
-"entities": {
- "datetimeV2": [
- {
- "type": "date",
- "values": [
- {
- "timex": "XXXX-05-02",
- "resolution": [
- {
- "value": "2019-05-02"
- },
- {
- "value": "2020-05-02"
- }
- ]
- }
- ]
- }
- ]
-}
-```
-
-#### [V3 verbose response](#tab/2-2)
-
-```json
-"entities": {
- "datetimeV2": [
- {
- "type": "date",
- "values": [
- {
- "timex": "XXXX-05-02",
- "resolution": [
- {
- "value": "2019-05-02"
- },
- {
- "value": "2020-05-02"
- }
- ]
- }
- ]
- }
- ],
- "$instance": {
- "datetimeV2": [
- {
- "type": "builtin.datetimeV2.date",
- "text": "May 2nd",
- "startIndex": 0,
- "length": 7,
- "modelTypeId": 2,
- "modelType": "Prebuilt Entity Extractor",
- "recognitionSources": [
- "model"
- ]
- }
- ]
- }
-}
-```
-
-#### [V2 response](#tab/2-3)
-
-```json
- "entities": [
- {
- "entity": "may 2nd",
- "type": "builtin.datetimeV2.date",
- "startIndex": 0,
- "endIndex": 6,
- "resolution": {
- "values": [
- {
- "timex": "XXXX-05-02",
- "type": "date",
- "value": "2019-05-02"
- },
- {
- "timex": "XXXX-05-02",
- "type": "date",
- "value": "2020-05-02"
- }
- ]
- }
- }
- ]
-```
-* * *
-
-## Date range resolution examples for numeric date
-
-The `datetimeV2` entity extracts date and time ranges. The `start` and `end` fields specify the beginning and end of the range. For the utterance `May 2nd to May 5th`, LUIS provides **daterange** values for both the current year and the next year. In the `timex` field, the `XXXX` values indicate the ambiguity of the year. `P3D` indicates the time period is three days long.
-
-The following utterance and its partial JSON response are shown below.
-
-`May 2nd to May 5th`
-
-#### [V3 response](#tab/3-1)
-
-```json
-
-"entities": {
- "datetimeV2": [
- {
- "type": "daterange",
- "values": [
- {
- "timex": "(XXXX-05-02,XXXX-05-05,P3D)",
- "resolution": [
- {
- "start": "2019-05-02",
- "end": "2019-05-05"
- },
- {
- "start": "2020-05-02",
- "end": "2020-05-05"
- }
- ]
- }
- ]
- }
- ]
-}
-```
--
-#### [V3 verbose response](#tab/3-2)
-
-```json
-
-"entities": {
- "datetimeV2": [
- {
- "type": "daterange",
- "values": [
- {
- "timex": "(XXXX-05-02,XXXX-05-05,P3D)",
- "resolution": [
- {
- "start": "2019-05-02",
- "end": "2019-05-05"
- },
- {
- "start": "2020-05-02",
- "end": "2020-05-05"
- }
- ]
- }
- ]
- }
- ],
- "$instance": {
- "datetimeV2": [
- {
- "type": "builtin.datetimeV2.daterange",
- "text": "May 2nd to May 5th",
- "startIndex": 0,
- "length": 18,
- "modelTypeId": 2,
- "modelType": "Prebuilt Entity Extractor",
- "recognitionSources": [
- "model"
- ]
- }
- ]
- }
-}
-```
-
-#### [V2 response](#tab/3-3)
-
-```json
-"entities": [
- {
- "entity": "may 2nd to may 5th",
- "type": "builtin.datetimeV2.daterange",
- "startIndex": 0,
- "endIndex": 17,
- "resolution": {
- "values": [
- {
- "timex": "(XXXX-05-02,XXXX-05-05,P3D)",
- "type": "daterange",
- "start": "2019-05-02",
- "end": "2019-05-05"
- }
- ]
- }
- }
- ]
-```
-* * *
-
-## Date range resolution examples for day of week
-
-The following example shows how LUIS uses **datetimeV2** to resolve the utterance `Tuesday to Thursday`. In this example, the current date is June 19th. LUIS includes **daterange** values for both of the date ranges that precede and follow the current date.
-
-The following utterance and its partial JSON response are shown below.
-
-`Tuesday to Thursday`
-
-#### [V3 response](#tab/4-1)
-
-```json
-"entities": {
- "datetimeV2": [
- {
- "type": "daterange",
- "values": [
- {
- "timex": "(XXXX-WXX-2,XXXX-WXX-4,P2D)",
- "resolution": [
- {
- "start": "2019-10-08",
- "end": "2019-10-10"
- },
- {
- "start": "2019-10-15",
- "end": "2019-10-17"
- }
- ]
- }
- ]
- }
- ]
-}
-```
-
-#### [V3 verbose response](#tab/4-2)
-
-```json
-"entities": {
- "datetimeV2": [
- {
- "type": "daterange",
- "values": [
- {
- "timex": "(XXXX-WXX-2,XXXX-WXX-4,P2D)",
- "resolution": [
- {
- "start": "2019-10-08",
- "end": "2019-10-10"
- },
- {
- "start": "2019-10-15",
- "end": "2019-10-17"
- }
- ]
- }
- ]
- }
- ],
- "$instance": {
- "datetimeV2": [
- {
- "type": "builtin.datetimeV2.daterange",
- "text": "Tuesday to Thursday",
- "startIndex": 0,
- "length": 19,
- "modelTypeId": 2,
- "modelType": "Prebuilt Entity Extractor",
- "recognitionSources": [
- "model"
- ]
- }
- ]
- }
-}
-```
-
-#### [V2 response](#tab/4-3)
-
-```json
- "entities": [
- {
- "entity": "tuesday to thursday",
- "type": "builtin.datetimeV2.daterange",
- "startIndex": 0,
- "endIndex": 19,
- "resolution": {
- "values": [
- {
- "timex": "(XXXX-WXX-2,XXXX-WXX-4,P2D)",
- "type": "daterange",
- "start": "2019-04-30",
- "end": "2019-05-02"
- }
- ]
- }
- }
- ]
-```
-* * *
-
-## Ambiguous time
-The `values` array has two elements if the time or time range is ambiguous. When the time is ambiguous, the values include both the A.M. and P.M. times.
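-
-As an illustrative sketch only (V3 format, not captured from a live endpoint), an ambiguous utterance such as `8 o'clock` might resolve with both the morning and evening interpretations:
-
-```json
-"entities": {
-    "datetimeV2": [
-        {
-            "type": "time",
-            "values": [
-                {
-                    "timex": "T08",
-                    "resolution": [
-                        {
-                            "value": "08:00:00"
-                        },
-                        {
-                            "value": "20:00:00"
-                        }
-                    ]
-                }
-            ]
-        }
-    ]
-}
-```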
-
-## Time range resolution example
-
-The datetimeV2 JSON response changed in API V3. The following example shows how LUIS uses **datetimeV2** to resolve an utterance that has a time range.
-
-Changes from API V2:
-* The `datetimeV2.timex.type` property is no longer returned because it is returned at the parent level, `datetimeV2.type`.
-* The `datetimeV2.value` property has been renamed to `datetimeV2.timex`.
-
-The following utterance and its partial JSON response are shown below.
-
-`from 6pm to 7pm`
-
-#### [V3 response](#tab/5-1)
-
-The following JSON is with the `verbose` parameter set to `false`:
-
-```JSON
-
-"entities": {
- "datetimeV2": [
- {
- "type": "timerange",
- "values": [
- {
- "timex": "(T18,T19,PT1H)",
- "resolution": [
- {
- "start": "18:00:00",
- "end": "19:00:00"
- }
- ]
- }
- ]
- }
- ]
-}
-```
-#### [V3 verbose response](#tab/5-2)
-
-The following JSON is with the `verbose` parameter set to `true`:
-
-```json
-
-"entities": {
- "datetimeV2": [
- {
- "type": "timerange",
- "values": [
- {
- "timex": "(T18,T19,PT1H)",
- "resolution": [
- {
- "start": "18:00:00",
- "end": "19:00:00"
- }
- ]
- }
- ]
- }
- ],
- "$instance": {
- "datetimeV2": [
- {
- "type": "builtin.datetimeV2.timerange",
- "text": "from 6pm to 7pm",
- "startIndex": 0,
- "length": 15,
- "modelTypeId": 2,
- "modelType": "Prebuilt Entity Extractor",
- "recognitionSources": [
- "model"
- ]
- }
- ]
- }
-}
-```
-#### [V2 response](#tab/5-3)
-
-```json
- "entities": [
- {
- "entity": "6pm to 7pm",
- "type": "builtin.datetimeV2.timerange",
- "startIndex": 0,
- "endIndex": 9,
- "resolution": {
- "values": [
- {
- "timex": "(T18,T19,PT1H)",
- "type": "timerange",
- "start": "18:00:00",
- "end": "19:00:00"
- }
- ]
- }
- }
- ]
-```
-
-* * *
-
-## Time resolution example
-
-The following utterance and its partial JSON response are shown below.
-
-`8am`
-
-#### [V3 response](#tab/6-1)
-
-```json
-"entities": {
- "datetimeV2": [
- {
- "type": "time",
- "values": [
- {
- "timex": "T08",
- "resolution": [
- {
- "value": "08:00:00"
- }
- ]
- }
- ]
- }
- ]
-}
-```
-#### [V3 verbose response](#tab/6-2)
-
-```json
-"entities": {
- "datetimeV2": [
- {
- "type": "time",
- "values": [
- {
- "timex": "T08",
- "resolution": [
- {
- "value": "08:00:00"
- }
- ]
- }
- ]
- }
- ],
- "$instance": {
- "datetimeV2": [
- {
- "type": "builtin.datetimeV2.time",
- "text": "8am",
- "startIndex": 0,
- "length": 3,
- "modelTypeId": 2,
- "modelType": "Prebuilt Entity Extractor",
- "recognitionSources": [
- "model"
- ]
- }
- ]
- }
-}
-```
-#### [V2 response](#tab/6-3)
-
-```json
-"entities": [
- {
- "entity": "8am",
- "type": "builtin.datetimeV2.time",
- "startIndex": 0,
- "endIndex": 2,
- "resolution": {
- "values": [
- {
- "timex": "T08",
- "type": "time",
- "value": "08:00:00"
- }
- ]
- }
- }
-]
-```
-
-* * *
-
-## Deprecated prebuilt datetime
-
-The `datetime` prebuilt entity is deprecated and replaced by **datetimeV2**.
-
-To replace `datetime` with `datetimeV2` in your LUIS app, complete the following steps:
-
-1. Open the **Entities** pane of the LUIS web interface.
-2. Delete the **datetime** prebuilt entity.
-3. Click **Add prebuilt entity**
-4. Select **datetimeV2** and click **Save**.
-
-## Next steps
-
-Learn more about the [V3 prediction endpoint](luis-migration-api-v3.md).
-
-Learn about the [dimension](luis-reference-prebuilt-dimension.md), [email](luis-reference-prebuilt-email.md) entities, and [number](luis-reference-prebuilt-number.md).
-
cognitive-services Luis Reference Prebuilt Deprecated https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/LUIS/luis-reference-prebuilt-deprecated.md
- Title: Deprecated Prebuilt entities - LUIS-
-description: This article contains deprecated prebuilt entity information in Language Understanding (LUIS).
------- Previously updated : 07/29/2019---
-# Deprecated prebuilt entities in a LUIS app
--
-The following prebuilt entities are deprecated and can't be added to new LUIS apps.
-
-* **Datetime**: Existing LUIS apps that use **datetime** should be migrated to **datetimeV2**, although the datetime entity continues to function in pre-existing apps that use it.
-* **Geography**: Existing LUIS apps that use **geography** are supported until December 2018.
-* **Encyclopedia**: Existing LUIS apps that use **encyclopedia** are supported until December 2018.
-
-## Geography culture
-**Geography** is available only in the `en-us` locale.
-
-#### 3 Geography subtypes
-
-Prebuilt entity | Example utterance | JSON
-|--|--|--|
-`builtin.geography.city` | `seattle` |`{ "type": "builtin.geography.city", "entity": "seattle" }`|
-`builtin.geography.city` | `paris` |`{ "type": "builtin.geography.city", "entity": "paris" }`|
-`builtin.geography.country`| `australia` |`{ "type": "builtin.geography.country", "entity": "australia" }`|
-`builtin.geography.country`| `japan` |`{ "type": "builtin.geography.country", "entity": "japan" }`|
-`builtin.geography.pointOfInterest` | `amazon river` |`{ "type": "builtin.geography.pointOfInterest", "entity": "amazon river" }`|
-`builtin.geography.pointOfInterest` | `sahara desert`|`{ "type": "builtin.geography.pointOfInterest", "entity": "sahara desert" }`|
-
-## Encyclopedia culture
-**Encyclopedia** is available only in the `en-US` locale.
-
-#### Encyclopedia subtypes
-The encyclopedia built-in entity includes over 100 subtypes, listed in the following table. In addition, encyclopedia entities often map to multiple types. For example, the query `ronald reagan` yields:
-
-```json
-[
-  {
-    "entity": "ronald reagan",
-    "type": "builtin.encyclopedia.people.person"
-  },
-  {
-    "entity": "ronald reagan",
-    "type": "builtin.encyclopedia.film.actor"
-  },
-  {
-    "entity": "ronald reagan",
-    "type": "builtin.encyclopedia.government.us_president"
-  },
-  {
-    "entity": "ronald reagan",
-    "type": "builtin.encyclopedia.book.author"
-  }
-]
-```
--
-Prebuilt entity | Prebuilt entity (sub-types) | Example utterance
-|--|--|--|
-`builtin.encyclopedia.people.person`| `builtin.encyclopedia.people.person`| `bryan adams` |
-`builtin.encyclopedia.people.person`| `builtin.encyclopedia.film.producer`| `walt disney` |
-`builtin.encyclopedia.people.person`| `builtin.encyclopedia.film.cinematographer`| `adam greenberg`|
-`builtin.encyclopedia.people.person`| `builtin.encyclopedia.royalty.monarch`| `elizabeth ii`|
-`builtin.encyclopedia.people.person`| `builtin.encyclopedia.film.director`| `steven spielberg`|
-`builtin.encyclopedia.people.person`| `builtin.encyclopedia.film.writer`| `alfred hitchcock`|
-`builtin.encyclopedia.people.person`| `builtin.encyclopedia.film.actor`| `robert de niro`|
-`builtin.encyclopedia.people.person`| `builtin.encyclopedia.martial_arts.martial_artist`| `bruce lee`|
-`builtin.encyclopedia.people.person`| `builtin.encyclopedia.architecture.architect`| `james gallier`|
-`builtin.encyclopedia.people.person`| `builtin.encyclopedia.geography.mountaineer`| `jean couzy`|
-`builtin.encyclopedia.people.person`| `builtin.encyclopedia.celebrities.celebrity`| `angelina jolie`|
-`builtin.encyclopedia.people.person`| `builtin.encyclopedia.music.musician`| `bob dylan`|
-`builtin.encyclopedia.people.person`| `builtin.encyclopedia.soccer.player`| `diego maradona`|
-`builtin.encyclopedia.people.person`| `builtin.encyclopedia.baseball.player`| `babe ruth`|
-`builtin.encyclopedia.people.person`| `builtin.encyclopedia.basketball.player`| `heiko schaffartzik`|
-`builtin.encyclopedia.people.person`| `builtin.encyclopedia.olympics.athlete`| `andre agassi`|
-`builtin.encyclopedia.people.person`| `builtin.encyclopedia.basketball.coach`| `bob huggins`|
-`builtin.encyclopedia.people.person`| `builtin.encyclopedia.american_football.coach`| `james franklin`|
-`builtin.encyclopedia.people.person`| `builtin.encyclopedia.cricket.coach`| `andy flower`|
-`builtin.encyclopedia.people.person`| `builtin.encyclopedia.ice_hockey.coach`| `david quinn`|
-`builtin.encyclopedia.people.person`| `builtin.encyclopedia.ice_hockey.player`| `vincent lecavalier`|
-`builtin.encyclopedia.people.person`| `builtin.encyclopedia.government.politician`| `harold nicolson`|
-`builtin.encyclopedia.people.person`| `builtin.encyclopedia.government.us_president`| `barack obama`|
-`builtin.encyclopedia.people.person`| `builtin.encyclopedia.government.us_vice_president`| `dick cheney`|
-`builtin.encyclopedia.organization.organization`| `builtin.encyclopedia.organization.organization`| `united nations`|
-`builtin.encyclopedia.organization.organization`| `builtin.encyclopedia.sports.league`| `american league`|
-`builtin.encyclopedia.organization.organization`| `builtin.encyclopedia.ice_hockey.conference`| `western hockey league`|
-`builtin.encyclopedia.organization.organization`| `builtin.encyclopedia.baseball.division`| `american league east`|
-`builtin.encyclopedia.organization.organization`| `builtin.encyclopedia.baseball.league`| `major league baseball`|
-`builtin.encyclopedia.organization.organization`| `builtin.encyclopedia.basketball.conference`| `national basketball league`|
-`builtin.encyclopedia.organization.organization`| `builtin.encyclopedia.basketball.division`| `pacific division`|
-`builtin.encyclopedia.organization.organization`| `builtin.encyclopedia.soccer.league`| `premier league`|
-`builtin.encyclopedia.organization.organization`| `builtin.encyclopedia.american_football.division`| `afc north`|
-`builtin.encyclopedia.organization.organization`| `builtin.encyclopedia.broadcast.broadcast`| `nebraska educational telecommunications`|
-`builtin.encyclopedia.organization.organization`| `builtin.encyclopedia.broadcast.tv_station`| `abc`|
-`builtin.encyclopedia.organization.organization`| `builtin.encyclopedia.broadcast.tv_channel`| `cnbc world`|
-`builtin.encyclopedia.organization.organization`| `builtin.encyclopedia.broadcast.radio_station`| `bbc radio 1`|
-`builtin.encyclopedia.organization.organization`| `builtin.encyclopedia.business.operation`| `bank of china`|
-`builtin.encyclopedia.organization.organization`| `builtin.encyclopedia.music.record_label`| `pixar`|
-`builtin.encyclopedia.organization.organization`| `builtin.encyclopedia.aviation.airline`| `air france`|
-`builtin.encyclopedia.organization.organization`| `builtin.encyclopedia.automotive.company`| `general motors`|
-`builtin.encyclopedia.organization.organization`| `builtin.encyclopedia.music.musical_instrument_company`| `gibson guitar corporation` |
-`builtin.encyclopedia.organization.organization`| `builtin.encyclopedia.tv.network`| `cartoon network`|
-`builtin.encyclopedia.organization.organization`| `builtin.encyclopedia.education.educational_institution`| `cornwall hill college` |
-`builtin.encyclopedia.organization.organization`| `builtin.encyclopedia.education.school`| `boston arts academy`|
-`builtin.encyclopedia.organization.organization`| `builtin.encyclopedia.education.university`| `johns hopkins university`|
-`builtin.encyclopedia.organization.organization`| `builtin.encyclopedia.sports.team`| `united states national handball team`|
-`builtin.encyclopedia.organization.organization`| `builtin.encyclopedia.basketball.team`| `chicago bulls`|
-`builtin.encyclopedia.organization.organization`| `builtin.encyclopedia.sports.professional_sports_team`| `boston celtics`|
-`builtin.encyclopedia.organization.organization`| `builtin.encyclopedia.cricket.team`| `mumbai indians`|
-`builtin.encyclopedia.organization.organization`| `builtin.encyclopedia.baseball.team`| `houston astros`|
-`builtin.encyclopedia.organization.organization`| `builtin.encyclopedia.american_football.team`| `green bay packers`|
-`builtin.encyclopedia.organization.organization`| `builtin.encyclopedia.ice_hockey.team`| `hamilton bulldogs`|
-`builtin.encyclopedia.organization.organization`| `builtin.encyclopedia.soccer.team`| `fc bayern munich`|
-`builtin.encyclopedia.organization.organization`| `builtin.encyclopedia.government.political_party`| `pertubuhan kebangsaan melayu singapura`|
-`builtin.encyclopedia.time.event`| `builtin.encyclopedia.time.event`| `1740 batavia massacre`|
-`builtin.encyclopedia.time.event`| `builtin.encyclopedia.sports.championship_event`| `super bowl xxxix`|
-`builtin.encyclopedia.time.event`| `builtin.encyclopedia.award.competition`| `eurovision song contest 2003`|
-`builtin.encyclopedia.tv.series_episode`| `builtin.encyclopedia.tv.series_episode`| `the magnificent seven`|
-`builtin.encyclopedia.tv.series_episode`| `builtin.encyclopedia.tv.multipart_tv_episode`| `the deadly assassin`|
-`builtin.encyclopedia.commerce.consumer_product`| `builtin.encyclopedia.commerce.consumer_product`| `nokia lumia 620`|
-`builtin.encyclopedia.commerce.consumer_product`| `builtin.encyclopedia.music.album`| `dance pool`|
-`builtin.encyclopedia.commerce.consumer_product`| `builtin.encyclopedia.automotive.model`| `pontiac fiero`|
-`builtin.encyclopedia.commerce.consumer_product`| `builtin.encyclopedia.computer.computer`| `toshiba satellite`|
-`builtin.encyclopedia.commerce.consumer_product`| `builtin.encyclopedia.computer.web_browser`| `internet explorer`|
-`builtin.encyclopedia.commerce.brand`| `builtin.encyclopedia.commerce.brand`| `diet coke`|
-`builtin.encyclopedia.commerce.brand`| `builtin.encyclopedia.automotive.make`| `chrysler`|
-`builtin.encyclopedia.music.artist`| `builtin.encyclopedia.music.artist`| `michael jackson`|
-`builtin.encyclopedia.music.artist`| `builtin.encyclopedia.music.group`| `the yardbirds`|
-`builtin.encyclopedia.music.music_video`| `builtin.encyclopedia.music.music_video`| `the beatles anthology`|
-`builtin.encyclopedia.theater.play`| `builtin.encyclopedia.theater.play`| `camelot`|
-`builtin.encyclopedia.sports.fight_song`| `builtin.encyclopedia.sports.fight_song`| `the cougar song`|
-`builtin.encyclopedia.film.series`| `builtin.encyclopedia.film.series`| `the twilight saga`|
-`builtin.encyclopedia.tv.program`| `builtin.encyclopedia.tv.program`| `late night with david letterman`|
-`builtin.encyclopedia.radio.radio_program`| `builtin.encyclopedia.radio.radio_program`| `grand ole opry`|
-`builtin.encyclopedia.film.film`| `builtin.encyclopedia.film.film`| `alice in wonderland`|
-`builtin.encyclopedia.cricket.tournament`| `builtin.encyclopedia.cricket.tournament`| `cricket world cup`|
-`builtin.encyclopedia.government.government`| `builtin.encyclopedia.government.government`| `european commission`|
-`builtin.encyclopedia.sports.team_owner`| `builtin.encyclopedia.sports.team_owner`| `bob castellini`|
-`builtin.encyclopedia.music.genre`| `builtin.encyclopedia.music.genre`| `eastern europe`|
-`builtin.encyclopedia.ice_hockey.division`| `builtin.encyclopedia.ice_hockey.division`| `hockeyallsvenskan`|
-`builtin.encyclopedia.architecture.style`| `builtin.encyclopedia.architecture.style`| `spanish colonial revival architecture`|
-`builtin.encyclopedia.broadcast.producer`| `builtin.encyclopedia.broadcast.producer`| `columbia tristar television`|
-`builtin.encyclopedia.book.author`| `builtin.encyclopedia.book.author`| `adam maxwell`|
-`builtin.encyclopedia.religion.founding_figur`| `builtin.encyclopedia.religion.founding_figur`| `gautama buddha`|
-`builtin.encyclopedia.martial_arts.martial_art`| `builtin.encyclopedia.martial_arts.martial_art`| `american kenpo`|
-`builtin.encyclopedia.sports.school`| `builtin.encyclopedia.sports.school`| `yale university`|
-`builtin.encyclopedia.business.product_line`| `builtin.encyclopedia.business.product_line`| `canon powershot`|
-`builtin.encyclopedia.internet.website`| `builtin.encyclopedia.internet.website`| `bing`|
-`builtin.encyclopedia.time.holiday`| `builtin.encyclopedia.time.holiday`| `easter`|
-`builtin.encyclopedia.food.candy_bar`| `builtin.encyclopedia.food.candy_bar`| `cadbury dairy milk`|
-`builtin.encyclopedia.finance.stock_exchange`| `builtin.encyclopedia.finance.stock_exchange`| `tokyo stock exchange`|
-`builtin.encyclopedia.film.festival`| `builtin.encyclopedia.film.festival`| `berlin international film festival`|
-
-## Next steps
-
-Learn about the [dimension](luis-reference-prebuilt-dimension.md), [email](luis-reference-prebuilt-email.md) entities, and [number](luis-reference-prebuilt-number.md).
-
cognitive-services Luis Reference Prebuilt Dimension https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/LUIS/luis-reference-prebuilt-dimension.md
- Title: Dimension Prebuilt entities - LUIS-
-description: This article contains dimension prebuilt entity information in Language Understanding (LUIS).
------- Previously updated : 10/14/2019---
-# Dimension prebuilt entity for a LUIS app
--
-The prebuilt dimension entity detects various types of dimensions, regardless of the LUIS app culture. Because this entity is already trained, you do not need to add example utterances containing dimensions to the application intents. Dimension entity is supported in [many cultures](luis-reference-prebuilt-entities.md).
-
-## Types of dimension
-
-Dimension is managed from the [Recognizers-text](https://github.com/Microsoft/Recognizers-Text/blob/master/Patterns/English/English-NumbersWithUnit.yaml) GitHub repository.
-
-## Resolution for dimension entity
-
-The following entity objects are returned for the query:
-
-`10 1/2 miles of cable`
-
-#### [V3 response](#tab/V3)
-
-The following JSON is with the `verbose` parameter set to `false`:
-
-```json
-"entities": {
- "dimension": [
- {
- "number": 10.5,
- "units": "Mile"
- }
- ]
-}
-```
-#### [V3 verbose response](#tab/V3-verbose)
-The following JSON is with the `verbose` parameter set to `true`:
-
-```json
-"entities": {
- "dimension": [
- {
- "number": 10.5,
- "units": "Mile"
- }
- ],
- "$instance": {
- "dimension": [
- {
- "type": "builtin.dimension",
- "text": "10 1/2 miles",
- "startIndex": 0,
- "length": 12,
- "modelTypeId": 2,
- "modelType": "Prebuilt Entity Extractor",
- "recognitionSources": [
- "model"
- ]
- }
- ]
- }
-}
-```
-
-#### [V2 response](#tab/V2)
-
-The following example shows the resolution of the **builtin.dimension** entity.
-
-```json
-{
- "entity": "10 1/2 miles",
- "type": "builtin.dimension",
- "startIndex": 0,
- "endIndex": 11,
- "resolution": {
- "unit": "Mile",
- "value": "10.5"
- }
-}
-```
-* * *
-
-## Next steps
-
-Learn more about the [V3 prediction endpoint](luis-migration-api-v3.md).
-
-Learn about the [email](luis-reference-prebuilt-email.md), [number](luis-reference-prebuilt-number.md), and [ordinal](luis-reference-prebuilt-ordinal.md) entities.
cognitive-services Luis Reference Prebuilt Domains https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/LUIS/luis-reference-prebuilt-domains.md
- Title: Prebuilt domain reference - LUIS-
-description: Reference for the prebuilt domains, which are prebuilt collections of intents and entities from Language Understanding Intelligent Services (LUIS).
------- Previously updated : 04/18/2022-
-#source: https://raw.githubusercontent.com/Microsoft/luis-prebuilt-domains/master/README.md
-#acrolinx bug for exception: https://mseng.visualstudio.com/TechnicalContent/_workitems/edit/1518317
--
-# Prebuilt domain reference for your LUIS app
---
-This reference provides information about the [prebuilt domains](./howto-add-prebuilt-models.md), which are prebuilt collections of intents and entities that LUIS offers.
-
-[Custom domains](luis-how-to-start-new-app.md), by contrast, start with no intents and models. You can add any prebuilt domain intents and entities to a custom model.
-
-## Prebuilt domains per language
-
-The table below summarizes the currently supported domains. Support for English is usually more complete than for other languages.
-
-| Entity Type | EN-US | ZH-CN | DE | FR | ES | IT | PT-BR | KO | NL | TR |
-|:--|:--:|:--:|:--:|:--:|:--:|:--:|:--:|:--:|:--:|:--:|
-| Calendar | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ |
-| Communication | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ |
-| Email | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ |
-| HomeAutomation | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ |
-| Notes | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ |
-| Places | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ |
-| RestaurantReservation | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ |
-| ToDo | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ |
-| Utilities | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ |
-| Weather | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ |
-| Web | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ |
-
-Prebuilt domains are **not supported** in:
-
-* French Canadian
-* Hindi
-* Spanish Mexican
-* Japanese
-
-## Next steps
-
-Learn about the [simple entity](reference-entity-simple.md).
cognitive-services Luis Reference Prebuilt Email https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/LUIS/luis-reference-prebuilt-email.md
- Title: LUIS Prebuilt entities email reference-
-description: This article contains email prebuilt entity information in Language Understanding (LUIS).
------- Previously updated : 09/27/2019---
-# Email prebuilt entity for a LUIS app
--
-Email extraction includes the entire email address from an utterance. Because this entity is already trained, you do not need to add example utterances containing email to the application intents. Email entity is supported in `en-us` culture only.
-
-## Resolution for prebuilt email
-
-The following entity objects are returned for the query:
-
-`please send the information to patti@contoso.com`
-
-#### [V3 response](#tab/V3)
-
-The following JSON is with the `verbose` parameter set to `false`:
-
-```json
-"entities": {
- "email": [
- "patti@contoso.com"
- ]
-}
-```
-#### [V3 verbose response](#tab/V3-verbose)
-
-The following JSON is with the `verbose` parameter set to `true`:
-
-```json
-"entities": {
- "email": [
- "patti@contoso.com"
- ],
- "$instance": {
- "email": [
- {
- "type": "builtin.email",
- "text": "patti@contoso.com",
- "startIndex": 31,
- "length": 17,
- "modelTypeId": 2,
- "modelType": "Prebuilt Entity Extractor",
- "recognitionSources": [
- "model"
- ]
- }
- ]
- }
-}
-```
-#### [V2 response](#tab/V2)
-
-The following example shows the resolution of the **builtin.email** entity.
-
-```json
-"entities": [
- {
- "entity": "patti@contoso.com",
- "type": "builtin.email",
- "startIndex": 31,
- "endIndex": 55,
- "resolution": {
- "value": "patti@contoso.com"
- }
- }
-]
-```
-* * *
-
-## Next steps
-
-Learn more about the [V3 prediction endpoint](luis-migration-api-v3.md).
-
-Learn about the [number](luis-reference-prebuilt-number.md), [ordinal](luis-reference-prebuilt-ordinal.md), and [percentage](luis-reference-prebuilt-percentage.md).
cognitive-services Luis Reference Prebuilt Entities https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/LUIS/luis-reference-prebuilt-entities.md
- Title: All Prebuilt entities - LUIS-
-description: This article contains lists of the prebuilt entities that are included in Language Understanding (LUIS).
------- Previously updated : 05/05/2021---
-# Entities per culture in your LUIS model
---
-Language Understanding (LUIS) provides prebuilt entities.
-
-## Entity resolution
-When a prebuilt entity is included in your application, LUIS includes the corresponding entity resolution in the endpoint response. All example utterances are also labeled with the entity.
-
-The behavior of prebuilt entities can't be modified but you can improve resolution by [adding the prebuilt entity as a feature to a machine-learning entity or sub-entity](concepts/entities.md#prebuilt-entities).
-
-## Availability
-Unless otherwise noted, prebuilt entities are available in all LUIS application locales (cultures). The following table shows the prebuilt entities that are supported for each culture.
-
-|Culture|Subcultures|Notes|
-|--|--|--|
-|Chinese|[zh-CN](#chinese-entity-support)||
-|Dutch|[nl-NL](#dutch-entity-support)||
-|English|[en-US (American)](#english-american-entity-support)||
-|English|[en-GB (British)](#english-british-entity-support)||
-|French|[fr-CA (Canada)](#french-canadian-entity-support), [fr-FR (France)](#french-france-entity-support)||
-|German|[de-DE](#german-entity-support)||
-|Italian|[it-IT](#italian-entity-support)||
-|Japanese|[ja-JP](#japanese-entity-support)||
-|Korean|[ko-KR](#korean-entity-support)||
-|Portuguese|[pt-BR (Brazil)](#portuguese-brazil-entity-support)||
-|Spanish|[es-ES (Spain)](#spanish-spain-entity-support), [es-MX (Mexico)](#spanish-mexico-entity-support)||
-|Turkish|[tr-TR](#turkish-entity-support)||
-
-## Prediction endpoint runtime
-
-The availability of a prebuilt entity in a specific language is determined by the prediction endpoint runtime version.
-
-## Chinese entity support
-
-The following entities are supported:
-
-| Prebuilt entity | zh-CN |
-| -- | :--: |
-[Age](luis-reference-prebuilt-age.md):<br>year<br>month<br>week<br>day | V2, V3 |
-[Currency (money)](luis-reference-prebuilt-currency.md):<br>dollar<br>fractional unit (ex: penny) | V2, V3 |
-[DatetimeV2](luis-reference-prebuilt-datetimev2.md):<br>date<br>daterange<br>time<br>timerange | V2, V3 |
-[Dimension](luis-reference-prebuilt-dimension.md):<br>volume<br>area<br>weight<br>information (ex: bit/byte)<br>length (ex: meter)<br>speed (ex: mile per hour) | V2, V3 |
-[Email](luis-reference-prebuilt-email.md) | V2, V3 |
-[Number](luis-reference-prebuilt-number.md) | V2, V3 |
-[Ordinal](luis-reference-prebuilt-ordinal.md) | V2, V3 |
-[Percentage](luis-reference-prebuilt-percentage.md) | V2, V3 |
-[PersonName](luis-reference-prebuilt-person.md) | V2, V3 |
-[Phonenumber](luis-reference-prebuilt-phonenumber.md) | V2, V3 |
-[Temperature](luis-reference-prebuilt-temperature.md):<br>fahrenheit<br>kelvin<br>rankine<br>delisle<br>celsius | V2, V3 |
-[URL](luis-reference-prebuilt-url.md) | V2, V3 |
-<![GeographyV2](luis-reference-prebuilt-geographyV2.md) | - |-->
-<![KeyPhrase](luis-reference-prebuilt-keyphrase.md) | - |-->
-<![OrdinalV2](luis-reference-prebuilt-ordinal-v2.md) | - |-->
-
-## Dutch entity support
-
-The following entities are supported:
-
-| Prebuilt entity | nl-NL |
-| -- | :--: |
-[Age](luis-reference-prebuilt-age.md):<br>year<br>month<br>week<br>day | V2, V3 |
-[Currency (money)](luis-reference-prebuilt-currency.md):<br>dollar<br>fractional unit (ex: penny) | V2, V3 |
-[Dimension](luis-reference-prebuilt-dimension.md):<br>volume<br>area<br>weight<br>information (ex: bit/byte)<br>length (ex: meter)<br>speed (ex: mile per hour) | V2, V3 |
-[Email](luis-reference-prebuilt-email.md) | V2, V3 |
-[KeyPhrase](luis-reference-prebuilt-keyphrase.md) | V2, V3 |
-[Number](luis-reference-prebuilt-number.md) | V2, V3 |
-[Ordinal](luis-reference-prebuilt-ordinal.md) | V2, V3 |
-[Percentage](luis-reference-prebuilt-percentage.md) | V2, V3 |
-[Phonenumber](luis-reference-prebuilt-phonenumber.md) | V2, V3 |
-[Temperature](luis-reference-prebuilt-temperature.md):<br>fahrenheit<br>kelvin<br>rankine<br>delisle<br>celsius | V2, V3 |
-[URL](luis-reference-prebuilt-url.md) | V2, V3 |
-<![Datetime](luis-reference-prebuilt-deprecated.md) | - |-->
-<![GeographyV2](luis-reference-prebuilt-geographyV2.md) | - |-->
-<![OrdinalV2](luis-reference-prebuilt-ordinal-v2.md) | - |-->
-<![PersonName](luis-reference-prebuilt-person.md) | - |-->
-
-## English (American) entity support
-
-The following entities are supported:
-
-| Prebuilt entity | en-US |
-| -- | :--: |
-[Age](luis-reference-prebuilt-age.md):<br>year<br>month<br>week<br>day | V2, V3 |
-[Currency (money)](luis-reference-prebuilt-currency.md):<br>dollar<br>fractional unit (ex: penny) | V2, V3 |
-[DatetimeV2](luis-reference-prebuilt-datetimev2.md):<br>date<br>daterange<br>time<br>timerange | V2, V3 |
-[Dimension](luis-reference-prebuilt-dimension.md):<br>volume<br>area<br>weight<br>information (ex: bit/byte)<br>length (ex: meter)<br>speed (ex: mile per hour) | V2, V3 |
-[Email](luis-reference-prebuilt-email.md) | V2, V3 |
-[GeographyV2](luis-reference-prebuilt-geographyV2.md) | V2, V3 |
-[KeyPhrase](luis-reference-prebuilt-keyphrase.md) | V2, V3 |
-[Number](luis-reference-prebuilt-number.md) | V2, V3 |
-[Ordinal](luis-reference-prebuilt-ordinal.md) | V2, V3 |
-[OrdinalV2](luis-reference-prebuilt-ordinal-v2.md) | V2, V3 |
-[Percentage](luis-reference-prebuilt-percentage.md) | V2, V3 |
-[PersonName](luis-reference-prebuilt-person.md) | V2, V3 |
-[Phonenumber](luis-reference-prebuilt-phonenumber.md) | V2, V3 |
-[Temperature](luis-reference-prebuilt-temperature.md):<br>fahrenheit<br>kelvin<br>rankine<br>delisle<br>celsius | V2, V3 |
-[URL](luis-reference-prebuilt-url.md) | V2, V3 |
-
-## English (British) entity support
-
-The following entities are supported:
-
-| Prebuilt entity | en-GB |
-| -- | :--: |
-[Age](luis-reference-prebuilt-age.md):<br>year<br>month<br>week<br>day | V2, V3 |
-[Currency (money)](luis-reference-prebuilt-currency.md):<br>dollar<br>fractional unit (ex: penny) | V2, V3 |
-[DatetimeV2](luis-reference-prebuilt-datetimev2.md):<br>date<br>daterange<br>time<br>timerange | V2, V3 |
-[Dimension](luis-reference-prebuilt-dimension.md):<br>volume<br>area<br>weight<br>information (ex: bit/byte)<br>length (ex: meter)<br>speed (ex: mile per hour) | V2, V3 |
-[Email](luis-reference-prebuilt-email.md) | V2, V3 |
-[GeographyV2](luis-reference-prebuilt-geographyV2.md) | V2, V3 |
-[KeyPhrase](luis-reference-prebuilt-keyphrase.md) | V2, V3 |
-[Number](luis-reference-prebuilt-number.md) | V2, V3 |
-[Ordinal](luis-reference-prebuilt-ordinal.md) | V2, V3 |
-[OrdinalV2](luis-reference-prebuilt-ordinal-v2.md) | V2, V3 |
-[Percentage](luis-reference-prebuilt-percentage.md) | V2, V3 |
-[PersonName](luis-reference-prebuilt-person.md) | V2, V3 |
-[Phonenumber](luis-reference-prebuilt-phonenumber.md) | V2, V3 |
-[Temperature](luis-reference-prebuilt-temperature.md):<br>fahrenheit<br>kelvin<br>rankine<br>delisle<br>celsius | V2, V3 |
-[URL](luis-reference-prebuilt-url.md) | V2, V3 |
-
-## French (France) entity support
-
-The following entities are supported:
-
-| Prebuilt entity | fr-FR |
-| -- | :--: |
-[Age](luis-reference-prebuilt-age.md):<br>year<br>month<br>week<br>day | V2, V3 |
-[Currency (money)](luis-reference-prebuilt-currency.md):<br>dollar<br>fractional unit (ex: penny) | V2, V3 |
-[DatetimeV2](luis-reference-prebuilt-datetimev2.md):<br>date<br>daterange<br>time<br>timerange | V2, V3 |
-[Dimension](luis-reference-prebuilt-dimension.md):<br>volume<br>area<br>weight<br>information (ex: bit/byte)<br>length (ex: meter)<br>speed (ex: mile per hour) | V2, V3 |
-[Email](luis-reference-prebuilt-email.md) | V2, V3 |
-[GeographyV2](luis-reference-prebuilt-geographyV2.md) | - |
-[KeyPhrase](luis-reference-prebuilt-keyphrase.md) | V2, V3 |
-[Number](luis-reference-prebuilt-number.md) | V2, V3 |
-[Ordinal](luis-reference-prebuilt-ordinal.md) | V2, V3 |
-[Percentage](luis-reference-prebuilt-percentage.md) | V2, V3 |
-[Phonenumber](luis-reference-prebuilt-phonenumber.md) | V2, V3 |
-[Temperature](luis-reference-prebuilt-temperature.md):<br>fahrenheit<br>kelvin<br>rankine<br>delisle<br>celsius | V2, V3 |
-[URL](luis-reference-prebuilt-url.md) | V2, V3 |
-<![OrdinalV2](luis-reference-prebuilt-ordinal-v2.md) | - |-->
-<![PersonName](luis-reference-prebuilt-person.md) | - |-->
-
-## French (Canadian) entity support
-
-The following entities are supported:
-
-| Prebuilt entity | fr-CA |
-| -- | :--: |
-[Age](luis-reference-prebuilt-age.md):<br>year<br>month<br>week<br>day | V2, V3 |
-[Currency (money)](luis-reference-prebuilt-currency.md):<br>dollar<br>fractional unit (ex: penny) | V2, V3 |
-[DatetimeV2](luis-reference-prebuilt-datetimev2.md):<br>date<br>daterange<br>time<br>timerange | V2, V3 |
-[Dimension](luis-reference-prebuilt-dimension.md):<br>volume<br>area<br>weight<br>information (ex: bit/byte)<br>length (ex: meter)<br>speed (ex: mile per hour) | V2, V3 |
-[Email](luis-reference-prebuilt-email.md) | V2, V3 |
-[KeyPhrase](luis-reference-prebuilt-keyphrase.md) | V2, V3 |
-[Number](luis-reference-prebuilt-number.md) | V2, V3 |
-[Ordinal](luis-reference-prebuilt-ordinal.md) | V2, V3 |
-[Percentage](luis-reference-prebuilt-percentage.md) | V2, V3 |
-[Phonenumber](luis-reference-prebuilt-phonenumber.md) | V2, V3 |
-[Temperature](luis-reference-prebuilt-temperature.md):<br>fahrenheit<br>kelvin<br>rankine<br>delisle<br>celsius | V2, V3 |
-[URL](luis-reference-prebuilt-url.md) | V2, V3 |
-<![GeographyV2](luis-reference-prebuilt-geographyV2.md) | - |-->
-<![OrdinalV2](luis-reference-prebuilt-ordinal-v2.md) | - |-->
-<![PersonName](luis-reference-prebuilt-person.md) | - |-->
-
-## German entity support
-
-The following entities are supported:
-
-|Prebuilt entity | de-DE |
-| -- | :--: |
-[Age](luis-reference-prebuilt-age.md):<br>year<br>month<br>week<br>day | V2, V3 |
-[Currency (money)](luis-reference-prebuilt-currency.md):<br>dollar<br>fractional unit (ex: penny) | V2, V3 |
-[DatetimeV2](luis-reference-prebuilt-datetimev2.md):<br>date<br>daterange<br>time<br>timerange | V2, V3 |
-[Dimension](luis-reference-prebuilt-dimension.md):<br>volume<br>area<br>weight<br>information (ex: bit/byte)<br>length (ex: meter)<br>speed (ex: mile per hour) | V2, V3 |
-[Email](luis-reference-prebuilt-email.md) | V2, V3 |
-[KeyPhrase](luis-reference-prebuilt-keyphrase.md) | V2, V3 |
-[Number](luis-reference-prebuilt-number.md) | V2, V3 |
-[Ordinal](luis-reference-prebuilt-ordinal.md) | V2, V3 |
-[Percentage](luis-reference-prebuilt-percentage.md) | V2, V3 |
-[Phonenumber](luis-reference-prebuilt-phonenumber.md) | V2, V3 |
-[Temperature](luis-reference-prebuilt-temperature.md):<br>fahrenheit<br>kelvin<br>rankine<br>delisle<br>celsius | V2, V3 |
-[URL](luis-reference-prebuilt-url.md) | V2, V3 |
-<![GeographyV2](luis-reference-prebuilt-geographyV2.md) | - |-->
-<![OrdinalV2](luis-reference-prebuilt-ordinal-v2.md) | - |-->
-<![PersonName](luis-reference-prebuilt-person.md) | - |-->
-
-## Italian entity support
-
-Italian prebuilt age, currency, dimension, number, and percentage _resolution_ changed between V2 and the V3 preview.
-
-The following entities are supported:
-
-| Prebuilt entity | it-IT |
-| -- | :--: |
-[Age](luis-reference-prebuilt-age.md):<br>year<br>month<br>week<br>day | V2, V3 |
-[Currency (money)](luis-reference-prebuilt-currency.md):<br>dollar<br>fractional unit (ex: penny) | V2, V3 |
-[DatetimeV2](luis-reference-prebuilt-datetimev2.md):<br>date<br>daterange<br>time<br>timerange | V2, V3 |
-[Dimension](luis-reference-prebuilt-dimension.md):<br>volume<br>area<br>weight<br>information (ex: bit/byte)<br>length (ex: meter)<br>speed (ex: mile per hour) | V2, V3 |
-[Email](luis-reference-prebuilt-email.md) | V2, V3 |
-[KeyPhrase](luis-reference-prebuilt-keyphrase.md) | V2, V3 |
-[Number](luis-reference-prebuilt-number.md) | V2, V3 |
-[Ordinal](luis-reference-prebuilt-ordinal.md) | V2, V3 |
-[Percentage](luis-reference-prebuilt-percentage.md) | V2, V3 |
-[Phonenumber](luis-reference-prebuilt-phonenumber.md) | V2, V3 |
-[Temperature](luis-reference-prebuilt-temperature.md):<br>fahrenheit<br>kelvin<br>rankine<br>delisle<br>celsius | V2, V3 |
-[URL](luis-reference-prebuilt-url.md) | V2, V3 |
-<![GeographyV2](luis-reference-prebuilt-geographyV2.md) | - |-->
-<![OrdinalV2](luis-reference-prebuilt-ordinal-v2.md) | - |-->
-<![PersonName](luis-reference-prebuilt-person.md) | - |-->
-
-## Japanese entity support
-
-The following entities are supported:
-
-|Prebuilt entity | ja-JP |
-| -- | :--: |
-[Age](luis-reference-prebuilt-age.md):<br>year<br>month<br>week<br>day | V2, - |
-[Currency (money)](luis-reference-prebuilt-currency.md):<br>dollar<br>fractional unit (ex: penny) | V2, - |
-[Dimension](luis-reference-prebuilt-dimension.md):<br>volume<br>area<br>weight<br>information (ex: bit/byte)<br>length (ex: meter)<br>speed (ex: mile per hour) | V2, - |
-[Email](luis-reference-prebuilt-email.md) | V2, V3 |
-[KeyPhrase](luis-reference-prebuilt-keyphrase.md) | V2, V3 |
-[Number](luis-reference-prebuilt-number.md) | V2, - |
-[Ordinal](luis-reference-prebuilt-ordinal.md) | V2, - |
-[Percentage](luis-reference-prebuilt-percentage.md) | V2, - |
-[Phonenumber](luis-reference-prebuilt-phonenumber.md) | V2, V3 |
-[Temperature](luis-reference-prebuilt-temperature.md):<br>fahrenheit<br>kelvin<br>rankine<br>delisle<br>celsius | V2, - |
-[URL](luis-reference-prebuilt-url.md) | V2, V3 |
-<![Datetime](luis-reference-prebuilt-deprecated.md) | - |-->
-<![GeographyV2](luis-reference-prebuilt-geographyV2.md) | - |-->
-<![OrdinalV2](luis-reference-prebuilt-ordinal-v2.md) | - |-->
-<![PersonName](luis-reference-prebuilt-person.md) | - |-->
-
-## Korean entity support
-
-The following entities are supported:
-
-| Prebuilt entity | ko-KR |
-| -- | :--: |
-[Email](luis-reference-prebuilt-email.md) | V2, V3 |
-[KeyPhrase](luis-reference-prebuilt-keyphrase.md) | V2, V3 |
-[Phonenumber](luis-reference-prebuilt-phonenumber.md) | V2, V3 |
-[URL](luis-reference-prebuilt-url.md) | V2, V3 |
-<![Age](luis-reference-prebuilt-age.md):<br>year<br>month<br>week<br>day | - |-->
-<![Currency (money)](luis-reference-prebuilt-currency.md):<br>dollar<br>fractional unit (ex: penny) | - |-->
-<![Datetime](luis-reference-prebuilt-deprecated.md) | - |-->
-<![Dimension](luis-reference-prebuilt-dimension.md):<br>volume<br>area<br>weight<br>information (ex: bit/byte)<br>length (ex: meter)<br>speed (ex: mile per hour) | - |-->
-<![GeographyV2](luis-reference-prebuilt-geographyV2.md) | - |-->
-<![Number](luis-reference-prebuilt-number.md) | - |-->
-<![Ordinal](luis-reference-prebuilt-ordinal.md) | - |-->
-<![OrdinalV2](luis-reference-prebuilt-ordinal-v2.md) | - |-->
-<![Percentage](luis-reference-prebuilt-percentage.md) | - |-->
-<![PersonName](luis-reference-prebuilt-person.md) | - |-->
-<![Temperature](luis-reference-prebuilt-temperature.md):<br>fahrenheit<br>kelvin<br>rankine<br>delisle<br>celsius | - |-->
-
-## Portuguese (Brazil) entity support
-
-The following entities are supported:
-
-| Prebuilt entity | pt-BR |
-| | :: |
-[Age](luis-reference-prebuilt-age.md):<br>year<br>month<br>week<br>day | V2, V3 |
-[Currency (money)](luis-reference-prebuilt-currency.md):<br>dollar<br>fractional unit (ex: penny) | V2, V3 |
-[DatetimeV2](luis-reference-prebuilt-datetimev2.md):<br>date<br>daterange<br>time<br>timerange | V2, V3 |
-[Dimension](luis-reference-prebuilt-dimension.md):<br>volume<br>area<br>weight<br>information (ex: bit/byte)<br>length (ex: meter)<br>speed (ex: mile per hour) | V2, V3 |
-[Email](luis-reference-prebuilt-email.md) | V2, V3 |
-[KeyPhrase](luis-reference-prebuilt-keyphrase.md) | V2, V3 |
-[Number](luis-reference-prebuilt-number.md) | V2, V3 |
-[Ordinal](luis-reference-prebuilt-ordinal.md) | V2, V3 |
-[Percentage](luis-reference-prebuilt-percentage.md) | V2, V3 |
-[Phonenumber](luis-reference-prebuilt-phonenumber.md) | V2, V3 |
-[Temperature](luis-reference-prebuilt-temperature.md):<br>fahrenheit<br>kelvin<br>rankine<br>delisle<br>celsius | V2, V3 |
-[URL](luis-reference-prebuilt-url.md) | V2, V3 |
-<![GeographyV2](luis-reference-prebuilt-geographyV2.md) | - |-->
-<![OrdinalV2](luis-reference-prebuilt-ordinal-v2.md) | - |-->
-<![PersonName](luis-reference-prebuilt-person.md) | - |-->
-
-KeyPhrase is not available in all subcultures of Portuguese (Brazil), `pt-BR`.
-
-## Spanish (Spain) entity support
-
-The following entities are supported:
-
-| Prebuilt entity | es-ES |
-| | :: |
-[Age](luis-reference-prebuilt-age.md):<br>year<br>month<br>week<br>day | V2, V3 |
-[Currency (money)](luis-reference-prebuilt-currency.md):<br>dollar<br>fractional unit (ex: penny) | V2, V3 |
-[DatetimeV2](luis-reference-prebuilt-datetimev2.md):<br>date<br>daterange<br>time<br>timerange | V2, V3 |
-[Dimension](luis-reference-prebuilt-dimension.md):<br>volume<br>area<br>weight<br>information (ex: bit/byte)<br>length (ex: meter)<br>speed (ex: mile per hour) | V2, V3 |
-[Email](luis-reference-prebuilt-email.md) | V2, V3 |
-[KeyPhrase](luis-reference-prebuilt-keyphrase.md) | V2, V3 |
-[Number](luis-reference-prebuilt-number.md) | V2, V3 |
-[Ordinal](luis-reference-prebuilt-ordinal.md) | V2, V3 |
-[Percentage](luis-reference-prebuilt-percentage.md) | V2, V3 |
-[Phonenumber](luis-reference-prebuilt-phonenumber.md) | V2, V3 |
-[Temperature](luis-reference-prebuilt-temperature.md):<br>fahrenheit<br>kelvin<br>rankine<br>delisle<br>celsius | V2, V3 |
-[URL](luis-reference-prebuilt-url.md) | V2, V3 |
-<![GeographyV2](luis-reference-prebuilt-geographyV2.md) | - |-->
-<![OrdinalV2](luis-reference-prebuilt-ordinal-v2.md) | - |-->
-<![PersonName](luis-reference-prebuilt-person.md) | - |-->
-
-## Spanish (Mexico) entity support
-
-The following entities are supported:
-
-| Prebuilt entity | es-MX |
-| | :: |
-[Age](luis-reference-prebuilt-age.md):<br>year<br>month<br>week<br>day | - |
-[Currency (money)](luis-reference-prebuilt-currency.md):<br>dollar<br>fractional unit (ex: penny) | - |
-[DatetimeV2](luis-reference-prebuilt-datetimev2.md):<br>date<br>daterange<br>time<br>timerange | - |
-[Dimension](luis-reference-prebuilt-dimension.md):<br>volume<br>area<br>weight<br>information (ex: bit/byte)<br>length (ex: meter)<br>speed (ex: mile per hour) | - |
-[Email](luis-reference-prebuilt-email.md) | V2, V3 |
-[KeyPhrase](luis-reference-prebuilt-keyphrase.md) | V2, V3 |
-[Number](luis-reference-prebuilt-number.md) | V2, V3 |
-[Ordinal](luis-reference-prebuilt-ordinal.md) | - |
-[Percentage](luis-reference-prebuilt-percentage.md) | - |
-[Phonenumber](luis-reference-prebuilt-phonenumber.md) | V2, V3 |
-[Temperature](luis-reference-prebuilt-temperature.md):<br>fahrenheit<br>kelvin<br>rankine<br>delisle<br>celsius | - |
-[URL](luis-reference-prebuilt-url.md) | V2, V3 |
-<![GeographyV2](luis-reference-prebuilt-geographyV2.md) | - |-->
-<![OrdinalV2](luis-reference-prebuilt-ordinal-v2.md) | - |-->
-<![PersonName](luis-reference-prebuilt-person.md) | - |-->
-
-<! See notes on [Deprecated prebuilt entities](luis-reference-prebuilt-deprecated.md)-->
-
-## Turkish entity support
-
-| Prebuilt entity | tr-tr |
-| | :: |
-[Age](luis-reference-prebuilt-age.md):<br>year<br>month<br>week<br>day | - |
-[Currency (money)](luis-reference-prebuilt-currency.md):<br>dollar<br>fractional unit (ex: penny) | - |
-[DatetimeV2](luis-reference-prebuilt-datetimev2.md):<br>date<br>daterange<br>time<br>timerange | - |
-[Dimension](luis-reference-prebuilt-dimension.md):<br>volume<br>area<br>weight<br>information (ex: bit/byte)<br>length (ex: meter)<br>speed (ex: mile per hour) | - |
-[Email](luis-reference-prebuilt-email.md) | - |
-[Number](luis-reference-prebuilt-number.md) | - |
-[Ordinal](luis-reference-prebuilt-ordinal.md) | - |
-[Percentage](luis-reference-prebuilt-percentage.md) | - |
-[Temperature](luis-reference-prebuilt-temperature.md):<br>fahrenheit<br>kelvin<br>rankine<br>delisle<br>celsius | - |
-[URL](luis-reference-prebuilt-url.md) | - |
-<![KeyPhrase](luis-reference-prebuilt-keyphrase.md) | V2, V3 |-->
-<![Phonenumber](luis-reference-prebuilt-phonenumber.md) | - |-->
-
-<! See notes on [Deprecated prebuilt entities](luis-reference-prebuilt-deprecated.md). -->
-
-## Contribute to prebuilt entity cultures
-The prebuilt entities are developed in the Recognizers-Text open-source project. [Contribute](https://github.com/Microsoft/Recognizers-Text) to the project. This project includes examples of currency per culture.
-
-GeographyV2 and PersonName are not included in the Recognizers-Text project. For issues with these prebuilt entities, please open a [support request](../../azure-portal/supportability/how-to-create-azure-support-request.md).
-
-## Next steps
-
-Learn about the [number](luis-reference-prebuilt-number.md), [datetimeV2](luis-reference-prebuilt-datetimev2.md), and [currency](luis-reference-prebuilt-currency.md) entities.
cognitive-services Luis Reference Prebuilt Geographyv2 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/LUIS/luis-reference-prebuilt-geographyV2.md
- Title: Geography V2 prebuilt entity - LUIS-
-description: This article contains geographyV2 prebuilt entity information in Language Understanding (LUIS).
------- Previously updated : 10/04/2019---
-# GeographyV2 prebuilt entity for a LUIS app
--
-The prebuilt geographyV2 entity detects places. Because this entity is already trained, you do not need to add example utterances containing GeographyV2 to the application intents. GeographyV2 entity is supported in English [culture](luis-reference-prebuilt-entities.md).
-
-## Subtypes
-The geographical locations have subtypes:
-
-|Subtype|Purpose|
-|--|--|
-|`poi`|point of interest|
-|`city`|name of city|
-|`countryRegion`|name of country or region|
-|`continent`|name of continent|
-|`state`|name of state or province|
--
-## Resolution for GeographyV2 entity
-
-The following entity objects are returned for the query:
-
-`Carol is visiting the sphinx in gizah egypt in africa before heading to texas.`
-
-#### [V3 response](#tab/V3)
-
-The following JSON is with the `verbose` parameter set to `false`:
-
-```json
-"entities": {
- "geographyV2": [
- {
- "value": "the sphinx",
- "type": "poi"
- },
- {
- "value": "gizah",
- "type": "city"
- },
- {
- "value": "egypt",
- "type": "countryRegion"
- },
- {
- "value": "africa",
- "type": "continent"
- },
- {
- "value": "texas",
- "type": "state"
- }
- ]
-}
-```
-
-In the preceding JSON, `poi` is an abbreviation for **Point of Interest**.
-
-#### [V3 verbose response](#tab/V3-verbose)
-
-The following JSON is with the `verbose` parameter set to `true`:
-
-```json
-"entities": {
- "geographyV2": [
- {
- "value": "the sphinx",
- "type": "poi"
- },
- {
- "value": "gizah",
- "type": "city"
- },
- {
- "value": "egypt",
- "type": "countryRegion"
- },
- {
- "value": "africa",
- "type": "continent"
- },
- {
- "value": "texas",
- "type": "state"
- }
- ],
- "$instance": {
- "geographyV2": [
- {
- "type": "builtin.geographyV2.poi",
- "text": "the sphinx",
- "startIndex": 18,
- "length": 10,
- "modelTypeId": 2,
- "modelType": "Prebuilt Entity Extractor",
- "recognitionSources": [
- "model"
- ]
- },
- {
- "type": "builtin.geographyV2.city",
- "text": "gizah",
- "startIndex": 32,
- "length": 5,
- "modelTypeId": 2,
- "modelType": "Prebuilt Entity Extractor",
- "recognitionSources": [
- "model"
- ]
- },
- {
- "type": "builtin.geographyV2.countryRegion",
- "text": "egypt",
- "startIndex": 38,
- "length": 5,
- "modelTypeId": 2,
- "modelType": "Prebuilt Entity Extractor",
- "recognitionSources": [
- "model"
- ]
- },
- {
- "type": "builtin.geographyV2.continent",
- "text": "africa",
- "startIndex": 47,
- "length": 6,
- "modelTypeId": 2,
- "modelType": "Prebuilt Entity Extractor",
- "recognitionSources": [
- "model"
- ]
- },
- {
- "type": "builtin.geographyV2.state",
- "text": "texas",
- "startIndex": 72,
- "length": 5,
- "modelTypeId": 2,
- "modelType": "Prebuilt Entity Extractor",
- "recognitionSources": [
- "model"
- ]
- }
- ]
- }
-}
-```
-#### [V2 response](#tab/V2)
-
-The following example shows the resolution of the **builtin.geographyV2** entity.
-
-```json
-"entities": [
- {
- "entity": "the sphinx",
- "type": "builtin.geographyV2.poi",
- "startIndex": 18,
- "endIndex": 27
- },
- {
- "entity": "gizah",
- "type": "builtin.geographyV2.city",
- "startIndex": 32,
- "endIndex": 36
- },
- {
- "entity": "egypt",
- "type": "builtin.geographyV2.countryRegion",
- "startIndex": 38,
- "endIndex": 42
- },
- {
- "entity": "africa",
- "type": "builtin.geographyV2.continent",
- "startIndex": 47,
- "endIndex": 52
- },
- {
- "entity": "texas",
- "type": "builtin.geographyV2.state",
- "startIndex": 72,
- "endIndex": 76
- },
- {
- "entity": "carol",
- "type": "builtin.personName",
- "startIndex": 0,
- "endIndex": 4
- }
-]
-```
-* * *
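To show how a client might consume this response, here is a minimal Python sketch that groups the geographyV2 values by subtype. The `prediction` dictionary is a hypothetical stand-in for the parsed JSON body returned by the V3 prediction endpoint.

```python
# Minimal sketch: group geographyV2 values by subtype.
# "prediction" is a hypothetical dict built from the JSON shown above.
prediction = {
    "entities": {
        "geographyV2": [
            {"value": "the sphinx", "type": "poi"},
            {"value": "gizah", "type": "city"},
            {"value": "egypt", "type": "countryRegion"},
            {"value": "africa", "type": "continent"},
            {"value": "texas", "type": "state"},
        ]
    }
}

places_by_subtype = {}
for place in prediction["entities"].get("geographyV2", []):
    places_by_subtype.setdefault(place["type"], []).append(place["value"])

print(places_by_subtype)
# {'poi': ['the sphinx'], 'city': ['gizah'], 'countryRegion': ['egypt'],
#  'continent': ['africa'], 'state': ['texas']}
```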
-
-## Next steps
-
-Learn more about the [V3 prediction endpoint](luis-migration-api-v3.md).
-
-Learn about the [email](luis-reference-prebuilt-email.md), [number](luis-reference-prebuilt-number.md), and [ordinal](luis-reference-prebuilt-ordinal.md) entities.
cognitive-services Luis Reference Prebuilt Keyphrase https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/LUIS/luis-reference-prebuilt-keyphrase.md
- Title: Keyphrase prebuilt entity - LUIS-
-description: This article contains keyphrase prebuilt entity information in Language Understanding (LUIS).
------- Previously updated : 10/28/2021---
-# keyPhrase prebuilt entity for a LUIS app
--
-The keyPhrase entity extracts a variety of key phrases from an utterance. You don't need to add example utterances containing keyPhrase to the application. The keyPhrase entity is supported in [many cultures](luis-language-support.md#languages-supported) as part of the [Language service](../language-service/overview.md) features.
-
-## Resolution for prebuilt keyPhrase entity
-
-The following entity objects are returned for the query:
-
-`where is the educational requirements form for the development and engineering group`
-
-#### [V3 response](#tab/V3)
-
-The following JSON is with the `verbose` parameter set to `false`:
-
-```json
-"entities": {
- "keyPhrase": [
- "educational requirements",
- "development"
- ]
-}
-```
-#### [V3 verbose response](#tab/V3-verbose)
-The following JSON is with the `verbose` parameter set to `true`:
-
-```json
-"entities": {
- "keyPhrase": [
- "educational requirements",
- "development"
- ],
- "$instance": {
- "keyPhrase": [
- {
- "type": "builtin.keyPhrase",
- "text": "educational requirements",
- "startIndex": 13,
- "length": 24,
- "modelTypeId": 2,
- "modelType": "Prebuilt Entity Extractor",
- "recognitionSources": [
- "model"
- ]
- },
- {
- "type": "builtin.keyPhrase",
- "text": "development",
- "startIndex": 51,
- "length": 11,
- "modelTypeId": 2,
- "modelType": "Prebuilt Entity Extractor",
- "recognitionSources": [
- "model"
- ]
- }
- ]
- }
-}
-```
-#### [V2 response](#tab/V2)
-
-The following example shows the resolution of the **builtin.keyPhrase** entity.
-
-```json
-"entities": [
- {
- "entity": "development",
- "type": "builtin.keyPhrase",
- "startIndex": 51,
- "endIndex": 61
- },
- {
- "entity": "educational requirements",
- "type": "builtin.keyPhrase",
- "startIndex": 13,
- "endIndex": 36
- }
-]
-```
-* * *
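Because the non-verbose response returns only the phrase text, the verbose `$instance` data is what locates each phrase in the original utterance. The following Python sketch, using hypothetical values copied from the sample above, recovers the matched spans from `startIndex` and `length`.

```python
# Minimal sketch: slice the original utterance using the verbose $instance data.
utterance = ("where is the educational requirements form "
             "for the development and engineering group")

instances = [  # hypothetical values taken from the sample response above
    {"type": "builtin.keyPhrase", "startIndex": 13, "length": 24},
    {"type": "builtin.keyPhrase", "startIndex": 51, "length": 11},
]

for item in instances:
    start, length = item["startIndex"], item["length"]
    print(utterance[start:start + length])
# educational requirements
# development
```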
-
-## Next steps
-
-Learn more about the [V3 prediction endpoint](luis-migration-api-v3.md).
-
-Learn about the [percentage](luis-reference-prebuilt-percentage.md), [number](luis-reference-prebuilt-number.md), and [age](luis-reference-prebuilt-age.md) entities.
cognitive-services Luis Reference Prebuilt Number https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/LUIS/luis-reference-prebuilt-number.md
- Title: Number Prebuilt entity - LUIS-
-description: This article contains number prebuilt entity information in Language Understanding (LUIS).
------- Previously updated : 09/27/2019---
-# Number prebuilt entity for a LUIS app
--
-There are many ways in which numeric values are used to quantify, express, and describe pieces of information. This article covers only some of the possible examples. LUIS interprets the variations in user utterances and returns consistent numeric values. Because this entity is already trained, you do not need to add example utterances containing number to the application intents.
-
-## Types of number
-Number is managed from the [Recognizers-text](https://github.com/Microsoft/Recognizers-Text/blob/master/Patterns/English/English-Numbers.yaml) GitHub repository.
-
-## Examples of number resolution
-
-| Utterance | Entity | Resolution |
-| - |:-:| --:|
-| ```one thousand times``` | ```"one thousand"``` | ```"1000"``` |
-| ```1,000 people``` | ```"1,000"``` | ```"1000"``` |
-| ```1/2 cup``` | ```"1 / 2"``` | ```"0.5"``` |
-| ```one half the amount``` | ```"one half"``` | ```"0.5"``` |
-| ```one hundred fifty orders``` | ```"one hundred fifty"``` | ```"150"``` |
-| ```one hundred and fifty books``` | ```"one hundred and fifty"``` | ```"150"```|
-| ```a grade of one point five```| ```"one point five"``` | ```"1.5"``` |
-| ```buy two dozen eggs``` | ```"two dozen"``` | ```"24"``` |
--
-LUIS includes the recognized value of a **`builtin.number`** entity in the `resolution` field of the JSON response it returns.
-
-## Resolution for prebuilt number
-
-The following entity objects are returned for the query:
-
-`order two dozen eggs`
-
-#### [V3 response](#tab/V3)
-
-The following JSON is with the `verbose` parameter set to `false`:
-
-```json
-"entities": {
- "number": [
- 24
- ]
-}
-```
-#### [V3 verbose response](#tab/V3-verbose)
-
-The following JSON is with the `verbose` parameter set to `true`:
-
-```json
-"entities": {
- "number": [
- 24
- ],
- "$instance": {
- "number": [
- {
- "type": "builtin.number",
- "text": "two dozen",
- "startIndex": 6,
- "length": 9,
- "modelTypeId": 2,
- "modelType": "Prebuilt Entity Extractor",
- "recognitionSources": [
- "model"
- ]
- }
- ]
- }
-}
-```
-#### [V2 response](#tab/V2)
-
-The following example shows a JSON response from LUIS that includes the resolution of the value 24 for the utterance "two dozen".
-
-```json
-"entities": [
- {
- "entity": "two dozen",
- "type": "builtin.number",
- "startIndex": 6,
- "endIndex": 14,
- "resolution": {
- "subtype": "integer",
- "value": "24"
- }
- }
-]
-```
-* * *
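As the tabs show, V2 returns the resolved value as a string inside `resolution`, while V3 returns it directly as a number. The sketch below, using hypothetical dicts shaped like the samples above, normalizes both forms to a numeric type.

```python
# Minimal sketch: normalize the number entity from either API version.
v3_entities = {"number": [24]}
v2_entities = [
    {"entity": "two dozen", "type": "builtin.number",
     "resolution": {"subtype": "integer", "value": "24"}}
]

numbers_v3 = [float(n) for n in v3_entities.get("number", [])]
numbers_v2 = [float(e["resolution"]["value"])
              for e in v2_entities if e["type"] == "builtin.number"]

print(numbers_v3, numbers_v2)  # [24.0] [24.0]
```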
-
-## Next steps
-
-Learn more about the [V3 prediction endpoint](luis-migration-api-v3.md).
-
-Learn about the [currency](luis-reference-prebuilt-currency.md), [ordinal](luis-reference-prebuilt-ordinal.md), and [percentage](luis-reference-prebuilt-percentage.md).
cognitive-services Luis Reference Prebuilt Ordinal V2 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/LUIS/luis-reference-prebuilt-ordinal-v2.md
- Title: Ordinal V2 prebuilt entity - LUIS-
-description: This article contains ordinal V2 prebuilt entity information in Language Understanding (LUIS).
------- Previously updated : 09/27/2019---
-# Ordinal V2 prebuilt entity for a LUIS app
--
-Ordinal V2 number expands [Ordinal](luis-reference-prebuilt-ordinal.md) to provide relative references such as `next`, `last`, and `previous`. These are not extracted using the ordinal prebuilt entity.
-
-## Resolution for prebuilt ordinal V2 entity
-
-The following entity objects are returned for the query:
-
-`what is the second to last choice in the list`
-
-#### [V3 response](#tab/V3)
-
-The following JSON is with the `verbose` parameter set to `false`:
-
-```json
-"entities": {
- "ordinalV2": [
- {
- "offset": -1,
- "relativeTo": "end"
- }
- ]
-}
-```
-
-#### [V3 verbose response](#tab/V3-verbose)
-
-The following JSON is with the `verbose` parameter set to `true`:
-
-```json
-"entities": {
- "ordinalV2": [
- {
- "offset": -1,
- "relativeTo": "end"
- }
- ],
- "$instance": {
- "ordinalV2": [
- {
- "type": "builtin.ordinalV2.relative",
- "text": "the second to last",
- "startIndex": 8,
- "length": 18,
- "modelTypeId": 2,
- "modelType": "Prebuilt Entity Extractor",
- "recognitionSources": [
- "model"
- ]
- }
- ]
- }
-}
-```
-#### [V2 response](#tab/V2)
-
-The following example shows the resolution of the **builtin.ordinalV2** entity.
-
-```json
-"entities": [
- {
- "entity": "the second to last",
- "type": "builtin.ordinalV2.relative",
- "startIndex": 8,
- "endIndex": 25,
- "resolution": {
- "offset": "-1",
- "relativeTo": "end"
- }
- }
-]
-```
-* * *
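A client still has to turn `offset` and `relativeTo` into a concrete position in its own data. The sketch below is one possible interpretation, assuming that `relativeTo: "end"` with `offset: 0` means the last item, so "second to last" arrives as `offset: -1`.

```python
# Minimal sketch: map an ordinalV2 resolution onto a list of choices.
# The offset interpretation is an assumption for illustration.
def resolve_ordinal_v2(resolution, items):
    offset = int(resolution["offset"])
    if resolution["relativeTo"] == "end":
        return items[len(items) - 1 + offset]
    return items[offset - 1]  # relativeTo "start": offset 1 is the first item

choices = ["red", "green", "blue", "black"]
print(resolve_ordinal_v2({"offset": -1, "relativeTo": "end"}, choices))  # blue
```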
-
-## Next steps
-
-Learn more about the [V3 prediction endpoint](luis-migration-api-v3.md).
-
-Learn about the [percentage](luis-reference-prebuilt-percentage.md), [phone number](luis-reference-prebuilt-phonenumber.md), and [temperature](luis-reference-prebuilt-temperature.md) entities.
cognitive-services Luis Reference Prebuilt Ordinal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/LUIS/luis-reference-prebuilt-ordinal.md
- Title: Ordinal Prebuilt entity - LUIS-
-description: This article contains ordinal prebuilt entity information in Language Understanding (LUIS).
------- Previously updated : 10/14/2019---
-# Ordinal prebuilt entity for a LUIS app
--
-Ordinal number is a numeric representation of an object inside a set: `first`, `second`, `third`. Because this entity is already trained, you do not need to add example utterances containing ordinal to the application intents. Ordinal entity is supported in [many cultures](luis-reference-prebuilt-entities.md).
-
-## Types of ordinal
-Ordinal is managed from the [Recognizers-text](https://github.com/Microsoft/Recognizers-Text/blob/master/Patterns/English/English-Numbers.yaml#L45) GitHub repository.
-
-## Resolution for prebuilt ordinal entity
-
-The following entity objects are returned for the query:
-
-`Order the second option`
-
-#### [V3 response](#tab/V3)
-
-The following JSON is with the `verbose` parameter set to `false`:
-
-```json
-"entities": {
- "ordinal": [
- 2
- ]
-}
-```
-#### [V3 verbose response](#tab/V3-verbose)
-The following JSON is with the `verbose` parameter set to `true`:
-
-```json
-"entities": {
- "ordinal": [
- 2
- ],
- "$instance": {
- "ordinal": [
- {
- "type": "builtin.ordinal",
- "text": "second",
- "startIndex": 10,
- "length": 6,
- "modelTypeId": 2,
- "modelType": "Prebuilt Entity Extractor",
- "recognitionSources": [
- "model"
- ]
- }
- ]
- }
-}
-```
-
-#### [V2 response](#tab/V2)
-
-The following example shows the resolution of the **builtin.ordinal** entity.
-
-```json
-"entities": [
- {
- "entity": "second",
- "type": "builtin.ordinal",
- "startIndex": 10,
- "endIndex": 15,
- "resolution": {
- "value": "2"
- }
- }
-]
-```
-* * *
-
-## Next steps
-
-Learn more about the [V3 prediction endpoint](luis-migration-api-v3.md).
-
-Learn about the [OrdinalV2](luis-reference-prebuilt-ordinal-v2.md), [phone number](luis-reference-prebuilt-phonenumber.md), and [temperature](luis-reference-prebuilt-temperature.md) entities.
cognitive-services Luis Reference Prebuilt Percentage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/LUIS/luis-reference-prebuilt-percentage.md
- Title: Percentage Prebuilt entity - LUIS-
-description: This article contains percentage prebuilt entity information in Language Understanding (LUIS).
------- Previously updated : 09/27/2019---
-# Percentage prebuilt entity for a LUIS app
--
-Percentage numbers can appear as fractions, `3 1/2`, or as a percentage, `2%`. Because this entity is already trained, you do not need to add example utterances containing percentage to the application intents. Percentage entity is supported in [many cultures](luis-reference-prebuilt-entities.md).
-
-## Types of percentage
-Percentage is managed from the [Recognizers-text](https://github.com/Microsoft/Recognizers-Text/blob/master/Patterns/English/English-Numbers.yaml#L114) GitHub repository.
-
-## Resolution for prebuilt percentage entity
-
-The following entity objects are returned for the query:
-
-`set a trigger when my stock goes up 2%`
-
-#### [V3 response](#tab/V3)
-
-The following JSON is with the `verbose` parameter set to `false`:
-
-```json
-"entities": {
- "percentage": [
- 2
- ]
-}
-```
-#### [V3 verbose response](#tab/V3-verbose)
-The following JSON is with the `verbose` parameter set to `true`:
-
-```json
-"entities": {
- "percentage": [
- 2
- ],
- "$instance": {
- "percentage": [
- {
- "type": "builtin.percentage",
- "text": "2%",
- "startIndex": 36,
- "length": 2,
- "modelTypeId": 2,
- "modelType": "Prebuilt Entity Extractor",
- "recognitionSources": [
- "model"
- ]
- }
- ]
- }
-}
-```
-#### [V2 response](#tab/V2)
-
-The following example shows the resolution of the **builtin.percentage** entity.
-
-```json
-"entities": [
- {
- "entity": "2%",
- "type": "builtin.percentage",
- "startIndex": 36,
- "endIndex": 37,
- "resolution": {
- "value": "2%"
- }
- }
-]
-```
-* * *
-
-## Next steps
-
-Learn more about the [V3 prediction endpoint](luis-migration-api-v3.md).
-
-Learn about the [ordinal](luis-reference-prebuilt-ordinal.md), [number](luis-reference-prebuilt-number.md), and [temperature](luis-reference-prebuilt-temperature.md) entities.
cognitive-services Luis Reference Prebuilt Person https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/LUIS/luis-reference-prebuilt-person.md
- Title: PersonName prebuilt entity - LUIS-
-description: This article contains personName prebuilt entity information in Language Understanding (LUIS).
------- Previously updated : 05/07/2019---
-# PersonName prebuilt entity for a LUIS app
--
-The prebuilt personName entity detects people's names. Because this entity is already trained, you do not need to add example utterances containing personName to the application intents. personName entity is supported in English and Chinese [cultures](luis-reference-prebuilt-entities.md).
-
-## Resolution for personName entity
-
-The following entity objects are returned for the query:
-
-`Is Jill Jones in Cairo?`
--
-#### [V3 response](#tab/V3)
--
-The following JSON is with the `verbose` parameter set to `false`:
-
-```json
-"entities": {
- "personName": [
- "Jill Jones"
- ]
-}
-```
-#### [V3 verbose response](#tab/V3-verbose)
-The following JSON is with the `verbose` parameter set to `true`:
-
-```json
-"entities": {
- "personName": [
- "Jill Jones"
- ],
- "$instance": {
- "personName": [
- {
- "type": "builtin.personName",
- "text": "Jill Jones",
- "startIndex": 3,
- "length": 10,
- "modelTypeId": 2,
- "modelType": "Prebuilt Entity Extractor",
- "recognitionSources": [
- "model"
- ]
- }
- ],
- }
-}
-```
-#### [V2 response](#tab/V2)
-
-The following example shows the resolution of the **builtin.personName** entity.
-
-```json
-"entities": [
-{
- "entity": "Jill Jones",
- "type": "builtin.personName",
- "startIndex": 3,
- "endIndex": 12
-}
-]
-```
-* * *
-
-## Next steps
-
-Learn more about the [V3 prediction endpoint](luis-migration-api-v3.md).
-
-Learn about the [email](luis-reference-prebuilt-email.md), [number](luis-reference-prebuilt-number.md), and [ordinal](luis-reference-prebuilt-ordinal.md) entities.
cognitive-services Luis Reference Prebuilt Phonenumber https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/LUIS/luis-reference-prebuilt-phonenumber.md
- Title: Phone number Prebuilt entities - LUIS-
-description: This article contains phone number prebuilt entity information in Language Understanding (LUIS).
------- Previously updated : 09/27/2019---
-# Phone number prebuilt entity for a LUIS app
--
-The `phonenumber` entity extracts a variety of phone numbers including country code. Because this entity is already trained, you do not need to add example utterances to the application. The `phonenumber` entity is supported in `en-us` culture only.
-
-## Types of a phone number
-`Phonenumber` is managed from the [Recognizers-text](https://github.com/Microsoft/Recognizers-Text/blob/master/Patterns/Base-PhoneNumbers.yaml) GitHub repository.
-
-## Resolution for this prebuilt entity
-
-The following entity objects are returned for the query:
-
-`my mobile is 1 (800) 642-7676`
-
-#### [V3 response](#tab/V3)
-
-The following JSON is with the `verbose` parameter set to `false`:
-
-```json
-"entities": {
- "phonenumber": [
- "1 (800) 642-7676"
- ]
-}
-```
-#### [V3 verbose response](#tab/V3-verbose)
-The following JSON is with the `verbose` parameter set to `true`:
-
-```json
-"entities": {
- "phonenumber": [
- "1 (800) 642-7676"
- ],
- "$instance": {
-
- "phonenumber": [
- {
- "type": "builtin.phonenumber",
- "text": "1 (800) 642-7676",
- "startIndex": 13,
- "length": 16,
- "score": 1.0,
- "modelTypeId": 2,
- "modelType": "Prebuilt Entity Extractor",
- "recognitionSources": [
- "model"
- ]
- }
- ]
- }
-}
-```
-#### [V2 response](#tab/V2)
-
-The following example shows the resolution of the **builtin.phonenumber** entity.
-
-```json
-"entities": [
- {
- "entity": "1 (800) 642-7676",
- "type": "builtin.phonenumber",
- "startIndex": 13,
- "endIndex": 28,
- "resolution": {
- "score": "1",
- "value": "1 (800) 642-7676"
- }
- }
-]
-```
-* * *
-
-## Next steps
-
-Learn more about the [V3 prediction endpoint](luis-migration-api-v3.md).
-
-Learn about the [percentage](luis-reference-prebuilt-percentage.md), [number](luis-reference-prebuilt-number.md), and [temperature](luis-reference-prebuilt-temperature.md) entities.
cognitive-services Luis Reference Prebuilt Sentiment https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/LUIS/luis-reference-prebuilt-sentiment.md
- Title: Sentiment analysis - LUIS-
-description: If Sentiment analysis is configured, the LUIS json response includes sentiment analysis.
------- Previously updated : 10/28/2021--
-# Sentiment analysis
--
-If Sentiment analysis is configured, the LUIS JSON response includes sentiment analysis. Learn more about sentiment analysis in the [Language service](../language-service/index.yml) documentation.
-
-LUIS uses V2 of the API.
-
-Sentiment Analysis is configured when publishing your application. See [how to publish an app](./luis-how-to-publish-app.md) for more information.
-
-## Resolution for sentiment
-
-Sentiment data is a score between 0 and 1 indicating the positive (closer to 1) or negative (closer to 0) sentiment of the data.
-
-#### [English language](#tab/english)
-
-When culture is `en-us`, the response is:
-
-```JSON
-"sentimentAnalysis": {
- "label": "positive",
- "score": 0.9163064
-}
-```
-
-#### [Other languages](#tab/other-languages)
-
-For all other cultures, the response is:
-
-```JSON
-"sentimentAnalysis": {
- "score": 0.9163064
-}
-```
-* * *
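When only a score comes back (cultures other than `en-us`), the client decides how to label it. The sketch below uses a 0.5 cutoff, which is an assumption for illustration rather than a documented threshold.

```python
# Minimal sketch: derive a coarse label when the response has no "label" field.
def sentiment_label(analysis, threshold=0.5):
    if "label" in analysis:
        return analysis["label"]
    return "positive" if analysis["score"] >= threshold else "negative"

print(sentiment_label({"label": "positive", "score": 0.9163064}))  # positive
print(sentiment_label({"score": 0.31}))                            # negative
```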
-
-## Next steps
-
-Learn more about the [V3 prediction endpoint](luis-migration-api-v3.md).
cognitive-services Luis Reference Prebuilt Temperature https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/LUIS/luis-reference-prebuilt-temperature.md
- Title: Temperature Prebuilt entity - LUIS-
-description: This article contains temperature prebuilt entity information in Language Understanding (LUIS).
------- Previously updated : 10/14/2019---
-# Temperature prebuilt entity for a LUIS app
--
-Temperature extracts a variety of temperature types. Because this entity is already trained, you do not need to add example utterances containing temperature to the application. Temperature entity is supported in [many cultures](luis-reference-prebuilt-entities.md).
-
-## Types of temperature
-Temperature is managed from the [Recognizers-text](https://github.com/Microsoft/Recognizers-Text/blob/master/Patterns/English/English-NumbersWithUnit.yaml#L819) GitHub repository.
-
-## Resolution for prebuilt temperature entity
-
-The following entity objects are returned for the query:
-
-`set the temperature to 30 degrees`
--
-#### [V3 response](#tab/V3)
-
-The following JSON is with the `verbose` parameter set to `false`:
-
-```json
-"entities": {
- "temperature": [
- {
- "number": 30,
- "units": "Degree"
- }
- ]
-}
-```
-#### [V3 verbose response](#tab/V3-verbose)
-The following JSON is with the `verbose` parameter set to `true`:
-
-```json
-"entities": {
- "temperature": [
- {
- "number": 30,
- "units": "Degree"
- }
- ],
- "$instance": {
- "temperature": [
- {
- "type": "builtin.temperature",
- "text": "30 degrees",
- "startIndex": 23,
- "length": 10,
- "modelTypeId": 2,
- "modelType": "Prebuilt Entity Extractor",
- "recognitionSources": [
- "model"
- ]
- }
- ]
- }
-}
-```
-#### [V2 response](#tab/V2)
-
-The following example shows the resolution of the **builtin.temperature** entity.
-
-```json
-"entities": [
- {
- "entity": "30 degrees",
- "type": "builtin.temperature",
- "startIndex": 23,
- "endIndex": 32,
- "resolution": {
- "unit": "Degree",
- "value": "30"
- }
- }
-]
-```
-* * *
-
-## Next steps
-
-Learn more about the [V3 prediction endpoint](luis-migration-api-v3.md).
-
-Learn about the [percentage](luis-reference-prebuilt-percentage.md), [number](luis-reference-prebuilt-number.md), and [age](luis-reference-prebuilt-age.md) entities.
cognitive-services Luis Reference Prebuilt Url https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/LUIS/luis-reference-prebuilt-url.md
- Title: URL Prebuilt entities - LUIS-
-description: This article contains url prebuilt entity information in Language Understanding (LUIS).
------- Previously updated : 10/04/2019---
-# URL prebuilt entity for a LUIS app
--
-URL entity extracts URLs with domain names or IP addresses. Because this entity is already trained, you do not need to add example utterances containing URLs to the application. URL entity is supported in `en-us` culture only.
-
-## Types of URLs
-Url is managed from the [Recognizers-text](https://github.com/Microsoft/Recognizers-Text/blob/master/Patterns/Base-URL.yaml) GitHub repository.
-
-## Resolution for prebuilt URL entity
-
-The following entity objects are returned for the query:
-
-`https://www.luis.ai is a great cognitive services example of artificial intelligence`
-
-#### [V3 response](#tab/V3)
-
-The following JSON is with the `verbose` parameter set to `false`:
-
-```json
-"entities": {
- "url": [
- "https://www.luis.ai"
- ]
-}
-```
-#### [V3 verbose response](#tab/V3-verbose)
-
-The following JSON is with the `verbose` parameter set to `true`:
-
-```json
-"entities": {
- "url": [
- "https://www.luis.ai"
- ],
- "$instance": {
- "url": [
- {
- "type": "builtin.url",
- "text": "https://www.luis.ai",
- "startIndex": 0,
- "length": 17,
- "modelTypeId": 2,
- "modelType": "Prebuilt Entity Extractor",
- "recognitionSources": [
- "model"
- ]
- }
- ]
- }
-}
-```
-#### [V2 response](#tab/V2)
-
-The following example shows the resolution of the **builtin.url** entity for the query `https://www.luis.ai is a great cognitive services example of artificial intelligence`.
-
-```json
-"entities": [
- {
- "entity": "https://www.luis.ai",
- "type": "builtin.url",
- "startIndex": 0,
- "endIndex": 17
- }
-]
-```
-
-* * *
-
-## Next steps
-
-Learn more about the [V3 prediction endpoint](luis-migration-api-v3.md).
-
-Learn about the [ordinal](luis-reference-prebuilt-ordinal.md), [number](luis-reference-prebuilt-number.md), and [temperature](luis-reference-prebuilt-temperature.md) entities.
cognitive-services Luis Reference Regions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/LUIS/luis-reference-regions.md
- Title: Publishing regions & endpoints - LUIS
-description: The region specified in the Azure portal is the same region where you publish the LUIS app, and an endpoint URL is generated for that region.
----- Previously updated : 02/08/2022---
-# Authoring and publishing regions and the associated keys
---
-LUIS authoring regions are supported by the LUIS portal. To publish a LUIS app to more than one region, you need at least one prediction key per region.
-
-<a name="luis-website"></a>
-
-## LUIS Authoring regions
-
-Authoring regions are the regions where the application is created and training takes place.
-
-LUIS has the following authoring regions available with [paired fail-over regions](../../availability-zones/cross-region-replication-azure.md):
-
-* Australia east
-* West Europe
-* West US
-* Switzerland north
-
-LUIS has one portal you can use regardless of region, [www.luis.ai](https://www.luis.ai).
-
-<a name="regions-and-azure-resources"></a>
-
-## Publishing regions and Azure resources
-
-Publishing regions are the regions where the application will be used in runtime. To use the application in a publishing region, you must create a resource in this region and assign your application to it. For example, if you create an app with the *westus* authoring region and publish it to the *eastus* and *brazilsouth* regions, the app will run in those two regions.
--
-## Public apps
-A public app is published in all regions so that a user with a supported prediction resource can access the app in all regions.
-
-<a name="publishing-regions"></a>
-
-## Publishing regions are tied to authoring regions
-
-When you first create your LUIS application, you are required to choose an [authoring region](#luis-authoring-regions). To use the application in runtime, you are required to create a resource in a publishing region.
-
-Every authoring region has corresponding prediction regions that you can publish your application to, which are listed in the tables below. If your app is currently in the wrong authoring region, export the app, and import it into the correct authoring region to match the required publishing region.
--
-## Single data residency
-
-Single data residency means that the data does not leave the boundaries of the region.
-
-> [!Note]
-> * Make sure to set `log=false` for [V3 APIs](https://westus.dev.cognitive.microsoft.com/docs/services/luis-endpoint-api-v3-0/operations/5cb0a91e54c9db63d589f433) to disable active learning. By default this value is `false`, to ensure that data does not leave the boundaries of the runtime region.
-> * If `log=true`, data is returned to the authoring region for active learning.
-
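As a minimal sketch, the following Python request sets `log=false` explicitly on a V3 prediction call. The endpoint host, app ID, and key are placeholders, and the URL shape is shown only as an illustration of where the parameter goes.

```python
# Minimal sketch: V3 prediction request with log=false so the utterance is not
# sent back to the authoring region for active learning. Placeholder values.
import requests

endpoint = "https://westeurope.api.cognitive.microsoft.com"
app_id = "YOUR-APP-ID"

response = requests.get(
    f"{endpoint}/luis/prediction/v3.0/apps/{app_id}/slots/production/predict",
    params={
        "subscription-key": "YOUR-PREDICTION-KEY",
        "query": "set the temperature to 30 degrees",
        "log": "false",
    },
)
print(response.json())
```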
-## Publishing to Europe
-
- Global region | Authoring API region | Publishing & querying region<br>`API region name` | Endpoint URL format |
-|--||||
-| Europe | `westeurope`| France Central<br>`francecentral` | `https://francecentral.api.cognitive.microsoft.com/luis/v2.0/apps/YOUR-APP-ID?subscription-key=YOUR-SUBSCRIPTION-KEY` |
-| Europe | `westeurope`| North Europe<br>`northeurope` | `https://northeurope.api.cognitive.microsoft.com/luis/v2.0/apps/YOUR-APP-ID?subscription-key=YOUR-SUBSCRIPTION-KEY` |
-| Europe | `westeurope`| West Europe<br>`westeurope` | `https://westeurope.api.cognitive.microsoft.com/luis/v2.0/apps/YOUR-APP-ID?subscription-key=YOUR-SUBSCRIPTION-KEY` |
-| Europe | `westeurope`| UK South<br>`uksouth` | `https://uksouth.api.cognitive.microsoft.com/luis/v2.0/apps/YOUR-APP-ID?subscription-key=YOUR-SUBSCRIPTION-KEY` |
-| Europe | `westeurope`| Switzerland North<br>`switzerlandnorth` | `https://switzerlandnorth.api.cognitive.microsoft.com/luis/v2.0/apps/YOUR-APP-ID?subscription-key=YOUR-SUBSCRIPTION-KEY` |
-| Europe | `westeurope`| Norway East<br>`norwayeast` | `https://norwayeast.api.cognitive.microsoft.com/luis/v2.0/apps/YOUR-APP-ID?subscription-key=YOUR-SUBSCRIPTION-KEY` |
-
-## Publishing to Australia
-
- Global region | Authoring API region | Publishing & querying region<br>`API region name` | Endpoint URL format |
-|--||||
-| Australia | `australiaeast` | Australia East<br>`australiaeast` | `https://australiaeast.api.cognitive.microsoft.com/luis/v2.0/apps/YOUR-APP-ID?subscription-key=YOUR-SUBSCRIPTION-KEY` |
-
-## Other publishing regions
-
- Global region | Authoring API region | Publishing & querying region<br>`API region name` | Endpoint URL format |
-|--||||
-| Africa | `westus`<br>[www.luis.ai][www.luis.ai]| South Africa North<br>`southafricanorth` | `https://southafricanorth.api.cognitive.microsoft.com/luis/v2.0/apps/YOUR-APP-ID?subscription-key=YOUR-SUBSCRIPTION-KEY` |
-| Asia | `westus`<br>[www.luis.ai][www.luis.ai]| Central India<br>`centralindia` | `https://centralindia.api.cognitive.microsoft.com/luis/v2.0/apps/YOUR-APP-ID?subscription-key=YOUR-SUBSCRIPTION-KEY` |
-| Asia | `westus`<br>[www.luis.ai][www.luis.ai]| East Asia<br>`eastasia` | `https://eastasia.api.cognitive.microsoft.com/luis/v2.0/apps/YOUR-APP-ID?subscription-key=YOUR-SUBSCRIPTION-KEY` |
-| Asia | `westus`<br>[www.luis.ai][www.luis.ai]| Japan East<br>`japaneast` | `https://japaneast.api.cognitive.microsoft.com/luis/v2.0/apps/YOUR-APP-ID?subscription-key=YOUR-SUBSCRIPTION-KEY` |
-| Asia | `westus`<br>[www.luis.ai][www.luis.ai]| Japan West<br>`japanwest` | `https://japanwest.api.cognitive.microsoft.com/luis/v2.0/apps/YOUR-APP-ID?subscription-key=YOUR-SUBSCRIPTION-KEY` |
-| Asia | `westus`<br>[www.luis.ai][www.luis.ai]| Jio India West<br>`jioindiawest` | `https://jioindiawest.api.cognitive.microsoft.com/luis/v2.0/apps/YOUR-APP-ID?subscription-key=YOUR-SUBSCRIPTION-KEY` |
-| Asia | `westus`<br>[www.luis.ai][www.luis.ai]| Korea Central<br>`koreacentral` | `https://koreacentral.api.cognitive.microsoft.com/luis/v2.0/apps/YOUR-APP-ID?subscription-key=YOUR-SUBSCRIPTION-KEY` |
-| Asia | `westus`<br>[www.luis.ai][www.luis.ai]| Southeast Asia<br>`southeastasia` | `https://southeastasia.api.cognitive.microsoft.com/luis/v2.0/apps/YOUR-APP-ID?subscription-key=YOUR-SUBSCRIPTION-KEY` |
-| Asia | `westus`<br>[www.luis.ai][www.luis.ai]| North UAE<br>`northuae` | `https://northuae.api.cognitive.microsoft.com/luis/v2.0/apps/YOUR-APP-ID?subscription-key=YOUR-SUBSCRIPTION-KEY` |
-| North America |`westus`<br>[www.luis.ai][www.luis.ai] | Canada Central<br>`canadacentral` | `https://canadacentral.api.cognitive.microsoft.com/luis/v2.0/apps/YOUR-APP-ID?subscription-key=YOUR-SUBSCRIPTION-KEY` |
-| North America |`westus`<br>[www.luis.ai][www.luis.ai] | Central US<br>`centralus` | `https://centralus.api.cognitive.microsoft.com/luis/v2.0/apps/YOUR-APP-ID?subscription-key=YOUR-SUBSCRIPTION-KEY` |
-| North America |`westus`<br>[www.luis.ai][www.luis.ai] | East US<br>`eastus` | `https://eastus.api.cognitive.microsoft.com/luis/v2.0/apps/YOUR-APP-ID?subscription-key=YOUR-SUBSCRIPTION-KEY` |
-| North America | `westus`<br>[www.luis.ai][www.luis.ai] | East US 2<br>`eastus2` | `https://eastus2.api.cognitive.microsoft.com/luis/v2.0/apps/YOUR-APP-ID?subscription-key=YOUR-SUBSCRIPTION-KEY` |
-| North America | `westus`<br>[www.luis.ai][www.luis.ai] | North Central US<br>`northcentralus` | `https://northcentralus.api.cognitive.microsoft.com/luis/v2.0/apps/YOUR-APP-ID?subscription-key=YOUR-SUBSCRIPTION-KEY` |
-| North America | `westus`<br>[www.luis.ai][www.luis.ai] | South Central US<br>`southcentralus` | `https://southcentralus.api.cognitive.microsoft.com/luis/v2.0/apps/YOUR-APP-ID?subscription-key=YOUR-SUBSCRIPTION-KEY` |
-| North America |`westus`<br>[www.luis.ai][www.luis.ai] | West Central US<br>`westcentralus` | `https://westcentralus.api.cognitive.microsoft.com/luis/v2.0/apps/YOUR-APP-ID?subscription-key=YOUR-SUBSCRIPTION-KEY` |
-| North America | `westus`<br>[www.luis.ai][www.luis.ai] | West US<br>`westus` | `https://westus.api.cognitive.microsoft.com/luis/v2.0/apps/YOUR-APP-ID?subscription-key=YOUR-SUBSCRIPTION-KEY` |
-| North America |`westus`<br>[www.luis.ai][www.luis.ai] | West US 2<br>`westus2` | `https://westus2.api.cognitive.microsoft.com/luis/v2.0/apps/YOUR-APP-ID?subscription-key=YOUR-SUBSCRIPTION-KEY` |
-| North America |`westus`<br>[www.luis.ai][www.luis.ai] | West US 3<br>`westus3` | `https://westus3.api.cognitive.microsoft.com/luis/v2.0/apps/YOUR-APP-ID?subscription-key=YOUR-SUBSCRIPTION-KEY` |
-| South America | `westus`<br>[www.luis.ai][www.luis.ai] | Brazil South<br>`brazilsouth` | `https://brazilsouth.api.cognitive.microsoft.com/luis/v2.0/apps/YOUR-APP-ID?subscription-key=YOUR-SUBSCRIPTION-KEY` |
-
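The endpoint URL format in these tables is the same across regions; only the region prefix changes. As a minimal sketch (with placeholder app ID and key), a client can build the V2 query URL like this:

```python
# Minimal sketch: build the V2 endpoint URL for a chosen publishing region.
# APP_ID and SUBSCRIPTION_KEY are placeholders; substitute your own values.
REGION = "westeurope"
APP_ID = "YOUR-APP-ID"
SUBSCRIPTION_KEY = "YOUR-SUBSCRIPTION-KEY"

endpoint = (
    f"https://{REGION}.api.cognitive.microsoft.com"
    f"/luis/v2.0/apps/{APP_ID}?subscription-key={SUBSCRIPTION_KEY}"
)
print(endpoint)  # append "&q=<utterance>" to send a query
```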
-## Endpoints
-
-Learn more about the [authoring and prediction endpoints](developer-reference-resource.md).
-
-## Failover regions
-
-Each region has a secondary region to fail over to. Failover will only happen in the same geographical region.
-
-Authoring regions have [paired fail-over regions](../../availability-zones/cross-region-replication-azure.md).
-
-The following publishing regions do not have a failover region:
-
-* Brazil South
-* Southeast Asia
-
-## Next steps
--
-> [Prebuilt entities reference](./luis-reference-prebuilt-entities.md)
-
- [www.luis.ai]: https://www.luis.ai
cognitive-services Luis Reference Response Codes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/LUIS/luis-reference-response-codes.md
- Title: API HTTP response codes - LUIS-
-description: Understand what HTTP response codes are returned from the LUIS Authoring and Endpoint APIs
-------- Previously updated : 06/14/2022---
-# Common API response codes and their meaning
---
-The [authoring](https://go.microsoft.com/fwlink/?linkid=2092087) and [endpoint](https://go.microsoft.com/fwlink/?linkid=2092356) APIs return HTTP response codes. While response messages include information specific to a request, the HTTP response status code is general.
-
-## Common status codes
-The following table lists some of the most common HTTP response status codes for the [authoring](https://go.microsoft.com/fwlink/?linkid=2092087) and [endpoint](https://go.microsoft.com/fwlink/?linkid=2092356) APIs:
-
-|Code|API|Explanation|
-|:--|--|--|
-|400|Authoring, Endpoint|request's parameters are incorrect meaning the required parameters are missing, malformed, or too large|
-|400|Authoring, Endpoint|request's body is incorrect meaning the JSON is missing, malformed, or too large|
-|401|Authoring|used endpoint key, instead of authoring key|
-|401|Authoring, Endpoint|invalid, malformed, or empty key|
-|401|Authoring, Endpoint| key doesn't match region|
-|401|Authoring|you are not the owner or collaborator|
-|401|Authoring|invalid order of API calls|
-|403|Authoring, Endpoint|total monthly key quota limit exceeded|
-|409|Endpoint|application is still loading|
-|410|Endpoint|application needs to be retrained and republished|
-|414|Endpoint|query exceeds maximum character limit|
-|429|Authoring, Endpoint|Rate limit is exceeded (requests/second)|
-
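Of these, 429 (rate limit) and 409 (application still loading) are transient and usually worth retrying, while the others indicate a problem with the request or key. The following sketch, using the `requests` library and hypothetical parameters, shows one way a client could handle that distinction.

```python
# Minimal sketch: retry transient LUIS status codes, surface the rest.
import time
import requests

def query_luis(url, params, retries=3, wait_seconds=2):
    for _ in range(retries):
        response = requests.get(url, params=params)
        if response.status_code in (409, 429):  # app loading / rate limited
            time.sleep(wait_seconds)
            continue
        response.raise_for_status()  # raises for 400, 401, 403, 410, 414, ...
        return response.json()
    response.raise_for_status()  # still transient after all retries
```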
-## Next steps
-
-* REST API [authoring](https://westus.dev.cognitive.microsoft.com/docs/services/5890b47c39e2bb17b84a55ff/operations/5890b47c39e2bb052c5b9c2f) and [endpoint](https://westus.dev.cognitive.microsoft.com/docs/services/5819c76f40a6350ce09de1ac/operations/5819c77140a63516d81aee78) documentation
cognitive-services Luis Traffic Manager https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/LUIS/luis-traffic-manager.md
- Title: Increase endpoint quota - LUIS-
-description: Language Understanding (LUIS) offers the ability to increase the endpoint request quota beyond a single key's quota. This is done by creating more keys for LUIS and adding them to the LUIS application on the **Publish** page in the **Resources and Keys** section.
------- Previously updated : 08/20/2019
-#Customer intent: As an advanced user, I want to understand how to use multiple LUIS endpoint keys to increase the number of endpoint requests my application receives.
--
-# Use Microsoft Azure Traffic Manager to manage endpoint quota across keys
--
-Language Understanding (LUIS) offers the ability to increase the endpoint request quota beyond a single key's quota. This is done by creating more keys for LUIS and adding them to the LUIS application on the **Publish** page in the **Resources and Keys** section.
-
-The client application has to manage the traffic across the keys; LUIS doesn't do that.
-
-This article explains how to manage the traffic across keys with Azure [Traffic Manager][traffic-manager-marketing]. You must already have a trained and published LUIS app. If you do not have one, follow the Prebuilt domain [quickstart](luis-get-started-create-app.md).
--
-## Connect to PowerShell in the Azure portal
-In the [Azure][azure-portal] portal, open the PowerShell window. The icon for the PowerShell window is the **>_** in the top navigation bar. By using PowerShell from the portal, you get the latest PowerShell version and you are authenticated. PowerShell in the portal requires an [Azure Storage](https://azure.microsoft.com/services/storage/) account.
-
-![Screenshot of Azure portal with PowerShell window open](./media/traffic-manager/azure-portal-powershell.png)
-
-The following sections use [Traffic Manager PowerShell cmdlets](/powershell/module/az.trafficmanager/#traffic_manager).
-
-## Create Azure resource group with PowerShell
-Before creating the Azure resources, create a resource group to contain all the resources. Name the resource group `luis-traffic-manager` and use the `West US` region. The region of the resource group stores metadata about the group. It won't slow down your resources if they are in another region.
-
-Create resource group with **[New-AzResourceGroup](/powershell/module/az.resources/new-azresourcegroup)** cmdlet:
-
-```powerShell
-New-AzResourceGroup -Name luis-traffic-manager -Location "West US"
-```
-
-## Create LUIS keys to increase total endpoint quota
-1. In the Azure portal, create two **Language Understanding** keys, one in the `West US` and one in the `East US`. Use the existing resource group, created in the previous section, named `luis-traffic-manager`.
-
- ![Screenshot of Azure portal with two LUIS keys in luis-traffic-manager resource group](./media/traffic-manager/luis-keys.png)
-
-2. In the [LUIS][LUIS] website, in the **Manage** section, on the **Azure Resources** page, assign keys to the app, and republish the app by selecting the **Publish** button in the top right menu.
-
- The example URL in the **endpoint** column uses a GET request with the endpoint key as a query parameter. Copy the two new keys' endpoint URLs. They are used as part of the Traffic Manager configuration later in this article.
-
-## Manage LUIS endpoint requests across keys with Traffic Manager
-Traffic Manager creates a new DNS access point for your endpoints. It does not act as a gateway or proxy; it works strictly at the DNS level. This example doesn't change any DNS records. It uses a DNS library to communicate with Traffic Manager to get the correct endpoint for that specific request. _Each_ request intended for LUIS first requires a Traffic Manager request to determine which LUIS endpoint to use.
-
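As a minimal sketch of that flow, a client can resolve the Traffic Manager name first and then send the LUIS request to whichever regional host comes back. The DNS name below matches the parent profile created later in this article; `dnspython` is an assumed library choice, not something the article prescribes.

```python
# Minimal sketch: follow the Traffic Manager CNAME chain (parent -> child ->
# regional target) to find the LUIS host to call for this request.
import dns.resolver  # pip install dnspython


def regional_luis_host(name="luis-dns-parent.trafficmanager.net"):
    target = name
    while target.endswith(".trafficmanager.net"):
        answer = dns.resolver.resolve(target, "CNAME")
        target = str(answer[0].target).rstrip(".")
    return target  # for example, westus.api.cognitive.microsoft.com


host = regional_luis_host()
print(f"https://{host}/luis/v2.0/apps/YOUR-APP-ID?subscription-key=YOUR-KEY&q=hello")
```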
-### Polling uses LUIS endpoint
-Traffic Manager polls the endpoints periodically to make sure each endpoint is still available. The polled URL needs to be accessible with a GET request and return a 200. The endpoint URL on the **Publish** page does this. Since each endpoint key has a different route and query string parameters, each endpoint key needs a different polling path. Each poll from Traffic Manager uses one request against your quota. The query string parameter **q** of the LUIS endpoint is the utterance sent to LUIS. Instead of sending a real utterance, this parameter is used here to tag Traffic Manager polling in the LUIS endpoint log, as a debugging technique while you configure Traffic Manager.
-
-Because each LUIS endpoint needs its own path, it needs its own Traffic Manager profile. To manage across profiles, create a [_nested_ Traffic Manager](../../traffic-manager/traffic-manager-nested-profiles.md) architecture. One parent profile points to the child profiles and manages traffic across them.
-
-Once Traffic Manager is configured, remember to change the path to use the `logging=false` query string parameter so your log doesn't fill up with polling requests.
-
-## Configure Traffic Manager with nested profiles
-The following sections create two child profiles, one for the East LUIS key and one for the West LUIS key. Then a parent profile is created and the two child profiles are added to the parent profile.
-
-### Create the East US Traffic Manager profile with PowerShell
-To create the East US Traffic Manager profile, there are several steps: create profile, add endpoint, and set endpoint. A Traffic Manager profile can have many endpoints but each endpoint has the same validation path. Because the LUIS endpoint URLs for the east and west subscriptions are different due to region and endpoint key, each LUIS endpoint has to be a single endpoint in the profile.
-
-1. Create profile with **[New-AzTrafficManagerProfile](/powershell/module/az.trafficmanager/new-aztrafficmanagerprofile)** cmdlet
-
- Use the following cmdlet to create the profile. Make sure to change the `appIdLuis` and `subscriptionKeyLuis`. The subscriptionKey is for the East US LUIS key. If the path is not correct, including the LUIS app ID and endpoint key, the Traffic Manager polling status is `degraded` because Traffic Manager can't successfully request the LUIS endpoint. Make sure the value of `q` is `traffic-manager-east` so you can see this value in the LUIS endpoint logs.
-
- ```powerShell
- $eastprofile = New-AzTrafficManagerProfile -Name luis-profile-eastus -ResourceGroupName luis-traffic-manager -TrafficRoutingMethod Performance -RelativeDnsName luis-dns-eastus -Ttl 30 -MonitorProtocol HTTPS -MonitorPort 443 -MonitorPath "/luis/v2.0/apps/<appID>?subscription-key=<subscriptionKey>&q=traffic-manager-east"
- ```
-
- This table explains each variable in the cmdlet:
-
- |Configuration parameter|Variable name or Value|Purpose|
- |--|--|--|
- |-Name|luis-profile-eastus|Traffic Manager name in Azure portal|
- |-ResourceGroupName|luis-traffic-manager|Created in previous section|
- |-TrafficRoutingMethod|Performance|For more information, see [Traffic Manager routing methods][routing-methods]. If using performance, the URL request to the Traffic Manager must come from the region of the user. If going through a chatbot or other application, it is the chatbot's responsibility to mimic the region in the call to the Traffic Manager. |
- |-RelativeDnsName|luis-dns-eastus|This is the subdomain for the service: luis-dns-eastus.trafficmanager.net|
- |-Ttl|30|Polling interval, 30 seconds|
- |-MonitorProtocol<BR>-MonitorPort|HTTPS<br>443|Port and protocol for LUIS is HTTPS/443|
- |-MonitorPath|`/luis/v2.0/apps/<appIdLuis>?subscription-key=<subscriptionKeyLuis>&q=traffic-manager-east`|Replace `<appIdLuis>` and `<subscriptionKeyLuis>` with your own values.|
-
- A successful request has no response.
-
-2. Add East US endpoint with **[Add-AzTrafficManagerEndpointConfig](/powershell/module/az.trafficmanager/add-aztrafficmanagerendpointconfig)** cmdlet
-
- ```powerShell
- Add-AzTrafficManagerEndpointConfig -EndpointName luis-east-endpoint -TrafficManagerProfile $eastprofile -Type ExternalEndpoints -Target eastus.api.cognitive.microsoft.com -EndpointLocation "eastus" -EndpointStatus Enabled
- ```
- This table explains each variable in the cmdlet:
-
- |Configuration parameter|Variable name or Value|Purpose|
- |--|--|--|
- |-EndpointName|luis-east-endpoint|Endpoint name displayed under the profile|
- |-TrafficManagerProfile|$eastprofile|Use profile object created in Step 1|
- |-Type|ExternalEndpoints|For more information, see [Traffic Manager endpoint][traffic-manager-endpoints] |
- |-Target|eastus.api.cognitive.microsoft.com|This is the domain for the LUIS endpoint.|
- |-EndpointLocation|"eastus"|Region of the endpoint|
- |-EndpointStatus|Enabled|Enable endpoint when it is created|
-
- The successful response looks like:
-
- ```console
- Id : /subscriptions/<azure-subscription-id>/resourceGroups/luis-traffic-manager/providers/Microsoft.Network/trafficManagerProfiles/luis-profile-eastus
- Name : luis-profile-eastus
- ResourceGroupName : luis-traffic-manager
- RelativeDnsName : luis-dns-eastus
- Ttl : 30
- ProfileStatus : Enabled
- TrafficRoutingMethod : Performance
- MonitorProtocol : HTTPS
- MonitorPort : 443
- MonitorPath : /luis/v2.0/apps/<luis-app-id>?subscription-key=f0517d185bcf467cba5147d6260bb868&q=traffic-manager-east
- MonitorIntervalInSeconds : 30
- MonitorTimeoutInSeconds : 10
- MonitorToleratedNumberOfFailures : 3
- Endpoints : {luis-east-endpoint}
- ```
-
-3. Set East US endpoint with **[Set-AzTrafficManagerProfile](/powershell/module/az.trafficmanager/set-aztrafficmanagerprofile)** cmdlet
-
- ```powerShell
- Set-AzTrafficManagerProfile -TrafficManagerProfile $eastprofile
- ```
-
- A successful response will be the same response as step 2.
-
-### Create the West US Traffic Manager profile with PowerShell
-To create the West US Traffic Manager profile, follow the same steps: create profile, add endpoint, and set endpoint.
-
-1. Create profile with **[New-AzTrafficManagerProfile](/powershell/module/az.TrafficManager/New-azTrafficManagerProfile)** cmdlet
-
- Use the following cmdlet to create the profile. Make sure to change the `appIdLuis` and `subscriptionKeyLuis`. The subscriptionKey is for the West US LUIS key. If the path is not correct, including the LUIS app ID and endpoint key, the Traffic Manager polling status is `degraded` because Traffic Manager can't successfully request the LUIS endpoint. Make sure the value of `q` is `traffic-manager-west` so you can see this value in the LUIS endpoint logs.
-
- ```powerShell
- $westprofile = New-AzTrafficManagerProfile -Name luis-profile-westus -ResourceGroupName luis-traffic-manager -TrafficRoutingMethod Performance -RelativeDnsName luis-dns-westus -Ttl 30 -MonitorProtocol HTTPS -MonitorPort 443 -MonitorPath "/luis/v2.0/apps/<appIdLuis>?subscription-key=<subscriptionKeyLuis>&q=traffic-manager-west"
- ```
-
- This table explains each variable in the cmdlet:
-
- |Configuration parameter|Variable name or Value|Purpose|
- |--|--|--|
- |-Name|luis-profile-westus|Traffic Manager name in Azure portal|
- |-ResourceGroupName|luis-traffic-manager|Created in previous section|
- |-TrafficRoutingMethod|Performance|For more information, see [Traffic Manager routing methods][routing-methods]. If using performance, the URL request to the Traffic Manager must come from the region of the user. If going through a chatbot or other application, it is the chatbot's responsibility to mimic the region in the call to the Traffic Manager. |
- |-RelativeDnsName|luis-dns-westus|This is the subdomain for the service: luis-dns-westus.trafficmanager.net|
- |-Ttl|30|Polling interval, 30 seconds|
- |-MonitorProtocol<BR>-MonitorPort|HTTPS<br>443|Port and protocol for LUIS is HTTPS/443|
- |-MonitorPath|`/luis/v2.0/apps/<appIdLuis>?subscription-key=<subscriptionKeyLuis>&q=traffic-manager-west`|Replace `<appId>` and `<subscriptionKey>` with your own values. Remember this endpoint key is different than the east endpoint key|
-
- A successful request has no response.
-
-2. Add West US endpoint with **[Add-AzTrafficManagerEndpointConfig](/powershell/module/az.TrafficManager/Add-azTrafficManagerEndpointConfig)** cmdlet
-
- ```powerShell
- Add-AzTrafficManagerEndpointConfig -EndpointName luis-west-endpoint -TrafficManagerProfile $westprofile -Type ExternalEndpoints -Target westus.api.cognitive.microsoft.com -EndpointLocation "westus" -EndpointStatus Enabled
- ```
-
- This table explains each variable in the cmdlet:
-
- |Configuration parameter|Variable name or Value|Purpose|
- |--|--|--|
- |-EndpointName|luis-west-endpoint|Endpoint name displayed under the profile|
- |-TrafficManagerProfile|$westprofile|Use profile object created in Step 1|
- |-Type|ExternalEndpoints|For more information, see [Traffic Manager endpoint][traffic-manager-endpoints] |
- |-Target|westus.api.cognitive.microsoft.com|This is the domain for the LUIS endpoint.|
- |-EndpointLocation|"westus"|Region of the endpoint|
- |-EndpointStatus|Enabled|Enable endpoint when it is created|
-
- The successful response looks like:
-
- ```console
- Id : /subscriptions/<azure-subscription-id>/resourceGroups/luis-traffic-manager/providers/Microsoft.Network/trafficManagerProfiles/luis-profile-westus
- Name : luis-profile-westus
- ResourceGroupName : luis-traffic-manager
- RelativeDnsName : luis-dns-westus
- Ttl : 30
- ProfileStatus : Enabled
- TrafficRoutingMethod : Performance
- MonitorProtocol : HTTPS
- MonitorPort : 443
- MonitorPath : /luis/v2.0/apps/c3fc5d1e-5187-40cc-af0f-fbde328aa16b?subscription-key=e3605f07e3cc4bedb7e02698a54c19cc&q=traffic-manager-west
- MonitorIntervalInSeconds : 30
- MonitorTimeoutInSeconds : 10
- MonitorToleratedNumberOfFailures : 3
- Endpoints : {luis-west-endpoint}
- ```
-
-3. Set West US endpoint with **[Set-AzTrafficManagerProfile](/powershell/module/az.TrafficManager/Set-azTrafficManagerProfile)** cmdlet
-
- ```powerShell
- Set-AzTrafficManagerProfile -TrafficManagerProfile $westprofile
- ```
-
- A successful response is the same response as step 2.
-
-### Create parent Traffic Manager profile
-Create the parent Traffic Manager profile and link two child Traffic Manager profiles to the parent.
-
-1. Create parent profile with **[New-AzTrafficManagerProfile](/powershell/module/az.TrafficManager/New-azTrafficManagerProfile)** cmdlet
-
- ```powerShell
- $parentprofile = New-AzTrafficManagerProfile -Name luis-profile-parent -ResourceGroupName luis-traffic-manager -TrafficRoutingMethod Performance -RelativeDnsName luis-dns-parent -Ttl 30 -MonitorProtocol HTTPS -MonitorPort 443 -MonitorPath "/"
- ```
-
- This table explains each variable in the cmdlet:
-
- |Configuration parameter|Variable name or Value|Purpose|
- |--|--|--|
- |-Name|luis-profile-parent|Traffic Manager name in Azure portal|
- |-ResourceGroupName|luis-traffic-manager|Created in previous section|
- |-TrafficRoutingMethod|Performance|For more information, see [Traffic Manager routing methods][routing-methods]. If using performance, the URL request to the Traffic Manager must come from the region of the user. If going through a chatbot or other application, it is the chatbot's responsibility to mimic the region in the call to the Traffic Manager. |
- |-RelativeDnsName|luis-dns-parent|This is the subdomain for the service: luis-dns-parent.trafficmanager.net|
- |-Ttl|30|Polling interval, 30 seconds|
- |-MonitorProtocol<BR>-MonitorPort|HTTPS<br>443|Port and protocol for LUIS is HTTPS/443|
- |-MonitorPath|`/`|This path doesn't matter because the child endpoint paths are used instead.|
-
- A successful request has no response.
-
-2. Add East US child profile to parent with **[Add-AzTrafficManagerEndpointConfig](/powershell/module/az.TrafficManager/Add-azTrafficManagerEndpointConfig)** and **NestedEndpoints** type
-
- ```powerShell
- Add-AzTrafficManagerEndpointConfig -EndpointName child-endpoint-useast -TrafficManagerProfile $parentprofile -Type NestedEndpoints -TargetResourceId $eastprofile.Id -EndpointStatus Enabled -EndpointLocation "eastus" -MinChildEndpoints 1
- ```
-
- This table explains each variable in the cmdlet:
-
- |Configuration parameter|Variable name or Value|Purpose|
- |--|--|--|
- |-EndpointName|child-endpoint-useast|East profile|
- |-TrafficManagerProfile|$parentprofile|Profile to assign this endpoint to|
- |-Type|NestedEndpoints|For more information, see [Add-AzTrafficManagerEndpointConfig](/powershell/module/az.trafficmanager/Add-azTrafficManagerEndpointConfig). |
- |-TargetResourceId|$eastprofile.Id|ID of the child profile|
- |-EndpointStatus|Enabled|Endpoint status after adding to parent|
- |-EndpointLocation|"eastus"|[Azure region name](https://azure.microsoft.com/global-infrastructure/regions/) of resource|
- |-MinChildEndpoints|1|Minimum number of child endpoints|
-
- The successful response looks like the following and includes the new `child-endpoint-useast` endpoint:
-
- ```console
- Id : /subscriptions/<azure-subscription-id>/resourceGroups/luis-traffic-manager/providers/Microsoft.Network/trafficManagerProfiles/luis-profile-parent
- Name : luis-profile-parent
- ResourceGroupName : luis-traffic-manager
- RelativeDnsName : luis-dns-parent
- Ttl : 30
- ProfileStatus : Enabled
- TrafficRoutingMethod : Performance
- MonitorProtocol : HTTPS
- MonitorPort : 443
- MonitorPath : /
- MonitorIntervalInSeconds : 30
- MonitorTimeoutInSeconds : 10
- MonitorToleratedNumberOfFailures : 3
- Endpoints : {child-endpoint-useast}
- ```
-
-3. Add West US child profile to parent with **[Add-AzTrafficManagerEndpointConfig](/powershell/module/az.TrafficManager/Add-azTrafficManagerEndpointConfig)** cmdlet and **NestedEndpoints** type
-
- ```powerShell
- Add-AzTrafficManagerEndpointConfig -EndpointName child-endpoint-uswest -TrafficManagerProfile $parentprofile -Type NestedEndpoints -TargetResourceId $westprofile.Id -EndpointStatus Enabled -EndpointLocation "westus" -MinChildEndpoints 1
- ```
-
- This table explains each variable in the cmdlet:
-
- |Configuration parameter|Variable name or Value|Purpose|
- |--|--|--|
- |-EndpointName|child-endpoint-uswest|West profile|
- |-TrafficManagerProfile|$parentprofile|Profile to assign this endpoint to|
- |-Type|NestedEndpoints|For more information, see [Add-AzTrafficManagerEndpointConfig](/powershell/module/az.trafficmanager/Add-azTrafficManagerEndpointConfig). |
- |-TargetResourceId|$westprofile.Id|ID of the child profile|
- |-EndpointStatus|Enabled|Endpoint status after adding to parent|
- |-EndpointLocation|"westus"|[Azure region name](https://azure.microsoft.com/global-infrastructure/regions/) of resource|
- |-MinChildEndpoints|1|Minimum number of child endpoints|
-
- The successful response looks like the following and includes both the previous `child-endpoint-useast` endpoint and the new `child-endpoint-uswest` endpoint:
-
- ```console
- Id : /subscriptions/<azure-subscription-id>/resourceGroups/luis-traffic-manager/providers/Microsoft.Network/trafficManagerProfiles/luis-profile-parent
- Name : luis-profile-parent
- ResourceGroupName : luis-traffic-manager
- RelativeDnsName : luis-dns-parent
- Ttl : 30
- ProfileStatus : Enabled
- TrafficRoutingMethod : Performance
- MonitorProtocol : HTTPS
- MonitorPort : 443
- MonitorPath : /
- MonitorIntervalInSeconds : 30
- MonitorTimeoutInSeconds : 10
- MonitorToleratedNumberOfFailures : 3
- Endpoints : {child-endpoint-useast, child-endpoint-uswest}
- ```
-
-4. Set endpoints with **[Set-AzTrafficManagerProfile](/powershell/module/az.TrafficManager/Set-azTrafficManagerProfile)** cmdlet
-
- ```powerShell
- Set-AzTrafficManagerProfile -TrafficManagerProfile $parentprofile
- ```
-
- A successful response is the same response as step 3.
-
-### PowerShell variables
-In the previous sections, three PowerShell variables were created: `$eastprofile`, `$westprofile`, and `$parentprofile`. These variables are used toward the end of the Traffic Manager configuration. If you chose not to create the variables, forgot to create them, or your PowerShell session timed out, you can use the **[Get-AzTrafficManagerProfile](/powershell/module/az.TrafficManager/Get-azTrafficManagerProfile)** cmdlet to get each profile again and assign it to a variable.
-
-Replace the items in angle brackets, `<>`, with the correct values for each of the three profiles you need.
-
-```powerShell
-$<variable-name> = Get-AzTrafficManagerProfile -Name <profile-name> -ResourceGroupName luis-traffic-manager
-```
-
-## Verify Traffic Manager works
-To verify that the Traffic Manager profiles work, the profiles need to have a status of `Online`. This status is based on polling the endpoint path.
-
-### View new profiles in the Azure portal
-You can verify that all three profiles are created by looking at the resources in the `luis-traffic-manager` resource group.
-
-![Screenshot of Azure resource group luis-traffic-manager](./media/traffic-manager/traffic-manager-profiles.png)
-
-### Verify the profile status is Online
-Traffic Manager polls the path of each endpoint to make sure it's online. If it is, the status of each child profile is `Online`. This status is displayed on the **Overview** page of each profile.
-
-![Screenshot of Azure Traffic Manager profile Overview showing Monitor Status of Online](./media/traffic-manager/profile-status-online.png)
-
-### Validate Traffic Manager polling works
-Another way to validate that Traffic Manager polling works is with the LUIS endpoint logs. On the [LUIS][LUIS] website apps list page, export the endpoint log for the application. Because Traffic Manager polls the two endpoints frequently, there are entries in the logs even if the endpoints have only been online for a few minutes. Remember to look for entries where the query begins with `traffic-manager-`.
-
-```console
-traffic-manager-west 6/7/2018 19:19 {"query":"traffic-manager-west","intents":[{"intent":"None","score":0.944767}],"entities":[]}
-traffic-manager-east 6/7/2018 19:20 {"query":"traffic-manager-east","intents":[{"intent":"None","score":0.944767}],"entities":[]}
-```
-
-### Validate DNS response from Traffic Manager works
-To validate that the DNS response returns a LUIS endpoint, request the Traffic Manager parent profile DNS name using a DNS client library. The DNS name for the parent profile is `luis-dns-parent.trafficmanager.net`.
-
-The following Node.js code makes a request for the parent profile and returns a LUIS endpoint:
-
-```javascript
-const dns = require('dns');
-
-dns.resolveAny('luis-dns-parent.trafficmanager.net', (err, ret) => {
- console.log('ret', ret);
-});
-```
-
-The successful response with the LUIS endpoint is:
-
-```json
-[
- {
- value: 'westus.api.cognitive.microsoft.com',
- type: 'CNAME'
- }
-]
-```
-
-## Use the Traffic Manager parent profile
-To manage traffic across endpoints, insert a call to the Traffic Manager DNS to find the LUIS endpoint. Make this call for every LUIS endpoint request, and have it simulate the geographic location of the user of the LUIS client application. Add the DNS resolution code between your LUIS client application and the request to LUIS for the endpoint prediction, as shown in the following sketch.
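-
-The following is a minimal sketch, not part of the original tutorial, of wiring the two pieces together: it reuses the DNS lookup shown earlier to resolve the parent profile, then calls the regional LUIS V2 endpoint the lookup returns. The per-region app IDs and endpoint keys are placeholders you must supply (remember that each region uses its own endpoint key), and with the Performance routing method the DNS answer depends on where the lookup originates.
-
-```javascript
-// Sketch only: resolve the parent Traffic Manager profile, then query the
-// regional LUIS endpoint it returns. Replace the placeholder app IDs and keys.
-const dns = require('dns');
-const https = require('https');
-
-// Hypothetical per-region settings - each region has its own endpoint key.
-const regions = {
-    'westus.api.cognitive.microsoft.com': { appId: '<appIdLuis>', key: '<subscriptionKeyLuisWest>' },
-    'eastus.api.cognitive.microsoft.com': { appId: '<appIdLuis>', key: '<subscriptionKeyLuisEast>' }
-};
-
-dns.resolveAny('luis-dns-parent.trafficmanager.net', (err, records) => {
-    if (err) throw err;
-
-    // The CNAME record value is the regional LUIS host, for example westus.api.cognitive.microsoft.com.
-    const host = records.find(record => record.type === 'CNAME').value;
-    const { appId, key } = regions[host];
-    const query = encodeURIComponent('<your utterance>');
-    const path = `/luis/v2.0/apps/${appId}?subscription-key=${key}&q=${query}`;
-
-    https.get({ host, path }, (res) => {
-        let body = '';
-        res.on('data', chunk => body += chunk);
-        res.on('end', () => console.log(body)); // LUIS prediction JSON
-    });
-});
-```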
-
-## Resolving a degraded state
-
-Enable [diagnostic logs](../../traffic-manager/traffic-manager-diagnostic-logs.md) for Traffic Manager to see why endpoint status is degraded.
-
-## Clean up
-Remove the two LUIS endpoint keys, the three Traffic Manager profiles, and the resource group that contains these five resources. Do this from the Azure portal: delete the five resources from the resources list, and then delete the resource group.
-
-## Next steps
-
-Review [middleware](/azure/bot-service/bot-builder-create-middleware?tabs=csaddmiddleware%252ccsetagoverwrite%252ccsmiddlewareshortcircuit%252ccsfallback%252ccsactivityhandler) options in BotFramework v4 to understand how this traffic management code can be added to a BotFramework bot.
-
-[traffic-manager-marketing]: https://azure.microsoft.com/services/traffic-manager/
-[traffic-manager-docs]: ../../traffic-manager/index.yml
-[LUIS]: ./luis-reference-regions.md#luis-website
-[azure-portal]: https://portal.azure.com/
-[azure-storage]: https://azure.microsoft.com/services/storage/
-[routing-methods]: ../../traffic-manager/traffic-manager-routing-methods.md
-[traffic-manager-endpoints]: ../../traffic-manager/traffic-manager-endpoint-types.md
cognitive-services Luis Tutorial Bing Spellcheck https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/LUIS/luis-tutorial-bing-spellcheck.md
- Title: Correct misspelled words - LUIS-
-description: Correct misspelled words in utterances by adding Bing Spell Check API V7 to LUIS endpoint queries.
------- Previously updated : 01/05/2022---
-# Correct misspelled words with Bing Resource
---
-The V3 prediction API now supports the [Bing Spellcheck API](/bing/search-apis/bing-spell-check/overview). Add spell checking to your application by including the key to your Bing search resource in the header of your requests. You can use an existing Bing resource if you already own one, or [create a new one](https://portal.azure.com/#create/Microsoft.BingSearch) to use this feature.
-
-Prediction output example for a misspelled query:
-
-```json
-{
- "query": "bouk me a fliht to kayro",
- "prediction": {
- "alteredQuery": "book me a flight to cairo",
- "topIntent": "book a flight",
- "intents": {
- "book a flight": {
- "score": 0.9480589
- },
- "None": {
- "score": 0.0332136229
- }
- },
- "entities": {}
- }
-}
-```
-
-Corrections to spelling are made before the LUIS user utterance prediction. You can see any changes to the original utterance, including spelling, in the response.
-
-## Create Bing Search Resource
-
-To create a Bing Search resource in the Azure portal, follow these instructions:
-
-1. Log in to the [Azure portal](https://portal.azure.com).
-
-2. Select **Create a resource** in the top left corner.
-
-3. In the search box, enter `Bing Search V7` and select the service.
-
-4. An information panel appears to the right containing information including the Legal Notice. Select **Create** to begin the subscription creation process.
-
-> [!div class="mx-imgBorder"]
-> ![Bing Spell Check API V7 resource](./media/luis-tutorial-bing-spellcheck/bing-search-resource-portal.png)
-
-5. In the next panel, enter your service settings. Wait for the service creation process to finish.
-
-6. After the resource is created, go to the **Keys and Endpoint** blade on the left.
-
-7. Copy one of the keys to be added to the header of your prediction request. You will only need one of the two keys.
-
-<!--
-## Using the key in LUIS test panel
-There are two places in LUIS to use the key. The first is in the [test panel](luis-interactive-test.md#view-bing-spell-check-corrections-in-test-panel). The key isn't saved into LUIS but instead is a session variable. You need to set the key every time you want the test panel to apply the Bing Spell Check API v7 service to the utterance. See [instructions](luis-interactive-test.md#view-bing-spell-check-corrections-in-test-panel) in the test panel for setting the key.
>--
-## Adding the key to the endpoint URL
-For each query you want to apply spelling correction to, the endpoint request needs the Bing Spellcheck resource key passed in a request header. You may have a chatbot that calls LUIS, or you may call the LUIS endpoint API directly. Regardless of how the endpoint is called, every call must include the required header for spelling corrections to work properly. Set the **mkt-bing-spell-check-key** header to the key value.
-
-|Header Key|Header Value|
-|--|--|
-|`mkt-bing-spell-check-key`|Keys found in **Keys and Endpoint** blade of your resource|
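-
-As a minimal sketch (not an official sample), the following Node.js request adds the header named in the preceding table to a prediction call. The prediction endpoint URL and both key values are placeholders you must replace with your own values.
-
-```javascript
-// Sketch only: pass the Bing Spell Check key as a request header on a LUIS query.
-const https = require('https');
-
-const predictionUrl = '<your-LUIS-prediction-endpoint-URL-including-the-query>'; // placeholder
-const bingSpellCheckKey = '<your-Bing-resource-key>';                            // placeholder
-
-https.get(predictionUrl, { headers: { 'mkt-bing-spell-check-key': bingSpellCheckKey } }, (res) => {
-    let body = '';
-    res.on('data', chunk => body += chunk);
-    // When a correction is made, the response contains both "query" and "alteredQuery".
-    res.on('end', () => console.log(body));
-});
-```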
-
-## Send misspelled utterance to LUIS
-1. Add a misspelled utterance to the prediction query you're sending, such as "How far is the mountainn?". In English, `mountain`, with one `n`, is the correct spelling.
-
-2. LUIS responds with a JSON result for `How far is the mountain?`. If Bing Spell Check API v7 detects a misspelling, the `query` field in the LUIS app's JSON response contains the original query, and the `alteredQuery` field contains the corrected query sent to LUIS.
-
-```json
-{
- "query": "How far is the mountainn?",
- "alteredQuery": "How far is the mountain?",
- "topScoringIntent": {
- "intent": "Concierge",
- "score": 0.183866
- },
- "entities": []
-}
-```
-
-## Ignore spelling mistakes
-
-If you don't want to use the Bing Search API v7 service, you need to teach LUIS both the correct and incorrect spellings.
-
-Two solutions are:
-
-* Label example utterances that have all the different spellings so that LUIS can learn proper spelling as well as typos. This option requires more labeling effort than using a spell checker.
-* Create a phrase list with all variations of the word. With this solution, you do not need to label the word variations in the example utterances.
-
-## Next steps
-[Learn more about example utterances](./how-to/entities.md)
cognitive-services Luis Tutorial Node Import Utterances Csv https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/LUIS/luis-tutorial-node-import-utterances-csv.md
- Title: Import utterances using Node.js - LUIS-
-description: Learn how to build a LUIS app programmatically from preexisting data in CSV format using the LUIS Authoring API.
------- Previously updated : 05/17/2021---
-# Build a LUIS app programmatically using Node.js
---
-LUIS provides a programmatic API that does everything the [LUIS](luis-reference-regions.md) website does. This can save time when you have preexisting data and it would be faster to create a LUIS app programmatically than to enter the information by hand.
--
-## Prerequisites
-
-* Sign in to the [LUIS](luis-reference-regions.md) website and find your [authoring key](luis-how-to-azure-subscription.md) in Account Settings. You use this key to call the Authoring APIs.
-* If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/cognitive-services/) before you begin.
-* This article starts with a CSV for a hypothetical company's log files of user requests. Download it [here](https://github.com/Azure-Samples/cognitive-services-language-understanding/blob/master/examples/build-app-programmatically-csv/IoT.csv).
-* Install the latest Node.js with NPM. Download it from [here](https://nodejs.org/en/download/).
-* **[Recommended]** Visual Studio Code for IntelliSense and debugging, download it from [here](https://code.visualstudio.com/) for free.
-
-All of the code in this article is available on the [Azure-Samples Language Understanding GitHub repository](https://github.com/Azure-Samples/cognitive-services-language-understanding/tree/master/examples/build-app-programmatically-csv).
-
-## Map preexisting data to intents and entities
-Even if your system wasn't created with LUIS in mind, if it contains textual data that maps to different things users want to do, you might be able to map the existing categories of user input to intents in LUIS. If you can identify important words or phrases in what the users said, those words might map to entities.
-
-Open the [`IoT.csv`](https://github.com/Azure-Samples/cognitive-services-language-understanding/blob/master/examples/build-app-programmatically-csv/IoT.csv) file. It contains a log of user queries to a hypothetical home automation service, including how they were categorized, what the user said, and some columns with useful information pulled out of them.
-
-![CSV file of pre-existing data](./media/luis-tutorial-node-import-utterances-csv/csv.png)
-
-You see that the **RequestType** column could be intents, and the **Request** column shows an example utterance. The other fields could be entities if they occur in the utterance. Because there are intents, entities, and example utterances, you have the requirements for a simple sample app.
-
-## Steps to generate a LUIS app from non-LUIS data
-To generate a new LUIS app from the CSV file:
-
-* Parse the data from the CSV file:
- * Convert to a format that you can upload to LUIS using the Authoring API.
- * From the parsed data, gather information about intents and entities.
-* Make authoring API calls to:
- * Create the app.
- * Add intents and entities that were gathered from the parsed data.
- * Once you have created the LUIS app, you can add the example utterances from the parsed data.
-
-You can see this program flow in the last part of the `index.js` file. Copy or [download](https://github.com/Azure-Samples/cognitive-services-language-understanding/blob/master/examples/build-app-programmatically-csv/index.js) this code and save it in `index.js`.
--
- [!code-javascript[Node.js code for calling the steps to build a LUIS app](~/samples-luis/examples/build-app-programmatically-csv/index.js)]
--
-## Parse the CSV
-
-The column entries that contain the utterances in the CSV have to be parsed into a JSON format that LUIS can understand. This JSON format must contain an `intentName` field that identifies the intent of the utterance. It must also contain an `entityLabels` field, which can be empty if there are no entities in the utterance.
-
-For example, the entry for "Turn on the lights" maps to this JSON:
-
-```json
- {
- "text": "Turn on the lights",
- "intentName": "TurnOn",
- "entityLabels": [
- {
- "entityName": "Operation",
- "startCharIndex": 5,
- "endCharIndex": 6
- },
- {
- "entityName": "Device",
- "startCharIndex": 12,
- "endCharIndex": 17
- }
- ]
- }
-```
-
-In this example, the `intentName` comes from the request category under the **RequestType** column heading in the CSV file, and the `entityName` comes from the other columns with key information. For example, if there's an entry for **Operation** or **Device**, and that string also occurs in the actual request, then it can be labeled as an entity. The following code demonstrates this parsing process. You can copy or [download](https://github.com/Azure-Samples/cognitive-services-language-understanding/blob/master/examples/build-app-programmatically-csv/_parse.js) it and save it to `_parse.js`.
-
- [!code-javascript[Node.js code for parsing a CSV file to extract intents, entities, and labeled utterances](~/samples-luis/examples/build-app-programmatically-csv/_parse.js)]
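-
-The real parsing logic lives in `_parse.js`. Purely as an illustration of how the inclusive `startCharIndex` and `endCharIndex` values in the JSON example line up with the utterance text, a hypothetical helper (not part of the sample) could compute a label like this:
-
-```javascript
-// Illustration only: compute an entityLabels entry for an utterance.
-// endCharIndex is inclusive, matching the JSON example above.
-function labelEntity(utterance, entityName, entityText) {
-    const startCharIndex = utterance.toLowerCase().indexOf(entityText.toLowerCase());
-    if (startCharIndex === -1) return null; // entity text not found in the utterance
-    return {
-        entityName,
-        startCharIndex,
-        endCharIndex: startCharIndex + entityText.length - 1
-    };
-}
-
-console.log(labelEntity('Turn on the lights', 'Operation', 'on'));
-// { entityName: 'Operation', startCharIndex: 5, endCharIndex: 6 }
-```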
---
-## Create the LUIS app
-Once the data has been parsed into JSON, add it to a LUIS app. The following code creates the LUIS app. Copy or [download](https://github.com/Azure-Samples/cognitive-services-language-understanding/blob/master/examples/build-app-programmatically-csv/_create.js) it, and save it into `_create.js`.
-
- [!code-javascript[Node.js code for creating a LUIS app](~/samples-luis/examples/build-app-programmatically-csv/_create.js)]
--
-## Add intents
-Once you have an app, you need to add intents to it. The following code adds the intents to the LUIS app. Copy or [download](https://github.com/Azure-Samples/cognitive-services-language-understanding/blob/master/examples/build-app-programmatically-csv/_intents.js) it, and save it into `_intents.js`.
-
- [!code-javascript[Node.js code for creating a series of intents](~/samples-luis/examples/build-app-programmatically-csv/_intents.js)]
--
-## Add entities
-The following code adds the entities to the LUIS app. Copy or [download](https://github.com/Azure-Samples/cognitive-services-language-understanding/blob/master/examples/build-app-programmatically-csv/_entities.js) it, and save it into `_entities.js`.
-
- [!code-javascript[Node.js code for creating entities](~/samples-luis/examples/build-app-programmatically-csv/_entities.js)]
---
-## Add utterances
-Once the entities and intents have been defined in the LUIS app, you can add the utterances. The following code uses the [Utterances_AddBatch](https://westus.dev.cognitive.microsoft.com/docs/services/5890b47c39e2bb17b84a55ff/operations/5890b47c39e2bb052c5b9c09) API, which allows you to add up to 100 utterances at a time. Copy or [download](https://github.com/Azure-Samples/cognitive-services-language-understanding/blob/master/examples/build-app-programmatically-csv/_upload.js) it, and save it into `_upload.js`.
-
- [!code-javascript[Node.js code for adding utterances](~/samples-luis/examples/build-app-programmatically-csv/_upload.js)]
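-
-The included `_upload.js` performs the actual upload. As a simple illustration of the 100-utterance-per-call limit, a hypothetical chunking helper (not part of the sample) could split the parsed utterances into batches first:
-
-```javascript
-// Illustration only: split utterances into batches of at most 100,
-// the documented per-call limit for the batch utterance API.
-function batchUtterances(utterances, batchSize = 100) {
-    const batches = [];
-    for (let i = 0; i < utterances.length; i += batchSize) {
-        batches.push(utterances.slice(i, i + batchSize));
-    }
-    return batches;
-}
-
-// Usage sketch, where uploadBatch is a hypothetical stand-in for the upload call:
-// for (const batch of batchUtterances(parsedUtterances)) {
-//     await uploadBatch(batch);
-// }
-```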
--
-## Run the code
--
-### Install Node.js dependencies
-Install the Node.js dependencies from NPM in the terminal/command line.
-
-```console
-> npm install
-```
-
-### Change Configuration Settings
-To use this application, you need to change the values in the `index.js` file to your own authoring key, and provide the name you want the app to have. You can also set the app's culture or change the version number.
-
-Open the index.js file, and change these values at the top of the file.
--
-```javascript
-// Change these values
-const LUIS_programmaticKey = "YOUR_AUTHORING_KEY";
-const LUIS_appName = "Sample App";
-const LUIS_appCulture = "en-us";
-const LUIS_versionId = "0.1";
-```
-
-### Run the script
-Run the script from a terminal/command line with Node.js.
-
-```console
-> node index.js
-```
-
-or
-
-```console
-> npm start
-```
-
-### Application progress
-While the application is running, the command line shows progress. The command line output includes the format of the responses from LUIS.
-
-```console
-> node index.js
-intents: ["TurnOn","TurnOff","Dim","Other"]
-entities: ["Operation","Device","Room"]
-parse done
-JSON file should contain utterances. Next step is to create an app with the intents and entities it found.
-Called createApp, created app with ID 314b306c-0033-4e09-92ab-94fe5ed158a2
-Called addIntents for intent named TurnOn.
-Called addIntents for intent named TurnOff.
-Called addIntents for intent named Dim.
-Called addIntents for intent named Other.
-Results of all calls to addIntent = [{"response":"e7eaf224-8c61-44ed-a6b0-2ab4dc56f1d0"},{"response":"a8a17efd-f01c-488d-ad44-a31a818cf7d7"},{"response":"bc7c32fc-14a0-4b72-bad4-d345d807f965"},{"response":"727a8d73-cd3b-4096-bc8d-d7cfba12eb44"}]
-called addEntity for entity named Operation.
-called addEntity for entity named Device.
-called addEntity for entity named Room.
-Results of all calls to addEntity= [{"response":"6a7e914f-911d-4c6c-a5bc-377afdce4390"},{"response":"56c35237-593d-47f6-9d01-2912fa488760"},{"response":"f1dd440c-2ce3-4a20-a817-a57273f169f3"}]
-retrying add examples...
-
-Results of add utterances = [{"response":[{"value":{"UtteranceText":"turn on the lights","ExampleId":-67649},"hasError":false},{"value":{"UtteranceText":"turn the heat on","ExampleId":-69067},"hasError":false},{"value":{"UtteranceText":"switch on the kitchen fan","ExampleId":-3395901},"hasError":false},{"value":{"UtteranceText":"turn off bedroom lights","ExampleId":-85402},"hasError":false},{"value":{"UtteranceText":"turn off air conditioning","ExampleId":-8991572},"hasError":false},{"value":{"UtteranceText":"kill the lights","ExampleId":-70124},"hasError":false},{"value":{"UtteranceText":"dim the lights","ExampleId":-174358},"hasError":false},{"value":{"UtteranceText":"hi how are you","ExampleId":-143722},"hasError":false},{"value":{"UtteranceText":"answer the phone","ExampleId":-69939},"hasError":false},{"value":{"UtteranceText":"are you there","ExampleId":-149588},"hasError":false},{"value":{"UtteranceText":"help","ExampleId":-81949},"hasError":false},{"value":{"UtteranceText":"testing the circuit","ExampleId":-11548708},"hasError":false}]}]
-upload done
-```
----
-## Open the LUIS app
-Once the script completes, you can sign in to [LUIS](luis-reference-regions.md) and see the LUIS app you created under **My Apps**. You should be able to see the utterances you added under the **TurnOn**, **TurnOff**, and **None** intents.
-
-![TurnOn intent](./media/luis-tutorial-node-import-utterances-csv/imported-utterances-661.png)
--
-## Next steps
-
-[Test and train your app in LUIS website](luis-interactive-test.md)
-
-## Additional resources
-
-This sample application uses the following LUIS APIs:
-- [create app](https://westus.dev.cognitive.microsoft.com/docs/services/5890b47c39e2bb17b84a55ff/operations/5890b47c39e2bb052c5b9c36)
-- [add intents](https://westus.dev.cognitive.microsoft.com/docs/services/5890b47c39e2bb17b84a55ff/operations/5890b47c39e2bb052c5b9c0c)
-- [add entities](https://westus.dev.cognitive.microsoft.com/docs/services/5890b47c39e2bb17b84a55ff/operations/5890b47c39e2bb052c5b9c0e)
-- [add utterances](https://westus.dev.cognitive.microsoft.com/docs/services/5890b47c39e2bb17b84a55ff/operations/5890b47c39e2bb052c5b9c09)
cognitive-services Luis User Privacy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/LUIS/luis-user-privacy.md
- Title: Export & delete data - LUIS-
-description: You have full control over viewing, exporting, and deleting your data. Delete customer data to ensure privacy and compliance.
------- Previously updated : 04/08/2020---
-# Export and delete your customer data in Language Understanding (LUIS) in Cognitive Services
---
-Delete customer data to ensure privacy and compliance.
-
-## Summary of customer data request features
-Language Understanding Intelligent Service (LUIS) preserves customer content to operate the service, but the LUIS user has full control over viewing, exporting, and deleting their data. This can be done through the LUIS web [portal](luis-reference-regions.md) or the [LUIS Authoring (also known as Programmatic) APIs](https://westus.dev.cognitive.microsoft.com/docs/services/5890b47c39e2bb17b84a55ff/operations/5890b47c39e2bb052c5b9c2f).
--
-Customer content is stored encrypted in Microsoft regional Azure storage and includes:
-- User account content collected at registration
-- Training data required to build the models
-- Logged user queries used by [active learning](luis-concept-review-endpoint-utterances.md) to help improve the model
- - Users can turn off query logging by appending `&log=false` to the request. For details, see [How can I disable the logging of utterances?](./troubleshooting.yml#how-can-i-disable-the-logging-of-utterances-). A minimal example follows this list.
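-
-As a minimal sketch (not part of the original article), the following snippet shows where the `log=false` parameter goes in a V2 endpoint query. The region, app ID, and endpoint key are placeholders.
-
-```javascript
-// Sketch only: build a V2 endpoint query with utterance logging disabled.
-const appId = '<appId>';        // placeholder
-const key = '<endpointKey>';    // placeholder
-const query = encodeURIComponent('turn on the lights');
-
-const url = `https://<region>.api.cognitive.microsoft.com/luis/v2.0/apps/${appId}` +
-            `?subscription-key=${key}&q=${query}&log=false`; // log=false disables query logging
-console.log(url);
-```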
-
-## Deleting customer data
-LUIS users have full control to delete any user content, either through the LUIS web portal or the LUIS Authoring (also known as Programmatic) APIs. The following table displays links assisting with both:
-
-| | **User Account** | **Application** | **Example Utterance(s)** | **End-user queries** |
-|--|--|--|--|--|
-| **Portal** | [Link](luis-concept-data-storage.md#delete-an-account) | [Link](luis-how-to-start-new-app.md#delete-app) | [Link](luis-concept-data-storage.md#utterances-in-an-intent) | [Active learning utterances](luis-how-to-review-endpoint-utterances.md#disable-active-learning)<br>[Logged Utterances](luis-concept-data-storage.md#disable-logging-utterances) |
-| **APIs** | [Link](https://westus.dev.cognitive.microsoft.com/docs/services/5890b47c39e2bb17b84a55ff/operations/5890b47c39e2bb052c5b9c4c) | [Link](https://westus.dev.cognitive.microsoft.com/docs/services/5890b47c39e2bb17b84a55ff/operations/5890b47c39e2bb052c5b9c39) | [Link](https://westus.dev.cognitive.microsoft.com/docs/services/5890b47c39e2bb17b84a55ff/operations/5890b47c39e2bb052c5b9c0b) | [Link](https://westus.dev.cognitive.microsoft.com/docs/services/5890b47c39e2bb17b84a55ff/operations/58b6f32139e2bb139ce823c9) |
--
-## Exporting customer data
-LUIS users have full control to view the data on the portal; however, it must be exported through the LUIS Authoring (also known as Programmatic) APIs. The following table displays links assisting with data exports via the LUIS Authoring (also known as Programmatic) APIs:
-
-| | **User Account** | **Application** | **Utterance(s)** | **End-user queries** |
-|--|--|--|--|--|
-| **APIs** | [Link](https://westus.dev.cognitive.microsoft.com/docs/services/5890b47c39e2bb17b84a55ff/operations/5890b47c39e2bb052c5b9c48) | [Link](https://westus.dev.cognitive.microsoft.com/docs/services/5890b47c39e2bb17b84a55ff/operations/5890b47c39e2bb052c5b9c40) | [Link](https://westus.dev.cognitive.microsoft.com/docs/services/5890b47c39e2bb17b84a55ff/operations/5890b47c39e2bb052c5b9c0a) | [Link](https://westus.dev.cognitive.microsoft.com/docs/services/5890b47c39e2bb17b84a55ff/operations/5890b47c39e2bb052c5b9c36) |
-
-## Location of active learning
-
-To enable [active learning](luis-how-to-review-endpoint-utterances.md#log-user-queries-to-enable-active-learning), users' logged utterances, received at the published LUIS endpoints, are stored in the following Azure geographies:
-
-* [Europe](#europe)
-* [Australia](#australia)
-* [United States](#united-states)
-* [Switzerland North](#switzerland-north)
-
-With the exception of active learning data (detailed below), LUIS follows the [data storage practices for regional services](https://azuredatacentermap.azurewebsites.net/).
---
-### Europe
-
-Europe Authoring (also known as Programmatic APIs) resources are hosted in Azure's Europe geography, and support deployment of endpoints to the following Azure geographies:
-
-* Europe
-* France
-* United Kingdom
-
-When deploying to these Azure geographies, the utterances received by the endpoint from end users of your app will be stored in Azure's Europe geography for active learning.
-
-### Australia
-
-Australia Authoring (also known as Programmatic APIs) resources are hosted in Azure's Australia geography, and support deployment of endpoints to the following Azure geographies:
-
-* Australia
-
-When deploying to these Azure geographies, the utterances received by the endpoint from end users of your app will be stored in Azure's Australia geography for active learning.
-
-### United States
-
-United States Authoring (also known as Programmatic APIs) resources are hosted in Azure's United States geography, and support deployment of endpoints to the following Azure geographies:
-
-* Azure geographies not supported by the Europe or Australia authoring regions
-
-When deploying to these Azure geographies, the utterances received by the endpoint from end users of your app will be stored in Azure's United States geography for active learning.
-
-### Switzerland North
-
-Switzerland North Authoring (also known as Programmatic APIs) resources are hosted in Azure's Switzerland geography, and support deployment of endpoints to the following Azure geographies:
-
-* Switzerland
-
-When deploying to these Azure geographies, the utterances received by the endpoint from end users of your app will be stored in Azure's Switzerland geography for active learning.
-
-## Disable active learning
-
-To disable active learning, see [Disable active learning](luis-how-to-review-endpoint-utterances.md#disable-active-learning). To manage stored utterances, see [Delete utterance](luis-how-to-review-endpoint-utterances.md#delete-utterance).
--
-## Next steps
-
-> [!div class="nextstepaction"]
-> [LUIS regions reference](./luis-reference-regions.md)
cognitive-services Migrate From Composite Entity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/LUIS/migrate-from-composite-entity.md
- Title: Upgrade composite entity - LUIS
-description: Upgrade composite entity to machine-learning entity with upgrade process in the LUIS portal.
--- Previously updated : 05/04/2020--
-# Upgrade composite entity to machine-learning entity
---
-Upgrade a composite entity to a machine-learning entity to build an entity that receives more complete predictions with better decomposability for debugging.
-
-## Current version model restrictions
-
-The upgrade process creates machine-learning entities, based on the existing composite entities found in your app, in a new version of your app. This includes composite entity children and roles. The process also switches the labels in example utterances to use the new entity.
-
-## Upgrade process
-
-The upgrade process:
-* Creates new machine-learning entity for each composite entity.
-* Child entities:
- * If the child entity is only used in the composite entity, it will only be added to the machine-learning entity.
- * If the child entity is used in the composite _and_ as a separate entity (labeled in example utterances), it will be added to the version as an entity and as a subentity of the new machine-learning entity.
- * If the child entity uses a role, each role will be converted into a subentity of the same name.
- * If the child entity is a non-machine-learning entity (regular expression, list entity, or prebuilt entity), a new subentity is created with the same name, and the new subentity has a feature using the non-machine-learning entity with the required feature added.
-* Names are retained but must be unique at the same subentity/sibling level. Refer to [unique naming limits](./luis-limits.md#name-uniqueness).
-* Labels in example utterances are switched to new machine-learning entity with subentities.
-
-Use the following chart to understand how your model changes:
-
-|Old object|New object|Notes|
-|--|--|--|
-|Composite entity|machine-learning entity with structure|Both objects are parent objects.|
-|Composite's child entity is **simple entity**|subentity|Both objects are child objects.|
-|Composite's child entity is **Prebuilt entity** such as Number|subentity with name of Prebuilt entity such as Number, and subentity has _feature_ of Prebuilt Number entity with constraint option set to _true_.|subentity contains feature with constraint at subentity level.|
-|Composite's child entity is **Prebuilt entity** such as Number, and prebuilt entity has a **role**|subentity with name of role, and subentity has feature of Prebuilt Number entity with constraint option set to true.|subentity contains feature with constraint at subentity level.|
-|Role|subentity|The role name becomes the subentity name. The subentity is a direct descendant of the machine-learning entity.|
-
-## Begin upgrade process
-
-Before updating, make sure to:
-
-* Change versions if you are not on the correct version to upgrade
--
-1. Begin the upgrade process from the notification or you can wait until the next scheduled prompt.
-
- > [!div class="mx-imgBorder"]
- > ![Begin upgrade from notifications](./media/update-composite-entity/notification-begin-update.png)
-
-1. On the pop-up, select **Upgrade now**.
-
-1. Review the **What happens when you upgrade** information then select **Continue**.
-
-1. Select the composite entities from the list to upgrade then select **Continue**.
-
-1. You can move any untrained changes from the current version into the upgraded version by selecting the checkbox.
-
-1. Select **Continue** to begin the upgrade process.
-
-1. The progress bar indicates the status of the upgrade process.
-
-1. When the process is done, you are on a new trained version with the new machine-learning entities. Select **Try your new entities** to see the new entity.
-
- If the upgrade or training failed for the new entity, a notification provides more information.
-
-1. On the Entity list page, the new entities are marked with **NEW** next to the type name.
-
-## Next steps
-
-* [Authors and collaborators](luis-how-to-collaborate.md)
cognitive-services Reference Entity Machine Learned Entity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/LUIS/reference-entity-machine-learned-entity.md
- Title: Machine-learning entity type - LUIS-
-description: The machine-learning entity is the preferred entity for building LUIS applications.
------ Previously updated : 01/05/2022--
-# Machine-learning entity
---
-The machine-learning entity is the preferred entity for building LUIS applications.
--
-## Example JSON
-
-Suppose the app takes pizza orders, as in the [decomposable entity tutorial](tutorial-machine-learned-entity.md). Each order can include several different pizzas, including different sizes.
-
-Example utterances include:
-
-|Example utterances for pizza app|
-|--|
-|`Can I get a pepperoni pizza and a can of coke please`|
-|`can I get a small pizza with onions peppers and olives`|
-|`pickup an extra large meat lovers pizza`|
---
-#### [V3 prediction endpoint response](#tab/V3)
-
-Because a machine-learning entity can have many subentities with required features, this is just an example. Consider it a guide for what your entity will return.
-
-Consider the query:
-
-`deliver 1 large cheese pizza on thin crust and 2 medium pepperoni pizzas on deep dish crust`
-
-This is the JSON if `verbose=false` is set in the query string:
-
-```json
-"entities": {
- "Order": [
- {
- "FullPizzaWithModifiers": [
- {
- "PizzaType": [
- "cheese pizza"
- ],
- "Size": [
- [
- "Large"
- ]
- ],
- "Quantity": [
- 1
- ]
- },
- {
- "PizzaType": [
- "pepperoni pizzas"
- ],
- "Size": [
- [
- "Medium"
- ]
- ],
- "Quantity": [
- 2
- ],
- "Crust": [
- [
- "Deep Dish"
- ]
- ]
- }
- ]
- }
- ],
- "ToppingList": [
- [
- "Cheese"
- ],
- [
- "Pepperoni"
- ]
- ],
- "CrustList": [
- [
- "Thin"
- ]
- ]
-}
-
-```
-
-This is the JSON if `verbose=true` is set in the query string:
-
-```json
-"entities": {
- "Order": [
- {
- "FullPizzaWithModifiers": [
- {
- "PizzaType": [
- "cheese pizza"
- ],
- "Size": [
- [
- "Large"
- ]
- ],
- "Quantity": [
- 1
- ],
- "$instance": {
- "PizzaType": [
- {
- "type": "PizzaType",
- "text": "cheese pizza",
- "startIndex": 16,
- "length": 12,
- "score": 0.999998868,
- "modelTypeId": 1,
- "modelType": "Entity Extractor",
- "recognitionSources": [
- "model"
- ]
- }
- ],
- "Size": [
- {
- "type": "SizeList",
- "text": "large",
- "startIndex": 10,
- "length": 5,
- "score": 0.998720646,
- "modelTypeId": 1,
- "modelType": "Entity Extractor",
- "recognitionSources": [
- "model"
- ]
- }
- ],
- "Quantity": [
- {
- "type": "builtin.number",
- "text": "1",
- "startIndex": 8,
- "length": 1,
- "score": 0.999878645,
- "modelTypeId": 1,
- "modelType": "Entity Extractor",
- "recognitionSources": [
- "model"
- ]
- }
- ]
- }
- },
- {
- "PizzaType": [
- "pepperoni pizzas"
- ],
- "Size": [
- [
- "Medium"
- ]
- ],
- "Quantity": [
- 2
- ],
- "Crust": [
- [
- "Deep Dish"
- ]
- ],
- "$instance": {
- "PizzaType": [
- {
- "type": "PizzaType",
- "text": "pepperoni pizzas",
- "startIndex": 56,
- "length": 16,
- "score": 0.999987066,
- "modelTypeId": 1,
- "modelType": "Entity Extractor",
- "recognitionSources": [
- "model"
- ]
- }
- ],
- "Size": [
- {
- "type": "SizeList",
- "text": "medium",
- "startIndex": 49,
- "length": 6,
- "score": 0.999841452,
- "modelTypeId": 1,
- "modelType": "Entity Extractor",
- "recognitionSources": [
- "model"
- ]
- }
- ],
- "Quantity": [
- {
- "type": "builtin.number",
- "text": "2",
- "startIndex": 47,
- "length": 1,
- "score": 0.9996054,
- "modelTypeId": 1,
- "modelType": "Entity Extractor",
- "recognitionSources": [
- "model"
- ]
- }
- ],
- "Crust": [
- {
- "type": "CrustList",
- "text": "deep dish crust",
- "startIndex": 76,
- "length": 15,
- "score": 0.761551,
- "modelTypeId": 1,
- "modelType": "Entity Extractor",
- "recognitionSources": [
- "model"
- ]
- }
- ]
- }
- }
- ],
- "$instance": {
- "FullPizzaWithModifiers": [
- {
- "type": "FullPizzaWithModifiers",
- "text": "1 large cheese pizza on thin crust",
- "startIndex": 8,
- "length": 34,
- "score": 0.616001546,
- "modelTypeId": 1,
- "modelType": "Entity Extractor",
- "recognitionSources": [
- "model"
- ]
- },
- {
- "type": "FullPizzaWithModifiers",
- "text": "2 medium pepperoni pizzas on deep dish crust",
- "startIndex": 47,
- "length": 44,
- "score": 0.7395033,
- "modelTypeId": 1,
- "modelType": "Entity Extractor",
- "recognitionSources": [
- "model"
- ]
- }
- ]
- }
- }
- ],
- "ToppingList": [
- [
- "Cheese"
- ],
- [
- "Pepperoni"
- ]
- ],
- "CrustList": [
- [
- "Thin"
- ]
- ],
- "$instance": {
- "Order": [
- {
- "type": "Order",
- "text": "1 large cheese pizza on thin crust and 2 medium pepperoni pizzas on deep dish crust",
- "startIndex": 8,
- "length": 83,
- "score": 0.6881274,
- "modelTypeId": 1,
- "modelType": "Entity Extractor",
- "recognitionSources": [
- "model"
- ]
- }
- ],
- "ToppingList": [
- {
- "type": "ToppingList",
- "text": "cheese",
- "startIndex": 16,
- "length": 6,
- "modelTypeId": 5,
- "modelType": "List Entity Extractor",
- "recognitionSources": [
- "model"
- ]
- },
- {
- "type": "ToppingList",
- "text": "pepperoni",
- "startIndex": 56,
- "length": 9,
- "modelTypeId": 5,
- "modelType": "List Entity Extractor",
- "recognitionSources": [
- "model"
- ]
- }
- ],
- "CrustList": [
- {
- "type": "CrustList",
- "text": "thin crust",
- "startIndex": 32,
- "length": 10,
- "modelTypeId": 5,
- "modelType": "List Entity Extractor",
- "recognitionSources": [
- "model"
- ]
- }
- ]
- }
-}
-```
-#### [V2 prediction endpoint response](#tab/V2)
-
-This entity isn't available in the V2 prediction runtime.
-* * *
-
-## Next steps
-
-Learn more about the machine-learning entity including a [tutorial](tutorial-machine-learned-entity.md), [concepts](concepts/entities.md#machine-learned-ml-entity), and [how-to guide](how-to/entities.md#create-a-machine-learned-entity).
-
-Learn about the [list](reference-entity-list.md) entity and [regular expression](reference-entity-regular-expression.md) entity.
cognitive-services Reference Entity Pattern Any https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/LUIS/reference-entity-pattern-any.md
- Title: Pattern.any entity type - LUIS-
-description: Pattern.any is a variable-length placeholder used only in a pattern's template utterance to mark where the entity begins and ends.
------ Previously updated : 09/29/2019--
-# Pattern.any entity
---
-Pattern.any is a variable-length placeholder used only in a pattern's template utterance to mark where the entity begins and ends.
-
-Pattern.any entities need to be marked in the [Pattern](luis-how-to-model-intent-pattern.md) template examples, not the intent user examples.
-
-**The entity is a good fit when:**
-
-* The ending of the entity can be confused with the remaining text of the utterance.
-
-## Usage
-
-Given a client application that searches for books based on title, the pattern.any entity extracts the complete title. A template utterance using pattern.any for this book search is `Was {BookTitle} written by an American this year[?]`.
-
-In the following table, each row has two versions of the utterance. The top utterance is how LUIS initially sees the utterance. It isn't clear where the book title begins and ends. The bottom utterance uses a Pattern.any entity to mark the beginning and end of the entity.
-
-|Utterance with entity in bold|
-|--|
-|`Was The Man Who Mistook His Wife for a Hat and Other Clinical Tales written by an American this year?`<br><br>Was **The Man Who Mistook His Wife for a Hat and Other Clinical Tales** written by an American this year?|
-|`Was Half Asleep in Frog Pajamas written by an American this year?`<br><br>Was **Half Asleep in Frog Pajamas** written by an American this year?|
-|`Was The Particular Sadness of Lemon Cake: A Novel written by an American this year?`<br><br>Was **The Particular Sadness of Lemon Cake: A Novel** written by an American this year?|
-|`Was There's A Wocket In My Pocket! written by an American this year?`<br><br>Was **There's A Wocket In My Pocket!** written by an American this year?|
-||
---
-## Example JSON
-
-Consider the following query:
-
-`where is the form Understand your responsibilities as a member of the community and who needs to sign it after I read it?`
-
-With the embedded form name to extract as a Pattern.any:
-
-`Understand your responsibilities as a member of the community`
-
-#### [V2 prediction endpoint response](#tab/V2)
-
-```JSON
-"entities": [
- {
- "entity": "understand your responsibilities as a member of the community",
- "type": "FormName",
- "startIndex": 18,
- "endIndex": 78,
- "role": ""
- }
-]
-```
--
-#### [V3 prediction endpoint response](#tab/V3)
-
-This is the JSON if `verbose=false` is set in the query string:
-
-```json
-"entities": {
- "FormName": [
- "Understand your responsibilities as a member of the community"
- ]
-}
-```
-
-This is the JSON if `verbose=true` is set in the query string:
-
-```json
-"entities": {
- "FormName": [
- "Understand your responsibilities as a member of the community"
- ],
- "$instance": {
- "FormName": [
- {
- "type": "FormName",
- "text": "Understand your responsibilities as a member of the community",
- "startIndex": 18,
- "length": 61,
- "modelTypeId": 7,
- "modelType": "Pattern.Any Entity Extractor",
- "recognitionSources": [
- "model"
- ]
- }
- ]
- }
-}
-```
-
-* * *
-
-## Next steps
-
-In this [tutorial](luis-tutorial-pattern.md), use the **Pattern.any** entity to extract data from utterances where the utterances are well-formatted and where the end of the data may be easily confused with the remaining words of the utterance.
cognitive-services Reference Entity Regular Expression https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/LUIS/reference-entity-regular-expression.md
- Title: Regular expression entity type - LUIS
-description: A regular expression is best for raw utterance text. It ignores case and ignores cultural variant. Regular expression matching is applied after spell-check alterations at the character level, not the token level.
--- Previously updated : 05/05/2021-
-# Regular expression entity
---
-A regular expression entity extracts an entity based on a regular expression pattern you provide.
-
-A regular expression is best for raw utterance text. It ignores case and ignores cultural variants. Regular expression matching is applied after spell-check alterations at the token level. If the regular expression is too complex, such as using many brackets, you can't add the expression to the model. LUIS uses part, but not all, of the [.NET Regex](/dotnet/standard/base-types/regular-expressions) library.
-
-**The entity is a good fit when:**
-
-* The data are consistently formatted with any variation that is also consistent.
-* The regular expression does not need more than 2 levels of nesting.
-
-![Regular expression entity](./media/luis-concept-entities/regex-entity.png)
-
-## Example JSON
-
-When using `kb[0-9]{6}` as the regular expression entity definition, the following JSON response shows the regular expression entity returned for the query:
-
-`When was kb123456 published?`:
-
-#### [V2 prediction endpoint response](#tab/V2)
-
-```JSON
-"entities": [
- {
- "entity": "kb123456",
- "type": "KB number",
- "startIndex": 9,
- "endIndex": 16
- }
-]
-```
--
-#### [V3 prediction endpoint response](#tab/V3)
--
-This is the JSON if `verbose=false` is set in the query string:
-
-```json
-"entities": {
- "KB number": [
- "kb123456"
- ]
-}
-```
-
-This is the JSON if `verbose=true` is set in the query string:
-
-```json
-"entities": {
- "KB number": [
- "kb123456"
- ],
- "$instance": {
- "KB number": [
- {
- "type": "KB number",
- "text": "kb123456",
- "startIndex": 9,
- "length": 8,
- "modelTypeId": 8,
- "modelType": "Regex Entity Extractor",
- "recognitionSources": [
- "model"
- ]
- }
- ]
- }
-}
-```
-
-* * *
-
-## Next steps
-
-Learn more about entities:
-
-* [Entity types](concepts/entities.md)
-* [Create entities](how-to/entities.md)
cognitive-services Reference Entity Simple https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/LUIS/reference-entity-simple.md
- Title: Simple entity type - LUIS-
-description: A simple entity describes a single concept from the machine-learning context. Add a phrase list when using a simple entity to improve results.
------- Previously updated : 01/07/2022--
-# Simple entity
---
-A simple entity is a generic entity that describes a single concept and is learned from the machine-learning context. Because simple entities are generally names such as company names, product names, or other categories of names, add a [phrase list](concepts/patterns-features.md) when using a simple entity to boost the signal of the names used.
-
-**The entity is a good fit when:**
-
-* The data aren't consistently formatted but indicate the same thing.
-
-![simple entity](./media/luis-concept-entities/simple-entity.png)
-
-## Example JSON
-
-`Bob Jones wants 3 meatball pho`
-
-In the previous utterance, `Bob Jones` is labeled as a simple `Customer` entity.
-
-The data returned from the endpoint includes the entity name, the discovered text from the utterance, the location of the discovered text, and the score:
-
-#### [V2 prediction endpoint response](#tab/V2)
-
-```JSON
-"entities": [
- {
- "entity": "bob jones",
- "type": "Customer",
- "startIndex": 0,
- "endIndex": 8,
- "score": 0.473899543
- }
-]
-```
-
-#### [V3 prediction endpoint response](#tab/V3)
-
-This is the JSON if `verbose=false` is set in the query string:
-
-```json
-"entities": {
- "Customer": [
- "Bob Jones"
- ]
-}
-```
-
-This is the JSON if `verbose=true` is set in the query string:
-
-```json
-"entities": {
- "Customer": [
- "Bob Jones"
- ],
- "$instance": {
- "Customer": [
- {
- "type": "Customer",
- "text": "Bob Jones",
- "startIndex": 0,
- "length": 9,
- "score": 0.9339134,
- "modelTypeId": 1,
- "modelType": "Entity Extractor",
- "recognitionSources": [
- "model"
- ]
- }
- ]
- }
-}
-```
-
-* * *
-
-|Data object|Entity name|Value|
-|--|--|--|
-|Simple Entity|`Customer`|`bob jones`|
-
-## Next steps
-
-> [!div class="nextstepaction"]
-> [Learn pattern syntax](reference-pattern-syntax.md)
cognitive-services Reference Pattern Syntax https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/LUIS/reference-pattern-syntax.md
- Title: Pattern syntax reference - LUIS
-description: Create entities to extract key data from user utterances in Language Understanding (LUIS) apps. Extracted data is used by the client application.
--- Previously updated : 04/18/2022--
-# Pattern syntax
---
-Pattern syntax is a template for an utterance. The template should contain words and entities you want to match as well as words and [punctuation](luis-reference-application-settings.md#punctuation-normalization) you want to ignore. It is **not** a regular expression.
-
-> [!CAUTION]
-> Patterns only include machine-learning entity parents, not subentities.
-
-Entities in patterns are surrounded by curly brackets, `{}`. Patterns can include entities, and entities with roles. [Pattern.any](concepts/entities.md#patternany-entity) is an entity only used in patterns.
-
-Pattern syntax supports the following syntax:
-
-|Function|Syntax|Nesting level|Example|
-|--|--|--|--|
-|entity| {} - curly brackets|2|Where is form {entity-name}?|
-|optional|[] - square brackets<BR><BR>There is a limit of 3 on nesting levels of any combination of optional and grouping |2|The question mark is optional [?]|
-|grouping|() - parentheses|2|is (a \| b)|
-|or| \| - vertical bar (pipe)<br><br>There is a limit of 2 on the vertical bars (Or) in one group |-|Where is form ({form-name-short} &#x7c; {form-name-long} &#x7c; {form-number})|
-|beginning and/or end of utterance|^ - caret|-|^begin the utterance<br>the utterance is done^<br>^strict literal match of entire utterance with {number} entity^|
-
-## Nesting syntax in patterns
-
-The **optional** syntax, with square brackets, can be nested two levels. For example: `[[this]is] a new form`. This example allows for the following utterances:
-
-|Nested optional utterance example|Explanation|
-|--|--|
-|this is a new form|matches all words in pattern|
-|is a new form|matches outer optional word and non-optional words in pattern|
-|a new form|matches required words only|
-
-The **grouping** syntax, with parentheses, can be nested two levels. For example: `(({Entity1:RoleName1} | {Entity1:RoleName2} ) | {Entity2} )`. This feature allows any of the three entities to be matched.
-
-If Entity1 is a Location with roles such as origin (Seattle) and destination (Cairo) and Entity 2 is a known building name from a list entity (RedWest-C), the following utterances would map to this pattern:
-
-|Nested grouping utterance example|Explanation|
-|--|--|
-|RedWest-C|matches outer grouping entity|
-|Seattle|matches one of the inner grouping entities|
-|Cairo|matches one of the inner grouping entities|
-
-## Nesting limits for groups with optional syntax
-
-A combination of **grouping** with **optional** syntax has a limit of 3 nesting levels.
-
-|Allowed|Example|
-|--|--|
-|Yes|( [ ( test1 &#x7c; test2 ) ] &#x7c; test3 )|
-|No|( [ ( [ test1 ] &#x7c; test2 ) ] &#x7c; test3 )|
-
-## Nesting limits for groups with or-ing syntax
-
-A combination of **grouping** with **or-ing** syntax has a limit of 2 vertical bars.
-
-|Allowed|Example|
-|--|--|
-|Yes|( test1 &#x7c; test2 &#x7c; ( test3 &#x7c; test4 ) )|
-|No|( test1 &#x7c; test2 &#x7c; test3 &#x7c; ( test4 &#x7c; test5 ) ) |
-
-## Syntax to add an entity to a pattern template
-
-To add an entity into the pattern template, surround the entity name with curly braces, such as `Who does {Employee} manage?`.
-
-|Pattern with entity|
-|--|
-|`Who does {Employee} manage?`|
-
-## Syntax to add an entity and role to a pattern template
-
-An entity role is denoted as `{entity:role}` with the entity name followed by a colon, then the role name. To add an entity with a role into the pattern template, surround the entity name and role name with curly braces, such as `Book a ticket from {Location:Origin} to {Location:Destination}`.
-
-|Pattern with entity roles|
-|--|
-|`Book a ticket from {Location:Origin} to {Location:Destination}`|
-
-## Syntax to add a pattern.any to pattern template
-
-The Pattern.any entity allows you to add an entity of varying length to the pattern. As long as the pattern template is followed, the pattern.any can be any length.
-
-To add a **Pattern.any** entity into the pattern template, surround the Pattern.any entity with the curly braces, such as `How much does {Booktitle} cost and what format is it available in?`.
-
-|Pattern with Pattern.any entity|
-|--|
-|`How much does {Booktitle} cost and what format is it available in?`|
-
-|Book titles in the pattern|
-|--|
-|How much does **steal this book** cost and what format is it available in?|
-|How much does **ask** cost and what format is it available in?|
-|How much does **The Curious Incident of the Dog in the Night-Time** cost and what format is it available in?|
-
-The words of the book title are not confusing to LUIS because LUIS knows where the book title ends, based on the Pattern.any entity.
-
-## Explicit lists
-
-Create an [Explicit List](https://westus.dev.cognitive.microsoft.com/docs/services/5890b47c39e2bb17b84a55ff/operations/5ade550bd5b81c209ce2e5a8) through the authoring API to allow the exception when:
-
-* Your pattern contains a [Pattern.any](concepts/entities.md#patternany-entity)
-* And that pattern syntax allows for the possibility of an incorrect entity extraction based on the utterance.
-
-For example, suppose you have a pattern containing both optional syntax, `[]`, and entity syntax, `{}`, combined in a way to extract data incorrectly.
-
-Consider the pattern `[find] email about {subject} [from {person}]`.
-
-In the following utterances, the **subject** and **person** entities are extracted correctly in one case and incorrectly in the other:
-
-|Utterance|Entity|Correct extraction|
-|--|--|:--:|
-|email about dogs from Chris|subject=dogs<br>person=Chris|✔|
-|email about the man from La Mancha|subject=the man<br>person=La Mancha|X|
-
-In the preceding table, the subject should be `the man from La Mancha` (a book title) but because the subject includes the optional word `from`, the title is incorrectly predicted.
-
-To fix this exception to the pattern, add `the man from la mancha` as an explicit list match for the {subject} entity using the [authoring API for explicit list](https://westus.dev.cognitive.microsoft.com/docs/services/5890b47c39e2bb17b84a55ff/operations/5ade550bd5b81c209ce2e5a8).
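-
-If you prefer to script this step, the following Python sketch shows one way to call the linked explicit list operation by using the `requests` library. The route, request body shape, and placeholder values (authoring key, region endpoint, app ID, version, and entity ID) are assumptions based on the linked reference rather than an official sample; verify them against the authoring API documentation before use.
-
-```python
-import requests
-
-# All values below are placeholder assumptions; replace them with your own.
-AUTHORING_KEY = "<your-authoring-key>"
-AUTHORING_ENDPOINT = "https://westus.api.cognitive.microsoft.com"
-APP_ID = "<app-id>"
-VERSION_ID = "0.1"
-ENTITY_ID = "<pattern-any-entity-id>"  # ID of the {subject} Pattern.any entity
-
-# Assumed route for the "Add explicit list item" authoring operation.
-url = (
-    f"{AUTHORING_ENDPOINT}/luis/api/v2.0/apps/{APP_ID}/versions/{VERSION_ID}"
-    f"/patternanyentities/{ENTITY_ID}/explicitlist"
-)
-
-response = requests.post(
-    url,
-    headers={"Ocp-Apim-Subscription-Key": AUTHORING_KEY},
-    json={"explicitListItem": "the man from la mancha"},
-)
-response.raise_for_status()
-print(response.json())  # ID of the new explicit list item
-```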
-
-## Syntax to mark optional text in a template utterance
-
-Mark optional text in the utterance using the regular expression square bracket syntax, `[]`. Optional text can be nested in square brackets up to two levels only.
-
-|Pattern with optional text|Meaning|
-|--|--|
-|`[find] email about {subject} [from {person}]`|`find` and `from {person}` are optional|
-|`Can you help me[?]`|The punctuation mark is optional|
-
-Punctuation marks (`?`, `!`, `.`) should be ignored, and you ignore them by marking them as optional with the square bracket syntax in patterns.
-
-## Next steps
-
-Learn more about patterns:
-
-* [How to add patterns](luis-how-to-model-intent-pattern.md)
-* [How to add pattern.any entity](how-to/entities.md#create-a-patternany-entity)
-* [Patterns Concepts](luis-concept-patterns.md)
-
-Understand how [sentiment](luis-reference-prebuilt-sentiment.md) is returned in the .json response.
cognitive-services Role Based Access Control https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/LUIS/role-based-access-control.md
- Title: LUIS role-based access control-
-description: Use this article to learn how to add access control to your LUIS resource
----- Previously updated : 08/23/2022---
-# LUIS role-based access control
---
-LUIS supports Azure role-based access control (Azure RBAC), an authorization system for managing individual access to Azure resources. Using Azure RBAC, you assign different team members different levels of permissions for your LUIS authoring resources. See the [Azure RBAC documentation](../../role-based-access-control/index.yml) for more information.
-
-## Enable Azure Active Directory authentication
-
-To use Azure RBAC, you must enable Azure Active Directory authentication. You can [create a new resource with a custom subdomain](../authentication.md#create-a-resource-with-a-custom-subdomain) or [create a custom subdomain for your existing resource](../cognitive-services-custom-subdomains.md#how-does-this-impact-existing-resources).
-
-## Add role assignment to Language Understanding Authoring resource
-
-Azure RBAC can be assigned to a Language Understanding Authoring resource. To grant access to an Azure resource, you add a role assignment.
-1. In the [Azure portal](https://portal.azure.com/), select **All services**.
-2. Select **Cognitive Services**, and navigate to your specific Language Understanding Authoring resource.
- > [!NOTE]
- > You can also set up Azure RBAC for whole resource groups, subscriptions, or management groups. To do so, select the desired scope level, and then navigate to the desired item. For example, select **Resource groups**, and then navigate to a specific resource group.
-
-1. Select **Access control (IAM)** on the left navigation pane.
-1. Select **Add**, then select **Add role assignment**.
-1. On the **Role** tab on the next screen, select a role you want to add.
-1. On the **Members** tab, select a user, group, service principal, or managed identity.
-1. On the **Review + assign** tab, select **Review + assign** to assign the role.
-
-Within a few minutes, the target will be assigned the selected role at the selected scope. For help with these steps, see [Assign Azure roles using the Azure portal](../../role-based-access-control/role-assignments-portal.md).
--
-## LUIS role types
-
-Use the following table to determine access needs for your LUIS application.
-
-These custom roles only apply to authoring (Language Understanding Authoring) and not prediction resources (Language Understanding).
-
-> [!NOTE]
-> * *Owner* and *Contributor* roles take priority over the custom LUIS roles.
-> * Azure Active Directory (Azure AD) is only used with custom LUIS roles.
-> * If you are assigned as a *Contributor* on Azure, your role will be shown as *Owner* in the LUIS portal.
--
-### Cognitive Services LUIS reader
-
-A user that should only be validating and reviewing LUIS applications, typically a tester to ensure the application is performing well before deploying the project. They may want to review the application's assets (utterances, intents, entities) to notify the app developers of any changes that need to be made, but do not have direct access to make them.
--
- :::column span="":::
- **Capabilities**
- :::column-end:::
- :::column span="":::
- **API Access**
- :::column-end:::
- :::column span="":::
- * Read Utterances
- * Intents
- * Entities
- * Test Application
- :::column-end:::
- :::column span="":::
- All GET APIs under:
- * [LUIS Programmatic v3.0-preview](https://westus.dev.cognitive.microsoft.com/docs/services/luis-programmatic-apis-v3-0-preview/operations/5890b47c39e2bb052c5b9c2f)
- * [LUIS Programmatic v2.0 APIs](https://westus.dev.cognitive.microsoft.com/docs/services/5890b47c39e2bb17b84a55ff/operations/5890b47c39e2bb052c5b9c2f)
-
- All the APIs under:
- * [LUIS Endpoint APIs v2.0](./luis-migration-api-v1-to-v2.md)
- * [LUIS Endpoint APIs v3.0](https://westcentralus.dev.cognitive.microsoft.com/docs/services/luis-endpoint-api-v3-0/operations/5cb0a9459a1fe8fa44c28dd8)
- * [LUIS Endpoint APIs v3.0-preview](https://westcentralus.dev.cognitive.microsoft.com/docs/services/luis-endpoint-api-v3-0-preview/operations/5cb0a9459a1fe8fa44c28dd8)
-
- All the Batch Testing Web APIs
- :::column-end:::
-
-### Cognitive Services LUIS writer
-
-A user that is responsible for building and modifying a LUIS application, as a collaborator in a larger team. The collaborator can modify the LUIS application in any way, train those changes, and validate/test those changes in the portal. However, this user wouldn't have access to deploying this application to the runtime, as they may accidentally reflect their changes in a production environment. They also wouldn't be able to delete the application or alter its prediction resources and endpoint settings (assigning or unassigning prediction resources, making the endpoint public). This restricts this role from altering an application currently being used in a production environment. They may also create new applications under this resource, but with the restrictions mentioned.
-
- :::column span="":::
- **Capabilities**
- :::column-end:::
- :::column span="":::
- **API Access**
- :::column-end:::
- :::column span="":::
- * All functionalities under Cognitive Services LUIS Reader.
-
- The ability to add:
- * Utterances
- * Intents
- * Entities
- :::column-end:::
- :::column span="":::
- * All APIs under LUIS reader
-
- All POST, PUT and DELETE APIs under:
-
- * [LUIS Programmatic v3.0-preview](https://westus.dev.cognitive.microsoft.com/docs/services/luis-programmatic-apis-v3-0-preview/operations/5890b47c39e2bb052c5b9c2f)
- * [LUIS Programmatic v2.0 APIs](https://westus.dev.cognitive.microsoft.com/docs/services/5890b47c39e2bb17b84a55ff/operations/5890b47c39e2bb052c5b9c2d)
-
- Except for
- * [Delete application](https://westus.dev.cognitive.microsoft.com/docs/services/luis-programmatic-apis-v3-0-preview/operations/5890b47c39e2bb052c5b9c39)
- * [Move app to another LUIS authoring Azure resource](https://westus.dev.cognitive.microsoft.com/docs/services/luis-programmatic-apis-v3-0-preview/operations/apps-move-app-to-another-luis-authoring-azure-resource)
- * [Publish an application](https://westus.dev.cognitive.microsoft.com/docs/services/luis-programmatic-apis-v3-0-preview/operations/5890b47c39e2bb052c5b9c3b)
- * [Update application settings](https://westus.dev.cognitive.microsoft.com/docs/services/luis-programmatic-apis-v3-0-preview/operations/58aeface39e2bb03dcd5909e)
- * [Assign a LUIS Azure account to an application](https://westus.dev.cognitive.microsoft.com/docs/services/luis-programmatic-apis-v3-0-preview/operations/5be32228e8473de116325515)
- * [Remove an assigned LUIS Azure account from an application](https://westus.dev.cognitive.microsoft.com/docs/services/luis-programmatic-apis-v3-0-preview/operations/5be32554f8591db3a86232e1)
- :::column-end:::
-
-### Cognitive Services LUIS owner
-
-> [!NOTE]
-> * If you are assigned as an *Owner* and *LUIS Owner* you will be shown as *LUIS Owner* in the LUIS portal.
-
-These users are the gatekeepers for LUIS applications in a production environment. They should have full access to any of the underlying functions and thus can view everything in the application and have direct access to edit any changes for both authoring and runtime environments.
-
- :::column span="":::
- **Functionality**
- :::column-end:::
- :::column span="":::
- **API Access**
- :::column-end:::
- :::column span="":::
- * All functionalities under Cognitive Services LUIS Writer
- * Deploy a model
- * Delete an application
- :::column-end:::
- :::column span="":::
- * All APIs available for LUIS
- :::column-end:::
-
-## Next steps
-
-* [Managing Azure resources](./luis-how-to-azure-subscription.md?tabs=portal#authoring-resource)
cognitive-services Schema Change Prediction Runtime https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/LUIS/schema-change-prediction-runtime.md
- Title: Extend app at runtime - LUIS
-description: Learn how to extend an already published prediction endpoint to pass new information.
--- Previously updated : 04/14/2020-
-# Extend app at prediction runtime
---
-The app's schema (models and features) is trained and published to the prediction endpoint. This published model is used on the prediction runtime. You can pass new information, along with the user's utterance, to the prediction runtime to augment the prediction.
-
-Two prediction runtime schema changes include:
-* [External entities](#external-entities)
-* [Dynamic lists](#dynamic-lists)
-
-<a name="external-entities-passed-in-at-prediction-time"></a>
-
-## External entities
-
-External entities give your LUIS app the ability to identify and label entities during runtime, which can be used as features to existing entities. This allows you to use your own separate and custom entity extractors before sending queries to your prediction endpoint. Because this is done at the query prediction endpoint, you don't need to retrain and publish your model.
-
-The client application provides its own entity extractor by matching the entity, determining the location of that matched entity within the utterance, and then sending that information with the request.
-
-External entities are the mechanism for extending any entity type while still being used as signals to other models.
-
-This is useful for an entity that has data available only at query prediction runtime, such as data that changes constantly or data that is specific to each user. For example, you can extend a LUIS contact entity with external information from a user's contact list.
-
-External entities are part of the V3 prediction API. Learn more about [migrating](luis-migration-api-v3.md) to this version.
-
-### Entity already exists in app
-
-The value of `entityName` for the external entity, passed in the endpoint request POST body, must already exist in the trained and published app at the time the request is made. The type of entity doesn't matter; all types are supported.
-
-### First turn in conversation
-
-Consider a first utterance in a chat bot conversation where a user enters the following incomplete information:
-
-`Send Hazem a new message`
-
-The request from the chat bot to LUIS can pass in information in the POST body about `Hazem` so it is directly matched as one of the user's contacts.
-
-```json
- "externalEntities": [
- {
- "entityName":"contacts",
- "startIndex": 5,
- "entityLength": 5,
- "resolution": {
- "employeeID": "05013",
- "preferredContactType": "TeamsChat"
- }
- }
- ]
-```
-
-The prediction response includes that external entity, with all the other predicted entities, because it is defined in the request.
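-
-As a sketch of how a client application might send this request, the following Python snippet posts the first-turn utterance and the external entity to the V3 prediction endpoint by using the `requests` library. The subdomain, app ID, and key are placeholder assumptions; verify the prediction URL for your resource on the Azure Resources page of the LUIS portal.
-
-```python
-import requests
-
-PREDICTION_KEY = "<your-prediction-key>"  # placeholder assumption
-PREDICTION_ENDPOINT = "https://<your-custom-subdomain>.api.cognitive.microsoft.com"
-APP_ID = "<app-id>"
-
-url = f"{PREDICTION_ENDPOINT}/luis/prediction/v3.0/apps/{APP_ID}/slots/production/predict"
-
-body = {
-    "query": "Send Hazem a new message",
-    "options": {"preferExternalEntities": True},
-    "externalEntities": [
-        {
-            "entityName": "contacts",
-            "startIndex": 5,
-            "entityLength": 5,
-            "resolution": {
-                "employeeID": "05013",
-                "preferredContactType": "TeamsChat"
-            }
-        }
-    ]
-}
-
-response = requests.post(url, params={"subscription-key": PREDICTION_KEY}, json=body)
-response.raise_for_status()
-print(response.json()["prediction"]["entities"])  # includes the "contacts" entity
-```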
-
-### Second turn in conversation
-
-The next user utterance into the chat bot uses a more vague term:
-
-`Send him a calendar reminder for the party.`
-
-In this turn of the conversation, the utterance uses `him` as a reference to `Hazem`. The conversational chat bot, in the POST body, can map `him` to the entity value extracted from the first utterance, `Hazem`.
-
-```json
- "externalEntities": [
- {
- "entityName":"contacts",
- "startIndex": 5,
- "entityLength": 3,
- "resolution": {
- "employeeID": "05013",
- "preferredContactType": "TeamsChat"
- }
- }
- ]
-```
-
-The prediction response includes that external entity, with all the other predicted entities, because it is defined in the request.
-
-### Override existing model predictions
-
-The `preferExternalEntities` options property specifies whether LUIS uses the entity passed in the request or the entity predicted by the model when an external entity overlaps with a predicted entity of the same name.
-
-For example, consider the query `today I'm free`. LUIS detects `today` as a datetimeV2 with the following response:
-
-```JSON
-"datetimeV2": [
- {
- "type": "date",
- "values": [
- {
- "timex": "2019-06-21",
- "value": "2019-06-21"
- }
- ]
- }
-]
-```
-
-If the user sends the external entity:
-
-```JSON
-{
- "entityName": "datetimeV2",
- "startIndex": 0,
- "entityLength": 5,
- "resolution": {
- "date": "2019-06-21"
- }
-}
-```
-
-If `preferExternalEntities` is set to `false`, LUIS returns a response as if the external entity were not sent.
-
-```JSON
-"datetimeV2": [
- {
- "type": "date",
- "values": [
- {
- "timex": "2019-06-21",
- "value": "2019-06-21"
- }
- ]
- }
-]
-```
-
-If `preferExternalEntities` is set to `true`, LUIS returns a response including:
-
-```JSON
-"datetimeV2": [
- {
- "date": "2019-06-21"
- }
-]
-```
---
-#### Resolution
-
-The _optional_ `resolution` property is returned in the prediction response, allowing you to pass in metadata associated with the external entity and receive it back in the response.
-
-The primary purpose is to extend prebuilt entities, but it is not limited to that entity type.
-
-The `resolution` property can be a number, a string, an object, or an array:
-
-* "Dallas"
-* {"text": "value"}
-* 12345
-* ["a", "b", "c"]
-
-<a name="dynamic-lists-passed-in-at-prediction-time"></a>
-
-## Dynamic lists
-
-Dynamic lists allow you to extend an existing trained and published list entity, already in the LUIS app.
-
-Use this feature when your list entity values need to change periodically. This feature allows you to extend an already trained and published list entity:
-
-* At the time of the query prediction endpoint request.
-* For a single request.
-
-The list entity can be empty in the LUIS app but it has to exist. The list entity in the LUIS app isn't changed, but the prediction ability at the endpoint is extended to include up to 2 lists with about 1,000 items.
-
-### Dynamic list JSON request body
-
-Send in the following JSON body to add a new sublist with synonyms to the list, and predict the list entity for the text, `LUIS`, with the `POST` query prediction request:
-
-```JSON
-{
- "query": "Send Hazem a message to add an item to the meeting agenda about LUIS.",
- "options":{
- "timezoneOffset": "-8:00"
- },
- "dynamicLists": [
- {
- "listEntity*":"ProductList",
- "requestLists":[
- {
- "name": "Azure Cognitive Services",
- "canonicalForm": "Azure-Cognitive-Services",
- "synonyms":[
- "language understanding",
- "luis",
- "qna maker"
- ]
- }
- ]
- }
- ]
-}
-```
-
-The prediction response includes that list entity, with all the other predicted entities, because it is defined in the request.
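-
-As a sketch of how a client application might send the request above, the following Python snippet posts the same body to the V3 prediction endpoint with the `requests` library. The endpoint host, key, and app ID are placeholder assumptions; the `listEntityName` field matches the request body shown above.
-
-```python
-import requests
-
-PREDICTION_KEY = "<your-prediction-key>"  # placeholder assumption
-PREDICTION_ENDPOINT = "https://<your-custom-subdomain>.api.cognitive.microsoft.com"
-APP_ID = "<app-id>"
-
-url = f"{PREDICTION_ENDPOINT}/luis/prediction/v3.0/apps/{APP_ID}/slots/production/predict"
-
-body = {
-    "query": "Send Hazem a message to add an item to the meeting agenda about LUIS.",
-    "dynamicLists": [
-        {
-            "listEntityName": "ProductList",
-            "requestLists": [
-                {
-                    "name": "Azure Cognitive Services",
-                    "canonicalForm": "Azure-Cognitive-Services",
-                    "synonyms": ["language understanding", "luis", "qna maker"]
-                }
-            ]
-        }
-    ]
-}
-
-response = requests.post(url, params={"subscription-key": PREDICTION_KEY}, json=body)
-response.raise_for_status()
-# "LUIS" in the query should now resolve to the ProductList entity via the added synonym.
-print(response.json()["prediction"]["entities"])
-```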
-
-## Next steps
-
-* [Prediction score](luis-concept-prediction-score.md)
-* [Authoring API V3 changes](luis-migration-api-v3.md)
cognitive-services Build Decomposable Application https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/LUIS/tutorial/build-decomposable-application.md
- Title: Build a decomposable LUIS application
-description: Use this tutorial to learn how to build a decomposable application.
---
-- Previously updated : 01/10/2022--
-# Build a decomposable LUIS application
---
-In this tutorial, you will create a telecom LUIS application that can predict different user intentions based on the text that users provide.
-
-We will be handling different user scenarios (intents) such as:
-
-* Signing up for a new telecom line
-* Updating an existing tier
-* Paying a bill
-
-In this tutorial you will learn how to:
-
-1. Create a LUIS application
-2. Create intents
-3. Add entities
-4. Add utterances
-5. Label example utterances
-6. Train an app
-7. Publish an app
-8. Get predictions from the published endpoint
-
-## Create a LUIS application
-
-1. Sign in to the [LUIS portal](https://www.luis.ai/)
-2. Create a new application by selecting **+New app**.
-
- :::image type="content" source="../media/build-decomposable-app/create-luis-app.png" alt-text="A screenshot of the application creation screen." lightbox="../media/build-decomposable-app/create-luis-app.png":::
-
-3. In the window that appears, enter the name "Telecom Tutorial", keeping the default culture, **English**. The other fields are optional, do not set them. Select **Done**.
-
- :::image type="content" source="../media/build-decomposable-app/create-new-app.png" alt-text="A screenshot of the LUIS application's creation fields." lightbox="../media/build-decomposable-app/create-new-app.png":::
--
-## User intentions as intents
-
-The first thing you will see in the **Build** section are the app's intents. [Intents](/azure/cognitive-services/luis/luis-concept-intent) represent a task or an action a user wants to perform.
-
-Imagine a telecom LUIS application, what would a user need?
-
-They would probably need to perform some type of user action or ask for help. Another user might want to update their tier or **pay a bill**.
-
-The resulting schema is as follows. For more information, see [best practices about planning the schema](/azure/cognitive-services/luis/luis-concept-best-practices#do-plan-your-schema).
-
-| **Intent** | **Purpose** |
-| | |
-| UserAction | Determine user actions |
-| Help | Request help |
-| UpdateTier | Update current tier |
-| PayBill | Pay outstanding bill |
-| None | Determine if user is asking something the LUIS app is not designed to answer. This intent is provided as part of app creation and can't be deleted. |
-
-## Create a new Intent
-
-An intent is used to classify user utterances based on the user's intention, determined from the natural language text.
-
-To classify an utterance, the intent needs examples of user utterances that should be classified with this intent.
-
-1. Select **Build** from the top navigation menu, then select **Intents** on the left side of the screen. Select **+ Create** to create a new intent. Enter the new intent name, "UserAction", then select **Done**
-
- UserAction could be one of many intents. For example, some users might want to sign up for a new line, while others might ask to retrieve information.
-
-2. Add several example utterances to this intent that you expect a user to ask:
-
- * Hi! I want to sign up for a new line
- * Can I Sign up for a new line?
- * Hello, I want a new line
- * I forgot my line number!
- * I would like a new line number
-
- :::image type="content" source="../media/build-decomposable-app/user-actions.png" alt-text="A screenshot showing example utterances for the UserAction intent." lightbox="../media/build-decomposable-app/user-actions.png":::
-
-For the **PayBill** intent, some utterances could be:
-
-* I want to pay my bill
-* Settle my bill
-* Pay bill
-* I want to close my current balance
-* Hey! I want to pay the current bill
-
-By providing _example utterances_, you are teaching LUIS about what kinds of utterances should be predicted for this intent. These are positive examples. The utterances in all the other intents are treated as negative examples for this intent. Ideally, the more example utterances you add, the better your app's predictions will be.
-
-These few utterances are for demonstration purposes only. A real-world app should have at least 15-30 [utterances](/azure/cognitive-services/luis/luis-concept-utterance) of varying length, word order, tense, grammatical correctness, punctuation, and word count.
-
-## Creating the remaining intents
-
-Perform the above steps to add the following intents to the app:
-
-**"Help"**
-
-* "I need help"
-* "I need assistance"
-* "Help please"
-* "Can someone support me?"
-* "I'm stuck, can you help me"
-* "Can I get help?"
-
-**"UpdateTier"**
-
-* "I want to update my tier"
-* "Update my tier"
-* "I want to change to VIP tier"
-* "Change my subscription to standard tier"
-
-## Example utterances for the None intent
-
-The client application needs to know if an utterance is not meaningful or appropriate for the application. The "None" intent is added to each application as part of the creation process to determine if an utterance shouldn't be answered by the client application.
-
-If LUIS returns the "None" intent for an utterance, your client application can ask if the user wants to end the conversation or give more directions for continuing the conversation.
-
-If you leave the "None" intent empty, an utterance that should be predicted outside the subject domain will be predicted in one of the existing subject domain intents. The result is that the client application, such as a chat bot, will perform incorrect operations based on an incorrect prediction.
-
-1. Select **Intents** from the left panel.
-2. Select the **None** intent. Add three utterances that your user might enter but are not relevant to your Telecom app. These examples shouldn't use words you expect in your subject domain such as Tier, upgrade, signup, bill.
-
- * "When is my flight?"
- * "I need to change my pizza order please"
- * "What is the weather like for today?"
-
-## Add entities
-
-An entity is an item or element that is relevant to the user's intent. Entities define data that can be extracted from the utterance and is essential to complete a user's required action.
-
-1. In the build section, select **Entities.**
-2. To add a new entity, select **+Create**
-
- In this example, we will be creating two entities, "**UpdateTierInfo**" as a machine-learned entity type, and "Tier" as a list entity type. LUIS also lets you create [different entity types](/azure/cognitive-services/luis/luis-concept-entity-types).
-
-3. In the window that appears, enter "**UpdateTierInfo**", and select Machine learned from the available types. Select the **Add structure** box to be able to add a structure to this entity.
-
- :::image type="content" source="../media/build-decomposable-app/create-entity.png" alt-text="A screenshot showing an entity." lightbox="../media/build-decomposable-app/create-entity.png":::
-
-4. Select **Next**.
-5. To add a child subentity, click on the "**+**" symbol and start adding the child. For our entity example, "**UpdateTierInfo**", we require three things:
- * **OriginalTier**
- * **NewTier**
- * **PhoneNumber**
-
- :::image type="content" source="../media/build-decomposable-app/add-subentities.png" alt-text="A screenshot of subentities in the app." lightbox="../media/build-decomposable-app/add-subentities.png":::
-
-6. Click **Create** after adding all the subentities.
-
- We will create another entity named "**Tier**", but this time it will be a list entity, and it will include all the tiers that we might provide: Standard tier, Premium tier, and VIP tier.
-
-1. To do this, go to the **Entities** tab, select **+Create**, and then select **list** from the types in the screen that appears.
-
-2. Add the items to your list, and optionally, you can add synonyms to make sure that all cases of that mention will be understood.
-
- :::image type="content" source="../media/build-decomposable-app/list-entities.png" alt-text="A screenshot of a list entity." lightbox="../media/build-decomposable-app/list-entities.png":::
-
-3. Now go back to the "**UpdateTierInfo**" entity and add the "tier" entity as a feature for the "**OriginalTier**" and "**newTier**" entities we created earlier. It should look something like this:
-
- :::image type="content" source="../media/build-decomposable-app/update-tier-info.png" alt-text="A screenshot of an entity's features." lightbox="../media/build-decomposable-app/update-tier-info.png":::
-
- We added tier as a feature for both "**originalTier**" and "**newTier**", and we added the "**Phonenumber**" entity, which is a Regex type. It can be created the same way we created the machine learned and list entities.
-
-Now we have successfully created intents, added example utterances, and added entities. We created four intents (other than the "none" intent), and three entities.
-
-## Label example utterances
-
-The machine learned entity is created and the subentities have features. To complete the extraction improvement, the example utterances need to be labeled with the subentities.
-
-There are two ways to label utterances:
-
-1. Using the labeling tool
- 1. Open the **Entity Palette**, and select the "**@**" symbol in the contextual toolbar.
- 2. Select each entity row in the palette, then use the palette cursor to select the entity in each example utterance.
-2. Highlight the text by dragging your cursor. Using the cursor, highlight the text you want to label. In the following image, we highlighted "vip - tier" and selected the "**NewTier**" entity.
-
- :::image type="content" source="../media/build-decomposable-app/label-example-utterance.png" alt-text="A screenshot showing how to label utterances." lightbox="../media/build-decomposable-app/label-example-utterance.png":::
--
-## Train the app
-
-In the top-right side of the LUIS website, select the **Train** button.
-
-Before training, make sure there is at least one utterance for each intent.
--
-## Publish the app
-
-In order to receive a LUIS prediction in a chat bot or other client application, you need to publish the app to the prediction endpoint. In order to publish, you need to train your application first.
-
-1. Select **Publish** in the top-right navigation.
-
- :::image type="content" source="../media/build-decomposable-app/publish-app.png" alt-text="A screenshot showing the button for publishing an app." lightbox="../media/build-decomposable-app/publish-app.png":::
-
-2. Select the **Production** slot, then select **Done**.
-
- :::image type="content" source="../media/build-decomposable-app/production-slot.png" alt-text="A screenshot showing the production slot selector." lightbox="../media/build-decomposable-app/production-slot.png":::
--
-3. Select **Access your endpoint URLs** in the notification to go to the **Azure Resources** page. You will only be able to see the URLs if you have a prediction resource associated with the app. You can also find the **Azure Resources** page by clicking **Manage** on the left of the screen.
-
- :::image type="content" source="../media/build-decomposable-app/access-endpoint.png" alt-text="A screenshot showing the endpoint access notification." lightbox="../media/build-decomposable-app/access-endpoint.png":::
--
-## Get intent prediction
-
-1. Select **Manage** in the top-right menu, then select **Azure Resources** on the left.
-2. Copy the **Example Query** URL and paste it into a new web browser tab.
-
- The endpoint URL will have the following format.
-
- ```http
- https://YOUR-CUSTOM-SUBDOMAIN.api.cognitive.microsoft.com/luis/prediction/v3.0/apps/YOUR-APP-ID/slots/production/predict?subscription-key=YOUR-KEY-ID&verbose=true&show-all-intents=true&log=true&query=YOUR_QUERY_HERE
- ```
-
-3. Go to the end of the URL in the address bar and replace the `query=` string parameter with:
-
- "Hello! I am looking for a new number please."
-
- The utterance **query** is passed in the URI. This utterance is not the same as any of the example utterances, and should be a good test to check if LUIS predicts the UserAction intent as the top scoring intent.
- ```JSON
- {
- "query": "hello! i am looking for a new number please",
- "prediction":
- {
- "topIntent": "UserAction",
- "intents":
- {
- "UserAction": {
- "score": 0.8607431},
- "Help":{
- "score": 0.031376917},
- "PayBill": {
- "score": 0.01989629},
- "None": {
- "score": 0.013738701},
- "UpdateTier": {
- "score": 0.012313577}
- },
- "entities": {}
- }
- }
- ```
-The JSON result identifies the top scoring intent in the **prediction.topIntent** property. All scores are between 0 and 1, with better scores being closer to 1.
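-
-If you prefer to make the same request programmatically, here is a minimal Python sketch using the `requests` library. The subdomain, app ID, and key are placeholder assumptions; copy the real values from the **Azure Resources** page as described above.
-
-```python
-import requests
-
-# Placeholder assumptions; use the values from your own prediction resource.
-url = (
-    "https://<your-custom-subdomain>.api.cognitive.microsoft.com"
-    "/luis/prediction/v3.0/apps/<app-id>/slots/production/predict"
-)
-params = {
-    "subscription-key": "<your-key>",
-    "verbose": "true",
-    "show-all-intents": "true",
-    "log": "true",
-    "query": "Hello! I am looking for a new number please.",
-}
-
-response = requests.get(url, params=params)
-response.raise_for_status()
-prediction = response.json()["prediction"]
-print(prediction["topIntent"], prediction["intents"][prediction["topIntent"]])
-```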
-
-## Client-application next steps
-
-In this tutorial, you created a LUIS app, created intents and entities, added example utterances to each intent and to the None intent, trained and published the app, and tested it at the endpoint. These are the basic steps of building a LUIS model.
-
-LUIS does not provide answers to user utterances, it only identifies what type of information is being asked for in natural language. The conversation follow-up is provided by the client application such as an [Azure Bot](https://azure.microsoft.com/services/bot-services/).
-
-## Clean up resources
-
-When no longer needed, delete the LUIS app. To do so, select **My apps** from the top-left menu. Select the ellipsis (**_..._**) to the right of the app name in the app list, and then select **Delete**. In the pop-up dialog named **Delete app?**, select **Ok**.
cognitive-services What Is Luis https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/LUIS/what-is-luis.md
- Title: Language Understanding (LUIS) Overview
-description: Language Understanding (LUIS) - a cloud-based API service that applies machine learning to conversational, natural language to predict meaning and extract information.
-keywords: Azure, artificial intelligence, ai, natural language processing, nlp, natural language understanding, nlu, LUIS, conversational AI, ai chatbot, nlp ai, azure luis
--- Previously updated : 07/19/2022---
--
-# What is Language Understanding (LUIS)?
--
-Language Understanding (LUIS) is a cloud-based conversational AI service that applies custom machine-learning intelligence to a user's conversational, natural language text to predict overall meaning, and pull out relevant, detailed information. LUIS provides access through its [custom portal](https://www.luis.ai), [APIs][endpoint-apis] and [SDK client libraries](client-libraries-rest-api.md).
-
-For first-time users, follow these steps to [sign in to LUIS portal](sign-in-luis-portal.md "sign in to LUIS portal").
-To get started, you can try a LUIS [prebuilt domain app](luis-get-started-create-app.md).
-
-This documentation contains the following article types:
-
-* [**Quickstarts**](luis-get-started-create-app.md) are getting-started instructions to guide you through making requests to the service.
-* [**How-to guides**](luis-how-to-start-new-app.md) contain instructions for using the service in more specific or customized ways.
-* [**Concepts**](artificial-intelligence.md) provide in-depth explanations of the service functionality and features.
-* [**Tutorials**](tutorial-intents-only.md) are longer guides that show you how to use the service as a component in broader business solutions.
-
-## What does LUIS offer?
-
-* **Simplicity**: LUIS removes the need for in-house AI expertise or any prior machine learning knowledge. With only a few clicks you can build your own conversational AI application. You can build your custom application by following one of our [quickstarts](luis-get-started-create-app.md), or you can use one of our [prebuilt domain](luis-get-started-create-app.md) apps.
-* **Security, Privacy and Compliance**: LUIS is backed by Azure infrastructure, which offers enterprise-grade security, privacy, and compliance. Your data remains yours; you can delete your data at any time. Your data is encrypted while it's in storage. Learn more about this [here](https://azure.microsoft.com/support/legal/cognitive-services-compliance-and-privacy).
-* **Integration**: Easily integrate your LUIS app with other Microsoft services like [Microsoft Bot framework](/composer/tutorial/tutorial-luis), [QnA Maker](../QnAMaker/choose-natural-language-processing-service.md), and [Speech service](../speech-service/get-started-intent-recognition.md).
--
-## LUIS Scenarios
-* [Build an enterprise-grade conversational bot](/azure/architecture/reference-architectures/ai/conversational-bot): This reference architecture describes how to build an enterprise-grade conversational bot (chatbot) using the Azure Bot Framework.
-* [Commerce Chatbot](/azure/architecture/solution-ideas/articles/commerce-chatbot): Together, the Azure Bot Service and Language Understanding service enable developers to create conversational interfaces for various scenarios like banking, travel, and entertainment.
-* [Controlling IoT devices using a Voice Assistant](/azure/architecture/solution-ideas/articles/iot-controlling-devices-with-voice-assistant): Create seamless conversational interfaces with all of your internet-accessible devices, from your connected television or fridge to devices in a connected power plant.
--
-## Application Development life cycle
-
-![LUIS app development life cycle](./media/luis-overview/luis-dev-lifecycle.png "LUIS Application Development Lifecycle")
--
-- **Plan**: Identify the scenarios that users might use your application for. Define the actions and relevant information that needs to be recognized.
-- **Build**: Use your authoring resource to develop your app. Start by defining [intents](luis-concept-intent.md) and [entities](concepts/entities.md). Then, add training [utterances](concepts/utterances.md) for each intent.
-- **Test and Improve**: Start testing your model with other utterances to get a sense of how the app behaves, and you can decide if any improvement is needed. You can improve your application by following these [best practices](luis-concept-best-practices.md).
-- **Publish**: Deploy your app for prediction and query the endpoint using your prediction resource. Learn more about authoring and prediction resources [here](luis-how-to-azure-subscription.md).
-- **Connect**: Connect to other services such as [Microsoft Bot framework](/composer/tutorial/tutorial-luis), [QnA Maker](../QnAMaker/choose-natural-language-processing-service.md), and [Speech service](../speech-service/get-started-intent-recognition.md).
-- **Refine**: [Review endpoint utterances](luis-concept-review-endpoint-utterances.md) to improve your application with real life examples.
-
-Learn more about planning and building your application [here](luis-how-plan-your-app.md).
-
-## Next steps
-
-* [What's new](whats-new.md "What's new") with the service and documentation
-* [Build a LUIS app](tutorial-intents-only.md)
-* [API reference][endpoint-apis]
-* [Best practices](luis-concept-best-practices.md)
-* [Developer resources](developer-reference-resource.md "Developer resources") for LUIS.
-* [Plan your app](luis-how-plan-your-app.md "Plan your app") with [intents](luis-concept-intent.md "intents") and [entities](concepts/entities.md "entities").
-
-[bot-framework]: /bot-framework/
-[flow]: /connectors/luis/
-[authoring-apis]: https://go.microsoft.com/fwlink/?linkid=2092087
-[endpoint-apis]: https://go.microsoft.com/fwlink/?linkid=2092356
-[qnamaker]: https://qnamaker.ai/
cognitive-services Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/LUIS/whats-new.md
- Title: What's New - Language Understanding (LUIS)
-description: This article is regularly updated with news about the Azure Cognitive Services Language Understanding API.
--- Previously updated : 02/24/2022--
-# What's new in Language Understanding
---
-Learn what's new in the service. These items include release notes, videos, blog posts, and other types of information. Bookmark this page to keep up to date with the service.
-
-## Release notes
-
-### February 2022
-* LUIS containers can be used in [disconnected environments](../containers/disconnected-containers.md?context=/azure/cognitive-services/luis/context/context).
-
-### January 2022
-* [Updated text recognizer](https://github.com/microsoft/Recognizers-Text/releases/tag/dotnet-v1.8.2) to v1.8.2
-* Added [English (UK)](luis-language-support.md) to supported languages.
-
-### December 2021
-* [Updated text recognizer](https://github.com/microsoft/Recognizers-Text/releases/tag/dotnet-v1.8.1) to v1.8.1
-* Jio India west [publishing region](luis-reference-regions.md#other-publishing-regions)
-
-### November 2021
-* [Azure Role-based access control (RBAC) article](role-based-access-control.md)
-
-### August 2021
-* Norway East [publishing region](luis-reference-regions.md#publishing-to-europe).
-* West US 3 [publishing region](luis-reference-regions.md#other-publishing-regions).
-
-### April 2021
-
-* Switzerland North [authoring region](luis-reference-regions.md#publishing-to-europe).
-
-### January 2021
-
-* The V3 prediction API now supports the [Bing Spellcheck API](luis-tutorial-bing-spellcheck.md).
-* The regional portals (au.luis.ai and eu.luis.ai) have been consolidated into a single portal and URL. If you were using one of these portals, you will be automatically re-directed to luis.ai.
-
-### December 2020
-
-* All LUIS users are required to [migrate to a LUIS authoring resource](luis-migration-authoring.md)
-* New [evaluation endpoints](luis-how-to-batch-test.md#batch-testing-using-the-rest-api) that allow you to submit batch tests using the REST API, and get accuracy results for your intents and entities. Available starting with the v3.0-preview LUIS Endpoint.
-
-### June 2020
-
-* [Preview 3.0 Authoring](luis-migration-authoring-entities.md) SDK -
- * Version 3.2.0-preview.3 - [.NET - NuGet](https://www.nuget.org/packages/Microsoft.Azure.CognitiveServices.Language.LUIS.Authoring/)
- * Version 4.0.0-preview.3 - [JS - NPM](https://www.npmjs.com/package/@azure/cognitiveservices-luis-authoring)
-* Applying DevOps practices with LUIS
- * Concepts
- * [DevOps practices for LUIS](luis-concept-devops-sourcecontrol.md)
- * [Continuous Integration and Continuous Delivery workflows for LUIS DevOps](luis-concept-devops-automation.md)
- * [Testing for LUIS DevOps](luis-concept-devops-testing.md)
- * How-to
- * [Apply DevOps to LUIS app development using GitHub Actions](./luis-concept-devops-automation.md)
- * [Complete code GitHub repo](https://github.com/Azure-Samples/LUIS-DevOps-Template)
-
-### May 2020 - //Build
-
-* Released as **generally available** (GA):
- * [Language Understanding container](luis-container-howto.md)
- * Preview portal promoted to [current portal](https://www.luis.ai), [previous](https://previous.luis.ai) portal still available
- * New machine-learning entity creation and labeling experience
- * [Upgrade process](migrate-from-composite-entity.md) from composite and simple entities to machine-learning entities
- * [Setting](how-to-application-settings-portal.md) support for normalizing word variants
-* Preview Authoring API changes
- * App schema 7.x for nested machine-learning entities
- * [Migration to required feature](luis-migration-authoring-entities.md#api-change-constraint-replaced-with-required-feature)
-* New resources for developers
- * [Continuous integration tools](developer-reference-resource.md#continuous-integration-tools)
- * Workshop - learn best practices for [_Natural Language Understanding_ (NLU) using LUIS](developer-reference-resource.md#workshops)
-* [Customer managed keys](./encrypt-data-at-rest.md) - encrypt all the data you use in LUIS by using your own key
-* [AI show](/Shows/AI-Show/New-Features-in-Language-Understanding) (video) - see the new features in LUIS
---
-### March 2020
-
-* TLS 1.2 is now enforced for all HTTP requests to this service. For more information, see [Azure Cognitive Services security](../cognitive-services-security.md).
-
-### November 4, 2019 - Ignite
-
-* Video - [Advanced Natural Language Understanding (NLU) models using LUIS and Azure Cognitive Services | BRK2188](https://www.youtube.com/watch?v=JdJEV2jV0_Y)
-
-* Improved developer productivity
- * General availability of our [prediction endpoint V3](luis-migration-api-v3.md).
- * Ability to import and export apps with `.lu` ([LUDown](https://github.com/microsoft/botbuilder-tools/tree/master/packages/Ludown)) format. This paves the way for an effective CI/CD process.
-* Language expansion
- * [Arabic and Hindi](luis-language-support.md) in public preview.
-* Prebuilt models
- * [Prebuilt domains](luis-reference-prebuilt-domains.md) is now generally available (GA)
- * Japanese [prebuilt entities](luis-reference-prebuilt-entities.md#japanese-entity-support) - age, currency, number, and percentage are not supported in V3.
- * Italian [prebuilt entities](luis-reference-prebuilt-entities.md#italian-entity-support) - age, currency, dimension, number, and percentage resolution changed from V2.
-* Enhanced user experience in [preview.luis.ai portal](https://preview.luis.ai) - revamped labeling experience to enable building and debugging complex models. Try the preview portal tutorials:
- * [Intents only](tutorial-intents-only.md)
- * [Decomposable machine-learning entity](tutorial-machine-learned-entity.md)
-* Advance language understanding capabilities - [building sophisticated language models](concepts/entities.md) with less effort.
-* Define machine learning features at the model level and enable models to be used as signals to other models, for example using entities as features to intents and to other entities.
-* New, expanded [limits](luis-limits.md) - higher maximum for phrase lists and total phrases, new model as a feature limits
-* Extract information from text in the format of deep hierarchy structure, making conversation applications more powerful.
-
- ![machine-learning entity image](./media/whats-new/deep-entity-extraction-example.png)
-
-### September 3, 2019
-
-* Azure authoring resource - [migrate now](luis-migration-authoring.md).
- * 500 apps per Azure resource
- * 100 versions per app
-* Turkish support for prebuilt entities
-* Italian support for datetimeV2
-
-### July 23, 2019
-
-* Update the [Recognizers-Text](https://github.com/microsoft/Recognizers-Text/releases/tag/dotnet-v1.2.3) to 1.2.3
- * Age, Temperature, Dimension, and Currency recognizers in Italian.
- * Improvement in Holiday recognition in English to correctly calculate Thanksgiving-based dates.
- * Improvements in French DateTime to reduce false positives of non-Date and non-Time entities.
- * Support for calendar/school/fiscal year and acronyms in English DateRange.
- * Improved PhoneNumber recognition in Chinese and Japanese.
- * Improved support for NumberRange in English.
- * Performance improvements.
-
-### June 24, 2019
-
-* [OrdinalV2 prebuilt entity](luis-reference-prebuilt-ordinal-v2.md) to support ordering such as next, previous, and last. English culture only.
-
-### May 6, 2019 - //Build Conference
-
-The following features were released at the Build 2019 Conference:
-
-* [Preview of V3 API migration guide](luis-migration-api-v3.md)
-* [Improved analytics dashboard](luis-how-to-use-dashboard.md)
-* [Improved prebuilt domains](luis-reference-prebuilt-domains.md)
-* [Dynamic list entities](schema-change-prediction-runtime.md#dynamic-lists-passed-in-at-prediction-time)
-* [External entities](schema-change-prediction-runtime.md#external-entities-passed-in-at-prediction-time)
-
-## Blogs
-
-[Bot Framework](https://blog.botframework.com/)
-
-## Videos
-
-### 2019 Ignite videos
-
-[Advanced Natural Language Understanding (NLU) models using LUIS and Azure Cognitive Services | BRK2188](https://www.youtube.com/watch?v=JdJEV2jV0_Y)
-
-### 2019 Build videos
-
-[How to use Azure Conversational AI to scale your business for the next generation](https://www.youtube.com/watch?v=_k97jd-csuk&feature=youtu.be)
-
-## Service updates
-
-[Azure update announcements for Cognitive Services](https://azure.microsoft.com/updates/?product=cognitive-services)
cognitive-services Azure Resources https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/QnAMaker/Concepts/azure-resources.md
- Title: Azure resources - QnA Maker
description: QnA Maker uses several Azure resources, each with a different purpose. Understanding how they are used individually allows you to plan for and select the correct pricing tier or know when to change your pricing tier. Understanding how they are used in combination allows you to find and fix problems when they occur.
--- Previously updated : 02/02/2022---
-# Azure resources for QnA Maker
-
-QnA Maker uses several Azure resources, each with a different purpose. Understanding how they are used individually allows you to plan for and select the correct pricing tier or know when to change your pricing tier. Understanding how they are used _in combination_ allows you to find and fix problems when they occur.
--
-## Resource planning
-
-When you first develop a QnA Maker knowledge base, in the prototype phase, it is common to have a single QnA Maker resource for both testing and production.
-
-When you move into the development phase of the project, you should consider:
-
-* How many languages will your knowledge base system hold?
-* How many regions do you need your knowledge base to be available in?
-* How many documents in each domain will your system hold?
-
-Plan to have a single QnA Maker resource hold all knowledge bases that have the same language, the same region and the same subject domain combination.
-
-## Pricing tier considerations
-
-Typically there are three parameters you need to consider:
-
-* **The throughput you need from the service**:
- * Select the appropriate [App Plan](https://azure.microsoft.com/pricing/details/app-service/plans/) for your App service based on your needs. You can [scale up](../../../app-service/manage-scale-up.md) or down the App.
- * This should also influence your Azure **Cognitive Search** SKU selection, see more details [here](../../../search/search-sku-tier.md). Additionally, you may need to adjust Cognitive Search [capacity](../../../search/search-capacity-planning.md) with replicas.
-
-* **Size and the number of knowledge bases**: Choose the appropriate [Azure search SKU](https://azure.microsoft.com/pricing/details/search/) for your scenario. Typically, you decide the number of knowledge bases you need based on the number of different subject domains. One subject domain (for a single language) should be in one knowledge base.
-
-Your Azure Search service resource must have been created after January 2019 and cannot be in the free (shared) tier. There is no support to configure customer-managed keys in the Azure portal.
-
-> [!IMPORTANT]
-> You can publish N-1 knowledge bases in a particular tier, where N is the maximum indexes allowed in the tier. Also check the maximum size and the number of documents allowed per tier.
-
-For example, if your tier has 15 allowed indexes, you can publish 14 knowledge bases (one index per published knowledge base). The fifteenth index is used for all the knowledge bases for authoring and testing.
-
-* **Number of documents as sources**: The free SKU of the QnA Maker management service limits the number of documents you can manage via the portal and the APIs to 3 (of 1 MB size each). The standard SKU has no limits to the number of documents you can manage. See more details [here](https://aka.ms/qnamaker-pricing).
-
-The following table gives you some high-level guidelines.
-
-| | QnA Maker Management | App Service | Azure Cognitive Search | Limitations |
-| -- | -- | -- | | -- |
-| **Experimentation** | Free SKU | Free Tier | Free Tier | Publish Up to 2 KBs, 50 MB size |
-| **Dev/Test Environment** | Standard SKU | Shared | Basic | Publish Up to 14 KBs, 2 GB size |
-| **Production Environment** | Standard SKU | Basic | Standard | Publish Up to 49 KBs, 25 GB size |
-
-## Recommended Settings
-
-|Target QPS | App Service | Azure Cognitive Search |
-| -- | -- | |
-| 3 | S1, one Replica | S1, one Replica |
-| 50 | S3, 10 Replicas | S1, 12 Replicas |
-| 80 | S3, 10 Replicas | S3, 12 Replicas |
-| 100 | P3V2, 10 Replicas | S3, 12 Replicas, 3 Partitions |
-| 200 to 250 | P3V2, 20 Replicas | S3, 12 Replicas, 3 Partitions |
-
-## When to change a pricing tier
-
-|Upgrade|Reason|
-|--|--|
-|[Upgrade](../How-to/set-up-qnamaker-service-azure.md#upgrade-qna-maker-sku) QnA Maker management SKU|You want to have more QnA pairs or document sources in your knowledge base.|
-|[Upgrade](../How-to/set-up-qnamaker-service-azure.md#upgrade-app-service) App Service SKU and check Cognitive Search tier and [create Cognitive Search replicas](../../../search/search-capacity-planning.md)|Your knowledge base needs to serve more requests from your client app, such as a chat bot.|
-|[Upgrade](../How-to/set-up-qnamaker-service-azure.md#upgrade-the-azure-cognitive-search-service) Azure Cognitive Search service|You plan to have many knowledge bases.|
-
-Get the latest runtime updates by [updating your App Service in the Azure portal](../how-to/configure-QnA-Maker-resources.md#get-the-latest-runtime-updates).
-
-## Keys in QnA Maker
-
-Your QnA Maker service deals with two kinds of keys: **authoring keys** and **query endpoint keys** used with the runtime hosted in the App service.
-
-Use these keys when making requests to the service through APIs.
-
-![Key management](../media/authoring-key.png)
-
-|Name|Location|Purpose|
-|--|--|--|
-|Authoring/Subscription key|[Azure portal](https://azure.microsoft.com/free/cognitive-services/)|These keys are used to access the [QnA Maker management service APIs](/rest/api/cognitiveservices/qnamaker4.0/knowledgebase). These APIs let you edit the questions and answers in your knowledge base, and publish your knowledge base. These keys are created when you create a new QnA Maker service.<br><br>Find these keys on the **Cognitive Services** resource on the **Keys and Endpoint** page.|
-|Query endpoint key|[QnA Maker portal](https://www.qnamaker.ai)|These keys are used to query the published knowledge base endpoint to get a response for a user question. You typically use this query endpoint in your chat bot or in the client application code that connects to the QnA Maker service. These keys are created when you publish your QnA Maker knowledge base.<br><br>Find these keys in the **Service settings** page. Find this page from the user's menu in the upper right of the page on the drop-down menu.|
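-
-As a sketch of using the authoring/subscription key programmatically, the following Python snippet lists the knowledge bases on a QnA Maker resource through the v4.0 management API. The endpoint host and key are placeholder assumptions; use the endpoint and key shown on your resource's **Keys and Endpoint** page.
-
-```python
-import requests
-
-AUTHORING_KEY = "<authoring-key>"  # from the Keys and Endpoint page
-ENDPOINT = "https://<your-resource-name>.cognitiveservices.azure.com"  # placeholder assumption
-
-# List all knowledge bases on this QnA Maker resource.
-response = requests.get(
-    f"{ENDPOINT}/qnamaker/v4.0/knowledgebases/",
-    headers={"Ocp-Apim-Subscription-Key": AUTHORING_KEY},
-)
-response.raise_for_status()
-for kb in response.json()["knowledgebases"]:
-    print(kb["id"], kb["name"])
-```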
-
-### Find authoring keys in the Azure portal
-
-You can view and reset your authoring keys from the Azure portal, where you created the QnA Maker resource.
-
-1. Go to the QnA Maker resource in the Azure portal and select the resource that has the _Cognitive Services_ type:
-
- ![QnA Maker resource list](../media/qnamaker-how-to-key-management/qnamaker-resource-list.png)
-
-2. Go to **Keys and Endpoint**:
-
- ![QnA Maker managed (Preview) Subscription key](../media/qnamaker-how-to-key-management/subscription-key-v2.png)
-
-### Find query endpoint keys in the QnA Maker portal
-
-The endpoint is in the same region as the resource because the endpoint keys are used to make a call to the knowledge base.
-
-Endpoint keys can be managed from the [QnA Maker portal](https://qnamaker.ai).
-
-1. Sign in to the [QnA Maker portal](https://qnamaker.ai), go to your profile, and then select **Service settings**:
-
- ![Endpoint key](../media/qnamaker-how-to-key-management/Endpoint-keys.png)
-
-2. View or reset your keys:
-
- > [!div class="mx-imgBorder"]
- > ![Endpoint key manager](../media/qnamaker-how-to-key-management/Endpoint-keys1.png)
-
- >[!NOTE]
- >Refresh your keys if you think they've been compromised. This may require corresponding changes to your client application or bot code.
-
-## Management service region
-
-The management service of QnA Maker is used only for the QnA Maker portal and for initial data processing. This service is available only in the **West US** region. No customer data is stored in this West US service.
-
-## Resource naming considerations
-
-The resource name for the QnA Maker resource, such as `qna-westus-f0-b`, is also used to name the other resources.
-
-The Azure portal create window allows you to create a QnA Maker resource and select the pricing tiers for the other resources.
-
-> [!div class="mx-imgBorder"]
-> ![Screenshot of Azure portal for QnA Maker resource creation](../media/concept-azure-resource/create-blade.png)
-
-After the resources are created, they have the same name, except for the optional Application Insights resource, which appends characters to the name.
-
-> [!div class="mx-imgBorder"]
-> ![Screenshot of Azure portal resource listing](../media/concept-azure-resource/all-azure-resources-created-qnamaker.png)
-
-> [!TIP]
-> Create a new resource group when you create a QnA Maker resource. That allows you to see all resources associated with the QnA Maker resource when searching by resource group.
-
-> [!TIP]
-> Use a naming convention to indicate pricing tiers within the name of the resource or the resource group. When you receive errors from creating a new knowledge base, or adding new documents, the Cognitive Search pricing tier limit is a common issue.
-
-## Resource purposes
-
-Each Azure resource created with QnA Maker has a specific purpose:
-
-* QnA Maker resource
-* Cognitive Search resource
-* App Service
-* App Plan Service
-* Application Insights Service
-
-### QnA Maker resource
-
-The QnA Maker resource provides access to the authoring and publishing APIs.
-
-#### QnA Maker resource configuration settings
-
-When you create a new knowledge base in the [QnA Maker portal](https://qnamaker.ai), the **Language** setting is the only setting that is applied at the resource level. You select the language when you create the first knowledge base for the resource.
-
-### Cognitive Search resource
-
-The [Cognitive Search](../../../search/index.yml) resource is used to:
-
-* Store the QnA pairs
-* Provide the initial ranking (ranker #1) of the QnA pairs at runtime
-
-#### Index usage
-
-The resource keeps one index to act as the test index and the remaining indexes correlate to one published knowledge base each.
-
-A resource priced to hold 15 indexes, will hold 14 published knowledge bases, and one index is used for testing all the knowledge bases. This test index is partitioned by knowledge base so that a query using the interactive test pane will use the test index but only return results from the specific partition associated with the specific knowledge base.
-
-#### Language usage
-
-The first knowledge base created in the QnA Maker resource is used to determine the _single_ language set for the Cognitive Search resource and all its indexes. You can only have _one language set_ for a QnA Maker service.
-
-#### Using a single Cognitive Search service
-
-If you create a QnA service and its dependencies (such as Search) through the portal, a Search service is created for you and linked to the QnA Maker service. After these resources are created, you can update the App Service setting to use a previously existing Search service and remove the one you just created.
-
-Learn [how to configure](../How-To/configure-QnA-Maker-resources.md#configure-qna-maker-to-use-different-cognitive-search-resource) QnA Maker to use a different Cognitive Service resource than the one created as part of the QnA Maker resource creation process.
-
-### App service and App service plan
-
-The [App service](../../../app-service/index.yml) is used by your client application to access the published knowledge bases via the runtime endpoint. App service includes the natural language processing (NLP) based second ranking layer (ranker #2) of the QnA pairs at runtime. The second ranking applies intelligent filters that can include metadata and follow-up prompts.
-
-To query a published knowledge base, use the shared URL endpoint and specify the **knowledge base ID** within the route; all published knowledge bases on the resource use the same endpoint.
-
-`{RuntimeEndpoint}/qnamaker/knowledgebases/{kbId}/generateAnswer`
-
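-As a sketch only, the query is a POST to this route, with the endpoint key sent in an `Authorization: EndpointKey {your-endpoint-key}` header and the user question in a JSON body; the values here are placeholders:
-
-```json
-{
- "question": "How do I power down my Surface laptop?",
- "top": 3
-}
-```
-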
-### Application Insights
-
-[Application Insights](../../../azure-monitor/app/app-insights-overview.md) is used to collect chat logs and telemetry. Review the common [Kusto queries](../how-to/get-analytics-knowledge-base.md) for information about your service.
-
-### Share services with QnA Maker
-
-QnA Maker creates several Azure resources. To reduce management and benefit from cost sharing, use the following table to understand what you can and can't share:
-
-|Service|Share|Reason|
-|--|--|--|
-|Cognitive Services|X|Not possible by design|
-|App Service plan|✔|Fixed disk space is allocated for an App Service plan. If other apps sharing the same App Service plan use significant disk space, the QnAMaker App Service instance will encounter problems.|
-|App Service|X|Not possible by design|
-|Application Insights|✔|Can be shared|
-|Search service|✔|1. `testkb` is a reserved name for the QnAMaker service; it can't be used by others.<br>2. Synonym map by the name `synonym-map` is reserved for the QnAMaker service.<br>3. The number of published knowledge bases is limited by Search service tier. If there are free indexes available, other services can use them.|
-
-## Next steps
-
-* Learn about the QnA Maker [knowledge base](../How-To/manage-knowledge-bases.md)
-* Understand a [knowledge base life cycle](development-lifecycle-knowledge-base.md)
-* Review service and knowledge base [limits](../limits.md)
cognitive-services Best Practices https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/QnAMaker/Concepts/best-practices.md
- Title: Best practices - QnA Maker
-description: Use these best practices to improve your knowledge base and provide better results to your application/chat bot's end users.
--- Previously updated : 11/19/2021---
-# Best practices of a QnA Maker knowledge base
-
-The [knowledge base development lifecycle](../Concepts/development-lifecycle-knowledge-base.md) guides you on how to manage your KB from beginning to end. Use these best practices to improve your knowledge base and provide better results to your client application or chat bot's end users.
--
-## Extraction
-
-The QnA Maker service is continually improving the algorithms that extract QnAs from content and expanding the list of supported file and HTML formats. Follow the [guidelines](../Concepts/data-sources-and-content.md) for data extraction based on your document type.
-
-In general, FAQ pages should be stand-alone and not combined with other information. Product manuals should have clear headings and preferably an index page.
-
-### Configuring multi-turn
-
-[Create your knowledge base](../how-to/multi-turn.md#create-a-multi-turn-conversation-from-a-documents-structure) with multi-turn extraction enabled. If your knowledge base does or should support question hierarchy, this hierarchy can be extracted from the document or created after the document is extracted.
-
-## Creating good questions and answers
-
-### Good questions
-
-The best questions are simple. Consider the key word or phrase for each question, then create a simple question for that key word or phrase.
-
-Add as many alternate questions as you need, but keep the variations simple. Adding words or phrasings that are not part of the main goal of the question does not help QnA Maker find a match.
--
-### Add relevant alternative questions
-
-Your user may enter questions in either a conversational style of text, such as `How do I add a toner cartridge to my printer?`, or a keyword search, such as `toner cartridge`. The knowledge base should have both styles of questions in order to correctly return the best answer. If you aren't sure what keywords a customer is entering, use Application Insights data to analyze queries.
-
-### Good answers
-
-The best answers are simple answers but not too simple. Do not use answers such as `yes` and `no`. If your answer should link to other sources or provide a rich experience with media and links, use [metadata tagging](../how-to/edit-knowledge-base.md#add-metadata) to distinguish between answers, then [submit the query](../how-to/metadata-generateanswer-usage.md#generateanswer-request-configuration) with metadata tags in the `strictFilters` property to get the correct answer version.
-
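-For example, a GenerateAnswer request that restricts the returned answers to a metadata tag might look like the following sketch; the tag name, value, and question are placeholders:
-
-```json
-{
- "question": "how do I sleep my laptop?",
- "top": 3,
- "strictFilters": [
- {
- "name": "format",
- "value": "rich-media"
- }
- ]
-}
-```
-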
-|Answer|Follow-up prompts|
-|--|--|
-|Power down the Surface laptop with the power button on the keyboard.|* Key-combinations to sleep, shut down, and restart.<br>* How to hard-boot a Surface laptop<br>* How to change the BIOS for a Surface laptop<br>* Differences between sleep, shut down and restart|
-|Customer service is available via phone, Skype, and text message 24 hours a day.|* Contact information for sales.<br> * Office and store locations and hours for an in-person visit.<br> * Accessories for a Surface laptop.|
-
-## Chit-Chat
-Add chit-chat to your bot to make it more conversational and engaging with little effort. You can easily add chit-chat data sets from pre-defined personalities when creating your KB, and change them at any time. Learn how to [add chit-chat to your KB](../How-To/chit-chat-knowledge-base.md).
-
-Chit-chat is supported in [many languages](../how-to/chit-chat-knowledge-base.md#language-support).
-
-### Choosing a personality
-Chit-chat is supported for several predefined personalities:
-
-|Personality |QnA Maker Dataset file |
-||--|
-|Professional |[qna_chitchat_professional.tsv](https://qnamakerstore.blob.core.windows.net/qnamakerdata/editorial/qna_chitchat_professional.tsv) |
-|Friendly |[qna_chitchat_friendly.tsv](https://qnamakerstore.blob.core.windows.net/qnamakerdata/editorial/qna_chitchat_friendly.tsv) |
-|Witty |[qna_chitchat_witty.tsv](https://qnamakerstore.blob.core.windows.net/qnamakerdata/editorial/qna_chitchat_witty.tsv) |
-|Caring |[qna_chitchat_caring.tsv](https://qnamakerstore.blob.core.windows.net/qnamakerdata/editorial/qna_chitchat_caring.tsv) |
-|Enthusiastic |[qna_chitchat_enthusiastic.tsv](https://qnamakerstore.blob.core.windows.net/qnamakerdata/editorial/qna_chitchat_enthusiastic.tsv) |
-
-The responses range from formal to informal and irreverent. Select the personality that is most closely aligned with the tone you want for your bot. You can view the [datasets](https://github.com/Microsoft/BotBuilder-PersonalityChat/tree/master/CSharp/Datasets), choose one that serves as a base for your bot, and then customize the responses.
-
-### Edit bot-specific questions
-Some bot-specific questions are part of the chit-chat data set and have been filled in with generic answers. Change these answers to best reflect your bot's details.
-
-We recommend making the following chit-chat QnAs more specific:
-
-* Who are you?
-* What can you do?
-* How old are you?
-* Who created you?
-* Hello
-
-### Adding custom chit-chat with a metadata tag
-
-If you add your own chit-chat QnA pairs, make sure to add metadata so these answers are returned. The metadata name/value pair is `editorial:chitchat`.
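-
-A minimal sketch of such a pair, using the same name/value metadata shape that QnA Maker uses elsewhere in these articles (the question and answer text are placeholders):
-
-```json
-{
- "questions": [
- "Tell me a joke"
- ],
- "answer": "I only know jokes about knowledge bases.",
- "metadata": [
- {
- "name": "editorial",
- "value": "chitchat"
- }
- ]
-}
-```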
-
-## Searching for answers
-
-The GenerateAnswer API uses both the questions and the answer of each QnA pair to search for the best answer to a user's query.
-
-### Searching questions only when answer is not relevant
-
-Use the [`RankerType=QuestionOnly`](#choosing-ranker-type) if you don't want to search answers.
-
-An example of this is a knowledge base that catalogs acronyms as questions with their full forms as the answers. The value of the answer will not help to find the appropriate answer.
-
-## Ranking/Scoring
-Make sure you are making the best use of the ranking features QnA Maker supports. Doing so will improve the likelihood that a given user query is answered with an appropriate response.
-
-### Choosing a threshold
-
-The default [confidence score](confidence-score.md) that is used as a threshold is 0; however, you can [change the threshold](confidence-score.md#set-threshold) for your KB based on your needs. Since every KB is different, you should test and choose the threshold that is best suited for your KB.
-
-### Choosing Ranker type
-By default, QnA Maker searches through questions and answers. If you want to search through questions only to generate an answer, set `RankerType=QuestionOnly` in the POST body of the GenerateAnswer request.
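-
-A minimal sketch of such a request body; the `RankerType` name comes from the setting above, and the question and `top` values are placeholders:
-
-```json
-{
- "question": "ADAL",
- "top": 3,
- "RankerType": "QuestionOnly"
-}
-```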
-
-### Add alternate questions
-[Alternate questions](../How-To/edit-knowledge-base.md) improve the likelihood of a match with a user query. Alternate questions are useful when there are multiple ways in which the same question may be asked. This can include changes in the sentence structure and word-style.
-
-|Original query|Alternate queries|Change|
-|--|--|--|
-|Is parking available?|Do you have car park?|sentence structure|
-|Hi|Yo<br>Hey there!|word-style or slang|
-
-<a name="use-metadata-filters"></a>
-
-### Use metadata tags to filter questions and answers
-
-[Metadata](../How-To/edit-knowledge-base.md) lets a client application narrow down the results of a user query based on metadata tags, rather than taking all answers. The knowledge base answer can differ based on the metadata tag, even if the query is the same. For example, *"where is parking located"* can have a different answer if the location of the restaurant branch is different - that is, the metadata is *Location: Seattle* versus *Location: Redmond*.
-
-### Use synonyms
-
-While there is some support for synonyms in the English language, use case-insensitive word alterations via the [Alterations API](/rest/api/cognitiveservices/qnamaker/alterations/replace) to add synonyms to keywords that take different forms. Synonyms are added at the QnA Maker service-level and **shared by all knowledge bases in the service**.
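-
-A sketch of the alterations payload, assuming the `wordAlterations` collection shape from the Alterations API; each group lists interchangeable forms of a word, and the example words are placeholders:
-
-```json
-{
- "wordAlterations": [
- {
- "alterations": [
- "botframework",
- "bot framework"
- ]
- }
- ]
-}
-```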
-
-### Use distinct words to differentiate questions
-QnA Maker's ranking algorithm, which matches a user query with a question in the knowledge base, works best if each question addresses a different need. Repetition of the same word set between questions reduces the likelihood that the right answer is chosen for a given user query with those words.
-
-For example, you might have two separate QnAs with the following questions:
-
-|QnAs|
-|--|
-|where is the parking *location*|
-|where is the ATM *location*|
-
-Since these two QnAs are phrased with very similar words, this similarity could cause very similar scores for many user queries that are phrased like *"where is the `<x>` location"*. Instead, try to clearly differentiate with queries like *"where is the parking lot"* and *"where is the ATM"*, by avoiding words like "location" that could be in many questions in your KB.
-
-## Collaborate
-QnA Maker allows users to collaborate on a knowledge base. Users need access to the Azure QnA Maker resource group in order to access the knowledge bases. Some organizations may want to outsource the knowledge base editing and maintenance, and still be able to protect access to their Azure resources. This editor-approver model is implemented by setting up two identical [QnA Maker services](../How-to/set-up-qnamaker-service-azure.md) in different subscriptions and selecting one for the edit-testing cycle. Once testing is finished, the knowledge base contents are transferred with an [import-export](../Tutorials/migrate-knowledge-base.md) process to the approver's QnA Maker service, which then publishes the knowledge base and updates the endpoint.
-
-## Active learning
-
-[Active learning](../How-to/use-active-learning.md) does the best job of suggesting alternative questions when it has a wide range of quality and quantity of user-based queries. It is important to allow client-applications' user queries to participate in the active learning feedback loop without censorship. Once questions are suggested in the QnA Maker portal, you can **[filter by suggestions](../How-To/improve-knowledge-base.md)** then review and accept or reject those suggestions.
-
-## Next steps
-
-> [!div class="nextstepaction"]
-> [Edit a knowledge base](../How-to/edit-knowledge-base.md)
cognitive-services Confidence Score https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/QnAMaker/Concepts/confidence-score.md
- Title: Confidence score - QnA Maker-
-description: When a user query is matched against a knowledge base, QnA Maker returns relevant answers, along with a confidence score.
----- Previously updated : 01/27/2020---
-# The confidence score of an answer
-When a user query is matched against a knowledge base, QnA Maker returns relevant answers, along with a confidence score. This score indicates the confidence that the answer is the right match for the given user query.
-
-The confidence score is a number between 0 and 100. A score of 100 is likely an exact match, while a score of 0 means that no matching answer was found. The higher the score, the greater the confidence in the answer. For a given query, multiple answers could be returned. In that case, the answers are returned in order of decreasing confidence score.
-
-In the example below, you can see one QnA entity, with 2 questions.
--
-![Sample QnA pair](../media/qnamaker-concepts-confidencescore/ranker-example-qna.png)
-
-For the above example, you can expect scores like the sample score range below for different types of user queries:
--
-![Ranker score range](../media/qnamaker-concepts-confidencescore/ranker-score-range.png)
--
-The following table indicates the typical confidence associated with a given score.
-
-|Score Value|Score Meaning|Example Query|
-|--|--|--|
-|90 - 100|A near exact match of user query and a KB question|"My changes aren't updated in KB after publish"|
-|> 70|High confidence - typically a good answer that completely answers the user's query|"I published my KB but it's not updated"|
-|50 - 70|Medium confidence - typically a fairly good answer that should answer the main intent of the user query|"Should I save my updates before I publish my KB?"|
-|30 - 50|Low confidence - typically a related answer that partially answers the user's intent|"What does the save and train do?"|
-|< 30|Very low confidence - typically does not answer the user's query, but has some matching words or phrases|"Where can I add synonyms to my KB"|
-|0|No match, so the answer is not returned.|"How much does the service cost"|
-
-## Choose a score threshold
-The table above shows the scores that are expected for most KBs. However, since every KB is different and has different types of words, intents, and goals, we recommend that you test and choose the threshold that works best for you. By default, the threshold is set to 0 so that all possible answers are returned. The recommended threshold that should work for most KBs is **50**.
-
-When choosing your threshold, keep in mind the balance between Accuracy and Coverage, and tweak your threshold based on your requirements.
-* If **Accuracy** (or precision) is more important for your scenario, then increase your threshold. This way, every time you return an answer, it will be a much more confident case, and much more likely to be the answer users are looking for. In this case, you might end up leaving more questions unanswered. *For example:* if you make the threshold **70**, you might miss some ambiguous examples like "what is save and train?".
-* If **Coverage** (or recall) is more important, and you want to answer as many questions as possible, even if there is only a partial relation to the user's question, then lower the threshold. This means there could be more cases where the answer does not answer the user's actual query, but gives some other somewhat related answer. *For example:* if you make the threshold **30**, you might give answers for queries like "Where can I edit my KB?".
-
-> [!NOTE]
-> Newer versions of QnA Maker include improvements to scoring logic, and could affect your threshold. Any time you update the service, make sure to test and tweak the threshold if necessary. You can check your QnA Service version [here](https://www.qnamaker.ai/UserSettings), and see how to get the latest updates [here](../How-To/configure-QnA-Maker-resources.md#get-the-latest-runtime-updates).
-
-## Set threshold
-
-Set the threshold score as a property of the [GenerateAnswer API JSON body](../How-To/metadata-generateanswer-usage.md#generateanswer-request-configuration). This means you set it for each call to GenerateAnswer.
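-
-For example, a sketch of a GenerateAnswer request body that only returns answers scoring at or above the recommended threshold of 50 (the question text is a placeholder):
-
-```json
-{
- "question": "Should I save my updates before I publish my KB?",
- "top": 3,
- "scoreThreshold": 50
-}
-```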
-
-From the bot framework, set the score as part of the options object with [C#](../How-To/metadata-generateanswer-usage.md?#use-qna-maker-with-a-bot-in-c) or [Node.js](../How-To/metadata-generateanswer-usage.md?#use-qna-maker-with-a-bot-in-nodejs).
-
-## Improve confidence scores
-To improve the confidence score of a particular response to a user query, you can add the user query to the knowledge base as an alternate question on that response. You can also use case-insensitive [word alterations](/rest/api/cognitiveservices/qnamaker/alterations/replace) to add synonyms to keywords in your KB.
--
-## Similar confidence scores
-When multiple responses have a similar confidence score, it is likely that the query was too generic and therefore matched with equal likelihood with multiple answers. Try to structure your QnAs better so that every QnA entity has a distinct intent.
--
-## Confidence score differences between test and production
-The confidence score of an answer may change negligibly between the test and published version of the knowledge base even if the content is the same. This is because the content of the test and the published knowledge base are located in different Azure Cognitive Search indexes.
-
-The test index holds all the QnA pairs of your knowledge bases. When querying the test index, the query applies to the entire index then results are restricted to the partition for that specific knowledge base. If the test query results are negatively impacting your ability to validate the knowledge base, you can:
-* organize your knowledge base using one of the following:
- * 1 resource restricted to 1 KB: restrict your single QnA resource (and the resulting Azure Cognitive Search test index) to a single knowledge base.
- * 2 resources - 1 for test, 1 for production: have two QnA Maker resources, using one for testing (with its own test and production indexes) and one for production (also having its own test and production indexes)
-* and always use the same parameters, such as **[top](../How-To/improve-knowledge-base.md#use-the-top-property-in-the-generateanswer-request-to-get-several-matching-answers)**, when querying both your test and production knowledge bases
-
-When you publish a knowledge base, the question and answer contents of your knowledge base move from the test index to a production index in Azure Cognitive Search. See how the [publish](../Quickstarts/create-publish-knowledge-base.md#publish-the-knowledge-base) operation works.
-
-If you have a knowledge base in different regions, each region uses its own Azure Cognitive Search index. Because different indexes are used, the scores will not be exactly the same.
--
-## No match found
-When no good match is found by the ranker, a confidence score of 0.0 or "None" is returned, and the default response is "No good match found in the KB". You can override this [default response](../How-To/metadata-generateanswer-usage.md) in the bot or application code calling the endpoint. Alternatively, you can set the override response in Azure, which changes the default for all knowledge bases deployed in a particular QnA Maker service.
-
-## Next steps
-> [!div class="nextstepaction"]
-> [Best practices](./best-practices.md)
cognitive-services Data Sources And Content https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/QnAMaker/Concepts/data-sources-and-content.md
- Title: Data sources and content types - QnA Maker
-description: Learn how to import question and answer pairs from data sources and supported content types, which include many standard structured documents such as PDF, DOCX, and TXT - QnA Maker.
----- Previously updated : 01/11/2022--
-# Importing from data sources
-
-A knowledge base consists of question and answer pairs brought in by public URLs and files.
--
-## Data source locations
-
-Content is brought into a knowledge base from a data source. Data source locations are **public URLs or files**, which do not require authentication.
-
-[SharePoint files](../how-to/add-sharepoint-datasources.md), secured with authentication, are the exception. SharePoint resources must be files, not web pages.
-
-QnA Maker supports public URLs ending with an .ASPX web extension that are not secured with authentication.
-
-## Chit-chat content
-
-The chit-chat content set is offered as a complete content data source in several languages and conversational styles. It can be a starting point for your bot's personality, and it saves you the time and cost of writing this content from scratch. Learn [how to add chit-chat content](../how-to/chit-chat-knowledge-base.md) to your knowledge base.
-
-## Structured data format through import
-
-Importing a knowledge base replaces the content of the existing knowledge base. Import requires a structured `.tsv` file that contains questions and answers. This information helps QnA Maker group the question-answer pairs and attribute them to a particular data source.
-
-| Question | Answer | Source| Metadata (1 key: 1 value) |
-|--||-||
-| Question1 | Answer1 | Url1 | <code>Key1:Value1 &#124; Key2:Value2</code> |
-| Question2 | Answer2 | Editorial| `Key:Value` |
-
-## Structured multi-turn format through import
-
-You can create multi-turn conversations in a `.tsv` file format. The format lets you create multi-turn conversations by analyzing previous chat logs (with other processes, not QnA Maker), and then create the `.tsv` file through automation. Import the file to replace the existing knowledge base.
-
-> [!div class="mx-imgBorder"]
-> ![Conceptual model of 3 levels of multi-turn question](../media/qnamaker-concepts-knowledgebase/nested-multi-turn.png)
-
-The column specific to multi-turn in a multi-turn `.tsv` is **Prompts**. An example `.tsv`, shown in Excel, shows the information to include to define the multi-turn children:
-
-```JSON
-[
- {"displayOrder":0,"qnaId":2,"displayText":"Level 2 Question A"},
- {"displayOrder":0,"qnaId":3,"displayText":"Level 2 - Question B"}
-]
-```
-
-The **displayOrder** is numeric and the **displayText** is text that shouldn't include markdown.
-
-> [!div class="mx-imgBorder"]
-> ![Multi-turn question example as shown in Excel](../media/qnamaker-concepts-knowledgebase/multi-turn-tsv-columns-excel-example.png)
-
-## Export as example
-
-If you are unsure how to represent your QnA pair in the `.tsv` file:
-* Use this [downloadable example from GitHub](https://github.com/Azure-Samples/cognitive-services-sample-data-files/blob/master/qna-maker/data-source-formats/Structured-multi-turn-format.xlsx?raw=true)
-* Or create the pair in the QnA Maker portal, save, then export the knowledge base for an example of how to represent the pair.
-
-## Unstructured data format
-
-You can also create a knowledge base based on unstructured content imported via a file. Currently this functionality is available only via document upload for documents that are in any of the supported file formats.
-
-> [!IMPORTANT]
-> The support for unstructured content via file upload is available only in [question answering](../../language-service/question-answering/overview.md).
-
-## Content types of documents you can add to a knowledge base
-Content types include many standard structured documents such as PDF, DOC, and TXT.
-
-### File and URL data types
-
-The table below summarizes the types of content and file formats that are supported by QnA Maker.
-
-|Source Type|Content Type| Examples|
-|--|--|--|
-|URL|FAQs<br> (Flat, with sections or with a topics homepage)<br>Support pages <br> (Single page how-to articles, troubleshooting articles etc.)|[Plain FAQ](../troubleshooting.md), <br>[FAQ with links](https://www.microsoft.com/microsoft-365/microsoft-365-for-home-and-school-faq),<br> [FAQ with topics homepage](https://www.microsoft.com/Licensing/servicecenter/Help/Faq.aspx)<br>[Support article](./best-practices.md)|
-|PDF / DOC|FAQs,<br> Product Manual,<br> Brochures,<br> Paper,<br> Flyer Policy,<br> Support guide,<br> Structured QnA,<br> etc.|**Without Multi-turn**<br>[Structured QnA.docx](https://github.com/Azure-Samples/cognitive-services-sample-data-files/blob/master/qna-maker/data-source-formats/structured.docx),<br> [Sample Product Manual.pdf](https://github.com/Azure-Samples/cognitive-services-sample-data-files/blob/master/qna-maker/data-source-formats/product-manual.pdf),<br> [Sample semi-structured.docx](https://github.com/Azure-Samples/cognitive-services-sample-data-files/blob/master/qna-maker/data-source-formats/semi-structured.docx),<br> [Sample white paper.pdf](https://github.com/Azure-Samples/cognitive-services-sample-data-files/blob/master/qna-maker/data-source-formats/white-paper.pdf),<br> [Unstructured blog.pdf](https://github.com/Azure-Samples/cognitive-services-sample-data-files/blob/master/qna-maker/data-source-formats/Introducing-surface-laptop-4-and-new-access.pdf),<br> [Unstructured white paper.pdf](https://github.com/Azure-Samples/cognitive-services-sample-data-files/blob/master/qna-maker/data-source-formats/sample-unstructured-paper.pdf)<br><br>**Multi-turn**:<br>[Surface Pro (docx)](https://github.com/Azure-Samples/cognitive-services-sample-data-files/blob/master/qna-maker/data-source-formats/multi-turn.docx)<br>[Contoso Benefits (docx)](https://github.com/Azure-Samples/cognitive-services-sample-data-files/blob/master/qna-maker/data-source-formats/Multiturn-ContosoBenefits.docx)<br>[Contoso Benefits (pdf)](https://github.com/Azure-Samples/cognitive-services-sample-data-files/blob/master/qna-maker/data-source-formats/Multiturn-ContosoBenefits.pdf)|
-|*Excel|Structured QnA file<br> (including RTF, HTML support)|**Without Multi-turn**:<br>[Sample QnA FAQ.xls](https://github.com/Azure-Samples/cognitive-services-sample-data-files/blob/master/qna-maker/data-source-formats/QnA%20Maker%20Sample%20FAQ.xlsx)<br><br>**Multi-turn**:<br>[Structured simple FAQ.xls](https://github.com/Azure-Samples/cognitive-services-sample-data-files/blob/master/qna-maker/data-source-formats/Structured-multi-turn-format.xlsx)<br>[Surface laptop FAQ.xls](https://github.com/Azure-Samples/cognitive-services-sample-data-files/blob/master/qna-maker/data-source-formats/Multiturn-Surface-Pro.xlsx)|
-|*TXT/TSV|Structured QnA file|[Sample chit-chat.tsv](https://github.com/Azure-Samples/cognitive-services-sample-data-files/blob/master/qna-maker/data-source-formats/Scenario_Responses_Friendly.tsv)|
-
-If you need authentication for your data source, consider the following methods to get that content into QnA Maker:
-
-* Download the file manually and import into QnA Maker
-* Add the file from authenticated [SharePoint location](../how-to/add-sharepoint-datasources.md)
-
-### URL content
-
-Two types of documents can be imported via **URL** in QnA Maker:
-
-* FAQ URLs
-* Support URLs
-
-Each type indicates an expected format.
-
-### File-based content
-
-You can add files to a knowledge base from a public source, or your local file system, in the [QnA Maker portal](https://www.qnamaker.ai).
-
-### Content format guidelines
-
-Learn more about the [format guidelines](../reference-document-format-guidelines.md) for the different files.
-
-## Next steps
-
-Learn how to [edit QnAs](../how-to/edit-knowledge-base.md).
cognitive-services Development Lifecycle Knowledge Base https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/QnAMaker/Concepts/development-lifecycle-knowledge-base.md
- Title: Lifecycle of knowledge base - QnA Maker
-description: QnA Maker learns best in an iterative cycle of model changes, utterance examples, publishing, and gathering data from endpoint queries.
--- Previously updated : 09/01/2020--
-# Knowledge base lifecycle in QnA Maker
-QnA Maker learns best in an iterative cycle of model changes, utterance examples, publishing, and gathering data from endpoint queries.
-
-![Authoring cycle](../media/qnamaker-concepts-lifecycle/kb-lifecycle.png)
--
-## Creating a QnA Maker knowledge base
-The QnA Maker knowledge base (KB) endpoint provides a best-match answer to a user query based on the content of the KB. Creating a knowledge base is a one-time action that sets up a content repository of questions, answers, and associated metadata. A KB can be created by crawling pre-existing content such as the following sources:
-
-* FAQ pages
-* Product manuals
-* Q-A pairs
-
-Learn how to [create a knowledge base](../quickstarts/create-publish-knowledge-base.md).
-
-## Testing and updating the knowledge base
-
-The knowledge base is ready for testing once it is populated with content, either editorially or through automatic extraction. Interactive testing can be done in the QnA Maker portal through the **Test** panel. You enter common user queries, and then you verify that the responses return both the correct answer and a sufficient confidence score.
--
-* **To fix low confidence scores**: add alternate questions.
-* **When a query incorrectly returns the [default response](../How-to/change-default-answer.md)**: add new answers to the correct question.
-
-This tight loop of test-update continues until you are satisfied with the results. Learn how to [test your knowledge base](../How-To/test-knowledge-base.md).
-
-For large KBs, use automated testing with the [generateAnswer API](../how-to/metadata-generateanswer-usage.md#get-answer-predictions-with-the-generateanswer-api) and the `isTest` body property, which queries the `test` knowledge base instead of the published knowledge base.
-
-```json
-{
- "question": "example question",
- "top": 3,
- "userId": "Default",
- "isTest": true
-}
-```
-
-## Publish the knowledge base
-Once you are done testing the knowledge base, you can publish it. Publish pushes the latest version of the tested knowledge base to a dedicated Azure Cognitive Search index representing the **published** knowledge base. It also creates an endpoint that can be called in your application or chat bot.
-
-Because of the publish action, any further changes made to the test version of the knowledge base leave the published version unaffected. The published version might be live in a production application.
-
-Each of these knowledge bases can be targeted for testing separately. Using the APIs, you can target the test version of the knowledge base with `isTest` body property in the generateAnswer call.
-
-Learn how to [publish your knowledge base](../Quickstarts/create-publish-knowledge-base.md#publish-the-knowledge-base).
-
-## Monitor usage
-To log the chat traffic of your service, you need to enable Application Insights when you [create your QnA Maker service](../How-To/set-up-qnamaker-service-azure.md).
-
-You can get various analytics of your service usage. Learn more about how to use application insights to get [analytics for your QnA Maker service](../How-To/get-analytics-knowledge-base.md).
-
-Based on what you learn from your analytics, make appropriate [updates to your knowledge base](../How-To/edit-knowledge-base.md).
-
-## Version control for data in your knowledge base
-
-Version control for data is provided through the import/export features on the **Settings** page in the QnA Maker portal.
-
-You can back up a knowledge base by exporting the knowledge base, in either `.tsv` or `.xls` format. Once exported, include this file as part of your regular source control check.
-
-When you need to go back to a specific version, you need to import that file from your local system. An exported knowledge base **must** only be used via import on the **Settings** page. It can't be used as a file or URL document data source. This will replace questions and answers currently in the knowledge base with the contents of the imported file.
-
-## Test and production knowledge base
-A knowledge base is the repository of questions and answer sets created, maintained, and used through QnA Maker. Each QnA Maker resource can hold multiple knowledge bases.
-
-A knowledge base has two states: *test* and *published*.
-
-### Test knowledge base
-
-The *test knowledge base* is the version currently edited and saved. This is the version you test for accuracy and completeness of responses. Changes made to the test knowledge base don't affect the end user of your application or chat bot. The test knowledge base is known as `test` in the HTTP request. The `test` knowledge base is available in the QnA Maker portal's interactive **Test** pane.
-
-### Production knowledge base
-
-The *published knowledge base* is the version that's used in your chat bot or application. Publishing a knowledge base puts the content of its test version into its published version. The published knowledge base is the version that the application uses through the endpoint. Make sure that the content is correct and well tested. The published knowledge base is known as `prod` in the HTTP request.
--
-## Next steps
-
-> [!div class="nextstepaction"]
-> [Active learning suggestions](../index.yml)
cognitive-services Plan https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/QnAMaker/Concepts/plan.md
- Title: Plan your app - QnA Maker
-description: Learn how to plan your QnA Maker app. Understand how QnA Maker works and interacts with other Azure services and some knowledge base concepts.
--- Previously updated : 11/02/2021---
-# Plan your QnA Maker app
-
-To plan your QnA Maker app, you need to understand how QnA Maker works and interacts with other Azure services. You should also have a solid grasp of knowledge base concepts.
--
-## Azure resources
-
-Each [Azure resource](azure-resources.md#resource-purposes) created with QnA Maker has a specific purpose. Each resource has its own purpose, limits, and [pricing tier](azure-resources.md#pricing-tier-considerations). It's important to understand the function of these resources so that you can use that knowledge in your planning process.
-
-| Resource | Purpose |
-|--|--|
-| [QnA Maker](azure-resources.md#qna-maker-resource) resource | Authoring and query prediction |
-| [Cognitive Search](azure-resources.md#cognitive-search-resource) resource | Data storage and search |
-| [App Service and App Service plan](azure-resources.md#app-service-and-app-service-plan) resources | Query prediction endpoint |
-| [Application Insights](azure-resources.md#application-insights) resource | Query prediction telemetry |
-
-### Resource planning
-
-The free tier, `F0`, of each resource works and can provide both the authoring and query prediction experience. You can use this tier to learn authoring and query prediction. When you move to a production or live scenario, reevaluate your resource selection.
-
-### Knowledge base size and throughput
-
-When you build a real app, plan sufficient resources for the size of your knowledge base and for your expected query prediction requests.
-
-A knowledge base size is controlled by the:
-* [Cognitive Search resource](../../../search/search-limits-quotas-capacity.md) pricing tier limits
-* [QnA Maker limits](../limits.md)
-
-The knowledge base query prediction request is controlled by the web app plan and web app. Refer to [recommended settings](azure-resources.md#recommended-settings) to plan your pricing tier.
-
-### Resource sharing
-
-If you already have some of these resources in use, you may consider sharing resources. See which resources [can be shared](azure-resources.md#share-services-with-qna-maker) with the understanding that resource sharing is an advanced scenario.
-
-All knowledge bases created in the same QnA Maker resource share the same **test** query prediction endpoint.
-
-### Understand the impact of resource selection
-
-Proper resource selection means your knowledge base answers query predictions successfully.
-
-If your knowledge base isn't functioning properly, it's typically an issue of improper resource management.
-
-Improper resource selection requires investigation to determine which [resource needs to change](azure-resources.md#when-to-change-a-pricing-tier).
-
-## Knowledge bases
-
-A knowledge base is directly tied to its QnA Maker resource. It holds the question and answer (QnA) pairs that are used to answer query prediction requests.
-
-### Language considerations
-
-The first knowledge base created on your QnA Maker resource sets the language for the resource. You can only have one language for a QnA Maker resource.
-
-You can structure your QnA Maker resources by language or you can use [Translator](../../translator/translator-overview.md) to change a query from another language into the knowledge base's language before sending the query to the query prediction endpoint.
-
-### Ingest data sources
-
-You can use one of the following ingested [data sources](../Concepts/data-sources-and-content.md) to create a knowledge base:
-
-* Public URL
-* Private SharePoint URL
-* File
-
-The ingestion process converts [supported content types](../reference-document-format-guidelines.md) to markdown. All further editing of the *answer* is done with markdown. After you create a knowledge base, you can edit [QnA pairs](question-answer-set.md) in the QnA Maker portal with [rich text authoring](../how-to/edit-knowledge-base.md#rich-text-editing-for-answer).
-
-### Data format considerations
-
-Because the final format of a QnA pair is markdown, it's important to understand [markdown support](../reference-markdown-format.md).
-
-Linked images must be available from a public URL to be displayed in the test pane of the QnA Maker portal or in a client application. QnA Maker doesn't provide authentication for content, including images.
-
-### Bot personality
-
-Add a bot personality to your knowledge base with [chit-chat](../how-to/chit-chat-knowledge-base.md). This personality comes through with answers provided in a certain conversational tone such as *professional* and *friendly*. This chit-chat is provided as a conversational set, which you have total control to add, edit, and remove.
-
-A bot personality is recommended if your bot connects to your knowledge base. You can choose to use chit-chat in your knowledge base even if you also connect to other services, but you should review how the bot service interacts to know if that is the correct architectural design for your use.
-
-### Conversation flow with a knowledge base
-
-Conversation flow usually begins with a salutation from a user, such as `Hi` or `Hello`. Your knowledge base can answer with a general answer, such as `Hi, how can I help you`, and it can also provide a selection of follow-up prompts to continue the conversation.
-
-You should design your conversational flow with a loop in mind so that a user knows how to use your bot and isn't abandoned by the bot in the conversation. [Follow-up prompts](../how-to/multi-turn.md) provide linking between QnA pairs, which allow for the conversational flow.
-
-### Authoring with collaborators
-
-Collaborators may be other developers who share the full development stack of the knowledge base application or may be limited to just authoring the knowledge base.
-
-Knowledge base authoring supports several role-based access permissions you apply in the Azure portal to limit the scope of a collaborator's abilities.
-
-## Integration with client applications
-
-Integration with client applications is accomplished by sending a query to the prediction runtime endpoint. A query is sent to your specific knowledge base with an SDK or REST-based request to your QnA Maker's web app endpoint.
-
-To authenticate a client request correctly, the client application must send the correct credentials and knowledge base ID. If you're using an Azure Bot Service, configure these settings as part of the bot configuration in the Azure portal.
-
-### Conversation flow in a client application
-
-Conversation flow in a client application, such as an Azure bot, may require functionality before and after interacting with the knowledge base.
-
-Does your client application support conversation flow, either by providing alternate means to handle follow-up prompts or by including chit-chat? If so, design these early and make sure the client application query is handled correctly by another service or when sent to your knowledge base.
-
-### Dispatch between QnA Maker and Language Understanding (LUIS)
-
-A client application may provide several features, only one of which is answered by a knowledge base. Other features still need to understand the conversational text and extract meaning from it.
-
-A common client application architecture is to use both QnA Maker and [Language Understanding (LUIS)](../../LUIS/what-is-luis.md) together. LUIS provides the text classification and extraction for any query, including to other services. QnA Maker provides answers from your knowledge base.
-
-In such a [shared architecture](../choose-natural-language-processing-service.md) scenario, the dispatching between the two services is accomplished by the [Dispatch](https://github.com/Microsoft/botbuilder-tools/tree/master/packages/Dispatch) tool from Bot Framework.
-
-### Active learning from a client application
-
-QnA Maker uses _active learning_ to improve your knowledge base by suggesting alternate questions to an answer. The client application is responsible for a part of this [active learning](../How-To/use-active-learning.md). Through conversational prompts, the client application can determine that the knowledge base returned an answer that's not useful to the user, and it can determine a better answer. The client application needs to send that information back to the knowledge base to improve the prediction quality.
-
-### Providing a default answer
-
-If your knowledge base doesn't find an answer, it returns the _default answer_. This answer is configurable on the **Settings** page in the QnA Maker portal or in the [APIs](/rest/api/cognitiveservices/qnamaker/knowledgebase/update#request-body).
-
-This default answer is different from the Azure bot default answer. You configure the default answer for your Azure bot in the Azure portal as part of configuration settings. It's returned when the score threshold isn't met.
-
-## Prediction
-
-The prediction is the response from your knowledge base, and it includes more information than just the answer. To get a query prediction response, use the [GenerateAnswer API](query-knowledge-base.md).
-
-### Prediction score fluctuations
-
-A score can change based on several factors:
-
-* Number of answers you requested in response to [GenerateAnswer](query-knowledge-base.md) with `top` property
-* Variety of available alternate questions
-* Filtering for metadata
-* Query sent to `test` or `production` knowledge base
-
-There's a [two-phase answer ranking](query-knowledge-base.md#how-qna-maker-processes-a-user-query-to-select-the-best-answer):
-* Cognitive Search - first rank. Set the number of _answers allowed_ high enough that the best answers are returned by Cognitive Search and then passed into the QnA Maker ranker.
-* QnA Maker - second rank. Apply featurization and machine learning to determine the best answer.
-
-### Service updates
-
-Apply the [latest runtime updates](../how-to/configure-QnA-Maker-resources.md#get-the-latest-runtime-updates) to automatically manage service updates.
-
-### Scaling, throughput, and resiliency
-
-Scaling, throughput, and resiliency are determined by the [Azure resources](../how-to/set-up-qnamaker-service-azure.md), their pricing tiers, and any surrounding architecture such as [Traffic manager](../how-to/configure-QnA-Maker-resources.md#business-continuity-with-traffic-manager).
-
-### Analytics with Application Insights
-
-All queries to your knowledge base are stored in Application Insights. Use our [top queries](../how-to/get-analytics-knowledge-base.md) to understand your metrics.
-
-## Development lifecycle
-
-The [development lifecycle](development-lifecycle-knowledge-base.md) of a knowledge base is ongoing: editing, testing, and publishing your knowledge base.
-
-### Knowledge base development of QnA Maker pairs
-
-Your QnA pairs should be designed and developed based on your client application usage.
-
-Each pair can contain:
-* Metadata - filterable when querying to allow you to tag your QnA pairs with additional information about the source, content, format, and purpose of your data.
-* Follow-up prompts - helps to determine a path through your knowledge base so the user arrives at the correct answer.
-* Alternate questions - important to allow search to match to your answer from different forms of the question. [Active learning suggestions](../How-To/use-active-learning.md) turn into alternate questions.
-
-### DevOps development
-
-Developing a knowledge base to insert into a DevOps pipeline requires that the knowledge base is isolated during batch testing.
-
-A knowledge base shares the Cognitive Search index with all other knowledge bases on the QnA Maker resource. While the knowledge base is isolated by partition, sharing the index can cause a difference in the score when compared to the published knowledge base.
-
-To have the _same score_ on the `test` and `production` knowledge bases, isolate a QnA Maker resource to a single knowledge base. In this architecture, the resource only needs to live as long as the isolated batch test.
-
-## Next steps
-
-* [Azure resources](../how-to/set-up-qnamaker-service-azure.md)
-* [Question and answer pairs](question-answer-set.md)
cognitive-services Query Knowledge Base https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/QnAMaker/Concepts/query-knowledge-base.md
- Title: Query the knowledge base - QnA Maker
-description: A knowledge base must be published. Once published, the knowledge base is queried at the runtime prediction endpoint using the generateAnswer API.
- Previously updated : 11/02/2021---
-# Query the knowledge base for answers
-
-A knowledge base must be published. Once published, the knowledge base is queried at the runtime prediction endpoint using the generateAnswer API. The query includes the question text, and other settings, to help QnA Maker select the best possible match to an answer.
--
-## How QnA Maker processes a user query to select the best answer
-
-The trained and [published](../quickstarts/create-publish-knowledge-base.md#publish-the-knowledge-base) QnA Maker knowledge base receives a user query, from a bot or other client application, at the [GenerateAnswer API](../how-to/metadata-generateanswer-usage.md). The following diagram illustrates the process when the user query is received.
-
-![The ranking model process for a user query](../media/qnamaker-concepts-knowledgebase/ranker-v1.png)
-
-### Ranker process
-
-The process is explained in the following table.
-
-|Step|Purpose|
-|--|--|
-|1|The client application sends the user query to the [GenerateAnswer API](../how-to/metadata-generateanswer-usage.md).|
-|2|QnA Maker preprocesses the user query with language detection, spellers, and word breakers.|
-|3|This preprocessing alters the user query for the best search results.|
-|4|This altered query is sent to an Azure Cognitive Search Index, which receives the `top` number of results. If the correct answer isn't in these results, increase the value of `top` slightly. Generally, a value of 10 for `top` works in 90% of queries. Azure search filters [stop words](https://github.com/Azure-Samples/azure-search-sample-dat) in this step.|
-|5|QnA Maker uses syntactic and semantic based featurization to determine the similarity between the user query and the fetched QnA results.|
-|6|The machine-learned ranker model uses the different features, from step 5, to determine the confidence scores and the new ranking order.|
-|7|The new results are returned to the client application in ranked order.|
-|||
-
-Features used include but aren't limited to word-level semantics, term-level importance in a corpus, and deep learned semantic models to determine similarity and relevance between two text strings.
-
-## HTTP request and response with endpoint
-
-When you publish your knowledge base, the service creates a REST-based HTTP endpoint that can be integrated into your application, commonly a chat bot.
-
-### The user query request to generate an answer
-
-A user query is the question that the end user asks of the knowledge base, such as `How do I add a collaborator to my app?`. The query is often in a natural language format or a few keywords that represent the question, such as `help with collaborators`. The query is sent to your knowledge base from an HTTP request in your client application.
-
-```json
-{
- "question": "How do I add a collaborator to my app?",
- "top": 6,
- "isTest": true,
- "scoreThreshold": 20,
- "strictFilters": [
- {
- "name": "QuestionType",
- "value": "Support"
- }],
- "userId": "sd53lsY="
-}
-```
-
-You control the response by setting properties such as [scoreThreshold](./confidence-score.md#choose-a-score-threshold), [top](../how-to/improve-knowledge-base.md#use-the-top-property-in-the-generateanswer-request-to-get-several-matching-answers), and [strictFilters](../how-to/query-knowledge-base-with-metadata.md).
-
-Use [conversation context](../how-to/query-knowledge-base-with-metadata.md) with [multi-turn functionality](../how-to/multi-turn.md) to keep the conversation going to refine the questions and answers, to find the correct and final answer.
-
-### The response from a call to generate an answer
-
-The HTTP response is the answer retrieved from the knowledge base, based on the best match for a given user query. The response includes the answer and the prediction score. If you asked for more than one top answer with the `top` property, you get more than one top answer, each with a score.
-
-```json
-{
- "answers": [
- {
- "questions": [
- "How do I add a collaborator to my app?",
- "What access control is provided for the app?",
- "How do I find user management and security?"
- ],
- "answer": "Use the Azure portal to add a collaborator using Access Control (IAM)",
- "score": 100,
- "id": 1,
- "source": "Editorial",
- "metadata": [
- {
- "name": "QuestionType",
- "value": "Support"
- },
- {
- "name": "ToolDependency",
- "value": "Azure Portal"
- }
- ]
- }
- ]
-}
-```
-
-## Next steps
-
-> [!div class="nextstepaction"]
-> [Confidence score](./confidence-score.md)
cognitive-services Question Answer Set https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/QnAMaker/Concepts/question-answer-set.md
- Title: Design knowledge base - QnA Maker concepts
-description: Learn how to design a knowledge base - QnA Maker.
--- Previously updated : 01/27/2020--
-# Question and answer pair concepts
-
-A knowledge base consists of question and answer (QnA) pairs. Each pair has one answer and a pair contains all the information associated with that _answer_. An answer can loosely resemble a database row or a data structure instance.
--
-## Question and answer pairs
-
-The **required** settings in a question-and-answer (QnA) pair are:
-
-* a **question** - the text of a user query, used by QnA Maker's machine learning to match user questions that are worded differently but call for the same answer
-* the **answer** - the pair's answer is the response that's returned when a user query is matched with the associated question
-
-Each pair is represented by an **ID**.
-
-The **optional** settings for a pair include:
-
-* **Alternate forms of the question** - this helps QnA Maker return the correct answer for a wider variety of question phrasings
-* **Metadata**: Metadata are tags associated with a QnA pair and are represented as key-value pairs. Metadata tags are used to filter QnA pairs and limit the set over which query matching is performed.
-* **Multi-turn prompts**, used to continue a multi-turn conversation
-
-![QnA Maker knowledge bases](../media/qnamaker-concepts-knowledgebase/knowledgebase.png)
-
-## Editorially add to knowledge base
-
-If you do not have pre-existing content to populate the knowledge base, you can add QnA pairs editorially in the QnA Maker portal. Learn how to update your knowledge base [here](../How-To/edit-knowledge-base.md).
-
-## Editing your knowledge base locally
-
-Once a knowledge base is created, it is recommended that you make edits to the knowledge base text in the [QnA Maker portal](https://qnamaker.ai), rather than exporting and reimporting through local files. However, there may be times that you need to edit a knowledge base locally.
-
-Export the knowledge base from the **Settings** page, then edit the knowledge base with Microsoft Excel. If you choose to use another application to edit your exported file, the application may introduce syntax errors because it is not fully TSV compliant. Microsoft Excel's TSV files generally don't introduce any formatting errors.
-
-Once you are done with your edits, reimport the TSV file from the **Settings** page. This will completely replace the current knowledge base with the imported knowledge base.
-
-## Next steps
-
-> [!div class="nextstepaction"]
-> [Knowledge base lifecycle in QnA Maker](./development-lifecycle-knowledge-base.md)
cognitive-services Role Based Access Control https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/QnAMaker/Concepts/role-based-access-control.md
- Title: Collaborate with others - QnA Maker
-description: Learn how to collaborate with other authors and editors using Azure role-based access control.
--- Previously updated : 05/15/2020--
-# Collaborate with other authors and editors
-
-Collaborate with other authors and editors using Azure role-based access control (Azure RBAC) placed on your QnA Maker resource.
--
-## Access is provided on the QnA Maker resource
-
-All permissions are controlled by the permissions placed on the QnA Maker resource. These permissions align to read, write, publish, and full access. You can allow collaboration among multiple users by [updating RBAC access](../how-to/manage-qna-maker-app.md) for the QnA Maker resource.
-
-This Azure RBAC feature includes:
-* Azure Active Directory (AAD) is 100% backward compatible with key-based authentication for owners and contributors. Customers can use either key-based authentication or Azure RBAC-based authentication in their requests.
-* Quickly add authors and editors to all knowledge bases in the resource because control is at the resource level, not at the knowledge base level.
-
-> [!NOTE]
-> Make sure to add a custom subdomain for the resource. A [custom subdomain](../../cognitive-services-custom-subdomains.md) should be present by default, but if not, please add it.
-
-## Access is provided by a defined role
--
-## Authentication flow
-
-The following diagram shows the flow, from the author's perspective, for signing into the QnA Maker portal and using the authoring APIs.
-
-> [!div class="mx-imgBorder"]
-> ![The following diagram shows the flow, from the author's perspective, for signing into the QnA Maker portal and using the authoring APIs.](../media/qnamaker-how-to-collaborate-knowledge-base/rbac-flow-from-portal-to-service.png)
-
-|Steps|Description|
-|--|--|
-|1|The portal acquires a token for the QnA Maker resource.|
-|2|The portal calls the appropriate QnA Maker authoring API (APIM), passing the token instead of keys.|
-|3|The QnA Maker API validates the token.|
-|4|The QnA Maker API calls the QnA Maker service.|
-
-If you intend to call the [authoring APIs](../index.yml), learn more about how to set up authentication.
-
-## Authenticate by QnA Maker portal
-
-If you author and collaborate using the QnA Maker portal, after you add the appropriate role to the resource for a collaborator, the QnA Maker portal manages all the access permissions.
-
-## Authenticate by QnA Maker APIs and SDKs
-
-If you author and collaborate using the APIs, either through REST or the SDKs, you need to [create a service principal](../../authentication.md#assign-a-role-to-a-service-principal) to manage the authentication.
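-
-If you're calling the REST authoring APIs directly, a minimal sketch (not an official sample) of acquiring an Azure AD token for a service principal and passing it as a bearer token might look like the following. It assumes the Azure.Identity NuGet package; the tenant, client, secret, and custom subdomain values are placeholders you supply.
-
-```csharp
-// Minimal sketch: call a QnA Maker authoring API with an Azure AD token for a
-// service principal instead of a key. Assumes the Azure.Identity package and a
-// service principal that has been assigned a QnA Maker role on the resource.
-using System;
-using System.Net.Http;
-using System.Net.Http.Headers;
-using System.Threading.Tasks;
-using Azure.Core;
-using Azure.Identity;
-
-public static class AuthoringAuthSketch
-{
-    public static async Task ListKnowledgeBasesAsync()
-    {
-        var credential = new ClientSecretCredential(
-            tenantId: "<tenant-id>",
-            clientId: "<service-principal-app-id>",
-            clientSecret: "<service-principal-secret>");
-
-        // Request a token scoped to Cognitive Services.
-        AccessToken token = await credential.GetTokenAsync(
-            new TokenRequestContext(new[] { "https://cognitiveservices.azure.com/.default" }));
-
-        using var client = new HttpClient();
-        using var request = new HttpRequestMessage(
-            HttpMethod.Get,
-            "https://<your-custom-subdomain>.cognitiveservices.azure.com/qnamaker/v4.0/knowledgebases");
-        request.Headers.Authorization = new AuthenticationHeaderValue("Bearer", token.Token);
-
-        var response = await client.SendAsync(request);
-        Console.WriteLine(await response.Content.ReadAsStringAsync());
-    }
-}
-```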
-
-## Next step
-
-* Design a knowledge base for languages and for client applications
cognitive-services Add Sharepoint Datasources https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/QnAMaker/How-To/add-sharepoint-datasources.md
- Title: SharePoint files - QnA Maker
-description: Add secured SharePoint data sources to your knowledge base to enrich the knowledge base with questions and answers that may be secured with Active Directory.
--- Previously updated : 01/25/2022--
-# Add a secured SharePoint data source to your knowledge base
-
-Add secured cloud-based SharePoint data sources to your knowledge base to enrich the knowledge base with questions and answers that may be secured with Active Directory.
-
-When you add a secured SharePoint document to your knowledge base, as the QnA Maker manager, you must request Active Directory permission for QnA Maker. Once this permission is given from the Active Directory manager to QnA Maker for access to SharePoint, it doesn't have to be given again. Each subsequent document addition to the knowledge base will not need authorization if it is in the same SharePoint resource.
-
-If the QnA Maker knowledge base manager is not the Active Directory manager, you will need to communicate with the Active Directory manager to finish this process.
-
-## Prerequisites
-
-* Cloud-based SharePoint - QnA Maker uses Microsoft Graph for permissions. If your SharePoint is on-premises, you won't be able to extract from SharePoint because Microsoft Graph won't be able to determine permissions.
-* URL format - QnA Maker only supports SharePoint URLs that are generated for sharing and have the format `https://\*.sharepoint.com`
-
-## Add supported file types to knowledge base
-
-You can add all QnA Maker-supported [file types](../concepts/data-sources-and-content.md#file-and-url-data-types) from a SharePoint site to your knowledge base. You may have to grant [permissions](#permissions) if the file resource is secured.
-
-1. From the library with the SharePoint site, select the file's ellipsis menu, `...`.
-1. Copy the file's URL.
-
- ![Get the SharePoint file URL by selecting the file's ellipsis menu then copying the URL.](../media/add-sharepoint-datasources/get-sharepoint-file-url.png)
-
-1. In the QnA Maker portal, on the **Settings** page, add the URL to the knowledge base.
-
-### Images with SharePoint files
-
-If the files include images, the images are not extracted. You can add an image from the QnA Maker portal after the file is extracted into QnA pairs.
-
-Add the image with the following markdown syntax:
-
-```markdown
-![Explanation or description of image](URL of public image)
-```
-
-The text in the square brackets, `[]`, explains the image. The URL in the parentheses, `()`, is the direct link to the image.
-
-When you test the QnA pair in the interactive test panel in the QnA Maker portal, the image is displayed instead of the markdown text. This validates that the image can be publicly retrieved by your client application.
-
-## Permissions
-
-Granting permissions happens when a secured file from a server running SharePoint is added to a knowledge base. Depending on how the SharePoint is set up and the permissions of the person adding the file, this could require:
-
-* no additional steps - the person adding the file has all the permissions needed.
-* steps by both [knowledge base manager](#knowledge-base-manager-add-sharepoint-data-source-in-qna-maker-portal) and [Active Directory manager](#active-directory-manager-grant-file-read-access-to-qna-maker).
-
-See the steps listed below.
-
-### Knowledge base manager: add SharePoint data source in QnA Maker portal
-
-When the **QnA Maker manager** adds a secured SharePoint document to a knowledge base, the knowledge base manager initiates a request for permission that the Active Directory manager needs to complete.
-
-The request begins with a pop-up to authenticate to an Active Directory account.
-
-![Authenticate User Account](../media/add-sharepoint-datasources/authenticate-user-account.png)
-
-Once the QnA Maker manager selects the account, the Azure Active Directory administrator will receive a notice that they need to allow the QnA Maker app (not the QnA Maker manager) access to the SharePoint resource. The Azure Active Directory manager will need to do this for every SharePoint resource, but not every document in that resource.
-
-### Active Directory manager: grant file read access to QnA Maker
-
-The Active Directory manager (not the QnA Maker manager) needs to grant access to QnA Maker to access the SharePoint resource by selecting [this link](https://login.microsoftonline.com/common/oauth2/v2.0/authorize?response_type=id_token&scope=Files.Read%20Files.Read.All%20Sites.Read.All%20User.Read%20User.ReadBasic.All%20profile%20openid%20email&client_id=c2c11949-e9bb-4035-bda8-59542eb907a6&redirect_uri=https%3A%2F%2Fwww.qnamaker.ai%3A%2FCreate&state=68) to authorize the QnA Maker Portal SharePoint enterprise app to have file read permissions.
-
-![Azure Active Directory manager grants permission interactively](../media/add-sharepoint-datasources/aad-manager-grants-permission-interactively.png)
-
-<!--
-The Active Directory manager must grant QnA Maker access either by application name, `QnAMakerPortalSharePoint`, or by application ID, `c2c11949-e9bb-4035-bda8-59542eb907a6`.
>
-<!--
-### Grant access from the interactive pop-up window
-
-The Active Directory manager will get a pop-up window requesting permissions to the `QnAMakerPortalSharePoint` app. The pop-up window includes the QnA Maker Manager email address that initiated the request, an `App Info` link to learn more about **QnAMakerPortalSharePoint**, and a list of permissions requested. Select **Accept** to provide those permissions.
-
-![Azure Active Directory manager grants permission interactively](../media/add-sharepoint-datasources/aad-manager-grants-permission-interactively.png)
>
-<!--
-
-### Grant access from the App Registrations list
-
-1. The Active Directory manager signs in to the Azure portal and opens **[App registrations list](https://portal.azure.com/#blade/Microsoft_AAD_IAM/ApplicationsListBlade)**.
-
-1. Search for and select the **QnAMakerPortalSharePoint** app. Change the second filter box from **My apps** to **All apps**. The app information will open on the right side.
-
- ![Select QnA Maker app in App registrations list](../media/add-sharepoint-datasources/select-qna-maker-app-in-app-registrations.png)
-
-1. Select **Settings**.
-
- [![Select Settings in the right-side blade](../media/add-sharepoint-datasources/select-settings-for-qna-maker-app-registration.png)](../media/add-sharepoint-datasources/select-settings-for-qna-maker-app-registration.png#lightbox)
-
-1. Under **API access**, select **Required permissions**.
-
- ![Select 'Settings', then under 'API access', select 'Required permission'](../media/add-sharepoint-datasources/select-required-permissions-in-settings-blade.png)
-
-1. Do not change any settings in the **Enable Access** window. Select **Grant Permission**.
-
- [![Under 'Grant Permission', select 'Yes'](../media/add-sharepoint-datasources/grant-app-required-permissions.png)](../media/add-sharepoint-datasources/grant-app-required-permissions.png#lightbox)
-
-1. Select **YES** in the pop-up confirmation windows.
-
- ![Grant required permissions](../media/add-sharepoint-datasources/grant-required-permissions.png)
>
-### Grant access from the Azure Active Directory admin center
-
-1. The Active Directory manager signs in to the Azure portal and opens **[Enterprise applications](https://aad.portal.azure.com/#blade/Microsoft_AAD_IAM/StartboardApplicationsMenuBlade/AllApps)**.
-
-1. Search for `QnAMakerPortalSharePoint`, then select the QnA Maker app.
-
- [![Search for QnAMakerPortalSharePoint in Enterprise apps list](../media/add-sharepoint-datasources/search-enterprise-apps-for-qna-maker.png)](../media/add-sharepoint-datasources/search-enterprise-apps-for-qna-maker.png#lightbox)
-
-1. Under **Security**, go to **Permissions**. Select **Grant admin consent for Organization**.
-
- [![Select authenticated user for Active Directory Admin](../media/add-sharepoint-datasources/grant-aad-permissions-to-enterprise-app.png)](../media/add-sharepoint-datasources/grant-aad-permissions-to-enterprise-app.png#lightbox)
-
-1. Select a sign-on account that has permissions to grant consent for the Active Directory.
----
-## Add SharePoint data source with APIs
-
-There is a workaround to add the latest SharePoint content via the API by using Azure Blob Storage. The steps are as follows; a minimal code sketch follows this list:
-1. Download the SharePoint files locally. The user calling the API needs to have access to SharePoint.
-1. Upload them to Azure Blob Storage. This creates secure shared access by [using a SAS token.](../../../storage/common/storage-sas-overview.md#how-a-shared-access-signature-works)
-1. Pass the blob URL generated with the SAS token to the QnA Maker API. To allow question-and-answer extraction from the files, add the file type suffix, such as '&ext=pdf' or '&ext=doc', to the end of the URL before passing it to the QnA Maker API.
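-
-The following minimal C# sketch illustrates the workaround, assuming the Azure.Storage.Blobs NuGet package; the connection string, container, and file names are placeholders, and this is a sketch rather than an official sample.
-
-```csharp
-// Upload a locally downloaded SharePoint file to Azure Blob Storage, generate a
-// read-only SAS URL, and append the file type suffix before passing the URL to
-// the QnA Maker API.
-using System;
-using Azure.Storage.Blobs;
-using Azure.Storage.Sas;
-
-public static class SharePointBlobWorkaroundSketch
-{
-    public static Uri GetBlobUrlForQnAMaker()
-    {
-        var blobClient = new BlobClient(
-            connectionString: "<storage-connection-string>",
-            blobContainerName: "kb-sources",
-            blobName: "faq.pdf");
-
-        // Upload the file that was downloaded locally from SharePoint.
-        blobClient.Upload("faq.pdf", overwrite: true);
-
-        // Create a read-only SAS URL that is valid for one hour.
-        Uri sasUri = blobClient.GenerateSasUri(
-            BlobSasPermissions.Read,
-            DateTimeOffset.UtcNow.AddHours(1));
-
-        // Append the file type suffix so QnA Maker can extract QnA pairs.
-        return new Uri(sasUri.AbsoluteUri + "&ext=pdf");
-    }
-}
-```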
--
-<!--
-## Get SharePoint File URI
-
-Use the following steps to transform the SharePoint URL into a sharing token.
-
-1. Encode the URL using [base64](https://en.wikipedia.org/wiki/Base64).
-
-1. Convert the base64-encoded result to an unpadded base64url format with the following character changes.
-
- * Remove the equal character, `=` from the end of the value.
- * Replace `/` with `_`.
- * Replace `+` with `-`.
-    * Append `u!` to the beginning of the string.
-
-1. Sign in to Graph explorer and run the following query, where `sharedURL` is ...:
-
- ```
- https://graph.microsoft.com/v1.0/shares/<sharedURL>/driveitem
- ```
-
- Get the **@microsoft.graph.downloadUrl** and use this as `fileuri` in the QnA Maker APIs.
-
-### Add or update a SharePoint File URI to your knowledge base
-
-Use the **@microsoft.graph.downloadUrl** from the previous section as the `fileuri` in the QnA Maker API for [adding a knowledge base](/rest/api/cognitiveservices/qnamaker4.0/knowledgebase) or [updating a knowledge base](/rest/api/cognitiveservices/qnamaker/knowledgebase/update). The following fields are mandatory: name, fileuri, filename, source.
-
-```
-{
- "name": "Knowledge base name",
- "files": [
- {
- "fileUri": "<@microsoft.graph.downloadURL>",
- "fileName": "filename.xlsx",
- "source": "<SharePoint link>"
- }
- ],
- "urls": [],
- "users": [],
- "hostUrl": "",
- "qnaList": []
-}
-```
---
-## Remove QnA Maker app from SharePoint authorization
-
-1. Use the steps in the previous section to find the QnA Maker app in the Active Directory admin center.
-1. When you select **QnAMakerPortalSharePoint**, select **Overview**.
-1. Select **Delete** to remove permissions.
->-
-## Next steps
-
-> [!div class="nextstepaction"]
-> [Collaborate on your knowledge base](../concepts/data-sources-and-content.md)
cognitive-services Change Default Answer https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/QnAMaker/How-To/change-default-answer.md
- Title: Get default answer - QnA Maker
-description: The default answer is returned when there is no match to the question. You may want to change the default answer from the standard default answer.
--- Previously updated : 11/09/2020---
-# Change default answer for a QnA Maker resource
-
-The default answer for a knowledge base is meant to be returned when an answer is not found. If you are using a client application, such as the [Azure Bot service](/azure/bot-service/bot-builder-howto-qna), it may also have a separate default answer, indicating no answer met the score threshold.
--
-## Types of default answer
-
-There are two types of default answer in your knowledge base. It is important to understand how and when each is returned from a prediction query:
-
-|Types of default answers|Description of answer|
-|--|--|
-|KB answer when no answer is determined|`No good match found in KB.` - When the [GenerateAnswer API](/rest/api/cognitiveservices/qnamakerruntime/runtime/generateanswer) finds no matching answer to the question, the `DefaultAnswer` setting of the App service is returned. All knowledge bases in the same QnA Maker resource share the same default answer text.<br>You can manage the setting in the Azure portal, via the App service, or with the REST APIs for [getting](/rest/api/appservice/webapps/listapplicationsettings) or [updating](/rest/api/appservice/webapps/updateapplicationsettings) the setting.|
-|Follow-up prompt instruction text|When using a follow-up prompt in a conversation flow, you may not need an answer in the QnA pair because you want the user to select from the follow-up prompts. In this case, set specific text by setting the default answer text, which is returned with each prediction for follow-up prompts. The text is meant to display as instructional text for the selection of follow-up prompts. An example of this default answer text is `Please select from the following choices`. This configuration is explained in the next few sections of this document. You can also set it as part of the knowledge base definition through `defaultAnswerUsedForExtraction` by using the [REST API](/rest/api/cognitiveservices/qnamaker/knowledgebase/create).|
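-
-If you prefer to script the setting rather than use the Azure portal, a minimal sketch (not an official sample) of reading the App Service settings through the Azure Resource Manager REST API referenced above might look like the following. It assumes the Azure.Identity NuGet package; the subscription, resource group, App Service name, and `api-version` value are placeholders and assumptions.
-
-```csharp
-// Read the application settings of the QnA Maker App Service; the response's
-// "properties" object contains the DefaultAnswer key described above.
-using System;
-using System.Net.Http;
-using System.Net.Http.Headers;
-using System.Threading.Tasks;
-using Azure.Core;
-using Azure.Identity;
-
-public static class DefaultAnswerSettingSketch
-{
-    public static async Task<string> ListAppSettingsAsync()
-    {
-        var credential = new DefaultAzureCredential();
-        AccessToken token = await credential.GetTokenAsync(
-            new TokenRequestContext(new[] { "https://management.azure.com/.default" }));
-
-        var url = "https://management.azure.com/subscriptions/<subscription-id>" +
-                  "/resourceGroups/<resource-group>/providers/Microsoft.Web/sites/<app-service-name>" +
-                  "/config/appsettings/list?api-version=2022-03-01";
-
-        using var client = new HttpClient();
-        using var request = new HttpRequestMessage(HttpMethod.Post, url);
-        request.Headers.Authorization = new AuthenticationHeaderValue("Bearer", token.Token);
-
-        var response = await client.SendAsync(request);
-        return await response.Content.ReadAsStringAsync();
-    }
-}
-```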
-
-### Client application integration
-
-For a client application, such as a bot with the **Azure Bot service**, you can choose from the following common scenarios:
-
-* Use the knowledge base's setting
-* Use different text in the client application to distinguish when an answer is returned but doesn't meet the score threshold. This text can either be static text stored in code, or can be stored in the client application's settings list.
-
-## Set follow-up prompt's default answer when you create knowledge base
-
-When you create a new knowledge base, the default answer text is one of the settings. If you choose not to set it during the creation process, you can change it later with the following procedure.
-
-## Change follow-up prompt's default answer in QnA Maker portal
-
-The knowledge base default answer is returned when no answer is returned from the QnA Maker service.
-
-1. Sign in to the [QnA Maker portal](https://www.qnamaker.ai/) and select your knowledge base from the list.
-1. Select **Settings** from the navigation bar.
-1. Change the value of **Default answer text** in the **Manage knowledge base** section.
-
- :::image type="content" source="../media/qnamaker-concepts-confidencescore/change-default-answer.png" alt-text="Screenshot of QnA Maker portal, Settings page, with default answer textbox highlighted.":::
-
-1. Select **Save and train** to save the change.
-
-## Next steps
-
-* [Create a knowledge base](../How-to/manage-knowledge-bases.md)
cognitive-services Chit Chat Knowledge Base https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/QnAMaker/How-To/chit-chat-knowledge-base.md
- Title: Adding chit-chat to a QnA Maker knowledge base-
-description: Adding personal chit-chat to your bot makes it more conversational and engaging when you create a KB. QnA Maker allows you to easily add a pre-populated set of the top chit-chat, into your KB.
----- Previously updated : 08/25/2021---
-# Add Chit-chat to a knowledge base
-
-Adding chit-chat to your bot makes it more conversational and engaging. The chit-chat feature in QnA Maker allows you to easily add a pre-populated set of top chit-chat questions and answers into your knowledge base (KB). This can be a starting point for your bot's personality, and it will save you the time and cost of writing them from scratch.
--
-This dataset has about 100 scenarios of chit-chat in the voice of multiple personas, like Professional, Friendly and Witty. Choose the persona that most closely resembles your bot's voice. Given a user query, QnA Maker tries to match it with the closest known chit-chat QnA.
-
-Some examples of the different personalities are below. You can see all the personality [datasets](https://github.com/microsoft/botframework-cli/blob/main/packages/qnamaker/docs/chit-chat-dataset.md) along with details of the personalities.
-
-For the user query of `When is your birthday?`, each personality has a styled response:
-
-<!-- added quotes so acrolinx doesn't score these sentences -->
-|Personality|Example|
-|--|--|
-|Professional|Age doesn't really apply to me.|
-|Friendly|I don't really have an age.|
-|Witty|I'm age-free.|
-|Caring|I don't have an age.|
-|Enthusiastic|I'm a bot, so I don't have an age.|
-||
--
-## Language support
-
-Chit-chat data sets are supported in the following languages:
-
-|Language|
-|--|
-|Chinese|
-|English|
-|French|
-|German|
-|Italian|
-|Japanese|
-|Korean|
-|Portuguese|
-|Spanish|
--
-## Add chit-chat during KB creation
-During knowledge base creation, after adding your source URLs and files, there is an option for adding chit-chat. Choose the personality that you want as your chit-chat base. If you do not want to add chit-chat, or if you already have chit-chat support in your data sources, choose **None**.
-
-## Add Chit-chat to an existing KB
-Select your KB, and navigate to the **Settings** page. There is a link to all the chit-chat datasets in the appropriate **.tsv** format. Download the personality you want, then upload it as a file source. Make sure not to edit the format or the metadata when you download and upload the file.
-
-![Add chit-chat to existing KB](../media/qnamaker-how-to-chit-chat/add-chit-chat-dataset.png)
-
-## Edit your chit-chat questions and answers
-When you edit your KB, you will see a new source for chit-chat, based on the personality you selected. You can now add altered questions or edit the responses, just like with any other source.
-
-![Edit chit-chat QnAs](../media/qnamaker-how-to-chit-chat/edit-chit-chat.png)
-
-To view the metadata, select **View Options** in the toolbar, then select **Show metadata**.
-
-## Add additional chit-chat questions and answers
-You can add a new chit-chat QnA pair that is not in the predefined data set. Ensure that you are not duplicating a QnA pair that is already covered in the chit-chat set. When you add any new chit-chat QnA pair, it gets added to your **Editorial** source. To ensure the ranker understands that this is chit-chat, add the metadata key/value pair `Editorial: chitchat`.
--
-## Delete chit-chat from an existing KB
-Select your KB, and navigate to the **Settings** page. Your specific chit-chat source is listed as a file, with the selected personality name. You can delete this as a source file.
-
-![Delete chit-chat from KB](../media/qnamaker-how-to-chit-chat/delete-chit-chat.png)
-
-## Next steps
-
-> [!div class="nextstepaction"]
-> [Import a knowledge base](../Tutorials/migrate-knowledge-base.md)
-
-## See also
-
-[QnA Maker overview](../Overview/overview.md)
cognitive-services Configure Qna Maker Resources https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/QnAMaker/How-To/configure-qna-maker-resources.md
- Title: Configure QnA Maker service - QnA Maker
-description: This document outlines advanced configurations for your QnA Maker resources.
--- Previously updated : 08/25/2021---
-# Configure QnA Maker resources
-
-You can configure QnA Maker to use a different Cognitive Search resource. You can also configure App Service settings if you are using QnA Maker GA.
--
-## Configure QnA Maker to use different Cognitive Search resource
-
-> [!NOTE]
-> If you change the Azure Search service associated with QnA Maker, you will lose access to all the knowledge bases already present in it. Make sure you export the existing knowledge bases before you change the Azure Search service.
-
-If you create a QnA service and its dependencies (such as Search) through the portal, a Search service is created for you and linked to the QnA Maker service. After these resources are created, you can update the App Service setting to use a previously existing Search service and remove the one you just created.
-
-QnA Maker's **App Service** resource uses the Cognitive Search resource. In order to change the Cognitive Search resource used by QnA Maker, you need to change the setting in the Azure portal.
-
-1. Get the **Admin key** and **Name** of the Cognitive Search resource you want QnA Maker to use.
-
-1. Sign in to the [Azure portal](https://portal.azure.com) and find the **App Service** associated with your QnA Maker resource. Both will have the same name.
-
-1. Select **Settings**, then **Configuration**. This will display all existing settings for the QnA Maker's App Service.
-
- > [!div class="mx-imgBorder"]
- > ![Screenshot of Azure portal showing App Service configuration settings](../media/qnamaker-how-to-upgrade-qnamaker/change-search-service-app-service-configuration.png)
-
-1. Change the values for the following keys:
-
- * **AzureSearchAdminKey**
- * **AzureSearchName**
-
-1. To use the new settings, you need to restart the App service. Select **Overview**, then select **Restart**.
-
- > [!div class="mx-imgBorder"]
- > ![Screenshot of Azure portal restarting App Service after configuration settings change](../media/qnamaker-how-to-upgrade-qnamaker/screenshot-azure-portal-restart-app-service.png)
-
-If you create a QnA service through Azure Resource Manager templates, you can create all resources and control the App Service creation to use an existing Search service.
-
-Learn more about how to configure the App Service [Application settings](../../../app-service/configure-common.md#configure-app-settings).
-
-## Get the latest runtime updates
-
-The QnAMaker runtime is part of the Azure App Service instance that's deployed when you [create a QnAMaker service](./set-up-qnamaker-service-azure.md) in the Azure portal. Updates are made periodically to the runtime. The QnA Maker App Service instance is in auto-update mode after the April 2019 site extension release (version 5+). This update is designed to ensure zero downtime during upgrades.
-
-You can check your current version at https://www.qnamaker.ai/UserSettings. If your version is older than version 5.x, you must restart App Service to apply the latest updates:
-
-1. Go to your QnAMaker service (resource group) in the [Azure portal](https://portal.azure.com).
-
- > [!div class="mx-imgBorder"]
- > ![QnAMaker Azure resource group](../media/qnamaker-how-to-troubleshoot/qnamaker-azure-resourcegroup.png)
-
-1. Select the App Service instance and open the **Overview** section.
-
- > [!div class="mx-imgBorder"]
- > ![QnAMaker App Service instance](../media/qnamaker-how-to-troubleshoot/qnamaker-azure-appservice.png)
--
-1. Restart App Service. The update process should finish in a couple of seconds. Any dependent applications or bots that use this QnAMaker service will be unavailable to end users during this restart period.
-
- ![Restart of the QnAMaker App Service instance](../media/qnamaker-how-to-upgrade-qnamaker/qnamaker-appservice-restart.png)
-
-## Configure App service idle setting to avoid timeout
-
-The App Service, which serves the QnA Maker prediction runtime for a published knowledge base, has an idle timeout configuration that automatically times out the service when it's idle. For QnA Maker, this means your prediction runtime generateAnswer API occasionally times out after periods of no traffic.
-
-To keep the prediction endpoint app loaded even when there is no traffic, set the idle configuration to **Always on**.
-
-1. Sign in to the [Azure portal](https://portal.azure.com).
-1. Search for and select your QnA Maker resource's App Service. It has the same name as the QnA Maker resource, but its **type** is App Service.
-1. Find **Settings** then select **Configuration**.
-1. On the Configuration pane, select **General settings**, then find **Always on**, and select **On** as the value.
-
- > [!div class="mx-imgBorder"]
- > ![On the Configuration pane, select **General settings**, then find **Always on**, and select **On** as the value.](../media/qnamaker-how-to-upgrade-qnamaker/configure-app-service-idle-timeout.png)
-
-1. Select **Save** to save the configuration.
-1. You are asked if you want to restart the app to use the new setting. Select **Continue**.
-
-Learn more about how to configure the App Service [General settings](../../../app-service/configure-common.md#configure-general-settings).
-
-## Business continuity with traffic manager
-
-The primary objective of the business continuity plan is to create a resilient knowledge base endpoint that ensures no downtime for the bot or the application consuming it.
-
-> [!div class="mx-imgBorder"]
-> ![QnA Maker bcp plan](../media/qnamaker-how-to-bcp-plan/qnamaker-bcp-plan.png)
-
-The high-level idea as represented above is as follows:
-
-1. Set up two parallel [QnA Maker services](set-up-qnamaker-service-azure.md) in [Azure paired regions](../../../availability-zones/cross-region-replication-azure.md).
-
-1. [Back up](../../../app-service/manage-backup.md) your primary QnA Maker App Service and [restore](../../../app-service/manage-backup.md) it in the secondary setup. This ensures that both setups work with the same hostname and keys.
-
-1. Keep the primary and secondary Azure Search indexes in sync. Use the GitHub sample [here](https://github.com/pchoudhari/QnAMakerBackupRestore) to see how to back up and restore Azure indexes. A minimal sync sketch follows this list.
-
-1. Back up the Application Insights using [continuous export](/previous-versions/azure/azure-monitor/app/export-telemetry).
-
-1. Once the primary and secondary stacks have been set up, use [traffic manager](../../../traffic-manager/traffic-manager-overview.md) to configure the two endpoints and set up a routing method.
-
-1. You would need to create a Transport Layer Security (TLS), previously known as Secure Sockets Layer (SSL), certificate for your traffic manager endpoint. [Bind the TLS/SSL certificate](../../../app-service/configure-ssl-bindings.md) in your App services.
-
-1. Finally, use the traffic manager endpoint in your Bot or App.
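-
-As a companion to step 3 above, here is a minimal sketch, assuming the Azure.Search.Documents NuGet package, of copying documents from the primary search index to the secondary one. It assumes an index with the same name and schema already exists on the secondary service; the endpoints, keys, and index name are placeholders, and the linked GitHub sample remains the more complete reference.
-
-```csharp
-// Read every document from the primary index and upload it to the secondary
-// index so both QnA Maker stacks serve the same content.
-using System;
-using System.Collections.Generic;
-using System.Threading.Tasks;
-using Azure;
-using Azure.Search.Documents;
-using Azure.Search.Documents.Models;
-
-public static class IndexSyncSketch
-{
-    public static async Task CopyIndexAsync()
-    {
-        var primary = new SearchClient(
-            new Uri("https://<primary-search>.search.windows.net"),
-            "<index-name>",
-            new AzureKeyCredential("<primary-admin-key>"));
-
-        var secondary = new SearchClient(
-            new Uri("https://<secondary-search>.search.windows.net"),
-            "<index-name>",
-            new AzureKeyCredential("<secondary-admin-key>"));
-
-        SearchResults<SearchDocument> results = await primary.SearchAsync<SearchDocument>("*");
-
-        var documents = new List<SearchDocument>();
-        await foreach (SearchResult<SearchDocument> result in results.GetResultsAsync())
-        {
-            documents.Add(result.Document);
-        }
-
-        if (documents.Count > 0)
-        {
-            await secondary.MergeOrUploadDocumentsAsync(documents);
-        }
-    }
-}
-```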
cognitive-services Edit Knowledge Base https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/QnAMaker/How-To/edit-knowledge-base.md
- Title: Edit a knowledge base - QnA Maker
-description: QnA Maker allows you to manage the content of your knowledge base by providing an easy-to-use editing experience.
--- Previously updated : 11/19/2021---
-# Edit QnA pairs in your knowledge base
-
-QnA Maker allows you to manage the content of your knowledge base by providing an easy-to-use editing experience.
-
-QnA pairs are added from a datasource, such as a file or URL, or added as an editorial source. An editorial source indicates the QnA pair was added in the QnA portal manually. All QnA pairs are available for editing.
--
-<a name="add-an-editorial-qna-set"></a>
-
-## Question and answer pairs
-
-A knowledge base consists of question and answer (QnA) pairs. Each pair has one answer and a pair contains all the information associated with that _answer_. An answer can loosely resemble a database row or a data structure instance. The **required** settings in a question-and-answer (QnA) pair are:
-
-* a **question** - the text of a user query, used by QnA Maker's machine learning to match user questions that have different wording but the same answer
-* the **answer** - the pair's answer is the response that's returned when a user query is matched with the associated question
-
-Each pair is represented by an **ID**.
-
-The **optional** settings for a pair include:
-
-* **Alternate forms of the question** - this helps QnA Maker return the correct answer for a wider variety of question phrasings
-* **Metadata**: Metadata are tags associated with a QnA pair and are represented as key-value pairs. Metadata tags are used to filter QnA pairs and limit the set over which query matching is performed.
-* **Multi-turn prompts**, used to continue a multi-turn conversation
-
-![QnA Maker knowledge bases](../media/qnamaker-concepts-knowledgebase/knowledgebase.png)
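-
-If it helps to picture the shape of a pair in client code, the following is an illustrative sketch only; the property names mirror the fields described above and are not necessarily the exact names used by the QnA Maker APIs.
-
-```csharp
-// A simple model of a QnA pair: ID, answer, questions (including alternate
-// phrasings), metadata key/value pairs, and follow-up prompts.
-using System.Collections.Generic;
-
-public class QnaPair
-{
-    // Each pair is represented by an ID.
-    public int Id { get; set; }
-
-    // The single answer returned when any of the questions match a user query.
-    public string Answer { get; set; }
-
-    // The question plus any alternate phrasings that map to the same answer.
-    public List<string> Questions { get; set; } = new List<string>();
-
-    // Metadata tags used to filter QnA pairs and limit the set over which
-    // query matching is performed.
-    public Dictionary<string, string> Metadata { get; set; } = new Dictionary<string, string>();
-
-    // Follow-up prompts used to continue a multi-turn conversation.
-    public List<string> FollowUpPrompts { get; set; } = new List<string>();
-}
-```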
-
-## Add an editorial QnA pair
-
-If you do not have pre-existing content to populate the knowledge base, you can add QnA pairs editorially in the QnA Maker portal.
-
-1. Sign in to the [QnA portal](https://www.qnamaker.ai/), then select the knowledge base to add the QnA pair to.
-1. On the **EDIT** page of the knowledge base, select **Add QnA pair** to add a new QnA pair.
-
- > [!div class="mx-imgBorder"]
- > ![Add QnA pair](../media/qnamaker-how-to-edit-kb/add-qnapair.png)
-
-1. In the new QnA pair row, add the required question and answer fields. The other fields are optional. All fields can be changed at any time.
-
-1. Optionally, add **[alternate phrasing](../Quickstarts/add-question-metadata-portal.md#add-additional-alternatively-phrased-questions)**. Alternate phrasing is any form of the question that is significantly different from the original question but should provide the same answer.
-
- When your knowledge base is published, and you have [active learning](use-active-learning.md) turned on, QnA Maker collects alternate phrasing choices for you to accept. These choices are selected in order to increase the prediction accuracy.
-
-1. Optionally, add **[metadata](../Quickstarts/add-question-metadata-portal.md#add-metadata-to-filter-the-answers)**. To view metadata, select **View options** in the context menu. Metadata provides filters to the answers that the client application, such as a chat bot, provides.
-
-1. Optionally, add **[follow-up prompts](multi-turn.md)**. Follow-up prompts provide additional conversation paths to the client application to present to the user.
-
-1. Select **Save and train** to see predictions including the new QnA pair.
-
-## Rich-text editing for answer
-
-Rich-text editing of your answer text gives you markdown styling from a simple toolbar.
-
-1. Select the text area for an answer; the rich-text editor toolbar displays on the QnA pair row.
-
- > [!div class="mx-imgBorder"]
- > ![Screenshot of the rich-text editor with the question and answer of a QnA pair row.](../media/qnamaker-how-to-edit-kb/rich-text-control-qna-pair-row.png)
-
- Any text already in the answer displays correctly as your user will see it from a bot.
-
-1. Edit the text. Select formatting features from the rich-text editing toolbar or use the toggle feature to switch to markdown syntax.
-
- > [!div class="mx-imgBorder"]
- > ![Use the rich-text editor to write and format text and save as markdown.](../media/qnamaker-how-to-edit-kb/rich-text-display-image.png)
-
- |Rich-text editor features|Keyboard shortcut|
- |--|--|
- |Toggle between rich-text editor and markdown. `</>`|CTRL+M|
-    |Bold. **B**|CTRL+B|
- |Italics, indicated with an italicized **_I_**|CTRL+I|
- |Unordered list||
- |Ordered list||
- |Paragraph style||
- |Image - add an image available from a public URL.|CTRL+G|
- |Add link to publicly available URL.|CTRL+K|
- |Emoticon - add from a selection of emoticons.|CTRL+E|
- |Advanced menu - undo|CTRL+Z|
- |Advanced menu - redo|CTRL+Y|
-
-1. Add an image to the answer using the Image icon in the rich-text toolbar. The in-place editor needs the publicly accessible image URL and the alternate text for the image.
--
- > [!div class="mx-imgBorder"]
- > ![Screenshot shows the in-place editor with the publicly accessible image URL and alternate text for the image entered.](../media/qnamaker-how-to-edit-kb/add-image-url-alternate-text.png)
-
-1. Add a link to a URL by either selecting the text in the answer, then selecting the Link icon in the toolbar or by selecting the Link icon in the toolbar then entering new text and the URL.
-
- > [!div class="mx-imgBorder"]
- > ![Use the rich-text editor add a publicly accessible image and its ALT text.](../media/qnamaker-how-to-edit-kb/add-link-to-answer-rich-text-editor.png)
-
-## Edit a QnA pair
-
-Any field in any QnA pair can be edited, regardless of the original data source. Some fields may not be visible due to your current **View Options** settings, found in the contextual toolbar.
-
-## Delete a QnA pair
-
-To delete a QnA, click the **delete** icon on the far right of the QnA row. This is a permanent operation. It can't be undone. Consider exporting your KB from the **Publish** page before deleting pairs.
-
-![Delete QnA pair](../media/qnamaker-how-to-edit-kb/delete-qnapair.png)
-
-## Find the QnA pair ID
-
-If you need to find the QnA pair ID, you can find it in two places:
-
-* Hover on the delete icon on the QnA pair row you are interested in. The hover text includes the QnA pair ID.
-* Export the knowledge base. Each QnA pair in the knowledge base includes the QnA pair ID.
-
-## Add alternate questions
-
-Add alternate questions to an existing QnA pair to improve the likelihood of a match to a user query.
-
-![Add Alternate Questions](../media/qnamaker-how-to-edit-kb/add-alternate-question.png)
-
-## Linking QnA pairs
-
-Linking QnA pairs is provided with [follow-up prompts](multi-turn.md). This is a logical connection between QnA pairs, managed at the knowledge base level. You can edit follow-up prompts in the QnA Maker portal.
-
-You can't link QnA pairs in the answer's metadata.
-
-## Add metadata
-
-Add metadata pairs by first selecting **View options**, then selecting **Show metadata**. This displays the metadata column. Next, select the **+** sign to add a metadata pair. This pair consists of one key and one value.
-
-Learn more about metadata in the QnA Maker portal quickstart for metadata:
-* [Authoring - add metadata to QnA pair](../quickstarts/add-question-metadata-portal.md#add-metadata-to-filter-the-answers)
-* [Query prediction - filter answers by metadata](../quickstarts/get-answer-from-knowledge-base-using-url-tool.md)
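-
-At query time, a client application can use the metadata you added here to filter answers. As a minimal sketch (assuming the `strictFilters` property of the GenerateAnswer request body and a hypothetical `category` metadata key), the request body might be built like this:
-
-```csharp
-// Build a GenerateAnswer request body that filters answers by a metadata
-// key/value pair. The "category"/"returns" pair is a placeholder for your own
-// metadata.
-using System.Text.Json;
-
-public static class MetadataFilterSketch
-{
-    public static string BuildRequestBody(string userQuestion)
-    {
-        var body = new
-        {
-            question = userQuestion,
-            top = 3,
-            strictFilters = new[]
-            {
-                new { name = "category", value = "returns" }
-            }
-        };
-
-        return JsonSerializer.Serialize(body);
-    }
-}
-```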
-
-## Save changes to the QnA pairs
-
-Periodically select **Save and train** after making edits to avoid losing changes.
-
-![Add Metadata](../media/qnamaker-how-to-edit-kb/add-metadata.png)
-
-## When to use rich-text editing versus markdown
-
-[Rich-text editing](#add-an-editorial-qna-set) of answers allows you, as the author, to use a formatting toolbar to quickly select and format text.
-
-[Markdown](../reference-markdown-format.md) is a better tool when you need to autogenerate content to create knowledge bases to be imported as part of a CI/CD pipeline or for [batch testing](../index.yml).
-
-## Editing your knowledge base locally
-
-Once a knowledge base is created, it is recommended that you make edits to the knowledge base text in the [QnA Maker portal](https://qnamaker.ai), rather than exporting and reimporting through local files. However, there may be times that you need to edit a knowledge base locally.
-
-Export the knowledge base from the **Settings** page, then edit the knowledge base with Microsoft Excel. If you choose to use another application to edit your exported file, the application may introduce syntax errors because it is not fully TSV compliant. Microsoft Excel's TSV files generally don't introduce any formatting errors.
-
-Once you are done with your edits, reimport the TSV file from the **Settings** page. This will completely replace the current knowledge base with the imported knowledge base.
-
-## Next steps
-
-> [!div class="nextstepaction"]
-> [Collaborate on a knowledge base](../index.yml)
-
-* [Manage Azure resources used by QnA Maker](set-up-qnamaker-service-azure.md)
cognitive-services Get Analytics Knowledge Base https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/QnAMaker/How-To/get-analytics-knowledge-base.md
- Title: Analytics on knowledgebase - QnA Maker-
-description: QnA Maker stores all chat logs and other telemetry, if you have enabled App Insights during the creation of your QnA Maker service. Run the sample queries to get your chat logs from App Insights.
--
-displayName: chat history, history, chat logs, logs
--- Previously updated : 08/25/2021---
-# Get analytics on your knowledge base
-
-QnA Maker stores all chat logs and other telemetry, if you have enabled Application Insights during the [creation of your QnA Maker service](./set-up-qnamaker-service-azure.md). Run the sample queries to get your chat logs from Application Insights.
--
-1. Go to your Application Insights resource.
-
- ![Select your application insights resource](../media/qnamaker-how-to-analytics-kb/resources-created.png)
-
-2. Select **Log (Analytics)**. A new window opens where you can query QnA Maker telemetry.
-
-3. Paste in the following query and run it.
-
- ```kusto
- requests
- | where url endswith "generateAnswer"
- | project timestamp, id, url, resultCode, duration, performanceBucket
- | parse kind = regex url with *"(?i)knowledgebases/"KbId"/generateAnswer"
- | join kind= inner (
- traces | extend id = operation_ParentId
- ) on id
- | where message == "QnAMaker GenerateAnswer"
- | extend question = tostring(customDimensions['Question'])
- | extend answer = tostring(customDimensions['Answer'])
- | extend score = tostring(customDimensions['Score'])
- | project timestamp, resultCode, duration, id, question, answer, score, performanceBucket,KbId
- ```
-
- Select **Run** to run the query.
-
- [![Run query to determine questions, answers, and score from users](../media/qnamaker-how-to-analytics-kb/run-query.png)](../media/qnamaker-how-to-analytics-kb/run-query.png#lightbox)
-
-## Run queries for other analytics on your QnA Maker knowledge base
-
-### Total 90-day traffic
-
-```kusto
-//Total Traffic
-requests
-| where url endswith "generateAnswer" and name startswith "POST"
-| parse kind = regex url with *"(?i)knowledgebases/"KbId"/generateAnswer"
-| summarize ChatCount=count() by bin(timestamp, 1d), KbId
-```
-
-### Total question traffic in a given time period
-
-```kusto
-//Total Question Traffic in a given time period
-let startDate = todatetime('2019-01-01');
-let endDate = todatetime('2020-12-31');
-requests
-| where timestamp <= endDate and timestamp >=startDate
-| where url endswith "generateAnswer" and name startswith "POST"
-| parse kind = regex url with *"(?i)knowledgebases/"KbId"/generateAnswer"
-| summarize ChatCount=count() by KbId
-```
-
-### User traffic
-
-```kusto
-//User Traffic
-requests
-| where url endswith "generateAnswer"
-| project timestamp, id, url, resultCode, duration
-| parse kind = regex url with *"(?i)knowledgebases/"KbId"/generateAnswer"
-| join kind= inner (
-traces | extend id = operation_ParentId
-) on id
-| extend UserId = tostring(customDimensions['UserId'])
-| summarize ChatCount=count() by bin(timestamp, 1d), UserId, KbId
-```
-
-### Latency distribution of questions
-
-```kusto
-//Latency distribution of questions
-requests
-| where url endswith "generateAnswer" and name startswith "POST"
-| parse kind = regex url with *"(?i)knowledgebases/"KbId"/generateAnswer"
-| project timestamp, id, name, resultCode, performanceBucket, KbId
-| summarize count() by performanceBucket, KbId
-```
-
-### Unanswered questions
-
-```kusto
-// Unanswered questions
-requests
-| where url endswith "generateAnswer"
-| project timestamp, id, url
-| parse kind = regex url with *"(?i)knowledgebases/"KbId"/generateAnswer"
-| join kind= inner (
-traces | extend id = operation_ParentId
-) on id
-| extend question = tostring(customDimensions['Question'])
-| extend answer = tostring(customDimensions['Answer'])
-| extend score = tostring(customDimensions['Score'])
-| where score == "0" and message == "QnAMaker GenerateAnswer"
-| project timestamp, KbId, question, answer, score
-| order by timestamp desc
-```
-
-> **NOTE**
-> If you can't retrieve the logs by using Application Insights, confirm the Application Insights settings on the App Service resource.
-> Open the App Service resource and go to **Application Insights**. Check whether it is enabled; if it is disabled, enable it and apply the change.
-
-## Next steps
-
-> [!div class="nextstepaction"]
-> [Choose capacity](./improve-knowledge-base.md)
cognitive-services Improve Knowledge Base https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/QnAMaker/How-To/improve-knowledge-base.md
- Title: Active Learning suggested questions - QnA Maker
-description: Improve the quality of your knowledge base with active learning. Review, accept or reject, add without removing or changing existing questions.
--- Previously updated : 01/11/2022------
-# Accept active learning suggested questions in the knowledge base
--
-<a name="accept-an-active-learning-suggestion-in-the-knowledge-base"></a>
-
-Active learning alters the knowledge base or search service after you approve a suggestion, then save and train. If you approve the suggestion, it is added as an alternate question.
-
-## Turn on active learning
-
-In order to see suggested questions, you must [turn on active learning](../How-To/use-active-learning.md#turn-on-active-learning-for-alternate-questions) for your QnA Maker resource.
-
-## View suggested questions
-
-1. In order to see the suggested questions, on the **Edit** knowledge base page, select **View Options**, then select **Show active learning suggestions**. This option will be disabled if there are no suggestions present for any of the question and answer pairs.
-
- [![On the Edit section of the portal, select Show Suggestions in order to see the active learning's new question alternatives.](../media/improve-knowledge-base/show-suggestions-button.png)](../media/improve-knowledge-base/show-suggestions-button.png#lightbox)
-
-1. Filter the knowledge base with question and answer pairs to show only suggestions by selecting **Filter by Suggestions**.
-
- [![Use the Filter by suggestions toggle to view only the active learning's suggested question alternatives.](../media/improve-knowledge-base/filter-by-suggestions.png)](../media/improve-knowledge-base/filter-by-suggestions.png#lightbox)
-
-1. Each QnA pair shows the new question alternatives with a check mark, `✔`, to accept the question or an `x` to reject the suggestions. Select the check mark to add the question.
-
- [![Select or reject active learning's suggested question alternatives by selecting the green check mark or red delete mark.](../media/improve-knowledge-base/accept-active-learning-suggestions-small.png)](../media/improve-knowledge-base/accept-active-learning-suggestions.png#lightbox)
-
- You can add or delete _all suggestions_ by selecting **Add all** or **Reject all** in the contextual toolbar.
-
-1. Select **Save and Train** to save the changes to the knowledge base.
-
-1. Select **Publish** to allow the changes to be available from the [GenerateAnswer API](metadata-generateanswer-usage.md#generateanswer-request-configuration).
-
- When 5 or more similar queries are clustered, every 30 minutes, QnA Maker suggests the alternate questions for you to accept or reject.
--
-<a name="#score-proximity-between-knowledge-base-questions"></a>
-
-## Active learning suggestions are saved in the exported knowledge base
-
-When your app has active learning enabled, and you export the app, the `SuggestedQuestions` column in the tsv file retains the active learning data.
-
-The `SuggestedQuestions` column is a JSON object containing implicit (`autosuggested`) and explicit (`usersuggested`) feedback. An example of this JSON object for a single user-submitted question of `help` is:
-
-```JSON
-[
- {
- "clusterHead": "help",
- "totalAutoSuggestedCount": 1,
- "totalUserSuggestedCount": 0,
- "alternateQuestionList": [
- {
- "question": "help",
- "autoSuggestedCount": 1,
- "userSuggestedCount": 0
- }
- ]
- }
-]
-```
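-
-If you process exported knowledge bases programmatically, a minimal sketch for deserializing this JSON (assuming System.Text.Json and that you've already read the `SuggestedQuestions` column from the TSV row) might look like this; the class shapes mirror the sample above and are illustrative rather than a formal contract.
-
-```csharp
-// Deserialize the SuggestedQuestions JSON from an exported knowledge base row.
-using System.Collections.Generic;
-using System.Text.Json;
-
-public class SuggestedQuestionCluster
-{
-    public string ClusterHead { get; set; }
-    public int TotalAutoSuggestedCount { get; set; }
-    public int TotalUserSuggestedCount { get; set; }
-    public List<AlternateQuestion> AlternateQuestionList { get; set; }
-}
-
-public class AlternateQuestion
-{
-    public string Question { get; set; }
-    public int AutoSuggestedCount { get; set; }
-    public int UserSuggestedCount { get; set; }
-}
-
-public static class SuggestedQuestionsParser
-{
-    private static readonly JsonSerializerOptions Options =
-        new JsonSerializerOptions { PropertyNameCaseInsensitive = true };
-
-    public static List<SuggestedQuestionCluster> Parse(string suggestedQuestionsJson) =>
-        JsonSerializer.Deserialize<List<SuggestedQuestionCluster>>(suggestedQuestionsJson, Options);
-}
-```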
-
-When you reimport this app, the active learning continues to collect information and recommend suggestions for your knowledge base.
--
-### Architectural flow for using GenerateAnswer and Train APIs from a bot
-
-A bot or other client application should use the following architectural flow to use active learning:
-
-1. Bot [gets the answer from the knowledge base](#use-the-top-property-in-the-generateanswer-request-to-get-several-matching-answers) with the GenerateAnswer API, using the `top` property to get a number of answers.
-
-2. Bot determines explicit feedback:
- * Using your own [custom business logic](#use-the-score-property-along-with-business-logic-to-get-list-of-answers-to-show-user), filter out low scores.
- * In the bot or client-application, display list of possible answers to the user and get user's selected answer.
-3. Bot [sends selected answer back to QnA Maker](#bot-framework-sample-code) with the [Train API](#train-api).
-
-### Use the top property in the GenerateAnswer request to get several matching answers
-
-When submitting a question to QnA Maker for an answer, the `top` property of the JSON body sets the number of answers to return.
-
-```json
-{
- "question": "wi-fi",
- "isTest": false,
- "top": 3
-}
-```
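-
-As a minimal sketch of sending this body from client code (placeholders for the runtime host, knowledge base ID, and endpoint key; the host format matches the Train API signature shown later in this article):
-
-```csharp
-// POST the question to the GenerateAnswer endpoint and return the raw JSON
-// response containing the ranked answers.
-using System;
-using System.Net.Http;
-using System.Text;
-using System.Text.Json;
-using System.Threading.Tasks;
-
-public static class GenerateAnswerSketch
-{
-    public static async Task<string> GetTopAnswersAsync(string question)
-    {
-        var uri = "https://<qna-maker-resource-name>.azurewebsites.net" +
-                  "/qnamaker/knowledgebases/<knowledge-base-id>/generateAnswer";
-
-        var body = JsonSerializer.Serialize(new { question, isTest = false, top = 3 });
-
-        using var client = new HttpClient();
-        using var request = new HttpRequestMessage(HttpMethod.Post, uri)
-        {
-            Content = new StringContent(body, Encoding.UTF8, "application/json")
-        };
-        request.Headers.Add("Authorization", "EndpointKey <endpoint-key>");
-
-        var response = await client.SendAsync(request);
-        return await response.Content.ReadAsStringAsync();
-    }
-}
-```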
-
-### Use the score property along with business logic to get list of answers to show user
-
-When the client application (such as a chat bot) receives the response, the top 3 questions are returned. Use the `score` property to analyze the proximity between scores. This proximity range is determined by your own business logic.
-
-```json
-{
- "answers": [
- {
- "questions": [
- "Wi-Fi Direct Status Indicator"
- ],
- "answer": "**Wi-Fi Direct Status Indicator**\n\nStatus bar icons indicate your current Wi-Fi Direct connection status: \n\nWhen your device is connected to another device using Wi-Fi Direct, '$ \n\n+ *+ ' Wi-Fi Direct is displayed in the Status bar.",
- "score": 74.21,
- "id": 607,
- "source": "Bugbash KB.pdf",
- "metadata": []
- },
- {
- "questions": [
- "Wi-Fi - Connections"
- ],
- "answer": "**Wi-Fi**\n\nWi-Fi is a term used for certain types of Wireless Local Area Networks (WLAN). Wi-Fi communication requires access to a wireless Access Point (AP).",
- "score": 74.15,
- "id": 599,
- "source": "Bugbash KB.pdf",
- "metadata": []
- },
- {
- "questions": [
- "Turn Wi-Fi On or Off"
- ],
- "answer": "**Turn Wi-Fi On or Off**\n\nTurning Wi-Fi on makes your device able to discover and connect to compatible in-range wireless APs. \n\n1. From a Home screen, tap ::: Apps > e Settings .\n2. Tap Connections > Wi-Fi , and then tap On/Off to turn Wi-Fi on or off.",
- "score": 69.99,
- "id": 600,
- "source": "Bugbash KB.pdf",
- "metadata": []
- }
- ]
-}
-```
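-
-The margin you use is up to you; as one possible sketch of such business logic (the 1-point margin is an arbitrary example, not a recommendation), you could keep only the answers whose score is close to the best score:
-
-```csharp
-// Keep only the answers whose score is within a fixed margin of the top score.
-// With the sample response above and a 1-point margin, the first two answers
-// are kept and the third is dropped.
-using System.Collections.Generic;
-using System.Linq;
-
-public static class ScoreProximityFilter
-{
-    public static IReadOnlyList<(string Question, double Score)> FilterByProximity(
-        IEnumerable<(string Question, double Score)> answers,
-        double margin = 1.0)
-    {
-        var ordered = answers.OrderByDescending(a => a.Score).ToList();
-        if (ordered.Count == 0)
-        {
-            return ordered;
-        }
-
-        double topScore = ordered[0].Score;
-
-        // Answers that survive the filter are close enough that the user
-        // should choose among them (see the next section).
-        return ordered.Where(a => topScore - a.Score <= margin).ToList();
-    }
-}
-```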
-
-## Client application follow-up when questions have similar scores
-
-Your client application displays the questions with an option for the user to select _the single question_ that most represents their intention.
-
-Once the user selects one of the existing questions, the client application sends the user's choice as feedback using the QnA Maker Train API. This feedback completes the active learning feedback loop.
-
-## Train API
-
-Active learning feedback is sent to QnA Maker with the Train API POST request. The API signature is:
-
-```http
-POST https://<QnA-Maker-resource-name>.azurewebsites.net/qnamaker/knowledgebases/<knowledge-base-ID>/train
-Authorization: EndpointKey <endpoint-key>
-Content-Type: application/json
-{"feedbackRecords": [{"userId": "1","userQuestion": "<question-text>","qnaId": 1}]}
-```
-
-|HTTP request property|Name|Type|Purpose|
-|--|--|--|--|
-|URL route parameter|Knowledge base ID|string|The GUID for your knowledge base.|
-|Custom subdomain|QnAMaker resource name|string|The resource name is used as the custom subdomain for your QnA Maker. This is available on the Settings page after you publish the knowledge base. It is listed as the `host`.|
-|Header|Content-Type|string|The media type of the body sent to the API. Default value is: `application/json`|
-|Header|Authorization|string|Your endpoint key (EndpointKey xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx).|
-|Post Body|JSON object|JSON|The training feedback|
-
-The JSON body has several settings:
-
-|JSON body property|Type|Purpose|
-|--|--|--|
-|`feedbackRecords`|array|List of feedback.|
-|`userId`|string|The user ID of the person accepting the suggested questions. The user ID format is up to you. For example, an email address can be a valid user ID in your architecture. Optional.|
-|`userQuestion`|string|Exact text of the user's query. Required.|
-|`qnaId`|number|ID of the question, found in the [GenerateAnswer response](metadata-generateanswer-usage.md#generateanswer-response-properties).|
-
-An example JSON body looks like:
-
-```json
-{
- "feedbackRecords": [
- {
- "userId": "1",
- "userQuestion": "<question-text>",
- "qnaId": 1
- }
- ]
-}
-```
-
-A successful response returns a status of 204 and no JSON response body.
-
-### Batch many feedback records into a single call
-
-In the client-side application, such as a bot, you can store the data, then send many records in a single JSON body in the `feedbackRecords` array.
-
-An example JSON body looks like:
-
-```json
-{
- "feedbackRecords": [
- {
- "userId": "1",
- "userQuestion": "How do I ...",
- "qnaId": 1
- },
- {
- "userId": "2",
- "userQuestion": "Where is ...",
- "qnaId": 40
- },
- {
- "userId": "3",
- "userQuestion": "When do I ...",
- "qnaId": 33
- }
- ]
-}
-```
---
-<a name="active-learning-is-saved-in-the-exported-apps-tsv-file"></a>
-
-## Bot framework sample code
-
-Your bot framework code needs to call the Train API if the user's query should be used for active learning. There are two pieces of code to write:
-
-* Determine if query should be used for active learning
-* Send query back to the QnA Maker Train API for active learning
-
-In the [Azure Bot sample](https://github.com/microsoft/BotBuilder-Samples), both of these activities have been programmed.
-
-### Example C# code for Train API with Bot Framework 4.x
-
-The following code illustrates how to send information back to QnA Maker with the Train API.
-
-```csharp
-public class FeedbackRecords
-{
- // <summary>
- /// List of feedback records
- /// </summary>
- [JsonProperty("feedbackRecords")]
- public FeedbackRecord[] Records { get; set; }
-}
-
-/// <summary>
-/// Active learning feedback record
-/// </summary>
-public class FeedbackRecord
-{
- /// <summary>
- /// User id
- /// </summary>
- public string UserId { get; set; }
-
- /// <summary>
- /// User question
- /// </summary>
- public string UserQuestion { get; set; }
-
- /// <summary>
- /// QnA Id
- /// </summary>
- public int QnaId { get; set; }
-}
-
-/// <summary>
-/// Method to call REST-based QnAMaker Train API for Active Learning
-/// </summary>
-/// <param name="endpoint">Endpoint URI of the runtime</param>
-/// <param name="FeedbackRecords">Feedback records train API</param>
-/// <param name="kbId">Knowledgebase Id</param>
-/// <param name="key">Endpoint key</param>
-/// <param name="cancellationToken"> Cancellation token</param>
-public async static void CallTrain(string endpoint, FeedbackRecords feedbackRecords, string kbId, string key, CancellationToken cancellationToken)
-{
- var uri = endpoint + "/knowledgebases/" + kbId + "/train/";
-
- using (var client = new HttpClient())
- {
- using (var request = new HttpRequestMessage())
- {
- request.Method = HttpMethod.Post;
- request.RequestUri = new Uri(uri);
- request.Content = new StringContent(JsonConvert.SerializeObject(feedbackRecords), Encoding.UTF8, "application/json");
- request.Headers.Add("Authorization", "EndpointKey " + key);
-
- var response = await client.SendAsync(request, cancellationToken);
- await response.Content.ReadAsStringAsync();
- }
- }
-}
-```
-
-### Example Node.js code for Train API with Bot Framework 4.x
-
-The following code illustrates how to send information back to QnA Maker with the Train API.
-
-```javascript
-async callTrain(stepContext){
-
- var trainResponses = stepContext.values[this.qnaData];
- var currentQuery = stepContext.values[this.currentQuery];
-
- if(trainResponses.length > 1){
- var reply = stepContext.context.activity.text;
- var qnaResults = trainResponses.filter(r => r.questions[0] == reply);
-
- if(qnaResults.length > 0){
-
- stepContext.values[this.qnaData] = qnaResults;
-
- var feedbackRecords = {
- FeedbackRecords:[
- {
- UserId:stepContext.context.activity.id,
- UserQuestion: currentQuery,
- QnaId: qnaResults[0].id
- }
- ]
- };
-
- // Call Active Learning Train API
- this.activeLearningHelper.callTrain(this.qnaMaker.endpoint.host, feedbackRecords, this.qnaMaker.endpoint.knowledgeBaseId, this.qnaMaker.endpoint.endpointKey);
-
- return await stepContext.next(qnaResults);
- }
- else{
-
- return await stepContext.endDialog();
- }
- }
-
- return await stepContext.next(stepContext.result);
-}
-```
-
-## Best practices
-
-For best practices when using active learning, see [Best practices](../Concepts/best-practices.md#active-learning).
-
-## Next steps
-
-> [!div class="nextstepaction"]
-> [Use metadata with GenerateAnswer API](metadata-generateanswer-usage.md)
cognitive-services Manage Knowledge Bases https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/QnAMaker/How-To/manage-knowledge-bases.md
- Title: Manage knowledge bases - QnA Maker
-description: QnA Maker allows you to manage your knowledge bases by providing access to the knowledge base settings and content.
--- Previously updated : 03/18/2020---
-# Create knowledge base and manage settings
-
-QnA Maker allows you to manage your knowledge bases by providing access to the knowledge base settings and data sources.
--
-## Prerequisites
-
-> * If you don't have an Azure subscription, [create a free account](https://azure.microsoft.com/free/cognitive-services/) before you begin.
-> * A [QnA Maker resource](https://portal.azure.com/#create/Microsoft.CognitiveServicesQnAMaker) created in the Azure portal. Remember the Azure Active Directory ID, subscription, and QnA resource name that you selected when you created the resource.
-
-## Create a knowledge base
-
-1. Sign in to the [QnAMaker.ai](https://QnAMaker.ai) portal with your Azure credentials.
-
-1. In the QnA Maker portal, select **Create a knowledge base**.
-
-1. On the **Create** page, skip **Step 1** if you already have your QnA Maker resource.
-
- If you haven't created the resource yet, select **Stable** and **Create a QnA service**. You are directed to the [Azure portal](https://portal.azure.com/#create/Microsoft.CognitiveServicesQnAMaker) to set up a QnA Maker service in your subscription. Remember your Azure Active Directory ID, Subscription, QnA resource name you selected when you created the resource.
-
- When you are done creating the resource in the Azure portal, return to the QnA Maker portal, refresh the browser page, and continue to **Step 2**.
-
-1. In **Step 2**, select your Active Directory, subscription, service (resource), and the language for all knowledge bases created in the service.
-
- ![Screenshot of selecting a QnA Maker service knowledge base](../media/qnamaker-quickstart-kb/qnaservice-selection.png)
-
-1. In **Step 3**, name your knowledge base `My Sample QnA KB`.
-
-1. In **Step 4**, configure the settings with the following table:
-
- |Setting|Value|
- |--|--|
- |**Enable multi-turn extraction from URLs, .pdf or .docx files.**|Checked|
- |**Default answer text**| `Quickstart - default answer not found.`|
- |**+ Add URL**|`https://azure.microsoft.com/en-us/support/faq/`|
- |**Chit-chat**|Select **Professional**|
--
-1. In **Step 5**, Select **Create your KB**.
-
- The extraction process takes a few moments to read the document and identify questions and answers.
-
- After QnA Maker successfully creates the knowledge base, the **Knowledge base** page opens. You can edit the contents of the knowledge base on this page.
-
-## Edit knowledge base
-
-1. Select **My knowledge bases** in the top navigation bar.
-
- You can see all the services you created or shared with you sorted in the descending order of the **last modified** date.
-
- ![My Knowledge Bases](../media/qnamaker-how-to-edit-kb/my-kbs.png)
-
-1. Select a particular knowledge base to make edits to it.
-
-1. Select **Settings**. The following list contains fields you can change.
-
- |Goal|Action|
- |--|--|
- |Add URL|You can add new URLs, to add new FAQ content to the knowledge base, by selecting **Manage knowledge base** > **+ Add URL**.|
- |Delete URL|You can delete existing URLs by selecting the delete icon, the trash can.|
- |Refresh content|If you want your knowledge base to crawl the latest content of existing URLs, select the **Refresh** checkbox. This action updates the knowledge base with the latest URL content once; it doesn't set a regular update schedule.|
- |Add file|You can add a supported file document to be part of a knowledge base by selecting **Manage knowledge base**, then selecting **+ Add File**.|
- |Import|You can also import any existing knowledge base by selecting the **Import Knowledge base** button. |
- |Update|Updates to the knowledge base depend on the **management pricing tier** used when you created the QnA Maker service associated with your knowledge base. You can also update the management tier from the Azure portal if necessary.|
-
- <br/>
- 1. When you're done making changes to the knowledge base, select **Save and train** in the top-right corner of the page to persist the changes.
-
- ![Save and Train](../media/qnamaker-how-to-edit-kb/save-and-train.png)
-
- >[!CAUTION]
- >If you leave the page before selecting **Save and train**, all changes will be lost.
-
-## Manage large knowledge bases
-
-* **Data source groups**: The QnAs are grouped by the data source from which they were extracted. You can expand or collapse the data source.
-
- ![Use the QnA Maker data source bar to collapse and expand data source questions and answers](../media/qnamaker-how-to-edit-kb/data-source-grouping.png)
-
-* **Search knowledge base**: You can search the knowledge base by typing in the text box at the top of the Knowledge Base table. Press Enter to search on the question, answer, or metadata content. Select the X icon to remove the search filter.
-
- ![Use the QnA Maker search box above the questions and answers to reduce the view to only filter-matching items](../media/qnamaker-how-to-edit-kb/search-paginate-group.png)
-
-* **Pagination**: Quickly move through data sources to manage large knowledge bases
-
- ![Use the QnA Maker pagination features above the questions and answers to move through pages of questions and answers](../media/qnamaker-how-to-edit-kb/pagination.png)
-
-## Delete knowledge bases
-
-Deleting a knowledge base (KB) is a permanent operation. It can't be undone. Before deleting a knowledge base, you should export the knowledge base from the **Settings** page of the QnA Maker portal.
-
-If you [share your knowledge base with collaborators](collaborate-knowledge-base.md) and then delete it, everyone loses access to the KB.
cognitive-services Manage Qna Maker App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/QnAMaker/How-To/manage-qna-maker-app.md
- Title: Manage QnA Maker App - QnA Maker
-description: QnA Maker allows multiple people to collaborate on a knowledge base. QnA Maker offers a capability to improve the quality of your knowledge base with active learning. You can review, accept or reject suggestions, and add new questions without removing or changing existing ones.
--- Previously updated : 11/09/2020---
-# Manage QnA Maker app
-
-QnA Maker allows you to collaborate with different authors and content editors by offering a capability to restrict collaborator access based on the collaborator's role.
-Learn more about [QnA Maker collaborator authentication concepts](../Concepts/role-based-access-control.md).
--
-## Add Azure role-based access control (Azure RBAC)
-
-QnA Maker allows multiple people to collaborate on all knowledge bases in the same QnA Maker resource. This feature is provided with [Azure role-based access control (Azure RBAC)](../../../role-based-access-control/role-assignments-portal.md).
-
-## Access at the cognitive resource level
-
-You cannot share a particular knowledge base in a QnA Maker service. If you want more granular access control, consider distributing your knowledge bases across different QnA Maker resources, then add roles to each resource.
-
-## Add a role to a resource
-
-### Add a user account to the cognitive resource
-
-You should apply RBAC controls to the QnA Maker resource.
-
-The following steps use the collaborator role, but any of the roles can be added using these steps.
-
-1. Sign in to the [Azure](https://portal.azure.com/) portal, and go to your cognitive resource.
-
- ![QnA Maker resource list](../media/qnamaker-how-to-collaborate-knowledge-base/qnamaker-resource-list.png)
-
-1. Go to the **Access Control (IAM)** tab.
-
- ![QnA Maker IAM](../media/qnamaker-how-to-collaborate-knowledge-base/qnamaker-iam.png)
-
-1. Select **Add**.
-
- ![QnA Maker IAM add](../media/qnamaker-how-to-collaborate-knowledge-base/qnamaker-iam-add.png)
-
-1. Select a role from the following list:
-
- |Role|
- |--|
- |Owner|
- |Contributor|
- |Cognitive Services QnA Maker Reader|
- |Cognitive Services QnA Maker Editor|
- |Cognitive Services User|
-
- :::image type="content" source="../media/qnamaker-how-to-collaborate-knowledge-base/qnamaker-add-role-iam.png" alt-text="QnA Maker IAM add role.":::
-
-1. Enter the user's email address and press **Save**.
-
- ![QnA Maker IAM add email](../media/qnamaker-how-to-collaborate-knowledge-base/qnamaker-iam-add-email.png)
-
-### View QnA Maker knowledge bases
-
-When the person you shared your QnA Maker service with logs into the [QnA Maker portal](https://qnamaker.ai), they can see all the knowledge bases in that service based on their role.
-
-When they select a knowledge base, their current role on that QnA Maker resource is visible next to the knowledge base name.
--
-## Next steps
-
-> [!div class="nextstepaction"]
-> [Create a knowledge base](./manage-knowledge-bases.md)
cognitive-services Metadata Generateanswer Usage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/QnAMaker/How-To/metadata-generateanswer-usage.md
- Title: Metadata with GenerateAnswer API - QnA Maker-
-description: QnA Maker lets you add metadata, in the form of key/value pairs, to your question/answer pairs. You can filter results to user queries, and store additional information that can be used in follow-up conversations.
----- Previously updated : 11/09/2020---
-# Get an answer with the GenerateAnswer API
-
-To get the predicted answer to a user's question, use the GenerateAnswer API. When you publish a knowledge base, you can see information about how to use this API on the **Publish** page. You can also configure the API to filter answers based on metadata tags, and test the knowledge base from the endpoint with the test query string parameter.
--
-<a name="generateanswer-api"></a>
-
-## Get answer predictions with the GenerateAnswer API
-
-You use the [GenerateAnswer API](/rest/api/cognitiveservices/qnamakerruntime/runtime/generateanswer) in your bot or application to query your knowledge base with a user question, to get the best match from the question and answer pairs.
-
-> [!NOTE]
-> This documentation does not apply to the latest release. To learn about using the latest question answering APIs consult the [question answering quickstart guide](../../language-service/question-answering/quickstart/sdk.md).
-
-<a name="generateanswer-endpoint"></a>
-
-## Publish to get GenerateAnswer endpoint
-
-After you publish your knowledge base, either from the [QnA Maker portal](https://www.qnamaker.ai), or by using the [API](/rest/api/cognitiveservices/qnamaker/knowledgebase/publish), you can get the details of your GenerateAnswer endpoint.
-
-To get your endpoint details:
-1. Sign in to [https://www.qnamaker.ai](https://www.qnamaker.ai).
-1. In **My knowledge bases**, select **View Code** for your knowledge base.
- ![Screenshot of My knowledge bases](../media/qnamaker-how-to-metadata-usage/my-knowledge-bases.png)
-1. Get your GenerateAnswer endpoint details.
-
- ![Screenshot of endpoint details](../media/qnamaker-how-to-metadata-usage/view-code.png)
-
-You can also get your endpoint details from the **Settings** tab of your knowledge base.
-
-<a name="generateanswer-request"></a>
-
-## GenerateAnswer request configuration
-
-You call GenerateAnswer with an HTTP POST request. For sample code that shows how to call GenerateAnswer, see the [quickstarts](../quickstarts/quickstart-sdk.md#generate-an-answer-from-the-knowledge-base).
-
-The POST request uses:
-
-* Required [URI parameters](/rest/api/cognitiveservices/qnamakerruntime/runtime/train#uri-parameters)
-* Required header property, `Authorization`, for security
-* Required [body properties](/rest/api/cognitiveservices/qnamakerruntime/runtime/train#feedbackrecorddto).
-
-The GenerateAnswer URL has the following format:
-
-```
-https://{QnA-Maker-endpoint}/knowledgebases/{knowledge-base-ID}/generateAnswer
-```
-
-Remember to set the `Authorization` HTTP header to the string `EndpointKey`, followed by a space and then the endpoint key found on the **Settings** page.
-
-An example JSON body looks like:
-
-```json
-{
- "question": "qna maker and luis",
- "top": 6,
- "isTest": true,
- "scoreThreshold": 30,
- "rankerType": "" // values: QuestionOnly
- "strictFilters": [
- {
- "name": "category",
- "value": "api"
- }],
- "userId": "sd53lsY="
-}
-```
-
-Learn more about [rankerType](../concepts/best-practices.md#choosing-ranker-type).
-
-The previous JSON requested only answers that score at or above the 30% threshold.
-
-<a name="generateanswer-response"></a>
-
-## GenerateAnswer response properties
-
-The [response](/rest/api/cognitiveservices/qnamakerruntime/runtime/generateanswer#successful-query) is a JSON object including all the information you need to display the answer and the next turn in the conversation, if available.
-
-```json
-{
- "answers": [
- {
- "score": 38.54820341616869,
- "Id": 20,
- "answer": "There is no direct integration of LUIS with QnA Maker. But, in your bot code, you can use LUIS and QnA Maker together. [View a sample bot](https://github.com/Microsoft/BotBuilder-CognitiveServices/tree/master/Node/samples/QnAMaker/QnAWithLUIS)",
- "source": "Custom Editorial",
- "questions": [
- "How can I integrate LUIS with QnA Maker?"
- ],
- "metadata": [
- {
- "name": "category",
- "value": "api"
- }
- ]
- }
- ]
-}
-```
-
-The previous JSON response returned an answer with a score of 38.5%.
-
-## Match questions only, by text
-
-By default, QnA Maker searches through questions and answers. If you want to search through questions only, to generate an answer, use the `RankerType=QuestionOnly` in the POST body of the GenerateAnswer request.
-
-You can search through the published kb, using `isTest=false`, or in the test kb using `isTest=true`.
-
-```json
-{
- "question": "Hi",
- "top": 30,
- "isTest": true,
- "RankerType":"QuestionOnly"
-}
-
-```
-## Use QnA Maker with a bot in C#
-
-The bot framework provides access to the QnA Maker's properties with the [getAnswer API](/dotnet/api/microsoft.bot.builder.ai.qna.qnamaker.getanswersasync#Microsoft_Bot_Builder_AI_QnA_QnAMaker_GetAnswersAsync_Microsoft_Bot_Builder_ITurnContext_Microsoft_Bot_Builder_AI_QnA_QnAMakerOptions_System_Collections_Generic_Dictionary_System_String_System_String__System_Collections_Generic_Dictionary_System_String_System_Double__):
-
-```csharp
-using Microsoft.Bot.Builder.AI.QnA;
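-
-// Note: this snippet assumes that topIntent, Constants, _services, and QnAMakerKey are defined elsewhere in your bot code.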
-var metadata = new Microsoft.Bot.Builder.AI.QnA.Metadata();
-var qnaOptions = new QnAMakerOptions();
-
-metadata.Name = Constants.MetadataName.Intent;
-metadata.Value = topIntent;
-qnaOptions.StrictFilters = new Microsoft.Bot.Builder.AI.QnA.Metadata[] { metadata };
-qnaOptions.Top = Constants.DefaultTop;
-qnaOptions.ScoreThreshold = 0.3F;
-var response = await _services.QnAServices[QnAMakerKey].GetAnswersAsync(turnContext, qnaOptions);
-```
-
-The previous code requested only answers that score at or above the 30% threshold.
-
-## Use QnA Maker with a bot in Node.js
-
-The bot framework provides access to the QnA Maker's properties with the [getAnswer API](/javascript/api/botbuilder-ai/qnamaker?preserve-view=true&view=botbuilder-ts-latest#generateanswer-stringundefined--number--number-):
-
-```javascript
-const { QnAMaker } = require('botbuilder-ai');
-this.qnaMaker = new QnAMaker(endpoint);
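-// Note: this snippet assumes that endpoint (your QnA Maker endpoint settings) and stepContext are defined elsewhere in your bot code.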
-
-// Default QnAMakerOptions
-var qnaMakerOptions = {
- ScoreThreshold: 0.30,
- Top: 3
-};
-var qnaResults = await this.qnaMaker.getAnswers(stepContext.context, qnaMakerOptions);
-```
-
-The previous code requested only answers that score at or above the 30% threshold.
-
-## Get precise answers with GenerateAnswer API
-
-The precise answer feature is offered only with the QnA Maker managed version.
-
-## Common HTTP errors
-
-|Code|Explanation|
-|:--|--|
-|2xx|Success|
-|400|Request's parameters are incorrect, meaning the required parameters are missing, malformed, or too large|
-|400|Request's body is incorrect, meaning the JSON is missing, malformed, or too large|
-|401|Invalid key|
-|403|Forbidden - you do not have correct permissions|
-|404|KB doesn't exist|
-|410|This API is deprecated and is no longer available|
-
-## Next steps
-
-The **Publish** page also provides information to [generate an answer](../Quickstarts/get-answer-from-knowledge-base-using-url-tool.md) with Postman or cURL.
-
-> [!div class="nextstepaction"]
-> [Get analytics on your knowledge base](../how-to/get-analytics-knowledge-base.md)
-> [!div class="nextstepaction"]
-> [Confidence score](../Concepts/confidence-score.md)
cognitive-services Multi Turn https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/QnAMaker/How-To/multi-turn.md
- Title: Multi-turn conversations - QnA Maker
-description: Use prompts and context to manage the multiple turns, known as multi-turn, for your bot from one question to another. Multi-turn is the ability to have a back-and-forth conversation where the previous question's context influences the next question and answer.
----- Previously updated : 11/18/2021--
-# Use follow-up prompts to create multiple turns of a conversation
-
-Use follow-up prompts and context to manage the multiple turns, known as _multi-turn_, for your bot from one question to another.
-
-To see how multi-turn works, view the following demonstration video:
-
-[![Multi-turn conversation in QnA Maker](../media/conversational-context/youtube-video.png)](https://aka.ms/multiturnexample)
--
-## What is a multi-turn conversation?
-
-Some questions can't be answered in a single turn. When you design your client application (chat bot) conversations, a user might ask a question that needs to be filtered or refined to determine the correct answer. You make this flow through the questions possible by presenting the user with *follow-up prompts*.
-
-When a user asks a question, QnA Maker returns the answer _and_ any follow-up prompts. This response allows you to present the follow-up questions as choices.
-
-> [!CAUTION]
-> Multi-turn prompts are not extracted from FAQ documents. If you need multi-turn extraction, remove the question marks that designate the QnA pairs as FAQs.
-
-## Example multi-turn conversation with chat bot
-
-With multi-turn, a chat bot manages a conversation with a user to determine the final answer, as shown in the following image:
-
-![A multi-turn dialog with prompts that guide a user through a conversation](../media/conversational-context/conversation-in-bot.png)
-
-In the preceding image, a user has started a conversation by entering **My account**. The knowledge base has three linked question-and-answer pairs. To refine the answer, the user selects one of the three choices in the knowledge base. The question (#1), has three follow-up prompts, which are presented in the chat bot as three options (#2).
-
-When the user selects an option (#3), the next list of refining options (#4) is presented. This sequence continues (#5) until the user determines the correct, final answer (#6).
-
-### Use multi-turn in a bot
-
-After publishing your KB, you can select the **Create Bot** button to deploy your QnA Maker bot to Azure Bot service. The prompts will appear in the chat clients that you have enabled for your bot.
-
-## Create a multi-turn conversation from a document's structure
-
-When you create a knowledge base, the **Populate your KB** section displays an **Enable multi-turn extraction from URLs, .pdf or .docx files** check box.
-
-![Check box for enabling multi-turn extraction](../media/conversational-context/enable-multi-turn.png)
-
-When you select this option, QnA Maker extracts the hierarchy present in the document structure. The hierarchy is converted into follow-up prompts, and the root of the hierarchy serves as the parent QnA. In some documents, the root of the hierarchy doesn't have content that could serve as an answer. You can provide the 'Default Answer Text' to be used as substitute answer text when extracting such hierarchies.
-
-Multi-turn structure can be inferred only from URLs, PDF files, or DOCX files. For an example of structure, view an image of a [Microsoft Surface user manual PDF file](https://github.com/Azure-Samples/cognitive-services-sample-data-files/blob/master/qna-maker/data-source-formats/product-manual.pdf).
--
-### Building your own multi-turn document
-
-If you are creating a multi-turn document, keep in mind the following guidelines:
-
-* Use headings and sub-headings to denote hierarchy. For example, use an H1 to denote the parent QnA and an H2 to denote the QnA that should be taken as a prompt. Use smaller heading sizes to denote subsequent hierarchy. Don't use style, color, or some other mechanism to imply structure in your document; QnA Maker won't extract the multi-turn prompts.
-
-* The first character of a heading must be capitalized.
-
-* Do not end a heading with a question mark, `?`.
-
-* You can use the [sample document](https://github.com/Azure-Samples/cognitive-services-sample-data-files/blob/master/qna-maker/data-source-formats/multi-turn.docx) as an example to create your own multi-turn document.
-
-### Adding files to a multi-turn KB
-
-When you add a hierarchical document, QnA Maker determines follow-up prompts from the structure to create conversational flow.
-
-1. In QnA Maker, select an existing knowledge base, which was created with **Enable multi-turn extraction from URLs, .pdf or .docx files** enabled.
-1. Go to the **Settings** page, select the file or URL to add.
-1. **Save and train** the knowledge base.
-
-> [!Caution]
-> Using an exported TSV or XLS multi-turn knowledge base file as a data source for a new or empty knowledge base isn't supported. You need to **Import** that file type, from the **Settings** page of the QnA Maker portal, to add exported multi-turn prompts to a knowledge base.
-
-## Create knowledge base with multi-turn prompts with the Create API
-
-You can create a knowledge base with multi-turn prompts by using the [QnA Maker Create API](/rest/api/cognitiveservices/qnamaker/knowledgebase/create). The prompts are added in the `context` property's `prompts` array, as sketched in the following example.
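-
-A minimal sketch of a Create request body that includes one follow-up prompt is shown below. The overall shape (`qnaList` items with a `context` object whose `prompts` array carries `displayOrder`, `qnaId`, and `displayText`) is inferred from the GenerateAnswer responses in this article; treat the exact field set as an assumption and check the Create API reference for the full contract.
-
-```JSON
-{
- "name": "Sample KB with prompts",
- "qnaList": [
- {
- "id": 1,
- "answer": "Here is information about accounts and signing in.",
- "source": "Editorial",
- "questions": [ "Accounts and signing in" ],
- "context": {
- "isContextOnly": false,
- "prompts": [
- { "displayOrder": 0, "qnaId": 2, "displayText": "Use the sign-in screen" }
- ]
- }
- },
- {
- "id": 2,
- "answer": "Turn on or wake your Surface, then swipe up on the screen and enter your password.",
- "source": "Editorial",
- "questions": [ "Use the sign-in screen" ],
- "context": { "isContextOnly": true, "prompts": [] }
- }
- ]
-}
-```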
-
-## Show questions and answers with context
-
-Reduce the displayed question-and-answer pairs to only those pairs with contextual conversations.
-
-Select **View options**, and then select **Show context**. The list displays question-and-answer pairs that contain follow-up prompts.
-
-![Filter question-and-answer pairs by contextual conversations](../media/conversational-context/filter-question-and-answers-by-context.png)
-
-The multi-turn context is displayed in the first column.
--
-In the preceding image, **#1** indicates bold text in the column, which signifies the current question. The parent question is the top item in the row. Any questions below it are the linked question-and-answer pairs. These items are selectable, so that you can immediately go to the other context items.
-
-## Add an existing question-and-answer pair as a follow-up prompt
-
-The original question, **My account**, has follow-up prompts, such as **Accounts and signing in**.
-
-![The "Accounts and signing in" answers and follow-up prompts](../media/conversational-context/detected-and-linked-follow-up-prompts.png)
-
-Add a follow-up prompt to an existing question-and-answer pair that isn't currently linked. Because the question isn't linked to any question-and-answer pair, the current view setting needs to be changed.
-
-1. To link an existing question-and-answer pair as a follow-up prompt, select the row for the question-and-answer pair. For the Surface manual, search for **Sign out** to reduce the list.
-1. In the row for **Sign out**, in the **Answer** column, select **Add follow-up prompt**.
-1. In the fields in the **Follow-up prompt** pop-up window, enter the following values:
-
- |Field|Value|
- |--|--|
- |Display text|Enter **Turn off the device**. This is custom text to display in the follow-up prompt.|
- |Context-only| Select this check box. An answer is returned only if the question specifies context.|
- |Link to answer|Enter **Use the sign-in screen** to find the existing question-and-answer pair.|
-
-1. One match is returned. Select this answer as the follow-up, and then select **Save**.
-
- ![The "Follow-up prompt" page](../media/conversational-context/search-follow-up-prompt-for-existing-answer.png)
-
-1. After you've added the follow-up prompt, select **Save and train** in the top navigation.
-
-### Edit the display text
-
-When a follow-up prompt is created, and an existing question-and-answer pair is entered as the **Link to Answer**, you can enter new **Display text**. This text doesn't replace the existing question, and it doesn't add a new alternate question. It is separate from those values.
-
-1. To edit the display text, search for and select the question in the **Context** field.
-1. In the row for that question, select the follow-up prompt in the answer column.
-1. Select the display text you want to edit, and then select **Edit**.
-
- ![The Edit command for the display text](../media/conversational-context/edit-existing-display-text.png)
-
-1. In the **Follow-up prompt** pop-up window, change the existing display text.
-1. When you're done editing the display text, select **Save**.
-1. In the top navigation bar, select **Save and train**.
-
-## Add a new question-and-answer pair as a follow-up prompt
-
-When you add a new question-and-answer pair to the knowledge base, each pair should be linked to an existing question as a follow-up prompt.
-
-1. On the knowledge base toolbar, search for and select the existing question-and-answer pair for **Accounts and signing in**.
-
-1. In the **Answer** column for this question, select **Add follow-up prompt**.
-1. Under **Follow-up prompt (PREVIEW)**, create a new follow-up prompt by entering the following values:
-
- |Field|Value|
- |--|--|
- |Display text|*Create a Windows Account*. The custom text to display in the follow-up prompt.|
- |Context-only|Select this check box. This answer is returned only if the question specifies context.|
- |Link to answer|Enter the following text as the answer:<br>*[Create](https://account.microsoft.com/) a Windows account with a new or existing email account*.<br>When you save and train the database, this text will be converted. |
- |||
-
- ![Create a new prompt question and answer](../media/conversational-context/create-child-prompt-from-parent.png)
-
-1. Select **Create new**, and then select **Save**.
-
- This action creates a new question-and-answer pair and links the selected question as a follow-up prompt. The **Context** column, for both questions, indicates a follow-up prompt relationship.
-
-1. Select **View options**, and then select [**Show context (PREVIEW)**](#show-questions-and-answers-with-context).
-
- The new question shows how it's linked.
-
- ![Create a new follow-up prompt](../media/conversational-context/new-qna-follow-up-prompt.png)
-
- The parent question displays a new question as one of its choices.
-
- :::image type="content" source="../media/conversational-context/child-prompt-created.png" alt-text="Screenshot shows the Context column, for both questions, indicates a follow-up prompt relationship." lightbox="../media/conversational-context/child-prompt-created.png":::
-
-1. After you've added the follow-up prompt, select **Save and train** in the top navigation bar.
-
-<a name="enable-multi-turn-during-testing-of-follow-up-prompts"></a>
-
-## View multi-turn during testing of follow-up prompts
-
-When you test the question with follow-up prompts in the **Test** pane, the response includes the follow-up prompts.
-
-![The response includes the follow-up prompts](../media/conversational-context/test-pane-with-question-having-follow-up-prompts.png)
-
-## A JSON request to return an initial answer and follow-up prompts
-
-Use the empty `context` object to request the answer to the user's question and include follow-up prompts.
-
-```JSON
-{
- "question": "accounts and signing in",
- "top": 10,
- "userId": "Default",
- "isTest": false,
- "context": {}
-}
-```
-
-## A JSON response to return an initial answer and follow-up prompts
-
-The preceding section requested an answer and any follow-up prompts to **Accounts and signing in**. The response includes the prompt information, which is located at `answers[0].context`, and the text to display to the user.
-
-```JSON
-{
- "answers": [
- {
- "questions": [
- "Accounts and signing in"
- ],
- "answer": "**Accounts and signing in**\n\nWhen you set up your Surface, an account is set up for you. You can create additional accounts later for family and friends, so each person using your Surface can set it up just the way he or she likes. For more info, see All about accounts on Surface.com. \n\nThere are several ways to sign in to your Surface Pro 4: ",
- "score": 100.0,
- "id": 15,
- "source": "product-manual.pdf",
- "metadata": [],
- "context": {
- "isContextOnly": true,
- "prompts": [
- {
- "displayOrder": 0,
- "qnaId": 16,
- "qna": null,
- "displayText": "Use the sign-in screen"
- }
- ]
- }
- },
- {
- "questions": [
- "Sign out"
- ],
- "answer": "**Sign out**\n\nHere's how to sign out: \n\n Go to Start, and right-click your name. Then select Sign out. ",
- "score": 38.01,
- "id": 18,
- "source": "product-manual.pdf",
- "metadata": [],
- "context": {
- "isContextOnly": true,
- "prompts": [
- {
- "displayOrder": 0,
- "qnaId": 16,
- "qna": null,
- "displayText": "Turn off the device"
- }
- ]
- }
- },
- {
- "questions": [
- "Use the sign-in screen"
- ],
- "answer": "**Use the sign-in screen**\n\n1. \n\nTurn on or wake your Surface by pressing the power button. \n\n2. \n\nSwipe up on the screen or tap a key on the keyboard. \n\n3. \n\nIf you see your account name and account picture, enter your password and select the right arrow or press Enter on your keyboard. \n\n4. \n\nIf you see a different account name, select your own account from the list at the left. Then enter your password and select the right arrow or press Enter on your keyboard. ",
- "score": 27.53,
- "id": 16,
- "source": "product-manual.pdf",
- "metadata": [],
- "context": {
- "isContextOnly": true,
- "prompts": []
- }
- }
- ]
-}
-```
-
-The `prompts` array provides text in the `displayText` property and the `qnaId` value. You can show these answers as the next displayed choices in the conversation flow and then send the selected `qnaId` back to QnA Maker in the following request.
-
-<!--
-
-The `promptsToDelete` array provides the ...
->-
-## A JSON request to return a non-initial answer and follow-up prompts
-
-Fill the `context` object to include the previous context.
-
-In the following JSON request, the current question is *Use Windows Hello to sign in* and the previous question was *accounts and signing in*.
-
-```JSON
-{
- "question": "Use Windows Hello to sign in",
- "top": 10,
- "userId": "Default",
- "isTest": false,
- "qnaId": 17,
- "context": {
- "previousQnAId": 15,
- "previousUserQuery": "accounts and signing in"
- }
-}
-```
-
-## A JSON response to return a non-initial answer and follow-up prompts
-
-The QnA Maker _GenerateAnswer_ JSON response includes the follow-up prompts in the `context` property of the first item in the `answers` object:
-
-```JSON
-{
- "answers": [
- {
- "questions": [
- "Use Windows Hello to sign in"
- ],
- "answer": "**Use Windows Hello to sign in**\n\nSince Surface Pro 4 has an infrared (IR) camera, you can set up Windows Hello to sign in just by looking at the screen. \n\nIf you have the Surface Pro 4 Type Cover with Fingerprint ID (sold separately), you can set up your Surface sign you in with a touch. \n\nFor more info, see What is Windows Hello? on Windows.com. ",
- "score": 100.0,
- "id": 17,
- "source": "product-manual.pdf",
- "metadata": [],
- "context": {
- "isContextOnly": true,
- "prompts": []
- }
- },
- {
- "questions": [
- "Meet Surface Pro 4"
- ],
- "answer": "**Meet Surface Pro 4**\n\nGet acquainted with the features built in to your Surface Pro 4. \n\nHere's a quick overview of Surface Pro 4 features: \n\n\n\n\n\n\n\nPower button \n\n\n\n\n\nPress the power button to turn your Surface Pro 4 on. You can also use the power button to put it to sleep and wake it when you're ready to start working again. \n\n\n\n\n\n\n\nTouchscreen \n\n\n\n\n\nUse the 12.3" display, with its 3:2 aspect ratio and 2736 x 1824 resolution, to watch HD movies, browse the web, and use your favorite apps. \n\nThe new Surface G5 touch processor provides up to twice the touch accuracy of Surface Pro 3 and lets you use your fingers to select items, zoom in, and move things around. For more info, see Surface touchscreen on Surface.com. \n\n\n\n\n\n\n\nSurface Pen \n\n\n\n\n\nEnjoy a natural writing experience with a pen that feels like an actual pen. Use Surface Pen to launch Cortana in Windows or open OneNote and quickly jot down notes or take screenshots. \n\nSee Using Surface Pen (Surface Pro 4 version) on Surface.com for more info. \n\n\n\n\n\n\n\nKickstand \n\n\n\n\n\nFlip out the kickstand and work or play comfortably at your desk, on the couch, or while giving a hands-free presentation. \n\n\n\n\n\n\n\nWi-Fi and Bluetooth&reg; \n\n\n\n\n\nSurface Pro 4 supports standard Wi-Fi protocols (802.11a/b/g/n/ac) and Bluetooth 4.0. Connect to a wireless network and use Bluetooth devices like mice, printers, and headsets. \n\nFor more info, see Add a Bluetooth device and Connect Surface to a wireless network on Surface.com. \n\n\n\n\n\n\n\nCameras \n\n\n\n\n\nSurface Pro 4 has two cameras for taking photos and recording video: an 8-megapixel rear-facing camera with autofocus and a 5-megapixel, high-resolution, front-facing camera. Both cameras record video in 1080p, with a 16:9 aspect ratio. Privacy lights are located on the right side of both cameras. \n\nSurface Pro 4 also has an infrared (IR) face-detection camera so you can sign in to Windows without typing a password. For more info, see Windows Hello on Surface.com. \n\nFor more camera info, see Take photos and videos with Surface and Using autofocus on Surface 3, Surface Pro 4, and Surface Book on Surface.com. \n\n\n\n\n\n\n\nMicrophones \n\n\n\n\n\nSurface Pro 4 has both a front and a back microphone. Use the front microphone for calls and recordings. Its noise-canceling feature is optimized for use with Skype and Cortana. \n\n\n\n\n\n\n\nStereo speakers \n\n\n\n\n\nStereo front speakers provide an immersive music and movie playback experience. To learn more, see Surface sound, volume, and audio accessories on Surface.com. \n\n\n\n\n",
- "score": 21.92,
- "id": 3,
- "source": "product-manual.pdf",
- "metadata": [],
- "context": {
- "isContextOnly": true,
- "prompts": [
- {
- "displayOrder": 0,
- "qnaId": 4,
- "qna": null,
- "displayText": "Ports and connectors"
- }
- ]
- }
- },
- {
- "questions": [
- "Use the sign-in screen"
- ],
- "answer": "**Use the sign-in screen**\n\n1. \n\nTurn on or wake your Surface by pressing the power button. \n\n2. \n\nSwipe up on the screen or tap a key on the keyboard. \n\n3. \n\nIf you see your account name and account picture, enter your password and select the right arrow or press Enter on your keyboard. \n\n4. \n\nIf you see a different account name, select your own account from the list at the left. Then enter your password and select the right arrow or press Enter on your keyboard. ",
- "score": 19.04,
- "id": 16,
- "source": "product-manual.pdf",
- "metadata": [],
- "context": {
- "isContextOnly": true,
- "prompts": []
- }
- }
- ]
-}
-```
-
-## Query the knowledge base with the QnA Maker ID
-
-If you're building a custom application, the response to the initial question returns any follow-up prompts and their associated `qnaId` values. Now that you have the ID, you can pass it in the follow-up prompt's request body. If the request body contains the `qnaId` and the `context` object (which contains the previous QnA Maker properties), then GenerateAnswer returns the exact question by ID, instead of using the ranking algorithm to find the answer by the question text.
-
-## Display order is supported in the Update API
-
-The [display text and display order](/rest/api/cognitiveservices/qnamaker/knowledgebase/update#promptdto), returned in the JSON response, is supported for editing by the [Update API](/rest/api/cognitiveservices/qnamaker/knowledgebase/update).
-
-## Add or delete multi-turn prompts with the Update API
-
-You can add or delete multi-turn prompts by using the [QnA Maker Update API](/rest/api/cognitiveservices/qnamaker/knowledgebase/update). Prompts are added in the `context` property's `promptsToAdd` array and deleted via the `promptsToDelete` array, as sketched in the following example.
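-
-As a rough sketch, an update payload that adds one prompt and removes another on an existing QnA pair might look like the following. The `promptsToAdd` and `promptsToDelete` names come from this article; the surrounding `update`/`qnaList` envelope is an assumption, so check the Update API reference for the exact contract.
-
-```JSON
-{
- "update": {
- "qnaList": [
- {
- "id": 15,
- "context": {
- "promptsToAdd": [
- { "displayOrder": 1, "qnaId": 18, "displayText": "Sign out" }
- ],
- "promptsToDelete": [ 16 ]
- }
- }
- ]
- }
-}
-```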
-
-## Export knowledge base for version control
-
-QnA Maker supports version control by including multi-turn conversation steps in the exported file.
-
-## Next steps
-
-* Learn more about contextual conversations from this [dialog sample](https://github.com/microsoft/BotBuilder-Samples/tree/main/archive/samples/csharp_dotnetcore/11.qnamaker) or learn more about [conceptual bot design for multi-turn conversations](/azure/bot-service/bot-builder-conversations).
cognitive-services Network Isolation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/QnAMaker/How-To/network-isolation.md
- Title: Network isolation
-description: Users can restrict public access to QnA Maker resources.
--- Previously updated : 11/02/2021---
-# Recommended settings for network isolation
-
-Follow the steps below to restrict public access to QnA Maker resources. Protect a Cognitive Services resource from public access by [configuring the virtual network](../../cognitive-services-virtual-networks.md?tabs=portal).
--
-## Restrict access to App Service (QnA runtime)
-
-You can use the service tag `CognitiveServicesManagement` to restrict inbound access in your App Service or App Service Environment (ASE) network security group inbound rules. For more information about service tags, see the [virtual network service tags article](../../../virtual-network/service-tags-overview.md).
-
-### Regular App Service
-
-1. Open the Cloud Shell (PowerShell) from the Azure portal.
-2. Run the following command in the PowerShell window at the bottom of the page:
-
-```ps
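-# Allow inbound calls that match the CognitiveServicesManagement service tag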
-Add-AzWebAppAccessRestrictionRule -ResourceGroupName "<resource group name>" -WebAppName "<app service name>" -Name "cognitive services Tag" -Priority 100 -Action Allow -ServiceTag "CognitiveServicesManagement"
-```
-3. Verify the added access rule is present in the **Access Restrictions** section of the **Networking** tab:
-
- > [!div class="mx-imgBorder"]
- > [ ![Screenshot of access restriction rule]( ../media/network-isolation/access-restrictions.png) ]( ../media/network-isolation/access-restrictions.png#lightbox)
-
-4. To access the **Test pane** on the https://qnamaker.ai portal, add the **Public IP address of the machine** from where you want to access the portal. From the **Access Restrictions** page select **Add Rule**, and allow access to your client IP.
-
- > [!div class="mx-imgBorder"]
- > [ ![Screenshot of access restriction rule with the addition of public IP address]( ../media/network-isolation/public-address.png) ]( ../media/network-isolation/public-address.png#lightbox)
-
-### Outbound access from App Service
-
-The QnA Maker App Service requires outbound access to the below endpoints. Please make sure they're added to the allow list if there are any restrictions on the outbound traffic.
-- https://qnamakerstore.blob.core.windows.net
-- https://qnamaker-data.trafficmanager.net
-- https://qnamakerconfigprovider.trafficmanager.net
-
-### Configure App Service Environment to host QnA Maker App Service
-
-The App Service Environment (ASE) can be used to host the QnA Maker App Service instance. Follow the steps below:
-
-1. Create a [new Azure Cognitive Search Resource](https://portal.azure.com/#create/Microsoft.Search).
-2. Create an external ASE with App Service.
- - Follow this [App Service quickstart](../../../app-service/environment/create-external-ase.md#create-an-ase-and-an-app-service-plan-together) for instructions. This process can take up to 1-2 hours.
- - Finally, you will have an App Service endpoint that will appear similar to: `https://<app service name>.<ASE name>.p.azurewebsite.net`.
- - Example: `https://mywebsite.myase.p.azurewebsite.net`
-3. Add the following App service configurations:
-
- | Name | Value |
- |:|:-|
- | PrimaryEndpointKey | `<app service name>-PrimaryEndpointKey` |
- | AzureSearchName | `<Azure Cognitive Search Resource Name from step #1>` |
- | AzureSearchAdminKey | `<Azure Cognitive Search Resource admin Key from step #1>`|
- | QNAMAKER_EXTENSION_VERSION | `latest` |
- | DefaultAnswer | `no answer found` |
-
-4. Add CORS origin "*" on the App Service to allow access to https://qnamaker.ai portal Test pane. **CORS** is located under the API header in the App Service pane.
-
- > [!div class="mx-imgBorder"]
- > [ ![Screenshot of CORS interface within App Service UI]( ../media/network-isolation/cross-orgin-resource-sharing.png) ]( ../media/network-isolation/cross-orgin-resource-sharing.png#lightbox)
-
-5. Create a QnA Maker Cognitive Services instance (Microsoft.CognitiveServices/accounts) using Azure Resource Manager. The QnA Maker endpoint should be set to the App Service endpoint created above (`https://mywebsite.myase.p.azurewebsite.net`). Here is a [sample Azure Resource Manager template you can use for reference](https://github.com/pchoudhari/QnAMakerBackupRestore/tree/master/QnAMakerASEArmTemplate).
-
-### Related questions
-
-#### Can QnA Maker be deployed to an internal ASE?
-
-The main reason for using an external ASE is so that the QnA Maker service backend (authoring APIs) can reach the App Service via the Internet. However, you can still protect it by adding an inbound access restriction to allow only connections from addresses associated with the `CognitiveServicesManagement` service tag.
-
-If you still want to use an internal ASE, you need to expose that specific QnA Maker app in the ASE on a public domain via the app gateway DNS TLS/SSL cert. For more information, see this [article on Enterprise deployment of App Services](/azure/architecture/reference-architectures/enterprise-integration/ase-standard-deployment).
-
-## Restrict access to Cognitive Search resource
-
-The Cognitive Search instance can be isolated via a private endpoint after the QnA Maker resources have been created. Use the following steps to lock down access:
-
-1. Create a new [virtual network (VNet)](https://portal.azure.com/#create/Microsoft.VirtualNetwork-ARM) or use existing VNet of ASE (App Service Environment).
-2. Open the VNet resource, then under the **Subnets** tab create two subnets: one for the App Service **(appservicesubnet)** and another **(searchservicesubnet)** for the Cognitive Search resource, without delegation.
-
- > [!div class="mx-imgBorder"]
- > [ ![Screenshot of virtual networks subnets UI interface]( ../media/network-isolation/subnets.png) ]( ../media/network-isolation/subnets.png#lightbox)
-
-3. On the **Networking** tab of the Cognitive Search service instance, switch endpoint connectivity from public to private. This operation is a long-running process and **can take up to 30 minutes** to complete.
-
- > [!div class="mx-imgBorder"]
- > [ ![Screenshot of networking UI with public/private toggle button]( ../media/network-isolation/private.png) ]( ../media/network-isolation/private.png#lightbox)
-
-4. Once the Search resource is switched to private, select add **private endpoint**.
- - **Basics tab**: make sure you are creating your endpoint in the same region as search resource.
- - **Resource tab**: select the required search resource of type `Microsoft.Search/searchServices`.
-
- > [!div class="mx-imgBorder"]
- > [ ![Screenshot of create a private endpoint UI window]( ../media/network-isolation/private-endpoint.png) ]( ../media/network-isolation/private-endpoint.png#lightbox)
-
- - **Configuration tab**: use the VNet, subnet (searchservicesubnet) created in step 2. After that, in section **Private DNS integration** select the corresponding subscription and create a new private DNS zone called **privatelink.search.windows.net**.
-
- > [!div class="mx-imgBorder"]
- > [ ![Screenshot of create private endpoint UI window with subnet field populated]( ../media/network-isolation/subnet.png) ]( ../media/network-isolation/subnet.png#lightbox)
-
-5. Enable VNET integration for the regular App Service. You can skip this step for ASE, as that already has access to the VNET.
- - Go to App Service **Networking** section, and open **VNet Integration**.
- - Link to the dedicated App Service VNet and subnet (appservicesubnet) created in step 2.
-
- > [!div class="mx-imgBorder"]
- > [ ![Screenshot of VNET integration UI]( ../media/network-isolation/integration.png) ]( ../media/network-isolation/integration.png#lightbox)
-
-[Create Private endpoints](../reference-private-endpoint.md) to the Azure Search resource.
-
-
-After restricting access to the Cognitive Services resource based on VNet, do the following to browse knowledge bases on the https://qnamaker.ai portal from your on-premises network or your local browser:
-- Grant access to the [on-premises network](../../cognitive-services-virtual-networks.md?tabs=portal#configuring-access-from-on-premises-networks).
-- Grant access to your [local browser/machine](../../cognitive-services-virtual-networks.md?tabs=portal#managing-ip-network-rules).
-- Add the **public IP address of the machine under the Firewall** section of the **Networking** tab. By default `portal.azure.com` shows the current browsing machine's public IP (select this entry) and then select **Save**.
-
- > [!div class="mx-imgBorder"]
- > [ ![Screenshot of firewall and virtual networks configuration UI]( ../media/network-isolation/firewall.png) ]( ../media/network-isolation/firewall.png#lightbox)
cognitive-services Query Knowledge Base With Metadata https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/QnAMaker/How-To/query-knowledge-base-with-metadata.md
- Title: Contextually filter by using metadata-
-description: QnA Maker filters QnA pairs by metadata.
----- Previously updated : 11/09/2020---
-# Filter Responses with Metadata
-
-QnA Maker lets you add metadata, in the form of key and value pairs, to your pairs of questions and answers. You can then use this information to filter results to user queries, and to store additional information that can be used in follow-up conversations.
--
-<a name="qna-entity"></a>
-
-## Store questions and answers with a QnA entity
-
-It's important to understand how QnA Maker stores the question and answer data. The following illustration shows a QnA entity:
-
-![Illustration of a QnA entity](../media/qnamaker-how-to-metadata-usage/qna-entity.png)
-
-Each QnA entity has a unique and persistent ID. You can use the ID to make updates to a particular QnA entity.
-
-## Use metadata to filter answers by custom metadata tags
-
-Adding metadata allows you to filter the answers by these metadata tags. Add the metadata column from the **View Options** menu. Add metadata to your knowledge base by selecting the metadata **+** icon to add a metadata pair. This pair consists of one key and one value.
-
-![Screenshot of adding metadata](../media/qnamaker-how-to-metadata-usage/add-metadata.png)
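-
-For reference, the same metadata appears in the API request and response bodies shown later in this article as an array of name/value objects, for example:
-
-```json
-"metadata": [
- {
- "name": "restaurant",
- "value": "paradise"
- },
- {
- "name": "location",
- "value": "secunderabad"
- }
-]
-```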
-
-<a name="filter-results-with-strictfilters-for-metadata-tags"></a>
-
-## Filter results with strictFilters for metadata tags
-
-Consider the user question "When does this hotel close?", where the intent is implied for the restaurant "Paradise."
-
-Because results are required only for the restaurant "Paradise", you can set a filter in the GenerateAnswer call on the metadata "Restaurant Name". The following example shows this:
-
-```json
-{
- "question": "When does this hotel close?",
- "top": 1,
- "strictFilters": [ { "name": "restaurant", "value": "paradise"}]
-}
-```
-
-## Filter by source
-
-If you have multiple content sources in your knowledge base and want to limit the results to a particular set of sources, you can do that by using the reserved keyword `source_name_metadata`, as shown below.
-
-```json
-"strictFilters": [
- {
- "name": "category",
- "value": "api"
- },
- {
- "name": "source_name_metadata",
- "value": "boby_brown_docx"
- },
- {
- "name": "source_name_metadata",
- "value": "chitchat.tsv"
- }
-]
-```
---
-### Logical AND by default
-
-To combine several metadata filters in the query, add the additional metadata filters to the array of the `strictFilters` property. By default, the values are logically combined (AND). A logical AND combination requires all filters to match a QnA pair in order for the pair to be returned in the answer.
-
-This is equivalent to using the `strictFiltersCompoundOperationType` property with the value of `AND`.
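-
-For example, the following query (a sketch that reuses the metadata values from the response example later in this article) returns an answer only when a QnA pair carries both metadata pairs:
-
-```json
-{
- "question": "When does the restaurant in this hotel close?",
- "top": 1,
- "strictFilters": [
- { "name": "restaurant", "value": "paradise"},
- { "name": "location", "value": "secunderabad"}
- ],
- "strictFiltersCompoundOperationType": "AND"
-}
-```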
-
-### Logical OR using strictFiltersCompoundOperationType property
-
-When combining several metadata filters, if you are only concerned with one or some of the filters matching, use the `strictFiltersCompoundOperationType` property with the value of `OR`.
-
-This allows your knowledge base to return answers when any filter matches but won't return answers that have no metadata.
-
-```json
-{
- "question": "When do facilities in this hotel close?",
- "top": 1,
- "strictFilters": [
- { "name": "type","value": "restaurant"},
- { "name": "type", "value": "bar"},
- { "name": "type", "value": "poolbar"}
- ],
- "strictFiltersCompoundOperationType": "OR"
-}
-```
-
-### Metadata examples in quickstarts
-
-Learn more about metadata in the QnA Maker portal quickstart for metadata:
-* [Authoring - add metadata to QnA pair](../quickstarts/add-question-metadata-portal.md#add-metadata-to-filter-the-answers)
-* [Query prediction - filter answers by metadata](../quickstarts/get-answer-from-knowledge-base-using-url-tool.md)
-
-<a name="keep-context"></a>
-
-## Use question and answer results to keep conversation context
-
-The response to the GenerateAnswer contains the corresponding metadata information of the matched question and answer pair. You can use this information in your client application to store the context of the previous conversation for use in later conversations.
-
-```json
-{
- "answers": [
- {
- "questions": [
- "What is the closing time?"
- ],
- "answer": "10.30 PM",
- "score": 100,
- "id": 1,
- "source": "Editorial",
- "metadata": [
- {
- "name": "restaurant",
- "value": "paradise"
- },
- {
- "name": "location",
- "value": "secunderabad"
- }
- ]
- }
- ]
-}
-```
-
-## Next steps
-
-> [!div class="nextstepaction"]
-> [Analyze your knowledgebase](../How-to/get-analytics-knowledge-base.md)
cognitive-services Set Up Qnamaker Service Azure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/QnAMaker/How-To/set-up-qnamaker-service-azure.md
- Title: Set up a QnA Maker service - QnA Maker
-description: Before you can create any QnA Maker knowledge bases, you must first set up a QnA Maker service in Azure. Anyone with authorization to create new resources in a subscription can set up a QnA Maker service.
--- Previously updated : 09/14/2021--
-# Manage QnA Maker resources
-
-Before you can create any QnA Maker knowledge bases, you must first set up a QnA Maker service in Azure. Anyone with authorization to create new resources in a subscription can set up a QnA Maker service. If you want to try the Custom question answering feature, you need to create a Language resource and add the Custom question answering feature to it.
--
-A solid understanding of the following concepts is helpful before creating your resource:
-
-* [QnA Maker resources](../Concepts/azure-resources.md)
-* [Authoring and publishing keys](../Concepts/azure-resources.md#keys-in-qna-maker)
-
-## Create a new QnA Maker service
-
-This procedure creates the Azure resources needed to manage the knowledge base content. After you complete these steps, you'll find the _subscription_ keys on the **Keys** page for the resource in the Azure portal.
-
-1. Sign in to the Azure portal and [create a QnA Maker](https://portal.azure.com/#create/Microsoft.CognitiveServicesQnAMaker) resource.
-
-1. Select **Create** after you read the terms and conditions:
-
- ![Create a new QnA Maker service](../media/qnamaker-how-to-setup-service/create-new-resource-button.png)
-
-1. In **QnA Maker**, select the appropriate tiers and regions:
-
- ![Create a new QnA Maker service - pricing tier and regions](../media/qnamaker-how-to-setup-service/enter-qnamaker-info.png)
-
- * In the **Name** field, enter a unique name to identify this QnA Maker service. This name also identifies the QnA Maker endpoint that your knowledge bases will be associated with.
- * Choose the **Subscription** under which the QnA Maker resource will be deployed.
- * Select the **Pricing tier** for the QnA Maker management services (portal and management APIs). See [more details about SKU pricing](https://aka.ms/qnamaker-pricing).
- * Create a new **Resource group** (recommended) or use an existing one in which to deploy this QnA Maker resource. QnA Maker creates several Azure resources. When you create a resource group to hold these resources, you can easily find, manage, and delete these resources by the resource group name.
- * Select a **Resource group location**.
- * Choose the **Search pricing tier** of the Azure Cognitive Search service. If the Free tier option is unavailable (appears dimmed), it means you already have a free service deployed through your subscription. In that case, you'll need to start with the Basic tier. See [Azure Cognitive Search pricing details](https://azure.microsoft.com/pricing/details/search/).
- * Choose the **Search location** where you want Azure Cognitive Search indexes to be deployed. Restrictions on where customer data must be stored will help determine the location you choose for Azure Cognitive Search.
- * In the **App name** field, enter a name for your Azure App Service instance.
- * App Service defaults to the standard (S1) tier. You can change the plan after creation. Learn more about [App Service pricing](https://azure.microsoft.com/pricing/details/app-service/).
- * Choose the **Website location** where App Service will be deployed.
-
- > [!NOTE]
- > The **Search Location** can differ from the **Website Location**.
-
- * Choose whether or not you want to enable **Application Insights**. If **Application Insights** is enabled, QnA Maker collects telemetry on traffic, chat logs, and errors.
- * Choose the **App insights location** where the Application Insights resource will be deployed.
- * For cost savings measures, you can [share](configure-QnA-Maker-resources.md#configure-qna-maker-to-use-different-cognitive-search-resource) some but not all Azure resources created for QnA Maker.
-
-1. After all the fields are validated, select **Create**. The process can take a few minutes to complete.
-
-1. After deployment is completed, you'll see the following resources created in your subscription:
-
- ![Resource created a new QnA Maker service](../media/qnamaker-how-to-setup-service/resources-created.png)
-
- The resource with the _Cognitive Services_ type has your _subscription_ keys.
-
-## Upgrade Azure resources
-
-### Upgrade QnA Maker SKU
-
-When you want to have more questions and answers in your knowledge base, beyond your current tier, upgrade your QnA Maker service pricing tier.
-
-To upgrade the QnA Maker management SKU:
-
-1. Go to your QnA Maker resource in the Azure portal, and select **Pricing tier**.
-
- ![QnA Maker resource](../media/qnamaker-how-to-upgrade-qnamaker/qnamaker-resource.png)
-
-1. Choose the appropriate SKU and press **Select**.
-
- ![QnA Maker pricing](../media/qnamaker-how-to-upgrade-qnamaker/qnamaker-pricing-page.png)
-
-### Upgrade App Service
-
-When your knowledge base needs to serve more requests from your client app, upgrade your App Service pricing tier.
-
-You can [scale up](../../../app-service/manage-scale-up.md) or scale out App Service.
-
-Go to the App Service resource in the Azure portal, and select the **Scale up** or **Scale out** option as required.
-
-![QnA Maker App Service scale](../media/qnamaker-how-to-upgrade-qnamaker/qnamaker-appservice-scale.png)
-
-### Upgrade the Azure Cognitive Search service
-
-If you plan to have many knowledge bases, upgrade your Azure Cognitive Search service pricing tier.
-
-Currently, you can't perform an in-place upgrade of the Azure search SKU. However, you can create a new Azure search resource with the desired SKU, restore the data to the new resource, and then link it to the QnA Maker stack. To do this, follow these steps:
-
-1. Create a new Azure search resource in the Azure portal, and select the desired SKU.
-
- ![QnA Maker Azure search resource](../media/qnamaker-how-to-upgrade-qnamaker/qnamaker-azuresearch-new.png)
-
-1. Restore the indexes from your original Azure search resource to the new one. See the [backup restore sample code](https://github.com/pchoudhari/QnAMakerBackupRestore).
-
-1. After the data is restored, go to your new Azure search resource, select **Keys**, and write down the **Name** and the **Admin key**:
-
- ![QnA Maker Azure search keys](../media/qnamaker-how-to-upgrade-qnamaker/qnamaker-azuresearch-keys.png)
-
-1. To link the new Azure search resource to the QnA Maker stack, go to the QnA Maker App Service instance.
-
- ![QnA Maker App Service instance](../media/qnamaker-how-to-upgrade-qnamaker/qnamaker-resource-list-appservice.png)
-
-1. Select **Application settings** and modify the settings in the **AzureSearchName** and **AzureSearchAdminKey** fields from step 3.
-
- ![QnA Maker App Service setting](../media/qnamaker-how-to-upgrade-qnamaker/qnamaker-appservice-settings.png)
-
-1. Restart the App Service instance.
-
- ![Restart of the QnA Maker App Service instance](../media/qnamaker-how-to-upgrade-qnamaker/qnamaker-appservice-restart.png)
-
-### Inactivity policy for free Search resources
-
-If you are not using a QnA Maker resource, you should remove all of its resources. If you created a free Search resource and don't remove unused resources, your knowledge base will stop working.
-
-Free Search resources are deleted after 90 days without receiving an API call.
-
-## Delete Azure resources
-
-If you delete any of the Azure resources used for your knowledge bases, the knowledge bases will no longer function. Before deleting any resources, make sure you export your knowledge bases from the **Settings** page.
-
-## Next steps
-
-Learn more about the [App service](../../../app-service/index.yml) and [Search service](../../../search/index.yml).
-
-> [!div class="nextstepaction"]
-> [Learn how to author with others](../index.yml)
cognitive-services Test Knowledge Base https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/QnAMaker/How-To/test-knowledge-base.md
- Title: How to test a knowledge base - QnA Maker
-description: Testing your QnA Maker knowledge base is an important part of an iterative process to improve the accuracy of the responses being returned. You can test the knowledge base through an enhanced chat interface that also allows you to make edits.
--- Previously updated : 11/02/2021--
-# Test your knowledge base in QnA Maker
-
-Testing your QnA Maker knowledge base is an important part of an iterative process to improve the accuracy of the responses being returned. You can test the knowledge base through an enhanced chat interface that also allows you to make edits.
--
-## Interactively test in QnA Maker portal
-
-1. Access your knowledge base by selecting its name on the **My knowledge bases** page.
-1. To access the Test slide-out panel, select **Test** in your application's top panel.
-1. Enter a query in the text box and select Enter.
-1. The best-matched answer from the knowledge base is returned as the response.
-
-### Clear test panel
-
-To clear all the entered test queries and their results from the test console, select **Start over** at the upper-left corner of the Test panel.
-
-### Close test panel
-
-To close the Test panel, select the **Test** button again. While the Test panel is open, you cannot edit the Knowledge Base contents.
-
-### Inspect score
-
-You inspect details of the test result in the Inspect panel.
-
-1. With the Test slide-out panel open, select **Inspect** for more details on that response.
-
- ![Inspect responses](../media/qnamaker-how-to-test-knowledge-bases/inspect.png)
-
-2. The Inspection panel appears. It shows the details of the response for the selected query, including the confidence score of the top answer and the other answers that were considered.
-
-### Correct the top scoring answer
-
-If the top scoring answer is incorrect, select the correct answer from the list and select **Save and Train**.
-
-![Correct the top scoring answer](../media/qnamaker-how-to-test-knowledge-bases/choose-answer.png)
-
-### Add alternate questions
-
-You can add alternate forms of a question to a given answer. Type the alternate questions in the text box and select Enter to add them. Select **Save and Train** to store the updates.
-
-![Add alternate questions](../media/qnamaker-how-to-test-knowledge-bases/add-alternate-question.png)
-
-### Add a new answer
-
-You can add a new answer if the matched answers are incorrect or if the answer doesn't exist in the knowledge base (no good match found in the KB).
-
-At the bottom of the answers list, use the text box to enter a new answer and press enter to add it.
-
-Select **Save and Train** to persist this answer. A new question-answer pair has now been added to your knowledge base.
-
-> [!NOTE]
-> All edits to your knowledge base only get saved when you press the **Save and Train** button.
-
-### Test the published knowledge base
-
-You can test the published version of the knowledge base in the test pane. Once you have published the KB, select the **Published KB** box and send a query to get results from the published KB.
-
-![Test against a published KB](../media/qnamaker-how-to-test-knowledge-bases/test-against-published-knowledge-base.png)
-
-## Batch test with tool
-
-Use the batch testing tool when you want to:
-
-* determine the top answer and score for a set of questions
-* validate the expected answer for a set of questions
-
-### Prerequisites
-
-* Azure subscription - [create one for free](https://azure.microsoft.com/free/cognitive-services/)
-* Either [create a QnA Maker service](../Quickstarts/create-publish-knowledge-base.md) or use an existing service that uses the English language.
-* Download the [multi-turn sample `.docx` file](https://github.com/Azure-Samples/cognitive-services-sample-data-files/blob/master/qna-maker/data-source-formats/multi-turn.docx)
-* Download the [batch testing tool](https://aka.ms/qnamakerbatchtestingtool) and extract the executable file from the `.zip` file.
-
-### Sign into QnA Maker portal
-
-[Sign in](https://www.qnamaker.ai/) to the QnA Maker portal.
-
-### Create a new knowledge base from the multi-turn sample.docx file
-
-1. Select **Create a knowledge base** from the tool bar.
-1. Skip **Step 1** because you should already have a QnA Maker resource, then move on to **Step 2** to select your existing resource information:
- * Azure Active Directory ID
- * Azure Subscription Name
- * Azure QnA Service Name
- * Language - the English language
-1. Enter `Multi-turn batch test quickstart` as the name of your knowledge base.
-
-1. In **Step 4**, configure the settings with the following table:
-
- |Setting|Value|
- |--|--|
- |**Enable multi-turn extraction from URLs, .pdf or .docx files.**|Checked|
- |**Default answer text**| `Batch test - default answer not found.`|
- |**+ Add File**|Select the downloaded `.docx` file listed in the prerequisites.|
- |**Chit-chat**|Select **Professional**|
-
-1. In **Step 5**, select **Create your KB**.
-
- When the creation process finishes, the portal displays the editable knowledge base.
-
-### Save, train, and publish knowledge base
-
-1. Select **Save and train** from the toolbar to save the knowledge base.
-1. Select **Publish** from the toolbar then select **Publish** again to publish the knowledge base. Publishing makes the knowledge base available for queries from a public URL endpoint. When publishing is complete, save the host URL and endpoint key information shown on the **Publish** page.
-
- |Required data| Example|
- |--|--|
- |Published Host|`https://YOUR-RESOURCE-NAME.azurewebsites.net`|
- |Published Key|`XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX` (32 character string shown after `Endpoint` )|
- |App ID|`xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx` (36 character string shown as part of `POST`) |
-
-### Create batch test file with question IDs
-
-To use the batch testing tool, create a file named `batch-test-data-1.tsv` in a text editor. The file should be in UTF-8 format and must have the following columns, separated by a tab.
-
-|TSV input file fields|Notes|Example|
-|--|--|--|
-|Knowledge base ID|Your knowledge base ID, found on the Publish page. Test several knowledge bases in the same service at one time by using different knowledge base IDs in a single file.|`xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx` (36 character string shown as part of `POST`) |
-|Question|The question text a user would enter. 1,000 character max.|`How do I sign out?`|
-|Metadata tags|optional|`topic:power` uses the `key:value` format|
-|Top parameter|optional|`25`|
-|Expected answer ID|optional|`13`|
-
-For this knowledge base, add three rows of just the two required columns to the file. The first column is your knowledge base ID and the second column should be the following list of questions:
-
-|Column 2 - questions|
-|--|
-|`Use Windows Hello to sign in`|
-|`Charge your Surface Pro 4`|
-|`Get to know Windows 10`|
-
-These questions are the exact wording from the knowledge base and should return 100 as the confidence score.
-
-Next, on three more rows, add a few questions that are similar to the preceding questions but not worded exactly the same, using the same knowledge base ID:
-
-|Column 2 - questions|
-|--|
-|`What is Windows Hello?`|
-|`How do I charge the laptop?`|
-|`What features are in Windows 10?`|
-
-> [!CAUTION]
-> Make sure that each column is separated by a tab delimiter only. Leading or trailing spaces become part of the column data and cause the program to throw exceptions when the type or size is incorrect.
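-
-If you generate the input file programmatically, for example from chat logs, a minimal Python sketch like the following writes the six rows above to a UTF-8, tab-delimited file. The knowledge base ID is a placeholder that you replace with your own value from the Publish page.
-
-```python
-import csv
-
-# Placeholder: replace with the knowledge base ID from your Publish page.
-KB_ID = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx"
-
-questions = [
-    "Use Windows Hello to sign in",
-    "Charge your Surface Pro 4",
-    "Get to know Windows 10",
-    "What is Windows Hello?",
-    "How do I charge the laptop?",
-    "What features are in Windows 10?",
-]
-
-# Write only the two required columns: knowledge base ID and question.
-with open("batch-test-data-1.tsv", "w", encoding="utf-8", newline="") as batch_file:
-    writer = csv.writer(batch_file, delimiter="\t")
-    for question in questions:
-        writer.writerow([KB_ID, question])
-```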
-
-The batch test file, when opened in Excel, looks like the following image. The knowledge base ID has been replaced with `xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx` for security. For your own batch test, make sure the column displays your knowledge base ID.
-
-> [!div class="mx-imgBorder"]
-> ![Input first version of .tsv file from batch test](../media/batch-test/batch-test-1-input.png)
-
-### Test the batch file
-
-Run the batch testing program using the following CLI format at the command line.
-
-Replace `YOUR-RESOURCE-NAME` and `ENDPOINT-KEY` with your own values for service name and endpoint key. These values are found on the **Settings** page in the QnA Maker portal.
-
-```console
-batchtesting.exe batch-test-data-1.tsv https://YOUR-RESOURCE-NAME.azurewebsites.net ENDPOINT-KEY out.tsv
-```
-The test completes and generates the `out.tsv` file:
-
-> [!div class="mx-imgBorder"]
-> ![Output first version of .tsv file from batch test](../media/batch-test/batch-test-1-output.png)
-
-The knowledge base ID has been replaced with `xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx` for security. For your own batch test, the column displays your knowledge base ID.
-
-The confidence scores in the fourth column of the test output show that the top three questions returned a score of 100, as expected, because each question is exactly the same as it appears in the knowledge base. The last three questions, with new wording, don't return 100 as the confidence score. To increase the score, both for the test and for your users, you need to add more alternate questions to the knowledge base.
-
-### Testing with the optional fields
-
-Once you understand the format and process, you can generate a test file to run against your knowledge base from a data source such as chat logs.
-
-Because the data source and process are automated, the test file can be run many times with different settings to determine the correct values.
-
-For example, if you have a chat log and you want to determine which chat log text applies to which metadata fields, create a test file and set the metadata fields for every row. Run the test, then review the rows that match the metadata. Generally, the matches should be positive, but you should review the results for false positives. A false positive is a row that matches the metadata but, based on the text, shouldn't match.
-
-### Using optional fields in the input batch test file
-
-Use the following chart to understand how to find the field values for optional data.
-
-|Column number|Optional column|Data location|
-|--|--|--|
-|3|metadata|Export existing knowledge base for existing `key:value` pairs.|
-|4|top|Default value of `25` is recommended.|
-|5|Question and answer set ID|Export existing knowledge base for ID values. Also notice the IDs were returned in the output file.|
-
-### Add metadata to the knowledge base
-
-1. In the QnA portal, on the **Edit** page, add metadata of `topic:power` to the following questions:
-
- |Questions|
- |--|
- |Charge your Surface Pro 4|
- |Check the battery level|
-
- Two QnA pairs have the metadata set.
-
- > [!TIP]
- > To see the metadata and QnA IDs of each set, export the knowledge base. Select the **Settings** page, then select **Export** as a `.xls` file. Find the downloaded file and open it in Excel to review the metadata and IDs.
-
-1. Select **Save and train**, then select the **Publish** page, then select the **Publish** button. These actions make the change available to the batch test. Download the knowledge base from the **Settings** page.
-
- The downloaded file has the correct format for the metadata and the correct question and answer set ID. Use these fields in the next section.
-
- > [!div class="mx-imgBorder"]
- > ![Exported knowledge base with metadata](../media/batch-test/exported-knowledge-base-with-metadata.png)
-
-### Create a second batch test
-
-There are two main scenarios for batch testing:
-* **Process chat log files** - Determine the top answer for a previously unseen question. The most common situation is when you need to process a log file of queries, such as a chat bot's user questions. Create a batch test file with only the required columns. The test returns the top answer for each question, but that doesn't mean the top answer is the correct answer. Once you complete this test, move on to the validation test.
-* **Validation test** - Validate the expected answer. This test requires that all the questions and matching expected answers in the batch test have been validated. This may require a manual process.
-
-The following procedure assumes the scenario is to process chat logs, this time including the optional fields.
-
-1. Create a new batch test file to include optional data, `batch-test-data-2.tsv`. Add the six rows from the original batch test input file, then add the metadata, top, and QnA pair ID for each row.
-
- To simulate the automated process of checking new text from chat logs against the knowledge base, set the metadata for each row to the same value: `topic:power`.
-
- > [!div class="mx-imgBorder"]
- > ![Input second version of .tsv file from batch test](../media/batch-test/batch-test-2-input.png)
-
-1. Run the test again, changing the input and output file names to indicate it is the second test.
-
- > [!div class="mx-imgBorder"]
- > ![Output second version of .tsv file from batch test](../media/batch-test/batch-test-2-output.png)
-
-### Test results and an automated test system
-
-This test output file can be parsed as part of an automated continuous test pipeline.
-
-This specific test output should be read as follows: each row was filtered with metadata. For the rows that didn't match the metadata in the knowledge base, the default answer ("no good match found in kb") was returned. For the rows that did match, the QnA ID and score were returned.
-
-All rows were labeled incorrect because no row matched the expected answer ID.
-
-These results show that you can take a chat log and use its text as the query for each row. Without knowing anything about the data, the results tell you a lot about the data that you can then use moving forward:
-
-* metadata
-* QnA ID
-* score
-
-Was filtering with metadata a good idea for the test? Yes and no. The test system should create test files for each metadata pair as well as a test with no metadata pairs.
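-
-If you parse the output file in an automated pipeline, a minimal sketch like the following tallies the results. It assumes the correct/incorrect label is the last tab-separated column of `out.tsv`, as in the output described above; adjust the column index if your version of the tool orders the columns differently.
-
-```python
-from collections import Counter
-
-label_counts = Counter()
-
-with open("out.tsv", encoding="utf-8") as results_file:
-    for line in results_file:
-        columns = line.rstrip("\n").split("\t")
-        if not columns or not columns[0]:
-            continue  # skip empty lines
-        # Assumption: the correct/incorrect label is the last column.
-        label_counts[columns[-1].strip().lower()] += 1
-
-total_rows = sum(label_counts.values())
-print(f"{label_counts.get('correct', 0)} of {total_rows} rows matched the expected answer ID")
-```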
-
-### Clean up resources
-
-If you aren't going to continue testing the knowledge base, delete the batch file tool and the test files.
-
-If you're not going to continue to use this knowledge base, delete the knowledge base with the following steps:
-
-1. In the QnA Maker portal, select **My Knowledge bases** from the top menu.
-1. In the list of knowledge bases, select the **Delete** icon on the row of the knowledge base of this quickstart.
-
-[Reference documentation about the tool](../reference-tsv-format-batch-testing.md) includes:
-
-* the command-line example of the tool
-* the format for TSV input and output files
-
-## Next steps
-
-> [!div class="nextstepaction"]
-> [Publish a knowledge base](../quickstarts/create-publish-knowledge-base.md)
cognitive-services Use Active Learning https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/QnAMaker/How-To/use-active-learning.md
- Title: Use active learning with knowledge base - QnA Maker
-description: Learn to improve the quality of your knowledge base with active learning. Review, accept or reject, add without removing or changing existing questions.
--- Previously updated : 11/02/2021---
-# Active learning
-
-The _Active learning suggestions_ feature allows you to improve the quality of your knowledge base by suggesting alternative questions, based on user submissions, for your question and answer pairs. You review those suggestions, either adding them to existing questions or rejecting them.
-
-Your knowledge base doesn't change automatically. In order for any change to take effect, you must accept the suggestions. These suggestions add questions but don't change or remove existing questions.
--
-## What is active learning?
-
-QnA Maker learns new question variations with implicit and explicit feedback.
-
-* [Implicit feedback](#how-qna-makers-implicit-feedback-works) – The ranker understands when a user question has multiple answers with scores that are very close and considers this as feedback. You don't need to do anything for this to happen.
-* [Explicit feedback](#how-you-give-explicit-feedback-with-the-train-api) – When multiple answers with little variation in scores are returned from the knowledge base, the client application asks the user which question is the correct question. The user's explicit feedback is sent to QnA Maker with the [Train API](../How-To/improve-knowledge-base.md#train-api).
-
-Both methods provide the ranker with similar queries that are clustered.
-
-## How active learning works
-
-Active learning is triggered based on the scores of the top few answers returned by QnA Maker. If the score differences between the QnA pairs that match the query lie within a small range, the query is considered a possible suggestion (as an alternate question) for each of the possible QnA pairs. Once you accept the suggested question for a specific QnA pair, it's rejected for the other pairs. Remember to save and train after accepting suggestions.
-
-Active learning gives the best possible suggestions in cases where the endpoints are getting a reasonable quantity and variety of usage queries. Every 30 minutes, when five or more similar queries are clustered, QnA Maker suggests the user-based questions to the knowledge base designer to accept or reject. All the suggestions are clustered together by similarity, and top suggestions for alternate questions are displayed based on how frequently end users submit the particular queries.
-
-Once questions are suggested in the QnA Maker portal, you need to review and accept or reject those suggestions. There isn't an API to manage suggestions.
-
-## How QnA Maker's implicit feedback works
-
-QnA Maker's implicit feedback uses an algorithm to determine score proximity and then makes active learning suggestions. The algorithm to determine proximity is not a simple calculation. The ranges in the following example are not meant to be fixed; use them only as a guide to understand the effect of the algorithm.
-
-When a question's score is highly confident, such as 80%, the range of scores that is considered for active learning is wide, approximately within 10%. As the confidence score decreases, such as to 40%, the range of scores decreases as well, to approximately within 4%.
-
-In the following JSON response from a query to QnA Maker's generateAnswer, the scores for the first three answers are close to each other and would be considered as suggestions.
-
-```json
-{
- "activeLearningEnabled": true,
- "answers": [
- {
- "questions": [
- "Q1"
- ],
- "answer": "A1",
- "score": 80,
- "id": 15,
- "source": "Editorial",
- "metadata": [
- {
- "name": "topic",
- "value": "value"
- }
- ]
- },
- {
- "questions": [
- "Q2"
- ],
- "answer": "A2",
- "score": 78,
- "id": 16,
- "source": "Editorial",
- "metadata": [
- {
- "name": "topic",
- "value": "value"
- }
- ]
- },
- {
- "questions": [
- "Q3"
- ],
- "answer": "A3",
- "score": 75,
- "id": 17,
- "source": "Editorial",
- "metadata": [
- {
- "name": "topic",
- "value": "value"
- }
- ]
- },
- {
- "questions": [
- "Q4"
- ],
- "answer": "A4",
- "score": 50,
- "id": 18,
- "source": "Editorial",
- "metadata": [
- {
- "name": "topic",
- "value": "value"
- }
- ]
- }
- ]
-}
-```
-
-QnA Maker won't know which answer is the best answer. Use the QnA Maker portal's list of suggestions to select the best answer and train again.
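-
-Given a response like the one above, a client application can also check whether several answers landed close together before deciding to ask the user to pick one, as described in the next section. The following is a minimal client-side sketch; the 5-point band is only an illustrative threshold, not QnA Maker's internal proximity algorithm, which varies with the confidence score as described earlier.
-
-```python
-def answers_needing_clarification(answers, score_band=5.0):
-    """Return the answers whose scores are within score_band of the top score.
-
-    `answers` is the "answers" list from a generateAnswer JSON response,
-    such as the sample above. If more than one answer is returned here,
-    the client can ask the user to choose the best one.
-    """
-    ranked = sorted(answers, key=lambda answer: answer["score"], reverse=True)
-    if not ranked:
-        return []
-    top_score = ranked[0]["score"]
-    return [answer for answer in ranked if top_score - answer["score"] <= score_band]
-
-# With the sample scores above (80, 78, 75, 50), the first three answers fall
-# within the band, so the client would prompt the user to pick the best one.
-```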
--
-## How you give explicit feedback with the Train API
-
-QnA Maker needs explicit feedback about which of the answers was the best answer. How the best answer is determined is up to you and can include:
-
-* User feedback, selecting one of the answers.
-* Business logic, such as determining an acceptable score range.
-* A combination of both user feedback and business logic.
-
-Use the [Train API](/rest/api/cognitiveservices/qnamaker4.0/runtime/train) to send the correct answer to QnA Maker, after the user selects it.
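-
-As a rough sketch of that call, the following sends one feedback record to the Train API with the Python `requests` library. The host, endpoint key, and knowledge base ID are placeholders for the values on your knowledge base's Publish page, and the sketch assumes a feedback record with `userId`, `userQuestion`, and `qnaId` fields as described in the Train API reference linked above.
-
-```python
-import requests
-
-# Placeholders: use the values from your knowledge base's Publish page.
-RUNTIME_HOST = "https://YOUR-RESOURCE-NAME.azurewebsites.net"
-ENDPOINT_KEY = "YOUR-ENDPOINT-KEY"
-KB_ID = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx"
-
-def send_feedback(user_id: str, user_question: str, qna_id: int) -> None:
-    """Tell QnA Maker which QnA pair best answered the user's question."""
-    response = requests.post(
-        f"{RUNTIME_HOST}/qnamaker/knowledgebases/{KB_ID}/train",
-        headers={"Authorization": f"EndpointKey {ENDPOINT_KEY}"},
-        json={
-            "feedbackRecords": [
-                {"userId": user_id, "userQuestion": user_question, "qnaId": qna_id}
-            ]
-        },
-    )
-    response.raise_for_status()
-
-# Example: the user picked the answer with QnA ID 15 for their question.
-send_feedback("user-1", "How do I sign in with my face?", 15)
-```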
-
-## Upgrade runtime version to use active learning
-
-Active Learning is supported in runtime version 4.4.0 and above. If your knowledge base was created on an earlier version, [upgrade your runtime](configure-QnA-Maker-resources.md#get-the-latest-runtime-updates) to use this feature.
-
-## Turn on active learning for alternate questions
-
-Active learning is off by default. Turn it on to see suggested questions. After you turn on active learning, you need to send information from the client app to QnA Maker. For more information, see [Architectural flow for using GenerateAnswer and Train APIs from a bot](improve-knowledge-base.md#architectural-flow-for-using-generateanswer-and-train-apis-from-a-bot).
-
-1. Select **Publish** to publish the knowledge base. Active learning queries are collected from the GenerateAnswer API prediction endpoint only. The queries to the Test pane in the QnA Maker portal do not affect active learning.
-
-1. To turn on active learning in the QnA Maker portal, go to the top-right corner, select your **Name**, and then go to [**Service Settings**](https://www.qnamaker.ai/UserSettings).
-
- ![Turn on active learning's suggested question alternatives from the Service settings page. Select your user name in the top-right menu, then select Service Settings.](../media/improve-knowledge-base/Endpoint-Keys.png)
--
-1. Find the QnA Maker service then toggle **Active Learning**.
-
- > [!div class="mx-imgBorder"]
- > [![On the Service settings page, toggle on Active Learning feature. If you are not able to toggle the feature, you may need to upgrade your service.](../media/improve-knowledge-base/turn-active-learning-on-at-service-setting.png)](../media/improve-knowledge-base/turn-active-learning-on-at-service-setting.png#lightbox)
-
- > [!Note]
- > The exact version on the preceding image is shown as an example only. Your version may be different.
-
- Once **Active Learning** is enabled, the knowledge base suggests new questions at regular intervals based on user-submitted questions. You can disable **Active Learning** by toggling the setting again.
-
-## Review suggested alternate questions
-
-[Review alternate suggested questions](improve-knowledge-base.md) on the **Edit** page of each knowledge base.
-
-## Next steps
-
-> [!div class="nextstepaction"]
-> [Create a knowledge base](./manage-knowledge-bases.md)
cognitive-services Using Prebuilt Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/QnAMaker/How-To/using-prebuilt-api.md
- Title: QnA Maker Prebuilt API-
-description: Use the Prebuilt API to get answers over text
----- Previously updated : 05/05/2021--
-# Prebuilt question answering
-
-Prebuilt question answering provides users the capability to answer questions over a passage of text without having to create knowledge bases, maintain question and answer pairs, or incur costs for underutilized infrastructure. This functionality is provided as an API and can be used to meet question answering needs without having to learn the details of QnA Maker or use additional storage.
--
-> [!NOTE]
-> This documentation does not apply to the latest release. To learn about using the Prebuilt API with the latest release consult the [question answering prebuilt API article](../../language-service/question-answering/how-to/prebuilt.md).
-
-Given a user query and a block of text or passage, the API returns an answer and a precise answer (if available).
-
-<a name="qna-entity"></a>
--
-## Example usage of Prebuilt question answering
-
-Imagine that you have one or more blocks of text from which you would like to get answers for a given question. Conventionally, you would have had to create as many sources as there are blocks of text. However, with Prebuilt question answering you can query the blocks of text without having to define content sources in a knowledge base.
-
-Some other scenarios where the Prebuilt API can be used are:
-
-* You are developing an ebook reader app for end users that allows them to highlight text, enter a question, and find answers over the highlighted text
-* A browser extension that allows users to ask a question over the content being currently displayed on the browser page
-* A health bot that takes queries from users and provides answers based on the medical content that the bot identifies as most relevant to the user query
-
-Below is a sample request:
-
-## Sample Request
-```
-POST https://{Endpoint}/qnamaker/v5.0-preview.2/generateanswer
-```
-
-### Sample Query over a single block of text
-
-Request Body
-
-```json
-{
- "question": "How long it takes to charge surface pro 4?",
- "documents": [
- {
- "text": "### The basics #### Power and charging It takes two to four hours to charge the Surface Pro 4 battery fully from an empty state. It can take longer if youΓÇÖre using your Surface for power-intensive activities like gaming or video streaming while youΓÇÖre charging it. You can use the USB port on your Surface Pro 4 power supply to charge other devices, like a phone, while your Surface charges.",
- "id": "doc1"
- }
- ],
- "Language": "en"
-}
-```
-## Sample Response
-
-In the above request body, we query over a single block of text. A sample response received for the above query is shown below:
-
-```json
-{
- "answers": [
- {
- "answer": "### The basics #### Power and charging It takes two to four hours to charge the Surface Pro 4 battery fully from an empty state. It can take longer if youΓÇÖre using your Surface for power-intensive activities like gaming or video streaming while youΓÇÖre charging it. You can use the USB port on your Surface Pro 4 power supply to charge other devices, like a phone, while your Surface charges.",
- "answerSpan": {
- "text": "two to four hours",
- "score": 0.0,
- "startIndex": 47,
- "endIndex": 64
- },
- "score": 0.9599020481109619,
- "id": "doc1",
- "answerStartIndex": 0,
- "answerEndIndex": 390
- },
- {
- "answer": "It can take longer if youΓÇÖre using your Surface for power-intensive activities like gaming or video streaming while youΓÇÖre charging it. You can use the USB port on your Surface Pro 4 power supply to charge other devices, like a phone, while your Surface charges.",
- "score": 0.06749606877565384,
- "id": "doc1",
- "answerStartIndex": 129,
- "answerEndIndex": 390
- },
- {
- "answer": "You can use the USB port on your Surface Pro 4 power supply to charge other devices, like a phone, while your Surface charges.",
- "score": 0.011389964260160923,
- "id": "doc1",
- "answerStartIndex": 265,
- "answerEndIndex": 390
- }
- ]
-}
-```
-We see that multiple answers are received as part of the API response. Each answer has a confidence score that helps you understand the overall relevance of the answer. You can use this confidence score to decide which answers to show in response to the query.
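-
-For reference, a minimal Python `requests` sketch of the call above might look like the following. `ENDPOINT` stands in for the `{Endpoint}` placeholder in the sample request, and the `Ocp-Apim-Subscription-Key` header is assumed to carry your resource key; check the API reference linked below for the exact authentication details.
-
-```python
-import requests
-
-# Placeholders: your resource endpoint (the {Endpoint} value above) and key.
-ENDPOINT = "https://YOUR-RESOURCE-NAME.cognitiveservices.azure.com"
-RESOURCE_KEY = "YOUR-RESOURCE-KEY"
-
-body = {
-    "question": "How long it takes to charge surface pro 4?",
-    "documents": [
-        {
-            "text": "It takes two to four hours to charge the Surface Pro 4 "
-                    "battery fully from an empty state.",
-            "id": "doc1",
-        }
-    ],
-    "Language": "en",
-}
-
-response = requests.post(
-    f"{ENDPOINT}/qnamaker/v5.0-preview.2/generateanswer",
-    headers={"Ocp-Apim-Subscription-Key": RESOURCE_KEY},
-    json=body,
-)
-response.raise_for_status()
-
-# Print each candidate answer with its confidence score and precise answer span.
-for answer in response.json()["answers"]:
-    precise_answer = answer.get("answerSpan", {}).get("text")
-    print(round(answer["score"], 3), precise_answer, "|", answer["answer"][:80])
-```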
-
-## Prebuilt API Limits
-
-Visit the [Prebuilt API Limits](../limits.md#prebuilt-question-answering-limits) documentation.
-
-## Prebuilt API reference
-Visit the [Prebuilt API reference](/rest/api/cognitiveservices-qnamaker/qnamaker5.0preview2/prebuilt/generateanswer) documentation to understand the input and output parameters required for calling the API.
cognitive-services Language Support https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/QnAMaker/Overview/language-support.md
- Title: Language support - QnA Maker-
-description: A list of the cultures and natural languages supported by QnA Maker for your knowledge base. Do not mix languages in the same knowledge base.
----- Previously updated : 11/09/2019--
-# Language support for a QnA Maker resource and knowledge bases
-
-This article describes the language support options for QnA Maker resources and knowledge bases.
--
-The language for the service is selected when you create the first knowledge base in the resource. All additional knowledge bases in the resource must be in the same language.
-
-The language determines the relevance of the results QnA Maker provides in response to user queries. The QnA Maker resource, and all the knowledge bases inside that resource, support a single language. The single language is necessary to provide the best answer results for a query.
-
-## Single language per resource
-
-Consider the following:
-
-* A QnA Maker service, and all its knowledge bases, support one language only.
-* The language is explicitly set when the first knowledge base of the service is created.
-* The language is determined from the files and URLs added when the knowledge base is created.
-* The language can't be changed for any other knowledge bases in the service.
-* The language is used by the Cognitive Search service (ranker #1) and the QnA Maker service (ranker #2) to generate the best answer to a query.
-
-## Supporting multiple languages in one QnA Maker resource
-
-This functionality is not supported in our current Generally Available (GA) stable release. Check out [question answering](../../language-service/question-answering/overview.md) to test out this functionality.
-
-## Supporting multiple languages in one knowledge base
-
-If you need to support a knowledge base system that includes several languages, you can:
-
-* Use the [Translator service](../../translator/translator-info-overview.md) to translate a question into a single language before sending the question to your knowledge base, as shown in the sketch after this list. This allows you to focus on the quality of a single language and the quality of the alternate questions and answers.
-* Create a QnA Maker resource, and a knowledge base inside that resource, for every language. This allows you to manage separate alternate questions and answer text that is more nuanced for each language. This gives you much more flexibility but requires a much higher maintenance cost when the questions or answers change across all languages.
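-
-As a minimal sketch of the first option, the following translates a user question to English with the Translator REST API before it's sent to the knowledge base. The Translator key and region are placeholders, and `query_knowledge_base` stands in for however your client already calls the knowledge base's GenerateAnswer endpoint.
-
-```python
-import requests
-
-# Placeholders: your Translator resource key and region.
-TRANSLATOR_KEY = "YOUR-TRANSLATOR-KEY"
-TRANSLATOR_REGION = "YOUR-TRANSLATOR-REGION"
-
-def translate_to_english(text: str) -> str:
-    """Translate a user question to English with the Translator REST API."""
-    response = requests.post(
-        "https://api.cognitive.microsofttranslator.com/translate",
-        params={"api-version": "3.0", "to": "en"},
-        headers={
-            "Ocp-Apim-Subscription-Key": TRANSLATOR_KEY,
-            "Ocp-Apim-Subscription-Region": TRANSLATOR_REGION,
-        },
-        json=[{"Text": text}],
-    )
-    response.raise_for_status()
-    return response.json()[0]["translations"][0]["text"]
-
-english_question = translate_to_english("¿Cómo inicio sesión con Windows Hello?")
-# Send english_question to your knowledge base's GenerateAnswer endpoint as usual,
-# for example: answer = query_knowledge_base(english_question)
-```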
--
-## Languages supported
-
-The following list contains the languages supported for a QnA Maker resource.
-
-| Language |
-|--|
-| Arabic |
-| Armenian |
-| Bangla |
-| Basque |
-| Bulgarian |
-| Catalan |
-| Chinese_Simplified |
-| Chinese_Traditional |
-| Croatian |
-| Czech |
-| Danish |
-| Dutch |
-| English |
-| Estonian |
-| Finnish |
-| French |
-| Galician |
-| German |
-| Greek |
-| Gujarati |
-| Hebrew |
-| Hindi |
-| Hungarian |
-| Icelandic |
-| Indonesian |
-| Irish |
-| Italian |
-| Japanese |
-| Kannada |
-| Korean |
-| Latvian |
-| Lithuanian |
-| Malayalam |
-| Malay |
-| Norwegian |
-| Polish |
-| Portuguese |
-| Punjabi |
-| Romanian |
-| Russian |
-| Serbian_Cyrillic |
-| Serbian_Latin |
-| Slovak |
-| Slovenian |
-| Spanish |
-| Swedish |
-| Tamil |
-| Telugu |
-| Thai |
-| Turkish |
-| Ukrainian |
-| Urdu |
-| Vietnamese |
-
-## Query matching and relevance
-QnA Maker depends on [Azure Cognitive Search language analyzers](/rest/api/searchservice/language-support) for providing results.
-
-While the Azure Cognitive Search capabilities are on par for supported languages, QnA Maker has an additional ranker that sits above the Azure search results. In this ranker model, we use some special semantic and word-based features in the following languages.
-
-|Languages with additional ranker|
-|--|
-|Chinese|
-|Czech|
-|Dutch|
-|English|
-|French|
-|German|
-|Hungarian|
-|Italian|
-|Japanese|
-|Korean|
-|Polish|
-|Portuguese|
-|Spanish|
-|Swedish|
-
-This additional ranking is an internal working of QnA Maker's ranker.
-
-## Next steps
-
-> [!div class="nextstepaction"]
-> [Language selection](../index.yml)
cognitive-services Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/QnAMaker/Overview/overview.md
- Title: What is QnA Maker service?
-description: QnA Maker is a cloud-based NLP service that easily creates a natural conversational layer over your data. It can be used to find the most appropriate answer for any given natural language input, from your custom knowledge base (KB) of information.
--- Previously updated : 11/19/2021
-keywords: "qna maker, low code chat bots, multi-turn conversations"
---
-# What is QnA Maker?
--
-QnA Maker is a cloud-based Natural Language Processing (NLP) service that allows you to create a natural conversational layer over your data. It is used to find the most appropriate answer for any input from your custom knowledge base (KB) of information.
-
-QnA Maker is commonly used to build conversational client applications, which include social media applications, chat bots, and speech-enabled desktop applications.
-
-QnA Maker doesn't store customer data. All customer data (question answers and chat logs) is stored in the region the customer deploys the dependent service instances in. For more details on dependent services see [here](../concepts/plan.md?tabs=v1).
-
-This documentation contains the following article types:
-
-* The [quickstarts](../quickstarts/create-publish-knowledge-base.md) are step-by-step instructions that let you make calls to the service and get results in a short period of time.
-* The [how-to guides](../how-to/set-up-qnamaker-service-azure.md) contain instructions for using the service in more specific or customized ways.
-* The [conceptual articles](../concepts/plan.md) provide in-depth explanations of the service's functionality and features.
-* [**Tutorials**](../tutorials/create-faq-bot-with-azure-bot-service.md) are longer guides that show you how to use the service as a component in broader business solutions.
-
-## When to use QnA Maker
-
-* **When you have static information** - Use QnA Maker when you have static information in your knowledge base of answers. This knowledge base is custom to your needs, which you've built with documents such as [PDFs and URLs](../Concepts/data-sources-and-content.md).
-* **When you want to provide the same answer to a request, question, or command** - when different users submit the same question, the same answer is returned.
-* **When you want to filter static information based on meta-information** - add [metadata](../how-to/metadata-generateanswer-usage.md) tags to provide additional filtering options relevant to your client application's users and the information. Common metadata information includes [chit-chat](../how-to/chit-chat-knowledge-base.md), content type or format, content purpose, and content freshness.
-* **When you want to manage a bot conversation that includes static information** - your knowledge base takes a user's conversational text or command and answers it. If the answer is part of a pre-determined conversation flow, represented in your knowledge base with [multi-turn context](../how-to/multi-turn.md), the bot can easily provide this flow.
-
-## What is a knowledge base?
-
-QnA Maker [imports your content](../Concepts/plan.md) into a knowledge base of question and answer pairs. The import process extracts information about the relationship between the parts of your structured and semi-structured content to imply relationships between the question and answer pairs. You can edit these question and answer pairs or add new pairs.
-
-The content of the question and answer pair includes:
-* All the alternate forms of the question
-* Metadata tags used to filter answer choices during the search
-* Follow-up prompts to continue the search refinement
-
-![Example question and answer with metadata](../media/qnamaker-overview-learnabout/example-question-and-answer-with-metadata.png)
-
-After you publish your knowledge base, a client application sends a user's question to your endpoint. Your QnA Maker service processes the question and responds with the best answer.
-
-## Create a chat bot programmatically
-
-Once a QnA Maker knowledge base is published, a client application sends a question to your knowledge base endpoint and receives the results as a JSON response. A common client application for QnA Maker is a chat bot.
-
-![Ask a bot a question and get answer from knowledge base content](../media/qnamaker-overview-learnabout/bot-chat-with-qnamaker.png)
-
-|Step|Action|
-|:--|:--|
-|1|The client application sends the user's _question_ (text in their own words), "How do I programmatically update my Knowledge Base?" to your knowledge base endpoint.|
-|2|QnA Maker uses the trained knowledge base to provide the correct answer and any follow-up prompts that can be used to refine the search for the best answer. QnA Maker returns a JSON-formatted response.|
-|3|The client application uses the JSON response to make decisions about how to continue the conversation. These decisions can include showing the top answer and presenting more choices to refine the search for the best answer. |
-|||
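-
-As a rough illustration of steps 1 and 3 in the table, a client might call the published endpoint like this. The host, endpoint key, and knowledge base ID are placeholders for the values on your Publish page, and the response fields shown (`answers`, `context.prompts`) are assumed from the GenerateAnswer JSON response; check the response your endpoint actually returns.
-
-```python
-import requests
-
-# Placeholders: values from your knowledge base's Publish page.
-RUNTIME_HOST = "https://YOUR-RESOURCE-NAME.azurewebsites.net"
-ENDPOINT_KEY = "YOUR-ENDPOINT-KEY"
-KB_ID = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx"
-
-response = requests.post(
-    f"{RUNTIME_HOST}/qnamaker/knowledgebases/{KB_ID}/generateAnswer",
-    headers={"Authorization": f"EndpointKey {ENDPOINT_KEY}"},
-    json={"question": "How do I programmatically update my Knowledge Base?", "top": 3},
-)
-response.raise_for_status()
-
-# Show the top answer and any follow-up prompts returned with it.
-top_answer = response.json()["answers"][0]
-print(top_answer["answer"])
-for prompt in top_answer.get("context", {}).get("prompts", []):
-    print("Follow-up:", prompt["displayText"])
-```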
-
-## Build low code chat bots
-
-The QnA Maker portal provides the complete knowledge base authoring experience. You can import documents, in their current form, to your knowledge base. These documents (such as an FAQ, product manual, spreadsheet, or web page) are converted into question and answer pairs. Each pair is analyzed for follow-up prompts and connected to other pairs. The final _markdown_ format supports rich presentation including images and links.
-
-## High quality responses with layered ranking
-
-QnA Maker's system is a layered ranking approach. The data is stored in Azure search, which also serves as the first ranking layer. The top results from Azure search are then passed through QnA Maker's NLP re-ranking model to produce the final results and confidence score.
-
-## Multi-turn conversations
-
-QnA Maker provides multi-turn prompts and active learning to help you improve your basic question and answer pairs.
-
-**Multi-turn prompts** give you the opportunity to connect question and answer pairs. This connection allows the client application to provide a top answer and provides more questions to refine the search for a final answer.
-
-After the knowledge base receives questions from users at the published endpoint, QnA Maker applies **active learning** to these real-world questions to suggest changes to your knowledge base to improve the quality.
-
-## Development lifecycle
-
-QnA Maker provides authoring, training, and publishing along with collaboration permissions to integrate into the full development life cycle.
-
-> [!div class="mx-imgBorder"]
-> ![Conceptual image of development cycle](../media/qnamaker-overview-learnabout/development-cycle.png)
--
-## Complete a quickstart
-
-We offer quickstarts in most popular programming languages, each designed to teach you basic design patterns, and have you running code in less than 10 minutes. See the following list for the quickstart for each feature.
-
-* [Get started with QnA Maker client library](../quickstarts/quickstart-sdk.md)
-* [Get started with QnA Maker portal](../quickstarts/create-publish-knowledge-base.md)
-
-## Next steps
-QnA Maker provides everything you need to build, manage, and deploy your custom knowledge base.
-
-> [!div class="nextstepaction"]
-> [Review the latest changes](../whats-new.md)
cognitive-services Add Question Metadata Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/QnAMaker/Quickstarts/add-question-metadata-portal.md
- Title: "Add questions and answer in QnA Maker portal"
-description: This article shows how to add question and answer pairs with metadata so your users can find the right answer to their question.
--- Previously updated : 05/26/2020---
-# Add questions and answer with QnA Maker portal
-
-Once a knowledge base is created, add question and answer (QnA) pairs with metadata to filter the answers. The questions in the following table are about Azure service limits, but each has to do with a different Azure service.
--
-<a name="qna-table"></a>
-
-|Pair|Questions|Answer|Metadata|
-|--|--|--|--|
-|#1|`How large a knowledge base can I create?`<br><br>`What is the max size of a knowledge base?`<br><br>`How many GB of data can a knowledge base hold?` |`The size of the knowledge base depends on the SKU of Azure search you choose when creating the QnA Maker service. Read [here](../concepts/azure-resources.md) for more details.`|`service=qna_maker`<br>`link_in_answer=true`|
-|#2|`How many knowledge bases can I have for my QnA Maker service?`<br><br>`I selected an Azure Cognitive Search tier that holds 15 knowledge bases, but I can only create 14 - what is going on?`<br><br>`What is the connection between the number of knowledge bases in my QnA Maker service and the Azure Cognitive Search service size?` |`Each knowledge base uses 1 index, and all the knowledge bases share a test index. You can have N-1 knowledge bases where N is the number of indexes your Azure Cognitive Search tier supports.`|`service=search`<br>`link_in_answer=false`|
-
-Once metadata is added to a QnA pair, the client application can:
-
-* Request answers that only match certain metadata.
-* Receive all answers but post-process the answers depending on the metadata for each answer.
-
-## Prerequisites
-
-* Complete the [previous quickstart](./create-publish-knowledge-base.md)
-
-## Sign in to the QnA Maker portal
-
-1. Sign in to the [QnA Maker portal](https://www.qnamaker.ai).
-
-1. Select your existing knowledge base from the [previous quickstart](./create-publish-knowledge-base.md).
-
-## Add additional alternatively-phrased questions
-
-The current knowledge base has the QnA Maker troubleshooting QnA pairs. These pairs were created when the URL was added to the knowledge base during the creation process.
-
-When this URL was imported, only one question with one answer was created. In this procedure, add additional questions.
-
-1. From the **Edit** page, use the search textbox above the question and answer pairs, to find the question `How large a knowledge base can I create?`
-
-1. In the **Question** column, select **+ Add alternative phrasing** then add each new phrasing, provided in the following table.
-
- |Alternative phrasing|
- |--|
- |`What is the max size of a knowledge base?`|
- |`How many GB of data can a knowledge base hold?`|
-
-1. Select **Save and train** to retrain the knowledge base.
-
-1. Select **Test**, then enter a question that is close to one of the new alternative phrasings but isn't exactly the same wording:
-
- `What GB size can a knowledge base be?`
-
- The correct answer is returned in markdown format:
-
- `The size of the knowledge base depends on the SKU of Azure search you choose when creating the QnA Maker service. Read [here](../concepts/azure-resources.md) for more details.`
-
- If you select **Inspect** under the returned answer, you can see that more answers matched the question, but not with the same high level of confidence.
-
- Do not add every possible combination of alternative phrasing. When you turn on QnA Maker's [active learning](../how-to/improve-knowledge-base.md), it finds the alternative phrasings that will best help your knowledge base meet your users' needs.
-
-1. Select **Test** again to close the test window.
-
-## Add metadata to filter the answers
-
-Adding metadata to a question and answer pair allows your client application to request filtered answers. This filter is applied before the [first and second rankers](../concepts/query-knowledge-base.md#ranker-process) are applied.
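-
-For example, once the knowledge base is published, a client can pass the metadata as `strictFilters` on the GenerateAnswer call. The following is a minimal Python `requests` sketch, assuming the GenerateAnswer `strictFilters` parameter; the host, endpoint key, and knowledge base ID are placeholders for the values on your Publish page.
-
-```python
-import requests
-
-# Placeholders: values from your knowledge base's Publish page.
-RUNTIME_HOST = "https://YOUR-RESOURCE-NAME.azurewebsites.net"
-ENDPOINT_KEY = "YOUR-ENDPOINT-KEY"
-KB_ID = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx"
-
-response = requests.post(
-    f"{RUNTIME_HOST}/qnamaker/knowledgebases/{KB_ID}/generateAnswer",
-    headers={"Authorization": f"EndpointKey {ENDPOINT_KEY}"},
-    json={
-        "question": "How many knowledge bases can I have?",
-        "top": 3,
-        # Only QnA pairs tagged with service:search are considered.
-        "strictFilters": [{"name": "service", "value": "search"}],
-    },
-)
-response.raise_for_status()
-print(response.json()["answers"][0]["answer"])
-```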
-
-1. Add the second question and answer pair, without the metadata, from the [first table in this quickstart](#qna-table), then continue with the following steps.
-
-1. Select **View options**, then select **Show metadata**.
-
-1. For the QnA pair you just added, select **Add metadata tags**, then add the name of `service` and the value of `search`. It looks like this: `service:search`.
-
-1. Add another metadata tag with name of `link_in_answer` and value of `false`. It looks like this: `link_in_answer:false`.
-
-1. Search for the first answer in the table, `How large a knowledge base can I create?`.
-
-1. Add metadata pairs for the same two metadata tags:
-
- `link_in_answer` : `true`<br>
- `service`: `qna_maker`
-
- You now have two questions with the same metadata tags with different values.
-
-1. Select **Save and train** to retrain the knowledge base.
-
-1. Select **Publish** in the top menu to go to the publish page.
-1. Select the **Publish** button to publish the current knowledge base to the endpoint.
-1. After the knowledge base is published, continue to the next quickstart to learn how to generate an answer from your knowledge base.
-
-## What did you accomplish?
-
-You edited your knowledge base to support more questions and provided name/value pairs to support filtering, either during the search for the top answer or in postprocessing after the answer or answers are returned.
-
-## Clean up resources
-
-If you are not continuing to the next quickstart, delete the QnA Maker and Bot framework resources in the Azure portal.
-
-## Next steps
-
-> [!div class="nextstepaction"]
-> [Get answer with Postman or cURL](get-answer-from-knowledge-base-using-url-tool.md)
cognitive-services Create Publish Knowledge Base https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/QnAMaker/Quickstarts/create-publish-knowledge-base.md
- Title: "Quickstart: Create, train, and publish knowledge base - QnA Maker"
-description: You can create a QnA Maker knowledge base (KB) from your own content, such as FAQs or product manuals. This article includes an example of creating a QnA Maker knowledge base from a simple FAQ webpage, to answer questions.
--- Previously updated : 11/02/2021---
-# Quickstart: Create, train, and publish your QnA Maker knowledge base
--
-You can create a QnA Maker knowledge base (KB) from your own content, such as FAQs or product manuals. This article includes an example of creating a QnA Maker knowledge base from a simple FAQ webpage, to answer questions.
-
-## Prerequisites
-
-> [!div class="checklist"]
-> * If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/cognitive-services/) before you begin.
-> * A [QnA Maker resource](https://portal.azure.com/#create/Microsoft.CognitiveServicesQnAMaker) created in the Azure portal. Remember your Azure Active Directory ID, Subscription, and QnA Maker resource name that you selected when you created the resource.
-
-## Create your first QnA Maker knowledge base
-
-1. Sign in to the [QnAMaker.ai](https://QnAMaker.ai) portal with your Azure credentials.
-
-2. In the QnA Maker portal, select **Create a knowledge base**.
-
-3. On the **Create** page, skip **Step 1** if you already have your QnA Maker resource.
-
-If you haven't created the service yet, select **Stable** and **Create a QnA service**. You are directed to the [Azure portal](https://portal.azure.com/#create/Microsoft.CognitiveServicesQnAMaker) to set up a QnA Maker service in your subscription. Remember your Azure Active Directory ID, Subscription, and QnA resource name that you selected when you created the resource.
-
-When you are done creating the resource in the Azure portal, return to the QnA Maker portal, refresh the browser page, and continue to **Step 2**.
-
-4. In **Step 2**, select your Active directory, subscription, service (resource), and the language for all knowledge bases created in the service.
-
- :::image type="content" source="../media/qnamaker-create-publish-knowledge-base/qnaservice-selection.png" alt-text="Screenshot of selecting a QnA Maker service knowledge base":::
-
-5. In **Step 3**, name your knowledge base **My Sample QnA KB**.
-
-6. In **Step 4**, configure the settings with the following table:
-
- |Setting|Value|
- |--|--|
- |**Enable multi-turn extraction from URLs, .pdf or .docx files.**|Checked|
- |**Multi-turn default text**| Select an option|
- |**+ Add URL**|`https://www.microsoft.com/download/faq.aspx`|
- |**Chit-chat**|Select **Professional**|
-
-7. In **Step 5**, select **Create your KB**.
-
- The extraction process takes a few moments to read the document and identify questions and answers.
-
- After QnA Maker successfully creates the knowledge base, the **Knowledge base** page opens. You can edit the contents of the knowledge base on this page.
-
-## Add a new question and answer set
-
-1. In the QnA Maker portal, on the **Edit** page, select **+ Add QnA pair** from the context toolbar.
-1. Add the following question:
-
- `How many Azure services are used by a knowledge base?`
-
-1. Add the answer formatted with _markdown_:
-
- ` * Azure QnA Maker service\n* Azure Cognitive Search\n* Azure web app\n* Azure app plan`
-
- :::image type="content" source="../media/qnamaker-create-publish-knowledge-base/add-question-and-answer.png" alt-text="Add the question as text and the answer formatted with markdown.":::
-
- The markdown symbol, `*`, is used for bullet points. The `\n` is used for a new line.
-
- The **Edit** page shows the markdown. When you use the **Test** panel later, you will see the markdown displayed properly.
-
-## Save and train
-
-In the upper right, select **Save and train** to save your edits and train QnA Maker. Edits aren't kept unless they're saved.
-
-## Test the knowledge base
-
-1. In the QnA Maker portal, in the upper right, select **Test** to test that the changes you made took effect.
-2. Enter an example user query in the textbox.
-
- `I want to know the difference between 32 bit and 64 bit Windows`
-
- :::image type="content" source="../media/qnamaker-create-publish-knowledge-base/query-dialogue.png" alt-text="Enter an example user query in the textbox.":::
-
-3. Select **Inspect** to examine the response in more detail. The test window is used to test your changes to the knowledge base before publishing your knowledge base.
-
-4. Select **Test** again to close the **Test** panel.
-
-## Publish the knowledge base
-
-When you publish a knowledge base, the contents of your knowledge base move from the `test` index to a `prod` index in Azure search.
-
-![Screenshot of moving the contents of your knowledge base](../media/qnamaker-how-to-publish-kb/publish-prod-test.png)
-
-1. In the QnA Maker portal, select **Publish**. Then to confirm, select **Publish** on the page.
-
- The QnA Maker service is now successfully published. You can use the endpoint in your application or bot code.
-
- ![Screenshot of successful publishing](../media/qnamaker-create-publish-knowledge-base/publish-knowledge-base-to-endpoint.png)
-
-## Create a bot
-
-After publishing, you can create a bot from the **Publish** page:
-
-* You can create several bots quickly, all pointing to the same knowledge base, with different regions or pricing plans for the individual bots.
-* If you want only one bot for the knowledge base, use the **View all your bots on the Azure portal** link to view a list of your current bots.
-
-When you make changes to the knowledge base and republish, you don't need to take further action with the bot. It's already configured to work with the knowledge base, and works with all future changes to the knowledge base. Every time you publish a knowledge base, all the bots connected to it are automatically updated.
-
-1. In the QnA Maker portal, on the **Publish** page, select **Create bot**. This button appears only after you've published the knowledge base.
-
- ![Screenshot of creating a bot](../media/qnamaker-create-publish-knowledge-base/create-bot-from-published-knowledge-base-page.png)
-
-1. A new browser tab opens for the Azure portal, with the Azure Bot Service's creation page. Configure the Azure bot service. The bot and QnA Maker can share the web app service plan, but can't share the web app. This means the **app name** for the bot must be different from the app name for the QnA Maker service.
-
- * **Do**
- * Change bot handle - if it is not unique.
- * Select SDK Language. Once the bot is created, you can download the code to your local development environment and continue the development process.
- * **Don't**
- * Change the following settings in the Azure portal when creating the bot. They are pre-populated for your existing knowledge base:
- * QnA Auth Key
- * App service plan and location
--
-1. After the bot is created, open the **Bot service** resource.
-1. Under **Bot Management**, select **Test in Web Chat**.
-1. At the chat prompt of **Type your message**, enter:
-
- `Azure services?`
-
- The chat bot responds with an answer from your knowledge base.
-
- :::image type="content" source="../media/qnamaker-create-publish-knowledge-base/test-web-chat.png" alt-text="Enter a user query into the test web chat.":::
-
-## What did you accomplish?
-
-You created a new knowledge base, added a public URL to the knowledge base, added your own QnA pair, trained, tested, and published the knowledge base.
-
-After publishing the knowledge base, you created a bot, and tested the bot.
-
-This was all accomplished in a few minutes without having to write any code or clean the content.
-
-## Clean up resources
-
-If you are not continuing to the next quickstart, delete the QnA Maker and Bot framework resources in the Azure portal.
-
-## Next steps
-
-> [!div class="nextstepaction"]
-> [Add questions with metadata](add-question-metadata-portal.md)
-
-For more information:
-
-* [Markdown format in answers](../reference-markdown-format.md)
-* QnA Maker [data sources](../Concepts/data-sources-and-content.md).
cognitive-services Get Answer From Knowledge Base Using Url Tool https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/QnAMaker/Quickstarts/get-answer-from-knowledge-base-using-url-tool.md
- Title: Use URL tool to get answer from knowledge base - QnA Maker-
-description: This article walks you through getting an answer from your knowledge base using a URL test tool such as cURL or Postman.
----
-zone_pivot_groups: URL-test-interface
- Previously updated : 07/16/2020---
-# Get an answer from a QNA Maker knowledge base
--
-> [!NOTE]
-> This documentation does not apply to the latest release. To learn about using the latest question answering APIs consult the [question answering authoring guide](../../language-service/question-answering/how-to/authoring.md).
--------
-## Next steps
-
-> [!div class="nextstepaction"]
-> [Test knowledge base with batch file](../how-to/test-knowledge-base.md#batch-test-with-tool)
-
-Learn more about metadata:
-* [Authoring - add metadata to QnA pair](../How-To/edit-knowledge-base.md#add-metadata)
-* [Query prediction - filter answers by metadata](../How-To/query-knowledge-base-with-metadata.md)
cognitive-services Quickstart Sdk https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/QnAMaker/Quickstarts/quickstart-sdk.md
- Title: "Quickstart: Use SDK to create and manage knowledge base - QnA Maker"
-description: This quickstart shows you how to create and manage your knowledge base using the client SDK.
----- Previously updated : 01/26/2022-
-zone_pivot_groups: qnamaker-quickstart
--
-# Quickstart: QnA Maker client library
-
-Get started with the QnA Maker client library. Follow these steps to install the package and try out the example code for basic tasks.
-------
-## Clean up resources
-
-If you want to clean up and remove a Cognitive Services subscription, you can delete the resource or resource group. Deleting the resource group also deletes any other resources associated with it.
-
-* [Portal](../../cognitive-services-apis-create-account.md#clean-up-resources)
-* [Azure CLI](../../cognitive-services-apis-create-account-cli.md#clean-up-resources)
-
-## Next steps
-
-> [!div class="nextstepaction"]
->[Tutorial: Test your knowledge base with a batch file](../how-to/test-knowledge-base.md#batch-test-with-tool)
-
-* [What is the QnA Maker API?](../Overview/overview.md)
-* [Edit a knowledge base](../how-to/edit-knowledge-base.md)
-* [Get usage analytics](../how-to/get-analytics-knowledge-base.md)
cognitive-services Create Faq Bot With Azure Bot Service https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/QnAMaker/Tutorials/create-faq-bot-with-azure-bot-service.md
- Title: "Tutorial: Create an FAQ bot with Azure Bot Service"
-description: In this tutorial, create a no code FAQ Bot with QnA Maker and Azure Bot Service.
--- Previously updated : 08/31/2020---
-# Tutorial: Create an FAQ bot with Azure Bot Service
-Create an FAQ Bot with QnA Maker and Azure [Bot Service](https://azure.microsoft.com/services/bot-service/) with no code.
-
-In this tutorial, you learn how to:
-
-<!-- green checkmark -->
-> [!div class="checklist"]
-> * Link a QnA Maker knowledge base to an Azure Bot Service
-> * Deploy the Bot
-> * Chat with the Bot in web chat
-> * Light up the Bot in the supported channels
--
-## Create and publish a knowledge base
-
-Follow the [quickstart](../Quickstarts/create-publish-knowledge-base.md) to create a knowledge base. Once the knowledge base has been successfully published, you will see the following page.
-
-> [!div class="mx-imgBorder"]
-> ![Screenshot of successful publishing](../media/qnamaker-create-publish-knowledge-base/publish-knowledge-base-to-endpoint.png)
--
-## Create a bot
-
-After publishing, you can create a bot from the **Publish** page:
-
-* You can create several bots quickly, all pointing to the same knowledge base, with different regions or pricing plans for the individual bots.
-* If you want only one bot for the knowledge base, use the **View all your bots on the Azure portal** link to view a list of your current bots.
-
-When you make changes to the knowledge base and republish, you don't need to take further action with the bot. It's already configured to work with the knowledge base, and works with all future changes to the knowledge base. Every time you publish a knowledge base, all the bots connected to it are automatically updated.
-
-1. In the QnA Maker portal, on the **Publish** page, select **Create bot**. This button appears only after you've published the knowledge base.
-
- > [!div class="mx-imgBorder"]
- > ![Screenshot of creating a bot](../media/qnamaker-create-publish-knowledge-base/create-bot-from-published-knowledge-base-page.png)
-
-1. A new browser tab opens for the Azure portal, with the Azure Bot Service's creation page. Configure the Azure bot service. The bot and QnA Maker can share the web app service plan, but can't share the web app. This means the **app name** for the bot must be different from the app name for the QnA Maker service.
-
- * **Do**
- * Change bot handle - if it is not unique.
- * Select SDK Language. Once the bot is created, you can download the code to your local development environment and continue the development process.
- * **Don't**
- * Change the following settings in the Azure portal when creating the bot. They are pre-populated for your existing knowledge base:
- * QnA Auth Key
- * App service plan and location
--
-1. After the bot is created, open the **Bot service** resource.
-1. Under **Bot Management**, select **Test in Web Chat**.
-1. At the chat prompt of **Type your message**, enter:
-
- `Azure services?`
-
- The chat bot responds with an answer from your knowledge base.
-
- > [!div class="mx-imgBorder"]
- > ![Screenshot of bot returning a response](../media/qnamaker-create-publish-knowledge-base/test-web-chat.png)
--
-1. Light up the Bot in additional [supported channels](/azure/bot-service/bot-service-manage-channels).
-
-## Integrate the bot with channels
-
-Click on **Channels** in the Bot service resource that you have created. You can light up the Bot in additional [supported channels](/azure/bot-service/bot-service-manage-channels).
-
- >[!div class="mx-imgBorder"]
- >![Screenshot of integration with teams](../media/qnamaker-tutorial-updates/connect-with-teams.png)
cognitive-services Export Knowledge Base https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/QnAMaker/Tutorials/export-knowledge-base.md
- Title: Export knowledge bases - QnA Maker
-description: Exporting a knowledge base requires exporting from one knowledge base, then importing into another.
--- Previously updated : 11/09/2020--
-# Move a knowledge base using export-import
-
-You may want to create a copy of your knowledge base for several reasons:
-
-* To copy a knowledge base from QnA Maker GA to Custom question answering
-* To implement a backup and restore process
-* To integrate with your CI/CD pipeline
-* To move your data to different regions
--
-## Prerequisites
-
-> * If you don't have an Azure subscription, [create a free account](https://azure.microsoft.com/free/cognitive-services/) before you begin.
-> * A [QnA Maker resource](https://portal.azure.com/#create/Microsoft.CognitiveServicesQnAMaker) created in the Azure portal. Remember your Azure Active Directory ID, subscription, and the QnA resource name that you selected when you created the resource.
-> * Set up a new [QnA Maker service](../How-To/set-up-qnamaker-service-azure.md)
-
-## Export a knowledge base
-1. Sign in to [QnA Maker portal](https://qnamaker.ai).
-1. Select the knowledge base you want to move.
-
-1. On the **Settings** page, you have the options to export **QnAs**, **Synonyms**, or **Knowledge Base Replica**. You can choose to download the data in .tsv/.xlsx.
-
- 1. **QnAs**: When exporting QnAs, all QnA pairs (with questions, answers, metadata, follow-up prompts, and the data source names) are downloaded. The QnA IDs that are exported with the questions and answers may be used to update a specific QnA pair using the [update API](/rest/api/cognitiveservices/qnamaker/knowledgebase/update). The QnA ID for a specific QnA pair remains unchanged across multiple export operations.
- 2. **Synonyms**: You can export Synonyms that have been added to the knowledge base.
-   3. **Knowledge Base Replica**: If you want to download the entire knowledge base with synonyms and other settings, you can choose this option.
-
-## Import a knowledge base
-1. Click **Create a knowledge base** from the top menu of the qnamaker.ai portal and then create an _empty_ knowledge base by not adding any URLs or files. Set the name of your choice for the new knowledge base and then click **Create your KB**.
-
-1. In this new knowledge base, open the **Settings** tab, and under _Import knowledge base_ select one of the following options: **QnAs**, **Synonyms**, or **Knowledge Base Replica**.
-
-   1. **QnAs**: This option imports all QnA pairs. **The QnA pairs created in the new knowledge base have the same QnA IDs as in the exported file**. You can refer to [SampleQnAs.xlsx](https://aka.ms/qnamaker-sampleqnas) or [SampleQnAs.tsv](https://aka.ms/qnamaker-sampleqnastsv) to import QnAs.
-   2. **Synonyms**: This option can be used to import synonyms to the knowledge base. You can refer to [SampleSynonyms.xlsx](https://aka.ms/qnamaker-samplesynonyms) or [SampleSynonyms.tsv](https://aka.ms/qnamaker-samplesynonymstsv) to import synonyms.
-   3. **Knowledge Base Replica**: This option can be used to import a KB replica with QnAs, Synonyms, and Settings. You can refer to [KBReplicaSampleExcel](https://aka.ms/qnamaker-samplereplica) or [KBReplicaSampleTSV](https://aka.ms/qnamaker-samplereplicatsv) for more details. If you also want to add unstructured content to the replica, refer to [CustomQnAKBReplicaSample](https://aka.ms/qnamaker-samplev2replica).
-
-      Either QnAs or unstructured content is required when importing a replica. Unstructured documents are only valid for Custom question answering.
-      The Synonyms file is not mandatory when importing a replica.
-      The Settings file is mandatory when importing a replica.
-
- |Settings|Update permitted when importing to QnA Maker KB?|Update permitted when importing to Custom question answering KB?|
- |:--|--|--|
- |DefaultAnswerForKB|No|Yes|
- |EnableActiveLearning (True/False)|Yes|No|
- |EnableMultiTurnExtraction (True/False)|Yes|Yes|
- |DefaultAnswerforMultiturn|Yes|Yes|
- |Language|No|No|
-
-1. **Test** the new knowledge base using the Test panel. Learn how to [test your knowledge base](../How-To/test-knowledge-base.md).
-
-1. **Publish** the knowledge base and create a chat bot. Learn how to [publish your knowledge base](../Quickstarts/create-publish-knowledge-base.md#publish-the-knowledge-base).
-
- > [!div class="mx-imgBorder"]
- > ![Migrate knowledge base](../media/qnamaker-how-to-migrate-kb/import-export-kb.png)
-
-## Programmatically export a knowledge base from QnA Maker
-
-The export/import process is programmatically available using the following REST APIs:
-
-**Export**
-
-* [Download knowledge base API](/rest/api/cognitiveservices/qnamaker4.0/knowledgebase/download)
-
-**Import**
-
-* [Replace API (reload with same knowledge base ID)](/rest/api/cognitiveservices/qnamaker4.0/knowledgebase/replace)
-* [Create API (load with new knowledge base ID)](/rest/api/cognitiveservices/qnamaker4.0/knowledgebase/create)
-
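-If you script this process, the following is a minimal sketch of the export (download) call using Python and the `requests` package. The endpoint, key, knowledge base ID, and environment values are placeholders that you replace with your own; check the linked API reference for the full request and response details.
-
-```python
-# Minimal sketch: export (download) a knowledge base with the v4.0 REST API.
-# The endpoint, key, knowledge base ID, and environment below are placeholders.
-import requests
-
-endpoint = "https://YOUR-RESOURCE-NAME.cognitiveservices.azure.com"
-subscription_key = "YOUR-QNA-MAKER-KEY"   # Ocp-Apim-Subscription-Key for the QnA Maker resource
-kb_id = "YOUR-KB-ID"
-environment = "Prod"                      # or "Test"
-
-url = f"{endpoint}/qnamaker/v4.0/knowledgebases/{kb_id}/{environment}/qna"
-response = requests.get(url, headers={"Ocp-Apim-Subscription-Key": subscription_key})
-response.raise_for_status()
-
-# The response body contains the knowledge base's QnA documents.
-print(response.json())
-```
-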
-## Chat logs
-
-There is no way to export chat logs, since the new knowledge base uses Application Insights for storing chat logs.
-
-## Next steps
-
-> [!div class="nextstepaction"]
-> [Edit a knowledge base](../How-To/edit-knowledge-base.md)
cognitive-services Integrate With Power Virtual Assistant Fallback Topic https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/QnAMaker/Tutorials/integrate-with-power-virtual-assistant-fallback-topic.md
- Title: "Tutorial: Integrate with Power Virtual Agents - QnA Maker"
-description: In this tutorial, add your QnA Maker knowledge base to a Power Virtual Agents bot as a fallback topic, so the bot can answer from your knowledge base.
--- Previously updated : 11/09/2020--
-# Tutorial: Add your knowledge base to Power Virtual Agents
-
-Create and extend a [Power Virtual Agents](https://powervirtualagents.microsoft.com/) bot to provide answers from your knowledge base.
-
-> [!NOTE]
-> The integration demonstrated in this tutorial is in preview and is not intended for deployment to production environments.
-
-In this tutorial, you learn how to:
-
-> [!div class="checklist"]
->
-> * Create a Power Virtual Agents bot
-> * Create a system fallback topic
-> * Add QnA Maker as an action to a topic as a Power Automate flow
-> * Create a Power Automate solution
-> * Add a Power Automate flow to your solution
-> * Publish Power Virtual Agents
-> * Test Power Virtual Agents, and receive an answer from your QnA Maker knowledge base
--
-## Create and publish a knowledge base
-
-1. Follow the [quickstart](../Quickstarts/create-publish-knowledge-base.md) to create a knowledge base. Don't complete the last section, about creating a bot. Instead, use this tutorial to create a bot with Power Virtual Agents.
-
- > [!div class="mx-imgBorder"]
- > ![Screenshot of published knowledge base settings](../media/how-to-integrate-power-virtual-agent/published-knowledge-base-settings.png)
-
- Enter your published knowledge base settings found on the **Settings** page in the [QnA Maker](https://www.qnamaker.ai/) portal. You will need this information for the [Power Automate step](#create-a-power-automate-flow-to-connect-to-your-knowledge-base) to configure your QnA Maker `GenerateAnswer` connection.
-
-1. In the QnA Maker portal, on the **Settings** page, find the endpoint key, endpoint host, and knowledge base ID.
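-
-   The Power Automate flow you build later calls the QnA Maker `GenerateAnswer` endpoint with these values. If you want to verify them first, the following is a minimal sketch of that call in Python; the host, endpoint key, and knowledge base ID are placeholders from your **Settings** page.
-
-   ```python
-   # Minimal sketch: query a published knowledge base with GenerateAnswer.
-   # The host, endpoint key, and knowledge base ID below are placeholders.
-   import requests
-
-   host = "https://YOUR-RESOURCE-NAME.azurewebsites.net"
-   endpoint_key = "YOUR-ENDPOINT-KEY"
-   kb_id = "YOUR-KB-ID"
-
-   url = f"{host}/qnamaker/knowledgebases/{kb_id}/generateAnswer"
-   headers = {"Authorization": f"EndpointKey {endpoint_key}", "Content-Type": "application/json"}
-
-   question = {"question": "How can I improve the throughput performance for query predictions?"}
-   response = requests.post(url, headers=headers, json=question)
-   response.raise_for_status()
-   print(response.json()["answers"][0]["answer"])
-   ```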
-
-## Create bot in Power Virtual Agents
-
-[Power Virtual Agents](https://powervirtualagents.microsoft.com/) allows teams to create powerful bots by using a guided, no-code graphical interface. You don't need data scientists or developers.
-
-Create a bot by following the steps in [Create and delete Power Virtual Agents bots](/power-virtual-agents/authoring-first-bot).
-
-## Create the system fallback topic
-
-In Power Virtual Agents, you create a bot with a series of topics (subject areas), in order to answer user questions by performing actions.
-
-Although the bot can connect to your knowledge base from any topic, this tutorial uses the *system fallback* topic. The fallback topic is used when the bot can't find an answer. The bot passes the user's text to QnA Maker's `GenerateAnswer` API, receives the answer from your knowledge base, and displays it to the user as a message.
-
-Create a fallback topic by following the steps in [Configure the system fallback topic in Power Virtual Agents](/power-virtual-agents/authoring-system-fallback-topic).
-
-## Use the authoring canvas to add an action
-
-Use the Power Virtual Agents authoring canvas to connect the fallback topic to your knowledge base. The topic starts with the unrecognized user text. Add an action that passes that text to QnA Maker, and then shows the answer as a message. The last step of displaying an answer is handled as a [separate step](#add-your-solutions-flow-to-power-virtual-agents), later in this tutorial.
-
-This section creates the fallback topic conversation flow.
-
-1. The new fallback action might already have conversation flow elements. Delete the **Escalate** item by selecting the **Options** menu.
-
- :::image type="content" source="../media/how-to-integrate-power-virtual-agent/delete-escalate-action-using-option-menu.png" alt-text="Partial screenshot of conversation flow, with delete option highlighted.":::
-
-1. Above the **Message** node, select the plus (**+**) icon, and then select **Call an action**.
-
- :::image type="content" source="../media/how-to-integrate-power-virtual-agent/create-new-item-call-an-action.png" alt-text="Partial Screenshot of Call an action.":::
-
-1. Select **Create a flow**. The process takes you to the Power Automate portal.
-
- > [!div class="mx-imgBorder"]
- > ![Screenshot of Create a flow](../media/how-to-integrate-power-virtual-agent/create-a-flow.png)
-
- Power Automate opens to a new template. You won't use this new template.
-
- :::image type="content" source="../media/how-to-integrate-power-virtual-agent/power-automate-flow-initial-template.png" alt-text="Partial Screenshot of Power Automate with new flow template.":::
-
-## Create a Power Automate flow to connect to your knowledge base
-
-> [!NOTE]
-> Currently, the Power Automate template does not support QnA Maker managed (Preview) endpoints. To add a QnA Maker managed (Preview) knowledge base to Power Automate, skip this step and manually add the endpoints to it.
-
-The following procedure creates a Power Automate flow that:
-
-* Takes the incoming user text, and sends it to QnA Maker.
-* Returns the top response back to your bot.
-
-1. In **Power Automate**, select **Templates** from the left navigation. If you're asked whether you want to leave the browser page, select **Leave**.
-
-1. On the templates page, search for the **Generate answer using QnA Maker** template, and then select it. This template has all the steps to call QnA Maker with your knowledge base settings and return the top answer.
-
-1. On the new screen for the QnA Maker flow, select **Continue**.
-
- :::image type="content" source="../media/how-to-integrate-power-virtual-agent/power-automate-qna-flow-template-continue.png" alt-text="Partial Screenshot of QnA Maker template flow with Continue button highlighted.":::
-
-1. Select the **Generate Answer** action box, and fill in your QnA Maker settings from the previous section titled [Create and publish a knowledge base](#create-and-publish-a-knowledge-base). The **Service Host** in the following image refers to your knowledge base **Host** value and has the format `https://YOUR-RESOURCE-NAME.azurewebsites.net/qnamaker`.
-
- :::image type="content" source="../media/how-to-integrate-power-virtual-agent/power-virtual-agent-fill-in-generate-answer-settings.png" alt-text="Partial Screenshot of QnA Maker template flow with Generate answer (Preview) highlighted.":::
-
-1. Select **Save** to save the flow.
-
-## Create a solution and add the flow
-
-For the bot to find and connect to the flow, the flow must be included in a Power Automate solution.
-
-1. While still in the Power Automate portal, select **Solutions** from the left-side navigation.
-
-1. Select **+ New solution**.
-
-1. Enter a display name. The list of solutions includes every solution in your organization or school. Choose a naming convention that helps you filter to just your solutions. For example, you might prefix your email to your solution name: `jondoe-power-virtual-agent-qnamaker-fallback`.
-
-1. Select your publisher from the list of choices.
-
-1. Accept the default values for the name and version.
-
-1. Select **Create** to finish the process.
-
-## Add your flow to the solution
-
-1. In the list of solutions, select the solution you just created. It should be at the top of the list. If it isn't, search by your email name, which is part of the solution name.
-
-1. In the solution, select **+ Add existing**, and then select **Flow** from the list.
-
-1. Find your flow from the **Outside solutions** list, and then select **Add** to finish the process. If there are many flows, look at the **Modified** column to find the most recent flow.
-
-## Add your solution's flow to Power Virtual Agents
-
-1. Return to the browser tab with your bot in Power Virtual Agents. The authoring canvas should still be open.
-
-1. To insert a new step in the flow, above the **Message** action box, select the plus (**+**) icon. Then select **Call an action**.
-
-1. From the **Flow** pop-up window, select the new flow named **Generate answers using QnA Maker knowledge base...**. The new action appears in the flow.
-
- :::image type="content" source="../media/how-to-integrate-power-virtual-agent/power-virtual-agent-flow-after-adding-action.png" alt-text="Partial Screenshot of Power Virtual Agent topic conversation canvas after adding QnA Maker flow.":::
-
-1. To correctly set the input variable to the QnA Maker action, select **Select a variable**, then select **bot.UnrecognizedTriggerPhrase**.
-
- :::image type="content" source="../media/how-to-integrate-power-virtual-agent/power-virtual-agent-selection-action-input.png" alt-text="Partial Screenshot of Power Virtual Agent topic conversation canvas selecting input variable.":::
-
-1. To correctly set the output variable to the QnA Maker action, in the **Message** action, select **UnrecognizedTriggerPhrase**, then select the icon to insert a variable, `{x}`, then select **FinalAnswer**.
-
-1. From the context toolbar, select **Save**, to save the authoring canvas details for the topic.
-
-Here's what the final bot canvas looks like.
-
-> [!div class="mx-imgBorder"]
-> ![Screenshot shows the final agent canvas with Trigger Phrases, Action, and then Message sections.](../media/how-to-integrate-power-virtual-agent/power-virtual-agent-topic-authoring-canvas-full-flow.png)
-
-## Test the bot
-As you design your bot in Power Virtual Agents, you can use [the Test bot pane](/power-virtual-agents/authoring-test-bot) to see how the bot leads a customer through the bot conversation.
-
-1. In the test pane, toggle **Track between topics**. This allows you to watch the progression between topics, as well as within a single topic.
-
-1. Test the bot by entering the user text in the following order. The authoring canvas reports the successful steps with a green check mark.
-
- |Question order|Test questions|Purpose|
- |--|--|--|
- |1|Hello|Begin conversation|
- |2|Store hours|Sample topic. This is configured for you without any additional work on your part.|
- |3|Yes|In reply to `Did that answer your question?`|
- |4|Excellent|In reply to `Please rate your experience.`|
- |5|Yes|In reply to `Can I help with anything else?`|
-    |6|How can I improve the throughput performance for query predictions?|This question triggers the fallback action, which sends the text to your knowledge base to answer. Then the answer is shown. The green check marks for the individual actions indicate success for each action.|
-
- :::image type="content" source="../media/how-to-integrate-power-virtual-agent/power-virtual-agent-test-tracked.png" alt-text="Screenshot of chat bot with canvas indicating green checkmarks for successful actions.":::
-
-## Publish your bot
-
-To make the bot available to all members of your school or organization, you need to *publish* it.
-
-Publish your bot by following the steps in [Publish your bot](/power-virtual-agents/publication-fundamentals-publish-channels).
-
-## Share your bot
-
-To make your bot available to others, you first need to publish it to a channel. For this tutorial we'll use the demo website.
-
-Configure the demo website by following the steps in [Configure a chatbot for a live or demo website](/power-virtual-agents/publication-connect-bot-to-web-channels).
-
-Then you can share your website URL with your school or organization members.
-
-## Clean up resources
-
-When you are done with the knowledge base, remove the QnA Maker resources in the Azure portal.
-
-## Next step
-
-[Get analytics on your knowledge base](../How-To/get-analytics-knowledge-base.md)
-
-Learn more about:
-
-* [Power Virtual Agents](/power-virtual-agents/)
-* [Power Automate](/power-automate/)
-* [QnA Maker connector](https://us.flow.microsoft.com/connectors/shared_cognitiveservicesqnamaker/qna-maker/) and the [settings for the connector](/connectors/cognitiveservicesqnamaker/)
cognitive-services Choose Natural Language Processing Service https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/QnAMaker/choose-natural-language-processing-service.md
- Title: Use NLP with LUIS for chat bots
-description: Learn when to use Language Understanding and when to use QnA Maker, and understand how they complement each other.
--- Previously updated : 04/16/2020--
-# Use Cognitive Services with natural language processing (NLP) to enrich bot conversations
---
-## Next steps
-
-* Learn [how to manage Azure resources](How-To/set-up-qnamaker-service-azure.md)
cognitive-services Encrypt Data At Rest https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/QnAMaker/encrypt-data-at-rest.md
- Title: QnA Maker encryption of data at rest-
-description: Microsoft offers Microsoft-managed encryption keys, and also lets you manage your Cognitive Services subscriptions with your own keys, called customer-managed keys (CMK). This article covers data encryption at rest for QnA Maker, and how to enable and manage CMK.
----- Previously updated : 09/09/2021-
-#Customer intent: As a user of the QnA Maker service, I want to learn how encryption at rest works.
---
-# QnA Maker encryption of data at rest
-
-QnA Maker automatically encrypts your data when it is persisted to the cloud, helping to meet your organizational security and compliance goals.
--
-## About encryption key management
-
-By default, your subscription uses Microsoft-managed encryption keys. There is also the option to manage your subscription with your own keys called customer-managed keys (CMK). CMK offers greater flexibility to create, rotate, disable, and revoke access controls. You can also audit the encryption keys used to protect your data. If CMK is configured for your subscription, double encryption is provided, which offers a second layer of protection, while allowing you to control the encryption key through your Azure Key Vault.
-
-QnA Maker uses the CMK support in Azure Cognitive Search. Configure [CMK in Azure Cognitive Search using Azure Key Vault](../../search/search-security-manage-encryption-keys.md). This search instance must be associated with the QnA Maker service to enable CMK.
--
-> [!IMPORTANT]
-> Your Azure Search service resource must have been created after January 2019 and cannot be in the free (shared) tier. There is no support to configure customer-managed keys in the Azure portal.
-
-## Enable customer-managed keys
-
-The QnA Maker service uses CMK from the Azure Search service. Follow these steps to enable CMKs:
-
-1. Create a new Azure Search instance and enable the prerequisites mentioned in the [customer-managed key prerequisites for Azure Cognitive Search](../../search/search-security-manage-encryption-keys.md#prerequisites).
-
- ![View Encryption settings 1](../media/cognitive-services-encryption/qna-encryption-1.png)
-
-2. When you create a QnA Maker resource, it's automatically associated with an Azure Search instance. This instance cannot be used with CMK. To use CMK, you'll need to associate your newly created instance of Azure Search that was created in step 1. Specifically, you'll need to update the `AzureSearchAdminKey` and `AzureSearchName` in your QnA Maker resource.
-
- ![View Encryption settings 2](../media/cognitive-services-encryption/qna-encryption-2.png)
-
-3. Next, create a new application setting:
- * **Name**: Set to `CustomerManagedEncryptionKeyUrl`
- * **Value**: Use the value that you got in Step 1 when creating your Azure Search instance.
-
- ![View Encryption settings 3](../media/cognitive-services-encryption/qna-encryption-3.png)
-
-4. When finished, restart the runtime. Now your QnA Maker service is CMK-enabled.
-
-## Regional availability
-
-Customer-managed keys are available in all Azure Search regions.
-
-## Encryption of data in transit
-
-QnA Maker portal runs in the user's browser. Every action triggers a direct call to the respective Cognitive Service API. Hence, QnA Maker is compliant for data in transit.
-However, because the QnA Maker portal service is hosted in the West US region, it is still not ideal for non-US customers.
-
-## Next steps
-
-* [Encryption in Azure Search using CMKs in Azure Key Vault](../../search/search-security-manage-encryption-keys.md)
-* [Data encryption at rest](../../security/fundamentals/encryption-atrest.md)
-* [Learn more about Azure Key Vault](../../key-vault/general/overview.md)
cognitive-services Limits https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/QnAMaker/limits.md
- Title: Limits and boundaries - QnA Maker
-description: QnA Maker has meta-limits for parts of the knowledge base and service. It is important to keep your knowledge base within those limits in order to test and publish.
--- Previously updated : 11/02/2021---
-# QnA Maker knowledge base limits and boundaries
-
-QnA Maker limits provided below are a combination of the [Azure Cognitive Search pricing tier limits](../../search/search-limits-quotas-capacity.md) and the [QnA Maker pricing tier limits](https://azure.microsoft.com/pricing/details/cognitive-services/qna-maker/). You need to know both sets of limits to understand how many knowledge bases you can create per resource and how large each knowledge base can grow.
-
-## Knowledge bases
-
-The maximum number of knowledge bases is based on [Azure Cognitive Search tier limits](../../search/search-limits-quotas-capacity.md).
-
-|**Azure Cognitive Search tier** | **Free** | **Basic** |**S1** | **S2**| **S3** |**S3 HD**|
-|||||||-|
-|Maximum number of published knowledge bases allowed|2|14|49|199|199|2,999|
-
- For example, if your tier has 15 allowed indexes, you can publish 14 knowledge bases (one index per published knowledge base). The 15th index, `testkb`, is used for all the knowledge bases for authoring and testing.
-
-## Extraction Limits
-
-### File naming constraints
-
-File names may not include the following characters:
-
-|Do not use character|
-|--|
-|Single quote `'`|
-|Double quote `"`|
-
-### Maximum file size
-
-|Format|Max file size (MB)|
-|--|--|
-|`.docx`|10|
-|`.pdf`|25|
-|`.tsv`|10|
-|`.txt`|10|
-|`.xlsx`|3|
-
-### Maximum number of files
-
-The maximum number of files that can be extracted and maximum file size is based on your **[QnA Maker pricing tier limits](https://azure.microsoft.com/pricing/details/cognitive-services/qna-maker/)**.
-
-### Maximum number of deep-links from URL
-
-The maximum number of deep-links that can be crawled for extraction of QnAs from a URL page is **20**.
-
-## Metadata Limits
-
-Metadata is presented as a text-based key-value pair, such as `product:windows 10`. It is stored and compared in lowercase. The maximum number of metadata fields is based on your **[Azure Cognitive Search tier limits](../../search/search-limits-quotas-capacity.md)**.
-
-For the GA version, because the test index is shared across all the KBs, the limit applies across all KBs in the QnA Maker service.
-
-|**Azure Cognitive Search tier** | **Free** | **Basic** |**S1** | **S2**| **S3** |**S3 HD**|
-|||||||-|
-|Maximum metadata fields per QnA Maker service (across all KBs)|1,000|100*|1,000|1,000|1,000|1,000|
-
-### By name and value
-
-The length and acceptable characters for metadata name and value are listed in the following table.
-
-|Item|Allowed chars|Regex pattern match|Max chars|
-|--|--|--|--|
-|Name (key)|Allows<br>Alphanumeric (letters and digits)<br>`_` (underscore)<br> Must not contain spaces.|`^[a-zA-Z0-9_]+$`|100|
-|Value|Allows everything except<br>`:` (colon)<br>`\|` (vertical pipe)<br>Only one value allowed.|`^[^:\|]+$`|500|
-
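-The name and value patterns in the preceding table can be checked locally before you add metadata. The following is a minimal Python sketch using those regular expressions and length limits; the sample pairs are illustrative only.
-
-```python
-# Minimal sketch: validate a metadata name and value against the limits above.
-import re
-
-NAME_PATTERN = re.compile(r"^[a-zA-Z0-9_]+$")   # letters, digits, underscore; no spaces
-VALUE_PATTERN = re.compile(r"^[^:|]+$")         # anything except ':' and '|'
-
-def is_valid_metadata(name: str, value: str) -> bool:
-    return (
-        len(name) <= 100 and NAME_PATTERN.match(name) is not None
-        and len(value) <= 500 and VALUE_PATTERN.match(value) is not None
-    )
-
-print(is_valid_metadata("product", "windows 10"))   # True
-print(is_valid_metadata("release date", "2020"))    # False: space in the name
-```
-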
-## Knowledge Base content limits
-Overall limits on the content in the knowledge base:
-* Length of answer text: 25,000 characters
-* Length of question text: 1,000 characters
-* Length of metadata key text: 100 characters
-* Length of metadata value text: 500 characters
-* Supported characters for metadata name: Alphabets, digits, and `_`
-* Supported characters for metadata value: All except `:` and `|`
-* Length of file name: 200
-* Supported file formats: ".tsv", ".pdf", ".txt", ".docx", ".xlsx".
-* Maximum number of alternate questions: 300
-* Maximum number of question-answer pairs: Depends on the **[Azure Cognitive Search tier](../../search/search-limits-quotas-capacity.md#document-limits)** chosen. A question and answer pair maps to a document on Azure Cognitive Search index.
-* URL/HTML page: 1 million characters
-
-## Create Knowledge base call limits:
-These represent the limits for each create knowledge base action; that is, clicking *Create KB* or calling the CreateKnowledgeBase API.
-* Recommended maximum number of alternate questions per answer: 300
-* Maximum number of URLs: 10
-* Maximum number of files: 10
-* Maximum number of QnAs permitted per call: 1000
-
-## Update Knowledge base call limits
-These represent the limits for each update action; that is, clicking *Save and train* or calling the UpdateKnowledgeBase API.
-* Length of each source name: 300
-* Recommended maximum number of alternate questions added or deleted: 300
-* Maximum number of metadata fields added or deleted: 10
-* Maximum number of URLs that can be refreshed: 5
-* Maximum number of QnAs permitted per call: 1000
-
-## Add unstructured file limits
-
-> [!NOTE]
-> * If you need to use larger files than the limit allows, you can break the file into smaller files before sending them to the API.
-
-These represent the limits when unstructured files are used to *Create KB* or call the CreateKnowledgeBase API:
-* Length of file: The first 32,000 characters are extracted.
-* Maximum three responses per file.
-
-## Prebuilt question answering limits
-
-> [!NOTE]
-> * If you need to use larger documents than the limit allows, you can break the text into smaller chunks of text before sending them to the API.
-> * A document is a single string of text characters.
-
-These represent the limits when Prebuilt API is used to *Generate response* or call the GenerateAnswer API:
-* Number of documents: 5
-* Maximum size of a single document: 5,120 characters
-* Maximum three responses per document.
-
-> [!IMPORTANT]
-> Support for unstructured files/content is available only in question answering.
-
-## Alterations limits
-[Alterations](/rest/api/cognitiveservices/qnamaker/alterations/replace) do not allow these special characters: ',', '?', ':', ';', '\"', '\'', '(', ')', '{', '}', '[', ']', '-', '+', '.', '/', '!', '*', '-', '_', '@', '#'
-
-## Next steps
-
-Learn when and how to change [service pricing tiers](How-To/set-up-qnamaker-service-azure.md#upgrade-qna-maker-sku).
cognitive-services Reference App Service https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/QnAMaker/reference-app-service.md
- Title: Service configuration - QnA Maker
-description: Understand how and where to configure resources.
--- Previously updated : 11/02/2021---
-# Service configuration
-
-Each version of QnA Maker uses a different set of Azure resources (services). This article describes the supported customizations for these services.
-
-## App Service
-
-QnA Maker uses the App Service to provide the query runtime used by the [generateAnswer API](/rest/api/cognitiveservices/qnamaker4.0/runtime/generateanswer).
-
-These settings are available in the Azure portal, for the App Service. The settings are available by selecting **Settings**, then **Configuration**.
-
-You can set an individual setting either through the Application Settings list, or modify several settings by selecting **Advanced edit**.
-
-|Resource|Setting|
-|--|--|
-|AzureSearchAdminKey|Cognitive Search - used for QnA pair storage and Ranker #1|
-|AzureSearchName|Cognitive Search - used for QnA pair storage and Ranker #1|
-|DefaultAnswer|Text of answer when no match is found|
-|UserAppInsightsAppId|Chat log and telemetry|
-|UserAppInsightsKey|Chat log and telemetry|
-|UserAppInsightsName|Chat log and telemetry|
-|QNAMAKER_EXTENSION_VERSION|Always set to _latest_. This setting will initialize the QnAMaker Site Extension in the App Service.|
-
-You need to **restart** the service from the **Overview** page of the Azure portal, once you are done making changes.
-
-## QnA Maker Service
-
-The QnA Maker service provides configuration so that users can collaborate on a single QnA Maker service and all its knowledge bases.
-
-Learn [how to add collaborators](./index.yml) to your service.
-
-## Change Azure Cognitive Search
-
-Learn [how to change the Cognitive Search service](./how-to/configure-QnA-Maker-resources.md#configure-qna-maker-to-use-different-cognitive-search-resource) linked to your QnA Maker service.
-
-## Change default answer
-
-Learn [how to change the text of your default answers](How-To/change-default-answer.md).
-
-## Telemetry
-
-Application Insights is used for monitoring telemetry with QnA Maker GA. There are no configuration settings specific to QnA Maker.
-
-## App Service Plan
-
-App Service Plan has no configuration settings specific to QnA Maker.
-
-## Next steps
-
-Learn more about [formats](reference-document-format-guidelines.md) for documents and URLs you want to import into a knowledge base.
cognitive-services Reference Document Format Guidelines https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/QnAMaker/reference-document-format-guidelines.md
- Title: Import document format guidelines - QnA Maker
-description: Use these guidelines for importing documents to get the best results for your content.
--- Previously updated : 04/06/2020---
-# Format guidelines for imported documents and URLs
-
-Review these formatting guidelines to get the best results for your content.
-
-## Formatting considerations
-
-After importing a file or URL, QnA Maker converts and stores your content in the [markdown format](https://en.wikipedia.org/wiki/Markdown). The conversion process adds new lines in the text, such as `\n\n`. Knowledge of the markdown format helps you understand the converted content and manage your knowledge base content.
-
-If you add or edit your content directly in your knowledge base, use **markdown formatting** to create rich text content or change the markdown format content that is already in the answer. QnA Maker supports much of the markdown format to bring rich text capabilities to your content. However, the client application, such as a chat bot, may not support the same set of markdown formats. It is important to test the client application's display of answers.
-
-See a full list of [content types and examples](./concepts/data-sources-and-content.md#content-types-of-documents-you-can-add-to-a-knowledge-base).
-
-## Basic document formatting
-
-QnA Maker identifies sections and subsections and relationships in the file based on visual clues like:
-
-* font size
-* font style
-* numbering
-* colors
-
-> [!NOTE]
-> We don't support extraction of images from uploaded documents currently.
-
-### Product manuals
-
-A manual is typically guidance material that accompanies a product. It helps the user to set up, use, maintain, and troubleshoot the product. When QnA Maker processes a manual, it extracts the headings and subheadings as questions and the subsequent content as answers. See an example [here](https://download.microsoft.com/download/2/9/B/29B20383-302C-4517-A006-B0186F04BE28/surface-pro-4-user-guide-EN.pdf).
-
-Below is an example of a manual with an index page and hierarchical content:
-
-> [!div class="mx-imgBorder"]
-> ![Product Manual example for a knowledge base](./media/qnamaker-concepts-datasources/product-manual.png)
-
-> [!NOTE]
-> Extraction works best on manuals that have a table of contents and/or an index page, and a clear structure with hierarchical headings.
-
-### Brochures, guidelines, papers, and other files
-
-Many other types of documents can also be processed to generate QA pairs, provided they have a clear structure and layout. These include: Brochures, guidelines, reports, white papers, scientific papers, policies, books, etc. See an example [here](https://qnamakerstore.blob.core.windows.net/qnamakerdata/docs/Manage%20Azure%20Blob%20Storage.docx).
-
-Below is an example of a semi-structured doc, without an index:
-
-> [!div class="mx-imgBorder"]
-> ![Azure Blob storage semi-structured Doc](./media/qnamaker-concepts-datasources/semi-structured-doc.png)
-
-### Unstructured document support
-
-Custom question answering now supports unstructured documents. A document that does not have its content organized in a well-defined hierarchical manner, is missing a set structure, or has free-flowing content can be considered an unstructured document.
-
-Below is an example of an unstructured PDF document:
-
-> [!div class="mx-imgBorder"]
-> ![Unstructured document example for a knowledge base](./media/qnamaker-concepts-datasources/unstructured-qna-pdf.png)
-
- Currently this functionality is available only via document upload and only for PDF and DOC file formats.
-
-> [!IMPORTANT]
-> Support for unstructured file/content is available only in question answering.
-
-### Structured QnA Document
-
-The format for structured question-answer pairs in DOC files is alternating questions and answers: one question per line, followed by its answer on the next line, as shown below:
-
-```text
-Question1
-
-Answer1
-
-Question2
-
-Answer2
-```
-
-Below is an example of a structured QnA word document:
-
-> [!div class="mx-imgBorder"]
-> ![Structured QnA document example for a knowledge base](./media/qnamaker-concepts-datasources/structured-qna-doc.png)
-
-### Structured *TXT*, *TSV* and *XLS* Files
-
-QnAs in the form of structured *.txt*, *.tsv* or *.xls* files can also be uploaded to QnA Maker to create or augment a knowledge base. These can either be plain text, or can have content in RTF or HTML. [QnA pairs](./How-To/edit-knowledge-base.md#question-and-answer-pairs) have an optional metadata field that can be used to group QnA pairs into categories.
-
-| Question | Answer | Metadata (1 key: 1 value) |
-|--||-|
-| Question1 | Answer1 | <code>Key1:Value1 &#124; Key2:Value2</code> |
-| Question2 | Answer2 | `Key:Value` |
-
-Any additional columns in the source file are ignored.
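-
-If you generate these files programmatically, the following is a minimal Python sketch that writes a tab-separated file with the three columns shown above. The file name, questions, answers, and metadata are illustrative only; compare the result with the sample files referenced in this article before importing.
-
-```python
-# Minimal sketch: write a structured QnA .tsv file with the columns shown above.
-# The file name, rows, and header are illustrative; check the sample files for the exact layout.
-import csv
-
-rows = [
-    ("What are your store hours?", "We are open 9am-5pm, Monday to Friday.", "department:sales"),
-    ("How do I reset my password?", "Use the **Forgot password** link on the sign-in page.", "topic:account"),
-]
-
-with open("structured-qna.tsv", "w", newline="", encoding="utf-8") as f:
-    writer = csv.writer(f, delimiter="\t")
-    writer.writerow(["Question", "Answer", "Metadata"])  # assumed header row
-    writer.writerows(rows)
-```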
-
-#### Example of structured Excel file
-
-Below is an example of a structured QnA *.xls* file, with HTML content:
-
-> [!div class="mx-imgBorder"]
-> ![Structured QnA excel example for a knowledge base](./media/qnamaker-concepts-datasources/structured-qna-xls.png)
-
-#### Example of alternate questions for single answer in Excel file
-
-Below is an example of a structured QnA *.xls* file, with several alternate questions for a single answer:
-
-> [!div class="mx-imgBorder"]
-> ![Example of alternate questions for single answer in Excel file](./media/qnamaker-concepts-datasources/xls-alternate-question-example.png)
-
-After the file is imported, the question-and-answer pair is in the knowledge base as shown below:
-
-> [!div class="mx-imgBorder"]
-> ![Screenshot of alternate questions for single answer imported into knowledge base](./media/qnamaker-concepts-datasources/xls-alternate-question-example-after-import.png)
-
-### Structured data format through import
-
-Importing a knowledge base replaces the content of the existing knowledge base. Import requires a structured .tsv file that contains data source information. This information helps QnA Maker group the question-answer pairs and attribute them to a particular data source. [QnA pairs](./How-To/edit-knowledge-base.md#question-and-answer-pairs) have an optional metadata field that can be used to group QnA pairs into categories.
-
-| Question | Answer | Source| Metadata (1 key: 1 value) |
-|--||-||
-| Question1 | Answer1 | Url1 | <code>Key1:Value1 &#124; Key2:Value2</code> |
-| Question2 | Answer2 | Editorial| `Key:Value` |
-
-
-### Multi-turn document formatting
-
-* Use headings and sub-headings to denote hierarchy. For example, you can use h1 to denote the parent QnA and h2 to denote the QnA that should be taken as a prompt. Use smaller heading sizes to denote subsequent hierarchy. Don't use style, color, or some other mechanism to imply structure in your document; QnA Maker will not extract the multi-turn prompts.
-* First character of heading must be capitalized.
-* Do not end a heading with a question mark, `?`.
-
-**Sample documents**:<br>[Surface Pro (docx)](https://github.com/Azure-Samples/cognitive-services-sample-data-files/blob/master/qna-maker/data-source-formats/multi-turn.docx)<br>[Contoso Benefits (docx)](https://github.com/Azure-Samples/cognitive-services-sample-data-files/blob/master/qna-maker/data-source-formats/Multiturn-ContosoBenefits.docx)<br>[Contoso Benefits (pdf)](https://github.com/Azure-Samples/cognitive-services-sample-data-files/blob/master/qna-maker/data-source-formats/Multiturn-ContosoBenefits.pdf)
-
-## FAQ URLs
-
-QnA Maker can support FAQ web pages in 3 different forms:
-
-* Plain FAQ pages
-* FAQ pages with links
-* FAQ pages with a Topics Homepage
-
-### Plain FAQ pages
-
-This is the most common type of FAQ page, in which the answers immediately follow the questions in the same page.
-
-Below is an example of a plain FAQ page:
-
-> [!div class="mx-imgBorder"]
-> ![Plain FAQ page example for a knowledge base](./media/qnamaker-concepts-datasources/plain-faq.png)
--
-### FAQ pages with links
-
-In this type of FAQ page, questions are aggregated together and are linked to answers that are either in different sections of the same page, or in different pages.
-
-Below is an example of an FAQ page with links in sections that are on the same page:
-
-> [!div class="mx-imgBorder"]
-> ![Section Link FAQ page example for a knowledge base](./media/qnamaker-concepts-datasources/sectionlink-faq.png)
--
-### Parent Topics page links to child answers pages
-
-This type of FAQ has a Topics page where each topic is linked to a corresponding set of questions and answers on a different page. QnA Maker crawls all the linked pages to extract the corresponding questions & answers.
-
-Below is an example of a Topics page with links to FAQ sections in different pages.
-
-> [!div class="mx-imgBorder"]
-> ![Deep link FAQ page example for a knowledge base](./media/qnamaker-concepts-datasources/topics-faq.png)
-
-### Support URLs
-
-QnA Maker can process semi-structured support web pages, such as web articles that would describe how to perform a given task, how to diagnose and resolve a given problem, and what are the best practices for a given process. Extraction works best on content that has a clear structure with hierarchical headings.
-
-> [!NOTE]
-> Extraction for support articles is a new feature and is in early stages. It works best for simple pages, that are well structured, and do not contain complex headers/footers.
-
-> [!div class="mx-imgBorder"]
-> ![QnA Maker supports extraction from semi-structured web pages where a clear structure is presented with hierarchical headings](./media/qnamaker-concepts-datasources/support-web-pages-with-heirarchical-structure.png)
-
-## Import and export knowledge base
-
-**TSV and XLS files**, from exported knowledge bases, can only be used by importing the files from the **Settings** page in the QnA Maker portal. They can't be used as data sources during knowledge base creation or from the **+ Add file** or **+ Add URL** feature on the **Settings** page.
-
-When you import the Knowledge base through these **TSV and XLS files**, the QnA pairs get added to the editorial source and not the sources from which the QnAs were extracted in the exported Knowledge Base.
--
-## Next steps
-
-See a full list of [content types and examples](./concepts/data-sources-and-content.md#content-types-of-documents-you-can-add-to-a-knowledge-base)
cognitive-services Reference Markdown Format https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/QnAMaker/reference-markdown-format.md
- Title: Markdown format - QnA Maker
-description: Following is the list of markdown formats that you can use in QnA Maker's answer text.
--- Previously updated : 03/19/2020--
-# Markdown format supported in QnA Maker answer text
-
-QnA Maker stores answer text as markdown. There are many flavors of markdown. In order to make sure the answer text is returned and displayed correctly, use this reference.
-
-Use the **[CommonMark](https://commonmark.org/help/tutorial/index.html)** tutorial to validate your Markdown. The tutorial has a **Try it** feature for quick copy/paste validation.
-
-## When to use rich-text editing versus markdown
-
-[Rich-text editing](How-To/edit-knowledge-base.md#add-an-editorial-qna-set) of answers allows you, as the author, to use a formatting toolbar to quickly select and format text.
-
-Markdown is a better tool when you need to autogenerate content to create knowledge bases to be imported as part of a CI/CD pipeline or for [batch testing](./index.yml).
-
-## Supported markdown format
-
-Following is the list of markdown formats that you can use in QnA Maker's answer text.
-
-|Purpose|Format|Example markdown|Rendering<br>as displayed in Chat bot|
-|--|--|--|--|
-|A new line between 2 sentences.|`\n\n`|`How can I create a bot with \n\n QnA Maker?`|![format new line between two sentences](./media/qnamaker-concepts-datasources/format-newline.png)|
-|Headers from h1 to h6, the number of `#` denotes which header. 1 `#` is the h1.|`\n# text \n## text \n### text \n####text \n#####text` |`## Creating a bot \n ...text.... \n### Important news\n ...text... \n### Related Information\n ....text...`<br><br>`\n# my h1 \n## my h2\n### my h3 \n#### my h4 \n##### my h5`|![format with markdown headers](./media/qnamaker-concepts-datasources/format-headers.png)<br>![format with markdown headers H1 to H5](./media/qnamaker-concepts-datasources/format-h1-h5.png)|
-|Italics |`*text*`|`How do I create a bot with *QnA Maker*?`|![format with italics](./media/qnamaker-concepts-datasources/format-italics.png)|
-|Strong (bold)|`**text**`|`How do I create a bot with **QnA Maker**?`|![format with strong marking for bold](./media/qnamaker-concepts-datasources/format-strong.png)|
-|URL for link|`[text](https://www.my.com)`|`How do I create a bot with [QnA Maker](https://www.qnamaker.ai)?`|![format for URL (hyperlink)](./media/qnamaker-concepts-datasources/format-url.png)|
-|*URL for public image|`![text](https://www.my.com/image.png)`|`How can I create a bot with ![QnAMaker](https://review.learn.microsoft.com/azure/cognitive-services/qnamaker/media/qnamaker-how-to-key-management/qnamaker-resource-list.png)`|![format for public image URL](./media/qnamaker-concepts-datasources/format-image-url.png)|
-|Strikethrough|`~~text~~`|`some ~~questoins~~ questions need to be asked`|![format for strikethrough](./media/qnamaker-concepts-datasources/format-strikethrough.png)|
-|Bold and italics|`***text***`|`How can I create a ***QnA Maker*** bot?`|![format for bold and italics](./media/qnamaker-concepts-datasources/format-bold-italics.png)|
-|Bold URL for link|`[**text**](https://www.my.com)`|`How do I create a bot with [**QnA Maker**](https://www.qnamaker.ai)?`|![format for bold URL](./media/qnamaker-concepts-datasources/format-bold-url.png)|
-|Italics URL for link|`[*text*](https://www.my.com)`|`How do I create a bot with [*QnA Maker*](https://www.qnamaker.ai)?`|![format for italics URL](./media/qnamaker-concepts-datasources/format-url-italics.png)|
-|Escape markdown symbols|`\*text\*`|`How do I create a bot with \*QnA Maker\*?`|![Format for escape markdown symbols.](./media/qnamaker-concepts-datasources/format-escape-markdown-symbols.png)|
-|Ordered list|`\n 1. item1 \n 1. item2`|`This is an ordered list: \n 1. List item 1 \n 1. List item 2`<br>The preceding example uses automatic numbering built into markdown.<br>`This is an ordered list: \n 1. List item 1 \n 2. List item 2`<br>The preceding example uses explicit numbering.|![format for ordered list](./media/qnamaker-concepts-datasources/format-ordered-list.png)|
-|Unordered list|`\n * item1 \n * item2`<br>or<br>`\n - item1 \n - item2`|`This is an unordered list: \n * List item 1 \n * List item 2`|![format for unordered list](./media/qnamaker-concepts-datasources/format-unordered-list.png)|
-|Nested lists|`\n * Parent1 \n\t * Child1 \n\t * Child2 \n * Parent2`<br><br>`\n * Parent1 \n\t 1. Child1 \n\t * Child2 \n 1. Parent2`<br><br>You can nest ordered and unordered lists together. The tab, `\t`, indicates the indentation level of the child element.|`This is an unordered list: \n * List item 1 \n\t * Child1 \n\t * Child2 \n * List item 2`<br><br>`This is an ordered nested list: \n 1. Parent1 \n\t 1. Child1 \n\t 1. Child2 \n 1. Parent2`|![format for nested unordered list](./media/qnamaker-concepts-datasources/format-nested-unordered-list.png)<br>![format for nested ordered list](./media/qnamaker-concepts-datasources/format-nested-ordered-list.png)|
-
-*QnA Maker doesn't process the image in any way. It is the client application's role to render the image.
-
-If you want to add content by using the update/replace knowledge base APIs and the content/file contains HTML tags, you can preserve the HTML in your file by ensuring that the opening and closing tags are converted to the encoded format.
-
-| Preserve HTML | Representation in the API request | Representation in KB |
-|--||-|
-| Yes | \&lt;br\&gt; | &lt;br&gt; |
-| Yes | \&lt;h3\&gt;header\&lt;/h3\&gt; | &lt;h3&gt;header&lt;/h3&gt; |
-
-Additionally, CR LF (\r\n) is converted to \n in the KB. LF (\n) is kept as is. If you want to escape an escape sequence like \t or \n, you can use a backslash, for example: '\\\\r\\\\n' and '\\\\t'.
-
-## Next steps
-
-Review batch testing [file formats](reference-tsv-format-batch-testing.md).
cognitive-services Reference Private Endpoint https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/QnAMaker/reference-private-endpoint.md
- Title: How to set up a Private Endpoint - QnA Maker
-description: Understand Private Endpoint creation available in QnA Maker managed.
--- Previously updated : 01/12/2021--
-# Private Endpoints
-
-Azure Private Endpoint is a network interface that connects you privately and securely to a service powered by Azure Private Link. Custom question answering now supports creating private endpoints to the Azure Cognitive Search service.
-
-Private endpoints are provided by [Azure Private Link](../../private-link/private-link-overview.md), as a separate service. For more information about costs, see the [pricing page.](https://azure.microsoft.com/pricing/details/private-link/)
-
-## Prerequisites
-> [!div class="checklist"]
-> * If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/cognitive-services/) before you begin.
-> * A [Text Analytics resource](https://portal.azure.com/?quickstart=true#create/Microsoft.CognitiveServicesTextAnalytics) (with the Custom question answering feature) created in the Azure portal. Remember your Azure Active Directory ID, subscription, and the Text Analytics resource name that you selected when you created the resource.
-
-## Steps to enable private endpoint
-1. Assign the *Contributor* role to the Text Analytics service in the Azure Search service instance. This operation requires *Owner* access to the subscription. Go to the Identity tab in the service resource to get the identity.
-
-> [!div class="mx-imgBorder"]
-> ![Text Analytics Identity](../QnAMaker/media/qnamaker-reference-private-endpoints/private-endpoints-identity.png)
-
-2. Add the above identity as *Contributor* by going to the Azure Search service IAM tab.
-
-![Managed service IAM](../QnAMaker/media/qnamaker-reference-private-endpoints/private-endpoint-access-control.png)
-
-3. Click *Add role assignments*, add the identity, and click *Save*.
-
-![Managed role assignment](../QnAMaker/media/qnamaker-reference-private-endpoints/private-endpoint-role-assignment.png)
-
-4. Now, go to the *Networking* tab in the Azure Search service instance and switch the endpoint connectivity data from *Public* to *Private*. This operation is a long-running process and can take up to 30 minutes to complete.
-
-![Managed Azure search networking](../QnAMaker/media/qnamaker-reference-private-endpoints/private-endpoint-networking.png)
-
-5. Go to the *Networking* tab of the Text Analytics service, and under *Allow access from*, select the *Selected Networks and private endpoints* option and click *Save*.
-
-> [!div class="mx-imgBorder"]
-> ![Text Analytics networking](../QnAMaker/media/qnamaker-reference-private-endpoints/private-endpoint-networking-custom-qna.png)
-
-This establishes a private endpoint connection between the Text Analytics service and the Azure Cognitive Search service instance. You can verify the private endpoint connection on the *Networking* tab of the Azure Cognitive Search service instance. Once the whole operation is completed, you can use your Text Analytics service.
-
-![Managed Networking Service](../QnAMaker/media/qnamaker-reference-private-endpoints/private-endpoint-networking-3.png)
--
-## Support details
- * We don't support changes to the Azure Cognitive Search service once you enable private access to your Text Analytics service. If you change the Azure Cognitive Search service via the 'Features' tab after you have enabled private access, the Text Analytics service will become unusable.
- * After establishing the private endpoint connection, if you switch the Azure Cognitive Search service networking to 'Public', you won't be able to use the Text Analytics service. The Azure Search service networking needs to be 'Private' for the private endpoint connection to work.
cognitive-services Reference Tsv Format Batch Testing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/QnAMaker/reference-tsv-format-batch-testing.md
- Title: Batch test TSV format - QnA Maker-
-description: Understand the TSV format for batch testing
----- Previously updated : 10/24/2019--
-# Batch testing TSV format
-
-Batch testing is available from [source code](https://github.com/Azure-Samples/cognitive-services-qnamaker-csharp/tree/master/documentation-samples/batchtesting) or as a [downloadable executable zipped](https://aka.ms/qna_btzip). The format of the command to run the batch test is:
-
-```console
-batchtesting.exe input.tsv https://YOUR-HOST.azurewebsites.net ENDPOINT-KEY out.tsv
-```
-
-|Param|Expected Value|
-|--|--|
-|1|name of tsv file formatted with [TSV input fields](#tsv-input-fields)|
-|2|URI for endpoint, with YOUR-HOST from the Publish page of the QnA Maker portal.|
-|3|ENDPOINT-KEY, found on Publish page of the QnA Maker portal.|
-|4|name of tsv file created by batch test for results.|
-
-Use the following information to understand and implement the TSV format for batch testing.
-
-## TSV input fields
-
-|TSV input file fields|Notes|
-|--|--|
-|KBID|Your KB ID found on the Publish page.|
-|Question|The question a user would enter.|
-|Metadata tags|optional|
-|Top parameter|optional|
-|Expected answer ID|optional|
-
-![Input format for TSV file for batch testing.](media/batch-test/input-tsv-format-batch-test.png)
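-
-To build the input file programmatically, the following is a minimal Python sketch that writes the tab-separated fields in the order listed above (KB ID, question, metadata tags, top, expected answer ID). The KB ID, questions, and IDs are placeholders, and the optional fields can be left empty.
-
-```python
-# Minimal sketch: build a batch-test input .tsv with the fields listed above.
-# The KB ID, questions, and expected answer IDs are placeholders.
-import csv
-
-kb_id = "YOUR-KB-ID"
-test_cases = [
-    # (question, metadata tags, top, expected answer ID) - optional fields may be ""
-    ("How do I reset my password?", "", "", ""),
-    ("What are your store hours?", "department:sales", "3", "42"),
-]
-
-with open("input.tsv", "w", newline="", encoding="utf-8") as f:
-    writer = csv.writer(f, delimiter="\t")
-    for question, metadata, top, expected_id in test_cases:
-        writer.writerow([kb_id, question, metadata, top, expected_id])
-```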
-
-## TSV output fields
-
-|TSV Output file parameters|Notes|
-|--|--|
-|KBID|Your KB ID found on the Publish page.|
-|Question|The question as entered from the input file.|
-|Answer|Top answer from your knowledge base.|
-|Answer ID|Answer ID|
-|Score|Prediction score for answer. |
-|Metadata tags|associated with returned answer|
-|Expected answer ID|optional (only when expected answer ID is given)|
-|Judgment label|optional, values could be: correct or incorrect (only when expected answer is given)|
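-
-When you supply expected answer IDs, the judgment labels in the output file can be summarized with a short script. The following is a minimal Python sketch that assumes the tab-separated column order listed above; the output file name matches the example command earlier in this article.
-
-```python
-# Minimal sketch: summarize judgment labels from a batch-test output .tsv.
-# Assumes the column order listed above, with the judgment label in the last column.
-import csv
-from collections import Counter
-
-counts = Counter()
-with open("out.tsv", newline="", encoding="utf-8") as f:
-    for row in csv.reader(f, delimiter="\t"):
-        if len(row) >= 8 and row[7]:
-            counts[row[7].strip().lower()] += 1
-
-total = sum(counts.values())
-if total:
-    print(f"correct: {counts['correct']} / {total}")
-```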
cognitive-services Troubleshooting https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/QnAMaker/troubleshooting.md
- Title: Troubleshooting - QnA Maker
-description: The curated list of the most frequently asked questions regarding the QnA Maker service will help you adopt the service faster and with better results.
--- Previously updated : 11/02/2021--
-# Troubleshooting for QnA Maker
-
-The curated list of the most frequently asked questions regarding the QnA Maker service will help you adopt the service faster and with better results.
--
-<a name="how-to-get-the-qnamaker-service-hostname"></a>
-
-## Manage predictions
-
-<details>
-<summary><b>How can I improve the throughput performance for query predictions?</b></summary>
-
-**Answer**:
-Throughput performance issues indicate you need to scale up for both your App service and your Cognitive Search. Consider adding a replica to your Cognitive Search to improve performance.
-
-Learn more about [pricing tiers](Concepts/azure-resources.md).
-</details>
-
-<details>
-<summary><b>How to get the QnAMaker service endpoint</b></summary>
-
-**Answer**:
-QnAMaker service endpoint is useful for debugging purposes when you contact QnAMaker Support or UserVoice. The endpoint is a URL in this form: `https://your-resource-name.azurewebsites.net`.
-
-1. Go to your QnAMaker service (resource group) in the [Azure portal](https://portal.azure.com)
-
- ![QnAMaker Azure resource group in Azure portal](./media/qnamaker-how-to-troubleshoot/qnamaker-azure-resourcegroup.png)
-
-1. Select the App Service associated with the QnA Maker resource. Typically, the names are the same.
-
- ![Select QnAMaker App Service](./media/qnamaker-how-to-troubleshoot/qnamaker-azure-appservice.png)
-
-1. The endpoint URL is available in the Overview section
-
- ![QnAMaker endpoint](./media/qnamaker-how-to-troubleshoot/qnamaker-azure-gethostname.png)
-
-</details>
-
-## Manage the knowledge base
-
-<details>
-<summary><b>I accidentally deleted a part of my QnA Maker, what should I do?</b></summary>
-
-**Answer**:
-Do not delete any of the Azure services created along with the QnA Maker resource, such as Search or the Web App. These are necessary for QnA Maker to work; if you delete one, QnA Maker will stop working correctly.
-
-All deletes are permanent, including question and answer pairs, files, URLs, custom questions and answers, knowledge bases, or Azure resources. Make sure you export your knowledge base from the **Settings** page before deleting any part of your knowledge base.
-
-</details>
-
-<details>
-<summary><b>Why is my URL(s)/file(s) not extracting question-answer pairs?</b></summary>
-
-**Answer**:
-It's possible that QnA Maker can't auto-extract some question-and-answer (QnA) content from valid FAQ URLs. In such cases, you can paste the QnA content in a .txt file and see if the tool can ingest it. Alternately, you can editorially add content to your knowledge base through the [QnA Maker portal](https://qnamaker.ai).
-
-</details>
-
-<details>
-<summary><b>How large a knowledge base can I create?</b></summary>
-
-**Answer**:
-The size of the knowledge base depends on the SKU of the Azure Cognitive Search service you choose when creating the QnA Maker service. For more information, see [Azure resources](./concepts/azure-resources.md).
-
-</details>
-
-<details>
-<summary><b>Why can't I see anything in the drop-down when I try to create a new knowledge base?</b></summary>
-
-**Answer**:
-You haven't created any QnA Maker services in Azure yet. To learn how to create one, see [Set up a QnA Maker service](./How-To/set-up-qnamaker-service-azure.md).
-
-</details>
-
-<details>
-<summary><b>How do I share a knowledge base with others?</b></summary>
-
-**Answer**:
-Sharing works at the level of the QnA Maker service; that is, all knowledge bases in the service are shared. To learn how to collaborate on a knowledge base, see the [QnA Maker documentation](./index.yml).
-
-</details>
-
-<details>
-<summary><b>Can you share a knowledge base with a contributor who isn't in the same Azure AD tenant?</b></summary>
-
-**Answer**:
-Sharing is based on Azure role-based access control. If you can share _any_ resource in Azure with another user, you can also share QnA Maker.
-
-</details>
-
-<details>
-<summary><b>If you have an App Service plan with five QnA Maker knowledge bases, can you assign read/write rights to five different users so that each of them can access only one knowledge base?</b></summary>
-
-**Answer**:
-You can share an entire QnAMaker service, not individual knowledge bases.
-
-</details>
-
-<details>
-<summary><b>How can I change the default message when no good match is found?</b></summary>
-
-**Answer**:
-The default message is part of the settings in your App Service.
-
-- Go to your App Service resource in the Azure portal.
-
-![qnamaker appservice](./media/qnamaker-faq/qnamaker-resource-list-appservice.png)
-
-- Select the **Settings** option.
-
-![qnamaker appservice settings](./media/qnamaker-faq/qnamaker-appservice-settings.png)
-
-- Change the value of the **DefaultAnswer** setting.
-- Restart your App Service.
-
-![qnamaker appservice restart](./media/qnamaker-faq/qnamaker-appservice-restart.png)
-
-</details>
-
-<details>
-<summary><b>Why is my SharePoint link not getting extracted?</b></summary>
-
-**Answer**:
-See [Data source locations](./concepts/data-sources-and-content.md#data-source-locations) for more information.
-
-</details>
-
-<details>
-<summary><b>The updates that I made to my knowledge base are not reflected on publish. Why not?</b></summary>
-
-**Answer**:
-Every edit operation, whether in a table update, test, or setting, needs to be saved before it can be published. Be sure to select the **Save and train** button after every edit operation.
-
-</details>
-
-<details>
-<summary><b>Does the knowledge base support rich data or multimedia?</b></summary>
-
-**Answer**:
-
-#### Multimedia auto-extraction for files and URLs
-
-* URLs - limited HTML-to-Markdown conversion capability.
-* Files - not supported.
-
-#### Answer text in markdown
-Once QnA pairs are in the knowledge base, you can edit an answer's markdown text to include links to media available from public URLs.
--
-</details>
-
-<details>
-<summary><b>Does QnA Maker support non-English languages?</b></summary>
-
-**Answer**:
-See more details about [supported languages](./overview/language-support.md).
-
-If you have content from multiple languages, be sure to create a separate service for each language.
-
-</details>
-
-## Manage service
-
-<details>
-<summary><b>When should I restart my app service?</b></summary>
-
-**Answer**:
-Restart your app service when the caution icon is next to the version value for the knowledge base in the **Endpoint keys** table on the **User Settings** [page](https://www.qnamaker.ai/UserSettings).
-
-</details>
-
-<details>
-<summary><b>I deleted my existing Search service. How can I fix this?</b></summary>
-
-**Answer**:
-If you delete an Azure Cognitive Search index, the operation is final and the index cannot be recovered.
-
-</details>
-
-<details>
-<summary><b>I deleted my `testkb` index in my Search service. How can I fix this?</b></summary>
-
-**Answer**:
-If you deleted the `testkb` index in your Search service, you can restore the data from the last published knowledge base by using the recovery tool [RestoreTestKBIndex](https://github.com/pchoudhari/QnAMakerBackupRestore/tree/master/RestoreTestKBFromProd) available on GitHub.
-
-</details>
-
-<details>
-<summary><b>I am receiving the following error: Please check if QnA Maker App service's CORS settings allow https://www.qnamaker.ai or if there are any organization specific network restrictions. How can I resolve this?</b></summary>
-
-**Answer**:
-In the API section of the App Service pane, update the CORS setting to `*` or `https://www.qnamaker.ai`. If this doesn't resolve the issue, check for any organization-specific restrictions.
-
-</details>
-
-<details>
-<summary><b>When should I refresh my endpoint keys?</b></summary>
-
-**Answer**:
-Refresh your endpoint keys if you suspect that they have been compromised.
-
-</details>
-
-<details>
-<summary><b>Can I use the same Azure Cognitive Search resource for knowledge bases using multiple languages?</b></summary>
-
-**Answer**:
-To use multiple languages and multiple knowledge bases, create a QnA Maker resource for each language. This creates a separate Azure Cognitive Search service per language. Mixing knowledge bases in different languages in a single Azure Cognitive Search service degrades the relevance of results.
-
-</details>
-
-<details>
-<summary><b>How can I change the name of the Azure Cognitive Search resource used by QnA Maker?</b></summary>
-
-**Answer**:
-The name of the Azure Cognitive Search resource is the QnA Maker resource name with some random letters appended at the end. This makes it hard to distinguish between multiple Search resources for QnA Maker. Create a separate search service (named the way you want) and connect it to your QnA Maker service. The steps are similar to those for [upgrading an Azure Cognitive Search service](How-To/set-up-qnamaker-service-azure.md#upgrade-the-azure-cognitive-search-service).
-
-</details>
-
-<details>
-<summary><b>When QnA Maker returns `Runtime core is not initialized,` how do I fix it?</b></summary>
-
-**Answer**:
-The disk space for your app service might be full. To free up disk space, follow these steps:
-
-1. In the [Azure portal](https://portal.azure.com), select your QnA Maker's App service, then stop the service.
-1. While still on the App service, select **Development Tools**, then **Advanced Tools**, then **Go**. This opens a new browser window.
-1. Select **Debug console**, then **CMD** to open a command-line tool.
-1. Navigate to the _site/wwwroot/Data/QnAMaker/_ directory.
-1. Remove all the folders whose name begins with `rd`.
-
- **Do not delete** the following:
-
- * KbIdToRankerMappings.txt file
- * EndpointSettings.json file
- * EndpointKeys folder
-
-1. Start the App service.
-1. Access your knowledge base to verify it works now.
-
-</details>
-<details>
-<summary><b>Why is my Application Insights not working?</b></summary>
-
-**Answer**:
-Check and update the following settings to fix the issue:
-
-1. In **App Service** > **Settings** > **Configuration** > **Application settings**, verify that the **UserAppInsightsKey** setting is configured and set to the instrumentation key (GUID) shown on the **Overview** tab of the corresponding Application Insights resource.
-
-1. In **App Service** > **Settings** > **Application Insights**, make sure Application Insights is enabled and connected to the corresponding Application Insights resource.
-
-</details>
-
-<details>
-<summary><b>My Application Insights is enabled but why is it not working properly?</b></summary>
-
-**Answer**:
-Follow these steps:
-
-1. Copy the value of the "APPINSIGHTS_INSTRUMENTATIONKEY" setting into the "UserAppInsightsKey" setting, overwriting any value that's already there.
-
-1. If the "UserAppInsightsKey" key doesn't exist in the app settings, add a new key with that name and copy the value into it.
-
-1. Save the settings. The app service restarts automatically, which should resolve the issue.
-
-</details>
-
-## Integrate with other services including Bots
-
-<details>
-<summary><b>Do I need to use Bot Framework in order to use QnA Maker?</b></summary>
-
-**Answer**:
-No, you do not need to use the [Bot Framework](https://github.com/Microsoft/botbuilder-dotnet) with QnA Maker. However, QnA Maker is offered as one of several templates in [Azure Bot Service](/azure/bot-service/). Bot Service enables rapid intelligent bot development through Microsoft Bot Framework, and it runs in a serverless environment.
-
-</details>
-
-<details>
-<summary><b>How can I create a new bot with QnA Maker?</b></summary>
-
-**Answer**:
-Follow the instructions in [this documentation](./Quickstarts/create-publish-knowledge-base.md) to create your bot with Azure Bot Service.
-
-</details>
-
-<details>
-<summary><b>How do I use a different knowledge base with an existing Azure bot service?</b></summary>
-
-**Answer**:
-You need to have the following information about your knowledge base:
-
-* Knowledge base ID.
-* Knowledge base's published endpoint custom subdomain name, known as `host`, found on **Settings** page after you publish.
-* Knowledge base's published endpoint key - found on **Settings** page after you publish.
-
-With this information, go to your bot's app service in the Azure portal. Under **Settings -> Configuration -> Application settings**, change those values.
-
-The knowledge base's endpoint key is labeled `QnAAuthkey` in the ABS service.
-
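-As a rough illustration of how a client such as a bot uses these three values, the following C# sketch calls the knowledge base's published GenerateAnswer endpoint directly; the host, knowledge base ID, endpoint key, and question shown are placeholders.
-
-```csharp
-using System;
-using System.Net.Http;
-using System.Net.Http.Headers;
-using System.Text;
-using System.Threading.Tasks;
-
-class KnowledgeBaseClient
-{
-    static async Task Main()
-    {
-        // Placeholder values: copy these from the knowledge base Settings page after you publish.
-        var host = "https://your-app-service.azurewebsites.net/qnamaker"; // published endpoint host
-        var kbId = "your-knowledge-base-id";
-        var endpointKey = "your-endpoint-key"; // the value labeled QnAAuthkey in the bot's application settings
-
-        using var client = new HttpClient();
-        var request = new HttpRequestMessage(HttpMethod.Post, $"{host}/knowledgebases/{kbId}/generateAnswer")
-        {
-            Content = new StringContent("{\"question\": \"How do I reset my password?\"}", Encoding.UTF8, "application/json")
-        };
-        request.Headers.Authorization = new AuthenticationHeaderValue("EndpointKey", endpointKey);
-
-        var response = await client.SendAsync(request);
-        Console.WriteLine(await response.Content.ReadAsStringAsync());
-    }
-}
-```
-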
-</details>
-
-<details>
-<summary><b>Can two or more client applications share a knowledge base?</b></summary>
-
-**Answer**:
-Yes, the knowledge base can be queried from any number of clients. If responses from the knowledge base are slow or time out, consider upgrading the service tier of the App Service associated with the knowledge base.
-
-</details>
-
-<details>
-<summary><b>How do I embed the QnA Maker service in my website?</b></summary>
-
-**Answer**:
-Follow these steps to embed the QnA Maker service as a web-chat control in your website:
-
-1. Create your FAQ bot by following the instructions [here](./Quickstarts/create-publish-knowledge-base.md).
-2. Enable the web chat by following the steps [here](/azure/bot-service/bot-service-channel-connect-webchat).
-
-</details>
-
-## Data storage
-
-<details>
-<summary><b>What data is stored and where is it stored?</b></summary>
-
-**Answer**:
-
-When you created your QnA Maker service, you selected an Azure region. Your knowledge bases and log files are stored in this region.
-
-</details>
cognitive-services Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/QnAMaker/whats-new.md
- Title: What's new in QnA Maker service?-
-description: This article contains news about QnA Maker.
----- Previously updated : 07/16/2020---
-# What's new in QnA Maker
-
-Learn what's new in the service. These items may include release notes, videos, blog posts, and other types of information. Bookmark this page to stay up to date with the service.
--
-## Release notes
-
-Learn what's new with QnA Maker.
-
-### November 2021
-* [Question answering](../language-service/question-answering/overview.md) is now [generally available](https://techcommunity.microsoft.com/t5/ai-cognitive-services-blog/question-answering-feature-is-generally-available/ba-p/2899497) as a feature within [Azure Cognitive Service for Language](https://portal.azure.com/?quickstart=true#create/Microsoft.CognitiveServicesTextAnalytics).
-* Question answering is powered by **state-of-the-art transformer models** and [Turing](https://turing.microsoft.com/) Natural Language models.
-* The QnA Maker product will be [retired](https://azure.microsoft.com/updates/azure-qna-maker-will-be-retired-on-31-march-2025/) on 31 March 2025, and no new QnA Maker resources can be created beginning 1 October 2022.
-* All existing QnA Maker customers are strongly advised to [migrate](../language-service/question-answering/how-to/migrate-qnamaker.md) their QnA Maker knowledge bases to Question answering as soon as possible to continue experiencing the best of QnA capabilities offered by Azure.
-
-### May 2021
-
-* QnA Maker managed has been re-introduced as Custom question answering feature in [Language resource](https://portal.azure.com/?quickstart=true#create/Microsoft.CognitiveServicesTextAnalytics).
-* Custom question answering supports unstructured documents.
-* [Prebuilt API](how-to/using-prebuilt-api.md) has been introduced to generate answers for user queries from document text passed via the API.
-
-### November 2020
-
-* New version of QnA Maker launched in free Public Preview. Read more [here](https://techcommunity.microsoft.com/t5/azure-ai/introducing-qna-maker-managed-now-in-public-preview/ba-p/1845575).
-
-> [!VIDEO https://learn.microsoft.com/Shows/AI-Show/Introducing-QnA-managed-Now-in-Public-Preview/player]
-* Simplified resource creation
-* End to End region support
-* Deep learnt ranking model
-* Machine Reading Comprehension for precise answers
-
-### July 2020
-
-* [Metadata: `OR` logical combination of multiple metadata pairs](how-to/query-knowledge-base-with-metadata.md#logical-or-using-strictfilterscompoundoperationtype-property)
-* [Steps](how-to/network-isolation.md) to configure Cognitive Search endpoints to be private, but still accessible to QnA Maker.
-* Free Cognitive Search resources are removed after [90 days of inactivity](how-to/set-up-qnamaker-service-azure.md#inactivity-policy-for-free-search-resources).
-
-### June 2020
-
-* Updated [Power Virtual Agent](tutorials/integrate-with-power-virtual-assistant-fallback-topic.md) tutorial for faster and easier steps
-
-### May 2020
-
-* [Azure role-based access control (Azure RBAC)](concepts/role-based-access-control.md)
-* [Rich-text editing](how-to/edit-knowledge-base.md#rich-text-editing-for-answer) for answers
-
-### March 2020
-
-* TLS 1.2 is now enforced for all HTTP requests to this service. For more information, see [Azure Cognitive Services security](../cognitive-services-security.md).
-
-### February 2020
-
-* [NPM package](https://www.npmjs.com/package/@azure/cognitiveservices-qnamaker) with GenerateAnswer API
-
-### November 2019
-
-* [US Government cloud support](../../azure-government/compare-azure-government-global-azure.md#guidance-for-developers) for QnA Maker
-* [Multi-turn](./how-to/multi-turn.md) feature in GA
-* [Chit-chat support](./how-to/chit-chat-knowledge-base.md#language-support) available in tier-1 languages
-
-### October 2019
-
-* Explicitly setting the language for all knowledge bases in the QnA Maker service.
-
-### September 2019
-
-* Import and export with XLS file format.
-
-### June 2019
-
-* Improved [ranker model](concepts/query-knowledge-base.md#ranker-process) for French, Italian, German, Spanish, Portuguese
-
-### April 2019
-
-* Support website content extraction
-* [SharePoint document](how-to/add-sharepoint-datasources.md) support from authenticated access
-
-### March 2019
-
-* [Active learning](how-to/improve-knowledge-base.md) provides suggestions for new question alternatives based on real user questions
-* Improved natural language processing (NLP) [ranker](concepts/query-knowledge-base.md#ranker-process) model for English
-
-> [!div class="nextstepaction"]
-> [Create a QnA Maker service](how-to/set-up-qnamaker-service-azure.md)
-
-## Cognitive Service updates
-
-[Azure update announcements for Cognitive Services](https://azure.microsoft.com/updates/?product=cognitive-services)
cognitive-services Audio Processing Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/audio-processing-overview.md
- Title: Audio processing - Speech service-
-description: An overview of audio processing and capabilities of the Microsoft Audio Stack.
------ Previously updated : 09/07/2022----
-# Audio processing
-
-The Microsoft Audio Stack is a set of enhancements optimized for speech processing scenarios, such as keyword recognition and speech recognition. It consists of various enhancements and components that operate on the input audio signal:
-
-* **Noise suppression** - Reduce the level of background noise.
-* **Beamforming** - Localize the origin of sound and optimize the audio signal using multiple microphones.
-* **Dereverberation** - Reduce the reflections of sound from surfaces in the environment.
-* **Acoustic echo cancellation** - Suppress audio being played out of the device while microphone input is active.
-* **Automatic gain control** - Dynamically adjust the person's voice level to account for soft speakers, long distances, or non-calibrated microphones.
-
-[ ![Block diagram of Microsoft Audio Stack's enhancements.](media/audio-processing/mas-block-diagram.png) ](media/audio-processing/mas-block-diagram.png#lightbox)
-
-Different scenarios and use-cases can require different optimizations that influence the behavior of the audio processing stack. For example, in telecommunications scenarios such as telephone calls, it is acceptable to have minor distortions in the audio signal after processing has been applied. This is because humans can continue to understand the speech with high accuracy. However, it is unacceptable and disruptive for a person to hear their own voice in an echo. This contrasts with speech processing scenarios, where distorted audio can adversely impact a machine-learned speech recognition model's accuracy, but it is acceptable to have minor levels of echo residual.
-
-Processing is performed fully locally where the Speech SDK is being used. No audio data is streamed to Microsoft's cloud services for processing by the Microsoft Audio Stack. The only exception to this is for the Conversation Transcription Service, where raw audio is sent to Microsoft's cloud services for processing.
-
-The Microsoft Audio Stack also powers a wide range of Microsoft products:
-* **Windows** - Microsoft Audio Stack is the default speech processing pipeline when using the Speech audio category.
-* **Microsoft Teams Displays and Microsoft Teams Rooms devices** - Microsoft Teams Displays and Teams Rooms devices use the Microsoft Audio Stack to enable high quality hands-free, voice-based experiences with Cortana.
-
-## Speech SDK integration
-
-The Speech SDK integrates Microsoft Audio Stack (MAS), allowing any application or product to use its audio processing capabilities on input audio. Some of the key Microsoft Audio Stack features available via the Speech SDK include:
-* **Real-time microphone input & file input** - Microsoft Audio Stack processing can be applied to real-time microphone input, streams, and file-based input.
-* **Selection of enhancements** - To allow for full control of your scenario, the SDK allows you to disable individual enhancements like dereverberation, noise suppression, automatic gain control, and acoustic echo cancellation. For example, if your scenario does not include rendering output audio that needs to be suppressed from the input audio, you have the option to disable acoustic echo cancellation.
-* **Custom microphone geometries** - The SDK allows you to provide your own custom microphone geometry information, in addition to supporting preset geometries like linear two-mic, linear four-mic, and circular 7-mic arrays (see more information on supported preset geometries at [Microphone array recommendations](speech-sdk-microphone.md#microphone-geometry)).
-* **Beamforming angles** - Specific beamforming angles can be provided to optimize audio input originating from a predetermined location, relative to the microphones.
-
-## Minimum requirements to use Microsoft Audio Stack
-
-Microsoft Audio Stack can be used by any product or application that can meet the following requirements:
-* **Raw audio** - Microsoft Audio Stack requires raw (unprocessed) audio as input to yield the best results. Providing audio that is already processed limits the audio stack's ability to perform enhancements at high quality.
-* **Microphone geometries** - Geometry information about each microphone on the device is required to correctly perform all enhancements offered by the Microsoft Audio Stack. Information includes the number of microphones, their physical arrangement, and coordinates. Up to 16 input microphone channels are supported.
-* **Loopback or reference audio** - An audio channel that represents the audio being played out of the device is required to perform acoustic echo cancellation.
-* **Input format** - Microsoft Audio Stack supports downsampling for sample rates that are integral multiples of 16 kHz. A minimum sampling rate of 16 kHz is required. Additionally, the following formats are supported: 32-bit IEEE little endian float, 32-bit little endian signed int, 24-bit little endian signed int, 16-bit little endian signed int, and 8-bit signed int. A sketch of providing such input follows this list.
-
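-As a minimal sketch of meeting these input requirements with the Speech SDK in C#, the following pushes raw 16 kHz, 16-bit, single-channel PCM from your own capture pipeline; adjust the sample rate and channel count to match your device, and treat the enhancement options shown as an assumption rather than a required configuration.
-
-```csharp
-using Microsoft.CognitiveServices.Speech;
-using Microsoft.CognitiveServices.Speech.Audio;
-
-var speechConfig = SpeechConfig.FromSubscription("YourSubscriptionKey", "YourServiceRegion");
-
-// Raw (unprocessed) PCM: 16 kHz sample rate, 16 bits per sample, 1 channel.
-// For multi-microphone devices, set the channel count to match the device (up to 16 channels).
-var rawFormat = AudioStreamFormat.GetWaveFormatPCM(16000, 16, 1);
-var pushStream = AudioInputStream.CreatePushStream(rawFormat);
-
-var audioProcessingOptions = AudioProcessingOptions.Create(AudioProcessingConstants.AUDIO_INPUT_PROCESSING_ENABLE_DEFAULT);
-var audioInput = AudioConfig.FromStreamInput(pushStream, audioProcessingOptions);
-
-var recognizer = new SpeechRecognizer(speechConfig, audioInput);
-```
-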
-## Next steps
-[Use the Speech SDK for audio processing](audio-processing-speech-sdk.md)
cognitive-services Audio Processing Speech Sdk https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/audio-processing-speech-sdk.md
- Title: Use the Microsoft Audio Stack (MAS) - Speech service-
-description: An overview of the features, capabilities, and restrictions for audio processing using the Speech Software Development Kit (SDK).
------ Previously updated : 09/16/2022----
-# Use the Microsoft Audio Stack (MAS)
-
-The Speech SDK integrates Microsoft Audio Stack (MAS), allowing any application or product to use its audio processing capabilities on input audio. See the [Audio processing](audio-processing-overview.md) documentation for an overview.
-
-In this article, you learn how to use the Microsoft Audio Stack (MAS) with the Speech SDK.
-
-## Default options
-
-This sample shows how to use MAS with all default enhancement options on input from the device's default microphone.
-
-### [C#](#tab/csharp)
-
-```csharp
-var speechConfig = SpeechConfig.FromSubscription("YourSubscriptionKey", "YourServiceRegion");
-
-var audioProcessingOptions = AudioProcessingOptions.Create(AudioProcessingConstants.AUDIO_INPUT_PROCESSING_ENABLE_DEFAULT);
-var audioInput = AudioConfig.FromDefaultMicrophoneInput(audioProcessingOptions);
-
-var recognizer = new SpeechRecognizer(speechConfig, audioInput);
-```
-
-### [C++](#tab/cpp)
-
-```cpp
-auto speechConfig = SpeechConfig::FromSubscription("YourSubscriptionKey", "YourServiceRegion");
-
-auto audioProcessingOptions = AudioProcessingOptions::Create(AUDIO_INPUT_PROCESSING_ENABLE_DEFAULT);
-auto audioInput = AudioConfig::FromDefaultMicrophoneInput(audioProcessingOptions);
-
-auto recognizer = SpeechRecognizer::FromConfig(speechConfig, audioInput);
-```
-
-### [Java](#tab/java)
-
-```java
-SpeechConfig speechConfig = SpeechConfig.fromSubscription("YourSubscriptionKey", "YourServiceRegion");
-
-AudioProcessingOptions audioProcessingOptions = AudioProcessingOptions.create(AudioProcessingConstants.AUDIO_INPUT_PROCESSING_ENABLE_DEFAULT);
-AudioConfig audioInput = AudioConfig.fromDefaultMicrophoneInput(audioProcessingOptions);
-
-SpeechRecognizer recognizer = new SpeechRecognizer(speechConfig, audioInput);
-```
--
-## Preset microphone geometry
-
-This sample shows how to use MAS with a predefined microphone geometry on a specified audio input device. In this example:
-* **Enhancement options** - The default enhancements are applied on the input audio stream.
-* **Preset geometry** - The preset geometry represents a linear 2-microphone array.
-* **Audio input device** - The audio input device ID is `hw:0,1`. For more information on how to select an audio input device, see [How to: Select an audio input device with the Speech SDK](how-to-select-audio-input-devices.md).
-
-### [C#](#tab/csharp)
-
-```csharp
-var speechConfig = SpeechConfig.FromSubscription("YourSubscriptionKey", "YourServiceRegion");
-
-var audioProcessingOptions = AudioProcessingOptions.Create(AudioProcessingConstants.AUDIO_INPUT_PROCESSING_ENABLE_DEFAULT, PresetMicrophoneArrayGeometry.Linear2);
-var audioInput = AudioConfig.FromMicrophoneInput("hw:0,1", audioProcessingOptions);
-
-var recognizer = new SpeechRecognizer(speechConfig, audioInput);
-```
-
-### [C++](#tab/cpp)
-
-```cpp
-auto speechConfig = SpeechConfig::FromSubscription("YourSubscriptionKey", "YourServiceRegion");
-
-auto audioProcessingOptions = AudioProcessingOptions::Create(AUDIO_INPUT_PROCESSING_ENABLE_DEFAULT, PresetMicrophoneArrayGeometry::Linear2);
-auto audioInput = AudioConfig::FromMicrophoneInput("hw:0,1", audioProcessingOptions);
-
-auto recognizer = SpeechRecognizer::FromConfig(speechConfig, audioInput);
-```
-
-### [Java](#tab/java)
-
-```java
-SpeechConfig speechConfig = SpeechConfig.fromSubscription("YourSubscriptionKey", "YourServiceRegion");
-
-AudioProcessingOptions audioProcessingOptions = AudioProcessingOptions.create(AudioProcessingConstants.AUDIO_INPUT_PROCESSING_ENABLE_DEFAULT, PresetMicrophoneArrayGeometry.Linear2);
-AudioConfig audioInput = AudioConfig.fromMicrophoneInput("hw:0,1", audioProcessingOptions);
-
-SpeechRecognizer recognizer = new SpeechRecognizer(speechConfig, audioInput);
-```
--
-## Custom microphone geometry
-
-This sample shows how to use MAS with a custom microphone geometry on a specified audio input device. In this example:
-* **Enhancement options** - The default enhancements will be applied on the input audio stream.
-* **Custom geometry** - A custom microphone geometry for a 7-microphone array is provided via the microphone coordinates. The units for coordinates are millimeters.
-* **Audio input** - The audio input is from a file, where the audio within the file is expected from an audio input device corresponding to the custom geometry specified.
-
-### [C#](#tab/csharp)
-
-```csharp
-var speechConfig = SpeechConfig.FromSubscription("YourSubscriptionKey", "YourServiceRegion");
-
-MicrophoneCoordinates[] microphoneCoordinates = new MicrophoneCoordinates[7]
-{
- new MicrophoneCoordinates(0, 0, 0),
- new MicrophoneCoordinates(40, 0, 0),
- new MicrophoneCoordinates(20, -35, 0),
- new MicrophoneCoordinates(-20, -35, 0),
- new MicrophoneCoordinates(-40, 0, 0),
- new MicrophoneCoordinates(-20, 35, 0),
- new MicrophoneCoordinates(20, 35, 0)
-};
-var microphoneArrayGeometry = new MicrophoneArrayGeometry(MicrophoneArrayType.Planar, microphoneCoordinates);
-var audioProcessingOptions = AudioProcessingOptions.Create(AudioProcessingConstants.AUDIO_INPUT_PROCESSING_ENABLE_DEFAULT, microphoneArrayGeometry, SpeakerReferenceChannel.LastChannel);
-var audioInput = AudioConfig.FromWavFileInput("katiesteve.wav", audioProcessingOptions);
-
-var recognizer = new SpeechRecognizer(speechConfig, audioInput);
-```
-
-### [C++](#tab/cpp)
-
-```cpp
-auto speechConfig = SpeechConfig::FromSubscription("YourSubscriptionKey", "YourServiceRegion");
-
-MicrophoneArrayGeometry microphoneArrayGeometry
-{
- MicrophoneArrayType::Planar,
- { { 0, 0, 0 }, { 40, 0, 0 }, { 20, -35, 0 }, { -20, -35, 0 }, { -40, 0, 0 }, { -20, 35, 0 }, { 20, 35, 0 } }
-};
-auto audioProcessingOptions = AudioProcessingOptions::Create(AUDIO_INPUT_PROCESSING_ENABLE_DEFAULT, microphoneArrayGeometry, SpeakerReferenceChannel::LastChannel);
-auto audioInput = AudioConfig::FromWavFileInput("katiesteve.wav", audioProcessingOptions);
-
-auto recognizer = SpeechRecognizer::FromConfig(speechConfig, audioInput);
-```
-
-### [Java](#tab/java)
-
-```java
-SpeechConfig speechConfig = SpeechConfig.fromSubscription("YourSubscriptionKey", "YourServiceRegion");
-
-MicrophoneCoordinates[] microphoneCoordinates = new MicrophoneCoordinates[7];
-microphoneCoordinates[0] = new MicrophoneCoordinates(0, 0, 0);
-microphoneCoordinates[1] = new MicrophoneCoordinates(40, 0, 0);
-microphoneCoordinates[2] = new MicrophoneCoordinates(20, -35, 0);
-microphoneCoordinates[3] = new MicrophoneCoordinates(-20, -35, 0);
-microphoneCoordinates[4] = new MicrophoneCoordinates(-40, 0, 0);
-microphoneCoordinates[5] = new MicrophoneCoordinates(-20, 35, 0);
-microphoneCoordinates[6] = new MicrophoneCoordinates(20, 35, 0);
-MicrophoneArrayGeometry microphoneArrayGeometry = new MicrophoneArrayGeometry(MicrophoneArrayType.Planar, microphoneCoordinates);
-AudioProcessingOptions audioProcessingOptions = AudioProcessingOptions.create(AudioProcessingConstants.AUDIO_INPUT_PROCESSING_ENABLE_DEFAULT, microphoneArrayGeometry, SpeakerReferenceChannel.LastChannel);
-AudioConfig audioInput = AudioConfig.fromWavFileInput("katiesteve.wav", audioProcessingOptions);
-
-SpeechRecognizer recognizer = new SpeechRecognizer(speechConfig, audioInput);
-```
--
-## Select enhancements
-
-This sample shows how to use MAS with a custom set of enhancements on the input audio. By default, all enhancements are enabled but there are options to disable dereverberation, noise suppression, automatic gain control, and echo cancellation individually by using `AudioProcessingOptions`.
-
-In this example:
-* **Enhancement options** - Echo cancellation and noise suppression will be disabled, while all other enhancements remain enabled.
-* **Audio input device** - The audio input device is the default microphone of the device.
-
-### [C#](#tab/csharp)
-
-```csharp
-var speechConfig = SpeechConfig.FromSubscription("YourSubscriptionKey", "YourServiceRegion");
-
-var audioProcessingOptions = AudioProcessingOptions.Create(AudioProcessingConstants.AUDIO_INPUT_PROCESSING_DISABLE_ECHO_CANCELLATION | AudioProcessingConstants.AUDIO_INPUT_PROCESSING_DISABLE_NOISE_SUPPRESSION | AudioProcessingConstants.AUDIO_INPUT_PROCESSING_ENABLE_DEFAULT);
-var audioInput = AudioConfig.FromDefaultMicrophoneInput(audioProcessingOptions);
-
-var recognizer = new SpeechRecognizer(speechConfig, audioInput);
-```
-
-### [C++](#tab/cpp)
-
-```cpp
-auto speechConfig = SpeechConfig::FromSubscription("YourSubscriptionKey", "YourServiceRegion");
-
-auto audioProcessingOptions = AudioProcessingOptions::Create(AUDIO_INPUT_PROCESSING_DISABLE_ECHO_CANCELLATION | AUDIO_INPUT_PROCESSING_DISABLE_NOISE_SUPPRESSION | AUDIO_INPUT_PROCESSING_ENABLE_DEFAULT);
-auto audioInput = AudioConfig::FromDefaultMicrophoneInput(audioProcessingOptions);
-
-auto recognizer = SpeechRecognizer::FromConfig(speechConfig, audioInput);
-```
-
-### [Java](#tab/java)
-
-```java
-SpeechConfig speechConfig = SpeechConfig.fromSubscription("YourSubscriptionKey", "YourServiceRegion");
-
-AudioProcessingOptions audioProcessingOptions = AudioProcessingOptions.create(AudioProcessingConstants.AUDIO_INPUT_PROCESSING_DISABLE_ECHO_CANCELLATION | AudioProcessingConstants.AUDIO_INPUT_PROCESSING_DISABLE_NOISE_SUPPRESSION | AudioProcessingConstants.AUDIO_INPUT_PROCESSING_ENABLE_DEFAULT);
-AudioConfig audioInput = AudioConfig.fromDefaultMicrophoneInput(audioProcessingOptions);
-
-SpeechRecognizer recognizer = new SpeechRecognizer(speechConfig, audioInput);
-```
--
-## Specify beamforming angles
-
-This sample shows how to use MAS with a custom microphone geometry and beamforming angles on a specified audio input device. In this example:
-* **Enhancement options** - The default enhancements will be applied on the input audio stream.
-* **Custom geometry** - A custom microphone geometry for a 4-microphone array is provided by specifying the microphone coordinates. The units for coordinates are millimeters.
-* **Beamforming angles** - Beamforming angles are specified to optimize for audio originating in that range. The units for angles are degrees. In the sample code below, the start angle is set to 70 degrees and the end angle is set to 110 degrees.
-* **Audio input** - The audio input is from a push stream, where the audio within the stream is expected from an audio input device corresponding to the custom geometry specified.
-
-### [C#](#tab/csharp)
-
-```csharp
-var speechConfig = SpeechConfig.FromSubscription("YourSubscriptionKey", "YourServiceRegion");
-
-MicrophoneCoordinates[] microphoneCoordinates = new MicrophoneCoordinates[4]
-{
- new MicrophoneCoordinates(-60, 0, 0),
- new MicrophoneCoordinates(-20, 0, 0),
- new MicrophoneCoordinates(20, 0, 0),
- new MicrophoneCoordinates(60, 0, 0)
-};
-var microphoneArrayGeometry = new MicrophoneArrayGeometry(MicrophoneArrayType.Linear, 70, 110, microphoneCoordinates);
-var audioProcessingOptions = AudioProcessingOptions.Create(AudioProcessingConstants.AUDIO_INPUT_PROCESSING_ENABLE_DEFAULT, microphoneArrayGeometry, SpeakerReferenceChannel.LastChannel);
-var pushStream = AudioInputStream.CreatePushStream();
-var audioInput = AudioConfig.FromStreamInput(pushStream, audioProcessingOptions);
-
-var recognizer = new SpeechRecognizer(speechConfig, audioInput);
-```
-
-### [C++](#tab/cpp)
-
-```cpp
-auto speechConfig = SpeechConfig::FromSubscription("YourSubscriptionKey", "YourServiceRegion");
-
-MicrophoneArrayGeometry microphoneArrayGeometry
-{
- MicrophoneArrayType::Linear,
- 70,
- 110,
- { { -60, 0, 0 }, { -20, 0, 0 }, { 20, 0, 0 }, { 60, 0, 0 } }
-};
-auto audioProcessingOptions = AudioProcessingOptions::Create(AUDIO_INPUT_PROCESSING_ENABLE_DEFAULT, microphoneArrayGeometry, SpeakerReferenceChannel::LastChannel);
-auto pushStream = AudioInputStream::CreatePushStream();
-auto audioInput = AudioConfig::FromStreamInput(pushStream, audioProcessingOptions);
-
-auto recognizer = SpeechRecognizer::FromConfig(speechConfig, audioInput);
-```
-
-### [Java](#tab/java)
-
-```java
-SpeechConfig speechConfig = SpeechConfig.fromSubscription("YourSubscriptionKey", "YourServiceRegion");
-
-MicrophoneCoordinates[] microphoneCoordinates = new MicrophoneCoordinates[4];
-microphoneCoordinates[0] = new MicrophoneCoordinates(-60, 0, 0);
-microphoneCoordinates[1] = new MicrophoneCoordinates(-20, 0, 0);
-microphoneCoordinates[2] = new MicrophoneCoordinates(20, 0, 0);
-microphoneCoordinates[3] = new MicrophoneCoordinates(60, 0, 0);
-MicrophoneArrayGeometry microphoneArrayGeometry = new MicrophoneArrayGeometry(MicrophoneArrayType.Linear, 70, 110, microphoneCoordinates);
-AudioProcessingOptions audioProcessingOptions = AudioProcessingOptions.create(AudioProcessingConstants.AUDIO_INPUT_PROCESSING_ENABLE_DEFAULT, microphoneArrayGeometry, SpeakerReferenceChannel.LastChannel);
-PushAudioInputStream pushStream = AudioInputStream.createPushStream();
-AudioConfig audioInput = AudioConfig.fromStreamInput(pushStream, audioProcessingOptions);
-
-SpeechRecognizer recognizer = new SpeechRecognizer(speechConfig, audioInput);
-```
--
-## Reference channel for echo cancellation
-
-Microsoft Audio Stack requires the reference channel (also known as loopback channel) to perform echo cancellation. The source of the reference channel varies by platform:
-* **Windows** - The reference channel is automatically gathered by the Speech SDK if the `SpeakerReferenceChannel::LastChannel` option is provided when creating `AudioProcessingOptions`.
-* **Linux** - ALSA (Advanced Linux Sound Architecture) must be configured to provide the reference audio stream as the last channel for the audio input device used. This ALSA configuration is in addition to providing the `SpeakerReferenceChannel::LastChannel` option when creating `AudioProcessingOptions`.
-
-## Language and platform support
-
-| Language | Platform | Reference docs |
-||-|-|
-| C++ | Windows, Linux | [C++ docs](/cpp/cognitive-services/speech/) |
-| C# | Windows, Linux | [C# docs](/dotnet/api/microsoft.cognitiveservices.speech) |
-| Java | Windows, Linux | [Java docs](/java/api/com.microsoft.cognitiveservices.speech) |
-
-## Next steps
-[Setup development environment](quickstarts/setup-platform.md)
cognitive-services Batch Synthesis https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/batch-synthesis.md
- Title: Batch synthesis API (Preview) for text to speech - Speech service-
-description: Learn how to use the batch synthesis API for asynchronous synthesis of long-form text to speech.
------ Previously updated : 11/16/2022---
-# Batch synthesis API (Preview) for text to speech
-
-The Batch synthesis API (Preview) can synthesize a large volume of text input (long and short) asynchronously. Publishers and audio content platforms can create long audio content in a batch. For example: audio books, news articles, and documents. The batch synthesis API can create synthesized audio longer than 10 minutes.
-
-> [!IMPORTANT]
-> The Batch synthesis API is currently in public preview. Once it's generally available, the Long Audio API will be deprecated. For more information, see [Migrate to batch synthesis API](migrate-to-batch-synthesis.md).
-
-The batch synthesis API is asynchronous and doesn't return synthesized audio in real-time. You submit text files to be synthesized, poll for the status, and download the audio output when the status indicates success. The text inputs must be plain text or [Speech Synthesis Markup Language (SSML)](speech-synthesis-markup.md) text.
-
-This diagram provides a high-level overview of the workflow.
-
-![Diagram of the Batch Synthesis API workflow.](media/long-audio-api/long-audio-api-workflow.png)
-
-> [!TIP]
-> You can also use the [Speech SDK](speech-sdk.md) to create synthesized audio longer than 10 minutes by iterating over the text and synthesizing it in chunks. For a C# example, see [GitHub](https://github.com/Azure-Samples/cognitive-services-speech-sdk/blob/master/samples/csharp/sharedcontent/console/speech_synthesis_samples.cs).
-
-You can use the following REST API operations for batch synthesis:
-
-| Operation | Method | REST API call |
-| - | -- | |
-| Create batch synthesis | `POST` | texttospeech/3.1-preview1/batchsynthesis |
-| Get batch synthesis | `GET` | texttospeech/3.1-preview1/batchsynthesis/{id} |
-| List batch synthesis | `GET` | texttospeech/3.1-preview1/batchsynthesis |
-| Delete batch synthesis | `DELETE` | texttospeech/3.1-preview1/batchsynthesis/{id} |
-
-For code samples, see [GitHub](https://github.com/Azure-Samples/cognitive-services-speech-sdk/tree/master/samples/batch-synthesis).
-
-## Create batch synthesis
-
-To submit a batch synthesis request, construct the HTTP POST request body according to the following instructions:
-- Set the required `textType` property.
-- If the `textType` property is set to "PlainText", then you must also set the `voice` property in the `synthesisConfig`. In the example below, the `textType` is set to "SSML", so the `synthesisConfig` voice isn't set.
-- Set the required `displayName` property. Choose a name that you can refer to later. The display name doesn't have to be unique.
-- Optionally you can set the `description`, `timeToLive`, and other properties. For more information, see [batch synthesis properties](#batch-synthesis-properties).
-
-> [!NOTE]
-> The maximum JSON payload size that will be accepted is 500 kilobytes. Each Speech resource can have up to 200 batch synthesis jobs that are running concurrently.
-
-Make an HTTP POST request using the URI as shown in the following example. Replace `YourSpeechKey` with your Speech resource key, replace `YourSpeechRegion` with your Speech resource region, and set the request body properties as previously described.
-
-```azurecli-interactive
-curl -v -X POST -H "Ocp-Apim-Subscription-Key: YourSpeechKey" -H "Content-Type: application/json" -d '{
- "displayName": "batch synthesis sample",
- "description": "my ssml test",
- "textType": "SSML",
- "inputs": [
- {
- "text": "<speak version='\''1.0'\'' xml:lang='\''en-US'\''>
- <voice xml:lang='\''en-US'\'' xml:gender='\''Female'\'' name='\''en-US-JennyNeural'\''>
- The rainbow has seven colors.
- </voice>
- </speak>"
- }
- ],
- "properties": {
- "outputFormat": "riff-24khz-16bit-mono-pcm",
- "wordBoundaryEnabled": false,
- "sentenceBoundaryEnabled": false,
- "concatenateResult": false,
- "decompressOutputFiles": false
- }
-}' "https://YourSpeechRegion.customvoice.api.speech.microsoft.com/api/texttospeech/3.1-preview1/batchsynthesis"
-```
-
-You should receive a response body in the following format:
-
-```json
-{
- "textType": "SSML",
- "synthesisConfig": {},
- "customVoices": {},
- "properties": {
- "timeToLive": "P31D",
- "outputFormat": "riff-24khz-16bit-mono-pcm",
- "concatenateResult": false,
- "decompressOutputFiles": false,
- "wordBoundaryEnabled": false,
- "sentenceBoundaryEnabled": false
- },
- "lastActionDateTime": "2022-11-16T15:07:04.121Z",
- "status": "NotStarted",
- "id": "1e2e0fe8-e403-417c-a382-b55eb2ea943d",
- "createdDateTime": "2022-11-16T15:07:04.121Z",
- "displayName": "batch synthesis sample",
- "description": "my ssml test"
-}
-```
-
-The `status` property should progress from `NotStarted`, to `Running`, and finally to `Succeeded` or `Failed`. You can call the [GET batch synthesis API](#get-batch-synthesis) periodically until the returned status is `Succeeded` or `Failed`.
-
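-If you prefer code over curl, a minimal C# sketch of this submit-and-poll pattern might look like the following. The key, region, and input text are placeholders, and the request body uses the plain-text variant described in [batch synthesis properties](#batch-synthesis-properties).
-
-```csharp
-using System;
-using System.Net.Http;
-using System.Text;
-using System.Text.Json;
-using System.Threading.Tasks;
-
-class BatchSynthesisClient
-{
-    static async Task Main()
-    {
-        // Placeholders: use your Speech resource key and region.
-        var key = "YourSpeechKey";
-        var baseUrl = "https://YourSpeechRegion.customvoice.api.speech.microsoft.com/api/texttospeech/3.1-preview1/batchsynthesis";
-
-        using var client = new HttpClient();
-        client.DefaultRequestHeaders.Add("Ocp-Apim-Subscription-Key", key);
-
-        // Plain-text input: textType "PlainText" requires synthesisConfig.voice.
-        var body = @"{
-          ""displayName"": ""batch synthesis sample"",
-          ""textType"": ""PlainText"",
-          ""synthesisConfig"": { ""voice"": ""en-US-JennyNeural"" },
-          ""inputs"": [ { ""text"": ""The rainbow has seven colors."" } ]
-        }";
-
-        var createResponse = await client.PostAsync(baseUrl, new StringContent(body, Encoding.UTF8, "application/json"));
-        createResponse.EnsureSuccessStatusCode();
-        using var created = JsonDocument.Parse(await createResponse.Content.ReadAsStringAsync());
-        var id = created.RootElement.GetProperty("id").GetString();
-
-        // Poll the job until it reaches a terminal status.
-        string status;
-        do
-        {
-            await Task.Delay(TimeSpan.FromSeconds(10));
-            using var job = JsonDocument.Parse(await client.GetStringAsync($"{baseUrl}/{id}"));
-            status = job.RootElement.GetProperty("status").GetString();
-            Console.WriteLine($"Status: {status}");
-        } while (status != "Succeeded" && status != "Failed");
-    }
-}
-```
-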
-## Get batch synthesis
-
-To get the status of the batch synthesis job, make an HTTP GET request using the URI as shown in the following example. Replace `YourSynthesisId` with your batch synthesis ID, replace `YourSpeechKey` with your Speech resource key, and replace `YourSpeechRegion` with your Speech resource region.
-
-```azurecli-interactive
-curl -v -X GET "https://YourSpeechRegion.customvoice.api.speech.microsoft.com/api/texttospeech/3.1-preview1/batchsynthesis/YourSynthesisId" -H "Ocp-Apim-Subscription-Key: YourSpeechKey"
-```
-
-You should receive a response body in the following format:
-
-```json
-{
- "textType": "SSML",
- "synthesisConfig": {},
- "customVoices": {},
- "properties": {
- "audioSize": 100000,
- "durationInTicks": 31250000,
- "succeededAudioCount": 1,
- "failedAudioCount": 0,
- "duration": "PT3.125S",
- "billingDetails": {
- "customNeural": 0,
- "neural": 33
- },
- "timeToLive": "P31D",
- "outputFormat": "riff-24khz-16bit-mono-pcm",
- "concatenateResult": false,
- "decompressOutputFiles": false,
- "wordBoundaryEnabled": false,
- "sentenceBoundaryEnabled": false
- },
- "outputs": {
- "result": "https://cvoiceprodeus.blob.core.windows.net/batch-synthesis-output/41b83de2-380d-45dc-91af-722b68cfdc8e/results.zip?SAS_Token"
- },
- "lastActionDateTime": "2022-11-05T14:00:32.523Z",
- "status": "Succeeded",
- "id": "41b83de2-380d-45dc-91af-722b68cfdc8e",
- "createdDateTime": "2022-11-05T14:00:31.523Z",
- "displayName": "batch synthesis sample",
- "description": "my test"
- }
-```
-
-From `outputs.result`, you can download a ZIP file that contains the audio (such as `0001.wav`), summary, and debug details. For more information, see [batch synthesis results](#batch-synthesis-results).
-
-## List batch synthesis
-
-To list all batch synthesis jobs for the Speech resource, make an HTTP GET request using the URI as shown in the following example. Replace `YourSpeechKey` with your Speech resource key and replace `YourSpeechRegion` with your Speech resource region. Optionally, you can set the `skip` and `top` (page size) query parameters in the URL. The default value for `skip` is 0 and the default value for `top` is 100.
-
-```azurecli-interactive
-curl -v -X GET "https://YourSpeechRegion.customvoice.api.speech.microsoft.com/api/texttospeech/3.1-preview1/batchsynthesis?skip=0&top=2" -H "Ocp-Apim-Subscription-Key: YourSpeechKey"
-```
-
-You should receive a response body in the following format:
-
-```json
-{
- "values": [
- {
- "textType": "SSML",
- "synthesisConfig": {},
- "customVoices": {},
- "properties": {
- "audioSize": 100000,
- "durationInTicks": 31250000,
- "succeededAudioCount": 1,
- "failedAudioCount": 0,
- "duration": "PT3.125S",
- "billingDetails": {
- "customNeural": 0,
- "neural": 33
- },
- "timeToLive": "P31D",
- "outputFormat": "riff-24khz-16bit-mono-pcm",
- "concatenateResult": false,
- "decompressOutputFiles": false,
- "wordBoundaryEnabled": false,
- "sentenceBoundaryEnabled": false
- },
- "outputs": {
- "result": "https://cvoiceprodeus.blob.core.windows.net/batch-synthesis-output/41b83de2-380d-45dc-91af-722b68cfdc8e/results.zip?SAS_Token"
- },
- "lastActionDateTime": "2022-11-05T14:00:32.523Z",
- "status": "Succeeded",
- "id": "41b83de2-380d-45dc-91af-722b68cfdc8e",
- "createdDateTime": "2022-11-05T14:00:31.523Z",
- "displayName": "batch synthesis sample",
- "description": "my test"
- },
- {
- "textType": "PlainText",
- "synthesisConfig": {
- "voice": "en-US-JennyNeural",
- "style": "chat",
- "rate": "+30.00%",
- "pitch": "x-high",
- "volume": "80"
- },
- "customVoices": {},
- "properties": {
- "audioSize": 79384,
- "durationInTicks": 24800000,
- "succeededAudioCount": 1,
- "failedAudioCount": 0,
- "duration": "PT2.48S",
- "billingDetails": {
- "customNeural": 0,
- "neural": 33
- },
- "timeToLive": "P31D",
- "outputFormat": "riff-24khz-16bit-mono-pcm",
- "concatenateResult": false,
- "decompressOutputFiles": false,
- "wordBoundaryEnabled": false,
- "sentenceBoundaryEnabled": false
- },
- "outputs": {
- "result": "https://cvoiceprodeus.blob.core.windows.net/batch-synthesis-output/38e249bf-2607-4236-930b-82f6724048d8/results.zip?SAS_Token"
- },
- "lastActionDateTime": "2022-11-05T18:52:23.210Z",
- "status": "Succeeded",
- "id": "38e249bf-2607-4236-930b-82f6724048d8",
- "createdDateTime": "2022-11-05T18:52:22.807Z",
- "displayName": "batch synthesis sample",
- "description": "my test"
- }
- ],
- // The next page link of the list of batch synthesis.
- "@nextLink": "https://{region}.customvoice.api.speech.microsoft.com/api/texttospeech/3.1-preview1/batchsynthesis?skip=0&top=2"
-}
-```
-
-From `outputs.result`, you can download a ZIP file that contains the audio (such as `0001.wav`), summary, and debug details. For more information, see [batch synthesis results](#batch-synthesis-results).
-
-The `values` property in the JSON response lists your synthesis requests. The list is paginated, with a maximum page size of 100. The `"@nextLink"` property is provided as needed to get the next page of the paginated list.
-
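-A small C# sketch of walking the paginated list by following `@nextLink` could look like this; the key and region are placeholders.
-
-```csharp
-using System;
-using System.Net.Http;
-using System.Text.Json;
-using System.Threading.Tasks;
-
-class ListAllBatchSynthesisJobs
-{
-    static async Task Main()
-    {
-        using var client = new HttpClient();
-        client.DefaultRequestHeaders.Add("Ocp-Apim-Subscription-Key", "YourSpeechKey");
-
-        // Start with the first page and keep following @nextLink until it's absent.
-        var url = "https://YourSpeechRegion.customvoice.api.speech.microsoft.com/api/texttospeech/3.1-preview1/batchsynthesis?skip=0&top=100";
-        while (!string.IsNullOrEmpty(url))
-        {
-            using var page = JsonDocument.Parse(await client.GetStringAsync(url));
-            foreach (var job in page.RootElement.GetProperty("values").EnumerateArray())
-            {
-                Console.WriteLine($"{job.GetProperty("id").GetString()}: {job.GetProperty("status").GetString()}");
-            }
-            url = page.RootElement.TryGetProperty("@nextLink", out var next) ? next.GetString() : null;
-        }
-    }
-}
-```
-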
-## Delete batch synthesis
-
-Delete the batch synthesis job history after you retrieve the audio output results. The Speech service will keep each synthesis history for up to 31 days, or the duration of the request `timeToLive` property, whichever comes sooner. The date and time of automatic deletion (for synthesis jobs with a status of "Succeeded" or "Failed") is equal to the `lastActionDateTime` + `timeToLive` properties.
-
-To delete a batch synthesis job, make an HTTP DELETE request using the URI as shown in the following example. Replace `YourSynthesisId` with your batch synthesis ID, replace `YourSpeechKey` with your Speech resource key, and replace `YourSpeechRegion` with your Speech resource region.
-
-```azurecli-interactive
-curl -v -X DELETE "https://YourSpeechRegion.customvoice.api.speech.microsoft.com/api/texttospeech/3.1-preview1/batchsynthesis/YourSynthesisId" -H "Ocp-Apim-Subscription-Key: YourSpeechKey"
-```
-
-The response headers will include `HTTP/1.1 204 No Content` if the delete request was successful.
-
-## Batch synthesis results
-
-After you [get a batch synthesis job](#get-batch-synthesis) with `status` of "Succeeded", you can download the audio output results. Use the URL from the `outputs.result` property of the [get batch synthesis](#get-batch-synthesis) response.
-
-To get the batch synthesis results file, make an HTTP GET request using the URI as shown in the following example. Replace `YourOutputsResultUrl` with the URL from the `outputs.result` property of the [get batch synthesis](#get-batch-synthesis) response. Replace `YourSpeechKey` with your Speech resource key.
-
-```azurecli-interactive
-curl -v -X GET "YourOutputsResultUrl" -H "Ocp-Apim-Subscription-Key: YourSpeechKey" > results.zip
-```
-
-The results are in a ZIP file that contains the audio (such as `0001.wav`), summary, and debug details. The numbered prefix of each filename (shown below as `[nnnn]`) is in the same order as the text inputs used when you created the batch synthesis.
-
-> [!NOTE]
-> The `[nnnn].debug.json` file contains the synthesis result ID and other information that might help with troubleshooting. The properties that it contains might change, so you shouldn't take any dependencies on the JSON format.
-
-The summary file contains the synthesis results for each text input. Here's an example `summary.json` file:
-
-```json
-{
- "jobID": "41b83de2-380d-45dc-91af-722b68cfdc8e",
- "status": "Succeeded",
- "results": [
- {
- "texts": [
- "<speak version='1.0' xml:lang='en-US'>\n\t\t\t\t<voice xml:lang='en-US' xml:gender='Female' name='en-US-JennyNeural'>\n\t\t\t\t\tThe rainbow has seven colors.\n\t\t\t\t</voice>\n\t\t\t</speak>"
- ],
- "status": "Succeeded",
- "billingDetails": {
- "CustomNeural": "0",
- "Neural": "33"
- },
- "audioFileName": "0001.wav",
- "properties": {
- "audioSize": "100000",
- "duration": "PT3.1S",
- "durationInTicks": "31250000"
- }
- }
- ]
-}
-```
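-
-If you process the downloaded ZIP programmatically, a short sketch along these lines (assuming the `results.zip` and `summary.json` names shown in this article) reads the summary and lists each audio result:
-
-```csharp
-using System;
-using System.IO.Compression;
-using System.Text.Json;
-
-// Open the downloaded results ZIP and read the summary file.
-using var archive = ZipFile.OpenRead("results.zip");
-using var summary = JsonDocument.Parse(archive.GetEntry("summary.json").Open());
-
-foreach (var result in summary.RootElement.GetProperty("results").EnumerateArray())
-{
-    // Each entry names the audio file (such as 0001.wav) and its synthesis status.
-    Console.WriteLine($"{result.GetProperty("audioFileName").GetString()}: {result.GetProperty("status").GetString()}");
-}
-```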
-
-If sentence boundary data was requested (`"sentenceBoundaryEnabled": true`), then a corresponding `[nnnn].sentence.json` file will be included in the results. Likewise, if word boundary data was requested (`"wordBoundaryEnabled": true`), then a corresponding `[nnnn].word.json` file will be included in the results.
-
-Here's an example word data file with both audio offset and duration in milliseconds:
-
-```json
-[
- {
- "Text": "the",
- "AudioOffset": 38,
- "Duration": 153
- },
- {
- "Text": "rainbow",
- "AudioOffset": 201,
- "Duration": 326
- },
- {
- "Text": "has",
- "AudioOffset": 567,
- "Duration": 96
- },
- {
- "Text": "seven",
- "AudioOffset": 673,
- "Duration": 96
- },
- {
- "Text": "colors",
- "AudioOffset": 778,
- "Duration": 451
- }
-]
-```
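-
-A brief C# sketch (assuming a word boundary file named `0001.word.json` extracted from the results ZIP) computes when each word ends by adding the duration to the audio offset:
-
-```csharp
-using System;
-using System.IO;
-using System.Text.Json;
-
-// Assumed file name: a word boundary file extracted from the results ZIP.
-var words = JsonSerializer.Deserialize<WordBoundary[]>(File.ReadAllText("0001.word.json"));
-
-foreach (var word in words)
-{
-    // The end time is the audio offset plus the duration, both in milliseconds.
-    Console.WriteLine($"{word.Text}: {word.AudioOffset} ms - {word.AudioOffset + word.Duration} ms");
-}
-
-record WordBoundary(string Text, int AudioOffset, int Duration);
-```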
-
-## Batch synthesis properties
-
-Batch synthesis properties are described in the following table.
-
-| Property | Description |
-|-|-|
-|`createdDateTime`|The date and time when the batch synthesis job was created.<br/><br/>This property is read-only.|
-|`customProperties`|A custom set of optional batch synthesis configuration settings.<br/><br/>This property is stored for your convenience to associate the synthesis jobs that you created with the synthesis jobs that you get or list. This property is stored, but isn't used by the Speech service.<br/><br/>You can specify up to 10 custom properties as key and value pairs. The maximum allowed key length is 64 characters, and the maximum allowed value length is 256 characters.|
-|`customVoices`|The map of a custom voice name and its deployment ID.<br/><br/>For example: `"customVoices": {"your-custom-voice-name": "502ac834-6537-4bc3-9fd6-140114daa66d"}`<br/><br/>You can use the voice name in your `synthesisConfig.voice` (when the `textType` is set to `"PlainText"`) or within the SSML text of `inputs` (when the `textType` is set to `"SSML"`).<br/><br/>This property is required to use a custom voice. If you try to use a custom voice that isn't defined here, the service returns an error.|
-|`description`|The description of the batch synthesis.<br/><br/>This property is optional.|
-|`displayName`|The name of the batch synthesis. Choose a name that you can refer to later. The display name doesn't have to be unique.<br/><br/>This property is required.|
-|`id`|The batch synthesis job ID.<br/><br/>This property is read-only.|
-|`inputs`|The plain text or SSML to be synthesized.<br/><br/>When the `textType` is set to `"PlainText"`, provide plain text as shown here: `"inputs": [{"text": "The rainbow has seven colors."}]`. When the `textType` is set to `"SSML"`, provide text in the [Speech Synthesis Markup Language (SSML)](speech-synthesis-markup.md) as shown here: `"inputs": [{"text": "<speak version='\''1.0'\'' xml:lang='\''en-US'\''><voice xml:lang='\''en-US'\'' xml:gender='\''Female'\'' name='\''en-US-JennyNeural'\''>The rainbow has seven colors.</voice></speak>"}]`.<br/><br/>Include up to 1,000 text objects if you want multiple audio output files. Here's example input text that should be synthesized to two audio output files: `"inputs": [{"text": "synthesize this to a file"},{"text": "synthesize this to another file"}]`. However, if the `properties.concatenateResult` property is set to `true`, then each synthesized result will be written to the same audio output file.<br/><br/>You don't need separate text inputs for new paragraphs. Within any of the (up to 1,000) text inputs, you can specify new paragraphs using the "\r\n" (newline) string. Here's example input text with two paragraphs that should be synthesized to the same audio output file: `"inputs": [{"text": "synthesize this to a file\r\nsynthesize this to another paragraph in the same file"}]`<br/><br/>There are no paragraph limits, but keep in mind that the maximum JSON payload size (including all text inputs and other properties) that will be accepted is 500 kilobytes.<br/><br/>This property is required when you create a new batch synthesis job. This property isn't included in the response when you get the synthesis job.|
-|`lastActionDateTime`|The most recent date and time when the `status` property value changed.<br/><br/>This property is read-only.|
-|`outputs.result`|The location of the batch synthesis result files with audio output and logs.<br/><br/>This property is read-only.|
-|`properties`|A defined set of optional batch synthesis configuration settings.|
-|`properties.audioSize`|The audio output size in bytes.<br/><br/>This property is read-only.|
-|`properties.billingDetails`|The number of words that were processed and billed by `customNeural` versus `neural` (prebuilt) voices.<br/><br/>This property is read-only.|
-|`properties.concatenateResult`|Determines whether to concatenate the result. This optional `bool` value ("true" or "false") is "false" by default.|
-|`properties.decompressOutputFiles`|Determines whether to unzip the synthesis result files in the destination container. This property can only be set when the `destinationContainerUrl` property is set or BYOS (Bring Your Own Storage) is configured for the Speech resource. This optional `bool` value ("true" or "false") is "false" by default.|
-|`properties.destinationContainerUrl`|The batch synthesis results can be stored in a writable Azure container. If you don't specify a container URI with [shared access signatures (SAS)](../../storage/common/storage-sas-overview.md) token, the Speech service stores the results in a container managed by Microsoft. SAS with stored access policies isn't supported. When the synthesis job is deleted, the result data is also deleted.<br/><br/>This optional property isn't included in the response when you get the synthesis job.|
-|`properties.duration`|The audio output duration. The value is an ISO 8601 encoded duration.<br/><br/>This property is read-only.|
-|`properties.durationInTicks`|The audio output duration in ticks.<br/><br/>This property is read-only.|
-|`properties.failedAudioCount`|The count of batch synthesis inputs for which audio output failed.<br/><br/>This property is read-only.|
-|`properties.outputFormat`|The audio output format.<br/><br/>For information about the accepted values, see [audio output formats](rest-text-to-speech.md#audio-outputs). The default output format is `riff-24khz-16bit-mono-pcm`.|
-|`properties.sentenceBoundaryEnabled`|Determines whether to generate sentence boundary data. This optional `bool` value ("true" or "false") is "false" by default.<br/><br/>If sentence boundary data is requested, then a corresponding `[nnnn].sentence.json` file will be included in the results data ZIP file.|
-|`properties.succeededAudioCount`|The count of batch synthesis inputs for which audio output succeeded.<br/><br/>This property is read-only.|
-|`properties.timeToLive`|A duration after the synthesis job is created, when the synthesis results will be automatically deleted. The value is an ISO 8601 encoded duration. For example, specify `PT12H` for 12 hours. This optional setting is `P31D` (31 days) by default. The maximum time to live is 31 days. The date and time of automatic deletion (for synthesis jobs with a status of "Succeeded" or "Failed") is equal to the `lastActionDateTime` + `timeToLive` properties.<br/><br/>Otherwise, you can call the [delete](#delete-batch-synthesis) synthesis method to remove the job sooner.|
-|`properties.wordBoundaryEnabled`|Determines whether to generate word boundary data. This optional `bool` value ("true" or "false") is "false" by default.<br/><br/>If word boundary data is requested, then a corresponding `[nnnn].word.json` file will be included in the results data ZIP file.|
-|`status`|The batch synthesis processing status.<br/><br/>The status should progress from "NotStarted" to "Running", and finally to either "Succeeded" or "Failed".<br/><br/>This property is read-only.|
-|`synthesisConfig`|The configuration settings to use for batch synthesis of plain text.<br/><br/>This property is only applicable when `textType` is set to `"PlainText"`.|
-|`synthesisConfig.pitch`|The pitch of the audio output.<br/><br/>For information about the accepted values, see the [adjust prosody](speech-synthesis-markup-voice.md#adjust-prosody) table in the Speech Synthesis Markup Language (SSML) documentation. Invalid values are ignored.<br/><br/>This optional property is only applicable when `textType` is set to `"PlainText"`.|
-|`synthesisConfig.rate`|The rate of the audio output.<br/><br/>For information about the accepted values, see the [adjust prosody](speech-synthesis-markup-voice.md#adjust-prosody) table in the Speech Synthesis Markup Language (SSML) documentation. Invalid values are ignored.<br/><br/>This optional property is only applicable when `textType` is set to `"PlainText"`.|
-|`synthesisConfig.style`|For some voices, you can adjust the speaking style to express different emotions like cheerfulness, empathy, and calm. You can optimize the voice for different scenarios like customer service, newscast, and voice assistant.<br/><br/>For information about the available styles per voice, see [voice styles and roles](language-support.md?tabs=tts#voice-styles-and-roles).<br/><br/>This optional property is only applicable when `textType` is set to `"PlainText"`.|
-|`synthesisConfig.voice`|The voice that speaks the audio output.<br/><br/>For information about the available prebuilt neural voices, see [language and voice support](language-support.md?tabs=tts). To use a custom voice, you must specify a valid custom voice and deployment ID mapping in the `customVoices` property.<br/><br/>This property is required when `textType` is set to `"PlainText"`.|
-|`synthesisConfig.volume`|The volume of the audio output.<br/><br/>For information about the accepted values, see the [adjust prosody](speech-synthesis-markup-voice.md#adjust-prosody) table in the Speech Synthesis Markup Language (SSML) documentation. Invalid values are ignored.<br/><br/>This optional property is only applicable when `textType` is set to `"PlainText"`.|
-|`textType`|Indicates whether the `inputs` text property should be plain text or SSML. The possible case-insensitive values are "PlainText" and "SSML". When the `textType` is set to `"PlainText"`, you must also set the `synthesisConfig` voice property.<br/><br/>This property is required.|
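-
-As an illustration only, a create-synthesis request body that combines several of the properties from the preceding table might look like the following sketch. The voice name and input text are placeholders, and the exact shape of the `inputs` array is an assumption to confirm against the batch synthesis API reference.
-
-```json
-{
-  "textType": "PlainText",
-  "synthesisConfig": {
-    "voice": "en-US-JennyNeural"
-  },
-  "inputs": [
-    {
-      "text": "Hello world."
-    }
-  ],
-  "properties": {
-    "outputFormat": "riff-24khz-16bit-mono-pcm",
-    "wordBoundaryEnabled": false,
-    "timeToLive": "PT12H"
-  }
-}
-```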
-
-## HTTP status codes
-
-This section details the HTTP response codes and messages from the batch synthesis API.
-
-### HTTP 200 OK
-
-HTTP 200 OK indicates that the request was successful.
-
-### HTTP 201 Created
-
-HTTP 201 Created indicates that the create batch synthesis request (via HTTP POST) was successful.
-
-### HTTP 204 error
-
-An HTTP 204 error indicates that the request was successful, but the resource doesn't exist. For example:
-- You tried to get or delete a synthesis job that doesn't exist.
-- You successfully deleted a synthesis job.
-
-### HTTP 400 error
-
-Here are examples that can result in the 400 error:
-- The `outputFormat` is unsupported or invalid. Provide a valid format value, or leave `outputFormat` empty to use the default setting.
-- The number of requested text inputs exceeded the limit of 1,000.
-- The `top` query parameter exceeded the limit of 100.
-- You tried to use an invalid deployment ID or a custom voice that isn't successfully deployed. Make sure the Speech resource has access to the custom voice, and the custom voice is successfully deployed. You must also ensure that the mapping of `{"your-custom-voice-name": "your-deployment-ID"}` is correct in your batch synthesis request.
-- You tried to delete a batch synthesis job that hasn't started or hasn't completed running. You can only delete batch synthesis jobs that have a status of "Succeeded" or "Failed".
-- You tried to use an *F0* Speech resource, but the region only supports the *Standard* Speech resource pricing tier.
-- You tried to create a new batch synthesis job that would exceed the limit of 200 active jobs. Each Speech resource can have up to 200 batch synthesis jobs that don't have a status of "Succeeded" or "Failed".
-
-### HTTP 404 error
-
-The specified entity can't be found. Make sure the synthesis ID is correct.
-
-### HTTP 429 error
-
-There are too many recent requests. Each client application can submit up to 50 requests per 5 seconds for each Speech resource. Reduce the number of requests per second.
-
-You can check the rate limit and quota remaining via the HTTP headers as shown in the following example:
-
-```http
-X-RateLimit-Limit: 50
-X-RateLimit-Remaining: 49
-X-RateLimit-Reset: 2022-11-11T01:49:43Z
-```
-
-### HTTP 500 error
-
-HTTP 500 Internal Server Error indicates that the request failed. The response body contains the error message.
-
-### HTTP error example
-
-Here's an example request that results in an HTTP 400 error, because the `top` query parameter is set to a value greater than 100.
-
-```console
-curl -v -X GET "https://YourSpeechRegion.customvoice.api.speech.microsoft.com/api/texttospeech/3.1-preview1/batchsynthesis?skip=0&top=200" -H "Ocp-Apim-Subscription-Key: YourSpeechKey"
-```
-
-In this case, the response headers will include `HTTP/1.1 400 Bad Request`.
-
-The response body will resemble the following JSON example:
-
-```json
-{
- "code": "InvalidRequest",
- "message": "The top parameter should not be greater than 100.",
- "innerError": {
- "code": "InvalidParameter",
- "message": "The top parameter should not be greater than 100."
- }
-}
-```
-
-## Next steps
-
-- [Speech Synthesis Markup Language (SSML)](speech-synthesis-markup.md)
-- [Text to speech quickstart](get-started-text-to-speech.md)
-- [Migrate to batch synthesis](migrate-to-batch-synthesis.md)
cognitive-services Batch Transcription Audio Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/batch-transcription-audio-data.md
- Title: Locate audio files for batch transcription - Speech service-
-description: Batch transcription is used to transcribe a large amount of audio in storage. You should provide multiple files per request or point to an Azure Blob Storage container with the audio files to transcribe.
------- Previously updated : 10/21/2022---
-# Locate audio files for batch transcription
-
-Batch transcription is used to transcribe a large amount of audio in storage. Batch transcription can access audio files from inside or outside of Azure.
-
-When source audio files are stored outside of Azure, they can be accessed via a public URI (such as "https://crbn.us/hello.wav"). Files should be directly accessible; URIs that require authentication or that invoke interactive scripts before the file can be accessed aren't supported.
-
-Audio files that are stored in Azure Blob storage can be accessed via one of two methods:
-- [Trusted Azure services security mechanism](#trusted-azure-services-security-mechanism)
-- [Shared access signature (SAS)](#sas-url-for-batch-transcription) URI.
-
-You can specify one or multiple audio files when creating a transcription. We recommend that you provide multiple files per request or point to an Azure Blob storage container with the audio files to transcribe. The batch transcription service can handle a large number of submitted transcriptions. The service transcribes the files concurrently, which reduces the turnaround time.
-
-## Supported audio formats
-
-The batch transcription API supports the following formats:
-
-| Format | Codec | Bits per sample | Sample rate |
-|--|--|--|--|
-| WAV | PCM | 16-bit | 8 kHz or 16 kHz, mono or stereo |
-| MP3 | PCM | 16-bit | 8 kHz or 16 kHz, mono or stereo |
-| OGG | OPUS | 16-bit | 8 kHz or 16 kHz, mono or stereo |
-
-For stereo audio streams, the left and right channels are split during the transcription. A JSON result file is created for each input audio file. To create an ordered final transcript, use the timestamps that are generated per utterance.
-
-## Azure Blob Storage upload
-
-When audio files are located in an [Azure Blob Storage](../../storage/blobs/storage-blobs-overview.md) account, you can request transcription of individual audio files or an entire Azure Blob Storage container. You can also [write transcription results](batch-transcription-create.md#destination-container-url) to a Blob container.
-
-> [!NOTE]
-> For blob and container limits, see [batch transcription quotas and limits](speech-services-quotas-and-limits.md#batch-transcription).
-
-# [Azure portal](#tab/portal)
-
-Follow these steps to create a storage account and upload wav files from your local directory to a new container.
-
-1. Go to the [Azure portal](https://portal.azure.com/) and sign in to your Azure account.
-1. <a href="https://portal.azure.com/#create/Microsoft.StorageAccount-ARM" title="Create a Storage account resource" target="_blank">Create a Storage account resource</a> in the Azure portal. Use the same subscription and resource group as your Speech resource.
-1. Select the Storage account.
-1. In the **Data storage** group in the left pane, select **Containers**.
-1. Select **+ Container**.
-1. Enter a name for the new container and select **Create**.
-1. Select the new container.
-1. Select **Upload**.
-1. Choose the files to upload and select **Upload**.
-
-# [Azure CLI](#tab/azure-cli)
-
-Follow these steps to create a storage account and upload wav files from your local directory to a new container.
-
-1. Set the `RESOURCE_GROUP` environment variable to the name of an existing resource group where the new storage account will be created. Use the same subscription and resource group as your Speech resource.
-
- ```azurecli-interactive
- set RESOURCE_GROUP=<your existing resource group name>
- ```
-
-1. Set the `AZURE_STORAGE_ACCOUNT` environment variable to the name of a storage account that you want to create.
-
- ```azurecli-interactive
- set AZURE_STORAGE_ACCOUNT=<choose new storage account name>
- ```
-
-1. Create a new storage account with the [`az storage account create`](/cli/azure/storage/account#az-storage-account-create) command. Replace `eastus` with the region of your resource group.
-
- ```azurecli-interactive
- az storage account create -n %AZURE_STORAGE_ACCOUNT% -g %RESOURCE_GROUP% -l eastus
- ```
-
- > [!TIP]
- > When you are finished with batch transcriptions and want to delete your storage account, use the [`az storage account delete`](/cli/azure/storage/account#az-storage-account-delete) command.
-
-1. Get your new storage account keys with the [`az storage account keys list`](/cli/azure/storage/account#az-storage-account-keys-list) command.
-
- ```azurecli-interactive
- az storage account keys list -g %RESOURCE_GROUP% -n %AZURE_STORAGE_ACCOUNT%
- ```
-
-1. Set the `AZURE_STORAGE_KEY` environment variable to one of the key values retrieved in the previous step.
-
- ```azurecli-interactive
- set AZURE_STORAGE_KEY=<your storage account key>
- ```
-
- > [!IMPORTANT]
- > The remaining steps use the `AZURE_STORAGE_ACCOUNT` and `AZURE_STORAGE_KEY` environment variables. If you didn't set the environment variables, you can pass the values as parameters to the commands. See the [az storage container create](/cli/azure/storage/) documentation for more information.
-
-1. Create a container with the [`az storage container create`](/cli/azure/storage/container#az-storage-container-create) command. Replace `<mycontainer>` with a name for your container.
-
- ```azurecli-interactive
- az storage container create -n <mycontainer>
- ```
-
-1. The following [`az storage blob upload-batch`](/cli/azure/storage/blob#az-storage-blob-upload-batch) command uploads all .wav files from the current local directory. Replace `<mycontainer>` with a name for your container. Optionally you can modify the command to upload files from a different directory.
-
- ```azurecli-interactive
- az storage blob upload-batch -d <mycontainer> -s . --pattern *.wav
- ```
---
-## Trusted Azure services security mechanism
-
-This section explains how to set up and limit access to your batch transcription source audio files in an Azure Storage account using the [trusted Azure services security mechanism](../../storage/common/storage-network-security.md#trusted-access-based-on-a-managed-identity).
-
-> [!NOTE]
-> With the trusted Azure services security mechanism, you need to use [Azure Blob storage](../../storage/blobs/storage-blobs-overview.md) to store audio files. Usage of [Azure Files](../../storage/files/storage-files-introduction.md) is not supported.
-
-If you perform all actions in this section, your Storage account will be in the following configuration:
-- Access to all external network traffic is prohibited.
-- Access to the Storage account using the Storage account key is prohibited.
-- Access to the Storage account blob storage using [shared access signatures (SAS)](../../storage/common/storage-sas-overview.md) is prohibited.
-- Access to the selected Speech resource is allowed using the resource [system assigned managed identity](../../active-directory/managed-identities-azure-resources/overview.md).
-
-In effect, your Storage account becomes completely "locked" and can't be used for anything except transcribing audio files that were already present when the new configuration was applied. Treat this configuration as a security baseline for your audio data and customize it according to your needs.
-
-For example, you may allow traffic from selected public IP addresses and Azure Virtual networks. You may also set up access to your Storage account using [private endpoints](../../storage/common/storage-private-endpoints.md) (see also [this tutorial](../../private-link/tutorial-private-endpoint-storage-portal.md)), re-enable access using the Storage account key, allow access to other Azure trusted services, and so on.
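-
-For instance, a rule that allows traffic from a single public IP address could be added with a command along these lines (a sketch only; the resource group, account name, and IP address are placeholders):
-
-```azurecli-interactive
-az storage account network-rule add --resource-group <your-resource-group> --account-name <your-storage-account> --ip-address <your-public-ip-address>
-```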
-
-> [!NOTE]
-> Using [private endpoints for Speech](speech-services-private-link.md) isn't required to secure the storage account. You can use a private endpoint for batch transcription API requests, while separately accessing the source audio files from a secure storage account, or the other way around.
-
-By following the steps below, you'll severely restrict access to the storage account. Then you'll assign the minimum permissions required for the Speech resource's managed identity to access the Storage account.
-
-### Enable system assigned managed identity for the Speech resource
-
-Follow these steps to enable system assigned managed identity for the Speech resource that you will use for batch transcription.
-
-1. Go to the [Azure portal](https://portal.azure.com/) and sign in to your Azure account.
-1. Select the Speech resource.
-1. In the **Resource Management** group in the left pane, select **Identity**.
-1. On the **System assigned** tab, select **On** for the status.
-
- > [!IMPORTANT]
- > User assigned managed identity won't meet requirements for the batch transcription storage account scenario. Be sure to enable system assigned managed identity.
-
-1. Select **Save**.
-
-Now the managed identity for your Speech resource can be granted access to your storage account.
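-
-If you prefer the Azure CLI over the portal, the system assigned managed identity can typically be enabled with a command like the following sketch (the resource name and resource group are placeholders; confirm the command against the `az cognitiveservices` reference):
-
-```azurecli-interactive
-az cognitiveservices account identity assign --name <your-speech-resource-name> --resource-group <your-resource-group>
-```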
-
-### Restrict access to the storage account
-
-Follow these steps to restrict access to the storage account.
-
-> [!IMPORTANT]
-> Upload audio files in a Blob container before locking down the storage account access.
-
-1. Go to the [Azure portal](https://portal.azure.com/) and sign in to your Azure account.
-1. Select the Storage account.
-1. In the **Settings** group in the left pane, select **Configuration**.
-1. Select **Disabled** for **Allow Blob public access**.
-1. Select **Disabled** for **Allow storage account key access**.
-1. Select **Save**.
-
-For more information, see [Prevent anonymous public read access to containers and blobs](../../storage/blobs/anonymous-read-access-prevent.md) and [Prevent Shared Key authorization for an Azure Storage account](../../storage/common/shared-key-authorization-prevent.md).
-
-### Configure Azure Storage firewall
-
-Having restricted access to the Storage account, you need to grant access to specific managed identities. Follow these steps to add access for the Speech resource.
-
-1. Go to the [Azure portal](https://portal.azure.com/) and sign in to your Azure account.
-1. Select the Storage account.
-1. In the **Security + networking** group in the left pane, select **Networking**.
-1. In the **Firewalls and virtual networks** tab, select **Enabled from selected virtual networks and IP addresses**.
-1. Deselect all check boxes.
-1. Make sure **Microsoft network routing** is selected.
-1. Under the **Resource instances** section, select **Microsoft.CognitiveServices/accounts** as the resource type and select your Speech resource as the instance name.
-1. Select **Save**.
-
- > [!NOTE]
- > It may take up to 5 minutes for the network changes to propagate.
-
-Although network access is now permitted, the Speech resource can't yet access the data in the Storage account. You need to assign a specific access role to the Speech resource managed identity.
-
-### Assign resource access role
-
-Follow these steps to assign the **Storage Blob Data Reader** role to the managed identity of your Speech resource.
-
-> [!IMPORTANT]
-> You need to be assigned the *Owner* role on the Storage account or a higher scope (such as the subscription) to perform the operations in the next steps, because only the *Owner* role can assign roles to others. See details [here](../../role-based-access-control/built-in-roles.md).
-
-1. Go to the [Azure portal](https://portal.azure.com/) and sign in to your Azure account.
-1. Select the Storage account.
-1. Select **Access Control (IAM)** menu in the left pane.
-1. Select **Add role assignment** in the **Grant access to this resource** tile.
-1. Select **Storage Blob Data Reader** under **Role** and then select **Next**.
-1. Select **Managed identity** under **Members** > **Assign access to**.
-1. Assign the managed identity of your Speech resource and then select **Review + assign**.
-
- :::image type="content" source="media/storage/storage-identity-access-management-role.png" alt-text="Screenshot of the managed role assignment review.":::
-
-1. After confirming the settings, select **Review + assign**.
-
-Now the Speech resource managed identity has access to the Storage account and can access the audio files for batch transcription.
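-
-As an alternative to the portal, the same role assignment can be sketched with the Azure CLI. The principal ID of the Speech resource's system assigned managed identity is shown on its **Identity** page, and the storage account resource ID shown here is a placeholder:
-
-```azurecli-interactive
-az role assignment create --assignee-object-id <speech-resource-principal-id> --assignee-principal-type ServicePrincipal --role "Storage Blob Data Reader" --scope <storage-account-resource-id>
-```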
-
-With system assigned managed identity, you'll use a plain Storage Account URL (no SAS or other additions) when you [create a batch transcription](batch-transcription-create.md) request. For example:
-
-```json
-{
- "contentContainerUrl": "https://<storage_account_name>.blob.core.windows.net/<container_name>"
-}
-```
-
-Alternatively, you can specify individual files in the container. For example:
-
-```json
-{
- "contentUrls": [
- "https://<storage_account_name>.blob.core.windows.net/<container_name>/<file_name_1>",
- "https://<storage_account_name>.blob.core.windows.net/<container_name>/<file_name_2>"
- ]
-}
-```
-
-## SAS URL for batch transcription
-
-A shared access signature (SAS) is a URI that grants restricted access to an Azure Storage container. Use it when you want to grant access to your batch transcription files for a specific time range without sharing your storage account key.
-
-> [!TIP]
-> If the container with batch transcription source files should only be accessed by your Speech resource, use the [trusted Azure services security mechanism](#trusted-azure-services-security-mechanism) instead.
-
-# [Azure portal](#tab/portal)
-
-Follow these steps to generate a SAS URL that you can use for batch transcriptions.
-
-1. Complete the steps in [Azure Blob Storage upload](#azure-blob-storage-upload) to create a Storage account and upload audio files to a new container.
-1. Select the new container.
-1. In the **Settings** group in the left pane, select **Shared access tokens**.
-1. Select **+ Container**.
-1. Select **Read** and **List** for **Permissions**.
-
- :::image type="content" source="media/storage/storage-container-shared-access-signature.png" alt-text="Screenshot of the container SAS URI permissions.":::
-
-1. Enter the start and expiry times for the SAS URI, or leave the defaults.
-1. Select **Generate SAS token and URL**.
-
-# [Azure CLI](#tab/azure-cli)
-
-Follow these steps to generate a SAS URL that you can use for batch transcriptions.
-
-1. Complete the steps in [Azure Blob Storage upload](#azure-blob-storage-upload) to create a Storage account and upload audio files to a new container.
-1. Generate a SAS URL with read (r) and list (l) permissions for the container with the [`az storage container generate-sas`](/cli/azure/storage/container#az-storage-container-generate-sas) command. Choose a new expiry date and replace `<mycontainer>` with the name of your container.
-
- ```azurecli-interactive
- az storage container generate-sas -n <mycontainer> --expiry 2022-10-10 --permissions rl --https-only
- ```
-
-The previous command returns a SAS token. Append the SAS token to your container blob URL to create a SAS URL. For example: `https://<storage_account_name>.blob.core.windows.net/<container_name>?SAS_TOKEN`.
---
-You will use the SAS URL when you [create a batch transcription](batch-transcription-create.md) request. For example:
-
-```json
-{
- "contentContainerUrl": "https://<storage_account_name>.blob.core.windows.net/<container_name>?SAS_TOKEN"
-}
-```
-
-Alternatively, you can specify individual files in the container. In that case, you must generate and use a different SAS URL with read (r) permissions for each file. For example:
-
-```json
-{
- "contentUrls": [
- "https://<storage_account_name>.blob.core.windows.net/<container_name>/<file_name_1>?SAS_TOKEN_1",
- "https://<storage_account_name>.blob.core.windows.net/<container_name>/<file_name_2>?SAS_TOKEN_2"
- ]
-}
-```
-
-## Next steps
-
-- [Batch transcription overview](batch-transcription.md)
-- [Create a batch transcription](batch-transcription-create.md)
-- [Get batch transcription results](batch-transcription-get.md)
-- [See batch transcription code samples at GitHub](https://github.com/Azure-Samples/cognitive-services-speech-sdk/tree/master/samples/batch/)
cognitive-services Batch Transcription Create https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/batch-transcription-create.md
- Title: Create a batch transcription - Speech service-
-description: With batch transcriptions, you submit the audio, and then retrieve transcription results asynchronously.
------- Previously updated : 11/29/2022
-zone_pivot_groups: speech-cli-rest
---
-# Create a batch transcription
-
-With batch transcriptions, you submit the [audio data](batch-transcription-audio-data.md), and then retrieve transcription results asynchronously. The service transcribes the audio data and stores the results in a storage container. You can then [retrieve the results](batch-transcription-get.md) from the storage container.
-
-> [!NOTE]
-> To use batch transcription, you need to use a standard (S0) Speech resource. Free resources (F0) aren't supported. For more information, see [pricing and limits](https://azure.microsoft.com/pricing/details/cognitive-services/speech-services/).
-
-## Create a transcription job
--
-To create a transcription, use the [Transcriptions_Create](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/operations/Transcriptions_Create) operation of the [Speech to text REST API](rest-speech-to-text.md#transcriptions). Construct the request body according to the following instructions:
-- You must set either the `contentContainerUrl` or `contentUrls` property. For more information about Azure blob storage for batch transcription, see [Locate audio files for batch transcription](batch-transcription-audio-data.md).
-- Set the required `locale` property. This should match the expected locale of the audio data to transcribe. The locale can't be changed later.
-- Set the required `displayName` property. Choose a transcription name that you can refer to later. The transcription name doesn't have to be unique and can be changed later.
-- Optionally you can set the `wordLevelTimestampsEnabled` property to `true` to enable word-level timestamps in the transcription results. The default value is `false`.
-- Optionally you can set the `languageIdentification` property. Language identification is used to identify languages spoken in audio when compared against a list of [supported languages](language-support.md?tabs=language-identification). If you set the `languageIdentification` property, then you must also set `languageIdentification.candidateLocales` with candidate locales.
-
-For more information, see [request configuration options](#request-configuration-options).
-
-Make an HTTP POST request using the URI as shown in the following [Transcriptions_Create](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/operations/Transcriptions_Create) example. Replace `YourSubscriptionKey` with your Speech resource key, replace `YourServiceRegion` with your Speech resource region, and set the request body properties as previously described.
-
-```azurecli-interactive
-curl -v -X POST -H "Ocp-Apim-Subscription-Key: YourSubscriptionKey" -H "Content-Type: application/json" -d '{
- "contentUrls": [
- "https://crbn.us/hello.wav",
- "https://crbn.us/whatstheweatherlike.wav"
- ],
- "locale": "en-US",
- "displayName": "My Transcription",
- "model": null,
- "properties": {
- "wordLevelTimestampsEnabled": true,
- "languageIdentification": {
- "candidateLocales": [
- "en-US", "de-DE", "es-ES"
- ]
- }
- }
-}' "https://YourServiceRegion.api.cognitive.microsoft.com/speechtotext/v3.1/transcriptions"
-```
-
-You should receive a response body in the following format:
-
-```json
-{
- "self": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.1/transcriptions/db474955-ab85-4c6c-ba6e-3bfe63d041ba",
- "model": {
- "self": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.1/models/base/13fb305e-09ad-4bce-b3a1-938c9124dda3"
- },
- "links": {
- "files": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.1/transcriptions/db474955-ab85-4c6c-ba6e-3bfe63d041ba/files"
- },
- "properties": {
- "diarizationEnabled": false,
- "wordLevelTimestampsEnabled": true,
- "channels": [
- 0,
- 1
- ],
- "punctuationMode": "DictatedAndAutomatic",
- "profanityFilterMode": "Masked",
- "languageIdentification": {
- "candidateLocales": [
- "en-US",
- "de-DE",
- "es-ES"
- ]
- }
- },
- "lastActionDateTime": "2022-10-21T14:18:06Z",
- "status": "NotStarted",
- "createdDateTime": "2022-10-21T14:18:06Z",
- "locale": "en-US",
- "displayName": "My Transcription"
-}
-```
-
-The top-level `self` property in the response body is the transcription's URI. Use this URI to [get](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/operations/Transcriptions_Get) details such as the URI of the transcriptions and transcription report files. You also use this URI to [update](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/operations/Transcriptions_Update) or [delete](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/operations/Transcriptions_Delete) a transcription.
-
-You can query the status of your transcriptions with the [Transcriptions_Get](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/operations/Transcriptions_Get) operation.
-
-Call [Transcriptions_Delete](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/operations/Transcriptions_Delete) regularly to delete transcriptions from the service after you retrieve the results. Alternatively, set the `timeToLive` property to ensure the eventual deletion of the results.
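-
-For example, a delete request follows the same pattern as the other REST calls in this article (the transcription ID, key, and region are placeholders):
-
-```azurecli-interactive
-curl -v -X DELETE "https://YourServiceRegion.api.cognitive.microsoft.com/speechtotext/v3.1/transcriptions/YourTranscriptionId" -H "Ocp-Apim-Subscription-Key: YourSubscriptionKey"
-```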
---
-To create a transcription, use the `spx batch transcription create` command. Construct the request parameters according to the following instructions:
-- Set the required `content` parameter. You can specify either a semi-colon delimited list of individual files, or the URL for an entire container. For more information about Azure blob storage for batch transcription, see [Locate audio files for batch transcription](batch-transcription-audio-data.md).
-- Set the required `language` property. This should match the expected locale of the audio data to transcribe. The locale can't be changed later. The Speech CLI `language` parameter corresponds to the `locale` property in the JSON request and response.
-- Set the required `name` property. Choose a transcription name that you can refer to later. The transcription name doesn't have to be unique and can be changed later. The Speech CLI `name` parameter corresponds to the `displayName` property in the JSON request and response.
-
-Here's an example Speech CLI command that creates a transcription job:
-
-```azurecli-interactive
-spx batch transcription create --name "My Transcription" --language "en-US" --content https://crbn.us/hello.wav;https://crbn.us/whatstheweatherlike.wav
-```
-
-You should receive a response body in the following format:
-
-```json
-{
- "self": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.1/transcriptions/7f4232d5-9873-47a7-a6f7-4a3f00d00dc0",
- "model": {
- "self": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.1/models/base/13fb305e-09ad-4bce-b3a1-938c9124dda3"
- },
- "links": {
- "files": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.1/transcriptions/7f4232d5-9873-47a7-a6f7-4a3f00d00dc0/files"
- },
- "properties": {
- "diarizationEnabled": false,
- "wordLevelTimestampsEnabled": false,
- "channels": [
- 0,
- 1
- ],
- "punctuationMode": "DictatedAndAutomatic",
- "profanityFilterMode": "Masked"
- },
- "lastActionDateTime": "2022-10-21T14:21:59Z",
- "status": "NotStarted",
- "createdDateTime": "2022-10-21T14:21:59Z",
- "locale": "en-US",
- "displayName": "My Transcription",
- "description": ""
-}
-```
-
-The top-level `self` property in the response body is the transcription's URI. Use this URI to get details such as the URI of the transcriptions and transcription report files. You also use this URI to update or delete a transcription.
-
-For Speech CLI help with transcriptions, run the following command:
-
-```azurecli-interactive
-spx help batch transcription
-```
--
-## Request configuration options
--
-Here are some property options that you can use to configure a transcription when you call the [Transcriptions_Create](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/operations/Transcriptions_Create) operation.
-
-| Property | Description |
-|-|-|
-|`channels`|An array of channel numbers to process. Channels `0` and `1` are transcribed by default. |
-|`contentContainerUrl`| You can submit individual audio files, or a whole storage container.<br/><br/>You must specify the audio data location via either the `contentContainerUrl` or `contentUrls` property. For more information about Azure blob storage for batch transcription, see [Locate audio files for batch transcription](batch-transcription-audio-data.md).<br/><br/>This property won't be returned in the response.|
-|`contentUrls`| You can submit individual audio files, or a whole storage container.<br/><br/>You must specify the audio data location via either the `contentContainerUrl` or `contentUrls` property. For more information, see [Locate audio files for batch transcription](batch-transcription-audio-data.md).<br/><br/>This property won't be returned in the response.|
-|`destinationContainerUrl`|The result can be stored in an Azure container. If you don't specify a container, the Speech service stores the results in a container managed by Microsoft. When the transcription job is deleted, the transcription result data is also deleted. For more information such as the supported security scenarios, see [Destination container URL](#destination-container-url).|
-|`diarization`|Indicates that diarization analysis should be carried out on the input, which is expected to be a mono channel that contains multiple voices. Specify the minimum and maximum number of people who might be speaking. You must also set the `diarizationEnabled` property to `true`. The [transcription file](batch-transcription-get.md#transcription-result-file) will contain a `speaker` entry for each transcribed phrase.<br/><br/>You need to use this property when you expect three or more speakers. For two speakers, setting the `diarizationEnabled` property to `true` is enough. See an example of the property usage in the [Transcriptions_Create](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/operations/Transcriptions_Create) operation description.<br/><br/>Diarization is the process of separating speakers in audio data. The batch pipeline can recognize and separate multiple speakers on mono channel recordings. The maximum number of speakers for diarization must be less than 36 and greater than or equal to the `minSpeakers` property (see [example](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/operations/Transcriptions_Create)). The feature isn't available with stereo recordings.<br/><br/>When this property is selected, source audio length can't exceed 240 minutes per file.<br/><br/>**Note**: This property is only available with Speech to text REST API version 3.1.|
-|`diarizationEnabled`|Specifies that diarization analysis should be carried out on the input, which is expected to be a mono channel that contains two voices. The default value is `false`.<br/><br/>For three or more voices you also need to use property `diarization` (only with Speech to text REST API version 3.1).<br/><br/>When this property is selected, source audio length can't exceed 240 minutes per file.|
-|`displayName`|The name of the batch transcription. Choose a name that you can refer to later. The display name doesn't have to be unique.<br/><br/>This property is required.|
-|`languageIdentification`|Language identification is used to identify languages spoken in audio when compared against a list of [supported languages](language-support.md?tabs=language-identification).<br/><br/>If you set the `languageIdentification` property, then you must also set its enclosed `candidateLocales` property.|
-|`languageIdentification.candidateLocales`|The candidate locales for language identification such as `"properties": { "languageIdentification": { "candidateLocales": ["en-US", "de-DE", "es-ES"]}}`. You can specify a minimum of 2 and a maximum of 10 candidate locales, including the main locale for the transcription.|
-|`locale`|The locale of the batch transcription. This should match the expected locale of the audio data to transcribe. The locale can't be changed later.<br/><br/>This property is required.|
-|`model`|You can set the `model` property to use a specific base model or [Custom Speech](how-to-custom-speech-train-model.md) model. If you don't specify the `model`, the default base model for the locale is used. For more information, see [Using custom models](#using-custom-models).|
-|`profanityFilterMode`|Specifies how to handle profanity in recognition results. Accepted values are `None` to disable profanity filtering, `Masked` to replace profanity with asterisks, `Removed` to remove all profanity from the result, or `Tags` to add profanity tags. The default value is `Masked`. |
-|`punctuationMode`|Specifies how to handle punctuation in recognition results. Accepted values are `None` to disable punctuation, `Dictated` to imply explicit (spoken) punctuation, `Automatic` to let the decoder deal with punctuation, or `DictatedAndAutomatic` to use dictated and automatic punctuation. The default value is `DictatedAndAutomatic`.|
-|`timeToLive`|The duration after the transcription job is created, after which the transcription results are automatically deleted. The value is an ISO 8601 encoded duration. For example, specify `PT12H` for 12 hours. As an alternative, you can call [DeleteTranscription](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0/operations/DeleteTranscription) regularly after you retrieve the transcription results.|
-|`wordLevelTimestampsEnabled`|Specifies if word level timestamps should be included in the output. The default value is `false`.|
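-
-For reference, a request body that combines several of the options from the preceding table might look like the following sketch (the container URL and values are placeholders; adjust them to your own data):
-
-```json
-{
-  "contentContainerUrl": "https://<storage_account_name>.blob.core.windows.net/<container_name>?SAS_TOKEN",
-  "locale": "en-US",
-  "displayName": "My Transcription",
-  "properties": {
-    "wordLevelTimestampsEnabled": true,
-    "punctuationMode": "DictatedAndAutomatic",
-    "profanityFilterMode": "Masked",
-    "timeToLive": "PT12H"
-  }
-}
-```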
----
-For Speech CLI help with transcription configuration options, run the following command:
-
-```azurecli-interactive
-spx help batch transcription create advanced
-```
--
-## Using custom models
-
-Batch transcription uses the default base model for the locale that you specify. You don't need to set any properties to use the default base model.
-
-Optionally, you can modify the previous [create transcription example](#create-a-batch-transcription) by setting the `model` property to use a specific base model or [Custom Speech](how-to-custom-speech-train-model.md) model.
--
-```azurecli-interactive
-curl -v -X POST -H "Ocp-Apim-Subscription-Key: YourSubscriptionKey" -H "Content-Type: application/json" -d '{
- "contentUrls": [
- "https://crbn.us/hello.wav",
- "https://crbn.us/whatstheweatherlike.wav"
- ],
- "locale": "en-US",
- "displayName": "My Transcription",
- "model": {
- "self": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.1/models/base/1aae1070-7972-47e9-a977-87e3b05c457d"
- },
- "properties": {
- "wordLevelTimestampsEnabled": true,
- },
-}' "https://YourServiceRegion.api.cognitive.microsoft.com/speechtotext/v3.1/transcriptions"
-```
---
-```azurecli-interactive
-spx batch transcription create --name "My Transcription" --language "en-US" --content https://crbn.us/hello.wav;https://crbn.us/whatstheweatherlike.wav --model "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.1/models/base/1aae1070-7972-47e9-a977-87e3b05c457d"
-```
--
-To use a Custom Speech model for batch transcription, you need the model's URI. You can retrieve the model location when you create or get a model. The top-level `self` property in the response body is the model's URI. For an example, see the JSON response example in the [Create a model](how-to-custom-speech-train-model.md?pivots=rest-api#create-a-model) guide.
-
-> [!TIP]
-> A [hosted deployment endpoint](how-to-custom-speech-deploy-model.md) isn't required to use custom speech with the batch transcription service. You can conserve resources if the [custom speech model](how-to-custom-speech-train-model.md) is only used for batch transcription.
-
-Batch transcription requests for expired models will fail with a 4xx error. You'll want to set the `model` property to a base model or custom model that hasn't yet expired. Otherwise don't include the `model` property to always use the latest base model. For more information, see [Choose a model](how-to-custom-speech-create-project.md#choose-your-model) and [Custom Speech model lifecycle](how-to-custom-speech-model-and-endpoint-lifecycle.md).
--
-## Destination container URL
-
-The transcription result can be stored in an Azure container. If you don't specify a container, the Speech service stores the results in a container managed by Microsoft. In that case, when the transcription job is deleted, the transcription result data is also deleted.
-
-You can store the results of a batch transcription to a writable Azure Blob storage container using the `destinationContainerUrl` option in the [batch transcription creation request](#create-a-transcription-job). Note, however, that this option only uses an [ad hoc SAS](batch-transcription-audio-data.md#sas-url-for-batch-transcription) URI and doesn't support the [Trusted Azure services security mechanism](batch-transcription-audio-data.md#trusted-azure-services-security-mechanism). The Storage account resource of the destination container must allow all external traffic.
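-
-A minimal sketch of a request that uses this option follows; the destination SAS URL is a placeholder for a writable container URL that you generate yourself:
-
-```json
-{
-  "contentUrls": [
-    "https://crbn.us/hello.wav"
-  ],
-  "locale": "en-US",
-  "displayName": "My Transcription",
-  "properties": {
-    "destinationContainerUrl": "https://<storage_account_name>.blob.core.windows.net/<container_name>?SAS_TOKEN"
-  }
-}
-```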
-
-The [Trusted Azure services security mechanism](batch-transcription-audio-data.md#trusted-azure-services-security-mechanism) is not supported for storing transcription results from a Speech resource. If you would like to store the transcription results in an Azure Blob storage container via the [Trusted Azure services security mechanism](batch-transcription-audio-data.md#trusted-azure-services-security-mechanism), then you should consider using [Bring-your-own-storage (BYOS)](speech-encryption-of-data-at-rest.md#bring-your-own-storage-byos-for-customization-and-logging). You can secure access to the BYOS-associated Storage account exactly as described in the [Trusted Azure services security mechanism](batch-transcription-audio-data.md#trusted-azure-services-security-mechanism) guide, except that the BYOS Speech resource needs the **Storage Blob Data Contributor** role assignment. The results of batch transcription performed by the BYOS Speech resource are automatically stored in the **TranscriptionData** folder of the **customspeech-artifacts** blob container.
-
-## Next steps
-
-- [Batch transcription overview](batch-transcription.md)
-- [Locate audio files for batch transcription](batch-transcription-audio-data.md)
-- [Get batch transcription results](batch-transcription-get.md)
-- [See batch transcription code samples at GitHub](https://github.com/Azure-Samples/cognitive-services-speech-sdk/tree/master/samples/batch/)
cognitive-services Batch Transcription Get https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/batch-transcription-get.md
- Title: Get batch transcription results - Speech service-
-description: With batch transcription, the Speech service transcribes the audio data and stores the results in a storage container. You can then retrieve the results from the storage container.
------- Previously updated : 11/29/2022
-zone_pivot_groups: speech-cli-rest
---
-# Get batch transcription results
-
-To get transcription results, first check the [status](#get-transcription-status) of the transcription job. If the job is completed, you can [retrieve](#get-batch-transcription-results) the transcriptions and transcription report.
-
-## Get transcription status
--
-To get the status of the transcription job, call the [Transcriptions_Get](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/operations/Transcriptions_Get) operation of the [Speech to text REST API](rest-speech-to-text.md).
-
-Make an HTTP GET request using the URI as shown in the following example. Replace `YourTranscriptionId` with your transcription ID, replace `YourSubscriptionKey` with your Speech resource key, and replace `YourServiceRegion` with your Speech resource region.
-
-```azurecli-interactive
-curl -v -X GET "https://YourServiceRegion.api.cognitive.microsoft.com/speechtotext/v3.1/transcriptions/YourTranscriptionId" -H "Ocp-Apim-Subscription-Key: YourSubscriptionKey"
-```
-
-You should receive a response body in the following format:
-
-```json
-{
- "self": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.1/transcriptions/637d9333-6559-47a6-b8de-c7d732c1ddf3",
- "model": {
- "self": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.1/models/base/aaa321e9-5a4e-4db1-88a2-f251bbe7b555"
- },
- "links": {
- "files": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.1/transcriptions/637d9333-6559-47a6-b8de-c7d732c1ddf3/files"
- },
- "properties": {
- "diarizationEnabled": false,
- "wordLevelTimestampsEnabled": false,
- "displayFormWordLevelTimestampsEnabled": true,
- "channels": [
- 0,
- 1
- ],
- "punctuationMode": "DictatedAndAutomatic",
- "profanityFilterMode": "Masked",
- "duration": "PT3S",
- "languageIdentification": {
- "candidateLocales": [
- "en-US",
- "de-DE",
- "es-ES"
- ]
- }
- },
- "lastActionDateTime": "2022-09-10T18:39:09Z",
- "status": "Succeeded",
- "createdDateTime": "2022-09-10T18:39:07Z",
- "locale": "en-US",
- "displayName": "My Transcription"
-}
-```
-
-The `status` property indicates the current status of the transcriptions. The transcriptions and transcription report will be available when the transcription status is `Succeeded`.
----
-To get the status of the transcription job, use the `spx batch transcription status` command. Construct the request parameters according to the following instructions:
-- Set the `transcription` parameter to the ID of the transcription that you want to get.
-
-Here's an example Speech CLI command to get the transcription status:
-
-```azurecli-interactive
-spx batch transcription status --api-version v3.1 --transcription YourTranscriptionId
-```
-
-You should receive a response body in the following format:
-
-```json
-{
- "self": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.1/transcriptions/637d9333-6559-47a6-b8de-c7d732c1ddf3",
- "model": {
- "self": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.1/models/base/aaa321e9-5a4e-4db1-88a2-f251bbe7b555"
- },
- "links": {
- "files": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.1/transcriptions/637d9333-6559-47a6-b8de-c7d732c1ddf3/files"
- },
- "properties": {
- "diarizationEnabled": false,
- "wordLevelTimestampsEnabled": false,
- "displayFormWordLevelTimestampsEnabled": true,
- "channels": [
- 0,
- 1
- ],
- "punctuationMode": "DictatedAndAutomatic",
- "profanityFilterMode": "Masked",
- "duration": "PT3S"
- },
- "lastActionDateTime": "2022-09-10T18:39:09Z",
- "status": "Succeeded",
- "createdDateTime": "2022-09-10T18:39:07Z",
- "locale": "en-US",
- "displayName": "My Transcription"
-}
-```
-
-The `status` property indicates the current status of the transcriptions. The transcriptions and transcription report will be available when the transcription status is `Succeeded`.
-
-For Speech CLI help with transcriptions, run the following command:
-
-```azurecli-interactive
-spx help batch transcription
-```
--
-## Get transcription results
--
-The [Transcriptions_ListFiles](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/operations/Transcriptions_ListFiles) operation returns a list of result files for a transcription. A [transcription report](#transcription-report-file) file is provided for each submitted batch transcription job. In addition, one [transcription](#transcription-result-file) file (the end result) is provided for each successfully transcribed audio file.
-
-Make an HTTP GET request using the "files" URI from the previous response body. Replace `YourTranscriptionId` with your transcription ID, replace `YourSubscriptionKey` with your Speech resource key, and replace `YourServiceRegion` with your Speech resource region.
-
-```azurecli-interactive
-curl -v -X GET "https://YourServiceRegion.api.cognitive.microsoft.com/speechtotext/v3.1/transcriptions/YourTranscriptionId/files" -H "Ocp-Apim-Subscription-Key: YourSubscriptionKey"
-```
-
-You should receive a response body in the following format:
-
-```json
-{
- "values": [
- {
- "self": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.1/transcriptions/637d9333-6559-47a6-b8de-c7d732c1ddf3/files/2dd180a1-434e-4368-a1ac-37350700284f",
- "name": "contenturl_0.json",
- "kind": "Transcription",
- "properties": {
- "size": 3407
- },
- "createdDateTime": "2022-09-10T18:39:09Z",
- "links": {
- "contentUrl": "https://spsvcprodeus.blob.core.windows.net/bestor-c6e3ae79-1b48-41bf-92ff-940bea3e5c2d/TranscriptionData/637d9333-6559-47a6-b8de-c7d732c1ddf3_0_0.json?sv=2021-08-06&st=2022-09-10T18%3A36%3A01Z&se=2022-09-11T06%3A41%3A01Z&sr=b&sp=rl&sig=AobsqO9DH9CIOuGC5ifFH3QpkQay6PjHiWn5G87FcIg%3D"
- }
- },
- {
- "self": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.1/transcriptions/637d9333-6559-47a6-b8de-c7d732c1ddf3/files/c027c6a9-2436-4303-b64b-e98e3c9fc2e3",
- "name": "contenturl_1.json",
- "kind": "Transcription",
- "properties": {
- "size": 8233
- },
- "createdDateTime": "2022-09-10T18:39:09Z",
- "links": {
- "contentUrl": "https://spsvcprodeus.blob.core.windows.net/bestor-c6e3ae79-1b48-41bf-92ff-940bea3e5c2d/TranscriptionData/637d9333-6559-47a6-b8de-c7d732c1ddf3_1_0.json?sv=2021-08-06&st=2022-09-10T18%3A36%3A01Z&se=2022-09-11T06%3A41%3A01Z&sr=b&sp=rl&sig=wO3VxbhLK4PhT3rwLpJXBYHYQi5EQqyl%2Fp1lgjNvfh0%3D"
- }
- },
- {
- "self": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.1/transcriptions/637d9333-6559-47a6-b8de-c7d732c1ddf3/files/faea9a41-c95c-4d91-96ff-e39225def642",
- "name": "report.json",
- "kind": "TranscriptionReport",
- "properties": {
- "size": 279
- },
- "createdDateTime": "2022-09-10T18:39:09Z",
- "links": {
- "contentUrl": "https://spsvcprodeus.blob.core.windows.net/bestor-c6e3ae79-1b48-41bf-92ff-940bea3e5c2d/TranscriptionData/637d9333-6559-47a6-b8de-c7d732c1ddf3_report.json?sv=2021-08-06&st=2022-09-10T18%3A36%3A01Z&se=2022-09-11T06%3A41%3A01Z&sr=b&sp=rl&sig=gk1k%2Ft5qa1TpmM45tPommx%2F2%2Bc%2FUUfsYTX5FoSa1u%2FY%3D"
- }
- }
- ]
-}
-```
-
-The locations of the transcription files and the transcription report file, along with more details, are returned in the response body. The `contentUrl` property contains the URL to the [transcription](#transcription-result-file) (`"kind": "Transcription"`) or [transcription report](#transcription-report-file) (`"kind": "TranscriptionReport"`) file.
-
-If you didn't specify a container in the `destinationContainerUrl` property of the transcription request, the results are stored in a container managed by Microsoft. When the transcription job is deleted, the transcription result data is also deleted.
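-
-You can download each result file directly from its `contentUrl` with any HTTP client before the URL's SAS token expires. For example (the URL is a placeholder for a value copied from the response above):
-
-```azurecli-interactive
-curl -L -o contenturl_0.json "<contentUrl from the response>"
-```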
---
-The `spx batch transcription list` command returns a list of result files for a transcription. A [transcription report](#transcription-report-file) file is provided for each submitted batch transcription job. In addition, one [transcription](#transcription-result-file) file (the end result) is provided for each successfully transcribed audio file.
-- Set the required `files` flag.
-- Set the required `transcription` parameter to the ID of the transcription whose result files you want to list.
-
-Here's an example Speech CLI command that gets a list of result files for a transcription:
-
-```azurecli-interactive
-spx batch transcription list --api-version v3.1 --files --transcription YourTranscriptionId
-```
-
-You should receive a response body in the following format:
-
-```json
-{
- "values": [
- {
- "self": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.1/transcriptions/637d9333-6559-47a6-b8de-c7d732c1ddf3/files/2dd180a1-434e-4368-a1ac-37350700284f",
- "name": "contenturl_0.json",
- "kind": "Transcription",
- "properties": {
- "size": 3407
- },
- "createdDateTime": "2022-09-10T18:39:09Z",
- "links": {
- "contentUrl": "https://spsvcprodeus.blob.core.windows.net/bestor-c6e3ae79-1b48-41bf-92ff-940bea3e5c2d/TranscriptionData/637d9333-6559-47a6-b8de-c7d732c1ddf3_0_0.json?sv=2021-08-06&st=2022-09-10T18%3A36%3A01Z&se=2022-09-11T06%3A41%3A01Z&sr=b&sp=rl&sig=AobsqO9DH9CIOuGC5ifFH3QpkQay6PjHiWn5G87FcIg%3D"
- }
- },
- {
- "self": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.1/transcriptions/637d9333-6559-47a6-b8de-c7d732c1ddf3/files/c027c6a9-2436-4303-b64b-e98e3c9fc2e3",
- "name": "contenturl_1.json",
- "kind": "Transcription",
- "properties": {
- "size": 8233
- },
- "createdDateTime": "2022-09-10T18:39:09Z",
- "links": {
- "contentUrl": "https://spsvcprodeus.blob.core.windows.net/bestor-c6e3ae79-1b48-41bf-92ff-940bea3e5c2d/TranscriptionData/637d9333-6559-47a6-b8de-c7d732c1ddf3_1_0.json?sv=2021-08-06&st=2022-09-10T18%3A36%3A01Z&se=2022-09-11T06%3A41%3A01Z&sr=b&sp=rl&sig=wO3VxbhLK4PhT3rwLpJXBYHYQi5EQqyl%2Fp1lgjNvfh0%3D"
- }
- },
- {
- "self": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.1/transcriptions/637d9333-6559-47a6-b8de-c7d732c1ddf3/files/faea9a41-c95c-4d91-96ff-e39225def642",
- "name": "report.json",
- "kind": "TranscriptionReport",
- "properties": {
- "size": 279
- },
- "createdDateTime": "2022-09-10T18:39:09Z",
- "links": {
- "contentUrl": "https://spsvcprodeus.blob.core.windows.net/bestor-c6e3ae79-1b48-41bf-92ff-940bea3e5c2d/TranscriptionData/637d9333-6559-47a6-b8de-c7d732c1ddf3_report.json?sv=2021-08-06&st=2022-09-10T18%3A36%3A01Z&se=2022-09-11T06%3A41%3A01Z&sr=b&sp=rl&sig=gk1k%2Ft5qa1TpmM45tPommx%2F2%2Bc%2FUUfsYTX5FoSa1u%2FY%3D"
- }
- }
- ]
-}
-```
-
-The locations of the transcription files and the transcription report file, along with more details, are returned in the response body. The `contentUrl` property contains the URL to the [transcription](#transcription-result-file) (`"kind": "Transcription"`) or [transcription report](#transcription-report-file) (`"kind": "TranscriptionReport"`) file.
-
-By default, the results are stored in a container managed by Microsoft. When the transcription job is deleted, the transcription result data is also deleted.
--
-### Transcription report file
-
-One transcription report file is provided for each submitted batch transcription job.
-
-The contents of each transcription report file are formatted as JSON, as shown in this example.
-
-```json
-{
- "successfulTranscriptionsCount": 2,
- "failedTranscriptionsCount": 0,
- "details": [
- {
- "source": "https://crbn.us/hello.wav",
- "status": "Succeeded"
- },
- {
- "source": "https://crbn.us/whatstheweatherlike.wav",
- "status": "Succeeded"
- }
- ]
-}
-```
-
-### Transcription result file
-
-One transcription result file is provided for each successfully transcribed audio file.
-
-The contents of each transcription result file are formatted as JSON, as shown in this example.
-
-```json
-{
- "source": "...",
- "timestamp": "2023-07-10T14:28:16Z",
- "durationInTicks": 25800000,
- "duration": "PT2.58S",
- "combinedRecognizedPhrases": [
- {
- "channel": 0,
- "lexical": "hello world",
- "itn": "hello world",
- "maskedITN": "hello world",
- "display": "Hello world."
- }
- ],
- "recognizedPhrases": [
- {
- "recognitionStatus": "Success",
- "channel": 0,
- "offset": "PT0.76S",
- "duration": "PT1.32S",
- "offsetInTicks": 7600000.0,
- "durationInTicks": 13200000.0,
- "nBest": [
- {
- "confidence": 0.5643338,
- "lexical": "hello world",
- "itn": "hello world",
- "maskedITN": "hello world",
- "display": "Hello world.",
- "displayWords": [
- {
- "displayText": "Hello",
- "offset": "PT0.76S",
- "duration": "PT0.76S",
- "offsetInTicks": 7600000.0,
- "durationInTicks": 7600000.0
- },
- {
- "displayText": "world.",
- "offset": "PT1.52S",
- "duration": "PT0.56S",
- "offsetInTicks": 15200000.0,
- "durationInTicks": 5600000.0
- }
- ]
- },
- {
- "confidence": 0.1769063,
- "lexical": "helloworld",
- "itn": "helloworld",
- "maskedITN": "helloworld",
- "display": "helloworld"
- },
- {
- "confidence": 0.49964225,
- "lexical": "hello worlds",
- "itn": "hello worlds",
- "maskedITN": "hello worlds",
- "display": "hello worlds"
- },
- {
- "confidence": 0.4995761,
- "lexical": "hello worm",
- "itn": "hello worm",
- "maskedITN": "hello worm",
- "display": "hello worm"
- },
- {
- "confidence": 0.49418187,
- "lexical": "hello word",
- "itn": "hello word",
- "maskedITN": "hello word",
- "display": "hello word"
- }
- ]
- }
- ]
-}
-```
-
-Depending in part on the request parameters set when you created the transcription job, the transcription file can contain the following result properties.
-
-|Property|Description|
-|--|--|
-|`channel`|The channel number of the results. For stereo audio streams, the left and right channels are split during the transcription. A JSON result file is created for each input audio file.|
-|`combinedRecognizedPhrases`|The concatenated results of all phrases for the channel.|
-|`confidence`|The confidence value for the recognition.|
-|`display`|The display form of the recognized text. Added punctuation and capitalization are included.|
-|`displayWords`|The timestamps for each word of the transcription. The `displayFormWordLevelTimestampsEnabled` request property must be set to `true`, otherwise this property is not present.<br/><br/>**Note**: This property is only available with Speech to text REST API version 3.1.|
-|`duration`|The audio duration. The value is an ISO 8601 encoded duration.|
-|`durationInTicks`|The audio duration in ticks (1 tick is 100 nanoseconds).|
-|`itn`|The inverse text normalized (ITN) form of the recognized text. Abbreviations such as "Doctor Smith" to "Dr Smith", phone numbers, and other transformations are applied.|
-|`lexical`|The actual words recognized.|
-|`locale`|The locale identified from the input audio. The `languageIdentification` request property must be set, otherwise this property is not present.<br/><br/>**Note**: This property is only available with Speech to text REST API version 3.1.|
-|`maskedITN`|The ITN form with profanity masking applied.|
-|`nBest`|A list of possible transcriptions for the current phrase with confidences.|
-|`offset`|The offset in audio of this phrase. The value is an ISO 8601 encoded duration.|
-|`offsetInTicks`|The offset in audio of this phrase in ticks (1 tick is 100 nanoseconds).|
-|`recognitionStatus`|The recognition state. For example: "Success" or "Failure".|
-|`recognizedPhrases`|The list of results for each phrase.|
-|`source`|The URL that was provided as the input audio source. The source corresponds to the `contentUrls` or `contentContainerUrl` request property. The `source` property is the only way to confirm the audio input for a transcription.|
-|`speaker`|The identified speaker. The `diarization` and `diarizationEnabled` request properties must be set, otherwise this property is not present.|
-|`timestamp`|The creation date and time of the transcription. The value is an ISO 8601 encoded timestamp.|
-|`words`|A list of results with lexical text for each word of the phrase. The `wordLevelTimestampsEnabled` request property must be set to `true`, otherwise this property is not present.|
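-
-For illustration, here's a minimal Python sketch that reads a result file saved locally as `transcription.json` (a hypothetical file name), takes the first `nBest` hypothesis of each successfully recognized phrase, and converts tick values to seconds:
-
-```python
-import json
-
-# Read one transcription result file (a hypothetical local copy of the JSON shown above).
-with open("transcription.json", encoding="utf-8") as f:
-    result = json.load(f)
-
-print(f"Source: {result['source']}, duration: {result['duration']}")
-
-for phrase in result.get("recognizedPhrases", []):
-    if phrase["recognitionStatus"] != "Success":
-        continue
-    best = phrase["nBest"][0]  # the first entry is typically the best hypothesis
-    print(f"[channel {phrase['channel']}] {best['display']} (confidence {best['confidence']:.2f})")
-    for word in best.get("displayWords", []):
-        # 1 tick is 100 nanoseconds, so dividing by 10,000,000 gives seconds.
-        start_sec = word["offsetInTicks"] / 10_000_000
-        print(f"  {word['displayText']} starts at {start_sec:.2f} s")
-```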
--
-## Next steps
-
-- [Batch transcription overview](batch-transcription.md)
-- [Locate audio files for batch transcription](batch-transcription-audio-data.md)
-- [Create a batch transcription](batch-transcription-create.md)
-- [See batch transcription code samples at GitHub](https://github.com/Azure-Samples/cognitive-services-speech-sdk/tree/master/samples/batch/)
cognitive-services Batch Transcription https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/batch-transcription.md
- Title: Batch transcription overview - Speech service-
-description: Batch transcription is ideal if you want to transcribe a large quantity of audio in storage, such as Azure blobs. Then you can asynchronously retrieve transcriptions.
------- Previously updated : 03/15/2023---
-# What is batch transcription?
-
-Batch transcription is used to transcribe a large amount of audio data in storage. Both the [Speech to text REST API](rest-speech-to-text.md#transcriptions) and [Speech CLI](spx-basics.md) support batch transcription.
-
-You can provide multiple files per request or point to an Azure Blob Storage container with the audio files to transcribe. The batch transcription service can handle a large number of submitted transcriptions. The service transcribes the files concurrently, which reduces the turnaround time.
-
-## How does it work?
-
-With batch transcriptions, you submit the audio data, and then retrieve transcription results asynchronously. The service transcribes the audio data and stores the results in a storage container. You can then retrieve the results from the storage container.
-
-> [!TIP]
-> For a low or no-code solution, you can use the [Batch Speech to text Connector](/connectors/cognitiveservicesspe/) in Power Platform applications such as Power Automate, Power Apps, and Logic Apps. See the [Power Automate batch transcription](power-automate-batch-transcription.md) guide to get started.
-
-To use the batch transcription REST API:
-
-1. [Locate audio files for batch transcription](batch-transcription-audio-data.md) - You can upload your own data or use existing audio files via public URI or [shared access signature (SAS)](../../storage/common/storage-sas-overview.md) URI.
-1. [Create a batch transcription](batch-transcription-create.md) - Submit the transcription job with parameters such as the audio files, the transcription language, and the transcription model.
-1. [Get batch transcription results](batch-transcription-get.md) - Check transcription status and retrieve transcription results asynchronously.
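-
-For example, step 2 could be sketched in Python with the `requests` library as follows. This is a rough illustration only; the region, key, and audio URL are placeholders, and the full set of request options is described in [Create a batch transcription](batch-transcription-create.md).
-
-```python
-import requests
-
-# Rough sketch: submit a batch transcription job (Speech to text REST API v3.1).
-# The region, key, and audio URL below are placeholders.
-region = "eastus"
-key = "YOUR_SPEECH_RESOURCE_KEY"
-
-response = requests.post(
-    f"https://{region}.api.cognitive.microsoft.com/speechtotext/v3.1/transcriptions",
-    headers={"Ocp-Apim-Subscription-Key": key},
-    json={
-        "displayName": "My transcription",
-        "locale": "en-US",
-        "contentUrls": ["https://example.com/audio.wav"],
-        "properties": {"wordLevelTimestampsEnabled": True},
-    },
-)
-response.raise_for_status()
-# The job runs asynchronously; poll the returned "self" URL to check its status.
-print(response.json()["self"])
-```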
-
-Batch transcription jobs are scheduled on a best-effort basis. You can't estimate when a job will change into the running state, but it should happen within minutes under normal system load. When the job is in the running state, the transcription occurs faster than the audio runtime playback speed.
-
-## Next steps
-
-- [Locate audio files for batch transcription](batch-transcription-audio-data.md)
-- [Create a batch transcription](batch-transcription-create.md)
-- [Get batch transcription results](batch-transcription-get.md)
-- [See batch transcription code samples at GitHub](https://github.com/Azure-Samples/cognitive-services-speech-sdk/tree/master/samples/batch/)
cognitive-services Call Center Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/call-center-overview.md
- Title: Azure Cognitive Services for Call Center Overview-
-description: Azure Cognitive Services for Language and Speech can help you realize partial or full automation of telephony-based customer interactions, and provide accessibility across multiple channels.
------ Previously updated : 09/18/2022--
-# Call Center Overview
-
-Azure Cognitive Services for Language and Speech can help you realize partial or full automation of telephony-based customer interactions, and provide accessibility across multiple channels. With the Language and Speech services, you can further analyze call center transcriptions, extract and redact conversation personally identifiable information (PII), summarize the transcription, and detect the sentiment.
-
-Some example scenarios for the implementation of Azure Cognitive Services in call and contact centers are:
-- Virtual agents: Conversational AI-based telephony-integrated voicebots and voice-enabled chatbots
-- Agent-assist: Real-time transcription and analysis of a call to improve the customer experience by providing insights and suggested actions to agents
-- Post-call analytics: Post-call analysis to create insights into customer conversations to improve understanding and support continuous improvement of call handling, optimization of quality assurance and compliance control, as well as other insight-driven optimizations.
-
-> [!TIP]
-> Try the [Language Studio](https://language.cognitive.azure.com) or [Speech Studio](https://aka.ms/speechstudio/callcenter) for a demonstration on how to use the Language and Speech services to analyze call center conversations.
->
-> To deploy a call center transcription solution to Azure with a no-code approach, try the [Ingestion Client](./ingestion-client.md).
-
-## Cognitive Services features for call centers
-
-A holistic call center implementation typically incorporates technologies from the Language and Speech services.
-
-Audio data typically used in call centers, generated through landlines, mobile phones, and radios, is often narrowband, in the range of 8 kHz, which can create challenges when you're converting speech to text. The Speech service recognition models are trained to ensure that you can get high-quality transcriptions, however you choose to capture the audio.
-
-Once you've transcribed your audio with the Speech service, you can use the Language service to perform analytics on your call center data such as: sentiment analysis, summarizing the reason for customer calls, how they were resolved, extracting and redacting conversation PII, and more.
-
-### Speech service
-
-The Speech service offers the following features that can be used for call center use cases:
-
-- [Real-time speech to text](./how-to-recognize-speech.md): Recognize and transcribe audio in real-time from multiple inputs. For example, with virtual agents or agent-assist, you can continuously recognize audio input and control how to process results based on multiple events.
-- [Batch speech to text](./batch-transcription.md): Transcribe large amounts of audio files asynchronously, including speaker diarization. It's typically used in post-call analytics scenarios. Diarization is the process of recognizing and separating speakers in mono channel audio data.
-- [Text to speech](./text-to-speech.md): Text to speech enables your applications, tools, or devices to convert text into humanlike synthesized speech.
-- [Speaker identification](./speaker-recognition-overview.md): Helps you determine an unknown speaker's identity within a group of enrolled speakers. It's typically used for call center customer verification scenarios or fraud detection.
-- [Language Identification](./language-identification.md): Identify languages spoken in audio. It can be used in real-time and post-call analysis for insights or to control the environment (such as the output language of a virtual agent).
-
-The Speech service works well with prebuilt models. However, you might want to further customize and tune the experience for your product or environment. Typical examples for Speech customization include:
-
-| Speech customization | Description |
-| -- | -- |
-| [Custom Speech](./custom-speech-overview.md) | A speech to text feature used to evaluate and improve the speech recognition accuracy of use-case specific entities (such as alpha-numeric customer, case, and contract IDs, license plates, and names). You can also train a custom model with your own product names and industry terminology. |
-| [Custom Neural Voice](./custom-neural-voice.md) | A text to speech feature that lets you create a one-of-a-kind, customized, synthetic voice for your applications. |
-
-### Language service
-
-The Language service offers the following features that can be used for call center use cases:
-
-- [Personally Identifiable Information (PII) extraction and redaction](../language-service/personally-identifiable-information/how-to-call-for-conversations.md): Identify, categorize, and redact sensitive information in conversation transcription.
-- [Conversation summarization](../language-service/summarization/overview.md?tabs=conversation-summarization): Summarize in abstract text what each conversation participant said about the issues and resolutions. For example, a call center can group product issues that have a high volume.
-- [Sentiment analysis and opinion mining](../language-service/sentiment-opinion-mining/overview.md): Analyze transcriptions and associate positive, neutral, or negative sentiment at the utterance and conversation-level.
-
-While the Language service works well with prebuilt models, you might want to further customize and tune models to extract more information from your data. Typical examples for Language customization include:
-
-| Language customization | Description |
-| -- | -- |
-| [Custom NER (named entity recognition)](../language-service/custom-named-entity-recognition/overview.md) | Improve the detection and extraction of entities in transcriptions. |
-| [Custom text classification](../language-service/custom-text-classification/overview.md) | Classify and label transcribed utterances with either single or multiple classifications. |
-
-You can find an overview of all Language service features and customization options [here](../language-service/overview.md#available-features).
-
-## Next steps
-
-* [Post-call transcription and analytics quickstart](./call-center-quickstart.md)
-* [Try out the Language Studio](https://language.cognitive.azure.com)
-* [Try out the Speech Studio](https://aka.ms/speechstudio/callcenter)
cognitive-services Call Center Quickstart https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/call-center-quickstart.md
- Title: "Post-call transcription and analytics quickstart - Speech service"-
-description: In this quickstart, you perform sentiment analysis and conversation summarization of call center transcriptions.
------ Previously updated : 09/20/2022---
-# Quickstart: Post-call transcription and analytics
--
-## Next steps
-
-> [!div class="nextstepaction"]
-> [Try the Ingestion Client](ingestion-client.md)
cognitive-services Call Center Telephony Integration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/call-center-telephony-integration.md
- Title: Call Center Telephony Integration - Speech service-
-description: A common scenario for speech to text is transcribing large volumes of telephony data that come from various systems, such as interactive voice response (IVR) in real-time. This requires an integration with the Telephony System used.
------ Previously updated : 08/10/2022---
-# Telephony Integration
-
-To support real-time scenarios, like Virtual Agent and Agent Assist in call centers, an integration with the call center's telephony system is required.
-
-Typically, integration with the Speech service is handled by a telephony client connected to the customer's SIP/RTP processor, for example, to a Session Border Controller (SBC).
-
-Usually, the telephony client handles the incoming audio stream from the SIP/RTP processor, converts it to PCM, and connects the streams using continuous recognition. It also triages the processing of the results, for example, analyzing speech transcripts for Agent Assist or connecting to a dialog processing engine (for example, Azure Bot Framework or Power Virtual Agents) for Virtual Agent.
-
-For easier integration, the Speech service also supports "ALAW in WAV container" and "MULAW in WAV container" for audio streaming.
-
-To build this integration we recommend using the [Speech SDK](./speech-sdk.md).
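-
-As a rough sketch of that integration (Python Speech SDK, with placeholder key and region), a telephony client could push MULAW-in-WAV audio into continuous recognition through a push stream:
-
-```python
-import azure.cognitiveservices.speech as speechsdk
-
-# Rough sketch: feed MULAW-in-WAV audio from a telephony client into continuous
-# recognition through a push stream. The key and region are placeholders.
-speech_config = speechsdk.SpeechConfig(subscription="YOUR_KEY", region="YOUR_REGION")
-
-stream_format = speechsdk.audio.AudioStreamFormat(
-    compressed_stream_format=speechsdk.AudioStreamContainerFormat.MULAW)
-push_stream = speechsdk.audio.PushAudioInputStream(stream_format=stream_format)
-audio_config = speechsdk.audio.AudioConfig(stream=push_stream)
-
-recognizer = speechsdk.SpeechRecognizer(speech_config=speech_config, audio_config=audio_config)
-recognizer.recognized.connect(lambda evt: print("TRANSCRIPT:", evt.result.text))
-recognizer.start_continuous_recognition()
-
-# The telephony client would call push_stream.write(chunk) for each audio chunk
-# received from the SIP/RTP processor, and push_stream.close() at hang-up.
-```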
--
-> [!TIP]
-> For guidance on reducing Text to speech latency check out the **[How to lower speech synthesis latency](./how-to-lower-speech-synthesis-latency.md?pivots=programming-language-csharp)** guide.
->
-> In addition, consider implementing a text to speech cache to store all synthesized audio, and play back audio from the cache when a string has previously been synthesized.
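-
-A minimal sketch of that caching idea in Python (the key, region, and prompt text are placeholders) could be as simple as a dictionary keyed by the input string:
-
-```python
-import azure.cognitiveservices.speech as speechsdk
-
-# Rough sketch: reuse previously synthesized audio instead of re-synthesizing it.
-# The key and region are placeholders; audio_config=None keeps the audio in memory.
-speech_config = speechsdk.SpeechConfig(subscription="YOUR_KEY", region="YOUR_REGION")
-synthesizer = speechsdk.SpeechSynthesizer(speech_config=speech_config, audio_config=None)
-
-tts_cache = {}  # maps input text to synthesized audio bytes
-
-def synthesize(text):
-    if text not in tts_cache:
-        result = synthesizer.speak_text_async(text).get()
-        tts_cache[text] = result.audio_data  # raw audio bytes for playback
-    return tts_cache[text]
-
-prompt_audio = synthesize("Please hold while I transfer you.")  # later calls with the same text are served from the cache
-```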
-
-## Next steps
-
-* [Learn about Speech SDK](./speech-sdk.md)
-* [How to lower speech synthesis latency](./how-to-lower-speech-synthesis-latency.md)
cognitive-services Captioning Concepts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/captioning-concepts.md
- Title: Captioning with speech to text - Speech service-
-description: An overview of key concepts for captioning with speech to text.
------- Previously updated : 06/02/2022-
-zone_pivot_groups: programming-languages-speech-sdk-cli
--
-# Captioning with speech to text
-
-In this guide, you learn how to create captions with speech to text. Captioning is the process of converting the audio content of a television broadcast, webcast, film, video, live event, or other production into text, and then displaying the text on a screen, monitor, or other visual display system.
-
-Concepts include how to synchronize captions with your input audio, apply profanity filters, get partial results, apply customizations, and identify spoken languages for multilingual scenarios. This guide covers captioning for speech, but doesn't include speaker ID or sound effects such as bells ringing.
-
-Here are some common captioning scenarios:
-- Online courses and instructional videos
-- Sporting events
-- Voice and video calls
-
-The following are aspects to consider when using captioning:
-* Let your audience know that captions are generated by an automated service.
-* Center captions horizontally on the screen, in a large and prominent font.
-* Consider whether to use partial results, when to start displaying captions, and how many words to show at a time.
-* Learn about captioning protocols such as [SMPTE-TT](https://ieeexplore.ieee.org/document/7291854).
-* Consider output formats such as SRT (SubRip Text) and WebVTT (Web Video Text Tracks). These can be loaded onto most video players such as VLC, automatically adding the captions on to your video.
-
-> [!TIP]
-> Try the [Speech Studio](https://aka.ms/speechstudio/captioning) and choose a sample video clip to see real-time or offline processed captioning results.
->
-> Try the [Azure Video Indexer](../../azure-video-indexer/video-indexer-overview.md) as a demonstration of how you can get captions for videos that you upload.
-
-Captioning can accompany real-time or pre-recorded speech. Whether you're showing captions in real-time or with a recording, you can use the [Speech SDK](speech-sdk.md) or [Speech CLI](spx-overview.md) to recognize speech and get transcriptions. You can also use the [Batch transcription API](batch-transcription.md) for pre-recorded video.
-
-## Caption output format
-
-The Speech service supports output formats such as SRT (SubRip Text) and WebVTT (Web Video Text Tracks). These can be loaded onto most video players such as VLC, automatically adding the captions on to your video.
-
-> [!TIP]
-> The Speech service provides [profanity filter](display-text-format.md#profanity-filter) options. You can specify whether to mask, remove, or show profanity.
-
-The [SRT](https://docs.fileformat.com/video/srt/) (SubRip Text) timespan output format is `hh:mm:ss,fff`.
-
-```srt
-1
-00:00:00,180 --> 00:00:03,230
-Welcome to applied Mathematics course 201.
-```
-
-The [WebVTT](https://www.w3.org/TR/webvtt1/#introduction) (Web Video Text Tracks) timespan output format is `hh:mm:ss.fff`.
-
-```
-WEBVTT
-
-00:00:00.180 --> 00:00:03.230
-Welcome to applied Mathematics course 201.
-{
- "ResultId": "8e89437b4b9349088a933f8db4ccc263",
- "Duration": "00:00:03.0500000"
-}
-```
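-
-For illustration, a small Python sketch that formats an offset expressed in ticks (1 tick is 100 nanoseconds) into either timespan style might look like this:
-
-```python
-# Rough sketch: convert a caption time expressed in ticks (1 tick = 100 nanoseconds)
-# into an SRT or WebVTT timespan string.
-def format_timestamp(ticks, srt=True):
-    total_ms = ticks // 10_000          # 10,000 ticks per millisecond
-    hours, rest = divmod(total_ms, 3_600_000)
-    minutes, rest = divmod(rest, 60_000)
-    seconds, ms = divmod(rest, 1_000)
-    separator = "," if srt else "."     # SRT uses a comma, WebVTT uses a period
-    return f"{hours:02}:{minutes:02}:{seconds:02}{separator}{ms:03}"
-
-# 1,800,000 ticks = 0.18 seconds, matching the caption start time above.
-print(format_timestamp(1_800_000))             # 00:00:00,180
-print(format_timestamp(1_800_000, srt=False))  # 00:00:00.180
-```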
-
-## Input audio to the Speech service
-
-For real-time captioning, use a microphone or audio input stream instead of file input. For examples of how to recognize speech from a microphone, see the [Speech to text quickstart](get-started-speech-to-text.md) and [How to recognize speech](how-to-recognize-speech.md) documentation. For more information about streaming, see [How to use the audio input stream](how-to-use-audio-input-streams.md).
-
-For captioning of a prerecording, send file input to the Speech service. For more information, see [How to use compressed input audio](how-to-use-codec-compressed-audio-input-streams.md).
-
-## Caption and speech synchronization
-
-You'll want to synchronize captions with the audio track, whether it's done in real-time or with a prerecording.
-
-The Speech service returns the offset and duration of the recognized speech.
--
-For more information, see [Get speech recognition results](get-speech-recognition-results.md).
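-
-As a rough Python sketch (the key, region, and file name are placeholders), you could read the offset and duration from each final result to time your captions; both values are reported in 100-nanosecond ticks:
-
-```python
-import azure.cognitiveservices.speech as speechsdk
-
-# Rough sketch: print each final phrase with its start and end time in seconds,
-# so captions can be aligned with the audio track. Key, region, and file are placeholders.
-speech_config = speechsdk.SpeechConfig(subscription="YOUR_KEY", region="YOUR_REGION")
-audio_config = speechsdk.audio.AudioConfig(filename="caption.this.wav")
-recognizer = speechsdk.SpeechRecognizer(speech_config=speech_config, audio_config=audio_config)
-
-def on_recognized(evt):
-    # offset and duration are in ticks (1 tick = 100 nanoseconds).
-    start = evt.result.offset / 10_000_000
-    end = (evt.result.offset + evt.result.duration) / 10_000_000
-    print(f"{start:.2f}s --> {end:.2f}s  {evt.result.text}")
-
-recognizer.recognized.connect(on_recognized)
-recognizer.start_continuous_recognition()
-# ... keep the process alive while the audio is recognized, then:
-# recognizer.stop_continuous_recognition()
-```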
-
-## Get partial results
-
-Consider when to start displaying captions, and how many words to show at a time. Speech recognition results are subject to change while an utterance is still being recognized. Partial results are returned with each `Recognizing` event. As each word is processed, the Speech service re-evaluates an utterance in the new context and again returns the best result. The new result isn't guaranteed to be the same as the previous result. The complete and final transcription of an utterance is returned with the `Recognized` event.
-
-> [!NOTE]
-> Punctuation of partial results is not available.
-
-For captioning of prerecorded speech or wherever latency isn't a concern, you could wait for the complete transcription of each utterance before displaying any words. Given the final offset and duration of each word in an utterance, you know when to show subsequent words at pace with the soundtrack.
-
-Real-time captioning presents tradeoffs with respect to latency versus accuracy. You could show the text from each `Recognizing` event as soon as possible. However, if you can accept some latency, you can improve the accuracy of the caption by displaying the text from the `Recognized` event. There's also some middle ground, which is referred to as "stable partial results".
-
-You can request that the Speech service return fewer `Recognizing` events that are more accurate. This is done by setting the `SpeechServiceResponse_StablePartialResultThreshold` property to a value between `0` and `2147483647`. The value that you set is the number of times a word has to be recognized before the Speech service returns a `Recognizing` event. For example, if you set the `SpeechServiceResponse_StablePartialResultThreshold` property value to `5`, the Speech service will affirm recognition of a word at least five times before returning the partial results to you with a `Recognizing` event.
-
-```csharp
-speechConfig.SetProperty(PropertyId.SpeechServiceResponse_StablePartialResultThreshold, 5);
-```
-```cpp
-speechConfig->SetProperty(PropertyId::SpeechServiceResponse_StablePartialResultThreshold, 5);
-```
-```go
-speechConfig.SetProperty(common.SpeechServiceResponseStablePartialResultThreshold, 5)
-```
-```java
-speechConfig.setProperty(PropertyId.SpeechServiceResponse_StablePartialResultThreshold, 5);
-```
-```javascript
-speechConfig.setProperty(sdk.PropertyId.SpeechServiceResponse_StablePartialResultThreshold, 5);
-```
-```objective-c
-[self.speechConfig setPropertyTo:5 byId:SPXSpeechServiceResponseStablePartialResultThreshold];
-```
-```swift
-self.speechConfig!.setPropertyTo(5, by: SPXPropertyId.speechServiceResponseStablePartialResultThreshold)
-```
-```python
-speech_config.set_property(property_id = speechsdk.PropertyId.SpeechServiceResponse_StablePartialResultThreshold, value = 5)
-```
-```console
-spx recognize --file caption.this.mp4 --format any --property SpeechServiceResponse_StablePartialResultThreshold=5 --output vtt file - --output srt file -
-```
-
-Requesting more stable partial results will reduce the "flickering" or changing text, but it can increase latency as you wait for higher confidence results.
-
-### Stable partial threshold example
-In the following recognition sequence without setting a stable partial threshold, "math" is recognized as a word, but the final text is "mathematics". At another point, "course 2" is recognized, but the final text is "course 201".
-
-```console
-RECOGNIZING: Text=welcome to
-RECOGNIZING: Text=welcome to applied math
-RECOGNIZING: Text=welcome to applied mathematics
-RECOGNIZING: Text=welcome to applied mathematics course 2
-RECOGNIZING: Text=welcome to applied mathematics course 201
-RECOGNIZED: Text=Welcome to applied Mathematics course 201.
-```
-
-In the previous example, the transcriptions were additive and no text was retracted. But at other times you might find that the partial results were inaccurate. In either case, the unstable partial results can be perceived as "flickering" when displayed.
-
-For this example, if the stable partial result threshold is set to `5`, no words are altered or backtracked.
-
-```console
-RECOGNIZING: Text=welcome to
-RECOGNIZING: Text=welcome to applied
-RECOGNIZING: Text=welcome to applied mathematics
-RECOGNIZED: Text=Welcome to applied Mathematics course 201.
-```
-
-## Language identification
-
-If the language in the audio could change, use continuous [language identification](language-identification.md). Language identification is used to identify languages spoken in audio when compared against a list of [supported languages](language-support.md?tabs=language-identification). You provide up to 10 candidate languages, at least one of which is expected to be in the audio. The Speech service returns the most likely language in the audio.
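-
-For example, a basic at-start sketch with the Python Speech SDK might look like the following (the key, region, file name, and candidate languages are placeholders; continuous language identification needs additional configuration as described in the linked article):
-
-```python
-import azure.cognitiveservices.speech as speechsdk
-
-# Rough sketch: detect the spoken language from a list of candidates.
-# Key, region, file name, and candidate languages are placeholders.
-speech_config = speechsdk.SpeechConfig(subscription="YOUR_KEY", region="YOUR_REGION")
-auto_detect_config = speechsdk.languageconfig.AutoDetectSourceLanguageConfig(
-    languages=["en-US", "de-DE", "fr-FR"])
-audio_config = speechsdk.audio.AudioConfig(filename="caption.this.wav")
-
-recognizer = speechsdk.SpeechRecognizer(
-    speech_config=speech_config,
-    auto_detect_source_language_config=auto_detect_config,
-    audio_config=audio_config)
-
-result = recognizer.recognize_once()
-detected = speechsdk.AutoDetectSourceLanguageResult(result)
-print(detected.language, "->", result.text)
-```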
-
-## Customizations to improve accuracy
-
-A [phrase list](improve-accuracy-phrase-list.md) is a list of words or phrases that you provide right before starting speech recognition. Adding a phrase to a phrase list increases its importance, thus making it more likely to be recognized.
-
-Examples of phrases include:
-* Names
-* Geographical locations
-* Homonyms
-* Words or acronyms unique to your industry or organization
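-
-A minimal Python sketch of adding such phrases (the key, region, and phrase values are placeholders) might look like this:
-
-```python
-import azure.cognitiveservices.speech as speechsdk
-
-# Rough sketch: boost recognition of specific names or terms with a phrase list.
-# The key, region, and phrases are placeholders; the default microphone is used.
-speech_config = speechsdk.SpeechConfig(subscription="YOUR_KEY", region="YOUR_REGION")
-recognizer = speechsdk.SpeechRecognizer(speech_config=speech_config)
-
-phrase_list = speechsdk.PhraseListGrammar.from_recognizer(recognizer)
-phrase_list.addPhrase("Contoso")        # a product or organization name
-phrase_list.addPhrase("Jessie Smith")   # a speaker name likely to appear
-
-result = recognizer.recognize_once()
-print(result.text)
-```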
-
-There are some situations where [training a custom model](custom-speech-overview.md) is likely the best option to improve accuracy. For example, if you're captioning orthodontics lectures, you might want to train a custom model with the corresponding domain data.
-
-## Next steps
-
-* [Captioning quickstart](captioning-quickstart.md)
-* [Get speech recognition results](get-speech-recognition-results.md)
cognitive-services Captioning Quickstart https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/captioning-quickstart.md
- Title: "Create captions with speech to text quickstart - Speech service"-
-description: In this quickstart, you convert speech to text as captions.
------ Previously updated : 04/23/2022--
-zone_pivot_groups: programming-languages-speech-sdk-cli
--
-# Quickstart: Create captions with speech to text
----------
-## Next steps
-
-> [!div class="nextstepaction"]
-> [Learn more about speech recognition](how-to-recognize-speech.md)
cognitive-services Conversation Transcription https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/conversation-transcription.md
- Title: Conversation transcription overview - Speech service-
-description: You use the conversation transcription feature for meetings. It combines recognition, speaker ID, and diarization to provide transcription of any conversation.
------ Previously updated : 01/23/2022----
-# What is conversation transcription?
-
-Conversation transcription is a [speech to text](speech-to-text.md) solution that provides real-time or asynchronous transcription of any conversation. This feature, which is currently in preview, combines speech recognition, speaker identification, and sentence attribution to determine who said what, and when, in a conversation.
-
-> [!NOTE]
-> Multi-device conversation access is a preview feature.
-
-## Key features
-
-You might find the following features of conversation transcription useful:
-
-- **Timestamps:** Each speaker utterance has a timestamp, so that you can easily find when a phrase was said.
-- **Readable transcripts:** Transcripts have formatting and punctuation added automatically to ensure the text closely matches what was being said.
-- **User profiles:** User profiles are generated by collecting user voice samples and sending them to signature generation.
-- **Speaker identification:** Speakers are identified by using user profiles, and a _speaker identifier_ is assigned to each.
-- **Multi-speaker diarization:** Determine who said what by synthesizing the audio stream with each speaker identifier.
-- **Real-time transcription:** Provide live transcripts of who is saying what, and when, while the conversation is happening.
-- **Asynchronous transcription:** Provide transcripts with higher accuracy by using a multichannel audio stream.
-
-> [!NOTE]
-> Although conversation transcription doesn't put a limit on the number of speakers in the room, it's optimized for 2-10 speakers per session.
-
-## Get started
-
-See the real-time conversation transcription [quickstart](how-to-use-conversation-transcription.md) to get started.
-
-## Use cases
-
-To make meetings inclusive for everyone, such as participants who are deaf and hard of hearing, it's important to have transcription in real-time. Conversation transcription in real-time mode takes meeting audio and determines who is saying what, allowing all meeting participants to follow the transcript and participate in the meeting, without a delay.
-
-Meeting participants can focus on the meeting and leave note-taking to conversation transcription. Participants can actively engage in the meeting and quickly follow up on next steps, using the transcript instead of taking notes and potentially missing something during the meeting.
-
-## How it works
-
-The following diagram shows a high-level overview of how the feature works.
-
-![Diagram that shows the relationships among different pieces of the conversation transcription solution.](media/scenarios/conversation-transcription-service.png)
-
-## Expected inputs
-
-Conversation transcription uses two types of inputs:
-
-- **Multi-channel audio stream:** For specification and design details, see [Microphone array recommendations](./speech-sdk-microphone.md).
-- **User voice samples:** Conversation transcription needs user profiles in advance of the conversation for speaker identification. Collect audio recordings from each user, and then send the recordings to the [signature generation service](https://aka.ms/cts/signaturegenservice) to validate the audio and generate user profiles.
-
-> [!NOTE]
-> Single channel audio configuration for conversation transcription is currently only available in private preview.
-
-User voice samples for voice signatures are required for speaker identification. Speakers who don't have voice samples are recognized as *unidentified*. Unidentified speakers can still be differentiated when the `DifferentiateGuestSpeakers` property is enabled (see the following example). The transcription output then shows speakers as, for example, *Guest_0* and *Guest_1*, instead of recognizing them as pre-enrolled specific speaker names.
-
-```csharp
-config.SetProperty("DifferentiateGuestSpeakers", "true");
-```
-
-## Real-time vs. asynchronous
-
-The following sections provide more detail about transcription modes you can choose.
-
-### Real-time
-
-Audio data is processed live to return the speaker identifier and transcript. Select this mode if your transcription solution requirement is to provide conversation participants a live transcript view of their ongoing conversation. For example, building an application to make meetings more accessible to participants with hearing loss or deafness is an ideal use case for real-time transcription.
-
-### Asynchronous
-
-Audio data is batch processed to return the speaker identifier and transcript. Select this mode if your transcription solution requirement is to provide higher accuracy, without the live transcript view. For example, if you want to build an application to allow meeting participants to easily catch up on missed meetings, then use the asynchronous transcription mode to get high-accuracy transcription results.
-
-### Real-time plus asynchronous
-
-Audio data is processed live to return the speaker identifier and transcript, and, in addition, requests a high-accuracy transcript through asynchronous processing. Select this mode if your application has a need for real-time transcription, and also requires a higher accuracy transcript for use after the conversation or meeting occurred.
-
-## Language support
-
-Currently, conversation transcription supports [all speech to text languages](language-support.md?tabs=stt) in the following regions: `centralus`, `eastasia`, `eastus`, `westeurope`.
-
-## Next steps
-
-> [!div class="nextstepaction"]
-> [Transcribe conversations in real-time](how-to-use-conversation-transcription.md)
cognitive-services Custom Commands Encryption Of Data At Rest https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/custom-commands-encryption-of-data-at-rest.md
- Title: Custom Commands service encryption of data at rest-
-description: Custom Commands encryption of data at rest.
------ Previously updated : 07/05/2020----
-# Custom Commands encryption of data at rest
--
-Custom Commands automatically encrypts your data when it is persisted to the cloud. The Custom Commands service encryption protects your data and helps you meet your organizational security and compliance commitments.
-
-> [!NOTE]
-> Custom Commands service doesn't automatically enable encryption for the LUIS resources associated with your application. If needed, you must enable encryption for your LUIS resource from [here](../luis/encrypt-data-at-rest.md).
-
-## About Cognitive Services encryption
-Data is encrypted and decrypted using [FIPS 140-2](https://en.wikipedia.org/wiki/FIPS_140-2) compliant [256-bit AES](https://en.wikipedia.org/wiki/Advanced_Encryption_Standard) encryption. Encryption and decryption are transparent, meaning encryption and access are managed for you. Your data is secure by default and you don't need to modify your code or applications to take advantage of encryption.
-
-## About encryption key management
-
-When you use Custom Commands, the Speech service stores the following data in the cloud:
-* Configuration JSON behind the Custom Commands application
-* LUIS authoring and prediction key
-
-By default, your subscription uses Microsoft-managed encryption keys. However, you can also manage your subscription with your own encryption keys. Customer-managed keys (CMK), also known as bring your own key (BYOK), offer greater flexibility to create, rotate, disable, and revoke access controls. You can also audit the encryption keys used to protect your data.
--
-> [!IMPORTANT]
-> Customer-managed keys are only available for resources created after 27 June, 2020. To use CMK with the Speech service, you will need to create a new Speech resource. Once the resource is created, you can use Azure Key Vault to set up your managed identity.
-
-To request the ability to use customer-managed keys, fill out and submit the Customer-Managed Key Request Form. It will take approximately 3-5 business days to hear back on the status of your request. Depending on demand, you may be placed in a queue and approved as space becomes available. Once approved for using CMK with the Speech service, you'll need to create a new Speech resource from the Azure portal.
- > [!NOTE]
- > **Customer-managed keys (CMK) are supported only for Custom Commands.**
- >
- > **Custom Speech and Custom Voice still support only Bring Your Own Storage (BYOS).** [Learn more](speech-encryption-of-data-at-rest.md)
- >
- > If you're using the given Speech resource to access these services, compliance needs must be met by explicitly configuring BYOS.
--
-## Customer-managed keys with Azure Key Vault
-
-You must use Azure Key Vault to store customer-managed keys. You can either create your own keys and store them in a key vault, or you can use the Azure Key Vault APIs to generate keys. The Speech resource and the key vault must be in the same region and in the same Azure Active Directory (Azure AD) tenant, but they can be in different subscriptions. For more information about Azure Key Vault, see [What is Azure Key Vault?](../../key-vault/general/overview.md).
-
-When a new Speech resource is created and used to provision Custom Commands application - data is always encrypted using Microsoft-managed keys. It's not possible to enable customer-managed keys at the time that the resource is created. Customer-managed keys are stored in Azure Key Vault, and the key vault must be provisioned with access policies that grant key permissions to the managed identity that is associated with the Cognitive Services resource. The managed identity is available only after the resource is created using the Pricing Tier required for CMK.
-
-Enabling customer managed keys will also enable a system assigned [managed identity](../../active-directory/managed-identities-azure-resources/overview.md), a feature of Azure AD. Once the system assigned managed identity is enabled, this resource will be registered with Azure Active Directory. After being registered, the managed identity will be given access to the Key Vault selected during customer managed key setup.
-
-> [!IMPORTANT]
-> If you disable system assigned managed identities, access to the key vault will be removed and any data encrypted with the customer keys will no longer be accessible. Any features that depend on this data will stop working.
-
-> [!IMPORTANT]
-> Managed identities do not currently support cross-directory scenarios. When you configure customer-managed keys in the Azure portal, a managed identity is automatically assigned under the covers. If you subsequently move the subscription, resource group, or resource from one Azure AD directory to another, the managed identity associated with the resource is not transferred to the new tenant, so customer-managed keys may no longer work. For more information, see **Transferring a subscription between Azure AD directories** in [FAQs and known issues with managed identities for Azure resources](../../active-directory/managed-identities-azure-resources/known-issues.md#transferring-a-subscription-between-azure-ad-directories).
-
-## Configure Azure Key Vault
-
-Using customer-managed keys requires that two properties be set in the key vault, **Soft Delete** and **Do Not Purge**. These properties are not enabled by default, but can be enabled using either PowerShell or Azure CLI on a new or existing key vault.
-
-> [!IMPORTANT]
-> If you do not have the **Soft Delete** and **Do Not Purge** properties enabled and you delete your key, you won't be able to recover the data in your Cognitive Service resource.
-
-To learn how to enable these properties on an existing key vault, see the sections titled **Enabling soft-delete** and **Enabling Purge Protection** in one of the following articles:
-
-- [How to use soft-delete with PowerShell](../../key-vault/general/key-vault-recovery.md).
-- [How to use soft-delete with CLI](../../key-vault/general/key-vault-recovery.md).
-
-Only RSA keys of size 2048 are supported with Azure Storage encryption. For more information about keys, see **Key Vault keys** in [About Azure Key Vault keys, secrets and certificates](../../key-vault/general/about-keys-secrets-certificates.md).
-
-## Enable customer-managed keys for your Speech resource
-
-To enable customer-managed keys in the Azure portal, follow these steps:
-
-1. Navigate to your Speech resource.
-1. On the **Settings** blade for your Speech resource, click **Encryption**. Select the **Customer Managed Keys** option, as shown in the following figure.
-
- ![Screenshot showing how to select Customer Managed Keys](media/custom-commands/select-cmk.png)
-
-## Specify a key
-
-After you enable customer-managed keys, you'll have the opportunity to specify a key to associate with the Cognitive Services resource.
-
-### Specify a key as a URI
-
-To specify a key as a URI, follow these steps:
-
-1. To locate the key URI in the Azure portal, navigate to your key vault, and select the **Keys** setting. Select the desired key, then click the key to view its versions. Select a key version to view the settings for that version.
-1. Copy the value of the **Key Identifier** field, which provides the URI.
-
- ![Screenshot showing key vault key URI](../media/cognitive-services-encryption/key-uri-portal.png)
-
-1. In the **Encryption** settings for your Speech service, choose the **Enter key URI** option.
-1. Paste the URI that you copied into the **Key URI** field.
-
-1. Specify the subscription that contains the key vault.
-1. Save your changes.
-
-### Specify a key from a key vault
-
-To specify a key from a key vault, first make sure that you have a key vault that contains a key. To specify a key from a key vault, follow these steps:
-
-1. Choose the **Select from Key Vault** option.
-1. Select the key vault containing the key you want to use.
-1. Select the key from the key vault.
-
- ![Screenshot showing customer-managed key option](media/custom-commands/configure-cmk-fromKV.png)
-
-1. Save your changes.
-
-## Update the key version
-
-When you create a new version of a key, update the Speech resource to use the new version. Follow these steps:
-
-1. Navigate to your Speech resource and display the **Encryption** settings.
-1. Enter the URI for the new key version. Alternately, you can select the key vault and the key again to update the version.
-1. Save your changes.
-
-## Use a different key
-
-To change the key used for encryption, follow these steps:
-
-1. Navigate to your Speech resource and display the **Encryption** settings.
-1. Enter the URI for the new key. Alternately, you can select the key vault and choose a new key.
-1. Save your changes.
-
-## Rotate customer-managed keys
-
-You can rotate a customer-managed key in Azure Key Vault according to your compliance policies. When the key is rotated, you must update the Speech resource to use the new key URI. To learn how to update the resource to use a new version of the key in the Azure portal, see [Update the key version](#update-the-key-version).
-
-Rotating the key does not trigger re-encryption of data in the resource. There is no further action required from the user.
-
-## Revoke access to customer-managed keys
-
-To revoke access to customer-managed keys, use PowerShell or Azure CLI. For more information, see [Azure Key Vault PowerShell](/powershell/module/az.keyvault//) or [Azure Key Vault CLI](/cli/azure/keyvault). Revoking access effectively blocks access to all data in the Cognitive Services resource, as the encryption key is inaccessible by Cognitive Services.
-
-## Disable customer-managed keys
-
-When you disable customer-managed keys, your Speech resource is then encrypted with Microsoft-managed keys. To disable customer-managed keys, follow these steps:
-
-1. Navigate to your Speech resource and display the **Encryption** settings.
-1. Deselect the checkbox next to the **Use your own key** setting.
-
-## Next steps
-
-* [Speech Customer-Managed Key Request Form](https://aka.ms/cogsvc-cmk)
-* [Learn more about Azure Key Vault](../../key-vault/general/overview.md)
-* [What are managed identities](../../active-directory/managed-identities-azure-resources/overview.md)
cognitive-services Custom Commands References https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/custom-commands-references.md
- Title: 'Custom Commands concepts and definitions - Speech service'-
-description: In this article, you learn about concepts and definitions for Custom Commands applications.
------ Previously updated : 06/18/2020----
-# Custom Commands concepts and definitions
--
-This article serves as a reference for concepts and definitions for Custom Commands applications.
-
-## Commands configuration
-Commands are the basic building blocks of a Custom Commands application. A command is a set of configurations required to complete a specific task defined by a user.
-
-### Example sentences
-Example utterances are the set of examples the user can say to trigger a particular command. You need to provide only a sample of utterances and not an exhaustive list.
-
-### Parameters
-Parameters are information required by the commands to complete a task. In complex scenarios, parameters can also be used to define conditions that trigger custom actions.
-
-### Completion rules
-Completion rules are a series of rules to be executed after the command is ready to be fulfilled, for example, when all the conditions of the rules are satisfied.
-
-### Interaction rules
-Interaction rules are additional rules to handle more specific or complex situations. You can add additional validations or configure advanced features such as confirmations or a one-step correction. You can also build your own custom interaction rules.
-
-## Parameters configuration
-
-Parameters are information required by commands to complete a task. In complex scenarios, parameters can also be used to define conditions that trigger custom actions.
-
-### Name
-A parameter is identified by the name property. You should always give a descriptive name to a parameter. A parameter can be referred across different sections, for example, when you construct conditions, speech responses, or other actions.
-
-### Required
-This check box indicates whether a value for this parameter is required for command fulfillment or completion. You must configure responses to prompt the user to provide a value if a parameter is marked as required.
-
-Note that, if you configured a **required parameter** to have a **Default value**, the system will still explicitly prompt for the parameter's value.
-
-### Type
-Custom Commands supports the following parameter types:
-
-* Age
-* Currency
-* DateTime
-* Dimension
-* Email
-* Geography
-* Number
-* Ordinal
-* Percentage
-* PersonName
-* PhoneNumber
-* String
-* Temperature
-* Url
-
-Every locale supports the "String" parameter type, but availability of all other types differs by locale. Custom Commands uses LUIS's prebuilt entity resolution, so the availability of a parameter type in a locale depends on LUIS's prebuilt entity support in that locale. You can find [more details on LUIS's prebuilt entity support per locale](../luis/luis-reference-prebuilt-entities.md). Custom LUIS entities (such as machine learned entities) are currently not supported.
-
-Some parameter types like Number, String and DateTime support default value configuration, which you can configure from the portal.
-
-### Configuration
-Configuration is a parameter property defined only for the type String. The following values are supported:
-
-* **None**.
-* **Accept full input**: When enabled, a parameter accepts any input utterance. This option is useful when the user needs a parameter with the full utterance. An example is postal addresses.
-* **Accept predefined input values from an external catalog**: This value is used to configure a parameter that can assume a wide variety of values. An example is a sales catalog. In this case, the catalog is hosted on an external web endpoint and can be configured independently.
-* **Accept predefined input values from internal catalog**: This value is used to configure a parameter that can assume a few values. In this case, values must be configured in the Speech Studio.
--
-### Validation
-Validations are constructs applicable to certain parameter types that let you configure constraints on a parameter's value. Currently, Custom Commands supports validations on the following parameter types:
-
-* DateTime
-* Number
-
-## Rules configuration
-A rule in Custom Commands is defined by a set of *conditions* that, when met, execute a set of *actions*. Rules also let you configure *post-execution state* and *expectations* for the next turn.
-
-### Types
-Custom Commands supports the following rule categories:
-
-* **Completion rules**: These rules must be executed upon command fulfillment. All the rules configured in this section for which the conditions are true will be executed.
-* **Interaction rules**: These rules can be used to configure additional custom validations, confirmations, and a one-step correction, or to accomplish any other custom dialog logic. Interaction rules are evaluated at each turn in the processing and can be used to trigger completion rules.
-
-The different actions configured as part of a rule are executed in the order in which they appear in the authoring portal.
-
-### Conditions
-Conditions are the requirements that must be met for a rule to execute. Rules conditions can be of the following types:
-
-* **Parameter value equals**: The configured parameter's value equals a specific value.
-* **No parameter value**: The configured parameters shouldn't have any value.
-* **Required parameters**: The configured parameter has a value.
-* **All required parameters**: All the parameters that were marked as required have a value.
-* **Updated parameters**: One or more parameter values were updated as a result of processing the current input (utterance or activity).
-* **Confirmation was successful**: The input utterance or activity was a successful confirmation (yes).
-* **Confirmation was denied**: The input utterance or activity was not a successful confirmation (no).
-* **Previous command needs to be updated**: This condition is used in instances when you want to catch a negated confirmation along with an update. Behind the scenes, this condition is configured for when the dialog engine detects a negative confirmation where the intent is the same as the previous turn, and the user has responded with an update.
-
-### Actions
-* **Send speech response**: Send a speech response back to the client.
-* **Update parameter value**: Update the value of a command parameter to a specified value.
-* **Clear parameter value**: Clear the command parameter value.
-* **Call web endpoint**: Make a call to a web endpoint.
-* **Send activity to client**: Send a custom activity to the client.
-
-### Expectations
-Expectations are used to configure hints for the processing of the next user input. The following types are supported:
-
-* **Expecting confirmation from user**: This expectation specifies that the application is expecting a confirmation (yes/no) for the next user input.
-* **Expecting parameter(s) input from user**: This expectation specifies one or more command parameters that the application is expecting from the user input.
-
-### Post-execution state
-The post-execution state is the dialog state after processing the current input (utterance or activity). It's of the following types:
-
-* **Keep current state**: Keep current state only.
-* **Complete the command**: Complete the command and no additional rules of the command will be processed.
-* **Execute completion rules**: Execute all the valid completion rules.
-* **Wait for user's input**: Wait for the next user input.
--
-## Next steps
-
-> [!div class="nextstepaction"]
-> [See samples on GitHub](https://aka.ms/speech/cc-samples)
cognitive-services Custom Commands https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/custom-commands.md
- Title: Custom Commands overview - Speech service-
-description: An overview of the features, capabilities, and restrictions for Custom Commands, a solution for creating voice applications.
------ Previously updated : 03/11/2020----
-# What is Custom Commands?
--
-Applications such as [Voice assistants](voice-assistants.md) listen to users and take an action in response, often speaking back. They use [speech to text](speech-to-text.md) to transcribe the user's speech, then take action on the natural language understanding of the text. This action frequently includes spoken output from the assistant generated with [text to speech](text-to-speech.md). Devices connect to assistants with the Speech SDK's `DialogServiceConnector` object.
-
-Custom Commands makes it easy to build rich voice commanding apps optimized for voice-first interaction experiences. It provides a unified authoring experience, an automatic hosting model, and relatively lower complexity, helping you focus on building the best solution for your voice commanding scenarios.
-
-Custom Commands is best suited for task completion or command-and-control scenarios such as "Turn on the overhead light" or "Make it 5 degrees warmer". Custom Commands is well suited for Internet of Things (IoT) devices, ambient and headless devices. Examples include solutions for Hospitality, Retail and Automotive industries, where you want voice-controlled experiences for your guests, in-store inventory management or in-car functionality.
-
-If you're interested in building complex conversational apps, you're encouraged to try the Bot Framework using the [Virtual Assistant Solution](/azure/bot-service/bot-builder-enterprise-template-overview). You can add voice to any bot framework bot using Direct Line Speech.
-
-Good candidates for Custom Commands have a fixed vocabulary with well-defined sets of variables. For example, home automation tasks, like controlling a thermostat, are ideal.
-
- ![Examples of task completion scenarios](media/voice-assistants/task-completion-examples.png "task completion examples")
-
-## Getting started with Custom Commands
-
-Our goal with Custom Commands is to reduce your cognitive load in learning all the different technologies, so you can focus on building your voice commanding app. The first step in using Custom Commands is to <a href="https://portal.azure.com/#create/Microsoft.CognitiveServicesSpeechServices" target="_blank">create a Speech resource</a>. You can author your Custom Commands app in Speech Studio and publish it, after which an on-device application can communicate with it using the Speech SDK.
-
-#### Authoring flow for Custom Commands
- ![Authoring flow for Custom Commands](media/voice-assistants/custom-commands-flow.png "The Custom Commands authoring flow")
-
-Follow our quickstart to have your first Custom Commands app running code in less than 10 minutes.
-
-* [Create a voice assistant using Custom Commands](quickstart-custom-commands-application.md)
-
-Once you're done with the quickstart, explore our how-to guides for detailed steps for designing, developing, debugging, deploying and integrating a Custom Commands application.
-
-## Building Voice Assistants with Custom Commands
-> [!VIDEO https://www.youtube.com/embed/1zr0umHGFyc]
-
-## Next steps
-
-* [View our Voice Assistants repo on GitHub for samples](https://aka.ms/speech/cc-samples)
-* [Go to the Speech Studio to try out Custom Commands](https://aka.ms/speechstudio/customcommands)
-* [Get the Speech SDK](speech-sdk.md)
cognitive-services Custom Keyword Basics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/custom-keyword-basics.md
- Title: Create a custom keyword quickstart - Speech service-
-description: When a user speaks the keyword, your device sends their dictation to the cloud, until the user stops speaking. Customizing your keyword is an effective way to differentiate your device and strengthen your branding.
------ Previously updated : 11/12/2021--
-zone_pivot_groups: programming-languages-speech-services
--
-# Quickstart: Create a custom keyword
-----------
-## Next steps
-
-> [!div class="nextstepaction"]
-> [Get the Speech SDK](speech-sdk.md)
cognitive-services Custom Neural Voice Lite https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/custom-neural-voice-lite.md
- Title: Custom Neural Voice Lite - Speech service-
-description: Use Custom Neural Voice Lite to demo and evaluate Custom Neural Voice before investing in professional recordings to create a higher-quality voice.
------ Previously updated : 10/27/2022---
-# Custom Neural Voice Lite (preview)
-
-Speech Studio provides two Custom Neural Voice (CNV) project types: CNV Lite and CNV Pro.
-
-- Custom Neural Voice (CNV) Pro allows you to upload your training data collected through professional recording studios and create a higher-quality voice that is nearly indistinguishable from its human samples. CNV Pro access is limited based on eligibility and usage criteria. Request access on the [intake form](https://aka.ms/customneural).
-- Custom Neural Voice (CNV) Lite is a project type in public preview. You can demo and evaluate Custom Neural Voice before investing in professional recordings to create a higher-quality voice. No application is required. Microsoft restricts and selects the recording and testing samples for use with CNV Lite. You must apply for full access to CNV Pro in order to deploy and use the CNV Lite model for business purposes.
-
-With a CNV Lite project, you record your voice online by reading 20-50 pre-defined scripts provided by Microsoft. After you've recorded at least 20 samples, you can start to train a model. Once the model is trained successfully, you can review the model and check out 20 output samples produced with another set of pre-defined scripts.
-
-See the [supported languages](language-support.md?tabs=tts) for Custom Neural Voice.
-
-## Compare project types
-
-The following table summarizes key differences between the CNV Lite and CNV Pro project types.
-
-|**Items**|**Lite (Preview)**| **Pro**|
-||||
-|Target scenarios |Demonstration or evaluation |Professional scenarios like brand and character voices for chat bots, or audio content reading.|
-|Training data |Record online using Speech Studio |Bring your own data. Recording in a professional studio is recommended. |
-|Scripts for recording |Provided in Speech Studio |Use your own scripts that match the use case scenario. Microsoft provides [example scripts](https://github.com/Azure-Samples/Cognitive-Speech-TTS/tree/master/CustomVoice/script) for reference. |
-|Required data size |20-50 utterances |300-2000 utterances|
-|Training time |Less than one compute hour| Approximately 20-40 compute hours |
-|Voice quality |Moderate quality|High quality |
-|Availability |Anyone can record samples online and train a model for demo and evaluation purpose. Full access to Custom Neural Voice is required if you want to deploy the CNV Lite model for business use. |Data upload isn't restricted, but you can only train and deploy a CNV Pro model after access is approved. CNV Pro access is limited based on eligibility and usage criteria. Request access on the [intake form](https://aka.ms/customneural).|
-|Pricing |Per unit prices apply equally for both the CNV Lite and CNV Pro projects. Check the [pricing details here](https://azure.microsoft.com/pricing/details/cognitive-services/speech-services/). |Per unit prices apply equally for both the CNV Lite and CNV Pro projects. Check the [pricing details here](https://azure.microsoft.com/pricing/details/cognitive-services/speech-services/). |
-
-## Create a Custom Neural Voice Lite project
-
-To create a Custom Neural Voice Lite project, follow these steps:
-
-1. Sign in to the [Speech Studio](https://aka.ms/speechstudio/customvoice).
-1. Select the subscription and Speech resource to work with.
-
- > [!IMPORTANT]
- > Custom Neural Voice training is currently only available in some regions. See footnotes in the [regions](regions.md#speech-service) table for more information.
-
-1. Select **Custom Voice** > **Create a project**.
-1. Select **Custom Neural Voice Lite** > **Next**.
-
- > [!NOTE]
- > To create a Custom Neural Voice Pro project, see [Create a project for Custom Neural Voice](how-to-custom-voice.md).
-
-1. Follow the instructions provided by the wizard to create your project.
-1. Select the new project by name or select **Go to project**. You'll see these menu items in the left panel: **Record and build**, **Review model**, and **Deploy model**.
- :::image type="content" source="media/custom-voice/lite/lite-project-get-started.png" alt-text="Screenshot with an overview of the CNV Lite record, train, test, and deploy workflow.":::
-
-The CNV Lite project expires after 90 days unless the [verbal statement](#submit-verbal-statement) recorded by the voice talent is submitted.
-
-## Record and build a CNV Lite model
-
-Record at least 20 voice samples (up to 50) with provided scripts online. Voice samples recorded here will be used to create a synthetic version of your voice.
-
-Here are some tips to help you record your voice samples:
-- Use a good microphone. Increase the clarity of your samples by using a high-quality microphone. Speak about 8 inches away from the microphone to avoid mouth noises.
-- Avoid background noise. Record in a quiet room without background noise or echoing.
-- Relax and speak naturally. Allow yourself to express emotions as you read the sentences.
-- Record in one take. To keep a consistent energy level, record all sentences in one session.
-- Pronounce each word correctly, and speak clearly.
-
-To record and build a CNV Lite model, follow these steps:
-
-1. Select **Custom Voice** > Your project name > **Record and build**.
-1. Select **Get started**.
-1. Read the Voice talent terms of use carefully. Select the checkbox to acknowledge the terms of use.
-1. Select **Accept**.
-1. Press the microphone icon to start the noise check. This noise check will take only a few seconds, and you won't need to speak during it.
-1. If noise was detected, you can select **Check again** to repeat the noise check. If no noise was detected, you can select **Done** to proceed to the next step.
- :::image type="content" source="media/custom-voice/lite/cnv-record-noise-check.png" alt-text="Screenshot of the noise check results when noise was detected.":::
-1. Review the recording tips and select **Got it**. For the best results, go to a quiet area without background noise before recording your voice samples.
-1. Press the microphone icon to start recording.
- :::image type="content" source="media/custom-voice/lite/cnv-record-sample.png" alt-text="Screenshot of the record sample dashboard.":::
-1. Press the stop icon to stop recording.
-1. Review quality metrics. After recording each sample, check its quality metric before continuing to the next one.
-1. Record more samples. Although you can create a model with just 20 samples, it's recommended that you record up to 50 to get better quality.
-1. Select **Train model** to start the training process.
-
-The training process takes approximately one compute hour. You can check the progress of the training process in the **Review model** page.
-
-## Review model
-
-To review the CNV Lite model and listen to your own synthetic voice, follow these steps:
-
-1. Select **Custom Voice** > Your project name > **Review model**. Here you can review the voice model name, model language, sample data size, and training progress. The voice name is your project name with the word "Neural" appended.
-1. Select the voice model name to review the model details and listen to the sample text to speech results.
-1. Select the play icon to hear your voice speak each script.
- :::image type="content" source="media/custom-voice/lite/lite-review-model.png" alt-text="Screenshot of the review sample output dashboard.":::
-
-## Submit verbal statement
-
-A verbal statement recorded by the voice talent is required before you can [deploy the model](#deploy-model) for your business use.
-
-To submit the voice talent verbal statement, follow these steps:
-
-1. Select **Custom Voice** > Your project name > **Deploy model** > **Manage your voice talent**.
- :::image type="content" source="media/custom-voice/lite/lite-voice-talent-consent.png" alt-text="Screenshot of the record voice talent consent dashboard.":::
-1. Select the model.
-1. Enter the voice talent name and company name.
-1. Read and record the statement. Select the microphone icon to start recording. Select the stop icon to stop recording.
-1. Select **Submit** to submit the statement.
-1. Check the processing status in the script table at the bottom of the dashboard. Once the status is **Succeeded**, you can [deploy the model](#deploy-model).
-
-## Deploy model
-
-To deploy your voice model and use it in your applications, you must have full access to Custom Neural Voice. Request access on the [intake form](https://aka.ms/customneural). Within approximately 10 business days, you'll receive an email with the approval status. A [verbal statement](#submit-verbal-statement) recorded by the voice talent is also required before you can deploy the model for business use.
-
-To deploy a CNV Lite model, follow these steps:
-
-1. Select **Custom Voice** > Your project name > **Deploy model** > **Deploy model**.
-1. Select a voice model name and then select **Next**.
-1. Enter a name and description for your endpoint and then select **Next**.
-1. Select the checkbox to agree to the terms of use and then select **Next**.
-1. Select **Deploy** to deploy the model.
-
-From here, you can use the CNV Lite voice model in the same way that you would use a CNV Pro voice model. For example, you can [suspend or resume](how-to-deploy-and-use-endpoint.md) an endpoint after it's created, to limit spend and conserve resources that aren't in use. You can also access the voice in the [Audio Content Creation](how-to-audio-content-creation.md) tool in the [Speech Studio](https://aka.ms/speechstudio/audiocontentcreation).
-
-## Next steps
-
-* [Create a CNV Pro project](how-to-custom-voice.md)
-* [Try the text to speech quickstart](get-started-text-to-speech.md)
-* [Learn more about speech synthesis](how-to-speech-synthesis.md)
cognitive-services Custom Neural Voice https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/custom-neural-voice.md
- Title: Custom Neural Voice overview - Speech service-
-description: Custom Neural Voice is a text to speech feature that allows you to create a one-of-a-kind, customized, synthetic voice for your applications. You provide your own audio data as a sample.
- Previously updated : 03/27/2023
-# What is Custom Neural Voice?
-
-Custom Neural Voice (CNV) is a text to speech feature that lets you create a one-of-a-kind, customized, synthetic voice for your applications. With Custom Neural Voice, you can build a highly natural-sounding voice for your brand or characters by providing human speech samples as training data.
-
-> [!IMPORTANT]
-> Custom Neural Voice access is [limited](/legal/cognitive-services/speech-service/custom-neural-voice/limited-access-custom-neural-voice?context=%2fazure%2fcognitive-services%2fspeech-service%2fcontext%2fcontext) based on eligibility and usage criteria. Request access on the [intake form](https://aka.ms/customneural).
->
-> Access to [Custom Neural Voice (CNV) Lite](custom-neural-voice-lite.md) is available for anyone to demo and evaluate CNV before investing in professional recordings to create a higher-quality voice.
-
-Out of the box, [text to speech](text-to-speech.md) can be used with prebuilt neural voices for each [supported language](language-support.md?tabs=tts). The prebuilt neural voices work very well in most text to speech scenarios if a unique voice isn't required.
-
-Custom Neural Voice is based on the neural text to speech technology and the multilingual, multi-speaker, universal model. You can create synthetic voices that are rich in speaking styles, or adaptable across languages. The realistic and natural-sounding voice of Custom Neural Voice can represent brands, personify machines, and allow users to interact with applications conversationally. See the [supported languages](language-support.md?tabs=tts) for Custom Neural Voice.
-
-## How does it work?
-
-To create a custom neural voice, use [Speech Studio](https://aka.ms/speechstudio/customvoice) to upload the recorded audio and corresponding scripts, train the model, and deploy the voice to a custom endpoint.
-
-> [!TIP]
-> Try [Custom Neural Voice (CNV) Lite](custom-neural-voice-lite.md) to demo and evaluate CNV before investing in professional recordings to create a higher-quality voice.
-
-Creating a great custom neural voice requires careful quality control in each step, from voice design and data preparation, to the deployment of the voice model to your system.
-
-Before you get started in Speech Studio, here are some considerations:
-
-- [Design a persona](record-custom-voice-samples.md#choose-your-voice-talent) of the voice that represents your brand by using a persona brief document. This document defines elements such as the features of the voice, and the character behind the voice. This helps to guide the process of creating a custom neural voice model, including defining the scripts, selecting your voice talent, training, and voice tuning.
-- [Select the recording script](record-custom-voice-samples.md#script-selection-criteria) to represent the user scenarios for your voice. For example, you can use the phrases from bot conversations as your recording script if you're creating a customer service bot. Include different sentence types in your scripts, including statements, questions, and exclamations.
-
-Here's an overview of the steps to create a custom neural voice in Speech Studio:
-
-1. [Create a project](how-to-custom-voice.md) to contain your data, voice models, tests, and endpoints. Each project is specific to a country/region and language. If you are going to create multiple voices, it's recommended that you create a project for each voice.
-1. [Set up voice talent](how-to-custom-voice.md). Before you can train a neural voice, you must submit a recording of the voice talent's consent statement. The voice talent statement is a recording of the voice talent reading a statement that they consent to the usage of their speech data to train a custom voice model.
-1. [Prepare training data](how-to-custom-voice-prepare-data.md) in the right [format](how-to-custom-voice-training-data.md). It's a good idea to capture the audio recordings in a professional quality recording studio to achieve a high signal-to-noise ratio. The quality of the voice model depends heavily on your training data. Consistent volume, speaking rate, pitch, and consistency in expressive mannerisms of speech are required.
-1. [Train your voice model](how-to-custom-voice-create-voice.md). Select at least 300 utterances to create a custom neural voice. A series of data quality checks are automatically performed when you upload them. To build high-quality voice models, you should fix any errors and submit again.
-1. [Test your voice](how-to-custom-voice-create-voice.md#test-your-voice-model). Prepare test scripts for your voice model that cover the different use cases for your apps. It's a good idea to use scripts within and outside the training dataset, so you can test the quality more broadly for different content.
-1. [Deploy and use your voice model](how-to-deploy-and-use-endpoint.md) in your apps.
-
-You can tune, adjust, and use your custom voice just as you would a prebuilt neural voice. Convert text into speech in real time, or generate audio content offline with text input. You can do this by using the [REST API](./rest-text-to-speech.md), the [Speech SDK](./get-started-text-to-speech.md), or the [Speech Studio](https://speech.microsoft.com/audiocontentcreation).
-
-The style and the characteristics of the trained voice model depend on the style and the quality of the recordings from the voice talent used for training. However, you can make several adjustments by using [SSML (Speech Synthesis Markup Language)](./speech-synthesis-markup.md?tabs=csharp) when you make the API calls to your voice model to generate synthetic speech. SSML is the markup language used to communicate with the text to speech service to convert text into audio. The adjustments you can make include change of pitch, rate, intonation, and pronunciation correction. If the voice model is built with multiple styles, you can also use SSML to switch the styles.
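-
-For example, here's a minimal C# sketch of calling a deployed custom neural voice with an SSML prosody adjustment. The key, region, endpoint ID, and voice name are placeholders that you'd replace with the values for your own deployment:
-
-```csharp
-using System;
-using Microsoft.CognitiveServices.Speech;
-
-// Placeholder key, region, endpoint ID, and voice name; replace with your own values.
-var speechConfig = SpeechConfig.FromSubscription("YourSpeechKey", "YourSpeechRegion");
-speechConfig.EndpointId = "YourCustomVoiceEndpointId";          // deployed custom voice endpoint
-speechConfig.SpeechSynthesisVoiceName = "YourCustomVoiceName";  // name of your custom voice model
-
-using var synthesizer = new SpeechSynthesizer(speechConfig);
-
-// SSML adjustments such as rate and pitch are applied at synthesis time.
-string ssml =
-    "<speak version='1.0' xmlns='http://www.w3.org/2001/10/synthesis' xml:lang='en-US'>" +
-    "<voice name='YourCustomVoiceName'>" +
-    "<prosody rate='-10%' pitch='+5%'>Welcome back! How can I help you today?</prosody>" +
-    "</voice></speak>";
-
-SpeechSynthesisResult result = await synthesizer.SpeakSsmlAsync(ssml);
-Console.WriteLine($"Synthesis result: {result.Reason}");
-```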
-
-## Components sequence
-
-Custom Neural Voice consists of three major components: the text analyzer, the neural acoustic
-model, and the neural vocoder. To generate natural synthetic speech from text, text is first input into the text analyzer, which provides output in the form of a phoneme sequence. A *phoneme* is a basic unit of sound that distinguishes one word from another in a particular language. A sequence of phonemes defines the pronunciations of the words provided in the text.
-
-Next, the phoneme sequence goes into the neural acoustic model to predict acoustic features that define speech signals. Acoustic features include the timbre, the speaking style, speed, intonations, and stress patterns. Finally, the neural vocoder converts the acoustic features into audible waves, so that synthetic speech is generated.
-
-![Flowchart that shows the components of Custom Neural Voice.](./media/custom-voice/cnv-intro.png)
-
-Neural text to speech voice models are trained by using deep neural networks based on
-the recording samples of human voices. For more information, see [this Microsoft blog post](https://techcommunity.microsoft.com/t5/azure-ai/neural-text-to-speech-extends-support-to-15-more-languages-with/ba-p/1505911). To learn more about how a neural vocoder is trained, see [this Microsoft blog post](https://techcommunity.microsoft.com/t5/azure-ai/azure-neural-tts-upgraded-with-hifinet-achieving-higher-audio/ba-p/1847860).
-
-## Migrate to Custom Neural Voice
-
-If you're using the old version of Custom Voice (which is scheduled to be retired in February 2024), see [How to migrate to Custom Neural Voice](how-to-migrate-to-custom-neural-voice.md).
-
-## Responsible AI
-
-An AI system includes not only the technology, but also the people who will use it, the people who will be affected by it, and the environment in which it is deployed. Read the transparency notes to learn about responsible AI use and deployment in your systems.
-
-* [Transparency note and use cases for Custom Neural Voice](/legal/cognitive-services/speech-service/custom-neural-voice/transparency-note-custom-neural-voice?context=/azure/cognitive-services/speech-service/context/context)
-* [Characteristics and limitations for using Custom Neural Voice](/legal/cognitive-services/speech-service/custom-neural-voice/characteristics-and-limitations-custom-neural-voice?context=/azure/cognitive-services/speech-service/context/context)
-* [Limited access to Custom Neural Voice](/legal/cognitive-services/speech-service/custom-neural-voice/limited-access-custom-neural-voice?context=/azure/cognitive-services/speech-service/context/context)
-* [Guidelines for responsible deployment of synthetic voice technology](/legal/cognitive-services/speech-service/custom-neural-voice/concepts-guidelines-responsible-deployment-synthetic?context=/azure/cognitive-services/speech-service/context/context)
-* [Disclosure for voice talent](/legal/cognitive-services/speech-service/disclosure-voice-talent?context=/azure/cognitive-services/speech-service/context/context)
-* [Disclosure design guidelines](/legal/cognitive-services/speech-service/custom-neural-voice/concepts-disclosure-guidelines?context=/azure/cognitive-services/speech-service/context/context)
-* [Disclosure design patterns](/legal/cognitive-services/speech-service/custom-neural-voice/concepts-disclosure-patterns?context=/azure/cognitive-services/speech-service/context/context)
-* [Code of Conduct for Text to speech integrations](/legal/cognitive-services/speech-service/tts-code-of-conduct?context=/azure/cognitive-services/speech-service/context/context)
-* [Data, privacy, and security for Custom Neural Voice](/legal/cognitive-services/speech-service/custom-neural-voice/data-privacy-security-custom-neural-voice?context=/azure/cognitive-services/speech-service/context/context)
-
-## Next steps
-
-* [Create a project](how-to-custom-voice.md)
-* [Prepare training data](how-to-custom-voice-prepare-data.md)
-* [Train model](how-to-custom-voice-create-voice.md)
cognitive-services Custom Speech Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/custom-speech-overview.md
- Title: Custom Speech overview - Speech service-
-description: Custom Speech is a set of online tools that allows you to evaluate and improve the speech to text accuracy for your applications, tools, and products.
- Previously updated : 05/08/2022
-# What is Custom Speech?
-
-With Custom Speech, you can evaluate and improve the accuracy of speech recognition for your applications and products. A custom speech model can be used for [real-time speech to text](speech-to-text.md), [speech translation](speech-translation.md), and [batch transcription](batch-transcription.md).
-
-Out of the box, speech recognition utilizes a Universal Language Model as a base model that is trained with Microsoft-owned data and reflects commonly used spoken language. The base model is pre-trained with dialects and phonetics representing a variety of common domains. When you make a speech recognition request, the most recent base model for each [supported language](language-support.md?tabs=stt) is used by default. The base model works very well in most speech recognition scenarios.
-
-A custom model can be used to augment the base model to improve recognition of domain-specific vocabulary by providing text data to train the model. It can also be used to improve recognition for the specific audio conditions of the application by providing audio data with reference transcriptions.
-
-## How does it work?
-
-With Custom Speech, you can upload your own data, test and train a custom model, compare accuracy between models, and deploy a model to a custom endpoint.
-
-![Diagram that highlights the components that make up the Custom Speech area of the Speech Studio.](./media/custom-speech/custom-speech-overview.png)
-
-Here's more information about the sequence of steps shown in the previous diagram:
-
-1. [Create a project](how-to-custom-speech-create-project.md) and choose a model. Use a <a href="https://portal.azure.com/#create/Microsoft.CognitiveServicesSpeechServices" title="Create a Speech resource" target="_blank">Speech resource</a> that you create in the Azure portal. If you plan to train a custom model with audio data, choose a Speech resource region with dedicated hardware for training audio data. See footnotes in the [regions](regions.md#speech-service) table for more information.
-1. [Upload test data](./how-to-custom-speech-upload-data.md). Upload test data to evaluate the speech to text offering for your applications, tools, and products.
-1. [Test recognition quality](how-to-custom-speech-inspect-data.md). Use the [Speech Studio](https://aka.ms/speechstudio/customspeech) to play back uploaded audio and inspect the speech recognition quality of your test data.
-1. [Test model quantitatively](how-to-custom-speech-evaluate-data.md). Evaluate and improve the accuracy of the speech to text model. The Speech service provides a quantitative word error rate (WER), which you can use to determine if additional training is required.
-1. [Train a model](how-to-custom-speech-train-model.md). Provide written transcripts and related text, along with the corresponding audio data. Testing a model before and after training is optional but recommended.
- > [!NOTE]
- > You pay for Custom Speech model usage and endpoint hosting, but you are not charged for training a model.
-1. [Deploy a model](how-to-custom-speech-deploy-model.md). Once you're satisfied with the test results, deploy the model to a custom endpoint. With the exception of [batch transcription](batch-transcription.md), you must deploy a custom endpoint to use a Custom Speech model. A sketch of pointing the Speech SDK at a deployed endpoint follows this list.
- > [!TIP]
- > A hosted deployment endpoint isn't required to use Custom Speech with the [Batch transcription API](batch-transcription.md). You can conserve resources if the custom speech model is only used for batch transcription. For more information, see [Speech service pricing](https://azure.microsoft.com/pricing/details/cognitive-services/speech-services/).
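-
-Once a custom endpoint is deployed, you point the Speech SDK at it by setting the endpoint ID on the speech configuration. Here's a minimal C# sketch; the key, region, and endpoint ID are placeholder values:
-
-```csharp
-using System;
-using Microsoft.CognitiveServices.Speech;
-using Microsoft.CognitiveServices.Speech.Audio;
-
-// Placeholder key, region, and endpoint ID; replace with your own values.
-var speechConfig = SpeechConfig.FromSubscription("YourSpeechKey", "YourSpeechRegion");
-speechConfig.EndpointId = "YourCustomSpeechEndpointId"; // deployed Custom Speech endpoint
-
-using var audioConfig = AudioConfig.FromDefaultMicrophoneInput();
-using var recognizer = new SpeechRecognizer(speechConfig, audioConfig);
-
-// Recognize a single utterance with the custom model.
-var result = await recognizer.RecognizeOnceAsync();
-Console.WriteLine($"Recognized: {result.Text}");
-```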
-
-## Responsible AI
-
-An AI system includes not only the technology, but also the people who will use it, the people who will be affected by it, and the environment in which it is deployed. Read the transparency notes to learn about responsible AI use and deployment in your systems.
-
-* [Transparency note and use cases](/legal/cognitive-services/speech-service/speech-to-text/transparency-note?context=/azure/cognitive-services/speech-service/context/context)
-* [Characteristics and limitations](/legal/cognitive-services/speech-service/speech-to-text/characteristics-and-limitations?context=/azure/cognitive-services/speech-service/context/context)
-* [Integration and responsible use](/legal/cognitive-services/speech-service/speech-to-text/guidance-integration-responsible-use?context=/azure/cognitive-services/speech-service/context/context)
-* [Data, privacy, and security](/legal/cognitive-services/speech-service/speech-to-text/data-privacy-security?context=/azure/cognitive-services/speech-service/context/context)
-
-## Next steps
-
-* [Create a project](how-to-custom-speech-create-project.md)
-* [Upload test data](./how-to-custom-speech-upload-data.md)
-* [Train a model](how-to-custom-speech-train-model.md)
cognitive-services Devices Sdk Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/devices-sdk-release-notes.md
- Title: Speech Devices SDK release notes-
-description: The release notes provide a log of updates, enhancements, bug fixes, and changes to the Speech Devices SDK. This article is updated with each release of the Speech Devices SDK.
- Previously updated : 02/12/2022
-# Release notes: Speech Devices SDK (retired)
-
-The following sections list changes in the most recent releases.
-
-> [!NOTE]
-> The Speech Devices SDK is deprecated. Archived sample code is available on [GitHub](https://github.com/Azure-Samples/Cognitive-Services-Speech-Devices-SDK). All users of the Speech Devices SDK are advised to migrate to using Speech SDK version 1.19.0 or later.
-
-## Speech Devices SDK 1.16.0:
-
-- Fixed [GitHub issue #22](https://github.com/Azure-Samples/Cognitive-Services-Speech-Devices-SDK/issues/22).
-- Updated the [Speech SDK](./speech-sdk.md) component to version 1.16.0. For more information, see its [release notes](./releasenotes.md).
-
-## Speech Devices SDK 1.15.0:
-
-- Upgraded to new Microsoft Audio Stack (MAS) with improved beamforming and noise reduction for speech.
-- Reduced the binary size by as much as 70% depending on target.
-- Support for [Azure Percept Audio](../../azure-percept/overview-azure-percept-audio.md) with [binary release](https://aka.ms/sdsdk-download-APAudio).
-- Updated the [Speech SDK](./speech-sdk.md) component to version 1.15.0. For more information, see its [release notes](./releasenotes.md).
-
-## Speech Devices SDK 1.11.0:
-
-- Support for arbitrary microphone array geometries and setting the working angle through a [configuration file](https://aka.ms/sdsdk-micarray-json).
-- Support for [Urbetter DDK](https://urbetters.com/collections).
-- Released binaries for the [GGEC Speaker](https://aka.ms/sdsdk-download-speaker) used in our [Voice Assistant sample](https://aka.ms/sdsdk-speaker).
-- Released binaries for [Linux ARM32](https://aka.ms/sdsdk-download-linux-arm32) and [Linux ARM 64](https://aka.ms/sdsdk-download-linux-arm64) for Raspberry Pi and similar devices.
-- Updated the [Speech SDK](./speech-sdk.md) component to version 1.11.0. For more information, see its [release notes](./releasenotes.md).
-
-## Speech Devices SDK 1.9.0:
-
-- Initial binaries for [Urbetter DDK](https://aka.ms/sdsdk-download-urbetter) (Linux ARM64) are provided.
-- Roobo v1 now uses Maven for the Speech SDK.
-- Updated the [Speech SDK](./speech-sdk.md) component to version 1.9.0. For more information, see its [release notes](./releasenotes.md).
-
-## Speech Devices SDK 1.7.0:
-
-- Linux ARM is now supported.
-- Initial binaries for [Roobo v2 DDK](https://aka.ms/sdsdk-download-roobov2) are provided (Linux ARM64).
-- Windows users can use `AudioConfig.fromDefaultMicrophoneInput()` or `AudioConfig.fromMicrophoneInput(deviceName)` to specify the microphone to be used.
-- The library size has been optimized.
-- Support for multi-turn recognition using the same speech/intent recognizer object.
-- Fixed an occasional issue where the process would stop responding while stopping recognition.
-- Sample apps now contain a sample participants.properties file to demonstrate the format of the file.
-- Updated the [Speech SDK](./speech-sdk.md) component to version 1.7.0. For more information, see its [release notes](./releasenotes.md).
-
-## Speech Devices SDK 1.6.0:
-
-- Support [Azure Kinect DK](https://azure.microsoft.com/services/kinect-dk/) on Windows and Linux with common sample application.
-- Updated the [Speech SDK](./speech-sdk.md) component to version 1.6.0. For more information, see its [release notes](./releasenotes.md).
-
-## Speech Devices SDK 1.5.1:
-
-- Include [Conversation Transcription](./conversation-transcription.md) in the sample app.
-- Updated the [Speech SDK](./speech-sdk.md) component to version 1.5.1. For more information, see its [release notes](./releasenotes.md).
-
-## Speech Devices SDK 1.5.0: 2019-May release
-
-- Speech Devices SDK is now GA and no longer a gated preview.
-- Updated the [Speech SDK](./speech-sdk.md) component to version 1.5.0. For more information, see its [release notes](./releasenotes.md).
-- New keyword technology brings significant quality improvements; see Breaking Changes.
-- New audio processing pipeline for improved far-field recognition.
-
-**Breaking changes**
-
-- Due to the new keyword technology, all keywords must be re-created at our improved keyword portal. To fully remove old keywords from the device, uninstall the old app.
- - adb uninstall com.microsoft.cognitiveservices.speech.samples.sdsdkstarterapp
-
-## Speech Devices SDK 1.4.0: 2019-Apr release
-
-- Updated the [Speech SDK](./speech-sdk.md) component to version 1.4.0. For more information, see its [release notes](./releasenotes.md).
-
-## Speech Devices SDK 1.3.1: 2019-Mar release
-
-- Updated the [Speech SDK](./speech-sdk.md) component to version 1.3.1. For more information, see its [release notes](./releasenotes.md).
-- Updated keyword handling; see Breaking Changes.
-- Sample application adds choice of language for both speech recognition and translation.
-
-**Breaking changes**
-
-- [Installing a keyword](./custom-keyword-basics.md) has been simplified; it is now part of the app and does not need separate installation on the device.
-- The keyword recognition has changed, and two events are supported.
- - `RecognizingKeyword`, indicates that the speech result contains (unverified) keyword text.
- - `RecognizedKeyword`, indicates that keyword recognition completed recognizing the given keyword.
-
-## Speech Devices SDK 1.1.0: 2018-Nov release
-
-- Updated the [Speech SDK](./speech-sdk.md) component to version 1.1.0. For more information, see its [release notes](./releasenotes.md).
-- Far Field Speech recognition accuracy has been improved with our enhanced audio processing algorithm.
-- Sample application added Chinese speech recognition support.
-
-## Speech Devices SDK 1.0.1: 2018-Oct release
-
-- Updated the [Speech SDK](./speech-sdk.md) component to version 1.0.1. For more information, see its [release notes](./releasenotes.md).
-- Speech recognition accuracy has been improved with our improved audio processing algorithm.
-- Fixed a bug in continuous recognition audio sessions.
-
-**Breaking changes**
-
-- This release introduces a number of breaking changes. Check [this page](https://aka.ms/csspeech/breakingchanges_1_0_0) for details relating to the APIs.
-- The keyword recognition model files are not compatible with Speech Devices SDK 1.0.1. The existing keyword files will be deleted after the new keyword files are written to the device.
-
-## Speech Devices SDK 0.5.0: 2018-Aug release
-
-- Improved the accuracy of speech recognition by fixing a bug in the audio processing code.
-- Updated the [Speech SDK](./speech-sdk.md) component to version 0.5.0.
-
-## Speech Devices SDK 0.2.12733: 2018-May release
-
-The first public preview release of the Speech Devices SDK.
cognitive-services Direct Line Speech https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/direct-line-speech.md
- Title: Direct Line Speech - Speech service-
-description: An overview of the features, capabilities, and restrictions for Voice assistants using Direct Line Speech with the Speech Software Development Kit (SDK).
- Previously updated : 03/11/2020
-# What is Direct Line Speech?
-
-Direct Line Speech is a robust, end-to-end solution for creating a flexible, extensible voice assistant. It's powered by the Bot Framework and its Direct Line Speech channel, which is optimized for voice-in, voice-out interaction with bots.
-
-[Voice assistants](voice-assistants.md) listen to users and take an action in response, often speaking back. They use [speech to text](speech-to-text.md) to transcribe the user's speech, then take action on the natural language understanding of the text. This action frequently includes spoken output from the assistant generated with [text to speech](text-to-speech.md).
-
-Direct Line Speech offers the highest levels of customization and sophistication for voice assistants. It's designed for conversational scenarios that are open-ended, natural, or hybrids of the two with task completion or command-and-control use. This high degree of flexibility comes with greater complexity. For scenarios that are scoped to well-defined tasks using natural language input, consider [Custom Commands](custom-commands.md) for a streamlined solution experience.
-
-Direct Line Speech supports these locales: `ar-eg`, `ar-sa`, `ca-es`, `da-dk`, `de-de`, `en-au`, `en-ca`, `en-gb`, `en-in`, `en-nz`, `en-us`, `es-es`, `es-mx`, `fi-fi`, `fr-ca`, `fr-fr`, `gu-in`, `hi-in`, `hu-hu`, `it-it`, `ja-jp`, `ko-kr`, `mr-in`, `nb-no`, `nl-nl`, `pl-pl`, `pt-br`, `pt-pt`, `ru-ru`, `sv-se`, `ta-in`, `te-in`, `th-th`, `tr-tr`, `zh-cn`, `zh-hk`, and `zh-tw`.
-
-## Getting started with Direct Line Speech
-
-To create a voice assistant using Direct Line Speech, create a Speech resource and Azure Bot resource in the [Azure portal](https://portal.azure.com). Then [connect the bot](/azure/bot-service/bot-service-channel-connect-directlinespeech) to the Direct Line Speech channel.
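-
-As a rough sketch (not a complete client), the following C# example shows how a client might connect to a bot through the Direct Line Speech channel by using the Speech SDK's `DialogServiceConnector`. The key and region values are placeholders:
-
-```csharp
-using System;
-using Microsoft.CognitiveServices.Speech.Audio;
-using Microsoft.CognitiveServices.Speech.Dialog;
-
-// Placeholder Speech resource key and region; replace with your own values.
-var botConfig = BotFrameworkConfig.FromSubscription("YourSpeechKey", "YourSpeechRegion");
-
-using var audioConfig = AudioConfig.FromDefaultMicrophoneInput();
-using var connector = new DialogServiceConnector(botConfig, audioConfig);
-
-// Log bot activities (replies) as they arrive.
-connector.ActivityReceived += (s, e) =>
-    Console.WriteLine($"Activity received, audio included: {e.HasAudio}");
-
-await connector.ConnectAsync();
-
-// Listen for a single user utterance and send it to the bot.
-var result = await connector.ListenOnceAsync();
-Console.WriteLine($"Recognized: {result.Text}");
-```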
-
- ![Conceptual diagram of the Direct Line Speech orchestration service flow](media/voice-assistants/overview-directlinespeech.png "The Speech Channel flow")
-
-For a complete, step-by-step guide on creating a simple voice assistant using Direct Line Speech, see [the tutorial for speech-enabling your bot with the Speech SDK and the Direct Line Speech channel](tutorial-voice-enable-your-bot-speech-sdk.md).
-
-We also offer quickstarts designed to have you running code and learning the APIs quickly. This table includes a list of voice assistant quickstarts organized by language and platform.
-
-| Quickstart | Platform | API reference |
-||-||
-| C#, UWP | Windows | [Browse](/dotnet/api/microsoft.cognitiveservices.speech) |
-| Java | Windows, macOS, Linux | [Browse](/java/api/com.microsoft.cognitiveservices.speech) |
-| Java | Android | [Browse](/java/api/com.microsoft.cognitiveservices.speech) |
-
-## Sample code
-
-Sample code for creating a voice assistant is available on GitHub. These samples cover the client application for connecting to your assistant in several popular programming languages.
-
-* [Voice assistant samples (SDK)](https://aka.ms/csspeech/samples/#voice-assistants-quickstarts)
-* [Tutorial: Voice enable your assistant with the Speech SDK, C#](tutorial-voice-enable-your-bot-speech-sdk.md)
-
-## Customization
-
-Voice assistants built using Speech service can use the full range of customization options available for [speech to text](speech-to-text.md), [text to speech](text-to-speech.md), and [custom keyword selection](./custom-keyword-basics.md).
-
-> [!NOTE]
-> Customization options vary by language/locale (see [Supported languages](./language-support.md?tabs=stt)).
-
-Direct Line Speech and its associated functionality for voice assistants are an ideal supplement to the [Virtual Assistant Solution and Enterprise Template](/azure/bot-service/bot-builder-enterprise-template-overview). Though Direct Line Speech can work with any compatible bot, these resources provide a reusable baseline for high-quality conversational experiences as well as common supporting skills and models to get started quickly.
-
-## Reference docs
-
-* [Speech SDK](./speech-sdk.md)
-* [Azure Bot Service](/azure/bot-service/)
-
-## Next steps
-
-* [Get the Speech SDK](speech-sdk.md)
-* [Create and deploy a basic bot](/azure/bot-service/bot-builder-tutorial-basic-deploy)
-* [Get the Virtual Assistant Solution and Enterprise Template](https://github.com/Microsoft/AI)
cognitive-services Display Text Format https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/display-text-format.md
- Title: Display text formatting with speech to text - Speech service-
-description: An overview of key concepts for display text formatting with speech to text.
- Previously updated : 09/19/2022
-zone_pivot_groups: programming-languages-speech-sdk-cli
--
-# Display text formatting with speech to text
-
-Speech to text offers an array of formatting features to ensure that the transcribed text is clear and legible. Below is an overview of these features and how each one is used to improve the overall clarity of the final text output.
-
-## ITN
-
-Inverse Text Normalization (ITN) is a process that converts spoken words into their written form. For example, the spoken word "four" is converted to the written form "4". This process is performed by the speech to text service and isn't configurable. Some of the supported text formats include dates, times, decimals, currencies, addresses, emails, and phone numbers. You can speak naturally, and the service formats text as expected. The following table shows the ITN rules that are applied to the text output.
-
-|Recognized speech|Display text|
-|||
-|`that will cost nine hundred dollars`|`That will cost $900.`|
-|`my phone number is one eight hundred, four five six, eight nine ten`|`My phone number is 1-800-456-8910.`|
-|`the time is six forty five p m`|`The time is 6:45 PM.`|
-|`I live on thirty five lexington avenue`|`I live on 35 Lexington Ave.`|
-|`the answer is six point five`|`The answer is 6.5.`|
-|`send it to support at help dot com`|`Send it to support@help.com.`|
-
-## Capitalization
-
-Speech to text models recognize words that should be capitalized to improve readability, accuracy, and grammar. For example, the Speech service will automatically capitalize proper nouns and words at the beginning of a sentence. Some examples are shown in this table.
-
-|Recognized speech|Display text|
-|||
-|`i got an x l t shirt`|`I got an XL t-shirt.`|
-|`my name is jennifer smith`|`My name is Jennifer Smith.`|
-|`i want to visit new york city`|`I want to visit New York City.`|
-
-## Disfluency removal
-
-When speaking, it's common for someone to stutter, duplicate words, and say filler words like "uhm" or "uh". Speech to text can recognize such disfluencies and remove them from the display text. Disfluency removal is great for transcribing live unscripted speeches to read them back later. Some examples are shown in this table.
-
-|Recognized speech|Display text|
-|||
-|`i uh said that we can go to the uhmm movies`|`I said that we can go to the movies.`|
-|`its its not that big of uhm a deal`|`It's not that big of a deal.`|
-|`umm i think tomorrow should work`|`I think tomorrow should work.`|
-
-## Punctuation
-
-Speech to text automatically punctuates your text to improve clarity. Punctuation is helpful for reading back call or conversation transcriptions. Some examples are shown in this table.
-
-|Recognized speech|Display text|
-|||
-|`how are you`|`How are you?`|
-|`we can go to the mall park or beach`|`We can go to the mall, park, or beach.`|
-
-When you're using speech to text with continuous recognition, you can configure the Speech service to recognize explicit punctuation marks. Then you can speak punctuation aloud in order to make your text more legible. This is especially useful in a situation where you want to use complex punctuation without having to merge it later. Some examples are shown in this table.
-
-|Recognized speech|Display text|
-|||
-|`they entered the room dot dot dot`|`They entered the room...`|
-|`i heart emoji you period`|`I <3 you.`|
-|`the options are apple forward slash banana forward slash orange period`|`The options are apple/banana/orange.`|
-|`are you sure question mark`|`Are you sure?`|
-
-Use the Speech SDK to enable dictation mode when you're using speech to text with continuous recognition. This mode will cause the speech configuration instance to interpret word descriptions of sentence structures such as punctuation.
-
-```csharp
-speechConfig.EnableDictation();
-```
-```cpp
-speechConfig->EnableDictation();
-```
-```go
-speechConfig.EnableDictation()
-```
-```java
-speechConfig.enableDictation();
-```
-```javascript
-speechConfig.enableDictation();
-```
-```objective-c
-[self.speechConfig enableDictation];
-```
-```swift
-self.speechConfig!.enableDictation()
-```
-```python
-speech_config.enable_dictation()
-```
-
-## Profanity filter
-
-You can specify whether to mask, remove, or show profanity in the final transcribed text. Masking replaces profane words with asterisk (*) characters so that you can keep the original sentiment of your text while making it more appropriate for certain situations.
-
-> [!NOTE]
-> Microsoft also reserves the right to mask or remove any word that is deemed inappropriate. Such words will not be returned by the Speech service, whether or not you enabled profanity filtering.
-
-The profanity filter options are:
-- `Masked`: Replaces letters in profane words with asterisk (*) characters. Masked is the default option.
-- `Raw`: Includes the profane words verbatim.
-- `Removed`: Removes profane words.
-
-For example, to remove profane words from the speech recognition result, set the profanity filter to `Removed` as shown here:
-
-```csharp
-speechConfig.SetProfanity(ProfanityOption.Removed);
-```
-```cpp
-speechConfig->SetProfanity(ProfanityOption::Removed);
-```
-```go
-speechConfig.SetProfanity(common.Removed)
-```
-```java
-speechConfig.setProfanity(ProfanityOption.Removed);
-```
-```javascript
-speechConfig.setProfanity(sdk.ProfanityOption.Removed);
-```
-```objective-c
-[self.speechConfig setProfanityOptionTo:SPXSpeechConfigProfanityOption.SPXSpeechConfigProfanityOption_ProfanityRemoved];
-```
-```swift
-self.speechConfig!.setProfanityOptionTo(SPXSpeechConfigProfanityOption_ProfanityRemoved)
-```
-```python
-speech_config.set_profanity(speechsdk.ProfanityOption.Removed)
-```
-```console
-spx recognize --file caption.this.mp4 --format any --profanity masked --output vtt file - --output srt file -
-```
-
-The profanity filter is applied to the result `Text` and `MaskedNormalizedForm` properties. It isn't applied to the result `LexicalForm` and `NormalizedForm` properties, nor to the word-level results.
--
-## Next steps
-
-* [Speech to text quickstart](get-started-speech-to-text.md)
-* [Get speech recognition results](get-speech-recognition-results.md)
cognitive-services Embedded Speech https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/embedded-speech.md
- Title: Embedded Speech - Speech service-
-description: Embedded Speech is designed for on-device scenarios where cloud connectivity is intermittent or unavailable.
- Previously updated : 10/31/2022
-zone_pivot_groups: programming-languages-set-thirteen
--
-# Embedded Speech (preview)
-
-Embedded Speech is designed for on-device [speech to text](speech-to-text.md) and [text to speech](text-to-speech.md) scenarios where cloud connectivity is intermittent or unavailable. For example, you can use embedded speech in industrial equipment, a voice enabled air conditioning unit, or a car that might travel out of range. You can also develop hybrid cloud and offline solutions. For scenarios where your devices must be in a secure environment like a bank or government entity, you should first consider [disconnected containers](../containers/disconnected-containers.md).
-
-> [!IMPORTANT]
-> Microsoft limits access to embedded speech. You can apply for access through the Azure Cognitive Services Speech [embedded speech limited access review](https://aka.ms/csgate-embedded-speech). For more information, see [Limited access for embedded speech](/legal/cognitive-services/speech-service/embedded-speech/limited-access-embedded-speech?context=/azure/cognitive-services/speech-service/context/context).
-
-## Platform requirements
-
-Embedded speech is included with the Speech SDK (version 1.24.1 and higher) for C#, C++, and Java. Refer to the general [Speech SDK installation requirements](#embedded-speech-sdk-packages) for programming language and target platform specific details.
-
-**Choose your target environment**
-
-# [Android](#tab/android-target)
-
-Requires Android 7.0 (API level 24) or higher on ARM64 (`arm64-v8a`) or ARM32 (`armeabi-v7a`) hardware.
-
-Embedded TTS with neural voices is only supported on ARM64.
-
-# [Linux](#tab/linux-target)
-
-Requires Linux on x64, ARM64, or ARM32 hardware with [supported Linux distributions](quickstarts/setup-platform.md?tabs=linux).
-
-Embedded speech isn't supported on RHEL/CentOS 7.
-
-Embedded TTS with neural voices isn't supported on ARM32.
-
-# [macOS](#tab/macos-target)
-
-Requires macOS 10.14 or newer on x64 or ARM64 hardware.
-
-# [Windows](#tab/windows-target)
-
-Requires Windows 10 or newer on x64 or ARM64 hardware.
-
-The latest [Microsoft Visual C++ Redistributable for Visual Studio 2015-2022](/cpp/windows/latest-supported-vc-redist?view=msvc-170&preserve-view=true) must be installed regardless of the programming language used with the Speech SDK.
-
-The Speech SDK for Java doesn't support Windows on ARM64.
---
-## Limitations
-
-Embedded speech is only available with C#, C++, and Java SDKs. The other Speech SDKs, Speech CLI, and REST APIs don't support embedded speech.
-
-Embedded speech recognition only supports mono 16-bit, 8-kHz or 16-kHz PCM-encoded WAV audio.
-
-Embedded neural voices only support a 24-kHz sample rate.
-
-## Embedded speech SDK packages
--
-For C# embedded applications, install the following Speech SDK for C# packages:
-
-|Package |Description |
-| | |
-|[Microsoft.CognitiveServices.Speech](https://www.nuget.org/packages/Microsoft.CognitiveServices.Speech)|Required to use the Speech SDK|
-| [Microsoft.CognitiveServices.Speech.Extension.Embedded.SR](https://www.nuget.org/packages/Microsoft.CognitiveServices.Speech.Extension.Embedded.SR) | Required for embedded speech recognition |
-| [Microsoft.CognitiveServices.Speech.Extension.Embedded.TTS](https://www.nuget.org/packages/Microsoft.CognitiveServices.Speech.Extension.Embedded.TTS) | Required for embedded speech synthesis |
-| [Microsoft.CognitiveServices.Speech.Extension.ONNX.Runtime](https://www.nuget.org/packages/Microsoft.CognitiveServices.Speech.Extension.ONNX.Runtime) | Required for embedded speech recognition and synthesis |
-| [Microsoft.CognitiveServices.Speech.Extension.Telemetry](https://www.nuget.org/packages/Microsoft.CognitiveServices.Speech.Extension.Telemetry) | Required for embedded speech recognition and synthesis |
---
-For C++ embedded applications, install the following Speech SDK for C++ packages:
-
-|Package |Description |
-| | |
-|[Microsoft.CognitiveServices.Speech](https://www.nuget.org/packages/Microsoft.CognitiveServices.Speech)|Required to use the Speech SDK|
-| [Microsoft.CognitiveServices.Speech.Extension.Embedded.SR](https://www.nuget.org/packages/Microsoft.CognitiveServices.Speech.Extension.Embedded.SR) | Required for embedded speech recognition |
-| [Microsoft.CognitiveServices.Speech.Extension.Embedded.TTS](https://www.nuget.org/packages/Microsoft.CognitiveServices.Speech.Extension.Embedded.TTS) | Required for embedded speech synthesis |
-| [Microsoft.CognitiveServices.Speech.Extension.ONNX.Runtime](https://www.nuget.org/packages/Microsoft.CognitiveServices.Speech.Extension.ONNX.Runtime) | Required for embedded speech recognition and synthesis |
-| [Microsoft.CognitiveServices.Speech.Extension.Telemetry](https://www.nuget.org/packages/Microsoft.CognitiveServices.Speech.Extension.Telemetry) | Required for embedded speech recognition and synthesis |
----
-**Choose your target environment**
-
-# [Java Runtime](#tab/jre)
-
-For Java embedded applications, add [client-sdk-embedded](https://mvnrepository.com/artifact/com.microsoft.cognitiveservices.speech/client-sdk-embedded) (`.jar`) as a dependency. This package supports cloud, embedded, and hybrid speech.
-
-> [!IMPORTANT]
-> Don't add [client-sdk](https://mvnrepository.com/artifact/com.microsoft.cognitiveservices.speech/client-sdk) in the same project, since it supports only cloud speech services.
-
-Follow these steps to install the Speech SDK for Java using Apache Maven:
-
-1. Install [Apache Maven](https://maven.apache.org/install.html).
-1. Open a command prompt where you want the new project, and create a new `pom.xml` file.
-1. Copy the following XML content into `pom.xml`:
- ```xml
- <project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
- <modelVersion>4.0.0</modelVersion>
- <groupId>com.microsoft.cognitiveservices.speech.samples</groupId>
- <artifactId>quickstart-eclipse</artifactId>
- <version>1.0.0-SNAPSHOT</version>
- <build>
- <sourceDirectory>src</sourceDirectory>
- <plugins>
- <plugin>
- <artifactId>maven-compiler-plugin</artifactId>
- <version>3.7.0</version>
- <configuration>
- <source>1.8</source>
- <target>1.8</target>
- </configuration>
- </plugin>
- </plugins>
- </build>
- <dependencies>
- <dependency>
- <groupId>com.microsoft.cognitiveservices.speech</groupId>
- <artifactId>client-sdk-embedded</artifactId>
- <version>1.30.0</version>
- </dependency>
- </dependencies>
- </project>
- ```
-1. Run the following Maven command to install the Speech SDK and dependencies.
- ```console
- mvn clean dependency:copy-dependencies
- ```
-
-# [Android](#tab/android)
-
-For Java embedded applications, add [client-sdk-embedded](https://mvnrepository.com/artifact/com.microsoft.cognitiveservices.speech/client-sdk-embedded) (`.aar`) as a dependency. This package supports cloud, embedded, and hybrid speech.
-
-> [!IMPORTANT]
-> Don't add [client-sdk](https://mvnrepository.com/artifact/com.microsoft.cognitiveservices.speech/client-sdk) in the same project, since it supports only cloud speech services.
-
-Be sure to use the `@aar` suffix when the dependency is specified in `build.gradle`. Here's an example:
-
-```
-dependencies {
- implementation 'com.microsoft.cognitiveservices.speech:client-sdk-embedded:1.30.0@aar'
-}
-```
--
-## Models and voices
-
-For embedded speech, you'll need to download the speech recognition models for [speech to text](speech-to-text.md) and voices for [text to speech](text-to-speech.md). Instructions will be provided upon successful completion of the [limited access review](https://aka.ms/csgate-embedded-speech) process.
-
-The following [speech to text](speech-to-text.md) models are available: de-DE, en-AU, en-CA, en-GB, en-IE, en-IN, en-NZ, en-US, es-ES, es-MX, fr-CA, fr-FR, hi-IN, it-IT, ja-JP, ko-KR, nl-NL, pt-BR, ru-RU, sv-SE, tr-TR, zh-CN, zh-HK, and zh-TW.
-
-The following [text to speech](text-to-speech.md) locales and voices are available:
-
-| Locale (BCP-47) | Language | Text to speech voices |
-| -- | -- | -- |
-| `de-DE` | German (Germany) | `de-DE-KatjaNeural` (Female)<br/>`de-DE-ConradNeural` (Male)|
-| `en-AU` | English (Australia) | `en-AU-AnnetteNeural` (Female)<br/>`en-AU-WilliamNeural` (Male)|
-| `en-CA` | English (Canada) | `en-CA-ClaraNeural` (Female)<br/>`en-CA-LiamNeural` (Male)|
-| `en-GB` | English (United Kingdom) | `en-GB-LibbyNeural` (Female)<br/>`en-GB-RyanNeural` (Male)|
-| `en-US` | English (United States) | `en-US-AriaNeural` (Female)<br/>`en-US-GuyNeural` (Male)<br/>`en-US-JennyNeural` (Female)|
-| `es-ES` | Spanish (Spain) | `es-ES-ElviraNeural` (Female)<br/>`es-ES-AlvaroNeural` (Male)|
-| `es-MX` | Spanish (Mexico) | `es-MX-DaliaNeural` (Female)<br/>`es-MX-JorgeNeural` (Male)|
-| `fr-CA` | French (Canada) | `fr-CA-SylvieNeural` (Female)<br/>`fr-CA-JeanNeural` (Male)|
-| `fr-FR` | French (France) | `fr-FR-DeniseNeural` (Female)<br/>`fr-FR-HenriNeural` (Male)|
-| `it-IT` | Italian (Italy) | `it-IT-IsabellaNeural` (Female)<br/>`it-IT-DiegoNeural` (Male)|
-| `ja-JP` | Japanese (Japan) | `ja-JP-NanamiNeural` (Female)<br/>`ja-JP-KeitaNeural` (Male)|
-| `ko-KR` | Korean (Korea) | `ko-KR-SunHiNeural` (Female)<br/>`ko-KR-InJoonNeural` (Male)|
-| `pt-BR` | Portuguese (Brazil) | `pt-BR-FranciscaNeural` (Female)<br/>`pt-BR-AntonioNeural` (Male)|
-| `zh-CN` | Chinese (Mandarin, Simplified) | `zh-CN-XiaoxiaoNeural` (Female)<br/>`zh-CN-YunxiNeural` (Male)|
-
-## Embedded speech configuration
-
-For cloud connected applications, as shown in most Speech SDK samples, you use the `SpeechConfig` object with a Speech resource key and region. For embedded speech, you don't use a Speech resource. Instead of a cloud resource, you use the [models and voices](#models-and-voices) that you downloaded to your local device.
-
-Use the `EmbeddedSpeechConfig` object to set the location of the models or voices. If your application is used for both speech to text and text to speech, you can use the same `EmbeddedSpeechConfig` object to set the location of the models and voices.
--
-```csharp
-// Provide the location of the models and voices.
-List<string> paths = new List<string>();
-paths.Add("C:\\dev\\embedded-speech\\stt-models");
-paths.Add("C:\\dev\\embedded-speech\\tts-voices");
-var embeddedSpeechConfig = EmbeddedSpeechConfig.FromPaths(paths.ToArray());
-
-// For speech to text
-embeddedSpeechConfig.SetSpeechRecognitionModel(
- "Microsoft Speech Recognizer en-US FP Model V8",
- Environment.GetEnvironmentVariable("MODEL_KEY"));
-
-// For text to speech
-embeddedSpeechConfig.SetSpeechSynthesisVoice(
- "Microsoft Server Speech Text to Speech Voice (en-US, JennyNeural)",
- Environment.GetEnvironmentVariable("VOICE_KEY"));
-embeddedSpeechConfig.SetSpeechSynthesisOutputFormat(SpeechSynthesisOutputFormat.Riff24Khz16BitMonoPcm);
-```
--
-> [!TIP]
-> The `GetEnvironmentVariable` function is defined in the [speech to text quickstart](get-started-speech-to-text.md) and [text to speech quickstart](get-started-text-to-speech.md).
-
-```cpp
-// Provide the location of the models and voices.
-vector<string> paths;
-paths.push_back("C:\\dev\\embedded-speech\\stt-models");
-paths.push_back("C:\\dev\\embedded-speech\\tts-voices");
-auto embeddedSpeechConfig = EmbeddedSpeechConfig::FromPaths(paths);
-
-// For speech to text
-embeddedSpeechConfig->SetSpeechRecognitionModel(
- "Microsoft Speech Recognizer en-US FP Model V8",
- GetEnvironmentVariable("MODEL_KEY"));
-
-// For text to speech
-embeddedSpeechConfig->SetSpeechSynthesisVoice(
- "Microsoft Server Speech Text to Speech Voice (en-US, JennyNeural)",
- GetEnvironmentVariable("VOICE_KEY"));
-embeddedSpeechConfig->SetSpeechSynthesisOutputFormat(SpeechSynthesisOutputFormat::Riff24Khz16BitMonoPcm);
-```
---
-```java
-// Provide the location of the models and voices.
-List<String> paths = new ArrayList<>();
-paths.add("C:\\dev\\embedded-speech\\stt-models");
-paths.add("C:\\dev\\embedded-speech\\tts-voices");
-var embeddedSpeechConfig = EmbeddedSpeechConfig.fromPaths(paths);
-
-// For speech to text
-embeddedSpeechConfig.setSpeechRecognitionModel(
- "Microsoft Speech Recognizer en-US FP Model V8",
- System.getenv("MODEL_KEY"));
-
-// For text to speech
-embeddedSpeechConfig.setSpeechSynthesisVoice(
- "Microsoft Server Speech Text to Speech Voice (en-US, JennyNeural)",
- System.getenv("VOICE_KEY"));
-embeddedSpeechConfig.setSpeechSynthesisOutputFormat(SpeechSynthesisOutputFormat.Riff24Khz16BitMonoPcm);
-```
---
-## Embedded speech code samples
--
-You can find ready-to-use embedded speech samples at [GitHub](https://aka.ms/embedded-speech-samples). For remarks on setting up projects from scratch, see the sample-specific documentation:
-
-- [C# (.NET 6.0)](https://aka.ms/embedded-speech-samples-csharp)
-- [C# (.NET MAUI)](https://aka.ms/embedded-speech-samples-csharp-maui)
-- [C# for Unity](https://aka.ms/embedded-speech-samples-csharp-unity)
-
-
-You can find ready-to-use embedded speech samples at [GitHub](https://aka.ms/embedded-speech-samples). For remarks on setting up projects from scratch, see the sample-specific documentation:
-
-- [C++](https://aka.ms/embedded-speech-samples-cpp)
-
-
-You can find ready-to-use embedded speech samples at [GitHub](https://aka.ms/embedded-speech-samples). For remarks on setting up projects from scratch, see the sample-specific documentation:
-
-- [Java (JRE)](https://aka.ms/embedded-speech-samples-java)
-- [Java for Android](https://aka.ms/embedded-speech-samples-java-android)
-
-## Hybrid speech
-
-Hybrid speech with the `HybridSpeechConfig` object uses the cloud speech service by default and embedded speech as a fallback in case cloud connectivity is limited or slow.
-
-With hybrid speech configuration for [speech to text](speech-to-text.md) (recognition models), embedded speech is used when connection to the cloud service fails after repeated attempts. Recognition may continue using the cloud service again if the connection is later resumed.
-
-With hybrid speech configuration for [text to speech](text-to-speech.md) (voices), embedded and cloud synthesis are run in parallel and the result is selected based on which one gives a faster response. The best result is evaluated on each synthesis request.
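-
-For illustration, here's a minimal C# sketch of the speech to text case. It assumes a `HybridSpeechConfig.FromConfigs` factory that combines a cloud `SpeechConfig` with an `EmbeddedSpeechConfig`; the key, region, model path, and model key values are placeholders:
-
-```csharp
-using System;
-using Microsoft.CognitiveServices.Speech;
-using Microsoft.CognitiveServices.Speech.Audio;
-
-// Cloud configuration (placeholder key and region).
-var cloudConfig = SpeechConfig.FromSubscription("YourSpeechKey", "YourSpeechRegion");
-
-// Embedded configuration that points at locally downloaded models (placeholder path and key).
-var embeddedConfig = EmbeddedSpeechConfig.FromPaths(new[] { "C:\\dev\\embedded-speech\\stt-models" });
-embeddedConfig.SetSpeechRecognitionModel(
-    "Microsoft Speech Recognizer en-US FP Model V8",
-    Environment.GetEnvironmentVariable("MODEL_KEY"));
-
-// Assumed factory: the cloud service is used by default, embedded speech is the fallback.
-var hybridConfig = HybridSpeechConfig.FromConfigs(cloudConfig, embeddedConfig);
-
-using var audioConfig = AudioConfig.FromDefaultMicrophoneInput();
-using var recognizer = new SpeechRecognizer(hybridConfig, audioConfig);
-
-var result = await recognizer.RecognizeOnceAsync();
-Console.WriteLine($"Recognized: {result.Text}");
-```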
-
-## Cloud speech
-
-For cloud speech, you use the `SpeechConfig` object, as shown in the [speech to text quickstart](get-started-speech-to-text.md) and [text to speech quickstart](get-started-text-to-speech.md). To run the quickstarts for embedded speech, you can replace `SpeechConfig` with `EmbeddedSpeechConfig` or `HybridSpeechConfig`. Most of the other speech recognition and synthesis code is the same, whether you use a cloud, embedded, or hybrid configuration.
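-
-For example, a recognizer that uses the `embeddedSpeechConfig` object from the configuration section above might look like the following sketch (microphone input is assumed):
-
-```csharp
-// Continuing from the embeddedSpeechConfig shown earlier in this article.
-using var audioConfig = AudioConfig.FromDefaultMicrophoneInput();
-using var recognizer = new SpeechRecognizer(embeddedSpeechConfig, audioConfig);
-
-var result = await recognizer.RecognizeOnceAsync();
-Console.WriteLine($"Recognized: {result.Text}");
-```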
--
-## Next steps
-
-- [Read about text to speech on devices for disconnected and hybrid scenarios](https://techcommunity.microsoft.com/t5/ai-cognitive-services-blog/azure-neural-tts-now-available-on-devices-for-disconnected-and/ba-p/3716797)
-- [Limited Access to embedded Speech](/legal/cognitive-services/speech-service/embedded-speech/limited-access-embedded-speech?context=/azure/cognitive-services/speech-service/context/context)
cognitive-services Gaming Concepts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/gaming-concepts.md
- Title: Game development with Azure Cognitive Service for Speech - Speech service-
-description: Concepts for game development with Azure Cognitive Service for Speech.
- Previously updated : 01/25/2023
-# Game development with Azure Cognitive Service for Speech
-
-Azure Cognitive Services for Speech can be used to improve various gaming scenarios, both in- and out-of-game.
-
-Here are a few Speech features to consider for flexible and interactive game experiences:
-
-- Bring everyone into the conversation by synthesizing audio from text, or by displaying text from audio.
-- Make the game more accessible for players who are unable to read text in a particular language, including young players who haven't learned to read and write. Players can listen to storylines and instructions in their preferred language.
-- Create game avatars and non-playable characters (NPC) that can initiate or participate in a conversation in-game.
-- Prebuilt neural voice can provide highly natural out-of-box voices with leading voice variety in terms of a large portfolio of languages and voices.
-- Custom neural voice for creating a voice that stays on-brand with consistent quality and speaking style. You can add emotions, accents, nuances, laughter, and other paralinguistic sounds and expressions.
-- Use game dialogue prototyping to shorten the time and money spent in production and get the game to market sooner. You can rapidly swap lines of dialog and listen to variations in real time to iterate the game content.
-
-You can use the [Speech SDK](speech-sdk.md) or [Speech CLI](spx-overview.md) for real-time low latency speech to text, text to speech, language identification, and speech translation. You can also use the [Batch transcription API](batch-transcription.md) to transcribe pre-recorded speech to text. To synthesize a large volume of text input (long and short) to speech, use the [Batch synthesis API](batch-synthesis.md).
-
-For information about locale and regional availability, see [Language and voice support](language-support.md) and [Region support](regions.md).
-
-## Text to speech
-
-Help bring everyone into the conversation by converting text messages to audio using [Text to speech](text-to-speech.md) for scenarios such as game dialogue prototyping, greater accessibility, or non-playable character (NPC) voices. Text to speech includes [prebuilt neural voice](language-support.md?tabs=tts#prebuilt-neural-voices) and [custom neural voice](language-support.md?tabs=tts#custom-neural-voice) features. Prebuilt neural voice provides highly natural out-of-the-box voices across a large portfolio of languages and voices. Custom neural voice is an easy-to-use self-service for creating a highly natural custom voice.
-
-When enabling this functionality in your game, keep in mind the following benefits:
-- Voices and languages supported - A large portfolio of [locales and voices](language-support.md?tabs=tts#supported-languages) is supported. You can also [specify multiple languages](speech-synthesis-markup-voice.md#adjust-speaking-languages) for Text to speech output. For [custom neural voice](custom-neural-voice.md), you can [choose to create](how-to-custom-voice-create-voice.md?tabs=neural#choose-a-training-method) different languages from single-language training data.
-- Emotional styles supported - [Emotional tones](language-support.md?tabs=tts#voice-styles-and-roles), such as cheerful, angry, sad, excited, hopeful, friendly, unfriendly, terrified, shouting, and whispering. You can [adjust the speaking style](speech-synthesis-markup-voice.md#speaking-styles-and-roles), style degree, and role at the sentence level.
-- Visemes supported - You can use visemes during real-time synthesis to control the movement of 2D and 3D avatar models, so that the mouth movements match the synthetic speech. For more information, see [Get facial position with viseme](how-to-speech-synthesis-viseme.md).
-- Fine-tuning with Speech Synthesis Markup Language (SSML) - With SSML, you can customize Text to speech outputs with richer voice tuning support. For more information, see [Speech Synthesis Markup Language (SSML) overview](speech-synthesis-markup.md).
-- Audio outputs - Each prebuilt neural voice model is available at 24 kHz and high-fidelity 48 kHz. If you select the 48-kHz output format, the high-fidelity 48-kHz voice model is invoked accordingly. Sample rates other than 24 kHz and 48 kHz are obtained through upsampling or downsampling during synthesis; for example, 44.1 kHz is downsampled from 48 kHz. Each audio format incorporates a bitrate and encoding type. For more information, see the [supported audio formats](rest-text-to-speech.md?tabs=streaming#audio-outputs). For more information on 48-kHz high-quality voices, see [this introduction blog](https://techcommunity.microsoft.com/t5/ai-cognitive-services-blog/azure-neural-tts-voices-upgraded-to-48khz-with-hifinet2-vocoder/ba-p/3665252).
-For an example, see the [Text to speech quickstart](get-started-text-to-speech.md).
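-
-To make the SSML-based tuning concrete, here's a hedged C# sketch that synthesizes a line of NPC dialogue with a speaking style (placeholder key and region; the voice and style values are examples you should check against the supported styles for your resource; run inside an async context):
-
-```csharp
-using Microsoft.CognitiveServices.Speech;
-
-var speechConfig = SpeechConfig.FromSubscription("YourSubscriptionKey", "YourServiceRegion");
-using var synthesizer = new SpeechSynthesizer(speechConfig);
-
-// SSML with an expressive speaking style for an NPC line.
-string ssml = @"
-<speak version='1.0' xmlns='http://www.w3.org/2001/10/synthesis'
-       xmlns:mstts='http://www.w3.org/2001/mstts' xml:lang='en-US'>
-  <voice name='en-US-JennyNeural'>
-    <mstts:express-as style='cheerful' styledegree='1.5'>
-      Welcome back, adventurer! Your quest awaits.
-    </mstts:express-as>
-  </voice>
-</speak>";
-
-// Speaks through the default output device; inspect the result to check for errors.
-var result = await synthesizer.SpeakSsmlAsync(ssml);
-```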
-
-## Speech to text
-
-You can use [speech to text](speech-to-text.md) to display text from the spoken audio in your game. For an example, see the [Speech to text quickstart](get-started-speech-to-text.md).
-
-## Language identification
-
-With [language identification](language-identification.md), you can detect the language of the chat string submitted by the player.
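-
-Language identification in the Speech service works on spoken audio. As a hedged C# sketch (placeholder key, region, and candidate locales; run inside an async context), at-start detection of which language a player is speaking can look like this:
-
-```csharp
-using System;
-using Microsoft.CognitiveServices.Speech;
-using Microsoft.CognitiveServices.Speech.Audio;
-
-var speechConfig = SpeechConfig.FromSubscription("YourSubscriptionKey", "YourServiceRegion");
-
-// Candidate languages to detect; pick the locales your players are likely to use.
-var autoDetectConfig = AutoDetectSourceLanguageConfig.FromLanguages(
-    new[] { "en-US", "de-DE", "ja-JP" });
-
-using var audioConfig = AudioConfig.FromDefaultMicrophoneInput();
-using var recognizer = new SpeechRecognizer(speechConfig, autoDetectConfig, audioConfig);
-
-var result = await recognizer.RecognizeOnceAsync();
-var detected = AutoDetectSourceLanguageResult.FromResult(result);
-Console.WriteLine($"Detected language: {detected.Language}, text: {result.Text}");
-```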
-
-## Speech translation
-
-It's not unusual for players in the same game session to natively speak different languages, and they may appreciate receiving both the original message and its translation. You can use [speech translation](speech-translation.md) to translate speech into text in other languages so players across the world can communicate with each other in their native language.
-
-For an example, see the [Speech translation quickstart](get-started-speech-translation.md).
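-
-As a rough C# sketch (placeholder key, region, and language codes; run inside an async context), translating a player's speech into a couple of target languages can look like this:
-
-```csharp
-using System;
-using Microsoft.CognitiveServices.Speech;
-using Microsoft.CognitiveServices.Speech.Audio;
-using Microsoft.CognitiveServices.Speech.Translation;
-
-// Placeholder key and region; the source and target languages are examples.
-var translationConfig = SpeechTranslationConfig.FromSubscription("YourSubscriptionKey", "YourServiceRegion");
-translationConfig.SpeechRecognitionLanguage = "en-US";
-translationConfig.AddTargetLanguage("de");
-translationConfig.AddTargetLanguage("ja");
-
-using var audioConfig = AudioConfig.FromDefaultMicrophoneInput();
-using var recognizer = new TranslationRecognizer(translationConfig, audioConfig);
-
-// Recognize one utterance and print the original text plus each translation.
-var result = await recognizer.RecognizeOnceAsync();
-Console.WriteLine($"Recognized: {result.Text}");
-foreach (var translation in result.Translations)
-{
-    Console.WriteLine($"{translation.Key}: {translation.Value}");
-}
-```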
-
-> [!NOTE]
-> Besides the Speech service, you can also use the [Translator service](../translator/translator-overview.md). To run text translation between supported source and target languages in real time, see [Text translation](../translator/text-translation-overview.md).
-
-## Next steps
-
-* [Azure gaming documentation](/gaming/azure/)
-* [Text to speech quickstart](get-started-text-to-speech.md)
-* [Speech to text quickstart](get-started-speech-to-text.md)
-* [Speech translation quickstart](get-started-speech-translation.md)
cognitive-services Get Speech Recognition Results https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/get-speech-recognition-results.md
- Title: "Get speech recognition results - Speech service"-
-description: Learn how to get speech recognition results.
------ Previously updated : 06/13/2022--
-zone_pivot_groups: programming-languages-speech-sdk-cli
-keywords: speech to text, speech to text software
--
-# Get speech recognition results
----------
-## Next steps
-
-* [Try the speech to text quickstart](get-started-speech-to-text.md)
-* [Improve recognition accuracy with custom speech](custom-speech-overview.md)
-* [Use batch transcription](batch-transcription.md)
cognitive-services Get Started Intent Recognition Clu https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/get-started-intent-recognition-clu.md
- Title: "Intent recognition with CLU quickstart - Speech service"-
-description: In this quickstart, you recognize intents from audio data with the Speech service and Language service.
------- Previously updated : 02/22/2023-
-zone_pivot_groups: programming-languages-set-thirteen
-keywords: intent recognition
--
-# Quickstart: Recognize intents with Conversational Language Understanding
----
-## Next steps
-
-> [!div class="nextstepaction"]
-> [Learn more about speech recognition](how-to-recognize-speech.md)
cognitive-services Get Started Intent Recognition https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/get-started-intent-recognition.md
- Title: "Intent recognition quickstart - Speech service"-
-description: In this quickstart, you recognize intents from audio data with the Speech service and LUIS.
------ Previously updated : 02/22/2023--
-zone_pivot_groups: programming-languages-speech-services
-keywords: intent recognition
--
-# Quickstart: Recognize intents with the Speech service and LUIS
-
-> [!IMPORTANT]
-> LUIS will be retired on October 1, 2025, and starting April 1, 2023, you won't be able to create new LUIS resources. We recommend [migrating your LUIS applications](../language-service/conversational-language-understanding/how-to/migrate-from-luis.md) to [conversational language understanding](../language-service/conversational-language-understanding/overview.md) to benefit from continued product support and multilingual capabilities.
->
-> Conversational Language Understanding (CLU) is available for C# and C++ with the [Speech SDK](speech-sdk.md) version 1.25 or later. See the [quickstart](get-started-intent-recognition-clu.md) to recognize intents with the Speech SDK and CLU.
-----------
-## Next steps
-
-> [!div class="nextstepaction"]
-> [See more LUIS samples on GitHub](https://github.com/Azure/pizza_luis_bot)
cognitive-services Get Started Speaker Recognition https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/get-started-speaker-recognition.md
- Title: "Speaker Recognition quickstart - Speech service"-
-description: In this quickstart, you use speaker recognition to confirm who is speaking. Learn about common design patterns for working with speaker verification and identification.
------ Previously updated : 01/08/2022--
-zone_pivot_groups: programming-languages-speech-services
-keywords: speaker recognition, voice biometry
--
-# Quickstart: Recognize and verify who is speaking
-----------
-## Next steps
-
-> [!div class="nextstepaction"]
-> [See the quickstart samples on GitHub](https://github.com/Azure-Samples/cognitive-services-speech-sdk/tree/master/quickstart)
cognitive-services Get Started Speech To Text https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/get-started-speech-to-text.md
- Title: "Speech to text quickstart - Speech service"-
-description: In this quickstart, you convert speech to text with recognition from a microphone.
------ Previously updated : 09/16/2022--
-zone_pivot_groups: programming-languages-speech-services
-keywords: speech to text, speech to text software
--
-# Quickstart: Recognize and convert speech to text
-----------
-## Next steps
-
-> [!div class="nextstepaction"]
-> [Learn more about speech recognition](how-to-recognize-speech.md)
cognitive-services Get Started Speech Translation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/get-started-speech-translation.md
- Title: Speech translation quickstart - Speech service-
-description: In this quickstart, you translate speech from one language to text in another language.
------- Previously updated : 09/16/2022-
-zone_pivot_groups: programming-languages-speech-services
-keywords: speech translation
--
-# Quickstart: Recognize and translate speech to text
------------
-## Next steps
-
-> [!div class="nextstepaction"]
-> [Learn more about speech translation](how-to-translate-speech.md)
cognitive-services Get Started Text To Speech https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/get-started-text-to-speech.md
- Title: "Text to speech quickstart - Speech service"-
-description: In this quickstart, you convert text to speech. Learn about object construction and design patterns, supported audio output formats, and custom configuration options for speech synthesis.
------ Previously updated : 09/16/2022--
-zone_pivot_groups: programming-languages-speech-services
-keywords: text to speech
--
-# Quickstart: Convert text to speech
-----------
-## Next steps
-
-> [!div class="nextstepaction"]
-> [Learn more about speech synthesis](how-to-speech-synthesis.md)
cognitive-services How To Async Conversation Transcription https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/how-to-async-conversation-transcription.md
- Title: Asynchronous Conversation Transcription - Speech service-
-description: Learn how to use asynchronous Conversation Transcription using the Speech service. Available for Java and C# only.
----- Previously updated : 11/04/2019-
-zone_pivot_groups: programming-languages-set-twenty-one
--
-# Asynchronous Conversation Transcription
-
-In this article, asynchronous Conversation Transcription is demonstrated using the **RemoteConversationTranscriptionClient** API. If you have configured Conversation Transcription to do asynchronous transcription and have a `conversationId`, you can obtain the transcription associated with that `conversationId` using the **RemoteConversationTranscriptionClient** API.
-
-## Asynchronous vs. real-time + asynchronous
-
-With asynchronous transcription, you stream the conversation audio, but don't need a transcription returned in real-time. Instead, after the audio is sent, use the `conversationId` of `Conversation` to query for the status of the asynchronous transcription. When the asynchronous transcription is ready, you'll get a `RemoteConversationTranscriptionResult`.
-
-With real-time plus asynchronous, you get the transcription in real time, and you can also get the transcription by querying with the `conversationId` (similar to the asynchronous scenario).
-
-Two steps are required to accomplish asynchronous transcription. The first step is to upload the audio, choosing either asynchronous only or real-time plus asynchronous. The second step is to get the transcription results.
----
-## Next steps
-
-> [!div class="nextstepaction"]
-> [Explore our samples on GitHub](https://aka.ms/csspeech/samples)
cognitive-services How To Audio Content Creation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/how-to-audio-content-creation.md
- Title: Audio Content Creation - Speech service-
-description: Audio Content Creation is an online tool that allows you to run Text to speech synthesis without writing any code.
------ Previously updated : 09/25/2022---
-# Speech synthesis with the Audio Content Creation tool
-
-You can use the [Audio Content Creation](https://speech.microsoft.com/portal/audiocontentcreation) tool in Speech Studio for Text to speech synthesis without writing any code. You can use the output audio as-is, or as a starting point for further customization.
-
-Build highly natural audio content for a variety of scenarios, such as audiobooks, news broadcasts, video narrations, and chat bots. With Audio Content Creation, you can efficiently fine-tune Text to speech voices and design customized audio experiences.
-
-The tool is based on [Speech Synthesis Markup Language (SSML)](speech-synthesis-markup.md). It allows you to adjust Text to speech output attributes in real-time or batch synthesis, such as voice characters, voice styles, speaking speed, pronunciation, and prosody.
-- No-code approach: You can use the Audio Content Creation tool for Text to speech synthesis without writing any code. The output audio might be the final deliverable that you want. For example, you can use the output audio for a podcast or a video narration.
-- Developer-friendly: You can listen to the output audio and adjust the SSML to improve speech synthesis. Then you can use the [Speech SDK](speech-sdk.md) or [Speech CLI](spx-basics.md) to integrate the SSML into your applications. For example, you can use the SSML for building a chat bot.
-You have easy access to a broad portfolio of [languages and voices](language-support.md?tabs=tts). These voices include state-of-the-art prebuilt neural voices and your custom neural voice, if you've built one.
-
-To learn more, view the Audio Content Creation tutorial video [on YouTube](https://youtu.be/ygApYuOOG6w).
-
-## Get started
-
-The Audio Content Creation tool in Speech Studio is free to access, but you'll pay for Speech service usage. To work with the tool, you need to sign in with an Azure account and create a Speech resource. For each Azure account, you have free monthly speech quotas, which include 0.5 million characters for prebuilt neural voices (referred to as *Neural* on the [pricing page](https://aka.ms/speech-pricing)). The monthly allotted amount is usually enough for a small content team of around 3-5 people.
-
-The next sections cover how to create an Azure account and get a Speech resource.
-
-### Step 1: Create an Azure account
-
-To work with Audio Content Creation, you need a [Microsoft account](https://account.microsoft.com/account) and an [Azure account](https://azure.microsoft.com/free/ai/).
-
-[The Azure portal](https://portal.azure.com/) is the centralized place for you to manage your Azure account. You can create the Speech resource, manage the product access, and monitor everything from simple web apps to complex cloud deployments.
-
-### Step 2: Create a Speech resource
-
-After you sign up for the Azure account, you need to create a Speech resource in your Azure account to access Speech services. Create a Speech resource on the [Azure portal](https://portal.azure.com). For more information, see [Create a new Azure Cognitive Services resource](~/articles/cognitive-services/cognitive-services-apis-create-account.md?tabs=speech#create-a-new-azure-cognitive-services-resource).
-
-It takes a few moments to deploy your new Speech resource. After the deployment is complete, you can start using the Audio Content Creation tool.
-
- > [!NOTE]
- > If you plan to use neural voices, make sure that you create your resource in [a region that supports neural voices](regions.md#speech-service).
-
-### Step 3: Sign in to Audio Content Creation with your Azure account and Speech resource
-
-1. After you get the Azure account and the Speech resource, sign in to [Speech Studio](https://aka.ms/speechstudio/), and then select **Audio Content Creation**.
-
-1. Select the Azure subscription and the Speech resource you want to work with, and then select **Use resource**.
-
- The next time you sign in to Audio Content Creation, you're linked directly to the audio work files under the current Speech resource. You can check your Azure subscription details and status in the [Azure portal](https://portal.azure.com/).
-
- If you don't have an available Speech resource and you're the owner or admin of an Azure subscription, you can create a Speech resource in Speech Studio by selecting **Create a new resource**.
-
- If you have a user role for a certain Azure subscription, you might not have permissions to create a new Speech resource. To get access, contact your admin.
-
- To switch your Speech resource at any time, select **Settings** at the top of the page.
-
- To switch directories, select **Settings** or go to your profile.
-
-## Use the tool
-
-The following diagram displays the process for fine-tuning the Text to speech outputs.
--
-Each step in the preceding diagram is described here:
-
-1. Choose the Speech resource you want to work with.
-
-1. [Create an audio tuning file](#create-an-audio-tuning-file) by using plain text or SSML scripts. Enter or upload your content into Audio Content Creation.
-1. Choose the voice and the language for your script content. Audio Content Creation includes all of the [prebuilt text to speech voices](language-support.md?tabs=tts). You can use prebuilt neural voices or a custom neural voice.
-
- > [!NOTE]
- > Gated access is available for Custom Neural Voice, which allows you to create high-definition voices that are similar to natural-sounding speech. For more information, see [Gating process](./text-to-speech.md).
-
-1. Select the content you want to preview, and then select **Play** (triangle icon) to preview the default synthesis output.
-
- If you make any changes to the text, select the **Stop** icon, and then select **Play** again to regenerate the audio with changed scripts.
-
- Improve the output by adjusting pronunciation, break, pitch, rate, intonation, voice style, and more. For a complete list of options, see [Speech Synthesis Markup Language](speech-synthesis-markup.md).
-
- For more information about fine-tuning speech output, view the [How to convert Text to speech using Microsoft Azure AI voices](https://youtu.be/ygApYuOOG6w) video.
-
-1. Save and [export your tuned audio](#export-tuned-audio).
-
- When you save the tuning track in the system, you can continue to work and iterate on the output. When you're satisfied with the output, you can create an audio creation task with the export feature. You can observe the status of the export task and download the output for use with your apps and products.
-
-## Create an audio tuning file
-
-You can get your content into the Audio Content Creation tool in either of two ways:
-
-* **Option 1**
-
- 1. Select **New** > **Text file** to create a new audio tuning file.
-
- 1. Enter or paste your content into the editing window. The allowable number of characters for each file is 20,000 or fewer. If your script contains more than 20,000 characters, you can use Option 2 to automatically split your content into multiple files.
-
- 1. Select **Save**.
-
-* **Option 2**
-
- 1. Select **Upload** > **Text file** to import one or more text files. Both plain text and SSML are supported.
-
- If your script file is more than 20,000 characters, split the content by paragraphs, by characters, or by regular expressions (a pre-splitting sketch follows the examples below).
-
- 1. When you upload your text files, make sure that they meet these requirements:
-
- | Property | Description |
- |-||
- | File format | Plain text (.txt)\*<br> SSML text (.txt)\**<br/> Zip files aren't supported. |
- | Encoding format | UTF-8 |
- | File name | Each file must have a unique name. Duplicate files aren't supported. |
- | Text length | Character limit is 20,000. If your files exceed the limit, split them according to the instructions in the tool. |
- | SSML restrictions | Each SSML file can contain only a single piece of SSML. |
-
-
- \* **Plain text example**:
-
- ```txt
- Welcome to use Audio Content Creation to customize audio output for your products.
- ```
-
- \** **SSML text example**:
-
- ```xml
- <speak xmlns="http://www.w3.org/2001/10/synthesis" xmlns:mstts="http://www.w3.org/2001/mstts" version="1.0" xml:lang="en-US">
- <voice name="en-US-JennyNeural">
- Welcome to use Audio Content Creation <break time="10ms" />to customize audio output for your products.
- </voice>
- </speak>
- ```
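-
-The tool can split oversized files for you. If you prefer to pre-split a long plain-text script before uploading, here's a minimal C# sketch; the file names are hypothetical, and it assumes paragraphs are separated by blank lines and no single paragraph exceeds the limit.
-
-```csharp
-using System;
-using System.Collections.Generic;
-using System.IO;
-
-const int MaxChars = 20000;
-
-// Read the full script and split it into paragraphs at blank lines.
-var paragraphs = File.ReadAllText("script.txt")
-    .Split(new[] { "\r\n\r\n", "\n\n" }, StringSplitOptions.RemoveEmptyEntries);
-
-// Pack paragraphs into chunks that stay under the 20,000-character limit.
-var chunks = new List<string>();
-var current = "";
-foreach (var paragraph in paragraphs)
-{
-    if (current.Length > 0 && current.Length + paragraph.Length + 2 > MaxChars)
-    {
-        chunks.Add(current);
-        current = "";
-    }
-    current += (current.Length > 0 ? "\n\n" : "") + paragraph;
-}
-if (current.Length > 0) chunks.Add(current);
-
-// Write each chunk to its own upload file; each file needs a unique name.
-for (int i = 0; i < chunks.Count; i++)
-{
-    File.WriteAllText($"script-part-{i + 1}.txt", chunks[i]);
-}
-Console.WriteLine($"Wrote {chunks.Count} file(s).");
-```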
-
-## Export tuned audio
-
-After you've reviewed your audio output and are satisfied with your tuning and adjustment, you can export the audio.
-
-1. Select **Export** to create an audio creation task.
-
- We recommend **Export to Audio library** to easily store, find, and search audio output in the cloud. You can better integrate with your applications through Azure blob storage. You can also download the audio to your local disk directly.
-
-1. Choose the output format for your tuned audio. The **supported audio formats and sample rates** are listed in the following table:
-
- | Format | 8 kHz sample rate | 16 kHz sample rate | 24 kHz sample rate | 48 kHz sample rate |
- | | | | | |
- | wav | riff-8khz-16bit-mono-pcm | riff-16khz-16bit-mono-pcm | riff-24khz-16bit-mono-pcm |riff-48khz-16bit-mono-pcm |
- | mp3 | N/A | audio-16khz-128kbitrate-mono-mp3 | audio-24khz-160kbitrate-mono-mp3 |audio-48khz-192kbitrate-mono-mp3 |
-
-
-1. To view the status of the task, select the **Task list** tab.
-
- If the task fails, see the detailed information page for a full report.
-
-1. When the task is complete, your audio is available for download on the **Audio library** pane.
-
-1. Select the file you want to download, and then select **Download**.
-
- Now you're ready to use your custom tuned audio in your apps or products.
-
-## Configure BYOS and anonymous public read access for blobs
-
-If you lose access permission to your Bring Your Own Storage (BYOS), you won't be able to view, create, edit, or delete files. To resume your access, you need to remove the current storage and reconfigure the BYOS in the [Azure portal](https://portal.azure.com/#allservices). To learn more about how to configure BYOS, see [Mount Azure Storage as a local share in App Service](/azure/app-service/configure-connect-to-azure-storage?pivots=container-linux&tabs=portal).
-
-After configuring the BYOS permission, you need to configure anonymous public read access for the related containers and blobs. Otherwise, blob data isn't available for public access and your lexicon file in the blob is inaccessible. By default, a container's public access setting is disabled. To grant anonymous users read access to a container and its blobs, first set **Allow Blob public access** to **Enabled** for the storage account, and then set the public access level of the container named **acc-public-files** to **anonymous read access for blobs only**. To learn more about how to configure anonymous public read access, see [Configure anonymous public read access for containers and blobs](/azure/storage/blobs/anonymous-read-access-configure?tabs=portal).
-
-## Add or remove Audio Content Creation users
-
-If more than one user wants to use Audio Content Creation, you can grant them access to the Azure subscription and the Speech resource. If you add users to an Azure subscription, they can access all the resources under the Azure subscription. But if you add users to a Speech resource only, they'll have access only to the Speech resource and not to other resources under this Azure subscription. Users with access to the Speech resource can use the Audio Content Creation tool.
-
-The users you grant access to need to set up a [Microsoft account](https://account.microsoft.com/account). If they don't have a Microsoft account, they can create one in just a few minutes. They can use their existing email and link it to a Microsoft account, or they can create and use an Outlook email address as a Microsoft account.
-
-### Add users to a Speech resource
-
-To add users to a Speech resource so that they can use Audio Content Creation, do the following:
--
-1. In the [Azure portal](https://portal.azure.com/), select **All services**.
-1. Select **Cognitive Services**, and then go to your Speech resource.
- > [!NOTE]
- > You can also set up Azure RBAC for whole resource groups, subscriptions, or management groups. Do this by selecting the desired scope level and then navigating to the desired item (for example, selecting **Resource groups** and then selecting the resource group you want).
-1. Select **Access control (IAM)** on the left navigation pane.
-1. Select **Add** -> **Add role assignment**.
-1. On the **Role** tab on the next screen, select a role you want to add (in this case, **Owner**).
-1. On the **Members** tab, enter a user's email address and select the user's name in the directory. The email address must be linked to a Microsoft account that's trusted by Azure Active Directory. Users can easily sign up for a [Microsoft account](https://account.microsoft.com/account) by using their personal email address.
-1. On the **Review + assign** tab, select **Review + assign** to assign the role.
-
-Here is what happens next:
-
-An email invitation is automatically sent to users. They can accept it by selecting **Accept invitation** > **Accept to join Azure** in their email. They're then redirected to the Azure portal. They don't need to take further action in the Azure portal. After a few moments, users are assigned the role at the Speech resource scope, which gives them access to this Speech resource. If users don't receive the invitation email, you can search for their account under **Role assignments** and go into their profile. Look for **Identity** > **Invitation accepted**, and select **(manage)** to resend the email invitation. You can also copy and send the invitation link to them.
-
-Users now visit or refresh the [Audio Content Creation](https://aka.ms/audiocontentcreation) product page and sign in with their Microsoft account. They select the **Audio Content Creation** block among the Speech products. They choose the Speech resource in the pop-up window or in the settings at the upper right.
-
-If they can't find the available Speech resource, they can check to ensure that they're in the right directory. To do so, they select the account profile at the upper right and then select **Switch** next to **Current directory**. If there's more than one directory available, it means they have access to multiple directories. They can switch to different directories and go to **Settings** to see whether the right Speech resource is available.
-
-Users who are in the same Speech resource will see each other's work in the Audio Content Creation tool. If you want each individual user to have a unique and private workplace in Audio Content Creation, [create a new Speech resource](#step-2-create-a-speech-resource) for each user and give each user access only to their own Speech resource.
-
-### Remove users from a Speech resource
-
-1. Search for **Cognitive services** in the Azure portal, and then select the Speech resource that you want to remove users from.
-1. Select **Access control (IAM)**, and then select the **Role assignments** tab to view all the role assignments for this Speech resource.
-1. Select the users you want to remove, select **Remove**, and then select **OK**.
-
- :::image type="content" source="media/audio-content-creation/remove-user.png" alt-text="Screenshot of the 'Remove' button on the 'Remove role assignments' pane.":::
-
-### Enable users to grant access to others
-
-If you want to allow a user to grant access to other users, you need to assign them the owner role for the Speech resource and set the user as the Azure directory reader.
-1. Add the user as the owner of the Speech resource. For more information, see [Add users to a Speech resource](#add-users-to-a-speech-resource).
-
- :::image type="content" source="media/audio-content-creation/add-role.png" alt-text="Screenshot showing the 'Owner' role on the 'Add role assignment' pane. ":::
-
-1. In the [Azure portal](https://portal.azure.com/), select the collapsed menu at the upper left, select **Azure Active Directory**, and then select **Users**.
-1. Search for the user's Microsoft account, go to their detail page, and then select **Assigned roles**.
-1. Select **Add assignments** > **Directory Readers**. If the **Add assignments** button is unavailable, it means that you don't have access. Only the global administrator of this directory can add assignments to users.
-
-## Next steps
-- [Speech Synthesis Markup Language (SSML)](speech-synthesis-markup.md)
-- [Batch synthesis](batch-synthesis.md)
cognitive-services How To Configure Azure Ad Auth https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/how-to-configure-azure-ad-auth.md
- Title: How to configure Azure Active Directory Authentication-
-description: Learn how to authenticate using Azure Active Directory Authentication
------ Previously updated : 06/18/2021-
-zone_pivot_groups: programming-languages-set-two
--
-# Azure Active Directory Authentication with the Speech SDK
-
-When using the Speech SDK to access the Speech service, there are three authentication methods available: service keys, a key-based token, and Azure Active Directory (Azure AD). This article describes how to configure a Speech resource and create a Speech SDK configuration object to use Azure AD for authentication.
-
-This article shows how to use Azure AD authentication with the Speech SDK. You'll learn how to:
-
-> [!div class="checklist"]
->
-> - Create a Speech resource
-> - Configure the Speech resource for Azure AD authentication
-> - Get an Azure AD access token
-> - Create the appropriate SDK configuration object.
-
-To learn more about Azure AD access tokens, including token lifetime, visit [Access tokens in the Microsoft identity platform](/azure/active-directory/develop/access-tokens).
-
-## Create a Speech resource
-To create a Speech resource in the [Azure portal](https://portal.azure.com), see [Get the keys for your resource](~/articles/cognitive-services/cognitive-services-apis-create-account.md#get-the-keys-for-your-resource).
-
-## Configure the Speech resource for Azure AD authentication
-
-To configure your Speech resource for Azure AD authentication, create a custom domain name and assign roles.
-
-### Create a custom domain name
-
-### Assign roles
-For Azure AD authentication with Speech resources, you need to assign either the *Cognitive Services Speech Contributor* or *Cognitive Services Speech User* role.
-
-You can assign roles to the user or application using the [Azure portal](../../role-based-access-control/role-assignments-portal.md) or [PowerShell](../../role-based-access-control/role-assignments-powershell.md).
-
-## Get an Azure AD access token
-To get an Azure AD access token in C#, use the [Azure Identity Client Library](/dotnet/api/overview/azure/identity-readme).
-
-Here's an example of using Azure Identity to get an Azure AD access token from an interactive browser:
-```c#
-TokenRequestContext context = new Azure.Core.TokenRequestContext(new string[] { "https://cognitiveservices.azure.com/.default" });
-InteractiveBrowserCredential browserCredential = new InteractiveBrowserCredential();
-var browserToken = browserCredential.GetToken(context);
-string aadToken = browserToken.Token;
-```
-The token context must be set to "https://cognitiveservices.azure.com/.default".
-
-To get an Azure AD access token in C++, use the [Azure Identity Client Library](https://github.com/Azure/azure-sdk-for-cpp/tree/main/sdk/identity/azure-identity).
-
-Here's an example of using Azure Identity to get an Azure AD access token with your tenant ID, client ID, and client secret credentials:
-```cpp
-const std::string tenantId = "Your Tenant ID";
-const std::string clientId = "Your Client ID";
-const std::string clientSecret = "Your Client Secret";
-const std::string tokenContext = "https://cognitiveservices.azure.com/.default";
-
-Azure::Identity::ClientSecretCredential cred(tenantId,
- clientId,
- clientSecret,
- Azure::Identity::ClientSecretCredentialOptions());
-
-Azure::Core::Credentials::TokenRequestContext context;
-context.Scopes.push_back(tokenContext);
-
-auto token = cred.GetToken(context, Azure::Core::Context());
-```
-The token context must be set to "https://cognitiveservices.azure.com/.default".
--
-To get an Azure AD access token in Java, use the [Azure Identity Client Library](/java/api/overview/azure/identity-readme).
-
-Here's an example of using Azure Identity to get an Azure AD access token from a browser:
-```java
-TokenRequestContext context = new TokenRequestContext();
-context.addScopes("https://cognitiveservices.azure.com/.default");
-
-InteractiveBrowserCredentialBuilder builder = new InteractiveBrowserCredentialBuilder();
-InteractiveBrowserCredential browserCredential = builder.build();
-
-AccessToken browserToken = browserCredential.getToken(context).block();
-String token = browserToken.getToken();
-```
-The token context must be set to "https://cognitiveservices.azure.com/.default".
-
-To get an Azure AD access token in Python, use the [Azure Identity Client Library](/python/api/overview/azure/identity-readme).
-
-Here's an example of using Azure Identity to get an Azure AD access token from an interactive browser:
-```Python
-from azure.identity import InteractiveBrowserCredential
-ibc = InteractiveBrowserCredential()
-aadToken = ibc.get_token("https://cognitiveservices.azure.com/.default")
-```
-
-Find samples that get an Azure AD access token in [Microsoft identity platform code samples](../../active-directory/develop/sample-v2-code.md).
-
-For programming languages where a Microsoft identity platform client library isn't available, you can directly [request an access token](../../active-directory/develop/v2-oauth-ropc.md).
-
-## Get the Speech resource ID
-
-You need your Speech resource ID to make SDK calls using Azure AD authentication.
-
-> [!NOTE]
-> For Intent Recognition, use your LUIS Prediction resource ID.
-
-# [Azure portal](#tab/portal)
-
-To get the resource ID in the Azure portal:
-
-1. Go to the [Azure portal](https://portal.azure.com/) and sign in to your Azure account.
-1. Select a Speech resource.
-1. In the **Resource Management** group on the left pane, select **Properties**.
-1. Copy the **Resource ID**.
-
-# [PowerShell](#tab/powershell)
-
-To get the resource ID using PowerShell, confirm that you have PowerShell version 7.x or later with the Azure PowerShell module version 5.1.0 or later. To see the versions of these tools, follow these steps:
-
-1. In a PowerShell window, enter:
-
- `$PSVersionTable`
-
- Confirm that the `PSVersion` value is 7.x or later. To upgrade PowerShell, follow the instructions at [Installing various versions of PowerShell](/powershell/scripting/install/installing-powershell).
-
-1. In a PowerShell window, enter:
-
- `Get-Module -ListAvailable Az`
-
- If nothing appears, or if that version of the Azure PowerShell module is earlier than 5.1.0, follow the instructions at [Install the Azure PowerShell module](/powershell/azure/install-azure-powershell) to upgrade.
-
-Now run `Connect-AzAccount` to create a connection with Azure.
--
-```azurepowershell
-Connect-AzAccount
-$subscriptionId = "Your Azure subscription Id"
-$resourceGroup = "Resource group name where Speech resource is located"
-$speechResourceName = "Your Speech resource name"
-
-# Select the Azure subscription that contains the Speech resource.
-# You can skip this step if your Azure account has only one active subscription.
-Set-AzContext -SubscriptionId $subscriptionId
-
-# Get the Speech resource
-$resource = Get-AzCognitiveServicesAccount -Name $speechResourceName -ResourceGroupName $resourceGroup
-
-# Get the resource ID:
-$resourceId = $resource.Id
-```
-***
-
-## Create the Speech SDK configuration object
-
-With an Azure AD access token, you can now create a Speech SDK configuration object.
-
-The method of providing the token and constructing the corresponding Speech SDK ```Config``` object varies by the object you're using.
-
-### SpeechRecognizer, SpeechSynthesizer, IntentRecognizer, ConversationTranscriber
-
-For ```SpeechRecognizer```, ```SpeechSynthesizer```, ```IntentRecognizer```, ```ConversationTranscriber``` objects, build the authorization token from the resource ID and the Azure AD access token and then use it to create a ```SpeechConfig``` object.
-
-```C#
-string resourceId = "Your Resource ID";
-string aadToken = "Your Azure AD access token";
-string region = "Your Speech Region";
-
-// You need to include the "aad#" prefix and the "#" (hash) separator between resource ID and AAD access token.
-var authorizationToken = $"aad#{resourceId}#{aadToken}";
-var speechConfig = SpeechConfig.FromAuthorizationToken(authorizationToken, region);
-```
-
-```C++
-std::string resourceId = "Your Resource ID";
-std::string aadToken = "Your Azure AD access token";
-std::string region = "Your Speech Region";
-
-// You need to include the "aad#" prefix and the "#" (hash) separator between resource ID and AAD access token.
-auto authorizationToken = "aad#" + resourceId + "#" + aadToken;
-auto speechConfig = SpeechConfig::FromAuthorizationToken(authorizationToken, region);
-```
-
-```Java
-String resourceId = "Your Resource ID";
-String region = "Your Region";
-
-// You need to include the "aad#" prefix and the "#" (hash) separator between resource ID and AAD access token.
-String authorizationToken = "aad#" + resourceId + "#" + token;
-SpeechConfig speechConfig = SpeechConfig.fromAuthorizationToken(authorizationToken, region);
-```
-
-```Python
-resourceId = "Your Resource ID"
-region = "Your Region"
-# You need to include the "aad#" prefix and the "#" (hash) separator between resource ID and AAD access token.
-authorizationToken = "aad#" + resourceId + "#" + aadToken.token
-speechConfig = SpeechConfig(auth_token=authorizationToken, region=region)
-```
-
-### TranslationRecognizer
-
-For the ```TranslationRecognizer```, build the authorization token from the resource ID and the Azure AD access token and then use it to create a ```SpeechTranslationConfig``` object.
-
-```C#
-string resourceId = "Your Resource ID";
-string aadToken = "Your Azure AD access token";
-string region = "Your Speech Region";
-
-// You need to include the "aad#" prefix and the "#" (hash) separator between resource ID and AAD access token.
-var authorizationToken = $"aad#{resourceId}#{aadToken}";
-var speechConfig = SpeechTranslationConfig.FromAuthorizationToken(authorizationToken, region);
-```
-
-```cpp
-std::string resourceId = "Your Resource ID";
-std::string aadToken = "Your Azure AD access token";
-std::string region = "Your Speech Region";
-
-// You need to include the "aad#" prefix and the "#" (hash) separator between resource ID and AAD access token.
-auto authorizationToken = "aad#" + resourceId + "#" + aadToken;
-auto speechConfig = SpeechTranslationConfig::FromAuthorizationToken(authorizationToken, region);
-```
-
-```Java
-String resourceId = "Your Resource ID";
-String region = "Your Region";
-
-// You need to include the "aad#" prefix and the "#" (hash) separator between resource ID and AAD access token.
-String authorizationToken = "aad#" + resourceId + "#" + token;
-SpeechTranslationConfig translationConfig = SpeechTranslationConfig.fromAuthorizationToken(authorizationToken, region);
-```
-
-```Python
-resourceId = "Your Resource ID"
-region = "Your Region"
-
-# You need to include the "aad#" prefix and the "#" (hash) separator between resource ID and AAD access token.
-authorizationToken = "aad#" + resourceId + "#" + aadToken.token
-translationConfig = SpeechTranslationConfig(auth_token=authorizationToken, region=region)
-```
-
-### DialogServiceConnector
-
-For the ```DialogServiceConnection``` object, build the authorization token from the resource ID and the Azure AD access token and then use it to create a ```CustomCommandsConfig``` or a ```BotFrameworkConfig``` object.
-
-```C#
-string resourceId = "Your Resource ID";
-string aadToken = "Your Azure AD access token";
-string region = "Your Speech Region";
-string appId = "Your app ID";
-
-// You need to include the "aad#" prefix and the "#" (hash) separator between resource ID and AAD access token.
-var authorizationToken = $"aad#{resourceId}#{aadToken}";
-var customCommandsConfig = CustomCommandsConfig.FromAuthorizationToken(appId, authorizationToken, region);
-```
-
-```cpp
-std::string resourceId = "Your Resource ID";
-std::string aadToken = "Your Azure AD access token";
-std::string region = "Your Speech Region";
-std::string appId = "Your app Id";
-
-// You need to include the "aad#" prefix and the "#" (hash) separator between resource ID and AAD access token.
-auto authorizationToken = "aad#" + resourceId + "#" + aadToken;
-auto customCommandsConfig = CustomCommandsConfig::FromAuthorizationToken(appId, authorizationToken, region);
-```
-
-```Java
-String resourceId = "Your Resource ID";
-String region = "Your Region";
-String appId = "Your AppId";
-
-// You need to include the "aad#" prefix and the "#" (hash) separator between resource ID and AAD access token.
-String authorizationToken = "aad#" + resourceId + "#" + token;
-CustomCommandsConfig dialogServiceConfig = CustomCommandsConfig.fromAuthorizationToken(appId, authorizationToken, region);
-```
-
-The ```DialogServiceConnector``` isn't currently supported in Python.
-
-### VoiceProfileClient
-To use the ```VoiceProfileClient``` with Azure AD authentication, use the custom domain name created above.
-
-```C#
-string customDomainName = "Your Custom Name";
-string hostName = $"https://{customDomainName}.cognitiveservices.azure.com/";
-string resourceId = "Your Resource ID";
-string aadToken = "Your Azure AD access token";
-
-var config = SpeechConfig.FromHost(new Uri(hostName));
-
-// You need to include the "aad#" prefix and the "#" (hash) separator between resource ID and AAD access token.
-var authorizationToken = $"aad#{resourceId}#{aadToken}";
-config.AuthorizationToken = authorizationToken;
-```
-
-```cpp
-std::string customDomainName = "Your Custom Name";
-std::string resourceId = "Your Resource ID";
-std::string aadToken = "Your Azure AD access token";
-
-auto speechConfig = SpeechConfig::FromHost("https://" + customDomainName + ".cognitiveservices.azure.com/");
-
-// You need to include the "aad#" prefix and the "#" (hash) separator between resource ID and AAD access token.
-auto authorizationToken = "aad#" + resourceId + "#" + aadToken;
-speechConfig->SetAuthorizationToken(authorizationToken);
-```
-
-```Java
-String resourceId = "Your Resource ID";
-String aadToken = "Your Azure AD access token";
-String customDomainName = "Your Custom Name";
-String hostName = "https://" + customDomainName + ".cognitiveservices.azure.com/";
-SpeechConfig speechConfig = SpeechConfig.fromHost(new URI(hostName));
-
-// You need to include the "aad#" prefix and the "#" (hash) separator between resource ID and AAD access token.
-String authorizationToken = "aad#" + resourceId + "#" + aadToken;
-
-speechConfig.setAuthorizationToken(authorizationToken);
-```
-
-The ```VoiceProfileClient``` isn't available with the Speech SDK for Python.
-
-> [!NOTE]
-> The ```ConversationTranslator``` doesn't support Azure AD authentication.
cognitive-services How To Configure Openssl Linux https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/how-to-configure-openssl-linux.md
- Title: How to configure OpenSSL for Linux-
-description: Learn how to configure OpenSSL for Linux.
------- Previously updated : 06/22/2022-
-zone_pivot_groups: programming-languages-set-three
---
-# Configure OpenSSL for Linux
-
-With the Speech SDK, [OpenSSL](https://www.openssl.org) is dynamically configured to the host-system version.
-
-> [!NOTE]
-> This article is only applicable where the Speech SDK is [supported on Linux](speech-sdk.md#supported-languages).
-
-To ensure connectivity, verify that OpenSSL certificates are installed on your system. Run the following command:
-```bash
-openssl version -d
-```
-
-The output on Ubuntu/Debian based systems should be:
-```
-OPENSSLDIR: "/usr/lib/ssl"
-```
-
-Check whether there's a `certs` subdirectory under OPENSSLDIR. In the example above, it would be `/usr/lib/ssl/certs`.
-
-* If `/usr/lib/ssl/certs` exists and contains many individual certificate files (with a `.crt` or `.pem` extension), no further action is needed.
-
-* If OPENSSLDIR is something other than `/usr/lib/ssl` or there's a single certificate bundle file instead of multiple individual files, you need to set an appropriate SSL environment variable to indicate where the certificates can be found.
-
-## Examples
-Here are some example environment variables to configure per OpenSSL directory.
--- OPENSSLDIR is `/opt/ssl`. There's a `certs` subdirectory with many `.crt` or `.pem` files.
-Set the environment variable `SSL_CERT_DIR` to point at `/opt/ssl/certs` before using the Speech SDK. For example:
-```bash
-export SSL_CERT_DIR=/opt/ssl/certs
-```
--- OPENSSLDIR is `/etc/pki/tls` (like on RHEL/CentOS based systems). There's a `certs` subdirectory with a certificate bundle file, for example `ca-bundle.crt`.
-Set the environment variable `SSL_CERT_FILE` to point at that file before using the Speech SDK. For example:
-```bash
-export SSL_CERT_FILE=/etc/pki/tls/certs/ca-bundle.crt
-```
-
-## Certificate revocation checks
-
-When the Speech SDK connects to the Speech service, it checks the Transport Layer Security (TLS/SSL) certificate. The Speech SDK verifies that the certificate reported by the remote endpoint is trusted and hasn't been revoked. This verification provides a layer of protection against attacks involving spoofing and other related vectors. The check is accomplished by retrieving a certificate revocation list (CRL) from a certificate authority (CA) used by Azure. A list of Azure CA download locations for updated TLS CRLs can be found in [this document](../../security/fundamentals/tls-certificate-changes.md).
-
-If a destination posing as the Speech service reports a certificate that's been revoked in a retrieved CRL, the SDK will terminate the connection and report an error via a `Canceled` event. The authenticity of a reported certificate can't be checked without an updated CRL. Therefore, the Speech SDK will also treat a failure to download a CRL from an Azure CA location as an error.
-
-> [!WARNING]
-> If your solution uses a proxy or firewall, it should be configured to allow access to all certificate revocation list URLs used by Azure. Note that many of these URLs are outside the `microsoft.com` domain, so allowing access to `*.microsoft.com` isn't enough. See [this document](../../security/fundamentals/tls-certificate-changes.md) for details. In exceptional cases, you can ignore CRL failures (see [the corresponding section](#bypassing-or-ignoring-crl-failures)), but such a configuration is strongly discouraged, especially for production scenarios.
-
-### Large CRL files (>10 MB)
-
-One cause of CRL-related failures is the use of large CRL files. This class of error is typically only applicable to special environments with extended CA chains. Standard public endpoints shouldn't encounter this class of issue.
-
-The default maximum CRL size used by the Speech SDK (10 MB) can be adjusted per config object. The property key for this adjustment is `CONFIG_MAX_CRL_SIZE_KB` and the value, specified as a string, is by default "10000" (10 MB). For example, when creating a `SpeechRecognizer` object (that manages a connection to the Speech service), you can set this property in its `SpeechConfig`. In the snippet below, the configuration is adjusted to permit a CRL file size up to 15 MB.
--
-```csharp
-config.SetProperty("CONFIG_MAX_CRL_SIZE_KB"", "15000");
-```
---
-```cpp
-config->SetProperty("CONFIG_MAX_CRL_SIZE_KB", "15000");
-```
---
-```java
-config.setProperty("CONFIG_MAX_CRL_SIZE_KB", "15000");
-```
---
-```python
-speech_config.set_property_by_name("CONFIG_MAX_CRL_SIZE_KB", "15000")
-```
---
-```go
-speechConfig.properties.SetPropertyByString("CONFIG_MAX_CRL_SIZE_KB", "15000")
-```
--
-### Bypassing or ignoring CRL failures
-
-If an environment can't be configured to access an Azure CA location, the Speech SDK will never be able to retrieve an updated CRL. You can configure the SDK either to continue and log download failures or to bypass all CRL checks.
-
-> [!WARNING]
-> CRL checks are a security measure and bypassing them increases susceptibility to attacks. They should not be bypassed without thorough consideration of the security implications and alternative mechanisms for protecting against the attack vectors that CRL checks mitigate.
-
-To continue with the connection when a CRL can't be retrieved, set the property `"OPENSSL_CONTINUE_ON_CRL_DOWNLOAD_FAILURE"` to `"true"`. An attempt will still be made to retrieve a CRL and failures will still be emitted in logs, but connection attempts will be allowed to continue.
--
-```csharp
-config.SetProperty("OPENSSL_CONTINUE_ON_CRL_DOWNLOAD_FAILURE", "true");
-```
---
-```cpp
-config->SetProperty("OPENSSL_CONTINUE_ON_CRL_DOWNLOAD_FAILURE", "true");
-```
---
-```java
-config.setProperty("OPENSSL_CONTINUE_ON_CRL_DOWNLOAD_FAILURE", "true");
-```
---
-```python
-speech_config.set_property_by_name("OPENSSL_CONTINUE_ON_CRL_DOWNLOAD_FAILURE", "true")
-```
---
-```go
-
-speechConfig.properties.SetPropertyByString("OPENSSL_CONTINUE_ON_CRL_DOWNLOAD_FAILURE", "true")
-```
--
-To turn off certificate revocation checks, set the property `"OPENSSL_DISABLE_CRL_CHECK"` to `"true"`. Then, while connecting to the Speech service, there will be no attempt to check or download a CRL and no automatic verification of a reported TLS/SSL certificate.
--
-```csharp
-config.SetProperty("OPENSSL_DISABLE_CRL_CHECK", "true");
-```
---
-```cpp
-config->SetProperty("OPENSSL_DISABLE_CRL_CHECK", "true");
-```
---
-```java
-config.setProperty("OPENSSL_DISABLE_CRL_CHECK", "true");
-```
---
-```python
-speech_config.set_property_by_name("OPENSSL_DISABLE_CRL_CHECK", "true")
-```
---
-```go
-speechConfig.properties.SetPropertyByString("OPENSSL_DISABLE_CRL_CHECK", "true")
-```
--
-### CRL caching and performance
-
-By default, the Speech SDK will cache a successfully downloaded CRL on disk to improve the initial latency of future connections. When no cached CRL is present or when the cached CRL is expired, a new list will be downloaded.
-
-Some Linux distributions don't have a `TMP` or `TMPDIR` environment variable defined, so the Speech SDK won't cache downloaded CRLs. Without a `TMP` or `TMPDIR` environment variable defined, the Speech SDK downloads a new CRL for each connection. To improve initial connection performance in this situation, you can [create a `TMPDIR` environment variable and set it to the accessible path of a temporary directory](https://help.ubuntu.com/community/EnvironmentVariables).
-
-## Next steps
-- [Speech SDK overview](speech-sdk.md)
-- [Install the Speech SDK](quickstarts/setup-platform.md)
cognitive-services How To Configure Rhel Centos 7 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/how-to-configure-rhel-centos-7.md
- Title: How to configure RHEL/CentOS 7 - Speech service-
-description: Learn how to configure RHEL/CentOS 7 so that the Speech SDK can be used.
------ Previously updated : 04/01/2022---
-# Configure RHEL/CentOS 7
-
-To use the Speech SDK on Red Hat Enterprise Linux (RHEL) 7 x64 and CentOS 7 x64, update the C++ compiler (for C++ development) and the shared C++ runtime library on your system.
-
-## Install dependencies
-
-First install all general dependencies:
-
-```bash
-sudo rpm -Uvh https://packages.microsoft.com/config/rhel/7/packages-microsoft-prod.rpm
-
-# Install development tools and libraries
-sudo yum update -y
-sudo yum groupinstall -y "Development tools"
-sudo yum install -y alsa-lib dotnet-sdk-2.1 java-1.8.0-openjdk-devel openssl
-sudo yum install -y gstreamer1 gstreamer1-plugins-base gstreamer1-plugins-good gstreamer1-plugins-bad-free gstreamer1-plugins-ugly-free
-```
-
-## C/C++ compiler and runtime libraries
-
-Install the prerequisite packages with this command:
-
-```bash
-sudo yum install -y gmp-devel mpfr-devel libmpc-devel
-```
-
-Next update the compiler and runtime libraries:
-
-```bash
-# Build GCC 7.5.0 and runtimes and install them under /usr/local
-curl https://ftp.gnu.org/gnu/gcc/gcc-7.5.0/gcc-7.5.0.tar.gz -O
-tar -xf gcc-7.5.0.tar.gz
-mkdir gcc-7.5.0-build && cd gcc-7.5.0-build
-../gcc-7.5.0/configure --enable-languages=c,c++ --disable-bootstrap --disable-multilib --prefix=/usr/local
-make -j$(nproc)
-sudo make install-strip
-```
-
-If the updated compiler and libraries need to be deployed on several machines, you can copy them from `/usr/local` to the other machines. If only the runtime libraries are needed, the files in `/usr/local/lib64` are enough.
-
-## Environment settings
-
-Run the following commands to complete the configuration:
-
-```bash
-# Add updated C/C++ runtimes to the library path
-# (this is required for any development/testing with Speech SDK)
-export LD_LIBRARY_PATH=/usr/local/lib64:$LD_LIBRARY_PATH
-
-# For C++ development only:
-# - add the updated compiler to PATH
-# (note, /usr/local/bin should be already first in PATH on vanilla systems)
-# - add Speech SDK libraries from the Linux tar package to LD_LIBRARY_PATH
-# (note, use the actual path to extracted files!)
-export PATH=/usr/local/bin:$PATH
-hash -r # reset cached paths in the current shell session just in case
-export LD_LIBRARY_PATH=/path/to/extracted/SpeechSDK-Linux-<version>/lib/centos7-x64:$LD_LIBRARY_PATH
-```
-
-> [!NOTE]
-> The Linux .tar package contains specific libraries for RHEL/CentOS 7. These are in `lib/centos7-x64` as shown in the environment setting example for `LD_LIBRARY_PATH` above. Speech SDK libraries in `lib/x64` are for all the other supported Linux x64 distributions (including RHEL/CentOS 8) and don't work on RHEL/CentOS 7.
-
-## Next steps
-
-> [!div class="nextstepaction"]
-> [About the Speech SDK](speech-sdk.md)
cognitive-services How To Control Connections https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/how-to-control-connections.md
- Title: Service connectivity how-to - Speech SDK-
-description: Learn how to monitor for connection status and manually connect or disconnect from the Speech service.
------ Previously updated : 04/12/2021-
-zone_pivot_groups: programming-languages-set-thirteen
---
-# How to monitor and control service connections with the Speech SDK
-
-`SpeechRecognizer` and other objects in the Speech SDK automatically connect to the Speech service when it's appropriate. Sometimes, you may either want extra control over when connections begin and end or want more information about when the Speech SDK establishes or loses its connection. The supporting `Connection` class provides this capability.
-
-## Retrieve a Connection object
-
-A `Connection` can be obtained from most top-level Speech SDK objects via a static `From...` factory method, for example, `Connection::FromRecognizer(recognizer)` for `SpeechRecognizer`.
--
-```csharp
-var connection = Connection.FromRecognizer(recognizer);
-```
---
-```cpp
-auto connection = Connection::FromRecognizer(recognizer);
-```
---
-```java
-Connection connection = Connection.fromRecognizer(recognizer);
-```
--
-## Monitor for connections and disconnections
-
-A `Connection` raises `Connected` and `Disconnected` events when the corresponding status change happens in the Speech SDK's connection to the Speech service. You can listen to these events to know the latest connection state.
--
-```csharp
-connection.Connected += (sender, connectedEventArgs) =>
-{
- Console.WriteLine($"Successfully connected, sessionId: {connectedEventArgs.SessionId}");
-};
-connection.Disconnected += (sender, disconnectedEventArgs) =>
-{
- Console.WriteLine($"Disconnected, sessionId: {disconnectedEventArgs.SessionId}");
-};
-```
---
-```cpp
-connection->Connected += [&](ConnectionEventArgs& args)
-{
- std::cout << "Successfully connected, sessionId: " << args.SessionId << std::endl;
-};
-connection->Disconnected += [&](ConnectionEventArgs& args)
-{
- std::cout << "Disconnected, sessionId: " << args.SessionId << std::endl;
-};
-```
---
-```java
-connection.connected.addEventListener((s, connectionEventArgs) -> {
- System.out.println("Successfully connected, sessionId: " + connectionEventArgs.getSessionId());
-});
-connection.disconnected.addEventListener((s, connectionEventArgs) -> {
- System.out.println("Disconnected, sessionId: " + connectionEventArgs.getSessionId());
-});
-```
--
-## Connect and disconnect
-
-`Connection` has explicit methods to start or end a connection to the Speech service. Reasons you may want to control the connection include:
-- Preconnecting to the Speech service to allow the first interaction to start as quickly as possible
-- Establishing a connection at a specific time in your application's logic to gracefully and predictably handle initial connection failures
-- Disconnecting to clear an idle connection when you don't expect immediate reconnection but also don't want to destroy the object
-
-Some important notes on the behavior when manually modifying connection state:
-- Trying to connect when already connected does nothing and doesn't generate an error. Monitor the `Connected` and `Disconnected` events if you want to know the current state of the connection.
-- A failure to connect that originates from a problem with no involvement of the Speech service, such as attempting to connect from an invalid state, throws or returns an error as appropriate to the programming language. Failures that require network resolution, such as authentication failures, don't throw or return an error but instead generate a `Canceled` event on the top-level object the `Connection` was created from.
-- Manually disconnecting from the Speech service during an ongoing interaction results in a connection error and loss of data for that interaction. Connection errors are surfaced on the appropriate top-level object's `Canceled` event.
--
-```csharp
-try
-{
- connection.Open(forContinuousRecognition: false);
-}
-catch (ApplicationException ex)
-{
- Console.WriteLine($"Couldn't pre-connect. Details: {ex.Message}");
-}
-// ... Use the SpeechRecognizer
-connection.Close();
-```
---
-```cpp
-try
-{
- connection->Open(false);
-}
-catch (std::runtime_error&)
-{
- std::cout << "Unable to pre-connect." << std::endl;
-}
-// ... Use the SpeechRecognizer
-connection->Close();
-```
---
-```java
-try {
- connection.openConnection(false);
-} catch (Exception ex) {
- System.out.println("Unable to pre-connect.");
-}
-// ... Use the SpeechRecognizer
-connection.closeConnection();
-```
--
-## Next steps
-
-> [!div class="nextstepaction"]
-> [Explore our samples on GitHub](https://aka.ms/csspeech/samples)
cognitive-services How To Custom Commands Debug Build Time https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/how-to-custom-commands-debug-build-time.md
- Title: 'Debug errors when authoring a Custom Commands application (Preview)'-
-description: In this article, you learn how to debug errors when authoring Custom Commands application.
------ Previously updated : 06/18/2020----
-# Debug errors when authoring a Custom Commands application
--
-This article describes how to debug errors that you may see while building a Custom Commands application.
-
-## Errors when creating an application
-Custom Commands also creates an application in [LUIS](https://www.luis.ai/) when creating a Custom Commands application.
-
-[LUIS limits 500 applications per authoring resource](../luis/luis-limits.md). Creation of the LUIS application can fail if you're using an authoring resource that already has 500 applications.
-
-Make sure the selected LUIS authoring resource has fewer than 500 applications. If not, you can create a new LUIS authoring resource, switch to another one, or clean up your LUIS applications.
-
-## Errors when deleting an application
-### Can't delete LUIS application
-When deleting a Custom Commands application, Custom Commands may also try to delete the LUIS application associated with the Custom Commands application.
-
-If the deletion of the LUIS application fails, go to your [LUIS](https://www.luis.ai/) account to delete it manually.
-
-### TooManyRequests
-When you try to delete a large number of applications all at once, you're likely to see 'TooManyRequests' errors. These errors mean your deletion requests are being throttled by Azure.
-
-Refresh your page and try to delete fewer applications.
-
-## Errors when modifying an application
-
-### Can't delete a parameter
-You can't delete a parameter while it's being used.
-Remove any references to the parameter in speech responses, sample sentences, conditions, and actions, and then try again.
-
-### Can't delete a Web Endpoint
-You can't delete a Web Endpoint while it's being used.
-Remove any **Call Web Endpoint** action that uses this Web Endpoint before removing the Web Endpoint.
-
-## Errors when training an application
-### Built-In intents
-LUIS has built-in Yes/No intents. Sample sentences that contain only "yes" or "no" (or their variations) cause training to fail.
-
-| Keyword | Variations |
-| - | - |
-| Yes | Sure, OK |
-| No | Nope, Not |
-
-### Common sample sentences
-Custom Commands does not allow common sample sentences shared among different commands. The training of an application could fail if some sample sentences in one command are already defined in another command.
-
-Make sure you don't have common sample sentences shared among different commands.
-
-For best practices on balancing your sample sentences across different commands, refer to [LUIS best practices](../luis/luis-concept-best-practices.md).
-
-### Empty sample sentences
-You need to have at least one sample sentence for each Command.
-
-### Undefined parameter in sample sentences
-One or more parameters are used in the sample sentences but not defined.
-
-### Training takes too long
-LUIS training is designed to learn quickly from a small number of examples. Don't add too many example sentences.
-
-If you have many example sentences that are similar, define a parameter, abstract them into a pattern, and add the pattern to your example sentences.
-
-For example, you can define a parameter {vehicle} for the example sentences below, and only add "Book a {vehicle}" to Example Sentences.
-
-| Example sentences | Pattern |
-| - | - |
-| Book a car | Book a {vehicle} |
-| Book a flight | Book a {vehicle} |
-| Book a taxi | Book a {vehicle} |
-
-For best practices on LUIS training, refer to [LUIS best practices](../luis/luis-concept-best-practices.md).
-
-## Can't update LUIS key
-### Reassign to E0 authoring resource
-LUIS doesn't support reassigning a LUIS application to an E0 authoring resource.
-
-If you need to change your authoring resource from F0 to E0, or change to a different E0 resource, recreate the application.
-
-To quickly export an existing application and import it into a new application, refer to [Continuous Deployment with Azure DevOps](./how-to-custom-commands-deploy-cicd.md).
-
-### Save button is disabled
-If you've never assigned a LUIS prediction resource to your application, the Save button is disabled when you try to change your authoring resource without adding a prediction resource.
-
-## Next steps
-
-> [!div class="nextstepaction"]
-> [See samples on GitHub](https://aka.ms/speech/cc-samples)
cognitive-services How To Custom Commands Debug Runtime https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/how-to-custom-commands-debug-runtime.md
- Title: 'Troubleshooting guide for a Custom Commands application at runtime'-
-description: In this article, you learn how to debug runtime errors in a Custom Commands application.
------ Previously updated : 06/18/2020----
-# Troubleshoot a Custom Commands application at runtime
--
-This article describes how to debug errors that you may see while running a Custom Commands application.
-
-## Connection failed
-
-If you run your Custom Commands application from a [client application (with Speech SDK)](./how-to-custom-commands-setup-speech-sdk.md) or the [Windows Voice Assistant Client](./how-to-custom-commands-developer-flow-test.md), you may experience the following connection errors:
-
-| Error code | Details |
-| - | -- |
-| [401](#error-401) | AuthenticationFailure: WebSocket Upgrade failed with an authentication error |
-| [1002](#error-1002) | The server returned status code '404' when status code '101' was expected. |
-
-### Error 401
-- The region specified in the client application doesn't match the region of the Custom Commands application
-
-- The Speech resource key is invalid
-
- Make sure your speech resource key is correct.
-
-### Error 1002
-- Your custom command application isn't published
-
- Publish your application in the portal.
-- Your Custom Commands application ID isn't valid
-
-  Make sure your Custom Commands application ID is correct.
-
-- Your Custom Commands application is outside your Speech resource
-
- Make sure the custom command application is created under your speech resource.
-
-For more information on troubleshooting connection issues, see [Windows Voice Assistant Client Troubleshooting](https://github.com/Azure-Samples/Cognitive-Services-Voice-Assistant/tree/master/clients/csharp-wpf#troubleshooting).
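-
-As a quick sanity check for both errors above, verify the values your client passes to the Speech SDK. The following is a minimal C# sketch using `CustomCommandsConfig.FromSubscription`, the same API used in the [client application (with Speech SDK)](./how-to-custom-commands-setup-speech-sdk.md) setup; the placeholder values are assumptions you replace with your own:
-
-```csharp
-// All three values must refer to the same *published* Custom Commands application:
-// the App ID from the publish notification, the key of the Speech resource the
-// application was created under, and the region of that Speech resource.
-const string appId = "YourApplicationId";          // placeholder
-const string speechKey = "YourSpeechResourceKey";  // placeholder
-const string region = "YourServiceRegion";         // placeholder, for example "westus2"
-
-var speechCommandsConfig = CustomCommandsConfig.FromSubscription(appId, speechKey, region);
-var connector = new DialogServiceConnector(speechCommandsConfig);
-```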
--
-## Dialog is canceled
-
-When your Custom Commands application is running, the dialog is canceled when certain errors occur.
-- If you're testing the application in the portal, it directly displays the cancellation description and plays an error earcon.
-
-- If you're running the application with the [Windows Voice Assistant Client](./how-to-custom-commands-developer-flow-test.md), it plays an error earcon. You can find the **Event: CancelledDialog** under the **Activity Logs**.
-
-- If you're following our client application example [client application (with Speech SDK)](./how-to-custom-commands-setup-speech-sdk.md), it plays an error earcon. You can find the **Event: CancelledDialog** under the **Status**.
-
-- If you're building your own client application, you can design your own logic to handle the CancelledDialog events.
-
-The CancelledDialog event consists of cancellation code and description, as listed below:
-
-| Cancellation Code | Cancellation Description |
-| - | - |
-| [MaxTurnThresholdReached](#no-progress-was-made-after-the-max-number-of-turns-allowed) | No progress was made after the max number of turns allowed |
-| [RecognizerQuotaExceeded](#recognizer-usage-quota-exceeded) | Recognizer usage quota exceeded |
-| [RecognizerConnectionFailed](#connection-to-the-recognizer-failed) | Connection to the recognizer failed |
-| [RecognizerUnauthorized](#this-application-cant-be-accessed-with-the-current-subscription) | This application can't be accessed with the current subscription |
-| [RecognizerInputExceededAllowedLength](#input-exceeds-the-maximum-supported-length) | Input exceeds the maximum supported length for the recognizer |
-| [RecognizerNotFound](#recognizer-not-found) | Recognizer not found |
-| [RecognizerInvalidQuery](#invalid-query-for-the-recognizer) | Invalid query for the recognizer |
-| [RecognizerError](#recognizer-returns-an-error) | Recognizer returns an error |
-
-### No progress was made after the max number of turns allowed
-The dialog is canceled when a required slot isn't successfully updated after a certain number of turns. The built-in maximum is three turns.
-
-### Recognizer usage quota exceeded
-Language Understanding (LUIS) has limits on resource usage. Usually the "Recognizer usage quota exceeded" error is caused by one of the following:
-- Your LUIS authoring resource exceeds the limit
-
- Add a prediction resource to your Custom Commands application:
- 1. Go to **Settings** > **LUIS resource**
- 1. Choose a prediction resource from **Prediction resource**, or select **Create new resource**
-- Your LUIS prediction resource exceeds the limit
-
-  If you're on an F0 prediction resource, it has a limit of 10,000 queries per month and 5 queries per second.
-
-For more details on LUIS resource limits, refer to [Language Understanding resource usage and limits](../luis/luis-limits.md#resource-usage-and-limits).
-
-### Connection to the recognizer failed
-Usually this means a transient connection failure to the Language Understanding (LUIS) recognizer. Try again and the issue should be resolved.
-
-### This application can't be accessed with the current subscription
-Your subscription isn't authorized to access the LUIS application.
-
-### Input exceeds the maximum supported length
-Your input has exceeded 500 characters. We allow at most 500 characters per input utterance.
-
-### Invalid query for the recognizer
-Your input has exceeded 500 characters. We allow at most 500 characters per input utterance.
-
-### Recognizer returns an error
-The LUIS recognizer returned an error when trying to recognize your input.
-
-### Recognizer not found
-Can't find the recognizer type specified in your custom commands dialog model. Currently, we only support [Language Understanding (LUIS) Recognizer](https://www.luis.ai/).
-
-## Other common errors
-### Unexpected response
-Unexpected responses can be caused by multiple things.
-A few checks to start with:
-- Yes/No intents in example sentences
-
-  We currently don't support Yes/No intents except when they're used with the confirmation feature. Yes/No intents defined in example sentences won't be detected.
-- Similar intents and example sentences among commands
-
-  LUIS recognition accuracy can be affected when two commands share similar intents and example sentences. Try to make each command's functionality and example sentences as distinct as possible.
-
- For best practices on improving recognition accuracy, refer to [LUIS best practices](../luis/luis-concept-best-practices.md).
--- Dialog is canceled
-
- Check the reasons for cancellation in the section above.
-
-### Error while rendering the template
-An undefined parameter is used in the speech response.
-
-### Object reference not set to an instance of an object
-You have an empty parameter in the JSON payload defined in the **Send Activity to Client** action.
-
-## Next steps
-
-> [!div class="nextstepaction"]
-> [See samples on GitHub](https://aka.ms/speech/cc-samples)
cognitive-services How To Custom Commands Deploy Cicd https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/how-to-custom-commands-deploy-cicd.md
- Title: 'Continuous Deployment with Azure DevOps (Preview)'-
-description: In this article, you learn how to set up continuous deployment for your Custom Commands applications. You create the scripts to support the continuous deployment workflows.
------ Previously updated : 06/18/2020----
-# Continuous Deployment with Azure DevOps
--
-In this article, you learn how to set up continuous deployment for your Custom Commands applications. The scripts to support the CI/CD workflow are provided to you.
-
-## Prerequisite
-> [!div class = "checklist"]
-> * A Custom Commands application for development (DEV)
-> * A Custom Commands application for production (PROD)
-> * Sign up for [Azure Pipelines](/azure/devops/pipelines/get-started/pipelines-sign-up)
-
-## Export/Import/Publish
-
-The scripts are hosted at [Voice Assistant - Custom Commands](https://github.com/Azure-Samples/Cognitive-Services-Voice-Assistant/tree/master/custom-commands). Clone the scripts in the bash directory to your repository. Make sure you maintain the same path.
-
-### Set up a pipeline
-
-1. Go to **Azure DevOps - Pipelines** and click "New Pipeline"
-1. In **Connect** section, select the location of your repository where these scripts are located
-1. In **Select** section, select your repository
-1. In **Configure** section, select "Starter pipeline"
-1. Next, you'll get an editor with a YAML file. Replace the "steps" section with this script.
-
- ```yaml
- steps:
- - task: Bash@3
- displayName: 'Export source app'
- inputs:
- targetType: filePath
- filePath: ./bash/export.sh
- arguments: '-r westus2 -s $(SubscriptionKey) -c $(Culture) -a $(SourceAppId) -f ExportedDialogModel.json'
- workingDirectory: bash
- failOnStderr: true
-
- - task: Bash@3
- displayName: 'Import to target app'
- inputs:
- targetType: filePath
- filePath: ./bash/import.sh
- arguments: '-r westus2 -s $(SubscriptionKey) -c $(Culture) -a $(TargetAppId) -f ExportedDialogModel.json'
- workingDirectory: bash
- failOnStderr: true
-
- - task: Bash@3
- displayName: 'Train and Publish target app'
- inputs:
- targetType: filePath
- filePath: './bash/train-and-publish.sh'
- arguments: '-r westus2 -s $(SubscriptionKey) -c $(Culture) -a $(TargetAppId)'
- workingDirectory: bash
- failOnStderr: true
- ```
-
-1. Note that these scripts assume that you're using the region `westus2`. If that's not the case, update the arguments of the tasks accordingly.
-
- > [!div class="mx-imgBorder"]
- > ![Screenshot that highlights the region value in the arguments.](media/custom-commands/cicd-new-pipeline-yaml.png)
-
-1. In the "Save and run" button, open the dropdown and click "Save"
-
-### Hook up the pipeline with your application
-
-1. Navigate to the main page of the pipeline.
-1. In the top-right corner dropdown, select **Edit pipeline**. It gets you to a YAML editor.
-1. In the top-right corner next to "Run" button, select **Variables**. Click **New variable**.
-1. Add these variables:
-
- | Variable | Description |
- | - | - |
- | SourceAppId | ID of the DEV application |
- | TargetAppId | ID of the PROD application |
- | SubscriptionKey | The key used for both applications |
- | Culture | Culture of the applications (en-us) |
-
- > [!div class="mx-imgBorder"]
- > ![Send Activity payload](media/custom-commands/cicd-edit-pipeline-variables.png)
-
-1. Click "Run" and then click in the "Job" running.
-
- You should see a list of tasks running that contains: "Export source app", "Import to target app" & "Train and Publish target app"
-
-## Deploy from source code
-
-If you want to keep the definition of your application in a repository, we provide scripts for deployment from source code. Since the scripts are written in Bash, if you're using Windows you'll need to install the [Windows Subsystem for Linux](/windows/wsl/install-win10).
-
-The scripts are hosted at [Voice Assistant - Custom Commands](https://github.com/Azure-Samples/Cognitive-Services-Voice-Assistant/tree/master/custom-commands). Clone the scripts in the bash directory to your repository. Make sure you maintain the same path.
-
-### Prepare your repository
-
-1. Create a directory for your application. In our example, create one called "apps".
-1. Update the arguments of the bash script below and run it. It exports the dialog model of your application to the file myapp.json.
- ```BASH
- bash/export.sh -r <region> -s <subscriptionkey> -c en-us -a <appid> -f apps/myapp.json
- ```
- | Arguments | Description |
- | - | - |
- | region | Your Speech resource region. For example: `westus2` |
- | subscriptionkey | Your Speech resource key. |
- | appid | The ID of the Custom Commands application you want to export. |
-
-1. Push these changes to your repository.
-
-### Set up a pipeline
-
-1. Go to **Azure DevOps - Pipelines** and click "New Pipeline"
-1. In **Connect** section, select the location of your repository where these scripts are located
-1. In **Select** section, select your repository
-1. In **Configure** section, select "Starter pipeline"
-1. Next, you'll get an editor with a YAML file. Replace the "steps" section with this script.
-
- ```yaml
- steps:
- - task: Bash@3
- displayName: 'Import app'
- inputs:
- targetType: filePath
- filePath: ./bash/import.sh
- arguments: '-r westus2 -s $(SubscriptionKey) -c $(Culture) -a $(TargetAppId) -f ../apps/myapp.json'
- workingDirectory: bash
- failOnStderr: true
-
- - task: Bash@3
- displayName: 'Train and Publish app'
- inputs:
- targetType: filePath
- filePath: './bash/train-and-publish.sh'
- arguments: '-r westus2 -s $(SubscriptionKey) -c $(Culture) -a $(TargetAppId)'
- workingDirectory: bash
- failOnStderr: true
- ```
-
- > [!NOTE]
- > These scripts assume that you're using the region `westus2`. If that's not the case, update the arguments of the tasks accordingly.
-
-1. In the "Save and run" button, open the dropdown and click "Save"
-
-### Hook up the pipeline with your target applications
-
-1. Navigate to the main page of the pipeline.
-1. In the top-right corner dropdown, select **Edit pipeline**. It gets you to a YAML editor.
-1. In the top-right corner next to "Run" button, select **Variables**. Click **New variable**.
-1. Add these variables:
-
- | Variable | Description |
- | - | - |
- | TargetAppId | ID of the PROD application |
- | SubscriptionKey | The key used for both applications |
- | Culture | Culture of the applications (en-us) |
-
-1. Click "Run" and then click in the "Job" running.
- You should see a list of tasks running that contains: "Import app" & "Train and Publish app"
-
-## Next steps
-
-> [!div class="nextstepaction"]
-> [See samples on GitHub](https://aka.ms/speech/cc-samples)
cognitive-services How To Custom Commands Developer Flow Test https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/how-to-custom-commands-developer-flow-test.md
- Title: 'Test your Custom Commands app'-
-description: In this article, you learn different approaches to testing a custom commands application.
------ Previously updated : 06/18/2020----
-# Test your Custom Commands Application
--
-In this article, you learn different approaches to testing a custom commands application.
-
-## Test in the portal
-
-Testing in the portal is the simplest and quickest way to check whether your Custom Commands application works as expected. After the app is successfully trained, select the `Test` button to start testing.
-
-> [!div class="mx-imgBorder"]
-> ![Test in the portal](media/custom-commands/create-basic-test-chat-no-mic.png)
-
-## Test with Windows Voice Assistant Client
-
-The Windows Voice Assistant Client is a Windows Presentation Foundation (WPF) application in C# that makes it easy to test interactions with your bot before creating a custom client application.
-
-The tool can accept a custom command application ID. It allows you to test your task completion or command-and-control scenario hosted on the Custom Commands service.
-
-To set up the client, check out [Windows Voice Assistant Client](https://github.com/Azure-Samples/Cognitive-Services-Voice-Assistant/tree/master/clients/csharp-wpf).
-
-> [!div class="mx-imgBorder"]
-> ![WVAC Create profile](media/custom-commands/conversation.png)
-
-## Test programmatically with the Voice Assistant Test Tool
-
-The Voice Assistant Test Tool is a configurable .NET Core C# console application for end-to-end functional regression tests for your Microsoft Voice Assistant.
-
-The tool can be run manually as a console command or automated as part of an Azure DevOps CI/CD pipeline to prevent regressions in your bot.
-
-To learn how to set up the tool, see [Voice Assistant Test Tool](https://github.com/Azure-Samples/Cognitive-Services-Voice-Assistant/tree/main/clients/csharp-dotnet-core/voice-assistant-test).
-
-## Test with Speech SDK-enabled client applications
-
-The Speech software development kit (SDK) exposes many of the Speech service capabilities, which allows you to develop speech-enabled applications. It's available in many programming languages on most platforms.
-
-To set up a Universal Windows Platform (UWP) client application with Speech SDK, and integrate it with your custom command application:
-- [How to: Integrate with a client application using Speech SDK](./how-to-custom-commands-setup-speech-sdk.md)
-- [How to: Send activity to client application](./how-to-custom-commands-send-activity-to-client.md)
-- [How to: Set up web endpoints](./how-to-custom-commands-setup-web-endpoints.md)
-
-For other programming languages and platforms:
-- [Speech SDK programming languages, platforms, scenario capacities](./speech-sdk.md)
-- [Voice assistant sample code](https://github.com/Azure-Samples/Cognitive-Services-Voice-Assistant)
-
-## Next steps
-
-> [!div class="nextstepaction"]
-> [See samples on GitHub](https://aka.ms/speech/cc-samples)
cognitive-services How To Custom Commands Send Activity To Client https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/how-to-custom-commands-send-activity-to-client.md
- Title: 'Send Custom Commands activity to client application' -
-description: In this article, you learn how to send activity from a Custom Commands application to a client application running the Speech SDK.
------ Previously updated : 06/18/2020----
-# Send Custom Commands activity to client application
--
-In this article, you learn how to send activity from a Custom Commands application to a client application running the Speech SDK.
-
-You complete the following tasks:
--- Define and send a custom JSON payload from your Custom Commands application-- Receive and visualize the custom JSON payload contents from a C# UWP Speech SDK client application-
-## Prerequisites
-> [!div class = "checklist"]
-> * [Visual Studio 2019](https://visualstudio.microsoft.com/downloads/) or higher. This guide uses Visual Studio 2019
-> * An Azure Cognitive Services Speech resource key and region: Create a Speech resource on the [Azure portal](https://portal.azure.com). For more information, see [Create a new Azure Cognitive Services resource](~/articles/cognitive-services/cognitive-services-apis-create-account.md?tabs=speech#create-a-new-azure-cognitive-services-resource).
-> * A previously [created Custom Commands app](quickstart-custom-commands-application.md)
-> * A Speech SDK enabled client app:
-[How-to: Integrate with a client application using Speech SDK](./how-to-custom-commands-setup-speech-sdk.md)
-
-## Set up Send activity to client
-1. Open the Custom Commands application you previously created
-1. Select the **TurnOnOff** command, select **ConfirmationResponse** under the completion rule, and then select **Add an action**
-1. Under **New Action-Type**, select **Send activity to client**
-1. Copy the JSON below to **Activity content**
- ```json
- {
- "type": "event",
- "name": "UpdateDeviceState",
- "value": {
- "state": "{OnOff}",
- "device": "{SubjectDevice}"
- }
- }
- ```
-1. Click **Save** to create a new rule with a Send Activity action, and then **Train** and **Publish** the change
-
- > [!div class="mx-imgBorder"]
- > ![Send Activity completion rule](media/custom-commands/send-activity-to-client-completion-rules.png)
-
-## Integrate with client application
-
-In [How-to: Setup client application with Speech SDK (Preview)](./how-to-custom-commands-setup-speech-sdk.md), you created a UWP client application with Speech SDK that handled commands such as `turn on the tv`, `turn off the fan`. With some visuals added, you can see the result of those commands.
-
-To add labeled boxes with text indicating **on** or **off**, add the following StackPanel XML block to `MainPage.xaml`.
-
-```xml
-<StackPanel Orientation="Vertical" H......>
-......
-</StackPanel>
-<StackPanel Orientation="Horizontal" HorizontalAlignment="Center" Margin="20">
- <Grid x:Name="Grid_TV" Margin="50, 0" Width="100" Height="100" Background="LightBlue">
- <StackPanel>
- <TextBlock Text="TV" Margin="0, 10" TextAlignment="Center"/>
- <TextBlock x:Name="State_TV" Text="off" TextAlignment="Center"/>
- </StackPanel>
- </Grid>
- <Grid x:Name="Grid_Fan" Margin="50, 0" Width="100" Height="100" Background="LightBlue">
- <StackPanel>
- <TextBlock Text="Fan" Margin="0, 10" TextAlignment="Center"/>
- <TextBlock x:Name="State_Fan" Text="off" TextAlignment="Center"/>
- </StackPanel>
- </Grid>
-</StackPanel>
-<MediaElement ....../>
-```
-
-### Add reference libraries
-
-Since you've created a JSON payload, you need to add a reference to the [JSON.NET](https://www.newtonsoft.com/json) library to handle deserialization.
-
-1. Right-click your solution.
-1. Choose **Manage NuGet Packages for Solution**, then select **Browse**
-1. If you already installed **Newtonsoft.json**, make sure its version is at least 12.0.3. If not, go to **Manage NuGet Packages for Solution - Updates**, search for **Newtonsoft.json** to update it. This guide is using version 12.0.3.
-
- > [!div class="mx-imgBorder"]
- > ![Send Activity payload](media/custom-commands/send-activity-to-client-json-nuget.png)
-
-1. Also, make sure NuGet package **Microsoft.NETCore.UniversalWindowsPlatform** is at least 6.2.10. This guide is using version 6.2.10.
-
-In `MainPage.xaml.cs`, add
-
-```C#
-using Newtonsoft.Json;
-using Windows.ApplicationModel.Core;
-using Windows.UI.Core;
-```
-
-### Handle the received payload
-
-In `InitializeDialogServiceConnector`, replace the `ActivityReceived` event handler with the following code. The modified `ActivityReceived` event handler extracts the payload from the activity and changes the visual state of the TV or fan accordingly.
-
-```C#
-connector.ActivityReceived += async (sender, activityReceivedEventArgs) =>
-{
- NotifyUser($"Activity received, hasAudio={activityReceivedEventArgs.HasAudio} activity={activityReceivedEventArgs.Activity}");
-
- dynamic activity = JsonConvert.DeserializeObject(activityReceivedEventArgs.Activity);
- var name = activity?.name != null ? activity.name.ToString() : string.Empty;
-
- if (name.Equals("UpdateDeviceState"))
- {
- Debug.WriteLine("Here");
- var state = activity?.value?.state != null ? activity.value.state.ToString() : string.Empty;
- var device = activity?.value?.device != null ? activity.value.device.ToString() : string.Empty;
-
- if (state.Equals("on") || state.Equals("off"))
- {
- switch (device)
- {
- case "tv":
- await CoreApplication.MainView.CoreWindow.Dispatcher.RunAsync(
- CoreDispatcherPriority.Normal, () => { State_TV.Text = state; });
- break;
- case "fan":
- await CoreApplication.MainView.CoreWindow.Dispatcher.RunAsync(
- CoreDispatcherPriority.Normal, () => { State_Fan.Text = state; });
- break;
- default:
- NotifyUser($"Received request to set unsupported device {device} to {state}");
- break;
- }
- }
- else {
- NotifyUser($"Received request to set unsupported state {state}");
- }
- }
-
- if (activityReceivedEventArgs.HasAudio)
- {
- SynchronouslyPlayActivityAudio(activityReceivedEventArgs.Audio);
- }
-};
-```
-
-## Try it out
-
-1. Start the application
-1. Select Enable microphone
-1. Select the Talk button
-1. Say `turn on the tv`
-1. The visual state of the tv should change to "on"
- > [!div class="mx-imgBorder"]
- > ![Screenshot that shows that the visual state of the T V is now on.](media/custom-commands/send-activity-to-client-turn-on-tv.png)
-
-## Next steps
-
-> [!div class="nextstepaction"]
-> [How to: set up web endpoints](./how-to-custom-commands-setup-web-endpoints.md)
cognitive-services How To Custom Commands Setup Speech Sdk https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/how-to-custom-commands-setup-speech-sdk.md
- Title: 'Integrate with a client app using Speech SDK' -
-description: how to make requests to a published Custom Commands application from the Speech SDK running in a UWP application.
------ Previously updated : 06/18/2020----
-# Integrate with a client application using Speech SDK
--
-In this article, you learn how to make requests to a published Custom Commands application from the Speech SDK running in a UWP application. To establish a connection to the Custom Commands application, you need to:
-- Publish a Custom Commands application and get an application identifier (App ID)
-- Create a Universal Windows Platform (UWP) client app using the Speech SDK to allow you to talk to your Custom Commands application
-
-## Prerequisites
-
-A Custom Commands application is required to complete this article. If you haven't created a Custom Commands application, you can do so following the quickstarts:
-> [!div class = "checklist"]
-> * [Create a Custom Commands application](quickstart-custom-commands-application.md)
-
-You'll also need:
-> [!div class = "checklist"]
-> * [Visual Studio 2019](https://visualstudio.microsoft.com/downloads/) or higher. This guide is based on Visual Studio 2019.
-> * An Azure Cognitive Services Speech resource key and region: Create a Speech resource on the [Azure portal](https://portal.azure.com). For more information, see [Create a new Azure Cognitive Services resource](~/articles/cognitive-services/cognitive-services-apis-create-account.md?tabs=speech#create-a-new-azure-cognitive-services-resource).
-> * [Enable your device for development](/windows/uwp/get-started/enable-your-device-for-development)
-
-## Step 1: Publish Custom Commands application
-
-1. Open your previously created Custom Commands application
-1. Go to **Settings**, select **LUIS resource**
-1. If **Prediction resource** is not assigned, select a query prediction key or create a new one
-
- A query prediction key is always required before publishing an application. For more information about LUIS resources, see [Create LUIS Resource](../luis/luis-how-to-azure-subscription.md).
-
-1. Go back to editing Commands, and then select **Publish**
-
- > [!div class="mx-imgBorder"]
- > ![Publish application](media/custom-commands/setup-speech-sdk-publish-application.png)
-
-1. Copy the App ID from the publish notification for later use
-1. Copy the Speech Resource Key for later use
-
-## Step 2: Create a Visual Studio project
-
-Create a Visual Studio project for UWP development and [install the Speech SDK](./quickstarts/setup-platform.md?pivots=programming-language-csharp&tabs=uwp).
-
-## Step 3: Add sample code
-
-In this step, we add the XAML code that defines the user interface of the application, and add the C# code-behind implementation.
-
-### XAML code
-
-Create the application's user interface by adding the XAML code.
-
-1. In **Solution Explorer**, open `MainPage.xaml`
-
-1. In the designer's XAML view, replace the entire contents with the following code snippet:
-
- ```xml
- <Page
- x:Class="helloworld.MainPage"
- xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
- xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
- xmlns:local="using:helloworld"
- xmlns:d="http://schemas.microsoft.com/expression/blend/2008"
- xmlns:mc="http://schemas.openxmlformats.org/markup-compatibility/2006"
- mc:Ignorable="d"
- Background="{ThemeResource ApplicationPageBackgroundThemeBrush}">
-
- <Grid>
- <StackPanel Orientation="Vertical" HorizontalAlignment="Center"
- Margin="20,50,0,0" VerticalAlignment="Center" Width="800">
- <Button x:Name="EnableMicrophoneButton" Content="Enable Microphone"
- Margin="0,10,10,0" Click="EnableMicrophone_ButtonClicked"
- Height="35"/>
- <Button x:Name="ListenButton" Content="Talk"
- Margin="0,10,10,0" Click="ListenButton_ButtonClicked"
- Height="35"/>
- <StackPanel x:Name="StatusPanel" Orientation="Vertical"
- RelativePanel.AlignBottomWithPanel="True"
- RelativePanel.AlignRightWithPanel="True"
- RelativePanel.AlignLeftWithPanel="True">
- <TextBlock x:Name="StatusLabel" Margin="0,10,10,0"
- TextWrapping="Wrap" Text="Status:" FontSize="20"/>
- <Border x:Name="StatusBorder" Margin="0,0,0,0">
- <ScrollViewer VerticalScrollMode="Auto"
- VerticalScrollBarVisibility="Auto" MaxHeight="200">
- <!-- Use LiveSetting to enable screen readers to announce
- the status update. -->
- <TextBlock
- x:Name="StatusBlock" FontWeight="Bold"
- AutomationProperties.LiveSetting="Assertive"
- MaxWidth="{Binding ElementName=Splitter, Path=ActualWidth}"
- Margin="10,10,10,20" TextWrapping="Wrap" />
- </ScrollViewer>
- </Border>
- </StackPanel>
- </StackPanel>
- <MediaElement x:Name="mediaElement"/>
- </Grid>
- </Page>
- ```
-
-The Design view is updated to show the application's user interface.
-
-### C# code-behind source
-
-Add the code-behind source so that the application works as expected. The code-behind source includes:
-- Required `using` statements for the `Speech` and `Speech.Dialog` namespaces
-- A simple implementation to ensure microphone access, wired to a button handler
-- Basic UI helpers to present messages and errors in the application
-- A landing point for the initialization code path that will be populated later
-- A helper to play back text to speech (without streaming support)
-- An empty button handler to start listening that will be populated later
-
-Add the code-behind source as follows:
-
-1. In **Solution Explorer**, open the code-behind source file `MainPage.xaml.cs` (grouped under `MainPage.xaml`)
-
-1. Replace the file's contents with the following code:
-
- ```csharp
- using Microsoft.CognitiveServices.Speech;
- using Microsoft.CognitiveServices.Speech.Audio;
- using Microsoft.CognitiveServices.Speech.Dialog;
- using System;
- using System.IO;
- using System.Text;
- using Windows.UI.Xaml;
- using Windows.UI.Xaml.Controls;
- using Windows.UI.Xaml.Media;
-
- namespace helloworld
- {
- public sealed partial class MainPage : Page
- {
- private DialogServiceConnector connector;
-
- private enum NotifyType
- {
- StatusMessage,
- ErrorMessage
- };
-
- public MainPage()
- {
- this.InitializeComponent();
- }
-
- private async void EnableMicrophone_ButtonClicked(
- object sender, RoutedEventArgs e)
- {
- bool isMicAvailable = true;
- try
- {
- var mediaCapture = new Windows.Media.Capture.MediaCapture();
- var settings =
- new Windows.Media.Capture.MediaCaptureInitializationSettings();
- settings.StreamingCaptureMode =
- Windows.Media.Capture.StreamingCaptureMode.Audio;
- await mediaCapture.InitializeAsync(settings);
- }
- catch (Exception)
- {
- isMicAvailable = false;
- }
- if (!isMicAvailable)
- {
- await Windows.System.Launcher.LaunchUriAsync(
- new Uri("ms-settings:privacy-microphone"));
- }
- else
- {
- NotifyUser("Microphone was enabled", NotifyType.StatusMessage);
- }
- }
-
- private void NotifyUser(
- string strMessage, NotifyType type = NotifyType.StatusMessage)
- {
- // If called from the UI thread, then update immediately.
- // Otherwise, schedule a task on the UI thread to perform the update.
- if (Dispatcher.HasThreadAccess)
- {
- UpdateStatus(strMessage, type);
- }
- else
- {
- var task = Dispatcher.RunAsync(
- Windows.UI.Core.CoreDispatcherPriority.Normal,
- () => UpdateStatus(strMessage, type));
- }
- }
-
- private void UpdateStatus(string strMessage, NotifyType type)
- {
- switch (type)
- {
- case NotifyType.StatusMessage:
- StatusBorder.Background = new SolidColorBrush(
- Windows.UI.Colors.Green);
- break;
- case NotifyType.ErrorMessage:
- StatusBorder.Background = new SolidColorBrush(
- Windows.UI.Colors.Red);
- break;
- }
- StatusBlock.Text += string.IsNullOrEmpty(StatusBlock.Text)
- ? strMessage : "\n" + strMessage;
-
- if (!string.IsNullOrEmpty(StatusBlock.Text))
- {
- StatusBorder.Visibility = Visibility.Visible;
- StatusPanel.Visibility = Visibility.Visible;
- }
- else
- {
- StatusBorder.Visibility = Visibility.Collapsed;
- StatusPanel.Visibility = Visibility.Collapsed;
- }
- // Raise an event if necessary to enable a screen reader
- // to announce the status update.
- var peer = Windows.UI.Xaml.Automation.Peers.FrameworkElementAutomationPeer.FromElement(StatusBlock);
- if (peer != null)
- {
- peer.RaiseAutomationEvent(
- Windows.UI.Xaml.Automation.Peers.AutomationEvents.LiveRegionChanged);
- }
- }
-
- // Waits for and accumulates all audio associated with a given
- // PullAudioOutputStream and then plays it to the MediaElement. Long spoken
- // audio will create extra latency and a streaming playback solution
- // (that plays audio while it continues to be received) should be used --
- // see the samples for examples of this.
- private void SynchronouslyPlayActivityAudio(
- PullAudioOutputStream activityAudio)
- {
- var playbackStreamWithHeader = new MemoryStream();
- playbackStreamWithHeader.Write(Encoding.ASCII.GetBytes("RIFF"), 0, 4); // ChunkID
- playbackStreamWithHeader.Write(BitConverter.GetBytes(UInt32.MaxValue), 0, 4); // ChunkSize: max
- playbackStreamWithHeader.Write(Encoding.ASCII.GetBytes("WAVE"), 0, 4); // Format
- playbackStreamWithHeader.Write(Encoding.ASCII.GetBytes("fmt "), 0, 4); // Subchunk1ID
- playbackStreamWithHeader.Write(BitConverter.GetBytes(16), 0, 4); // Subchunk1Size: PCM
- playbackStreamWithHeader.Write(BitConverter.GetBytes(1), 0, 2); // AudioFormat: PCM
- playbackStreamWithHeader.Write(BitConverter.GetBytes(1), 0, 2); // NumChannels: mono
- playbackStreamWithHeader.Write(BitConverter.GetBytes(16000), 0, 4); // SampleRate: 16kHz
- playbackStreamWithHeader.Write(BitConverter.GetBytes(32000), 0, 4); // ByteRate
- playbackStreamWithHeader.Write(BitConverter.GetBytes(2), 0, 2); // BlockAlign
- playbackStreamWithHeader.Write(BitConverter.GetBytes(16), 0, 2); // BitsPerSample: 16-bit
- playbackStreamWithHeader.Write(Encoding.ASCII.GetBytes("data"), 0, 4); // Subchunk2ID
- playbackStreamWithHeader.Write(BitConverter.GetBytes(UInt32.MaxValue), 0, 4); // Subchunk2Size
-
- byte[] pullBuffer = new byte[2056];
-
- uint lastRead = 0;
- do
- {
- lastRead = activityAudio.Read(pullBuffer);
- playbackStreamWithHeader.Write(pullBuffer, 0, (int)lastRead);
- }
- while (lastRead == pullBuffer.Length);
-
- var task = Dispatcher.RunAsync(
- Windows.UI.Core.CoreDispatcherPriority.Normal, () =>
- {
- mediaElement.SetSource(
- playbackStreamWithHeader.AsRandomAccessStream(), "audio/wav");
- mediaElement.Play();
- });
- }
-
- private void InitializeDialogServiceConnector()
- {
- // New code will go here
- }
-
- private async void ListenButton_ButtonClicked(
- object sender, RoutedEventArgs e)
- {
- // New code will go here
- }
- }
- }
- ```
- > [!NOTE]
- > If you see error: "The type 'Object' is defined in an assembly that is not referenced"
- > 1. Right-click your solution.
- > 1. Choose **Manage NuGet Packages for Solution**, then select **Updates**
- > 1. If you see **Microsoft.NETCore.UniversalWindowsPlatform** in the update list, update **Microsoft.NETCore.UniversalWindowsPlatform** to the newest version
-
-1. Add the following code to the method body of `InitializeDialogServiceConnector`
-
- ```csharp
- // This code creates the `DialogServiceConnector` with your resource information.
- // create a DialogServiceConfig by providing a Custom Commands application id and Speech resource key
- // The RecoLanguage property is optional (default en-US); note that only en-US is supported in Preview
- const string speechCommandsApplicationId = "YourApplicationId"; // Your application id
- const string speechSubscriptionKey = "YourSpeechSubscriptionKey"; // Your Speech resource key
- const string region = "YourServiceRegion"; // The Speech resource region.
-
- var speechCommandsConfig = CustomCommandsConfig.FromSubscription(speechCommandsApplicationId, speechSubscriptionKey, region);
- speechCommandsConfig.SetProperty(PropertyId.SpeechServiceConnection_RecoLanguage, "en-us");
- connector = new DialogServiceConnector(speechCommandsConfig);
- ```
-
-1. Replace the strings `YourApplicationId`, `YourSpeechSubscriptionKey`, and `YourServiceRegion` with your own values for your app, speech key, and [region](regions.md)
-
-1. Append the following code snippet to the end of the method body of `InitializeDialogServiceConnector`
-
- ```csharp
- //
- // This code sets up handlers for events relied on by `DialogServiceConnector` to communicate its activities,
- // speech recognition results, and other information.
- //
- // ActivityReceived is the main way your client will receive messages, audio, and events
- connector.ActivityReceived += (sender, activityReceivedEventArgs) =>
- {
- NotifyUser(
- $"Activity received, hasAudio={activityReceivedEventArgs.HasAudio} activity={activityReceivedEventArgs.Activity}");
-
- if (activityReceivedEventArgs.HasAudio)
- {
- SynchronouslyPlayActivityAudio(activityReceivedEventArgs.Audio);
- }
- };
-
- // Canceled will be signaled when a turn is aborted or experiences an error condition
- connector.Canceled += (sender, canceledEventArgs) =>
- {
- NotifyUser($"Canceled, reason={canceledEventArgs.Reason}");
- if (canceledEventArgs.Reason == CancellationReason.Error)
- {
- NotifyUser(
- $"Error: code={canceledEventArgs.ErrorCode}, details={canceledEventArgs.ErrorDetails}");
- }
- };
-
- // Recognizing (not 'Recognized') will provide the intermediate recognized text
- // while an audio stream is being processed
- connector.Recognizing += (sender, recognitionEventArgs) =>
- {
- NotifyUser($"Recognizing! in-progress text={recognitionEventArgs.Result.Text}");
- };
-
- // Recognized (not 'Recognizing') will provide the final recognized text
- // once audio capture is completed
- connector.Recognized += (sender, recognitionEventArgs) =>
- {
- NotifyUser($"Final speech to text result: '{recognitionEventArgs.Result.Text}'");
- };
-
- // SessionStarted will notify when audio begins flowing to the service for a turn
- connector.SessionStarted += (sender, sessionEventArgs) =>
- {
- NotifyUser($"Now Listening! Session started, id={sessionEventArgs.SessionId}");
- };
-
- // SessionStopped will notify when a turn is complete and
- // it's safe to begin listening again
- connector.SessionStopped += (sender, sessionEventArgs) =>
- {
- NotifyUser($"Listening complete. Session ended, id={sessionEventArgs.SessionId}");
- };
- ```
-
-1. Add the following code snippet to the body of the `ListenButton_ButtonClicked` method in the `MainPage` class
-
- ```csharp
- // This code sets up `DialogServiceConnector` to listen, since you already established the configuration and
- // registered the event handlers.
- if (connector == null)
- {
- InitializeDialogServiceConnector();
- // Optional step to speed up first interaction: if not called,
- // connection happens automatically on first use
- var connectTask = connector.ConnectAsync();
- }
-
- try
- {
- // Start sending audio
- await connector.ListenOnceAsync();
- }
- catch (Exception ex)
- {
- NotifyUser($"Exception: {ex.ToString()}", NotifyType.ErrorMessage);
- }
- ```
-
-1. From the menu bar, choose **File** > **Save All** to save your changes
-
-## Try it out
-
-1. From the menu bar, choose **Build** > **Build Solution** to build the application. The code should compile without errors.
-
-1. Choose **Debug** > **Start Debugging** (or press **F5**) to start the application. The **helloworld** window appears.
-
- ![Sample UWP virtual assistant application in C# - quickstart](media/sdk/qs-voice-assistant-uwp-helloworld-window.png)
-
-1. Select **Enable Microphone**. If the access permission request pops up, select **Yes**.
-
- ![Microphone access permission request](media/sdk/qs-csharp-uwp-10-access-prompt.png)
-
-1. Select **Talk**, and speak an English phrase or sentence into your device's microphone. Your speech is transmitted to the Direct Line Speech channel and transcribed to text, which appears in the window.
-
-## Next steps
-
-> [!div class="nextstepaction"]
-> [How-to: send activity to client application (Preview)](./how-to-custom-commands-send-activity-to-client.md)
cognitive-services How To Custom Commands Setup Web Endpoints https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/how-to-custom-commands-setup-web-endpoints.md
- Title: 'Set up web endpoints' -
-description: set up web endpoints for Custom Commands
------ Previously updated : 06/18/2020----
-# Set up web endpoints
--
-In this article, you'll learn how to set up web endpoints in a Custom Commands application that allow you to make HTTP requests from a client application. You'll complete the following tasks:
-- Set up web endpoints in a Custom Commands application
-- Call web endpoints in a Custom Commands application
-- Receive the web endpoint response
-- Integrate the web endpoint response into a custom JSON payload, send it, and visualize it from a C# UWP Speech SDK client application
-
-## Prerequisites
-
-> [!div class = "checklist"]
-> * [Visual Studio 2019](https://visualstudio.microsoft.com/downloads/)
-> * An Azure Cognitive Services Speech resource key and region: Create a Speech resource on the [Azure portal](https://portal.azure.com). For more information, see [Create a new Azure Cognitive Services resource](~/articles/cognitive-services/cognitive-services-apis-create-account.md?tabs=speech#create-a-new-azure-cognitive-services-resource).
-> * A Custom Commands app (see [Create a voice assistant using Custom Commands](quickstart-custom-commands-application.md))
-> * A Speech SDK enabled client app (see [Integrate with a client application using Speech SDK](how-to-custom-commands-setup-speech-sdk.md))
-
-## Deploy an external web endpoint using Azure Function app
-
-For this tutorial, you need an HTTP endpoint that maintains states for all the devices you set up in the **TurnOnOff** command of your Custom Commands application.
-
-If you already have a web endpoint you want to call, skip to the [next section](#set-up-web-endpoints-in-custom-commands).
-Alternatively, the next section provides details about a default hosted web endpoint you can use if you want to skip this section.
-
-### Input format of Azure function
-
-Next, you'll deploy an endpoint using [Azure Functions](../../azure-functions/index.yml).
-The following is the format of a Custom Commands event that is passed to your Azure function. Use this information when you're writing your Azure Function app.
-
-```json
-{
- "conversationId": "string",
- "currentCommand": {
- "name": "string",
- "parameters": {
- "SomeParameterName": "string",
- "SomeOtherParameterName": "string"
- }
- },
- "currentGlobalParameters": {
- "SomeGlobalParameterName": "string",
- "SomeOtherGlobalParameterName": "string"
- }
-}
-```
-
-
-The following table describes the key attributes of this input:
-
-| Attribute | Explanation |
-| - | - |
-| **conversationId** | The unique identifier of the conversation. Note this ID can be generated by the client app. |
-| **currentCommand** | The command that's currently active in the conversation. |
-| **name** | The name of the command. The `parameters` attribute is a map with the current values of the parameters. |
-| **currentGlobalParameters** | A map like `parameters`, but used for global parameters. |
--
-For the **DeviceState** Azure Function, an example Custom Commands event looks like the following. This event acts as an **input** to the function app.
-
-```json
-{
- "conversationId": "someConversationId",
- "currentCommand": {
- "name": "TurnOnOff",
- "parameters": {
- "item": "tv",
- "value": "on"
- }
- }
-}
-```
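-
-For illustration only, the event above could be deserialized inside a C# Azure Function with [JSON.NET](https://www.newtonsoft.com/json), the library this article already uses on the client side. The class names below are hypothetical and not part of the provided sample:
-
-```csharp
-using System.Collections.Generic;
-using Newtonsoft.Json;
-
-// Hypothetical POCOs mirroring the input format described above.
-public class CustomCommandsEvent
-{
-    public string ConversationId { get; set; }
-    public CurrentCommand CurrentCommand { get; set; }
-    public Dictionary<string, string> CurrentGlobalParameters { get; set; }
-}
-
-public class CurrentCommand
-{
-    public string Name { get; set; }
-    public Dictionary<string, string> Parameters { get; set; }
-}
-
-// Inside the function body, assuming `requestBody` holds the raw JSON payload:
-// var commandEvent = JsonConvert.DeserializeObject<CustomCommandsEvent>(requestBody);
-// var item  = commandEvent.CurrentCommand.Parameters["item"];   // "tv"
-// var value = commandEvent.CurrentCommand.Parameters["value"];  // "on"
-```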
-
-### Azure Function output for a Custom Command app
-
-If output from your Azure Function is consumed by a Custom Commands app, it should appear in the following format. See [Update a command from a web endpoint](./how-to-custom-commands-update-command-from-web-endpoint.md) for details.
-
-```json
-{
- "updatedCommand": {
- "name": "SomeCommandName",
- "updatedParameters": {
- "SomeParameterName": "SomeParameterValue"
- },
- "cancel": false
- },
- "updatedGlobalParameters": {
- "SomeGlobalParameterName": "SomeGlobalParameterValue"
- }
-}
-```
-
-### Azure Function output for a client application
-
-If output from your Azure Function is consumed by a client application, the output can take whatever form the client application requires.
-
-For our **DeviceState** endpoint, the output of your Azure function is consumed by a client application instead of the Custom Commands application. Example output of the Azure function should look like the following:
-
-```json
-{
- "TV": "on",
- "Fan": "off"
-}
-```
-
-This output should be written to external storage so that you can maintain the state of the devices. The external storage state is used in the [Integrate with client application](#integrate-with-client-application) section below.
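-
-As one possible approach (not the implementation used by the downloadable sample), the state could be written to the **devicestate** table storage created below using the `Azure.Data.Tables` client library. The method, partition, and row names here are illustrative assumptions:
-
-```csharp
-using System.Threading.Tasks;
-using Azure.Data.Tables;
-
-public static class DeviceStateStore
-{
-    // Assumption: "devicestate" matches the table storage created later in this article.
-    public static async Task SaveAsync(string storageConnectionString, string tvState, string fanState)
-    {
-        var tableClient = new TableClient(storageConnectionString, "devicestate");
-        await tableClient.CreateIfNotExistsAsync();
-
-        // One row holds the latest state of all devices; partition/row keys are illustrative.
-        var entity = new TableEntity(partitionKey: "devices", rowKey: "home")
-        {
-            ["TV"] = tvState,
-            ["Fan"] = fanState
-        };
-        await tableClient.UpsertEntityAsync(entity);
-    }
-}
-```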
--
-### Deploy Azure function
-
-We provide a sample you can configure and deploy as an Azure Functions app. To create a storage account for our sample, follow these steps.
-
-1. Create table storage to save device state. In the Azure portal, create a new resource of type **Storage account** named **devicestate**.
-1. Copy the **Connection string** value from **devicestate -> Access keys**. You'll need to add this string secret to the downloaded sample Function App code.
-1. Download sample [Function App code](https://github.com/Azure-Samples/Cognitive-Services-Voice-Assistant/tree/main/custom-commands/quick-start).
-1. Open the downloaded solution in Visual Studio 2019. In **Connections.json**, replace **STORAGE_ACCOUNT_SECRET_CONNECTION_STRING** with the secret from Step 2.
-1. Download the **DeviceStateAzureFunction** code.
-
-To deploy the sample app to Azure Functions, follow these steps.
-
-1. [Deploy](../../azure-functions/index.yml) the Azure Functions app.
-1. Wait for deployment to succeed and go to the deployed resource in the Azure portal.
-1. Select **Functions** in the left pane, and then select **DeviceState**.
-1. In the new window, select **Code + Test** and then select **Get function URL**.
-
-## Set up web endpoints in Custom Commands
-
-Let's hook up the Azure function with the existing Custom Commands application.
-In this section, you'll use an existing default **DeviceState** endpoint. If you created your own web endpoint using Azure Function or otherwise, use that instead of the default `https://webendpointexample.azurewebsites.net/api/DeviceState`.
-
-1. Open the Custom Commands application you previously created.
-1. Go to **Web endpoints**, select **New web endpoint**.
-
- > [!div class="mx-imgBorder"]
- > ![New web endpoint](media/custom-commands/setup-web-endpoint-new-endpoint.png)
-
- | Setting | Suggested value | Description |
- | - | - | - |
- | Name | UpdateDeviceState | Name for the web endpoint. |
- | URL | https://webendpointexample.azurewebsites.net/api/DeviceState | The URL of the endpoint you wish your custom command app to talk to. |
- | Method | POST | The allowed interactions (such as GET, POST) with your endpoint.|
- | Headers | Key: app, Value: take the first 8 digits of your applicationId | The header parameters to include in the request header.|
-
- > [!NOTE]
- > - The example web endpoint is created using [Azure Functions](../../azure-functions/index.yml), and hooks up with the database that saves the device state of the TV and fan.
- > - The suggested header is only needed for the example endpoint.
- > - To make sure the value of the header is unique in our example endpoint, take the first 8 digits of your **applicationId**.
- > - In the real world, the web endpoint can be the endpoint of the [IoT hub](../../iot-hub/about-iot-hub.md) that manages your devices.
-
-1. Click **Save**.
-
-## Call web endpoints
-
-1. Go to the **TurnOnOff** command, select **ConfirmationResponse** under the completion rule, and then select **Add an action**.
-1. Under **New Action-Type**, select **Call web endpoint**
-1. In **Edit Action - Endpoints**, select **UpdateDeviceState**, which is the web endpoint we created.
-1. In **Configuration**, put the following values:
- > [!div class="mx-imgBorder"]
- > ![Call web endpoints action parameters](media/custom-commands/setup-web-endpoint-edit-action-parameters.png)
-
- | Setting | Suggested value | Description |
- | - | - | - |
- | Endpoints | UpdateDeviceState | The web endpoint you wish to call in this action. |
- | Query parameters | item={SubjectDevice}&&value={OnOff} | The query parameters to append to the web endpoint URL. |
- | Body content | N/A | The body content of the request. |
-
- > [!NOTE]
- > - The suggested query parameters are only needed for the example endpoint
-
-1. In **On Success - Action to execute**, select **Send speech response**.
-
- In **Simple editor**, enter `{SubjectDevice} is {OnOff}`.
-
- > [!div class="mx-imgBorder"]
- > ![Screenshot that shows the On Success - Action to execute screen.](media/custom-commands/setup-web-endpoint-edit-action-on-success-send-response.png)
-
- | Setting | Suggested value | Description |
- | - | - | - |
- | Action to execute | Send speech response | Action to execute if the request to web endpoint succeeds |
-
- > [!NOTE]
- > - You can also directly access the fields in the http response by using `{YourWebEndpointName.FieldName}`. For example: `{UpdateDeviceState.TV}`
-
-1. In **On Failure - Action to execute**, select **Send speech response**
-
- In **Simple editor**, enter `Sorry, {WebEndpointErrorMessage}`.
-
- > [!div class="mx-imgBorder"]
- > ![Call web endpoints action On Fail](media/custom-commands/setup-web-endpoint-edit-action-on-fail.png)
-
- | Setting | Suggested value | Description |
- | - | - | - |
- | Action to execute | Send speech response | Action to execute if the request to web endpoint fails |
-
- > [!NOTE]
- > - `{WebEndpointErrorMessage}` is optional. You are free to remove it if you don't want to expose any error message.
-    > - The example endpoint sends back an HTTP response with detailed error messages for common errors, such as missing header parameters.
-
-### Try it out in the test portal
-- To see the **On Success** response, save, train, and test.
- > [!div class="mx-imgBorder"]
- > ![Screenshot that shows the On Success response.](media/custom-commands/setup-web-endpoint-on-success-response.png)
-- To see the **On Failure** response, remove one of the query parameters, and then save, retrain, and test.
- > [!div class="mx-imgBorder"]
- > ![Call web endpoints action On Success](media/custom-commands/setup-web-endpoint-on-fail-response.png)
-
-## Integrate with client application
-
-In [Send Custom Commands activity to client application](./how-to-custom-commands-send-activity-to-client.md), you added a **Send activity to client** action. That activity is sent to the client application regardless of whether the **Call web endpoint** action succeeds.
-However, you typically want to send the activity to the client application only when the call to the web endpoint is successful. In this example, that's when the device's state is successfully updated.
-
-1. Delete the **Send activity to client** action you previously added.
-1. Edit the **Call web endpoint** action:
-    1. In **Configuration**, make sure **Query Parameters** is `item={SubjectDevice}&&value={OnOff}`.
-    1. In **On Success**, change **Action to execute** to **Send activity to client**.
-    1. Copy the following JSON into **Activity content**:
- ```json
- {
- "type": "event",
- "name": "UpdateDeviceState",
- "value": {
- "state": "{OnOff}",
- "device": "{SubjectDevice}"
- }
- }
- ```
-Now you only send activity to the client when the request to the web endpoint is successful.
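-
-If you also want the client's visuals to react immediately when this event arrives, one option is to handle it in the client's `ActivityReceived` handler. The following is only a sketch, not part of the original tutorial: it assumes the `DialogServiceConnector` field from the earlier tutorial is named `connector`, that Newtonsoft.Json is referenced, and that the `State_TV` and `State_Fan` text blocks already exist in `MainPage.xaml`.
-
-```csharp
-// Sketch: register after the connector is created (for example, in the connector initialization method).
-// Assumes: using Newtonsoft.Json; using Windows.ApplicationModel.Core; using Windows.UI.Core;
-connector.ActivityReceived += async (sender, activityReceivedEventArgs) =>
-{
-    dynamic activity = JsonConvert.DeserializeObject(activityReceivedEventArgs.Activity);
-    if ((string)activity.type == "event" && (string)activity.name == "UpdateDeviceState")
-    {
-        string device = (string)activity.value.device; // The recognized {SubjectDevice}, for example "tv" or "fan".
-        string state = (string)activity.value.state;   // The recognized {OnOff}, for example "on" or "off".
-
-        await CoreApplication.MainView.CoreWindow.Dispatcher.RunAsync(
-            CoreDispatcherPriority.Normal,
-            () =>
-            {
-                if (device != null && device.ToLowerInvariant() == "tv") { State_TV.Text = state; }
-                if (device != null && device.ToLowerInvariant() == "fan") { State_Fan.Text = state; }
-            });
-    }
-};
-```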
-
-### Create visuals for syncing device state
-
-Add the following XML to `MainPage.xaml` above the **EnableMicrophoneButton** block.
-
-```xml
-<Button x:Name="SyncDeviceStateButton" Content="Sync Device State"
- Margin="0,10,10,0" Click="SyncDeviceState_ButtonClicked"
- Height="35"/>
-<Button x:Name="EnableMicrophoneButton" ......
- .........../>
-```
-
-### Sync device state
-
-In `MainPage.xaml.cs`, add the reference `using Windows.Web.Http;`. Then add the following code to the `MainPage` class. This method sends a GET request to the example endpoint and extracts the current device state for your app. Make sure to change `<your-app-name>` to the value you used in the **header** of the Custom Commands web endpoint (the first 8 digits of your application ID).
-
-```C#
-private async void SyncDeviceState_ButtonClicked(object sender, RoutedEventArgs e)
-{
- //Create an HTTP client object
- var httpClient = new HttpClient();
-
-    //Add the "app" header that the example endpoint expects on the GET request.
- var your_app_name = "<your-app-name>";
-
- Uri endpoint = new Uri("https://webendpointexample.azurewebsites.net/api/DeviceState");
- var requestMessage = new HttpRequestMessage(HttpMethod.Get, endpoint);
- requestMessage.Headers.Add("app", $"{your_app_name}");
-
- try
- {
- //Send the GET request
- var httpResponse = await httpClient.SendRequestAsync(requestMessage);
- httpResponse.EnsureSuccessStatusCode();
- var httpResponseBody = await httpResponse.Content.ReadAsStringAsync();
- dynamic deviceState = JsonConvert.DeserializeObject(httpResponseBody);
- var TVState = deviceState.TV.ToString();
- var FanState = deviceState.Fan.ToString();
- await CoreApplication.MainView.CoreWindow.Dispatcher.RunAsync(
- CoreDispatcherPriority.Normal,
- () =>
- {
- State_TV.Text = TVState;
- State_Fan.Text = FanState;
- });
- }
- catch (Exception ex)
- {
- NotifyUser(
- $"Unable to sync device status: {ex.Message}");
- }
-}
-```
-
-## Try it out
-
-1. Start the application.
-1. Select **Sync Device State**.\
-If you tested the app with `turn on tv` in the previous section, the TV shows as **on**.
- > [!div class="mx-imgBorder"]
- > ![Sync device state](media/custom-commands/setup-web-endpoint-sync-device-state.png)
-1. Select **Enable microphone**.
-1. Select the **Talk** button.
-1. Say `turn on the fan`. The visual state of the fan should change to **on**.
- > [!div class="mx-imgBorder"]
- > ![Turn on fan](media/custom-commands/setup-web-endpoint-turn-on-fan.png)
-
-## Next steps
-
-> [!div class="nextstepaction"]
-> [Export Custom Commands application as a remote skill](./how-to-custom-commands-integrate-remote-skills.md)
cognitive-services How To Custom Commands Update Command From Client https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/how-to-custom-commands-update-command-from-client.md
- Title: 'Update a command parameter from a client app' -
-description: Learn how to update a command from a client application.
-Previously updated : 10/20/2020
-# Update a command from a client app
--
-In this article, you'll learn how to update an ongoing command from a client application.
-
-## Prerequisites
-> [!div class = "checklist"]
-> * A previously [created Custom Commands app](quickstart-custom-commands-application.md)
-
-## Update the state of a command
-
-If your client application requires you to update the state of an ongoing command without voice input, you can send an event to update the command.
-
-To illustrate this scenario, send the following event activity to update the state of an ongoing command (`TurnOnOff`):
-
-```json
-{
- "type": "event",
- "name": "RemoteUpdate",
- "value": {
- "updatedCommand": {
- "name": "TurnOnOff",
- "updatedParameters": {
- "OnOff": "on"
- },
- "cancel": false
- },
- "updatedGlobalParameters": {},
- "processTurn": true
- }
-}
-```
-
-Let's review the key attributes of this activity:
-
-| Attribute | Explanation |
-| --------- | ----------- |
-| **type** | The activity is of type `"event"`. |
-| **name** | The name of the event needs to be `"RemoteUpdate"`. |
-| **value** | The attribute `"value"` contains the attributes required to update the current command. |
-| **updatedCommand** | The attribute `"updatedCommand"` contains the name of the command. Within that attribute, `"updatedParameters"` is a map with the names of the parameters and their updated values. |
-| **cancel** | If the ongoing command needs to be canceled, set the attribute `"cancel"` to `true`. |
-| **updatedGlobalParameters** | The attribute `"updatedGlobalParameters"` is a map just like `"updatedParameters"`, but it's used for global parameters. |
-| **processTurn** | If the turn needs to be processed after the activity is sent, set the attribute `"processTurn"` to `true`. |
-
-You can test this scenario in the Custom Commands portal:
-
-1. Open the Custom Commands application that you previously created.
-1. Select **Train** and then **Test**.
-1. Send `turn`.
-1. Open the side panel and select **Activity editor**.
-1. Type and send the `RemoteUpdate` event activity specified in the previous section.
- > [!div class="mx-imgBorder"]
- > ![Screenshot that shows the event for a remote command.](media/custom-commands/send-remote-command-activity-no-mic.png)
-
-Note how the value for the parameter `"OnOff"` was set to `"on"` through an activity from the client instead of speech or text.
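-
-Outside the test portal, a client application typically sends this activity through the Speech SDK. Here's a minimal C# sketch of one way to do that with `DialogServiceConnector`; the application ID, key, and region values are placeholders that you replace with your own.
-
-```csharp
-using System.Threading.Tasks;
-using Microsoft.CognitiveServices.Speech.Dialog;
-
-class RemoteUpdateExample
-{
-    public static async Task SendRemoteUpdateAsync()
-    {
-        // Placeholders: your Custom Commands application ID, Speech resource key, and region.
-        var config = CustomCommandsConfig.FromSubscription("YourApplicationId", "YourSpeechKey", "YourRegion");
-        using var connector = new DialogServiceConnector(config);
-        await connector.ConnectAsync();
-
-        // The same RemoteUpdate event activity shown above, sent without any voice input.
-        const string activity = @"{
-            ""type"": ""event"",
-            ""name"": ""RemoteUpdate"",
-            ""value"": {
-                ""updatedCommand"": {
-                    ""name"": ""TurnOnOff"",
-                    ""updatedParameters"": { ""OnOff"": ""on"" },
-                    ""cancel"": false
-                },
-                ""updatedGlobalParameters"": {},
-                ""processTurn"": true
-            }
-        }";
-        await connector.SendActivityAsync(activity);
-    }
-}
-```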
-
-## Update the catalog of the parameter for a command
-
-When you configure the list of valid options for a parameter, the values for the parameter are defined globally for the application.
-
-In our example, the `SubjectDevice` parameter will have a fixed list of supported values regardless of the conversation.
-
-If you want to add new entries to the parameter's catalog per conversation, you can send the following activity:
-
-```json
-{
- "type": "event",
- "name": "RemoteUpdate",
- "value": {
- "catalogUpdate": {
- "commandParameterCatalogs": {
- "TurnOnOff": [
- {
- "name": "SubjectDevice",
- "values": {
- "stereo": [
- "cd player"
- ]
- }
- }
- ]
- }
- },
- "processTurn": false
- }
-}
-```
-With this activity, you added an entry for `"stereo"` to the catalog of the parameter `"SubjectDevice"` in the command `"TurnOnOff"`.
-
-> [!div class="mx-imgBorder"]
-> :::image type="content" source="./media/custom-commands/update-catalog-with-remote-activity.png" alt-text="Screenshot that shows a catalog update.":::
-
-Note a couple of things:
-- You need to send this activity only once (ideally, right after you start a connection). A sketch of sending it from a client follows this list.
-- After you send this activity, you should wait for the event `ParameterCatalogsUpdated` to be sent back to the client.
-
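-Here's a C# sketch of how a client that uses the Speech SDK might do this right after connecting. It assumes an existing, connected `DialogServiceConnector` and that the acknowledgment arrives as an event activity named `ParameterCatalogsUpdated`; treat the details as illustrative rather than definitive.
-
-```csharp
-using System.Threading.Tasks;
-using Microsoft.CognitiveServices.Speech.Dialog;
-using Newtonsoft.Json;
-
-static class CatalogUpdateExample
-{
-    // Call once, ideally right after you start a connection.
-    public static async Task UpdateCatalogAsync(DialogServiceConnector connector)
-    {
-        var catalogsUpdated = new TaskCompletionSource<bool>();
-        connector.ActivityReceived += (sender, e) =>
-        {
-            dynamic activity = JsonConvert.DeserializeObject(e.Activity);
-            // Assumption: the acknowledgment is an event activity with this name.
-            if ((string)activity.type == "event" && (string)activity.name == "ParameterCatalogsUpdated")
-            {
-                catalogsUpdated.TrySetResult(true);
-            }
-        };
-
-        const string catalogUpdate = @"{
-            ""type"": ""event"",
-            ""name"": ""RemoteUpdate"",
-            ""value"": {
-                ""catalogUpdate"": {
-                    ""commandParameterCatalogs"": {
-                        ""TurnOnOff"": [ { ""name"": ""SubjectDevice"", ""values"": { ""stereo"": [ ""cd player"" ] } } ]
-                    }
-                },
-                ""processTurn"": false
-            }
-        }";
-
-        await connector.SendActivityAsync(catalogUpdate);
-        await catalogsUpdated.Task; // Wait for the acknowledgment before relying on the new catalog entries.
-    }
-}
-```
-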
-## Add more context from the client application
-
-You can set additional context from the client application per conversation that can later be used in your Custom Commands application.
-
-For example, think about the scenario where you want to send the ID and name of the device connected to the Custom Commands application.
-
-To test this scenario, let's create a new command in the current application:
-1. Create a new command called `GetDeviceInfo`.
-1. Add an example sentence of `get device info`.
-1. In the completion rule **Done**, add a **Send speech response** action that contains the attributes of `clientContext`.
- ![Screenshot that shows a response for sending speech with context.](media/custom-commands/send-speech-response-context.png)
-1. Save, train, and test your application.
-1. In the testing window, send an activity to update the client context.
-
- ```json
- {
- "type": "event",
- "name": "RemoteUpdate",
- "value": {
- "clientContext": {
- "deviceId": "12345",
- "deviceName": "My device"
- },
- "processTurn": false
- }
- }
- ```
-1. Send the text `get device info`.
- ![Screenshot that shows an activity for sending client context.](media/custom-commands/send-client-context-activity-no-mic.png)
-
-Note a few things:
-- You need to send this activity only once (ideally, right after you start a connection).
-- You can use complex objects for `clientContext`.
-- You can use `clientContext` in speech responses, for sending activities and for calling web endpoints.
-
-## Next steps
-
-> [!div class="nextstepaction"]
-> [Update a command from a web endpoint](./how-to-custom-commands-update-command-from-web-endpoint.md)
cognitive-services How To Custom Commands Update Command From Web Endpoint https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/how-to-custom-commands-update-command-from-web-endpoint.md
- Title: 'Update a command from a web endpoint' -
-description: Learn how to update the state of a command by using a call to a web endpoint.
-Previously updated : 10/20/2020
-# Update a command from a web endpoint
--
-If your client application requires an update to the state of an ongoing command without voice input, you can use a call to a web endpoint to update the command.
-
-In this article, you'll learn how to update an ongoing command from a web endpoint.
-
-## Prerequisites
-> [!div class = "checklist"]
-> * A previously [created Custom Commands app](quickstart-custom-commands-application.md)
-
-## Create an Azure function
-
-For this example, you'll need an HTTP-triggered [Azure function](../../azure-functions/index.yml) that supports the following input (or a subset of this input):
-
-```JSON
-{
- "conversationId": "SomeConversationId",
- "currentCommand": {
- "name": "SomeCommandName",
- "parameters": {
- "SomeParameterName": "SomeParameterValue",
- "SomeOtherParameterName": "SomeOtherParameterValue"
- }
- },
- "currentGlobalParameters": {
- "SomeGlobalParameterName": "SomeGlobalParameterValue",
- "SomeOtherGlobalParameterName": "SomeOtherGlobalParameterValue"
- }
-}
-```
-
-Let's review the key attributes of this input:
-
-| Attribute | Explanation |
-| --------- | ----------- |
-| **conversationId** | The unique identifier of the conversation. Note that this ID can be generated from the client app. |
-| **currentCommand** | The command that's currently active in the conversation. |
-| **name** | The name of the command. The `parameters` attribute is a map with the current values of the parameters. |
-| **currentGlobalParameters** | A map like `parameters`, but used for global parameters. |
-
-The output of the Azure function needs to support the following format:
-
-```JSON
-{
- "updatedCommand": {
- "name": "SomeCommandName",
- "updatedParameters": {
- "SomeParameterName": "SomeParameterValue"
- },
- "cancel": false
- },
- "updatedGlobalParameters": {
- "SomeGlobalParameterName": "SomeGlobalParameterValue"
- }
-}
-```
-
-You might recognize this format because it's the same one that you used when [updating a command from the client](./how-to-custom-commands-update-command-from-client.md).
-
-Now, create an Azure function based on Node.js. Copy/paste this code:
-
-```nodejs
-module.exports = async function (context, req) {
- context.log(req.body);
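-    // Echo back the IncrementCounter command with its Counter parameter incremented by one.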
- context.res = {
- body: {
- updatedCommand: {
- name: "IncrementCounter",
- updatedParameters: {
- Counter: req.body.currentCommand.parameters.Counter + 1
- }
- }
- }
- };
-}
-```
-
-When Custom Commands calls this Azure function, it sends the current values of the conversation. The function returns the parameters that you want to update, or indicates whether you want to cancel the current command.
-
-## Update the existing Custom Commands app
-
-Let's hook up the Azure function with the existing Custom Commands app:
-
-1. Add a new command named `IncrementCounter`.
-1. Add just one example sentence with the value `increment`.
-1. Add a new parameter called `Counter` (same name as specified in the Azure function) of type `Number` with a default value of `0`.
-1. Add a new web endpoint called `IncrementEndpoint` with the URL of your Azure function, with **Remote updates** set to **Enabled**.
- > [!div class="mx-imgBorder"]
- > :::image type="content" source="./media/custom-commands/set-web-endpoint-with-remote-updates.png" alt-text="Screenshot that shows setting a web endpoint with remote updates.":::
-1. Create a new interaction rule called **IncrementRule** and add a **Call web endpoint** action.
- > [!div class="mx-imgBorder"]
- > :::image type="content" source="./media/custom-commands/increment-rule-web-endpoint.png" alt-text="Screenshot that shows the creation of an interaction rule.":::
-1. In the action configuration, select `IncrementEndpoint`. Configure **On success** to **Send speech response** with the value of `Counter`, and configure **On failure** with an error message.
- > [!div class="mx-imgBorder"]
- > :::image type="content" source="./media/custom-commands/set-increment-counter-call-endpoint.png" alt-text="Screenshot that shows setting an increment counter for calling a web endpoint.":::
-1. Set the post-execution state of the rule to **Wait for user's input**.
-
-## Test it
-
-1. Save and train your app.
-1. Select **Test**.
-1. Send `increment` a few times (which is the example sentence for the `IncrementCounter` command).
- > [!div class="mx-imgBorder"]
- > :::image type="content" source="./media/custom-commands/increment-counter-example-no-mic.png" alt-text="Screenshot that shows an increment counter example.":::
-
-Notice how the Azure function increments the value of the `Counter` parameter on each turn.
-
-## Next steps
-
-> [!div class="nextstepaction"]
-> [Enable a CI/CD process for your Custom Commands application](./how-to-custom-commands-deploy-cicd.md)
cognitive-services How To Custom Speech Continuous Integration Continuous Deployment https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/how-to-custom-speech-continuous-integration-continuous-deployment.md
- Title: CI/CD for Custom Speech - Speech service-
-description: Apply DevOps with Custom Speech and CI/CD workflows. Implement an existing DevOps solution for your own project.
-Previously updated : 05/08/2022
-# CI/CD for Custom Speech
-
-Implement automated training, testing, and release management to enable continuous improvement of Custom Speech models as you apply updates to training and testing data. Through effective implementation of CI/CD workflows, you can ensure that the endpoint for the best-performing Custom Speech model is always available.
-
-[Continuous integration](/devops/develop/what-is-continuous-integration) (CI) is the engineering practice of frequently committing updates in a shared repository, and performing an automated build on it. CI workflows for Custom Speech train a new model from its data sources and perform automated testing on the new model to ensure that it performs better than the previous model.
-
-[Continuous delivery](/devops/deliver/what-is-continuous-delivery) (CD) takes models from the CI process and creates an endpoint for each improved Custom Speech model. CD makes endpoints easily available to be integrated into solutions.
-
-Custom CI/CD solutions are possible, but for a robust, pre-built solution, use the [Speech DevOps template repository](https://github.com/Azure-Samples/Speech-Service-DevOps-Template), which executes CI/CD workflows using GitHub Actions.
-
-## CI/CD workflows for Custom Speech
-
-The purpose of these workflows is to ensure that each Custom Speech model has better recognition accuracy than the previous build. If the updates to the testing and/or training data improve the accuracy, these workflows create a new Custom Speech endpoint.
-
-Git servers such as GitHub and Azure DevOps can run automated workflows when specific Git events happen, such as merges or pull requests. For example, a CI workflow can be triggered when updates to testing data are pushed to the *main* branch. Different Git servers have different tooling, but all allow you to script command-line interface (CLI) commands so that they can run on a build server.
-
-Along the way, the workflows should name and store data, tests, test files, models, and endpoints such that they can be traced back to the commit or version they came from. It is also helpful to name these assets so that it is easy to see which were created after updating testing data versus training data.
-
-### CI workflow for testing data updates
-
-The principal purpose of the CI/CD workflows is to build a new model using the training data, and to test that model using the testing data to establish whether the [Word Error Rate](how-to-custom-speech-evaluate-data.md#evaluate-word-error-rate) (WER) has improved compared to the previous best-performing model (the "benchmark model"). If the new model performs better, it becomes the new benchmark model against which future models are compared.
-
-The CI workflow for testing data updates should retest the current benchmark model with the updated test data to calculate the revised WER. This ensures that when the WER of a new model is compared to the WER of the benchmark, both models have been tested against the same test data and you're comparing like with like.
-
-This workflow should trigger on updates to testing data and:
-
-- Test the benchmark model against the updated testing data.
-- Store the test output, which contains the WER of the benchmark model, using the updated data.
-- The WER from these tests will become the new benchmark WER that future models must beat.
-- The CD workflow does not execute for updates to testing data.
-
-### CI workflow for training data updates
-
-Updates to training data signify updates to the custom model.
-
-This workflow should trigger on updates to training data and:
-
-- Train a new model with the updated training data.
-- Test the new model against the testing data.
-- Store the test output, which contains the WER.
-- Compare the WER from the new model to the WER from the benchmark model.
-- If the WER does not improve, stop the workflow.
-- If the WER improves, execute the CD workflow to create a Custom Speech endpoint.
-
-### CD workflow
-
-After an update to the training data improves a model's recognition, the CD workflow should automatically execute to create a new endpoint for that model, and make that endpoint available such that it can be used in a solution.
-
-#### Release management
-
-Most teams require a manual review and approval process for deployment to a production environment. For a production deployment, you might want to make sure it happens when key people on the development team are available for support, or during low-traffic periods.
-
-### Tools for Custom Speech workflows
-
-Use the following tools for CI/CD automation workflows for Custom Speech:
-
-- [Azure CLI](/cli/azure/) to create an Azure service principal for authentication, query Azure subscriptions, and store test results in Azure Blob.
-- [Azure AI Speech CLI](spx-overview.md) to interact with the Speech service from the command line or an automated workflow.
-
-## DevOps solution for Custom Speech using GitHub Actions
-
-For an already-implemented DevOps solution for Custom Speech, go to the [Speech DevOps template repo](https://github.com/Azure-Samples/Speech-Service-DevOps-Template). Create a copy of the template and begin development of custom models with a robust DevOps system that includes testing, training, and versioning using GitHub Actions. The repository provides sample testing and training data to aid in setup and explain the workflow. After initial setup, replace the sample data with your project data.
-
-The [Speech DevOps template repo](https://github.com/Azure-Samples/Speech-Service-DevOps-Template) provides the infrastructure and detailed guidance to:
-
-- Copy the template repository to your GitHub account, then create Azure resources and a [service principal](../../active-directory/develop/app-objects-and-service-principals.md#service-principal-object) for the GitHub Actions CI/CD workflows.
-- Walk through the "[dev inner loop](/dotnet/architecture/containerized-lifecycle/design-develop-containerized-apps/docker-apps-inner-loop-workflow)." Update training and testing data from a feature branch, test the changes with a temporary development model, and raise a pull request to propose and review the changes.
-- When training data is updated in a pull request to *main*, train models with the GitHub Actions CI workflow.
-- Perform automated accuracy testing to establish a model's [Word Error Rate](how-to-custom-speech-evaluate-data.md#evaluate-word-error-rate) (WER). Store the test results in Azure Blob.
-- Execute the CD workflow to create an endpoint when the WER improves.
-
-## Next steps
--- Use the [Speech DevOps template repo](https://github.com/Azure-Samples/Speech-Service-DevOps-Template) to implement DevOps for Custom Speech with GitHub Actions.
cognitive-services How To Custom Speech Create Project https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/how-to-custom-speech-create-project.md
- Title: Create a Custom Speech project - Speech service-
-description: Learn about how to create a project for Custom Speech.
-Previously updated : 11/29/2022
-zone_pivot_groups: speech-studio-cli-rest
--
-# Create a Custom Speech project
-
-Custom Speech projects contain models, training and testing datasets, and deployment endpoints. Each project is specific to a [locale](language-support.md?tabs=stt). For example, you might create a project for English in the United States.
-
-## Create a project
--
-To create a Custom Speech project, follow these steps:
-
-1. Sign in to the [Speech Studio](https://aka.ms/speechstudio/customspeech).
-1. Select the subscription and Speech resource to work with.
-
- > [!IMPORTANT]
- > If you will train a custom model with audio data, choose a Speech resource region with dedicated hardware for training audio data. See footnotes in the [regions](regions.md#speech-service) table for more information.
-
-1. Select **Custom speech** > **Create a new project**.
-1. Follow the instructions provided by the wizard to create your project.
-
-Select the new project by name or select **Go to project**. You will see these menu items in the left panel: **Speech datasets**, **Train custom models**, **Test models**, and **Deploy models**.
---
-To create a project, use the `spx csr project create` command. Construct the request parameters according to the following instructions:
-
-- Set the required `language` parameter. The locale of the project and the contained datasets should be the same. The locale can't be changed later. The Speech CLI `language` parameter corresponds to the `locale` property in the JSON request and response.
-- Set the required `name` parameter. This is the name that will be displayed in the Speech Studio. The Speech CLI `name` parameter corresponds to the `displayName` property in the JSON request and response.
-
-Here's an example Speech CLI command that creates a project:
-
-```azurecli-interactive
-spx csr project create --api-version v3.1 --name "My Project" --description "My Project Description" --language "en-US"
-```
-
-You should receive a response body in the following format:
-
-```json
-{
- "self": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.1/projects/1cdfa276-0f9d-425b-a942-5f2be93017ed",
- "links": {
- "evaluations": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.1/projects/1cdfa276-0f9d-425b-a942-5f2be93017ed/evaluations",
- "datasets": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.1/projects/1cdfa276-0f9d-425b-a942-5f2be93017ed/datasets",
- "models": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.1/projects/1cdfa276-0f9d-425b-a942-5f2be93017ed/models",
- "endpoints": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.1/projects/1cdfa276-0f9d-425b-a942-5f2be93017ed/endpoints",
- "transcriptions": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.1/projects/1cdfa276-0f9d-425b-a942-5f2be93017ed/transcriptions"
- },
- "properties": {
- "datasetCount": 0,
- "evaluationCount": 0,
- "modelCount": 0,
- "transcriptionCount": 0,
- "endpointCount": 0
- },
- "createdDateTime": "2022-05-17T22:15:18Z",
- "locale": "en-US",
- "displayName": "My Project",
- "description": "My Project Description"
-}
-```
-
-The top-level `self` property in the response body is the project's URI. Use this URI to get details about the project's evaluations, datasets, models, endpoints, and transcriptions. You also use this URI to update or delete a project.
-
-For Speech CLI help with projects, run the following command:
-
-```azurecli-interactive
-spx help csr project
-```
---
-To create a project, use the [Projects_Create](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/operations/Projects_Create) operation of the [Speech to text REST API](rest-speech-to-text.md). Construct the request body according to the following instructions:
-
-- Set the required `locale` property. This should be the locale of the contained datasets. The locale can't be changed later.
-- Set the required `displayName` property. This is the project name that will be displayed in the Speech Studio.
-
-Make an HTTP POST request using the URI as shown in the following [Projects_Create](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/operations/Projects_Create) example. Replace `YourSubscriptionKey` with your Speech resource key, replace `YourServiceRegion` with your Speech resource region, and set the request body properties as previously described.
-
-```azurecli-interactive
-curl -v -X POST -H "Ocp-Apim-Subscription-Key: YourSubscriptionKey" -H "Content-Type: application/json" -d '{
- "displayName": "My Project",
- "description": "My Project Description",
- "locale": "en-US"
-} ' "https://YourServiceRegion.api.cognitive.microsoft.com/speechtotext/v3.1/projects"
-```
-
-You should receive a response body in the following format:
-
-```json
-{
- "self": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.1/projects/1cdfa276-0f9d-425b-a942-5f2be93017ed",
- "links": {
- "evaluations": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.1/projects/1cdfa276-0f9d-425b-a942-5f2be93017ed/evaluations",
- "datasets": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.1/projects/1cdfa276-0f9d-425b-a942-5f2be93017ed/datasets",
- "models": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.1/projects/1cdfa276-0f9d-425b-a942-5f2be93017ed/models",
- "endpoints": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.1/projects/1cdfa276-0f9d-425b-a942-5f2be93017ed/endpoints",
- "transcriptions": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.1/projects/1cdfa276-0f9d-425b-a942-5f2be93017ed/transcriptions"
- },
- "properties": {
- "datasetCount": 0,
- "evaluationCount": 0,
- "modelCount": 0,
- "transcriptionCount": 0,
- "endpointCount": 0
- },
- "createdDateTime": "2022-05-17T22:15:18Z",
- "locale": "en-US",
- "displayName": "My Project",
- "description": "My Project Description"
-}
-```
-
-The top-level `self` property in the response body is the project's URI. Use this URI to [get](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/operations/Projects_Get) details about the project's evaluations, datasets, models, endpoints, and transcriptions. You also use this URI to [update](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/operations/Projects_Update) or [delete](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/operations/Projects_Delete) a project.
--
-## Choose your model
-
-There are a few approaches to using Custom Speech models:
-
-- The base model provides accurate speech recognition out of the box for a range of [scenarios](overview.md#speech-scenarios). Base models are updated periodically to improve accuracy and quality. If you use base models, we recommend using the latest default base model. If a required customization capability is only available with an older model, then you can choose an older base model.
-- A custom model augments the base model to include domain-specific vocabulary shared across all areas of the custom domain.
-- Multiple custom models can be used when the custom domain has multiple areas, each with a specific vocabulary.
-
-One recommended way to see if the base model will suffice is to analyze the transcription produced from the base model and compare it with a human-generated transcript for the same audio. You can compare the transcripts and obtain a [word error rate (WER)](how-to-custom-speech-evaluate-data.md#evaluate-word-error-rate) score. If the WER score is high, training a custom model to recognize the incorrectly identified words is recommended.
-
-Multiple models are recommended if the vocabulary varies across the domain areas. For instance, Olympic commentators report on various events, each associated with its own vernacular. Because each Olympic event vocabulary differs significantly from others, building a custom model specific to an event increases accuracy by limiting the utterance data relative to that particular event. As a result, the model doesn't need to sift through unrelated data to make a match. Regardless, training still requires a decent variety of training data. Include audio from various commentators who have different accents, genders, and ages.
-
-## Model stability and lifecycle
-
-A base model or custom model deployed to an endpoint using Custom Speech is fixed until you decide to update it. The speech recognition accuracy and quality will remain consistent, even when a new base model is released. This allows you to lock in the behavior of a specific model until you decide to use a newer model.
-
-Whether you train your own model or use a snapshot of a base model, you can use the model for a limited time. For more information, see [Model and endpoint lifecycle](./how-to-custom-speech-model-and-endpoint-lifecycle.md).
-
-## Next steps
-
-* [Training and testing datasets](./how-to-custom-speech-test-and-train.md)
-* [Test model quantitatively](how-to-custom-speech-evaluate-data.md)
-* [Train a model](how-to-custom-speech-train-model.md)
cognitive-services How To Custom Speech Deploy Model https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/how-to-custom-speech-deploy-model.md
- Title: Deploy a Custom Speech model - Speech service-
-description: Learn how to deploy Custom Speech models.
-Previously updated : 11/29/2022
-zone_pivot_groups: speech-studio-cli-rest
--
-# Deploy a Custom Speech model
-
-In this article, you'll learn how to deploy an endpoint for a Custom Speech model. With the exception of [batch transcription](batch-transcription.md), you must deploy a custom endpoint to use a Custom Speech model.
-
-> [!TIP]
-> A hosted deployment endpoint isn't required to use Custom Speech with the [Batch transcription API](batch-transcription.md). You can conserve resources if the [custom speech model](how-to-custom-speech-train-model.md) is only used for batch transcription. For more information, see [Speech service pricing](https://azure.microsoft.com/pricing/details/cognitive-services/speech-services/).
-
-You can deploy an endpoint for a base or custom model, and then [update](#change-model-and-redeploy-endpoint) the endpoint later to use a better trained model.
-
-> [!NOTE]
-> Endpoints used by `F0` Speech resources are deleted after seven days.
-
-## Add a deployment endpoint
--
-To create a custom endpoint, follow these steps:
-
-1. Sign in to the [Speech Studio](https://aka.ms/speechstudio/customspeech).
-1. Select **Custom Speech** > Your project name > **Deploy models**.
-
- If this is your first endpoint, you'll notice that there are no endpoints listed in the table. After you create an endpoint, you use this page to track each deployed endpoint.
-
-1. Select **Deploy model** to start the new endpoint wizard.
-
-1. On the **New endpoint** page, enter a name and description for your custom endpoint.
-
-1. Select the custom model that you want to associate with the endpoint.
-
-1. Optionally, you can check the box to enable audio and diagnostic [logging](#view-logging-data) of the endpoint's traffic.
-
- :::image type="content" source="./media/custom-speech/custom-speech-deploy-model.png" alt-text="Screenshot of the New endpoint page that shows the checkbox to enable logging.":::
-
-1. Select **Add** to save and deploy the endpoint.
-
-On the main **Deploy models** page, details about the new endpoint are displayed in a table, such as name, description, status, and expiration date. It can take up to 30 minutes to instantiate a new endpoint that uses your custom models. When the status of the deployment changes to **Succeeded**, the endpoint is ready to use.
-
-> [!IMPORTANT]
-> Take note of the model expiration date. This is the last date that you can use your custom model for speech recognition. For more information, see [Model and endpoint lifecycle](./how-to-custom-speech-model-and-endpoint-lifecycle.md).
-
-Select the endpoint link to view information specific to it, such as the endpoint key, endpoint URL, and sample code.
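-
-After the deployment succeeds, you can point the Speech SDK at the custom endpoint by setting the endpoint ID on your speech configuration. Here's a minimal C# sketch; the key, region, and endpoint ID values are placeholders that you replace with your own.
-
-```csharp
-using System;
-using System.Threading.Tasks;
-using Microsoft.CognitiveServices.Speech;
-using Microsoft.CognitiveServices.Speech.Audio;
-
-class CustomEndpointExample
-{
-    static async Task Main()
-    {
-        // Placeholders: your Speech resource key and region, and the endpoint ID from the Deploy models page.
-        var config = SpeechConfig.FromSubscription("YourSpeechKey", "YourServiceRegion");
-        config.EndpointId = "YourEndpointId";
-
-        using var audioConfig = AudioConfig.FromDefaultMicrophoneInput();
-        using var recognizer = new SpeechRecognizer(config, audioConfig);
-
-        var result = await recognizer.RecognizeOnceAsync();
-        Console.WriteLine($"Recognized: {result.Text}");
-    }
-}
-```
-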
---
-To create an endpoint and deploy a model, use the `spx csr endpoint create` command. Construct the request parameters according to the following instructions:
-
-- Set the `project` parameter to the ID of an existing project. This is recommended so that you can also view and manage the endpoint in Speech Studio. You can run the `spx csr project list` command to get available projects.
-- Set the required `model` parameter to the ID of the model that you want deployed to the endpoint.
-- Set the required `language` parameter. The endpoint locale must match the locale of the model. The locale can't be changed later. The Speech CLI `language` parameter corresponds to the `locale` property in the JSON request and response.
-- Set the required `name` parameter. This is the name that will be displayed in the Speech Studio. The Speech CLI `name` parameter corresponds to the `displayName` property in the JSON request and response.
-- Optionally, you can set the `logging` parameter. Set this to `enabled` to enable audio and diagnostic [logging](#view-logging-data) of the endpoint's traffic. The default is `false`.
-
-Here's an example Speech CLI command to create an endpoint and deploy a model:
-
-```azurecli-interactive
-spx csr endpoint create --api-version v3.1 --project YourProjectId --model YourModelId --name "My Endpoint" --description "My Endpoint Description" --language "en-US"
-```
-
-You should receive a response body in the following format:
-
-```json
-{
- "self": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.1/endpoints/98375aaa-40c2-42c4-b65c-f76734fc7790",
- "model": {
- "self": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.1/models/base/ae8d1643-53e4-4554-be4c-221dcfb471c5"
- },
- "links": {
- "logs": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.1/endpoints/98375aaa-40c2-42c4-b65c-f76734fc7790/files/logs",
- "restInteractive": "https://eastus.stt.speech.microsoft.com/speech/recognition/interactive/cognitiveservices/v1?cid=98375aaa-40c2-42c4-b65c-f76734fc7790",
- "restConversation": "https://eastus.stt.speech.microsoft.com/speech/recognition/conversation/cognitiveservices/v1?cid=98375aaa-40c2-42c4-b65c-f76734fc7790",
- "restDictation": "https://eastus.stt.speech.microsoft.com/speech/recognition/dictation/cognitiveservices/v1?cid=98375aaa-40c2-42c4-b65c-f76734fc7790",
- "webSocketInteractive": "wss://eastus.stt.speech.microsoft.com/speech/recognition/interactive/cognitiveservices/v1?cid=98375aaa-40c2-42c4-b65c-f76734fc7790",
- "webSocketConversation": "wss://eastus.stt.speech.microsoft.com/speech/recognition/conversation/cognitiveservices/v1?cid=98375aaa-40c2-42c4-b65c-f76734fc7790",
- "webSocketDictation": "wss://eastus.stt.speech.microsoft.com/speech/recognition/dictation/cognitiveservices/v1?cid=98375aaa-40c2-42c4-b65c-f76734fc7790"
- },
- "project": {
- "self": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.1/projects/d40f2eb8-1abf-4f72-9008-a5ae8add82a4"
- },
- "properties": {
- "loggingEnabled": true
- },
- "lastActionDateTime": "2022-05-19T15:27:51Z",
- "status": "NotStarted",
- "createdDateTime": "2022-05-19T15:27:51Z",
- "locale": "en-US",
- "displayName": "My Endpoint",
- "description": "My Endpoint Description"
-}
-```
-
-The top-level `self` property in the response body is the endpoint's URI. Use this URI to get details about the endpoint's project, model, and logs. You also use this URI to update the endpoint.
-
-For Speech CLI help with endpoints, run the following command:
-
-```azurecli-interactive
-spx help csr endpoint
-```
---
-To create an endpoint and deploy a model, use the [Endpoints_Create](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/operations/Endpoints_Create) operation of the [Speech to text REST API](rest-speech-to-text.md). Construct the request body according to the following instructions:
-
-- Set the `project` property to the URI of an existing project. This is recommended so that you can also view and manage the endpoint in Speech Studio. You can make a [Projects_List](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/operations/Projects_List) request to get available projects.
-- Set the required `model` property to the URI of the model that you want deployed to the endpoint.
-- Set the required `locale` property. The endpoint locale must match the locale of the model. The locale can't be changed later.
-- Set the required `displayName` property. This is the name that will be displayed in the Speech Studio.
-- Optionally, you can set the `loggingEnabled` property within `properties`. Set this to `true` to enable audio and diagnostic [logging](#view-logging-data) of the endpoint's traffic. The default is `false`.
-
-Make an HTTP POST request using the URI as shown in the following [Endpoints_Create](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/operations/Endpoints_Create) example. Replace `YourSubscriptionKey` with your Speech resource key, replace `YourServiceRegion` with your Speech resource region, and set the request body properties as previously described.
-
-```azurecli-interactive
-curl -v -X POST -H "Ocp-Apim-Subscription-Key: YourSubscriptionKey" -H "Content-Type: application/json" -d '{
- "project": {
- "self": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.1/projects/d40f2eb8-1abf-4f72-9008-a5ae8add82a4"
- },
- "properties": {
- "loggingEnabled": true
- },
- "displayName": "My Endpoint",
- "description": "My Endpoint Description",
- "model": {
- "self": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.1/models/base/ae8d1643-53e4-4554-be4c-221dcfb471c5"
- },
- "locale": "en-US",
-}' "https://YourServiceRegion.api.cognitive.microsoft.com/speechtotext/v3.1/endpoints"
-```
-
-You should receive a response body in the following format:
-
-```json
-{
- "self": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.1/endpoints/98375aaa-40c2-42c4-b65c-f76734fc7790",
- "model": {
- "self": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.1/models/base/ae8d1643-53e4-4554-be4c-221dcfb471c5"
- },
- "links": {
- "logs": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.1/endpoints/98375aaa-40c2-42c4-b65c-f76734fc7790/files/logs",
- "restInteractive": "https://eastus.stt.speech.microsoft.com/speech/recognition/interactive/cognitiveservices/v1?cid=98375aaa-40c2-42c4-b65c-f76734fc7790",
- "restConversation": "https://eastus.stt.speech.microsoft.com/speech/recognition/conversation/cognitiveservices/v1?cid=98375aaa-40c2-42c4-b65c-f76734fc7790",
- "restDictation": "https://eastus.stt.speech.microsoft.com/speech/recognition/dictation/cognitiveservices/v1?cid=98375aaa-40c2-42c4-b65c-f76734fc7790",
- "webSocketInteractive": "wss://eastus.stt.speech.microsoft.com/speech/recognition/interactive/cognitiveservices/v1?cid=98375aaa-40c2-42c4-b65c-f76734fc7790",
- "webSocketConversation": "wss://eastus.stt.speech.microsoft.com/speech/recognition/conversation/cognitiveservices/v1?cid=98375aaa-40c2-42c4-b65c-f76734fc7790",
- "webSocketDictation": "wss://eastus.stt.speech.microsoft.com/speech/recognition/dictation/cognitiveservices/v1?cid=98375aaa-40c2-42c4-b65c-f76734fc7790"
- },
- "project": {
- "self": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.1/projects/d40f2eb8-1abf-4f72-9008-a5ae8add82a4"
- },
- "properties": {
- "loggingEnabled": true
- },
- "lastActionDateTime": "2022-05-19T15:27:51Z",
- "status": "NotStarted",
- "createdDateTime": "2022-05-19T15:27:51Z",
- "locale": "en-US",
- "displayName": "My Endpoint",
- "description": "My Endpoint Description"
-}
-```
-
-The top-level `self` property in the response body is the endpoint's URI. Use this URI to [get](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/operations/Endpoints_Get) details about the endpoint's project, model, and logs. You also use this URI to [update](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/operations/Endpoints_Update) or [delete](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/operations/Endpoints_Delete) the endpoint.
--
-## Change model and redeploy endpoint
-
-An endpoint can be updated to use another model that was created by the same Speech resource. As previously mentioned, you must update the endpoint's model before the [model expires](./how-to-custom-speech-model-and-endpoint-lifecycle.md).
--
-To use a new model and redeploy the custom endpoint:
-
-1. Sign in to the [Speech Studio](https://aka.ms/speechstudio/customspeech).
-1. Select **Custom Speech** > Your project name > **Deploy models**.
-1. Select the link to an endpoint by name, and then select **Change model**.
-1. Select the new model that you want the endpoint to use.
-1. Select **Done** to save and redeploy the endpoint.
---
-To redeploy the custom endpoint with a new model, use the `spx csr endpoint update` command. Construct the request parameters according to the following instructions:
-
-- Set the required `endpoint` parameter to the ID of the endpoint that you want to update.
-- Set the required `model` parameter to the ID of the model that you want deployed to the endpoint.
-
-Here's an example Speech CLI command that redeploys the custom endpoint with a new model:
-
-```azurecli-interactive
-spx csr endpoint update --api-version v3.1 --endpoint YourEndpointId --model YourModelId
-```
-
-You should receive a response body in the following format:
-
-```json
-{
- "self": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.1/endpoints/98375aaa-40c2-42c4-b65c-f76734fc7790",
- "model": {
- "self": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.1/models/1e47c19d-12ca-4ba5-b177-9e04bd72cf98"
- },
- "links": {
- "logs": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.1/endpoints/98375aaa-40c2-42c4-b65c-f76734fc7790/files/logs",
- "restInteractive": "https://eastus.stt.speech.microsoft.com/speech/recognition/interactive/cognitiveservices/v1?cid=98375aaa-40c2-42c4-b65c-f76734fc7790",
- "restConversation": "https://eastus.stt.speech.microsoft.com/speech/recognition/conversation/cognitiveservices/v1?cid=98375aaa-40c2-42c4-b65c-f76734fc7790",
- "restDictation": "https://eastus.stt.speech.microsoft.com/speech/recognition/dictation/cognitiveservices/v1?cid=98375aaa-40c2-42c4-b65c-f76734fc7790",
- "webSocketInteractive": "wss://eastus.stt.speech.microsoft.com/speech/recognition/interactive/cognitiveservices/v1?cid=98375aaa-40c2-42c4-b65c-f76734fc7790",
- "webSocketConversation": "wss://eastus.stt.speech.microsoft.com/speech/recognition/conversation/cognitiveservices/v1?cid=98375aaa-40c2-42c4-b65c-f76734fc7790",
- "webSocketDictation": "wss://eastus.stt.speech.microsoft.com/speech/recognition/dictation/cognitiveservices/v1?cid=98375aaa-40c2-42c4-b65c-f76734fc7790"
- },
- "project": {
- "self": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.1/projects/639d5280-8995-40cc-9329-051fd0fddd46"
- },
- "properties": {
- "loggingEnabled": true
- },
- "lastActionDateTime": "2022-05-19T23:01:34Z",
- "status": "NotStarted",
- "createdDateTime": "2022-05-19T15:41:27Z",
- "locale": "en-US",
- "displayName": "My Endpoint",
- "description": "My Updated Endpoint Description"
-}
-```
-
-For Speech CLI help with endpoints, run the following command:
-
-```azurecli-interactive
-spx help csr endpoint
-```
---
-To redeploy the custom endpoint with a new model, use the [Endpoints_Update](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/operations/Endpoints_Update) operation of the [Speech to text REST API](rest-speech-to-text.md). Construct the request body according to the following instructions:
-
-- Set the `model` property to the URI of the model that you want deployed to the endpoint.
-
-Make an HTTP PATCH request using the URI as shown in the following example. Replace `YourSubscriptionKey` with your Speech resource key, replace `YourServiceRegion` with your Speech resource region, replace `YourEndpointId` with your endpoint ID, and set the request body properties as previously described.
-
-```azurecli-interactive
-curl -v -X PATCH -H "Ocp-Apim-Subscription-Key: YourSubscriptionKey" -H "Content-Type: application/json" -d '{
- "model": {
- "self": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.1/models/1e47c19d-12ca-4ba5-b177-9e04bd72cf98"
- }
-}' "https://YourServiceRegion.api.cognitive.microsoft.com/speechtotext/v3.1/endpoints/YourEndpointId"
-```
-
-You should receive a response body in the following format:
-
-```json
-{
- "self": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.1/endpoints/98375aaa-40c2-42c4-b65c-f76734fc7790",
- "model": {
- "self": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.1/models/1e47c19d-12ca-4ba5-b177-9e04bd72cf98"
- },
- "links": {
- "logs": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.1/endpoints/98375aaa-40c2-42c4-b65c-f76734fc7790/files/logs",
- "restInteractive": "https://eastus.stt.speech.microsoft.com/speech/recognition/interactive/cognitiveservices/v1?cid=98375aaa-40c2-42c4-b65c-f76734fc7790",
- "restConversation": "https://eastus.stt.speech.microsoft.com/speech/recognition/conversation/cognitiveservices/v1?cid=98375aaa-40c2-42c4-b65c-f76734fc7790",
- "restDictation": "https://eastus.stt.speech.microsoft.com/speech/recognition/dictation/cognitiveservices/v1?cid=98375aaa-40c2-42c4-b65c-f76734fc7790",
- "webSocketInteractive": "wss://eastus.stt.speech.microsoft.com/speech/recognition/interactive/cognitiveservices/v1?cid=98375aaa-40c2-42c4-b65c-f76734fc7790",
- "webSocketConversation": "wss://eastus.stt.speech.microsoft.com/speech/recognition/conversation/cognitiveservices/v1?cid=98375aaa-40c2-42c4-b65c-f76734fc7790",
- "webSocketDictation": "wss://eastus.stt.speech.microsoft.com/speech/recognition/dictation/cognitiveservices/v1?cid=98375aaa-40c2-42c4-b65c-f76734fc7790"
- },
- "project": {
- "self": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.1/projects/639d5280-8995-40cc-9329-051fd0fddd46"
- },
- "properties": {
- "loggingEnabled": true
- },
- "lastActionDateTime": "2022-05-19T23:01:34Z",
- "status": "NotStarted",
- "createdDateTime": "2022-05-19T15:41:27Z",
- "locale": "en-US",
- "displayName": "My Endpoint",
- "description": "My Updated Endpoint Description"
-}
-```
--
-The redeployment takes several minutes to complete. In the meantime, your endpoint will use the previous model without interruption of service.
-
-## View logging data
-
-Logging data is available for export if you configured it while creating the endpoint.
--
-To download the endpoint logs:
-
-1. Sign in to the [Speech Studio](https://aka.ms/speechstudio/customspeech).
-1. Select **Custom Speech** > Your project name > **Deploy models**.
-1. Select the link by endpoint name.
-1. Under **Content logging**, select **Download log**.
---
-To get logs for an endpoint, use the `spx csr endpoint list` command. Construct the request parameters according to the following instructions:
-
-- Set the required `endpoint` parameter to the ID of the endpoint for which you want to get logs.
-
-Here's an example Speech CLI command that gets logs for an endpoint:
-
-```azurecli-interactive
-spx csr endpoint list --api-version v3.1 --endpoint YourEndpointId
-```
-
-The location of each log file, along with additional details, is returned in the response body.
---
-To get logs for an endpoint, start by using the [Endpoints_Get](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/operations/Endpoints_Get) operation of the [Speech to text REST API](rest-speech-to-text.md).
-
-Make an HTTP GET request using the URI as shown in the following example. Replace `YourEndpointId` with your endpoint ID, replace `YourSubscriptionKey` with your Speech resource key, and replace `YourServiceRegion` with your Speech resource region.
-
-```azurecli-interactive
-curl -v -X GET "https://YourServiceRegion.api.cognitive.microsoft.com/speechtotext/v3.1/endpoints/YourEndpointId" -H "Ocp-Apim-Subscription-Key: YourSubscriptionKey"
-```
-
-You should receive a response body in the following format:
-
-```json
-{
- "self": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.1/endpoints/98375aaa-40c2-42c4-b65c-f76734fc7790",
- "model": {
- "self": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.1/models/1e47c19d-12ca-4ba5-b177-9e04bd72cf98"
- },
- "links": {
- "logs": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.1/endpoints/98375aaa-40c2-42c4-b65c-f76734fc7790/files/logs",
- "restInteractive": "https://eastus.stt.speech.microsoft.com/speech/recognition/interactive/cognitiveservices/v1?cid=98375aaa-40c2-42c4-b65c-f76734fc7790",
- "restConversation": "https://eastus.stt.speech.microsoft.com/speech/recognition/conversation/cognitiveservices/v1?cid=98375aaa-40c2-42c4-b65c-f76734fc7790",
- "restDictation": "https://eastus.stt.speech.microsoft.com/speech/recognition/dictation/cognitiveservices/v1?cid=98375aaa-40c2-42c4-b65c-f76734fc7790",
- "webSocketInteractive": "wss://eastus.stt.speech.microsoft.com/speech/recognition/interactive/cognitiveservices/v1?cid=98375aaa-40c2-42c4-b65c-f76734fc7790",
- "webSocketConversation": "wss://eastus.stt.speech.microsoft.com/speech/recognition/conversation/cognitiveservices/v1?cid=98375aaa-40c2-42c4-b65c-f76734fc7790",
- "webSocketDictation": "wss://eastus.stt.speech.microsoft.com/speech/recognition/dictation/cognitiveservices/v1?cid=98375aaa-40c2-42c4-b65c-f76734fc7790"
- },
- "project": {
- "self": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.1/projects/2f78cdb7-58ac-4bd9-9bc6-170e31483b26"
- },
- "properties": {
- "loggingEnabled": true
- },
- "lastActionDateTime": "2022-05-19T23:41:05Z",
- "status": "Succeeded",
- "createdDateTime": "2022-05-19T23:41:05Z",
- "locale": "en-US",
- "displayName": "My Endpoint",
- "description": "My Updated Endpoint Description"
-}
-```
-
-Make an HTTP GET request using the "logs" URI from the previous response body. Replace `YourEndpointId` with your endpoint ID, replace `YourSubscriptionKey` with your Speech resource key, and replace `YourServiceRegion` with your Speech resource region.
--
-```curl
-curl -v -X GET "https://YourServiceRegion.api.cognitive.microsoft.com/speechtotext/v3.1/endpoints/YourEndpointId/files/logs" -H "Ocp-Apim-Subscription-Key: YourSubscriptionKey"
-```
-
-The location of each log file, along with additional details, is returned in the response body.
--
-Logging data is available on Microsoft-owned storage for 30 days, after which it will be removed. If your own storage account is linked to the Cognitive Services subscription, the logging data won't be automatically deleted.
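-
-If you prefer to retrieve the logs programmatically, the following C# sketch lists the log files for an endpoint over the REST API. It assumes the logs response contains a `values` array whose entries expose a `links.contentUrl` download link, as with other file lists in the v3.1 API; verify the shape against your own response before relying on it.
-
-```csharp
-using System;
-using System.Net.Http;
-using System.Threading.Tasks;
-using Newtonsoft.Json.Linq;
-
-class EndpointLogListing
-{
-    static async Task Main()
-    {
-        // Placeholders: your Speech resource key, region, and endpoint ID.
-        var key = "YourSubscriptionKey";
-        var logsUri = "https://YourServiceRegion.api.cognitive.microsoft.com/speechtotext/v3.1/endpoints/YourEndpointId/files/logs";
-
-        using var http = new HttpClient();
-        http.DefaultRequestHeaders.Add("Ocp-Apim-Subscription-Key", key);
-
-        var listing = JObject.Parse(await http.GetStringAsync(logsUri));
-        foreach (var file in listing["values"] ?? new JArray())
-        {
-            // Assumed fields: "name" and "links.contentUrl" for each log file.
-            var name = (string)file["name"];
-            var contentUrl = (string)file["links"]?["contentUrl"];
-            Console.WriteLine($"{name}: {contentUrl}");
-        }
-    }
-}
-```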
-
-## Next steps
-
-- [CI/CD for Custom Speech](how-to-custom-speech-continuous-integration-continuous-deployment.md)
-- [Custom Speech model lifecycle](how-to-custom-speech-model-and-endpoint-lifecycle.md)
cognitive-services How To Custom Speech Evaluate Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/how-to-custom-speech-evaluate-data.md
- Title: Test accuracy of a Custom Speech model - Speech service-
-description: In this article, you learn how to quantitatively measure and improve the quality of our speech to text model or your custom model.
-Previously updated : 11/29/2022
-zone_pivot_groups: speech-studio-cli-rest
-show_latex: true
-no-loc: [$$, '\times', '\over']
--
-# Test accuracy of a Custom Speech model
-
-In this article, you learn how to quantitatively measure and improve the accuracy of the base speech to text model or your own custom models. [Audio + human-labeled transcript](how-to-custom-speech-test-and-train.md#audio--human-labeled-transcript-data-for-training-or-testing) data is required to test accuracy. You should provide from 30 minutes to 5 hours of representative audio.
--
-## Create a test
-
-You can test the accuracy of your custom model by creating a test. A test requires a collection of audio files and their corresponding transcriptions. You can compare a custom model's accuracy with a speech to text base model or another custom model. After you [get](#get-test-results) the test results, [evaluate](#evaluate-word-error-rate) the word error rate (WER) compared to speech recognition results.
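-
-For reference, WER is calculated from the number of insertions (I), deletions (D), and substitutions (S) relative to the total number of words (N) in the human-labeled transcript:
-
-$$WER = {{I+D+S}\over N} \times 100$$
-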
--
-Follow these steps to create a test:
-
-1. Sign in to the [Speech Studio](https://aka.ms/speechstudio/customspeech).
-1. Select **Custom Speech** > Your project name > **Test models**.
-1. Select **Create new test**.
-1. Select **Evaluate accuracy** > **Next**.
-1. Select one audio + human-labeled transcription dataset, and then select **Next**. If there aren't any datasets available, cancel the setup, and then go to the **Speech datasets** menu to [upload datasets](how-to-custom-speech-upload-data.md).
-
- > [!NOTE]
- > It's important to select an acoustic dataset that's different from the one you used with your model. This approach can provide a more realistic sense of the model's performance.
-
-1. Select up to two models to evaluate, and then select **Next**.
-1. Enter the test name and description, and then select **Next**.
-1. Review the test details, and then select **Save and close**.
---
-To create a test, use the `spx csr evaluation create` command. Construct the request parameters according to the following instructions:
-
-- Set the `project` parameter to the ID of an existing project. This is recommended so that you can also view the test in Speech Studio. You can run the `spx csr project list` command to get available projects.
-- Set the required `model1` parameter to the ID of a model that you want to test.
-- Set the required `model2` parameter to the ID of another model that you want to test. If you don't want to compare two models, use the same model for both `model1` and `model2`.
-- Set the required `dataset` parameter to the ID of a dataset that you want to use for the test.
-- Set the `language` parameter; otherwise, the Speech CLI sets "en-US" by default. This should be the locale of the dataset contents. The locale can't be changed later. The Speech CLI `language` parameter corresponds to the `locale` property in the JSON request and response.
-- Set the required `name` parameter. This is the name that will be displayed in the Speech Studio. The Speech CLI `name` parameter corresponds to the `displayName` property in the JSON request and response.
-
-Here's an example Speech CLI command that creates a test:
-
-```azurecli-interactive
-spx csr evaluation create --api-version v3.1 --project 9f8c4cbb-f9a5-4ec1-8bb0-53cfa9221226 --dataset be378d9d-a9d7-4d4a-820a-e0432e8678c7 --model1 ff43e922-e3e6-4bf0-8473-55c08fd68048 --model2 1aae1070-7972-47e9-a977-87e3b05c457d --name "My Evaluation" --description "My Evaluation Description"
-```
-
-You should receive a response body in the following format:
-
-```json
-{
- "self": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.1/evaluations/8bfe6b05-f093-4ab4-be7d-180374b751ca",
- "model1": {
- "self": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.1/models/ff43e922-e3e6-4bf0-8473-55c08fd68048"
- },
- "model2": {
- "self": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.1/models/base/1aae1070-7972-47e9-a977-87e3b05c457d"
- },
- "dataset": {
- "self": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.1/datasets/be378d9d-a9d7-4d4a-820a-e0432e8678c7"
- },
- "transcription2": {
- "self": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.1/transcriptions/6eaf6a15-6076-466a-83d4-a30dba78ca63"
- },
- "transcription1": {
- "self": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.1/transcriptions/0c5b1630-fadf-444d-827f-d6da9c0cf0c3"
- },
- "project": {
- "self": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.1/projects/9f8c4cbb-f9a5-4ec1-8bb0-53cfa9221226"
- },
- "links": {
- "files": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.1/evaluations/8bfe6b05-f093-4ab4-be7d-180374b751ca/files"
- },
- "properties": {
- "wordErrorRate2": -1.0,
- "wordErrorRate1": -1.0,
- "sentenceErrorRate2": -1.0,
- "sentenceCount2": -1,
- "wordCount2": -1,
- "correctWordCount2": -1,
- "wordSubstitutionCount2": -1,
- "wordDeletionCount2": -1,
- "wordInsertionCount2": -1,
- "sentenceErrorRate1": -1.0,
- "sentenceCount1": -1,
- "wordCount1": -1,
- "correctWordCount1": -1,
- "wordSubstitutionCount1": -1,
- "wordDeletionCount1": -1,
- "wordInsertionCount1": -1
- },
- "lastActionDateTime": "2022-05-20T16:42:43Z",
- "status": "NotStarted",
- "createdDateTime": "2022-05-20T16:42:43Z",
- "locale": "en-US",
- "displayName": "My Evaluation",
- "description": "My Evaluation Description"
-}
-```
-
-The top-level `self` property in the response body is the evaluation's URI. Use this URI to get details about the project and test results. You also use this URI to update or delete the evaluation.
-
-For Speech CLI help with evaluations, run the following command:
-
-```azurecli-interactive
-spx help csr evaluation
-```
---
-To create a test, use the [Evaluations_Create](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/operations/Evaluations_Create) operation of the [Speech to text REST API](rest-speech-to-text.md). Construct the request body according to the following instructions:
-- Set the `project` property to the URI of an existing project. This is recommended so that you can also view the test in Speech Studio. You can make a [Projects_List](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/operations/Projects_List) request to get available projects.
-- Set the `testingKind` property to `Evaluation` within `customProperties`. If you don't specify `Evaluation`, the test is treated as a quality inspection test. Whether the `testingKind` property is set to `Evaluation` or `Inspection`, or not set, you can access the accuracy scores via the API, but not in the Speech Studio.
-- Set the required `model1` property to the URI of a model that you want to test.
-- Set the required `model2` property to the URI of another model that you want to test. If you don't want to compare two models, use the same model for both `model1` and `model2`.
-- Set the required `dataset` property to the URI of a dataset that you want to use for the test.
-- Set the required `locale` property. This should be the locale of the dataset contents. The locale can't be changed later.
-- Set the required `displayName` property. This is the name that will be displayed in the Speech Studio.
-Make an HTTP POST request using the URI as shown in the following example. Replace `YourSubscriptionKey` with your Speech resource key, replace `YourServiceRegion` with your Speech resource region, and set the request body properties as previously described.
-
-```azurecli-interactive
-curl -v -X POST -H "Ocp-Apim-Subscription-Key: YourSubscriptionKey" -H "Content-Type: application/json" -d '{
- "model1": {
- "self": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.1/models/ff43e922-e3e6-4bf0-8473-55c08fd68048"
- },
- "model2": {
- "self": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.1/models/base/1aae1070-7972-47e9-a977-87e3b05c457d"
- },
- "dataset": {
- "self": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.1/datasets/be378d9d-a9d7-4d4a-820a-e0432e8678c7"
- },
- "project": {
- "self": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.1/projects/9f8c4cbb-f9a5-4ec1-8bb0-53cfa9221226"
- },
- "displayName": "My Evaluation",
- "description": "My Evaluation Description",
- "customProperties": {
- "testingKind": "Evaluation"
- },
- "locale": "en-US"
-}' "https://YourServiceRegion.api.cognitive.microsoft.com/speechtotext/v3.1/evaluations"
-```
-
-You should receive a response body in the following format:
-
-```json
-{
- "self": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.1/evaluations/8bfe6b05-f093-4ab4-be7d-180374b751ca",
- "model1": {
- "self": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.1/models/ff43e922-e3e6-4bf0-8473-55c08fd68048"
- },
- "model2": {
- "self": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.1/models/base/1aae1070-7972-47e9-a977-87e3b05c457d"
- },
- "dataset": {
- "self": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.1/datasets/be378d9d-a9d7-4d4a-820a-e0432e8678c7"
- },
- "transcription2": {
- "self": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.1/transcriptions/6eaf6a15-6076-466a-83d4-a30dba78ca63"
- },
- "transcription1": {
- "self": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.1/transcriptions/0c5b1630-fadf-444d-827f-d6da9c0cf0c3"
- },
- "project": {
- "self": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.1/projects/9f8c4cbb-f9a5-4ec1-8bb0-53cfa9221226"
- },
- "links": {
- "files": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.1/evaluations/8bfe6b05-f093-4ab4-be7d-180374b751ca/files"
- },
- "properties": {
- "wordErrorRate2": -1.0,
- "wordErrorRate1": -1.0,
- "sentenceErrorRate2": -1.0,
- "sentenceCount2": -1,
- "wordCount2": -1,
- "correctWordCount2": -1,
- "wordSubstitutionCount2": -1,
- "wordDeletionCount2": -1,
- "wordInsertionCount2": -1,
- "sentenceErrorRate1": -1.0,
- "sentenceCount1": -1,
- "wordCount1": -1,
- "correctWordCount1": -1,
- "wordSubstitutionCount1": -1,
- "wordDeletionCount1": -1,
- "wordInsertionCount1": -1
- },
- "lastActionDateTime": "2022-05-20T16:42:43Z",
- "status": "NotStarted",
- "createdDateTime": "2022-05-20T16:42:43Z",
- "locale": "en-US",
- "displayName": "My Evaluation",
- "description": "My Evaluation Description",
- "customProperties": {
- "testingKind": "Evaluation"
- }
-}
-```
-
-The top-level `self` property in the response body is the evaluation's URI. Use this URI to [get](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/operations/Evaluations_Get) details about the evaluation's project and test results. You also use this URI to [update](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/operations/Evaluations_Update) or [delete](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/operations/Evaluations_Delete) the evaluation.
--
-## Get test results
-
-You should get the test results and [evaluate](#evaluate-word-error-rate) the word error rate (WER) compared to speech recognition results.
--
-Follow these steps to get test results:
-
-1. Sign in to the [Speech Studio](https://aka.ms/speechstudio/customspeech).
-1. Select **Custom Speech** > Your project name > **Test models**.
-1. Select the link by test name.
-1. After the test is complete, as indicated by the status set to *Succeeded*, you should see results that include the WER number for each tested model.
-
-This page lists all the utterances in your dataset and the recognition results, alongside the transcription from the submitted dataset. You can toggle various error types, including insertion, deletion, and substitution. By listening to the audio and comparing recognition results in each column, you can decide which model meets your needs and determine where additional training and improvements are required.
---
-To get test results, use the `spx csr evaluation status` command. Construct the request parameters according to the following instructions:
-- Set the required `evaluation` parameter to the ID of the evaluation for which you want to get test results.
-Here's an example Speech CLI command that gets test results:
-
-```azurecli-interactive
-spx csr evaluation status --api-version v3.1 --evaluation 8bfe6b05-f093-4ab4-be7d-180374b751ca
-```
-
-The word error rates and more details are returned in the response body.
-
-You should receive a response body in the following format:
-
-```json
-{
- "self": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.1/evaluations/8bfe6b05-f093-4ab4-be7d-180374b751ca",
- "model1": {
- "self": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.1/models/ff43e922-e3e6-4bf0-8473-55c08fd68048"
- },
- "model2": {
- "self": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.1/models/base/1aae1070-7972-47e9-a977-87e3b05c457d"
- },
- "dataset": {
- "self": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.1/datasets/be378d9d-a9d7-4d4a-820a-e0432e8678c7"
- },
- "transcription2": {
- "self": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.1/transcriptions/6eaf6a15-6076-466a-83d4-a30dba78ca63"
- },
- "transcription1": {
- "self": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.1/transcriptions/0c5b1630-fadf-444d-827f-d6da9c0cf0c3"
- },
- "project": {
- "self": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.1/projects/9f8c4cbb-f9a5-4ec1-8bb0-53cfa9221226"
- },
- "links": {
- "files": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.1/evaluations/8bfe6b05-f093-4ab4-be7d-180374b751ca/files"
- },
- "properties": {
- "wordErrorRate2": 4.62,
- "wordErrorRate1": 4.6,
- "sentenceErrorRate2": 66.7,
- "sentenceCount2": 3,
- "wordCount2": 173,
- "correctWordCount2": 166,
- "wordSubstitutionCount2": 7,
- "wordDeletionCount2": 0,
- "wordInsertionCount2": 1,
- "sentenceErrorRate1": 66.7,
- "sentenceCount1": 3,
- "wordCount1": 174,
- "correctWordCount1": 166,
- "wordSubstitutionCount1": 7,
- "wordDeletionCount1": 1,
- "wordInsertionCount1": 0
- },
- "lastActionDateTime": "2022-05-20T16:42:56Z",
- "status": "Succeeded",
- "createdDateTime": "2022-05-20T16:42:43Z",
- "locale": "en-US",
- "displayName": "My Evaluation",
- "description": "My Evaluation Description",
- "customProperties": {
- "testingKind": "Evaluation"
- }
-}
-```
-
-For Speech CLI help with evaluations, run the following command:
-
-```azurecli-interactive
-spx help csr evaluation
-```
---
-To get test results, start by using the [Evaluations_Get](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/operations/Evaluations_Get) operation of the [Speech to text REST API](rest-speech-to-text.md).
-
-Make an HTTP GET request using the URI as shown in the following example. Replace `YourEvaluationId` with your evaluation ID, replace `YourSubscriptionKey` with your Speech resource key, and replace `YourServiceRegion` with your Speech resource region.
-
-```azurecli-interactive
-curl -v -X GET "https://YourServiceRegion.api.cognitive.microsoft.com/speechtotext/v3.1/evaluations/YourEvaluationId" -H "Ocp-Apim-Subscription-Key: YourSubscriptionKey"
-```
-
-The word error rates and more details are returned in the response body.
-
-You should receive a response body in the following format:
-
-```json
-{
- "self": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.1/evaluations/8bfe6b05-f093-4ab4-be7d-180374b751ca",
- "model1": {
- "self": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.1/models/ff43e922-e3e6-4bf0-8473-55c08fd68048"
- },
- "model2": {
- "self": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.1/models/base/1aae1070-7972-47e9-a977-87e3b05c457d"
- },
- "dataset": {
- "self": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.1/datasets/be378d9d-a9d7-4d4a-820a-e0432e8678c7"
- },
- "transcription2": {
- "self": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.1/transcriptions/6eaf6a15-6076-466a-83d4-a30dba78ca63"
- },
- "transcription1": {
- "self": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.1/transcriptions/0c5b1630-fadf-444d-827f-d6da9c0cf0c3"
- },
- "project": {
- "self": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.1/projects/9f8c4cbb-f9a5-4ec1-8bb0-53cfa9221226"
- },
- "links": {
- "files": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.1/evaluations/8bfe6b05-f093-4ab4-be7d-180374b751ca/files"
- },
- "properties": {
- "wordErrorRate2": 4.62,
- "wordErrorRate1": 4.6,
- "sentenceErrorRate2": 66.7,
- "sentenceCount2": 3,
- "wordCount2": 173,
- "correctWordCount2": 166,
- "wordSubstitutionCount2": 7,
- "wordDeletionCount2": 0,
- "wordInsertionCount2": 1,
- "sentenceErrorRate1": 66.7,
- "sentenceCount1": 3,
- "wordCount1": 174,
- "correctWordCount1": 166,
- "wordSubstitutionCount1": 7,
- "wordDeletionCount1": 1,
- "wordInsertionCount1": 0
- },
- "lastActionDateTime": "2022-05-20T16:42:56Z",
- "status": "Succeeded",
- "createdDateTime": "2022-05-20T16:42:43Z",
- "locale": "en-US",
- "displayName": "My Evaluation",
- "description": "My Evaluation Description",
- "customProperties": {
- "testingKind": "Evaluation"
- }
-}
-```
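If you'd rather inspect the results programmatically than read the raw JSON, the following minimal Python sketch fetches the same evaluation and prints the two models' word error rates. It assumes the v3.1 endpoint format and the placeholder values (`YourServiceRegion`, `YourSubscriptionKey`, `YourEvaluationId`) from the curl example above; it isn't an official SDK sample.

```python
# Minimal sketch: fetch an evaluation and compare the word error rates (WER) of
# the two tested models. Replace the placeholder region, key, and evaluation ID.
import json
import urllib.request

region = "YourServiceRegion"        # placeholder, as in the curl example
key = "YourSubscriptionKey"         # placeholder Speech resource key
evaluation_id = "YourEvaluationId"  # placeholder evaluation ID

url = f"https://{region}.api.cognitive.microsoft.com/speechtotext/v3.1/evaluations/{evaluation_id}"
request = urllib.request.Request(url, headers={"Ocp-Apim-Subscription-Key": key})

with urllib.request.urlopen(request) as response:
    evaluation = json.load(response)

props = evaluation["properties"]
print(f"Status: {evaluation['status']}")
print(f"model1 WER: {props['wordErrorRate1']}")
print(f"model2 WER: {props['wordErrorRate2']}")
```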
---
-## Evaluate word error rate
-
-The industry standard for measuring model accuracy is [word error rate (WER)](https://en.wikipedia.org/wiki/Word_error_rate). WER counts the number of incorrect words identified during recognition, and divides the sum by the total number of words provided in the human-labeled transcript (N).
-
-Incorrectly identified words fall into three categories:
-
-* Insertion (I): Words that are incorrectly added in the hypothesis transcript
-* Deletion (D): Words that are undetected in the hypothesis transcript
-* Substitution (S): Words that were substituted between reference and hypothesis
-
-In the Speech Studio, the quotient is multiplied by 100 and shown as a percentage. The Speech CLI and REST API results aren't multiplied by 100.
-
-$$
-WER = {{I+D+S}\over N} \times 100
-$$
-
-Here's an example that shows incorrectly identified words, when compared to the human-labeled transcript:
-
-![Screenshot showing an example of incorrectly identified words.](./media/custom-speech/custom-speech-dis-words.png)
-
-The speech recognition result erred as follows:
-* Insertion (I): Added the word "a"
-* Deletion (D): Deleted the word "are"
-* Substitution (S): Substituted the word "Jones" for "John"
-
-The word error rate from the previous example is 60%.
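To make the arithmetic explicit, here's a small sketch that applies the WER formula to this example: one insertion, one deletion, and one substitution against the five reference words implied by the 60% result.

```python
# Worked example of the WER formula using the error counts from the screenshot:
# one insertion, one deletion, and one substitution. A 60% WER implies a
# five-word human-labeled reference transcript (N = 5).
insertions, deletions, substitutions = 1, 1, 1
reference_word_count = 5  # N

wer = (insertions + deletions + substitutions) / reference_word_count * 100
print(f"WER = {wer:.0f}%")  # WER = 60%
```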
-
-If you want to replicate WER measurements locally, you can use the sclite tool from the [NIST Scoring Toolkit (SCTK)](https://github.com/usnistgov/SCTK).
-
-## Resolve errors and improve WER
-
-You can use the WER calculation from the machine recognition results to evaluate the quality of the model you're using with your app, tool, or product. A WER of 5-10% is considered to be good quality and is ready to use. A WER of 20% is acceptable, but you might want to consider additional training. A WER of 30% or more signals poor quality and requires customization and training.
-
-How the errors are distributed is important. When many deletion errors are encountered, it's usually because of weak audio signal strength. To resolve this issue, you need to collect audio data closer to the source. Insertion errors mean that the audio was recorded in a noisy environment and crosstalk might be present, causing recognition issues. Substitution errors are often encountered when an insufficient sample of domain-specific terms has been provided as either human-labeled transcriptions or related text.
-
-By analyzing individual files, you can determine what type of errors exist, and which errors are unique to a specific file. Understanding issues at the file level will help you target improvements.
-
-## Example scenario outcomes
-
-Speech recognition scenarios vary by audio quality and language (vocabulary and speaking style). The following table examines four common scenarios:
-
-| Scenario | Audio quality | Vocabulary | Speaking style |
-|-|-|-|-|
-| Call center | Low, 8&nbsp;kHz, could be two people on one audio channel, could be compressed | Narrow, unique to domain and products | Conversational, loosely structured |
-| Voice assistant, such as Cortana, or a drive-through window | High, 16&nbsp;kHz | Entity-heavy (song titles, products, locations) | Clearly stated words and phrases |
-| Dictation (instant message, notes, search) | High, 16&nbsp;kHz | Varied | Note-taking |
-| Video closed captioning | Varied, including varied microphone use, added music | Varied, from meetings, recited speech, musical lyrics | Read, prepared, or loosely structured |
-
-Different scenarios produce different quality outcomes. The following table examines how content from these four scenarios rates in the [WER](how-to-custom-speech-evaluate-data.md). The table shows which error types are most common in each scenario. The insertion, substitution, and deletion error rates help you determine what kind of data to add to improve the model.
-
-| Scenario | Speech recognition quality | Insertion errors | Deletion errors | Substitution errors |
-| - | - | - | - | - |
-| Call center | Medium<br>(<&nbsp;30%&nbsp;WER) | Low, except when other people talk in the background | Can be high. Call centers can be noisy, and overlapping speakers can confuse the model | Medium. Products and people's names can cause these errors |
-| Voice assistant | High<br>(can be <&nbsp;10%&nbsp;WER) | Low | Low | Medium, due to song titles, product names, or locations |
-| Dictation | High<br>(can be <&nbsp;10%&nbsp;WER) | Low | Low | High |
-| Video closed captioning | Depends on video type (can be <&nbsp;50%&nbsp;WER) | Low | Can be high because of music, noises, microphone quality | Jargon might cause these errors |
--
-## Next steps
-
-* [Train a model](how-to-custom-speech-train-model.md)
-* [Deploy a model](how-to-custom-speech-deploy-model.md)
-* [Use the online transcription editor](how-to-custom-speech-transcription-editor.md)
cognitive-services How To Custom Speech Human Labeled Transcriptions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/how-to-custom-speech-human-labeled-transcriptions.md
- Title: Human-labeled transcriptions guidelines - Speech service
-description: You use human-labeled transcriptions with your audio data to improve speech recognition accuracy. This is especially helpful when words are deleted or incorrectly replaced.
- Previously updated : 05/08/2022
-# How to create human-labeled transcriptions
-
-Human-labeled transcriptions are word-by-word transcriptions of an audio file. You use human-labeled transcriptions to improve recognition accuracy, especially when words are deleted or incorrectly replaced. This guide can help you create high-quality transcriptions.
-
-A large sample of transcription data is required to improve recognition. We suggest providing between 1 and 20 hours of audio data. The Speech service uses up to 20 hours of audio for training. This guide is broken up by locale, with sections for US English, German, Japanese, and Mandarin Chinese.
-
-The transcriptions for all WAV files are contained in a single plain-text file (.txt or .tsv). Each line of the transcription file contains the name of one of the audio files, followed by the corresponding transcription. The file name and transcription are separated by a tab (`\t`).
-
-For example:
-
-```txt
-speech01.wav speech recognition is awesome
-speech02.wav the quick brown fox jumped all over the place
-speech03.wav the lazy dog was not amused
-```
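As a quick illustration of this layout, here's a minimal Python sketch that writes such a file. The file names and transcriptions are only example values; use `utf-8-sig` (UTF-8 with a byte-order marker) for locales that require it, as described later in this article.

```python
# Minimal sketch: write a tab-separated (.txt) transcription file in the layout
# described above. File names and transcriptions are example values only.
transcriptions = {
    "speech01.wav": "speech recognition is awesome",
    "speech02.wav": "the quick brown fox jumped all over the place",
    "speech03.wav": "the lazy dog was not amused",
}

# For locales that require UTF-8 with a byte-order marker, use encoding="utf-8-sig".
with open("trans.txt", "w", encoding="utf-8") as output_file:
    for audio_file, transcript in transcriptions.items():
        # Each line: audio file name, a tab (\t), then the transcription.
        output_file.write(f"{audio_file}\t{transcript}\n")
```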
-
-The transcriptions are text-normalized so the system can process them. However, you must do some important normalizations before you upload the dataset.
-
-Human-labeled transcriptions for languages other than English and Mandarin Chinese must be UTF-8 encoded with a byte-order marker. For transcription requirements specific to other locales, see the following sections.
-
-## en-US
-
-Human-labeled transcriptions for English audio must be provided as plain text, only using ASCII characters. Avoid the use of Latin-1 or Unicode punctuation characters. These characters are often inadvertently added when copying text from a word-processing application or scraping data from web pages. If these characters are present, make sure to update them with the appropriate ASCII substitution.
-
-Here are a few examples:
-
-| Characters to avoid | Substitution | Notes |
-| - | - | - |
-| “Hello world” | "Hello world" | The opening and closing quotation marks have been substituted with appropriate ASCII characters. |
-| John’s day | John's day | The apostrophe has been substituted with the appropriate ASCII character. |
-| It was good—no, it was great! | it was good--no, it was great! | The em dash was substituted with two hyphens. |
-
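If you need to clean up existing transcripts, a small script can apply the substitutions shown in the preceding table. The mapping below is only a starting point, not a complete list of characters to handle.

```python
# Minimal sketch: replace a few common Unicode punctuation characters with the
# ASCII substitutions shown in the table above, then flag anything non-ASCII
# that remains.
ASCII_SUBSTITUTIONS = {
    "\u201c": '"', "\u201d": '"',  # curly double quotes
    "\u2018": "'", "\u2019": "'",  # curly single quotes / apostrophe
    "\u2014": "--",                # em dash -> two hyphens
}

def to_ascii(text: str) -> str:
    for unicode_char, replacement in ASCII_SUBSTITUTIONS.items():
        text = text.replace(unicode_char, replacement)
    leftovers = {ch for ch in text if ord(ch) > 127}
    if leftovers:
        print("Still non-ASCII:", leftovers)
    return text

print(to_ascii("It was good\u2014no, it was great!"))  # It was good--no, it was great!
```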
-### Text normalization for US English
-
-Text normalization is the transformation of words into a consistent format used when training a model. Some normalization rules are applied to text automatically, however, we recommend using these guidelines as you prepare your human-labeled transcription data:
-- Write out abbreviations in words.
-- Write out non-standard numeric strings in words (such as accounting terms).
-- Non-alphabetic characters or mixed alphanumeric characters should be transcribed as pronounced.
-- Abbreviations that are pronounced as words shouldn't be edited (such as "radar", "laser", "RAM", or "NATO").
-- Write out abbreviations that are pronounced as separate letters with each letter separated by a space.
-- If you use audio, transcribe numbers as words that match the audio (for example, "101" could be pronounced as "one oh one" or "one hundred and one").
-- Avoid repeating characters, words, or groups of words more than three times, such as "yeah yeah yeah yeah". Lines with such repetitions might be dropped by the Speech service.
-Here are a few examples of normalization that you should perform on the transcription:
-
-| Original text | Text after normalization (human) |
-| - | - |
-| Dr. Bruce Banner | Doctor Bruce Banner |
-| James Bond, 007 | James Bond, double oh seven |
-| Ke$ha | Kesha |
-| How long is the 2x4 | How long is the two by four |
-| The meeting goes from 1-3pm | The meeting goes from one to three pm |
-| My blood type is O+ | My blood type is O positive |
-| Water is H20 | Water is H 2 O |
-| Play OU812 by Van Halen | Play O U 8 1 2 by Van Halen |
-| UTF-8 with BOM | U T F 8 with BOM |
-| It costs \$3.14 | It costs three fourteen |
-
-The following normalization rules are automatically applied to transcriptions:
-- Use lowercase letters.
-- Remove all punctuation except apostrophes within words.
-- Expand numbers into words/spoken form, such as dollar amounts.
-Here are a few examples of normalization automatically performed on the transcription:
-
-| Original text | Text after normalization (automatic) |
-| - | - |
-| "Holy cow!" said Batman. | holy cow said batman |
-| "What?" said Batman's sidekick, Robin. | what said batman's sidekick robin |
-| Go get -em! | go get em |
-| I'm double-jointed | I'm double jointed |
-| 104 Elm Street | one oh four Elm street |
-| Tune to 102.7 | tune to one oh two point seven |
-| Pi is about 3.14 | pi is about three point one four |
-
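For a rough sense of what the automatic rules do, the following sketch approximates two of them: lowercasing and removing punctuation except apostrophes within words. It's only an illustration; the Speech service's own normalization is more extensive and also expands numbers into spoken form.

```python
# Minimal sketch approximating two of the automatic en-US normalization rules:
# lowercase the text and remove punctuation except apostrophes within words.
# Number-to-words expansion is intentionally omitted here.
import re

def normalize(text: str) -> str:
    text = text.lower()
    # Keep letters, digits, whitespace, and apostrophes; drop other punctuation.
    text = re.sub(r"[^a-z0-9'\s]", " ", text)
    # Drop apostrophes that aren't inside a word (for example, single quotes).
    text = re.sub(r"(?<![a-z])'|'(?![a-z])", " ", text)
    return " ".join(text.split())

print(normalize('"Holy cow!" said Batman.'))                   # holy cow said batman
print(normalize("\"What?\" said Batman's sidekick, Robin."))   # what said batman's sidekick robin
```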
-## de-DE
-
-Human-labeled transcriptions for German audio must be UTF-8 encoded with a byte-order marker.
-
-### Text normalization for German
-
-Text normalization is the transformation of words into a consistent format used when training a model. Some normalization rules are applied to text automatically, however, we recommend using these guidelines as you prepare your human-labeled transcription data:
-- Write decimal points as "," and not ".".
-- Write time separators as ":" and not "." (for example: 12:00 Uhr).
-- Abbreviations such as "ca." aren't replaced. We recommend that you use the full spoken form.
-- The four main mathematical operators (+, -, \*, and /) are removed. We recommend replacing them with the written form: "plus," "minus," "mal," and "geteilt."
-- Comparison operators are removed (=, <, and >). We recommend replacing them with "gleich," "kleiner als," and "grösser als."
-- Write fractions, such as 3/4, in written form (for example: "drei viertel" instead of 3/4).
-- Replace the "€" symbol with its written form "Euro."
-Here are a few examples of normalization that you should perform on the transcription:
-
-| Original text | Text after user normalization | Text after system normalization |
-| - | -- | - |
-| Es ist 12.23 Uhr | Es ist 12:23 Uhr | es ist zwölf uhr drei und zwanzig uhr |
-| {12.45} | {12,45} | zwölf komma vier fünf |
-| 2 + 3 - 4 | 2 plus 3 minus 4 | zwei plus drei minus vier |
-
-The following normalization rules are automatically applied to transcriptions:
-- Use lowercase letters for all text.
-- Remove all punctuation, including various types of quotation marks ("test", 'test', „test", and «test» are OK).
-- Discard rows with any special characters from this set: ¢ ¤ ¥ ¦ § © ª ¬ ® ° ± ² µ × ÿ ج¬.
-- Expand numbers to spoken form, including dollar or Euro amounts.
-- Accept umlauts only for a, o, and u. Others will be replaced by "th" or be discarded.
-Here are a few examples of normalization automatically performed on the transcription:
-
-| Original text | Text after normalization |
-| - | - |
-| Frankfurter Ring | frankfurter ring |
-| ¡Eine Frage! | eine frage |
-| Wir, haben | wir haben |
-
-## ja-JP
-
-In Japanese (ja-JP), there's a maximum length of 90 characters for each sentence. Lines with longer sentences will be discarded. To add longer text, insert a period in between.
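To catch over-length lines before you upload a ja-JP dataset, a simple check like the following sketch (which assumes the tab-separated layout described earlier and a hypothetical `trans.txt` file name) flags transcriptions longer than 90 characters.

```python
# Minimal sketch: flag ja-JP transcription lines longer than 90 characters.
# Assumes the tab-separated "file<TAB>transcription" layout described earlier.
MAX_JA_JP_CHARS = 90

with open("trans.txt", encoding="utf-8") as transcription_file:
    for line_number, line in enumerate(transcription_file, start=1):
        audio_file, _, transcript = line.rstrip("\n").partition("\t")
        if len(transcript) > MAX_JA_JP_CHARS:
            print(f"Line {line_number} ({audio_file}): {len(transcript)} characters, may be discarded")
```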
-
-## zh-CN
-
-Human-labeled transcriptions for Mandarin Chinese audio must be UTF-8 encoded with a byte-order marker. Avoid the use of half-width punctuation characters. These characters can be included inadvertently when you prepare the data in a word-processing program or scrape data from web pages. If these characters are present, make sure to update them with the appropriate full-width substitution.
-
-Here are a few examples:
-
-| Characters to avoid | Substitution | Notes |
-| - | -- | -- |
-| "你好" | "你好" | The opening and closing quotations marks have been substituted with appropriate characters. |
-| 需要什么帮助? | 需要什么帮助?| The question mark has been substituted with appropriate character. |
-
-### Text normalization for Mandarin Chinese
-
-Text normalization is the transformation of words into a consistent format used when training a model. Some normalization rules are applied to text automatically, however, we recommend using these guidelines as you prepare your human-labeled transcription data:
-- Write out abbreviations in words.
-- Write out numeric strings in spoken form.
-Here are a few examples of normalization that you should perform on the transcription:
-
-| Original text | Text after normalization |
-| - | - |
-| 我今年 21 | 我今年二十一 |
-| 3 号楼 504 | 三号 楼 五 零 四 |
-
-The following normalization rules are automatically applied to transcriptions:
-- Remove all punctuation.
-- Expand numbers to spoken form.
-- Convert full-width letters to half-width letters.
-- Use uppercase letters for all English words.
-Here are some examples of automatic transcription normalization:
-
-| Original text | Text after normalization |
-| - | - |
-| 3.1415 | 三 点 一 四 一 五 |
-| ¥ 3.5 | 三 元 五 角 |
-| w f y z | W F Y Z |
-| 1992 年 8 月 8 日 | 一 九 九 二 年 八 月 八 日 |
-| 你吃饭了吗? | 你 吃饭 了 吗 |
-| 下午 5:00 的航班 | 下午 五点 的 航班 |
-| 我今年 21 岁 | 我 今年 二十 一 岁 |
-
-## Next Steps
-- [Test model quantitatively](how-to-custom-speech-evaluate-data.md)
-- [Test recognition quality](how-to-custom-speech-inspect-data.md)
-- [Train your model](how-to-custom-speech-train-model.md)
cognitive-services How To Custom Speech Inspect Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/how-to-custom-speech-inspect-data.md
- Title: Test recognition quality of a Custom Speech model - Speech service
-description: Custom Speech lets you qualitatively inspect the recognition quality of a model. You can play back uploaded audio and determine if the provided recognition result is correct.
- Previously updated : 11/29/2022
-zone_pivot_groups: speech-studio-cli-rest
-# Test recognition quality of a Custom Speech model
-
-You can inspect the recognition quality of a Custom Speech model in the [Speech Studio](https://aka.ms/speechstudio/customspeech). You can play back uploaded audio and determine if the provided recognition result is correct. After a test has been successfully created, you can see how a model transcribed the audio dataset, or compare results from two models side by side.
-
-Side-by-side model testing is useful to validate which speech recognition model is best for an application. For an objective measure of accuracy, which requires transcription datasets as input, see [Test model quantitatively](how-to-custom-speech-evaluate-data.md).
--
-## Create a test
--
-Follow these instructions to create a test:
-
-1. Sign in to the [Speech Studio](https://aka.ms/speechstudio/customspeech).
-1. Navigate to **Speech Studio** > **Custom Speech** and select your project name from the list.
-1. Select **Test models** > **Create new test**.
-1. Select **Inspect quality (Audio-only data)** > **Next**.
-1. Choose an audio dataset that you'd like to use for testing, and then select **Next**. If there aren't any datasets available, cancel the setup, and then go to the **Speech datasets** menu to [upload datasets](how-to-custom-speech-upload-data.md).
-
- :::image type="content" source="media/custom-speech/custom-speech-choose-test-data.png" alt-text="Screenshot of choosing a dataset dialog":::
-
-1. Choose one or two models to evaluate and compare accuracy.
-1. Enter the test name and description, and then select **Next**.
-1. Review your settings, and then select **Save and close**.
---
-To create a test, use the `spx csr evaluation create` command. Construct the request parameters according to the following instructions:
-- Set the `project` parameter to the ID of an existing project. This is recommended so that you can also view the test in Speech Studio. You can run the `spx csr project list` command to get available projects.
-- Set the required `model1` parameter to the ID of a model that you want to test.
-- Set the required `model2` parameter to the ID of another model that you want to test. If you don't want to compare two models, use the same model for both `model1` and `model2`.
-- Set the required `dataset` parameter to the ID of a dataset that you want to use for the test.
-- Set the `language` parameter, otherwise the Speech CLI will set "en-US" by default. This should be the locale of the dataset contents. The locale can't be changed later. The Speech CLI `language` parameter corresponds to the `locale` property in the JSON request and response.
-- Set the required `name` parameter. This is the name that will be displayed in the Speech Studio. The Speech CLI `name` parameter corresponds to the `displayName` property in the JSON request and response.
-Here's an example Speech CLI command that creates a test:
-
-```azurecli-interactive
-spx csr evaluation create --api-version v3.1 --project 9f8c4cbb-f9a5-4ec1-8bb0-53cfa9221226 --dataset be378d9d-a9d7-4d4a-820a-e0432e8678c7 --model1 ff43e922-e3e6-4bf0-8473-55c08fd68048 --model2 1aae1070-7972-47e9-a977-87e3b05c457d --name "My Inspection" --description "My Inspection Description"
-```
-
-You should receive a response body in the following format:
-
-```json
-{
- "self": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.1/evaluations/8bfe6b05-f093-4ab4-be7d-180374b751ca",
- "model1": {
- "self": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.1/models/ff43e922-e3e6-4bf0-8473-55c08fd68048"
- },
- "model2": {
- "self": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.1/models/base/1aae1070-7972-47e9-a977-87e3b05c457d"
- },
- "dataset": {
- "self": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.1/datasets/be378d9d-a9d7-4d4a-820a-e0432e8678c7"
- },
- "transcription2": {
- "self": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.1/transcriptions/6eaf6a15-6076-466a-83d4-a30dba78ca63"
- },
- "transcription1": {
- "self": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.1/transcriptions/0c5b1630-fadf-444d-827f-d6da9c0cf0c3"
- },
- "project": {
- "self": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.1/projects/9f8c4cbb-f9a5-4ec1-8bb0-53cfa9221226"
- },
- "links": {
- "files": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.1/evaluations/8bfe6b05-f093-4ab4-be7d-180374b751ca/files"
- },
- "properties": {
- "wordErrorRate2": -1.0,
- "wordErrorRate1": -1.0,
- "sentenceErrorRate2": -1.0,
- "sentenceCount2": -1,
- "wordCount2": -1,
- "correctWordCount2": -1,
- "wordSubstitutionCount2": -1,
- "wordDeletionCount2": -1,
- "wordInsertionCount2": -1,
- "sentenceErrorRate1": -1.0,
- "sentenceCount1": -1,
- "wordCount1": -1,
- "correctWordCount1": -1,
- "wordSubstitutionCount1": -1,
- "wordDeletionCount1": -1,
- "wordInsertionCount1": -1
- },
- "lastActionDateTime": "2022-05-20T16:42:43Z",
- "status": "NotStarted",
- "createdDateTime": "2022-05-20T16:42:43Z",
- "locale": "en-US",
- "displayName": "My Inspection",
- "description": "My Inspection Description"
-}
-```
-
-The top-level `self` property in the response body is the evaluation's URI. Use this URI to get details about the project and test results. You also use this URI to update or delete the evaluation.
-
-For Speech CLI help with evaluations, run the following command:
-
-```azurecli-interactive
-spx help csr evaluation
-```
---
-To create a test, use the [Evaluations_Create](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/operations/Evaluations_Create) operation of the [Speech to text REST API](rest-speech-to-text.md). Construct the request body according to the following instructions:
-- Set the `project` property to the URI of an existing project. This is recommended so that you can also view the test in Speech Studio. You can make a [Projects_List](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/operations/Projects_List) request to get available projects.
-- Set the required `model1` property to the URI of a model that you want to test.
-- Set the required `model2` property to the URI of another model that you want to test. If you don't want to compare two models, use the same model for both `model1` and `model2`.
-- Set the required `dataset` property to the URI of a dataset that you want to use for the test.
-- Set the required `locale` property. This should be the locale of the dataset contents. The locale can't be changed later.
-- Set the required `displayName` property. This is the name that will be displayed in the Speech Studio.
-Make an HTTP POST request using the URI as shown in the following example. Replace `YourSubscriptionKey` with your Speech resource key, replace `YourServiceRegion` with your Speech resource region, and set the request body properties as previously described.
-
-```azurecli-interactive
-curl -v -X POST -H "Ocp-Apim-Subscription-Key: YourSubscriptionKey" -H "Content-Type: application/json" -d '{
- "model1": {
- "self": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.1/models/ff43e922-e3e6-4bf0-8473-55c08fd68048"
- },
- "model2": {
- "self": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.1/models/base/1aae1070-7972-47e9-a977-87e3b05c457d"
- },
- "dataset": {
- "self": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.1/datasets/be378d9d-a9d7-4d4a-820a-e0432e8678c7"
- },
- "project": {
- "self": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.1/projects/9f8c4cbb-f9a5-4ec1-8bb0-53cfa9221226"
- },
- "displayName": "My Inspection",
- "description": "My Inspection Description",
- "locale": "en-US"
-}' "https://YourServiceRegion.api.cognitive.microsoft.com/speechtotext/v3.1/evaluations"
-```
-
-You should receive a response body in the following format:
-
-```json
-{
- "self": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.1/evaluations/8bfe6b05-f093-4ab4-be7d-180374b751ca",
- "model1": {
- "self": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.1/models/ff43e922-e3e6-4bf0-8473-55c08fd68048"
- },
- "model2": {
- "self": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.1/models/base/1aae1070-7972-47e9-a977-87e3b05c457d"
- },
- "dataset": {
- "self": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.1/datasets/be378d9d-a9d7-4d4a-820a-e0432e8678c7"
- },
- "transcription2": {
- "self": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.1/transcriptions/6eaf6a15-6076-466a-83d4-a30dba78ca63"
- },
- "transcription1": {
- "self": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.1/transcriptions/0c5b1630-fadf-444d-827f-d6da9c0cf0c3"
- },
- "project": {
- "self": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.1/projects/9f8c4cbb-f9a5-4ec1-8bb0-53cfa9221226"
- },
- "links": {
- "files": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.1/evaluations/8bfe6b05-f093-4ab4-be7d-180374b751ca/files"
- },
- "properties": {
- "wordErrorRate2": -1.0,
- "wordErrorRate1": -1.0,
- "sentenceErrorRate2": -1.0,
- "sentenceCount2": -1,
- "wordCount2": -1,
- "correctWordCount2": -1,
- "wordSubstitutionCount2": -1,
- "wordDeletionCount2": -1,
- "wordInsertionCount2": -1,
- "sentenceErrorRate1": -1.0,
- "sentenceCount1": -1,
- "wordCount1": -1,
- "correctWordCount1": -1,
- "wordSubstitutionCount1": -1,
- "wordDeletionCount1": -1,
- "wordInsertionCount1": -1
- },
- "lastActionDateTime": "2022-05-20T16:42:43Z",
- "status": "NotStarted",
- "createdDateTime": "2022-05-20T16:42:43Z",
- "locale": "en-US",
- "displayName": "My Inspection",
- "description": "My Inspection Description"
-}
-```
-
-The top-level `self` property in the response body is the evaluation's URI. Use this URI to [get](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/operations/Evaluations_Get) details about the evaluation's project and test results. You also use this URI to [update](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/operations/Evaluations_Update) or [delete](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/operations/Evaluations_Delete) the evaluation.
---
-## Get test results
-
-You should get the test results and [inspect](#compare-transcription-with-audio) the audio datasets compared to transcription results for each model.
--
-Follow these steps to get test results:
-
-1. Sign in to the [Speech Studio](https://aka.ms/speechstudio/customspeech).
-1. Select **Custom Speech** > Your project name > **Test models**.
-1. Select the link by test name.
-1. After the test is complete, as indicated by the status set to *Succeeded*, you should see results that include the WER number for each tested model.
-
-This page lists all the utterances in your dataset and the recognition results, alongside the transcription from the submitted dataset. You can toggle various error types, including insertion, deletion, and substitution. By listening to the audio and comparing recognition results in each column, you can decide which model meets your needs and determine where additional training and improvements are required.
---
-To get test results, use the `spx csr evaluation status` command. Construct the request parameters according to the following instructions:
-- Set the required `evaluation` parameter to the ID of the evaluation for which you want to get test results.
-Here's an example Speech CLI command that gets test results:
-
-```azurecli-interactive
-spx csr evaluation status --api-version v3.1 --evaluation 8bfe6b05-f093-4ab4-be7d-180374b751ca
-```
-
-The models, audio dataset, transcriptions, and more details are returned in the response body.
-
-You should receive a response body in the following format:
-
-```json
-{
- "self": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.1/evaluations/8bfe6b05-f093-4ab4-be7d-180374b751ca",
- "model1": {
- "self": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.1/models/ff43e922-e3e6-4bf0-8473-55c08fd68048"
- },
- "model2": {
- "self": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.1/models/base/1aae1070-7972-47e9-a977-87e3b05c457d"
- },
- "dataset": {
- "self": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.1/datasets/be378d9d-a9d7-4d4a-820a-e0432e8678c7"
- },
- "transcription2": {
- "self": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.1/transcriptions/6eaf6a15-6076-466a-83d4-a30dba78ca63"
- },
- "transcription1": {
- "self": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.1/transcriptions/0c5b1630-fadf-444d-827f-d6da9c0cf0c3"
- },
- "project": {
- "self": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.1/projects/9f8c4cbb-f9a5-4ec1-8bb0-53cfa9221226"
- },
- "links": {
- "files": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.1/evaluations/8bfe6b05-f093-4ab4-be7d-180374b751ca/files"
- },
- "properties": {
- "wordErrorRate2": 4.62,
- "wordErrorRate1": 4.6,
- "sentenceErrorRate2": 66.7,
- "sentenceCount2": 3,
- "wordCount2": 173,
- "correctWordCount2": 166,
- "wordSubstitutionCount2": 7,
- "wordDeletionCount2": 0,
- "wordInsertionCount2": 1,
- "sentenceErrorRate1": 66.7,
- "sentenceCount1": 3,
- "wordCount1": 174,
- "correctWordCount1": 166,
- "wordSubstitutionCount1": 7,
- "wordDeletionCount1": 1,
- "wordInsertionCount1": 0
- },
- "lastActionDateTime": "2022-05-20T16:42:56Z",
- "status": "Succeeded",
- "createdDateTime": "2022-05-20T16:42:43Z",
- "locale": "en-US",
- "displayName": "My Inspection",
- "description": "My Inspection Description"
-}
-```
-
-For Speech CLI help with evaluations, run the following command:
-
-```azurecli-interactive
-spx help csr evaluation
-```
---
-To get test results, start by using the [Evaluations_Get](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/operations/Evaluations_Get) operation of the [Speech to text REST API](rest-speech-to-text.md).
-
-Make an HTTP GET request using the URI as shown in the following example. Replace `YourEvaluationId` with your evaluation ID, replace `YourSubscriptionKey` with your Speech resource key, and replace `YourServiceRegion` with your Speech resource region.
-
-```azurecli-interactive
-curl -v -X GET "https://YourServiceRegion.api.cognitive.microsoft.com/speechtotext/v3.1/evaluations/YourEvaluationId" -H "Ocp-Apim-Subscription-Key: YourSubscriptionKey"
-```
-
-The models, audio dataset, transcriptions, and more details are returned in the response body.
-
-You should receive a response body in the following format:
-
-```json
-{
- "self": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.1/evaluations/8bfe6b05-f093-4ab4-be7d-180374b751ca",
- "model1": {
- "self": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.1/models/ff43e922-e3e6-4bf0-8473-55c08fd68048"
- },
- "model2": {
- "self": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.1/models/base/1aae1070-7972-47e9-a977-87e3b05c457d"
- },
- "dataset": {
- "self": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.1/datasets/be378d9d-a9d7-4d4a-820a-e0432e8678c7"
- },
- "transcription2": {
- "self": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.1/transcriptions/6eaf6a15-6076-466a-83d4-a30dba78ca63"
- },
- "transcription1": {
- "self": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.1/transcriptions/0c5b1630-fadf-444d-827f-d6da9c0cf0c3"
- },
- "project": {
- "self": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.1/projects/9f8c4cbb-f9a5-4ec1-8bb0-53cfa9221226"
- },
- "links": {
- "files": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.1/evaluations/8bfe6b05-f093-4ab4-be7d-180374b751ca/files"
- },
- "properties": {
- "wordErrorRate2": 4.62,
- "wordErrorRate1": 4.6,
- "sentenceErrorRate2": 66.7,
- "sentenceCount2": 3,
- "wordCount2": 173,
- "correctWordCount2": 166,
- "wordSubstitutionCount2": 7,
- "wordDeletionCount2": 0,
- "wordInsertionCount2": 1,
- "sentenceErrorRate1": 66.7,
- "sentenceCount1": 3,
- "wordCount1": 174,
- "correctWordCount1": 166,
- "wordSubstitutionCount1": 7,
- "wordDeletionCount1": 1,
- "wordInsertionCount1": 0
- },
- "lastActionDateTime": "2022-05-20T16:42:56Z",
- "status": "Succeeded",
- "createdDateTime": "2022-05-20T16:42:43Z",
- "locale": "en-US",
- "displayName": "My Inspection",
- "description": "My Inspection Description"
-}
-```
--
-## Compare transcription with audio
-
-You can inspect the transcription output by each model tested, against the audio input dataset. If you included two models in the test, you can compare their transcription quality side by side.
--
-To review the quality of transcriptions:
-
-1. Sign in to the [Speech Studio](https://aka.ms/speechstudio/customspeech).
-1. Select **Custom Speech** > Your project name > **Test models**.
-1. Select the link by test name.
-1. Play an audio file while reading the corresponding transcription by a model.
-
-If the test dataset included multiple audio files, you'll see multiple rows in the table. If you included two models in the test, transcriptions are shown in side-by-side columns. Transcription differences between models are shown in blue text.
----
-The audio test dataset, transcriptions, and models tested are returned in the [test results](#get-test-results). If only one model was tested, the `model1` value will match `model2`, and the `transcription1` value will match `transcription2`.
-
-To review the quality of transcriptions:
-1. Download the audio test dataset, unless you already have a copy.
-1. Download the output transcriptions.
-1. Play an audio file while reading the corresponding transcription by a model.
-
-If you're comparing quality between two models, pay particular attention to differences between each model's transcriptions.
----
-The audio test dataset, transcriptions, and models tested are returned in the [test results](#get-test-results). If only one model was tested, the `model1` value will match `model2`, and the `transcription1` value will match `transcription2`.
-
-To review the quality of transcriptions:
-1. Download the audio test dataset, unless you already have a copy.
-1. Download the output transcriptions.
-1. Play an audio file while reading the corresponding transcription by a model.
-
-If you're comparing quality between two models, pay particular attention to differences between each model's transcriptions.
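One way to locate those downloads programmatically is to request the `links.files` URL returned in the test results. The sketch below assumes that the files response lists each file under `values` with a `links.contentUrl` download link; confirm the exact shape in the Speech to text REST API reference for your API version.

```python
# Minimal sketch: list the downloadable files for an evaluation by following the
# "links.files" URL from the test results. The response shape ("values" entries
# with "links.contentUrl") is an assumption; verify it against the REST API
# reference for your API version.
import json
import urllib.request

files_url = "https://YourServiceRegion.api.cognitive.microsoft.com/speechtotext/v3.1/evaluations/YourEvaluationId/files"
key = "YourSubscriptionKey"  # placeholder Speech resource key

request = urllib.request.Request(files_url, headers={"Ocp-Apim-Subscription-Key": key})
with urllib.request.urlopen(request) as response:
    files = json.load(response)

for entry in files.get("values", []):
    print(entry.get("name"), "->", entry.get("links", {}).get("contentUrl"))
```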
--
-## Next steps
-- [Test model quantitatively](how-to-custom-speech-evaluate-data.md)
-- [Train your model](how-to-custom-speech-train-model.md)
-- [Deploy your model](how-to-custom-speech-deploy-model.md)
cognitive-services How To Custom Speech Model And Endpoint Lifecycle https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/how-to-custom-speech-model-and-endpoint-lifecycle.md
- Title: Model lifecycle of Custom Speech - Speech service
-description: Custom Speech provides base models for training and lets you create custom models from your data. This article describes the timelines for models and for endpoints that use these models.
- Previously updated : 11/29/2022
-zone_pivot_groups: speech-studio-cli-rest
-# Custom Speech model lifecycle
-
-You can use a Custom Speech model for some time after it's deployed to your custom endpoint. But when new base models are made available, the older models expire. You must periodically recreate and train your custom model from the latest base model to take advantage of the improved accuracy and quality.
-
-Here are some key terms related to the model lifecycle:
-
-* **Training**: Taking a base model and customizing it to your domain/scenario by using text data and/or audio data. In some contexts such as the REST API properties, training is also referred to as **adaptation**.
-* **Transcription**: Using a model and performing speech recognition (decoding audio into text).
-* **Endpoint**: A specific deployment of either a base model or a custom model that only you can access.
-
-> [!NOTE]
-> Endpoints used by `F0` Speech resources are deleted after seven days.
-
-## Expiration timeline
-
-Here are timelines for model adaptation and transcription expiration:
-- Training is available for one year after the quarter when the base model was created by Microsoft.
-- Transcription with a base model is available for two years after the quarter when the base model was created by Microsoft.
-- Transcription with a custom model is available for two years after the quarter when you created the custom model.
-In this context, quarters end on January 15th, April 15th, July 15th, and October 15th.
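As a rough illustration of how these dates line up for a custom model, the following sketch estimates the transcription expiration from the model's creation date: find the quarter end that follows the creation date, then add two years. Treat the `deprecationDates` returned by the service as authoritative; this is only an approximation.

```python
# Rough illustration: estimate a custom model's transcription expiration date.
# Quarters end on January 15, April 15, July 15, and October 15; transcription
# is available for two years after the quarter in which the model was created.
# The deprecationDates returned by the service are authoritative.
from datetime import date

QUARTER_ENDS = [(1, 15), (4, 15), (7, 15), (10, 15)]

def estimated_transcription_expiration(created: date) -> date:
    # Find the first quarter end on or after the creation date, then add two years.
    for month, day in QUARTER_ENDS:
        quarter_end = date(created.year, month, day)
        if quarter_end >= created:
            return date(quarter_end.year + 2, month, day)
    # Created after October 15: the quarter ends on January 15 of the next year.
    return date(created.year + 3, 1, 15)

print(estimated_transcription_expiration(date(2022, 5, 22)))  # 2024-07-15
```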
-
-## What to do when a model expires
-
-When a custom model or base model expires, it is no longer available for transcription. You can change the model that is used by your custom speech endpoint without downtime.
-
-|Transcription route |Expired model result |Recommendation |
-||||
-|Custom endpoint|Speech recognition requests will fall back to the most recent base model for the same [locale](language-support.md?tabs=stt). You will get results, but recognition might not accurately transcribe your domain data. |Update the endpoint's model as described in the [Deploy a Custom Speech model](how-to-custom-speech-deploy-model.md) guide. |
-|Batch transcription |[Batch transcription](batch-transcription.md) requests for expired models will fail with a 4xx error. |In each [Transcriptions_Create](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/operations/Transcriptions_Create) REST API request body, set the `model` property to a base model or custom model that hasn't yet expired. Otherwise don't include the `model` property to always use the latest base model. |
--
-## Get base model expiration dates
--
-The last date that you could use the base model for training was shown when you created the custom model. For more information, see [Train a Custom Speech model](how-to-custom-speech-train-model.md).
-
-Follow these instructions to get the transcription expiration date for a base model:
-
-1. Sign in to the [Speech Studio](https://aka.ms/speechstudio/customspeech).
-1. Select **Custom Speech** > Your project name > **Deploy models**.
-1. The expiration date for the model is shown in the **Expiration** column. This is the last date that you can use the model for transcription.
-
- :::image type="content" source="media/custom-speech/custom-speech-model-expiration.png" alt-text="Screenshot of the deploy models page that shows the transcription expiration date.":::
----
-To get the training and transcription expiration dates for a base model, use the `spx csr model status` command. Construct the request parameters according to the following instructions:
-- Set the `url` parameter to the URI of the base model that you want to get. You can run the `spx csr list --base` command to get available base models for all locales.
-Here's an example Speech CLI command to get the training and transcription expiration dates for a base model:
-
-```azurecli-interactive
-spx csr model status --api-version v3.1 --model https://eastus.api.cognitive.microsoft.com/speechtotext/v3.1/models/base/b0bbc1e0-78d5-468b-9b7c-a5a43b2bb83f
-```
-
-In the response, take note of the date in the `adaptationDateTime` property. This is the last date that you can use the base model for training. Also take note of the date in the `transcriptionDateTime` property. This is the last date that you can use the base model for transcription.
-
-You should receive a response body in the following format:
-
-```json
-{
- "self": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.1/models/base/1aae1070-7972-47e9-a977-87e3b05c457d",
- "datasets": [],
- "links": {
- "manifest": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.1/models/base/1aae1070-7972-47e9-a977-87e3b05c457d/manifest"
- },
- "properties": {
- "deprecationDates": {
- "adaptationDateTime": "2023-01-15T00:00:00Z",
- "transcriptionDateTime": "2024-01-15T00:00:00Z"
- }
- },
- "lastActionDateTime": "2022-05-06T10:52:02Z",
- "status": "Succeeded",
- "createdDateTime": "2021-10-13T00:00:00Z",
- "locale": "en-US",
- "displayName": "20210831 + Audio file adaptation",
- "description": "en-US base model"
-}
-```
-
-For Speech CLI help with models, run the following command:
-
-```azurecli-interactive
-spx help csr model
-```
---
-To get the training and transcription expiration dates for a base model, use the [Models_GetBaseModel](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/operations/Models_GetBaseModel) operation of the [Speech to text REST API](rest-speech-to-text.md). You can make a [Models_ListBaseModels](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/operations/Models_ListBaseModels) request to get available base models for all locales.
-
-Make an HTTP GET request using the model URI as shown in the following example. Replace `BaseModelId` with your model ID, replace `YourSubscriptionKey` with your Speech resource key, and replace `YourServiceRegion` with your Speech resource region.
-
-```azurecli-interactive
-curl -v -X GET "https://YourServiceRegion.api.cognitive.microsoft.com/speechtotext/v3.1/models/base/BaseModelId" -H "Ocp-Apim-Subscription-Key: YourSubscriptionKey"
-```
-
-In the response, take note of the date in the `adaptationDateTime` property. This is the last date that you can use the base model for training. Also take note of the date in the `transcriptionDateTime` property. This is the last date that you can use the base model for transcription.
-
-You should receive a response body in the following format:
-
-```json
-{
- "self": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.1/models/base/1aae1070-7972-47e9-a977-87e3b05c457d",
- "datasets": [],
- "links": {
- "manifest": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.1/models/base/1aae1070-7972-47e9-a977-87e3b05c457d/manifest"
- },
- "properties": {
- "deprecationDates": {
- "adaptationDateTime": "2023-01-15T00:00:00Z",
- "transcriptionDateTime": "2024-01-15T00:00:00Z"
- }
- },
- "lastActionDateTime": "2022-05-06T10:52:02Z",
- "status": "Succeeded",
- "createdDateTime": "2021-10-13T00:00:00Z",
- "locale": "en-US",
- "displayName": "20210831 + Audio file adaptation",
- "description": "en-US base model"
-}
-```
---
-## Get custom model expiration dates
--
-Follow these instructions to get the transcription expiration date for a custom model:
-
-1. Sign in to the [Speech Studio](https://aka.ms/speechstudio/customspeech).
-1. Select **Custom Speech** > Your project name > **Train custom models**.
-1. The expiration date for the custom model is shown in the **Expiration** column. This is the last date that you can use the custom model for transcription. Base models are not shown on the **Train custom models** page.
-
- :::image type="content" source="media/custom-speech/custom-speech-custom-model-expiration.png" alt-text="Screenshot of the train custom models page that shows the transcription expiration date.":::
-
-You can also follow these instructions to get the transcription expiration date for a custom model:
-
-1. Sign in to the [Speech Studio](https://aka.ms/speechstudio/customspeech).
-1. Select **Custom Speech** > Your project name > **Deploy models**.
-1. The expiration date for the model is shown in the **Expiration** column. This is the last date that you can use the model for transcription.
-
- :::image type="content" source="media/custom-speech/custom-speech-model-expiration.png" alt-text="Screenshot of the deploy models page that shows the transcription expiration date.":::
----
-To get the transcription expiration date for your custom model, use the `spx csr model status` command. Construct the request parameters according to the following instructions:
- Set the required `model` parameter to the URI of the model that you want to get. Replace `YourModelId` with your model ID and replace `YourServiceRegion` with your Speech resource region.
-Here's an example Speech CLI command to get the transcription expiration date for your custom model:
-
-```azurecli-interactive
-spx csr model status --api-version v3.1 --model https://YourServiceRegion.api.cognitive.microsoft.com/speechtotext/v3.1/models/YourModelId
-```
-
-In the response, take note of the date in the `transcriptionDateTime` property. This is the last date that you can use your custom model for transcription. The `adaptationDateTime` property is not applicable, since custom models are not used to train other custom models.
-
-You should receive a response body in the following format:
-
-```json
-{
- "self": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.1/models/86c4ebd7-d70d-4f67-9ccc-84609504ffc7",
- "baseModel": {
- "self": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.1/models/base/1aae1070-7972-47e9-a977-87e3b05c457d"
- },
- "datasets": [
- {
- "self": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.1/datasets/69e46263-ab10-4ab4-abbe-62e370104d95"
- }
- ],
- "links": {
- "manifest": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.1/models/86c4ebd7-d70d-4f67-9ccc-84609504ffc7/manifest",
- "copyTo": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.1/models/86c4ebd7-d70d-4f67-9ccc-84609504ffc7:copyto"
- },
- "project": {
- "self": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.1/projects/5d25e60a-7f4a-4816-afd9-783bb8daccfc"
- },
- "properties": {
- "deprecationDates": {
- "adaptationDateTime": "2023-01-15T00:00:00Z",
- "transcriptionDateTime": "2024-07-15T00:00:00Z"
- }
- },
- "lastActionDateTime": "2022-05-21T13:21:01Z",
- "status": "Succeeded",
- "createdDateTime": "2022-05-22T16:37:01Z",
- "locale": "en-US",
- "displayName": "My Model",
- "description": "My Model Description"
-}
-```
-
-For Speech CLI help with models, run the following command:
-
-```azurecli-interactive
-spx help csr model
-```
---
-To get the transcription expiration date for your custom model, use the [Models_GetCustomModel](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/operations/Models_GetCustomModel) operation of the [Speech to text REST API](rest-speech-to-text.md).
-
-Make an HTTP GET request using the model URI as shown in the following example. Replace `YourModelId` with your model ID, replace `YourSubscriptionKey` with your Speech resource key, and replace `YourServiceRegion` with your Speech resource region.
-
-```azurecli-interactive
-curl -v -X GET "https://YourServiceRegion.api.cognitive.microsoft.com/speechtotext/v3.1/models/YourModelId" -H "Ocp-Apim-Subscription-Key: YourSubscriptionKey"
-```
-
-In the response, take note of the date in the `transcriptionDateTime` property. This is the last date that you can use your custom model for transcription. The `adaptationDateTime` property is not applicable, since custom models are not used to train other custom models.
-
-You should receive a response body in the following format:
-
-```json
-{
- "self": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.1/models/86c4ebd7-d70d-4f67-9ccc-84609504ffc7",
- "baseModel": {
- "self": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.1/models/base/1aae1070-7972-47e9-a977-87e3b05c457d"
- },
- "datasets": [
- {
- "self": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.1/datasets/69e46263-ab10-4ab4-abbe-62e370104d95"
- }
- ],
- "links": {
- "manifest": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.1/models/86c4ebd7-d70d-4f67-9ccc-84609504ffc7/manifest",
- "copyTo": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.1/models/86c4ebd7-d70d-4f67-9ccc-84609504ffc7:copyto"
- },
- "project": {
- "self": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.1/projects/5d25e60a-7f4a-4816-afd9-783bb8daccfc"
- },
- "properties": {
- "deprecationDates": {
- "adaptationDateTime": "2023-01-15T00:00:00Z",
- "transcriptionDateTime": "2024-07-15T00:00:00Z"
- }
- },
- "lastActionDateTime": "2022-05-21T13:21:01Z",
- "status": "Succeeded",
- "createdDateTime": "2022-05-22T16:37:01Z",
- "locale": "en-US",
- "displayName": "My Model",
- "description": "My Model Description"
-}
-```
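If you automate lifecycle checks, you can parse the `deprecationDates` from a response like the one above. The following is a minimal sketch in Python; the `model.json` file name and the 90-day warning threshold are illustrative, not part of the Speech API.

```python
import json
from datetime import datetime, timezone

# Load a previously saved model response (hypothetical file name).
with open("model.json", encoding="utf-8") as f:
    model = json.load(f)

dates = model["properties"]["deprecationDates"]
# The trailing "Z" means UTC; fromisoformat accepts "+00:00".
expires = datetime.fromisoformat(dates["transcriptionDateTime"].replace("Z", "+00:00"))
days_left = (expires - datetime.now(timezone.utc)).days

print(f"'{model['displayName']}' can transcribe until {expires:%Y-%m-%d} ({days_left} days left).")
if days_left < 90:  # illustrative threshold
    print("Consider retraining on a newer base model soon.")
```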
--
-## Next steps
- [Train a model](how-to-custom-speech-train-model.md)
- [CI/CD for Custom Speech](how-to-custom-speech-continuous-integration-continuous-deployment.md)
cognitive-services How To Custom Speech Test And Train https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/how-to-custom-speech-test-and-train.md
Title: "Training and testing datasets - Speech service"
description: Learn about types of training and testing data for a Custom Speech project, along with how to use and manage that data.
Previously updated : 10/24/2022
-# Training and testing datasets
-
-In a Custom Speech project, you can upload datasets for training, qualitative inspection, and quantitative measurement. This article covers the types of training and testing data that you can use for Custom Speech.
-
-Text and audio that you use to test and train a custom model should include samples from a diverse set of speakers and scenarios that you want your model to recognize. Consider these factors when you're gathering data for custom model testing and training:
-
-* Include text and audio data to cover the kinds of verbal statements that your users will make when they're interacting with your model. For example, a model that raises and lowers the temperature needs training on statements that people might make to request such changes.
-* Include all speech variances that you want your model to recognize. Many factors can vary speech, including accents, dialects, language-mixing, age, gender, voice pitch, stress level, and time of day.
-* Include samples from different environments, for example, indoor, outdoor, and road noise, where your model will be used.
-* Record audio with hardware devices that the production system will use. If your model must identify speech recorded on devices of varying quality, the audio data that you provide to train your model must also represent these diverse scenarios.
-* Keep the dataset diverse and representative of your project requirements. You can add more data to your model later.
-* Only include data that your model needs to transcribe. Including data that isn't within your custom model's recognition requirements can harm recognition quality overall.
-
-## Data types
-
-The following table lists accepted data types, when each data type should be used, and the recommended quantity. Not every data type is required to create a model. Data requirements will vary depending on whether you're creating a test or training a model.
-
-| Data type | Used for testing | Recommended for testing | Used for training | Recommended for training |
-|--|--|-|-|-|
-| [Audio only](#audio-data-for-training-or-testing) | Yes (visual inspection) | 5+ audio files | Yes (Preview for `en-US`) | 1-20 hours of audio |
-| [Audio + human-labeled transcripts](#audio--human-labeled-transcript-data-for-training-or-testing) | Yes (evaluation of accuracy) | 0.5-5 hours of audio | Yes | 1-20 hours of audio |
-| [Plain text](#plain-text-data-for-training) | No | Not applicable | Yes | 1-200 MB of related text |
-| [Structured text](#structured-text-data-for-training) | No | Not applicable | Yes | Up to 10 classes with up to 4,000 items and up to 50,000 training sentences |
-| [Pronunciation](#pronunciation-data-for-training) | No | Not applicable | Yes | 1 KB to 1 MB of pronunciation text |
-
-Training with plain text or structured text usually finishes within a few minutes.
-
-> [!TIP]
-> Start with plain-text data or structured-text data. This data will improve the recognition of special terms and phrases. Training with text is much faster than training with audio (minutes versus days).
->
> Start with small sets of sample data that match the language, acoustics, and hardware where your model will be used. Small datasets of representative data can expose problems before you invest in gathering larger datasets for training. For sample Custom Speech data, see <a href="https://github.com/Azure-Samples/cognitive-services-speech-sdk/tree/master/sampledata/customspeech" target="_blank">this GitHub repository</a>.
-
-If you will train a custom model with audio data, choose a Speech resource region with dedicated hardware for training audio data. See footnotes in the [regions](regions.md#speech-service) table for more information. In regions with dedicated hardware for Custom Speech training, the Speech service will use up to 20 hours of your audio training data, and can process about 10 hours of data per day. In other regions, the Speech service uses up to 8 hours of your audio data, and can process about 1 hour of data per day. After the model is trained, you can copy the model to another region as needed with the [Models_CopyTo](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/operations/Models_CopyTo) REST API.
-
-## Consider datasets by scenario
-
-A model that's trained on a subset of scenarios can perform well in only those scenarios. Carefully choose data that represents the full scope of scenarios that you need your custom model to recognize. The following table shows datasets to consider for some speech recognition scenarios:
-
-| Scenario | Plain text data and structured text data | Audio + human-labeled transcripts | New words with pronunciation |
| -- | -- | -- | -- |
| Call center | Marketing documents, website, product reviews related to call center activity | Call center calls transcribed by humans | Terms that have ambiguous pronunciations (see the *Xbox* example in the [Pronunciation data for training](#pronunciation-data-for-training) section) |
-| Voice assistant | Lists of sentences that use various combinations of commands and entities | Recorded voices speaking commands into device, transcribed into text | Names (movies, songs, products) that have unique pronunciations |
-| Dictation | Written input, such as instant messages or emails | Similar to preceding examples | Similar to preceding examples |
-| Video closed captioning | TV show scripts, movies, marketing content, video summaries | Exact transcripts of videos | Similar to preceding examples |
-
-To help determine which dataset to use to address your problems, refer to the following table:
-
-| Use case | Data type |
| -- | -- |
-| Improve recognition accuracy on industry-specific vocabulary and grammar, such as medical terminology or IT jargon. | Plain text or structured text data |
-| Define the phonetic and displayed form of a word or term that has nonstandard pronunciation, such as product names or acronyms. | Pronunciation data or phonetic pronunciation in structured text |
-| Improve recognition accuracy on speaking styles, accents, or specific background noises. | Audio + human-labeled transcripts |
-
-## Audio + human-labeled transcript data for training or testing
-
-You can use audio + human-labeled transcript data for both [training](how-to-custom-speech-train-model.md) and [testing](how-to-custom-speech-evaluate-data.md) purposes. You must provide human-labeled transcriptions (word by word) for comparison:
- To improve the acoustic aspects like slight accents, speaking styles, and background noises.
- To measure the accuracy of Microsoft's speech to text when it's processing your audio files.
-For a list of base models that support training with audio data, see [Language support](language-support.md?tabs=stt). Even if a base model does support training with audio data, the service might use only part of the audio. And it will still use all the transcripts.
-
-> [!IMPORTANT]
-> If a base model doesn't support customization with audio data, only the transcription text will be used for training. If you switch to a base model that supports customization with audio data, the training time may increase from several hours to several days. The change in training time would be most noticeable when you switch to a base model in a [region](regions.md#speech-service) without dedicated hardware for training. If the audio data is not required, you should remove it to decrease the training time.
-
-Audio with human-labeled transcripts offers the greatest accuracy improvements if the audio comes from the target use case. Samples must cover the full scope of speech. For example, a call center for a retail store would get the most calls about swimwear and sunglasses during summer months. Ensure that your sample includes the full scope of speech that you want to detect.
-
-Consider these details:
-
-* Training with audio will bring the most benefits if the audio is also hard to understand for humans. In most cases, you should start training by using only related text.
-* If you use one of the most heavily used languages, such as US English, it's unlikely that you would need to train with audio data. For such languages, the base models already offer very good recognition results in most scenarios, so it's probably enough to train with related text.
-* Custom Speech can capture word context only to reduce substitution errors, not insertion or deletion errors.
-* Avoid samples that include transcription errors, but do include a diversity of audio quality.
-* Avoid sentences that are unrelated to your problem domain. Unrelated sentences can harm your model.
-* When the transcript quality varies, you can duplicate exceptionally good sentences, such as excellent transcriptions that include key phrases, to increase their weight.
-* The Speech service automatically uses the transcripts to improve the recognition of domain-specific words and phrases, as though they were added as related text.
-* It can take several days for a training operation to finish. To improve the speed of training, be sure to create your Speech service subscription in a region that has dedicated hardware for training.
-
-A large training dataset is required to improve recognition. Generally, we recommend that you provide word-by-word transcriptions for 1 to 20 hours of audio. However, even as little as 30 minutes can help improve recognition results. Although creating human-labeled transcription can take time, improvements in recognition will only be as good as the data that you provide. You should upload only high-quality transcripts.
-
-Audio files can have silence at the beginning and end of the recording. If possible, include at least a half-second of silence before and after speech in each sample file. Although audio with low recording volume or disruptive background noise is not helpful, it shouldn't limit or degrade your custom model. Always consider upgrading your microphones and signal processing hardware before gathering audio samples.
-
-> [!IMPORTANT]
-> For more information about the best practices of preparing human-labeled transcripts, see [Human-labeled transcripts with audio](how-to-custom-speech-human-labeled-transcriptions.md).
-
-Custom Speech projects require audio files with these properties:
-
-| Property | Value |
-|--|-|
-| File format | RIFF (WAV) |
-| Sample rate | 8,000 Hz or 16,000 Hz |
-| Channels | 1 (mono) |
-| Maximum length per audio | 2 hours (testing) / 60 s (training)<br/><br/>Training with audio has a maximum audio length of 60 seconds per file. For audio files longer than 60 seconds, only the corresponding transcription files will be used for training. If all audio files are longer than 60 seconds, the training will fail.|
-| Sample format | PCM, 16-bit |
-| Archive format | .zip |
-| Maximum zip size | 2 GB or 10,000 files |
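Before you package audio into a .zip archive for upload, you can spot-check these properties locally. Here's a minimal sketch that uses Python's standard `wave` module; the file path is a placeholder.

```python
import wave

# Placeholder path to one of your training or testing audio files.
path = "sample.wav"

with wave.open(path, "rb") as wav:
    rate = wav.getframerate()          # expect 8,000 or 16,000 Hz
    channels = wav.getnchannels()      # expect 1 (mono)
    sample_width = wav.getsampwidth()  # expect 2 bytes (16-bit PCM)
    seconds = wav.getnframes() / rate

print(f"{path}: {rate} Hz, {channels} channel(s), {8 * sample_width}-bit, {seconds:.1f} s")
if rate not in (8000, 16000) or channels != 1 or sample_width != 2:
    print("Convert this file (for example, with SoX) before uploading.")
```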
-
-## Plain-text data for training
-
-You can add plain text sentences of related text to improve the recognition of domain-specific words and phrases. Related text sentences can reduce substitution errors related to misrecognition of common words and domain-specific words by showing them in context. Domain-specific words can be uncommon or made-up words, but their pronunciation must be straightforward to be recognized.
-
Provide domain-related sentences in a single text file. Use text data that's close to the expected spoken utterances. Utterances don't need to be complete or grammatically correct, but they must accurately reflect the spoken input that you expect the model to recognize. When possible, try to have one sentence or keyword on a separate line. To increase the weight of a term such as a product name, add several sentences that include the term. But don't copy too much, because it could affect the overall recognition rate.
-
-> [!NOTE]
-> Avoid related text sentences that include noise such as unrecognizable characters or words.
-
-Use this table to ensure that your plain text dataset file is formatted correctly:
-
-| Property | Value |
-|-|-|
-| Text encoding | UTF-8 BOM |
-| Number of utterances per line | 1 |
-| Maximum file size | 200 MB |
-
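One way to produce a file in this format is sketched below; the sentences and the output file name are illustrative, not a requirement of the service. Writing with Python's `utf-8-sig` codec emits the UTF-8 BOM listed in the table.

```python
# Illustrative sentences; replace them with text from your own domain.
sentences = [
    "Set the thermostat to seventy two degrees.",
    "Lower the temperature in the living room.",
    "What is the temperature upstairs?",
]

# "utf-8-sig" writes the UTF-8 byte order mark (BOM) at the start of the file.
with open("related-text.txt", "w", encoding="utf-8-sig", newline="\n") as f:
    for sentence in sentences:
        f.write(sentence + "\n")  # one utterance per line
```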
-You must also adhere to the following restrictions:
-
-* Avoid repeating characters, words, or groups of words more than three times, as in "aaaa," "yeah yeah yeah yeah," or "that's it that's it that's it that's it." The Speech service might drop lines with too many repetitions.
-* Don't use special characters or UTF-8 characters above `U+00A1`.
-* URIs will be rejected.
-* For some languages such as Japanese or Korean, importing large amounts of text data can take a long time or can time out. Consider dividing the dataset into multiple text files with up to 20,000 lines in each.
-
-## Structured-text data for training
-
-> [!NOTE]
-> Structured-text data for training is in public preview.
-
Use structured text data when your expected utterances follow a particular pattern and differ only by the words or phrases from a list. To simplify the creation of training data and to enable better modeling inside the Custom Language model, you can use structured text in Markdown format to define lists of items and the phonetic pronunciation of words. You can then reference these lists inside your training utterances.
-
-Expected utterances often follow a certain pattern. One common pattern is that utterances differ only by words or phrases from a list. Examples of this pattern could be:
-
-* "I have a question about `product`," where `product` is a list of possible products.
-* "Make that `object` `color`," where `object` is a list of geometric shapes and `color` is a list of colors.
-
-For a list of supported base models and locales for training with structured text, see [Language support](language-support.md?tabs=stt). You must use the latest base model for these locales. For locales that don't support training with structured text, the service will take any training sentences that don't reference any classes as part of training with plain-text data.
-
-The structured-text file should have an .md extension. The maximum file size is 200 MB, and the text encoding must be UTF-8 BOM. The syntax of the Markdown is the same as that from the Language Understanding models, in particular list entities and example utterances. For more information about the complete Markdown syntax, see the <a href="/azure/bot-service/file-format/bot-builder-lu-file-format" target="_blank"> Language Understanding Markdown</a>.
-
-Here are key details about the supported Markdown format:
-
-| Property | Description | Limits |
-|-|-|--|
-|`@list`|A list of items that can be referenced in an example sentence.|Maximum of 20 lists. Maximum of 35,000 items per list.|
-|`speech:phoneticlexicon`|A list of phonetic pronunciations according to the [Universal Phone Set](customize-pronunciation.md). Pronunciation is adjusted for each instance where the word appears in a list or training sentence. For example, if you have a word that sounds like "cat" and you want to adjust the pronunciation to "k ae t", you would add `- cat/k ae t` to the `speech:phoneticlexicon` list.|Maximum of 15,000 entries. Maximum of 2 pronunciations per word.|
|`#ExampleSentences`|A pound symbol (`#`) delimits a section of example sentences. The section heading can only contain letters, digits, and underscores. Example sentences should reflect the range of speech that your model should expect. A training sentence can refer to items under a `@list` by using surrounding left and right curly braces (`{@list name}`). You can refer to multiple lists in the same training sentence, or none at all.|Maximum file size of 200 MB.|
-|`//`|Comments follow a double slash (`//`).|Not applicable|
-
-Here's an example structured text file:
-
-```markdown
// This is a comment because it follows a double slash (`//`).

// Here are three separate lists of items that can be referenced in an example sentence. You can have up to 10 of these.
@ list food =
- pizza
- burger
- ice cream
- soda

@ list pet =
- cat
- dog
- fish

@ list sports =
- soccer
- tennis
- cricket
- basketball
- baseball
- football

// List of phonetic pronunciations
@ speech:phoneticlexicon
- cat/k ae t
- fish/f ih sh

// Here are two sections of training sentences.
#TrainingSentences_Section1
- you can include sentences without a class reference
- what {@pet} do you have
- I like eating {@food} and playing {@sports}
- my {@pet} likes {@food}

#TrainingSentences_Section2
- you can include more sentences without a class reference
- or more sentences that have a class reference like {@pet}
-```
-
-## Pronunciation data for training
-
Specialized or made-up words might have unique pronunciations. These words can be recognized if they can be broken down into smaller words to pronounce them. For example, to recognize "Xbox", pronounce it as "X box". This approach won't increase overall accuracy, but it can improve recognition of this and other keywords.
-
-You can provide a custom pronunciation file to improve recognition. Don't use custom pronunciation files to alter the pronunciation of common words. For a list of languages that support custom pronunciation, see [language support](language-support.md?tabs=stt).
-
-> [!NOTE]
-> You can use a pronunciation file alongside any other training dataset except structured text training data. To use pronunciation data with structured text, it must be within a structured text file.
-
-The spoken form is the phonetic sequence spelled out. It can be composed of letters, words, syllables, or a combination of all three. This table includes some examples:
-
-| Recognized displayed form | Spoken form |
-|--|--|
-| 3CPO | three c p o |
-| CNTK | c n t k |
-| IEEE | i triple e |
-
-You provide pronunciations in a single text file. Include the spoken utterance and a custom pronunciation for each. Each row in the file should begin with the recognized form, then a tab character, and then the space-delimited phonetic sequence.
-
-```tsv
-3CPO three c p o
-CNTK c n t k
-IEEE i triple e
-```
-
-Refer to the following table to ensure that your pronunciation dataset files are valid and correctly formatted.
-
-| Property | Value |
-|-|-|
-| Text encoding | UTF-8 BOM (ANSI is also supported for English) |
-| Number of pronunciations per line | 1 |
-| Maximum file size | 1 MB (1 KB for free tier) |
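Because the tab character between the two columns is easy to lose when copying text, you might generate the file programmatically. This is a small sketch under the assumption that you keep your display and spoken forms in a dictionary; the entries and output file name are illustrative.

```python
# Illustrative display-form to spoken-form pairs.
pronunciations = {
    "3CPO": "three c p o",
    "CNTK": "c n t k",
    "IEEE": "i triple e",
}

# Each line is the recognized form, a tab character, then the space-delimited spoken form.
with open("pronunciations.txt", "w", encoding="utf-8-sig", newline="\n") as f:
    for display, spoken in pronunciations.items():
        f.write(f"{display}\t{spoken}\n")
```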
-
## Audio data for training or testing
-
Audio data is optimal for testing the accuracy of Microsoft's baseline speech to text model or a custom model. Keep in mind that audio-only data is used to inspect the quality of speech recognition for a specific model. If you want to quantify the accuracy of a model, use [audio + human-labeled transcripts](#audio--human-labeled-transcript-data-for-training-or-testing).
-
-> [!NOTE]
-> Audio only data for training is available in preview for the `en-US` locale. For other locales, to train with audio data you must also provide [human-labeled transcripts](#audio--human-labeled-transcript-data-for-training-or-testing).
-
-Custom Speech projects require audio files with these properties:
-
-| Property | Value |
-|--|--|
-| File format | RIFF (WAV) |
-| Sample rate | 8,000 Hz or 16,000 Hz |
-| Channels | 1 (mono) |
-| Maximum length per audio | 2 hours |
-| Sample format | PCM, 16-bit |
-| Archive format | .zip |
-| Maximum archive size | 2 GB or 10,000 files |
-
-> [!NOTE]
-> When you're uploading training and testing data, the .zip file size can't exceed 2 GB. If you require more data for training, divide it into several .zip files and upload them separately. Later, you can choose to train from *multiple* datasets. However, you can test from only a *single* dataset.
-
-Use <a href="http://sox.sourceforge.net" target="_blank" rel="noopener">SoX</a> to verify audio properties or convert existing audio to the appropriate formats. Here are some example SoX commands:
-
-| Activity | SoX command |
|--|--|
-| Check the audio file format. | `sox --i <filename>` |
| Convert the audio file to single channel, 16-bit, 16 kHz. | `sox <input> -b 16 -e signed-integer -c 1 -r 16k -t wav <output>.wav` |
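If you have many files to convert, a small wrapper around SoX can help. The following sketch assumes that SoX is installed and on your PATH; the `input` and `output` folder names are placeholders for your own locations.

```python
import subprocess
from pathlib import Path

src = Path("input")    # placeholder folder with your original audio files
dst = Path("output")   # placeholder folder for the converted WAV files
dst.mkdir(exist_ok=True)

for audio in src.glob("*.wav"):
    out = dst / audio.name
    # Same conversion as the SoX command in the preceding table:
    # single channel, 16-bit signed PCM, 16 kHz, WAV container.
    subprocess.run(
        ["sox", str(audio), "-b", "16", "-e", "signed-integer",
         "-c", "1", "-r", "16k", "-t", "wav", str(out)],
        check=True,
    )
    print(f"Converted {audio} -> {out}")
```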
-
-## Next steps
- [Upload your data](how-to-custom-speech-upload-data.md)
- [Test model quantitatively](how-to-custom-speech-evaluate-data.md)
- [Train a custom model](how-to-custom-speech-train-model.md)
cognitive-services How To Custom Speech Train Model https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/how-to-custom-speech-train-model.md
Title: Train a Custom Speech model - Speech service
description: Learn how to train Custom Speech models. Training a speech to text model can improve recognition accuracy for the Microsoft base model or a custom model.
Previously updated : 11/29/2022
-zone_pivot_groups: speech-studio-cli-rest
--
-# Train a Custom Speech model
-
-In this article, you'll learn how to train a custom model to improve recognition accuracy from the Microsoft base model. The speech recognition accuracy and quality of a Custom Speech model will remain consistent, even when a new base model is released.
-
-> [!NOTE]
-> You pay for Custom Speech model usage and [endpoint hosting](how-to-custom-speech-deploy-model.md), but you are not charged for training a model.
-
-Training a model is typically an iterative process. You will first select a base model that is the starting point for a new model. You train a model with [datasets](./how-to-custom-speech-test-and-train.md) that can include text and audio, and then you test. If the recognition quality or accuracy doesn't meet your requirements, you can create a new model with additional or modified training data, and then test again.
-
-You can use a custom model for a limited time after it's trained. You must periodically recreate and adapt your custom model from the latest base model to take advantage of the improved accuracy and quality. For more information, see [Model and endpoint lifecycle](./how-to-custom-speech-model-and-endpoint-lifecycle.md).
-
-> [!IMPORTANT]
-> If you will train a custom model with audio data, choose a Speech resource region with dedicated hardware for training audio data. After a model is trained, you can [copy it to a Speech resource](#copy-a-model) in another region as needed.
->
-> In regions with dedicated hardware for Custom Speech training, the Speech service will use up to 20 hours of your audio training data, and can process about 10 hours of data per day. In other regions, the Speech service uses up to 8 hours of your audio data, and can process about 1 hour of data per day. See footnotes in the [regions](regions.md#speech-service) table for more information.
-
-## Create a model
--
-After you've uploaded [training datasets](./how-to-custom-speech-test-and-train.md), follow these instructions to start training your model:
-
-1. Sign in to the [Speech Studio](https://aka.ms/speechstudio/customspeech).
-1. Select **Custom Speech** > Your project name > **Train custom models**.
-1. Select **Train a new model**.
1. On the **Select a baseline model** page, select a base model, and then select **Next**. If you aren't sure, select the most recent model from the top of the list. The name of the base model corresponds to the date when it was released in YYYYMMDD format. The customization capabilities of the base model are listed in parentheses after the model name in Speech Studio.
-
- > [!IMPORTANT]
- > Take note of the **Expiration for adaptation** date. This is the last date that you can use the base model for training. For more information, see [Model and endpoint lifecycle](./how-to-custom-speech-model-and-endpoint-lifecycle.md).
-
-1. On the **Choose data** page, select one or more datasets that you want to use for training. If there aren't any datasets available, cancel the setup, and then go to the **Speech datasets** menu to [upload datasets](how-to-custom-speech-upload-data.md).
-1. Enter a name and description for your custom model, and then select **Next**.
-1. Optionally, check the **Add test in the next step** box. If you skip this step, you can run the same tests later. For more information, see [Test recognition quality](how-to-custom-speech-inspect-data.md) and [Test model quantitatively](how-to-custom-speech-evaluate-data.md).
-1. Select **Save and close** to kick off the build for your custom model.
-1. Return to the **Train custom models** page.
-
- > [!IMPORTANT]
- > Take note of the **Expiration** date. This is the last date that you can use your custom model for speech recognition. For more information, see [Model and endpoint lifecycle](./how-to-custom-speech-model-and-endpoint-lifecycle.md).
---
-To create a model with datasets for training, use the `spx csr model create` command. Construct the request parameters according to the following instructions:
- Set the `project` parameter to the ID of an existing project. This is recommended so that you can also view and manage the model in Speech Studio. You can run the `spx csr project list` command to get available projects.
- Set the required `dataset` parameter to the ID of a dataset that you want used for training. To specify multiple datasets, set the `datasets` (plural) parameter and separate the IDs with a semicolon.
- Set the required `language` parameter. The dataset locale must match the locale of the project. The locale can't be changed later. The Speech CLI `language` parameter corresponds to the `locale` property in the JSON request and response.
- Set the required `name` parameter. This is the name that will be displayed in the Speech Studio. The Speech CLI `name` parameter corresponds to the `displayName` property in the JSON request and response.
- Optionally, you can set the `base` property. For example: `--base 1aae1070-7972-47e9-a977-87e3b05c457d`. If you don't specify the `base`, the default base model for the locale is used. The Speech CLI `base` parameter corresponds to the `baseModel` property in the JSON request and response.
-Here's an example Speech CLI command that creates a model with datasets for training:
-
-```azurecli-interactive
-spx csr model create --api-version v3.1 --project YourProjectId --name "My Model" --description "My Model Description" --dataset YourDatasetId --language "en-US"
-```
-
-> [!NOTE]
-> In this example, the `base` isn't set, so the default base model for the locale is used. The base model URI is returned in the response.
-
-You should receive a response body in the following format:
-
-```json
-{
- "self": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.1/models/86c4ebd7-d70d-4f67-9ccc-84609504ffc7",
- "baseModel": {
- "self": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.1/models/base/1aae1070-7972-47e9-a977-87e3b05c457d"
- },
- "datasets": [
- {
- "self": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.1/datasets/69e46263-ab10-4ab4-abbe-62e370104d95"
- }
- ],
- "links": {
- "manifest": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.1/models/86c4ebd7-d70d-4f67-9ccc-84609504ffc7/manifest",
- "copyTo": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.1/models/86c4ebd7-d70d-4f67-9ccc-84609504ffc7:copyto"
- },
- "project": {
- "self": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.1/projects/5d25e60a-7f4a-4816-afd9-783bb8daccfc"
- },
- "properties": {
- "deprecationDates": {
- "adaptationDateTime": "2023-01-15T00:00:00Z",
- "transcriptionDateTime": "2024-07-15T00:00:00Z"
- }
- },
- "lastActionDateTime": "2022-05-21T13:21:01Z",
- "status": "NotStarted",
- "createdDateTime": "2022-05-21T13:21:01Z",
- "locale": "en-US",
- "displayName": "My Model",
- "description": "My Model Description"
-}
-```
-
-> [!IMPORTANT]
-> Take note of the date in the `adaptationDateTime` property. This is the last date that you can use the base model for training. For more information, see [Model and endpoint lifecycle](./how-to-custom-speech-model-and-endpoint-lifecycle.md).
->
-> Take note of the date in the `transcriptionDateTime` property. This is the last date that you can use your custom model for speech recognition. For more information, see [Model and endpoint lifecycle](./how-to-custom-speech-model-and-endpoint-lifecycle.md).
-
-The top-level `self` property in the response body is the model's URI. Use this URI to get details about the model's project, manifest, and deprecation dates. You also use this URI to update or delete a model.
-
-For Speech CLI help with models, run the following command:
-
-```azurecli-interactive
-spx help csr model
-```
---
-To create a model with datasets for training, use the [Models_Create](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/operations/Models_Create) operation of the [Speech to text REST API](rest-speech-to-text.md). Construct the request body according to the following instructions:
- Set the `project` property to the URI of an existing project. This is recommended so that you can also view and manage the model in Speech Studio. You can make a [Projects_List](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/operations/Projects_List) request to get available projects.
- Set the required `datasets` property to the URI of the datasets that you want used for training.
- Set the required `locale` property. The model locale must match the locale of the project and base model. The locale can't be changed later.
- Set the required `displayName` property. This is the name that will be displayed in the Speech Studio.
- Optionally, you can set the `baseModel` property. For example: `"baseModel": {"self": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.1/models/base/1aae1070-7972-47e9-a977-87e3b05c457d"}`. If you don't specify the `baseModel`, the default base model for the locale is used.
-Make an HTTP POST request using the URI as shown in the following example. Replace `YourSubscriptionKey` with your Speech resource key, replace `YourServiceRegion` with your Speech resource region, and set the request body properties as previously described.
-
-```azurecli-interactive
-curl -v -X POST -H "Ocp-Apim-Subscription-Key: YourSubscriptionKey" -H "Content-Type: application/json" -d '{
- "project": {
- "self": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.1/projects/5d25e60a-7f4a-4816-afd9-783bb8daccfc"
- },
- "displayName": "My Model",
- "description": "My Model Description",
- "baseModel": null,
- "datasets": [
- {
- "self": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.1/datasets/69e46263-ab10-4ab4-abbe-62e370104d95"
- }
- ],
- "locale": "en-US"
-}' "https://YourServiceRegion.api.cognitive.microsoft.com/speechtotext/v3.1/models"
-```
-
-> [!NOTE]
-> In this example, the `baseModel` isn't set, so the default base model for the locale is used. The base model URI is returned in the response.
-
-You should receive a response body in the following format:
-
-```json
-{
- "self": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.1/models/86c4ebd7-d70d-4f67-9ccc-84609504ffc7",
- "baseModel": {
- "self": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.1/models/base/1aae1070-7972-47e9-a977-87e3b05c457d"
- },
- "datasets": [
- {
- "self": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.1/datasets/69e46263-ab10-4ab4-abbe-62e370104d95"
- }
- ],
- "links": {
- "manifest": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.1/models/86c4ebd7-d70d-4f67-9ccc-84609504ffc7/manifest",
- "copyTo": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.1/models/86c4ebd7-d70d-4f67-9ccc-84609504ffc7:copyto"
- },
- "project": {
- "self": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.1/projects/5d25e60a-7f4a-4816-afd9-783bb8daccfc"
- },
- "properties": {
- "deprecationDates": {
- "adaptationDateTime": "2023-01-15T00:00:00Z",
- "transcriptionDateTime": "2024-07-15T00:00:00Z"
- }
- },
- "lastActionDateTime": "2022-05-21T13:21:01Z",
- "status": "NotStarted",
- "createdDateTime": "2022-05-21T13:21:01Z",
- "locale": "en-US",
- "displayName": "My Model",
- "description": "My Model Description"
-}
-```
-
-> [!IMPORTANT]
-> Take note of the date in the `adaptationDateTime` property. This is the last date that you can use the base model for training. For more information, see [Model and endpoint lifecycle](./how-to-custom-speech-model-and-endpoint-lifecycle.md).
->
-> Take note of the date in the `transcriptionDateTime` property. This is the last date that you can use your custom model for speech recognition. For more information, see [Model and endpoint lifecycle](./how-to-custom-speech-model-and-endpoint-lifecycle.md).
-
-The top-level `self` property in the response body is the model's URI. Use this URI to [get](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/operations/Models_GetCustomModel) details about the model's project, manifest, and deprecation dates. You also use this URI to [update](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/operations/Models_Update) or [delete](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/operations/Models_Delete) the model.
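Training can take hours or days, so you might want to poll the model's URI until the operation finishes. Here's a minimal sketch that uses Python with the `requests` package; the URI, key, and polling interval are placeholders, and the check for a "Failed" status is an assumption about how unsuccessful training is reported.

```python
import time
import requests

# Placeholders: the top-level "self" URI from the create response and your Speech resource key.
model_uri = "https://YourServiceRegion.api.cognitive.microsoft.com/speechtotext/v3.1/models/YourModelId"
headers = {"Ocp-Apim-Subscription-Key": "YourSubscriptionKey"}

while True:
    model = requests.get(model_uri, headers=headers).json()
    status = model["status"]
    print(f"Training status: {status}")
    if status in ("Succeeded", "Failed"):
        break
    time.sleep(300)  # poll every 5 minutes (arbitrary interval)
```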
---
-## Copy a model
-
-You can copy a model to another project that uses the same locale. For example, after a model is trained with audio data in a [region](regions.md#speech-service) with dedicated hardware for training, you can copy it to a Speech resource in another region as needed.
--
-Follow these instructions to copy a model to a project in another region:
-
-1. Sign in to the [Speech Studio](https://aka.ms/speechstudio/customspeech).
-1. Select **Custom Speech** > Your project name > **Train custom models**.
-1. Select **Copy to**.
-1. On the **Copy speech model** page, select a target region where you want to copy the model.
    :::image type="content" source="./media/custom-speech/custom-speech-copy-to-zoom.png" alt-text="Screenshot of the Copy speech model page where you select a target region." lightbox="./media/custom-speech/custom-speech-copy-to-full.png":::
-1. Select a Speech resource in the target region, or create a new Speech resource.
-1. Select a project where you want to copy the model, or create a new project.
-1. Select **Copy**.
-
-After the model is successfully copied, you'll be notified and can view it in the target project.
---
-Copying a model directly to a project in another region is not supported with the Speech CLI. You can copy a model to a project in another region using the [Speech Studio](https://aka.ms/speechstudio/customspeech) or [Speech to text REST API](rest-speech-to-text.md).
---
-To copy a model to another Speech resource, use the [Models_CopyTo](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/operations/Models_CopyTo) operation of the [Speech to text REST API](rest-speech-to-text.md). Construct the request body according to the following instructions:
- Set the required `targetSubscriptionKey` property to the key of the destination Speech resource.
-Make an HTTP POST request using the URI as shown in the following example. Use the region and URI of the model you want to copy from. Replace `YourModelId` with the model ID, replace `YourSubscriptionKey` with your Speech resource key, replace `YourServiceRegion` with your Speech resource region, and set the request body properties as previously described.
-
-```azurecli-interactive
-curl -v -X POST -H "Ocp-Apim-Subscription-Key: YourSubscriptionKey" -H "Content-Type: application/json" -d '{
- "targetSubscriptionKey": "ModelDestinationSpeechResourceKey"
-} ' "https://YourServiceRegion.api.cognitive.microsoft.com/speechtotext/v3.1/models/YourModelId:copyto"
-```
-
-> [!NOTE]
-> Only the `targetSubscriptionKey` property in the request body has information about the destination Speech resource.
-
-You should receive a response body in the following format:
-
-```json
-{
- "self": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.1/models/9df35ddb-edf9-4e91-8d1a-576d09aabdae",
- "baseModel": {
- "self": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.1/models/base/eb5450a7-3ca2-461a-b2d7-ddbb3ad96540"
- },
- "links": {
- "manifest": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.1/models/9df35ddb-edf9-4e91-8d1a-576d09aabdae/manifest",
- "copyTo": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.1/models/9df35ddb-edf9-4e91-8d1a-576d09aabdae:copyto"
- },
- "properties": {
- "deprecationDates": {
- "adaptationDateTime": "2023-01-15T00:00:00Z",
- "transcriptionDateTime": "2024-07-15T00:00:00Z"
- }
- },
- "lastActionDateTime": "2022-05-22T23:15:27Z",
- "status": "NotStarted",
- "createdDateTime": "2022-05-22T23:15:27Z",
- "locale": "en-US",
- "displayName": "My Model",
- "description": "My Model Description",
- "customProperties": {
- "PortalAPIVersion": "3",
- "Purpose": "",
- "VadKind": "None",
- "ModelClass": "None",
- "UsesHalide": "False",
- "IsDynamicGrammarSupported": "False"
- }
-}
-```
---
-## Connect a model
-
-Models might have been copied from one project using the Speech CLI or REST API, without being connected to another project. Connecting a model is a matter of updating the model with a reference to the project.
--
-If you are prompted in Speech Studio, you can connect them by selecting the **Connect** button.
----
-To connect a model to a project, use the `spx csr model update` command. Construct the request parameters according to the following instructions:
- Set the `project` parameter to the URI of an existing project. This is recommended so that you can also view and manage the model in Speech Studio. You can run the `spx csr project list` command to get available projects.
- Set the required `model` parameter to the ID of the model that you want to connect to the project.
-Here's an example Speech CLI command that connects a model to a project:
-
-```azurecli-interactive
-spx csr model update --api-version v3.1 --model YourModelId --project YourProjectId
-```
-
-You should receive a response body in the following format:
-
-```json
{
    "project": {
        "self": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.1/projects/e6ffdefd-9517-45a9-a89c-7b5028ed0e56"
    }
}
-```
-
-For Speech CLI help with models, run the following command:
-
-```azurecli-interactive
-spx help csr model
-```
---
-To connect a new model to a project of the Speech resource where the model was copied, use the [Models_Update](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/operations/Models_Update) operation of the [Speech to text REST API](rest-speech-to-text.md). Construct the request body according to the following instructions:
- Set the required `project` property to the URI of an existing project. This is recommended so that you can also view and manage the model in Speech Studio. You can make a [Projects_List](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/operations/Projects_List) request to get available projects.
-Make an HTTP PATCH request using the URI as shown in the following example. Use the URI of the new model. You can get the new model ID from the `self` property of the [Models_CopyTo](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/operations/Models_CopyTo) response body. Replace `YourSubscriptionKey` with your Speech resource key, replace `YourServiceRegion` with your Speech resource region, and set the request body properties as previously described.
-
-```azurecli-interactive
curl -v -X PATCH -H "Ocp-Apim-Subscription-Key: YourSubscriptionKey" -H "Content-Type: application/json" -d '{
    "project": {
        "self": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.1/projects/e6ffdefd-9517-45a9-a89c-7b5028ed0e56"
    }
}' "https://YourServiceRegion.api.cognitive.microsoft.com/speechtotext/v3.1/models/YourModelId"
-```
-
-You should receive a response body in the following format:
-
-```json
{
    "project": {
        "self": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.1/projects/e6ffdefd-9517-45a9-a89c-7b5028ed0e56"
    }
}
-```
---
-## Next steps
- [Test recognition quality](how-to-custom-speech-inspect-data.md)
- [Test model quantitatively](how-to-custom-speech-evaluate-data.md)
- [Deploy a model](how-to-custom-speech-deploy-model.md)
cognitive-services How To Custom Speech Transcription Editor https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/how-to-custom-speech-transcription-editor.md
Title: How to use the online transcription editor for Custom Speech - Speech service
description: The online transcription editor allows you to create or edit audio + human-labeled transcriptions for Custom Speech.
Previously updated : 05/08/2022
-# How to use the online transcription editor
-
-The online transcription editor allows you to create or edit audio + human-labeled transcriptions for Custom Speech. The main use cases of the editor are as follows:
-
-* You only have audio data, but want to build accurate audio + human-labeled datasets from scratch to use in model training.
* You already have audio + human-labeled datasets, but there are errors or defects in the transcription. The editor allows you to quickly modify the transcriptions to get the best training accuracy.
-
-The only requirement to use the transcription editor is to have audio data uploaded, with or without corresponding transcriptions.
-
-You can find the **Editor** tab next to the **Training and testing dataset** tab on the main **Speech datasets** page.
--
Datasets in the **Training and testing dataset** tab can't be updated. You can import a copy of a training or testing dataset to the **Editor** tab, add or edit human-labeled transcriptions to match the audio, and then export the edited dataset to the **Training and testing dataset** tab. Note that you also can't use a dataset that's in the Editor to train or test a model.
-
-## Import datasets to the Editor
-
-To import a dataset to the Editor, follow these steps:
-
-1. Sign in to the [Speech Studio](https://aka.ms/speechstudio/customspeech).
-1. Select **Custom Speech** > Your project name > **Speech datasets** > **Editor**.
1. Select **Import data**.
-1. Select datasets. You can select audio data only, audio + human-labeled data, or both. For audio-only data, you can use the default models to automatically generate machine transcription after importing to the editor.
-1. Enter a name and description for the new dataset, and then select **Next**.
-1. Review your settings, and then select **Import and close** to kick off the import process. After data has been successfully imported, you can select datasets and start editing.
-
-> [!NOTE]
> You can also select a dataset from the main **Speech datasets** page and export it to the Editor. Select a dataset and then select **Export to Editor**.
-
-## Edit transcription to match audio
-
-Once a dataset has been imported to the Editor, you can start editing the dataset. You can add or edit human-labeled transcriptions to match the audio as you hear it. You do not edit any audio data.
-
-To edit a dataset's transcription in the Editor, follow these steps:
-
-1. Sign in to the [Speech Studio](https://aka.ms/speechstudio/customspeech).
-1. Select **Custom Speech** > Your project name > **Speech datasets** > **Editor**.
-1. Select the link to a dataset by name.
-1. From the **Audio + text files** table, select the link to an audio file by name.
-1. After you've made edits, select **Save**.
-
-If there are multiple files in the dataset, you can select **Previous** and **Next** to move from file to file. Edit and save changes to each file as you go.
-
-The detail page lists all the segments in each audio file, and you can select the desired utterance. For each utterance, you can play and compare the audio with the corresponding transcription. Edit the transcriptions if you find any insertion, deletion, or substitution errors. For more information about word error types, see [Test model quantitatively](how-to-custom-speech-evaluate-data.md).
-
-## Export datasets from the Editor
-
-Datasets in the Editor can be exported to the **Training and testing dataset** tab, where they can be used to train or test a model.
-
-To export datasets from the Editor, follow these steps:
-
-1. Sign in to the [Speech Studio](https://aka.ms/speechstudio/customspeech).
-1. Select **Custom Speech** > Your project name > **Speech datasets** > **Editor**.
-1. Select the link to a dataset by name.
-1. Select one or more rows from the **Audio + text files** table.
-1. Select **Export** to export all of the selected files as one new dataset.
-
-The files are exported as a new dataset, and will not impact or replace other training or testing datasets.
-
-## Next steps
- [Test recognition quality](how-to-custom-speech-inspect-data.md)
- [Train your model](how-to-custom-speech-train-model.md)
- [Deploy your model](./how-to-custom-speech-train-model.md)
cognitive-services How To Custom Speech Upload Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/how-to-custom-speech-upload-data.md
Title: "Upload training and testing datasets for Custom Speech - Speech service"
description: Learn about how to upload data to test or train a Custom Speech model.
Previously updated : 11/29/2022
-zone_pivot_groups: speech-studio-cli-rest
--
-# Upload training and testing datasets for Custom Speech
-
-You need audio or text data for testing the accuracy of speech recognition or training your custom models. For information about the data types supported for testing or training your model, see [Training and testing datasets](how-to-custom-speech-test-and-train.md).
-
-> [!TIP]
-> You can also use the [online transcription editor](how-to-custom-speech-transcription-editor.md) to create and refine labeled audio datasets.
-
-## Upload datasets
--
-To upload your own datasets in Speech Studio, follow these steps:
-
-1. Sign in to the [Speech Studio](https://aka.ms/speechstudio/customspeech).
-1. Select **Custom Speech** > Your project name > **Speech datasets** > **Upload data**.
-1. Select the **Training data** or **Testing data** tab.
-1. Select a dataset type, and then select **Next**.
1. Specify the dataset location, and then select **Next**. You can choose a local file or enter a remote location such as an Azure Blob URL. If you select a remote location and don't use the trusted Azure services security mechanism (see the next note), then the remote location should be a URL that can be retrieved with a simple anonymous GET request. For example, a [SAS URL](/azure/storage/common/storage-sas-overview) or a publicly accessible URL. URLs that require extra authorization or expect user interaction aren't supported.
-
- > [!NOTE]
- > If you use Azure Blob URL, you can ensure maximum security of your dataset files by using trusted Azure services security mechanism. You will use the same techniques as for Batch transcription and plain Storage Account URLs for your dataset files. See details [here](batch-transcription-audio-data.md#trusted-azure-services-security-mechanism).
-
-1. Enter the dataset name and description, and then select **Next**.
-1. Review your settings, and then select **Save and close**.
-
-After your dataset is uploaded, go to the **Train custom models** page to [train a custom model](how-to-custom-speech-train-model.md).
----
-To create a dataset and connect it to an existing project, use the `spx csr dataset create` command. Construct the request parameters according to the following instructions:
- Set the `project` parameter to the ID of an existing project. This is recommended so that you can also view and manage the dataset in Speech Studio. You can run the `spx csr project list` command to get available projects.
- Set the required `kind` parameter. The possible set of values for dataset kind are: Language, Acoustic, Pronunciation, and AudioFiles.
- Set the required `contentUrl` parameter. This is the location of the dataset. If you don't use the trusted Azure services security mechanism (see the next note), then the `contentUrl` parameter should be a URL that can be retrieved with a simple anonymous GET request. For example, a [SAS URL](/azure/storage/common/storage-sas-overview) or a publicly accessible URL. URLs that require extra authorization or expect user interaction aren't supported.
- > [!NOTE]
- > If you use Azure Blob URL, you can ensure maximum security of your dataset files by using trusted Azure services security mechanism. You will use the same techniques as for Batch transcription and plain Storage Account URLs for your dataset files. See details [here](batch-transcription-audio-data.md#trusted-azure-services-security-mechanism).
- Set the required `language` parameter. The dataset locale must match the locale of the project. The locale can't be changed later. The Speech CLI `language` parameter corresponds to the `locale` property in the JSON request and response.
- Set the required `name` parameter. This is the name that will be displayed in the Speech Studio. The Speech CLI `name` parameter corresponds to the `displayName` property in the JSON request and response.
-Here's an example Speech CLI command that creates a dataset and connects it to an existing project:
-
-```azurecli-interactive
-spx csr dataset create --api-version v3.1 --kind "Acoustic" --name "My Acoustic Dataset" --description "My Acoustic Dataset Description" --project YourProjectId --content YourContentUrl --language "en-US"
-```
-
-You should receive a response body in the following format:
-
-```json
-{
- "self": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.1/datasets/e0ea620b-e8c3-4a26-acb2-95fd0cbc625c",
- "kind": "Acoustic",
- "contentUrl": "https://contoso.com/mydatasetlocation",
- "links": {
- "files": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.1/datasets/e0ea620b-e8c3-4a26-acb2-95fd0cbc625c/files"
- },
- "project": {
- "self": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.1/projects/70ccbffc-cafb-4301-aa9f-ef658559d96e"
- },
- "properties": {
- "acceptedLineCount": 0,
- "rejectedLineCount": 0
- },
- "lastActionDateTime": "2022-05-20T14:07:11Z",
- "status": "NotStarted",
- "createdDateTime": "2022-05-20T14:07:11Z",
- "locale": "en-US",
- "displayName": "My Acoustic Dataset",
- "description": "My Acoustic Dataset Description"
-}
-```
-
-The top-level `self` property in the response body is the dataset's URI. Use this URI to get details about the dataset's project and files. You also use this URI to update or delete a dataset.
-
-For Speech CLI help with datasets, run the following command:
-
-```azurecli-interactive
-spx help csr dataset
-```
----
-To create a dataset and connect it to an existing project, use the [Datasets_Create](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/operations/Datasets_Create) operation of the [Speech to text REST API](rest-speech-to-text.md). Construct the request body according to the following instructions:
- Set the `project` property to the URI of an existing project. This is recommended so that you can also view and manage the dataset in Speech Studio. You can make a [Projects_List](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/operations/Projects_List) request to get available projects.
- Set the required `kind` property. The possible set of values for dataset kind are: Language, Acoustic, Pronunciation, and AudioFiles.
- Set the required `contentUrl` property. This is the location of the dataset. If you don't use the trusted Azure services security mechanism (see the next note), then the `contentUrl` property should be a URL that can be retrieved with a simple anonymous GET request. For example, a [SAS URL](/azure/storage/common/storage-sas-overview) or a publicly accessible URL. URLs that require extra authorization or expect user interaction aren't supported.
- > [!NOTE]
- > If you use Azure Blob URL, you can ensure maximum security of your dataset files by using trusted Azure services security mechanism. You will use the same techniques as for Batch transcription and plain Storage Account URLs for your dataset files. See details [here](batch-transcription-audio-data.md#trusted-azure-services-security-mechanism).
-
-- Set the required `locale` property. The dataset locale must match the locale of the project. The locale can't be changed later.
-- Set the required `displayName` property. This is the name that will be displayed in the Speech Studio.
-
-Make an HTTP POST request using the URI as shown in the following example. Replace `YourSubscriptionKey` with your Speech resource key, replace `YourServiceRegion` with your Speech resource region, and set the request body properties as previously described.
-
-```azurecli-interactive
-curl -v -X POST -H "Ocp-Apim-Subscription-Key: YourSubscriptionKey" -H "Content-Type: application/json" -d '{
- "kind": "Acoustic",
- "displayName": "My Acoustic Dataset",
- "description": "My Acoustic Dataset Description",
- "project": {
- "self": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.1/projects/70ccbffc-cafb-4301-aa9f-ef658559d96e"
- },
- "contentUrl": "https://contoso.com/mydatasetlocation",
- "locale": "en-US",
-}' "https://YourServiceRegion.api.cognitive.microsoft.com/speechtotext/v3.1/datasets"
-```
-
-You should receive a response body in the following format:
-
-```json
-{
- "self": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.1/datasets/e0ea620b-e8c3-4a26-acb2-95fd0cbc625c",
- "kind": "Acoustic",
- "contentUrl": "https://contoso.com/mydatasetlocation",
- "links": {
- "files": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.1/datasets/e0ea620b-e8c3-4a26-acb2-95fd0cbc625c/files"
- },
- "project": {
- "self": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.1/projects/70ccbffc-cafb-4301-aa9f-ef658559d96e"
- },
- "properties": {
- "acceptedLineCount": 0,
- "rejectedLineCount": 0
- },
- "lastActionDateTime": "2022-05-20T14:07:11Z",
- "status": "NotStarted",
- "createdDateTime": "2022-05-20T14:07:11Z",
- "locale": "en-US",
- "displayName": "My Acoustic Dataset",
- "description": "My Acoustic Dataset Description"
-}
-```
-
-The top-level `self` property in the response body is the dataset's URI. Use this URI to [get](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/operations/Datasets_Get) details about the dataset's project and files. You also use this URI to [update](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/operations/Datasets_Update) or [delete](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/operations/Datasets_Delete) the dataset.
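-
-For example, a minimal sketch of retrieving the dataset with the [Datasets_Get](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/operations/Datasets_Get) operation might look like the following request. It reuses the placeholder key and region from the earlier examples and the dataset ID from the sample response; substitute your own values.
-
-```azurecli-interactive
-curl -v -X GET -H "Ocp-Apim-Subscription-Key: YourSubscriptionKey" "https://YourServiceRegion.api.cognitive.microsoft.com/speechtotext/v3.1/datasets/e0ea620b-e8c3-4a26-acb2-95fd0cbc625c"
-```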
--
-> [!IMPORTANT]
-> Connecting a dataset to a Custom Speech project isn't required to train and test a custom model using the REST API or Speech CLI. But if the dataset is not connected to any project, you can't select it for training or testing in the [Speech Studio](https://aka.ms/speechstudio/customspeech).
-
-## Next steps
-
-* [Test recognition quality](how-to-custom-speech-inspect-data.md)
-* [Test model quantitatively](how-to-custom-speech-evaluate-data.md)
-* [Train a custom model](how-to-custom-speech-train-model.md)
cognitive-services How To Custom Voice Create Voice https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/how-to-custom-voice-create-voice.md
- Title: Train your custom voice model - Speech service-
-description: Learn how to train a custom neural voice through the Speech Studio portal.
------ Previously updated : 11/28/2022----
-# Train your voice model
-
-In this article, you learn how to train a custom neural voice through the Speech Studio portal.
-
-> [!IMPORTANT]
-> Custom Neural Voice training is currently only available in some regions. After your voice model is trained in a supported region, you can [copy](#copy-your-voice-model-to-another-project) it to a Speech resource in another region as needed. See footnotes in the [regions](regions.md#speech-service) table for more information.
-
-Training duration varies depending on how much data you're training. It takes about 40 compute hours on average to train a custom neural voice. Standard subscription (S0) users can train four voices simultaneously. If you reach the limit, wait until at least one of your voice models finishes training, and then try again.
-
-> [!NOTE]
-> Although the total number of hours required per [training method](#choose-a-training-method) will vary, the same unit price applies to each. For more information, see the [Custom Neural training pricing details](https://azure.microsoft.com/pricing/details/cognitive-services/speech-services/).
-
-## Choose a training method
-
-After you validate your data files, you can use them to build your Custom Neural Voice model. When you create a custom neural voice, you can choose to train it with one of the following methods:
-
-- [Neural](?tabs=neural#train-your-custom-neural-voice-model): To create a voice in the same language as your training data, select the **Neural** method.
-
-- [Neural - cross lingual](?tabs=crosslingual#train-your-custom-neural-voice-model) (Preview): Create a secondary language for your voice model so that it speaks a different language from your training data. For example, with `zh-CN` training data, you can create a voice that speaks `en-US`. The language of the training data and the target language must both be among the [languages that are supported](language-support.md?tabs=tts) for cross lingual voice training. You don't need to prepare training data in the target language, but your test script must be in the target language.
-
-- [Neural - multi style](?tabs=multistyle#train-your-custom-neural-voice-model) (Preview): Create a custom neural voice that speaks in multiple styles and emotions, without adding new training data. Multi-style voices are particularly useful for video game characters, conversational chatbots, audiobooks, content readers, and more. To create a multi-style voice, you need to prepare a set of general training data (at least 300 utterances), and select one or more of the preset target speaking styles. You can also create up to 10 custom styles by providing style samples (at least 100 utterances per style) as additional training data for the same voice.
-
-The language of the training data must be one of the [languages that are supported](language-support.md?tabs=tts) for custom neural voice neural, cross-lingual, or multi-style training.
-
-## Train your Custom Neural Voice model
-
-To create a custom neural voice in Speech Studio, follow these steps for one of the following [methods](#choose-a-training-method):
-
-# [Neural](#tab/neural)
-
-1. Sign in to the [Speech Studio](https://aka.ms/speechstudio/customvoice).
-1. Select **Custom Voice** > Your project name > **Train model** > **Train a new model**.
-1. Select **Neural** as the [training method](#choose-a-training-method) for your model and then select **Next**. To use a different training method, see [Neural - cross lingual](?tabs=crosslingual#train-your-custom-neural-voice-model) or [Neural - multi style](?tabs=multistyle#train-your-custom-neural-voice-model).
- :::image type="content" source="media/custom-voice/cnv-train-neural.png" alt-text="Screenshot that shows how to select neural training.":::
-1. Select a version of the training recipe for your model. The latest version is selected by default. The supported features and training time can vary by version. Normally, the latest version is recommended for the best results. In some cases, you can choose an older version to reduce training time.
-1. Select the data that you want to use for training. Duplicate audio names will be removed from the training. Make sure the data you select don't contain the same audio names across multiple .zip files. Only successfully processed datasets can be selected for training. Check your data processing status if you do not see your training set in the list.
-1. Select a speaker file with the voice talent statement that corresponds to the speaker in your training data.
-1. Select **Next**.
-1. Each training generates 100 sample audio files automatically, to help you test the model with a default script. Optionally, you can also check the box next to **Add my own test script** and provide your own test script with up to 100 utterances to test the model at no additional cost. The generated audio files are a combination of the automatic test scripts and custom test scripts. For more information, see [test script requirements](#test-script-requirements).
-1. Enter a **Name** and **Description** to help you identify the model. Choose a name carefully. The model name will be used as the voice name in your [speech synthesis request](how-to-deploy-and-use-endpoint.md#use-your-custom-voice) via the SDK and SSML input. Only letters, numbers, and a few punctuation characters are allowed. Use different names for different neural voice models.
-1. Optionally, enter the **Description** to help you identify the model. A common use of the description is to record the names of the data that you used to create the model.
-1. Select **Next**.
-1. Review the settings and check the box to accept the terms of use.
-1. Select **Submit** to start training the model.
-
-# [Neural - cross lingual](#tab/crosslingual)
-
-1. Sign in to the [Speech Studio](https://aka.ms/speechstudio/customvoice).
-1. Select **Custom Voice** > Your project name > **Train model** > **Train a new model**.
-1. Select **Neural - cross lingual** (Preview) as the [training method](#choose-a-training-method) for your model. To use a different training method, see [Neural](?tabs=neural#train-your-custom-neural-voice-model) or [Neural - multi style](?tabs=multistyle#train-your-custom-neural-voice-model).
- :::image type="content" source="media/custom-voice/cnv-train-neural-cross-lingual.png" alt-text="Screenshot that shows how to select neural cross lingual training.":::
-1. Select the **Target language** that will be the secondary language for your voice model. Only one target language can be selected for a voice model.
-1. Select the data that you want to use for training. Duplicate audio names will be removed from the training. Make sure the data you select don't contain the same audio names across multiple .zip files. Only successfully processed datasets can be selected for training. Check your data processing status if you do not see your training set in the list.
-1. Select a speaker file with the voice talent statement that corresponds to the speaker in your training data.
-1. Select **Next**.
-1. Each training generates 100 sample audio files automatically, to help you test the model with a default script. Optionally, you can also check the box next to **Add my own test script** and provide your own test script with up to 100 utterances to test the model at no additional cost. The generated audio files are a combination of the automatic test scripts and custom test scripts. For more information, see [test script requirements](#test-script-requirements).
-1. Enter a **Name** and **Description** to help you identify the model. Choose a name carefully. The model name will be used as the voice name in your [speech synthesis request](how-to-deploy-and-use-endpoint.md#use-your-custom-voice) via the SDK and SSML input. Only letters, numbers, and a few punctuation characters are allowed. Use different names for different neural voice models.
-1. Optionally, enter the **Description** to help you identify the model. A common use of the description is to record the names of the data that you used to create the model.
-1. Select **Next**.
-1. Review the settings and check the box to accept the terms of use.
-1. Select **Submit** to start training the model.
-
-# [Neural - multi style](#tab/multistyle)
-
-1. Sign in to the [Speech Studio](https://aka.ms/speechstudio/customvoice).
-1. Select **Custom Voice** > Your project name > **Train model** > **Train a new model**.
-1. Select **Neural - multi style** (Preview) as the [training method](#choose-a-training-method) for your model. To use a different training method, see [Neural](?tabs=neural#train-your-custom-neural-voice-model) or [Neural - cross lingual](?tabs=crosslingual#train-your-custom-neural-voice-model).
- :::image type="content" source="media/custom-voice/cnv-train-neural-multi-style.png" alt-text="Screenshot that shows how to select neural multi style training.":::
-1. Select one or more preset speaking styles to train.
-1. Select the data that you want to use for training. Duplicate audio names will be removed from the training. Make sure the data you select don't contain the same audio names across multiple .zip files. Only successfully processed datasets can be selected for training. Check your data processing status if you do not see your training set in the list.
-1. Select **Next**.
-1. Optionally, you can add up to 10 custom speaking styles:
- 1. Select **Add a custom style** and thoughtfully enter a custom style name of your choice. This name will be used by your application within the `style` element of [Speech Synthesis Markup Language (SSML)](speech-synthesis-markup-voice.md#speaking-styles-and-roles). You can also use the custom style name as SSML via the [Audio Content Creation](how-to-audio-content-creation.md) tool in [Speech Studio](https://speech.microsoft.com/portal/audiocontentcreation).
- 1. Select style samples as training data. The style samples should be all from the same voice talent profile.
-1. Select **Next**.
-1. Select a speaker file with the voice talent statement that corresponds to the speaker in your training data.
-1. Select **Next**.
-1. Each training generates 100 sample audios for the default style and 20 for each preset style automatically, to help you test the model with a default script. Optionally, you can also check the box next to **Add my own test script** and provide your own test script with up to 100 utterances to test the default style at no additional cost. The generated audio files are a combination of the automatic test scripts and custom test scripts. For more information, see [test script requirements](#test-script-requirements).
-1. Enter a **Name** and **Description** to help you identify the model. Choose a name carefully. The model name will be used as the voice name in your [speech synthesis request](how-to-deploy-and-use-endpoint.md#use-your-custom-voice) via the SDK and SSML input. Only letters, numbers, and a few punctuation characters are allowed. Use different names for different neural voice models.
-1. Optionally, enter the **Description** to help you identify the model. A common use of the description is to record the names of the data that you used to create the model.
-1. Select **Next**.
-1. Review the settings and check the box to accept the terms of use.
-1. Select **Submit** to start training the model.
-
-
-
-The **Train model** table displays a new entry that corresponds to this newly created model. The status reflects the process of converting your data to a voice model, as described in this table:
-
-| State | Meaning |
-| -- | - |
-| Processing | Your voice model is being created. |
-| Succeeded | Your voice model has been created and can be deployed. |
-| Failed | Training of your voice model failed. The cause of the failure might be, for example, unseen data problems or network issues. |
-| Canceled | The training for your voice model was canceled. |
-
-While the model status is **Processing**, you can select **Cancel training** to cancel your voice model. You're not charged for this canceled training.
--
-After you finish training the model successfully, you can review the model details and [test the model](#test-your-voice-model).
-
-You can use the [Audio Content Creation](how-to-audio-content-creation.md) tool in [Speech Studio](https://speech.microsoft.com/portal/audiocontentcreation) to create audio and fine-tune your deployed voice. If applicable for your voice, you can also select one of its multiple styles.
-
-### Rename your model
-
-If you want to rename the model you built, you can select **Clone model** to create a clone of the model with a new name in the current project.
--
-Enter the new name on the **Clone voice model** window, then select **Submit**. The text 'Neural' will be automatically added as a suffix to your new model name.
--
-### Test your voice model
-
-After your voice model is successfully built, you can use the generated sample audio files to test it before deploying it for use.
-
-The quality of the voice depends on many factors, such as:
-
-- The size of the training data.
-- The quality of the recording.
-- The accuracy of the transcript file.
-- How well the recorded voice in the training data matches the personality of the designed voice for your intended use case.
-
-Select **DefaultTests** under **Testing** to listen to the sample audios. The default test samples include 100 sample audios generated automatically during training to help you test the model. In addition to these 100 audios provided by default, your own test script (at most 100 utterances) provided during training is also added to the **DefaultTests** set. You're not charged for the testing with **DefaultTests**.
--
-If you want to upload your own test scripts to further test your model, select **Add test scripts** to upload your own test script.
--
-Before uploading a test script, check the [test script requirements](#test-script-requirements). You'll be charged for the additional testing with batch synthesis based on the number of billable characters. See the [pricing page](https://azure.microsoft.com/pricing/details/cognitive-services/speech-services/).
-
-In the **Add test scripts** window, select **Browse for a file** to select your own script, then select **Add** to upload it.
--
-### Test script requirements
-
-The test script must be a .txt file, less than 1 MB. Supported encoding formats include ANSI/ASCII, UTF-8, UTF-8-BOM, UTF-16-LE, or UTF-16-BE.
-
-Unlike the [training transcription files](how-to-custom-voice-training-data.md#transcription-data-for-individual-utterances--matching-transcript), the test script should exclude the utterance ID (filenames of each utterance). Otherwise, these IDs are spoken.
-
-Here's an example set of utterances in one .txt file:
-
-```text
-This is the waistline, and it's falling.
-We have trouble scoring.
-It was Janet Maslin.
-```
-
-Each paragraph of the utterance results in a separate audio. If you want to combine all sentences into one audio, make them a single paragraph.
-
->[!NOTE]
-> The generated audio files are a combination of the automatic test scripts and custom test scripts.
-
-### Update engine version for your voice model
-
-Azure Text to speech engines are updated from time to time to capture the latest language model that defines the pronunciation of the language. After you've trained your voice, you can apply your voice to the new language model by updating to the latest engine version.
-
-When a new engine is available, you're prompted to update your neural voice model.
--
-Go to the model details page and follow the on-screen instructions to install the latest engine.
--
-Alternatively, select **Install the latest engine** later to update your model to the latest engine version.
--
-You're not charged for the engine update. The previous versions are still kept. You can check all engine versions for the model from the **Engine version** drop-down list, or remove a version if you don't need it anymore.
--
-The updated version is automatically set as default. But you can change the default version by selecting a version from the drop-down list and selecting **Set as default**.
--
-If you want to test each engine version of your voice model, you can select a version from the drop-down list, then select **DefaultTests** under **Testing** to listen to the sample audios. If you want to upload your own test scripts to further test your current engine version, first make sure the version is set as default, then follow the [testing steps above](#test-your-voice-model).
-
-Updating the engine will create a new version of the model at no additional cost. After you've updated the engine version for your voice model, you need to deploy the new version to [create a new endpoint](how-to-deploy-and-use-endpoint.md#add-a-deployment-endpoint). You can only deploy the default version.
--
-After you've created a new endpoint, you need to [transfer the traffic to the new endpoint in your product](how-to-deploy-and-use-endpoint.md#switch-to-a-new-voice-model-in-your-product).
-
-For more information, [learn more about the capabilities and limits of this feature, and the best practice to improve your model quality](/legal/cognitive-services/speech-service/custom-neural-voice/characteristics-and-limitations-custom-neural-voice?context=%2fazure%2fcognitive-services%2fspeech-service%2fcontext%2fcontext).
-
-## Copy your voice model to another project
-
-You can copy your voice model to another project for the same region or another region. For example, you can copy a neural voice model that was trained in one region, to a project for another region.
-
-> [!NOTE]
-> Custom Neural Voice training is currently only available in some regions. But you can easily copy a neural voice model from those regions to other regions. For more information, see the [regions for Custom Neural Voice](regions.md#speech-service).
-
-To copy your custom neural voice model to another project:
-
-1. On the **Train model** tab, select a voice model that you want to copy, and then select **Copy to project**.
-
- :::image type="content" source="media/custom-voice/cnv-model-copy.png" alt-text="Screenshot of the copy to project option.":::
-
-1. Select the **Region**, **Speech resource**, and **Project** where you want to copy the model. You must have a Speech resource and project in the target region. If you don't have them, create them first.
-
- :::image type="content" source="media/custom-voice/cnv-model-copy-dialog.png" alt-text="Screenshot of the copy voice model dialog.":::
-
-1. Select **Submit** to copy the model.
-1. Select **View model** under the notification message for copy success.
-
-Navigate to the project where you copied the model to [deploy the model copy](how-to-deploy-and-use-endpoint.md).
-
-## Next steps
-
-- [Deploy and use your voice model](how-to-deploy-and-use-endpoint.md)
-- [How to record voice samples](record-custom-voice-samples.md)
-- [Text to speech API reference](rest-text-to-speech.md)
cognitive-services How To Custom Voice Prepare Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/how-to-custom-voice-prepare-data.md
- Title: "How to prepare data for Custom Voice - Speech service"-
-description: "Learn how to provide studio recordings and the associated scripts that will be used to train your Custom Neural Voice."
------ Previously updated : 10/27/2022---
-# Prepare training data for Custom Neural Voice
-
-When you're ready to create a custom Text to speech voice for your application, the first step is to gather audio recordings and associated scripts to start training the voice model. For details on recording voice samples, see [the tutorial](record-custom-voice-samples.md). The Speech service uses this data to create a unique voice tuned to match the voice in the recordings. After you've trained the voice, you can start synthesizing speech in your applications.
-
-All data you upload must meet the requirements for the data type that you choose. It's important to correctly format your data before it's uploaded, which ensures the data will be accurately processed by the Speech service. To confirm that your data is correctly formatted, see [Training data types](how-to-custom-voice-training-data.md).
-
-> [!NOTE]
-> - Standard subscription (S0) users can upload five data files simultaneously. If you reach the limit, wait until at least one of your data files finishes importing. Then try again.
-> - The maximum number of data files allowed to be imported per subscription is 500 .zip files for standard subscription (S0) users. For more information, see [Speech service quotas and limits](speech-services-quotas-and-limits.md#custom-neural-voice).
-
-## Upload your data
-
-When you're ready to upload your data, go to the **Prepare training data** tab to add your first training set and upload data. A *training set* is a set of audio utterances and their mapping scripts used for training a voice model. You can use a training set to organize your training data. The service checks data readiness for each training set. You can import multiple datasets into a training set.
-
-To upload training data, follow these steps:
-
-1. Sign in to the [Speech Studio](https://aka.ms/speechstudio/customvoice).
-1. Select **Custom Voice** > Your project name > **Prepare training data** > **Upload data**.
-1. In the **Upload data** wizard, choose a [data type](how-to-custom-voice-training-data.md) and then select **Next**.
-1. Select local files from your computer or enter the Azure Blob storage URL to upload data.
-1. Under **Specify the target training set**, select an existing training set or create a new one. If you created a new training set, make sure it's selected in the drop-down list before you continue.
-1. Select **Next**.
-1. Enter a name and description for your data and then select **Next**.
-1. Review the upload details, and select **Submit**.
-
-> [!NOTE]
-> Duplicate IDs are not accepted. Utterances with the same ID will be removed.
->
-> Duplicate audio names are removed from the training. Make sure the data you select don't contain the same audio names within the .zip file or across multiple .zip files. If utterance IDs (either in audio or script files) are duplicates, they're rejected.
-
-Data files are automatically validated when you select **Submit**. Data validation includes a series of checks on the audio files to verify their file format, size, and sampling rate. If there are any errors, fix them and submit again.
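-
-Before you upload, you can also spot-check a recording's format, bit depth, and sampling rate locally with an audio tool such as [SoX](http://sox.sourceforge.net/), which is also referenced in the validation error table later in this article. This is only a sketch with a hypothetical file name:
-
-```console
-# Print the file's format, channel count, sampling rate, and bit depth.
-sox --i myrecording.wav
-```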
-
-After you upload the data, you can check the details in the training set detail view. On the detail page, you can further check the pronunciation issues and the noise level for each of your data files. The pronunciation score at the sentence level ranges from 0 to 100. A score below 70 normally indicates a speech error or script mismatch. Utterances with an overall score lower than 70 will be rejected. A heavy accent can reduce your pronunciation score and affect the generated digital voice.
-
-## Resolve data issues online
-
-After upload, you can check the data details of the training set. Before continuing to [train your voice model](how-to-custom-voice-create-voice.md), you should try to resolve any data issues.
-
-You can resolve data issues per utterance in Speech Studio.
-
-1. On the detail page, go to the **Accepted data** or **Rejected data** page. Select individual utterances you want to change, then click **Edit**.
-
- :::image type="content" source="media/custom-voice/cnv-edit-trainingset.png" alt-text="Screenshot of selecting edit button on the accepted data or rejected data details page.":::
-
- You can choose which data issues are displayed based on your criteria.
-
- :::image type="content" source="media/custom-voice/cnv-issues-display-criteria.png" alt-text="Screenshot of choosing which data issues to be displayed":::
-
-1. The edit window is displayed.
-
- :::image type="content" source="media/custom-voice/cnv-edit-trainingset-editscript.png" alt-text="Screenshot of displaying Edit transcript and recording file window.":::
-
-1. Update the transcript or recording file according to the issue description in the edit window.
-
- You can edit the transcript in the text box, then click **Done**.
-
- :::image type="content" source="media/custom-voice/cnv-edit-trainingset-scriptedit-done.png" alt-text="Screenshot of selecting Done button on the Edit transcript and recording file window.":::
-
- If you need to update the recording file, select **Update recording file**, then upload the fixed recording file (.wav).
-
- :::image type="content" source="media/custom-voice/cnv-edit-trainingset-upload-recording.png" alt-text="Screenshot that shows how to upload recording file on the Edit transcript and recording file window.":::
-
-1. After you've made changes to your data, you need to check the data quality by clicking **Analyze data** before using this dataset for training.
-
- You can't select this training set for model training until the analysis is complete.
-
- :::image type="content" source="media/custom-voice/cnv-edit-trainingset-analyze.png" alt-text="Screenshot of selecting Analyze data on Data details page.":::
-
- You can also delete utterances with issues by selecting them and clicking **Delete**.
-
-### Typical data issues
-
-The issues are divided into three types. Refer to the following tables to check the respective types of errors.
-
-**Auto-rejected**
-
-Data with these errors won't be used for training. Imported data with errors will be ignored, so you don't need to delete them. You can [fix these data errors online](#resolve-data-issues-online) or upload the corrected data again for training.
-
-| Category | Name | Description |
-| | -- | |
-| Script | Invalid separator| You must separate the utterance ID and the script content with a Tab character.|
-| Script | Invalid script ID| The script line ID must be numeric.|
-| Script | Duplicated script|Each line of the script content must be unique. The line is duplicated with {}.|
-| Script | Script too long| The script must be less than 1,000 characters.|
-| Script | No matching audio| The ID of each utterance (each line of the script file) must match the audio ID.|
-| Script | No valid script| No valid script is found in this dataset. Fix the script lines that appear in the detailed issue list.|
-| Audio | No matching script| No audio files match the script ID. The name of the .wav files must match with the IDs in the script file.|
-| Audio | Invalid audio format| The audio format of the .wav files is invalid. Check the .wav file format by using an audio tool like [SoX](http://sox.sourceforge.net/).|
-| Audio | Low sampling rate| The sampling rate of the .wav files can't be lower than 16 KHz.|
-| Audio | Too long audio| Audio duration is longer than 30 seconds. Split the long audio into multiple files. It's a good idea to make utterances shorter than 15 seconds.|
-| Audio | No valid audio| No valid audio is found in this dataset. Check your audio data and upload again.|
-| Mismatch | Low scored utterance| Sentence-level pronunciation score is lower than 70. Review the script and the audio content to make sure they match.|
-
-**Auto-fixed**
-
-The following errors are fixed automatically, but you should review and confirm the fixes are made correctly.
-
-| Category | Name | Description |
-| | -- | |
-| Mismatch |Silence auto fixed |The start silence is detected to be shorter than 100 ms, and has been extended to 100 ms automatically. Download the normalized dataset and review it. |
-| Mismatch |Silence auto fixed | The end silence is detected to be shorter than 100 ms, and has been extended to 100 ms automatically. Download the normalized dataset and review it.|
-| Script | Text auto normalized|Text is automatically normalized for digits, symbols, and abbreviations. Review the script and audio to make sure they match.|
-
-**Manual check required**
-
-Unresolved errors listed in the next table affect the quality of training, but data with these errors won't be excluded during training. For higher-quality training, it's a good idea to fix these errors manually.
-
-| Category | Name | Description |
-| | -- | |
-| Script | Non-normalized text |This script contains symbols. Normalize the symbols to match the audio. For example, normalize */* to *slash*.|
-| Script | Not enough question utterances| At least 10 percent of the total utterances should be question sentences. This helps the voice model properly express a questioning tone.|
-| Script | Not enough exclamation utterances| At least 10 percent of the total utterances should be exclamation sentences. This helps the voice model properly express an excited tone.|
-| Script | No valid end punctuation| Add one of the following at the end of the line: full stop (half-width '.' or full-width '。'), exclamation point (half-width '!' or full-width '!' ), or question mark ( half-width '?' or full-width '?').|
-| Audio| Low sampling rate for neural voice | It's recommended that the sampling rate of your .wav files should be 24 KHz or higher for creating neural voices. If it's lower, it will be automatically raised to 24 KHz.|
-| Volume |Overall volume too low|Volume shouldn't be lower than -18 dB (10 percent of max volume). Control the volume average level within proper range during the sample recording or data preparation.|
-| Volume | Volume overflow| Overflowing volume is detected at {}s. Adjust the recording equipment to avoid the volume overflow at its peak value.|
-| Volume | Start silence issue | The first 100 ms of silence isn't clean. Reduce the recording noise floor level, and leave the first 100 ms at the start silent.|
-| Volume| End silence issue| The last 100 ms of silence isn't clean. Reduce the recording noise floor level, and leave the last 100 ms at the end silent.|
-| Mismatch | Low scored words|Review the script and the audio content to make sure they match, and control the noise floor level. Reduce the length of long silence, or split the audio into multiple utterances if it's too long.|
-| Mismatch | Start silence issue |Extra audio was heard before the first word. Review the script and the audio content to make sure they match, control the noise floor level, and make the first 100 ms silent.|
-| Mismatch | End silence issue| Extra audio was heard after the last word. Review the script and the audio content to make sure they match, control the noise floor level, and make the last 100 ms silent.|
-| Mismatch | Low signal-noise ratio | Audio SNR level is lower than 20 dB. At least 35 dB is recommended.|
-| Mismatch | No score available |Failed to recognize speech content in this audio. Check the audio and the script content to make sure the audio is valid, and matches the script.|
-
-## Next steps
-
-- [Train your voice model](how-to-custom-voice-create-voice.md)
-- [Deploy and use your voice model](how-to-deploy-and-use-endpoint.md)
-- [How to record voice samples](record-custom-voice-samples.md)
cognitive-services How To Custom Voice Talent https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/how-to-custom-voice-talent.md
- Title: "Set up voice talent for custom neural voice - Speech service"-
-description: Create a voice talent profile with an audio file recorded by the voice talent, consenting to the usage of their speech data to train a custom voice model.
------ Previously updated : 10/27/2022---
-# Set up voice talent for Custom Neural Voice
-
-A voice talent is an individual or target speaker whose voices are recorded and used to create neural voice models.
-
-Before you can train a neural voice, you must submit a recording of the voice talent's consent statement. The voice talent statement is a recording of the voice talent reading a statement that they consent to the usage of their speech data to train a custom voice model. The consent statement is also used to verify that the voice talent is the same person as the speaker in the training data.
-
-> [!TIP]
-> Before you get started in Speech Studio, define your voice [persona and choose the right voice talent](record-custom-voice-samples.md#choose-your-voice-talent).
-
-You can find the verbal consent statement in multiple languages on [GitHub](https://github.com/Azure-Samples/Cognitive-Speech-TTS/blob/master/CustomVoice/script/verbal-statement-all-locales.txt). The language of the verbal statement must be the same as your recording. See also the [disclosure for voice talent](/legal/cognitive-services/speech-service/disclosure-voice-talent?context=/azure/cognitive-services/speech-service/context/context).
-
-## Add voice talent
-
-To add a voice talent profile and upload their consent statement, follow these steps:
-
-1. Sign in to the [Speech Studio](https://aka.ms/speechstudio/customvoice).
-1. Select **Custom Voice** > Your project name > **Set up voice talent** > **Add voice talent**.
-1. In the **Add new voice talent** wizard, describe the characteristics of the voice you're going to create. The scenarios that you specify here must be consistent with what you provided in the application form.
-1. Select **Next**.
-1. On the **Upload voice talent statement** page, follow the instructions to upload the voice talent statement you've recorded beforehand. Make sure the verbal statement was [recorded](record-custom-voice-samples.md) with the same settings, environment, and speaking style as your training data.
- :::image type="content" source="media/custom-voice/upload-verbal-statement.png" alt-text="Screenshot of the voice talent statement upload dialog.":::
-1. Enter the voice talent name and company name. The voice talent name must be the name of the person who recorded the consent statement. The company name must match the company name that was spoken in the recorded statement.
-1. Select **Next**.
-1. Review the voice talent and persona details, and select **Submit**.
-
-After the voice talent status is *Succeeded*, you can proceed to [train your custom voice model](how-to-custom-voice-create-voice.md).
-
-## Next steps
-
-- [Prepare data for custom neural voice](how-to-custom-voice-prepare-data.md)
-- [Train your voice model](how-to-custom-voice-create-voice.md)
-- [Deploy and use your voice model](how-to-deploy-and-use-endpoint.md)
cognitive-services How To Custom Voice Training Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/how-to-custom-voice-training-data.md
- Title: "Training data for Custom Neural Voice - Speech service"-
-description: "Learn about the data types that you can use to train a Custom Neural Voice."
------ Previously updated : 10/27/2022---
-# Training data for Custom Neural Voice
-
-When you're ready to create a custom Text to speech voice for your application, the first step is to gather audio recordings and associated scripts to start training the voice model. The Speech service uses this data to create a unique voice tuned to match the voice in the recordings. After you've trained the voice, you can start synthesizing speech in your applications.
-
-> [!TIP]
-> To create a voice for production use, we recommend you use a professional recording studio and voice talent. For more information, see [record voice samples to create a custom neural voice](record-custom-voice-samples.md).
-
-## Types of training data
-
-A voice training dataset includes audio recordings, and a text file with the associated transcriptions. Each audio file should contain a single utterance (a single sentence or a single turn for a dialog system), and be less than 15 seconds long.
-
-In some cases, you may not have the right dataset ready and will want to test the custom neural voice training with available audio files, short or long, with or without transcripts.
-
-This table lists data types and how each is used to create a custom Text to speech voice model.
-
-| Data type | Description | When to use | Extra processing required |
-| | -- | -- | |
-| [Individual utterances + matching transcript](#individual-utterances--matching-transcript) | A collection (.zip) of audio files (.wav) as individual utterances. Each audio file should be 15 seconds or less in length, paired with a formatted transcript (.txt). | Professional recordings with matching transcripts | Ready for training. |
-| [Long audio + transcript](#long-audio--transcript-preview) | A collection (.zip) of long, unsegmented audio files (.wav or .mp3, longer than 20 seconds, at most 1000 audio files), paired with a collection (.zip) of transcripts that contains all spoken words. | You have audio files and matching transcripts, but they aren't segmented into utterances. | Segmentation (using batch transcription).<br>Audio format transformation wherever required. |
-| [Audio only (Preview)](#audio-only-preview) | A collection (.zip) of audio files (.wav or .mp3, at most 1000 audio files) without a transcript. | You only have audio files available, without transcripts. | Segmentation + transcript generation (using batch transcription).<br>Audio format transformation wherever required.|
-
-Files should be grouped by type into a dataset and uploaded as a zip file. Each dataset can only contain a single data type.
-
-> [!NOTE]
-> The maximum number of datasets allowed to be imported per subscription is 500 zip files for standard subscription (S0) users.
-
-## Individual utterances + matching transcript
-
-You can prepare recordings of individual utterances and the matching transcript in two ways. Either [write a script and have it read by a voice talent](record-custom-voice-samples.md) or use publicly available audio and transcribe it to text. If you do the latter, edit disfluencies from the audio files, such as "um" and other filler sounds, stutters, mumbled words, or mispronunciations.
-
-To produce a good voice model, create the recordings in a quiet room with a high-quality microphone. Consistent volume, speaking rate, speaking pitch, and expressive mannerisms of speech are essential.
-
-For data format examples, refer to the sample training set on [GitHub](https://github.com/Azure-Samples/Cognitive-Speech-TTS/tree/master/CustomVoice/Sample%20Data). The sample training set includes the sample script and the associated audio.
-
-### Audio data for Individual utterances + matching transcript
-
-Each audio file should contain a single utterance (a single sentence or a single turn of a dialog system), less than 15 seconds long. All files must be in the same spoken language. Multi-language custom Text to speech voices aren't supported, except for Chinese-English bilingual voices. Each audio file must have a unique filename with the filename extension .wav.
-
-Follow these guidelines when preparing audio.
-
-| Property | Value |
-| -- | -- |
-| File format | RIFF (.wav), grouped into a .zip file |
-| File name | File name characters supported by Windows OS, with .wav extension.<br>The characters \ / : * ? " < > \| aren't allowed. <br>It can't start or end with a space, and can't start with a dot. <br>No duplicate file names allowed. |
-| Sampling rate | When creating a custom neural voice, 24,000 Hz is required. |
-| Sample format | PCM, at least 16-bit |
-| Audio length | Shorter than 15 seconds |
-| Archive format | .zip |
-| Maximum archive size | 2048 MB |
-
-> [!NOTE]
-> The default sampling rate for a custom neural voice is 24,000 Hz. Audio files with a sampling rate lower than 16,000 Hz will be rejected. If a .zip file contains .wav files with different sample rates, only those equal to or higher than 16,000 Hz will be imported. Your audio files with a sampling rate higher than 16,000 Hz and lower than 24,000 Hz will be up-sampled to 24,000 Hz to train a neural voice. It's recommended that you use a sample rate of 24,000 Hz for your training data.
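-
-If some of your recordings were captured at a different sampling rate, you can resample them before packaging the .zip file. The following is only a sketch that assumes [SoX](http://sox.sourceforge.net/) is installed and uses hypothetical file names:
-
-```console
-# Convert to 24,000 Hz, 16-bit PCM, mono, as recommended for training data.
-sox 0000000001_raw.wav -r 24000 -b 16 -c 1 0000000001.wav
-```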
-
-### Transcription data for Individual utterances + matching transcript
-
-The transcription file is a plain text file. Use these guidelines to prepare your transcriptions.
-
-| Property | Value |
-| -- | -- |
-| File format | Plain text (.txt) |
-| Encoding format | ANSI, ASCII, UTF-8, UTF-8-BOM, UTF-16-LE, or UTF-16-BE. For zh-CN, ANSI and ASCII encoding aren't supported. |
-| # of utterances per line | **One** - Each line of the transcription file should contain the name of one of the audio files, followed by the corresponding transcription. The file name and transcription should be separated by a tab (\t). |
-| Maximum file size | 2048 MB |
-
-Below is an example of how the transcripts are organized utterance by utterance in one .txt file:
-
-```
-0000000001[tab] This is the waistline, and it's falling.
-0000000002[tab] We have trouble scoring.
-0000000003[tab] It was Janet Maslin.
-```
-It's important that the transcripts are 100% accurate transcriptions of the corresponding audio. Errors in the transcripts will introduce quality loss during training.
-
-## Long audio + transcript (Preview)
-
-> [!NOTE]
-> For **Long audio + transcript (Preview)**, only these languages are supported: Chinese (Mandarin, Simplified), English (India), English (United Kingdom), English (United States), French (France), German (Germany), Italian (Italy), Japanese (Japan), Portuguese (Brazil), and Spanish (Mexico).
-
-In some cases, you may not have segmented audio available. The Speech Studio can help you segment long audio files and create transcriptions. The long-audio segmentation service will use the [Batch Transcription API](batch-transcription.md) feature of speech to text.
-
-During the processing of the segmentation, your audio files and the transcripts will also be sent to the Custom Speech service to refine the recognition model so the accuracy can be improved for your data. No data will be retained during this process. After the segmentation is done, only the segmented utterances and their mapping transcripts will be stored for your downloading and training.
-
-> [!NOTE]
-> This service will be charged toward your speech to text subscription usage. The long-audio segmentation service is only supported with standard (S0) Speech resources.
-
-### Audio data for Long audio + transcript
-
-Follow these guidelines when preparing audio for segmentation.
-
-| Property | Value |
-| -- | -- |
-| File format | RIFF (.wav) or .mp3, grouped into a .zip file |
-| File name | File name characters supported by Windows OS, with .wav extension. <br>The characters \ / : * ? " < > \| aren't allowed. <br>It can't start or end with a space, and can't start with a dot. <br>No duplicate file names allowed. |
-| Sampling rate | When creating a custom neural voice, 24,000 Hz is required. |
-| Sample format |RIFF(.wav): PCM, at least 16-bit<br>mp3: at least 256 KBps bit rate|
-| Audio length | Longer than 20 seconds |
-| Archive format | .zip |
-| Maximum archive size | 2048 MB, at most 1000 audio files included |
-
-> [!NOTE]
-> The default sampling rate for a custom neural voice is 24,000 Hz. Audio files with a sampling rate lower than 16,000 Hz will be rejected. Your audio files with a sampling rate higher than 16,000 Hz and lower than 24,000 Hz will be up-sampled to 24,000 Hz to train a neural voice. It's recommended that you use a sample rate of 24,000 Hz for your training data.
-
-All audio files should be grouped into a zip file. It's OK to put .wav files and .mp3 files into one audio zip. For example, you can upload a zip file containing an audio file named 'kingstory.wav', 45 seconds long, and another audio file named 'queenstory.mp3', 200 seconds long. All .mp3 files will be transformed into the .wav format after processing.
-
-### Transcription data for Long audio + transcript
-
-Transcripts must be prepared to the specifications listed in this table. Each audio file must be matched with a transcript.
-
-| Property | Value |
-| -- | -- |
-| File format | Plain text (.txt), grouped into a .zip |
-| File name | Use the same name as the matching audio file |
-| Encoding format |ANSI, ASCII, UTF-8, UTF-8-BOM, UTF-16-LE, or UTF-16-BE. For zh-CN, ANSI and ASCII encoding aren't supported. |
-| # of utterances per line | No limit |
-| Maximum file size | 2048 MB |
-
-All transcript files in this data type should be grouped into a zip file. For example, suppose you've uploaded a zip file containing an audio file named 'kingstory.wav', 45 seconds long, and another one named 'queenstory.mp3', 200 seconds long. You then need to upload another zip file containing two transcripts, one named 'kingstory.txt' and the other 'queenstory.txt'. Within each plain text file, provide the full, correct transcription for the matching audio.
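-
-For example, you might package the two archives from a shell as follows. This is only a sketch that assumes the `zip` utility and uses the hypothetical file names above:
-
-```console
-# One archive for the long audio files, one for the matching transcripts.
-zip long-audio.zip kingstory.wav queenstory.mp3
-zip transcripts.zip kingstory.txt queenstory.txt
-```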
-
-After your dataset is successfully uploaded, we'll help you segment the audio file into utterances based on the transcript provided. You can check the segmented utterances and the matching transcripts by downloading the dataset. Unique IDs will be assigned to the segmented utterances automatically. It's important that you make sure the transcripts you provide are 100% accurate. Errors in the transcripts can reduce the accuracy during the audio segmentation and further introduce quality loss in the training phase that comes later.
-
-## Audio only (Preview)
-
-> [!NOTE]
-> For **Audio only (Preview)**, only these languages are supported: Chinese (Mandarin, Simplified), English (India), English (United Kingdom), English (United States), French (France), German (Germany), Italian (Italy), Japanese (Japan), Portuguese (Brazil), and Spanish (Mexico).
-
-If you don't have transcriptions for your audio recordings, use the **Audio only** option to upload your data. Our system can help you segment and transcribe your audio files. Keep in mind, this service will be charged toward your speech to text subscription usage.
-
-Follow these guidelines when preparing audio.
-
-> [!NOTE]
-> The long-audio segmentation service will leverage the batch transcription feature of speech to text, which only supports standard subscription (S0) users.
-
-| Property | Value |
-| -- | -- |
-| File format | RIFF (.wav) or .mp3, grouped into a .zip file |
-| File name | File name characters supported by Windows OS, with .wav extension. <br>The characters \ / : * ? " < > \| aren't allowed. <br>It can't start or end with a space, and can't start with a dot. <br>No duplicate file names allowed. |
-| Sampling rate | When creating a custom neural voice, 24,000 Hz is required. |
-| Sample format |RIFF(.wav): PCM, at least 16-bit<br>mp3: at least 256 KBps bit rate|
-| Audio length | No limit |
-| Archive format | .zip |
-| Maximum archive size | 2048 MB, at most 1000 audio files included |
-
-> [!NOTE]
-> The default sampling rate for a custom neural voice is 24,000 Hz. Your audio files with a sampling rate higher than 16,000 Hz and lower than 24,000 Hz will be up-sampled to 24,000 Hz to train a neural voice. It's recommended that you use a sample rate of 24,000 Hz for your training data.
-
-All audio files should be grouped into a zip file. Once your dataset is successfully uploaded, we'll help you segment the audio file into utterances based on our speech batch transcription service. Unique IDs will be assigned to the segmented utterances automatically. Matching transcripts will be generated through speech recognition. All .mp3 files will be transformed into the .wav format after processing. You can check the segmented utterances and the matching transcripts by downloading the dataset.
-
-## Next steps
-
-- [Train your voice model](how-to-custom-voice-create-voice.md)
-- [Deploy and use your voice model](how-to-deploy-and-use-endpoint.md)
-- [How to record voice samples](record-custom-voice-samples.md)
cognitive-services How To Custom Voice https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/how-to-custom-voice.md
- Title: Create a project for Custom Neural Voice - Speech service-
-description: Learn how to create a Custom Neural Voice project that contains data, models, tests, and endpoints in Speech Studio.
------ Previously updated : 10/27/2022---
-# Create a project for Custom Neural Voice
-
-Content for [Custom Neural Voice](https://aka.ms/customvoice) like data, models, tests, and endpoints are organized into projects in Speech Studio. Each project is specific to a country/region and language, and the gender of the voice you want to create. For example, you might create a project for a female voice for your call center's chat bots that use English in the United States.
-
-> [!TIP]
-> Try [Custom Neural Voice (CNV) Lite](custom-neural-voice-lite.md) to demo and evaluate CNV before investing in professional recordings to create a higher-quality voice.
-
-All it takes to get started are a handful of audio files and the associated transcriptions. See if Custom Neural Voice supports your [language](language-support.md?tabs=tts) and [region](regions.md#speech-service).
-
-## Create a Custom Neural Voice Pro project
-
-To create a Custom Neural Voice Pro project, follow these steps:
-
-1. Sign in to the [Speech Studio](https://aka.ms/speechstudio/customvoice).
-1. Select the subscription and Speech resource to work with.
-
- > [!IMPORTANT]
- > Custom Neural Voice training is currently only available in some regions. After your voice model is trained in a supported region, you can copy it to a Speech resource in another region as needed. See footnotes in the [regions](regions.md#speech-service) table for more information.
-
-1. Select **Custom Voice** > **Create a project**.
-1. Select **Custom Neural Voice Pro** > **Next**.
-1. Follow the instructions provided by the wizard to create your project.
-
-Select the new project by name or select **Go to project**. You'll see these menu items in the left panel: **Set up voice talent**, **Prepare training data**, **Train model**, and **Deploy model**.
-
-## Next steps
-
-- [Set up voice talent](how-to-custom-voice-talent.md)
-- [Prepare data for custom neural voice](how-to-custom-voice-prepare-data.md)
-- [Train your voice model](how-to-custom-voice-create-voice.md)
-- [Deploy and use your voice model](how-to-deploy-and-use-endpoint.md)
cognitive-services How To Deploy And Use Endpoint https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/how-to-deploy-and-use-endpoint.md
- Title: How to deploy and use voice model - Speech service-
-description: Learn about how to deploy and use a custom neural voice model.
------ Previously updated : 11/30/2022--
-zone_pivot_groups: programming-languages-set-nineteen
--
-# Deploy and use your voice model
-
-After you've successfully created and [trained](how-to-custom-voice-create-voice.md) your voice model, you deploy it to a custom neural voice endpoint.
-
-Use the Speech Studio to [add a deployment endpoint](#add-a-deployment-endpoint) for your custom neural voice. You can use either the Speech Studio or text to speech REST API to [suspend or resume](#suspend-and-resume-an-endpoint) a custom neural voice endpoint.
-
-> [!NOTE]
-> You can create up to 50 endpoints with a standard (S0) Speech resource, each with its own custom neural voice.
-
-To use your custom neural voice, you must specify the voice model name, use the custom URI directly in an HTTP request, and use the same Speech resource to pass through the authentication of the text to speech service.
-
-## Add a deployment endpoint
-
-To create a custom neural voice endpoint:
-
-1. Sign in to the [Speech Studio](https://aka.ms/speechstudio/customvoice).
-1. Select **Custom Voice** > Your project name > **Deploy model** > **Deploy model**.
-1. Select a voice model that you want to associate with this endpoint.
-1. Enter a **Name** and **Description** for your custom endpoint.
-1. Select **Deploy** to create your endpoint.
-
-After your endpoint is deployed, the endpoint name appears as a link. Select the link to display information specific to your endpoint, such as the endpoint key, endpoint URL, and sample code. When the status of the deployment is **Succeeded**, the endpoint is ready for use.
-
-## Application settings
-
-The application settings that you use as REST API [request parameters](#request-parameters) are available on the **Deploy model** tab in [Speech Studio](https://aka.ms/custom-voice-portal).
--
-* The **Endpoint key** shows the Speech resource key the endpoint is associated with. Use the endpoint key as the value of your `Ocp-Apim-Subscription-Key` request header.
-* The **Endpoint URL** shows your service region. Use the value that precedes `voice.speech.microsoft.com` as your service region request parameter. For example, use `eastus` if the endpoint URL is `https://eastus.voice.speech.microsoft.com/cognitiveservices/v1`.
-* The **Endpoint URL** shows your endpoint ID. Use the value appended to the `?deploymentId=` query parameter as the value of your endpoint ID request parameter.
-
-## Use your custom voice
-
-The custom endpoint is functionally identical to the standard endpoint that's used for text to speech requests.
-
-One difference is that the `EndpointId` must be specified to use the custom voice via the Speech SDK. You can start with the [text to speech quickstart](get-started-text-to-speech.md) and then update the code with the `EndpointId` and `SpeechSynthesisVoiceName`.
-
-```csharp
-var speechConfig = SpeechConfig.FromSubscription(speechKey, speechRegion);
-speechConfig.SpeechSynthesisVoiceName = "YourCustomVoiceName";
-speechConfig.EndpointId = "YourEndpointId";
-```
-
-```cpp
-auto speechConfig = SpeechConfig::FromSubscription(speechKey, speechRegion);
-speechConfig->SetSpeechSynthesisVoiceName("YourCustomVoiceName");
-speechConfig->SetEndpointId("YourEndpointId");
-```
-
-```java
-SpeechConfig speechConfig = SpeechConfig.fromSubscription(speechKey, speechRegion);
-speechConfig.setSpeechSynthesisVoiceName("YourCustomVoiceName");
-speechConfig.setEndpointId("YourEndpointId");
-```
-
-```ObjectiveC
-SPXSpeechConfiguration *speechConfig = [[SPXSpeechConfiguration alloc] initWithSubscription:speechKey region:speechRegion];
-speechConfig.speechSynthesisVoiceName = @"YourCustomVoiceName";
-speechConfig.EndpointId = @"YourEndpointId";
-```
-
-```Python
-speech_config = speechsdk.SpeechConfig(subscription=os.environ.get('SPEECH_KEY'), region=os.environ.get('SPEECH_REGION'))
-speech_config.endpoint_id = "YourEndpointId"
-speech_config.speech_synthesis_voice_name = "YourCustomVoiceName"
-```
-
-To use a custom neural voice via [Speech Synthesis Markup Language (SSML)](speech-synthesis-markup-voice.md#voice-element), specify the model name as the voice name. This example uses the `YourCustomVoiceName` voice.
-
-```xml
-<speak version="1.0" xmlns="http://www.w3.org/2001/10/synthesis" xml:lang="en-US">
- <voice name="YourCustomVoiceName">
- This is the text that is spoken.
- </voice>
-</speak>
-```
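-
-As an illustration, here's a minimal sketch that sends this SSML with the Speech SDK for Python. The `YourSpeechKey`, `YourSpeechRegion`, `YourEndpointId`, and `YourCustomVoiceName` values are placeholders.
-
-```python
-import azure.cognitiveservices.speech as speechsdk
-
-speech_config = speechsdk.SpeechConfig(subscription="YourSpeechKey", region="YourSpeechRegion")
-speech_config.endpoint_id = "YourEndpointId"  # The custom neural voice endpoint ID.
-
-ssml = """<speak version="1.0" xmlns="http://www.w3.org/2001/10/synthesis" xml:lang="en-US">
-  <voice name="YourCustomVoiceName">This is the text that is spoken.</voice>
-</speak>"""
-
-# Synthesize the SSML to the default speaker.
-synthesizer = speechsdk.SpeechSynthesizer(speech_config=speech_config)
-result = synthesizer.speak_ssml_async(ssml).get()
-print(result.reason)
-```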
-
-## Switch to a new voice model in your product
-
-Once you've updated your voice model to the latest engine version, or if you want to switch to a new voice in your product, you must redeploy the new voice model to a new endpoint. Redeploying a new voice model on your existing endpoint isn't supported. After deployment, switch the traffic to the newly created endpoint. We recommend that you first transfer the traffic to the new endpoint in a test environment to make sure it works as expected, and then transfer it to the new endpoint in the production environment. Keep the old endpoint during the transition. If there are problems with the new endpoint during the transition, you can switch back to your old endpoint. If the traffic has been running well on the new endpoint for about 24 hours (the recommended value), you can delete the old endpoint.
-
-> [!NOTE]
-> If your voice name is changed and you are using Speech Synthesis Markup Language (SSML), be sure to use the new voice name in SSML.
-
-## Suspend and resume an endpoint
-
-You can suspend or resume an endpoint, to limit spend and conserve resources that aren't in use. You won't be charged while the endpoint is suspended. When you resume an endpoint, you can continue to use the same endpoint URL in your application to synthesize speech.
-
-You can suspend and resume an endpoint in Speech Studio or via the REST API.
-
-> [!NOTE]
-> The suspend operation will complete almost immediately. The resume operation completes in about the same amount of time as a new deployment.
-
-### Suspend and resume an endpoint in Speech Studio
-
-This section describes how to suspend or resume a custom neural voice endpoint in the Speech Studio portal.
-
-#### Suspend endpoint
-
-1. To suspend and deactivate your endpoint, select **Suspend** from the **Deploy model** tab in [Speech Studio](https://aka.ms/custom-voice-portal).
-
- :::image type="content" source="media/custom-voice/cnv-endpoint-suspend.png" alt-text="Screenshot of the select suspend endpoint option":::
-
-1. In the dialog box that appears, select **Submit**. After the endpoint is suspended, Speech Studio will show the **Successfully suspended endpoint** notification.
-
-#### Resume endpoint
-
-1. To resume and activate your endpoint, select **Resume** from the **Deploy model** tab in [Speech Studio](https://aka.ms/custom-voice-portal).
-
- :::image type="content" source="media/custom-voice/cnv-endpoint-resume.png" alt-text="Screenshot of the select resume endpoint option":::
-
-1. In the dialog box that appears, select **Submit**. After you successfully reactivate the endpoint, the status will change from **Suspended** to **Succeeded**.
-
-### Suspend and resume endpoint via REST API
-
-This section will show you how to [get](#get-endpoint), [suspend](#suspend-endpoint), or [resume](#resume-endpoint) a custom neural voice endpoint via REST API.
--
-#### Get endpoint
-
-Get the endpoint by endpoint ID. The operation returns details about an endpoint such as model ID, project ID, and status.
-
-For example, you might want to track the status progression for [suspend](#suspend-endpoint) or [resume](#resume-endpoint) operations. Use the `status` property in the response payload to determine the status of the endpoint.
-
-The possible `status` property values are:
-
-| Status | Description |
-| - | |
-| `NotStarted` | The endpoint hasn't yet been deployed, and it's not available for speech synthesis. |
-| `Running` | The endpoint is in the process of being deployed or resumed, and it's not available for speech synthesis. |
-| `Succeeded` | The endpoint is active and available for speech synthesis. The endpoint has been deployed or the resume operation succeeded. |
-| `Failed` | The endpoint deploy or suspend operation failed. The endpoint can only be viewed or deleted in [Speech Studio](https://aka.ms/custom-voice-portal).|
-| `Disabling` | The endpoint is in the process of being suspended, and it's not available for speech synthesis. |
-| `Disabled` | The endpoint is inactive, and it's not available for speech synthesis. The suspend operation succeeded or the resume operation failed. |
-
-> [!Tip]
-> If the status is `Failed` or `Disabled`, check `properties.error` for a detailed error message. However, there won't be error details if the status is `Disabled` due to a successful suspend operation.
-
-##### Get endpoint example
-
-For information about endpoint ID, region, and Speech resource key parameters, see [request parameters](#request-parameters).
-
-HTTP example:
-
-```HTTP
-GET api/texttospeech/v3.0/endpoints/<YourEndpointId> HTTP/1.1
-Ocp-Apim-Subscription-Key: YourResourceKey
-Host: <YourResourceRegion>.customvoice.api.speech.microsoft.com
-```
-
-cURL example:
-
-```Console
-curl -v -X GET "https://<YourResourceRegion>.customvoice.api.speech.microsoft.com/api/texttospeech/v3.0/endpoints/<YourEndpointId>" -H "Ocp-Apim-Subscription-Key: <YourResourceKey>"
-```
-
-Response header example:
-
-```
-Status code: 200 OK
-```
-
-Response body example:
-
-```json
-{
- "model": {
- "id": "a92aa4b5-30f5-40db-820c-d2d57353de44"
- },
- "project": {
- "id": "ffc87aba-9f5f-4bfa-9923-b98186591a79"
- },
- "properties": {},
- "status": "Succeeded",
- "lastActionDateTime": "2019-01-07T11:36:07Z",
- "id": "e7ffdf12-17c7-4421-9428-a7235931a653",
- "createdDateTime": "2019-01-07T11:34:12Z",
- "locale": "en-US",
- "name": "Voice endpoint",
- "description": "Example for voice endpoint"
-}
-```
-
-#### Suspend endpoint
-
-You can suspend an endpoint to limit spend and conserve resources that aren't in use. You won't be charged while the endpoint is suspended. When you resume an endpoint, you can use the same endpoint URL in your application to synthesize speech.
-
-You suspend an endpoint with its unique deployment ID. The endpoint status must be `Succeeded` before you can suspend it.
-
-Use the [get endpoint](#get-endpoint) operation to poll and track the status progression from `Succeeded`, to `Disabling`, and finally to `Disabled`.
-
-##### Suspend endpoint example
-
-For information about endpoint ID, region, and Speech resource key parameters, see [request parameters](#request-parameters).
-
-HTTP example:
-
-```HTTP
-POST api/texttospeech/v3.0/endpoints/<YourEndpointId>/suspend HTTP/1.1
-Ocp-Apim-Subscription-Key: YourResourceKey
-Host: <YourResourceRegion>.customvoice.api.speech.microsoft.com
-Content-Type: application/json
-Content-Length: 0
-```
-
-cURL example:
-
-```Console
-curl -v -X POST "https://<YourResourceRegion>.customvoice.api.speech.microsoft.com/api/texttospeech/v3.0/endpoints/<YourEndpointId>/suspend" -H "Ocp-Apim-Subscription-Key: <YourResourceKey>" -H "content-type: application/json" -H "content-length: 0"
-```
-
-Response header example:
-
-```
-Status code: 202 Accepted
-```
-
-For more information, see [response headers](#response-headers).
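-
-For example, here's a minimal sketch that calls the suspend operation and then polls the [get endpoint](#get-endpoint) operation until the endpoint reaches a terminal status. It uses the Python `requests` package, and the region, key, and endpoint ID values are placeholders.
-
-```python
-import time
-import requests
-
-region = "<YourResourceRegion>"
-key = "<YourResourceKey>"
-endpoint_id = "<YourEndpointId>"
-base_url = f"https://{region}.customvoice.api.speech.microsoft.com/api/texttospeech/v3.0/endpoints/{endpoint_id}"
-headers = {"Ocp-Apim-Subscription-Key": key}
-
-# Request the suspend operation. A 202 Accepted response means the request is being processed.
-requests.post(f"{base_url}/suspend", headers={**headers, "Content-Type": "application/json"}, data="")
-
-# Poll the get endpoint operation until the endpoint is suspended (Disabled) or the operation fails.
-while True:
-    status = requests.get(base_url, headers=headers).json()["status"]
-    print(status)
-    if status in ("Disabled", "Failed"):
-        break
-    time.sleep(10)
-```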
-
-#### Resume endpoint
-
-When you resume an endpoint, you can use the same endpoint URL that you used before it was suspended.
-
-You resume an endpoint with its unique deployment ID. The endpoint status must be `Disabled` before you can resume it.
-
-Use the [get endpoint](#get-endpoint) operation to poll and track the status progression from `Disabled`, to `Running`, and finally to `Succeeded`. If the resume operation failed, the endpoint status will be `Disabled`.
-
-##### Resume endpoint example
-
-For information about endpoint ID, region, and Speech resource key parameters, see [request parameters](#request-parameters).
-
-HTTP example:
-
-```HTTP
-POST api/texttospeech/v3.0/endpoints/<YourEndpointId>/resume HTTP/1.1
-Ocp-Apim-Subscription-Key: YourResourceKey
-Host: <YourResourceRegion>.customvoice.api.speech.microsoft.com
-Content-Type: application/json
-Content-Length: 0
-```
-
-cURL example:
-
-```Console
-curl -v -X POST "https://<YourResourceRegion>.customvoice.api.speech.microsoft.com/api/texttospeech/v3.0/endpoints/<YourEndpointId>/resume" -H "Ocp-Apim-Subscription-Key: <YourResourceKey>" -H "content-type: application/json" -H "content-length: 0"
-```
-
-Response header example:
-```
-Status code: 202 Accepted
-```
-
-For more information, see [response headers](#response-headers).
-
-#### Parameters and response codes
-
-##### Request parameters
-
-You use these request parameters with calls to the REST API. See [application settings](#application-settings) for information about where to get your region, endpoint ID, and Speech resource key in Speech Studio.
-
-| Name | Location | Required | Type | Description |
-| | | -- | | |
-| `YourResourceRegion` | Path | `True` | string | The Azure region the endpoint is associated with. |
-| `YourEndpointId` | Path | `True` | string | The identifier of the endpoint. |
-| `Ocp-Apim-Subscription-Key` | Header | `True` | string | The Speech resource key the endpoint is associated with. |
-
-##### Response headers
-
-Status code: 202 Accepted
-
-| Name | Type | Description |
-| - | | -- |
-| `Location` | string | The location of the endpoint. You can use it as the full URL for the get endpoint operation. |
-| `Retry-After` | string | The recommended number of seconds to wait before you retry the get endpoint operation to check the status. |
-
-##### HTTP status codes
-
-The HTTP status code for each response indicates success or common errors.
-
-| HTTP status code | Description | Possible reason |
-| - | -- | - |
-| 200 | OK | The request was successful. |
-| 202 | Accepted | The request has been accepted and is being processed. |
-| 400 | Bad Request | The value of a parameter is invalid, or a required parameter is missing, empty, or null. One common issue is a header that is too long. |
-| 401 | Unauthorized | The request isn't authorized. Check to make sure your Speech resource key or [token](rest-speech-to-text-short.md#authentication) is valid and in the correct region. |
-| 429 | Too Many Requests | You've exceeded the quota or rate of requests allowed for your Speech resource. |
-| 502 | Bad Gateway | Network or server-side issue. May also indicate invalid headers.|
-
-## Next steps
-
-- [How to record voice samples](record-custom-voice-samples.md)
-- [Text to speech API reference](rest-text-to-speech.md)
-- [Batch synthesis](batch-synthesis.md)
cognitive-services How To Develop Custom Commands Application https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/how-to-develop-custom-commands-application.md
- Title: 'How-to: Develop Custom Commands applications - Speech service'-
-description: Learn how to develop and customize Custom Commands applications. These voice-command apps are best suited for task completion or command-and-control scenarios.
------- Previously updated : 12/15/2020----
-# Develop Custom Commands applications
--
-In this how-to article, you learn how to develop and configure Custom Commands applications. The Custom Commands feature helps you build rich voice-command apps that are optimized for voice-first interaction experiences. The feature is best suited to task completion or command-and-control scenarios. It's particularly well suited for Internet of Things (IoT) devices and for ambient and headless devices.
-
-In this article, you create an application that can turn a TV on and off, set the temperature, and set an alarm. After you create these basic commands, you'll learn about the following options for customizing commands:
-
-* Adding parameters to commands
-* Adding configurations to command parameters
-* Building interaction rules
-* Creating language-generation templates for speech responses
-* Using Custom Voice tools
-
-## Create an application by using simple commands
-
-Start by creating an empty Custom Commands application. For details, refer to the [quickstart](quickstart-custom-commands-application.md). In this application, instead of importing a project, you create a blank project.
-
-1. In the **Name** box, enter the project name *Smart-Room-Lite* (or another name of your choice).
-1. In the **Language** list, select **English (United States)**.
-1. Select or create a LUIS resource.
-
- > [!div class="mx-imgBorder"]
- > ![Screenshot showing the "New project" window.](media/custom-commands/create-new-project.png)
-
-### Update LUIS resources (optional)
-
-You can update the authoring resource that you selected in the **New project** window. You can also set a prediction resource.
-
-A prediction resource is used for recognition when your Custom Commands application is published. You don't need a prediction resource during the development and testing phases.
-
-### Add a TurnOn command
-
-In the empty Smart-Room-Lite Custom Commands application you created, add a command. The command will process an utterance, `Turn on the tv`. It will respond with the message `Ok, turning the tv on`.
-
-1. Create a new command by selecting **New command** at the top of the left pane. The **New command** window opens.
-1. For the **Name** field, provide the value `TurnOn`.
-1. Select **Create**.
-
-The middle pane lists the properties of the command.
-
-The following table explains the command's configuration properties. For more information, see [Custom Commands concepts and definitions](./custom-commands-references.md).
-
-| Configuration | Description |
-| - | |
-| Example sentences | Example utterances the user can say to trigger this command. |
-| Parameters | Information required to complete the command. |
-| Completion rules | Actions to be taken to fulfill the command. Examples: responding to the user or communicating with a web service. |
-| Interaction rules | Other rules to handle more specific or complex situations. |
--
-> [!div class="mx-imgBorder"]
-> ![Screenshot showing where to create a command.](media/custom-commands/add-new-command.png)
-
-#### Add example sentences
-
-In the **Example sentences** section, you provide an example of what the user can say.
-
-1. In the middle pane, select **Example sentences**.
-1. In the pane on the right, add examples:
-
- ```
- Turn on the tv
- ```
-
-1. At the top of the pane, select **Save**.
-
-You don't have parameters yet, so you can move to the **Completion rules** section.
-
-#### Add a completion rule
-
-Next, the command needs a completion rule. This rule tells the user that a fulfillment action is being taken.
-
-For more information about rules and completion rules, see [Custom Commands concepts and definitions](./custom-commands-references.md).
-
-1. Select the default completion rule **Done**. Then edit it as follows:
--
- | Setting | Suggested value | Description |
- | - | - | -- |
- | **Name** | `ConfirmationResponse` | A name describing the purpose of the rule |
- | **Conditions** | None | Conditions that determine when the rule can run |
- | **Actions** | **Send speech response** > **Simple editor** > `Ok, turning the tv on` | The action to take when the rule condition is true |
-
- > [!div class="mx-imgBorder"]
- > ![Screenshot showing where to create a speech response.](media/custom-commands/create-speech-response-action.png)
-
-1. Select **Save** to save the action.
-1. Back in the **Completion rules** section, select **Save** to save all changes.
-
- > [!NOTE]
- > You don't have to use the default completion rule that comes with the command. You can delete the default completion rule and add your own rule.
-
-### Add a SetTemperature command
-
-Now add one more command, `SetTemperature`. This command will take a single utterance, `Set the temperature to 40 degrees`, and respond with the message `Ok, setting temperature to 40 degrees`.
-
-To create the new command, follow the steps you used for the `TurnOn` command, but use the example sentence `Set the temperature to 40 degrees`.
-
-Then edit the existing **Done** completion rules as follows:
-
-| Setting | Suggested value |
-| - | - |
-| **Name** | `ConfirmationResponse` |
-| **Conditions** | None |
-| **Actions** | **Send speech response** > **Simple editor** > **First variation** > `Ok, setting temperature to 40 degrees` |
-
-Select **Save** to save all changes to the command.
-
-### Add a SetAlarm command
-
-Create a new `SetAlarm` command. Use the example sentence `Set an alarm for 9 am tomorrow`. Then edit the existing **Done** completion rules as follows:
-
-| Setting | Suggested value |
-| - | - |
-| **Name** | `ConfirmationResponse` |
-| **Conditions** | None |
-| **Actions** | **Send speech response** > **Simple editor** > **First variation** > `Ok, setting an alarm for 9 am tomorrow` |
-
-Select **Save** to save all changes to the command.
-
-### Try it out
-
-Test the application's behavior by using the test pane:
-
-1. In the upper-right corner of the pane, select the **Train** icon.
-1. When the training finishes, select **Test**.
-
-Try out the following utterance examples by using voice or text:
-
-- You type: *set the temperature to 40 degrees*
-- Expected response: Ok, setting temperature to 40 degrees
-- You type: *turn on the tv*
-- Expected response: Ok, turning the tv on
-- You type: *set an alarm for 9 am tomorrow*
-- Expected response: Ok, setting an alarm for 9 am tomorrow
-
-> [!div class="mx-imgBorder"]
-> ![Screenshot showing the test in a web-chat interface.](media/custom-commands/create-basic-test-chat-no-mic.png)
-
-> [!TIP]
-> In the test pane, you can select **Turn details** for information about how this voice input or text input was processed.
-
-## Add parameters to commands
-
-In this section, you learn how to add parameters to your commands. Commands require parameters to complete a task. In complex scenarios, parameters can be used to define conditions that trigger custom actions.
-
-### Configure parameters for a TurnOn command
-
-Start by editing the existing `TurnOn` command to turn on and turn off multiple devices.
-
-1. Now that the command will handle both on and off scenarios, rename the command as *TurnOnOff*.
- 1. In the pane on the left, select the **TurnOn** command. Then next to **New command** at the top of the pane, select the edit button.
-
- 1. In the **Rename command** window, change the name to *TurnOnOff*.
-
-1. Add a new parameter to the command. The parameter represents whether the user wants to turn the device on or off.
- 1. At top of the middle pane, select **Add**. From the drop-down menu, select **Parameter**.
- 1. In the pane on the right, in the **Parameters** section, in the **Name** box, add `OnOff`.
- 1. Select **Required**. In the **Add response for a required parameter** window, select **Simple editor**. In the **First variation** field, add *On or Off?*.
- 1. Select **Update**.
-
- > [!div class="mx-imgBorder"]
- > ![Screenshot that shows the 'Add response for a required parameter' section with the 'Simple editor' tab selected.](media/custom-commands/add-required-on-off-parameter-response.png)
-
- 1. Configure the parameter's properties by using the following table. For information about all of the configuration properties of a command, see [Custom Commands concepts and definitions](./custom-commands-references.md).
--
- | Configuration | Suggested value | Description |
- | | -| |
- | **Name** | `OnOff` | A descriptive name for the parameter |
- | **Required** | Selected | Check box indicating whether a value for this parameter is required before the command finishes. |
- | **Response for required parameter** |**Simple editor** > `On or Off?` | A prompt asking for the value of this parameter when it isn't known. |
- | **Type** | **String** | Parameter type, such as Number, String, Date Time, or Geography. |
- | **Configuration** | **Accept predefined input values from an internal catalog** | For strings, this setting limits inputs to a set of possible values. |
- | **Predefined input values** | `on`, `off` | Set of possible values and their aliases. |
--
- 1. To add predefined input values, select **Add a predefined input**. In **New Item** window, type *Name* as shown in the preceding table. In this case, you're not using aliases, so you can leave this field blank.
-
- > [!div class="mx-imgBorder"]
- > ![Screenshot showing how to create a parameter.](media/custom-commands/create-on-off-parameter.png)
-
- 1. Select **Save** to save all configurations of the parameter.
-
-#### Add a SubjectDevice parameter
-
-1. To add a second parameter to represent the name of the devices that can be controlled by using this command, select **Add**. Use the following configuration.
--
- | Setting | Suggested value |
- | | |
- | **Name** | `SubjectDevice` |
- | **Required** | Selected |
- | **Response for required parameter** | **Simple editor** > `Which device do you want to control?` |
- | **Type** | **String** |
- | **Configuration** | **Accept predefined input values from an internal catalog** |
- | **Predefined input values** | `tv`, `fan` |
- | **Aliases** (`tv`) | `television`, `telly` |
-
-1. Select **Save**.
-
-#### Modify example sentences
-
-For commands that use parameters, it's helpful to add example sentences that cover all possible combinations. For example:
-
-* Complete parameter information: `turn {OnOff} the {SubjectDevice}`
-* Partial parameter information: `turn it {OnOff}`
-* No parameter information: `turn something`
-
-Example sentences that use varying degrees of information allow the Custom Commands application to handle both one-shot resolution and multiple-turn resolution by using partial information.
-
-With that information in mind, edit the example sentences to use these suggested parameters:
-
-```
-turn {OnOff} the {SubjectDevice}
-{SubjectDevice} {OnOff}
-turn it {OnOff}
-turn something {OnOff}
-turn something
-```
-
-Select **Save**.
-
-> [!TIP]
-> In the example-sentences editor, use curly braces to refer to your parameters. For example, `turn {OnOff} the {SubjectDevice}`.
-> Use a tab for automatic completion backed by previously created parameters.
-
-#### Modify completion rules to include parameters
-
-Modify the existing completion rule `ConfirmationResponse`.
-
-1. In the **Conditions** section, select **Add a condition**.
-1. In the **New Condition** window, in the **Type** list, select **Required parameters**. In the list that follows, select both **OnOff** and **SubjectDevice**.
-1. Select **Create**.
-1. In the **Actions** section, edit the **Send speech response** action by hovering over it and selecting the edit button. This time, use the newly created `OnOff` and `SubjectDevice` parameters:
-
- ```
- Ok, turning the {SubjectDevice} {OnOff}
- ```
-1. Select **Save**.
-
-Try out the changes by selecting the **Train** icon at the top of the pane on the right.
-
-When the training finishes, select **Test**. A **Test your application** window appears. Try the following interactions:
-
-- Input: *turn off the tv*
-- Output: Ok, turning off the tv
-- Input: *turn off the television*
-- Output: Ok, turning off the tv
-- Input: *turn it off*
-- Output: Which device do you want to control?
-- Input: *the tv*
-- Output: Ok, turning off the tv
-
-### Configure parameters for a SetTemperature command
-
-Modify the `SetTemperature` command to enable it to set the temperature as the user directs.
-
-Add a `TemperatureValue` parameter. Use the following configuration:
-
-| Configuration | Suggested value |
-| | -|
-| **Name** | `TemperatureValue` |
-| **Required** | Selected |
-| **Response for required parameter** | **Simple editor** > `What temperature would you like?`
-| **Type** | `Number` |
--
-Edit the example utterances to use the following values.
-
-```
-set the temperature to {TemperatureValue} degrees
-change the temperature to {TemperatureValue}
-set the temperature
-change the temperature
-```
-
-Edit the existing completion rules. Use the following configuration.
-
-| Configuration | Suggested value |
-| | -|
-| **Conditions** | **Required parameter** > **TemperatureValue** |
-| **Actions** | **Send speech response** > `Ok, setting temperature to {TemperatureValue} degrees` |
-
-### Configure parameters for a SetAlarm command
-
-Add a parameter called `DateTime`. Use the following configuration.
-
- | Setting | Suggested value |
- | | -|
- | **Name** | `DateTime` |
- | **Required** | Selected |
- | **Response for required parameter** | **Simple editor** > `For what time?` |
- | **Type** | **DateTime** |
- | **Date Defaults** | If the date is missing, use today. |
- | **Time Defaults** | If the time is missing, use the start of the day. |
--
-> [!NOTE]
-> This article mostly uses String, Number, and DateTime parameter types. For a list of all supported parameter types and their properties, see [Custom Commands concepts and definitions](./custom-commands-references.md).
--
-Edit the example utterances. Use the following values.
-
-```
-set an alarm for {DateTime}
-set alarm {DateTime}
-alarm for {DateTime}
-```
-
-Edit the existing completion rules. Use the following configuration.
-
- | Setting | Suggested value |
- | - | - |
- | **Actions** | **Send speech response** > `Ok, alarm set for {DateTime}` |
-
-Test the three commands together by using utterances related to different commands. (You can switch between the different commands.)
-
-- Input: *Set an alarm*
-- Output: For what time?
-- Input: *Turn on the tv*
-- Output: Ok, turning the tv on
-- Input: *Set an alarm*
-- Output: For what time?
-- Input: *5 pm*
-- Output: Ok, alarm set for 2020-05-01 17:00:00
-
-## Add configurations to command parameters
-
-In this section, you learn more about advanced parameter configuration, including how to configure a parameter as an external catalog entity and how to add validation to parameters.
--
-### Configure a parameter as an external catalog entity
-
-The Custom Commands feature allows you to configure string-type parameters to refer to external catalogs hosted over a web endpoint. So you can update the external catalog independently without editing the Custom Commands application. This approach is useful in cases where the catalog entries are numerous.
-
-Reuse the `SubjectDevice` parameter from the `TurnOnOff` command. The current configuration for this parameter is **Accept predefined inputs from internal catalog**. This configuration refers to a static list of devices in the parameter configuration. Move this content out to an external data source that can be updated independently.
-
-To move the content, start by adding a new web endpoint. In the pane on the left, go to the **Web endpoints** section. There, add a new web endpoint URL. Use the following configuration.
-
-| Setting | Suggested value |
-|-|-|
-| **Name** | `getDevices` |
-| **URL** | `<Your endpoint of getDevices.json>` |
-| **Method** | **GET** |
-
-Then, configure and host a web endpoint that returns a JSON file listing the devices that can be controlled. The JSON should be formatted like this example:
-
-```json
-{
- "fan" : [],
- "refrigerator" : [
- "fridge"
- ],
- "lights" : [
- "bulb",
- "bulbs",
- "light",
- "light bulb"
- ],
- "tv" : [
- "telly",
- "television"
- ]
-}
-
-```
-
-Next, go to the **SubjectDevice** parameter settings page. Set up the following properties.
-
-| Setting | Suggested value |
-| -| - |
-| **Configuration** | **Accept predefined inputs from external catalog** |
-| **Catalog endpoint** | `getDevices` |
-| **Method** | **GET** |
-
-Then select **Save**.
-
-> [!IMPORTANT]
-> You won't see an option to configure a parameter to accept inputs from an external catalog unless you have a web endpoint set up in the **Web endpoints** section in the pane on the left.
-
-Try it out by selecting **Train**. After the training finishes, select **Test** and try a few interactions.
-
-* Input: *turn on*
-* Output: Which device do you want to control?
-* Input: *lights*
-* Output: Ok, turning the lights on
-
-> [!NOTE]
-> You can now control all the devices hosted on the web endpoint. But you still need to train the application to test the new changes and then republish the application.
-
-### Add validation to parameters
-
-*Validations* are constructs that apply to certain parameter types and allow you to configure constraints on the parameter's value. They prompt you for corrections if values don't fall within the constraints. For a list of parameter types that extend the validation construct, see [Custom Commands concepts and definitions](./custom-commands-references.md).
-
-Test out validations by using the `SetTemperature` command. Use the following steps to add a validation for the `TemperatureValue` parameter.
-
-1. In the pane on the left, select the **SetTemperature** command.
-1. In the middle pane, select **TemperatureValue**.
-1. In the pane on the right, select **Add a validation**.
-1. In the **New validation** window, configure validation as shown in the following table. Then select **Create**.
--
- | Parameter configuration | Suggested value | Description |
- | - | - | - |
- | **Min Value** | `60` | For Number parameters, the minimum value this parameter can assume |
- | **Max Value** | `80` | For Number parameters, the maximum value this parameter can assume |
- | **Failure response** | **Simple editor** > **First variation** > `Sorry, I can only set temperature between 60 and 80 degrees. What temperature do you want?` | A prompt to ask for a new value if the validation fails |
-
- > [!div class="mx-imgBorder"]
- > ![Screenshot showing how to add a range validation.](media/custom-commands/add-validations-temperature.png)
-
-Try it out by selecting the **Train** icon at the top of the pane on the right. After the training finishes, select **Test**. Try a few interactions:
-
-- Input: *Set the temperature to 72 degrees*
-- Output: Ok, setting temperature to 72 degrees
-- Input: *Set the temperature to 45 degrees*
-- Output: Sorry, I can only set temperature between 60 and 80 degrees
-- Input: *make it 72 degrees instead*
-- Output: Ok, setting temperature to 72 degrees
-
-## Add interaction rules
-
-Interaction rules are *additional* rules that handle specific or complex situations. Although you're free to author your own interaction rules, in this example you use interaction rules for the following scenarios:
-
-* Confirming commands
-* Adding a one-step correction to commands
-
-For more information about interaction rules, see [Custom Commands concepts and definitions](./custom-commands-references.md).
-
-### Add confirmations to a command
-
-To add a confirmation, you use the `SetTemperature` command. To achieve confirmation, create interaction rules by using the following steps:
-
-1. In the pane on the left, select the **SetTemperature** command.
-1. In the middle pane, add interaction rules by selecting **Add**. Then select **Interaction rules** > **Confirm command**.
-
-    This action adds three interaction rules. The rules ask the user to confirm the value of the temperature. They expect a confirmation (yes or no) for the next turn.
-
- 1. Modify the **Confirm command** interaction rule by using the following configuration:
- 1. Change the name to **Confirm temperature**.
- 1. The condition **All required parameters** has already been added.
- 1. Add a new action: **Type** > **Send speech response** > **Are you sure you want to set the temperature as {TemperatureValue} degrees?**
- 1. In the **Expectations** section, leave the default value of **Expecting confirmation from user**.
-
- > [!div class="mx-imgBorder"]
- > ![Screenshot showing how to create the required parameter response.](media/custom-speech-commands/add-confirmation-set-temperature.png)
--
- 1. Modify the **Confirmation succeeded** interaction rule to handle a successful confirmation (the user said yes).
-
- 1. Change the name to **Confirmation temperature succeeded**.
- 1. Leave the existing **Confirmation was successful** condition.
- 1. Add a new condition: **Type** > **Required parameters** > **TemperatureValue**.
- 1. Leave the default **Post-execution state** value as **Execute completion rules**.
-
- 1. Modify the **Confirmation denied** interaction rule to handle scenarios when confirmation is denied (the user said no).
-
- 1. Change the name to **Confirmation temperature denied**.
- 1. Leave the existing **Confirmation was denied** condition.
- 1. Add a new condition: **Type** > **Required parameters** > **TemperatureValue**.
- 1. Add a new action: **Type** > **Send speech response** > **No problem. What temperature then?**.
- 1. Change the default **Post-execution state** value to **Wait for user's input**.
-
-> [!IMPORTANT]
-> In this article, you use the built-in confirmation capability. You can also manually add interaction rules one by one.
-
-Try out the changes by selecting **Train**. When the training finishes, select **Test**.
-
-- **Input**: *Set temperature to 80 degrees*
-- **Output**: are you sure you want to set the temperature as 80 degrees?
-- **Input**: *No*
-- **Output**: No problem. What temperature then?
-- **Input**: *72 degrees*
-- **Output**: are you sure you want to set the temperature as 72 degrees?
-- **Input**: *Yes*
-- **Output**: OK, setting temperature to 72 degrees
-
-### Implement corrections in a command
-
-In this section, you'll configure a one-step correction. This correction is used after the fulfillment action has run. You'll also see an example of how a correction is enabled by default if the command isn't fulfilled yet. To add a correction when the command isn't finished, add the new parameter `AlarmTone`.
-
-In the left pane, select the **SetAlarm** command. Then add the new parameter **AlarmTone**.
-
-- **Name** > `AlarmTone`
-- **Type** > **String**
-- **Default Value** > **Chimes**
-- **Configuration** > **Accept predefined input values from the internal catalog**
-- **Predefined input values** > **Chimes**, **Jingle**, and **Echo** (These values are individual predefined inputs.)
-
-
-Next, update the response for the **DateTime** parameter to **Ready to set alarm with tone as {AlarmTone}. For what time?**. Then modify the completion rule as follows:
-
-1. Select the existing completion rule **ConfirmationResponse**.
-1. In the pane on the right, hover over the existing action and select **Edit**.
-1. Update the speech response to `OK, alarm set for {DateTime}. The alarm tone is {AlarmTone}`.
-
-> [!IMPORTANT]
-> The alarm tone can change without any explicit configuration in an ongoing command. For example, it can change when the command hasn't finished yet. A correction is enabled *by default* for all of the command parameters, regardless of the turn number, if the command is yet to be fulfilled.
-
-#### Implement a correction when a command is finished
-
-The Custom Commands platform allows for one-step correction even when the command has finished. This feature isn't enabled by default. It must be explicitly configured.
-
-Use the following steps to configure a one-step correction:
-
-1. In the **SetAlarm** command, add an interaction rule of the type **Update previous command** to update the previously set alarm. Rename the interaction rule as **Update previous alarm**.
-1. Leave the default condition: **Previous command needs to be updated**.
-1. Add a new condition: **Type** > **Required Parameter** > **DateTime**.
-1. Add a new action: **Type** > **Send speech response** > **Simple editor** > **Updating previous alarm time to {DateTime}**.
-1. Leave the default **Post-execution state** value as **Command completed**.
-
-Try out the changes by selecting **Train**. Wait for the training to finish, and then select **Test**.
-
-- **Input**: *Set an alarm.*
-- **Output**: Ready to set alarm with tone as Chimes. For what time?
-- **Input**: *Set an alarm with the tone as Jingle for 9 am tomorrow.*
-- **Output**: OK, alarm set for 2020-05-21 09:00:00. The alarm tone is Jingle.
-- **Input**: *No, 8 am.*
-- **Output**: Updating previous alarm time to 2020-05-29 08:00.
-
-> [!NOTE]
-> In a real application, in the **Actions** section of this correction rule, you'll also need to send back an activity to the client or call an HTTP endpoint to update the alarm time in your system. This action should be solely responsible for updating the alarm time. It shouldn't be responsible for any other attribute of the command. In this case, that attribute would be the alarm tone.
-
-## Add language-generation templates for speech responses
-
-Language-generation (LG) templates allow you to customize the responses sent to the client. They introduce variance into the responses. You can achieve language generation by using:
-
-* Language-generation templates.
-* Adaptive expressions.
-
-Custom Commands templates are based on the Bot Framework's [LG templates](/azure/bot-service/file-format/bot-builder-lg-file-format#templates). Because the Custom Commands feature creates a new LG template when required (for speech responses in parameters or actions), you don't have to specify the name of the LG template.
-
-So you don't need to define your template like this:
-
- ```
- # CompletionAction
- - Ok, turning {OnOff} the {SubjectDevice}
- - Done, turning {OnOff} the {SubjectDevice}
- - Proceeding to turn {OnOff} {SubjectDevice}
- ```
-
-Instead, you can define the body of the template without the name, like this:
-
-> [!div class="mx-imgBorder"]
-> ![Screenshot showing a template editor example.](./media/custom-commands/template-editor-example.png)
--
-This change introduces variation into the speech responses that are sent to the client. For an utterance, the corresponding speech response is randomly picked out of the provided options.
-
-By taking advantage of LG templates, you can also define complex speech responses for commands by using adaptive expressions. For more information, see the [LG templates format](/azure/bot-service/file-format/bot-builder-lg-file-format#templates).
-
-By default, the Custom Commands feature supports all capabilities, with the following minor differences:
-
-* In the LG templates, entities are represented as `${entityName}`. The Custom Commands feature doesn't use entities. But you can use parameters as variables with either the `${parameterName}` representation or the `{parameterName}` representation.
-* The Custom Commands feature doesn't support template composition and expansion, because you never edit the *.lg* file directly. You edit only the responses of automatically created templates.
-* The Custom Commands feature doesn't support custom functions that LG injects. Predefined functions are supported.
-* The Custom Commands feature doesn't support options, such as `strict`, `replaceNull`, and `lineBreakStyle`.
-
-### Add template responses to a TurnOnOff command
-
-Modify the `TurnOnOff` command to add a new parameter. Use the following configuration.
-
-| Setting | Suggested value |
-| | |
-| **Name** | `SubjectContext` |
-| **Required** | Unselected |
-| **Type** | **String** |
-| **Default value** | `all` |
-| **Configuration** | **Accept predefined input values from internal catalog** |
-| **Predefined input values** | `room`, `bathroom`, `all`|
-
-#### Modify a completion rule
-
-Edit the **Actions** section of the existing completion rule **ConfirmationResponse**. In the **Edit action** window, switch to **Template Editor**. Then replace the text with the following example.
-
-```
-- IF: @{SubjectContext == "all" && SubjectDevice == "lights"}
- - Ok, turning all the lights {OnOff}
-- ELSEIF: @{SubjectDevice == "lights"}
- - Ok, turning {OnOff} the {SubjectContext} {SubjectDevice}
-- ELSE:
- - Ok, turning the {SubjectDevice} {OnOff}
- - Done, turning {OnOff} the {SubjectDevice}
-```
-
-Train and test your application by using the following input and output. Notice the variation of responses. The variation comes from the multiple alternatives provided in the template and from the use of adaptive expressions.
-
-* Input: *turn on the tv*
-* Output: Ok, turning the tv on
-* Input: *turn on the tv*
-* Output: Done, turning on the tv
-* Input: *turn off the lights*
-* Output: Ok, turning all the lights off
-* Input: *turn off room lights*
-* Output: Ok, turning off the room lights
-
-## Use a custom voice
-
-Another way to customize Custom Commands responses is to select an output voice. Use the following steps to switch the default voice to a custom voice:
-
-1. In your Custom Commands application, in the pane on the left, select **Settings**.
-1. In the middle pane, select **Custom Voice**.
-1. In the table, select a custom voice or public voice.
-1. Select **Save**.
-
-> [!div class="mx-imgBorder"]
-> ![Screenshot showing sample sentences and parameters.](media/custom-commands/select-custom-voice.png)
-
-> [!NOTE]
-> For public voices, neural types are available only for specific regions. For more information, see [Speech service supported regions](./regions.md#speech-service).
->
-> You can create custom voices on the **Custom Voice** project page. For more information, see [Get started with Custom Voice](./how-to-custom-voice.md).
-
-Now the application will respond in the selected voice, instead of the default voice.
-
-## Next steps
-
-* Learn how to [integrate your Custom Commands application](how-to-custom-commands-setup-speech-sdk.md) with a client app by using the Speech SDK.
-* [Set up continuous deployment](how-to-custom-commands-deploy-cicd.md) for your Custom Commands application by using Azure DevOps.
cognitive-services How To Get Speech Session Id https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/how-to-get-speech-session-id.md
- Title: How to get Speech to text Session ID and Transcription ID-
-description: Learn how to get Speech service Speech to text Session ID and Transcription ID
------ Previously updated : 11/29/2022---
-# How to get Speech to text Session ID and Transcription ID
-
-If you use [Speech to text](speech-to-text.md) and need to open a support case, you are often asked to provide a *Session ID* or *Transcription ID* of the problematic transcriptions to debug the issue. This article explains how to get these IDs.
-
-> [!NOTE]
-> * *Session ID* is used in [real-time speech to text](get-started-speech-to-text.md) and [speech translation](speech-translation.md).
-> * *Transcription ID* is used in [Batch transcription](batch-transcription.md).
-
-## Getting Session ID
-
-[Real-time speech to text](get-started-speech-to-text.md) and [speech translation](speech-translation.md) use either the [Speech SDK](speech-sdk.md) or the [REST API for short audio](rest-speech-to-text-short.md).
-
-To get the Session ID when using the Speech SDK, you need to:
-
-1. Enable application logging.
-1. Find the Session ID inside the log.
-
-If you use Speech SDK for JavaScript, get the Session ID as described in [this section](#get-session-id-using-javascript).
-
-If you use [Speech CLI](spx-overview.md), you can also get the Session ID interactively. See details in [this section](#get-session-id-using-speech-cli).
-
-If you use the [Speech to text REST API for short audio](rest-speech-to-text-short.md), you need to provide the session information yourself within the requests. See details in [this section](#provide-session-id-using-rest-api-for-short-audio).
-
-### Enable logging in the Speech SDK
-
-Enable logging for your application as described in [this article](how-to-use-logging.md).
-
-### Get Session ID from the log
-
-Open the log file your application produced and look for `SessionId:`. The value that follows is the Session ID you need. In the following log excerpt example, `0b734c41faf8430380d493127bd44631` is the Session ID.
-
-```
-[874193]: 218ms SPX_DBG_TRACE_VERBOSE: audio_stream_session.cpp:1238 [0000023981752A40]CSpxAudioStreamSession::FireSessionStartedEvent: Firing SessionStarted event: SessionId: 0b734c41faf8430380d493127bd44631
-```
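-
-For example, here's a minimal Python sketch that scans a Speech SDK log file for Session IDs. The log file name is a placeholder.
-
-```python
-import re
-
-# Collect all unique Session IDs (32 hex characters) found in the log file.
-with open("speech-sdk.log", encoding="utf-8") as log_file:
-    session_ids = set(re.findall(r"SessionId: ([0-9a-f]{32})", log_file.read()))
-
-print(session_ids)
-```
-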
-### Get Session ID using JavaScript
-
-If you use the Speech SDK for JavaScript, you get the Session ID with the help of the `sessionStarted` event from the [Recognizer class](/javascript/api/microsoft-cognitiveservices-speech-sdk/recognizer).
-
-See an example of getting the Session ID using JavaScript in [this sample](https://github.com/Azure-Samples/cognitive-services-speech-sdk/blob/master/samples/js/browser/index.html). Look for `recognizer.sessionStarted = onSessionStarted;` and then for `function onSessionStarted`.
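-
-Similarly, if you use the Speech SDK for Python, you can get the Session ID from the `session_started` event. Here's a minimal sketch; the key, region, and audio file name are placeholders.
-
-```python
-import azure.cognitiveservices.speech as speechsdk
-
-speech_config = speechsdk.SpeechConfig(subscription="YourSpeechKey", region="YourSpeechRegion")
-audio_config = speechsdk.audio.AudioConfig(filename="your-audio-file.wav")
-recognizer = speechsdk.SpeechRecognizer(speech_config=speech_config, audio_config=audio_config)
-
-# The session_started event carries the Session ID for the recognition session.
-recognizer.session_started.connect(lambda evt: print("Session ID: " + evt.session_id))
-
-result = recognizer.recognize_once()
-print(result.text)
-```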
-
-### Get Session ID using Speech CLI
-
-If you use [Speech CLI](spx-overview.md), then you'll see the Session ID in `SESSION STARTED` and `SESSION STOPPED` console messages.
-
-You can also enable logging for your sessions and get the Session ID from the log file as described in [this section](#get-session-id-from-the-log). Run the appropriate Speech CLI command to get the information on using logs:
-
-```console
-spx help recognize log
-```
-```console
-spx help translate log
-```
--
-### Provide Session ID using REST API for short audio
-
-Unlike Speech SDK, [Speech to text REST API for short audio](rest-speech-to-text-short.md) does not automatically generate a Session ID. You need to generate it yourself and provide it within the REST request.
-
-Generate a GUID inside your code or by using any standard tool. Use the GUID value *without dashes or other dividers*. As an example, we'll use `9f4ffa5113a846eba289aa98b28e766f`.
-
-As part of your REST request, use the `X-ConnectionId=<GUID>` expression. For our example, a sample request looks like this:
-```http
-https://westeurope.stt.speech.microsoft.com/speech/recognition/conversation/cognitiveservices/v1?language=en-US&X-ConnectionId=9f4ffa5113a846eba289aa98b28e766f
-```
-`9f4ffa5113a846eba289aa98b28e766f` will be your Session ID.
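-
-For example, here's a minimal Python sketch that generates the GUID and builds the request URL; the region and language match the sample request above.
-
-```python
-import uuid
-
-# Generate a Session ID without dashes, as required by the X-ConnectionId parameter.
-session_id = uuid.uuid4().hex
-
-request_url = (
-    "https://westeurope.stt.speech.microsoft.com/speech/recognition/conversation/"
-    f"cognitiveservices/v1?language=en-US&X-ConnectionId={session_id}"
-)
-
-# Keep the Session ID so you can provide it in a support case.
-print(session_id)
-print(request_url)
-```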
-
-## Getting Transcription ID for Batch transcription
-
-[Batch transcription API](batch-transcription.md) is a subset of the [Speech to text REST API](rest-speech-to-text.md).
-
-The required Transcription ID is the GUID value contained in the top-level `self` element of the response body returned by requests like [Transcriptions_Create](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/operations/Transcriptions_Create).
-
-The following is an example response body of a [Transcriptions_Create](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/operations/Transcriptions_Create) request. The GUID value `537216f8-0620-4a10-ae2d-00bdb423b36f` found in the first `self` element is the Transcription ID.
-
-```json
-{
- "self": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.1/transcriptions/537216f8-0620-4a10-ae2d-00bdb423b36f",
- "model": {
- "self": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.1/models/base/824bd685-2d45-424d-bb65-c3fe99e32927"
- },
- "links": {
- "files": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.1/transcriptions/537216f8-0620-4a10-ae2d-00bdb423b36f/files"
- },
- "properties": {
- "diarizationEnabled": false,
- "wordLevelTimestampsEnabled": false,
- "channels": [
- 0,
- 1
- ],
- "punctuationMode": "DictatedAndAutomatic",
- "profanityFilterMode": "Masked"
- },
- "lastActionDateTime": "2021-11-19T14:09:51Z",
- "status": "NotStarted",
- "createdDateTime": "2021-11-19T14:09:51Z",
- "locale": "ru-RU",
- "displayName": "transcriptiontest"
-}
-```
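-
-For example, here's a minimal Python sketch that extracts the Transcription ID from the `self` URL of such a response body.
-
-```python
-# Response body from a Transcriptions_Create request (abbreviated to the field we need).
-response = {
-    "self": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.1/transcriptions/537216f8-0620-4a10-ae2d-00bdb423b36f"
-}
-
-# The Transcription ID is the last path segment of the top-level "self" URL.
-transcription_id = response["self"].rstrip("/").split("/")[-1]
-print(transcription_id)  # 537216f8-0620-4a10-ae2d-00bdb423b36f
-```
-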
-> [!NOTE]
-> Use the same technique to determine different IDs required for debugging issues related to [Custom Speech](custom-speech-overview.md), like uploading a dataset using [Datasets_Create](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/operations/Datasets_Create) request.
-
-> [!NOTE]
-> You can also see all existing transcriptions and their Transcription IDs for a given Speech resource by using [Transcriptions_Get](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/operations/Transcriptions_Get) request.
cognitive-services How To Lower Speech Synthesis Latency https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/how-to-lower-speech-synthesis-latency.md
- Title: How to lower speech synthesis latency using Speech SDK-
-description: How to lower speech synthesis latency using Speech SDK, including streaming, pre-connection, and so on.
------ Previously updated : 04/29/2021--
-zone_pivot_groups: programming-languages-set-nineteen
--
-# Lower speech synthesis latency using Speech SDK
-
-The synthesis latency is critical to your applications.
-In this article, we introduce best practices to lower the latency and bring the best performance to your end users.
-
-Normally, we measure the latency by `first byte latency` and `finish latency`, as follows:
--
-| Latency | Description | [SpeechSynthesisResult](/dotnet/api/microsoft.cognitiveservices.speech.speechsynthesisresult) property key |
-|--|-||
-| first byte latency | Indicates the time delay between the start of the synthesis task and receipt of the first chunk of audio data. | SpeechServiceResponse_SynthesisFirstByteLatencyMs |
-| finish latency | Indicates the time delay between the start of the synthesis task and the receipt of the whole synthesized audio data. | SpeechServiceResponse_SynthesisFinishLatencyMs |
-
-The Speech SDK puts the latency durations in the Properties collection of [`SpeechSynthesisResult`](/dotnet/api/microsoft.cognitiveservices.speech.speechsynthesisresult). The following sample code shows these values.
-
-```csharp
-var result = await synthesizer.SpeakTextAsync(text);
-Console.WriteLine($"first byte latency: \t{result.Properties.GetProperty(PropertyId.SpeechServiceResponse_SynthesisFirstByteLatencyMs)} ms");
-Console.WriteLine($"finish latency: \t{result.Properties.GetProperty(PropertyId.SpeechServiceResponse_SynthesisFinishLatencyMs)} ms");
-// you can also get the result id, and send to us when you need help for diagnosis
-var resultId = result.ResultId;
-```
---
-| Latency | Description | [SpeechSynthesisResult](/cpp/cognitive-services/speech/speechsynthesisresult) property key |
-|--|-||
-| `first byte latency` | Indicates the time delay between the start of the synthesis task and receipt of the first chunk of audio data. | `SpeechServiceResponse_SynthesisFirstByteLatencyMs` |
-| `finish latency` | Indicates the time delay between the start of the synthesis task and receipt of the whole synthesized audio data. | `SpeechServiceResponse_SynthesisFinishLatencyMs` |
-
-The Speech SDK measures the latencies and puts them in the property bag of [`SpeechSynthesisResult`](/cpp/cognitive-services/speech/speechsynthesisresult). Refer to the following code to get them.
-
-```cpp
-auto result = synthesizer->SpeakTextAsync(text).get();
-auto firstByteLatency = std::stoi(result->Properties.GetProperty(PropertyId::SpeechServiceResponse_SynthesisFirstByteLatencyMs));
-auto finishedLatency = std::stoi(result->Properties.GetProperty(PropertyId::SpeechServiceResponse_SynthesisFinishLatencyMs));
-// you can also get the result id, and send to us when you need help for diagnosis
-auto resultId = result->ResultId;
-```
---
-| Latency | Description | [SpeechSynthesisResult](/java/api/com.microsoft.cognitiveservices.speech.speechsynthesisresult) property key |
-|--|-||
-| `first byte latency` | Indicates the time delay between the start of the synthesis task and receipt of the first chunk of audio data. | `SpeechServiceResponse_SynthesisFirstByteLatencyMs` |
-| `finish latency` | Indicates the time delay between the start of the synthesis task and receipt of the whole synthesized audio data. | `SpeechServiceResponse_SynthesisFinishLatencyMs` |
-
-The Speech SDK measures the latencies and puts them in the property bag of [`SpeechSynthesisResult`](/java/api/com.microsoft.cognitiveservices.speech.speechsynthesisresult). Refer to the following code to get them.
-
-```java
-SpeechSynthesisResult result = synthesizer.SpeakTextAsync(text).get();
-System.out.println("first byte latency: \t" + result.getProperties().getProperty(PropertyId.SpeechServiceResponse_SynthesisFirstByteLatencyMs) + " ms.");
-System.out.println("finish latency: \t" + result.getProperties().getProperty(PropertyId.SpeechServiceResponse_SynthesisFinishLatencyMs) + " ms.");
-// you can also get the result id, and send to us when you need help for diagnosis
-String resultId = result.getResultId();
-```
----
-| Latency | Description | [SpeechSynthesisResult](/python/api/azure-cognitiveservices-speech/azure.cognitiveservices.speech.speechsynthesisresult) property key |
-|--|-||
-| `first byte latency` | Indicates the time delay between the start of the synthesis task and receipt of the first chunk of audio data. | `SpeechServiceResponse_SynthesisFirstByteLatencyMs` |
-| `finish latency` | Indicates the time delay between the start of the synthesis task and receipt of the whole synthesized audio data. | `SpeechServiceResponse_SynthesisFinishLatencyMs` |
-
-The Speech SDK measures the latencies and puts them in the property bag of [`SpeechSynthesisResult`](/python/api/azure-cognitiveservices-speech/azure.cognitiveservices.speech.speechsynthesisresult). Refer to the following code to get them.
-
-```python
-result = synthesizer.speak_text_async(text).get()
-first_byte_latency = int(result.properties.get_property(speechsdk.PropertyId.SpeechServiceResponse_SynthesisFirstByteLatencyMs))
-finished_latency = int(result.properties.get_property(speechsdk.PropertyId.SpeechServiceResponse_SynthesisFinishLatencyMs))
-# you can also get the result id, and send to us when you need help for diagnosis
-result_id = result.result_id
-```
---
-| Latency | Description | [SPXSpeechSynthesisResult](/objectivec/cognitive-services/speech/spxspeechsynthesisresult) property key |
-|--|-||
-| `first byte latency` | Indicates the time delay between the start of the synthesis task and receipt of the first chunk of audio data. | `SPXSpeechServiceResponseSynthesisFirstByteLatencyMs` |
-| `finish latency` | Indicates the time delay between the start of the synthesis task and receipt of the whole synthesized audio data. | `SPXSpeechServiceResponseSynthesisFinishLatencyMs` |
-
-The Speech SDK measures the latencies and puts them in the property bag of [`SPXSpeechSynthesisResult`](/objectivec/cognitive-services/speech/spxspeechsynthesisresult). Refer to the following code to get them.
-
-```Objective-C
-SPXSpeechSynthesisResult *speechResult = [speechSynthesizer speakText:text];
-int firstByteLatency = [[speechResult.properties getPropertyById:SPXSpeechServiceResponseSynthesisFirstByteLatencyMs] intValue];
-int finishedLatency = [[speechResult.properties getPropertyById:SPXSpeechServiceResponseSynthesisFinishLatencyMs] intValue];
-// you can also get the result id, and send to us when you need help for diagnosis
-NSString *resultId = speechResult.resultId;
-```
--
-The first byte latency is much lower than the finish latency in most cases.
-The first byte latency is independent of text length, while the finish latency increases with text length.
-
-Ideally, we want to minimize the user-experienced latency (the latency before the user hears the sound) to one network round trip time plus the first audio chunk latency of the speech synthesis service.
-
-## Streaming
-
-Streaming is critical to lowering latency.
-Client code can start playback when the first audio chunk is received.
-In a service scenario, you can forward the audio chunks immediately to your clients instead of waiting for the whole audio.
--
-You can use the [`PullAudioOutputStream`](/dotnet/api/microsoft.cognitiveservices.speech.audio.pullaudiooutputstream), [`PushAudioOutputStream`](/dotnet/api/microsoft.cognitiveservices.speech.audio.pushaudiooutputstream), [`Synthesizing` event](/dotnet/api/microsoft.cognitiveservices.speech.speechsynthesizer.synthesizing), and [`AudioDataStream`](/dotnet/api/microsoft.cognitiveservices.speech.audiodatastream) of the Speech SDK to enable streaming.
-
-Taking `AudioDataStream` as an example:
-
-```csharp
-using (var synthesizer = new SpeechSynthesizer(config, null as AudioConfig))
-{
- using (var result = await synthesizer.StartSpeakingTextAsync(text))
- {
- using (var audioDataStream = AudioDataStream.FromResult(result))
- {
- byte[] buffer = new byte[16000];
- uint filledSize = 0;
- while ((filledSize = audioDataStream.ReadData(buffer)) > 0)
- {
- Console.WriteLine($"{filledSize} bytes received.");
- }
- }
- }
-}
-```
---
-You can use the [`PullAudioOutputStream`](/cpp/cognitive-services/speech/audio-pullaudiooutputstream), [`PushAudioOutputStream`](/cpp/cognitive-services/speech/audio-pushaudiooutputstream), the [`Synthesizing` event](/cpp/cognitive-services/speech/speechsynthesizer#synthesizing), and [`AudioDataStream`](/cpp/cognitive-services/speech/audiodatastream) of the Speech SDK to enable streaming.
-
-Taking `AudioDataStream` as an example:
-
-```cpp
-auto synthesizer = SpeechSynthesizer::FromConfig(config, nullptr);
-auto result = synthesizer->SpeakTextAsync(text).get();
-auto audioDataStream = AudioDataStream::FromResult(result);
-uint8_t buffer[16000];
-uint32_t filledSize = 0;
-while ((filledSize = audioDataStream->ReadData(buffer, sizeof(buffer))) > 0)
-{
- cout << filledSize << " bytes received." << endl;
-}
-```
---
-You can use the [`PullAudioOutputStream`](/java/api/com.microsoft.cognitiveservices.speech.audio.pullaudiooutputstream), [`PushAudioOutputStream`](/java/api/com.microsoft.cognitiveservices.speech.audio.pushaudiooutputstream), the [`Synthesizing` event](/java/api/com.microsoft.cognitiveservices.speech.speechsynthesizer.synthesizing#com_microsoft_cognitiveservices_speech_SpeechSynthesizer_Synthesizing), and [`AudioDataStream`](/java/api/com.microsoft.cognitiveservices.speech.audiodatastream) of the Speech SDK to enable streaming.
-
-Taking `AudioDataStream` as an example:
-
-```java
-SpeechSynthesizer synthesizer = new SpeechSynthesizer(config, null);
-SpeechSynthesisResult result = synthesizer.StartSpeakingTextAsync(text).get();
-AudioDataStream audioDataStream = AudioDataStream.fromResult(result);
-byte[] buffer = new byte[16000];
-long filledSize = audioDataStream.readData(buffer);
-while (filledSize > 0) {
- System.out.println(filledSize + " bytes received.");
- filledSize = audioDataStream.readData(buffer);
-}
-```
---
-You can use the [`PullAudioOutputStream`](/python/api/azure-cognitiveservices-speech/azure.cognitiveservices.speech.audio.pullaudiooutputstream), [`PushAudioOutputStream`](/python/api/azure-cognitiveservices-speech/azure.cognitiveservices.speech.audio.pushaudiooutputstream), the [`Synthesizing` event](/python/api/azure-cognitiveservices-speech/azure.cognitiveservices.speech.speechsynthesizer#synthesizing), and [`AudioDataStream`](/python/api/azure-cognitiveservices-speech/azure.cognitiveservices.speech.audiodatastream) of the Speech SDK to enable streaming.
-
-Taking `AudioDataStream` as an example:
-
-```python
-speech_synthesizer = speechsdk.SpeechSynthesizer(speech_config=speech_config, audio_config=None)
-result = speech_synthesizer.start_speaking_text_async(text).get()
-audio_data_stream = speechsdk.AudioDataStream(result)
-audio_buffer = bytes(16000)
-filled_size = audio_data_stream.read_data(audio_buffer)
-while filled_size > 0:
- print("{} bytes received.".format(filled_size))
- filled_size = audio_data_stream.read_data(audio_buffer)
-```
---
-You can use the [`SPXPullAudioOutputStream`](/objectivec/cognitive-services/speech/spxpullaudiooutputstream), [`SPXPushAudioOutputStream`](/objectivec/cognitive-services/speech/spxpushaudiooutputstream), the [`Synthesizing` event](/objectivec/cognitive-services/speech/spxspeechsynthesizer#addsynthesizingeventhandler), and [`SPXAudioDataStream`](/objectivec/cognitive-services/speech/spxaudiodatastream) of the Speech SDK to enable streaming.
-
-Taking `AudioDataStream` as an example:
-
-```Objective-C
-SPXSpeechSynthesizer *synthesizer = [[SPXSpeechSynthesizer alloc] initWithSpeechConfiguration:speechConfig audioConfiguration:nil];
-SPXSpeechSynthesisResult *speechResult = [synthesizer startSpeakingText:inputText];
-SPXAudioDataStream *stream = [[SPXAudioDataStream alloc] initFromSynthesisResult:speechResult];
-NSMutableData* data = [[NSMutableData alloc]initWithCapacity:16000];
-while ([stream readData:data length:16000] > 0) {
- // Read data here
-}
-```
--
-## Pre-connect and reuse SpeechSynthesizer
-
-The Speech SDK uses a WebSocket to communicate with the service.
-Ideally, the network latency should be one round trip time (RTT).
-If the connection is newly established, the network latency includes extra time to establish the connection.
-Establishing a WebSocket connection requires a TCP handshake, an SSL handshake, an HTTP connection, and a protocol upgrade, which introduces a time delay.
-To avoid the connection latency, we recommend pre-connecting and reusing the `SpeechSynthesizer`.
-
-### Pre-connect
-
-To pre-connect, establish a connection to the Speech service when you know the connection will be needed soon. For example, if you're building a speech bot in the client, you can pre-connect to the speech synthesis service when the user starts to talk, and call `SpeakTextAsync` when the bot's reply text is ready.
--
-```csharp
-using (var synthesizer = new SpeechSynthesizer(uspConfig, null as AudioConfig))
-{
- using (var connection = Connection.FromSpeechSynthesizer(synthesizer))
- {
- connection.Open(true);
- }
- await synthesizer.SpeakTextAsync(text);
-}
-```
---
-```cpp
-auto synthesizer = SpeechSynthesizer::FromConfig(config, nullptr);
-auto connection = Connection::FromSpeechSynthesizer(synthesizer);
-connection->Open(true);
-```
---
-```java
-SpeechSynthesizer synthesizer = new SpeechSynthesizer(speechConfig, (AudioConfig) null);
-Connection connection = Connection.fromSpeechSynthesizer(synthesizer);
-connection.openConnection(true);
-```
---
-```python
-synthesizer = speechsdk.SpeechSynthesizer(config, None)
-connection = speechsdk.Connection.from_speech_synthesizer(synthesizer)
-connection.open(True)
-```
---
-```Objective-C
-SPXSpeechSynthesizer* synthesizer = [[SPXSpeechSynthesizer alloc]initWithSpeechConfiguration:self.speechConfig audioConfiguration:nil];
-SPXConnection* connection = [[SPXConnection alloc]initFromSpeechSynthesizer:synthesizer];
-[connection open:true];
-```
--
-> [!NOTE]
-> If the text to synthesize is available, just call `SpeakTextAsync` to synthesize the audio. The SDK handles the connection.
-
-### Reuse SpeechSynthesizer
-
-Another way to reduce the connection latency is to reuse the `SpeechSynthesizer`, so you don't need to create a new `SpeechSynthesizer` for each synthesis.
-We recommend using an object pool in service scenarios; see our sample code for [C#](https://github.com/Azure-Samples/cognitive-services-speech-sdk/blob/master/samples/csharp/sharedcontent/console/speech_synthesis_server_scenario_sample.cs) and [Java](https://github.com/Azure-Samples/cognitive-services-speech-sdk/blob/master/samples/java/jre/console/src/com/microsoft/cognitiveservices/speech/samples/console/SpeechSynthesisScenarioSamples.java). A minimal sketch follows.
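-
-The following C# snippet is a minimal sketch of the reuse idea, not the full object pool from the samples linked above: keep a small pool of `SpeechSynthesizer` instances and rent one per request instead of constructing a new one each time. The pool type and the placeholder key and region values are assumptions for illustration; tune the approach for your own workload.
-
-```csharp
-using System.Collections.Concurrent;
-using Microsoft.CognitiveServices.Speech;
-using Microsoft.CognitiveServices.Speech.Audio;
-
-// Placeholders: replace with your own key and region.
-var speechConfig = SpeechConfig.FromSubscription("yourSpeechKey", "yourServiceRegion");
-var pool = new SynthesizerPool(speechConfig);
-
-// Rent, synthesize, and return, so the same connection can serve later requests.
-var synthesizer = pool.Rent();
-var result = await synthesizer.SpeakTextAsync("Hello, world.");
-pool.Return(synthesizer);
-
-// Minimal pool: reuse SpeechSynthesizer instances instead of creating one per synthesis.
-class SynthesizerPool
-{
-    private readonly ConcurrentBag<SpeechSynthesizer> _pool = new();
-    private readonly SpeechConfig _config;
-
-    public SynthesizerPool(SpeechConfig config) => _config = config;
-
-    public SpeechSynthesizer Rent() =>
-        _pool.TryTake(out var synthesizer) ? synthesizer : new SpeechSynthesizer(_config, null as AudioConfig);
-
-    public void Return(SpeechSynthesizer synthesizer) => _pool.Add(synthesizer);
-}
-```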
--
-## Transmit compressed audio over the network
-
-When the network is unstable or bandwidth is limited, the payload size also impacts latency.
-Meanwhile, a compressed audio format helps to save the users' network bandwidth, which is especially valuable for mobile users.
-
-We support many compressed formats, including `opus`, `webm`, `mp3`, and `silk`. See the full list in [SpeechSynthesisOutputFormat](/cpp/cognitive-services/speech/microsoft-cognitiveservices-speech-namespace#speechsynthesisoutputformat).
-For example, the bitrate of the `Riff24Khz16BitMonoPcm` format is 384 kbps, while `Audio24Khz48KBitRateMonoMp3` only costs 48 kbps.
-The Speech SDK automatically uses a compressed format for transmission when a `pcm` output format is set.
-For Linux and Windows, `GStreamer` is required to enable this feature.
-Refer to [this instruction](how-to-use-codec-compressed-audio-input-streams.md) to install and configure `GStreamer` for the Speech SDK.
-For Android, iOS, and macOS, no extra configuration is needed starting with version 1.20.
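-
-If your client can decode compressed audio itself, you can also request a compressed output format explicitly rather than relying on the automatic transport compression. A minimal C# sketch, assuming a `speechConfig` created as usual and an MP3-capable player on the client:
-
-```csharp
-// Ask the service to return MP3 (48 kbps) instead of raw PCM (384 kbps).
-speechConfig.SetSpeechSynthesisOutputFormat(SpeechSynthesisOutputFormat.Audio24Khz48KBitRateMonoMp3);
-```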
-
-## Other tips
-
-### Cache CRL files
-
-The Speech SDK uses CRL files to check the certificates.
-Caching the CRL files until they expire helps you avoid downloading CRL files every time.
-See [How to configure OpenSSL for Linux](how-to-configure-openssl-linux.md#certificate-revocation-checks) for details.
-
-### Use latest Speech SDK
-
-We continually improve the Speech SDK's performance, so use the latest Speech SDK in your application.
-
-## Load test guideline
-
-You can run a load test to measure the speech synthesis service's capacity and latency.
-Here are some guidelines.
--
-## Next steps
-
-* [See the samples](https://github.com/Azure-Samples/cognitive-services-speech-sdk/tree/master) on GitHub
cognitive-services How To Migrate To Custom Neural Voice https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/how-to-migrate-to-custom-neural-voice.md
- Title: Migrate from custom voice to custom neural voice - Speech service-
-description: This document helps users migrate from custom voice to custom neural voice.
------ Previously updated : 11/12/2021---
-# Migrate from custom voice to custom neural voice
-
-> [!IMPORTANT]
-> We are retiring the standard non-neural training tier of custom voice from March 1, 2021 through February 29, 2024. If you used a non-neural custom voice with your Speech resource prior to March 1, 2021 then you can continue to do so until February 29, 2024. All other Speech resources can only use custom neural voice. After February 29, 2024, the non-neural custom voices won't be supported with any Speech resource.
->
-> The pricing for custom voice is different from custom neural voice. Go to the [pricing page](https://azure.microsoft.com/pricing/details/cognitive-services/speech-services/) and check the pricing details in the collapsible "Deprecated" section. Custom voice (non-neural training) is referred to as **Custom**.
-
-Custom neural voice lets you build higher-quality voice models while requiring less data. You can develop more realistic, natural, and conversational voices. Your customers and end users benefit from the latest text to speech technology, in a responsible way.
-
-|Custom voice |Custom neural voice |
-|--|--|
-| The standard, or "traditional," method of custom voice breaks down spoken language into phonetic snippets that can be remixed and matched using classical programming or statistical methods. | Custom neural voice synthesizes speech using deep neural networks that have "learned" the way phonetics are combined in natural human speech rather than using classical programming or statistical methods.|
-| Custom voice<sup>1</sup> requires a large volume of voice data to produce a more human-like voice model. With fewer recorded lines, a standard custom voice model will tend to sound more obviously robotic. |The custom neural voice capability enables you to create a unique brand voice in multiple languages and styles by using a small set of recordings.|
-
-<sup>1</sup> When creating a custom voice model, the maximum number of data files allowed to be imported per subscription is 10 .zip files for free subscription (F0) users, and 500 for standard subscription (S0) users.
-
-## Action required
-
-Before you can migrate to custom neural voice, your [application](https://aka.ms/customneural) must be accepted. Access to the custom neural voice service is subject to Microsoft's sole discretion based on our eligibility criteria. You must commit to using custom neural voice in alignment with our [Responsible AI principles](https://microsoft.com/ai/responsible-ai) and the [code of conduct](/legal/cognitive-services/speech-service/tts-code-of-conduct?context=%2fazure%2fcognitive-services%2fspeech-service%2fcontext%2fcontext).
-
-> [!TIP]
-> Even without an Azure account, you can listen to voice samples in [Speech Studio](https://aka.ms/customvoice) and determine the right voice for your business needs.
-
-1. Learn more about our [limited access policy](/legal/cognitive-services/speech-service/custom-neural-voice/limited-access-custom-neural-voice?context=%2fazure%2fcognitive-services%2fspeech-service%2fcontext%2fcontext) and then [apply here](https://aka.ms/customneural).
-1. Once your application is approved, you're provided with access to the "neural" training feature. Make sure you sign in to [Speech Studio](https://aka.ms/speechstudio/customvoice) using the same Azure subscription that you provided in your application.
-1. Before you can [train](how-to-custom-voice-create-voice.md) and [deploy](how-to-deploy-and-use-endpoint.md) a custom voice model, you must [create a voice talent profile](how-to-custom-voice-talent.md). The profile requires an audio file recorded by the voice talent consenting to the usage of their speech data to train a custom voice model.
-1. Update your code in your apps if you have created a new endpoint with a new model.
-
-## Custom voice details (deprecated)
-
-Read the following sections for details on custom voice.
-
-### Language support
-
-Custom voice supports the following languages (locales).
-
-| Language | Locale |
-|--|-|
-|Chinese (Mandarin, Simplified)|`zh-CN`|
-|Chinese (Mandarin, Simplified), English bilingual|`zh-CN` bilingual|
-|English (India)|`en-IN`|
-|English (United Kingdom)|`en-GB`|
-|English (United States)|`en-US`|
-|French (France)|`fr-FR`|
-|German (Germany)|`de-DE`|
-|Italian (Italy)|`it-IT`|
-|Portuguese (Brazil)|`pt-BR`|
-|Spanish (Mexico)|`es-MX`|
-
-### Regional support
-
-If you've created a custom voice font, use the endpoint that you've created. You can also use the endpoints listed below, replacing the `{deploymentId}` with the deployment ID for your voice model. A request sketch follows the table.
-
-| Region | Endpoint |
-|--|-|
-| Australia East | `https://australiaeast.voice.speech.microsoft.com/cognitiveservices/v1?deploymentId={deploymentId}` |
-| Brazil South | `https://brazilsouth.voice.speech.microsoft.com/cognitiveservices/v1?deploymentId={deploymentId}` |
-| Canada Central | `https://canadacentral.voice.speech.microsoft.com/cognitiveservices/v1?deploymentId={deploymentId}` |
-| Central US | `https://centralus.voice.speech.microsoft.com/cognitiveservices/v1?deploymentId={deploymentId}` |
-| East Asia | `https://eastasia.voice.speech.microsoft.com/cognitiveservices/v1?deploymentId={deploymentId}` |
-| East US | `https://eastus.voice.speech.microsoft.com/cognitiveservices/v1?deploymentId={deploymentId}` |
-| East US 2 | `https://eastus2.voice.speech.microsoft.com/cognitiveservices/v1?deploymentId={deploymentId}` |
-| France Central | `https://francecentral.voice.speech.microsoft.com/cognitiveservices/v1?deploymentId={deploymentId}` |
-| India Central | `https://centralindia.voice.speech.microsoft.com/cognitiveservices/v1?deploymentId={deploymentId}` |
-| Japan East | `https://japaneast.voice.speech.microsoft.com/cognitiveservices/v1?deploymentId={deploymentId}` |
-| Japan West | `https://japanwest.voice.speech.microsoft.com/cognitiveservices/v1?deploymentId={deploymentId}` |
-| Korea Central | `https://koreacentral.voice.speech.microsoft.com/cognitiveservices/v1?deploymentId={deploymentId}` |
-| North Central US | `https://northcentralus.voice.speech.microsoft.com/cognitiveservices/v1?deploymentId={deploymentId}` |
-| North Europe | `https://northeurope.voice.speech.microsoft.com/cognitiveservices/v1?deploymentId={deploymentId}` |
-| South Central US | `https://southcentralus.voice.speech.microsoft.com/cognitiveservices/v1?deploymentId={deploymentId}` |
-| Southeast Asia | `https://southeastasia.voice.speech.microsoft.com/cognitiveservices/v1?deploymentId={deploymentId}` |
-| UK South | `https://uksouth.voice.speech.microsoft.com/cognitiveservices/v1?deploymentId={deploymentId}` |
-| West Europe | `https://westeurope.voice.speech.microsoft.com/cognitiveservices/v1?deploymentId={deploymentId}` |
-| West Central US | `https://westcentralus.voice.speech.microsoft.com/cognitiveservices/v1?deploymentId={deploymentId}` |
-| West US | `https://westus.voice.speech.microsoft.com/cognitiveservices/v1?deploymentId={deploymentId}` |
-| West US 2 | `https://westus2.voice.speech.microsoft.com/cognitiveservices/v1?deploymentId={deploymentId}` |
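-
-For illustration only, here's a hedged C# sketch of calling one of these endpoints over REST. The `Ocp-Apim-Subscription-Key`, `Content-Type`, and `X-Microsoft-OutputFormat` headers are the standard Speech text to speech REST headers; the region, key, deployment ID, and voice name are placeholders that you must replace with your own values.
-
-```csharp
-using System.Net.Http;
-using System.Text;
-
-// Placeholders: replace the region, deployment ID, key, and custom voice name with your own values.
-var endpoint = "https://westus2.voice.speech.microsoft.com/cognitiveservices/v1?deploymentId={deploymentId}";
-var ssml = "<speak version='1.0' xml:lang='en-US'>" +
-           "<voice name='YourCustomVoiceName'>Hello from my custom voice.</voice></speak>";
-
-using var client = new HttpClient();
-client.DefaultRequestHeaders.Add("Ocp-Apim-Subscription-Key", "yourSpeechResourceKey");
-client.DefaultRequestHeaders.Add("X-Microsoft-OutputFormat", "riff-24khz-16bit-mono-pcm");
-
-var response = await client.PostAsync(endpoint,
-    new StringContent(ssml, Encoding.UTF8, "application/ssml+xml"));
-var audio = await response.Content.ReadAsByteArrayAsync();
-```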
--
-## Next steps
-
-> [!div class="nextstepaction"]
-> [Try out custom neural voice](custom-neural-voice.md)
cognitive-services How To Migrate To Prebuilt Neural Voice https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/how-to-migrate-to-prebuilt-neural-voice.md
- Title: Migrate from prebuilt standard voice to prebuilt neural voice - Speech service-
-description: This document helps users migrate from prebuilt standard voice to prebuilt neural voice.
------ Previously updated : 11/12/2021---
-# Migrate from prebuilt standard voice to prebuilt neural voice
-
-> [!IMPORTANT]
-> We are retiring the standard voices from September 1, 2021 through August 31, 2024. If you used a standard voice with your Speech resource that was created prior to September 1, 2021 then you can continue to do so until August 31, 2024. All other Speech resources can only use prebuilt neural voices. You can choose from the supported [neural voice names](language-support.md?tabs=tts). After August 31, 2024 the standard voices won't be supported with any Speech resource.
->
-> The pricing for prebuilt standard voice is different from prebuilt neural voice. Go to the [pricing page](https://azure.microsoft.com/pricing/details/cognitive-services/speech-services/) and check the pricing details in the collapsible "Deprecated" section. Prebuilt standard voice (retired) is referred to as **Standard**.
-
-The prebuilt neural voice provides more natural-sounding speech output and, thus, a better end-user experience.
-
-|Prebuilt standard voice | Prebuilt neural voice |
-|--|--|
-| Noticeably robotic | Natural sounding, closer to human-parity|
-| Limited capabilities in voice tuning<sup>1</sup> | Advanced capabilities in voice tuning |
-| No new investment in future voice fonts |On-going investment in future voice fonts |
-
-<sup>1</sup> For voice tuning, volume and pitch changes can be applied to standard voices at the word or sentence level, whereas they can only be applied to neural voices at the sentence level. Adjusting the duration is supported for standard voices only. To learn more about prosody elements, see [Improve synthesis with SSML](speech-synthesis-markup.md).
-
-## Action required
-
-> [!TIP]
-> Even without an Azure account, you can listen to voice samples at the [Voice Gallery](https://speech.microsoft.com/portal/voicegallery) and determine the right voice for your business needs.
-
-1. Review the [pricing](https://azure.microsoft.com/pricing/details/cognitive-services/speech-services/) structure.
-2. To make the change, [follow the sample code](speech-synthesis-markup-voice.md#voice-element) to update the voice name in your speech synthesis request to one of the supported neural voice names in your chosen language, as sketched after this list. Use neural voices for your speech synthesis requests, in the cloud or on-premises. For on-premises containers, use the [neural voice containers](../containers/container-image-tags.md) and follow the [instructions](speech-container-howto.md).
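-
-For example, if you use the Speech SDK for C#, the change can be as small as swapping the voice name in your configuration. This is a minimal sketch; `en-US-AriaNeural` is just one of the supported neural voices, so pick the name that fits your language and scenario.
-
-```csharp
-// Before (standard voice, retiring): speechConfig.SpeechSynthesisVoiceName = "en-US-AriaRUS";
-// After (prebuilt neural voice):
-speechConfig.SpeechSynthesisVoiceName = "en-US-AriaNeural";
-
-using var synthesizer = new SpeechSynthesizer(speechConfig);
-var result = await synthesizer.SpeakTextAsync("Hello, this is a neural voice.");
-```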
-
-## Standard voice details (deprecated)
-
-Read the following sections for details on standard voice.
-
-### Language support
-
-More than 75 prebuilt standard voices are available in over 45 languages and locales, which allow you to convert text into synthesized speech.
-
-> [!NOTE]
-> With two exceptions, standard voices are created from samples that use a 16 kHz sample rate.
-> The **en-US-AriaRUS** and **en-US-GuyRUS** voices are also created from samples that use a 24 kHz sample rate.
-> All voices can upsample or downsample to other sample rates when synthesizing.
-
-| Language | Locale (BCP-47) | Gender | Voice name |
-|--|--|--|--|
-| Arabic (Egypt) | `ar-EG` | Female | `ar-EG-Hoda`|
-| Arabic (Saudi Arabia) | `ar-SA` | Male | `ar-SA-Naayf`|
-| Bulgarian (Bulgaria) | `bg-BG` | Male | `bg-BG-Ivan`|
-| Catalan (Spain) | `ca-ES` | Female | `ca-ES-HerenaRUS`|
-| Chinese (Cantonese, Traditional) | `zh-HK` | Male | `zh-HK-Danny`|
-| Chinese (Cantonese, Traditional) | `zh-HK` | Female | `zh-HK-TracyRUS`|
-| Chinese (Mandarin, Simplified) | `zh-CN` | Female | `zh-CN-HuihuiRUS`|
-| Chinese (Mandarin, Simplified) | `zh-CN` | Male | `zh-CN-Kangkang`|
-| Chinese (Mandarin, Simplified) | `zh-CN` | Female | `zh-CN-Yaoyao`|
-| Chinese (Taiwanese Mandarin) | `zh-TW` | Female | `zh-TW-HanHanRUS`|
-| Chinese (Taiwanese Mandarin) | `zh-TW` | Female | `zh-TW-Yating`|
-| Chinese (Taiwanese Mandarin) | `zh-TW` | Male | `zh-TW-Zhiwei`|
-| Croatian (Croatia) | `hr-HR` | Male | `hr-HR-Matej`|
-| Czech (Czech Republic) | `cs-CZ` | Male | `cs-CZ-Jakub`|
-| Danish (Denmark) | `da-DK` | Female | `da-DK-HelleRUS`|
-| Dutch (Netherlands) | `nl-NL` | Female | `nl-NL-HannaRUS`|
-| English (Australia) | `en-AU` | Female | `en-AU-Catherine`|
-| English (Australia) | `en-AU` | Female | `en-AU-HayleyRUS`|
-| English (Canada) | `en-CA` | Female | `en-CA-HeatherRUS`|
-| English (Canada) | `en-CA` | Female | `en-CA-Linda`|
-| English (India) | `en-IN` | Female | `en-IN-Heera`|
-| English (India) | `en-IN` | Female | `en-IN-PriyaRUS`|
-| English (India) | `en-IN` | Male | `en-IN-Ravi`|
-| English (Ireland) | `en-IE` | Male | `en-IE-Sean`|
-| English (United Kingdom) | `en-GB` | Male | `en-GB-George`|
-| English (United Kingdom) | `en-GB` | Female | `en-GB-HazelRUS`|
-| English (United Kingdom) | `en-GB` | Female | `en-GB-Susan`|
-| English (United States) | `en-US` | Male | `en-US-BenjaminRUS`|
-| English (United States) | `en-US` | Male | `en-US-GuyRUS`|
-| English (United States) | `en-US` | Female | `en-US-AriaRUS`|
-| English (United States) | `en-US` | Female | `en-US-ZiraRUS`|
-| Finnish (Finland) | `fi-FI` | Female | `fi-FI-HeidiRUS`|
-| French (Canada) | `fr-CA` | Female | `fr-CA-Caroline`|
-| French (Canada) | `fr-CA` | Female | `fr-CA-HarmonieRUS`|
-| French (France) | `fr-FR` | Female | `fr-FR-HortenseRUS`|
-| French (France) | `fr-FR` | Female | `fr-FR-Julie`|
-| French (France) | `fr-FR` | Male | `fr-FR-Paul`|
-| French (Switzerland) | `fr-CH` | Male | `fr-CH-Guillaume`|
-| German (Austria) | `de-AT` | Male | `de-AT-Michael`|
-| German (Germany) | `de-DE` | Female | `de-DE-HeddaRUS`|
-| German (Germany) | `de-DE` | Male | `de-DE-Stefan`|
-| German (Switzerland) | `de-CH` | Male | `de-CH-Karsten`|
-| Greek (Greece) | `el-GR` | Male | `el-GR-Stefanos`|
-| Hebrew (Israel) | `he-IL` | Male | `he-IL-Asaf`|
-| Hindi (India) | `hi-IN` | Male | `hi-IN-Hemant`|
-| Hindi (India) | `hi-IN` | Female | `hi-IN-Kalpana`|
-| Hungarian (Hungary) | `hu-HU` | Male | `hu-HU-Szabolcs`|
-| Indonesian (Indonesia) | `id-ID` | Male | `id-ID-Andika`|
-| Italian (Italy) | `it-IT` | Male | `it-IT-Cosimo`|
-| Italian (Italy) | `it-IT` | Female | `it-IT-LuciaRUS`|
-| Japanese (Japan) | `ja-JP` | Female | `ja-JP-Ayumi`|
-| Japanese (Japan) | `ja-JP` | Female | `ja-JP-HarukaRUS`|
-| Japanese (Japan) | `ja-JP` | Male | `ja-JP-Ichiro`|
-| Korean (Korea) | `ko-KR` | Female | `ko-KR-HeamiRUS`|
-| Malay (Malaysia) | `ms-MY` | Male | `ms-MY-Rizwan`|
-| Norwegian (Bokmål, Norway) | `nb-NO` | Female | `nb-NO-HuldaRUS`|
-| Polish (Poland) | `pl-PL` | Female | `pl-PL-PaulinaRUS`|
-| Portuguese (Brazil) | `pt-BR` | Male | `pt-BR-Daniel`|
-| Portuguese (Brazil) | `pt-BR` | Female | `pt-BR-HeloisaRUS`|
-| Portuguese (Portugal) | `pt-PT` | Female | `pt-PT-HeliaRUS`|
-| Romanian (Romania) | `ro-RO` | Male | `ro-RO-Andrei`|
-| Russian (Russia) | `ru-RU` | Female | `ru-RU-EkaterinaRUS`|
-| Russian (Russia) | `ru-RU` | Female | `ru-RU-Irina`|
-| Russian (Russia) | `ru-RU` | Male | `ru-RU-Pavel`|
-| Slovak (Slovakia) | `sk-SK` | Male | `sk-SK-Filip`|
-| Slovenian (Slovenia) | `sl-SI` | Male | `sl-SI-Lado`|
-| Spanish (Mexico) | `es-MX` | Female | `es-MX-HildaRUS`|
-| Spanish (Mexico) | `es-MX` | Male | `es-MX-Raul`|
-| Spanish (Spain) | `es-ES` | Female | `es-ES-HelenaRUS`|
-| Spanish (Spain) | `es-ES` | Female | `es-ES-Laura`|
-| Spanish (Spain) | `es-ES` | Male | `es-ES-Pablo`|
-| Swedish (Sweden) | `sv-SE` | Female | `sv-SE-HedvigRUS`|
-| Tamil (India) | `ta-IN` | Male | `ta-IN-Valluvar`|
-| Telugu (India) | `te-IN` | Female | `te-IN-Chitra`|
-| Thai (Thailand) | `th-TH` | Male | `th-TH-Pattara`|
-| Turkish (Türkiye) | `tr-TR` | Female | `tr-TR-SedaRUS`|
-| Vietnamese (Vietnam) | `vi-VN` | Male | `vi-VN-An` |
-
-> [!IMPORTANT]
-> The `en-US-Jessa` voice has changed to `en-US-Aria`. If you were using "Jessa" before, convert over to "Aria".
->
-> You can continue to use the full service name mapping like "Microsoft Server Speech Text to Speech Voice (en-US, AriaRUS)" in your speech synthesis requests.
-
-### Regional support
-
-Use this table to determine **availability of standard voices** by region/endpoint:
-
-| Region | Endpoint |
-|--|-|
-| Australia East | `https://australiaeast.tts.speech.microsoft.com/cognitiveservices/v1` |
-| Brazil South | `https://brazilsouth.tts.speech.microsoft.com/cognitiveservices/v1` |
-| Canada Central | `https://canadacentral.tts.speech.microsoft.com/cognitiveservices/v1` |
-| Central US | `https://centralus.tts.speech.microsoft.com/cognitiveservices/v1` |
-| East Asia | `https://eastasia.tts.speech.microsoft.com/cognitiveservices/v1` |
-| East US | `https://eastus.tts.speech.microsoft.com/cognitiveservices/v1` |
-| East US 2 | `https://eastus2.tts.speech.microsoft.com/cognitiveservices/v1` |
-| France Central | `https://francecentral.tts.speech.microsoft.com/cognitiveservices/v1` |
-| India Central | `https://centralindia.tts.speech.microsoft.com/cognitiveservices/v1` |
-| Japan East | `https://japaneast.tts.speech.microsoft.com/cognitiveservices/v1` |
-| Japan West | `https://japanwest.tts.speech.microsoft.com/cognitiveservices/v1` |
-| Korea Central | `https://koreacentral.tts.speech.microsoft.com/cognitiveservices/v1` |
-| North Central US | `https://northcentralus.tts.speech.microsoft.com/cognitiveservices/v1` |
-| North Europe | `https://northeurope.tts.speech.microsoft.com/cognitiveservices/v1` |
-| South Central US | `https://southcentralus.tts.speech.microsoft.com/cognitiveservices/v1` |
-| Southeast Asia | `https://southeastasia.tts.speech.microsoft.com/cognitiveservices/v1` |
-| UK South | `https://uksouth.tts.speech.microsoft.com/cognitiveservices/v1` |
-| West Central US | `https://westcentralus.tts.speech.microsoft.com/cognitiveservices/v1` |
-| West Europe | `https://westeurope.tts.speech.microsoft.com/cognitiveservices/v1` |
-| West US | `https://westus.tts.speech.microsoft.com/cognitiveservices/v1` |
-| West US 2 | `https://westus2.tts.speech.microsoft.com/cognitiveservices/v1` |
--
-## Next steps
-
-> [!div class="nextstepaction"]
-> [Try out prebuilt neural voice](text-to-speech.md)
cognitive-services How To Pronunciation Assessment https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/how-to-pronunciation-assessment.md
- Title: Use pronunciation assessment-
-description: Learn about pronunciation assessment features that are currently publicly available.
------- Previously updated : 06/05/2023-
-zone_pivot_groups: programming-languages-speech-sdk
--
-# Use pronunciation assessment
-
-In this article, you learn how to evaluate pronunciation with speech to text through the Speech SDK. To [get pronunciation assessment results](#get-pronunciation-assessment-results), you apply the `PronunciationAssessmentConfig` settings to a `SpeechRecognizer` object.
-
-> [!NOTE]
-> Usage of pronunciation assessment costs the same as standard Speech to text, whether pay-as-you-go or commitment tier [pricing](https://azure.microsoft.com/pricing/details/cognitive-services/speech-services). If you [purchase a commitment tier](../commitment-tier.md) for standard Speech to text, the spend for pronunciation assessment goes towards meeting the commitment.
-
-You can get pronunciation assessment scores for:
-
-- Full text
-- Words
-- Syllable groups
-- Phonemes in [SAPI](/previous-versions/windows/desktop/ee431828(v=vs.85)#american-english-phoneme-table) or [IPA](https://en.wikipedia.org/wiki/IPA) format
-
-> [!NOTE]
-> The syllable group, phoneme name, and spoken phoneme of pronunciation assessment are currently only available for the en-US locale. For information about availability of pronunciation assessment, see [supported languages](language-support.md?tabs=pronunciation-assessment) and [available regions](regions.md#speech-service).
-
-## Configuration parameters
-
-> [!NOTE]
-> Pronunciation assessment is not available with the Speech SDK for Go. You can read about the concepts in this guide, but you must select another programming language for implementation details.
-
-You must create a `PronunciationAssessmentConfig` object with the reference text, grading system, and granularity. Enabling miscue and other configuration settings is optional.
--
-```csharp
-var pronunciationAssessmentConfig = new PronunciationAssessmentConfig(
- referenceText: "good morning",
- gradingSystem: GradingSystem.HundredMark,
- granularity: Granularity.Phoneme,
- enableMiscue: true);
-```
-
--
-```cpp
-auto pronunciationAssessmentConfig = PronunciationAssessmentConfig::CreateFromJson("{\"referenceText\":\"good morning\",\"gradingSystem\":\"HundredMark\",\"granularity\":\"Phoneme\",\"enableMiscue\":true}");
-```
---
-```Java
-PronunciationAssessmentConfig pronunciationAssessmentConfig = PronunciationAssessmentConfig.fromJson("{\"referenceText\":\"good morning\",\"gradingSystem\":\"HundredMark\",\"granularity\":\"Phoneme\",\"enableMiscue\":true}");
-```
---
-```Python
-pronunciation_assessment_config = speechsdk.PronunciationAssessmentConfig(json_string="{\"referenceText\":\"good morning\",\"gradingSystem\":\"HundredMark\",\"granularity\":\"Phoneme\",\"EnableMiscue\":true}")
-```
---
-```JavaScript
-var pronunciationAssessmentConfig = SpeechSDK.PronunciationAssessmentConfig.fromJSON("{\"referenceText\":\"good morning\",\"gradingSystem\":\"HundredMark\",\"granularity\":\"Phoneme\",\"EnableMiscue\":true}");
-```
---
-```ObjectiveC
-SPXPronunciationAssessmentConfiguration *pronunciationAssessmentConfig =
-[[SPXPronunciationAssessmentConfiguration alloc] init:@"good morning"
- gradingSystem:SPXPronunciationAssessmentGradingSystem_HundredMark
- granularity:SPXPronunciationAssessmentGranularity_Phoneme
- enableMiscue:true];
-```
----
-```swift
-var pronunciationAssessmentConfig: SPXPronunciationAssessmentConfiguration?
-do {
- try pronunciationAssessmentConfig = SPXPronunciationAssessmentConfiguration.init(referenceText, gradingSystem: SPXPronunciationAssessmentGradingSystem.hundredMark, granularity: SPXPronunciationAssessmentGranularity.phoneme, enableMiscue: true)
-} catch {
- print("error \(error) happened")
- pronunciationAssessmentConfig = nil
- return
-}
-```
-----
-This table lists some of the key configuration parameters for pronunciation assessment.
-
-| Parameter | Description |
-|--|-|
-| `ReferenceText` | The text that the pronunciation is evaluated against. |
-| `GradingSystem` | The point system for score calibration. The `FivePoint` system gives a 0-5 floating point score, and `HundredMark` gives a 0-100 floating point score. Default: `FivePoint`. |
-| `Granularity` | Determines the lowest level of evaluation granularity. Scores for levels greater than or equal to the minimal value are returned. Accepted values are `Phoneme`, which shows the score on the full text, word, syllable, and phoneme level, `Syllable`, which shows the score on the full text, word, and syllable level, `Word`, which shows the score on the full text and word level, or `FullText`, which shows the score on the full text level only. The provided full reference text can be a word, sentence, or paragraph, and it depends on your input reference text. Default: `Phoneme`.|
-| `EnableMiscue` | Enables miscue calculation when the pronounced words are compared to the reference text. If this value is `True`, the `ErrorType` result value can be set to `Omission` or `Insertion` based on the comparison. Accepted values are `False` and `True`. Default: `False`. To enable miscue calculation, set `EnableMiscue` to `True`. You can refer to the code snippet below the table.|
-| `ScenarioId` | A GUID indicating a customized point system. |
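-
-As referenced in the table, here's a hedged C# example of enabling miscue calculation; the other languages set the same option through the JSON configuration string shown earlier in this article.
-
-```csharp
-// Enable miscue calculation so the result can report Omission and Insertion error types.
-var pronunciationAssessmentConfig = new PronunciationAssessmentConfig(
-    referenceText: "good morning",
-    gradingSystem: GradingSystem.HundredMark,
-    granularity: Granularity.Phoneme,
-    enableMiscue: true);
-```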
-
-## Syllable groups
-
-Pronunciation assessment can provide syllable-level assessment results. Grouping in syllables is more legible and aligned with speaking habits, as a word is typically pronounced syllable by syllable rather than phoneme by phoneme.
-
-The following table compares example phonemes with the corresponding syllables.
-
-| Sample word | Phonemes | Syllables |
-|--|-|-|
-|technological|teknələdʒɪkl|tek·nə·lɑ·dʒɪkl|
-|hello|hɛloʊ|hɛ·loʊ|
-|luck|lʌk|lʌk|
-|photosynthesis|foʊtəsɪnθəsɪs|foʊ·tə·sɪn·θə·sɪs|
-
-To request syllable-level results along with phonemes, set the granularity [configuration parameter](#configuration-parameters) to `Phoneme`.
-
-## Phoneme alphabet format
-
-For the `en-US` locale, the phoneme name is provided together with the score, to help identify which phonemes were pronounced accurately or inaccurately. For other locales, you can only get the phoneme score.
-
-The following table compares example SAPI phonemes with the corresponding IPA phonemes.
-
-| Sample word | SAPI Phonemes | IPA phonemes |
-|--|-|-|
-|hello|h eh l ow|h ɛ l oʊ|
-|luck|l ah k|l ʌ k|
-|photosynthesis|f ow t ax s ih n th ax s ih s|f oʊ t ə s ɪ n θ ə s ɪ s|
-
-To request IPA phonemes, set the phoneme alphabet to `"IPA"`. If you don't specify the alphabet, the phonemes are in SAPI format by default.
--
-```csharp
-pronunciationAssessmentConfig.PhonemeAlphabet = "IPA";
-```
-
--
-```cpp
-auto pronunciationAssessmentConfig = PronunciationAssessmentConfig::CreateFromJson("{\"referenceText\":\"good morning\",\"gradingSystem\":\"HundredMark\",\"granularity\":\"Phoneme\",\"phonemeAlphabet\":\"IPA\"}");
-```
-
--
-```Java
-PronunciationAssessmentConfig pronunciationAssessmentConfig = PronunciationAssessmentConfig.fromJson("{\"referenceText\":\"good morning\",\"gradingSystem\":\"HundredMark\",\"granularity\":\"Phoneme\",\"phonemeAlphabet\":\"IPA\"}");
-```
---
-```Python
-pronunciation_assessment_config = speechsdk.PronunciationAssessmentConfig(json_string="{\"referenceText\":\"good morning\",\"gradingSystem\":\"HundredMark\",\"granularity\":\"Phoneme\",\"phonemeAlphabet\":\"IPA\"}")
-```
---
-```JavaScript
-var pronunciationAssessmentConfig = SpeechSDK.PronunciationAssessmentConfig.fromJSON("{\"referenceText\":\"good morning\",\"gradingSystem\":\"HundredMark\",\"granularity\":\"Phoneme\",\"phonemeAlphabet\":\"IPA\"}");
-```
--
-
-```ObjectiveC
-pronunciationAssessmentConfig.phonemeAlphabet = @"IPA";
-```
----
-```swift
-pronunciationAssessmentConfig?.phonemeAlphabet = "IPA"
-```
-----
-## Spoken phoneme
-
-With spoken phonemes, you can get confidence scores indicating how likely the spoken phonemes matched the expected phonemes.
-
-For example, to obtain the complete spoken sound for the word "Hello", you can concatenate the first spoken phoneme for each expected phoneme with the highest confidence score. In the following assessment result, when you speak the word "hello", the expected IPA phonemes are "h ɛ l oʊ". However, the actual spoken phonemes are "h ə l oʊ". You have five possible candidates for each expected phoneme in this example. The assessment result shows that the most likely spoken phoneme was `"ə"` instead of the expected phoneme `"ɛ"`. The expected phoneme `"ɛ"` only received a confidence score of 47. Other potential matches received confidence scores of 52, 17, and 2.
-
-```json
-{
- "Id": "bbb42ea51bdb46d19a1d685e635fe173",
- "RecognitionStatus": 0,
- "Offset": 7500000,
- "Duration": 13800000,
- "DisplayText": "Hello.",
- "NBest": [
- {
- "Confidence": 0.975003,
- "Lexical": "hello",
- "ITN": "hello",
- "MaskedITN": "hello",
- "Display": "Hello.",
- "PronunciationAssessment": {
- "AccuracyScore": 100,
- "FluencyScore": 100,
- "CompletenessScore": 100,
- "PronScore": 100
- },
- "Words": [
- {
- "Word": "hello",
- "Offset": 7500000,
- "Duration": 13800000,
- "PronunciationAssessment": {
- "AccuracyScore": 99.0,
- "ErrorType": "None"
- },
- "Syllables": [
- {
- "Syllable": "hɛ",
- "PronunciationAssessment": {
- "AccuracyScore": 91.0
- },
- "Offset": 7500000,
- "Duration": 4100000
- },
- {
- "Syllable": "loʊ",
- "PronunciationAssessment": {
- "AccuracyScore": 100.0
- },
- "Offset": 11700000,
- "Duration": 9600000
- }
- ],
- "Phonemes": [
- {
- "Phoneme": "h",
- "PronunciationAssessment": {
- "AccuracyScore": 98.0,
- "NBestPhonemes": [
- {
- "Phoneme": "h",
- "Score": 100.0
- },
- {
- "Phoneme": "oʊ",
- "Score": 52.0
- },
- {
- "Phoneme": "ə",
- "Score": 35.0
- },
- {
- "Phoneme": "k",
- "Score": 23.0
- },
- {
- "Phoneme": "æ",
- "Score": 20.0
- }
- ]
- },
- "Offset": 7500000,
- "Duration": 3500000
- },
- {
- "Phoneme": "ɛ",
- "PronunciationAssessment": {
- "AccuracyScore": 47.0,
- "NBestPhonemes": [
- {
- "Phoneme": "ə",
- "Score": 100.0
- },
- {
- "Phoneme": "l",
- "Score": 52.0
- },
- {
- "Phoneme": "ɛ",
- "Score": 47.0
- },
- {
- "Phoneme": "h",
- "Score": 17.0
- },
- {
- "Phoneme": "æ",
- "Score": 2.0
- }
- ]
- },
- "Offset": 11100000,
- "Duration": 500000
- },
- {
- "Phoneme": "l",
- "PronunciationAssessment": {
- "AccuracyScore": 100.0,
- "NBestPhonemes": [
- {
- "Phoneme": "l",
- "Score": 100.0
- },
- {
- "Phoneme": "oʊ",
- "Score": 46.0
- },
- {
- "Phoneme": "ə",
- "Score": 5.0
- },
- {
- "Phoneme": "ɛ",
- "Score": 3.0
- },
- {
- "Phoneme": "u",
- "Score": 1.0
- }
- ]
- },
- "Offset": 11700000,
- "Duration": 1100000
- },
- {
- "Phoneme": "oʊ",
- "PronunciationAssessment": {
- "AccuracyScore": 100.0,
- "NBestPhonemes": [
- {
- "Phoneme": "oʊ",
- "Score": 100.0
- },
- {
- "Phoneme": "d",
- "Score": 29.0
- },
- {
- "Phoneme": "t",
- "Score": 24.0
- },
- {
- "Phoneme": "n",
- "Score": 22.0
- },
- {
- "Phoneme": "l",
- "Score": 18.0
- }
- ]
- },
- "Offset": 12900000,
- "Duration": 8400000
- }
- ]
- }
- ]
- }
- ]
-}
-```
-
-To indicate how many potential spoken phonemes to get confidence scores for, set the `NBestPhonemeCount` parameter to an integer value such as `5`.
-
-
-```csharp
-pronunciationAssessmentConfig.NBestPhonemeCount = 5;
-```
-
--
-```cpp
-auto pronunciationAssessmentConfig = PronunciationAssessmentConfig::CreateFromJson("{\"referenceText\":\"good morning\",\"gradingSystem\":\"HundredMark\",\"granularity\":\"Phoneme\",\"phonemeAlphabet\":\"IPA\",\"nBestPhonemeCount\":5}");
-```
-
--
-```Java
-PronunciationAssessmentConfig pronunciationAssessmentConfig = PronunciationAssessmentConfig.fromJson("{\"referenceText\":\"good morning\",\"gradingSystem\":\"HundredMark\",\"granularity\":\"Phoneme\",\"phonemeAlphabet\":\"IPA\",\"nBestPhonemeCount\":5}");
-```
-
--
-```Python
-pronunciation_assessment_config = speechsdk.PronunciationAssessmentConfig(json_string="{\"referenceText\":\"good morning\",\"gradingSystem\":\"HundredMark\",\"granularity\":\"Phoneme\",\"phonemeAlphabet\":\"IPA\",\"nBestPhonemeCount\":5}")
-```
---
-```JavaScript
-var pronunciationAssessmentConfig = SpeechSDK.PronunciationAssessmentConfig.fromJSON("{\"referenceText\":\"good morning\",\"gradingSystem\":\"HundredMark\",\"granularity\":\"Phoneme\",\"phonemeAlphabet\":\"IPA\",\"nBestPhonemeCount\":5}");
-```
--
-
-
-```ObjectiveC
-pronunciationAssessmentConfig.nbestPhonemeCount = 5;
-```
----
-```swift
-pronunciationAssessmentConfig?.nbestPhonemeCount = 5
-```
----
-## Get pronunciation assessment results
-
-In the `SpeechRecognizer`, you can specify the language that you're learning or practicing to improve your pronunciation. The default locale is `en-US` if not otherwise specified.
-
-> [!TIP]
-> If you aren't sure which locale to set when a language has multiple locales (such as Spanish), try each locale (such as `es-ES` and `es-MX`) separately. Evaluate the results to determine which locale scores higher for your specific scenario.
-
-When speech is recognized, you can request the pronunciation assessment results as SDK objects or a JSON string.
--
-```csharp
-using (var speechRecognizer = new SpeechRecognizer(
- speechConfig,
- audioConfig))
-{
- pronunciationAssessmentConfig.ApplyTo(speechRecognizer);
- var speechRecognitionResult = await speechRecognizer.RecognizeOnceAsync();
-
- // The pronunciation assessment result as a Speech SDK object
- var pronunciationAssessmentResult =
- PronunciationAssessmentResult.FromResult(speechRecognitionResult);
-
- // The pronunciation assessment result as a JSON string
- var pronunciationAssessmentResultJson = speechRecognitionResult.Properties.GetProperty(PropertyId.SpeechServiceResponse_JsonResult);
-}
-```
-
-To learn how to specify the learning language for pronunciation assessment in your own application, see [sample code](https://github.com/Azure-Samples/cognitive-services-speech-sdk/blob/master/samples/csharp/sharedcontent/console/speech_recognition_samples.cs#LL1086C13-L1086C98).
---
-Word, syllable, and phoneme results aren't available via SDK objects with the Speech SDK for C++; they're only available in the JSON string.
-
-```cpp
-auto speechRecognizer = SpeechRecognizer::FromConfig(
- speechConfig,
- audioConfig);
-
-pronunciationAssessmentConfig->ApplyTo(speechRecognizer);
-speechRecognitionResult = speechRecognizer->RecognizeOnceAsync().get();
-
-// The pronunciation assessment result as a Speech SDK object
-auto pronunciationAssessmentResult =
- PronunciationAssessmentResult::FromResult(speechRecognitionResult);
-
-// The pronunciation assessment result as a JSON string
-auto pronunciationAssessmentResultJson = speechRecognitionResult->Properties.GetProperty(PropertyId::SpeechServiceResponse_JsonResult);
-```
-
-To learn how to specify the learning language for pronunciation assessment in your own application, see [sample code](https://github.com/Azure-Samples/cognitive-services-speech-sdk/blob/master/samples/cpp/windows/console/samples/speech_recognition_samples.cpp#L624).
-
-
-For Android application development, the word, syllable, and phoneme results are available via SDK objects with the Speech SDK for Java. The results are also available in the JSON string. For Java Runtime (JRE) application development, the word, syllable, and phoneme results are only available in the JSON string.
-
-```Java
-SpeechRecognizer speechRecognizer = new SpeechRecognizer(
- speechConfig,
- audioConfig);
-
-pronunciationAssessmentConfig.applyTo(speechRecognizer);
-Future<SpeechRecognitionResult> future = speechRecognizer.recognizeOnceAsync();
-SpeechRecognitionResult speechRecognitionResult = future.get(30, TimeUnit.SECONDS);
-
-// The pronunciation assessment result as a Speech SDK object
-PronunciationAssessmentResult pronunciationAssessmentResult =
- PronunciationAssessmentResult.fromResult(speechRecognitionResult);
-
-// The pronunciation assessment result as a JSON string
-String pronunciationAssessmentResultJson = speechRecognitionResult.getProperties().getProperty(PropertyId.SpeechServiceResponse_JsonResult);
-
-recognizer.close();
-speechConfig.close();
-audioConfig.close();
-pronunciationAssessmentConfig.close();
-speechRecognitionResult.close();
-```
----
-```JavaScript
-var speechRecognizer = new SpeechSDK.SpeechRecognizer(speechConfig, audioConfig);
-
-pronunciationAssessmentConfig.applyTo(speechRecognizer);
-
-speechRecognizer.recognizeOnceAsync((speechRecognitionResult) => {
- // The pronunciation assessment result as a Speech SDK object
- var pronunciationAssessmentResult = SpeechSDK.PronunciationAssessmentResult.fromResult(speechRecognitionResult);
-
- // The pronunciation assessment result as a JSON string
- var pronunciationAssessmentResultJson = speechRecognitionResult.properties.getProperty(SpeechSDK.PropertyId.SpeechServiceResponse_JsonResult);
-},
-{});
-```
-
-To learn how to specify the learning language for pronunciation assessment in your own application, see [sample code](https://github.com/Azure-Samples/cognitive-services-speech-sdk/blob/master/samples/js/node/pronunciationAssessmentContinue.js#LL37C4-L37C52).
---
-```Python
-speech_recognizer = speechsdk.SpeechRecognizer(
- speech_config=speech_config, \
- audio_config=audio_config)
-
-pronunciation_assessment_config.apply_to(speech_recognizer)
-speech_recognition_result = speech_recognizer.recognize_once()
-
-# The pronunciation assessment result as a Speech SDK object
-pronunciation_assessment_result = speechsdk.PronunciationAssessmentResult(speech_recognition_result)
-
-# The pronunciation assessment result as a JSON string
-pronunciation_assessment_result_json = speech_recognition_result.properties.get(speechsdk.PropertyId.SpeechServiceResponse_JsonResult)
-```
-
-To learn how to specify the learning language for pronunciation assessment in your own application, see [sample code](https://github.com/Azure-Samples/cognitive-services-speech-sdk/blob/master/samples/python/console/speech_sample.py#LL937C1-L937C1).
---
-
-```ObjectiveC
-SPXSpeechRecognizer* speechRecognizer = \
- [[SPXSpeechRecognizer alloc] initWithSpeechConfiguration:speechConfig
- audioConfiguration:audioConfig];
-
-[pronunciationAssessmentConfig applyToRecognizer:speechRecognizer];
-
-SPXSpeechRecognitionResult *speechRecognitionResult = [speechRecognizer recognizeOnce];
-
-// The pronunciation assessment result as a Speech SDK object
-SPXPronunciationAssessmentResult* pronunciationAssessmentResult = [[SPXPronunciationAssessmentResult alloc] init:speechRecognitionResult];
-
-// The pronunciation assessment result as a JSON string
-NSString* pronunciationAssessmentResultJson = [speechRecognitionResult.properties getPropertyByName:SPXSpeechServiceResponseJsonResult];
-```
-
-To learn how to specify the learning language for pronunciation assessment in your own application, see [sample code](https://github.com/Azure-Samples/cognitive-services-speech-sdk/blob/master/samples/objective-c/ios/speech-samples/speech-samples/ViewController.m#L862).
---
-```swift
-let speechRecognizer = try! SPXSpeechRecognizer(speechConfiguration: speechConfig, audioConfiguration: audioConfig)
-
-try! pronConfig.apply(to: speechRecognizer)
-
-let speechRecognitionResult = try? speechRecognizer.recognizeOnce()
-
-// The pronunciation assessment result as a Speech SDK object
-let pronunciationAssessmentResult = SPXPronunciationAssessmentResult(speechRecognitionResult!)
-
-// The pronunciation assessment result as a JSON string
-let pronunciationAssessmentResultJson = speechRecognitionResult!.properties?.getPropertyBy(SPXPropertyId.speechServiceResponseJsonResult)
-```
-
-To learn how to specify the learning language for pronunciation assessment in your own application, see [sample code](https://github.com/Azure-Samples/cognitive-services-speech-sdk/blob/master/samples/swift/ios/speech-samples/speech-samples/ViewController.swift#L224).
----
-### Result parameters
-
-This table lists some of the key pronunciation assessment results.
-
-| Parameter | Description |
-|--|-|
-| `AccuracyScore` | Pronunciation accuracy of the speech. Accuracy indicates how closely the phonemes match a native speaker's pronunciation. Syllable, word, and full text accuracy scores are aggregated from phoneme-level accuracy score, and refined with assessment objectives.|
-| `FluencyScore` | Fluency of the given speech. Fluency indicates how closely the speech matches a native speaker's use of silent breaks between words. |
-| `CompletenessScore` | Completeness of the speech, calculated by the ratio of pronounced words to the input reference text. |
-| `PronScore` | Overall score indicating the pronunciation quality of the given speech. `PronScore` is aggregated from `AccuracyScore`, `FluencyScore`, and `CompletenessScore` with weight. |
-| `ErrorType` | This value indicates whether a word is omitted, inserted, or mispronounced, compared to the `ReferenceText`. Possible values are `None`, `Omission`, `Insertion`, and `Mispronunciation`. The error type can be `Mispronunciation` when the pronunciation `AccuracyScore` for a word is below 60.|
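-
-With the Speech SDK for C#, these values are also exposed as properties on the `PronunciationAssessmentResult` object, where the overall score is surfaced as `PronunciationScore` rather than `PronScore`. The following is a minimal sketch, assuming `speechRecognitionResult` comes from the earlier code in this article.
-
-```csharp
-// Read the top-level scores from the SDK result object.
-var pronunciationAssessmentResult = PronunciationAssessmentResult.FromResult(speechRecognitionResult);
-Console.WriteLine($"Accuracy: {pronunciationAssessmentResult.AccuracyScore}");
-Console.WriteLine($"Fluency: {pronunciationAssessmentResult.FluencyScore}");
-Console.WriteLine($"Completeness: {pronunciationAssessmentResult.CompletenessScore}");
-Console.WriteLine($"Pronunciation: {pronunciationAssessmentResult.PronunciationScore}");
-```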
-
-### JSON result example
-
-Pronunciation assessment results for the spoken word "hello" are shown as a JSON string in the following example. Here's what you should know:
-- The phoneme [alphabet](#phoneme-alphabet-format) is IPA.
-- The [syllables](#syllable-groups) are returned alongside phonemes for the same word.
-- You can use the `Offset` and `Duration` values to align syllables with their corresponding phonemes. For example, the starting offset (11700000) of the second syllable ("loʊ") aligns with the third phoneme ("l"). The offset represents the time at which the recognized speech begins in the audio stream, and it's measured in 100-nanosecond units. To learn more about `Offset` and `Duration`, see [response properties](rest-speech-to-text-short.md#response-properties).
-- There are five `NBestPhonemes` corresponding to the number of [spoken phonemes](#spoken-phoneme) requested.
-- Within `Phonemes`, the most likely [spoken phoneme](#spoken-phoneme) was `"ə"` instead of the expected phoneme `"ɛ"`. The expected phoneme `"ɛ"` only received a confidence score of 47. Other potential matches received confidence scores of 52, 17, and 2.
-
-```json
-{
- "Id": "bbb42ea51bdb46d19a1d685e635fe173",
- "RecognitionStatus": 0,
- "Offset": 7500000,
- "Duration": 13800000,
- "DisplayText": "Hello.",
- "NBest": [
- {
- "Confidence": 0.975003,
- "Lexical": "hello",
- "ITN": "hello",
- "MaskedITN": "hello",
- "Display": "Hello.",
- "PronunciationAssessment": {
- "AccuracyScore": 100,
- "FluencyScore": 100,
- "CompletenessScore": 100,
- "PronScore": 100
- },
- "Words": [
- {
- "Word": "hello",
- "Offset": 7500000,
- "Duration": 13800000,
- "PronunciationAssessment": {
- "AccuracyScore": 99.0,
- "ErrorType": "None"
- },
- "Syllables": [
- {
- "Syllable": "hɛ",
- "PronunciationAssessment": {
- "AccuracyScore": 91.0
- },
- "Offset": 7500000,
- "Duration": 4100000
- },
- {
- "Syllable": "loʊ",
- "PronunciationAssessment": {
- "AccuracyScore": 100.0
- },
- "Offset": 11700000,
- "Duration": 9600000
- }
- ],
- "Phonemes": [
- {
- "Phoneme": "h",
- "PronunciationAssessment": {
- "AccuracyScore": 98.0,
- "NBestPhonemes": [
- {
- "Phoneme": "h",
- "Score": 100.0
- },
- {
- "Phoneme": "oʊ",
- "Score": 52.0
- },
- {
- "Phoneme": "ə",
- "Score": 35.0
- },
- {
- "Phoneme": "k",
- "Score": 23.0
- },
- {
- "Phoneme": "æ",
- "Score": 20.0
- }
- ]
- },
- "Offset": 7500000,
- "Duration": 3500000
- },
- {
- "Phoneme": "ɛ",
- "PronunciationAssessment": {
- "AccuracyScore": 47.0,
- "NBestPhonemes": [
- {
- "Phoneme": "ə",
- "Score": 100.0
- },
- {
- "Phoneme": "l",
- "Score": 52.0
- },
- {
- "Phoneme": "ɛ",
- "Score": 47.0
- },
- {
- "Phoneme": "h",
- "Score": 17.0
- },
- {
- "Phoneme": "æ",
- "Score": 2.0
- }
- ]
- },
- "Offset": 11100000,
- "Duration": 500000
- },
- {
- "Phoneme": "l",
- "PronunciationAssessment": {
- "AccuracyScore": 100.0,
- "NBestPhonemes": [
- {
- "Phoneme": "l",
- "Score": 100.0
- },
- {
- "Phoneme": "oʊ",
- "Score": 46.0
- },
- {
- "Phoneme": "ə",
- "Score": 5.0
- },
- {
- "Phoneme": "ɛ",
- "Score": 3.0
- },
- {
- "Phoneme": "u",
- "Score": 1.0
- }
- ]
- },
- "Offset": 11700000,
- "Duration": 1100000
- },
- {
- "Phoneme": "oʊ",
- "PronunciationAssessment": {
- "AccuracyScore": 100.0,
- "NBestPhonemes": [
- {
- "Phoneme": "oʊ",
- "Score": 100.0
- },
- {
- "Phoneme": "d",
- "Score": 29.0
- },
- {
- "Phoneme": "t",
- "Score": 24.0
- },
- {
- "Phoneme": "n",
- "Score": 22.0
- },
- {
- "Phoneme": "l",
- "Score": 18.0
- }
- ]
- },
- "Offset": 12900000,
- "Duration": 8400000
- }
- ]
- }
- ]
- }
- ]
-}
-```
-
-## Pronunciation assessment in streaming mode
-
-Pronunciation assessment supports uninterrupted streaming mode. The recording time can be unlimited through the Speech SDK. As long as you don't stop recording, the evaluation process doesn't finish, and you can pause and resume evaluation conveniently. In streaming mode, the `AccuracyScore`, `FluencyScore`, and `CompletenessScore` vary over time throughout the recording and evaluation process.
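-
-The following C# sketch shows the streaming pattern, assuming `speechConfig`, `audioConfig`, and `pronunciationAssessmentConfig` are created as shown earlier in this article. Each recognized segment carries its own assessment scores while recognition continues.
-
-```csharp
-using var speechRecognizer = new SpeechRecognizer(speechConfig, audioConfig);
-pronunciationAssessmentConfig.ApplyTo(speechRecognizer);
-
-speechRecognizer.Recognized += (s, e) =>
-{
-    // Scores for the latest recognized segment; they vary as the session continues.
-    var assessment = PronunciationAssessmentResult.FromResult(e.Result);
-    Console.WriteLine($"Accuracy so far: {assessment.AccuracyScore}");
-};
-
-await speechRecognizer.StartContinuousRecognitionAsync();
-// Keep capturing audio; stop when the user is done.
-await speechRecognizer.StopContinuousRecognitionAsync();
-```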
--
-For how to use Pronunciation Assessment in streaming mode in your own application, see [sample code](https://github.com/Azure-Samples/cognitive-services-speech-sdk/blob/master/samples/csharp/sharedcontent/console/speech_recognition_samples.cs#:~:text=PronunciationAssessmentWithStream).
---
-For how to use Pronunciation Assessment in streaming mode in your own application, see [sample code](https://github.com/Azure-Samples/cognitive-services-speech-sdk/blob/master/samples/cpp/windows/console/samples/speech_recognition_samples.cpp#:~:text=PronunciationAssessmentWithStream).
---
-For how to use Pronunciation Assessment in streaming mode in your own application, see [sample code](https://github.com/Azure-Samples/cognitive-services-speech-sdk/blob/master/samples/java/android/sdkdemo/app/src/main/java/com/microsoft/cognitiveservices/speech/samples/sdkdemo/MainActivity.java#L548).
---
-For how to use Pronunciation Assessment in streaming mode in your own application, see [sample code](https://github.com/Azure-Samples/cognitive-services-speech-sdk/blob/master/samples/python/console/speech_sample.py#L915).
---
-For how to use Pronunciation Assessment in streaming mode in your own application, see [sample code](https://github.com/Azure-Samples/cognitive-services-speech-sdk/blob/master/samples/js/node/pronunciationAssessment.js).
---
-For how to use Pronunciation Assessment in streaming mode in your own application, see [sample code](https://github.com/Azure-Samples/cognitive-services-speech-sdk/blob/master/samples/objective-c/ios/speech-samples/speech-samples/ViewController.m#L831).
---
-For how to use Pronunciation Assessment in streaming mode in your own application, see [sample code](https://github.com/Azure-Samples/cognitive-services-speech-sdk/blob/master/samples/swift/ios/speech-samples/speech-samples/ViewController.swift#L191).
----
-## Next steps
-
-- Learn about our quality [benchmark](https://techcommunity.microsoft.com/t5/ai-cognitive-services-blog/speech-service-update-hierarchical-transformer-for-pronunciation/ba-p/3740866)
-- Try out [pronunciation assessment in Speech Studio](pronunciation-assessment-tool.md)
-- Check out the easy-to-deploy pronunciation assessment [demo](https://github.com/Azure-Samples/Cognitive-Speech-TTS/tree/master/PronunciationAssessment/BrowserJS) and watch the [video tutorial](https://www.youtube.com/watch?v=zFlwm7N4Awc) of pronunciation assessment.
cognitive-services How To Recognize Intents From Speech Csharp https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/how-to-recognize-intents-from-speech-csharp.md
- Title: How to recognize intents from speech using the Speech SDK C#-
-description: In this guide, you learn how to recognize intents from speech using the Speech SDK for C#.
------ Previously updated : 02/08/2022----
-# How to recognize intents from speech using the Speech SDK for C#
-
-The Cognitive Services [Speech SDK](speech-sdk.md) integrates with the [Language Understanding service (LUIS)](https://www.luis.ai/home) to provide **intent recognition**. An intent is something the user wants to do: book a flight, check the weather, or make a call. The user can use whatever terms feel natural. Using machine learning, LUIS maps user requests to the intents you've defined.
-
-> [!NOTE]
-> A LUIS application defines the intents and entities you want to recognize. It's separate from the C# application that uses the Speech service. In this article, "app" means the LUIS app, while "application" means the C# code.
-
-In this guide, you use the Speech SDK to develop a C# console application that derives intents from user utterances through your device's microphone. You'll learn how to:
-
-> [!div class="checklist"]
->
-> - Create a Visual Studio project referencing the Speech SDK NuGet package
-> - Create a speech configuration and get an intent recognizer
-> - Get the model for your LUIS app and add the intents you need
-> - Specify the language for speech recognition
-> - Recognize speech from a file
-> - Use asynchronous, event-driven continuous recognition
-
-## Prerequisites
-
-Be sure you have the following items before you begin this guide:
-
-- A LUIS account. You can get one for free through the [LUIS portal](https://www.luis.ai/home).
-- [Visual Studio 2019](https://visualstudio.microsoft.com/downloads/) (any edition).
-
-## LUIS and speech
-
-LUIS integrates with the Speech service to recognize intents from speech. You don't need a Speech service subscription, just LUIS.
-
-LUIS uses two kinds of keys:
-
-| Key type | Purpose |
-| | -- |
-| Authoring | Lets you create and modify LUIS apps programmatically |
-| Prediction | Used to access the LUIS application at runtime |
-
-For this guide, you need the prediction key type. This guide uses the example Home Automation LUIS app, which you can create by following the [Use prebuilt Home automation app](../luis/luis-get-started-create-app.md) quickstart. If you've created a LUIS app of your own, you can use it instead.
-
-When you create a LUIS app, LUIS automatically generates an authoring key so you can test the app using text queries. This key doesn't enable the Speech service integration and won't work with this guide. Create a LUIS resource in the Azure dashboard and assign it to the LUIS app. You can use the free subscription tier for this guide.
-
-After you create the LUIS resource in the Azure dashboard, log into the [LUIS portal](https://www.luis.ai/home), choose your application on the **My Apps** page, then switch to the app's **Manage** page. Finally, select **Azure Resources** in the sidebar.
--
-On the **Azure Resources** page, select the icon next to a key to copy it to the clipboard. (You can use either key.)
-
-## Create the project and add the workload
-
-To create a Visual Studio project for Windows development, you need to create the project, set up Visual Studio for .NET desktop development, install the Speech SDK, and choose the target architecture.
-
-To start, create the project in Visual Studio, and make sure that Visual Studio is set up for .NET desktop development:
-
-1. Open Visual Studio 2019.
-
-1. In the Start window, select **Create a new project**.
-
-1. In the **Create a new project** window, choose **Console App (.NET Framework)**, and then select **Next**.
-
-1. In the **Configure your new project** window, enter *helloworld* in **Project name**, choose or create the directory path in **Location**, and then select **Create**.
-
-1. From the Visual Studio menu bar, select **Tools** > **Get Tools and Features**, which opens Visual Studio Installer and displays the **Modifying** dialog box.
-
-1. Check whether the **.NET desktop development** workload is available. If the workload hasn't been installed, select the check box next to it, and then select **Modify** to start the installation. It may take a few minutes to download and install.
-
- If the check box next to **.NET desktop development** is already selected, select **Close** to exit the dialog box.
-
- ![Enable .NET desktop development](~/articles/cognitive-services/speech-service/media/sdk/vs-enable-net-desktop-workload.png)
-
-1. Close Visual Studio Installer.
-
-### Install the Speech SDK
-
-The next step is to install the [Speech SDK NuGet package](https://aka.ms/csspeech/nuget), so you can reference it in the code.
-
-1. In the Solution Explorer, right-click the **helloworld** project, and then select **Manage NuGet Packages** to show the NuGet Package Manager.
-
- ![NuGet Package Manager](~/articles/cognitive-services/speech-service/media/sdk/vs-nuget-package-manager.png)
-
-1. In the upper-right corner, find the **Package Source** drop-down box, and make sure that **nuget.org** is selected.
-
-1. In the upper-left corner, select **Browse**.
-
-1. In the search box, type *Microsoft.CognitiveServices.Speech* and select **Enter**.
-
-1. From the search results, select the **Microsoft.CognitiveServices.Speech** package, and then select **Install** to install the latest stable version.
-
- ![Install Microsoft.CognitiveServices.Speech NuGet package](~/articles/cognitive-services/speech-service/media/sdk/qs-csharp-dotnet-windows-03-nuget-install-1.0.0.png)
-
-1. Accept all agreements and licenses to start the installation.
-
- After the package is installed, a confirmation appears in the **Package Manager Console** window.
-
-### Choose the target architecture
-
-Now, to build and run the console application, create a platform configuration matching your computer's architecture.
-
-1. From the menu bar, select **Build** > **Configuration Manager**. The **Configuration Manager** dialog box appears.
-
- ![Configuration Manager dialog box](~/articles/cognitive-services/speech-service/media/sdk/vs-configuration-manager-dialog-box.png)
-
-1. In the **Active solution platform** drop-down box, select **New**. The **New Solution Platform** dialog box appears.
-
-1. In the **Type or select the new platform** drop-down box:
- - If you're running 64-bit Windows, select **x64**.
- - If you're running 32-bit Windows, select **x86**.
-
-1. Select **OK** and then **Close**.
-
-## Add the code
-
-Next, you add code to the project.
-
-1. From **Solution Explorer**, open the file **Program.cs**.
-
-1. Replace the block of `using` statements at the beginning of the file with the following declarations:
-
- [!code-csharp[Top-level declarations](~/samples-cognitive-services-speech-sdk/samples/csharp/sharedcontent/console/intent_recognition_samples.cs#toplevel)]
-
-1. Replace the provided `Main()` method, with the following asynchronous equivalent:
-
- ```csharp
- public static async Task Main()
- {
- await RecognizeIntentAsync();
- Console.WriteLine("Please press Enter to continue.");
- Console.ReadLine();
- }
- ```
-
-1. Create an empty asynchronous method `RecognizeIntentAsync()`, as shown here:
-
- ```csharp
- static async Task RecognizeIntentAsync()
- {
- }
- ```
-
-1. In the body of this new method, add this code:
-
- [!code-csharp[Intent recognition by using a microphone](~/samples-cognitive-services-speech-sdk/samples/csharp/sharedcontent/console/intent_recognition_samples.cs#intentRecognitionWithMicrophone)]
-
-1. Replace the placeholders in this method with your LUIS resource key, region, and app ID as follows.
-
- | Placeholder | Replace with |
- | -- | |
- | `YourLanguageUnderstandingSubscriptionKey` | Your LUIS resource key. Again, you must get this item from your Azure dashboard. You can find it on your app's **Azure Resources** page (under **Manage**) in the [LUIS portal](https://www.luis.ai/home). |
- | `YourLanguageUnderstandingServiceRegion` | The short identifier for the region your LUIS resource is in, such as `westus` for West US. See [Regions](regions.md). |
- | `YourLanguageUnderstandingAppId` | The LUIS app ID. You can find it on your app's **Settings** page in the [LUIS portal](https://www.luis.ai/home). |
-
-With these changes made, you can build (**Control+Shift+B**) and run (**F5**) the application. When you're prompted, try saying "Turn off the lights" into your PC's microphone. The application displays the result in the console window.
-
-The following sections include a discussion of the code.
-
-## Create an intent recognizer
-
-First, you need to create a speech configuration from your LUIS prediction key and region. You can use speech configurations to create recognizers for the various capabilities of the Speech SDK. The speech configuration has multiple ways to specify the resource you want to use; here, we use `FromSubscription`, which takes the resource key and region.
-
-> [!NOTE]
-> Use the key and region of your LUIS resource, not a Speech resource.
-
-Next, create an intent recognizer using `new IntentRecognizer(config)`. Since the configuration already knows which resource to use, you don't need to specify the key again when creating the recognizer.
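-
-As a rough sketch, these two steps look like the following in C#. The key and region strings are the same placeholders used in the table above, not real values.
-
-```csharp
-using Microsoft.CognitiveServices.Speech;
-using Microsoft.CognitiveServices.Speech.Intent;
-
-// Create the configuration from the LUIS prediction resource (not a Speech resource).
-var config = SpeechConfig.FromSubscription(
-    "YourLanguageUnderstandingSubscriptionKey",
-    "YourLanguageUnderstandingServiceRegion");
-
-// The recognizer reuses the resource information stored in the configuration.
-using var recognizer = new IntentRecognizer(config);
-```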
-
-## Import a LUIS model and add intents
-
-Now import the model from the LUIS app using `LanguageUnderstandingModel.FromAppId()` and add the LUIS intents that you wish to recognize via the recognizer's `AddIntent()` method. These two steps improve the accuracy of speech recognition by indicating words that the user is likely to use in their requests. You don't have to add all the app's intents if you don't need to recognize them all in your application.
-
-To add intents, you must provide three arguments: the LUIS model (which has been created and is named `model`), the intent name, and an intent ID. The difference between the ID and the name is as follows.
-
-| `AddIntent()`&nbsp;argument | Purpose |
-| | - |
-| `intentName` | The name of the intent as defined in the LUIS app. This value must match the LUIS intent name exactly. |
-| `intentID` | An ID assigned to a recognized intent by the Speech SDK. This value can be whatever you like; it doesn't need to correspond to the intent name as defined in the LUIS app. If multiple intents are handled by the same code, for instance, you could use the same ID for them. |
-
-The Home Automation LUIS app has two intents: one for turning on a device, and another for turning off a device. The lines below add these intents to the recognizer; replace the three `AddIntent` lines in the `RecognizeIntentAsync()` method with this code.
-
-```csharp
-recognizer.AddIntent(model, "HomeAutomation.TurnOff", "off");
-recognizer.AddIntent(model, "HomeAutomation.TurnOn", "on");
-```
-
-Instead of adding individual intents, you can also use the `AddAllIntents` method to add all the intents in a model to the recognizer.
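-
-For example, a minimal sketch using the same placeholder app ID as in the table above:
-
-```csharp
-// Load the LUIS model by app ID and register every intent it defines.
-var model = LanguageUnderstandingModel.FromAppId("YourLanguageUnderstandingAppId");
-recognizer.AddAllIntents(model);
-```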
-
-## Start recognition
-
-With the recognizer created and the intents added, recognition can begin. The Speech SDK supports both single-shot and continuous recognition.
-
-| Recognition mode | Methods to call | Result |
-| - | | |
-| Single-shot | `RecognizeOnceAsync()` | Returns the recognized intent, if any, after one utterance. |
-| Continuous | `StartContinuousRecognitionAsync()`<br>`StopContinuousRecognitionAsync()` | Recognizes multiple utterances; emits events (for example, `Recognizing`) when results are available. |
-
-The application uses single-shot mode and so calls `RecognizeOnceAsync()` to begin recognition. The result is an `IntentRecognitionResult` object containing information about the intent recognized. You extract the LUIS JSON response by using the following expression:
-
-```csharp
-result.Properties.GetProperty(PropertyId.LanguageUnderstandingServiceResponse_JsonResult)
-```
-
-The application doesn't parse the JSON result. It only displays the JSON text in the console window.
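-
-If your application does need to branch on the outcome, a minimal sketch might look like the following. Here, `result` is the `IntentRecognitionResult` returned by `RecognizeOnceAsync()`.
-
-```csharp
-switch (result.Reason)
-{
-    case ResultReason.RecognizedIntent:
-        Console.WriteLine($"Recognized intent ID: {result.IntentId}");
-        Console.WriteLine($"Recognized text: {result.Text}");
-        // Raw LUIS JSON response, useful for inspecting entities.
-        Console.WriteLine(result.Properties.GetProperty(
-            PropertyId.LanguageUnderstandingServiceResponse_JsonResult));
-        break;
-    case ResultReason.RecognizedSpeech:
-        Console.WriteLine($"Speech recognized, but no intent matched: {result.Text}");
-        break;
-    case ResultReason.NoMatch:
-        Console.WriteLine("Speech could not be recognized.");
-        break;
-}
-```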
-
-![Single LUIS recognition results](media/sdk/luis-results.png)
-
-## Specify recognition language
-
-By default, LUIS recognizes intents in US English (`en-us`). By assigning a locale code to the `SpeechRecognitionLanguage` property of the speech configuration, you can recognize intents in other languages. For example, add `config.SpeechRecognitionLanguage = "de-de";` in our application before creating the recognizer to recognize intents in German. For more information, see [LUIS language support](../LUIS/luis-language-support.md).
-
-## Continuous recognition from a file
-
-The following code illustrates two additional capabilities of intent recognition using the Speech SDK. The first, previously mentioned, is continuous recognition, where the recognizer emits events when results are available. These events can then be processed by event handlers that you provide. With continuous recognition, you call the recognizer's `StartContinuousRecognitionAsync()` method to start recognition instead of `RecognizeOnceAsync()`.
-
-The other capability is reading the audio containing the speech to be processed from a WAV file. Implementation involves creating an audio configuration that can be used when creating the intent recognizer. The file must be single-channel (mono) with a sampling rate of 16 kHz.
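-
-Conceptually, the file-based variant boils down to the sketch below, which assumes the `config` object created earlier and the `whatstheweatherlike.wav` file mentioned later in this section. The complete, runnable version is the sample that follows.
-
-```csharp
-using Microsoft.CognitiveServices.Speech.Audio;
-
-// Read audio from a mono, 16-kHz WAV file instead of the microphone.
-using var audioConfig = AudioConfig.FromWavFileInput("whatstheweatherlike.wav");
-using var recognizer = new IntentRecognizer(config, audioConfig);
-
-// Continuous recognition: wire up events, then start and stop explicitly.
-recognizer.Recognized += (s, e) => Console.WriteLine($"RECOGNIZED: {e.Result.Text}");
-await recognizer.StartContinuousRecognitionAsync();
-// ... wait for the session to finish ...
-await recognizer.StopContinuousRecognitionAsync();
-```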
-
-To try out these features, delete or comment out the body of the `RecognizeIntentAsync()` method, and add the following code in its place.
-
-[!code-csharp[Intent recognition by using events from a file](~/samples-cognitive-services-speech-sdk/samples/csharp/sharedcontent/console/intent_recognition_samples.cs#intentContinuousRecognitionWithFile)]
-
-Revise the code to include your LUIS prediction key, region, and app ID and to add the Home Automation intents, as before. Change `whatstheweatherlike.wav` to the name of your recorded audio file. Then build, copy the audio file to the build directory, and run the application.
-
-For example, if you say "Turn off the lights", pause, and then say "Turn on the lights" in your recorded audio file, console output similar to the following may appear:
-
-![Audio file LUIS recognition results](media/sdk/luis-results-2.png)
-
-The Speech SDK team actively maintains a large set of examples in an open-source repository. For the sample source code repository, see the [Azure Cognitive Services Speech SDK on GitHub](https://aka.ms/csspeech/samples). There are samples for C#, C++, Java, Python, Objective-C, Swift, JavaScript, UWP, Unity, and Xamarin. Look for the code from this article in the **samples/csharp/sharedcontent/console** folder.
-
-## Next steps
-
-> [Quickstart: Recognize speech from a microphone](./get-started-speech-to-text.md?pivots=programming-language-csharp&tabs=dotnetcore)
cognitive-services How To Recognize Speech https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/how-to-recognize-speech.md
- Title: "How to recognize speech - Speech service"-
-description: Learn how to convert speech to text, including object construction, supported audio input formats, and configuration options for speech recognition.
------ Previously updated : 09/16/2022--
-zone_pivot_groups: programming-languages-speech-services
-keywords: speech to text, speech to text software
--
-# How to recognize speech
-----------
-## Next steps
-
-* [Try the speech to text quickstart](get-started-speech-to-text.md)
-* [Improve recognition accuracy with custom speech](custom-speech-overview.md)
-* [Use batch transcription](batch-transcription.md)
cognitive-services How To Select Audio Input Devices https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/how-to-select-audio-input-devices.md
- Title: Select an audio input device with the Speech SDK-
-description: 'Learn about selecting audio input devices in the Speech SDK (C++, C#, Python, Objective-C, Java, and JavaScript) by obtaining the IDs of the audio devices connected to a system.'
------ Previously updated : 07/05/2019----
-# Select an audio input device with the Speech SDK
-
-This article describes how to obtain the IDs of the audio devices connected to a system. These IDs can then be used in the Speech SDK to select the audio input. You configure the audio device through the `AudioConfig` object:
-
-```C++
-audioConfig = AudioConfig.FromMicrophoneInput("<device id>");
-```
-
-```cs
-audioConfig = AudioConfig.FromMicrophoneInput("<device id>");
-```
-
-```Python
-audio_config = AudioConfig(device_name="<device id>");
-```
-
-```objc
-audioConfig = AudioConfiguration.FromMicrophoneInput("<device id>");
-```
-
-```Java
-audioConfig = AudioConfiguration.fromMicrophoneInput("<device id>");
-```
-
-```JavaScript
-audioConfig = AudioConfiguration.fromMicrophoneInput("<device id>");
-```
-
-> [!NOTE]
-> Microphone use isn't available for JavaScript running in Node.js.
-
-## Audio device IDs on Windows for desktop applications
-
-Audio device [endpoint ID strings](/windows/desktop/CoreAudio/endpoint-id-strings) can be retrieved from the [`IMMDevice`](/windows/desktop/api/mmdeviceapi/nn-mmdeviceapi-immdevice) object in Windows for desktop applications.
-
-The following code sample illustrates how to use it to enumerate audio devices in C++:
-
-```cpp
-#include <cstdio>
-#include <mmdeviceapi.h>
-
-#include <Functiondiscoverykeys_devpkey.h>
-
-const CLSID CLSID_MMDeviceEnumerator = __uuidof(MMDeviceEnumerator);
-const IID IID_IMMDeviceEnumerator = __uuidof(IMMDeviceEnumerator);
-
-constexpr auto REFTIMES_PER_SEC = (10000000 * 25);
-constexpr auto REFTIMES_PER_MILLISEC = 10000;
-
-#define EXIT_ON_ERROR(hres) \
- if (FAILED(hres)) { goto Exit; }
-#define SAFE_RELEASE(punk) \
- if ((punk) != NULL) \
- { (punk)->Release(); (punk) = NULL; }
-
-void ListEndpoints();
-
-int main()
-{
- CoInitializeEx(NULL, COINIT_MULTITHREADED);
- ListEndpoints();
-}
-
-//--
-// This function enumerates all active (plugged in) audio
-// rendering endpoint devices. It prints the friendly name
-// and endpoint ID string of each endpoint device.
-//--
-void ListEndpoints()
-{
- HRESULT hr = S_OK;
- IMMDeviceEnumerator *pEnumerator = NULL;
- IMMDeviceCollection *pCollection = NULL;
- IMMDevice *pEndpoint = NULL;
- IPropertyStore *pProps = NULL;
- LPWSTR pwszID = NULL;
-
- hr = CoCreateInstance(CLSID_MMDeviceEnumerator, NULL, CLSCTX_ALL, IID_IMMDeviceEnumerator, (void**)&pEnumerator);
- EXIT_ON_ERROR(hr);
-
- hr = pEnumerator->EnumAudioEndpoints(eCapture, DEVICE_STATE_ACTIVE, &pCollection);
- EXIT_ON_ERROR(hr);
-
- UINT count;
- hr = pCollection->GetCount(&count);
- EXIT_ON_ERROR(hr);
-
- if (count == 0)
- {
- printf("No endpoints found.\n");
- }
-
- // Each iteration prints the name of an endpoint device.
- PROPVARIANT varName;
- for (ULONG i = 0; i < count; i++)
- {
- // Get the pointer to endpoint number i.
- hr = pCollection->Item(i, &pEndpoint);
- EXIT_ON_ERROR(hr);
-
- // Get the endpoint ID string.
- hr = pEndpoint->GetId(&pwszID);
- EXIT_ON_ERROR(hr);
-
- hr = pEndpoint->OpenPropertyStore(
- STGM_READ, &pProps);
- EXIT_ON_ERROR(hr);
-
- // Initialize the container for property value.
- PropVariantInit(&varName);
-
- // Get the endpoint's friendly-name property.
- hr = pProps->GetValue(PKEY_Device_FriendlyName, &varName);
- EXIT_ON_ERROR(hr);
-
- // Print the endpoint friendly name and endpoint ID.
- printf("Endpoint %d: \"%S\" (%S)\n", i, varName.pwszVal, pwszID);
-
- CoTaskMemFree(pwszID);
- pwszID = NULL;
- PropVariantClear(&varName);
- }
-
-Exit:
- CoTaskMemFree(pwszID);
- pwszID = NULL;
- PropVariantClear(&varName);
- SAFE_RELEASE(pEnumerator);
- SAFE_RELEASE(pCollection);
- SAFE_RELEASE(pEndpoint);
- SAFE_RELEASE(pProps);
-}
-```
-
-In C#, you can use the [NAudio](https://github.com/naudio/NAudio) library to access the CoreAudio API and enumerate devices as follows:
-
-```cs
-using System;
-
-using NAudio.CoreAudioApi;
-
-namespace ConsoleApp
-{
- class Program
- {
- static void Main(string[] args)
- {
- var enumerator = new MMDeviceEnumerator();
- foreach (var endpoint in
- enumerator.EnumerateAudioEndPoints(DataFlow.Capture, DeviceState.Active))
- {
- Console.WriteLine("{0} ({1})", endpoint.FriendlyName, endpoint.ID);
- }
- }
- }
-}
-```
-
-A sample device ID is `{0.0.1.00000000}.{5f23ab69-6181-4f4a-81a4-45414013aac8}`.
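-
-Once you have an endpoint ID string, you can pass it to the Speech SDK as shown at the start of this article. A minimal C# sketch, where the key and region are placeholders for your own Speech resource values:
-
-```csharp
-using Microsoft.CognitiveServices.Speech;
-using Microsoft.CognitiveServices.Speech.Audio;
-
-var speechConfig = SpeechConfig.FromSubscription("YourSpeechKey", "YourSpeechRegion");
-
-// Use the endpoint ID string returned by the enumeration code above.
-var audioConfig = AudioConfig.FromMicrophoneInput(
-    "{0.0.1.00000000}.{5f23ab69-6181-4f4a-81a4-45414013aac8}");
-
-using var recognizer = new SpeechRecognizer(speechConfig, audioConfig);
-```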
-
-## Audio device IDs on UWP
-
-On the Universal Windows Platform (UWP), you can obtain audio input devices by using the `Id()` property of the corresponding [`DeviceInformation`](/uwp/api/windows.devices.enumeration.deviceinformation) object.
-
-The following code samples show how to do this step in C++ and C#:
-
-```cpp
-#include <winrt/Windows.Foundation.h>
-#include <winrt/Windows.Devices.Enumeration.h>
-
-using namespace winrt::Windows::Devices::Enumeration;
-
-void enumerateDeviceIds()
-{
- auto promise = DeviceInformation::FindAllAsync(DeviceClass::AudioCapture);
-
- promise.Completed(
- [](winrt::Windows::Foundation::IAsyncOperation<DeviceInformationCollection> const& sender,
- winrt::Windows::Foundation::AsyncStatus /* asyncStatus */) {
- auto info = sender.GetResults();
- auto num_devices = info.Size();
-
- for (const auto &device : info)
- {
- std::wstringstream ss{};
- ss << "looking at device (of " << num_devices << "): " << device.Id().c_str() << "\n";
- OutputDebugString(ss.str().c_str());
- }
- });
-}
-```
-
-```cs
-using Windows.Devices.Enumeration;
-using System.Linq;
-
-namespace helloworld {
- private async void EnumerateDevices()
- {
- var devices = await DeviceInformation.FindAllAsync(DeviceClass.AudioCapture);
-
- foreach (var device in devices)
- {
- Console.WriteLine($"{device.Name}, {device.Id}\n");
- }
- }
-}
-```
-
-A sample device ID is `\\\\?\\SWD#MMDEVAPI#{0.0.1.00000000}.{5f23ab69-6181-4f4a-81a4-45414013aac8}#{2eef81be-33fa-4800-9670-1cd474972c3f}`.
-
-## Audio device IDs on Linux
-
-On Linux, audio input devices are selected by using standard ALSA device IDs.
-
-The IDs of the inputs attached to the system are contained in the output of the command `arecord -L`.
-Alternatively, they can be obtained by using the [ALSA C library](https://www.alsa-project.org/alsa-doc/alsa-lib/).
-
-Sample IDs are `hw:1,0` and `hw:CARD=CC,DEV=0`.
-
-## Audio device IDs on macOS
-
-The following function implemented in Objective-C creates a list of the names and IDs of the audio devices attached to a Mac.
-
-The `deviceUID` string is used to identify a device in the Speech SDK for macOS.
-
-```objc
-#import <Foundation/Foundation.h>
-#import <CoreAudio/CoreAudio.h>
-
-CFArrayRef CreateInputDeviceArray()
-{
- AudioObjectPropertyAddress propertyAddress = {
- kAudioHardwarePropertyDevices,
- kAudioObjectPropertyScopeGlobal,
- kAudioObjectPropertyElementMaster
- };
-
- UInt32 dataSize = 0;
- OSStatus status = AudioObjectGetPropertyDataSize(kAudioObjectSystemObject, &propertyAddress, 0, NULL, &dataSize);
- if (kAudioHardwareNoError != status) {
- fprintf(stderr, "AudioObjectGetPropertyDataSize (kAudioHardwarePropertyDevices) failed: %i\n", status);
- return NULL;
- }
-
- UInt32 deviceCount = (uint32)(dataSize / sizeof(AudioDeviceID));
-
- AudioDeviceID *audioDevices = (AudioDeviceID *)(malloc(dataSize));
- if (NULL == audioDevices) {
- fputs("Unable to allocate memory", stderr);
- return NULL;
- }
-
- status = AudioObjectGetPropertyData(kAudioObjectSystemObject, &propertyAddress, 0, NULL, &dataSize, audioDevices);
- if (kAudioHardwareNoError != status) {
- fprintf(stderr, "AudioObjectGetPropertyData (kAudioHardwarePropertyDevices) failed: %i\n", status);
- free(audioDevices);
- audioDevices = NULL;
- return NULL;
- }
-
- CFMutableArrayRef inputDeviceArray = CFArrayCreateMutable(kCFAllocatorDefault, deviceCount, &kCFTypeArrayCallBacks);
- if (NULL == inputDeviceArray) {
- fputs("CFArrayCreateMutable failed", stderr);
- free(audioDevices);
- audioDevices = NULL;
- return NULL;
- }
-
- // Iterate through all the devices and determine which are input-capable
- propertyAddress.mScope = kAudioDevicePropertyScopeInput;
- for (UInt32 i = 0; i < deviceCount; ++i) {
- // Query device UID
- CFStringRef deviceUID = NULL;
- dataSize = sizeof(deviceUID);
- propertyAddress.mSelector = kAudioDevicePropertyDeviceUID;
- status = AudioObjectGetPropertyData(audioDevices[i], &propertyAddress, 0, NULL, &dataSize, &deviceUID);
- if (kAudioHardwareNoError != status) {
- fprintf(stderr, "AudioObjectGetPropertyData (kAudioDevicePropertyDeviceUID) failed: %i\n", status);
- continue;
- }
-
- // Query device name
- CFStringRef deviceName = NULL;
- dataSize = sizeof(deviceName);
- propertyAddress.mSelector = kAudioDevicePropertyDeviceNameCFString;
- status = AudioObjectGetPropertyData(audioDevices[i], &propertyAddress, 0, NULL, &dataSize, &deviceName);
- if (kAudioHardwareNoError != status) {
- fprintf(stderr, "AudioObjectGetPropertyData (kAudioDevicePropertyDeviceNameCFString) failed: %i\n", status);
- continue;
- }
-
- // Determine if the device is an input device (it is an input device if it has input channels)
- dataSize = 0;
- propertyAddress.mSelector = kAudioDevicePropertyStreamConfiguration;
- status = AudioObjectGetPropertyDataSize(audioDevices[i], &propertyAddress, 0, NULL, &dataSize);
- if (kAudioHardwareNoError != status) {
- fprintf(stderr, "AudioObjectGetPropertyDataSize (kAudioDevicePropertyStreamConfiguration) failed: %i\n", status);
- continue;
- }
-
- AudioBufferList *bufferList = (AudioBufferList *)(malloc(dataSize));
- if (NULL == bufferList) {
- fputs("Unable to allocate memory", stderr);
- break;
- }
-
- status = AudioObjectGetPropertyData(audioDevices[i], &propertyAddress, 0, NULL, &dataSize, bufferList);
- if (kAudioHardwareNoError != status || 0 == bufferList->mNumberBuffers) {
- if (kAudioHardwareNoError != status)
- fprintf(stderr, "AudioObjectGetPropertyData (kAudioDevicePropertyStreamConfiguration) failed: %i\n", status);
- free(bufferList);
- bufferList = NULL;
- continue;
- }
-
- free(bufferList);
- bufferList = NULL;
-
- // Add a dictionary for this device to the array of input devices
- CFStringRef keys [] = { CFSTR("deviceUID"), CFSTR("deviceName")};
- CFStringRef values [] = { deviceUID, deviceName};
-
- CFDictionaryRef deviceDictionary = CFDictionaryCreate(kCFAllocatorDefault,
- (const void **)(keys),
- (const void **)(values),
- 2,
- &kCFTypeDictionaryKeyCallBacks,
- &kCFTypeDictionaryValueCallBacks);
-
- CFArrayAppendValue(inputDeviceArray, deviceDictionary);
-
- CFRelease(deviceDictionary);
- deviceDictionary = NULL;
- }
-
- free(audioDevices);
- audioDevices = NULL;
-
- // Return a non-mutable copy of the array
- CFArrayRef immutableInputDeviceArray = CFArrayCreateCopy(kCFAllocatorDefault, inputDeviceArray);
- CFRelease(inputDeviceArray);
- inputDeviceArray = NULL;
-
- return immutableInputDeviceArray;
-}
-```
-
-For example, the UID for the built-in microphone is `BuiltInMicrophoneDevice`.
-
-## Audio device IDs on iOS
-
-Audio device selection with the Speech SDK isn't supported on iOS. Apps that use the SDK can influence audio routing through the [`AVAudioSession`](https://developer.apple.com/documentation/avfoundation/avaudiosession?language=objc) Framework.
-
-For example, the instruction
-
-```objc
-[[AVAudioSession sharedInstance] setCategory:AVAudioSessionCategoryRecord
- withOptions:AVAudioSessionCategoryOptionAllowBluetooth error:NULL];
-```
-
-enables the use of a Bluetooth headset for a speech-enabled app.
-
-## Audio device IDs in JavaScript
-
-In JavaScript, the [MediaDevices.enumerateDevices()](https://developer.mozilla.org/docs/Web/API/MediaDevices/enumerateDevices) method can be used to enumerate the media devices and find a device ID to pass to `fromMicrophone(...)`.
-
-## Next steps
-
-- [Explore samples on GitHub](https://aka.ms/csspeech/samples)
-- [Customize acoustic models](./how-to-custom-speech-train-model.md)
-- [Customize language models](./how-to-custom-speech-train-model.md)
cognitive-services How To Speech Synthesis Viseme https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/how-to-speech-synthesis-viseme.md
- Title: Get facial position with viseme-
-description: Speech SDK supports viseme events during speech synthesis, which represent key poses in observed speech, such as the position of the lips, jaw, and tongue when producing a particular phoneme.
------ Previously updated : 10/23/2022--
-zone_pivot_groups: programming-languages-speech-services-nomore-variant
--
-# Get facial position with viseme
-
-> [!NOTE]
-> To explore the locales supported for Viseme ID and blend shapes, refer to [the list of all supported locales](language-support.md?tabs=tts#viseme). Scalable Vector Graphics (SVG) is only supported for the `en-US` locale.
-
-A *viseme* is the visual description of a phoneme in spoken language. It defines the position of the face and mouth while a person is speaking. Each viseme depicts the key facial poses for a specific set of phonemes.
-
-You can use visemes to control the movement of 2D and 3D avatar models, so that the facial positions are best aligned with synthetic speech. For example, you can:
-
- * Create an animated virtual voice assistant for intelligent kiosks, building multi-mode integrated services for your customers.
- * Build immersive news broadcasts and improve audience experiences with natural face and mouth movements.
- * Generate more interactive gaming avatars and cartoon characters that can speak with dynamic content.
- * Make more effective language teaching videos that help language learners understand the mouth behavior of each word and phoneme.
- * People with hearing impairment can also pick up sounds visually and "lip-read" speech content that shows visemes on an animated face.
-
-For more information about visemes, view this [introductory video](https://youtu.be/ui9XT47uwxs).
-> [!VIDEO https://www.youtube.com/embed/ui9XT47uwxs]
-
-## Overall workflow of producing viseme with speech
-
-Neural Text to speech (Neural TTS) turns input text or SSML (Speech Synthesis Markup Language) into lifelike synthesized speech. Speech audio output can be accompanied by viseme ID, Scalable Vector Graphics (SVG), or blend shapes. Using a 2D or 3D rendering engine, you can use these viseme events to animate your avatar.
-
-The overall workflow of viseme is depicted in the following flowchart:
-
-![Diagram of the overall workflow of viseme.](media/text-to-speech/viseme-structure.png)
-
-## Viseme ID
-
-Viseme ID refers to an integer number that specifies a viseme. We offer 22 different visemes, each depicting the mouth position for a specific set of phonemes. There's no one-to-one correspondence between visemes and phonemes. Often, several phonemes correspond to a single viseme, because they look the same on the speaker's face when they're produced, such as `s` and `z`. For more specific information, see the table for [mapping phonemes to viseme IDs](#map-phonemes-to-visemes).
-
-Speech audio output can be accompanied by viseme IDs and `Audio offset`. The `Audio offset` indicates the offset timestamp that represents the start time of each viseme, in ticks (100 nanoseconds).
-
-### Map phonemes to visemes
-
-Visemes vary by language and locale. Each locale has a set of visemes that correspond to its specific phonemes. The [SSML phonetic alphabets](speech-ssml-phonetic-sets.md) documentation maps viseme IDs to the corresponding International Phonetic Alphabet (IPA) phonemes. The table below shows a mapping relationship between viseme IDs and mouth positions, listing typical IPA phonemes for each viseme ID.
-
-| Viseme ID | IPA | Mouth position|
-||||
-|0|Silence|<img src="media/text-to-speech/viseme-id-0.jpg" width="200" height="200" alt="The mouth position when viseme ID is 0">|
-|1|`æ`, `ə`, `ʌ`| <img src="media/text-to-speech/viseme-id-1.jpg" width="200" height="200" alt="The mouth position when viseme ID is 1">|
-|2|`ɑ`|<img src="media/text-to-speech/viseme-id-2.jpg" width="200" height="200" alt="The mouth position when viseme ID is 2">|
-|3|`ɔ`|<img src="media/text-to-speech/viseme-id-3.jpg" width="200" height="200" alt="The mouth position when viseme ID is 3">|
-|4|`ɛ`, `ʊ`|<img src="media/text-to-speech/viseme-id-4.jpg" width="200" height="200" alt="The mouth position when viseme ID is 4">|
-|5|`ɝ`|<img src="media/text-to-speech/viseme-id-5.jpg" width="200" height="200" alt="The mouth position when viseme ID is 5">|
-|6|`j`, `i`, `ɪ` |<img src="media/text-to-speech/viseme-id-6.jpg" width="200" height="200" alt="The mouth position when viseme ID is 6">|
-|7|`w`, `u`|<img src="media/text-to-speech/viseme-id-7.jpg" width="200" height="200" alt="The mouth position when viseme ID is 7">|
-|8|`o`|<img src="media/text-to-speech/viseme-id-8.jpg" width="200" height="200" alt="The mouth position when viseme ID is 8">|
-|9|`aʊ`|<img src="media/text-to-speech/viseme-id-9.jpg" width="200" height="200" alt="The mouth position when viseme ID is 9">|
-|10|`ɔɪ`|<img src="media/text-to-speech/viseme-id-10.jpg" width="200" height="200" alt="The mouth position when viseme ID is 10">|
-|11|`aɪ`|<img src="media/text-to-speech/viseme-id-11.jpg" width="200" height="200" alt="The mouth position when viseme ID is 11">|
-|12|`h`|<img src="media/text-to-speech/viseme-id-12.jpg" width="200" height="200" alt="The mouth position when viseme ID is 12">|
-|13|`ɹ`|<img src="media/text-to-speech/viseme-id-13.jpg" width="200" height="200" alt="The mouth position when viseme ID is 13">|
-|14|`l`|<img src="media/text-to-speech/viseme-id-14.jpg" width="200" height="200" alt="The mouth position when viseme ID is 14">|
-|15|`s`, `z`|<img src="media/text-to-speech/viseme-id-15.jpg" width="200" height="200" alt="The mouth position when viseme ID is 15">|
-|16|`ʃ`, `tʃ`, `dʒ`, `ʒ`|<img src="media/text-to-speech/viseme-id-16.jpg" width="200" height="200" alt="The mouth position when viseme ID is 16">|
-|17|`ð`|<img src="media/text-to-speech/viseme-id-17.jpg" width="200" height="200" alt="The mouth position when viseme ID is 17">|
-|18|`f`, `v`|<img src="media/text-to-speech/viseme-id-18.jpg" width="200" height="200" alt="The mouth position when viseme ID is 18">|
-|19|`d`, `t`, `n`, `θ`|<img src="media/text-to-speech/viseme-id-19.jpg" width="200" height="200" alt="The mouth position when viseme ID is 19">|
-|20|`k`, `g`, `ŋ`|<img src="media/text-to-speech/viseme-id-20.jpg" width="200" height="200" alt="The mouth position when viseme ID is 20">|
-|21|`p`, `b`, `m`|<img src="media/text-to-speech/viseme-id-21.jpg" width="200" height="200" alt="The mouth position when viseme ID is 21">|
-
-## 2D SVG animation
-
-For 2D characters, you can design a character that suits your scenario and use Scalable Vector Graphics (SVG) for each viseme ID to get a time-based face position.
-
-With temporal tags that are provided in a viseme event, these well-designed SVGs will be processed with smoothing modifications, and provide robust animation to the users. For example, the following illustration shows a red-lipped character that's designed for language learning.
-
-![Screenshot showing a 2D rendering example of four red-lipped mouths, each representing a different viseme ID that corresponds to a phoneme.](media/text-to-speech/viseme-demo-2D.png)
-
-## 3D blend shapes animation
-
-You can use blend shapes to drive the facial movements of a 3D character that you designed.
-
-The blend shapes JSON string is represented as a two-dimensional matrix. Each row represents a frame. Each frame (at 60 frames per second) contains an array of 55 facial positions.
-
-## Get viseme events with the Speech SDK
-
-To get viseme with your synthesized speech, subscribe to the `VisemeReceived` event in the Speech SDK.
-
-> [!NOTE]
-> To request SVG or blend shapes output, you should use the `mstts:viseme` element in SSML. For details, see [how to use viseme element in SSML](speech-synthesis-markup-structure.md#viseme-element).
-
-The following snippet shows how to subscribe to the viseme event:
--
-```csharp
-using (var synthesizer = new SpeechSynthesizer(speechConfig, audioConfig))
-{
- // Subscribes to viseme received event
- synthesizer.VisemeReceived += (s, e) =>
- {
- Console.WriteLine($"Viseme event received. Audio offset: " +
- $"{e.AudioOffset / 10000}ms, viseme id: {e.VisemeId}.");
-
- // `Animation` is an xml string for SVG or a json string for blend shapes
- var animation = e.Animation;
- };
-
- // If VisemeID is the only thing you want, you can also use `SpeakTextAsync()`
- var result = await synthesizer.SpeakSsmlAsync(ssml);
-}
-
-```
---
-```cpp
-auto synthesizer = SpeechSynthesizer::FromConfig(speechConfig, audioConfig);
-
-// Subscribes to viseme received event
-synthesizer->VisemeReceived += [](const SpeechSynthesisVisemeEventArgs& e)
-{
- cout << "viseme event received. "
- // The unit of e.AudioOffset is tick (1 tick = 100 nanoseconds), divide by 10,000 to convert to milliseconds.
- << "Audio offset: " << e.AudioOffset / 10000 << "ms, "
- << "viseme id: " << e.VisemeId << "." << endl;
-
- // `Animation` is an xml string for SVG or a json string for blend shapes
- auto animation = e.Animation;
-};
-
-// If VisemeID is the only thing you want, you can also use `SpeakTextAsync()`
-auto result = synthesizer->SpeakSsmlAsync(ssml).get();
-```
---
-```java
-SpeechSynthesizer synthesizer = new SpeechSynthesizer(speechConfig, audioConfig);
-
-// Subscribes to viseme received event
-synthesizer.VisemeReceived.addEventListener((o, e) -> {
- // The unit of e.AudioOffset is tick (1 tick = 100 nanoseconds), divide by 10,000 to convert to milliseconds.
- System.out.print("Viseme event received. Audio offset: " + e.getAudioOffset() / 10000 + "ms, ");
- System.out.println("viseme id: " + e.getVisemeId() + ".");
-
- // `Animation` is an xml string for SVG or a json string for blend shapes
- String animation = e.getAnimation();
-});
-
-// If VisemeID is the only thing you want, you can also use `SpeakTextAsync()`
-SpeechSynthesisResult result = synthesizer.SpeakSsmlAsync(ssml).get();
-```
---
-```Python
-speech_synthesizer = speechsdk.SpeechSynthesizer(speech_config=speech_config, audio_config=audio_config)
-
-def viseme_cb(evt):
- print("Viseme event received: audio offset: {}ms, viseme id: {}.".format(
- evt.audio_offset / 10000, evt.viseme_id))
-
- # `Animation` is an xml string for SVG or a json string for blend shapes
- animation = evt.animation
-
-# Subscribes to viseme received event
-speech_synthesizer.viseme_received.connect(viseme_cb)
-
-# If VisemeID is the only thing you want, you can also use `speak_text_async()`
-result = speech_synthesizer.speak_ssml_async(ssml).get()
-```
---
-```Javascript
-var synthesizer = new SpeechSDK.SpeechSynthesizer(speechConfig, audioConfig);
-
-// Subscribes to viseme received event
-synthesizer.visemeReceived = function (s, e) {
- window.console.log("(Viseme), Audio offset: " + e.audioOffset / 10000 + "ms. Viseme ID: " + e.visemeId);
-
- // `Animation` is an xml string for SVG or a json string for blend shapes
- var animation = e.Animation;
-}
-
-// If VisemeID is the only thing you want, you can also use `speakTextAsync()`
-synthesizer.speakSsmlAsync(ssml);
-```
---
-```Objective-C
-SPXSpeechSynthesizer *synthesizer =
- [[SPXSpeechSynthesizer alloc] initWithSpeechConfiguration:speechConfig
- audioConfiguration:audioConfig];
-
-// Subscribes to viseme received event
-[synthesizer addVisemeReceivedEventHandler: ^ (SPXSpeechSynthesizer *synthesizer, SPXSpeechSynthesisVisemeEventArgs *eventArgs) {
- NSLog(@"Viseme event received. Audio offset: %fms, viseme id: %lu.", eventArgs.audioOffset/10000., eventArgs.visemeId);
-
- // `Animation` is an xml string for SVG or a json string for blend shapes
- NSString *animation = eventArgs.Animation;
-}];
-
-// If VisemeID is the only thing you want, you can also use `SpeakText`
-[synthesizer speakSsml:ssml];
-```
--
-Here's an example of the viseme output.
-
-# [Viseme ID](#tab/visemeid)
-
-```text
-(Viseme), Viseme ID: 1, Audio offset: 200ms.
-
-(Viseme), Viseme ID: 5, Audio offset: 850ms.
-
-……
-
-(Viseme), Viseme ID: 13, Audio offset: 2350ms.
-```
-
-# [2D SVG](#tab/2dsvg)
-
-The SVG output is an xml string that contains the animation.
-Render the SVG animation along with the synthesized speech to see the mouth movement.
-
-```xml
-<svg width= "1200px" height= "1200px" ..>
- <g id= "front_start" stroke= "none" stroke-width= "1" fill= "none" fill-rule= "evenodd">
- <animate attributeName= "d" begin= "d_dh_front_background_1_0.end" dur= "0.27500
- ...
-```
-
-# [3D blend shapes](#tab/3dblendshapes)
--
-Each viseme event includes a series of frames in the `Animation` SDK property. These are grouped to best align the facial positions with the audio. Your 3D engine should render each group of `BlendShapes` frames immediately before the corresponding audio chunk. The `FrameIndex` value indicates how many frames preceded the current list of frames.
-
-The output JSON looks like the following sample. Each frame within `BlendShapes` contains an array of 55 facial positions represented as decimal values between 0 and 1. The decimal values are in the same order as described in the facial positions table below.
-
-```json
-{
- "FrameIndex":0,
- "BlendShapes":[
- [0.021,0.321,...,0.258],
- [0.045,0.234,...,0.288],
- ...
- ]
-}
-```
-
-The order of `BlendShapes` is as follows.
-
-| Order | Facial position in `BlendShapes`|
-| | -- |
-| 1 | eyeBlinkLeft|
-| 2 | eyeLookDownLeft|
-| 3 | eyeLookInLeft|
-| 4 | eyeLookOutLeft|
-| 5 | eyeLookUpLeft|
-| 6 | eyeSquintLeft|
-| 7 | eyeWideLeft|
-| 8 | eyeBlinkRight|
-| 9 | eyeLookDownRight|
-| 10 | eyeLookInRight|
-| 11 | eyeLookOutRight|
-| 12 | eyeLookUpRight|
-| 13 | eyeSquintRight|
-| 14 | eyeWideRight|
-| 15 | jawForward|
-| 16 | jawLeft|
-| 17 | jawRight|
-| 18 | jawOpen|
-| 19 | mouthClose|
-| 20 | mouthFunnel|
-| 21 | mouthPucker|
-| 22 | mouthLeft|
-| 23 | mouthRight|
-| 24 | mouthSmileLeft|
-| 25 | mouthSmileRight|
-| 26 | mouthFrownLeft|
-| 27 | mouthFrownRight|
-| 28 | mouthDimpleLeft|
-| 29 | mouthDimpleRight|
-| 30 | mouthStretchLeft|
-| 31 | mouthStretchRight|
-| 32 | mouthRollLower|
-| 33 | mouthRollUpper|
-| 34 | mouthShrugLower|
-| 35 | mouthShrugUpper|
-| 36 | mouthPressLeft|
-| 37 | mouthPressRight|
-| 38 | mouthLowerDownLeft|
-| 39 | mouthLowerDownRight|
-| 40 | mouthUpperUpLeft|
-| 41 | mouthUpperUpRight|
-| 42 | browDownLeft|
-| 43 | browDownRight|
-| 44 | browInnerUp|
-| 45 | browOuterUpLeft|
-| 46 | browOuterUpRight|
-| 47 | cheekPuff|
-| 48 | cheekSquintLeft|
-| 49 | cheekSquintRight|
-| 50 | noseSneerLeft|
-| 51 | noseSneerRight|
-| 52 | tongueOut|
-| 53 | headRoll|
-| 54 | leftEyeRoll|
-| 55 | rightEyeRoll|
---
-After you obtain the viseme output, you can use these events to drive character animation. You can build your own characters and automatically animate them.
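-
-For example, here's a minimal C# sketch of reading the blend shapes payload delivered in the `Animation` property; the `animationJson` variable stands in for the string you receive in the viseme event handler.
-
-```csharp
-using System.Text.Json;
-
-// animationJson stands in for e.Animation from a viseme event (see the snippets above).
-using var doc = JsonDocument.Parse(animationJson);
-int frameIndex = doc.RootElement.GetProperty("FrameIndex").GetInt32();
-
-foreach (var frame in doc.RootElement.GetProperty("BlendShapes").EnumerateArray())
-{
-    // Each frame holds 55 values between 0 and 1, in the order listed in the table above.
-    float jawOpen = frame[17].GetSingle(); // jawOpen is entry 18 (1-based) in the table.
-}
-```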
-
-## Next steps
-
-- [SSML phonetic alphabets](speech-ssml-phonetic-sets.md)
-- [How to improve synthesis with SSML](speech-synthesis-markup.md)
cognitive-services How To Speech Synthesis https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/how-to-speech-synthesis.md
- Title: "How to synthesize speech from text - Speech service"-
-description: Learn how to convert text to speech. Learn about object construction and design patterns, supported audio output formats, and custom configuration options for speech synthesis.
------ Previously updated : 09/16/2022--
-zone_pivot_groups: programming-languages-speech-services
-keywords: text to speech
--
-# How to synthesize speech from text
-----------
-## Next steps
-
-* [Try the text to speech quickstart](get-started-text-to-speech.md)
-* [Get started with Custom Neural Voice](how-to-custom-voice.md)
-* [Improve synthesis with SSML](speech-synthesis-markup.md)
cognitive-services How To Track Speech Sdk Memory Usage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/how-to-track-speech-sdk-memory-usage.md
- Title: How to track Speech SDK memory usage - Speech service-
-description: The Speech SDK supports numerous programming languages for speech to text and text to speech conversion, along with speech translation. This article discusses memory management tooling built into the SDK.
------ Previously updated : 12/10/2019--
-zone_pivot_groups: programming-languages-set-two
---
-# How to track Speech SDK memory usage
-
-The Speech SDK is based on a native code base that's projected into multiple programming languages through a series of interoperability layers. Each language-specific projection has idiomatically correct features to manage the object lifecycle. Additionally, the Speech SDK includes memory management tooling to track resource usage with object logging and object limits.
-
-## How to read object logs
-
-If [Speech SDK logging is enabled](how-to-use-logging.md), tracking tags are emitted to enable historical object observation. These tags include:
-
-* `TrackHandle` or `StopTracking`
-* The object type
-* The current number of tracked objects of that type
-
-Here's a sample log:
-
-```terminal
-(284): 8604ms SPX_DBG_TRACE_VERBOSE: handle_table.h:90 TrackHandle type=Microsoft::Cognitive
-```
-
-## Set a warning threshold
-
-You have the option to create a warning threshold, and if that threshold is exceeded (assuming logging is enabled), a warning message is logged. The warning message contains a dump of all objects in existence along with their count. This information can be used to better understand issues.
-
-To enable a warning threshold, it must be specified on a `SpeechConfig` object. This object is checked when a new recognizer is created. In the following examples, let's assume that you've created an instance of `SpeechConfig` called `config`:
--
-```csharp
-config.SetProperty("SPEECH-ObjectCountWarnThreshold", "10000");
-```
---
-```C++
-config->SetProperty("SPEECH-ObjectCountWarnThreshold", "10000");
-```
---
-```java
-config.setProperty("SPEECH-ObjectCountWarnThreshold", "10000");
-```
---
-```Python
-speech_config.set_property_by_name("SPEECH-ObjectCountWarnThreshold", "10000")?
-```
---
-```ObjectiveC
-[config setPropertyTo:@"10000" byName:"SPEECH-ObjectCountWarnThreshold"];
-```
--
-> [!TIP]
-> The default value for this property is 10,000.
-
-## Set an error threshold
-
-Using the Speech SDK, you can set the maximum number of objects allowed at a given time. If this setting is enabled, when the maximum number is hit, attempts to create new recognizer objects will fail. Existing objects will continue to work.
-
-Here's a sample error:
-
-```terminal
-Runtime error: The maximum object count of 500 has been exceeded.
-The threshold can be adjusted by setting the SPEECH-ObjectCountErrorThreshold property on the SpeechConfig object.
-Handle table dump by object type:
-class Microsoft::Cognitive
-class Microsoft::Cognitive
-class Microsoft::Cognitive
-class Microsoft::Cognitive
-```
-
-To enable an error threshold, it must be specified on a `SpeechConfig` object. This object is checked when a new recognizer is created. In the following examples, let's assume that you've created an instance of `SpeechConfig` called `config`:
--
-```csharp
-config.SetProperty("SPEECH-ObjectCountErrorThreshold", "10000");
-```
---
-```C++
-config->SetProperty("SPEECH-ObjectCountErrorThreshold", "10000");
-```
---
-```java
-config.setProperty("SPEECH-ObjectCountErrorThreshold", "10000");
-```
---
-```Python
-speech_config.set_property_by_name("SPEECH-ObjectCountErrorThreshold", "10000")?
-```
---
-```objc
-[config setPropertyTo:@"10000" byName:"SPEECH-ObjectCountErrorThreshold"];
-```
--
-> [!TIP]
-> The default value for this property is the platform-specific maximum value for a `size_t` data type. A typical recognition will consume between 7 and 10 internal objects.
-
-## Next steps
-
-* [Learn more about the Speech SDK](speech-sdk.md)
cognitive-services How To Translate Speech https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/how-to-translate-speech.md
- Title: "How to translate speech - Speech service"-
-description: Learn how to translate speech from one language to text in another language, including object construction and supported audio input formats.
------- Previously updated : 06/08/2022-
-zone_pivot_groups: programming-languages-speech-services
--
-# How to recognize and translate speech
-----------
-## Next steps
-
-* [Try the speech to text quickstart](get-started-speech-to-text.md)
-* [Try the speech translation quickstart](get-started-speech-translation.md)
-* [Improve recognition accuracy with custom speech](custom-speech-overview.md)
cognitive-services How To Use Audio Input Streams https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/how-to-use-audio-input-streams.md
- Title: Speech SDK audio input stream concepts-
-description: An overview of the capabilities of the Speech SDK audio input stream.
------ Previously updated : 05/09/2023---
-# How to use the audio input stream
-
-The Speech SDK provides a way to stream audio into the recognizer as an alternative to microphone or file input.
-
-This guide describes how to use audio input streams. It also describes some of the requirements and limitations of the audio input stream.
-
-See more examples of speech-to-text recognition with audio input stream on [GitHub](https://github.com/Azure-Samples/cognitive-services-speech-sdk/blob/master/samples/csharp/sharedcontent/console/speech_recognition_samples.cs).
-
-## Identify the format of the audio stream
-
-Identify the format of the audio stream. The format must be supported by the Speech SDK and the Azure Cognitive Services Speech service.
-
-Supported audio samples are:
-
- - PCM format (int-16, signed)
- - One channel
- - 16 bits per sample, 8,000 or 16,000 samples per second (16,000 bytes or 32,000 bytes per second)
- - Two-byte block alignment (16 bits per sample, including padding)
-
-The corresponding code in the SDK to create the audio format looks like this example:
-
-```csharp
-byte channels = 1;
-byte bitsPerSample = 16;
-int samplesPerSecond = 16000; // or 8000
-var audioFormat = AudioStreamFormat.GetWaveFormatPCM(samplesPerSecond, bitsPerSample, channels);
-```
-
-Make sure that your code provides the RAW audio data according to these specifications. Also, make sure that 16-bit samples arrive in little-endian format. If your audio source data doesn't match the supported formats, the audio must be transcoded into the required format.
-
-## Create your own audio input stream class
-
-You can create your own audio input stream class derived from `PullAudioInputStreamCallback`. Implement the `Read()` and `Close()` members. The exact function signature is language-dependent, but the code looks similar to this code sample:
-
-```csharp
-public class ContosoAudioStream : PullAudioInputStreamCallback
-{
- public ContosoAudioStream() {}
-
- public override int Read(byte[] buffer, uint size)
- {
- // Returns audio data to the caller.
- // E.g., return read(config.YYY, buffer, size);
- return 0;
- }
-
- public override void Close()
- {
- // Close and clean up resources.
- }
-}
-```
-
-Create an audio configuration based on your audio format and custom audio input stream. For example:
-
-```csharp
-var audioConfig = AudioConfig.FromStreamInput(new ContosoAudioStream(), audioFormat);
-```
-
-Here's how the custom audio input stream is used in the context of a speech recognizer:
-
-```csharp
-using System;
-using System.IO;
-using System.Threading.Tasks;
-using Microsoft.CognitiveServices.Speech;
-using Microsoft.CognitiveServices.Speech.Audio;
-
-public class ContosoAudioStream : PullAudioInputStreamCallback
-{
- public ContosoAudioStream() {}
-
- public override int Read(byte[] buffer, uint size)
- {
- // Returns audio data to the caller.
- // E.g., return read(config.YYY, buffer, size);
- return 0;
- }
-
- public override void Close()
- {
- // Close and clean up resources.
- }
-}
-
-class Program
-{
- static string speechKey = Environment.GetEnvironmentVariable("SPEECH_KEY");
- static string speechRegion = Environment.GetEnvironmentVariable("SPEECH_REGION");
-
- async static Task Main(string[] args)
- {
- byte channels = 1;
- byte bitsPerSample = 16;
- uint samplesPerSecond = 16000; // or 8000
- var audioFormat = AudioStreamFormat.GetWaveFormatPCM(samplesPerSecond, bitsPerSample, channels);
- var audioConfig = AudioConfig.FromStreamInput(new ContosoAudioStream(), audioFormat);
-
- var speechConfig = SpeechConfig.FromSubscription(speechKey, speechRegion);
- speechConfig.SpeechRecognitionLanguage = "en-US";
- var speechRecognizer = new SpeechRecognizer(speechConfig, audioConfig);
-
- Console.WriteLine("Speak into your microphone.");
- var speechRecognitionResult = await speechRecognizer.RecognizeOnceAsync();
- Console.WriteLine($"RECOGNIZED: Text={speechRecognitionResult.Text}");
- }
-}
-```
-
-## Next steps
-
-- [Speech to text quickstart](./get-started-speech-to-text.md?pivots=programming-language-csharp)
-- [How to recognize speech](./how-to-recognize-speech.md?pivots=programming-language-csharp)
cognitive-services How To Use Codec Compressed Audio Input Streams https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/how-to-use-codec-compressed-audio-input-streams.md
- Title: How to use compressed input audio - Speech service-
-description: Learn how to use compressed input audio with the Speech SDK and CLI.
------- Previously updated : 04/25/2022-
-zone_pivot_groups: programming-languages-speech-services
--
-# How to use compressed input audio
-----------
-## Next steps
-
-* [Try the speech to text quickstart](get-started-speech-to-text.md)
-* [Improve recognition accuracy with custom speech](custom-speech-overview.md)
cognitive-services How To Use Conversation Transcription https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/how-to-use-conversation-transcription.md
- Title: Real-time Conversation Transcription quickstart - Speech service-
-description: In this quickstart, learn how to transcribe meetings and other conversations. You can add, remove, and identify multiple participants by streaming audio to the Speech service.
------ Previously updated : 11/12/2022-
-zone_pivot_groups: acs-js-csharp-python
---
-# Quickstart: Real-time Conversation Transcription
-
-You can transcribe meetings and other conversations with the ability to add, remove, and identify multiple participants by streaming audio to the Speech service. You first create voice signatures for each participant using the REST API, and then use the voice signatures with the Speech SDK to transcribe conversations. See the Conversation Transcription [overview](conversation-transcription.md) for more information.
-
-## Limitations
-
-* Only available in the following subscription regions: `centralus`, `eastasia`, `eastus`, `westeurope`
-* Requires a 7-mic circular multi-microphone array. The microphone array should meet [our specification](./speech-sdk-microphone.md).
-
-> [!NOTE]
-> The Speech SDK for C++, Java, Objective-C, and Swift supports Conversation Transcription, but we haven't yet included a guide here.
----
-## Next steps
-
-> [!div class="nextstepaction"]
-> [Asynchronous Conversation Transcription](how-to-async-conversation-transcription.md)
cognitive-services How To Use Custom Entity Pattern Matching https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/how-to-use-custom-entity-pattern-matching.md
- Title: How to recognize intents with custom entity pattern matching-
-description: In this guide, you learn how to recognize intents and custom entities from simple patterns.
------ Previously updated : 11/15/2021-
-zone_pivot_groups: programming-languages-set-thirteen
---
-# How to recognize intents with custom entity pattern matching
-
-The Cognitive Services [Speech SDK](speech-sdk.md) has a built-in feature to provide **intent recognition** with **simple language pattern matching**. An intent is something the user wants to do: close a window, mark a checkbox, insert some text, etc.
-
-In this guide, you use the Speech SDK to develop a console application that derives intents from speech utterances spoken through your device's microphone. You'll learn how to:
-
-> [!div class="checklist"]
->
-> - Create a Visual Studio project referencing the Speech SDK NuGet package
-> - Create a speech configuration and get an intent recognizer
-> - Add intents and patterns via the Speech SDK API
-> - Add custom entities via the Speech SDK API
-> - Use asynchronous, event-driven continuous recognition
-
-## When to use pattern matching
-
-Use pattern matching if:
-* You're only interested in matching strictly what the user said. These patterns match more aggressively than [conversational language understanding (CLU)](../language-service/conversational-language-understanding/overview.md).
-* You don't have access to a CLU model, but still want intents.
-
-For more information, see the [pattern matching overview](./pattern-matching-overview.md).
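-
-As a rough orientation, the simplest form of pattern matching in C# uses the `AddIntent(phrase, intentId)` overload; the pattern string and resource values below are placeholders, and the full custom-entity model described in this guide builds on the same idea.
-
-```csharp
-using System;
-using Microsoft.CognitiveServices.Speech;
-using Microsoft.CognitiveServices.Speech.Intent;
-
-// Placeholders: substitute your own resource key and region.
-var config = SpeechConfig.FromSubscription("YourSubscriptionKey", "YourServiceRegion");
-using var recognizer = new IntentRecognizer(config);
-
-// "{floorName}" marks a slot that the pattern matcher fills from the utterance.
-recognizer.AddIntent("Take me to floor {floorName}.", "ChangeFloors");
-
-var result = await recognizer.RecognizeOnceAsync();
-Console.WriteLine($"Intent ID: {result.IntentId}, text: {result.Text}");
-```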
-
-## Prerequisites
-
-Be sure you have the following items before you begin this guide:
-
-- A [Cognitive Services Azure resource](https://portal.azure.com/#create/Microsoft.CognitiveServicesSpeechServices) or a [Unified Speech resource](https://portal.azure.com/#create/Microsoft.CognitiveServicesSpeechServices)
-- [Visual Studio 2019](https://visualstudio.microsoft.com/downloads/) (any edition).
-
cognitive-services How To Use Logging https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/how-to-use-logging.md
- Title: Speech SDK logging - Speech service-
-description: Learn about how to enable logging in the Speech SDK (C++, C#, Python, Objective-C, Java).
----- Previously updated : 07/05/2019---
-# Enable logging in the Speech SDK
-
-Logging to file is an optional feature for the Speech SDK. During development, logging provides additional information and diagnostics from the Speech SDK's core components. Enable it by setting the property `Speech_LogFilename` on a speech configuration object to the location and name of the log file. Logging is handled by a static class in the Speech SDK's native library. You can turn on logging for any Speech SDK recognizer or synthesizer instance. All instances in the same process write log entries to the same log file.
-
-## Sample
-
-The log file name is specified on a configuration object. Taking the `SpeechConfig` as an example and assuming that you've created an instance called `speechConfig`:
-
-```csharp
-speechConfig.SetProperty(PropertyId.Speech_LogFilename, "LogfilePathAndName");
-```
-
-```java
-speechConfig.setProperty(PropertyId.Speech_LogFilename, "LogfilePathAndName");
-```
-
-```C++
-speechConfig->SetProperty(PropertyId::Speech_LogFilename, "LogfilePathAndName");
-```
-
-```Python
-speech_config.set_property(speechsdk.PropertyId.Speech_LogFilename, "LogfilePathAndName")
-```
-
-```objc
-[speechConfig setPropertyTo:@"LogfilePathAndName" byId:SPXSpeechLogFilename];
-```
-
-```go
-import ("github.com/Microsoft/cognitive-services-speech-sdk-go/common")
-
-speechConfig.SetProperty(common.SpeechLogFilename, "LogfilePathAndName")
-```
-
-You can create a recognizer from the configuration object. This will enable logging for all recognizers.
-
-> [!NOTE]
-> Creating a `SpeechSynthesizer` from the configuration object doesn't enable logging. If logging is already enabled, though, you'll also receive diagnostics from the `SpeechSynthesizer`.
-
-JavaScript is an exception where the logging is enabled via SDK diagnostics as shown in the following code snippet:
-
-```javascript
-sdk.Diagnostics.SetLoggingLevel(sdk.LogLevel.Debug);
-sdk.Diagnostics.SetLogOutputPath("LogfilePathAndName");
-```
-
-## Create a log file on different platforms
-
-For Windows or Linux, the log file can be in any path the user has write permission for. Write permissions to file system locations in other operating systems may be limited or restricted by default.
-
-### Universal Windows Platform (UWP)
-
-UWP applications need to place log files in one of the application data locations (local, roaming, or temporary). A log file can be created in the local application folder:
-
-```csharp
-StorageFolder storageFolder = ApplicationData.Current.LocalFolder;
-StorageFile logFile = await storageFolder.CreateFileAsync("logfile.txt", CreationCollisionOption.ReplaceExisting);
-speechConfig.SetProperty(PropertyId.Speech_LogFilename, logFile.Path);
-```
-
-Within a Unity UWP application, a log file can be created using the application persistent data path folder as follows:
-
-```csharp
-#if ENABLE_WINMD_SUPPORT
- string logFile = Application.persistentDataPath + "/logFile.txt";
- speechConfig.SetProperty(PropertyId.Speech_LogFilename, logFile);
-#endif
-```
-For more about file access permissions in UWP applications, see [File access permissions](/windows/uwp/files/file-access-permissions).
-
-### Android
-
-You can save a log file to internal storage, external storage, or the cache directory. Files created in internal storage or the cache directory are private to the application. It's preferable to create a log file in external storage.
-
-```java
-File dir = context.getExternalFilesDir(null);
-File logFile = new File(dir, "logfile.txt");
-speechConfig.setProperty(PropertyId.Speech_LogFilename, logFile.getAbsolutePath());
-```
-
-The code above will save a log file to the external storage in the root of an application-specific directory. A user can access the file with the file manager (usually in `Android/data/ApplicationName/logfile.txt`). The file will be deleted when the application is uninstalled.
-
-You also need to request `WRITE_EXTERNAL_STORAGE` permission in the manifest file:
-
-```xml
-<manifest xmlns:android="http://schemas.android.com/apk/res/android" package="...">
- ...
- <uses-permission android:name="android.permission.WRITE_EXTERNAL_STORAGE" />
- ...
-</manifest>
-```
-
-Within a Unity Android application, the log file can be created using the application persistent data path folder as follows:
-
-```csharp
-string logFile = Application.persistentDataPath + "/logFile.txt";
-speechConfig.SetProperty(PropertyId.Speech_LogFilename, logFile);
-```
-In addition, you need to set the write permission in your Unity Player settings for Android to "External (SDCard)". The log will be written
-to a directory that you can access with a tool such as the Android Studio Device File Explorer. The exact directory path can vary between Android devices,
-but it's typically the `sdcard/Android/data/your-app-packagename/files` directory.
-
-More about data and file storage for Android applications is available [here](https://developer.android.com/guide/topics/data/data-storage.html).
-
-### iOS
-
-Only directories inside the application sandbox are accessible. Files can be created in the documents, library, and temp directories. Files in the documents directory can be made available to a user.
-
-If you are using Objective-C on iOS, use the following code snippet to create a log file in the application document directory:
-
-```objc
-NSString *filePath = [
- [NSSearchPathForDirectoriesInDomains(NSDocumentDirectory, NSUserDomainMask, YES) firstObject]
- stringByAppendingPathComponent:@"logfile.txt"];
-[speechConfig setPropertyTo:filePath byId:SPXSpeechLogFilename];
-```
-
-To access a created file, add the below properties to the `Info.plist` property list of the application:
-
-```xml
-<key>UIFileSharingEnabled</key>
-<true/>
-<key>LSSupportsOpeningDocumentsInPlace</key>
-<true/>
-```
-
-If you're using Swift on iOS, please use the following code snippet to enable logs:
-```swift
-let documentsDirectoryPathString = NSSearchPathForDirectoriesInDomains(.documentDirectory, .userDomainMask, true).first!
-let documentsDirectoryPath = NSURL(string: documentsDirectoryPathString)!
-let logFilePath = documentsDirectoryPath.appendingPathComponent("swift.log")
-self.speechConfig!.setPropertyTo(logFilePath!.absoluteString, by: SPXPropertyId.speechLogFilename)
-```
-
-More about iOS File System is available [here](https://developer.apple.com/library/archive/documentation/FileManagement/Conceptual/FileSystemProgrammingGuide/FileSystemOverview/FileSystemOverview.html).
-
-## Logging with multiple recognizers
-
-Although a log file output path is specified as a configuration property into a `SpeechRecognizer` or other SDK object, SDK logging is a singleton, *process-wide* facility with no concept of individual instances. You can think of this as the `SpeechRecognizer` constructor (or similar) implicitly calling a static and internal "Configure Global Logging" routine with the property data available in the corresponding `SpeechConfig`.
-
-This means that you can't, as an example, configure six parallel recognizers to output simultaneously to six separate files. Instead, the latest recognizer created will configure the global logging instance to output to the file specified in its configuration properties and all SDK logging will be emitted to that file.
-
-This also means that the lifetime of the object that configured logging isn't tied to the duration of logging. Logging will not stop in response to the release of an SDK object and will continue as long as no new logging configuration is provided. Once started, process-wide logging may be stopped by setting the log file path to an empty string when creating a new object.
-
-To reduce potential confusion when configuring logging for multiple instances, it may be useful to abstract control of logging from objects doing real work. An example pair of helper routines:
-
-```cpp
-void EnableSpeechSdkLogging(const char* relativePath)
-{
- auto configForLogging = SpeechConfig::FromSubscription("unused_key", "unused_region");
- configForLogging->SetProperty(PropertyId::Speech_LogFilename, relativePath);
- auto emptyAudioConfig = AudioConfig::FromStreamInput(AudioInputStream::CreatePushStream());
- auto temporaryRecognizer = SpeechRecognizer::FromConfig(configForLogging, emptyAudioConfig);
-}
-
-void DisableSpeechSdkLogging()
-{
- EnableSpeechSdkLogging("");
-}
-```
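-
-If you're working in C#, a comparable pair of helpers might look like the following sketch; the "unused" key and region strings simply mirror the placeholders in the C++ sample above:
-
-```csharp
-using Microsoft.CognitiveServices.Speech;
-using Microsoft.CognitiveServices.Speech.Audio;
-
-static class SpeechSdkLogging
-{
-    public static void Enable(string logFilePath)
-    {
-        var configForLogging = SpeechConfig.FromSubscription("unused_key", "unused_region");
-        configForLogging.SetProperty(PropertyId.Speech_LogFilename, logFilePath);
-
-        // Creating any recognizer applies the process-wide logging configuration.
-        // A push stream is used so no microphone is opened.
-        var emptyAudioConfig = AudioConfig.FromStreamInput(AudioInputStream.CreatePushStream());
-        using var temporaryRecognizer = new SpeechRecognizer(configForLogging, emptyAudioConfig);
-    }
-
-    public static void Disable() => Enable("");
-}
-```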
-
-## Next steps
-
-> [!div class="nextstepaction"]
-> [Explore our samples on GitHub](https://aka.ms/csspeech/samples)
cognitive-services How To Use Simple Language Pattern Matching https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/how-to-use-simple-language-pattern-matching.md
- Title: How to recognize intents with simple language pattern matching-
-description: In this guide, you learn how to recognize intents and entities from simple patterns.
------ Previously updated : 04/19/2022-
-zone_pivot_groups: programming-languages-set-thirteen
---
-# How to recognize intents with simple language pattern matching
-
-The Cognitive Services [Speech SDK](speech-sdk.md) has a built-in feature to provide **intent recognition** with **simple language pattern matching**. An intent is something the user wants to do: close a window, mark a checkbox, insert some text, etc.
-
-In this guide, you use the Speech SDK to develop a C++ console application that derives intents from user utterances through your device's microphone. You'll learn how to:
-
-> [!div class="checklist"]
->
-> - Create a Visual Studio project referencing the Speech SDK NuGet package
-> - Create a speech configuration and get an intent recognizer
-> - Add intents and patterns via the Speech SDK API
-> - Recognize speech from a microphone
-> - Use asynchronous, event-driven continuous recognition
-
-## When to use pattern matching
-
-Use pattern matching if:
-* You're only interested in matching strictly what the user said. These patterns match more aggressively than [conversational language understanding (CLU)](../language-service/conversational-language-understanding/overview.md).
-* You don't have access to a CLU model, but still want intents.
-
-For more information, see the [pattern matching overview](./pattern-matching-overview.md).
-
-## Prerequisites
-
-Be sure you have the following items before you begin this guide:
--- A [Cognitive Services Azure resource](https://portal.azure.com/#create/Microsoft.CognitiveServicesSpeechServices) or a [Unified Speech resource](https://portal.azure.com/#create/Microsoft.CognitiveServicesSpeechServices)-- [Visual Studio 2019](https://visualstudio.microsoft.com/downloads/) (any edition).-
-## Speech and simple patterns
-
-The simple patterns are a feature of the Speech SDK and need a Cognitive Services resource or a Unified Speech resource.
-
-A pattern is a phrase that includes an Entity somewhere within it. An Entity is defined by wrapping a word in curly brackets. This example defines an Entity with the ID "floorName", which is case-sensitive:
-
-```
- Take me to the {floorName}
-```
-
-All other special characters and punctuation will be ignored.
-
-Intents will be added using calls to the `IntentRecognizer->AddIntent()` API.
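-
-As a quick orientation (the full walkthrough targets C++, but the SDK surface is analogous across languages), a minimal C# sketch might look like the following; the subscription key, region, pattern, and intent ID are illustrative placeholders:
-
-```csharp
-using System;
-using System.Threading.Tasks;
-using Microsoft.CognitiveServices.Speech;
-using Microsoft.CognitiveServices.Speech.Intent;
-
-class Program
-{
-    static async Task Main()
-    {
-        var speechConfig = SpeechConfig.FromSubscription("YourSubscriptionKey", "YourServiceRegion");
-        using var intentRecognizer = new IntentRecognizer(speechConfig);
-
-        // The pattern contains the {floorName} entity; "ChangeFloors" is the intent ID returned on a match.
-        intentRecognizer.AddIntent("Take me to the {floorName}", "ChangeFloors");
-
-        Console.WriteLine("Say something like 'Take me to the lobby'...");
-        var result = await intentRecognizer.RecognizeOnceAsync();
-
-        if (result.Reason == ResultReason.RecognizedIntent)
-        {
-            Console.WriteLine($"Intent: {result.IntentId}, Text: {result.Text}");
-        }
-    }
-}
-```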
---
cognitive-services How To Windows Voice Assistants Get Started https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/how-to-windows-voice-assistants-get-started.md
- Title: Voice Assistants on Windows - Get Started-
-description: The steps to begin developing a windows voice agent, including a reference to the sample code quickstart.
------ Previously updated : 04/15/2020----
-# Get started with voice assistants on Windows
-
-This guide takes you through the steps to begin developing a voice assistant on Windows.
-
-## Set up your development environment
-
-To start developing a voice assistant for Windows, you'll need to make sure you have the proper development environment.
--- **Visual Studio:** You'll need to install [Microsoft Visual Studio 2017](https://visualstudio.microsoft.com/), Community Edition or higher-- **Windows version**: A PC with a Windows Insider fast ring build of Windows and the Windows Insider version of the Windows SDK. This sample code is verified as working on Windows Insider Release Build 19025.vb_release_analog.191112-1600 using Windows SDK 19018. Any Build or SDK above the specified versions should be compatible.-- **UWP development tools**: The Universal Windows Platform development workload in Visual Studio. See the UWP [Get set up](/windows/uwp/get-started/get-set-up) page to get your machine ready for developing UWP Applications.-- **A working microphone and audio output**-
-## Obtain resources from Microsoft
-
-Some resources necessary for a customized voice agent on Windows must be obtained from Microsoft. The [UWP Voice Assistant Sample](windows-voice-assistants-faq.yml#the-uwp-voice-assistant-sample) provides sample versions of these resources for initial development and testing, so this section is unnecessary for initial development.
--- **Keyword model:** Voice activation requires a keyword model from Microsoft in the form of a .bin file. The .bin file provided in the UWP Voice Assistant Sample is trained on the keyword *Contoso*.-- **Limited Access Feature Token:** Since the ConversationalAgent APIs provide access to microphone audio, they're protected under Limited Access Feature restrictions. To use a Limited Access Feature, you'll need to obtain a Limited Access Feature token connected to the package identity of your application from Microsoft.-
-## Establish a dialog service
-
-For a complete voice assistant experience, the application will need a dialog service that can
--- Detect a keyword in a given audio file-- Listen to user input and convert it to text-- Provide the text to a bot-- Translate the text response of the bot to an audio output-
-Here are the requirements to create a basic dialog service using Direct Line Speech.
--- **Speech resource:** An Azure resource for Speech features such as speech to text and text to speech. Create a Speech resource on the [Azure portal](https://portal.azure.com). For more information, see [Create a new Azure Cognitive Services resource](~/articles/cognitive-services/cognitive-services-apis-create-account.md?tabs=speech#create-a-new-azure-cognitive-services-resource).-- **Bot Framework bot:** A bot created using Bot Framework version 4.2 or above that's subscribed to [Direct Line Speech](./direct-line-speech.md) to enable voice input and output. [This guide](./tutorial-voice-enable-your-bot-speech-sdk.md) contains step-by-step instructions to make an "echo bot" and subscribe it to Direct Line Speech. You can also go [here](https://blog.botframework.com/2018/05/07/build-a-microsoft-bot-framework-bot-with-the-bot-builder-sdk-v4/) for steps on how to create a customized bot, then follow the same steps [here](./tutorial-voice-enable-your-bot-speech-sdk.md) to subscribe it to Direct Line Speech, but with your new bot rather than the "echo bot".-
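-
-Once both pieces are in place, a minimal C# sketch of the client-side connection using the Speech SDK's `DialogServiceConnector` could look like the following; the key, region, and console output are illustrative, and a full Windows voice assistant layers the ConversationalAgent activation flow on top of a connection like this:
-
-```csharp
-using System;
-using System.Threading.Tasks;
-using Microsoft.CognitiveServices.Speech;
-using Microsoft.CognitiveServices.Speech.Audio;
-using Microsoft.CognitiveServices.Speech.Dialog;
-
-class Program
-{
-    static async Task Main()
-    {
-        // A bot subscribed to Direct Line Speech is addressed through the Speech resource key and region.
-        var botConfig = BotFrameworkConfig.FromSubscription("YourSpeechResourceKey", "YourServiceRegion");
-        using var audioConfig = AudioConfig.FromDefaultMicrophoneInput();
-        using var connector = new DialogServiceConnector(botConfig, audioConfig);
-
-        // Bot replies (including any synthesized audio) arrive as activities.
-        connector.ActivityReceived += (s, e) =>
-            Console.WriteLine($"Activity received (has audio: {e.HasAudio}): {e.Activity}");
-
-        await connector.ConnectAsync();
-
-        // Listen for a single utterance, send it to the bot, and wait briefly for the reply activity.
-        var result = await connector.ListenOnceAsync();
-        Console.WriteLine($"You said: {result.Text}");
-        await Task.Delay(TimeSpan.FromSeconds(5));
-    }
-}
-```
-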
-## Try out the sample app
-
-With your Speech resource key and echo bot's bot ID, you're ready to try out the [UWP Voice Assistant sample](windows-voice-assistants-faq.yml#the-uwp-voice-assistant-sample). Follow the instructions in the readme to run the app and enter your credentials.
-
-## Create your own voice assistant for Windows
-
-Once you've received your Limited Access Feature token and bin file from Microsoft, you can begin building your own voice assistant on Windows.
-
-## Next steps
-
-> [!div class="nextstepaction"]
-> [Read the voice assistant implementation guide](windows-voice-assistants-implementation-guide.md)
cognitive-services Ingestion Client https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/ingestion-client.md
- Title: Ingestion Client - Speech service-
-description: In this article, we describe a tool released on GitHub that enables customers to push audio files to the Speech service easily and quickly
------ Previously updated : 08/29/2022---
-# Ingestion Client with Azure Cognitive Services
-
-The Ingestion Client is a tool released by Microsoft on GitHub that helps you quickly deploy a call center transcription solution to Azure with a no-code approach.
-
-> [!TIP]
-> You can use the tool and resulting solution in production to process a high volume of audio.
-
-Ingestion Client uses the [Azure Cognitive Service for Language](../language-service/index.yml), [Azure Cognitive Service for Speech](./index.yml), [Azure storage](https://azure.microsoft.com/product-categories/storage/), and [Azure Functions](https://azure.microsoft.com/services/functions/).
-
-## Get started with the Ingestion Client
-
-An Azure Account and an Azure Cognitive Services resource are needed to run the Ingestion Client.
-* Azure subscription - [Create one for free](https://azure.microsoft.com/free/cognitive-services)
-* <a href="https://portal.azure.com/#create/Microsoft.CognitiveServicesAllInOne" title="Create a Cognitive Services resource" target="_blank">Create a Cognitive Services resource</a> in the Azure portal.
-* Get the resource key and region. After your Cognitive Services resource is deployed, select **Go to resource** to view and manage keys. For more information about Cognitive Services resources, see [Get the keys for your resource](~/articles/cognitive-services/cognitive-services-apis-create-account.md#get-the-keys-for-your-resource).
-
-See the [Getting Started Guide for the Ingestion Client](https://github.com/Azure-Samples/cognitive-services-speech-sdk/blob/master/samples/ingestion/ingestion-client/Setup/guide.md) on GitHub to learn how to set up and use the tool.
-
-## Ingestion Client Features
-
-The Ingestion Client works by connecting a dedicated [Azure storage](https://azure.microsoft.com/product-categories/storage/) account to custom [Azure Functions](https://azure.microsoft.com/services/functions/) in a serverless fashion to pass transcription requests to the service. The transcribed audio files land in the dedicated [Azure Storage container](https://azure.microsoft.com/product-categories/storage/).
-
-> [!IMPORTANT]
-> Pricing varies depending on the mode of operation (batch vs. real time) as well as the Azure Functions SKU selected. By default, the tool creates a Premium Azure Functions SKU to handle large volumes. Visit the [Pricing](https://azure.microsoft.com/pricing/details/functions/) page for more information.
-
-Internally, the tool uses Speech and Language services, and follows best practices to handle scale-up, retries and failover. The following schematic describes the resources and connections.
--
-The following Speech service feature is used by the Ingestion Client:
--- [Batch speech to text](./batch-transcription.md): Transcribes large volumes of audio files asynchronously, including speaker diarization, and is typically used in post-call analytics scenarios. Diarization is the process of recognizing and separating speakers in mono channel audio data.-
-Here are some Language service features that are used by the Ingestion Client:
--- [Personally Identifiable Information (PII) extraction and redaction](../language-service/personally-identifiable-information/how-to-call-for-conversations.md): Identify, categorize, and redact sensitive information in conversation transcription.-- [Sentiment analysis and opinion mining](../language-service/sentiment-opinion-mining/overview.md): Analyze transcriptions and associate positive, neutral, or negative sentiment at the utterance and conversation-level.-
-Besides Cognitive Services, these Azure products are used to complete the solution:
--- [Azure storage](https://azure.microsoft.com/product-categories/storage/): For storing telephony data and the transcripts that are returned by the Batch Transcription API. This storage account should use notifications, specifically for when new files are added. These notifications are used to trigger the transcription process.-- [Azure Functions](https://azure.microsoft.com/services/functions/): For creating the shared access signature (SAS) URI for each recording, and triggering the HTTP POST request to start a transcription. Additionally, you use Azure Functions to create requests to retrieve and delete transcriptions by using the Batch Transcription API.-
-## Tool customization
-
-The tool is built to show customers results quickly. You can customize the tool to your preferred SKUs and setup. The SKUs can be edited from the [Azure portal](https://portal.azure.com) and [the code itself is available on GitHub](https://github.com/Azure-Samples/cognitive-services-speech-sdk/tree/master/samples/batch).
-
-> [!NOTE]
-> We suggest creating the resources in the same dedicated resource group to understand and track costs more easily.
-
-## Next steps
-
-* [Learn more about Cognitive Services features for call center](./call-center-overview.md)
-* [Explore the Language service features](../language-service/overview.md#available-features)
-* [Explore the Speech service features](./overview.md)
cognitive-services Intent Recognition https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/intent-recognition.md
- Title: Intent recognition overview - Speech service-
-description: Intent recognition allows you to recognize user objectives you have pre-defined. This article is an overview of the benefits and capabilities of the intent recognition service.
----- Previously updated : 02/22/2023
-keywords: intent recognition
--
-# What is intent recognition?
-
-In this overview, you will learn about the benefits and capabilities of intent recognition. The Cognitive Services Speech SDK provides two ways to recognize intents, both described below. An intent is something the user wants to do: book a flight, check the weather, or make a call. Using intent recognition, your applications, tools, and devices can determine what the user wants to initiate or do based on options you define in the Intent Recognizer or Conversational Language Understanding (CLU) model.
-
-## Pattern matching
-
-The Speech SDK provides an embedded pattern matcher that you can use to recognize intents in a very strict way. This is useful when you need a quick, offline solution. It works especially well when users are trained in some way, or can be expected to use specific phrases to trigger intents, for example: "Go to floor seven" or "Turn on the lamp". We recommend starting here; if pattern matching no longer meets your needs, switch to using [CLU](#conversational-language-understanding) or a combination of the two.
-
-Use pattern matching if:
-* You're only interested in matching strictly what the user said. These patterns match more aggressively than [conversational language understanding (CLU)](../language-service/conversational-language-understanding/overview.md).
-* You don't have access to a CLU model, but still want intents.
-
-For more information, see the [pattern matching concepts](./pattern-matching-overview.md) and then:
-* Start with [simple pattern matching](how-to-use-simple-language-pattern-matching.md).
-* Improve your pattern matching by using [custom entities](how-to-use-custom-entity-pattern-matching.md).
-
-## Conversational Language Understanding
-
-Conversational language understanding (CLU) enables users to build custom natural language understanding models to predict the overall intention of an incoming utterance and extract important information from it.
-
-Both a Speech resource and Language resource are required to use CLU with the Speech SDK. The Speech resource is used to transcribe the user's speech into text, and the Language resource is used to recognize the intent of the utterance. To get started, see the [quickstart](get-started-intent-recognition-clu.md).
-
-> [!IMPORTANT]
-> When you use conversational language understanding with the Speech SDK, you are charged both for the Speech to text recognition request and the Language service request for CLU. For more information about pricing for conversational language understanding, see [Language service pricing](https://azure.microsoft.com/pricing/details/cognitive-services/language-service/).
-
-For information about how to use conversational language understanding without the Speech SDK and without speech recognition, see the [Language service documentation](../language-service/conversational-language-understanding/overview.md).
-
-> [!IMPORTANT]
-> LUIS will be retired on October 1st 2025 and starting April 1st 2023 you will not be able to create new LUIS resources. We recommend [migrating your LUIS applications](../language-service/conversational-language-understanding/how-to/migrate-from-luis.md) to [conversational language understanding](../language-service/conversational-language-understanding/overview.md) to benefit from continued product support and multilingual capabilities.
->
-> Conversational Language Understanding (CLU) is available for C# and C++ with the [Speech SDK](speech-sdk.md) version 1.25 or later. See the [quickstart](get-started-intent-recognition-clu.md) to recognize intents with the Speech SDK and CLU.
-
-## Next steps
-
-* [Intent recognition with simple pattern matching](how-to-use-simple-language-pattern-matching.md)
-* [Intent recognition with CLU quickstart](get-started-intent-recognition-clu.md)
cognitive-services Keyword Recognition Guidelines https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/keyword-recognition-guidelines.md
- Title: Keyword recognition recommendations and guidelines - Speech service-
-description: An overview of recommendations and guidelines when using keyword recognition.
------ Previously updated : 04/30/2021---
-# Recommendations and guidelines for keyword recognition
-
-This article outlines how to choose your keyword to optimize its accuracy characteristics, and how to design your user experiences with Keyword Verification.
-
-## Choosing an effective keyword
-
-Creating an effective keyword is vital to ensuring your product will consistently and accurately respond. Consider the following guidelines when you choose a keyword.
-
-> [!NOTE]
-> The examples below are in English but the guidelines apply to all languages supported by Custom Keyword. For a list of all supported languages, see [Language support](language-support.md?tabs=custom-keyword).
--- It should take no longer than two seconds to say.-- Words of 4 to 7 syllables work best. For example, "Hey, Computer" is a good keyword. Just "Hey" is a poor one.-- Keywords should follow common pronunciation rules specific to the native language of your end-users.-- A unique or even a made-up word that follows common pronunciation rules might reduce false positives. For example, "computerama" might be a good keyword.-- Do not choose a common word. For example, "eat" and "go" are words that people say frequently in ordinary conversation. They might lead to higher than desired false accept rates for your product.-- Avoid using a keyword that might have alternative pronunciations. Users would have to know the "right" pronunciation to get their product to voice activate. For example, "509" can be pronounced "five zero nine," "five oh nine," or "five hundred and nine." "R.E.I." can be pronounced "r-e-i" or "ray." "Live" can be pronounced "/līv/" or "/liv/".-- Do not use special characters, symbols, or digits. For example, "Go#" and "20 + cats" could be problematic keywords. However, "go sharp" or "twenty plus cats" might work. You can still use the symbols in your branding and use marketing and documentation to reinforce the proper pronunciation.--
-## User experience recommendations with Keyword Verification
-
-With a multi-stage keyword recognition scenario where [Keyword Verification](keyword-recognition-overview.md#keyword-verification) is used, applications can choose when the end-user is notified of a keyword detection. The recommendation for rendering any visual or audible indicator is to rely on responses from the Keyword Verification service:
-
-![User experience guideline when optimizing for accuracy.](media/custom-keyword/kw-verification-ux-accuracy.png)
-
-This approach ensures the optimal experience in terms of accuracy, minimizing the user-perceived impact of false accepts, but it incurs additional latency.
-
-For applications that require latency optimization, applications can provide light and unobtrusive indicators to the end-user based on the on-device keyword recognition. For example, lighting an LED pattern or pulsing an icon. The indicators can continue to exist if Keyword Verification responds with a keyword accept, or can be dismissed if the response is a keyword reject:
-
-![User experience guideline when optimizing for latency.](media/custom-keyword/kw-verification-ux-latency.png)
-
-## Next steps
-
-* [Get the Speech SDK.](speech-sdk.md)
-* [Learn more about Voice Assistants.](voice-assistants.md)
cognitive-services Keyword Recognition Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/keyword-recognition-overview.md
- Title: Keyword recognition overview - Speech service-
-description: An overview of the features, capabilities, and restrictions for keyword recognition by using the Speech Software Development Kit (SDK).
------ Previously updated : 04/30/2021----
-# What is keyword recognition?
-
-Keyword recognition detects a word or short phrase within a stream of audio. It's also referred to as keyword spotting.
-
-The most common use case of keyword recognition is voice activation of virtual assistants. For example, "Hey Cortana" is the keyword for the Cortana assistant. Upon recognition of the keyword, a scenario-specific action is carried out. For virtual assistant scenarios, a common resulting action is speech recognition of audio that follows the keyword.
-
-Generally, virtual assistants are always listening. Keyword recognition acts as a privacy boundary for the user. A keyword requirement acts as a gate that prevents unrelated user audio from crossing the local device to the cloud.
-
-To balance accuracy, latency, and computational complexity, keyword recognition is implemented as a multistage system. For all stages beyond the first, audio is processed only if the prior stage believes it has recognized the keyword of interest.
-
-The current system is designed with multiple stages that span the edge and cloud:
-
-![Diagram that shows multiple stages of keyword recognition across the edge and cloud.](media/custom-keyword/kw-recognition-multi-stage.png)
-
-Accuracy of keyword recognition is measured via the following metrics:
-
-* **Correct accept rate**: Measures the system's ability to recognize the keyword when it's spoken by a user. The correct accept rate is also known as the true positive rate.
-* **False accept rate**: Measures the system's ability to filter out audio that isn't the keyword spoken by a user. The false accept rate is also known as the false positive rate.
-
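-In standard detection terms (a framing assumed here for illustration, with TP, FN, FP, and TN counted over keyword and non-keyword audio samples), the two metrics correspond to:
-
-```latex
-\text{correct accept rate} = \frac{TP}{TP + FN}, \qquad
-\text{false accept rate} = \frac{FP}{FP + TN}
-```
-
-In practice, keyword spotting systems often report false accepts per hour of audio instead, because discrete negative samples aren't well defined for a continuous audio stream.
-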
-The goal is to maximize the correct accept rate while minimizing the false accept rate. The current system is designed to detect a keyword or phrase preceded by a short amount of silence. Detecting a keyword in the middle of a sentence or utterance isn't supported.
-
-## Custom keyword for on-device models
-
-With the [Custom Keyword portal on Speech Studio](https://speech.microsoft.com/customkeyword), you can generate keyword recognition models that execute at the edge by specifying any word or short phrase. You can further personalize your keyword model by choosing the right pronunciations.
-
-### Pricing
-
-There's no cost to use custom keyword to generate models, including both Basic and Advanced models. There's also no cost to run models on-device with the Speech SDK when used in conjunction with other Speech service features such as speech to text.
-
-### Types of models
-
-You can use custom keyword to generate two types of on-device models for any keyword.
-
-| Model type | Description |
-| - | -- |
-| Basic | Best suited for demo or rapid prototyping purposes. Models are generated with a common base model and can take up to 15 minutes to be ready. Models might not have optimal accuracy characteristics. |
-| Advanced | Best suited for product integration purposes. Models are generated with adaptation of a common base model by using simulated training data to improve accuracy characteristics. It can take up to 48 hours for models to be ready. |
-
-> [!NOTE]
-> You can view a list of regions that support the **Advanced** model type in the [keyword recognition region support](regions.md#speech-service) documentation.
-
-Neither model type requires you to upload training data. Custom keyword fully handles data generation and model training.
-
-### Pronunciations
-
-When you create a new model, custom keyword automatically generates possible pronunciations of the provided keyword. You can listen to each pronunciation and choose all variations that closely represent the way you expect users to say the keyword. All other pronunciations shouldn't be selected.
-
-It's important to be deliberate about the pronunciations you select to ensure the best accuracy characteristics. For example, if you choose more pronunciations than you need, you might get higher false accept rates. If you choose too few pronunciations, where not all expected variations are covered, you might get lower correct accept rates.
-
-### Test models
-
-After on-device models are generated by custom keyword, they can be tested directly on the portal. You can use the portal to speak directly into your browser and get keyword recognition results.
-
-## Keyword verification
-
-Keyword verification is a cloud service that reduces the impact of false accepts from on-device models with robust models running on Azure. Tuning or training isn't required for keyword verification to work with your keyword. Incremental model updates are continually deployed to the service to improve accuracy and latency and are transparent to client applications.
-
-### Pricing
-
-Keyword verification is always used in combination with speech to text. There's no cost to use keyword verification beyond the cost of speech to text.
-
-### Keyword verification and speech to text
-
-When keyword verification is used, it's always in combination with speech to text. Both services run in parallel, which means audio is sent to both services for simultaneous processing.
-
-![Diagram that shows parallel processing of keyword verification and speech to text.](media/custom-keyword/kw-verification-parallel-processing.png)
-
-Running keyword verification and speech to text in parallel yields the following benefits:
-
-* **No added latency on speech to text results**: Parallel execution means that keyword verification adds no latency. The client receives speech to text results just as quickly as it would without keyword verification. If keyword verification determines the keyword wasn't present in the audio, speech to text processing is terminated. This action protects against unnecessary speech to text processing. Network and cloud model processing increases the user-perceived latency of voice activation. For more information, see [Recommendations and guidelines](keyword-recognition-guidelines.md).
-* **Forced keyword prefix in speech to text results**: Speech to text processing ensures that the results sent to the client are prefixed with the keyword. This behavior allows for increased accuracy in the speech to text results for speech that follows the keyword.
-* **Increased speech to text timeout**: Because of the expected presence of the keyword at the beginning of audio, speech to text allows for a longer pause of up to five seconds after the keyword before it determines the end of speech and terminates speech to text processing. This behavior ensures that the user experience is correctly handled for staged commands (*\<keyword> \<pause> \<command>*) and chained commands (*\<keyword> \<command>*).
-
-### Keyword verification responses and latency considerations
-
-For each request to the service, keyword verification returns one of two responses: accepted or rejected. The processing latency varies depending on the length of the keyword and the length of the audio segment expected to contain the keyword. Processing latency doesn't include network cost between the client and Speech services.
-
-| Keyword verification response | Description |
-| -- | -- |
-| Accepted | Indicates the service believed that the keyword was present in the audio stream provided as part of the request. |
-| Rejected | Indicates the service believed that the keyword wasn't present in the audio stream provided as part of the request. |
-
-Rejected cases often yield higher latencies as the service processes more audio than accepted cases. By default, keyword verification processes a maximum of two seconds of audio to search for the keyword. If the keyword is determined not to be present in two seconds, the service times out and signals a rejected response to the client.
-
-### Use keyword verification with on-device models from custom keyword
-
-The Speech SDK enables seamless use of on-device models generated by using custom keyword with keyword verification and speech to text. It transparently handles:
-
-* Audio gating to keyword verification and speech recognition based on the outcome of an on-device model.
-* Communicating the keyword to keyword verification.
-* Communicating any more metadata to the cloud for orchestrating the end-to-end scenario.
-
-You don't need to explicitly specify any configuration parameters. All necessary information is automatically extracted from the on-device model generated by custom keyword.
-
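-As a brief orientation, a C# sketch of keyword-triggered recognition with a model file generated by custom keyword might look like the following; the file name, key, and region are placeholders, and the samples and tutorials linked below remain the authoritative references:
-
-```csharp
-using System;
-using System.Threading.Tasks;
-using Microsoft.CognitiveServices.Speech;
-using Microsoft.CognitiveServices.Speech.Audio;
-
-class Program
-{
-    static async Task Main()
-    {
-        var speechConfig = SpeechConfig.FromSubscription("YourSpeechResourceKey", "YourServiceRegion");
-        using var audioConfig = AudioConfig.FromDefaultMicrophoneInput();
-        using var recognizer = new SpeechRecognizer(speechConfig, audioConfig);
-
-        // The .table file is generated by custom keyword in Speech Studio.
-        var keywordModel = KeywordRecognitionModel.FromFile("YourKeyword.table");
-
-        recognizer.Recognized += (s, e) =>
-        {
-            if (e.Result.Reason == ResultReason.RecognizedKeyword)
-            {
-                Console.WriteLine($"Keyword recognized: {e.Result.Text}");
-            }
-            else if (e.Result.Reason == ResultReason.RecognizedSpeech)
-            {
-                Console.WriteLine($"Speech after keyword: {e.Result.Text}");
-            }
-        };
-
-        // Audio is gated by the on-device model; keyword verification and speech to text run in the cloud.
-        await recognizer.StartKeywordRecognitionAsync(keywordModel);
-        Console.WriteLine("Say the keyword followed by a command. Press Enter to stop.");
-        Console.ReadLine();
-        await recognizer.StopKeywordRecognitionAsync();
-    }
-}
-```
-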
-The sample and tutorials linked here show how to use the Speech SDK:
-
- * [Voice assistant samples on GitHub](https://github.com/Azure-Samples/Cognitive-Services-Voice-Assistant)
- * [Tutorial: Voice enable your assistant built using Azure Bot Service with the C# Speech SDK](./tutorial-voice-enable-your-bot-speech-sdk.md)
- * [Tutorial: Create a custom commands application with simple voice commands](./how-to-develop-custom-commands-application.md)
-
-## Speech SDK integration and scenarios
-
-The Speech SDK enables easy use of personalized on-device keyword recognition models generated with custom keyword and keyword verification. To ensure that your product needs can be met, the SDK supports the following two scenarios:
-
-| Scenario | Description | Samples |
-| -- | -- | - |
-| End-to-end keyword recognition with speech to text | Best suited for products that will use a customized on-device keyword model from custom keyword with keyword verification and speech to text. This scenario is the most common. | <ul><li>[Voice assistant sample code](https://github.com/Azure-Samples/Cognitive-Services-Voice-Assistant)</li><li>[Tutorial: Voice enable your assistant built using Azure Bot Service with the C# Speech SDK](./tutorial-voice-enable-your-bot-speech-sdk.md)</li><li>[Tutorial: Create a custom commands application with simple voice commands](./how-to-develop-custom-commands-application.md)</li></ul> |
-| Offline keyword recognition | Best suited for products without network connectivity that will use a customized on-device keyword model from custom keyword. | <ul><li>[C# on Windows UWP sample](https://github.com/Azure-Samples/cognitive-services-speech-sdk/tree/master/quickstart/csharp/uwp/keyword-recognizer)</li><li>[Java on Android sample](https://github.com/Azure-Samples/cognitive-services-speech-sdk/tree/master/quickstart/java/android/keyword-recognizer)</li></ul>
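-
-For the offline scenario, a minimal C# sketch using the standalone `KeywordRecognizer` class (which runs entirely on-device) could look like this; the model file name is a placeholder:
-
-```csharp
-using System;
-using System.Threading.Tasks;
-using Microsoft.CognitiveServices.Speech;
-using Microsoft.CognitiveServices.Speech.Audio;
-
-class Program
-{
-    static async Task Main()
-    {
-        // No Speech resource key is needed; the standalone keyword recognizer runs on-device.
-        using var audioConfig = AudioConfig.FromDefaultMicrophoneInput();
-        using var keywordRecognizer = new KeywordRecognizer(audioConfig);
-
-        var keywordModel = KeywordRecognitionModel.FromFile("YourKeyword.table");
-
-        Console.WriteLine("Say the keyword...");
-        KeywordRecognitionResult result = await keywordRecognizer.RecognizeOnceAsync(keywordModel);
-
-        if (result.Reason == ResultReason.RecognizedKeyword)
-        {
-            Console.WriteLine($"Recognized keyword: {result.Text}");
-        }
-    }
-}
-```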
-
-## Next steps
-
-* [Read the quickstart to generate on-device keyword recognition models using custom keyword](custom-keyword-basics.md)
-* [Learn more about voice assistants](voice-assistants.md)
cognitive-services Language Identification https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/language-identification.md
- Title: Language identification - Speech service-
-description: Language identification is used to determine the language being spoken in audio when compared against a list of provided languages.
------- Previously updated : 04/19/2023-
-zone_pivot_groups: programming-languages-speech-services-nomore-variant
--
-# Language identification
-
-Language identification is used to identify languages spoken in audio when compared against a list of [supported languages](language-support.md?tabs=language-identification).
-
-Language identification (LID) use cases include:
-
-* [Speech to text recognition](#speech-to-text) when you need to identify the language in an audio source and then transcribe it to text.
-* [Speech translation](#speech-translation) when you need to identify the language in an audio source and then translate it to another language.
-
-For speech recognition, the initial latency is higher with language identification. You should only include this optional feature as needed.
-
-## Configuration options
-
-> [!IMPORTANT]
-> Language Identification APIs are simplified with the Speech SDK version 1.25 and later. The
-`SpeechServiceConnection_SingleLanguageIdPriority` and `SpeechServiceConnection_ContinuousLanguageIdPriority` properties have
-been removed and replaced by a single property `SpeechServiceConnection_LanguageIdMode`. Prioritizing between low latency and high accuracy is no longer necessary following recent model improvements. Now, you only need to select whether to run at-start or continuous Language Identification when doing continuous speech recognition or translation.
-
-Whether you use language identification with [speech to text](#speech-to-text) or with [speech translation](#speech-translation), there are some common concepts and configuration options.
--- Define a list of [candidate languages](#candidate-languages) that you expect in the audio.-- Decide whether to use [at-start or continuous](#at-start-and-continuous-language-identification) language identification.-
-Then you make a [recognize once or continuous recognition](#recognize-once-or-continuous) request to the Speech service.
-
-Code snippets are included with the concepts described next. Complete samples for each use case are provided later.
-
-### Candidate languages
-
-You provide candidate languages with the `AutoDetectSourceLanguageConfig` object, at least one of which is expected to be in the audio. You can include up to four languages for [at-start LID](#at-start-and-continuous-language-identification) or up to 10 languages for [continuous LID](#at-start-and-continuous-language-identification). The Speech service returns one of the candidate languages provided even if those languages weren't in the audio. For example, if `fr-FR` (French) and `en-US` (English) are provided as candidates, but German is spoken, either `fr-FR` or `en-US` would be returned.
-
-You must provide the full locale with dash (`-`) separator, but language identification only uses one locale per base language. Don't include multiple locales (for example, "en-US" and "en-GB") for the same language.
--
-```csharp
-var autoDetectSourceLanguageConfig =
- AutoDetectSourceLanguageConfig.FromLanguages(new string[] { "en-US", "de-DE", "zh-CN" });
-```
--
-```cpp
-auto autoDetectSourceLanguageConfig =
- AutoDetectSourceLanguageConfig::FromLanguages({ "en-US", "de-DE", "zh-CN" });
-```
--
-```python
-auto_detect_source_language_config = \
- speechsdk.languageconfig.AutoDetectSourceLanguageConfig(languages=["en-US", "de-DE", "zh-CN"])
-```
--
-```java
-AutoDetectSourceLanguageConfig autoDetectSourceLanguageConfig =
- AutoDetectSourceLanguageConfig.fromLanguages(Arrays.asList("en-US", "de-DE", "zh-CN"));
-```
--
-```javascript
-var autoDetectSourceLanguageConfig = SpeechSDK.AutoDetectSourceLanguageConfig.fromLanguages(["en-US", "de-DE", "zh-CN"]);
-```
--
-```objective-c
-NSArray *languages = @[@"en-US", @"de-DE", @"zh-CN"];
-SPXAutoDetectSourceLanguageConfiguration* autoDetectSourceLanguageConfig = \
- [[SPXAutoDetectSourceLanguageConfiguration alloc]init:languages];
-```
--
-For more information, see [supported languages](language-support.md?tabs=language-identification).
-
-### At-start and continuous language identification
-
-Speech supports both at-start and continuous language identification (LID).
-
-> [!NOTE]
-> Continuous language identification is only supported with Speech SDKs in C#, C++, Java ([for speech to text only](#speech-to-text)), JavaScript ([for speech to text only](#speech-to-text)), and Python.
-- At-start LID identifies the language once within the first few seconds of audio. Use at-start LID if the language in the audio won't change. With at-start LID, a single language is detected and returned in less than 5 seconds.-- Continuous LID can identify multiple languages for the duration of the audio. Use continuous LID if the language in the audio could change. Continuous LID doesn't support changing languages within the same sentence. For example, if you're primarily speaking Spanish and insert some English words, it will not detect the language change per word. -
-You implement at-start LID or continuous LID by calling methods for [recognize once or continuous](#recognize-once-or-continuous). Continuous LID is only supported with continuous recognition.
-
-### Recognize once or continuous
-
-Language identification is completed with recognition objects and operations. You'll make a request to the Speech service for recognition of audio.
-
-> [!NOTE]
-> Don't confuse recognition with identification. Recognition can be used with or without language identification.
-
-You'll either call the "recognize once" method, or the start and stop continuous recognition methods. You choose from:
--- Recognize once with At-start LID. Continuous LID isn't supported for recognize once.-- Continuous recognition with at-start LID-- Continuous recognition with continuous LID-
-The `SpeechServiceConnection_LanguageIdMode` property is only required for continuous LID. Without it, the Speech service defaults to at-start LID. The supported values are "AtStart" for at-start LID or "Continuous" for continuous LID.
--
-```csharp
-// Recognize once with At-start LID. Continuous LID isn't supported for recognize once.
-var result = await recognizer.RecognizeOnceAsync();
-
-// Start and stop continuous recognition with At-start LID
-await recognizer.StartContinuousRecognitionAsync();
-await recognizer.StopContinuousRecognitionAsync();
-
-// Start and stop continuous recognition with Continuous LID
-speechConfig.SetProperty(PropertyId.SpeechServiceConnection_LanguageIdMode, "Continuous");
-await recognizer.StartContinuousRecognitionAsync();
-await recognizer.StopContinuousRecognitionAsync();
-```
--
-```cpp
-// Recognize once with At-start LID. Continuous LID isn't supported for recognize once.
-auto result = recognizer->RecognizeOnceAsync().get();
-
-// Start and stop continuous recognition with At-start LID
-recognizer->StartContinuousRecognitionAsync().get();
-recognizer->StopContinuousRecognitionAsync().get();
-
-// Start and stop continuous recognition with Continuous LID
-speechConfig->SetProperty(PropertyId::SpeechServiceConnection_LanguageIdMode, "Continuous");
-recognizer->StartContinuousRecognitionAsync().get();
-recognizer->StopContinuousRecognitionAsync().get();
-```
--
-```java
-// Recognize once with At-start LID. Continuous LID isn't supported for recognize once.
-SpeechRecognitionResult result = recognizer.recognizeOnceAsync().get();
-
-// Start and stop continuous recognition with At-start LID
-recognizer.startContinuousRecognitionAsync().get();
-recognizer.stopContinuousRecognitionAsync().get();
-
-// Start and stop continuous recognition with Continuous LID
-speechConfig.setProperty(PropertyId.SpeechServiceConnection_LanguageIdMode, "Continuous");
-recognizer.startContinuousRecognitionAsync().get();
-recognizer.stopContinuousRecognitionAsync().get();
-```
--
-```python
-# Recognize once with At-start LID. Continuous LID isn't supported for recognize once.
-result = recognizer.recognize_once()
-
-# Start and stop continuous recognition with At-start LID
-recognizer.start_continuous_recognition()
-recognizer.stop_continuous_recognition()
-
-# Start and stop continuous recognition with Continuous LID
-speech_config.set_property(property_id=speechsdk.PropertyId.SpeechServiceConnection_LanguageIdMode, value='Continuous')
-recognizer.start_continuous_recognition()
-recognizer.stop_continuous_recognition()
-```
--
-## Speech to text
-
-You use Speech to text recognition when you need to identify the language in an audio source and then transcribe it to text. For more information, see [Speech to text overview](speech-to-text.md).
-
-> [!NOTE]
-> Speech to text recognition with at-start language identification is supported with Speech SDKs in C#, C++, Python, Java, JavaScript, and Objective-C. Speech to text recognition with continuous language identification is only supported with Speech SDKs in C#, C++, Java, JavaScript, and Python.
->
-> Currently for speech to text recognition with continuous language identification, you must create a SpeechConfig from the `wss://{region}.stt.speech.microsoft.com/speech/universal/v2` endpoint string, as shown in code examples. In a future SDK release you won't need to set it.
--
-See more examples of speech to text recognition with language identification on [GitHub](https://github.com/Azure-Samples/cognitive-services-speech-sdk/blob/master/samples/csharp/sharedcontent/console/speech_recognition_with_language_id_samples.cs).
-
-### [Recognize once](#tab/once)
-
-```csharp
-using Microsoft.CognitiveServices.Speech;
-using Microsoft.CognitiveServices.Speech.Audio;
-
-var speechConfig = SpeechConfig.FromSubscription("YourSubscriptionKey","YourServiceRegion");
-
-var autoDetectSourceLanguageConfig =
- AutoDetectSourceLanguageConfig.FromLanguages(
- new string[] { "en-US", "de-DE", "zh-CN" });
-
-using var audioConfig = AudioConfig.FromDefaultMicrophoneInput();
-using (var recognizer = new SpeechRecognizer(
- speechConfig,
- autoDetectSourceLanguageConfig,
- audioConfig))
-{
- var speechRecognitionResult = await recognizer.RecognizeOnceAsync();
- var autoDetectSourceLanguageResult =
- AutoDetectSourceLanguageResult.FromResult(speechRecognitionResult);
- var detectedLanguage = autoDetectSourceLanguageResult.Language;
-}
-```
-
-### [Continuous recognition](#tab/continuous)
-
-```csharp
-using Microsoft.CognitiveServices.Speech;
-using Microsoft.CognitiveServices.Speech.Audio;
-
-var region = "YourServiceRegion";
-// Currently the v2 endpoint is required. In a future SDK release you won't need to set it.
-var endpointString = $"wss://{region}.stt.speech.microsoft.com/speech/universal/v2";
-var endpointUrl = new Uri(endpointString);
-
-var config = SpeechConfig.FromEndpoint(endpointUrl, "YourSubscriptionKey");
-
-// Set the LanguageIdMode (Optional; Either Continuous or AtStart are accepted; Default AtStart)
-config.SetProperty(PropertyId.SpeechServiceConnection_LanguageIdMode, "Continuous");
-
-var autoDetectSourceLanguageConfig = AutoDetectSourceLanguageConfig.FromLanguages(new string[] { "en-US", "de-DE", "zh-CN" });
-
-var stopRecognition = new TaskCompletionSource<int>();
-using (var audioInput = AudioConfig.FromWavFileInput(@"en-us_zh-cn.wav"))
-{
- using (var recognizer = new SpeechRecognizer(config, autoDetectSourceLanguageConfig, audioInput))
- {
- // Subscribes to events.
- recognizer.Recognizing += (s, e) =>
- {
- if (e.Result.Reason == ResultReason.RecognizingSpeech)
- {
- Console.WriteLine($"RECOGNIZING: Text={e.Result.Text}");
- var autoDetectSourceLanguageResult = AutoDetectSourceLanguageResult.FromResult(e.Result);
- Console.WriteLine($"DETECTED: Language={autoDetectSourceLanguageResult.Language}");
- }
- };
-
- recognizer.Recognized += (s, e) =>
- {
- if (e.Result.Reason == ResultReason.RecognizedSpeech)
- {
- Console.WriteLine($"RECOGNIZED: Text={e.Result.Text}");
- var autoDetectSourceLanguageResult = AutoDetectSourceLanguageResult.FromResult(e.Result);
- Console.WriteLine($"DETECTED: Language={autoDetectSourceLanguageResult.Language}");
- }
- else if (e.Result.Reason == ResultReason.NoMatch)
- {
- Console.WriteLine($"NOMATCH: Speech could not be recognized.");
- }
- };
-
- recognizer.Canceled += (s, e) =>
- {
- Console.WriteLine($"CANCELED: Reason={e.Reason}");
-
- if (e.Reason == CancellationReason.Error)
- {
- Console.WriteLine($"CANCELED: ErrorCode={e.ErrorCode}");
- Console.WriteLine($"CANCELED: ErrorDetails={e.ErrorDetails}");
- Console.WriteLine($"CANCELED: Did you set the speech resource key and region values?");
- }
-
- stopRecognition.TrySetResult(0);
- };
-
- recognizer.SessionStarted += (s, e) =>
- {
- Console.WriteLine("\n Session started event.");
- };
-
- recognizer.SessionStopped += (s, e) =>
- {
- Console.WriteLine("\n Session stopped event.");
- Console.WriteLine("\nStop recognition.");
- stopRecognition.TrySetResult(0);
- };
-
- // Starts continuous recognition. Uses StopContinuousRecognitionAsync() to stop recognition.
- await recognizer.StartContinuousRecognitionAsync().ConfigureAwait(false);
-
- // Waits for completion.
- // Use Task.WaitAny to keep the task rooted.
- Task.WaitAny(new[] { stopRecognition.Task });
-
- // Stops recognition.
- await recognizer.StopContinuousRecognitionAsync().ConfigureAwait(false);
- }
-}
-```
----
-See more examples of speech to text recognition with language identification on [GitHub](https://github.com/Azure-Samples/cognitive-services-speech-sdk/blob/master/samples/cpp/windows/console/samples/speech_recognition_samples.cpp).
-
-### [Recognize once](#tab/once)
-
-```cpp
-using namespace std;
-using namespace Microsoft::CognitiveServices::Speech;
-using namespace Microsoft::CognitiveServices::Speech::Audio;
-
-auto speechConfig = SpeechConfig::FromSubscription("YourSubscriptionKey","YourServiceRegion");
-
-auto autoDetectSourceLanguageConfig =
- AutoDetectSourceLanguageConfig::FromLanguages({ "en-US", "de-DE", "zh-CN" });
-
-auto recognizer = SpeechRecognizer::FromConfig(
- speechConfig,
- autoDetectSourceLanguageConfig
- );
-
-auto speechRecognitionResult = recognizer->RecognizeOnceAsync().get();
-auto autoDetectSourceLanguageResult =
- AutoDetectSourceLanguageResult::FromResult(speechRecognitionResult);
-auto detectedLanguage = autoDetectSourceLanguageResult->Language;
-```
-
-### [Continuous recognition](#tab/continuous)
------
-See more examples of speech to text recognition with language identification on [GitHub](https://github.com/Azure-Samples/cognitive-services-speech-sdk/blob/master/samples/java/jre/console/src/com/microsoft/cognitiveservices/speech/samples/console/SpeechRecognitionSamples.java).
-
-### [Recognize once](#tab/once)
-
-```java
-AutoDetectSourceLanguageConfig autoDetectSourceLanguageConfig =
- AutoDetectSourceLanguageConfig.fromLanguages(Arrays.asList("en-US", "de-DE"));
-
-SpeechRecognizer recognizer = new SpeechRecognizer(
- speechConfig,
- autoDetectSourceLanguageConfig,
- audioConfig);
-
-Future<SpeechRecognitionResult> future = recognizer.recognizeOnceAsync();
-SpeechRecognitionResult result = future.get(30, TimeUnit.SECONDS);
-AutoDetectSourceLanguageResult autoDetectSourceLanguageResult =
- AutoDetectSourceLanguageResult.fromResult(result);
-String detectedLanguage = autoDetectSourceLanguageResult.getLanguage();
-
-recognizer.close();
-speechConfig.close();
-autoDetectSourceLanguageConfig.close();
-audioConfig.close();
-result.close();
-```
-
-### [Continuous recognition](#tab/continuous)
------
-See more examples of speech to text recognition with language identification on [GitHub](https://github.com/Azure-Samples/cognitive-services-speech-sdk/blob/master/samples/python/console/speech_sample.py).
-
-### [Recognize once](#tab/once)
-
-```Python
-auto_detect_source_language_config = \
- speechsdk.languageconfig.AutoDetectSourceLanguageConfig(languages=["en-US", "de-DE"])
-speech_recognizer = speechsdk.SpeechRecognizer(
- speech_config=speech_config,
- auto_detect_source_language_config=auto_detect_source_language_config,
- audio_config=audio_config)
-result = speech_recognizer.recognize_once()
-auto_detect_source_language_result = speechsdk.AutoDetectSourceLanguageResult(result)
-detected_language = auto_detect_source_language_result.language
-```
-
-### [Continuous recognition](#tab/continuous)
-
-```python
-import azure.cognitiveservices.speech as speechsdk
-import time
-import json
-
-speech_key, service_region = "YourSubscriptionKey","YourServiceRegion"
-weatherfilename="en-us_zh-cn.wav"
-
-# Currently the v2 endpoint is required. In a future SDK release you won't need to set it.
-endpoint_string = "wss://{}.stt.speech.microsoft.com/speech/universal/v2".format(service_region)
-speech_config = speechsdk.SpeechConfig(subscription=speech_key, endpoint=endpoint_string)
-audio_config = speechsdk.audio.AudioConfig(filename=weatherfilename)
-
-# Set the LanguageIdMode (Optional; Either Continuous or AtStart are accepted; Default AtStart)
-speech_config.set_property(property_id=speechsdk.PropertyId.SpeechServiceConnection_LanguageIdMode, value='Continuous')
-
-auto_detect_source_language_config = speechsdk.languageconfig.AutoDetectSourceLanguageConfig(
- languages=["en-US", "de-DE", "zh-CN"])
-
-speech_recognizer = speechsdk.SpeechRecognizer(
- speech_config=speech_config,
- auto_detect_source_language_config=auto_detect_source_language_config,
- audio_config=audio_config)
-
-done = False
-
-def stop_cb(evt):
- """callback that signals to stop continuous recognition upon receiving an event `evt`"""
- print('CLOSING on {}'.format(evt))
- nonlocal done
- done = True
-
-# Connect callbacks to the events fired by the speech recognizer
-speech_recognizer.recognizing.connect(lambda evt: print('RECOGNIZING: {}'.format(evt)))
-speech_recognizer.recognized.connect(lambda evt: print('RECOGNIZED: {}'.format(evt)))
-speech_recognizer.session_started.connect(lambda evt: print('SESSION STARTED: {}'.format(evt)))
-speech_recognizer.session_stopped.connect(lambda evt: print('SESSION STOPPED {}'.format(evt)))
-speech_recognizer.canceled.connect(lambda evt: print('CANCELED {}'.format(evt)))
-# stop continuous recognition on either session stopped or canceled events
-speech_recognizer.session_stopped.connect(stop_cb)
-speech_recognizer.canceled.connect(stop_cb)
-
-# Start continuous speech recognition
-speech_recognizer.start_continuous_recognition()
-while not done:
- time.sleep(.5)
-
-speech_recognizer.stop_continuous_recognition()
-```
---
-```Objective-C
-NSArray *languages = @[@"en-US", @"de-DE", @"zh-CN"];
-SPXAutoDetectSourceLanguageConfiguration* autoDetectSourceLanguageConfig = \
- [[SPXAutoDetectSourceLanguageConfiguration alloc]init:languages];
-SPXSpeechRecognizer* speechRecognizer = \
- [[SPXSpeechRecognizer alloc] initWithSpeechConfiguration:speechConfig
- autoDetectSourceLanguageConfiguration:autoDetectSourceLanguageConfig
- audioConfiguration:audioConfig];
-SPXSpeechRecognitionResult *result = [speechRecognizer recognizeOnce];
-SPXAutoDetectSourceLanguageResult *languageDetectionResult = [[SPXAutoDetectSourceLanguageResult alloc] init:result];
-NSString *detectedLanguage = [languageDetectionResult language];
-```
---
-```javascript
-var autoDetectSourceLanguageConfig = SpeechSDK.AutoDetectSourceLanguageConfig.fromLanguages(["en-US", "de-DE"]);
-var speechRecognizer = SpeechSDK.SpeechRecognizer.FromConfig(speechConfig, autoDetectSourceLanguageConfig, audioConfig);
-speechRecognizer.recognizeOnceAsync((result: SpeechSDK.SpeechRecognitionResult) => {
- var languageDetectionResult = SpeechSDK.AutoDetectSourceLanguageResult.fromResult(result);
- var detectedLanguage = languageDetectionResult.language;
-},
-{});
-```
--
-### Speech to text custom models
-
-> [!NOTE]
-> Language detection with custom models can only be used with real-time speech to text and speech translation. Batch transcription only supports language detection for base models.
-
-This sample shows how to use language detection with a custom endpoint. If the detected language is `en-US`, then the default model is used. If the detected language is `fr-FR`, then the custom model endpoint is used. For more information, see [Deploy a Custom Speech model](how-to-custom-speech-deploy-model.md).
-
-```csharp
-var sourceLanguageConfigs = new SourceLanguageConfig[]
-{
- SourceLanguageConfig.FromLanguage("en-US"),
- SourceLanguageConfig.FromLanguage("fr-FR", "The Endpoint Id for custom model of fr-FR")
-};
-var autoDetectSourceLanguageConfig =
- AutoDetectSourceLanguageConfig.FromSourceLanguageConfigs(
- sourceLanguageConfigs);
-```
--
-This sample shows how to use language detection with a custom endpoint. If the detected language is `en-US`, then the default model is used. If the detected language is `fr-FR`, then the custom model endpoint is used. For more information, see [Deploy a Custom Speech model](how-to-custom-speech-deploy-model.md).
-
-```cpp
-std::vector<std::shared_ptr<SourceLanguageConfig>> sourceLanguageConfigs;
-sourceLanguageConfigs.push_back(
- SourceLanguageConfig::FromLanguage("en-US"));
-sourceLanguageConfigs.push_back(
- SourceLanguageConfig::FromLanguage("fr-FR", "The Endpoint Id for custom model of fr-FR"));
-
-auto autoDetectSourceLanguageConfig =
- AutoDetectSourceLanguageConfig::FromSourceLanguageConfigs(
- sourceLanguageConfigs);
-```
--
-This sample shows how to use language detection with a custom endpoint. If the detected language is `en-US`, then the default model is used. If the detected language is `fr-FR`, then the custom model endpoint is used. For more information, see [Deploy a Custom Speech model](how-to-custom-speech-deploy-model.md).
-
-```java
-List<SourceLanguageConfig> sourceLanguageConfigs = new ArrayList<SourceLanguageConfig>();
-sourceLanguageConfigs.add(
- SourceLanguageConfig.fromLanguage("en-US"));
-sourceLanguageConfigs.add(
- SourceLanguageConfig.fromLanguage("fr-FR", "The Endpoint Id for custom model of fr-FR"));
-
-AutoDetectSourceLanguageConfig autoDetectSourceLanguageConfig =
- AutoDetectSourceLanguageConfig.fromSourceLanguageConfigs(
- sourceLanguageConfigs);
-```
--
-This sample shows how to use language detection with a custom endpoint. If the detected language is `en-US`, then the default model is used. If the detected language is `fr-FR`, then the custom model endpoint is used. For more information, see [Deploy a Custom Speech model](how-to-custom-speech-deploy-model.md).
-
-```python
- en_language_config = speechsdk.languageconfig.SourceLanguageConfig("en-US")
- fr_language_config = speechsdk.languageconfig.SourceLanguageConfig("fr-FR", "The Endpoint Id for custom model of fr-FR")
- auto_detect_source_language_config = speechsdk.languageconfig.AutoDetectSourceLanguageConfig(
- sourceLanguageConfigs=[en_language_config, fr_language_config])
-```
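-
-The language configuration objects are then passed to a recognizer in the same way as the other Python samples in this article. Here's a minimal end-to-end sketch; the key, region, audio file name, and endpoint ID are placeholders:
-
-```python
-import azure.cognitiveservices.speech as speechsdk
-
-speech_config = speechsdk.SpeechConfig(subscription="YourSubscriptionKey", region="YourServiceRegion")
-audio_config = speechsdk.audio.AudioConfig(filename="whatstheweatherlike.wav")
-
-en_language_config = speechsdk.languageconfig.SourceLanguageConfig("en-US")
-fr_language_config = speechsdk.languageconfig.SourceLanguageConfig("fr-FR", "The Endpoint Id for custom model of fr-FR")
-auto_detect_source_language_config = speechsdk.languageconfig.AutoDetectSourceLanguageConfig(
-    sourceLanguageConfigs=[en_language_config, fr_language_config])
-
-# The recognizer uses the default model for en-US and the custom endpoint for fr-FR.
-speech_recognizer = speechsdk.SpeechRecognizer(
-    speech_config=speech_config,
-    auto_detect_source_language_config=auto_detect_source_language_config,
-    audio_config=audio_config)
-
-result = speech_recognizer.recognize_once()
-print(speechsdk.AutoDetectSourceLanguageResult(result).language)
-```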
--
-This sample shows how to use language detection with a custom endpoint. If the detected language is `en-US`, then the default model is used. If the detected language is `fr-FR`, then the custom model endpoint is used. For more information, see [Deploy a Custom Speech model](how-to-custom-speech-deploy-model.md).
-
-```Objective-C
-SPXSourceLanguageConfiguration* enLanguageConfig = [[SPXSourceLanguageConfiguration alloc]init:@"en-US"];
-SPXSourceLanguageConfiguration* frLanguageConfig = \
- [[SPXSourceLanguageConfiguration alloc]initWithLanguage:@"fr-FR"
- endpointId:@"The Endpoint Id for custom model of fr-FR"];
-NSArray *languageConfigs = @[enLanguageConfig, frLanguageConfig];
-SPXAutoDetectSourceLanguageConfiguration* autoDetectSourceLanguageConfig = \
- [[SPXAutoDetectSourceLanguageConfiguration alloc]initWithSourceLanguageConfigurations:languageConfigs];
-```
--
-Language detection with a custom endpoint isn't supported by the Speech SDK for JavaScript. For example, if you include "fr-FR" as shown here, the custom endpoint will be ignored.
-
-```javascript
-var enLanguageConfig = SpeechSDK.SourceLanguageConfig.fromLanguage("en-US");
-var frLanguageConfig = SpeechSDK.SourceLanguageConfig.fromLanguage("fr-FR", "The Endpoint Id for custom model of fr-FR");
-var autoDetectSourceLanguageConfig = SpeechSDK.AutoDetectSourceLanguageConfig.fromSourceLanguageConfigs([enLanguageConfig, frLanguageConfig]);
-```
--
-## Speech translation
-
-You use Speech translation when you need to identify the language in an audio source and then translate it to another language. For more information, see [Speech translation overview](speech-translation.md).
-
-> [!NOTE]
-> Speech translation with language identification is only supported with Speech SDKs in C#, C++, and Python.
-> Currently for speech translation with language identification, you must create a SpeechConfig from the `wss://{region}.stt.speech.microsoft.com/speech/universal/v2` endpoint string, as shown in code examples. In a future SDK release you won't need to set it.
-
-See more examples of speech translation with language identification on [GitHub](https://github.com/Azure-Samples/cognitive-services-speech-sdk/blob/master/samples/csharp/sharedcontent/console/translation_samples.cs).
-
-### [Recognize once](#tab/once)
-
-```csharp
-using Microsoft.CognitiveServices.Speech;
-using Microsoft.CognitiveServices.Speech.Audio;
-using Microsoft.CognitiveServices.Speech.Translation;
-
-public static async Task RecognizeOnceSpeechTranslationAsync()
-{
- var region = "YourServiceRegion";
- // Currently the v2 endpoint is required. In a future SDK release you won't need to set it.
- var endpointString = $"wss://{region}.stt.speech.microsoft.com/speech/universal/v2";
- var endpointUrl = new Uri(endpointString);
-
-    var speechTranslationConfig = SpeechTranslationConfig.FromEndpoint(endpointUrl, "YourSubscriptionKey");
-
- // Source language is required, but currently ignored.
- string fromLanguage = "en-US";
- speechTranslationConfig.SpeechRecognitionLanguage = fromLanguage;
-
- speechTranslationConfig.AddTargetLanguage("de");
- speechTranslationConfig.AddTargetLanguage("fr");
-
- var autoDetectSourceLanguageConfig = AutoDetectSourceLanguageConfig.FromLanguages(new string[] { "en-US", "de-DE", "zh-CN" });
-
- using var audioConfig = AudioConfig.FromDefaultMicrophoneInput();
-
- using (var recognizer = new TranslationRecognizer(
- speechTranslationConfig,
- autoDetectSourceLanguageConfig,
- audioConfig))
- {
-
- Console.WriteLine("Say something or read from file...");
- var result = await recognizer.RecognizeOnceAsync().ConfigureAwait(false);
-
- if (result.Reason == ResultReason.TranslatedSpeech)
- {
- var lidResult = result.Properties.GetProperty(PropertyId.SpeechServiceConnection_AutoDetectSourceLanguageResult);
-
- Console.WriteLine($"RECOGNIZED in '{lidResult}': Text={result.Text}");
- foreach (var element in result.Translations)
- {
- Console.WriteLine($" TRANSLATED into '{element.Key}': {element.Value}");
- }
- }
- }
-}
-```
-
-### [Continuous recognition](#tab/continuous)
-
-```csharp
-using Microsoft.CognitiveServices.Speech;
-using Microsoft.CognitiveServices.Speech.Audio;
-using Microsoft.CognitiveServices.Speech.Translation;
-
-public static async Task MultiLingualTranslation()
-{
- var region = "YourServiceRegion";
- // Currently the v2 endpoint is required. In a future SDK release you won't need to set it.
- var endpointString = $"wss://{region}.stt.speech.microsoft.com/speech/universal/v2";
- var endpointUrl = new Uri(endpointString);
-
- var config = SpeechTranslationConfig.FromEndpoint(endpointUrl, "YourSubscriptionKey");
-
- // Source language is required, but currently ignored.
- string fromLanguage = "en-US";
- config.SpeechRecognitionLanguage = fromLanguage;
-
- config.AddTargetLanguage("de");
- config.AddTargetLanguage("fr");
-
- // Set the LanguageIdMode (Optional; Either Continuous or AtStart are accepted; Default AtStart)
- config.SetProperty(PropertyId.SpeechServiceConnection_LanguageIdMode, "Continuous");
- var autoDetectSourceLanguageConfig = AutoDetectSourceLanguageConfig.FromLanguages(new string[] { "en-US", "de-DE", "zh-CN" });
-
- var stopTranslation = new TaskCompletionSource<int>();
- using (var audioInput = AudioConfig.FromWavFileInput(@"en-us_zh-cn.wav"))
- {
- using (var recognizer = new TranslationRecognizer(config, autoDetectSourceLanguageConfig, audioInput))
- {
- recognizer.Recognizing += (s, e) =>
- {
- var lidResult = e.Result.Properties.GetProperty(PropertyId.SpeechServiceConnection_AutoDetectSourceLanguageResult);
-
- Console.WriteLine($"RECOGNIZING in '{lidResult}': Text={e.Result.Text}");
- foreach (var element in e.Result.Translations)
- {
- Console.WriteLine($" TRANSLATING into '{element.Key}': {element.Value}");
- }
- };
-
- recognizer.Recognized += (s, e) => {
- if (e.Result.Reason == ResultReason.TranslatedSpeech)
- {
- var lidResult = e.Result.Properties.GetProperty(PropertyId.SpeechServiceConnection_AutoDetectSourceLanguageResult);
-
- Console.WriteLine($"RECOGNIZED in '{lidResult}': Text={e.Result.Text}");
- foreach (var element in e.Result.Translations)
- {
- Console.WriteLine($" TRANSLATED into '{element.Key}': {element.Value}");
- }
- }
- else if (e.Result.Reason == ResultReason.RecognizedSpeech)
- {
- Console.WriteLine($"RECOGNIZED: Text={e.Result.Text}");
- Console.WriteLine($" Speech not translated.");
- }
- else if (e.Result.Reason == ResultReason.NoMatch)
- {
- Console.WriteLine($"NOMATCH: Speech could not be recognized.");
- }
- };
-
- recognizer.Canceled += (s, e) =>
- {
- Console.WriteLine($"CANCELED: Reason={e.Reason}");
-
- if (e.Reason == CancellationReason.Error)
- {
- Console.WriteLine($"CANCELED: ErrorCode={e.ErrorCode}");
- Console.WriteLine($"CANCELED: ErrorDetails={e.ErrorDetails}");
- Console.WriteLine($"CANCELED: Did you set the speech resource key and region values?");
- }
-
- stopTranslation.TrySetResult(0);
- };
-
- recognizer.SpeechStartDetected += (s, e) => {
- Console.WriteLine("\nSpeech start detected event.");
- };
-
- recognizer.SpeechEndDetected += (s, e) => {
- Console.WriteLine("\nSpeech end detected event.");
- };
-
- recognizer.SessionStarted += (s, e) => {
- Console.WriteLine("\nSession started event.");
- };
-
- recognizer.SessionStopped += (s, e) => {
- Console.WriteLine("\nSession stopped event.");
- Console.WriteLine($"\nStop translation.");
- stopTranslation.TrySetResult(0);
- };
-
- // Starts continuous recognition. Uses StopContinuousRecognitionAsync() to stop recognition.
- Console.WriteLine("Start translation...");
- await recognizer.StartContinuousRecognitionAsync().ConfigureAwait(false);
-
- Task.WaitAny(new[] { stopTranslation.Task });
- await recognizer.StopContinuousRecognitionAsync().ConfigureAwait(false);
- }
- }
-}
-```
----
-See more examples of speech translation with language identification on [GitHub](https://github.com/Azure-Samples/cognitive-services-speech-sdk/blob/master/samples/cpp/windows/console/samples/translation_samples.cpp).
-
-### [Recognize once](#tab/once)
-
-```cpp
-auto region = "YourServiceRegion";
-// Currently the v2 endpoint is required. In a future SDK release you won't need to set it.
-auto endpointString = std::format("wss://{}.stt.speech.microsoft.com/speech/universal/v2", region);
-auto config = SpeechTranslationConfig::FromEndpoint(endpointString, "YourSubscriptionKey");
-
-auto autoDetectSourceLanguageConfig = AutoDetectSourceLanguageConfig::FromLanguages({ "en-US", "de-DE" });
-
-// Sets source and target languages
-// The source language will be detected by the language detection feature.
-// However, SpeechRecognitionLanguage still needs to be set with a locale string, but it will not be used as the source language.
-// This will be fixed in a future version of Speech SDK.
-auto fromLanguage = "en-US";
-config->SetSpeechRecognitionLanguage(fromLanguage);
-config->AddTargetLanguage("de");
-config->AddTargetLanguage("fr");
-
-// Creates a translation recognizer using microphone as audio input.
-auto recognizer = TranslationRecognizer::FromConfig(config, autoDetectSourceLanguageConfig);
-cout << "Say something...\n";
-
-// Starts translation, and returns after a single utterance is recognized. The end of a
-// single utterance is determined by listening for silence at the end or until a maximum of 15
-// seconds of audio is processed. The task returns the recognized text as well as the translation.
-// Note: Since RecognizeOnceAsync() returns only a single utterance, it is suitable only for single
-// shot recognition like command or query.
-// For long-running multi-utterance recognition, use StartContinuousRecognitionAsync() instead.
-auto result = recognizer->RecognizeOnceAsync().get();
-
-// Checks result.
-if (result->Reason == ResultReason::TranslatedSpeech)
-{
- cout << "RECOGNIZED: Text=" << result->Text << std::endl;
-
- for (const auto& it : result->Translations)
- {
- cout << "TRANSLATED into '" << it.first.c_str() << "': " << it.second.c_str() << std::endl;
- }
-}
-else if (result->Reason == ResultReason::RecognizedSpeech)
-{
- cout << "RECOGNIZED: Text=" << result->Text << " (text could not be translated)" << std::endl;
-}
-else if (result->Reason == ResultReason::NoMatch)
-{
- cout << "NOMATCH: Speech could not be recognized." << std::endl;
-}
-else if (result->Reason == ResultReason::Canceled)
-{
- auto cancellation = CancellationDetails::FromResult(result);
- cout << "CANCELED: Reason=" << (int)cancellation->Reason << std::endl;
-
- if (cancellation->Reason == CancellationReason::Error)
- {
- cout << "CANCELED: ErrorCode=" << (int)cancellation->ErrorCode << std::endl;
- cout << "CANCELED: ErrorDetails=" << cancellation->ErrorDetails << std::endl;
- cout << "CANCELED: Did you set the speech resource key and region values?" << std::endl;
- }
-}
-```
-
-### [Continuous recognition](#tab/continuous)
-
-```cpp
-using namespace std;
-using namespace Microsoft::CognitiveServices::Speech;
-using namespace Microsoft::CognitiveServices::Speech::Audio;
-using namespace Microsoft::CognitiveServices::Speech::Translation;
-
-void MultiLingualTranslation()
-{
- auto region = "YourServiceRegion";
- // Currently the v2 endpoint is required. In a future SDK release you won't need to set it.
- auto endpointString = std::format("wss://{}.stt.speech.microsoft.com/speech/universal/v2", region);
- auto config = SpeechTranslationConfig::FromEndpoint(endpointString, "YourSubscriptionKey");
-
- // Set the LanguageIdMode (Optional; Either Continuous or AtStart are accepted; Default AtStart)
-    config->SetProperty(PropertyId::SpeechServiceConnection_LanguageIdMode, "Continuous");
- auto autoDetectSourceLanguageConfig = AutoDetectSourceLanguageConfig::FromLanguages({ "en-US", "de-DE", "zh-CN" });
-
- promise<void> recognitionEnd;
- // Source language is required, but currently ignored.
- auto fromLanguage = "en-US";
- config->SetSpeechRecognitionLanguage(fromLanguage);
- config->AddTargetLanguage("de");
- config->AddTargetLanguage("fr");
-
- auto audioInput = AudioConfig::FromWavFileInput("whatstheweatherlike.wav");
- auto recognizer = TranslationRecognizer::FromConfig(config, autoDetectSourceLanguageConfig, audioInput);
-
- recognizer->Recognizing.Connect([](const TranslationRecognitionEventArgs& e)
- {
- std::string lidResult = e.Result->Properties.GetProperty(PropertyId::SpeechServiceConnection_AutoDetectSourceLanguageResult);
-
- cout << "Recognizing in Language = "<< lidResult << ":" << e.Result->Text << std::endl;
- for (const auto& it : e.Result->Translations)
- {
- cout << " Translated into '" << it.first.c_str() << "': " << it.second.c_str() << std::endl;
- }
- });
-
- recognizer->Recognized.Connect([](const TranslationRecognitionEventArgs& e)
- {
- if (e.Result->Reason == ResultReason::TranslatedSpeech)
- {
- std::string lidResult = e.Result->Properties.GetProperty(PropertyId::SpeechServiceConnection_AutoDetectSourceLanguageResult);
- cout << "RECOGNIZED in Language = " << lidResult << ": Text=" << e.Result->Text << std::endl;
- }
- else if (e.Result->Reason == ResultReason::RecognizedSpeech)
- {
- cout << "RECOGNIZED: Text=" << e.Result->Text << " (text could not be translated)" << std::endl;
- }
- else if (e.Result->Reason == ResultReason::NoMatch)
- {
- cout << "NOMATCH: Speech could not be recognized." << std::endl;
- }
-
- for (const auto& it : e.Result->Translations)
- {
- cout << " Translated into '" << it.first.c_str() << "': " << it.second.c_str() << std::endl;
- }
- });
-
- recognizer->Canceled.Connect([&recognitionEnd](const TranslationRecognitionCanceledEventArgs& e)
- {
- cout << "CANCELED: Reason=" << (int)e.Reason << std::endl;
- if (e.Reason == CancellationReason::Error)
- {
- cout << "CANCELED: ErrorCode=" << (int)e.ErrorCode << std::endl;
- cout << "CANCELED: ErrorDetails=" << e.ErrorDetails << std::endl;
- cout << "CANCELED: Did you set the speech resource key and region values?" << std::endl;
-
- recognitionEnd.set_value();
- }
- });
-
- recognizer->Synthesizing.Connect([](const TranslationSynthesisEventArgs& e)
- {
- auto size = e.Result->Audio.size();
- cout << "Translation synthesis result: size of audio data: " << size
- << (size == 0 ? "(END)" : "");
- });
-
- recognizer->SessionStopped.Connect([&recognitionEnd](const SessionEventArgs& e)
- {
- cout << "Session stopped.";
- recognitionEnd.set_value();
- });
-
-    // Starts continuous recognition. Use StopContinuousRecognitionAsync() to stop recognition.
- recognizer->StartContinuousRecognitionAsync().get();
- recognitionEnd.get_future().get();
- recognizer->StopContinuousRecognitionAsync().get();
-}
-```
----
-See more examples of speech translation with language identification on [GitHub](https://github.com/Azure-Samples/cognitive-services-speech-sdk/blob/master/samples/python/console/translation_sample.py).
-
-### [Recognize once](#tab/once)
-
-```python
-import azure.cognitiveservices.speech as speechsdk
-import time
-import json
-
-speech_key, service_region = "YourSubscriptionKey","YourServiceRegion"
-weatherfilename="en-us_zh-cn.wav"
-
-# set up translation parameters: source language and target languages
-# Currently the v2 endpoint is required. In a future SDK release you won't need to set it.
-endpoint_string = "wss://{}.stt.speech.microsoft.com/speech/universal/v2".format(service_region)
-translation_config = speechsdk.translation.SpeechTranslationConfig(
- subscription=speech_key,
- endpoint=endpoint_string,
- speech_recognition_language='en-US',
- target_languages=('de', 'fr'))
-audio_config = speechsdk.audio.AudioConfig(filename=weatherfilename)
-
-# Specify the AutoDetectSourceLanguageConfig, which defines the candidate source languages
-auto_detect_source_language_config = speechsdk.languageconfig.AutoDetectSourceLanguageConfig(languages=["en-US", "de-DE", "zh-CN"])
-
-# Creates a translation recognizer using an audio file as input.
-recognizer = speechsdk.translation.TranslationRecognizer(
- translation_config=translation_config,
- audio_config=audio_config,
- auto_detect_source_language_config=auto_detect_source_language_config)
-
-# Starts translation, and returns after a single utterance is recognized. The end of a
-# single utterance is determined by listening for silence at the end or until a maximum of 15
-# seconds of audio is processed. The task returns the recognition text as result.
-# Note: Since recognize_once() returns only a single utterance, it is suitable only for single
-# shot recognition like command or query.
-# For long-running multi-utterance recognition, use start_continuous_recognition() instead.
-result = recognizer.recognize_once()
-
-# Check the result
-if result.reason == speechsdk.ResultReason.TranslatedSpeech:
- print("""Recognized: {}
- German translation: {}
- French translation: {}""".format(
- result.text, result.translations['de'], result.translations['fr']))
-elif result.reason == speechsdk.ResultReason.RecognizedSpeech:
- print("Recognized: {}".format(result.text))
- detectedSrcLang = result.properties[speechsdk.PropertyId.SpeechServiceConnection_AutoDetectSourceLanguageResult]
- print("Detected Language: {}".format(detectedSrcLang))
-elif result.reason == speechsdk.ResultReason.NoMatch:
- print("No speech could be recognized: {}".format(result.no_match_details))
-elif result.reason == speechsdk.ResultReason.Canceled:
- print("Translation canceled: {}".format(result.cancellation_details.reason))
- if result.cancellation_details.reason == speechsdk.CancellationReason.Error:
- print("Error details: {}".format(result.cancellation_details.error_details))
-```
-
-### [Continuous recognition](#tab/continuous)
-
-```python
-import azure.cognitiveservices.speech as speechsdk
-import time
-import json
-
-speech_key, service_region = "YourSubscriptionKey","YourServiceRegion"
-weatherfilename="en-us_zh-cn.wav"
-
-# Currently the v2 endpoint is required. In a future SDK release you won't need to set it.
-endpoint_string = "wss://{}.stt.speech.microsoft.com/speech/universal/v2".format(service_region)
-translation_config = speechsdk.translation.SpeechTranslationConfig(
- subscription=speech_key,
- endpoint=endpoint_string,
- speech_recognition_language='en-US',
- target_languages=('de', 'fr'))
-audio_config = speechsdk.audio.AudioConfig(filename=weatherfilename)
-
-# Set the LanguageIdMode (Optional; Either Continuous or AtStart are accepted; Default AtStart)
-translation_config.set_property(property_id=speechsdk.PropertyId.SpeechServiceConnection_LanguageIdMode, value='Continuous')
-
-# Specify the AutoDetectSourceLanguageConfig, which defines the candidate source languages
-auto_detect_source_language_config = speechsdk.languageconfig.AutoDetectSourceLanguageConfig(languages=["en-US", "de-DE", "zh-CN"])
-
-# Creates a translation recognizer using an audio file as input.
-recognizer = speechsdk.translation.TranslationRecognizer(
- translation_config=translation_config,
- audio_config=audio_config,
- auto_detect_source_language_config=auto_detect_source_language_config)
-
-def result_callback(event_type, evt):
- """callback to display a translation result"""
- print("{}: {}\n\tTranslations: {}\n\tResult Json: {}".format(
- event_type, evt, evt.result.translations.items(), evt.result.json))
-
-done = False
-
-def stop_cb(evt):
- """callback that signals to stop continuous recognition upon receiving an event `evt`"""
- print('CLOSING on {}'.format(evt))
- nonlocal done
- done = True
-
-# connect callback functions to the events fired by the recognizer
-recognizer.session_started.connect(lambda evt: print('SESSION STARTED: {}'.format(evt)))
-recognizer.session_stopped.connect(lambda evt: print('SESSION STOPPED {}'.format(evt)))
-# event for intermediate results
-recognizer.recognizing.connect(lambda evt: result_callback('RECOGNIZING', evt))
-# event for final result
-recognizer.recognized.connect(lambda evt: result_callback('RECOGNIZED', evt))
-# cancellation event
-recognizer.canceled.connect(lambda evt: print('CANCELED: {} ({})'.format(evt, evt.reason)))
-
-# stop continuous recognition on either session stopped or canceled events
-recognizer.session_stopped.connect(stop_cb)
-recognizer.canceled.connect(stop_cb)
-
-def synthesis_callback(evt):
- """
- callback for the synthesis event
- """
- print('SYNTHESIZING {}\n\treceived {} bytes of audio. Reason: {}'.format(
- evt, len(evt.result.audio), evt.result.reason))
- if evt.result.reason == speechsdk.ResultReason.RecognizedSpeech:
- print("RECOGNIZED: {}".format(evt.result.properties))
-        if evt.result.properties.get(speechsdk.PropertyId.SpeechServiceConnection_AutoDetectSourceLanguageResult) is None:
- print("Unable to detect any language")
- else:
- detectedSrcLang = evt.result.properties[speechsdk.PropertyId.SpeechServiceConnection_AutoDetectSourceLanguageResult]
- jsonResult = evt.result.properties[speechsdk.PropertyId.SpeechServiceResponse_JsonResult]
- detailResult = json.loads(jsonResult)
- startOffset = detailResult['Offset']
- duration = detailResult['Duration']
- if duration >= 0:
- endOffset = duration + startOffset
- else:
- endOffset = 0
- print("Detected language = " + detectedSrcLang + ", startOffset = " + str(startOffset) + " nanoseconds, endOffset = " + str(endOffset) + " nanoseconds, Duration = " + str(duration) + " nanoseconds.")
- global language_detected
- language_detected = True
-
-# connect callback to the synthesis event
-recognizer.synthesizing.connect(synthesis_callback)
-
-# start translation
-recognizer.start_continuous_recognition()
-
-while not done:
- time.sleep(.5)
-
-recognizer.stop_continuous_recognition()
-```
----
-## Run and use a container
-
-Speech containers provide websocket-based query endpoint APIs that are accessed through the Speech SDK and Speech CLI. By default, the Speech SDK and Speech CLI use the public Speech service. To use the container, you need to change the initialization method. Use a container host URL instead of a key and region.
-
-When you run language ID in a container, use the `SourceLanguageRecognizer` object instead of `SpeechRecognizer` or `TranslationRecognizer`.
-
-For more information about containers, see the [language identification speech containers](speech-container-lid.md#use-the-container) how-to guide.
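-
-For illustration, here's a minimal Python sketch of standalone language identification against a container. The host URL and audio file name are placeholders; substitute the host and port where your container is running:
-
-```python
-import azure.cognitiveservices.speech as speechsdk
-
-# Point the SDK at the container host (placeholder URL) instead of a key and region.
-speech_config = speechsdk.SpeechConfig(host="ws://localhost:5000")
-audio_config = speechsdk.audio.AudioConfig(filename="en-us_zh-cn.wav")
-
-auto_detect_source_language_config = speechsdk.languageconfig.AutoDetectSourceLanguageConfig(
-    languages=["en-US", "de-DE", "zh-CN"])
-
-# Use SourceLanguageRecognizer for standalone language identification.
-recognizer = speechsdk.SourceLanguageRecognizer(
-    speech_config=speech_config,
-    auto_detect_source_language_config=auto_detect_source_language_config,
-    audio_config=audio_config)
-
-result = recognizer.recognize_once()
-detected_language = result.properties[
-    speechsdk.PropertyId.SpeechServiceConnection_AutoDetectSourceLanguageResult]
-print(detected_language)
-```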
--
-## Speech to text batch transcription
-
-To identify languages with the [Batch transcription REST API](batch-transcription.md), use the `languageIdentification` property in the body of your [Transcriptions_Create](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/operations/Transcriptions_Create) request.
-
-> [!WARNING]
-> Batch transcription only supports language identification for base models. If both language identification and a custom model are specified in the transcription request, the service will fall back to use the base models for the specified candidate languages. This may result in unexpected recognition results.
->
-> If your speech to text scenario requires both language identification and custom models, use [real-time speech to text](#speech-to-text-custom-models) instead of batch transcription.
-
-The following example shows the usage of the `languageIdentification` property with four candidate languages. For more information about request properties, see [Create a batch transcription](batch-transcription-create.md#request-configuration-options).
-
-```json
-{
- <...>
-
- "properties": {
- <...>
-
- "languageIdentification": {
- "candidateLocales": [
- "en-US",
- "ja-JP",
- "zh-CN",
- "hi-IN"
- ]
- },
- <...>
- }
-}
-```
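-
-For illustration, the following Python sketch (using the third-party `requests` package) creates such a transcription with the Transcriptions_Create operation. The key, region, and audio content URL are placeholders:
-
-```python
-import requests
-
-# Placeholders: replace with your Speech resource key, region, and audio URL.
-region = "YourServiceRegion"
-url = f"https://{region}.api.cognitive.microsoft.com/speechtotext/v3.1/transcriptions"
-headers = {
-    "Ocp-Apim-Subscription-Key": "YourSubscriptionKey",
-    "Content-Type": "application/json",
-}
-body = {
-    "displayName": "Transcription with language identification",
-    "locale": "en-US",
-    "contentUrls": ["https://contoso.example.com/audio.wav"],
-    "properties": {
-        "languageIdentification": {
-            "candidateLocales": ["en-US", "ja-JP", "zh-CN", "hi-IN"]
-        }
-    },
-}
-
-response = requests.post(url, headers=headers, json=body)
-response.raise_for_status()
-print(response.json()["self"])  # URL of the created transcription
-```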
-
-## Next steps
-
-* [Try the speech to text quickstart](get-started-speech-to-text.md)
-* [Improve recognition accuracy with custom speech](custom-speech-overview.md)
-* [Use batch transcription](batch-transcription.md)
cognitive-services Language Learning Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/language-learning-overview.md
- Title: Language learning with Azure Cognitive Service for Speech-
-description: Azure Cognitive Services for Speech can be used to learn languages.
------ Previously updated : 02/23/2023---
-# Language learning with Azure Cognitive Service for Speech
-
-The Azure Cognitive Service for Speech platform is a comprehensive collection of technologies and services aimed at accelerating the incorporation of speech into applications. Azure Cognitive Services for Speech can be used to learn languages.
--
-## Pronunciation Assessment
-
-The [Pronunciation Assessment](pronunciation-assessment-tool.md) feature is designed to provide instant feedback to users on the accuracy, fluency, and prosody of their speech when learning a new language, so that they can speak and present in a new language with confidence. For information about availability of pronunciation assessment, see [supported languages](language-support.md?tabs=pronunciation-assessment) and [available regions](regions.md#speech-service).
-
-The Pronunciation Assessment feature offers several benefits for educators, service providers, and students.
-- For educators, it provides instant feedback, eliminates the need for time-consuming oral language assessments, and offers consistent and comprehensive assessments.
-- For service providers, it offers high real-time capabilities and a worldwide Speech service, and it supports growing global business.
-- For students and learners, it provides a convenient way to practice and receive feedback, offers authoritative scoring to compare with native pronunciation, and helps them follow the exact text order for long sentences or full documents.
-## Speech to text
-
-Azure [Speech to text](speech-to-text.md) supports real-time language identification for multilingual language learning scenarios, helping human-to-human interaction with better understanding and readable context.
-
-## Text to speech
-
-[Text to speech](text-to-speech.md) prebuilt neural voices can read out learning materials natively and empower self-serve learning. A broad portfolio of [languages and voices](language-support.md?tabs=tts) is supported for AI teachers, content read-aloud capabilities, and more. Microsoft is continuously working on bringing new languages to the world.
--
-[Custom Neural Voice](custom-neural-voice.md) is available for you to create a customized synthetic voice for your applications. Education companies are using this technology to personalize language learning, by creating unique characters with distinct voices that match the culture and background of their target audience.
---
-## Next steps
-
-* [How to use pronunciation assessment](how-to-pronunciation-assessment.md)
-* [What is Speech to text](speech-to-text.md)
-* [What is Text to speech](text-to-speech.md)
-* [What is Custom Neural Voice](custom-neural-voice.md)
cognitive-services Language Support https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/language-support.md
- Title: Language support - Speech service-
-description: The Speech service supports numerous languages for speech to text and text to speech conversion, along with speech translation. This article provides a comprehensive list of language support by service feature.
------ Previously updated : 01/12/2023----
-# Language and voice support for the Speech service
-
-The following tables summarize language support for [speech to text](speech-to-text.md), [text to speech](text-to-speech.md), [pronunciation assessment](how-to-pronunciation-assessment.md), [speech translation](speech-translation.md), [speaker recognition](speaker-recognition-overview.md), and additional service features.
-
-You can also get a list of locales and voices supported for each specific region or endpoint through the [Speech SDK](speech-sdk.md), [Speech to text REST API](rest-speech-to-text.md), [Speech to text REST API for short audio](rest-speech-to-text-short.md) and [Text to speech REST API](rest-text-to-speech.md#get-a-list-of-voices).
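-
-For example, here's a minimal Python sketch (using the third-party `requests` package; the key and region are placeholders) that retrieves the voice list from the Text to speech REST API:
-
-```python
-import requests
-
-# Placeholders: replace with your Speech resource region and key.
-region = "YourServiceRegion"
-url = f"https://{region}.tts.speech.microsoft.com/cognitiveservices/voices/list"
-headers = {"Ocp-Apim-Subscription-Key": "YourSubscriptionKey"}
-
-voices = requests.get(url, headers=headers).json()
-print(len(voices), "voices available")
-print(voices[0]["ShortName"], voices[0]["Locale"])
-```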
-
-## Supported languages
-
-Language support varies by Speech service functionality.
-
-> [!NOTE]
-> See [Speech Containers](speech-container-overview.md#available-speech-containers) and [Embedded Speech](embedded-speech.md#models-and-voices) separately for their supported languages.
-
-**Choose a Speech feature**
-
-# [Speech to text](#tab/stt)
-
-The table in this section summarizes the locales supported for Speech to text. See the table footnotes for more details.
-
-Additional remarks for Speech to text locales are included in the [Custom Speech](#custom-speech) section below.
-
-> [!TIP]
-> Try out the [Real-time Speech to text tool](https://speech.microsoft.com/portal/speechtotexttool) without having to use any code.
--
-### Custom Speech
-
-To improve Speech to text recognition accuracy, customization is available for some languages and base models. Depending on the locale, you can upload audio + human-labeled transcripts, plain text, structured text, and pronunciation data. By default, plain text customization is supported for all available base models. To learn more about customization, see [Custom Speech](./custom-speech-overview.md).
-
-# [Text to speech](#tab/tts)
-
-The table in this section summarizes the locales and voices supported for Text to speech. See the table footnotes for more details.
-
-Additional remarks for Text to speech locales are included in the [Voice styles and roles](#voice-styles-and-roles), [Prebuilt neural voices](#prebuilt-neural-voices), and [Custom Neural Voice](#custom-neural-voice) sections below.
-
-> [!TIP]
-> Check the [Voice Gallery](https://speech.microsoft.com/portal/voicegallery) to determine the right voice for your business needs.
--
-### Voice styles and roles
-
-In some cases, you can adjust the speaking style to express different emotions like cheerfulness, empathy, and calm. All prebuilt voices with speaking styles and multi-style custom voices support style degree adjustment. You can optimize the voice for different scenarios like customer service, newscast, and voice assistant. With roles, the same voice can act as a different age and gender.
-
-To learn how you can configure and adjust neural voice styles and roles, see [Speech Synthesis Markup Language](speech-synthesis-markup-voice.md#speaking-styles-and-roles).
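-
-As a rough illustration, the following Python sketch applies a speaking style through SSML. The voice name, style, and style degree are examples only; check the table that follows for the combinations that each voice supports:
-
-```python
-import azure.cognitiveservices.speech as speechsdk
-
-speech_config = speechsdk.SpeechConfig(subscription="YourSubscriptionKey", region="YourServiceRegion")
-synthesizer = speechsdk.SpeechSynthesizer(speech_config=speech_config)
-
-# Example only: en-US-JennyNeural with the "cheerful" style and a style degree of 1.5.
-ssml = """
-<speak version="1.0" xmlns="http://www.w3.org/2001/10/synthesis"
-       xmlns:mstts="https://www.w3.org/2001/mstts" xml:lang="en-US">
-  <voice name="en-US-JennyNeural">
-    <mstts:express-as style="cheerful" styledegree="1.5">
-      That'd be just amazing!
-    </mstts:express-as>
-  </voice>
-</speak>
-"""
-result = synthesizer.speak_ssml_async(ssml).get()
-```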
-
-Use the following table to determine supported styles and roles for each neural voice.
--
-### Viseme
-
-This table lists all the locales supported for [Viseme](speech-synthesis-markup-structure.md#viseme-element). For more information about Viseme, see [Get facial position with viseme](how-to-speech-synthesis-viseme.md) and [Viseme element](speech-synthesis-markup-structure.md#viseme-element).
--
-### Prebuilt neural voices
-
-Each prebuilt neural voice supports a specific language and dialect, identified by locale. You can try the demo and hear the voices in the [Voice Gallery](https://speech.microsoft.com/portal/voicegallery).
-
-> [!IMPORTANT]
-> Pricing varies for Prebuilt Neural Voice (see *Neural* on the pricing page) and Custom Neural Voice (see *Custom Neural* on the pricing page). For more information, see the [Pricing](https://azure.microsoft.com/pricing/details/cognitive-services/speech-services/) page.
-
-Each prebuilt neural voice model is available at 24 kHz and high-fidelity 48 kHz. Other sample rates can be obtained through upsampling or downsampling when synthesizing.
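-
-For example, a minimal Python sketch (the key, region, and voice name are placeholders) that requests high-fidelity 48-kHz output when synthesizing:
-
-```python
-import azure.cognitiveservices.speech as speechsdk
-
-speech_config = speechsdk.SpeechConfig(subscription="YourSubscriptionKey", region="YourServiceRegion")
-speech_config.speech_synthesis_voice_name = "en-US-JennyNeural"
-
-# Request 48-kHz RIFF (WAV) output instead of the default sample rate.
-speech_config.set_speech_synthesis_output_format(
-    speechsdk.SpeechSynthesisOutputFormat.Riff48Khz16BitMonoPcm)
-
-synthesizer = speechsdk.SpeechSynthesizer(speech_config=speech_config)
-result = synthesizer.speak_text_async("Hello world").get()
-```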
-
-The following neural voices are retired:
-
-- The English (United Kingdom) voice `en-GB-MiaNeural` retired on October 30, 2021. All service requests to `en-GB-MiaNeural` are redirected to `en-GB-SoniaNeural` automatically as of October 30, 2021. If you're using container Neural TTS, [download](speech-container-ntts.md#get-the-container-image-with-docker-pull) and deploy the latest version. All requests with previous versions won't succeed starting from October 30, 2021.
-- The `en-US-JessaNeural` voice is retired and replaced by `en-US-AriaNeural`. If you were using "Jessa" before, convert to "Aria."
-### Custom Neural Voice
-
-Custom Neural Voice lets you create synthetic voices that are rich in speaking styles. You can create a unique brand voice in multiple languages and styles by using a small set of recording data. Multi-style custom neural voices support style degree adjustment. There are two Custom Neural Voice (CNV) project types: CNV Pro and CNV Lite (preview).
-
-Select the right locale that matches your training data to train a custom neural voice model. For example, if the recording data is spoken in English with a British accent, select `en-GB`.
-
-With the cross-lingual feature (preview), you can transfer your custom neural voice model to speak a second language. For example, with the `zh-CN` data, you can create a voice that speaks `en-AU` or any of the languages with Cross-lingual support.
--
-# [Pronunciation assessment](#tab/pronunciation-assessment)
-
-The table in this section summarizes the 19 locales supported for pronunciation assessment. Each language is available in all [Speech to text regions](regions.md#speech-service). The latest update extends support from English to 18 additional languages and brings quality enhancements to existing features, including accuracy, fluency, and miscue assessment. Specify the language that you're learning or practicing to improve pronunciation. The default language is `en-US`. If you know your target learning language, [set the locale](how-to-pronunciation-assessment.md#get-pronunciation-assessment-results) accordingly. For example, if you're learning British English, specify the language as `en-GB`. If you're teaching a broader language, such as Spanish, and are uncertain about which locale to select, you can run various accent models (`es-ES`, `es-MX`) to determine which one achieves the highest score for your scenario.
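-
-For example, here's a minimal Python sketch that runs pronunciation assessment against the `en-GB` locale. The key, region, audio file, and reference text are placeholders:
-
-```python
-import azure.cognitiveservices.speech as speechsdk
-
-speech_config = speechsdk.SpeechConfig(subscription="YourSubscriptionKey", region="YourServiceRegion")
-audio_config = speechsdk.audio.AudioConfig(filename="myspeech.wav")
-
-# The recognition language selects the pronunciation assessment locale (en-GB here).
-recognizer = speechsdk.SpeechRecognizer(
-    speech_config=speech_config, language="en-GB", audio_config=audio_config)
-
-pronunciation_config = speechsdk.PronunciationAssessmentConfig(
-    reference_text="Good morning.",
-    grading_system=speechsdk.PronunciationAssessmentGradingSystem.HundredMark,
-    granularity=speechsdk.PronunciationAssessmentGranularity.Phoneme)
-pronunciation_config.apply_to(recognizer)
-
-result = recognizer.recognize_once()
-assessment = speechsdk.PronunciationAssessmentResult(result)
-print(assessment.accuracy_score, assessment.fluency_score)
-```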
--
-# [Speech translation](#tab/speech-translation)
-
-The table in this section summarizes the locales supported for Speech translation. Speech translation supports different languages for speech to speech and speech to text translation. The available target languages depend on whether the translation target is speech or text.
-
-#### Translate from language
-
-To set the input speech recognition language, specify the full locale with a dash (`-`) separator. See the [speech to text language table](?tabs=stt#supported-languages). The default language is `en-US` if you don't specify a language.
-
-#### Translate to text language
-
-To set the translation target language, with few exceptions you only specify the language code that precedes the locale dash (`-`) separator. For example, use `es` for Spanish (Spain) instead of `es-ES`. See the speech translation target language table below. The default language is `en` if you don't specify a language.
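-
-As a small illustration, the following Python sketch sets the full locale for the input language and the short codes for the translation targets. The key and region are placeholders:
-
-```python
-import azure.cognitiveservices.speech as speechsdk
-
-translation_config = speechsdk.translation.SpeechTranslationConfig(
-    subscription="YourSubscriptionKey", region="YourServiceRegion")
-
-# The input recognition language uses the full locale.
-translation_config.speech_recognition_language = "en-US"
-
-# Translation targets usually use only the language code.
-translation_config.add_target_language("es")
-translation_config.add_target_language("fr")
-```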
--
-# [Language identification](#tab/language-identification)
-
-The table in this section summarizes the locales supported for [Language identification](language-identification.md).
-> [!NOTE]
-> Language Identification compares speech at the language level, such as English and German. Do not include multiple locales of the same language in your candidate list.
--
-# [Speaker recognition](#tab/speaker-recognition)
-
-The table in this section summarizes the locales supported for Speaker recognition. Speaker recognition is mostly language agnostic. The universal model for text-independent speaker recognition combines various data sources from multiple languages. We've tuned and evaluated the model on these languages and locales. For more information on speaker recognition, see the [overview](speaker-recognition-overview.md).
--
-# [Custom keyword](#tab/custom-keyword)
-
-The table in this section summarizes the locales supported for custom keyword and keyword verification.
--
-# [Intent Recognition](#tab/intent-recognizer-pattern-matcher)
-
-The table in this section summarizes the locales supported for the Intent Recognizer Pattern Matcher.
--
-***
-
-## Next steps
-
-* [Region support](regions.md)
cognitive-services Logging Audio Transcription https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/logging-audio-transcription.md
- Title: How to log audio and transcriptions for speech recognition-
-description: Learn how to use audio and transcription logging for speech to text and speech translation.
------- Previously updated : 03/28/2023-
-zone_pivot_groups: programming-languages-speech-services-nomore-variant
--
-# How to log audio and transcriptions for speech recognition
-
-You can enable logging for both audio input and recognized speech when using [speech to text](get-started-speech-to-text.md) or [speech translation](get-started-speech-translation.md). For speech translation, only the audio and transcription of the original audio are logged. The translations aren't logged. This article describes how to enable, access, and delete the audio and transcription logs.
-
-Audio and transcription logs can be used as input for [Custom Speech](custom-speech-overview.md) model training. You might have other use cases.
-
-> [!WARNING]
-> Don't depend on audio and transcription logs when the exact record of input audio is required. In the periods of peak load, the service prioritizes hardware resources for transcription tasks. This may result in minor parts of the audio not being logged. Such occasions are rare, but nevertheless possible.
-
-Logging is done asynchronously for both base and custom model endpoints. Audio and transcription logs are stored by the Speech service and not written locally. The logs are retained for 30 days. After this period, the logs are automatically deleted. However, you can [delete](#delete-audio-and-transcription-logs) specific logs or a range of available logs at any time.
-
-## Enable audio and transcription logging
-
-Logging is disabled by default. Logging can be enabled [per recognition session](#enable-logging-for-a-single-recognition-session) or [per custom model endpoint](#enable-audio-and-transcription-logging-for-a-custom-model-endpoint).
-
-### Enable logging for a single recognition session
-
-You can enable logging for a single recognition session, whether using the default base model or [custom model](how-to-custom-speech-deploy-model.md) endpoint.
-
-> [!WARNING]
-> For custom model endpoints, the logging setting of your deployed endpoint is prioritized over your session-level setting (SDK or REST API). If logging is enabled for the custom model endpoint, the session-level setting (whether it's set to true or false) is ignored. If logging isn't enabled for the custom model endpoint, the session-level setting determines whether logging is active.
-
-#### Enable logging for speech to text with the Speech SDK
--
-To enable audio and transcription logging with the Speech SDK, you execute the method `EnableAudioLogging()` of the [SpeechConfig](/dotnet/api/microsoft.cognitiveservices.speech.speechconfig) class instance.
-
-```csharp
-speechConfig.EnableAudioLogging();
-```
-
-To check whether logging is enabled, get the value of the `SpeechServiceConnection_EnableAudioLogging` [property](/dotnet/api/microsoft.cognitiveservices.speech.propertyid):
-
-```csharp
-string isAudioLoggingEnabled = speechConfig.GetProperty(PropertyId.SpeechServiceConnection_EnableAudioLogging);
-```
-
-Each [SpeechRecognizer](/dotnet/api/microsoft.cognitiveservices.speech.speechrecognizer) that uses this `speechConfig` has audio and transcription logging enabled.
--
-To enable audio and transcription logging with the Speech SDK, you execute the method `EnableAudioLogging` of the [SpeechConfig](/cpp/cognitive-services/speech/speechconfig) class instance.
-
-```cpp
-speechConfig->EnableAudioLogging();
-```
-
-To check whether logging is enabled, get the value of the `SpeechServiceConnection_EnableAudioLogging` property:
-
-```cpp
-string isAudioLoggingEnabled = speechConfig->GetProperty(PropertyId::SpeechServiceConnection_EnableAudioLogging);
-```
-
-Each [SpeechRecognizer](/cpp/cognitive-services/speech/speechrecognizer) that uses this `speechConfig` has audio and transcription logging enabled.
--
-To enable audio and transcription logging with the Speech SDK, you execute the method `enableAudioLogging()` of the [SpeechConfig](/java/api/com.microsoft.cognitiveservices.speech.speechconfig) class instance.
-
-```java
-speechConfig.enableAudioLogging();
-```
-
-To check whether logging is enabled, get the value of the `SpeechServiceConnection_EnableAudioLogging` [property](/java/api/com.microsoft.cognitiveservices.speech.propertyid):
-
-```java
-String isAudioLoggingEnabled = speechConfig.getProperty(PropertyId.SpeechServiceConnection_EnableAudioLogging);
-```
-
-Each [SpeechRecognizer](/java/api/com.microsoft.cognitiveservices.speech.speechrecognizer) that uses this `speechConfig` has audio and transcription logging enabled.
--
-To enable audio and transcription logging with the Speech SDK, you execute the method `enableAudioLogging()` of the [SpeechConfig](/javascript/api/microsoft-cognitiveservices-speech-sdk/speechconfig) class instance.
-
-```javascript
-speechConfig.enableAudioLogging();
-```
-
-To check whether logging is enabled, get the value of the `SpeechServiceConnection_EnableAudioLogging` [property](/javascript/api/microsoft-cognitiveservices-speech-sdk/propertyid):
-
-```javascript
-var SpeechSDK;
-SpeechSDK = speechSdk;
-// <...>
-string isAudioLoggingEnabled = speechConfig.getProperty(SpeechSDK.PropertyId.SpeechServiceConnection_EnableAudioLogging);
-```
-
-Each [SpeechRecognizer](/javascript/api/microsoft-cognitiveservices-speech-sdk/speechrecognizer) that uses this `speechConfig` has audio and transcription logging enabled.
--
-To enable audio and transcription logging with the Speech SDK, you execute the method `enable_audio_logging` of the [SpeechConfig](/python/api/azure-cognitiveservices-speech/azure.cognitiveservices.speech.speechconfig) class instance.
-
-```python
-speech_config.enable_audio_logging()
-```
-
-To check whether logging is enabled, get the value of the `SpeechServiceConnection_EnableAudioLogging` [property](/python/api/azure-cognitiveservices-speech/azure.cognitiveservices.speech.propertyid):
-
-```python
-import azure.cognitiveservices.speech as speechsdk
-# <...>
-is_audio_logging_enabled = speech_config.get_property(property_id=speechsdk.PropertyId.SpeechServiceConnection_EnableAudioLogging)
-```
-
-Each [SpeechRecognizer](/python/api/azure-cognitiveservices-speech/azure.cognitiveservices.speech.speechrecognizer) that uses this `speech_config` has audio and transcription logging enabled.
--
-To enable audio and transcription logging with the Speech SDK, you execute the method `enableAudioLogging` of the [SPXSpeechConfiguration](/objectivec/cognitive-services/speech/spxspeechconfiguration) class instance.
-
-```objectivec
-[speechConfig enableAudioLogging];
-```
-
-To check whether logging is enabled, get the value of the `SPXSpeechServiceConnectionEnableAudioLogging` [property](/objectivec/cognitive-services/speech/spxpropertyid):
-
-```objectivec
-NSString *isAudioLoggingEnabled = [speechConfig getPropertyById:SPXSpeechServiceConnectionEnableAudioLogging];
-```
-
-Each [SpeechRecognizer](/objectivec/cognitive-services/speech/spxspeechrecognizer) that uses this `speechConfig` has audio and transcription logging enabled.
--
-#### Enable logging for speech translation with the Speech SDK
-
-For speech translation, only the audio and transcription of the original audio are logged. The translations aren't logged.
--
-To enable audio and transcription logging with the Speech SDK, you execute the method `EnableAudioLogging()` of the [SpeechTranslationConfig](/dotnet/api/microsoft.cognitiveservices.speech.speechtranslationconfig) class instance.
-
-```csharp
-speechTranslationConfig.EnableAudioLogging();
-```
-
-To check whether logging is enabled, get the value of the `SpeechServiceConnection_EnableAudioLogging` [property](/dotnet/api/microsoft.cognitiveservices.speech.propertyid):
-
-```csharp
-string isAudioLoggingEnabled = speechTranslationConfig.GetProperty(PropertyId.SpeechServiceConnection_EnableAudioLogging);
-```
-
-Each [TranslationRecognizer](/dotnet/api/microsoft.cognitiveservices.speech.translation.translationrecognizer) that uses this `speechTranslationConfig` has audio and transcription logging enabled.
--
-To enable audio and transcription logging with the Speech SDK, you execute the method `EnableAudioLogging` of the [SpeechTranslationConfig](/cpp/cognitive-services/speech/translation-speechtranslationconfig) class instance.
-
-```cpp
-speechTranslationConfig->EnableAudioLogging();
-```
-
-To check whether logging is enabled, get the value of the `SpeechServiceConnection_EnableAudioLogging` property:
-
-```cpp
-string isAudioLoggingEnabled = speechTranslationConfig->GetProperty(PropertyId::SpeechServiceConnection_EnableAudioLogging);
-```
-
-Each [TranslationRecognizer](/cpp/cognitive-services/speech/translation-translationrecognizer) that uses this `speechTranslationConfig` has audio and transcription logging enabled.
--
-To enable audio and transcription logging with the Speech SDK, you execute the method `enableAudioLogging()` of the [SpeechTranslationConfig](/java/api/com.microsoft.cognitiveservices.speech.translation.speechtranslationconfig) class instance.
-
-```java
-speechTranslationConfig.enableAudioLogging();
-```
-
-To check whether logging is enabled, get the value of the `SpeechServiceConnection_EnableAudioLogging` [property](/java/api/com.microsoft.cognitiveservices.speech.propertyid):
-
-```java
-String isAudioLoggingEnabled = speechTranslationConfig.getProperty(PropertyId.SpeechServiceConnection_EnableAudioLogging);
-```
-
-Each [TranslationRecognizer](/java/api/com.microsoft.cognitiveservices.speech.translation.translationrecognizer) that uses this `speechTranslationConfig` has audio and transcription logging enabled.
--
-To enable audio and transcription logging with the Speech SDK, you execute the method `enableAudioLogging()` of the [SpeechTranslationConfig](/javascript/api/microsoft-cognitiveservices-speech-sdk/speechtranslationconfig) class instance.
-
-```javascript
-speechTranslationConfig.enableAudioLogging();
-```
-
-To check whether logging is enabled, get the value of the `SpeechServiceConnection_EnableAudioLogging` [property](/javascript/api/microsoft-cognitiveservices-speech-sdk/propertyid):
-
-```javascript
-var SpeechSDK;
-SpeechSDK = speechSdk;
-// <...>
-string isAudioLoggingEnabled = speechTranslationConfig.getProperty(SpeechSDK.PropertyId.SpeechServiceConnection_EnableAudioLogging);
-```
-
-Each [TranslationRecognizer](/javascript/api/microsoft-cognitiveservices-speech-sdk/translationrecognizer) that uses this `speechTranslationConfig` has audio and transcription logging enabled.
--
-To enable audio and transcription logging with the Speech SDK, you execute the method `enable_audio_logging` of the [SpeechTranslationConfig](/python/api/azure-cognitiveservices-speech/azure.cognitiveservices.speech.translation.speechtranslationconfig) class instance.
-
-```python
-speech_translation_config.enable_audio_logging()
-```
-
-To check whether logging is enabled, get the value of the `SpeechServiceConnection_EnableAudioLogging` [property](/python/api/azure-cognitiveservices-speech/azure.cognitiveservices.speech.propertyid):
-
-```python
-import azure.cognitiveservices.speech as speechsdk
-# <...>
-is_audio_logging_enabled = speech_translation_config.get_property(property_id=speechsdk.PropertyId.SpeechServiceConnection_EnableAudioLogging)
-```
-
-Each [TranslationRecognizer](/python/api/azure-cognitiveservices-speech/azure.cognitiveservices.speech.translation.translationrecognizer) that uses this `speech_translation_config` has audio and transcription logging enabled.
--
-To enable audio and transcription logging with the Speech SDK, you execute the method `enableAudioLogging` of the [SPXSpeechTranslationConfiguration](/objectivec/cognitive-services/speech/spxspeechtranslationconfiguration) class instance.
-
-```objectivec
-[speechTranslationConfig enableAudioLogging];
-```
-
-To check whether logging is enabled, get the value of the `SPXSpeechServiceConnectionEnableAudioLogging` [property](/objectivec/cognitive-services/speech/spxpropertyid):
-
-```objectivec
-NSString *isAudioLoggingEnabled = [speechTranslationConfig getPropertyById:SPXSpeechServiceConnectionEnableAudioLogging];
-```
-
-Each [TranslationRecognizer](/objectivec/cognitive-services/speech/spxtranslationrecognizer) that uses this `speechTranslationConfig` has audio and transcription logging enabled.
--
-#### Enable logging for Speech to text REST API for short audio
-
-If you use [Speech to text REST API for short audio](rest-speech-to-text-short.md) and want to enable audio and transcription logging, you need to use the query parameter and value `storeAudio=true` as a part of your REST request. A sample request looks like this:
-
-```http
-https://eastus.stt.speech.microsoft.com/speech/recognition/conversation/cognitiveservices/v1?language=en-US&storeAudio=true
-```
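-
-For illustration, the following Python sketch (using the third-party `requests` package) sends a short WAV file with `storeAudio=true`. The key, region, and file name are placeholders:
-
-```python
-import requests
-
-# Placeholders: replace with your Speech resource key, region, and audio file.
-region = "eastus"
-url = (f"https://{region}.stt.speech.microsoft.com/speech/recognition/"
-       "conversation/cognitiveservices/v1?language=en-US&storeAudio=true")
-headers = {
-    "Ocp-Apim-Subscription-Key": "YourSubscriptionKey",
-    "Content-Type": "audio/wav; codecs=audio/pcm; samplerate=16000",
-    "Accept": "application/json",
-}
-
-with open("whatstheweatherlike.wav", "rb") as audio_file:
-    response = requests.post(url, headers=headers, data=audio_file)
-
-print(response.json())
-```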
-
-### Enable audio and transcription logging for a custom model endpoint
-
-This method is applicable for [Custom Speech](custom-speech-overview.md) endpoints only.
-
-Logging can be enabled or disabled in the persistent custom model endpoint settings. When logging is enabled for a custom model endpoint, you don't need to enable logging at the [recognition session level with the SDK or REST API](#enable-logging-for-a-single-recognition-session). Even when logging isn't enabled for a custom model endpoint, you can enable logging temporarily at the recognition session level with the SDK or REST API.
-
-> [!WARNING]
-> For custom model endpoints, the logging setting of your deployed endpoint is prioritized over your session-level setting (SDK or REST API). If logging is enabled for the custom model endpoint, the session-level setting (whether it's set to true or false) is ignored. If logging isn't enabled for the custom model endpoint, the session-level setting determines whether logging is active.
-
-You can enable audio and transcription logging for a custom model endpoint:
-- When you create the endpoint using the Speech Studio, REST API, or Speech CLI. For details about how to enable logging for a Custom Speech endpoint, see [Deploy a Custom Speech model](how-to-custom-speech-deploy-model.md#add-a-deployment-endpoint).
-- When you update the endpoint ([Endpoints_Update](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/operations/Endpoints_Update)) using the [Speech to text REST API](rest-speech-to-text.md). For an example of how to update the logging setting for an endpoint, see [Turn off logging for a custom model endpoint](#turn-off-logging-for-a-custom-model-endpoint). But instead of setting the `contentLoggingEnabled` property to `false`, set it to `true` to enable logging for the endpoint.
-## Turn off logging for a custom model endpoint
-
-To disable audio and transcription logging for a custom model endpoint, you must update the persistent endpoint logging setting using the [Speech to text REST API](rest-speech-to-text.md). There isn't a way to disable logging for an existing custom model endpoint using the Speech Studio.
-
-To turn off logging for a custom endpoint, use the [Endpoints_Update](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/operations/Endpoints_Update) operation of the [Speech to text REST API](rest-speech-to-text.md). Construct the request body according to the following instructions:
-- Set the `contentLoggingEnabled` property within `properties`. Set this property to `true` to enable logging of the endpoint's traffic. Set this property to `false` to disable logging of the endpoint's traffic.
-Make an HTTP PATCH request using the URI as shown in the following example. Replace `YourSubscriptionKey` with your Speech resource key, replace `YourServiceRegion` with your Speech resource region, replace `YourEndpointId` with your endpoint ID, and set the request body properties as previously described.
-
-```azurecli-interactive
-curl -v -X PATCH -H "Ocp-Apim-Subscription-Key: YourSubscriptionKey" -H "Content-Type: application/json" -d '{
- "properties": {
- "contentLoggingEnabled": false
-  }
-}' "https://YourServiceRegion.api.cognitive.microsoft.com/speechtotext/v3.1/endpoints/YourEndpointId"
-```
-
-You should receive a response body in the following format:
-
-```json
-{
- "self": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.1/endpoints/4ef91f9b-7ac9-4c3b-a238-581ef0f8b7e2",
- "model": {
- "self": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.1/models/71b46720-995d-4038-a331-0317e9e7a02f"
- },
- "links": {
- "logs": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.1/endpoints/4ef91f9b-7ac9-4c3b-a238-581ef0f8b7e2/files/logs",
- "restInteractive": "https://eastus.stt.speech.microsoft.com/speech/recognition/interactive/cognitiveservices/v1?cid=4ef91f9b-7ac9-4c3b-a238-581ef0f8b7e2",
- "restConversation": "https://eastus.stt.speech.microsoft.com/speech/recognition/conversation/cognitiveservices/v1?cid=4ef91f9b-7ac9-4c3b-a238-581ef0f8b7e2",
- "restDictation": "https://eastus.stt.speech.microsoft.com/speech/recognition/dictation/cognitiveservices/v1?cid=4ef91f9b-7ac9-4c3b-a238-581ef0f8b7e2",
- "webSocketInteractive": "wss://eastus.stt.speech.microsoft.com/speech/recognition/interactive/cognitiveservices/v1?cid=4ef91f9b-7ac9-4c3b-a238-581ef0f8b7e2",
- "webSocketConversation": "wss://eastus.stt.speech.microsoft.com/speech/recognition/conversation/cognitiveservices/v1?cid=4ef91f9b-7ac9-4c3b-a238-581ef0f8b7e2",
- "webSocketDictation": "wss://eastus.stt.speech.microsoft.com/speech/recognition/dictation/cognitiveservices/v1?cid=4ef91f9b-7ac9-4c3b-a238-581ef0f8b7e2"
- },
- "project": {
- "self": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.1/projects/122fd2f7-1d3a-4404-885d-2b24a2a187e8"
- },
- "properties": {
- "loggingEnabled": false
- },
- "lastActionDateTime": "2023-03-28T23:03:15Z",
- "status": "Succeeded",
- "createdDateTime": "2023-03-28T23:02:40Z",
- "locale": "en-US",
- "displayName": "My Endpoint",
- "description": "My Endpoint Description"
-}
-```
-
-The response body should reflect the new setting. The name of the logging property in the response (`loggingEnabled`) is different from the name of the logging property that you set in the request (`contentLoggingEnabled`).
-
-## Get audio and transcription logs
-
-You can access audio and transcription logs using [Speech to text REST API](#get-audio-and-transcription-logs-with-speech-to-text-rest-api). For [custom model](how-to-custom-speech-deploy-model.md) endpoints, you can also use [Speech Studio](#get-audio-and-transcription-logs-with-speech-studio). See details in the following sections.
-
-> [!NOTE]
-> Logging data is kept for 30 days. After this period, the logs are automatically deleted. However, you can [delete](#delete-audio-and-transcription-logs) specific logs or a range of available logs at any time.
-
-### Get audio and transcription logs with Speech Studio
-
-This method is applicable for [custom model](how-to-custom-speech-deploy-model.md) endpoints only.
-
-To download the endpoint logs:
-
-1. Sign in to the [Speech Studio](https://aka.ms/speechstudio/customspeech).
-1. Select **Custom Speech** > Your project name > **Deploy models**.
-1. Select the link by endpoint name.
-1. Under **Content logging**, select **Download log**.
-
-With this approach, you can download all available log sets at once. There's no way to download selected log sets in Speech Studio.
-
-### Get audio and transcription logs with Speech to text REST API
-
-You can download all or a subset of available log sets.
-
-This method is applicable for base and [custom model](how-to-custom-speech-deploy-model.md) endpoints. To list and download audio and transcription logs:
-- Base models: Use the [Endpoints_ListBaseModelLogs](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/operations/Endpoints_ListBaseModelLogs) operation of the [Speech to text REST API](rest-speech-to-text.md). This operation gets the list of audio and transcription logs that have been stored when using the default base model of a given language.
-- Custom model endpoints: Use the [Endpoints_ListLogs](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/operations/Endpoints_ListLogs) operation of the [Speech to text REST API](rest-speech-to-text.md). This operation gets the list of audio and transcription logs that have been stored for a given endpoint. A sketch of both calls follows this list.
-
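-A minimal sketch of both list calls, assuming placeholder values for the key, region, endpoint ID, and locale:
-
-```azurecli-interactive
-# Logs stored for the default base model of a given locale
-curl -v -X GET -H "Ocp-Apim-Subscription-Key: YourSubscriptionKey" \
-  "https://YourServiceRegion.api.cognitive.microsoft.com/speechtotext/v3.1/endpoints/base/en-US/files/logs"
-
-# Logs stored for a custom model endpoint
-curl -v -X GET -H "Ocp-Apim-Subscription-Key: YourSubscriptionKey" \
-  "https://YourServiceRegion.api.cognitive.microsoft.com/speechtotext/v3.1/endpoints/YourEndpointId/files/logs"
-```
-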
-### Get log IDs with Speech to text REST API
-
-In some scenarios, you may need to get IDs of the available logs. For example, you may want to delete a specific log as described [later in this article](#delete-specific-log).
-
-To get IDs of the available logs:
-- Base models: Use the [Endpoints_ListBaseModelLogs](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/operations/Endpoints_ListBaseModelLogs) operation of the [Speech to text REST API](rest-speech-to-text.md). This operation gets the list of audio and transcription logs that have been stored when using the default base model of a given language.
-- Custom model endpoints: Use the [Endpoints_ListLogs](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/operations/Endpoints_ListLogs) operation of the [Speech to text REST API](rest-speech-to-text.md). This operation gets the list of audio and transcription logs that have been stored for a given endpoint.
-
-Here's a sample output of [Endpoints_ListLogs](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/operations/Endpoints_ListLogs). For simplicity, only one log set is shown:
-
-```json
-{
- "values": [
- {
- "self": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.1/endpoints/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx/files/logs/2023-03-13_163715__0420c53d-e6ac-4857-bce0-f39c3f9f5ff9_v2_json",
- "name": "163715__0420c53d-e6ac-4857-bce0-f39c3f9f5ff9.v2.json",
- "kind": "Transcription",
- "properties": {
- "size": 79920
- },
- "createdDateTime": "2023-03-13T16:37:15Z",
- "links": {
- "contentUrl": "<Link to download log file>"
- }
- },
- {
- "self": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.1/endpoints/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx/files/logs/2023-03-13_163715__0420c53d-e6ac-4857-bce0-f39c3f9f5ff9_wav",
- "name": "163715__0420c53d-e6ac-4857-bce0-f39c3f9f5ff9.wav",
- "kind": "Audio",
- "properties": {
- "size": 932966
- },
- "createdDateTime": "2023-03-13T16:37:15Z",
- "links": {
-        "contentUrl": "<Link to download log file>"
- }
- }
- ]
-}
-```
-
-The locations of each audio and transcription log file are returned in the response body. See the corresponding `kind` property to determine whether the file includes the audio (`"kind": "Audio"`) or the transcription (`"kind": "Transcription"`).
-
-The log ID for each log file is the last part of the URL in the `"self"` element value. The log ID in the following example is `2023-03-13_163715__0420c53d-e6ac-4857-bce0-f39c3f9f5ff9_v2_json`.
-
-```json
-"self": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.1/endpoints/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx/files/logs/2023-03-13_163715__0420c53d-e6ac-4857-bce0-f39c3f9f5ff9_v2_json"
-```
-
-## Delete audio and transcription logs
-
-Logging data is kept for 30 days. After this period, the logs are automatically deleted. However, you can delete specific logs or a range of available logs at any time.
-
-For any base or [custom model](how-to-custom-speech-deploy-model.md) endpoint you can delete all available logs, logs for a given time frame, or a particular log based on its Log ID. The deletion process is done asynchronously and can take minutes, hours, one day, or longer depending on the number of log files.
-
-To delete audio and transcription logs you must use the [Speech to text REST API](rest-speech-to-text.md). There isn't a way to delete logs using the Speech Studio.
-
-### Delete all logs or logs for a given time frame
-
-To delete all logs or logs for a given time frame:
-
-- Base models: Use the [Endpoints_DeleteBaseModelLogs](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/operations/Endpoints_DeleteBaseModelLogs) operation of the [Speech to text REST API](rest-speech-to-text.md).
-- Custom model endpoints: Use the [Endpoints_DeleteLogs](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/operations/Endpoints_DeleteLogs) operation of the [Speech to text REST API](rest-speech-to-text.md).
-
-Optionally, set the `endDate` of the audio logs deletion (specific day, UTC). Expected format: "yyyy-mm-dd". For instance, "2023-03-15" results in deleting all logs on March 15, 2023 and before.
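-
-A minimal sketch of such a request for a custom model endpoint, assuming placeholder values for the key, region, endpoint ID, and end date:
-
-```azurecli-interactive
-# Delete all logs on and before the specified end date (UTC, "yyyy-mm-dd")
-curl -v -X DELETE -H "Ocp-Apim-Subscription-Key: YourSubscriptionKey" \
-  "https://YourServiceRegion.api.cognitive.microsoft.com/speechtotext/v3.1/endpoints/YourEndpointId/files/logs?endDate=2023-03-15"
-```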
-
-### Delete specific log
-
-To delete a specific log by ID:
-
-- Base models: Use the [Endpoints_DeleteBaseModelLog](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/operations/Endpoints_DeleteBaseModelLog) operation of the [Speech to text REST API](rest-speech-to-text.md).
-- Custom model endpoints: Use the [Endpoints_DeleteLog](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/operations/Endpoints_DeleteLog) operation of the [Speech to text REST API](rest-speech-to-text.md).
-
-For details about how to get Log IDs, see a previous section [Get log IDs with Speech to text REST API](#get-log-ids-with-speech-to-text-rest-api).
-
-Since audio and transcription logs have separate IDs (such as IDs `2023-03-13_163715__0420c53d-e6ac-4857-bce0-f39c3f9f5ff9_v2_json` and `2023-03-13_163715__0420c53d-e6ac-4857-bce0-f39c3f9f5ff9_wav` from a [previous example in this article](#get-log-ids-with-speech-to-text-rest-api)), when you want to delete both audio and transcription logs you execute separate [delete by ID](#delete-specific-log) requests.
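-
-A minimal sketch of deleting both files from a custom model endpoint, using the example log IDs above and placeholder values for the key, region, and endpoint ID:
-
-```azurecli-interactive
-# Audio and transcription logs are deleted with separate requests, one per log ID.
-curl -v -X DELETE -H "Ocp-Apim-Subscription-Key: YourSubscriptionKey" \
-  "https://YourServiceRegion.api.cognitive.microsoft.com/speechtotext/v3.1/endpoints/YourEndpointId/files/logs/2023-03-13_163715__0420c53d-e6ac-4857-bce0-f39c3f9f5ff9_v2_json"
-curl -v -X DELETE -H "Ocp-Apim-Subscription-Key: YourSubscriptionKey" \
-  "https://YourServiceRegion.api.cognitive.microsoft.com/speechtotext/v3.1/endpoints/YourEndpointId/files/logs/2023-03-13_163715__0420c53d-e6ac-4857-bce0-f39c3f9f5ff9_wav"
-```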
-
-## Next steps
-
-* [Speech to text quickstart](get-started-speech-to-text.md)
-* [Speech translation quickstart](./get-started-speech-translation.md)
-* [Create and train custom speech models](custom-speech-overview.md)
cognitive-services Migrate To Batch Synthesis https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/migrate-to-batch-synthesis.md
- Title: Migrate to Batch synthesis API - Speech service
-description: This document helps developers migrate code from Long Audio REST API to Batch synthesis REST API.
- Previously updated : 09/01/2022
-# Migrate code from Long Audio API to Batch synthesis API
-
-The [Batch synthesis API](batch-synthesis.md) (Preview) provides asynchronous synthesis of long-form text to speech. Benefits of upgrading from Long Audio API to Batch synthesis API, and details about how to do so, are described in the sections below.
-
-> [!IMPORTANT]
-> [Batch synthesis API](batch-synthesis.md) is currently in public preview. Once it's generally available, the Long Audio API will be deprecated.
-
-## Base path
-
-You must update the base path in your code from `/texttospeech/v3.0/longaudiosynthesis` to `/texttospeech/3.1-preview1/batchsynthesis`. For example, to list synthesis jobs for your Speech resource in the `eastus` region, use `https://eastus.customvoice.api.speech.microsoft.com/api/texttospeech/3.1-preview1/batchsynthesis` instead of `https://eastus.customvoice.api.speech.microsoft.com/api/texttospeech/v3.0/longaudiosynthesis`.
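-
-For example, a minimal sketch of listing synthesis jobs against the new base path (the subscription key placeholder is illustrative):
-
-```azurecli-interactive
-curl -v -X GET -H "Ocp-Apim-Subscription-Key: YourSubscriptionKey" \
-  "https://eastus.customvoice.api.speech.microsoft.com/api/texttospeech/3.1-preview1/batchsynthesis"
-```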
-
-## Regions and endpoints
-
-Batch synthesis API is available in all [Speech regions](regions.md).
-
-The Long Audio API is limited to the following regions:
-
-| Region | Endpoint |
-|--|-|
-| Australia East | `https://australiaeast.customvoice.api.speech.microsoft.com` |
-| East US | `https://eastus.customvoice.api.speech.microsoft.com` |
-| India Central | `https://centralindia.customvoice.api.speech.microsoft.com` |
-| South Central US | `https://southcentralus.customvoice.api.speech.microsoft.com` |
-| Southeast Asia | `https://southeastasia.customvoice.api.speech.microsoft.com` |
-| UK South | `https://uksouth.customvoice.api.speech.microsoft.com` |
-| West Europe | `https://westeurope.customvoice.api.speech.microsoft.com` |
-
-## Voices list
-
-Batch synthesis API supports all [text to speech voices and styles](language-support.md?tabs=tts).
-
-The Long Audio API is limited to the set of voices returned by a GET request to `https://<endpoint>/api/texttospeech/v3.0/longaudiosynthesis/voices`.
-
-## Text inputs
-
-Batch synthesis text inputs are sent in a JSON payload of up to 500 kilobytes.
-
-Long Audio API text inputs are uploaded from a file that meets the following requirements:
-* One plain text (.txt) or SSML text (.txt) file encoded as [UTF-8 with Byte Order Mark (BOM)](https://www.w3.org/International/questions/qa-utf8-bom.en#bom). Don't use compressed files such as ZIP. If you have more than one input file, you must submit multiple requests.
-* Contains more than 400 characters for plain text or 400 [billable characters](./text-to-speech.md#pricing-note) for SSML text, and less than 10,000 paragraphs. For plain text, each paragraph is separated by a new line. For SSML text, each SSML piece is considered a paragraph. Separate SSML pieces by different paragraphs.
-
-With Batch synthesis API, you can use any of the [supported SSML elements](speech-synthesis-markup.md), including the `audio`, `mstts:backgroundaudio`, and `lexicon` elements. The `audio`, `mstts:backgroundaudio`, and `lexicon` elements aren't supported by Long Audio API.
-
-## Audio output formats
-
-Batch synthesis API supports all [text to speech audio output formats](rest-text-to-speech.md#audio-outputs).
-
-The Long Audio API is limited to the following set of audio output formats. The sample rate for long audio voices is 24kHz, not 48kHz. Other sample rates can be obtained through upsampling or downsampling when synthesizing.
-
-* riff-8khz-16bit-mono-pcm
-* riff-16khz-16bit-mono-pcm
-* riff-24khz-16bit-mono-pcm
-* riff-48khz-16bit-mono-pcm
-* audio-16khz-32kbitrate-mono-mp3
-* audio-16khz-64kbitrate-mono-mp3
-* audio-16khz-128kbitrate-mono-mp3
-* audio-24khz-48kbitrate-mono-mp3
-* audio-24khz-96kbitrate-mono-mp3
-* audio-24khz-160kbitrate-mono-mp3
-
-## Getting results
-
-With batch synthesis API, use the URL from the `outputs.result` property of the GET batch synthesis response. The [results](batch-synthesis.md#batch-synthesis-results) are in a ZIP file that contains the audio (such as `0001.wav`), summary, and debug details.
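-
-A minimal sketch of retrieving a job and then downloading its results, assuming a placeholder job ID appended to the base path and an `outputs.result` URL taken from the response:
-
-```azurecli-interactive
-# Get the synthesis job; the response is expected to contain an outputs.result URL.
-curl -v -X GET -H "Ocp-Apim-Subscription-Key: YourSubscriptionKey" \
-  "https://eastus.customvoice.api.speech.microsoft.com/api/texttospeech/3.1-preview1/batchsynthesis/YourSynthesisId"
-
-# Download the results ZIP from the URL returned in outputs.result.
-curl -L -o results.zip "<outputs.result URL from the previous response>"
-```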
-
-Long Audio API text inputs and results are returned via two separate content URLs as shown in the following example. The one with `"kind": "LongAudioSynthesisScript"` is the input script submitted. The other one with `"kind": "LongAudioSynthesisResult"` is the result of this request. Both ZIP files can be downloaded from the URL in their `links.contentUrl` property.
-
-## Cleaning up resources
-
-Batch synthesis API supports up to 200 batch synthesis jobs that don't have a status of "Succeeded" or "Failed". The Speech service will keep each synthesis history for up to 31 days, or the duration of the request `timeToLive` property, whichever comes sooner. The date and time of automatic deletion (for synthesis jobs with a status of "Succeeded" or "Failed") is equal to the `lastActionDateTime` + `timeToLive` properties.
-
-The Long Audio API is limited to 20,000 requests for each Azure subscription account. The Speech service doesn't remove job history automatically. You must remove the previous job run history before making new requests that would otherwise exceed the limit.
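-
-For the Batch synthesis API, a minimal sketch of deleting a completed job by ID (the `YourSynthesisId` placeholder and the per-job path are assumptions based on the base path above):
-
-```azurecli-interactive
-curl -v -X DELETE -H "Ocp-Apim-Subscription-Key: YourSubscriptionKey" \
-  "https://eastus.customvoice.api.speech.microsoft.com/api/texttospeech/3.1-preview1/batchsynthesis/YourSynthesisId"
-```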
-
-## Next steps
-
-- [Batch synthesis API](batch-synthesis.md)
-- [Speech Synthesis Markup Language (SSML)](speech-synthesis-markup.md)
-- [Text to speech quickstart](get-started-text-to-speech.md)
cognitive-services Migrate V2 To V3 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/migrate-v2-to-v3.md
- Title: Migrate from v2 to v3 REST API - Speech service
-description: This document helps developers migrate code from v2 to v3 of the Speech to text REST API.
- Previously updated : 11/29/2022
-# Migrate code from v2.0 to v3.0 of the REST API
-
-Compared to v2, the v3 version of the Speech services REST API for speech to text is more reliable, easier to use, and more consistent with APIs for similar services. Most teams can migrate from v2 to v3 in a day or two.
-
-> [!IMPORTANT]
-> The Speech to text REST API v2.0 is deprecated and will be retired by February 29, 2024. Please migrate your applications to the Speech to text REST API v3.1. Complete the steps in this article and then see the [Migrate code from v3.0 to v3.1 of the REST API](migrate-v3-0-to-v3-1.md) guide for additional requirements.
-
-## Forward compatibility
-
-All entities from v2 can also be found in the v3 API under the same identity. Where the schema of a result has changed (for example, transcriptions), the result of a GET in the v3 version of the API uses the v3 schema. The result of a GET in the v2 version of the API uses the same v2 schema. Newly created entities on v3 aren't available in responses from v2 APIs.
-
-## Migration steps
-
-This is a summary list of items you need to be aware of when you're preparing for migration. Details are found in the individual links. Depending on your current use of the API, not all steps listed here may apply. Only a few changes require non-trivial changes in the calling code. Most changes just require a change to item names.
-
-General changes:
-
-1. [Change the host name](#host-name-changes)
-
-1. [Rename the property id to self in your client code](#identity-of-an-entity)
-
-1. [Change code to iterate over collections of entities](#working-with-collections-of-entities)
-
-1. [Rename the property name to displayName in your client code](#name-of-an-entity)
-
-1. [Adjust the retrieval of the metadata of referenced entities](#accessing-referenced-entities)
-
-1. If you use Batch transcription:
-
- * [Adjust code for creating batch transcriptions](#creating-transcriptions)
-
- * [Adapt code to the new transcription results schema](#format-of-v3-transcription-results)
-
- * [Adjust code for how results are retrieved](#getting-the-content-of-entities-and-the-results)
-
-1. If you use Custom model training/testing APIs:
-
- * [Apply modifications to custom model training](#customizing-models)
-
- * [Change how base and custom models are retrieved](#retrieving-base-and-custom-models)
-
- * [Rename the path segment accuracytests to evaluations in your client code](#accuracy-tests)
-
-1. If you use endpoints APIs:
-
- * [Change how endpoint logs are retrieved](#retrieving-endpoint-logs)
-
-1. Other minor changes:
-
- * [Pass all custom properties as customProperties instead of properties in your POST requests](#using-custom-properties)
-
- * [Read the location from response header Location instead of Operation-Location](#response-headers)
-
-## Breaking changes
-
-### Host name changes
-
-Endpoint host names have changed from `{region}.cris.ai` to `{region}.api.cognitive.microsoft.com`. Paths to the new endpoints no longer contain `api/` because it's part of the hostname. The [Speech to text REST API v3.0](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0) reference documentation lists valid regions and paths.
->[!IMPORTANT]
->Change the hostname from `{region}.cris.ai` to `{region}.api.cognitive.microsoft.com` where region is the region of your speech subscription. Also remove `api/` from any path in your client code.
-
-### Identity of an entity
-
-The property `id` is now `self`. In v2, an API user had to know how the API's paths are constructed. This was non-extensible and required unnecessary work from the user. The property `id` (uuid) is replaced by `self` (string), which is the location of the entity (a URL). The value is still unique across all your entities. If `id` is stored as a string in your code, a rename is enough to support the new schema. You can now use the `self` content as the URL for the `GET`, `PATCH`, and `DELETE` REST calls for your entity.
-
-If the entity has additional functionality available through other paths, they are listed under `links`. The following example for transcription shows a separate method to `GET` the content of the transcription:
->[!IMPORTANT]
->Rename the property `id` to `self` in your client code. Change the type from `uuid` to `string` if needed.
-
-**v2 transcription:**
-
-```json
-{
- "id": "9891c965-bb32-4880-b14b-6d44efb158f3",
- "createdDateTime": "2019-01-07T11:34:12Z",
- "lastActionDateTime": "2019-01-07T11:36:07Z",
- "status": "Succeeded",
- "locale": "en-US",
- "name": "Transcription using locale en-US"
-}
-```
-
-**v3 transcription:**
-
-```json
-{
- "self": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.0/transcriptions/9891c965-bb32-4880-b14b-6d44efb158f3",
- "createdDateTime": "2019-01-07T11:34:12Z",
- "lastActionDateTime": "2019-01-07T11:36:07Z",
- "status": "Succeeded",
- "locale": "en-US",
- "displayName": "Transcription using locale en-US",
- "links": {
- "files": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.0/transcriptions/9891c965-bb32-4880-b14b-6d44efb158f3/files"
- }
-}
-```
-
-Depending on your code's implementation, renaming the property might not be enough. We recommend using the returned `self` and `links` values as the target URLs of your REST calls, rather than generating paths in your client. By using the returned URLs, you can be sure that future changes in paths won't break your client code.
-
-### Working with collections of entities
-
-Previously, the v2 API returned all available entities in a single result. To allow more fine-grained control over the expected response size, all collection results in v3 are paginated. You have control over the count of returned entities and the starting offset of the page. This behavior makes it easy to predict the runtime of the response processor.
-
-The basic shape of the response is the same for all collections:
-
-```json
-{
- "values": [
- {
- }
- ],
- "@nextLink": "https://{region}.api.cognitive.microsoft.com/speechtotext/v3.0/{collection}?skip=100&top=100"
-}
-```
-
-The `values` property contains a subset of the available collection entities. The count and offset can be controlled using the `skip` and `top` query parameters. When `@nextLink` is not `null`, there's more data available and the next batch of data can be retrieved by doing a GET on `$.@nextLink`.
-
-This change requires calling the `GET` for the collection in a loop until all elements have been returned.
-
->[!IMPORTANT]
->When the response of a GET to `speechtotext/v3.1/{collection}` contains a value in `$.@nextLink`, continue issuing `GET` requests on `$.@nextLink` until `$.@nextLink` is no longer set, in order to retrieve all elements of that collection.
-
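-As a minimal sketch of that loop, assuming `curl` and `jq` are available and using placeholder values for the region, key, and the `transcriptions` collection:
-
-```azurecli-interactive
-url="https://YourServiceRegion.api.cognitive.microsoft.com/speechtotext/v3.0/transcriptions?skip=0&top=100"
-while [ -n "$url" ] && [ "$url" != "null" ]; do
-  page=$(curl -s -H "Ocp-Apim-Subscription-Key: YourSubscriptionKey" "$url")
-  echo "$page" | jq -r '.values[].self'      # process the entities on this page
-  url=$(echo "$page" | jq -r '."@nextLink"') # null when there are no more pages
-done
-```
-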
-### Creating transcriptions
-
-A detailed description on how to create batches of transcriptions can be found in [Batch transcription How-to](./batch-transcription.md).
-
-The v3 transcription API lets you set specific transcription options explicitly. All (optional) configuration properties can now be set in the `properties` property.
-Version v3 also supports multiple input files, so it requires a list of URLs rather than a single URL as v2 did. The v2 property name `recordingsUrl` is now `contentUrls` in v3. The functionality of analyzing sentiment in transcriptions has been removed in v3. See Microsoft Cognitive Service [Text Analysis](https://azure.microsoft.com/services/cognitive-services/text-analytics/) for sentiment analysis options.
-
-The new property `timeToLive` under `properties` can help prune the existing completed entities. The `timeToLive` specifies a duration after which a completed entity will be deleted automatically. Set it to a high value (for example `PT12H`) when the entities are continuously tracked, consumed, and deleted and therefore usually processed long before 12 hours have passed.
-
-**v2 transcription POST request body:**
-
-```json
-{
- "locale": "en-US",
- "name": "Transcription using locale en-US",
- "recordingsUrl": "https://contoso.com/mystoragelocation",
- "properties": {
- "AddDiarization": "False",
- "AddWordLevelTimestamps": "False",
- "PunctuationMode": "DictatedAndAutomatic",
- "ProfanityFilterMode": "Masked"
- }
-}
-```
-
-**v3 transcription POST request body:**
-
-```json
-{
- "locale": "en-US",
- "displayName": "Transcription using locale en-US",
- "contentUrls": [
- "https://contoso.com/mystoragelocation",
- "https://contoso.com/myotherstoragelocation"
- ],
- "properties": {
- "diarizationEnabled": false,
- "wordLevelTimestampsEnabled": false,
- "punctuationMode": "DictatedAndAutomatic",
- "profanityFilterMode": "Masked"
- }
-}
-```
->[!IMPORTANT]
->Rename the property `recordingsUrl` to `contentUrls` and pass an array of urls instead of a single url. Pass settings for `diarizationEnabled` or `wordLevelTimestampsEnabled` as `bool` instead of `string`.
-
-### Format of v3 transcription results
-
-The schema of transcription results has changed slightly to align with transcriptions created by real-time endpoints. Find an in-depth description of the new format in the [Batch transcription How-to](./batch-transcription.md). The schema of the result is published in our [GitHub sample repository](https://aka.ms/csspeech/samples) under `samples/batch/transcriptionresult_v3.schema.json`.
-
-Property names are now camel-cased, and the values for `channel` and `speaker` now use integer types. The format for durations now uses the structure described in ISO 8601, which matches duration formatting used in other Azure APIs.
-
-Sample of a v3 transcription result. The differences are described in the comments.
-
-```json
-{
- "source": "...", // (new in v3) was AudioFileName / AudioFileUrl
- "timestamp": "2020-06-16T09:30:21Z", // (new in v3)
- "durationInTicks": 41200000, // (new in v3) was AudioLengthInSeconds
- "duration": "PT4.12S", // (new in v3)
- "combinedRecognizedPhrases": [ // (new in v3) was CombinedResults
- {
- "channel": 0, // (new in v3) was ChannelNumber
- "lexical": "hello world",
- "itn": "hello world",
- "maskedITN": "hello world",
- "display": "Hello world."
- }
- ],
- "recognizedPhrases": [ // (new in v3) was SegmentResults
- {
- "recognitionStatus": "Success", //
- "channel": 0, // (new in v3) was ChannelNumber
- "offset": "PT0.07S", // (new in v3) new format, was OffsetInSeconds
- "duration": "PT1.59S", // (new in v3) new format, was DurationInSeconds
- "offsetInTicks": 700000.0, // (new in v3) was Offset
- "durationInTicks": 15900000.0, // (new in v3) was Duration
-
- // possible transcriptions of the current phrase with confidences
- "nBest": [
- {
-          "confidence": 0.898652852,
- "speaker": 1,
- "lexical": "hello world",
- "itn": "hello world",
- "maskedITN": "hello world",
- "display": "Hello world.",
-
- "words": [
- {
- "word": "hello",
- "offset": "PT0.09S",
- "duration": "PT0.48S",
- "offsetInTicks": 900000.0,
- "durationInTicks": 4800000.0,
- "confidence": 0.987572
- },
- {
- "word": "world",
- "offset": "PT0.59S",
- "duration": "PT0.16S",
- "offsetInTicks": 5900000.0,
- "durationInTicks": 1600000.0,
- "confidence": 0.906032
- }
- ]
- }
- ]
- }
- ]
-}
-```
->[!IMPORTANT]
->Deserialize the transcription result into the new type as shown above. Instead of a single file per audio channel, distinguish channels by checking the property value of `channel` for each element in `recognizedPhrases`. There is now a single result file for each input file.
--
-### Getting the content of entities and the results
-
-In v2, the links to the input or result files have been inlined with the rest of the entity metadata. As an improvement in v3, there is a clear separation between entity metadata (which is returned by a GET on `$.self`) and the details and credentials to access the result files. This separation helps protect customer data and allows fine control over the duration of validity of the credentials.
-
-In v3, `links` include a sub-property called `files` in case the entity exposes data (datasets, transcriptions, endpoints, or evaluations). A GET on `$.links.files` will return a list of files and a SAS URL
-to access the content of each file. To control the validity duration of the SAS URLs, the query parameter `sasValidityInSeconds` can be used to specify the lifetime.
-
-**v2 transcription:**
-
-```json
-{
- "id": "9891c965-bb32-4880-b14b-6d44efb158f3",
- "status": "Succeeded",
- "reportFileUrl": "https://contoso.com/report.txt?st=2018-02-09T18%3A07%3A00Z&se=2018-02-10T18%3A07%3A00Z&sp=rl&sv=2017-04-17&sr=b&sig=6c044930-3926-4be4-be76-f728327c53b5",
- "resultsUrls": {
- "channel_0": "https://contoso.com/audiofile1.wav?st=2018-02-09T18%3A07%3A00Z&se=2018-02-10T18%3A07%3A00Z&sp=rl&sv=2017-04-17&sr=b&sig=6c044930-3926-4be4-be76-f72832e6600c",
- "channel_1": "https://contoso.com/audiofile2.wav?st=2018-02-09T18%3A07%3A00Z&se=2018-02-10T18%3A07%3A00Z&sp=rl&sv=2017-04-17&sr=b&sig=3e0163f1-0029-4d4a-988d-3fba7d7c53b5"
- }
-}
-```
-
-**v3 transcription:**
-
-```json
-{
- "self": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.0/transcriptions/9891c965-bb32-4880-b14b-6d44efb158f3",
- "links": {
- "files": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.0/transcriptions/9891c965-bb32-4880-b14b-6d44efb158f3/files"
- }
-}
-```
-
-**A GET on `$.links.files` would result in:**
-
-```json
-{
- "values": [
- {
- "self": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.0/transcriptions/9891c965-bb32-4880-b14b-6d44efb158f3/files/f23e54f5-ed74-4c31-9730-2f1a3ef83ce8",
- "name": "Name",
- "kind": "Transcription",
- "properties": {
- "size": 200
- },
- "createdDateTime": "2020-01-13T08:00:00Z",
- "links": {
- "contentUrl": "https://customspeech-usw.blob.core.windows.net/artifacts/mywavefile1.wav.json?st=2018-02-09T18%3A07%3A00Z&se=2018-02-10T18%3A07%3A00Z&sp=rl&sv=2017-04-17&sr=b&sig=e05d8d56-9675-448b-820c-4318ae64c8d5"
- }
- },
- {
- "self": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.0/transcriptions/9891c965-bb32-4880-b14b-6d44efb158f3/files/28bc946b-c251-4a86-84f6-ea0f0a2373ef",
- "name": "Name",
- "kind": "TranscriptionReport",
- "properties": {
- "size": 200
- },
- "createdDateTime": "2020-01-13T08:00:00Z",
- "links": {
- "contentUrl": "https://customspeech-usw.blob.core.windows.net/artifacts/report.json?st=2018-02-09T18%3A07%3A00Z&se=2018-02-10T18%3A07%3A00Z&sp=rl&sv=2017-04-17&sr=b&sig=e05d8d56-9675-448b-820c-4318ae64c8d5"
- }
- }
- ],
- "@nextLink": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.0/transcriptions/9891c965-bb32-4880-b14b-6d44efb158f3/files?skip=2&top=2"
-}
-```
-
-The `kind` property indicates the format of content of the file. For transcriptions, the files of kind `TranscriptionReport` are the summary of the job and files of the kind `Transcription` are the result of the job itself.
-
->[!IMPORTANT]
->To get the results of operations, use a `GET` on `/speechtotext/v3.0/{collection}/{id}/files`, they are no longer contained in the responses of `GET` on `/speechtotext/v3.0/{collection}/{id}` or `/speechtotext/v3.0/{collection}`.
-
-### Customizing models
-
-Before v3, there was a distinction between an _acoustic model_ and a _language model_ when a model was being trained. This distinction resulted in the need to specify multiple models when creating endpoints or transcriptions. To simplify this process for a caller, we removed the differences and made everything depend on the content of the datasets that are being used for model training. With this change, the model creation now supports mixed datasets (language data and acoustic data). Endpoints and transcriptions now require only one model.
-
-With this change, the need for a `kind` in the `POST` operation has been removed and the `datasets[]` array can now contain multiple datasets of the same or mixed kinds.
-
-To improve the results of a trained model, the acoustic data is automatically used internally during language training. In general, models created through the v3 API deliver more accurate results than models created with the v2 API.
-
->[!IMPORTANT]
->To customize both the acoustic and language model part, pass all of the required language and acoustic datasets in `datasets[]` of the POST to `/speechtotext/v3.0/models`. This will create a single model with both parts customized.
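-
-A minimal sketch of such a request, assuming placeholder IDs and illustrative field names for the base model and dataset references (check the v3.0 reference for the exact schema):
-
-```azurecli-interactive
-# Field names and placeholders below are illustrative; the datasets[] array carries both language and acoustic datasets.
-curl -v -X POST -H "Ocp-Apim-Subscription-Key: YourSubscriptionKey" -H "Content-Type: application/json" -d '{
-  "displayName": "Model with language and acoustic data",
-  "locale": "en-US",
-  "baseModel": {
-    "self": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.0/models/base/<base-model-id>"
-  },
-  "datasets": [
-    { "self": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.0/datasets/<language-dataset-id>" },
-    { "self": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.0/datasets/<acoustic-dataset-id>" }
-  ]
-}' "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.0/models"
-```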
-
-### Retrieving base and custom models
-
-To simplify getting the available models, v3 has separated the collections of "base models" from the customer owned "customized models". The two routes are now
-`GET /speechtotext/v3.0/models/base` and `GET /speechtotext/v3.0/models/`.
-
-In v2, all models were returned together in a single response.
-
->[!IMPORTANT]
->To get a list of provided base models for customization, use `GET` on `/speechtotext/v3.0/models/base`. You can find your own customized models with a `GET` on `/speechtotext/v3.0/models`.
-
-### Name of an entity
-
-The `name` property is now `displayName`. This is consistent with other Azure APIs, where such a property doesn't indicate identity. The value of this property doesn't have to be unique and can be changed after entity creation with a `PATCH` operation.
-
-**v2 transcription:**
-
-```json
-{
- "name": "Transcription using locale en-US"
-}
-```
-
-**v3 transcription:**
-
-```json
-{
- "displayName": "Transcription using locale en-US"
-}
-```
-
->[!IMPORTANT]
->Rename the property `name` to `displayName` in your client code.
-
-### Accessing referenced entities
-
-In v2, referenced entities were always inlined, for example the used models of an endpoint. The nesting of entities resulted in large responses and consumers rarely consumed the nested content. To shrink the response size and improve performance, the referenced entities are no longer inlined in the response. Instead, a reference to the other entity appears, and can directly be used for a subsequent `GET` (it's a URL as well), following the same pattern as the `self` link.
-
-**v2 transcription:**
-
-```json
-{
- "id": "9891c965-bb32-4880-b14b-6d44efb158f3",
- "models": [
- {
- "id": "827712a5-f942-4997-91c3-7c6cde35600b",
- "modelKind": "Language",
- "lastActionDateTime": "2019-01-07T11:36:07Z",
- "status": "Running",
- "createdDateTime": "2019-01-07T11:34:12Z",
- "locale": "en-US",
- "name": "Acoustic model",
- "description": "Example for an acoustic model",
- "datasets": [
- {
- "id": "702d913a-8ba6-4f66-ad5c-897400b081fb",
- "dataImportKind": "Language",
- "lastActionDateTime": "2019-01-07T11:36:07Z",
- "status": "Succeeded",
- "createdDateTime": "2019-01-07T11:34:12Z",
- "locale": "en-US",
- "name": "Language dataset",
- }
- ]
- },
- ]
-}
-```
-
-**v3 transcription:**
-
-```json
-{
- "self": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.0/transcriptions/9891c965-bb32-4880-b14b-6d44efb158f3",
- "model": {
- "self": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.0/models/021a72d0-54c4-43d3-8254-27336ead9037"
- }
-}
-```
-
-If you need to consume the details of a referenced model as shown in the above example, just issue a GET on `$.model.self`.
-
->[!IMPORTANT]
->To retrieve the metadata of referenced entities, issue a GET on `$.{referencedEntity}.self`, for example to retrieve the model of a transcription do a `GET` on `$.model.self`.
--
-### Retrieving endpoint logs
-
-Version v2 of the service supported logging endpoint results. To retrieve the results of an endpoint with v2, you would create a "data export", which represented a snapshot of the results defined by a time range. The process of exporting batches of data was inflexible. The v3 API gives access to each individual file and allows iteration through them.
-
-**A successfully running v3 endpoint:**
-
-```json
-{
- "self": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.0/endpoints/afa0669c-a01e-4693-ae3a-93baf40f26d6",
- "links": {
- "logs": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.0/endpoints/afa0669c-a01e-4693-ae3a-93baf40f26d6/files/logs"
- }
-}
-```
-
-**Response of GET `$.links.logs`:**
-
-```json
-{
- "values": [
- {
- "self": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.0/endpoints/6d72ad7e-f286-4a6f-b81b-a0532ca6bcaa/files/logs/2019-09-20_080000_3b5f4628-e225-439d-bd27-8804f9eed13f.wav",
- "name": "2019-09-20_080000_3b5f4628-e225-439d-bd27-8804f9eed13f.wav",
- "kind": "Audio",
- "properties": {
- "size": 12345
- },
- "createdDateTime": "2020-01-13T08:00:00Z",
- "links": {
- "contentUrl": "https://customspeech-usw.blob.core.windows.net/artifacts/2019-09-20_080000_3b5f4628-e225-439d-bd27-8804f9eed13f.wav?st=2018-02-09T18%3A07%3A00Z&se=2018-02-10T18%3A07%3A00Z&sp=rl&sv=2017-04-17&sr=b&sig=e05d8d56-9675-448b-820c-4318ae64c8d5"
- }
- }
- ],
- "@nextLink": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.0/endpoints/afa0669c-a01e-4693-ae3a-93baf40f26d6/files/logs?top=2&SkipToken=2!188!MDAwMDk1ITZhMjhiMDllLTg0MDYtNDViMi1hMGRkLWFlNzRlOGRhZWJkNi8yMDIwLTA0LTAxLzEyNDY0M182MzI5NGRkMi1mZGYzLTRhZmEtOTA0NC1mODU5ZTcxOWJiYzYud2F2ITAwMDAyOCE5OTk5LTEyLTMxVDIzOjU5OjU5Ljk5OTk5OTlaIQ--"
-}
-```
-
-Pagination for endpoint logs works similarly to all other collections, except that no offset can be specified. Due to the large amount of available data, pagination is determined by the server.
-
-In v3, each endpoint log can be deleted individually by issuing a `DELETE` operation on the `self` of a file, or by using `DELETE` on `$.links.logs`. To specify an end date, the query parameter `endDate` can be added to the request.
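-
-A minimal sketch of both variants, assuming placeholder values for the region, key, endpoint ID, log file name, and end date:
-
-```azurecli-interactive
-# Delete a single log file by its self URL.
-curl -v -X DELETE -H "Ocp-Apim-Subscription-Key: YourSubscriptionKey" \
-  "https://YourServiceRegion.api.cognitive.microsoft.com/speechtotext/v3.0/endpoints/YourEndpointId/files/logs/YourLogFileName"
-
-# Delete all logs up to (and including) a given end date.
-curl -v -X DELETE -H "Ocp-Apim-Subscription-Key: YourSubscriptionKey" \
-  "https://YourServiceRegion.api.cognitive.microsoft.com/speechtotext/v3.0/endpoints/YourEndpointId/files/logs?endDate=YourEndDate"
-```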
-
->[!IMPORTANT]
->Instead of creating log exports on `/api/speechtotext/v2.0/endpoints/{id}/data` use `/v3.0/endpoints/{id}/files/logs/` to access log files individually.
-
-### Using custom properties
-
-To separate custom properties from the optional configuration properties, all explicitly named properties are now located in the `properties` property and all properties defined by the callers are now located in the `customProperties` property.
-
-**v2 transcription entity:**
-
-```json
-{
- "properties": {
- "customerDefinedKey": "value",
- "diarizationEnabled": "False",
- "wordLevelTimestampsEnabled": "False"
- }
-}
-```
-
-**v3 transcription entity:**
-
-```json
-{
- "properties": {
- "diarizationEnabled": false,
- "wordLevelTimestampsEnabled": false
- },
- "customProperties": {
- "customerDefinedKey": "value"
- }
-}
-```
-
-This change also lets you use correct types on all explicitly named properties under `properties` (for example boolean instead of string).
-
->[!IMPORTANT]
->Pass all custom properties as `customProperties` instead of `properties` in your `POST` requests.
-
-### Response headers
-
-v3 no longer returns the `Operation-Location` header in addition to the `Location` header on `POST` requests. The value of both headers in v2 was the same. Now only `Location` is returned.
-
-Because the new API version is now managed by Azure API management (APIM), the throttling related headers `X-RateLimit-Limit`, `X-RateLimit-Remaining`, and `X-RateLimit-Reset` aren't contained in the response headers.
-
->[!IMPORTANT]
->Read the location from response header `Location` instead of `Operation-Location`. In case of a 429 response code, read the `Retry-After` header value instead of `X-RateLimit-Limit`, `X-RateLimit-Remaining`, or `X-RateLimit-Reset`.
--
-### Accuracy tests
-
-Accuracy tests have been renamed to evaluations because the new name describes better what they represent. The new paths are: `https://{region}.api.cognitive.microsoft.com/speechtotext/v3.0/evaluations`.
-
->[!IMPORTANT]
->Rename the path segment `accuracytests` to `evaluations` in your client code.
--
-## Next steps
-
-* [Speech to text REST API](rest-speech-to-text.md)
-* [Speech to text REST API v3.0 reference](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0)
cognitive-services Migrate V3 0 To V3 1 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/migrate-v3-0-to-v3-1.md
- Title: Migrate from v3.0 to v3.1 REST API - Speech service
-description: This document helps developers migrate code from v3.0 to v3.1 of the Speech to text REST API.
- Previously updated : 01/25/2023
-# Migrate code from v3.0 to v3.1 of the REST API
-
-The Speech to text REST API is used for [Batch transcription](batch-transcription.md) and [Custom Speech](custom-speech-overview.md). Changes from version 3.0 to 3.1 are described in the sections below.
-
-> [!IMPORTANT]
-> Speech to text REST API v3.1 is generally available. Version 3.0 of the [Speech to text REST API](rest-speech-to-text.md) will be retired.
-
-## Base path
-
-You must update the base path in your code from `/speechtotext/v3.0` to `/speechtotext/v3.1`. For example, to get base models in the `eastus` region, use `https://eastus.api.cognitive.microsoft.com/speechtotext/v3.1/models/base` instead of `https://eastus.api.cognitive.microsoft.com/speechtotext/v3.0/models/base`.
-
-Please note these additional changes:
-- The `/models/{id}/copyto` operation (includes '/') in version 3.0 is replaced by the `/models/{id}:copyto` operation (includes ':') in version 3.1.
-- The `/webhooks/{id}/ping` operation (includes '/') in version 3.0 is replaced by the `/webhooks/{id}:ping` operation (includes ':') in version 3.1.
-- The `/webhooks/{id}/test` operation (includes '/') in version 3.0 is replaced by the `/webhooks/{id}:test` operation (includes ':') in version 3.1.
-
-For more details, see [Operation IDs](#operation-ids) later in this guide.
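-
-For example, a minimal sketch of calling the new base path to list base models (the subscription key placeholder is illustrative):
-
-```azurecli-interactive
-curl -v -X GET -H "Ocp-Apim-Subscription-Key: YourSubscriptionKey" \
-  "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.1/models/base"
-```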
-
-## Batch transcription
-
-> [!NOTE]
-> Don't use Speech to text REST API v3.0 to retrieve a transcription created via Speech to text REST API v3.1. You'll see an error message such as the following: "The API version cannot be used to access this transcription. Please use API version v3.1 or higher."
-
-In the [Transcriptions_Create](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/operations/Transcriptions_Create) operation the following three properties are added:
-- The `displayFormWordLevelTimestampsEnabled` property can be used to enable the reporting of word-level timestamps on the display form of the transcription results. The results are returned in the `displayWords` property of the transcription file.
-- The `diarization` property can be used to specify hints for the minimum and maximum number of speaker labels to generate when performing optional diarization (speaker separation). With this feature, the service is now able to generate speaker labels for more than two speakers. To use this property, you must also set the `diarizationEnabled` property to `true`.
-- The `languageIdentification` property can be used to specify settings for language identification on the input prior to transcription. Up to 10 candidate locales are supported for language identification. The returned transcription will include a new `locale` property for the recognized language or the locale that you provided.
-
-The `filter` property is added to the [Transcriptions_List](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/operations/Transcriptions_List), [Transcriptions_ListFiles](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/operations/Transcriptions_ListFiles), and [Projects_ListTranscriptions](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/operations/Projects_ListTranscriptions) operations. The `filter` expression can be used to select a subset of the available resources. You can filter by `displayName`, `description`, `createdDateTime`, `lastActionDateTime`, `status`, and `locale`. For example: `filter=createdDateTime gt 2022-02-01T11:00:00Z`
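-
-A minimal sketch of applying that filter to [Transcriptions_List](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/operations/Transcriptions_List) with `curl`, assuming a placeholder subscription key; `--data-urlencode` with `-G` handles the spaces in the expression:
-
-```azurecli-interactive
-curl -v -G -H "Ocp-Apim-Subscription-Key: YourSubscriptionKey" \
-  --data-urlencode "filter=createdDateTime gt 2022-02-01T11:00:00Z" \
-  "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.1/transcriptions"
-```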
-
-If you use a webhook to receive notifications about transcription status, note that webhooks created via the V3.0 API can't receive notifications for V3.1 transcription requests. You need to create a new webhook endpoint via the V3.1 API in order to receive notifications for V3.1 transcription requests.
-
-## Custom Speech
-
-### Datasets
-
-The following operations are added for uploading and managing multiple data blocks for a dataset:
-
-- [Datasets_UploadBlock](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/operations/Datasets_UploadBlock)
-- [Datasets_GetBlocks](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/operations/Datasets_GetBlocks)
-- [Datasets_CommitBlocks](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/operations/Datasets_CommitBlocks)
-
-To support model adaptation with [structured text in markdown](how-to-custom-speech-test-and-train.md#structured-text-data-for-training) data, the [Datasets_Create](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/operations/Datasets_Create) operation now supports the **LanguageMarkdown** data kind. For more information, see [upload datasets](how-to-custom-speech-upload-data.md#upload-datasets).
-
-### Models
-
-The [Models_ListBaseModels](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/operations/Models_ListBaseModels) and [Models_GetBaseModel](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/operations/Models_GetBaseModel) operations return information on the type of adaptation supported by each base model.
-
-```json
-"features": {
- "supportsAdaptationsWith": [
- "Acoustic",
- "Language",
- "LanguageMarkdown",
- "Pronunciation"
- ]
-}
-```
-
-The [Models_Create](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/operations/Models_Create) operation has a new `customModelWeightPercent` property where you can specify the weight used when the Custom Language Model (trained from plain or structured text data) is combined with the Base Language Model. Valid values are integers between 1 and 100. The default value is currently 30.
-
-The `filter` property is added to the following operations:
-
-- [Datasets_List](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/operations/Datasets_List)
-- [Datasets_ListFiles](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/operations/Datasets_ListFiles)
-- [Endpoints_List](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/operations/Endpoints_List)
-- [Evaluations_List](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/operations/Evaluations_List)
-- [Evaluations_ListFiles](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/operations/Evaluations_ListFiles)
-- [Models_ListBaseModels](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/operations/Models_ListBaseModels)
-- [Models_ListCustomModels](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/operations/Models_ListCustomModels)
-- [Projects_List](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/operations/Projects_List)
-- [Projects_ListDatasets](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/operations/Projects_ListDatasets)
-- [Projects_ListEndpoints](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/operations/Projects_ListEndpoints)
-- [Projects_ListEvaluations](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/operations/Projects_ListEvaluations)
-- [Projects_ListModels](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/operations/Projects_ListModels)
-
-The `filter` expression can be used to select a subset of the available resources. You can filter by `displayName`, `description`, `createdDateTime`, `lastActionDateTime`, `status`, `locale`, and `kind`. For example: `filter=locale eq 'en-US'`
-
-Added the [Models_ListFiles](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/operations/Models_ListFiles) operation to get the files of the model identified by the given ID.
-
-Added the [Models_GetFile](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/operations/Models_GetFile) operation to get one specific file (identified with fileId) from a model (identified with ID). This lets you retrieve a **ModelReport** file that provides information on the data processed during training.
-
-## Operation IDs
-
-You must update the base path in your code from `/speechtotext/v3.0` to `/speechtotext/v3.1`. For example, to get base models in the `eastus` region, use `https://eastus.api.cognitive.microsoft.com/speechtotext/v3.1/models/base` instead of `https://eastus.api.cognitive.microsoft.com/speechtotext/v3.0/models/base`.
-
-The name of each `operationId` in version 3.1 is prefixed with the object name. For example, the `operationId` for "Create Model" changed from [CreateModel](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0/operations/CreateModel) in version 3.0 to [Models_Create](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/operations/Models_Create) in version 3.1.
-
-|Path|Method|Version 3.1 Operation ID|Version 3.0 Operation ID|
-|||||
-|`/datasets`|GET|[Datasets_List](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/operations/Datasets_List)|[GetDatasets](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0/operations/GetDatasets)|
-|`/datasets`|POST|[Datasets_Create](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/operations/Datasets_Create)|[CreateDataset](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0/operations/CreateDataset)|
-|`/datasets/{id}`|DELETE|[Datasets_Delete](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/operations/Datasets_Delete)|[DeleteDataset](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0/operations/DeleteDataset)|
-|`/datasets/{id}`|GET|[Datasets_Get](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/operations/Datasets_Get)|[GetDataset](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0/operations/GetDataset)|
-|`/datasets/{id}`|PATCH|[Datasets_Update](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/operations/Datasets_Update)|[UpdateDataset](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0/operations/UpdateDataset)|
-|`/datasets/{id}/blocks:commit`|POST|[Datasets_CommitBlocks](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/operations/Datasets_CommitBlocks)|Not applicable|
-|`/datasets/{id}/blocks`|GET|[Datasets_GetBlocks](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/operations/Datasets_GetBlocks)|Not applicable|
-|`/datasets/{id}/blocks`|PUT|[Datasets_UploadBlock](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/operations/Datasets_UploadBlock)|Not applicable|
-|`/datasets/{id}/files`|GET|[Datasets_ListFiles](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/operations/Datasets_ListFiles)|[GetDatasetFiles](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0/operations/GetDatasetFiles)|
-|`/datasets/{id}/files/{fileId}`|GET|[Datasets_GetFile](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/operations/Datasets_GetFile)|[GetDatasetFile](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0/operations/GetDatasetFile)|
-|`/datasets/locales`|GET|[Datasets_ListSupportedLocales](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/operations/Datasets_ListSupportedLocales)|[GetSupportedLocalesForDatasets](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0/operations/GetSupportedLocalesForDatasets)|
-|`/datasets/upload`|POST|[Datasets_Upload](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/operations/Datasets_Upload)|[UploadDatasetFromForm](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0/operations/UploadDatasetFromForm)|
-|`/endpoints`|GET|[Endpoints_List](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/operations/Endpoints_List)|[GetEndpoints](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0/operations/GetEndpoints)|
-|`/endpoints`|POST|[Endpoints_Create](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/operations/Endpoints_Create)|[CreateEndpoint](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0/operations/CreateEndpoint)|
-|`/endpoints/{id}`|DELETE|[Endpoints_Delete](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/operations/Endpoints_Delete)|[DeleteEndpoint](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0/operations/DeleteEndpoint)|
-|`/endpoints/{id}`|GET|[Endpoints_Get](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/operations/Endpoints_Get)|[GetEndpoint](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0/operations/GetEndpoint)|
-|`/endpoints/{id}`|PATCH|[Endpoints_Update](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/operations/Endpoints_Update)|[UpdateEndpoint](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0/operations/UpdateEndpoint)|
-|`/endpoints/{id}/files/logs`|DELETE|[Endpoints_DeleteLogs](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/operations/Endpoints_DeleteLogs)|[DeleteEndpointLogs](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0/operations/DeleteEndpointLogs)|
-|`/endpoints/{id}/files/logs`|GET|[Endpoints_ListLogs](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/operations/Endpoints_ListLogs)|[GetEndpointLogs](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0/operations/GetEndpointLogs)|
-|`/endpoints/{id}/files/logs/{logId}`|DELETE|[Endpoints_DeleteLog](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/operations/Endpoints_DeleteLog)|[DeleteEndpointLog](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0/operations/DeleteEndpointLog)|
-|`/endpoints/{id}/files/logs/{logId}`|GET|[Endpoints_GetLog](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/operations/Endpoints_GetLog)|[GetEndpointLog](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0/operations/GetEndpointLog)|
-|`/endpoints/base/{locale}/files/logs`|DELETE|[Endpoints_DeleteBaseModelLogs](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/operations/Endpoints_DeleteBaseModelLogs)|[DeleteBaseModelLogs](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0/operations/DeleteBaseModelLogs)|
-|`/endpoints/base/{locale}/files/logs`|GET|[Endpoints_ListBaseModelLogs](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/operations/Endpoints_ListBaseModelLogs)|[GetBaseModelLogs](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0/operations/GetBaseModelLogs)|
-|`/endpoints/base/{locale}/files/logs/{logId}`|DELETE|[Endpoints_DeleteBaseModelLog](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/operations/Endpoints_DeleteBaseModelLog)|[DeleteBaseModelLog](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0/operations/DeleteBaseModelLog)|
-|`/endpoints/base/{locale}/files/logs/{logId}`|GET|[Endpoints_GetBaseModelLog](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/operations/Endpoints_GetBaseModelLog)|[GetBaseModelLog](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0/operations/GetBaseModelLog)|
-|`/endpoints/locales`|GET|[Endpoints_ListSupportedLocales](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/operations/Endpoints_ListSupportedLocales)|[GetSupportedLocalesForEndpoints](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0/operations/GetSupportedLocalesForEndpoints)|
-|`/evaluations`|GET|[Evaluations_List](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/operations/Evaluations_List)|[GetEvaluations](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0/operations/GetEvaluations)|
-|`/evaluations`|POST|[Evaluations_Create](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/operations/Evaluations_Create)|[CreateEvaluation](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0/operations/CreateEvaluation)|
-|`/evaluations/{id}`|DELETE|[Evaluations_Delete](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/operations/Evaluations_Delete)|[DeleteEvaluation](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0/operations/DeleteEvaluation)|
-|`/evaluations/{id}`|GET|[Evaluations_Get](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/operations/Evaluations_Get)|[GetEvaluation](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0/operations/GetEvaluation)|
-|`/evaluations/{id}`|PATCH|[Evaluations_Update](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/operations/Evaluations_Update)|[UpdateEvaluation](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0/operations/UpdateEvaluation)|
-|`/evaluations/{id}/files`|GET|[Evaluations_ListFiles](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/operations/Evaluations_ListFiles)|[GetEvaluationFiles](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0/operations/GetEvaluationFiles)|
-|`/evaluations/{id}/files/{fileId}`|GET|[Evaluations_GetFile](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/operations/Evaluations_GetFile)|[GetEvaluationFile](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0/operations/GetEvaluationFile)|
-|`/evaluations/locales`|GET|[Evaluations_ListSupportedLocales](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/operations/Evaluations_ListSupportedLocales)|[GetSupportedLocalesForEvaluations](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0/operations/GetSupportedLocalesForEvaluations)|
-|`/healthstatus`|GET|[HealthStatus_Get](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/operations/HealthStatus_Get)|[GetHealthStatus](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0/operations/GetHealthStatus)|
-|`/models`|GET|[Models_ListCustomModels](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/operations/Models_ListCustomModels)|[GetModels](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0/operations/GetModels)|
-|`/models`|POST|[Models_Create](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/operations/Models_Create)|[CreateModel](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0/operations/CreateModel)|
-|`/models/{id}:copyto`<sup>1</sup>|POST|[Models_CopyTo](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/operations/Models_CopyTo)|[CopyModelToSubscription](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0/operations/CopyModelToSubscription)|
-|`/models/{id}`|DELETE|[Models_Delete](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/operations/Models_Delete)|[DeleteModel](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0/operations/DeleteModel)|
-|`/models/{id}`|GET|[Models_GetCustomModel](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/operations/Models_GetCustomModel)|[GetModel](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0/operations/GetModel)|
-|`/models/{id}`|PATCH|[Models_Update](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/operations/Models_Update)|[UpdateModel](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0/operations/UpdateModel)|
-|`/models/{id}/files`|GET|[Models_ListFiles](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/operations/Models_ListFiles)|Not applicable|
-|`/models/{id}/files/{fileId}`|GET|[Models_GetFile](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/operations/Models_GetFile)|Not applicable|
-|`/models/{id}/manifest`|GET|[Models_GetCustomModelManifest](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/operations/Models_GetCustomModelManifest)|[GetModelManifest](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0/operations/GetModelManifest)|
-|`/models/base`|GET|[Models_ListBaseModels](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/operations/Models_ListBaseModels)|[GetBaseModels](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0/operations/GetBaseModels)|
-|`/models/base/{id}`|GET|[Models_GetBaseModel](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/operations/Models_GetBaseModel)|[GetBaseModel](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0/operations/GetBaseModel)|
-|`/models/base/{id}/manifest`|GET|[Models_GetBaseModelManifest](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/operations/Models_GetBaseModelManifest)|[GetBaseModelManifest](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0/operations/GetBaseModelManifest)|
-|`/models/locales`|GET|[Models_ListSupportedLocales](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/operations/Models_ListSupportedLocales)|[GetSupportedLocalesForModels](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0/operations/GetSupportedLocalesForModels)|
-|`/projects`|GET|[Projects_List](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/operations/Projects_List)|[GetProjects](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0/operations/GetProjects)|
-|`/projects`|POST|[Projects_Create](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/operations/Projects_Create)|[CreateProject](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0/operations/CreateProject)|
-|`/projects/{id}`|DELETE|[Projects_Delete](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/operations/Projects_Delete)|[DeleteProject](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0/operations/DeleteProject)|
-|`/projects/{id}`|GET|[Projects_Get](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/operations/Projects_Get)|[GetProject](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0/operations/GetProject)|
-|`/projects/{id}`|PATCH|[Projects_Update](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/operations/Projects_Update)|[UpdateProject](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0/operations/UpdateProject)|
-|`/projects/{id}/datasets`|GET|[Projects_ListDatasets](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/operations/Projects_ListDatasets)|[GetDatasetsForProject](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0/operations/GetDatasetsForProject)|
-|`/projects/{id}/endpoints`|GET|[Projects_ListEndpoints](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/operations/Projects_ListEndpoints)|[GetEndpointsForProject](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0/operations/GetEndpointsForProject)|
-|`/projects/{id}/evaluations`|GET|[Projects_ListEvaluations](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/operations/Projects_ListEvaluations)|[GetEvaluationsForProject](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0/operations/GetEvaluationsForProject)|
-|`/projects/{id}/models`|GET|[Projects_ListModels](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/operations/Projects_ListModels)|[GetModelsForProject](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0/operations/GetModelsForProject)|
-|`/projects/{id}/transcriptions`|GET|[Projects_ListTranscriptions](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/operations/Projects_ListTranscriptions)|[GetTranscriptionsForProject](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0/operations/GetTranscriptionsForProject)|
-|`/projects/locales`|GET|[Projects_ListSupportedLocales](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/operations/Projects_ListSupportedLocales)|[GetSupportedProjectLocales](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0/operations/GetSupportedProjectLocales)|
-|`/transcriptions`|GET|[Transcriptions_List](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/operations/Transcriptions_List)|[GetTranscriptions](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0/operations/GetTranscriptions)|
-|`/transcriptions`|POST|[Transcriptions_Create](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/operations/Transcriptions_Create)|[CreateTranscription](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0/operations/CreateTranscription)|
-|`/transcriptions/{id}`|DELETE|[Transcriptions_Delete](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/operations/Transcriptions_Delete)|[DeleteTranscription](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0/operations/DeleteTranscription)|
-|`/transcriptions/{id}`|GET|[Transcriptions_Get](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/operations/Transcriptions_Get)|[GetTranscription](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0/operations/GetTranscription)|
-|`/transcriptions/{id}`|PATCH|[Transcriptions_Update](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/operations/Transcriptions_Update)|[UpdateTranscription](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0/operations/UpdateTranscription)|
-|`/transcriptions/{id}/files`|GET|[Transcriptions_ListFiles](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/operations/Transcriptions_ListFiles)|[GetTranscriptionFiles](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0/operations/GetTranscriptionFiles)|
-|`/transcriptions/{id}/files/{fileId}`|GET|[Transcriptions_GetFile](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/operations/Transcriptions_GetFile)|[GetTranscriptionFile](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0/operations/GetTranscriptionFile)|
-|`/transcriptions/locales`|GET|[Transcriptions_ListSupportedLocales](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/operations/Transcriptions_ListSupportedLocales)|[GetSupportedLocalesForTranscriptions](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0/operations/GetSupportedLocalesForTranscriptions)|
-|`/webhooks`|GET|[WebHooks_List](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/operations/WebHooks_List)|[GetHooks](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0/operations/GetHooks)|
-|`/webhooks`|POST|[WebHooks_Create](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/operations/WebHooks_Create)|[CreateHook](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0/operations/CreateHook)|
-|`/webhooks/{id}:ping`<sup>2</sup>|POST|[WebHooks_Ping](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/operations/WebHooks_Ping)|[PingHook](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0/operations/PingHook)|
-|`/webhooks/{id}:test`<sup>3</sup>|POST|[WebHooks_Test](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/operations/WebHooks_Test)|[TestHook](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0/operations/TestHook)|
-|`/webhooks/{id}`|DELETE|[WebHooks_Delete](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/operations/WebHooks_Delete)|[DeleteHook](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0/operations/DeleteHook)|
-|`/webhooks/{id}`|GET|[WebHooks_Get](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/operations/WebHooks_Get)|[GetHook](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0/operations/GetHook)|
-|`/webhooks/{id}`|PATCH|[WebHooks_Update](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/operations/WebHooks_Update)|[UpdateHook](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0/operations/UpdateHook)|
-
-<sup>1</sup> The `/models/{id}/copyto` operation (includes '/') in version 3.0 is replaced by the `/models/{id}:copyto` operation (includes ':') in version 3.1.
-
-<sup>2</sup> The `/webhooks/{id}/ping` operation (includes '/') in version 3.0 is replaced by the `/webhooks/{id}:ping` operation (includes ':') in version 3.1.
-
-<sup>3</sup> The `/webhooks/{id}/test` operation (includes '/') in version 3.0 is replaced by the `/webhooks/{id}:test` operation (includes ':') in version 3.1.
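As a rough illustration, the following Python sketch (using the `requests` package with placeholder region, key, and model ID values) lists the available base models over the version 3.1 REST API and shows how the `:`-style action paths are formed. The operation reference pages linked above remain the authoritative source for request and response shapes.

```python
import requests

# Placeholder region and key for illustration only.
region = "eastus"
speech_key = "YOUR_SPEECH_RESOURCE_KEY"
base_url = f"https://{region}.api.cognitive.microsoft.com/speechtotext/v3.1"
headers = {"Ocp-Apim-Subscription-Key": speech_key}

# Models_ListBaseModels: GET /models/base
response = requests.get(f"{base_url}/models/base", headers=headers)
response.raise_for_status()
for model in response.json().get("values", []):
    print(model.get("displayName"), model.get("self"))

# Action-style operations in version 3.1 use ':' in the path segment.
# For example, Models_CopyTo is POST {base_url}/models/{id}:copyto
# (version 3.0 used /models/{id}/copyto); see the Models_CopyTo reference for the request body.
model_id = "00000000-0000-0000-0000-000000000000"  # placeholder
copy_to_url = f"{base_url}/models/{model_id}:copyto"
```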
-
-## Next steps
-
-* [Speech to text REST API](rest-speech-to-text.md)
-* [Speech to text REST API v3.1 reference](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1)
-* [Speech to text REST API v3.0 reference](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0)
--
cognitive-services Migration Overview Neural Voice https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/migration-overview-neural-voice.md
- Title: Migration to neural voice - Speech service-
-description: This document summarizes the benefits of migration from non-neural voice to neural voice.
------ Previously updated : 11/12/2021---
-# Migration to neural voice
-
-We're retiring two features from [Text to speech](index-text-to-speech.yml) capabilities as detailed below.
-
-## Custom voice (non-neural training)
-
-> [!IMPORTANT]
-> We are retiring the standard non-neural training tier of custom voice from March 1, 2021 through February 29, 2024. If you used a non-neural custom voice with your Speech resource prior to March 1, 2021, then you can continue to do so until February 29, 2024. All other Speech resources can only use custom neural voice. After February 29, 2024, the non-neural custom voices won't be supported with any Speech resource.
->
-> The pricing for custom voice is different from custom neural voice. Go to the [pricing page](https://azure.microsoft.com/pricing/details/cognitive-services/speech-services/) and check the pricing details in the collapsible "Deprecated" section. Custom voice (non-neural training) is referred to as **Custom**.
-
-Go to [this article](how-to-migrate-to-custom-neural-voice.md) to learn how to migrate to custom neural voice.
-
-Custom neural voice is a gated service. You need to create an Azure account and a Speech service subscription (with the S0 tier), and [apply](https://aka.ms/customneural) to use the custom neural voice feature. After you've been granted access, you can visit the [Speech Studio portal](https://speech.microsoft.com/portal) and then select **Custom Voice** to get started.
-
-Even without an Azure account, you can listen to voice samples in [Speech Studio](https://aka.ms/customvoice) and determine the right voice for your business needs.
-
-Go to the [pricing page](https://azure.microsoft.com/pricing/details/cognitive-services/speech-services/) and check the pricing details. Custom voice (retired) is referred to as **Custom**, and custom neural voice is referred to as **Custom Neural**.
-
-## Prebuilt standard voice
-
-> [!IMPORTANT]
-> We are retiring the standard voices from September 1, 2021 through August 31, 2024. If you used a standard voice with your Speech resource that was created prior to September 1, 2021, then you can continue to do so until August 31, 2024. All other Speech resources can only use prebuilt neural voices. You can choose from the supported [neural voice names](language-support.md?tabs=tts). After August 31, 2024, the standard voices won't be supported with any Speech resource.
->
-> The pricing for prebuilt standard voice is different from prebuilt neural voice. Go to the [pricing page](https://azure.microsoft.com/pricing/details/cognitive-services/speech-services/) and check the pricing details in the collapsible "Deprecated" section. Prebuilt standard voice (retired) is referred to as **Standard**.
-
-Go to [this article](how-to-migrate-to-prebuilt-neural-voice.md) to learn how to migrate to prebuilt neural voice.
-
-Prebuilt neural voice is powered by deep neural networks. You need to create an Azure account and a Speech service subscription. Then you can use the [Speech SDK](./get-started-text-to-speech.md) or visit the [Speech Studio portal](https://speech.microsoft.com/portal) and select prebuilt neural voices to get started. To listen to voice samples without creating an Azure account, visit the [Voice Gallery](https://speech.microsoft.com/portal/voicegallery) and determine the right voice for your business needs.
-
-Go to the [pricing page](https://azure.microsoft.com/pricing/details/cognitive-services/speech-services/) and check the pricing details. Prebuilt standard voice (retired) is referred to as **Standard**, and prebuilt neural voice is referred to as **Neural**.
-
-## Next steps
-
-- [Try out prebuilt neural voice](text-to-speech.md)
-- [Try out custom neural voice](custom-neural-voice.md)
cognitive-services Multi Device Conversation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/multi-device-conversation.md
- Title: Multi-device Conversation overview - Speech service-
-description: Multi-device conversation makes it easy to create a speech or text conversation between multiple clients and coordinate the messages that are sent between them.
------ Previously updated : 02/19/2022----
-# What is Multi-device Conversation?
-
-Multi-device conversation makes it easy to create a speech or text conversation between multiple clients and coordinate the messages sent between them.
-
-> [!NOTE]
-> Multi-device conversation access is a preview feature.
-
-With multi-device conversation, you can:
-
-- Connect multiple clients into the same conversation and manage the sending and receiving of messages between them.
-- Easily transcribe audio from each client and send the transcription to the others, with optional translation.
-- Easily send text messages between clients, with optional translation.
-
-You can build a feature or solution that works across an array of devices. Each device can independently send messages (either transcriptions of audio or instant messages) to all other devices.
-
-Whereas [Conversation Transcription](conversation-transcription.md) works on a single device with a multichannel microphone array, Multi-device Conversation is suited for scenarios with multiple devices, each with a single microphone.
-
->[!IMPORTANT]
-> Multi-device conversation does not support sending audio files between clients: only the transcription and/or translation.
-
-## Key features
-
-- **Real-time transcription:** Everyone will receive a transcript of the conversation, so they can follow along with the text in real time or save it for later.
-- **Real-time translation:** With more than 70 [supported languages](language-support.md) for text translation, users can translate the conversation to their preferred languages.
-- **Readable transcripts:** The transcription and translation are easy to follow, with punctuation and sentence breaks.
-- **Voice or text input:** Each user can speak or type on their own device, depending on the language support capabilities enabled for the participant's chosen language. See [Language support](language-support.md).
-- **Message relay:** The multi-device conversation service will distribute messages sent by one client to all the others, in the languages of their choice.
-- **Message identification:** Every message that users receive in the conversation will be tagged with the nickname of the user who sent it.
-
-## Use cases
-
-### Lightweight conversations
-
-Creating and joining a conversation is easy. One user will act as the 'host' and create a conversation, which generates a random five-letter conversation code and a QR code. All other users can join the conversation by typing in the conversation code or scanning the QR code.
-
-Since users join via the conversation code and aren't required to share contact information, it is easy to create quick, on-the-spot conversations.
-
-### Inclusive meetings
-
-Real-time transcription and translation can help make conversations accessible for people who speak different languages and/or are deaf or hard of hearing. Each person can also actively participate in the conversation, by speaking their preferred language or sending instant messages.
-
-### Presentations
-
-You can also provide captions for presentations and lectures both on-screen and on the audience members' own devices. After the audience joins with the conversation code, they can see the transcript in their preferred language, on their own device.
-
-## How it works
-
-All clients will use the Speech SDK to create or join a conversation. The Speech SDK interacts with the multi-device conversation service, which manages the lifetime of a conversation, including the list of participants, each client's chosen language(s), and messages sent.
-
-Each client can send audio or instant messages. The service will use speech recognition to convert audio into text, and send instant messages as-is. If clients choose different languages, then the service will translate all messages to the specified language(s) of each client.
-
-![Multi-device Conversation Overview Diagram](media/scenarios/multi-device-conversation.png)
-
-## Overview of Conversation, Host, and Participant
-
-A conversation is a session that one user starts for the other participating users to join. All clients connect to the conversation using the five-letter conversation code.
-
-Each conversation creates metadata that includes:
-- Timestamps of when the conversation started and ended
-- List of all participants in the conversation, which includes each user's chosen nickname and primary language for speech or text input.
-
-
-There are two types of users in a conversation: host and participant.
-
-The host is the user who starts a conversation, and who acts as the administrator of that conversation.
-- Each conversation can only have one host
-- The host must be connected to the conversation for the duration of the conversation. If the host leaves the conversation, the conversation will end for all other participants.
-- The host has a few extra controls to manage the conversation:
- - Lock the conversation - prevent additional participants from joining
- - Mute all participants - prevent other participants from sending any messages to the conversation, whether transcribed from speech or instant messages
- - Mute individual participants
- - Unmute all participants
- - Unmute individual participants
-
-A participant is a user who joins a conversation.
-- A participant can leave and rejoin the same conversation at any time, without ending the conversation for other participants.
-- Participants cannot lock the conversation or mute/unmute others
-
-> [!NOTE]
-> Each conversation can have up to 100 participants, of which 10 can be simultaneously speaking at any given time.
-
-## Language support
-
-When creating or joining a conversation, each user must choose a primary language: the language that they will speak and send instant messages in, and also the language in which they will see other users' messages.
-
-There are two kinds of languages: speech to text and text-only:
-- If the user chooses a speech to text language as their primary language, then they will be able to use both speech and text input in the conversation.
-
-- If the user chooses a text-only language, then they will only be able to use text input and send instant messages in the conversation. Text-only languages are the languages that are supported for text translation, but not speech to text. You can see available languages on the [language support](./language-support.md) page.
-
-Apart from their primary language, each participant can also specify additional languages for translating the conversation.
-
-Below is a summary of what the user will be able to do in a multi-device conversation, based on their chosen primary language.
--
-| What the user can do in the conversation | Speech to text | Text-only |
-|--|-||
-| Use speech input | ✔️ | ❌ |
-| Send instant messages | ✔️ | ✔️ |
-| Translate the conversation | ✔️ | ✔️ |
-
-> [!NOTE]
-> For lists of available speech to text and text translation languages, see [supported languages](./language-support.md).
---
-## Next steps
-
-* [Translate conversations in real-time](quickstarts/multi-device-conversation.md)
cognitive-services Openai Speech https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/openai-speech.md
- Title: "Azure OpenAI speech to speech chat - Speech service"-
-description: In this how-to guide, you can use Speech to converse with Azure OpenAI. The text recognized by the Speech service is sent to Azure OpenAI. The text response from Azure OpenAI is then synthesized by the Speech service.
------- Previously updated : 04/15/2023-
-zone_pivot_groups: programming-languages-csharp-python
-keywords: speech to text, openai
--
-# Azure OpenAI speech to speech chat
---
-## Next steps
--- [Learn more about Speech](overview.md)-- [Learn more about Azure OpenAI](../openai/overview.md)
cognitive-services Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/overview.md
- Title: What is the Speech service?-
-description: The Speech service provides speech to text, text to speech, and speech translation capabilities with an Azure resource. Add speech to your applications, tools, and devices with the Speech SDK, Speech Studio, or REST APIs.
------ Previously updated : 09/16/2022---
-# What is the Speech service?
-
-The Speech service provides speech to text and text to speech capabilities with a [Speech resource](~/articles/cognitive-services/cognitive-services-apis-create-account.md?tabs=speech#create-a-new-azure-cognitive-services-resource). You can transcribe speech to text with high accuracy, produce natural-sounding text to speech voices, translate spoken audio, and use speaker recognition during conversations.
--
-Create custom voices, add specific words to your base vocabulary, or build your own models. Run Speech anywhere, in the cloud or at the edge in containers. It's easy to speech-enable your applications, tools, and devices with the [Speech CLI](spx-overview.md), [Speech SDK](./speech-sdk.md), [Speech Studio](speech-studio-overview.md), or [REST APIs](./rest-speech-to-text.md).
-
-Speech is available for many [languages](language-support.md), [regions](regions.md), and [price points](https://azure.microsoft.com/pricing/details/cognitive-services/speech-services/).
-
-## Speech scenarios
-
-Common scenarios for speech include:
-- [Captioning](./captioning-concepts.md): Learn how to synchronize captions with your input audio, apply profanity filters, get partial results, apply customizations, and identify spoken languages for multilingual scenarios.
-- [Audio Content Creation](text-to-speech.md#more-about-neural-text-to-speech-features): You can use neural voices to make interactions with chatbots and voice assistants more natural and engaging, convert digital texts such as e-books into audiobooks and enhance in-car navigation systems.
-- [Call Center](call-center-overview.md): Transcribe calls in real-time or process a batch of calls, redact personally identifying information, and extract insights such as sentiment to help with your call center use case.
-- [Language learning](language-learning-overview.md): Provide pronunciation assessment feedback to language learners, support real-time transcription for remote learning conversations, and read aloud teaching materials with neural voices.
-- [Voice assistants](voice-assistants.md): Create natural, humanlike conversational interfaces for their applications and experiences. The voice assistant feature provides fast, reliable interaction between a device and an assistant implementation.
-
-Microsoft uses Speech for many scenarios, such as captioning in Teams, dictation in Office 365, and Read Aloud in the Edge browser.
--
-## Speech capabilities
-
-Speech feature summaries are provided below with links for more information.
-
-### Speech to text
-
-Use [speech to text](speech-to-text.md) to transcribe audio into text, either in [real-time](#real-time-speech-to-text) or asynchronously with [batch transcription](#batch-transcription).
-
-> [!TIP]
-> You can try real-time speech to text in [Speech Studio](https://aka.ms/speechstudio/speechtotexttool) without signing up or writing any code.
-
-Convert audio to text from a range of sources, including microphones, audio files, and blob storage. Use speaker diarization to determine who said what and when. Get readable transcripts with automatic formatting and punctuation.
-
-The base model may not be sufficient if the audio contains ambient noise or includes a lot of industry and domain-specific jargon. In these cases, you can create and train [custom speech models](custom-speech-overview.md) with acoustic, language, and pronunciation data. Custom speech models are private and can offer a competitive advantage.
-
-### Real-time speech to text
-
-With [real-time speech to text](get-started-speech-to-text.md), the audio is transcribed as speech is recognized from a microphone or file. Use real-time speech to text for applications that need to transcribe audio in real-time such as:
-- Transcriptions, captions, or subtitles for live meetings
-- Contact center agent assist
-- Dictation
-- Voice agents
-- Pronunciation assessment
-
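As a minimal sketch (assuming the `azure-cognitiveservices-speech` Python package with placeholder key and region values), real-time recognition of a single utterance from the default microphone looks roughly like this; production code would typically use continuous recognition instead.

```python
import azure.cognitiveservices.speech as speechsdk

# Placeholder key and region for illustration only.
speech_config = speechsdk.SpeechConfig(subscription="YOUR_SPEECH_KEY", region="eastus")
speech_config.speech_recognition_language = "en-US"

# Uses the default microphone; pass audio_config=speechsdk.audio.AudioConfig(filename="...")
# to transcribe from a file instead.
recognizer = speechsdk.SpeechRecognizer(speech_config=speech_config)

result = recognizer.recognize_once()  # recognizes a single utterance
if result.reason == speechsdk.ResultReason.RecognizedSpeech:
    print("Recognized:", result.text)
```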
-### Batch transcription
-
-[Batch transcription](batch-transcription.md) is used to transcribe a large amount of audio in storage. You can point to audio files with a shared access signature (SAS) URI and asynchronously receive transcription results. Use batch transcription for applications that need to transcribe audio in bulk such as:
-- Transcriptions, captions, or subtitles for pre-recorded audio
-- Contact center post-call analytics
-- Diarization
-
-### Text to speech
-
-With [text to speech](text-to-speech.md), you can convert input text into humanlike synthesized speech. Use neural voices, which are humanlike voices powered by deep neural networks. Use the [Speech Synthesis Markup Language (SSML)](speech-synthesis-markup.md) to fine-tune the pitch, pronunciation, speaking rate, volume, and more.
-
-- Prebuilt neural voice: Highly natural out-of-the-box voices. Check the prebuilt neural voice samples in the [Voice Gallery](https://speech.microsoft.com/portal/voicegallery) and determine the right voice for your business needs.
-- Custom neural voice: Besides the prebuilt neural voices that come out of the box, you can also create a [custom neural voice](custom-neural-voice.md) that is recognizable and unique to your brand or product. Custom neural voices are private and can offer a competitive advantage. Check the custom neural voice samples [here](https://aka.ms/customvoice).
-
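As a minimal sketch (again assuming the `azure-cognitiveservices-speech` Python package, with a placeholder key, region, and voice name), synthesizing a sentence with a prebuilt neural voice and playing it on the default speaker looks roughly like this.

```python
import azure.cognitiveservices.speech as speechsdk

# Placeholder key, region, and voice name for illustration only.
speech_config = speechsdk.SpeechConfig(subscription="YOUR_SPEECH_KEY", region="eastus")
speech_config.speech_synthesis_voice_name = "en-US-JennyNeural"

# Plays the audio on the default speaker; use an audio output config to write to a file instead.
synthesizer = speechsdk.SpeechSynthesizer(speech_config=speech_config)
result = synthesizer.speak_text_async("Text to speech converts input text into humanlike speech.").get()

if result.reason == speechsdk.ResultReason.SynthesizingAudioCompleted:
    print("Synthesized", len(result.audio_data), "bytes of audio.")
```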
-### Speech translation
-
-[Speech translation](speech-translation.md) enables real-time, multilingual translation of speech to your applications, tools, and devices. Use this feature for speech-to-speech and speech to text translation.
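A minimal speech-to-text translation sketch with the Speech SDK for Python (placeholder key and region; English speech translated to German text) might look like the following.

```python
import azure.cognitiveservices.speech as speechsdk

# Placeholder key and region for illustration only.
translation_config = speechsdk.translation.SpeechTranslationConfig(
    subscription="YOUR_SPEECH_KEY", region="eastus")
translation_config.speech_recognition_language = "en-US"
translation_config.add_target_language("de")

# Uses the default microphone for input.
recognizer = speechsdk.translation.TranslationRecognizer(translation_config=translation_config)
result = recognizer.recognize_once()

if result.reason == speechsdk.ResultReason.TranslatedSpeech:
    print("Recognized:", result.text)
    print("German:", result.translations["de"])
```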
-
-### Language identification
-
-[Language identification](language-identification.md) is used to identify languages spoken in audio when compared against a list of [supported languages](language-support.md). Use language identification by itself, with speech to text recognition, or with speech translation.
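A minimal at-start language identification sketch with the Speech SDK for Python (placeholder key, region, and candidate languages) might look like the following.

```python
import azure.cognitiveservices.speech as speechsdk

# Placeholder key, region, and candidate languages for illustration only.
speech_config = speechsdk.SpeechConfig(subscription="YOUR_SPEECH_KEY", region="eastus")
auto_detect_config = speechsdk.languageconfig.AutoDetectSourceLanguageConfig(
    languages=["en-US", "de-DE", "es-ES"])

# Recognizes a single utterance from the default microphone and reports the detected language.
recognizer = speechsdk.SpeechRecognizer(
    speech_config=speech_config,
    auto_detect_source_language_config=auto_detect_config)

result = recognizer.recognize_once()
detected = speechsdk.AutoDetectSourceLanguageResult(result).language
print("Detected language:", detected, "| Text:", result.text)
```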
-
-### Speaker recognition
-[Speaker recognition](speaker-recognition-overview.md) provides algorithms that verify and identify speakers by their unique voice characteristics. Speaker recognition is used to answer the question, "Who is speaking?".
-
-### Pronunciation assessment
-
-[Pronunciation assessment](./how-to-pronunciation-assessment.md) evaluates speech pronunciation and gives speakers feedback on the accuracy and fluency of spoken audio. With pronunciation assessment, language learners can practice, get instant feedback, and improve their pronunciation so that they can speak and present with confidence.
-
-### Intent recognition
-
-[Intent recognition](./intent-recognition.md): Use speech to text with conversational language understanding to derive user intents from transcribed speech and act on voice commands.
-
-## Delivery and presence
-
-You can deploy Azure Cognitive Services Speech features in the cloud or on-premises.
-
-With [containers](speech-container-howto.md), you can bring the service closer to your data for compliance, security, or other operational reasons.
-
-Speech service deployment in sovereign clouds is available for some government entities and their partners. For example, the Azure Government cloud is available to US government entities and their partners. Azure China cloud is available to organizations with a business presence in China. For more information, see [sovereign clouds](sovereign-clouds.md).
--
-## Use Speech in your application
-
-The [Speech Studio](speech-studio-overview.md) is a set of UI-based tools for building and integrating features from Azure Cognitive Services Speech service in your applications. You create projects in Speech Studio by using a no-code approach, and then reference those assets in your applications by using the [Speech SDK](speech-sdk.md), the [Speech CLI](spx-overview.md), or the REST APIs.
-
-The [Speech CLI](spx-overview.md) is a command-line tool for using Speech service without having to write any code. Most features in the Speech SDK are available in the Speech CLI, and some advanced features and customizations are simplified in the Speech CLI.
-
-The [Speech SDK](./speech-sdk.md) exposes many of the Speech service capabilities you can use to develop speech-enabled applications. The Speech SDK is available in many programming languages and across all platforms.
-
-In some cases, you can't or shouldn't use the [Speech SDK](speech-sdk.md). In those cases, you can use REST APIs to access the Speech service. For example, use the [batch transcription](batch-transcription.md) and [speaker recognition](/rest/api/speakerrecognition/) REST APIs.
-
-## Get started
-
-We offer quickstarts in many popular programming languages. Each quickstart is designed to teach you basic design patterns and have you running code in less than 10 minutes. See the following list for the quickstart for each feature:
-
-* [Speech to text quickstart](get-started-speech-to-text.md)
-* [Text to speech quickstart](get-started-text-to-speech.md)
-* [Speech translation quickstart](./get-started-speech-translation.md)
-
-## Code samples
-
-Sample code for the Speech service is available on GitHub. These samples cover common scenarios like reading audio from a file or stream, continuous and single-shot recognition, and working with custom models. Use these links to view SDK and REST samples:
-
-- [Speech to text, text to speech, and speech translation samples (SDK)](https://github.com/Azure-Samples/cognitive-services-speech-sdk)
-- [Batch transcription samples (REST)](https://github.com/Azure-Samples/cognitive-services-speech-sdk/tree/master/samples/batch)
-- [Text to speech samples (REST)](https://github.com/Azure-Samples/Cognitive-Speech-TTS)
-- [Voice assistant samples (SDK)](https://aka.ms/csspeech/samples)
-
-## Responsible AI
-
-An AI system includes not only the technology, but also the people who will use it, the people who will be affected by it, and the environment in which it is deployed. Read the transparency notes to learn about responsible AI use and deployment in your systems.
-
-### Speech to text
-
-* [Transparency note and use cases](/legal/cognitive-services/speech-service/speech-to-text/transparency-note?context=/azure/cognitive-services/speech-service/context/context)
-* [Characteristics and limitations](/legal/cognitive-services/speech-service/speech-to-text/characteristics-and-limitations?context=/azure/cognitive-services/speech-service/context/context)
-* [Integration and responsible use](/legal/cognitive-services/speech-service/speech-to-text/guidance-integration-responsible-use?context=/azure/cognitive-services/speech-service/context/context)
-* [Data, privacy, and security](/legal/cognitive-services/speech-service/speech-to-text/data-privacy-security?context=/azure/cognitive-services/speech-service/context/context)
-
-### Pronunciation Assessment
-
-* [Transparency note and use cases](/legal/cognitive-services/speech-service/pronunciation-assessment/transparency-note-pronunciation-assessment?context=/azure/cognitive-services/speech-service/context/context)
-* [Characteristics and limitations](/legal/cognitive-services/speech-service/pronunciation-assessment/characteristics-and-limitations-pronunciation-assessment?context=/azure/cognitive-services/speech-service/context/context)
-
-### Custom Neural Voice
-
-* [Transparency note and use cases](/legal/cognitive-services/speech-service/custom-neural-voice/transparency-note-custom-neural-voice?context=/azure/cognitive-services/speech-service/context/context)
-* [Characteristics and limitations](/legal/cognitive-services/speech-service/custom-neural-voice/characteristics-and-limitations-custom-neural-voice?context=/azure/cognitive-services/speech-service/context/context)
-* [Limited access](/legal/cognitive-services/speech-service/custom-neural-voice/limited-access-custom-neural-voice?context=/azure/cognitive-services/speech-service/context/context)
-* [Responsible deployment of synthetic speech](/legal/cognitive-services/speech-service/custom-neural-voice/concepts-guidelines-responsible-deployment-synthetic?context=/azure/cognitive-services/speech-service/context/context)
-* [Disclosure of voice talent](/legal/cognitive-services/speech-service/disclosure-voice-talent?context=/azure/cognitive-services/speech-service/context/context)
-* [Disclosure of design guidelines](/legal/cognitive-services/speech-service/custom-neural-voice/concepts-disclosure-guidelines?context=/azure/cognitive-services/speech-service/context/context)
-* [Disclosure of design patterns](/legal/cognitive-services/speech-service/custom-neural-voice/concepts-disclosure-patterns?context=/azure/cognitive-services/speech-service/context/context)
-* [Code of conduct](/legal/cognitive-services/speech-service/tts-code-of-conduct?context=/azure/cognitive-services/speech-service/context/context)
-* [Data, privacy, and security](/legal/cognitive-services/speech-service/custom-neural-voice/data-privacy-security-custom-neural-voice?context=/azure/cognitive-services/speech-service/context/context)
-
-### Speaker Recognition
-
-* [Transparency note and use cases](/legal/cognitive-services/speech-service/speaker-recognition/transparency-note-speaker-recognition?context=/azure/cognitive-services/speech-service/context/context)
-* [Characteristics and limitations](/legal/cognitive-services/speech-service/speaker-recognition/characteristics-and-limitations-speaker-recognition?context=/azure/cognitive-services/speech-service/context/context)
-* [Limited access](/legal/cognitive-services/speech-service/speaker-recognition/limited-access-speaker-recognition?context=/azure/cognitive-services/speech-service/context/context)
-* [General guidelines](/legal/cognitive-services/speech-service/speaker-recognition/guidance-integration-responsible-use-speaker-recognition?context=/azure/cognitive-services/speech-service/context/context)
-* [Data, privacy, and security](/legal/cognitive-services/speech-service/speaker-recognition/data-privacy-speaker-recognition?context=/azure/cognitive-services/speech-service/context/context)
-
-## Next steps
-
-* [Get started with speech to text](get-started-speech-to-text.md)
-* [Get started with text to speech](get-started-text-to-speech.md)
cognitive-services Pattern Matching Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/pattern-matching-overview.md
- Title: Pattern Matching overview-
-description: Pattern Matching with the IntentRecognizer helps you get started quickly with offline intent matching.
--- Previously updated : 11/15/2021-
-keywords: intent recognition pattern matching
--
-# What is pattern matching?
--
cognitive-services Power Automate Batch Transcription https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/power-automate-batch-transcription.md
- Title: Power automate batch transcription - Speech service-
-description: Transcribe audio files from an Azure Storage container using the Power Automate batch transcription connector.
------- Previously updated : 03/09/2023--
-# Power Automate batch transcription
-
-This article describes how to use [Power Automate](/power-automate/getting-started) and the [Azure Cognitive Services for Batch Speech to text connector](/connectors/cognitiveservicesspe/) to transcribe audio files from an Azure Storage container. The connector uses the [Batch Transcription REST API](batch-transcription.md), but you don't need to write any code to use it. If the connector doesn't meet your requirements, you can still use the [REST API](rest-speech-to-text.md#transcriptions) directly.
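If you do call the REST API directly, a minimal create-transcription request might look like the following Python sketch. The region, key, display name, and SAS URL are placeholder values, and the [Batch Transcription REST API](batch-transcription.md) documentation remains the authoritative reference for the request body.

```python
import requests

# Placeholder values for illustration only.
region = "eastus"
speech_key = "YOUR_SPEECH_RESOURCE_KEY"
audio_sas_url = "https://<storage-account>.blob.core.windows.net/batchtranscription/audio.wav?<sas-token>"

body = {
    "displayName": "My batch transcription",
    "locale": "en-US",
    "contentUrls": [audio_sas_url],
}

response = requests.post(
    f"https://{region}.api.cognitive.microsoft.com/speechtotext/v3.1/transcriptions",
    headers={"Ocp-Apim-Subscription-Key": speech_key, "Content-Type": "application/json"},
    json=body,
)
response.raise_for_status()
print("Transcription created:", response.json().get("self"))
```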
-
-In addition to [Power Automate](/power-automate/getting-started), you can use the [Azure Cognitive Services for Batch Speech to text connector](/connectors/cognitiveservicesspe/) with [Power Apps](/power-apps) and [Logic Apps](../../logic-apps/index.yml).
-
-> [!TIP]
-> Try more Speech features in [Speech Studio](https://aka.ms/speechstudio/speechtotexttool) without signing up or writing any code.
-
-## Prerequisites
--
-## Create the Azure Blob Storage container
-
-In this example, you'll transcribe audio files that are located in an [Azure Blob Storage](../../storage/blobs/storage-blobs-overview.md) account.
-
-Follow these steps to create a new storage account and container.
-
-1. Go to the [Azure portal](https://portal.azure.com/) and sign in to your Azure account.
-1. <a href="https://portal.azure.com/#create/Microsoft.StorageAccount-ARM" title="Create a Storage account resource" target="_blank">Create a Storage account resource</a> in the Azure portal. Use the same subscription and resource group as your Speech resource.
-1. Select the Storage account.
-1. In the **Data storage** group in the left pane, select **Containers**.
-1. Select **+ Container**.
-1. Enter a name for the new container such as "batchtranscription" and select **Create**.
-1. Get the **Access key** for the storage account. Select **Access keys** in the **Security + networking** group in the left pane. View and take note of the **key1** (or **key2**) value. You'll need the access key later when you [configure the connector](#create-a-power-automate-flow).
-
-Later you'll [upload files to the container](#upload-files-to-the-container) after the connector is configured, since the events of adding and modifying files kick off the transcription process.
-
-## Create a Power Automate flow
-
-### Create a new flow
-
-1. [Sign in to Power Automate](https://make.powerautomate.com/).
-1. From the collapsible menu on the left, select **Create**.
-1. Select **Automated cloud flow** to start from a blank flow that can be triggered by a designated event.
-
- :::image type="content" source="./media/power-platform/create-automated-cloud-flow.png" alt-text="A screenshot of the menu for creating an automated cloud flow." lightbox="./media/power-platform/create-automated-cloud-flow.png":::
-
-1. In the **Build an automated cloud flow** dialog, enter a name for your flow such as "BatchSTT".
-1. Select **Skip** to exit the dialog and continue without choosing a trigger.
-
-### Configure the flow trigger
-
-1. Choose a trigger from the [Azure Blob Storage connector](/connectors/azureblob/). For this example, enter "blob" in the search connectors and triggers box to narrow the results.
-1. Under the **Azure Blob Storage** connector, select the **When a blob is added or modified** trigger.
-
- :::image type="content" source="./media/power-platform/flow-search-blob.png" alt-text="A screenshot of the search connectors and triggers dialog." lightbox="./media/power-platform/flow-search-blob.png":::
-
-1. Configure the Azure Blob Storage connection.
- 1. From the **Authentication type** drop-down list, select **Access Key**.
- 1. Enter the account name and access key of the Azure Storage account that you [created previously](#create-the-azure-blob-storage-container).
- 1. Select **Create** to continue.
-1. Configure the **When a blob is added or modified** trigger.
-
- :::image type="content" source="./media/power-platform/flow-connection-settings-blob.png" alt-text="A screenshot of the configure blob trigger dialog." lightbox="./media/power-platform/flow-connection-settings-blob.png":::
-
- 1. From the **Storage account name or blob endpoint** drop-down list, select **Use connection settings**. You should see the storage account name as a component of the connection string.
- 1. Under **Container** select the folder icon. Choose the container that you [created previously](#create-the-azure-blob-storage-container).
-
-### Create SAS URI by path
-
-To transcribe an audio file that's in your [Azure Blob Storage container](#create-the-azure-blob-storage-container) you need a [Shared Access Signature (SAS) URI](../../storage/common/storage-sas-overview.md) for the file.
-
-The [Azure Blob Storage connector](/connectors/azureblob/) supports SAS URIs for individual blobs, but not for entire containers.
-
-1. Select **+ New step** to begin adding a new operation for the Azure Blob Storage connector.
-1. Enter "blob" in the search connectors and actions box to narrow the results.
-1. Under the **Azure Blob Storage** connector, select the **Create SAS URI by path** action.
-1. Under the **Storage account name or blob endpoint** drop-down, choose the same connection that you used for the **When a blob is added or modified** trigger.
-1. Select `Path` as dynamic content for the **Blob path** field.
-
-By now, you should have a flow that looks like this:
--
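For reference, the connector is doing roughly what the following sketch does with the `azure-storage-blob` Python package: it issues a read-only SAS for a single blob. The account name, key, container, and blob names are placeholders.

```python
from datetime import datetime, timedelta
from azure.storage.blob import BlobSasPermissions, generate_blob_sas

# Placeholder values for illustration only.
account_name = "yourstorageaccount"
account_key = "YOUR_STORAGE_ACCOUNT_KEY"
container_name = "batchtranscription"
blob_name = "audio.wav"

# Read-only SAS token for the single blob, valid for two hours.
sas_token = generate_blob_sas(
    account_name=account_name,
    container_name=container_name,
    blob_name=blob_name,
    account_key=account_key,
    permission=BlobSasPermissions(read=True),
    expiry=datetime.utcnow() + timedelta(hours=2),
)

sas_uri = f"https://{account_name}.blob.core.windows.net/{container_name}/{blob_name}?{sas_token}"
print(sas_uri)
```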
-### Create transcription
-
-1. Select **+ New step** to begin adding a new operation for the [Azure Cognitive Services for Batch Speech to text connector](/connectors/cognitiveservicesspe/).
-1. Enter "batch speech to text" in the search connectors and actions box to narrow the results.
-1. Select the **Azure Cognitive Services for Batch Speech to text** connector.
-1. Select the **Create transcription** action.
-1. Create a new connection to the Speech resource that you [created previously](#prerequisites). The connection will be available throughout the Power Automate environment. For more information, see [Manage connections in Power Automate](/power-automate/add-manage-connections).
- 1. Enter a name for the connection such as "speech-resource-key". You can choose any name that you like.
- 1. In the **API Key** field, enter the Speech resource key.
-
- Optionally you can select the connector ellipses (...) to view available connections. If you weren't prompted to create a connection, then you already have a connection that's selected by default.
-
- :::image type="content" source="./media/power-platform/flow-progression-speech-resource-connection.png" alt-text="A screenshot of the view connections dialog." lightbox="./media/power-platform/flow-progression-speech-resource-connection.png":::
-
-1. Configure the **Create transcription** action.
- 1. In the **locale** field enter the expected locale of the audio data to transcribe.
- 1. Select `DisplayName` as dynamic content for the **displayName** field. You can choose any name that you would like to refer to later.
- 1. Select `Web Url` as dynamic content for the **contentUrls Item - 1** field. This is the SAS URI output from the [Create SAS URI by path](#create-sas-uri-by-path) action.
-
- > [!TIP]
- > For more information about create transcription parameters, see the [Azure Cognitive Services for Batch Speech to text](/connectors/cognitiveservicesspe/#create-transcription) documentation.
-
-1. From the top navigation menu, select **Save**.
-
-### Test the flow
-
-1. From the top navigation menu, select **Flow checker**. In the side panel that appears, you shouldn't see any errors or warnings. If you do, then you should fix them before continuing.
-1. From the top navigation menu, save the flow and select **Test the flow**. In the window that appears, select **Test**.
-1. In the side panel that appears, select **Manually** and then select **Test**.
-
-After a few seconds, you should see an indication that the flow is in progress.
--
-The flow is waiting for a file to be added or modified in the Azure Blob Storage container. That's the [trigger that you configured](#configure-the-flow-trigger) earlier.
-
-To trigger the test flow, upload an audio file to the Azure Blob Storage container as described next.
-
-## Upload files to the container
-
-Follow these steps to upload [wav, mp3, or ogg](batch-transcription-audio-data.md#supported-audio-formats) files from your local directory to the Azure Storage container that you [created previously](#create-the-azure-blob-storage-container).
-
-1. Go to the [Azure portal](https://portal.azure.com/) and sign in to your Azure account.
-1. Select the storage account that you [created previously](#create-the-azure-blob-storage-container).
-1. Select the new container.
-1. Select **Upload**.
-1. Choose the files to upload and select **Upload**.
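If you prefer to script the upload instead of using the portal, a minimal sketch with the `azure-storage-blob` Python package might look like the following; the connection string, container name, and file name are placeholders.

```python
from azure.storage.blob import BlobServiceClient

# Placeholder values for illustration only.
connection_string = "YOUR_STORAGE_CONNECTION_STRING"
container_name = "batchtranscription"
local_file = "audio.wav"

blob_service_client = BlobServiceClient.from_connection_string(connection_string)
container_client = blob_service_client.get_container_client(container_name)

# Adding or overwriting a blob is what kicks off the "When a blob is added or modified" flow.
with open(local_file, "rb") as data:
    container_client.upload_blob(name=local_file, data=data, overwrite=True)
```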
-
-## View the transcription flow results
-
-After you upload the audio file to the Azure Blob Storage container, the flow should run and complete. Return to your test flow in the Power Automate portal to view the results.
--
-You can select and expand the **Create transcription** action to see detailed input and output results.
-
-## Next steps
-
-- [Azure Cognitive Services for Batch Speech to text connector](/connectors/cognitiveservicesspe/)
-- [Azure Blob Storage connector](/connectors/azureblob/)
-- [Power Platform](/power-platform/)
cognitive-services Pronunciation Assessment Tool https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/pronunciation-assessment-tool.md
- Title: How to use pronunciation assessment in Speech Studio-
-description: The pronunciation assessment tool in Speech Studio gives you feedback on the accuracy and fluency of your speech, no coding required.
------ Previously updated : 09/08/2022---
-# Pronunciation assessment in Speech Studio
-
-Pronunciation assessment uses the Speech to text capability to provide subjective and objective feedback for language learners. Practicing pronunciation and getting timely feedback are essential for improving language skills. Assessments driven by experienced teachers can take a lot of time and effort, which makes high-quality assessment expensive for learners. Pronunciation assessment can help make language assessment more engaging and accessible to learners of all backgrounds.
-
-Pronunciation assessment provides various assessment results in different granularities, from individual phonemes to the entire text input.
-- At the full-text level, pronunciation assessment offers additional Fluency and Completeness scores: Fluency indicates how closely the speech matches a native speaker's use of silent breaks between words, and Completeness indicates how many words are pronounced in the speech relative to the reference text input. An overall score aggregated from Accuracy, Fluency, and Completeness is then given to indicate the overall pronunciation quality of the given speech.
-- At the word level, pronunciation assessment can automatically detect miscues and provide accuracy scores simultaneously, which provides more detailed information on omission, repetition, insertions, and mispronunciation in the given speech.
-- Syllable-level accuracy scores are currently available via the [JSON file](?tabs=json#pronunciation-assessment-results) or [Speech SDK](how-to-pronunciation-assessment.md).
-- At the phoneme level, pronunciation assessment provides accuracy scores of each phoneme, helping learners to better understand the pronunciation details of their speech.
-
-This article describes how to use the pronunciation assessment tool through the [Speech Studio](https://speech.microsoft.com). You can get immediate feedback on the accuracy and fluency of your speech without writing any code. For information about how to integrate pronunciation assessment in your speech applications, see [How to use pronunciation assessment](how-to-pronunciation-assessment.md).
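If you later want to reproduce the same scores programmatically, a minimal sketch with the Speech SDK for Python (placeholder key, region, audio file name, and reference text) might look like the following; [How to use pronunciation assessment](how-to-pronunciation-assessment.md) remains the authoritative guide.

```python
import azure.cognitiveservices.speech as speechsdk

# Placeholder key, region, audio file, and reference text for illustration only.
speech_config = speechsdk.SpeechConfig(subscription="YOUR_SPEECH_KEY", region="eastus")
audio_config = speechsdk.audio.AudioConfig(filename="my_recording.wav")

pronunciation_config = speechsdk.PronunciationAssessmentConfig(
    reference_text="Today was a beautiful day.",
    grading_system=speechsdk.PronunciationAssessmentGradingSystem.HundredMark,
    granularity=speechsdk.PronunciationAssessmentGranularity.Phoneme,
    enable_miscue=True,
)

recognizer = speechsdk.SpeechRecognizer(speech_config=speech_config, audio_config=audio_config)
pronunciation_config.apply_to(recognizer)

result = recognizer.recognize_once()
assessment = speechsdk.PronunciationAssessmentResult(result)
print("Accuracy:", assessment.accuracy_score,
      "Fluency:", assessment.fluency_score,
      "Completeness:", assessment.completeness_score,
      "Overall:", assessment.pronunciation_score)
```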
-
-> [!NOTE]
-> Usage of pronunciation assessment costs the same as standard Speech to text, whether pay-as-you-go or commitment tier [pricing](https://azure.microsoft.com/pricing/details/cognitive-services/speech-services). If you [purchase a commitment tier](../commitment-tier.md) for standard Speech to text, the spend for pronunciation assessment goes towards meeting the commitment.
->
-> For information about availability of pronunciation assessment, see [supported languages](language-support.md?tabs=pronunciation-assessment) and [available regions](regions.md#speech-service).
-
-## Try out pronunciation assessment
-
-You can explore and try out pronunciation assessment even without signing in.
-
-> [!TIP]
-> To assess more than 5 seconds of speech with your own script, sign in with an [Azure account](https://azure.microsoft.com/free/cognitive-services) and use your <a href="https://portal.azure.com/#create/Microsoft.CognitiveServicesSpeechServices" title="Create a Speech resource" target="_blank">Speech resource</a>.
-
-Follow these steps to assess your pronunciation of the reference text:
-
-1. Go to **Pronunciation Assessment** in the [Speech Studio](https://aka.ms/speechstudio/pronunciationassessment).
-
- :::image type="content" source="media/pronunciation-assessment/pa.png" alt-text="Screenshot of how to go to Pronunciation Assessment on Speech Studio.":::
-
-1. Choose a supported [language](language-support.md?tabs=pronunciation-assessment) for which you want to evaluate pronunciation.
-
- :::image type="content" source="media/pronunciation-assessment/pa-language.png" alt-text="Screenshot of choosing a supported language that you want to evaluate the pronunciation.":::
-
-1. Choose from the provisioned text samples, or under the **Enter your own script** label, enter your own reference text.
-
-   When reading the text, you should be close to the microphone to make sure the recorded voice isn't too low.
-
- :::image type="content" source="media/pronunciation-assessment/pa-record.png" alt-text="Screenshot of where to record audio with a microphone.":::
-
- Otherwise you can upload recorded audio for pronunciation assessment. Once successfully uploaded, the audio will be automatically evaluated by the system, as shown in the following screenshot.
-
- :::image type="content" source="media/pronunciation-assessment/pa-upload.png" alt-text="Screenshot of uploading recorded audio to be assessed.":::
-
-## Pronunciation assessment results
-
-Once you've recorded the reference text or uploaded the recorded audio, the **Assessment result** will be output. The result includes your spoken audio and the feedback on the accuracy and fluency of spoken audio, by comparing a machine generated transcript of the input audio with the reference text. You can listen to your spoken audio, and download it if necessary.
-
-You can also check the pronunciation assessment result in JSON. The word-level, syllable-level, and phoneme-level accuracy scores are included in the JSON file.
-
-### [Display](#tab/display)
-
-The complete transcription is shown in the **Display** window. If a word is omitted, inserted, or mispronounced compared to the reference text, the word will be highlighted according to the error type. The error types in the pronunciation assessment are represented using different colors. Yellow indicates mispronunciations, gray indicates omissions, and red indicates insertions. This visual distinction makes it easier to identify and analyze specific errors. It provides a clear overview of the error types and frequencies in the spoken audio, helping you focus on areas that need improvement. While hovering over each word, you can see accuracy scores for the whole word or specific phonemes.
--
-### [JSON](#tab/json)
-
-The complete transcription is shown in the `text` attribute. You can see accuracy scores for the whole word, syllables, and specific phonemes. You can get the same results using the Speech SDK. For information, see [How to use Pronunciation Assessment](how-to-pronunciation-assessment.md).
-
-```json
-{
- "text": "Today was a beautiful day. We had a great time taking a long long walk in the morning. The countryside was in full bloom, yet the air was crisp and cold towards end of the day clouds came in forecasting much needed rain.",
- "duration": 156100000,
- "offset": 800000,
- "json": {
- "Id": "f583d7588c89425d8fce76686c11ed12",
- "RecognitionStatus": 0,
- "Offset": 800000,
- "Duration": 156100000,
- "DisplayText": "Today was a beautiful day. We had a great time taking a long long walk in the morning. The countryside was in full bloom, yet the air was crisp and cold towards end of the day clouds came in forecasting much needed rain.",
- "SNR": 40.47014,
- "NBest": [
- {
- "Confidence": 0.97532314,
- "Lexical": "today was a beautiful day we had a great time taking a long long walk in the morning the countryside was in full bloom yet the air was crisp and cold towards end of the day clouds came in forecasting much needed rain",
- "ITN": "today was a beautiful day we had a great time taking a long long walk in the morning the countryside was in full bloom yet the air was crisp and cold towards end of the day clouds came in forecasting much needed rain",
- "MaskedITN": "today was a beautiful day we had a great time taking a long long walk in the morning the countryside was in full bloom yet the air was crisp and cold towards end of the day clouds came in forecasting much needed rain",
- "Display": "Today was a beautiful day. We had a great time taking a long long walk in the morning. The countryside was in full bloom, yet the air was crisp and cold towards end of the day clouds came in forecasting much needed rain.",
- "PronunciationAssessment": {
- "AccuracyScore": 92,
- "FluencyScore": 81,
- "CompletenessScore": 93,
- "PronScore": 85.6
- },
- "Words": [
- // Words preceding "countryside" are omitted for brevity...
- {
- "Word": "countryside",
- "Offset": 66200000,
- "Duration": 7900000,
- "PronunciationAssessment": {
- "AccuracyScore": 30,
- "ErrorType": "Mispronunciation"
- },
- "Syllables": [
- {
- "Syllable": "kahn",
- "PronunciationAssessment": {
- "AccuracyScore": 3
- },
- "Offset": 66200000,
- "Duration": 2700000
- },
- {
- "Syllable": "triy",
- "PronunciationAssessment": {
- "AccuracyScore": 19
- },
- "Offset": 69000000,
- "Duration": 1100000
- },
- {
- "Syllable": "sayd",
- "PronunciationAssessment": {
- "AccuracyScore": 51
- },
- "Offset": 70200000,
- "Duration": 3900000
- }
- ],
- "Phonemes": [
- {
- "Phoneme": "k",
- "PronunciationAssessment": {
- "AccuracyScore": 0
- },
- "Offset": 66200000,
- "Duration": 900000
- },
- {
- "Phoneme": "ah",
- "PronunciationAssessment": {
- "AccuracyScore": 0
- },
- "Offset": 67200000,
- "Duration": 1000000
- },
- {
- "Phoneme": "n",
- "PronunciationAssessment": {
- "AccuracyScore": 11
- },
- "Offset": 68300000,
- "Duration": 600000
- },
- {
- "Phoneme": "t",
- "PronunciationAssessment": {
- "AccuracyScore": 16
- },
- "Offset": 69000000,
- "Duration": 300000
- },
- {
- "Phoneme": "r",
- "PronunciationAssessment": {
- "AccuracyScore": 27
- },
- "Offset": 69400000,
- "Duration": 300000
- },
- {
- "Phoneme": "iy",
- "PronunciationAssessment": {
- "AccuracyScore": 15
- },
- "Offset": 69800000,
- "Duration": 300000
- },
- {
- "Phoneme": "s",
- "PronunciationAssessment": {
- "AccuracyScore": 26
- },
- "Offset": 70200000,
- "Duration": 1700000
- },
- {
- "Phoneme": "ay",
- "PronunciationAssessment": {
- "AccuracyScore": 56
- },
- "Offset": 72000000,
- "Duration": 1300000
- },
- {
- "Phoneme": "d",
- "PronunciationAssessment": {
- "AccuracyScore": 100
- },
- "Offset": 73400000,
- "Duration": 700000
- }
- ]
- },
- // Words following "countryside" are omitted for brevity...
- ]
- }
- ]
- }
-}
-```
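For reference, here's a minimal C# sketch of how the same scores and raw JSON could be obtained through the Speech SDK. The key, region, audio file, and reference text are placeholders; see [How to use pronunciation assessment](how-to-pronunciation-assessment.md) for the full guidance.

```csharp
using System;
using System.Threading.Tasks;
using Microsoft.CognitiveServices.Speech;
using Microsoft.CognitiveServices.Speech.Audio;

class PronunciationAssessmentSketch
{
    static async Task Main()
    {
        // Placeholder key, region, audio file, and reference text.
        var speechConfig = SpeechConfig.FromSubscription("<your-speech-key>", "<your-region>");
        speechConfig.SpeechRecognitionLanguage = "en-US";

        using var audioConfig = AudioConfig.FromWavFileInput("recorded-audio.wav");
        using var recognizer = new SpeechRecognizer(speechConfig, audioConfig);

        // Score against the reference text on a 0-100 scale, down to phoneme granularity.
        var pronunciationConfig = new PronunciationAssessmentConfig(
            "Today was a beautiful day.",   // reference text
            GradingSystem.HundredMark,
            Granularity.Phoneme,
            true);                          // enable miscue detection
        pronunciationConfig.ApplyTo(recognizer);

        var result = await recognizer.RecognizeOnceAsync();
        var assessment = PronunciationAssessmentResult.FromResult(result);
        Console.WriteLine($"Accuracy: {assessment.AccuracyScore}, Fluency: {assessment.FluencyScore}, " +
                          $"Completeness: {assessment.CompletenessScore}, Overall: {assessment.PronunciationScore}");

        // The full word/syllable/phoneme breakdown (as in the JSON above) is in the raw response.
        Console.WriteLine(result.Properties.GetProperty(PropertyId.SpeechServiceResponse_JsonResult));
    }
}
```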
---
-### Assessment scores in streaming mode
-
-Pronunciation Assessment supports uninterrupted streaming mode. The Speech Studio demo allows for up to 60 minutes of recording in streaming mode for evaluation. As long as you don't press the stop recording button, the evaluation process doesn't finish and you can pause and resume evaluation conveniently.
-
-Pronunciation Assessment evaluates three aspects of pronunciation: accuracy, fluency, and completeness. At the bottom of **Assessment result**, the **Pronunciation score** is shown as an aggregated overall score that combines three subscores: **Accuracy score**, **Fluency score**, and **Completeness score**. In streaming mode, the accuracy and fluency scores vary over time throughout the recording, so Speech Studio displays an approximate overall score incrementally before the end of the evaluation, weighted only by the accuracy and fluency scores. The completeness score is calculated only at the end of the evaluation, after you press the stop button, so the final overall score is aggregated from the accuracy, fluency, and completeness scores with weighting.
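As a rough illustration of the difference between the interim and the final aggregation (the exact weights Speech Studio applies aren't documented here, so equal weights are assumed purely for the example):

```csharp
using System;

// Illustration only: interim score while recording vs. final score after stopping.
// Equal weights are an assumption; the service applies its own weighting.
double InterimScore(double accuracy, double fluency) =>
    (accuracy + fluency) / 2;                      // completeness isn't known yet

double FinalScore(double accuracy, double fluency, double completeness) =>
    (accuracy + fluency + completeness) / 3;       // completeness known after stop

Console.WriteLine(InterimScore(92, 81));           // 86.5 while still recording
Console.WriteLine(FinalScore(92, 81, 93));         // ~88.7 after pressing stop
```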
-
-Refer to the demo examples below for the whole process of evaluating pronunciation in streaming mode.
-
-**Start recording**
-
-As you start recording, the scores at the bottom begin to change from 0.
--
-**During recording**
-
-While recording a long paragraph, you can pause at any time. The evaluation continues as long as you don't press the stop button.
--
-**Finish recording**
-
-After you press the stop button, you can see **Pronunciation score**, **Accuracy score**, **Fluency score**, and **Completeness score** at the bottom.
--
-## Responsible AI
-
-An AI system includes not only the technology, but also the people who will use it, the people who will be affected by it, and the environment in which it is deployed. Read the transparency notes to learn about responsible AI use and deployment in your systems.
-
-* [Transparency note and use cases](/legal/cognitive-services/speech-service/pronunciation-assessment/transparency-note-pronunciation-assessment?context=/azure/cognitive-services/speech-service/context/context)
-* [Characteristics and limitations](/legal/cognitive-services/speech-service/pronunciation-assessment/characteristics-and-limitations-pronunciation-assessment?context=/azure/cognitive-services/speech-service/context/context)
-
-## Next steps
-- Use [pronunciation assessment with the Speech SDK](how-to-pronunciation-assessment.md)
-- Read the blog about [use cases](https://aka.ms/pronunciationassessment/techblog)
cognitive-services Quickstart Custom Commands Application https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/quickstart-custom-commands-application.md
- Title: 'Quickstart: Create a voice assistant using Custom Commands - Speech service'
-description: In this quickstart, you create and test a basic Custom Commands application in Speech Studio.
- Previously updated : 02/19/2022
-# Quickstart: Create a voice assistant with Custom Commands
--
-In this quickstart, you create and test a basic Custom Commands application using Speech Studio. You will also be able to access this application from a Windows client app.
-
-## Region Availability
-At this time, Custom Commands supports speech resources created in regions that have [voice assistant capabilities](./regions.md#voice-assistants).
-
-## Prerequisites
-
-> [!div class="checklist"]
-> * <a href="https://portal.azure.com/#create/Microsoft.CognitiveServicesSpeechServices" target="_blank">Create a Speech resource in a region that supports Custom Commands.</a> Refer to the **Region Availability** section above for a list of supported regions.
-> * Download the sample
-[Smart Room Lite](https://aka.ms/speech/cc-quickstart) json file.
-> * Download the latest version of [Windows Voice Assistant Client](https://aka.ms/speech/va-samples-wvac).
-
-## Go to the Speech Studio for Custom Commands
-
-1. In a web browser, go to [Speech Studio](https://aka.ms/speechstudio/customcommands).
-1. Enter your credentials to sign in to the portal.
-
- The default view is your list of Speech resources.
- > [!NOTE]
- > If you don't see the select resource page, you can navigate there by choosing "Resource" from the settings menu on the top bar.
-
-1. Select your Speech resource, and then select **Go to Studio**.
-1. Select **Custom Commands**.
-
- The default view is a list of the Custom Commands applications you have under your selected resource.
-
-## Import an existing application as a new Custom Commands project
-
-1. Select **New project** to create a project.
-
-1. In the **Name** box, enter project name as `Smart-Room-Lite` (or something else of your choice).
-1. In the **Language** list, select **English (United States)**.
-1. Select **Browse files** and in the browse window, select the **SmartRoomLite.json** file.
-
- > [!div class="mx-imgBorder"]
- > ![Create a project](media/custom-commands/import-project.png)
-
-1. In the **LUIS authoring resource** list, select an authoring resource. If there are no valid authoring resources, create one by selecting **Create new LUIS authoring resource**.
-
- 1. In the **Resource Name** box, enter the name of the resource.
- 1. In the **Resource Group** list, select a resource group.
- 1. In the **Location** list, select a location.
- 1. In the **Pricing Tier** list, select a tier.
-
-
- > [!NOTE]
- > You can create resource groups by entering the desired resource group name into the "Resource Group" field. The resource group will be created when **Create** is selected.
--
-1. Next, select **Create** to create your project.
-1. After the project is created, select your project.
-You should now see an overview of your new Custom Commands application.
-
-## Try out some voice commands
-1. Select **Train** at the top of the right pane.
-1. Once training is completed, select **Test** and try out the following utterances:
- - Turn on the tv
- - Set the temperature to 80 degrees
- - Turn it off
- - The tv
- - Set an alarm for 5 PM
-
-## Integrate Custom Commands application in an assistant
-Before you can access this application from outside Speech Studio, you need to publish it. Publishing requires you to configure a LUIS prediction resource.
-
-### Update prediction LUIS resource
--
-1. Select **Settings** in the left pane and select **LUIS resources** in the middle pane.
-1. Select a prediction resource, or create one by selecting **Create new resource**.
-1. Select **Save**.
-
- > [!div class="mx-imgBorder"]
- > ![Set LUIS Resources](media/custom-commands/set-luis-resources.png)
-
-> [!NOTE]
-> Because the authoring resource supports only 1,000 prediction endpoint requests per month, you must set a LUIS prediction resource before publishing your Custom Commands application.
-
-### Publish the application
-
-Select **Publish** at the top of the right pane. Once publishing completes, a new window appears. Note the **Application ID** and **Speech resource key** values from it. You need these two values to access the application from outside Speech Studio.
-
-Alternatively, you can get these values in the **Settings** > **General** section.
-
-### Access application from client
-
-This article uses the Windows Voice Assistant Client that you downloaded as part of the prerequisites. Unzip the downloaded folder.
-1. Launch **VoiceAssistantClient.exe**.
-1. Create a new publish profile and enter a value for **Connection Profile**. In the **General Settings** section, enter values for **Subscription Key** (the same as the **Speech resource key** value you saved when publishing the application), **Subscription key region**, and **Custom commands app ID**.
- > [!div class="mx-imgBorder"]
- > ![Screenshot that highlights the General Settings section for creating a WVAC profile.](media/custom-commands/create-profile.png)
-1. Select **Save and Apply Profile**.
-1. Now try out the following inputs via speech/text
- > [!div class="mx-imgBorder"]
- > ![WVAC Create profile](media/custom-commands/conversation.png)
--
-> [!TIP]
-> You can click on entries in **Activity Log** to inspect the raw responses being sent from the Custom Commands service.
-
-## Next steps
-
-In this article, you used an existing application. Next, in the [how-to sections](./how-to-develop-custom-commands-application.md), you learn how to design, develop, debug, test and integrate a Custom Commands application from scratch.
cognitive-services Multi Device Conversation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/quickstarts/multi-device-conversation.md
- Title: 'Quickstart: Multi-device Conversation - Speech service'
-description: In this quickstart, you'll learn how to create and join clients to a multi-device conversation by using the Speech SDK.
- Previously updated : 06/25/2020
-zone_pivot_groups: programming-languages-set-nine
---
-# Quickstart: Multi-device Conversation
--
-> [!NOTE]
-> The Speech SDK for Java, JavaScript, Objective-C, and Swift support Multi-device Conversation, but we haven't yet included a guide here.
--
cognitive-services Setup Platform https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/quickstarts/setup-platform.md
- Title: Install the Speech SDK
-description: In this quickstart, you'll learn how to install the Speech SDK for your preferred programming language.
- Previously updated : 09/16/2022
-zone_pivot_groups: programming-languages-speech-sdk
--
-# Install the Speech SDK
---------
-## Next steps
-
-* [Speech to text quickstart](../get-started-speech-to-text.md)
-* [Text to speech quickstart](../get-started-text-to-speech.md)
-* [Speech translation quickstart](../get-started-speech-translation.md)
cognitive-services Voice Assistants https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/quickstarts/voice-assistants.md
- Title: 'Quickstart: Create a custom voice assistant - Speech service'
-description: In this quickstart, you use the Speech SDK to create a custom voice assistant.
- Previously updated : 06/25/2020
-zone_pivot_groups: programming-languages-voice-assistants
--
-# Quickstart: Create a custom voice assistant
--
-> [!NOTE]
-> The Speech SDK for C++, JavaScript, Objective-C, Python, and Swift support custom voice assistants, but we haven't yet included a guide here.
----
cognitive-services Record Custom Voice Samples https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/record-custom-voice-samples.md
- Title: "Recording custom voice samples - Speech service"-
-description: Make a production-quality custom voice by preparing a robust script, hiring good voice talent, and recording professionally.
------ Previously updated : 10/14/2022---
-# Recording voice samples for Custom Neural Voice
-
-This article provides instructions on preparing high-quality voice samples for creating a professional voice model using the Custom Neural Voice Pro project.
-
-Creating a high-quality production custom neural voice from scratch isn't a casual undertaking. The central component of a custom neural voice is a large collection of audio samples of human speech. It's vital that these audio recordings be of high quality. Choose a voice talent who has experience making these kinds of recordings, and have them recorded by a recording engineer using professional equipment.
-
-Before you can make these recordings, though, you need a script: the words that will be spoken by your voice talent to create the audio samples.
-
-Many small but important details go into creating a professional voice recording. This guide is a roadmap for a process that will help you get good, consistent results.
-
-## Tips for preparing data for a high-quality voice
-
-A highly-natural custom neural voice depends on several factors, like the quality and size of your training data.
-
-The quality of your training data is a primary factor. For example, in the same training set, consistent volume, speaking rate, speaking pitch, and speaking style are essential to create a high-quality custom neural voice. You should also avoid background noise in the recording and make sure the script and recording match. To ensure the quality of your data, you need to follow [script selection criteria](#script-selection-criteria) and [recording requirements](#recording-your-script).
-
-Regarding the size of the training data, in most cases you can build a reasonable custom neural voice with 500 utterances. According to our tests, adding more training data in most languages doesn't necessarily improve the naturalness of the voice itself (as measured by MOS score). However, with more training data that covers more word instances, you're more likely to reduce the ratio of dissatisfactory parts of speech for the voice, such as glitches. To hear what dissatisfactory parts of speech sound like, refer to [the GitHub examples](https://github.com/Azure-Samples/Cognitive-Speech-TTS/blob/master/CustomVoice/DSAT-examples.md).
-
-In some cases, you may want a voice persona with unique characteristics. For example, a cartoon persona needs a voice with a special speaking style, or a voice that is very dynamic in intonation. For such cases, we recommend that you prepare at least 1000 (preferably 2000) utterances, and record them at a professional recording studio. To learn more about how to improve the quality of your voice model, see [characteristics and limitations for using Custom Neural Voice](/legal/cognitive-services/speech-service/custom-neural-voice/characteristics-and-limitations-custom-neural-voice?context=/azure/cognitive-services/speech-service/context/context).
-
-## Voice recording roles
-
-There are four basic roles in a custom neural voice recording project:
-
-Role|Purpose
--|-
-Voice talent |This person's voice will form the basis of the custom neural voice.
-Recording engineer |Oversees the technical aspects of the recording and operates the recording equipment.
-Director |Prepares the script and coaches the voice talent's performance.
-Editor |Finalizes the audio files and prepares them for upload to Speech Studio
-
-An individual may fill more than one role. This guide assumes that you'll be filling the director role and hiring both a voice talent and a recording engineer. If you want to make the recordings yourself, this article includes some information about the recording engineer role. The editor role isn't needed until after the recording session, and can be performed by the director or the recording engineer.
-
-## Choose your voice talent
-
-Actors with experience in voiceover, voice character work, announcing or newsreading make good voice talent. Choose voice talent whose natural voice you like. It's possible to create unique "character" voices, but it's much harder for most talent to perform them consistently, and the effort can cause voice strain. The single most important factor for choosing voice talent is consistency. Your recordings for the same voice style should all sound like they were made on the same day in the same room. You can approach this ideal through good recording practices and engineering.
-
-Your voice talent must be able to speak with consistent rate, volume level, pitch, and tone, and with clear diction. They also need to be able to control their pitch variation, emotional affect, and speech mannerisms. Recording voice samples can be more fatiguing than other kinds of voice work, so most voice talent can usually record for only two or three hours a day. Limit sessions to three or four days a week, with a day off in between if possible.
-
-Work with your voice talent to develop a persona that defines the overall sound and emotional tone of the custom neural voice, making sure to pinpoint what "neutral" sounds like for that persona. Using the Custom Neural Voice capability, you can train a model that speaks with emotion, so define the speaking styles of your persona and ask your voice talent to read the script in a way that resonates with the styles you want.
-
-For example, a persona with a naturally upbeat personality would carry a note of optimism even when they speak neutrally. However, this personality trait should be subtle and consistent. Listen to readings by existing voices to get an idea of what you're aiming for.
-
-> [!TIP]
-> Usually, you'll want to own the voice recordings you make. Your voice talent should be amenable to a work-for-hire contract for the project.
-
-## Create a script
-
-The starting point of any custom neural voice recording session is the script, which contains the utterances to be spoken by your voice talent. The term "utterances" encompasses both full sentences and shorter phrases. Building a custom neural voice requires at least 300 recorded utterances as training data.
--
-The utterances in your script can come from anywhere: fiction, non-fiction, transcripts of speeches, news reports, and anything else available in printed form. For a brief discussion of potential legal issues, see the ["Legalities"](#legalities) section. You can also write your own text.
-
-Your utterances don't need to come from the same source, the same kind of source, or have anything to do with each other. However, if you use set phrases (for example, "You have successfully logged in") in your speech application, make sure to include them in your script. It will give your custom neural voice a better chance of pronouncing those phrases well.
-
-We recommend the recording scripts include both general sentences and domain-specific sentences. For example, if you plan to record 2,000 sentences, 1,000 of them could be general sentences, another 1,000 of them could be sentences from your target domain or the use case of your application.
-
-We provide [sample scripts in the 'General', 'Chat' and 'Customer Service' domains for each language](https://github.com/Azure-Samples/Cognitive-Speech-TTS/tree/master/CustomVoice/script) to help you prepare your recording scripts. You can use these Microsoft shared scripts for your recordings directly or use them as a reference to create your own.
-
-### Script selection criteria
-
-Below are some general guidelines that you can follow to create a good corpus (recorded audio samples) for custom neural voice training.
--- Balance your script to cover different sentence types in your domain including statements, questions, exclamations, long sentences, and short sentences.-
- Each sentence should contain 4 words to 30 words, and no duplicate sentences should be included in your script.<br>
- For how to balance the different sentence types, refer to the following table:
-
- | Sentence types | Coverage |
- | : | : |
- | Statement sentences | Statement sentences should be 70-80% of the script.|
- | Question sentences | Question sentences should be about 10%-20% of your domain script, including 5%-10% of rising and 5%-10% of falling tones. |
- | Exclamation sentences| Exclamation sentences should be about 10%-20% of your script.|
- | Short word/phrase| Short word/phrase scripts should be about 10% of total utterances, with 5 to 7 words per case. |
-
- > [!NOTE]
-   > Short words/phrases should be separated with commas. They help remind your voice talent to pause briefly when reading them.
-
- Best practices include:
- - Balanced coverage for Parts of Speech, like verbs, nouns, adjectives, and so on.
- - Balanced coverage for pronunciations. Include all letters from A to Z so the Text to speech engine learns how to pronounce each letter in your style.
- - Readable, understandable, common-sense scripts for the speaker to read.
- - Avoid too many similar patterns for words/phrases, like "easy" and "easier".
- - Include different formats of numbers: address, unit, phone, quantity, date, and so on, in all sentence types.
- - Include spelling sentences if it's something your custom neural voice will be used to read. For example, "The spelling of Apple is A P P L E".
-- Don't put multiple sentences into one line/one utterance. Separate each line by utterance.
-
-- Make sure the sentence is clean. Generally, don't include too many non-standard words like numbers or abbreviations, as they're usually hard to read. Some applications may require the reading of many numbers or acronyms. In these cases, you can include these words, but normalize them in their spoken form.
-
- Below are some best practices for example:
- - For lines with abbreviations, instead of "BTW", write "by the way".
- - For lines with digits, instead of "911", write "nine one one".
- - For lines with acronyms, instead of "ABC", write "A B C".
-
- With that, make sure your voice talent pronounces these words in an expected way. Keep your script and recordings matched during the training process.
-- Your script should include many different words and sentences with different kinds of sentence lengths, structures, and moods.
-
-- Check the script carefully for errors. If possible, have someone else check it too. When you run through the script with your voice talent, you may catch more mistakes.
-
-### Difference between voice talent script and training script
-
-The training script can differ from the voice talent script, especially for scripts that contain digits, symbols, abbreviations, date, and time. Scripts prepared for the voice talent must follow native reading conventions, such as 50% and $45. The scripts used for training must be normalized to match the audio recording, such as *fifty percent* and *forty-five dollars*.
-
-> [!NOTE]
-> We provide some example scripts for the voice talent on [GitHub](https://github.com/Azure-Samples/Cognitive-Speech-TTS/tree/master/CustomVoice/script). To use the example scripts for training, you must normalize them according to the recordings of your voice talent before uploading the file.
-
-The following table shows the difference between scripts for voice talent and the normalized script for training.
-
-| Category |Voice talent script example | Training script example (normalized) |
-| | | |
-| Digits |123| one hundred and twenty-three|
-| Symbols |50%| fifty percent|
-| Abbreviation |ASAP| as soon as possible|
-| Date and time |March 3rd at 5:00 PM| March third at five PM|
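Purely for illustration, here's a small C# sketch of this kind of normalization, using the mappings from the table above. A real training script must reflect exactly how your voice talent read each line, so treat the mapping table as an example rather than a general-purpose normalizer.

```csharp
using System;
using System.Collections.Generic;

// Illustration only: map written forms from the voice talent script
// to the spoken forms expected in the normalized training script.
var spokenForms = new Dictionary<string, string>
{
    ["123"] = "one hundred and twenty-three",
    ["50%"] = "fifty percent",
    ["ASAP"] = "as soon as possible",
    ["March 3rd at 5:00 PM"] = "March third at five PM"
};

string Normalize(string line)
{
    foreach (var (written, spoken) in spokenForms)
    {
        line = line.Replace(written, spoken);
    }
    return line;
}

Console.WriteLine(Normalize("Delivery is expected March 3rd at 5:00 PM, ASAP."));
// -> "Delivery is expected March third at five PM, as soon as possible."
```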
-
-### Typical defects of a script
-
-A poor-quality script can adversely affect the training results. To achieve high-quality training results, it's crucial to avoid script defects.
-
-Script defects generally fall into the following categories:
-
-| Category | Example |
-| : | : |
-| Meaningless content. | "Colorless green ideas sleep furiously."|
-| Incomplete sentences. |- "This was my last eve" (no subject, no specific meaning) <br>- "He's obviously already funny (no quote mark in the end, it's not a complete sentence) |
-| Typo in the sentences. | - Start with a lower case<br>- No ending punctuation if needed<br> - Misspelling <br>- Lack of punctuation: no period in the end (except news title)<br>- End with symbols, except comma, question, exclamation <br>- Wrong format, such as:<br> &emsp;- 45$ (should be $45)<br> &emsp;- No space or excess space between word/punctuation |
-|Duplication in similar format, one per each pattern is enough. |- "Now is 1pm in New York"<br>- "Now is 2pm in New York"<br>- "Now is 3pm in New York"<br>- "Now is 1pm in Seattle"<br>- "Now is 1pm in Washington D.C." |
-|Uncommon foreign words: only commonly used foreign words are acceptable in the script. | In English one might use the French word "faux" in common speech, but a French expression such as "coincer la bulle" would be uncommon. |
-|Emoji or any other uncommon symbols | |
-
-### Script format
-
-The script is for use during recording sessions, so you can set it up any way you find easy to work with. Create the text file that's required by Speech Studio separately.
-
-A basic script format contains three columns:
-- The number of the utterance, starting at 1. Numbering makes it easy for everyone in the studio to refer to a particular utterance ("let's try number 356 again"). You can use the Microsoft Word paragraph numbering feature to number the rows of the table automatically.
-- A blank column where you'll write the take number or time code of each utterance to help you find it in the finished recording.
-- The text of the utterance itself.
-
- ![Sample script](media/custom-voice/script.png)
-
-> [!NOTE]
-> Most studios record in short segments known as "takes". Each take typically contains 10 to 24 utterances. Just noting the take number is sufficient to find an utterance later. If you're recording in a studio that prefers to make longer recordings, you'll want to note the time code instead. The studio will have a prominent time display.
-
-Leave enough space after each row to write notes. Be sure that no utterance is split between pages. Number the pages, and print your script on one side of the paper.
-
-Print three copies of the script: one for the voice talent, one for the recording engineer, and one for the director (you). Use a paper clip instead of staples: an experienced voice artist will separate the pages to avoid making noise as the pages are turned.
-
-### Voice talent statement
-
-To train a neural voice, you must [create a voice talent profile](how-to-custom-voice-talent.md) with an audio file recorded by the voice talent consenting to the usage of their speech data to train a custom voice model. When preparing your recording script, make sure you include the statement sentence.
-
-### Legalities
-
-Under copyright law, an actor's reading of copyrighted text might be a performance for which the author of the work should be compensated. This performance won't be recognizable in the final product, the custom neural voice. Even so, the legality of using a copyrighted work for this purpose isn't well established. Microsoft can't provide legal advice on this issue; consult your own legal counsel.
-
-Fortunately, it's possible to avoid these issues entirely. There are many sources of text you can use without permission or license.
-
-|Text source|Description|
-|-|-|
-|[CMU Arctic corpus](https://pyroomacoustics.readthedocs.io/en/pypi-release/pyroomacoustics.datasets.cmu_arctic.html)|About 1100 sentences selected from out-of-copyright works specifically for use in speech synthesis projects. An excellent starting point.|
-|Works no longer<br>under copyright|Typically works published prior to 1923. For English, [Project Gutenberg](https://www.gutenberg.org/) offers tens of thousands of such works. You may want to focus on newer works, as the language will be closer to modern English.|
-|Government&nbsp;works|Works created by the United States government are not copyrighted in the United States, though the government may claim copyright in other countries/regions.|
-|Public domain|Works for which copyright has been explicitly disclaimed or that have been dedicated to the public domain. It may not be possible to waive copyright entirely in some jurisdictions.|
-|Permissively licensed works|Works distributed under a license like Creative Commons or the GNU Free Documentation License (GFDL). Wikipedia uses the GFDL. Some licenses, however, may impose restrictions on performance of the licensed content that may impact the creation of a custom neural voice model, so read the license carefully.|
-
-## Recording your script
-
-Record your script at a professional recording studio that specializes in voice work. They'll have a recording booth, the right equipment, and the right people to operate it. It's recommended not to skimp on recording.
-
-Discuss your project with the studio's recording engineer and listen to their advice. The recording should have little or no dynamic range compression (maximum of 4:1). It's critical that the audio has consistent volume and a high signal-to-noise ratio, while being free of unwanted sounds.
-
-### Recording requirements
-
-To achieve high-quality training results, follow these requirements during recording and data preparation:
-
-- Clear and well pronounced
-- Natural speed: not too slow or too fast between audio files.
-- Appropriate volume, prosody, and breaks: stable within the same sentence and between sentences, with correct breaks for punctuation.
-- No noise during recording
-- Fits your persona design
-- No wrong accent: fits the target design
-- No wrong pronunciation
-
-As a best practice, you can refer to the following specification when preparing your audio samples.
-
-| Property | Value |
-| : | : |
-| File format | *.wav, Mono |
-| Sampling rate | 24 KHz |
-| Sample format | 16 bit, PCM |
-| Peak volume levels | -3 dB to -6 dB |
-| SNR | > 35 dB |
-| Silence | - There should be some silence (100 ms recommended) at the beginning and end, but no longer than 200 ms<br>- Silence between words or phrases < -30 dB<br>- Silence in the wave after the last word is spoken < -60 dB |
-| Environment noise, echo | - The level of noise at start of the wave before speaking < -70 dB |
-
-> [!Note]
-> You can record at a higher sampling rate and bit depth, for example 48 KHz 24-bit PCM. During custom neural voice training, we'll downsample it to 24 KHz 16-bit PCM automatically.
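If you want to spot-check candidate files against this specification before uploading, a small C# sketch like the following can read the header of a canonically formatted RIFF/PCM .wav file. The file path is a placeholder, and files with extra metadata chunks (for example, LIST chunks) need a full chunk parser rather than fixed offsets.

```csharp
using System;
using System.IO;

// Minimal header check for the canonical 44-byte RIFF/PCM WAV layout.
class WavCheck
{
    static void Main(string[] args)
    {
        using var reader = new BinaryReader(File.OpenRead(args[0]));
        reader.BaseStream.Seek(22, SeekOrigin.Begin);
        short channels = reader.ReadInt16();      // offset 22: channel count
        int sampleRate = reader.ReadInt32();      // offset 24: samples per second
        reader.BaseStream.Seek(34, SeekOrigin.Begin);
        short bitsPerSample = reader.ReadInt16(); // offset 34: bit depth

        Console.WriteLine($"{channels} channel(s), {sampleRate} Hz, {bitsPerSample}-bit");
        bool ok = channels == 1 && sampleRate >= 24000 && bitsPerSample == 16;
        Console.WriteLine(ok ? "Meets the recommended format." : "Re-export as mono 24 KHz 16-bit PCM.");
    }
}
```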
-
-A higher signal-to-noise ratio (SNR) indicates lower noise in your audio. You can typically reach a 35+ SNR by recording at professional studios. Audio with an SNR below 20 can result in obvious noise in your generated voice.
-
-Consider re-recording any utterances with low pronunciation scores or poor signal-to-noise ratios. If you can't re-record, consider excluding those utterances from your data.
-
-### Typical audio errors
-
-For high-quality training results, avoiding audio errors is highly recommended. Audio errors usually fall into the following categories:
-- Audio file name doesn't match the script ID.
-- WAV file has an invalid format and can't be read.
-- Audio sampling rate is lower than 16 KHz. It's recommended that the .wav file sampling rate be equal to or higher than 24 KHz for a high-quality neural voice.
-- Volume peak isn't within the range of -3 dB (70% of max volume) to -6 dB (50%).
-- Waveform overflow: the waveform is cut at its peak value and is thus not complete.
-
- ![waveform overflow](media/custom-voice/overflow.png)
--- The silent parts of the recording aren't clean; you can hear sounds such as ambient noise, mouth noise and echo.-
-   For example, the following audio contains environment noise between utterances.
-
- ![environment noise](media/custom-voice/environment-noise.png)
-
-   The following sample contains signs of DC offset or echo.
-
- ![DC offset or echo](media/custom-voice/dc-offset-noise.png)
-- The overall volume is too low. Your data will be tagged as an issue if the volume is lower than -18 dB (10% of max volume). Make sure all audio files are consistent at the same volume level.
-
- ![overall volume](media/custom-voice/overall-volume.png)
--- No silence before the first word or after the last word. Also, the start or end silence shouldn't be longer than 200 ms or shorter than 100 ms.-
- ![No silence](media/custom-voice/no-silence.png)
-
-### Do it yourself
-
-If you want to make the recording yourself, instead of going into a recording studio, here's a short primer. Thanks to the rise of home recording and podcasting, it's easier than ever to find good recording advice and resources online.
-
-Your "recording booth" should be a small room with no noticeable echo or "room tone." It should be as quiet and soundproof as possible. Drapes on the walls can be used to reduce echo and neutralize or "deaden" the sound of the room.
-
-Use a high-quality studio condenser microphone ("mic" for short) intended for recording voice. Sennheiser, AKG, and even newer Zoom mics can yield good results. You can buy a mic, or rent one from a local audio-visual rental firm. Look for one with a USB interface. This type of mic conveniently combines the microphone element, preamp, and analog-to-digital converter into one package, simplifying hookup.
-
-You may also use an analog microphone. Many rental houses offer "vintage" microphones known for their voice character. Note that professional analog gear uses balanced XLR connectors, rather than the 1/4-inch plug that's used in consumer equipment. If you go analog, you'll also need a preamp and a computer audio interface with these connectors.
-
-Install the microphone on a stand or boom, and install a pop filter in front of the microphone to eliminate noise from "plosive" consonants like "p" and "b." Some microphones come with a suspension mount that isolates them from vibrations in the stand, which is helpful.
-
-The voice talent must stay at a consistent distance from the microphone. Use tape on the floor to mark where they should stand. If the talent prefers to sit, take special care to monitor mic distance and avoid chair noise.
-
-Use a stand to hold the script. Avoid angling the stand so that it can reflect sound toward the microphone.
-
-The person operating the recording equipment, the recording engineer, should be in a separate room from the talent, with some way to talk to the talent in the recording booth (a *talkback circuit*).
-
-The recording should contain as little noise as possible, with a goal of -80 dB.
-
-Listen closely to a recording of silence in your "booth," figure out where any noise is coming from, and eliminate the cause. Common sources of noise are air vents, fluorescent light ballasts, traffic on nearby roads, and equipment fans (even notebook PCs might have fans). Microphones and cables can pick up electrical noise from nearby AC wiring, usually a hum or buzz. A buzz can also be caused by a *ground loop*, which is caused by having equipment plugged into more than one electrical circuit.
-
-> [!TIP]
-> In some cases, you might be able to use an equalizer or a noise reduction software plug-in to help remove noise from your recordings, although it is always best to stop it at its source.
-
-Set levels so that most of the available dynamic range of digital recording is used without overdriving. That means set the audio loud, but not so loud that it becomes distorted. An example of the waveform of a good recording is shown in the following image:
-
-![A good recording waveform](media/custom-voice/good-recording.png)
-
-Here, most of the range (height) is used, but the highest peaks of the signal don't reach the top or bottom of the window. You can also see that the silence in the recording approximates a thin horizontal line, indicating a low noise floor. This recording has acceptable dynamic range and signal-to-noise ratio.
-
-Record directly into the computer via a high-quality audio interface or a USB port, depending on the mic you're using. For analog, keep the audio chain simple: mic, preamp, audio interface, computer. You can license both [Avid Pro Tools](https://www.avid.com/en/pro-tools) and [Adobe Audition](https://www.adobe.com/products/audition.html) monthly at a reasonable cost. If your budget is extremely tight, try the free [Audacity](https://www.audacityteam.org/).
-
-Record at 44.1 KHz 16 bit monophonic (CD quality) or better. Current state-of-the-art is 48 KHz 24 bit, if your equipment supports it. You'll down-sample your audio to 24 KHz 16-bit before you submit it to Speech Studio. Still, it pays to have a high-quality original recording in the event that edits are needed.
-
-Ideally, have different people serve in the roles of director, engineer, and talent. Don't try to do it all yourself. In a pinch, one person can be both the director and the engineer.
-
-### Before the session
-
-To avoid wasting studio time, run through the script with your voice talent before the recording session. While the voice talent becomes familiar with the text, they can clarify the pronunciation of any unfamiliar words.
-
-> [!NOTE]
-> Most recording studios offer electronic display of scripts in the recording booth. In this case, type your run-through notes directly into the script's document. You'll still want a paper copy to take notes on during the session, though. Most engineers will want a hard copy, too. And you'll still want a third printed copy as a backup for the talent in case the computer is down.
-
-Your voice talent might ask which word you want emphasized in an utterance (the "operative word"). Tell them that you want a natural reading with no particular emphasis. Emphasis can be added when speech is synthesized; it shouldn't be a part of the original recording.
-
-Direct the talent to pronounce words distinctly. Every word of the script should be pronounced as written. Sounds shouldn't be omitted or slurred together, as is common in casual speech, *unless they have been written that way in the script*.
-
-|Written text|Unwanted casual pronunciation|
-|-|-|
-|never going to give you up|never gonna give you up|
-|there are four lights|there're four lights|
-|how's the weather today|how's th' weather today|
-|say hello to my little friend|say hello to my lil' friend|
-
-The talent should *not* add distinct pauses between words. The sentence should still flow naturally, even while sounding a little formal. This fine distinction might take practice to get right.
-
-### The recording session
-
-Create a reference recording, or *match file,* of a typical utterance at the beginning of the session. Ask the talent to repeat this line every page or so. Each time, compare the new recording to the reference. This practice helps the talent remain consistent in volume, tempo, pitch, and intonation. Meanwhile, the engineer can use the match file as a reference for levels and overall consistency of sound.
-
-The match file is especially important when you resume recording after a break or on another day. Play it a few times for the talent and have them repeat it each time until they're matching well.
-
-Coach your talent to take a deep breath and pause for a moment before each utterance. Record a couple of seconds of silence between utterances. Words should be pronounced the same way each time they appear, considering context. For example, "record" as a verb is pronounced differently from "record" as a noun.
-
-Record approximately five seconds of silence before the first recording to capture the "room tone". This practice helps Speech Studio compensate for noise in the recordings.
-
-> [!TIP]
-> All you need to capture is the voice talent, so you can make a monophonic (single-channel) recording of just their lines. However, if you record in stereo, you can use the second channel to record the chatter in the control room to capture discussion of particular lines or takes. Remove this track from the version that's uploaded to Speech Studio.
-
-Listen closely, using headphones, to the voice talent's performance. You're looking for good but natural diction, correct pronunciation, and a lack of unwanted sounds. Don't hesitate to ask your talent to re-record an utterance that doesn't meet these standards.
-
-> [!TIP]
-> If you are using a large number of utterances, a single utterance might not have a noticeable effect on the resultant custom neural voice. It might be more expedient to simply note any utterances with issues, exclude them from your dataset, and see how your custom neural voice turns out. You can always go back to the studio and record the missed samples later.
-
-Note the take number or time code on your script for each utterance. Ask the engineer to mark each utterance in the recording's metadata or cue sheet as well.
-
-Take regular breaks and provide a beverage to help your voice talent keep their voice in good shape.
-
-### After the session
-
-Modern recording studios run on computers. At the end of the session, you receive one or more audio files, not a tape. These files will probably be in WAV or AIFF format in CD quality (44.1 KHz 16-bit) or better. 24 KHz 16-bit is common and desirable. The default sampling rate for a custom neural voice is 24 KHz, so it's recommended that you use a sample rate of 24 KHz for your training data. Higher sampling rates, such as 96 KHz, are generally not needed.
-
-Speech Studio requires each provided utterance to be in its own file. Each audio file delivered by the studio contains multiple utterances. So the primary post-production task is to split up the recordings and prepare them for submission. The recording engineer might have placed markers in the file (or provided a separate cue sheet) to indicate where each utterance starts.
-
-Use your notes to find the exact takes you want, and then use a sound editing utility, such as [Avid Pro Tools](https://www.avid.com/en/pro-tools), [Adobe Audition](https://www.adobe.com/products/audition.html), or the free [Audacity](https://www.audacityteam.org/), to copy each utterance into a new file.
-
-Listen to each file carefully. At this stage, you can edit out small unwanted sounds that you missed during recording, like a slight lip smack before a line, but be careful not to remove any actual speech. If you can't fix a file, remove it from your dataset and note that you've done so.
-
-Convert each file to 16 bits and a sample rate of 24 KHz before saving. If you recorded the studio chatter, remove the second channel. Save each file in WAV format, naming the files with the utterance number from your script.
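One possible way to do this conversion in code is sketched below. It assumes the third-party NAudio library, the file names are placeholders, and it assumes the voice is on the left channel; an audio editor or command-line tool works just as well.

```csharp
using NAudio.Wave;
using NAudio.Wave.SampleProviders;

// Sketch using the NAudio NuGet package: convert a studio take to
// mono, 24 kHz, 16-bit PCM for upload.
using var reader = new AudioFileReader("utterance-0001-original.wav");   // decodes to 32-bit float samples

// If the studio chatter was recorded on the second channel, keep only the
// first (voice) channel. This assumes the voice is on the left channel.
ISampleProvider voice = reader.WaveFormat.Channels == 2
    ? new StereoToMonoSampleProvider(reader) { LeftVolume = 1.0f, RightVolume = 0.0f }
    : reader;

var resampled = new WdlResamplingSampleProvider(voice, 24000);            // resample to 24 kHz
WaveFileWriter.CreateWaveFile16("utterance-0001.wav", resampled);         // write 16-bit PCM WAV
```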
-
-Finally, create the transcript that associates each WAV file with a text version of the corresponding utterance. [Train your voice model](./how-to-custom-voice-create-voice.md) includes details of the required format. You can copy the text directly from your script. Then create a Zip file of the WAV files and the text transcript.
-
-Archive the original recordings in a safe place in case you need them later. Preserve your script and notes, too.
-
-## Next steps
-
-You're ready to upload your recordings and create your custom neural voice.
-
-> [!div class="nextstepaction"]
-> [Train your voice model](./how-to-custom-voice-create-voice.md)
cognitive-services Regions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/regions.md
- Title: Regions - Speech service
-description: A list of available regions and endpoints for the Speech service, including speech to text, text to speech, and speech translation.
- Previously updated : 09/16/2022
-# Speech service supported regions
-
-The Speech service allows your application to convert audio to text, perform speech translation, and convert text to speech. The service is available in multiple regions with unique endpoints for the Speech SDK and REST APIs. You can perform custom configurations to your speech experience, for all regions, at the [Speech Studio](https://aka.ms/speechstudio/).
-
-Keep in mind the following points:
-
-* If your application uses a [Speech SDK](speech-sdk.md), you provide the region identifier, such as `westus`, when you create a `SpeechConfig`, as shown in the sketch after this list. Make sure the region matches the region of your subscription.
-* If your application uses one of the Speech service REST APIs, the region is part of the endpoint URI you use when making requests.
-* Keys created for a region are valid only in that region. If you attempt to use them with other regions, you get authentication errors.
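For example, here's a minimal C# snippet that targets a specific region with the Speech SDK. The key is a placeholder, and `westus` is used only as an example region identifier.

```csharp
using Microsoft.CognitiveServices.Speech;

// The region identifier must match the region of the Speech resource that issued the key.
var speechConfig = SpeechConfig.FromSubscription("<your-speech-key>", "westus");
speechConfig.SpeechRecognitionLanguage = "en-US";
```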
-
-> [!NOTE]
-> Speech service doesn't store or process customer data outside the region the customer deploys the service instance in.
-
-## Speech service
-
-The following regions are supported for Speech service features such as speech to text, text to speech, pronunciation assessment, and translation. The geographies are listed in alphabetical order.
-
-| Geography | Region | Region identifier |
-| -- | -- | -- |
-| Africa | South Africa North | `southafricanorth` <sup>6</sup>|
-| Asia Pacific | East Asia | `eastasia` <sup>5</sup>|
-| Asia Pacific | Southeast Asia | `southeastasia` <sup>1,2,3,4,5</sup>|
-| Asia Pacific | Australia East | `australiaeast` <sup>1,2,3,4</sup>|
-| Asia Pacific | Central India | `centralindia` <sup>1,2,3,4,5</sup>|
-| Asia Pacific | Japan East | `japaneast` <sup>2,5</sup>|
-| Asia Pacific | Japan West | `japanwest` |
-| Asia Pacific | Korea Central | `koreacentral` <sup>2</sup>|
-| Canada | Canada Central | `canadacentral` <sup>1</sup>|
-| Europe | North Europe | `northeurope` <sup>1,2,4,5</sup>|
-| Europe | West Europe | `westeurope` <sup>1,2,3,4,5</sup>|
-| Europe | France Central | `francecentral` |
-| Europe | Germany West Central | `germanywestcentral` |
-| Europe | Norway East | `norwayeast` |
-| Europe | Switzerland North | `switzerlandnorth` <sup>6</sup>|
-| Europe | Switzerland West | `switzerlandwest` |
-| Europe | UK South | `uksouth` <sup>1,2,3,4</sup>|
-| Middle East | UAE North | `uaenorth` <sup>6</sup>|
-| South America | Brazil South | `brazilsouth` <sup>6</sup>|
-| US | Central US | `centralus` |
-| US | East US | `eastus` <sup>1,2,3,4,5</sup>|
-| US | East US 2 | `eastus2` <sup>1,2,4,5</sup>|
-| US | North Central US | `northcentralus` <sup>4,6</sup>|
-| US | South Central US | `southcentralus` <sup>1,2,3,4,5,6</sup>|
-| US | West Central US | `westcentralus` <sup>5</sup>|
-| US | West US | `westus` <sup>2,5</sup>|
-| US | West US 2 | `westus2` <sup>1,2,4,5</sup>|
-| US | West US 3 | `westus3` |
-
-<sup>1</sup> The region has dedicated hardware for Custom Speech training. If you plan to train a custom model with audio data, use one of the regions with dedicated hardware for faster training. Then you can [copy the trained model](how-to-custom-speech-train-model.md#copy-a-model) to another region.
-
-<sup>2</sup> The region is available for Custom Neural Voice training. You can copy a trained neural voice model to other regions for deployment.
-
-<sup>3</sup> The Long Audio API is available in the region.
-
-<sup>4</sup> The region supports custom keyword advanced models.
-
-<sup>5</sup> The region supports keyword verification.
-
-<sup>6</sup> The region does not support Speaker Recognition.
-
-## Intent recognition
-
-Available regions for intent recognition via the Speech SDK are in the following table.
-
-| Global region | Region | Region identifier |
-| - | - | -- |
-| Asia | East Asia | `eastasia` |
-| Asia | Southeast Asia | `southeastasia` |
-| Australia | Australia East | `australiaeast` |
-| Europe | North Europe | `northeurope` |
-| Europe | West Europe | `westeurope` |
-| North America | East US | `eastus` |
-| North America | East US 2 | `eastus2` |
-| North America | South Central US | `southcentralus` |
-| North America | West Central US | `westcentralus` |
-| North America | West US | `westus` |
-| North America | West US 2 | `westus2` |
-| South America | Brazil South | `brazilsouth` |
-
-This is a subset of the publishing regions supported by the [Language Understanding service (LUIS)](../luis/luis-reference-regions.md).
-
-## Voice assistants
-
-The [Speech SDK](speech-sdk.md) supports voice assistant capabilities through [Direct Line Speech](./direct-line-speech.md) for regions in the following table.
-
-| Global region | Region | Region identifier |
-| - | - | -- |
-| North America | West US | `westus` |
-| North America | West US 2 | `westus2` |
-| North America | East US | `eastus` |
-| North America | East US 2 | `eastus2` |
-| North America | West Central US | `westcentralus` |
-| North America | South Central US | `southcentralus` |
-| Europe | West Europe | `westeurope` |
-| Europe | North Europe | `northeurope` |
-| Asia | East Asia | `eastasia` |
-| Asia | Southeast Asia | `southeastasia` |
-| India | Central India | `centralindia` |
cognitive-services Releasenotes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/releasenotes.md
- Title: What's new - Speech service
-description: Find out about new releases and features for the Azure Cognitive Service for Speech.
- Previously updated : 03/16/2023
-# What's new in Azure Cognitive Service for Speech?
-
-Azure Cognitive Service for Speech is updated on an ongoing basis. To stay up-to-date with recent developments, this article provides you with information about new releases and features.
-
-## Recent highlights
-
-* Speech SDK 1.30.0 was released in July 2023.
-* Speech to text and text to speech container versions were updated in March 2023.
-* Some Speech Studio [scenarios](speech-studio-overview.md#speech-studio-scenarios) are available to try without an Azure subscription.
-* Custom Speech to text container disconnected mode was released in January 2023.
-* Text to speech Batch synthesis API is available in public preview.
-* Speech to text REST API version 3.1 is generally available.
-
-## Release notes
-
-**Choose a service or resource**
-
-# [SDK](#tab/speech-sdk)
--
-# [CLI](#tab/speech-cli)
--
-# [Text to speech service](#tab/text-to-speech)
--
-# [Speech to text service](#tab/speech-to-text)
--
-# [Containers](#tab/containers)
--
-***
cognitive-services Resiliency And Recovery Plan https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/resiliency-and-recovery-plan.md
- Title: How to back up and recover speech customer resources
-description: Learn how to prepare for service outages with Custom Speech and Custom Voice.
- Previously updated : 07/28/2021
-# Back up and recover speech customer resources
-
-The Speech service is [available in various regions](./regions.md). Speech resource keys are tied to a single region. When you acquire a key, you select a specific region, where your data, model and deployments reside.
-
-Datasets for customer-created data assets, such as customized speech models, custom voice fonts and speaker recognition voice profiles, are also **available only within the service-deployed region**. Such assets are:
-
-**Custom Speech**
-- Training audio/text data
-- Test audio/text data
-- Customized speech models
-- Log data
-
-**Custom Voice**
-- Training audio/text data
-- Test audio/text data
-- Custom voice fonts
-
-**Speaker Recognition**
-- Speaker enrollment audio
-- Speaker voice signature
-
-While some customers use our default endpoints to transcribe audio or standard voices for speech synthesis, other customers create assets for customization.
-
-These assets are backed up regularly and automatically by the repositories themselves, so **no data loss will occur** if a region becomes unavailable. However, you must take steps to ensure service continuity if there's a region outage.
-
-## How to monitor service availability
-
-If you use the default endpoints, you should configure your client code to monitor for errors. If errors persist, be prepared to redirect to another region where you have a Speech resource.
-
-Follow these steps to configure your client to monitor for errors:
-
-1. Find the [list of regionally available endpoints in our documentation](./rest-speech-to-text.md).
-2. Select a primary and one or more secondary/backup regions from the list.
-3. From Azure portal, create Speech service resources for each region.
- - If you have set a specific quota, you may also consider setting the same quota in the backup regions. See details in [Speech service Quotas and Limits](./speech-services-quotas-and-limits.md).
-
-4. Each region has its own STS token service. For the primary region and any backup regions your client configuration file needs to know the:
- - Regional Speech service endpoints
- - [Regional key and the region code](./rest-speech-to-text.md)
-
-5. Configure your code to monitor for connectivity errors (typically connection timeouts and service unavailability errors). Here's sample code in C#: [GitHub: Adding Sample for showing a possible candidate for switching regions](https://github.com/Azure-Samples/cognitive-services-speech-sdk/blob/fa6428a0837779cbeae172688e0286625e340942/samples/csharp/sharedcontent/console/speech_recognition_samples.cs#L965).
-
- 1. Because networks experience transient errors, retry after a single connectivity failure.
- 2. If errors persist, redirect traffic to the STS token service and Speech service endpoint of your backup region. (For text to speech, see the sample code: [GitHub: TTS public voice switching region](https://github.com/Azure-Samples/cognitive-services-speech-sdk/blob/master/samples/csharp/sharedcontent/console/speech_synthesis_samples.cs#L880).)
-
-For this usage type, recovery from regional failures can be nearly instantaneous and low cost. All that's required is implementing this failover logic on the client side. Assuming no backup of the audio stream, any data loss will be minimal.
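
As a rough illustration of steps 4 and 5, here's a minimal C# sketch (not the linked sample) that tries the primary region first and falls back to a secondary region on connectivity-style errors. The keys, region names, and error classification are hypothetical placeholders; adapt them to your own configuration.

```csharp
using System;
using System.Threading.Tasks;
using Microsoft.CognitiveServices.Speech;

class RegionFailoverExample
{
    // Hypothetical primary and secondary Speech resources.
    static readonly (string Key, string Region)[] Regions =
    {
        ("PRIMARY_RESOURCE_KEY", "westus"),
        ("SECONDARY_RESOURCE_KEY", "eastus")
    };

    static async Task<SpeechRecognitionResult> RecognizeWithFailoverAsync()
    {
        foreach (var (key, region) in Regions)
        {
            var config = SpeechConfig.FromSubscription(key, region);
            using var recognizer = new SpeechRecognizer(config);

            var result = await recognizer.RecognizeOnceAsync();
            if (result.Reason != ResultReason.Canceled)
            {
                return result; // Recognized (or NoMatch); no failover needed.
            }

            var details = CancellationDetails.FromResult(result);
            var isConnectivityError =
                details.ErrorCode == CancellationErrorCode.ConnectionFailure ||
                details.ErrorCode == CancellationErrorCode.ServiceTimeout ||
                details.ErrorCode == CancellationErrorCode.ServiceUnavailable;

            if (!isConnectivityError)
            {
                return result; // Not a connectivity issue, so don't switch regions.
            }
            // Otherwise fall through and try the next (backup) region.
        }

        throw new InvalidOperationException("All configured regions are unavailable.");
    }
}
```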
-
-## Custom endpoint recovery
-
-Data assets, models or deployments in one region can't be made visible or accessible in any other region.
-
-You should create Speech service resources in both a main and a secondary region by following the same steps as used for default endpoints.
-
-### Custom Speech
-
-Custom Speech service doesn't support automatic failover. We suggest the following steps to prepare for manual or automatic failover implemented in your client code. In these steps, you replicate custom models in a secondary region. With this preparation, your client code can switch to a secondary region when the primary region fails.
-
-1. Create your custom model in one main region (Primary).
-2. Run the [Models_CopyTo](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/operations/Models_CopyTo) operation to replicate the custom model to all prepared regions (Secondary).
-3. Go to Speech Studio to load the copied model and create a new endpoint in the secondary region. See how to deploy a new model in [Deploy a Custom Speech model](./how-to-custom-speech-deploy-model.md).
- - If you have set a specific quota, also consider setting the same quota in the backup regions. See details in [Speech service Quotas and Limits](./speech-services-quotas-and-limits.md).
-4. Configure your client to fail over on persistent errors as with the default endpoints usage.
-
-Your client code can monitor the availability of your deployed models in your primary region, and redirect audio traffic to the secondary region when the primary region fails. If you don't require real-time failover, you can still follow these steps to prepare for a manual failover.
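
For step 2, the copy itself is a single REST call. Here's a hedged C# sketch of invoking the Models_CopyTo operation; the host, model ID, keys, and request body shape are assumptions based on the v3.1 reference, so verify them against the linked Models_CopyTo documentation.

```csharp
using System;
using System.Net.Http;
using System.Text;
using System.Threading.Tasks;

class CopyModelExample
{
    static async Task Main()
    {
        // Hypothetical values: source (primary) region and key, target (secondary) key, and model ID.
        var sourceRegion = "westus";
        var sourceKey = "PRIMARY_RESOURCE_KEY";
        var modelId = "YOUR_CUSTOM_MODEL_ID";
        var body = "{\"targetSubscriptionKey\": \"SECONDARY_RESOURCE_KEY\"}";

        using var client = new HttpClient();
        using var request = new HttpRequestMessage(
            HttpMethod.Post,
            $"https://{sourceRegion}.api.cognitive.microsoft.com/speechtotext/v3.1/models/{modelId}:copyto");
        request.Headers.Add("Ocp-Apim-Subscription-Key", sourceKey);
        request.Content = new StringContent(body, Encoding.UTF8, "application/json");

        using var response = await client.SendAsync(request);
        Console.WriteLine($"{(int)response.StatusCode} {await response.Content.ReadAsStringAsync()}");
    }
}
```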
-
-#### Offline failover
-
-If you don't require real-time failover, you can import your data, then create and deploy your models in the secondary region at a later time, with the understanding that these tasks take time to complete.
-
-#### Failover time requirements
-
-This section provides general guidance about timing. The times were recorded to estimate offline failover using a [representative test data set](https://github.com/microsoft/Cognitive-Custom-Speech-Service).
-
-- Data upload to new region: **15 mins**
-- Acoustic/language model creation: **6 hours (depending on the data volume)**
-- Model evaluation: **30 mins**
-- Endpoint deployment: **10 mins**
-- Model copy API call: **10 mins**
-- Client code reconfiguration and deployment: **Depending on the client system**
-
-It's nonetheless advisable to create keys for a primary and secondary region for production models with real-time requirements.
-
-### Custom Voice
-
-Custom Voice doesn't support automatic failover. Handle real-time synthesis failures with these two options.
-
-**Option 1: Fail over to public voice in the same region.**
-
-When custom voice real-time synthesis fails, fail over to a public voice (client sample code: [GitHub: custom voice failover to public voice](https://github.com/Azure-Samples/cognitive-services-speech-sdk/blob/master/samples/csharp/sharedcontent/console/speech_synthesis_samples.cs#L899)).
-
-Check the [public voices available](language-support.md?tabs=tts). You can also change the sample code above if you want to fail over to a different voice or to a voice in a different region.
-
-**Option 2: Fail over to custom voice on another region.**
-
-1. Create and deploy your custom voice in one main region (primary).
-2. Copy your custom voice model to another region (the secondary region) in [Speech Studio](https://aka.ms/speechstudio/).
-3. Go to Speech Studio and switch to the Speech resource in the secondary region. Load the copied model and create a new endpoint.
- - Voice model deployment usually finishes **in 3 minutes**.
- - Each endpoint is subject to extra charges. [Check the pricing for model hosting here](https://azure.microsoft.com/pricing/details/cognitive-services/speech-services/).
-
-4. Configure your client to fail over to the secondary region. See sample code in C#: [GitHub: custom voice failover to secondary region](https://github.com/Azure-Samples/cognitive-services-speech-sdk/blob/master/samples/csharp/sharedcontent/console/speech_synthesis_samples.cs#L920).
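
Here's a minimal sketch of option 2, assuming hypothetical keys, endpoint IDs, and a custom voice name; see the linked GitHub sample for the fuller implementation.

```csharp
using System;
using System.Threading.Tasks;
using Microsoft.CognitiveServices.Speech;

class CustomVoiceFailoverExample
{
    static async Task Main()
    {
        // Hypothetical primary and secondary deployments of the same custom voice.
        var deployments = new[]
        {
            (Key: "PRIMARY_RESOURCE_KEY", Region: "westus", EndpointId: "PRIMARY_ENDPOINT_ID"),
            (Key: "SECONDARY_RESOURCE_KEY", Region: "eastus", EndpointId: "SECONDARY_ENDPOINT_ID")
        };

        foreach (var deployment in deployments)
        {
            var config = SpeechConfig.FromSubscription(deployment.Key, deployment.Region);
            config.EndpointId = deployment.EndpointId;
            config.SpeechSynthesisVoiceName = "YourCustomVoiceName"; // Hypothetical custom voice name.

            using var synthesizer = new SpeechSynthesizer(config);
            var result = await synthesizer.SpeakTextAsync("Hello from your custom voice.");

            if (result.Reason == ResultReason.SynthesizingAudioCompleted)
            {
                Console.WriteLine($"Synthesis succeeded in {deployment.Region}.");
                return;
            }

            var details = SpeechSynthesisCancellationDetails.FromResult(result);
            Console.WriteLine($"Synthesis failed in {deployment.Region} ({details.ErrorCode}); trying the next region.");
        }
    }
}
```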
-
-### Speaker Recognition
-
-Speaker Recognition uses [Azure paired regions](../../availability-zones/cross-region-replication-azure.md) to automatically fail over operations. Speaker enrollments and voice signatures are backed up regularly to prevent data loss and to be used if there's an outage.
-
-During an outage, Speaker Recognition service will automatically fail over to a paired region and use the backed-up data to continue processing requests until the main region is back online.
cognitive-services Rest Speech To Text Short https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/rest-speech-to-text-short.md
- Title: Speech to text REST API for short audio - Speech service-
-description: Learn how to use Speech to text REST API for short audio to convert speech to text.
------ Previously updated : 05/02/2023----
-# Speech to text REST API for short audio
-
-Use cases for the Speech to text REST API for short audio are limited. Use it only in cases where you can't use the [Speech SDK](speech-sdk.md).
-
-Before you use the Speech to text REST API for short audio, consider the following limitations:
-
-* Requests that use the REST API for short audio and transmit audio directly can contain no more than 60 seconds of audio. The input [audio formats](#audio-formats) are more limited compared to the [Speech SDK](speech-sdk.md).
-* The REST API for short audio returns only final results. It doesn't provide partial results.
-* [Speech translation](speech-translation.md) is not supported via REST API for short audio. You need to use [Speech SDK](speech-sdk.md).
-* [Batch transcription](batch-transcription.md) and [Custom Speech](custom-speech-overview.md) are not supported via REST API for short audio. You should always use the [Speech to text REST API](rest-speech-to-text.md) for batch transcription and Custom Speech.
-
-Before you use the Speech to text REST API for short audio, understand that you need to complete a token exchange as part of authentication to access the service. For more information, see [Authentication](#authentication).
-
-## Regions and endpoints
-
-The endpoint for the REST API for short audio has this format:
-
-```
-https://<REGION_IDENTIFIER>.stt.speech.microsoft.com/speech/recognition/conversation/cognitiveservices/v1
-```
-
-Replace `<REGION_IDENTIFIER>` with the identifier that matches the [region](regions.md) of your Speech resource.
-
-> [!NOTE]
-> For Azure Government and Azure China endpoints, see [this article about sovereign clouds](sovereign-clouds.md).
-
-## Audio formats
-
-Audio is sent in the body of the HTTP `POST` request. It must be in one of the formats in this table:
-
-| Format | Codec | Bit rate | Sample rate |
-|--|-|-|--|
-| WAV | PCM | 256 kbps | 16 kHz, mono |
-| OGG | OPUS | 256 kbps | 16 kHz, mono |
-
-> [!NOTE]
-> The preceding formats are supported through the REST API for short audio and WebSocket in the Speech service. The [Speech SDK](speech-sdk.md) supports the WAV format with PCM codec as well as [other formats](how-to-use-codec-compressed-audio-input-streams.md).
-
-## Request headers
-
-This table lists required and optional headers for speech to text requests:
-
-|Header| Description | Required or optional |
-||-||
-| `Ocp-Apim-Subscription-Key` | Your resource key for the Speech service. | Either this header or `Authorization` is required. |
-| `Authorization` | An authorization token preceded by the word `Bearer`. For more information, see [Authentication](#authentication). | Either this header or `Ocp-Apim-Subscription-Key` is required. |
-| `Pronunciation-Assessment` | Specifies the parameters for showing pronunciation scores in recognition results. These scores assess the pronunciation quality of speech input, with indicators like accuracy, fluency, and completeness. <br><br>This parameter is a Base64-encoded JSON that contains multiple detailed parameters. To learn how to build this header, see [Pronunciation assessment parameters](#pronunciation-assessment-parameters). | Optional |
-| `Content-type` | Describes the format and codec of the provided audio data. Accepted values are `audio/wav; codecs=audio/pcm; samplerate=16000` and `audio/ogg; codecs=opus`. | Required |
-| `Transfer-Encoding` | Specifies that chunked audio data is being sent, rather than a single file. Use this header only if you're chunking audio data. | Optional |
-| `Expect` | If you're using chunked transfer, send `Expect: 100-continue`. The Speech service acknowledges the initial request and awaits additional data.| Required if you're sending chunked audio data. |
-| `Accept` | If provided, it must be `application/json`. The Speech service provides results in JSON. Some request frameworks provide an incompatible default value. It's good practice to always include `Accept`. | Optional, but recommended. |
-
-## Query parameters
-
-These parameters might be included in the query string of the REST request.
-
-> [!NOTE]
-> You must append the language parameter to the URL to avoid receiving a 4xx HTTP error. For example, the language set to US English via the West US endpoint is: `https://westus.stt.speech.microsoft.com/speech/recognition/conversation/cognitiveservices/v1?language=en-US`.
-
-| Parameter | Description | Required or optional |
-|--|-||
-| `language` | Identifies the spoken language that's being recognized. See [Supported languages](language-support.md?tabs=stt). | Required |
-| `format` | Specifies the result format. Accepted values are `simple` and `detailed`. Simple results include `RecognitionStatus`, `DisplayText`, `Offset`, and `Duration`. Detailed responses include four different representations of display text. The default setting is `simple`. | Optional |
-| `profanity` | Specifies how to handle profanity in recognition results. Accepted values are: <br><br>`masked`, which replaces profanity with asterisks. <br>`removed`, which removes all profanity from the result. <br>`raw`, which includes profanity in the result. <br><br>The default setting is `masked`. | Optional |
-| `cid` | When you're using the [Speech Studio](speech-studio-overview.md) to create [custom models](./custom-speech-overview.md), you can take advantage of the **Endpoint ID** value from the **Deployment** page. Use the **Endpoint ID** value as the argument to the `cid` query string parameter. | Optional |
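
Combining these parameters, a client might build the request URI as in this minimal C# sketch; the region and endpoint ID are hypothetical placeholders.

```csharp
// Hypothetical region and Custom Speech endpoint ID.
var region = "westus";
var endpointId = "YOUR_ENDPOINT_ID";

// Query string: required language, plus optional format, profanity handling, and custom endpoint.
var requestUri = $"https://{region}.stt.speech.microsoft.com/speech/recognition/conversation/cognitiveservices/v1" +
                 $"?language=en-US&format=detailed&profanity=masked&cid={endpointId}";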
-
-### Pronunciation assessment parameters
-
-This table lists required and optional parameters for pronunciation assessment:
-
-| Parameter | Description | Required or optional |
-|--|-||
-| `ReferenceText` | The text that the pronunciation will be evaluated against. | Required |
-| `GradingSystem` | The point system for score calibration. The `FivePoint` system gives a 0-5 floating point score, and `HundredMark` gives a 0-100 floating point score. Default: `FivePoint`. | Optional |
-| `Granularity` | The evaluation granularity. Accepted values are:<br><br> `Phoneme`, which shows the score on the full-text, word, and phoneme levels.<br>`Word`, which shows the score on the full-text and word levels. <br>`FullText`, which shows the score on the full-text level only.<br><br> The default setting is `Phoneme`. | Optional |
-| `Dimension` | Defines the output criteria. Accepted values are:<br><br> `Basic`, which shows the accuracy score only. <br>`Comprehensive`, which shows scores on more dimensions (for example, fluency score and completeness score on the full-text level, and error type on the word level).<br><br> To see definitions of different score dimensions and word error types, see [Response properties](#response-properties). The default setting is `Basic`. | Optional |
-| `EnableMiscue` | Enables miscue calculation. With this parameter enabled, the pronounced words will be compared to the reference text. They'll be marked with omission or insertion based on the comparison. Accepted values are `False` and `True`. The default setting is `False`. | Optional |
-| `ScenarioId` | A GUID that indicates a customized point system. | Optional |
-
-Here's example JSON that contains the pronunciation assessment parameters:
-
-```json
-{
- "ReferenceText": "Good morning.",
- "GradingSystem": "HundredMark",
- "Granularity": "FullText",
- "Dimension": "Comprehensive"
-}
-```
-
-The following sample code shows how to build the pronunciation assessment parameters into the `Pronunciation-Assessment` header:
-
-```csharp
-var pronAssessmentParamsJson = $"{{\"ReferenceText\":\"Good morning.\",\"GradingSystem\":\"HundredMark\",\"Granularity\":\"FullText\",\"Dimension\":\"Comprehensive\"}}";
-var pronAssessmentParamsBytes = Encoding.UTF8.GetBytes(pronAssessmentParamsJson);
-var pronAssessmentHeader = Convert.ToBase64String(pronAssessmentParamsBytes);
-```
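
You can then attach the encoded value to the recognition request as the `Pronunciation-Assessment` header. For example, with an `HttpWebRequest` like the one built in the [chunked transfer](#chunked-transfer) sample later in this article (the `request` variable is assumed to already exist):

```csharp
// 'request' is a hypothetical HttpWebRequest targeting the short-audio endpoint.
request.Headers["Pronunciation-Assessment"] = pronAssessmentHeader;
```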
-
-We strongly recommend a streaming ([chunked transfer](#chunked-transfer)) upload while you're posting the audio data, which can significantly reduce latency. To learn how to enable streaming, see the [sample code in various programming languages](https://github.com/Azure-Samples/Cognitive-Speech-TTS/tree/master/PronunciationAssessment).
-
-> [!NOTE]
-> For more information, see [pronunciation assessment](how-to-pronunciation-assessment.md).
-
-## Sample request
-
-The following sample includes the host name and required headers. It's important to note that the service also expects audio data, which is not included in this sample. As mentioned earlier, chunking is recommended but not required.
-
-```HTTP
-POST speech/recognition/conversation/cognitiveservices/v1?language=en-US&format=detailed HTTP/1.1
-Accept: application/json;text/xml
-Content-Type: audio/wav; codecs=audio/pcm; samplerate=16000
-Ocp-Apim-Subscription-Key: YOUR_RESOURCE_KEY
-Host: westus.stt.speech.microsoft.com
-Transfer-Encoding: chunked
-Expect: 100-continue
-```
-
-To enable pronunciation assessment, you can add the following header. To learn how to build this header, see [Pronunciation assessment parameters](#pronunciation-assessment-parameters).
-
-```HTTP
-Pronunciation-Assessment: eyJSZWZlcm...
-```
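
To make the equivalent call programmatically, here's a minimal C# sketch that posts a short WAV file in a single (non-chunked) request and prints the JSON result. The region, key, and file name are hypothetical placeholders.

```csharp
using System;
using System.IO;
using System.Net.Http;
using System.Threading.Tasks;

class ShortAudioExample
{
    static async Task Main()
    {
        // Hypothetical values: replace with your own region, key, and audio file.
        var region = "westus";
        var key = "YOUR_RESOURCE_KEY";
        var audioFile = "whatstheweatherlike.wav"; // 16 kHz, mono, PCM WAV, 60 seconds or less.

        var requestUri = $"https://{region}.stt.speech.microsoft.com/speech/recognition/conversation/cognitiveservices/v1?language=en-US&format=detailed";

        using var client = new HttpClient();
        using var content = new ByteArrayContent(await File.ReadAllBytesAsync(audioFile));
        // The codecs parameter isn't a standard media-type token, so skip header validation.
        content.Headers.TryAddWithoutValidation("Content-Type", "audio/wav; codecs=audio/pcm; samplerate=16000");

        using var request = new HttpRequestMessage(HttpMethod.Post, requestUri) { Content = content };
        request.Headers.Add("Ocp-Apim-Subscription-Key", key);
        request.Headers.Add("Accept", "application/json");

        using var response = await client.SendAsync(request);
        Console.WriteLine(await response.Content.ReadAsStringAsync());
    }
}
```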
-
-### HTTP status codes
-
-The HTTP status code for each response indicates success or common errors.
-
-| HTTP status code | Description | Possible reasons |
-||-|--|
-| 100 | Continue | The initial request has been accepted. Proceed with sending the rest of the data. (This code is used with chunked transfer.) |
-| 200 | OK | The request was successful. The response body is a JSON object. |
-| 400 | Bad request | The language code wasn't provided, the language isn't supported, or the audio file is invalid (for example). |
-| 401 | Unauthorized | A resource key or an authorization token is invalid in the specified region, or an endpoint is invalid. |
-| 403 | Forbidden | A resource key or authorization token is missing. |
--
-## Sample responses
-
-Here's a typical response for `simple` recognition:
-
-```json
-{
- "RecognitionStatus": "Success",
- "DisplayText": "Remind me to buy 5 pencils.",
- "Offset": "1236645672289",
- "Duration": "1236645672289"
-}
-```
-
-Here's a typical response for `detailed` recognition:
-
-```json
-{
- "RecognitionStatus": "Success",
- "Offset": "1236645672289",
- "Duration": "1236645672289",
- "NBest": [
- {
- "Confidence": 0.9052885,
- "Display": "What's the weather like?",
- "ITN": "what's the weather like",
- "Lexical": "what's the weather like",
- "MaskedITN": "what's the weather like"
- },
- {
- "Confidence": 0.92459863,
- "Display": "what is the weather like",
- "ITN": "what is the weather like",
- "Lexical": "what is the weather like",
- "MaskedITN": "what is the weather like"
- }
- ]
-}
-```
-
-Here's a typical response for recognition with pronunciation assessment:
-
-```json
-{
- "RecognitionStatus": "Success",
- "Offset": "400000",
- "Duration": "11000000",
- "NBest": [
- {
- "Confidence" : "0.87",
- "Lexical" : "good morning",
- "ITN" : "good morning",
- "MaskedITN" : "good morning",
- "Display" : "Good morning.",
- "PronScore" : 84.4,
- "AccuracyScore" : 100.0,
- "FluencyScore" : 74.0,
- "CompletenessScore" : 100.0,
- "Words": [
- {
- "Word" : "Good",
- "AccuracyScore" : 100.0,
- "ErrorType" : "None",
- "Offset" : 500000,
- "Duration" : 2700000
- },
- {
- "Word" : "morning",
- "AccuracyScore" : 100.0,
- "ErrorType" : "None",
- "Offset" : 5300000,
- "Duration" : 900000
- }
- ]
- }
- ]
-}
-```
-
-### Response properties
-
-Results are provided as JSON. The `simple` format includes the following top-level fields:
-
-| Property | Description |
-|--|--|
-|`RecognitionStatus`|Status, such as `Success` for successful recognition. See the next table.|
-|`DisplayText`|The recognized text after capitalization, punctuation, inverse text normalization, and profanity masking. Present only on success. Inverse text normalization is conversion of spoken text to shorter forms, such as 200 for "two hundred" or "Dr. Smith" for "doctor smith."|
-|`Offset`|The time (in 100-nanosecond units) at which the recognized speech begins in the audio stream.|
-|`Duration`|The duration (in 100-nanosecond units) of the recognized speech in the audio stream.|
-
-The `RecognitionStatus` field might contain these values:
-
-| Status | Description |
-|--|-|
-| `Success` | The recognition was successful, and the `DisplayText` field is present. |
-| `NoMatch` | Speech was detected in the audio stream, but no words from the target language were matched. This status usually means that the recognition language is different from the language that the user is speaking. |
-| `InitialSilenceTimeout` | The start of the audio stream contained only silence, and the service timed out while waiting for speech. |
-| `BabbleTimeout` | The start of the audio stream contained only noise, and the service timed out while waiting for speech. |
-| `Error` | The recognition service encountered an internal error and could not continue. Try again if possible. |
-
-> [!NOTE]
-> If the audio consists only of profanity, and the `profanity` query parameter is set to `removed`, the service does not return a speech result.
-
-The `detailed` format includes additional forms of recognized results.
-When you're using the `detailed` format, `DisplayText` is provided as `Display` for each result in the `NBest` list.
-
-The object in the `NBest` list can include:
-
-| Property | Description |
-|--|-|
-| `Confidence` | The confidence score of the entry, from 0.0 (no confidence) to 1.0 (full confidence). |
-| `Lexical` | The lexical form of the recognized text: the actual words recognized. |
-| `ITN` | The inverse-text-normalized (ITN) or canonical form of the recognized text, with phone numbers, numbers, abbreviations ("doctor smith" to "dr smith"), and other transformations applied. |
-| `MaskedITN` | The ITN form with profanity masking applied, if requested. |
-| `Display` | The display form of the recognized text, with punctuation and capitalization added. This parameter is the same as what `DisplayText` provides when the format is set to `simple`. |
-| `AccuracyScore` | Pronunciation accuracy of the speech. Accuracy indicates how closely the phonemes match a native speaker's pronunciation. The accuracy score at the word and full-text levels is aggregated from the accuracy score at the phoneme level. |
-| `FluencyScore` | Fluency of the provided speech. Fluency indicates how closely the speech matches a native speaker's use of silent breaks between words. |
-| `CompletenessScore` | Completeness of the speech, determined by calculating the ratio of pronounced words to reference text input. |
-| `PronScore` | Overall score that indicates the pronunciation quality of the provided speech. This score is aggregated from `AccuracyScore`, `FluencyScore`, and `CompletenessScore` with weight. |
-| `ErrorType` | Value that indicates whether a word is omitted, inserted, or badly pronounced, compared to `ReferenceText`. Possible values are `None` (meaning no error on this word), `Omission`, `Insertion`, and `Mispronunciation`. |
-
-## Chunked transfer
-
-Chunked transfer (`Transfer-Encoding: chunked`) can help reduce recognition latency. It allows the Speech service to begin processing the audio file while it's transmitted. The REST API for short audio does not provide partial or interim results.
-
-The following code sample shows how to send audio in chunks. Only the first chunk should contain the audio file's header. `request` is an `HttpWebRequest` object that's connected to the appropriate REST endpoint. `audioFile` is the path to an audio file on disk.
-
-```csharp
-var request = (HttpWebRequest)HttpWebRequest.Create(requestUri);
-request.SendChunked = true;
-request.Accept = @"application/json;text/xml";
-request.Method = "POST";
-request.ProtocolVersion = HttpVersion.Version11;
-request.Host = host;
-request.ContentType = @"audio/wav; codecs=audio/pcm; samplerate=16000";
-request.Headers["Ocp-Apim-Subscription-Key"] = "YOUR_RESOURCE_KEY";
-request.AllowWriteStreamBuffering = false;
-
-using (var fs = new FileStream(audioFile, FileMode.Open, FileAccess.Read))
-{
- // Open a request stream and write 1,024-byte chunks in the stream one at a time.
- byte[] buffer = null;
- int bytesRead = 0;
- using (var requestStream = request.GetRequestStream())
- {
- // Read 1,024 raw bytes from the input audio file.
- buffer = new Byte[checked((uint)Math.Min(1024, (int)fs.Length))];
- while ((bytesRead = fs.Read(buffer, 0, buffer.Length)) != 0)
- {
- requestStream.Write(buffer, 0, bytesRead);
- }
-
- requestStream.Flush();
- }
-}
-```
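
After the request stream is closed, you would read the recognition result from the response body. Here's a minimal continuation of the sample above, reusing the same `request` object:

```csharp
// Read the JSON recognition result returned by the service.
using (var response = (HttpWebResponse)request.GetResponse())
using (var reader = new StreamReader(response.GetResponseStream()))
{
    Console.WriteLine(reader.ReadToEnd());
}
```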
-
-## Authentication
--
-## Next steps
-
-- [Customize speech models](./how-to-custom-speech-train-model.md)
-- [Get familiar with batch transcription](batch-transcription.md)
-
cognitive-services Rest Speech To Text https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/rest-speech-to-text.md
- Title: Speech to text REST API - Speech service-
-description: Get reference documentation for Speech to text REST API.
------ Previously updated : 11/29/2022----
-# Speech to text REST API
-
-Speech to text REST API is used for [Batch transcription](batch-transcription.md) and [Custom Speech](custom-speech-overview.md).
-
-> [!IMPORTANT]
-> Speech to text REST API v3.1 is generally available. Version 3.0 of the [Speech to text REST API](rest-speech-to-text.md) will be retired. For more information, see the [Migrate code from v3.0 to v3.1 of the REST API](migrate-v3-0-to-v3-1.md) guide.
-
-> [!div class="nextstepaction"]
-> [See the Speech to text REST API v3.1 reference documentation](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/)
-
-> [!div class="nextstepaction"]
-> [See the Speech to text REST API v3.0 reference documentation](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0/)
-
-Use Speech to text REST API to:
-
-- [Custom Speech](custom-speech-overview.md): With Custom Speech, you can upload your own data, test and train a custom model, compare accuracy between models, and deploy a model to a custom endpoint. Copy models to other subscriptions if you want colleagues to have access to a model that you built, or if you want to deploy a model to more than one region.
-- [Batch transcription](batch-transcription.md): Transcribe audio files as a batch from multiple URLs or an Azure container.
-
-Speech to text REST API includes such features as:
-
-- Get logs for each endpoint if logs have been requested for that endpoint.
-- Request the manifest of the models that you create, to set up on-premises containers.
-- Upload data from Azure storage accounts by using a shared access signature (SAS) URI.
-- Bring your own storage. Use your own storage accounts for logs, transcription files, and other data.
-- Some operations support webhook notifications. You can register your webhooks where notifications are sent.
-
-## Datasets
-
-Datasets are applicable for [Custom Speech](custom-speech-overview.md). You can use datasets to train and test the performance of different models. For example, you can compare the performance of a model trained with a specific dataset to the performance of a model trained with a different dataset.
-
-See [Upload training and testing datasets](how-to-custom-speech-upload-data.md?pivots=rest-api) for examples of how to upload datasets. This table includes all the operations that you can perform on datasets.
-
-|Path|Method|Version 3.1|Version 3.0|
-|||||
-|`/datasets`|GET|[Datasets_List](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/operations/Datasets_List)|[GetDatasets](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0/operations/GetDatasets)|
-|`/datasets`|POST|[Datasets_Create](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/operations/Datasets_Create)|[CreateDataset](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0/operations/CreateDataset)|
-|`/datasets/{id}`|DELETE|[Datasets_Delete](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/operations/Datasets_Delete)|[DeleteDataset](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0/operations/DeleteDataset)|
-|`/datasets/{id}`|GET|[Datasets_Get](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/operations/Datasets_Get)|[GetDataset](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0/operations/GetDataset)|
-|`/datasets/{id}`|PATCH|[Datasets_Update](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/operations/Datasets_Update)|[UpdateDataset](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0/operations/UpdateDataset)|
-|`/datasets/{id}/blocks:commit`|POST|[Datasets_CommitBlocks](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/operations/Datasets_CommitBlocks)|Not applicable|
-|`/datasets/{id}/blocks`|GET|[Datasets_GetBlocks](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/operations/Datasets_GetBlocks)|Not applicable|
-|`/datasets/{id}/blocks`|PUT|[Datasets_UploadBlock](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/operations/Datasets_UploadBlock)|Not applicable|
-|`/datasets/{id}/files`|GET|[Datasets_ListFiles](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/operations/Datasets_ListFiles)|[GetDatasetFiles](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0/operations/GetDatasetFiles)|
-|`/datasets/{id}/files/{fileId}`|GET|[Datasets_GetFile](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/operations/Datasets_GetFile)|[GetDatasetFile](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0/operations/GetDatasetFile)|
-|`/datasets/locales`|GET|[Datasets_ListSupportedLocales](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/operations/Datasets_ListSupportedLocales)|[GetSupportedLocalesForDatasets](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0/operations/GetSupportedLocalesForDatasets)|
-|`/datasets/upload`|POST|[Datasets_Upload](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/operations/Datasets_Upload)|[UploadDatasetFromForm](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0/operations/UploadDatasetFromForm)|
-
-## Endpoints
-
-Endpoints are applicable for [Custom Speech](custom-speech-overview.md). You must deploy a custom endpoint to use a Custom Speech model.
-
-See [Deploy a model](how-to-custom-speech-deploy-model.md?pivots=rest-api) for examples of how to manage deployment endpoints. This table includes all the operations that you can perform on endpoints.
-
-|Path|Method|Version 3.1|Version 3.0|
-|||||
-|`/endpoints`|GET|[Endpoints_List](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/operations/Endpoints_List)|[GetEndpoints](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0/operations/GetEndpoints)|
-|`/endpoints`|POST|[Endpoints_Create](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/operations/Endpoints_Create)|[CreateEndpoint](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0/operations/CreateEndpoint)|
-|`/endpoints/{id}`|DELETE|[Endpoints_Delete](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/operations/Endpoints_Delete)|[DeleteEndpoint](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0/operations/DeleteEndpoint)|
-|`/endpoints/{id}`|GET|[Endpoints_Get](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/operations/Endpoints_Get)|[GetEndpoint](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0/operations/GetEndpoint)|
-|`/endpoints/{id}`|PATCH|[Endpoints_Update](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/operations/Endpoints_Update)|[UpdateEndpoint](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0/operations/UpdateEndpoint)|
-|`/endpoints/{id}/files/logs`|DELETE|[Endpoints_DeleteLogs](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/operations/Endpoints_DeleteLogs)|[DeleteEndpointLogs](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0/operations/DeleteEndpointLogs)|
-|`/endpoints/{id}/files/logs`|GET|[Endpoints_ListLogs](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/operations/Endpoints_ListLogs)|[GetEndpointLogs](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0/operations/GetEndpointLogs)|
-|`/endpoints/{id}/files/logs/{logId}`|DELETE|[Endpoints_DeleteLog](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/operations/Endpoints_DeleteLog)|[DeleteEndpointLog](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0/operations/DeleteEndpointLog)|
-|`/endpoints/{id}/files/logs/{logId}`|GET|[Endpoints_GetLog](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/operations/Endpoints_GetLog)|[GetEndpointLog](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0/operations/GetEndpointLog)|
-|`/endpoints/base/{locale}/files/logs`|DELETE|[Endpoints_DeleteBaseModelLogs](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/operations/Endpoints_DeleteBaseModelLogs)|[DeleteBaseModelLogs](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0/operations/DeleteBaseModelLogs)|
-|`/endpoints/base/{locale}/files/logs`|GET|[Endpoints_ListBaseModelLogs](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/operations/Endpoints_ListBaseModelLogs)|[GetBaseModelLogs](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0/operations/GetBaseModelLogs)|
-|`/endpoints/base/{locale}/files/logs/{logId}`|DELETE|[Endpoints_DeleteBaseModelLog](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/operations/Endpoints_DeleteBaseModelLog)|[DeleteBaseModelLog](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0/operations/DeleteBaseModelLog)|
-|`/endpoints/base/{locale}/files/logs/{logId}`|GET|[Endpoints_GetBaseModelLog](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/operations/Endpoints_GetBaseModelLog)|[GetBaseModelLog](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0/operations/GetBaseModelLog)|
-|`/endpoints/locales`|GET|[Endpoints_ListSupportedLocales](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/operations/Endpoints_ListSupportedLocales)|[GetSupportedLocalesForEndpoints](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0/operations/GetSupportedLocalesForEndpoints)|
-
-## Evaluations
-
-Evaluations are applicable for [Custom Speech](custom-speech-overview.md). You can use evaluations to compare the performance of different models. For example, you can compare the performance of a model trained with a specific dataset to the performance of a model trained with a different dataset.
-
-See [Test recognition quality](how-to-custom-speech-inspect-data.md?pivots=rest-api) and [Test accuracy](how-to-custom-speech-evaluate-data.md?pivots=rest-api) for examples of how to test and evaluate Custom Speech models. This table includes all the operations that you can perform on evaluations.
-
-|Path|Method|Version 3.1|Version 3.0|
-|||||
-|`/evaluations`|GET|[Evaluations_List](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/operations/Evaluations_List)|[GetEvaluations](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0/operations/GetEvaluations)|
-|`/evaluations`|POST|[Evaluations_Create](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/operations/Evaluations_Create)|[CreateEvaluation](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0/operations/CreateEvaluation)|
-|`/evaluations/{id}`|DELETE|[Evaluations_Delete](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/operations/Evaluations_Delete)|[DeleteEvaluation](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0/operations/DeleteEvaluation)|
-|`/evaluations/{id}`|GET|[Evaluations_Get](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/operations/Evaluations_Get)|[GetEvaluation](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0/operations/GetEvaluation)|
-|`/evaluations/{id}`|PATCH|[Evaluations_Update](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/operations/Evaluations_Update)|[UpdateEvaluation](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0/operations/UpdateEvaluation)|
-|`/evaluations/{id}/files`|GET|[Evaluations_ListFiles](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/operations/Evaluations_ListFiles)|[GetEvaluationFiles](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0/operations/GetEvaluationFiles)|
-|`/evaluations/{id}/files/{fileId}`|GET|[Evaluations_GetFile](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/operations/Evaluations_GetFile)|[GetEvaluationFile](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0/operations/GetEvaluationFile)|
-|`/evaluations/locales`|GET|[Evaluations_ListSupportedLocales](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/operations/Evaluations_ListSupportedLocales)|[GetSupportedLocalesForEvaluations](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0/operations/GetSupportedLocalesForEvaluations)|
-
-## Health status
-
-Health status provides insights about the overall health of the service and sub-components.
-
-|Path|Method|Version 3.1|Version 3.0|
-|||||
-|`/healthstatus`|GET|[HealthStatus_Get](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/operations/HealthStatus_Get)|[GetHealthStatus](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0/operations/GetHealthStatus)|
-
-## Models
-
-Models are applicable for [Custom Speech](custom-speech-overview.md) and [Batch Transcription](batch-transcription.md). You can use models to transcribe audio files. For example, you can use a model trained with a specific dataset to transcribe audio files.
-
-See [Train a model](how-to-custom-speech-train-model.md?pivots=rest-api) and [Custom Speech model lifecycle](how-to-custom-speech-model-and-endpoint-lifecycle.md?pivots=rest-api) for examples of how to train and manage Custom Speech models. This table includes all the operations that you can perform on models.
-
-|Path|Method|Version 3.1|Version 3.0|
-|||||
-|`/models`|GET|[Models_ListCustomModels](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/operations/Models_ListCustomModels)|[GetModels](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0/operations/GetModels)|
-|`/models`|POST|[Models_Create](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/operations/Models_Create)|[CreateModel](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0/operations/CreateModel)|
-|`/models/{id}:copyto`<sup>1</sup>|POST|[Models_CopyTo](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/operations/Models_CopyTo)|[CopyModelToSubscription](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0/operations/CopyModelToSubscription)|
-|`/models/{id}`|DELETE|[Models_Delete](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/operations/Models_Delete)|[DeleteModel](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0/operations/DeleteModel)|
-|`/models/{id}`|GET|[Models_GetCustomModel](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/operations/Models_GetCustomModel)|[GetModel](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0/operations/GetModel)|
-|`/models/{id}`|PATCH|[Models_Update](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/operations/Models_Update)|[UpdateModel](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0/operations/UpdateModel)|
-|`/models/{id}/files`|GET|[Models_ListFiles](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/operations/Models_ListFiles)|Not applicable|
-|`/models/{id}/files/{fileId}`|GET|[Models_GetFile](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/operations/Models_GetFile)|Not applicable|
-|`/models/{id}/manifest`|GET|[Models_GetCustomModelManifest](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/operations/Models_GetCustomModelManifest)|[GetModelManifest](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0/operations/GetModelManifest)|
-|`/models/base`|GET|[Models_ListBaseModels](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/operations/Models_ListBaseModels)|[GetBaseModels](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0/operations/GetBaseModels)|
-|`/models/base/{id}`|GET|[Models_GetBaseModel](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/operations/Models_GetBaseModel)|[GetBaseModel](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0/operations/GetBaseModel)|
-|`/models/base/{id}/manifest`|GET|[Models_GetBaseModelManifest](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/operations/Models_GetBaseModelManifest)|[GetBaseModelManifest](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0/operations/GetBaseModelManifest)|
-|`/models/locales`|GET|[Models_ListSupportedLocales](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/operations/Models_ListSupportedLocales)|[GetSupportedLocalesForModels](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0/operations/GetSupportedLocalesForModels)|
-
-## Projects
-
-Projects are applicable for [Custom Speech](custom-speech-overview.md). Custom Speech projects contain models, training and testing datasets, and deployment endpoints. Each project is specific to a [locale](language-support.md?tabs=stt). For example, you might create a project for English in the United States.
-
-See [Create a project](how-to-custom-speech-create-project.md?pivots=rest-api) for examples of how to create projects. This table includes all the operations that you can perform on projects.
-
-|Path|Method|Version 3.1|Version 3.0|
-|||||
-|`/projects`|GET|[Projects_List](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/operations/Projects_List)|[GetProjects](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0/operations/GetProjects)|
-|`/projects`|POST|[Projects_Create](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/operations/Projects_Create)|[CreateProject](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0/operations/CreateProject)|
-|`/projects/{id}`|DELETE|[Projects_Delete](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/operations/Projects_Delete)|[DeleteProject](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0/operations/DeleteProject)|
-|`/projects/{id}`|GET|[Projects_Get](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/operations/Projects_Get)|[GetProject](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0/operations/GetProject)|
-|`/projects/{id}`|PATCH|[Projects_Update](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/operations/Projects_Update)|[UpdateProject](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0/operations/UpdateProject)|
-|`/projects/{id}/datasets`|GET|[Projects_ListDatasets](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/operations/Projects_ListDatasets)|[GetDatasetsForProject](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0/operations/GetDatasetsForProject)|
-|`/projects/{id}/endpoints`|GET|[Projects_ListEndpoints](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/operations/Projects_ListEndpoints)|[GetEndpointsForProject](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0/operations/GetEndpointsForProject)|
-|`/projects/{id}/evaluations`|GET|[Projects_ListEvaluations](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/operations/Projects_ListEvaluations)|[GetEvaluationsForProject](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0/operations/GetEvaluationsForProject)|
-|`/projects/{id}/models`|GET|[Projects_ListModels](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/operations/Projects_ListModels)|[GetModelsForProject](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0/operations/GetModelsForProject)|
-|`/projects/{id}/transcriptions`|GET|[Projects_ListTranscriptions](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/operations/Projects_ListTranscriptions)|[GetTranscriptionsForProject](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0/operations/GetTranscriptionsForProject)|
-|`/projects/locales`|GET|[Projects_ListSupportedLocales](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/operations/Projects_ListSupportedLocales)|[GetSupportedProjectLocales](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0/operations/GetSupportedProjectLocales)|
--
-## Transcriptions
-
-Transcriptions are applicable for [Batch Transcription](batch-transcription.md). Batch transcription is used to transcribe a large amount of audio in storage. You can send multiple files per request or point to an Azure Blob Storage container with the audio files to transcribe.
-
-See [Create a transcription](batch-transcription-create.md?pivots=rest-api) for examples of how to create a transcription from multiple audio files. This table includes all the operations that you can perform on transcriptions.
-
-|Path|Method|Version 3.1|Version 3.0|
-|||||
-|`/transcriptions`|GET|[Transcriptions_List](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/operations/Transcriptions_List)|[GetTranscriptions](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0/operations/GetTranscriptions)|
-|`/transcriptions`|POST|[Transcriptions_Create](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/operations/Transcriptions_Create)|[CreateTranscription](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0/operations/CreateTranscription)|
-|`/transcriptions/{id}`|DELETE|[Transcriptions_Delete](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/operations/Transcriptions_Delete)|[DeleteTranscription](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0/operations/DeleteTranscription)|
-|`/transcriptions/{id}`|GET|[Transcriptions_Get](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/operations/Transcriptions_Get)|[GetTranscription](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0/operations/GetTranscription)|
-|`/transcriptions/{id}`|PATCH|[Transcriptions_Update](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/operations/Transcriptions_Update)|[UpdateTranscription](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0/operations/UpdateTranscription)|
-|`/transcriptions/{id}/files`|GET|[Transcriptions_ListFiles](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/operations/Transcriptions_ListFiles)|[GetTranscriptionFiles](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0/operations/GetTranscriptionFiles)|
-|`/transcriptions/{id}/files/{fileId}`|GET|[Transcriptions_GetFile](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/operations/Transcriptions_GetFile)|[GetTranscriptionFile](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0/operations/GetTranscriptionFile)|
-|`/transcriptions/locales`|GET|[Transcriptions_ListSupportedLocales](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/operations/Transcriptions_ListSupportedLocales)|[GetSupportedLocalesForTranscriptions](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0/operations/GetSupportedLocalesForTranscriptions)|
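
As a rough illustration of the Transcriptions_Create operation in the table above, here's a hedged C# sketch that submits a batch transcription for a couple of audio URLs. The region, key, and body values are hypothetical placeholders; check the Transcriptions_Create reference for the full request schema.

```csharp
using System;
using System.Net.Http;
using System.Text;
using System.Threading.Tasks;

class CreateTranscriptionExample
{
    static async Task Main()
    {
        // Hypothetical region and key.
        var region = "eastus";
        var key = "YOUR_RESOURCE_KEY";

        // Minimal request body: audio URLs, locale, and a display name.
        var body = @"{
            ""contentUrls"": [
                ""https://example.com/audio1.wav"",
                ""https://example.com/audio2.wav""
            ],
            ""locale"": ""en-US"",
            ""displayName"": ""My batch transcription""
        }";

        using var client = new HttpClient();
        using var request = new HttpRequestMessage(
            HttpMethod.Post,
            $"https://{region}.api.cognitive.microsoft.com/speechtotext/v3.1/transcriptions");
        request.Headers.Add("Ocp-Apim-Subscription-Key", key);
        request.Content = new StringContent(body, Encoding.UTF8, "application/json");

        using var response = await client.SendAsync(request);
        // The response describes the created transcription, including the URL to poll for status and result files.
        Console.WriteLine(await response.Content.ReadAsStringAsync());
    }
}
```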
--
-## Web hooks
-
-Web hooks are applicable for [Custom Speech](custom-speech-overview.md) and [Batch Transcription](batch-transcription.md). In particular, web hooks apply to [datasets](#datasets), [endpoints](#endpoints), [evaluations](#evaluations), [models](#models), and [transcriptions](#transcriptions). Web hooks can be used to receive notifications about creation, processing, completion, and deletion events.
-
-This table includes all the web hook operations that are available with the Speech to text REST API.
-
-|Path|Method|Version 3.1|Version 3.0|
-|||||
-|`/webhooks`|GET|[WebHooks_List](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/operations/WebHooks_List)|[GetHooks](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0/operations/GetHooks)|
-|`/webhooks`|POST|[WebHooks_Create](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/operations/WebHooks_Create)|[CreateHook](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0/operations/CreateHook)|
-|`/webhooks/{id}:ping`<sup>1</sup>|POST|[WebHooks_Ping](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/operations/WebHooks_Ping)|[PingHook](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0/operations/PingHook)|
-|`/webhooks/{id}:test`<sup>2</sup>|POST|[WebHooks_Test](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/operations/WebHooks_Test)|[TestHook](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0/operations/TestHook)|
-|`/webhooks/{id}`|DELETE|[WebHooks_Delete](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/operations/WebHooks_Delete)|[DeleteHook](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0/operations/DeleteHook)|
-|`/webhooks/{id}`|GET|[WebHooks_Get](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/operations/WebHooks_Get)|[GetHook](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0/operations/GetHook)|
-|`/webhooks/{id}`|PATCH|[WebHooks_Update](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/operations/WebHooks_Update)|[UpdateHook](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0/operations/UpdateHook)|
-
-<sup>1</sup> The `/webhooks/{id}/ping` operation (includes '/') in version 3.0 is replaced by the `/webhooks/{id}:ping` operation (includes ':') in version 3.1.
-
-<sup>2</sup> The `/webhooks/{id}/test` operation (includes '/') in version 3.0 is replaced by the `/webhooks/{id}:test` operation (includes ':') in version 3.1.
-
-## Next steps
-
-- [Create a Custom Speech project](how-to-custom-speech-create-project.md)
-- [Get familiar with batch transcription](batch-transcription.md)
-
cognitive-services Rest Text To Speech https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/rest-text-to-speech.md
- Title: Text to speech API reference (REST) - Speech service-
-description: Learn how to use the REST API to convert text into synthesized speech.
------ Previously updated : 01/24/2022----
-# Text to speech REST API
-
-The Speech service allows you to [convert text into synthesized speech](#convert-text-to-speech) and [get a list of supported voices](#get-a-list-of-voices) for a region by using a REST API. In this article, you'll learn about authorization options, query options, how to structure a request, and how to interpret a response.
-
-> [!TIP]
-> Use cases for the text to speech REST API are limited. Use it only in cases where you can't use the [Speech SDK](speech-sdk.md). For example, with the Speech SDK you can [subscribe to events](how-to-speech-synthesis.md#subscribe-to-synthesizer-events) for more insights about the text to speech processing and results.
-
-The text to speech REST API supports neural text to speech voices, which support specific languages and dialects that are identified by locale. Each available endpoint is associated with a region. A Speech resource key for the endpoint or region that you plan to use is required. Here are links to more information:
-
-- For a complete list of voices, see [Language and voice support for the Speech service](language-support.md?tabs=tts).
-- For information about regional availability, see [Speech service supported regions](regions.md#speech-service).
-- For Azure Government and Azure China endpoints, see [this article about sovereign clouds](sovereign-clouds.md).
-
-> [!IMPORTANT]
-> Costs vary for prebuilt neural voices (called *Neural* on the pricing page) and custom neural voices (called *Custom Neural* on the pricing page). For more information, see [Speech service pricing](https://azure.microsoft.com/pricing/details/cognitive-services/speech-services/).
-
-Before you use the text to speech REST API, understand that you need to complete a token exchange as part of authentication to access the service. For more information, see [Authentication](#authentication).
-
-## Get a list of voices
-
-You can use the `tts.speech.microsoft.com/cognitiveservices/voices/list` endpoint to get a full list of voices for a specific region or endpoint. Prefix the voices list endpoint with a region to get a list of voices for that region. For example, to get a list of voices for the `westus` region, use the `https://westus.tts.speech.microsoft.com/cognitiveservices/voices/list` endpoint. For a list of all supported regions, see the [regions](regions.md) documentation.
-
-> [!NOTE]
-> [Voices and styles in preview](language-support.md?tabs=tts) are only available in three service regions: East US, West Europe, and Southeast Asia.
-
-### Request headers
-
-This table lists required and optional headers for text to speech requests:
-
-| Header | Description | Required or optional |
-|--|-||
-| `Ocp-Apim-Subscription-Key` | Your Speech resource key. | Either this header or `Authorization` is required. |
-| `Authorization` | An authorization token preceded by the word `Bearer`. For more information, see [Authentication](#authentication). | Either this header or `Ocp-Apim-Subscription-Key` is required. |
-
-### Request body
-
-A body isn't required for `GET` requests to this endpoint.
-
-### Sample request
-
-This request requires only an authorization header:
-
-```http
-GET /cognitiveservices/voices/list HTTP/1.1
-
-Host: westus.tts.speech.microsoft.com
-Ocp-Apim-Subscription-Key: YOUR_RESOURCE_KEY
-```
-
-Here's an example curl command:
-
-```curl
-curl --location --request GET 'https://YOUR_RESOURCE_REGION.tts.speech.microsoft.com/cognitiveservices/voices/list' \
---header 'Ocp-Apim-Subscription-Key: YOUR_RESOURCE_KEY'
-```
-
-### Sample response
-
-You should receive a response with a JSON body that includes all supported locales, voices, gender, styles, and other details. The `WordsPerMinute` property for each voice can be used to estimate the length of the output speech. This JSON example shows partial results to illustrate the structure of a response:
-
-```json
-[
- // Redacted for brevity
- {
- "Name": "Microsoft Server Speech Text to Speech Voice (en-US, JennyNeural)",
- "DisplayName": "Jenny",
- "LocalName": "Jenny",
- "ShortName": "en-US-JennyNeural",
- "Gender": "Female",
- "Locale": "en-US",
- "LocaleName": "English (United States)",
- "StyleList": [
- "assistant",
- "chat",
- "customerservice",
- "newscast",
- "angry",
- "cheerful",
- "sad",
- "excited",
- "friendly",
- "terrified",
- "shouting",
- "unfriendly",
- "whispering",
- "hopeful"
- ],
- "SampleRateHertz": "24000",
- "VoiceType": "Neural",
- "Status": "GA",
- "ExtendedPropertyMap": {
- "IsHighQuality48K": "True"
- },
- "WordsPerMinute": "152"
- },
- // Redacted for brevity
- {
- "Name": "Microsoft Server Speech Text to Speech Voice (en-US, JennyMultilingualNeural)",
- "DisplayName": "Jenny Multilingual",
- "LocalName": "Jenny Multilingual",
- "ShortName": "en-US-JennyMultilingualNeural",
- "Gender": "Female",
- "Locale": "en-US",
- "LocaleName": "English (United States)",
- "SecondaryLocaleList": [
- "de-DE",
- "en-AU",
- "en-CA",
- "en-GB",
- "es-ES",
- "es-MX",
- "fr-CA",
- "fr-FR",
- "it-IT",
- "ja-JP",
- "ko-KR",
- "pt-BR",
- "zh-CN"
- ],
- "SampleRateHertz": "24000",
- "VoiceType": "Neural",
- "Status": "GA",
- "WordsPerMinute": "190"
- },
- // Redacted for brevity
- {
- "Name": "Microsoft Server Speech Text to Speech Voice (ga-IE, OrlaNeural)",
- "DisplayName": "Orla",
- "LocalName": "Orla",
- "ShortName": "ga-IE-OrlaNeural",
- "Gender": "Female",
- "Locale": "ga-IE",
- "LocaleName": "Irish (Ireland)",
- "SampleRateHertz": "24000",
- "VoiceType": "Neural",
- "Status": "GA",
- "WordsPerMinute": "139"
- },
- // Redacted for brevity
- {
- "Name": "Microsoft Server Speech Text to Speech Voice (zh-CN, YunxiNeural)",
- "DisplayName": "Yunxi",
- "LocalName": "云希",
- "ShortName": "zh-CN-YunxiNeural",
- "Gender": "Male",
- "Locale": "zh-CN",
- "LocaleName": "Chinese (Mandarin, Simplified)",
- "StyleList": [
- "narration-relaxed",
- "embarrassed",
- "fearful",
- "cheerful",
- "disgruntled",
- "serious",
- "angry",
- "sad",
- "depressed",
- "chat",
- "assistant",
- "newscast"
- ],
- "SampleRateHertz": "24000",
- "VoiceType": "Neural",
- "Status": "GA",
- "RolePlayList": [
- "Narrator",
- "YoungAdultMale",
- "Boy"
- ],
- "WordsPerMinute": "293"
- },
- // Redacted for brevity
-]
-```
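For example, the following Python sketch (illustrative only, not part of the service documentation) retrieves the voices list and uses the `WordsPerMinute` value from the response above to estimate the length of the synthesized audio for a given text. The region, resource key, text, and voice name are placeholders.

```python
# Sketch only: estimate output speech length from the WordsPerMinute property.
import requests

region = "westus"
resource_key = "YOUR_RESOURCE_KEY"

voices = requests.get(
    f"https://{region}.tts.speech.microsoft.com/cognitiveservices/voices/list",
    headers={"Ocp-Apim-Subscription-Key": resource_key},
).json()

text = "I'm excited to try text to speech!"
voice = next(v for v in voices if v["ShortName"] == "en-US-JennyNeural")
words_per_minute = float(voice["WordsPerMinute"])

# Rough estimate: word count divided by the voice's words-per-minute rate.
estimated_seconds = len(text.split()) / words_per_minute * 60
print(f"Estimated audio length: {estimated_seconds:.1f} seconds")
```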
-
-### HTTP status codes
-
-The HTTP status code for each response indicates success or common errors.
-
-| HTTP status code | Description | Possible reason |
-|--|--|--|
-| 200 | OK | The request was successful. |
-| 400 | Bad request | A required parameter is missing, empty, or null. Or, the value passed to either a required or optional parameter is invalid. A common reason is a header that's too long. |
-| 401 | Unauthorized | The request is not authorized. Make sure your resource key or token is valid and in the correct region. |
-| 429 | Too many requests | You have exceeded the quota or rate of requests allowed for your resource. |
-| 502 | Bad gateway | There's a network or server-side problem. This status might also indicate invalid headers. |
--
-## Convert text to speech
-
-The `cognitiveservices/v1` endpoint allows you to convert text to speech by using [Speech Synthesis Markup Language (SSML)](speech-synthesis-markup.md).
-
-### Regions and endpoints
-
-These regions are supported for text to speech through the REST API. Be sure to select the endpoint that matches your Speech resource region.
--
-### Request headers
-
-This table lists required and optional headers for text to speech requests:
-
-| Header | Description | Required or optional |
-|--|--|--|
-| `Authorization` | An authorization token preceded by the word `Bearer`. For more information, see [Authentication](#authentication). | Required |
-| `Content-Type` | Specifies the content type for the provided text. Accepted value: `application/ssml+xml`. | Required |
-| `X-Microsoft-OutputFormat` | Specifies the audio output format. For a complete list of accepted values, see [Audio outputs](#audio-outputs). | Required |
-| `User-Agent` | The application name. The provided value must be fewer than 255 characters. | Required |
-
-### Request body
-
-If you're using a custom neural voice, the body of a request can be sent as plain text (ASCII or UTF-8). Otherwise, the body of each `POST` request is sent as [SSML](speech-synthesis-markup.md). SSML allows you to choose the voice and language of the synthesized speech that the text to speech feature returns. For a complete list of supported voices, see [Language and voice support for the Speech service](language-support.md?tabs=tts).
-
-### Sample request
-
-This HTTP request uses SSML to specify the voice and language. If the body is long and the resulting audio exceeds 10 minutes, the audio is truncated to 10 minutes. In other words, the audio length can't exceed 10 minutes.
-
-```http
-POST /cognitiveservices/v1 HTTP/1.1
-
-X-Microsoft-OutputFormat: riff-24khz-16bit-mono-pcm
-Content-Type: application/ssml+xml
-Host: westus.tts.speech.microsoft.com
-Content-Length: <Length>
-Authorization: Bearer [Base64 access_token]
-User-Agent: <Your application name>
-
-<speak version='1.0' xml:lang='en-US'><voice xml:lang='en-US' xml:gender='Male'
- name='en-US-ChristopherNeural'>
- I'm excited to try text to speech!
-</voice></speak>
-```
-For `Content-Length`, use your own content length. In most cases, this value is calculated automatically.
-
-### HTTP status codes
-
-The HTTP status code for each response indicates success or common errors:
-
-| HTTP status code | Description | Possible reason |
-|--|--|--|
-| 200 | OK | The request was successful. The response body is an audio file. |
-| 400 | Bad request | A required parameter is missing, empty, or null. Or, the value passed to either a required or optional parameter is invalid. A common reason is a header that's too long. |
-| 401 | Unauthorized | The request is not authorized. Make sure your Speech resource key or token is valid and in the correct region. |
-| 415 | Unsupported media type | It's possible that the wrong `Content-Type` value was provided. `Content-Type` should be set to `application/ssml+xml`. |
-| 429 | Too many requests | You have exceeded the quota or rate of requests allowed for your resource. |
-| 502 | Bad gateway | There's a network or server-side problem. This status might also indicate invalid headers. |
-
-If the HTTP status is `200 OK`, the body of the response contains an audio file in the requested format. This file can be played as it's transferred, saved to a buffer, or saved to a file.
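As an illustration, the following Python sketch sends the SSML request shown above with the `requests` library and saves the returned audio to a file. The endpoint, headers, and output format come from this section; the region and access token are placeholders, and the token is obtained as described in the [Authentication](#authentication) section.

```python
# Sketch only: send the SSML request shown above and save the returned audio to a file.
import requests

region = "westus"
access_token = "YOUR_ACCESS_TOKEN"

ssml = (
    "<speak version='1.0' xml:lang='en-US'>"
    "<voice xml:lang='en-US' xml:gender='Male' name='en-US-ChristopherNeural'>"
    "I'm excited to try text to speech!"
    "</voice></speak>"
)

response = requests.post(
    f"https://{region}.tts.speech.microsoft.com/cognitiveservices/v1",
    headers={
        "Authorization": f"Bearer {access_token}",
        "Content-Type": "application/ssml+xml",
        "X-Microsoft-OutputFormat": "riff-24khz-16bit-mono-pcm",
        "User-Agent": "my-tts-sample",
    },
    data=ssml.encode("utf-8"),
)
response.raise_for_status()

# The response body is the audio file in the requested format.
with open("output.wav", "wb") as audio_file:
    audio_file.write(response.content)
```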
-
-## Audio outputs
-
-The supported streaming and non-streaming audio formats are sent in each request as the `X-Microsoft-OutputFormat` header. Each format incorporates a bit rate and encoding type. The Speech service supports 48-kHz, 24-kHz, 16-kHz, and 8-kHz audio outputs. Each prebuilt neural voice model is available at 24kHz and high-fidelity 48kHz.
-
-#### [Streaming](#tab/streaming)
-
-```
-amr-wb-16000hz
-audio-16khz-16bit-32kbps-mono-opus
-audio-16khz-32kbitrate-mono-mp3
-audio-16khz-64kbitrate-mono-mp3
-audio-16khz-128kbitrate-mono-mp3
-audio-24khz-16bit-24kbps-mono-opus
-audio-24khz-16bit-48kbps-mono-opus
-audio-24khz-48kbitrate-mono-mp3
-audio-24khz-96kbitrate-mono-mp3
-audio-24khz-160kbitrate-mono-mp3
-audio-48khz-96kbitrate-mono-mp3
-audio-48khz-192kbitrate-mono-mp3
-ogg-16khz-16bit-mono-opus
-ogg-24khz-16bit-mono-opus
-ogg-48khz-16bit-mono-opus
-raw-8khz-8bit-mono-alaw
-raw-8khz-8bit-mono-mulaw
-raw-8khz-16bit-mono-pcm
-raw-16khz-16bit-mono-pcm
-raw-16khz-16bit-mono-truesilk
-raw-22050hz-16bit-mono-pcm
-raw-24khz-16bit-mono-pcm
-raw-24khz-16bit-mono-truesilk
-raw-44100hz-16bit-mono-pcm
-raw-48khz-16bit-mono-pcm
-webm-16khz-16bit-mono-opus
-webm-24khz-16bit-24kbps-mono-opus
-webm-24khz-16bit-mono-opus
-```
-
-#### [NonStreaming](#tab/nonstreaming)
-
-```
-riff-8khz-8bit-mono-alaw
-riff-8khz-8bit-mono-mulaw
-riff-8khz-16bit-mono-pcm
-riff-22050hz-16bit-mono-pcm
-riff-24khz-16bit-mono-pcm
-riff-44100hz-16bit-mono-pcm
-riff-48khz-16bit-mono-pcm
-```
-
-***
-
-> [!NOTE]
-> If you select a 48kHz output format, the high-fidelity 48kHz voice model is invoked accordingly. Sample rates other than 24kHz and 48kHz are obtained through upsampling or downsampling when synthesizing; for example, 44.1kHz is downsampled from 48kHz.
->
-> If your selected voice and output format have different bit rates, the audio is resampled as necessary. You can decode the `ogg-24khz-16bit-mono-opus` format by using the [Opus codec](https://opus-codec.org/downloads/).
-
-## Authentication
--
-## Next steps
-- [Create a free Azure account](https://azure.microsoft.com/free/cognitive-services/)
-- [Get started with custom neural voice](how-to-custom-voice.md)
-- [Batch synthesis](batch-synthesis.md)
cognitive-services Role Based Access Control https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/role-based-access-control.md
- Title: Role-based access control for Speech resources - Speech service-
-description: Learn how to assign access roles for a Speech resource.
- Previously updated : 04/03/2022
-# Role-based access control for Speech resources
-
-You can manage access and permissions to your Speech resources with Azure role-based access control (Azure RBAC). Assigned roles can vary across Speech resources. For example, you can assign a role to a Speech resource that should only be used to train a Custom Speech model. You can assign another role to a Speech resource that is used to transcribe audio files. Depending on who can access each Speech resource, you can effectively set a different level of access per application or user. For more information on Azure RBAC, see the [Azure RBAC documentation](../../role-based-access-control/overview.md).
-
-> [!NOTE]
-> A Speech resource can inherit or be assigned multiple roles. The final level of access to this resource is a combination of all the roles' permissions from the operation level.
-
-## Roles for Speech resources
-
-A role definition is a collection of permissions. When you create a Speech resource, the built-in roles in this table are assigned by default.
-
-| Role | Can list resource keys | Access to data, models, and endpoints|
-|--|--|--|
-|**Owner** |Yes |View, create, edit, and delete |
-|**Contributor** |Yes |View, create, edit, and delete |
-|**Cognitive Services Contributor** |Yes |View, create, edit, and delete |
-|**Cognitive Services User** |Yes |View, create, edit, and delete |
-|**Cognitive Services Speech Contributor** |No | View, create, edit, and delete |
-|**Cognitive Services Speech User** |No |View only |
-|**Cognitive Services Data Reader (Preview)** |No |View only |
-
-> [!IMPORTANT]
-> Whether a role can list resource keys is important for [Speech Studio authentication](#speech-studio-authentication). To list resource keys, a role must have permission to run the `Microsoft.CognitiveServices/accounts/listKeys/action` operation. Note that if key authentication is disabled in the Azure portal, then none of the roles can list keys.
-
-Keep the built-in roles if your Speech resource can have full read and write access to the projects.
-
-For finer-grained resource access control, you can [add or remove roles](../../role-based-access-control/role-assignments-portal.md?tabs=current) using the Azure portal. For example, you could create a custom role with permission to upload Custom Speech datasets, but without permission to deploy a Custom Speech model to an endpoint.
-
-## Authentication with keys and tokens
-
-The [roles](#roles-for-speech-resources) define what permissions you have. Authentication is required to use the Speech resource.
-
-To authenticate with Speech resource keys, all you need is the key and region. To authenticate with an Azure AD token, the Speech resource must have a [custom subdomain](speech-services-private-link.md#create-a-custom-domain-name) and use a [private endpoint](speech-services-private-link.md#turn-on-private-endpoints). The Speech service uses custom subdomains with private endpoints only.
-
-### Speech SDK authentication
-
-For the SDK, you configure whether to authenticate with a Speech resource key or Azure AD token. For details, see [Azure Active Directory Authentication with the Speech SDK](how-to-configure-azure-ad-auth.md).
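A minimal sketch of key-based authentication with the Speech SDK for Python is shown here; the Azure AD token flow is covered in the linked article. The key and region values are placeholders.

```python
# Sketch only: key-based authentication with the Speech SDK for Python.
import azure.cognitiveservices.speech as speechsdk

speech_config = speechsdk.SpeechConfig(
    subscription="YOUR_SPEECH_RESOURCE_KEY",
    region="YOUR_SPEECH_RESOURCE_REGION",
)

# The same SpeechConfig object is then passed to a recognizer or synthesizer.
speech_recognizer = speechsdk.SpeechRecognizer(speech_config=speech_config)
```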
-
-### Speech Studio authentication
-
-Once you're signed into [Speech Studio](speech-studio-overview.md), you select a subscription and Speech resource. You don't choose whether to authenticate with a Speech resource key or Azure AD token. Speech Studio gets the key or token automatically from the Speech resource. If one of the assigned [roles](#roles-for-speech-resources) has permission to list resource keys, Speech Studio will authenticate with the key. Otherwise, Speech Studio will authenticate with the Azure AD token.
-
-If Speech Studio uses your Azure AD token, but the Speech resource doesn't have a custom subdomain and private endpoint, then you can't use some features in Speech Studio. In this case, for example, the Speech resource can be used to train a Custom Speech model, but you can't use a Custom Speech model to transcribe audio files.
-
-| Authentication credential | Feature availability |
-|--|--|
-|Speech resource key|Full access limited only by the assigned role permissions.|
-|Azure AD token with custom subdomain and private endpoint|Full access limited only by the assigned role permissions.|
-|Azure AD token without custom subdomain and private endpoint (not recommended)|Features are limited. For example, the Speech resource can be used to train a Custom Speech model or Custom Neural Voice. But you can't use a Custom Speech model or Custom Neural Voice.|
-
-## Next steps
-
-* [Azure Active Directory Authentication with the Speech SDK](how-to-configure-azure-ad-auth.md).
-* [Speech service encryption of data at rest](speech-encryption-of-data-at-rest.md).
cognitive-services Sovereign Clouds https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/sovereign-clouds.md
- Title: Sovereign Clouds - Speech service-
-description: Learn how to use Sovereign Clouds
- Previously updated : 05/10/2022
-# Speech service in sovereign clouds
-
-## Azure Government (United States)
-
-Available to US government entities and their partners only. For more information about Azure Government, see [here](../../azure-government/documentation-government-welcome.md) and [here](../../azure-government/compare-azure-government-global-azure.md).
--- **Azure portal:**
- - [https://portal.azure.us/](https://portal.azure.us/)
-- **Regions:**
- - US Gov Arizona
- - US Gov Virginia
-- **Available pricing tiers:**
- - Free (F0) and Standard (S0). See more details [here](https://azure.microsoft.com/pricing/details/cognitive-services/speech-services/)
-- **Supported features:**
- - Speech to text
- - Custom speech (Acoustic Model (AM) and Language Model (LM) adaptation)
- - [Speech Studio](https://speech.azure.us/)
- - Text to speech
- - Standard voice
- - Neural voice
- - Speech translation
-- **Unsupported features:**
- - Custom Voice
- - Custom Commands
-- **Supported languages:**
- - See the list of supported languages [here](language-support.md)
-
-### Endpoint information
-
-This section contains Speech service endpoint information for the usage with [Speech SDK](speech-sdk.md), [Speech to text REST API](rest-speech-to-text.md), and [Text to speech REST API](rest-text-to-speech.md).
-
-#### Speech service REST API
-
-Speech service REST API endpoints in Azure Government have the following format:
-
-| REST API type / operation | Endpoint format |
-|--|--|
-| Access token | `https://<REGION_IDENTIFIER>.api.cognitive.microsoft.us/sts/v1.0/issueToken`
-| [Speech to text REST API](rest-speech-to-text.md) | `https://<REGION_IDENTIFIER>.api.cognitive.microsoft.us/<URL_PATH>` |
-| [Speech to text REST API for short audio](rest-speech-to-text-short.md) | `https://<REGION_IDENTIFIER>.stt.speech.azure.us/<URL_PATH>` |
-| [Text to speech REST API](rest-text-to-speech.md) | `https://<REGION_IDENTIFIER>.tts.speech.azure.us/<URL_PATH>` |
-
-Replace `<REGION_IDENTIFIER>` with the identifier matching the region of your subscription from this table:
-
-| | Region identifier |
-|--|--|
-| **US Gov Arizona** | `usgovarizona` |
-| **US Gov Virginia** | `usgovvirginia` |
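As an example, the following Python sketch requests a short-lived access token from the Azure Government token endpoint, using the access token endpoint format and a region identifier from the tables above. The resource key is a placeholder.

```python
# Sketch only: request an access token from the Azure Government token endpoint.
import requests

region_identifier = "usgovvirginia"  # or "usgovarizona"
resource_key = "YOUR_SPEECH_RESOURCE_KEY"

response = requests.post(
    f"https://{region_identifier}.api.cognitive.microsoft.us/sts/v1.0/issueToken",
    headers={"Ocp-Apim-Subscription-Key": resource_key},
)
response.raise_for_status()
access_token = response.text  # pass as "Authorization: Bearer <access_token>"
print("Token acquired:", bool(access_token))
```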
-
-#### Speech SDK
-
-To use the [Speech SDK](speech-sdk.md) in sovereign clouds, use "from host / with host" instantiation of the `SpeechConfig` class, or the `--host` option of the [Speech CLI](spx-overview.md). (You can also use "from endpoint / with endpoint" instantiation and the `--endpoint` Speech CLI option.)
-
-Instantiate the `SpeechConfig` class like this:
-
-# [C#](#tab/c-sharp)
-```csharp
-var config = SpeechConfig.FromHost(usGovHost, subscriptionKey);
-```
-# [C++](#tab/cpp)
-```cpp
-auto config = SpeechConfig::FromHost(usGovHost, subscriptionKey);
-```
-# [Java](#tab/java)
-```java
-SpeechConfig config = SpeechConfig.fromHost(usGovHost, subscriptionKey);
-```
-# [Python](#tab/python)
-```python
-import azure.cognitiveservices.speech as speechsdk
-speech_config = speechsdk.SpeechConfig(host=usGovHost, subscription=subscriptionKey)
-```
-# [Objective-C](#tab/objective-c)
-```objectivec
-SPXSpeechConfiguration *speechConfig = [[SPXSpeechConfiguration alloc] initWithHost:usGovHost subscription:subscriptionKey];
-```
-***
-
-Speech CLI should be used like this (note the `--host` option):
-```dos
-spx recognize --host "usGovHost" --file myaudio.wav
-```
-Replace `subscriptionKey` with your Speech resource key. Replace `usGovHost` with the expression matching the required service offering and the region of your subscription from this table:
-
-| Region / Service offering | Host expression |
-|--|--|
-| **US Gov Arizona** | |
-| Speech to text | `wss://usgovarizona.stt.speech.azure.us` |
-| Text to speech | `https://usgovarizona.tts.speech.azure.us` |
-| **US Gov Virginia** | |
-| Speech to text | `wss://usgovvirginia.stt.speech.azure.us` |
-| Text to speech | `https://usgovvirginia.tts.speech.azure.us` |
--
-## Azure China
-
-Available to organizations with a business presence in China. See more information about Azure China [here](/azure/china/overview-operations).
--- **Azure portal:**
- - [https://portal.azure.cn/](https://portal.azure.cn/)
-- **Regions:**
- - China East 2
- - China North 2
- - China North 3
-- **Available pricing tiers:**
- Free (F0) and Standard (S0). See more details [here](https://www.azure.cn/pricing/details/cognitive-services/index.html)
-- **Supported features:**
- - Speech to text
- - Custom speech (Acoustic Model (AM) and Language Model (LM) adaptation)
- - [Speech Studio](https://speech.azure.cn/)
- - [Pronunciation assessment](how-to-pronunciation-assessment.md)
- - Text to speech
- - Standard voice
- - Neural voice
- - Speech translator
-- **Unsupported features:**
- - Custom Voice
- - Custom Commands
-- **Supported languages:**
- - See the list of supported languages [here](language-support.md)
-
-### Endpoint information
-
-This section contains Speech service endpoint information for the usage with [Speech SDK](speech-sdk.md), [Speech to text REST API](rest-speech-to-text.md), and [Text to speech REST API](rest-text-to-speech.md).
-
-#### Speech service REST API
-
-Speech service REST API endpoints in Azure China have the following format:
-
-| REST API type / operation | Endpoint format |
-|--|--|
-| Access token | `https://<REGION_IDENTIFIER>.api.cognitive.azure.cn/sts/v1.0/issueToken`
-| [Speech to text REST API](rest-speech-to-text.md) | `https://<REGION_IDENTIFIER>.api.cognitive.azure.cn/<URL_PATH>` |
-| [Speech to text REST API for short audio](rest-speech-to-text-short.md) | `https://<REGION_IDENTIFIER>.stt.speech.azure.cn/<URL_PATH>` |
-| [Text to speech REST API](rest-text-to-speech.md) | `https://<REGION_IDENTIFIER>.tts.speech.azure.cn/<URL_PATH>` |
-
-Replace `<REGION_IDENTIFIER>` with the identifier matching the region of your subscription from this table:
-
-| | Region identifier |
-|--|--|
-| **China East 2** | `chinaeast2` |
-| **China North 2** | `chinanorth2` |
-| **China North 3** | `chinanorth3` |
-
-#### Speech SDK
-
-To use the [Speech SDK](speech-sdk.md) in sovereign clouds, use "from host / with host" instantiation of the `SpeechConfig` class, or the `--host` option of the [Speech CLI](spx-overview.md). (You can also use "from endpoint / with endpoint" instantiation and the `--endpoint` Speech CLI option.)
-
-Instantiate the `SpeechConfig` class like this:
-
-# [C#](#tab/c-sharp)
-```csharp
-var config = SpeechConfig.FromHost("azCnHost", subscriptionKey);
-```
-# [C++](#tab/cpp)
-```cpp
-auto config = SpeechConfig::FromHost("azCnHost", subscriptionKey);
-```
-# [Java](#tab/java)
-```java
-SpeechConfig config = SpeechConfig.fromHost("azCnHost", subscriptionKey);
-```
-# [Python](#tab/python)
-```python
-import azure.cognitiveservices.speech as speechsdk
-speech_config = speechsdk.SpeechConfig(host="azCnHost", subscription=subscriptionKey)
-```
-# [Objective-C](#tab/objective-c)
-```objectivec
-SPXSpeechConfiguration *speechConfig = [[SPXSpeechConfiguration alloc] initWithHost:"azCnHost" subscription:subscriptionKey];
-```
-***
-
-Speech CLI should be used like this (note the `--host` option):
-```dos
-spx recognize --host "azCnHost" --file myaudio.wav
-```
-Replace `subscriptionKey` with your Speech resource key. Replace `azCnHost` with the expression matching the required service offering and the region of your subscription from this table:
-
-| Region / Service offering | Host expression |
-|--|--|
-| **China East 2** | |
-| Speech to text | `wss://chinaeast2.stt.speech.azure.cn` |
-| Text to speech | `https://chinaeast2.tts.speech.azure.cn` |
-| **China North 2** | |
-| Speech to text | `wss://chinanorth2.stt.speech.azure.cn` |
-| Text to speech | `https://chinanorth2.tts.speech.azure.cn` |
-| **China North 3** | |
-| Speech to text | `wss://chinanorth3.stt.speech.azure.cn` |
-| Text to speech | `https://chinanorth3.tts.speech.azure.cn` |
cognitive-services Speaker Recognition Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/speaker-recognition-overview.md
- Title: Speaker recognition overview - Speech service-
-description: Speaker recognition provides algorithms that verify and identify speakers by their unique voice characteristics, by using voice biometry. Speaker recognition is used to answer the question "who is speaking?". This article is an overview of the benefits and capabilities of the speaker recognition feature.
- Previously updated : 01/08/2022
-keywords: speaker recognition, voice biometry
--
-# What is speaker recognition?
-
-Speaker recognition can help determine who is speaking in an audio clip. The service can verify and identify speakers by their unique voice characteristics, by using voice biometry.
-
-You provide audio training data for a single speaker, which creates an enrollment profile based on the unique characteristics of the speaker's voice. You can then cross-check audio voice samples against this profile to verify that the speaker is the same person (speaker verification). You can also cross-check audio voice samples against a *group* of enrolled speaker profiles to see if it matches any profile in the group (speaker identification).
-
-> [!IMPORTANT]
-> Microsoft limits access to speaker recognition. You can apply for access through the [Azure Cognitive Services speaker recognition limited access review](https://aka.ms/azure-speaker-recognition). For more information, see [Limited access for speaker recognition](/legal/cognitive-services/speech-service/speaker-recognition/limited-access-speaker-recognition).
-
-## Speaker verification
-
-Speaker verification streamlines the process of verifying an enrolled speaker identity with either passphrases or free-form voice input. For example, you can use it for customer identity verification in call centers or contactless facility access.
-
-### How does speaker verification work?
-
-The following flowchart provides a visual of how this works:
--
-Speaker verification can be either text-dependent or text-independent. *Text-dependent* verification means that speakers need to choose the same passphrase to use during both enrollment and verification phases. *Text-independent* verification means that speakers can speak in everyday language in the enrollment and verification phrases.
-
-For text-dependent verification, the speaker's voice is enrolled by saying a passphrase from a set of predefined phrases. Voice features are extracted from the audio recording to form a unique voice signature, and the chosen passphrase is also recognized. Together, the voice signature and the passphrase are used to verify the speaker.
-
-Text-independent verification has no restrictions on what the speaker says during enrollment, besides the initial activation phrase when active enrollment is enabled. It doesn't have any restrictions on the audio sample to be verified, because it only extracts voice features to score similarity.
-
-The APIs aren't intended to determine whether the audio is from a live person, or from an imitation or recording of an enrolled speaker.
-
-## Speaker identification
-
-Speaker identification helps you determine an unknown speaker's identity within a group of enrolled speakers. Speaker identification enables you to attribute speech to individual speakers, and unlock value from scenarios with multiple speakers, such as:
-
-* Supporting solutions for remote meeting productivity.
-* Building multi-user device personalization.
-
-### How does speaker identification work?
-
-Enrollment for speaker identification is text-independent. There are no restrictions on what the speaker says in the audio, besides the initial activation phrase when active enrollment is enabled. Similar to speaker verification, the speaker's voice is recorded in the enrollment phase, and the voice features are extracted to form a unique voice signature. In the identification phase, the input voice sample is compared to a specified list of enrolled voices (up to 50 in each request).
-
-## Data security and privacy
-
-Speaker enrollment data is stored in a secured system, including the speech audio for enrollment and the voice signature features. The speech audio for enrollment is only used when the algorithm is upgraded, and the features need to be extracted again. The service doesn't retain the speech recording or the extracted voice features that are sent to the service during the recognition phase.
-
-You control how long data should be retained. You can create, update, and delete enrollment data for individual speakers through API calls. When the subscription is deleted, all the speaker enrollment data associated with the subscription will also be deleted.
-
-As with all of the Cognitive Services resources, developers who use the speaker recognition feature must be aware of Microsoft policies on customer data. You should ensure that you have received the appropriate permissions from the users. You can find more details in [Data and privacy for speaker recognition](/legal/cognitive-services/speech-service/speaker-recognition/data-privacy-speaker-recognition). For more information, see the [Cognitive Services page](https://azure.microsoft.com/support/legal/cognitive-services-compliance-and-privacy/) on the Microsoft Trust Center.
-
-## Common questions and solutions
-
-| Question | Solution |
-|--|--|
-| What situations am I most likely to use speaker recognition? | Good examples include call center customer verification, voice-based patient check-in, meeting transcription, and multi-user device personalization.|
-| What's the difference between identification and verification? | Identification is the process of detecting which member from a group of speakers is speaking. Verification is the act of confirming that a speaker matches a known, *enrolled* voice.|
-| What languages are supported? | See [Speaker recognition language support](language-support.md?tabs=speaker-recognition). |
-| What Azure regions are supported? | See [Speaker recognition region support](regions.md#speech-service).|
-| What audio formats are supported? | Mono 16 bit, 16 kHz PCM-encoded WAV. See the format check sketch after this table. |
-| Can you enroll one speaker multiple times? | Yes, for text-dependent verification, you can enroll a speaker up to 50 times. For text-independent verification or speaker identification, you can enroll with up to 300 seconds of audio. |
-| What data is stored in Azure? | Enrollment audio is stored in the service until the voice profile is [deleted](./get-started-speaker-recognition.md#delete-voice-profile-enrollments). Recognition audio samples aren't retained or stored. |
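The following Python sketch (illustrative only) checks whether a WAV file matches the supported audio format noted in the table: mono, 16-bit samples, at a 16-kHz sample rate. The file name is a placeholder.

```python
# Sketch only: verify a WAV file is mono, 16-bit PCM, 16 kHz before sending it for enrollment.
import wave

def is_supported_format(path: str) -> bool:
    with wave.open(path, "rb") as wav_file:
        return (
            wav_file.getnchannels() == 1          # mono
            and wav_file.getsampwidth() == 2      # 16-bit samples (2 bytes)
            and wav_file.getframerate() == 16000  # 16 kHz
        )

print(is_supported_format("enrollment.wav"))  # "enrollment.wav" is a placeholder file name
```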
-
-## Responsible AI
-
-An AI system includes not only the technology, but also the people who will use it, the people who will be affected by it, and the environment in which it is deployed. Read the transparency notes to learn about responsible AI use and deployment in your systems.
-
-* [Transparency note and use cases](/legal/cognitive-services/speech-service/speaker-recognition/transparency-note-speaker-recognition?context=/azure/cognitive-services/speech-service/context/context)
-* [Characteristics and limitations](/legal/cognitive-services/speech-service/speaker-recognition/characteristics-and-limitations-speaker-recognition?context=/azure/cognitive-services/speech-service/context/context)
-* [Limited access](/legal/cognitive-services/speech-service/speaker-recognition/limited-access-speaker-recognition?context=/azure/cognitive-services/speech-service/context/context)
-* [General guidelines](/legal/cognitive-services/speech-service/speaker-recognition/guidance-integration-responsible-use-speaker-recognition?context=/azure/cognitive-services/speech-service/context/context)
-* [Data, privacy, and security](/legal/cognitive-services/speech-service/speaker-recognition/data-privacy-speaker-recognition?context=/azure/cognitive-services/speech-service/context/context)
-
-## Next steps
-
-> [!div class="nextstepaction"]
-> [Speaker recognition quickstart](./get-started-speaker-recognition.md)
cognitive-services Speech Container Batch Processing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/speech-container-batch-processing.md
- Title: Batch processing kit for Speech containers-
-description: Use the Batch processing kit to scale Speech container requests.
- Previously updated : 10/22/2020
-# Batch processing kit for Speech containers
-
-Use the batch processing kit to complement and scale out workloads on Speech containers. Available as a container, this open-source utility helps facilitate batch transcription for large numbers of audio files, across any number of on-premises and cloud-based speech container endpoints.
--
-The batch kit container is available for free on [GitHub](https://github.com/microsoft/batch-processing-kit) and [Docker hub](https://hub.docker.com/r/batchkit/speech-batch-kit/tags). You are only [billed](speech-container-overview.md#billing) for the Speech containers you use.
-
-| Feature | Description |
-|--|--|
-| Batch audio file distribution | Automatically dispatch large numbers of files to on-premises or cloud-based Speech container endpoints. Files can be on any POSIX-compliant volume, including network filesystems. |
-| Speech SDK integration | Pass common flags to the Speech SDK, including: n-best hypotheses, diarization, language, profanity masking. |
-|Run modes | Run the batch client once, continuously in the background, or create HTTP endpoints for audio files. |
-| Fault tolerance | Automatically retry and continue transcription without losing progress, and differentiate between which errors can, and can't be retried on. |
-| Endpoint availability detection | If an endpoint becomes unavailable, the batch client will continue transcribing, using other container endpoints. After becoming available again, the client will automatically begin using the endpoint. |
-| Endpoint hot-swapping | Add, remove, or modify Speech container endpoints during runtime without interrupting the batch progress. Updates are immediate. |
-| Real-time logging | Real-time logging of attempted requests, timestamps, and failure reasons, with Speech SDK log files for each audio file. |
-
-## Get the container image with `docker pull`
-
-Use the [docker pull](https://docs.docker.com/engine/reference/commandline/pull/) command to download the latest batch kit container.
--
-```bash
-docker pull docker.io/batchkit/speech-batch-kit:latest
-```
-
-## Endpoint configuration
-
-The batch client takes a yaml configuration file that specifies the on-premises container endpoints. The following example can be written to `/mnt/my_nfs/config.yaml`, which is used in the examples below.
-```yaml
-MyContainer1:
- concurrency: 5
- host: 192.168.0.100
- port: 5000
- rtf: 3
-MyContainer2:
- concurrency: 5
- host: BatchVM0.corp.redmond.microsoft.com
- port: 5000
- rtf: 2
-MyContainer3:
- concurrency: 10
- host: localhost
- port: 6001
- rtf: 4
-```
-
-This yaml example specifies three speech containers on three hosts. The first host is specified by an IPv4 address, the second is running on the same VM as the batch-client, and the third container is specified by the DNS hostname of another VM. The `concurrency` value specifies the maximum concurrent file transcriptions that can run on the same container. The `rtf` (Real-Time Factor) value is optional and can be used to tune performance.
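If you want to sanity-check a configuration file before starting a batch, a small sketch like the following can help. It assumes the PyYAML package is installed and uses the field names from the example above; it isn't part of the batch kit itself.

```python
# Sketch only: validate the endpoint entries in config.yaml before starting a batch run.
import yaml

with open("/mnt/my_nfs/config.yaml") as config_file:
    endpoints = yaml.safe_load(config_file)

for name, settings in endpoints.items():
    missing = [field for field in ("host", "port", "concurrency") if field not in settings]
    if missing:
        print(f"{name}: missing fields {missing}")
    else:
        print(f"{name}: {settings['host']}:{settings['port']} "
              f"(concurrency={settings['concurrency']}, rtf={settings.get('rtf', 'default')})")
```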
-
-The batch client can dynamically detect if an endpoint becomes unavailable (for example, due to a container restart or networking issue), and when it becomes available again. Transcription requests will not be sent to containers that are unavailable, and the client will continue using other available containers. You can add, remove, or edit endpoints at any time without interrupting the progress of your batch.
-## Run the batch processing container
-> [!NOTE]
-> * This example uses the same directory (`/my_nfs`) for the configuration file and the inputs, outputs, and logs directories. You can use hosted or NFS-mounted directories for these folders.
-> * Running the client with `-h` lists the available command-line parameters and their default values.
-> * The batch processing container is only supported on Linux.
-
-Use the Docker `run` command to start the container. This will start an interactive shell inside the container.
-```bash
-docker run --network host --rm -ti -v /mnt/my_nfs:/my_nfs --entrypoint /bin/bash docker.io/batchkit/speech-batch-kit:latest
-```
-
-To run the batch client:
-```bash
-run-batch-client -config /my_nfs/config.yaml -input_folder /my_nfs/audio_files -output_folder /my_nfs/transcriptions -log_folder /my_nfs/logs -file_log_level DEBUG -nbest 1 -m ONESHOT -diarization None -language en-US -strict_config
-```
-
-To run the batch client and container in a single command:
-```bash
-docker run --network host --rm -ti -v /mnt/my_nfs:/my_nfs docker.io/batchkit/speech-batch-kit:latest -config /my_nfs/config.yaml -input_folder /my_nfs/audio_files -output_folder /my_nfs/transcriptions -log_folder /my_nfs/logs
-```
-The client will start running. If an audio file has already been transcribed in a previous run, the client will automatically skip the file. Files are sent with an automatic retry if transient errors occur, and you can differentiate between which errors you want the client to retry on. On a transcription error, the client will continue transcription, and can retry without losing progress.
-
-## Run modes
-
-The batch processing kit offers three modes, using the `--run-mode` parameter.
-
-#### [Oneshot](#tab/oneshot)
-
-`ONESHOT` mode transcribes a single batch of audio files (from an input directory and optional file list) to an output folder.
--
-1. Define the Speech container endpoints that the batch client will use in the `config.yaml` file.
-2. Place audio files for transcription in an input directory.
-3. Invoke the container on the directory, which will begin processing the files. If the audio file has already been transcribed in a previous run with the same output directory (same file name and checksum), the client will skip the file.
-4. The files are dispatched to the container endpoints from step 1.
-5. Logs and the Speech container output are returned to the specified output directory.
-
-#### [Daemon](#tab/daemon)
-
-> [!TIP]
-> If multiple files are added to the input directory at the same time, you can improve performance by instead adding them at regular intervals.
-
-`DAEMON` mode transcribes existing files in a given folder, and continuously transcribes new audio files as they are added.
--
-1. Define the Speech container endpoints that the batch client will use in the `config.yaml` file.
-2. Invoke the container on an input directory. The batch client will begin monitoring the directory for incoming files.
-3. Set up continuous audio file delivery to the input directory. If the audio file has already been transcribed in a previous run with the same output directory (same file name and checksum), the client will skip the file.
-4. Once a file write or POSIX signal is detected, the container is triggered to respond.
-5. The files are dispatched to the container endpoints from step 1.
-6. Logs and the Speech container output are returned to the specified output directory.
-
-#### [REST](#tab/rest)
-
-`REST` mode is an API server mode that provides a basic set of HTTP endpoints for audio file batch submission, status checking, and long polling. It also enables programmatic consumption through a Python module extension, or by importing it as a submodule.
--
-1. Define the Speech container endpoints that the batch client will use in the `config.yaml` file.
-2. Send an HTTP request to one of the API server's endpoints.
-
- |Endpoint |Description |
- |--|--|
- |`/submit` | Endpoint for creating new batch requests. |
- |`/status` | Endpoint for checking the status of a batch request. The connection will stay open until the batch completes. |
- |`/watch` | Endpoint for using HTTP long polling until the batch completes. |
-
-3. Audio files are uploaded from the input directory. If the audio file has already been transcribed in a previous run with the same output directory (same file name and checksum), the client will skip the file.
-4. If a request is sent to the `/submit` endpoint, the files are dispatched to the container endpoints from step 1.
-5. Logs and the Speech container output are returned to the specified output directory.
--
-## Logging
-
-> [!NOTE]
-> The batch client may overwrite the *run.log* file periodically if it gets too large.
-
-The client creates a *run.log* file in the directory specified by the `-log_folder` argument in the docker `run` command. Logs are captured at the DEBUG level by default. The same logs are sent to `stdout/stderr`, and filtered depending on the `-file_log_level` or `console_log_level` arguments. This log is only necessary for debugging, or if you need to send a trace for support. The logging folder also contains the Speech SDK logs for each audio file.
-
-The output directory specified by `-output_folder` will contain a *run_summary.json* file, which is periodically rewritten every 30 seconds or whenever new transcriptions are finished. You can use this file to check on progress as the batch proceeds. It will also contain the final run statistics and final status of every file when the batch is completed. The batch is completed when the process has a clean exit.
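For example, a small sketch like the following can print the current contents of *run_summary.json* while a batch is in progress. The exact fields in the file aren't documented here, so the sketch only pretty-prints whatever the batch client has written; the output folder matches the `-output_folder` value used in the examples above.

```python
# Sketch only: print the latest batch progress written to run_summary.json.
import json
from pathlib import Path

summary_path = Path("/my_nfs/transcriptions/run_summary.json")
if summary_path.exists():
    summary = json.loads(summary_path.read_text())
    # Field names aren't documented here, so just pretty-print the current contents.
    print(json.dumps(summary, indent=2))
else:
    print("run_summary.json not found yet; it's written as transcriptions finish.")
```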
-
-## Next steps
-
-* [How to install and run containers](speech-container-howto.md)
cognitive-services Speech Container Configuration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/speech-container-configuration.md
- Title: Configure Speech containers-
-description: Speech service provides each container with a common configuration framework, so that you can easily configure and manage storage, logging and telemetry, and security settings for your containers.
- Previously updated : 04/18/2023
-# Configure Speech service containers
-
-Speech containers enable customers to build one speech application architecture that is optimized to take advantage of both robust cloud capabilities and edge locality.
-
-The Speech container runtime environment is configured using the `docker run` command arguments. This container has several required settings, along with a few optional settings. The container-specific settings are the billing settings.
-
-## Configuration settings
--
-> [!IMPORTANT]
-> The [`ApiKey`](#apikey-configuration-setting), [`Billing`](#billing-configuration-setting), and [`Eula`](#eula-setting) settings are used together, and you must provide valid values for all three of them; otherwise your container won't start. For more information about using these configuration settings to instantiate a container, see [Billing](speech-container-overview.md#billing).
-
-## ApiKey configuration setting
-
-The `ApiKey` setting specifies the Azure resource key used to track billing information for the container. You must specify a value for the ApiKey and the value must be a valid key for the _Speech_ resource specified for the [`Billing`](#billing-configuration-setting) configuration setting.
-
-This setting can be found in the following place:
--- Azure portal: **Speech** Resource Management, under **Keys**-
-## ApplicationInsights setting
--
-## Billing configuration setting
-
-The `Billing` setting specifies the endpoint URI of the _Speech_ resource on Azure used to meter billing information for the container. You must specify a value for this configuration setting, and the value must be a valid endpoint URI for a _Speech_ resource on Azure. The container reports usage about every 10 to 15 minutes.
-
-This setting can be found in the following place:
--- Azure portal: **Speech** Overview, labeled `Endpoint`-
-| Required | Name | Data type | Description |
-| -- | -- | -- | -- |
-| Yes | `Billing` | String | Billing endpoint URI. For more information on obtaining the billing URI, see [billing](speech-container-overview.md#billing). For more information and a complete list of regional endpoints, see [Custom subdomain names for Cognitive Services](../cognitive-services-custom-subdomains.md). |
-
-## Eula setting
--
-## Fluentd settings
--
-## HTTP proxy credentials settings
--
-## Logging settings
--
-## Mount settings
-
-Use bind mounts to read and write data to and from the container. You can specify an input mount or output mount by specifying the `--mount` option in the [docker run](https://docs.docker.com/engine/reference/commandline/run/) command.
-
-The Standard Speech containers don't use input or output mounts to store training or service data. However, custom speech containers rely on volume mounts.
-
-The exact syntax of the host mount location varies depending on the host operating system. Additionally, the [host computer](speech-container-howto.md#host-computer-requirements-and-recommendations)'s mount location may not be accessible due to a conflict between permissions used by the docker service account and the host mount location permissions.
-
-| Optional | Name | Data type | Description |
-| -- | -- | -- | -- |
-| Not allowed | `Input` | String | Standard Speech containers do not use this. Custom speech containers use [volume mounts](#volume-mount-settings). |
-| Optional | `Output` | String | The target of the output mount. The default value is `/output`. This is the location of the logs. This includes container logs. <br><br>Example:<br>`--mount type=bind,src=c:\output,target=/output` |
-
-## Volume mount settings
-
-The custom speech containers use [volume mounts](https://docs.docker.com/storage/volumes/) to persist custom models. You can specify a volume mount by adding the `-v` (or `--volume`) option to the [docker run](https://docs.docker.com/engine/reference/commandline/run/) command.
-
-> [!NOTE]
-> The volume mount settings are only applicable for [Custom Speech to text](speech-container-cstt.md) containers.
-
-Custom models are downloaded the first time that a new model is ingested as part of the custom speech container docker run command. Sequential runs of the same `ModelId` for a custom speech container will use the previously downloaded model. If the volume mount is not provided, custom models cannot be persisted.
-
-The volume mount setting consists of three colon (`:`) separated fields:
-
-1. The first field is the name of the volume on the host machine, for example _C:\input_.
-2. The second field is the directory in the container, for example _/usr/local/models_.
-3. The third field (optional) is a comma-separated list of options, for more information see [use volumes](https://docs.docker.com/storage/volumes/).
-
-Here's a volume mount example that mounts the host machine _C:\input_ directory to the container's _/usr/local/models_ directory.
-
-```bash
--v C:\input:/usr/local/models
-```
--
-## Next steps
--- Review [How to install and run containers](speech-container-howto.md)--
cognitive-services Speech Container Cstt https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/speech-container-cstt.md
- Title: Custom speech to text containers - Speech service-
-description: Install and run custom speech to text containers with Docker to perform speech recognition, transcription, generation, and more on-premises.
- Previously updated : 04/18/2023
-zone_pivot_groups: programming-languages-speech-sdk-cli
-keywords: on-premises, Docker, container
--
-# Custom speech to text containers with Docker
-
-The Custom speech to text container transcribes real-time speech or batch audio recordings with intermediate results. You can use a custom model that you created in the [Custom Speech portal](https://speech.microsoft.com/customspeech). In this article, you'll learn how to download, install, and run a Custom speech to text container.
-
-> [!NOTE]
-> You must [request and get approval](speech-container-overview.md#request-approval-to-run-the-container) to use a Speech container.
-
-For more information about prerequisites, validating that a container is running, running multiple containers on the same host, and running disconnected containers, see [Install and run Speech containers with Docker](speech-container-howto.md).
-
-## Container images
-
-The Custom speech to text container image for all supported versions and locales can be found on the [Microsoft Container Registry (MCR)](https://mcr.microsoft.com/product/azure-cognitive-services/speechservices/custom-speech-to-text/tags) syndicate. It resides within the `azure-cognitive-services/speechservices/` repository and is named `custom-speech-to-text`.
--
-The fully qualified container image name is `mcr.microsoft.com/azure-cognitive-services/speechservices/custom-speech-to-text`. Either append a specific version or append `:latest` to get the most recent version.
-
-| Version | Path |
-|--|--|
-| Latest | `mcr.microsoft.com/azure-cognitive-services/speechservices/custom-speech-to-text:latest` |
-| 3.12.0 | `mcr.microsoft.com/azure-cognitive-services/speechservices/custom-speech-to-text:3.12.0-amd64` |
-
-All tags, except for `latest`, are in the following format and are case sensitive:
-
-```
-<major>.<minor>.<patch>-<platform>-<prerelease>
-```
-
-> [!NOTE]
-> The `locale` and `voice` for custom speech to text containers is determined by the custom model ingested by the container.
-
-The tags are also available [in JSON format](https://mcr.microsoft.com/v2/azure-cognitive-services/speechservices/custom-speech-to-text/tags/list) for your convenience. The body includes the container path and list of tags. The tags aren't sorted by version, but `"latest"` is always included at the end of the list as shown in this snippet:
-
-```json
-{
- "name": "azure-cognitive-services/speechservices/custom-speech-to-text",
- "tags": [
- "2.10.0-amd64",
- "2.11.0-amd64",
- "2.12.0-amd64",
- "2.12.1-amd64",
- <--redacted for brevity-->
- "latest"
- ]
-}
-```
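For example, the following Python sketch (not part of the product) retrieves the tag list from the JSON URL above and determines the newest versioned tag by parsing the `<major>.<minor>.<patch>` portion of each tag.

```python
# Sketch only: list the available tags and find the newest versioned tag.
import requests

url = "https://mcr.microsoft.com/v2/azure-cognitive-services/speechservices/custom-speech-to-text/tags/list"
tags = requests.get(url).json()["tags"]

# Ignore the "latest" alias and sort by the numeric <major>.<minor>.<patch> prefix of each tag.
def version_key(tag: str):
    return tuple(int(part) for part in tag.split("-")[0].split("."))

versioned = sorted((t for t in tags if t != "latest"), key=version_key)
print("Newest versioned tag:", versioned[-1])
```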
-
-### Get the container image with docker pull
-
-You need the [prerequisites](speech-container-howto.md#prerequisites) including required hardware. Please also see the [recommended allocation of resources](speech-container-howto.md#container-requirements-and-recommendations) for each Speech container.
-
-Use the [docker pull](https://docs.docker.com/engine/reference/commandline/pull/) command to download a container image from Microsoft Container Registry:
-
-```bash
-docker pull mcr.microsoft.com/azure-cognitive-services/speechservices/custom-speech-to-text:latest
-```
-
-> [!NOTE]
-> The `locale` and `voice` for custom Speech containers is determined by the custom model ingested by the container.
-
-## Get the model ID
-
-Before you can [run](#run-the-container-with-docker-run) the container, you need to know the model ID of your custom model or a base model ID. When you run the container you specify one of the model IDs to download and use.
-
-# [Custom model ID](#tab/custom-model)
-
-The custom model has to have been [trained](how-to-custom-speech-train-model.md) by using the [Speech Studio](https://aka.ms/speechstudio/customspeech). For information about how to get the model ID, see [Custom Speech model lifecycle](how-to-custom-speech-model-and-endpoint-lifecycle.md).
-
-![Screenshot that shows the Custom Speech training page.](media/custom-speech/custom-speech-model-training.png)
-
-Obtain the **Model ID** to use as the argument to the `ModelId` parameter of the `docker run` command.
-
-![Screenshot that shows Custom Speech model details.](media/custom-speech/custom-speech-model-details.png)
--
-# [Base model ID](#tab/base-model)
-
-You can get the available base model information by using option `BaseModelLocale={LOCALE}`. This option gives you a list of available base models on that locale under your billing account.
-
-To get base model IDs, you use the `docker run` command. For example:
-
-```bash
-docker run --rm -it \
-mcr.microsoft.com/azure-cognitive-services/speechservices/custom-speech-to-text \
-BaseModelLocale={LOCALE} \
-Eula=accept \
-Billing={ENDPOINT_URI} \
-ApiKey={API_KEY}
-```
-
-This command checks the container image and returns the available base models of the target locale.
-
-> [!NOTE]
-> Although you use the `docker run` command, the container isn't started for service.
-
-The output gives you a list of base models with the information locale, model ID, and creation date time. For example:
-
-```
-Checking available base model for en-us
-2020/10/30 21:54:20 [Info] Searching available base models for en-us
-2020/10/30 21:54:21 [Info] [Base model] Locale: en-us, CreatedDate: 2016-11-04T08:23:42Z, Id: a3d8aab9-6f36-44cd-9904-b37389ce2bfa
-2020/10/30 21:54:21 [Info] [Base model] Locale: en-us, CreatedDate: 2016-11-04T12:01:02Z, Id: cc7826ac-5355-471d-9bc6-a54673d06e45
-2020/10/30 21:54:21 [Info] [Base model] Locale: en-us, CreatedDate: 2017-08-17T12:00:00Z, Id: a1f8db59-40ff-4f0e-b011-37629c3a1a53
-2020/10/30 21:54:21 [Info] [Base model] Locale: en-us, CreatedDate: 2018-04-16T11:55:00Z, Id: c7a69da3-27de-4a4b-ab75-b6716f6321e5
-2020/10/30 21:54:21 [Info] [Base model] Locale: en-us, CreatedDate: 2018-09-21T15:18:43Z, Id: da494a53-0dad-4158-b15f-8f9daca7a412
-2020/10/30 21:54:21 [Info] [Base model] Locale: en-us, CreatedDate: 2018-10-19T11:28:54Z, Id: 84ec130b-d047-44bf-a46d-58c1ac292ca7
-2020/10/30 21:54:21 [Info] [Base model] Locale: en-us, CreatedDate: 2018-11-26T07:59:09Z, Id: ee5c100f-152f-4ae5-9e9d-014af3c01c56
-2020/10/30 21:54:21 [Info] [Base model] Locale: en-us, CreatedDate: 2018-11-26T09:21:55Z, Id: d04959a6-71da-4913-9997-836793e3c115
-2020/10/30 21:54:21 [Info] [Base model] Locale: en-us, CreatedDate: 2019-01-11T10:04:19Z, Id: 488e5f23-8bc5-46f8-9ad8-ea9a49a8efda
-2020/10/30 21:54:21 [Info] [Base model] Locale: en-us, CreatedDate: 2019-02-18T14:37:57Z, Id: 0207b3e6-92a8-4363-8c0e-361114cdd719
-2020/10/30 21:54:21 [Info] [Base model] Locale: en-us, CreatedDate: 2019-03-03T17:34:10Z, Id: 198d9b79-2950-4609-b6ec-f52254074a05
-2020/10/30 21:54:21 [Fatal] Please run this tool again and assign --modelId '<one above base model id>'. If no model id listed above, it means currently there is no available base model for en-us
-```
---
-## Display model download
-
-Before you [run](#run-the-container-with-docker-run) the container, you can optionally get the available display models information and choose to download those models into your speech to text container to get highly improved final display output. Display model download is available with custom-speech-to-text container version 3.1.0 and later.
-
-> [!NOTE]
-> Although you use the `docker run` command, the container isn't started for service.
-
-You can query or download any or all of these display model types: Rescoring (`Rescore`), Punctuation (`Punct`), resegmentation (`Resegment`), and wfstitn (`Wfstitn`). Otherwise, you can use the `FullDisplay` option (with or without the other types) to query or download all types of display models.
-
-Set the `BaseModelLocale` to query the latest available display model on the target locale. If you include multiple display model types, the command will return the latest available display models for each type. For example:
-
-```bash
-docker run --rm -it \
-mcr.microsoft.com/azure-cognitive-services/speechservices/custom-speech-to-text \
-Punct Rescore Resegment Wfstitn \ # Specify `FullDisplay` or a space-separated subset of display models
-BaseModelLocale={LOCALE} \
-Eula=accept \
-Billing={ENDPOINT_URI} \
-ApiKey={API_KEY}
-```
-
-Set the `DisplayLocale` to download the latest available display model on the target locale. When you set `DisplayLocale`, you must also specify `FullDisplay` or a space-separated subset of display models. The command will download the latest available display model for each specified type. For example:
-
-```bash
-docker run --rm -it \
-mcr.microsoft.com/azure-cognitive-services/speechservices/custom-speech-to-text \
-Punct Rescore Resegment Wfstitn \ # Specify `FullDisplay` or a space-separated subset of display models
-DisplayLocale={LOCALE} \
-Eula=accept \
-Billing={ENDPOINT_URI} \
-ApiKey={API_KEY}
-```
-
-Set one model ID parameter to download a specific display model: Rescoring (`RescoreId`), Punctuation (`PunctId`), resegmentation (`ResegmentId`), or wfstitn (`WfstitnId`). This is similar to how you would download a base model via the `ModelId` parameter. For example, to download a rescoring display model, you can use the following command with the `RescoreId` parameter:
-
-```bash
-docker run --rm -it \
-mcr.microsoft.com/azure-cognitive-services/speechservices/custom-speech-to-text \
-RescoreId={RESCORE_MODEL_ID} \
-Eula=accept \
-Billing={ENDPOINT_URI} \
-ApiKey={API_KEY}
-```
-
-> [!NOTE]
-> If you set more than one query or download parameter, the command will prioritize in this order: `BaseModelLocale`, model ID, and then `DisplayLocale` (only applicable for display models).
-
-## Run the container with docker run
-
-Use the [docker run](https://docs.docker.com/engine/reference/commandline/run/) command to run the container for service.
-
-# [Custom speech to text](#tab/container)
--
-# [Disconnected custom speech to text](#tab/disconnected)
-
-To run disconnected containers (not connected to the internet), you must submit [this request form](https://aka.ms/csdisconnectedcontainers) and wait for approval. For more information about applying and purchasing a commitment plan to use containers in disconnected environments, see [Use containers in disconnected environments](../containers/disconnected-containers.md) in the Azure Cognitive Services documentation.
-
-If you have been approved to run the container disconnected from the internet, the following example shows the formatting of the `docker run` command to use, with placeholder values. Replace these placeholder values with your own values.
-
-In order to prepare and configure a disconnected custom speech to text container you will need two separate speech resources:
-- A regular Azure Cognitive Services Speech resource, which is configured to use either an "**S0 - Standard**" pricing tier or a "**Speech to Text (Custom)**" commitment tier pricing plan. This is used to train, download, and configure your custom speech models for use in your container.
-- An Azure Cognitive Services Speech resource, which is configured to use the "**DC0 Commitment (Disconnected)**" pricing plan. This is used to download your disconnected container license file, which is required to run the container in disconnected mode.
-Follow these steps to download and run the container in disconnected environments.
-1. [Download a model for the disconnected container](#download-a-model-for-the-disconnected-container). For this step, use a regular Azure Cognitive Services Speech resource which is either configured to use a "**S0 - Standard**" pricing tier or a "**Speech to Text (Custom)**" commitment tier pricing plan.
-1. [Download the disconnected container license](#download-the-disconnected-container-license). For this step, use an Azure Cognitive Services Speech resource which is configured to use the "**DC0 Commitment (Disconnected)**" pricing plan.
-1. [Run the disconnected container for service](#run-the-disconnected-container). For this step, use an Azure Cognitive Services Speech resource which is configured to use the "**DC0 Commitment (Disconnected)**" pricing plan.
-
-### Download a model for the disconnected container
-
-For this step, use a regular Azure Cognitive Services Speech resource which is either configured to use a "**S0 - Standard**" pricing tier or a "**Speech to Text (Custom)**" commitment tier pricing plan.
--
-### Download the disconnected container license
-
-Next, you download your disconnected license file. The `DownloadLicense=True` parameter in your `docker run` command downloads a license file that enables your Docker container to run when it isn't connected to the internet. The license file also contains an expiration date, after which it's no longer valid for running the container.
-
-You can only use a license file with the appropriate container and model that you've been approved for. For example, you can't use a license file for a `speech-to-text` container with a `neural-text-to-speech` container.
-
-| Placeholder | Description |
-|-|-|
-| `{IMAGE}` | The container image you want to use.<br/><br/>For example: `mcr.microsoft.com/azure-cognitive-services/custom-speech-to-text:latest` |
-| `{LICENSE_MOUNT}` | The path where the license will be downloaded, and mounted.<br/><br/>For example: `/host/license:/path/to/license/directory` |
-| `{MODEL_PATH}` | The path where the model is located.<br/><br/>For example: `/path/to/model/` |
-| `{ENDPOINT_URI}` | The endpoint for authenticating your service request. You can find it on your resource's **Key and endpoint** page, on the Azure portal.<br/><br/>For example: `https://<your-resource-name>.cognitiveservices.azure.com` |
-| `{API_KEY}` | The key for your Speech resource. You can find it on your resource's **Key and endpoint** page, on the Azure portal. |
-| `{CONTAINER_LICENSE_DIRECTORY}` | Location of the license folder on the container's local filesystem.<br/><br/>For example: `/path/to/license/directory` |
-
-For this step, use an Azure Cognitive Services Speech resource which is configured to use the "**DC0 Commitment (Disconnected)**" pricing plan.
-
-```bash
-docker run --rm -it -p 5000:5000 \
--v {LICENSE_MOUNT} \
--v {MODEL_PATH} \
-{IMAGE} \
-eula=accept \
-billing={ENDPOINT_URI} \
-apikey={API_KEY} \
-DownloadLicense=True \
-Mounts:License={CONTAINER_LICENSE_DIRECTORY}
-```
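
As a quick sanity check (not part of the documented procedure), you can confirm that the license file was written to the host side of the `{LICENSE_MOUNT}` path once the command exits. The path below uses the example value `/host/license` from the table above; substitute your own mount path.

```bash
# List the host directory that was mounted for the license download.
# One or more license files should be present if the download succeeded.
ls -l /host/license
```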
-
-### Run the disconnected container
-
-Once the license file has been downloaded, you can run the container in a disconnected environment. The following example shows the formatting of the `docker run` command you'll use, with placeholder values. Replace these placeholder values with your own values.
-
-Wherever the container is run, the license file must be mounted to the container and the location of the license folder on the container's local filesystem must be specified with `Mounts:License=`. An output mount must also be specified so that billing usage records can be written.
-
-| Placeholder | Description |
-|-|-|
-| `{IMAGE}` | The container image you want to use.<br/><br/>For example: `mcr.microsoft.com/azure-cognitive-services/custom-speech-to-text:latest` |
-| `{MEMORY_SIZE}` | The appropriate size of memory to allocate for your container.<br/><br/>For example: `4g` |
-| `{NUMBER_CPUS}` | The appropriate number of CPUs to allocate for your container.<br/><br/>For example: `4` |
-| `{LICENSE_MOUNT}` | The path where the license will be downloaded, and mounted.<br/><br/>For example: `/host/license:/path/to/license/directory` |
-| `{MODEL_PATH}` | The path where the model is located.<br/><br/>For example: `/path/to/model/` |
-| `{OUTPUT_PATH}` | The output path for logging.<br/><br/>For example: `/host/output:/path/to/output/directory`<br/><br/>For more information, see [usage records](../containers/disconnected-containers.md#usage-records) in the Azure Cognitive Services documentation. |
-| `{ENDPOINT_URI}` | The endpoint for authenticating your service request. You can find it on your resource's **Key and endpoint** page, on the Azure portal.<br/><br/>For example: `https://<your-resource-name>.cognitiveservices.azure.com` |
-| `{API_KEY}` | The key for your Speech resource. You can find it on your resource's **Key and endpoint** page, on the Azure portal. |
-| `{CONTAINER_LICENSE_DIRECTORY}` | Location of the license folder on the container's local filesystem.<br/><br/>For example: `/path/to/license/directory` |
-| `{CONTAINER_OUTPUT_DIRECTORY}` | Location of the output folder on the container's local filesystem.<br/><br/>For example: `/path/to/output/directory` |
-
-For this step, use an Azure Cognitive Services Speech resource which is configured to use the "**DC0 Commitment (Disconnected)**" pricing plan.
-
-```bash
-docker run --rm -it -p 5000:5000 --memory {MEMORY_SIZE} --cpus {NUMBER_CPUS} \
--v {LICENSE_MOUNT} \
--v {OUTPUT_PATH} \
--v {MODEL_PATH} \
-{IMAGE} \
-eula=accept \
-Mounts:License={CONTAINER_LICENSE_DIRECTORY} \
-Mounts:Output={CONTAINER_OUTPUT_DIRECTORY}
-```
-
-The custom speech to text container provides default directories for writing the license file and billing log at runtime: `/license` and `/output`, respectively.
-
-When you mount these directories to the container with the `docker run -v` command, make sure the ownership of the local machine directories is set to `nonroot:nonroot` before you run the container.
-
-Below is a sample command to set file/directory ownership.
-
-```bash
-sudo chown -R nonroot:nonroot <YOUR_LOCAL_MACHINE_PATH_1> <YOUR_LOCAL_MACHINE_PATH_2> ...
-```
----
-## Use the container
--
-[Try the speech to text quickstart](get-started-speech-to-text.md) using host authentication instead of key and region.
-
-## Next steps
-
-* See the [Speech containers overview](speech-container-overview.md)
-* Review [configure containers](speech-container-configuration.md) for configuration settings
-* Use more [Azure Cognitive Services containers](../cognitive-services-container-support.md)
cognitive-services Speech Container Howto On Premises https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/speech-container-howto-on-premises.md
- Title: Use Speech service containers with Kubernetes and Helm
-description: Using Kubernetes and Helm to define the speech to text and text to speech container images, we'll create a Kubernetes package. This package will be deployed to a Kubernetes cluster on-premises.
- Previously updated : 07/22/2021
-# Use Speech service containers with Kubernetes and Helm
-
-One option to manage your Speech containers on-premises is to use Kubernetes and Helm. Using Kubernetes and Helm to define the speech to text and text to speech container images, we'll create a Kubernetes package. This package will be deployed to a Kubernetes cluster on-premises. Finally, we'll explore how to test the deployed services and various configuration options. For more information about running Docker containers without Kubernetes orchestration, see [install and run Speech service containers](speech-container-howto.md).
-
-## Prerequisites
-
-The following prerequisites are required before you use Speech containers on-premises:
-
-| Required | Purpose |
-|-||
-| Azure Account | If you don't have an Azure subscription, create a [free account][free-azure-account] before you begin. |
-| Container Registry access | In order for Kubernetes to pull the docker images into the cluster, it will need access to the container registry. |
-| Kubernetes CLI | The [Kubernetes CLI][kubernetes-cli] is required for managing the shared credentials from the container registry. Kubernetes is also needed before Helm, which is the Kubernetes package manager. |
-| Helm CLI | Install the [Helm CLI][helm-install], which is used to install a helm chart (container package definition). |
-|Speech resource |In order to use these containers, you must have:<br><br>A _Speech_ Azure resource to get the associated billing key and billing endpoint URI. Both values are available on the Azure portal's **Speech** Overview and Keys pages and are required to start the container.<br><br>**{API_KEY}**: resource key<br><br>**{ENDPOINT_URI}**: endpoint URI example is: `https://eastus.api.cognitive.microsoft.com/sts/v1.0`|
-
-## The recommended host computer configuration
-
-Use the [Speech service container host computer][speech-container-host-computer] details as a reference. This *Helm chart* automatically calculates CPU and memory requirements based on the number of decodes (concurrent requests) that the user specifies, and adjusts based on whether optimizations for audio/text input are configured as `enabled`. The Helm chart defaults to two concurrent requests with optimization disabled.
-
-| Service | CPU / Container | Memory / Container |
-|--|--|--|
-| **speech to text** | One decoder requires a minimum of 1,150 millicores. If `optimizeForAudioFile` is enabled, then 1,950 millicores are required. (default: two decoders) | Required: 2 GB<br>Limited: 4 GB |
-| **text to speech** | One concurrent request requires a minimum of 500 millicores. If `optimizeForTurboMode` is enabled, then 1,000 millicores are required. (default: two concurrent requests) | Required: 1 GB<br> Limited: 2 GB |
-
-## Connect to the Kubernetes cluster
-
-The host computer is expected to have an available Kubernetes cluster. See this tutorial on [deploying a Kubernetes cluster](../../aks/tutorial-kubernetes-deploy-cluster.md) for a conceptual understanding of how to deploy a Kubernetes cluster to a host computer.
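
Before configuring the chart, it can help to confirm that `kubectl` is pointed at the cluster you intend to deploy to. These are standard Kubernetes CLI commands and assume your kubeconfig already targets the cluster:

```console
# Show the cluster that kubectl is currently connected to
kubectl cluster-info

# Confirm that the worker nodes are registered and Ready
kubectl get nodes
```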
-
-## Configure Helm chart values for deployment
-
-Visit the [Microsoft Helm Hub][ms-helm-hub] for all the publicly available helm charts offered by Microsoft. From the Microsoft Helm Hub, you'll find the **Cognitive Services Speech On-Premises Chart**. **Cognitive Services Speech On-Premises** is the chart we'll install, but we must first create a `config-values.yaml` file with explicit configurations. Let's start by adding the Microsoft repository to our Helm instance.
-
-```console
-helm repo add microsoft https://microsoft.github.io/charts/repo
-```
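
Optionally, you can refresh the repository index and confirm that the chart is visible from your Helm client before continuing. Both are standard Helm 3 commands, and the chart name matches the one installed later in this article:

```console
# Refresh the local cache of chart repositories
helm repo update

# Search the added repositories for the on-premises Speech chart
helm search repo microsoft/cognitive-services-speech-onpremise
```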
-
-Next, we'll configure our Helm chart values. Copy and paste the following YAML into a file named `config-values.yaml`. For more information on customizing the **Cognitive Services Speech On-Premises Helm Chart**, see [customize helm charts](#customize-helm-charts). Replace the `# {ENDPOINT_URI}` and `# {API_KEY}` comments with your own values.
-
-```yaml
-# These settings are deployment specific and users can provide customizations
-# speech to text configurations
-speechToText:
- enabled: true
- numberOfConcurrentRequest: 3
- optimizeForAudioFile: true
- image:
- registry: mcr.microsoft.com
- repository: azure-cognitive-services/speechservices/speech-to-text
- tag: latest
- pullSecrets:
- - mcr # Or an existing secret
- args:
- eula: accept
- billing: # {ENDPOINT_URI}
- apikey: # {API_KEY}
-
-# text to speech configurations
-textToSpeech:
- enabled: true
- numberOfConcurrentRequest: 3
- optimizeForTurboMode: true
- image:
- registry: mcr.microsoft.com
- repository: azure-cognitive-services/speechservices/text-to-speech
- tag: latest
- pullSecrets:
- - mcr # Or an existing secret
- args:
- eula: accept
- billing: # {ENDPOINT_URI}
- apikey: # {API_KEY}
-```
-
-> [!IMPORTANT]
-> If the `billing` and `apikey` values are not provided, the services expire after 15 minutes. Likewise, verification will fail because the services won't be available.
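
If you want to check the file before installing anything, one option is to render the chart locally with `helm template`, a standard Helm 3 command that doesn't contact the cluster. This is only a suggested check, not a required step; it uses the same release name, chart, and version as the `helm install` command shown later in this article:

```console
# Render the Kubernetes manifests locally to confirm config-values.yaml parses as expected
helm template onprem-speech microsoft/cognitive-services-speech-onpremise \
    --version 0.1.1 \
    --values config-values.yaml
```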
-
-### The Kubernetes package (Helm chart)
-
-The *Helm chart* contains the configuration of which docker image(s) to pull from the `mcr.microsoft.com` container registry.
-
-> A [Helm chart][helm-charts] is a collection of files that describe a related set of Kubernetes resources. A single chart might be used to deploy something simple, like a memcached pod, or something complex, like a full web app stack with HTTP servers, databases, caches, and so on.
-
-The provided *Helm charts* pull the docker images of the Speech service, both text to speech and the speech to text services from the `mcr.microsoft.com` container registry.
-
-## Install the Helm chart on the Kubernetes cluster
-
-To install the *Helm chart*, we'll need to execute the [`helm install`][helm-install-cmd] command, replacing `<config-values.yaml>` with the appropriate path and file name argument. The `microsoft/cognitive-services-speech-onpremise` Helm chart referenced below is available on the [Microsoft Helm Hub][ms-helm-hub-speech-chart].
-
-```console
-helm install onprem-speech microsoft/cognitive-services-speech-onpremise \
- --version 0.1.1 \
- --values <config-values.yaml>
-```
-
-Here is an example output you might expect to see from a successful install execution:
-
-```console
-NAME: onprem-speech
-LAST DEPLOYED: Tue Jul 2 12:51:42 2019
-NAMESPACE: default
-STATUS: DEPLOYED
-
-RESOURCES:
-==> v1/Pod(related)
-NAME READY STATUS RESTARTS AGE
-speech-to-text-7664f5f465-87w2d 0/1 Pending 0 0s
-speech-to-text-7664f5f465-klbr8 0/1 ContainerCreating 0 0s
-text-to-speech-56f8fb685b-4jtzh 0/1 ContainerCreating 0 0s
-text-to-speech-56f8fb685b-frwxf 0/1 Pending 0 0s
-
-==> v1/Service
-NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
-speech-to-text LoadBalancer 10.0.252.106 <pending> 80:31811/TCP 1s
-text-to-speech LoadBalancer 10.0.125.187 <pending> 80:31247/TCP 0s
-
-==> v1beta1/PodDisruptionBudget
-NAME MIN AVAILABLE MAX UNAVAILABLE ALLOWED DISRUPTIONS AGE
-speech-to-text-poddisruptionbudget N/A 20% 0 1s
-text-to-speech-poddisruptionbudget N/A 20% 0 1s
-
-==> v1beta2/Deployment
-NAME READY UP-TO-DATE AVAILABLE AGE
-speech-to-text 0/2 2 0 0s
-text-to-speech 0/2 2 0 0s
-
-==> v2beta2/HorizontalPodAutoscaler
-NAME REFERENCE TARGETS MINPODS MAXPODS REPLICAS AGE
-speech-to-text-autoscaler Deployment/speech-to-text <unknown>/50% 2 10 0 0s
-text-to-speech-autoscaler Deployment/text-to-speech <unknown>/50% 2 10 0 0s
--
-NOTES:
-cognitive-services-speech-onpremise has been installed!
-Release is named onprem-speech
-```
-
-The Kubernetes deployment can take several minutes to complete. To confirm that both pods and services are properly deployed and available, execute the following command:
-
-```console
-kubectl get all
-```
-
-You should expect to see something similar to the following output:
-
-```console
-NAME READY STATUS RESTARTS AGE
-pod/speech-to-text-7664f5f465-87w2d 1/1 Running 0 34m
-pod/speech-to-text-7664f5f465-klbr8 1/1 Running 0 34m
-pod/text-to-speech-56f8fb685b-4jtzh 1/1 Running 0 34m
-pod/text-to-speech-56f8fb685b-frwxf 1/1 Running 0 34m
-
-NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
-service/kubernetes ClusterIP 10.0.0.1 <none> 443/TCP 3h
-service/speech-to-text LoadBalancer 10.0.252.106 52.162.123.151 80:31811/TCP 34m
-service/text-to-speech LoadBalancer 10.0.125.187 65.52.233.162 80:31247/TCP 34m
-
-NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE
-deployment.apps/speech-to-text 2 2 2 2 34m
-deployment.apps/text-to-speech 2 2 2 2 34m
-
-NAME DESIRED CURRENT READY AGE
-replicaset.apps/speech-to-text-7664f5f465 2 2 2 34m
-replicaset.apps/text-to-speech-56f8fb685b 2 2 2 34m
-
-NAME REFERENCE TARGETS MINPODS MAXPODS REPLICAS AGE
-horizontalpodautoscaler.autoscaling/speech-to-text-autoscaler Deployment/speech-to-text 1%/50% 2 10 2 34m
-horizontalpodautoscaler.autoscaling/text-to-speech-autoscaler Deployment/text-to-speech 0%/50% 2 10 2 34m
-```
-
-### Verify Helm deployment with Helm tests
-
-The installed Helm charts define *Helm tests*, which serve as a convenience for verification. These tests validate service readiness. To verify both **speech to text** and **text to speech** services, we'll execute the [Helm test][helm-test] command.
-
-```console
-helm test onprem-speech
-```
-
-> [!IMPORTANT]
-> These tests will fail if the POD status is not `Running` or if the deployment is not listed under the `AVAILABLE` column. Be patient as this can take over ten minutes to complete.
-
-These tests will output various status results:
-
-```console
-RUNNING: speech to text-readiness-test
-PASSED: speech to text-readiness-test
-RUNNING: text to speech-readiness-test
-PASSED: text to speech-readiness-test
-```
-
-As an alternative to executing the *Helm tests*, you could collect the *External IP* addresses and corresponding ports from the `kubectl get all` command. Using the IP and port, open a web browser and navigate to `http://<external-ip>:<port>/swagger/index.html` to view the API swagger page(s).
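
For example, a single service's external IP can be pulled out with `kubectl get service` and a `jsonpath` output expression; the service name below comes from the output shown earlier, and the command is illustrative rather than required:

```console
# Print the external IP assigned to the speech-to-text LoadBalancer service
kubectl get service speech-to-text \
    --output jsonpath='{.status.loadBalancer.ingress[0].ip}'
```

Combine the printed IP with the service port (port 80 in the earlier output) to build the swagger URL.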
-
-## Customize Helm charts
-
-Helm charts are hierarchical. Being hierarchical allows for chart inheritance; it also caters to the concept of specificity, where settings that are more specific override inherited rules.
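
As a minimal sketch of that specificity in practice, and assuming the value names from the `config-values.yaml` shown earlier in this article, you could override a single setting at install time with `--set` instead of editing the file:

```console
helm upgrade --install onprem-speech microsoft/cognitive-services-speech-onpremise \
    --version 0.1.1 \
    --values config-values.yaml \
    --set speechToText.numberOfConcurrentRequest=4
```

Values passed with `--set` are more specific than the chart defaults and the values file, so they take precedence.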
----
-## Next steps
-
-For more details on installing applications with Helm in Azure Kubernetes Service (AKS), see [Install applications with Helm in AKS][installing-helm-apps-in-aks].
-
-> [!div class="nextstepaction"]
-> [Cognitive Services Containers][cog-svcs-containers]
-
-<!-- LINKS - external -->
-[free-azure-account]: https://azure.microsoft.com/free
-[git-download]: https://git-scm.com/downloads
-[azure-cli]: /cli/azure/install-azure-cli
-[docker-engine]: https://www.docker.com/products/docker-engine
-[kubernetes-cli]: https://kubernetes.io/docs/tasks/tools/install-kubectl
-[helm-install]: https://helm.sh/docs/intro/install/
-[helm-install-cmd]: https://helm.sh/docs/intro/using_helm/#helm-install-installing-a-package
-[tiller-install]: https://helm.sh/docs/install/#installing-tiller
-[helm-charts]: https://helm.sh/docs/topics/charts/
-[kubectl-create]: https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#create
-[kubectl-get]: https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#get
-[helm-test]: https://v2.helm.sh/docs/helm/#helm-test
-[ms-helm-hub]: https://artifacthub.io/packages/search?repo=microsoft
-[ms-helm-hub-speech-chart]: https://hub.helm.sh/charts/microsoft/cognitive-services-speech-onpremise
-
-<!-- LINKS - internal -->
-[speech-container-host-computer]: speech-container-howto.md#host-computer-requirements-and-recommendations
-[installing-helm-apps-in-aks]: ../../aks/kubernetes-helm.md
-[cog-svcs-containers]: ../cognitive-services-container-support.md
cognitive-services Speech Container Howto https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/speech-container-howto.md
- Title: Install and run Speech containers with Docker - Speech service
-description: Use the Speech containers with Docker to perform speech recognition, transcription, generation, and more on-premises.
- Previously updated : 04/18/2023
-keywords: on-premises, Docker, container
--
-# Install and run Speech containers with Docker
-
-By using containers, you can use a subset of the Speech service features in your own environment. In this article, you'll learn how to download, install, and run a Speech container.
-
-> [!NOTE]
-> Disconnected container pricing and commitment tiers vary from standard containers. For more information, see [Speech service pricing](https://azure.microsoft.com/pricing/details/cognitive-services/speech-services/).
-
-## Prerequisites
-
-You must meet the following prerequisites before you use Speech service containers. If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/cognitive-services/) before you begin. You need:
-
-* You must [request and get approval](speech-container-overview.md#request-approval-to-run-the-container) to use a Speech container.
-* [Docker](https://docs.docker.com/) installed on a host computer. Docker must be configured to allow the containers to connect with and send billing data to Azure.
- * On Windows, Docker must also be configured to support Linux containers.
- * You should have a basic understanding of [Docker concepts](https://docs.docker.com/get-started/overview/).
-* A <a href="https://portal.azure.com/#create/Microsoft.CognitiveServicesSpeechServices" title="Create a Speech service resource" target="_blank">Speech service resource </a> with the free (F0) or standard (S) [pricing tier](https://azure.microsoft.com/pricing/details/cognitive-services/speech-services/).
-
-### Billing arguments
-
-Speech containers aren't licensed to run without being connected to Azure for metering. You must configure your container to communicate billing information with the metering service at all times.
-
-Three primary parameters for all Cognitive Services containers are required. The Microsoft Software License Terms must be present with a value of **accept**. An Endpoint URI and API key are also needed.
-
-Queries to the container are billed at the pricing tier of the Azure resource that's used for the `ApiKey` parameter.
-
-The <a href="https://docs.docker.com/engine/reference/commandline/run/" target="_blank">`docker run` <span class="docon docon-navigate-external x-hidden-focus"></span></a> command will start the container when all three of the following options are provided with valid values:
-
-| Option | Description |
-|--|-|
-| `ApiKey` | The API key of the Speech resource that's used to track billing information.<br/>The `ApiKey` value is used to start the container and is available on the Azure portal's **Keys** page of the corresponding Speech resource. Go to the **Keys** page, and select the **Copy to clipboard** <span class="docon docon-edit-copy x-hidden-focus"></span> icon.|
-| `Billing` | The endpoint of the Speech resource that's used to track billing information.<br/>The endpoint is available on the Azure portal **Overview** page of the corresponding Speech resource. Go to the **Overview** page, hover over the endpoint, and a **Copy to clipboard** <span class="docon docon-edit-copy x-hidden-focus"></span> icon appears. Copy and use the endpoint where needed.|
-| `Eula` | Indicates that you accepted the license for the container.<br/>The value of this option must be set to **accept**. |
-
-> [!IMPORTANT]
-> These subscription keys are used to access your Cognitive Services API. Don't share your keys. Store them securely. For example, use Azure Key Vault. We also recommend that you regenerate these keys regularly. Only one key is necessary to make an API call. When you regenerate the first key, you can use the second key for continued access to the service.
-
-The container needs the billing argument values to run. These values allow the container to connect to the billing endpoint. The container reports usage about every 10 to 15 minutes. If the container doesn't connect to Azure within the allowed time window, the container continues to run but doesn't serve queries until the billing endpoint is restored. The connection is attempted 10 times at the same time interval of 10 to 15 minutes. If it can't connect to the billing endpoint within the 10 tries, the container stops serving requests. For an example of the information sent to Microsoft for billing, see the [Cognitive Services container FAQ](../containers/container-faq.yml#how-does-billing-work) in the Azure Cognitive Services documentation.
-
-For more information about these options, see [Configure containers](speech-container-configuration.md).
-
-### Container requirements and recommendations
-
-The following table describes the minimum and recommended allocation of resources for each Speech container:
-
-| Container | Minimum | Recommended |Speech Model|
-|--||-| -- |
-| Speech to text | 4 core, 4-GB memory | 8 core, 8-GB memory |+4 to 8 GB memory|
-| Custom speech to text | 4 core, 4-GB memory | 8 core, 8-GB memory |+4 to 8 GB memory|
-| Speech language identification | 1 core, 1-GB memory | 1 core, 1-GB memory |n/a|
-| Neural text to speech | 6 core, 12-GB memory | 8 core, 16-GB memory |n/a|
-
-Each core must be at least 2.6 gigahertz (GHz) or faster.
-
-Core and memory correspond to the `--cpus` and `--memory` settings, which are used as part of the `docker run` command.
-
-> [!NOTE]
-> The minimum and recommended allocations are based on Docker limits, *not* the host machine resources.
-> For example, speech to text containers memory map portions of a large language model. We recommend that the entire file should fit in memory. You need to add an additional 4 to 8 GB to load the speech models (see above table).
-> Also, the first run of either container might take longer because models are being paged into memory.
-
-## Host computer requirements and recommendations
-
-The host is an x64-based computer that runs the Docker container. It can be a computer on your premises or a Docker hosting service in Azure, such as:
-
-* [Azure Kubernetes Service](~/articles/aks/index.yml).
-* [Azure Container Instances](~/articles/container-instances/index.yml).
-* A [Kubernetes](https://kubernetes.io/) cluster deployed to [Azure Stack](/azure-stack/operator). For more information, see [Deploy Kubernetes to Azure Stack](/azure-stack/user/azure-stack-solution-template-kubernetes-deploy).
--
-> [!NOTE]
-> Containers support compressed audio input to the Speech SDK by using GStreamer.
-> To install GStreamer in a container, follow Linux instructions for GStreamer in [Use codec compressed audio input with the Speech SDK](how-to-use-codec-compressed-audio-input-streams.md).
-
-### Advanced Vector Extension support
-
-The *host* is the computer that runs the Docker container. The host *must support* [Advanced Vector Extensions](https://en.wikipedia.org/wiki/Advanced_Vector_Extensions#CPUs_with_AVX2) (AVX2). You can check for AVX2 support on Linux hosts with the following command:
-
-```console
-grep -q avx2 /proc/cpuinfo && echo AVX2 supported || echo No AVX2 support detected
-```
-> [!WARNING]
-> The host computer is *required* to support AVX2. The container *will not* function correctly without AVX2 support.
-
-## Run the container
-
-Use the [docker run](https://docs.docker.com/engine/reference/commandline/run/) command to run the container. Once running, the container continues to run until you [stop the container](#stop-the-container).
-
-Take note of the following best practices with the `docker run` command:
-
-- **Line-continuation character**: The Docker commands in the following sections use the backslash, `\`, as a line continuation character. Replace or remove this based on your host operating system's requirements.
-- **Argument order**: Do not change the order of the arguments unless you are familiar with Docker containers.
-
-You can use the [docker images](https://docs.docker.com/engine/reference/commandline/images/) command to list your downloaded container images. The following command lists the ID, repository, and tag of each downloaded container image, formatted as a table:
-
-```bash
-docker images --format "table {{.ID}}\t{{.Repository}}\t{{.Tag}}"
-```
-
-Here's an example result:
-```
-IMAGE ID REPOSITORY TAG
-<image-id> <repository-path/name> <tag-name>
-```
-
-## Validate that a container is running
-
-There are several ways to validate that the container is running. Locate the *External IP* address and exposed port of the container in question, and open your favorite web browser. Use the various request URLs that follow to validate the container is running.
-
-The example request URLs listed here are `http://localhost:5000`, but your specific container might vary. Make sure to rely on your container's *External IP* address and exposed port.
-
-| Request URL | Purpose |
-|--|--|
-| `http://localhost:5000/` | The container provides a home page. |
-| `http://localhost:5000/ready` | Requested with GET, this URL provides a verification that the container is ready to accept a query against the model. This request can be used for Kubernetes [liveness and readiness probes](https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-probes/). |
-| `http://localhost:5000/status` | Also requested with GET, this URL verifies if the api-key used to start the container is valid without causing an endpoint query. This request can be used for Kubernetes [liveness and readiness probes](https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-probes/). |
-| `http://localhost:5000/swagger` | The container provides a full set of documentation for the endpoints and a **Try it out** feature. With this feature, you can enter your settings into a web-based HTML form and make the query without having to write any code. After the query returns, an example CURL command is provided to demonstrate the HTTP headers and body format that's required. |
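
For example, a quick way to exercise the readiness and status endpoints from the host is with `curl`. This assumes the container is listening on `localhost:5000`, as in the table above:

```bash
# Readiness probe: returns a success status when the model is loaded and ready
curl -i http://localhost:5000/ready

# Status probe: verifies the api-key used to start the container without running a query
curl -i http://localhost:5000/status
```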
-
-## Stop the container
-
-To shut down the container, in the command-line environment where the container is running, select <kbd>Ctrl+C</kbd>.
-
-## Run multiple containers on the same host
-
-If you intend to run multiple containers with exposed ports, make sure to run each container with a different exposed port. For example, run the first container on port 5000 and the second container on port 5001.
-
-You can have this container and a different Cognitive Services container running on the host together. You can also run multiple instances of the same Cognitive Services container, as sketched below.
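
The following sketch shows the port layout only; it reuses the `neural-text-to-speech` image and billing placeholders that appear elsewhere in this article, and the two commands differ only in the host port they publish:

```bash
# First container: host port 5000 -> container port 5000
docker run --rm -d -p 5000:5000 --memory 12g --cpus 6 \
mcr.microsoft.com/azure-cognitive-services/speechservices/neural-text-to-speech \
Eula=accept Billing={ENDPOINT_URI} ApiKey={API_KEY}

# Second container: host port 5001 -> container port 5000
docker run --rm -d -p 5001:5000 --memory 12g --cpus 6 \
mcr.microsoft.com/azure-cognitive-services/speechservices/neural-text-to-speech \
Eula=accept Billing={ENDPOINT_URI} ApiKey={API_KEY}
```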
-
-## Host URLs
-
-> [!NOTE]
-> Use a unique port number if you're running multiple containers.
-
-| Protocol | Host URL | Containers |
-|--|--|--|
-| WS | `ws://localhost:5000` | [Speech to text](speech-container-stt.md#use-the-container)<br/><br/>[Custom speech to text](speech-container-cstt.md#use-the-container) |
-| HTTP | `http://localhost:5000` | [Neural text to speech](speech-container-ntts.md#use-the-container)<br/><br/>[Speech language identification](speech-container-lid.md#use-the-container) |
-
-For more information on using WSS and HTTPS protocols, see [Container security](../cognitive-services-container-support.md#azure-cognitive-services-container-security) in the Azure Cognitive Services documentation.
-
-## Troubleshooting
-
-When you start or run the container, you might experience issues. Use an output [mount](speech-container-configuration.md#mount-settings) and enable logging. Doing so allows the container to generate log files that are helpful when you troubleshoot issues.
-
-> [!TIP]
-> For more troubleshooting information and guidance, see [Cognitive Services containers frequently asked questions (FAQ)](../containers/container-faq.yml) in the Azure Cognitive Services documentation.
--
-### Logging settings
-
-Speech containers come with ASP.NET Core logging support. Here's an example of the `neural-text-to-speech` container started with default logging to the console:
-
-```bash
-docker run --rm -it -p 5000:5000 --memory 12g --cpus 6 \
-mcr.microsoft.com/azure-cognitive-services/speechservices/neural-text-to-speech \
-Eula=accept \
-Billing={ENDPOINT_URI} \
-ApiKey={API_KEY} \
-Logging:Console:LogLevel:Default=Information
-```
-
-For more information about logging, see [Configure Speech containers](speech-container-configuration.md#logging-settings) and [usage records](../containers/disconnected-containers.md#usage-records) in the Azure Cognitive Services documentation.
-
-## Microsoft diagnostics container
-
-If you're having trouble running a Cognitive Services container, you can try using the Microsoft diagnostics container. Use this container to diagnose common errors in your deployment environment that might prevent Cognitive Services containers from functioning as expected.
-
-To get the container, use the following `docker pull` command:
-
-```bash
-docker pull mcr.microsoft.com/azure-cognitive-services/diagnostic
-```
-
-Then run the container. Replace `{ENDPOINT_URI}` with your endpoint, and replace `{API_KEY}` with your key to your resource:
-
-```bash
-docker run --rm mcr.microsoft.com/azure-cognitive-services/diagnostic \
-eula=accept \
-Billing={ENDPOINT_URI} \
-ApiKey={API_KEY}
-```
-
-The container will test for network connectivity to the billing endpoint.
-
-## Run disconnected containers
-
-To run disconnected containers (not connected to the internet), you must submit [this request form](https://aka.ms/csdisconnectedcontainers) and wait for approval. For more information about applying and purchasing a commitment plan to use containers in disconnected environments, see [Use containers in disconnected environments](../containers/disconnected-containers.md) in the Azure Cognitive Services documentation.
--
-## Next steps
-
-* Review [configure containers](speech-container-configuration.md) for configuration settings.
-* Learn how to [use Speech service containers with Kubernetes and Helm](speech-container-howto-on-premises.md).
-* Deploy and run containers on [Azure Container Instance](../containers/azure-container-instance-recipe.md)
-* Use more [Azure Cognitive Services containers](../cognitive-services-container-support.md).
--
cognitive-services Speech Container Lid https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/speech-container-lid.md
- Title: Language identification containers - Speech service
-description: Install and run language identification containers with Docker to perform speech recognition, transcription, generation, and more on-premises.
- Previously updated : 04/18/2023
-zone_pivot_groups: programming-languages-speech-sdk-cli
-keywords: on-premises, Docker, container
--
-# Language identification containers with Docker
-
-The Speech language identification container detects the language spoken in audio files. It works with real-time speech or batch audio recordings and provides intermediate results. In this article, you'll learn how to download, install, and run a language identification container.
-
-> [!NOTE]
-> You must [request and get approval](speech-container-overview.md#request-approval-to-run-the-container) to use a Speech container.
->
-> The Speech language identification container is available in public preview. Containers in preview are still under development and don't meet Microsoft's stability and support requirements.
-
-For more information about prerequisites, validating that a container is running, running multiple containers on the same host, and running disconnected containers, see [Install and run Speech containers with Docker](speech-container-howto.md).
-
-> [!TIP]
-> To get the most useful results, use the Speech language identification container with the [speech to text](speech-container-stt.md) or [custom speech to text](speech-container-cstt.md) containers.
-
-## Container images
-
-The Speech language identification container image for all supported versions and locales can be found on the [Microsoft Container Registry (MCR)](https://mcr.microsoft.com/product/azure-cognitive-services/speechservices/language-detection/tags) syndicate. It resides within the `azure-cognitive-services/speechservices/` repository and is named `language-detection`.
--
-The fully qualified container image name is `mcr.microsoft.com/azure-cognitive-services/speechservices/language-detection`. Either append a specific version or append `:latest` to get the most recent version.
-
-| Version | Path |
-|--||
-| Latest | `mcr.microsoft.com/azure-cognitive-services/speechservices/language-detection:latest` |
-| 1.11.0 | `mcr.microsoft.com/azure-cognitive-services/speechservices/language-detection:1.11.0-amd64-preview` |
-
-All tags, except for `latest`, are in the following format and are case sensitive:
-
-```
-<major>.<minor>.<patch>-<platform>-<prerelease>
-```
-
-The tags are also available [in JSON format](https://mcr.microsoft.com/v2/azure-cognitive-services/speechservices/language-detection/tags/list) for your convenience. The body includes the container path and list of tags. The tags aren't sorted by version, but `"latest"` is always included at the end of the list as shown in this snippet:
-
-```json
-{
- "name": "azure-cognitive-services/speechservices/language-detection",
- "tags": [
- "1.1.0-amd64-preview",
- "1.11.0-amd64-preview",
- "1.3.0-amd64-preview",
- "1.5.0-amd64-preview",
- <--redacted for brevity-->
- "1.8.0-amd64-preview",
- "latest"
- ]
-}
-```
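
For example, you can retrieve that JSON directly with `curl`:

```bash
# Fetch the full tag list for the language-detection image from MCR
curl -s https://mcr.microsoft.com/v2/azure-cognitive-services/speechservices/language-detection/tags/list
```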
-
-## Get the container image with docker pull
-
-You need the [prerequisites](speech-container-howto.md#prerequisites) including required hardware. Please also see the [recommended allocation of resources](speech-container-howto.md#container-requirements-and-recommendations) for each Speech container.
-
-Use the [docker pull](https://docs.docker.com/engine/reference/commandline/pull/) command to download a container image from Microsoft Container Registry:
-
-```bash
-docker pull mcr.microsoft.com/azure-cognitive-services/speechservices/language-detection:latest
-```
--
-## Run the container with docker run
-
-Use the [docker run](https://docs.docker.com/engine/reference/commandline/run/) command to run the container.
-
-The following table represents the various `docker run` parameters and their corresponding descriptions:
-
-| Parameter | Description |
-|||
-| `{ENDPOINT_URI}` | The endpoint is required for metering and billing. For more information, see [billing arguments](speech-container-howto.md#billing-arguments). |
-| `{API_KEY}` | The API key is required. For more information, see [billing arguments](speech-container-howto.md#billing-arguments). |
-
-When you run the Speech language identification container, configure the port, memory, and CPU according to the language identification container [requirements and recommendations](speech-container-howto.md#container-requirements-and-recommendations).
-
-Here's an example `docker run` command with placeholder values. You must specify the `ENDPOINT_URI` and `API_KEY` values:
-
-```bash
-docker run --rm -it -p 5000:5003 --memory 1g --cpus 1 \
-mcr.microsoft.com/azure-cognitive-services/speechservices/language-detection \
-Eula=accept \
-Billing={ENDPOINT_URI} \
-ApiKey={API_KEY}
-```
-
-This command:
-
-* Runs a Speech language identification container from the container image.
-* Allocates 1 CPU core and 1 GB of memory.
-* Exposes TCP port 5000 on the host (mapped to container port 5003) and allocates a pseudo-TTY for the container.
-* Automatically removes the container after it exits. The container image is still available on the host computer.
-
-For more information about `docker run` with Speech containers, see [Install and run Speech containers with Docker](speech-container-howto.md#run-the-container).
-
-## Run with the speech to text container
-
-If you want to run the language identification container with the [speech to text](speech-container-stt.md) container, you can use this [docker image](https://hub.docker.com/r/antsu/on-prem-client). After both containers have been started, use this `docker run` command to execute `speech-to-text-with-languagedetection-client`:
-
-```bash
-docker run --rm -v ${HOME}:/root -ti antsu/on-prem-client:latest ./speech-to-text-with-languagedetection-client ./audio/LanguageDetection_en-us.wav --host localhost --lport 5003 --sport 5000
-```
-
-Increasing the number of concurrent calls can affect reliability and latency. For language identification, we recommend a maximum of four concurrent calls using 1 CPU with 1 GB of memory. For hosts with 2 CPUs and 2 GB of memory, we recommend a maximum of six concurrent calls.
-
-## Use the container
--
-[Try language identification](language-identification.md) using host authentication instead of key and region. When you run language ID in a container, use the `SourceLanguageRecognizer` object instead of `SpeechRecognizer` or `TranslationRecognizer`.
-
-## Next steps
-
-* See the [Speech containers overview](speech-container-overview.md)
-* Review [configure containers](speech-container-configuration.md) for configuration settings
-* Use more [Azure Cognitive Services containers](../cognitive-services-container-support.md)
cognitive-services Speech Container Ntts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/speech-container-ntts.md
- Title: Neural text to speech containers - Speech service
-description: Install and run neural text to speech containers with Docker to perform speech synthesis and more on-premises.
- Previously updated : 04/18/2023
-zone_pivot_groups: programming-languages-speech-sdk-cli
-keywords: on-premises, Docker, container
--
-# Text to speech containers with Docker
-
-The neural text to speech container converts text to natural-sounding speech by using deep neural network technology, which allows for more natural synthesized speech. In this article, you'll learn how to download, install, and run a text to speech container.
-
-> [!NOTE]
-> You must [request and get approval](speech-container-overview.md#request-approval-to-run-the-container) to use a Speech container.
-
-For more information about prerequisites, validating that a container is running, running multiple containers on the same host, and running disconnected containers, see [Install and run Speech containers with Docker](speech-container-howto.md).
-
-## Container images
-
-The neural text to speech container image for all supported versions and locales can be found on the [Microsoft Container Registry (MCR)](https://mcr.microsoft.com/product/azure-cognitive-services/speechservices/neural-text-to-speech/tags) syndicate. It resides within the `azure-cognitive-services/speechservices/` repository and is named `neural-text-to-speech`.
--
-The fully qualified container image name is `mcr.microsoft.com/azure-cognitive-services/speechservices/neural-text-to-speech`. Either append a specific version or append `:latest` to get the most recent version.
-
-| Version | Path |
-|--||
-| Latest | `mcr.microsoft.com/azure-cognitive-services/speechservices/neural-text-to-speech:latest`<br/><br/>The `latest` tag pulls the `en-US` locale and `en-us-arianeural` voice. |
-| 2.12.0 | `mcr.microsoft.com/azure-cognitive-services/speechservices/neural-text-to-speech:2.12.0-amd64-mr-in` |
-
-All tags, except for `latest`, are in the following format and are case sensitive:
-
-```
-<major>.<minor>.<patch>-<platform>-<voice>-<preview>
-```
-
-The tags are also available [in JSON format](https://mcr.microsoft.com/v2/azure-cognitive-services/speechservices/neural-text-to-speech/tags/list) for your convenience. The body includes the container path and list of tags. The tags aren't sorted by version, but `"latest"` is always included at the end of the list as shown in this snippet:
-
-```json
-{
- "name": "azure-cognitive-services/speechservices/neural-text-to-speech",
- "tags": [
- "1.10.0-amd64-cs-cz-antoninneural",
- "1.10.0-amd64-cs-cz-vlastaneural",
- "1.10.0-amd64-de-de-conradneural",
- "1.10.0-amd64-de-de-katjaneural",
- "1.10.0-amd64-en-au-natashaneural",
- <--redacted for brevity-->
- "latest"
- ]
-}
-```
-
-> [!IMPORTANT]
-> We retired the standard speech synthesis voices and standard [text to speech](https://mcr.microsoft.com/product/azure-cognitive-services/speechservices/text-to-speech/tags) container on August 31, 2021. You should use neural voices with the [neural-text to speech](https://mcr.microsoft.com/product/azure-cognitive-services/speechservices/neural-text-to-speech/tags) container instead. For more information on updating your application, see [Migrate from standard voice to prebuilt neural voice](./how-to-migrate-to-prebuilt-neural-voice.md).
-
-## Get the container image with docker pull
-
-You need the [prerequisites](speech-container-howto.md#prerequisites) including required hardware. Please also see the [recommended allocation of resources](speech-container-howto.md#container-requirements-and-recommendations) for each Speech container.
-
-Use the [docker pull](https://docs.docker.com/engine/reference/commandline/pull/) command to download a container image from Microsoft Container Registry:
-
-```bash
-docker pull mcr.microsoft.com/azure-cognitive-services/speechservices/neural-text-to-speech:latest
-```
-
-> [!IMPORTANT]
-> The `latest` tag pulls the `en-US` locale and `en-us-arianeural` voice. For additional locales and voices, see [text to speech container images](#container-images).
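
For example, to pull the Marathi (`mr-IN`) voice image listed in the version table above instead of the default `en-US` voice, reference its full tag:

```bash
docker pull mcr.microsoft.com/azure-cognitive-services/speechservices/neural-text-to-speech:2.12.0-amd64-mr-in
```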
-
-## Run the container with docker run
-
-Use the [docker run](https://docs.docker.com/engine/reference/commandline/run/) command to run the container.
-
-# [Neural text to speech](#tab/container)
-
-The following table represents the various `docker run` parameters and their corresponding descriptions:
-
-| Parameter | Description |
-|||
-| `{ENDPOINT_URI}` | The endpoint is required for metering and billing. For more information, see [billing arguments](speech-container-howto.md#billing-arguments). |
-| `{API_KEY}` | The API key is required. For more information, see [billing arguments](speech-container-howto.md#billing-arguments). |
-
-When you run the text to speech container, configure the port, memory, and CPU according to the text to speech container [requirements and recommendations](speech-container-howto.md#container-requirements-and-recommendations).
-
-Here's an example `docker run` command with placeholder values. You must specify the `ENDPOINT_URI` and `API_KEY` values:
-
-```bash
-docker run --rm -it -p 5000:5000 --memory 12g --cpus 6 \
-mcr.microsoft.com/azure-cognitive-services/speechservices/neural-text-to-speech \
-Eula=accept \
-Billing={ENDPOINT_URI} \
-ApiKey={API_KEY}
-```
-
-This command:
-
-* Runs a neural text to speech container from the container image.
-* Allocates 6 CPU cores and 12 GB of memory.
-* Exposes TCP port 5000 and allocates a pseudo-TTY for the container.
-* Automatically removes the container after it exits. The container image is still available on the host computer.
-
-# [Disconnected neural text to speech](#tab/disconnected)
-
-To run disconnected containers (not connected to the internet), you must submit [this request form](https://aka.ms/csdisconnectedcontainers) and wait for approval. For more information about applying and purchasing a commitment plan to use containers in disconnected environments, see [Use containers in disconnected environments](../containers/disconnected-containers.md) in the Azure Cognitive Services documentation.
-
-If you have been approved to run the container disconnected from the internet, the following example shows the formatting of the `docker run` command to use, with placeholder values. Replace these placeholder values with your own values.
-
-The `DownloadLicense=True` parameter in your `docker run` command will download a license file that will enable your Docker container to run when it isn't connected to the internet. It also contains an expiration date, after which the license file will be invalid to run the container. You can only use a license file with the appropriate container that you've been approved for. For example, you can't use a license file for a `speech-to-text` container with a `neural-text-to-speech` container.
-
-| Placeholder | Description |
-|-|-|
-| `{IMAGE}` | The container image you want to use.<br/><br/>For example: `mcr.microsoft.com/azure-cognitive-services/neural-text-to-speech:latest` |
-| `{LICENSE_MOUNT}` | The path where the license will be downloaded, and mounted.<br/><br/>For example: `/host/license:/path/to/license/directory` |
-| `{ENDPOINT_URI}` | The endpoint for authenticating your service request. You can find it on your resource's **Key and endpoint** page, on the Azure portal.<br/><br/>For example: `https://<your-resource-name>.cognitiveservices.azure.com` |
-| `{API_KEY}` | The key for your Speech resource. You can find it on your resource's **Key and endpoint** page, on the Azure portal. |
-| `{CONTAINER_LICENSE_DIRECTORY}` | Location of the license folder on the container's local filesystem.<br/><br/>For example: `/path/to/license/directory` |
-
-```bash
-docker run --rm -it -p 5000:5000 \
--v {LICENSE_MOUNT} \
-{IMAGE} \
-eula=accept \
-billing={ENDPOINT_URI} \
-apikey={API_KEY} \
-DownloadLicense=True \
-Mounts:License={CONTAINER_LICENSE_DIRECTORY}
-```
-
-Once the license file has been downloaded, you can run the container in a disconnected environment. The following example shows the formatting of the `docker run` command you'll use, with placeholder values. Replace these placeholder values with your own values.
-
-Wherever the container is run, the license file must be mounted to the container and the location of the license folder on the container's local filesystem must be specified with `Mounts:License=`. An output mount must also be specified so that billing usage records can be written.
-
-| Placeholder | Description |
-|-|-|
-| `{IMAGE}` | The container image you want to use.<br/><br/>For example: `mcr.microsoft.com/azure-cognitive-services/neural-text-to-speech:latest` |
-| `{MEMORY_SIZE}` | The appropriate size of memory to allocate for your container.<br/><br/>For example: `4g` |
-| `{NUMBER_CPUS}` | The appropriate number of CPUs to allocate for your container.<br/><br/>For example: `4` |
-| `{LICENSE_MOUNT}` | The path where the license will be located and mounted.<br/><br/>For example: `/host/license:/path/to/license/directory` |
-| `{OUTPUT_PATH}` | The output path for logging.<br/><br/>For example: `/host/output:/path/to/output/directory`<br/><br/>For more information, see [usage records](../containers/disconnected-containers.md#usage-records) in the Azure Cognitive Services documentation. |
-| `{CONTAINER_LICENSE_DIRECTORY}` | Location of the license folder on the container's local filesystem.<br/><br/>For example: `/path/to/license/directory` |
-| `{CONTAINER_OUTPUT_DIRECTORY}` | Location of the output folder on the container's local filesystem.<br/><br/>For example: `/path/to/output/directory` |
-
-```bash
-docker run --rm -it -p 5000:5000 --memory {MEMORY_SIZE} --cpus {NUMBER_CPUS} \
--v {LICENSE_MOUNT} \
--v {OUTPUT_PATH} \
-{IMAGE} \
-eula=accept \
-Mounts:License={CONTAINER_LICENSE_DIRECTORY} \
-Mounts:Output={CONTAINER_OUTPUT_DIRECTORY}
-```
-
-Speech containers provide default directories for writing the license file and billing log at runtime: `/license` and `/output`, respectively.
-
-When you mount these directories to the container with the `docker run -v` command, make sure the ownership of the local machine directories is set to `nonroot:nonroot` before you run the container.
-
-Below is a sample command to set file/directory ownership.
-
-```bash
-sudo chown -R nonroot:nonroot <YOUR_LOCAL_MACHINE_PATH_1> <YOUR_LOCAL_MACHINE_PATH_2> ...
-```
---
-For more information about `docker run` with Speech containers, see [Install and run Speech containers with Docker](speech-container-howto.md#run-the-container).
-
-## Use the container
--
-[Try the text to speech quickstart](get-started-text-to-speech.md) using host authentication instead of key and region.
-
-### SSML voice element
-
-When you construct a neural text to speech HTTP POST, the [SSML](speech-synthesis-markup.md) message requires a `voice` element with a `name` attribute. The [locale of the voice](language-support.md?tabs=tts) must correspond to the locale of the container model.
-
-For example, a model that was downloaded via the `latest` tag (defaults to "en-US") would have a voice name of `en-US-AriaNeural`.
-
-```xml
-<speak version="1.0" xmlns="http://www.w3.org/2001/10/synthesis" xml:lang="en-US">
- <voice name="en-US-AriaNeural">
- This is the text that is spoken.
- </voice>
-</speak>
-```
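
As an illustrative sketch only, you could post that SSML to the container with `curl`. This assumes the container listens on `localhost:5000` and exposes the same `/cognitiveservices/v1` path and headers as the cloud text to speech REST API; adjust the path and output format if your container differs:

```bash
# Hypothetical request: synthesize the SSML above and save the audio to output.wav
curl -s http://localhost:5000/cognitiveservices/v1 \
  -H 'Content-Type: application/ssml+xml' \
  -H 'X-Microsoft-OutputFormat: riff-24khz-16bit-mono-pcm' \
  -d '<speak version="1.0" xmlns="http://www.w3.org/2001/10/synthesis" xml:lang="en-US"><voice name="en-US-AriaNeural">This is the text that is spoken.</voice></speak>' \
  -o output.wav
```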
-
-## Next steps
-
-* See the [Speech containers overview](speech-container-overview.md)
-* Review [configure containers](speech-container-configuration.md) for configuration settings
-* Use more [Azure Cognitive Services containers](../cognitive-services-container-support.md)
cognitive-services Speech Container Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/speech-container-overview.md
- Title: Speech containers overview - Speech service
-description: Use the Docker containers for the Speech service to perform speech recognition, transcription, generation, and more on-premises.
- Previously updated : 04/18/2023
-keywords: on-premises, Docker, container
--
-# Speech containers overview
-
-By using containers, you can use a subset of the Speech service features in your own environment. With Speech containers, you can build a speech application architecture that's optimized for both robust cloud capabilities and edge locality. Containers are great for specific security and data governance requirements.
-
-> [!NOTE]
-> You must [request and get approval](#request-approval-to-run-the-container) to use a Speech container.
-
-## Available Speech containers
-
-The following table lists the Speech containers available in the Microsoft Container Registry (MCR). The table also lists the features supported by each container and the latest version of the container.
-
-| Container | Features | Supported versions and locales |
-|--|--|--|
-| [Speech to text](speech-container-stt.md) | Transcribes continuous real-time speech or batch audio recordings with intermediate results. | Latest: 4.0.0<br/><br/>For all supported versions and locales, see the [Microsoft Container Registry (MCR)](https://mcr.microsoft.com/product/azure-cognitive-services/speechservices/speech-to-text/tags) and [JSON tags](https://mcr.microsoft.com/v2/azure-cognitive-services/speechservices/speech-to-text/tags/list).|
-| [Custom speech to text](speech-container-cstt.md) | Using a custom model from the [Custom Speech portal](https://speech.microsoft.com/customspeech), transcribes continuous real-time speech or batch audio recordings into text with intermediate results. | Latest: 4.0.0<br/><br/>For all supported versions and locales, see the [Microsoft Container Registry (MCR)](https://mcr.microsoft.com/product/azure-cognitive-services/speechservices/custom-speech-to-text/tags) and [JSON tags](https://mcr.microsoft.com/v2/azure-cognitive-services/speechservices/speech-to-text/tags/list). |
-| [Speech language identification](speech-container-lid.md)<sup>1, 2</sup> | Detects the language spoken in audio files. | Latest: 1.11.0<br/><br/>For all supported versions and locales, see the [Microsoft Container Registry (MCR)](https://mcr.microsoft.com/product/azure-cognitive-services/speechservices/language-detection/tags) and [JSON tags](https://mcr.microsoft.com/v2/azure-cognitive-services/speechservices/language-detection/tags/list). |
-| [Neural text to speech](speech-container-ntts.md) | Converts text to natural-sounding speech by using deep neural network technology, which allows for more natural synthesized speech. | Latest: 2.14.0<br/><br/>For all supported versions and locales, see the [Microsoft Container Registry (MCR)](https://mcr.microsoft.com/product/azure-cognitive-services/speechservices/neural-text-to-speech/tags) and [JSON tags](https://mcr.microsoft.com/v2/azure-cognitive-services/speechservices/neural-text-to-speech/tags/list). |
-
-<sup>1</sup> The container is available in public preview. Containers in preview are still under development and don't meet Microsoft's stability and support requirements.
-<sup>2</sup> Not available as a disconnected container.
-
-## Request approval to run the container
-
-To use the Speech containers, you must submit one of the following request forms and wait for approval:
-- [Connected containers request form](https://aka.ms/csgate) if you want to run containers regularly, in environments that are only connected to the internet.
-- [Disconnected Container request form](https://aka.ms/csdisconnectedcontainers) if you want to run containers in environments that can be disconnected from the internet. For more information about applying and purchasing a commitment plan to use containers in disconnected environments, see [Use containers in disconnected environments](../containers/disconnected-containers.md) in the Azure Cognitive Services documentation.
-
-The form requests information about you, your company, and the user scenario for which you'll use the container.
-
-* On the form, you must use an email address associated with an Azure subscription ID.
-* The Azure resource you use to run the container must have been created with the approved Azure subscription ID.
-* Check your email for updates on the status of your application from Microsoft.
-
-After you submit the form, the Azure Cognitive Services team reviews it and emails you with a decision within 10 business days.
-
-> [!IMPORTANT]
-> To use the Speech containers, your request must be approved.
-
-While you're waiting for approval, you can [set up the prerequisites](speech-container-howto.md#prerequisites) on your host computer. You can also download the container from the Microsoft Container Registry (MCR). You can run the container after your request is approved.
-
-## Billing
-
-The Speech containers send billing information to Azure by using a Speech resource on your Azure account.
-
-> [!NOTE]
-> Connected and disconnected container pricing and commitment tiers vary. For more information, see [Speech service pricing](https://azure.microsoft.com/pricing/details/cognitive-services/speech-services/).
-
-Speech containers aren't licensed to run without being connected to Azure for metering. You must configure your container to communicate billing information with the metering service at all times. For more information, see [billing arguments](speech-container-howto.md#billing-arguments).
-
-## Container recipes and other container services
-
-You can use container recipes to create containers that can be reused. Containers can be built with some or all configuration settings baked in so that they aren't needed when the container is started. For container recipes, see the following Azure Cognitive Services articles:
-- [Create containers for reuse](../containers/container-reuse-recipe.md)
-- [Deploy and run container on Azure Container Instance](../containers/azure-container-instance-recipe.md)
-- [Deploy a language detection container to Azure Kubernetes Service](../containers/azure-kubernetes-recipe.md)
-- [Use Docker Compose to deploy multiple containers](../containers/docker-compose-recipe.md)
-
-For information about other container services, see the following Azure Cognitive Services articles:
-- [Tutorial: Create a container image for deployment to Azure Container Instances](../../container-instances/container-instances-tutorial-prepare-app.md)
-- [Quickstart: Create a private container registry using the Azure CLI](../../container-registry/container-registry-get-started-azure-cli.md)
-- [Tutorial: Prepare an application for Azure Kubernetes Service (AKS)](../../aks/tutorial-kubernetes-prepare-app.md)
-
-## Next steps
-
-* [Install and run Speech containers](speech-container-howto.md)
--
cognitive-services Speech Container Stt https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/speech-container-stt.md
- Title: Speech to text containers - Speech service-
-description: Install and run speech to text containers with Docker to perform speech recognition, transcription, generation, and more on-premises.
------- Previously updated : 04/18/2023-
-zone_pivot_groups: programming-languages-speech-sdk-cli
-keywords: on-premises, Docker, container
--
-# Speech to text containers with Docker
-
-The Speech to text container transcribes real-time speech or batch audio recordings with intermediate results. In this article, you'll learn how to download, install, and run a Speech to text container.
-
-> [!NOTE]
-> You must [request and get approval](speech-container-overview.md#request-approval-to-run-the-container) to use a Speech container.
-
-For more information about prerequisites, validating that a container is running, running multiple containers on the same host, and running disconnected containers, see [Install and run Speech containers with Docker](speech-container-howto.md).
-
-## Container images
-
-The Speech to text container image for all supported versions and locales can be found on the [Microsoft Container Registry (MCR)](https://mcr.microsoft.com/product/azure-cognitive-services/speechservices/speech-to-text/tags) syndicate. It resides within the `azure-cognitive-services/speechservices/` repository and is named `speech-to-text`.
--
-The fully qualified container image name is `mcr.microsoft.com/azure-cognitive-services/speechservices/speech-to-text`. Either append a specific version or append `:latest` to get the most recent version.
-
-| Version | Path |
-|--||
-| Latest | `mcr.microsoft.com/azure-cognitive-services/speechservices/speech-to-text:latest`<br/><br/>The `latest` tag pulls the latest image for the `en-US` locale. |
-| 3.12.0 | `mcr.microsoft.com/azure-cognitive-services/speechservices/speech-to-text:3.12.0-amd64-mr-in` |
-
-All tags, except for `latest`, are in the following format and are case sensitive:
-
-```
-<major>.<minor>.<patch>-<platform>-<locale>-<prerelease>
-```
-
-The tags are also available [in JSON format](https://mcr.microsoft.com/v2/azure-cognitive-services/speechservices/speech-to-text/tags/list) for your convenience. The body includes the container path and list of tags. The tags aren't sorted by version, but `"latest"` is always included at the end of the list as shown in this snippet:
-
-```json
-{
- "name": "azure-cognitive-services/speechservices/speech-to-text",
- "tags": [
- "2.10.0-amd64-ar-ae",
- "2.10.0-amd64-ar-bh",
- "2.10.0-amd64-ar-eg",
- "2.10.0-amd64-ar-iq",
- "2.10.0-amd64-ar-jo",
- <--redacted for brevity-->
- "latest"
- ]
-}
-```
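-
-If you prefer the command line, you can retrieve the same list with `curl`; a minimal sketch using the JSON tags URL linked above:
-
-```bash
-# List all available speech-to-text image tags from the Microsoft Container Registry.
-curl -s https://mcr.microsoft.com/v2/azure-cognitive-services/speechservices/speech-to-text/tags/list
-```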
-
-## Get the container image with docker pull
-
-You need the [prerequisites](speech-container-howto.md#prerequisites) including required hardware. Please also see the [recommended allocation of resources](speech-container-howto.md#container-requirements-and-recommendations) for each Speech container.
-
-Use the [docker pull](https://docs.docker.com/engine/reference/commandline/pull/) command to download a container image from Microsoft Container Registry:
-
-```bash
-docker pull mcr.microsoft.com/azure-cognitive-services/speechservices/speech-to-text:latest
-```
-
-> [!IMPORTANT]
-> The `latest` tag pulls the latest image for the `en-US` locale. For additional versions and locales, see [speech to text container images](#container-images).
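-
-For example, a minimal sketch that pulls a specific version and locale instead of `latest`, using the `3.12.0-amd64-mr-in` tag from the table above:
-
-```bash
-# Pull a specific version and locale rather than the latest en-US image.
-docker pull mcr.microsoft.com/azure-cognitive-services/speechservices/speech-to-text:3.12.0-amd64-mr-in
-```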
-
-## Run the container with docker run
-
-Use the [docker run](https://docs.docker.com/engine/reference/commandline/run/) command to run the container.
-
-# [Speech to text](#tab/container)
-
-The following table represents the various `docker run` parameters and their corresponding descriptions:
-
-| Parameter | Description |
-|||
-| `{ENDPOINT_URI}` | The endpoint is required for metering and billing. For more information, see [billing arguments](speech-container-howto.md#billing-arguments). |
-| `{API_KEY}` | The API key is required. For more information, see [billing arguments](speech-container-howto.md#billing-arguments). |
-
-When you run the speech to text container, configure the port, memory, and CPU according to the speech to text container [requirements and recommendations](speech-container-howto.md#container-requirements-and-recommendations).
-
-Here's an example `docker run` command with placeholder values. You must specify the `ENDPOINT_URI` and `API_KEY` values:
-
-```bash
-docker run --rm -it -p 5000:5000 --memory 8g --cpus 4 \
-mcr.microsoft.com/azure-cognitive-services/speechservices/speech-to-text \
-Eula=accept \
-Billing={ENDPOINT_URI} \
-ApiKey={API_KEY}
-```
-
-This command:
-* Runs a `speech-to-text` container from the container image.
-* Allocates 4 CPU cores and 8 GB of memory.
-* Exposes TCP port 5000 and allocates a pseudo-TTY for the container.
-* Automatically removes the container after it exits. The container image is still available on the host computer.
-
-# [Disconnected speech to text](#tab/disconnected)
-
-To run disconnected containers (not connected to the internet), you must submit [this request form](https://aka.ms/csdisconnectedcontainers) and wait for approval. For more information about applying and purchasing a commitment plan to use containers in disconnected environments, see [Use containers in disconnected environments](../containers/disconnected-containers.md) in the Azure Cognitive Services documentation.
-
-If you have been approved to run the container disconnected from the internet, the following example shows the formatting of the `docker run` command to use, with placeholder values. Replace these placeholder values with your own values.
-
-The `DownloadLicense=True` parameter in your `docker run` command downloads a license file that enables your Docker container to run when it isn't connected to the internet. The license file also contains an expiration date, after which it's no longer valid for running the container. You can only use a license file with the appropriate container that you've been approved for. For example, you can't use a license file for a `speech-to-text` container with a `neural-text-to-speech` container.
-
-| Placeholder | Description |
-|-|-|
-| `{IMAGE}` | The container image you want to use.<br/><br/>For example: `mcr.microsoft.com/azure-cognitive-services/speechservices/speech-to-text:latest` |
-| `{LICENSE_MOUNT}` | The path where the license will be downloaded, and mounted.<br/><br/>For example: `/host/license:/path/to/license/directory` |
-| `{ENDPOINT_URI}` | The endpoint for authenticating your service request. You can find it on your resource's **Key and endpoint** page, on the Azure portal.<br/><br/>For example: `https://<your-resource-name>.cognitiveservices.azure.com` |
-| `{API_KEY}` | The key for your Speech resource. You can find it on your resource's **Key and endpoint** page, on the Azure portal. |
-| `{CONTAINER_LICENSE_DIRECTORY}` | Location of the license folder on the container's local filesystem.<br/><br/>For example: `/path/to/license/directory` |
-
-```bash
-docker run --rm -it -p 5000:5000 \
--v {LICENSE_MOUNT} \
-{IMAGE} \
-eula=accept \
-billing={ENDPOINT_URI} \
-apikey={API_KEY} \
-DownloadLicense=True \
-Mounts:License={CONTAINER_LICENSE_DIRECTORY}
-```
-
-Once the license file has been downloaded, you can run the container in a disconnected environment. The following example shows the formatting of the `docker run` command you'll use, with placeholder values. Replace these placeholder values with your own values.
-
-Wherever the container is run, the license file must be mounted to the container and the location of the license folder on the container's local filesystem must be specified with `Mounts:License=`. An output mount must also be specified so that billing usage records can be written.
-
-Placeholder | Value | Format or example |
-|-|-||
-| `{IMAGE}` | The container image you want to use.<br/><br/>For example: `mcr.microsoft.com/azure-cognitive-services/speechservices/speech-to-text:latest` |
-| `{MEMORY_SIZE}` | The appropriate size of memory to allocate for your container.<br/><br/>For example: `4g` |
-| `{NUMBER_CPUS}` | The appropriate number of CPUs to allocate for your container.<br/><br/>For example: `4` |
-| `{LICENSE_MOUNT}` | The path where the license will be located and mounted.<br/><br/>For example: `/host/license:/path/to/license/directory` |
-| `{OUTPUT_PATH}` | The output path for logging.<br/><br/>For example: `/host/output:/path/to/output/directory`<br/><br/>For more information, see [usage records](../containers/disconnected-containers.md#usage-records) in the Azure Cognitive Services documentation. |
-| `{CONTAINER_LICENSE_DIRECTORY}` | Location of the license folder on the container's local filesystem.<br/><br/>For example: `/path/to/license/directory` |
-| `{CONTAINER_OUTPUT_DIRECTORY}` | Location of the output folder on the container's local filesystem.<br/><br/>For example: `/path/to/output/directory` |
-
-```bash
-docker run --rm -it -p 5000:5000 --memory {MEMORY_SIZE} --cpus {NUMBER_CPUS} \
--v {LICENSE_MOUNT} \
--v {OUTPUT_PATH} \
-{IMAGE} \
-eula=accept \
-Mounts:License={CONTAINER_LICENSE_DIRECTORY} \
-Mounts:Output={CONTAINER_OUTPUT_DIRECTORY}
-```
-
-Speech containers provide a default directory for writing the license file and billing log at runtime. The default directories are `/license` and `/output`, respectively.
-
-When you're mounting these directories to the container with the `docker run -v` command, make sure that ownership of the local machine directories is set to `nonroot:nonroot` (user:group) before running the container.
-
-Below is a sample command to set file/directory ownership.
-
-```bash
-sudo chown -R nonroot:nonroot <YOUR_LOCAL_MACHINE_PATH_1> <YOUR_LOCAL_MACHINE_PATH_2> ...
-```
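-
-As a concrete illustration of the default `/license` and `/output` directories mentioned above, here's a minimal sketch of a disconnected `docker run` that mounts host folders (placeholder paths) onto those defaults; adjust the memory, CPU, and image tag for your scenario:
-
-```bash
-# Mount host folders onto the container's default license and output directories.
-docker run --rm -it -p 5000:5000 --memory 8g --cpus 4 \
--v /host/license:/license \
--v /host/output:/output \
-mcr.microsoft.com/azure-cognitive-services/speechservices/speech-to-text:latest \
-eula=accept \
-Mounts:License=/license \
-Mounts:Output=/output
-```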
---
-For more information about `docker run` with Speech containers, see [Install and run Speech containers with Docker](speech-container-howto.md#run-the-container).
--
-## Use the container
--
-[Try the speech to text quickstart](get-started-speech-to-text.md) using host authentication instead of key and region.
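-
-Before wiring up an application, you can check that the container is reachable. Here's a minimal sketch, assuming the container was started with the port mapping shown earlier and that it exposes the standard Cognitive Services container `/status` endpoint (an assumption to verify against your container version):
-
-```bash
-# Expect HTTP 200 when the container is running and the billing key it was started with is valid.
-curl -i http://localhost:5000/status
-```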
-
-## Next steps
-
-* See the [Speech containers overview](speech-container-overview.md)
-* Review [configure containers](speech-container-configuration.md) for configuration settings
-* Use more [Azure Cognitive Services containers](../cognitive-services-container-support.md)
cognitive-services Speech Devices https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/speech-devices.md
- Title: Speech devices overview - Speech service-
-description: Get started with the Speech devices. The Speech service works with a wide variety of devices and audio sources.
------ Previously updated : 12/27/2021---
-# What are Speech devices?
-
-The [Speech service](overview.md) works with a wide variety of devices and audio sources. You can use the default audio processing available on a device. Otherwise, the Speech SDK has an option for you to use our advanced audio processing algorithms that are designed to work well with the [Speech service](overview.md). It provides accurate far-field [speech recognition](speech-to-text.md) via noise suppression, echo cancellation, beamforming, and dereverberation.
-
-## Audio processing
-Audio processing refers to enhancements applied to a stream of audio to improve the audio quality. Examples of common enhancements include automatic gain control (AGC), noise suppression, and acoustic echo cancellation (AEC). The Speech SDK integrates [Microsoft Audio Stack (MAS)](audio-processing-overview.md), allowing any application or product to use its audio processing capabilities on input audio.
-
-## Microphone array recommendations
-The Speech SDK works best with a microphone array that has been designed according to our recommended guidelines. For details, see [Microphone array recommendations](speech-sdk-microphone.md).
-
-## Device development kits
-The Speech SDK is designed to work with purpose-built development kits, and varying microphone array configurations. For example, you can use one of these Azure development kits.
-
-- [Azure Percept DK](../../azure-percept/overview-azure-percept-dk.md) contains a preconfigured audio processor and a four-microphone linear array. You can use voice commands, keyword spotting, and far field speech with the help of Azure Cognitive Services.
-- [Azure Kinect DK](../../kinect-dk/about-azure-kinect-dk.md) is a spatial computing developer kit with advanced AI sensors that provide sophisticated computer vision and speech models. As an all-in-one small device with multiple modes, it contains a depth sensor, spatial microphone array with a video camera, and orientation sensor.
-
-## Next steps
-
-* [Audio processing concepts](audio-processing-overview.md)
cognitive-services Speech Encryption Of Data At Rest https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/speech-encryption-of-data-at-rest.md
- Title: Speech service encryption of data at rest-
-description: Microsoft offers Microsoft-managed encryption keys, and also lets you manage your Cognitive Services subscriptions with your own keys, called customer-managed keys (CMK). This article covers data encryption at rest for Speech service.
----- Previously updated : 07/14/2021-
-#Customer intent: As a user of the Translator service, I want to learn how encryption at rest works.
--
-# Speech service encryption of data at rest
-
-Speech service automatically encrypts your data when it's persisted to the cloud. Speech service encryption protects your data and helps you meet your organizational security and compliance commitments.
-
-## About Cognitive Services encryption
-
-Data is encrypted and decrypted using [FIPS 140-2](https://en.wikipedia.org/wiki/FIPS_140-2) compliant [256-bit AES](https://en.wikipedia.org/wiki/Advanced_Encryption_Standard) encryption. Encryption and decryption are transparent, meaning encryption and access are managed for you. Your data is secure by default and you don't need to modify your code or applications to take advantage of encryption.
-
-## About encryption key management
-
-When you use Custom Speech and Custom Voice, the Speech service may store the following data in the cloud:
-
-* Speech trace data - only if you turn on tracing for your custom endpoint
-* Uploaded training and test data
-
-By default, your data is stored in Microsoft's storage, and your subscription uses Microsoft-managed encryption keys. You also have the option to prepare your own storage account. Access to the storage account is managed by the managed identity, and the Speech service can't directly access your own data, such as speech trace data, customization training data, and custom models.
-
-For more information about Managed Identity, see [What are managed identities](../../active-directory/managed-identities-azure-resources/overview.md).
-
-When you use Custom Commands, you can manage your subscription with your own encryption keys. Customer-managed keys (CMK), also known as bring your own key (BYOK), offer greater flexibility to create, rotate, disable, and revoke access controls. You can also audit the encryption keys used to protect your data. For more information about Custom Commands and CMK, see [Custom Commands encryption of data at rest](custom-commands-encryption-of-data-at-rest.md).
-
-## Bring your own storage (BYOS) for customization and logging
-
-To request access to bring your own storage, fill out and submit the [Speech service - bring your own storage (BYOS) request form](https://aka.ms/cogsvc-cmk). Once approved, you'll need to create your own storage account to store the data required for customization and logging. When adding a storage account, the Speech service resource will enable a system assigned managed identity.
-
-> [!IMPORTANT]
-> The user account you use to create a Speech resource with BYOS functionality enabled should be assigned the [Owner role at the Azure subscription scope](../../cost-management-billing/manage/add-change-subscription-administrator.md#to-assign-a-user-as-an-administrator). Otherwise you will get an authorization error during the resource provisioning.
-
-After the system assigned managed identity is enabled, this resource will be registered with Azure Active Directory (AAD). After being registered, the managed identity will be given access to the storage account. For more about managed identities, see [What are managed identities](../../active-directory/managed-identities-azure-resources/overview.md).
-
-> [!IMPORTANT]
-> If you disable system assigned managed identities, access to the storage account will be removed. This will cause the parts of the Speech service that require access to the storage account to stop working.
-
-The Speech service doesn't currently support Customer Lockbox. However, customer data can be stored using BYOS, allowing you to achieve similar data controls to [Customer Lockbox](../../security/fundamentals/customer-lockbox-overview.md). Keep in mind that Speech service data stays and is processed in the region where the Speech resource was created. This applies to any data at rest and data in transit. When using customization features, like Custom Speech and Custom Voice, all customer data is transferred, stored, and processed in the same region where your BYOS (if used) and Speech service resource reside.
-
-> [!IMPORTANT]
-> Microsoft **does not** use customer data to improve its Speech models. Additionally, if endpoint logging is disabled and no customizations are used, then no customer data is stored.
-
-## Next steps
-
-* [Speech service - bring your own storage (BYOS) request form](https://aka.ms/cogsvc-cmk)
-* [What are managed identities](../../active-directory/managed-identities-azure-resources/overview.md).
cognitive-services Speech Sdk Microphone https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/speech-sdk-microphone.md
- Title: Microphone array recommendations - Speech service-
-description: Speech SDK microphone array recommendations. These array geometries are recommended for use with the Microsoft Audio Stack.
------ Previously updated : 12/27/2021----
-# Microphone array recommendations
-
-In this article, you learn how to design a microphone array customized for use with the Speech SDK. This is most pertinent if you are selecting, specifying, or building hardware for speech solutions.
-
-The Speech SDK works best with a microphone array that has been designed according to these guidelines, including the microphone geometry, component selection, and architecture.
-
-## Microphone geometry
-
-The following array geometries are recommended for use with the Microsoft Audio Stack. Localization of sound sources and rejection of ambient noise improve with a greater number of microphones, with dependencies on the specific application, user scenario, and device form factor.
-
-| Array |Microphones| Geometry |
-| -- | -- | -- |
-|Circular - 7 Microphones|<img src="media/speech-devices-sdk/7-mic-c.png" alt="7 mic circular array" width="150"/>|6 Outer, 1 Center, Radius = 42.5 mm, Evenly Spaced|
-|Circular - 4 Microphones|<img src="media/speech-devices-sdk/4-mic-c.png" alt="4 mic circular array" width="150"/>|3 Outer, 1 Center, Radius = 42.5 mm, Evenly Spaced|
-|Linear - 4 Microphones|<img src="media/speech-devices-sdk/4-mic-l.png" alt="4 mic linear array" width="150"/>|Length = 120 mm, Spacing = 40 mm|
-|Linear - 2 Microphones|<img src="media/speech-devices-sdk/2-mic-l.png" alt="2 mic linear array" width="150"/>|Spacing = 40 mm|
-
-Microphone channels should be ordered ascending from 0, according to the numbering depicted above for each array. The Microsoft Audio Stack will require an additional reference stream of audio playback to perform echo cancellation.
-
-## Component selection
-
-Microphone components should be selected to accurately reproduce a signal free of noise and distortion.
-
-The recommended properties when selecting microphones are:
-
-| Parameter | Recommended |
-| | -- |
-| SNR | \>= 65 dB (1 kHz signal 94 dBSPL, A-weighted noise) |
-| Amplitude Matching | ± 1 dB @ 1 kHz |
-| Phase Matching | ± 2° @ 1 kHz |
-| Acoustic Overload Point (AOP) | \>= 120 dBSPL (THD = 10%) |
-| Bit Rate | Minimum 24-bit |
-| Sampling Rate | Minimum 16 kHz\* |
-| Frequency Response | ± 3 dB, 200-8000 Hz Floating Mask\* |
-| Reliability | Storage Temperature Range -40°C to 70°C<br />Operating Temperature Range -20°C to 55°C |
-
-\*_Higher sampling rates or "wider" frequency ranges may be necessary for high-quality communications (VoIP) applications_
-
-Good component selection must be paired with good electroacoustic integration in order to avoid impairing the performance of the components used. Unique use cases may also necessitate additional requirements (for example: operating temperature ranges).
-
-## Microphone array integration
-
-The performance of the microphone array when integrated into a device will differ from the component specification. It is important to ensure that the microphones are well matched after integration. Therefore the device performance measured after any fixed gain or EQ should meet the following recommendations:
-
-| Parameter | Recommended |
-| | -- |
-| SNR | \> 63 dB (1 kHz signal 94 dBSPL, A-weighted noise) |
-| Output Sensitivity | -26 dBFS/Pa @ 1 kHz (recommended) |
-| Amplitude Matching | ± 2 dB, 200-8000 Hz |
-| THD%\* | ≤ 1%, 200-8000 Hz, 94 dBSPL, 5th Order |
-| Frequency Response | ± 6 dB, 200-8000 Hz Floating Mask\*\* |
-
-\*_A low distortion speaker is required to measure THD (e.g. Neumann KH120)_
-
-\*\*_"Wider" frequency ranges may be necessary for high-quality communications (VoIP) applications_
-
-## Speaker integration recommendations
-
-As echo cancellation is necessary for speech recognition devices that contain speakers, additional recommendations are provided for speaker selection and integration.
-
-| Parameter | Recommended |
-| | -- |
-| Linearity Considerations | No non-linear processing after speaker reference, otherwise a hardware-based loopback reference stream is required |
-| Speaker Loopback | Provided via WASAPI, private APIs, custom ALSA plug-in (Linux), or provided via firmware channel |
-| THD% | 3rd Octave Bands minimum 5th Order, 70 dBA Playback @ 0.8 m ≤ 6.3%, 315-500 Hz ≤ 5%, 630-5000 Hz |
-| Echo Coupling to Microphones | \> -10 dB TCLw using ITU-T G.122 Annex B.4 method, normalized to mic level<br />TCLw = TCLwmeasured \+ (Measured Level - Target Output Sensitivity)<br />TCLw = TCLwmeasured \+ (Measured Level - (-26)) |
-
-## Integration design architecture
-
-The following guidelines for architecture are necessary when integrating microphones into a device:
-
-| Parameter | Recommendation |
-| | -- |
-| Mic Port Similarity | All microphone ports are same length in array |
-| Mic Port Dimensions | Port size Ø0.8-1.0 mm. Port Length / Port Diameter \< 2 |
-| Mic Sealing | Sealing gaskets uniformly implemented in stack-up. Recommend \> 70% compression ratio for foam gaskets |
-| Mic Reliability | Mesh should be used to prevent dust and ingress (between PCB for bottom ported microphones and sealing gasket/top cover) |
-| Mic Isolation | Rubber gaskets and vibration decoupling through structure, particularly for isolating any vibration paths due to integrated speakers |
-| Sampling Clock | Device audio must be free of jitter and drop-outs with low drift |
-| Record Capability | The device must be able to record individual channel raw streams simultaneously |
-| USB | All USB audio input devices must set descriptors according to the [USB Audio Devices Rev3 Spec](https://www.usb.org/document-library/usb-audio-devices-rev-30-and-adopters-agreement) |
-| Microphone Geometry | Drivers must implement [Microphone Array Geometry Descriptors](/windows-hardware/drivers/audio/ksproperty-audio-mic-array-geometry) correctly |
-| Discoverability | Devices must not have any undiscoverable or uncontrollable hardware, firmware, or 3rd party software-based non-linear audio processing algorithms to/from the device |
-| Capture Format | Capture formats must use a minimum sampling rate of 16 kHz and recommended 24-bit depth |
-
-## Electrical architecture considerations
-
-Where applicable, arrays may be connected to a USB host (such as a SoC that runs the [Microsoft Audio Stack (MAS)](audio-processing-overview.md)) and interfaces to Speech services or other applications.
-
-Hardware components such as PDM-to-TDM conversion should ensure that the dynamic range and SNR of the microphones are preserved within re-samplers.
-
-High-speed USB Audio Class 2.0 should be supported within any audio MCUs in order to provide the necessary bandwidth for up to seven channels at higher sample rates and bit depths.
-
-## Next steps
-
-> [!div class="nextstepaction"]
-> [Learn more about audio processing](audio-processing-overview.md)
cognitive-services Speech Sdk https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/speech-sdk.md
- Title: About the Speech SDK - Speech service-
-description: The Speech software development kit (SDK) exposes many of the Speech service capabilities, making it easier to develop speech-enabled applications.
------ Previously updated : 09/16/2022---
-# What is the Speech SDK?
-
-The Speech SDK (software development kit) exposes many of the [Speech service capabilities](overview.md), so you can develop speech-enabled applications. The Speech SDK is available [in many programming languages](quickstarts/setup-platform.md) and across platforms. The Speech SDK is ideal for both real-time and non-real-time scenarios, by using local devices, files, Azure Blob Storage, and input and output streams.
-
-In some cases, you can't or shouldn't use the [Speech SDK](speech-sdk.md). In those cases, you can use REST APIs to access the Speech service. For example, use the [Speech to text REST API](rest-speech-to-text.md) for [batch transcription](batch-transcription.md) and [custom speech](custom-speech-overview.md).
-
-## Supported languages
-
-The Speech SDK supports the following languages and platforms:
-
-| Programming language | Reference | Platform support |
-|-|-|-|
-| [C#](quickstarts/setup-platform.md?pivots=programming-language-csharp) <sup>1</sup> | [.NET](/dotnet/api/microsoft.cognitiveservices.speech) | Windows, Linux, macOS, Mono, Xamarin.iOS, Xamarin.Mac, Xamarin.Android, UWP, Unity |
-| [C++](quickstarts/setup-platform.md?pivots=programming-language-cpp) <sup>2</sup> | [C++](/cpp/cognitive-services/speech/) | Windows, Linux, macOS |
-| [Go](quickstarts/setup-platform.md?pivots=programming-language-go) | [Go](https://github.com/Microsoft/cognitive-services-speech-sdk-go) | Linux |
-| [Java](quickstarts/setup-platform.md?pivots=programming-language-java) | [Java](/java/api/com.microsoft.cognitiveservices.speech) | Android, Windows, Linux, macOS |
-| [JavaScript](quickstarts/setup-platform.md?pivots=programming-language-javascript) | [JavaScript](/javascript/api/microsoft-cognitiveservices-speech-sdk/) | Browser, Node.js |
-| [Objective-C](quickstarts/setup-platform.md?pivots=programming-language-objectivec) | [Objective-C](/objectivec/cognitive-services/speech/) | iOS, macOS |
-| [Python](quickstarts/setup-platform.md?pivots=programming-language-python) | [Python](/python/api/azure-cognitiveservices-speech/) | Windows, Linux, macOS |
-| [Swift](quickstarts/setup-platform.md?pivots=programming-language-swift) | [Objective-C](/objectivec/cognitive-services/speech/) <sup>3</sup> | iOS, macOS |
-
-<sup>1 C# code samples are available in the documentation. The Speech SDK for C# is based on .NET Standard 2.0, so it supports many platforms and programming languages. For more information, see [.NET implementation support](/dotnet/standard/net-standard#net-implementation-support).</sup>
-<sup>2 C isn't a supported programming language for the Speech SDK.</sup>
-<sup>3 The Speech SDK for Swift shares client libraries and reference documentation with the Speech SDK for Objective-C.</sup>
--
-## Speech SDK demo
-
-The following video shows how to install the [Speech SDK for C#](quickstarts/setup-platform.md) and write a simple .NET console application for speech to text.
-
-> [!VIDEO c20d3b0c-e96a-4154-9299-155e27db7117]
-
-## Code samples
-
-Speech SDK code samples are available in the documentation and GitHub.
-
-### Docs samples
-
-At the top of documentation pages that contain samples, options to select include C#, C++, Go, Java, JavaScript, Objective-C, Python, or Swift.
--
-If a sample is not available in your preferred programming language, you can select another programming language to get started and learn about the concepts, or see the reference and samples linked from the beginning of the article.
-
-### GitHub samples
-
-In-depth samples are available in the [Azure-Samples/cognitive-services-speech-sdk](https://aka.ms/csspeech/samples) repository on GitHub. There are samples for C# (including UWP, Unity, and Xamarin), C++, Java, JavaScript (including Browser and Node.js), Objective-C, Python, and Swift. Code samples for Go are available in the [Microsoft/cognitive-services-speech-sdk-go](https://github.com/Microsoft/cognitive-services-speech-sdk-go) repository on GitHub.
-
-## Help options
-
-The [Microsoft Q&A](/answers/topics/azure-speech.html) and [Stack Overflow](https://stackoverflow.com/questions/tagged/azure-speech) forums are available for the developer community to ask and answer questions about Azure Cognitive Speech and other services. Microsoft monitors the forums and replies to questions that the community has not yet answered. To make sure that we see your question, tag it with 'azure-speech'.
-
-You can suggest an idea or report a bug by creating an issue on GitHub:
-- [Azure-Samples/cognitive-services-speech-sdk](https://aka.ms/GHspeechissues)
-- [Microsoft/cognitive-services-speech-sdk-go](https://github.com/microsoft/cognitive-services-speech-sdk-go/issues)
-- [Microsoft/cognitive-services-speech-sdk-js](https://github.com/microsoft/cognitive-services-speech-sdk-js/issues)
-
-See also [Azure Cognitive Services support and help options](../cognitive-services-support-options.md?context=/azure/cognitive-services/speech-service/context/context) to get support, stay up-to-date, give feedback, and report bugs for Cognitive Services.
-
-## Next steps
-
-* [Install the SDK](quickstarts/setup-platform.md)
-* [Try the speech to text quickstart](./get-started-speech-to-text.md)
cognitive-services Speech Service Vnet Service Endpoint https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/speech-service-vnet-service-endpoint.md
- Title: Use Virtual Network service endpoints with Speech service-
-description: This article describes how to use Speech service with an Azure Virtual Network service endpoint.
------ Previously updated : 03/19/2021---
-# Use Speech service through a Virtual Network service endpoint
-
-[Azure Virtual Network](../../virtual-network/virtual-networks-overview.md) [service endpoints](../../virtual-network/virtual-network-service-endpoints-overview.md) help to provide secure and direct connectivity to Azure services over an optimized route on the Azure backbone network. Endpoints help you secure your critical Azure service resources to only your virtual networks. Service endpoints enable private IP addresses in the virtual network to reach the endpoint of an Azure service without needing a public IP address on the virtual network.
-
-This article explains how to set up and use Virtual Network service endpoints with Speech service in Azure Cognitive Services.
-
-> [!NOTE]
-> Before you start, review [how to use virtual networks with Cognitive Services](../cognitive-services-virtual-networks.md).
-
-This article also describes [how to remove Virtual Network service endpoints later but still use the Speech resource](#use-a-speech-resource-that-has-a-custom-domain-name-but-that-doesnt-have-allowed-virtual-networks).
-
-To set up a Speech resource for Virtual Network service endpoint scenarios, you need to:
-1. [Create a custom domain name for the Speech resource](#create-a-custom-domain-name).
-1. [Configure virtual networks and networking settings for the Speech resource](#configure-virtual-networks-and-the-speech-resource-networking-settings).
-1. [Adjust existing applications and solutions](#adjust-existing-applications-and-solutions).
-
-> [!NOTE]
-> Setting up and using Virtual Network service endpoints for Speech service is similar to setting up and using private endpoints. In this article, we refer to the corresponding sections of the [article on using private endpoints](speech-services-private-link.md) when the procedures are the same.
--
-This article describes how to use Virtual Network service endpoints with Speech service. For information about private endpoints, see [Use Speech service through a private endpoint](speech-services-private-link.md).
-
-## Create a custom domain name
-
-Virtual Network service endpoints require a [custom subdomain name for Cognitive Services](../cognitive-services-custom-subdomains.md). Create a custom domain by following the [guidance](speech-services-private-link.md#create-a-custom-domain-name) in the private endpoint article. All warnings in the section also apply to Virtual Network service endpoints.
-
-## Configure virtual networks and the Speech resource networking settings
-
-You need to add all virtual networks that are allowed access via the service endpoint to the Speech resource networking properties.
-
-> [!NOTE]
-> To access a Speech resource via the Virtual Network service endpoint, you need to enable the `Microsoft.CognitiveServices` service endpoint type for the required subnets of your virtual network. Doing so will route all subnet traffic related to Cognitive Services through the private backbone network. If you intend to access any other Cognitive Services resources from the same subnet, make sure these resources are configured to allow your virtual network.
->
-> If a virtual network isn't added as *allowed* in the Speech resource networking properties, it won't have access to the Speech resource via the service endpoint, even if the `Microsoft.CognitiveServices` service endpoint is enabled for the virtual network. And if the service endpoint is enabled but the virtual network isn't allowed, the Speech resource won't be accessible for the virtual network through a public IP address, no matter what the Speech resource's other network security settings are. That's because enabling the `Microsoft.CognitiveServices` endpoint routes all traffic related to Cognitive Services through the private backbone network, and in this case the virtual network should be explicitly allowed to access the resource. This guidance applies for all Cognitive Services resources, not just for Speech resources.
-
-1. Go to the [Azure portal](https://portal.azure.com/) and sign in to your Azure account.
-1. Select the Speech resource.
-1. In the **Resource Management** group in the left pane, select **Networking**.
-1. On the **Firewalls and virtual networks** tab, select **Selected Networks and Private Endpoints**.
-
- > [!NOTE]
- > To use Virtual Network service endpoints, you need to select the **Selected Networks and Private Endpoints** network security option. No other options are supported. If your scenario requires the **All networks** option, consider using [private endpoints](speech-services-private-link.md), which support all three network security options.
-
-5. Select **Add existing virtual network** or **Add new virtual network** and provide the required parameters. Select **Add** for an existing virtual network or **Create** for a new one. If you add an existing virtual network, the `Microsoft.CognitiveServices` service endpoint will automatically be enabled for the selected subnets. This operation can take up to 15 minutes. Also, see the note at the beginning of this section.
-
-### Enabling service endpoint for an existing virtual network
-
-As described in the previous section, when you configure a virtual network as *allowed* for the Speech resource, the `Microsoft.CognitiveServices` service endpoint is automatically enabled. If you later disable it, you need to re-enable it manually to restore the service endpoint access to the Speech resource (and to other Cognitive Services resources):
-
-1. Go to the [Azure portal](https://portal.azure.com/) and sign in to your Azure account.
-1. Select the virtual network.
-1. In the **Settings** group in the left pane, select **Subnets**.
-1. Select the required subnet.
-1. A new panel appears on the right side of the window. In this panel, in the **Service Endpoints** section, select `Microsoft.CognitiveServices` in the **Services** list.
-1. Select **Save**.
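-
-If you prefer the Azure CLI to the portal steps above, the same change can be made with `az network vnet subnet update`; a minimal sketch with placeholder resource names:
-
-```bash
-# Re-enable the Microsoft.CognitiveServices service endpoint on an existing subnet.
-az network vnet subnet update \
-  --resource-group <resource-group-name> \
-  --vnet-name <virtual-network-name> \
-  --name <subnet-name> \
-  --service-endpoints Microsoft.CognitiveServices
-```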
-
-## Adjust existing applications and solutions
-
-A Speech resource that has a custom domain enabled interacts with the Speech service in a different way. This is true for a custom-domain-enabled Speech resource regardless of whether service endpoints are configured. Information in this section applies to both scenarios.
-
-### Use a Speech resource that has a custom domain name and allowed virtual networks
-
-In this scenario, the **Selected Networks and Private Endpoints** option is selected in the networking settings of the Speech resource and at least one virtual network is allowed. This scenario is equivalent to [using a Speech resource that has a custom domain name and a private endpoint enabled](speech-services-private-link.md#adjust-an-application-to-use-a-speech-resource-with-a-private-endpoint).
--
-### Use a Speech resource that has a custom domain name but that doesn't have allowed virtual networks
-
-In this scenario, private endpoints aren't enabled and one of these statements is true:
-
-- The **Selected Networks and Private Endpoints** option is selected in the networking settings of the Speech resource, but no allowed virtual networks are configured.
-- The **All networks** option is selected in the networking settings of the Speech resource.
-
-This scenario is equivalent to [using a Speech resource that has a custom domain name and that doesn't have private endpoints](speech-services-private-link.md#adjust-an-application-to-use-a-speech-resource-without-private-endpoints).
---
-## Learn more
-
-* [Use Speech service through a private endpoint](speech-services-private-link.md)
-* [Azure Virtual Network service endpoints](../../virtual-network/virtual-network-service-endpoints-overview.md)
-* [Azure Private Link](../../private-link/private-link-overview.md)
-* [Speech SDK](speech-sdk.md)
-* [Speech to text REST API](rest-speech-to-text.md)
-* [Text to speech REST API](rest-text-to-speech.md)
cognitive-services Speech Services Private Link https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/speech-services-private-link.md
- Title: How to use private endpoints with Speech service-
-description: Learn how to use Speech service with private endpoints provided by Azure Private Link
------ Previously updated : 04/07/2021----
-# Use Speech service through a private endpoint
-
-[Azure Private Link](../../private-link/private-link-overview.md) lets you connect to services in Azure by using a [private endpoint](../../private-link/private-endpoint-overview.md). A private endpoint is a private IP address that's accessible only within a specific [virtual network](../../virtual-network/virtual-networks-overview.md) and subnet.
-
-This article explains how to set up and use Private Link and private endpoints with the Speech service. This article then describes how to remove private endpoints later, but still use the Speech resource.
-
-> [!NOTE]
-> Before you proceed, review [how to use virtual networks with Cognitive Services](../cognitive-services-virtual-networks.md).
--
-Setting up a Speech resource for the private endpoint scenarios requires performing the following tasks:
-1. [Create a custom domain name](#create-a-custom-domain-name)
-1. [Turn on private endpoints](#turn-on-private-endpoints)
-1. [Adjust existing applications and solutions](#adjust-an-application-to-use-a-speech-resource-with-a-private-endpoint)
--
-This article describes the usage of private endpoints with the Speech service. For information about Virtual Network service endpoints, see [Use Speech service through a Virtual Network service endpoint](speech-service-vnet-service-endpoint.md).
-
-## Create a custom domain name
-> [!CAUTION]
-> A Speech resource with a custom domain name enabled uses a different way to interact with Speech service. You might have to adjust your application code for both of these scenarios: [with private endpoint](#adjust-an-application-to-use-a-speech-resource-with-a-private-endpoint) and [*without* private endpoint](#adjust-an-application-to-use-a-speech-resource-without-private-endpoints).
->
--
-## Turn on private endpoints
-
-We recommend using the [private DNS zone](../../dns/private-dns-overview.md) attached to the virtual network with the necessary updates for the private endpoints.
-You can create a private DNS zone during the provisioning process.
-If you're using your own DNS server, you might also need to change your DNS configuration.
-
-Decide on a DNS strategy *before* you provision private endpoints for a production Speech resource. And test your DNS changes, especially if you use your own DNS server.
-
-Use one of the following articles to create private endpoints.
-These articles use a web app as a sample resource to make available through private endpoints.
-
-- [Create a private endpoint by using the Azure portal](../../private-link/create-private-endpoint-portal.md)
-- [Create a private endpoint by using Azure PowerShell](../../private-link/create-private-endpoint-powershell.md)
-- [Create a private endpoint by using Azure CLI](../../private-link/create-private-endpoint-cli.md)
-
-Use these parameters instead of the parameters in the article that you chose:
-
-| Setting | Value |
-|||
-| Resource type | **Microsoft.CognitiveServices/accounts** |
-| Resource | **\<your-speech-resource-name>** |
-| Target sub-resource | **account** |
-
-**DNS for private endpoints:** Review the general principles of [DNS for private endpoints in Cognitive Services resources](../cognitive-services-virtual-networks.md#dns-changes-for-private-endpoints). Then confirm that your DNS configuration is working correctly by performing the checks described in the following sections.
-
-### Resolve DNS from the virtual network
-
-This check is *required*.
-
-Follow these steps to test the custom DNS entry from your virtual network:
-
-1. Log in to a virtual machine located in the virtual network to which you've attached your private endpoint.
-1. Open a Windows command prompt or a Bash shell, run `nslookup`, and confirm that it successfully resolves your resource's custom domain name.
-
- ```dos
- C:\>nslookup my-private-link-speech.cognitiveservices.azure.com
- Server: UnKnown
- Address: 168.63.129.16
-
- Non-authoritative answer:
- Name: my-private-link-speech.privatelink.cognitiveservices.azure.com
- Address: 172.28.0.10
- Aliases: my-private-link-speech.cognitiveservices.azure.com
- ```
-
-1. Confirm that the IP address matches the IP address of your private endpoint.
-
-### Resolve DNS from other networks
-
-Perform this check only if you've turned on either the **All networks** option or the **Selected Networks and Private Endpoints** access option in the **Networking** section of your resource.
-
-If you plan to access the resource by using only a private endpoint, you can skip this section.
-
-1. Log in to a computer attached to a network that's allowed to access the resource.
-2. Open a Windows command prompt or Bash shell, run `nslookup`, and confirm that it successfully resolves your resource's custom domain name.
-
- ```dos
- C:\>nslookup my-private-link-speech.cognitiveservices.azure.com
- Server: UnKnown
- Address: fe80::1
-
- Non-authoritative answer:
- Name: vnetproxyv1-weu-prod.westeurope.cloudapp.azure.com
- Address: 13.69.67.71
- Aliases: my-private-link-speech.cognitiveservices.azure.com
- my-private-link-speech.privatelink.cognitiveservices.azure.com
- westeurope.prod.vnet.cog.trafficmanager.net
- ```
-
-> [!NOTE]
-> The resolved IP address points to a virtual network proxy endpoint, which dispatches the network traffic to the private endpoint for the Speech resource. The behavior will be different for a resource with a custom domain name but *without* private endpoints. See [this section](#dns-configuration) for details.
-
-## Adjust an application to use a Speech resource with a private endpoint
-
-A Speech resource with a custom domain interacts with the Speech service in a different way.
-This is true for a custom-domain-enabled Speech resource both with and without private endpoints.
-Information in this section applies to both scenarios.
-
-Follow instructions in this section to adjust existing applications and solutions to use a Speech resource with a custom domain name and a private endpoint turned on.
-
-A Speech resource with a custom domain name and a private endpoint turned on uses a different way to interact with the Speech service. This section explains how to use such a resource with the Speech service REST APIs and the [Speech SDK](speech-sdk.md).
-
-> [!NOTE]
-> A Speech resource without private endpoints that uses a custom domain name also has a special way of interacting with the Speech service.
-> This way differs from the scenario of a Speech resource that uses a private endpoint.
-> This is important to consider because you may decide to remove private endpoints later.
-> See [Adjust an application to use a Speech resource without private endpoints](#adjust-an-application-to-use-a-speech-resource-without-private-endpoints) later in this article.
-
-### Speech resource with a custom domain name and a private endpoint: Usage with the REST APIs
-
-We'll use `my-private-link-speech.cognitiveservices.azure.com` as a sample Speech resource DNS name (custom domain) for this section.
-
-Speech service has REST APIs for [Speech to text](rest-speech-to-text.md) and [Text to speech](rest-text-to-speech.md). Consider the following information for the private-endpoint-enabled scenario.
-
-Speech to text has two REST APIs. Each API serves a different purpose, uses different endpoints, and requires a different approach when you're using it in the private-endpoint-enabled scenario.
-
-The Speech to text REST APIs are:
-- [Speech to text REST API](rest-speech-to-text.md), which is used for [Batch transcription](batch-transcription.md) and [Custom Speech](custom-speech-overview.md).
-- [Speech to text REST API for short audio](rest-speech-to-text-short.md), which is used for real-time speech to text.
-
-Usage of the Speech to text REST API for short audio and the Text to speech REST API in the private endpoint scenario is the same. It's equivalent to the [Speech SDK case](#speech-resource-with-a-custom-domain-name-and-a-private-endpoint-usage-with-the-speech-sdk) described later in this article.
-
-Speech to text REST API uses a different set of endpoints, so it requires a different approach for the private-endpoint-enabled scenario.
-
-The next subsections describe both cases.
-
-#### Speech to text REST API
-
-Usually, Speech resources use [Cognitive Services regional endpoints](../cognitive-services-custom-subdomains.md#is-there-a-list-of-regional-endpoints) for communicating with the [Speech to text REST API](rest-speech-to-text.md). These resources have the following naming format: <p/>`{region}.api.cognitive.microsoft.com`.
-
-This is a sample request URL:
-
-```http
-https://westeurope.api.cognitive.microsoft.com/speechtotext/v3.1/transcriptions
-```
-
-> [!NOTE]
-> See [this article](sovereign-clouds.md) for Azure Government and Azure China endpoints.
-
-After you turn on a custom domain for a Speech resource (which is necessary for private endpoints), that resource will use the following DNS name pattern for the basic REST API endpoint: <p/>`{your custom name}.cognitiveservices.azure.com`
-
-That means that in our example, the REST API endpoint name will be: <p/>`my-private-link-speech.cognitiveservices.azure.com`
-
-And the sample request URL needs to be converted to:
-```http
-https://my-private-link-speech.cognitiveservices.azure.com/speechtotext/v3.1/transcriptions
-```
-This URL should be reachable from the virtual network with the private endpoint attached (provided the [correct DNS resolution](#resolve-dns-from-the-virtual-network)).
-
-After you turn on a custom domain name for a Speech resource, you typically replace the host name in all request URLs with the new custom domain host name. All other parts of the request (like the path `/speechtotext/v3.1/transcriptions` in the earlier example) remain the same.
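-
-For example, here's a minimal `curl` sketch of the converted request, run from a machine that can resolve the private endpoint; it assumes the resource key is passed in the `Ocp-Apim-Subscription-Key` header, as with the other REST APIs described later in this article:
-
-```bash
-# List batch transcriptions through the custom domain endpoint of the private-endpoint-enabled resource.
-curl "https://my-private-link-speech.cognitiveservices.azure.com/speechtotext/v3.1/transcriptions" \
-  -H "Ocp-Apim-Subscription-Key: <your-speech-resource-key>"
-```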
-
-> [!TIP]
-> Some customers develop applications that use the region part of the regional endpoint's DNS name (for example, to send the request to the Speech resource deployed in the particular Azure region).
->
-> A custom domain for a Speech resource contains *no* information about the region where the resource is deployed. So the application logic described earlier will *not* work and needs to be altered.
-
-#### Speech to text REST API for short audio and Text to speech REST API
-
-The [Speech to text REST API for short audio](rest-speech-to-text-short.md) and the [Text to speech REST API](rest-text-to-speech.md) use two types of endpoints:
-- [Cognitive Services regional endpoints](../cognitive-services-custom-subdomains.md#is-there-a-list-of-regional-endpoints) for communicating with the Cognitive Services REST API to obtain an authorization token
-- Special endpoints for all other operations
-
-> [!NOTE]
-> See [this article](sovereign-clouds.md) for Azure Government and Azure China endpoints.
-
-The detailed description of the special endpoints and how their URL should be transformed for a private-endpoint-enabled Speech resource is provided in [this subsection](#construct-endpoint-url) about usage with the Speech SDK. The same principle described for the SDK applies for the Speech to text REST API for short audio and the Text to speech REST API.
-
-Get familiar with the material in the subsection mentioned in the previous paragraph and see the following example. The example describes the Text to speech REST API. Usage of the Speech to text REST API for short audio is fully equivalent.
-
-> [!NOTE]
-> When you're using the Speech to text REST API for short audio and Text to speech REST API in private endpoint scenarios, use a resource key passed through the `Ocp-Apim-Subscription-Key` header. (See details for [Speech to text REST API for short audio](rest-speech-to-text-short.md#request-headers) and [Text to speech REST API](rest-text-to-speech.md#request-headers))
->
-> Using an authorization token and passing it to the special endpoint via the `Authorization` header will work *only* if you've turned on the **All networks** access option in the **Networking** section of your Speech resource. In other cases you will get either `Forbidden` or `BadRequest` error when trying to obtain an authorization token.
-
-**Text to speech REST API usage example**
-
-We'll use West Europe as a sample Azure region and `my-private-link-speech.cognitiveservices.azure.com` as a sample Speech resource DNS name (custom domain). The custom domain name `my-private-link-speech.cognitiveservices.azure.com` in our example belongs to the Speech resource created in the West Europe region.
-
-To get the list of the voices supported in the region, perform the following request:
-
-```http
-https://westeurope.tts.speech.microsoft.com/cognitiveservices/voices/list
-```
-See more details in the [Text to speech REST API documentation](rest-text-to-speech.md).
-
-For the private-endpoint-enabled Speech resource, the endpoint URL for the same operation needs to be modified. The same request will look like this:
-
-```http
-https://my-private-link-speech.cognitiveservices.azure.com/tts/cognitiveservices/voices/list
-```
-See a detailed explanation in the [Construct endpoint URL](#construct-endpoint-url) subsection for the Speech SDK.
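-
-For example, a minimal `curl` sketch of this private-endpoint version of the request, again passing the resource key in the `Ocp-Apim-Subscription-Key` header:
-
-```bash
-# Get the list of available voices through the custom domain endpoint.
-curl "https://my-private-link-speech.cognitiveservices.azure.com/tts/cognitiveservices/voices/list" \
-  -H "Ocp-Apim-Subscription-Key: <your-speech-resource-key>"
-```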
-
-### Speech resource with a custom domain name and a private endpoint: Usage with the Speech SDK
-
-Using the Speech SDK with a custom domain name and private-endpoint-enabled Speech resources requires you to review and likely change your application code.
-
-We'll use `my-private-link-speech.cognitiveservices.azure.com` as a sample Speech resource DNS name (custom domain) for this section.
-
-#### Construct endpoint URL
-
-Usually in SDK scenarios (as well as in the Speech to text REST API for short audio and Text to speech REST API scenarios), Speech resources use the dedicated regional endpoints for different service offerings. The DNS name format for these endpoints is:
-
-`{region}.{speech service offering}.speech.microsoft.com`
-
-An example DNS name is:
-
-`westeurope.stt.speech.microsoft.com`
-
-All possible values for the region (first element of the DNS name) are listed in [Speech service supported regions](regions.md). (See [this article](sovereign-clouds.md) for Azure Government and Azure China endpoints.) The following table presents the possible values for the Speech service offering (second element of the DNS name):
-
-| DNS name value | Speech service offering |
-|-|-|
-| `commands` | [Custom Commands](custom-commands.md) |
-| `convai` | [Conversation Transcription](conversation-transcription.md) |
-| `s2s` | [Speech Translation](speech-translation.md) |
-| `stt` | [Speech to text](speech-to-text.md) |
-| `tts` | [Text to speech](text-to-speech.md) |
-| `voice` | [Custom Voice](how-to-custom-voice.md) |
-
-So the earlier example (`westeurope.stt.speech.microsoft.com`) stands for a Speech to text endpoint in West Europe.
-
-Private-endpoint-enabled endpoints communicate with Speech service via a special proxy. Because of that, *you must change the endpoint connection URLs*.
-
-A "standard" endpoint URL looks like: <p/>`{region}.{speech service offering}.speech.microsoft.com/{URL path}`
-
-A private endpoint URL looks like: <p/>`{your custom name}.cognitiveservices.azure.com/{speech service offering}/{URL path}`
-
-**Example 1.** An application is communicating by using the following URL (speech recognition using the base model for US English in West Europe):
-
-```
-wss://westeurope.stt.speech.microsoft.com/speech/recognition/conversation/cognitiveservices/v1?language=en-US
-```
-
-To use it in the private-endpoint-enabled scenario when the custom domain name of the Speech resource is `my-private-link-speech.cognitiveservices.azure.com`, you must modify the URL like this:
-
-```
-wss://my-private-link-speech.cognitiveservices.azure.com/stt/speech/recognition/conversation/cognitiveservices/v1?language=en-US
-```
-
-Notice the details:
-
-- The host name `westeurope.stt.speech.microsoft.com` is replaced by the custom domain host name `my-private-link-speech.cognitiveservices.azure.com`.
-- The second element of the original DNS name (`stt`) becomes the first element of the URL path and precedes the original path. So the original URL `/speech/recognition/conversation/cognitiveservices/v1?language=en-US` becomes `/stt/speech/recognition/conversation/cognitiveservices/v1?language=en-US`.
-
-**Example 2.** An application uses the following URL to synthesize speech in West Europe:
-```
-wss://westeurope.tts.speech.microsoft.com/cognitiveservices/websocket/v1
-```
-
-The following equivalent URL uses a private endpoint, where the custom domain name of the Speech resource is `my-private-link-speech.cognitiveservices.azure.com`:
-
-```
-wss://my-private-link-speech.cognitiveservices.azure.com/tts/cognitiveservices/websocket/v1
-```
-
-The same principle as in Example 1 applies, but this time the key element is `tts`.
-
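-The same rewriting rule can also be expressed in code. The following C# sketch isn't part of the Speech SDK; it's a minimal helper, assuming the sample custom domain used in this section, that swaps in the custom domain host and prefixes the path with the service offering taken from the regional host name.
-
-```csharp
-using System;
-
-// Rewrites a standard regional endpoint URL for use with a private-endpoint-enabled
-// Speech resource that has a custom domain name.
-static string ToPrivateEndpointUrl(string standardUrl, string customDomainHost)
-{
-    var uri = new Uri(standardUrl);
-
-    // For "westeurope.stt.speech.microsoft.com", the offering is "stt".
-    var offering = uri.Host.Split('.')[1];
-
-    // The offering becomes the first element of the URL path.
-    return $"{uri.Scheme}://{customDomainHost}/{offering}{uri.PathAndQuery}";
-}
-
-// Prints the private endpoint URL shown in Example 1.
-Console.WriteLine(ToPrivateEndpointUrl(
-    "wss://westeurope.stt.speech.microsoft.com/speech/recognition/conversation/cognitiveservices/v1?language=en-US",
-    "my-private-link-speech.cognitiveservices.azure.com"));
-```
-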
-#### Modifying applications
-
-Follow these steps to modify your code:
-
-1. Determine the application endpoint URL:
-
- - [Turn on logging for your application](how-to-use-logging.md) and run it to log activity.
- - In the log file, search for `SPEECH-ConnectionUrl`. In matching lines, the `value` parameter contains the full URL that your application used to reach the Speech service.
-
- Example:
-
- ```
- (114917): 41ms SPX_DBG_TRACE_VERBOSE: property_bag_impl.cpp:138 ISpxPropertyBagImpl::LogPropertyAndValue: this=0x0000028FE4809D78; name='SPEECH-ConnectionUrl'; value='wss://westeurope.stt.speech.microsoft.com/speech/recognition/conversation/cognitiveservices/v1?traffictype=spx&language=en-US'
- ```
-
- So the URL that the application used in this example is:
-
- ```
- wss://westeurope.stt.speech.microsoft.com/speech/recognition/conversation/cognitiveservices/v1?language=en-US
- ```
-
-2. Create a `SpeechConfig` instance by using a full endpoint URL:
-
- 1. Modify the endpoint that you just determined, as described in the earlier [Construct endpoint URL](#construct-endpoint-url) section.
-
- 1. Modify how you create the instance of `SpeechConfig`. Most likely, your application is using something like this:
- ```csharp
- var config = SpeechConfig.FromSubscription(speechKey, azureRegion);
- ```
- This won't work for a private-endpoint-enabled Speech resource because of the host name and URL changes that we described in the previous sections. If you try to run your existing application without any modifications by using the key of a private-endpoint-enabled resource, you'll get an authentication error (401).
-
- To make it work, modify how you instantiate the `SpeechConfig` class and use "from endpoint"/"with endpoint" initialization. Suppose we have the following two variables defined:
- - `speechKey` contains the key of the private-endpoint-enabled Speech resource.
- - `endPoint` contains the full *modified* endpoint URL (using the type required by the corresponding programming language). In our example, this variable should contain:
- ```
- wss://my-private-link-speech.cognitiveservices.azure.com/stt/speech/recognition/conversation/cognitiveservices/v1?language=en-US
- ```
-
- Create a `SpeechConfig` instance:
- ```csharp
- var config = SpeechConfig.FromEndpoint(endPoint, speechKey);
- ```
- ```cpp
- auto config = SpeechConfig::FromEndpoint(endPoint, speechKey);
- ```
- ```java
- SpeechConfig config = SpeechConfig.fromEndpoint(endPoint, speechKey);
- ```
- ```python
- import azure.cognitiveservices.speech as speechsdk
- speech_config = speechsdk.SpeechConfig(endpoint=endPoint, subscription=speechKey)
- ```
- ```objectivec
- SPXSpeechConfiguration *speechConfig = [[SPXSpeechConfiguration alloc] initWithEndpoint:endPoint subscription:speechKey];
- ```
-
-> [!TIP]
-> The query parameters specified in the endpoint URI are not changed, even if they're set by other APIs. For example, if the recognition language is defined in the URI as query parameter `language=en-US`, and is also set to `ru-RU` via the corresponding property, the language setting in the URI is used. The effective language is then `en-US`.
->
-> Parameters set in the endpoint URI always take precedence. Other APIs can override only parameters that are not specified in the endpoint URI.
-
-After this modification, your application should work with the private-endpoint-enabled Speech resources. We're working on more seamless support of private endpoint scenarios.
---
-## Adjust an application to use a Speech resource without private endpoints
-
-In this article, we've pointed out several times that enabling a custom domain for a Speech resource is *irreversible*. Such a resource communicates with the Speech service differently from resources that use [regional endpoint names](../cognitive-services-custom-subdomains.md#is-there-a-list-of-regional-endpoints).
-
-This section explains how to use a Speech resource with a custom domain name but *without* any private endpoints with the Speech service REST APIs and [Speech SDK](speech-sdk.md). This might be a resource that was once used in a private endpoint scenario, but then had its private endpoints deleted.
-
-### DNS configuration
-
-Remember how the custom domain DNS name of a private-endpoint-enabled Speech resource is [resolved from public networks](#resolve-dns-from-other-networks). In that case, the resolved IP address points to a proxy endpoint for a virtual network. That endpoint dispatches the network traffic to the private-endpoint-enabled Cognitive Services resource.
-
-However, when *all* resource private endpoints are removed (or right after the enabling of the custom domain name), the CNAME record of the Speech resource is reprovisioned. It now points to the IP address of the corresponding [Cognitive Services regional endpoint](../cognitive-services-custom-subdomains.md#is-there-a-list-of-regional-endpoints).
-
-So the output of the `nslookup` command will look like this:
-```dos
-C:\>nslookup my-private-link-speech.cognitiveservices.azure.com
-Server: UnKnown
-Address: fe80::1
-
-Non-authoritative answer:
-Name: apimgmthskquihpkz6d90kmhvnabrx3ms3pdubscpdfk1tsx3a.cloudapp.net
-Address: 13.93.122.1
-Aliases: my-private-link-speech.cognitiveservices.azure.com
- westeurope.api.cognitive.microsoft.com
- cognitiveweprod.trafficmanager.net
- cognitiveweprod.azure-api.net
- apimgmttmdjylckcx6clmh2isu2wr38uqzm63s8n4ub2y3e6xs.trafficmanager.net
- cognitiveweprod-westeurope-01.regional.azure-api.net
-```
-Compare it with the output from [this section](#resolve-dns-from-other-networks).
-
-### Speech resource with a custom domain name and without private endpoints: Usage with the REST APIs
-
-#### Speech to text REST API
-
-Speech to text REST API usage is fully equivalent to the case of [private-endpoint-enabled Speech resources](#speech-to-text-rest-api).
-
-#### Speech to text REST API for short audio and Text to speech REST API
-
-In this case, usage of the Speech to text REST API for short audio and the Text to speech REST API is the same as in the general case, with one exception (see the following note). Use both APIs as described in the [Speech to text REST API for short audio](rest-speech-to-text-short.md) and [Text to speech REST API](rest-text-to-speech.md) documentation.
-
-> [!NOTE]
-> When you're using the Speech to text REST API for short audio and the Text to speech REST API in custom domain scenarios, use a Speech resource key passed through the `Ocp-Apim-Subscription-Key` header. (See details for the [Speech to text REST API for short audio](rest-speech-to-text-short.md#request-headers) and the [Text to speech REST API](rest-text-to-speech.md#request-headers).)
->
-> Using an authorization token and passing it to the special endpoint via the `Authorization` header works *only* if you've turned on the **All networks** access option in the **Networking** section of your Speech resource. In other cases, you'll get either a `Forbidden` or `BadRequest` error when trying to obtain an authorization token.
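-
-For example, a minimal C# sketch of passing the resource key through the `Ocp-Apim-Subscription-Key` header might look like the following. The endpoint URL is a placeholder; construct the real URL as described in the REST API documentation for your scenario.
-
-```csharp
-using System;
-using System.Net.Http;
-
-// Placeholder endpoint; build the real URL as described in the REST API docs.
-var endpoint = "https://<your-endpoint>/cognitiveservices/voices/list";
-var speechKey = Environment.GetEnvironmentVariable("SPEECH_KEY");
-
-using var client = new HttpClient();
-using var request = new HttpRequestMessage(HttpMethod.Get, endpoint);
-
-// Pass the Speech resource key directly instead of an authorization token.
-request.Headers.Add("Ocp-Apim-Subscription-Key", speechKey);
-
-var response = await client.SendAsync(request);
-Console.WriteLine(response.StatusCode);
-```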
-
-### Speech resource with a custom domain name and without private endpoints: Usage with the Speech SDK
-
-Using the Speech SDK with custom-domain-enabled Speech resources *without* private endpoints is equivalent to the general case as described in the [Speech SDK documentation](speech-sdk.md).
-
-If you've modified your code to work with a [private-endpoint-enabled Speech resource](#speech-resource-with-a-custom-domain-name-and-a-private-endpoint-usage-with-the-speech-sdk), consider the following.
-
-In the section on [private-endpoint-enabled Speech resources](#speech-resource-with-a-custom-domain-name-and-a-private-endpoint-usage-with-the-speech-sdk), we explained how to determine the endpoint URL, modify it, and make it work through "from endpoint"/"with endpoint" initialization of the `SpeechConfig` class instance.
-
-However, if you try to run the same application after having all private endpoints removed (allowing some time for the corresponding DNS record reprovisioning), you'll get an internal service error (404). The reason is that the [DNS record](#dns-configuration) now points to the regional Cognitive Services endpoint instead of the virtual network proxy, and the URL paths like `/stt/speech/recognition/conversation/cognitiveservices/v1?language=en-US` won't be found there.
-
-You need to roll back your application to the standard instantiation of `SpeechConfig` in the style of the following code:
-
-```csharp
-var config = SpeechConfig.FromSubscription(speechKey, azureRegion);
-```
--
-## Pricing
-
-For pricing details, see [Azure Private Link pricing](https://azure.microsoft.com/pricing/details/private-link).
-
-## Learn more
-
-* [Use Speech service through a Virtual Network service endpoint](speech-service-vnet-service-endpoint.md)
-* [Azure Private Link](../../private-link/private-link-overview.md)
-* [Azure VNet service endpoint](../../virtual-network/virtual-network-service-endpoints-overview.md)
-* [Speech SDK](speech-sdk.md)
-* [Speech to text REST API](rest-speech-to-text.md)
-* [Text to speech REST API](rest-text-to-speech.md)
cognitive-services Speech Services Quotas And Limits https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/speech-services-quotas-and-limits.md
- Title: Speech service quotas and limits-
-description: Quick reference, detailed description, and best practices on the quotas and limits for the Speech service in Azure Cognitive Services.
------ Previously updated : 02/17/2023---
-# Speech service quotas and limits
-
-This article contains a quick reference and a detailed description of the quotas and limits for the Speech service in Azure Cognitive Services. The information applies to all [pricing tiers](https://azure.microsoft.com/pricing/details/cognitive-services/speech-services/) of the service. It also contains some best practices to avoid request throttling.
-
-For the free (F0) pricing tier, see also the monthly allowances at the [pricing page](https://azure.microsoft.com/pricing/details/cognitive-services/speech-services/).
-
-## Quotas and limits reference
-
-The following sections provide you with a quick guide to the quotas and limits that apply to the Speech service.
-
-For information about adjustable quotas for Standard (S0) Speech resources, see [additional explanations](#detailed-description-quota-adjustment-and-best-practices), [best practices](#general-best-practices-to-mitigate-throttling-during-autoscaling), and [adjustment instructions](#speech-to-text-increase-real-time-speech-to-text-concurrent-request-limit). The quotas and limits for Free (F0) Speech resources aren't adjustable.
-
-### Speech to text quotas and limits per resource
-
-This section describes speech to text quotas and limits per Speech resource. Unless otherwise specified, the limits aren't adjustable.
-
-#### Real-time speech to text and speech translation
-
-You can use real-time speech to text with the [Speech SDK](speech-sdk.md) or the [Speech to text REST API for short audio](rest-speech-to-text-short.md).
-
-> [!IMPORTANT]
-> These limits apply to concurrent real-time speech to text requests and speech translation requests combined. For example, if you have 60 concurrent speech to text requests and 40 concurrent speech translation requests, you'll reach the limit of 100 concurrent requests.
-
-| Quota | Free (F0) | Standard (S0) |
-|--|--|--|
-| Concurrent request limit - base model endpoint | 1 <br/><br/>This limit isn't adjustable. | 100 (default value)<br/><br/>The rate is adjustable for Standard (S0) resources. See [additional explanations](#detailed-description-quota-adjustment-and-best-practices), [best practices](#general-best-practices-to-mitigate-throttling-during-autoscaling), and [adjustment instructions](#speech-to-text-increase-real-time-speech-to-text-concurrent-request-limit). |
-| Concurrent request limit - custom endpoint | 1 <br/><br/>This limit isn't adjustable. | 100 (default value)<br/><br/>The rate is adjustable for Standard (S0) resources. See [additional explanations](#detailed-description-quota-adjustment-and-best-practices), [best practices](#general-best-practices-to-mitigate-throttling-during-autoscaling), and [adjustment instructions](#speech-to-text-increase-real-time-speech-to-text-concurrent-request-limit). |
-
-#### Batch transcription
-
-| Quota | Free (F0) | Standard (S0) |
-|--|--|--|
-| [Speech to text REST API](rest-speech-to-text.md) limit | Not available for F0 | 300 requests per minute |
-| Max audio input file size | N/A | 1 GB |
-| Max input blob size (for example, can contain more than one file in a zip archive). Note the file size limit from the preceding row. | N/A | 2.5 GB |
-| Max blob container size | N/A | 5 GB |
-| Max number of blobs per container | N/A | 10000 |
-| Max number of files per transcription request (when you're using multiple content URLs as input). | N/A | 1000 |
-| Max audio length for transcriptions with diarization enabled. | N/A | 240 minutes per file |
-
-#### Model customization
-
-The limits in this table apply per Speech resource when you create a Custom Speech model.
-
-| Quota | Free (F0) | Standard (S0) |
-|--|--|--|
-| REST API limit | 300 requests per minute | 300 requests per minute |
-| Max number of speech datasets | 2 | 500 |
-| Max acoustic dataset file size for data import | 2 GB | 2 GB |
-| Max language dataset file size for data import | 200 MB | 1.5 GB |
-| Max pronunciation dataset file size for data import | 1 KB | 1 MB |
-| Max text size when you're using the `text` parameter in the [Models_Create](https://westcentralus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/operations/Models_Create/) API request | 200 KB | 500 KB |
-
-### Text to speech quotas and limits per resource
-
-This section describes text to speech quotas and limits per Speech resource. Unless otherwise specified, the limits aren't adjustable.
-
-#### Common text to speech quotas and limits
-
-| Quota | Free (F0) | Standard (S0) |
-|--|--|--|
-| Maximum number of transactions per time period for prebuilt neural voices and custom neural voices. | 20 transactions per 60 seconds<br/><br/>This limit isn't adjustable. | 200 transactions per second (TPS) (default value)<br/><br/>The rate is adjustable up to 1000 TPS for Standard (S0) resources. See [additional explanations](#detailed-description-quota-adjustment-and-best-practices), [best practices](#general-best-practices-to-mitigate-throttling-during-autoscaling), and [adjustment instructions](#text-to-speech-increase-concurrent-request-limit). |
-| Max audio length produced per request | 10 min | 10 min |
-| Max total number of distinct `<voice>` and `<audio>` tags in SSML | 50 | 50 |
-| Max SSML message size per turn for websocket | 64 KB | 64 KB |
-
-#### Custom Neural Voice
-
-| Quota | Free (F0)| Standard (S0) |
-|--|--|--|
-| Max number of transactions per second (TPS) | Not available for F0 | 200 transactions per second (TPS) (default value) |
-| Max number of datasets | N/A | 500 |
-| Max number of simultaneous dataset uploads | N/A | 5 |
-| Max data file size for data import per dataset | N/A | 2 GB |
-| Upload of long audio or audio without script | N/A | Yes |
-| Max number of simultaneous model trainings | N/A | 3 |
-| Max number of custom endpoints | N/A | 50 |
-
-#### Audio Content Creation tool
-
-| Quota | Free (F0)| Standard (S0) |
-|--|--|--|
-| File size (plain text in SSML)<sup>1</sup> | 3,000 characters per file | 20,000 characters per file |
-| File size (lexicon file)<sup>2</sup> | 3,000 characters per file | 20,000 characters per file |
-| Billable characters in SSML| 15,000 characters per file | 100,000 characters per file |
-| Export to audio library | 1 concurrent task | N/A |
-
-<sup>1</sup> The limit only applies to plain text in SSML and doesn't include tags.
-
-<sup>2</sup> The limit includes all text, including tags. The characters of the lexicon file itself aren't charged; only the lexicon elements in SSML are counted as billable characters. To learn more, see [billable characters](text-to-speech.md#billable-characters).
-
-### Speaker recognition quotas and limits per resource
-
-Speaker recognition is limited to 20 transactions per second (TPS).
-
-## Detailed description, quota adjustment, and best practices
-
-Some of the Speech service quotas are adjustable. This section provides additional explanations, best practices, and adjustment instructions.
-
-The following quotas are adjustable for Standard (S0) resources. The Free (F0) request limits aren't adjustable.
-
-- Speech to text [concurrent request limit](#real-time-speech-to-text-and-speech-translation) for base model endpoint and custom endpoint
-- Text to speech [maximum number of transactions per time period](#text-to-speech-quotas-and-limits-per-resource) for prebuilt neural voices and custom neural voices
-- Speech translation [concurrent request limit](#real-time-speech-to-text-and-speech-translation)
-
-Before requesting a quota increase (where applicable), ensure that it's necessary. The Speech service uses autoscaling technologies to bring up the required computational resources on demand. At the same time, the Speech service tries to keep your costs low by not maintaining an excessive amount of hardware capacity.
-
-Let's look at an example. Suppose that your application receives response code 429, which indicates that there are too many requests. Your application receives this response even though your workload is within the limits defined by the [Quotas and limits reference](#quotas-and-limits-reference). The most likely explanation is that the Speech service is scaling up to meet your demand and hasn't yet reached the required scale. Therefore, the service doesn't immediately have enough resources to serve the request. In most cases, this throttled state is transient.
-
-### General best practices to mitigate throttling during autoscaling
-
-To minimize issues related to throttling, it's a good idea to use the following techniques:
-
-- Implement retry logic in your application (a minimal sketch follows this list).
-- Avoid sharp changes in the workload. Increase the workload gradually. For example, let's say your application is using text to speech, and your current workload is 5 TPS. The next second, you increase the load to 20 TPS (that is, four times more). Speech service immediately starts scaling up to fulfill the new load, but is unable to scale as needed within one second. Some of the requests will get response code 429 (too many requests).
-- Test different load increase patterns. For more information, see the [workload pattern example](#example-of-a-workload-pattern-best-practice).
-- Create additional Speech service resources in *different* regions, and distribute the workload among them. (Creating multiple Speech service resources in the same region won't affect performance, because all resources are served by the same backend cluster.)
-
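-The following C# sketch shows one possible shape for the retry logic mentioned above, using exponential backoff on response code 429. The attempt count and delays are illustrative assumptions, not service-mandated values.
-
-```csharp
-using System;
-using System.Net;
-using System.Net.Http;
-using System.Threading.Tasks;
-
-// Sends an HTTP request to a Speech REST API and retries on 429 (too many requests).
-// A new HttpRequestMessage is created for every attempt because a request can't be reused.
-static async Task<HttpResponseMessage> SendWithRetryAsync(
-    HttpClient client, Func<HttpRequestMessage> createRequest, int maxAttempts = 5)
-{
-    var delay = TimeSpan.FromSeconds(1);
-    for (var attempt = 1; ; attempt++)
-    {
-        var response = await client.SendAsync(createRequest());
-        if (response.StatusCode != (HttpStatusCode)429 || attempt == maxAttempts)
-        {
-            return response;
-        }
-
-        // Throttling during autoscaling is usually transient, so back off and try again.
-        await Task.Delay(delay);
-        delay = TimeSpan.FromSeconds(Math.Min(delay.TotalSeconds * 2, 30));
-    }
-}
-```
-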
-The next sections describe specific cases of adjusting quotas.
-
-### Speech to text: increase real-time speech to text concurrent request limit
-
-By default, the number of concurrent real-time speech to text and speech translation [requests combined](#real-time-speech-to-text-and-speech-translation) is limited to 100 per resource in the base model, and 100 per custom endpoint in the custom model. For the standard pricing tier, you can increase this amount. Before submitting the request, ensure that you're familiar with the material discussed earlier in this article, such as the best practices to mitigate throttling.
-
->[!NOTE]
-> Concurrent request limits for base and custom models need to be adjusted separately. You can have a Speech service resource that's associated with many custom endpoints hosting many custom model deployments. As needed, the limit adjustments per custom endpoint must be requested separately.
-
-Increasing the limit of concurrent requests doesn't directly affect your costs. The Speech service uses a payment model that requires that you pay only for what you use. The limit defines how high the service can scale before it starts throttling your requests.
-
-You aren't able to see the existing value of the concurrent request limit parameter in the Azure portal, the command-line tools, or API requests. To verify the existing value, create an Azure support request.
-
->[!NOTE]
->[Speech containers](speech-container-howto.md) don't require increases of the concurrent request limit, because containers are constrained only by the CPUs of the hardware they are hosted on. Speech containers do, however, have their own capacity limitations that should be taken into account. For more information, see the [Speech containers FAQ](./speech-container-howto.md).
-
-#### Have the required information ready
-
-- For the base model:
- - Speech resource ID
- - Region
-- For the custom model:
- - Region
- - Custom endpoint ID
-
-How to get information for the base model:
-
-1. Go to the [Azure portal](https://portal.azure.com/).
-1. Select the Speech service resource for which you would like to increase the concurrency request limit.
-1. From the **Resource Management** group, select **Properties**.
-1. Copy and save the values of the following fields:
- - **Resource ID**
- - **Location** (your endpoint region)
-
-How to get information for the custom model:
-
-1. Go to the [Speech Studio](https://aka.ms/speechstudio/customspeech) portal.
-1. Sign in if necessary, and go to **Custom Speech**.
-1. Select your project, and go to **Deployment**.
-1. Select the required endpoint.
-1. Copy and save the values of the following fields:
- - **Service Region** (your endpoint region)
- - **Endpoint ID**
-
-#### Create and submit a support request
-
-To initiate an increase of the concurrent request limit for your resource, or to check the current limit if necessary, submit a support request. Here's how:
-
-1. Ensure you have the required information listed in the previous section.
-1. Go to the [Azure portal](https://portal.azure.com/).
-1. Select the Speech service resource for which you would like to increase (or to check) the concurrency request limit.
-1. In the **Support + troubleshooting** group, select **New support request**. A new window will appear, with auto-populated information about your Azure subscription and Azure resource.
-1. In **Summary**, describe what you want (for example, "Increase speech to text concurrency request limit").
-1. In **Problem type**, select **Quota or Subscription issues**.
-1. In **Problem subtype**, select either:
- - **Quota or concurrent requests increase** for an increase request.
- - **Quota or usage validation** to check the existing limit.
-1. Select **Next: Solutions**. Proceed further with the request creation.
-1. On the **Details** tab, in the **Description** field, enter the following:
- - A note that the request is about the speech to text quota.
- - Choose either the base or custom model.
- - The Azure resource information you [collected previously](#have-the-required-information-ready).
- - Any other required information.
-1. On the **Review + create** tab, select **Create**.
-1. Note the support request number in Azure portal notifications. You'll be contacted shortly about your request.
-
-### Example of a workload pattern best practice
-
-Here's a general example of a good approach to take. It's meant only as a template that you can adjust as necessary for your own use.
-
-Suppose that a Speech service resource has the concurrent request limit set to 300. Start the workload from 20 concurrent connections, and increase the load by 20 concurrent connections every 90-120 seconds. Monitor the service responses, and implement logic that falls back (reduces the load) if you get too many requests (response code 429). Then, retry the load increase in one minute, and if it still doesn't work, try again in two minutes. Use a pattern of 1-2-4-4 minutes for the intervals.
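-
-A highly simplified C# sketch of this ramp-up pattern follows. The `tryRunAtConcurrencyAsync` delegate is a placeholder for your own load generator; assume it returns `false` when the service responds with code 429. The specific numbers mirror the example above and should be adjusted for your workload.
-
-```csharp
-using System;
-using System.Threading.Tasks;
-
-static async Task RampUpAsync(Func<int, Task<bool>> tryRunAtConcurrencyAsync)
-{
-    // Backoff intervals (in minutes) to use after a throttled step: 1-2-4-4.
-    var backoffMinutes = new[] { 1, 2, 4, 4 };
-
-    for (var concurrency = 20; concurrency <= 300; concurrency += 20)
-    {
-        var succeeded = await tryRunAtConcurrencyAsync(concurrency);
-
-        // If the step was throttled, fall back and retry the same step later.
-        for (var i = 0; !succeeded && i < backoffMinutes.Length; i++)
-        {
-            await Task.Delay(TimeSpan.FromMinutes(backoffMinutes[i]));
-            succeeded = await tryRunAtConcurrencyAsync(concurrency);
-        }
-
-        // Hold the current load for 90-120 seconds before the next increase.
-        await Task.Delay(TimeSpan.FromSeconds(100));
-    }
-}
-```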
-
-Generally, it's a very good idea to test the workload and the workload patterns before going to production.
-
-### Text to speech: increase concurrent request limit
-
-For the standard pricing tier, you can increase the default limit on text to speech transactions per time period (see the [quotas and limits reference](#text-to-speech-quotas-and-limits-per-resource)). Before submitting the request, ensure that you're familiar with the material discussed earlier in this article, such as the best practices to mitigate throttling.
-
-Increasing the limit of concurrent requests doesn't directly affect your costs. The Speech service uses a payment model that requires that you pay only for what you use. The limit defines how high the service can scale before it starts throttling your requests.
-
-You aren't able to see the existing value of the concurrent request limit parameter in the Azure portal, the command-line tools, or API requests. To verify the existing value, create an Azure support request.
-
->[!NOTE]
->[Speech containers](speech-container-howto.md) don't require increases of the concurrent request limit, because containers are constrained only by the CPUs of the hardware they are hosted on.
-
-#### Prepare the required information
-
-To create an increase request, you need to provide your information.
-
-- For the prebuilt voice:
- - Speech resource ID
- - Region
-- For the custom voice:
- - Deployment region
- - Custom endpoint ID
-
-How to get information for the prebuilt voice:
-
-1. Go to the [Azure portal](https://portal.azure.com/).
-1. Select the Speech service resource for which you would like to increase the concurrency request limit.
-1. From the **Resource Management** group, select **Properties**.
-1. Copy and save the values of the following fields:
- - **Resource ID**
- - **Location** (your endpoint region)
-
-How to get information for the custom voice:
-
-1. Go to the [Speech Studio](https://aka.ms/speechstudio/customvoice) portal.
-1. Sign in if necessary, and go to **Custom Voice**.
-1. Select your project, and go to **Deploy model**.
-1. Select the required endpoint.
-1. Copy and save the values of the following fields:
- - **Service Region** (your endpoint region)
- - **Endpoint ID**
-
-#### Create and submit a support request
-
-To initiate an increase of the concurrent request limit for your resource, or to check the current limit if necessary, submit a support request. Here's how:
-
-1. Ensure you have the required information listed in the previous section.
-1. Go to the [Azure portal](https://portal.azure.com/).
-1. Select the Speech service resource for which you would like to increase (or to check) the concurrency request limit.
-1. In the **Support + troubleshooting** group, select **New support request**. A new window will appear, with auto-populated information about your Azure subscription and Azure resource.
-1. In **Summary**, describe what you want (for example, "Increase text to speech concurrency request limit").
-1. In **Problem type**, select **Quota or Subscription issues**.
-1. In **Problem subtype**, select either:
- - **Quota or concurrent requests increase** for an increase request.
- - **Quota or usage validation** to check the existing limit.
-1. On the **Recommended solution** tab, select **Next**.
-1. On the **Additional details** tab, fill in all the required items. In the **Details** field, enter the following:
- - A note that the request is about the text to speech quota.
- - Choose either the prebuilt voice or custom voice.
- - The Azure resource information you [collected previously](#prepare-the-required-information).
- - Any other required information.
-1. On the **Review + create** tab, select **Create**.
-1. Note the support request number in Azure portal notifications. You'll be contacted shortly about your request.
cognitive-services Speech Ssml Phonetic Sets https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/speech-ssml-phonetic-sets.md
- Title: Speech phonetic alphabets - Speech service-
-description: This article presents Speech service phonetic alphabet and International Phonetic Alphabet (IPA) examples.
------ Previously updated : 09/16/2022---
-# SSML phonetic alphabets
-
-Phonetic alphabets are used with the [Speech Synthesis Markup Language (SSML)](speech-synthesis-markup.md) to improve the pronunciation of text to speech voices. To learn when and how to use each alphabet, see [Use phonemes to improve pronunciation](speech-synthesis-markup-pronunciation.md#phoneme-element).
-
-Speech service supports the [International Phonetic Alphabet (IPA)](https://en.wikipedia.org/wiki/International_Phonetic_Alphabet) suprasegmentals that are listed here. You set `ipa` as the `alphabet` in [SSML](speech-synthesis-markup-pronunciation.md#phoneme-element).
-
-|`ipa` | Symbol | Note|
-|-|-|-|
-| `ˈ` | Primary stress | Don’t use single quote ( ‘ or ' ) though it looks similar. |
-| `ˌ` | Secondary stress | Don’t use comma ( , ) though it looks similar. |
-| `.` | Syllable boundary | |
-| `ː` | Long | Don’t use colon ( : or :) though it looks similar. |
-| `‿` | Linking | |
-
-> [!TIP]
-> You can use [the international phonetic alphabet keyboard](https://www.internationalphoneticalphabet.org/html-ipa-keyboard-v1/keyboard/) to create the correct `ipa` suprasegmentals.
-
-For some locales, Speech service defines its own phonetic alphabets, which ordinarily map to the [International Phonetic Alphabet (IPA)](https://en.wikipedia.org/wiki/International_Phonetic_Alphabet). The eight locales that support the Microsoft Speech API (SAPI, or `sapi`) are en-US, fr-FR, de-DE, es-ES, ja-JP, zh-CN, zh-HK, and zh-TW. For those eight locales, you set `sapi` or `ipa` as the `alphabet` in [SSML](speech-synthesis-markup-pronunciation.md#phoneme-element).
-
-See the sections in this article for the phonemes that are specific to each locale.
-
-> [!NOTE]
-> The following tables list viseme IDs corresponding to phonemes for different locales. When viseme ID is 0, it indicates silence.
-
-## ar-EG/ar-SA
-
-## bg-BG
-
-## ca-ES
-
-## cs-CZ
-
-## da-DK
-
-## de-DE/de-CH/de-AT
-
-## el-GR
-
-## en-GB/en-IE/en-AU
-
-## :::no-loc text="en-US/en-CA":::
-
-## es-ES
-
-## es-MX
-
-## fi-FI
-
-## fr-FR/fr-CA/fr-CH
-
-## he-IL
-
-## hr-HR
-
-## hu-HU
-
-## id-ID
-
-## it-IT
-
-## ja-JP
-
-## ko-KR
-
-## ms-MY
-
-## nb-NO
-
-## nl-NL/nl-BE
-
-## pl-PL
-
-## pt-BR
-
-## pt-PT
-
-## ro-RO
-
-## ru-RU
-
-## sk-SK
-
-## sl-SI
-
-## sv-SE
-
-## th-TH
-
-## tr-TR
-
-## vi-VN
-
-## zh-CN
-
-## zh-HK
-
-## zh-TW
-
-## Map X-SAMPA to IPA
--
cognitive-services Speech Studio Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/speech-studio-overview.md
- Title: "Speech Studio overview - Speech service"-
-description: Speech Studio is a set of UI-based tools for building and integrating features from Speech service in your applications.
------ Previously updated : 09/25/2022---
-# What is Speech Studio?
-
-[Speech Studio](https://aka.ms/speechstudio/) is a set of UI-based tools for building and integrating features from Azure Cognitive Services Speech service in your applications. You create projects in Speech Studio by using a no-code approach, and then reference those assets in your applications by using the [Speech SDK](speech-sdk.md), the [Speech CLI](spx-overview.md), or the REST APIs.
-
-> [!TIP]
-> You can try speech to text and text to speech in [Speech Studio](https://aka.ms/speechstudio/) without signing up or writing any code.
-
-## Speech Studio scenarios
-
-Explore, try out, and view sample code for some common use cases.
-
-* [Captioning](https://aka.ms/speechstudio/captioning): Choose a sample video clip to see real-time or offline processed captioning results. Learn how to synchronize captions with your input audio, apply profanity filters, get partial results, apply customizations, and identify spoken languages for multilingual scenarios. For more information, see the [captioning quickstart](captioning-quickstart.md).
-
-* [Call Center](https://aka.ms/speechstudio/callcenter): View a demonstration on how to use the Language and Speech services to analyze call center conversations. Transcribe calls in real-time or process a batch of calls, redact personally identifying information, and extract insights such as sentiment to help with your call center use case. For more information, see the [call center quickstart](call-center-quickstart.md).
-
-For a demonstration of these scenarios in Speech Studio, view this [introductory video](https://youtu.be/mILVerU6DAw).
-> [!VIDEO https://www.youtube.com/embed/mILVerU6DAw]
-
-## Speech Studio features
-
-In Speech Studio, the following Speech service features are available as project types:
-
-* [Real-time speech to text](https://aka.ms/speechstudio/speechtotexttool): Quickly test speech to text by dragging audio files here without having to use any code. This is a demo tool for seeing how speech to text works on your audio samples. To explore the full functionality, see [What is speech to text](speech-to-text.md).
-
-* [Batch speech to text](https://aka.ms/speechstudio/batchspeechtotext): Quickly test batch transcription capabilities to transcribe a large amount of audio in storage and receive results asynchronously. To learn more about batch speech to text, see the [Batch speech to text overview](batch-transcription.md).
-
-* [Custom Speech](https://aka.ms/speechstudio/customspeech): Create speech recognition models that are tailored to specific vocabulary sets and styles of speaking. In contrast to the base speech recognition model, Custom Speech models become part of your unique competitive advantage because they're not publicly accessible. To get started with uploading sample audio to create a Custom Speech model, see [Upload training and testing datasets](how-to-custom-speech-upload-data.md).
-
-* [Pronunciation assessment](https://aka.ms/speechstudio/pronunciationassessment): Evaluate speech pronunciation and give speakers feedback on the accuracy and fluency of spoken audio. Speech Studio provides a sandbox for testing this feature quickly, without code. To use the feature with the Speech SDK in your applications, see the [Pronunciation assessment](how-to-pronunciation-assessment.md) article.
-
-* [Speech Translation](https://aka.ms/speechstudio/speechtranslation): Quickly test and translate speech into other languages of your choice with low latency. To explore the full functionality, see [What is speech translation](speech-translation.md).
-
-* [Voice Gallery](https://aka.ms/speechstudio/voicegallery): Build apps and services that speak naturally. Choose from a broad portfolio of [languages, voices, and variants](language-support.md?tabs=tts). Bring your scenarios to life with highly expressive and human-like neural voices.
-
-* [Custom Voice](https://aka.ms/speechstudio/customvoice): Create custom, one-of-a-kind voices for text to speech. You supply audio files and create matching transcriptions in Speech Studio, and then use the custom voices in your applications. To create and use custom voices via endpoints, see [Create and use your voice model](how-to-custom-voice-create-voice.md).
-
-* [Audio Content Creation](https://aka.ms/speechstudio/audiocontentcreation): A no-code approach for text to speech synthesis. You can use the output audio as-is, or as a starting point for further customization. You can build highly natural audio content for a variety of scenarios, such as audiobooks, news broadcasts, video narrations, and chat bots. For more information, see the [Audio Content Creation](how-to-audio-content-creation.md) documentation.
-
-* [Custom Keyword](https://aka.ms/speechstudio/customkeyword): A custom keyword is a word or short phrase that you can use to voice-activate a product. You create a custom keyword in Speech Studio, and then generate a binary file to [use with the Speech SDK](custom-keyword-basics.md) in your applications.
-
-* [Custom Commands](https://aka.ms/speechstudio/customcommands): Easily build rich, voice-command apps that are optimized for voice-first interaction experiences. Custom Commands provides a code-free authoring experience in Speech Studio, an automatic hosting model, and relatively lower complexity. The feature helps you focus on building the best solution for your voice-command scenarios. For more information, see the [Develop Custom Commands applications](how-to-develop-custom-commands-application.md) guide. Also see [Integrate with a client application by using the Speech SDK](how-to-custom-commands-setup-speech-sdk.md).
-
-## Next steps
-
-* [Explore Speech Studio](https://speech.microsoft.com)
cognitive-services Speech Synthesis Markup Pronunciation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/speech-synthesis-markup-pronunciation.md
- Title: Pronunciation with Speech Synthesis Markup Language (SSML) - Speech service-
-description: Learn about Speech Synthesis Markup Language (SSML) elements to improve pronunciation.
------ Previously updated : 11/30/2022---
-# Pronunciation with SSML
-
-You can use Speech Synthesis Markup Language (SSML) with text to speech to specify how the speech is pronounced. For example, you can use SSML with phonemes and a custom lexicon to improve pronunciation. You can also use SSML to define how a word or mathematical expression is pronounced.
-
-Refer to the sections below for details about how to use SSML elements to improve pronunciation. For more information about SSML syntax, see [SSML document structure and events](speech-synthesis-markup-structure.md).
-
-## phoneme element
-
-The `phoneme` element is used for phonetic pronunciation in SSML documents. Always provide human-readable speech as a fallback.
-
-Phonetic alphabets are composed of phones, which are made up of letters, numbers, or characters, sometimes in combination. Each phone describes a unique sound of speech. This is in contrast to the Latin alphabet, where any letter might represent multiple spoken sounds. Consider the different `en-US` pronunciations of the letter "c" in the words "candy" and "cease" or the different pronunciations of the letter combination "th" in the words "thing" and "those."
-
-> [!NOTE]
-> For a list of locales that support phonemes, see footnotes in the [language support](language-support.md?tabs=tts) table.
-
-Usage of the `phoneme` element's attributes are described in the following table.
-
-| Attribute | Description | Required or optional |
-| - | - | - |
-| `alphabet` | The phonetic alphabet to use when you synthesize the pronunciation of the string in the `ph` attribute. The string that specifies the alphabet must be specified in lowercase letters. The following options are the possible alphabets that you can specify:<ul><li>`ipa` &ndash; See [SSML phonetic alphabets](speech-ssml-phonetic-sets.md)</li><li>`sapi` &ndash; See [SSML phonetic alphabets](speech-ssml-phonetic-sets.md)</li><li>`ups` &ndash; See [Universal Phone Set](https://documentation.help/Microsoft-Speech-Platform-SDK-11/17509a49-cae7-41f5-b61d-07beaae872ea.htm)</li><li>`x-sampa` &ndash; See [SSML phonetic alphabets](speech-ssml-phonetic-sets.md#map-x-sampa-to-ipa)</li></ul><br>The alphabet applies only to the `phoneme` in the element. | Optional |
-| `ph` | A string containing phones that specify the pronunciation of the word in the `phoneme` element. If the specified string contains unrecognized phones, text to speech rejects the entire SSML document and produces none of the speech output specified in the document.<br/><br/>For `ipa`, to stress one syllable by placing a stress symbol before it, you need to mark all syllables of the word. Otherwise, the syllable before the stress symbol is stressed. For `sapi`, to stress one syllable, place the stress symbol after that syllable, whether or not all syllables of the word are marked.| Required |
-
-### phoneme examples
-
-The supported values for attributes of the `phoneme` element were [described previously](#phoneme-element). In the first two examples, the values of `ph="tə.ˈmeɪ.toʊ"` or `ph="təmeɪˈtoʊ"` are specified to stress the syllable `meɪ`.
-
-```xml
-<speak version="1.0" xmlns="http://www.w3.org/2001/10/synthesis" xml:lang="en-US">
- <voice name="en-US-JennyNeural">
- <phoneme alphabet="ipa" ph="tə.ˈmeɪ.toʊ"> tomato </phoneme>
- </voice>
-</speak>
-```
-```xml
-<speak version="1.0" xmlns="http://www.w3.org/2001/10/synthesis" xml:lang="en-US">
- <voice name="en-US-JennyNeural">
- <phoneme alphabet="ipa" ph="təmeɪˈtoʊ"> tomato </phoneme>
- </voice>
-</speak>
-```
-```xml
-<speak version="1.0" xmlns="https://www.w3.org/2001/10/synthesis" xml:lang="en-US">
- <voice name="en-US-JennyNeural">
- <phoneme alphabet="sapi" ph="iy eh n y uw eh s"> en-US </phoneme>
- </voice>
-</speak>
-```
-
-```xml
-<speak version="1.0" xmlns="http://www.w3.org/2001/10/synthesis" xml:lang="en-US">
- <voice name="en-US-JennyNeural">
- <s>His name is Mike <phoneme alphabet="ups" ph="JH AU"> Zhou </phoneme></s>
- </voice>
-</speak>
-```
-
-```xml
-<speak version="1.0" xmlns="http://www.w3.org/2001/10/synthesis" xml:lang="en-US">
- <voice name="en-US-JennyNeural">
- <phoneme alphabet='x-sampa' ph='he."lou'>hello</phoneme>
- </voice>
-</speak>
-```
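-
-These SSML snippets can be passed to the service through the Speech SDK. The following minimal C# sketch (assuming the `SPEECH_KEY` and `SPEECH_REGION` environment variables, which aren't part of the examples above) synthesizes the first phoneme example with `SpeakSsmlAsync`.
-
-```csharp
-using System;
-using System.Threading.Tasks;
-using Microsoft.CognitiveServices.Speech;
-
-class Program
-{
-    static async Task Main()
-    {
-        var config = SpeechConfig.FromSubscription(
-            Environment.GetEnvironmentVariable("SPEECH_KEY"),
-            Environment.GetEnvironmentVariable("SPEECH_REGION"));
-
-        // Any of the phoneme examples above can be used as the SSML payload.
-        var ssml = @"<speak version=""1.0"" xmlns=""http://www.w3.org/2001/10/synthesis"" xml:lang=""en-US"">
-  <voice name=""en-US-JennyNeural"">
-    <phoneme alphabet=""ipa"" ph=""tə.ˈmeɪ.toʊ""> tomato </phoneme>
-  </voice>
-</speak>";
-
-        using var synthesizer = new SpeechSynthesizer(config);
-        using var result = await synthesizer.SpeakSsmlAsync(ssml);
-        Console.WriteLine(result.Reason);
-    }
-}
-```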
-
-## Custom lexicon
-
-You can define how single entities (such as a company name, a medical term, or an emoji) are read in SSML by using the [phoneme](#phoneme-element) and [sub](#sub-element) elements. To define how multiple entities are read, create an XML-structured custom lexicon file. Then upload the custom lexicon XML file and reference it with the SSML `lexicon` element.
-
-> [!NOTE]
-> For a list of locales that support custom lexicon, see footnotes in the [language support](language-support.md?tabs=tts) table.
->
-> The `lexicon` element is not supported by the [Long Audio API](migrate-to-batch-synthesis.md#text-inputs). For long-form text to speech, use the [batch synthesis API](batch-synthesis.md) (Preview) instead.
-
-Usage of the `lexicon` element's attributes are described in the following table.
-
-| Attribute | Description | Required or optional |
-| | - | - |
-| `uri` | The URI of the publicly accessible custom lexicon XML file with either the `.xml` or `.pls` file extension. Using [Azure Blob Storage](../../storage/blobs/storage-quickstart-blobs-portal.md) is recommended but not required. For more information about the custom lexicon file, see [Pronunciation Lexicon Specification (PLS) Version 1.0](https://www.w3.org/TR/pronunciation-lexicon/).| Required |
-
-### Custom lexicon examples
-
-The supported values for attributes of the `lexicon` element were [described previously](#custom-lexicon).
-
-After you've published your custom lexicon, you can reference it from your SSML. The following SSML example references a custom lexicon that was uploaded to `https://www.example.com/customlexicon.xml`.
-
-```xml
-<speak version="1.0" xmlns="http://www.w3.org/2001/10/synthesis"
- xmlns:mstts="http://www.w3.org/2001/mstts"
- xml:lang="en-US">
- <voice name="en-US-JennyNeural">
- <lexicon uri="https://www.example.com/customlexicon.xml"/>
- BTW, we will be there probably at 8:00 tomorrow morning.
- Could you help leave a message to Robert Benigni for me?
- </voice>
-</speak>
-```
-
-### Custom lexicon file
-
-To define how multiple entities are read, you can define them in a custom lexicon XML file with either the `.xml` or `.pls` file extension.
-
-> [!NOTE]
-> The custom lexicon file is a valid XML document, but it cannot be used as an SSML document.
-
-Here are some limitations of the custom lexicon file:
-
-- **File size**: The custom lexicon file size is limited to a maximum of 100 KB. If the file size exceeds the 100-KB limit, the synthesis request fails.
-- **Lexicon cache refresh**: The custom lexicon is cached with the URI as the key on text to speech when it's first loaded. The lexicon with the same URI won't be reloaded within 15 minutes, so a custom lexicon change can take up to 15 minutes to take effect.
-
-The supported elements and attributes of a custom lexicon XML file are described in the [Pronunciation Lexicon Specification (PLS) Version 1.0](https://www.w3.org/TR/pronunciation-lexicon/). Here are some examples of the supported elements and attributes:
-
-- The `lexicon` element contains at least one `lexeme` element. The `lexicon` element also contains the required `xml:lang` attribute to indicate the locale that the lexicon applies to. One custom lexicon is limited to one locale by design, so if you apply it to a different locale, it won't work. The `lexicon` element also has an `alphabet` attribute to indicate the alphabet used in the lexicon. The possible values are `ipa` and `x-microsoft-sapi`.
-- Each `lexeme` element contains at least one `grapheme` element and one or more `grapheme`, `alias`, and `phoneme` elements. The `lexeme` element is case sensitive in the custom lexicon. For example, if you only provide a phoneme for the `lexeme` "Hello", it won't work for the `lexeme` "hello".
-- The `grapheme` element contains text that describes the [orthography](https://www.w3.org/TR/pronunciation-lexicon/#term-Orthography).
-- The `alias` elements are used to indicate the pronunciation of an acronym or an abbreviated term.
-- The `phoneme` element provides text that describes how the `lexeme` is pronounced. The syllable boundary is '.' in the IPA alphabet. The `phoneme` element can't contain white space when you use the IPA alphabet.
-- When the `alias` and `phoneme` elements are provided with the same `grapheme` element, `alias` has higher priority.
-
-Microsoft provides a [validation tool for the custom lexicon](https://github.com/Azure-Samples/Cognitive-Speech-TTS/tree/master/CustomLexiconValidation) that helps you find errors (with detailed error messages) in the custom lexicon file. Using the tool is recommended before you use the custom lexicon XML file in production with the Speech service.
-
-#### Custom lexicon file examples
-
-The following XML example (not SSML) would be contained in a custom lexicon `.xml` file. When you use this custom lexicon, "BTW" is read as "By the way." "Benigni" is read with the provided IPA "bɛˈniːnji."
-
-```xml
-<?xml version="1.0" encoding="UTF-8"?>
-<lexicon version="1.0"
- xmlns="http://www.w3.org/2005/01/pronunciation-lexicon"
- xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
- xsi:schemaLocation="http://www.w3.org/2005/01/pronunciation-lexicon
- http://www.w3.org/TR/2007/CR-pronunciation-lexicon-20071212/pls.xsd"
- alphabet="ipa" xml:lang="en-US">
- <lexeme>
- <grapheme>BTW</grapheme>
- <alias>By the way</alias>
- </lexeme>
- <lexeme>
- <grapheme> Benigni </grapheme>
- <phoneme> bɛˈniːnji</phoneme>
- </lexeme>
- <lexeme>
- <grapheme>😀</grapheme>
- <alias>test emoji</alias>
- </lexeme>
-</lexicon>
-```
-
-You can't directly set the pronunciation of a phrase by using the custom lexicon. If you need to set the pronunciation for an acronym or an abbreviated term, first provide an `alias`, and then associate the `phoneme` with that `alias`. For example:
-
-```xml
-<?xml version="1.0" encoding="UTF-8"?>
-<lexicon version="1.0"
- xmlns="http://www.w3.org/2005/01/pronunciation-lexicon"
- xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
- xsi:schemaLocation="http://www.w3.org/2005/01/pronunciation-lexicon
- http://www.w3.org/TR/2007/CR-pronunciation-lexicon-20071212/pls.xsd"
- alphabet="ipa" xml:lang="en-US">
- <lexeme>
- <grapheme>Scotland MV</grapheme>
- <alias>ScotlandMV</alias>
- </lexeme>
- <lexeme>
- <grapheme>ScotlandMV</grapheme>
- <phoneme>ˈskɒtlənd.ˈmiːdiəm.weɪv</phoneme>
- </lexeme>
-</lexicon>
-```
-
-You could also directly provide your expected `alias` for the acronym or abbreviated term. For example:
-
-```xml
- <lexeme>
- <grapheme>Scotland MV</grapheme>
- <alias>Scotland Media Wave</alias>
- </lexeme>
-```
-
-The preceding custom lexicon XML file examples use the IPA alphabet, which is also known as the IPA phone set. We suggest that you use the IPA because it's the international standard. Some IPA characters have both "precomposed" and "decomposed" versions when they're represented in Unicode. The custom lexicon supports only the decomposed Unicode characters.
-
-The Speech service defines a phonetic set for these locales: `en-US`, `fr-FR`, `de-DE`, `es-ES`, `ja-JP`, `zh-CN`, `zh-HK`, and `zh-TW`. For more information on the detailed Speech service phonetic alphabet, see the [Speech service phonetic sets](speech-ssml-phonetic-sets.md).
-
-You can use the `x-microsoft-sapi` as the value for the `alphabet` attribute with custom lexicons as demonstrated here:
-
-```xml
-<?xml version="1.0" encoding="UTF-8"?>
-<lexicon version="1.0"
- xmlns="http://www.w3.org/2005/01/pronunciation-lexicon"
- xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
- xsi:schemaLocation="http://www.w3.org/2005/01/pronunciation-lexicon
- http://www.w3.org/TR/2007/CR-pronunciation-lexicon-20071212/pls.xsd"
- alphabet="x-microsoft-sapi" xml:lang="en-US">
- <lexeme>
- <grapheme>BTW</grapheme>
- <alias> By the way </alias>
- </lexeme>
- <lexeme>
- <grapheme> Benigni </grapheme>
- <phoneme> b eh 1 - n iy - n y iy </phoneme>
- </lexeme>
-</lexicon>
-```
-
-## say-as element
-
-The `say-as` element indicates the content type, such as number or date, of the element's text. This element provides guidance to the speech synthesis engine about how to pronounce the text.
-
-Usage of the `say-as` element's attributes are described in the following table.
-
-| Attribute | Description | Required or optional |
-| - | - | - |
-| `interpret-as` | Indicates the content type of an element's text. For a list of types, see the following table.| Required |
-| `format`| Provides additional information about the precise formatting of the element's text for content types that might have ambiguous formats. SSML defines formats for content types that use them. See the following table. | Optional |
-| `detail` | Indicates the level of detail to be spoken. For example, this attribute might request that the speech synthesis engine pronounce punctuation marks. There are no standard values defined for `detail`. | Optional |
-
-The following content types are supported for the `interpret-as` and `format` attributes. Include the `format` attribute only if the `format` column isn't empty in the following table.
-
-| interpret-as | format | Interpretation |
-| - | - | - |
-| `characters`, `spell-out` | | The text is spoken as individual letters (spelled out). The speech synthesis engine pronounces:<br /><br />`<say-as interpret-as="characters">test</say-as>`<br /><br />As "T E S T." |
-| `cardinal`, `number` | None| The text is spoken as a cardinal number. The speech synthesis engine pronounces:<br /><br />`There are <say-as interpret-as="cardinal">10</say-as> options`<br /><br />As "There are ten options."|
-| `ordinal` | None | The text is spoken as an ordinal number. The speech synthesis engine pronounces:<br /><br />`Select the <say-as interpret-as="ordinal">3rd</say-as> option`<br /><br />As "Select the third option."|
-| `number_digit` | None | The text is spoken as a sequence of individual digits. The speech synthesis engine pronounces:<br /><br />`<say-as interpret-as="number_digit">123456789</say-as>`<br /><br />As "1 2 3 4 5 6 7 8 9." |
-| `fraction` | None | The text is spoken as a fractional number. The speech synthesis engine pronounces:<br /><br /> `<say-as interpret-as="fraction">3/8</say-as> of an inch`<br /><br />As "three eighths of an inch." |
-| `date` | dmy, mdy, ymd, ydm, ym, my, md, dm, d, m, y | The text is spoken as a date. The `format` attribute specifies the date's format (*d=day, m=month, and y=year*). The speech synthesis engine pronounces:<br /><br />`Today is <say-as interpret-as="date" format="mdy">10-19-2016</say-as>`<br /><br />As "Today is October nineteenth two thousand sixteen." |
-| `time` | hms12, hms24 | The text is spoken as a time. The `format` attribute specifies whether the time is specified by using a 12-hour clock (hms12) or a 24-hour clock (hms24). Use a colon to separate numbers representing hours, minutes, and seconds. Here are some valid time examples: 12:35, 1:14:32, 08:15, and 02:50:45. The speech synthesis engine pronounces:<br /><br />`The train departs at <say-as interpret-as="time" format="hms12">4:00am</say-as>`<br /><br />As "The train departs at four A M." |
-| `duration` | hms, hm, ms | The text is spoken as a duration. The `format` attribute specifies the duration's format (*h=hour, m=minute, and s=second*). The speech synthesis engine pronounces:<br /><br />`<say-as interpret-as="duration">01:18:30</say-as>`<br /><br /> As "one hour eighteen minutes and thirty seconds".<br />Pronounces:<br /><br />`<say-as interpret-as="duration" format="ms">01:18</say-as>`<br /><br /> As "one minute and eighteen seconds".<br />This tag is only supported on English and Spanish. |
-| `telephone` | None | The text is spoken as a telephone number. The speech synthesis engine pronounces:<br /><br />`The number is <say-as interpret-as="telephone">(888) 555-1212</say-as>`<br /><br />As "The number is area code eight eight eight five five five one two one two." |
-| `currency` | None | The text is spoken as a currency. The speech synthesis engine pronounces:<br /><br />`<say-as interpret-as="currency">99.9 USD</say-as>`<br /><br />As "ninety-nine US dollars and ninety cents."|
-| `address`| None | The text is spoken as an address. The speech synthesis engine pronounces:<br /><br />`I'm at <say-as interpret-as="address">150th CT NE, Redmond, WA</say-as>`<br /><br />As "I'm at 150th Court Northeast Redmond Washington."|
-| `name` | None | The text is spoken as a person's name. The speech synthesis engine pronounces:<br /><br />`<say-as interpret-as="name">ED</say-as>`<br /><br />As [æd]. <br />In Chinese names, some characters are pronounced differently when they appear in a family name. For example, the speech synthesis engine says 仇 in <br /><br />`<say-as interpret-as="name">仇先生</say-as>`<br /><br /> As [qiú] instead of [chóu]. |
-
-### say-as examples
-
-The supported values for attributes of the `say-as` element were [described previously](#say-as-element).
-
-The speech synthesis engine speaks the following example as "Your first request was for one room on October nineteenth twenty ten with early arrival at twelve thirty five PM."
-
-```xml
-<speak version="1.0" xmlns="http://www.w3.org/2001/10/synthesis" xml:lang="en-US">
- <voice name="en-US-JennyNeural">
- <p>
- Your <say-as interpret-as="ordinal"> 1st </say-as> request was for <say-as interpret-as="cardinal"> 1 </say-as> room
- on <say-as interpret-as="date" format="mdy"> 10/19/2010 </say-as>, with early arrival at <say-as interpret-as="time" format="hms12"> 12:35pm </say-as>.
- </p>
- </voice>
-</speak>
-```
-
-## sub element
-
-Use the `sub` element to indicate that the alias attribute's text value should be pronounced instead of the element's enclosed text. In this way, the SSML contains both a spoken and written form.
-
-Usage of the `sub` element's attributes are described in the following table.
-
-| Attribute | Description | Required or optional |
-| - | - | - |
-| `alias` | The text value that should be pronounced instead of the element's enclosed text. | Required |
-
-### sub examples
-
-The supported values for attributes of the `sub` element were [described previously](#sub-element).
-
-The speech synthesis engine speaks the following example as "World Wide Web Consortium."
-
-```xml
-<speak version="1.0" xmlns="http://www.w3.org/2001/10/synthesis" xml:lang="en-US">
- <voice name="en-US-JennyNeural">
- <sub alias="World Wide Web Consortium">W3C</sub>
- </voice>
-</speak>
-```
-
-## Pronunciation with MathML
-
-The Mathematical Markup Language (MathML) is an XML-compliant markup language that describes mathematical content and structure. The Speech service can use MathML as input text to properly pronounce mathematical notations in the output audio.
-
-> [!NOTE]
-> The MathML elements (tags) are currently supported by all neural voices in the `en-US` and `en-AU` locales.
-
-All elements from the [MathML 2.0](https://www.w3.org/TR/MathML2/) and [MathML 3.0](https://www.w3.org/TR/MathML3/) specifications are supported, except the MathML 3.0 [Elementary Math](https://www.w3.org/TR/MathML3/chapter3.html#presm.elementary) elements.
-
-Take note of these MathML elements and attributes:
-- The `xmlns` attribute in `<math xmlns="http://www.w3.org/1998/Math/MathML">` is optional.
-- The `semantics`, `annotation`, and `annotation-xml` elements don't output speech, so they're ignored.
-- If an element isn't recognized, it will be ignored, and the child elements within it will still be processed.
-
-The MathML entities aren't supported by XML syntax, so you must use the corresponding [Unicode characters](https://www.w3.org/2003/entities/2007/htmlmathml.json) to represent them. For example, the entity `&copy;` should be represented by its Unicode code point `&#x00A9;`; otherwise, an error occurs.
-
-### MathML examples
-
-The text to speech output for this example is "a squared plus b squared equals c squared".
-
-```xml
-<speak version='1.0' xmlns='http://www.w3.org/2001/10/synthesis' xmlns:mstts='http://www.w3.org/2001/mstts' xml:lang='en-US'><voice name='en-US-JennyNeural'><math xmlns='http://www.w3.org/1998/Math/MathML'><msup><mi>a</mi><mn>2</mn></msup><mo>+</mo><msup><mi>b</mi><mn>2</mn></msup><mo>=</mo><msup><mi>c</mi><mn>2</mn></msup></math></voice></speak>
-```
-
-## Next steps
-
-- [SSML overview](speech-synthesis-markup.md)
-- [SSML document structure and events](speech-synthesis-markup-structure.md)
-- [Language support: Voices, locales, languages](language-support.md?tabs=tts)
cognitive-services Speech Synthesis Markup Structure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/speech-synthesis-markup-structure.md
- Title: Speech Synthesis Markup Language (SSML) document structure and events - Speech service-
-description: Learn about the Speech Synthesis Markup Language (SSML) document structure.
------ Previously updated : 11/30/2022---
-# SSML document structure and events
-
-Speech Synthesis Markup Language (SSML), together with the input text, determines the structure, content, and other characteristics of the text to speech output. For example, you can use SSML to define a paragraph, a sentence, a break or a pause, or silence. You can wrap text with event tags such as bookmark or viseme that can be processed later by your application.
-
-Refer to the sections below for details about how to structure elements in the SSML document.
-
-## Document structure
-
-The Speech service implementation of SSML is based on the World Wide Web Consortium's [Speech Synthesis Markup Language Version 1.0](https://www.w3.org/TR/2004/REC-speech-synthesis-20040907/). The elements supported by the Speech service can differ from the W3C standard.
-
-Each SSML document is created with SSML elements or tags. These elements are used to adjust the voice, style, pitch, prosody, volume, and more.
-
-Here's a subset of the basic structure and syntax of an SSML document:
-
-```xml
-<speak version="1.0" xmlns="http://www.w3.org/2001/10/synthesis" xmlns:mstts="https://www.w3.org/2001/mstts" xml:lang="string">
- <mstts:backgroundaudio src="string" volume="string" fadein="string" fadeout="string"/>
- <voice name="string" effect="string">
- <audio src="string"></audio>
- <bookmark mark="string"/>
- <break strength="string" time="string" />
- <emphasis level="value"></emphasis>
- <lang xml:lang="string"></lang>
- <lexicon uri="string"/>
- <math xmlns="http://www.w3.org/1998/Math/MathML"></math>
- <mstts:audioduration value="string"/>
- <mstts:express-as style="string" styledegree="value" role="string"></mstts:express-as>
- <mstts:silence type="string" value="string"/>
- <mstts:viseme type="string"/>
- <p></p>
- <phoneme alphabet="string" ph="string"></phoneme>
- <prosody pitch="value" contour="value" range="value" rate="value" volume="value"></prosody>
- <s></s>
- <say-as interpret-as="string" format="string" detail="string"></say-as>
- <sub alias="string"></sub>
- </voice>
-</speak>
-```
-
-Some examples of contents that are allowed in each element are described in the following list:
-- `audio`: The body of the `audio` element can contain plain text or SSML markup that's spoken if the audio file is unavailable or unplayable. The `audio` element can also contain text and the following elements: `audio`, `break`, `p`, `s`, `phoneme`, `prosody`, `say-as`, and `sub`.
-- `bookmark`: This element can't contain text or any other elements.
-- `break`: This element can't contain text or any other elements.
-- `emphasis`: This element can contain text and the following elements: `audio`, `break`, `emphasis`, `lang`, `phoneme`, `prosody`, `say-as`, and `sub`.
-- `lang`: This element can contain all other elements except `mstts:backgroundaudio`, `voice`, and `speak`.
-- `lexicon`: This element can't contain text or any other elements.
-- `math`: This element can only contain text and MathML elements.
-- `mstts:audioduration`: This element can't contain text or any other elements.
-- `mstts:backgroundaudio`: This element can't contain text or any other elements.
-- `mstts:express-as`: This element can contain text and the following elements: `audio`, `break`, `emphasis`, `lang`, `phoneme`, `prosody`, `say-as`, and `sub`.
-- `mstts:silence`: This element can't contain text or any other elements.
-- `mstts:viseme`: This element can't contain text or any other elements.
-- `p`: This element can contain text and the following elements: `audio`, `break`, `phoneme`, `prosody`, `say-as`, `sub`, `mstts:express-as`, and `s`.
-- `phoneme`: This element can only contain text and no other elements.
-- `prosody`: This element can contain text and the following elements: `audio`, `break`, `p`, `phoneme`, `prosody`, `say-as`, `sub`, and `s`.
-- `s`: This element can contain text and the following elements: `audio`, `break`, `phoneme`, `prosody`, `say-as`, `mstts:express-as`, and `sub`.
-- `say-as`: This element can only contain text and no other elements.
-- `sub`: This element can only contain text and no other elements.
-- `speak`: The root element of an SSML document. This element can contain the following elements: `mstts:backgroundaudio` and `voice`.
-- `voice`: This element can contain all other elements except `mstts:backgroundaudio` and `speak`.
-
-The Speech service automatically handles punctuation as appropriate, such as pausing after a period, or using the correct intonation when a sentence ends with a question mark.
-
-## Special characters
-
-To use the characters `&`, `<`, and `>` within the SSML element's value or text, you must use the entity format. Specifically, you must use `&amp;` in place of `&`, use `&lt;` in place of `<`, and use `&gt;` in place of `>`. Otherwise, the SSML won't be parsed correctly.
-
-For example, specify `green &amp; yellow` instead of `green & yellow`. The following SSML will be parsed as expected:
-
-```xml
-<speak version="1.0" xmlns="http://www.w3.org/2001/10/synthesis" xml:lang="en-US">
- <voice name="en-US-JennyNeural">
- My favorite colors are green &amp; yellow.
- </voice>
-</speak>
-```
-
-Special characters such as quotation marks, apostrophes, and brackets, must be escaped. For more information, see [Extensible Markup Language (XML) 1.0: Appendix D](https://www.w3.org/TR/xml/#sec-entexpand).
-
-Attribute values must be enclosed by double or single quotation marks. For example, `<prosody volume="90">` and `<prosody volume='90'>` are well-formed, valid elements, but `<prosody volume=90>` won't be recognized.
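-
-As an illustrative sketch, the following SSML escapes `<` and `>` within the spoken text so that the document parses correctly:
-
-```xml
-<speak version="1.0" xmlns="http://www.w3.org/2001/10/synthesis" xml:lang="en-US">
-    <voice name="en-US-JennyNeural">
-        In this expression, 3 &lt; 5 and 5 &gt; 3.
-    </voice>
-</speak>
-```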
-
-## Speak root element
-
-The `speak` element is the root element that's required for all SSML documents. The `speak` element contains information such as version, language, and the markup vocabulary definition.
-
-Here's the syntax for the `speak` element:
-
-```xml
-<speak version="1.0" xmlns="http://www.w3.org/2001/10/synthesis" xml:lang="string"></speak>
-```
-
-| Attribute | Description | Required or optional |
-| - | - | - |
-| `version` | Indicates the version of the SSML specification used to interpret the document markup. The current version is "1.0".| Required|
-| `xml:lang` | The language of the root document. The value can contain a language code such as `en` (English), or a locale such as `en-US` (English - United States). | Required |
-| `xmlns` | The URI to the document that defines the markup vocabulary (the element types and attribute names) of the SSML document. The current URI is "http://www.w3.org/2001/10/synthesis". | Required |
-
-The `speak` element must contain at least one [voice element](speech-synthesis-markup-voice.md#voice-element).
-
-### speak examples
-
-The supported values for attributes of the `speak` element were [described previously](#speak-root-element).
-
-#### Single voice example
-
-This example uses the `en-US-JennyNeural` voice. For more examples, see [voice examples](speech-synthesis-markup-voice.md#voice-examples).
-
-```xml
-<speak version="1.0" xmlns="http://www.w3.org/2001/10/synthesis" xml:lang="en-US">
- <voice name="en-US-JennyNeural">
- This is the text that is spoken.
- </voice>
-</speak>
-```
-
-## Add a break
-
-Use the `break` element to override the default behavior of breaks or pauses between words. You can use it to add pauses between words, or to prevent pauses that are otherwise automatically inserted by the Speech service.
-
-Usage of the `break` element's attributes is described in the following table.
-
-| Attribute | Description | Required or optional |
-| - | - | - |
-| `strength` | The relative duration of a pause by using one of the following values:<br/><ul><li>x-weak</li><li>weak</li><li>medium (default)</li><li>strong</li><li>x-strong</li></ul>| Optional |
-| `time` | The absolute duration of a pause in seconds (such as `2s`) or milliseconds (such as `500ms`). Valid values range from 0 to 5000 milliseconds. If you set a value greater than the supported maximum, the service will use `5000ms`. If the `time` attribute is set, the `strength` attribute is ignored.| Optional |
-
-Here are more details about the `strength` attribute.
-
-| Strength | Relative duration |
-| - | - |
-| X-weak | 250 ms |
-| Weak | 500 ms |
-| Medium | 750 ms |
-| Strong | 1,000 ms |
-| X-strong | 1,250 ms |
-
-### Break examples
-
-The supported values for attributes of the `break` element were [described previously](#add-a-break). The following three ways all add 750 ms breaks.
-
-```xml
-<speak version="1.0" xmlns="http://www.w3.org/2001/10/synthesis" xml:lang="en-US">
- <voice name="en-US-JennyNeural">
- Welcome <break /> to text to speech.
- Welcome <break strength="medium" /> to text to speech.
- Welcome <break time="750ms" /> to text to speech.
- </voice>
-</speak>
-```
-
-## Add silence
-
-Use the `mstts:silence` element to insert pauses before or after text, or between two adjacent sentences.
-
-One of the differences between `mstts:silence` and `break` is that a `break` element can be inserted anywhere in the text. Silence only works at the beginning or end of input text or at the boundary of two adjacent sentences.
-
-The silence setting is applied to all input text within its enclosing `voice` element. To reset or change the silence setting again, you must use a new `voice` element with either the same voice or a different voice.
-
-Usage of the `mstts:silence` element's attributes is described in the following table.
-
-| Attribute | Description | Required or optional |
-| - | - | - |
-| `type` | Specifies where and how to add silence. The following silence types are supported:<br/><ul><li>`Leading` – Additional silence at the beginning of the text. The value that you set is added to the natural silence before the start of text.</li><li>`Leading-exact` – Silence at the beginning of the text. The value is an absolute silence length.</li><li>`Tailing` – Additional silence at the end of text. The value that you set is added to the natural silence after the last word.</li><li>`Tailing-exact` – Silence at the end of the text. The value is an absolute silence length.</li><li>`Sentenceboundary` – Additional silence between adjacent sentences. The actual silence length for this type includes the natural silence after the last word in the previous sentence, the value you set for this type, and the natural silence before the starting word in the next sentence.</li><li>`Sentenceboundary-exact` – Silence between adjacent sentences. The value is an absolute silence length.</li><li>`Comma-exact` – Silence at the comma in half-width or full-width format. The value is an absolute silence length.</li><li>`Semicolon-exact` – Silence at the semicolon in half-width or full-width format. The value is an absolute silence length.</li><li>`Enumerationcomma-exact` – Silence at the enumeration comma in full-width format. The value is an absolute silence length.</li></ul><br/>An absolute silence type (with the `-exact` suffix) replaces any otherwise natural leading or trailing silence. Absolute silence types take precedence over the corresponding non-absolute type. For example, if you set both `Leading` and `Leading-exact` types, the `Leading-exact` type will take effect.| Required |
-| `value` | The duration of a pause in seconds (such as `2s`) or milliseconds (such as `500ms`). Valid values range from 0 to 5000 milliseconds. If you set a value greater than the supported maximum, the service will use `5000ms`.| Required |
-
-### mstts silence examples
-
-The supported values for attributes of the `mstts:silence` element were [described previously](#add-silence).
-
-In this example, `mstts:silence` is used to add 200 ms of silence between two sentences.
-
-```xml
-<speak version="1.0" xmlns="http://www.w3.org/2001/10/synthesis" xmlns:mstts="http://www.w3.org/2001/mstts" xml:lang="en-US">
-<voice name="en-US-JennyNeural">
-<mstts:silence type="Sentenceboundary" value="200ms"/>
-If we're home schooling, the best we can do is roll with what each day brings and try to have fun along the way.
-A good place to start is by trying out the slew of educational apps that are helping children stay happy and smash their schooling at the same time.
-</voice>
-</speak>
-```
-
-In this example, `mstts:silence` is used to add 50 ms of silence at the comma, 100 ms of silence at the semicolon, and 150 ms of silence at the enumeration comma.
-
-```xml
-<speak version="1.0" xmlns="http://www.w3.org/2001/10/synthesis" xmlns:mstts="http://www.w3.org/2001/mstts" xml:lang="zh-CN">
-<voice name="zh-CN-YunxiNeural">
-<mstts:silence type="comma-exact" value="50ms"/><mstts:silence type="semicolon-exact" value="100ms"/><mstts:silence type="enumerationcomma-exact" value="150ms"/>你好呀,云希、晓晓;你好呀。
-</voice>
-</speak>
-```
-
-## Specify paragraphs and sentences
-
-The `p` and `s` elements are used to denote paragraphs and sentences, respectively. In the absence of these elements, the Speech service automatically determines the structure of the SSML document.
-
-### Paragraph and sentence examples
-
-The following example defines two paragraphs that each contain sentences. In the second paragraph, the Speech service automatically determines the sentence structure, since the sentences aren't defined in the SSML document.
-
-```xml
-<speak version="1.0" xmlns="http://www.w3.org/2001/10/synthesis" xml:lang="en-US">
- <voice name="en-US-JennyNeural">
- <p>
- <s>Introducing the sentence element.</s>
- <s>Used to mark individual sentences.</s>
- </p>
- <p>
- Another simple paragraph.
- Sentence structure in this paragraph is not explicitly marked.
- </p>
- </voice>
-</speak>
-```
-
-## Bookmark element
-
-You can use the `bookmark` element in SSML to reference a specific location in the text or tag sequence. Then you'll use the Speech SDK and subscribe to the `BookmarkReached` event to get the offset of each marker in the audio stream. The `bookmark` element won't be spoken. For more information, see [Subscribe to synthesizer events](how-to-speech-synthesis.md#subscribe-to-synthesizer-events).
-
-Usage of the `bookmark` element's attributes is described in the following table.
-
-| Attribute | Description | Required or optional |
-| - | - | - |
-| `mark` | The reference text of the `bookmark` element. | Required |
-
-### Bookmark examples
-
-The supported values for attributes of the `bookmark` element were [described previously](#bookmark-element).
-
-As an example, you might want to know the time offset of each flower word in the following snippet:
-
-```xml
-<speak version="1.0" xmlns="http://www.w3.org/2001/10/synthesis" xml:lang="en-US">
- <voice name="en-US-AriaNeural">
- We are selling <bookmark mark='flower_1'/>roses and <bookmark mark='flower_2'/>daisies.
- </voice>
-</speak>
-```
-
-## Viseme element
-
-A viseme is the visual description of a phoneme in spoken language. It defines the position of the face and mouth while a person is speaking. You can use the `mstts:viseme` element in SSML to request viseme output. For more information, see [Get facial position with viseme](how-to-speech-synthesis-viseme.md).
-
-The viseme setting is applied to all input text within its enclosing `voice` element. To reset or change the viseme setting again, you must use a new `voice` element with either the same voice or a different voice.
-
-Usage of the `viseme` element's attributes is described in the following table.
-
-| Attribute | Description | Required or optional |
-| - | - | - |
-| `type` | The type of viseme output.<ul><li>`redlips_front` – lip-sync with viseme ID and audio offset output </li><li>`FacialExpression` – blend shapes output</li></ul> | Required |
-
-> [!NOTE]
-> Currently, `redlips_front` only supports neural voices in the `en-US` locale, and `FacialExpression` supports neural voices in the `en-US` and `zh-CN` locales.
-
-### Viseme examples
-
-The supported values for attributes of the `viseme` element were [described previously](#viseme-element).
-
-This SSML snippet illustrates how to request blend shapes with your synthesized speech.
-
-```xml
-<speak version="1.0" xmlns="http://www.w3.org/2001/10/synthesis" xmlns:mstts="http://www.w3.org/2001/mstts" xml:lang="en-US">
- <voice name="en-US-JennyNeural">
- <mstts:viseme type="FacialExpression"/>
- Rainbow has seven colors: Red, orange, yellow, green, blue, indigo, and violet.
- </voice>
-</speak>
-```
-
-## Next steps
-
-- [SSML overview](speech-synthesis-markup.md)
-- [Voice and sound with SSML](speech-synthesis-markup-voice.md)
-- [Language support: Voices, locales, languages](language-support.md?tabs=tts)
cognitive-services Speech Synthesis Markup Voice https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/speech-synthesis-markup-voice.md
- Title: Voice and sound with Speech Synthesis Markup Language (SSML) - Speech service-
-description: Learn about Speech Synthesis Markup Language (SSML) elements to determine what your output audio will sound like.
------ Previously updated : 11/30/2022---
-# Voice and sound with SSML
-
-Use Speech Synthesis Markup Language (SSML) to specify the text to speech voice, language, name, style, and role. You can use multiple voices in a single SSML document. Adjust the emphasis, speaking rate, pitch, and volume. You can also use SSML to insert pre-recorded audio, such as a sound effect or a musical note.
-
-Refer to the sections below for details about how to use SSML elements to specify voice and sound. For more information about SSML syntax, see [SSML document structure and events](speech-synthesis-markup-structure.md).
-
-## Voice element
-
-At least one `voice` element must be specified within each SSML [speak](speech-synthesis-markup-structure.md#speak-root-element) element. This element determines the voice that's used for text to speech.
-
-You can include multiple `voice` elements in a single SSML document. Each `voice` element can specify a different voice. You can also use the same voice multiple times with different settings, such as when you [change the silence duration](speech-synthesis-markup-structure.md#add-silence) between sentences.
--
-Usage of the `voice` element's attributes is described in the following table.
-
-| Attribute | Description | Required or optional |
-| - | - | - |
-| `name` | The voice used for text to speech output. For a complete list of supported prebuilt voices, see [Language support](language-support.md?tabs=tts).| Required|
-| `effect` |The audio effect processor that's used to optimize the quality of the synthesized speech output for specific scenarios on devices. <br/><br/>For some scenarios in production environments, the auditory experience may be degraded due to the playback distortion on certain devices. For example, the synthesized speech from a car speaker may sound dull and muffled due to environmental factors such as speaker response, room reverberation, and background noise. The passenger might have to turn up the volume to hear more clearly. To avoid manual operations in such a scenario, the audio effect processor can make the sound clearer by compensating for the distortion of playback.<br/><br/>The following values are supported:<br/><ul><li>`eq_car` – Optimize the auditory experience when providing high-fidelity speech in cars, buses, and other enclosed automobiles.</li><li>`eq_telecomhp8k` – Optimize the auditory experience for narrowband speech in telecom or telephone scenarios. We recommend a sampling rate of 8 kHz. If the sample rate isn't 8 kHz, the auditory quality of the output speech won't be optimized.</li></ul><br/>If the value is missing or invalid, this attribute will be ignored and no effect will be applied.| Optional |
-
-### Voice examples
-
-The supported values for attributes of the `voice` element were [described previously](#voice-element).
-
-#### Single voice example
-
-This example uses the `en-US-JennyNeural` voice.
-
-```xml
-<speak version="1.0" xmlns="http://www.w3.org/2001/10/synthesis" xml:lang="en-US">
- <voice name="en-US-JennyNeural">
- This is the text that is spoken.
- </voice>
-</speak>
-```
-
-#### Multiple voices example
-
-Within the `speak` element, you can specify multiple voices for text to speech output. These voices can be in different languages. For each voice, the text must be wrapped in a `voice` element.
-
-This example alternates between the `en-US-JennyNeural` and `en-US-ChristopherNeural` voices.
-
-```xml
-<speak version="1.0" xmlns="http://www.w3.org/2001/10/synthesis" xml:lang="en-US">
- <voice name="en-US-JennyNeural">
- Good morning!
- </voice>
- <voice name="en-US-ChristopherNeural">
- Good morning to you too Jenny!
- </voice>
-</speak>
-```
-
-#### Custom neural voice example
-
-To use your [custom neural voice](how-to-deploy-and-use-endpoint.md#use-your-custom-voice), specify the model name as the voice name in SSML.
-
-This example uses a custom voice named "my-custom-voice".
-
-```xml
-<speak version="1.0" xmlns="http://www.w3.org/2001/10/synthesis" xml:lang="en-US">
- <voice name="my-custom-voice">
- This is the text that is spoken.
- </voice>
-</speak>
-```
-
-#### Audio effect example
-
-You use the `effect` attribute to optimize the auditory experience for scenarios such as cars and telecommunications. The following SSML example uses the `effect` attribute with the `eq_car` configuration for car scenarios.
-
-```xml
-<speak version="1.0" xmlns="http://www.w3.org/2001/10/synthesis" xml:lang="en-US">
- <voice name="en-US-JennyNeural" effect="eq_car">
- This is the text that is spoken.
- </voice>
-</speak>
-```
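-
-Similarly, this snippet is a sketch of the `eq_telecomhp8k` configuration for telephony scenarios described in the `effect` attribute table; an 8 kHz output sampling rate is recommended there.
-
-```xml
-<speak version="1.0" xmlns="http://www.w3.org/2001/10/synthesis" xml:lang="en-US">
-    <voice name="en-US-JennyNeural" effect="eq_telecomhp8k">
-        This is the text that is spoken.
-    </voice>
-</speak>
-```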
-
-## Speaking styles and roles
-
-By default, neural voices have a neutral speaking style. You can adjust the speaking style, style degree, and role at the sentence level.
-
-> [!NOTE]
-> Styles, style degree, and roles are supported for a subset of neural voices as described in the [voice styles and roles](language-support.md?tabs=tts#voice-styles-and-roles) documentation. To determine what styles and roles are supported for each voice, you can also use the [list voices](rest-text-to-speech.md#get-a-list-of-voices) API and the [Audio Content Creation](https://aka.ms/audiocontentcreation) web application.
-
-Usage of the `mstts:express-as` element's attributes is described in the following table.
-
-| Attribute | Description | Required or optional |
-| - | - | - |
-| `style` | The voice-specific speaking style. You can express emotions like cheerfulness, empathy, and calm. You can also optimize the voice for different scenarios like customer service, newscast, and voice assistant. If the style value is missing or invalid, the entire `mstts:express-as` element is ignored and the service uses the default neutral speech. For custom neural voice styles, see the [custom neural voice style example](#custom-neural-voice-style-example). | Required |
-| `styledegree` | The intensity of the speaking style. You can specify a stronger or softer style to make the speech more expressive or subdued. The range of accepted values is 0.01 to 2 inclusive. The default value is 1, which means the predefined style intensity. The minimum unit is 0.01, which results in a slight tendency toward the target style. A value of 2 results in a doubling of the default style intensity. If the style degree is missing or isn't supported for your voice, this attribute is ignored.| Optional |
-| `role`| The speaking role-play. The voice can imitate a different age and gender, but the voice name isn't changed. For example, a male voice can raise the pitch and change the intonation to imitate a female voice, but the voice name won't be changed. If the role is missing or isn't supported for your voice, this attribute is ignored. | Optional |
-
-The following table has descriptions of each supported `style` attribute.
-
-|Style|Description|
-| - | - |
-|`style="advertisement_upbeat"`|Expresses an excited and high-energy tone for promoting a product or service.|
-|`style="affectionate"`|Expresses a warm and affectionate tone, with higher pitch and vocal energy. The speaker is in a state of attracting the attention of the listener. The personality of the speaker is often endearing in nature.|
-|`style="angry"`|Expresses an angry and annoyed tone.|
-|`style="assistant"`|Expresses a warm and relaxed tone for digital assistants.|
-|`style="calm"`|Expresses a cool, collected, and composed attitude when speaking. Tone, pitch, and prosody are more uniform compared to other types of speech.|
-|`style="chat"`|Expresses a casual and relaxed tone.|
-|`style="cheerful"`|Expresses a positive and happy tone.|
-|`style="customerservice"`|Expresses a friendly and helpful tone for customer support.|
-|`style="depressed"`|Expresses a melancholic and despondent tone with lower pitch and energy.|
-|`style="disgruntled"`|Expresses a disdainful and complaining tone. Speech of this emotion displays displeasure and contempt.|
-|`style="documentary-narration"`|Narrates documentaries in a relaxed, interested, and informative style suitable for dubbing documentaries, expert commentary, and similar content.|
-|`style="embarrassed"`|Expresses an uncertain and hesitant tone when the speaker is feeling uncomfortable.|
-|`style="empathetic"`|Expresses a sense of caring and understanding.|
-|`style="envious"`|Expresses a tone of admiration when you desire something that someone else has.|
-|`style="excited"`|Expresses an upbeat and hopeful tone. It sounds like something great is happening and the speaker is really happy about that.|
-|`style="fearful"`|Expresses a scared and nervous tone, with higher pitch, higher vocal energy, and faster rate. The speaker is in a state of tension and unease.|
-|`style="friendly"`|Expresses a pleasant, inviting, and warm tone. It sounds sincere and caring.|
-|`style="gentle"`|Expresses a mild, polite, and pleasant tone, with lower pitch and vocal energy.|
-|`style="hopeful"`|Expresses a warm and yearning tone. It sounds like something good will happen to the speaker.|
-|`style="lyrical"`|Expresses emotions in a melodic and sentimental way.|
-|`style="narration-professional"`|Expresses a professional, objective tone for content reading.|
-|`style="narration-relaxed"`|Express a soothing and melodious tone for content reading.|
-|`style="newscast"`|Expresses a formal and professional tone for narrating news.|
-|`style="newscast-casual"`|Expresses a versatile and casual tone for general news delivery.|
-|`style="newscast-formal"`|Expresses a formal, confident, and authoritative tone for news delivery.|
-|`style="poetry-reading"`|Expresses an emotional and rhythmic tone while reading a poem.|
-|`style="sad"`|Expresses a sorrowful tone.|
-|`style="serious"`|Expresses a strict and commanding tone. Speaker often sounds stiffer and much less relaxed with firm cadence.|
-|`style="shouting"`|Speaks like from a far distant or outside and to make self be clearly heard|
-|`style="sports_commentary"`|Expresses a relaxed and interesting tone for broadcasting a sports event.|
-|`style="sports_commentary_excited"`|Expresses an intensive and energetic tone for broadcasting exciting moments in a sports event.|
-|`style="whispering"`|Speaks very softly and make a quiet and gentle sound|
-|`style="terrified"`|Expresses a very scared tone, with faster pace and a shakier voice. It sounds like the speaker is in an unsteady and frantic status.|
-|`style="unfriendly"`|Expresses a cold and indifferent tone.|
-
-The following table has descriptions of each supported `role` attribute.
-
-| Role | Description |
-| - | - |
-| `role="Girl"` | The voice imitates a girl. |
-| `role="Boy"` | The voice imitates a boy. |
-| `role="YoungAdultFemale"` | The voice imitates a young adult female. |
-| `role="YoungAdultMale"` | The voice imitates a young adult male. |
-| `role="OlderAdultFemale"` | The voice imitates an older adult female. |
-| `role="OlderAdultMale"` | The voice imitates an older adult male. |
-| `role="SeniorFemale"` | The voice imitates a senior female. |
-| `role="SeniorMale"` | The voice imitates a senior male. |
--
-### mstts express-as examples
-
-The supported values for attributes of the `mstts:express-as` element were [described previously](#speaking-styles-and-roles).
-
-#### Style and degree example
-
-You use the `mstts:express-as` element to express emotions like cheerfulness, empathy, and calm. You can also optimize the voice for different scenarios like customer service, newscast, and voice assistant.
-
-The following SSML example uses the `<mstts:express-as>` element with the `sad` style and a style degree of "2".
-
-```xml
-<speak version="1.0" xmlns="http://www.w3.org/2001/10/synthesis"
- xmlns:mstts="https://www.w3.org/2001/mstts" xml:lang="zh-CN">
- <voice name="zh-CN-XiaomoNeural">
- <mstts:express-as style="sad" styledegree="2">
- 快走吧,路上一定要注意安全,早去早回。
- </mstts:express-as>
- </voice>
-</speak>
-```
-
-#### Role example
-
-Apart from adjusting the speaking styles and style degree, you can also adjust the `role` parameter so that the voice imitates a different age and gender. For example, a male voice can raise the pitch and change the intonation to imitate a female voice, but the voice name won't be changed.
-
-This SSML snippet illustrates how the `role` attribute is used to change the role-play for `zh-CN-XiaomoNeural`.
-
-```xml
-<speak version="1.0" xmlns="http://www.w3.org/2001/10/synthesis"
- xmlns:mstts="https://www.w3.org/2001/mstts" xml:lang="zh-CN">
- <voice name="zh-CN-XiaomoNeural">
- 女儿看见父亲走了进来,问道:
- <mstts:express-as role="YoungAdultFemale" style="calm">
- “您来的挺快的,怎么过来的?”
- </mstts:express-as>
- 父亲放下手提包,说:
- <mstts:express-as role="OlderAdultMale" style="calm">
- “刚打车过来的,路上还挺顺畅。”
- </mstts:express-as>
- </voice>
-</speak>
-```
-
-#### Custom neural voice style example
-
-Your custom neural voice can be trained to speak with some preset styles such as cheerful, sad, and whispering. You can also [train a custom neural voice](how-to-custom-voice-create-voice.md?tabs=multistyle#train-your-custom-neural-voice-model) to speak in a custom style as determined by your training data. To use your custom neural voice style in SSML, specify the style name that you previously entered in Speech Studio.
-
-This example uses a custom voice named "my-custom-voice". The custom voice speaks with the "cheerful" preset style and style degree of "2", and then with a custom style named "my-custom-style" and style degree of "0.01".
-
-```xml
-<speak version="1.0" xmlns="http://www.w3.org/2001/10/synthesis"
- xmlns:mstts="https://www.w3.org/2001/mstts" xml:lang="en-US">
- <voice name="my-custom-voice">
- <mstts:express-as style="cheerful" styledegree="2">
- That'd be just amazing!
- </mstts:express-as>
- <mstts:express-as style="my-custom-style" styledegree="0.01">
- What's next?
- </mstts:express-as>
- </voice>
-</speak>
-```
-
-## Adjust speaking languages
-
-By default, all neural voices are fluent in their own language and English without using the `<lang xml:lang>` element. For example, if the input text in English is "I'm excited to try text to speech" and you use the `es-ES-ElviraNeural` voice, the text is spoken in English with a Spanish accent. With most neural voices, setting a specific speaking language with the `<lang xml:lang>` element at the sentence or word level is currently not supported.
-
-You can adjust the speaking language for the `en-US-JennyMultilingualNeural` neural voice at the sentence level and word level by using the `<lang xml:lang>` element. The `en-US-JennyMultilingualNeural` neural voice is multilingual in 14 languages (for example, English, Spanish, and Chinese). The supported languages are provided in a table following the `<lang>` syntax and attribute definitions.
-
-Usage of the `lang` element's attributes is described in the following table.
-
-| Attribute | Description | Required or optional |
-| - | - | - |
-| `xml:lang` | The language that you want the neural voice to speak. | Required to adjust the speaking language for the neural voice. If you're using `lang xml:lang`, the locale must be provided. |
-
-> [!NOTE]
-> The `<lang xml:lang>` element is incompatible with the `prosody` and `break` elements. You can't adjust pause and prosody like pitch, contour, rate, or volume in this element.
-
-Use this table to determine which speaking languages are supported for each neural voice. If the voice doesn't speak the language of the input text, the Speech service won't output synthesized audio.
-
-| Voice | Primary and default locale | Secondary locales |
-| - | - | - |
-| `en-US-JennyMultilingualNeural` | `en-US` | `de-DE`, `en-AU`, `en-CA`, `en-GB`, `es-ES`, `es-MX`, `fr-CA`, `fr-FR`, `it-IT`, `ja-JP`, `ko-KR`, `pt-BR`, `zh-CN` |
-
-### Lang examples
-
-The supported values for attributes of the `lang` element were [described previously](#adjust-speaking-languages).
-
-The primary language for `en-US-JennyMultilingualNeural` is `en-US`. You must specify `en-US` as the default language within the `speak` element, whether or not the language is adjusted elsewhere.
-
-This SSML snippet shows how to use the `lang` element (and `xml:lang` attribute) to speak `de-DE` with the `en-US-JennyMultilingualNeural` neural voice.
-
-```xml
-<speak version="1.0" xmlns="http://www.w3.org/2001/10/synthesis"
- xmlns:mstts="https://www.w3.org/2001/mstts" xml:lang="en-US">
- <voice name="en-US-JennyMultilingualNeural">
- <lang xml:lang="de-DE">
- Wir freuen uns auf die Zusammenarbeit mit Ihnen!
- </lang>
- </voice>
-</speak>
-```
-
-Within the `speak` element, you can specify multiple languages including `en-US` for text to speech output. For each adjusted language, the text must match the language and be wrapped in a `lang` element. This SSML snippet shows how to use `<lang xml:lang>` to change the speaking languages to `es-MX`, `en-US`, and `fr-FR`.
-
-```xml
-<speak version="1.0" xmlns="http://www.w3.org/2001/10/synthesis"
- xmlns:mstts="https://www.w3.org/2001/mstts" xml:lang="en-US">
- <voice name="en-US-JennyMultilingualNeural">
- <lang xml:lang="es-MX">
- ¡Esperamos trabajar con usted!
- </lang>
- <lang xml:lang="en-US">
- We look forward to working with you!
- </lang>
- <lang xml:lang="fr-FR">
- Nous avons hâte de travailler avec vous!
- </lang>
- </voice>
-</speak>
-```
-
-## Adjust prosody
-
-The `prosody` element is used to specify changes to pitch, contour, range, rate, and volume for the text to speech output. The `prosody` element can contain text and the following elements: `audio`, `break`, `p`, `phoneme`, `prosody`, `say-as`, `sub`, and `s`.
-
-Because prosodic attribute values can vary over a wide range, the speech synthesis engine interprets the assigned values as a suggestion of what the actual prosodic values of the selected voice should be. Text to speech limits or substitutes values that aren't supported. Examples of unsupported values are a pitch of 1 MHz or a volume of 120.
-
-Usage of the `prosody` element's attributes is described in the following table.
-
-| Attribute | Description | Required or optional |
-| - | - | - |
-| `contour` | Contour represents changes in pitch. These changes are represented as an array of targets at specified time positions in the speech output. Each target is defined by sets of parameter pairs. For example: <br/><br/>`<prosody contour="(0%,+20Hz) (10%,-2st) (40%,+10Hz)">`<br/><br/>The first value in each set of parameters specifies the location of the pitch change as a percentage of the duration of the text. The second value specifies the amount to raise or lower the pitch by using a relative value or an enumeration value for pitch (see `pitch`). | Optional |
-| `pitch` | Indicates the baseline pitch for the text. Pitch changes can be applied at the sentence level. The pitch changes should be within 0.5 to 1.5 times the original audio. You can express the pitch as:<ul><li>An absolute value: Expressed as a number followed by "Hz" (Hertz). For example, `<prosody pitch="600Hz">some text</prosody>`.</li><li>A relative value:<ul><li>As a relative number: Expressed as a number preceded by "+" or "-" and followed by "Hz" or "st" that specifies an amount to change the pitch. For example: `<prosody pitch="+80Hz">some text</prosody>` or `<prosody pitch="-2st">some text</prosody>`. The "st" indicates the change unit is semitone, which is half of a tone (a half step) on the standard diatonic scale.<li>As a percentage: Expressed as a number preceded by "+" (optionally) or "-" and followed by "%", indicating the relative change. For example: `<prosody pitch="50%">some text</prosody>` or `<prosody pitch="-50%">some text</prosody>`.</li></ul></li><li>A constant value:<ul><li>x-low</li><li>low</li><li>medium</li><li>high</li><li>x-high</li><li>default</li></ul></li></ul> | Optional |
-| `range`| A value that represents the range of pitch for the text. You can express `range` by using the same absolute values, relative values, or enumeration values used to describe `pitch`.| Optional |
-| `rate` | Indicates the speaking rate of the text. Speaking rate can be applied at the word or sentence level. The rate changes should be within 0.5 to 2 times the original audio. You can express `rate` as:<ul><li>A relative value: <ul><li>As a relative number: Expressed as a number that acts as a multiplier of the default. For example, a value of *1* results in no change in the original rate. A value of *0.5* results in a halving of the original rate. A value of *2* results in twice the original rate.</li><li>As a percentage: Expressed as a number preceded by "+" (optionally) or "-" and followed by "%", indicating the relative change. For example: `<prosody rate="50%">some text</prosody>` or `<prosody rate="-50%">some text</prosody>`.</li></ul><li>A constant value:<ul><li>x-slow</li><li>slow</li><li>medium</li><li>fast</li><li>x-fast</li><li>default</li></ul></li></ul> | Optional |
-| `volume` | Indicates the volume level of the speaking voice. Volume changes can be applied at the sentence level. You can express the volume as:<ul><li>An absolute value: Expressed as a number in the range of 0.0 to 100.0, from *quietest* to *loudest*. An example is 75. The default is 100.0.</li><li>A relative value: <ul><li>As a relative number: Expressed as a number preceded by "+" or "-" that specifies an amount to change the volume. Examples are +10 or -5.5.</li><li>As a percentage: Expressed as a number preceded by "+" (optionally) or "-" and followed by "%", indicating the relative change. For example: `<prosody volume="50%">some text</prosody>` or `<prosody volume="+3%">some text</prosody>`.</li></ul><li>A constant value:<ul><li>silent</li><li>x-soft</li><li>soft</li><li>medium</li><li>loud</li><li>x-loud</li><li>default</li></ul></li></ul> | Optional |
-
-### Prosody examples
-
-The supported values for attributes of the `prosody` element were [described previously](#adjust-prosody).
-
-#### Change speaking rate example
-
-This SSML snippet illustrates how the `rate` attribute is used to change the speaking rate to 30% greater than the default rate.
-
-```xml
-<speak version="1.0" xmlns="http://www.w3.org/2001/10/synthesis" xml:lang="en-US">
- <voice name="en-US-JennyNeural">
- <prosody rate="+30.00%">
- Enjoy using text to speech.
- </prosody>
- </voice>
-</speak>
-```
-
-#### Change volume example
-
-This SSML snippet illustrates how the `volume` attribute is used to change the volume to 20% greater than the default volume.
-
-```xml
-<speak version="1.0" xmlns="http://www.w3.org/2001/10/synthesis" xml:lang="en-US">
- <voice name="en-US-JennyNeural">
- <prosody volume="+20.00%">
- Enjoy using text to speech.
- </prosody>
- </voice>
-</speak>
-```
-
-#### Change pitch example
-
-This SSML snippet illustrates how the `pitch` attribute is used so that the voice speaks in a high pitch.
-
-```xml
-<speak version="1.0" xmlns="http://www.w3.org/2001/10/synthesis" xml:lang="en-US">
- <voice name="en-US-JennyNeural">
- Welcome to <prosody pitch="high">Enjoy using text to speech.</prosody>
- </voice>
-</speak>
-```
-
-#### Change pitch contour example
-
-This SSML snippet illustrates how the `contour` attribute is used to change the contour.
-
-```xml
-<speak version="1.0" xmlns="http://www.w3.org/2001/10/synthesis" xml:lang="en-US">
- <voice name="en-US-JennyNeural">
- <prosody contour="(60%,-60%) (100%,+80%)" >
- Were you the only person in the room?
- </prosody>
- </voice>
-</speak>
-```
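-
-#### Change pitch range example
-
-This SSML snippet is a sketch that uses the `range` attribute, which accepts the same values as `pitch`, to narrow the pitch range relative to the default. The audible effect depends on the voice.
-
-```xml
-<speak version="1.0" xmlns="http://www.w3.org/2001/10/synthesis" xml:lang="en-US">
-    <voice name="en-US-JennyNeural">
-        <prosody range="-50%">
-            Were you the only person in the room?
-        </prosody>
-    </voice>
-</speak>
-```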
-
-## Adjust emphasis
-
-The optional `emphasis` element is used to add or remove word-level stress for the text. This element can only contain text and the following elements: `audio`, `break`, `emphasis`, `lang`, `phoneme`, `prosody`, `say-as`, `sub`, and `voice`.
-
-> [!NOTE]
-> The word-level emphasis tuning is only available for these neural voices: `en-US-GuyNeural`, `en-US-DavisNeural`, and `en-US-JaneNeural`.
->
-> For words that have low pitch and short duration, the pitch might not be raised enough to be noticed.
-
-Usage of the `emphasis` element's attributes is described in the following table.
-
-| Attribute | Description | Required or optional |
-| - | - | - |
-| `level` | Indicates the strength of emphasis to be applied:<ul><li>`reduced`</li><li>`none`</li><li>`moderate`</li><li>`strong`</li></ul><br>When the `level` attribute isn't specified, the default level is `moderate`. For details on each attribute, see [emphasis element](https://www.w3.org/TR/speech-synthesis11/#S3.2.2) | Optional |
-
-### Emphasis examples
-
-The supported values for attributes of the `emphasis` element were [described previously](#adjust-emphasis).
-
-This SSML snippet demonstrates how the `emphasis` element is used to add moderate level emphasis for the word "meetings".
-
-```xml
-<speak version="1.0" xmlns="http://www.w3.org/2001/10/synthesis" xmlns:mstts="https://www.w3.org/2001/mstts" xml:lang="en-US">
- <voice name="en-US-GuyNeural">
- I can help you join your <emphasis level="moderate">meetings</emphasis> fast.
- </voice>
-</speak>
-```
-
-## Add recorded audio
-
-The `audio` element is optional. You can use it to insert prerecorded audio into an SSML document. The body of the `audio` element can contain plain text or SSML markup that's spoken if the audio file is unavailable or unplayable. The `audio` element can also contain text and the following elements: `audio`, `break`, `p`, `s`, `phoneme`, `prosody`, `say-as`, and `sub`.
-
-Any audio included in the SSML document must meet these requirements:
-* The audio file must be a valid *.mp3, *.wav, *.opus, *.ogg, *.flac, or *.wma file.
-* The combined total time for all text and audio files in a single response can't exceed 600 seconds.
-* The audio must not contain any customer-specific or other sensitive information.
-
-> [!NOTE]
-> The `audio` element is not supported by the [Long Audio API](migrate-to-batch-synthesis.md#text-inputs). For long-form text to speech, use the [batch synthesis API](batch-synthesis.md) (Preview) instead.
-
-Usage of the `audio` element's attributes is described in the following table.
-
-| Attribute | Description | Required or optional |
-| - | - | - |
-| `src` | The URI location of the audio file. The audio must be hosted on an internet-accessible HTTPS endpoint. HTTPS is required, and the domain hosting the file must present a valid, trusted TLS/SSL certificate. We recommend that you put the audio file into Blob Storage in the same Azure region as the text to speech endpoint to minimize the latency. | Required |
-
-### Audio examples
-
-The supported values for attributes of the `audio` element were [described previously](#add-recorded-audio).
-
-This SSML snippet illustrates how the `src` attribute is used to insert audio from two .wav files.
-
-```xml
-<speak version="1.0" xmlns="http://www.w3.org/2001/10/synthesis" xml:lang="en-US">
- <voice name="en-US-JennyNeural">
- <p>
- <audio src="https://contoso.com/opinionprompt.wav"/>
- Thanks for offering your opinion. Please begin speaking after the beep.
- <audio src="https://contoso.com/beep.wav">
- Could not play the beep, please voice your opinion now.
- </audio>
- </p>
- </voice>
-</speak>
-```
-
-## Audio duration
-
-Use the `mstts:audioduration` element to set the duration of the output audio. Use this element to help synchronize the timing of audio output completion. The output audio duration can be decreased or increased to between 0.5 and 2 times the duration of the original audio. The original audio here is the audio without any other rate settings. The speaking rate is slowed down or sped up accordingly based on the set value.
-
-The audio duration setting is applied to all input text within its enclosing `voice` element. To reset or change the audio duration setting again, you must use a new `voice` element with either the same voice or a different voice.
-
-Usage of the `mstts:audioduration` element's attributes is described in the following table.
-
-| Attribute | Description | Required or optional |
-| - | - | - |
-| `value` | The requested duration of the output audio in either seconds (such as `2s`) or milliseconds (such as `2000ms`).<br/><br/>This value should be within 0.5 to 2 times the original audio without any other rate settings. For example, if the requested duration of your audio is `30s`, then the original audio must have otherwise been between 15 and 60 seconds. If you set a value outside of these boundaries, the duration is set according to the respective minimum or maximum multiple.<br/><br/>Given your requested output audio duration, the Speech service adjusts the speaking rate accordingly. Use the [voice list](rest-text-to-speech.md#get-a-list-of-voices) API and check the `WordsPerMinute` attribute to find out the speaking rate of the neural voice that you're using. You can divide the number of words in your input text by the value of the `WordsPerMinute` attribute to get the approximate original output audio duration. The output audio will sound most natural when you set the audio duration closest to the estimated duration.| Required |
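-
-For example (assuming a hypothetical `WordsPerMinute` value of 150), input text of about 50 words has an estimated original duration of roughly 20 seconds, so a requested `value` between `10s` and `40s` stays within the supported 0.5 to 2 times range.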
-
-### mstts audio duration examples
-
-The supported values for attributes of the `mstts:audioduration` element were [described previously](#audio-duration).
-
-In this example, the original audio is around 15 seconds. The `mstts:audioduration` element is used to set the audio duration to 20 seconds (`20s`).
-
-```xml
-<speak version="1.0" xmlns="http://www.w3.org/2001/10/synthesis" xmlns:mstts="http://www.w3.org/2001/mstts" xml:lang="en-US">
-<voice name="en-US-JennyNeural">
-<mstts:audioduration value="20s"/>
-If we're home schooling, the best we can do is roll with what each day brings and try to have fun along the way.
-A good place to start is by trying out the slew of educational apps that are helping children stay happy and smash their schooling at the same time.
-</voice>
-</speak>
-```
-
-## Background audio
-
-You can use the `mstts:backgroundaudio` element to add background audio to your SSML documents or mix an audio file with text to speech. With `mstts:backgroundaudio`, you can loop an audio file in the background, fade in at the beginning of text to speech, and fade out at the end of text to speech.
-
-If the background audio provided is shorter than the text to speech or the fade out, it loops. If it's longer than the text to speech, it stops when the fade out has finished.
-
-Only one background audio file is allowed per SSML document. You can intersperse `audio` tags within the `voice` element to add more audio to your SSML document.
-
-> [!NOTE]
-> The `mstts:backgroundaudio` element should be put in front of all `voice` elements. If specified, it must be the first child of the `speak` element.
->
-> The `mstts:backgroundaudio` element is not supported by the [Long Audio API](migrate-to-batch-synthesis.md#text-inputs). For long-form text to speech, use the [batch synthesis API](batch-synthesis.md) (Preview) instead.
-
-Usage of the `mstts:backgroundaudio` element's attributes is described in the following table.
-
-| Attribute | Description | Required or optional |
-| - | - | - |
-| `src` | The URI location of the background audio file. | Required |
-| `volume` | The volume of the background audio file. **Accepted values**: `0` to `100` inclusive. The default value is `1`. | Optional |
-| `fadein` | The duration of the background audio fade-in in milliseconds. The default value is `0`, which is equivalent to no fade in. **Accepted values**: `0` to `10000` inclusive. | Optional |
-| `fadeout` | The duration of the background audio fade-out in milliseconds. The default value is `0`, which is equivalent to no fade out. **Accepted values**: `0` to `10000` inclusive. | Optional|
-
-### mstts backgroundaudio examples
-
-The supported values for attributes of the `mstts:backgroundaudio` element were [described previously](#background-audio).
-
-```xml
-<speak version="1.0" xml:lang="en-US" xmlns:mstts="http://www.w3.org/2001/mstts">
- <mstts:backgroundaudio src="https://contoso.com/sample.wav" volume="0.7" fadein="3000" fadeout="4000"/>
- <voice name="en-US-JennyNeural">
- The text provided in this document will be spoken over the background audio.
- </voice>
-</speak>
-```
-
-## Next steps
-
-- [SSML overview](speech-synthesis-markup.md)
-- [SSML document structure and events](speech-synthesis-markup-structure.md)
-- [Language support: Voices, locales, languages](language-support.md?tabs=tts)
-
cognitive-services Speech Synthesis Markup https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/speech-synthesis-markup.md
- Title: Speech Synthesis Markup Language (SSML) overview - Speech service-
-description: Use the Speech Synthesis Markup Language to control pronunciation and prosody in text to speech.
------ Previously updated : 11/30/2022---
-# Speech Synthesis Markup Language (SSML) overview
-
-Speech Synthesis Markup Language (SSML) is an XML-based markup language that can be used to fine-tune the text to speech output attributes such as pitch, pronunciation, speaking rate, volume, and more. You have more control and flexibility compared to plain text input.
-
-> [!TIP]
-> You can hear voices in different styles and pitches reading example text via the [Voice Gallery](https://speech.microsoft.com/portal/voicegallery).
-
-## Scenarios
-
-You can use SSML to:
-
-- [Define the input text structure](speech-synthesis-markup-structure.md) that determines the structure, content, and other characteristics of the text to speech output. For example, you can use SSML to define a paragraph, a sentence, a break or a pause, or silence. You can wrap text with event tags such as bookmark or viseme that can be processed later by your application.
-- [Choose the voice](speech-synthesis-markup-voice.md), language, name, style, and role. You can use multiple voices in a single SSML document. Adjust the emphasis, speaking rate, pitch, and volume. You can also use SSML to insert pre-recorded audio, such as a sound effect or a musical note.
-- [Control pronunciation](speech-synthesis-markup-pronunciation.md) of the output audio. For example, you can use SSML with phonemes and a custom lexicon to improve pronunciation. You can also use SSML to define how a word or mathematical expression is pronounced.
-
-## Use SSML
-
-> [!IMPORTANT]
-> You're billed for each character that's converted to speech, including punctuation. Although the SSML document itself is not billable, optional elements that are used to adjust how the text is converted to speech, like phonemes and pitch, are counted as billable characters. For more information, see [text to speech pricing notes](text-to-speech.md#pricing-note).
-
-You can use SSML in the following ways:
-
-- [Audio Content Creation](https://aka.ms/audiocontentcreation) tool: Author plain text and SSML in Speech Studio. You can listen to the output audio and adjust the SSML to improve speech synthesis. For more information, see [Speech synthesis with the Audio Content Creation tool](how-to-audio-content-creation.md).
-- [Batch synthesis API](batch-synthesis.md): Provide SSML via the `inputs` property.
-- [Speech CLI](get-started-text-to-speech.md?pivots=programming-language-cli): Provide SSML via the `spx synthesize --ssml SSML` command line argument.
-- [Speech SDK](how-to-speech-synthesis.md#use-ssml-to-customize-speech-characteristics): Provide SSML via the "speak" SSML method.
-
-## Next steps
-
-- [SSML document structure and events](speech-synthesis-markup-structure.md)
-- [Voice and sound with SSML](speech-synthesis-markup-voice.md)
-- [Pronunciation with SSML](speech-synthesis-markup-pronunciation.md)
-- [Language support: Voices, locales, languages](language-support.md?tabs=tts)
cognitive-services Speech To Text https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/speech-to-text.md
- Title: Speech to text overview - Speech service-
-description: Get an overview of the benefits and capabilities of the speech to text feature of the Speech service.
------ Previously updated : 04/05/2023--
-keywords: speech to text, speech to text software
--
-# What is speech to text?
-
-In this overview, you learn about the benefits and capabilities of the speech to text feature of the Speech service, which is part of Azure Cognitive Services. Speech to text can be used for [real-time](#real-time-speech-to-text) or [batch transcription](#batch-transcription) of audio streams into text.
-
-> [!NOTE]
-> To compare pricing of [real-time](#real-time-speech-to-text) to [batch transcription](#batch-transcription), see [Speech service pricing](https://azure.microsoft.com/pricing/details/cognitive-services/speech-services/).
-
-For a full list of available speech to text languages, see [Language and voice support](language-support.md?tabs=stt).
-
-## Real-time speech to text
-
-With real-time speech to text, the audio is transcribed as speech is recognized from a microphone or file. Use real-time speech to text for applications that need to transcribe audio in real time, such as:
-- Transcriptions, captions, or subtitles for live meetings
-- Contact center agent assist
-- Dictation
-- Voice agents
-- Pronunciation assessment
-
-Real-time speech to text is available via the [Speech SDK](speech-sdk.md) and the [Speech CLI](spx-overview.md).
-
-## Batch transcription
-
-Batch transcription is used to transcribe a large amount of audio in storage. You can point to audio files with a shared access signature (SAS) URI and asynchronously receive transcription results. Use batch transcription for applications that need to transcribe audio in bulk such as:
-- Transcriptions, captions, or subtitles for pre-recorded audio
-- Contact center post-call analytics
-- Diarization
-
-Batch transcription is available via:
-- [Speech to text REST API](rest-speech-to-text.md): To get started, see [How to use batch transcription](batch-transcription.md) and [Batch transcription samples (REST)](https://github.com/Azure-Samples/cognitive-services-speech-sdk/tree/master/samples/batch).
-- The [Speech CLI](spx-overview.md) supports both real-time and batch transcription. For Speech CLI help with batch transcriptions, run the following command:
- ```azurecli-interactive
- spx help batch transcription
- ```
-
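If you use the REST API, submitting a batch job is essentially one POST that points at your SAS URIs. The following Python sketch is a rough illustration only; the `v3.1` path, property names, and the placeholder key, region, and SAS URL are assumptions to verify against the [Speech to text REST API](rest-speech-to-text.md) reference:

```python
# Hedged sketch: submit a batch transcription job via the Speech to text REST API.
import requests

region = "eastus"                     # placeholder region
key = "YOUR_SPEECH_RESOURCE_KEY"      # placeholder key

body = {
    "displayName": "My batch transcription",
    "locale": "en-US",
    # SAS URIs that point to audio files in your storage account (placeholder URL).
    "contentUrls": ["https://example.blob.core.windows.net/audio/sample.wav?<SAS_TOKEN>"],
}

response = requests.post(
    f"https://{region}.api.cognitive.microsoft.com/speechtotext/v3.1/transcriptions",
    headers={"Ocp-Apim-Subscription-Key": key, "Content-Type": "application/json"},
    json=body,
)
response.raise_for_status()
print(response.json()["self"])  # URL to poll for the transcription status
```
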
-## Custom Speech
-
-With [Custom Speech](./custom-speech-overview.md), you can evaluate and improve the accuracy of speech recognition for your applications and products. A custom speech model can be used for [real-time speech to text](speech-to-text.md), [speech translation](speech-translation.md), and [batch transcription](batch-transcription.md).
-
-> [!TIP]
-> A [hosted deployment endpoint](how-to-custom-speech-deploy-model.md) isn't required to use Custom Speech with the [Batch transcription API](batch-transcription.md). You can conserve resources if the [custom speech model](how-to-custom-speech-train-model.md) is only used for batch transcription. For more information, see [Speech service pricing](https://azure.microsoft.com/pricing/details/cognitive-services/speech-services/).
-
-Out of the box, speech recognition utilizes a Universal Language Model as a base model that is trained with Microsoft-owned data and reflects commonly used spoken language. The base model is pre-trained with dialects and phonetics representing a variety of common domains. When you make a speech recognition request, the most recent base model for each [supported language](language-support.md?tabs=stt) is used by default. The base model works very well in most speech recognition scenarios.
-
-A custom model can be used to augment the base model to improve recognition of domain-specific vocabulary by providing text data to train the model. It can also be used to improve recognition for the specific audio conditions of the application by providing audio data with reference transcriptions. For more information, see [Custom Speech](./custom-speech-overview.md) and [Speech to text REST API](rest-speech-to-text.md).
-
-Customization options vary by language or locale. To verify support, see [Language and voice support for the Speech service](./language-support.md?tabs=stt).
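
If you do deploy a custom endpoint for real-time recognition, pointing the Speech SDK at it is typically a one-line change on the speech configuration. Here's a hedged Python sketch; the endpoint ID, file name, and environment variable names are placeholders:

```python
# Hedged sketch: use a deployed Custom Speech endpoint for real-time recognition.
import os
import azure.cognitiveservices.speech as speechsdk

speech_config = speechsdk.SpeechConfig(
    subscription=os.environ["SPEECH_KEY"], region=os.environ["SPEECH_REGION"])
speech_config.endpoint_id = "YOUR_CUSTOM_SPEECH_ENDPOINT_ID"  # placeholder deployment ID

audio_config = speechsdk.audio.AudioConfig(filename="sample.wav")  # placeholder file
recognizer = speechsdk.SpeechRecognizer(speech_config=speech_config, audio_config=audio_config)
print(recognizer.recognize_once().text)
```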
-
-## Responsible AI
-
-An AI system includes not only the technology, but also the people who will use it, the people who will be affected by it, and the environment in which it is deployed. Read the transparency notes to learn about responsible AI use and deployment in your systems.
-
-* [Transparency note and use cases](/legal/cognitive-services/speech-service/speech-to-text/transparency-note?context=/azure/cognitive-services/speech-service/context/context)
-* [Characteristics and limitations](/legal/cognitive-services/speech-service/speech-to-text/characteristics-and-limitations?context=/azure/cognitive-services/speech-service/context/context)
-* [Integration and responsible use](/legal/cognitive-services/speech-service/speech-to-text/guidance-integration-responsible-use?context=/azure/cognitive-services/speech-service/context/context)
-* [Data, privacy, and security](/legal/cognitive-services/speech-service/speech-to-text/data-privacy-security?context=/azure/cognitive-services/speech-service/context/context)
-
-## Next steps
-
-- [Get started with speech to text](get-started-speech-to-text.md)
-- [Create a batch transcription](batch-transcription-create.md)
cognitive-services Speech Translation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/speech-translation.md
- Title: Speech translation overview - Speech service-
-description: With speech translation, you can add end-to-end, real-time, multi-language translation of speech to your applications, tools, and devices.
- Previously updated : 09/16/2022
-keywords: speech translation
--
-# What is speech translation?
-
-In this article, you learn about the benefits and capabilities of the speech translation service, which enables real-time, multi-language speech-to-speech and speech to text translation of audio streams.
-
-By using the Speech SDK or Speech CLI, you can give your applications, tools, and devices access to source transcriptions and translation outputs for the provided audio. Interim transcription and translation results are returned as speech is detected, and the final results can be converted into synthesized speech.
-
-For a list of languages supported for speech translation, see [Language and voice support](language-support.md?tabs=speech-translation).
-
-## Core features
-
-* Speech to text translation with recognition results.
-* Speech-to-speech translation.
-* Support for translation to multiple target languages.
-* Interim recognition and translation results.
-
-## Get started
-
-As your first step, try the [Speech translation quickstart](get-started-speech-translation.md). The speech translation service is available via the [Speech SDK](speech-sdk.md) and the [Speech CLI](spx-overview.md).
-
-You'll find [Speech SDK speech to text and translation samples](https://github.com/Azure-Samples/cognitive-services-speech-sdk) on GitHub. These samples cover common scenarios, such as reading audio from a file or stream, continuous and single-shot recognition and translation, and working with custom models.
-
-## Next steps
-
-* Try the [speech translation quickstart](get-started-speech-translation.md)
-* Install the [Speech SDK](speech-sdk.md)
-* Install the [Speech CLI](spx-overview.md)
cognitive-services Spx Basics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/spx-basics.md
- Title: "Quickstart: The Speech CLI - Speech service"-
-description: In this Azure AI Speech CLI quickstart, you interact with speech to text, text to speech, and speech translation without having to write code.
- Previously updated : 09/16/2022
-# Quickstart: Get started with the Azure AI Speech CLI
-
-In this article, you'll learn how to use the Azure AI Speech CLI (also called SPX) to access Speech services such as speech to text, text to speech, and speech translation, without having to write any code. The Speech CLI is production ready, and you can use it to automate simple workflows in the Speech service by using `.bat` or shell scripts.
-
-This article assumes that you have working knowledge of the Command Prompt window, terminal, or PowerShell.
-
-> [!NOTE]
-> In PowerShell, the [stop-parsing token](/powershell/module/microsoft.powershell.core/about/about_special_characters#stop-parsing-token) (`--%`) should follow `spx`. For example, run `spx --% config @region` to view the current region config value.
-
-## Download and install
--
-## Create a resource configuration
-
-# [Terminal](#tab/terminal)
-
-To get started, you need a Speech resource key and region identifier (for example, `eastus`, `westus`). Create a Speech resource on the [Azure portal](https://portal.azure.com). For more information, see [Create a new Azure Cognitive Services resource](~/articles/cognitive-services/cognitive-services-apis-create-account.md?tabs=speech#create-a-new-azure-cognitive-services-resource).
-
-To configure your resource key and region identifier, run the following commands:
-
-```console
-spx config @key --set SPEECH-KEY
-spx config @region --set SPEECH-REGION
-```
-
-The key and region are stored for future Speech CLI commands. To view the current configuration, run the following commands:
-
-```console
-spx config @key
-spx config @region
-```
-
-As needed, include the `clear` option to remove either stored value:
-
-```console
-spx config @key --clear
-spx config @region --clear
-```
-
-# [PowerShell](#tab/powershell)
-
-To get started, you need a Speech resource key and region identifier (for example, `eastus`, `westus`). Create a Speech resource on the [Azure portal](https://portal.azure.com). For more information, see [Create a new Azure Cognitive Services resource](~/articles/cognitive-services/cognitive-services-apis-create-account.md?tabs=speech#create-a-new-azure-cognitive-services-resource).
-
-To configure your Speech resource key and region identifier, run the following commands in PowerShell:
-
-```powershell
-spx --% config @key --set SPEECH-KEY
-spx --% config @region --set SPEECH-REGION
-```
-
-The key and region are stored for future SPX commands. To view the current configuration, run the following commands:
-
-```powershell
-spx --% config @key
-spx --% config @region
-```
-
-As needed, include the `clear` option to remove either stored value:
-
-```powershell
-spx --% config @key --clear
-spx --% config @region --clear
-```
-
-***
-
-## Basic usage
-
-> [!IMPORTANT]
-> When you use the Speech CLI in a container, include the `--host` option. You must also specify `--key none` to ensure that the CLI doesn't try to use a Speech key for authentication. For example, run `spx recognize --key none --host wss://localhost:5000/ --file myaudio.wav` to recognize speech from an audio file in a [speech to text container](speech-container-stt.md).
-
-This section shows a few basic SPX commands that are often useful for first-time testing and experimentation. Start by viewing the help that's built into the tool by running the following command:
-
-```console
-spx
-```
-
-You can search help topics by keyword. For example, to see a list of Speech CLI usage examples, run the following command:
-
-```console
-spx help find --topics "examples"
-```
-
-To see options for the recognize command, run the following command:
-
-```console
-spx help recognize
-```
-
-Additional help commands are listed in the console output. You can enter these commands to get detailed help about subcommands.
-
-## Speech to text (speech recognition)
-
-> [!NOTE]
-> You can't use your computer's microphone when you run the Speech CLI within a Docker container. However, you can read from and save audio files in your local mounted directory.
-
-To convert speech to text (speech recognition) by using your system's default microphone, run the following command:
-
-```console
-spx recognize --microphone
-```
-
-After you run the command, SPX begins listening for audio on the current active input device. It stops listening when you select **Enter**. The spoken audio is then recognized and converted to text in the console output.
-
-With the Speech CLI, you can also recognize speech from an audio file. Run the following command:
-
-```console
-spx recognize --file /path/to/file.wav
-```
-
-> [!TIP]
-> If you get stuck or want to learn more about the Speech CLI recognition options, you can run ```spx help recognize```.
-
-## Text to speech (speech synthesis)
-
-The following command takes text as input and then outputs the synthesized speech to the current active output device (for example, your computer speakers).
-
-```console
-spx synthesize --text "Testing synthesis using the Speech CLI" --speakers
-```
-
-You can also save the synthesized output to a file. In this example, let's create a file named *my-sample.wav* in the directory where you're running the command.
-
-```console
-spx synthesize --text "Enjoy using the Speech CLI." --audio output my-sample.wav
-```
-
-These examples presume that you're testing in English. However, Speech service supports speech synthesis in many languages. You can pull down a full list of voices either by running the following command or by visiting the [language support page](./language-support.md?tabs=tts).
-
-```console
-spx synthesize --voices
-```
-
-Here's a command for using one of the voices you've discovered.
-
-```console
-spx synthesize --text "Bienvenue chez moi." --voice fr-FR-AlainNeural --speakers
-```
-
-> [!TIP]
-> If you get stuck or want to learn more about the Speech CLI synthesis options, you can run ```spx help synthesize```.
-
-## Speech to text translation
-
-With the Speech CLI, you can also do speech to text translation. Run the following command to capture audio from your default microphone and output the translation as text. Keep in mind that you need to supply the `--source` and `--target` languages with the `translate` command.
-
-```console
-spx translate --microphone --source en-US --target ru-RU
-```
-
-When you're translating into multiple languages, separate the language codes with a semicolon (`;`).
-
-```console
-spx translate --microphone --source en-US --target ru-RU;fr-FR;es-ES
-```
-
-If you want to save the output of your translation, use the `--output` flag. In this example, you'll also read from a file.
-
-```console
-spx translate --file /some/file/path/input.wav --source en-US --target ru-RU --output file /some/file/path/russian_translation.txt
-```
-
-> [!TIP]
-> If you get stuck or want to learn more about the Speech CLI translation options, you can run ```spx help translate```.
--
-## Next steps
-
-* [Install GStreamer to use the Speech CLI with MP3 and other formats](./how-to-use-codec-compressed-audio-input-streams.md)
-* [Configuration options for the Speech CLI](./spx-data-store-configuration.md)
-* [Batch operations with the Speech CLI](./spx-batch-operations.md)
cognitive-services Spx Batch Operations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/spx-batch-operations.md
- Title: "Run batch operations with the Speech CLI - Speech service"-
-description: Learn how to do batch speech to text (speech recognition), batch text to speech (speech synthesis) with the Speech CLI.
- Previously updated : 09/16/2022
-# Run batch operations with the Speech CLI
-
-Batch operations are common tasks when using the Speech service. In this article, you'll learn how to do batch speech to text (speech recognition) and batch text to speech (speech synthesis) with the Speech CLI. Specifically, you'll learn how to:
-
-* Run batch speech recognition on a directory of audio files
-* Run batch speech synthesis by iterating over a `.tsv` file
-
-## Batch speech to text (speech recognition)
-
-The Speech service is often used to recognize speech from audio files. In this example, you'll learn how to iterate over a directory using the Speech CLI to capture the recognition output for each `.wav` file. The `--files` flag is used to point at the directory where audio files are stored, and the wildcard `*.wav` is used to tell the Speech CLI to run recognition on every file with the extension `.wav`. The output for each recognition file is written as a tab separated value in `speech_output.tsv`.
-
-> [!NOTE]
-> The `--threads` argument can also be used in the next section for `spx synthesize` commands. The available threads depend on the CPU and its current load percentage.
-
-```console
-spx recognize --files C:\your_wav_file_dir\*.wav --output file C:\output_dir\speech_output.tsv --threads 10
-```
-
-The following is an example of the output file structure.
-
-```output
-audio.input.id recognizer.session.started.sessionid recognizer.recognized.result.text
-sample_1 07baa2f8d9fd4fbcb9faea451ce05475 A sample wave file.
-sample_2 8f9b378f6d0b42f99522f1173492f013 Sample text synthesized.
-```
-
-## Batch text to speech (speech synthesis)
-
-The easiest way to run batch text to speech is to create a new `.tsv` (tab-separated-value) file and use the `--foreach` command in the Speech CLI. You can create a `.tsv` file using your favorite text editor. For this example, call it `text_synthesis.tsv`:
-
->[!IMPORTANT]
-> When copying the contents of this text file, make sure that your file has a **tab**, not spaces, between the file location and the text. Sometimes, when copying the contents from this example, tabs are converted to spaces, causing the `spx` command to fail when run.
-
-```Input
-audio.output text
-C:\batch_wav_output\wav_1.wav Sample text to synthesize.
-C:\batch_wav_output\wav_2.wav Using the Speech CLI to run batch-synthesis.
-C:\batch_wav_output\wav_3.wav Some more text to test capabilities.
-```
-
-Next, you run a command to point to `text_synthesis.tsv`, perform synthesis on each `text` field, and write the result to the corresponding `audio.output` path as a `.wav` file.
-
-```console
-spx synthesize --foreach in @C:\your\path\to\text_synthesis.tsv
-```
-
-This command is the equivalent of running `spx synthesize --text "Sample text to synthesize" --audio output C:\batch_wav_output\wav_1.wav` **for each** record in the `.tsv` file.
-
-A couple of things to note:
-
-* The column headers, `audio.output` and `text`, correspond to the command-line arguments `--audio output` and `--text`, respectively. Multi-part command-line arguments like `--audio output` should be formatted in the file with no spaces, no leading dashes, and periods separating strings, for example, `audio.output`. Any other existing command-line arguments can be added to the file as additional columns using this pattern.
-* When the file is formatted in this way, no additional arguments are required to be passed to `--foreach`.
-* Be sure to separate each value in the `.tsv` file with a **tab**.
-
-However, if you have a `.tsv` file like the following example, with column headers that **do not match** command-line arguments:
-
-```Input
-wav_path str_text
-C:\batch_wav_output\wav_1.wav Sample text to synthesize.
-C:\batch_wav_output\wav_2.wav Using the Speech CLI to run batch-synthesis.
-C:\batch_wav_output\wav_3.wav Some more text to test capabilities.
-```
-
-You can map these field names to the correct arguments by using the following syntax in the `--foreach` call. This produces the same result as the earlier call.
-
-```console
-spx synthesize --foreach audio.output;text in @C:\your\path\to\text_synthesis.tsv
-```
-
-## Next steps
-
-* [Speech CLI overview](./spx-overview.md)
-* [Speech CLI quickstart](./spx-basics.md)
cognitive-services Spx Data Store Configuration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/spx-data-store-configuration.md
- Title: "Configure the Speech CLI datastore - Speech service"-
-description: Learn how to configure the Speech CLI datastore.
- Previously updated : 05/01/2022
-# Configure the Speech CLI datastore
-
-The [Speech CLI](spx-basics.md) can rely on settings in configuration files, which you can refer to by using an `@` symbol. The Speech CLI saves new settings in a `./spx/data` subdirectory that's created in the current working directory. When looking for a configuration value, the Speech CLI searches your current working directory, then the datastore at `./spx/data`, and then other datastores, including a final read-only datastore in the `spx` binary.
-
-In the [Speech CLI quickstart](spx-basics.md), you used the datastore to save your `@key` and `@region` values, so you didn't need to specify them with each `spx` command. Keep in mind that you can use configuration files to store your own configuration settings, or even use them to pass URLs or other dynamic content generated at runtime.
-
-For more details about datastore files, including use of default configuration files (`@spx.default`, `@default.config`, and `@*.default.config` for command-specific default settings), enter this command:
-
-```console
-spx help advanced setup
-```
-
-## nodefaults
-
-The following example clears the `@my.defaults` configuration file, adds key-value pairs for **key** and **region** in the file, and uses the configuration in a call to `spx recognize`.
-
-```console
-spx config @my.defaults --clear
-spx config @my.defaults --add key 000072626F6E20697320636F6F6C0000
-spx config @my.defaults --add region westus
-
-spx config @my.defaults
-
-spx recognize --nodefaults @my.defaults --file hello.wav
-```
-
-## Dynamic configuration
-
-You can also write dynamic content to a configuration file using the `--output` option.
-
-For example, the following command creates a custom speech model and stores the URL of the new model in a configuration file. The next command waits until the model at that URL is ready for use before returning.
-
-```console
-spx csr model create --name "Example 4" --datasets @my.datasets.txt --output url @my.model.txt
-spx csr model status --model @my.model.txt --wait
-```
-
-The following example writes two URLs to the `@my.datasets.txt` configuration file. In this scenario, `--output` can include an optional **add** keyword to create a configuration file or append to the existing one.
-
-```console
-spx csr dataset create --name "LM" --kind Language --content https://crbn.us/data.txt --output url @my.datasets.txt
-spx csr dataset create --name "AM" --kind Acoustic --content https://crbn.us/audio.zip --output add url @my.datasets.txt
-
-spx config @my.datasets.txt
-```
-
-## SPX config add
-
-For readability, flexibility, and convenience, you can use a preset configuration with select output options.
-
-For example, you might have the following requirements for [captioning](captioning-quickstart.md):
-- Recognize from the input file `caption.this.mp4`.
-- Output WebVTT and SRT captions to the files `caption.vtt` and `caption.srt` respectively.
-- Output the `offset`, `duration`, `resultid`, and `text` of each recognizing event to the file `each.result.tsv`.
-
-You can create a preset configuration named `@caption.defaults` as shown here:
-
-```console
-spx config @caption.defaults --clear
-spx config @caption.defaults --add output.each.recognizing.result.offset=true
-spx config @caption.defaults --add output.each.recognizing.result.duration=true
-spx config @caption.defaults --add output.each.recognizing.result.resultid=true
-spx config @caption.defaults --add output.each.recognizing.result.text=true
-spx config @caption.defaults --add output.each.file.name=each.result.tsv
-spx config @caption.defaults --add output.srt.file.name=caption.srt
-spx config @caption.defaults --add output.vtt.file.name=caption.vtt
-```
-
-The settings are saved to the current directory in a file named `caption.defaults`. Here are the file contents:
-
-```
-output.each.recognizing.result.offset=true
-output.each.recognizing.result.duration=true
-output.each.recognizing.result.resultid=true
-output.each.recognizing.result.text=true
-output.all.file.name=output.result.tsv
-output.each.file.name=each.result.tsv
-output.srt.file.name=caption.srt
-output.vtt.file.name=caption.vtt
-```
-
-Then, to generate [captions](captioning-quickstart.md), you can run this command that imports settings from the `@caption.defaults` preset configuration:
-
-```console
-spx recognize --file caption.this.mp4 --format any --output vtt --output srt @caption.defaults
-```
-
-Using the preset configuration as shown previously is similar to running the following command:
-
-```console
-spx recognize --file caption.this.mp4 --format any --output vtt file caption.vtt --output srt file caption.srt --output each file each.result.tsv --output all file output.result.tsv --output each recognizer recognizing result offset --output each recognizer recognizing duration --output each recognizer recognizing result resultid --output each recognizer recognizing text
-```
-
-## Next steps
-
-* [Batch operations with the Speech CLI](./spx-batch-operations.md)
cognitive-services Spx Output Options https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/spx-output-options.md
- Title: "Configure the Speech CLI output options - Speech service"-
-description: Learn how to configure output options with the Speech CLI.
- Previously updated : 09/16/2022
-# Configure the Speech CLI output options
-
-The [Speech CLI](spx-basics.md) output can be written to standard output or specified files.
-
-For contextual help in the Speech CLI, you can run any of the following commands:
-
-```console
-spx help recognize output examples
-spx help synthesize output examples
-spx help translate output examples
-spx help intent output examples
-```
-
-## Standard output
-
-If the file argument is a hyphen (`-`), the results are written to standard output as shown in the following example.
-
-```console
-spx recognize --file caption.this.mp4 --format any --output vtt file - --output srt file - --output each file - @output.each.detailed --property SpeechServiceResponse_StablePartialResultThreshold=0 --profanity masked
-```
-
-## Default file output
-
-If you omit the `file` option, output is written to default files in the current directory.
-
-For example, run the following command to write WebVTT and SRT [captions](captioning-concepts.md) to their own default files:
-
-```console
-spx recognize --file caption.this.mp4 --format any --output vtt --output srt --output each text --output all duration
-```
-
-The default file names are as follows, where the `<EPOCH_TIME>` is replaced at run time.
-- The default SRT file name includes the input file name and the local operating system epoch time: `output.caption.this.<EPOCH_TIME>.srt`
-- The default WebVTT file name includes the input file name and the local operating system epoch time: `output.caption.this.<EPOCH_TIME>.vtt`
-- The default `output each` file name, `each.<EPOCH_TIME>.tsv`, includes the local operating system epoch time. This file isn't created unless you specify the `--output each` option.
-- The default `output all` file name, `output.<EPOCH_TIME>.tsv`, includes the local operating system epoch time. This file is created by default.
-
-## Output to specific files
-
-For output to files that you specify instead of the [default files](#default-file-output), set the `file` option to the file name.
-
-For example, to output both WebVTT and SRT [captions](captioning-concepts.md) to files that you specify, run the following command:
-
-```console
-spx recognize --file caption.this.mp4 --format any --output vtt file caption.vtt --output srt file caption.srt --output each text --output each file each.result.tsv --output all file output.result.tsv
-```
-
-The preceding command also outputs the `each` and `all` results to the specified files.
-
-## Output to multiple files
-
-For translations with `spx translate`, separate files are created for the source language (such as `--source en-US`) and each target language (such as `--target "de;fr;zh-Hant"`).
-
-For example, to output translated SRT and WebVTT captions, run the following command:
-
-```console
-spx translate --source en-US --target "de;fr;zh-Hant" --file caption.this.mp4 --format any --output vtt file caption.vtt --output srt file caption.srt
-```
-
-Captions should then be written to the following files: *caption.srt*, *caption.vtt*, *caption.de.srt*, *caption.de.vtt*, *caption.fr.srt*, *caption.fr.vtt*, *caption.zh-Hant.srt*, and *caption.zh-Hant.vtt*.
-
-## Suppress header
-
-You can suppress the header line in the output file by setting the `has header false` option:
-
-```
-spx recognize --nodefaults @my.defaults --file audio.wav --output recognized text --output file has header false
-```
-
-See [Configure the Speech CLI datastore](spx-data-store-configuration.md#nodefaults) for more information about `--nodefaults`.
-
-## Next steps
-
-* [Captioning quickstart](./captioning-quickstart.md)
cognitive-services Spx Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/spx-overview.md
- Title: The Azure AI Speech CLI-
-description: In this article, you learn about the Speech CLI, a command-line tool for using Speech service without having to write any code.
- Previously updated : 09/16/2022
-# What is the Speech CLI?
-
-The Speech CLI is a command-line tool for using Speech service without having to write any code. The Speech CLI requires minimal setup. You can easily use it to experiment with key features of Speech service and see how it works with your use cases. Within minutes, you can run simple test workflows, such as batch speech-recognition from a directory of files or text to speech on a collection of strings from a file. Beyond simple workflows, the Speech CLI is production-ready, and you can scale it up to run larger processes by using automated `.bat` or shell scripts.
-
-Most features in the Speech SDK are available in the Speech CLI, and some advanced features and customizations are simplified in the Speech CLI. As you're deciding when to use the Speech CLI or the Speech SDK, consider the following guidance.
-
-Use the Speech CLI when:
-* You want to experiment with Speech service features with minimal setup and without having to write code.
-* You have relatively simple requirements for a production application that uses Speech service.
-
-Use the Speech SDK when:
-* You want to integrate Speech service functionality within a specific language or platform (for example, C#, Python, or C++).
-* You have complex requirements that might require advanced service requests.
-* You're developing custom behavior, including response streaming.
-
-## Core features
-
-* **Speech recognition**: Convert speech to text either from audio files or directly from a microphone, or transcribe a recorded conversation.
-
-* **Speech synthesis**: Convert text to speech either by using input from text files or by inputting directly from the command line. Customize speech output characteristics by using [Speech Synthesis Markup Language (SSML) configurations](speech-synthesis-markup.md).
-
-* **Speech translation**: Translate audio in a source language to text or audio in a target language.
-
-* **Run on Azure compute resources**: Send Speech CLI commands to run on an Azure remote compute resource by using `spx webjob`.
-
-## Get started
-
-To get started with the Speech CLI, see the [quickstart](spx-basics.md). This article shows you how to run some basic commands. It also gives you slightly more advanced commands for running batch operations for speech to text and text to speech. After you've read the basics article, you should understand the syntax well enough to start writing some custom commands or automate simple Speech service operations.
-
-## Next steps
-
-- [Get started with the Azure AI Speech CLI](spx-basics.md)
-- [Speech CLI configuration options](./spx-data-store-configuration.md)
-- [Speech CLI batch operations](./spx-batch-operations.md)
cognitive-services Swagger Documentation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/swagger-documentation.md
- Title: Generate a REST API client library - Speech service-
-description: The Swagger documentation can be used to auto-generate SDKs for a number of programming languages.
- Previously updated : 10/03/2022
-# Generate a REST API client library for the Speech to text REST API
-
-Speech service offers a Swagger specification to interact with a handful of REST APIs used to import data, create models, test model accuracy, create custom endpoints, queue up batch transcriptions, and manage subscriptions. Most operations available through the [Custom Speech area of the Speech Studio](https://aka.ms/speechstudio/customspeech) can be completed programmatically using these APIs.
-
-> [!NOTE]
-> Speech service has several REST APIs for [Speech to text](rest-speech-to-text.md) and [Text to speech](rest-text-to-speech.md).
->
-> However only [Speech to text REST API](rest-speech-to-text.md) is documented in the Swagger specification. See the documents referenced in the previous paragraph for the information on all other Speech service REST APIs.
-
-## Generating code from the Swagger specification
-
-The [Swagger specification](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1) has options that allow you to quickly test for various paths. However, sometimes it's desirable to generate code for all paths, creating a single library of calls that you can base future solutions on. Let's take a look at the process to generate a Python library.
-
-You'll need to set Swagger to the region of your Speech resource. You can confirm the region in the **Overview** part of your Speech resource settings in Azure portal. The complete list of supported regions is available [here](regions.md#speech-service).
-
-1. In a browser, go to the Swagger specification for your [region](regions.md#speech-service):
- `https://<your-region>.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1`
-1. On that page, click **API definition**, and click **Swagger**. Copy the URL of the page that appears.
-1. In a new browser, go to [https://editor.swagger.io](https://editor.swagger.io)
-1. Click **File**, click **Import URL**, paste the URL, and click **OK**.
-1. Click **Generate Client** and select **python**. The client library downloads to your computer in a `.zip` file.
-1. Extract everything from the downloaded `.zip` file. For example, you can use `tar -xf` to extract it.
-1. Install the extracted module into your Python environment:
- `pip install path/to/package/python-client`
-1. The installed package is named `swagger_client`. Check that the installation has worked:
- `python -c "import swagger_client"`
-
-You can use the Python library that you generated with the [Speech service samples on GitHub](https://aka.ms/csspeech/samples).
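
What the generated package exposes depends on the specification, but swagger-codegen Python clients generally share the same skeleton: a `Configuration`, an `ApiClient`, and one generated API class per operation group. The following sketch shows only that common wiring; the host URL and key are placeholders, and the concrete API class names come from your generated code, not from this example:

```python
# Hedged sketch: wiring up a swagger-codegen generated client.
# Configuration and ApiClient are standard swagger-codegen classes; the concrete
# API classes and their methods depend on the generated Speech to text v3.1 spec.
import swagger_client

configuration = swagger_client.Configuration()
configuration.host = "https://YOUR_REGION.api.cognitive.microsoft.com/speechtotext/v3.1"  # placeholder
configuration.api_key["Ocp-Apim-Subscription-Key"] = "YOUR_RESOURCE_KEY"                  # placeholder

api_client = swagger_client.ApiClient(configuration)
# From here, instantiate the generated API classes (for example, a transcriptions API)
# with api_client and call their operations; see the generated README for the exact names.
```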
-
-## Next steps
-
-* [Speech service samples on GitHub](https://aka.ms/csspeech/samples).
-* [Speech to text REST API](rest-speech-to-text.md)
cognitive-services Text To Speech https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/text-to-speech.md
- Title: Text to speech overview - Speech service-
-description: Get an overview of the benefits and capabilities of the text to speech feature of the Speech service.
- Previously updated : 09/25/2022
-keywords: text to speech
--
-# What is text to speech?
-
-In this overview, you learn about the benefits and capabilities of the text to speech feature of the Speech service, which is part of Azure Cognitive Services.
-
-Text to speech enables your applications, tools, or devices to convert text into humanlike synthesized speech. The text to speech capability is also known as speech synthesis. Use humanlike prebuilt neural voices out of the box, or create a custom neural voice that's unique to your product or brand. For a full list of supported voices, languages, and locales, see [Language and voice support for the Speech service](language-support.md?tabs=tts).
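
For example, here's a minimal Python sketch that speaks a short string with a prebuilt neural voice through the Speech SDK; the voice name and the `SPEECH_KEY`/`SPEECH_REGION` environment variable names are placeholders you can change:

```python
# Minimal sketch: synthesize a short string with a prebuilt neural voice.
import os
import azure.cognitiveservices.speech as speechsdk

speech_config = speechsdk.SpeechConfig(
    subscription=os.environ["SPEECH_KEY"], region=os.environ["SPEECH_REGION"])
speech_config.speech_synthesis_voice_name = "en-US-JennyNeural"  # example prebuilt neural voice

synthesizer = speechsdk.SpeechSynthesizer(speech_config=speech_config)  # default speaker output
result = synthesizer.speak_text_async("Text to speech converts text into humanlike speech.").get()

if result.reason == speechsdk.ResultReason.SynthesizingAudioCompleted:
    print("Synthesis completed.")
```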
-
-## Core features
-
-Text to speech includes the following features:
-
-| Feature | Summary | Demo |
-| | | |
-| Prebuilt neural voice (called *Neural* on the [pricing page](https://azure.microsoft.com/pricing/details/cognitive-services/speech-services/)) | Highly natural out-of-the-box voices. Create an Azure account and Speech service subscription, and then use the [Speech SDK](./get-started-text-to-speech.md) or visit the [Speech Studio portal](https://speech.microsoft.com/portal) and select prebuilt neural voices to get started. Check the [pricing details](https://azure.microsoft.com/pricing/details/cognitive-services/speech-services/). | Check the [Voice Gallery](https://speech.microsoft.com/portal/voicegallery) and determine the right voice for your business needs. |
-| Custom Neural Voice (called *Custom Neural* on the [pricing page](https://azure.microsoft.com/pricing/details/cognitive-services/speech-services/)) | Easy-to-use self-service for creating a natural brand voice, with limited access for responsible use. Create an Azure account and Speech service subscription (with the S0 tier), and [apply](https://aka.ms/customneural) to use the custom neural feature. After you've been granted access, visit the [Speech Studio portal](https://speech.microsoft.com/portal) and select **Custom Voice** to get started. Check the [pricing details](https://azure.microsoft.com/pricing/details/cognitive-services/speech-services/). | Check the [voice samples](https://aka.ms/customvoice). |
-
-### More about neural text to speech features
-
-The text to speech feature of the Speech service on Azure has been fully upgraded to the neural text to speech engine. This engine uses deep neural networks to make the voices of computers nearly indistinguishable from the recordings of people. With the clear articulation of words, neural text to speech significantly reduces listening fatigue when users interact with AI systems.
-
-The patterns of stress and intonation in spoken language are called _prosody_. Traditional text to speech systems break down prosody into separate linguistic analysis and acoustic prediction steps that are governed by independent models. That can result in muffled, buzzy voice synthesis.
-
-Here's more information about neural text to speech features in the Speech service, and how they overcome the limits of traditional text to speech systems:
-
-* **Real-time speech synthesis**: Use the [Speech SDK](./get-started-text-to-speech.md) or [REST API](rest-text-to-speech.md) to convert text to speech by using [prebuilt neural voices](language-support.md?tabs=tts) or [custom neural voices](custom-neural-voice.md).
-
-* **Asynchronous synthesis of long audio**: Use the [batch synthesis API](batch-synthesis.md) (Preview) to asynchronously synthesize text to speech files longer than 10 minutes (for example, audio books or lectures). Unlike synthesis performed via the Speech SDK or Speech to text REST API, responses aren't returned in real-time. The expectation is that requests are sent asynchronously, responses are polled for, and synthesized audio is downloaded when the service makes it available.
-
-* **Prebuilt neural voices**: Microsoft neural text to speech capability uses deep neural networks to overcome the limits of traditional speech synthesis with regard to stress and intonation in spoken language. Prosody prediction and voice synthesis happen simultaneously, which results in more fluid and natural-sounding outputs. Each prebuilt neural voice model is available at 24kHz and high-fidelity 48kHz. You can use neural voices to:
-
- - Make interactions with chatbots and voice assistants more natural and engaging.
- - Convert digital texts such as e-books into audiobooks.
- - Enhance in-car navigation systems.
-
- For a full list of platform neural voices, see [Language and voice support for the Speech service](language-support.md?tabs=tts).
-
-* **Fine-tuning text to speech output with SSML**: Speech Synthesis Markup Language (SSML) is an XML-based markup language that's used to customize text to speech outputs. With SSML, you can adjust pitch, add pauses, improve pronunciation, change speaking rate, adjust volume, and attribute multiple voices to a single document.
-
-  You can use SSML to define your own lexicons or switch to different speaking styles. With the [multilingual voices](https://techcommunity.microsoft.com/t5/azure-ai/azure-text-to-speech-updates-at-build-2021/ba-p/2382981), you can also adjust the speaking languages via SSML. To fine-tune the voice output for your scenario, see [Improve synthesis with Speech Synthesis Markup Language](speech-synthesis-markup.md) and [Speech synthesis with the Audio Content Creation tool](how-to-audio-content-creation.md). A minimal SSML sketch follows this feature list.
-
-* **Visemes**: [Visemes](how-to-speech-synthesis-viseme.md) are the key poses in observed speech, including the position of the lips, jaw, and tongue in producing a particular phoneme. Visemes have a strong correlation with voices and phonemes.
-
- By using viseme events in Speech SDK, you can generate facial animation data. This data can be used to animate faces in lip-reading communication, education, entertainment, and customer service. Viseme is currently supported only for the `en-US` (US English) [neural voices](language-support.md?tabs=tts).
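
As a small illustration of the SSML feature described in the list above, the following Python sketch passes an SSML document instead of plain text; the voice name, rate, and break duration are arbitrary example values:

```python
# Hedged sketch: pass SSML instead of plain text to adjust rate and add a pause.
import os
import azure.cognitiveservices.speech as speechsdk

speech_config = speechsdk.SpeechConfig(
    subscription=os.environ["SPEECH_KEY"], region=os.environ["SPEECH_REGION"])
synthesizer = speechsdk.SpeechSynthesizer(speech_config=speech_config)

ssml = """<speak version='1.0' xml:lang='en-US'
       xmlns='http://www.w3.org/2001/10/synthesis'>
  <voice name='en-US-JennyNeural'>
    <prosody rate='-10%'>This sentence is spoken a little more slowly.</prosody>
    <break time='500ms'/>
    And this one follows a short pause.
  </voice>
</speak>"""

synthesizer.speak_ssml_async(ssml).get()
```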
-
-> [!NOTE]
-> We plan to retire the traditional/standard voices and non-neural custom voice in 2024. After that, we'll no longer support them.
->
-> If your applications, tools, or products are using any of the standard voices and custom voices, you must migrate to the neural version. For more information, see [Migrate to neural voices](migration-overview-neural-voice.md).
-
-## Get started
-
-To get started with text to speech, see the [quickstart](get-started-text-to-speech.md). Text to speech is available via the [Speech SDK](speech-sdk.md), the [REST API](rest-text-to-speech.md), and the [Speech CLI](spx-overview.md).
-
-> [!TIP]
-> To convert text to speech with a no-code approach, try the [Audio Content Creation](how-to-audio-content-creation.md) tool in [Speech Studio](https://aka.ms/speechstudio/audiocontentcreation).
-
-## Sample code
-
-Sample code for text to speech is available on GitHub. These samples cover text to speech conversion in most popular programming languages:
-
-* [Text to speech samples (SDK)](https://github.com/Azure-Samples/cognitive-services-speech-sdk)
-* [Text to speech samples (REST)](https://github.com/Azure-Samples/Cognitive-Speech-TTS)
-
-## Custom Neural Voice
-
-In addition to prebuilt neural voices, you can create and fine-tune custom neural voices that are unique to your product or brand. All it takes to get started is a handful of audio files and the associated transcriptions. For more information, see [Get started with Custom Neural Voice](how-to-custom-voice.md).
-
-## Pricing note
-
-### Billable characters
-When you use the text to speech feature, you're billed for each character that's converted to speech, including punctuation. Although the SSML document itself is not billable, optional elements that are used to adjust how the text is converted to speech, like phonemes and pitch, are counted as billable characters. Here's a list of what's billable:
-
-* Text passed to the text to speech feature in the SSML body of the request
-* All markup within the text field of the request body in the SSML format, except for `<speak>` and `<voice>` tags
-* Letters, punctuation, spaces, tabs, markup, and all white-space characters
-* Every code point defined in Unicode
-
-For detailed information, see [Speech service pricing](https://azure.microsoft.com/pricing/details/cognitive-services/speech-services/).
-
-> [!IMPORTANT]
-> Each Chinese character is counted as two characters for billing, including kanji used in Japanese, hanja used in Korean, or hanzi used in other languages.
-
-### Model training and hosting time for Custom Neural Voice
-
-Custom Neural Voice training and hosting are both calculated by hour and billed per second. For the billing unit price, see [Speech service pricing](https://azure.microsoft.com/pricing/details/cognitive-services/speech-services/).
-
-Custom Neural Voice (CNV) training time is measured by 'compute hour' (a unit to measure machine running time). Typically, when training a voice model, two computing tasks run in parallel, so the calculated compute hours are longer than the actual training time. On average, it takes less than one compute hour to train a CNV Lite voice. For CNV Pro, it usually takes 20 to 40 compute hours to train a single-style voice, and around 90 compute hours to train a multi-style voice. The CNV training time is billed with a cap of 96 compute hours. So if a voice model is trained in 98 compute hours, you're only charged for 96 compute hours.
-
-Custom Neural Voice (CNV) endpoint hosting is measured by the actual time (hour). The hosting time (hours) for each endpoint is calculated at 00:00 UTC every day for the previous 24 hours. For example, if the endpoint has been active for 24 hours on day one, it will be billed for 24 hours at 00:00 UTC the second day. If the endpoint is newly created or has been suspended during the day, it will be billed for its accumulated running time until 00:00 UTC the second day. If the endpoint is not currently hosted, it will not be billed. In addition to the daily calculation at 00:00 UTC each day, the billing is also triggered immediately when an endpoint is deleted or suspended. For example, for an endpoint created at 08:00 UTC on December 1, the hosting hour will be calculated to 16 hours at 00:00 UTC on December 2 and 24 hours at 00:00 UTC on December 3. If the user suspends hosting the endpoint at 16:30 UTC on December 3, the duration (16.5 hours) from 00:00 to 16:30 UTC on December 3 will be calculated for billing.
-
-## Reference docs
-
-* [Speech SDK](speech-sdk.md)
-* [REST API: Text to speech](rest-text-to-speech.md)
-
-## Responsible AI
-
-An AI system includes not only the technology, but also the people who will use it, the people who will be affected by it, and the environment in which it is deployed. Read the transparency notes to learn about responsible AI use and deployment in your systems.
-
-* [Transparency note and use cases for Custom Neural Voice](/legal/cognitive-services/speech-service/custom-neural-voice/transparency-note-custom-neural-voice?context=/azure/cognitive-services/speech-service/context/context)
-* [Characteristics and limitations for using Custom Neural Voice](/legal/cognitive-services/speech-service/custom-neural-voice/characteristics-and-limitations-custom-neural-voice?context=/azure/cognitive-services/speech-service/context/context)
-* [Limited access to Custom Neural Voice](/legal/cognitive-services/speech-service/custom-neural-voice/limited-access-custom-neural-voice?context=/azure/cognitive-services/speech-service/context/context)
-* [Guidelines for responsible deployment of synthetic voice technology](/legal/cognitive-services/speech-service/custom-neural-voice/concepts-guidelines-responsible-deployment-synthetic?context=/azure/cognitive-services/speech-service/context/context)
-* [Disclosure for voice talent](/legal/cognitive-services/speech-service/disclosure-voice-talent?context=/azure/cognitive-services/speech-service/context/context)
-* [Disclosure design guidelines](/legal/cognitive-services/speech-service/custom-neural-voice/concepts-disclosure-guidelines?context=/azure/cognitive-services/speech-service/context/context)
-* [Disclosure design patterns](/legal/cognitive-services/speech-service/custom-neural-voice/concepts-disclosure-patterns?context=/azure/cognitive-services/speech-service/context/context)
-* [Code of Conduct for Text to speech integrations](/legal/cognitive-services/speech-service/tts-code-of-conduct?context=/azure/cognitive-services/speech-service/context/context)
-* [Data, privacy, and security for Custom Neural Voice](/legal/cognitive-services/speech-service/custom-neural-voice/data-privacy-security-custom-neural-voice?context=/azure/cognitive-services/speech-service/context/context)
-
-## Next steps
-
-* [Text to speech quickstart](get-started-text-to-speech.md)
-* [Get the Speech SDK](speech-sdk.md)
cognitive-services Troubleshooting https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/troubleshooting.md
- Title: Troubleshoot the Speech SDK - Speech service-
-description: This article provides information to help you solve issues you might encounter when you use the Speech SDK.
- Previously updated : 12/08/2022
-# Troubleshoot the Speech SDK
-
-This article provides information to help you solve issues you might encounter when you use the Speech SDK.
-
-## Authentication failed
-
-You might observe one of several authentication errors, depending on the programming environment, API, or SDK. Here are some example errors:
-- Did you set the speech resource key and region values?
-- AuthenticationFailure
-- HTTP 403 Forbidden or HTTP 401 Unauthorized. Connection requests without a valid `Ocp-Apim-Subscription-Key` or `Authorization` header are rejected with a status of 403 or 401.
-- ValueError: cannot construct SpeechConfig with the given arguments (or a variation of this message). This error could be observed, for example, when you run one of the Speech SDK for Python quickstarts without setting environment variables. You might also see it when you set the environment variables to invalid values for your key or region.
-- Exception with an error code: 0x5. This access denied error could be observed, for example, when you run one of the Speech SDK for C# quickstarts without setting environment variables.
-
-For baseline authentication troubleshooting tips, see [validate your resource key](#validate-your-resource-key) and [validate an authorization token](#validate-an-authorization-token). For more information about confirming credentials, see [get the keys for your resource](../cognitive-services-apis-create-account.md?tabs=speech#get-the-keys-for-your-resource).
-
-### Validate your resource key
-
-You can verify that you have a valid resource key by running one of the following commands.
-
-> [!NOTE]
-> Replace `YOUR_RESOURCE_KEY` and `YOUR_REGION` with your own resource key and associated region.
-
-# [PowerShell](#tab/powershell)
-
-```powershell
-$FetchTokenHeader = @{
- 'Content-type'='application/x-www-form-urlencoded'
- 'Content-Length'= '0'
- 'Ocp-Apim-Subscription-Key' = 'YOUR_RESOURCE_KEY'
-}
-$OAuthToken = Invoke-RestMethod -Method POST -Uri https://YOUR_REGION.api.cognitive.microsoft.com/sts/v1.0/issueToken -Headers $FetchTokenHeader
-$OAuthToken
-```
-
-# [cURL](#tab/curl)
-
-```
-curl -v -X POST "https://YOUR_REGION.api.cognitive.microsoft.com/sts/v1.0/issueToken" -H "Ocp-Apim-Subscription-Key: YOUR_RESOURCE_KEY" -H "Content-type: application/x-www-form-urlencoded" -H "Content-Length: 0"
-```
---
-If you entered a valid resource key, the command returns an authorization token; otherwise, an error is returned.
-
-### Validate an authorization token
-
-If you're using an authorization token for authentication, you might see an authentication error because:
-- The authorization token is invalid
-- The authorization token is expired
-
-If you use an authorization token for authentication, run one of the following commands to verify that the authorization token is still valid. Tokens are valid for 10 minutes.
-
-> [!NOTE]
-> Replace `YOUR_AUDIO_FILE` with the path to your prerecorded audio file. Replace `YOUR_ACCESS_TOKEN` with the authorization token returned in the preceding step. Replace `YOUR_REGION` with the correct region.
-
-# [PowerShell](#tab/powershell)
-
-```powershell
-$SpeechServiceURI =
-'https://YOUR_REGION.stt.speech.microsoft.com/speech/recognition/interactive/cognitiveservices/v1?language=en-US'
-
-# $OAuthToken is the authorization token returned by the token service.
-$RecoRequestHeader = @{
- 'Authorization' = 'Bearer '+ $OAuthToken
- 'Transfer-Encoding' = 'chunked'
- 'Content-type' = 'audio/wav; codec=audio/pcm; samplerate=16000'
-}
-
-# Read audio into byte array.
-$audioBytes = [System.IO.File]::ReadAllBytes("YOUR_AUDIO_FILE")
-
-$RecoResponse = Invoke-RestMethod -Method POST -Uri $SpeechServiceURI -Headers $RecoRequestHeader -Body $audioBytes
-
-# Show the result.
-$RecoResponse
-```
-
-# [cURL](#tab/curl)
-
-```
-curl -v -X POST "https://YOUR_REGION.stt.speech.microsoft.com/speech/recognition/interactive/cognitiveservices/v1?language=en-US" -H "Authorization: Bearer YOUR_ACCESS_TOKEN" -H "Transfer-Encoding: chunked" -H "Content-type: audio/wav; codec=audio/pcm; samplerate=16000" --data-binary @YOUR_AUDIO_FILE
-```
---
-If you entered a valid authorization token, the command returns the transcription for your audio file; otherwise, an error is returned.
--
-## InitialSilenceTimeout via RecognitionStatus
-
-This issue usually is observed with [single-shot recognition](./how-to-recognize-speech.md#single-shot-recognition) of a single utterance. For example, the error can be returned under the following circumstances:
-
-* The audio begins with a long stretch of silence. In that case, the service stops the recognition after a few seconds and returns `InitialSilenceTimeout`.
-* The audio uses an unsupported codec format, which causes the audio data to be treated as silence.
-
-It's OK to have silence at the beginning of audio, but only when you use [continuous recognition](./how-to-recognize-speech.md#continuous-recognition).
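
As a hedged Python sketch of that workaround, continuous recognition keeps listening through leading silence that would cause a single `recognize_once()` call to time out; the file name and environment variable names are placeholders:

```python
# Hedged sketch: continuous recognition tolerates leading silence that can cause
# a single-shot recognize_once() call to return an InitialSilenceTimeout/NoMatch result.
import os
import time
import azure.cognitiveservices.speech as speechsdk

speech_config = speechsdk.SpeechConfig(
    subscription=os.environ["SPEECH_KEY"], region=os.environ["SPEECH_REGION"])
audio_config = speechsdk.audio.AudioConfig(filename="long-leading-silence.wav")  # placeholder file
recognizer = speechsdk.SpeechRecognizer(speech_config=speech_config, audio_config=audio_config)

done = False
def on_stopped(evt):
    global done
    done = True

recognizer.recognized.connect(lambda evt: print("Recognized:", evt.result.text))
recognizer.session_stopped.connect(on_stopped)
recognizer.canceled.connect(on_stopped)

recognizer.start_continuous_recognition()
while not done:
    time.sleep(0.5)
recognizer.stop_continuous_recognition()
```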
-
-## SPXERR_AUDIO_SYS_LIBRARY_NOT_FOUND
-
-This error can be returned, for example, when multiple versions of Python are installed, or if you're not using a supported version of Python. You can try using a different Python interpreter, or uninstall all Python versions and reinstall the latest version of Python and the Speech SDK.
-
-## HTTP 400 Bad Request
-
-This error usually occurs when the request body contains invalid audio data. Only WAV format is supported. Also, check the request's headers to make sure you specify appropriate values for `Content-Type` and `Content-Length`.
-
-## HTTP 408 Request Timeout
-
-The error most likely occurs because no audio data is being sent to the service. This error also might be caused by network issues.
-
-## Next steps
-
-* [Review the release notes](releasenotes.md)
cognitive-services Tutorial Voice Enable Your Bot Speech Sdk https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/tutorial-voice-enable-your-bot-speech-sdk.md
- Title: "Tutorial: Voice-enable your bot - Speech service"-
-description: In this tutorial, you'll create an echo bot and configure a client app that lets you speak to your bot and hear it respond back to you.
- Previously updated : 01/24/2022
-# Tutorial: Voice-enable your bot
-
-You can use Azure Cognitive Services Speech to voice-enable a chat bot.
-
-In this tutorial, you'll use the Microsoft Bot Framework to create a bot that responds to what you say. You'll deploy your bot to Azure and register it with the Bot Framework Direct Line Speech channel. Then, you'll configure a sample client app for Windows that lets you speak to your bot and hear it speak back to you.
-
-To complete the tutorial, you don't need extensive experience or familiarity with Azure, Bot Framework bots, or Direct Line Speech.
-
-The voice-enabled chat bot that you make in this tutorial follows these steps:
-
-1. The sample client application is configured to connect to the Direct Line Speech channel and the echo bot.
-1. When the user presses a button, voice audio streams from the microphone. Or audio is continuously recorded when a custom keyword is used.
-1. If a custom keyword is used, keyword detection happens on the local device, gating audio streaming to the cloud.
-1. By using the Speech SDK, the sample client application connects to the Direct Line Speech channel and streams audio.
-1. Optionally, higher-accuracy keyword verification happens on the service.
-1. The audio is passed to the speech recognition service and transcribed to text.
-1. The recognized text is passed to the echo bot as a Bot Framework activity.
-1. The response text is turned into audio by the text to speech service, and streamed back to the client application for playback.
-
-![Diagram that illustrates the flow of the Direct Line Speech channel.](media/tutorial-voice-enable-your-bot-speech-sdk/diagram.png "The Speech Channel flow")
-
-> [!NOTE]
-> The steps in this tutorial don't require a paid service. As a new Azure user, you can use credits from your free Azure trial subscription and the free tier of the Speech service to complete this tutorial.
-
-Here's what this tutorial covers:
-> [!div class="checklist"]
-> * Create new Azure resources.
-> * Build, test, and deploy the echo bot sample to Azure App Service.
-> * Register your bot with a Direct Line Speech channel.
-> * Build and run the Windows Voice Assistant Client to interact with your echo bot.
-> * Add custom keyword activation.
-> * Learn to change the language of the recognized and spoken speech.
-
-## Prerequisites
-
-Here's what you'll need to complete this tutorial:
-
-- A Windows 10 PC with a working microphone and speakers (or headphones).
-- [Visual Studio 2017](https://visualstudio.microsoft.com/downloads/) or later, with the **ASP.NET and web development** workload installed.
-- [.NET Framework Runtime 4.6.1](https://dotnet.microsoft.com/download) or later.
-- An Azure account. [Sign up for free](https://azure.microsoft.com/free/cognitive-services/).
-- A [GitHub](https://github.com/) account.
-- [Git for Windows](https://git-scm.com/download/win).
-
-## Create a resource group
-
-The client app that you'll create in this tutorial uses a handful of Azure services. To reduce the round-trip time for responses from your bot, you'll want to make sure that these services are in the same Azure region.
-
-This section walks you through creating a resource group in the West US region. You'll use this resource group when you're creating individual resources for the Bot Framework, the Direct Line Speech channel, and the Speech service.
-
-1. Go to the [Azure portal page for creating a resource group](https://portal.azure.com/#create/Microsoft.ResourceGroup).
-1. Provide the following information:
- * Set **Subscription** to **Free Trial**. (You can also use an existing subscription.)
- * Enter a name for **Resource group**. We recommend **SpeechEchoBotTutorial-ResourceGroup**.
- * From the **Region** dropdown menu, select **West US**.
-1. Select **Review and create**. You should see a banner that reads **Validation passed**.
-1. Select **Create**. It might take a few minutes to create the resource group.
-1. As with the resources you'll create later in this tutorial, it's a good idea to pin this resource group to your dashboard for easy access. If you want to pin this resource group, select the pin icon next to the name.
-
-### Choose an Azure region
-
-Ensure that you use a [supported Azure region](regions.md#voice-assistants). The Direct Line Speech channel uses the text to speech service, which has neural and standard voices. Neural voices are available in [these Azure regions](regions.md#speech-service), and standard voices (which are being retired) are available in [these Azure regions](how-to-migrate-to-prebuilt-neural-voice.md).
-
-For more information about regions, see [Azure locations](https://azure.microsoft.com/global-infrastructure/locations/).
-
-## Create resources
-
-Now that you have a resource group in a supported region, the next step is to create individual resources for each service that you'll use in this tutorial.
-
-### Create a Speech service resource
-
-1. Go to the [Azure portal page for creating a Speech service resource](https://portal.azure.com/#create/Microsoft.CognitiveServicesSpeechServices).
-1. Provide the following information:
- * For **Name**, we recommend **SpeechEchoBotTutorial-Speech** as the name of your resource.
- * For **Subscription**, make sure that **Free Trial** is selected.
- * For **Location**, select **West US**.
- * For **Pricing tier**, select **F0**. This is the free tier.
- * For **Resource group**, select **SpeechEchoBotTutorial-ResourceGroup**.
-1. After you've entered all required information, select **Create**. It might take a few minutes to create the resource.
-1. Later in this tutorial, you'll need subscription keys for this service. You can access these keys at any time from your resource's **Overview** area (under **Manage keys**) or the **Keys** area.
-
-At this point, check that your resource group (**SpeechEchoBotTutorial-ResourceGroup**) has a Speech service resource:
-
-| Name | Type | Location |
-||-|-|
-| SpeechEchoBotTutorial-Speech | Speech | West US |
-
-### Create an Azure App Service plan
-
-An App Service plan defines a set of compute resources for a web app to run.
-
-1. Go to the [Azure portal page for creating an Azure App Service plan](https://portal.azure.com/#create/Microsoft.AppServicePlanCreate).
-1. Provide the following information:
- * Set **Subscription** to **Free Trial**. (You can also use an existing subscription.)
- * For **Resource group**, select **SpeechEchoBotTutorial-ResourceGroup**.
- * For **Name**, we recommend **SpeechEchoBotTutorial-AppServicePlan** as the name of your plan.
- * For **Operating System**, select **Windows**.
- * For **Region**, select **West US**.
- * For **Pricing Tier**, make sure that **Standard S1** is selected. This should be the default value. If it isn't, set **Operating System** to **Windows**.
-1. Select **Review and create**. You should see a banner that reads **Validation passed**.
-1. Select **Create**. It might take a few minutes to create the resource.
-
-At this point, check that your resource group (**SpeechEchoBotTutorial-ResourceGroup**) has two resources:
-
-| Name | Type | Location |
-||-|-|
-| SpeechEchoBotTutorial-AppServicePlan | App Service Plan | West US |
-| SpeechEchoBotTutorial-Speech | Cognitive Services | West US |
-
-## Build an echo bot
-
-Now that you've created resources, start with the echo bot sample, which echoes the text that you've entered as its response. The sample code is already configured to work with the Direct Line Speech channel, which you'll connect after you've deployed the bot to Azure.
-
-> [!NOTE]
-> The instructions that follow, along with more information about the echo bot, are available in the [sample's README on GitHub](https://github.com/microsoft/BotBuilder-Samples/blob/master/samples/csharp_dotnetcore/02.echo-bot/README.md).
-
-### Run the bot sample on your machine
-
-1. Clone the samples repository:
-
- ```bash
- git clone https://github.com/Microsoft/botbuilder-samples.git
- ```
-
-1. Open Visual Studio.
-1. From the toolbar, select **File** > **Open** > **Project/Solution**. Then open the project solution:
-
- ```
- samples\csharp_dotnetcore\02.echo-bot\EchoBot.sln
- ```
-
-1. After the project is loaded, select the <kbd>F5</kbd> key to build and run the project.
-
- In the browser that opens, you'll see a screen similar to this one:
-
- > [!div class="mx-imgBorder"]
- > [![Screenshot that shows the EchoBot page with the message that your bot is ready.](media/tutorial-voice-enable-your-bot-speech-sdk/echobot-running-on-localhost.png "EchoBot running on localhost")](media/tutorial-voice-enable-your-bot-speech-sdk/echobot-running-on-localhost.png#lightbox)
-
-### Test the bot sample with the Bot Framework Emulator
-
-The [Bot Framework Emulator](https://github.com/microsoft/botframework-emulator) is a desktop app that lets bot developers test and debug their bots locally (or remotely through a tunnel). The emulator accepts typed text as the input (not voice). The bot will also respond with text.
-
-Follow these steps to use the Bot Framework Emulator to test your echo bot running locally, with text input and text output. After you deploy the bot to Azure, you'll test it with voice input and voice output.
-
-1. Install [Bot Framework Emulator](https://github.com/Microsoft/BotFramework-Emulator/releases/latest) version 4.3.0 or later.
-1. Open the Bot Framework Emulator, and then select **File** > **Open Bot**.
-1. Enter the URL for your bot. For example:
-
- ```
- http://localhost:3978/api/messages
- ```
-
-1. Select **Connect**.
-1. The bot should greet you with a "Hello and welcome!" message. Type in any text message and confirm that you get a response from the bot.
-
- This is what an exchange of communication with an echo bot might look like:
- [![Screenshot shows the Bot Framework Emulator.](media/tutorial-voice-enable-your-bot-speech-sdk/bot-framework-emulator.png "Bot Framework emulator")](media/tutorial-voice-enable-your-bot-speech-sdk/bot-framework-emulator.png#lightbox)
-
-## Deploy your bot to Azure App Service
-
-The next step is to deploy the echo bot to Azure. There are a few ways to deploy a bot, including the [Azure CLI](/azure/bot-service/bot-builder-deploy-az-cli) and [deployment templates](https://github.com/microsoft/BotBuilder-Samples/tree/main/samples/csharp_dotnetcore/13.core-bot). This tutorial focuses on publishing directly from Visual Studio.
-
-> [!NOTE]
-> If **Publish** doesn't appear as you perform the following steps, use Visual Studio Installer to add the **ASP.NET and web development** workload.
-
-1. From Visual Studio, open the echo bot that's been configured for use with the Direct Line Speech channel:
-
- ```
- samples\csharp_dotnetcore\02.echo-bot\EchoBot.sln
- ```
-
-1. In Solution Explorer, right-click the **EchoBot** project and select **Publish**.
-1. In the **Publish** window that opens:
- 1. Select **Azure** > **Next**.
- 1. Select **Azure App Service (Windows)** > **Next**.
- 1. Select **Create a new Azure App Service** by the green plus sign.
-1. When the **App Service (Windows)** window appears:
- * Select **Add an account**, and sign in with your Azure account credentials. If you're already signed in, select your account from the dropdown list.
- * For **Name**, enter a globally unique name for your bot. This name is used to create a unique bot URL.
-
- A default name that includes the date and time appears in the box (for example, **EchoBot20190805125647**). You can use the default name for this tutorial.
- * For **Subscription**, select **Free Trial**.
- * For **Resource Group**, select **SpeechEchoBotTutorial-ResourceGroup**.
- * For **Hosting Plan**, select **SpeechEchoBotTutorial-AppServicePlan**.
-1. Select **Create**. On the final wizard screen, select **Finish**.
-1. Select **Publish**. Visual Studio deploys the bot to Azure.
-
- You should see a success message in the Visual Studio output window that looks like this:
-
- ```
- Publish Succeeded.
- Web App was published successfully https://EchoBot20190805125647.azurewebsites.net/
- ```
-
- Your default browser should open and display a page that reads: "Your bot is ready!"
-
-At this point, check your resource group (**SpeechEchoBotTutorial-ResourceGroup**) in the Azure portal. Confirm that it contains these three resources:
-
-| Name | Type | Location |
-||-|-|
-| EchoBot20190805125647 | App Service | West US |
-| SpeechEchoBotTutorial-AppServicePlan | App Service plan | West US |
-| SpeechEchoBotTutorial-Speech | Cognitive Services | West US |
-
-## Enable web sockets
-
-You need to make a small configuration change so that your bot can communicate with the Direct Line Speech channel by using web sockets. Follow these steps to enable web sockets:
-
-1. Go to the [Azure portal](https://portal.azure.com), and select your App Service resource. The resource name should be similar to **EchoBot20190805125647** (your unique app name).
-1. On the left pane, under **Settings**, select **Configuration**.
-1. Select the **General settings** tab.
-1. Find the toggle for **Web sockets** and set it to **On**.
-1. Select **Save**.
-
-> [!TIP]
-> You can use the controls at the top of your Azure App Service page to stop or restart the service. This ability can come in handy when you're troubleshooting.
-
-## Create a channel registration
-
-Now that you've created an Azure App Service resource to host your bot, the next step is to create a channel registration. Creating a channel registration is a prerequisite for registering your bot with Bot Framework channels, including the Direct Line Speech channel. If you want to learn more about how bots use channels, see [Connect a bot to channels](/azure/bot-service/bot-service-manage-channels).
-
-1. Go to the [Azure portal page for creating an Azure bot](https://portal.azure.com/#create/Microsoft.AzureBot).
-1. Provide the following information:
- * For **Bot handle**, enter **SpeechEchoBotTutorial-BotRegistration-####**. Replace **####** with a number of your choice.
-
- > [!NOTE]
- > The bot handle must be globally unique. If you enter one and get the error message "The requested bot ID is not available," pick a different number. The following examples use **8726**.
- * For **Subscription**, select **Free Trial**.
- * For **Resource group**, select **SpeechEchoBotTutorial-ResourceGroup**.
- * For **Location**, select **West US**.
- * For **Pricing tier**, select **F0**.
- * Ignore **Auto create App ID and password**.
-1. At the bottom of the **Azure Bot** pane, select **Create**.
-1. After you create the resource, open your **SpeechEchoBotTutorial-BotRegistration-####** resource in the Azure portal.
-1. From the **Settings** area, select **Configuration**.
-1. For **Messaging endpoint**, enter the URL for your web app with the **/api/messages** path appended. For example, if your globally unique app name was **EchoBot20190805125647**, your messaging endpoint would be `https://EchoBot20190805125647.azurewebsites.net/api/messages/`.
-
-At this point, check your resource group (**SpeechEchoBotTutorial-ResourceGroup**) in the Azure portal. It should now show at least four resources:
-
-| Name | Type | Location |
-||-|-|
-| EchoBot20190805125647 | App Service | West US |
-| SpeechEchoBotTutorial-AppServicePlan | App Service plan | West US |
-| SpeechEchoBotTutorial-BotRegistration-8726 | Bot Service | Global |
-| SpeechEchoBotTutorial-Speech | Cognitive Services | West US |
-
-> [!IMPORTANT]
-> The Azure Bot Service resource shows the Global region, even though you selected West US. This is expected.
-
-## Optional: Test in web chat
-
-The **Azure Bot** page has a **Test in Web Chat** option under **Settings**. It won't work by default with your bot because the web chat needs to authenticate against your bot.
-
-If you want to test your deployed bot with text input, use the following steps. Note that these steps are optional and aren't required for you to continue with the tutorial.
-
-1. In the [Azure portal](https://portal.azure.com), find and open your **SpeechEchoBotTutorial-BotRegistration-####** resource.
-1. From the **Settings** area, select **Configuration**. Copy the value under **Microsoft App ID**.
-1. Open the Visual Studio EchoBot solution. In Solution Explorer, find and double-click **appsettings.json**.
-1. Replace the empty string next to **MicrosoftAppId** in the JSON file with the copied ID value.
-1. Go back to the Azure portal. In the **Settings** area, select **Configuration**. Then select **Manage** next to **Microsoft App ID**.
-1. Select **New client secret**. Add a description (for example, **web chat**) and select **Add**. Copy the new secret.
-1. Replace the empty string next to **MicrosoftAppPassword** in the JSON file with the copied secret value.
-1. Save the JSON file. It should look something like this code:
-
- ```json
- {
- "MicrosoftAppId": "3be0abc2-ca07-475e-b6c3-90c4476c4370",
- "MicrosoftAppPassword": "-zRhJZ~1cnc7ZIlj4Qozs_eKN.8Cq~U38G"
- }
- ```
-
-1. Republish the app: right-click the **EchoBot** project in Visual Studio Solution Explorer, select **Publish**, and then select the **Publish** button.
-
-## Register the Direct Line Speech channel
-
-Now it's time to register your bot with the Direct Line Speech channel. This channel creates a connection between your bot and a client app compiled with the Speech SDK.
-
-1. In the [Azure portal](https://portal.azure.com), find and open your **SpeechEchoBotTutorial-BotRegistration-####** resource.
-1. From the **Settings** area, select **Channels** and then take the following steps:
- 1. Under **More channels**, select **Direct Line Speech**.
- 1. Review the text on the **Configure Direct line Speech** page, and then expand the **Cognitive service account** dropdown menu.
- 1. Select the Speech service resource that you created earlier (for example, **SpeechEchoBotTutorial-Speech**) from the menu to associate your bot with your subscription key.
- 1. Ignore the rest of the optional fields.
- 1. Select **Save**.
-
-1. From the **Settings** area, select **Configuration** and then take the following steps:
- 1. Select the **Enable Streaming Endpoint** checkbox. This step is necessary for creating a communication protocol built on web sockets between your bot and the Direct Line Speech channel.
- 1. Select **Save**.
-
-If you want to learn more, see [Connect a bot to Direct Line Speech](/azure/bot-service/bot-service-channel-connect-directlinespeech).
-
-## Run the Windows Voice Assistant Client
-
-The Windows Voice Assistant Client is a Windows Presentation Foundation (WPF) app in C# that uses the [Speech SDK](./speech-sdk.md) to manage communication with your bot through the Direct Line Speech channel. Use it to interact with and test your bot before writing a custom client app. It's open source, so you can either download the executable file and run it, or build it yourself.
-
-The Windows Voice Assistant Client has a simple UI that allows you to configure the connection to your bot, view the text conversation, view Bot Framework activities in JSON format, and display adaptive cards. It also supports the use of custom keywords. You'll use this client to speak with your bot and receive a voice response.
-
-> [!NOTE]
-> At this point, confirm that your microphone and speakers are enabled and working.
-
-1. Go to the [GitHub repository for the Windows Voice Assistant Client](https://github.com/Azure-Samples/Cognitive-Services-Voice-Assistant/blob/master/clients/csharp-wpf/README.md).
-1. Follow the provided instructions to either:
- * Download a prebuilt executable file in a .zip package to run
- * Build the executable file yourself, by cloning the repository and building the project
-
-1. Open the **VoiceAssistantClient.exe** client application and configure it to connect to your bot, by following the instructions in the GitHub repository.
-1. Select **Reconnect** and make sure you see the message "New conversation started - type or press the microphone button."
-1. Let's test it out. Select the microphone button, and speak a few words in English. The recognized text appears as you speak. When you're done speaking, the bot replies in its own voice, saying "echo" followed by the recognized words.
-
- You can also use text to communicate with the bot. Just type in the text on the bottom bar.
-
-### Troubleshoot errors in the Windows Voice Assistant Client
-
-If you get an error message in your main app window, use this table to identify and troubleshoot the problem:
-
-| Message | What should you do? |
-|-|-|
-|Error (AuthenticationFailure) : WebSocket Upgrade failed with an authentication error (401). Check for correct resource key (or authorization token) and region name| On the **Settings** page of the app, make sure that you entered the key and its region correctly. |
-|Error (ConnectionFailure) : Connection was closed by the remote host. Error code: 1011. Error details: We could not connect to the bot before sending a message | Make sure that you [selected the Enable Streaming Endpoint checkbox](#register-the-direct-line-speech-channel) and/or [turned on web sockets](#enable-web-sockets).<br>Make sure that Azure App Service is running. If it is, try restarting it.|
-|Error (ConnectionFailure) : Connection was closed by the remote host. Error code: 1002. Error details: The server returned status code '503' when status code '101' was expected | Make sure that you [selected the Enable Streaming Endpoint checkbox](#register-the-direct-line-speech-channel) and/or [turned on web sockets](#enable-web-sockets).<br>Make sure that Azure App Service is running. If it is, try restarting it.|
-|Error (ConnectionFailure) : Connection was closed by the remote host. Error code: 1011. Error details: Response status code does not indicate success: 500 (InternalServerError)| Your bot specified a neural voice in the [speak](https://github.com/microsoft/botframework-sdk/blob/master/specs/botframework-activity/botframework-activity.md#speak) field of its output activity, but the Azure region associated with your resource key doesn't support neural voices. See [neural voices](./regions.md#speech-service) and [standard voices](how-to-migrate-to-prebuilt-neural-voice.md).|
-
-If the actions in the table don't address your problem, see [Voice assistants: Frequently asked questions](faq-voice-assistants.yml). If you still can't resolve your problem after following all the steps in this tutorial, please enter a new issue on the [Voice Assistant GitHub page](https://github.com/Azure-Samples/Cognitive-Services-Voice-Assistant/issues).
-
-#### A note on connection timeout
-
-If you're connected to a bot and no activity has happened in the last five minutes, the service automatically closes the web socket connection with the client and with the bot. This is by design. A message appears on the bottom bar: "Active connection timed out but ready to reconnect on demand."
-
-You don't need to select the **Reconnect** button. Press the microphone button and start talking, enter a text message, or say the keyword (if one is enabled). The connection is automatically reestablished.
-
-### View bot activities
-
-Every bot sends and receives activity messages. In the **Activity Log** window of the Windows Voice Assistant Client, timestamped logs show each activity that the client has received from the bot. You can also see the activities that the client sent to the bot by using the [DialogServiceConnector.SendActivityAsync](/dotnet/api/microsoft.cognitiveservices.speech.dialog.dialogserviceconnector.sendactivityasync) method. When you select a log item, it shows the details of the associated activity as JSON.
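-
-If you want to see what sending an activity from the client looks like in code, here's a minimal C# sketch. It assumes an already-connected `DialogServiceConnector` named `connector`; the event name `clientButtonPressed` is a placeholder, not something the echo bot handles.
-
-```csharp
-using System.Threading.Tasks;
-using Microsoft.CognitiveServices.Speech.Dialog;
-
-// Hypothetical helper: sends a custom Bot Framework event activity from the
-// client to the bot. "connector" must already be created and connected.
-static async Task<string> SendClientEventAsync(DialogServiceConnector connector)
-{
-    // SendActivityAsync takes the activity as a JSON string and returns an ID
-    // for the sent activity. "clientButtonPressed" is a placeholder name.
-    var activityJson = @"{ ""type"": ""event"", ""name"": ""clientButtonPressed"" }";
-    return await connector.SendActivityAsync(activityJson);
-}
-```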
-
-Here's sample JSON of an activity that the client received:
-
-```json
-{
- "attachments":[],
- "channelData":{
- "conversationalAiData":{
- "requestInfo":{
- "interactionId":"8d5cb416-73c3-476b-95fd-9358cbfaebfa",
- "version":"0.2"
- }
- }
- },
- "channelId":"directlinespeech",
- "conversation":{
- "id":"129ebffe-772b-47f0-9812-7c5bfd4aca79",
- "isGroup":false
- },
- "entities":[],
- "from":{
- "id":"SpeechEchoBotTutorial-BotRegistration-8726"
- },
- "id":"89841b4d-46ce-42de-9960-4fe4070c70cc",
- "inputHint":"acceptingInput",
- "recipient":{
- "id":"129ebffe-772b-47f0-9812-7c5bfd4aca79|0000"
- },
- "replyToId":"67c823b4-4c7a-4828-9d6e-0b84fd052869",
- "serviceUrl":"urn:botframework:websocket:directlinespeech",
- "speak":"<speak version='1.0' xmlns='https://www.w3.org/2001/10/synthesis' xml:lang='en-US'><voice name='en-US-JennyNeural'>Echo: Hello and welcome.</voice></speak>",
- "text":"Echo: Hello and welcome.",
- "timestamp":"2019-07-19T20:03:51.1939097Z",
- "type":"message"
-}
-```
-
-To learn more about what's returned in the JSON output, see the [fields in the activity](https://github.com/microsoft/botframework-sdk/blob/master/specs/botframework-activity/botframework-activity.md). For the purpose of this tutorial, you can focus on the [text](https://github.com/microsoft/botframework-sdk/blob/master/specs/botframework-activity/botframework-activity.md#text) and [speak](https://github.com/microsoft/botframework-sdk/blob/master/specs/botframework-activity/botframework-activity.md#speak) fields.
-
-### View client source code for calls to the Speech SDK
-
-The Windows Voice Assistant Client uses the NuGet package [Microsoft.CognitiveServices.Speech](https://www.nuget.org/packages/Microsoft.CognitiveServices.Speech/), which contains the Speech SDK. A good place to start reviewing the sample code is the method `InitSpeechConnector()` in the file [VoiceAssistantClient\MainWindow.xaml.cs](https://github.com/Azure-Samples/Cognitive-Services-Voice-Assistant/blob/master/clients/csharp-wpf/VoiceAssistantClient/MainWindow.xaml.cs), which creates these two Speech SDK objects:
-- [DialogServiceConfig](/dotnet/api/microsoft.cognitiveservices.speech.dialog.dialogserviceconfig): For configuration settings like resource key and its region.
-- [DialogServiceConnector](/dotnet/api/microsoft.cognitiveservices.speech.dialog.dialogserviceconnector.-ctor): To manage the channel connection and client subscription events for handling recognized speech and bot responses.
-
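-As a rough sketch of how those two objects fit together (not the sample's exact code), the following snippet creates a `BotFrameworkConfig` (the Direct Line Speech flavor of `DialogServiceConfig`), wires up a couple of events, and listens for a single utterance. The key and region strings are placeholders for your own Speech resource values.
-
-```csharp
-using System;
-using Microsoft.CognitiveServices.Speech.Dialog;
-
-// Minimal sketch, not the sample's exact code. Replace the key and region
-// with the values from your Speech resource.
-var config = BotFrameworkConfig.FromSubscription("YourResourceKey", "westus");
-var connector = new DialogServiceConnector(config);
-
-// Recognized speech and bot activities arrive through events.
-connector.Recognized += (s, e) => Console.WriteLine($"Recognized: {e.Result.Text}");
-connector.ActivityReceived += (s, e) => Console.WriteLine("Bot activity received.");
-
-await connector.ConnectAsync();
-// Capture one utterance from the default microphone and send it to the bot.
-await connector.ListenOnceAsync();
-```
-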
-## Add custom keyword activation
-
-The Speech SDK supports custom keyword activation. Similar to "Hey Cortana" for a Microsoft assistant, you can write an app that will continuously listen for a keyword of your choice. Keep in mind that a keyword can be a single word or a multiple-word phrase.
-
-> [!NOTE]
-> The term *keyword* is often used interchangeably with the term *wake word*. You might see both used in Microsoft documentation.
-
-Keyword detection happens on the client app. If you're using a keyword, audio is streamed to the Direct Line Speech channel only if the keyword is detected. The Direct Line Speech channel includes a component called *keyword verification*, which does more complex processing in the cloud to verify that the keyword you've chosen is at the start of the audio stream. If keyword verification succeeds, then the channel will communicate with the bot.
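-
-As a rough illustration (a sketch, not the client's exact code), this is how the Speech SDK turns on keyword-gated listening for a `DialogServiceConnector`; the `kws.table` file name matches the model you create in the steps below.
-
-```csharp
-using System.Threading.Tasks;
-using Microsoft.CognitiveServices.Speech;
-using Microsoft.CognitiveServices.Speech.Dialog;
-
-// Minimal sketch: start on-device keyword detection for a connected
-// DialogServiceConnector. Audio streams to the channel only after the
-// keyword is detected locally; keyword verification then runs in the cloud.
-static async Task EnableKeywordAsync(DialogServiceConnector connector)
-{
-    var keywordModel = KeywordRecognitionModel.FromFile("kws.table");
-    await connector.StartKeywordRecognitionAsync(keywordModel);
-
-    // Call StopKeywordRecognitionAsync() when you no longer want to listen.
-}
-```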
-
-Follow these steps to create a keyword model, configure the Windows Voice Assistant Client to use this model, and test it with your bot:
-
-1. [Create a custom keyword by using the Speech service](./custom-keyword-basics.md).
-1. Unzip the model file that you downloaded in the previous step. It should be named for your keyword. You're looking for a file named **kws.table**.
-1. In the Windows Voice Assistant Client, find the **Settings** menu (the gear icon in the upper right). For **Model file path**, enter the full path name for the **kws.table** file from step 2.
-1. Select the **Enabled** checkbox. You should see this message next to the checkbox: "Will listen for the keyword upon next connection." If you've provided the wrong file or an invalid path, you should see an error message.
-1. Enter the values for **Subscription key** and **Subscription key region**, and then select **OK** to close the **Settings** menu.
-1. Select **Reconnect**. You should see a message that reads: "New conversation started - type, press the microphone button, or say the keyword." The app is now continuously listening.
-1. Speak any phrase that starts with your keyword. For example: "{your keyword}, what time is it?" You don't need to pause after uttering the keyword. When you're finished, two things happen:
- * You see a transcription of what you spoke.
- * You hear the bot's response.
-1. Continue to experiment with the three input types that your bot supports:
- * Entering text on the bottom bar
- * Pressing the microphone icon and speaking
- * Saying a phrase that starts with your keyword
-
-### View the source code that enables keyword detection
-
-In the source code of the Windows Voice Assistant Client, use these files to review the code that enables keyword detection:
-
-- [VoiceAssistantClient\Models.cs](https://github.com/Azure-Samples/Cognitive-Services-Voice-Assistant/blob/master/clients/csharp-wpf/VoiceAssistantClient/Models.cs) includes a call to the Speech SDK method [KeywordRecognitionModel.fromFile()](/javascript/api/microsoft-cognitiveservices-speech-sdk/keywordrecognitionmodel#fromfile-string-). This method is used to instantiate the model from a local file on disk.
-- [VoiceAssistantClient\MainWindow.xaml.cs](https://github.com/Azure-Samples/Cognitive-Services-Voice-Assistant/blob/master/clients/csharp-wpf/VoiceAssistantClient/MainWindow.xaml.cs) includes a call to the Speech SDK method [DialogServiceConnector.StartKeywordRecognitionAsync()](/dotnet/api/microsoft.cognitiveservices.speech.dialog.dialogserviceconnector.startkeywordrecognitionasync). This method activates continuous keyword detection.
-
-## Optional: Change the language and bot voice
-
-The bot that you've created will listen for and respond in English, with a default US English text to speech voice. However, you're not limited to using English or a default voice.
-
-In this section, you'll learn how to change the language that your bot will listen for and respond in. You'll also learn how to select a different voice for that language.
-
-### Change the language
-
-You can choose from any of the languages mentioned in the [speech to text](language-support.md?tabs=stt) table. The following example changes the language to German.
-
-1. Open the Windows Voice Assistant Client app, select the **Settings** button (upper-right gear icon), and enter **de-de** in the **Language** field. This is the locale value mentioned in the [speech to text](language-support.md?tabs=stt) table.
-
- This step sets the spoken language to be recognized, overriding the default **en-us**. It also instructs the Direct Line Speech channel to use a default German voice for the bot reply.
-1. Close the **Settings** page, and then select the **Reconnect** button to establish a new connection to your echo bot.
-1. Select the microphone button, and say a phrase in German. The recognized text appears, and the echo bot replies with the default German voice.
-
-### Change the default bot voice
-
-You can select the text to speech voice and control pronunciation if the bot specifies the reply in the form of [Speech Synthesis Markup Language](speech-synthesis-markup.md) (SSML) instead of simple text. The echo bot doesn't use SSML, but you can easily modify the code to do that.
-
-The following example adds SSML to the echo bot reply so that the German voice `de-DE-RalfNeural` (a male voice) is used instead of the default female voice. See the [list of standard voices](how-to-migrate-to-prebuilt-neural-voice.md) and [list of neural voices](language-support.md?tabs=tts) that are supported for your language.
-
-1. Open **samples\csharp_dotnetcore\02.echo-bot\echo-bot.cs**.
-1. Find these lines:
-
- ```csharp
- var replyText = $"Echo: {turnContext.Activity.Text}";
- await turnContext.SendActivityAsync(MessageFactory.Text(replyText, replyText), cancellationToken);
- ```
-
- Replace them with this code:
-
- ```csharp
- var replyText = $"Echo: {turnContext.Activity.Text}";
- var replySpeak = @"<speak version='1.0' xmlns='https://www.w3.org/2001/10/synthesis' xml:lang='de-DE'>
- <voice name='de-DE-RalfNeural'>" +
- $"{replyText}" + "</voice></speak>";
- await turnContext.SendActivityAsync(MessageFactory.Text(replyText, replySpeak), cancellationToken);
- ```
-1. Build your solution in Visual Studio and fix any build errors.
-
-The second argument in the method `MessageFactory.Text` sets the [activity speak field](https://github.com/Microsoft/botframework-sdk/blob/master/specs/botframework-activity/botframework-activity.md#speak) in the bot reply. With the preceding change, its value has been changed from simple text to SSML in order to specify a non-default German voice.
-
-### Redeploy your bot
-
-Now that you've made the necessary change to the bot, the next step is to republish it to Azure App Service and try it out:
-
-1. In the Solution Explorer window, right-click the **EchoBot** project and select **Publish**.
-1. Your previous deployment configuration has already been loaded as the default. Select **Publish** next to **EchoBot20190805125647 - Web Deploy**.
-
- The **Publish Succeeded** message appears in the Visual Studio output window, and a webpage opens with the message "Your bot is ready!"
-1. Open the Windows Voice Assistant Client app. Select the **Settings** button (upper-right gear icon), and make sure that you still have **de-de** in the **Language** field.
-1. Follow the instructions in [Run the Windows Voice Assistant Client](#run-the-windows-voice-assistant-client) to reconnect with your newly deployed bot, speak in the new language, and hear your bot reply in that language with the new voice.
-
-## Clean up resources
-
-If you're not going to continue using the echo bot deployed in this tutorial, you can remove it and all its associated Azure resources by deleting the Azure resource group:
-
-1. In the [Azure portal](https://portal.azure.com), select **Resource Groups** under **Azure services**.
-1. Find the **SpeechEchoBotTutorial-ResourceGroup** resource group. Select the three dots (...).
-1. Select **Delete resource group**.
-
-## Explore documentation
-
-* [Deploy to an Azure region near you](https://azure.microsoft.com/global-infrastructure/locations/) to see the improvement in bot response time.
-* [Deploy to an Azure region that supports high-quality neural text to speech voices](./regions.md#speech-service).
-* Get pricing associated with the Direct Line Speech channel:
- * [Bot Service pricing](https://azure.microsoft.com/pricing/details/bot-service/)
- * [Speech service](https://azure.microsoft.com/pricing/details/cognitive-services/speech-services/)
-* Build and deploy your own voice-enabled bot:
- * Build a [Bot Framework bot](https://dev.botframework.com/). Then [register it with the Direct Line Speech channel](/azure/bot-service/bot-service-channel-connect-directlinespeech) and [customize your bot for voice](/azure/bot-service/directline-speech-bot).
- * Explore existing [Bot Framework solutions](https://microsoft.github.io/botframework-solutions/index): [Build a virtual assistant](https://microsoft.github.io/botframework-solutions/overview/virtual-assistant-solution/) and [extend it to Direct Line Speech](https://microsoft.github.io/botframework-solutions/clients-and-channels/tutorials/enable-speech/1-intro/).
-
-## Next steps
-
-> [!div class="nextstepaction"]
-> [Build your own client app by using the Speech SDK](./quickstarts/voice-assistants.md?pivots=programming-language-csharp)
cognitive-services Voice Assistants https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/voice-assistants.md
- Title: Voice assistants overview - Speech service-
-description: An overview of the features, capabilities, and restrictions for voice assistants with the Speech SDK.
------ Previously updated : 03/11/2020----
-# What is a voice assistant?
-
-By using voice assistants with the Speech service, developers can create natural, human-like, conversational interfaces for their applications and experiences. The voice assistant service provides fast, reliable interaction between a device and an assistant implementation.
-
-## Choose an assistant solution
-
-The first step in creating a voice assistant is to decide what you want it to do. Speech service provides multiple, complementary solutions for crafting assistant interactions. You might want your application to support an open-ended conversation with phrases such as "I need to go to Seattle" or "What kind of pizza can I order?" For flexibility and versatility, you can add voice in and voice out capabilities to a bot by using Azure Bot Service with the [Direct Line Speech](direct-line-speech.md) channel.
-
-If you aren't yet sure what you want your assistant to do, we recommend [Direct Line Speech](direct-line-speech.md) as the best option. It offers integration with a rich set of tools and authoring aids, such as the [Virtual Assistant solution and enterprise template](/azure/bot-service/bot-builder-enterprise-template-overview) and the [QnA Maker service](../qnamaker/overview/overview.md), to build on common patterns and use your existing knowledge sources.
-
-## Reference architecture for building a voice assistant by using the Speech SDK
-
- ![Conceptual diagram of the voice assistant orchestration service flow.](media/voice-assistants/overview.png)
-
-## Core features
-
-Whether you choose [Direct Line Speech](direct-line-speech.md) or another solution to create your assistant interactions, you can use a rich set of customization features to customize your assistant to your brand, product, and personality.
-
-| Category | Features |
-|-|-|
-|[Custom keyword](./custom-keyword-basics.md) | Users can start conversations with assistants by using a custom keyword such as "Hey Contoso." An app does this with a custom keyword engine in the Speech SDK, which you can configure by going to [Get started with custom keywords](./custom-keyword-basics.md). Voice assistants can use service-side keyword verification to improve the accuracy of the keyword activation (versus using the device alone).
-|[Speech to text](speech-to-text.md) | Voice assistants convert real-time audio into recognized text by using [speech to text](speech-to-text.md) from the Speech service. This text is available, as it's transcribed, to both your assistant implementation and your client application.
-|[Text to speech](text-to-speech.md) | Textual responses from your assistant are synthesized through [text to speech](text-to-speech.md) from the Speech service. This synthesis is then made available to your client application as an audio stream. Microsoft offers the ability to build your own custom, high-quality Neural Text to speech (Neural TTS) voice that gives a voice to your brand.
-
-## Get started with voice assistants
-
-We offer the following quickstart article that's designed to have you running code in less than 10 minutes: [Quickstart: Create a custom voice assistant by using Direct Line Speech](quickstarts/voice-assistants.md)
-
-## Sample code and tutorials
-
-Sample code for creating a voice assistant is available on GitHub. The samples cover the client application for connecting to your assistant in several popular programming languages.
-
-* [Voice assistant samples on GitHub](https://github.com/Azure-Samples/Cognitive-Services-Voice-Assistant)
-* [Tutorial: Voice-enable an assistant that's built by using Azure Bot Service with the C# Speech SDK](tutorial-voice-enable-your-bot-speech-sdk.md)
-
-## Customization
-
-Voice assistants that you build by using Speech service can use a full range of customization options.
-
-* [Custom Speech](./custom-speech-overview.md)
-* [Custom Voice](how-to-custom-voice.md)
-* [Custom Keyword](keyword-recognition-overview.md)
-
-> [!NOTE]
-> Customization options vary by language and locale. To learn more, see [Supported languages](language-support.md).
-
-## Next steps
-
-* [Learn more about Direct Line Speech](direct-line-speech.md)
-* [Get the Speech SDK](speech-sdk.md)
cognitive-services Windows Voice Assistants Automatic Enablement Guidelines https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/windows-voice-assistants-automatic-enablement-guidelines.md
- Title: Privacy guidelines for voice assistants on Windows-
-description: The instructions to enable voice activation for a voice agent by default.
------ Previously updated : 04/15/2020---
-# Privacy guidelines for voice assistants on Windows
-
-It's important that users are given clear information about how their voice data is collected and used, and that they have control over whether and how this collection happens. These core facets of privacy, *disclosure* and *consent*, are especially important for voice assistants integrated into Windows, given the always-listening nature of their use.
-
-Developers creating voice assistants on Windows must include clear user interface elements within their applications that reflect the listening capabilities of the assistant.
-
-> [!NOTE]
-> Failure to provide appropriate disclosure and consent for an assistant application, including after application updates, may result in the assistant becoming unavailable for voice activation until privacy issues are resolved.
-
-## Minimum requirements for feature inclusion
-
-Windows users can see and control the availability of their assistant applications in **`Settings > Privacy > Voice activation`**.
-
- > [!div class="mx-imgBorder"]
 > [![Screenshot shows options to control availability of Cortana. ](media/voice-assistants/windows_voice_assistant/privacy-app-listing.png "A Windows voice activation privacy setting entry for an assistant application")](media/voice-assistants/windows_voice_assistant/privacy-app-listing.png#lightbox)
-
-To become eligible for inclusion in this list, contact Microsoft at winvoiceassistants@microsoft.com to get started. By default, users will need to explicitly enable voice activation for a new assistant in **`Settings > Privacy > Voice Activation`**, which an application can protocol link to with `ms-settings:privacy-voiceactivation`. An allowed application will appear in the list once it has run and used the `Windows.ApplicationModel.ConversationalAgent` APIs. Its voice activation settings will be modifiable once the application has obtained microphone consent from the user.
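-
-For example, a UWP assistant app might open that settings page with a protocol launch. Here's a minimal sketch of such a helper:
-
-```csharp
-using System;
-using System.Threading.Tasks;
-using Windows.System;
-
-// Minimal sketch: open the Voice activation privacy page so the user can
-// enable voice activation for the assistant.
-static async Task OpenVoiceActivationSettingsAsync()
-{
-    await Launcher.LaunchUriAsync(new Uri("ms-settings:privacy-voiceactivation"));
-}
-```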
-
-Because the Windows privacy settings include information about how voice activation works and provide a standard UI for controlling permission, disclosure and consent are both fulfilled. The assistant will remain in this allowed list as long as it does not:
-
-* Mislead or misinform the user about voice activation or voice data handling by the assistant
-* Unduly interfere with another assistant
-* Break any other relevant Microsoft policies
-
-If any of the above are discovered, Microsoft may remove an assistant from the allowed list until problems are resolved.
-
-> [!NOTE]
-> In all cases, voice activation permission requires microphone permission. If an assistant application does not have microphone access, it will not be eligible for voice activation and will appear in the voice activation privacy settings in a disabled state.
-
-## Additional requirements for inclusion in Microphone consent
-
-Assistant authors who want to make it easier and smoother for their users to opt in to voice activation can do so by meeting additional requirements to adequately fulfill disclosure and consent without an extra trip to the settings page. Once approved, voice activation will become available immediately once a user grants microphone permission to the assistant application. To qualify for this, an assistant application must do the following **before** prompting for Microphone consent (for example, by using the `AppCapability.RequestAccessAsync` API):
-
-1. Provide clear and prominent indication to the user that the application would like to listen to the user's voice for a keyword, *even when the application is not running*, and would like the user's consent
-1. Include relevant information about data usage and privacy policies, such as a link to an official privacy statement
-1. Avoid any directive or leading wording (for example, "click yes on the following prompt") in the experience flow that discloses audio capture behavior
-
-If an application accomplishes all of the above, it's eligible to enable voice activation capability together with Microphone consent. Contact winvoiceassistants@microsoft.com for more information and to review a first use experience.
-
-> [!NOTE]
-> Voice activation above lock is not eligible for automatic enablement with Microphone access and will still require a user to visit the Voice activation privacy page to enable above-lock access for an assistant.
-
-## Next steps
-
-> [!div class="nextstepaction"]
-> [Learn about best practices for voice assistants on Windows](windows-voice-assistants-best-practices.md)
cognitive-services Windows Voice Assistants Best Practices https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/windows-voice-assistants-best-practices.md
- Title: Voice Assistants on Windows - Design Guidelines-
-description: Guidelines for best practices when designing a voice agent experience.
------ Previously updated : 05/1/2020---
-# Design assistant experiences for Windows 10
-
-Voice assistants developed on Windows 10 must implement the user experience guidelines below to provide the best possible experiences for voice activation on Windows 10. This document will guide developers through understanding the key work needed for a voice assistant to integrate with the Windows 10 Shell.
-
-## Contents
-
-- [Summary of voice activation views supported in Windows 10](#summary-of-voice-activation-views-supported-in-windows-10)
-- [Requirements summary](#requirements-summary)
-- [Best practices for good listening experiences](#best-practices-for-good-listening-experiences)
-- [Design guidance for in-app voice activation](#design-guidance-for-in-app-voice-activation)
-- [Design guidance for voice activation above lock](#design-guidance-for-voice-activation-above-lock)
-- [Design guidance for voice activation preview](#design-guidance-for-voice-activation-preview)
-
-## Summary of voice activation views supported in Windows 10
-
-Windows 10 infers an activation experience for the customer based on the device context. The following summary table is a high-level overview of the different views available when the screen is on.
-
-| View (Availability) | Device context | Customer goal | Appears when | Design needs |
-| | | | | |
-| **In-app (19H1)** | Below lock, assistant has focus | Interact with the assistant app | Assistant processes the request in-app | Main in-app view listening experience |
-| **Above lock (19H2)** | Above lock, unauthenticated | Interact with the assistant, but from a distance | System is locked and assistant requests activation | Full-screen visuals for far-field UI. Implement dismissal policies to not block unlocking. |
-| **Voice activation preview (20H1)** | Below lock, assistant does not have focus | Interact with the assistant, but in a less intrusive way | System is below lock and assistant requests background activation | Minimal canvas. Resize or hand-off to the main app view as needed. |
-
-## Requirements summary
-
-Minimal effort is required to access the different experiences. However, assistants do need to implement the right design guidance for each view. The table below provides a checklist of the requirements that must be followed.
-
-| **Voice activation view** | **Assistant requirements summary** |
-| | |
-| **In-app** | <ul><li>Process the request in-app</li><li>Provides UI indicators for listening states</li><li>UI adapts as window sizes change</li></ul> |
-| **Above lock** | <ul><li>Detect lock state and request activation</li><li>Do not provide always persistent UX that would block access to the Windows lock screen</li><li>Provide full screen visuals and a voice-first experience</li><li>Honor dismissal guidance below</li><li>Follow privacy and security considerations below</li></ul> |
-| **Voice activation preview** | <ul><li>Detect unlock state and request background activation</li><li>Draw minimal listening UX in the preview pane</li><li>Draw a close X in the top-right and self-dismiss and stop streaming audio when pressed</li><li>Resize or hand-off to the main assistant app view as needed to provide answers</li></ul> |
-
-## Best practices for good listening experiences
-
-Assistants should build a listening experience to provide critical feedback so the customer can understand the state of the assistant. Below are some possible states to consider when building an assistant experience. These are only possible suggestions, not mandatory guidance.
-
-- Assistant is available for speech input
-- Assistant is in the process of activating (either a keyword or mic button press)
-- Assistant is actively streaming audio to the assistant cloud
-- Assistant is ready for the customer to start speaking
-- Assistant is hearing that words are being said
-- Assistant understands that the customer is done speaking
-- Assistant is processing and preparing a response
-- Assistant is responding
-
-Even if states change rapidly, it's worth providing UX for each state, because durations vary across the Windows ecosystem. Visual feedback, as well as brief audio chimes or chirps (also called "earcons"), can be part of the solution. Likewise, visual cards coupled with audio descriptions make for good response options.
-
-## Design guidance for in-app voice activation
-
-When the assistant app has focus, the customer intent is clearly to interact with the app, so all voice activation experiences should be handled by the main app view. This view may be resized by the customer. To help explain assistant shell interactions, the rest of this document uses the concrete example of a financial service assistant named Contoso. In this and subsequent diagrams, what the customer says will appear in cartoon speech bubbles on the left with assistant responses in cartoon bubbles on the right.
-
-**In-app view. Initial state when voice activation begins:**
-![Screenshot showing the Contoso finance assistant app open to its default canvas. A cartoon speech bubble on the right says "Contoso".](media/voice-assistants/windows_voice_assistant/initial_state.png)
-
-**In-app view. After successful voice activation, listening experience begins:**
-![Screenshot of voice assistant on Windows while voice assistant is listening](media/voice-assistants/windows_voice_assistant/listening.png)
-
-**In-app view. All responses remain in the app experience.**
-![Screenshot of voice assistant on Windows as assistant replies](media/voice-assistants/windows_voice_assistant/response.png)
-
-## Design guidance for voice activation above lock
-
-Starting with 19H2, assistants built on the Windows voice activation platform can answer above lock.
-
-### Customer opt-in
-
-Voice activation above lock is always disabled by default. Customers opt in through **Windows Settings > Privacy > Voice activation**. For details on monitoring and prompting for this setting, see the [above lock implementation guide](windows-voice-assistants-implementation-guide.md#detecting-above-lock-activation-user-preference).
-
-### Not a lock-screen replacement
-
-While notifications or other standard app lock-screen integration points remain available for the assistant, the Windows lock screen always defines the initial customer experience until a voice activation occurs. After voice activation is detected, the assistant app temporarily appears above the lock screen. To avoid customer confusion, when active above lock, the assistant app must never present UI to ask for any kind of credentials or log in.
-
-![Screenshot of a Windows lock screen](media/voice-assistants/windows_voice_assistant/lock_screen1.png)
-
-### Above lock experience following voice activation
-
-When the screen is on, the assistant app is full screen with no title bar above the lock screen. Larger visuals and a strong, voice-primary interface allow for cases where the customer is too far away to read the UI or has their hands busy with another (non-PC) task.
-
-When the screen remains off, the assistant app could play an earcon to indicate the assistant is activating and provide a voice-only experience.
-
-![Screenshot of voice assistant above lock](media/voice-assistants/windows_voice_assistant/above_lock_listening.png)
-
-### Dismissal policies
-
-The assistant must implement the dismissal guidance in this section to make it easier for customers to log in the next time they want to use their Windows PC. Below are specific requirements, which the assistant must implement:
-
-- **All assistant canvases that show above lock must contain an X** in the top right that dismisses the assistant.
-- **Pressing any key must also dismiss the assistant app**. Keyboard input is a traditional lock app signal that the customer wants to log in. Therefore, any keyboard/text input should not be directed to the app. Instead, the app should self-dismiss when keyboard input is detected, so the customer can easily log in to their device.
-- **If the screen goes off, the app must self-dismiss.** This ensures that the next time the customer uses their PC, the login screen will be ready and waiting for them.
-- If the app is "in use", it may continue above lock. "In use" constitutes any input or output. For example, when streaming music or video the app may continue above lock. "Follow on" and other multiturn dialog steps are permitted to keep the app above lock.
-- **Implementation details on dismissing the application** can be found [in the above lock implementation guide](windows-voice-assistants-implementation-guide.md#closing-the-application).
-
-![Screenshot showing the above lock view of the Contoso finance assistant app.](media/voice-assistants/windows_voice_assistant/above_lock_response.png)
-
-![Screenshot of a desktop showing the Windows lock screen.](media/voice-assistants/windows_voice_assistant/lock_screen2.png)
-
-### Privacy &amp; security considerations above lock
-
-Many PCs are portable but not always within customer reach. They may be briefly left in hotel rooms, airplane seats, or workspaces, where other people have physical access. If assistants that are enabled above lock aren't prepared, they can become subject to the class of so-called "[evil maid](https://en.wikipedia.org/wiki/Evil_maid_attack)" attacks.
-
-Therefore, assistants should follow the guidance in this section to help keep the experience secure. Interaction above lock occurs when the Windows user is unauthenticated. This means that, in general, **input to the assistant should also be treated as unauthenticated**.
-
-- Assistants should **implement a skill allowed list to identify skills that are confirmed secure and safe** to be accessed above lock.
-- Speaker ID technologies can play a role in alleviating some risks, but Speaker ID is not a suitable replacement for Windows authentication.
-- The skill allowed list should consider three classes of actions or skills:
-
-| **Action class** | **Description** | **Examples (not a complete list)** |
-| | | |
-| Safe without authentication | General purpose information or basic app command and control | "What time is it?", "Play the next track" |
-| Safe with Speaker ID | Impersonation risk, revealing personal information. | "What's my next appointment?", "Review my shopping list", "Answer the call" |
-| Safe only after Windows authentication | High-risk actions that an attacker could use to harm the customer | "Buy more groceries", "Delete my (important) appointment", "Send a (mean) text message", "Launch a (nefarious) webpage" |
-
-For the case of Contoso, general information around public stock information is safe without authentication. Customer-specific information such as number of shares owned is likely safe with Speaker ID. However, buying or selling stocks should never be allowed without Windows authentication.
-
-To further secure the experience, **web links or other app-to-app launches will always be blocked by Windows until the customer signs in.** As a last resort mitigation, Microsoft reserves the right to remove an application from the allowed list of enabled assistants if a serious security issue is not addressed in a timely manner.
-
-## Design guidance for voice activation preview
-
-Below lock, when the assistant app does _not_ have focus, Windows provides a less intrusive voice activation UI to help keep the customer in flow. This is especially true for the case of false activations that would be highly disruptive if they launched the full app. The core idea is that each assistant has another home in the Shell, the assistant taskbar icon. When the request for background activation occurs, a small view above the assistant taskbar icon appears. Assistants should provide a small listening experience in this canvas. After processing the requests, assistants can choose to resize this view to show an in-context answer or to hand off their main app view to show larger, more detailed visuals.
-
-- To stay minimal, the preview does not have a title bar, so **the assistant must draw an X in the top right to allow customers to dismiss the view.** Refer to [Closing the Application](windows-voice-assistants-implementation-guide.md#closing-the-application) for the specific APIs to call when the dismiss button is pressed.
-- To support voice activation previews, assistants may invite customers to pin the assistant to the taskbar during first run.
-
-**Voice activation preview: Initial state**
-
-The Contoso assistant has a home on the taskbar: its swirling, circular icon.
-
-![Screenshot of voice assistant on Windows as a taskbar icon pre-activation](media/voice-assistants/windows_voice_assistant/pre_compact_view.png)
-
-**As activation progresses**, the assistant requests background activation. The assistant is given a small preview pane (default width: 408, height: 248). If server-side voice activation determines the signal was a false positive, this view could be dismissed for minimal interruption.
-
-![Screenshot of voice assistant on Windows in compact view while verifying activation](media/voice-assistants/windows_voice_assistant/compact_view_activating.png)
-
-**When final activation is confirmed**, the assistant presents its listening UX. The assistant must always draw a dismiss X in the top right of the voice activation preview.
-
-![Screenshot of voice assistant on Windows listening in compact view](media/voice-assistants/windows_voice_assistant/compact_view_listening.png)
-
-**Quick answers** may appear in the voice activation preview. The TryResizeView API allows assistants to request different sizes.
-
-![Screenshot of voice assistant on Windows replying in compact view](media/voice-assistants/windows_voice_assistant/compact_view_response.png)
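-
-As a minimal sketch (assuming the resize goes through `ApplicationView.TryResizeView` in `Windows.UI.ViewManagement`, and with placeholder dimensions), an assistant might request a larger canvas like this:
-
-```csharp
-using Windows.Foundation;
-using Windows.UI.ViewManagement;
-
-// Minimal sketch: ask Windows for a larger canvas to show an in-context answer.
-// The size values are placeholders; the system may decline the request.
-static bool TryExpandPreview()
-{
-    return ApplicationView.GetForCurrentView().TryResizeView(new Size(500, 300));
-}
-```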
-
-**Hand-off**. At any point, the assistant may hand off to its main app view to provide more information, dialogue, or answers that require more screen real estate. Please refer to the [Transition from compact view to full view](windows-voice-assistants-implementation-guide.md#transition-from-compact-view-to-full-view) section for implementation details.
-
-![Screenshots of voice assistant on Windows before and after expanding the compact view](media/voice-assistants/windows_voice_assistant/compact_transition.png)
-
-## Next steps
-
-> [!div class="nextstepaction"]
-> [Get started on developing your voice assistant](how-to-windows-voice-assistants-get-started.md)
cognitive-services Windows Voice Assistants Implementation Guide https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/windows-voice-assistants-implementation-guide.md
- Title: Voice Assistants on Windows - Above Lock Implementation Guidelines-
-description: The instructions to implement voice activation and above-lock capabilities for a voice agent application.
------ Previously updated : 04/15/2020----
-# Implementing Voice Assistants on Windows
-
-This guide walks through important implementation details for creating a voice assistant on Windows.
-
-## Implementing voice activation
-
-After [setting up your environment](how-to-windows-voice-assistants-get-started.md) and learning [how voice activation works](windows-voice-assistants-overview.md#how-does-voice-activation-work), you can start implementing voice activation for your own voice assistant application.
-
-### Registration
-
-#### Ensure that the microphone is available and accessible, then monitor its state
-
-MVA needs a microphone to be present and accessible to be able to detect a voice activation. Use the [AppCapability](/uwp/api/windows.security.authorization.appcapabilityaccess.appcapability), [DeviceWatcher](/uwp/api/windows.devices.enumeration.devicewatcher), and [MediaCapture](/uwp/api/windows.media.capture.mediacapture) classes to check for microphone privacy access, device presence, and device status (like volume and mute) respectively.
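-
-A minimal sketch of such a check follows. It verifies microphone privacy access with `AppCapability` and does a one-time device enumeration; the guidance above also suggests `DeviceWatcher` and `MediaCapture` for ongoing monitoring of presence and state, which this sketch omits.
-
-```csharp
-using System.Threading.Tasks;
-using Windows.Devices.Enumeration;
-using Windows.Security.Authorization.AppCapabilityAccess;
-
-// Minimal sketch: confirm microphone privacy access and device presence.
-static async Task<bool> IsMicrophoneAvailableAsync()
-{
-    // Privacy access to the microphone capability.
-    var capability = AppCapability.Create("microphone");
-    if (capability.CheckAccess() != AppCapabilityAccessStatus.Allowed)
-    {
-        // Optionally prompt the user: await capability.RequestAccessAsync();
-        return false;
-    }
-
-    // One-time check for an audio capture device.
-    var microphones = await DeviceInformation.FindAllAsync(DeviceClass.AudioCapture);
-    return microphones.Count > 0;
-}
-```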
-
-### Register the application with the background service
-
-In order for MVA to launch the application in the background, the application needs to be registered with the Background Service. See a full guide for Background Service registration [here](/windows/uwp/launch-resume/register-a-background-task).
-
-### Unlock the Limited Access Feature
-
-Use your Microsoft-provided Limited Access Feature key to unlock the voice assistant feature. Use the [LimitedAccessFeature](/uwp/api/windows.applicationmodel.limitedaccessfeatures) class from the Windows SDK to do this.
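-
-As a hedged illustration, the unlock call has the following shape; the feature ID, token, and attestation string are placeholders for the values Microsoft provides with your key.
-
-```csharp
-// Hedged sketch: unlock the voice assistant Limited Access Feature.
-var result = LimitedAccessFeatures.TryUnlockFeature(
-    "<feature-id>",                    // placeholder
-    "<token>",                         // placeholder
-    "<publisher attestation string>"); // placeholder
-
-if (result.Status != LimitedAccessFeatureStatus.Available
-    && result.Status != LimitedAccessFeatureStatus.AvailableWithoutToken)
-{
-    // The feature isn't unlocked on this device; voice activation registration will fail.
-}
-```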
-
-### Register the keyword for the application
-
-The application needs to register itself, its keyword model, and its language with Windows.
-
-Start by retrieving the keyword detector. In this sample code, we retrieve the first detector, but you can select a particular detector by selecting it from `configurableDetectors`.
-
-```csharp
-private static async Task<ActivationSignalDetector> GetFirstEligibleDetectorAsync()
-{
- var detectorManager = ConversationalAgentDetectorManager.Default;
- var allDetectors = await detectorManager.GetAllActivationSignalDetectorsAsync();
- var configurableDetectors = allDetectors.Where(candidate => candidate.CanCreateConfigurations
- && candidate.Kind == ActivationSignalDetectorKind.AudioPattern
- && (candidate.SupportedModelDataTypes.Contains("MICROSOFT_KWSGRAPH_V1")));
-
- if (configurableDetectors.Count() != 1)
- {
- throw new NotSupportedException($"System expects one eligible configurable keyword spotter; actual is {configurableDetectors.Count()}.");
- }
-
- var detector = configurableDetectors.First();
-
- return detector;
-}
-```
-
-After retrieving the ActivationSignalDetector object, call its `ActivationSignalDetector.CreateConfigurationAsync` method with the signal ID, model ID, and display name to register your keyword and retrieve your application's `ActivationSignalDetectionConfiguration`. The signal and model IDs should be GUIDs decided on by the developer and stay consistent for the same keyword.
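-
-A hedged sketch of that registration, following the pattern used by the UWP Voice Assistant Sample; the GUIDs and display name below are placeholders that you would keep constant for your keyword.
-
-```csharp
-private static async Task<ActivationSignalDetectionConfiguration> RegisterKeywordConfigurationAsync(
-    ActivationSignalDetector detector)
-{
-    // Placeholder identifiers: choose your own GUIDs once and reuse them for this keyword.
-    const string signalId = "00000000-0000-0000-0000-000000000001";
-    const string modelId = "00000000-0000-0000-0000-000000000002";
-    const string displayName = "Contoso keyword";
-
-    await detector.CreateConfigurationAsync(signalId, modelId, displayName);
-    return await detector.GetConfigurationAsync(signalId, modelId);
-}
-```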
-
-### Verify that the voice activation setting is enabled
-
-To use voice activation, a user needs to enable voice activation for their system and enable voice activation for their application. You can find the setting under "Voice activation privacy settings" in Windows settings. To check the status of the voice activation setting in your application, use the instance of the `ActivationSignalDetectionConfiguration` from registering the keyword. The [AvailabilityInfo](https://github.com/Azure-Samples/Cognitive-Services-Voice-Assistant/blob/master/clients/csharp-uwp/UWPVoiceAssistantSample/UIAudioStatus.cs#L128) field on the `ActivationSignalDetectionConfiguration` contains an enum value that describes the state of the voice activation setting.
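-
-For example, a hedged check of that state might look like the following; the property name is an assumption based on the linked sample code, and `keywordConfiguration` stands in for the `ActivationSignalDetectionConfiguration` you retrieved during registration.
-
-```csharp
-// Hedged sketch: inspect the voice activation setting state for this app's keyword.
-var availability = keywordConfiguration.AvailabilityInfo;
-bool voiceActivationEnabled = availability.IsEnabled;
-```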
-
-### Retrieve a ConversationalAgentSession to register the app with the MVA system
-
-The `ConversationalAgentSession` is a class in the Windows SDK that allows your app to update Windows with the app state (Idle, Detecting, Listening, Working, Speaking) and receive events, such as activation detection and system state changes such as the screen locking. Retrieving an instance of the AgentSession also serves to register the application with Windows as activatable by voice. It is best practice to maintain one reference to the `ConversationalAgentSession`. To retrieve the session, use the `ConversationalAgentSession.GetCurrentSessionAsync` API.
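-
-A minimal sketch of retrieving and caching that session reference:
-
-```csharp
-// Keep a single ConversationalAgentSession reference for the lifetime of the app.
-private ConversationalAgentSession agentSession;
-
-private async Task<ConversationalAgentSession> GetSessionAsync()
-{
-    if (agentSession == null)
-    {
-        agentSession = await ConversationalAgentSession.GetCurrentSessionAsync();
-    }
-
-    return agentSession;
-}
-```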
-
-### Listen to the two activation signals: the OnBackgroundActivated and OnSignalDetected
-
-Windows will signal your app when it detects a keyword in one of two ways. If the app is not active (that is, you do not have a reference to a non-disposed instance of `ConversationalAgentSession`), then it will launch your app and call the OnBackgroundActivated method in the App.xaml.cs file of your application. If the event arguments' `BackgroundActivatedEventArgs.TaskInstance.Task.Name` field matches the string "AgentBackgroundTrigger", then the application startup was triggered by voice activation. The application needs to override this method and retrieve an instance of ConversationalAgentSession to signal to Windows that it is now active. When the application is active, Windows will signal the occurrence of a voice activation using the `ConversationalAgentSession.SignalDetected` event. Add an event handler to this event as soon as you retrieve the `ConversationalAgentSession`.
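-
-A hedged sketch of handling both paths; the task name string and the event come from the description above, while the handler body is an illustrative assumption.
-
-```csharp
-// In App.xaml.cs: handle a background launch triggered by voice activation.
-protected override async void OnBackgroundActivated(BackgroundActivatedEventArgs args)
-{
-    base.OnBackgroundActivated(args);
-
-    if (args.TaskInstance.Task.Name == "AgentBackgroundTrigger")
-    {
-        // Taking a session reference signals to Windows that the app is now active.
-        var session = await ConversationalAgentSession.GetCurrentSessionAsync();
-        session.SignalDetected += (sender, eventArgs) =>
-        {
-            // Begin keyword verification for this activation.
-        };
-    }
-}
-```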
-
-## Keyword verification
-
-Once a voice agent application is activated by voice, the next step is to verify that the keyword detection was valid. Windows does not provide a solution for keyword verification, but it does allow a voice assistant to access the audio from the hypothesized activation and complete verification on its own.
-
-### Retrieve activation audio
-
-Create an [AudioGraph](/uwp/api/windows.media.audio.audiograph) and pass it to the `CreateAudioDeviceInputNodeAsync` of the `ConversationalAgentSession`. This will load the graph's audio buffer with the audio *starting approximately 3 seconds before the keyword was detected*. This additional leading audio is included to accommodate a wide range of keyword lengths and speaker speeds. Then, handle the [QuantumStarted](/uwp/api/windows.media.audio.audiograph.quantumstarted) event from the audio graph to retrieve the audio data.
-
-```csharp
-var inputNode = await agentSession.CreateAudioDeviceInputNodeAsync(audioGraph);
-var outputNode = audioGraph.CreateFrameOutputNode();
-inputNode.AddOutgoingConnection(outputNode);
-audioGraph.QuantumStarted += OnQuantumStarted;
-```
-
-Note: The leading audio included in the audio buffer can cause keyword verification to fail. To fix this issue, trim the beginning of the audio buffer before you send the audio for keyword verification. This initial trim should be tailored to each assistant.
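-
-A hedged sketch of a `QuantumStarted` handler; how you buffer, trim, and forward the audio to your keyword verifier is assistant-specific.
-
-```csharp
-// 'outputNode' is the AudioFrameOutputNode created in the snippet above, stored as a field.
-private void OnQuantumStarted(AudioGraph sender, object args)
-{
-    using (var frame = outputNode.GetFrame())
-    {
-        // Append the frame to your verification buffer here, and remember to trim the
-        // leading audio before sending the buffer to your keyword verification service.
-    }
-}
-```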
-
-### Launch in the foreground
-
-When keyword verification succeeds, the application needs to move to the foreground to display UI. Call the `ConversationalAgentSession.RequestForegroundActivationAsync` API to move your application to the foreground.
-
-### Transition from compact view to full view
-
-When your application is first activated by voice, it is started in a compact view. Please read the [Design guidance for voice activation preview](windows-voice-assistants-best-practices.md#design-guidance-for-voice-activation-preview) for guidance on the different views and transitions between them for voice assistants on Windows.
-
-To make the transition from compact view to full app view, use the ApplicationView API `TryEnterViewModeAsync`:
-
-```csharp
-var appView = ApplicationView.GetForCurrentView();
-await appView.TryEnterViewModeAsync(ApplicationViewMode.Default);
-```
-
-## Implementing above lock activation
-
-The following steps cover the requirements to enable a voice assistant on Windows to run above lock, including references to example code and guidelines for managing the application lifecycle.
-
-For guidance on designing above lock experiences, visit the [best practices guide](windows-voice-assistants-best-practices.md).
-
-When an app shows a view above lock, it is considered to be in "Kiosk Mode". For more information on implementing an app that uses Kiosk Mode, see the [kiosk mode documentation](/windows-hardware/drivers/partnerapps/create-a-kiosk-app-for-assigned-access).
-
-### Transitioning above lock
-
-An activation above lock is similar to an activation below lock. If there are no active instances of the application, a new instance will be started in the background and `OnBackgroundActivated` in App.xaml.cs will be called. If there is an instance of the application, that instance will get a notification through the `ConversationalAgentSession.SignalDetected` event.
-
-If the application does not appear above lock, it must call `ConversationalAgentSession.RequestForegroundActivationAsync`. This triggers the `OnLaunched` method in App.xaml.cs which should navigate to the view that will appear above lock.
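-
-A hedged sketch of an `OnLaunched` override that navigates to an above lock view; `AboveLockPage` is a placeholder for your own page type.
-
-```csharp
-protected override void OnLaunched(LaunchActivatedEventArgs e)
-{
-    // Reuse the existing frame if there is one; otherwise create it.
-    var rootFrame = Window.Current.Content as Frame;
-    if (rootFrame == null)
-    {
-        rootFrame = new Frame();
-        Window.Current.Content = rootFrame;
-    }
-
-    // AboveLockPage is a placeholder: navigate to whichever view should appear above lock.
-    rootFrame.Navigate(typeof(AboveLockPage), e.Arguments);
-    Window.Current.Activate();
-}
-```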
-
-### Detecting lock screen transitions
-
-The ConversationalAgent library in the Windows SDK provides an API to make the lock screen state and changes to the lock screen state easily accessible. To detect the current lock screen state, check the `ConversationalAgentSession.IsUserAuthenticated` field. To detect changes in lock state, add an event handler to the `ConversationalAgentSession` object's `SystemStateChanged` event. It will fire whenever the screen changes from unlocked to locked or vice versa. If the value of the event arguments is `ConversationalAgentSystemStateChangeType.UserAuthentication`, then the lock screen state has changed.
-
-```csharp
-conversationalAgentSession.SystemStateChanged += (s, e) =>
-{
- if (e.SystemStateChangeType == ConversationalAgentSystemStateChangeType.UserAuthentication)
- {
- // Handle lock state change
- }
-};
-```
-
-### Detecting above lock activation user preference
-
-The application entry in the Voice Activation Privacy settings page has a toggle for above lock functionality. For your app to be able to launch above lock, the user will need to turn this setting on. The status of above lock permissions is also stored in the ActivationSignalDetectionConfiguration object. The AvailabilityInfo.HasLockScreenPermission status reflects whether the user has given above lock permission. If this setting is disabled, a voice application can prompt the user to navigate to the appropriate settings page at the link "ms-settings:privacy-voiceactivation" with instructions on how to enable above-lock activation for the application.
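-
-A minimal sketch of sending the user to that settings page; the URI comes from the paragraph above, and how you prompt the user is up to your app. It uses `Launcher` from the `Windows.System` namespace.
-
-```csharp
-private async Task OpenVoiceActivationSettingsAsync()
-{
-    await Launcher.LaunchUriAsync(new Uri("ms-settings:privacy-voiceactivation"));
-}
-```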
-
-## Closing the application
-
-To properly close the application programmatically while above or below lock, use the `WindowService.CloseWindow()` API. This triggers all UWP lifecycle methods, including OnSuspend, allowing the application to dispose of its `ConversationalAgentSession` instance before closing.
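-
-As a hedged illustration, the suspension handler is a natural place for that cleanup; the `agentSession` field name is an assumption.
-
-```csharp
-// Registered in the App constructor: this.Suspending += OnSuspending;
-private void OnSuspending(object sender, SuspendingEventArgs e)
-{
-    var deferral = e.SuspendingOperation.GetDeferral();
-
-    // Dispose the cached session so Windows knows the app is no longer active.
-    agentSession?.Dispose();
-    agentSession = null;
-
-    deferral.Complete();
-}
-```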
-
-> [!NOTE]
-> The application can close without closing the [below lock instance](/windows-hardware/drivers/partnerapps/create-a-kiosk-app-for-assigned-access#add-a-way-out-of-assigned-access-). In this case, the above lock view needs to "clean up", ensuring that once the screen is unlocked, there are no event handlers or tasks that will try to manipulate the above lock view.
-
-## Next steps
-
-> [!div class="nextstepaction"]
-> [Visit the UWP Voice Assistant Sample app for examples and code walk-throughs](windows-voice-assistants-faq.yml#the-uwp-voice-assistant-sample)
cognitive-services Windows Voice Assistants Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/windows-voice-assistants-overview.md
- Title: Voice Assistants on Windows overview - Speech service-
-description: An overview of the voice assistants on Windows, including capabilities and development resources available.
------ Previously updated : 02/19/2022----
-# What are Voice Assistants on Windows?
-
-Voice assistant applications can take advantage of the Windows ConversationalAgent APIs to achieve a complete voice-enabled assistant experience.
-
-## Voice Assistant Features
-
-Voice agent applications can be activated by a spoken keyword to enable a hands-free, voice-driven experience. Voice activation works when the application is closed and when the screen is locked.
-
-In addition, Windows provides a set of voice-activation privacy settings that gives users control of voice activation and above lock activation on a per-app basis.
-
-After voice activation, Windows will manage multiple active agents properly and notify each voice assistant if it is interrupted or deactivated. This allows applications to manage interruptions and other inter-agent events properly.
-
-## How does voice activation work?
-
-The Agent Activation Runtime (AAR) is the ongoing process in Windows that manages application activation on a spoken keyword or button press. It starts with Windows as long as at least one voice-activated application is registered on the system. Applications interact with AAR through the ConversationalAgent APIs in the Windows SDK.
-
-When the user speaks a keyword, the software or hardware keyword spotter on the system notifies AAR that a keyword has been detected, providing a keyword ID. AAR in turn sends a request to the Background Service to start the application with the corresponding application ID.
-
-### Registration
-
-The first time a voice activated application is run, it registers its app ID and keyword information through the ConversationalAgent APIs. AAR registers all configurations in the global mapping with the hardware or software keyword spotter on the system, allowing them to detect the application's keyword. The application also [registers with the Background Service](/windows/uwp/launch-resume/register-a-background-task).
-
-Note that this means an application cannot be activated by voice until it has been run once and registration has been allowed to complete.
-
-### Receiving an activation
-
-Upon receiving the request from AAR, the Background Service launches the application. The application receives a signal through the OnBackgroundActivated life-cycle method in `App.xaml.cs` with a unique event argument. This argument tells the application that it was activated by AAR and that it should start keyword verification.
-
-If the application successfully verifies the keyword, it can make a request to appear in the foreground. When this request succeeds, the application displays its UI and continues its interaction with the user.
-
-AAR still signals active applications when their keyword is spoken. Rather than signaling through the life-cycle method in `App.xaml.cs`, though, it signals through an event in the ConversationalAgent APIs.
-
-### Keyword verification
-
-The keyword spotter that triggers the application to start has achieved low power consumption by simplifying the keyword model. This allows the keyword spotter to be "always on" without a high power impact, but it also means the keyword spotter will likely have a high number of "false accepts" where it detects a keyword even though no keyword was spoken. This is why the voice activation system launches the application in the background: to give the application a chance to verify that the keyword was spoken before interrupting the user's current session. AAR saves the audio starting a few seconds before the keyword was spotted and makes it accessible to the application. The application can then run a more reliable keyword spotter on the same audio.
-
-## Next steps
-
-- Review the [design guidelines](windows-voice-assistants-best-practices.md) to provide the best experiences for voice activation.
-- See the voice assistants on Windows [get started](how-to-windows-voice-assistants-get-started.md) page.
-- See the [UWP Voice Assistant Sample](windows-voice-assistants-faq.yml#the-uwp-voice-assistant-sample) page and follow the steps to get the sample client running.
cognitive-services Document Translation Flow https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Translator/connector/document-translation-flow.md
- Title: "Use Translator V3 connector to build a Document Translation flow"-
-description: Use Microsoft Translator V3 connector and Power Automate to create a Document Translation flow.
------ Previously updated : 05/23/2023---
-<!-- markdownlint-disable MD051 -->
-<!-- markdownlint-disable MD024 -->
-<!-- markdownlint-disable MD029 -->
-<!-- markdownlint-disable MD036 -->
-<!-- markdownlint-disable MD001 -->
-
-# Create a Document Translation flow (preview)
-
-> [!IMPORTANT]
->
-> The Translator connector is currently available in public preview. Features, approaches, and processes may change prior to General Availability (GA) based on user feedback.
-
-This tutorial guides you through configuring a Microsoft Translator V3 connector cloud flow that supports document translation. The Translator V3 connector creates a connection between your Translator Service instance and Microsoft Power Automate enabling you to use one or more prebuilt operations as steps in your apps and workflows.
-
-Document Translation is a cloud-based REST API feature of the Azure Translator service. The Document Translation API enables multiple and complex document translations while preserving original document structure and data format.
-
-In this tutorial:
-
-> [!div class="checklist"]
->
-> * [Create a blob storage account with containers for your source and target files](#azure-storage).
-> * [Set up a managed identity with role-based access control (RBAC)](#create-a-managed-identity).
-> * [Translate documents from your Azure Blob Storage account](#translate-documents).
-> * [Translate documents from your SharePoint site](#translate-documents).
-
-## Prerequisites
-
-Here's what you need to get started: [**Translator resource**](#translator-resource), [**Azure storage account**](#azure-storage) with at least two containers, and a [**system-assigned managed identity**](#managed-identity-with-rbac) with role-based access.
-
-### Translator resource
-
-* If you don't have an active [**Azure account**](https://azure.microsoft.com/free/cognitive-services/), you can [**create one for free**](https://azure.microsoft.com/free/).
-
-* Create a [**single-service Translator resource**](https://portal.azure.com/#create/Microsoft.CognitiveServicesTextTranslation) (**not** a multi-service Cognitive Services resource). As you complete the Translator project and instance details fields, pay special attention to the following entries:
-
- * **Resource Region**. Choose a **geographic** region like **West US** (**not** the *Global* region).
-
- * **Pricing tier**. Select **Standard S1** to try the service.
-
-* Use the **key** and **name** from your Translator resource to connect your application to Power Automate. Your Translator resource keys are found under the Resource Management section in the Azure portal and your resource name is located at the top of the page.
-
- :::image type="content" source="../media/connectors/keys-resource-details.png" alt-text="Get key and endpoint.":::
-
-* Copy and paste your key and resource name in a convenient location, such as *Microsoft Notepad*.
-
-### Azure storage
-
-* Next, you need an [**Azure Blob Storage account**](https://portal.azure.com/#create/Microsoft.StorageAccount-ARM) and at least two [containers](/azure/storage/blobs/storage-quickstart-blobs-portal?branch=main#create-a-container) for your source and target files:
-
- * **Source container**. This container is where you upload your files for translation (required).
- * **Target container**. This container is where your translated files are stored (required).
-
-* **If your storage account is behind a firewall, you must enable additional configurations**:
-
- 1. Go to the [Azure portal](https://portal.azure.com/) and sign in to your Azure account.
- 1. Select your Storage account.
- 1. In the **Security + networking** group in the left pane, select **Networking**.
- 1. In the **Firewalls and virtual networks** tab, select **Enabled from selected virtual networks and IP addresses**.
-
- :::image type="content" source="../media/managed-identities/firewalls-and-virtual-networks.png" alt-text="Screenshot: Selected networks radio button selected.":::
-
- 1. Deselect all check boxes.
- 1. Make sure **Microsoft network routing** is selected.
- 1. Under the **Resource instances** section, select **Microsoft.CognitiveServices/accounts** as the resource type and select your Translator resource as the instance name.
- 1. Make certain that the **Allow Azure services on the trusted services list to access this storage account** box is checked. For more information about managing exceptions, *see* [Configure Azure Storage firewalls and virtual networks](../../../storage/common/storage-network-security.md).
-
- :::image type="content" source="../media/managed-identities/allow-trusted-services-checkbox-portal-view.png" alt-text="Screenshot: allow trusted services checkbox, portal view.":::
-
- 1. Select **Save**. It may take up to 5 min for the network changes to propagate.
-
-### Managed identity with RBAC
-
- Finally, before you can use the Translator V3 connector's operations for document translation, you must grant your Translator resource access to your storage account using a managed identity with role-based access control (RBAC).
-
- :::image type="content" source="../document-translation/media/managed-identity-rbac-flow.png" alt-text="Screenshot of managed identity flow (RBAC).":::
-
-#### Create a managed identity
-
-First, create a system-assigned managed identity for your Translator resource and grant that identity specific permissions to access your Azure storage account:
-
-1. Go to the [Azure portal](https://portal.azure.com/) and sign in to your Azure account.
-1. Select the Translator resource.
-1. In the **Resource Management** group in the left pane, select **Identity**.
-1. Within the **System assigned** tab, turn on the **Status** toggle.
-1. Select **Save**.
-
- :::image type="content" source="../media/managed-identities/resource-management-identity-tab.png" alt-text="Screenshot: resource management identity tab in the Azure portal.":::
-
-#### Role assignment
-
-Next, assign a **`Storage Blob Data Contributor`** role to the managed identity *at the* storage scope for your storage resource.
-
-1. Go to the [Azure portal](https://portal.azure.com/) and sign in to your Azure account.
-1. Select the Translator resource.
-1. In the **Resource Management** group in the left pane, select **Identity**.
-1. Under **Permissions** select **Azure role assignments**:
-
- :::image type="content" source="../media/managed-identities/enable-system-assigned-managed-identity-portal.png" alt-text="Screenshot: enable system-assigned managed identity in Azure portal.":::
-
-1. On the Azure role assignments page that opens, choose your subscription from the drop-down menu, then select **&plus; Add role assignment**.
-
- :::image type="content" source="../media/managed-identities/azure-role-assignments-page-portal.png" alt-text="Screenshot: Azure role assignments page in the Azure portal.":::
-
-1. Finally, assign a **Storage Blob Data Contributor** role to your Translator service resource. The **Storage Blob Data Contributor** role gives Translator (represented by the system-assigned managed identity) read, write, and delete access to the blob container and data. In the **Add role assignment** pop-up window, complete the fields as follows and select **Save**:
-
- | Field | Value|
- ||--|
- |**Scope**| ***Storage***.|
- |**Subscription**| ***The subscription associated with your storage resource***.|
- |**Resource**| ***The name of your storage resource***.
- |**Role** | ***Storage Blob Data Contributor***.|
-
-1. After the *Added Role assignment* confirmation message appears, refresh the page to see the added role assignment.
-
- :::image type="content" source="../media/managed-identities/add-role-assignment-confirmation.png" alt-text="Screenshot: Added role assignment confirmation pop-up message.":::
-
-1. If you don't see the new role assignment right away, wait and try refreshing the page again. When you assign or remove role assignments, it can take up to 30 minutes for changes to take effect.
-
- :::image type="content" source="../media/managed-identities/assigned-roles-window.png" alt-text="Screenshot: Azure role assignments window.":::
-
-### Configure a Document Translation flow
-
-Now that you've completed the prerequisites and initial setup, let's get started using the Translator V3 connector to create your document translation flow:
-
-1. Sign in to [Power Automate](https://powerautomate.microsoft.com/).
-
-1. Select **Create** from the left sidebar menu.
-
-1. Select **Instant cloud flow** from the main content area.
-
- :::image type="content" source="../media/connectors/create-flow.png" alt-text="Screenshot showing how to create an instant cloud flow.":::
-
-1. In the popup window, name your flow then choose **Manually trigger a flow** and select **Create**.
-
- :::image type="content" source="../media/connectors/select-manual-flow.png" alt-text="Screenshot showing how to manually trigger a flow.":::
-
-1. The first step for your instant flow, **Manually trigger a flow**, appears on screen. Select **New step**.
-
- :::image type="content" source="../media/connectors/add-new-step.png" alt-text="Screenshot of add new flow step page.":::
-
-## Translate documents
-
-Next, we're ready to select an action. You can translate documents located in your **Azure Blob Storage** or **Microsoft SharePoint** account.
-
-### [Use Azure Blob Storage](#tab/blob-storage)
-
-### Azure Blob Storage
-
-Here are the steps to translate a file in Azure Blob Storage using the Translator V3 connector:
-> [!div class="checklist"]
->
-> * Choose the Translator V3 connector.
-> * Select document translation.
-> * Enter your Azure Blob Storage credentials and container locations.
-> * Translate your document(s) choosing source and target languages.
-> * Get the status of the translation operation.
-
-1. In the **Choose an operation** pop-up window, enter Translator V3 in the **Search connectors and actions** search bar and select the **Microsoft Translator V3** icon.
-
- :::image type="content" source="../media/connectors/choose-operation.png" alt-text="Screenshot showing the selection of Translator V3 as the next flow step.":::
-
-1. Select the **Start document translation** action.
-1. If you're using the Translator V3 connector for the first time, you need to enter your resource credentials:
-
- * **Connection name**. Enter a name for your connection.
-
- * **Subscription Key**. Your Translator resource keys are found under the **Resource Management** section of the resource sidebar in the Azure portal. Enter one of your keys. **Make certain that your Translator resource is assigned to a geographical region such as West US** (not global).
-
- * **Translator resource name**. Enter the name of your Translator resource found at the top of your resource page in the Azure portal. Select **Create**.
-
- :::image type="content" source="../media/connectors/add-connection.png" alt-text="Screenshot showing the add connection window.":::
-
- > [!NOTE]
- > After you've set up your connection, you won't be required to reenter your credentials for subsequent flows.
-
-1. The **Start document translation** action window now appears. Complete the fields as follows:
-
- * For **Storage type of the input documents**. Select **File** or **Folder**.
- * Select a **Source Language** from the dropdown menu or keep the default **Auto-detect** option.
- * **Location of the source documents**. Enter the URL for your document(s) in your Azure storage source document container.
- * **Location of the translated documents**. Enter the URL for your Azure storage target document container.
-
- To find your source and target URLs:
- * Navigate to your storage account in the Azure portal.
- * In the left sidebar, under **Data storage** , select **Containers**:
-
- |Source|Target|
- ||-|
- |Select the checkbox next to the source container|Select the checkbox next to the target container.|
- | From the main window area, select a file or document for translation.| Select the ellipses located at the right, then choose **Properties**.|
- | The source URL is located at the top of the Properties list. Select the **Copy to Clipboard** icon.|The target URL is located at the top of the Properties list. Select the **Copy to Clipboard** icon.|
- | Navigate to your Power Automate flow and paste the source URL in the **Location of the source documents** field.|Navigate to your Power Automate flow and paste the target URL in the **Location of the translated documents** field.|
- * Choose a **Target Language** from the dropdown menu and select **Save**.
-
- :::image type="content" source="../media/connectors/start-document-translation-window.png" alt-text="Screenshot of the Start document translation dialog window.":::
-
-## Get documents status
-
-Now that you've submitted your document(s) for translation, let's check the status of the operation.
-
-1. Select **New step**.
-
-1. Enter Translator V3 in the search box and choose **Microsoft Translator V3**.
-
-1. Select **Get documents status** (not the singular Get *document* status action).
-
- :::image type="content" source="../media/connectors/get-documents-status-step.png" alt-text="Screenshot of the get documents status step.":::
-
-1. Next, you're going to enter an expression to retrieve the **`operation ID`** value.
-
-1. Select the operation ID field. A **Dynamic content** / **Expression** dropdown window appears.
-
-1. Select the **Expression** tab and enter the following expression into the function field:
-
- ```powerappsfl
-
- body('Start_document_translation').operationID
-
- ```
-
- :::image type="content" source="../media/connectors/create-function-expression.png" alt-text="Screenshot showing function creation window.":::
-
-1. Select **OK**. The function appears in the **Operation ID** window. Select **Save**.
-
- :::image type="content" source="../media/connectors/operation-id-function.png" alt-text="Screenshot showing the operation ID field with an expression function value.":::
-
-## Test the connector flow
-
-Time to check our flow and document translation results.
-
-1. There's a green bar at the top of the page indicating that ***Your flow is ready to go***.
-1. Select **Test** from the upper-right corner of the page.
-
- :::image type="content" source="../media/connectors/test-flow.png" alt-text="Screenshot showing the test icon/button.":::
-
-1. Select the following buttons: **Test Flow** → **Manually** → **Test** from the right-side window.
-1. In the next window, select the **Run flow** button.
-1. Finally, select the **Done** button.
-1. You should receive a ***Your flow ran successfully*** message, and a green check mark appears next to each successful step.
-
- :::image type="content" source="../media/connectors/successful-document-translation-flow.png" alt-text="Screenshot of successful document translation flow.":::
-
-1. Select the **Get documents status** step, then select **Show raw outputs** from the **Outputs** section.
-1. A **Get documents status** window appears. At the top of the JSON response, you see `"statusCode":200` indicating that the request was successful.
-
- :::image type="content" source="../media/connectors/get-documents-status.png" alt-text="Screenshot showing the 'Get documents status' JSON response.":::
-
-1. As a final check, navigate to your Azure Blob Storage target container. There, you should see the translated document in the **Overview** section. The document may be in a folder labeled with the translation language code.
-
-#### [Use Microsoft SharePoint](#tab/sharepoint)
-
-### Microsoft SharePoint
-
-Here are the steps to upload a file from your SharePoint site to Azure Blob Storage, translate the file, and return the translated file to SharePoint:
-
-> [!div class="checklist"]
->
-> * [Retrieve the source file content from your SharePoint site](#get-file-content).
-> * [Create an Azure storage blob from the source file](#create-a-storage-blob).
-> * [Translate the source file](#start-document-translation).
-> * [Get documents status](#get-documents-status).
-> * [Get blob content and metadata](#get-blob-content-and-metadata).
-> * [Create a new SharePoint folder](#create-new-folder).
-> * [Create your translated file in Sharepoint](#create-sharepoint-file).
-
-##### Get file content
-
 1. In the **Choose an operation** pop-up window, enter **SharePoint**, then select the **Get file content** action. Power Automate automatically signs you in to your SharePoint account.
-
- :::image type="content" source="../media/connectors/get-file-content.png" alt-text="Screenshot of the SharePoint Get file content action.":::
-
-1. On the **Get file content** step window, complete the following fields:
- * **Site Address**. Select the SharePoint site URL where your file is located from the dropdown list.
- * **File Identifier**. Select the folder icon and choose the document(s) for translation.
-
-##### Create a storage blob
-
-1. Select **New step** and enter **Azure Blob Storage** in the search box.
-
-1. Select the **Create blob (V2)** action.
-
-1. If you're using the Azure storage step for the first time, you need to enter your storage resource authentication:
-
-1. In the **Authentication type** field, choose **Azure AD Integrated** and then select the **Sign in** button.
-
- :::image type="content" source="../media/connectors/storage-authentication.png" alt-text="Screenshot of Azure Blob Storage authentication window.":::
-
-1. Choose the Azure Active Directory (Azure AD) account associated with your Azure Blob Storage and Translator resource accounts.
-
-1. After you have completed the **Azure Blob Storage** authentication, the **Create blob** step appears. Complete the fields as follows:
-
- * **Storage account name or blob endpoint**. Select **Enter custom value** and enter your storage account name.
- * **Folder path**. Select the folder icon and select your source document container.
- * **Blob name**. Enter the name of the file you're translating.
- * **Blob content**. Select the **Blob content** field to reveal the **Dynamic content** dropdown list and choose the SharePoint **File Content** icon.
-
- :::image type="content" source="../media/connectors/file-dynamic-content.png" alt-text="Screenshot of the file content icon on the dynamic content menu.":::
-
-##### Start document translation
-
-1. Select **New step**.
-
-1. Enter Translator V3 in the search box and choose **Microsoft Translator V3**.
-
-1. Select the **Start document translation** action.
-
-1. If you're using the Translator V3 connector for the first time, you need to enter your resource credentials:
-
- * **Connection name**. Enter a name for your connection.
- * **Subscription Key**. Enter one of your keys that you copied from the Azure portal.
- * **Translator resource name**. Enter the name of your Translator resource found at the top of your resource page in the Azure portal.
- * Select **Create**.
-
- :::image type="content" source="../media/connectors/add-connection.png" alt-text="Screenshot showing the add connection window.":::
-
- > [!NOTE]
- > After you've set up your connection, you won't be required to reenter your credentials for subsequent Translator flows.
-
-1. The **Start document translation** action window now appears. Complete the fields as follows:
-
-1. For **Storage type of the input documents**. Select **File** or **Folder**.
-1. Select a **Source Language** from the dropdown menu or keep the default **Auto-detect** option.
-1. **Location of the source documents**. Select the field and add **Path** followed by **Name** from the **Dynamic content** dropdown list.
-1. **Location of the translated documents**. Enter the URL for your Azure storage target document container.
-
- To find your target container URL:
-
- * Navigate to your storage account in the Azure portal.
- * In the left sidebar, under **Data storage** , select **Containers**.
- * Select the checkbox next to the target container.
- * Select the ellipses located at the right, then choose **Properties**.
- * The target URL is located at the top of the Properties list. Select the **Copy to Clipboard** icon.
- * Navigate back to your Power Automate flow and paste the target URL in the **Location of the translated documents** field.
-
-1. Choose a **Target Language** from the dropdown menu and select **Save**.
-
- :::image type="content" source="../media/connectors/start-document-translation-window.png" alt-text="Screenshot of the Start document translation dialog window.":::
-
-##### Get documents status
-
-Prior to retrieving the documents status, let's schedule a 30-second delay to ensure that the file has been processed for translation:
-
-1. Select **New step**. Enter **Schedule** in the search box and choose **Delay**.
- * For **Count**. Enter **30**.
- * For **Unit**. Enter **Second** from the dropdown list.
-
- :::image type="content" source="../media/connectors/delay-step.png" alt-text="Screenshot showing the delay step.":::
-
-1. Select **New step**. Enter Translator V3 in the search box and choose **Microsoft Translator V3**.
-
-1. Select **Get documents status** (not the singular Get *document* status).
-
- :::image type="content" source="../media/connectors/get-documents-status-step.png" alt-text="Screenshot of the get documents status step.":::
-
-1. Next, you're going to enter an expression to retrieve the **`operation ID`** value.
-
-1. Select the **operation ID** field. A **Dynamic content** / **Expression** dropdown window appears.
-
-1. Select the **Expression** tab and enter the following expression into the function field:
-
- ```powerappsfl
-
- body('Start_document_translation').operationID
-
- ```
-
- :::image type="content" source="../media/connectors/create-function-expression.png" alt-text="Screenshot showing function creation window.":::
-
-1. Select **OK**. The function appears in the **Operation ID** window. Select **Save**.
-
- :::image type="content" source="../media/connectors/operation-id-function.png" alt-text="Screenshot showing the operation ID field with an expression function value.":::
-
-##### Get blob content and metadata
-
-In this step, you retrieve the translated document from Azure Blob Storage and upload it to SharePoint.
-
-1. Select **New Step**, enter **Control** in the search box, and select **Apply to each** from the **Actions** list.
-1. Select the input field to show the **Dynamic content** window and select **value**.
-
- :::image type="content" source="../media/connectors/sharepoint-steps.png" alt-text="Screenshot showing the 'Apply to each' control step.":::
-
-1. Select **Add an action**.
-
- :::image type="content" source="../media/connectors/add-action.png" alt-text="Screenshot showing the 'Add an action' control step.":::
-
-1. Enter **Control** in the search box and select **Do until** from the **Actions** list.
-1. Select the **Choose a value** field to show the **Dynamic content** window and select **progress**.
-1. Complete the three fields as follows: **progress** → **is equal to** → **1**:
-
- :::image type="content" source="../media/connectors/do-until-progress.png" alt-text="Screenshot showing the Do until control action.":::
-
-1. Select **Add an action**, enter **Azure Blob Storage** in the search box, and select the **Get blob content using path (V2)** action.
-1. In the **Storage account name or blob endpoint** field, select **Enter custom value** and enter your storage account name.
-1. Select the **Blob path** field to show the **Dynamic content** window, select **Expression** and enter the following logic in the formula field:
-
- ```powerappsfl
-
- concat(split(outputs('Get_documents_status')?['body/path'],'/')[3],'/',split(outputs('Get_documents_status')?['body/path'],'/')[4],'/',split(outputs('Get_documents_status')?['body/path'],'/')[5])
-
- ```
-
-1. Select **Add an action**, enter **Azure Blob Storage** in the search box, and select the **Get Blob Metadata using path (V2)** action.
-1. In the **Storage account name or blob endpoint** field, select **Enter custom value** and enter your storage account name.
-
- :::image type="content" source="../media/connectors/enter-custom-value.png" alt-text="Screenshot showing 'enter custom value' from the Create blob (V2) window.":::
-
-1. Select the **Blob path** field to show the **Dynamic content** window, select **Expression** and enter the following logic in the formula field:
-
- ```powerappsfl
-
- concat(split(outputs('Get_documents_status')?['body/path'],'/')[3],'/',split(outputs('Get_documents_status')?['body/path'],'/')[4],'/',split(outputs('Get_documents_status')?['body/path'],'/')[5])
-
- ```
-
-##### Create new folder
-
-1. Select **Add an action**, enter **SharePoint** in the search box, and select the **Create new folder** action.
-1. Select your SharePoint URL from the **Site Address** dropdown window.
-1. In the next field, select a **List or Library** from the dropdown.
-1. Name your new folder and enter it in the **/Folder Path/** field (be sure to enclose the path name with forward slashes).
-
-##### Create SharePoint file
-
-1. Select **Add an action**, enter **SharePoint** in the search box, and select the **Create file** action.
-1. Select your SharePoint URL from the **Site Address** dropdown window.
-1. In the **Folder Path** field, from the **Dynamic content** list, under *Create new folder*, select **Full Path**.
-1. In the **File Name** field, from the **Dynamic content** list, under *Get Blob Metadata using path (V2)*, select **Name**.
-1. In the **File Content** field, from the **Dynamic content** list, under *Get Blob Metadata using path (V2)*, select **File Content**.
-1. Select **Save**.
-
- :::image type="content" source="../media/connectors/apply-to-each-complete.png" alt-text="Screenshot showing the apply-to-each step sequence.":::
-
-## Test your connector flow
-
-Let's check your document translation flow and results.
-
-1. There's a green bar at the top of the page indicating that ***Your flow is ready to go***.
-1. Select **Test** from the upper-right corner of the page.
-
- :::image type="content" source="../media/connectors/test-flow.png" alt-text="Screenshot showing the test icon/button.":::
-
-1. Select **Test Flow** → **Manually** → **Test** from the right-side window.
-1. In the **Run flow** window, select the **Continue** then the **Run flow** buttons.
-1. Finally, select the **Done** button.
-1. You should receive a "Your flow ran successfully" message, and a green check mark appears next to each successful step.
-
- :::image type="content" source="../media/connectors/successful-sharepoint-flow.png" alt-text="Screenshot showing a successful flow using SharePoint and Azure Blob Storage.":::
---
-That's it! You've learned to automate document translation processes using the Microsoft Translator V3 connector and Power Automate.
-
-## Next steps
-
-> [!div class="nextstepaction"]
-> [Explore the Document Translation reference guide](../document-translation/reference/rest-api-guide.md)
cognitive-services Text Translator Flow https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Translator/connector/text-translator-flow.md
- Title: Use Translator V3 connector to configure a Text Translator flow-
-description: Use Microsoft Translator V3 connector and Power Automate to configure a Text Translator flow.
------ Previously updated : 05/23/2023---
-<!-- markdownlint-disable MD051 -->
-<!-- markdownlint-disable MD024 -->
-<!-- markdownlint-disable MD029 -->
-<!-- markdownlint-disable MD036 -->
-<!-- markdownlint-disable MD001 -->
-
-# Create a text translation flow (preview)
-
-> [!IMPORTANT]
->
-> * The Translator connector is currently available in public preview. Features, approaches, and processes may change prior to General Availability (GA) based on user feedback.
-
-This article guides you through configuring a Microsoft Translator V3 connector cloud flow that supports text translation and transliteration. The Translator V3 connector creates a connection between your Translator Service instance and Microsoft Power Automate enabling you to use one or more prebuilt operations as steps in your apps and workflows.
-
-Text translation is a cloud-based REST API feature of the Azure Translator service. The Text Translation API enables quick and accurate source-to-target text translations in real time.
-
-## Prerequisites
-
-To get started, you need an active Azure subscription. If you don't have an Azure subscription, you can [create one for free](https://azure.microsoft.com/free/cognitive-services/).
-
-* Once you have your Azure subscription, create a [**single-service Translator resource**](https://portal.azure.com/#create/Microsoft.CognitiveServicesTextTranslation) (**not** a multi-service Cognitive Services resource):
-
-* You need the key and name from your resource to connect your application to Power Automate. Your Translator resource keys are found under the Resource Management section in the Azure portal and your resource name is located at the top of the page. Copy and paste your key and resource name in a convenient location, such as *Microsoft Notepad*.
-
- :::image type="content" source="../media/connectors/keys-resource-details.png" alt-text= "Screenshot showing key and endpoint location in the Azure portal.":::
-
-## Configure the Translator V3 connector
-
-Now that you've completed the prerequisites, let's get started.
-
-1. Sign in to [Power Automate](https://powerautomate.microsoft.com/).
-
-1. Select **Create** from the left sidebar menu.
-
-1. Select **Instant cloud flow** from the main content area.
-
- :::image type="content" source="../media/connectors/create-flow.png" alt-text="Screenshot showing how to create an instant cloud flow.":::
-
-1. In the popup window, **name your flow**, choose **Manually trigger a flow**, then select **Create**.
-
- :::image type="content" source="../media/connectors/select-manual-flow.png" alt-text="Screenshot showing how to manually trigger a flow.":::
-
-1. The first step for your instant flow, **Manually trigger a flow**, appears on screen. Select **New step**.
-
- :::image type="content" source="../media/connectors/add-new-step.png" alt-text="Screenshot of add new flow step page.":::
-
-1. A **Choose an operation** pop-up window appears. Enter Translator V3 in the **Search connectors and actions** search bar, then select the **Microsoft Translator V3** icon.
-
- :::image type="content" source="../media/connectors/choose-operation.png" alt-text="Screenshot showing the selection of Translator V3 as the next flow step.":::
-
-## Structure your cloud flow
-
-Let's select an action. Choose to translate or transliterate text.
-
-#### [Translate text](#tab/translate)
-
-### Translate
-
-1. Select the **Translate text** action.
-1. If you're using the Translator V3 connector for the first time, you need to enter your resource credentials:
-
- * **Connection name**. Enter a name for your connection.
- * **Subscription Key**. Enter one of your keys that you copied from the Azure portal.
- * **Translator resource name**. Enter the name of your Translator resource found at the top of your resource page in the Azure portal. Select **Create**.
-
- :::image type="content" source="../media/connectors/add-connection.png" alt-text="Screenshot showing the add connection window.":::
-
- > [!NOTE]
- > After you've set up your connection, you won't be required to reenter your credentials for subsequent Translator flows.
-
-1. The **Translate text** action window appears next.
-1. Select a **Source Language** from the dropdown menu or keep the default **Auto-detect** option.
-1. Select a **Target Language** from the dropdown window.
-1. Enter the **Body Text**.
-1. Select **Save**.
-
- :::image type="content" source="../media/connectors/translate-text-step.png" alt-text="Screenshot showing the translate text step.":::
-
-#### [Transliterate text](#tab/transliterate)
-
-### Transliterate
-
-1. Select the **Transliterate** action.
-1. If you're using the Translator V3 connector for the first time, you need to enter your resource credentials:
-
- * **Connection name**. Enter a name for your connection.
- * **Subscription Key**. Enter one of your keys that you copied from the Azure portal.
- * **Translator resource name**. Enter the name of your Translator resource found at the top of your resource page in the Azure portal. Select **Create**.
-
- :::image type="content" source="../media/connectors/add-connection.png" alt-text="Screenshot showing the add connection window.":::
-
-1. Next, the **Transliterate** action window appears.
-1. **Language**. Select the language of the text that is to be converted.
-1. **Source script**. Select the name of the input text script.
-1. **Target script**. Select the name of the transliterated text script.
-1. Select **Save**.
-
- :::image type="content" source="../media/connectors/transliterate-text-step.png" alt-text="Screenshot showing the transliterate text step.":::
---
-## Test your connector flow
-
-Let's test the cloud flow and view the translated text.
-
-1. There's a green bar at the top of the page indicating that ***Your flow is ready to go***.
-1. Select **Test** from the upper-right corner of the page.
-
- :::image type="content" source="../media/connectors/test-flow.png" alt-text="Screenshot showing the test icon/button.":::
-
-1. Select **Test Flow** → **Manually** → **Test** from the right-side window.
-1. In the next window, select the **Run flow** button.
-1. Finally, select the **Done** button.
-1. You should receive a "Your flow ran successfully" message, and a green check mark appears next to each successful step.
-
- :::image type="content" source="../media/connectors/successful-text-translation-flow.png" alt-text="Screenshot of successful flow.":::
-
-#### [Translate text](#tab/translate)
-
-Select the **Translate text** step to view the translated text (output):
-
- :::image type="content" source="../media/connectors/translated-text-output.png" alt-text="Screenshot of translated text output.":::
-
-#### [Transliterate text](#tab/transliterate)
-
-Select the **Transliterate** step to view the transliterated text (output):
-
- :::image type="content" source="../media/connectors/transliterated-text-output.png" alt-text="Screenshot of transliterated text output.":::
---
-> [!TIP]
->
-> * Check on the status of your flow by selecting the **My flows** tab on the navigation sidebar.
-> * Edit or update your connection by selecting **Connections** under the **Data** tab on the navigation sidebar.
-
-That's it! You now know how to automate text translation processes using the Microsoft Translator V3 connector and Power Automate.
-
-## Next steps
-
-> [!div class="nextstepaction"]
-> [Configure a Translator V3 connector document translation flow](document-translation-flow.md)
cognitive-services Translator Container Configuration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Translator/containers/translator-container-configuration.md
- Title: Configure containers - Translator-
-description: The Translator container runtime environment is configured using the `docker run` command arguments. There are both required and optional settings.
------ Previously updated : 11/29/2022-
-recommendations: false
--
-# Configure Translator Docker containers
-
-Cognitive Services provides each container with a common configuration framework. You can easily configure your Translator containers to build Translator application architecture optimized for robust cloud capabilities and edge locality.
-
-The **Translator** container runtime environment is configured using the `docker run` command arguments. This container has both required and optional settings. The required container-specific settings are the billing settings.
-
-## Configuration settings
-
-The container has the following configuration settings:
-
-|Required|Setting|Purpose|
-|--|--|--|
-|Yes|[ApiKey](#apikey-configuration-setting)|Tracks billing information.|
-|No|[ApplicationInsights](#applicationinsights-setting)|Enables adding [Azure Application Insights](/azure/application-insights) telemetric support to your container.|
-|Yes|[Billing](#billing-configuration-setting)|Specifies the endpoint URI of the service resource on Azure.|
-|Yes|[EULA](#eula-setting)| Indicates that you've accepted the license for the container.|
-|No|[Fluentd](#fluentd-settings)|Writes log and, optionally, metric data to a Fluentd server.|
-|No|HTTP Proxy|Configures an HTTP proxy for making outbound requests.|
-|No|[Logging](#logging-settings)|Provides ASP.NET Core logging support for your container. |
-|Yes|[Mounts](#mount-settings)|Reads and writes data from the host computer to the container and from the container back to the host computer.|
-
- > [!IMPORTANT]
-> The [**ApiKey**](#apikey-configuration-setting), [**Billing**](#billing-configuration-setting), and [**EULA**](#eula-setting) settings are used together, and you must provide valid values for all three of them; otherwise your container won't start. For more information about using these configuration settings to instantiate a container, see [how to install and run Translator containers](translator-how-to-install-container.md).
-
-## ApiKey configuration setting
-
-The `ApiKey` setting specifies the Azure resource key used to track billing information for the container. You must specify a value for the ApiKey and the value must be a valid key for the _Translator_ resource specified for the [`Billing`](#billing-configuration-setting) configuration setting.
-
-This setting can be found in the following place:
-
-* Azure portal: **Translator** resource management, under **Keys**
-
-## ApplicationInsights setting
--
-## Billing configuration setting
-
-The `Billing` setting specifies the endpoint URI of the _Translator_ resource on Azure used to meter billing information for the container. You must specify a value for this configuration setting, and the value must be a valid endpoint URI for a _Translator_ resource on Azure. The container reports usage about every 10 to 15 minutes.
-
-This setting can be found in the following place:
-
-* Azure portal: **Translator** Overview page, labeled `Endpoint`
-
-| Required | Name | Data type | Description |
-| -- | - | | -- |
-| Yes | `Billing` | String | Billing endpoint URI. For more information on obtaining the billing URI, see [gathering required parameters](translator-how-to-install-container.md#required-elements). For more information and a complete list of regional endpoints, see [Custom subdomain names for Cognitive Services](../../cognitive-services-custom-subdomains.md). |
-
-## EULA setting
--
-## Fluentd settings
--
-## HTTP proxy credentials settings
--
-## Logging settings
--
-## Mount settings
-
-Use bind mounts to read and write data to and from the container. You can specify an input mount or output mount by specifying the `--mount` option in the [docker run](https://docs.docker.com/engine/reference/commandline/run/) command.
-
-## Next steps
-
-> [!div class="nextstepaction"]
-> [Learn more about Cognitive Services Containers](../../cognitive-services-container-support.md)
cognitive-services Translator Container Supported Parameters https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Translator/containers/translator-container-supported-parameters.md
- Title: "Container: Translate method"-
-description: Understand the parameters, headers, and body messages for the container Translate method of Azure Cognitive Services Translator to translate text.
------- Previously updated : 11/29/2022---
-# Container: Translate
-
-Translate text.
-
-## Request URL
-
-Send a `POST` request to:
-
-```HTTP
-http://localhost:{port}/translate?api-version=3.0
-```
-
-Example: http://<span></span>localhost:5000/translate?api-version=3.0
-
-## Request parameters
-
-Request parameters passed on the query string are:
-
-### Required parameters
-
-| Query parameter | Description |
-| | |
-| api-version | _Required parameter_. <br>Version of the API requested by the client. Value must be `3.0`. |
-| from | _Required parameter_. <br>Specifies the language of the input text. Find which languages are available to translate from by looking up [supported languages](../reference/v3-0-languages.md) using the `translation` scope.|
-| to | _Required parameter_. <br>Specifies the language of the output text. The target language must be one of the [supported languages](../reference/v3-0-languages.md) included in the `translation` scope. For example, use `to=de` to translate to German. <br>It's possible to translate to multiple languages simultaneously by repeating the parameter in the query string. For example, use `to=de&to=it` to translate to German and Italian. |
-
-### Optional parameters
-
-| Query parameter | Description |
-| | |
-| textType | _Optional parameter_. <br>Defines whether the text being translated is plain text or HTML text. Any HTML needs to be a well-formed, complete element. Possible values are: `plain` (default) or `html`. |
-| includeSentenceLength | _Optional parameter_. <br>Specifies whether to include sentence boundaries for the input text and the translated text. Possible values are: `true` or `false` (default). |
-
-Request headers include:
-
-| Headers | Description |
-| | |
-| Authentication header(s) | _Required request header_. <br>See [available options for authentication](../reference/v3-0-reference.md#authentication). |
-| Content-Type | _Required request header_. <br>Specifies the content type of the payload. <br>Accepted value is `application/json; charset=UTF-8`. |
-| Content-Length | _Required request header_. <br>The length of the request body. |
-| X-ClientTraceId | _Optional_. <br>A client-generated GUID to uniquely identify the request. You can omit this header if you include the trace ID in the query string using a query parameter named `ClientTraceId`. |
-
-## Request body
-
-The body of the request is a JSON array. Each array element is a JSON object with a string property named `Text`, which represents the string to translate.
-
-```json
-[
- {"Text":"I would really like to drive your car around the block a few times."}
-]
-```
-
-The following limitations apply:
-
-* The array can have at most 100 elements.
-* The entire text included in the request can't exceed 10,000 characters including spaces.
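-
-If you're calling the container programmatically, the following minimal Python sketch (an illustration, not an official sample) shows one way to enforce these limits on the client side before posting the request body. It assumes the container is listening on `http://localhost:5000` and uses the third-party `requests` package.
-
-```python
-# Minimal sketch: build a request body for the local container and check the
-# documented limits before sending. The localhost endpoint is an assumption.
-import requests
-
-def translate(texts, source="en", targets=("de",), host="http://localhost:5000"):
-    if len(texts) > 100:
-        raise ValueError("The array can have at most 100 elements.")
-    if sum(len(t) for t in texts) > 10_000:
-        raise ValueError("The entire text can't exceed 10,000 characters.")
-
-    params = [("api-version", "3.0"), ("from", source)] + [("to", t) for t in targets]
-    body = [{"Text": t} for t in texts]
-    response = requests.post(f"{host}/translate", params=params, json=body)
-    response.raise_for_status()
-    return response.json()
-
-print(translate(["I would really like to drive your car around the block a few times."]))
-```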
-
-## Response body
-
-A successful response is a JSON array with one result for each string in the input array. A result object includes the following properties:
-
-* `translations`: An array of translation results. The size of the array matches the number of target languages specified through the `to` query parameter. Each element in the array includes:
-
-  * `to`: A string representing the language code of the target language.
-
-  * `text`: A string giving the translated text.
-
-  * `sentLen`: An object returning sentence boundaries in the input and output texts.
-
-    * `srcSentLen`: An integer array representing the lengths of the sentences in the input text. The length of the array is the number of sentences, and the values are the length of each sentence.
-
-    * `transSentLen`: An integer array representing the lengths of the sentences in the translated text. The length of the array is the number of sentences, and the values are the length of each sentence.
-
-    Sentence boundaries are only included when the request parameter `includeSentenceLength` is `true`.
-
-* `sourceText`: An object with a single string property named `text`, which gives the input text in the default script of the source language. The `sourceText` property is present only when the input is expressed in a script that's not the usual script for the language. For example, if the input were Arabic written in Latin script, then `sourceText.text` would be the same Arabic text converted into Arabic script.
-
-Examples of JSON responses are provided in the [examples](#examples) section.
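-
-When `includeSentenceLength` is `true`, the values in `srcSentLen` and `transSentLen` can be used to slice the input and output strings into sentences. The following Python sketch is an illustration only (it assumes a local container on port 5000 and the `requests` package), not an official sample.
-
-```python
-# Sketch: request sentence boundaries and split the translation into sentences.
-# The localhost endpoint and the `requests` dependency are assumptions.
-import requests
-
-url = "http://localhost:5000/translate"
-params = {"api-version": "3.0", "from": "en", "to": "de", "includeSentenceLength": "true"}
-body = [{"Text": "Hello, what is your name? I would like to rent a car."}]
-
-result = requests.post(url, params=params, json=body).json()[0]
-translation = result["translations"][0]
-
-# transSentLen holds per-sentence character counts, so running offsets
-# recover each translated sentence.
-offset = 0
-for length in translation["sentLen"]["transSentLen"]:
-    print(translation["text"][offset:offset + length])
-    offset += length
-```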
-
-## Response headers
-
-| Headers | Description |
-| | |
-| X-RequestId | Value generated by the service to identify the request. It's used for troubleshooting purposes. |
-| X-MT-System | Specifies the system type that was used for translation for each 'to' language requested for translation. The value is a comma-separated list of strings. Each string indicates a type: </br></br>&FilledVerySmallSquare; Custom - Request includes a custom system and at least one custom system was used during translation.</br>&FilledVerySmallSquare; Team - All other requests |
-
-## Response status codes
-
-If an error occurs, the request also returns a JSON error response. The error code is a 6-digit number that combines the 3-digit HTTP status code with a 3-digit number that further categorizes the error. Common error codes can be found on the [v3 Translator reference page](../reference/v3-0-reference.md#errors).
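-
-Because the combined code always places the HTTP status in the first three digits, a client can split it with simple integer arithmetic, as in this small sketch (an illustration, not an official sample).
-
-```python
-# Sketch: split a combined 6-digit error code into HTTP status and sub-code.
-def split_error_code(code: int):
-    http_status, sub_code = divmod(code, 1000)
-    return http_status, sub_code
-
-print(split_error_code(400036))  # (400, 36) -> an HTTP 400 family error
-```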
-
-## Examples
-
-### Translate a single input
-
-This example shows how to translate a single sentence from English to Simplified Chinese.
-
-```curl
-curl -X POST "http://localhost:{port}/translate?api-version=3.0&from=en&to=zh-Hans" -H "Ocp-Apim-Subscription-Key: <client-secret>" -H "Content-Type: application/json; charset=UTF-8" -d "[{'Text':'Hello, what is your name?'}]"
-```
-
-The response body is:
-
-```
-[
- {
- "translations":[
- {"text":"你好, 你叫什么名字?","to":"zh-Hans"}
- ]
- }
-]
-```
-
-The `translations` array includes one element, which provides the translation of the single piece of text in the input.
-
-### Translate multiple pieces of text
-
-Translating multiple strings at once is simply a matter of specifying an array of strings in the request body.
-
-```curl
-curl -X POST "http://localhost:{port}/translate?api-version=3.0&from=en&to=zh-Hans" -H "Ocp-Apim-Subscription-Key: <client-secret>" -H "Content-Type: application/json; charset=UTF-8" -d "[{'Text':'Hello, what is your name?'}, {'Text':'I am fine, thank you.'}]"
-```
-
-The response contains the translation of all pieces of text in the exact same order as in the request.
-The response body is:
-
-```
-[
- {
- "translations":[
- {"text":"你好, 你叫什么名字?","to":"zh-Hans"}
- ]
- },
- {
- "translations":[
- {"text":"我很好,谢谢你。","to":"zh-Hans"}
- ]
- }
-]
-```
-
-### Translate to multiple languages
-
-This example shows how to translate the same input to several languages in one request.
-
-```curl
-curl -X POST "http://localhost:{port}/translate?api-version=3.0&from=en&to=zh-Hans&to=de" -H "Ocp-Apim-Subscription-Key: <client-secret>" -H "Content-Type: application/json; charset=UTF-8" -d "[{'Text':'Hello, what is your name?'}]"
-```
-
-The response body is:
-
-```
-[
- {
- "translations":[
- {"text":"你好, 你叫什么名字?","to":"zh-Hans"},
- {"text":"Hallo, was ist dein Name?","to":"de"}
- ]
- }
-]
-```
-
-### Translate content with markup and decide what's translated
-
-It's common to translate content that includes markup such as content from an HTML page or content from an XML document. Include query parameter `textType=html` when translating content with tags. In addition, it's sometimes useful to exclude specific content from translation. You can use the attribute `class=notranslate` to specify content that should remain in its original language. In the following example, the content inside the first `div` element won't be translated, while the content in the second `div` element will be translated.
-
-```
-<div class="notranslate">This will not be translated.</div>
-<div>This will be translated. </div>
-```
-
-Here's a sample request to illustrate.
-
-```curl
-curl -X POST "http://localhost:{port}/translate?api-version=3.0&from=en&to=zh-Hans&textType=html" -H "Ocp-Apim-Subscription-Key: <client-secret>" -H "Content-Type: application/json; charset=UTF-8" -d "[{'Text':'<div class=\"notranslate\">This will not be translated.</div><div>This will be translated.</div>'}]"
-```
-
-The response is:
-
-```
-[
- {
- "translations":[
- {"text":"<div class=\"notranslate\">This will not be translated.</div><div>这将被翻译。</div>","to":"zh-Hans"}
- ]
- }
-]
-```
-
-### Translate with dynamic dictionary
-
-If you already know the translation you want to apply to a word or a phrase, you can supply it as markup within the request. The dynamic dictionary is only safe for proper nouns such as personal names and product names.
-
-The markup to supply uses the following syntax.
-
-```
-<mstrans:dictionary translation="translation of phrase">phrase</mstrans:dictionary>
-```
-
-For example, consider the English sentence "The word wordomatic is a dictionary entry." To preserve the word _wordomatic_ in the translation, send the request:
-
-```
-curl -X POST "http://localhost:{port}/translate?api-version=3.0&from=en&to=de" -H "Ocp-Apim-Subscription-Key: <client-secret>" -H "Content-Type: application/json; charset=UTF-8" -d "[{'Text':'The word <mstrans:dictionary translation=\"wordomatic\">word or phrase</mstrans:dictionary> is a dictionary entry.'}]"
-```
-
-The result is:
-
-```
-[
- {
- "translations":[
- {"text":"Das Wort \"wordomatic\" ist ein W├╢rterbucheintrag.","to":"de"}
- ]
- }
-]
-```
-
-This feature works the same way with `textType=text` or with `textType=html`. Use the feature sparingly. The appropriate and far better way to customize translation is by using Custom Translator. Custom Translator makes full use of context and statistical probabilities. If you've created training data that shows your word or phrase in context, you'll get much better results. [Learn more about Custom Translator](../customization.md).
-
-## Request limits
-
-Each translate request is limited to 10,000 characters across all the target languages you're translating to. For example, sending a translate request of 3,000 characters to translate to three different languages results in a request size of 3,000 x 3 = 9,000 characters, which satisfies the request limit. You're charged per character, not by the number of requests, so it's recommended to send shorter requests.
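-
-The following sketch (an illustration only) shows the arithmetic: the effective request size is the character count multiplied by the number of target languages, and it must stay within the 10,000-character limit.
-
-```python
-# Sketch of the request-size arithmetic described above.
-def effective_request_size(texts, target_languages):
-    characters = sum(len(text) for text in texts)
-    return characters * len(target_languages)
-
-size = effective_request_size(["x" * 3000], ["de", "it", "fr"])
-print(size, size <= 10_000)  # 9000 True
-```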
-
-The following table lists array element and character limits for the Translator **translation** operation.
-
-| Operation | Maximum size of array element | Maximum number of array elements | Maximum request size (characters) |
-|:-|:-|:-|:-|
-| translate | 10,000 | 100 | 10,000 |
cognitive-services Translator Disconnected Containers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Translator/containers/translator-disconnected-containers.md
- Title: Use Translator Docker containers in disconnected environments-
-description: Learn how to run Azure Cognitive Services Translator containers in disconnected environments.
----- Previously updated : 06/27/2023---
-<!-- markdownlint-disable MD036 -->
-<!-- markdownlint-disable MD001 -->
-
-# Use Translator containers in disconnected environments
-
- Azure Cognitive Services Translator containers allow you to use Translator Service APIs with the benefits of containerization. Disconnected containers are offered through commitment tier pricing at a discounted rate compared to pay-as-you-go pricing. With commitment tier pricing, you can commit to using Translator Service features for a fixed fee, at a predictable total cost, based on the needs of your workload.
-
-## Get started
-
-Before attempting to run a Docker container in an offline environment, make sure you're familiar with the following requirements to successfully download and use the container:
-
-* Host computer requirements and recommendations.
-* The Docker `pull` command to download the container.
-* How to validate that a container is running.
-* How to send queries to the container's endpoint, once it's running.
-
-## Request access to use containers in disconnected environments
-
-Complete and submit the [request form](https://aka.ms/csdisconnectedcontainers) to request access to use the containers in environments disconnected from the internet.
--
-Access is limited to customers that meet the following requirements:
-
-* Your organization should be identified as a strategic customer or partner with Microsoft.
-* Disconnected containers are expected to run fully offline, so your use cases must meet at least one of these or similar requirements:
-  * Environment or device(s) with zero connectivity to the internet.
-  * Remote location that occasionally has internet access.
-  * Organization under strict regulations that prohibit sending any kind of data back to the cloud.
-* Application completed as instructed. Pay close attention to the guidance provided throughout the application to ensure you provide all the necessary information required for approval.
-
-## Create a new resource and purchase a commitment plan
-
-1. Create a [Translator resource](https://portal.azure.com/#create/Microsoft.CognitiveServicesTextTranslation) in the Azure portal.
-
-1. Enter the applicable information to create your resource. Be sure to select **Commitment tier disconnected containers** as your pricing tier.
-
- > [!NOTE]
- >
- > * You will only see the option to purchase a commitment tier if you have been approved by Microsoft.
-
- :::image type="content" source="../media/create-resource-offline-container.png" alt-text="A screenshot showing resource creation on the Azure portal.":::
-
-1. Select **Review + Create** at the bottom of the page. Review the information, and select **Create**.
-
-## Gather required parameters
-
-There are three required parameters for all Cognitive Services' containers:
-
-* The end-user license agreement (EULA) must be present with a value of *accept*.
-* The endpoint URL for your resource from the Azure portal.
-* The API key for your resource from the Azure portal.
-
-Both the endpoint URL and API key are needed when you first run the container to configure it for disconnected usage. You can find the key and endpoint on the **Key and endpoint** page for your resource in the Azure portal:
-
- :::image type="content" source="../media/quickstarts/keys-and-endpoint-portal.png" alt-text="Screenshot of Azure portal keys and endpoint page.":::
-
-> [!IMPORTANT]
-> You will only use your key and endpoint to configure the container to run in a disconnected environment. After you configure the container, you won't need the key and endpoint values to send API requests. Store them securely, for example, using Azure Key Vault. Only one key is necessary for this process.
-
-## Download a Docker container with `docker pull`
-
-Download the Docker container that has been approved to run in a disconnected environment. For example:
-
-|Docker pull command | Value |Format|
-|-|-|-|
-|&bullet; **`docker pull [image]`**</br>&bullet; **`docker pull [image]:latest`**|The latest container image.|&bullet; mcr.microsoft.com/azure-cognitive-services/translator/text-translation</br> </br>&bullet; mcr.microsoft.com/azure-cognitive-services/translator/text-translation:latest |
-|&bullet; **`docker pull [image]:[version]`** | A specific container image |mcr.microsoft.com/azure-cognitive-services/translator/text-translation:1.0.019410001-amd64 |
-
- **Example Docker pull command**
-
-```docker
-docker pull mcr.microsoft.com/azure-cognitive-services/translator/text-translation:latest
-```
-
-## Configure the container to run in a disconnected environment
-
-Now that you've downloaded your container, you need to execute the `docker run` command with the following parameters:
-
-* **`DownloadLicense=True`**. This parameter downloads a license file that enables your Docker container to run when it isn't connected to the internet. It also contains an expiration date, after which the license file is invalid to run the container. You can only use the license file in the corresponding approved container.
-* **`Languages={language list}`**. You must include this parameter to download model files for the [languages](../language-support.md) you want to translate.
-
-> [!IMPORTANT]
-> The `docker run` command will generate a template that you can use to run the container. The template contains parameters you'll need for the downloaded models and configuration file. Make sure you save this template.
-
-The following example shows the formatting for the `docker run` command with placeholder values. Replace these placeholder values with your own values.
-
-| Placeholder | Value | Format|
-|-|-||
-| `[image]` | The container image you want to use. | `mcr.microsoft.com/azure-cognitive-services/translator/text-translation` |
-| `{LICENSE_MOUNT}` | The path where the license is downloaded, and mounted. | `/host/license:/path/to/license/directory` |
- | `{MODEL_MOUNT_PATH}`| The path where the machine translation models are downloaded, and mounted. Your directory structure must be formatted as **/usr/local/models** | `/host/translator/models:/usr/local/models`|
-| `{ENDPOINT_URI}` | The endpoint for authenticating your service request. You can find it on your resource's **Key and endpoint** page, in the Azure portal. | `https://<your-custom-subdomain>.cognitiveservices.azure.com` |
-| `{API_KEY}` | The key for your Text Translation resource. You can find it on your resource's **Key and endpoint** page, in the Azure portal. |`{string}`|
-| `{LANGUAGES_LIST}` | List of language codes separated by commas. It's mandatory to have English (en) language as part of the list.| `en`, `fr`, `it`, `zu`, `uk` |
-| `{CONTAINER_LICENSE_DIRECTORY}` | Location of the license folder on the container's local filesystem. | `/path/to/license/directory` |
-
- **Example `docker run` command**
-
-```docker
-
-docker run --rm -it -p 5000:5000 \
--v {MODEL_MOUNT_PATH} \
--v {LICENSE_MOUNT_PATH} \
--e Mounts:License={CONTAINER_LICENSE_DIRECTORY} \
--e DownloadLicense=true \
--e eula=accept \
--e billing={ENDPOINT_URI} \
--e apikey={API_KEY} \
--e Languages={LANGUAGES_LIST} \
-[image]
-```
-
-### Translator translation models and container configuration
-
-After you've [configured the container](#configure-the-container-to-run-in-a-disconnected-environment), the values for the downloaded translation models and container configuration will be generated and displayed in the container output:
-
-```bash
- -e MODELS=/usr/local/models/model1/,/usr/local/models/model2/
- -e TRANSLATORSYSTEMCONFIG=/usr/local/models/Config/5a72fa7c-394b-45db-8c06-ecdfc98c0832
-```
-
-## Run the container in a disconnected environment
-
-Once the license file has been downloaded, you can run the container in a disconnected environment with your license, appropriate memory, and suitable CPU allocations. The following example shows the formatting of the `docker run` command with placeholder values. Replace these placeholder values with your own values.
-
-Whenever the container is run, the license file must be mounted to the container and the location of the license folder on the container's local filesystem must be specified with `Mounts:License=`. In addition, an output mount must be specified so that billing usage records can be written.
-
-| Placeholder | Value | Format|
-|-|-|-|
-| `[image]`| The container image you want to use. | `mcr.microsoft.com/azure-cognitive-services/translator/text-translation` |
-| `{MEMORY_SIZE}` | The appropriate size of memory to allocate for your container. | `16g` |
-| `{NUMBER_CPUS}` | The appropriate number of CPUs to allocate for your container. | `4` |
-| `{LICENSE_MOUNT}` | The path where the license is located and mounted. | `/host/translator/license:/path/to/license/directory` |
-| `{MODEL_MOUNT_PATH}`| The path where the machine translation models are downloaded, and mounted. Your directory structure must be formatted as **/usr/local/models** | `/host/translator/models:/usr/local/models`|
-| `{MODELS_DIRECTORY_LIST}`|List of comma separated directories each having a machine translation model. | `/usr/local/models/enu_esn_generalnn_2022240501,/usr/local/models/esn_enu_generalnn_2022240501` |
-| `{OUTPUT_PATH}` | The output path for logging [usage records](#usage-records). | `/host/output:/path/to/output/directory` |
-| `{CONTAINER_LICENSE_DIRECTORY}` | Location of the license folder on the container's local filesystem. | `/path/to/license/directory` |
-| `{CONTAINER_OUTPUT_DIRECTORY}` | Location of the output folder on the container's local filesystem. | `/path/to/output/directory` |
-| `{TRANSLATOR_CONFIG_JSON}`| Translator system configuration file used by the container internally.| `/usr/local/models/Config/5a72fa7c-394b-45db-8c06-ecdfc98c0832` |
-
- **Example `docker run` command**
-
-```docker
-
-docker run --rm -it -p 5000:5000 --memory {MEMORY_SIZE} --cpus {NUMBER_CPUS} \
--v {MODEL_MOUNT_PATH} \
--v {LICENSE_MOUNT_PATH} \
--v {OUTPUT_MOUNT_PATH} \
--e Mounts:License={CONTAINER_LICENSE_DIRECTORY} \
--e Mounts:Output={CONTAINER_OUTPUT_DIRECTORY} \
--e MODELS={MODELS_DIRECTORY_LIST} \
--e TRANSLATORSYSTEMCONFIG={TRANSLATOR_CONFIG_JSON} \
--e eula=accept \
-[image]
-```
-
-## Other parameters and commands
-
-Here are a few more parameters and commands you may need to run the container:
-
-#### Usage records
-
-When operating Docker containers in a disconnected environment, the container will write usage records to a volume where they're collected over time. You can also call a REST API endpoint to generate a report about service usage.
-
-#### Arguments for storing logs
-
-When run in a disconnected environment, an output mount must be available to the container to store usage logs. For example, you would include `-v /host/output:{OUTPUT_PATH}` and `Mounts:Output={OUTPUT_PATH}` in the following example, replacing `{OUTPUT_PATH}` with the path where the logs are stored:
-
- **Example `docker run` command**
-
-```docker
-docker run -v /host/output:{OUTPUT_PATH} ... <image> ... Mounts:Output={OUTPUT_PATH}
-```
-
-#### Get records using the container endpoints
-
-The container provides two endpoints for returning records regarding its usage.
-
-#### Get all records
-
-The following endpoint provides a report summarizing all of the usage collected in the mounted billing record directory.
-
-```HTTP
-https://<service>/records/usage-logs/
-```
-
- **Example HTTPS endpoint**
-
- `http://localhost:5000/records/usage-logs`
-
-The usage-logs endpoint returns a JSON response similar to the following example:
-
-```json
-{
-  "apiType": "string",
-  "serviceName": "string",
-  "meters": [
-    {
-      "name": "string",
-      "quantity": 256345435
-    }
-  ]
-}
-```
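-
-If you want to process the report programmatically, a minimal Python sketch such as the following can total the reported meters. It assumes the container is reachable at `http://localhost:5000` and uses the `requests` package; it isn't an official sample.
-
-```python
-# Sketch: read the usage report from a local disconnected container and
-# total the meter quantities. Host URL and `requests` are assumptions.
-import requests
-
-report = requests.get("http://localhost:5000/records/usage-logs").json()
-total = sum(meter["quantity"] for meter in report.get("meters", []))
-print(f"{report.get('serviceName')}: {total} units recorded")
-```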
-
-#### Get records for a specific month
-
-The following endpoint provides a report summarizing usage over a specific month and year:
-
-```HTTP
-https://<service>/records/usage-logs/{MONTH}/{YEAR}
-```
-
-This usage-logs endpoint returns a JSON response similar to the following example:
-
-```json
-{
- "apiType": "string",
- "serviceName": "string",
- "meters": [
- {
- "name": "string",
- "quantity": 56097
- }
- ]
-}
-```
-
-### Purchase a different commitment plan for disconnected containers
-
-Commitment plans for disconnected containers have a calendar year commitment period. When you purchase a plan, you're charged the full price immediately. During the commitment period, you can't change your commitment plan; however, you can purchase more units at a pro-rated price for the remaining days in the year. You have until midnight (UTC) on the last day of your commitment to end a commitment plan.
-
-You can choose a different commitment plan in the **Commitment tier pricing** settings of your resource under the **Resource Management** section.
-
-### End a commitment plan
-
- If you decide that you don't want to continue purchasing a commitment plan, you can set your resource's autorenewal to **Do not auto-renew**. Your commitment plan expires on the displayed commitment end date. After this date, you won't be charged for the commitment plan. You're still able to continue using the Azure resource to make API calls, charged at pay-as-you-go pricing. You have until midnight (UTC) on the last day of the year to end a commitment plan for disconnected containers. If you do so, you avoid charges for the following year.
-
-## Troubleshooting
-
-Run the container with an output mount and logging enabled. These settings enable the container to generate log files that are helpful for troubleshooting issues that occur while starting or running the container.
-
-> [!TIP]
-> For more troubleshooting information and guidance, see [Disconnected containers Frequently asked questions (FAQ)](../../containers/disconnected-container-faq.yml).
-
-That's it! You've learned how to create and run disconnected containers for Azure Translator Service.
-
-## Next steps
-
-> [!div class="nextstepaction"]
-> [Request parameters for Translator text containers](translator-container-supported-parameters.md)
cognitive-services Translator How To Install Container https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Translator/containers/translator-how-to-install-container.md
- Title: Install and run Docker containers for Translator API-
-description: Use the Docker container for Translator API to translate text.
------ Previously updated : 01/18/2023-
-recommendations: false
-keywords: on-premises, Docker, container, identify
--
-# Install and run Translator containers
-
-Containers enable you to run several features of the Translator service in your own environment. Containers are great for specific security and data governance requirements. In this article you'll learn how to download, install, and run a Translator container.
-
-The Translator container enables you to build a translator application architecture that is optimized for both robust cloud capabilities and edge locality.
-
-See the list of [languages supported](../language-support.md) when using Translator containers.
-
-> [!IMPORTANT]
->
-> * To use the Translator container, you must submit an online request and have it approved. For more information, _see_ [Request approval to run container](#request-approval-to-run-container) below.
-> * The Translator container supports limited features compared to the cloud offerings. For more information, _see_ [**Container translate methods**](translator-container-supported-parameters.md).
-
-<!-- markdownlint-disable MD033 -->
-
-## Prerequisites
-
-To get started, you'll need an active [**Azure account**](https://azure.microsoft.com/free/cognitive-services/). If you don't have one, you can [**create a free account**](https://azure.microsoft.com/free/).
-
-You'll also need to have:
-
-| Required | Purpose |
-|--|--|
-| Familiarity with Docker | <ul><li>You should have a basic understanding of Docker concepts, like registries, repositories, containers, and container images, as well as knowledge of basic `docker` [terminology and commands](/dotnet/architecture/microservices/container-docker-introduction/docker-terminology).</li></ul> |
-| Docker Engine | <ul><li>You need the Docker Engine installed on a [host computer](#host-computer). Docker provides packages that configure the Docker environment on [macOS](https://docs.docker.com/docker-for-mac/), [Windows](https://docs.docker.com/docker-for-windows/), and [Linux](https://docs.docker.com/engine/installation/#supported-platforms). For a primer on Docker and container basics, see the [Docker overview](https://docs.docker.com/engine/docker-overview/).</li><li> Docker must be configured to allow the containers to connect with and send billing data to Azure. </li><li> On **Windows**, Docker must also be configured to support **Linux** containers.</li></ul> |
-| Translator resource | <ul><li>An Azure [Translator](https://portal.azure.com/#create/Microsoft.CognitiveServicesTextTranslation) resource with a region other than 'global', an associated API key, and an endpoint URI. Both values are required to start the container and can be found on the resource overview page.</li></ul>|
-
-|Optional|Purpose|
-||-|
-|Azure CLI (command-line interface) |<ul><li> The [Azure CLI](/cli/azure/install-azure-cli) enables you to use a set of online commands to create and manage Azure resources. It's available to install in Windows, macOS, and Linux environments and can be run in a Docker container and Azure Cloud Shell.</li></ul> |
--
-## Required elements
-
-All Cognitive Services containers require three primary elements:
-
-* **EULA accept setting**. An end-user license agreement (EULA) set with a value of `Eula=accept`.
-
-* **API key** and **Endpoint URL**. The API key is used to start the container. You can retrieve the API key and Endpoint URL values by navigating to the Translator resource **Keys and Endpoint** page and selecting the `Copy to clipboard` <span class="docon docon-edit-copy x-hidden-focus"></span> icon.
-
-> [!IMPORTANT]
->
-> * Keys are used to access your Cognitive Service API. Do not share your keys. Store them securely, for example, using Azure Key Vault. We also recommend regenerating these keys regularly. Only one key is necessary to make an API call. When regenerating the first key, you can use the second key for continued access to the service.
-
-## Host computer
--
-## Container requirements and recommendations
-
-The following table describes the minimum and recommended CPU cores and memory to allocate for the Translator container.
-
-| Container | Minimum |Recommended | Language Pair |
-|--|||-|
-| Translator |`2` cores, 2-GB memory |`4` cores, 8-GB memory | 4 |
-
-* Each core must be at least 2.6 gigahertz (GHz) or faster.
-
-* For every language pair, it's recommended to have 2 GB of memory. By default, the Translator offline container has four language pairs.
-
-* The core and memory correspond to the `--cpus` and `--memory` settings, which are used as part of the `docker run` command.
-
-> [!NOTE]
->
-> * CPU core and memory correspond to the `--cpus` and `--memory` settings, which are used as part of the docker run command.
->
-> * The minimum and recommended specifications are based on Docker limits, not host machine resources.
-
-## Request approval to run container
-
-Complete and submit the [**Azure Cognitive Services
-Application for Gated Services**](https://aka.ms/csgate-translator) to request access to the container.
---
-## Translator container image
-
-The Translator container image can be found on the `mcr.microsoft.com` container registry syndicate. It resides within the `azure-cognitive-services/translator` repository and is named `text-translation`. The fully qualified container image name is `mcr.microsoft.com/azure-cognitive-services/translator/text-translation:latest`.
-
-To use the latest version of the container, you can use the `latest` tag. You can find a full list of [tags on the MCR](https://mcr.microsoft.com/product/azure-cognitive-services/translator/text-translation/tags).
-
-## Get container images with **docker commands**
-
-> [!IMPORTANT]
->
-> * The docker commands in the following sections use the back slash, `\`, as a line continuation character. Replace or remove this based on your host operating system's requirements.
-> * The `EULA`, `Billing`, and `ApiKey` options must be specified to run the container; otherwise, the container won't start.
-
-Use the [docker run](https://docs.docker.com/engine/reference/commandline/run/) command to download a container image from Microsoft Container registry and run it.
-
-```Docker
-docker run --rm -it -p 5000:5000 --memory 12g --cpus 4 \
--v /mnt/d/TranslatorContainer:/usr/local/models \
--e apikey={API_KEY} \
--e eula=accept \
--e billing={ENDPOINT_URI} \
--e Languages=en,fr,es,ar,ru \
-mcr.microsoft.com/azure-cognitive-services/translator/text-translation:latest
-```
-
-The above command:
-
-* Downloads and runs a Translator container from the container image.
-* Allocates 12 gigabytes (GB) of memory and four CPU cores.
-* Exposes TCP port 5000 and allocates a pseudo-TTY for the container.
-* Accepts the end-user license agreement (EULA).
-* Configures the billing endpoint.
-* Downloads translation models for English, French, Spanish, Arabic, and Russian.
-* Automatically removes the container after it exits. The container image is still available on the host computer.
-
-### Run multiple containers on the same host
-
-If you intend to run multiple containers with exposed ports, make sure to run each container with a different exposed port. For example, run the first container on port 5000 and the second container on port 5001.
-
-You can have this container and a different Azure Cognitive Services container running on the HOST together. You also can have multiple containers of the same Cognitive Services container running.
-
-## Query the container's Translator endpoint
-
- The container provides a REST-based Translator endpoint API. Here's an example request:
-
-```curl
-curl -X POST "http://localhost:5000/translate?api-version=3.0&from=en&to=zh-HANS"
- -H "Content-Type: application/json" -d "[{'Text':'Hello, what is your name?'}]"
-```
-
-> [!NOTE]
-> If you attempt the cURL POST request before the container is ready, you'll end up getting a *Service is temporarily unavailable* response. Wait until the container is ready, then try again.
-
-## Stop the container
--
-## Troubleshoot
-
-### Validate that a container is running
-
-There are several ways to validate that the container is running:
-
-* The container provides a homepage at `/` as a visual validation that the container is running.
-
-* You can open your favorite web browser and navigate to the external IP address and exposed port of the container in question. Use the various request URLs below to validate the container is running. The example request URLs listed below are `http://localhost:5000`, but your specific container may vary. Keep in mind that you're navigating to your container's **External IP address** and exposed port.
-
-| Request URL | Purpose |
-|--|--|
-| `http://localhost:5000/` | The container provides a home page. |
-| `http://localhost:5000/ready` | Requested with GET. Provides a verification that the container is ready to accept a query against the model. This request can be used for Kubernetes [liveness and readiness probes](https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-probes/). |
-| `http://localhost:5000/status` | Requested with GET. Verifies if the api-key used to start the container is valid without causing an endpoint query. This request can be used for Kubernetes [liveness and readiness probes](https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-probes/). |
-| `http://localhost:5000/swagger` | The container provides a full set of documentation for the endpoints and a **Try it out** feature. With this feature, you can enter your settings into a web-based HTML form and make the query without having to write any code. After the query returns, an example CURL command is provided to demonstrate the HTTP headers and body format that's required. |
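-
-If you script your deployment, you can poll the `/ready` endpoint before sending translation requests, which avoids the *Service is temporarily unavailable* response mentioned earlier. The following Python sketch is an illustration only; the host URL, timeout, and `requests` dependency are assumptions.
-
-```python
-# Sketch: wait until the container's /ready endpoint reports success.
-import time
-import requests
-
-def wait_until_ready(host="http://localhost:5000", timeout=300):
-    deadline = time.time() + timeout
-    while time.time() < deadline:
-        try:
-            if requests.get(f"{host}/ready", timeout=5).status_code == 200:
-                return True
-        except requests.ConnectionError:
-            pass  # The container isn't accepting connections yet.
-        time.sleep(5)
-    return False
-
-print("Container ready:", wait_until_ready())
-```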
---
-## Text translation code samples
-
-### Translate text with swagger
-
-#### English &leftrightarrow; German
-
-Navigate to the swagger page: `http://localhost:5000/swagger/index.html`
-
-1. Select **POST /translate**
-1. Select **Try it out**
-1. Enter the **From** parameter as `en`
-1. Enter the **To** parameter as `de`
-1. Enter the **api-version** parameter as `3.0`
-1. Under **texts**, replace `string` with the following JSON
-
-```json
- [
- {
- "text": "hello, how are you"
- }
- ]
-```
-
-Select **Execute**. The resulting translations are output in the **Response Body**. You should expect something similar to the following response:
-
-```json
-"translations": [
- {
- "text": "hallo, wie geht es dir",
- "to": "de"
- }
- ]
-```
-
-### Translate text with Python
-
-```python
-import requests, json
-
-url = 'http://localhost:5000/translate?api-version=3.0&from=en&to=fr'
-headers = { 'Content-Type': 'application/json' }
-body = [{ 'text': 'Hello, how are you' }]
-
-request = requests.post(url, headers=headers, json=body)
-response = request.json()
-
-print(json.dumps(
- response,
- sort_keys=True,
- indent=4,
- ensure_ascii=False,
- separators=(',', ': ')))
-```
-
-### Translate text with C#/.NET console app
-
-Launch Visual Studio, and create a new console application. Edit the `*.csproj` file to add the `<LangVersion>7.1</LangVersion>` node, which specifies C# 7.1. Add the [Newtonsoft.Json](https://www.nuget.org/packages/Newtonsoft.Json/) NuGet package, version 11.0.2.
-
-In `Program.cs`, replace all the existing code with the following script:
-
-```csharp
-using Newtonsoft.Json;
-using System;
-using System.Net.Http;
-using System.Text;
-using System.Threading.Tasks;
-
-namespace TranslateContainer
-{
- class Program
- {
- const string ApiHostEndpoint = "http://localhost:5000";
- const string TranslateApi = "/translate?api-version=3.0&from=en&to=de";
-
- static async Task Main(string[] args)
- {
- var textToTranslate = "Sunny day in Seattle";
- var result = await TranslateTextAsync(textToTranslate);
-
- Console.WriteLine(result);
- Console.ReadLine();
- }
-
- static async Task<string> TranslateTextAsync(string textToTranslate)
- {
- var body = new object[] { new { Text = textToTranslate } };
- var requestBody = JsonConvert.SerializeObject(body);
-
- var client = new HttpClient();
- using (var request =
- new HttpRequestMessage
- {
- Method = HttpMethod.Post,
- RequestUri = new Uri($"{ApiHostEndpoint}{TranslateApi}"),
- Content = new StringContent(requestBody, Encoding.UTF8, "application/json")
- })
- {
- // Send the request and await a response.
- var response = await client.SendAsync(request);
-
- return await response.Content.ReadAsStringAsync();
- }
- }
- }
-}
-```
-
-## Summary
-
-In this article, you learned concepts and workflows for downloading, installing, and running Translator container. Now you know:
-
-* Translator provides Linux containers for Docker.
-* Container images are downloaded from the container registry and run in Docker.
-* You can use the REST API to call the 'translate' operation in the Translator container by specifying the container's host URI.
-
-## Next steps
-
-> [!div class="nextstepaction"]
-> [Learn more about Azure Cognitive Services containers](../../containers/index.yml?context=%2fazure%2fcognitive-services%2ftranslator%2fcontext%2fcontext)
cognitive-services Create Translator Resource https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Translator/create-translator-resource.md
- Title: Create a Translator resource-
-description: This article shows you how to create an Azure Cognitive Services Translator resource and get a key and endpoint URL.
------- Previously updated : 03/17/2023--
-# Create a Translator resource
-
-In this article, you learn how to create a Translator resource in the Azure portal. [Azure Translator](translator-overview.md) is a cloud-based machine translation service that is part of the [Azure Cognitive Services](../what-are-cognitive-services.md) family of REST APIs. Azure resources are instances of services that you create. All API requests to Azure services require an **endpoint** URL and a read-only **key** for authenticating access.
-
-## Prerequisites
-
-To get started, you need an active [**Azure account**](https://azure.microsoft.com/free/cognitive-services/). If you don't have one, you can [**create a free 12-month subscription**](https://azure.microsoft.com/free/).
-
-## Create your resource
-
-The Translator service can be accessed through two different resource types:
-
-* [**Single-service**](https://portal.azure.com/#create/Microsoft.CognitiveServicesTextTranslation) resource types enable access to a single service API key and endpoint.
-
-* [**Multi-service**](https://portal.azure.com/#create/Microsoft.CognitiveServicesAllInOne) resource types enable access to multiple Cognitive Services using a single API key and endpoint.
-
-## Complete your project and instance details
-
-1. **Subscription**. Select one of your available Azure subscriptions.
-
-1. **Resource Group**. You can create a new resource group or add your resource to a pre-existing resource group that shares the same lifecycle, permissions, and policies.
-
-1. **Resource Region**. Choose **Global** unless your business or application requires a specific region. If you're planning on using the Document Translation feature with [managed identity authorization](document-translation/how-to-guides/create-use-managed-identities.md), choose a geographic region such as **East US**.
-
-1. **Name**. Enter the name you have chosen for your resource. The name you choose must be unique within Azure.
-
- > [!NOTE]
- > If you are using a Translator feature that requires a custom domain endpoint, such as Document Translation, the value that you enter in the Name field will be the custom domain name parameter for the endpoint.
-
-1. **Pricing tier**. Select a [pricing tier](https://azure.microsoft.com/pricing/details/cognitive-services/translator) that meets your needs:
-
- * Each subscription has a free tier.
- * The free tier has the same features and functionality as the paid plans and doesn't expire.
- * Only one free tier is available per subscription.
- * Document Translation is supported in paid tiers. The Language Studio only supports the S1 or D3 instance tiers. We suggest you select the Standard S1 instance tier to try Document Translation.
-
-1. If you've created a multi-service resource, you need to confirm more usage details via the check boxes.
-
-1. Select **Review + Create**.
-
-1. Review the service terms and select **Create** to deploy your resource.
-
-1. After your resource has successfully deployed, select **Go to resource**.
-
-### Authentication keys and endpoint URL
-
-All Cognitive Services API requests require an endpoint URL and a read-only key for authentication.
-
-* **Authentication keys**. Your key is a unique string that is passed on every request to the Translation service. You can pass your key through a query-string parameter or by specifying it in the HTTP request header.
-
-* **Endpoint URL**. Use the Global endpoint in your API request unless you need a specific Azure region or custom endpoint. *See* [Base URLs](reference/v3-0-reference.md#base-urls). The Global endpoint URL is `api.cognitive.microsofttranslator.com`.
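-
-As a quick illustration of the header-based approach, the following Python sketch passes the key in the `Ocp-Apim-Subscription-Key` header against the Global endpoint. It's a hedged example, not an official sample: replace `<your-key>`, and note that regional or multi-service resources may also require an `Ocp-Apim-Subscription-Region` header.
-
-```python
-# Sketch: authenticate with the key header against the Global endpoint.
-# <your-key> is a placeholder; a region header may be needed for some resources.
-import requests
-
-endpoint = "https://api.cognitive.microsofttranslator.com/translate"
-params = {"api-version": "3.0", "to": "fr"}
-headers = {"Ocp-Apim-Subscription-Key": "<your-key>"}
-body = [{"Text": "Hello, world"}]
-
-print(requests.post(endpoint, params=params, headers=headers, json=body).json())
-```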
-
-## Get your authentication keys and endpoint
-
-1. After your new resource deploys, select **Go to resource** or navigate directly to your resource page.
-1. In the left rail, under *Resource Management*, select **Keys and Endpoint**.
-1. Copy and paste your keys and endpoint URL in a convenient location, such as *Microsoft Notepad*.
--
-## How to delete a resource or resource group
-
-> [!WARNING]
->
-> Deleting a resource group also deletes all resources contained in the group.
-
-To remove a Cognitive Services or Translator resource, you can **delete the resource** or **delete the resource group**.
-
-To delete the resource:
-
-1. Navigate to your Resource Group in the Azure portal.
-1. Select the resources to be deleted by selecting the adjacent check box.
-1. Select **Delete** from the top menu near the right edge.
-1. Type *yes* in the **Deleted Resources** dialog box.
-1. Select **Delete**.
-
-To delete the resource group:
-
-1. Navigate to your Resource Group in the Azure portal.
-1. Select **Delete resource group** from the top menu bar near the left edge.
-1. Confirm the deletion request by entering the resource group name and selecting **Delete**.
-
-## How to get started with Translator
-
-In our quickstart, you learn how to use the Translator service with REST APIs.
-
-> [!div class="nextstepaction"]
-> [Get Started with Translator](quickstart-translator.md)
-
-## More resources
-
-* [Microsoft Translator code samples](https://github.com/MicrosoftTranslator). Multi-language Translator code samples are available on GitHub.
-* [Microsoft Translator Support Forum](https://www.aka.ms/TranslatorForum)
-* [Get Started with Azure (3-minute video)](https://azure.microsoft.com/get-started/?b=16.24)
cognitive-services Beginners Guide https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Translator/custom-translator/beginners-guide.md
- Title: Custom Translator for beginners-
-description: A user guide for understanding the end-to-end customized machine translation process.
----- Previously updated : 11/04/2022--
-# Custom Translator for beginners
-
- [Custom Translator](overview.md) enables you to build a translation system that reflects your business, industry, and domain-specific terminology and style. Training and deploying a custom system is easy and doesn't require any programming skills. The customized translation system seamlessly integrates into your existing applications, workflows, and websites and is available on Azure through the same cloud-based [Microsoft Text Translator API](../reference/v3-0-translate.md?tabs=curl) service that powers billions of translations every day.
-
-The platform enables users to build and publish custom translation systems to and from English. The Custom Translator supports more than 60 languages that map directly to the languages available for neural machine translation (NMT). For a complete list, *see* [Translator language support](../language-support.md).
-
-## Is a custom translation model the right choice for me?
-
-A well-trained custom translation model provides more accurate domain-specific translations because it relies on previously translated in-domain documents to learn preferred translations. Translator uses these terms and phrases in context to produce fluent translations in the target language while respecting context-dependent grammar.
-
-Training a full custom translation model requires a substantial amount of data. If you don't have at least 10,000 sentences of previously translated documents, you can't train a full-language translation model. However, you can either train a dictionary-only model or use the high-quality, out-of-the-box translations available with the Text Translator API.
--
-## What does training a custom translation model involve?
-
-Building a custom translation model requires:
-
-* Understanding your use-case.
-
-* Obtaining in-domain translated data (preferably human translated).
-
-* The ability to assess translation quality or target language translations.
-
-## How do I evaluate my use-case?
-
-Having clarity on your use-case and what success looks like is the first step towards sourcing proficient training data. Here are a few considerations:
-
-* What is your desired outcome and how will you measure it?
-
-* What is your business domain?
-
-* Do you have in-domain sentences of similar terminology and style?
-
-* Does your use-case involve multiple domains? If yes, should you build one translation system or multiple systems?
-
-* Do you have requirements impacting regional data residency at-rest and in-transit?
-
-* Are the target users in one or multiple regions?
-
-## How should I source my data?
-
-Finding in-domain quality data is often a challenging task that varies based on user classification. Here are some questions you can ask yourself as you evaluate what data may be available to you:
-
-* Enterprises often have a wealth of translation data that has accumulated over many years of using human translation. Does your company have previous translation data available that you can use?
-
-* Do you have a vast amount of monolingual data? Monolingual data is data in only one language. If so, can you get translations for this data?
-
-* Can you crawl online portals to collect source sentences and synthesize target sentences?
-
-## What should I use for training material?
-
-| Source | What it does | Rules to follow |
-||||
-| Bilingual training documents | Teaches the system your terminology and style. | **Be liberal**. Any in-domain human translation is better than machine translation. Add and remove documents as you go and try to improve the [BLEU score](concepts/bleu-score.md?WT.mc_id=aiml-43548-heboelma). |
-| Tuning documents | Trains the Neural Machine Translation parameters. | **Be strict**. Compose them to be optimally representative of what you are going to translate in the future. |
-| Test documents | Calculate the [BLEU score](concepts/bleu-score.md?WT.mc_id=aiml-43548-heboelma).| **Be strict**. Compose test documents to be optimally representative of what you plan to translate in the future. |
-| Phrase dictionary | Forces the given translation 100% of the time. | **Be restrictive**. A phrase dictionary is case-sensitive and any word or phrase listed is translated in the way you specify. In many cases, it's better to not use a phrase dictionary and let the system learn. |
-| Sentence dictionary | Forces the given translation 100% of the time. | **Be strict**. A sentence dictionary is case-insensitive and good for common in-domain short sentences. For a sentence dictionary match to occur, the entire submitted sentence must match the source dictionary entry. If only a portion of the sentence matches, the entry won't match. |
-
-## What is a BLEU score?
-
-BLEU (Bilingual Evaluation Understudy) is an algorithm for evaluating the precision or accuracy of text that has been machine translated from one language to another. Custom Translator uses the BLEU metric as one way of conveying translation accuracy.
-
-A BLEU score is a number between zero and 100. A score of zero indicates a low quality translation where nothing in the translation matched the reference. A score of 100 indicates a perfect translation that is identical to the reference. It's not necessary to attain a score of 100 - a BLEU score between 40 and 60 indicates a high-quality translation.
-
-[Read more](concepts/bleu-score.md?WT.mc_id=aiml-43548-heboelma)
-
-## What happens if I don't submit tuning or testing data?
-
-Tuning and test sentences are optimally representative of what you plan to translate in the future. If you don't submit any tuning or testing data, Custom Translator will automatically exclude sentences from your training documents to use as tuning and test data.
-
-| System-generated | Manual-selection |
-|||
-| Convenient. | Enables fine-tuning for your future needs.|
-| Good, if you know that your training data is representative of what you are planning to translate. | Provides more freedom to compose your training data.|
-| Easy to redo when you grow or shrink the domain. | Allows for more data and better domain coverage.|
-|Changes each training run.| Remains static over repeated training runs|
-
-## How is training material processed by Custom Translator?
-
-To prepare for training, documents undergo a series of processing and filtering steps. These steps are explained below. Knowledge of the filtering process may help with understanding the sentence count displayed as well as the steps you can take to prepare training documents for training with Custom Translator.
-
-* ### Sentence alignment
-
- If your document isn't in XLIFF, XLSX, TMX, or ALIGN format, Custom Translator aligns the sentences of your source and target documents to each other, sentence by sentence. Translator doesn't perform document alignment; it follows your naming convention for the documents to find a matching document in the other language. Within the source text, Custom Translator tries to find the corresponding sentence in the target language. It uses document markup like embedded HTML tags to help with the alignment.
-
- If you see a large discrepancy between the number of sentences in the source and target documents, your source document may not be parallel, or couldn't be aligned. The document pairs with a large difference (>10%) of sentences on each side warrant a second look to make sure they're indeed parallel.
-
-* ### Extracting tuning and testing data
-
- Tuning and testing data is optional. If you don't provide it, the system will remove an appropriate percentage from your training documents to use for tuning and testing. The removal happens dynamically as part of the training process. Since this step occurs as part of training, your uploaded documents aren't affected. You can see the final used sentence counts for each category of data (training, tuning, testing, and dictionary) on the Model details page after training has succeeded.
-
-* ### Length filter
-
- * Removes sentences with only one word on either side.
- * Removes sentences with more than 100 words on either side. Chinese, Japanese, Korean are exempt.
- * Removes sentences with fewer than three characters. Chinese, Japanese, Korean are exempt.
- * Removes sentences with more than 2000 characters for Chinese, Japanese, Korean.
- * Removes sentences with less than 1% alphanumeric characters.
- * Removes dictionary entries containing more than 50 words.
-
-* ### White space
-
- * Replaces any sequence of white-space characters including tabs and CR/LF sequences with a single space character.
- * Removes leading or trailing space in the sentence.
-
-* ### Sentence end punctuation
-
- * Replaces multiple sentence-end punctuation characters with a single instance.
-
-* ### Japanese character normalization
-
- * Converts full width letters and digits to half-width characters.
-
-* ### Unescaped XML tags
-
- Transforms unescaped tags into escaped tags:
-
- | Tag | Becomes |
- |||
- | \&lt; | \&amp;lt; |
- | \&gt; | \&amp;gt; |
- | \&amp; | \&amp;amp; |
-
-* ### Invalid characters
-
- Custom Translator removes sentences that contain Unicode character U+FFFD. The character U+FFFD indicates a failed encoding conversion.
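-
-A tiny Python sketch (illustration only) of this rule:
-
-```python
-# Drop sentences containing the U+FFFD replacement character.
-sentences = ["A clean sentence.", "A broken\ufffdsentence."]
-clean = [s for s in sentences if "\ufffd" not in s]
-print(clean)  # ['A clean sentence.']
-```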
-
-## What steps should I take before uploading data?
-
-* Remove sentences with invalid encoding.
-* Remove Unicode control characters.
-* If feasible, align sentences (source-to-target).
-* Remove source and target sentences that don't match the source and target languages.
-* When source and target sentences have mixed languages, ensure that untranslated words are intentional, for example, names of organizations and products.
-* Correct grammatical and typographical errors to prevent teaching these errors to your model.
-* Though our training process handles source and target lines containing multiple sentences, it's better to have one source sentence mapped to one target sentence.
-
-## How do I evaluate the results?
-
-After your model is successfully trained, you can view the model's BLEU score and baseline model BLEU score on the model details page. We use the same set of test data to generate both the model's BLEU score and the baseline BLEU score. This data will help you make an informed decision regarding which model would be better for your use-case.
-
-## Next steps
-
-> [!div class="nextstepaction"]
-> [Try our Quickstart](quickstart.md)
cognitive-services Bleu Score https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Translator/custom-translator/concepts/bleu-score.md
- Title: "What is a BLEU score? - Custom Translator"-
-description: BLEU is a measurement of the differences between machine translation and human-created reference translations of the same source sentence.
----- Previously updated : 11/04/2022--
-#Customer intent: As a Custom Translator user, I want to understand how BLEU score works so that I understand system test outcome better.
--
-# What is a BLEU score?
-
-[BLEU (Bilingual Evaluation Understudy)](https://en.wikipedia.org/wiki/BLEU) is a measurement of the difference between an automatic translation and human-created reference translations of the same source sentence.
-
-## Scoring process
-
-The BLEU algorithm compares consecutive phrases of the automatic translation
-with the consecutive phrases it finds in the reference translation, and counts
-the number of matches, in a weighted fashion. These matches are position
-independent. A higher match degree indicates a higher degree of similarity with
-the reference translation, and a higher score. Intelligibility and grammatical correctness aren't taken into account.
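-
-To make the weighted n-gram matching concrete, here's a simplified, self-contained Python sketch of the idea. It's an illustration only: production BLEU implementations add tokenization rules, smoothing, and corpus-level aggregation.
-
-```python
-# Simplified BLEU sketch: clipped n-gram precision, geometric mean, brevity penalty.
-import math
-from collections import Counter
-
-def ngrams(tokens, n):
-    return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))
-
-def bleu(candidate, reference, max_n=2):
-    cand, ref = candidate.split(), reference.split()
-    precisions = []
-    for n in range(1, max_n + 1):
-        cand_counts, ref_counts = ngrams(cand, n), ngrams(ref, n)
-        overlap = sum((cand_counts & ref_counts).values())  # position-independent, clipped matches
-        precisions.append(overlap / max(sum(cand_counts.values()), 1))
-    if min(precisions) == 0:
-        return 0.0
-    geo_mean = math.exp(sum(math.log(p) for p in precisions) / max_n)
-    brevity_penalty = min(1.0, math.exp(1 - len(ref) / len(cand)))
-    return 100 * brevity_penalty * geo_mean
-
-# Unigram + bigram precision only, to keep the toy example readable.
-print(round(bleu("the cat sat on the mat", "the cat is on the mat"), 1))  # ~70.7
-```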
-
-## How BLEU works?
-
-The BLEU score's strength is that it correlates well with human judgment. BLEU averages out
-individual sentence judgment errors over a test corpus, rather than attempting
-to devise the exact human judgment for every sentence.
-
-A more extensive discussion of BLEU scores is [here](https://youtu.be/-UqDljMymMg).
-
-BLEU results depend strongly on the breadth of your domain; consistency of
-test, training and tuning data; and how much data you have
-available for training. If your models have been trained on a narrow domain, and
-your training data is consistent with your test data, you can expect a high
-BLEU score.
-
->[!NOTE]
->A comparison between BLEU scores is only justifiable when BLEU results are compared with the same Test set, the same language pair, and the same MT engine. A BLEU score from a different test set is bound to be different.
-
-## Next steps
-
-> [!div class="nextstepaction"]
-> [BLEU score evaluation](../how-to/test-your-model.md)
cognitive-services Customization https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Translator/custom-translator/concepts/customization.md
- Title: Translation Customization - Translator-
-description: Use the Microsoft Translator Hub to build your own machine translation system using your preferred terminology and style.
------ Previously updated : 11/04/2022---
-# Customize your text translations
-
-The Custom Translator is a feature of the Translator service, which allows users to customize Microsoft Translator's advanced neural machine translation when translating text using Translator (version 3 only).
-
-The feature can also be used to customize speech translation when used with [Cognitive Services Speech](../../../speech-service/index.yml).
-
-## Custom Translator
-
-With Custom Translator, you can build neural translation systems that understand the terminology used in your own business and industry. The customized translation system will then integrate into existing applications, workflows, and websites.
-
-### How does it work?
-
-Use your previously translated documents (leaflets, webpages, documentation, etc.) to build a translation system that reflects your domain-specific terminology and style, better than a standard translation system. Users can upload TMX, XLIFF, TXT, DOCX, and XLSX documents.
-
-The system also accepts data that is parallel at the document level but isn't yet aligned at the sentence level. If users have access to versions of the same content in multiple languages but in separate documents, Custom Translator will be able to automatically match sentences across documents. The system can also use monolingual data in either or both languages to complement the parallel training data to improve the translations.
-
-The customized system is then available through a regular call to Translator using the category parameter.
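
For example, a runtime request to the Translator Text API v3 that routes through a customized system might look like the following Python sketch. The key, region, source text, and category ID shown here are placeholders, not values from this article.

```python
import requests

# Placeholder values; substitute your own resource key, region, and Category ID.
TRANSLATOR_KEY = "<your-translator-key>"
TRANSLATOR_REGION = "<your-resource-region>"
CATEGORY_ID = "<your-category-id>"

response = requests.post(
    "https://api.cognitive.microsofttranslator.com/translate",
    params={"api-version": "3.0", "from": "en", "to": "de", "category": CATEGORY_ID},
    headers={
        "Ocp-Apim-Subscription-Key": TRANSLATOR_KEY,
        "Ocp-Apim-Subscription-Region": TRANSLATOR_REGION,
        "Content-Type": "application/json",
    },
    json=[{"Text": "The pivot table summarizes quarterly sales."}],
)
print(response.json())
```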
-
-Given the appropriate type and amount of training data, it isn't uncommon to see gains of between 5 and 10 BLEU points, or even more, in translation quality by using Custom Translator.
-
-More details about the various levels of customization based on available data can be found in the [Custom Translator User Guide](../overview.md).
-
-## Next steps
-
-> [!div class="nextstepaction"]
-> [Set up a customized language system using Custom Translator](../overview.md)
cognitive-services Data Filtering https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Translator/custom-translator/concepts/data-filtering.md
- Title: "Data Filtering - Custom Translator"-
-description: When you submit documents to be used for training a custom system, the documents undergo a series of processing and filtering steps.
---- Previously updated : 11/04/2022---
-#Customer intent: As a Custom Translator user, I want to understand how data is filtered before training a model.
--
-# Data filtering
-
-When you submit documents to be used for training, the documents undergo a series of processing and filtering steps. These steps are explained here. Understanding the filtering can help you interpret the sentence count displayed in Custom Translator and the steps you can take yourself to prepare the documents for training with Custom Translator.
-
-## Sentence alignment
-
-If your document isn't in XLIFF, TMX, or ALIGN format, Custom Translator aligns the sentences of your source and target documents to each other, sentence by sentence. Custom Translator doesn't perform document alignment; it follows your naming of the documents to find the matching document in the other language. Within the document, Custom Translator tries to find the corresponding sentence in the other language. It uses document markup like embedded HTML tags to help with the alignment.
-
-If you see a large discrepancy between the number of sentences in the source and target documents, your documents may not be parallel. The document pairs with a large difference (>10%) of sentences on each side warrant a second look to make sure they're indeed parallel. Custom Translator shows a warning next to the document if the sentence count differs suspiciously.
-
-## Deduplication
-
-Custom Translator removes the sentences that are present in test and tuning documents from training data. The removal happens dynamically inside of the training run, not in the data processing step. Custom Translator reports the sentence count to you in the project overview before such removal.
-
-## Length filter
-
-* Remove sentences with only one word on either side.
-* Remove sentences with more than 100 words on either side. Chinese, Japanese, Korean are exempt.
-* Remove sentences with fewer than three characters. Chinese, Japanese, Korean are exempt.
-* Remove sentences with more than 2000 characters for Chinese, Japanese, Korean.
-* Remove sentences with less than 1% alpha characters.
-* Remove dictionary entries containing more than 50 words.
-
-## White space
-
-* Replace any sequence of white-space characters including tabs and CR/LF sequences with a single space character.
-* Remove leading or trailing spaces in the sentence.
-
-## Sentence end punctuation
-
-Replace multiple sentence end punctuation characters with a single instance.
-
-## Japanese character normalization
-
-Convert full width letters and digits to half-width characters.
-
-## Unescaped XML tags
-
-Filtering transforms unescaped tags into escaped tags:
-* `&lt;` becomes `&amp;lt;`
-* `&gt;` becomes `&amp;gt;`
-* `&amp;` becomes `&amp;amp;`
-
-## Invalid characters
-
-Custom Translator removes sentences that contain Unicode character U+FFFD. The character U+FFFD indicates a failed encoding conversion.
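
The service's exact filtering implementation isn't published, but a minimal Python sketch that approximates a few of the rules above (whitespace normalization, collapsing repeated sentence-end punctuation, the word-count limits for non-CJK text, and the U+FFFD check) could look like this:

```python
import re

def normalize(sentence: str) -> str:
    """Collapse runs of whitespace (tabs, CR/LF) to a single space and trim."""
    sentence = re.sub(r"\s+", " ", sentence).strip()
    # Collapse repeated sentence-end punctuation to a single instance.
    return re.sub(r"([.!?])\1+", r"\1", sentence)

def passes_filters(sentence: str) -> bool:
    """Approximate a few of the documented filters for non-CJK text."""
    if "\ufffd" in sentence:              # U+FFFD indicates a failed encoding conversion
        return False
    words = sentence.split()
    if len(words) <= 1 or len(words) > 100:  # one-word and >100-word sentences are removed
        return False
    if len(sentence) < 3:                    # fewer than three characters
        return False
    return True

samples = ["Hello\t\tworld!!!", "word", "Bad \ufffd char here"]
for s in samples:
    cleaned = normalize(s)
    print(repr(cleaned), passes_filters(cleaned))
```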
-
-## Next steps
-
-> [!div class="nextstepaction"]
-> [Learn how to train a model](../how-to/train-custom-model.md)
cognitive-services Dictionaries https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Translator/custom-translator/concepts/dictionaries.md
- Title: "What is a dictionary? - Custom Translator"-
-description: How to create an aligned document that specifies a list of phrases or sentences (and their translations) that you always want Microsoft Translator to translate the same way. Dictionaries are sometimes also called glossaries or term bases.
---- Previously updated : 11/04/2022---
-#Customer intent: As a Custom Translator user, I want to understand how to use a dictionary to build a custom translation model.
--
-# What is a dictionary?
-
-A dictionary is an aligned pair of documents that specifies a list of phrases or sentences and their corresponding translations. Use a dictionary in your training when you want Translator to translate any instances of the source phrase or sentence using the translation you've provided in the dictionary. Dictionaries are sometimes called glossaries or term bases. You can think of the dictionary as a brute-force "find and replace" for all the terms you list. Furthermore, the Microsoft Custom Translator service builds and makes use of its own general-purpose dictionaries to improve the quality of its translation. However, a customer-provided dictionary takes precedence and is searched first to look up words or sentences.
-
-Dictionaries only work for projects in language pairs that have a fully supported Microsoft general neural network model behind them. [View the complete list of languages](../../language-support.md).
-
-## Phrase dictionary
-
-A phrase dictionary is case-sensitive. It's an exact find-and-replace operation. When you include a phrase dictionary in training your model, any word or phrase listed is translated in the way specified. The rest of the sentence is translated as usual. You can use a phrase dictionary to specify phrases that shouldn't be translated by providing the same untranslated phrase in the source and target files.
-
-## Sentence dictionary
-
-A sentence dictionary is case-insensitive. The sentence dictionary allows you to specify an exact target translation for a source sentence. For a sentence dictionary match to occur, the entire submitted sentence must match the source dictionary entry. If the source dictionary entry ends with punctuation, it's ignored during the match. If only a portion of the sentence matches, the entry won't match. When a match is detected, the target entry of the sentence dictionary will be returned.
-
-## Dictionary-only trainings
-
-You can train a model using only dictionary data. To do so, select only the dictionary document (or multiple dictionary documents) that you wish to include and select **Create model**. Since this training is dictionary-only, there's no minimum number of training sentences required. Your model will typically complete training much faster than a standard training. The resulting models will use the Microsoft baseline models for translation with the addition of the dictionaries you've added. You won't get a test report.
-
->[!Note]
->Custom Translator doesn't sentence align dictionary files, so it is important that there are an equal number of source and target phrases/sentences in your dictionary documents and that they are precisely aligned.
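
For example, a hypothetical English-to-Spanish phrase dictionary could consist of two plain-text files where line *n* of the source file aligns with line *n* of the target file. The file names and entries here are illustrative only:

```text
phrase-dictionary_en.txt        phrase-dictionary_es.txt
------------------------        ------------------------
Microsoft SQL Server            Microsoft SQL Server
pivot table                     tabla dinámica
City of Hamburg                 Ciudad de Hamburgo
```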
-
-## Recommendations
-- Dictionaries aren't a substitute for training a model using training data. For better results, we recommend letting the system learn from your training data. However, when sentences or compound nouns must be translated verbatim, use a dictionary.
-
-- The phrase dictionary should be used sparingly. When a phrase within a sentence is replaced, the context of that sentence is lost or limited for translating the rest of the sentence. The result is that, while the phrase or word within the sentence will translate according to the provided dictionary, the overall translation quality of the sentence often suffers.
-
-- The phrase dictionary works well for compound nouns like product names ("_Microsoft SQL Server_"), proper names ("_City of Hamburg_"), or product features ("_pivot table_"). It doesn't work as well for verbs or adjectives because those words are typically highly contextual within the source or target language. The best practice is to avoid phrase dictionary entries for anything but compound nouns.
-
-- If you're using a phrase dictionary, capitalization and punctuation are important. Dictionary entries are case- and punctuation-sensitive. Custom Translator will only match words and phrases in the input sentence that use exactly the same capitalization and punctuation marks as specified in the source dictionary file. Also, translations will reflect the capitalization and punctuation provided in the target dictionary file.
-
- **Example**
-
- - If you're training an English-to-Spanish system that uses a phrase dictionary and you specify "_SQL server_" in the source file and "_Microsoft SQL Server_" in the target file. When you request the translation of a sentence that contains the phrase "_SQL server_", Custom Translator will match the dictionary entry and the translation will contain "_Microsoft SQL Server_."
- - When you request translation of a sentence that includes the same phrase but **doesn't** match what is in your source file, such as "_sql server_", "_sql Server_" or "_SQL Server_", it **won't** return a match from your dictionary.
- - The translation follows the rules of the target language as specified in your phrase dictionary.
-
-- If you're using a sentence dictionary, end-of-sentence punctuation is ignored.
-
- **Example**
-
- - If your source dictionary contains "_This sentence ends with punctuation!_", then any translation requests containing "_This sentence ends with punctuation_" will match.
-
-- Your dictionary should contain unique source lines. If a source line (a word, phrase, or sentence) appears more than once in a dictionary file, the system will always use the **last entry** provided and return the target when a match is found.
-
-- Avoid adding phrases that consist of only numbers or are two- or three-letter words, such as acronyms, in the source dictionary file.
-
-## Next steps
-
-> [!div class="nextstepaction"]
-> [Learn about document formatting guidelines](document-formats-naming-convention.md)
cognitive-services Document Formats Naming Convention https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Translator/custom-translator/concepts/document-formats-naming-convention.md
- Title: "Document formats and naming conventions - Custom Translator"-
-description: This article is a guide to document formats and naming conventions in Custom Translator to avoid naming conflicts.
---- Previously updated : 11/04/2022---
-#Customer intent: As a Custom Translator user, I want to understand how to format and name my documents.
--
-# Document formats and naming convention guidance
-
-Any file used for custom translation must be at least **four** characters in length.
-
-This table includes all supported file formats that you can use to build your translation system:
-
-| Format | Extensions | Description |
-|-|--|--|
-| XLIFF | .XLF, .XLIFF | A parallel document format, export of Translation Memory systems. The languages used are defined inside the file. |
-| TMX | .TMX | A parallel document format, export of Translation Memory systems. The languages used are defined inside the file. |
-| ZIP | .ZIP | ZIP is an archive file format. |
-| Locstudio | .LCL | A Microsoft format for parallel documents |
-| Microsoft Word | .DOCX | Microsoft Word document |
-| Adobe Acrobat | .PDF | Adobe Acrobat portable document |
-| HTML | .HTML, .HTM | HTML document |
-| Text file | .TXT | UTF-16 or UTF-8 encoded text files. The file name must not contain Japanese characters. |
-| Aligned text file | .ALIGN | The extension `.ALIGN` is a special extension that you can use if you know that the sentences in the document pair are perfectly aligned. If you provide a `.ALIGN` file, Custom Translator won't align the sentences for you. |
-| Excel file | .XLSX | Excel file (2013 or later). The first line/row of the spreadsheet should be the language code. |
-
-## Dictionary formats
-
-For dictionaries, Custom Translator supports all file formats that are supported for training sets. If you're using an Excel dictionary, the first line/row of the spreadsheet should be the language codes.
-
-## Zip file formats
-
-Documents can be grouped into a single zip file and uploaded. The Custom Translator supports zip file formats (ZIP, GZ, and TGZ).
-
-Each document in the zip file with the extension TXT, HTML, HTM, PDF, DOCX, ALIGN must follow this naming convention:
-
-{document name}\_{language code}
-where {document name} is the name of your document and {language code} is the ISO LanguageID (two characters) indicating that the document contains sentences in that language. There must be an underscore (_) before the language code.
-
-For example, to upload two parallel documents within a zip for an English to
-Spanish system, the files should be named "data_en" and "data_es".
-
-Translation Memory files (TMX, XLF, XLIFF, LCL, XLSX) aren't required to follow the specific language-naming convention.
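
As a rough sketch of how this convention pairs documents, the following Python snippet groups hypothetical file names from a zip by base name and extension. It's illustrative only and not how the service itself processes uploads.

```python
from collections import defaultdict
from pathlib import Path

# Hypothetical file names following the {document name}_{language code} convention.
files = ["data_en.txt", "data_es.txt", "handbook_en.docx", "handbook_es.docx"]

pairs = defaultdict(dict)
for name in files:
    stem, suffix = Path(name).stem, Path(name).suffix
    base, _, lang = stem.rpartition("_")   # split off the two-character language code
    pairs[(base, suffix)][lang] = name

for (base, _), langs in pairs.items():
    print(base, "->", langs)
```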
-
-## Next steps
-
-> [!div class="nextstepaction"]
-> [Learn about managing projects](workspace-and-project.md#what-is-a-custom-translator-project)
cognitive-services Model Training https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Translator/custom-translator/concepts/model-training.md
- Title: "What are training and modeling? - Custom Translator"-
-description: A model is the system that provides translation for a specific language pair. The outcome of a successful training is a model. To train a model, three mutually exclusive data sets are required: training dataset, tuning dataset, and testing dataset.
----- Previously updated : 11/04/2022--
-#Customer intent: As a Custom Translator user, I want to understand the concept of a model and training, so that I can efficiently use training, tuning, and testing datasets that help me build a translation model.
--
-# What are training and modeling?
-
-A model is the system that provides translation for a specific language pair. The outcome of a successful training is a model. To train a model, three mutually exclusive document types are required: training, tuning, and testing. A dictionary document type can also be provided. For more information, _see_ [Sentence alignment](./sentence-alignment.md#suggested-minimum-number-of-sentences).
-
-If only training data is provided when queuing a training, Custom Translator will automatically assemble tuning and testing data. It will use a random subset of sentences from your training documents, and exclude these sentences from the training data itself.
-
-## Training document type for Custom Translator
-
-Documents included in the training set are used by Custom Translator as the basis for building your model. During training execution, sentences that are present in these documents are aligned (or paired). You can take liberties in composing your set of training documents. You can include documents that you believe are of tangential relevance in one model, then exclude them in another to see the impact on the [BLEU (Bilingual Evaluation Understudy) score](bleu-score.md). As long as you keep the tuning set and test set constant, feel free to experiment with the composition of the training set. This approach is an effective way to modify the quality of your translation system.
-
-You can run multiple trainings within a project and compare the [BLEU scores](bleu-score.md) across all training runs. When you're running multiple trainings for comparison, ensure the same tuning/test data is specified each time. Also make sure to inspect the results manually in the ["Testing"](../how-to/test-your-model.md) tab.
-
-## Tuning document type for Custom Translator
-
-Parallel documents included in this set are used by the Custom Translator to tune the translation system for optimal results.
-
-The tuning data is used during training to adjust all parameters and weights of the translation system to the optimal values. Choose your tuning data carefully: the tuning data should be representative of the content of the documents you intend to translate in the future. The tuning data has a major influence on the quality of the translations produced. Tuning enables the translation system to provide translations that are closest to the samples you provide in the tuning data. You don't need more than 2500 sentences in your tuning data. For optimal translation quality, it's recommended to select the tuning set manually by choosing the most representative selection of sentences.
-
-When creating your tuning set, choose sentences that are a meaningful and representative length of the future sentences that you expect to translate. Choose sentences that have words and phrases that you intend to translate in the approximate distribution that you expect in your future translations. In practice, a sentence length of 7 to 10 words will produce the best results. These sentences contain enough context to show inflection and provide a phrase length that is significant, without being overly complex.
-
-A good description of the type of sentences to use in the tuning set is prose: actual fluent sentences. Not table cells, not poems, not lists of things, not only punctuation, or numbers in a sentence - regular language.
-
-If you manually select your tuning data, it shouldn't have any of the same sentences as your training and testing data. The tuning data has a significant impact on the quality of the translations - choose the sentences carefully.
-
-If you aren't sure what to choose for your tuning data, just select the training data and let Custom Translator select the tuning data for you. When you let the Custom Translator choose the tuning data automatically, it will use a random subset of sentences from your bilingual training documents and exclude these sentences from the training material itself.
-
-## Testing dataset for Custom Translator
-
-Parallel documents included in the testing set are used to compute the BLEU (Bilingual Evaluation Understudy) score. This score indicates the quality of your translation system. This score actually tells you how closely the translations done by the translation system resulting from this training match the reference sentences in the test data set.
-
-The BLEU score is a measurement of the delta between the automatic translation and the reference translation. Its value ranges from 0 to 100. A score of 0 indicates that not a single word of the reference appears in the translation. A score of 100 indicates that the automatic translation exactly matches the reference: the same word is in the exact same position. The score you receive is the BLEU score average for all sentences of the testing data.
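
If you want to compute a comparable corpus-level score offline, open-source tools such as sacreBLEU report BLEU on the same 0 to 100 scale. The snippet below is a rough sketch with made-up sentences; the service's own scoring may differ in tokenization and other details.

```python
# pip install sacrebleu
import sacrebleu

hypotheses = ["The cat sits on the mat.", "He bought a new car yesterday."]
references = ["The cat is sitting on the mat.", "Yesterday he bought a new car."]

# corpus_bleu takes the hypothesis strings and a list of reference streams.
bleu = sacrebleu.corpus_bleu(hypotheses, [references])
print(f"Corpus BLEU: {bleu.score:.1f}")   # score is reported on a 0-100 scale
```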
-
-The test data should include parallel documents where the target language sentences are the most desirable translations of the corresponding source language sentences in the source-target pair. You may want to use the same criteria you used to compose the tuning data. However, the testing data has no influence over the quality of the translation system. It's used exclusively to generate the BLEU score for you.
-
-You don't need more than 2,500 sentences as the testing data. When you let the system choose the testing set automatically, it will use a random subset of sentences from your bilingual training documents, and exclude these sentences from the training material itself.
-
-You can view the custom translations of the testing set, and compare them to the translations provided in your testing set, by navigating to the test tab within a model.
-
-## Next Steps
-
-> [!div class="nextstepaction"]
-> [Test and evaluate your model](../how-to/test-your-model.md)
cognitive-services Parallel Documents https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Translator/custom-translator/concepts/parallel-documents.md
- Title: "What are parallel documents? - Custom Translator"-
-description: Parallel documents are pairs of documents where one is the translation of the other. One document in the pair contains sentences in the source language and the other document contains these sentences translated into the target language.
---- Previously updated : 11/04/2022--
-#Customer intent: As a Custom Translator user, I want to understand how to use parallel documents to build a custom translation model.
--
-# What are parallel documents?
-
-Parallel documents are pairs of documents where one is the translation of the
-other. One document in the pair contains sentences in the source language and
-the other document contains these sentences translated into the target language.
-It doesn't matter which language is marked as "source" and which language is
-marked as "target" ΓÇô a parallel document can be used to train a translation
-system in either direction.
-
-## Requirements
-
-You'll need a minimum of 10,000 unique aligned parallel sentences to train a system. This limitation is a safety net to ensure your parallel sentences contain enough unique vocabulary to successfully train a translation model. As a best practice, continuously add more parallel content and retrain to improve the quality of your translation system. For more information, *see* [Sentence Alignment](./sentence-alignment.md).
-
-Microsoft requires that documents uploaded to the Custom Translator don't violate a third party's copyright or intellectual properties. For more information, please see the [Terms of Use](https://azure.microsoft.com/support/legal/cognitive-services-terms/). Uploading a document using the portal doesn't alter the ownership of the intellectual property in the document itself.
-
-## Use of parallel documents
-
-Parallel documents are used by the system:
-
-1. To learn how words, phrases and sentences are commonly mapped between the
- two languages.
-
-2. To learn how to process the appropriate context depending on the surrounding
- phrases. A word may not always translate to the exact same word in the other
- language.
-
-As a best practice, make sure that there's a 1:1 sentence correspondence between
-the source and target language versions of the documents.
-
-If your project is domain (category) specific, your documents should be
-consistent in terminology within that category. The quality of the resulting
-translation system depends on the number of sentences in your document set and
-the quality of the sentences. The more examples your documents contain with
-diverse usages for a word specific to your category, the better job the system
-can do during translation.
-
-Documents uploaded are private to each workspace and can be used in as many
-projects or trainings as you like. Sentences extracted from your documents are
-stored separately in your repository as plain Unicode text files and are
-available for you to delete. Don't use the Custom Translator as a document
-repository; you won't be able to download the documents you uploaded in the
-format you uploaded them.
-
-## Next steps
-
-> [!div class="nextstepaction"]
-> [Learn how to use a dictionary](dictionaries.md)
cognitive-services Sentence Alignment https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Translator/custom-translator/concepts/sentence-alignment.md
- Title: "Sentence pairing and alignment - Custom Translator"-
-description: During the training execution, sentences present in parallel documents are paired or aligned. Custom Translator learns translations one sentence at a time, by reading a sentence and translating it. Then it aligns words and phrases in these two sentences to each other.
---- Previously updated : 11/04/2022---
-#Customer intent: As a Custom Translator user, I want to know how sentence alignment works, so that I can have better understanding of underlying process of sentence extraction, pairing, filtering, aligning.
--
-# Sentence pairing and alignment in parallel documents
-
-After documents are uploaded, sentences present in parallel documents are
-paired or aligned. Custom Translator reports the number of sentences it was
-able to pair as the Aligned Sentences in each of the data sets.
-
-## Pairing and alignment process
-
-Custom Translator learns translations of sentences one sentence at a time. It reads a sentence from the source text, and then the translation of this sentence from the target text. Then it aligns words and phrases in these two sentences to each other. This process enables it to create a map of the words and phrases in one sentence to the equivalent words and phrases in the translation of the sentence. Alignment tries to ensure that the system trains on sentences that are translations of each other.
-
-## Pre-aligned documents
-
-If you know you have parallel documents, you may override the
-sentence alignment by supplying pre-aligned text files. You can extract all
-sentences from both documents into text file, organized one sentence per line,
-and upload with an `.align` extension. The `.align` extension signals Custom
-Translator that it should skip sentence alignment.
-
-For best results, try to make sure that you have one sentence per line in your
- files. Don't include newline characters within a sentence; they cause poor
-alignments.
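
For example, a hypothetical pre-aligned file pair for an English-to-French system could look like the following, with exactly one sentence per line and line *n* of each file corresponding to the same sentence. The file names and sentences here are illustrative only.

```text
product-guide_en.align                     product-guide_fr.align
----------------------                     ----------------------
Press the power button to start.           Appuyez sur le bouton d'alimentation pour démarrer.
The battery charges in two hours.          La batterie se charge en deux heures.
```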
-
-## Suggested minimum number of sentences
-
-For a training to succeed, the table below shows the minimum number of sentences required in each document type. This limitation is a safety net to ensure your parallel sentences contain enough unique vocabulary to successfully train a translation model. The general guideline is that more in-domain parallel sentences of human translation quality should produce higher-quality models.
-
-| Document type | Suggested minimum sentence count | Maximum sentence count |
-||--|--|
-| Training | 10,000 | No upper limit |
-| Tuning | 500 | 2,500 |
-| Testing | 500 | 2,500 |
-| Dictionary | 0 | 250,000 |
-
-> [!NOTE]
->
-> - Training will not start and will fail if the 10,000 minimum sentence count for Training is not met.
-> - Tuning and Testing are optional. If you do not provide them, the system will remove an appropriate percentage from Training to use for validation and testing.
-> - You can train a model using only dictionary data. Please refer to [What is Dictionary](dictionaries.md).
-> - If your dictionary contains more than 250,000 sentences, our Document Translation feature is a better choice. Please refer to [Document Translation](../../document-translation/overview.md).
-> - Free (F0) subscription training has a maximum limit of 2,000,000 characters.
-
-## Next steps
-
-> [!div class="nextstepaction"]
-> [Learn how to use a dictionary](dictionaries.md)
cognitive-services Workspace And Project https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Translator/custom-translator/concepts/workspace-and-project.md
- Title: "What is a workspace and project? - Custom Translator"-
-description: This article will explain the differences between a workspace and a project as well as project categories and labels for the Custom Translator service.
----- Previously updated : 11/04/2022---
-#Customer intent: As a Custom Translator user, I want to understand the concept of a project, so that I can use it efficiently.
-
-# What is a Custom Translator workspace?
-
-A workspace is a work area for composing and building your custom translation system. A workspace can contain multiple projects, models, and documents. All the work you do in Custom Translator is inside a specific workspace.
-
-A workspace is private to you and the people you invite into your workspace. Uninvited people don't have access to the content of your workspace. You can invite as many people as you like into your workspace and modify or remove their access anytime. You can also create a new workspace. By default, a workspace won't contain any projects or documents that are within your other workspaces.
-
-## What is a Custom Translator project?
-
-A project is a wrapper for a model, documents, and tests. Each project
-automatically includes all documents that are uploaded into that workspace that
-have the correct language pair. For example, if you have both an English to
-Spanish project and a Spanish to English project, the same documents will be
-included in both projects. Each project has a CategoryID associated with it
-that is used when querying the [V3 API](../../reference/v3-0-translate.md?tabs=curl) for translations. CategoryID is a parameter used to get translations from a customized system built with Custom Translator.
-
-## Project categories
-
-The category identifies the domain (the area of terminology and style you want to use) for your project. Choose the category most relevant to your documents. In some cases, your choice of the category directly influences the behavior of the Custom Translator.
-
-We have two sets of baseline models: General and Technology. If the category **Technology** is selected, the Technology baseline models are used. For any other category selection, the General baseline models are used. The Technology baseline model does well in the technology domain, but it shows lower quality if the sentences used for translation don't fall within the technology domain. We suggest selecting the Technology category only if your sentences fall strictly within the technology domain.
-
-In the same workspace, you may create projects for the same language pair in
-different categories. Custom Translator prevents creation of a duplicate project
-with the same language pair and category. Applying a label to your project
-allows you to avoid this restriction. Don't use labels unless you're building translation systems for multiple clients, as adding a
-unique label to your project will be reflected in your project's CategoryID.
-
-## Project labels
-
-Custom Translator allows you to assign a project label to your project. The
-project label distinguishes between multiple projects with the same language
-pair and category. As a best practice, avoid using project labels unless
-necessary.
-
-The project label is used as part of the CategoryID. If the project label is
-left unset or is set identically across projects, then projects with the same
-category and *different* language pairs will share the same CategoryID. This approach is
-advantageous because it allows you to switch between languages when using the Translator API without worrying about a CategoryID that is unique to each project.
-
-For example, if I wanted to enable translations in the Technology domain from
-English to French and from French to English, I would create two
-projects: one for English -\> French, and one for French -\> English. I would
-specify the same category (Technology) for both and leave the project label
-blank. The CategoryID for both projects would match, so I could query the API
-for both English and French translations without having to modify my CategoryID.
-
-If you're a language service provider and want to serve
-multiple customers with different models that retain the same category and
-language pair, use a project label to differentiate between customers.
-
-## Next steps
-
-> [!div class="nextstepaction"]
-> [Learn about model training](model-training.md)
cognitive-services Faq https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Translator/custom-translator/faq.md
- Title: "Frequently asked questions - Custom Translator"-
-description: This article contains answers to frequently asked questions about the Azure Cognitive Services Custom Translator.
---- Previously updated : 08/17/2020-----
-# Custom Translator frequently asked questions
-
-This article contains answers to frequently asked questions about [Custom Translator](https://portal.customtranslator.azure.ai).
-
-## What are the current restrictions in Custom Translator?
-
-There are restrictions and limits with respect to file size, model training, and model deployment. Keep these restrictions in mind when setting up your training to build a model in Custom Translator.
--- Submitted files must be less than 100 MB in size.-- Monolingual data isn't supported.-
-## When should I request deployment for a translation system that has been trained?
-
-It may take several trainings to create the optimal translation system for your project. You may want to try using more training data or more carefully filtered data if the BLEU score and/or the test results aren't satisfactory. You should
-be strict and careful in designing your tuning set and your test set. Make certain your sets
-fully represent the terminology and style of material you want to
-translate. You can be more liberal in composing your training data, and
-experiment with different options. Request a system deployment when you're
-satisfied with the translations in your system test results and have no more data to add to
-improve your trained system.
-
-## How many trained systems can be deployed in a project?
-
-Only one trained system can be deployed per project. It may take several
-trainings to create a suitable translation system for your project and we
-encourage you to request deployment of a training that gives you the best
-result. You can determine the quality of the training by the BLEU score (higher
-is better), and by consulting with reviewers before deciding that the quality of
-translations is suitable for deployment.
-
-## When can I expect my trainings to be deployed?
-
-The deployment generally takes less than an hour.
-
-## How do you access a deployed system?
-
-Deployed systems can be accessed via the Microsoft Translator Text API V3 by
-specifying the CategoryID. More information about the Translator Text API can
-be found in the [API
-Reference](../reference/v3-0-reference.md)
-webpage.
-
-## How do I skip alignment and sentence breaking if my data is already sentence aligned?
-
-The Custom Translator skips sentence alignment and sentence breaking for TMX
-files and for text files with the `.align` extension. `.align` files give users
-an option to skip Custom Translator's sentence breaking and alignment process for the
-files that are perfectly aligned, and need no further processing. We recommend
-using `.align` extension only for files that are perfectly aligned.
-
-If the number of extracted sentences doesn't match between the two files with the same
-base name, Custom Translator will still run the sentence aligner on `.align`
-files.
-
-## I tried uploading my TMX, but it says "document processing failed"
-
-Ensure that the TMX conforms to the [TMX 1.4b Specification](https://www.gala-global.org/tmx-14b).
-
-## Next steps
-
-> [!div class="nextstepaction"]
-> [Try the custom translator quickstart](quickstart.md)
-
cognitive-services Copy Model https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Translator/custom-translator/how-to/copy-model.md
- Title: Copy a custom model-
-description: This article explains how to copy a custom model to another workspace using the Azure Cognitive Services Custom Translator.
---- Previously updated : 06/01/2023---
-# Copy a custom model
-
-Copying a model to other workspaces enables model lifecycle management (for example, development → test → production) and increases usage scalability while reducing the training cost.
-
-## Copy model to another workspace
-
- > [!Note]
- >
- > To copy model from one workspace to another, you must have an **Owner** role in both workspaces.
- >
- > The copied model cannot be recopied. You can only rename, delete, or publish a copied model.
-
-1. After successful model training, select the **Model details** blade.
-
-1. Select the **Model Name** to copy.
-
-1. Select **Copy to workspace**.
-
-1. Fill out the target details.
-
- :::image type="content" source="../media/how-to/copy-model-1.png" alt-text="Screenshot illustrating the copy model dialog window.":::
-
- > [!Note]
- >
- > A dropdown list displays the list of workspaces available to use. Otherwise, select **Create a new workspace**.
- >
 - > If the selected workspace contains a project for the same language pair, it can be selected from the Project dropdown list; otherwise, select **Create a new project** to create one.
-
-1. Select **Copy model**.
-
-1. A notification panel shows the copy progress. The process should complete fairly quickly:
-
- :::image type="content" source="../media/how-to/copy-model-2.png" alt-text="Screenshot illustrating notification that the copy model is in process.":::
-
-1. After **Copy model** completion, a copied model is available in the target workspace and ready to publish. A **Copied model** watermark is appended to the model name.
-
- :::image type="content" source="../media/how-to/copy-model-3.png" alt-text="Screenshot illustrating the copy complete dialog window.":::
-
-## Next steps
-
-> [!div class="nextstepaction"]
-> [Learn how to publish/deploy a custom model](publish-model.md).
cognitive-services Create Manage Project https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Translator/custom-translator/how-to/create-manage-project.md
- Title: Create and manage a project-
-description: How to create and manage a project in the Azure Cognitive Services Custom Translator.
---- Previously updated : 11/04/2022----
-# Create and manage a project
-
-A project contains translation models for one language pair. Each project includes all documents that were uploaded into that workspace with the correct language pair.
-
-Creating a project is the first step in building and publishing a model.
-
-## Create a project
-
-1. After you sign in, your default workspace is loaded. To create a project in a different workspace, select **My workspaces**, then select a workspace name.
-
-1. Select **Create project**.
-
-1. Enter the following details about your project in the creation dialog:
-
- - **Project name (required):** Give your project a unique, meaningful name. It's not necessary to mention the languages within the title.
-
- - **Language pair (required):** Select the source and target languages from the dropdown list
-
- - **Domain (required):** Select the domain from the dropdown list that's most appropriate for your project. The domain describes the terminology and style of the documents you intend to translate.
-
- >[!Note]
- >Select **Show advanced options** to add project label, project description, and domain description
-
- - **Project label:** The project label distinguishes between projects with the same language pair and domain. As a best practice, here are a few tips:
-
- - Use a label *only* if you're planning to build multiple projects for the same language pair and same domain and want to access these projects with a different Domain ID.
-
- - Don't use a label if you're building systems for one domain only.
-
- - A project label isn't required and isn't helpful for distinguishing between language pairs.
-
- - You can use the same label for multiple projects.
-
- - **Project description:** A short summary about the project. This description has no influence over the behavior of the Custom Translator or your resulting custom system, but can help you differentiate between different projects.
-
- - **Domain description:** Use this field to better describe the particular field or industry in which you're working. For example, if your category is medicine, you might add details about your subfield, such as surgery or pediatrics. The description has no influence over the behavior of the Custom Translator or your resulting custom system.
-
-1. Select **Create project**.
-
- :::image type="content" source="../media/how-to/create-project-dialog.png" alt-text="Screenshot illustrating the create project fields.":::
-
-## Edit a project
-
-To modify the project name, project description, or domain description:
-
-1. Select the workspace name.
-
-1. Select the project name, for example, *English-to-German*.
-
-1. The **Edit and Delete** buttons should now be visible.
-
- :::image type="content" source="../media/how-to/edit-project-dialog-1.png" alt-text="Screenshot illustrating the edit project fields":::
-
-1. Select **Edit** and fill in or modify existing text.
-
- :::image type="content" source="../media/how-to/edit-project-dialog-2.png" alt-text="Screenshot illustrating detailed edit project fields.":::
-
-1. Select **Edit project** to save.
-
-## Delete a project
-
-1. Follow the [**Edit a project**](#edit-a-project) steps 1-3 above.
-
-1. Select **Delete** and read the delete message before you select **Delete project** to confirm.
-
- :::image type="content" source="../media/how-to/delete-project-1.png" alt-text="Screenshot illustrating delete project fields.":::
-
- >[!Note]
- >If your project has a published model or a model that is currently in training, you will only be able to delete your project once your model is no longer published or training.
- >
- > :::image type="content" source="../media/how-to/delete-project-2.png" alt-text="Screenshot illustrating the unable to delete message.":::
-
-## Next steps
-
-- Learn [how to manage project documents](create-manage-training-documents.md).
-- Learn [how to train a model](train-custom-model.md).
-- Learn [how to test and evaluate model quality](test-your-model.md).
-- Learn [how to publish model](publish-model.md).
-- Learn [how to translate with custom models](translate-with-custom-model.md).
cognitive-services Create Manage Training Documents https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Translator/custom-translator/how-to/create-manage-training-documents.md
- Title: Build and upload training documents-
-description: How to build and upload parallel documents (two documents where one is the origin and the other is the translation) using Custom Translator.
---- Previously updated : 11/04/2022----
-# Build and manage training documents
-
-[Custom Translator](../overview.md) enables you to build translation models that reflect your business, industry, and domain-specific terminology and style. Training and deploying a custom model is easy and doesn't require any programming skills. Custom Translator allows you to upload parallel files, translation memory files, or zip files.
-
-[Parallel documents](../what-are-parallel-documents.md) are pairs of documents where one (target) is a translation of the other (source). One document in the pair contains sentences in the source language and the other document contains those sentences translated into the target language.
-
-Before uploading your documents, review the [document formats and naming convention guidance](../document-formats-naming-convention.md) to make sure your file format is supported by Custom Translator.
-
-## How to create document sets
-
-Finding in-domain quality data is often a challenging task that varies based on user classification. Here are some questions you can ask yourself as you evaluate what data may be available to you:
--- Enterprises often have a wealth of translation data that has accumulated over many years of using human translation. Does your company have previous translation data available that you can use?--- Do you have a vast amount of monolingual data? Monolingual data is data in only one language. If so, can you get translations for this data?--- Can you crawl online portals to collect source sentences and synthesize target sentences?-
-### Training material for each document type
-
-| Source | What it does | Rules to follow |
-||||
-| Bilingual training documents | Teaches the system your terminology and style. | **Be liberal**. Any in-domain human translation is better than machine translation. Add and remove documents as you go and try to improve the [BLEU score](../concepts/bleu-score.md?WT.mc_id=aiml-43548-heboelma). |
-| Tuning documents | Trains the Neural Machine Translation parameters. | **Be strict**. Compose them to be optimally representative of what you are going to translate in the future. |
-| Test documents | Calculate the [BLEU score](../beginners-guide.md#what-is-a-bleu-score).| **Be strict**. Compose test documents to be optimally representative of what you plan to translate in the future. |
-| Phrase dictionary | Forces the given translation 100% of the time. | **Be restrictive**. A phrase dictionary is case-sensitive and any word or phrase listed is translated in the way you specify. In many cases, it's better to not use a phrase dictionary and let the system learn. |
-| Sentence dictionary | Forces the given translation 100% of the time. | **Be strict**. A sentence dictionary is case-insensitive and good for common in-domain short sentences. For a sentence dictionary match to occur, the entire submitted sentence must match the source dictionary entry. If only a portion of the sentence matches, the entry won't match. |
-
-## How to upload documents
-
-Document types are associated with the language pair selected when you create a project.
-
-1. Sign in to the [Custom Translator](https://portal.customtranslator.azure.ai) portal. Your default workspace is loaded and a list of previously created projects is displayed.
-
-1. Select the desired project **Name**. By default, the **Manage documents** blade is selected and a list of previously uploaded documents is displayed.
-
-1. Select **Add document set** and choose the document type:
-
- - Training set
- - Testing set
- - Tuning set
- - Dictionary set:
- - Phrase Dictionary
- - Sentence Dictionary
-
-1. Select **Next**.
-
- :::image type="content" source="../media/how-to/upload-1.png" alt-text="Screenshot illustrating the document upload link.":::
-
- >[!Note]
- >Choosing **Dictionary set** launches **Choose type of dictionary** dialog.
- >Choose one and select **Next**
-
-1. Select your documents format from the radio buttons.
-
- :::image type="content" source="../media/how-to/upload-2.png" alt-text="Screenshot illustrating the upload document page.":::
-
- - For **Parallel documents**, fill in the `Document set name` and select **Browse files** to select source and target documents.
- - For **Translation memory (TM)** file or **Upload multiple sets with ZIP**, select **Browse files** to select the file
-
-1. Select **Upload**.
-
-At this point, Custom Translator is processing your documents and attempting to extract sentences, as indicated in the upload notification. Once processing is done, you'll see a successful upload notification.
-
- :::image type="content" source="../media/quickstart/document-upload-notification.png" alt-text="Screenshot illustrating the upload document processing dialog window.":::
-
-## View upload history
-
-On the workspace page, you can view the history of all document uploads with details like document type, language pair, and upload status.
-
-1. From the [Custom Translator](https://portal.customtranslator.azure.ai) portal workspace page,
- select the **Upload history** tab to view history.
-
- :::image type="content" source="../media/how-to/upload-history-tab.png" alt-text="Screenshot showing the upload history tab.":::
-
-2. This page shows the status of all of your past uploads. It displays
- uploads from most recent to least recent. For each upload, it shows the document name, upload status, upload date, number of files uploaded, type of file uploaded, the language pair of the file, and who created it. You can use the filter to quickly find documents by name, status, language, and date range.
-
- :::image type="content" source="../media/how-to/upload-history-page.png" alt-text="Screenshot showing the upload history page.":::
-
-3. Select any upload history record. On the upload history details page,
- you can view the files uploaded as part of the upload, the upload status of each file, the language of the file, and an error message (if there was any error in the upload).
-
-## Next steps
-
-- Learn [how to train a model](train-custom-model.md).
-- Learn [how to test and evaluate model quality](test-your-model.md).
-- Learn [how to publish model](publish-model.md).
-- Learn [how to translate with custom models](translate-with-custom-model.md).
cognitive-services Create Manage Workspace https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Translator/custom-translator/how-to/create-manage-workspace.md
- Title: Create and manage a workspace-
-description: How to create and manage workspaces
---- Previously updated : 11/04/2022-----
-# Create and manage a workspace
-
- Workspaces are places to manage your documents, projects, and models. When you create a workspace, you can choose to use the workspace independently, or share it with teammates to divide up the work.
-
-## Create workspace
-
-1. After you sign in to Custom Translator, you'll be asked for permission to read your profile from the Microsoft identity platform to request your user access token and refresh token. Both tokens are needed for authentication and to ensure that you aren't signed out during your live session or while training your models. </br>Select **Yes**.
-
- :::image type="content" source="../media/quickstart/first-time-user.png" alt-text="Screenshot illustrating first-time sign-in.":::
-
-1. Select **My workspaces**
-
-1. Select **Create a new workspace**
-
-1. Type a **Workspace name** and select **Next**
-
-1. Select "Global" for **Select resource region** from the dropdown list.
-
-1. Copy/paste your Translator Services key.
-
-1. Select **Next**.
-
-1. Select **Done**
-
- > [!NOTE]
- > Region must match the region that was selected during the resource creation. You can use **KEY 1** or **KEY 2**.
-
- > [!NOTE]
- > All uploaded customer content, custom model binaries, custom model configurations, and training logs are kept encrypted-at-rest in the selected region.
-
- :::image type="content" source="../media/quickstart/resource-key.png" alt-text="Screenshot illustrating the resource key.":::
-
- :::image type="content" source="../media/quickstart/create-workspace-1.png" alt-text="Screenshot illustrating workspace creation.":::
-
-## Manage workspace settings
-
-Select a workspace and navigate to **Workspace settings**. You can manage the following workspace settings:
-
-* Change the resource key if the region is **Global**. If you're using a region-specific resource such as **East US**, you can't change your resource key.
-
-* Change the workspace name.
-
-* [Share the workspace with others](#share-workspace-for-collaboration).
-
-* Delete the workspace.
-
-### Share workspace for collaboration
-
-The person who created the workspace is the owner. Within **Workspace settings**, an owner can designate three different roles for a collaborative workspace:
-
-* **Owner**. An owner has full permissions within the workspace.
-
-* **Editor**. An editor can add documents, train models, and delete documents and projects. They can't modify who the workspace is shared with, delete the workspace, or change the workspace name.
-
-* **Reader**. A reader can view (and download if available) all information in the workspace.
-
-> [!NOTE]
-> The Custom Translator workspace sharing policy has changed. For additional security measures, you can share a workspace only with people who have recently signed in to the Custom Translator portal.
-
-1. Select **Share**.
-
-1. Complete the **email address** field for collaborators.
-
-1. Select **role** from the dropdown list.
-
-1. Select **Share**.
---
-### Remove somebody from a workspace
-
-1. Select **Share**.
-
-2. Select the **X** icon next to the **Role** and email address that you want to remove.
--
-### Restrict access to workspace models
-
-> [!WARNING]
-> **Restrict access** blocks runtime translation requests to all published models in the workspace if the requests don't include the same Translator resource that was used to create the workspace.
-
-Select the **Yes** checkbox. Within a few minutes, all published models are secured from unauthorized access.
--
-## Next steps
-
-> [!div class="nextstepaction"]
-> [Learn how to manage projects](create-manage-project.md)
cognitive-services Enable Vnet Service Endpoint https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Translator/custom-translator/how-to/enable-vnet-service-endpoint.md
- Title: Enable Virtual Network service endpoints with Custom Translator service-
-description: This article describes how to use Custom Translator service with an Azure Virtual Network service endpoint.
----- Previously updated : 07/05/2023----
-# Enable Custom Translator through Azure Virtual Network
-
-In this article, we show you how to set up and use VNet service endpoints with Custom Translator.
-
-Azure Virtual Network (VNet) [service endpoints](../../../../virtual-network/virtual-network-service-endpoints-overview.md) securely connect your Azure service resources to your virtual networks over an optimized route via the Azure global network. Service endpoints enable private IP addresses within your virtual network to reach the endpoint of an Azure service without the need for a public IP address on the virtual network.
-
-For more information, see [Azure Virtual Network overview](../../../../virtual-network/virtual-networks-overview.md)
-
-> [!NOTE]
-> Before you start, review [how to use virtual networks with Cognitive Services](../../../cognitive-services-virtual-networks.md).
-
 To set up a Translator resource for VNet service endpoint scenarios, you need the following resources:
-
-* [A regional Translator resource (global isn't supported)](../../create-translator-resource.md).
-* [VNet and networking settings for the Translator resource](#configure-virtual-networks-resource-networking-settings).
-
-## Configure virtual networks resource networking settings
-
-To start, you need to add all virtual networks that are allowed access via the service endpoint to the Translator resource networking properties. To enable access to a Translator resource via the VNet, you need to enable the `Microsoft.CognitiveServices` service endpoint type for the required subnets of your virtual network. Doing so routes all subnet traffic related to Cognitive Services through the private global network. If you intend to access any other Cognitive Services resources from the same subnet, make sure these resources are also configured to allow your virtual network.
-
-> [!NOTE]
->
-> * If a virtual network isn't added as *allowed* in the Translator resource networking properties, it won't have access to the Translator resource via the service endpoint, even if the `Microsoft.CognitiveServices` service endpoint is enabled for the virtual network.
-> * If the service endpoint is enabled but the virtual network isn't allowed, the Translator resource won't be accessible for the virtual network through a public IP address, regardless of your other network security settings.
-> * Enabling the `Microsoft.CognitiveServices` endpoint routes all traffic related to Cognitive Services through the private global network. Thus, the virtual network should be explicitly allowed to access the resource.
-> * This guidance applies for all Cognitive Services resources, not just for Translator resources.
-
-Let's get started:
-
-1. Navigate to the [Azure portal](https://portal.azure.com/) and sign in to your Azure account.
-
-1. Select a regional Translator resource.
-
-1. From the **Resource Management** group in the left side panel, select **Networking**.
-
- :::image type="content" source="../media/how-to/resource-management-networking.png" alt-text="Screenshot of the networking selection under Resource Management in the Azure portal.":::
-
-1. From the **Firewalls and virtual networks** tab, choose **Selected Networks and Private Endpoints**.
-
- :::image type="content" source="../media/how-to/firewalls-virtual-network.png" alt-text="Screenshot of the firewalls and virtual network page in the Azure portal.":::
-
- > [!NOTE]
- > To use Virtual Network service endpoints, you need to select the **Selected Networks and Private Endpoints** network security option. No other options are supported.
-
-1. Select **Add existing virtual network** or **Add new virtual network** and provide the required parameters.
-
- * Complete the process by selecting **Add** for an existing virtual network or **Create** for a new one.
-
- * If you add an existing virtual network, the `Microsoft.CognitiveServices` service endpoint is automatically enabled for the selected subnets.
-
- * If you create a new virtual network, the **default** subnet is automatically configured with the `Microsoft.CognitiveServices` service endpoint. This operation can take a few minutes.
-
- > [!NOTE]
- > As described in the [previous section](#configure-virtual-networks-resource-networking-settings), when you configure a virtual network as *allowed* for the Translator resource, the `Microsoft.CognitiveServices` service endpoint is automatically enabled. If you later disable it, you need to re-enable it manually to restore the service endpoint access to the Translator resource (and to other Cognitive Services resources).
-
-1. Now, when you choose the **Selected Networks and Private Endpoints** tab, you can see your enabled virtual network and subnets under the **Virtual networks** section.
-
-1. To verify that the service endpoint is enabled:
-
- * From the **Resource Management** group in the left side panel, select **Networking**.
-
- * Select your **virtual network** and then select the desired **subnet**.
-
- :::image type="content" source="../media/how-to/select-subnet.png" alt-text="Screenshot of subnet selection section in the Azure portal.":::
-
- * A new **Subnets** window appears.
-
- * Select **Service endpoints** from the **Settings** menu located on the left side panel.
-
- :::image type="content" source="../media/how-to/service-endpoints.png" alt-text="Screenshot of the **Subnets** selection from the **Settings** menu in the Azure portal.":::
-
-1. From the **Settings** menu in the left side panel, choose **Service Endpoints** and, in the main window, check that your virtual network subnet is included in the `Microsoft.CognitiveServices` list.
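-
-Optionally, you can confirm programmatically that the Translator resource accepts requests from a VM inside an allowed subnet. The following is a minimal sketch, assuming a regional Translator resource key and region; it calls the standard Translator Text API v3 `translate` endpoint, and the placeholder values are illustrative.
-
-```python
-# Run from a VM in an allowed subnet to confirm network access to Translator.
-# <your-key> and <your-region> are placeholders for your resource's key and region.
-import requests
-
-endpoint = "https://api.cognitive.microsofttranslator.com/translate"
-params = {"api-version": "3.0", "to": "de"}
-headers = {
-    "Ocp-Apim-Subscription-Key": "<your-key>",
-    "Ocp-Apim-Subscription-Region": "<your-region>",
-    "Content-Type": "application/json",
-}
-body = [{"Text": "Hello, world!"}]
-
-response = requests.post(endpoint, params=params, headers=headers, json=body, timeout=10)
-# A 200 response means the request was allowed; a 403 or a timeout suggests the
-# calling subnet isn't allowed in the resource's networking settings.
-print(response.status_code, response.text)
-```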
-
-## Use the Custom Translator portal
-
-The following table describes Custom Translator project accessibility per Translator resource **Networking** → **Firewalls and virtual networks** security setting:
-
- :::image type="content" source="../media/how-to/allow-network-access.png" alt-text="Screenshot of allowed network access section in the Azure portal.":::
-
-> [!IMPORTANT]
- > If you configure **Selected Networks and Private Endpoints** via the **Networking** → **Firewalls and virtual networks** tab, you can't use the Custom Translator portal with your Translator resource. However, you can still use the Translator resource outside of the Custom Translator portal.
-
-| Translator resource network security setting | Custom Translator portal accessibility |
-|--|--|
-| All networks | No restrictions |
-| Selected Networks and Private Endpoints | Accessible from allowed VNET IP addresses |
-| Disabled | Not accessible |
-
-To use Custom Translator without relaxing network access restrictions on your production Translator resource, consider this workaround:
-
-* Create another Translator resource for development that can be used on a public network.
-
-* Prepare your custom model in the Custom Translator portal on the development resource.
-
-* Copy the model from your development resource to your production resource using the [Custom Translator non-interactive REST API](https://microsofttranslator.github.io/CustomTranslatorApiSamples/): `workspaces` → `copy authorization and models` → `copy functions`.
-
-Congratulations! You learned how to use Azure VNet service endpoints with Custom Translator.
-
-## Learn more
-
-Visit the [**Custom Translator API**](https://microsofttranslator.github.io/CustomTranslatorApiSamples/) page to view our non-interactive REST APIs.
cognitive-services Publish Model https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Translator/custom-translator/how-to/publish-model.md
- Title: Publish a custom model-
-description: This article explains how to publish a custom model using the Azure Cognitive Services Custom Translator.
---- Previously updated : 11/04/2022---
-# Publish a custom model
-
-Publishing your model makes it available for use with the Translator API. A project might have one or many successfully trained models. You can only publish one model per project; however, you can publish a model to one or multiple regions depending on your needs. For more information, see [Translator pricing](https://azure.microsoft.com/pricing/details/cognitive-services/translator/#pricing).
-
-## Publish your trained model
-
-You can publish one model per project to one or multiple regions.
-1. Select the **Publish model** blade.
-
-1. Select *en-de with sample data* and select **Publish**.
-
-1. Check the desired region(s).
-
-1. Select **Publish**. The status should transition from _Deploying_ to _Deployed_.
-
- :::image type="content" source="../media/quickstart/publish-model.png" alt-text="Screenshot illustrating the publish model blade.":::
-
-## Replace a published model
-
-To replace a published model, exchange it with a different model in the same region(s):
-
-1. Select the replacement model.
-
-1. Select **Publish**.
-
-1. Select **Publish** once more in the **Publish model** dialog window.
-
-## Next steps
-
-> [!div class="nextstepaction"]
-> [Learn how to translate with custom models](../quickstart.md)
cognitive-services Test Your Model https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Translator/custom-translator/how-to/test-your-model.md
- Title: View custom model details and test translation-
-description: How to test your custom model BLEU score and evaluate translations
---- Previously updated : 11/04/2022---
-# Test your model
-
-Once your model has successfully trained, you can use its test translations to evaluate its quality. To make an informed decision about whether to use our standard model or your custom model, evaluate the delta between your custom model [**BLEU score**](#bleu-score) and our standard model **Baseline BLEU**. If your models have been trained on a narrow domain, and your training data is consistent with the test data, you can expect a high BLEU score.
-
-## BLEU score
-
-BLEU (Bilingual Evaluation Understudy) is an algorithm for evaluating the precision or accuracy of text that has been machine translated from one language to another. Custom Translator uses the BLEU metric as one way of conveying translation accuracy.
-
-A BLEU score is a number between zero and 100. A score of zero indicates a low-quality translation where nothing in the translation matched the reference. A score of 100 indicates a perfect translation that is identical to the reference. It's not necessary to attain a score of 100; a BLEU score between 40 and 60 indicates a high-quality translation.
-
-[Read more](../concepts/bleu-score.md?WT.mc_id=aiml-43548-heboelma)
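-
-To make the metric concrete, here's a small illustration using the open-source `sacrebleu` package. This sketch isn't part of Custom Translator, which computes the BLEU and Baseline BLEU scores for you; the sentences below are invented examples.
-
-```python
-# Illustration only: shows how a corpus-level BLEU score reacts to differences
-# between system output and reference translations. Requires: pip install sacrebleu
-import sacrebleu
-
-hypotheses = ["The cat sat on the mat.", "He reads the report every morning."]
-references = ["The cat sat on the mat.", "He reviews the report each morning."]
-
-bleu = sacrebleu.corpus_bleu(hypotheses, [references])
-print(f"BLEU: {bleu.score:.1f}")  # 100 only if every hypothesis matches its reference exactly
-```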
-
-## Model details
-
-1. Select the **Model details** blade.
-
-1. Select the model name. Review the training date/time, total training time, number of sentences used for training, tuning, testing, and dictionary. Check whether the system generated the test and tuning sets. You'll use the `Category ID` to make translation requests.
-
-1. Evaluate the model [BLEU](../beginners-guide.md#what-is-a-bleu-score) score. Review the test set: the **BLEU score** is the custom model score and the **Baseline BLEU** is the pre-trained baseline model used for customization. A higher **BLEU score** means higher translation quality using the custom model.
-
- :::image type="content" source="../media/quickstart/model-details.png" alt-text="Screenshot illustrating the model detail.":::
-
-## Test quality of your model's translation
-
-1. Select the **Test model** blade.
-
-1. Select the model **Name**.
-
-1. Manually evaluate translations from your **Custom model** and the **Baseline model** (our pre-trained baseline used for customization) against the **Reference** (target translation from the test set).
-
-1. If you're satisfied with the training results, place a deployment request for the trained model.
-
-## Next steps
-
-- Learn [how to publish/deploy a custom model](publish-model.md).
-- Learn [how to translate documents with a custom model](translate-with-custom-model.md).
cognitive-services Train Custom Model https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Translator/custom-translator/how-to/train-custom-model.md
- Title: Train model-
-description: How to train a custom model
---- Previously updated : 11/04/2022---
-# Train a custom model
-
-A model provides translations for a specific language pair. The outcome of a successful training is a model. To train a custom model, three mutually exclusive document types are required: training, tuning, and testing. If only training data is provided when queuing a training, Custom Translator automatically assembles tuning and testing data by using a random subset of sentences from your training documents and excluding these sentences from the training data itself. A minimum of 10,000 parallel training sentences is required to train a full model.
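-
-Before queuing a training, it can help to confirm that your parallel files line up and meet the sentence minimum. The following sketch is an optional check under the assumption that your training data is plain text with one sentence per line (as in the sample English-German dataset); the file names are placeholders for your own files.
-
-```python
-# Optional sanity check for parallel training files (one sentence per line assumed).
-# File names are placeholders; point them at your own dataset.
-from pathlib import Path
-
-MIN_SENTENCES = 10_000  # minimum parallel sentences for full training
-
-def count_nonempty_lines(path: str) -> int:
-    return sum(1 for line in Path(path).read_text(encoding="utf-8").splitlines() if line.strip())
-
-source_count = count_nonempty_lines("sample-English-German-Training-en.txt")
-target_count = count_nonempty_lines("sample-English-German-Training-de.txt")
-
-if source_count != target_count:
-    print(f"Warning: source ({source_count}) and target ({target_count}) line counts differ.")
-if source_count < MIN_SENTENCES:
-    print(f"Warning: {source_count} sentence pairs found; full training requires at least {MIN_SENTENCES}.")
-```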
-
-## Create model
-
-1. Select the **Train model** blade.
-
-1. Type the **Model name**.
-
-1. Keep the default **Full training** selected or select **Dictionary-only training**.
-
- >[!Note]
- >Full training displays all uploaded document types. Dictionary-only displays dictionary documents only.
-
-1. Under **Select documents**, select the documents you want to use to train the model, for example, `sample-English-German` and review the training cost associated with the selected number of sentences.
-
-1. Select **Train now**.
-
-1. Select **Train** to confirm.
-
- >[!Note]
-    >**Notifications** displays model training in progress, e.g., the **Submitting data** state. Training a model takes a few hours, depending on the number of selected sentences.
-
- :::image type="content" source="../media/quickstart/train-model.png" alt-text="Screenshot illustrating the train model blade.":::
-
-## When to select dictionary-only training
-
-For better results, we recommend letting the system learn from your training data. However, when you don't have enough parallel sentences to meet the 10,000 minimum requirement, or when sentences and compound nouns must be rendered as-is, use dictionary-only training. Your model typically completes training much faster than with full training. The resulting models use the baseline models for translation along with the dictionaries you've added. You won't see BLEU scores or get a test report.
-
-> [!Note]
->Custom Translator doesn't sentence-align dictionary files. Therefore, it is important that there are an equal number of source and target phrases/sentences in your dictionary documents and that they are precisely aligned. If not, the document upload will fail.
-
-## Model details
-
-1. After successful model training, select the **Model details** blade.
-
-1. Select the **Model Name** to review training date/time, total training time, number of sentences used for training, tuning, testing, dictionary, and whether the system generated the test and tuning sets. You'll use `Category ID` to make translation requests.
-
-1. Evaluate the model [BLEU score](../beginners-guide.md#what-is-a-bleu-score). Review the test set: the **BLEU score** is the custom model score and the **Baseline BLEU** is the pre-trained baseline model used for customization. A higher **BLEU score** means higher translation quality using the custom model.
-
- :::image type="content" source="../media/quickstart/model-details.png" alt-text="Screenshot illustrating model details fields.":::
-
-## Duplicate model
-
-1. Select the **Model details** blade.
-
-1. Hover over the model name and check the selection button.
-
-1. Select **Duplicate**.
-
-1. Fill in **New model name**.
-
-1. Keep **Train immediately** checked if no further data will be selected or uploaded; otherwise, check **Save as draft**.
-
-1. Select **Save**.
-
- > [!Note]
- >
- > If you save the model as `Draft`, **Model details** is updated with the model name in `Draft` status.
- >
-    > To add more documents, select the model name and follow the `Create model` section above.
-
- :::image type="content" source="../media/how-to/duplicate-model.png" alt-text="Screenshot illustrating the duplicate model blade.":::
-
-## Next steps
-
-- Learn [how to test and evaluate model quality](test-your-model.md).
-- Learn [how to publish model](publish-model.md).
-- Learn [how to translate with custom models](translate-with-custom-model.md).
cognitive-services Translate With Custom Model https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Translator/custom-translator/how-to/translate-with-custom-model.md
- Title: Translate text with a custom model-
-description: How to make translation requests using custom models published with the Azure Cognitive Services Custom Translator.
---- Previously updated : 11/04/2022---
-# Translate text with a custom model
-
-After you publish your custom model, you can access it with the Translator API by using the `Category ID` parameter.
-
-## How to translate
-
-1. Use the `Category ID` when making a custom translation request via Microsoft Translator [Text API V3](../../reference/v3-0-translate.md?tabs=curl). The `Category ID` is created by concatenating the WorkspaceID, project label, and category code. Pass the `Category ID` as the `category` query parameter to the Text Translation API to get custom translations (a sample request follows this list).
-
- ```http
- https://api.cognitive.microsofttranslator.com/translate?api-version=3.0&to=de&category=a2eb72f9-43a8-46bd-82fa-4693c8b64c3c-TECH
-
- ```
-
- More information about the Translator Text API can be found on the [Translator API Reference](../../reference/v3-0-translate.md) page.
-
-1. You may also want to download and install our free [DocumentTranslator app for Windows](https://github.com/MicrosoftTranslator/DocumentTranslation/releases).
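-
-As a concrete illustration of the request in step 1, the following sketch calls the Text API V3 `translate` endpoint with the `category` parameter set to a `Category ID`. It's a minimal example, not an official sample: the key and region values are placeholders, and the `Category ID` shown is the same illustrative value used in the URL above.
-
-```python
-# Request a custom translation by passing your Category ID as the category parameter.
-# <your-key> and <your-region> are placeholders; replace the Category ID with your own.
-import requests
-
-url = "https://api.cognitive.microsofttranslator.com/translate"
-params = {
-    "api-version": "3.0",
-    "to": "de",
-    "category": "a2eb72f9-43a8-46bd-82fa-4693c8b64c3c-TECH",
-}
-headers = {
-    "Ocp-Apim-Subscription-Key": "<your-key>",
-    "Ocp-Apim-Subscription-Region": "<your-region>",
-    "Content-Type": "application/json",
-}
-body = [{"Text": "The firmware update resolves the boot issue."}]
-
-response = requests.post(url, params=params, headers=headers, json=body)
-print(response.json()[0]["translations"][0]["text"])
-```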
-
-## Next steps
-
-> [!div class="nextstepaction"]
-> [Learn more about building and publishing custom models](../beginners-guide.md)
cognitive-services Key Terms https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Translator/custom-translator/key-terms.md
- Title: Key terms - Custom Translator-
-description: List of key terms used in Custom Translator articles.
---- Previously updated : 11/01/2022----
-# Custom Translator key terms
-
-The following table presents a list of key terms that you may find as you work with the [Custom Translator](https://portal.customtranslator.azure.ai).
-
-| Word or Phrase|Definition|
-||--|
-| Source Language | The source language is the starting language that you want to convert to another language (the "target").|
-| Target Language| The target language is the language that you want the machine translation to provide after it receives the source language. |
-| Monolingual File | A monolingual file has a single language not paired with another file of a different language. |
-| Parallel Files | A parallel file is a combination of two files with corresponding text. One file has the source language. The other has the target language.|
-| Sentence Alignment| A parallel dataset must have sentences aligned to sentences that represent the same text in both languages. For instance, in a source parallel file the first sentence should, in theory, map to the first sentence in the target parallel file.|
-| Aligned Text | One of the most important steps of file validation is to align the sentences in the parallel documents. Things are expressed differently in different languages, and different languages have different word orders. This step aligns the sentences with the same content so that they can be used for training. A low sentence alignment indicates there might be something wrong with one or both of the files. |
-| Word Breaking/ Unbreaking | Word breaking is the function of marking the boundaries between words. Many writing systems use a space to denote the boundary between words. Word unbreaking refers to the removal of any visible marker that may have been inserted between words in a preceding step. |
-| Delimiters | Delimiters are the ways that a sentence is divided into segments, or that the boundaries between sentences are marked. For instance, in English, spaces delimit words, colons and semicolons delimit clauses, and periods delimit sentences. |
-| Training Files | A training file is used to teach the machine translation system how to map from one language (the source) to a target language (the target). The more data you provide, the better the system will perform. |
-| Tuning Files | These files are often randomly derived from the training set (if you don't select a tuning set). The sentences are autoselected and used to tune the system and ensure that it's functioning properly. If you wish to create a general-purpose translation model and create your own tuning files, make sure they're a random set of sentences across domains. |
-| Testing Files| These files are often derived files, randomly selected from the training set (if you don't select any test set). The purpose of these sentences is to evaluate the translation model's accuracy. To make sure the system accurately translates these sentences, you may wish to create a testing set and upload it to the translator. Doing so will ensure that the sentences are used in the system's evaluation (the generation of a BLEU score). |
-| Combo file | A type of file in which the source and translated sentences are contained in the same file. Supported file formats (TMX, XLIFF, XLF, ICI, and XLSX). |
-| Archive file | A file that contains other files. Supported file formats (zip, gz, tgz). |
-| BLEU Score | [BLEU](concepts/bleu-score.md) is the industry standard method for evaluating the "precision" or accuracy of the translation model. Though other methods of evaluation exist, Microsoft Translator relies on the BLEU method to report accuracy to Project Owners. |
cognitive-services Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Translator/custom-translator/overview.md
- Title: What is Custom Translator?-
-description: Custom Translator offers similar capabilities to what Microsoft Translator Hub does for Statistical Machine Translation (SMT), but exclusively for Neural Machine Translation (NMT) systems.
---- Previously updated : 11/04/2022---
-# What is Custom Translator?
-
-Custom Translator is a feature of the Microsoft Translator service, which enables enterprises, app developers, and language service providers to build customized neural machine translation (NMT) systems. The customized translation systems seamlessly integrate into existing applications, workflows, and websites.
-
-Translation systems built with [Custom Translator](https://portal.customtranslator.azure.ai) are available through the [Microsoft Translator Text API V3](../reference/v3-0-translate.md?tabs=curl), the same cloud-based, secure, high performance system powering billions of translations every day.
-
-The platform enables users to build and publish custom translation systems to and from English. Custom Translator supports more than three dozen languages that map directly to the languages available for NMT. For a complete list, *see* [Translator language support](../language-support.md).
-
-This documentation contains the following article types:
-
-* [**Quickstarts**](./quickstart.md) are getting-started instructions to guide you through making requests to the service.
-* [**How-to guides**](./how-to/create-manage-workspace.md) contain instructions for using the feature in more specific or customized ways.
-
-## Features
-
-Custom Translator provides different features to build a custom translation system and later access it.
-
-|Feature |Description |
-|||
-|[Apply neural machine translation technology](https://www.microsoft.com/translator/blog/2016/11/15/microsoft-translator-launching-neural-network-based-translations-for-all-its-speech-languages/) | Improve your translation by applying neural machine translation (NMT) provided by Custom Translator. |
-|[Build systems that know your business terminology](./beginners-guide.md) | Customize and build translation systems using parallel documents that understand the terminology used in your own business and industry. |
-|[Use a dictionary to build your models](./how-to/train-custom-model.md#when-to-select-dictionary-only-training) | If you don't have a training data set, you can train a model with only dictionary data. |
-|[Collaborate with others](./how-to/create-manage-workspace.md#manage-workspace-settings) | Collaborate with your team by sharing your work with different people. |
-|[Access your custom translation model](./how-to/translate-with-custom-model.md) | Your custom translation model can be accessed anytime by your existing applications/programs via the Microsoft Translator Text API V3. |
-
-## Get better translations
-
-Microsoft Translator released [Neural Machine Translation (NMT)](https://www.microsoft.com/translator/blog/2016/11/15/microsoft-translator-launching-neural-network-based-translations-for-all-its-speech-languages/) in 2016. NMT provided major advances in translation quality over the industry-standard [Statistical Machine Translation (SMT)](https://en.wikipedia.org/wiki/Statistical_machine_translation) technology. Because NMT better captures the context of full sentences before translating them, it provides higher quality, more human-sounding, and more fluent translations. [Custom Translator](https://portal.customtranslator.azure.ai) provides NMT for your custom models, resulting in better translation quality.
-
-You can use previously translated documents to build a translation system. These documents include domain-specific terminology and style, so the resulting system translates your content better than a standard translation system does. Users can upload ALIGN, PDF, LCL, HTML, HTM, XLF, TMX, XLIFF, TXT, DOCX, and XLSX documents.
-
-Custom Translator also accepts data that's parallel at the document level to make data collection and preparation more effective. If users have access to versions of the same content in multiple languages but in separate documents, Custom Translator will be able to automatically match sentences across documents.
-
-If the appropriate type and amount of training data is supplied, it's not uncommon to see [BLEU score](concepts/bleu-score.md) gains between 5 and 10 points by using Custom Translator.
-
-## Be productive and cost effective
-
-With [Custom Translator](https://portal.customtranslator.azure.ai), training and deploying a custom system doesn't require any programming skills.
-
-The secure [Custom Translator](https://portal.customtranslator.azure.ai) portal enables users to upload training data, train systems, test systems, and deploy them to a production environment through an intuitive user interface. The system will then be available for use at scale within a few hours (actual time depends on training data size).
-
-[Custom Translator](https://portal.customtranslator.azure.ai) can also be accessed programmatically through a [dedicated API](https://custom-api.cognitive.microsofttranslator.com/swagger/). The API allows users to manage creating or updating trainings through their own app or web service.
-
-The cost of using a custom model to translate content is based on the user's Translator Text API pricing tier. See the Cognitive Services [Translator Text API pricing webpage](https://azure.microsoft.com/pricing/details/cognitive-services/translator-text-api/)
-for pricing tier details.
-
-## Securely translate anytime, anywhere on all your apps and services
-
-Custom systems can be seamlessly accessed and integrated into any product or business workflow and on any device via the Microsoft Translator Text REST API.
-
-## Next steps
-
-* Read about [pricing details](https://azure.microsoft.com/pricing/details/cognitive-services/translator-text-api/).
-
-* With [Quickstart](./quickstart.md) learn to build a translation model in Custom Translator.
cognitive-services Platform Upgrade https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Translator/custom-translator/platform-upgrade.md
- Title: "Platform upgrade - Custom Translator"-
-description: Custom Translator v1.0 upgrade
---- Previously updated : 06/01/2023---
-# Custom Translator platform upgrade
-
-> [!CAUTION]
->
-> On June 02, 2023, Microsoft will retire the Custom Translator v1.0 model platform. Existing v1.0 models must migrate to the new platform for continued processing and support.
-
-Following measured and consistent high-quality results using models trained on the Custom Translator new platform, the v1.0 platform is retiring. The new Custom Translator platform delivers significant improvements in many domains compared to both standard and Custom v1.0 platform translations. Migrate your v1.0 models to the new platform by June 02, 2023.
-
-## Custom Translator v1.0 upgrade timeline
-
-* **May 01, 2023** → Custom Translator v1.0 model publishing ends. There's no downtime during the v1.0 model migration. All model publishing and in-flight translation requests will continue without disruption until June 02, 2023.
-
-* **May 01, 2023 through June 02, 2023** → Customers voluntarily migrate to new platform models.
-
-* **June 08, 2023** → Remaining v1.0 published models migrate automatically and are published by the Custom Translator team.
-
-## Upgrade to new platform
-
-> [!IMPORTANT]
->
-> * Starting **May 01, 2023** the upgrade wizard and workspace banner will be displayed in the Custom Translator portal indicating that you have v1.0 models to upgrade.
-> * The banner contains a **Select** button that takes you to the upgrade wizard where a list of all your v1.0 models available for upgrade are displayed.
-> * Select any or all of your v1.0 models then select **Train** to start new platform model training.
-
-* **Check to see if you have published v1.0 models**. After signing in to the Custom Translator portal, you'll see a message indicating that you have v1.0 models to upgrade. You can also check to see if a current workspace has v1.0 models by selecting **Workspace settings** and scrolling to the bottom of the page.
-
-* **Use the upgrade wizard**. Follow the steps listed in **Upgrade to the latest version** wizard. Depending on your training data size, it may take from a few hours to a full day to upgrade your models to the new platform.
-
-## Unpublished and opt-out published models
-
-* For unpublished models, save the model data (training, testing, dictionary) and delete the project.
-
-* For published models that you don't want to upgrade, save your model data (training, testing, dictionary), unpublish the model, and delete the project.
-
-## Next steps
-
-For more support, visit [Azure Cognitive Services support and help options](../../cognitive-services-support-options.md).
cognitive-services Quickstart https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Translator/custom-translator/quickstart.md
- Title: "Quickstart: Build, deploy, and use a custom model - Custom Translator"-
-description: A step-by-step guide to building a translation system using the Custom Translator portal v2.
---- Previously updated : 07/05/2023---
-# Quickstart: Build, publish, and translate with custom models
-
-Translator is a cloud-based neural machine translation service that is part of the Azure Cognitive Services family of REST APIs and can be used with any operating system. Translator powers many Microsoft products and services used by thousands of businesses worldwide to perform language translation and other language-related operations. In this quickstart, learn to build custom solutions for your applications across all [supported languages](../language-support.md).
-
-## Prerequisites
-
- To use the [Custom Translator](https://portal.customtranslator.azure.ai/) portal, you need the following resources:
-
-* A [Microsoft account](https://signup.live.com).
-
-* Azure subscription - [Create one for free](https://azure.microsoft.com/free/cognitive-services/)
-* Once you have an Azure subscription, [create a Translator resource](https://portal.azure.com/#create/Microsoft.CognitiveServicesTextTranslation) in the Azure portal to get your key and endpoint. After it deploys, select **Go to resource**.
- * You need the key and endpoint from the resource to connect your application to the Translator service. Paste your key and endpoint into the code later in the quickstart. You can find these values on the Azure portal **Keys and Endpoint** page:
-
- :::image type="content" source="../media/keys-and-endpoint-portal.png" alt-text="Screenshot: Azure portal keys and endpoint page.":::
-
-For more information, *see* [how to create a Translator resource](../how-to-create-translator-resource.md).
-
-## Custom Translator portal
-
-Once you have the above prerequisites, sign in to the [Custom Translator](https://portal.customtranslator.azure.ai/) portal to create workspaces, build projects, upload files, train models, and publish your custom solution.
-
-You can read an overview of translation and custom translation, learn some tips, and watch a getting started video in the [Azure AI technical blog](https://techcommunity.microsoft.com/t5/azure-ai/customize-a-translation-to-make-sense-in-a-specific-context/ba-p/2811956).
-
-## Process summary
-
-1. [**Create a workspace**](#create-a-workspace). A workspace is a work area for composing and building your custom translation system. A workspace can contain multiple projects, models, and documents. All the work you do in Custom Translator is done inside a specific workspace.
-
-1. [**Create a project**](#create-a-project). A project is a wrapper for models, documents, and tests. Each project includes all documents that are uploaded into that workspace with the correct language pair. For example, if you have both an English-to-Spanish project and a Spanish-to-English project, the same documents are included in both projects.
-
-1. [**Upload parallel documents**](#upload-documents). Parallel documents are pairs of documents where one (target) is the translation of the other (source). One document in the pair contains sentences in the source language and the other document contains sentences translated into the target language. It doesn't matter which language is marked as "source" and which language is marked as "target"; a parallel document can be used to train a translation system in either direction.
-
-1. [**Train your model**](#train-your-model). A model is the system that provides translation for a specific language pair. The outcome of a successful training is a model. When you train a model, three mutually exclusive document types are required: training, tuning, and testing. If only training data is provided when queuing a training, Custom Translator automatically assembles tuning and testing data. It uses a random subset of sentences from your training documents, and excludes these sentences from the training data itself. A minimum of 10,000 parallel sentences is required to train a model.
-
-1. [**Test (human evaluate) your model**](#test-your-model). The testing set is used to compute the [BLEU](beginners-guide.md#what-is-a-bleu-score) score. This score indicates the quality of your translation system.
-
-1. [**Publish (deploy) your trained model**](#publish-your-model). Your custom model is made available for runtime translation requests.
-
-1. [**Translate text**](#translate-text). Use the cloud-based, secure, high performance, highly scalable Microsoft Translator [Text API V3](../reference/v3-0-translate.md?tabs=curl) to make translation requests.
-
-## Create a workspace
-
-1. After you sign in to Custom Translator, you'll be asked for permission to read your profile from the Microsoft identity platform to request your user access token and refresh token. Both tokens are needed for authentication and to ensure that you aren't signed out during your live session or while training your models. </br>Select **Yes**.
-
- :::image type="content" source="media/quickstart/first-time-user.png" alt-text="Screenshot illustrating how to create a workspace.":::
-
-1. Select **My workspaces**.
-
-1. Select **Create a new workspace**.
-
-1. Type _Contoso MT models_ for **Workspace name** and select **Next**.
-
-1. Select "Global" for **Select resource region** from the dropdown list.
-
-1. Copy/paste your Translator Services key.
-
-1. Select **Next**.
-
-1. Select **Done**.
-
- >[!Note]
- > Region must match the region that was selected during the resource creation. You can use **KEY 1** or **KEY 2.**
-
- :::image type="content" source="media/quickstart/resource-key.png" alt-text="Screenshot illustrating the resource key.":::
-
- :::image type="content" source="media/quickstart/create-workspace-1.png" alt-text="Screenshot illustrating workspace creation.":::
-
-## Create a project
-
-Once the workspace is created successfully, you're taken to the **Projects** page.
-
-You create an English-to-German project to train a custom model with only a [training](training-and-model.md#training-document-type-for-custom-translator) document type.
-
-1. Select **Create project**.
-
-1. Type *English-to-German* for **Project name**.
-
-1. Select *English (en)* as **Source language** from the dropdown list.
-
-1. Select *German (de)* as **Target language** from the dropdown list.
-
-1. Select *General* for **Domain** from the dropdown list.
-
-1. Select **Create project**.
-
- :::image type="content" source="media/quickstart/create-project.png" alt-text="Screenshot illustrating how to create a project.":::
-
-## Upload documents
-
-In order to create a custom model, you need to upload all or a combination of [training](training-and-model.md#training-document-type-for-custom-translator), [tuning](training-and-model.md#tuning-document-type-for-custom-translator), [testing](training-and-model.md#testing-dataset-for-custom-translator), and [dictionary](concepts/dictionaries.md) document types for customization.
-
->[!Note]
-> You can use our sample training, phrase and sentence dictionaries dataset, [Customer sample English-to-German datasets](https://github.com/MicrosoftTranslator/CustomTranslatorSampleDatasets), for this quickstart. However, for production, it's better to upload your own training dataset.
-
-1. Select *English-to-German* project name.
-
-1. Select **Manage documents** from the left navigation menu.
-
-1. Select **Add document set**.
-
-1. Check the **Training set** box and select **Next**.
-
-1. Keep **Parallel documents** checked and type *sample-English-German*.
-
-1. Under the **Source (English - EN) file**, select **Browse files** and select *sample-English-German-Training-en.txt*.
-
-1. Under the **Target (German - DE) file**, select **Browse files** and select *sample-English-German-Training-de.txt*.
-
-1. Select **Upload**.
-
- >[!Note]
- >You can upload the sample phrase and sentence dictionaries dataset. This step is left for you to complete.
-
- :::image type="content" source="media/quickstart/upload-model.png" alt-text="Screenshot illustrating how to upload documents.":::
-
-## Train your model
-
-Now you're ready to train your English-to-German model.
-
-1. Select **Train model** from the left navigation menu.
-
-1. Type *en-de with sample data* for **Model name**.
-
-1. Keep **Full training** checked.
-
-1. Under **Select documents**, check *sample-English-German* and review the training cost associated with the selected number of sentences.
-
-1. Select **Train now**.
-
-1. Select **Train** to confirm.
-
- >[!Note]
-    >**Notifications** displays model training in progress, e.g., the **Submitting data** state. Training a model takes a few hours, depending on the number of selected sentences.
-
- :::image type="content" source="media/quickstart/train-model.png" alt-text="Screenshot illustrating how to create a model.":::
-
-1. After successful model training, select **Model details** from the left navigation menu.
-
-1. Select the model name *en-de with sample data*. Review training date/time, total training time, number of sentences used for training, tuning, testing, and dictionary. Check whether the system generated the test and tuning sets. You use the `Category ID` to make translation requests.
-
-1. Evaluate the model [BLEU](beginners-guide.md#what-is-a-bleu-score) score. The test set **BLEU score** is the custom model score and **Baseline BLEU** is the pretrained baseline model used for customization. A higher **BLEU score** means higher translation quality using the custom model.
-
- >[!Note]
-    >If you train with our shared customer sample datasets, the BLEU score will differ from the one shown in the image.
-
- :::image type="content" source="media/quickstart/model-details.png" alt-text="Screenshot illustrating model details.":::
-
-## Test your model
-
-Once your training has completed successfully, inspect the test set translated sentences.
-
-1. Select **Test model** from the left navigation menu.
-2. Select "en-de with sample data"
-3. Human evaluate translation from **New model** (custom model), and **Baseline model** (our pretrained baseline used for customization) against **Reference** (target translation from the test set)
-
-## Publish your model
-
-Publishing your model makes it available for use with the Translator API. A project might have one or many successfully trained models. You can only publish one model per project; however, you can publish a model to one or multiple regions depending on your needs. For more information, see [Translator pricing](https://azure.microsoft.com/pricing/details/cognitive-services/translator/#pricing).
-
-1. Select **Publish model** from the left navigation menu.
-
-1. Select *en-de with sample data* and select **Publish**.
-
-1. Check the desired region(s).
-
-1. Select **Publish**. The status should transition from _Deploying_ to _Deployed_.
-
- :::image type="content" source="media/quickstart/publish-model.png" alt-text="Screenshot illustrating how to deploy a trained model.":::
-
-## Translate text
-
-1. Developers should use the `Category ID` when making translation requests with Microsoft Translator [Text API V3](../reference/v3-0-translate.md?tabs=curl). More information about the Translator Text API can be found on the [API Reference](../reference/v3-0-reference.md) webpage.
-
-1. Business users may want to download and install our free [DocumentTranslator app for Windows](https://github.com/MicrosoftTranslator/DocumentTranslation/releases).
-
-## Next steps
-
-> [!div class="nextstepaction"]
-> [Learn how to manage workspaces](how-to/create-manage-workspace.md)
cognitive-services Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Translator/custom-translator/release-notes.md
- Title: "Release notes - Custom Translator"-
-description: Custom Translator releases, improvements, bug fixes, and known issues.
---- Previously updated : 06/01/2023----
-# Custom Translator release notes
-
-This page presents the latest feature, improvement, bug fix, and known issue release notes for Custom Translator service.
-
-## 2023-June release
-
-### June 2023 new features and model updates
-
-#### Custom Translator platform upgrade
-
-&emsp; 🆕 ***Model Upgrade Wizard*** is now available in **Workspace settings** to help guide customers through the V1-model-upgrade-to-new-platform process. For more information, *see* [Custom Translator platform upgrade](platform-upgrade.md).
-
-#### Custom Translator copy model
-
-&emsp; 🆕 ***Copy Model*** is now available in **Model details** to enable the copying of models from one workspace to another. This feature enables model lifecycle management (development → testing → production) and/or scaling. For more information, *see* [Copy a custom model](how-to/copy-model.md).
-
-#### Restrict access to published models
-
- &emsp; Published model security is now enhanced and restricted access is now enabled within **Workspace settings** to allow only linked Translator resources to request translation.
-
-#### June language model updates
-
-&emsp; Current supported language pairs are listed in the following table. For higher quality, we encourage you to retrain your models accordingly. For more information, *see* [Language support](../language-support.md#custom-translator-language-pairs).
-
-|Source Language|Target Language|
-|:-|:-|
-| Czech (cs-cz) | English (en-us) |
-| Danish (da-dk) | English (en-us) |
-| German (de-&#8203;de) | English (en-us) |
-| Greek (el-gr) | English (en-us) |
-| English (en-us) | Arabic (ar-sa) |
-| English (en-us) | Czech (cs-cz) |
-| English (en-us) | Danish (da-dk) |
-| English (en-us) | German (de-&#8203;de) |
-| English (en-us) | Greek (el-gr) |
-| English (en-us) | Spanish (es-es) |
-| English (en-us) | French (fr-fr) |
-| English (en-us) | Hebrew (he-il) |
-| English (en-us) | Hindi (hi-in) |
-| English (en-us) | Croatian (hr-hr) |
-| English (en-us) | Hungarian (hu-hu) |
-| English (en-us) | Indonesian (id-id) |
-| English (en-us) | Italian (it-it) |
-| English (en-us) | Japanese (ja-jp) |
-| English (en-us) | Korean (ko-kr) |
-| English (en-us) | Lithuanian (lt-lt) |
-| English (en-us) | Latvian (lv-lv) |
-| English (en-us) | Norwegian (nb-no) |
-| English (en-us) | Polish (pl-pl) |
-| English (en-us) | Portuguese (pt-pt) |
-| English (en-us) | Russian (ru-ru) |
-| English (en-us) | Slovak (sk-sk) |
-| English (en-us) | Swedish (sv-se) |
-| English (en-us) | Ukrainian (uk-ua) |
-| English (en-us) | Vietnamese (vi-vn) |
-| English (en-us) | Chinese Simplified (zh-cn) |
-| Spanish (es-es) | English (en-us) |
-| French (fr-fr) | English (en-us) |
-| Hindi (hi-in) | English (en-us) |
-| Hungarian (hu-hu) | English (en-us) |
-| Indonesian (id-id) | English (en-us) |
-| Italian (it-it) | English (en-us) |
-| Japanese (ja-jp) | English (en-us) |
-| Korean (ko-kr) | English (en-us) |
-| Norwegian (nb-no) | English (en-us) |
-| Dutch (nl-nl) | English (en-us) |
-| Polish (pl-pl) | English (en-us) |
-| Portuguese (pt-br) | English (en-us) |
-| Russian (ru-ru) | English (en-us) |
-| Swedish (sv-se) | English (en-us) |
-| Thai (th-th) | English (en-us) |
-| Turkish (tr-tr) | English (en-us) |
-| Vietnamese (vi-vn) | English (en-us) |
-| Chinese Simplified (zh-cn) | English (en-us) |
-
-## 2022-November release
-
-### November 2022 improvements and fixes
-
-#### Custom Translator stable GA v2.0 release
-
-* Custom Translator version v2.0 is generally available and ready for use in your production applications.
-
-* Upload history has been added to the workspace, next to Projects and Documents tabs.
-
-#### November language model updates
-
-* Language pairs are listed in the following table. We encourage you to retrain your models accordingly for higher quality.
-
-|Source Language|Target Language|
-|:-|:-|
-|Chinese Simplified (zh-Hans)|English (en-us)|
-|Chinese Traditional (zh-Hant)|English (en-us)|
-|Czech (cs)|English (en-us)|
-|Dutch (nl)|English (en-us)|
-|English (en-us)|Chinese Simplified (zh-Hans)|
-|English (en-us)|Chinese Traditional (zh-Hant)|
-|English (en-us)|Czech (cs)|
-|English (en-us)|Dutch (nl)|
-|English (en-us)|French (fr)|
-|English (en-us)|German (de)|
-|English (en-us)|Italian (it)|
-|English (en-us)|Polish (pl)|
-|English (en-us)|Romanian (ro)|
-|English (en-us)|Russian (ru)|
-|English (en-us)|Spanish (es)|
-|English (en-us)|Swedish (sv)|
-|German (de)|English (en-us)|
-|Italian (it)|English (en-us)|
-|Russian (ru)|English (en-us)|
-|Spanish (es)|English (en-us)|
-
-#### Security update
-
-* Custom Translator API preview REST API calls now require a user access token to authenticate.
-
-* Visit our GitHub repo for a [C# code sample](https://github.com/MicrosoftTranslator/CustomTranslator-API-CSharp).
-
-#### Fixes
-
-* Resolved document upload error that caused a blank page in the browser.
-
-* Applied functional modifications.
-
-## 2021-May release
-
-### May 2021 improvements and fixes
-
-* We added a new training pipeline to improve the custom model's generalization and its capacity to retain more customer terminology (words and phrases).
-
-* Refreshed Custom Translator baselines to fix a word alignment bug. See the following list of impacted language pairs.
-
-### Language pair list
-
-| Source Language | Target Language |
-|-|--|
-| Arabic (`ar`) | English (`en-us`)|
-| Brazilian Portuguese (`pt`) | English (`en-us`)|
-| Bulgarian (`bg`) | English (`en-us`)|
-| Chinese Simplified (`zh-Hans`) | English (`en-us`)|
-| Chinese Traditional (`zh-Hant`) | English (`en-us`)|
-| Croatian (`hr`) | English (`en-us`)|
-| Czech (`cs`) | English (`en-us`)|
-| Danish (`da`) | English (`en-us`)|
-| Dutch (`nl`) | English (`en-us`)|
-| English (`en-us`) | Arabic (`ar`)|
-| English (`en-us`) | Bulgarian (`bg`)|
-| English (`en-us`) | Chinese Simplified (`zh-Hans`)|
-| English (`en-us`) | Chinese Traditional (`zh-Hant`)|
-| English (`en-us`) | Czech (`cs`)|
-| English (`en-us`) | Danish (`da`)|
-| English (`en-us`) | Dutch (`nl`)|
-| English (`en-us`) | Estonian (`et`)|
-| English (`en-us`) | Fijian (`fj`)|
-| English (`en-us`) | Finnish (`fi`)|
-| English (`en-us`) | French (`fr`)|
-| English (`en-us`) | Greek (`el`)|
-| English (`en-us`) | Hindi (`hi`) |
-| English (`en-us`) | Hungarian (`hu`)|
-| English (`en-us`) | Icelandic (`is`)|
-| English (`en-us`) | Indonesian (`id`)|
-| English (`en-us`) | Inuktitut (`iu`)|
-| English (`en-us`) | Irish (`ga`)|
-| English (`en-us`) | Italian (`it`)|
-| English (`en-us`) | Japanese (`ja`)|
-| English (`en-us`) | Korean (`ko`)|
-| English (`en-us`) | Lithuanian (`lt`)|
-| English (`en-us`) | Norwegian (`nb`)|
-| English (`en-us`) | Polish (`pl`)|
-| English (`en-us`) | Romanian (`ro`)|
-| English (`en-us`) | Samoan (`sm`)|
-| English (`en-us`) | Slovak (`sk`)|
-| English (`en-us`) | Spanish (`es`)|
-| English (`en-us`) | Swedish (`sv`)|
-| English (`en-us`) | Tahitian (`ty`)|
-| English (`en-us`) | Thai (`th`)|
-| English (`en-us`) | Tongan (`to`)|
-| English (`en-us`) | Turkish (`tr`)|
-| English (`en-us`) | Ukrainian (`uk`) |
-| English (`en-us`) | Welsh (`cy`)|
-| Estonian (`et`) | English (`en-us`)|
-| Fijian (`fj`) | English (`en-us`)|
-| Finnish (`fi`) | English (`en-us`)|
-| German (`de`) | English (`en-us`)|
-| Greek (`el`) | English (`en-us`)|
-| Hungarian (`hu`) | English (`en-us`)|
-| Icelandic (`is`) | English (`en-us`)|
-| Indonesian (`id`) | English (`en-us`)|
-| Inuktitut (`iu`) | English (`en-us`)|
-| Irish (`ga`) | English (`en-us`)|
-| Italian (`it`) | English (`en-us`)|
-| Japanese (`ja`) | English (`en-us`)|
-| Kazakh (`kk`) | English (`en-us`)|
-| Korean (`ko`) | English (`en-us`)|
-| Lithuanian (`lt`) | English (`en-us`)|
-| Malagasy (`mg`) | English (`en-us`)|
-| Maori (`mi`) | English (`en-us`)|
-| Norwegian (`nb`) | English (`en-us`)|
-| Persian (`fa`) | English (`en-us`)|
-| Polish (`pl`) | English (`en-us`)|
-| Romanian (`ro`) | English (`en-us`)|
-| Russian (`ru`) | English (`en-us`)|
-| Slovak (`sk`) | English (`en-us`)|
-| Spanish (`es`) | English (`en-us`)|
-| Swedish (`sv`) | English (`en-us`)|
-| Tahitian (`ty`) | English (`en-us`)|
-| Thai (`th`) | English (`en-us`)|
-| Tongan (`to`) | English (`en-us`)|
-| Turkish (`tr`) | English (`en-us`)|
-| Vietnamese (`vi`) | English (`en-us`)|
-| Welsh (`cy`) | English (`en-us`)|
cognitive-services Document Sdk Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Translator/document-translation/document-sdk-overview.md
- Title: Azure Document Translation SDKs -
-description: Azure Document Translation software development kits (SDKs) expose Document Translation features and capabilities, using C#, Java, JavaScript, and Python programming language.
----- Previously updated : 06/05/2023-
-recommendations: false
--
-<!-- markdownlint-disable MD024 -->
-<!-- markdownlint-disable MD036 -->
-<!-- markdownlint-disable MD001 -->
-<!-- markdownlint-disable MD051 -->
-
-# Azure Document Translation SDK
-
-Azure Document Translation is a cloud-based REST API feature of the Azure Translator service. The Document Translation API enables quick and accurate source-to-target whole document translations, asynchronously, in supported languages and various file formats. The Document Translation software development kit (SDK) is a set of libraries and tools that enable you to easily integrate Document Translation REST API capabilities into your applications.
-
-## Supported languages
-
-Document Translation SDK supports the following programming languages:
-
-| Language → SDK version | Package|Client library| Supported API version|
-|:-:|:-|:-|:-|
-|[.NET/C# → 1.0.0](https://azuresdkdocs.blob.core.windows.net/$web/dotnet/Azure.AI.Translation.Document/1.0.0/index.html)| [NuGet](https://www.nuget.org/packages/Azure.AI.Translation.Document) | [Azure SDK for .NET](/dotnet/api/overview/azure/AI.Translation.Document-readme?view=azure-dotnet&preserve-view=true) | Document Translation v1.0|
-|[Python → 1.0.0](https://azuresdkdocs.blob.core.windows.net/$web/python/azure-ai-translation-document/1.0.0/index.html)|[PyPi](https://pypi.org/project/azure-ai-translation-document/1.0.0/)|[Azure SDK for Python](/python/api/overview/azure/ai-translation-document-readme?view=azure-python&preserve-view=true)|Document Translation v1.0|
-
-## Changelog and release history
-
-This section provides a version-based description of Document Translation feature and capability releases, changes, updates, and enhancements.
-
-### [C#/.NET](#tab/csharp)
-
-**Version 1.0.0 (GA)** </br>
-**2022-06-07**
-
-##### [**Changelog/Release History**](https://github.com/Azure/azure-sdk-for-net/blob/Azure.AI.Translation.Document_1.0.0/sdk/translation/Azure.AI.Translation.Document/CHANGELOG.md)
-
-##### [README](https://github.com/Azure/azure-sdk-for-net/blob/Azure.AI.Translation.Document_1.0.0/sdk/translation/Azure.AI.Translation.Document/README.md)
-
-##### [Samples](https://github.com/Azure/azure-sdk-for-net/tree/Azure.AI.Translation.Document_1.0.0/sdk/translation/Azure.AI.Translation.Document/samples)
-
-### [Python](#tab/python)
-
-**Version 1.0.0 (GA)** </br>
-**2022-06-07**
-
-##### [**Changelog/Release History**](https://github.com/Azure/azure-sdk-for-python/blob/azure-ai-translation-document_1.0.0/sdk/translation/azure-ai-translation-document/CHANGELOG.md)
-
-##### [README](https://github.com/Azure/azure-sdk-for-python/blob/azure-ai-translation-document_1.0.0/sdk/translation/azure-ai-translation-document/README.md)
-
-##### [Samples](https://github.com/Azure/azure-sdk-for-python/tree/azure-ai-translation-document_1.0.0/sdk/translation/azure-ai-translation-document/samples)
---
-## Use Document Translation SDK in your applications
-
-The Document Translation SDK enables the use and management of the Translation service in your application. The SDK builds on the underlying Document Translation REST APIs for use within your programming language paradigm. Choose your preferred programming language:
-
-### 1. Install the SDK client library
-
-### [C#/.NET](#tab/csharp)
-
-```dotnetcli
-dotnet add package Azure.AI.Translation.Document --version 1.0.0
-```
-
-```powershell
-Install-Package Azure.AI.Translation.Document -Version 1.0.0
-```
-
-### [Python](#tab/python)
-
-```python
-pip install azure-ai-translation-document==1.0.0
-```
---
-### 2. Import the SDK client library into your application
-
-### [C#/.NET](#tab/csharp)
-
-```csharp
-using System;
-using Azure.Core;
-using Azure.AI.Translation.Document;
-```
-
-### [Python](#tab/python)
-
-```python
-from azure.ai.translation.document import DocumentTranslationClient
-from azure.core.credentials import AzureKeyCredential
-```
---
-### 3. Authenticate the client
-
-### [C#/.NET](#tab/csharp)
-
-Create an instance of the `DocumentTranslationClient` object to interact with the Document Translation SDK, and then call methods on that client object to interact with the service. The `DocumentTranslationClient` is the primary interface for using the Document Translation client library. It provides both synchronous and asynchronous methods to perform operations.
-
-```csharp
-private static readonly string endpoint = "<your-custom-endpoint>";
-private static readonly string key = "<your-key>";
-
-DocumentTranslationClient client = new DocumentTranslationClient(new Uri(endpoint), new AzureKeyCredential(key));
-
-```
-
-### [Python](#tab/python)
-
-Create an instance of the `DocumentTranslationClient` object to interact with the Document Translation SDK, and then call methods on that client object to interact with the service. The `DocumentTranslationClient` is the primary interface for using the Document Translation client library. It provides both synchronous and asynchronous methods to perform operations.
-
-```python
-endpoint = "<endpoint>"
-key = "<apiKey>"
-
-client = DocumentTranslationClient(endpoint, AzureKeyCredential(key))
-
-```
---
-### 4. Build your application
-
-### [C#/.NET](#tab/csharp)
-
-The Document Translation service requires that you upload your files to an Azure Blob Storage source container (sourceUri), provide a target container where the translated documents can be written (targetUri), and include the target language code (targetLanguage).
-
-```csharp
-
-Uri sourceUri = new Uri("<your-source-container-url>");
-Uri targetUri = new Uri("<your-target-container-url>");
-string targetLanguage = "<target-language-code>";
-
-DocumentTranslationInput input = new DocumentTranslationInput(sourceUri, targetUri, targetLanguage);
-```
-
-### [Python](#tab/python)
-
-The Document Translation service requires that you upload your files to an Azure Blob Storage source container (sourceUri), provide a target container where the translated documents can be written (targetUri), and include the target language code (targetLanguage).
-
-```python
-sourceUrl = "<your-source container-url>"
-targetUrl = "<your-target-container-url>"
-targetLanguage = "<target-language-code>"
-
-poller = client.begin_translation(sourceUrl, targetUrl, targetLanguage)
-result = poller.result()
-
-```
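-
-If it helps to see what the operation returns, the following optional sketch extends the example above by iterating the completed result. The attribute names reflect the 1.0.0 client library's document status model as an assumption; verify them against the SDK reference for your version.
-
-```python
-# Continues the example above: inspect per-document outcomes after poller.result() returns.
-# Attribute names (status, translated_document_url, error) are assumed from the 1.0.0 SDK models.
-print(f"Overall status: {poller.status()}")
-
-for document in result:
-    print(f"Document {document.id}: {document.status}")
-    if document.status == "Succeeded":
-        print(f"  Translated document: {document.translated_document_url}")
-    elif document.error:
-        print(f"  Error {document.error.code}: {document.error.message}")
-```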
---
-## Help options
-
-The [Microsoft Q&A](/answers/tags/132/azure-translator) and [Stack Overflow](https://stackoverflow.com/questions/tagged/microsoft-translator) forums are available for the developer community to ask and answer questions about Azure Text Translation and other services. Microsoft monitors the forums and replies to questions that the community has yet to answer.
-
-> [!TIP]
-> To make sure that we see your Microsoft Q&A question, tag it with **`microsoft-translator`**.
-> To make sure that we see your Stack Overflow question, tag it with **`Azure Translator`**.
->
-
-## Next steps
-
->[!div class="nextstepaction"]
-> [**Document Translation SDK quickstart**](quickstarts/document-translation-sdk.md) [**Document Translation v1.0 REST API reference**](reference/rest-api-guide.md)
cognitive-services Faq https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Translator/document-translation/faq.md
- Title: Frequently asked questions - Document Translation-
-description: Get answers to frequently asked questions about Azure Cognitive Services Document Translation.
-------- Previously updated : 05/24/2022---
-<!-- markdownlint-disable MD001 -->
-
-# Answers to frequently asked questions
-
-## Document Translation: FAQ
-
-#### Should I specify the source language in a request?
-
-If the language of the content in the source document is known, we recommend specifying the source language in the request to get a better translation. If the document has content in multiple languages or the language is unknown, don't specify the source language in the request. Document Translation automatically identifies the language of each text segment and translates it.
-
-#### To what extent are the layout, structure, and formatting maintained?
-
-When text is translated from the source to the target language, the overall length of the translated text may differ from the source. The result could be a reflow of text across pages. The same fonts may not be available in both the source and target languages. In general, the same font style is applied in the target language to retain formatting closer to the source.
-
-#### Will the text in an image within a document get translated?
-
-No. The text in an image within a document won't get translated.
-
-#### Can Document Translation translate content from scanned documents?
-
-Yes. Document translation translates content from _scanned PDF_ documents.
-
-#### Will my document be translated if it's password protected?
-
-No. If your scanned or text-embedded PDFs are password-locked, you must remove the lock before submission.
-
-#### If I'm using managed identities, do I also need a SAS token URL?
-
-No. Don't include SAS token URLs; your requests will fail. Managed identities eliminate the need for you to include shared access signature (SAS) tokens with your HTTP requests.
cognitive-services Create Sas Tokens https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Translator/document-translation/how-to-guides/create-sas-tokens.md
- Title: Create shared access signature (SAS) tokens for storage containers and blobs
-description: How to create Shared Access Signature tokens (SAS) for containers and blobs with Microsoft Storage Explorer and the Azure portal.
---- Previously updated : 06/19/2023--
-# Create SAS tokens for your storage containers
-
-In this article, you learn how to create user delegation shared access signature (SAS) tokens, using the Azure portal or Azure Storage Explorer. User delegation SAS tokens are secured with Azure AD credentials. SAS tokens provide secure, delegated access to resources in your Azure storage account.
--
->[!TIP]
->
-> [Managed identities](create-use-managed-identities.md) provide an alternate method for you to grant access to your storage data without the need to include SAS tokens with your HTTP requests. *See* [Managed identities for Document Translation](create-use-managed-identities.md).
->
-> * You can use managed identities to grant access to any resource that supports Azure AD authentication, including your own applications.
-> * Using managed identities replaces the requirement for you to include shared access signature tokens (SAS) with your source and target URLs.
-> * There's no added cost to use managed identities in Azure.
-
-At a high level, here's how SAS tokens work:
-
-* Your application submits the SAS token to Azure Storage as part of a REST API request.
-
-* If the storage service verifies that the SAS is valid, the request is authorized.
-
-* If the SAS token is deemed invalid, the request is declined, and the error code 403 (Forbidden) is returned.
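-
-Here's a minimal Python sketch of that flow (an illustration, not an official sample): it lists the blobs in a container by appending a SAS token to the container URL and checks for a `403` response. The storage account, container, and token values are placeholders.
-
-```python
-# Illustrative only: authorize a List Blobs request with a SAS token.
-# Replace the placeholder account, container, and token values with your own.
-import requests
-
-container_url = "https://<storage_account_name>.blob.core.windows.net/<container_name>"
-sas_token = "<YOUR-SAS-TOKEN>"  # query string generated in the Azure portal or Storage Explorer
-
-# The SAS token is submitted to Azure Storage as part of the request URL.
-response = requests.get(f"{container_url}?restype=container&comp=list&{sas_token}")
-
-if response.status_code == 403:
-    print("The SAS token was deemed invalid: 403 (Forbidden).")
-else:
-    print(f"Request authorized: {response.status_code}")
-```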
-
-Azure Blob Storage offers three resource types:
-
-* **Storage** accounts provide a unique namespace in Azure for your data.
-* **Data storage containers** are located in storage accounts and organize sets of blobs (files, text, or images).
-* **Blobs** are located in containers and store text and binary data such as files, text, and images.
-
-> [!IMPORTANT]
->
-> * SAS tokens are used to grant permissions to storage resources, and should be protected in the same manner as an account key.
->
-> * Operations that use SAS tokens should be performed only over an HTTPS connection, and SAS URIs should only be distributed on a secure connection such as HTTPS.
-
-## Prerequisites
-
-To get started, you need the following resources:
-
-* An active [Azure account](https://azure.microsoft.com/free/cognitive-services/). If you don't have one, you can [create a free account](https://azure.microsoft.com/free/).
-
-* A [Translator](https://portal.azure.com/#create/Microsoft.CognitiveServicesTextTranslation) resource.
-
-* A **standard performance** [Azure Blob Storage account](https://portal.azure.com/#create/Microsoft.StorageAccount-ARM). You also need to create containers to store and organize your files within your storage account. If you don't know how to create an Azure storage account with a storage container, follow these quickstarts:
-
- * [Create a storage account](../../../../storage/common/storage-account-create.md). When you create your storage account, select **Standard** performance in the **Instance details** > **Performance** field.
- * [Create a container](../../../../storage/blobs/storage-quickstart-blobs-portal.md#create-a-container). When you create your container, set **Public access level** to **Container** (anonymous read access for containers and files) in the **New Container** window.
-
-## Create SAS tokens in the Azure portal
-
-<!-- markdownlint-disable MD024 -->
-
-Go to the [Azure portal](https://portal.azure.com/#home), navigate to your container or a specific file as follows, and continue with these steps:
-
-| Create SAS token for a container| Create SAS token for a specific file|
-|:--:|:--:|
|**Your storage account** → **containers** → **your container** |**Your storage account** → **containers** → **your container** → **your file** |
-
-1. Right-click the container or file and select **Generate SAS** from the drop-down menu.
-
-1. Select **Signing method** → **User delegation key**.
-
-1. Define **Permissions** by checking and/or clearing the appropriate check box:
-
- * Your **source** container or file must have designated **read** and **list** access.
-
- * Your **target** container or file must have designated **write** and **list** access.
-
-1. Specify the signed key **Start** and **Expiry** times.
-
- * When you create a shared access signature (SAS), the default duration is 48 hours. After 48 hours, you'll need to create a new token.
- * Consider setting a longer duration period for the time you're using your storage account for Translator Service operations.
- * The value of the expiry time is determined by whether you're using an **Account key** or **User delegation key** **Signing method**:
-      * **Account key**: There's no imposed maximum time limit; however, best practice recommends that you configure an expiration policy to limit the interval and minimize compromise. [Configure an expiration policy for shared access signatures](/azure/storage/common/sas-expiration-policy).
-      * **User delegation key**: The value for the expiry time is a maximum of seven days from the creation of the SAS token. The SAS is invalid after the user delegation key expires, so a SAS with an expiry time of greater than seven days is still only valid for seven days. For more information, *see* [Use Azure AD credentials to secure a SAS](/azure/storage/blobs/storage-blob-user-delegation-sas-create-cli#use-azure-ad-credentials-to-secure-a-sas).
-
-1. The **Allowed IP addresses** field is optional and specifies an IP address or a range of IP addresses from which to accept requests. If the request IP address doesn't match the IP address or address range specified on the SAS token, authorization fails. The IP address or range of IP addresses must be public IPs, not private. For more information, *see* [**Specify an IP address or IP range**](/rest/api/storageservices/create-account-sas#specify-an-ip-address-or-ip-range).
-
-1. The **Allowed protocols** field is optional and specifies the protocol permitted for a request made with the SAS. The default value is HTTPS.
-
-1. Review then select **Generate SAS token and URL**.
-
-1. The **Blob SAS token** query string and **Blob SAS URL** are displayed in the lower area of the window.
-
-1. **Copy and paste the Blob SAS token and URL values in a secure location. They'll only be displayed once and cannot be retrieved once the window is closed.**
-
-1. To [construct a SAS URL](#use-your-sas-url-to-grant-access), append the SAS token (URI) to the URL for a storage service.
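-
-As an illustration, here's a minimal Python sketch of that last step. The blob URL and SAS token values are placeholders; if you copied the full **Blob SAS URL** from the portal, you can use it directly and skip the concatenation.
-
-```python
-# Illustrative only: append the SAS token (query string) to a blob or container URL.
-blob_url = "https://<storage_account_name>.blob.core.windows.net/<container_name>/<blob_name>"
-sas_token = "<YOUR-SAS-TOKEN>"  # the Blob SAS token query string copied from the portal
-
-sas_url = f"{blob_url}?{sas_token}"
-print(sas_url)
-```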
-
-## Create SAS tokens with Azure Storage Explorer
-
-Azure Storage Explorer is a free standalone app that enables you to easily manage your Azure cloud storage resources from your desktop.
-
-* You need the [**Azure Storage Explorer**](../../../../vs-azure-tools-storage-manage-with-storage-explorer.md) app installed in your Windows, macOS, or Linux development environment.
-
-* After the Azure Storage Explorer app is installed, [connect it to the storage account](../../../../vs-azure-tools-storage-manage-with-storage-explorer.md?tabs=windows#connect-to-a-storage-account-or-service) you're using for Document Translation. Follow these steps to create tokens for a storage container or specific blob file:
-
-### [SAS tokens for storage containers](#tab/Containers)
-
-1. Open the Azure Storage Explorer app on your local machine and navigate to your connected **Storage Accounts**.
-1. Expand the Storage Accounts node and select **Blob Containers**.
-1. Expand the Blob Containers node and right-click a storage **container** node to display the options menu.
-1. Select **Get Shared Access Signature...** from the options menu.
-1. In the **Shared Access Signature** window, make the following selections:
- * Select your **Access policy** (the default is none).
- * Specify the signed key **Start** and **Expiry** date and time. A short lifespan is recommended because, once generated, a SAS can't be revoked.
- * Select the **Time zone** for the Start and Expiry date and time (default is Local).
- * Define your container **Permissions** by checking and/or clearing the appropriate check box.
- * Review and select **Create**.
-
-1. A new window appears with the **Container** name, **URI**, and **Query string** for your container.
-1. **Copy and paste the container, URI, and query string values in a secure location. They'll only be displayed once and can't be retrieved once the window is closed.**
-1. To [construct a SAS URL](#use-your-sas-url-to-grant-access), append the SAS token (URI) to the URL for a storage service.
-
-### [SAS tokens for specific blob file](#tab/blobs)
-
-1. Open the Azure Storage Explorer app on your local machine and navigate to your connected **Storage Accounts**.
-1. Expand your storage node and select **Blob Containers**.
-1. Expand the Blob Containers node and select a **container** node to display the contents in the main window.
-1. Select the file where you wish to delegate SAS access and right-click to display the options menu.
-1. Select **Get Shared Access Signature...** from the options menu.
-1. In the **Shared Access Signature** window, make the following selections:
- * Select your **Access policy** (the default is none).
- * Specify the signed key **Start** and **Expiry** date and time. A short lifespan is recommended because, once generated, a SAS can't be revoked.
- * Select the **Time zone** for the Start and Expiry date and time (default is Local).
- * Define your container **Permissions** by checking and/or clearing the appropriate check box.
- * Your **source** container or file must have designated **read** and **list** access.
- * Your **target** container or file must have designated **write** and **list** access.
- * Select **key1** or **key2**.
- * Review and select **Create**.
-
-1. A new window appears with the **Blob** name, **URI**, and **Query string** for your blob.
-1. **Copy and paste the blob, URI, and query string values in a secure location. They will only be displayed once and cannot be retrieved once the window is closed.**
-1. To [construct a SAS URL](#use-your-sas-url-to-grant-access), append the SAS token (URI) to the URL for a storage service.
---
-### Use your SAS URL to grant access
-
-The SAS URL includes a special set of [query parameters](/rest/api/storageservices/create-user-delegation-sas#assign-permissions-with-rbac). Those parameters indicate how the client accesses the resources.
-
-You can include your SAS URL with REST API requests in two ways:
-
-* Use the **SAS URL** as your sourceURL and targetURL values.
-
-* Append the **SAS query string** to your existing sourceURL and targetURL values.
-
-Here's a sample REST API request:
-
-```json
-{
- "inputs": [
- {
- "storageType": "File",
- "source": {
- "sourceUrl": "https://my.blob.core.windows.net/source-en/source-english.docx?sv=2019-12-12&st=2021-01-26T18%3A30%3A20Z&se=2021-02-05T18%3A30%3A00Z&sr=c&sp=rl&sig=d7PZKyQsIeE6xb%2B1M4Yb56I%2FEEKoNIF65D%2Fs0IFsYcE%3D"
- },
- "targets": [
- {
- "targetUrl": "https://my.blob.core.windows.net/target/try/Target-Spanish.docx?sv=2019-12-12&st=2021-01-26T18%3A31%3A11Z&se=2021-02-05T18%3A31%3A00Z&sr=c&sp=wl&sig=AgddSzXLXwHKpGHr7wALt2DGQJHCzNFF%2F3L94JHAWZM%3D",
- "language": "es"
- },
- {
- "targetUrl": "https://my.blob.core.windows.net/target/try/Target-German.docx?sv=2019-12-12&st=2021-01-26T18%3A31%3A11Z&se=2021-02-05T18%3A31%3A00Z&sr=c&sp=wl&sig=AgddSzXLXwHKpGHr7wALt2DGQJHCzNFF%2F3L94JHAWZM%3D",
- "language": "de"
- }
- ]
- }
- ]
-}
-```
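-
-If you're scripting the request, here's a minimal Python sketch that submits a payload like the one above to the Document Translation `batches` endpoint. The endpoint, key, and SAS URLs are placeholders.
-
-```python
-# Illustrative only: submit a batch Document Translation request that uses SAS URLs.
-import requests
-
-endpoint = "https://<NAME-OF-YOUR-RESOURCE>.cognitiveservices.azure.com/translator/text/batch/v1.0"
-key = "<YOUR-KEY>"
-
-payload = {
-    "inputs": [
-        {
-            "storageType": "File",
-            "source": {"sourceUrl": "<SOURCE-BLOB-SAS-URL>"},
-            "targets": [
-                {"targetUrl": "<TARGET-SPANISH-SAS-URL>", "language": "es"},
-                {"targetUrl": "<TARGET-GERMAN-SAS-URL>", "language": "de"}
-            ]
-        }
-    ]
-}
-
-response = requests.post(
-    f"{endpoint}/batches",
-    headers={"Ocp-Apim-Subscription-Key": key, "Content-Type": "application/json"},
-    json=payload,
-)
-print(response.status_code)                        # 202 Accepted on success
-print(response.headers.get("Operation-Location"))  # URL for checking job status
-```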
-
-That's it! You've learned how to create SAS tokens to authorize how clients access your data.
-
-## Next steps
-
-> [!div class="nextstepaction"]
-> [Get Started with Document Translation](../quickstarts/get-started-with-rest-api.md)
->
cognitive-services Create Use Glossaries https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Translator/document-translation/how-to-guides/create-use-glossaries.md
- Title: Create and use a glossary with Document Translation
-description: How to create and use a glossary with Document Translation.
---- Previously updated : 03/28/2023--
-# Use glossaries with Document Translation
-
-A glossary is a list of terms with definitions that you create for the Document Translation service to use during the translation process. Currently, the glossary feature supports one-to-one source-to-target language translation. Common use cases for glossaries include:
-
-* **Context-specific terminology**. Create a glossary that designates specific meanings for your unique context.
-
-* **No translation**. For example, you can prevent Document Translation from translating product or brand names by using a glossary with the same source and target text.
-
-* **Specified translations for ambiguous words**. Choose a specific translation for polysemantic words.
-
-## Create, upload, and use a glossary file
-
-1. **Create your glossary file.** Create a file in a supported format (preferably tab-separated values) that contains all the terms and phrases you want to use in your translation.
-
- To check if your file format is supported, *see* [Get supported glossary formats](../reference/get-supported-glossary-formats.md).
-
- The following English-source glossary contains words that can have different meanings depending upon the context in which they're used. The glossary provides the expected translation for each word in the file to help ensure accuracy.
-
-   For instance, when the word `Bank` appears in a financial document, it should be translated to reflect its financial meaning. If the word `Bank` appears in a geographical document, it may refer to a shore, reflecting its topographical meaning. Similarly, the word `Crane` can refer to either a bird or a machine.
-
- ***Example glossary .tsv file: English-to-French***
-
- ```tsv
- Bank Banque
- Card Carte
- Crane Grue
- Office Office
- Tiger Tiger
- US United States
- ```
-
-1. **Upload your glossary to Azure storage**. To complete this step, you need an [Azure Blob Storage account](https://ms.portal.azure.com/#create/Microsoft.StorageAccount) with [containers](/azure/storage/blobs/storage-quickstart-blobs-portal?branch=main#create-a-container) to store and organize your blob data within your storage account.
-
-1. **Specify your glossary in the translation request.** Include the **`glossary URL`**, **`format`**, and **`version`** in your **`POST`** request:
-
- :::code language="json" source="../../../../../cognitive-services-rest-samples/curl/Translator/translate-with-glossary.json" range="1-23" highlight="13-14":::
-
- > [!NOTE]
- > The example used an enabled [**system-assigned managed identity**](create-use-managed-identities.md#enable-a-system-assigned-managed-identity) with a [**Storage Blob Data Contributor**](create-use-managed-identities.md#grant-storage-account-access-for-your-translator-resource) role assignment for authorization. For more information, *see* [**Managed identities for Document Translation**](./create-use-managed-identities.md).
-
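-For reference, here's a minimal sketch of how the glossary fields fit inside a target entry of the request body. The storage account, container, and file names are placeholders, and the `format` and `version` values are illustrative only; the format must match your glossary file type.
-
-```python
-# Illustrative only: a target entry that references a glossary.
-target = {
-    "targetUrl": "https://<storage_account_name>.blob.core.windows.net/<target_container_name>",
-    "language": "fr",
-    "glossaries": [
-        {
-            "glossaryUrl": "https://<storage_account_name>.blob.core.windows.net/<glossary_container_name>/<glossary-file>",
-            "format": "<GLOSSARY-FORMAT>",    # for example, "xliff" for .xlf files
-            "version": "<OPTIONAL-VERSION>"   # optional format version
-        }
-    ]
-}
-```
-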
-### Case sensitivity
-
-By default, the Azure Cognitive Services Translator API is **case-sensitive**, meaning that it matches terms in the source text based on case.
-
-* **Partial sentence application**. When your glossary is applied to **part of a sentence**, the Document Translation API checks whether the glossary term matches the case in the source text. If the casing doesn't match, the glossary isn't applied.
-
-* **Complete sentence application**. When your glossary is applied to a **complete sentence**, the service becomes **case-insensitive**. It matches the glossary term regardless of its case in the source text. This behavior produces the correct results for use cases involving idioms and quotes.
-
-## Next steps
-
-Try the Document Translation how-to guide to asynchronously translate whole documents using a programming language of your choice:
-
-> [!div class="nextstepaction"]
-> [Use Document Translation REST APIs](use-rest-api-programmatically.md)
cognitive-services Create Use Managed Identities https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Translator/document-translation/how-to-guides/create-use-managed-identities.md
- Title: Create and use managed identities for Document Translation-
-description: Understand how to create and use managed identities in the Azure portal
------ Previously updated : 03/28/2023---
-# Managed identities for Document Translation
-
-Managed identities for Azure resources are service principals that create an Azure Active Directory (Azure AD) identity and specific permissions for Azure managed resources. Managed identities are a safer way to grant access to storage data and replace the requirement for you to include shared access signature tokens (SAS) with your [source and target URLs](#post-request-body).
-
- :::image type="content" source="../media/managed-identity-rbac-flow.png" alt-text="Screenshot of managed identity flow (RBAC).":::
-
-* You can use managed identities to grant access to any resource that supports Azure AD authentication, including your own applications.
-
-* To grant access to an Azure resource, assign an Azure role to a managed identity using [Azure role-based access control (`Azure RBAC`)](../../../../role-based-access-control/overview.md).
-
-* There's no added cost to use managed identities in Azure.
-
-> [!IMPORTANT]
->
-> * When using managed identities, don't include a SAS token URL with your HTTP requests; your requests will fail. Using managed identities replaces the requirement for you to include shared access signature tokens (SAS) with your [source and target URLs](#post-request-body).
->
-> * To use managed identities for Document Translation operations, you must [create your Translator resource](https://portal.azure.com/#create/Microsoft.CognitiveServicesTextTranslation) in a specific geographic Azure region such as **East US**. If your Translator resource region is set to **Global**, then you can't use managed identity for Document Translation. You can still use [Shared Access Signature tokens (SAS)](create-sas-tokens.md) for Document Translation.
->
-> * Document Translation is **only** available in the S1 Standard Service Plan (Pay-as-you-go) or in the D3 Volume Discount Plan. _See_ [Cognitive Services pricing: Translator](https://azure.microsoft.com/pricing/details/cognitive-services/translator/).
->
-
-## Prerequisites
-
-To get started, you need:
-
-* An active [**Azure account**](https://azure.microsoft.com/free/cognitive-services/). If you don't have one, you can [**create a free account**](https://azure.microsoft.com/free/).
-
-* A [**single-service Translator**](https://portal.azure.com/#create/Microsoft.CognitiveServicesTextTranslation) (not a multi-service Cognitive Services) resource assigned to a **geographical** region such as **West US**. For detailed steps, _see_ [Create a Cognitive Services resource using the Azure portal](../../../cognitive-services-apis-create-account.md?tabs=multiservice%2cwindows).
-
-* A brief understanding of [**Azure role-based access control (`Azure RBAC`)**](../../../../role-based-access-control/role-assignments-portal.md) using the Azure portal.
-
-* An [**Azure Blob Storage account**](https://portal.azure.com/#create/Microsoft.StorageAccount-ARM) in the same region as your Translator resource. You also need to create containers to store and organize your blob data within your storage account.
-
-* **If your storage account is behind a firewall, you must enable the following configuration**:
- 1. Go to the [Azure portal](https://portal.azure.com/) and sign in to your Azure account.
- 1. Select the Storage account.
- 1. In the **Security + networking** group in the left pane, select **Networking**.
- 1. In the **Firewalls and virtual networks** tab, select **Enabled from selected virtual networks and IP addresses**.
-
- :::image type="content" source="../../media/managed-identities/firewalls-and-virtual-networks.png" alt-text="Screenshot: Selected networks radio button selected.":::
-
- 1. Deselect all check boxes.
- 1. Make sure **Microsoft network routing** is selected.
- 1. Under the **Resource instances** section, select **Microsoft.CognitiveServices/accounts** as the resource type and select your Translator resource as the instance name.
- 1. Make certain that the **Allow Azure services on the trusted services list to access this storage account** box is checked. For more information about managing exceptions, _see_ [Configure Azure Storage firewalls and virtual networks](../../../../storage/common/storage-network-security.md?tabs=azure-portal#manage-exceptions).
-
- :::image type="content" source="../../media/managed-identities/allow-trusted-services-checkbox-portal-view.png" alt-text="Screenshot: allow trusted services checkbox, portal view.":::
-
- 1. Select **Save**.
-
- > [!NOTE]
-    > It may take up to 5 minutes for the network changes to propagate.
-
- Although network access is now permitted, your Translator resource is still unable to access the data in your Storage account. You need to [create a managed identity](#managed-identity-assignments) for and [assign a specific access role](#grant-storage-account-access-for-your-translator-resource) to your Translator resource.
-
-## Managed identity assignments
-
-There are two types of managed identities: **system-assigned** and **user-assigned**. Currently, Document Translation supports **system-assigned managed identity**:
-
-* A system-assigned managed identity is **enabled** directly on a service instance. It isn't enabled by default; you must go to your resource and update the identity setting.
-
-* The system-assigned managed identity is tied to your resource throughout its lifecycle. If you delete your resource, the managed identity is deleted as well.
-
-In the following steps, we enable a system-assigned managed identity and grant your Translator resource limited access to your Azure Blob Storage account.
-
-## Enable a system-assigned managed identity
-
-You must grant the Translator resource access to your storage account before it can create, read, or delete blobs. Once you enable a system-assigned managed identity for the Translator resource, you can use Azure role-based access control (`Azure RBAC`) to give Translator access to your Azure storage containers.
-
-1. Go to the [Azure portal](https://portal.azure.com/) and sign in to your Azure account.
-1. Select the Translator resource.
-1. In the **Resource Management** group in the left pane, select **Identity**.
-1. Within the **System assigned** tab, turn on the **Status** toggle.
-
- :::image type="content" source="../../media/managed-identities/resource-management-identity-tab.png" alt-text="Screenshot: resource management identity tab in the Azure portal.":::
-
- > [!IMPORTANT]
-    > A user-assigned managed identity doesn't meet the requirements for this storage account scenario. Be sure to enable a system-assigned managed identity.
-
-1. Select **Save**.
-
-## Grant storage account access for your Translator resource
-
-> [!IMPORTANT]
-> To assign a system-assigned managed identity role, you need **Microsoft.Authorization/roleAssignments/write** permissions, such as [**Owner**](../../../../role-based-access-control/built-in-roles.md#owner) or [**User Access Administrator**](../../../../role-based-access-control/built-in-roles.md#user-access-administrator) at the storage scope for the storage resource.
-
-1. Go to the [Azure portal](https://portal.azure.com/) and sign in to your Azure account.
-1. Select the Translator resource.
-1. In the **Resource Management** group in the left pane, select **Identity**.
-1. Under **Permissions** select **Azure role assignments**:
-
- :::image type="content" source="../../media/managed-identities/enable-system-assigned-managed-identity-portal.png" alt-text="Screenshot: enable system-assigned managed identity in Azure portal.":::
-
-1. On the Azure role assignments page that opened, choose your subscription from the drop-down menu then select **&plus; Add role assignment**.
-
- :::image type="content" source="../../media/managed-identities/azure-role-assignments-page-portal.png" alt-text="Screenshot: Azure role assignments page in the Azure portal.":::
-
-1. Next, assign a **Storage Blob Data Contributor** role to your Translator service resource. The **Storage Blob Data Contributor** role gives Translator (represented by the system-assigned managed identity) read, write, and delete access to the blob container and data. In the **Add role assignment** pop-up window, complete the fields as follows and select **Save**:
-
- | Field | Value|
- ||--|
- |**Scope**| **_Storage_**.|
- |**Subscription**| **_The subscription associated with your storage resource_**.|
- |**Resource**| **_The name of your storage resource_**.|
- |**Role** | **_Storage Blob Data Contributor_**.|
-
- :::image type="content" source="../../media/managed-identities/add-role-assignment-window.png" alt-text="Screenshot: add role assignments page in the Azure portal.":::
-
-1. After the _Added Role assignment_ confirmation message appears, refresh the page to see the added role assignment.
-
- :::image type="content" source="../../media/managed-identities/add-role-assignment-confirmation.png" alt-text="Screenshot: Added role assignment confirmation pop-up message.":::
-
-1. If you don't see the new role assignment right away, wait and try refreshing the page again. When you assign or remove role assignments, it can take up to 30 minutes for changes to take effect.
-
- :::image type="content" source="../../media/managed-identities/assigned-roles-window.png" alt-text="Screenshot: Azure role assignments window.":::
-
-## HTTP requests
-
-* A batch Document Translation request is submitted to your Translator service endpoint via a POST request.
-
-* With managed identity and `Azure RBAC`, you no longer need to include SAS URLs.
-
-* If successful, the POST method returns a `202 Accepted` response code and the service creates a batch request.
-
-* The translated documents appear in your target container.
-
-### Headers
-
-The following headers are included with each Document Translation API request:
-
-|HTTP header|Description|
-||--|
-|Ocp-Apim-Subscription-Key|**Required**: The value is the Azure key for your Translator or Cognitive Services resource.|
-|Content-Type|**Required**: Specifies the content type of the payload. Accepted values are application/json or charset=UTF-8.|
-
-### POST request body
-
-* The request URL is POST `https://<NAME-OF-YOUR-RESOURCE>.cognitiveservices.azure.com/translator/text/batch/v1.0/batches`.
-* The request body is a JSON object named `inputs`.
-* The `inputs` object contains both `sourceURL` and `targetURL` container addresses for your source and target language pairs. With a system-assigned managed identity, you use a plain storage account URL (no SAS or other additions). The format is `https://<storage_account_name>.blob.core.windows.net/<container_name>`.
-* The `prefix` and `suffix` fields (optional) are used to filter documents in the container, including folders.
-* A value for the `glossaries` field (optional) is applied when the document is being translated.
-* The `targetUrl` for each target language must be unique.
-
-> [!IMPORTANT]
-> If a file with the same name already exists in the destination, the job will fail. When using managed identities, don't include a SAS token URL with your HTTP requests. If you do so, your requests will fail.
-
-<!-- markdownlint-disable MD024 -->
-### Translate all documents in a container
-
-This sample request body references a source container for all documents to be translated to a target language.
-
-For more information, _see_ [request parameters](#post-request-body).
-
-```json
-{
- "inputs": [
- {
- "source": {
- "sourceUrl": "https://<storage_account_name>.blob.core.windows.net/<source_container_name>"
- },
- "targets": [
- {
-            "targetUrl": "https://<storage_account_name>.blob.core.windows.net/<target_container_name>",
- "language": "fr"
- }
- ]
- }
- ]
-}
-```
-
-### Translate a specific document in a container
-
-This sample request body references a single source document to be translated into two target languages.
-
-> [!IMPORTANT]
-> In addition to the request parameters [noted previously](#post-request-body), you must include `"storageType": "File"`. Otherwise the source URL is assumed to be at the container level.
-
-```json
-{
- "inputs": [
- {
- "storageType": "File",
- "source": {
- "sourceUrl": "https://<storage_account_name>.blob.core.windows.net/<source_container_name>/source-english.docx"
- },
- "targets": [
- {
-            "targetUrl": "https://<storage_account_name>.blob.core.windows.net/<target_container_name>/Target-Spanish.docx",
- "language": "es"
- },
- {
- "targetUrl": "https://<storage_account_name>.blob.core.windows.net/<target_container_name>/Target-German.docx",
- "language": "de"
- }
- ]
- }
- ]
-}
-```
-
-### Translate all documents in a container using a custom glossary
-
-This sample request body references a source container for all documents to be translated to a target language using a glossary.
-
-For more information, _see_ [request parameters](#post-request-body).
-
-```json
-{
- "inputs": [
- {
- "source": {
- "sourceUrl": "https://<storage_account_name>.blob.core.windows.net/<source_container_name>",
- "filter": {
- "prefix": "myfolder/"
- }
- },
- "targets": [
- {
- "targetUrl": "https://<storage_account_name>.blob.core.windows.net/<target_container_name>",
- "language": "es",
- "glossaries": [
- {
- "glossaryUrl": "https://<storage_account_name>.blob.core.windows.net/<glossary_container_name>/en-es.xlf",
- "format": "xliff"
- }
- ]
- }
- ]
- }
- ]
-}
-```
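-
-As an example, here's a minimal Python sketch that submits the container-level request shown earlier. The endpoint, key, and storage URLs are placeholders; because the Translator resource's system-assigned managed identity authorizes storage access, no SAS tokens are appended.
-
-```python
-# Illustrative only: submit a batch request using plain storage URLs (managed identity).
-import requests
-
-endpoint = "https://<NAME-OF-YOUR-RESOURCE>.cognitiveservices.azure.com/translator/text/batch/v1.0"
-key = "<YOUR-KEY>"
-
-payload = {
-    "inputs": [
-        {
-            "source": {"sourceUrl": "https://<storage_account_name>.blob.core.windows.net/<source_container_name>"},
-            "targets": [
-                {
-                    "targetUrl": "https://<storage_account_name>.blob.core.windows.net/<target_container_name>",
-                    "language": "fr"
-                }
-            ]
-        }
-    ]
-}
-
-response = requests.post(
-    f"{endpoint}/batches",
-    headers={"Ocp-Apim-Subscription-Key": key, "Content-Type": "application/json"},
-    json=payload,
-)
-print(response.status_code)  # expect 202 Accepted
-```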
-
-Great! You've learned how to enable and use a system-assigned managed identity. With managed identity for Azure Resources and `Azure RBAC`, you granted Translator specific access rights to your storage resource without including SAS tokens with your HTTP requests.
-
-## Next steps
-
-> [!div class="nextstepaction"]
-> [Quickstart: Get started with Document Translation](../quickstarts/get-started-with-rest-api.md)
-
-> [!div class="nextstepaction"]
-> [Tutorial: Access Azure Storage from a web app using managed identities](../../../../app-service/scenario-secure-app-access-storage.md?bc=%2fazure%2fcognitive-services%2ftranslator%2fbreadcrumb%2ftoc.json&toc=%2fazure%2fcognitive-services%2ftranslator%2ftoc.json)
cognitive-services Use Rest Api Programmatically https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Translator/document-translation/how-to-guides/use-rest-api-programmatically.md
- Title: Use Document Translation APIs programmatically
-description: "How to create a Document Translation service using C#, Go, Java, Node.js, or Python and the REST API"
------ Previously updated : 04/17/2023-
-recommendations: false
---
-# Use REST APIs programmatically
-
- Document Translation is a cloud-based feature of the [Azure Translator](../../translator-overview.md) service. You can use the Document Translation API to asynchronously translate whole documents in [supported languages](../../language-support.md) and various [file formats](../overview.md#supported-document-formats) while preserving source document structure and text formatting. In this how-to guide, you learn to use Document Translation APIs with a programming language of your choice and the HTTP REST API.
-
-## Prerequisites
-
-> [!NOTE]
->
-> * Typically, when you create a Cognitive Service resource in the Azure portal, you have the option to create a multi-service key or a single-service key. However, Document Translation is currently supported in the Translator (single-service) resource only, and is **not** included in the Cognitive Services (multi-service) resource.
->
-> * Document Translation is **only** supported in the S1 Standard Service Plan (Pay-as-you-go) or in the D3 Volume Discount Plan. _See_ [Cognitive Services pricing: Translator](https://azure.microsoft.com/pricing/details/cognitive-services/translator/).
->
-
-To get started, you need:
-
-* An active [**Azure account**](https://azure.microsoft.com/free/cognitive-services/). If you don't have one, you can [**create a free account**](https://azure.microsoft.com/free/).
-
-* An [**Azure Blob Storage account**](https://portal.azure.com/#create/Microsoft.StorageAccount-ARM). You also need to [create containers](#create-azure-blob-storage-containers) in your Azure Blob Storage account for your source and target files:
-
- * **Source container**. This container is where you upload your files for translation (required).
- * **Target container**. This container is where your translated files are stored (required).
-
-* A [**single-service Translator resource**](https://portal.azure.com/#create/Microsoft.CognitiveServicesTextTranslation) (**not** a multi-service Cognitive Services resource):
-
- **Complete the Translator project and instance details fields as follows:**
-
- 1. **Subscription**. Select one of your available Azure subscriptions.
-
- 1. **Resource Group**. You can create a new resource group or add your resource to a pre-existing resource group that shares the same lifecycle, permissions, and policies.
-
- 1. **Resource Region**. Choose **Global** unless your business or application requires a specific region. If you're planning on using a [system-assigned managed identity](create-use-managed-identities.md) for authentication, choose a **geographic** region like **West US**.
-
- 1. **Name**. Enter the name you have chosen for your resource. The name you choose must be unique within Azure.
-
- > [!NOTE]
- > Document Translation requires a custom domain endpoint. The value that you enter in the Name field will be the custom domain name parameter for your endpoint.
-
- 1. **Pricing tier**. Document Translation isn't supported in the free tier. Select Standard S1 to try the service.
-
- 1. Select **Review + Create**.
-
- 1. Review the service terms and select **Create** to deploy your resource.
-
- 1. After your resource has successfully deployed, select **Go to resource**.
-
-### Retrieve your key and custom domain endpoint
-
-Requests to the Translator service require a read-only key and custom endpoint to authenticate access. The custom domain endpoint is a URL formatted with your resource name, hostname, and Translator subdirectories and is available in the Azure portal.
-
-1. If you've created a new resource, after it deploys, select **Go to resource**. If you have an existing Document Translation resource, navigate directly to your resource page.
-
-1. In the left rail, under *Resource Management*, select **Keys and Endpoint**.
-
-1. Copy and paste your **`key`** and **`document translation endpoint`** in a convenient location, such as *Microsoft Notepad*. Only one key is necessary to make an API call.
-
-1. You paste your **`key`** and **`document translation endpoint`** into the code samples to authenticate your request to the Document Translation service.
-
- :::image type="content" source="../media/document-translation-key-endpoint.png" alt-text="Screenshot showing the get your key field in Azure portal.":::
-
-### Get your key
-
-Requests to the Translator service require a read-only key for authenticating access.
-
-1. If you've created a new resource, after it deploys, select **Go to resource**. If you have an existing Document Translation resource, navigate directly to your resource page.
-1. In the left rail, under *Resource Management*, select **Keys and Endpoint**.
-1. Copy and paste your key in a convenient location, such as *Microsoft Notepad*.
-1. You paste it into the code sample to authenticate your request to the Document Translation service.
--
-## Create Azure Blob Storage containers
-
-You need to [**create containers**](../../../../storage/blobs/storage-quickstart-blobs-portal.md#create-a-container) in your [**Azure Blob Storage account**](https://portal.azure.com/#create/Microsoft.StorageAccount-ARM) for source and target files.
-
-* **Source container**. This container is where you upload your files for translation (required).
-* **Target container**. This container is where your translated files are stored (required).
-
-> [!NOTE]
-> Document Translation supports glossaries as blobs in target containers (not separate glossary containers). If you want to include a custom glossary, add it to the target container and include the `glossaryUrl` with the request. If the translation language pair isn't present in the glossary, the glossary isn't applied. *See* [Translate documents using a custom glossary](#translate-documents-using-a-custom-glossary)
-
-### **Create SAS access tokens for Document Translation**
-
-The `sourceUrl`, `targetUrl`, and optional `glossaryUrl` must include a Shared Access Signature (SAS) token, appended as a query string. The token can be assigned to your container or specific blobs. *See* [**Create SAS tokens for the Document Translation process**](create-sas-tokens.md).
-
-* Your **source** container or blob must have designated **read** and **list** access.
-* Your **target** container or blob must have designated **write** and **list** access.
-* Your **glossary** blob must have designated **read** and **list** access.
-
-> [!TIP]
->
-> * If you're translating **multiple** files (blobs) in an operation, **delegate SAS access at the container level**.
-> * If you're translating a **single** file (blob) in an operation, **delegate SAS access at the blob level**.
-> * As an alternative to SAS tokens, you can use a [**system-assigned managed identity**](../how-to-guides/create-use-managed-identities.md) for authentication.
-
-## HTTP requests
-
-A batch Document Translation request is submitted to your Translator service endpoint via a POST request. If successful, the POST method returns a `202 Accepted` response code and the service creates a batch request. The translated documents are listed in your target container.
-
-For detailed information regarding Azure Translator Service request limits, _see_ [**Document Translation request limits**](../../request-limits.md#document-translation).
-
-### HTTP headers
-
-The following headers are included with each Document Translation API request:
-
-|HTTP header|Description|
-||--|
-|Ocp-Apim-Subscription-Key|**Required**: The value is the Azure key for your Translator or Cognitive Services resource.|
-|Content-Type|**Required**: Specifies the content type of the payload. Accepted values are application/json or charset=UTF-8.|
-
-### POST request body properties
-
-* The POST request URL is POST `https://<NAME-OF-YOUR-RESOURCE>.cognitiveservices.azure.com/translator/text/batch/v1.0/batches`
-* The POST request body is a JSON object named `inputs`.
-* The `inputs` object contains both `sourceURL` and `targetURL` container addresses for your source and target language pairs.
-* The `prefix` and `suffix` are case-sensitive strings to filter documents in the source path for translation. The `prefix` field is often used to delineate subfolders for translation. The `suffix` field is most often used for file extensions.
-* A value for the `glossaries` field (optional) is applied when the document is being translated.
-* The `targetUrl` for each target language must be unique.
-
->[!NOTE]
-> If a file with the same name already exists in the destination, the job will fail.
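-
-As an illustration, here's a hedged sketch of a `source` entry that uses `prefix` and `suffix` to scope translation to one subfolder and one file type. The folder name and extension are placeholders.
-
-```python
-# Illustrative only: filter the source container to .docx files under "reports/".
-source = {
-    "sourceUrl": "https://<storage_account_name>.blob.core.windows.net/<source_container_name>",
-    "filter": {
-        "prefix": "reports/",  # translate only blobs under this folder path
-        "suffix": ".docx"      # translate only blobs with this file extension
-    }
-}
-```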
-
-<!-- markdownlint-disable MD024 -->
-### Translate all documents in a container
-
-```json
-{
- "inputs": [
- {
- "source": {
- "sourceUrl": "https://my.blob.core.windows.net/source-en?sv=2019-12-12&st=2021-03-05T17%3A45%3A25Z&se=2021-03-13T17%3A45%3A00Z&sr=c&sp=rl&sig=SDRPMjE4nfrH3csmKLILkT%2Fv3e0Q6SWpssuuQl1NmfM%3D"
- },
- "targets": [
- {
- "targetUrl": "https://my.blob.core.windows.net/target-fr?sv=2019-12-12&st=2021-03-05T17%3A49%3A02Z&se=2021-03-13T17%3A49%3A00Z&sr=c&sp=wdl&sig=Sq%2BYdNbhgbq4hLT0o1UUOsTnQJFU590sWYo4BOhhQhs%3D",
- "language": "fr"
- }
- ]
- }
- ]
-}
-```
-
-### Translate a specific document in a container
-
-* Specify `"storageType": "File"`
-* If you aren't using a [**system-assigned managed identity**](create-use-managed-identities.md) for authentication, make sure you've created a source URL and SAS token for the specific blob/document (not for the container).
-* Ensure you've specified the target filename as part of the target URL, though the SAS token is still for the container.
-* This sample request returns a single document translated into two target languages
-
-```json
-{
- "inputs": [
- {
- "storageType": "File",
- "source": {
- "sourceUrl": "https://my.blob.core.windows.net/source-en/source-english.docx?sv=2019-12-12&st=2021-01-26T18%3A30%3A20Z&se=2021-02-05T18%3A30%3A00Z&sr=c&sp=rl&sig=d7PZKyQsIeE6xb%2B1M4Yb56I%2FEEKoNIF65D%2Fs0IFsYcE%3D"
- },
- "targets": [
- {
- "targetUrl": "https://my.blob.core.windows.net/target/try/Target-Spanish.docx?sv=2019-12-12&st=2021-01-26T18%3A31%3A11Z&se=2021-02-05T18%3A31%3A00Z&sr=c&sp=wl&sig=AgddSzXLXwHKpGHr7wALt2DGQJHCzNFF%2F3L94JHAWZM%3D",
- "language": "es"
- },
- {
- "targetUrl": "https://my.blob.core.windows.net/target/try/Target-German.docx?sv=2019-12-12&st=2021-01-26T18%3A31%3A11Z&se=2021-02-05T18%3A31%3A00Z&sr=c&sp=wl&sig=AgddSzXLXwHKpGHr7wALt2DGQJHCzNFF%2F3L94JHAWZM%3D",
- "language": "de"
- }
- ]
- }
- ]
-}
-```
-
-### Translate documents using a custom glossary
-
-```json
-{
- "inputs": [
- {
- "source": {
- "sourceUrl": "https://myblob.blob.core.windows.net/source"
- },
- "targets": [
- {
- "targetUrl": "https://myblob.blob.core.windows.net/target",
- "language": "es",
- "glossaries": [
- {
-          "glossaryUrl": "https://myblob.blob.core.windows.net/glossary/en-es.xlf",
- "format": "xliff"
- }
- ]
- }
- ]
- }
- ]
-}
-```
-
-## Use code to submit Document Translation requests
-
-### Set up your coding Platform
-
-### [C#](#tab/csharp)
-
-* Create a new project.
-* Replace Program.cs with the C# code sample.
-* Set your endpoint, key, and container URL values in Program.cs.
-* To process JSON data, add [Newtonsoft.Json package using .NET CLI](https://www.nuget.org/packages/Newtonsoft.Json/).
-* Run the program from the project directory.
-
-### [Node.js](#tab/javascript)
-
-* Create a new Node.js project.
-* Install the Axios library with `npm i axios`.
-* Copy/paste the JavaScript sample code into your project.
-* Set your endpoint, key, and container URL values.
-* Run the program.
-
-### [Python](#tab/python)
-
-* Create a new project.
-* Copy and paste the code from one of the samples into your project.
-* Set your endpoint, key, and container URL values.
-* Run the program. For example: `python translate.py`.
-
-### [Java](#tab/java)
-
-* Create a working directory for your project. For example:
-
-```powershell
-mkdir sample-project
-```
-
-* In your project directory, create the following subdirectory structure:
-
-  src</br>
-&emsp; └ main</br>
-&emsp;&emsp;&emsp;└ java
-
-```powershell
-mkdir -p src/main/java/
-```
-
-**NOTE**: Java source files (for example, _sample.java_) live in src/main/**java**.
-
-* In your root directory (for example, *sample-project*), initialize your project with Gradle:
-
-```powershell
-gradle init --type basic
-```
-
-* When prompted to choose a **DSL**, select **Kotlin**.
-
-* Update the `build.gradle.kts` file. Keep in mind that you need to update your `mainClassName` depending on the sample:
-
-    ```kotlin
- plugins {
- java
- application
- }
- application {
- mainClassName = "{NAME OF YOUR CLASS}"
- }
- repositories {
- mavenCentral()
- }
- dependencies {
- compile("com.squareup.okhttp:okhttp:2.5.0")
- }
- ```
-
-* Create a Java file in the **java** directory and copy/paste the code from the provided sample. Don't forget to add your key and endpoint.
-
-* **Build and run the sample from the root directory**:
-
-```powershell
-gradle build
-gradle run
-```
-
-### [Go](#tab/go)
-
-* Create a new Go project.
-* Add the provided Go code sample.
-* Set your endpoint, key, and container URL values.
-* Save the file with a '.go' extension.
-* Open a command prompt on a computer with Go installed.
-* Build the file. For example: 'go build example-code.go'.
-* Run the file, for example: 'example-code'.
-
-
-
-> [!IMPORTANT]
->
-> For the code samples, you'll hard-code your Shared Access Signature (SAS) URL where indicated. Remember to remove the SAS URL from your code when you're done, and never post it publicly. For production, use a secure way of storing and accessing your credentials like [Azure Managed Identity](create-use-managed-identities.md). For more information, _see_ Azure Storage [security](../../../../storage/common/authorize-data-access.md).
-
-> You may need to update the following fields, depending upon the operation:
->
-> * `endpoint`
-> * `key`
-> * `sourceURL`
-> * `targetURL`
-> * `glossaryURL`
-> * `id` (job ID)
->
-
-#### Locating the `id` value
-
-* You find the job `id` in the POST method's `Operation-Location` response header value. The last parameter of the URL is the operation's job **`id`**:
-
-|**Response header**|**Result URL**|
-|--|-|
Operation-Location | `https://<NAME-OF-YOUR-RESOURCE>.cognitiveservices.azure.com/translator/text/batch/v1.0/batches/9dce0aa9-78dc-41ba-8cae-2e2f3c2ff8ec`
-
-* You can also use a **GET Jobs** request to retrieve a Document Translation job `id` .
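-
-Here's a minimal Python sketch of extracting the job `id` from that header, assuming `response` holds the result of the POST request:
-
-```python
-# Illustrative only: the job id is the last segment of the Operation-Location URL.
-operation_location = response.headers["Operation-Location"]
-job_id = operation_location.rstrip("/").split("/")[-1]
-print(job_id)  # for example, 9dce0aa9-78dc-41ba-8cae-2e2f3c2ff8ec
-```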
-
-
-## Translate documents
-
-### [C#](#tab/csharp)
-
-```csharp
-
- using System;
- using System.Net.Http;
- using System.Threading.Tasks;
- using System.Text;
--
- class Program
- {
-
- static readonly string route = "/batches";
-
- private static readonly string endpoint = "https://<NAME-OF-YOUR-RESOURCE>.cognitiveservices.azure.com/translator/text/batch/v1.0";
-
- private static readonly string key = "<YOUR-KEY>";
-
- static readonly string json = ("{\"inputs\": [{\"source\": {\"sourceUrl\": \"https://YOUR-SOURCE-URL-WITH-READ-LIST-ACCESS-SAS\",\"storageSource\": \"AzureBlob\",\"language\": \"en\"}, \"targets\": [{\"targetUrl\": \"https://YOUR-TARGET-URL-WITH-WRITE-LIST-ACCESS-SAS\",\"storageSource\": \"AzureBlob\",\"category\": \"general\",\"language\": \"es\"}]}]}");
-
- static async Task Main(string[] args)
- {
- using HttpClient client = new HttpClient();
- using HttpRequestMessage request = new HttpRequestMessage();
- {
-
- StringContent content = new StringContent(json, Encoding.UTF8, "application/json");
-
- request.Method = HttpMethod.Post;
- request.RequestUri = new Uri(endpoint + route);
- request.Headers.Add("Ocp-Apim-Subscription-Key", key);
- request.Content = content;
-
- HttpResponseMessage response = await client.SendAsync(request);
- string result = response.Content.ReadAsStringAsync().Result;
- if (response.IsSuccessStatusCode)
- {
- Console.WriteLine($"Status code: {response.StatusCode}");
- Console.WriteLine();
- Console.WriteLine($"Response Headers:");
- Console.WriteLine(response.Headers);
- }
- else
- Console.Write("Error");
-
- }
-
- }
-
- }
-```
-
-### [Node.js](#tab/javascript)
-
-```javascript
-
-const axios = require('axios').default;
-
-let endpoint = 'https://<NAME-OF-YOUR-RESOURCE>.cognitiveservices.azure.com/translator/text/batch/v1.0';
-let route = '/batches';
-let key = '<YOUR-KEY>';
-
-let data = JSON.stringify({"inputs": [
-    {
-        "source": {
-            "sourceUrl": "https://YOUR-SOURCE-URL-WITH-READ-LIST-ACCESS-SAS",
-            "storageSource": "AzureBlob",
-            "language": "en"
-        },
-        "targets": [
-            {
-                "targetUrl": "https://YOUR-TARGET-URL-WITH-WRITE-LIST-ACCESS-SAS",
-                "storageSource": "AzureBlob",
-                "category": "general",
-                "language": "es"
-            }
-        ]
-    }
-]});
-
-let config = {
- method: 'post',
- baseURL: endpoint,
- url: route,
- headers: {
- 'Ocp-Apim-Subscription-Key': key,
- 'Content-Type': 'application/json'
- },
- data: data
-};
-
-axios(config)
-.then(function (response) {
- let result = { statusText: response.statusText, statusCode: response.status, headers: response.headers };
- console.log()
- console.log(JSON.stringify(result));
-})
-.catch(function (error) {
- console.log(error);
-});
-```
-
-### [Python](#tab/python)
-
-```python
-
-import requests
-
-endpoint = "https://<NAME-OF-YOUR-RESOURCE>.cognitiveservices.azure.com/translator/text/batch/v1.0"
-key = '<YOUR-KEY>'
-path = '/batches'
-constructed_url = endpoint + path
-
-payload= {
- "inputs": [
- {
- "source": {
- "sourceUrl": "https://YOUR-SOURCE-URL-WITH-READ-LIST-ACCESS-SAS",
- "storageSource": "AzureBlob",
- "language": "en"
- },
- "targets": [
- {
- "targetUrl": "https://YOUR-TARGET-URL-WITH-WRITE-LIST-ACCESS-SAS",
- "storageSource": "AzureBlob",
- "category": "general",
- "language": "es"
- }
- ]
- }
- ]
-}
-headers = {
- 'Ocp-Apim-Subscription-Key': key,
- 'Content-Type': 'application/json'
-}
-
-response = requests.post(constructed_url, headers=headers, json=payload)
-
-print(f'response status code: {response.status_code}\nresponse status: {response.reason}\nresponse headers: {response.headers}')
-```
-
-### [Java](#tab/java)
-
-```java
-
-import java.io.*;
-import java.net.*;
-import java.util.*;
-import com.squareup.okhttp.*;
-
-public class DocumentTranslation {
- String key = "<YOUR-KEY>";
- String endpoint = "https://<NAME-OF-YOUR-RESOURCE>.cognitiveservices.azure.com/translator/text/batch/v1.0";
- String path = endpoint + "/batches";
-
- OkHttpClient client = new OkHttpClient();
-
- public void post() throws IOException {
- MediaType mediaType = MediaType.parse("application/json");
-        RequestBody body = RequestBody.create(mediaType, "{\n \"inputs\": [\n {\n \"source\": {\n \"sourceUrl\": \"https://YOUR-SOURCE-URL-WITH-READ-LIST-ACCESS-SAS\",\n \"storageSource\": \"AzureBlob\",\n \"language\": \"en\"\n },\n \"targets\": [\n {\n \"targetUrl\": \"https://YOUR-TARGET-URL-WITH-WRITE-LIST-ACCESS-SAS\",\n \"category\": \"general\",\n \"language\": \"fr\",\n \"storageSource\": \"AzureBlob\"\n }\n ],\n \"storageType\": \"Folder\"\n }\n ]\n}");
- Request request = new Request.Builder()
- .url(path).post(body)
- .addHeader("Ocp-Apim-Subscription-Key", key)
- .addHeader("Content-type", "application/json")
- .build();
- Response response = client.newCall(request).execute();
- System.out.println(response.code());
- System.out.println(response.headers());
- }
-
- public static void main(String[] args) {
- try {
- DocumentTranslation sampleRequest = new DocumentTranslation();
- sampleRequest.post();
- } catch (Exception e) {
- System.out.println(e);
- }
- }
-}
-```
-
-### [Go](#tab/go)
-
-```go
-
-package main
-
-import (
- "bytes"
- "fmt"
-"net/http"
-)
-
-func main() {
-endpoint := "https://<NAME-OF-YOUR-RESOURCE>.cognitiveservices.azure.com/translator/text/batch/v1.0"
-key := "<YOUR-KEY>"
-uri := endpoint + "/batches"
-method := "POST"
-
-var jsonStr = []byte(`{"inputs":[{"source":{"sourceUrl":"https://YOUR-SOURCE-URL-WITH-READ-LIST-ACCESS-SAS","storageSource":"AzureBlob","language":"en"},"targets":[{"targetUrl":"https://YOUR-TARGET-URL-WITH-WRITE-LIST-ACCESS-SAS","storageSource":"AzureBlob","category":"general","language":"es"}]}]}`)
-
-req, err := http.NewRequest(method, uri, bytes.NewBuffer(jsonStr))
-if err != nil {
-    fmt.Println(err)
-    return
-}
-req.Header.Add("Ocp-Apim-Subscription-Key", key)
-req.Header.Add("Content-Type", "application/json")
-
-client := &http.Client{}
-
- res, err := client.Do(req)
- if err != nil {
- fmt.Println(err)
- return
- }
- defer res.Body.Close()
- fmt.Println("response status:", res.Status)
- fmt.Println("response headers", res.Header)
-}
-```
---
-## Get file formats
-
-Retrieve a list of supported file formats. If successful, this method returns a `200 OK` response code.
-
-### [C#](#tab/csharp)
-
-```csharp
-
-using System;
-using System.Net.Http;
-using System.Threading.Tasks;
--
-class Program
-{
--
- private static readonly string endpoint = "https://<NAME-OF-YOUR-RESOURCE>.cognitiveservices.azure.com/translator/text/batch/v1.0";
-
- static readonly string route = "/documents/formats";
-
- private static readonly string key = "<YOUR-KEY>";
-
- static async Task Main(string[] args)
- {
-
- HttpClient client = new HttpClient();
- using HttpRequestMessage request = new HttpRequestMessage();
- {
- request.Method = HttpMethod.Get;
- request.RequestUri = new Uri(endpoint + route);
- request.Headers.Add("Ocp-Apim-Subscription-Key", key);
--
- HttpResponseMessage response = await client.SendAsync(request);
- string result = response.Content.ReadAsStringAsync().Result;
-
- Console.WriteLine($"Status code: {response.StatusCode}");
- Console.WriteLine($"Response Headers: {response.Headers}");
- Console.WriteLine();
- Console.WriteLine(result);
-        }
-    }
-}
-```
-
-### [Node.js](#tab/javascript)
-
-```javascript
-
-const axios = require('axios');
-
-let endpoint = 'https://<NAME-OF-YOUR-RESOURCE>.cognitiveservices.azure.com/translator/text/batch/v1.0';
-let key = '<YOUR-KEY>';
-let route = '/documents/formats';
-
-let config = {
- method: 'get',
- url: endpoint + route,
- headers: {
- 'Ocp-Apim-Subscription-Key': key
- }
-};
-
-axios(config)
-.then(function (response) {
- console.log(JSON.stringify(response.data));
-})
-.catch(function (error) {
- console.log(error);
-});
-
-```
-
-### [Java](#tab/java)
-
-```java
-import java.io.*;
-import java.net.*;
-import java.util.*;
-import com.squareup.okhttp.*;
-
-public class GetFileFormats {
-
- String key = "<YOUR-KEY>";
- String endpoint = "https://<NAME-OF-YOUR-RESOURCE>.cognitiveservices.azure.com/translator/text/batch/v1.0";
- String url = endpoint + "/documents/formats";
- OkHttpClient client = new OkHttpClient();
-
- public void get() throws IOException {
- Request request = new Request.Builder().url(
- url).method("GET", null).addHeader("Ocp-Apim-Subscription-Key", key).build();
- Response response = client.newCall(request).execute();
- System.out.println(response.body().string());
- }
-
- public static void main(String[] args) throws IOException {
- try{
-        GetFileFormats formats = new GetFileFormats();
-        formats.get();
- } catch (Exception e) {
- System.out.println(e);
- }
- }
-}
-
-```
-
-### [Python](#tab/python)
-
-```python
-
-import http.client
-
-host = '<NAME-OF-YOUR-RESOURCE>.cognitiveservices.azure.com'
-parameters = '/translator/text/batch/v1.0/documents/formats'
-key = '<YOUR-KEY>'
-conn = http.client.HTTPSConnection(host)
-payload = ''
-headers = {
- 'Ocp-Apim-Subscription-Key': key
-}
-conn.request("GET", parameters , payload, headers)
-res = conn.getresponse()
-data = res.read()
-print(res.status)
-print()
-print(data.decode("utf-8"))
-```
-
-### [Go](#tab/go)
-
-```go
-
-package main
-
-import (
- "fmt"
- "net/http"
- "io/ioutil"
-)
-
-func main() {
-
- endpoint := "https://<NAME-OF-YOUR-RESOURCE>.cognitiveservices.azure.com/translator/text/batch/v1.0"
- key := "<YOUR-KEY>"
- uri := endpoint + "/documents/formats"
- method := "GET"
-
- client := &http.Client {
- }
- req, err := http.NewRequest(method, uri, nil)
-
- if err != nil {
- fmt.Println(err)
- return
- }
- req.Header.Add("Ocp-Apim-Subscription-Key", key)
-
- res, err := client.Do(req)
- if err != nil {
- fmt.Println(err)
- return
- }
- defer res.Body.Close()
-
- body, err := ioutil.ReadAll(res.Body)
- if err != nil {
- fmt.Println(err)
- return
- }
- fmt.Println(res.StatusCode)
- fmt.Println(string(body))
-}
-```
---
-## Get job status
-
-Get the current status for a single job and a summary of all jobs in a Document Translation request. If successful, this method returns a `200 OK` response code.
-<!-- markdownlint-disable MD024 -->
-
-### [C#](#tab/csharp)
-
-```csharp
-
-using System;
-using System.Net.Http;
-using System.Threading.Tasks;
--
-class Program
-{
--
- private static readonly string endpoint = "https://<NAME-OF-YOUR-RESOURCE>.cognitiveservices.azure.com/translator/text/batch/v1.0";
-
- static readonly string route = "/batches/{id}";
-
- private static readonly string key = "<YOUR-KEY>";
-
- static async Task Main(string[] args)
- {
-
- HttpClient client = new HttpClient();
- using HttpRequestMessage request = new HttpRequestMessage();
- {
- request.Method = HttpMethod.Get;
- request.RequestUri = new Uri(endpoint + route);
- request.Headers.Add("Ocp-Apim-Subscription-Key", key);
--
- HttpResponseMessage response = await client.SendAsync(request);
- string result = response.Content.ReadAsStringAsync().Result;
-
- Console.WriteLine($"Status code: {response.StatusCode}");
- Console.WriteLine($"Response Headers: {response.Headers}");
- Console.WriteLine();
- Console.WriteLine(result);
- }
- }
-}
-```
-
-### [Node.js](#tab/javascript)
-
-```javascript
-
-const axios = require('axios');
-
-let endpoint = 'https://<NAME-OF-YOUR-RESOURCE>.cognitiveservices.azure.com/translator/text/batch/v1.0';
-let key = '<YOUR-KEY>';
-let route = '/batches/{id}';
-
-let config = {
- method: 'get',
- url: endpoint + route,
- headers: {
- 'Ocp-Apim-Subscription-Key': key
- }
-};
-
-axios(config)
-.then(function (response) {
- console.log(JSON.stringify(response.data));
-})
-.catch(function (error) {
- console.log(error);
-});
-
-```
-
-### [Java](#tab/java)
-
-```java
-
-import java.io.*;
-import java.net.*;
-import java.util.*;
-import com.squareup.okhttp.*;
-
-public class GetJobStatus {
-
- String key = "<YOUR-KEY>";
- String endpoint = "https://<NAME-OF-YOUR-RESOURCE>.cognitiveservices.azure.com/translator/text/batch/v1.0";
- String url = endpoint + "/batches/{id}";
- OkHttpClient client = new OkHttpClient();
-
- public void get() throws IOException {
- Request request = new Request.Builder().url(
- url).method("GET", null).addHeader("Ocp-Apim-Subscription-Key", key).build();
- Response response = client.newCall(request).execute();
- System.out.println(response.body().string());
- }
-
- public static void main(String[] args) throws IOException {
- try{
- GetJobStatus jobStatus = new GetJobStatus();
- jobStatus.get();
- } catch (Exception e) {
- System.out.println(e);
- }
- }
-}
-
-```
-
-### [Python](#tab/python)
-
-```python
-
-import http.client
-
-host = '<NAME-OF-YOUR-RESOURCE>.cognitiveservices.azure.com'
-parameters = '/translator/text/batch/v1.0/batches/{id}'
-key = '<YOUR-KEY>'
-conn = http.client.HTTPSConnection(host)
-payload = ''
-headers = {
- 'Ocp-Apim-Subscription-Key': key
-}
-conn.request("GET", parameters , payload, headers)
-res = conn.getresponse()
-data = res.read()
-print(res.status)
-print()
-print(data.decode("utf-8"))
-```
-
-### [Go](#tab/go)
-
-```go
-
-package main
-
-import (
- "fmt"
- "net/http"
- "io/ioutil"
-)
-
-func main() {
-
- endpoint := "https://<NAME-OF-YOUR-RESOURCE>.cognitiveservices.azure.com/translator/text/batch/v1.0"
- key := "<YOUR-KEY>"
- uri := endpoint + "/batches/{id}"
- method := "GET"
-
- client := &http.Client {
- }
- req, err := http.NewRequest(method, uri, nil)
-
- if err != nil {
- fmt.Println(err)
- return
- }
- req.Header.Add("Ocp-Apim-Subscription-Key", key)
-
- res, err := client.Do(req)
- if err != nil {
- fmt.Println(err)
- return
- }
- defer res.Body.Close()
-
- body, err := ioutil.ReadAll(res.Body)
- if err != nil {
- fmt.Println(err)
- return
- }
- fmt.Println(res.StatusCode)
- fmt.Println(string(body))
-}
-```
---
-## Get document status
-
-### Brief overview
-
-Retrieve the status of a specific document in a Document Translation request. If successful, this method returns a `200 OK` response code.
-
-### [C#](#tab/csharp)
-
-```csharp
-
-using System;
-using System.Net.Http;
-using System.Threading.Tasks;
-
-class Program
-{
-
- private static readonly string endpoint = "https://<NAME-OF-YOUR-RESOURCE>.cognitiveservices.azure.com/translator/text/batch/v1.0";
-
- static readonly string route = "/batches/{id}/documents/{documentId}";
-
- private static readonly string key = "<YOUR-KEY>";
-
- static async Task Main(string[] args)
- {
-
- HttpClient client = new HttpClient();
- using HttpRequestMessage request = new HttpRequestMessage();
- {
- request.Method = HttpMethod.Get;
- request.RequestUri = new Uri(endpoint + route);
- request.Headers.Add("Ocp-Apim-Subscription-Key", key);
-
- HttpResponseMessage response = await client.SendAsync(request);
- string result = response.Content.ReadAsStringAsync().Result;
-
- Console.WriteLine($"Status code: {response.StatusCode}");
- Console.WriteLine($"Response Headers: {response.Headers}");
- Console.WriteLine();
- Console.WriteLine(result);
- }
- }
-}
-```
-
-### [Node.js](#tab/javascript)
-
-```javascript
-
-const axios = require('axios');
-
-let endpoint = 'https://<NAME-OF-YOUR-RESOURCE>.cognitiveservices.azure.com/translator/text/batch/v1.0';
-let key = '<YOUR-KEY>';
-let route = '/batches/{id}/documents/{documentId}';
-
-let config = {
- method: 'get',
- url: endpoint + route,
- headers: {
- 'Ocp-Apim-Subscription-Key': key
- }
-};
-
-axios(config)
-.then(function (response) {
- console.log(JSON.stringify(response.data));
-})
-.catch(function (error) {
- console.log(error);
-});
-
-```
-
-### [Java](#tab/java)
-
-```java
-
-import java.io.*;
-import java.net.*;
-import java.util.*;
-import com.squareup.okhttp.*;
-
-public class GetDocumentStatus {
-
- String key = "<YOUR-KEY>";
- String endpoint = "https://<NAME-OF-YOUR-RESOURCE>.cognitiveservices.azure.com/translator/text/batch/v1.0";
- String url = endpoint + "/batches/{id}/documents/{documentId}";
- OkHttpClient client = new OkHttpClient();
-
- public void get() throws IOException {
- Request request = new Request.Builder().url(
- url).method("GET", null).addHeader("Ocp-Apim-Subscription-Key", key).build();
- Response response = client.newCall(request).execute();
- System.out.println(response.body().string());
- }
-
- public static void main(String[] args) throws IOException {
- try{
- GetDocumentStatus documentStatus = new GetDocumentStatus();
- documentStatus.get();
- } catch (Exception e) {
- System.out.println(e);
- }
- }
-}
-
-```
-
-### [Python](#tab/python)
-
-```python
-
-import http.client
-
-host = '<NAME-OF-YOUR-RESOURCE>.cognitiveservices.azure.com'
-parameters = '/translator/text/batch/v1.0/batches/{id}/documents/{documentId}'
-key = '<YOUR-KEY>'
-conn = http.client.HTTPSConnection(host)
-payload = ''
-headers = {
- 'Ocp-Apim-Subscription-Key': key
-}
-conn.request("GET", parameters , payload, headers)
-res = conn.getresponse()
-data = res.read()
-print(res.status)
-print()
-print(data.decode("utf-8"))
-```
-
-### [Go](#tab/go)
-
-```go
-
-package main
-
-import (
- "fmt"
- "net/http"
- "io/ioutil"
-)
-
-func main() {
-
- endpoint := "https://<NAME-OF-YOUR-RESOURCE>.cognitiveservices.azure.com/translator/text/batch/v1.0"
- key := "<YOUR-KEY>"
- uri := endpoint + "/batches/{id}/documents/{documentId}"
- method := "GET"
-
- client := &http.Client {
- }
- req, err := http.NewRequest(method, uri, nil)
-
- if err != nil {
- fmt.Println(err)
- return
- }
- req.Header.Add("Ocp-Apim-Subscription-Key", key)
-
- res, err := client.Do(req)
- if err != nil {
- fmt.Println(err)
- return
- }
- defer res.Body.Close()
-
- body, err := ioutil.ReadAll(res.Body)
- if err != nil {
- fmt.Println(err)
- return
- }
- fmt.Println(res.StatusCode)
- fmt.Println(string(body))
-}
-```
---
-## Delete job
-
-### Brief overview
-
-Cancel a currently processing or queued job. Only documents for which translation hasn't started are canceled.
-
-### [C#](#tab/csharp)
-
-```csharp
-
-using System;
-using System.Net.Http;
-using System.Threading.Tasks;
-
-class Program
-{
-
- private static readonly string endpoint = "https://<NAME-OF-YOUR-RESOURCE>.cognitiveservices.azure.com/translator/text/batch/v1.0";
-
- static readonly string route = "/batches/{id}";
-
- private static readonly string key = "<YOUR-KEY>";
-
- static async Task Main(string[] args)
- {
-
- HttpClient client = new HttpClient();
- using HttpRequestMessage request = new HttpRequestMessage();
- {
- request.Method = HttpMethod.Delete;
- request.RequestUri = new Uri(endpoint + route);
- request.Headers.Add("Ocp-Apim-Subscription-Key", key);
-
- HttpResponseMessage response = await client.SendAsync(request);
- string result = response.Content.ReadAsStringAsync().Result;
-
- Console.WriteLine($"Status code: {response.StatusCode}");
- Console.WriteLine($"Response Headers: {response.Headers}");
- Console.WriteLine();
- Console.WriteLine(result);
- }
- }
-}
-```
-
-### [Node.js](#tab/javascript)
-
-```javascript
-
-const axios = require('axios');
-
-let endpoint = 'https://<NAME-OF-YOUR-RESOURCE>.cognitiveservices.azure.com/translator/text/batch/v1.0';
-let key = '<YOUR-KEY>';
-let route = '/batches/{id}';
-
-let config = {
- method: 'delete',
- url: endpoint + route,
- headers: {
- 'Ocp-Apim-Subscription-Key': key
- }
-};
-
-axios(config)
-.then(function (response) {
- console.log(JSON.stringify(response.data));
-})
-.catch(function (error) {
- console.log(error);
-});
-
-```
-
-### [Java](#tab/java)
-
-```java
-
-import java.io.*;
-import java.net.*;
-import java.util.*;
-import com.squareup.okhttp.*;
-
-public class DeleteJob {
-
- String key = "<YOUR-KEY>";
- String endpoint = "https://<NAME-OF-YOUR-RESOURCE>.cognitiveservices.azure.com/translator/text/batch/v1.0";
- String url = endpoint + "/batches/{id}";
- OkHttpClient client = new OkHttpClient();
-
- public void get() throws IOException {
- Request request = new Request.Builder().url(
- url).method("DELETE", null).addHeader("Ocp-Apim-Subscription-Key", key).build();
- Response response = client.newCall(request).execute();
- System.out.println(response.body().string());
- }
-
- public static void main(String[] args) throws IOException {
- try{
- DeleteJob deleteJob = new DeleteJob();
- deleteJob.get();
- } catch (Exception e) {
- System.out.println(e);
- }
- }
-}
-
-```
-
-### [Python](#tab/python)
-
-```python
-
-import http.client
-
-host = '<NAME-OF-YOUR-RESOURCE>.cognitiveservices.azure.com'
-parameters = '/translator/text/batch/v1.0/batches/{id}'
-key = '<YOUR-KEY>'
-conn = http.client.HTTPSConnection(host)
-payload = ''
-headers = {
- 'Ocp-Apim-Subscription-Key': key
-}
-conn.request("DELETE", parameters , payload, headers)
-res = conn.getresponse()
-data = res.read()
-print(res.status)
-print()
-print(data.decode("utf-8"))
-```
-
-### [Go](#tab/go)
-
-```go
-
-package main
-
-import (
- "fmt"
- "net/http"
- "io/ioutil"
-)
-
-func main() {
-
- endpoint := "https://<NAME-OF-YOUR-RESOURCE>.cognitiveservices.azure.com/translator/text/batch/v1.0"
- key := "<YOUR-KEY>"
- uri := endpoint + "/batches/{id}"
- method := "DELETE"
-
- client := &http.Client {
- }
- req, err := http.NewRequest(method, uri, nil)
-
- if err != nil {
- fmt.Println(err)
- return
- }
- req.Header.Add("Ocp-Apim-Subscription-Key", key)
-
- res, err := client.Do(req)
- if err != nil {
- fmt.Println(err)
- return
- }
- defer res.Body.Close()
-
- body, err := ioutil.ReadAll(res.Body)
- if err != nil {
- fmt.Println(err)
- return
- }
- fmt.Println(res.StatusCode)
- fmt.Println(string(body))
-}
-```
---
-### Common HTTP status codes
-
-| HTTP status code | Description | Possible reason |
-||-|--|
-| 200 | OK | The request was successful. |
-| 400 | Bad Request | A required parameter is missing, empty, or null. Or, the value passed to either a required or optional parameter is invalid. A common issue is a header that is too long. |
-| 401 | Unauthorized | The request isn't authorized. Check to make sure your key or token is valid and in the correct region. When managing your subscription in the Azure portal, make sure you're using the **Translator** single-service resource, _not_ the **Cognitive Services** multi-service resource. |
-| 429 | Too Many Requests | You've exceeded the quota or rate of requests allowed for your subscription. |
-| 502 | Bad Gateway | Network or server-side issue. May also indicate invalid headers. |
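-
-As a quick illustration of handling these codes, here's a minimal Python sketch; the resource name and key are placeholders, and the error handling is intentionally simplified:
-
-```python
-import http.client
-
-host = '<NAME-OF-YOUR-RESOURCE>.cognitiveservices.azure.com'
-path = '/translator/text/batch/v1.0/documents/formats'
-key = '<YOUR-KEY>'
-
-conn = http.client.HTTPSConnection(host)
-conn.request("GET", path, '', {'Ocp-Apim-Subscription-Key': key})
-res = conn.getresponse()
-
-# Branch on the common status codes from the table above.
-if res.status == 200:
-    print(res.read().decode("utf-8"))
-elif res.status == 401:
-    print("Unauthorized: check that the key belongs to a Translator single-service resource in the correct region.")
-elif res.status == 429:
-    print("Too many requests: reduce the request rate or review your subscription quota.")
-else:
-    print(f"Request failed: {res.status} {res.reason}")
-```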
-
-## Learn more
-
-* [Translator v3 API reference](../../reference/v3-0-reference.md)
-* [Language support](../../language-support.md)
-
-## Next steps
-
-> [!div class="nextstepaction"]
-> [Create a customized language system using Custom Translator](../../custom-translator/overview.md)
cognitive-services Language Studio https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Translator/document-translation/language-studio.md
- Title: Try Document Translation in Language Studio
-description: "Document Translation in Azure Cognitive Services Language Studio."
------ Previously updated : 03/17/2023--
-recommendations: false
--
-# Document Translation in Language Studio (Preview)
-
-> [!IMPORTANT]
-> Document Translation in Language Studio is currently in Public Preview. Features, approaches and processes may change, prior to General Availability (GA), based on user feedback.
-
- Document Translation in [**Azure Cognitive Services Language Studio**](https://language.cognitive.azure.com/home) is a no-code user interface that lets you interactively translate documents from local or Azure Blob Storage.
-
-## Supported regions
-
-The Document Translation feature in the Language Studio is currently available in the following regions:
-
-|DisplayName|Name|
-|--||
-|East US |`eastus`|
-|East US 2 |`eastus2`|
-|West US 2 | `westus2`|
-|West US 3| `westus3`|
-|UK South| `uksouth`|
-|South Central US| `southcentralus` |
-|Australia East|`australiaeast` |
-|Central India| `centralindia` |
-|North Europe| `northeurope` |
-|West Europe|`westeurope`|
-|Switzerland North| `switzerlandnorth` |
-
-## Prerequisites
-
-If you or an administrator have previously set up a Translator resource with a **system-assigned managed identity**, enabled a **Storage Blob Data Contributor** role assignment, and created an Azure Blob Storage account, you can skip this section and [**Get started**](#get-started) right away.
-
-> [!NOTE]
->
-> * Document Translation is currently supported in the Translator (single-service) resource only, and is **not** included in the Cognitive Services (multi-service) resource.
->
-> * Document Translation is **only** supported in the S1 Standard Service Plan (Pay-as-you-go) or in the D3 Volume Discount Plan. *See* [Cognitive Services pricing: Translator](https://azure.microsoft.com/pricing/details/cognitive-services/translator/).
->
-
-Document Translation in Language Studio requires the following resources:
-
-* An active [**Azure account**](https://azure.microsoft.com/free/cognitive-services/). If you don't have one, you can [**create a free account**](https://azure.microsoft.com/free/).
-
-* A [**single-service Translator resource**](https://portal.azure.com/#create/Microsoft.CognitiveServicesTextTranslation) (**not** a multi-service Cognitive Services resource) with [**system-assigned managed identity**](how-to-guides/create-use-managed-identities.md#enable-a-system-assigned-managed-identity) enabled and a [**Storage Blob Data Contributor**](how-to-guides/create-use-managed-identities.md#grant-storage-account-access-for-your-translator-resource) role assigned. For more information, *see* [**Managed identities for Document Translation**](how-to-guides/create-use-managed-identities.md). Also, make sure the region and pricing sections are completed as follows:
-
- * **Resource Region**. For this project, choose a geographic region such as **East US**. For Document Translation, [system-assigned managed identity](how-to-guides/create-use-managed-identities.md) isn't supported for the **Global** region.
-
- * **Pricing tier**. Select Standard S1 or D3 to try the service. Document Translation isn't supported in the free tier.
-
-* An [**Azure Blob Storage account**](https://portal.azure.com/#create/Microsoft.StorageAccount-ARM). An active Azure Blob Storage account is required to use Document Translation in the Language Studio.
-
-Now that you've completed the prerequisites, let's start translating documents!
-
-## Get started
-
-At least one **source document** is required. You can download our [document translation sample document](https://raw.githubusercontent.com/Azure-Samples/cognitive-services-REST-api-samples/master/curl/Translator/document-translation-sample.docx). The source language is English.
-
-1. Navigate to [Language Studio](https://language.cognitive.azure.com/home).
-
-1. If you're using the Language Studio for the first time, a **Select an Azure resource** pop-up screen appears. Make the following selections:
-
- * **Azure directory**.
- * **Azure subscription**.
- * **Resource type**. Choose **Translator**.
- * **Resource name**. The resource you select must have [**managed identity enabled**](how-to-guides/create-use-managed-identities.md).
-
- :::image type="content" source="media/language-studio/choose-azure-resource.png" alt-text="Screenshot of the language studio choose your Azure resource dialog window.":::
-
- > [!TIP]
- > You can update your selected directory and resource by selecting the Translator settings icon located in the left navigation section.
-
-1. Navigate to Language Studio and select the **Document translation** tile:
-
- :::image type="content" source="media/language-studio/welcome-home-page.png" alt-text="Screenshot of the language studio home page.":::
-
-1. If you're using the Document Translation feature for the first time, start with the **Initial Configuration** to select your **Azure Translator resource** and **Document storage** account:
-
- :::image type="content" source="media/language-studio/initial-configuration.png" alt-text="Screenshot of the initial configuration page.":::
-
-1. In the **Job** section, choose the language to **Translate from** (source) or keep the default **Auto-detect language** and select the language to **Translate to** (target). You can select a maximum of 10 target languages. Once you've selected your source and target language(s), select **Next**:
-
- :::image type="content" source="media/language-studio/basic-information.png" alt-text="Screenshot of the language studio basic information page.":::
-
-## File location and destination
-
-Your source and target files can be located in your local environment or your Azure Blob Storage [container](../../../storage/blobs/storage-quickstart-blobs-portal.md#create-a-container). Follow the steps to select where to retrieve your source and store your target files:
-
-### Choose a source file location
-
-#### [**Local**](#tab/local-env)
-
- 1. In the **files and destination** section, choose the files for translation by selecting the **Upload local files** button.
-
- 1. Next, select **&#x2795; Add file(s)**, choose the file(s) for translation, then select **Next**:
-
- :::image type="content" source="media/language-studio/upload-file.png" alt-text="Screenshot of the select files for translation page.":::
-
-#### [**Azure Blob Storage**](#tab/blob-storage)
-
-1. In the **files and destination** section, choose the files for translation by selecting the **Select for Blob storage** button.
-
-1. Next, choose your *source* **Blob container**, find and select the file(s) for translation, then select **Next**:
-
- :::image type="content" source="media/language-studio/select-blob-container.png" alt-text="Screenshot of select files from your blob container.":::
---
-### Choose a target file destination
-
-#### [**Local**](#tab/local-env)
-
-While still in the **files and destination** section, select **Download translated file(s)**. Once you have made your choice, select **Next**:
-
- :::image type="content" source="media/language-studio/target-file-upload.png" alt-text="Screenshot of the select destination for target files page.":::
-
-#### [**Azure Blob Storage**](#tab/blob-storage)
-
-1. While still in the **files and destination** section, select **Upload to Azure Blob Storage**.
-1. Next, choose your *target* **Blob container** and select **Next**:
-
- :::image type="content" source="media/language-studio/target-file-upload.png" alt-text="Screenshot of target file upload drop-down menu.":::
---
-### Optional selections and review
-
-1. (Optional) You can add **additional options** for custom translation and/or a glossary file. If you don't require these options, just select **Next**.
-
-1. On the **Review and finish** page, check to make sure that your selections are correct. If not, you can go back. If everything looks good, select the **Start translation job** button.
-
- :::image type="content" source="media/language-studio/start-translation.png" alt-text="Screenshot of the start translation job page.":::
-
-1. The **Job history** page contains the **Translation job id** and job status.
-
- > [!NOTE]
- > The list of translation jobs on the job history page includes all the jobs that were submitted through the chosen translator resource. If your colleague used the same translator resource to submit a job, you will see the status of that job on the job history page.
-
- :::image type="content" source="media/language-studio/job-history.png" alt-text="Screenshot of the job history page.":::
-
-That's it! You now know how to translate documents using Azure Cognitive Services Language Studio.
-
-## Next steps
-
-> [!div class="nextstepaction"]
->
-> [Use Document Translation REST APIs programmatically](how-to-guides/use-rest-api-programmatically.md)
cognitive-services Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Translator/document-translation/overview.md
- Title: What is Microsoft Azure Cognitive Services Document Translation?
-description: An overview of the cloud-based batch Document Translation service and process.
------ Previously updated : 05/05/2023--
-recommendations: false
-
-# What is Document Translation?
-
-Document Translation is a cloud-based feature of the [Azure Translator](../translator-overview.md) service and is part of the Azure Cognitive Services family of REST APIs. The Document Translation API can be used to translate multiple and complex documents across all [supported languages and dialects](../../language-support.md), while preserving original document structure and data format.
-
-## Key features
-
-| Feature | Description |
-| | -|
-| **Translate large files**| Translate whole documents asynchronously.|
-|**Translate numerous files**|Translate multiple files across all supported languages and dialects while preserving document structure and data format.|
-|**Preserve source file presentation**| Translate files while preserving the original layout and format.|
-|**Apply custom translation**| Translate documents using general and [custom translation](../customization.md#custom-translator) models.|
-|**Apply custom glossaries**|Translate documents using custom glossaries.|
-|**Automatically detect document language**|Let the Document Translation service determine the language of the document.|
-|**Translate documents with content in multiple languages**|Use the autodetect feature to translate documents with content in multiple languages into your target language.|
-
-> [!NOTE]
-> When translating documents with content in multiple languages, the feature is intended for complete sentences in a single language. If sentences are composed of more than one language, the content may not all translate into the target language.
-> For more information on input requirements, *see* [Document Translation request limits](../request-limits.md#document-translation).
-
-## Development options
-
-You can add Document Translation to your applications using the REST API or a client-library SDK:
-
-* The [**REST API**](reference/rest-api-guide.md) is a language-agnostic interface that enables you to create HTTP requests and authorization headers to translate documents.
-
-* The [**client-library SDKs**](client-sdks.md) are language-specific classes, objects, methods, and code that you can quickly use by adding a reference in your project. Currently, Document Translation has programming language support for [**C#/.NET**](/dotnet/api/azure.ai.translation.document) and [**Python**](https://pypi.org/project/azure-ai-translation-document/), as shown in the sketch that follows.
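-
-As a minimal sketch of the SDK option, the following Python example uses the `azure-ai-translation-document` package; the endpoint, key, and SAS-enabled container URLs are placeholders:
-
-```python
-# pip install azure-ai-translation-document
-from azure.core.credentials import AzureKeyCredential
-from azure.ai.translation.document import DocumentTranslationClient
-
-# Placeholder values: your Translator custom domain endpoint and key.
-endpoint = "https://<NAME-OF-YOUR-RESOURCE>.cognitiveservices.azure.com"
-key = "<YOUR-KEY>"
-
-client = DocumentTranslationClient(endpoint, AzureKeyCredential(key))
-
-# Placeholder container URLs with SAS tokens appended as query strings.
-source_url = "<SAS-URL-OF-YOUR-SOURCE-CONTAINER>"
-target_url = "<SAS-URL-OF-YOUR-TARGET-CONTAINER>"
-
-# Start an asynchronous translation to French and wait for it to finish.
-poller = client.begin_translation(source_url, target_url, "fr")
-
-for document in poller.result():
-    print(f"{document.id}: {document.status}")
-```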
-
-## Get started
-
-In our quickstart, you learn how to rapidly get started using Document Translation. To begin, you need an active [Azure account](https://azure.microsoft.com/free/cognitive-services/). If you don't have one, you can [create a free account](https://azure.microsoft.com/free).
-
-> [!div class="nextstepaction"]
-> [Start here](get-started-with-document-translation.md "Learn how to use Document Translation with HTTP REST")
-
-## Supported document formats
-
-The [Get supported document formats method](reference/get-supported-document-formats.md) returns a list of document formats supported by the Document Translation service. The list includes the common file extension, and the content-type if using the upload API.
-
-Document Translation supports the following document file types:
-
-| File type| File extension|Description|
-|||--|
-|Adobe PDF|`pdf`|Portable document file format. Document Translation uses optical character recognition (OCR) technology to extract and translate text in scanned PDF documents while retaining the original layout.|
-|Comma-Separated Values |`csv`| A comma-delimited raw-data file used by spreadsheet programs.|
-|HTML|`html`, `htm`|Hyper Text Markup Language.|
-|Localization Interchange File Format|`xlf`| A parallel document format, export of Translation Memory systems. The languages used are defined inside the file.|
-|Markdown| `markdown`, `mdown`, `mkdn`, `md`, `mkd`, `mdwn`, `mdtxt`, `mdtext`, `rmd`| A lightweight markup language for creating formatted text.|
-|M&#8203;HTML|`mhtml`, `mht`| A web page archive format used to combine HTML code and its companion resources.|
-|Microsoft Excel|`xls`, `xlsx`|A spreadsheet file for data analysis and documentation.|
-|Microsoft Outlook|`msg`|An email message created or saved within Microsoft Outlook.|
-|Microsoft PowerPoint|`ppt`, `pptx`| A presentation file used to display content in a slideshow format.|
-|Microsoft Word|`doc`, `docx`| A text document file.|
-|OpenDocument Text|`odt`|An open-source text document file.|
-|OpenDocument Presentation|`odp`|An open-source presentation file.|
-|OpenDocument Spreadsheet|`ods`|An open-source spreadsheet file.|
-|Rich Text Format|`rtf`|A text document containing formatting.|
-|Tab Separated Values/TAB|`tsv`/`tab`| A tab-delimited raw-data file used by spreadsheet programs.|
-|Text|`txt`| An unformatted text document.|
-
-## Request limits
-
-For detailed information regarding Azure Translator Service request limits, *see* [**Document Translation request limits**](../request-limits.md#document-translation).
-
-### Legacy file types
-
-Source file types are preserved during the document translation with the following **exceptions**:
-
-| Source file extension | Translated file extension|
-| | |
-| .doc, .odt, .rtf, | .docx |
-| .xls, .ods | .xlsx |
-| .ppt, .odp | .pptx |
-
-## Supported glossary formats
-
-Document Translation supports the following glossary file types:
-
-| File type| File extension|Description|
-|||--|
-|Comma-Separated Values| `csv` |A comma-delimited raw-data file used by spreadsheet programs.|
-|Localization Interchange File Format|`xlf`, `xliff`| A parallel document format, export of Translation Memory systems. The languages used are defined inside the file.|
-|Tab-Separated Values/TAB|`tsv`, `tab`| A tab-delimited raw-data file used by spreadsheet programs.|
-
-## Data residency
-
-Document Translation data residency depends on the Azure region where your Translator resource was created:
-
-* Translator resources **created** in any region in Europe are **processed** at data centers in West Europe and North Europe.
-* Translator resources **created** in any region in Asia or Australia are **processed** at data centers in Southeast Asia and Australia East.
-* Translator resources **created** in all other regions, including Global, North America, and South America, are **processed** at data centers in East US and West US 2.
-
-### Text Translation data residency
-
-✔️ Feature: **Translator Text** </br>
-✔️ Region where resource created: **Any**
-
-| Service endpoint | Request processing data center |
-||--|
-|**Global (recommended):**</br>**`api.cognitive.microsofttranslator.com`**|Closest available data center.|
-|**Americas:**</br>**`api-nam.cognitive.microsofttranslator.com`**|East US &bull; South Central US &bull; West Central US &bull; West US 2|
-|**Europe:**</br>**`api-eur.cognitive.microsofttranslator.com`**|North Europe &bull; West Europe|
-| **Asia Pacific:**</br>**`api-apc.cognitive.microsofttranslator.com`**|Korea South &bull; Japan East &bull; Southeast Asia &bull; Australia East|
-
-### Document Translation data residency
-
-✔️ Feature: **Document Translation**</br>
-✔️ Service endpoint: **Custom:** &#8198;&#8198;&#8198; **`<name-of-your-resource.cognitiveservices.azure.com/translator/text/batch/v1.0`**
-
- |Resource region| Request processing data center |
-|-|--|
-|**Global and any region in the Americas** | East US &bull; West US 2|
-|**Any region in Europe**| North Europe &bull; West Europe|
-|**Any region in Asia Pacific**| Southeast Asia &bull; Australia East|
-## Next steps
-
-> [!div class="nextstepaction"]
-> [Get Started with Document Translation](get-started-with-document-translation.md)
cognitive-services Document Translation Rest Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Translator/document-translation/quickstarts/document-translation-rest-api.md
- Title: Get started with Document Translation
-description: "How to create a Document Translation service using C#, Go, Java, Node.js, or Python programming languages and the REST API"
------ Previously updated : 06/02/2023-
-recommendations: false
-
-zone_pivot_groups: programming-languages-set-translator
--
-# Get started with Document Translation
-
-Document Translation is a cloud-based feature of the [Azure Translator](../../translator-overview.md) service that asynchronously translates whole documents in [supported languages](../../language-support.md) and various [file formats](../overview.md#supported-document-formats). In this quickstart, learn to use Document Translation with a programming language of your choice to translate a source document into a target language while preserving structure and text formatting.
-
-## Prerequisites
-
-> [!IMPORTANT]
->
-> * Java and JavaScript Document Translation SDKs are currently available in **public preview**. Features, approaches and processes may change, prior to the general availability (GA) release, based on user feedback.
-> * C# and Python SDKs are general availability (GA) releases ready for use in your production applications.
-> * Document Translation is currently supported in the Translator (single-service) resource only, and is **not** included in the Cognitive Services (multi-service) resource.
->
-> * Document Translation is **only** supported in the S1 Standard Service Plan (Pay-as-you-go) or in the D3 Volume Discount Plan. *See* [Cognitive Services pricing: Translator](https://azure.microsoft.com/pricing/details/cognitive-services/translator/).
->
-
-To get started, you need:
-
-* An active [**Azure account**](https://azure.microsoft.com/free/cognitive-services/). If you don't have one, you can [**create a free account**](https://azure.microsoft.com/free/).
-
-* An [**Azure Blob Storage account**](https://portal.azure.com/#create/Microsoft.StorageAccount-ARM). You also need to [create containers](#create-azure-blob-storage-containers) in your Azure Blob Storage account for your source and target files:
-
- * **Source container**. This container is where you upload your files for translation (required).
- * **Target container**. This container is where your translated files are stored (required).
-
-* A [**single-service Translator resource**](https://portal.azure.com/#create/Microsoft.CognitiveServicesTextTranslation) (**not** a multi-service Cognitive Services resource):
-
- **Complete the Translator project and instance details fields as follows:**
-
- 1. **Subscription**. Select one of your available Azure subscriptions.
-
- 1. **Resource Group**. You can create a new resource group or add your resource to a pre-existing resource group that shares the same lifecycle, permissions, and policies.
-
- 1. **Resource Region**. Choose **Global** unless your business or application requires a specific region. If you're planning on using a [system-assigned managed identity](../how-to-guides/create-use-managed-identities.md) for authentication, choose a **geographic** region like **West US**.
-
- 1. **Name**. Enter the name you have chosen for your resource. The name you choose must be unique within Azure.
-
- > [!NOTE]
- > Document Translation requires a custom domain endpoint. The value that you enter in the Name field will be the custom domain name parameter for your endpoint.
-
- 1. **Pricing tier**. Document Translation isn't supported in the free tier. **Select Standard S1 to try the service**.
-
- 1. Select **Review + Create**.
-
- 1. Review the service terms and select **Create** to deploy your resource.
-
- 1. After your resource has successfully deployed, select **Go to resource**.
-
-<!-- > [!div class="nextstepaction"]
-> [I ran into an issue with the prerequisites.](https://microsoft.qualtrics.com/jfe/form/SV_0Cl5zkG3CnDjq6O?Pillar=Language&Product=Document-translation&Page=quickstart&Section=Prerequisites) -->
-
-### Retrieve your key and document translation endpoint
-
-Requests to the Translator service require a read-only key and custom endpoint to authenticate access. The custom domain endpoint is a URL formatted with your resource name, hostname, and Translator subdirectories and is available in the Azure portal.
-
-1. If you've created a new resource, after it deploys, select **Go to resource**. If you have an existing Document Translation resource, navigate directly to your resource page.
-
-1. In the left rail, under *Resource Management*, select **Keys and Endpoint**.
-
-1. Copy and paste your **`key`** and **`document translation endpoint`** in a convenient location, such as *Microsoft Notepad*. Only one key is necessary to make an API call.
-
-1. You paste your **`key`** and **`document translation endpoint`** into the code samples to authenticate your request to the Document Translation service.
-
- :::image type="content" source="../media/document-translation-key-endpoint.png" alt-text="Screenshot showing the get your key field in Azure portal.":::
-
-<!-- > [!div class="nextstepaction"]
-> [I ran into an issue retrieving my key and endpoint.](https://microsoft.qualtrics.com/jfe/form/SV_0Cl5zkG3CnDjq6O?Pillar=Language&Product=Document-translation&Page=quickstart&Section=Retrieve-your-keys-and-endpoint) -->
-
-## Create Azure Blob Storage containers
-
-You need to [**create containers**](../../../../storage/blobs/storage-quickstart-blobs-portal.md#create-a-container) in your [**Azure Blob Storage account**](https://portal.azure.com/#create/Microsoft.StorageAccount-ARM) for source and target files.
-
-* **Source container**. This container is where you upload your files for translation (required).
-* **Target container**. This container is where your translated files are stored (required).
-
-### **Required authentication**
-
-The `sourceUrl`, `targetUrl`, and optional `glossaryUrl` must include a Shared Access Signature (SAS) token, appended as a query string. The token can be assigned to your container or specific blobs. *See* [**Create SAS tokens for Document Translation process**](../how-to-guides/create-sas-tokens.md). A minimal sketch of appending SAS tokens to container URLs follows the tip below.
-
-* Your **source** container or blob must have designated **read** and **list** access.
-* Your **target** container or blob must have designated **write** and **list** access.
-* Your **glossary** blob must have designated **read** and **list** access.
-
-> [!TIP]
->
-> * If you're translating **multiple** files (blobs) in an operation, **delegate SAS access at the container level**.
-> * If you're translating a **single** file (blob) in an operation, **delegate SAS access at the blob level**.
-> * As an alternative to SAS tokens, you can use a [**system-assigned managed identity**](../how-to-guides/create-use-managed-identities.md) for authentication.
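-
-As a minimal sketch of this pattern (the storage account, container names, and SAS tokens are placeholders), each token is appended to its container URL as a query string:
-
-```python
-# Placeholder container URLs.
-source_container = "https://<YOUR-STORAGE-ACCOUNT>.blob.core.windows.net/<SOURCE-CONTAINER>"
-target_container = "https://<YOUR-STORAGE-ACCOUNT>.blob.core.windows.net/<TARGET-CONTAINER>"
-
-# Placeholder SAS tokens: source needs read and list access, target needs write and list access.
-source_sas = "<SOURCE-CONTAINER-SAS-TOKEN>"
-target_sas = "<TARGET-CONTAINER-SAS-TOKEN>"
-
-# Append each token to its container URL as a query string.
-source_url = f"{source_container}?{source_sas}"
-target_url = f"{target_container}?{target_sas}"
-
-print(source_url)
-print(target_url)
-```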
-
-<!-- > [!div class="nextstepaction"]
-> [I ran into an issue creating blob storage containers with authentication.](https://microsoft.qualtrics.com/jfe/form/SV_0Cl5zkG3CnDjq6O?Pillar=Language&Product=Document-translation&Page=quickstart&Section=Create-blob-storage-containers) -->
-
-### Sample document
-
-For this project, you need a **source document** uploaded to your **source container**. You can download our [document translation sample document](https://raw.githubusercontent.com/Azure-Samples/cognitive-services-REST-api-samples/master/curl/Translator/document-translation-sample.docx) for this quickstart. The source language is English.
-------------
-That's it, congratulations! In this quickstart, you used Document Translation to translate a document while preserving its original structure and data format.
-
-## Next steps
--
cognitive-services Document Translation Sdk https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Translator/document-translation/quickstarts/document-translation-sdk.md
- Title: "Document Translation C#/.NET or Python client library"-
-description: Use the Translator C#/.NET or Python client library (SDK) for cloud-based batch document translation service and process
------- Previously updated : 06/19/2023-
-zone_pivot_groups: programming-languages-document-sdk
--
-# Document Translation client-library SDKs
-<!-- markdownlint-disable MD024 -->
-<!-- markdownlint-disable MD001 -->
-
-Document Translation is a cloud-based feature of the [Azure Translator](../../translator-overview.md) service that asynchronously translates whole documents in [supported languages](../../language-support.md) and various [file formats](../overview.md#supported-document-formats). In this quickstart, learn to use Document Translation with a programming language of your choice to translate a source document into a target language while preserving structure and text formatting.
-
-> [!IMPORTANT]
->
-> * Document Translation is currently supported in the Translator (single-service) resource only, and is **not** included in the Cognitive Services (multi-service) resource.
->
-> * Document Translation is supported in paid tiers. The Language Studio only supports the S1 or D3 instance tiers. We suggest that you select Standard S1 to try Document Translation. *See* [Cognitive Services pricing: Translator](https://azure.microsoft.com/pricing/details/cognitive-services/translator/).
-
-## Prerequisites
-
-To get started, you need:
-
-* An active [**Azure account**](https://azure.microsoft.com/free/cognitive-services/). If you don't have one, you can [**create a free account**](https://azure.microsoft.com/free/).
-
-* A [**single-service Translator resource**](https://portal.azure.com/#create/Microsoft.CognitiveServicesTextTranslation) (**not** a multi-service Cognitive Services resource). If you're planning on using the Document Translation feature with [managed identity authorization](../how-to-guides/create-use-managed-identities.md), choose a geographic region such as **East US**. Select the **Standard S1 or D3** pricing tier.
-
-* An [**Azure Blob Storage account**](https://portal.azure.com/#create/Microsoft.StorageAccount-ARM). You'll [**create containers**](../../../../storage/blobs/storage-quickstart-blobs-portal.md#create-a-container) in your Azure Blob Storage account for your source and target files:
-
- * **Source container**. This container is where you upload your files for translation (required).
- * **Target container**. This container is where your translated files are stored (required).
-
-### Storage container authorization
-
-You can choose one of the following options to authorize access to your Translator resource.
-
-**✔️ Managed Identity**. A managed identity is a service principal that creates an Azure Active Directory (Azure AD) identity and specific permissions for an Azure managed resource. Managed identities enable you to run your Translator application without having to embed credentials in your code. Managed identities are a safer way to grant access to storage data and replace the requirement for you to include shared access signature tokens (SAS) with your source and target URLs.
-
-To learn more, *see* [Managed identities for Document Translation](../how-to-guides/create-use-managed-identities.md).
-
- :::image type="content" source="../media/managed-identity-rbac-flow.png" alt-text="Screenshot of managed identity flow (RBAC).":::
-
-**✔️ Shared Access Signature (SAS)**. A shared access signature is a URL that grants restricted access to your Azure Storage containers and blobs for a specified period of time. To use this method, you need to create Shared Access Signature (SAS) tokens for your source and target containers. The `sourceUrl` and `targetUrl` must include a Shared Access Signature (SAS) token, appended as a query string. The token can be assigned to your container or specific blobs.
-
-* Your **source** container or blob must have designated **read** and **list** access.
-* Your **target** container or blob must have designated **write** and **list** access.
-
-To learn more, *see* [**Create SAS tokens**](../how-to-guides/create-sas-tokens.md).
-
- :::image type="content" source="../media/sas-url-token.png" alt-text="Screenshot of a resource URI with a SAS token.":::
-----
-### Next step
-
-> [!div class="nextstepaction"]
-> [**Learn more about Document Translation operations**](../reference/rest-api-guide.md)
cognitive-services Cancel Translation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Translator/document-translation/reference/cancel-translation.md
- Title: Cancel translation method-
-description: The cancel translation method cancels a currently processing or queued operation.
------ Previously updated : 06/20/2021--
-# Cancel translation
-
-Cancel a currently processing or queued operation. An operation isn't canceled if it's already completed, has failed, or is still canceling; in those cases, a bad request is returned. Documents that have already completed translation aren't canceled and are charged. All pending documents are canceled if possible.
-
-## Request URL
-
-Send a `DELETE` request to:
-
-```HTTP
-DELETE https://<NAME-OF-YOUR-RESOURCE>.cognitiveservices.azure.com/translator/text/batch/v1.0/batches/{id}
-```
-
-Learn how to find your [custom domain name](../get-started-with-document-translation.md#find-your-custom-domain-name).
-
-> [!IMPORTANT]
->
-> * **All API requests to the Document Translation service require a custom domain endpoint**.
-> * You can't use the endpoint found on your Azure portal resource _Keys and Endpoint_ page, nor the global Translator endpoint (`api.cognitive.microsofttranslator.com`), to make HTTP requests to Document Translation.
-
-## Request parameters
-
-Request parameters passed on the query string are:
-
-|Query parameter|Required|Description|
-|--|--|--|
-|id|True|The operation-ID.|
-
-## Request headers
-
-Request headers are:
-
-|Headers|Description|
-|--|--|
-|Ocp-Apim-Subscription-Key|Required request header|
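-
-For reference, a minimal Python sketch of the cancel request might look like the following; the resource name, key, and job ID are placeholders:
-
-```python
-import http.client
-
-host = '<NAME-OF-YOUR-RESOURCE>.cognitiveservices.azure.com'
-path = '/translator/text/batch/v1.0/batches/<JOB-ID>'
-key = '<YOUR-KEY>'
-
-# Send the DELETE request described above with the required header.
-conn = http.client.HTTPSConnection(host)
-conn.request("DELETE", path, '', {'Ocp-Apim-Subscription-Key': key})
-res = conn.getresponse()
-print(res.status)
-print(res.read().decode("utf-8"))
-```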
-
-## Response status codes
-
-The following are the possible HTTP status codes that a request returns.
-
-| Status Code| Description|
-|--|--|
-|200|OK. Cancel request has been submitted|
-|401|Unauthorized. Check your credentials.|
-|404|Not Found. Resource isn't found.|
-|500|Internal Server Error.|
-|Other Status Codes|<ul><li>Too many requests</li><li>Server temporarily unavailable</li></ul>|
-
-## Cancel translation response
-
-### Successful response
-
-The following information is returned in a successful response.
-
-|Name|Type|Description|
-| | | |
-|id|string|ID of the operation.|
-|createdDateTimeUtc|string|Operation created date time.|
-|lastActionDateTimeUtc|string|Date time in which the operation's status has been updated.|
-|status|String|List of possible statuses for job or document: <ul><li>Canceled</li><li>Cancelling</li><li>Failed</li><li>NotStarted</li><li>Running</li><li>Succeeded</li><li>ValidationFailed</li></ul>|
-|summary|StatusSummary|Summary containing the details listed below.|
-|summary.total|integer|Count of total documents.|
-|summary.failed|integer|Count of documents failed.|
-|summary.success|integer|Count of documents successfully translated.|
-|summary.inProgress|integer|Count of documents in progress.|
-|summary.notYetStarted|integer|Count of documents not yet started processing.|
-|summary.cancelled|integer|Count of documents canceled.|
-|summary.totalCharacterCharged|integer|Total characters charged by the API.|
-
-### Error response
-
-|Name|Type|Description|
-| | | |
-|code|string|Enums containing high-level error codes. Possible values:<br/><ul><li>InternalServerError</li><li>InvalidArgument</li><li>InvalidRequest</li><li>RequestRateTooHigh</li><li>ResourceNotFound</li><li>ServiceUnavailable</li><li>Unauthorized</li></ul>|
-|message|string|Gets high-level error message.|
-|target|string|Gets the source of the error. For example, it would be "documents" or "document id" for an invalid document.|
-|innerError|InnerTranslationError|New Inner Error format that conforms to Cognitive Services API Guidelines. This contains required properties ErrorCode, message, and optional properties target, details (key value pair), inner error (this can be nested).|
-|innerError.code|string|Gets code error string.|
-|innerError.message|string|Gets high-level error message.|
-|innerError.target|string|Gets the source of the error. For example, it would be "documents" or "document ID" if there was an invalid document.|
-
-## Examples
-
-### Example successful response
-
-The following JSON object is an example of a successful response.
-
-Status code: 200
-
-```JSON
-{
- "id": "727bf148-f327-47a0-9481-abae6362f11e",
- "createdDateTimeUtc": "2020-03-26T00:00:00Z",
- "lastActionDateTimeUtc": "2020-03-26T01:00:00Z",
- "status": "Succeeded",
- "summary": {
- "total": 10,
- "failed": 1,
- "success": 9,
- "inProgress": 0,
- "notYetStarted": 0,
- "cancelled": 0,
- "totalCharacterCharged": 0
- }
-}
-```
-
-### Example error response
-
-The following JSON object is an example of an error response. The schema for other error codes is the same.
-
-Status code: 500
-
-```JSON
-{
- "error": {
- "code": "InternalServerError",
- "message": "Internal Server Error",
- "target": "Operation",
- "innerError": {
- "code": "InternalServerError",
- "message": "Unexpected internal server error has occurred"
- }
- }
-}
-```
-
-## Next steps
-
-Follow our quickstart to learn more about using Document Translation and the client library.
-
-> [!div class="nextstepaction"]
-> [Get started with Document Translation](../get-started-with-document-translation.md)
cognitive-services Get Document Status https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Translator/document-translation/reference/get-document-status.md
- Title: Get document status method-
-description: The get document status method returns the status for a specific document.
------ Previously updated : 04/21/2021--
-# Get document status
-
-The Get Document Status method returns the translation status for a specific document, based on the request ID and document ID.
-
-## Request URL
-
-Send a `GET` request to:
-```HTTP
-GET https://<NAME-OF-YOUR-RESOURCE>.cognitiveservices.azure.com/translator/text/batch/v1.0/batches/{id}/documents/{documentId}
-```
-
-Learn how to find your [custom domain name](../get-started-with-document-translation.md#find-your-custom-domain-name).
-
-> [!IMPORTANT]
->
-> * **All API requests to the Document Translation service require a custom domain endpoint**.
-> * You can't use the endpoint found on your Azure portal resource _Keys and Endpoint_ page, nor the global Translator endpoint (`api.cognitive.microsofttranslator.com`), to make HTTP requests to Document Translation.
-
-## Request parameters
-
-Request parameters passed on the query string are:
-
-|Query parameter|Required|Description|
-| | | |
-|documentId|True|The document ID.|
-|id|True|The batch ID.|
-
-## Request headers
-
-Request headers are:
-
-|Headers|Description|
-| | |
-|Ocp-Apim-Subscription-Key|Required request header|
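-
-For reference, a minimal Python sketch of this request might look like the following; the resource name, key, batch ID, and document ID are placeholders:
-
-```python
-import http.client
-
-host = '<NAME-OF-YOUR-RESOURCE>.cognitiveservices.azure.com'
-path = '/translator/text/batch/v1.0/batches/<BATCH-ID>/documents/<DOCUMENT-ID>'
-key = '<YOUR-KEY>'
-
-# Send the GET request described above with the required header.
-conn = http.client.HTTPSConnection(host)
-conn.request("GET", path, '', {'Ocp-Apim-Subscription-Key': key})
-res = conn.getresponse()
-print(res.status)
-print(res.read().decode("utf-8"))
-```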
-
-## Response status codes
-
-The following are the possible HTTP status codes that a request returns.
-
-|Status Code|Description|
-| | |
-|200|OK. Successful request, accepted by the service. The operation details are returned. Headers: Retry-After: integer, ETag: string|
-|401|Unauthorized. Check your credentials.|
-|404|Not Found. Resource is not found.|
-|500|Internal Server Error.|
-|Other Status Codes|<ul><li>Too many requests</li><li>Server temporarily unavailable</li></ul>|
-
-## Get document status response
-
-### Successful get document status response
-
-|Name|Type|Description|
-| | | |
-|path|string|Location of the document or folder.|
-|sourcePath|string|Location of the source document.|
-|createdDateTimeUtc|string|Operation created date time.|
-|lastActionDateTimeUtc|string|Date time in which the operation's status has been updated.|
-|status|String|List of possible statuses for job or document: <ul><li>Canceled</li><li>Cancelling</li><li>Failed</li><li>NotStarted</li><li>Running</li><li>Succeeded</li><li>ValidationFailed</li></ul>|
-|to|string|Two-letter language code of the target (To) language. [See the list of languages](../../language-support.md).|
-|progress|number|Progress of the translation if available|
-|id|string|Document ID.|
-|characterCharged|integer|Characters charged by the API.|
-
-### Error response
-
-|Name|Type|Description|
-| | | |
-|code|string|Enums containing high-level error codes. Possible values:<br/><ul><li>InternalServerError</li><li>InvalidArgument</li><li>InvalidRequest</li><li>RequestRateTooHigh</li><li>ResourceNotFound</li><li>ServiceUnavailable</li><li>Unauthorized</li></ul>|
-|message|string|Gets high-level error message.|
-|innerError|InnerTranslationError|New Inner Error format that conforms to Cognitive Services API Guidelines. This contains required properties ErrorCode, message, and optional properties target, details (key value pair), inner error (this can be nested).|
-|innerError.code|string|Gets code error string.|
-|innerError.message|string|Gets high-level error message.|
-|innerError.target|string|Gets the source of the error. For example, it would be "documents" or "document ID" in the case of an invalid document.|
-
-## Examples
-
-### Example successful response
-The following JSON object is an example of a successful response.
-
-```JSON
-{
- "path": "https://myblob.blob.core.windows.net/destinationContainer/fr/mydoc.txt",
- "sourcePath": "https://myblob.blob.core.windows.net/sourceContainer/fr/mydoc.txt",
- "createdDateTimeUtc": "2020-03-26T00:00:00Z",
- "lastActionDateTimeUtc": "2020-03-26T01:00:00Z",
- "status": "Running",
- "to": "fr",
- "progress": 0.1,
- "id": "273622bd-835c-4946-9798-fd8f19f6bbf2",
- "characterCharged": 0
-}
-```
-
-### Example error response
-
-The following JSON object is an example of an error response. The schema for other error codes is the same.
-
-Status code: 401
-
-```JSON
-{
- "error": {
- "code": "Unauthorized",
- "message": "User is not authorized",
- "target": "Document",
- "innerError": {
- "code": "Unauthorized",
- "message": "Operation is not authorized"
- }
- }
-}
-```
-
-## Next steps
-
-Follow our quickstart to learn more about using Document Translation and the client library.
-
-> [!div class="nextstepaction"]
-> [Get started with Document Translation](../get-started-with-document-translation.md)
cognitive-services Get Documents Status https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Translator/document-translation/reference/get-documents-status.md
- Title: Get documents status-
-description: The get documents status method returns the status for all documents in a batch document translation request.
------ Previously updated : 04/21/2021--
-# Get documents status
-
-The Get Documents Status method returns the status for all documents in a Document Translation request. If the number of documents in the response exceeds the paging limit, server-side paging is used. Paginated responses indicate a partial result and include a continuation token in the response. The absence of a continuation token means that no other pages are available.
-
-$top, $skip and $maxpagesize query parameters can be used to specify a number of results to return and an offset for the collection.
-
-$top indicates the total number of records the user wants to be returned across all pages. $skip indicates the number of records to skip from the list of document status held by the server based on the sorting method specified. By default, we sort by descending start time. $maxpagesize is the maximum items returned in a page. If more items are requested via $top (or $top is not specified and there are more items to be returned), @nextLink will contain the link to the next page.
-
-The $orderBy query parameter can be used to sort the returned list (for example, "$orderBy=createdDateTimeUtc asc" or "$orderBy=createdDateTimeUtc desc"). The default sorting is descending by createdDateTimeUtc. Some query parameters can be used to filter the returned list; for example, "status=Succeeded,Cancelled" returns only succeeded and canceled documents. The createdDateTimeUtcStart and createdDateTimeUtcEnd parameters can be used together or separately to specify a datetime range to filter the returned list. The supported filtering query parameters are status, IDs, createdDateTimeUtcStart, and createdDateTimeUtcEnd.
-
-When both $top and $skip are included, the server should first apply $skip and then $top on the collection.
-
-> [!NOTE]
-> If the server can't honor $top and/or $skip, the server must return an error to the client informing about it instead of just ignoring the query options. This reduces the risk of the client making assumptions about the data returned.
-
-## Request URL
-
-Send a `GET` request to:
-```HTTP
-GET https://<NAME-OF-YOUR-RESOURCE>.cognitiveservices.azure.com/translator/text/batch/v1.0/batches/{id}/documents
-```
-
-Learn how to find your [custom domain name](../get-started-with-document-translation.md#find-your-custom-domain-name).
-
-> [!IMPORTANT]
->
-> * **All API requests to the Document Translation service require a custom domain endpoint**.
-> * You can't use the endpoint found on your Azure portal resource _Keys and Endpoint_ page, nor the global Translator endpoint (`api.cognitive.microsofttranslator.com`), to make HTTP requests to Document Translation.
-
-## Request parameters
-
-Request parameters passed on the query string are:
-
-|Query parameter|In|Required|Type|Description|
-| | | | | |
-|id|path|True|string|The operation ID.|
-|$maxpagesize|query|False|integer int32|$maxpagesize is the maximum items returned in a page. If more items are requested via $top (or $top is not specified and there are more items to be returned), @nextLink will contain the link to the next page. Clients MAY request server-driven paging with a specific page size by specifying a $maxpagesize preference. The server SHOULD honor this preference if the specified page size is smaller than the server's default page size.|
-|$orderBy|query|False|array|The sorting query for the collection (ex: 'CreatedDateTimeUtc asc', 'CreatedDateTimeUtc desc').|
-|$skip|query|False|integer int32|$skip indicates the number of records to skip from the list of records held by the server based on the sorting method specified. By default, we sort by descending start time. Clients MAY use $top and $skip query parameters to specify a number of results to return and an offset into the collection. When both $top and $skip are given by a client, the server SHOULD first apply $skip and then $top on the collection. Note: If the server can't honor $top and/or $skip, the server MUST return an error to the client informing about it instead of just ignoring the query options.|
-|$top|query|False|integer int32|$top indicates the total number of records the user wants to be returned across all pages. Clients MAY use $top and $skip query parameters to specify a number of results to return and an offset into the collection. When both $top and $skip are given by a client, the server SHOULD first apply $skip and then $top on the collection. Note: If the server can't honor $top and/or $skip, the server MUST return an error to the client informing about it instead of just ignoring the query options.|
-|createdDateTimeUtcEnd|query|False|string date-time|The end datetime to get items before.|
-|createdDateTimeUtcStart|query|False|string date-time|The start datetime to get items after.|
-|ids|query|False|array|IDs to use in filtering.|
-|statuses|query|False|array|Statuses to use in filtering.|
-
-## Request headers
-
-Request headers are:
-
-|Headers|Description|
-| | |
-|Ocp-Apim-Subscription-Key|Required request header|
-
-## Response status codes
-
-The following are the possible HTTP status codes that a request returns.
-
-|Status Code|Description|
-| | |
-|200|OK. Successful request; returns the status of the documents. Headers: Retry-After: integer, ETag: string|
-|400|Invalid request. Check input parameters.|
-|401|Unauthorized. Check your credentials.|
-|404|Resource is not found.|
-|500|Internal Server Error.|
-|Other Status Codes|<ul><li>Too many requests</li><li>Server temporarily unavailable</li></ul>|
-
-## Get documents status response
-
-### Successful get documents status response
-
-The following information is returned in a successful response.
-
-|Name|Type|Description|
-| | | |
-|@nextLink|string|URL for the next page; null if no more pages are available.|
-|value|DocumentStatus []|The detail status of individual documents listed below.|
-|value.path|string|Location of the document or folder.|
-|value.sourcePath|string|Location of the source document.|
-|value.createdDateTimeUtc|string|Operation created date time.|
-|value.lastActionDateTimeUtc|string|Date time in which the operation's status has been updated.|
-|value.status|status|List of possible statuses for job or document.<ul><li>Canceled</li><li>Cancelling</li><li>Failed</li><li>NotStarted</li><li>Running</li><li>Succeeded</li><li>ValidationFailed</li></ul>|
-|value.to|string|To language.|
-|value.progress|number|Progress of the translation if available.|
-|value.id|string|Document ID.|
-|value.characterCharged|integer|Characters charged by the API.|
-
-### Error response
-
-|Name|Type|Description|
-| | | |
-|code|string|Enums containing high-level error codes. Possible values:<br/><ul><li>InternalServerError</li><li>InvalidArgument</li><li>InvalidRequest</li><li>RequestRateTooHigh</li><li>ResourceNotFound</li><li>ServiceUnavailable</li><li>Unauthorized</li></ul>|
-|message|string|Gets high-level error message.|
-|target|string|Gets the source of the error. For example, it would be "documents" or "document ID" in the case of an invalid document.|
-|innerError|InnerTranslationError|New Inner Error format that conforms to Cognitive Services API Guidelines. This contains required properties ErrorCode, message, and optional properties target, details (key value pair), inner error (this can be nested).|
-|innerError.code|string|Gets code error string.|
-|innerError.message|string|Gets high-level error message.|
-|innerError.target|string|Gets the source of the error. For example, it would be "documents" or "document id" if there was an invalid document.|
-
-## Examples
-
-### Example successful response
-
-The following is an example of a successful response.
-
-```JSON
-{
- "value": [
- {
- "path": "https://myblob.blob.core.windows.net/destinationContainer/fr/mydoc.txt",
- "sourcePath": "https://myblob.blob.core.windows.net/sourceContainer/fr/mydoc.txt",
- "createdDateTimeUtc": "2020-03-26T00:00:00Z",
- "lastActionDateTimeUtc": "2020-03-26T01:00:00Z",
- "status": "Running",
- "to": "fr",
- "progress": 0.1,
- "id": "273622bd-835c-4946-9798-fd8f19f6bbf2",
- "characterCharged": 0
- }
- ],
- "@nextLink": "https://westus.cognitiveservices.azure.com/translator/text/batch/v1.0/operation/0FA2822F-4C2A-4317-9C20-658C801E0E55/documents?$top=5&$skip=15"
-}
-```
-
-### Example error response
-
-The following is an example of an error response. The schema for other error codes is the same.
-
-Status code: 500
-
-```JSON
-{
- "error": {
- "code": "InternalServerError",
- "message": "Internal Server Error",
- "target": "Operation",
- "innerError": {
- "code": "InternalServerError",
- "message": "Unexpected internal server error has occurred"
- }
- }
-}
-```
-
-## Next steps
-
-Follow our quickstart to learn more about using Document Translation and the client library.
-
-> [!div class="nextstepaction"]
-> [Get started with Document Translation](../get-started-with-document-translation.md)
cognitive-services Get Supported Document Formats https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Translator/document-translation/reference/get-supported-document-formats.md
- Title: Get supported document formats method-
-description: The get supported document formats method returns a list of supported document formats.
------ Previously updated : 04/21/2021--
-# Get supported document formats
-
-The Get supported document formats method returns a list of document formats supported by the Document Translation service. The list includes the common file extension, and the content-type if using the upload API.
-
-## Request URL
-
-Send a `GET` request to:
-```HTTP
-GET https://<NAME-OF-YOUR-RESOURCE>.cognitiveservices.azure.com/translator/text/batch/v1.0/documents/formats
-```
-
-Learn how to find your [custom domain name](../get-started-with-document-translation.md#find-your-custom-domain-name).
-
-> [!IMPORTANT]
->
-> * **All API requests to the Document Translation service require a custom domain endpoint**.
-> * You can't use the endpoint found on your Azure portal resource _Keys and Endpoint_ page, nor the global translator endpoint (`api.cognitive.microsofttranslator.com`), to make HTTP requests to Document Translation.
-
-## Request headers
-
-Request headers are:
-
-|Headers|Description|
-|--|--|
-|Ocp-Apim-Subscription-Key|Required request header|
-
-## Response status codes
-
-The following are the possible HTTP status codes that a request returns.
-
-|Status Code|Description|
-|--|--|
-|200|OK. Returns the list of supported document file formats.|
-|500|Internal Server Error.|
-|Other Status Codes|<ul><li>Too many requests</li><li>Server temporarily unavailable</li></ul>|
-
-## File format response
-
-### Successful fileFormatListResult response
-
-The following information is returned in a successful response.
-
-|Name|Type|Description|
-| | | |
-|value|FileFormat []|FileFormat[] contains the details listed below.|
-|value.contentTypes|string[]|Supported Content-Types for this format.|
-|value.defaultVersion|string|Default version if none is specified.|
-|value.fileExtensions|string[]|Supported file extensions for this format.|
-|value.format|string|Name of the format.|
-|value.versions|string [] | Supported versions.|
-
-### Error response
-
-|Name|Type|Description|
-| | | |
- |code|string|Enums containing high-level error codes. Possible values:<ul><li>InternalServerError</li><li>InvalidArgument</li><li>InvalidRequest</li><li>RequestRateTooHigh</li><li>ResourceNotFound</li><li>ServiceUnavailable</li><li>Unauthorized</li></ul>|
-|message|string|Gets high-level error message.|
-|innerError|InnerTranslationError|New Inner Error format that conforms to Cognitive Services API Guidelines. It contains the required properties ErrorCode and message, and optional properties target, details (key value pair), and inner error (which can be nested).|
-|innerError.code|string|Gets code error string.|
-|innerError.message|string|Gets high-level error message.|
-|innerError.target|string|Gets the source of the error. For example, it would be "documents" or "document ID" if there was an invalid document.|
-
-## Examples
-
-### Example successful response
-The following is an example of a successful response.
-
-Status code: 200
-
-```JSON
-{
- "value": [
- {
- "format": "PlainText",
- "fileExtensions": [
- ".txt"
- ],
- "contentTypes": [
- "text/plain"
- ],
- "versions": []
- },
- {
- "format": "OpenXmlWord",
- "fileExtensions": [
- ".docx"
- ],
- "contentTypes": [
- "application/vnd.openxmlformats-officedocument.wordprocessingml.document"
- ],
- "versions": []
- },
- {
- "format": "OpenXmlPresentation",
- "fileExtensions": [
- ".pptx"
- ],
- "contentTypes": [
- "application/vnd.openxmlformats-officedocument.presentationml.presentation"
- ],
- "versions": []
- },
- {
- "format": "OpenXmlSpreadsheet",
- "fileExtensions": [
- ".xlsx"
- ],
- "contentTypes": [
- "application/vnd.openxmlformats-officedocument.spreadsheetml.sheet"
- ],
- "versions": []
- },
- {
- "format": "OutlookMailMessage",
- "fileExtensions": [
- ".msg"
- ],
- "contentTypes": [
- "application/vnd.ms-outlook"
- ],
- "versions": []
- },
- {
- "format": "HtmlFile",
- "fileExtensions": [
- ".html",
- ".htm"
- ],
- "contentTypes": [
- "text/html"
- ],
- "versions": []
- },
- {
- "format": "PortableDocumentFormat",
- "fileExtensions": [
- ".pdf"
- ],
- "contentTypes": [
- "application/pdf"
- ],
- "versions": []
- },
- {
- "format": "XLIFF",
- "fileExtensions": [
- ".xlf"
- ],
- "contentTypes": [
- "application/xliff+xml"
- ],
- "versions": [
- "1.0",
- "1.1",
- "1.2"
- ]
- },
- {
- "format": "TSV",
- "fileExtensions": [
- ".tsv",
- ".tab"
- ],
- "contentTypes": [
- "text/tab-separated-values"
- ],
- "versions": []
- },
- {
- "format": "CSV",
- "fileExtensions": [
- ".csv"
- ],
- "contentTypes": [
- "text/csv"
- ],
- "versions": []
- },
- {
- "format": "RichTextFormat",
- "fileExtensions": [
- ".rtf"
- ],
- "contentTypes": [
- "application/rtf"
- ],
- "versions": []
- },
- {
- "format": "WordDocument",
- "fileExtensions": [
- ".doc"
- ],
- "contentTypes": [
- "application/msword"
- ],
- "versions": []
- },
- {
- "format": "PowerpointPresentation",
- "fileExtensions": [
- ".ppt"
- ],
- "contentTypes": [
- "application/vnd.ms-powerpoint"
- ],
- "versions": []
- },
- {
- "format": "ExcelSpreadsheet",
- "fileExtensions": [
- ".xls"
- ],
- "contentTypes": [
- "application/vnd.ms-excel"
- ],
- "versions": []
- },
- {
- "format": "OpenDocumentText",
- "fileExtensions": [
- ".odt"
- ],
- "contentTypes": [
- "application/vnd.oasis.opendocument.text"
- ],
- "versions": []
- },
- {
- "format": "OpenDocumentPresentation",
- "fileExtensions": [
- ".odp"
- ],
- "contentTypes": [
- "application/vnd.oasis.opendocument.presentation"
- ],
- "versions": []
- },
- {
- "format": "OpenDocumentSpreadsheet",
- "fileExtensions": [
- ".ods"
- ],
- "contentTypes": [
- "application/vnd.oasis.opendocument.spreadsheet"
- ],
- "versions": []
- },
- {
- "format": "Markdown",
- "fileExtensions": [
- ".markdown",
- ".mdown",
- ".mkdn",
- ".md",
- ".mkd",
- ".mdwn",
- ".mdtxt",
- ".mdtext",
- ".rmd"
- ],
- "contentTypes": [
- "text/markdown",
- "text/x-markdown",
- "text/plain"
- ],
- "versions": []
- },
- {
- "format": "Mhtml",
- "fileExtensions": [
- ".mhtml",
- ".mht"
- ],
- "contentTypes": [
- "message/rfc822",
- "application/x-mimearchive",
- "multipart/related"
- ],
- "versions": []
- }
- ]
-}
-
-```
-
-### Example error response
-
-The following is an example of an error response. The schema for other error codes is the same.
-
-Status code: 500
-
-```JSON
-{
- "error": {
- "code": "InternalServerError",
- "message": "Internal Server Error",
- "innerError": {
- "code": "InternalServerError",
- "message": "Unexpected internal server error has occurred"
- }
- }
-}
-```
-
-## Next steps
-
-Follow our quickstart to learn more about using Document Translation and the client library.
-
-> [!div class="nextstepaction"]
-> [Get started with Document Translation](../get-started-with-document-translation.md)
cognitive-services Get Supported Glossary Formats https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Translator/document-translation/reference/get-supported-glossary-formats.md
- Title: Get supported glossary formats method-
-description: The get supported glossary formats method returns the list of supported glossary formats.
------ Previously updated : 04/21/2021--
-# Get supported glossary formats
-
-The Get supported glossary formats method returns a list of glossary formats supported by the Document Translation service. The list includes the common file extension used.
-
-## Request URL
-
-Send a `GET` request to:
-```HTTP
-GET https://<NAME-OF-YOUR-RESOURCE>.cognitiveservices.azure.com/translator/text/batch/v1.0/glossaries/formats
-```
-
-Learn how to find your [custom domain name](../get-started-with-document-translation.md#find-your-custom-domain-name).
-
-> [!IMPORTANT]
->
-> * **All API requests to the Document Translation service require a custom domain endpoint**.
-> * You can't use the endpoint found on your Azure portal resource _Keys and Endpoint_ page, nor the global translator endpoint (`api.cognitive.microsofttranslator.com`), to make HTTP requests to Document Translation.
-
-## Request headers
-
-Request headers are:
-
-|Headers|Description|
-| | |
-|Ocp-Apim-Subscription-Key|Required request header|
-
-## Response status codes
-
-The following are the possible HTTP status codes that a request returns.
-
-|Status Code|Description|
-| | |
-|200|OK. Returns the list of supported glossary file formats.|
-|500|Internal Server Error.|
-|Other Status Codes|<ul><li>Too many requests</li><li>Server temporarily unavailable</li></ul>|
--
-## Get supported glossary formats response
-
-### Successful get supported glossary formats response
-
-The following information is returned in a successful response.
-
-|Name|Type|Description|
-| | | |
-|value|FileFormat []|FileFormat[] contains the details listed below.|
-|value.contentTypes|string []|Supported Content-Types for this format.|
-|value.defaultVersion|string|Default version if none is specified.|
-|value.fileExtensions|string []| Supported file extensions for this format.|
-|value.format|string|Name of the format.|
-|value.versions|string []| Supported versions.|
-
-### Error response
-
-|Name|Type|Description|
-| | | |
-|code|string|Enums containing high-level error codes. Possible values:<br/><ul><li>InternalServerError</li><li>InvalidArgument</li><li>InvalidRequest</li><li>RequestRateTooHigh</li><li>ResourceNotFound</li><li>ServiceUnavailable</li><li>Unauthorized</li></ul>|
-|message|string|Gets high-level error message.|
-|innerError|InnerTranslationError|New Inner Error format that conforms to Cognitive Services API Guidelines. It contains the required properties ErrorCode and message, and optional properties target, details (key value pair), and inner error (which can be nested).|
-|innerError.code|string|Gets code error string.|
-|innerError.message|string|Gets high-level error message.|
-|innerError.target|string|Gets the source of the error. For example, it would be "documents" or "document ID" if there was an invalid document.|
-
-## Examples
-
-### Example successful response
-
-The following is an example of a successful response.
-
-```JSON
-{
- "value": [
- {
- "format": "XLIFF",
- "fileExtensions": [
- ".xlf"
- ],
- "contentTypes": [
- "application/xliff+xml"
- ],
- "defaultVersion": "1.2",
- "versions": [
- "1.0",
- "1.1",
- "1.2"
- ]
- },
- {
- "format": "TSV",
- "fileExtensions": [
- ".tsv",
- ".tab"
- ],
- "contentTypes": [
- "text/tab-separated-values"
- ]
- },
- {
- "format": "CSV",
- "fileExtensions": [
- ".csv"
- ],
- "contentTypes": [
- "text/csv"
- ]
- }
- ]
-}
-
-```
-
-### Example error response
-The following is an example of an error response. The schema for other error codes is the same.
-
-Status code: 500
-
-```JSON
-{
- "error": {
- "code": "InternalServerError",
- "message": "Internal Server Error",
- "innerError": {
- "code": "InternalServerError",
- "message": "Unexpected internal server error has occurred"
- }
- }
-}
-```
-
-## Next steps
-
-Follow our quickstart to learn more about using Document Translation and the client library.
-
-> [!div class="nextstepaction"]
-> [Get started with Document Translation](../get-started-with-document-translation.md)
cognitive-services Get Supported Storage Sources https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Translator/document-translation/reference/get-supported-storage-sources.md
- Title: Get supported storage sources method-
-description: The get supported storage sources method returns a list of supported storage sources.
------ Previously updated : 04/21/2021--
-# Get supported storage sources
-
-The Get supported storage sources method returns a list of storage sources/options supported by the Document Translation service.
-
-## Request URL
-
-Send a `GET` request to:
-```HTTP
-GET https://<NAME-OF-YOUR-RESOURCE>.cognitiveservices.azure.com/translator/text/batch/v1.0/storagesources
-```
-
-Learn how to find your [custom domain name](../get-started-with-document-translation.md#find-your-custom-domain-name).
-
-> [!IMPORTANT]
->
-> * **All API requests to the Document Translation service require a custom domain endpoint**.
-> * You can't use the endpoint found on your Azure portal resource _Keys and Endpoint_ page, nor the global translator endpoint (`api.cognitive.microsofttranslator.com`), to make HTTP requests to Document Translation.
-
-## Request headers
-
-Request headers are:
-
-|Headers|Description|
-| | |
-|Ocp-Apim-Subscription-Key|Required request header|
-
-## Response status codes
-
-The following are the possible HTTP status codes that a request returns.
-
-|Status Code|Description|
-| | |
-|200|OK. Successful request and returns the list of storage sources.|
-|500|Internal Server Error.|
-|Other Status Codes|<ul><li>Too many requests</li><li>Server temporarily unavailable</li></ul>|
-
-## Get supported storage sources response
-
-### Successful get supported storage sources response
-The following information is returned in a successful response.
-
-|Name|Type|Description|
-| | | |
-|value|string []|List of supported storage sources.|
--
-### Error response
-
-|Name|Type|Description|
-| | | |
-|code|string|Enums containing high-level error codes. Possible values:<br/><ul><li>InternalServerError</li><li>InvalidArgument</li><li>InvalidRequest</li><li>RequestRateTooHigh</li><li>ResourceNotFound</li><li>ServiceUnavailable</li><li>Unauthorized</li></ul>|
-|message|string|Gets high-level error message.|
-|innerError|InnerTranslationError|New Inner Error format that conforms to Cognitive Services API Guidelines. It contains the required properties ErrorCode and message, and optional properties target, details (key value pair), and inner error (which can be nested).|
-|innerError.code|string|Gets code error string.|
-|innerError.message|string|Gets high-level error message.|
-|innerError.target|string|Gets the source of the error. For example, it would be "documents" or "document ID" if there was an invalid document.|
-
-## Examples
-
-### Example successful response
-
-The following is an example of a successful response.
-
-```JSON
-{
- "value": [
- "AzureBlob"
- ]
-}
-```
-
-### Example error response
-The following is an example of an error response. The schema for other error codes is the same.
-
-Status code: 500
-
-```JSON
-{
- "error": {
- "code": "InternalServerError",
- "message": "Internal Server Error",
- "innerError": {
- "code": "InternalServerError",
- "message": "Unexpected internal server error has occurred"
- }
- }
-}
-```
-
-## Next steps
-
-Follow our quickstart to learn more about using Document Translation and the client library.
-
-> [!div class="nextstepaction"]
-> [Get started with Document Translation](../get-started-with-document-translation.md)
cognitive-services Get Translation Status https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Translator/document-translation/reference/get-translation-status.md
- Title: Get translation status-
-description: The get translation status method returns the status for a document translation request.
------ Previously updated : 04/21/2021--
-# Get translation status
-
-The Get translation status method returns the status for a document translation request. The status includes the overall request status and the status for documents that are being translated as part of that request.
-
-## Request URL
-
-Send a `GET` request to:
-```HTTP
-GET https://<NAME-OF-YOUR-RESOURCE>.cognitiveservices.azure.com/translator/text/batch/v1.0/batches/{id}
-```
-
-Learn how to find your [custom domain name](../get-started-with-document-translation.md#find-your-custom-domain-name).
-
-> [!IMPORTANT]
->
-> * **All API requests to the Document Translation service require a custom domain endpoint**.
-> * You can't use the endpoint found on your Azure portal resource _Keys and Endpoint_ page, nor the global translator endpoint (`api.cognitive.microsofttranslator.com`), to make HTTP requests to Document Translation.
--
-## Request parameters
-
-Request parameters passed in the request URL are:
-
-|Parameter|Required|Description|
-| | | |
-|id|True|The operation ID.|
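-
-For example, a status request for the job ID returned in the Operation-Location header might look like the following sketch (the resource name is a placeholder; the ID matches the example response below):
-
-```HTTP
-GET https://<NAME-OF-YOUR-RESOURCE>.cognitiveservices.azure.com/translator/text/batch/v1.0/batches/727bf148-f327-47a0-9481-abae6362f11e HTTP/1.1
-Ocp-Apim-Subscription-Key: <YOUR-SUBSCRIPTION-KEY>
-```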
-
-## Request headers
-
-Request headers are:
-
-|Headers|Description|
-| | |
-|Ocp-Apim-Subscription-Key|Required request header|
-
-## Response status codes
-
-The following are the possible HTTP status codes that a request returns.
-
-|Status Code|Description|
-| | |
-|200|OK. Successful request; returns the status of the batch translation operation. Headers: Retry-After: integer, ETag: string|
-|401|Unauthorized. Check your credentials.|
-|404|Resource is not found.|
-|500|Internal Server Error.|
-|Other Status Codes|<ul><li>Too many requests</li><li>Server temporarily unavailable</li></ul>|
-
-## Get translation status response
-
-### Successful get translation status response
-
-The following information is returned in a successful response.
-
-|Name|Type|Description|
-| | | |
-|id|string|ID of the operation.|
-|createdDateTimeUtc|string|Operation created date time.|
-|lastActionDateTimeUtc|string|Date time in which the operation's status has been updated.|
-|status|String|List of possible statuses for job or document: <ul><li>Canceled</li><li>Cancelling</li><li>Failed</li><li>NotStarted</li><li>Running</li><li>Succeeded</li><li>ValidationFailed</li></ul>|
-|summary|StatusSummary|Summary containing the details listed below.|
-|summary.total|integer|Total count.|
-|summary.failed|integer|Failed count.|
-|summary.success|integer|Count of successful documents.|
-|summary.inProgress|integer|Count of documents in progress.|
-|summary.notYetStarted|integer|Count of documents not yet started.|
-|summary.cancelled|integer|Count of documents canceled.|
-|summary.totalCharacterCharged|integer|Total characters charged by the API.|
-
-### Error response
-
-|Name|Type|Description|
-| | | |
-|code|string|Enums containing high-level error codes. Possible values:<br/><ul><li>InternalServerError</li><li>InvalidArgument</li><li>InvalidRequest</li><li>RequestRateTooHigh</li><li>ResourceNotFound</li><li>ServiceUnavailable</li><li>Unauthorized</li></ul>|
-|message|string|Gets high-level error message.|
-|target|string|Gets the source of the error. For example, it would be "documents" or "document id" for an invalid document.|
-|innerError|InnerTranslationError|New Inner Error format that conforms to Cognitive Services API Guidelines. It contains the required properties ErrorCode and message, and optional properties target, details (key value pair), and inner error (which can be nested).|
-|innerError.code|string|Gets code error string.|
-|innerError.message|string|Gets high-level error message.|
-|innerError.target|string|Gets the source of the error. For example, it would be "documents" or "document ID" if there was an invalid document.|
-
-## Examples
-
-### Example successful response
-
-The following JSON object is an example of a successful response.
-
-```JSON
-{
- "id": "727bf148-f327-47a0-9481-abae6362f11e",
- "createdDateTimeUtc": "2020-03-26T00:00:00Z",
- "lastActionDateTimeUtc": "2020-03-26T01:00:00Z",
- "status": "Succeeded",
- "summary": {
- "total": 10,
- "failed": 1,
- "success": 9,
- "inProgress": 0,
- "notYetStarted": 0,
- "cancelled": 0,
- "totalCharacterCharged": 0
- }
-}
-```
-
-### Example error response
-
-The following JSON object is an example of an error response. The schema for other error codes is the same.
-
-Status code: 401
-
-```JSON
-{
- "error": {
- "code": "Unauthorized",
- "message": "User is not authorized",
- "target": "Document",
- "innerError": {
- "code": "Unauthorized",
- "message": "Operation is not authorized"
- }
- }
-}
-```
-
-## Next steps
-
-Follow our quickstart to learn more about using Document Translation and the client library.
-
-> [!div class="nextstepaction"]
-> [Get started with Document Translation](../get-started-with-document-translation.md)
cognitive-services Get Translations Status https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Translator/document-translation/reference/get-translations-status.md
- Title: Get translations status-
-description: The get translations status method returns a list of batch requests submitted and the status for each request.
------ Previously updated : 06/22/2021--
-# Get translations status
-
-The Get translations status method returns a list of batch requests submitted and the status for each request. This list only contains batch requests submitted by the user (based on the resource).
-
-If the number of requests exceeds our paging limit, server-side paging is used. Paginated responses indicate a partial result and include a continuation token in the response. The absence of a continuation token means that no other pages are available.
-
-$top, $skip and $maxpagesize query parameters can be used to specify a number of results to return and an offset for the collection.
-
-$top indicates the total number of records the user wants to be returned across all pages. $skip indicates the number of records to skip from the list of batches based on the sorting method specified. By default, we sort by descending start time. $maxpagesize is the maximum items returned in a page. If more items are requested via $top (or $top is not specified and there are more items to be returned), @nextLink will contain the link to the next page.
-
-The $orderBy query parameter can be used to sort the returned list (for example, "$orderBy=createdDateTimeUtc asc" or "$orderBy=createdDateTimeUtc desc"). The default sorting is descending by createdDateTimeUtc. Some query parameters can be used to filter the returned list; for example, "statuses=Succeeded,Cancelled" returns only succeeded and canceled operations. The createdDateTimeUtcStart and createdDateTimeUtcEnd parameters can be used together or separately to specify a datetime range to filter the returned list. The supported filtering query parameters are statuses, ids, createdDateTimeUtcStart, and createdDateTimeUtcEnd.
-
-The server honors the values specified by the client. However, clients must be prepared to handle responses that contain a different page size or contain a continuation token.
-
-When both $top and $skip are included, the server should first apply $skip and then $top on the collection.
-
-> [!NOTE]
-> If the server can't honor $top and/or $skip, the server must return an error to the client informing about it instead of just ignoring the query options. This reduces the risk of the client making assumptions about the data returned.
-
-## Request URL
-
-Send a `GET` request to:
-```HTTP
-GET https://<NAME-OF-YOUR-RESOURCE>.cognitiveservices.azure.com/translator/text/batch/v1.0/batches
-```
-
-Learn how to find your [custom domain name](../get-started-with-document-translation.md#find-your-custom-domain-name).
-
-> [!IMPORTANT]
->
-> * **All API requests to the Document Translation service require a custom domain endpoint**.
-> * You can't use the endpoint found on your Azure portal resource _Keys and Endpoint_ page, nor the global translator endpoint (`api.cognitive.microsofttranslator.com`), to make HTTP requests to Document Translation.
-
-## Request parameters
-
-Request parameters passed on the query string are:
-
-|Query parameter|In|Required|Type|Description|
-| | | |||
-|$maxpagesize|query|False|integer int32|$maxpagesize is the maximum items returned in a page. If more items are requested via $top (or $top is not specified and there are more items to be returned), @nextLink will contain the link to the next page. Clients MAY request server-driven paging with a specific page size by specifying a $maxpagesize preference. The server SHOULD honor this preference if the specified page size is smaller than the server's default page size.|
-|$orderBy|query|False|array|The sorting query for the collection (ex: 'CreatedDateTimeUtc asc', 'CreatedDateTimeUtc desc').|
-|$skip|query|False|integer int32|$skip indicates the number of records to skip from the list of records held by the server based on the sorting method specified. By default, we sort by descending start time. Clients MAY use $top and $skip query parameters to specify a number of results to return and an offset into the collection. When both $top and $skip are given by a client, the server SHOULD first apply $skip and then $top on the collection. Note: If the server can't honor $top and/or $skip, the server MUST return an error to the client informing about it instead of just ignoring the query options.|
-|$top|query|False|integer int32|$top indicates the total number of records the user wants to be returned across all pages. Clients MAY use $top and $skip query parameters to specify a number of results to return and an offset into the collection. When both $top and $skip are given by a client, the server SHOULD first apply $skip and then $top on the collection. Note: If the server can't honor $top and/or $skip, the server MUST return an error to the client informing about it instead of just ignoring the query options.|
-|createdDateTimeUtcEnd|query|False|string date-time|The end datetime to get items before.|
-|createdDateTimeUtcStart|query|False|string date-time|The start datetime to get items after.|
-|ids|query|False|array|IDs to use in filtering.|
-|statuses|query|False|array|Statuses to use in filtering.|
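-
-For illustration, several of these query parameters can be combined in a single request. The following sketch (the resource name and parameter values are placeholders) asks for the 10 most recent jobs created after a given date:
-
-```HTTP
-GET https://<NAME-OF-YOUR-RESOURCE>.cognitiveservices.azure.com/translator/text/batch/v1.0/batches?$top=10&$skip=0&$orderBy=createdDateTimeUtc%20desc&createdDateTimeUtcStart=2021-06-01T00:00:00Z HTTP/1.1
-Ocp-Apim-Subscription-Key: <YOUR-SUBSCRIPTION-KEY>
-```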
-
-## Request headers
-
-Request headers are:
-
-|Headers|Description|
-| | |
-|Ocp-Apim-Subscription-Key|Required request header|
-
-## Response status codes
-
-The following are the possible HTTP status codes that a request returns.
-
-|Status Code|Description|
-| | |
-|200|OK. Successful request; returns the status of all the operations. Headers: Retry-After: integer, ETag: string|
-|400|Bad Request. Invalid request. Check input parameters.|
-|401|Unauthorized. Check your credentials.|
-|500|Internal Server Error.|
-|Other Status Codes|<ul><li>Too many requests</li><li>Server temporarily unavailable</li></ul>|
-
-## Get translations status response
-
-### Successful get translations status response
-
-The following information is returned in a successful response.
-
-|Name|Type|Description|
-| | | |
-|@nextLink|string|URL for the next page; null if no more pages are available.|
-|value|TranslationStatus[]|TranslationStatus[] array, listed below.|
-|value.id|string|ID of the operation.|
-|value.createdDateTimeUtc|string|Operation created date time.|
-|value.lastActionDateTimeUtc|string|Date time in which the operation's status has been updated.|
-|value.status|String|List of possible statuses for job or document: <ul><li>Canceled</li><li>Cancelling</li><li>Failed</li><li>NotStarted</li><li>Running</li><li>Succeeded</li><li>ValidationFailed</li></ul>|
-|value.summary|StatusSummary[]|Summary containing the details listed below.|
-|value.summary.total|integer|Count of total documents.|
-|value.summary.failed|integer|Count of documents failed.|
-|value.summary.success|integer|Count of documents successfully translated.|
-|value.summary.inProgress|integer|Count of documents in progress.|
-|value.summary.notYetStarted|integer|Count of documents not yet started processing.|
-|value.summary.cancelled|integer|Count of documents canceled.|
-|value.summary.totalCharacterCharged|integer|Total count of characters charged.|
-
-### Error response
-
-|Name|Type|Description|
-| | | |
-|code|string|Enums containing high-level error codes. Possible values:<br/><ul><li>InternalServerError</li><li>InvalidArgument</li><li>InvalidRequest</li><li>RequestRateTooHigh</li><li>ResourceNotFound</li><li>ServiceUnavailable</li><li>Unauthorized</li></ul>|
-|message|string|Gets high-level error message.|
-|target|string|Gets the source of the error. For example, it would be "documents" or "document ID" if there was an invalid document.|
-|innerError|InnerTranslationError|New Inner Error format that conforms to Cognitive Services API Guidelines. This contains required properties ErrorCode, message, and optional properties target, details (key value pair), inner error (this can be nested).|
-|innerError.code|string|Gets code error string.|
-|innerError.message|string|Gets high-level error message.|
-|innerError.target|string|Gets the source of the error. For example, it would be "documents" or "document ID" if there was an invalid document.|
-
-## Examples
-
-### Example successful response
-
-The following is an example of a successful response.
-
-```JSON
-{
- "value": [
- {
- "id": "36724748-f7a0-4db7-b7fd-f041ddc75033",
- "createdDateTimeUtc": "2021-06-18T03:35:30.153374Z",
- "lastActionDateTimeUtc": "2021-06-18T03:36:44.6155316Z",
- "status": "Succeeded",
- "summary": {
- "total": 3,
- "failed": 2,
- "success": 1,
- "inProgress": 0,
- "notYetStarted": 0,
- "cancelled": 0,
- "totalCharacterCharged": 0
- }
- },
- {
- "id": "1c7399a7-6913-4f20-bb43-e2fe2ba1a67d",
- "createdDateTimeUtc": "2021-05-24T17:57:43.8356624Z",
- "lastActionDateTimeUtc": "2021-05-24T17:57:47.128391Z",
- "status": "Failed",
- "summary": {
- "total": 1,
- "failed": 1,
- "success": 0,
- "inProgress": 0,
- "notYetStarted": 0,
- "cancelled": 0,
- "totalCharacterCharged": 0
- }
- },
- {
- "id": "daa2a646-4237-4f5f-9a48-d515c2d9af3c",
- "createdDateTimeUtc": "2021-04-14T19:49:26.988272Z",
- "lastActionDateTimeUtc": "2021-04-14T19:49:43.9818634Z",
- "status": "Succeeded",
- "summary": {
- "total": 2,
- "failed": 0,
- "success": 2,
- "inProgress": 0,
- "notYetStarted": 0,
- "cancelled": 0,
- "totalCharacterCharged": 21899
- }
- }
- ],
- ""@nextLink": "https://westus.cognitiveservices.azure.com/translator/text/batch/v1.0/operations/727BF148-F327-47A0-9481-ABAE6362F11E/documents?$top=5&$skip=15"
-}
-
-```
-
-### Example error response
-
-The following is an example of an error response. The schema for other error codes is the same.
-
-Status code: 500
-
-```JSON
-{
- "error": {
- "code": "InternalServerError",
- "message": "Internal Server Error",
- "target": "Operation",
- "innerError": {
- "code": "InternalServerError",
- "message": "Unexpected internal server error has occurred"
- }
- }
-}
-```
-
-## Next steps
-
-Follow our quickstart to learn more about using Document Translation and the client library.
-
-> [!div class="nextstepaction"]
-> [Get started with Document Translation](../get-started-with-document-translation.md)
cognitive-services Rest Api Guide https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Translator/document-translation/reference/rest-api-guide.md
- Title: "Document Translation REST API reference guide"-
-description: View a list of links to the Document Translation REST APIs.
------ Previously updated : 06/21/2021---
-# Document Translation REST API reference guide
-
-Document Translation is a cloud-based feature of the Azure Translator service and is part of the Azure Cognitive Services family of REST APIs. The Document Translation API translates documents across all [supported languages and dialects](../../language-support.md) while preserving document structure and data format. The available methods are listed in the table below:
-
-| Request| Description|
-||--|
-| [**Get supported document formats**](get-supported-document-formats.md)| This method returns a list of supported document formats.|
-|[**Get supported glossary formats**](get-supported-glossary-formats.md)|This method returns a list of supported glossary formats.|
-|[**Get supported storage sources**](get-supported-storage-sources.md)| This method returns a list of supported storage sources/options.|
-|[**Start translation (POST)**](start-translation.md)|This method starts a document translation job. |
-|[**Get documents status**](get-documents-status.md)|This method returns the status of all documents in a translation job.|
-|[**Get document status**](get-document-status.md)| This method returns the status for a specific document in a job. |
-|[**Get translations status**](get-translations-status.md)| This method returns a list of all translation requests submitted by a user and the status for each request.|
-|[**Get translation status**](get-translation-status.md) | This method returns a summary of the status for a specific document translation request. It includes the overall request status and the status for documents that are being translated as part of that request.|
-|[**Cancel translation (DELETE)**](cancel-translation.md)| This method cancels a document translation that is currently processing or queued. |
-
-> [!div class="nextstepaction"]
-> [Explore our client libraries and SDKs for C# and Python.](../client-sdks.md)
cognitive-services Start Translation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Translator/document-translation/reference/start-translation.md
- Title: Start translation-
-description: Start a document translation request with the Document Translation service.
------ Previously updated : 02/01/2022--
-# Start translation
-
-Use this API to start a translation request with the Document Translation service. Each request can contain multiple documents and must contain a source and destination container for each document.
-
-The prefix and suffix filter (if supplied) are used to filter folders. The prefix is applied to the subpath after the container name.
-
-Glossaries / Translation memory can be included in the request and are applied by the service when the document is translated.
-
-If the glossary is invalid or unreachable during translation, an error is indicated in the document status. If a file with the same name already exists in the destination, the job will fail. The targetUrl for each target language must be unique.
-
-## Request URL
-
-Send a `POST` request to:
-```HTTP
-POST https://<NAME-OF-YOUR-RESOURCE>.cognitiveservices.azure.com/translator/text/batch/v1.0/batches
-```
-
-Learn how to find your [custom domain name](../get-started-with-document-translation.md#find-your-custom-domain-name).
-
-> [!IMPORTANT]
->
-> * **All API requests to the Document Translation service require a custom domain endpoint**.
-> * You can't use the endpoint found on your Azure portal resource _Keys and Endpoint_ page, nor the global translator endpoint (`api.cognitive.microsofttranslator.com`), to make HTTP requests to Document Translation.
-
-## Request headers
-
-Request headers are:
-
-|Headers|Description|
-| | |
-|Ocp-Apim-Subscription-Key|Required request header|
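-
-Putting the request URL, header, and body together, a submission might look like the following sketch (the subscription key is a placeholder; the body structure is described in the next section):
-
-```HTTP
-POST https://<NAME-OF-YOUR-RESOURCE>.cognitiveservices.azure.com/translator/text/batch/v1.0/batches HTTP/1.1
-Ocp-Apim-Subscription-Key: <YOUR-SUBSCRIPTION-KEY>
-Content-Type: application/json
-
-{
-  "inputs": [ ... ]
-}
-```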
-
-## Request Body: Batch Submission Request
-
-|Name|Type|Description|
-| | | |
-|inputs|BatchRequest[]|BatchRequest listed below. The input list of documents or folders containing documents. Media Types: "application/json", "text/json", "application/*+json".|
-
-### Inputs
-
-Definition for the input batch translation request.
-
-|Name|Type|Required|Description|
-| | | | |
-|source|SourceInput[]|True|inputs.source listed below. Source of the input documents.|
-|storageType|StorageInputType[]|False|inputs.storageType listed below. Storage type of the input documents source string. Required for single document translation only.|
-|targets|TargetInput[]|True|inputs.target listed below. Location of the destination for the output.|
-
-**inputs.source**
-
-Source of the input documents.
-
-|Name|Type|Required|Description|
-| | | | |
-|filter|DocumentFilter[]|False|DocumentFilter[] listed below.|
-|filter.prefix|string|False|A case-sensitive prefix string to filter documents in the source path for translation. For example, when using an Azure storage blob Uri, use the prefix to restrict sub folders for translation.|
-|filter.suffix|string|False|A case-sensitive suffix string to filter documents in the source path for translation. It's most often used for file extensions.|
-|language|string|False|Language code. If none is specified, we'll perform auto-detection on the document.|
-|sourceUrl|string|True|Location of the folder / container or single file with your documents.|
-|storageSource|StorageSource|False|StorageSource listed below.|
-|storageSource.AzureBlob|string|False||
-
-**inputs.storageType**
-
-Storage type of the input documents source string.
-
-|Name|Type|
-| | |
-|file|string|
-|folder|string|
-
-**inputs.target**
-
-Destination for the finished translated documents.
-
-|Name|Type|Required|Description|
-| | | | |
-|category|string|False|Category / custom system for translation request.|
-|glossaries|Glossary[]|False|Glossary listed below. List of Glossary.|
-|glossaries.format|string|False|Format.|
-|glossaries.glossaryUrl|string|True (if using glossaries)|Location of the glossary. We'll use the file extension to extract the formatting if the format parameter isn't supplied. If the translation language pair isn't present in the glossary, it won't be applied.|
-|glossaries.storageSource|StorageSource|False|StorageSource listed above.|
-|glossaries.version|string|False|Optional Version. If not specified, default is used.|
-|targetUrl|string|True|Location of the folder / container with your documents.|
-|language|string|True|Two letter Target Language code. See [list of language codes](../../language-support.md).|
-|storageSource|StorageSource []|False|StorageSource [] listed above.|
-
-## Example request
-
-The following are examples of batch requests.
-
-> [!NOTE]
-> In the following examples, limited access has been granted to the contents of an Azure Storage container [using a shared access signature(SAS)](../../../../storage/common/storage-sas-overview.md) token.
-
-**Translating all documents in a container**
-
-```json
-{
- "inputs": [
- {
- "source": {
- "sourceUrl": "https://my.blob.core.windows.net/source-en?sv=2019-12-12&st=2021-03-05T17%3A45%3A25Z&se=2021-03-13T17%3A45%3A00Z&sr=c&sp=rl&sig=SDRPMjE4nfrH3csmKLILkT%2Fv3e0Q6SWpssuuQl1NmfM%3D"
- },
- "targets": [
- {
- "targetUrl": "https://my.blob.core.windows.net/target-fr?sv=2019-12-12&st=2021-03-05T17%3A49%3A02Z&se=2021-03-13T17%3A49%3A00Z&sr=c&sp=wdl&sig=Sq%2BYdNbhgbq4hLT0o1UUOsTnQJFU590sWYo4BOhhQhs%3D",
- "language": "fr"
- }
- ]
- }
- ]
-}
-```
-
-**Translating all documents in a container applying glossaries**
-
-```json
-{
- "inputs": [
- {
- "source": {
- "sourceUrl": "https://my.blob.core.windows.net/source-en?sv=2019-12-12&st=2021-03-05T17%3A45%3A25Z&se=2021-03-13T17%3A45%3A00Z&sr=c&sp=rl&sig=SDRPMjE4nfrH3csmKLILkT%2Fv3e0Q6SWpssuuQl1NmfM%3D"
- },
- "targets": [
- {
- "targetUrl": "https://my.blob.core.windows.net/target-fr?sv=2019-12-12&st=2021-03-05T17%3A49%3A02Z&se=2021-03-13T17%3A49%3A00Z&sr=c&sp=wdl&sig=Sq%2BYdNbhgbq4hLT0o1UUOsTnQJFU590sWYo4BOhhQhs%3D",
- "language": "fr",
- "glossaries": [
- {
- "glossaryUrl": "https://my.blob.core.windows.net/glossaries/en-fr.xlf?sv=2019-12-12&st=2021-03-05T17%3A45%3A25Z&se=2021-03-13T17%3A45%3A00Z&sr=c&sp=rl&sig=BsciG3NWoOoRjOYesTaUmxlXzyjsX4AgVkt2AsxJ9to%3D",
- "format": "xliff",
- "version": "1.2"
- }
- ]
-
- }
- ]
- }
- ]
-}
-```
-
-**Translating specific folder in a container**
-
-Make sure you've specified the folder name (case sensitive) as the prefix in the filter.
-
-```json
-{
- "inputs": [
- {
- "source": {
- "sourceUrl": "https://my.blob.core.windows.net/source-en?sv=2019-12-12&st=2021-03-05T17%3A45%3A25Z&se=2021-03-13T17%3A45%3A00Z&sr=c&sp=rl&sig=SDRPMjE4nfrH3csmKLILkT%2Fv3e0Q6SWpssuuQl1NmfM%3D",
- "filter": {
- "prefix": "MyFolder/"
- }
- },
- "targets": [
- {
- "targetUrl": "https://my.blob.core.windows.net/target-fr?sv=2019-12-12&st=2021-03-05T17%3A49%3A02Z&se=2021-03-13T17%3A49%3A00Z&sr=c&sp=wdl&sig=Sq%2BYdNbhgbq4hLT0o1UUOsTnQJFU590sWYo4BOhhQhs%3D",
- "language": "fr"
- }
- ]
- }
- ]
-}
-```
-
-**Translating specific document in a container**
-
-* Specify "storageType": "File"
-* Create source URL & SAS token for the specific blob/document.
-* Specify the target filename as part of the target URL, though the SAS token is still for the container.
-
-The sample request below shows a single document translated into two target languages.
-
-```json
-{
- "inputs": [
- {
- "storageType": "File",
- "source": {
- "sourceUrl": "https://my.blob.core.windows.net/source-en/source-english.docx?sv=2019-12-12&st=2021-01-26T18%3A30%3A20Z&se=2021-02-05T18%3A30%3A00Z&sr=c&sp=rl&sig=d7PZKyQsIeE6xb%2B1M4Yb56I%2FEEKoNIF65D%2Fs0IFsYcE%3D"
- },
- "targets": [
- {
- "targetUrl": "https://my.blob.core.windows.net/target/try/Target-Spanish.docx?sv=2019-12-12&st=2021-01-26T18%3A31%3A11Z&se=2021-02-05T18%3A31%3A00Z&sr=c&sp=wl&sig=AgddSzXLXwHKpGHr7wALt2DGQJHCzNFF%2F3L94JHAWZM%3D",
- "language": "es"
- },
- {
- "targetUrl": "https://my.blob.core.windows.net/target/try/Target-German.docx?sv=2019-12-12&st=2021-01-26T18%3A31%3A11Z&se=2021-02-05T18%3A31%3A00Z&sr=c&sp=wl&sig=AgddSzXLXwHKpGHr7wALt2DGQJHCzNFF%2F3L94JHAWZM%3D",
- "language": "de"
- }
- ]
- }
- ]
-}
-```
-
-## Response status codes
-
-The following are the possible HTTP status codes that a request returns.
-
-|Status Code|Description|
-| | |
-|202|Accepted. Successful request; the batch request is created by the service. The Operation-Location header indicates a status URL with the operation ID. Headers: Operation-Location: string|
-|400|Bad Request. Invalid request. Check input parameters.|
-|401|Unauthorized. Check your credentials.|
-|429|Request rate is too high.|
-|500|Internal Server Error.|
-|503|Service is currently unavailable. Try again later.|
-|Other Status Codes|<ul><li>Too many requests</li><li>Server temporary unavailable</li></ul>|
-
-## Error response
-
-|Name|Type|Description|
-| | | |
-|code|string|Enums containing high-level error codes. Possible values:<br/><ul><li>InternalServerError</li><li>InvalidArgument</li><li>InvalidRequest</li><li>RequestRateTooHigh</li><li>ResourceNotFound</li><li>ServiceUnavailable</li><li>Unauthorized</li></ul>|
-|message|string|Gets high-level error message.|
-|innerError|InnerTranslationError|New Inner Error format that conforms to Cognitive Services API Guidelines. It contains the required properties ErrorCode and message, and optional properties target, details (key value pair), and inner error (which can be nested).|
-|innerError.code|string|Gets code error string.|
-|innerError.message|string|Gets high-level error message.|
-|innerError.target|string|Gets the source of the error. For example, it would be "documents" or "document ID" if the document is invalid.|
-
-## Examples
-
-### Example successful response
-
-The following information is returned in a successful response.
-
-You can find the job ID in the POST method's response Header Operation-Location URL value. The last parameter of the URL is the operation's job ID (the string following "/operation/").
-
-```HTTP
-Operation-Location: https://<NAME-OF-YOUR-RESOURCE>.cognitiveservices.azure.com/translator/text/batch/v1.0/operation/0FA2822F-4C2A-4317-9C20-658C801E0E55
-```
-
-### Example error response
-
-```JSON
-{
- "error": {
- "code": "ServiceUnavailable",
- "message": "Service is temporary unavailable",
- "innerError": {
- "code": "ServiceTemporaryUnavailable",
- "message": "Service is currently unavailable. Please try again later"
- }
- }
-}
-```
-
-## Next steps
-
-Follow our quickstart to learn more about using Document Translation and the client library.
-
-> [!div class="nextstepaction"]
-> [Get started with Document Translation](../get-started-with-document-translation.md)
cognitive-services Dynamic Dictionary https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Translator/dynamic-dictionary.md
- Title: Dynamic Dictionary - Translator-
-description: This article explains how to use the dynamic dictionary feature of the Azure Cognitive Services Translator.
------ Previously updated : 05/26/2020---
-# How to use a dynamic dictionary
-
-If you already know the translation you want to apply to a word or a phrase, you can supply it as markup within the request. The dynamic dictionary is safe only for compound nouns like proper names and product names.
-
-**Syntax:**
-
-`<mstrans:dictionary translation="translation of phrase">phrase</mstrans:dictionary>`
-
-**Requirements:**
-
-* The `From` and `To` languages must include English and another supported language.
-* You must include the `From` parameter in your API translation request instead of using the autodetect feature.
-
-**Example: en-de:**
-
-Source input: `The word <mstrans:dictionary translation=\"wordomatic\">wordomatic</mstrans:dictionary> is a dictionary entry.`
-
-Target output: `Das Wort "wordomatic" ist ein Wörterbucheintrag.`
-
-This feature works the same way with and without HTML mode.
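-
-For reference, a minimal sketch of a Text Translation request carrying this markup might look like the following (the subscription key and region values are placeholders):
-
-```HTTP
-POST https://api.cognitive.microsofttranslator.com/translate?api-version=3.0&from=en&to=de HTTP/1.1
-Ocp-Apim-Subscription-Key: <YOUR-SUBSCRIPTION-KEY>
-Ocp-Apim-Subscription-Region: <YOUR-RESOURCE-REGION>
-Content-Type: application/json
-
-[
-    { "Text": "The word <mstrans:dictionary translation=\"wordomatic\">wordomatic</mstrans:dictionary> is a dictionary entry." }
-]
-```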
-
-Use the feature sparingly. A better way to customize translation is by using Custom Translator. Custom Translator makes full use of context and statistical probabilities. If you have or can create training data that shows your word or phrase in context, you get much better results. You can find more information about Custom Translator at [https://aka.ms/CustomTranslator](https://aka.ms/CustomTranslator).
cognitive-services Encrypt Data At Rest https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Translator/encrypt-data-at-rest.md
- Title: Translator encryption of data at rest-
-description: Microsoft lets you manage your Cognitive Services subscriptions with your own keys, called customer-managed keys (CMK). This article covers data encryption at rest for Translator, and how to enable and manage CMK.
----- Previously updated : 11/03/2022-
-#Customer intent: As a user of the Translator service, I want to learn how encryption at rest works.
--
-# Translator encryption of data at rest
-
-Translator automatically encrypts your uploaded data when it's persisted to the cloud, helping to meet your organizational security and compliance goals.
-
-## About Cognitive Services encryption
-
-Data is encrypted and decrypted using [FIPS 140-2](https://en.wikipedia.org/wiki/FIPS_140-2) compliant [256-bit AES](https://en.wikipedia.org/wiki/Advanced_Encryption_Standard) encryption. Encryption and decryption are transparent, meaning encryption and access are managed for you. Your data is secure by default and you don't need to modify your code or applications to take advantage of encryption.
-
-## About encryption key management
-
-By default, your subscription uses Microsoft-managed encryption keys. If you're using a pricing tier that supports Customer-managed keys, you can see the encryption settings for your resource in the **Encryption** section of the [Azure portal](https://portal.azure.com), as shown in the following image.
-
-![View Encryption settings](../media/cognitive-services-encryption/encryptionblade.png)
-
-For subscriptions that only support Microsoft-managed encryption keys, you won't have an **Encryption** section.
-
-## Customer-managed keys with Azure Key Vault
-
-By default, your subscription uses Microsoft-managed encryption keys. There's also the option to manage your subscription with your own keys called customer-managed keys (CMK). CMK offers greater flexibility to create, rotate, disable, and revoke access controls. You can also audit the encryption keys used to protect your data. If CMK is configured for your subscription, double encryption is provided, which offers a second layer of protection, while allowing you to control the encryption key through your Azure Key Vault.
-
-> [!IMPORTANT]
-> Customer-managed keys are available for all pricing tiers for the Translator service. To request the ability to use customer-managed keys, fill out and submit the [Translator Customer-Managed Key Request Form](https://aka.ms/cogsvc-cmk). It will take approximately 3-5 business days to hear back on the status of your request. Depending on demand, you may be placed in a queue and approved as space becomes available. Once approved for using CMK with the Translator service, you will need to create a new Translator resource. Once your Translator resource is created, you can use Azure Key Vault to set up your managed identity.
-
-Follow these steps to enable customer-managed keys for Translator:
-
-1. Create your new regional Translator or regional Cognitive Services resource. Customer-managed keys won't work with a global resource.
-2. Enable Managed Identity in the Azure portal, and add your customer-managed key information.
-3. Create a new workspace in Custom Translator and associate this subscription information.
-
-### Enable customer-managed keys
-
-You must use Azure Key Vault to store your customer-managed keys. You can either create your own keys and store them in a key vault, or you can use the Azure Key Vault APIs to generate keys. The Cognitive Services resource and the key vault must be in the same region and in the same Azure Active Directory (Azure AD) tenant, but they can be in different subscriptions. For more information about Azure Key Vault, see [What is Azure Key Vault?](../../key-vault/general/overview.md).
-
-A new Cognitive Services resource is always encrypted using Microsoft-managed keys. It's not possible to enable customer-managed keys at the time that the resource is created. Customer-managed keys are stored in Azure Key Vault. The key vault must be provisioned with access policies that grant key permissions to the managed identity that is associated with the Cognitive Services resource. The managed identity is available as soon as the resource is created.
-
-To learn how to use customer-managed keys with Azure Key Vault for Cognitive Services encryption, see:
-- [Configure customer-managed keys with Key Vault for Cognitive Services encryption from the Azure portal](../Encryption/cognitive-services-encryption-keys-portal.md)
-Enabling customer managed keys will also enable a system assigned managed identity, a feature of Azure AD. Once the system assigned managed identity is enabled, this resource will be registered with Azure Active Directory. After being registered, the managed identity will be given access to the Key Vault selected during customer managed key setup. You can learn more about [Managed Identities](../../active-directory/managed-identities-azure-resources/overview.md).
-
-> [!IMPORTANT]
-> If you disable system assigned managed identities, access to the key vault will be removed and any data encrypted with the customer keys will no longer be accessible. Any features that depend on this data will stop working. Any models that you have deployed will also be undeployed. All uploaded data will be deleted from Custom Translator. If the managed identities are re-enabled, we will not automatically redeploy the model for you.
-
-> [!IMPORTANT]
-> Managed identities do not currently support cross-directory scenarios. When you configure customer-managed keys in the Azure portal, a managed identity is automatically assigned under the covers. If you subsequently move the subscription, resource group, or resource from one Azure AD directory to another, the managed identity associated with the resource is not transferred to the new tenant, so customer-managed keys may no longer work. For more information, see **Transferring a subscription between Azure AD directories** in [FAQs and known issues with managed identities for Azure resources](../../active-directory/managed-identities-azure-resources/known-issues.md#transferring-a-subscription-between-azure-ad-directories).
-
-### Store customer-managed keys in Azure Key Vault
-
-To enable customer-managed keys, you must use an Azure Key Vault to store your keys. You must enable both the **Soft Delete** and **Do Not Purge** properties on the key vault.
-
-Only RSA keys of size 2048 are supported with Cognitive Services encryption. For more information about keys, see **Key Vault keys** in [About Azure Key Vault keys, secrets and certificates](../../key-vault/general/about-keys-secrets-certificates.md).
-
-> [!NOTE]
-> If the entire key vault is deleted, your data will no longer be displayed and all your models will be undeployed. All uploaded data will be deleted from Custom Translator.
-
-### Revoke access to customer-managed keys
-
-To revoke access to customer-managed keys, use PowerShell or Azure CLI. For more information, see [Azure Key Vault PowerShell](/powershell/module/az.keyvault//) or [Azure Key Vault CLI](/cli/azure/keyvault). Revoking access effectively blocks access to all data in the Cognitive Services resource and your models will be undeployed, as the encryption key is inaccessible by Cognitive Services. All uploaded data will also be deleted from Custom Translator.
-
-## Next steps
-
-> [!div class="nextstepaction"]
-> [Learn more about Azure Key Vault](../../key-vault/general/overview.md)
cognitive-services Firewalls https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Translator/firewalls.md
- Title: Translate behind firewalls - Translator-
-description: Azure Cognitive Services Translator can translate behind firewalls using either domain-name or IP filtering.
------ Previously updated : 04/19/2023---
-# Use Translator behind firewalls
-
-Translator can translate behind firewalls using either [Domain-name](../../firewall/dns-settings.md#configure-dns-proxyazure-portal) or [IP filtering](#configure-firewall). Domain-name filtering is the preferred method.
-
-If you still require IP filtering, you can get the [IP address details using service tags](../../virtual-network/service-tags-overview.md#discover-service-tags-by-using-downloadable-json-files). Translator is under the **CognitiveServicesManagement** service tag.
-
-## Configure firewall
-
- Navigate to your Translator resource in the Azure portal.
-
-1. Select **Networking** from the **Resource Management** section.
-1. Under the **Firewalls and virtual networks** tab, choose **Selected Networks and Private Endpoints**.
-
- :::image type="content" source="media/firewall-setting-azure-portal.png" alt-text="Screenshot of the firewall setting in the Azure portal.":::
-
- > [!NOTE]
- >
- > * Once you enable **Selected Networks and Private Endpoints**, you must use the **Virtual Network** endpoint to call the Translator. You can't use the standard translator endpoint (`api.cognitive.microsofttranslator.com`) and you can't authenticate with an access token.
- > * For more information, *see* [**Virtual Network Support**](reference/v3-0-reference.md#virtual-network-support).
-
-1. To grant access to an internet IP range, enter the IP address or address range (in [CIDR format](https://tools.ietf.org/html/rfc4632)) under **Firewall** > **Address Range**. Only valid public IP (`non-reserved`) addresses are accepted.
-
-Running Microsoft Translator from behind a specific IP-filtered firewall is **not recommended**. The setup is likely to break in the future without notice.
-
-The IP addresses for Translator geographical endpoints as of September 21, 2021 are:
-
-|Geography|Base URL (geographical endpoint)|IP Addresses|
-|:--|:--|:--|
-|United States|api-nam.cognitive.microsofttranslator.com|20.42.6.144, 20.49.96.128, 40.80.190.224, 40.64.128.192|
-|Europe|api-eur.cognitive.microsofttranslator.com|20.50.1.16, 20.38.87.129|
-|Asia Pacific|api-apc.cognitive.microsofttranslator.com|40.80.170.160, 20.43.132.96, 20.37.196.160, 20.43.66.16|
-
-## Next steps
-
-[**Translator virtual network support**](reference/v3-0-reference.md#virtual-network-support)
-
-[**Configure virtual networks**](../cognitive-services-virtual-networks.md#grant-access-from-an-internet-ip-range)
cognitive-services Language Support https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Translator/language-support.md
- Title: Language support - Translator
-description: Cognitive Services Translator supports the following languages for text to text translation using Neural Machine Translation (NMT).
- Previously updated : 06/01/2023
-# Translator language support
-
-**Translation - Cloud:** Cloud translation is available in all languages for the Translate operation of Text Translation and for Document Translation.
-
-**Translation – Containers:** Language support for Containers.
-
-**Custom Translator:** Use Custom Translator to build customized translation models that you can then apply to your translated output when using the Text Translation or Document Translation features.
-
-**Auto Language Detection:** Automatically detect the language of the source text while using Text Translation or Document Translation.
-
-**Dictionary:** Use the [Dictionary Lookup](reference/v3-0-dictionary-lookup.md) or [Dictionary Examples](reference/v3-0-dictionary-examples.md) operations from the Text Translation feature to display alternative translations from or to English and examples of words in context.
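-
-The following is a minimal sketch of a Dictionary Lookup request using Python's `requests` package; the key, region, and lookup word are placeholders.
-
-```python
-import requests
-
-key = "<your-translator-key>"      # placeholder
-region = "<your-resource-region>"  # placeholder; needed for regional or multi-service resources
-
-url = "https://api.cognitive.microsofttranslator.com/dictionary/lookup"
-params = {"api-version": "3.0", "from": "en", "to": "es"}
-headers = {
-    "Ocp-Apim-Subscription-Key": key,
-    "Ocp-Apim-Subscription-Region": region,
-    "Content-Type": "application/json",
-}
-body = [{"Text": "fly"}]  # word to look up
-
-response = requests.post(url, params=params, headers=headers, json=body)
-print(response.json())
-```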
-
-## Translation
-
-> [!NOTE]
-> Language code `pt` will default to `pt-br`, Portuguese (Brazil).
-
-|Language | Language code | Cloud – Text Translation and Document Translation | Containers – Text Translation|Custom Translator|Auto Language Detection|Dictionary|
-|:-|:-:|:-:|:-:|:-:|:-:|:-:|
-| Afrikaans | `af` |✔|✔|✔|✔|✔|
-| Albanian | `sq` |✔|✔||✔||
-| Amharic | `am` |✔|✔||||
-| Arabic | `ar` |✔|✔|✔|✔|✔|
-| Armenian | `hy` |✔|✔||✔||
-| Assamese | `as` |✔|✔|✔|||
-| Azerbaijani (Latin) | `az` |✔|✔||||
-| Bangla | `bn` |✔|✔|✔||✔|
-| Bashkir | `ba` |✔|✔||||
-| Basque | `eu` |✔|✔||||
-| Bosnian (Latin) | `bs` |✔|✔|✔||✔|
-| Bulgarian | `bg` |✔|✔|✔|✔|✔|
-| Cantonese (Traditional) | `yue` |✔|✔||||
-| Catalan | `ca` |✔|✔|✔|✔|✔|
-| Chinese (Literary) | `lzh` |✔|✔||||
-| Chinese Simplified | `zh-Hans` |✔|✔|✔|✔|✔|
-| Chinese Traditional | `zh-Hant` |✔|✔|✔|✔||
-| Croatian | `hr` |✔|✔|✔|✔|✔|
-| Czech | `cs` |✔|✔|✔|✔|✔|
-| Danish | `da` |✔|✔|✔|✔|✔|
-| Dari | `prs` |✔|✔||||
-| Divehi | `dv` |✔|✔||✔||
-| Dutch | `nl` |✔|✔|✔|✔|✔|
-| English | `en` |✔|✔|✔|✔|✔|
-| Estonian | `et` |✔|✔|✔|✔||
-| Faroese | `fo` |✔|✔||||
-| Fijian | `fj` |✔|✔|✔|||
-| Filipino | `fil` |✔|✔|✔|||
-| Finnish | `fi` |✔|✔|✔|✔|✔|
-| French | `fr` |✔|✔|✔|✔|✔|
-| French (Canada) | `fr-ca` |✔|✔||||
-| Galician | `gl` |✔|✔||||
-| Georgian | `ka` |✔|✔||✔||
-| German | `de` |✔|✔|✔|✔|✔|
-| Greek | `el` |✔|✔|✔|✔|✔|
-| Gujarati | `gu` |✔|✔|✔|✔||
-| Haitian Creole | `ht` |✔|✔||✔|✔|
-| Hebrew | `he` |✔|✔|✔|✔|✔|
-| Hindi | `hi` |✔|✔|✔|✔|✔|
-| Hmong Daw (Latin) | `mww` |✔|✔|||✔|
-| Hungarian | `hu` |✔|✔|✔|✔|✔|
-| Icelandic | `is` |✔|✔|✔|✔|✔|
-| Indonesian | `id` |✔|✔|✔|✔|✔|
-| Inuinnaqtun | `ikt` |✔|✔||||
-| Inuktitut | `iu` |✔|✔|✔|✔||
-| Inuktitut (Latin) | `iu-Latn` |✔|✔||||
-| Irish | `ga` |✔|✔|✔|✔||
-| Italian | `it` |✔|✔|✔|✔|✔|
-| Japanese | `ja` |✔|✔|✔|✔|✔|
-| Kannada | `kn` |✔|✔|✔|||
-| Kazakh | `kk` |✔|✔||||
-| Khmer | `km` |✔|✔||✔||
-| Klingon | `tlh-Latn` |✔| ||✔|✔|
-| Klingon (plqaD) | `tlh-Piqd` |✔| ||✔||
-| Korean | `ko` |✔|✔|✔|✔|✔|
-| Kurdish (Central) | `ku` |✔|✔||✔||
-| Kurdish (Northern) | `kmr` |✔|✔||||
-| Kyrgyz (Cyrillic) | `ky` |✔|✔||||
-| Lao | `lo` |✔|✔||✔||
-| Latvian | `lv` |✔|✔|✔|✔|✔|
-| Lithuanian | `lt` |✔|✔|✔|✔|✔|
-| Macedonian | `mk` |✔|✔||✔||
-| Malagasy | `mg` |✔|✔|✔|||
-| Malay (Latin) | `ms` |✔|✔|✔|✔|✔|
-| Malayalam | `ml` |✔|✔|✔|||
-| Maltese | `mt` |✔|✔|✔|✔|✔|
-| Maori | `mi` |✔|✔|✔|||
-| Marathi | `mr` |✔|✔|✔|||
-| Mongolian (Cyrillic) | `mn-Cyrl` |✔|✔||||
-| Mongolian (Traditional) | `mn-Mong` |✔|✔||✔||
-| Myanmar | `my` |✔|✔||✔||
-| Nepali | `ne` |✔|✔||||
-| Norwegian | `nb` |✔|✔|✔|✔|✔|
-| Odia | `or` |✔|✔|✔|||
-| Pashto | `ps` |✔|✔||✔||
-| Persian | `fa` |✔|✔|✔|✔|✔|
-| Polish | `pl` |✔|✔|✔|✔|✔|
-| Portuguese (Brazil) | `pt` |✔|✔|✔|✔|✔|
-| Portuguese (Portugal) | `pt-pt` |✔|✔||||
-| Punjabi | `pa` |✔|✔|✔|||
-| Queretaro Otomi | `otq` |✔|✔||||
-| Romanian | `ro` |✔|✔|✔|✔|✔|
-| Russian | `ru` |✔|✔|✔|✔|✔|
-| Samoan (Latin) | `sm` |✔|✔|✔|||
-| Serbian (Cyrillic) | `sr-Cyrl` |✔|✔||✔||
-| Serbian (Latin) | `sr-Latn` |✔|✔|✔|✔|✔|
-| Slovak | `sk` |✔|✔|✔|✔|✔|
-| Slovenian | `sl` |✔|✔|✔|✔|✔|
-| Somali (Arabic) | `so` |✔|✔||✔||
-| Spanish | `es` |✔|✔|✔|✔|✔|
-| Swahili (Latin) | `sw` |✔|✔|✔|✔|✔|
-| Swedish | `sv` |✔|✔|✔|✔|✔|
-| Tahitian | `ty` |✔|✔|✔|✔||
-| Tamil | `ta` |✔|✔|✔||✔|
-| Tatar (Latin) | `tt` |✔|✔||||
-| Telugu | `te` |✔|✔|✔|||
-| Thai | `th` |✔|✔|✔|✔|✔|
-| Tibetan | `bo` |✔|✔||||
-| Tigrinya | `ti` |✔|✔||||
-| Tongan | `to` |✔|✔|✔|||
-| Turkish | `tr` |✔|✔|✔|✔|✔|
-| Turkmen (Latin) | `tk` |✔|✔||||
-| Ukrainian | `uk` |✔|✔|✔|✔|✔|
-| Upper Sorbian | `hsb` |✔|✔||||
-| Urdu | `ur` |✔|✔|✔|✔|✔|
-| Uyghur (Arabic) | `ug` |✔|✔||||
-| Uzbek (Latin) | `uz` |✔|✔||✔||
-| Vietnamese | `vi` |✔|✔|✔|✔|✔|
-| Welsh | `cy` |✔|✔|✔|✔|✔|
-| Yucatec Maya | `yua` |✔|✔||✔||
-| Zulu | `zu` |✔|✔||||
-
-## Document Translation: scanned PDF support
-
-|Language|Language Code|Supported as source language for scanned PDF?|Supported as target language for scanned PDF?|
-|:-|:-:|:-:|:-:|
-|Afrikaans|`af`|Yes|Yes|
-|Albanian|`sq`|Yes|Yes|
-|Amharic|`am`|No|No|
-|Arabic|`ar`|Yes|Yes|
-|Armenian|`hy`|No|No|
-|Assamese|`as`|No|No|
-|Azerbaijani (Latin)|`az`|Yes|Yes|
-|Bangla|`bn`|No|No|
-|Bashkir|`ba`|No|Yes|
-|Basque|`eu`|Yes|Yes|
-|Bosnian (Latin)|`bs`|Yes|Yes|
-|Bulgarian|`bg`|Yes|Yes|
-|Cantonese (Traditional)|`yue`|No|Yes|
-|Catalan|`ca`|Yes|Yes|
-|Chinese (Literary)|`lzh`|No|Yes|
-|Chinese Simplified|`zh-Hans`|Yes|Yes|
-|Chinese Traditional|`zh-Hant`|Yes|Yes|
-|Croatian|`hr`|Yes|Yes|
-|Czech|`cs`|Yes|Yes|
-|Danish|`da`|Yes|Yes|
-|Dari|`prs`|No|No|
-|Divehi|`dv`|No|No|
-|Dutch|`nl`|Yes|Yes|
-|English|`en`|Yes|Yes|
-|Estonian|`et`|Yes|Yes|
-|Faroese|`fo`|Yes|Yes|
-|Fijian|`fj`|Yes|Yes|
-|Filipino|`fil`|Yes|Yes|
-|Finnish|`fi`|Yes|Yes|
-|French|`fr`|Yes|Yes|
-|French (Canada)|`fr-ca`|Yes|Yes|
-|Galician|`gl`|Yes|Yes|
-|Georgian|`ka`|No|No|
-|German|`de`|Yes|Yes|
-|Greek|`el`|Yes|Yes|
-|Gujarati|`gu`|No|No|
-|Haitian Creole|`ht`|Yes|Yes|
-|Hebrew|`he`|No|No|
-|Hindi|`hi`|Yes|Yes|
-|Hmong Daw (Latin)|`mww`|Yes|Yes|
-|Hungarian|`hu`|Yes|Yes|
-|Icelandic|`is`|Yes|Yes|
-|Indonesian|`id`|Yes|Yes|
-|Interlingua|`ia`|Yes|Yes|
-|Inuinnaqtun|`ikt`|No|Yes|
-|Inuktitut|`iu`|No|No|
-|Inuktitut (Latin)|`iu-Latn`|Yes|Yes|
-|Irish|`ga`|Yes|Yes|
-|Italian|`it`|Yes|Yes|
-|Japanese|`ja`|Yes|Yes|
-|Kannada|`kn`|No|Yes|
-|Kazakh (Cyrillic)|`kk`, `kk-cyrl`|Yes|Yes|
-|Kazakh (Latin)|`kk-latn`|Yes|Yes|
-|Khmer|`km`|No|No|
-|Klingon|`tlh-Latn`|No|No|
-|Klingon (plqaD)|`tlh-Piqd`|No|No|
-|Korean|`ko`|Yes|Yes|
-|Kurdish (Arabic) (Central)|`ku-arab`,`ku`|No|No|
-|Kurdish (Latin) (Northern)|`ku-latn`, `kmr`|Yes|Yes|
-|Kyrgyz (Cyrillic)|`ky`|Yes|Yes|
-|Lao|`lo`|No|No|
-|Latvian|`lv`|No|Yes|
-|Lithuanian|`lt`|Yes|Yes|
-|Macedonian|`mk`|No|Yes|
-|Malagasy|`mg`|No|Yes|
-|Malay (Latin)|`ms`|Yes|Yes|
-|Malayalam|`ml`|No|Yes|
-|Maltese|`mt`|Yes|Yes|
-|Maori|`mi`|Yes|Yes|
-|Marathi|`mr`|Yes|Yes|
-|Mongolian (Cyrillic)|`mn-Cyrl`|Yes|Yes|
-|Mongolian (Traditional)|`mn-Mong`|No|No|
-|Myanmar (Burmese)|`my`|No|No|
-|Nepali|`ne`|Yes|Yes|
-|Norwegian|`nb`|Yes|Yes|
-|Odia|`or`|No|No|
-|Pashto|`ps`|No|No|
-|Persian|`fa`|No|No|
-|Polish|`pl`|Yes|Yes|
-|Portuguese (Brazil)|`pt`, `pt-br`|Yes|Yes|
-|Portuguese (Portugal)|`pt-pt`|Yes|Yes|
-|Punjabi|`pa`|No|Yes|
-|Queretaro Otomi|`otq`|No|Yes|
-|Romanian|`ro`|Yes|Yes|
-|Russian|`ru`|Yes|Yes|
-|Samoan (Latin)|`sm`|Yes|Yes|
-|Serbian (Cyrillic)|`sr-Cyrl`|No|Yes|
-|Serbian (Latin)|`sr`, `sr-latn`|Yes|Yes|
-|Slovak|`sk`|Yes|Yes|
-|Slovenian|`sl`|Yes|Yes|
-|Somali|`so`|No|Yes|
-|Spanish|`es`|Yes|Yes|
-|Swahili (Latin)|`sw`|Yes|Yes|
-|Swedish|`sv`|Yes|Yes|
-|Tahitian|`ty`|No|Yes|
-|Tamil|`ta`|No|Yes|
-|Tatar (Latin)|`tt`|Yes|Yes|
-|Telugu|`te`|No|Yes|
-|Thai|`th`|No|No|
-|Tibetan|`bo`|No|No|
-|Tigrinya|`ti`|No|No|
-|Tongan|`to`|Yes|Yes|
-|Turkish|`tr`|Yes|Yes|
-|Turkmen (Latin)|`tk`|Yes|Yes|
-|Ukrainian|`uk`|No|Yes|
-|Upper Sorbian|`hsb`|Yes|Yes|
-|Urdu|`ur`|No|No|
-|Uyghur (Arabic)|`ug`|No|No|
-|Uzbek (Latin)|`uz`|Yes|Yes|
-|Vietnamese|`vi`|No|Yes|
-|Welsh|`cy`|Yes|Yes|
-|Yucatec Maya|`yua`|Yes|Yes|
-|Zulu|`zu`|Yes|Yes|
-
-## Transliteration
-
-The [Transliterate operation](reference/v3-0-transliterate.md) in the Text Translation feature supports the following languages. In the "To/From" column, "<-->" indicates that the language can be transliterated from or to either of the scripts listed, and "-->" indicates that the language can only be transliterated from one script to the other. A request sketch follows the table.
-
-| Language | Language code | Script | To/From | Script|
-|:-- |:-:|:-:|:-:|:-:|
-| Arabic | `ar` | Arabic `Arab` | <--> | Latin `Latn` |
-| Assamese | `as` | Bengali `Beng` | <--> | Latin `Latn` |
-| Bangla | `bn` | Bengali `Beng` | <--> | Latin `Latn` |
-|Belarusian| `be` | Cyrillic `Cyrl` | <--> | Latin `Latn` |
-|Bulgarian| `bg` | Cyrillic `Cyrl` | <--> | Latin `Latn` |
-| Chinese (Simplified) | `zh-Hans` | Chinese Simplified `Hans`| <--> | Latin `Latn` |
-| Chinese (Simplified) | `zh-Hans` | Chinese Simplified `Hans`| <--> | Chinese Traditional `Hant`|
-| Chinese (Traditional) | `zh-Hant` | Chinese Traditional `Hant`| <--> | Latin `Latn` |
-| Chinese (Traditional) | `zh-Hant` | Chinese Traditional `Hant`| <--> | Chinese Simplified `Hans` |
-|Greek| `el` | Greek `Grek` | <--> | Latin `Latn` |
-| Gujarati | `gu` | Gujarati `Gujr` | <--> | Latin `Latn` |
-| Hebrew | `he` | Hebrew `Hebr` | <--> | Latin `Latn` |
-| Hindi | `hi` | Devanagari `Deva` | <--> | Latin `Latn` |
-| Japanese | `ja` | Japanese `Jpan` | <--> | Latin `Latn` |
-| Kannada | `kn` | Kannada `Knda` | <--> | Latin `Latn` |
-|Kazakh| `kk` | Cyrillic `Cyrl` | <--> | Latin `Latn` |
-|Korean| `ko` | Korean `Kore` | <--> | Latin `Latn` |
-|Kyrgyz| `ky` | Cyrillic `Cyrl` | <--> | Latin `Latn` |
-|Macedonian| `mk` | Cyrillic `Cyrl` | <--> | Latin `Latn` |
-| Malayalam | `ml` | Malayalam `Mlym` | <--> | Latin `Latn` |
-| Marathi | `mr` | Devanagari `Deva` | <--> | Latin `Latn` |
-|Mongolian| `mn` | Cyrillic `Cyrl` | <--> | Latin `Latn` |
-| Odia | `or` | Oriya `Orya` | <--> | Latin `Latn` |
-|Persian| `fa` | Arabic `Arab` | <--> | Latin `Latn` |
-| Punjabi | `pa` | Gurmukhi `Guru` | <--> | Latin `Latn` |
-|Russian| `ru` | Cyrillic `Cyrl` | <--> | Latin `Latn` |
-| Serbian (Cyrillic) | `sr-Cyrl` | Cyrillic `Cyrl` | --> | Latin `Latn` |
-| Serbian (Latin) | `sr-Latn` | Latin `Latn` | --> | Cyrillic `Cyrl`|
-|Sindhi| `sd` | Arabic `Arab` | <--> | Latin `Latn` |
-|Sinhala| `si` | Sinhala `Sinh` | <--> | Latin `Latn` |
-|Tajik| `tg` | Cyrillic `Cyrl` | <--> | Latin `Latn` |
-| Tamil | `ta` | Tamil `Taml` | <--> | Latin `Latn` |
-|Tatar| `tt` | Cyrillic `Cyrl` | <--> | Latin `Latn` |
-| Telugu | `te` | Telugu `Telu` | <--> | Latin `Latn` |
-| Thai | `th` | Thai `Thai` | --> | Latin `Latn` |
-|Ukrainian| `uk` | Cyrillic `Cyrl` | <--> | Latin `Latn` |
-|Urdu| `ur` | Arabic `Arab` | <--> | Latin `Latn` |
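-
-The following is a minimal sketch of a Transliterate request for one pair from the table above (Japanese, `Jpan` to `Latn`); the key and region values are placeholders.
-
-```python
-import requests
-
-key = "<your-translator-key>"      # placeholder
-region = "<your-resource-region>"  # placeholder
-
-url = "https://api.cognitive.microsofttranslator.com/transliterate"
-params = {"api-version": "3.0", "language": "ja", "fromScript": "Jpan", "toScript": "Latn"}
-headers = {
-    "Ocp-Apim-Subscription-Key": key,
-    "Ocp-Apim-Subscription-Region": region,
-    "Content-Type": "application/json",
-}
-body = [{"Text": "こんにちは"}]  # source text in the Jpan script
-
-response = requests.post(url, params=params, headers=headers, json=body)
-print(response.json())  # returns the text rendered in the Latn script
-```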
-
-## Custom Translator language pairs
-
-|Source Language|Target Language|
-|:-|:-|
-| Czech (cs-cz) | English (en-us) |
-| Danish (da-dk) | English (en-us) |
-| German (de-&#8203;de) | English (en-us) |
-| Greek (el-gr) | English (en-us) |
-| English (en-us) | Arabic (ar-sa) |
-| English (en-us) | Czech (cs-cz) |
-| English (en-us) | Danish (da-dk) |
-| English (en-us) | German (de-&#8203;de) |
-| English (en-us) | Greek (el-gr) |
-| English (en-us) | Spanish (es-es) |
-| English (en-us) | French (fr-fr) |
-| English (en-us) | Hebrew (he-il) |
-| English (en-us) | Hindi (hi-in) |
-| English (en-us) | Croatian (hr-hr) |
-| English (en-us) | Hungarian (hu-hu) |
-| English (en-us) | Indonesian (id-id) |
-| English (en-us) | Italian (it-it) |
-| English (en-us) | Japanese (ja-jp) |
-| English (en-us) | Korean (ko-kr) |
-| English (en-us) | Lithuanian (lt-lt) |
-| English (en-us) | Latvian (lv-lv) |
-| English (en-us) | Norwegian (nb-no) |
-| English (en-us) | Polish (pl-pl) |
-| English (en-us) | Portuguese (pt-pt) |
-| English (en-us) | Russian (ru-ru) |
-| English (en-us) | Slovak (sk-sk) |
-| English (en-us) | Swedish (sv-se) |
-| English (en-us) | Ukrainian (uk-ua) |
-| English (en-us) | Vietnamese (vi-vn) |
-| English (en-us) | Chinese Simplified (zh-cn) |
-| Spanish (es-es) | English (en-us) |
-| French (fr-fr) | English (en-us) |
-| Hindi (hi-in) | English (en-us) |
-| Hungarian (hu-hu) | English (en-us) |
-| Indonesian (id-id) | English (en-us) |
-| Italian (it-it) | English (en-us) |
-| Japanese (ja-jp) | English (en-us) |
-| Korean (ko-kr) | English (en-us) |
-| Norwegian (nb-no) | English (en-us) |
-| Dutch (nl-nl) | English (en-us) |
-| Polish (pl-pl) | English (en-us) |
-| Portuguese (pt-br) | English (en-us) |
-| Russian (ru-ru) | English (en-us) |
-| Swedish (sv-se) | English (en-us) |
-| Thai (th-th) | English (en-us) |
-| Turkish (tr-tr) | English (en-us) |
-| Vietnamese (vi-vn) | English (en-us) |
-| Chinese Simplified (zh-cn) | English (en-us) |
-
-## Other Cognitive Services
-
-Add more capabilities to your apps and workflows by utilizing other Cognitive Services with Translator. Language support for other services:
-
-* [Computer Vision](../computer-vision/language-support.md)
-* [Speech](../speech-service/language-support.md)
-* [Language service](../language-service/concepts/language-support.md)
-
-View all [Cognitive Services](../index.yml).
-
-## Next steps
-
-* [Text Translation reference](reference/v3-0-reference.md)
-* [Document Translation reference](document-translation/reference/rest-api-guide.md)
-* [Custom Translator overview](custom-translator/overview.md)
cognitive-services Migrate To V3 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Translator/migrate-to-v3.md
- Title: Migrate to V3 - Translator
-description: This article provides the steps to help you migrate from V2 to V3 of the Azure Cognitive Services Translator.
- Previously updated : 05/26/2020
-# Translator V2 to V3 Migration
-
-> [!NOTE]
-> V2 was deprecated on April 30, 2018. Please migrate your applications to V3 in order to take advantage of new functionality available exclusively in V3. V2 was retired on May 24, 2021.
-
-The Microsoft Translator team has released Version 3 (V3) of the Translator. This release includes new features, deprecated methods, and a new format for sending data to, and receiving data from, the Microsoft Translator Service. This document provides information for changing applications to use V3.
-
-The end of this document contains helpful links for you to learn more.
-
-## Summary of features
-
-* No Trace - In V3, No-Trace applies to all pricing tiers in the Azure portal. This feature means that no text submitted to the V3 API will be saved by Microsoft.
-* JSON - XML is replaced by JSON. All data sent to the service and received from the service is in JSON format.
-* Multiple target languages in a single request - The Translate method accepts multiple 'to' languages for translation in a single request. For example, a single request can be 'from' English and 'to' German, Spanish and Japanese, or any other group of languages.
-* Bilingual dictionary - A bilingual dictionary method has been added to the API. This method includes 'lookup' and 'examples'.
-* Transliterate - A transliterate method has been added to the API. This method will convert words and sentences in one script into another script. For example, Arabic to Latin.
-* Languages - A new 'languages' method delivers language information, in JSON format, for use with the 'translate', 'dictionary', and 'transliterate' methods.
-* New to Translate - New capabilities have been added to the 'translate' method to support some of the features that were in the V2 API as separate methods. An example is TranslateArray.
-* Speak method - Text to speech functionality is no longer supported in the Microsoft Translator. Text to speech functionality is available in [Microsoft Speech Service](../speech-service/text-to-speech.md).
-
-The following list of V2 and V3 methods identifies the V3 methods and APIs that provide the functionality that came with V2. A request sketch for the batch translation case follows the table.
-
-| V2 API Method | V3 API Compatibility |
-|:-- |:-|
-| `Translate` | [Translate](reference/v3-0-translate.md) |
-| `TranslateArray` | [Translate](reference/v3-0-translate.md) |
-| `GetLanguageNames` | [Languages](reference/v3-0-languages.md) |
-| `GetLanguagesForTranslate` | [Languages](reference/v3-0-languages.md) |
-| `GetLanguagesForSpeak` | [Microsoft Speech Service](../speech-service/language-support.md) |
-| `Speak` | [Microsoft Speech Service](../speech-service/text-to-speech.md) |
-| `Detect` | [Detect](reference/v3-0-detect.md) |
-| `DetectArray` | [Detect](reference/v3-0-detect.md) |
-| `AddTranslation` | Feature is no longer supported |
-| `AddTranslationArray` | Feature is no longer supported |
-| `BreakSentences` | [BreakSentence](reference/v3-0-break-sentence.md) |
-| `GetTranslations` | Feature is no longer supported |
-| `GetTranslationsArray` | Feature is no longer supported |
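-
-For example, the batch behavior of V2 `TranslateArray` maps to a single V3 `Translate` request that sends several text objects in one JSON body. The following is a minimal sketch; the key and region values are placeholders.
-
-```python
-import requests
-
-key = "<your-translator-key>"      # placeholder
-region = "<your-resource-region>"  # placeholder
-
-url = "https://api.cognitive.microsofttranslator.com/translate"
-# Multiple "to" values return every target language in one response.
-params = {"api-version": "3.0", "from": "en", "to": ["de", "es"]}
-headers = {
-    "Ocp-Apim-Subscription-Key": key,
-    "Ocp-Apim-Subscription-Region": region,
-    "Content-Type": "application/json",
-}
-# Several strings in one request replace the V2 TranslateArray pattern.
-body = [{"Text": "Hello"}, {"Text": "How are you?"}]
-
-response = requests.post(url, params=params, headers=headers, json=body)
-print(response.json())
-```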
-
-## Move to JSON format
-
-Microsoft Translator Translation V2 accepted and returned data in XML format. In V3, all data sent and received using the API is in JSON format. XML will no longer be accepted or returned in V3.
-
-This change will affect several aspects of an application written for the V2 Text Translation API. As an example: The Languages API returns language information for text translation, transliteration, and the two dictionary methods. You can request all language information for all methods in one call or request them individually.
-
-The languages method does not require authentication; by selecting the following link you can see all the language information for V3 in JSON:
-
-[https://api.cognitive.microsofttranslator.com/languages?api-version=3.0&scope=translation,dictionary,transliteration](https://api.cognitive.microsofttranslator.com/languages?api-version=3.0&scope=translation,dictionary,transliteration)
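-
-The same information can be retrieved programmatically. The following is a minimal sketch using Python's `requests` package; no authentication headers are needed for this call.
-
-```python
-import requests
-
-url = "https://api.cognitive.microsofttranslator.com/languages"
-params = {"api-version": "3.0", "scope": "translation,dictionary,transliteration"}
-
-# The Languages method requires no authentication.
-response = requests.get(url, params=params)
-languages = response.json()
-print(len(languages["translation"]), "translation languages")
-```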
-
-## Authentication Key
-
-The authentication key you are using for V2 will be accepted for V3. You will not need to get a new subscription. You will be able to mix V2 and V3 in your apps during the yearlong migration period, making it easier for you to release new versions while you are still migrating from V2-XML to V3-JSON.
-
-## Pricing Model
-
-Microsoft Translator V3 is priced in the same way V2 was priced; per character, including spaces. The new features in V3 make some changes in what characters are counted for billing.
-
-| V3 Method | Characters Counted for Billing |
-|:-- |:-|
-| `Languages` | No characters submitted, none counted, no charge. |
-| `Translate` | Count is based on how many characters are submitted for translation, and how many languages the characters are translated into. 50 characters submitted, and 5 languages requested, are counted as 50x5 = 250 characters. |
-| `Transliterate` | Number of characters submitted for transliteration are counted. |
-| `Dictionary lookup & example` | Number of characters submitted for Dictionary lookup and examples are counted. |
-| `BreakSentence` | No Charge. |
-| `Detect` | No Charge. |
-
-## V3 End Points
-
-Global
-
-* api.cognitive.microsofttranslator.com
-
-## V3 API text translations methods
-
-[`Languages`](reference/v3-0-languages.md)
-
-[`Translate`](reference/v3-0-translate.md)
-
-[`Transliterate`](reference/v3-0-transliterate.md)
-
-[`BreakSentence`](reference/v3-0-break-sentence.md)
-
-[`Detect`](reference/v3-0-detect.md)
-
-[`Dictionary/lookup`](reference/v3-0-dictionary-lookup.md)
-
-[`Dictionary/example`](reference/v3-0-dictionary-examples.md)
-
-## Compatibility and customization
-
-> [!NOTE]
->
-> The Microsoft Translator Hub will be retired on May 17, 2019. [View important migration information and dates](https://www.microsoft.com/translator/business/hub/).
-
-Microsoft Translator V3 uses neural machine translation by default. As such, it cannot be used with the Microsoft Translator Hub. The Translator Hub only supports legacy statistical machine translation. Customization for neural translation is now available using the Custom Translator. [Learn more about customizing neural machine translation](custom-translator/overview.md)
-
-Neural translation with the V3 text API does not support the use of standard categories (SMT, speech, tech, _generalnn_).
-
-| Version | Endpoint | GDPR Processor Compliance | Use Translator Hub | Use Custom Translator (Preview) |
-| :-- | :-- | :-- | :-- | :-- |
-|Translator Version 2| api.microsofttranslator.com| No |Yes |No|
-|Translator Version 3| api.cognitive.microsofttranslator.com| Yes| No| Yes|
-
-**Translator Version 3**
-
-* It's generally available and fully supported.
-* It's GDPR-compliant as a processor and satisfies all ISO 20001 and 20018 as well as SOC 3 certification requirements.
-* It allows you to invoke the neural network translation systems you have customized with Custom Translator (Preview), the new Translator NMT customization feature.
-* It doesn't provide access to custom translation systems created using the Microsoft Translator Hub.
-
-You are using Version 3 of the Translator if you are using the api.cognitive.microsofttranslator.com endpoint.
-
-**Translator Version 2**
-
-* Doesn't satisfy all ISO 20001, 20018, and SOC 3 certification requirements.
-* Doesn't allow you to invoke the neural network translation systems you have customized with the Translator customization feature.
-* Provides access to custom translation systems created using the Microsoft Translator Hub.
-* You are using Version 2 of the Translator if you are using the api.microsofttranslator.com endpoint.
-
-No version of the Translator creates a record of your translations. Your translations are never shared with anyone. For more information, see the [Translator No-Trace](https://www.aka.ms/NoTrace) webpage.
-
-## Links
-
-* [Microsoft Privacy Policy](https://privacy.microsoft.com/privacystatement)
-* [Microsoft Azure Legal Information](https://azure.microsoft.com/support/legal)
-* [Online Services Terms](https://www.microsoftvolumelicensing.com/DocumentSearch.aspx?Mode=3&DocumentTypeId=31)
-
-## Next steps
-
-> [!div class="nextstepaction"]
-> [View V3.0 Documentation](reference/v3-0-reference.md)
cognitive-services Modifications Deprecations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Translator/modifications-deprecations.md
- Title: Modifications to Translator Service
-description: Translator Service changes, modifications, and deprecations
- Previously updated : 11/15/2022
-# Modifications to Translator Service
-
-Learn about Translator service changes, modifications, and deprecations.
-
-> [!NOTE]
-> Looking for updates and preview announcements? Visit our [What's new](whats-new.md) page to stay up to date with release notes, feature enhancements, and our newest documentation.
-
-## November 2022
-
-### Changes to Translator `Usage` metrics
-
-> [!IMPORTANT]
-> **`Characters Translated`** and **`Characters Trained`** metrics are deprecated and have been removed from the Azure portal.
-
-|Deprecated metric| Current metric(s) | Description|
-|---|---|---|
-|Characters Translated (Deprecated)</br></br></br></br>|**&bullet; Text Characters Translated**</br></br>**&bullet; Text Custom Characters Translated**| &bullet; Number of characters in incoming **text** translation request.</br></br> &bullet; Number of characters in incoming **custom** translation request. |
-|Characters Trained (Deprecated) | **&bullet; Text Trained Characters** | &bullet; Number of characters **trained** using text translation service.|
-
-* In 2021, two new metrics, **Text Characters Translated** and **Text Custom Characters Translated**, were added to provide more granular service usage data. These metrics replaced **Characters Translated**, which provided combined usage data for the general and custom text translation services.
-
-* Similarly, the **Text Trained Characters** metric was added to replace the **Characters Trained** metric.
-
-* **Characters Trained** and **Characters Translated** metrics have had continued support in the Azure portal with the deprecated flag to allow migration to the current metrics. As of October 2022, Characters Trained and Characters Translated are no longer available in the Azure portal.
--
cognitive-services Prevent Translation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Translator/prevent-translation.md
- Title: Prevent content translation - Translator
-description: Prevent translation of content with the Translator. The Translator allows you to tag content so that it isn't translated.
- Previously updated : 05/26/2020
-# How to prevent translation of content with the Translator
-
-The Translator allows you to tag content so that it isn't translated. For example, you may want to tag code, a brand name, or a word/phrase that doesn't make sense when localized.
-
-## Methods for preventing translation
-
-1. Tag your content with the `notranslate` class. By design, this works only when the input `textType` is set to HTML. A request sketch is shown after this list.
-
- Example:
-
- ```html
- <span class="notranslate">This will not be translated.</span>
- <span>This will be translated. </span>
- ```
-
- ```html
- <div class="notranslate">This will not be translated.</div>
- <div>This will be translated. </div>
- ```
-
-2. Tag your content with `translate="no"`. This works only when the input `textType` is set to HTML.
-
- Example:
-
- ```html
- <span translate="no">This will not be translated.</span>
- <span>This will be translated. </span>
- ```
-
- ```html
- <div translate="no">This will not be translated.</div>
- <div>This will be translated. </div>
- ```
-
-3. Use the [dynamic dictionary](dynamic-dictionary.md) to prescribe a specific translation.
-
-4. Don't pass the string to the Translator for translation.
-
-5. Custom Translator: Use a [dictionary in Custom Translator](custom-translator/concepts/dictionaries.md) to prescribe the translation of a phrase with 100% probability.
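-
-The following is a minimal sketch of method 1, sending HTML that contains a `notranslate` span with `textType=html` set on the request; the key, region, and sample markup are placeholders.
-
-```python
-import requests
-
-key = "<your-translator-key>"      # placeholder
-region = "<your-resource-region>"  # placeholder
-
-url = "https://api.cognitive.microsofttranslator.com/translate"
-# textType=html is required for the notranslate class (or translate="no") to be honored.
-params = {"api-version": "3.0", "from": "en", "to": "fr", "textType": "html"}
-headers = {
-    "Ocp-Apim-Subscription-Key": key,
-    "Ocp-Apim-Subscription-Region": region,
-    "Content-Type": "application/json",
-}
-body = [{
-    "Text": '<span class="notranslate">Contoso GreenSky</span> <span>is our new product.</span>'
-}]
-
-response = requests.post(url, params=params, headers=headers, json=body)
-print(response.json())
-```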
--
-## Next steps
-> [!div class="nextstepaction"]
-> [Use the Translate operation to translate text](reference/v3-0-translate.md)
cognitive-services Profanity Filtering https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Translator/profanity-filtering.md
- Title: Profanity filtering - Translator
-description: Use Translator profanity filtering to determine the level of profanity translated in your text.
- Previously updated : 04/26/2022
-# Add profanity filtering with the Translator
-
-Normally, the Translator service retains in the translation any profanity that is present in the source. The degree of profanity and the context that makes words profane differ between cultures. As a result, the degree of profanity in the target language may be amplified or reduced.
-
-If you want to avoid seeing profanity in the translation, even if profanity is present in the source text, use the profanity filtering option available in the Translate() method. This option allows you to choose whether you want the profanity deleted, marked with appropriate tags, or no action taken.
-
-The Translate() method takes the "options" parameter, which contains the new element "ProfanityAction." The accepted values of ProfanityAction are "NoAction," "Marked," and "Deleted." For the value of "Marked," an additional, optional element "ProfanityMarker" can take the values "Asterisk" (default) and "Tag."
--
-## Accepted values and examples of ProfanityMarker and ProfanityAction
-| ProfanityAction value | ProfanityMarker value | Action | Example: Source - Spanish| Example: Target - English|
-|:--|:--|:--|:--|:--|
-| NoAction| | Default. Same as not setting the option. Profanity passes from source to target. | Que coche de \<insert-profane-word> | What a \<insert-profane-word> car |
-| Marked | Asterisk | Profane words are replaced by asterisks (default). | Que coche de \<insert-profane-word> | What a *** car |
-| Marked | Tag | Profane words are surrounded by XML tags \<profanity\>...\</profanity>. | Que coche de \<insert-profane-word> | What a \<profanity> \<insert-profane-word> \</profanity> car |
-| Deleted | | Profane words are removed from the output without replacement. | Que coche de \<insert-profane-word> | What a car |
-
-In the above examples, **\<insert-profane-word>** is a placeholder for profane words.
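-
-In the V3 REST API, these options correspond to the `profanityAction` and `profanityMarker` query parameters on the Translate request. The following is a minimal sketch that marks profanity with tags; the key, region, and source text are placeholders.
-
-```python
-import requests
-
-key = "<your-translator-key>"      # placeholder
-region = "<your-resource-region>"  # placeholder
-
-url = "https://api.cognitive.microsofttranslator.com/translate"
-params = {
-    "api-version": "3.0",
-    "from": "es",
-    "to": "en",
-    "profanityAction": "Marked",  # NoAction | Marked | Deleted
-    "profanityMarker": "Tag",     # Asterisk (default) | Tag
-}
-headers = {
-    "Ocp-Apim-Subscription-Key": key,
-    "Ocp-Apim-Subscription-Region": region,
-    "Content-Type": "application/json",
-}
-body = [{"Text": "<your-source-text>"}]  # placeholder source text
-
-response = requests.post(url, params=params, headers=headers, json=body)
-print(response.json())
-```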
-
-## Next steps
-> [!div class="nextstepaction"]
-> [Apply profanity filtering with your Translator call](reference/v3-0-translate.md)
cognitive-services Quickstart Text Rest Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Translator/quickstart-text-rest-api.md
- Title: "Quickstart: Azure Cognitive Services Translator REST APIs"-
-description: "Learn to translate text with the Translator service REST APIs. Examples are provided in C#, Go, Java, JavaScript and Python."
- Previously updated : 06/02/2023
-<!-- markdownlint-disable MD033 -->
-<!-- markdownlint-disable MD001 -->
-<!-- markdownlint-disable MD024 -->
-<!-- markdownlint-disable MD036 -->
-<!-- markdownlint-disable MD049 -->
-
-# Quickstart: Azure Cognitive Services Translator REST APIs
-
-Try the latest version of Azure Translator. In this quickstart, get started using the Translator service to [translate text](reference/v3-0-translate.md) with a programming language of your choice or the REST API. For this project, we recommend using the free pricing tier (F0) while you're learning the technology, and later upgrading to a paid tier for production.
-
-## Prerequisites
-
-You need an active Azure subscription. If you don't have an Azure subscription, you can [create one for free](https://azure.microsoft.com/free/cognitive-services/).
-
-* Once you have your Azure subscription, create a [Translator resource](https://portal.azure.com/#create/Microsoft.CognitiveServicesTextTranslation) in the Azure portal.
-
-* After your resource deploys, select **Go to resource** and retrieve your key and endpoint.
-
- * You need the key and endpoint from the resource to connect your application to the Translator service. You paste your key and endpoint into the code later in the quickstart. You can find these values on the Azure portal **Keys and Endpoint** page:
-
- :::image type="content" source="media/quickstarts/keys-and-endpoint-portal.png" alt-text="Screenshot: Azure portal keys and endpoint page.":::
-
- > [!NOTE]
- >
- > * For this quickstart it is recommended that you use a Translator text single-service global resource.
- > * With a single-service global resource you'll include one authorization header (**Ocp-Apim-Subscription-key**) with the REST API request. The value for Ocp-Apim-Subscription-key is your Azure secret key for your Translator Text subscription.
-    > * If you choose to use the multi-service Cognitive Services or regional Translator resource, two authentication headers are required: **Ocp-Apim-Subscription-Key** and **Ocp-Apim-Subscription-Region**. The value for Ocp-Apim-Subscription-Region is the region associated with your subscription.
- > * For more information on how to use the **Ocp-Apim-Subscription-Region** header, _see_ [Text Translator REST API headers](translator-text-apis.md).
-
-## Headers
-
-To call the Translator service via the [REST API](reference/rest-api-guide.md), you need to include the following headers with each request. Don't worry, we include the headers for you in the sample code for each programming language.
-
-For more information on Translator authentication options, _see_ the [Translator v3 reference](./reference/v3-0-reference.md#authentication) guide.
-
-|Header|Value| Condition |
-|:--|:--|:--|
-|**Ocp-Apim-Subscription-Key** |Your Translator service key from the Azure portal.|&bullet; ***Required***|
-|**Ocp-Apim-Subscription-Region**|The region where your resource was created. |&bullet; ***Required*** when using a multi-service Cognitive Services or regional (geographic) resource like **West US**.</br>&bullet; ***Optional*** when using a single-service global Translator Resource.|
-|**Content-Type**|The content type of the payload. The accepted value is **application/json** or **charset=UTF-8**.|&bullet; **Required**|
-|**Content-Length**|The **length of the request** body.|&bullet; ***Optional***|
-
-> [!IMPORTANT]
->
-> Remember to remove the key from your code when you're done, and never post it publicly. For production, use a secure way of storing and accessing your credentials like [Azure Key Vault](../../key-vault/general/overview.md). For more information, _see_ the Cognitive Services [security](../cognitive-services-security.md) article.
-
-## Translate text
-
-The core operation of the Translator service is translating text. In this quickstart, you build a request using a programming language of your choice that takes a single source (`from`) and provides two outputs (`to`). Then we review some parameters that can be used to adjust both the request and the response.
-
-For detailed information regarding Azure Translator Service request limits, *see* [**Text translation request limits**](request-limits.md#text-translation).
-
-### [C#: Visual Studio](#tab/csharp)
-
-### Set up your Visual Studio project
-
-1. Make sure you have the current version of [Visual Studio IDE](https://visualstudio.microsoft.com/vs/).
-
- > [!TIP]
- >
- > If you're new to Visual Studio, try the [Introduction to Visual Studio](/training/modules/go-get-started/) Learn module.
-
-1. Open Visual Studio.
-
-1. On the Start page, choose **Create a new project**.
-
- :::image type="content" source="media/quickstarts/start-window.png" alt-text="Screenshot: Visual Studio start window.":::
-
-1. On the **Create a new project page**, enter **console** in the search box. Choose the **Console Application** template, then choose **Next**.
-
- :::image type="content" source="media/quickstarts/create-new-project.png" alt-text="Screenshot: Visual Studio's create new project page.":::
-
-1. In the **Configure your new project** dialog window, enter `translator_quickstart` in the Project name box. Leave the "Place solution and project in the same directory" checkbox **unchecked** and select **Next**.
-
- :::image type="content" source="media/quickstarts/configure-new-project.png" alt-text="Screenshot: Visual Studio's configure new project dialog window.":::
-
-1. In the **Additional information** dialog window, make sure **.NET 6.0 (Long-term support)** is selected. Leave the "Don't use top-level statements" checkbox **unchecked** and select **Create**.
-
- :::image type="content" source="media/quickstarts/additional-information.png" alt-text="Screenshot: Visual Studio's additional information dialog window.":::
-
-### Install the Newtonsoft.json package with NuGet
-
-1. Right-click on your translator_quickstart project and select **Manage NuGet Packages...** .
-
- :::image type="content" source="media/quickstarts/manage-nuget.png" alt-text="Screenshot of the NuGet package search box.":::
-
-1. Select the Browse tab and type Newtonsoft.json.
-
- :::image type="content" source="media/quickstarts/newtonsoft.png" alt-text="Screenshot of the NuGet package install window.":::
-
-1. Select **Install** from the package manager window on the right to add the package to your project.
-
- :::image type="content" source="media/quickstarts/install-newtonsoft.png" alt-text="Screenshot of the NuGet package install button.":::
-
-### Build your C# application
-
-> [!NOTE]
->
-> * Starting with .NET 6, new projects using the `console` template generate a new program style that differs from previous versions.
-> * The new output uses recent C# features that simplify the code you need to write.
-> * When you use the newer version, you only need to write the body of the `Main` method. You don't need to include top-level statements, global using directives, or implicit using directives.
-> * For more information, _see_ [**New C# templates generate top-level statements**](/dotnet/core/tutorials/top-level-templates).
-
-1. Open the **Program.cs** file.
-
-1. Delete the pre-existing code, including the line `Console.WriteLine("Hello World!")`. Copy and paste the code sample into your application's Program.cs file. Make sure you update the key variable with the value from your Azure portal Translator instance:
-
-```csharp
-using System.Text;
-using Newtonsoft.Json;
-
-class Program
-{
- private static readonly string key = "<your-translator-key>";
- private static readonly string endpoint = "https://api.cognitive.microsofttranslator.com";
-
- // location, also known as region.
- // required if you're using a multi-service or regional (not global) resource. It can be found in the Azure portal on the Keys and Endpoint page.
- private static readonly string location = "<YOUR-RESOURCE-LOCATION>";
-
- static async Task Main(string[] args)
- {
- // Input and output languages are defined as parameters.
- string route = "/translate?api-version=3.0&from=en&to=fr&to=zu";
- string textToTranslate = "I would really like to drive your car around the block a few times!";
- object[] body = new object[] { new { Text = textToTranslate } };
- var requestBody = JsonConvert.SerializeObject(body);
-
- using (var client = new HttpClient())
- using (var request = new HttpRequestMessage())
- {
- // Build the request.
- request.Method = HttpMethod.Post;
- request.RequestUri = new Uri(endpoint + route);
- request.Content = new StringContent(requestBody, Encoding.UTF8, "application/json");
- request.Headers.Add("Ocp-Apim-Subscription-Key", key);
- // location required if you're using a multi-service or regional (not global) resource.
- request.Headers.Add("Ocp-Apim-Subscription-Region", location);
-
- // Send the request and get response.
- HttpResponseMessage response = await client.SendAsync(request).ConfigureAwait(false);
- // Read response as a string.
- string result = await response.Content.ReadAsStringAsync();
- Console.WriteLine(result);
- }
- }
-}
-
-```
-
-### Run your C# application
-
-Once you've added a code sample to your application, choose the green **start button** next to translator_quickstart to build and run your program, or press **F5**.
--
-**Translation output:**
-
-After a successful call, you should see the following response:
-
-```json
-[
- {
- "detectedLanguage": {
- "language": "en",
- "score": 1.0
- },
- "translations": [
- {
- "text": "J'aimerais vraiment conduire votre voiture autour du pâté de maisons plusieurs fois!",
- "to": "fr"
- },
- {
- "text": "Ngingathanda ngempela ukushayela imoto yakho endaweni evimbelayo izikhathi ezimbalwa!",
- "to": "zu"
- }
- ]
- }
-]
-
-```
-
-### [Go](#tab/go)
-
-### Set up your Go environment
-
-You can use any text editor to write Go applications. We recommend using the latest version of [Visual Studio Code and the Go extension](/azure/developer/go/configure-visual-studio-code).
-
-> [!TIP]
->
-> If you're new to Go, try the [Get started with Go](/training/modules/go-get-started/) Learn module.
-
-1. If you haven't done so already, [download and install Go](https://go.dev/doc/install).
-
- * Download the Go version for your operating system.
- * Once the download is complete, run the installer.
- * Open a command prompt and enter the following to confirm Go was installed:
-
- ```console
- go version
- ```
-
-### Build your Go application
-
-1. In a console window (such as cmd, PowerShell, or Bash), create a new directory for your app called **translator-app**, and navigate to it.
-
-1. Create a new GO file named **translation.go** from the **translator-app** directory.
-
-1. Copy and paste the provided code sample into your **translation.go** file. Make sure you update the key variable with the value from your Azure portal Translator instance:
-
-```go
-package main
-
-import (
- "bytes"
- "encoding/json"
- "fmt"
- "log"
- "net/http"
- "net/url"
-)
-
-func main() {
- key := "<YOUR-TRANSLATOR-KEY>"
- endpoint := "https://api.cognitive.microsofttranslator.com/"
- uri := endpoint + "/translate?api-version=3.0"
-
- // location, also known as region.
- // required if you're using a multi-service or regional (not global) resource. It can be found in the Azure portal on the Keys and Endpoint page.
- location := "<YOUR-RESOURCE-LOCATION>"
-
- // Build the request URL. See: https://go.dev/pkg/net/url/#example_URL_Parse
- u, _ := url.Parse(uri)
- q := u.Query()
- q.Add("from", "en")
- q.Add("to", "fr")
- q.Add("to", "zu")
- u.RawQuery = q.Encode()
-
- // Create an anonymous struct for your request body and encode it to JSON
- body := []struct {
- Text string
- }{
- {Text: "I would really like to drive your car around the block a few times."},
- }
- b, _ := json.Marshal(body)
-
- // Build the HTTP POST request
- req, err := http.NewRequest("POST", u.String(), bytes.NewBuffer(b))
- if err != nil {
- log.Fatal(err)
- }
- // Add required headers to the request
- req.Header.Add("Ocp-Apim-Subscription-Key", key)
- // location required if you're using a multi-service or regional (not global) resource.
- req.Header.Add("Ocp-Apim-Subscription-Region", location)
- req.Header.Add("Content-Type", "application/json")
-
- // Call the Translator API
- res, err := http.DefaultClient.Do(req)
- if err != nil {
- log.Fatal(err)
- }
-
- // Decode the JSON response
- var result interface{}
- if err := json.NewDecoder(res.Body).Decode(&result); err != nil {
- log.Fatal(err)
- }
- // Format and print the response to terminal
- prettyJSON, _ := json.MarshalIndent(result, "", " ")
- fmt.Printf("%s\n", prettyJSON)
-}
-```
-
-### Run your Go application
-
-Once you've added a code sample to your application, your Go program can be executed in a command or terminal prompt. Make sure your prompt's path is set to the **translator-app** folder and use the following command:
-
-```console
- go run translation.go
-```
-
-**Translation output:**
-
-After a successful call, you should see the following response:
-
-```json
-[
- {
- "detectedLanguage": {
- "language": "en",
- "score": 1.0
- },
- "translations": [
- {
- "text": "J'aimerais vraiment conduire votre voiture autour du pâté de maisons plusieurs fois!",
- "to": "fr"
- },
- {
- "text": "Ngingathanda ngempela ukushayela imoto yakho endaweni evimbelayo izikhathi ezimbalwa!",
- "to": "zu"
- }
- ]
- }
-]
-
-```
-
-### [Java: Gradle](#tab/java)
-
-### Set up your Java environment
-
-* You should have the latest version of [Visual Studio Code](https://code.visualstudio.com/) or your preferred IDE. _See_ [Java in Visual Studio Code](https://code.visualstudio.com/docs/languages/java).
-
- >[!TIP]
- >
-    > * Visual Studio Code offers a **Coding Pack for Java** for Windows and macOS. The coding pack is a bundle of VS Code, the Java Development Kit (JDK), and a collection of suggested extensions by Microsoft. The Coding Pack can also be used to fix an existing development environment.
- > * If you are using VS Code and the Coding Pack For Java, install the [**Gradle for Java**](https://marketplace.visualstudio.com/items?itemName=vscjava.vscode-gradle) extension.
-
-* If you aren't using VS Code, make sure you have the following installed in your development environment:
-
- * A [**Java Development Kit** (OpenJDK)](/java/openjdk/download#openjdk-17) version 8 or later.
-
- * [**Gradle**](https://docs.gradle.org/current/userguide/installation.html), version 6.8 or later.
-
-### Create a new Gradle project
-
-1. In a console window (such as cmd, PowerShell, or Bash), create a new directory for your app called **translator-text-app**, and navigate to it.
-
- ```console
-    mkdir translator-text-app && cd translator-text-app
- ```
-
- ```powershell
- mkdir translator-text-app; cd translator-text-app
- ```
-
-1. Run the `gradle init` command from the translator-text-app directory. This command creates essential build files for Gradle, including _build.gradle.kts_, which is used at runtime to create and configure your application.
-
- ```console
- gradle init --type basic
- ```
-
-1. When prompted to choose a **DSL**, select **Kotlin**.
-
-1. Accept the default project name (translator-text-app) by selecting **Return** or **Enter**.
-
-1. Update `build.gradle.kts` with the following code:
-
- ```kotlin
- plugins {
- java
- application
- }
- application {
- mainClass.set("TranslatorText")
- }
- repositories {
- mavenCentral()
- }
- dependencies {
- implementation("com.squareup.okhttp3:okhttp:4.10.0")
- implementation("com.google.code.gson:gson:2.9.0")
- }
- ```
-
-### Create your Java Application
-
-1. From the translator-text-app directory, run the following command:
-
- ```console
- mkdir -p src/main/java
- ```
-
- You create the following directory structure:
-
- :::image type="content" source="media/quickstarts/java-directories-2.png" alt-text="Screenshot: Java directory structure.":::
-
-1. Navigate to the `java` directory and create a file named **`TranslatorText.java`**.
-
- > [!TIP]
- >
- > * You can create a new file using PowerShell.
- > * Open a PowerShell window in your project directory by holding down the Shift key and right-clicking the folder.
- > * Type the following command **New-Item TranslatorText.java**.
- >
- > * You can also create a new file in your IDE named `TranslatorText.java` and save it to the `java` directory.
-
-1. Open the `TranslatorText.java` file in your IDE and copy then paste the following code sample into your application. **Make sure you update the key with one of the key values from your Azure portal Translator instance:**
-
-```java
-import java.io.IOException;
-
-import com.google.gson.*;
-import okhttp3.MediaType;
-import okhttp3.OkHttpClient;
-import okhttp3.Request;
-import okhttp3.RequestBody;
-import okhttp3.Response;
-
-public class TranslatorText {
-    private static String key = "<your-translator-key>";
-
- // location, also known as region.
- // required if you're using a multi-service or regional (not global) resource. It can be found in the Azure portal on the Keys and Endpoint page.
- private static String location = "<YOUR-RESOURCE-LOCATION>";
--
- // Instantiates the OkHttpClient.
- OkHttpClient client = new OkHttpClient();
-
- // This function performs a POST request.
- public String Post() throws IOException {
- MediaType mediaType = MediaType.parse("application/json");
- RequestBody body = RequestBody.create(mediaType,
- "[{\"Text\": \"I would really like to drive your car around the block a few times!\"}]");
- Request request = new Request.Builder()
- .url("https://api.cognitive.microsofttranslator.com/translate?api-version=3.0&from=en&to=fr&to=zu")
- .post(body)
- .addHeader("Ocp-Apim-Subscription-Key", key)
- // location required if you're using a multi-service or regional (not global) resource.
- .addHeader("Ocp-Apim-Subscription-Region", location)
- .addHeader("Content-type", "application/json")
- .build();
- Response response = client.newCall(request).execute();
- return response.body().string();
- }
-
- // This function prettifies the json response.
- public static String prettify(String json_text) {
- JsonParser parser = new JsonParser();
- JsonElement json = parser.parse(json_text);
- Gson gson = new GsonBuilder().setPrettyPrinting().create();
- return gson.toJson(json);
- }
-
- public static void main(String[] args) {
- try {
- TranslatorText translateRequest = new TranslatorText();
- String response = translateRequest.Post();
- System.out.println(prettify(response));
- } catch (Exception e) {
- System.out.println(e);
- }
- }
-}
-```
-
-### Build and run your Java application
-
-Once you've added a code sample to your application, navigate back to your main project directory, **translator-text-app**, open a console window, and enter the following commands:
-
-1. Build your application with the `build` command:
-
- ```console
- gradle build
- ```
-
-1. Run your application with the `run` command:
-
- ```console
- gradle run
- ```
-
-**Translation output:**
-
-After a successful call, you should see the following response:
-
-```json
-[
- {
- "translations": [
- {
- "text": "J'aimerais vraiment conduire votre voiture autour du pâté de maisons plusieurs fois!",
- "to": "fr"
- },
- {
- "text": "Ngingathanda ngempela ukushayela imoto yakho endaweni evimbelayo izikhathi ezimbalwa!",
- "to": "zu"
- }
- ]
- }
-]
-
-```
-
-### [JavaScript: Node.js](#tab/nodejs)
-
-### Set up your Node.js Express project
-
-1. If you haven't done so already, install the latest version of [Node.js](https://nodejs.org/en/download/). Node Package Manager (npm) is included with the Node.js installation.
-
- > [!TIP]
- >
- > If you're new to Node.js, try the [Introduction to Node.js](/training/modules/intro-to-nodejs/) Learn module.
-
-1. In a console window (such as cmd, PowerShell, or Bash), create and navigate to a new directory for your app named `translator-app`.
-
- ```console
- mkdir translator-app && cd translator-app
- ```
-
- ```powershell
- mkdir translator-app; cd translator-app
- ```
-
-1. Run the npm init command to initialize the application and scaffold your project.
-
- ```console
- npm init
- ```
-
-1. Specify your project's attributes using the prompts presented in the terminal.
-
- * The most important attributes are name, version number, and entry point.
-    * We recommend keeping `index.js` for the entry point name. The description, test command, GitHub repository, keywords, author, and license information are optional attributes; they can be skipped for this project.
- * Accept the suggestions in parentheses by selecting **Return** or **Enter**.
- * After you've completed the prompts, a `package.json` file will be created in your translator-app directory.
-
-1. Open a console window and use npm to install the `axios` HTTP library and `uuid` package:
-
- ```console
- npm install axios uuid
- ```
-
-1. Create the `index.js` file in the application directory.
-
- > [!TIP]
- >
- > * You can create a new file using PowerShell.
- > * Open a PowerShell window in your project directory by holding down the Shift key and right-clicking the folder.
- > * Type the following command **New-Item index.js**.
- >
- > * You can also create a new file named `index.js` in your IDE and save it to the `translator-app` directory.
-
-### Build your JavaScript application
-
-Add the following code sample to your `index.js` file. **Make sure you update the key variable with the value from your Azure portal Translator instance**:
-
-```javascript
- const axios = require('axios').default;
- const { v4: uuidv4 } = require('uuid');
-
- let key = "<your-translator-key>";
- let endpoint = "https://api.cognitive.microsofttranslator.com";
-
- // location, also known as region.
- // required if you're using a multi-service or regional (not global) resource. It can be found in the Azure portal on the Keys and Endpoint page.
- let location = "<YOUR-RESOURCE-LOCATION>";
-
- axios({
- baseURL: endpoint,
- url: '/translate',
- method: 'post',
- headers: {
- 'Ocp-Apim-Subscription-Key': key,
- // location required if you're using a multi-service or regional (not global) resource.
- 'Ocp-Apim-Subscription-Region': location,
- 'Content-type': 'application/json',
- 'X-ClientTraceId': uuidv4().toString()
- },
- params: {
- 'api-version': '3.0',
- 'from': 'en',
- 'to': ['fr', 'zu']
- },
- data: [{
- 'text': 'I would really like to drive your car around the block a few times!'
- }],
- responseType: 'json'
- }).then(function(response){
- console.log(JSON.stringify(response.data, null, 4));
- })
-
-```
-
-### Run your JavaScript application
-
-Once you've added the code sample to your application, run your program:
-
-1. Navigate to your application directory (translator-app).
-
-1. Type the following command in your terminal:
-
- ```console
- node index.js
- ```
-
-**Translation output:**
-
-After a successful call, you should see the following response:
-
-```json
-[
- {
- "translations": [
- {
- "text": "J'aimerais vraiment conduire votre voiture autour du pâté de maisons plusieurs fois!",
- "to": "fr"
- },
- {
- "text": "Ngingathanda ngempela ukushayela imoto yakho endaweni evimbelayo izikhathi ezimbalwa!",
- "to": "zu"
- }
- ]
- }
-]
-
-```
-<!-- checked -->
-<!--
- > [!div class="nextstepaction"]
-> [My REST API call was successful](#next-steps) [I ran into an issue](https://microsoft.qualtrics.com/jfe/form/SV_0Cl5zkG3CnDjq6O?PLanguage=Java&Product=Translator&Page=quickstart-translator&Section=run-your-javascript-application) -->
-
-### [Python](#tab/python)
-
-### Set up your Python project
-
-1. If you haven't done so already, install the latest version of [Python 3.x](https://www.python.org/downloads/). The Python package installer (pip) is included with the Python installation.
-
- > [!TIP]
- >
- > If you're new to Python, try the [Introduction to Python](/training/paths/beginner-python/) Learn module.
-
-1. Open a terminal window and use pip to install the Requests library:
-
-    ```console
-    pip install requests
-    ```
-
-    > [!NOTE]
-    > This quickstart also uses the built-in Python modules `uuid` and `json`. They ship with Python and don't need to be installed separately.
-<!-- checked -->
-<!--
- > [!div class="nextstepaction"]
-> [I ran into an issue](https://microsoft.qualtrics.com/jfe/form/SV_0Cl5zkG3CnDjq6O?PLanguage=Python&Product=Translator&Page=quickstart-translator&Section=set-up-your-python-project) -->
-
-### Build your Python application
-
-1. Create a new Python file called **translator-app.py** in your preferred editor or IDE.
-
-1. Add the following code sample to your `translator-app.py` file. **Make sure you update the key with one of the values from your Azure portal Translator instance**.
-
-```python
-import requests, uuid, json
-
-# Add your key and endpoint
-key = "<your-translator-key>"
-endpoint = "https://api.cognitive.microsofttranslator.com"
-
-# location, also known as region.
-# required if you're using a multi-service or regional (not global) resource. It can be found in the Azure portal on the Keys and Endpoint page.
-location = "<YOUR-RESOURCE-LOCATION>"
-
-path = '/translate'
-constructed_url = endpoint + path
-
-params = {
- 'api-version': '3.0',
- 'from': 'en',
- 'to': ['fr', 'zu']
-}
-
-headers = {
- 'Ocp-Apim-Subscription-Key': key,
- # location required if you're using a multi-service or regional (not global) resource.
- 'Ocp-Apim-Subscription-Region': location,
- 'Content-type': 'application/json',
- 'X-ClientTraceId': str(uuid.uuid4())
-}
-
-# You can pass more than one object in body.
-body = [{
- 'text': 'I would really like to drive your car around the block a few times!'
-}]
-
-request = requests.post(constructed_url, params=params, headers=headers, json=body)
-response = request.json()
-
-print(json.dumps(response, sort_keys=True, ensure_ascii=False, indent=4, separators=(',', ': ')))
-
-```
-<!-- checked -->
-<!--
- > [!div class="nextstepaction"]
-> [I ran into an issue](https://microsoft.qualtrics.com/jfe/form/SV_0Cl5zkG3CnDjq6O?PLanguage=Python&Product=Translator&Page=quickstart-translator&Section=build-your-python-application) -->
-
-### Run your Python application
-
-Once you've added the code sample to your application, run your program:
-
-1. Navigate to your **translator-app.py** file.
-
-1. Type the following command in your console:
-
- ```console
- python translator-app.py
- ```
-
-**Translation output:**
-
-After a successful call, you should see the following response:
-
-```json
-[
- {
- "translations": [
- {
- "text": "J'aimerais vraiment conduire votre voiture autour du pâté de maisons plusieurs fois!",
- "to": "fr"
- },
- {
- "text": "Ngingathanda ngempela ukushayela imoto yakho endaweni evimbelayo izikhathi ezimbalwa!",
- "to": "zu"
- }
- ]
- }
-]
-
-```
-<!-- checked -->
-<!--
- > [!div class="nextstepaction"]
-> [My REST API call was successful](#next-steps) [I ran into an issue](https://microsoft.qualtrics.com/jfe/form/SV_0Cl5zkG3CnDjq6O?PLanguage=Python&Product=Translator&Page=quickstart-translator&Section=run-your-python-application) -->
---
-## Next steps
-
-That's it, congratulations! You've learned to use the Translator service to translate text.
-
- Explore our how-to documentation and take a deeper dive into Translation service capabilities:
-
-* [**Translate text**](translator-text-apis.md#translate-text)
-
-* [**Transliterate text**](translator-text-apis.md#transliterate-text)
-
-* [**Detect and identify language**](translator-text-apis.md#detect-language)
-
-* [**Get sentence length**](translator-text-apis.md#get-sentence-length)
-
-* [**Dictionary lookup and alternate translations**](translator-text-apis.md#dictionary-examples-translations-in-context)
cognitive-services Quickstart Text Sdk https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Translator/quickstart-text-sdk.md
- Title: "Quickstart: Azure Cognitive Services Translator SDKs"-
-description: "Learn to translate text with the Translator service SDks in a programming language of your choice: C#, Java, JavaScript, or Python."
------ Previously updated : 05/09/2023--
-zone_pivot_groups: programming-languages-set-translator-sdk
--
-<!-- markdownlint-disable MD033 -->
-<!-- markdownlint-disable MD001 -->
-<!-- markdownlint-disable MD024 -->
-<!-- markdownlint-disable MD036 -->
-<!-- markdownlint-disable MD049 -->
-
-# Quickstart: Azure Cognitive Services Translator SDKs (preview)
-
-> [!IMPORTANT]
->
-> * The Translator text SDKs are currently available in public preview. Features, approaches, and processes may change before general availability (GA) based on user feedback.
-
-In this quickstart, get started using the Translator service to [translate text](reference/v3-0-translate.md) with a programming language of your choice. For this project, we recommend using the free pricing tier (F0) while you're learning the technology, and later upgrading to a paid tier for production.
-
-## Prerequisites
-
-You need an active Azure subscription. If you don't have one, you can [create one for free](https://azure.microsoft.com/free/cognitive-services/).
-
-* Once you have your Azure subscription, create a [Translator resource](https://portal.azure.com/#create/Microsoft.CognitiveServicesTextTranslation) in the Azure portal.
-
-* After your resource deploys, select **Go to resource** and retrieve your key and endpoint.
-
- * Get the key, endpoint, and region from the resource to connect your application to the Translator service. Paste these values into the code later in the quickstart. You can find them on the Azure portal **Keys and Endpoint** page:
-
- :::image type="content" source="media/quickstarts/keys-and-endpoint-portal.png" alt-text="Screenshot: Azure portal keys and endpoint page.":::
---------
-That's it, congratulations! In this quickstart, you used a Text Translation SDK to translate text.
-
-## Next steps
-
-Learn more about Text Translation development options:
-
-> [!div class="nextstepaction"]
->[Text Translation SDK overview](text-sdk-overview.md) </br></br>[Text Translator V3 reference](reference/v3-0-reference.md)
cognitive-services Rest Api Guide https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Translator/reference/rest-api-guide.md
- Title: "Text Translation REST API reference guide"-
-description: View a list of links to the Text Translation REST APIs.
------ Previously updated : 04/26/2022---
-# Text Translation REST API
-
-Text Translation is a cloud-based feature of the Azure Translator service and is part of the Azure Cognitive Services family of REST APIs. The Text Translation API translates text between language pairs across all [supported languages and dialects](../../language-support.md). The available methods are listed in the table below:
-
-| Request| Method| Description|
-||--||
-| [**languages**](v3-0-languages.md) | **GET** | Returns the set of languages currently supported by the **translation**, **transliteration**, and **dictionary** methods. This request doesn't require authentication headers and you don't need a Translator resource to view the supported language set.|
-|[**translate**](v3-0-translate.md) | **POST**| Translates specified source language text into target language text.|
-|[**transliterate**](v3-0-transliterate.md) | **POST** | Maps source language script or alphabet to a target language script or alphabet.|
-|[**detect**](v3-0-detect.md) | **POST** | Identifies the source language. |
-|[**breakSentence**](v3-0-break-sentence.md) | **POST** | Returns an array of integers representing the lengths of the sentences in a source text. |
-| [**dictionary/lookup**](v3-0-dictionary-lookup.md) | **POST** | Returns alternative translations for a single word. |
-| [**dictionary/examples**](v3-0-dictionary-examples.md) | **POST** | Returns how a term is used in context. |
-
-> [!div class="nextstepaction"]
-> [Create a Translator resource in the Azure portal.](../how-to-create-translator-resource.md)
-
-> [!div class="nextstepaction"]
-> [Quickstart: REST API and your programming language](../quickstart-translator.md)
cognitive-services V3 0 Break Sentence https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Translator/reference/v3-0-break-sentence.md
- Title: Translator BreakSentence Method-
-description: The Translator BreakSentence method identifies the positioning of sentence boundaries in a piece of text.
------- Previously updated : 12/23/2022---
-# Translator 3.0: BreakSentence
-
-Identifies the positioning of sentence boundaries in a piece of text.
-
-## Request URL
-
-Send a `POST` request to:
-
-```HTTP
-https://api.cognitive.microsofttranslator.com/breaksentence?api-version=3.0
-```
-
-## Request parameters
-
-Request parameters passed on the query string are:
-
-| Query Parameter | Description |
-| -| -- |
-| api-version <img width=200/> | **Required query parameter**.<br/>Version of the API requested by the client. Value must be `3.0`. |
-| language | **Optional query parameter**.<br/>Language tag identifying the language of the input text. If a code isn't specified, automatic language detection will be applied. |
-| script | **Optional query parameter**.<br/>Script tag identifying the script used by the input text. If a script isn't specified, the default script of the language will be assumed. |
-
-Request headers include:
-
-| Headers | Description |
-| - | -- |
-| Authentication header(s) <img width=200/> | **Required request header**.<br/>See <a href="/azure/cognitive-services/translator/reference/v3-0-reference#authentication">available options for authentication</a>. |
-| Content-Type | **Required request header**.<br/>Specifies the content type of the payload. Possible values are: `application/json`. |
-| Content-Length | **Required request header**.<br/>The length of the request body. |
-| X-ClientTraceId | **Optional**.<br/>A client-generated GUID to uniquely identify the request. You can omit this header if you include the trace ID in the query string using a query parameter named `ClientTraceId`. |
-
-## Request body
-
-The body of the request is a JSON array. Each array element is a JSON object with a string property named `Text`. Sentence boundaries are computed for the value of the `Text` property. A sample request body with one piece of text looks like this:
-
-```json
-[
- { "Text": "How are you? I am fine. What did you do today?" }
-]
-```
-
-The following limitations apply:
-
-* The array can have at most 100 elements.
-* The text value of an array element can't exceed 50,000 characters including spaces.
-* The entire text included in the request can't exceed 50,000 characters including spaces.
-* If the `language` query parameter is specified, then all array elements must be in the same language. Otherwise, language auto-detection is applied to each array element independently.
-
-## Response body
-
-A successful response is a JSON array with one result for each string in the input array. A result object includes the following properties:
-
-* `sentLen`: An array of integers representing the lengths of the sentences in the text element. The length of the array is the number of sentences, and the values are the length of each sentence.
-
-* `detectedLanguage`: An object describing the detected language through the following properties:
-
- * `language`: Code of the detected language.
-
- * `score`: A float value indicating the confidence in the result. The score is between zero (0) and one (1.0). A low score (<= 0.4) indicates a low confidence.
-
-The `detectedLanguage` property is only present in the result object when language auto-detection is requested.
-
-An example JSON response is:
-
-```json
-[
- {
- "detectedLanguage": {
- "language": "en",
- "score": 1.0
- },
- "sentLen": [
- 13,
- 11,
- 22
- ]
- }
-]
-```
-
-## Response headers
-
-<table width="100%">
- <th width="20%">Headers</th>
- <th>Description</th>
- <tr>
- <td>X-RequestId</td>
- <td>Value generated by the service to identify the request. It's used for troubleshooting purposes.</td>
- </tr>
-</table>
-
-## Response status codes
-
-The following are the possible HTTP status codes that a request returns.
-
-<table width="100%">
- <th width="20%">Status Code</th>
- <th>Description</th>
- <tr>
- <td>200</td>
- <td>Success.</td>
- </tr>
- <tr>
- <td>400</td>
- <td>One of the query parameters is missing or not valid. Correct request parameters before retrying.</td>
- </tr>
- <tr>
- <td>401</td>
- <td>The request couldn't be authenticated. Check that credentials are specified and valid.</td>
- </tr>
- <tr>
- <td>403</td>
-    <td>The request isn't authorized. Check the detailed error message. This response code often indicates that all free translations provided with a trial subscription have been used up.</td>
- </tr>
- <tr>
- <td>429</td>
- <td>The server rejected the request because the client has exceeded request limits.</td>
- </tr>
- <tr>
- <td>500</td>
- <td>An unexpected error occurred. If the error persists, report it with: date and time of the failure, request identifier from response header `X-RequestId`, and client identifier from request header `X-ClientTraceId`.</td>
- </tr>
- <tr>
- <td>503</td>
- <td>Server temporarily unavailable. Retry the request. If the error persists, report it with: date and time of the failure, request identifier from response header `X-RequestId`, and client identifier from request header `X-ClientTraceId`.</td>
- </tr>
-</table>
-
-If an error occurs, the request will also return a JSON error response. The error code is a 6-digit number combining the 3-digit HTTP status code followed by a 3-digit number to further categorize the error. Common error codes can be found on the [v3 Translator reference page](./v3-0-reference.md#errors).
-
-## Examples
-
-The following example shows how to obtain sentence boundaries for a single sentence. The language of the sentence is automatically detected by the service.
-
-```curl
-curl -X POST "https://api.cognitive.microsofttranslator.com/breaksentence?api-version=3.0" -H "Ocp-Apim-Subscription-Key: <client-secret>" -H "Content-Type: application/json" -d "[{'Text':'How are you? I am fine. What did you do today?'}]"
-```
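-
-If you're calling the endpoint from code rather than curl, here's a minimal Python sketch of the same request using the `requests` library. The key and region values are placeholders; the region header is only needed for regional or multi-service resources.
-
-```python
-import requests, json
-
-key = "<your-translator-key>"  # placeholder
-endpoint = "https://api.cognitive.microsofttranslator.com"
-
-headers = {
-    "Ocp-Apim-Subscription-Key": key,
-    # Required only for regional or multi-service resources:
-    # "Ocp-Apim-Subscription-Region": "<your-resource-region>",
-    "Content-Type": "application/json"
-}
-
-body = [{ "Text": "How are you? I am fine. What did you do today?" }]
-
-response = requests.post(endpoint + "/breaksentence",
-                         params={"api-version": "3.0"},
-                         headers=headers, json=body)
-
-# Each result's sentLen array holds the character length of every detected sentence.
-print(json.dumps(response.json(), indent=4, ensure_ascii=False))
-```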
cognitive-services V3 0 Detect https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Translator/reference/v3-0-detect.md
- Title: Translator Detect Method-
-description: Identify the language of a piece of text with the Azure Cognitive Services Translator Detect method.
------- Previously updated : 02/01/2019---
-# Translator 3.0: Detect
-
-Identifies the language of a piece of text.
-
-## Request URL
-
-Send a `POST` request to:
-
-```HTTP
-https://api.cognitive.microsofttranslator.com/detect?api-version=3.0
-```
-
-## Request parameters
-
-Request parameters passed on the query string are:
-
-| Query parameter | Description |
-| | |
-| api-version | *Required parameter*.<br/>Version of the API requested by the client. Value must be `3.0`. |
-
-Request headers include:
-
-| Headers | Description |
-| | |
-| Authentication header(s) | <em>Required request header</em>.<br/>See [available options for authentication](./v3-0-reference.md#authentication)</a>. |
-| Content-Type | *Required request header*.<br/>Specifies the content type of the payload. Possible values are: `application/json`. |
-| Content-Length | *Required request header*.<br/>The length of the request body. |
-| X-ClientTraceId | *Optional*.<br/>A client-generated GUID to uniquely identify the request. Note that you can omit this header if you include the trace ID in the query string using a query parameter named `ClientTraceId`. |
-
-## Request body
-
-The body of the request is a JSON array. Each array element is a JSON object with a string property named `Text`. Language detection is applied to the value of the `Text` property. The language auto-detection works better with longer input text. A sample request body looks like this:
-
-```json
-[
-    { "Text": "Ich würde wirklich gerne Ihr Auto ein paar Mal um den Block fahren." }
-]
-```
-
-The following limitations apply:
-
-* The array can have at most 100 elements.
-* The entire text included in the request cannot exceed 50,000 characters including spaces.
-
-## Response body
-
-A successful response is a JSON array with one result for each string in the input array. A result object includes the following properties:
-
- * `language`: Code of the detected language.
-
- * `score`: A float value indicating the confidence in the result. The score is between zero and one and a low score indicates a low confidence.
-
- * `isTranslationSupported`: A boolean value which is true if the detected language is one of the languages supported for text translation.
-
- * `isTransliterationSupported`: A boolean value which is true if the detected language is one of the languages supported for transliteration.
-
- * `alternatives`: An array of other possible languages. Each element of the array is another object with the same properties listed above: `language`, `score`, `isTranslationSupported` and `isTransliterationSupported`.
-
-An example JSON response is:
-
-```json
-[
-
- {
-
- "language": "de",
-
- "score": 1.0,
-
- "isTranslationSupported": true,
-
- "isTransliterationSupported": false
-
- }
-
-]
-```
-
-## Response headers
-
-| Headers | Description |
-| | |
-| X-RequestId | Value generated by the service to identify the request. It is used for troubleshooting purposes. |
-
-## Response status codes
-
-The following are the possible HTTP status codes that a request returns.
-
-| Status Code | Description |
-| | |
-| 200 | Success. |
-| 400 | One of the query parameters is missing or not valid. Correct request parameters before retrying. |
-| 401 | The request could not be authenticated. Check that credentials are specified and valid. |
-| 403 | The request is not authorized. Check the detailed error message. This often indicates that all free translations provided with a trial subscription have been used up. |
-| 429 | The server rejected the request because the client has exceeded request limits. |
-| 500 | An unexpected error occurred. If the error persists, report it with: date and time of the failure, request identifier from response header `X-RequestId`, and client identifier from request header `X-ClientTraceId`. |
-| 503 | Server temporarily unavailable. Retry the request. If the error persists, report it with: date and time of the failure, request identifier from response header `X-RequestId`, and client identifier from request header `X-ClientTraceId`. |
-
-If an error occurs, the request will also return a JSON error response. The error code is a 6-digit number combining the 3-digit HTTP status code followed by a 3-digit number to further categorize the error. Common error codes can be found on the [v3 Translator reference page](./v3-0-reference.md#errors).
-
-## Examples
-
-The following example shows how to identify the language of a piece of text.
-
-```curl
-curl -X POST "https://api.cognitive.microsofttranslator.com/detect?api-version=3.0" -H "Ocp-Apim-Subscription-Key: <client-secret>" -H "Content-Type: application/json" -d "[{'Text':'What language is this text written in?'}]"
-```
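-
-For comparison, here's a minimal Python sketch of the same request using the `requests` library. The key value is a placeholder, and the optional region header is omitted.
-
-```python
-import requests
-
-key = "<your-translator-key>"  # placeholder
-endpoint = "https://api.cognitive.microsofttranslator.com"
-
-headers = {
-    "Ocp-Apim-Subscription-Key": key,
-    "Content-Type": "application/json"
-}
-
-body = [{ "Text": "What language is this text written in?" }]
-
-response = requests.post(endpoint + "/detect",
-                         params={"api-version": "3.0"},
-                         headers=headers, json=body)
-
-for result in response.json():
-    # Primary detection plus any lower-confidence alternatives.
-    print(result["language"], result["score"])
-    for alternative in result.get("alternatives", []):
-        print("  alternative:", alternative["language"], alternative["score"])
-```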
cognitive-services V3 0 Dictionary Examples https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Translator/reference/v3-0-dictionary-examples.md
- Title: Translator Dictionary Examples Method-
-description: The Translator Dictionary Examples method provides examples that show how terms in the dictionary are used in context.
------- Previously updated : 01/21/2020---
-# Translator 3.0: Dictionary Examples
-
-Provides examples that show how terms in the dictionary are used in context. This operation is used in tandem with [Dictionary lookup](./v3-0-dictionary-lookup.md).
-
-## Request URL
-
-Send a `POST` request to:
-
-```HTTP
-https://api.cognitive.microsofttranslator.com/dictionary/examples?api-version=3.0
-```
-
-## Request parameters
-
-Request parameters passed on the query string are:
-
-| Query Parameter | Description |
-| | -- |
-| api-version <img width=200/> | **Required parameter**.<br/>Version of the API requested by the client. Value must be `3.0`. |
-| from | **Required parameter**.<br/>Specifies the language of the input text. The source language must be one of the [supported languages](./v3-0-languages.md) included in the `dictionary` scope. |
-| to | **Required parameter**.<br/>Specifies the language of the output text. The target language must be one of the [supported languages](./v3-0-languages.md) included in the `dictionary` scope. |
-
-Request headers include:
-
-| Headers | Description |
-| | -- |
-| Authentication header(s) <img width=200/> | **Required request header**.<br/>See <a href="/azure/cognitive-services/translator/reference/v3-0-reference#authentication">available options for authentication</a>. |
-| Content-Type | **Required request header**.<br/>Specifies the content type of the payload. Possible values are: `application/json`. |
-| Content-Length | **Required request header**.<br/>The length of the request body. |
-| X-ClientTraceId | **Optional**.<br/>A client-generated GUID to uniquely identify the request. You can omit this header if you include the trace ID in the query string using a query parameter named `ClientTraceId`. |
-
-## Request body
-
-The body of the request is a JSON array. Each array element is a JSON object with the following properties:
-
- * `Text`: A string specifying the term to lookup. This should be the value of a `normalizedText` field from the back-translations of a previous [Dictionary lookup](./v3-0-dictionary-lookup.md) request. It can also be the value of the `normalizedSource` field.
-
- * `Translation`: A string specifying the translated text previously returned by the [Dictionary lookup](./v3-0-dictionary-lookup.md) operation. This should be the value from the `normalizedTarget` field in the `translations` list of the [Dictionary lookup](./v3-0-dictionary-lookup.md) response. The service will return examples for the specific source-target word-pair.
-
-An example is:
-
-```json
-[
- {"Text":"fly", "Translation":"volar"}
-]
-```
-
-The following limitations apply:
-
-* The array can have at most 10 elements.
-* The text value of an array element cannot exceed 100 characters including spaces.
-
-## Response body
-
-A successful response is a JSON array with one result for each string in the input array. A result object includes the following properties:
-
- * `normalizedSource`: A string giving the normalized form of the source term. Generally, this should be identical to the value of the `Text` field at the matching list index in the body of the request.
-
- * `normalizedTarget`: A string giving the normalized form of the target term. Generally, this should be identical to the value of the `Translation` field at the matching list index in the body of the request.
-
- * `examples`: A list of examples for the (source term, target term) pair. Each element of the list is an object with the following properties:
-
- * `sourcePrefix`: The string to concatenate _before_ the value of `sourceTerm` to form a complete example. Do not add a space character, since it is already there when it should be. This value may be an empty string.
-
- * `sourceTerm`: A string equal to the actual term looked up. The string is added with `sourcePrefix` and `sourceSuffix` to form the complete example. Its value is separated so it can be marked in a user interface, e.g., by bolding it.
-
- * `sourceSuffix`: The string to concatenate _after_ the value of `sourceTerm` to form a complete example. Do not add a space character, since it is already there when it should be. This value may be an empty string.
-
- * `targetPrefix`: A string similar to `sourcePrefix` but for the target.
-
- * `targetTerm`: A string similar to `sourceTerm` but for the target.
-
- * `targetSuffix`: A string similar to `sourceSuffix` but for the target.
-
- > [!NOTE]
- > If there are no examples in the dictionary, the response is 200 (OK) but the `examples` list is an empty list.
-
-## Examples
-
-This example shows how to look up examples for the pair made up of the English term `fly` and its Spanish translation `volar`.
-
-```curl
-curl -X POST "https://api.cognitive.microsofttranslator.com/dictionary/examples?api-version=3.0&from=en&to=es" -H "Ocp-Apim-Subscription-Key: <client-secret>" -H "Content-Type: application/json" -d "[{'Text':'fly', 'Translation':'volar'}]"
-```
-
-The response body (abbreviated for clarity) is:
-
-```
-[
- {
- "normalizedSource":"fly",
- "normalizedTarget":"volar",
- "examples":[
- {
- "sourcePrefix":"They need machines to ",
- "sourceTerm":"fly",
- "sourceSuffix":".",
- "targetPrefix":"Necesitan máquinas para ",
- "targetTerm":"volar",
- "targetSuffix":"."
- },
- {
- "sourcePrefix":"That should really ",
- "sourceTerm":"fly",
- "sourceSuffix":".",
- "targetPrefix":"Eso realmente debe ",
- "targetTerm":"volar",
- "targetSuffix":"."
- },
- //
- // ...list abbreviated for documentation clarity
- //
- ]
- }
-]
-```
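-
-To illustrate how the prefix, term, and suffix fields fit together, here's a minimal Python sketch (with a placeholder key) that makes the same request and reassembles each example sentence by simple concatenation:
-
-```python
-import requests
-
-key = "<your-translator-key>"  # placeholder
-endpoint = "https://api.cognitive.microsofttranslator.com"
-
-headers = {
-    "Ocp-Apim-Subscription-Key": key,
-    "Content-Type": "application/json"
-}
-
-body = [{ "Text": "fly", "Translation": "volar" }]
-
-response = requests.post(endpoint + "/dictionary/examples",
-                         params={"api-version": "3.0", "from": "en", "to": "es"},
-                         headers=headers, json=body)
-
-for result in response.json():
-    for example in result["examples"]:
-        # The complete sentence is the prefix, term, and suffix concatenated as-is.
-        source = example["sourcePrefix"] + example["sourceTerm"] + example["sourceSuffix"]
-        target = example["targetPrefix"] + example["targetTerm"] + example["targetSuffix"]
-        print(source, "=>", target)
-```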
cognitive-services V3 0 Dictionary Lookup https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Translator/reference/v3-0-dictionary-lookup.md
- Title: Translator Dictionary Lookup Method-
-description: The Dictionary Lookup method provides alternative translations for a word and a small number of idiomatic phrases.
------- Previously updated : 01/21/2020---
-# Translator 3.0: Dictionary Lookup
-
-Provides alternative translations for a word and a small number of idiomatic phrases. Each translation has a part-of-speech and a list of back-translations. The back-translations enable a user to understand the translation in context. The [Dictionary Example](./v3-0-dictionary-examples.md) operation allows further drill down to see example uses of each translation pair.
-
-## Request URL
-
-Send a `POST` request to:
-
-```HTTP
-https://api.cognitive.microsofttranslator.com/dictionary/lookup?api-version=3.0
-```
-
-## Request parameters
-
-Request parameters passed on the query string are:
-
-| Query Parameter | Description |
-| | -- |
-| api-version <img width=200/> | **Required parameter**.<br/>Version of the API requested by the client. Value must be `3.0` |
-| from | **Required parameter**.<br/>Specifies the language of the input text. The source language must be one of the [supported languages](./v3-0-languages.md) included in the `dictionary` scope. |
-| to | **Required parameter**.<br/>Specifies the language of the output text. The target language must be one of the [supported languages](v3-0-languages.md) included in the `dictionary` scope. |
--
-Request headers include:
-
-| Headers | Description |
-| | -- |
-| Authentication header(s) <img width=200/> | **Required request header**.<br/>See <a href="/azure/cognitive-services/translator/reference/v3-0-reference#authentication">available options for authentication</a>. |
-| Content-Type | **Required request header**.<br/>Specifies the content type of the payload. Possible values are: `application/json`. |
-| Content-Length | **Required request header**.<br/>The length of the request body. |
-| X-ClientTraceId | **Optional**.<br/>A client-generated GUID to uniquely identify the request. You can omit this header if you include the trace ID in the query string using a query parameter named `ClientTraceId`. |
-
-## Request body
-
-The body of the request is a JSON array. Each array element is a JSON object with a string property named `Text`, which represents the term to look up.
-
-```json
-[
- {"Text":"fly"}
-]
-```
-
-The following limitations apply:
-
-* The array can have at most 10 elements.
-* The text value of an array element cannot exceed 100 characters including spaces.
-
-## Response body
-
-A successful response is a JSON array with one result for each string in the input array. A result object includes the following properties:
-
- * `normalizedSource`: A string giving the normalized form of the source term. For example, if the request is "JOHN", the normalized form will be "john". The content of this field becomes the input to [lookup examples](./v3-0-dictionary-examples.md).
-
- * `displaySource`: A string giving the source term in a form best suited for end-user display. For example, if the input is "JOHN", the display form will reflect the usual spelling of the name: "John".
-
- * `translations`: A list of translations for the source term. Each element of the list is an object with the following properties:
-
- * `normalizedTarget`: A string giving the normalized form of this term in the target language. This value should be used as input to [lookup examples](./v3-0-dictionary-examples.md).
-
- * `displayTarget`: A string giving the term in the target language and in a form best suited for end-user display. Generally, this will only differ from the `normalizedTarget` in terms of capitalization. For example, a proper noun like "Juan" will have `normalizedTarget = "juan"` and `displayTarget = "Juan"`.
-
- * `posTag`: A string associating this term with a part-of-speech tag.
-
- | Tag name | Description |
- |-|--|
- | ADJ | Adjectives |
- | ADV | Adverbs |
- | CONJ | Conjunctions |
- | DET | Determiners |
- | MODAL | Verbs |
- | NOUN | Nouns |
- | PREP | Prepositions |
- | PRON | Pronouns |
- | VERB | Verbs |
- | OTHER | Other |
-
- As an implementation note, these tags were determined by part-of-speech tagging the English side, and then taking the most frequent tag for each source/target pair. So if people frequently translate a Spanish word to a different part-of-speech tag in English, tags may end up being wrong (with respect to the Spanish word).
-
- * `confidence`: A value between 0.0 and 1.0 which represents the "confidence" (or perhaps more accurately, "probability in the training data") of that translation pair. The sum of confidence scores for one source word may or may not sum to 1.0.
-
- * `prefixWord`: A string giving the word to display as a prefix of the translation. Currently, this is the gendered determiner of nouns, in languages that have gendered determiners. For example, the prefix of the Spanish word "mosca" is "la", since "mosca" is a feminine noun in Spanish. This is only dependent on the translation, and not on the source. If there is no prefix, it will be the empty string.
-
- * `backTranslations`: A list of "back translations" of the target. For example, source words that the target can translate to. The list is guaranteed to contain the source word that was requested (e.g., if the source word being looked up is "fly", then it is guaranteed that "fly" will be in the `backTranslations` list). However, it is not guaranteed to be in the first position, and often will not be. Each element of the `backTranslations` list is an object described by the following properties:
-
- * `normalizedText`: A string giving the normalized form of the source term that is a back-translation of the target. This value should be used as input to [lookup examples](./v3-0-dictionary-examples.md).
-
- * `displayText`: A string giving the source term that is a back-translation of the target in a form best suited for end-user display.
-
- * `numExamples`: An integer representing the number of examples that are available for this translation pair. Actual examples must be retrieved with a separate call to [lookup examples](./v3-0-dictionary-examples.md). The number is mostly intended to facilitate display in a UX. For example, a user interface may add a hyperlink to the back-translation if the number of examples is greater than zero and show the back-translation as plain text if there are no examples. Note that the actual number of examples returned by a call to [lookup examples](./v3-0-dictionary-examples.md) may be less than `numExamples`, because additional filtering may be applied on the fly to remove "bad" examples.
-
- * `frequencyCount`: An integer representing the frequency of this translation pair in the data. The main purpose of this field is to provide a user interface with a means to sort back-translations so the most frequent terms are first.
-
- > [!NOTE]
- > If the term being looked-up does not exist in the dictionary, the response is 200 (OK) but the `translations` list is an empty list.
-
-## Examples
-
-This example shows how to look up alternative translations in Spanish of the English term `fly`.
-
-```curl
-curl -X POST "https://api.cognitive.microsofttranslator.com/dictionary/lookup?api-version=3.0&from=en&to=es" -H "Ocp-Apim-Subscription-Key: <client-secret>" -H "Content-Type: application/json" -d "[{'Text':'fly'}]"
-```
-
-The response body (abbreviated for clarity) is:
-
-```
-[
- {
- "normalizedSource":"fly",
- "displaySource":"fly",
- "translations":[
- {
- "normalizedTarget":"volar",
- "displayTarget":"volar",
- "posTag":"VERB",
- "confidence":0.4081,
- "prefixWord":"",
- "backTranslations":[
- {"normalizedText":"fly","displayText":"fly","numExamples":15,"frequencyCount":4637},
- {"normalizedText":"flying","displayText":"flying","numExamples":15,"frequencyCount":1365},
- {"normalizedText":"blow","displayText":"blow","numExamples":15,"frequencyCount":503},
- {"normalizedText":"flight","displayText":"flight","numExamples":15,"frequencyCount":135}
- ]
- },
- {
- "normalizedTarget":"mosca",
- "displayTarget":"mosca",
- "posTag":"NOUN",
- "confidence":0.2668,
- "prefixWord":"",
- "backTranslations":[
- {"normalizedText":"fly","displayText":"fly","numExamples":15,"frequencyCount":1697},
- {"normalizedText":"flyweight","displayText":"flyweight","numExamples":0,"frequencyCount":48},
- {"normalizedText":"flies","displayText":"flies","numExamples":9,"frequencyCount":34}
- ]
- },
- //
- // ...list abbreviated for documentation clarity
- //
- ]
- }
-]
-```
-
-This example shows what happens when the term being looked up doesn't exist in the dictionary for the specified language pair.
-
-```curl
-curl -X POST "https://api.cognitive.microsofttranslator.com/dictionary/lookup?api-version=3.0&from=en&to=es" -H "X-ClientTraceId: 875030C7-5380-40B8-8A03-63DACCF69C11" -H "Ocp-Apim-Subscription-Key: <client-secret>" -H "Content-Type: application/json" -d "[{'Text':'fly123456'}]"
-```
-
-Since the term is not found in the dictionary, the response body includes an empty `translations` list.
-
-```
-[
- {
- "normalizedSource":"fly123456",
- "displaySource":"fly123456",
- "translations":[]
- }
-]
-```
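-
-As a rough illustration, the following Python sketch (with a placeholder key) performs the same lookup, sorts the translations by `confidence`, and handles the empty `translations` list shown above:
-
-```python
-import requests
-
-key = "<your-translator-key>"  # placeholder
-endpoint = "https://api.cognitive.microsofttranslator.com"
-
-headers = {
-    "Ocp-Apim-Subscription-Key": key,
-    "Content-Type": "application/json"
-}
-
-body = [{ "Text": "fly" }]
-
-response = requests.post(endpoint + "/dictionary/lookup",
-                         params={"api-version": "3.0", "from": "en", "to": "es"},
-                         headers=headers, json=body)
-
-for result in response.json():
-    translations = result["translations"]
-    if not translations:
-        print("No dictionary entry for", result["displaySource"])
-        continue
-    # Show the highest-confidence translations first.
-    for item in sorted(translations, key=lambda t: t["confidence"], reverse=True):
-        print(item["displayTarget"], item["posTag"], item["confidence"])
-```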
cognitive-services V3 0 Languages https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Translator/reference/v3-0-languages.md
- Title: Translator Languages Method-
-description: The Languages method gets the set of languages currently supported by other operations of the Translator.
------- Previously updated : 02/01/2019---
-# Translator 3.0: Languages
-
-Gets the set of languages currently supported by other operations of the Translator.
-
-## Request URL
-
-Send a `GET` request to:
-```HTTP
-https://api.cognitive.microsofttranslator.com/languages?api-version=3.0
-```
-
-## Request parameters
-
-Request parameters passed on the query string are:
-
-<table width="100%">
- <th width="20%">Query parameter</th>
- <th>Description</th>
- <tr>
- <td>api-version</td>
- <td><em>Required parameter</em>.<br/>Version of the API requested by the client. Value must be `3.0`.</td>
- </tr>
- <tr>
- <td>scope</td>
- <td>*Optional parameter*.<br/>A comma-separated list of names defining the group of languages to return. Allowed group names are: `translation`, `transliteration` and `dictionary`. If no scope is given, then all groups are returned, which is equivalent to passing `scope=translation,transliteration,dictionary`.</td>
- </tr>
-</table>
-
-*See* [response body](#response-body).
-
-Request headers are:
-
-<table width="100%">
- <th width="20%">Headers</th>
- <th>Description</th>
- <tr>
- <td>Accept-Language</td>
- <td>*Optional request header*.<br/>The language to use for user interface strings. Some of the fields in the response are names of languages or names of regions. Use this parameter to define the language in which these names are returned. The language is specified by providing a well-formed BCP 47 language tag. For instance, use the value `fr` to request names in French or use the value `zh-Hant` to request names in Chinese Traditional.<br/>Names are provided in the English language when a target language isn't specified or when localization isn't available.
- </td>
- </tr>
- <tr>
- <td>X-ClientTraceId</td>
- <td>*Optional request header*.<br/>A client-generated GUID to uniquely identify the request.</td>
- </tr>
-</table>
-
-Authentication isn't required to get language resources.
-
-## Response body
-
-A client uses the `scope` query parameter to define which groups of languages it's interested in.
-
-* `scope=translation` provides languages supported to translate text from one language to another language;
-
-* `scope=transliteration` provides capabilities for converting text in one language from one script to another script;
-
-* `scope=dictionary` provides language pairs for which `Dictionary` operations return data.
-
-A client may retrieve several groups simultaneously by specifying a comma-separated list of names. For example, `scope=translation,transliteration,dictionary` would return supported languages for all groups.
-
-A successful response is a JSON object with one property for each requested group:
-
-```json
-{
- "translation": {
- //... set of languages supported to translate text (scope=translation)
- },
- "transliteration": {
- //... set of languages supported to convert between scripts (scope=transliteration)
- },
- "dictionary": {
- //... set of languages supported for alternative translations and examples (scope=dictionary)
- }
-}
-```
-
-The value for each property is as follows.
-
-* `translation` property
-
- The value of the `translation` property is a dictionary of (key, value) pairs. Each key is a BCP 47 language tag. A key identifies a language for which text can be translated to or translated from. The value associated with the key is a JSON object with properties that describe the language:
-
- * `name`: Display name of the language in the locale requested via `Accept-Language` header.
-
- * `nativeName`: Display name of the language in the locale native for this language.
-
- * `dir`: Directionality, which is `rtl` for right-to-left languages or `ltr` for left-to-right languages.
-
- An example is:
-
- ```json
- {
- "translation": {
- ...
- "fr": {
- "name": "French",
- "nativeName": "Français",
- "dir": "ltr"
- },
- ...
- }
- }
- ```
-
-* `transliteration` property
-
- The value of the `transliteration` property is a dictionary of (key, value) pairs. Each key is a BCP 47 language tag. A key identifies a language for which text can be converted from one script to another script. The value associated with the key is a JSON object with properties that describe the language and its supported scripts:
-
- * `name`: Display name of the language in the locale requested via `Accept-Language` header.
-
- * `nativeName`: Display name of the language in the locale native for this language.
-
- * `scripts`: List of scripts to convert from. Each element of the `scripts` list has properties:
-
- * `code`: Code identifying the script.
-
- * `name`: Display name of the script in the locale requested via `Accept-Language` header.
-
- * `nativeName`: Display name of the language in the locale native for the language.
-
- * `dir`: Directionality, which is `rtl` for right-to-left languages or `ltr` for left-to-right languages.
-
- * `toScripts`: List of scripts available to convert text to. Each element of the `toScripts` list has properties `code`, `name`, `nativeName`, and `dir` as described earlier.
-
- An example is:
-
- ```json
- {
- "transliteration": {
- ...
- "ja": {
- "name": "Japanese",
- "nativeName": "日本語",
- "scripts": [
- {
- "code": "Jpan",
- "name": "Japanese",
- "nativeName": "日本語",
- "dir": "ltr",
- "toScripts": [
- {
- "code": "Latn",
- "name": "Latin",
- "nativeName": "ラテン語",
- "dir": "ltr"
- }
- ]
- },
- {
- "code": "Latn",
- "name": "Latin",
- "nativeName": "ラテン語",
- "dir": "ltr",
- "toScripts": [
- {
- "code": "Jpan",
- "name": "Japanese",
- "nativeName": "日本語",
- "dir": "ltr"
- }
- ]
- }
- ]
- },
- ...
- }
- }
- ```
-
-* `dictionary` property
-
- The value of the `dictionary` property is a dictionary of (key, value) pairs. Each key is a BCP 47 language tag. The key identifies a language for which alternative translations and back-translations are available. The value is a JSON object that describes the source language and the target languages with available translations:
-
- * `name`: Display name of the source language in the locale requested via `Accept-Language` header.
-
- * `nativeName`: Display name of the language in the locale native for this language.
-
- * `dir`: Directionality, which is `rtl` for right-to-left languages or `ltr` for left-to-right languages.
-
-  * `translations`: List of languages with alternative translations and examples for the query expressed in the source language. Each element of the `translations` list has properties:
-
- * `name`: Display name of the target language in the locale requested via `Accept-Language` header.
-
- * `nativeName`: Display name of the target language in the locale native for the target language.
-
- * `dir`: Directionality, which is `rtl` for right-to-left languages or `ltr` for left-to-right languages.
-
- * `code`: Language code identifying the target language.
-
- An example is:
-
- ```json
- "es": {
- "name": "Spanish",
-    "nativeName": "Español",
- "dir": "ltr",
- "translations": [
- {
- "name": "English",
- "nativeName": "English",
- "dir": "ltr",
- "code": "en"
- }
- ]
- },
- ```
-
-The structure of the response object doesn't change without a change in the version of the API. For the same version of the API, the list of available languages may change over time because Microsoft Translator continually extends the list of languages supported by its services.
-
-The list of supported languages doesn't change frequently. To save network bandwidth and improve responsiveness, a client application should consider caching language resources and the corresponding entity tag (`ETag`). Then, the client application can periodically (for example, once every 24 hours) query the service to fetch the latest set of supported languages. Passing the current `ETag` value in an `If-None-Match` header field allows the service to optimize the response. If the resource hasn't been modified, the service returns status code 304 and an empty response body.
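-
-The following Python sketch illustrates that caching pattern: the first call stores the `ETag`, and a later refresh sends it back in an `If-None-Match` header so the service can answer with status code 304 when nothing has changed. No authentication is needed for this endpoint.
-
-```python
-import requests
-
-url = "https://api.cognitive.microsofttranslator.com/languages"
-params = {"api-version": "3.0", "scope": "translation"}
-
-# First request: cache the payload and its entity tag.
-first = requests.get(url, params=params)
-cached_languages = first.json()
-etag = first.headers.get("ETag")
-
-# Periodic refresh: re-download only if the resource changed.
-refresh = requests.get(url, params=params, headers={"If-None-Match": etag})
-if refresh.status_code == 304:
-    print("Language list unchanged; keep using the cached copy.")
-else:
-    cached_languages = refresh.json()
-    etag = refresh.headers.get("ETag")
-    print("Language list updated.")
-```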
-
-## Response headers
-
-<table width="100%">
- <th width="20%">Headers</th>
- <th>Description</th>
- <tr>
- <td>ETag</td>
- <td>Current value of the entity tag for the requested groups of supported languages. To make subsequent requests more efficient, the client may send the `ETag` value in an `If-None-Match` header field.
- </td>
- </tr>
- <tr>
- <td>X-RequestId</td>
- <td>Value generated by the service to identify the request. It's used for troubleshooting purposes.</td>
- </tr>
-</table>
-
-## Response status codes
-
-The following are the possible HTTP status codes that a request returns.
-
-<table width="100%">
- <th width="20%">Status Code</th>
- <th>Description</th>
- <tr>
- <td>200</td>
- <td>Success.</td>
- </tr>
- <tr>
- <td>304</td>
- <td>The resource hasn't been modified since the version specified by request headers `If-None-Match`.</td>
- </tr>
- <tr>
- <td>400</td>
- <td>One of the query parameters is missing or not valid. Correct request parameters before retrying.</td>
- </tr>
- <tr>
- <td>429</td>
- <td>The server rejected the request because the client has exceeded request limits.</td>
- </tr>
- <tr>
- <td>500</td>
- <td>An unexpected error occurred. If the error persists, report it with: date and time of the failure, request identifier from response header `X-RequestId`, and client identifier from request header `X-ClientTraceId`.</td>
- </tr>
- <tr>
- <td>503</td>
- <td>Server temporarily unavailable. Retry the request. If the error persists, report it with: date and time of the failure, request identifier from response header `X-RequestId`, and client identifier from request header `X-ClientTraceId`.</td>
- </tr>
-</table>
-
-If an error occurs, the request also returns a JSON error response. The error code is a 6-digit number combining the 3-digit HTTP status code followed by a 3-digit number to further categorize the error. Common error codes can be found on the [v3 Translator reference page](./v3-0-reference.md#errors).
-
-## Examples
-
-The following example shows how to retrieve languages supported for text translation.
-
-```curl
-curl "https://api.cognitive.microsofttranslator.com/languages?api-version=3.0&scope=translation"
-```
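-
-Here's the same request as a minimal Python sketch, adding the optional `Accept-Language` header so that display names are returned in French:
-
-```python
-import requests
-
-response = requests.get(
-    "https://api.cognitive.microsofttranslator.com/languages",
-    params={"api-version": "3.0", "scope": "translation"},
-    headers={"Accept-Language": "fr"}  # localized display names
-)
-
-for code, info in response.json()["translation"].items():
-    print(code, info["name"], info["nativeName"], info["dir"])
-```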
cognitive-services V3 0 Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Translator/reference/v3-0-reference.md
- Title: Translator V3.0 Reference-
-description: Reference documentation for the Translator V3.0. Version 3 of the Translator provides a modern JSON-based Web API.
------ Previously updated : 04/20/2023---
-# Translator v3.0
-
-## What's new?
-
-Version 3 of the Translator provides a modern JSON-based Web API. It improves usability and performance by consolidating existing features into fewer operations and it provides new features.
-
-* Transliteration to convert text in one language from one script to another script.
-* Translation to multiple languages in one request.
-* Language detection, translation, and transliteration in one request.
-* Dictionary to look up alternative translations of a term, to find back-translations and examples showing terms used in context.
-* More informative language detection results.
-
-## Base URLs
-
-Requests to Translator are, in most cases, handled by the datacenter that is closest to where the request originated. If there's a datacenter failure when using the global endpoint, the request may be routed outside of the geography.
-
-To force the request to be handled within a specific geography, use the desired geographical endpoint. All requests are processed among the datacenters within the geography.
-
-|Geography|Base URL (geographical endpoint)|Datacenters|
-|:--|:--|:--|
-|Global (`non-regional`)| api.cognitive.microsofttranslator.com|Closest available datacenter|
-|Asia Pacific| api-apc.cognitive.microsofttranslator.com|Korea South, Japan East, Southeast Asia, and Australia East|
-|Europe| api-eur.cognitive.microsofttranslator.com|North Europe, West Europe|
-|United States| api-nam.cognitive.microsofttranslator.com|East US, South Central US, West Central US, and West US 2|
-
-<sup>`1`</sup> Customers with a resource located in Switzerland North or Switzerland West can ensure that their Text API requests are served within Switzerland. To ensure that requests are handled in Switzerland, create the Translator resource in the 'Resource region' 'Switzerland North' or 'Switzerland West', then use the resource's custom endpoint in your API requests. For example: If you create a Translator resource in Azure portal with 'Resource region' as 'Switzerland North' and your resource name is 'my-swiss-n', then your custom endpoint is "https://my-swiss-n.cognitiveservices.azure.com". And a sample request to translate is:
-```curl
-// Pass secret key and region using headers to a custom endpoint
-curl -X POST "https://my-swiss-n.cognitiveservices.azure.com/translator/text/v3.0/translate?to=fr" \
--H "Ocp-Apim-Subscription-Key: xxx" \--H "Ocp-Apim-Subscription-Region: switzerlandnorth" \--H "Content-Type: application/json" \--d "[{'Text':'Hello'}]" -v
-```
-<sup>`2`</sup> Custom Translator isn't currently available in Switzerland.
-
-## Authentication
-
-Subscribe to Translator or [Cognitive Services multi-service](https://azure.microsoft.com/pricing/details/cognitive-services/) in Azure Cognitive Services, and use your key (available in the Azure portal) to authenticate.
-
-There are three headers that you can use to authenticate your subscription. This table describes how each is used:
-
-|Headers|Description|
-|:-|:-|
-|Ocp-Apim-Subscription-Key|*Use with Cognitive Services subscription if you're passing your secret key*.<br/>The value is the Azure secret key for your subscription to Translator.|
-|Authorization|*Use with Cognitive Services subscription if you're passing an authentication token.*<br/>The value is the Bearer token: `Bearer <token>`.|
-|Ocp-Apim-Subscription-Region|*Use with Cognitive Services multi-service and regional translator resource.*<br/>The value is the region of the multi-service or regional translator resource. This value is optional when using a global translator resource.|
-
-### Secret key
-
-The first option is to authenticate using the `Ocp-Apim-Subscription-Key` header. Add the `Ocp-Apim-Subscription-Key: <YOUR_SECRET_KEY>` header to your request.
-
-#### Authenticating with a global resource
-
-When you use a [global translator resource](https://portal.azure.com/#create/Microsoft.CognitiveServicesTextTranslation), you need to include one header to call the Translator.
-
-|Headers|Description|
-|:--|:-|
-|Ocp-Apim-Subscription-Key| The value is the Azure secret key for your subscription to Translator.|
-
-Here's an example request to call the Translator using the global translator resource:
-
-```curl
-// Pass secret key using headers
-curl -X POST "https://api.cognitive.microsofttranslator.com/translate?api-version=3.0&to=es" \
- -H "Ocp-Apim-Subscription-Key:<your-key>" \
- -H "Content-Type: application/json" \
- -d "[{'Text':'Hello, what is your name?'}]"
-```
-
-#### Authenticating with a regional resource
-
-When you use a [regional translator resource](https://portal.azure.com/#create/Microsoft.CognitiveServicesTextTranslation), you need two headers to call the Translator.
-
-|Headers|Description|
-|:--|:-|
-|Ocp-Apim-Subscription-Key| The value is the Azure secret key for your subscription to Translator.|
-|Ocp-Apim-Subscription-Region| The value is the region of the translator resource. |
-
-Here's an example request to call the Translator using the regional translator resource:
-
-```curl
-// Pass secret key and region using headers
-curl -X POST "https://api.cognitive.microsofttranslator.com/translate?api-version=3.0&to=es" \
- -H "Ocp-Apim-Subscription-Key:<your-key>" \
- -H "Ocp-Apim-Subscription-Region:<your-region>" \
- -H "Content-Type: application/json" \
- -d "[{'Text':'Hello, what is your name?'}]"
-```
-
-#### Authenticating with a Multi-service resource
-
-A Cognitive Services multi-service resource allows you to use a single API key to authenticate requests for multiple services.
-
-When you use a multi-service secret key, you must include two authentication headers with your request to call the Translator.
-
-|Headers|Description|
-|:--|:-|
-|Ocp-Apim-Subscription-Key| The value is the Azure secret key for your multi-service resource.|
-|Ocp-Apim-Subscription-Region| The value is the region of the multi-service resource. |
-
-Region is required for the multi-service Text API subscription. The region you select is the only region that you can use for text translation when using the multi-service key. It must be the same region you selected when you signed up for your multi-service subscription through the Azure portal.
-
-If you pass the secret key in the query string with the parameter `Subscription-Key`, then you must specify the region with query parameter `Subscription-Region`.
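-
-For example, here's a minimal Python sketch (with placeholder values) that passes the multi-service key and region on the query string instead of in headers:
-
-```python
-import requests
-
-params = {
-    "api-version": "3.0",
-    "to": "es",
-    "Subscription-Key": "<your-multi-service-key>",    # placeholder
-    "Subscription-Region": "<your-resource-region>"    # placeholder
-}
-
-body = [{ "Text": "Hello, what is your name?" }]
-
-response = requests.post("https://api.cognitive.microsofttranslator.com/translate",
-                         params=params, json=body)
-print(response.json())
-```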
-
-### Authenticating with an access token
-
-Alternatively, you can exchange your secret key for an access token. This token is included with each request as the `Authorization` header. To obtain an authorization token, make a `POST` request to the following URL:
-
-| Resource type | Authentication service URL |
-|--|--|
-| Global | `https://api.cognitive.microsoft.com/sts/v1.0/issueToken` |
-| Regional or Multi-Service | `https://<your-region>.api.cognitive.microsoft.com/sts/v1.0/issueToken` |
-
-Here are example requests to obtain a token given a secret key for a global resource:
-
-```curl
-// Pass secret key using header
-curl --header 'Ocp-Apim-Subscription-Key: <your-key>' --data "" 'https://api.cognitive.microsoft.com/sts/v1.0/issueToken'
-
-// Pass secret key using query string parameter
-curl --data "" 'https://api.cognitive.microsoft.com/sts/v1.0/issueToken?Subscription-Key=<your-key>'
-```
-
-And here are example requests to obtain a token given a secret key for a regional resource located in Central US:
-
-```curl
-// Pass secret key using header
-curl --header "Ocp-Apim-Subscription-Key: <your-key>" --data "" "https://centralus.api.cognitive.microsoft.com/sts/v1.0/issueToken"
-
-// Pass secret key using query string parameter
-curl --data "" "https://centralus.api.cognitive.microsoft.com/sts/v1.0/issueToken?Subscription-Key=<your-key>"
-```
-
-A successful request returns the encoded access token as plain text in the response body. The valid token is passed to the Translator service as a bearer token in the Authorization header.
-
-```http
-Authorization: Bearer <Base64-access_token>
-```
-
-An authentication token is valid for 10 minutes. The token should be reused when making multiple calls to the Translator. However, if your program makes requests to the Translator over an extended period of time, then your program must request a new access token at regular intervals (for example, every 8 minutes).
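-
-Here's a minimal Python sketch of that reuse pattern for a global resource: it caches the token and requests a new one from the `issueToken` endpoint after roughly 8 minutes. The key is a placeholder, and a regional resource would use its regional token endpoint instead.
-
-```python
-import time
-import requests
-
-key = "<your-translator-key>"  # placeholder
-token_url = "https://api.cognitive.microsoft.com/sts/v1.0/issueToken"
-
-cached_token = None
-token_acquired_at = 0.0
-
-def get_token():
-    """Return a cached token, refreshing it after roughly 8 minutes."""
-    global cached_token, token_acquired_at
-    if cached_token is None or time.time() - token_acquired_at > 8 * 60:
-        result = requests.post(token_url, headers={"Ocp-Apim-Subscription-Key": key}, data="")
-        result.raise_for_status()
-        cached_token = result.text  # the token is returned as plain text
-        token_acquired_at = time.time()
-    return cached_token
-
-# Use the cached token as a bearer token in the Authorization header.
-response = requests.post(
-    "https://api.cognitive.microsofttranslator.com/translate",
-    params={"api-version": "3.0", "to": "es"},
-    headers={"Authorization": "Bearer " + get_token(), "Content-Type": "application/json"},
-    json=[{ "Text": "Hello, what is your name?" }]
-)
-print(response.json())
-```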
-
-## Authentication with Azure Active Directory (Azure AD)
-
- Translator v3.0 supports Azure AD authentication, Microsoft's cloud-based identity and access management solution. Authorization headers enable the Translator service to validate that the requesting client is authorized to use the resource and to complete the request.
-
-### **Prerequisites**
-
-* A brief understanding of how to [**authenticate with Azure Active Directory**](../../authentication.md?tabs=powershell#authenticate-with-azure-active-directory).
-
-* A brief understanding of how to [**authorize access to managed identities**](../../authentication.md?tabs=powershell#authorize-access-to-managed-identities).
-
-### **Headers**
-
-|Header|Value|
-|:--|:-|
-|Authorization| The value is an access **bearer token** generated by Azure AD.</br><ul><li> The bearer token provides proof of authentication and validates the client's authorization to use the resource.</li><li> An authentication token is valid for 10 minutes and should be reused when making multiple calls to Translator.</br></li>*See* [Sample request: 2. Get a token](../../authentication.md?tabs=powershell#sample-request)</ul>|
-|Ocp-Apim-Subscription-Region| The value is the region of the **translator resource**.</br><ul><li> This value is optional if the resource is global.</li></ul>|
-|Ocp-Apim-ResourceId| The value is the Resource ID for your Translator resource instance.</br><ul><li>You find the Resource ID in the Azure portal at **Translator Resource → Properties**. </li><li>Resource ID format: </br>/subscriptions/<**subscriptionId**>/resourceGroups/<**resourceGroupName**>/providers/Microsoft.CognitiveServices/accounts/<**resourceName**>/</li></ul>|
-
-##### **Translator property page (Azure portal)**
--
-### **Examples**
-
-#### **Using the global endpoint**
-
-```curl
- // Using headers, pass a bearer token generated by Azure AD, resource ID, and the region.
-
-curl -X POST "https://api.cognitive.microsofttranslator.com/translate?api-version=3.0&to=es" \
- -H "Authorization: Bearer <Base64-access_token>"\
- -H "Ocp-Apim-ResourceId: <Resource ID>" \
- -H "Ocp-Apim-Subscription-Region: <your-region>" \
- -H "Content-Type: application/json" \
- --data-raw "[{'Text':'Hello, friend.'}]"
-```
-
-#### **Using your custom endpoint**
-
-```curl
-// Using headers, pass a bearer token generated by Azure AD.
-
-curl -X POST "https://<your-custom-domain>.cognitiveservices.azure.com/translator/text/v3.0/translate?api-version=3.0&to=es" \
- -H "Authorization: Bearer <Base64-access_token>"\
- -H "Content-Type: application/json" \
- --data-raw "[{'Text':'Hello, friend.'}]"
-```
-
-### **Examples using managed identities**
-
-Translator v3.0 also supports authorizing access to managed identities. If a managed identity is enabled for a Translator resource, you can pass the bearer token generated by the managed identity in the request header.
-
-#### **With the global endpoint**
-
-```curl
-// Using headers, pass a bearer token generated either by Azure AD or Managed Identities, resource ID, and the region.
-
-curl -X POST "https://api.cognitive.microsofttranslator.com/translate?api-version=3.0&to=es" \
- -H "Authorization: Bearer <Base64-access_token>"\
- -H "Ocp-Apim-ResourceId: <Resource ID>" \
- -H "Ocp-Apim-Subscription-Region: <your-region>" \
- -H "Content-Type: application/json" \
- --data-raw "[{'Text':'Hello, friend.'}]"
-```
-
-#### **With your custom endpoint**
-
-```curl
-//Using headers, pass a bearer token generated by Managed Identities.
-
-curl -X POST "https://<your-custom-domain>.cognitiveservices.azure.com/translator/text/v3.0/translate?api-version=3.0&to=es" \
- -H "Authorization: Bearer <Base64-access_token>"\
- -H "Content-Type: application/json" \
- --data-raw "[{'Text':'Hello, friend.'}]"
-```
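-
-If your code runs in Azure with a managed identity enabled, one way to obtain such a bearer token from Python is the `azure-identity` package. The following is a sketch, not the only approach; it assumes the identity has been granted access to your Translator resource and uses the standard Cognitive Services token scope:
-
-```python
-import requests
-from azure.identity import DefaultAzureCredential
-
-# DefaultAzureCredential picks up the managed identity when running in Azure.
-credential = DefaultAzureCredential()
-token = credential.get_token("https://cognitiveservices.azure.com/.default")
-
-# Placeholders; copy these values from the Azure portal.
-resource_id = "<Resource ID>"
-region = "<your-region>"
-
-response = requests.post(
-    "https://api.cognitive.microsofttranslator.com/translate",
-    params={"api-version": "3.0", "to": "es"},
-    headers={
-        "Authorization": f"Bearer {token.token}",
-        "Ocp-Apim-ResourceId": resource_id,
-        "Ocp-Apim-Subscription-Region": region,
-        "Content-Type": "application/json",
-    },
-    json=[{"Text": "Hello, friend."}],
-)
-print(response.json())
-```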
-
-## Virtual Network support
-
-The Translator service is now available with Virtual Network (VNET) capabilities in all regions of the Azure public cloud. To enable Virtual Network, *see* [Configuring Azure Cognitive Services Virtual Networks](../../cognitive-services-virtual-networks.md?tabs=portal).
-
-Once you turn on this capability, you must use the custom endpoint to call the Translator. You can't use the global translator endpoint ("api.cognitive.microsofttranslator.com") and you can't authenticate with an access token.
-
-You can find the custom endpoint after you create a [translator resource](https://portal.azure.com/#create/Microsoft.CognitiveServicesTextTranslation) and allow access from selected networks and private endpoints.
-
-1. Navigate to your Translator resource in the Azure portal.
-1. Select **Networking** from the **Resource Management** section.
-1. Under the **Firewalls and virtual networks** tab, choose **Selected Networks and Private Endpoints**.
-
- :::image type="content" source="../media/virtual-network-setting-azure-portal.png" alt-text="Screenshot of the virtual network setting in the Azure portal.":::
-
-1. Select **Save** to apply your changes.
-1. Select **Keys and Endpoint** from the **Resource Management** section.
-1. Select the **Virtual Network** tab.
-1. The endpoints for Text Translation and Document Translation are listed on this tab.
-
- :::image type="content" source="../media/virtual-network-endpoint.png" alt-text="Screenshot of the virtual network endpoint.":::
-
-|Headers|Description|
-|:--|:-|
-|Ocp-Apim-Subscription-Key| The value is the Azure secret key for your subscription to Translator.|
-|Ocp-Apim-Subscription-Region| The value is the region of the translator resource. This value is optional if the resource is `global`|
-
-Here's an example request to call the Translator using the custom endpoint:
-
-```curl
-// Pass secret key and region using headers
-curl -X POST "https://<your-custom-domain>.cognitiveservices.azure.com/translator/text/v3.0/translate?api-version=3.0&to=es" \
- -H "Ocp-Apim-Subscription-Key:<your-key>" \
- -H "Ocp-Apim-Subscription-Region:<your-region>" \
- -H "Content-Type: application/json" \
- -d "[{'Text':'Hello, what is your name?'}]"
-```
-
-## Errors
-
-A standard error response is a JSON object with a name/value pair named `error`. The value is also a JSON object with the following properties:
-
-* `code`: A server-defined error code.
-* `message`: A string giving a human-readable representation of the error.
-
-For example, a customer with a free trial subscription would receive the following error once the free quota is exhausted:
-
-```json
-{
- "error": {
- "code":403001,
- "message":"The operation isn't allowed because the subscription has exceeded its free quota."
- }
-}
-```
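-
-As a sketch of how a client might act on this error object (for example, to decide whether a retry is worthwhile), the following helper assumes the `requests` library and relies only on the `code` and `message` properties described above:
-
-```python
-import requests
-
-# Error codes that usually indicate a transient condition worth retrying.
-RETRYABLE_CODES = {429000, 429001, 429002, 503000}
-
-def should_retry(response: requests.Response) -> bool:
-    """Return True for retryable errors; raise for everything else that failed."""
-    if response.ok:
-        return False
-    error = response.json().get("error", {})
-    code = error.get("code")
-    message = error.get("message", "")
-    if code in RETRYABLE_CODES:
-        return True
-    raise RuntimeError(f"Translator error {code}: {message}")
-```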
-
-The error code is a 6-digit number combining the 3-digit HTTP status code followed by a 3-digit number to further categorize the error. Common error codes are:
-
-| Code | Description |
-|:-|:--|
-| 400000| One of the request inputs isn't valid.|
-| 400001| The "scope" parameter is invalid.|
-| 400002| The "category" parameter is invalid.|
-| 400003| A language specifier is missing or invalid.|
-| 400004| A target script specifier ("To script") is missing or invalid.|
-| 400005| An input text is missing or invalid.|
-| 400006| The combination of language and script isn't valid.|
-| 400018| A source script specifier ("From script") is missing or invalid.|
-| 400019| One of the specified languages isn't supported.|
-| 400020| One of the elements in the array of input text isn't valid.|
-| 400021| The API version parameter is missing or invalid.|
-| 400023| One of the specified language pair isn't valid.|
-| 400035| The source language ("From" field) isn't valid.|
-| 400036| The target language ("To" field) is missing or invalid.|
-| 400042| One of the options specified ("Options" field) isn't valid.|
-| 400043| The client trace ID (ClientTraceId field or X-ClientTranceId header) is missing or invalid.|
-| 400050| The input text is too long. View [request limits](../request-limits.md).|
-| 400064| The "translation" parameter is missing or invalid.|
-| 400070| The number of target scripts (ToScript parameter) doesn't match the number of target languages (To parameter).|
-| 400071| The value isn't valid for TextType.|
-| 400072| The array of input text has too many elements.|
-| 400073| The script parameter isn't valid.|
-| 400074| The body of the request isn't valid JSON.|
-| 400075| The language pair and category combination isn't valid.|
-| 400077| The maximum request size has been exceeded. View [request limits](../request-limits.md).|
-| 400079| The custom system requested for translation between from and to language doesn't exist.|
-| 400080| Transliteration isn't supported for the language or script.|
-| 401000| The request isn't authorized because credentials are missing or invalid.|
-| 401015| "The credentials provided are for the Speech API. This request requires credentials for the Text API. Use a subscription to Translator."|
-| 403000| The operation isn't allowed.|
-| 403001| The operation isn't allowed because the subscription has exceeded its free quota.|
-| 405000| The request method isn't supported for the requested resource.|
-| 408001| The translation system requested is being prepared. Retry in a few minutes.|
-| 408002| Request timed out waiting on incoming stream. The client didn't produce a request within the time that the server was prepared to wait. The client may repeat the request without modifications at any later time.|
-| 415000| The Content-Type header is missing or invalid.|
-| 429000, 429001, 429002| The server rejected the request because the client has exceeded request limits.|
-| 500000| An unexpected error occurred. If the error persists, report it with date/time of error, request identifier from response header X-RequestId, and client identifier from request header X-ClientTraceId.|
-| 503000| Service is temporarily unavailable. Retry. If the error persists, report it with date/time of error, request identifier from response header X-RequestId, and client identifier from request header X-ClientTraceId.|
-
-## Metrics
-Metrics allow you to view Translator usage and availability information in the Azure portal, under the **Metrics** section, as shown in the following screenshot. For more information, see [Data and platform metrics](../../../azure-monitor/essentials/data-platform-metrics.md).
-
-![Translator Metrics](../media/translatormetrics.png)
-
-This table lists the available metrics with descriptions of how they're used to monitor translation API calls.
-
-| Metrics | Description |
-|:-|:--|
-| TotalCalls| Total number of API calls.|
-| TotalTokenCalls| Total number of API calls via token service using authentication token.|
-| SuccessfulCalls| Number of successful calls.|
-| TotalErrors| Number of calls with error response.|
-| BlockedCalls| Number of calls that exceeded rate or quota limit.|
-| ServerErrors| Number of calls with a server internal error (5XX).|
-| ClientErrors| Number of calls with a client-side error (4XX).|
-| Latency| Duration to complete request in milliseconds.|
-| CharactersTranslated| Total number of characters in incoming text request.|
cognitive-services V3 0 Translate https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Translator/reference/v3-0-translate.md
- Title: Translator Translate Method-
-description: Understand the parameters, headers, and body messages for the Translate method of Azure Cognitive Services Translator to translate text.
------- Previously updated : 10/03/2022---
-# Translator 3.0: Translate
-
-Translates text.
-
-## Request URL
-
-Send a `POST` request to:
-
-```HTTP
-https://api.cognitive.microsofttranslator.com/translate?api-version=3.0
-```
-
-## Request parameters
-
-Request parameters passed on the query string are:
-
-### Required parameters
-
-| Query parameter | Description |
-| | |
-| api-version | _Required parameter_. <br>Version of the API requested by the client. Value must be `3.0`. |
-| to | _Required parameter_. <br>Specifies the language of the output text. The target language must be one of the [supported languages](v3-0-languages.md) included in the `translation` scope. For example, use `to=de` to translate to German. <br>It's possible to translate to multiple languages simultaneously by repeating the parameter in the query string. For example, use `to=de&to=it` to translate to German and Italian. |
-
-### Optional parameters
-
-| Query parameter | Description |
-| | |
-| from | _Optional parameter_. <br>Specifies the language of the input text. Find which languages are available to translate from by looking up [supported languages](../reference/v3-0-languages.md) using the `translation` scope. If the `from` parameter isn't specified, automatic language detection is applied to determine the source language. <br> <br>You must use the `from` parameter rather than autodetection when using the [dynamic dictionary](../dynamic-dictionary.md) feature. **Note**: the dynamic dictionary feature is case-sensitive. |
-| textType | _Optional parameter_. <br>Defines whether the text being translated is plain text or HTML text. Any HTML needs to be a well-formed, complete element. Possible values are: `plain` (default) or `html`. |
-| category | _Optional parameter_. <br>A string specifying the category (domain) of the translation. This parameter is used to get translations from a customized system built with [Custom Translator](../customization.md). Add the Category ID from your Custom Translator [project details](../custom-translator/how-to-create-project.md#view-project-details) to this parameter to use your deployed customized system. Default value is: `general`. |
-| profanityAction | _Optional parameter_. <br>Specifies how profanities should be treated in translations. Possible values are: `NoAction` (default), `Marked` or `Deleted`. To understand ways to treat profanity, see [Profanity handling](#handle-profanity). |
-| profanityMarker | _Optional parameter_. <br>Specifies how profanities should be marked in translations. Possible values are: `Asterisk` (default) or `Tag`. To understand ways to treat profanity, see [Profanity handling](#handle-profanity). |
-| includeAlignment | _Optional parameter_. <br>Specifies whether to include alignment projection from source text to translated text. Possible values are: `true` or `false` (default). |
-| includeSentenceLength | _Optional parameter_. <br>Specifies whether to include sentence boundaries for the input text and the translated text. Possible values are: `true` or `false` (default). |
-| suggestedFrom | _Optional parameter_. <br>Specifies a fallback language if the language of the input text can't be identified. Language autodetection is applied when the `from` parameter is omitted. If detection fails, the `suggestedFrom` language will be assumed. |
-| fromScript | _Optional parameter_. <br>Specifies the script of the input text. |
-| toScript | _Optional parameter_. <br>Specifies the script of the translated text. |
-| allowFallback | _Optional parameter_. <br>Specifies that the service is allowed to fall back to a general system when a custom system doesn't exist. Possible values are: `true` (default) or `false`. <br> <br>`allowFallback=false` specifies that the translation should only use systems trained for the `category` specified by the request. If a translation for language X to language Y requires chaining through a pivot language E, then all the systems in the chain (X → E and E → Y) will need to be custom and have the same category. If no system is found with the specific category, the request will return a 400 status code. `allowFallback=true` specifies that the service is allowed to fall back to a general system when a custom system doesn't exist. |
-
-Request headers include:
-
-| Headers | Description |
-| | |
-| Authentication header(s) | _Required request header_. <br>See [available options for authentication](./v3-0-reference.md#authentication). |
-| Content-Type | _Required request header_. <br>Specifies the content type of the payload. <br>Accepted value is `application/json; charset=UTF-8`. |
-| Content-Length | _Required request header_. <br>The length of the request body. |
-| X-ClientTraceId | _Optional_. <br>A client-generated GUID to uniquely identify the request. You can omit this header if you include the trace ID in the query string using a query parameter named `ClientTraceId`. |
-
-## Request body
-
-The body of the request is a JSON array. Each array element is a JSON object with a string property named `Text`, which represents the string to translate.
-
-```json
-[
- {"Text":"I would really like to drive your car around the block a few times."}
-]
-```
-
-For information on character and array limits, _see_ [Request limits](../request-limits.md#character-and-array-limits-per-request).
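-
-As a minimal Python sketch of sending this body with the required headers (key and region placeholders are assumptions; the cURL equivalents appear in the [examples](#examples) section):
-
-```python
-import uuid
-import requests
-
-endpoint = "https://api.cognitive.microsofttranslator.com/translate"
-# Repeating the 'to' parameter translates to multiple languages in one call.
-params = {"api-version": "3.0", "from": "en", "to": ["de", "it"]}
-headers = {
-    "Ocp-Apim-Subscription-Key": "<your-key>",
-    "Ocp-Apim-Subscription-Region": "<your-region>",  # omit for a global resource
-    "Content-Type": "application/json; charset=UTF-8",
-    "X-ClientTraceId": str(uuid.uuid4()),
-}
-body = [
-    {"Text": "I would really like to drive your car around the block a few times."}
-]
-
-response = requests.post(endpoint, params=params, headers=headers, json=body)
-print(response.json())
-```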
-
-## Response body
-
-A successful response is a JSON array with one result for each string in the input array. A result object includes the following properties:
-
-* `detectedLanguage`: An object describing the detected language through the following properties:
-
- * `language`: A string representing the code of the detected language.
-
- * `score`: A float value indicating the confidence in the result. The score is between zero and one, and a low score indicates low confidence.
-
- The `detectedLanguage` property is only present in the result object when language auto-detection is requested.
-
-* `translations`: An array of translation results. The size of the array matches the number of target languages specified through the `to` query parameter. Each element in the array includes:
-
- * `to`: A string representing the language code of the target language.
-
- * `text`: A string giving the translated text.
-
-* `transliteration`: An object giving the translated text in the script specified by the `toScript` parameter.
-
- * `script`: A string specifying the target script.
-
- * `text`: A string giving the translated text in the target script.
-
- The `transliteration` object isn't included if transliteration doesn't take place.
-
-* `alignment`: An object with a single string property named `proj`, which maps input text to translated text. The alignment information is only provided when the request parameter `includeAlignment` is `true`. Alignment is returned as a string value of the following format: `[[SourceTextStartIndex]:[SourceTextEndIndex]-[TgtTextStartIndex]:[TgtTextEndIndex]]`. The colon separates start and end index, the dash separates the languages, and space separates the words. One word may align with zero, one, or multiple words in the other language, and the aligned words may be non-contiguous. When no alignment information is available, the alignment element will be empty. See [Obtain alignment information](#obtain-alignment-information) for an example and restrictions.
-
-* `sentLen`: An object returning sentence boundaries in the input and output texts.
-
- * `srcSentLen`: An integer array representing the lengths of the sentences in the input text. The length of the array is the number of sentences, and the values are the length of each sentence.
-
- * `transSentLen`: An integer array representing the lengths of the sentences in the translated text. The length of the array is the number of sentences, and the values are the length of each sentence.
-
- Sentence boundaries are only included when the request parameter `includeSentenceLength` is `true`.
-
-* `sourceText`: An object with a single string property named `text`, which gives the input text in the default script of the source language. The `sourceText` property is present only when the input is expressed in a script that's not the usual script for the language. For example, if the input were Arabic written in Latin script, then `sourceText.text` would be the same Arabic text converted into Arabic script.
-
-Examples of JSON responses are provided in the [examples](#examples) section.
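-
-As a sketch of how a caller might walk these properties, assuming `results` is the parsed JSON array from a successful request (for example, `response.json()` from the snippet above):
-
-```python
-def print_translations(results):
-    """Print each translation, plus detection and transliteration details when present."""
-    for result in results:
-        detected = result.get("detectedLanguage")
-        if detected:  # present only when language autodetection was requested
-            print(f"Detected {detected['language']} (score {detected['score']})")
-        for translation in result["translations"]:
-            print(f"[{translation['to']}] {translation['text']}")
-            transliteration = translation.get("transliteration")
-            if transliteration:  # included only when a toScript was requested
-                print(f"  ({transliteration['script']}) {transliteration['text']}")
-```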
-
-## Response headers
-
-| Headers | Description |
-| | |
-| X-requestid | Value generated by the service to identify the request. It's used for troubleshooting purposes. |
-| X-mt-system | Specifies the system type that was used for translation for each 'to' language requested for translation. The value is a comma-separated list of strings. Each string indicates a type: <br><br>* Custom - Request includes a custom system and at least one custom system was used during translation.<br>* Team - All other requests |
-| X-metered-usage |Specifies consumption (the number of characters for which the user will be charged) for the translation job request. For example, if the word "Hello" is translated from English (en) to French (fr), this field will return the value '5'.|
-
-## Response status codes
-
-The following are the possible HTTP status codes that a request returns.
-
-|Status code | Description |
-| | |
-|200 | Success. |
-|400 |One of the query parameters is missing or not valid. Correct request parameters before retrying. |
-|401 | The request couldn't be authenticated. Check that credentials are specified and valid. |
-|403 | The request isn't authorized. Check the detailed error message. This status code often indicates that all free translations provided with a trial subscription have been used up. |
-|408 | The request couldn't be fulfilled because a resource is missing. Check the detailed error message. When the request includes a custom category, this status code often indicates that the custom translation system isn't yet available to serve requests. The request should be retried after a waiting period (for example, 1 minute). |
-|429 | The server rejected the request because the client has exceeded request limits. |
-|500 | An unexpected error occurred. If the error persists, report it with: date and time of the failure, request identifier from response header X-RequestId, and client identifier from request header X-ClientTraceId. |
-|503 |Server temporarily unavailable. Retry the request. If the error persists, report it with: date and time of the failure, request identifier from response header X-RequestId, and client identifier from request header X-ClientTraceId. |
-
-If an error occurs, the request will also return a JSON error response. The error code is a 6-digit number combining the 3-digit HTTP status code followed by a 3-digit number to further categorize the error. Common error codes can be found on the [v3 Translator reference page](./v3-0-reference.md#errors).
-
-## Examples
-
-### Translate a single input
-
-This example shows how to translate a single sentence from English to Simplified Chinese.
-
-```curl
-curl -X POST "https://api.cognitive.microsofttranslator.com/translate?api-version=3.0&from=en&to=zh-Hans" -H "Ocp-Apim-Subscription-Key: <client-secret>" -H "Content-Type: application/json; charset=UTF-8" -d "[{'Text':'Hello, what is your name?'}]"
-```
-
-The response body is:
-
-```
-[
- {
- "translations":[
- {"text":"你好, 你叫什么名字?","to":"zh-Hans"}
- ]
- }
-]
-```
-
-The `translations` array includes one element, which provides the translation of the single piece of text in the input.
-
-### Translate a single input with language autodetection
-
-This example shows how to translate a single sentence from English to Simplified Chinese. The request doesn't specify the input language. Autodetection of the source language is used instead.
-
-```curl
-curl -X POST "https://api.cognitive.microsofttranslator.com/translate?api-version=3.0&to=zh-Hans" -H "Ocp-Apim-Subscription-Key: <client-secret>" -H "Content-Type: application/json; charset=UTF-8" -d "[{'Text':'Hello, what is your name?'}]"
-```
-
-The response body is:
-
-```
-[
- {
- "detectedLanguage": {"language": "en", "score": 1.0},
- "translations":[
- {"text": "你好, 你叫什么名字?", "to": "zh-Hans"}
- ]
- }
-]
-```
-The response is similar to the response from the previous example. Since language autodetection was requested, the response also includes information about the language detected for the input text. The language autodetection works better with longer input text.
-
-### Translate with transliteration
-
-Let's extend the previous example by adding transliteration. The following request asks for a Chinese translation written in Latin script.
-
-```curl
-curl -X POST "https://api.cognitive.microsofttranslator.com/translate?api-version=3.0&to=zh-Hans&toScript=Latn" -H "Ocp-Apim-Subscription-Key: <client-secret>" -H "Content-Type: application/json; charset=UTF-8" -d "[{'Text':'Hello, what is your name?'}]"
-```
-
-The response body is:
-
-```
-[
- {
- "detectedLanguage":{"language":"en","score":1.0},
- "translations":[
- {
- "text":"你好, 你叫什么名字?",
- "transliteration":{"script":"Latn", "text":"nǐ hǎo , nǐ jiào shén me míng zì ?"},
- "to":"zh-Hans"
- }
- ]
- }
-]
-```
-
-The translation result now includes a `transliteration` property, which gives the translated text using Latin characters.
-
-### Translate multiple pieces of text
-
-Translating multiple strings at once is simply a matter of specifying an array of strings in the request body.
-
-```curl
-curl -X POST "https://api.cognitive.microsofttranslator.com/translate?api-version=3.0&from=en&to=zh-Hans" -H "Ocp-Apim-Subscription-Key: <client-secret>" -H "Content-Type: application/json; charset=UTF-8" -d "[{'Text':'Hello, what is your name?'}, {'Text':'I am fine, thank you.'}]"
-```
-
-The response contains the translation of all pieces of text in the exact same order as in the request.
-The response body is:
-
-```
-[
- {
- "translations":[
- {"text":"你好, 你叫什么名字?","to":"zh-Hans"}
- ]
- },
- {
- "translations":[
- {"text":"我很好,谢谢你。","to":"zh-Hans"}
- ]
- }
-]
-```
-
-### Translate to multiple languages
-
-This example shows how to translate the same input to several languages in one request.
-
-```curl
-curl -X POST "https://api.cognitive.microsofttranslator.com/translate?api-version=3.0&from=en&to=zh-Hans&to=de" -H "Ocp-Apim-Subscription-Key: <client-secret>" -H "Content-Type: application/json; charset=UTF-8" -d "[{'Text':'Hello, what is your name?'}]"
-```
-
-The response body is:
-
-```
-[
- {
- "translations":[
- {"text":"你好, 你叫什么名字?","to":"zh-Hans"},
- {"text":"Hallo, was ist dein Name?","to":"de"}
- ]
- }
-]
-```
-
-### Handle profanity
-
-The Translator service normally preserves profanity that is present in the source text in its translation. The degree of profanity and the context that makes words profane differ between cultures; as a result, the degree of profanity in the target language may be amplified or reduced.
-
-If you want to avoid profanity in the translation, regardless of whether profanity is present in the source text, you can use the profanity filtering option. The option lets you choose whether profanity is deleted, marked with appropriate tags (giving you the option to add your own post-processing), or left unchanged. The accepted values of `ProfanityAction` are `Deleted`, `Marked`, and `NoAction` (default).
--
-| ProfanityAction | Action |
-| | |
-| `NoAction` | NoAction is the default behavior. Profanity will pass from source to target. <br> <br>**Example Source (Japanese)**: 彼はジャッカスです。 <br>**Example Translation (English)**: He's a jack. |
-| `Deleted` | Profane words will be removed from the output without replacement. <br> <br>**Example Source (Japanese)**: 彼はジャッカスです。 <br>**Example Translation (English)**: He's a** |
-| `Marked` | Profane words are replaced by a marker in the output. The marker depends on the `ProfanityMarker` parameter. <br> <br>For `ProfanityMarker=Asterisk`, profane words are replaced with `***`: <br>**Example Source (Japanese)**: 彼はジャッカスです。 <br>**Example Translation (English)**: He's a \\*\\*\\*. <br> <br>For `ProfanityMarker=Tag`, profane words are surrounded by XML tags &lt;profanity&gt; and &lt;/profanity&gt;: <br>**Example Source (Japanese)**: 彼はジャッカスです。 <br>**Example Translation (English)**: He's a &lt;profanity&gt;jack&lt;/profanity&gt;. |
-
-For example:
-
-```curl
-curl -X POST "https://api.cognitive.microsofttranslator.com/translate?api-version=3.0&from=en&to=de&profanityAction=Marked" -H "Ocp-Apim-Subscription-Key: <client-secret>" -H "Content-Type: application/json; charset=UTF-8" -d "[{'Text':'This is an <expletive> good idea.'}]"
-```
-
-This request returns:
-
-```
-[
- {
- "translations":[
- {"text":"Das ist eine *** gute Idee.","to":"de"}
- ]
- }
-]
-```
-
-Compare with:
-
-```curl
-curl -X POST "https://api.cognitive.microsofttranslator.com/translate?api-version=3.0&from=en&to=de&profanityAction=Marked&profanityMarker=Tag" -H "Ocp-Apim-Subscription-Key: <client-secret>" -H "Content-Type: application/json; charset=UTF-8" -d "[{'Text':'This is an <expletive> good idea.'}]"
-```
-
-That last request returns:
-
-```
-[
- {
- "translations":[
- {"text":"Das ist eine <profanity>verdammt</profanity> gute Idee.","to":"de"}
- ]
- }
-]
-```
-
-### Translate content with markup and decide what's translated
-
-It's common to translate content that includes markup such as content from an HTML page or content from an XML document. Include query parameter `textType=html` when translating content with tags. In addition, it's sometimes useful to exclude specific content from translation. You can use the attribute `class=notranslate` to specify content that should remain in its original language. In the following example, the content inside the first `div` element won't be translated, while the content in the second `div` element will be translated.
-
-```
-<div class="notranslate">This will not be translated.</div>
-<div>This will be translated. </div>
-```
-
-Here's a sample request to illustrate.
-
-```curl
-curl -X POST "https://api.cognitive.microsofttranslator.com/translate?api-version=3.0&from=en&to=zh-Hans&textType=html" -H "Ocp-Apim-Subscription-Key: <client-secret>" -H "Content-Type: application/json; charset=UTF-8" -d "[{'Text':'<div class=\"notranslate\">This will not be translated.</div><div>This will be translated.</div>'}]"
-```
-
-The response is:
-
-```
-[
- {
- "translations":[
- {"text":"<div class=\"notranslate\">This will not be translated.</div><div>这将被翻译。</div>","to":"zh-Hans"}
- ]
- }
-]
-```
-
-### Obtain alignment information
-
-Alignment is returned as a string value of the following format for every word of the source. The information for each word is separated by a space, including for non-space-separated languages (scripts) like Chinese:
-
-[[SourceTextStartIndex]:[SourceTextEndIndex]-[TgtTextStartIndex]:[TgtTextEndIndex]]
-
-Example alignment string: "0:0-7:10 1:2-11:20 3:4-0:3 3:4-4:6 5:5-21:21".
-
-In other words, the colon separates start and end index, the dash separates the languages, and space separates the words. One word may align with zero, one, or multiple words in the other language, and the aligned words may be non-contiguous. When no alignment information is available, the Alignment element will be empty. The method returns no error in that case.
-
-To receive alignment information, specify `includeAlignment=true` on the query string.
-
-```curl
-curl -X POST "https://api.cognitive.microsofttranslator.com/translate?api-version=3.0&from=en&to=fr&includeAlignment=true" -H "Ocp-Apim-Subscription-Key: <client-secret>" -H "Content-Type: application/json; charset=UTF-8" -d "[{'Text':'The answer lies in machine translation.'}]"
-```
-
-The response is:
-
-```
-[
- {
- "translations":[
- {
- "text":"La réponse se trouve dans la traduction automatique.",
- "to":"fr",
- "alignment":{"proj":"0:2-0:1 4:9-3:9 11:14-11:19 16:17-21:24 19:25-40:50 27:37-29:38 38:38-51:51"}
- }
- ]
- }
-]
-```
-
-The alignment information starts with `0:2-0:1`, which means that the first three characters in the source text (`The`) map to the first two characters in the translated text (`La`).
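-
-A small sketch of decoding the `proj` string into index pairs, using only the format described above (space-separated mappings, a colon between start and end indexes, and a dash between source and target):
-
-```python
-def parse_alignment(proj: str):
-    """Turn '0:2-0:1 4:9-3:9 ...' into ((srcStart, srcEnd), (tgtStart, tgtEnd)) tuples."""
-    pairs = []
-    for mapping in proj.split():
-        source, target = mapping.split("-")
-        src_start, src_end = (int(i) for i in source.split(":"))
-        tgt_start, tgt_end = (int(i) for i in target.split(":"))
-        pairs.append(((src_start, src_end), (tgt_start, tgt_end)))
-    return pairs
-
-# With the response above, the first pair maps 'The' (0:2) to 'La' (0:1).
-print(parse_alignment("0:2-0:1 4:9-3:9 11:14-11:19"))
-```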
-
-#### Limitations
-
-Obtaining alignment information is an experimental feature that we've enabled for prototyping research and experiences with potential phrase mappings. We may choose to stop supporting this feature in the future. Here are some of the notable restrictions where alignments aren't supported:
-
-* Alignment isn't available for text in HTML format, that is, `textType=html`.
-* Alignment is only returned for a subset of the language pairs:
- * English to/from any other language except Chinese Traditional, Cantonese (Traditional) or Serbian (Cyrillic).
- * from Japanese to Korean or from Korean to Japanese.
- * from Japanese to Chinese Simplified and Chinese Simplified to Japanese.
- * from Chinese Simplified to Chinese Traditional and Chinese Traditional to Chinese Simplified.
-* You won't receive alignment if the sentence is a canned translation. Examples of canned translations are "This is a test", "I love you", and other high-frequency sentences.
-* Alignment isn't available when you apply any of the approaches to prevent translation described in [Prevent translation](../prevent-translation.md).
-
-### Obtain sentence boundaries
-
-To receive information about sentence length in the source text and translated text, specify `includeSentenceLength=true` on the query string.
-
-```curl
-curl -X POST "https://api.cognitive.microsofttranslator.com/translate?api-version=3.0&from=en&to=fr&includeSentenceLength=true" -H "Ocp-Apim-Subscription-Key: <client-secret>" -H "Content-Type: application/json; charset=UTF-8" -d "[{'Text':'The answer lies in machine translation. The best machine translation technology cannot always provide translations tailored to a site or users like a human. Simply copy and paste a code snippet anywhere.'}]"
-```
-
-The response is:
-
-```json
-[
- {
- "translations":[
- {
- "text":"La réponse se trouve dans la traduction automatique. La meilleure technologie de traduction automatique ne peut pas toujours fournir des traductions adaptées à un site ou des utilisateurs comme un être humain. Il suffit de copier et coller un extrait de code n'importe où.",
- "to":"fr",
- "sentLen":{"srcSentLen":[40,117,46],"transSentLen":[53,157,62]}
- }
- ]
- }
-]
-```
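-
-Because `srcSentLen` and `transSentLen` list the length of each sentence in order, a caller can slice the source or translated string back into sentences. A short sketch, using the lengths from the response above:
-
-```python
-def split_sentences(text: str, sentence_lengths):
-    """Slice `text` into sentences using the per-sentence lengths from sentLen."""
-    sentences = []
-    start = 0
-    for length in sentence_lengths:
-        sentences.append(text[start:start + length])
-        start += length
-    return sentences
-
-source_text = ("The answer lies in machine translation. The best machine translation "
-               "technology cannot always provide translations tailored to a site or "
-               "users like a human. Simply copy and paste a code snippet anywhere.")
-print(split_sentences(source_text, [40, 117, 46]))
-```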
-
-### Translate with dynamic dictionary
-
-If you already know the translation you want to apply to a word or a phrase, you can supply it as markup within the request. The dynamic dictionary is only safe for proper nouns such as personal names and product names. **Note**: the dynamic dictionary feature is case-sensitive.
-
-The markup to supply uses the following syntax.
-
-```
-<mstrans:dictionary translation="translation of phrase">phrase</mstrans:dictionary>
-```
-
-For example, consider the English sentence "The word wordomatic is a dictionary entry." To preserve the word _wordomatic_ in the translation, send the request:
-
-```
-curl -X POST "https://api.cognitive.microsofttranslator.com/translate?api-version=3.0&from=en&to=de" -H "Ocp-Apim-Subscription-Key: <client-secret>" -H "Content-Type: application/json; charset=UTF-8" -d "[{'Text':'The word <mstrans:dictionary translation=\"wordomatic\">wordomatic</mstrans:dictionary> is a dictionary entry.'}]"
-```
-
-The result is:
-
-```
-[
- {
- "translations":[
- {"text":"Das Wort \"wordomatic\" ist ein W├╢rterbucheintrag.","to":"de"}
- ]
- }
-]
-```
-
-This dynamic-dictionary feature works the same way with `textType=text` or with `textType=html`. The feature should be used sparingly. The appropriate and far better way of customizing translation is by using Custom Translator. Custom Translator makes full use of context and statistical probabilities. If you can create training data that shows your word or phrase in context, you'll get much better results. [Learn more about Custom Translator](../customization.md).
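-
-If you build the markup programmatically, a small helper keeps the syntax in one place. This is an illustrative sketch; the function name isn't part of the API:
-
-```python
-def with_dynamic_dictionary(sentence: str, phrase: str, translation: str) -> str:
-    """Wrap `phrase` inside `sentence` with the dynamic dictionary markup shown above."""
-    markup = f'<mstrans:dictionary translation="{translation}">{phrase}</mstrans:dictionary>'
-    return sentence.replace(phrase, markup, 1)
-
-text = with_dynamic_dictionary(
-    "The word wordomatic is a dictionary entry.", "wordomatic", "wordomatic"
-)
-# Send [{"Text": text}] in the request body, with textType=text or textType=html.
-print(text)
-```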
-
-## Next steps
-
-> [!div class="nextstepaction"]
-> [Try the Translator quickstart](../quickstart-translator.md)
cognitive-services V3 0 Transliterate https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Translator/reference/v3-0-transliterate.md
- Title: Translator Transliterate Method-
-description: Convert text in one language from one script to another script with the Translator Transliterate method.
------- Previously updated : 02/01/2019---
-# Translator 3.0: Transliterate
-
-Converts text in one language from one script to another script.
-
-## Request URL
-
-Send a `POST` request to:
-
-```HTTP
-https://api.cognitive.microsofttranslator.com/transliterate?api-version=3.0
-```
-
-## Request parameters
-
-Request parameters passed on the query string are:
-
-| Query parameter | Description |
-| | |
-| api-version | *Required parameter*.<br/>Version of the API requested by the client. Value must be `3.0`. |
-| language | *Required parameter*.<br/>Specifies the language of the text to convert from one script to another. Possible languages are listed in the `transliteration` scope obtained by querying the service for its [supported languages](./v3-0-languages.md). |
-| fromScript | *Required parameter*.<br/>Specifies the script used by the input text. Look up [supported languages](./v3-0-languages.md) using the `transliteration` scope, to find input scripts available for the selected language. |
-| toScript | *Required parameter*.<br/>Specifies the output script. Look up [supported languages](./v3-0-languages.md) using the `transliteration` scope, to find output scripts available for the selected combination of input language and input script. |
-
-Request headers include:
-
-| Headers | Description |
-| | |
-| Authentication header(s) | <em>Required request header</em>.<br/>See [available options for authentication](./v3-0-reference.md#authentication). |
-| Content-Type | *Required request header*.<br/>Specifies the content type of the payload. Possible values are: `application/json` |
-| Content-Length | *Required request header*.<br/>The length of the request body. |
-| X-ClientTraceId | *Optional*.<br/>A client-generated GUID to uniquely identify the request. Note that you can omit this header if you include the trace ID in the query string using a query parameter named `ClientTraceId`. |
-
-## Request body
-
-The body of the request is a JSON array. Each array element is a JSON object with a string property named `Text`, which represents the string to convert.
-
-```json
-[
- {"Text":"πüôπéôπü½πüíπü»"},
- {"Text":"さようなら"}
-]
-```
-
-The following limitations apply:
-
-* The array can have at most 10 elements.
-* The text value of an array element cannot exceed 1,000 characters including spaces.
-* The entire text included in the request cannot exceed 5,000 characters including spaces.
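-
-As a short sketch, these limits can be checked on the client before sending the request (the helper below is illustrative, not part of the API):
-
-```python
-MAX_ELEMENTS = 10
-MAX_ELEMENT_CHARS = 1000
-MAX_TOTAL_CHARS = 5000
-
-def validate_transliterate_body(body):
-    """Raise ValueError if the request body exceeds the documented limits."""
-    if len(body) > MAX_ELEMENTS:
-        raise ValueError(f"At most {MAX_ELEMENTS} array elements are allowed.")
-    total = 0
-    for element in body:
-        length = len(element["Text"])
-        if length > MAX_ELEMENT_CHARS:
-            raise ValueError(f"An element exceeds {MAX_ELEMENT_CHARS} characters.")
-        total += length
-    if total > MAX_TOTAL_CHARS:
-        raise ValueError(f"The request exceeds {MAX_TOTAL_CHARS} characters in total.")
-
-validate_transliterate_body([{"Text": "こんにちは"}, {"Text": "さようなら"}])
-```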
-
-## Response body
-
-A successful response is a JSON array with one result for each element in the input array. A result object includes the following properties:
-
- * `text`: A string which is the result of converting the input string to the output script.
-
- * `script`: A string specifying the script used in the output.
-
-An example JSON response is:
-
-```json
-[
- {"text":"konnnichiha","script":"Latn"},
- {"text":"sayounara","script":"Latn"}
-]
-```
-
-## Response headers
-
-| Headers | Description |
-| | |
-| X-RequestId | Value generated by the service to identify the request. It is used for troubleshooting purposes. |
-
-## Response status codes
-
-The following are the possible HTTP status codes that a request returns.
-
-| Status Code | Description |
-| | |
-| 200 | Success. |
-| 400 | One of the query parameters is missing or not valid. Correct request parameters before retrying. |
-| 401 | The request could not be authenticated. Check that credentials are specified and valid. |
-| 403 | The request is not authorized. Check the detailed error message. This often indicates that all free translations provided with a trial subscription have been used up. |
-| 429 | The server rejected the request because the client has exceeded request limits. |
-| 500 | An unexpected error occurred. If the error persists, report it with: date and time of the failure, request identifier from response header `X-RequestId`, and client identifier from request header `X-ClientTraceId`. |
-| 503 | Server temporarily unavailable. Retry the request. If the error persists, report it with: date and time of the failure, request identifier from response header `X-RequestId`, and client identifier from request header `X-ClientTraceId`. |
-
-If an error occurs, the request also returns a JSON error response. The error code is a 6-digit number combining the 3-digit HTTP status code followed by a 3-digit number to further categorize the error. Common error codes can be found on the [v3 Translator reference page](./v3-0-reference.md#errors).
-
-## Examples
-
-The following example shows how to convert two Japanese strings into Romanized Japanese.
-
-The JSON payload for the request in this example:
-
-```json
-[{"text":"こんにちは","script":"jpan"},{"text":"さようなら","script":"jpan"}]
-```
-
-If you are using cURL in a command-line window that does not support Unicode characters, save the preceding JSON payload to a file named `request.txt`. Be sure to save the file with `UTF-8` encoding.
-
-```
-curl -X POST "https://api.cognitive.microsofttranslator.com/transliterate?api-version=3.0&language=ja&fromScript=Jpan&toScript=Latn" -H "X-ClientTraceId: 875030C7-5380-40B8-8A03-63DACCF69C11" -H "Ocp-Apim-Subscription-Key: <client-secret>" -H "Content-Type: application/json" -d @request.txt
-```
cognitive-services Service Limits https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Translator/service-limits.md
- Title: Service limits - Translator Service-
-description: This article lists service limits for the Translator text and document translation. Charges are incurred based on character count, not request frequency with a limit of 50,000 characters per request. Character limits are subscription-based, with F0 limited to 2 million characters per hour.
------ Previously updated : 05/08/2023---
-# Service limits for Azure Translator Service
-
-This article provides both a quick reference and detailed description of Azure Translator Service character and array limits for text and document translation.
-
-## Text translation
-
-Charges are incurred based on character count, not request frequency. Character limits are subscription-based.
-
-### Character and array limits per request
-
-Each translate request is limited to 50,000 characters, across all the target languages. For example, sending a translate request of 3,000 characters to translate to three different languages results in a request size of 3,000 &times; 3 = 9,000 characters, which is within the request limit. You're charged per character, not by the number of requests; therefore, it's recommended that you send shorter requests.
-
-The following table lists array element and character limits for each text translation operation.
-
-| Operation | Maximum Size of Array Element | Maximum Number of Array Elements | Maximum Request Size (characters) |
-|:-|:-|:-|:-|
-| **Translate** | 50,000| 1,000| 50,000 |
-| **Transliterate** | 5,000| 10| 5,000 |
-| **Detect** | 50,000 |100 |50,000 |
-| **BreakSentence** | 50,000| 100 |50,000 |
-| **Dictionary Lookup** | 100 |10| 1,000 |
-| **Dictionary Examples** | 100 for text and 100 for translation (200 total)| 10|2,000 |
-
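-Because the limit is counted across all target languages, it helps to compute the effective request size before sending. A quick sketch of the arithmetic (the helper name is illustrative):
-
-```python
-def translate_request_size(texts, target_languages):
-    """Characters counted toward the 50,000-character Translate limit."""
-    return sum(len(t) for t in texts) * len(target_languages)
-
-# 3,000 characters translated into three languages counts as 9,000 characters.
-print(translate_request_size(["x" * 3000], ["de", "it", "fr"]))  # 9000
-```
-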
-### Character limits per hour
-
-Your character limit per hour is based on your Translator subscription tier.
-
-The hourly quota should be consumed evenly throughout the hour. For example, at the F0 tier limit of 2 million characters per hour, characters should be consumed no faster than roughly 33,300 characters per minute, which is the 2-million-character hourly quota spread over a 60-minute sliding window.
-
-You're likely to receive an out-of-quota response under the following circumstances:
-
-* You've reached or surpassed the quota limit.
-* You've sent a large portion of the quota in too short a period of time.
-
-There are no limits on concurrent requests.
-
-| Tier | Character limit |
-||--|
-| F0 | 2 million characters per hour |
-| S1 | 40 million characters per hour |
-| S2 / C2 | 40 million characters per hour |
-| S3 / C3 | 120 million characters per hour |
-| S4 / C4 | 200 million characters per hour |
-
-Limits for [multi-service subscriptions](./reference/v3-0-reference.md#authentication) are the same as the S1 tier.
-
-These limits are restricted to Microsoft's standard translation models. Custom translation models that use Custom Translator are limited to 3,600 characters per second, per model.
-
-### Latency
-
-The Translator has a maximum latency of 15 seconds using standard models and 120 seconds when using custom models. Typically, responses *for text within 100 characters* are returned in 150 milliseconds to 300 milliseconds. The custom translator models have similar latency characteristics on sustained request rate and may have a higher latency when your request rate is intermittent. Response times vary based on the size of the request and language pair. If you don't receive a translation or an [error response](./reference/v3-0-reference.md#errors) within that time frame, check your code, your network connection, and retry.
-
-## Document Translation
-
-This table lists the content limits for data sent using Document Translation:
-
-|Attribute | Limit|
-|||
-|Document size| ≤ 40 MB |
-|Total number of files| ≤ 1,000 |
-|Total content size in a batch | ≤ 250 MB|
-|Number of target languages in a batch| ≤ 10 |
-|Size of Translation memory file| ≤ 10 MB|
-
-> [!NOTE]
-> Document Translation can't be used to translate secured documents such as those with an encrypted password or with restricted access to copy content.
-
-## Next steps
-
-* [Pricing](https://azure.microsoft.com/pricing/details/cognitive-services/translator-text-api/)
-* [Regional availability](https://azure.microsoft.com/global-infrastructure/services/?products=cognitive-services)
-* [v3 Translator reference](./reference/v3-0-reference.md)
cognitive-services Sovereign Clouds https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Translator/sovereign-clouds.md
- Title: "Translator: sovereign clouds"-
-description: Using Translator in sovereign clouds
------ Previously updated : 08/17/2022---
-# Translator in sovereign (national) clouds
-
- Azure sovereign clouds are isolated in-country/region platforms with independent authentication, storage, and compliance requirements. Sovereign clouds are often used within geographical boundaries where there's a strict data residency requirement. Translator is currently deployed in the following sovereign clouds:
-
-|Cloud | Region identifier |
-||--|
-| [Azure US Government](../../azure-government/documentation-government-welcome.md)|<ul><li>`usgovarizona` (US Gov Arizona)</li><li>`usgovvirginia` (US Gov Virginia)</li></ul>|
-| [Azure China 21 Vianet](/azure/china/overview-operations) |<ul><li>`chinaeast2` (East China 2)</li><li>`chinanorth` (China North)</li></ul>|
-
-## Azure portal endpoints
-
-The following table lists the base URLs for Azure sovereign cloud endpoints:
-
-| Sovereign cloud | Azure portal endpoint |
-| | -- |
-| Azure portal for US Government | `https://portal.azure.us` |
-| Azure portal China operated by 21 Vianet | `https://portal.azure.cn` |
-
-<!-- markdownlint-disable MD033 -->
-
-## Translator: sovereign clouds
-
-### [Azure US Government](#tab/us)
-
- The Azure Government cloud is available to US government customers and their partners. US federal, state, local, tribal governments and their partners have access to the Azure Government cloud dedicated instance. Cloud operations are controlled by screened US citizens.
-
-| Azure US Government | Availability and support |
-|--|--|
-|Azure portal | <ul><li>[Azure Government Portal](https://portal.azure.us/)</li></ul>|
-| Available regions</br></br>The region-identifier is a required header when using Translator for the government cloud. | <ul><li>`usgovarizona` </li><li> `usgovvirginia`</li></ul>|
-|Available pricing tiers|<ul><li>Free (F0) and Standard (S1). See [Translator pricing](https://azure.microsoft.com/pricing/details/cognitive-services/translator/)</li></ul>|
-|Supported Features | <ul><li>[Text Translation](reference/v3-0-reference.md)</li><li>[Document Translation](document-translation/overview.md)</li><li>[Custom Translator](custom-translator/overview.md)</li></ul>|
-|Supported Languages| <ul><li>[Translator language support](language-support.md)</li></ul>|
-
-<!-- markdownlint-disable MD036 -->
-
-### Endpoint
-
-#### Azure portal
-
-Base URL:
-
-```http
-https://portal.azure.us
-```
-
-#### Authorization token
-
-Replace the `<region-identifier>` parameter with the sovereign cloud identifier:
-
-|Cloud | Region identifier |
-||--|
-| Azure US Government|<ul><li>`usgovarizona` (US Gov Arizona)</li><li>`usgovvirginia` (US Gov Virginia)</li></ul>|
-| Azure China 21 Vianet|<ul><li>`chinaeast2` (East China 2)</li><li>`chinanorth` (China North)</li></ul>|
-
-```http
-https://<region-identifier>.api.cognitive.microsoft.us/sts/v1.0/issueToken
-```
-
-#### Text translation
-
-```http
-https://api.cognitive.microsofttranslator.us/
-```
-
-#### Document Translation custom endpoint
-
-```http
-https://<NAME-OF-YOUR-RESOURCE>.cognitiveservices.azure.us/translator/text/batch/v1.0
-```
-
-#### Custom Translator portal
-
-```http
-https://portal.customtranslator.azure.us/
-```
-
-### Example API translation request
-
-Translate a single sentence from English to Simplified Chinese.
-
-**Request**
-
-```curl
-curl -X POST "https://api.cognitive.microsofttranslator.us/translate?api-version=3.0?&from=en&to=zh-Hans" -H "Ocp-Apim-Subscription-Key: <key>" -H "Ocp-Apim-Subscription-Region: chinanorth" -H "Content-Type: application/json; charset=UTF-8" -d "[{'Text':'你好, 你叫什么名字?'}]"
-```
-
-**Response body**
-
-```JSON
-[
- {
- "translations":[
- {"text": "Hello, what is your name?", "to": "en"}
- ]
- }
-]
-```
-
-> [!div class="nextstepaction"]
-> [Azure Government: Translator text reference](../../azure-government/documentation-government-cognitiveservices.md#translator)
-
-### [Azure China 21 Vianet](#tab/china)
-
-The Azure China cloud is a physical and logical network-isolated instance of cloud services located in China. In order to apply for an Azure China account, you need a Chinese legal entity, Internet Content provider (ICP) license, and physical presence within China.
-
-|Azure China 21 Vianet | Availability and support |
-|||
-|Azure portal |<ul><li>[Azure China 21 Vianet Portal](https://portal.azure.cn/)</li></ul>|
-|Regions <br></br>The region-identifier is a required header when using a multi-service resource. | <ul><li>`chinanorth` </li><li> `chinaeast2`</li></ul>|
-|Supported Feature|<ul><li>[Text Translation](https://docs.azure.cn/cognitive-services/translator/reference/v3-0-reference)</li><li>[Document Translation](document-translation/overview.md)</li></ul>|
-|Supported Languages|<ul><li>[Translator language support.](https://docs.azure.cn/cognitive-services/translator/language-support)</li></ul>|
-
-<!-- markdownlint-disable MD036 -->
-<!-- markdownlint-disable MD024 -->
-
-### Endpoint
-
-Base URL
-
-#### Azure portal
-
-```http
-https://portal.azure.cn
-```
-
-#### Authorization token
-
-Replace the `<region-identifier>` parameter with the sovereign cloud identifier:
-
-```http
-https://<region-identifier>.api.cognitive.azure.cn/sts/v1.0/issueToken
-```
-
-#### Text translation
-
-```http
-https://api.translator.azure.cn/translate
-```
-
-### Example text translation request
-
-Translate a single sentence from English to Simplified Chinese.
-
-**Request**
-
-```curl
-curl -X POST "https://api.translator.azure.cn/translate?api-version=3.0&from=en&to=zh-Hans" -H "Ocp-Apim-Subscription-Key: <client-secret>" -H "Content-Type: application/json; charset=UTF-8" -d "[{'Text': 'Hello, what is your name?'}]"
-```
-
-**Response body**
-
-```JSON
-[
- {
- "translations":[
- {"text": "你好, 你叫什么名字?", "to": "zh-Hans"}
- ]
- }
-]
-```
-
-#### Document Translation custom endpoint
-
-```http
-https://<NAME-OF-YOUR-RESOURCE>.cognitiveservices.azure.cn/translator/text/batch/v1.0
-```
-
-### Example batch translation request
-
-```json
-{
- "inputs": [
- {
- "source": {
- "sourceUrl": "https://<storage_acount>.blob.core.chinacloudapi.cn/source-en?sv=2019-12-12&st=2021-03-05T17%3A45%3A25Z&se=2021-03-13T17%3A45%3A00Z&sr=c&sp=rl&sig=SDRPMjE4nfrH3csmKLILkT%2Fv3e0Q6SWpssuuQl1NmfM%3D"
- },
- "targets": [
- {
- "targetUrl": "https://<storage_acount>.blob.core.chinacloudapi.cn/target-zh-Hans?sv=2019-12-12&st=2021-03-05T17%3A49%3A02Z&se=2021-03-13T17%3A49%3A00Z&sr=c&sp=wdl&sig=Sq%2BYdNbhgbq4hLT0o1UUOsTnQJFU590sWYo4BOhhQhs%3D",
- "language": "zh-Hans"
- }
- ]
- }
- ]
-}
-```
---
-## Next steps
-
-> [!div class="nextstepaction"]
-> [Learn more about Translator](index.yml)
cognitive-services Text Sdk Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Translator/text-sdk-overview.md
- Title: Azure Text Translation SDKs -
-description: Azure Text Translation software development kits (SDKs) expose Text Translation features and capabilities, using C#, Java, JavaScript, and Python programming language.
------ Previously updated : 05/12/2023-
-recommendations: false
--
-<!-- markdownlint-disable MD024 -->
-<!-- markdownlint-disable MD036 -->
-<!-- markdownlint-disable MD001 -->
-<!-- markdownlint-disable MD051 -->
-
-# Azure Text Translation SDK (preview)
-
-> [!IMPORTANT]
->
-> * The Translator text SDKs are currently available in public preview. Features, approaches and processes may change, prior to General Availability (GA), based on user feedback.
-
-Azure Text Translation is a cloud-based REST API feature of the Azure Translator service. The Text Translation API enables quick and accurate source-to-target text translations in real time. The Text Translation software development kit (SDK) is a set of libraries and tools that enable you to easily integrate Text Translation REST API capabilities into your applications. Text Translation SDK is available across programming platforms in C#/.NET, Java, JavaScript, and Python.
-
-## Supported languages
-
-The Text Translation SDK supports the following programming languages and package versions:
-
-| Language → SDK version | Package|Client library| Supported API version|
-|:-:|:-|:-|:-|
-|[.NET/C# → 1.0.0-beta.1](https://azuresdkdocs.blob.core.windows.net/$web/dotnet/Azure.AI.Translation.Text/1.0.0-beta.1/https://docsupdatetracker.net/index.html)|[NuGet](https://www.nuget.org/packages/Azure.AI.Translation.Text/1.0.0-beta.1)|[Azure SDK for .NET](/dotnet/api/overview/azure/ai.translation.text-readme?view=azure-dotnet-preview&preserve-view=true)|Translator v3.0|
-|[Java → 1.0.0-beta.1](https://azuresdkdocs.blob.core.windows.net/$web/java/azure-ai-translation-text/1.0.0-beta.1/https://docsupdatetracker.net/index.html)|[MVN repository](https://mvnrepository.com/artifact/com.azure/azure-ai-translation-text/1.0.0-beta.1)|[Azure SDK for Java](/java/api/overview/azure/ai-translation-text-readme?view=azure-java-preview&preserve-view=true)|Translator v3.0|
-|[JavaScript → 1.0.0-beta.1](https://azuresdkdocs.blob.core.windows.net/$web/javascript/azure-cognitiveservices-translatortext/1.0.0/https://docsupdatetracker.net/index.html)|[npm](https://www.npmjs.com/package/@azure-rest/ai-translation-text/v/1.0.0-beta.1)|[Azure SDK for JavaScript](/javascript/api/overview/azure/text-translation?view=azure-node-preview&preserve-view=true) |Translator v3.0 |
-|**Python → 1.0.0b1**|[PyPi](https://pypi.org/project/azure-ai-translation-text/1.0.0b1/)|[Azure SDK for Python](/python/api/azure-ai-translation-text/azure.ai.translation.text?view=azure-python-preview&preserve-view=true) |Translator v3.0|
-
-## Changelog and release history
-
-This section provides a version-based description of Text Translation feature and capability releases, changes, updates, and enhancements.
-
-#### Translator Text SDK April 2023 preview release
-
-This release includes the following:
-
-### [**C#**](#tab/csharp)
-
-* **Version 1.0.0-beta.1 (2023-04-17)**
-* **Targets Text Translation v3.0**
-* **Initial version release**
-
-[**Package (NuGet)**](https://www.nuget.org/packages/Azure.AI.Translation.Text/1.0.0-beta.1)
-
-[**Changelog/Release History**](https://github.com/Azure/azure-sdk-for-net/blob/main/sdk/translation/Azure.AI.Translation.Text/CHANGELOG.md#100-beta1-2023-04-17)
-
-[**README**](https://github.com/Azure/azure-sdk-for-net/tree/main/sdk/translation/Azure.AI.Translation.Text#readme)
-
-[**Samples**](https://github.com/Azure/azure-sdk-for-net/tree/main/sdk/translation/Azure.AI.Translation.Text/samples)
-
-### [**Java**](#tab/java)
-
-* **Version 1.0.0-beta.1 (2023-04-18)**
-* **Targets Text Translation v3.0**
-* **Initial version release**
-
-[**Package (MVN)**](https://mvnrepository.com/artifact/com.azure/azure-ai-translation-text/1.0.0-beta.1)
-
-[**Changelog/Release History**](https://github.com/Azure/azure-sdk-for-jav#100-beta1-2023-04-18)
-
-[**README**](https://github.com/Azure/azure-sdk-for-java/tree/main/sdk/translation/azure-ai-translation-text#readme)
-
-[**Samples**](https://github.com/Azure/azure-sdk-for-java/tree/main/sdk/translation/azure-ai-translation-text#next-steps)
-
-### [**JavaScript**](#tab/javascript)
-
-* **Version 1.0.0-beta.1 (2023-04-18)**
-* **Targets Text Translation v3.0**
-* **Initial version release**
-
-[**Package (npm)**](https://www.npmjs.com/package/@azure-rest/ai-translation-text/v/1.0.0-beta.1)
-
-[**Changelog/Release History**](https://github.com/Azure/azure-sdk-for-js/blob/main/sdk/translation/ai-translation-text-rest/CHANGELOG.md#100-beta1-2023-04-18)
-
-[**README**](https://github.com/Azure/azure-sdk-for-js/blob/main/sdk/translation/ai-translation-text-rest/README.md)
-
-[**Samples**](https://github.com/Azure/azure-sdk-for-js/tree/main/sdk/translation/ai-translation-text-rest/samples/v1-beta)
-
-### [**Python**](#tab/python)
-
-* **Version 1.0.0b1 (2023-04-19)**
-* **Targets Text Translation v3.0**
-* **Initial version release**
-
-[**Package (PyPi)**](https://pypi.org/project/azure-ai-translation-text/1.0.0b1/)
-
-[**Changelog/Release History**](https://github.com/Azure/azure-sdk-for-python/blob/main/sdk/translation/azure-ai-translation-text/CHANGELOG.md#100b1-2023-04-19)
-
-[**README**](https://github.com/Azure/azure-sdk-for-python/blob/main/sdk/translation/azure-ai-translation-text/README.md)
-
-[**Samples**](https://github.com/Azure/azure-sdk-for-python/tree/main/sdk/translation/azure-ai-translation-text/samples)
---
-## Use Text Translation SDK in your applications
-
-The Text Translation SDK enables the use and management of the Text Translation service in your application. The SDK builds on the underlying Text Translation REST API, allowing you to easily use those APIs within your programming language paradigm. Here's how you use the Text Translation SDK for your preferred programming language:
-
-### 1. Install the SDK client library
-
-### [C#/.NET](#tab/csharp)
-
-```dotnetcli
-dotnet add package Azure.AI.Translation.Text --version 1.0.0-beta.1
-```
-
-```powershell
-Install-Package Azure.AI.Translation.Text -Version 1.0.0-beta.1
-```
-
-### [Java](#tab/java)
-
-```xml
-<dependency>
- <groupId>com.azure</groupId>
- <artifactId>azure-ai-translation-text</artifactId>
- <version>1.0.0-beta.1</version>
-</dependency>
-```
-
-```kotlin
-implementation("com.azure:azure-ai-translation-text:1.0.0-beta.1")
-```
-
-### [JavaScript](#tab/javascript)
-
-```console
-npm i @azure-rest/ai-translation-text@1.0.0-beta.1
-```
-
-### [Python](#tab/python)
-
-```console
-pip install azure-ai-translation-text==1.0.0b1
-```
---
-### 2. Import the SDK client library into your application
-
-### [C#/.NET](#tab/csharp)
-
-```csharp
-using Azure;
-using Azure.AI.Translation.Text;
-```
-
-### [Java](#tab/java)
-
-```java
-import java.util.List;
-import java.util.ArrayList;
-import com.azure.ai.translation.text.models.*;
-import com.azure.ai.translation.text.TextTranslationClientBuilder;
-import com.azure.ai.translation.text.TextTranslationClient;
-
-import com.azure.core.credential.AzureKeyCredential;
-```
-
-### [JavaScript](#tab/javascript)
-
-```javascript
-const TextTranslationClient = require("@azure-rest/ai-translation-text").default;
-```
-
-### [Python](#tab/python)
-
-```python
-from azure.ai.translation.text import TextTranslationClient, TranslatorCredential
-from azure.ai.translation.text.models import InputTextItem
-```
---
-### 3. Authenticate the client
-
-Interaction with the Translator service using the client library begins with creating an instance of the `TextTranslationClient` class. You need your API key and region to instantiate a client object.
-Your Text Translation API key is found on the **Keys and Endpoint** page for your resource in the Azure portal.
-
-### [C#/.NET](#tab/csharp)
-
-**Using the global endpoint (default)**
-
-```csharp
-string key = "<your-key>";
-
-AzureKeyCredential credential = new(key);
-TextTranslationClient client = new(credential);
-```
-
-**Using a regional endpoint**
-
-```csharp
-
-Uri endpoint = new("<your-endpoint>");
-string key = "<your-key>";
-string region = "<region>";
-
-AzureKeyCredential credential = new(key);
-TextTranslationClient client = new(credential, region);
-```
-
-### [Java](#tab/java)
-
-**Using the global endpoint (default)**
-
-```java
-
-String apiKey = "<your-key>";
-AzureKeyCredential credential = new AzureKeyCredential(apiKey);
-
-TextTranslationClient client = new TextTranslationClientBuilder()
- .credential(credential)
- .buildClient();
-```
-
-**Using a regional endpoint**
-
-```java
-String apiKey = "<your-key>";
-String endpoint = "<your-endpoint>";
-String region = "<region>";
-
-AzureKeyCredential credential = new AzureKeyCredential(apiKey);
-
-TextTranslationClient client = new TextTranslationClientBuilder()
-.credential(credential)
-.region(region)
-.endpoint(endpoint)
-.buildClient();
-
-```
-
-### [JavaScript](#tab/javascript)
-
-```javascript
-
-const TextTranslationClient = require("@azure-rest/ai-translation-text").default;
-
-const apiKey = "<your-key>";
-const endpoint = "<your-endpoint>";
-const region = "<region>";
-
-// A TranslatorCredential is a plain object with your key and region.
-const translateCredential = { key: apiKey, region };
-const translationClient = TextTranslationClient(endpoint, translateCredential);
-
-```
-
-### [Python](#tab/python)
-
-```python
-
-from azure.ai.translation.text import TextTranslationClient, TranslatorCredential
-from azure.ai.translation.text.models import InputTextItem
-from azure.core.exceptions import HttpResponseError
-
-key = "<your-key>"
-endpoint = "<your-endpoint>"
-region = "<region>"
-
-credential = TranslatorCredential(key, region)
-text_translator = TextTranslationClient(endpoint=endpoint, credential=credential)
-```
---
-### 4. Build your application
-
-### [C#/.NET](#tab/csharp)
-
-Create a client object to interact with the Text Translation SDK, and then call methods on that client object to interact with the service. The SDKs provide both synchronous and asynchronous methods. For more insight, *see* the Text Translation [sample repository](https://github.com/Azure/azure-sdk-for-net/tree/main/sdk/translation/Azure.AI.Translation.Text/samples) for .NET/C#.
-
-### [Java](#tab/java)
-
-Create a client object to interact with the Text Translation SDK, and then call methods on that client object to interact with the service. The SDKs provide both synchronous and asynchronous methods. For more insight, *see* the Text Translation [sample repository](https://github.com/Azure/azure-sdk-for-java/tree/main/sdk/translation/azure-ai-translation-text/src/samples/java/com/azure/ai/translation/text) for Java.
-
-### [JavaScript](#tab/javascript)
-
-Create a client object to interact with the Text Translation SDK, and then call methods on that client object to interact with the service. The SDKs provide both synchronous and asynchronous methods. For more insight, *see* the Text Translation [sample repository](https://github.com/Azure/azure-sdk-for-js/tree/main/sdk/translation/ai-translation-text-rest/samples/v1-beta) for JavaScript or TypeScript.
-
-### [Python](#tab/python)
-
-Create a client object to interact with the Text Translation SDK, and then call methods on that client object to interact with the service. The SDKs provide both synchronous and asynchronous methods. For more insight, *see* the Text Translation [sample repository](https://github.com/Azure/azure-sdk-for-python/tree/main/sdk/translation/azure-ai-translation-text/samples) for Python.
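-
-Here's a minimal sketch of what a synchronous translate call looks like with the preview Python package, reusing the authentication pattern from step 3. The `translate` method signature shown here is based on the preview SDK samples and may change between beta releases, so check the sample repository above for current usage.
-
-```python
-from azure.ai.translation.text import TextTranslationClient, TranslatorCredential
-from azure.ai.translation.text.models import InputTextItem
-from azure.core.exceptions import HttpResponseError
-
-# Client setup repeats the authentication step above.
-credential = TranslatorCredential("<your-key>", "<region>")
-text_translator = TextTranslationClient(endpoint="<your-endpoint>", credential=credential)
-
-try:
-    # Translate one input string to Swahili and Italian.
-    input_text_elements = [InputTextItem(text="Hello, friend! What did you do today?")]
-    response = text_translator.translate(content=input_text_elements, to=["sw", "it"])
-
-    for translation_item in response:
-        for translated_text in translation_item.translations:
-            print(f"{translated_text.to}: {translated_text.text}")
-except HttpResponseError as exception:
-    print(f"Error code: {exception.error.code}")
-    print(f"Message: {exception.error.message}")
-```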
---
-## Help options
-
-The [Microsoft Q&A](/answers/tags/132/azure-translator) and [Stack Overflow](https://stackoverflow.com/questions/tagged/azure-text-translation) forums are available for the developer community to ask and answer questions about Azure Text Translation and other services. Microsoft monitors the forums and replies to questions that the community has yet to answer. To make sure that we see your question, tag it with **`azure-text-translation`**.
-
-## Next steps
-
->[!div class="nextstepaction"]
-> [**Text Translation quickstart**](quickstart-translator-sdk.md)
-
->[!div class="nextstepaction"]
-> [**Text Translation v3.0 reference guide**](reference/v3-0-reference.md)
cognitive-services Text Translation Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Translator/text-translation-overview.md
- Title: What is Azure Text Translation?-
-description: Integrate the Text Translation API into your applications, websites, tools, and other solutions to provide multi-language user experiences.
------ Previously updated : 04/26/2023---
-# What is Azure Text Translation?
-
- Azure Text Translation is a cloud-based REST API feature of the Translator service that uses neural machine translation technology to enable quick and accurate source-to-target text translation in real time across all [supported languages](language-support.md). In this overview, you learn how the Text Translation REST APIs enable you to build intelligent solutions for your applications and workflows.
-
-Text translation documentation contains the following article types:
-
-* [**Quickstarts**](quickstart-translator.md). Getting-started instructions to guide you through making requests to the service.
-* [**How-to guides**](how-to-create-translator-resource.md). Instructions for accessing and using the service in more specific or customized ways.
-* [**Reference articles**](reference/v3-0-reference.md). REST API documentation and programming language-based content.
-
-## Text translation features
-
- Text Translation supports the following methods:
-
-* [**Languages**](reference/v3-0-languages.md). Returns a list of languages supported by **Translate**, **Transliterate**, and **Dictionary Lookup** operations. This request doesn't require authentication; just copy and paste the following GET request into Postman or your favorite API tool or browser, or try the Python sketch after this list:
-
- ```http
- https://api.cognitive.microsofttranslator.com/languages?api-version=3.0
- ```
-
-* [**Translate**](reference/v3-0-translate.md#translate-to-multiple-languages). Renders single source-language text to multiple target-language texts with a single request.
-
-* [**Transliterate**](reference/v3-0-transliterate.md). Converts characters or letters of a source language to the corresponding characters or letters of a target language.
-
-* [**Detect**](reference/v3-0-detect.md). Returns the source language code and a boolean value denoting whether the detected language is supported for text translation and transliteration.
-
- > [!NOTE]
- > You can **Translate, Transliterate, and Detect** text with [a single REST API call](reference/v3-0-translate.md#translate-a-single-input-with-language-autodetection) .
-
-* [**Dictionary lookup**](reference/v3-0-dictionary-lookup.md). Returns equivalent words for the source term in the target language.
-* [**Dictionary example**](reference/v3-0-dictionary-examples.md). Returns grammatical structure and context examples for the source term and target term pair.
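-
-For example, here's a small Python sketch of the same **Languages** request shown in the list above. It uses the `requests` library (`pip install requests`), needs no key, and reads the `translation` group from the response (the endpoint also returns `transliteration` and `dictionary` groups):
-
-```python
-import requests
-
-# List the languages supported by the Translator service; no authentication needed.
-response = requests.get(
-    "https://api.cognitive.microsofttranslator.com/languages",
-    params={"api-version": "3.0"},
-)
-response.raise_for_status()
-languages = response.json()
-print(f"Translation is supported for {len(languages['translation'])} languages.")
-```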
-
-## Text translation deployment options
-
-Add Text Translation to your projects and applications using the following resources:
-
-* Access the cloud-based Translator service via the [**REST API**](reference/rest-api-guide.md), available in Azure.
-
-* Use the REST API [translate request](containers/translator-container-supported-parameters.md) with the [**Text translation Docker container**](containers/translator-how-to-install-container.md).
-
- > [!IMPORTANT]
- >
- > * To use the Translator container you must complete and submit the [**Azure Cognitive Services Application for Gated Services**](https://aka.ms/csgate-translator) online request form and have it approved to acquire access to the container.
- >
- > * The [**Translator container image**](https://hub.docker.com/_/microsoft-azure-cognitive-services-translator-text-translation) supports limited features compared to cloud offerings.
- >
-
-## Get started with Text Translation
-
-Ready to begin?
-
-* [**Create a Translator resource**](how-to-create-translator-resource.md "Go to the Azure portal.") in the Azure portal.
-
-* [**Get your access keys and API endpoint**](how-to-create-translator-resource.md#authentication-keys-and-endpoint-url). An endpoint URL and read-only key are required for authentication.
-
-* Explore our [**Quickstart**](quickstart-translator.md "Learn to use Translator via REST and a preferred programming language.") and view use cases and code samples for the following programming languages:
- * [**C#/.NET**](quickstart-translator.md?tabs=csharp)
- * [**Go**](quickstart-translator.md?tabs=go)
- * [**Java**](quickstart-translator.md?tabs=java)
- * [**JavaScript/Node.js**](quickstart-translator.md?tabs=nodejs)
- * [**Python**](quickstart-translator.md?tabs=python)
-
-## Next steps
-
-Dive deeper into the Text Translation REST API:
-
-> [!div class="nextstepaction"]
-> [See the REST API reference](./reference/v3-0-reference.md)
cognitive-services Translator Faq https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Translator/translator-faq.md
- Title: Frequently asked questions - Translator-
-description: Get answers to frequently asked questions about the Translator API in Azure Cognitive Services.
------- Previously updated : 05/18/2021---
-# Frequently asked questions: Translator API
-
-## How does Translator count characters?
-
-Translator counts every code point defined in Unicode as a character. Each target language counts as a separate translation, even if the request was made in a single API call translating to multiple languages. The length of the response doesn't matter, and the number of requests, words, bytes, or sentences isn't relevant to the character count.
-
-Translator counts the following input:
-
-* Text passed to Translator in the body of a request.
- * `Text` when using the [Translate](reference/v3-0-translate.md), [Transliterate](reference/v3-0-transliterate.md), and [Dictionary Lookup](reference/v3-0-dictionary-lookup.md) methods
- * `Text` and `Translation` when using the [Dictionary Examples](reference/v3-0-dictionary-examples.md) method.
-
-* All markup: HTML, XML tags, etc. within the request body text field. JSON notation used to build the request (for instance the key "Text:") is **not** counted.
-* An individual letter.
-* Punctuation.
-* A space, tab, markup, or any white-space character.
-* A repeated translation, even if you have previously translated the same text. Every character submitted to the translate function is counted even when the content is unchanged or the source and target language are the same.
-
-For scripts based on graphic symbols, such as written Chinese and Japanese Kanji, the Translator service counts the number of Unicode code points. One character per symbol. Exception: Unicode surrogate pairs count as two characters.
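-
-As a rough illustration of these rules, the following Python sketch estimates the billable character count for a single request: it counts UTF-16 code units (so surrogate pairs count as two) and multiplies by the number of target languages. Treat it as an approximation for planning, not the official meter.
-
-```python
-def estimate_billable_characters(text: str, target_language_count: int) -> int:
-    # UTF-16 code units: one per BMP code point, two for characters that need a
-    # surrogate pair, which matches the counting rules described above.
-    utf16_code_units = len(text.encode("utf-16-le")) // 2
-    # Each target language is billed as a separate translation.
-    return utf16_code_units * target_language_count
-
-print(estimate_billable_characters("Hello, friend!", 2))  # 14 characters x 2 targets = 28
-```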
-
-Calls to the **Detect** and **BreakSentence** methods aren't counted in the character consumption. However, we do expect calls to the Detect and BreakSentence methods to be reasonably proportionate to the use of other counted functions. If the number of Detect or BreakSentence calls exceeds the number of other counted methods by 100 times, Microsoft reserves the right to restrict your use of the Detect and BreakSentence methods.
-
-For detailed information regarding Azure Translator Service request limits, *see* [**Text translation request limits**](request-limits.md#text-translation).
-
-## Where can I see my monthly usage?
-
-The [Azure pricing calculator](https://azure.microsoft.com/pricing/calculator/) can be used to estimate your costs. You can also monitor, view, and add Azure alerts for your Azure services in your user account in the Azure portal:
-
-1. Sign in to the [Azure portal](https://portal.azure.com).
-1. Navigate to your Translator resource Overview page.
-1. Select the **subscription** for your Translator resource.
-
- :::image type="content" source="media/azure-portal-overview.png" alt-text="Screenshot of the subscription link on overview page in the Azure portal.":::
-
-1. In the left rail, make your selection under **Cost Management**:
-
- :::image type="content" source="media/azure-portal-cost-management.png" alt-text="Screenshot of the cost management resources links in the Azure portal.":::
-
-## Is attribution required when using Translator?
-
-Attribution isn't required when using Translator for text and speech translation. It's recommended that you inform users that the content they're viewing is machine translated.
-
-If attribution is present, it must conform to the [Translator attribution guidelines](https://www.microsoft.com/translator/business/attribution/).
-
-## Is Translator a replacement for a human translator?
-
-No, both have their place as essential tools for communication. Use machine translation where the quantity of content, speed of creation, and budget constraints make it impossible to use human translation.
-
-Machine translation has been used as a first pass by several of our [language service provider (LSP)](https://www.microsoft.com/translator/business/partners/) partners before human translation, and it can improve productivity by up to 50 percent. For a list of LSP partners, visit the Translator partner page.
--
-> [!TIP]
-> If you can't find answers to your questions in this FAQ, try asking the Translator API community on [StackOverflow](https://stackoverflow.com/search?q=%5Bmicrosoft-cognitive%5D+or+%5Bmicrosoft-cognitive%5D+translator&s=34bf0ce2-b6b3-4355-86a6-d45a1121fe27).
cognitive-services Translator Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Translator/translator-overview.md
- Title: What is the Microsoft Azure Cognitive Services Translator?-
-description: Integrate Translator into your applications, websites, tools, and other solutions to provide multi-language user experiences.
------ Previously updated : 11/16/2022---
-# What is Azure Cognitive Services Translator?
-
-Translator Service is a cloud-based neural machine translation service that is part of the [Azure Cognitive Services](../what-are-cognitive-services.md) family of REST APIs and can be used with any operating system. Translator powers many Microsoft products and services used by thousands of businesses worldwide to perform language translation and other language-related operations. In this overview, you learn how Translator can enable you to build intelligent, multi-language solutions for your applications across all [supported languages](./language-support.md).
-
-Translator documentation contains the following article types:
-
-* [**Quickstarts**](quickstart-translator.md). Getting-started instructions to guide you through making requests to the service.
-* [**How-to guides**](translator-text-apis.md). Instructions for accessing and using the service in more specific or customized ways.
-* [**Reference articles**](reference/v3-0-reference.md). REST API documentation and programming language-based content.
-
-## Translator features and development options
-
-Translator service supports the following features. Use the links in this table to learn more about each feature and browse the API references.
-
-| Feature | Description | Development options |
-|-|-|--|
-| [**Text Translation**](text-translation-overview.md) | Execute text translation between supported source and target languages in real time. | <ul><li>[**REST API**](reference/rest-api-guide.md) </li><li>[Text translation Docker container](containers/translator-how-to-install-container.md).</li></ul> |
-| [**Document Translation**](document-translation/overview.md) | Translate batch and complex files while preserving the structure and format of the original documents. | <ul><li>[**REST API**](document-translation/reference/rest-api-guide.md)</li><li>[**Client-library SDK**](document-translation/how-to-guides/use-client-sdks.md)</li></ul> |
-| [**Custom Translator**](custom-translator/overview.md) | Build customized models to translate domain- and industry-specific language, terminology, and style. | <ul><li>[**Custom Translator portal**](https://portal.customtranslator.azure.ai/)</li></ul> |
-
-For detailed information regarding Azure Translator Service request limits, *see* [**Text translation request limits**](request-limits.md#text-translation).
-
-## Try the Translator service for free
-
-First, you need a Microsoft account; if you don't have one, you can sign up for free at the [**Microsoft account portal**](https://account.microsoft.com/account). Select **Create a Microsoft account** and follow the steps to create and verify your new account.
-
-Next, you need an Azure account. Navigate to the [**Azure sign-up page**](https://azure.microsoft.com/free/ai/), select the **Start free** button, and create a new Azure account using your Microsoft account credentials.
-
-Now, you're ready to get started! [**Create a Translator service**](how-to-create-translator-resource.md "Go to the Azure portal."), [**get your access keys and API endpoint**](how-to-create-translator-resource.md#authentication-keys-and-endpoint-url "An endpoint URL and read-only key are required for authentication."), and try our [**quickstart**](quickstart-translator.md "Learn to use Translator via REST.").
-
-## Next steps
-
-* Learn more about the following features:
- * [**Text Translation**](text-translation-overview.md)
- * [**Document Translation**](document-translation/overview.md)
- * [**Custom Translator**](custom-translator/overview.md)
-* Review [**Translator pricing**](https://azure.microsoft.com/pricing/details/cognitive-services/translator-text-api/).
cognitive-services Translator Text Apis https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Translator/translator-text-apis.md
- Title: "Use Azure Cognitive Services Translator APIs"-
-description: "Learn to translate text, transliterate text, detect language and more with the Translator service. Examples are provided in C#, Java, JavaScript and Python."
------ Previously updated : 06/12/2023--
-keywords: translator, translator service, translate text, transliterate text, language detection
--
-<!-- markdownlint-disable MD033 -->
-<!-- markdownlint-disable MD001 -->
-<!-- markdownlint-disable MD024 -->
-
-# Use Azure Cognitive Services Translator APIs
-
-In this how-to guide, you learn to use the [Translator service REST APIs](reference/rest-api-guide.md). You start with basic examples and move onto some core configuration options that are commonly used during development, including:
-
-* [Translation](#translate-text)
-* [Transliteration](#transliterate-text)
-* [Language identification/detection](#detect-language)
-* [Calculate sentence length](#get-sentence-length)
-* [Get alternate translations](#dictionary-lookup-alternate-translations) and [examples of word usage in a sentence](#dictionary-examples-translations-in-context)
-
-## Prerequisites
-
-* Azure subscription - [Create one for free](https://azure.microsoft.com/free/cognitive-services/)
-
-* A Cognitive Services or Translator resource. Once you have your Azure subscription, create a [single-service](https://portal.azure.com/#create/Microsoft.CognitiveServicesTextTranslation) or a [multi-service](https://portal.azure.com/#create/Microsoft.CognitiveServicesAllInOne) resource in the Azure portal to get your key and endpoint. After it deploys, select **Go to resource**.
-
-* You can use the free pricing tier (F0) to try the service, and upgrade later to a paid tier for production.
-
-* You need the key and endpoint from the resource to connect your application to the Translator service. Later, you paste your key and endpoint into the code samples. You can find these values on the Azure portal **Keys and Endpoint** page:
-
- :::image type="content" source="media/keys-and-endpoint-portal.png" alt-text="Screenshot: Azure portal keys and endpoint page.":::
-
-> [!IMPORTANT]
-> Remember to remove the key from your code when you're done, and never post it publicly. For production, use a secure way of storing and accessing your credentials, such as [Azure Key Vault](../../key-vault/general/overview.md). For more information, *see* [Cognitive Services security](../cognitive-services-security.md).
-
-## Headers
-
-To call the Translator service via the [REST API](reference/rest-api-guide.md), you need to make sure the following headers are included with each request. Don't worry, we include the headers in the sample code in the following sections.
-
-|Header|Value| Condition |
-|---|:---|:---|
-|**Ocp-Apim-Subscription-Key** |Your Translator service key from the Azure portal.|<ul><li>***Required***</li></ul> |
-|**Ocp-Apim-Subscription-Region**|The region where your resource was created. |<ul><li>***Required*** when using a multi-service Cognitive Services or regional (geographic) resource like **West US**.</li><li> ***Optional*** when using a single-service Translator Resource.</li></ul>|
-|**Content-Type**|The content type of the payload. The accepted value is **application/json; charset=UTF-8**.|<ul><li>***Required***</li></ul>|
-|**Content-Length**|The **length of the request** body.|<ul><li>***Optional***</li></ul> |
-|**X-ClientTraceId**|A client-generated GUID to uniquely identify the request. You can omit this header if you include the trace ID in the query string using a query parameter named ClientTraceId.|<ul><li>***Optional***</li></ul>|
-
-## Set up your application
-
-### [C#](#tab/csharp)
-
-1. Make sure you have the current version of [Visual Studio IDE](https://visualstudio.microsoft.com/vs/).
-
- > [!TIP]
- >
- > If you're new to Visual Studio, try the [Introduction to Visual Studio](/training/modules/go-get-started/) Learn module.
-
-1. Open Visual Studio.
-
-1. On the Start page, choose **Create a new project**.
-
- :::image type="content" source="media/quickstarts/start-window.png" alt-text="Screenshot: Visual Studio start window.":::
-
-1. On the **Create a new project page**, enter **console** in the search box. Choose the **Console Application** template, then choose **Next**.
-
- :::image type="content" source="media/quickstarts/create-new-project.png" alt-text="Screenshot: Visual Studio's create new project page.":::
-
-1. In the **Configure your new project** dialog window, enter `translator_text_app` in the Project name box. Leave the "Place solution and project in the same directory" checkbox **unchecked** and select **Next**.
-
- :::image type="content" source="media/how-to-guides/configure-your-console-app.png" alt-text="Screenshot: Visual Studio's configure new project dialog window.":::
-
-1. In the **Additional information** dialog window, make sure **.NET 6.0 (Long-term support)** is selected. Leave the "Don't use top-level statements" checkbox **unchecked** and select **Create**.
-
- :::image type="content" source="media/quickstarts/additional-information.png" alt-text="Screenshot: Visual Studio's additional information dialog window.":::
-
-### Install the Newtonsoft.json package with NuGet
-
-1. Right-click on your translator_text_app project and select **Manage NuGet Packages...**.
-
- :::image type="content" source="media/how-to-guides/manage-nuget.png" alt-text="Screenshot of the NuGet package search box.":::
-
-1. Select the Browse tab and type Newtonsoft.
-
- :::image type="content" source="media/quickstarts/newtonsoft.png" alt-text="Screenshot of the NuGet package install window.":::
-
-1. Select install from the right package manager window to add the package to your project.
-
- :::image type="content" source="media/how-to-guides/install-newtonsoft.png" alt-text="Screenshot of the NuGet package install button.":::
-
-### Build your application
-
-> [!NOTE]
->
-> * Starting with .NET 6, new projects using the `console` template generate a new program style that differs from previous versions.
-> * The new output uses recent C# features that simplify the code you need to write.
-> * When you use the newer version, you only need to write the body of the `Main` method. You don't need to include top-level statements, global using directives, or implicit using directives.
-> * For more information, *see* [**New C# templates generate top-level statements**](/dotnet/core/tutorials/top-level-templates).
-
-1. Open the **Program.cs** file.
-
-1. Delete the pre-existing code, including the line `Console.WriteLine("Hello World!")`. Copy and paste the code samples into your application's Program.cs file. For each code sample, make sure you update the key and endpoint variables with values from your Azure portal Translator instance.
-
-1. Once you've added a desired code sample to your application, choose the green **start button** next to translator_text_app to build and run your program, or press **F5**.
--
-### [Go](#tab/go)
-
-You can use any text editor to write Go applications. We recommend using the latest version of [Visual Studio Code and the Go extension](/azure/developer/go/configure-visual-studio-code).
-
-> [!TIP]
->
-> If you're new to Go, try the [Get started with Go](/training/modules/go-get-started/) Learn module.
-
-1. If you haven't done so already, [download and install Go](https://go.dev/doc/install).
-
- * Download the Go version for your operating system.
- * Once the download is complete, run the installer.
- * Open a command prompt and enter the following to confirm Go was installed:
-
- ```console
- go version
- ```
-
-1. In a console window (such as cmd, PowerShell, or Bash), create a new directory for your app called **translator-text-app**, and navigate to it.
-
-1. Create a new Go file named **text-translator.go** in the **translator-text-app** directory.
-
-1. Copy and paste the code samples into your **text-translator.go** file. Make sure you update the key variable with the value from your Azure portal Translator instance.
-
-1. Once you've added a code sample to your application, your Go program can be executed in a command or terminal prompt. Make sure your prompt's path is set to the **translator-text-app** folder and use the following command:
-
- ```console
-    go run text-translator.go
- ```
-
-### [Java](#tab/java)
-
-* You should have the latest version of [Visual Studio Code](https://code.visualstudio.com/) or your preferred IDE. *See* [Java in Visual Studio Code](https://code.visualstudio.com/docs/languages/java).
-
- >[!TIP]
- >
-    > * Visual Studio Code offers a **Coding Pack for Java** for Windows and macOS. The coding pack is a bundle of VS Code, the Java Development Kit (JDK), and a collection of suggested extensions by Microsoft. The Coding Pack can also be used to fix an existing development environment.
- > * If you are using VS Code and the Coding Pack For Java, install the [**Gradle for Java**](https://marketplace.visualstudio.com/items?itemName=vscjava.vscode-gradle) extension.
-
-* If you aren't using Visual Studio Code, make sure you have the following installed in your development environment:
-
- * A [**Java Development Kit** (OpenJDK)](/java/openjdk/download#openjdk-17) version 8 or later.
-
- * [**Gradle**](https://docs.gradle.org/current/userguide/installation.html), version 6.8 or later.
-
-### Create a new Gradle project
-
-1. In a console window (such as cmd, PowerShell, or Bash), create a new directory for your app called **translator-text-app**, and navigate to it.
-
- ```console
-    mkdir translator-text-app && cd translator-text-app
- ```
-
- ```powershell
- mkdir translator-text-app; cd translator-text-app
- ```
-
-1. Run the `gradle init` command from the translator-text-app directory. This command creates essential build files for Gradle, including *build.gradle.kts*, which is used at runtime to create and configure your application.
-
- ```console
- gradle init --type basic
- ```
-
-1. When prompted to choose a **DSL**, select **Kotlin**.
-
-1. Accept the default project name (translator-text-app) by selecting **Return** or **Enter**.
-
- > [!NOTE]
-    > It may take a few minutes for the entire application to be created, but soon you should see several folders and files including `build.gradle.kts`.
-
-1. Update `build.gradle.kts` with the following code:
-
- ```kotlin
- plugins {
- java
- application
- }
- application {
-        mainClass.set("TranslatorText")
- }
- repositories {
- mavenCentral()
- }
- dependencies {
- implementation("com.squareup.okhttp3:okhttp:4.10.0")
- implementation("com.google.code.gson:gson:2.9.0")
- }
- ```
-
-### Create a Java Application
-
-1. From the translator-text-app directory, run the following command:
-
- ```console
- mkdir -p src/main/java
- ```
-
- You create the following directory structure:
-
- :::image type="content" source="media/quickstarts/java-directories-2.png" alt-text="Screenshot: Java directory structure.":::
-
-1. Navigate to the `java` directory and create a file named **`TranslatorText.java`**.
-
- > [!TIP]
- >
- > * You can create a new file using PowerShell.
- > * Open a PowerShell window in your project directory by holding down the Shift key and right-clicking the folder.
- > * Type the following command **New-Item TranslatorText.java**.
- >
- > * You can also create a new file in your IDE named `TranslatorText.java` and save it to the `java` directory.
-
-1. Copy and paste the code samples into your `TranslatorText.java` file. **Make sure you update the key with one of the key values from your Azure portal Translator instance**.
-
-1. Once you've added a code sample to your application, navigate back to your main project directory (**translator-text-app**), open a console window, and enter the following commands:
-
- 1. Build your application with the `build` command:
-
- ```console
- gradle build
- ```
-
- 1. Run your application with the `run` command:
-
- ```console
- gradle run
- ```
-
-### [Node.js](#tab/nodejs)
-
-1. If you haven't done so already, install the latest version of [Node.js](https://nodejs.org/en/download/). Node Package Manager (npm) is included with the Node.js installation.
-
- > [!TIP]
- >
- > If you're new to Node.js, try the [Introduction to Node.js](/training/modules/intro-to-nodejs/) Learn module.
-
-1. In a console window (such as cmd, PowerShell, or Bash), create and navigate to a new directory for your app named `translator-text-app`.
-
- ```console
- mkdir translator-text-app && cd translator-text-app
- ```
-
- ```powershell
- mkdir translator-text-app; cd translator-text-app
- ```
-
-1. Run the npm init command to initialize the application and scaffold your project.
-
- ```console
- npm init
- ```
-
-1. Specify your project's attributes using the prompts presented in the terminal.
-
- * The most important attributes are name, version number, and entry point.
-    * We recommend keeping `index.js` for the entry point name. The description, test command, GitHub repository, keywords, author, and license information are optional attributes; they can be skipped for this project.
- * Accept the suggestions in parentheses by selecting **Return** or **Enter**.
- * After you've completed the prompts, a `package.json` file will be created in your translator-text-app directory.
-
-1. Open a console window and use npm to install the `axios` HTTP library and `uuid` package:
-
- ```console
- npm install axios uuid
- ```
-
-1. Create the `index.js` file in the application directory.
-
- > [!TIP]
- >
- > * You can create a new file using PowerShell.
- > * Open a PowerShell window in your project directory by holding down the Shift key and right-clicking the folder.
- > * Type the following command **New-Item index.js**.
- >
- > * You can also create a new file named `index.js` in your IDE and save it to the `translator-text-app` directory.
-
-1. Copy and paste the code samples into your `index.js` file. **Make sure you update the key variable with the value from your Azure portal Translator instance**.
-
-1. Once you've added the code sample to your application, run your program:
-
- 1. Navigate to your application directory (translator-text-app).
-
- 1. Type the following command in your terminal:
-
- ```console
- node index.js
- ```
-
-### [Python](#tab/python)
-
-1. If you haven't done so already, install the latest version of [Python 3.x](https://www.python.org/downloads/). The Python installer package (pip) is included with the Python installation.
-
- > [!TIP]
- >
- > If you're new to Python, try the [Introduction to Python](/training/paths/beginner-python/) Learn module.
-
-1. Open a terminal window and use pip to install the Requests library and uuid package:
-
- ```console
- pip install requests uuid
- ```
-
- > [!NOTE]
- > We will also use a Python built-in package called json. It's used to work with JSON data.
-
-1. Create a new Python file called **text-translator.py** in your preferred editor or IDE.
-
-1. Add the following code sample to your `text-translator.py` file. **Make sure you update the key with one of the values from your Azure portal Translator instance**.
-
-1. Once you've added a desired code sample to your application, build and run your program:
-
- 1. Navigate to your **text-translator.py** file.
-
- 1. Type the following command in your console:
-
- ```console
- python text-translator.py
- ```
---
-> [!IMPORTANT]
-> The samples in this guide require hard-coded keys and endpoints.
-> Remember to **remove the key from your code when you're done**, and **never post it publicly**.
-> For production, consider using a secure way of storing and accessing your credentials. For more information, *see* [Cognitive Services security](../cognitive-services-security.md).
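-
-If Azure Key Vault isn't an option yet, reading the key and region from environment variables is a simple improvement over hard-coding them. A minimal Python sketch (the variable names `TRANSLATOR_KEY` and `TRANSLATOR_REGION` are examples, not required names):
-
-```python
-import os
-
-# Read credentials from environment variables instead of hard-coding them.
-key = os.environ["TRANSLATOR_KEY"]
-location = os.environ["TRANSLATOR_REGION"]
-```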
-
-## Translate text
-
-The core operation of the Translator service is to translate text. In this section, you build a request that takes a single source (`from`) and provides two outputs (`to`). Then we review some parameters that can be used to adjust both the request and the response.
-
-### [C#](#tab/csharp)
-
-```csharp
-using System.Text;
-using Newtonsoft.Json; // Install Newtonsoft.Json with NuGet
-
-class Program
-{
- private static readonly string key = "<YOUR-TRANSLATOR-KEY>";
- private static readonly string endpoint = "https://api.cognitive.microsofttranslator.com";
-
- // location, also known as region.
- // required if you're using a multi-service or regional (not global) resource. It can be found in the Azure portal on the Keys and Endpoint page.
- private static readonly string location = "<YOUR-RESOURCE-LOCATION>";
-
- static async Task Main(string[] args)
- {
- // Input and output languages are defined as parameters.
- string route = "/translate?api-version=3.0&from=en&to=sw&to=it";
- string textToTranslate = "Hello, friend! What did you do today?";
- object[] body = new object[] { new { Text = textToTranslate } };
- var requestBody = JsonConvert.SerializeObject(body);
-
- using (var client = new HttpClient())
- using (var request = new HttpRequestMessage())
- {
- // Build the request.
- request.Method = HttpMethod.Post;
- request.RequestUri = new Uri(endpoint + route);
- request.Content = new StringContent(requestBody, Encoding.UTF8, "application/json");
- request.Headers.Add("Ocp-Apim-Subscription-Key", key);
- // location required if you're using a multi-service or regional (not global) resource.
- request.Headers.Add("Ocp-Apim-Subscription-Region", location);
-
- // Send the request and get response.
- HttpResponseMessage response = await client.SendAsync(request).ConfigureAwait(false);
- // Read response as a string.
- string result = await response.Content.ReadAsStringAsync();
- Console.WriteLine(result);
- }
- }
-}
-```
-
-### [Go](#tab/go)
-
-```go
-package main
-
-import (
- "bytes"
- "encoding/json"
- "fmt"
- "log"
- "net/http"
- "net/url"
-)
-
-func main() {
- key := "<YOUR-TRANSLATOR-KEY>"
- endpoint := "https://api.cognitive.microsofttranslator.com/"
- uri := endpoint + "/translate?api-version=3.0"
-
- // location, also known as region.
- // required if you're using a multi-service or regional (not global) resource. It can be found in the Azure portal on the Keys and Endpoint page.
- location := "<YOUR-RESOURCE-LOCATION>"
-
- // Build the request URL. See: https://go.dev/pkg/net/url/#example_URL_Parse
- u, _ := url.Parse(uri)
- q := u.Query()
- q.Add("from", "en")
- q.Add("to", "it")
- q.Add("to", "sw")
- u.RawQuery = q.Encode()
-
- // Create an anonymous struct for your request body and encode it to JSON
- body := []struct {
- Text string
- }{
- {Text: "Hello friend! What did you do today?"},
- }
- b, _ := json.Marshal(body)
-
- // Build the HTTP POST request
- req, err := http.NewRequest("POST", u.String(), bytes.NewBuffer(b))
- if err != nil {
- log.Fatal(err)
- }
- // Add required headers to the request
- req.Header.Add("Ocp-Apim-Subscription-Key", key)
- // location required if you're using a multi-service or regional (not global) resource.
- req.Header.Add("Ocp-Apim-Subscription-Region", location)
- req.Header.Add("Content-Type", "application/json")
-
- // Call the Translator API
- res, err := http.DefaultClient.Do(req)
- if err != nil {
- log.Fatal(err)
- }
-
- // Decode the JSON response
- var result interface{}
- if err := json.NewDecoder(res.Body).Decode(&result); err != nil {
- log.Fatal(err)
- }
- // Format and print the response to terminal
- prettyJSON, _ := json.MarshalIndent(result, "", " ")
- fmt.Printf("%s\n", prettyJSON)
-}
-```
-
-### [Java](#tab/java)
-
-```java
-import java.io.IOException;
-
-import com.google.gson.*;
-import okhttp3.MediaType;
-import okhttp3.OkHttpClient;
-import okhttp3.Request;
-import okhttp3.RequestBody;
-import okhttp3.Response;
-
-public class TranslatorText {
- private static String key = "<YOUR-TRANSLATOR-KEY>";
- public String endpoint = "https://api.cognitive.microsofttranslator.com";
- public String route = "/translate?api-version=3.0&from=en&to=sw&to=it";
- public String url = endpoint.concat(route);
-
- // location, also known as region.
- // required if you're using a multi-service or regional (not global) resource. It can be found in the Azure portal on the Keys and Endpoint page.
- private static String location = "<YOUR-RESOURCE-LOCATION>";
-
- // Instantiates the OkHttpClient.
- OkHttpClient client = new OkHttpClient();
-
- // This function performs a POST request.
- public String Post() throws IOException {
- MediaType mediaType = MediaType.parse("application/json");
- RequestBody body = RequestBody.create(mediaType,
- "[{\"Text\": \"Hello, friend! What did you do today?\"}]");
- Request request = new Request.Builder()
- .url(url)
- .post(body)
- .addHeader("Ocp-Apim-Subscription-Key", key)
- // location required if you're using a multi-service or regional (not global) resource.
- .addHeader("Ocp-Apim-Subscription-Region", location)
- .addHeader("Content-type", "application/json")
- .build();
- Response response = client.newCall(request).execute();
- return response.body().string();
- }
-
- // This function prettifies the json response.
- public static String prettify(String json_text) {
- JsonParser parser = new JsonParser();
- JsonElement json = parser.parse(json_text);
- Gson gson = new GsonBuilder().setPrettyPrinting().create();
- return gson.toJson(json);
- }
-
- public static void main(String[] args) {
- try {
- TranslatorText translateRequest = new TranslatorText();
- String response = translateRequest.Post();
- System.out.println(prettify(response));
- } catch (Exception e) {
- System.out.println(e);
- }
- }
-}
-```
-
-### [Node.js](#tab/nodejs)
-
-```javascript
-const axios = require('axios').default;
-const { v4: uuidv4 } = require('uuid');
-
-let key = "<YOUR-TRANSLATOR-KEY>";
-let endpoint = "https://api.cognitive.microsofttranslator.com";
-
-// location, also known as region.
-// required if you're using a multi-service or regional (not global) resource. It can be found in the Azure portal on the Keys and Endpoint page.
-let location = "<YOUR-RESOURCE-LOCATION>";
-
-axios({
- baseURL: endpoint,
- url: '/translate',
- method: 'post',
- headers: {
- 'Ocp-Apim-Subscription-Key': key,
- // location required if you're using a multi-service or regional (not global) resource.
- 'Ocp-Apim-Subscription-Region': location,
- 'Content-type': 'application/json',
- 'X-ClientTraceId': uuidv4().toString()
- },
- params: {
- 'api-version': '3.0',
- 'from': 'en',
- 'to': ['sw', 'it']
- },
- data: [{
- 'text': 'Hello, friend! What did you do today?'
- }],
- responseType: 'json'
-}).then(function(response){
- console.log(JSON.stringify(response.data, null, 4));
-})
-```
-
-### [Python](#tab/python)
-
-```python
-import requests, uuid, json
-
-# Add your key and endpoint
-key = "<YOUR-TRANSLATOR-KEY>"
-endpoint = "https://api.cognitive.microsofttranslator.com"
-
-# location, also known as region.
-# required if you're using a multi-service or regional (not global) resource. It can be found in the Azure portal on the Keys and Endpoint page.
-location = "<YOUR-RESOURCE-LOCATION>"
-
-path = '/translate'
-constructed_url = endpoint + path
-
-params = {
- 'api-version': '3.0',
- 'from': 'en',
- 'to': ['sw', 'it']
-}
-
-headers = {
- 'Ocp-Apim-Subscription-Key': key,
- # location required if you're using a multi-service or regional (not global) resource.
- 'Ocp-Apim-Subscription-Region': location,
- 'Content-type': 'application/json',
- 'X-ClientTraceId': str(uuid.uuid4())
-}
-
-# You can pass more than one object in body.
-body = [{
- 'text': 'Hello, friend! What did you do today?'
-}]
-
-request = requests.post(constructed_url, params=params, headers=headers, json=body)
-response = request.json()
-
-print(json.dumps(response, sort_keys=True, ensure_ascii=False, indent=4, separators=(',', ': ')))
-```
---
-After a successful call, you should see the following response:
-
-```json
-[
- {
- "translations":[
- {
- "text":"Halo, rafiki! Ulifanya nini leo?",
- "to":"sw"
- },
- {
- "text":"Ciao, amico! Cosa hai fatto oggi?",
- "to":"it"
- }
- ]
- }
-]
-```
-
-You can check the consumption (the number of characters charged) for each request in the [**response headers: x-metered-usage**](reference/v3-0-translate.md#response-headers) field.
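-
-For example, with the Python sample above (where the `requests` response object is named `request`), you can print the value directly if the service returned the header:
-
-```python
-# Print the number of characters charged for this request.
-print(request.headers.get("x-metered-usage"))
-```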
-
-## Detect language
-
-If you need translation, but don't know the language of the text, you can use the language detection operation. There's more than one way to identify the source text language. In this section, you learn how to use language detection using the `translate` endpoint, and the `detect` endpoint.
-
-### Detect source language during translation
-
-If you don't include the `from` parameter in your translation request, the Translator service attempts to detect the source text's language. In the response, you get the detected language (`language`) and a confidence score (`score`). The closer the `score` is to `1.0`, the more confident the service is that the detection is correct.
-
-### [C#](#tab/csharp)
-
-```csharp
-using System;
-using System.Text;
-using Newtonsoft.Json; // Install Newtonsoft.Json with NuGet
-
-class Program
-{
- private static readonly string key = "<YOUR-TRANSLATOR-KEY>";
- private static readonly string endpoint = "https://api.cognitive.microsofttranslator.com";
-
- // location, also known as region.
-// required if you're using a multi-service or regional (not global) resource. It can be found in the Azure portal on the Keys and Endpoint page.
- private static readonly string location = "<YOUR-RESOURCE-LOCATION>";
-
- static async Task Main(string[] args)
- {
- // Output languages are defined as parameters, input language detected.
- string route = "/translate?api-version=3.0&to=en&to=it";
- string textToTranslate = "Halo, rafiki! Ulifanya nini leo?";
- object[] body = new object[] { new { Text = textToTranslate } };
- var requestBody = JsonConvert.SerializeObject(body);
-
- using (var client = new HttpClient())
- using (var request = new HttpRequestMessage())
- {
- // Build the request.
- request.Method = HttpMethod.Post;
- request.RequestUri = new Uri(endpoint + route);
- request.Content = new StringContent(requestBody, Encoding.UTF8, "application/json");
- // location required if you're using a multi-service or regional (not global) resource.
- request.Headers.Add("Ocp-Apim-Subscription-Key", key);
- request.Headers.Add("Ocp-Apim-Subscription-Region", location);
-
- // Send the request and get response.
- HttpResponseMessage response = await client.SendAsync(request).ConfigureAwait(false);
- // Read response as a string.
- string result = await response.Content.ReadAsStringAsync();
- Console.WriteLine(result);
- }
- }
-}
-```
-
-### [Go](#tab/go)
-
-```go
-package main
-
-import (
- "bytes"
- "encoding/json"
- "fmt"
- "log"
- "net/http"
- "net/url"
-)
-
-func main() {
- key := "<YOUR-TRANSLATOR-KEY>"
- // location, also known as region.
- // required if you're using a multi-service or regional (not global) resource. It can be found in the Azure portal on the Keys and Endpoint page.
-
- location := "<YOUR-RESOURCE-LOCATION>"
- endpoint := "https://api.cognitive.microsofttranslator.com/"
- uri := endpoint + "/translate?api-version=3.0"
-
- // Build the request URL. See: https://go.dev/pkg/net/url/#example_URL_Parse
- u, _ := url.Parse(uri)
- q := u.Query()
- q.Add("to", "en")
- q.Add("to", "it")
- u.RawQuery = q.Encode()
-
- // Create an anonymous struct for your request body and encode it to JSON
- body := []struct {
- Text string
- }{
- {Text: "Halo rafiki! Ulifanya nini leo?"},
- }
- b, _ := json.Marshal(body)
-
- // Build the HTTP POST request
- req, err := http.NewRequest("POST", u.String(), bytes.NewBuffer(b))
- if err != nil {
- log.Fatal(err)
- }
- // Add required headers to the request
- req.Header.Add("Ocp-Apim-Subscription-Key", key)
- // location required if you're using a multi-service or regional (not global) resource.
- req.Header.Add("Ocp-Apim-Subscription-Region", location)
- req.Header.Add("Content-Type", "application/json")
-
- // Call the Translator API
- res, err := http.DefaultClient.Do(req)
- if err != nil {
- log.Fatal(err)
- }
-
- // Decode the JSON response
- var result interface{}
- if err := json.NewDecoder(res.Body).Decode(&result); err != nil {
- log.Fatal(err)
- }
- // Format and print the response to terminal
- prettyJSON, _ := json.MarshalIndent(result, "", " ")
- fmt.Printf("%s\n", prettyJSON)
-}
-```
-
-### [Java](#tab/java)
-
-```java
-import java.io.IOException;
-
-import com.google.gson.*;
-import okhttp3.MediaType;
-import okhttp3.OkHttpClient;
-import okhttp3.Request;
-import okhttp3.RequestBody;
-import okhttp3.Response;
-
-public class TranslatorText {
- private static String key = "<YOUR-TRANSLATOR-KEY>";
- public String endpoint = "https://api.cognitive.microsofttranslator.com";
- public String route = "/translate?api-version=3.0&to=en&to=it";
- public String url = endpoint.concat(route);
-
- // location, also known as region.
-// required if you're using a multi-service or regional (not global) resource. It can be found in the Azure portal on the Keys and Endpoint page.
- private static String location = "<YOUR-RESOURCE-LOCATION>";
-
- // Instantiates the OkHttpClient.
- OkHttpClient client = new OkHttpClient();
-
- // This function performs a POST request.
- public String Post() throws IOException {
- MediaType mediaType = MediaType.parse("application/json");
- RequestBody body = RequestBody.create(mediaType,
- "[{\"Text\": \"Halo, rafiki! Ulifanya nini leo?\"}]");
- Request request = new Request.Builder()
- .url(url)
- .post(body)
- .addHeader("Ocp-Apim-Subscription-Key", key)
- // location required if you're using a multi-service or regional (not global) resource.
- .addHeader("Ocp-Apim-Subscription-Region", location)
- .addHeader("Content-type", "application/json")
- .build();
- Response response = client.newCall(request).execute();
- return response.body().string();
- }
-
- // This function prettifies the json response.
- public static String prettify(String json_text) {
- JsonParser parser = new JsonParser();
- JsonElement json = parser.parse(json_text);
- Gson gson = new GsonBuilder().setPrettyPrinting().create();
- return gson.toJson(json);
- }
-
- public static void main(String[] args) {
- try {
- TranslatorText translateRequest = new TranslatorText();
- String response = translateRequest.Post();
- System.out.println(prettify(response));
- } catch (Exception e) {
- System.out.println(e);
- }
- }
-}
-```
-
-### [Node.js](#tab/nodejs)
-
-```javascript
-const axios = require('axios').default;
-const { v4: uuidv4 } = require('uuid');
-
-let key = "<YOUR-TRANSLATOR-KEY>";
-let endpoint = "https://api.cognitive.microsofttranslator.com";
-
-// Add your location, also known as region. The default is global.
-// This is required if using a Cognitive Services resource.
-let location = "<YOUR-RESOURCE-LOCATION>";
-
-axios({
- baseURL: endpoint,
- url: '/translate',
- method: 'post',
- headers: {
- 'Ocp-Apim-Subscription-Key': key,
- // location required if you're using a multi-service or regional (not global) resource.
- 'Ocp-Apim-Subscription-Region': location,
- 'Content-type': 'application/json',
- 'X-ClientTraceId': uuidv4().toString()
- },
- params: {
- 'api-version': '3.0',
- 'to': ['en', 'it']
- },
- data: [{
- 'text': 'Halo, rafiki! Ulifanya nini leo?'
- }],
- responseType: 'json'
-}).then(function(response){
- console.log(JSON.stringify(response.data, null, 4));
-})
-```
-
-### [Python](#tab/python)
-
-```python
-import requests, uuid, json
-
-# Add your key and endpoint
-key = "<YOUR-TRANSLATOR-KEY>"
-endpoint = "https://api.cognitive.microsofttranslator.com"
-
-# location, also known as region.
-# required if you're using a multi-service or regional (not global) resource. It can be found in the Azure portal on the Keys and Endpoint page.
-
-location = "<YOUR-RESOURCE-LOCATION>"
-
-path = '/translate'
-constructed_url = endpoint + path
-
-params = {
- 'api-version': '3.0',
- 'to': ['en', 'it']
-}
-
-headers = {
- 'Ocp-Apim-Subscription-Key': key,
- # location required if you're using a multi-service or regional (not global) resource.
- 'Ocp-Apim-Subscription-Region': location,
- 'Content-type': 'application/json',
- 'X-ClientTraceId': str(uuid.uuid4())
-}
-
-# You can pass more than one object in body.
-body = [{
- 'text': 'Halo, rafiki! Ulifanya nini leo?'
-}]
-
-request = requests.post(constructed_url, params=params, headers=headers, json=body)
-response = request.json()
-
-print(json.dumps(response, sort_keys=True, ensure_ascii=False, indent=4, separators=(',', ': ')))
-```
---
-After a successful call, you should see the following response:
-
-```json
-[
- {
- "detectedLanguage":{
- "language":"sw",
- "score":0.8
- },
- "translations":[
- {
- "text":"Hello friend! What did you do today?",
- "to":"en"
- },
- {
- "text":"Ciao amico! Cosa hai fatto oggi?",
- "to":"it"
- }
- ]
- }
-]
-```
-
-### Detect source language without translation
-
-It's possible to use the Translator service to detect the language of source text without performing a translation. To do so, you use the [`/detect`](./reference/v3-0-detect.md) endpoint.
-
-### [C#](#tab/csharp)
-
-```csharp
-using System;
-using System.Text;
-using Newtonsoft.Json; // Install Newtonsoft.Json with NuGet
-
-class Program
-{
- private static readonly string key = "<YOUR-TRANSLATOR-KEY>";
- private static readonly string endpoint = "https://api.cognitive.microsofttranslator.com";
-
- // location, also known as region.
-// required if you're using a multi-service or regional (not global) resource. It can be found in the Azure portal on the Keys and Endpoint page.
- private static readonly string location = "<YOUR-RESOURCE-LOCATION>";
-
- static async Task Main(string[] args)
- {
- // Just detect language
- string route = "/detect?api-version=3.0";
- string textToLangDetect = "Hallo Freund! Was hast du heute gemacht?";
- object[] body = new object[] { new { Text = textToLangDetect } };
- var requestBody = JsonConvert.SerializeObject(body);
-
- using (var client = new HttpClient())
- using (var request = new HttpRequestMessage())
- {
- // Build the request.
- request.Method = HttpMethod.Post;
- request.RequestUri = new Uri(endpoint + route);
- request.Content = new StringContent(requestBody, Encoding.UTF8, "application/json");
- request.Headers.Add("Ocp-Apim-Subscription-Key", key);
- request.Headers.Add("Ocp-Apim-Subscription-Region", location);
-
- // Send the request and get response.
- HttpResponseMessage response = await client.SendAsync(request).ConfigureAwait(false);
- // Read response as a string.
- string result = await response.Content.ReadAsStringAsync();
- Console.WriteLine(result);
- }
- }
-}
-```
-
-### [Go](#tab/go)
-
-```go
-package main
-
-import (
- "bytes"
- "encoding/json"
- "fmt"
- "log"
- "net/http"
- "net/url"
-)
-
-func main() {
- key := "<YOUR-TRANSLATOR-KEY>"
- // location, also known as region.
-// required if you're using a multi-service or regional (not global) resource. It can be found in the Azure portal on the Keys and Endpoint page.
- location := "<YOUR-RESOURCE-LOCATION>"
-
- endpoint := "https://api.cognitive.microsofttranslator.com/"
- uri := endpoint + "/detect?api-version=3.0"
-
- // Build the request URL. See: https://go.dev/pkg/net/url/#example_URL_Parse
- u, _ := url.Parse(uri)
- q := u.Query()
- u.RawQuery = q.Encode()
-
- // Create an anonymous struct for your request body and encode it to JSON
- body := []struct {
- Text string
- }{
- {Text: "Ciao amico! Cosa hai fatto oggi?"},
- }
- b, _ := json.Marshal(body)
-
- // Build the HTTP POST request
- req, err := http.NewRequest("POST", u.String(), bytes.NewBuffer(b))
- if err != nil {
- log.Fatal(err)
- }
- // Add required headers to the request
- req.Header.Add("Ocp-Apim-Subscription-Key", key)
- // location required if you're using a multi-service or regional (not global) resource.
- req.Header.Add("Ocp-Apim-Subscription-Region", location)
- req.Header.Add("Content-Type", "application/json")
-
- // Call the Translator API
- res, err := http.DefaultClient.Do(req)
- if err != nil {
- log.Fatal(err)
- }
-
- // Decode the JSON response
- var result interface{}
- if err := json.NewDecoder(res.Body).Decode(&result); err != nil {
- log.Fatal(err)
- }
- // Format and print the response to terminal
- prettyJSON, _ := json.MarshalIndent(result, "", " ")
- fmt.Printf("%s\n", prettyJSON)
-}
-```
-
-### [Java](#tab/java)
-
-```java
-import java.io.IOException;
-
-import com.google.gson.*;
-import okhttp3.MediaType;
-import okhttp3.OkHttpClient;
-import okhttp3.Request;
-import okhttp3.RequestBody;
-import okhttp3.Response;
-
-public class TranslatorText {
- private static String key = "<YOUR-TRANSLATOR-KEY>";
- public String endpoint = "https://api.cognitive.microsofttranslator.com";
- public String route = "/detect?api-version=3.0";
- public String url = endpoint.concat(route);
-
- // location, also known as region.
- // required if you're using a multi-service or regional (not global) resource. It can be found in the Azure portal on the Keys and Endpoint page.
- private static String location = "<YOUR-RESOURCE-LOCATION>";
-
- // Instantiates the OkHttpClient.
- OkHttpClient client = new OkHttpClient();
-
- // This function performs a POST request.
- public String Post() throws IOException {
- MediaType mediaType = MediaType.parse("application/json");
- RequestBody body = RequestBody.create(mediaType,
- "[{\"Text\": \"Hallo Freund! Was hast du heute gemacht?\"}]");
- Request request = new Request.Builder()
- .url(url)
- .post(body)
- .addHeader("Ocp-Apim-Subscription-Key", key)
- // location required if you're using a multi-service or regional (not global) resource.
- .addHeader("Ocp-Apim-Subscription-Region", location)
- .addHeader("Content-type", "application/json")
- .build();
- Response response = client.newCall(request).execute();
- return response.body().string();
- }
-
- // This function prettifies the json response.
- public static String prettify(String json_text) {
- JsonParser parser = new JsonParser();
- JsonElement json = parser.parse(json_text);
- Gson gson = new GsonBuilder().setPrettyPrinting().create();
- return gson.toJson(json);
- }
-
- public static void main(String[] args) {
- try {
- TranslatorText detectRequest = new TranslatorText();
- String response = detectRequest.Post();
- System.out.println(prettify(response));
- } catch (Exception e) {
- System.out.println(e);
- }
- }
-}
-```
-
-### [Node.js](#tab/nodejs)
-
-```javascript
-const axios = require('axios').default;
-const { v4: uuidv4 } = require('uuid');
-
-let key = "<YOUR-TRANSLATOR-KEY>";
-let endpoint = "https://api.cognitive.microsofttranslator.com";
-
-// Add your location, also known as region. The default is global.
-// This is required if using a Cognitive Services resource.
-let location = "<YOUR-RESOURCE-LOCATION>";
-
-axios({
- baseURL: endpoint,
- url: '/detect',
- method: 'post',
- headers: {
- 'Ocp-Apim-Subscription-Key': key,
- // location required if you're using a multi-service or regional (not global) resource.
- 'Ocp-Apim-Subscription-Region': location,
- 'Content-type': 'application/json',
- 'X-ClientTraceId': uuidv4().toString()
- },
- params: {
- 'api-version': '3.0'
- },
- data: [{
- 'text': 'Hallo Freund! Was hast du heute gemacht?'
- }],
- responseType: 'json'
-}).then(function(response){
- console.log(JSON.stringify(response.data, null, 4));
-})
-```
-
-### [Python](#tab/python)
-
-```python
-import requests, uuid, json
-
-# Add your key and endpoint
-key = "<YOUR-TRANSLATOR-KEY>"
-endpoint = "https://api.cognitive.microsofttranslator.com"
-
-# location, also known as region.
-# required if you're using a multi-service or regional (not global) resource. It can be found in the Azure portal on the Keys and Endpoint page.
-
-location = "<YOUR-RESOURCE-LOCATION>"
-
-path = '/detect'
-constructed_url = endpoint + path
-
-params = {
- 'api-version': '3.0'
-}
-
-headers = {
- 'Ocp-Apim-Subscription-Key': key,
- # location required if you're using a multi-service or regional (not global) resource.
- 'Ocp-Apim-Subscription-Region': location,
- 'Content-type': 'application/json',
- 'X-ClientTraceId': str(uuid.uuid4())
-}
-
-# You can pass more than one object in body.
-body = [{
- 'text': 'Hallo Freund! Was hast du heute gemacht?'
-}]
-
-request = requests.post(constructed_url, params=params, headers=headers, json=body)
-response = request.json()
-
-print(json.dumps(response, sort_keys=True, ensure_ascii=False, indent=4, separators=(',', ': ')))
-```
---
-The `/detect` endpoint response includes alternate detections and indicates whether translation and transliteration are supported for each detected language. After a successful call, you should see the following response:
-
-```json
-[
- {
- "language":"de",
- "score":1.0,
- "isTranslationSupported":true,
- "isTransliterationSupported":false
- }
-]
-```
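-
-If you want to act on the result in code, you can parse the array that `/detect` returns and read the fields shown above. The following minimal sketch (not part of the tabbed samples, shown in Python for brevity) assumes the response body has already been parsed into a `detections` variable, as in the Python tab:
-
-```python
-# Minimal sketch: inspect a parsed /detect response.
-# `detections` stands in for the JSON array returned by the service.
-detections = [
-    {
-        "language": "de",
-        "score": 1.0,
-        "isTranslationSupported": True,
-        "isTransliterationSupported": False
-    }
-]
-
-for detection in detections:
-    print(f"Detected '{detection['language']}' with score {detection['score']:.2f}")
-    if not detection["isTranslationSupported"]:
-        print("Translation isn't supported for this language.")
-    if not detection["isTransliterationSupported"]:
-        print("Transliteration isn't supported for this language.")
-```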
-
-## Transliterate text
-
-Transliteration is the process of converting a word or phrase from the script (alphabet) of one language to another based on phonetic similarity. For example, you could use transliteration to convert "สวัสดี" (`thai`) to "sawatdi" (`latn`). There's more than one way to perform transliteration. In this section, you learn how to get transliterations using the `translate` endpoint and the `transliterate` endpoint.
-
-### Transliterate during translation
-
-If you're translating into a language that uses a different alphabet (or phonemes) than your source, you might need a transliteration. In this example, we translate "Hello, friend! What did you do today?" from English to Thai. In addition to getting the translation in Thai, you get a transliteration of the translated phrase using the Latin alphabet.
-
-To get a transliteration from the `translate` endpoint, use the `toScript` parameter.
-
-> [!NOTE]
-> For a complete list of available languages and transliteration options, see [language support](language-support.md).
-
-### [C#](#tab/csharp)
-
-```csharp
-using System;
-using System.Net.Http;
-using System.Text;
-using System.Threading.Tasks;
-using Newtonsoft.Json; // Install Newtonsoft.Json with NuGet
-
-class Program
-{
- private static readonly string key = "<YOUR-TRANSLATOR-KEY>";
- private static readonly string endpoint = "https://api.cognitive.microsofttranslator.com";
-
- // location, also known as region.
- // required if you're using a multi-service or regional (not global) resource. It can be found in the Azure portal on the Keys and Endpoint page.
- private static readonly string location = "<YOUR-RESOURCE-LOCATION>";
-
- static async Task Main(string[] args)
- {
- // Output language defined as parameter, with toScript set to latn
- string route = "/translate?api-version=3.0&to=th&toScript=latn";
- string textToTransliterate = "Hello, friend! What did you do today?";
- object[] body = new object[] { new { Text = textToTransliterate } };
- var requestBody = JsonConvert.SerializeObject(body);
-
- using (var client = new HttpClient())
- using (var request = new HttpRequestMessage())
- {
- // Build the request.
- request.Method = HttpMethod.Post;
- request.RequestUri = new Uri(endpoint + route);
- request.Content = new StringContent(requestBody, Encoding.UTF8, "application/json");
- request.Headers.Add("Ocp-Apim-Subscription-Key", key);
- request.Headers.Add("Ocp-Apim-Subscription-Region", location);
-
- // Send the request and get response.
- HttpResponseMessage response = await client.SendAsync(request).ConfigureAwait(false);
- // Read response as a string.
- string result = await response.Content.ReadAsStringAsync();
- Console.WriteLine(result);
- }
- }
-}
-```
-
-### [Go](#tab/go)
-
-```go
-package main
-
-import (
- "bytes"
- "encoding/json"
- "fmt"
- "log"
- "net/http"
- "net/url"
-)
-
-func main() {
- key := "<YOUR-TRANSLATOR-KEY>"
- // location, also known as region.
- // required if you're using a multi-service or regional (not global) resource. It can be found in the Azure portal on the Keys and Endpoint page.
- location := "<YOUR-RESOURCE-LOCATION>"
- endpoint := "https://api.cognitive.microsofttranslator.com/"
-
- uri := endpoint + "/translate?api-version=3.0"
-
- // Build the request URL. See: https://go.dev/pkg/net/url/#example_URL_Parse
- u, _ := url.Parse(uri)
- q := u.Query()
- q.Add("to", "th")
- q.Add("toScript", "latn")
- u.RawQuery = q.Encode()
-
- // Create an anonymous struct for your request body and encode it to JSON
- body := []struct {
- Text string
- }{
- {Text: "Hello, friend! What did you do today?"},
- }
- b, _ := json.Marshal(body)
-
- // Build the HTTP POST request
- req, err := http.NewRequest("POST", u.String(), bytes.NewBuffer(b))
- if err != nil {
- log.Fatal(err)
- }
- // Add required headers to the request
- req.Header.Add("Ocp-Apim-Subscription-Key", key)
- // location required if you're using a multi-service or regional (not global) resource.
- req.Header.Add("Ocp-Apim-Subscription-Region", location)
- req.Header.Add("Content-Type", "application/json")
-
- // Call the Translator API
- res, err := http.DefaultClient.Do(req)
- if err != nil {
- log.Fatal(err)
- }
-
- // Decode the JSON response
- var result interface{}
- if err := json.NewDecoder(res.Body).Decode(&result); err != nil {
- log.Fatal(err)
- }
- // Format and print the response to terminal
- prettyJSON, _ := json.MarshalIndent(result, "", " ")
- fmt.Printf("%s\n", prettyJSON)
-}
-```
-
-### [Java](#tab/java)
-
-```java
-import java.io.IOException;
-
-import com.google.gson.*;
-import okhttp3.MediaType;
-import okhttp3.OkHttpClient;
-import okhttp3.Request;
-import okhttp3.RequestBody;
-import okhttp3.Response;
-
-public class TranslatorText {
- private static String key = "<YOUR-TRANSLATOR-KEY>";
- public String endpoint = "https://api.cognitive.microsofttranslator.com";
- public String route = "/translate?api-version=3.0&to=th&toScript=latn";
- public String url = endpoint.concat(route);
-
- // location, also known as region.
- // required if you're using a multi-service or regional (not global) resource. It can be found in the Azure portal on the Keys and Endpoint page.
- private static String location = "<YOUR-RESOURCE-LOCATION>";
-
- // Instantiates the OkHttpClient.
- OkHttpClient client = new OkHttpClient();
-
- // This function performs a POST request.
- public String Post() throws IOException {
- MediaType mediaType = MediaType.parse("application/json");
- RequestBody body = RequestBody.create(mediaType,
- "[{\"Text\": \"Hello, friend! What did you do today?\"}]");
- Request request = new Request.Builder()
- .url(url)
- .post(body)
- .addHeader("Ocp-Apim-Subscription-Key", key)
- // location required if you're using a multi-service or regional (not global) resource.
- .addHeader("Ocp-Apim-Subscription-Region", location)
- .addHeader("Content-type", "application/json")
- .build();
- Response response = client.newCall(request).execute();
- return response.body().string();
- }
-
- // This function prettifies the json response.
- public static String prettify(String json_text) {
- JsonParser parser = new JsonParser();
- JsonElement json = parser.parse(json_text);
- Gson gson = new GsonBuilder().setPrettyPrinting().create();
- return gson.toJson(json);
- }
-
- public static void main(String[] args) {
- try {
- TranslatorText translateRequest = new TranslatorText();
- String response = translateRequest.Post();
- System.out.println(prettify(response));
- } catch (Exception e) {
- System.out.println(e);
- }
- }
-}
-```
-
-### [Node.js](#tab/nodejs)
-
-```javascript
-const axios = require('axios').default;
-const { v4: uuidv4 } = require('uuid');
-
-let key = "<YOUR-TRANSLATOR-KEY>";
-let endpoint = "https://api.cognitive.microsofttranslator.com";
-
-// Add your location, also known as region. The default is global.
-// This is required if using a Cognitive Services resource.
-let location = "<YOUR-RESOURCE-LOCATION>";
-
-axios({
- baseURL: endpoint,
- url: '/translate',
- method: 'post',
- headers: {
- 'Ocp-Apim-Subscription-Key': key,
- // location required if you're using a multi-service or regional (not global) resource.
- 'Ocp-Apim-Subscription-Region': location,
- 'Content-type': 'application/json',
- 'X-ClientTraceId': uuidv4().toString()
- },
- params: {
- 'api-version': '3.0',
- 'to': 'th',
- 'toScript': 'latn'
- },
- data: [{
- 'text': 'Hello, friend! What did you do today?'
- }],
- responseType: 'json'
-}).then(function(response){
- console.log(JSON.stringify(response.data, null, 4));
-})
-```
-
-### [Python](#tab/python)
-
-```python
-import requests, uuid, json
-
-# Add your key and endpoint
-key = "<YOUR-TRANSLATOR-KEY>"
-endpoint = "https://api.cognitive.microsofttranslator.com"
-
-# location, also known as region.
-# required if you're using a multi-service or regional (not global) resource. It can be found in the Azure portal on the Keys and Endpoint page.
-
-location = "<YOUR-RESOURCE-LOCATION>"
-
-path = '/translate'
-constructed_url = endpoint + path
-
-params = {
- 'api-version': '3.0',
- 'to': 'th',
- 'toScript': 'latn'
-}
-
-headers = {
- 'Ocp-Apim-Subscription-Key': key,
- # location required if you're using a multi-service or regional (not global) resource.
- 'Ocp-Apim-Subscription-Region': location,
- 'Content-type': 'application/json',
- 'X-ClientTraceId': str(uuid.uuid4())
-}
-
-# You can pass more than one object in body.
-body = [{
- 'text': 'Hello, friend! What did you do today?'
-}]
-request = requests.post(constructed_url, params=params, headers=headers, json=body)
-response = request.json()
-
-print(json.dumps(response, sort_keys=True, ensure_ascii=False, indent=4, separators=(',', ': ')))
-```
---
-After a successful call, you should see the following response. Keep in mind that the response from the `translate` endpoint includes the detected source language with a confidence score, a translation using the alphabet of the output language, and a transliteration using the Latin alphabet.
-
-```json
-[
- {
- "detectedLanguage": {
- "language": "en",
- "score": 1
- },
- "translations": [
- {
- "text": "หวัดดีเพื่อน! วันนี้เธอทำอะไรไปบ้าง ",
- "to": "th",
- "transliteration": {
- "script": "Latn",
- "text": "watdiphuean! wannithoethamaraipaiang"
- }
- }
- ]
- }
-]
-```
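-
-To consume this response in code, read each object in the `translations` array; the transliteration, when requested with `toScript`, is nested under each translation. The following minimal sketch (not part of the tabbed samples, shown in Python for brevity) assumes the parsed response is stored in a `response` variable:
-
-```python
-# Minimal sketch: read the translation and its Latin-script transliteration
-# from a parsed /translate response that was called with toScript=latn.
-response = [
-    {
-        "detectedLanguage": {"language": "en", "score": 1},
-        "translations": [
-            {
-                "text": "หวัดดีเพื่อน! วันนี้เธอทำอะไรไปบ้าง ",
-                "to": "th",
-                "transliteration": {"script": "Latn", "text": "watdiphuean! wannithoethamaraipaiang"}
-            }
-        ]
-    }
-]
-
-for item in response:
-    for translation in item["translations"]:
-        print(f"Translation ({translation['to']}): {translation['text']}")
-        # The transliteration object is present only when toScript was requested.
-        transliteration = translation.get("transliteration")
-        if transliteration:
-            print(f"Transliteration ({transliteration['script']}): {transliteration['text']}")
-```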
-
-### Transliterate without translation
-
-You can also use the `transliterate` endpoint to get a transliteration. When using the transliteration endpoint, you must provide the source language (`language`), the source script/alphabet (`fromScript`), and the output script/alphabet (`toScript`) as parameters. In this example, we're going to get the transliteration for สวัสดีเพื่อน! วันนี้คุณทำอะไร.
-
-> [!NOTE]
-> For a complete list of available languages and transliteration options, see [language support](language-support.md).
-
-### [C#](#tab/csharp)
-
-```csharp
-using System;
-using System.Net.Http;
-using System.Text;
-using System.Threading.Tasks;
-using Newtonsoft.Json; // Install Newtonsoft.Json with NuGet
-
-class Program
-{
- private static readonly string key = "<YOUR-TRANSLATOR-KEY>";
- private static readonly string endpoint = "https://api.cognitive.microsofttranslator.com";
-
- // location, also known as region.
- // required if you're using a multi-service or regional (not global) resource. It can be found in the Azure portal on the Keys and Endpoint page.
- private static readonly string location = "<YOUR-RESOURCE-LOCATION>";
-
- static async Task Main(string[] args)
- {
- // For a complete list of options, see API reference.
- // Input and output languages are defined as parameters.
- string route = "/transliterate?api-version=3.0&language=th&fromScript=thai&toScript=latn";
- string textToTransliterate = "สวัสดีเพื่อน! วันนี้คุณทำอะไร";
- object[] body = new object[] { new { Text = textToTransliterate } };
- var requestBody = JsonConvert.SerializeObject(body);
-
- using (var client = new HttpClient())
- using (var request = new HttpRequestMessage())
- {
- // Build the request.
- request.Method = HttpMethod.Post;
- request.RequestUri = new Uri(endpoint + route);
- request.Content = new StringContent(requestBody, Encoding.UTF8, "application/json");
- request.Headers.Add("Ocp-Apim-Subscription-Key", key);
- // location required if you're using a multi-service or regional (not global) resource.
- request.Headers.Add("Ocp-Apim-Subscription-Region", location);
-
- // Send the request and get response.
- HttpResponseMessage response = await client.SendAsync(request).ConfigureAwait(false);
- // Read response as a string.
- string result = await response.Content.ReadAsStringAsync();
- Console.WriteLine(result);
- }
- }
-}
-```
-
-### [Go](#tab/go)
-
-```go
-package main
-
-import (
- "bytes"
- "encoding/json"
- "fmt"
- "log"
- "net/http"
- "net/url"
-)
-
-func main() {
- key := "<YOUR-TRANSLATOR-KEY>"
- // location, also known as region.
- // required if you're using a multi-service or regional (not global) resource. It can be found in the Azure portal on the Keys and Endpoint page.
- location := "<YOUR-RESOURCE-LOCATION>"
- endpoint := "https://api.cognitive.microsofttranslator.com/"
- uri := endpoint + "/transliterate?api-version=3.0"
-
- // Build the request URL. See: https://go.dev/pkg/net/url/#example_URL_Parse
- u, _ := url.Parse(uri)
- q := u.Query()
- q.Add("language", "th")
- q.Add("fromScript", "thai")
- q.Add("toScript", "latn")
- u.RawQuery = q.Encode()
-
- // Create an anonymous struct for your request body and encode it to JSON
- body := []struct {
- Text string
- }{
- {Text: "สวัสดีเพื่อน! วันนี้คุณทำอะไร"},
- }
- b, _ := json.Marshal(body)
-
- // Build the HTTP POST request
- req, err := http.NewRequest("POST", u.String(), bytes.NewBuffer(b))
- if err != nil {
- log.Fatal(err)
- }
- // Add required headers to the request
- req.Header.Add("Ocp-Apim-Subscription-Key", key)
- // location required if you're using a multi-service or regional (not global) resource.
- req.Header.Add("Ocp-Apim-Subscription-Region", location)
- req.Header.Add("Content-Type", "application/json")
-
- // Call the Translator API
- res, err := http.DefaultClient.Do(req)
- if err != nil {
- log.Fatal(err)
- }
-
- // Decode the JSON response
- var result interface{}
- if err := json.NewDecoder(res.Body).Decode(&result); err != nil {
- log.Fatal(err)
- }
- // Format and print the response to terminal
- prettyJSON, _ := json.MarshalIndent(result, "", " ")
- fmt.Printf("%s\n", prettyJSON)
-}
-```
-
-### [Java](#tab/java)
-
-```java
-import java.io.IOException;
-
-import com.google.gson.*;
-import okhttp3.MediaType;
-import okhttp3.OkHttpClient;
-import okhttp3.Request;
-import okhttp3.RequestBody;
-import okhttp3.Response;
-
-public class TranslatorText {
- private static String key = "<YOUR-TRANSLATOR-KEY>";
- public String endpoint = "https://api.cognitive.microsofttranslator.com";
- public String route = "/transliterate?api-version=3.0&language=th&fromScript=thai&toScript=latn";
- public String url = endpoint.concat(route);
-
- // location, also known as region.
- // required if you're using a multi-service or regional (not global) resource. It can be found in the Azure portal on the Keys and Endpoint page.
- private static String location = "<YOUR-RESOURCE-LOCATION>";
-
- // Instantiates the OkHttpClient.
- OkHttpClient client = new OkHttpClient();
-
- // This function performs a POST request.
- public String Post() throws IOException {
- MediaType mediaType = MediaType.parse("application/json");
- RequestBody body = RequestBody.create(mediaType,
- "[{\"Text\": \"สวัสดีเพื่อน! วันนี้คุณทำอะไร\"}]");
- Request request = new Request.Builder()
- .url(url)
- .post(body)
- .addHeader("Ocp-Apim-Subscription-Key", key)
- // location required if you're using a multi-service or regional (not global) resource.
- .addHeader("Ocp-Apim-Subscription-Region", location)
- .addHeader("Content-type", "application/json")
- .build();
- Response response = client.newCall(request).execute();
- return response.body().string();
- }
-
- // This function prettifies the json response.
- public static String prettify(String json_text) {
- JsonParser parser = new JsonParser();
- JsonElement json = parser.parse(json_text);
- Gson gson = new GsonBuilder().setPrettyPrinting().create();
- return gson.toJson(json);
- }
-
- public static void main(String[] args) {
- try {
- TranslatorText transliterateRequest = new TranslatorText();
- String response = transliterateRequest.Post();
- System.out.println(prettify(response));
- } catch (Exception e) {
- System.out.println(e);
- }
- }
-}
-```
-
-### [Node.js](#tab/nodejs)
-
-```javascript
-const axios = require('axios').default;
-const { v4: uuidv4 } = require('uuid');
-
-let key = "<YOUR-TRANSLATOR-KEY>";
-let endpoint = "https://api.cognitive.microsofttranslator.com";
-
-// Add your location, also known as region. The default is global.
-// This is required if using a Cognitive Services resource.
-let location = "<YOUR-RESOURCE-LOCATION>";
-
-axios({
- baseURL: endpoint,
- url: '/transliterate',
- method: 'post',
- headers: {
- 'Ocp-Apim-Subscription-Key': key,
- // location required if you're using a multi-service or regional (not global) resource.
- 'Ocp-Apim-Subscription-Region': location,
- 'Content-type': 'application/json',
- 'X-ClientTraceId': uuidv4().toString()
- },
- params: {
- 'api-version': '3.0',
- 'language': 'th',
- 'fromScript': 'thai',
- 'toScript': 'latn'
- },
- data: [{
- 'text': 'สวัสดีเพื่อน! วันนี้คุณทำอะไร'
- }],
- responseType: 'json'
-}).then(function(response){
- console.log(JSON.stringify(response.data, null, 4));
-})
-```
-
-### [Python](#tab/python)
-
-```python
-import requests, uuid, json
-
-# Add your key and endpoint
-key = "<YOUR-TRANSLATOR-KEY>"
-endpoint = "https://api.cognitive.microsofttranslator.com"
-
-# location, also known as region.
-# required if you're using a multi-service or regional (not global) resource. It can be found in the Azure portal on the Keys and Endpoint page.
-
-location = "<YOUR-RESOURCE-LOCATION>"
-
-path = '/transliterate'
-constructed_url = endpoint + path
-
-params = {
- 'api-version': '3.0',
- 'language': 'th',
- 'fromScript': 'thai',
- 'toScript': 'latn'
-}
-
-headers = {
- 'Ocp-Apim-Subscription-Key': key,
- # location required if you're using a multi-service or regional (not global) resource.
- 'Ocp-Apim-Subscription-Region': location,
- 'Content-type': 'application/json',
- 'X-ClientTraceId': str(uuid.uuid4())
-}
-
-# You can pass more than one object in body.
-body = [{
- 'text': 'สวัสดีเพื่อน! วันนี้คุณทำอะไร'
-}]
-
-request = requests.post(constructed_url, params=params, headers=headers, json=body)
-response = request.json()
-
-print(json.dumps(response, sort_keys=True, indent=4, separators=(',', ': ')))
-```
---
-After a successful call, you should see the following response. Unlike the call to the `translate` endpoint, `transliterate` only returns the `text` and the output `script`.
-
-```json
-[
- {
- "text":"sawatdiphuean! wannikhunthamarai",
- "script":"latn"
- }
-]
-```
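-
-Each entry in the array corresponds to one input object, so reading the result is straightforward. A minimal sketch (not part of the tabbed samples, shown in Python for brevity), assuming the parsed response is stored in a `response` variable:
-
-```python
-# Minimal sketch: print the transliterated text from a parsed /transliterate response.
-response = [
-    {"text": "sawatdiphuean! wannikhunthamarai", "script": "latn"}
-]
-
-for item in response:
-    print(f"{item['script']}: {item['text']}")
-```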
-
-## Get sentence length
-
-With the Translator service, you can get the character count for a sentence or series of sentences. The response is returned as an array, with character counts for each sentence detected. You can get sentence lengths with the `translate` and `breaksentence` endpoints.
-
-### Get sentence length during translation
-
-You can get character counts for both source text and translation output using the `translate` endpoint. To return the sentence lengths (`srcSentLen` and `transSentLen`), you must set the `includeSentenceLength` parameter to `true`.
-
-### [C#](#tab/csharp)
-
-```csharp
-using System;
-using System.Net.Http;
-using System.Text;
-using System.Threading.Tasks;
-using Newtonsoft.Json; // Install Newtonsoft.Json with NuGet
-
-class Program
-{
- private static readonly string key = "<YOUR-TRANSLATOR-KEY>";
- private static readonly string endpoint = "https://api.cognitive.microsofttranslator.com";
-
- // location, also known as region.
- // required if you're using a multi-service or regional (not global) resource. It can be found in the Azure portal on the Keys and Endpoint page.
- private static readonly string location = "<YOUR-RESOURCE-LOCATION>";
-
- static async Task Main(string[] args)
- {
- // Include sentence length details.
- string route = "/translate?api-version=3.0&to=es&includeSentenceLength=true";
- string sentencesToCount =
- "Can you tell me how to get to Penn Station? Oh, you aren't sure? That's fine.";
- object[] body = new object[] { new { Text = sentencesToCount } };
- var requestBody = JsonConvert.SerializeObject(body);
-
- using (var client = new HttpClient())
- using (var request = new HttpRequestMessage())
- {
- // Build the request.
- request.Method = HttpMethod.Post;
- request.RequestUri = new Uri(endpoint + route);
- request.Content = new StringContent(requestBody, Encoding.UTF8, "application/json");
- request.Headers.Add("Ocp-Apim-Subscription-Key", key);
- // location required if you're using a multi-service or regional (not global) resource.
- request.Headers.Add("Ocp-Apim-Subscription-Region", location);
-
- // Send the request and get response.
- HttpResponseMessage response = await client.SendAsync(request).ConfigureAwait(false);
- // Read response as a string.
- string result = await response.Content.ReadAsStringAsync();
- Console.WriteLine(result);
- }
- }
-}
-```
-
-### [Go](#tab/go)
-
-```go
-package main
-
-import (
- "bytes"
- "encoding/json"
- "fmt"
- "log"
- "net/http"
- "net/url"
-)
-
-func main() {
- key := "<YOUR-TRANSLATOR-KEY>"
- // location, also known as region.
- // required if you're using a multi-service or regional (not global) resource. It can be found in the Azure portal on the Keys and Endpoint page.
- location := "<YOUR-RESOURCE-LOCATION>"
- endpoint := "https://api.cognitive.microsofttranslator.com/"
- uri := endpoint + "/translate?api-version=3.0"
-
- // Build the request URL. See: https://go.dev/pkg/net/url/#example_URL_Parse
- u, _ := url.Parse(uri)
- q := u.Query()
- q.Add("to", "es")
- q.Add("includeSentenceLength", "true")
- u.RawQuery = q.Encode()
-
- // Create an anonymous struct for your request body and encode it to JSON
- body := []struct {
- Text string
- }{
- {Text: "Can you tell me how to get to Penn Station? Oh, you aren't sure? That's fine."},
- }
- b, _ := json.Marshal(body)
-
- // Build the HTTP POST request
- req, err := http.NewRequest("POST", u.String(), bytes.NewBuffer(b))
- if err != nil {
- log.Fatal(err)
- }
- // Add required headers to the request
- req.Header.Add("Ocp-Apim-Subscription-Key", key)
- // location required if you're using a multi-service or regional (not global) resource.
- req.Header.Add("Ocp-Apim-Subscription-Region", location)
- req.Header.Add("Content-Type", "application/json")
-
- // Call the Translator API
- res, err := http.DefaultClient.Do(req)
- if err != nil {
- log.Fatal(err)
- }
-
- // Decode the JSON response
- var result interface{}
- if err := json.NewDecoder(res.Body).Decode(&result); err != nil {
- log.Fatal(err)
- }
- // Format and print the response to terminal
- prettyJSON, _ := json.MarshalIndent(result, "", " ")
- fmt.Printf("%s\n", prettyJSON)
-}
-```
-
-### [Java](#tab/java)
-
-```java
-import java.io.IOException;
-
-import com.google.gson.*;
-import okhttp3.MediaType;
-import okhttp3.OkHttpClient;
-import okhttp3.Request;
-import okhttp3.RequestBody;
-import okhttp3.Response;
-
-public class TranslatorText {
- private static String key = "<YOUR-TRANSLATOR-KEY>";
- public String endpoint = "https://api.cognitive.microsofttranslator.com";
- public String route = "/translate?api-version=3.0&to=es&includeSentenceLength=true";
- public String url = endpoint.concat(route);
-
- // location, also known as region.
- // required if you're using a multi-service or regional (not global) resource. It can be found in the Azure portal on the Keys and Endpoint page.
- private static String location = "<YOUR-RESOURCE-LOCATION>";
-
- // Instantiates the OkHttpClient.
- OkHttpClient client = new OkHttpClient();
-
- // This function performs a POST request.
- public String Post() throws IOException {
- MediaType mediaType = MediaType.parse("application/json");
- RequestBody body = RequestBody.create(mediaType,
- "[{\"Text\": \"Can you tell me how to get to Penn Station? Oh, you aren\'t sure? That\'s fine.\"}]");
- Request request = new Request.Builder()
- .url(url)
- .post(body)
- .addHeader("Ocp-Apim-Subscription-Key", key)
- // location required if you're using a multi-service or regional (not global) resource.
- .addHeader("Ocp-Apim-Subscription-Region", location)
- .addHeader("Content-type", "application/json")
- .build();
- Response response = client.newCall(request).execute();
- return response.body().string();
- }
-
- // This function prettifies the json response.
- public static String prettify(String json_text) {
- JsonParser parser = new JsonParser();
- JsonElement json = parser.parse(json_text);
- Gson gson = new GsonBuilder().setPrettyPrinting().create();
- return gson.toJson(json);
- }
-
- public static void main(String[] args) {
- try {
- TranslatorText translateRequest = new TranslatorText();
- String response = translateRequest.Post();
- System.out.println(prettify(response));
- } catch (Exception e) {
- System.out.println(e);
- }
- }
-}
-```
-
-### [Node.js](#tab/nodejs)
-
-```javascript
-const axios = require('axios').default;
-const { v4: uuidv4 } = require('uuid');
-
-let key = "<YOUR-TRANSLATOR-KEY>";
-let endpoint = "https://api.cognitive.microsofttranslator.com";
-
-// Add your location, also known as region. The default is global.
-// This is required if using a Cognitive Services resource.
-let location = "<YOUR-RESOURCE-LOCATION>";
-
-axios({
- baseURL: endpoint,
- url: '/translate',
- method: 'post',
- headers: {
- 'Ocp-Apim-Subscription-Key': key,
- // location required if you're using a multi-service or regional (not global) resource.
- 'Ocp-Apim-Subscription-Region': location,
- 'Content-type': 'application/json',
- 'X-ClientTraceId': uuidv4().toString()
- },
- params: {
- 'api-version': '3.0',
- 'to': 'es',
- 'includeSentenceLength': true
- },
- data: [{
- 'text': 'Can you tell me how to get to Penn Station? Oh, you aren\'t sure? That\'s fine.'
- }],
- responseType: 'json'
-}).then(function(response){
- console.log(JSON.stringify(response.data, null, 4));
-})
-```
-
-### [Python](#tab/python)
-
-```python
-import requests, uuid, json
-
-# Add your key and endpoint
-key = "<YOUR-TRANSLATOR-KEY>"
-endpoint = "https://api.cognitive.microsofttranslator.com"
-
-# location, also known as region.
-# required if you're using a multi-service or regional (not global) resource. It can be found in the Azure portal on the Keys and Endpoint page.
-
-location = "<YOUR-RESOURCE-LOCATION>"
-
-path = '/translate'
-constructed_url = endpoint + path
-
-params = {
- 'api-version': '3.0',
- 'to': 'es',
- 'includeSentenceLength': True
-}
-
-headers = {
- 'Ocp-Apim-Subscription-Key': key,
- # location required if you're using a multi-service or regional (not global) resource.
- 'Ocp-Apim-Subscription-Region': location,
- 'Content-type': 'application/json',
- 'X-ClientTraceId': str(uuid.uuid4())
-}
-
-# You can pass more than one object in body.
-body = [{
- 'text': 'Can you tell me how to get to Penn Station? Oh, you aren\'t sure? That\'s fine.'
-}]
-request = requests.post(constructed_url, params=params, headers=headers, json=body)
-response = request.json()
-
-print(json.dumps(response, sort_keys=True, ensure_ascii=False, indent=4, separators=(',', ': ')))
-```
---
-After a successful call, you should see the following response. In addition to the detected source language and translation, you get character counts for each detected sentence for both the source (`srcSentLen`) and translation (`transSentLen`).
-
-```json
-[
- {
- "detectedLanguage":{
- "language":"en",
- "score":1.0
- },
- "translations":[
- {
- "text":"¿Puedes decirme cómo llegar a Penn Station? Oh, ¿no estás seguro? Está bien.",
- "to":"es",
- "sentLen":{
- "srcSentLen":[
- 44,
- 21,
- 12
- ],
- "transSentLen":[
- 44,
- 22,
- 10
- ]
- }
- }
- ]
- }
-]
-```
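-
-One way to use these values is to compare the source and translated sentence lengths, for example to check how much longer or shorter the translation is. A minimal sketch (not part of the tabbed samples, shown in Python for brevity), assuming the parsed response is stored in a `response` variable:
-
-```python
-# Minimal sketch: compare per-sentence character counts from a /translate response
-# that was requested with includeSentenceLength=true.
-response = [
-    {
-        "detectedLanguage": {"language": "en", "score": 1.0},
-        "translations": [
-            {
-                "text": "¿Puedes decirme cómo llegar a Penn Station? Oh, ¿no estás seguro? Está bien.",
-                "to": "es",
-                "sentLen": {"srcSentLen": [44, 21, 12], "transSentLen": [44, 22, 10]}
-            }
-        ]
-    }
-]
-
-for item in response:
-    for translation in item["translations"]:
-        src = translation["sentLen"]["srcSentLen"]
-        trans = translation["sentLen"]["transSentLen"]
-        for i, (s, t) in enumerate(zip(src, trans), start=1):
-            print(f"Sentence {i}: {s} source characters, {t} translated characters")
-        print(f"Total: {sum(src)} source characters, {sum(trans)} translated characters")
-```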
-
-### Get sentence length without translation
-
-The Translator service also lets you request sentence length without translation using the `breaksentence` endpoint.
-
-### [C#](#tab/csharp)
-
-```csharp
-using System;
-using System.Net.Http;
-using System.Text;
-using System.Threading.Tasks;
-using Newtonsoft.Json; // Install Newtonsoft.Json with NuGet
-
-class Program
-{
- private static readonly string key = "<YOUR-TRANSLATOR-KEY>";
- private static readonly string endpoint = "https://api.cognitive.microsofttranslator.com";
-
- // location, also known as region.
- // required if you're using a multi-service or regional (not global) resource. It can be found in the Azure portal on the Keys and Endpoint page.
- private static readonly string location = "<YOUR-RESOURCE-LOCATION>";
-
- static async Task Main(string[] args)
- {
- // Only include sentence length details.
- string route = "/breaksentence?api-version=3.0";
- string sentencesToCount =
- "Can you tell me how to get to Penn Station? Oh, you aren't sure? That's fine.";
- object[] body = new object[] { new { Text = sentencesToCount } };
- var requestBody = JsonConvert.SerializeObject(body);
-
- using (var client = new HttpClient())
- using (var request = new HttpRequestMessage())
- {
- // Build the request.
- request.Method = HttpMethod.Post;
- request.RequestUri = new Uri(endpoint + route);
- request.Content = new StringContent(requestBody, Encoding.UTF8, "application/json");
- request.Headers.Add("Ocp-Apim-Subscription-Key", key);
- // location required if you're using a multi-service or regional (not global) resource.
- request.Headers.Add("Ocp-Apim-Subscription-Region", location);
-
- // Send the request and get response.
- HttpResponseMessage response = await client.SendAsync(request).ConfigureAwait(false);
- // Read response as a string.
- string result = await response.Content.ReadAsStringAsync();
- Console.WriteLine(result);
- }
- }
-}
-```
-
-### [Go](#tab/go)
-
-```go
-package main
-
-import (
- "bytes"
- "encoding/json"
- "fmt"
- "log"
- "net/http"
- "net/url"
-)
-
-func main() {
- key := "<YOUR-TRANSLATOR-KEY>"
- // location, also known as region.
- // required if you're using a multi-service or regional (not global) resource. It can be found in the Azure portal on the Keys and Endpoint page.
- location := "<YOUR-RESOURCE-LOCATION>"
- endpoint := "https://api.cognitive.microsofttranslator.com/"
- uri := endpoint + "/breaksentence?api-version=3.0"
-
- // Build the request URL. See: https://go.dev/pkg/net/url/#example_URL_Parse
- u, _ := url.Parse(uri)
- q := u.Query()
- u.RawQuery = q.Encode()
-
- // Create an anonymous struct for your request body and encode it to JSON
- body := []struct {
- Text string
- }{
- {Text: "Can you tell me how to get to Penn Station? Oh, you aren't sure? That's fine."},
- }
- b, _ := json.Marshal(body)
-
- // Build the HTTP POST request
- req, err := http.NewRequest("POST", u.String(), bytes.NewBuffer(b))
- if err != nil {
- log.Fatal(err)
- }
- // Add required headers to the request
- req.Header.Add("Ocp-Apim-Subscription-Key", key)
- // location required if you're using a multi-service or regional (not global) resource.
- req.Header.Add("Ocp-Apim-Subscription-Region", location)
- req.Header.Add("Content-Type", "application/json")
-
- // Call the Translator API
- res, err := http.DefaultClient.Do(req)
- if err != nil {
- log.Fatal(err)
- }
-
- // Decode the JSON response
- var result interface{}
- if err := json.NewDecoder(res.Body).Decode(&result); err != nil {
- log.Fatal(err)
- }
- // Format and print the response to terminal
- prettyJSON, _ := json.MarshalIndent(result, "", " ")
- fmt.Printf("%s\n", prettyJSON)
-}
-```
-
-### [Java](#tab/java)
-
-```java
-import java.io.*;
-import java.net.*;
-import java.util.*;
-import com.google.gson.*;
-import com.squareup.okhttp.*;
-
-public class TranslatorText {
- private static String key = "<YOUR-TRANSLATOR-KEY>";
- public String endpoint = "https://api.cognitive.microsofttranslator.com";
- public String route = "/breaksentence?api-version=3.0";
- public String url = endpoint.concat(route);
-
- // location, also known as region.
- // required if you're using a multi-service or regional (not global) resource. It can be found in the Azure portal on the Keys and Endpoint page.
- private static String location = "<YOUR-RESOURCE-LOCATION>";
-
- // Instantiates the OkHttpClient.
- OkHttpClient client = new OkHttpClient();
-
- // This function performs a POST request.
- public String Post() throws IOException {
- MediaType mediaType = MediaType.parse("application/json");
- RequestBody body = RequestBody.create(mediaType,
- "[{\"Text\": \"Can you tell me how to get to Penn Station? Oh, you aren\'t sure? That\'s fine.\"}]");
- Request request = new Request.Builder()
- .url(url)
- .post(body)
- .addHeader("Ocp-Apim-Subscription-Key", key)
- // location required if you're using a multi-service or regional (not global) resource.
- .addHeader("Ocp-Apim-Subscription-Region", location)
- .addHeader("Content-type", "application/json")
- .build();
- Response response = client.newCall(request).execute();
- return response.body().string();
- }
-
- // This function prettifies the json response.
- public static String prettify(String json_text) {
- JsonParser parser = new JsonParser();
- JsonElement json = parser.parse(json_text);
- Gson gson = new GsonBuilder().setPrettyPrinting().create();
- return gson.toJson(json);
- }
-
- public static void main(String[] args) {
- try {
- TranslatorText breakSentenceRequest = new TranslatorText();
- String response = breakSentenceRequest.Post();
- System.out.println(prettify(response));
- } catch (Exception e) {
- System.out.println(e);
- }
- }
-}
-```
-
-### [Node.js](#tab/nodejs)
-
-```javascript
-const axios = require('axios').default;
-const { v4: uuidv4 } = require('uuid');
-
-let key = "<YOUR-TRANSLATOR-KEY>";
-let endpoint = "https://api.cognitive.microsofttranslator.com";
-
-// Add your location, also known as region. The default is global.
-// This is required if using a Cognitive Services resource.
-let location = "<YOUR-RESOURCE-LOCATION>";
-
-axios({
- baseURL: endpoint,
- url: '/breaksentence',
- method: 'post',
- headers: {
- 'Ocp-Apim-Subscription-Key': key,
- // location required if you're using a multi-service or regional (not global) resource.
- 'Ocp-Apim-Subscription-Region': location,
- 'Content-type': 'application/json',
- 'X-ClientTraceId': uuidv4().toString()
- },
- params: {
- 'api-version': '3.0'
- },
- data: [{
- 'text': 'Can you tell me how to get to Penn Station? Oh, you aren\'t sure? That\'s fine.'
- }],
- responseType: 'json'
-}).then(function(response){
- console.log(JSON.stringify(response.data, null, 4));
-})
-```
-
-### [Python](#tab/python)
-
-```python
-import requests, uuid, json
-
-# Add your key and endpoint
-key = "<YOUR-TRANSLATOR-KEY>"
-endpoint = "https://api.cognitive.microsofttranslator.com"
-
-# location, also known as region.
-# required if you're using a multi-service or regional (not global) resource. It can be found in the Azure portal on the Keys and Endpoint page.
-
-location = "<YOUR-RESOURCE-LOCATION>"
-
-path = '/breaksentence'
-constructed_url = endpoint + path
-
-params = {
- 'api-version': '3.0'
-}
-
-headers = {
- 'Ocp-Apim-Subscription-Key': key,
- # location required if you're using a multi-service or regional (not global) resource.
- 'Ocp-Apim-Subscription-Region': location,
- 'Content-type': 'application/json',
- 'X-ClientTraceId': str(uuid.uuid4())
-}
-
-# You can pass more than one object in body.
-body = [{
- 'text': 'Can you tell me how to get to Penn Station? Oh, you aren\'t sure? That\'s fine.'
-}]
-
-request = requests.post(constructed_url, params=params, headers=headers, json=body)
-response = request.json()
-
-print(json.dumps(response, sort_keys=True, indent=4, separators=(',', ': ')))
-```
---
-After a successful call, you should see the following response. Unlike the call to the `translate` endpoint, `breaksentence` only returns the character counts for the source text in an array called `sentLen`.
-
-```json
-[
- {
- "detectedLanguage":{
- "language":"en",
- "score":1.0
- },
- "sentLen":[
- 44,
- 21,
- 12
- ]
- }
-]
-```
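-
-Because `sentLen` holds consecutive character counts, you can use it to slice the original text into individual sentences. A minimal sketch (not part of the tabbed samples, shown in Python for brevity):
-
-```python
-# Minimal sketch: split the source text into sentences using the sentLen
-# character counts returned by /breaksentence.
-text = "Can you tell me how to get to Penn Station? Oh, you aren't sure? That's fine."
-sent_len = [44, 21, 12]  # values from the response above
-
-offset = 0
-for count in sent_len:
-    print(repr(text[offset:offset + count]))
-    offset += count
-```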
-
-## Dictionary lookup (alternate translations)
-
-With the `dictionary/lookup` endpoint, you can get alternate translations for a word or phrase. For example, when translating the word "sunlight" from `en` to `es`, this endpoint returns "`luz solar`," "`rayos solares`," "`soleamiento`," "`sol`," and "`insolación`."
-
-### [C#](#tab/csharp)
-
-```csharp
-using System;
-using System.Net.Http;
-using System.Text;
-using System.Threading.Tasks;
-using Newtonsoft.Json; // Install Newtonsoft.Json with NuGet
-
-class Program
-{
- private static readonly string key = "<YOUR-TRANSLATOR-KEY>";
- private static readonly string endpoint = "https://api.cognitive.microsofttranslator.com";
-
- // location, also known as region.
- // required if you're using a multi-service or regional (not global) resource. It can be found in the Azure portal on the Keys and Endpoint page.
- private static readonly string location = "<YOUR-RESOURCE-LOCATION>";
-
- static async Task Main(string[] args)
- {
- // See many translation options
- string route = "/dictionary/lookup?api-version=3.0&from=en&to=es";
- string wordToTranslate = "sunlight";
- object[] body = new object[] { new { Text = wordToTranslate } };
- var requestBody = JsonConvert.SerializeObject(body);
-
- using (var client = new HttpClient())
- using (var request = new HttpRequestMessage())
- {
- // Build the request.
- request.Method = HttpMethod.Post;
- request.RequestUri = new Uri(endpoint + route);
- request.Content = new StringContent(requestBody, Encoding.UTF8, "application/json");
- request.Headers.Add("Ocp-Apim-Subscription-Key", key);
- // location required if you're using a multi-service or regional (not global) resource.
- request.Headers.Add("Ocp-Apim-Subscription-Region", location);
-
- // Send the request and get response.
- HttpResponseMessage response = await client.SendAsync(request).ConfigureAwait(false);
- // Read response as a string.
- string result = await response.Content.ReadAsStringAsync();
- Console.WriteLine(result);
- }
- }
-}
-```
-
-### [Go](#tab/go)
-
-```go
-package main
-
-import (
- "bytes"
- "encoding/json"
- "fmt"
- "log"
- "net/http"
- "net/url"
-)
-
-func main() {
- key := "<YOUR-TRANSLATOR-KEY>"
- // location, also known as region.
- // required if you're using a multi-service or regional (not global) resource. It can be found in the Azure portal on the Keys and Endpoint page.
- location := "<YOUR-RESOURCE-LOCATION>"
- endpoint := "https://api.cognitive.microsofttranslator.com/"
- uri := endpoint + "/dictionary/lookup?api-version=3.0"
-
- // Build the request URL. See: https://go.dev/pkg/net/url/#example_URL_Parse
- u, _ := url.Parse(uri)
- q := u.Query()
- q.Add("from", "en")
- q.Add("to", "es")
- u.RawQuery = q.Encode()
-
- // Create an anonymous struct for your request body and encode it to JSON
- body := []struct {
- Text string
- }{
- {Text: "sunlight"},
- }
- b, _ := json.Marshal(body)
-
- // Build the HTTP POST request
- req, err := http.NewRequest("POST", u.String(), bytes.NewBuffer(b))
- if err != nil {
- log.Fatal(err)
- }
- // Add required headers to the request
- req.Header.Add("Ocp-Apim-Subscription-Key", key)
- // location required if you're using a multi-service or regional (not global) resource.
- req.Header.Add("Ocp-Apim-Subscription-Region", location)
- req.Header.Add("Content-Type", "application/json")
-
- // Call the Translator API
- res, err := http.DefaultClient.Do(req)
- if err != nil {
- log.Fatal(err)
- }
-
- // Decode the JSON response
- var result interface{}
- if err := json.NewDecoder(res.Body).Decode(&result); err != nil {
- log.Fatal(err)
- }
- // Format and print the response to terminal
- prettyJSON, _ := json.MarshalIndent(result, "", " ")
- fmt.Printf("%s\n", prettyJSON)
-}
-```
-
-### [Java](#tab/java)
-
-```java
-import java.io.*;
-import java.net.*;
-import java.util.*;
-import com.google.gson.*;
-import com.squareup.okhttp.*;
-
-public class TranslatorText {
- private static String key = "<YOUR-TRANSLATOR-KEY>";
- public String endpoint = "https://api.cognitive.microsofttranslator.com";
- public String route = "/dictionary/lookup?api-version=3.0&from=en&to=es";
- public String url = endpoint.concat(route);
-
- // location, also known as region.
- // required if you're using a multi-service or regional (not global) resource. It can be found in the Azure portal on the Keys and Endpoint page.
- private static String location = "<YOUR-RESOURCE-LOCATION>";
-
- // Instantiates the OkHttpClient.
- OkHttpClient client = new OkHttpClient();
-
- // This function performs a POST request.
- public String Post() throws IOException {
- MediaType mediaType = MediaType.parse("application/json");
- RequestBody body = RequestBody.create(mediaType,
- "[{\"Text\": \"sunlight\"}]");
- Request request = new Request.Builder()
- .url(url)
- .post(body)
- .addHeader("Ocp-Apim-Subscription-Key", key)
- // location required if you're using a multi-service or regional (not global) resource.
- .addHeader("Ocp-Apim-Subscription-Region", location)
- .addHeader("Content-type", "application/json")
- .build();
- Response response = client.newCall(request).execute();
- return response.body().string();
- }
-
- // This function prettifies the json response.
- public static String prettify(String json_text) {
- JsonParser parser = new JsonParser();
- JsonElement json = parser.parse(json_text);
- Gson gson = new GsonBuilder().setPrettyPrinting().create();
- return gson.toJson(json);
- }
-
- public static void main(String[] args) {
- try {
- TranslatorText breakSentenceRequest = new TranslatorText();
- String response = breakSentenceRequest.Post();
- System.out.println(prettify(response));
- } catch (Exception e) {
- System.out.println(e);
- }
- }
-}
-```
-
-### [Node.js](#tab/nodejs)
-
-```javascript
-const axios = require('axios').default;
-const { v4: uuidv4 } = require('uuid');
-
-let key = "<YOUR-TRANSLATOR-KEY>";
-let endpoint = "https://api.cognitive.microsofttranslator.com";
-
-// Add your location, also known as region. The default is global.
-// This is required if using a Cognitive Services resource.
-let location = "<YOUR-RESOURCE-LOCATION>";
-
-axios({
- baseURL: endpoint,
- url: '/dictionary/lookup',
- method: 'post',
- headers: {
- 'Ocp-Apim-Subscription-Key': key,
- // location required if you're using a multi-service or regional (not global) resource.
- 'Ocp-Apim-Subscription-Region': location,
- 'Content-type': 'application/json',
- 'X-ClientTraceId': uuidv4().toString()
- },
- params: {
- 'api-version': '3.0',
- 'from': 'en',
- 'to': 'es'
- },
- data: [{
- 'text': 'sunlight'
- }],
- responseType: 'json'
-}).then(function(response){
- console.log(JSON.stringify(response.data, null, 4));
-})
-```
-
-### [Python](#tab/python)
-
-```python
-import requests, uuid, json
-
-# Add your key and endpoint
-key = "<YOUR-TRANSLATOR-KEY>"
-endpoint = "https://api.cognitive.microsofttranslator.com"
-
-# location, also known as region.
-# required if you're using a multi-service or regional (not global) resource. It can be found in the Azure portal on the Keys and Endpoint page.
-
-location = "<YOUR-RESOURCE-LOCATION>"
-
-path = '/dictionary/lookup'
-constructed_url = endpoint + path
-
-params = {
- 'api-version': '3.0',
- 'from': 'en',
- 'to': 'es'
-}
-
-headers = {
- 'Ocp-Apim-Subscription-Key': key,
- # location required if you're using a multi-service or regional (not global) resource.
- 'Ocp-Apim-Subscription-Region': location,
- 'Content-type': 'application/json',
- 'X-ClientTraceId': str(uuid.uuid4())
-}
-
-# You can pass more than one object in body.
-body = [{
- 'text': 'sunlight'
-}]
-request = requests.post(constructed_url, params=params, headers=headers, json=body)
-response = request.json()
-
-print(json.dumps(response, sort_keys=True, ensure_ascii=False, indent=4, separators=(',', ': ')))
-```
---
-After a successful call, you should see the following response. Let's examine it more closely because the JSON is more complex than some of the other examples in this article. The `translations` array includes a list of translations. Each object in this array includes a confidence score (`confidence`), the text optimized for end-user display (`displayTarget`), the normalized text (`normalizedTarget`), the part of speech (`posTag`), and back translations to the source language (`backTranslations`). For more information about the response, see [Dictionary Lookup](reference/v3-0-dictionary-lookup.md).
-
-```json
-[
- {
- "normalizedSource":"sunlight",
- "displaySource":"sunlight",
- "translations":[
- {
- "normalizedTarget":"luz solar",
- "displayTarget":"luz solar",
- "posTag":"NOUN",
- "confidence":0.5313,
- "prefixWord":"",
- "backTranslations":[
- {
- "normalizedText":"sunlight",
- "displayText":"sunlight",
- "numExamples":15,
- "frequencyCount":702
- },
- {
- "normalizedText":"sunshine",
- "displayText":"sunshine",
- "numExamples":7,
- "frequencyCount":27
- },
- {
- "normalizedText":"daylight",
- "displayText":"daylight",
- "numExamples":4,
- "frequencyCount":17
- }
- ]
- },
- {
- "normalizedTarget":"rayos solares",
- "displayTarget":"rayos solares",
- "posTag":"NOUN",
- "confidence":0.1544,
- "prefixWord":"",
- "backTranslations":[
- {
- "normalizedText":"sunlight",
- "displayText":"sunlight",
- "numExamples":4,
- "frequencyCount":38
- },
- {
- "normalizedText":"rays",
- "displayText":"rays",
- "numExamples":11,
- "frequencyCount":30
- },
- {
- "normalizedText":"sunrays",
- "displayText":"sunrays",
- "numExamples":0,
- "frequencyCount":6
- },
- {
- "normalizedText":"sunbeams",
- "displayText":"sunbeams",
- "numExamples":0,
- "frequencyCount":4
- }
- ]
- },
- {
- "normalizedTarget":"soleamiento",
- "displayTarget":"soleamiento",
- "posTag":"NOUN",
- "confidence":0.1264,
- "prefixWord":"",
- "backTranslations":[
- {
- "normalizedText":"sunlight",
- "displayText":"sunlight",
- "numExamples":0,
- "frequencyCount":7
- }
- ]
- },
- {
- "normalizedTarget":"sol",
- "displayTarget":"sol",
- "posTag":"NOUN",
- "confidence":0.1239,
- "prefixWord":"",
- "backTranslations":[
- {
- "normalizedText":"sun",
- "displayText":"sun",
- "numExamples":15,
- "frequencyCount":20387
- },
- {
- "normalizedText":"sunshine",
- "displayText":"sunshine",
- "numExamples":15,
- "frequencyCount":1439
- },
- {
- "normalizedText":"sunny",
- "displayText":"sunny",
- "numExamples":15,
- "frequencyCount":265
- },
- {
- "normalizedText":"sunlight",
- "displayText":"sunlight",
- "numExamples":15,
- "frequencyCount":242
- }
- ]
- },
- {
- "normalizedTarget":"insolaci├│n",
- "displayTarget":"insolaci├│n",
- "posTag":"NOUN",
- "confidence":0.064,
- "prefixWord":"",
- "backTranslations":[
- {
- "normalizedText":"heat stroke",
- "displayText":"heat stroke",
- "numExamples":3,
- "frequencyCount":67
- },
- {
- "normalizedText":"insolation",
- "displayText":"insolation",
- "numExamples":1,
- "frequencyCount":55
- },
- {
- "normalizedText":"sunstroke",
- "displayText":"sunstroke",
- "numExamples":2,
- "frequencyCount":31
- },
- {
- "normalizedText":"sunlight",
- "displayText":"sunlight",
- "numExamples":0,
- "frequencyCount":12
- },
- {
- "normalizedText":"solarization",
- "displayText":"solarization",
- "numExamples":0,
- "frequencyCount":7
- },
- {
- "normalizedText":"sunning",
- "displayText":"sunning",
- "numExamples":1,
- "frequencyCount":7
- }
- ]
- }
- ]
- }
-]
-```
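-
-A common way to use this response is to rank the alternate translations by their confidence scores and show the back translations alongside each one. A minimal sketch (not part of the tabbed samples, shown in Python for brevity), using a trimmed-down copy of the response above:
-
-```python
-# Minimal sketch: list dictionary/lookup translations ordered by confidence,
-# together with their back translations. The data below is an abbreviated
-# version of the response shown above.
-response = [
-    {
-        "normalizedSource": "sunlight",
-        "displaySource": "sunlight",
-        "translations": [
-            {"displayTarget": "sol", "posTag": "NOUN", "confidence": 0.1239,
-             "backTranslations": [{"displayText": "sun"}, {"displayText": "sunshine"}]},
-            {"displayTarget": "luz solar", "posTag": "NOUN", "confidence": 0.5313,
-             "backTranslations": [{"displayText": "sunlight"}, {"displayText": "daylight"}]}
-        ]
-    }
-]
-
-for entry in response:
-    ranked = sorted(entry["translations"], key=lambda t: t["confidence"], reverse=True)
-    for translation in ranked:
-        back = ", ".join(b["displayText"] for b in translation["backTranslations"])
-        print(f"{translation['displayTarget']} ({translation['posTag']}, "
-              f"confidence {translation['confidence']:.2f}) - back translations: {back}")
-```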
-
-## Dictionary examples (translations in context)
-
-After you've performed a dictionary lookup, pass the source text and translation to the `dictionary/examples` endpoint to get a list of examples that show both terms in the context of a sentence or phrase. Building on the previous example, you use the `normalizedText` and `normalizedTarget` values from the dictionary lookup response as the `text` and `translation` parameters, respectively. The source language (`from`) and output target (`to`) parameters are required.
-
-### [C#](#tab/csharp)
-
-```csharp
-using System;
-using System.Net.Http;
-using System.Text;
-using System.Threading.Tasks;
-using Newtonsoft.Json; // Install Newtonsoft.Json with NuGet
-
-class Program
-{
- private static readonly string key = "<YOUR-TRANSLATOR-KEY>";
- private static readonly string endpoint = "https://api.cognitive.microsofttranslator.com";
-
- // location, also known as region.
- // required if you're using a multi-service or regional (not global) resource. It can be found in the Azure portal on the Keys and Endpoint page.
- private static readonly string location = "<YOUR-RESOURCE-LOCATION>";
-
- static async Task Main(string[] args)
- {
- // See examples of terms in context
- string route = "/dictionary/examples?api-version=3.0&from=en&to=es";
- object[] body = new object[] { new { Text = "sunlight", Translation = "luz solar" } } ;
- var requestBody = JsonConvert.SerializeObject(body);
-
- using (var client = new HttpClient())
- using (var request = new HttpRequestMessage())
- {
- // Build the request.
- request.Method = HttpMethod.Post;
- request.RequestUri = new Uri(endpoint + route);
- request.Content = new StringContent(requestBody, Encoding.UTF8, "application/json");
- request.Headers.Add("Ocp-Apim-Subscription-Key", key);
- // location required if you're using a multi-service or regional (not global) resource.
- request.Headers.Add("Ocp-Apim-Subscription-Region", location);
-
- // Send the request and get response.
- HttpResponseMessage response = await client.SendAsync(request).ConfigureAwait(false);
- // Read response as a string.
- string result = await response.Content.ReadAsStringAsync();
- Console.WriteLine(result);
- }
- }
-}
-```
-
-### [Go](#tab/go)
-
-```go
-package main
-
-import (
- "bytes"
- "encoding/json"
- "fmt"
- "log"
- "net/http"
- "net/url"
-)
-
-func main() {
- key := "<YOUR-TRANSLATOR-KEY>"
- // location, also known as region.
- // required if you're using a multi-service or regional (not global) resource. It can be found in the Azure portal on the Keys and Endpoint page.
- location := "<YOUR-RESOURCE-LOCATION>"
- endpoint := "https://api.cognitive.microsofttranslator.com/"
- uri := endpoint + "/dictionary/examples?api-version=3.0"
-
- // Build the request URL. See: https://go.dev/pkg/net/url/#example_URL_Parse
- u, _ := url.Parse(uri)
- q := u.Query()
- q.Add("from", "en")
- q.Add("to", "es")
- u.RawQuery = q.Encode()
-
- // Create an anonymous struct for your request body and encode it to JSON
- body := []struct {
- Text string
- Translation string
- }{
- {
- Text: "sunlight",
- Translation: "luz solar",
- },
- }
- b, _ := json.Marshal(body)
-
- // Build the HTTP POST request
- req, err := http.NewRequest("POST", u.String(), bytes.NewBuffer(b))
- if err != nil {
- log.Fatal(err)
- }
- // Add required headers to the request
- req.Header.Add("Ocp-Apim-Subscription-Key", key)
- // location required if you're using a multi-service or regional (not global) resource.
- req.Header.Add("Ocp-Apim-Subscription-Region", location)
- req.Header.Add("Content-Type", "application/json")
-
- // Call the Translator Text API
- res, err := http.DefaultClient.Do(req)
- if err != nil {
- log.Fatal(err)
- }
-
- // Decode the JSON response
- var result interface{}
- if err := json.NewDecoder(res.Body).Decode(&result); err != nil {
- log.Fatal(err)
- }
- // Format and print the response to terminal
- prettyJSON, _ := json.MarshalIndent(result, "", " ")
- fmt.Printf("%s\n", prettyJSON)
-}
-```
-
-### [Java](#tab/java)
-
-```java
-import java.io.*;
-import java.net.*;
-import java.util.*;
-import com.google.gson.*;
-import com.squareup.okhttp.*;
-
-public class TranslatorText {
- private static String key = "<YOUR-TRANSLATOR-KEY>";
- public String endpoint = "https://api.cognitive.microsofttranslator.com";
- public String route = "/dictionary/examples?api-version=3.0&from=en&to=es";
- public String url = endpoint.concat(route);
-
- // location, also known as region.
- // required if you're using a multi-service or regional (not global) resource. It can be found in the Azure portal on the Keys and Endpoint page.
- private static String location = "<YOUR-RESOURCE-LOCATION>";
-
- // Instantiates the OkHttpClient.
- OkHttpClient client = new OkHttpClient();
-
- // This function performs a POST request.
- public String Post() throws IOException {
- MediaType mediaType = MediaType.parse("application/json");
- RequestBody body = RequestBody.create(mediaType,
- "[{\"Text\": \"sunlight\", \"Translation\": \"luz solar\"}]");
- Request request = new Request.Builder()
- .url(url)
- .post(body)
- .addHeader("Ocp-Apim-Subscription-Key", key)
- // location required if you're using a multi-service or regional (not global) resource.
- .addHeader("Ocp-Apim-Subscription-Region", location)
- .addHeader("Content-type", "application/json")
- .build();
- Response response = client.newCall(request).execute();
- return response.body().string();
- }
-
- // This function prettifies the json response.
- public static String prettify(String json_text) {
- JsonParser parser = new JsonParser();
- JsonElement json = parser.parse(json_text);
- Gson gson = new GsonBuilder().setPrettyPrinting().create();
- return gson.toJson(json);
- }
-
- public static void main(String[] args) {
- try {
- TranslatorText dictionaryExamplesRequest = new TranslatorText();
- String response = dictionaryExamplesRequest.Post();
- System.out.println(prettify(response));
- } catch (Exception e) {
- System.out.println(e);
- }
- }
-}
-```
-
-### [Node.js](#tab/nodejs)
-
-```javascript
-const axios = require('axios').default;
-const { v4: uuidv4 } = require('uuid');
-
-let key = "<YOUR-TRANSLATOR-KEY>";
-let endpoint = "https://api.cognitive.microsofttranslator.com";
-
-// Add your location, also known as region. The default is global.
-// This is required if using a Cognitive Services resource.
-let location = "<YOUR-RESOURCE-LOCATION>";
-
-axios({
- baseURL: endpoint,
- url: '/dictionary/examples',
- method: 'post',
- headers: {
- 'Ocp-Apim-Subscription-Key': key,
- // location required if you're using a multi-service or regional (not global) resource.
- 'Ocp-Apim-Subscription-Region': location,
- 'Content-type': 'application/json',
- 'X-ClientTraceId': uuidv4().toString()
- },
- params: {
- 'api-version': '3.0',
- 'from': 'en',
- 'to': 'es'
- },
- data: [{
- 'text': 'sunlight',
- 'translation': 'luz solar'
- }],
- responseType: 'json'
-}).then(function(response){
- console.log(JSON.stringify(response.data, null, 4));
-})
-```
-
-### [Python](#tab/python)
-
-```python
-import requests, uuid, json
-
-# Add your key and endpoint
-key = "<YOUR-TRANSLATOR-KEY>"
-endpoint = "https://api.cognitive.microsofttranslator.com"
-
-# location, also known as region.
-# required if you're using a multi-service or regional (not global) resource. It can be found in the Azure portal on the Keys and Endpoint page.
-
-location = "<YOUR-RESOURCE-LOCATION>"
-
-path = '/dictionary/examples'
-constructed_url = endpoint + path
-
-params = {
- 'api-version': '3.0',
- 'from': 'en',
- 'to': 'es'
-}
-
-headers = {
- 'Ocp-Apim-Subscription-Key': key,
- # location required if you're using a multi-service or regional (not global) resource.
- 'Ocp-Apim-Subscription-Region': location,
- 'Content-type': 'application/json',
- 'X-ClientTraceId': str(uuid.uuid4())
-}
-
-# You can pass more than one object in body.
-body = [{
- 'text': 'sunlight',
- 'translation': 'luz solar'
-}]
-
-request = requests.post(constructed_url, params=params, headers=headers, json=body)
-response = request.json()
-
-print(json.dumps(response, sort_keys=True, ensure_ascii=False, indent=4, separators=(',', ': ')))
-```
---
-After a successful call, you should see the following response. For more information about the response, see [Dictionary examples](reference/v3-0-dictionary-examples.md).
-
-```json
-[
- {
- "normalizedSource":"sunlight",
- "normalizedTarget":"luz solar",
- "examples":[
- {
- "sourcePrefix":"You use a stake, silver, or ",
- "sourceTerm":"sunlight",
- "sourceSuffix":".",
- "targetPrefix":"Se usa una estaca, plata, o ",
- "targetTerm":"luz solar",
- "targetSuffix":"."
- },
- {
- "sourcePrefix":"A pocket of ",
- "sourceTerm":"sunlight",
- "sourceSuffix":".",
- "targetPrefix":"Una bolsa de ",
- "targetTerm":"luz solar",
- "targetSuffix":"."
- },
- {
- "sourcePrefix":"There must also be ",
- "sourceTerm":"sunlight",
- "sourceSuffix":".",
- "targetPrefix":"También debe haber ",
- "targetTerm":"luz solar",
- "targetSuffix":"."
- },
- {
- "sourcePrefix":"We were living off of current ",
- "sourceTerm":"sunlight",
- "sourceSuffix":".",
- "targetPrefix":"Estábamos viviendo de la ",
- "targetTerm":"luz solar",
- "targetSuffix":" actual."
- },
- {
- "sourcePrefix":"And they don't need unbroken ",
- "sourceTerm":"sunlight",
- "sourceSuffix":".",
- "targetPrefix":"Y ellos no necesitan ",
- "targetTerm":"luz solar",
- "targetSuffix":" ininterrumpida."
- },
- {
- "sourcePrefix":"We have lamps that give the exact equivalent of ",
- "sourceTerm":"sunlight",
- "sourceSuffix":".",
- "targetPrefix":"Disponemos de lámparas que dan el equivalente exacto de ",
- "targetTerm":"luz solar",
- "targetSuffix":"."
- },
- {
- "sourcePrefix":"Plants need water and ",
- "sourceTerm":"sunlight",
- "sourceSuffix":".",
- "targetPrefix":"Las plantas necesitan agua y ",
- "targetTerm":"luz solar",
- "targetSuffix":"."
- },
- {
- "sourcePrefix":"So this requires ",
- "sourceTerm":"sunlight",
- "sourceSuffix":".",
- "targetPrefix":"Así que esto requiere ",
- "targetTerm":"luz solar",
- "targetSuffix":"."
- },
- {
- "sourcePrefix":"And this pocket of ",
- "sourceTerm":"sunlight",
- "sourceSuffix":" freed humans from their ...",
- "targetPrefix":"Y esta bolsa de ",
- "targetTerm":"luz solar",
- "targetSuffix":", liber├│ a los humanos de ..."
- },
- {
- "sourcePrefix":"Since there is no ",
- "sourceTerm":"sunlight",
- "sourceSuffix":", the air within ...",
- "targetPrefix":"Como no hay ",
- "targetTerm":"luz solar",
- "targetSuffix":", el aire atrapado en ..."
- },
- {
- "sourcePrefix":"The ",
- "sourceTerm":"sunlight",
- "sourceSuffix":" shining through the glass creates a ...",
- "targetPrefix":"La ",
- "targetTerm":"luz solar",
- "targetSuffix":" a través de la vidriera crea una ..."
- },
- {
- "sourcePrefix":"Less ice reflects less ",
- "sourceTerm":"sunlight",
- "sourceSuffix":", and more open ocean ...",
- "targetPrefix":"Menos hielo refleja menos ",
- "targetTerm":"luz solar",
- "targetSuffix":", y más mar abierto ..."
- },
- {
- "sourcePrefix":"",
- "sourceTerm":"Sunlight",
- "sourceSuffix":" is most intense at midday, so ...",
- "targetPrefix":"La ",
- "targetTerm":"luz solar",
- "targetSuffix":" es más intensa al mediodía, por lo que ..."
- },
- {
- "sourcePrefix":"... capture huge amounts of ",
- "sourceTerm":"sunlight",
- "sourceSuffix":", so fueling their growth.",
- "targetPrefix":"... capturan enormes cantidades de ",
- "targetTerm":"luz solar",
- "targetSuffix":" que favorecen su crecimiento."
- },
- {
- "sourcePrefix":"... full height, giving more direct ",
- "sourceTerm":"sunlight",
- "sourceSuffix":" in the winter.",
- "targetPrefix":"... altura completa, dando más ",
- "targetTerm":"luz solar",
- "targetSuffix":" directa durante el invierno."
- }
- ]
- }
-]
-```
-
-## Troubleshooting
-
-### Common HTTP status codes
-
-| HTTP status code | Description | Possible reason |
-||-|--|
-| 200 | OK | The request was successful. |
-| 400 | Bad Request | A required parameter is missing, empty, or null. Or, the value passed to either a required or optional parameter is invalid. A common issue is a header that is too long. |
-| 401 | Unauthorized | The request isn't authorized. Check to make sure your key or token is valid and in the correct region. *See also* [Authentication](reference/v3-0-reference.md#authentication).|
-| 429 | Too Many Requests | You've exceeded the quota or rate of requests allowed for your subscription. |
-| 502 | Bad Gateway | Network or server-side issue. May also indicate invalid headers. |
-
-### Java users
-
-If you're encountering connection issues, it may be that your TLS/SSL certificate has expired. To resolve this issue, install the [DigiCertGlobalRootG2.crt](http://cacerts.digicert.com/DigiCertGlobalRootG2.crt) in your private store.
-
-## Next steps
-
-> [!div class="nextstepaction"]
-> [Customize and improve translation](customization.md)
cognitive-services Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Translator/whats-new.md
- Title: What's new in Azure Cognitive Services Translator?
-description: Learn about the latest changes to the Translator Service API.
- Previously updated : 05/23/2023
-<!-- markdownlint-disable MD024 -->
-<!-- markdownlint-disable MD036 -->
-<!-- markdownlint-disable MD001 -->
-
-# What's new in Azure Cognitive Services Translator?
-
-Bookmark this page to stay up to date with release notes, feature enhancements, and our newest documentation.
-
-Translator is a language service that enables users to translate text and documents, helps entities expand their global outreach, and supports preservation of at-risk and endangered languages.
-
-Translator service supports language translation for more than 100 languages. If your language community is interested in partnering with Microsoft to add your language to Translator, contact us via the [Translator community partner onboarding form](https://forms.office.com/pages/responsepage.aspx?id=v4j5cvGGr0GRqy180BHbR-riVR3Xj0tOnIRdZOALbM9UOU1aMlNaWFJOOE5YODhRR1FWVzY0QzU1OS4u).
-
-## June 2023
-
-**Documentation updates**
-* The [Document Translation SDK overview](document-translation/document-sdk-overview.md) is now available to provide guidance and resources for the .NET/C# and Python SDKs.
-* The [Document Translation SDK quickstart](document-translation/quickstarts/document-translation-sdk.md) is now available for the C# and Python programming languages.
-
-## May 2023
-
-**Announcing new releases for Build 2023**
-
-### Text Translation SDK (preview)
-
-* The Text translation SDKs are now available in public preview for C#/.NET, Java, JavaScript/TypeScript, and Python programming languages.
-* To learn more, see [Text translation SDK overview](text-sdk-overview.md).
-* To get started, try a [Text Translation SDK quickstart](quickstart-translator-sdk.md) using a programming language of your choice.
-
-### Microsoft Translator V3 Connector (preview)
-
-The Translator V3 Connector is now available in public preview. The connector creates a connection between your Translator Service instance and Microsoft Power Automate enabling you to use one or more prebuilt operations as steps in your apps and workflows. To learn more, see the following documentation:
-
-* [Automate document translation](connector/document-translation-flow.md)
-* [Automate text translation](connector/text-translator-flow.md)
-
-## February 2023
-
-[**Document Translation in Language Studio**](document-translation/language-studio.md) is now available for Public Preview. The feature provides a no-code user interface to interactively translate documents from local or Azure Blob Storage.
-
-## November 2022
-
-### Custom Translator stable GA v2.0 release
-
-Custom Translator version v2.0 is generally available and ready for use in your production applications!
-
-## June 2022
-
-### Document Translation stable GA 1.0.0 release
-
-Document Translation .NET and Python client-library SDKs are now generally available and ready for use in production applications!
-
-### [**C#**](#tab/csharp)
-
-**Version 1.0.0 (GA)** </br>
-**2022-06-07**
-
-##### [README](https://github.com/Azure/azure-sdk-for-net/blob/Azure.AI.Translation.Document_1.0.0/sdk/translation/Azure.AI.Translation.Document/README.md)
-
-##### [**Changelog/Release History**](https://github.com/Azure/azure-sdk-for-net/blob/Azure.AI.Translation.Document_1.0.0/sdk/translation/Azure.AI.Translation.Document/CHANGELOG.md)
-
-##### [**Package (NuGet)**](https://www.nuget.org/packages/Azure.AI.Translation.Document)
-
-##### [**SDK reference documentation**](/dotnet/api/overview/azure/AI.Translation.Document-readme)
-
-### [Python](#tab/python)
-
-**Version 1.0.0 (GA)** </br>
-**2022-06-07**
-
-##### [README](https://github.com/Azure/azure-sdk-for-python/blob/azure-ai-translation-document_1.0.0/sdk/translation/azure-ai-translation-document/README.md)
-
-##### [**Changelog/Release History**](https://github.com/Azure/azure-sdk-for-python/blob/azure-ai-translation-document_1.0.0/sdk/translation/azure-ai-translation-document/CHANGELOG.md)
-
-##### [**Package (PyPI)**](https://pypi.org/project/azure-ai-translation-document/1.0.0/)
-
-##### [**SDK reference documentation**](/python/api/overview/azure/ai-translation-document-readme?view=azure-python&preserve-view=true)
---
-## May 2022
-
-### [Document Translation support for scanned PDF documents](https://aka.ms/blog_ScannedPdfTranslation)
-
-* Document Translation uses optical character recognition (OCR) technology to extract and translate text in scanned PDF document while retaining the original layout.
-
-## April 2022
-
-### [Text and document translation support for Faroese](https://www.microsoft.com/translator/blog/2022/04/25/introducing-faroese-translation-for-faroese-flag-day/)
-
-* Translator service has [text and document translation language support](language-support.md) for Faroese, a Germanic language originating on the Faroe Islands. The Faroe Islands are a self-governing region within the Kingdom of Denmark located between Norway and Iceland. Faroese is descended from Old West Norse spoken by Vikings in the Middle Ages.
-
-### [Text and document translation support for Basque and Galician](https://www.microsoft.com/translator/blog/2022/04/12/break-the-language-barrier-with-translator-now-with-two-new-languages/)
-
-* Translator service has [text and document translation language support](language-support.md) for Basque and Galician. Basque is a language isolate, meaning it isn't related to any other modern language. It's spoken in parts of northern Spain and southern France. Galician is spoken in northern Portugal and western Spain. Both Basque and Galician are official languages of Spain.
-
-## March 2022
-
-### [Text and document translation support for Somali and Zulu languages](https://www.microsoft.com/translator/blog/2022/03/29/translator-welcomes-two-new-languages-somali-and-zulu/)
-
-* Translator service has [text and document translation language support](language-support.md) for Somali and Zulu. The Somali language, spoken throughout Africa, has more than 21 million speakers and is in the Cushitic branch of the Afroasiatic language family. The Zulu language has 12 million speakers and is recognized as one of South Africa's 11 official languages.
-
-## February 2022
-
-### [Text and document translation support for Upper Sorbian](https://www.microsoft.com/translator/blog/2022/02/21/translator-celebrates-international-mother-language-day-by-adding-upper-sorbian/),
-
-* Translator service has [text and document translation language support](language-support.md) for Upper Sorbian. The Translator team has worked tirelessly to preserve indigenous and endangered languages around the world. Language data provided by the Upper Sorbian language community was instrumental in introducing this language to Translator.
-
-### [Text and document translation support for Inuinnaqtun and Romanized Inuktitut](https://www.microsoft.com/translator/blog/2022/02/01/introducing-inuinnaqtun-and-romanized-inuktitut/)
-
-* Translator service has [text and document translation language support](language-support.md) for Inuinnaqtun and Romanized Inuktitut. Both are indigenous languages that are essential and treasured foundations of Canadian culture and society.
-
-## January 2022
-
-### Custom Translator portal (v2.0) public preview
-
-* The [Custom Translator portal (v2.0)](https://portal.customtranslator.azure.ai/) is now in public preview and includes significant changes that makes it easier to create your custom translation systems.
-
-* To learn more, see our Custom Translator [documentation](custom-translator/overview.md) and try our [quickstart](custom-translator/quickstart.md) for step-by-step instructions.
-
-## October 2021
-
-### [Text and document support for more than 100 languages](https://www.microsoft.com/translator/blog/2021/10/11/translator-now-translates-more-than-100-languages/)
-
-* Translator service has added [text and document language support](language-support.md) for the following languages:
- * **Bashkir**. A Turkic language spoken by approximately 1.4 million native speakers. It has three regional language groups: Southern, Eastern, and Northwestern.
- * **Dhivehi**. Also known as Maldivian, it's an Indo-Aryan language primarily spoken in the island country of Maldives.
- * **Georgian**. A Kartvelian language that is the official language of Georgia. It has approximately 4 million speakers.
- * **Kyrgyz**. A Turkic language that is the official language of Kyrgyzstan.
- * **Macedonian (Cyrillic)**. An Eastern South Slavic language that is the official language of North Macedonia. It has approximately 2 million speakers.
- * **Mongolian (Traditional)**. Traditional Mongolian script is the first writing system created specifically for the Mongolian language. Mongolian is the official language of Mongolia.
- * **Tatar**. A Turkic language used by speakers in modern Tatarstan. It's closely related to Crimean Tatar and Siberian Tatar but each belongs to different subgroups.
- * **Tibetan**. It has nearly 6 million speakers and can be found in many Tibetan Buddhist publications.
- * **Turkmen**. The official language of Turkmenistan. It's similar to Turkish and Azerbaijani.
- * **Uyghur**. A Turkic language with nearly 15 million speakers. It's spoken primarily in Western China.
- * **Uzbek (Latin)**. A Turkic language that is the official language of Uzbekistan. It has 34 million native speakers.
-
-These additions bring the total number of languages supported in Translator to 103.
-
-## August 2021
-
-### [Text and document translation support for literary Chinese](https://www.microsoft.com/translator/blog/2021/08/25/microsoft-translator-releases-literary-chinese-translation/)
-
-* Azure Cognitive Services Translator has [text and document language support](language-support.md) for literary Chinese. Classical or literary Chinese is a traditional style of written Chinese used by traditional Chinese poets and in ancient Chinese poetry.
-
-## June 2021
-
-### [Document Translation client libraries for C#/.NET and Python](document-translation/how-to-guides/use-client-sdks.md) - now available in prerelease
-
-## May 2021
-
-### [Document Translation - now generally available](https://www.microsoft.com/translator/blog/2021/05/25/translate-full-documents-with-document-translation-%e2%80%95-now-in-general-availability/)
-
-* **Feature release**: Translator's [Document Translation](document-translation/overview.md) feature is generally available. Document Translation is designed to translate large files and batch documents with rich content while preserving original structure and format. You can also use custom glossaries and custom models built with [Custom Translator](custom-translator/overview.md) to ensure your documents are translated quickly and accurately.
-
-### [Translator service available in containers](https://www.microsoft.com/translator/blog/2021/05/25/translator-service-now-available-in-containers/)
-
-* **New release**: Translator service is available in containers as a gated preview. [Submit an online request](https://aka.ms/csgate-translator) and have it approved prior to getting started. Containers enable you to run several Translator service features in your own environment and are great for specific security and data governance requirements. *See*, [Install and run Translator containers (preview)](containers/translator-how-to-install-container.md)
-
-## February 2021
-
-### [Document Translation public preview](https://www.microsoft.com/translator/blog/2021/02/17/introducing-document-translation/)
-
-* **New release**: [Document Translation](document-translation/overview.md) is available as a preview feature of the Translator Service. Preview features are still in development and aren't meant for production use. They're made available on a "preview" basis so customers can get early access and provide feedback. Document Translation enables you to translate large documents and process batch files while still preserving the original structure and format. _See_ [Microsoft Translator blog: Introducing Document Translation](https://www.microsoft.com/translator/blog/2021/02/17/introducing-document-translation/)
-
-### [Text and document translation support for nine added languages](https://www.microsoft.com/translator/blog/2021/02/22/microsoft-translator-releases-nine-new-languages-for-international-mother-language-day-2021/)
-
-* Translator service has [text and document translation language support](language-support.md) for the following languages:
-
- * **Albanian**. An Indo-European language that forms its own branch of the language family, spoken by nearly 8 million people.
- * **Amharic**. An official language of Ethiopia spoken by approximately 32 million people. It's also the liturgical language of the Ethiopian Orthodox church.
- * **Armenian**. The official language of Armenia with 5-7 million speakers.
- * **Azerbaijani**. A Turkic language spoken by approximately 23 million people.
- * **Khmer**. The official language of Cambodia with approximately 16 million speakers.
- * **Lao**. The official language of Laos with 30 million native speakers.
- * **Myanmar**. The official language of Myanmar, spoken as a first language by approximately 33 million people.
- * **Nepali**. The official language of Nepal with approximately 16 million native speakers.
- * **Tigrinya**. A language spoken in Eritrea and northern Ethiopia with nearly 11 million speakers.
-
-## January 2021
-
-### [Text and document translation support for Inuktitut](https://www.microsoft.com/translator/blog/2021/01/27/inuktitut-is-now-available-in-microsoft-translator/)
-
-* Translator service has [text and document translation language support](language-support.md) for **Inuktitut**, one of the principal Inuit languages of Canada. Inuktitut is one of eight official aboriginal languages in the Northwest Territories.
-
-## November 2020
-
-### [Custom Translator V2 is generally available](https://www.microsoft.com/translator/blog/2020/11/12/microsoft-custom-translator-pushes-the-translation-quality-bar-closer-to-human-parity)
-
-* **New release**: The Custom Translator V2 upgrade is now generally available (GA). The V2 platform enables you to build custom models with all document types (training, testing, tuning, phrase dictionary, and sentence dictionary). _See_ [Microsoft Translator blog: Custom Translator pushes the translation quality bar closer to human parity](https://www.microsoft.com/translator/blog/2020/11/12/microsoft-custom-translator-pushes-the-translation-quality-bar-closer-to-human-parity).
-
-## October 2020
-
-### [Text and document translation support for Canadian French](https://www.microsoft.com/translator/blog/2020/10/20/cest-tiguidou-ca-translator-adds-canadian-french/)
-
-* Translator service has [text and document translation language support](language-support.md) for **Canadian French**. Canadian French and European French are similar to one another and are mutually understandable. However, there can be significant differences in vocabulary, grammar, writing, and pronunciation. Over 7 million Canadians (20 percent of the population) speak French as their first language.
-
-## September 2020
-
-### [Text and document translation support for Assamese and Axomiya](https://www.microsoft.com/translator/blog/2020/09/29/assamese-text-translation-is-here/)
-
-* Translator service has [text and document translation language support](language-support.md) for **Assamese**, also known as **Axomiya**. Assamese / Axomiya is primarily spoken in Eastern India by approximately 14 million people.
-
-## August 2020
-
-### [Introducing virtual networks and private links for translator](https://www.microsoft.com/translator/blog/2020/08/19/virtual-networks-and-private-links-for-translator-are-now-generally-available/)
-
-* **New release**: Virtual network capabilities and Azure private links for Translator are generally available (GA). Azure private links allow you to access Translator and your Azure hosted services over a private endpoint in your virtual network. You can use private endpoints for Translator to allow clients on a virtual network to securely access data over a private link. _See_ [Microsoft Translator blog: Virtual Networks and Private Links for Translator are generally available](https://www.microsoft.com/translator/blog/2020/08/19/virtual-networks-and-private-links-for-translator-are-now-generally-available/)
-
-### [Custom Translator upgrade to v2](https://www.microsoft.com/translator/blog/2020/08/05/custom-translator-v2-is-now-available/)
-
-* **New release**: Custom Translator V2 phase 1 is available. The newest version of Custom Translator rolls out in two phases to provide quicker translation and quality improvements, and allow you to keep your training data in the region of your choice. *See* [Microsoft Translator blog: Custom Translator: Introducing higher quality translations and regional data residency](https://www.microsoft.com/translator/blog/2020/08/05/custom-translator-v2-is-now-available/)
-
-### [Text and document translation support for two Kurdish regional languages](https://www.microsoft.com/translator/blog/2020/08/20/translator-adds-two-kurdish-dialects-for-text-translation/)
-
-* **Northern (Kurmanji) Kurdish** (15 million native speakers) and **Central (Sorani) Kurdish** (7 million native speakers). Most Kurdish texts are written in Kurmanji and Sorani.
-
-### [Text and document translation support for two Afghan languages](https://www.microsoft.com/translator/blog/2020/08/17/translator-adds-dari-and-pashto-text-translation/)
-
-* **Dari** (20 million native speakers) and **Pashto** (40 - 60 million speakers). The two official languages of Afghanistan.
-
-### [Text and document translation support for Odia](https://www.microsoft.com/translator/blog/2020/08/13/odia-language-text-translation-is-now-available-in-microsoft-translator/)
-
-* **Odia** is a classical language spoken by 35 million people in India and across the world. It joins **Bangla**, **Gujarati**, **Hindi**, **Kannada**, **Malayalam**, **Marathi**, **Punjabi**, **Tamil**, **Telugu**, **Urdu**, and **English** as the 12th language of India supported by Microsoft Translator.
cognitive-services Word Alignment https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Translator/word-alignment.md
- Title: Word alignment - Translator
-description: To receive alignment information, use the Translate method and include the optional includeAlignment parameter.
- Previously updated : 05/26/2020
-# How to receive word alignment information
-
-## Receiving word alignment information
-To receive alignment information, use the Translate method and include the optional includeAlignment parameter.
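
For example, a minimal sketch of such a request (modeled on the Translator quickstart samples elsewhere in this documentation; the key, region, and sentence are placeholder values) might look like this:

```python
import requests, json

# Placeholder values; replace with your own Translator key and resource region.
key = "<YOUR-TRANSLATOR-KEY>"
location = "<YOUR-RESOURCE-LOCATION>"
endpoint = "https://api.cognitive.microsofttranslator.com"

params = {
    'api-version': '3.0',
    'from': 'en',
    'to': 'de',
    'includeAlignment': 'true'
}

headers = {
    'Ocp-Apim-Subscription-Key': key,
    # Region header is required for multi-service or regional (not global) resources.
    'Ocp-Apim-Subscription-Region': location,
    'Content-type': 'application/json'
}

body = [{'text': 'Can I drive your car tomorrow?'}]

response = requests.post(endpoint + '/translate', params=params, headers=headers, json=body)
print(json.dumps(response.json(), indent=4, ensure_ascii=False))
```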
-
-## Alignment information format
-Alignment is returned as a string value of the following format for every word of the source. The information for each word is separated by a space, including for non-space-separated languages (scripts) like Chinese:
-
-[[SourceTextStartIndex]:[SourceTextEndIndex]-[TgtTextStartIndex]:[TgtTextEndIndex]] *
-
-Example alignment string: "0:0-7:10 1:2-11:20 3:4-0:3 3:4-4:6 5:5-21:21".
-
-In other words, the colon separates start and end index, the dash separates the languages, and space separates the words. One word may align with zero, one, or multiple words in the other language, and the aligned words may be non-contiguous. When no alignment information is available, the Alignment element will be empty. The method returns no error in that case.
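
If you want to work with the alignment programmatically, the following is a minimal helper sketch (not part of the service API) that splits an alignment string such as the example above into pairs of source and target character ranges:

```python
def parse_alignment(alignment: str):
    """Split an alignment string like '0:0-7:10 1:2-11:20' into index pairs."""
    pairs = []
    for word_mapping in alignment.split():
        source, target = word_mapping.split("-")
        src_start, src_end = (int(i) for i in source.split(":"))
        tgt_start, tgt_end = (int(i) for i in target.split(":"))
        pairs.append(((src_start, src_end), (tgt_start, tgt_end)))
    return pairs

print(parse_alignment("0:0-7:10 1:2-11:20 3:4-0:3 3:4-4:6 5:5-21:21"))
# [((0, 0), (7, 10)), ((1, 2), (11, 20)), ((3, 4), (0, 3)), ((3, 4), (4, 6)), ((5, 5), (21, 21))]
```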
-
-## Restrictions
-Alignment is only returned for a subset of the language pairs at this point:
-* From English to any other language
-* From any other language to English, except for Chinese Simplified, Chinese Traditional, and Latvian to English
-* From Japanese to Korean or from Korean to Japanese
-
-You won't receive alignment information if the sentence is a canned translation. Examples of canned translations are "This is a test", "I love you", and other high-frequency sentences.
-
-## Example
-
-Example JSON
-
-```json
-[
- {
- "translations": [
- {
- "text": "Kann ich morgen Ihr Auto fahren?",
- "to": "de",
- "alignment": {
- "proj": "0:2-0:3 4:4-5:7 6:10-25:30 12:15-16:18 17:19-20:23 21:28-9:14 29:29-31:31"
- }
- }
- ]
- }
-]
-```
cognitive-services Authentication https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/authentication.md
- Title: Authentication in Azure Cognitive Services
-description: "There are three ways to authenticate a request to an Azure Cognitive Services resource: a subscription key, a bearer token, or a multi-service subscription. In this article, you'll learn about each method, and how to make a request."
- Previously updated : 09/01/2022
-# Authenticate requests to Azure Cognitive Services
-
-Each request to an Azure Cognitive Service must include an authentication header. This header passes along a subscription key or authentication token, which is used to validate your subscription for a service or group of services. In this article, you'll learn about three ways to authenticate a request and the requirements for each.
-
-* Authenticate with a [single-service](#authenticate-with-a-single-service-subscription-key) or [multi-service](#authenticate-with-a-multi-service-subscription-key) subscription key
-* Authenticate with a [token](#authenticate-with-an-access-token)
-* Authenticate with [Azure Active Directory (AAD)](#authenticate-with-azure-active-directory)
-
-## Prerequisites
-
-Before you make a request, you need an Azure account and an Azure Cognitive Services subscription. If you already have an account, go ahead and skip to the next section. If you don't have an account, we have a guide to get you set up in minutes: [Create a Cognitive Services account for Azure](cognitive-services-apis-create-account.md).
-
-You can get your subscription key from the [Azure portal](cognitive-services-apis-create-account.md#get-the-keys-for-your-resource) after [creating your account](https://azure.microsoft.com/free/cognitive-services/).
-
-## Authentication headers
-
-Let's quickly review the authentication headers available for use with Azure Cognitive Services.
-
-| Header | Description |
-|--|-|
-| Ocp-Apim-Subscription-Key | Use this header to authenticate with a subscription key for a specific service or a multi-service subscription key. |
-| Ocp-Apim-Subscription-Region | This header is only required when using a multi-service subscription key with the [Translator service](./Translator/reference/v3-0-reference.md). Use this header to specify the subscription region. |
-| Authorization | Use this header if you are using an access token. The steps to perform a token exchange are detailed in the following sections. The value provided follows this format: `Bearer <TOKEN>`. |
-
-## Authenticate with a single-service subscription key
-
-The first option is to authenticate a request with a subscription key for a specific service, like Translator. The keys are available in the Azure portal for each resource that you've created. To use a subscription key to authenticate a request, it must be passed along as the `Ocp-Apim-Subscription-Key` header.
-
-These sample requests demonstrate how to use the `Ocp-Apim-Subscription-Key` header. Keep in mind that you'll need to include a valid subscription key when using these samples.
-
-This is a sample call to the Bing Web Search API:
-```cURL
-curl -X GET 'https://api.cognitive.microsoft.com/bing/v7.0/search?q=Welsch%20Pembroke%20Corgis' \
--H 'Ocp-Apim-Subscription-Key: YOUR_SUBSCRIPTION_KEY' | json_pp
-```
-
-This is a sample call to the Translator service:
-```cURL
-curl -X POST 'https://api.cognitive.microsofttranslator.com/translate?api-version=3.0&from=en&to=de' \
--H 'Ocp-Apim-Subscription-Key: YOUR_SUBSCRIPTION_KEY' \
--H 'Content-Type: application/json' \
---data-raw '[{ "text": "How much for the cup of coffee?" }]' | json_pp
-```
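
The same single-service key request can also be made from Python; this is a minimal sketch, with the key as a placeholder:

```python
import requests

key = "YOUR_SUBSCRIPTION_KEY"  # placeholder: the key for your Translator resource

response = requests.post(
    "https://api.cognitive.microsofttranslator.com/translate",
    params={"api-version": "3.0", "from": "en", "to": "de"},
    headers={
        "Ocp-Apim-Subscription-Key": key,
        "Content-Type": "application/json",
    },
    json=[{"text": "How much for the cup of coffee?"}],
)
print(response.json())
```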
-
-The following video demonstrates using a Cognitive Services key.
-
-## Authenticate with a multi-service subscription key
-
-> [!WARNING]
-> At this time, the multi-service key doesn't support: QnA Maker, Immersive Reader, Personalizer, and Anomaly Detector.
-
-This option also uses a subscription key to authenticate requests. The main difference is that a subscription key isn't tied to a specific service; rather, a single key can be used to authenticate requests for multiple Cognitive Services. See [Cognitive Services pricing](https://azure.microsoft.com/pricing/details/cognitive-services/) for information about regional availability, supported features, and pricing.
-
-The subscription key is provided in each request as the `Ocp-Apim-Subscription-Key` header.
-
-[![Multi-service subscription key demonstration for Cognitive Services](./media/index/single-key-demonstration-video.png)](https://www.youtube.com/watch?v=psHtA1p7Cas&feature=youtu.be)
-
-### Supported regions
-
-When using the multi-service subscription key to make a request to `api.cognitive.microsoft.com`, you must include the region in the URL. For example: `westus.api.cognitive.microsoft.com`.
-
-When using multi-service subscription key with the Translator service, you must specify the subscription region with the `Ocp-Apim-Subscription-Region` header.
-
-Multi-service authentication is supported in these regions:
-
-- `australiaeast`
-- `brazilsouth`
-- `canadacentral`
-- `centralindia`
-- `eastasia`
-- `eastus`
-- `japaneast`
-- `northeurope`
-- `southcentralus`
-- `southeastasia`
-- `uksouth`
-- `westcentralus`
-- `westeurope`
-- `westus`
-- `westus2`
-- `francecentral`
-- `koreacentral`
-- `northcentralus`
-- `southafricanorth`
-- `uaenorth`
-- `switzerlandnorth`
-
-### Sample requests
-
-This is a sample call to the Bing Web Search API:
-
-```cURL
-curl -X GET 'https://YOUR-REGION.api.cognitive.microsoft.com/bing/v7.0/search?q=Welsch%20Pembroke%20Corgis' \
--H 'Ocp-Apim-Subscription-Key: YOUR_SUBSCRIPTION_KEY' | json_pp
-```
-
-This is a sample call to the Translator service:
-
-```cURL
-curl -X POST 'https://api.cognitive.microsofttranslator.com/translate?api-version=3.0&from=en&to=de' \
--H 'Ocp-Apim-Subscription-Key: YOUR_SUBSCRIPTION_KEY' \
--H 'Ocp-Apim-Subscription-Region: YOUR_SUBSCRIPTION_REGION' \
--H 'Content-Type: application/json' \
---data-raw '[{ "text": "How much for the cup of coffee?" }]' | json_pp
-```
-
-## Authenticate with an access token
-
-Some Azure Cognitive Services accept, and in some cases require, an access token. Currently, these services support access tokens:
-
-* Text Translation API
-* Speech
-
->[!NOTE]
-> QnA Maker also uses the Authorization header, but requires an endpoint key. For more information, see [QnA Maker: Get answer from knowledge base](./qnamaker/quickstarts/get-answer-from-knowledge-base-using-url-tool.md).
-
->[!WARNING]
-> The services that support access tokens may change over time. Please check the API reference for a service before using this authentication method.
-
-Both single service and multi-service subscription keys can be exchanged for authentication tokens. Authentication tokens are valid for 10 minutes. They're stored in JSON Web Token (JWT) format and can be queried programmatically using the [JWT libraries](https://jwt.io/libraries).
-
-Access tokens are included in a request as the `Authorization` header. The token value provided must be preceded by `Bearer`, for example: `Bearer YOUR_AUTH_TOKEN`.
-
-### Sample requests
-
-Use this URL to exchange a subscription key for an access token: `https://YOUR-REGION.api.cognitive.microsoft.com/sts/v1.0/issueToken`.
-
-```cURL
-curl -v -X POST \
-"https://YOUR-REGION.api.cognitive.microsoft.com/sts/v1.0/issueToken" \
--H "Content-type: application/x-www-form-urlencoded" \--H "Content-length: 0" \--H "Ocp-Apim-Subscription-Key: YOUR_SUBSCRIPTION_KEY"
-```
-
-These multi-service regions support token exchange:
-
-- `australiaeast`
-- `brazilsouth`
-- `canadacentral`
-- `centralindia`
-- `eastasia`
-- `eastus`
-- `japaneast`
-- `northeurope`
-- `southcentralus`
-- `southeastasia`
-- `uksouth`
-- `westcentralus`
-- `westeurope`
-- `westus`
-- `westus2`
-
-After you get an access token, you'll need to pass it in each request as the `Authorization` header. This is a sample call to the Translator service:
-
-```cURL
-curl -X POST 'https://api.cognitive.microsofttranslator.com/translate?api-version=3.0&from=en&to=de' \
--H 'Authorization: Bearer YOUR_AUTH_TOKEN' \
--H 'Content-Type: application/json' \
---data-raw '[{ "text": "How much for the cup of coffee?" }]' | json_pp
-```
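
For reference, here's a minimal Python sketch of the same flow shown in the cURL samples above; the key and region values are placeholders:

```python
import requests

key = "YOUR_SUBSCRIPTION_KEY"   # placeholder
region = "YOUR-REGION"          # placeholder, for example "westus"

# Exchange the subscription key for an access token (valid for 10 minutes).
token = requests.post(
    f"https://{region}.api.cognitive.microsoft.com/sts/v1.0/issueToken",
    headers={"Ocp-Apim-Subscription-Key": key},
).text

# Pass the token as a Bearer credential on a Translator call.
response = requests.post(
    "https://api.cognitive.microsofttranslator.com/translate",
    params={"api-version": "3.0", "from": "en", "to": "de"},
    headers={"Authorization": f"Bearer {token}", "Content-Type": "application/json"},
    json=[{"text": "How much for the cup of coffee?"}],
)
print(response.json())
```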
--
-## Use Azure key vault to securely access credentials
-
-You can [use Azure Key Vault](./use-key-vault.md) to securely develop Cognitive Services applications. Key Vault enables you to store your authentication credentials in the cloud, and reduces the chances that secrets may be accidentally leaked, because you won't store security information in your application.
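
For example, a minimal sketch using the `azure-identity` and `azure-keyvault-secrets` packages might look like the following; the vault URL and secret name are placeholders for values you define in your own vault:

```python
from azure.identity import DefaultAzureCredential
from azure.keyvault.secrets import SecretClient

# Authenticate to Key Vault with Azure Active Directory, then read the stored Cognitive Services key.
credential = DefaultAzureCredential()
client = SecretClient(vault_url="https://<your-vault-name>.vault.azure.net", credential=credential)
cognitive_services_key = client.get_secret("<your-secret-name>").value
```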
-
-Authentication is done via Azure Active Directory. Authorization may be done via Azure role-based access control (Azure RBAC) or Key Vault access policy. Azure RBAC can be used both to manage the vaults and to access data stored in a vault, while a Key Vault access policy can only be used when attempting to access data stored in a vault.
-
-## See also
-
-* [What is Cognitive Services?](./what-are-cognitive-services.md)
-* [Cognitive Services pricing](https://azure.microsoft.com/pricing/details/cognitive-services/)
-* [Custom subdomains](cognitive-services-custom-subdomains.md)
cognitive-services Autoscale https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/autoscale.md
- Title: Use the autoscale feature
-description: Learn how to use the autoscale feature for Cognitive Services to dynamically adjust the rate limit of your service.
- Previously updated : 06/27/2022
-# Cognitive Services autoscale feature
-
-This article provides guidance for how customers can access higher rate limits on their Cognitive Service resources.
-
-## Overview
-
-Each Cognitive Services resource has a pre-configured static call rate (transactions per second), which limits the number of concurrent calls that customers can make to the backend service in a given time frame. The autoscale feature automatically increases or decreases a resource's rate limits based on near-real-time resource usage metrics and backend service capacity metrics.
-
-## Get started with the autoscale feature
-
-This feature is disabled by default for every new resource. Follow these instructions to enable it.
-
-#### [Azure portal](#tab/portal)
-
-Go to your resource's page in the Azure portal, and select the **Overview** tab on the left pane. Under the **Essentials** section, find the **Autoscale** line and select the link to view the **Autoscale Settings** pane and enable the feature.
--
-#### [Azure CLI](#tab/cli)
-
-Run this command after you've created your resource:
-
-```azurecli
-az resource update --namespace Microsoft.CognitiveServices --resource-type accounts --set properties.dynamicThrottlingEnabled=true --resource-group {resource-group-name} --name {resource-name}
-
-```
---
-## Frequently asked questions
-
-### Does enabling the autoscale feature mean my resource will never be throttled again?
-
-No, you may still get `429` errors if you exceed the rate limit. If your application triggers a spike and your resource reports a `429` response, autoscale checks the available capacity projection to see whether the current capacity can accommodate a rate limit increase and responds within five minutes.
-
-If the available capacity is enough for an increase, autoscale will gradually increase the rate limit cap of your resource. If you continue to call your resource at a high rate that results in more `429` throttling, your TPS rate will continue to increase over time. If this continues for one hour or more, you should reach the maximum rate (up to 1000 TPS) currently available at that time for that resource.
-
-If the available capacity is not enough for an increase, the autoscale feature will wait five minutes and check again.
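
Whether or not autoscale is enabled, it's good practice for client code to back off and retry when it receives a `429` response. The following is a minimal client-side sketch; it isn't specific to any one Cognitive Service, and the URL, headers, and payload are whatever your application already sends:

```python
import time
import requests

def call_with_retry(url, headers, payload, max_retries=5):
    """Retry on 429 responses, honoring the Retry-After header when the service returns one."""
    response = None
    for attempt in range(max_retries):
        response = requests.post(url, headers=headers, json=payload)
        if response.status_code != 429:
            return response
        # Fall back to exponential backoff if no Retry-After header is present.
        delay = int(response.headers.get("Retry-After", 2 ** attempt))
        time.sleep(delay)
    return response
```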
-
-### What if I need a higher default rate limit?
-
-By default, Cognitive Service resources have a default rate limit of 10 TPS. If you need a higher default TPS, submit a ticket by following the **New Support Request** link on your resource's page in the Azure portal. Remember to include a business justification in the request.
-
-### Will this feature increase my Azure spend?
-
-Cognitive Services pricing hasn't changed and can be accessed [here](https://azure.microsoft.com/pricing/details/cognitive-services/). We'll only bill for successful calls made to Cognitive Services APIs. However, increased call rate limits mean more transactions will be completed, and you may receive a higher bill.
-
-Be aware of potential errors and their consequences. If a bug in your client application causes it to call the service hundreds of times per second, that would likely lead to a much higher bill, whereas the cost would be much more limited under a fixed rate limit. Errors of this kind are your responsibility, so we highly recommend that you perform development and client update tests against a resource with a fixed rate limit prior to using the autoscale feature.
-
-### Can I disable this feature if I'd rather limit the rate than have unpredictable spending?
-
-Yes, you can disable the autoscale feature through Azure portal or CLI and return to your default call rate limit setting. If your resource was previously approved for a higher default TPS, it will go back to that rate. It can take up to five minutes for the changes to go into effect.
-
-### Which services support the autoscale feature?
-
-The autoscale feature is available for the following services:
-
-* [Cognitive Services multi-key](./cognitive-services-apis-create-account.md?tabs=multiservice%2canomaly-detector%2clanguage-service%2ccomputer-vision%2cwindows)
-* [Computer Vision](computer-vision/index.yml)
-* [Language](language-service/overview.md) (only available for sentiment analysis, key phrase extraction, named entity recognition, and text analytics for health)
-* [Anomaly Detector](./anomaly-detector/overview.md)
-* [Content Moderator](./content-moderator/overview.md)
-* [Custom Vision (Prediction)](./custom-vision-service/overview.md)
-* [Immersive Reader](../applied-ai-services/immersive-reader/overview.md)
-* [LUIS](./luis/what-is-luis.md)
-* [Metrics Advisor](../applied-ai-services/metrics-advisor/overview.md)
-* [Personalizer](./personalizer/what-is-personalizer.md)
-* [QnAMaker](./qnamaker/overview/overview.md)
-* [Form Recognizer](../applied-ai-services/form-recognizer/overview.md?tabs=v3-0)
-
-### Can I test this feature using a free subscription?
-
-No, the autoscale feature is not available to free tier subscriptions.
-
-## Next steps
-
-- [Plan and Manage costs for Azure Cognitive Services](./plan-manage-costs.md).
-- [Optimize your cloud investment with Azure Cost Management](../cost-management-billing/costs/cost-mgt-best-practices.md?WT.mc_id=costmanagementcontent_docsacmhorizontal_-inproduct-learn).
-- Learn about how to [prevent unexpected costs](../cost-management-billing/cost-management-billing-overview.md?WT.mc_id=costmanagementcontent_docsacmhorizontal_-inproduct-learn).
-- Take the [Cost Management](/training/paths/control-spending-manage-bills?WT.mc_id=costmanagementcontent_docsacmhorizontal_-inproduct-learn) guided learning course.
cognitive-services Cognitive Services For Big Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/big-data/cognitive-services-for-big-data.md
- Title: "Cognitive Services for big data"
-description: Learn how to leverage Azure Cognitive Services on large datasets using Python, Java, and Scala. With Cognitive Services for big data you can embed continuously improving, intelligent models directly into Apache Spark&trade; and SQL computations.
- Previously updated : 10/28/2021
-# Azure Cognitive Services for big data
-
-![Azure Cognitive Services for big data](media/cognitive-services-big-data-overview.svg)
-
-Azure Cognitive Services for big data lets users channel terabytes of data through Cognitive Services using [Apache Spark&trade;](/dotnet/spark/what-is-spark) and open source libraries for distributed machine learning workloads. With Cognitive Services for big data, it's easy to create large-scale intelligent applications with any datastore.
-
-Using the resources and libraries described in this article, you can embed continuously improving, intelligent models directly into Apache Spark&trade; and SQL computations. These tools liberate developers from low-level networking details, so that they can focus on creating smart, distributed applications.
-
-## Features and benefits
-
-Cognitive Services for big data can use resources from any [supported region](https://azure.microsoft.com/global-infrastructure/services/?products=cognitive-services), as well as [containerized Cognitive Services](../cognitive-services-container-support.md). Containers support low or no connectivity deployments with ultra-low latency responses. Containerized Cognitive Services can be run locally, directly on the worker nodes of your Spark cluster, or on an external orchestrator like Kubernetes.
-
-## Supported services
-
-[Cognitive Services](../index.yml), accessed through APIs and SDKs, help developers build intelligent applications without having AI or data science skills. With Cognitive Services you can make your applications see, hear, speak, and understand. To use Cognitive Services, your application must send data to the service over the network. Once received, the service sends an intelligent response in return. The following Cognitive Services resources are available for big data workloads:
-
-### Vision
-
-|Service Name|Service Description|
-|:--|:|
-|[Computer Vision](../computer-vision/index.yml "Computer Vision")| The Computer Vision service provides you with access to advanced algorithms for processing images and returning information. |
-|[Face](../computer-vision/index-identity.yml "Face")| The Face service provides access to advanced face algorithms, enabling face attribute detection and recognition. |
-
-### Speech
-
-|Service Name|Service Description|
-|:--|:|
-|[Speech service](../speech-service/index.yml "Speech service")|The Speech service provides access to features like speech recognition, speech synthesis, speech translation, and speaker verification and identification.|
-
-### Decision
-
-|Service Name|Service Description|
-|:--|:|
-|[Anomaly Detector](../anomaly-detector/index.yml "Anomaly Detector") | The Anomaly Detector service allows you to monitor and detect abnormalities in your time series data.|
-
-### Language
-
-|Service Name|Service Description|
-|:--|:|
-|[Language service](../language-service/index.yml "Language service")| The Language service provides natural language processing over raw text for sentiment analysis, key-phrase extraction, and language detection.|
-
-### Search
-
-|Service Name|Service Description|
-|:--|:|
-|[Bing Image Search](/azure/cognitive-services/bing-image-search "Bing Image Search")|The Bing Image Search service returns a display of images determined to be relevant to the user's query.|
-
-## Supported programming languages for Cognitive Services for big data
-
-Cognitive Services for big data is built on Apache Spark, a distributed computing framework that supports Java, Scala, Python, R, and many other languages. See [SynapseML](https://microsoft.github.io/SynapseML) for documentation, samples, and blog posts.
-
-The following languages are currently supported.
-
-### Python
-
-We provide a PySpark API for current and legacy libraries:
-
-* [`synapseml.cognitive`](https://mmlspark.blob.core.windows.net/docs/0.10.0/pyspark/synapse.ml.cognitive.html)
-
-* [`mmlspark.cognitive`](https://mmlspark.blob.core.windows.net/docs/0.18.1/pyspark/modules.html)
-
-For more information, see the [Python Developer API](https://mmlspark.blob.core.windows.net/docs/1.0.0-rc1/pyspark/mmlspark.cognitive.html). For usage examples, see the [Python Samples](samples-python.md).
-
-### Scala and Java
-
-We provide a Scala and Java-based Spark API for current and legacy libraries:
-
-* [`com.microsoft.synapseml.cognitive`](https://mmlspark.blob.core.windows.net/docs/0.10.0/scala/com/microsoft/azure/synapse/ml/cognitive/https://docsupdatetracker.net/index.html)
-
-* [`com.microsoft.ml.spark.cognitive`](https://mmlspark.blob.core.windows.net/docs/0.18.1/scala/https://docsupdatetracker.net/index.html#com.microsoft.ml.spark.cognitive.package)
-
-For more information, see the [Scala Developer API](https://mmlspark.blob.core.windows.net/docs/1.0.0-rc1/scal).
-
-## Supported platforms and connectors
-
-Big data scenarios require Apache Spark. There are several Apache Spark platforms that support Cognitive Services for big data.
-
-### Azure Databricks
-
-[Azure Databricks](/azure/databricks/scenarios/what-is-azure-databricks) is an Apache Spark-based analytics platform optimized for the Microsoft Azure cloud services platform. It provides one-click setup, streamlined work-flows, and an interactive workspace that supports collaboration between data scientists, data engineers, and business analysts.
-
-### Azure Synapse Analytics
-
-[Azure Synapse Analytics](/azure/databricks/data/data-sources/azure/synapse-analytics) is an enterprise data warehouse that uses massively parallel processing. With Synapse Analytics, you can quickly run complex queries across petabytes of data. Azure Synapse Analytics provides managed Spark Pools to run Spark Jobs with an intuitive Jupyter Notebook Interface.
-
-### Azure Kubernetes Service
-
-[Azure Kubernetes Service (AKS)](../../aks/index.yml) orchestrates Docker Containers and distributed applications at massive scales. AKS is a managed Kubernetes offering that simplifies using Kubernetes in Azure. Kubernetes can enable fine-grained control of Cognitive Service scale, latency, and networking. However, we recommend using Azure Databricks or Azure Synapse Analytics if you're unfamiliar with Apache Spark.
-
-### Data Connectors
-
-Once you have a Spark Cluster, the next step is connecting to your data. Apache Spark has a broad collection of database connectors. These connectors allow applications to work with large datasets no matter where they're stored. For more information about supported databases and connectors, see the [list of supported datasources for Azure Databricks](/azure/databricks/data/data-sources/).
-
-## Concepts
-
-### Spark
-
-[Apache Spark&trade;](http://spark.apache.org/) is a unified analytics engine for large-scale data processing. Its parallel processing framework boosts performance of big data and analytic applications. Spark can operate as both a batch and stream processing system, without changing core application code.
-
-The basis of Spark is the DataFrame: a tabular collection of data distributed across the Apache Spark worker nodes. A Spark DataFrame is like a table in a relational database or a data frame in R/Python, but with limitless scale. DataFrames can be constructed from many sources such as: structured data files, tables in Hive, or external databases. Once your data is in a Spark DataFrame, you can:
-
- - Do SQL-style computations such as join and filter tables.
- - Apply functions to large datasets using MapReduce style parallelism.
- - Apply Distributed Machine Learning using Microsoft Machine Learning for Apache Spark.
- - Use Cognitive Services for big data to enrich your data with ready-to-use intelligent services.
-
-### Microsoft Machine Learning for Apache Spark (MMLSpark)
-
-[Microsoft Machine Learning for Apache Spark](https://mmlspark.blob.core.windows.net/website/https://docsupdatetracker.net/index.html#install) (MMLSpark) is an open-source, distributed machine learning library (ML) built on Apache Spark. Cognitive Services for big data is included in this package. Additionally, MMLSpark contains several other ML tools for Apache Spark, such as LightGBM, Vowpal Wabbit, OpenCV, LIME, and more. With MMLSpark, you can build powerful predictive and analytical models from any Spark datasource.
-
-### HTTP on Spark
-
-Cognitive Services for big data is an example of how we can integrate intelligent web services with big data. Web services power many applications across the globe and most services communicate through the Hypertext Transfer Protocol (HTTP). To work with *arbitrary* web services at large scales, we provide HTTP on Spark. With HTTP on Spark, you can pass terabytes of data through any web service. Under the hood, we use this technology to power Cognitive Services for big data.
-
-## Developer samples
-
-- [Recipe: Predictive Maintenance](recipes/anomaly-detection.md)
-- [Recipe: Intelligent Art Exploration](recipes/art-explorer.md)
-
-## Blog posts
-
-- [Learn more about how Cognitive Services work on Apache Spark&trade;](https://azure.microsoft.com/blog/dear-spark-developers-welcome-to-azure-cognitive-services/)
-- [Saving Snow Leopards with Deep Learning and Computer Vision on Spark](/archive/blogs/machinelearning/saving-snow-leopards-with-deep-learning-and-computer-vision-on-spark)
-- [Microsoft Research Podcast: MMLSpark, empowering AI for Good with Mark Hamilton](https://blubrry.com/microsoftresearch/49485070/092-mmlspark-empowering-ai-for-good-with-mark-hamilton/)
-- [Academic Whitepaper: Large Scale Intelligent Microservices](https://arxiv.org/abs/2009.08044)
-
-## Webinars and videos
-
-- [The Azure Cognitive Services on Spark: Clusters with Embedded Intelligent Services](https://databricks.com/session/the-azure-cognitive-services-on-spark-clusters-with-embedded-intelligent-services)
-- [Spark Summit Keynote: Scalable AI for Good](https://databricks.com/session_eu19/scalable-ai-for-good)
-- [Cognitive Services for big data in Azure Cosmos DB](https://medius.studios.ms/Embed/Video-nc/B19-BRK3004?latestplayer=true&l=2571.208093)
-- [Lightning Talk on Large Scale Intelligent Microservices](https://www.youtube.com/watch?v=BtuhmdIy9Fk&t=6s)
-
-## Next steps
-
-- [Getting Started with Cognitive Services for big data](getting-started.md)
-- [Simple Python Examples](samples-python.md)
-- [Simple Scala Examples](samples-scala.md)
cognitive-services Getting Started https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/big-data/getting-started.md
- Title: "Get started with Cognitive Services for big data"
-description: Set up your SynapseML or MMLSpark pipeline with Cognitive Services in Azure Databricks and run a sample.
- Previously updated : 08/16/2022
-# Getting started
-
-Setting up your environment is the first step to building a pipeline for your data. After your environment is ready, running a sample is quick and easy.
-
-In this article, you'll perform these steps to get started:
-
-> [!div class="checklist"]
-> * [Create a Cognitive Services resource](#create-a-cognitive-services-resource)
-> * [Create an Apache Spark cluster](#create-an-apache-spark-cluster)
-> * [Try a sample](#try-a-sample)
-
-## Create a Cognitive Services resource
-
-To work with big data in Cognitive Services, first create a Cognitive Services resource for your workflow. There are two main types of Cognitive Services resources: cloud services and containerized services.
-
-### Cloud services
-
-Cloud-based Cognitive Services are intelligent algorithms hosted in Azure. These services are ready to use without training; you just need an internet connection. You can [create a Cognitive Service in the Azure portal](../cognitive-services-apis-create-account.md?tabs=multiservice%2Cwindows) or with the [Azure CLI](../cognitive-services-apis-create-account-cli.md?tabs=windows).
-
-### Containerized services (optional)
-
-If your application or workload uses large datasets, requires private networking, or can't contact the cloud, communicating with cloud services might be impossible. In this situation, containerized Cognitive Services have these benefits:
-
-* **Low Connectivity**: You can deploy containerized Cognitive Services in any computing environment, both on-cloud and off. If your application can't contact the cloud, consider deploying containerized Cognitive Services alongside your application.
-
-* **Low Latency**: Because containerized services don't require the round-trip communication to/from the cloud, responses are returned with much lower latencies.
-
-* **Privacy and Data Security**: You can deploy containerized services into private networks, so that sensitive data doesn't leave the network.
-
-* **High Scalability**: Containerized services don't have "rate limits" and run on user-managed computers. So, you can scale Cognitive Services without end to handle much larger workloads.
-
-Follow [this guide](../cognitive-services-container-support.md?tabs=luis) to create a containerized Cognitive Service.
-
-## Create an Apache Spark cluster
-
-[Apache Spark&trade;](http://spark.apache.org/) is a distributed computing framework designed for big-data data processing. Users can work with Apache Spark in Azure with services like Azure Databricks, Azure Synapse Analytics, HDInsight, and Azure Kubernetes Services. To use the big data Cognitive Services, you must first create a cluster. If you already have a Spark cluster, feel free to try an example.
-
-### Azure Databricks
-
-Azure Databricks is an Apache Spark-based analytics platform with a one-click setup, streamlined workflows, and an interactive workspace. It's often used to collaborate between data scientists, engineers, and business analysts. To use the big data Cognitive Services on Azure Databricks, follow these steps:
-
-1. [Create an Azure Databricks workspace](/azure/databricks/scenarios/quickstart-create-databricks-workspace-portal#create-an-azure-databricks-workspace)
-
-1. [Create a Spark cluster in Databricks](/azure/databricks/scenarios/quickstart-create-databricks-workspace-portal#create-a-spark-cluster-in-databricks)
-
-1. Install the SynapseML open-source library (or MMLSpark library if you're supporting a legacy application):
-
- * Create a new library in your databricks workspace
- <img src="media/create-library.png" alt="Create library" width="50%"/>
-
- * For SynapseML: input the following maven coordinates
- Coordinates: `com.microsoft.azure:synapseml_2.12:0.10.0`
- Repository: default
-
- * For MMLSpark (legacy): input the following maven coordinates
- Coordinates: `com.microsoft.ml.spark:mmlspark_2.11:1.0.0-rc3`
- Repository: `https://mmlspark.azureedge.net/maven`
- <img src="media/library-coordinates.png" alt="Library Coordinates" width="50%"/>
-
- * Install the library onto a cluster
- <img src="media/install-library.png" alt="Install Library on Cluster" width="50%"/>
-
-### Azure Synapse Analytics (optional)
-
-Optionally, you can use Synapse Analytics to create a spark cluster. Azure Synapse Analytics brings together enterprise data warehousing and big data analytics. It gives you the freedom to query data on your terms, using either serverless on-demand or provisioned resources at scale. To get started using Azure Synapse Analytics, follow these steps:
-
-1. [Create a Synapse Workspace (preview)](../../synapse-analytics/quickstart-create-workspace.md).
-
-1. [Create a new serverless Apache Spark pool (preview) using the Azure portal](../../synapse-analytics/quickstart-create-apache-spark-pool-portal.md).
-
-In Azure Synapse Analytics, the Cognitive Services for big data are installed by default.
-
-### Azure Kubernetes Service
-
-If you're using containerized Cognitive Services, one popular option for deploying Spark alongside containers is the Azure Kubernetes Service.
-
-To get started on Azure Kubernetes Service, follow these steps:
-
-1. [Deploy an Azure Kubernetes Service (AKS) cluster using the Azure portal](../../aks/learn/quick-kubernetes-deploy-portal.md)
-
-1. [Install the Apache Spark 2.4.0 helm chart](https://hub.helm.sh/charts/microsoft/spark)
-
-1. [Install a cognitive service container using Helm](../computer-vision/deploy-computer-vision-on-premises.md)
-
-## Try a sample
-
-After you set up your Spark cluster and environment, you can run a short sample. This sample assumes Azure Databricks and the `mmlspark.cognitive` package. For an example using `synapseml.cognitive`, see [Add search to AI-enriched data from Apache Spark using SynapseML](../../search/search-synapseml-cognitive-services.md).
-
-First, you can create a notebook in Azure Databricks. For other Spark cluster providers, use their notebooks or Spark Submit.
-
-1. Create a new Databricks notebook by choosing **New Notebook** from the **Azure Databricks** menu.
-
- <img src="media/new-notebook.png" alt="Create a new notebook" width="50%"/>
-
-1. In the **Create Notebook** dialog, enter a name, select **Python** as the language, and select the Spark cluster that you created earlier.
-
- <img src="media/databricks-notebook-details.jpg" alt="New notebook details" width="50%"/>
-
- Select **Create**.
-
-1. Paste this code snippet into your new notebook.
-
- ```python
- from mmlspark.cognitive import *
- from pyspark.sql.functions import col
-
- # Add your region and subscription key from the Language service (or a general Cognitive Service key)
- # If using a multi-region Cognitive Services resource, delete the placeholder text: service_region = ""
- service_key = "ADD-SUBSCRIPTION-KEY-HERE"
- service_region = "ADD-SERVICE-REGION-HERE"
-
- df = spark.createDataFrame([
- ("I am so happy today, its sunny!", "en-US"),
- ("I am frustrated by this rush hour traffic", "en-US"),
- ("The cognitive services on spark aint bad", "en-US"),
- ], ["text", "language"])
-
- sentiment = (TextSentiment()
- .setTextCol("text")
- .setLocation(service_region)
- .setSubscriptionKey(service_key)
- .setOutputCol("sentiment")
- .setErrorCol("error")
- .setLanguageCol("language"))
-
- results = sentiment.transform(df)
-
- # Show the results in a table
- display(results.select("text", col("sentiment")[0].getItem("score").alias("sentiment")))
- ```
-
-1. Get your region and subscription key from the **Keys and Endpoint** page of your Language resource in the Azure portal.
-
-1. Replace the region and subscription key placeholders in your Databricks notebook code with values that are valid for your resource.
-
-1. Select the play, or triangle, symbol in the upper right of your notebook cell to run the sample. Optionally, select **Run All** at the top of your notebook to run all cells. The results appear in a table below the cell.
-
-### Expected results
-
-| text | sentiment |
-|:|:|
-| I am so happy today, its sunny! | 0.978959 |
-| I am frustrated by this rush hour traffic | 0.0237956 |
-| The cognitive services on spark aint bad | 0.888896 |
-
-## Next steps
--- [Short Python Examples](samples-python.md)-- [Short Scala Examples](samples-scala.md)-- [Recipe: Predictive Maintenance](recipes/anomaly-detection.md)-- [Recipe: Intelligent Art Exploration](recipes/art-explorer.md)
cognitive-services Anomaly Detection https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/big-data/recipes/anomaly-detection.md
- Title: "Recipe: Predictive maintenance with the Cognitive Services for big data"-
-description: This quickstart shows how to perform distributed anomaly detection with the Cognitive Services for big data
------ Previously updated : 07/06/2020----
-# Recipe: Predictive maintenance with the Cognitive Services for big data
-
-This recipe shows how you can use Azure Synapse Analytics and Cognitive Services on Apache Spark for predictive maintenance of IoT devices. We'll follow along with the [Azure Cosmos DB and Synapse Link](https://github.com/Azure-Samples/cosmosdb-synapse-link-samples) sample. To keep things simple, in this recipe we'll read the data straight from a CSV file rather than getting streamed data through Azure Cosmos DB and Synapse Link. We strongly encourage you to look over the Synapse Link sample.
-
-## Hypothetical scenario
-
-The hypothetical scenario is a Power Plant, where IoT devices are monitoring [steam turbines](https://en.wikipedia.org/wiki/Steam_turbine). The IoTSignals collection has Revolutions per minute (RPM) and Megawatts (MW) data for each turbine. Signals from steam turbines are being analyzed and anomalous signals are detected.
-
-Outliers can appear in the data at random intervals. In those situations, RPM values go up and MW output goes down to protect the circuit. The idea is to see both signals varying at the same time, but in different directions.
-
-## Prerequisites
-
-* An Azure subscription - [Create one for free](https://azure.microsoft.com/free/cognitive-services)
-* [Azure Synapse workspace](../../../synapse-analytics/quickstart-create-workspace.md) configured with a [serverless Apache Spark pool](../../../synapse-analytics/quickstart-create-apache-spark-pool-portal.md)
-
-## Setup
-
-### Create an Anomaly Detector resource
-
-Azure Cognitive Services are represented by Azure resources that you subscribe to. Create a resource for Anomaly Detector using the [Azure portal](../../cognitive-services-apis-create-account.md) or [Azure CLI](../../cognitive-services-apis-create-account-cli.md). You can also:
-- View an existing resource in the [Azure portal](https://portal.azure.com/).
-
-Make note of the endpoint and the key for this resource; you'll need them in this guide.
-
-## Enter your service keys
-
-Let's start by adding your key and location.
-
-```python
-service_key = None # Paste your anomaly detector key here
-location = None # Paste your anomaly detector location here
-
-assert (service_key is not None)
-assert (location is not None)
-```
-
-## Read data into a DataFrame
-
-Next, let's read the IoTSignals file into a DataFrame. Open a new notebook in your Synapse workspace and create a DataFrame from the file.
-
-```python
-df_signals = spark.read.csv("wasbs://publicwasb@mmlspark.blob.core.windows.net/iot/IoTSignals.csv", header=True, inferSchema=True)
-```
-
-### Run anomaly detection using Cognitive Services on Spark
-
-The goal is to find instances where the signals from the IoT devices were outputting anomalous values so that we can see when something is going wrong and do predictive maintenance. To do that, let's use Anomaly Detector on Spark:
-
-```python
-from pyspark.sql.functions import col, struct
-from mmlspark.cognitive import SimpleDetectAnomalies
-from mmlspark.core.spark import FluentAPI
-
-detector = (SimpleDetectAnomalies()
- .setSubscriptionKey(service_key)
- .setLocation(location)
- .setOutputCol("anomalies")
- .setGroupbyCol("grouping")
- .setSensitivity(95)
- .setGranularity("secondly"))
-
-df_anomaly = (df_signals
- .where(col("unitSymbol") == 'RPM')
- .withColumn("timestamp", col("dateTime").cast("string"))
- .withColumn("value", col("measureValue").cast("double"))
- .withColumn("grouping", struct("deviceId"))
- .mlTransform(detector)).cache()
-
-df_anomaly.createOrReplaceTempView('df_anomaly')
-```
-
-Let's take a look at the data:
-
-```python
-df_anomaly.select("timestamp","value","deviceId","anomalies.isAnomaly").show(3)
-```
-
-This cell should yield a result that looks like:
-
-| timestamp | value | deviceId | isAnomaly |
-|:--|--:|:--|:|
-| 2020-05-01 18:33:51 | 3174 | dev-7 | False |
-| 2020-05-01 18:33:52 | 2976 | dev-7 | False |
-| 2020-05-01 18:33:53 | 2714 | dev-7 | False |
--
-## Visualize anomalies for one of the devices
-
-IoTSignals.csv has signals from multiple IoT devices. We'll focus on a specific device and visualize anomalous outputs from the device.
-
-```python
-df_anomaly_single_device = spark.sql("""
-select
- timestamp,
- measureValue,
- anomalies.expectedValue,
- anomalies.expectedValue + anomalies.upperMargin as expectedUpperValue,
- anomalies.expectedValue - anomalies.lowerMargin as expectedLowerValue,
- case when anomalies.isAnomaly=true then 1 else 0 end as isAnomaly
-from
- df_anomaly
-where deviceid = 'dev-1' and timestamp < '2020-04-29'
-order by timestamp
-limit 200""")
-```
-
-Now that we have created a dataframe that represents the anomalies for a particular device, we can visualize these anomalies:
-
-```python
-import matplotlib.pyplot as plt
-from pyspark.sql.functions import col
-
-adf = df_anomaly_single_device.toPandas()
-adf_subset = df_anomaly_single_device.where(col("isAnomaly") == 1).toPandas()
-
-plt.figure(figsize=(23,8))
-plt.plot(adf['timestamp'],adf['expectedUpperValue'], color='darkred', linestyle='solid', linewidth=0.25, label='UpperMargin')
-plt.plot(adf['timestamp'],adf['expectedValue'], color='darkgreen', linestyle='solid', linewidth=2, label='Expected Value')
-plt.plot(adf['timestamp'],adf['measureValue'], color='royalblue', linestyle='dotted', linewidth=2, label='Actual')
-plt.plot(adf['timestamp'],adf['expectedLowerValue'], color='black', linestyle='solid', linewidth=0.25, label='Lower Margin')
-plt.plot(adf_subset['timestamp'],adf_subset['measureValue'], 'ro', label = 'Anomaly')
-plt.legend()
-plt.title('RPM Anomalies with Confidence Intervals')
-plt.show()
-```
-
-If successful, your output will look like this:
-
-![Anomaly Detector Plot](../media/anomaly-output.png)
-
-## Next steps
-
-Learn how to do predictive maintenance at scale with Azure Cognitive Services, Azure Synapse Analytics, and Azure Cosmos DB. For more information, see the full sample on [GitHub](https://github.com/Azure-Samples/cosmosdb-synapse-link-samples).
cognitive-services Art Explorer https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/big-data/recipes/art-explorer.md
- Title: "Recipe: Intelligent Art Exploration with the Cognitive Services for big data"-
-description: This recipe shows how to create a searchable art database using Azure Search and MMLSpark.
------ Previously updated : 01/10/2022----
-# Recipe: Intelligent Art Exploration with the Cognitive Services for big data
-
-In this example, we'll use the Cognitive Services for big data to add intelligent annotations to the Open Access collection from the Metropolitan Museum of Art (MET). This will enable us to create an intelligent search engine using Azure Search even without manual annotations.
-
-## Prerequisites
-
-* You must have a subscription key for Computer Vision and Cognitive Search. Follow the instructions in [Create a Cognitive Services account](../../cognitive-services-apis-create-account.md) to subscribe to Computer Vision and get your key.
- > [!NOTE]
- > For pricing information, see [Azure Cognitive Search](https://azure.microsoft.com/services/search/#pricing).
-
-## Import Libraries
-
-Run the following command to import libraries for this recipe.
-
-```python
-import os, sys, time, json, requests
-from pyspark.ml import Transformer, Estimator, Pipeline
-from pyspark.ml.feature import SQLTransformer
-from pyspark.sql.functions import lit, udf, col, split
-```
-
-## Set up Subscription Keys
-
-Run the following command to set up variables for service keys. Insert your subscription keys for Computer Vision and Azure Cognitive Search.
-
-```python
-VISION_API_KEY = 'INSERT_COMPUTER_VISION_SUBSCRIPTION_KEY'
-AZURE_SEARCH_KEY = 'INSERT_AZURE_COGNITIVE_SEARCH_SUBSCRIPTION_KEY'
-search_service = "mmlspark-azure-search"
-search_index = "test"
-```
-
-## Read the Data
-
-Run the following command to load data from the MET's Open Access collection.
-
-```python
-data = spark.read\
- .format("csv")\
- .option("header", True)\
- .load("wasbs://publicwasb@mmlspark.blob.core.windows.net/metartworks_sample.csv")\
- .withColumn("searchAction", lit("upload"))\
- .withColumn("Neighbors", split(col("Neighbors"), ",").cast("array<string>"))\
- .withColumn("Tags", split(col("Tags"), ",").cast("array<string>"))\
- .limit(25)
-```
-
-<a name="AnalyzeImages"></a>
-
-## Analyze the Images
-
-Run the following command to use Computer Vision on the MET's Open Access artworks collection. As a result, you'll get visual features from the artworks.
-
-```python
-from mmlspark.cognitive import AnalyzeImage
-from mmlspark.stages import SelectColumns
-
-#define pipeline
-describeImage = (AnalyzeImage()
- .setSubscriptionKey(VISION_API_KEY)
- .setLocation("eastus")
- .setImageUrlCol("PrimaryImageUrl")
- .setOutputCol("RawImageDescription")
- .setErrorCol("Errors")
- .setVisualFeatures(["Categories", "Tags", "Description", "Faces", "ImageType", "Color", "Adult"])
- .setConcurrency(5))
-
-df2 = describeImage.transform(data)\
- .select("*", "RawImageDescription.*").drop("Errors", "RawImageDescription")
-```
-
-<a name="CreateSearchIndex"></a>
-
-## Create the Search Index
-
-Run the following command to write the results to Azure Search to create a search engine of the artworks with enriched metadata from Computer Vision.
-
-```python
-from mmlspark.cognitive import *
-df2.writeToAzureSearch(
- subscriptionKey=AZURE_SEARCH_KEY,
- actionCol="searchAction",
- serviceName=search_service,
- indexName=search_index,
- keyCol="ObjectID"
-)
-```
-
-## Query the Search Index
-
-Run the following command to query the Azure Search index.
-
-```python
-url = 'https://{}.search.windows.net/indexes/{}/docs/search?api-version=2019-05-06'.format(search_service, search_index)
-requests.post(url, json={"search": "Glass"}, headers = {"api-key": AZURE_SEARCH_KEY}).json()
-```
-
-## Next steps
-
-Learn how to use [Cognitive Services for big data for Anomaly Detection](anomaly-detection.md).
cognitive-services Samples Python https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/big-data/samples-python.md
- Title: "Cognitive Services for big data - Python Samples"
-description: Try Cognitive Services samples in Python for Azure Databricks to run your MMLSpark pipeline for big data.
----- Previously updated : 11/01/2022----
-# Python Samples for Cognitive Services for big data
-
-The following snippets are ready to run and will help get you started with using Cognitive Services on Spark with Python.
-
-The samples in this article use these Cognitive Services:
-
-- Language service - get the sentiment (or mood) of a set of sentences.
-- Computer Vision - get the tags (one-word descriptions) associated with a set of images.
-- Speech to text - transcribe audio files to extract text-based transcripts.
-- Anomaly Detector - detect anomalies within time series data.
-
-## Prerequisites
-
-1. Follow the steps in [Getting started](getting-started.md) to set up your Azure Databricks and Cognitive Services environment. This tutorial shows you how to install MMLSpark and how to create your Spark cluster in Databricks.
-1. After you create a new notebook in Azure Databricks, copy the **Shared code** below and paste into a new cell in your notebook.
-1. Choose a service sample below, and copy and paste it into a second new cell in your notebook.
-1. Replace any of the service subscription key placeholders with your own key.
-1. Choose the run button (triangle icon) in the upper right corner of the cell, then select **Run Cell**.
-1. View results in a table below the cell.
-
-## Shared code
-
-To get started, we'll need to add this code to the project:
-
-```python
-from mmlspark.cognitive import *
-
-# A general Cognitive Services key for the Language service and Computer Vision (or use separate keys that belong to each service)
-service_key = "ADD_YOUR_SUBSCRIPION_KEY"
-# An Anomaly Dectector subscription key
-anomaly_key = "ADD_YOUR_SUBSCRIPION_KEY"
-
-# Validate the key
-assert service_key != "ADD_YOUR_SUBSCRIPION_KEY"
-```
-
-## Language service sample
-
-The [Language service](../language-service/index.yml) provides several algorithms for extracting intelligent insights from text. For example, we can find the sentiment of given input text. The service returns a score between 0.0 and 1.0, where low scores indicate negative sentiment and high scores indicate positive sentiment. This sample uses three simple sentences and returns the sentiment for each.
-
-```python
-from pyspark.sql.functions import col
-
-# Create a dataframe that's tied to its column names
-df = spark.createDataFrame([
- ("I am so happy today, its sunny!", "en-US"),
- ("I am frustrated by this rush hour traffic", "en-US"),
- ("The cognitive services on spark aint bad", "en-US"),
-], ["text", "language"])
-
-# Run the Language service with options
-sentiment = (TextSentiment()
- .setTextCol("text")
- .setLocation("eastus")
- .setSubscriptionKey(service_key)
- .setOutputCol("sentiment")
- .setErrorCol("error")
- .setLanguageCol("language"))
-
-# Show the results of your text query in a table format
-display(sentiment.transform(df).select("text", col("sentiment")[0].getItem("sentiment").alias("sentiment")))
-```
-
-### Expected result
-
-| text | sentiment |
-|:|:|
-| I am so happy today, its sunny! | positive |
-| I am frustrated by this rush hour traffic | negative |
-| The cognitive services on spark aint bad | positive |
-
-## Computer Vision sample
-
-[Computer Vision](../computer-vision/index.yml) analyzes images to identify structure such as faces, objects, and natural-language descriptions. In this sample, we tag a list of images. Tags are one-word descriptions of things in the image like recognizable objects, people, scenery, and actions.
-
-```python
-
-# Create a dataframe with the image URLs
-df = spark.createDataFrame([
- ("https://raw.githubusercontent.com/Azure-Samples/cognitive-services-sample-data-files/master/ComputerVision/Images/objects.jpg", ),
- ("https://raw.githubusercontent.com/Azure-Samples/cognitive-services-sample-data-files/master/ComputerVision/Images/dog.jpg", ),
- ("https://raw.githubusercontent.com/Azure-Samples/cognitive-services-sample-data-files/master/ComputerVision/Images/house.jpg", )
- ], ["image", ])
-
-# Run the Computer Vision service. Analyze Image extracts information from/about the images.
-analysis = (AnalyzeImage()
- .setLocation("eastus")
- .setSubscriptionKey(service_key)
- .setVisualFeatures(["Categories","Color","Description","Faces","Objects","Tags"])
- .setOutputCol("analysis_results")
- .setImageUrlCol("image")
- .setErrorCol("error"))
-
-# Show the results of what you wanted to pull out of the images.
-display(analysis.transform(df).select("image", "analysis_results.description.tags"))
-```
-
-### Expected result
-
-| image | tags |
-|:|:--|
-| https://raw.githubusercontent.com/Azure-Samples/cognitive-services-sample-data-files/master/ComputerVision/Images/objects.jpg | ['skating' 'person' 'man' 'outdoor' 'riding' 'sport' 'skateboard' 'young' 'board' 'shirt' 'air' 'black' 'park' 'boy' 'side' 'jumping' 'trick' 'ramp' 'doing' 'flying']
-| https://raw.githubusercontent.com/Azure-Samples/cognitive-services-sample-data-files/master/ComputerVision/Images/dog.jpg | ['dog' 'outdoor' 'fence' 'wooden' 'small' 'brown' 'building' 'sitting' 'front' 'bench' 'standing' 'table' 'walking' 'board' 'beach' 'white' 'holding' 'bridge' 'track']
-| https://raw.githubusercontent.com/Azure-Samples/cognitive-services-sample-data-files/master/ComputerVision/Images/house.jpg | ['outdoor' 'grass' 'house' 'building' 'old' 'home' 'front' 'small' 'church' 'stone' 'large' 'grazing' 'yard' 'green' 'sitting' 'leading' 'sheep' 'brick' 'bench' 'street' 'white' 'country' 'clock' 'sign' 'parked' 'field' 'standing' 'garden' 'water' 'red' 'horse' 'man' 'tall' 'fire' 'group']
--
-## Speech to text sample
-The [Speech to text](../speech-service/index-speech-to-text.yml) service converts streams or files of spoken audio to text. In this sample, we transcribe two audio files. The first file is easy to understand, and the second is more challenging.
-
-```python
-
-# Create a dataframe with our audio URLs, tied to the column called "url"
-df = spark.createDataFrame([("https://mmlspark.blob.core.windows.net/datasets/Speech/audio2.wav",),
- ("https://mmlspark.blob.core.windows.net/datasets/Speech/audio3.mp3",)
- ], ["url"])
-
-# Run the Speech to text service to translate the audio into text
-speech_to_text = (SpeechToTextSDK()
- .setSubscriptionKey(service_key)
- .setLocation("eastus")
- .setOutputCol("text")
- .setAudioDataCol("url")
- .setLanguage("en-US")
- .setProfanity("Masked"))
-
-# Show the results of the translation
-display(speech_to_text.transform(df).select("url", "text.DisplayText"))
-```
-
-### Expected result
-
-| url | DisplayText |
-|:|:|
-| https://mmlspark.blob.core.windows.net/datasets/Speech/audio2.wav | Custom speech provides tools that allow you to visually inspect the recognition quality of a model by comparing audio data with the corresponding recognition result from the custom speech portal. You can playback uploaded audio and determine if the provided recognition result is correct. This tool allows you to quickly inspect quality of Microsoft's baseline speech to text model or a trained custom model without having to transcribe any audio data. |
-| https://mmlspark.blob.core.windows.net/datasets/Speech/audio3.mp3 | Add a gentleman Sir thinking visual check. |
-| https://mmlspark.blob.core.windows.net/datasets/Speech/audio3.mp3 | I hear me. |
-| https://mmlspark.blob.core.windows.net/datasets/Speech/audio3.mp3 | I like the reassurance for radio that I can hear it as well. |
--
-## Anomaly Detector sample
-
-[Anomaly Detector](../anomaly-detector/index.yml) is great for detecting irregularities in your time series data. In this sample, we use the service to find anomalies in the entire time series.
-
-```python
-from pyspark.sql.functions import lit
-
-# Create a dataframe with the point data that Anomaly Detector requires
-df = spark.createDataFrame([
- ("1972-01-01T00:00:00Z", 826.0),
- ("1972-02-01T00:00:00Z", 799.0),
- ("1972-03-01T00:00:00Z", 890.0),
- ("1972-04-01T00:00:00Z", 900.0),
- ("1972-05-01T00:00:00Z", 766.0),
- ("1972-06-01T00:00:00Z", 805.0),
- ("1972-07-01T00:00:00Z", 821.0),
- ("1972-08-01T00:00:00Z", 20000.0),
- ("1972-09-01T00:00:00Z", 883.0),
- ("1972-10-01T00:00:00Z", 898.0),
- ("1972-11-01T00:00:00Z", 957.0),
- ("1972-12-01T00:00:00Z", 924.0),
- ("1973-01-01T00:00:00Z", 881.0),
- ("1973-02-01T00:00:00Z", 837.0),
- ("1973-03-01T00:00:00Z", 9000.0)
-], ["timestamp", "value"]).withColumn("group", lit("series1"))
-
-# Run the Anomaly Detector service to look for irregular data
-anomaly_detector = (SimpleDetectAnomalies()
- .setSubscriptionKey(anomaly_key)
- .setLocation("eastus")
- .setTimestampCol("timestamp")
- .setValueCol("value")
- .setOutputCol("anomalies")
- .setGroupbyCol("group")
- .setGranularity("monthly"))
-
-# Show the full results of the analysis with the anomalies marked as "True"
-display(anomaly_detector.transform(df).select("timestamp", "value", "anomalies.isAnomaly"))
-```
-
-### Expected result
-
-| timestamp | value | isAnomaly |
-|:|--:|:|
-| 1972-01-01T00:00:00Z | 826 | False |
-| 1972-02-01T00:00:00Z | 799 | False |
-| 1972-03-01T00:00:00Z | 890 | False |
-| 1972-04-01T00:00:00Z | 900 | False |
-| 1972-05-01T00:00:00Z | 766 | False |
-| 1972-06-01T00:00:00Z | 805 | False |
-| 1972-07-01T00:00:00Z | 821 | False |
-| 1972-08-01T00:00:00Z | 20000 | True |
-| 1972-09-01T00:00:00Z | 883 | False |
-| 1972-10-01T00:00:00Z | 898 | False |
-| 1972-11-01T00:00:00Z | 957 | False |
-| 1972-12-01T00:00:00Z | 924 | False |
-| 1973-01-01T00:00:00Z | 881 | False |
-| 1973-02-01T00:00:00Z | 837 | False |
-| 1973-03-01T00:00:00Z | 9000 | True |
-
-## Arbitrary web APIs
-
-With HTTP on Spark, any web service can be used in your big data pipeline. In this example, we use the [World Bank API](http://api.worldbank.org/v2/country/) to get information about various countries/regions around the world.
-
-```python
-from requests import Request
-from mmlspark.io.http import HTTPTransformer, http_udf
-from pyspark.sql.functions import udf, col
-
-# Use any requests from the Python requests library
-def world_bank_request(country):
- return Request("GET", "http://api.worldbank.org/v2/country/{}?format=json".format(country))
-
-# Create a dataframe that specifies which countries/regions we want data on
-df = (spark.createDataFrame([("br",),("usa",)], ["country"])
- .withColumn("request", http_udf(world_bank_request)(col("country"))))
-
-# Much faster for big data because of the concurrency :)
-client = (HTTPTransformer()
- .setConcurrency(3)
- .setInputCol("request")
- .setOutputCol("response"))
-
-# Get the body of the response
-def get_response_body(resp):
- return resp.entity.content.decode()
-
-# Show the details of the country data returned
-display(client.transform(df).select("country", udf(get_response_body)(col("response")).alias("response")))
-```
-
-### Expected result
-
-| country | response |
-|:-|:-|
-| br | `[{"page":1,"pages":1,"per_page":"50","total":1},[{"id":"BRA","iso2Code":"BR","name":"Brazil","region":{"id":"LCN","iso2code":"ZJ","value":"Latin America & Caribbean "},"adminregion":{"id":"LAC","iso2code":"XJ","value":"Latin America & Caribbean (excluding high income)"},"incomeLevel":{"id":"UMC","iso2code":"XT","value":"Upper middle income"},"lendingType":{"id":"IBD","iso2code":"XF","value":"IBRD"},"capitalCity":"Brasilia","longitude":"-47.9292","latitude":"-15.7801"}]]` |
-| usa | `[{"page":1,"pages":1,"per_page":"50","total":1},[{"id":"USA","iso2Code":"US","name":"United States","region":{"id":"NAC","iso2code":"XU","value":"North America"},"adminregion":{"id":"","iso2code":"","value":""},"incomeLevel":{"id":"HIC","iso2code":"XD","value":"High income"},"lendingType":{"id":"LNX","iso2code":"XX","value":"Not classified"},"capitalCity":"Washington D.C.","longitude":"-77.032","latitude":"38.8895"}]]` |
-
-## See also
-
-* [Recipe: Anomaly Detection](./recipes/anomaly-detection.md)
-* [Recipe: Art Explorer](./recipes/art-explorer.md)
cognitive-services Samples Scala https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/big-data/samples-scala.md
- Title: "Cognitive Services for big data Scala Samples"
-description: Use Cognitive Services for Azure Databricks to run your MMLSpark pipeline for big data.
----- Previously updated : 11/01/2022---
-# Quick Examples
-
-The following snippets are ready to run and will help get you started with using Cognitive Services on Spark. The samples below are in Scala.
-
-The samples use these Cognitive Services:
-
-- Language service - get the sentiment (or mood) of a set of sentences.
-- Computer Vision - get the tags (one-word descriptions) associated with a set of images.
-- Speech to text - transcribe audio files to extract text-based transcripts.
-- Anomaly Detector - detect anomalies within time series data.
-
-## Prerequisites
-
-1. Follow the steps in [Getting started](getting-started.md) to set up your Azure Databricks and Cognitive Services environment. This tutorial shows you how to install MMLSpark and how to create your Spark cluster in Databricks.
-1. After you create a new notebook in Azure Databricks, copy the **Shared code** below and paste into a new cell in your notebook.
-1. Choose a service sample below, and copy and paste it into a second new cell in your notebook.
-1. Replace any of the service subscription key placeholders with your own key.
-1. Choose the run button (triangle icon) in the upper right corner of the cell, then select **Run Cell**.
-1. View results in a table below the cell.
-
-## Shared code
-To get started, add this code to your project:
-
-```scala
-import com.microsoft.ml.spark.cognitive._
-import spark.implicits._
-
-val serviceKey = "ADD-YOUR-SUBSCRIPTION-KEY"
-val location = "eastus"
-```
-
-## Language service
-
-The [Language service](../language-service/index.yml) provides several algorithms for extracting intelligent insights from text. For example, we can find the sentiment of given input text. The service returns a score between `0.0` and `1.0`, where low scores indicate negative sentiment and high scores indicate positive sentiment. The sample below uses three simple sentences and returns the sentiment score for each.
-
-```scala
-import org.apache.spark.sql.functions.col
-
-val df = Seq(
- ("I am so happy today, its sunny!", "en-US"),
- ("I am frustrated by this rush hour traffic", "en-US"),
- ("The cognitive services on spark aint bad", "en-US")
-).toDF("text", "language")
-
-val sentiment = new TextSentiment()
- .setTextCol("text")
- .setLocation(location)
- .setSubscriptionKey(serviceKey)
- .setOutputCol("sentiment")
- .setErrorCol("error")
- .setLanguageCol("language")
-
-display(sentiment.transform(df).select(col("text"), col("sentiment")(0).getItem("score").alias("sentiment")))
-```
-
-### Expected result
-
-| text | sentiment |
-|:|:|
-| I am so happy today, its sunny! | 0.9789592027664185 |
-| I am frustrated by this rush hour traffic | 0.023795604705810547 |
-| The cognitive services on spark aint bad | 0.8888956308364868 |
-
-## Computer Vision
-
-[Computer Vision](../computer-vision/index.yml) analyzes images to identify structure such as faces, objects, and natural-language descriptions.
- In this sample, we tag a list of images. Tags are one-word descriptions of things in the image like recognizable objects, people, scenery, and actions.
-
-```scala
-// Create a dataframe with the image URLs
-val df = Seq(
- ("https://raw.githubusercontent.com/Azure-Samples/cognitive-services-sample-data-files/master/ComputerVision/Images/objects.jpg"),
- ("https://raw.githubusercontent.com/Azure-Samples/cognitive-services-sample-data-files/master/ComputerVision/Images/dog.jpg"),
- ("https://raw.githubusercontent.com/Azure-Samples/cognitive-services-sample-data-files/master/ComputerVision/Images/house.jpg")
-).toDF("image")
-
-// Run the Computer Vision service. Analyze Image extracts information from/about the images.
-val analysis = new AnalyzeImage()
-  .setLocation(location)
-  .setSubscriptionKey(serviceKey)
-  .setVisualFeatures(Seq("Categories","Color","Description","Faces","Objects","Tags"))
-  .setOutputCol("results")
-  .setImageUrlCol("image")
-  .setErrorCol("error")
-
-// Show the results of what you wanted to pull out of the images.
-display(analysis.transform(df).select(col("image"), col("results").getItem("tags").getItem("name")).alias("results")))
-
-// Uncomment for full results with all visual feature requests
-//display(analysis.transform(df).select(col("image"), col("results")))
-```
-
-### Expected result
-
-| image | tags |
-|:|:--|
-| https://raw.githubusercontent.com/Azure-Samples/cognitive-services-sample-data-files/master/ComputerVision/Images/objects.jpg | ['skating' 'person' 'man' 'outdoor' 'riding' 'sport' 'skateboard' 'young' 'board' 'shirt' 'air' 'black' 'park' 'boy' 'side' 'jumping' 'trick' 'ramp' 'doing' 'flying']
-| https://raw.githubusercontent.com/Azure-Samples/cognitive-services-sample-data-files/master/ComputerVision/Images/dog.jpg | ['dog' 'outdoor' 'fence' 'wooden' 'small' 'brown' 'building' 'sitting' 'front' 'bench' 'standing' 'table' 'walking' 'board' 'beach' 'white' 'holding' 'bridge' 'track']
-| https://raw.githubusercontent.com/Azure-Samples/cognitive-services-sample-data-files/master/ComputerVision/Images/house.jpg | ['outdoor' 'grass' 'house' 'building' 'old' 'home' 'front' 'small' 'church' 'stone' 'large' 'grazing' 'yard' 'green' 'sitting' 'leading' 'sheep' 'brick' 'bench' 'street' 'white' 'country' 'clock' 'sign' 'parked' 'field' 'standing' 'garden' 'water' 'red' 'horse' 'man' 'tall' 'fire' 'group']
-
-## Speech to text
-
-The [Speech to text](../speech-service/index-speech-to-text.yml) service converts streams or files of spoken audio to text. In this sample, we transcribe two audio files. The first file is easy to understand, and the second is more challenging.
-
-```scala
-import org.apache.spark.sql.functions.col
-
-// Create a dataframe with audio URLs, tied to the column called "url"
-val df = Seq(("https://mmlspark.blob.core.windows.net/datasets/Speech/audio2.wav"),
- ("https://mmlspark.blob.core.windows.net/datasets/Speech/audio3.mp3")).toDF("url")
-
-// Run the Speech to text service to translate the audio into text
-val speechToText = new SpeechToTextSDK()
- .setSubscriptionKey(serviceKey)
- .setLocation("eastus")
- .setOutputCol("text")
- .setAudioDataCol("url")
- .setLanguage("en-US")
- .setProfanity("Masked")
-
-// Show the results of the translation
-display(speechToText.transform(df).select(col("url"), col("text").getItem("DisplayText")))
-```
-
-### Expected result
-
-| url | DisplayText |
-|:|:|
-| https://mmlspark.blob.core.windows.net/datasets/Speech/audio2.wav | Custom speech provides tools that allow you to visually inspect the recognition quality of a model by comparing audio data with the corresponding recognition result from the custom speech portal. You can playback uploaded audio and determine if the provided recognition result is correct. This tool allows you to quickly inspect quality of Microsoft's baseline speech to text model or a trained custom model without having to transcribe any audio data. |
-| https://mmlspark.blob.core.windows.net/datasets/Speech/audio3.mp3 | Add a gentleman Sir thinking visual check. |
-| https://mmlspark.blob.core.windows.net/datasets/Speech/audio3.mp3 | I hear me. |
-| https://mmlspark.blob.core.windows.net/datasets/Speech/audio3.mp3 | I like the reassurance for radio that I can hear it as well. |
-
-## Anomaly Detector
-
-[Anomaly Detector](../anomaly-detector/index.yml) is great for detecting irregularities in your time series data. In this sample, we use the service to find anomalies in the entire time series.
-
-```scala
-import org.apache.spark.sql.functions.{col, lit}
-
-val anomalyKey = "84a2c303cc7e49f6a44d692c27fb9967"
-
-val df = Seq(
- ("1972-01-01T00:00:00Z", 826.0),
- ("1972-02-01T00:00:00Z", 799.0),
- ("1972-03-01T00:00:00Z", 890.0),
- ("1972-04-01T00:00:00Z", 900.0),
- ("1972-05-01T00:00:00Z", 766.0),
- ("1972-06-01T00:00:00Z", 805.0),
- ("1972-07-01T00:00:00Z", 821.0),
- ("1972-08-01T00:00:00Z", 20000.0),
- ("1972-09-01T00:00:00Z", 883.0),
- ("1972-10-01T00:00:00Z", 898.0),
- ("1972-11-01T00:00:00Z", 957.0),
- ("1972-12-01T00:00:00Z", 924.0),
- ("1973-01-01T00:00:00Z", 881.0),
- ("1973-02-01T00:00:00Z", 837.0),
- ("1973-03-01T00:00:00Z", 9000.0)
- ).toDF("timestamp", "value").withColumn("group", lit("series1"))
-
-// Run the Anomaly Detector service to look for irregular data
-val anomalyDetector = new SimpleDetectAnomalies()
- .setSubscriptionKey(anomalyKey)
- .setLocation("eastus")
- .setTimestampCol("timestamp")
- .setValueCol("value")
- .setOutputCol("anomalies")
- .setGroupbyCol("group")
- .setGranularity("monthly")
-
-// Show the full results of the analysis with the anomalies marked as "True"
-display(anomalyDetector.transform(df).select("timestamp", "value", "anomalies.isAnomaly"))
-```
-
-### Expected result
-
-| timestamp | value | isAnomaly |
-|:|--:|:|
-| 1972-01-01T00:00:00Z | 826 | False |
-| 1972-02-01T00:00:00Z | 799 | False |
-| 1972-03-01T00:00:00Z | 890 | False |
-| 1972-04-01T00:00:00Z | 900 | False |
-| 1972-05-01T00:00:00Z | 766 | False |
-| 1972-06-01T00:00:00Z | 805 | False |
-| 1972-07-01T00:00:00Z | 821 | False |
-| 1972-08-01T00:00:00Z | 20000 | True |
-| 1972-09-01T00:00:00Z | 883 | False |
-| 1972-10-01T00:00:00Z | 898 | False |
-| 1972-11-01T00:00:00Z | 957 | False |
-| 1972-12-01T00:00:00Z | 924 | False |
-| 1973-01-01T00:00:00Z | 881 | False |
-| 1973-02-01T00:00:00Z | 837 | False |
-| 1973-03-01T00:00:00Z | 9000 | True |
cognitive-services Local Categories https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/bing-local-business-search/local-categories.md
# Search categories for the Bing Local Business Search API The Bing Local Business Search API enables you to search for local business entities in a variety of categories, with priority given to results close a user's location. You can include these searches in searches along with the `localCircularView` and `localMapView` [parameters](specify-geographic-search.md).
X-MSEdge-Ref: Ref A: 68AFB51807C6485CAB8AAF20E232EFFF Ref B: CO1EDGE0108 Ref C:
## Next steps - [Geographic search boundaries](specify-geographic-search.md) - [Query and response](local-search-query-response.md)-- [Quickstart in C#](quickstarts/local-quickstart.md)
+- [Quickstart in C#](quickstarts/local-quickstart.md)
cognitive-services Local Search Query Response https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/bing-local-business-search/local-search-query-response.md
# Sending and using Bing Local Business Search API queries and responses You can get local results from the Bing Local Business Search API by sending a search query to its endpoint and including the `Ocp-Apim-Subscription-Key` header, which is required. Along with available [headers](local-search-reference.md#headers) and [parameters](local-search-reference.md#query-parameters), Searches can be customized by specifying [geographic boundaries](specify-geographic-search.md) for the area to be searched, and the [categories](local-search-query-response.md) of places returned.
Expires: Tue, 16 Oct 2018 16:25:15 GMT
- [Local Business Search quickstart](quickstarts/local-quickstart.md) - [Local Business Search Java quickstart](quickstarts/local-search-java-quickstart.md) - [Local Business Search Node quickstart](quickstarts/local-search-node-quickstart.md)-- [Local Business Search Python quickstart](quickstarts/local-search-python-quickstart.md)
+- [Local Business Search Python quickstart](quickstarts/local-search-python-quickstart.md)
cognitive-services Local Search Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/bing-local-business-search/local-search-reference.md
# Bing Local Business Search API v7 reference The Local Business Search API sends a search query to Bing to get results that include restaurants, hotels, or other local businesses. For places, the query can specify the name of the local business or a category (for example, restaurants near me). Entity results include persons, places, or things. Place in this context is business entities, states, countries/regions, etc.
The following are the possible error code and sub-error code values.
- [Local Business Search quickstart](quickstarts/local-quickstart.md) - [Local Business Search Java quickstart](quickstarts/local-search-java-quickstart.md) - [Local Business Search Node quickstart](quickstarts/local-search-node-quickstart.md)-- [Local Business Search Python quickstart](quickstarts/local-search-python-quickstart.md)
+- [Local Business Search Python quickstart](quickstarts/local-search-python-quickstart.md)
cognitive-services Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/bing-local-business-search/overview.md
# What is Bing Local Business Search? The Bing Local Business Search API is a RESTful service that enables your applications to find information about local businesses based on search queries. For example, `q=<business-name> in Redmond, Washington`, or `q=Italian restaurants near me`. ## Features
Call the Bing Local Business Search API from any programming language that can m
- [Query and response](local-search-query-response.md) - [Local Business Search quickstart](quickstarts/local-quickstart.md) - [Local Business Search API reference](local-search-reference.md)-- [Use and display requirements](../bing-web-search/use-display-requirements.md)
+- [Use and display requirements](../bing-web-search/use-display-requirements.md)
cognitive-services Local Quickstart https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/bing-local-business-search/quickstarts/local-quickstart.md
# Quickstart: Send a query to the Bing Local Business Search API in C# Use this quickstart to learn how to send requests to the Bing Local Business Search API, which is an Azure Cognitive Service. Although this simple application is written in C#, the API is a RESTful Web service compatible with any programming language capable of making HTTP requests and parsing JSON.
cognitive-services Local Search Java Quickstart https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/bing-local-business-search/quickstarts/local-search-java-quickstart.md
# Quickstart: Send a query to the Bing Local Business Search API using Java Use this quickstart to learn how to send requests to the Bing Local Business Search API, which is an Azure Cognitive Service. Although this simple application is written in Java, the API is a RESTful Web service compatible with any programming language capable of making HTTP requests and parsing JSON.
cognitive-services Local Search Node Quickstart https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/bing-local-business-search/quickstarts/local-search-node-quickstart.md
# Quickstart: Send a query to the Bing Local Business Search API using Node.js Use this quickstart to learn how to send requests to the Bing Local Business Search API, which is an Azure Cognitive Service. Although this simple application is written in Node.js, the API is a RESTful Web service compatible with any programming language capable of making HTTP requests and parsing JSON.
cognitive-services Local Search Python Quickstart https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/bing-local-business-search/quickstarts/local-search-python-quickstart.md
# Quickstart: Send a query to the Bing Local Business Search API in Python Use this quickstart to learn how to send requests to the Bing Local Business Search API, which is an Azure Cognitive Service. Although this simple application is written in Python, the API is a RESTful Web service compatible with any programming language capable of making HTTP requests and parsing JSON.
cognitive-services Specify Geographic Search https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/bing-local-business-search/specify-geographic-search.md
# Use geographic boundaries to filter results from the Bing Local Business Search API The Bing Local Business Search API enables you to set boundaries on the specific geographic area you'd like to search by using the `localCircularView` or `localMapView` query parameters. Be sure to use only one parameter in your queries.
https://api.cognitive.microsoft.com/bing/v7.0/localbusinesses/search?q=restauran
- [Local Business Search Java Quickstart](quickstarts/local-search-java-quickstart.md) - [Local Business Search C# Quickstart](quickstarts/local-quickstart.md) - [Local Business Search Node Quickstart](quickstarts/local-search-node-quickstart.md)-- [Local Business Search Python quickstart](quickstarts/local-search-python-quickstart.md)
+- [Local Business Search Python quickstart](quickstarts/local-search-python-quickstart.md)
cognitive-services Bing Insights Usage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/bing-visual-search/bing-insights-usage.md
Last updated 04/03/2019
# Examples of Bing insights usage This article contains examples of how Bing might use and display image insights on Bing.com.
To get started with your first request, see the quickstarts:
* [Node.js](quickstarts/nodejs.md)
-* [Python](quickstarts/python.md)
+* [Python](quickstarts/python.md)
cognitive-services Sending Queries https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/bing-visual-search/concepts/sending-queries.md
# Sending search queries to the Bing Visual Search API This article describes the parameters and attributes of requests sent to the Bing Visual Search API, as well as the response object.
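For orientation, here's a minimal sketch (not taken from this article) of the request shape the quickstarts build up to: the image is uploaded as multipart form data to the Visual Search endpoint, with your subscription key in the `Ocp-Apim-Subscription-Key` header. The local file name is a placeholder.

```python
import requests

# Minimal sketch: upload a local image and print the insight tags from the response.
BASE_URI = "https://api.cognitive.microsoft.com/bing/v7.0/images/visualsearch"
SUBSCRIPTION_KEY = "<your-subscription-key>"
image_path = "my-image.jpg"  # placeholder path to a local image

headers = {"Ocp-Apim-Subscription-Key": SUBSCRIPTION_KEY}
with open(image_path, "rb") as image_file:
    files = {"image": ("myfile", image_file)}
    response = requests.post(BASE_URI, headers=headers, files=files)

response.raise_for_status()
for tag in response.json().get("tags", []):
    print(repr(tag.get("displayName")), "-", len(tag.get("actions", [])), "actions")
```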
If the image contains a recognized entity such as a culturally well-known/popula
## See also - [What is the Bing Visual Search API?](../overview.md)-- [Tutorial: Create a Visual Search single-page web app](../tutorial-bing-visual-search-single-page-app.md)
+- [Tutorial: Create a Visual Search single-page web app](../tutorial-bing-visual-search-single-page-app.md)
cognitive-services Default Insights Tag https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/bing-visual-search/default-insights-tag.md
Last updated 04/04/2019
# Default insights tag The default insights tag is the one with the `displayName` field set to an empty string. The following example shows the possible list of default insights (actions). The list of actions the response includes depends on the image. And for each action, the list of properties may vary by image, so check if the property exists before trying to use it.
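As a hedged illustration (not from the original article), the sketch below shows one way to select the default tag by its empty `displayName` and to check for optional action properties before using them; the exact fields available on each action vary by image.

```python
# Minimal sketch: select the default insights tag and read its actions defensively.
# `response_json` is assumed to be a parsed Bing Visual Search response body.
def get_default_actions(response_json):
    for tag in response_json.get("tags", []):
        if tag.get("displayName") == "":   # the default insights tag
            return tag.get("actions", [])
    return []

def print_action_summary(response_json):
    for action in get_default_actions(response_json):
        # Not every action carries a data payload, so check before using it.
        values = action.get("data", {}).get("value", [])
        print(action.get("actionType"), "-", len(values), "items")
```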
To get started quickly with your first request, see the quickstarts:
* [Node.js](quickstarts/nodejs.md)
-* [Python](quickstarts/python.md).
+* [Python](quickstarts/python.md).
cognitive-services Language Support https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/bing-visual-search/language-support.md
Last updated 09/25/2018
# Language and region support for the Bing Visual Search API Bing Visual Search API supports more than three dozen countries/regions, many with more than one language. Each request should include the user's country/region and language of choice. Knowing the user's market helps Bing return appropriate results. If you don't specify a country/region and language, Bing makes a best effort to determine the user's country/region and language. Because the results may contain links to Bing, knowing the country/region and language may provide a preferred localized Bing user experience if the user clicks the Bing links.
Alternatively, you can specify the country/region using the `cc` query parameter
|Türkiye|Turkish|tr-TR| |United Kingdom|English|en-GB| |United States|English|en-US|
-|United States|Spanish|es-US|
+|United States|Spanish|es-US|
cognitive-services Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/bing-visual-search/overview.md
Last updated 01/24/2023
# What is the Bing Visual Search API? The Bing Visual Search API returns insights for an image. You can either upload an image or provide a URL to one. Insights are visually similar images, shopping sources, webpages that include the image, and more. Insights returned by the Bing Visual Search API are similar to ones shown on Bing.com/images.
The Bing Visual Search API is a RESTful web service, making it easy to call from
## Next steps
-First, try the Bing Visual Search API [interactive demo](https://azure.microsoft.com/services/cognitive-services/bing-visual-search/).
+First, try the Bing Visual Search API [interactive demo](https://azure.microsoft.com/services/cognitive-services/Bing-visual-search/).
The demo shows how you can quickly customize a search query and scour the web for images. To get started quickly with your first request, see the quickstarts:
To get started quickly with your first request, see the quickstarts:
* The [Bing Search API use and display requirements](../bing-web-search/use-display-requirements.md) specify acceptable uses of the content and information gained through the Bing search APIs.
-* Visit the [Bing Search API hub page](../bing-web-search/overview.md) to explore the other available APIs.
+* Visit the [Bing Search API hub page](../bing-web-search/overview.md) to explore the other available APIs.
cognitive-services Client Libraries https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/bing-visual-search/quickstarts/client-libraries.md
# Quickstart: Use the Bing Visual Search client library ::: zone pivot="programming-language-csharp"
cognitive-services Csharp https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/bing-visual-search/quickstarts/csharp.md
# Quickstart: Get image insights using the Bing Visual Search REST API and C# This quickstart demonstrates how to upload an image to the Bing Visual Search API and view the insights that it returns.
This quickstart demonstrates how to upload an image to the Bing Visual Search AP
using System.Collections.Generic; ```
-2. Add variables for your subscription key, endpoint, and path to the image you want to upload. For the `uriBase` value, you can use the global endpoint in the following code, or use the [custom subdomain](../../../cognitive-services/cognitive-services-custom-subdomains.md) endpoint displayed in the Azure portal for your resource.
+2. Add variables for your subscription key, endpoint, and path to the image you want to upload. For the `uriBase` value, you can use the global endpoint in the following code, or use the [custom subdomain](../../../ai-services/cognitive-services-custom-subdomains.md) endpoint displayed in the Azure portal for your resource.
```csharp const string accessKey = "<my_subscription_key>";
cognitive-services Go https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/bing-visual-search/quickstarts/go.md
# Quickstart: Get image insights using the Bing Visual Search REST API and Go Use this quickstart to make your first call to the Bing Visual Search API using the Go programming language. A POST request uploads an image to the API endpoint. The results include URLs and descriptive information about images similar to the uploaded image.
The following code declares the main function and assigns the required variables
1. Confirm that the endpoint is correct and replace the `token` value with a valid subscription key from your Azure account. 2. For `batchNumber`, assign a GUID, which is required for the leading and trailing boundaries of the POST data. 3. For `fileName`, assign the image file to use for the POST.
-4. For `endpoint`, you can use the global endpoint in the following code, or use the [custom subdomain](../../../cognitive-services/cognitive-services-custom-subdomains.md) endpoint displayed in the Azure portal for your resource.
+4. For `endpoint`, you can use the global endpoint in the following code, or use the [custom subdomain](../../../ai-services/cognitive-services-custom-subdomains.md) endpoint displayed in the Azure portal for your resource.
```go func main() {
The results identify images similar to the image contained in the POST body. The
> [!div class="nextstepaction"] > [What is the Bing Visual Search API?](../overview.md)
-> [Bing Web Search quickstart in Go](../../Bing-Web-Search/quickstarts/go.md)
+> [Bing Web Search quickstart in Go](../../bing-web-search/quickstarts/go.md)
cognitive-services Java https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/bing-visual-search/quickstarts/java.md
# Quickstart: Get image insights using the Bing Visual Search REST API and Java Use this quickstart to make your first call to the Bing Visual Search API. This Java application uploads an image to the API and displays the information it returns. Although this application is written in Java, the API is a RESTful Web service compatible with most programming languages.
Use this quickstart to make your first call to the Bing Visual Search API. This
import org.apache.http.impl.client.HttpClientBuilder; ```
-2. Create variables for your API endpoint, subscription key, and the path to your image. For the `endpoint` value, you can use the global endpoint in the following code, or use the [custom subdomain](../../../cognitive-services/cognitive-services-custom-subdomains.md) endpoint displayed in the Azure portal for your resource.
+2. Create variables for your API endpoint, subscription key, and the path to your image. For the `endpoint` value, you can use the global endpoint in the following code, or use the [custom subdomain](../../../ai-services/cognitive-services-custom-subdomains.md) endpoint displayed in the Azure portal for your resource.
```java static String endpoint = "https://api.cognitive.microsoft.com/bing/v7.0/images/visualsearch";
cognitive-services Nodejs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/bing-visual-search/quickstarts/nodejs.md
# Quickstart: Get image insights using the Bing Visual Search REST API and Node.js Use this quickstart to make your first call to the Bing Visual Search API. This simple JavaScript application uploads an image to the API, and displays the information returned about it. Although this application is written in JavaScript, the API is a RESTful Web service compatible with most programming languages.
Use this quickstart to make your first call to the Bing Visual Search API. This
var fs = require('fs'); ```
-2. Create variables for your API endpoint, subscription key, and the path to your image. For the `baseUri` value, you can use the global endpoint in the following code, or use the [custom subdomain](../../../cognitive-services/cognitive-services-custom-subdomains.md) endpoint displayed in the Azure portal for your resource.
+2. Create variables for your API endpoint, subscription key, and the path to your image. For the `baseUri` value, you can use the global endpoint in the following code, or use the [custom subdomain](../../../ai-services/cognitive-services-custom-subdomains.md) endpoint displayed in the Azure portal for your resource.
```javascript var baseUri = 'https://api.cognitive.microsoft.com/bing/v7.0/images/visualsearch';
cognitive-services Python https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/bing-visual-search/quickstarts/python.md
# Quickstart: Get image insights using the Bing Visual Search REST API and Python Use this quickstart to make your first call to the Bing Visual Search API. This Python application uploads an image to the API and displays the information it returns. Although this application is written in Python, the API is a RESTful Web service compatible with most programming languages.
Use this quickstart to make your first call to the Bing Visual Search API. This
import requests, json ```
-2. Create variables for your subscription key, endpoint, and the path to the image you're uploading. For the value of `BASE_URI`, you can use the global endpoint in the following code, or use the [custom subdomain](../../../cognitive-services/cognitive-services-custom-subdomains.md) endpoint displayed in the Azure portal for your resource.
+2. Create variables for your subscription key, endpoint, and the path to the image you're uploading. For the value of `BASE_URI`, you can use the global endpoint in the following code, or use the [custom subdomain](../../../ai-services/cognitive-services-custom-subdomains.md) endpoint displayed in the Azure portal for your resource.
```python
cognitive-services Ruby https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/bing-visual-search/quickstarts/ruby.md
# Quickstart: Get image insights using the Bing Visual Search REST API and Ruby Use this quickstart to make your first call to the Bing Visual Search API using the Ruby programming language. A POST request uploads an image to the API endpoint. The results include URLs and descriptive information about images similar to the uploaded image.
cognitive-services Tutorial Bing Visual Search Single Page App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/bing-visual-search/tutorial-bing-visual-search-single-page-app.md
# Tutorial: Create a Visual Search single-page web app The Bing Visual Search API returns insights for an image. You can either upload an image or provide a URL to one. Insights are visually similar images, shopping sources, webpages that include the image, and more. Insights returned by the Bing Visual Search API are similar to ones shown on Bing.com/images.
-This tutorial explains how to extend a single-page web app for the Bing Image Search API. To view that tutorial or get the source code used here, see [Tutorial: Create a single-page app for the Bing Image Search API](../Bing-Image-Search/tutorial-bing-image-search-single-page-app.md).
+This tutorial explains how to extend a single-page web app for the Bing Image Search API. To view that tutorial or get the source code used here, see [Tutorial: Create a single-page app for the Bing Image Search API](../bing-image-search/tutorial-bing-image-search-single-page-app.md).
The full source code for this application (after extending it to use the Bing Visual Search API), is available on [GitHub](https://github.com/Azure-Samples/cognitive-services-REST-api-samples/blob/master/Tutorials/Bing-Visual-Search/BingVisualSearchApp.html).
cognitive-services Tutorial Visual Search Crop Area Results https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/bing-visual-search/tutorial-visual-search-crop-area-results.md
# Tutorial: Crop an image with the Bing Visual Search SDK for C# The Bing Visual Search SDK enables you to crop an image before finding similar online images. This application crops a single person from an image containing several people, and then returns search results containing similar images found online.
This image is cropped by creating an `ImageInfo` object from the crop area, and
```csharp CropArea CropArea = new CropArea(top: (float)0.01, bottom: (float)0.30, left: (float)0.01, right: (float)0.20);
-string imageURL = "https://learn.microsoft.com/azure/cognitive-services/bing-visual-search/media/ms_srleaders.jpg";
+string imageURL = "https://learn.microsoft.com/azure/cognitive-services/Bing-visual-search/media/ms_srleaders.jpg";
ImageInfo imageInfo = new ImageInfo(cropArea: CropArea, url: imageURL); VisualSearchRequest visualSearchRequest = new VisualSearchRequest(imageInfo: imageInfo);
cognitive-services Tutorial Visual Search Image Upload https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/bing-visual-search/tutorial-visual-search-image-upload.md
# Tutorial: Upload images to the Bing Visual Search API The Bing Visual Search API enables you to search the web for images similar to ones you upload. Use this tutorial to create a web application that can send an image to the API, and display the insights it returns within the webpage. Note that this application does not adhere to all [Bing Use and Display Requirements](../bing-web-search/use-display-requirements.md) for using the API.
cognitive-services Tutorial Visual Search Insights Token https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/bing-visual-search/tutorial-visual-search-insights-token.md
# Tutorial: Find similar images from previous searches using an image insights token The Visual Search client library enables you to find images online from previous searches that return an `ImageInsightsToken`. This application gets an `ImageInsightsToken` and uses the token in a subsequent search. It then sends the `ImageInsightsToken` to Bing and returns results that include Bing Search URLs and URLs of similar images found online.
As shown above, the `TopicResults` and `ImageResults` types contain queries for
## Next steps > [!div class="nextstepaction"]
-> [Create a Visual Search single-page web app](tutorial-bing-visual-search-single-page-app.md)
+> [Create a Visual Search single-page web app](tutorial-bing-visual-search-single-page-app.md)
cognitive-services Use Insights Token https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/bing-visual-search/use-insights-token.md
# Use an insights token to get insights for an image Bing Visual Search API returns information about an image that you provide. You can provide the image by using the URL of the image, an insights token, or by uploading an image. For information about these options, see [What is Bing Visual Search API?](overview.md). This article demonstrates using an insights token. For examples that demonstrate how to upload an image to get insights, see the quickstarts:
If you send Bing Visual Search an image token or URL, the following shows the fo
} ```
-The examples in this article show how to use the insights token. You get the insights token from an `Image` object in an /images/search API response. For information about getting the insights token, see [What is the Bing Image Search API?](../Bing-Image-Search/overview.md).
+The examples in this article show how to use the insights token. You get the insights token from an `Image` object in an /images/search API response. For information about getting the insights token, see [What is the Bing Image Search API?](../bing-image-search/overview.md).
``` --boundary_1234-abcd
cognitive-services Cognitive Services And Machine Learning https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/cognitive-services-and-machine-learning.md
- Title: Cognitive Services and Machine Learning-
-description: Learn where Azure Cognitive Services fits in with other Azure offerings for machine learning.
---- Previously updated : 10/28/2021-
-# Cognitive Services and machine learning
-
-Cognitive Services provides machine learning capabilities to solve general problems such as analyzing text for emotional sentiment or analyzing images to recognize objects or faces. You don't need special machine learning or data science knowledge to use these services.
-
-[Cognitive Services](./what-are-cognitive-services.md) is a group of services, each supporting different, generalized prediction capabilities. The services are divided into different categories to help you find the right service.
-
-|Service category|Purpose|
-|--|--|
-|[Decision](https://azure.microsoft.com/services/cognitive-services/directory/decision/)|Build apps that surface recommendations for informed and efficient decision-making.|
-|[Language](https://azure.microsoft.com/services/cognitive-services/directory/lang/)|Allow your apps to process natural language with pre-built scripts, evaluate sentiment and learn how to recognize what users want.|
-|[Search](https://azure.microsoft.com/services/cognitive-services/directory/search/)|Add Bing Search APIs to your apps and harness the ability to comb billions of webpages, images, videos, and news with a single API call.|
-|[Speech](https://azure.microsoft.com/services/cognitive-services/directory/speech/)|Convert speech into text and text into natural-sounding speech. Translate from one language to another and enable speaker verification and recognition.|
-|[Vision](https://azure.microsoft.com/services/cognitive-services/directory/vision/)|Recognize, identify, caption, index, and moderate your pictures, videos, and digital ink content.|
-
-Use Cognitive Services when you:
-
-* Can use a generalized solution.
-* Access the solution from a REST API or SDK in the programming language of your choice.
-
-Use other machine-learning solutions when you:
-
-* Need to choose the algorithm and train it on very specific data.
-
-## What is machine learning?
-
-Machine learning is a concept where you bring together data and an algorithm to solve a specific need. Once the data and algorithm are trained, the output is a model that you can use again with different data. The trained model provides insights based on the new data.
-
-The process of building a machine learning system requires some knowledge of machine learning or data science.
-
-Machine learning is provided using [Azure Machine Learning (AML) products and services](/azure/architecture/data-guide/technology-choices/data-science-and-machine-learning?context=azure%2fmachine-learning%2fstudio%2fcontext%2fml-context).
-
-## What is a Cognitive Service?
-
-A Cognitive Service provides part or all of the components in a machine learning solution: data, algorithm, and trained model. These services are meant to require general knowledge about your data without needing experience with machine learning or data science. These services provide both REST APIs and language-based SDKs. As a result, you need to have programming language knowledge to use the services.
-
-## How are Cognitive Services and Azure Machine Learning (AML) similar?
-
-Both have the end-goal of applying artificial intelligence (AI) to enhance business operations, though how each provides this in the respective offerings is different.
-
-Generally, the audiences are different:
-
-* Cognitive Services are for developers without machine-learning experience.
-* Azure Machine Learning is tailored for data scientists.
-
-## How is a Cognitive Service different from machine learning?
-
-A Cognitive Service provides a trained model for you. This brings data and an algorithm together, available from a REST API or SDK. You can implement this service within minutes, depending on your scenario. A Cognitive Service provides answers to general problems such as key phrases in text or item identification in images.
-
-Machine learning is a process that generally requires a longer period of time to implement successfully. This time is spent on data collection, cleaning, transformation, algorithm selection, model training, and deployment to get to the same level of functionality provided by a Cognitive Service. With machine learning, it is possible to provide answers to highly specialized and/or specific problems. Machine learning problems require familiarity with the specific subject matter and data of the problem under consideration, as well as expertise in data science.
-
-## What kind of data do you have?
-
-Cognitive Services, as a group of services, can require no custom data, some custom data, or all custom data for the trained model.
-
-### No additional training data required
-
-Services that provide a fully trained model can be treated as an _opaque box_. You don't need to know how they work or what data was used to train them. You bring your data to a fully trained model to get a prediction.
-
-### Some or all training data required
-
-Some services allow you to bring your own data, then train a model. This allows you to extend the model using the Service's data and algorithm with your own data. The output matches your needs. When you bring your own data, you may need to tag the data in a way specific to the service. For example, if you are training a model to identify flowers, you can provide a catalog of flower images along with the location of the flower in each image to train the model.
-
-A service may _allow_ you to provide data to enhance its own data. A service may _require_ you to provide data.
-
-### Real-time or near real-time data required
-
-A service may need real-time or near real-time data to build an effective model. These services process significant amounts of model data.
-
-## Service requirements for the data model
-
-The following table categorizes each service by the kind of data it allows or requires.
-
-|Cognitive Service|No training data required|You provide some or all training data|Real-time or near real-time data collection|
-|--|--|--|--|
-|[Anomaly Detector](./Anomaly-Detector/overview.md)|x|x|x|
-|Bing Search |x|||
-|[Computer Vision](./computer-vision/overview.md)|x|||
-|[Content Moderator](./Content-Moderator/overview.md)|x||x|
-|[Custom Vision](./custom-vision-service/overview.md)||x||
-|[Face](./Face/Overview.md)|x|x||
-|[Ink Recognizer](/previous-versions/azure/cognitive-services/Ink-Recognizer/overview)|x|x||
-|[Language Understanding (LUIS)](./LUIS/what-is-luis.md)||x||
-|[Personalizer](./personalizer/what-is-personalizer.md)|x*|x*|x|
-|[QnA Maker](./QnAMaker/Overview/overview.md)||x||
-|[Speaker Recognizer](./speech-service/speaker-recognition-overview.md)||x||
-|[Speech Text to speech (TTS)](speech-service/text-to-speech.md)|x|x||
-|[Speech Speech to text (STT)](speech-service/speech-to-text.md)|x|x||
-|[Speech Translation](speech-service/speech-translation.md)|x|||
-|[Language service](./language-service/overview.md)|x|||
-|[Translator](./translator/translator-overview.md)|x|||
-|[Translator - custom translator](./translator/custom-translator/overview.md)||x||
-
-*Personalizer only needs training data collected by the service (as it operates in real-time) to evaluate your policy and data. Personalizer does not need large historical datasets for up-front or batch training.
-
-## Where can you use Cognitive Services?
-
-The services can be used in any application that can make REST API or SDK calls. Examples of applications include web sites, bots, virtual or mixed reality, and desktop and mobile applications.
-
-## How is Azure Cognitive Search related to Cognitive Services?
-
-[Azure Cognitive Search](../search/search-what-is-azure-search.md) is a separate cloud search service that optionally uses Cognitive Services to add image and natural language processing to indexing workloads. Cognitive Services is exposed in Azure Cognitive Search through [built-in skills](../search/cognitive-search-predefined-skills.md) that wrap individual APIs. You can use a free resource for walkthroughs, but plan on creating and attaching a [billable resource](../search/cognitive-search-attach-cognitive-services.md) for larger volumes.
-
-## How can you use Cognitive Services?
-
-Each service provides information about your data. You can combine services together to chain solutions such as converting speech (audio) to text, translating the text into many languages, then using the translated languages to get answers from a knowledge base. While Cognitive Services can be used to create intelligent solutions on their own, they can also be combined with traditional machine learning projects to supplement models or accelerate the development process.
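As a rough illustration of that chaining idea, here's a minimal, purely hypothetical sketch; the helper functions stand in for calls to the individual services and aren't real SDK methods:

```python
# Purely illustrative pipeline; each helper is a hypothetical placeholder for a call
# to the corresponding service (speech to text, translation, question answering).
def speech_to_text(audio_path: str) -> str:
    ...  # call the Speech service here


def translate(text: str, to_language: str) -> str:
    ...  # call the Translator service here


def answer_question(question: str) -> str:
    ...  # query a knowledge base here


def answer_from_audio(audio_path: str, to_language: str) -> str:
    text = speech_to_text(audio_path)          # speech (audio) -> text
    translated = translate(text, to_language)  # text -> translated text
    return answer_question(translated)         # translated question -> answer
```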
-
-Cognitive Services that provide exported models for other machine learning tools:
-
-|Cognitive Service|Model information|
-|--|--|
-|[Custom Vision](./custom-vision-service/overview.md)|[Export](./custom-vision-service/export-model-python.md) for Tensorflow for Android, CoreML for iOS11, ONNX for Windows ML|
-
-## Learn more
-
-* [Architecture Guide - What are the machine learning products at Microsoft?](/azure/architecture/data-guide/technology-choices/data-science-and-machine-learning)
-* [Machine learning - Introduction to deep learning vs. machine learning](../machine-learning/concept-deep-learning-vs-machine-learning.md)
-
-## Next steps
-
-* Create your Cognitive Service account in the [Azure portal](cognitive-services-apis-create-account.md) or with [Azure CLI](./cognitive-services-apis-create-account-cli.md).
-* Learn how to [authenticate](authentication.md) to a Cognitive Service.
-* Use [diagnostic logging](diagnostic-logging.md) for issue identification and debugging.
-* Deploy a Cognitive Service in a Docker [container](cognitive-services-container-support.md).
-* Keep up to date with [service updates](https://azure.microsoft.com/updates/?product=cognitive-services).
cognitive-services Cognitive Services Apis Create Account Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/cognitive-services-apis-create-account-cli.md
- Title: Create a Cognitive Services resource using the Azure CLI-
-description: Get started with Azure Cognitive Services by using Azure CLI commands to create and subscribe to a resource.
----
-keywords: cognitive services, cognitive intelligence, cognitive solutions, ai services
- Previously updated : 06/06/2022----
-# Quickstart: Create a Cognitive Services resource using the Azure CLI
-
-Use this quickstart to create a Cognitive Services resource using [Azure Command-Line Interface (CLI)](/cli/azure/install-azure-cli) commands. After you create the resource, use the keys and endpoint generated for you to authenticate your applications.
-
-Azure Cognitive Services are cloud-based artificial intelligence (AI) services that help developers build cognitive intelligence into applications without having direct AI or data science skills or knowledge. They are available through REST APIs and client library SDKs in popular development languages. Azure Cognitive Services enables developers to easily add cognitive features into their applications with cognitive solutions that can see, hear, speak, and analyze.
-
-## Types of Cognitive Services resources
--
-## Prerequisites
-
-* A valid Azure subscription - [Create one](https://azure.microsoft.com/free/cognitive-services) for free.
-* The [Azure CLI](/cli/azure/install-azure-cli)
-* [!INCLUDE [contributor-requirement](./includes/quickstarts/contributor-requirement.md)]
-* [!INCLUDE [terms-azure-portal](./includes/quickstarts/terms-azure-portal.md)]
-
-## Install the Azure CLI and sign in
-
-Install the [Azure CLI](/cli/azure/install-azure-cli). To sign into your local installation of the CLI, run the [az login](/cli/azure/reference-index#az-login) command:
-
-```azurecli-interactive
-az login
-```
-
-You can also use the green **Try It** button to run these commands in your browser.
-
-## Create a new Azure Cognitive Services resource group
-
-Before you create a Cognitive Services resource, you must have an Azure resource group to contain the resource. When you create a new resource, you can either create a new resource group, or use an existing one. This article shows how to create a new resource group.
-
-### Choose your resource group location
-
-To create a resource, you'll need one of the Azure locations available for your subscription. You can retrieve a list of available locations with the [az account list-locations](/cli/azure/account#az-account-list-locations) command. Most Cognitive Services can be accessed from several locations. Choose the one closest to you, or see which locations are available for the service.
-
-> [!IMPORTANT]
-> * Remember your Azure location, as you will need it when calling the Azure Cognitive Services resources.
-> * The availability of some Cognitive Services can vary by region. For more information, see [Azure products by region](https://azure.microsoft.com/global-infrastructure/services/?products=cognitive-services).
-
-```azurecli-interactive
-az account list-locations --query "[].{Region:name}" --out table
-```
-
-After you have your Azure location, create a new resource group in the Azure CLI using the [az group create](/cli/azure/group#az-group-create) command. In the example below, replace the Azure location `westus2` with one of the Azure locations available for your subscription.
-
-```azurecli-interactive
-az group create --name cognitive-services-resource-group --location westus2
-```
-
-## Create a Cognitive Services resource
-
-### Choose a service and pricing tier
-
-When you create a new resource, you'll need to know the kind of service you want to use, along with the [pricing tier](https://azure.microsoft.com/pricing/details/cognitive-services/) (or SKU) you want. You'll use this and other information as parameters when you create the resource.
----
-You can find a list of available Cognitive Service "kinds" with the [az cognitiveservices account list-kinds](/cli/azure/cognitiveservices/account#az-cognitiveservices-account-list-kinds) command:
-
-```azurecli-interactive
-az cognitiveservices account list-kinds
-```
-
-### Add a new resource to your resource group
-
-To create and subscribe to a new Cognitive Services resource, use the [az cognitiveservices account create](/cli/azure/cognitiveservices/account#az-cognitiveservices-account-create) command. This command adds a new billable resource to the resource group you created earlier. When you create your new resource, you'll need to know the "kind" of service you want to use, along with its pricing tier (or SKU) and an Azure location:
-
-You can create an F0 (free) resource for Anomaly Detector, named `anomaly-detector-resource` with the command below.
-
-```azurecli-interactive
-az cognitiveservices account create --name anomaly-detector-resource --resource-group cognitive-services-resource-group --kind AnomalyDetector --sku F0 --location westus2 --yes
-```
--
-## Get the keys for your resource
-
-To sign in to your local installation of the Azure Command-Line Interface (CLI), use the [az login](/cli/azure/reference-index#az-login) command.
-
-```azurecli-interactive
-az login
-```
-
-Use the [az cognitiveservices account keys list](/cli/azure/cognitiveservices/account/keys#az-cognitiveservices-account-keys-list) command to get the keys for your Cognitive Service resource.
-
-```azurecli-interactive
- az cognitiveservices account keys list --name anomaly-detector-resource --resource-group cognitive-services-resource-group
-```
--
-## Pricing tiers and billing
-
-Pricing tiers (and the amount you get billed) are based on the number of transactions you send using your authentication information. Each pricing tier specifies:
-* The maximum number of allowed transactions per second (TPS).
-* The service features enabled within the pricing tier.
-* The cost for a predefined number of transactions. Going above this amount causes an extra charge, as specified in the [pricing details](https://azure.microsoft.com/pricing/details/cognitive-services/custom-vision-service/) for your service.
-
-## Get current quota usage for your resource
-
-Use the [az cognitiveservices account list-usage](/cli/azure/cognitiveservices/account#az-cognitiveservices-account-list-usage) command to get the usage for your Cognitive Service resource.
-
-```azurecli-interactive
-az cognitiveservices account list-usage --name anomaly-detector-resource --resource-group cognitive-services-resource-group --subscription subscription-name
-```
-
-## Clean up resources
-
-If you want to clean up and remove a Cognitive Services resource, you can delete it or the resource group. Deleting the resource group also deletes any other resources contained in the group.
-
-To remove the resource group and its associated resources, use the az group delete command.
-
-```azurecli-interactive
-az group delete --name cognitive-services-resource-group
-```
-
-If you need to recover a deleted resource, see [Recover deleted Cognitive Services resources](manage-resources.md).
-
-## See also
-
-* See **[Authenticate requests to Azure Cognitive Services](authentication.md)** on how to securely work with Cognitive Services.
-* See **[What are Azure Cognitive Services?](./what-are-cognitive-services.md)** to get a list of different categories within Cognitive Services.
-* See **[Natural language support](language-support.md)** to see the list of natural languages that Cognitive Services supports.
-* See **[Use Cognitive Services as containers](cognitive-services-container-support.md)** to understand how to use Cognitive Services on-prem.
-* See **[Plan and manage costs for Cognitive Services](plan-manage-costs.md)** to estimate cost of using Cognitive Services.
cognitive-services Cognitive Services Apis Create Account Client Library https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/cognitive-services-apis-create-account-client-library.md
- Title: Create Cognitive Services resource using Azure Management client library-
-description: Create and manage Azure Cognitive Services resources using the Azure Management client library.
-
-keywords: cognitive services, cognitive intelligence, cognitive solutions, ai services
---- Previously updated : 06/04/2021-
-zone_pivot_groups: programming-languages-set-ten
---
-# Quickstart: Create a Cognitive Services resource using the Azure Management client library
-
-Use this quickstart to create and manage Azure Cognitive Services resources using the Azure Management client library.
-
-Azure Cognitive Services are cloud-based artificial intelligence (AI) services that help developers build cognitive intelligence into applications without having direct AI or data science skills or knowledge. They are available through REST APIs and client library SDKs in popular development languages. Azure Cognitive Services enables developers to easily add cognitive features into their applications with cognitive solutions that can see, hear, speak, and analyze.
-
-Individual AI services are represented by Azure [resources](../azure-resource-manager/management/manage-resources-portal.md) that you create under your Azure subscription. After you create a resource, you can use the keys and endpoint generated to authenticate your applications.
------------
cognitive-services Cognitive Services Apis Create Account https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/cognitive-services-apis-create-account.md
- Title: "Create a Cognitive Services resource in the Azure portal"-
-description: Get started with Azure Cognitive Services by creating and subscribing to a resource in the Azure portal.
---
-keywords: cognitive services, cognitive intelligence, cognitive solutions, ai services
-- Previously updated : 02/13/2023---
-# Quickstart: Create a Cognitive Services resource using the Azure portal
-
-Use this quickstart to create a Cognitive Services resource. After you create a Cognitive Service resource in the Azure portal, you'll get an endpoint and a key for authenticating your applications.
-
-Azure Cognitive Services is a set of cloud-based artificial intelligence (AI) services that help developers build cognitive intelligence into applications without having direct AI or data science skills or knowledge. They're available through REST APIs and client library SDKs in popular development languages. Azure Cognitive Services enables developers to easily add cognitive features into their applications with cognitive solutions that can see, hear, speak, and analyze.
-
-## Types of Cognitive Services resources
--
-## Prerequisites
-
-* A valid Azure subscription - [Create one for free](https://azure.microsoft.com/free/cognitive-services/).
-* [!INCLUDE [contributor-requirement](./includes/quickstarts/contributor-requirement.md)]
-
-## Create a new Azure Cognitive Services resource
-
-### [Multi-service](#tab/multiservice)
-
-The multi-service resource is named **Cognitive Services** in the portal. The multi-service resource enables access to the following Cognitive Services:
-
-* **Decision** - Content Moderator
-* **Language** - Language, Translator
-* **Speech** - Speech
-* **Vision** - Computer Vision, Custom Vision, Form Recognizer, Face
-
-1. You can select this link to create an Azure Cognitive multi-service resource: [Create a Cognitive Services resource](https://portal.azure.com/#create/Microsoft.CognitiveServicesAllInOne).
-
-1. On the **Create** page, provide the following information:
-
- [!INCLUDE [Create Azure resource for subscription](./includes/quickstarts/cognitive-resource-project-details.md)]
-
- :::image type="content" source="media/cognitive-services-apis-create-account/resource_create_screen-multi.png" alt-text="Multi-service resource creation screen":::
-
-1. Configure other settings for your resource as needed, read and accept the conditions (as applicable), and then select **Review + create**.
-
-### [Decision](#tab/decision)
--
-### [Language](#tab/language)
--
-### [Speech](#tab/speech)
-
-1. Select the following links to create a Speech resource:
- - [Speech Services](https://portal.azure.com/#create/Microsoft.CognitiveServicesSpeechServices)
-
-1. On the **Create** page, provide the following information:
-
- [!INCLUDE [Create Azure resource for subscription](./includes/quickstarts/cognitive-resource-project-details.md)]
-
-1. Select **Review + create**.
-
-### [Vision](#tab/vision)
-----
-## Get the keys for your resource
-
-1. After your resource is successfully deployed, select **Next Steps** > **Go to resource**.
-
- :::image type="content" source="media/cognitive-services-apis-create-account/cognitive-services-resource-deployed.png" alt-text="Get resource keys screen":::
-
-1. From the quickstart pane that opens, you can access the resource endpoint and manage keys.
-<!--
-1. If you missed the previous steps or need to find your resource later, go to the [Azure services](https://portal.azure.com/#home) home page. From here you can view recent resources, select **My resources**, or use the search box to find your resource by name.
-
- :::image type="content" source="media/cognitive-services-apis-create-account/home-my-resources.png" alt-text="Find resource keys from home screen":::
>--
-## Clean up resources
-
-If you want to clean up and remove a Cognitive Services subscription, you can delete the resource or resource group. Deleting the resource group also deletes any other resources contained in the group.
-
-1. In the Azure portal, expand the menu on the left side to open the menu of services, and choose **Resource Groups** to display the list of your resource groups.
-1. Locate the resource group containing the resource to be deleted.
-1. If you want to delete the entire resource group, select the resource group name. On the next page, Select **Delete resource group**, and confirm.
-1. If you want to delete only the Cognitive Service resource, select the resource group to see all the resources within it. On the next page, select the resource that you want to delete, select the ellipsis menu for that row, and select **Delete**.
-
-If you need to recover a deleted resource, see [Recover deleted Cognitive Services resources](manage-resources.md).
-
-## See also
-
-* See **[Authenticate requests to Azure Cognitive Services](authentication.md)** on how to securely work with Cognitive Services.
-* See **[What are Azure Cognitive Services?](./what-are-cognitive-services.md)** to get a list of different categories within Cognitive Services.
-* See **[Natural language support](language-support.md)** to see the list of natural languages that Cognitive Services supports.
-* See **[Use Cognitive Services as containers](cognitive-services-container-support.md)** to understand how to use Cognitive Services on-premises.
-* See **[Plan and manage costs for Cognitive Services](plan-manage-costs.md)** to estimate cost of using Cognitive Services.
cognitive-services Cognitive Services Container Support https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/cognitive-services-container-support.md
- Title: Use Azure Cognitive Services Containers on-premises-
-description: Learn how to use Docker containers to use Cognitive Services on-premises.
------ Previously updated : 05/08/2023-
-keywords: on-premises, Docker, container, Kubernetes
-#Customer intent: As a potential customer, I want to know more about how Cognitive Services provides and supports Docker containers for each service.
--
-# What are Azure Cognitive Services containers?
-
-Azure Cognitive Services provides several [Docker containers](https://www.docker.com/what-container) that let you use the same APIs that are available in Azure, on-premises. Using these containers gives you the flexibility to bring Cognitive Services closer to your data for compliance, security or other operational reasons. Container support is currently available for a subset of Azure Cognitive Services.
-
-> [!VIDEO https://www.youtube.com/embed/hdfbn4Q8jbo]
-
-Containerization is an approach to software distribution in which an application or service, including its dependencies & configuration, is packaged together as a container image. With little or no modification, a container image can be deployed on a container host. Containers are isolated from each other and the underlying operating system, with a smaller footprint than a virtual machine. Containers can be instantiated from container images for short-term tasks, and removed when no longer needed.
-
-## Features and benefits
-- **Immutable infrastructure**: Enable DevOps teams to leverage a consistent and reliable set of known system parameters, while being able to adapt to change. Containers provide the flexibility to pivot within a predictable ecosystem and avoid configuration drift.
-- **Control over data**: Choose where your data gets processed by Cognitive Services. This can be essential if you can't send data to the cloud but need access to Cognitive Services APIs. Support consistency in hybrid environments across data, management, identity, and security.
-- **Control over model updates**: Flexibility in versioning and updating of models deployed in their solutions.
-- **Portable architecture**: Enables the creation of a portable application architecture that can be deployed on Azure, on-premises, and the edge. Containers can be deployed directly to [Azure Kubernetes Service](../aks/index.yml), [Azure Container Instances](../container-instances/index.yml), or to a [Kubernetes](https://kubernetes.io/) cluster deployed to [Azure Stack](/azure-stack/operator). For more information, see [Deploy Kubernetes to Azure Stack](/azure-stack/user/azure-stack-solution-template-kubernetes-deploy).
-- **High throughput / low latency**: Provide customers the ability to scale for high throughput and low latency requirements by enabling Cognitive Services to run physically close to their application logic and data. Containers don't cap transactions per second (TPS) and can be made to scale both up and out to handle demand if you provide the necessary hardware resources.
-- **Scalability**: With the ever-growing popularity of containerization and container orchestration software such as Kubernetes, scalability is at the forefront of technological advancements. Building on a scalable cluster foundation, application development caters to high availability.
-
-## Containers in Azure Cognitive Services
-
-Azure Cognitive Services containers provide the following set of Docker containers, each of which contains a subset of functionality from services in Azure Cognitive Services. You can find instructions and image locations in the tables below.
-
-> [!NOTE]
-> See [Install and run Form Recognizer containers](../applied-ai-services/form-recognizer/containers/form-recognizer-container-install-run.md) for **Applied AI Services Form Recognizer** container instructions and image locations.
-
-### Decision containers
-
-| Service | Container | Description | Availability |
-|--|--|--|--|
-| [Anomaly detector][ad-containers] | **Anomaly Detector** ([image](https://mcr.microsoft.com/product/azure-cognitive-services/decision/anomaly-detector/about)) | The Anomaly Detector API enables you to monitor and detect abnormalities in your time series data with machine learning. | Generally available |
-
-### Language containers
-
-| Service | Container | Description | Availability |
-|--|--|--|--|
-| [LUIS][lu-containers] | **LUIS** ([image](https://mcr.microsoft.com/product/azure-cognitive-services/language/luis/about)) | Loads a trained or published Language Understanding model, also known as a LUIS app, into a docker container and provides access to the query predictions from the container's API endpoints. You can collect query logs from the container and upload these back to the [LUIS portal](https://www.luis.ai) to improve the app's prediction accuracy. | Generally available. <br> This container can also [run in disconnected environments](containers/disconnected-containers.md). |
-| [Language service][ta-containers-keyphrase] | **Key Phrase Extraction** ([image](https://mcr.microsoft.com/product/azure-cognitive-services/textanalytics/keyphrase/about)) | Extracts key phrases to identify the main points. For example, for the input text "The food was delicious and there were wonderful staff", the API returns the main talking points: "food" and "wonderful staff". | Generally available. <br> This container can also [run in disconnected environments](containers/disconnected-containers.md). |
-| [Language service][ta-containers-language] | **Text Language Detection** ([image](https://mcr.microsoft.com/product/azure-cognitive-services/textanalytics/language/about)) | For up to 120 languages, detects which language the input text is written in and reports a single language code for every document submitted on the request. The language code is paired with a score indicating the strength of the detection. | Generally available. <br> This container can also [run in disconnected environments](containers/disconnected-containers.md). |
-| [Language service][ta-containers-sentiment] | **Sentiment Analysis** ([image](https://mcr.microsoft.com/product/azure-cognitive-services/textanalytics/sentiment/about)) | Analyzes raw text for clues about positive or negative sentiment. This version of sentiment analysis returns sentiment labels (for example *positive* or *negative*) for each document and sentence within it. | Generally available. <br> This container can also [run in disconnected environments](containers/disconnected-containers.md). |
-| [Language service][ta-containers-health] | **Text Analytics for health** ([image](https://mcr.microsoft.com/product/azure-cognitive-services/textanalytics/healthcare/about))| Extract and label medical information from unstructured clinical text. | Generally available |
-| [Language service][ta-containers-cner] | **Custom Named Entity Recognition** ([image](https://mcr.microsoft.com/product/azure-cognitive-services/textanalytics/customner/about))| Extract named entities from text, using a custom model you create using your data. | Preview |
-| [Translator][tr-containers] | **Translator** ([image](https://mcr.microsoft.com/product/azure-cognitive-services/translator/text-translation/about))| Translate text in several languages and dialects. | Generally available. Gated - [request access](https://aka.ms/csgate-translator). <br>This container can also [run in disconnected environments](containers/disconnected-containers.md). |
-
-### Speech containers
-
-> [!NOTE]
-> To use Speech containers, you will need to complete an [online request form](https://aka.ms/csgate).
-
-| Service | Container | Description | Availability |
-|--|--|--|--|
-| [Speech Service API][sp-containers-stt] | **Speech to text** ([image](https://hub.docker.com/_/microsoft-azure-cognitive-services-speechservices-speech-to-text)) | Transcribes continuous real-time speech into text. | Generally available. <br> This container can also [run in disconnected environments](containers/disconnected-containers.md). |
-| [Speech Service API][sp-containers-cstt] | **Custom Speech to text** ([image](https://hub.docker.com/_/microsoft-azure-cognitive-services-speechservices-custom-speech-to-text)) | Transcribes continuous real-time speech into text using a custom model. | Generally available <br> This container can also [run in disconnected environments](containers/disconnected-containers.md). |
-| [Speech Service API][sp-containers-ntts] | **Neural Text to speech** ([image](https://hub.docker.com/_/microsoft-azure-cognitive-services-speechservices-neural-text-to-speech)) | Converts text to natural-sounding speech using deep neural network technology, allowing for more natural synthesized speech. | Generally available. <br> This container can also [run in disconnected environments](containers/disconnected-containers.md). |
-| [Speech Service API][sp-containers-lid] | **Speech language detection** ([image](https://hub.docker.com/_/microsoft-azure-cognitive-services-speechservices-language-detection)) | Determines the language of spoken audio. | Gated preview |
-
-### Vision containers
--
-| Service | Container | Description | Availability |
-|--|--|--|--|
-| [Computer Vision][cv-containers] | **Read OCR** ([image](https://hub.docker.com/_/microsoft-azure-cognitive-services-vision-read)) | The Read OCR container allows you to extract printed and handwritten text from images and documents with support for JPEG, PNG, BMP, PDF, and TIFF file formats. For more information, see the [Read API documentation](./computer-vision/overview-ocr.md). | Generally Available. Gated - [request access](https://aka.ms/csgate). <br>This container can also [run in disconnected environments](containers/disconnected-containers.md). |
-| [Spatial Analysis][spa-containers] | **Spatial analysis** ([image](https://hub.docker.com/_/microsoft-azure-cognitive-services-vision-spatial-analysis)) | Analyzes real-time streaming video to understand spatial relationships between people, their movement, and interactions with objects in physical environments. | Preview |
-
-<!--
-|[Personalizer](./personalizer/what-is-personalizer.md) |F0, S0|**Personalizer** ([image](https://go.microsoft.com/fwlink/?linkid=2083928&clcid=0x409))|Azure Personalizer is a cloud-based API service that allows you to choose the best experience to show to your users, learning from their real-time behavior.|
>-
-Additionally, some containers are supported in the Cognitive Services [multi-service resource](cognitive-services-apis-create-account.md) offering. You can create one single Cognitive Services All-In-One resource and use the same billing key across supported services for the following services:
-
-* Computer Vision
-* LUIS
-* Language service
-
-## Prerequisites
-
-You must satisfy the following prerequisites before using Azure Cognitive Services containers:
-
-**Docker Engine**: You must have Docker Engine installed locally. Docker provides packages that configure the Docker environment on [macOS](https://docs.docker.com/docker-for-mac/), [Linux](https://docs.docker.com/engine/installation/#supported-platforms), and [Windows](https://docs.docker.com/docker-for-windows/). On Windows, Docker must be configured to support Linux containers. Docker containers can also be deployed directly to [Azure Kubernetes Service](../aks/index.yml) or [Azure Container Instances](../container-instances/index.yml).
-
-Docker must be configured to allow the containers to connect with and send billing data to Azure.
-
-**Familiarity with Microsoft Container Registry and Docker**: You should have a basic understanding of both Microsoft Container Registry and Docker concepts, like registries, repositories, containers, and container images, as well as knowledge of basic `docker` commands.
-
-For a primer on Docker and container basics, see the [Docker overview](https://docs.docker.com/engine/docker-overview/).
-
-Individual containers can have their own requirements, as well, including server and memory allocation requirements.
--
-## Developer samples
-
-Developer samples are available at our [GitHub repository](https://github.com/Azure-Samples/cognitive-services-containers-samples).
-
-## Next steps
-
-Learn about [container recipes](containers/container-reuse-recipe.md) you can use with the Cognitive Services.
-
-Install and explore the functionality provided by containers in Azure Cognitive Services:
-
-* [Anomaly Detector containers][ad-containers]
-* [Computer Vision containers][cv-containers]
-* [Language Understanding (LUIS) containers][lu-containers]
-* [Speech Service API containers][sp-containers]
-* [Language service containers][ta-containers]
-* [Translator containers][tr-containers]
-
-<!--* [Personalizer containers](https://go.microsoft.com/fwlink/?linkid=2083928&clcid=0x409)
>-
-[ad-containers]: anomaly-Detector/anomaly-detector-container-howto.md
-[cv-containers]: computer-vision/computer-vision-how-to-install-containers.md
-[lu-containers]: luis/luis-container-howto.md
-[sp-containers]: speech-service/speech-container-howto.md
-[spa-containers]: ./computer-vision/spatial-analysis-container.md
-[sp-containers-lid]: speech-service/speech-container-lid.md
-[sp-containers-stt]: speech-service/speech-container-stt.md
-[sp-containers-cstt]: speech-service/speech-container-cstt.md
-[sp-containers-ntts]: speech-service/speech-container-ntts.md
-[ta-containers]: language-service/overview.md#deploy-on-premises-using-docker-containers
-[ta-containers-keyphrase]: language-service/key-phrase-extraction/how-to/use-containers.md
-[ta-containers-language]: language-service/language-detection/how-to/use-containers.md
-[ta-containers-sentiment]: language-service/sentiment-opinion-mining/how-to/use-containers.md
-[ta-containers-health]: language-service/text-analytics-for-health/how-to/use-containers.md
-[ta-containers-cner]: language-service/custom-named-entity-recognition/how-to/use-containers.md
-[tr-containers]: translator/containers/translator-how-to-install-container.md
-[request-access]: https://aka.ms/csgate
cognitive-services Cognitive Services Custom Subdomains https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/cognitive-services-custom-subdomains.md
- Title: Custom subdomains-
-description: Custom subdomain names for each Cognitive Service resource are created through the Azure portal, Azure Cloud Shell, or Azure CLI.
------ Previously updated : 12/04/2020---
-# Custom subdomain names for Cognitive Services
-
-Azure Cognitive Services use custom subdomain names for each resource created through the [Azure portal](https://portal.azure.com), [Azure Cloud Shell](https://azure.microsoft.com/features/cloud-shell/), or [Azure CLI](/cli/azure/install-azure-cli). Unlike regional endpoints, which were common for all customers in a specific Azure region, custom subdomain names are unique to the resource. Custom subdomain names are required to enable features like Azure Active Directory (Azure AD) for authentication.
-
-## How does this impact existing resources?
-
-Cognitive Services resources created before July 1, 2019 will use the regional endpoints for the associated service. These endpoints will work with existing and new resources.
-
-If you'd like to migrate an existing resource to leverage custom subdomain names, so that you can enable features like Azure AD, follow these instructions:
-
-1. Sign in to the Azure portal and locate the Cognitive Services resource that you'd like to add a custom subdomain name to.
-2. In the **Overview** blade, locate and select **Generate Custom Domain Name**.
-3. This opens a panel with instructions to create a unique custom subdomain for your resource.
- > [!WARNING]
- > After you've created a custom subdomain name it **cannot** be changed.
-
-## Do I need to update my existing resources?
-
-No. The regional endpoint will continue to work for new and existing Cognitive Services resources, and the custom subdomain name is optional. Even if a custom subdomain name is added, the regional endpoint will continue to work with the resource.
-
-## What if an SDK asks me for the region for a resource?
-
-> [!WARNING]
-> Speech Services use custom subdomains with [private endpoints](Speech-Service/speech-services-private-link.md) **only**. In all other cases use **regional endpoints** with Speech Services and associated SDKs.
-
-Regional endpoints and custom subdomain names are both supported and can be used interchangeably. However, the full endpoint is required.
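For example, both of the following strings are valid full endpoints to pass to an SDK or REST call; the custom subdomain value is a placeholder:

```python
# Both endpoint forms work interchangeably; only the full endpoint string differs.
regional_endpoint = "https://westus2.api.cognitive.microsoft.com"                          # regional endpoint
custom_subdomain_endpoint = "https://<your-custom-subdomain>.cognitiveservices.azure.com"  # custom subdomain
```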
-
-Region information is available in the **Overview** blade for your resource in the [Azure portal](https://portal.azure.com). For the full list of regional endpoints, see [Is there a list of regional endpoints?](#is-there-a-list-of-regional-endpoints)
-
-## Are custom subdomain names regional?
-
-Yes. Using a custom subdomain name doesn't change any of the regional aspects of your Cognitive Services resource.
-
-## What are the requirements for a custom subdomain name?
-
-A custom subdomain name is unique to your resource. The name can only include alphanumeric characters and the `-` character; it must be between 2 and 64 characters in length and cannot end with a `-`.
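To make the naming rules concrete, here's a minimal, hypothetical validation helper based only on the constraints above; it isn't an official Azure check:

```python
import re

# Hypothetical check derived from the stated rules: alphanumeric characters and '-',
# 2 to 64 characters long, and not ending with '-'. Not an official Azure validation.
_PATTERN = re.compile(r"^[A-Za-z0-9-]{1,63}[A-Za-z0-9]$")


def is_valid_custom_subdomain(name: str) -> bool:
    return bool(_PATTERN.fullmatch(name))


print(is_valid_custom_subdomain("my-speech-resource"))  # True
print(is_valid_custom_subdomain("my-resource-"))        # False: ends with '-'
```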
-
-## Can I change a custom domain name?
-
-No. After a custom subdomain name is created and associated with a resource it cannot be changed.
-
-## Can I reuse a custom domain name?
-
-Each custom subdomain name is unique, so in order to reuse a custom subdomain name that you've assigned to a Cognitive Services resource, you'll need to delete the existing resource. After the resource has been deleted, you can reuse the custom subdomain name.
-
-## Is there a list of regional endpoints?
-
-Yes. This is a list of regional endpoints that you can use with Azure Cognitive Services resources.
-
-> [!NOTE]
-> The Translator service and Bing Search APIs use global endpoints.
-
-| Endpoint type | Region | Endpoint |
-||--|-|
-| Public | Global (Translator & Bing) | `https://api.cognitive.microsoft.com` |
-| | Australia East | `https://australiaeast.api.cognitive.microsoft.com` |
-| | Brazil South | `https://brazilsouth.api.cognitive.microsoft.com` |
-| | Canada Central | `https://canadacentral.api.cognitive.microsoft.com` |
-| | Central US | `https://centralus.api.cognitive.microsoft.com` |
-| | East Asia | `https://eastasia.api.cognitive.microsoft.com` |
-| | East US | `https://eastus.api.cognitive.microsoft.com` |
-| | East US 2 | `https://eastus2.api.cognitive.microsoft.com` |
-| | France Central | `https://francecentral.api.cognitive.microsoft.com` |
-| | India Central | `https://centralindia.api.cognitive.microsoft.com` |
-| | Japan East | `https://japaneast.api.cognitive.microsoft.com` |
-| | Korea Central | `https://koreacentral.api.cognitive.microsoft.com` |
-| | North Central US | `https://northcentralus.api.cognitive.microsoft.com` |
-| | North Europe | `https://northeurope.api.cognitive.microsoft.com` |
-| | South Africa North | `https://southafricanorth.api.cognitive.microsoft.com` |
-| | South Central US | `https://southcentralus.api.cognitive.microsoft.com` |
-| | Southeast Asia | `https://southeastasia.api.cognitive.microsoft.com` |
-| | UK South | `https://uksouth.api.cognitive.microsoft.com` |
-| | West Central US | `https://westcentralus.api.cognitive.microsoft.com` |
-| | West Europe | `https://westeurope.api.cognitive.microsoft.com` |
-| | West US | `https://westus.api.cognitive.microsoft.com` |
-| | West US 2 | `https://westus2.api.cognitive.microsoft.com` |
-| US Gov | US Gov Virginia | `https://virginia.api.cognitive.microsoft.us` |
-| China | China East 2 | `https://chinaeast2.api.cognitive.azure.cn` |
-| | China North | `https://chinanorth.api.cognitive.azure.cn` |
-
-## See also
-
-* [What are the Cognitive Services?](./what-are-cognitive-services.md)
-* [Authentication](authentication.md)
cognitive-services Cognitive Services Data Loss Prevention https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/cognitive-services-data-loss-prevention.md
- Title: Data Loss Prevention
-description: Cognitive Services Data Loss Prevention capabilities allow customers to configure the list of outbound URLs their Cognitive Services resources are allowed to access. This configuration creates another level of control for customers to prevent data loss.
---- Previously updated : 03/31/2023---
-# Configure data loss prevention for Azure Cognitive Services
-
-Cognitive Services data loss prevention capabilities allow customers to configure the list of outbound URLs their Cognitive Services resources are allowed to access. This creates another level of control for customers to prevent data loss. In this article, we'll cover the steps required to enable the data loss prevention feature for Cognitive Services resources.
-
-## Prerequisites
-
-Before you make a request, you need an Azure account and an Azure Cognitive Services subscription. If you already have an account, go ahead and skip to the next section. If you don't have an account, we have a guide to get you set up in minutes: [Create a Cognitive Services account for Azure](cognitive-services-apis-create-account.md).
-
-You can get your subscription key from the [Azure portal](cognitive-services-apis-create-account.md#get-the-keys-for-your-resource) after [creating your account](https://azure.microsoft.com/free/cognitive-services/).
-
-## Enabling data loss prevention
-
-There are two parts to enabling data loss prevention. First, set the `restrictOutboundNetworkAccess` property to `true`. When this property is `true`, you must also provide the list of approved URLs by adding them to the `allowedFqdnList` property, which contains an array of comma-separated URLs.
-
->[!NOTE]
->
-> * The `allowedFqdnList` property value supports a maximum of 1000 URLs.
-> * The property supports both IP addresses and fully qualified domain names (for example, `www.microsoft.com`) as values.
-> * It can take up to 15 minutes for the updated list to take effect.
-
-# [Azure CLI](#tab/azure-cli)
-
-1. Install the [Azure CLI](/cli/azure/install-azure-cli) and [sign in](/cli/azure/authenticate-azure-cli), or select **Try it**.
-
-1. View the details of the Cognitive Services resource.
-
- ```azurecli-interactive
- az cognitiveservices account show \
-    -g "myresourcegroup" -n "myaccount"
- ```
-
-1. View the current properties of the Cognitive Services resource.
-
- ```azurecli-interactive
- az rest -m get \
-    -u /subscriptions/{subscription ID}/resourceGroups/{resource group}/providers/Microsoft.CognitiveServices/accounts/{account name}?api-version=2021-04-30
- ```
-
-1. Configure the `restrictOutboundNetworkAccess` property and update the `allowedFqdnList` property with the approved URLs.
-
- ```azurecli-interactive
- az rest -m patch \
-    -u /subscriptions/{subscription ID}/resourceGroups/{resource group}/providers/Microsoft.CognitiveServices/accounts/{account name}?api-version=2021-04-30 \
- -b '{"properties": { "restrictOutboundNetworkAccess": true, "allowedFqdnList": [ "microsoft.com" ] }}'
- ```
-
-# [PowerShell](#tab/powershell)
-
-1. Install the [Azure PowerShell](/powershell/azure/install-azure-powershell) and [sign in](/powershell/azure/authenticate-azureps), or select **Try it**.
-
-1. Display the current properties for Cognitive Services resource.
-
- ```azurepowershell-interactive
- $getParams = @{
- ResourceGroupName = 'myresourcegroup'
- ResourceProviderName = 'Microsoft.CognitiveServices'
- ResourceType = 'accounts'
- Name = 'myaccount'
- ApiVersion = '2021-04-30'
- Method = 'GET'
- }
- Invoke-AzRestMethod @getParams
- ```
-
-1. Configure the `restrictOutboundNetworkAccess` property and update the `allowedFqdnList` property with the approved URLs.
-
- ```azurepowershell-interactive
- $patchParams = @{
- ResourceGroupName = 'myresourcegroup'
- ResourceProviderName = 'Microsoft.CognitiveServices'
- ResourceType = 'accounts'
- Name = 'myaccount'
- ApiVersion = '2021-04-30'
- Payload = '{"properties": { "restrictOutboundNetworkAccess": true, "allowedFqdnList": [ "microsoft.com" ] }}'
- Method = 'PATCH'
- }
- Invoke-AzRestMethod @patchParams
- ```
---
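The same configuration can also be scripted outside the Azure CLI or PowerShell. The following minimal Python sketch (using the `requests` library) issues the equivalent PATCH request against Azure Resource Manager; it assumes you already have an ARM bearer token, for example from `az account get-access-token`, and the angle-bracket values are placeholders:

```python
import requests

# Placeholders; supply your own values and a valid Azure Resource Manager bearer token.
subscription_id = "<subscription ID>"
resource_group = "<resource group>"
account_name = "<account name>"
token = "<ARM access token>"

url = (
    "https://management.azure.com"
    f"/subscriptions/{subscription_id}/resourceGroups/{resource_group}"
    f"/providers/Microsoft.CognitiveServices/accounts/{account_name}"
    "?api-version=2021-04-30"
)
payload = {
    "properties": {
        "restrictOutboundNetworkAccess": True,
        "allowedFqdnList": ["microsoft.com"],
    }
}

# Same body as the az rest / Invoke-AzRestMethod examples above.
response = requests.patch(url, json=payload, headers={"Authorization": f"Bearer {token}"})
print(response.status_code)
```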
-## Supported services
-
-The following services support data loss prevention configuration:
--- Azure OpenAI-- Computer Vision-- Content Moderator-- Custom Vision-- Face-- Form Recognizer-- Speech Service-- QnA Maker-
-## Next steps
--- [Configure Virtual Networks](cognitive-services-virtual-networks.md)
cognitive-services Cognitive Services Development Options https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/cognitive-services-development-options.md
- Title: Azure Cognitive Services development options
-description: Learn how to use Azure Cognitive Services with different development and deployment options such as client libraries, REST APIs, Logic Apps, Power Automate, Azure Functions, Azure App Service, Azure Databricks, and many more.
------ Previously updated : 10/28/2021--
-# Cognitive Services development options
-
-This document provides a high-level overview of development and deployment options to help you get started with Azure Cognitive Services.
-
-Azure Cognitive Services are cloud-based AI services that allow developers to build intelligence into their applications and products without deep knowledge of machine learning. With Cognitive Services, you have access to AI capabilities or models that are built, trained, and updated by Microsoft - ready to be used in your applications. In many cases, you also have the option to customize the models for your business needs.
-
-Cognitive Services are organized into four categories: Decision, Language, Speech, and Vision. Typically you would access these services through REST APIs, client libraries, and custom tools (like command-line interfaces) provided by Microsoft. However, this is only one path to success. Through Azure, you also have access to several development options, such as:
-
-* Automation and integration tools like Logic Apps and Power Automate.
-* Deployment options such as Azure Functions and the App Service.
-* Cognitive Services Docker containers for secure access.
-* Tools like Apache Spark, Azure Databricks, Azure Synapse Analytics, and Azure Kubernetes Service for big data scenarios.
-
-Before we jump in, it's important to know that the Cognitive Services are primarily used for two distinct tasks. Based on the task you want to perform, you have different development and deployment options to choose from.
-
-* [Development options for prediction and analysis](#development-options-for-prediction-and-analysis)
-* [Tools to customize and configure models](#tools-to-customize-and-configure-models)
-
-## Development options for prediction and analysis
-
-The tools that you will use to customize and configure models are different from those that you'll use to call the Cognitive Services. Out of the box, most Cognitive Services allow you to send data and receive insights without any customization. For example:
-
-* You can send an image to the Computer Vision service to detect words and phrases or count the number of people in the frame
-* You can send an audio file to the Speech service and get transcriptions and translate the speech to text at the same time
-
-Azure offers a wide range of tools that are designed for different types of users, many of which can be used with Cognitive Services. Designer-driven tools are the easiest to use, and are quick to set up and automate, but may have limitations when it comes to customization. Our REST APIs and client libraries provide users with more control and flexibility, but require more effort, time, and expertise to build a solution. If you use REST APIs and client libraries, you should be comfortable working with a modern programming language such as C#, Java, Python, or JavaScript.
-
-Let's take a look at the different ways that you can work with the Cognitive Services.
-
-### Client libraries and REST APIs
-
-Cognitive Services client libraries and REST APIs provide you direct access to your service. These tools provide programmatic access to the Cognitive Services, their baseline models, and in many cases allow you to programmatically customize your models and solutions.
-
-* **Target user(s)**: Developers and data scientists
-* **Benefits**: Provides the greatest flexibility to call the services from any language and environment.
-* **UI**: N/A - Code only
-* **Subscription(s)**: Azure account + Cognitive Services resources
-
-If you want to learn more about available client libraries and REST APIs, use our [Cognitive Services overview](index.yml) to pick a service and get started with one of our quickstarts for vision, decision, language, and speech.
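-
-As a minimal sketch of the REST path, the following example calls the Computer Vision image analysis endpoint with the `requests` library. The environment variable names and the sample image URL are placeholders for this illustration, not values defined by the service.
-
-```python
-import os
-import requests
-
-# Placeholder configuration: read your own resource endpoint and key from environment variables.
-endpoint = os.environ["VISION_ENDPOINT"]  # for example, https://<your-resource>.cognitiveservices.azure.com
-key = os.environ["VISION_KEY"]
-
-# Ask the Image Analysis v3.2 REST API to describe a publicly reachable image.
-analyze_url = f"{endpoint}/vision/v3.2/analyze"
-params = {"visualFeatures": "Description"}
-headers = {"Ocp-Apim-Subscription-Key": key, "Content-Type": "application/json"}
-body = {"url": "https://example.com/sample-image.jpg"}  # replace with a reachable image URL
-
-response = requests.post(analyze_url, params=params, headers=headers, json=body)
-response.raise_for_status()
-print(response.json()["description"]["captions"])
-```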
-
-### Cognitive Services for big data
-
-With Cognitive Services for big data you can embed continuously improving, intelligent models directly into Apache Spark&trade; and SQL computations. These tools liberate developers from low-level networking details, so that they can focus on creating smart, distributed applications. Cognitive Services for big data support the following platforms and connectors: Azure Databricks, Azure Synapse, Azure Kubernetes Service, and Data Connectors.
-
-* **Target user(s)**: Data scientists and data engineers
-* **Benefits**: The Azure Cognitive Services for big data let users channel terabytes of data through Cognitive Services using Apache Spark&trade;. It's easy to create large-scale intelligent applications with any datastore.
-* **UI**: N/A - Code only
-* **Subscription(s)**: Azure account + Cognitive Services resources
-
-If you want to learn more about big data for Cognitive Services, a good place to start is the [overview](./big-dat) and its samples.
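-
-As a rough sketch of the pattern (not the dedicated big data connectors, which wrap this work for you), the following example fans calls to a Cognitive Services REST endpoint out across a Spark DataFrame with a plain Python UDF. The endpoint and key environment variable names are assumptions for this illustration.
-
-```python
-import os
-import requests
-from pyspark.sql import SparkSession
-from pyspark.sql.functions import udf
-from pyspark.sql.types import StringType
-
-# Placeholder configuration for a Language (Text Analytics) resource.
-ENDPOINT = os.environ["LANGUAGE_ENDPOINT"]  # e.g. https://<your-resource>.cognitiveservices.azure.com
-KEY = os.environ["LANGUAGE_KEY"]
-
-def score_sentiment(text):
-    """Call the Text Analytics v3.1 sentiment route for a single document."""
-    body = {"documents": [{"id": "1", "language": "en", "text": text}]}
-    headers = {"Ocp-Apim-Subscription-Key": KEY}
-    resp = requests.post(f"{ENDPOINT}/text/analytics/v3.1/sentiment", headers=headers, json=body)
-    resp.raise_for_status()
-    return resp.json()["documents"][0]["sentiment"]
-
-spark = SparkSession.builder.getOrCreate()
-reviews = spark.createDataFrame([("The service was great",), ("Too slow for our needs",)], ["text"])
-
-# Distribute the REST calls across the cluster; each row is scored independently.
-sentiment_udf = udf(score_sentiment, StringType())
-reviews.withColumn("sentiment", sentiment_udf("text")).show(truncate=False)
-```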
-
-### Azure Functions and Azure Service Web Jobs
-
-[Azure Functions](../azure-functions/index.yml) and [Azure App Service Web Jobs](../app-service/index.yml) both provide code-first integration services designed for developers and are built on [Azure App Services](../app-service/index.yml). These products provide serverless infrastructure for writing code. Within that code you can make calls to our services using our client libraries and REST APIs.
-
-* **Target user(s)**: Developers and data scientists
-* **Benefits**: Serverless compute service that lets you run event-triggered code.
-* **UI**: Yes
-* **Subscription(s)**: Azure account + Cognitive Services resource + Azure Functions subscription
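-
-The following is a minimal sketch of that pattern: a Python HTTP-triggered function that forwards posted text to a Language resource for key phrase extraction. The `LANGUAGE_ENDPOINT` and `LANGUAGE_KEY` application settings are assumptions for this illustration, and the HTTP trigger binding configuration is omitted.
-
-```python
-import json
-import os
-
-import azure.functions as func
-import requests
-
-
-def main(req: func.HttpRequest) -> func.HttpResponse:
-    """HTTP trigger that extracts key phrases from the posted text."""
-    text = req.get_json().get("text", "")
-
-    # Placeholder settings: add these to the function app's application settings.
-    endpoint = os.environ["LANGUAGE_ENDPOINT"]
-    key = os.environ["LANGUAGE_KEY"]
-
-    body = {"documents": [{"id": "1", "language": "en", "text": text}]}
-    resp = requests.post(
-        f"{endpoint}/text/analytics/v3.1/keyPhrases",
-        headers={"Ocp-Apim-Subscription-Key": key},
-        json=body,
-    )
-    resp.raise_for_status()
-
-    key_phrases = resp.json()["documents"][0]["keyPhrases"]
-    return func.HttpResponse(json.dumps({"keyPhrases": key_phrases}), mimetype="application/json")
-```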
-
-### Azure Logic Apps
-
-[Azure Logic Apps](../logic-apps/index.yml) share the same workflow designer and connectors as Power Automate but provide more advanced control, including integrations with Visual Studio and DevOps. Logic Apps make it easy to integrate with your Cognitive Services resources through service-specific connectors that provide a proxy or wrapper around the APIs. These are the same connectors as those available in Power Automate.
-
-* **Target user(s)**: Developers, integrators, IT pros, DevOps
-* **Benefits**: Designer-first (declarative) development model providing advanced options and integration in a low-code solution
-* **UI**: Yes
-* **Subscription(s)**: Azure account + Cognitive Services resource + Logic Apps deployment
-
-### Power Automate
-
-Power Automate is a service in the [Power Platform](/power-platform/) that helps you create automated workflows between apps and services without writing code. We offer several connectors to make it easy to interact with your Cognitive Services resource in a Power Automate solution. Power Automate is built on top of Logic Apps.
-
-* **Target user(s)**: Business users (analysts) and SharePoint administrators
-* **Benefits**: Automate repetitive manual tasks simply by recording mouse clicks, keystrokes, and copy-paste steps from your desktop.
-* **UI tools**: Yes - UI only
-* **Subscription(s)**: Azure account + Cognitive Services resource + Power Automate Subscription + Office 365 Subscription
-
-### AI Builder
-
-[AI Builder](/ai-builder/overview) is a Microsoft Power Platform capability you can use to improve business performance by automating processes and predicting outcomes. AI Builder brings the power of AI to your solutions through a point-and-click experience. Many Cognitive Services, such as the Language service and Computer Vision, are directly integrated, so you don't need to create your own Cognitive Services resources.
-
-* **Target user(s)**: Business users (analysts) and SharePoint administrators
-* **Benefits**: A turnkey solution that brings the power of AI through a point-and-click experience. No coding or data science skills required.
-* **UI tools**: Yes - UI only
-* **Subscription(s)**: AI Builder
-
-### Continuous integration and deployment
-
-You can use Azure DevOps and GitHub Actions to manage your deployments. In the [section below](#continuous-integration-and-delivery-with-devops-and-github-actions), we have two examples of CI/CD integrations to train and deploy custom models for Speech and the Language Understanding (LUIS) service.
-
-* **Target user(s)**: Developers, data scientists, and data engineers
-* **Benefits**: Allows you to continuously adjust, update, and deploy applications and models programmatically. There is significant benefit when regularly using your data to improve and update models for Speech, Vision, Language, and Decision.
-* **UI tools**: N/A - Code only
-* **Subscription(s)**: Azure account + Cognitive Services resource + GitHub account
-
-## Tools to customize and configure models
-
-As you progress on your journey building an application or workflow with the Cognitive Services, you may find that you need to customize the model to achieve the desired performance. Many of our services allow you to build on top of the pre-built models to meet your specific business needs. For all our customizable services, we provide both a UI-driven experience for walking through the process as well as APIs for code-driven training. For example:
-
-* You want to train a Custom Speech model to correctly recognize medical terms with a word error rate (WER) below 3 percent
-* You want to build an image classifier with Custom Vision that can tell the difference between coniferous and deciduous trees
-* You want to build a custom neural voice with your personal voice data for an improved automated customer experience
-
-The tools that you will use to train and configure models are different from those that you'll use to call the Cognitive Services. In many cases, Cognitive Services that support customization provide portals and UI tools designed to help you train, evaluate, and deploy models. Let's quickly take a look at a few options:<br><br>
-
-| Pillar | Service | Customization UI | Quickstart |
-|--||||
-| Vision | Custom Vision | https://www.customvision.ai/ | [Quickstart](./custom-vision-service/quickstarts/image-classification.md?pivots=programming-language-csharp) |
-| Decision | Personalizer | UI is available in the Azure portal under your Personalizer resource. | [Quickstart](./personalizer/quickstart-personalizer-sdk.md) |
-| Language | Language Understanding (LUIS) | https://www.luis.ai/ | |
-| Language | QnA Maker | https://www.qnamaker.ai/ | [Quickstart](./qnamaker/quickstarts/create-publish-knowledge-base.md) |
-| Language | Translator/Custom Translator | https://portal.customtranslator.azure.ai/ | [Quickstart](./translator/custom-translator/quickstart-build-deploy-custom-model.md) |
-| Speech | Custom Commands | https://speech.microsoft.com/ | [Quickstart](./speech-service/custom-commands.md) |
-| Speech | Custom Speech | https://speech.microsoft.com/ | [Quickstart](./speech-service/custom-speech-overview.md) |
-| Speech | Custom Voice | https://speech.microsoft.com/ | [Quickstart](./speech-service/how-to-custom-voice.md) |
-
-### Continuous integration and delivery with DevOps and GitHub Actions
-
-Language Understanding and the Speech service offer continuous integration and continuous deployment solutions that are powered by Azure DevOps and GitHub Actions. These tools are used for automated training, testing, and release management of custom models.
-
-* [CI/CD for Custom Speech](./speech-service/how-to-custom-speech-continuous-integration-continuous-deployment.md)
-* [CI/CD for LUIS](./luis/luis-concept-devops-automation.md)
-
-## On-premises containers
-
-Many of the Cognitive Services can be deployed in containers for on-premises access and use. Using these containers gives you the flexibility to bring Cognitive Services closer to your data for compliance, security, or other operational reasons. For a complete list of Cognitive Services containers, see [On-premises containers for Cognitive Services](./cognitive-services-container-support.md).
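-
-Requests to a container use the same REST surface as the cloud endpoint, just addressed to the host running the container. The following sketch assumes a Language sentiment container is listening locally on port 5000 and exposes the same v3.1 sentiment route as the hosted service; because containers are configured with billing details at startup, the request itself doesn't carry a subscription key.
-
-```python
-import requests
-
-# Assumed local container endpoint for this illustration.
-url = "http://localhost:5000/text/analytics/v3.1/sentiment"
-body = {"documents": [{"id": "1", "language": "en", "text": "The container responded quickly."}]}
-
-resp = requests.post(url, json=body)
-resp.raise_for_status()
-print(resp.json()["documents"][0]["sentiment"])
-```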
-
-## Next steps
-<!--
-* Learn more about low code development options for Cognitive Services -->
-* [Create a Cognitive Services resource and start building](./cognitive-services-apis-create-account.md?tabs=multiservice%252clinux)
cognitive-services Cognitive Services Environment Variables https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/cognitive-services-environment-variables.md
- Title: Use environment variables with Cognitive Services-
-description: "This guide shows you how to set and retrieve environment variables to handle your Cognitive Services subscription credentials in a more secure way when you test out applications."
----- Previously updated : 09/09/2022---
-# Use environment variables with Cognitive Services
-
-This guide shows you how to set and retrieve environment variables to handle your Cognitive Services subscription credentials in a more secure way when you test out applications.
-
-## Set an environment variable
-
-To set environment variables, use one of the following commands, where `ENVIRONMENT_VARIABLE_KEY` is the named key and `value` is the value stored in the environment variable.
-
-# [Command Line](#tab/command-line)
-
-Use the following command to create and assign a persisted environment variable, given the input value.
-
-```CMD
-:: Assigns the env var to the value
-setx ENVIRONMENT_VARIABLE_KEY "value"
-```
-
-In a new instance of the Command Prompt, use the following command to read the environment variable.
-
-```CMD
-:: Prints the env var value
-echo %ENVIRONMENT_VARIABLE_KEY%
-```
-
-# [PowerShell](#tab/powershell)
-
-Use the following command to create and assign a persisted environment variable, given the input value.
-
-```powershell
-# Assigns the env var to the value
-[System.Environment]::SetEnvironmentVariable('ENVIRONMENT_VARIABLE_KEY', 'value', 'User')
-```
-
-In a new instance of Windows PowerShell, use the following command to read the environment variable.
-
-```powershell
-# Prints the env var value
-[System.Environment]::GetEnvironmentVariable('ENVIRONMENT_VARIABLE_KEY')
-```
-
-# [Bash](#tab/bash)
-
-Use the following command to create and assign a persisted environment variable, given the input value.
-
-```Bash
-# Assigns the env var to the value
-echo export ENVIRONMENT_VARIABLE_KEY="value" >> /etc/environment && source /etc/environment
-```
-
-In a new **Bash** session, use the following command to read the environment variable.
-
-```Bash
-# Prints the env var value
-echo "${ENVIRONMENT_VARIABLE_KEY}"
-
-# Or use printenv:
-# printenv ENVIRONMENT_VARIABLE_KEY
-```
---
-> [!TIP]
-> After you set an environment variable, restart your integrated development environment (IDE) to ensure that the newly added environment variables are available.
-
-## Retrieve an environment variable
-
-To use an environment variable in your code, it must be read into memory. Use one of the following code snippets, depending on which language you're using. These code snippets demonstrate how to get an environment variable given the `ENVIRONMENT_VARIABLE_KEY` and assign the value to a program variable named `value`.
-
-# [C#](#tab/csharp)
-
-For more information, see <a href="/dotnet/api/system.environment.getenvironmentvariable" target="_blank">`Environment.GetEnvironmentVariable` </a>.
-
-```csharp
-using static System.Environment;
-
-class Program
-{
- static void Main()
- {
- // Get the named env var, and assign it to the value variable
- var value =
- GetEnvironmentVariable(
- "ENVIRONMENT_VARIABLE_KEY");
- }
-}
-```
-
-# [C++](#tab/cpp)
-
-For more information, see <a href="/cpp/c-runtime-library/reference/getenv-s-wgetenv-s" target="_blank">`getenv_s`</a> and <a href="/cpp/c-runtime-library/reference/getenv-wgetenv" target="_blank">`getenv`</a>.
-
-```cpp
-#include <iostream>
-#include <memory>
-#include <stdlib.h>
-#include <string>
-
-std::string GetEnvironmentVariable(const char* name);
-
-int main()
-{
- // Get the named env var, and assign it to the value variable
- auto value = GetEnvironmentVariable("ENVIRONMENT_VARIABLE_KEY");
-}
-
-std::string GetEnvironmentVariable(const char* name)
-{
-#if defined(_MSC_VER)
- size_t requiredSize = 0;
- (void)getenv_s(&requiredSize, nullptr, 0, name);
- if (requiredSize == 0)
- {
- return "";
- }
- auto buffer = std::make_unique<char[]>(requiredSize);
- (void)getenv_s(&requiredSize, buffer.get(), requiredSize, name);
- return buffer.get();
-#else
- auto value = getenv(name);
- return value ? value : "";
-#endif
-}
-```
-
-# [Java](#tab/java)
-
-For more information, see <a href="https://docs.oracle.com/javase/7/docs/api/java/lang/System.html#getenv(java.lang.String)" target="_blank">`System.getenv` </a>.
-
-```java
-import java.lang.*;
-
-public class Program {
- public static void main(String[] args) throws Exception {
- // Get the named env var, and assign it to the value variable
-        // Get the named env var, and assign it to the value variable
-        String value =
-            System.getenv(
-                "ENVIRONMENT_VARIABLE_KEY");
- }
-}
-```
-
-# [Node.js](#tab/node-js)
-
-For more information, see <a href="https://nodejs.org/api/process.html#process_process_env" target="_blank">`process.env` </a>.
-
-```javascript
-// Get the named env var, and assign it to the value variable
-const value =
- process.env.ENVIRONMENT_VARIABLE_KEY;
-```
-
-# [Python](#tab/python)
-
-For more information, see <a href="https://docs.python.org/2/library/os.html#os.environ" target="_blank">`os.environ` </a>.
-
-```python
-import os
-
-# Get the named env var, and assign it to the value variable
-value = os.environ['ENVIRONMENT_VARIABLE_KEY']
-```
-
-# [Objective-C](#tab/objective-c)
-
-For more information, see <a href="https://developer.apple.com/documentation/foundation/nsprocessinfo/1417911-environment?language=objc" target="_blank">`environment` </a>.
-
-```objectivec
-// Get the named env var, and assign it to the value variable
-NSString* value =
- [[[NSProcessInfo processInfo]environment]objectForKey:@"ENVIRONMENT_VARIABLE_KEY"];
-```
---
-## Next steps
-
-* Explore [Cognitive Services](./what-are-cognitive-services.md) and choose a service to get started.
cognitive-services Cognitive Services Limited Access https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/cognitive-services-limited-access.md
- Title: Limited Access features for Cognitive Services-
-description: Azure Cognitive Services that are available with Limited Access are described below.
----- Previously updated : 10/27/2022---
-# Limited Access features for Cognitive Services
-
-Our vision is to empower developers and organizations to use AI to transform society in positive ways. We encourage responsible AI practices to protect the rights and safety of individuals. To achieve this, Microsoft has implemented a Limited Access policy grounded in our [AI Principles](https://www.microsoft.com/ai/responsible-ai) to support responsible deployment of Azure services.
-
-## What is Limited Access?
-
-Limited Access services require registration, and only customers managed by Microsoft, meaning those who are working directly with Microsoft account teams, are eligible for access. The use of these services is limited to the use case selected at the time of registration. Customers must acknowledge that they've reviewed and agree to the terms of service. Microsoft may require customers to reverify this information.
-
-Limited Access services are made available to customers under the terms governing their subscription to Microsoft Azure Services (including the [Service Specific Terms](https://go.microsoft.com/fwlink/?linkid=2018760)). Review these terms carefully as they contain important conditions and obligations governing your use of Limited Access services.
-
-## List of Limited Access services
-
-The following services are Limited Access:
-
-- [Custom Neural Voice](/legal/cognitive-services/speech-service/custom-neural-voice/limited-access-custom-neural-voice?context=/azure/cognitive-services/speech-service/context/context): Pro features
-- [Speaker Recognition](/legal/cognitive-services/speech-service/speaker-recognition/limited-access-speaker-recognition?context=/azure/cognitive-services/speech-service/context/context): All features
-- [Face API](/legal/cognitive-services/computer-vision/limited-access-identity?context=/azure/cognitive-services/computer-vision/context/context): Identify and Verify features, face ID property
-- [Computer Vision](/legal/cognitive-services/computer-vision/limited-access?context=/azure/cognitive-services/computer-vision/context/context): Celebrity Recognition feature
-- [Azure Video Indexer](../azure-video-indexer/limited-access-features.md): Celebrity Recognition and Face Identify features
-- [Azure OpenAI](/legal/cognitive-services/openai/limited-access): Azure OpenAI Service, modified abuse monitoring, and modified content filters
-
-Features of these services that aren't listed above are available without registration.
-
-## FAQ about Limited Access
-
-### How do I register for access?
-
-Submit a registration form for each Limited Access service you would like to use:
-- [Custom Neural Voice](https://aka.ms/customneural): Pro features
-- [Speaker Recognition](https://aka.ms/azure-speaker-recognition): All features
-- [Face API](https://aka.ms/facerecognition): Identify and Verify features
-- [Computer Vision](https://aka.ms/facerecognition): Celebrity Recognition feature
-- [Azure Video Indexer](https://aka.ms/facerecognition): Celebrity Recognition and Face Identify features
-- [Azure OpenAI](/legal/cognitive-services/openai/limited-access): Azure OpenAI Service, modified abuse monitoring, and modified content filters
-
-### How long will the registration process take?
-
-Review may take 5-10 business days. You will receive an email as soon as your application is reviewed.
-
-### Who is eligible to use Limited Access services?
-
-Limited Access services are available only to customers managed by Microsoft. Additionally, Limited Access services are only available for certain use cases, and customers must select their intended use case in their registration form.
-
-Please use an email address affiliated with your organization in your registration form. Registration forms submitted with personal email addresses will be denied.
-
-If you're not a managed customer, we invite you to submit an application using the same forms and we will reach out to you about any opportunities to join an eligibility program.
-
-### What is a managed customer? What if I don't know whether I'm a managed customer?
-
-Managed customers work with Microsoft account teams. We invite you to submit a registration form for the features you'd like to use, and we'll verify your eligibility for access. We are not able to accept requests to become a managed customer at this time.
-
-### What happens if I'm an existing customer and I don't register?
-
-Existing customers have until June 30, 2023 to submit a registration form and be approved to continue using Limited Access services after June 30, 2023. We recommend allowing 10 business days for review. Without an approved application, you will be denied access after June 30, 2023.
-
-The registration forms can be found here:
-- [Custom Neural Voice](https://aka.ms/customneural): Pro features
-- [Speaker Recognition](https://aka.ms/azure-speaker-recognition): All features
-- [Face API](https://aka.ms/facerecognition): Identify and Verify features
-- [Computer Vision](https://aka.ms/facerecognition): Celebrity Recognition feature
-- [Azure Video Indexer](https://aka.ms/facerecognition): Celebrity Recognition and Face Identify features
-- Azure OpenAI: [Azure OpenAI service](https://customervoice.microsoft.com/Pages/ResponsePage.aspx?id=v4j5cvGGr0GRqy180BHbR7en2Ais5pxKtso_Pz4b1_xUOFA5Qk1UWDRBMjg0WFhPMkIzTzhKQ1dWNyQlQCN0PWcu), modified abuse monitoring, and modified content filters
-
-### I'm an existing customer who applied for access to Custom Neural Voice or Speaker Recognition, do I have to register to keep using these services?
-
-We're always looking for opportunities to improve our Responsible AI program, and Limited Access is an update to our service gating processes. If you've previously applied for and been granted access to Custom Neural Voice or Speaker Recognition, we request that you submit a new registration form to continue using these services beyond June 30, 2023.
-
-If you're an existing customer using Custom Neural Voice or Speaker Recognition on June 21, 2022, you have until June 30, 2023 to submit a registration form with your selected use case and receive approval to continue using these services after June 30, 2023. We recommend allowing 10 days for application processing. Existing customers can continue using the service until June 30, 2023, after which they must be approved for access. The registration forms can be found here:
-- [Custom Neural Voice](https://aka.ms/customneural): Pro features
-- [Speaker Recognition](https://aka.ms/azure-speaker-recognition): All features
-
-### What if my use case isn't on the registration form?
-
-Limited Access features are only available for the use cases listed on the registration forms. If your desired use case isn't listed, let us know in this [feedback form](https://aka.ms/CogSvcsLimitedAccessFeedback) so we can improve our service offerings.
-
-### Where can I use Limited Access services?
-
-Search [here](https://azure.microsoft.com/global-infrastructure/services/) for a Limited Access service to view its regional availability. In the Brazil South and UAE North datacenter regions, we are prioritizing access for commercial customers managed by Microsoft.
-
-Detailed information about supported regions for Custom Neural Voice and Speaker Recognition operations can be found [here](./speech-service/regions.md).
-
-### What happens to my data if my application is denied?
-
-If you're an existing customer and your application for access is denied, you will no longer be able to use Limited Access features after June 30, 2023. Your data is subject to Microsoft's data retention [policies](https://www.microsoft.com/trust-center/privacy/data-management#:~:text=If%20you%20terminate%20a%20cloud,data%20or%20renew%20your%20subscription.).
-
-### How long will the registration process take?
-
-You'll receive communication from us about your application within 10 business days. In some cases, reviews can take longer. You'll receive an email as soon as your application is reviewed.
-
-## Help and support
-
-Report abuse of Limited Access services [here](https://aka.ms/reportabuse).
cognitive-services Cognitive Services Support Options https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/cognitive-services-support-options.md
- Title: Azure Cognitive Services support and help options
-description: How to obtain help and support for questions and problems when you create applications that integrate with Azure Cognitive Services.
----- Previously updated : 06/28/2022---
-# Azure Cognitive Services support and help options
-
-Are you just starting to explore the functionality of Azure Cognitive Services? Perhaps you are implementing a new feature in your application. Or after using the service, do you have suggestions on how to improve it? Here are options for where you can get support, stay up-to-date, give feedback, and report bugs for Cognitive Services.
-
-## Create an Azure support request
-
-<div class='icon is-large'>
- <img alt='Azure support' src='/media/logos/logo_azure.svg'>
-</div>
-
-Explore the range of [Azure support options and choose the plan](https://azure.microsoft.com/support/plans) that best fits, whether you're a developer just starting your cloud journey or a large organization deploying business-critical, strategic applications. Azure customers can create and manage support requests in the Azure portal.
-
-* [Azure portal](https://portal.azure.com/#blade/Microsoft_Azure_Support/HelpAndSupportBlade/overview)
-* [Azure portal for the United States government](https://portal.azure.us)
--
-## Post a question on Microsoft Q&A
-
-For quick and reliable answers on your technical product questions from Microsoft Engineers, Azure Most Valuable Professionals (MVPs), or our expert community, engage with us on [Microsoft Q&A](/answers/products/azure?product=all), Azure's preferred destination for community support.
-
-If you can't find an answer to your problem using search, submit a new question to Microsoft Q&A. Use one of the following tags when you ask your question:
-
-* [Cognitive Services](/answers/topics/azure-cognitive-services.html)
-
-**Vision**
-
-* [Computer Vision](/answers/topics/azure-computer-vision.html)
-* [Custom Vision](/answers/topics/azure-custom-vision.html)
-* [Face](/answers/topics/azure-face.html)
-* [Form Recognizer](/answers/topics/azure-form-recognizer.html)
-* [Video Indexer](/answers/topics/azure-media-services.html)
-
-**Language**
-
-* [Immersive Reader](/answers/topics/azure-immersive-reader.html)
-* [Language Understanding (LUIS)](/answers/topics/azure-language-understanding.html)
-* [QnA Maker](/answers/topics/azure-qna-maker.html)
-* [Language service](/answers/topics/azure-text-analytics.html)
-* [Translator](/answers/topics/azure-translator.html)
-
-**Speech**
-
-* [Speech service](/answers/topics/azure-speech.html)
--
-**Decision**
-
-* [Anomaly Detector](/answers/topics/azure-anomaly-detector.html)
-* [Content Moderator](/answers/topics/azure-content-moderator.html)
-* [Metrics Advisor](/answers/topics/148981/azure-metrics-advisor.html)
-* [Personalizer](/answers/topics/azure-personalizer.html)
-
-**Azure OpenAI**
-
-* [Azure OpenAI](/answers/topics/azure-openai.html)
-
-## Post a question to Stack Overflow
-
-<div class='icon is-large'>
- <img alt='Stack Overflow' src='/media/logos/logo_stackoverflow.svg'>
-</div>
-
-For answers on your developer questions from the largest community developer ecosystem, ask your question on Stack Overflow.
-
-If you do submit a new question to Stack Overflow, please use one or more of the following tags when you create the question:
-
-* [Cognitive Services](https://stackoverflow.com/questions/tagged/azure-cognitive-services)
-
-**Vision**
-
-* [Computer Vision](https://stackoverflow.com/search?q=azure+computer+vision)
-* [Custom Vision](https://stackoverflow.com/search?q=azure+custom+vision)
-* [Face](https://stackoverflow.com/search?q=azure+face)
-* [Form Recognizer](https://stackoverflow.com/search?q=azure+form+recognizer)
-* [Video Indexer](https://stackoverflow.com/search?q=azure+video+indexer)
-
-**Language**
-
-* [Immersive Reader](https://stackoverflow.com/search?q=azure+immersive+reader)
-* [Language Understanding (LUIS)](https://stackoverflow.com/search?q=azure+luis+language+understanding)
-* [QnA Maker](https://stackoverflow.com/search?q=azure+qna+maker)
-* [Language service](https://stackoverflow.com/search?q=azure+text+analytics)
-* [Translator](https://stackoverflow.com/search?q=azure+translator+text)
-
-**Speech**
-
-* [Speech service](https://stackoverflow.com/search?q=azure+speech)
-
-**Decision**
-
-* [Anomaly Detector](https://stackoverflow.com/search?q=azure+anomaly+detector)
-* [Content Moderator](https://stackoverflow.com/search?q=azure+content+moderator)
-* [Metrics Advisor](https://stackoverflow.com/search?q=azure+metrics+advisor)
-* [Personalizer](https://stackoverflow.com/search?q=azure+personalizer)
-
-**Azure OpenAI**
-
-* [Azure OpenAI](https://stackoverflow.com/search?q=azure+openai)
-
-## Submit feedback
-
-To request new features, post them on https://feedback.azure.com. Share your ideas for making Cognitive Services and its APIs work better for the applications you develop.
-
-* [Cognitive Services](https://feedback.azure.com/d365community/forum/09041fae-0b25-ec11-b6e6-000d3a4f0858)
-
-**Vision**
-
-* [Computer Vision](https://feedback.azure.com/d365community/forum/09041fae-0b25-ec11-b6e6-000d3a4f0858?c=7a8853b4-0b25-ec11-b6e6-000d3a4f0858)
-* [Custom Vision](https://feedback.azure.com/d365community/forum/09041fae-0b25-ec11-b6e6-000d3a4f0858?c=7a8853b4-0b25-ec11-b6e6-000d3a4f0858)
-* [Face](https://feedback.azure.com/d365community/forum/09041fae-0b25-ec11-b6e6-000d3a4f0858?c=7a8853b4-0b25-ec11-b6e6-000d3a4f0858)
-* [Form Recognizer](https://feedback.azure.com/d365community/forum/09041fae-0b25-ec11-b6e6-000d3a4f0858?c=7a8853b4-0b25-ec11-b6e6-000d3a4f0858)
-* [Video Indexer](https://feedback.azure.com/d365community/forum/09041fae-0b25-ec11-b6e6-000d3a4f0858?c=6483a3c0-0b25-ec11-b6e6-000d3a4f0858)
--
-**Language**
-
-* [Immersive Reader](https://feedback.azure.com/d365community/forum/09041fae-0b25-ec11-b6e6-000d3a4f0858?c=449a6fba-0b25-ec11-b6e6-000d3a4f0858)
-* [Language Understanding (LUIS)](https://feedback.azure.com/d365community/forum/09041fae-0b25-ec11-b6e6-000d3a4f0858?c=449a6fba-0b25-ec11-b6e6-000d3a4f0858)
-* [QnA Maker](https://feedback.azure.com/d365community/forum/09041fae-0b25-ec11-b6e6-000d3a4f0858?c=449a6fba-0b25-ec11-b6e6-000d3a4f0858)
-* [Language service](https://feedback.azure.com/d365community/forum/09041fae-0b25-ec11-b6e6-000d3a4f0858?c=449a6fba-0b25-ec11-b6e6-000d3a4f0858)
-* [Translator](https://feedback.azure.com/d365community/forum/09041fae-0b25-ec11-b6e6-000d3a4f0858?c=449a6fba-0b25-ec11-b6e6-000d3a4f0858)
-
-**Speech**
-
-* [Speech service](https://feedback.azure.com/d365community/forum/09041fae-0b25-ec11-b6e6-000d3a4f0858?c=21041fae-0b25-ec11-b6e6-000d3a4f0858)
-
-**Decision**
-
-* [Anomaly Detector](https://feedback.azure.com/d365community/forum/09041fae-0b25-ec11-b6e6-000d3a4f0858?c=6c8853b4-0b25-ec11-b6e6-000d3a4f0858)
-* [Content Moderator](https://feedback.azure.com/d365community/forum/09041fae-0b25-ec11-b6e6-000d3a4f0858?c=6c8853b4-0b25-ec11-b6e6-000d3a4f0858)
-* [Metrics Advisor](https://feedback.azure.com/d365community/search/?q=%22Metrics+Advisor%22)
-* [Personalizer](https://feedback.azure.com/d365community/forum/09041fae-0b25-ec11-b6e6-000d3a4f0858?c=6c8853b4-0b25-ec11-b6e6-000d3a4f0858)
-
-## Stay informed
-
-Staying informed about features in a new release or news on the Azure blog can help you determine whether an issue is a programming error, a service bug, or a feature that's not yet available in Cognitive Services.
-
-* Learn more about product updates, roadmap, and announcements in [Azure Updates](https://azure.microsoft.com/updates/?category=ai-machine-learning&query=Azure%20Cognitive%20Services).
-* News about Cognitive Services is shared in the [Azure blog](https://azure.microsoft.com/blog/topics/cognitive-services/).
-* [Join the conversation on Reddit](https://www.reddit.com/r/AZURE/search/?q=Cognitive%20Services&restrict_sr=1) about Cognitive Services.
-
-## Next steps
-
-> [!div class="nextstepaction"]
-> [What are Azure Cognitive Services?](./what-are-cognitive-services.md)
cognitive-services Cognitive Services Virtual Networks https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/cognitive-services-virtual-networks.md
- Title: Configure Virtual Networks for Azure Cognitive Services-
-description: Configure layered network security for your Cognitive Services resources.
------ Previously updated : 07/04/2023---
-# Configure Azure Cognitive Services virtual networks
-
-Azure Cognitive Services provides a layered security model. This model enables you to secure your Cognitive Services accounts to a specific subset of networks. When network rules are configured, only applications that request data over the specified set of networks can access the account. You can limit access to your resources with request filtering, allowing only requests that originate from specified IP addresses, IP ranges, or from a list of subnets in [Azure Virtual Networks](../virtual-network/virtual-networks-overview.md).
-
-An application that accesses a Cognitive Services resource when network rules are in effect requires authorization. Authorization is supported with [Azure Active Directory](../active-directory/fundamentals/active-directory-whatis.md) (Azure AD) credentials or with a valid API key.
-
-> [!IMPORTANT]
-> Turning on firewall rules for your Cognitive Services account blocks incoming requests for data by default. In order to allow requests through, one of the following conditions needs to be met:
->
-> * The request should originate from a service operating within an Azure Virtual Network (VNet) on the allowed subnet list of the target Cognitive Services account. The endpoint in requests originated from VNet needs to be set as the [custom subdomain](cognitive-services-custom-subdomains.md) of your Cognitive Services account.
-> * Or the request should originate from an allowed list of IP addresses.
->
-> Requests that are blocked include those from other Azure services, from the Azure portal, from logging and metrics services, and so on.
--
-## Scenarios
-
-To secure your Cognitive Services resource, you should first configure a rule to deny access to traffic from all networks (including internet traffic) by default. Then, you should configure rules that grant access to traffic from specific VNets. This configuration enables you to build a secure network boundary for your applications. You can also configure rules to grant access to traffic from select public internet IP address ranges, enabling connections from specific internet or on-premises clients.
-
-Network rules are enforced on all network protocols to Azure Cognitive Services, including REST and WebSocket. To access data using tools such as the Azure test consoles, explicit network rules must be configured. You can apply network rules to existing Cognitive Services resources, or when you create new Cognitive Services resources. Once network rules are applied, they're enforced for all requests.
-
-## Supported regions and service offerings
-
-Virtual networks (VNETs) are supported in [regions where Cognitive Services are available](https://azure.microsoft.com/global-infrastructure/services/). Cognitive Services supports service tags for network rules configuration. The services listed below are included in the **CognitiveServicesManagement** service tag.
-
-> [!div class="checklist"]
-> * Anomaly Detector
-> * Azure OpenAI
-> * Computer Vision
-> * Content Moderator
-> * Custom Vision
-> * Face
-> * Language Understanding (LUIS)
-> * Personalizer
-> * Speech service
-> * Language service
-> * QnA Maker
-> * Translator Text
--
-> [!NOTE]
-> If you're using Azure OpenAI, LUIS, Speech Services, or Language services, the **CognitiveServicesManagement** tag only enables you to use the service with the SDK or REST API. To access and use Azure OpenAI Studio, the LUIS portal, Speech Studio, or Language Studio from a virtual network, you need to use the following tags:
->
-> * **AzureActiveDirectory**
-> * **AzureFrontDoor.Frontend**
-> * **AzureResourceManager**
-> * **CognitiveServicesManagement**
-> * **CognitiveServicesFrontEnd**
--
-## Change the default network access rule
-
-By default, Cognitive Services resources accept connections from clients on any network. To limit access to selected networks, you must first change the default action.
-
-> [!WARNING]
-> Making changes to network rules can impact your applications' ability to connect to Azure Cognitive Services. Setting the default network rule to **deny** blocks all access to the data unless specific network rules that **grant** access are also applied. Be sure to grant access to any allowed networks using network rules before you change the default rule to deny access. If you are allowlisting IP addresses for your on-premises network, be sure to add all possible outgoing public IP addresses from your on-premises network.
-
-### Managing default network access rules
-
-You can manage default network access rules for Cognitive Services resources through the Azure portal, PowerShell, or the Azure CLI.
-
-# [Azure portal](#tab/portal)
-
-1. Go to the Cognitive Services resource you want to secure.
-
-1. Select **Virtual network** under **RESOURCE MANAGEMENT**.
-
- ![Virtual network option](media/vnet/virtual-network-blade.png)
-
-1. To deny access by default, choose to allow access from **Selected networks**. With the **Selected networks** setting alone, with no **Virtual networks** or **Address ranges** configured, all access is effectively denied. When all access is denied, requests attempting to consume the Cognitive Services resource aren't permitted. The Azure portal, Azure PowerShell, or the Azure CLI can still be used to configure the Cognitive Services resource.
-1. To allow traffic from all networks, choose to allow access from **All networks**.
-
- ![Virtual networks deny](media/vnet/virtual-network-deny.png)
-
-1. Select **Save** to apply your changes.
-
-# [PowerShell](#tab/powershell)
-
-1. Install the [Azure PowerShell](/powershell/azure/install-azure-powershell) and [sign in](/powershell/azure/authenticate-azureps), or select **Try it**.
-
-1. Display the status of the default rule for the Cognitive Services resource.
-
- ```azurepowershell-interactive
- $parameters = @{
- "ResourceGroupName"= "myresourcegroup"
- "Name"= "myaccount"
-    }
- (Get-AzCognitiveServicesAccountNetworkRuleSet @parameters).DefaultAction
- ```
-
-1. Set the default rule to deny network access by default.
-
- ```azurepowershell-interactive
-    $parameters = @{
-        ResourceGroupName = "myresourcegroup"
-        Name = "myaccount"
-        DefaultAction = "Deny"
-    }
- Update-AzCognitiveServicesAccountNetworkRuleSet @parameters
- ```
-
-1. Set the default rule to allow network access by default.
-
- ```azurepowershell-interactive
-    $parameters = @{
-        ResourceGroupName = "myresourcegroup"
-        Name = "myaccount"
-        DefaultAction = "Allow"
-    }
- Update-AzCognitiveServicesAccountNetworkRuleSet @parameters
- ```
-
-# [Azure CLI](#tab/azure-cli)
-
-1. Install the [Azure CLI](/cli/azure/install-azure-cli) and [sign in](/cli/azure/authenticate-azure-cli), or select **Try it**.
-
-1. Display the status of the default rule for the Cognitive Services resource.
-
- ```azurecli-interactive
- az cognitiveservices account show \
- -g "myresourcegroup" -n "myaccount" \
- --query networkRuleSet.defaultAction
- ```
-
-1. Set the default rule to deny network access by default.
-
- ```azurecli-interactive
- az resource update \
- --ids {resourceId} \
- --set properties.networkAcls="{'defaultAction':'Deny'}"
- ```
-
-1. Set the default rule to allow network access by default.
-
- ```azurecli-interactive
- az resource update \
- --ids {resourceId} \
- --set properties.networkAcls="{'defaultAction':'Allow'}"
- ```
-
-***
-
-## Grant access from a virtual network
-
-You can configure Cognitive Services resources to allow access only from specific subnets. The allowed subnets may belong to a VNet in the same subscription, or in a different subscription, including subscriptions belonging to a different Azure Active Directory tenant.
-
-Enable a [service endpoint](../virtual-network/virtual-network-service-endpoints-overview.md) for Azure Cognitive Services within the VNet. The service endpoint routes traffic from the VNet through an optimal path to the Azure Cognitive Services service. The identities of the subnet and the virtual network are also transmitted with each request. Administrators can then configure network rules for the Cognitive Services resource that allow requests to be received from specific subnets in a VNet. Clients granted access via these network rules must continue to meet the authorization requirements of the Cognitive Services resource to access the data.
-
-Each Cognitive Services resource supports up to 100 virtual network rules, which may be combined with [IP network rules](#grant-access-from-an-internet-ip-range).
-
-### Required permissions
-
-To apply a virtual network rule to a Cognitive Services resource, the user must have the appropriate permissions for the subnets being added. The required permission is the default *Contributor* role, or the *Cognitive Services Contributor* role. Required permissions can also be added to custom role definitions.
-
-Cognitive Services resource and the virtual networks granted access may be in different subscriptions, including subscriptions that are a part of a different Azure AD tenant.
-
-> [!NOTE]
-> Configuration of rules that grant access to subnets in virtual networks that are a part of a different Azure Active Directory tenant is currently only supported through PowerShell, CLI, and REST APIs. Such rules cannot be configured through the Azure portal, though they may be viewed in the portal.
-
-### Managing virtual network rules
-
-You can manage virtual network rules for Cognitive Services resources through the Azure portal, PowerShell, or the Azure CLI.
-
-# [Azure portal](#tab/portal)
-
-1. Go to the Cognitive Services resource you want to secure.
-
-1. Select **Virtual network** under **RESOURCE MANAGEMENT**.
-
-1. Check that you've selected to allow access from **Selected networks**.
-
-1. To grant access to a virtual network with an existing network rule, under **Virtual networks**, select **Add existing virtual network**.
-
- ![Add existing vNet](media/vnet/virtual-network-add-existing.png)
-
-1. Select the **Virtual networks** and **Subnets** options, and then select **Enable**.
-
- ![Add existing vNet details](media/vnet/virtual-network-add-existing-details.png)
-
-1. To create a new virtual network and grant it access, select **Add new virtual network**.
-
- ![Add new vNet](media/vnet/virtual-network-add-new.png)
-
-1. Provide the information necessary to create the new virtual network, and then select **Create**.
-
- ![Create vNet](media/vnet/virtual-network-create.png)
-
- > [!NOTE]
- > If a service endpoint for Azure Cognitive Services wasn't previously configured for the selected virtual network and subnets, you can configure it as part of this operation.
- >
- > Presently, only virtual networks belonging to the same Azure Active Directory tenant are shown for selection during rule creation. To grant access to a subnet in a virtual network belonging to another tenant, please use PowerShell, CLI or REST APIs.
-
-1. To remove a virtual network or subnet rule, select **...** to open the context menu for the virtual network or subnet, and select **Remove**.
-
- ![Remove vNet](media/vnet/virtual-network-remove.png)
-
-1. Select **Save** to apply your changes.
-
-# [PowerShell](#tab/powershell)
-
-1. Install the [Azure PowerShell](/powershell/azure/install-azure-powershell) and [sign in](/powershell/azure/authenticate-azureps), or select **Try it**.
-
-1. List virtual network rules.
-
- ```azurepowershell-interactive
- $parameters = @{
- "ResourceGroupName"= "myresourcegroup"
- "Name"= "myaccount"
-    }
- (Get-AzCognitiveServicesAccountNetworkRuleSet @parameters).VirtualNetworkRules
- ```
-
-1. Enable service endpoint for Azure Cognitive Services on an existing virtual network and subnet.
-
- ```azurepowershell-interactive
- Get-AzVirtualNetwork -ResourceGroupName "myresourcegroup" `
- -Name "myvnet" | Set-AzVirtualNetworkSubnetConfig -Name "mysubnet" `
- -AddressPrefix "10.0.0.0/24" `
- -ServiceEndpoint "Microsoft.CognitiveServices" | Set-AzVirtualNetwork
- ```
-
-1. Add a network rule for a virtual network and subnet.
-
- ```azurepowershell-interactive
-    $subParameters = @{
-        ResourceGroupName = "myresourcegroup"
-        Name = "myvnet"
-    }
-    $subnet = Get-AzVirtualNetwork @subParameters | Get-AzVirtualNetworkSubnetConfig -Name "mysubnet"
-
-    $parameters = @{
-        ResourceGroupName = "myresourcegroup"
-        Name = "myaccount"
-        VirtualNetworkResourceId = $subnet.Id
-    }
- Add-AzCognitiveServicesAccountNetworkRule @parameters
- ```
-
- > [!TIP]
- > To add a network rule for a subnet in a VNet belonging to another Azure AD tenant, use a fully-qualified **VirtualNetworkResourceId** parameter in the form "/subscriptions/subscription-ID/resourceGroups/resourceGroup-Name/providers/Microsoft.Network/virtualNetworks/vNet-name/subnets/subnet-name".
-
-1. Remove a network rule for a virtual network and subnet.
-
- ```azurepowershell-interactive
-    $subParameters = @{
-        ResourceGroupName = "myresourcegroup"
-        Name = "myvnet"
-    }
-    $subnet = Get-AzVirtualNetwork @subParameters | Get-AzVirtualNetworkSubnetConfig -Name "mysubnet"
-
-    $parameters = @{
-        ResourceGroupName = "myresourcegroup"
-        Name = "myaccount"
-        VirtualNetworkResourceId = $subnet.Id
-    }
- Remove-AzCognitiveServicesAccountNetworkRule @parameters
- ```
-
-# [Azure CLI](#tab/azure-cli)
-
-1. Install the [Azure CLI](/cli/azure/install-azure-cli) and [sign in](/cli/azure/authenticate-azure-cli), or select **Try it**.
-
-1. List virtual network rules.
-
- ```azurecli-interactive
- az cognitiveservices account network-rule list \
- -g "myresourcegroup" -n "myaccount" \
- --query virtualNetworkRules
- ```
-
-1. Enable service endpoint for Azure Cognitive Services on an existing virtual network and subnet.
-
- ```azurecli-interactive
- az network vnet subnet update -g "myresourcegroup" -n "mysubnet" \
- --vnet-name "myvnet" --service-endpoints "Microsoft.CognitiveServices"
- ```
-
-1. Add a network rule for a virtual network and subnet.
-
- ```azurecli-interactive
-    subnetid=$(az network vnet subnet show \
- -g "myresourcegroup" -n "mysubnet" --vnet-name "myvnet" \
- --query id --output tsv)
-
- # Use the captured subnet identifier as an argument to the network rule addition
- az cognitiveservices account network-rule add \
- -g "myresourcegroup" -n "myaccount" \
- --subnet $subnetid
- ```
-
- > [!TIP]
- > To add a rule for a subnet in a VNet belonging to another Azure AD tenant, use a fully-qualified subnet ID in the form "/subscriptions/subscription-ID/resourceGroups/resourceGroup-Name/providers/Microsoft.Network/virtualNetworks/vNet-name/subnets/subnet-name".
- >
- > You can use the **subscription** parameter to retrieve the subnet ID for a VNet belonging to another Azure AD tenant.
-
-1. Remove a network rule for a virtual network and subnet.
-
- ```azurecli-interactive
-    subnetid=$(az network vnet subnet show \
- -g "myresourcegroup" -n "mysubnet" --vnet-name "myvnet" \
- --query id --output tsv)
-
- # Use the captured subnet identifier as an argument to the network rule removal
- az cognitiveservices account network-rule remove \
- -g "myresourcegroup" -n "myaccount" \
- --subnet $subnetid
- ```
-
-***
-
-> [!IMPORTANT]
-> Be sure to [set the default rule](#change-the-default-network-access-rule) to **deny**, or network rules have no effect.
-
-## Grant access from an internet IP range
-
-You can configure Cognitive Services resources to allow access from specific public internet IP address ranges. This configuration grants access to specific services and on-premises networks, effectively blocking general internet traffic.
-
-Provide allowed internet address ranges using [CIDR notation](https://tools.ietf.org/html/rfc4632) in the form `16.17.18.0/24` or as individual IP addresses like `16.17.18.19`.
-
- > [!Tip]
- > Small address ranges using "/31" or "/32" prefix sizes are not supported. These ranges should be configured using individual IP address rules.
-
-IP network rules are only allowed for **public internet** IP addresses. IP address ranges reserved for private networks (as defined in [RFC 1918](https://tools.ietf.org/html/rfc1918#section-3)) aren't allowed in IP rules. Private networks include addresses that start with `10.*`, `172.16.*` - `172.31.*`, and `192.168.*`.
-
-Only IPV4 addresses are supported at this time. Each Cognitive Services resource supports up to 100 IP network rules, which may be combined with [Virtual network rules](#grant-access-from-a-virtual-network).
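-
-If you script rule creation, a client-side sanity check like the following can catch reserved ranges and unsupported prefix sizes before you submit them. This is a standalone sketch that uses only Python's standard `ipaddress` module; it isn't part of any Azure SDK.
-
-```python
-import ipaddress
-
-def is_valid_ip_rule(value):
-    """Rough pre-check for an IP network rule: public IPv4 only, no /31 or /32 ranges."""
-    network = ipaddress.ip_network(value, strict=False)
-    if network.version != 4:
-        return False  # only IPv4 rules are supported
-    if network.is_private:
-        return False  # RFC 1918 ranges such as 10.*, 172.16.* - 172.31.*, and 192.168.* are rejected
-    if "/" in value and network.prefixlen >= 31:
-        return False  # use an individual IP address rule instead of a /31 or /32 range
-    return True
-
-print(is_valid_ip_rule("16.17.18.0/24"))   # True
-print(is_valid_ip_rule("192.168.1.0/24"))  # False: private range
-print(is_valid_ip_rule("16.17.18.19/32"))  # False: use an individual address rule
-print(is_valid_ip_rule("16.17.18.19"))     # True: individual address
-```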
-
-### Configuring access from on-premises networks
-
-To grant access from your on-premises networks to your Cognitive Services resource with an IP network rule, you must identify the internet facing IP addresses used by your network. Contact your network administrator for help.
-
-If you're using [ExpressRoute](../expressroute/expressroute-introduction.md) on-premises for public peering or Microsoft peering, you need to identify the NAT IP addresses. For public peering, each ExpressRoute circuit by default uses two NAT IP addresses. Each is applied to Azure service traffic when the traffic enters the Microsoft Azure network backbone. For Microsoft peering, the NAT IP addresses that are used are either customer provided or are provided by the service provider. To allow access to your service resources, you must allow these public IP addresses in the resource IP firewall setting. To find your public peering ExpressRoute circuit IP addresses, [open a support ticket with ExpressRoute](https://portal.azure.com/#blade/Microsoft_Azure_Support/HelpAndSupportBlade/overview) via the Azure portal. Learn more about [NAT for ExpressRoute public and Microsoft peering.](../expressroute/expressroute-nat.md#nat-requirements-for-azure-public-peering)
-
-### Managing IP network rules
-
-You can manage IP network rules for Cognitive Services resources through the Azure portal, PowerShell, or the Azure CLI.
-
-# [Azure portal](#tab/portal)
-
-1. Go to the Cognitive Services resource you want to secure.
-
-1. Select **Virtual network** under **RESOURCE MANAGEMENT**.
-
-1. Check that you've selected to allow access from **Selected networks**.
-
-1. To grant access to an internet IP range, enter the IP address or address range (in [CIDR format](https://tools.ietf.org/html/rfc4632)) under **Firewall** > **Address Range**. Only valid public IP (non-reserved) addresses are accepted.
-
- ![Add IP range](media/vnet/virtual-network-add-ip-range.png)
-
-1. To remove an IP network rule, select the trash can <span class="docon docon-delete x-hidden-focus"></span> icon next to the address range.
-
- ![Delete IP range](media/vnet/virtual-network-delete-ip-range.png)
-
-1. Select **Save** to apply your changes.
-
-# [PowerShell](#tab/powershell)
-
-1. Install the [Azure PowerShell](/powershell/azure/install-azure-powershell) and [sign in](/powershell/azure/authenticate-azureps), or select **Try it**.
-
-1. List IP network rules.
-
- ```azurepowershell-interactive
- $parameters = @{
- "ResourceGroupName"= "myresourcegroup"
- "Name"= "myaccount"
-    }
- (Get-AzCognitiveServicesAccountNetworkRuleSet @parameters).IPRules
- ```
-
-1. Add a network rule for an individual IP address.
-
- ```azurepowershell-interactive
-    $parameters = @{
-        ResourceGroupName = "myresourcegroup"
-        Name = "myaccount"
-        IPAddressOrRange = "16.17.18.19"
-    }
- Add-AzCognitiveServicesAccountNetworkRule @parameters
- ```
-
-1. Add a network rule for an IP address range.
-
- ```azurepowershell-interactive
-    $parameters = @{
-        ResourceGroupName = "myresourcegroup"
-        Name = "myaccount"
-        IPAddressOrRange = "16.17.18.0/24"
-    }
- Add-AzCognitiveServicesAccountNetworkRule @parameters
- ```
-
-1. Remove a network rule for an individual IP address.
-
- ```azurepowershell-interactive
-    $parameters = @{
-        ResourceGroupName = "myresourcegroup"
-        Name = "myaccount"
-        IPAddressOrRange = "16.17.18.19"
-    }
- Remove-AzCognitiveServicesAccountNetworkRule @parameters
- ```
-
-1. Remove a network rule for an IP address range.
-
- ```azurepowershell-interactive
-    $parameters = @{
-        ResourceGroupName = "myresourcegroup"
-        Name = "myaccount"
-        IPAddressOrRange = "16.17.18.0/24"
-    }
- Remove-AzCognitiveServicesAccountNetworkRule @parameters
- ```
-
-# [Azure CLI](#tab/azure-cli)
-
-1. Install the [Azure CLI](/cli/azure/install-azure-cli) and [sign in](/cli/azure/authenticate-azure-cli), or select **Try it**.
-
-1. List IP network rules.
-
- ```azurecli-interactive
- az cognitiveservices account network-rule list \
- -g "myresourcegroup" -n "myaccount" --query ipRules
- ```
-
-1. Add a network rule for an individual IP address.
-
- ```azurecli-interactive
- az cognitiveservices account network-rule add \
- -g "myresourcegroup" -n "myaccount" \
- --ip-address "16.17.18.19"
- ```
-
-1. Add a network rule for an IP address range.
-
- ```azurecli-interactive
- az cognitiveservices account network-rule add \
- -g "myresourcegroup" -n "myaccount" \
- --ip-address "16.17.18.0/24"
- ```
-
-1. Remove a network rule for an individual IP address.
-
- ```azurecli-interactive
- az cognitiveservices account network-rule remove \
- -g "myresourcegroup" -n "myaccount" \
- --ip-address "16.17.18.19"
- ```
-
-1. Remove a network rule for an IP address range.
-
- ```azurecli-interactive
- az cognitiveservices account network-rule remove \
- -g "myresourcegroup" -n "myaccount" \
- --ip-address "16.17.18.0/24"
- ```
-
-***
-
-> [!IMPORTANT]
-> Be sure to [set the default rule](#change-the-default-network-access-rule) to **deny**, or network rules have no effect.
-
-## Use private endpoints
-
-You can use [private endpoints](../private-link/private-endpoint-overview.md) for your Cognitive Services resources to allow clients on a virtual network (VNet) to securely access data over a [Private Link](../private-link/private-link-overview.md). The private endpoint uses an IP address from the VNet address space for your Cognitive Services resource. Network traffic between the clients on the VNet and the resource traverses the VNet and a private link on the Microsoft backbone network, eliminating exposure from the public internet.
-
-Private endpoints for Cognitive Services resources let you:
-
-* Secure your Cognitive Services resource by configuring the firewall to block all connections on the public endpoint for the Cognitive Services service.
-* Increase security for the VNet, by enabling you to block exfiltration of data from the VNet.
-* Securely connect to Cognitive Services resources from on-premises networks that connect to the VNet using [VPN](../vpn-gateway/vpn-gateway-about-vpngateways.md) or [ExpressRoutes](../expressroute/expressroute-locations.md) with private-peering.
-
-### Conceptual overview
-
-A private endpoint is a special network interface for an Azure resource in your [VNet](../virtual-network/virtual-networks-overview.md). Creating a private endpoint for your Cognitive Services resource provides secure connectivity between clients in your VNet and your resource. The private endpoint is assigned an IP address from the IP address range of your VNet. The connection between the private endpoint and the Cognitive Services service uses a secure private link.
-
-Applications in the VNet can connect to the service over the private endpoint seamlessly, using the same connection strings and authorization mechanisms that they would use otherwise. The exception is the Speech Services, which require a separate endpoint. See the section on [Private endpoints with the Speech Services](#private-endpoints-with-the-speech-services). Private endpoints can be used with all protocols supported by the Cognitive Services resource, including REST.
-
-Private endpoints can be created in subnets that use [Service Endpoints](../virtual-network/virtual-network-service-endpoints-overview.md). Clients in a subnet can connect to one Cognitive Services resource using a private endpoint, while using service endpoints to access others.
-
-When you create a private endpoint for a Cognitive Services resource in your VNet, a consent request is sent for approval to the Cognitive Services resource owner. If the user requesting the creation of the private endpoint is also an owner of the resource, this consent request is automatically approved.
-
-Cognitive Services resource owners can manage consent requests and the private endpoints, through the '*Private endpoints*' tab for the Cognitive Services resource in the [Azure portal](https://portal.azure.com).
-
-### Private endpoints
-
-When creating the private endpoint, you must specify the Cognitive Services resource it connects to. For more information on creating a private endpoint, see:
-
-* [Create a private endpoint using the Private Link Center in the Azure portal](../private-link/create-private-endpoint-portal.md)
-* [Create a private endpoint using Azure CLI](../private-link/create-private-endpoint-cli.md)
-* [Create a private endpoint using Azure PowerShell](../private-link/create-private-endpoint-powershell.md)
-
-### Connecting to private endpoints
-
-> [!NOTE]
-> Azure OpenAI Service uses a different private DNS zone and public DNS zone forwarder than other Azure Cognitive Services. Refer to the [Azure services DNS zone configuration article](../private-link/private-endpoint-dns.md#azure-services-dns-zone-configuration) for the correct zone and forwarder names.
-
-Clients on a VNet using the private endpoint should use the same connection string for the Cognitive Services resource as clients connecting to the public endpoint. The exception is the Speech Services, which require a separate endpoint. See the section on [Private endpoints with the Speech Services](#private-endpoints-with-the-speech-services). We rely upon DNS resolution to automatically route the connections from the VNet to the Cognitive Services resource over a private link.
-
-We create a [private DNS zone](../dns/private-dns-overview.md) attached to the VNet with the necessary updates for the private endpoints, by default. However, if you're using your own DNS server, you may need to make more changes to your DNS configuration. The section on [DNS changes](#dns-changes-for-private-endpoints) below describes the updates required for private endpoints.
-
-### Private endpoints with the Speech Services
-
-See [Using Speech Services with private endpoints provided by Azure Private Link](Speech-Service/speech-services-private-link.md).
-
-### DNS changes for private endpoints
-
-When you create a private endpoint, the DNS CNAME resource record for the Cognitive Services resource is updated to an alias in a subdomain with the prefix `privatelink`. By default, we also create a [private DNS zone](../dns/private-dns-overview.md), corresponding to the `privatelink` subdomain, with the DNS A resource records for the private endpoints.
-
-When you resolve the endpoint URL from outside the VNet with the private endpoint, it resolves to the public endpoint of the Cognitive Services resource. When resolved from the VNet hosting the private endpoint, the endpoint URL resolves to the private endpoint's IP address.
-
-This approach enables access to the Cognitive Services resource using the same connection string for clients in the VNet hosting the private endpoints and clients outside the VNet.
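-
-As a quick check (the resource name `contoso-cogsvc` below is a placeholder, and the exact subdomain varies by service and resource kind), you can compare how the endpoint name resolves from inside and outside the VNet:
-
-```console
-# From a VM inside the VNet that hosts the private endpoint:
-# the name resolves through the privatelink alias to a private IP address from the VNet range.
-nslookup contoso-cogsvc.cognitiveservices.azure.com
-
-# From a machine outside the VNet:
-# the same name resolves to the public endpoint of the resource.
-nslookup contoso-cogsvc.cognitiveservices.azure.com
-```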
-
-If you're using a custom DNS server on your network, clients must be able to resolve the fully qualified domain name (FQDN) for the Cognitive Services resource endpoint to the private endpoint IP address. Configure your DNS server to delegate your private link subdomain to the private DNS zone for the VNet.
-
-> [!TIP]
-> When using a custom or on-premises DNS server, you should configure your DNS server to resolve the Cognitive Services resource name in the 'privatelink' subdomain to the private endpoint IP address. You can do this by delegating the 'privatelink' subdomain to the private DNS zone of the VNet, or configuring the DNS zone on your DNS server and adding the DNS A records.
-
-For more information on configuring your own DNS server to support private endpoints, see the following articles:
-
-* [Name resolution for resources in Azure virtual networks](../virtual-network/virtual-networks-name-resolution-for-vms-and-role-instances.md#name-resolution-that-uses-your-own-dns-server)
-* [DNS configuration for private endpoints](../private-link/private-endpoint-overview.md#dns-configuration)
-
-### Pricing
-
-For pricing details, see [Azure Private Link pricing](https://azure.microsoft.com/pricing/details/private-link).
-
-## Next steps
-
-* Explore the various [Azure Cognitive Services](./what-are-cognitive-services.md)
-* Learn more about [Azure Virtual Network Service Endpoints](../virtual-network/virtual-network-service-endpoints-overview.md)
cognitive-services Commitment Tier https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/commitment-tier.md
- Title: Create an Azure Cognitive Services resource with commitment tier pricing
-description: Learn how to sign up for commitment tier pricing, which is different than pay-as-you-go pricing.
----- Previously updated : 12/01/2022--
-# Purchase commitment tier pricing
-
-Cognitive Services offers commitment tier pricing, each offering a discounted rate compared to the pay-as-you-go pricing model. With commitment tier pricing, you can commit to using the following Cognitive Services features for a fixed fee, enabling you to have a predictable total cost based on the needs of your workload:
-* Speech to text (Standard)
-* Text to speech (Neural)
-* Text Translation (Standard)
-* Language Understanding standard (Text Requests)
-* Azure Cognitive Service for Language
- * Sentiment Analysis
- * Key Phrase Extraction
- * Language Detection
-* Computer Vision - OCR
-
-Commitment tier pricing is also available for the following Applied AI service:
-* Form Recognizer – Custom/Invoice
-
-For more information, see [Azure Cognitive Services pricing](https://azure.microsoft.com/pricing/details/cognitive-services/).
-
-## Create a new resource
-
-1. Sign into the [Azure portal](https://portal.azure.com/) and select **Create a new resource** for one of the applicable Cognitive Services or Applied AI services listed above.
-
-2. Enter the applicable information to create your resource. Be sure to select the standard pricing tier.
-
- > [!NOTE]
- > If you intend to purchase a commitment tier for disconnected container usage, you need to request separate access and select the **Commitment tier disconnected containers** pricing tier. See the [disconnected containers](./containers/disconnected-containers.md) article for more information.
-
- :::image type="content" source="media/commitment-tier/create-resource.png" alt-text="A screenshot showing resource creation on the Azure portal." lightbox="media/commitment-tier/create-resource.png":::
-
-3. Once your resource is created, you'll be able to change your pricing from pay-as-you-go, to a commitment plan.
-
-## Purchase a commitment plan by updating your Azure resource
-
-1. Sign in to the [Azure portal](https://portal.azure.com/) with your Azure subscription.
-2. In your Azure resource for one of the applicable features listed above, select **Commitment tier pricing**.
-3. Select **Change** to view the available commitments for hosted API and container usage. Choose a commitment plan for one or more of the following offerings:
- * **Web**: web-based APIs, where you send data to Azure for processing.
- * **Connected container**: Docker containers that enable you to [deploy Cognitive services on premises](cognitive-services-container-support.md), and maintain an internet connection for billing and metering.
-
- :::image type="content" source="media/commitment-tier/commitment-tier-pricing.png" alt-text="A screenshot showing the commitment tier pricing page on the Azure portal." lightbox="media/commitment-tier/commitment-tier-pricing.png":::
-
-4. In the window that appears, select both a **Tier** and **Auto-renewal** option.
-
- * **Commitment tier** - The commitment tier for the feature. The commitment tier is enabled immediately when you select **Purchase** and you will be charged the commitment amount on a pro-rated basis.
-
- * **Auto-renewal** - Choose how you want to renew, change, or cancel the current commitment plan starting with the next billing cycle. If you decide to auto-renew, the **Auto-renewal date** is the date (in your local timezone) when you'll be charged for the next billing cycle. This date coincides with the start of the calendar month.
-
- > [!CAUTION]
- > Once you click **Purchase** you will be charged for the tier you select. Once purchased, the commitment plan is non-refundable.
- >
- > Commitment plans are charged monthly, except the first month upon purchase which is pro-rated (cost and quota) based on the number of days remaining in that month. For the subsequent months, the charge is incurred on the first day of the month.
-
- :::image type="content" source="media/commitment-tier/enable-commitment-plan.png" alt-text="A screenshot showing the commitment tier pricing and renewal details on the Azure portal." lightbox="media/commitment-tier/enable-commitment-plan-large.png":::
--
-## Overage pricing
-
-If you use the resource above the quota provided, you'll be charged for the additional usage as per the overage amount mentioned in the commitment tier.
-
-## Purchase a different commitment plan
-
-The commitment plans have a calendar month commitment period. You can purchase a commitment plan at any time from the default pay-as-you-go pricing model. When you purchase a plan, you'll be charged a pro-rated price for the remaining month. During the commitment period, you can't change the commitment plan for the current month. However, you can choose a different commitment plan for the next calendar month. The billing for the next month would happen on the first day of the next month.
-
-If you need a larger commitment plan than any of the ones offered, contact `csgate@microsoft.com`.
-
-## End a commitment plan
-
-If you decide that you don't want to continue purchasing a commitment plan, you can set your resource's auto-renewal to **Do not auto-renew**. Your commitment plan will expire on the displayed commitment end date. After this date, you won't be charged for the commitment plan. You'll be able to continue using the Azure resource to make API calls, charged at pay-as-you-go pricing. You have until midnight (UTC) on the last day of each month to end a commitment plan, and not be charged for the following month.
-
-## See also
-
-* [Azure Cognitive Services pricing](https://azure.microsoft.com/pricing/details/cognitive-services/).
cognitive-services Azure Container Instance Recipe https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/containers/azure-container-instance-recipe.md
- Title: Azure Container Instance recipe-
-description: Learn how to deploy Cognitive Services Containers on Azure Container Instance
------ Previously updated : 12/18/2020-
-#Customer intent: As a potential customer, I want to know more about how Cognitive Services provides and supports Docker containers for each service.
-
-# https://github.com/Azure/cognitiveservices-aci
--
-# Deploy and run container on Azure Container Instance
-
-With the following steps, scale Azure Cognitive Services applications in the cloud easily with Azure [Container Instances](../../container-instances/index.yml). Containerization helps you focus on building your applications instead of managing the infrastructure. For more information on using containers, see [features and benefits](../cognitive-services-container-support.md#features-and-benefits).
-
-## Prerequisites
-
-The recipe works with any Cognitive Services container. The Cognitive Service resource must be created before using the recipe. Each Cognitive Service that supports containers has a "How to install" article for installing and configuring the service for a container. Some services require a file or set of files as input for the container, so it's important that you understand and have used the container successfully before using this solution.
-
-* An Azure resource for the Azure Cognitive Service you're using.
-* Cognitive Service **endpoint URL** - review your specific service's "How to install" article for the container to find where the endpoint URL is shown in the Azure portal, and what a correct example of the URL looks like. The exact format can change from service to service.
-* Cognitive Service **key** - the keys are on the **Keys** page for the Azure resource. You only need one of the two keys. The key is a string of 32 alpha-numeric characters.
-
-* A single Cognitive Services Container on your local host (your computer). Make sure you can:
- * Pull down the image with a `docker pull` command.
- * Run the local container successfully with all required configuration settings with a `docker run` command.
- * Call the container's endpoint, getting a response of HTTP 2xx and a JSON response back.
-
-All variables in angle brackets, `<>`, need to be replaced with your own values. This replacement includes the angle brackets.
-
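-The following is a minimal sketch of that local validation, with placeholder image name, billing values, and resource limits that you replace with the values from your service's "How to install" article (most Cognitive Services containers expose a `/status` route; the container's Swagger page lists the exact routes):
-
-```console
-# Pull the container image for your Cognitive Service
-docker pull mcr.microsoft.com/azure-cognitive-services/<service-image>:<tag>
-
-# Run it locally with the required billing settings (adjust memory and CPU to your service's requirements)
-docker run --rm -p 5000:5000 --memory 4g --cpus 1 \
-  mcr.microsoft.com/azure-cognitive-services/<service-image>:<tag> \
-  Eula=accept \
-  Billing=<endpoint-url> \
-  ApiKey=<key>
-
-# Confirm the container responds before moving on
-curl -i http://localhost:5000/status
-```
-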
-> [!IMPORTANT]
-> The LUIS container requires a `.gz` model file that is pulled in at runtime. The container must be able to access this model file via a volume mount from the container instance. To upload a model file, follow these steps:
-> 1. [Create an Azure file share](../../storage/files/storage-how-to-create-file-share.md). Take note of the Azure Storage account name, key, and file share name as you'll need them later.
-> 2. [export your LUIS model (packaged app) from the LUIS portal](../LUIS/luis-container-howto.md#export-packaged-app-from-luis).
-> 3. In the Azure portal, navigate to the **Overview** page of your storage account resource, and select **File shares**.
-> 4. Select the file share name that you recently created, then select **Upload**. Then upload your packaged app.
-
-# [Azure portal](#tab/portal)
--
-# [CLI](#tab/cli)
-----
-## Use the Container Instance
-
-# [Azure portal](#tab/portal)
-
-1. Select the **Overview** and copy the IP address. It will be a numeric IP address such as `55.55.55.55`.
-1. Open a new browser tab and use the IP address, for example, `http://<IP-address>:5000` (`http://55.55.55.55:5000`). You will see the container's home page, letting you know the container is running.
-
- ![Container's home page](../../../includes/media/cognitive-services-containers-api-documentation/container-webpage.png)
-
-1. Select **Service API Description** to view the swagger page for the container.
-
-1. Select any of the **POST** APIs and select **Try it out**. The parameters are displayed including the input. Fill in the parameters.
-
-1. Select **Execute** to send the request to your Container Instance.
-
- You have successfully created and used Cognitive Services containers in Azure Container Instance.
-
-# [CLI](#tab/cli)
--
-> [!NOTE]
-> If you're running the Text Analytics for health container, use the following URL to submit queries: `http://localhost:5000/text/analytics/v3.2-preview.1/entities/health`
--
cognitive-services Azure Kubernetes Recipe https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/containers/azure-kubernetes-recipe.md
- Title: Run Language Detection container in Kubernetes Service-
-description: Deploy the language detection container, with a running sample, to the Azure Kubernetes Service, and test it in a web browser.
------ Previously updated : 01/10/2022----
-# Deploy a language detection container to Azure Kubernetes Service
-
-Learn how to deploy the language detection container. This procedure shows you how to create the local Docker containers, push them to your own private container registry, run the container in a Kubernetes cluster, and test it in a web browser.
-
-## Prerequisites
-
-This procedure requires several tools that must be installed and run locally. Do not use Azure Cloud Shell.
-
-* Use an Azure subscription. If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/cognitive-services) before you begin.
-* [Git](https://git-scm.com/downloads) for your operating system so you can clone the [sample](https://github.com/Azure-Samples/cognitive-services-containers-samples) used in this procedure.
-* [Azure CLI](/cli/azure/install-azure-cli).
-* [Docker engine](https://www.docker.com/products/docker-engine) and validate that the Docker CLI works in a console window.
-* [kubectl](https://storage.googleapis.com/kubernetes-release/release/v1.13.1/bin/windows/amd64/kubectl.exe).
-* An Azure resource with the correct pricing tier. Not all pricing tiers work with this container:
- * **Language** resource with F0 or Standard pricing tiers only.
- * **Cognitive Services** resource with the S0 pricing tier.
-
-## Running the sample
-
-This procedure loads and runs the Cognitive Services Container sample for language detection. The sample has two containers, one for the client application and one for the Cognitive Services container. We'll push both of these images to the Azure Container Registry. Once they are on your own registry, create an Azure Kubernetes Service to access these images and run the containers. When the containers are running, use the **kubectl** CLI to watch the containers' performance. Access the client application with an HTTP request and see the results.
-
-![A diagram showing the conceptual idea of running a container on Kubernetes](media/container-instance-sample.png)
-
-## The sample containers
-
-The sample has two container images: one for the frontend website, and one for the language detection container, which returns the detected language (culture) of text. Both containers are accessible from an external IP address when you're done.
-
-### The language-frontend container
-
-This website is equivalent to your own client-side application that makes requests of the language detection endpoint. When the procedure is finished, you get the detected language of a string of characters by accessing the website container in a browser with `http://<external-IP>/<text-to-analyze>`. An example of this URL is `http://132.12.23.255/helloworld!`. The result in the browser is `English`.
-
-### The language container
-
-The language detection container, in this specific procedure, is accessible to any external request. The container hasn't been changed in any way so the standard Cognitive Services container-specific language detection API is available.
-
-For this container, that API is a POST request for language detection. As with all Cognitive Services containers, you can learn more about the container from its hosted Swagger information, `http://<external-IP>:5000/swagger/index.html`.
-
-Port 5000 is the default port used with the Cognitive Services containers.
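-
-For illustration, once the service is exposed you could call that endpoint with a request like the following (the external IP is a placeholder, and the API version segment in the path depends on the container version, so confirm it on the container's Swagger page):
-
-```console
-curl -X POST "http://<external-IP>:5000/text/analytics/v3.0/languages" \
-  -H "Content-Type: application/json" \
-  -d '{"documents":[{"id":"1","text":"Hello world"}]}'
-```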
-
-## Create Azure Container Registry service
-
-To deploy the container to the Azure Kubernetes Service, the container images need to be accessible. Create your own Azure Container Registry service to host the images.
-
-1. Sign in to the Azure CLI
-
- ```azurecli-interactive
- az login
- ```
-
-1. Create a resource group named `cogserv-container-rg` to hold every resource created in this procedure.
-
- ```azurecli-interactive
- az group create --name cogserv-container-rg --location westus
- ```
-
-1. Create your own Azure Container Registry with the format of your name then `registry`, such as `pattyregistry`. Do not use dashes or underscore characters in the name.
-
- ```azurecli-interactive
- az acr create --resource-group cogserv-container-rg --name pattyregistry --sku Basic
- ```
-
- Save the results to get the **loginServer** property. This will be part of the hosted container's address, used later in the `language.yml` file.
-
- ```output
- {
- "adminUserEnabled": false,
- "creationDate": "2019-01-02T23:49:53.783549+00:00",
- "id": "/subscriptions/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx/resourceGroups/cogserv-container-rg/providers/Microsoft.ContainerRegistry/registries/pattyregistry",
- "location": "westus",
- "loginServer": "pattyregistry.azurecr.io",
- "name": "pattyregistry",
- "provisioningState": "Succeeded",
- "resourceGroup": "cogserv-container-rg",
- "sku": {
- "name": "Basic",
- "tier": "Basic"
- },
- "status": null,
- "storageAccount": null,
- "tags": {},
- "type": "Microsoft.ContainerRegistry/registries"
- }
- ```
-
-1. Sign in to your container registry. You need to sign in before you can push images to it.
-
- ```azurecli-interactive
- az acr login --name pattyregistry
- ```
-
-## Get website Docker image
-
-1. The sample code used in this procedure is in the Cognitive Services containers samples repository. Clone the repository to have a local copy of the sample.
-
- ```console
- git clone https://github.com/Azure-Samples/cognitive-services-containers-samples
- ```
-
- Once the repository is on your local computer, find the website in the [\dotnet\Language\FrontendService](https://github.com/Azure-Samples/cognitive-services-containers-samples/tree/master/dotnet/Language/FrontendService) directory. This website acts as the client application calling the language detection API hosted in the language detection container.
-
-1. Build the Docker image for this website. Make sure the console is in the [\FrontendService](https://github.com/Azure-Samples/cognitive-services-containers-samples/tree/master/dotnet/Language/FrontendService) directory where the Dockerfile is located when you run the following command:
-
- ```console
- docker build -t language-frontend -t pattyregistry.azurecr.io/language-frontend:v1 .
- ```
-
- To track the version on your container registry, add the tag with a version format, such as `v1`.
-
-1. Push the image to your container registry. This may take a few minutes.
-
- ```console
- docker push pattyregistry.azurecr.io/language-frontend:v1
- ```
-
- If you get an `unauthorized: authentication required` error, login with the `az acr login --name <your-container-registry-name>` command.
-
- When the process is done, the results should be similar to:
-
- ```output
- The push refers to repository [pattyregistry.azurecr.io/language-frontend]
- 82ff52ee6c73: Pushed
- 07599c047227: Pushed
- 816caf41a9a1: Pushed
- 2924be3aed17: Pushed
- 45b83a23806f: Pushed
- ef68f6734aa4: Pushed
- v1: digest: sha256:31930445deee181605c0cde53dab5a104528dc1ff57e5b3b34324f0d8a0eb286 size: 1580
- ```
-
-## Get language detection Docker image
-
-1. Pull the latest version of the Docker image to the local machine. This may take a few minutes. If there is a newer version of this container, change the value from `1.1.006770001-amd64-preview` to the newer version.
-
- ```console
- docker pull mcr.microsoft.com/azure-cognitive-services/language:1.1.006770001-amd64-preview
- ```
-
-1. Tag the image with your container registry name. Find the latest version and replace the version `1.1.006770001-amd64-preview` if you have a more recent version.
-
- ```console
- docker tag mcr.microsoft.com/azure-cognitive-services/language:1.1.006770001-amd64-preview pattyregistry.azurecr.io/language:1.1.006770001-amd64-preview
- ```
-
-1. Push the image to your container registry. This may take a few minutes.
-
- ```console
- docker push pattyregistry.azurecr.io/language:1.1.006770001-amd64-preview
- ```
-
-## Get Container Registry credentials
-
-The following steps are needed to get the required information to connect your container registry with the Azure Kubernetes Service you create later in this procedure.
-
-1. Create service principal.
-
- ```azurecli-interactive
- az ad sp create-for-rbac
- ```
-
- Save the `appId` value from the results for the assignee parameter in step 3, `<appId>`. Save the `password` for the next section's client-secret parameter, `<client-secret>`.
-
- ```output
- {
- "appId": "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx",
- "displayName": "azure-cli-2018-12-31-18-39-32",
- "name": "http://azure-cli-2018-12-31-18-39-32",
- "password": "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx",
- "tenant": "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx"
- }
- ```
-
-1. Get your container registry ID.
-
- ```azurecli-interactive
- az acr show --resource-group cogserv-container-rg --name pattyregistry --query "id" --output table
- ```
-
- Save the output for the scope parameter value, `<acrId>`, in the next step. It looks like:
-
- ```output
- /subscriptions/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx/resourceGroups/cogserv-container-rg/providers/Microsoft.ContainerRegistry/registries/pattyregistry
- ```
-
- Save the full value for step 3 in this section.
-
-1. To grant the correct access for the AKS cluster to use images stored in your container registry, create a role assignment. Replace `<appId>` and `<acrId>` with the values gathered in the previous two steps.
-
- ```azurecli-interactive
- az role assignment create --assignee <appId> --scope <acrId> --role Reader
- ```
-
-## Create Azure Kubernetes Service
-
-1. Create the Kubernetes cluster. All the parameter values are from previous sections except the name parameter. Choose a name that indicates who created it and its purpose, such as `patty-kube`.
-
- ```azurecli-interactive
- az aks create --resource-group cogserv-container-rg --name patty-kube --node-count 2 --service-principal <appId> --client-secret <client-secret> --generate-ssh-keys
- ```
-
- This step may take a few minutes. The result is:
-
- ```output
- {
- "aadProfile": null,
- "addonProfiles": null,
- "agentPoolProfiles": [
- {
- "count": 2,
- "dnsPrefix": null,
- "fqdn": null,
- "maxPods": 110,
- "name": "nodepool1",
- "osDiskSizeGb": 30,
- "osType": "Linux",
- "ports": null,
- "storageProfile": "ManagedDisks",
- "vmSize": "Standard_DS1_v2",
- "vnetSubnetId": null
- }
- ],
- "dnsPrefix": "patty-kube--65a101",
- "enableRbac": true,
- "fqdn": "patty-kube--65a101-341f1f54.hcp.westus.azmk8s.io",
- "id": "/subscriptions/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx/resourcegroups/cogserv-container-rg/providers/Microsoft.ContainerService/managedClusters/patty-kube",
- "kubernetesVersion": "1.9.11",
- "linuxProfile": {
- "adminUsername": "azureuser",
- "ssh": {
- "publicKeys": [
- {
- "keyData": "ssh-rsa AAAAB3NzaC...ohR2d81mFC
- }
- ]
- }
- },
- "location": "westus",
- "name": "patty-kube",
- "networkProfile": {
- "dnsServiceIp": "10.0.0.10",
- "dockerBridgeCidr": "172.17.0.1/16",
- "networkPlugin": "kubenet",
- "networkPolicy": null,
- "podCidr": "10.244.0.0/16",
- "serviceCidr": "10.0.0.0/16"
- },
- "nodeResourceGroup": "MC_patty_westus",
- "provisioningState": "Succeeded",
- "resourceGroup": "cogserv-container-rg",
- "servicePrincipalProfile": {
- "clientId": "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx",
- "keyVaultSecretRef": null,
- "secret": null
- },
- "tags": null,
- "type": "Microsoft.ContainerService/ManagedClusters"
- }
- ```
-
- The service is created but it doesn't have the website container or language detection container yet.
-
-1. Get credentials of the Kubernetes cluster.
-
- ```azurecli-interactive
- az aks get-credentials --resource-group cogserv-container-rg --name patty-kube
- ```
-
-## Load the orchestration definition into your Kubernetes service
-
-This section uses the **kubectl** CLI to talk with the Azure Kubernetes Service.
-
-1. Before loading the orchestration definition, check that **kubectl** has access to the nodes.
-
- ```console
- kubectl get nodes
- ```
-
- The response looks like:
-
- ```output
- NAME STATUS ROLES AGE VERSION
- aks-nodepool1-13756812-0 Ready agent 6m v1.9.11
- aks-nodepool1-13756812-1 Ready agent 6m v1.9.11
- ```
-
-1. Copy the following file and name it `language.yml`. The file has a `service` section and a `deployment` section each for the two container types, the `language-frontend` website container and the `language` detection container.
-
- [!code-yml[Kubernetes orchestration file for the Cognitive Services containers sample](~/samples-cogserv-containers/Kubernetes/language/language.yml "Kubernetes orchestration file for the Cognitive Services containers sample")]
-
-1. Change the language-frontend deployment lines of `language.yml` based on the following table to add your own container registry image names, client secret, and Language service settings.
-
- |Language-frontend deployment settings|Purpose|
- |--|--|
- |Line 32<br> `image` property|Image location for the frontend image in your Container Registry<br>`<container-registry-name>.azurecr.io/language-frontend:v1`|
- |Line 44<br> `name` property|Container Registry secret for the image, referred to as `<client-secret>` in a previous section.|
-
-1. Change the language deployment lines of `language.yml` based on the following table to add your own container registry image names, client secret, and Language service settings.
-
- |Language deployment settings|Purpose|
- |--|--|
- |Line 78<br> `image` property|Image location for the language image in your Container Registry<br>`<container-registry-name>.azurecr.io/language:1.1.006770001-amd64-preview`|
- |Line 95<br> `name` property|Container Registry secret for the image, referred to as `<client-secret>` in a previous section.|
- |Line 91<br> `apiKey` property|Your Language service resource key|
- |Line 92<br> `billing` property|The billing endpoint for your Language service resource.<br>`https://westus.api.cognitive.microsoft.com/text/analytics/v2.1`|
-
- Because the **apiKey** and **billing endpoint** are set as part of the Kubernetes orchestration definition, the website container doesn't need to know about these or pass them as part of the request. The website container refers to the language detection container by its orchestrator name `language`.
-
-1. Load the orchestration definition file for this sample from the folder where you created and saved the `language.yml`.
-
- ```console
- kubectl apply -f language.yml
- ```
-
- The response is:
-
- ```output
- service "language-frontend" created
- deployment.apps "language-frontend" created
- service "language" created
- deployment.apps "language" created
- ```
-
-## Get external IPs of containers
-
-For the two containers, verify the `language-frontend` and `language` services are running and get the external IP address.
-
-```console
-kubectl get all
-```
-
-```output
-NAME READY STATUS RESTARTS AGE
-pod/language-586849d8dc-7zvz5 1/1 Running 0 13h
-pod/language-frontend-68b9969969-bz9bg 1/1 Running 1 13h
-
-NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
-service/kubernetes ClusterIP 10.0.0.1 <none> 443/TCP 14h
-service/language LoadBalancer 10.0.39.169 104.42.172.68 5000:30161/TCP 13h
-service/language-frontend LoadBalancer 10.0.42.136 104.42.37.219 80:30943/TCP 13h
-
-NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE
-deployment.extensions/language 1 1 1 1 13h
-deployment.extensions/language-frontend 1 1 1 1 13h
-
-NAME DESIRED CURRENT READY AGE
-replicaset.extensions/language-586849d8dc 1 1 1 13h
-replicaset.extensions/language-frontend-68b9969969 1 1 1 13h
-
-NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE
-deployment.apps/language 1 1 1 1 13h
-deployment.apps/language-frontend 1 1 1 1 13h
-
-NAME DESIRED CURRENT READY AGE
-replicaset.apps/language-586849d8dc 1 1 1 13h
-replicaset.apps/language-frontend-68b9969969 1 1 1 13h
-```
-
-If the `EXTERNAL-IP` for the service is shown as pending, rerun the command until the IP address is shown before moving to the next step.
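-
-Instead of rerunning the command manually, you can watch a single service until its external IP address is assigned:
-
-```console
-kubectl get service language-frontend --watch
-```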
-
-## Test the language detection container
-
-Open a browser and navigate to the external IP of the `language` container from the previous section: `http://<external-ip>:5000/swagger/index.html`. You can use the `Try it` feature of the API to test the language detection endpoint.
-
-![A screenshot showing the container's swagger documentation](./media/language-detection-container-swagger-documentation.png)
-
-## Test the client application container
-
-Change the URL in the browser to the external IP of the `language-frontend` container using the following format: `http://<external-ip>/helloworld`. The English culture text of `helloworld` is predicted as `English`.
-
-## Clean up resources
-
-When you are done with the cluster, delete the Azure resource group.
-
-```azurecli-interactive
-az group delete --name cogserv-container-rg
-```
-
-## Related information
-
-* [kubectl for Docker Users](https://kubernetes.io/docs/reference/kubectl/docker-cli-to-kubectl/)
-
-## Next steps
-
-[Cognitive Services Containers](../cognitive-services-container-support.md)
cognitive-services Container Reuse Recipe https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/containers/container-reuse-recipe.md
- Title: Recipes for Docker containers-
-description: Learn how to build, test, and store containers with some or all of your configuration settings for deployment and reuse.
------ Previously updated : 10/28/2021-
-#Customer intent: As a potential customer, I want to know how to configure containers so I can reuse them.
--
-# Create containers for reuse
-
-Use these container recipes to create Cognitive Services Containers that can be reused. Containers can be built with some or all configuration settings so that they are _not_ needed when the container is started.
-
-Once you have this new layer of container (with settings), and you have tested it locally, you can store the container in a container registry. When the container starts, it will only need those settings that are not currently stored in the container. The private registry container provides configuration space for you to pass those settings in.
-
-## Docker run syntax
-
-Any `docker run` examples in this document assume a Windows console with a `^` line continuation character. Consider the following for your own use:
-
-* Do not change the order of the arguments unless you are very familiar with docker containers.
-* If you are using an operating system other than Windows, or a console other than Windows console, use the correct console/terminal, folder syntax for mounts, and line continuation character for your console and system. Because the Cognitive Services container is a Linux operating system, the target mount uses a Linux-style folder syntax.
-* `docker run` examples use the directory off the `c:` drive to avoid any permission conflicts on Windows. If you need to use a specific directory as the input directory, you may need to grant the docker service permission.
-
-## Store no configuration settings in image
-
-The example `docker run` commands for each service do not store any configuration settings in the container. When you start the container from a console or registry service, those configuration settings need to be passed in. The private registry container provides configuration space for you to pass those settings in.
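-
-As a minimal sketch of that pattern (the setting values are placeholders, the environment variable names match the Dockerfile examples later in this article, and the command is shown on one line so it works in any console), the settings are supplied at `docker run` time instead of being baked into the image:
-
-```console
-docker run --rm -p 5000:5000 -e EULA=accept -e billing=<billing-endpoint> -e apikey=<api-key> <container-registry>/<cognitive-service-container-name>:<tag>
-```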
-
-## Reuse recipe: store all configuration settings with container
-
-In order to store all configuration settings, create a `Dockerfile` with those settings.
-
-Issues with this approach:
-
-* The new container has a separate name and tag from the original container.
-* In order to change these settings, you will have to change the values of the Dockerfile, rebuild the image, and republish to your registry.
-* If someone gets access to your container registry or your local host, they can run the container and use the Cognitive Services endpoints.
-* If your Cognitive Service doesn't require input mounts, don't add the `COPY` lines to your Dockerfile.
-
-Create a Dockerfile that pulls from the existing Cognitive Services container you want to use, then use Docker commands in the Dockerfile to set or pull in the information the container needs.
-
-This example:
-
-* Sets the billing endpoint, `{BILLING_ENDPOINT}`, from the host's environment key using `ENV`.
-* Sets the billing API key, `{ENDPOINT_KEY}`, from the host's environment key using `ENV`.
-
-### Reuse recipe: store billing settings with container
-
-This example shows how to build the Language service's sentiment container from a Dockerfile.
-
-```Dockerfile
-FROM mcr.microsoft.com/azure-cognitive-services/sentiment:latest
-ENV billing={BILLING_ENDPOINT}
-ENV apikey={ENDPOINT_KEY}
-ENV EULA=accept
-```
-
-Build and run the container [locally](#how-to-use-container-on-your-local-host) or from your [private registry container](#how-to-add-container-to-private-registry) as needed.
-
-### Reuse recipe: store billing and mount settings with container
-
-This example shows how to use Language Understanding, saving the billing settings and models with the Dockerfile.
-
-* Copies the Language Understanding (LUIS) model file from the host's file system using `COPY`.
-* The LUIS container supports more than one model. If all models are stored in the same folder, you only need one `COPY` statement.
-* Run the docker file from the relative parent of the model input directory. For the following example, run the `docker build` and `docker run` commands from the relative parent of `/input`. The first `/input` on the `COPY` command is the host computer's directory. The second `/input` is the container's directory.
-
-```Dockerfile
-FROM <container-registry>/<cognitive-service-container-name>:<tag>
-ENV billing={BILLING_ENDPOINT}
-ENV apikey={ENDPOINT_KEY}
-ENV EULA=accept
-COPY /input /input
-```
-
-Build and run the container [locally](#how-to-use-container-on-your-local-host) or from your [private registry container](#how-to-add-container-to-private-registry) as needed.
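-
-A sketch of building and running this layered LUIS image from the relative parent of the `/input` directory might look like the following (the image name and tag are illustrative and echo the naming discussed later in this article):
-
-```console
-# Run from the directory that contains both the Dockerfile and the ./input folder
-docker build -t modified-luis:with-billing-and-model .
-docker run --rm -p 5000:5000 modified-luis:with-billing-and-model
-```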
-
-## How to use container on your local host
-
-To build the Docker file, replace `<your-image-name>` with the new name of the image, then use:
-
-```console
-docker build -t <your-image-name> .
-```
-
-To run the image, and remove it when the container stops (`--rm`):
-
-```console
-docker run --rm <your-image-name>
-```
-
-## How to add container to private registry
-
-Follow these steps to use the Dockerfile and place the new image in your private container registry.
-
-1. Create a `Dockerfile` with the text from reuse recipe. A `Dockerfile` doesn't have an extension.
-
-1. Replace any values in the angle brackets with your own values.
-
-1. Build the file into an image at the command line or terminal, using the following command. Replace the values in the angle brackets, `<>`, with your own container name and tag.
-
- The tag option, `-t`, is a way to add information about what you have changed for the container. For example, a container name of `modified-LUIS` indicates the original container has been layered. A tag name of `with-billing-and-model` indicates how the Language Understanding (LUIS) container has been modified.
-
- ```Bash
- docker build -t <your-new-container-name>:<your-new-tag-name> .
- ```
-
-1. Sign in to Azure CLI from a console. This command opens a browser and requires authentication. Once authenticated, you can close the browser and continue working in the console.
-
- ```azurecli
- az login
- ```
-
-1. Sign in to your private registry with Azure CLI from a console.
-
- Replace the values in the angle brackets, `<my-registry>`, with your own registry name.
-
- ```azurecli
- az acr login --name <my-registry>
- ```
-
- You can also sign in with docker login if you are assigned a service principal.
-
- ```Bash
- docker login <my-registry>.azurecr.io
- ```
-
-1. Tag the container with the private registry location. Replace the values in the angle brackets, `<my-registry>`, with your own registry name.
-
- ```Bash
- docker tag <your-new-container-name>:<your-new-tag-name> <my-registry>.azurecr.io/<your-new-container-name-in-registry>:<your-new-tag-name>
- ```
-
- If you don't use a tag name, `latest` is implied.
-
-1. Push the new image to your private container registry. When you view your private container registry, the container name used in the following CLI command will be the name of the repository.
-
- ```Bash
- docker push <my-registry>.azurecr.io/<your-new-container-name-in-registry>:<your-new-tag-name>
- ```
-
-## Next steps
-
-[Create and use Azure Container Instance](azure-container-instance-recipe.md)
-
-<!--
-## Store input and output configuration settings
-
-Bake in input params only
-
-FROM containerpreview.azurecr.io/microsoft/cognitive-services-luis:<tag>
-COPY luisModel1 /input/
-COPY luisModel2 /input/
-
-## Store all configuration settings
-
-If you are a single manager of the container, you may want to store all settings in the container. The new, resulting container will not need any variables passed in to run.
-
-Issues with this approach:
-
-* In order to change these settings, you will have to change the values of the Dockerfile and rebuild the file.
-* If someone gets access to your container registry or your local host, they can run the container and use the Cognitive Services endpoints.
-
-The following _partial_ Dockerfile shows how to statically set the values for billing and model. This example uses the
-
-```Dockerfile
-FROM <container-registry>/<cognitive-service-container-name>:<tag>
-ENV billing=<billing value>
-ENV apikey=<apikey value>
-COPY luisModel1 /input/
-COPY luisModel2 /input/
-```
--->
cognitive-services Disconnected Containers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/containers/disconnected-containers.md
- Title: Use Docker containers in disconnected environments-
-description: Learn how to run Azure Cognitive Services Docker containers disconnected from the internet.
----- Previously updated : 06/28/2023---
-# Use Docker containers in disconnected environments
-
-Containers enable you to run Cognitive Services APIs in your own environment, and are great for your specific security and data governance requirements. Disconnected containers enable you to use several of these APIs disconnected from the internet. Currently, the following containers can be run in this manner:
-
-* [Speech to text](../speech-service/speech-container-howto.md?tabs=stt)
-* [Custom Speech to text](../speech-service/speech-container-howto.md?tabs=cstt)
-* [Neural Text to speech](../speech-service/speech-container-howto.md?tabs=ntts)
-* [Text Translation (Standard)](../translator/containers/translator-disconnected-containers.md)
-* Azure Cognitive Service for Language
- * [Sentiment Analysis](../language-service/sentiment-opinion-mining/how-to/use-containers.md)
- * [Key Phrase Extraction](../language-service/key-phrase-extraction/how-to/use-containers.md)
- * [Language Detection](../language-service/language-detection/how-to/use-containers.md)
-* [Computer Vision - Read](../computer-vision/computer-vision-how-to-install-containers.md)
-* [Form Recognizer](../../applied-ai-services/form-recognizer/containers/form-recognizer-disconnected-containers.md)
-
-Before attempting to run a Docker container in an offline environment, make sure you know the steps to successfully download and use the container. For example:
-
-* Host computer requirements and recommendations.
-* The Docker `pull` command you'll use to download the container.
-* How to validate that a container is running.
-* How to send queries to the container's endpoint, once it's running.
-
-## Request access to use containers in disconnected environments
-
-Fill out and submit the [request form](https://aka.ms/csdisconnectedcontainers) to request access to the containers disconnected from the internet.
--
-Access is limited to customers that meet the following requirements:
-
-* Your organization should be identified as a strategic customer or partner with Microsoft.
-* Disconnected containers are expected to run fully offline, so your use cases must meet one of the following or similar requirements:
- * Environment or device(s) with zero connectivity to the internet.
- * Remote location that occasionally has internet access.
- * Organization under strict regulation that prohibits sending any kind of data back to the cloud.
-* Application completed as instructed. Pay close attention to the guidance provided throughout the application to ensure you provide all the information required for approval.
-
-## Purchase a commitment plan to use containers in disconnected environments
-
-### Create a new resource
-
-1. Sign into the [Azure portal](https://portal.azure.com/) and select **Create a new resource** for one of the applicable Cognitive Services or Applied AI services listed above.
-
-2. Enter the applicable information to create your resource. Be sure to select **Commitment tier disconnected containers** as your pricing tier.
-
- > [!NOTE]
- >
- > * You will only see the option to purchase a commitment tier if you have been approved by Microsoft.
- > * Pricing details are for example only.
-
-1. Select **Review + Create** at the bottom of the page. Review the information, and select **Create**.
-
-### Configure container for disconnected usage
-
-See the following documentation for steps on downloading and configuring the container for disconnected usage:
-
-* [Computer Vision - Read](../computer-vision/computer-vision-how-to-install-containers.md#run-the-container-disconnected-from-the-internet)
-* [Language Understanding (LUIS)](../LUIS/luis-container-howto.md#run-the-container-disconnected-from-the-internet)
-* [Text Translation (Standard)](../translator/containers/translator-disconnected-containers.md)
-* [Form recognizer](../../applied-ai-services/form-recognizer/containers/form-recognizer-disconnected-containers.md)
-
-**Speech service**
-
-* [Speech to text](../speech-service/speech-container-stt.md?tabs=disconnected#run-the-container-with-docker-run)
-* [Custom Speech to text](../speech-service/speech-container-cstt.md?tabs=disconnected#run-the-container-with-docker-run)
-* [Neural Text to speech](../speech-service/speech-container-ntts.md?tabs=disconnected#run-the-container-with-docker-run)
-
-**Language service**
-
-* [Sentiment Analysis](../language-service/sentiment-opinion-mining/how-to/use-containers.md#run-the-container-disconnected-from-the-internet)
-* [Key Phrase Extraction](../language-service/key-phrase-extraction/how-to/use-containers.md#run-the-container-disconnected-from-the-internet)
-* [Language Detection](../language-service/language-detection/how-to/use-containers.md#run-the-container-disconnected-from-the-internet)
-
-
-## Container image and license updates
--
-## Usage records
-
-When operating Docker containers in a disconnected environment, the container will write usage records to a volume where they're collected over time. You can also call a REST endpoint to generate a report about service usage.
-
-### Arguments for storing logs
-
-When run in a disconnected environment, an output mount must be available to the container to store usage logs. For example, you would include `-v /host/output:{OUTPUT_PATH}` and `Mounts:Output={OUTPUT_PATH}` in the example below, replacing `{OUTPUT_PATH}` with the path where the logs will be stored:
-
-```Docker
-docker run -v /host/output:{OUTPUT_PATH} ... <image> ... Mounts:Output={OUTPUT_PATH}
-```
-
-### Get records using the container endpoints
-
-The container provides two endpoints for returning records about its usage.
-
-#### Get all records
-
-The following endpoint will provide a report summarizing all of the usage collected in the mounted billing record directory.
-
-```http
-https://<service>/records/usage-logs/
-```
-
-It will return JSON similar to the example below.
-
-```json
-{
- "apiType": "noop",
- "serviceName": "noop",
- "meters": [
- {
- "name": "Sample.Meter",
- "quantity": 253
- }
- ]
-}
-```
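-
-For example, assuming the container's endpoint is reachable at `localhost:5000` over HTTP (adjust the host, port, and scheme for your deployment), you might request the summary with a GET call:
-
-```console
-curl -X GET "http://localhost:5000/records/usage-logs/"
-```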
-
-#### Get records for a specific month
-
-The following endpoint will provide a report summarizing usage over a specific month and year.
-
-```HTTP
-https://<service>/records/usage-logs/{MONTH}/{YEAR}
-```
-
-It will return a JSON response similar to the example below:
-
-```json
-{
- "apiType": "string",
- "serviceName": "string",
- "meters": [
- {
- "name": "string",
- "quantity": 253
- }
- ]
-}
-```
-
-## Purchase a different commitment plan for disconnected containers
-
-Commitment plans for disconnected containers have a calendar year commitment period. When you purchase a plan, you'll be charged the full price immediately. During the commitment period, you can't change your commitment plan, however you can purchase additional unit(s) at a pro-rated price for the remaining days in the year. You have until midnight (UTC) on the last day of your commitment, to end a commitment plan.
-
-You can choose a different commitment plan in the **Commitment Tier pricing** settings of your resource.
-
-## End a commitment plan
-
-If you decide that you don't want to continue purchasing a commitment plan, you can set your resource's auto-renewal to **Do not auto-renew**. Your commitment plan will expire on the displayed commitment end date. After this date, you won't be charged for the commitment plan. You'll be able to continue using the Azure resource to make API calls, charged at pay-as-you-go pricing. You have until midnight (UTC) on the last day of the year to end a commitment plan for disconnected containers, and not be charged for the following year.
-
-## Troubleshooting
-
-If you run the container with an output mount and logging enabled, the container generates log files that are helpful to troubleshoot issues that happen while starting or running the container.
-
-> [!TIP]
-> For more troubleshooting information and guidance, see [Disconnected containers Frequently asked questions (FAQ)](disconnected-container-faq.yml).
-
-## Next steps
-
-[Azure Cognitive Services containers overview](../cognitive-services-container-support.md)
---------
cognitive-services Docker Compose Recipe https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/containers/docker-compose-recipe.md
- Title: Use Docker Compose to deploy multiple containers-
-description: Learn how to deploy multiple Cognitive Services containers. This article shows you how to orchestrate multiple Docker container images by using Docker Compose.
------ Previously updated : 10/29/2020-
-#Customer intent: As a potential customer, I want to know how to configure containers so I can reuse them.
-
-# SME: Brendan Walsh
--
-# Use Docker Compose to deploy multiple containers
-
-This article shows you how to deploy multiple Azure Cognitive Services containers. Specifically, you'll learn how to use Docker Compose to orchestrate multiple Docker container images.
-
-> [Docker Compose](https://docs.docker.com/compose/) is a tool for defining and running multi-container Docker applications. In Compose, you use a YAML file to configure your application's services. Then, you create and start all the services from your configuration by running a single command.
-
-It can be useful to orchestrate multiple container images on a single host computer. In this article, we'll pull together the Read and Form Recognizer containers.
-
-## Prerequisites
-
-This procedure requires several tools that must be installed and run locally:
-
-* An Azure subscription. If you don't have one, create a [free account](https://azure.microsoft.com/free/cognitive-services) before you begin.
-* [Docker Engine](https://www.docker.com/products/docker-engine). Confirm that the Docker CLI works in a console window.
-* An Azure resource with the correct pricing tier. Only the following pricing tiers work with this container:
- * **Computer Vision** resource with F0 or Standard pricing tier only.
- * **Form Recognizer** resource with F0 or Standard pricing tier only.
- * **Cognitive Services** resource with the S0 pricing tier.
-* If you're using a gated preview container, you will need to complete the [online request form](https://aka.ms/csgate/) to use it.
-
-## Docker Compose file
-
-The YAML file defines all the services to be deployed. These services rely on either a `Dockerfile` or an existing container image. In this case, we'll use two preview images. Copy and paste the following YAML file, and save it as *docker-compose.yaml*. Provide the appropriate **apikey**, **billing**, and **EndpointUri** values in the file.
-
-```yaml
-version: '3.7'
-
-services:
-
-  forms:
-    image: "mcr.microsoft.com/azure-cognitive-services/form-recognizer/layout"
-    environment:
-      eula: accept
-      billing: # < Your form recognizer billing URL >
-      apikey: # < Your form recognizer API key >
-      FormRecognizer__ComputerVisionApiKey: # < Your form recognizer API key >
-      FormRecognizer__ComputerVisionEndpointUri: # < Your form recognizer URI >
-    volumes:
-      - type: bind
-        source: E:\publicpreview\output
-        target: /output
-      - type: bind
-        source: E:\publicpreview\input
-        target: /input
-    ports:
-      - "5010:5000"
-
-  ocr:
-    image: "mcr.microsoft.com/azure-cognitive-services/vision/read:3.1-preview"
-    environment:
-      eula: accept
-      apikey: # < Your computer vision API key >
-      billing: # < Your computer vision billing URL >
-    ports:
-      - "5021:5000"
-```
-
-> [!IMPORTANT]
-> Create the directories on the host machine that are specified under the **volumes** node. This approach is required because the directories must exist before you try to mount an image by using volume bindings.
-
-## Start the configured Docker Compose services
-
-A Docker Compose file enables the management of all the stages in a defined service's life cycle: starting, stopping, and rebuilding services; viewing the service status; and log streaming. Open a command-line interface from the project directory (where the docker-compose.yaml file is located).
-
-> [!NOTE]
-> To avoid errors, make sure that the host machine correctly shares drives with Docker Engine. For example, if *E:\publicpreview* is used as a directory in the *docker-compose.yaml* file, share drive **E** with Docker.
-
-From the command-line interface, execute the following command to start (or restart) all the services defined in the *docker-compose.yaml* file:
-
-```console
-docker-compose up
-```
-
-The first time Docker executes the **docker-compose up** command by using this configuration, it pulls the images configured under the **services** node and then downloads and mounts them:
-
-```console
-Pulling forms (mcr.microsoft.com/azure-cognitive-services/form-recognizer/layout:)...
-latest: Pulling from azure-cognitive-services/form-recognizer/layout
-743f2d6c1f65: Pull complete
-72befba99561: Pull complete
-2a40b9192d02: Pull complete
-c7715c9d5c33: Pull complete
-f0b33959f1c4: Pull complete
-b8ab86c6ab26: Pull complete
-41940c21ed3c: Pull complete
-e3d37dd258d4: Pull complete
-cdb5eb761109: Pull complete
-fd93b5f95865: Pull complete
-ef41dcbc5857: Pull complete
-4d05c86a4178: Pull complete
-34e811d37201: Pull complete
-Pulling ocr (mcr.microsoft.com/azure-cognitive-services/vision/read:3.1-preview:)...
-latest: Pulling from /azure-cognitive-services/vision/read:3.1-preview
-f476d66f5408: Already exists
-8882c27f669e: Already exists
-d9af21273955: Already exists
-f5029279ec12: Already exists
-1a578849dcd1: Pull complete
-45064b1ab0bf: Download complete
-4bb846705268: Downloading [=========================================> ] 187.1MB/222.8MB
-c56511552241: Waiting
-e91d2aa0f1ad: Downloading [==============================================> ] 162.2MB/176.1MB
-```
-
-After the images are downloaded, the image services are started:
-
-```console
-Starting docker_ocr_1 ... done
-Starting docker_forms_1 ... doneAttaching to docker_ocr_1, docker_forms_1forms_1 | forms_1 | forms_1 | Notice: This Preview is made available to you on the condition that you agree to the Supplemental Terms of Use for Microsoft Azure Previews [https://go.microsoft.com/fwlink/?linkid=2018815], which supplement your agreement [https://go.microsoft.com/fwlink/?linkid=2018657] governing your use of Azure. If you do not have an existing agreement governing your use of Azure, you agree that your agreement governing use of Azure is the Microsoft Online Subscription Agreement [https://go.microsoft.com/fwlink/?linkid=2018755] (which incorporates the Online Services Terms [https://go.microsoft.com/fwlink/?linkid=2018760]). By using the Preview you agree to these terms.
-forms_1 |
-forms_1 |
-forms_1 | Using '/input' for reading models and other read-only data.
-forms_1 | Using '/output/forms/812d811d1bcc' for writing logs and other output data.
-forms_1 | Logging to console.
-forms_1 | Submitting metering to 'https://westus2.api.cognitive.microsoft.com/'.
-forms_1 | WARNING: No access control enabled!
-forms_1 | warn: Microsoft.AspNetCore.Server.Kestrel[0]
-forms_1 | Overriding address(es) 'http://+:80'. Binding to endpoints defined in UseKestrel() instead.
-forms_1 | Hosting environment: Production
-forms_1 | Content root path: /app/forms
-forms_1 | Now listening on: http://0.0.0.0:5000
-forms_1 | Application started. Press Ctrl+C to shut down.
-ocr_1 |
-ocr_1 |
-ocr_1 | Notice: This Preview is made available to you on the condition that you agree to the Supplemental Terms of Use for Microsoft Azure Previews [https://go.microsoft.com/fwlink/?linkid=2018815], which supplement your agreement [https://go.microsoft.com/fwlink/?linkid=2018657] governing your use of Azure. If you do not have an existing agreement governing your use of Azure, you agree that your agreement governing use of Azure is the Microsoft Online Subscription Agreement [https://go.microsoft.com/fwlink/?linkid=2018755] (which incorporates the Online Services Terms [https://go.microsoft.com/fwlink/?linkid=2018760]). By using the Preview you agree to these terms.
-ocr_1 |
-ocr_1 |
-ocr_1 | Logging to console.
-ocr_1 | Submitting metering to 'https://westcentralus.api.cognitive.microsoft.com/'.
-ocr_1 | WARNING: No access control enabled!
-ocr_1 | Hosting environment: Production
-ocr_1 | Content root path: /
-ocr_1 | Now listening on: http://0.0.0.0:5000
-ocr_1 | Application started. Press Ctrl+C to shut down.
-```
-
-## Verify the service availability
--
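-
-To verify that both containers are running, you can list the downloaded images with the Docker CLI (the format string selects only the columns shown in the sample output; `docker ps` similarly lists the running containers):
-
-```console
-docker images --format "table {{.ID}}\t{{.Repository}}\t{{.Tag}}"
-```
-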
-Here's some example output:
-
-```
-IMAGE ID REPOSITORY TAG
-2ce533f88e80 mcr.microsoft.com/azure-cognitive-services/form-recognizer/layout latest
-4be104c126c5 mcr.microsoft.com/azure-cognitive-services/vision/read:3.1-preview latest
-```
-
-### Test containers
-
-Open a browser on the host machine and go to **localhost** by using the port specified in the *docker-compose.yaml* file, such as http://localhost:5021/swagger/index.html. For example, you can use the **Try It** feature in the API to test the Form Recognizer endpoint. Both containers' Swagger pages should be available and testable.
-
-![Form Recognizer Container](media/form-recognizer-swagger-page.png)
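
If you prefer to verify the containers from a script instead of the Swagger pages, Cognitive Services containers also expose simple health endpoints. The snippet below is a minimal sketch only: the first port (5021) is taken from the example URL above, the second container's port is a placeholder you'd replace with the mapping in your own *docker-compose.yaml*, and the `/status` and `/ready` probes are the generic container health endpoints rather than anything specific to this compose file.

```python
import requests

# Host ports come from your docker-compose.yaml; 5021 matches the example URL above,
# and 5022 is only a placeholder for the second container's mapping.
CONTAINERS = {
    "form-recognizer": "http://localhost:5021",
    "read": "http://localhost:5022",
}

for name, base_url in CONTAINERS.items():
    # /status reports whether the container (and its billing endpoint) is reachable;
    # /ready reports whether the model is loaded and ready to serve requests.
    for probe in ("/status", "/ready"):
        response = requests.get(base_url + probe, timeout=10)
        print(f"{name}{probe}: HTTP {response.status_code}")
```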
-
-## Next steps
-
-[Cognitive Services containers](../cognitive-services-container-support.md)
cognitive-services Harm Categories https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/content-safety/concepts/harm-categories.md
- Title: "Harm categories in Azure AI Content Safety"-
-description: Learn about the different content moderation flags and severity levels that the Content Safety service returns.
- Previously updated : 04/06/2023
-keywords:
---
-# Harm categories in Azure AI Content Safety
-
-This guide describes all of the harm categories and ratings that Content Safety uses to flag content. Both text and image content use the same set of flags.
-
-## Harm categories
-
-Content Safety recognizes four distinct categories of objectionable content.
-
-| Category | Description |
-| | - |
-| Hate | The hate category describes language attacks or uses that include pejorative or discriminatory language with reference to a person or identity group on the basis of certain differentiating attributes of these groups including but not limited to race, ethnicity, nationality, gender identity and expression, sexual orientation, religion, immigration status, ability status, personal appearance, and body size. |
-| Sexual | The sexual category describes language related to anatomical organs and genitals, romantic relationships, acts portrayed in erotic or affectionate terms, physical sexual acts, including those portrayed as an assault or a forced sexual violent act against one's will, prostitution, pornography, and abuse. |
-| Violence | The violence category describes language related to physical actions intended to hurt, injure, damage, or kill someone or something; describes weapons, etc. |
-| Self-harm | The self-harm category describes language related to physical actions intended to purposely hurt, injure, or damage one's body, or kill oneself. |
-
-Classification can be multi-labeled. For example, when a text sample goes through the text moderation model, it could be classified as both Sexual content and Violence.
-
-## Severity levels
-
-Every harm category the service applies also comes with a severity level rating. The severity level is meant to indicate the severity of the consequences of showing the flagged content.
-
-| Severity Levels | Label |
-| -- | -- |
-|Severity Level 0 - Safe | Content may be related to violence, self-harm, sexual or hate categories but the terms are used in general, journalistic, scientific, medical, and similar professional contexts which are appropriate for most audiences. |
-|Severity Level 2 - Low | Content that expresses prejudiced, judgmental, or opinionated views, includes offensive use of language, stereotyping, use cases exploring a fictional world (e.g., gaming, literature) and depictions at low intensity. |
-|Severity Level 4 - Medium| Content that uses offensive, insulting, mocking, intimidating, or demeaning language towards specific identity groups, includes depictions of seeking and executing harmful instructions, fantasies, glorification, promotion of harm at medium intensity. |
-|Severity Level 6 - High | Content that displays explicit and severe harmful instructions, actions, damage, or abuse, includes endorsement, glorification, promotion of severe harmful acts, extreme or illegal forms of harm, radicalization, and non-consensual power exchange or abuse. |
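
Because classification can be multi-labeled and each category carries one of the numeric severities above (0, 2, 4, 6), a client typically maps those numbers back to the labels in this table. The following is a small illustrative sketch only; the `analysis` dictionary is a stand-in for values you'd pull out of a real API response, not the exact payload shape.

```python
# Map the documented numeric severities to their labels.
SEVERITY_LABELS = {0: "Safe", 2: "Low", 4: "Medium", 6: "High"}

def summarize(analysis: dict) -> dict:
    """Return a {category: label} summary for each category's numeric severity."""
    return {
        category: SEVERITY_LABELS.get(severity, f"Unknown ({severity})")
        for category, severity in analysis.items()
    }

# Example: a text sample flagged for both Violence and Hate (multi-labeled).
print(summarize({"Hate": 2, "Sexual": 0, "SelfHarm": 0, "Violence": 4}))
# {'Hate': 'Low', 'Sexual': 'Safe', 'SelfHarm': 'Safe', 'Violence': 'Medium'}
```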
-
-## Next steps
-
-Follow a quickstart to get started using Content Safety in your application.
-
-> [!div class="nextstepaction"]
-> [Content Safety quickstart](../quickstart-text.md)
cognitive-services Response Codes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/content-safety/concepts/response-codes.md
- Title: "Content Safety error codes"-
-description: See the possible error codes for the Content Safety APIs.
- Previously updated : 05/09/2023
-# Content Safety Error codes
-
-The Content Safety APIs may return the following error codes:
-
-| Error Code | Possible reasons | Suggestions |
-| - | - | -- |
-| InvalidRequestBody | One or more fields in the request body do not match the API definition. | 1. Check the API version you specified in the API call. <br/>2. Check the corresponding API definition for the API version you selected. |
-| InvalidResourceName | The resource name you specified in the URL does not meet the requirements, like the blocklist name, blocklist term ID, etc. | 1. Check the API version you specified in the API call. <br/>2. Check whether the given name has invalid characters according to the API definition. |
-| ResourceNotFound | The resource you specified in the URL may not exist, like the blocklist name. | 1. Check the API version you specified in the API call. <br/> 2. Double check the existence of the resource specified in the URL. |
-| InternalError | An unexpected situation was triggered on the server side. | 1. Retry a few times after a short wait and see if the issue happens again. <br/> 2. Contact Azure Support if the issue persists. |
-| ServerBusy | The server side can't process the request temporarily. | 1. Retry a few times after a short wait and see if the issue happens again. <br/>2. Contact Azure Support if the issue persists. |
-| TooManyRequests | The current request rate has exceeded the RPS quota for your current SKU. | 1. Check the pricing table to understand the RPS quota for your tier. <br/>2. Contact Azure Support if you need a higher quota. |
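
For the transient errors above (`InternalError`, `ServerBusy`, `TooManyRequests`), retrying after a short wait is the suggested first step. The sketch below shows one way to do that against the text analysis endpoint used elsewhere in this documentation; the HTTP status mapping (429/500/503) and the backoff strategy are assumptions for illustration, not official guidance, and `<endpoint>` / `<enter_your_key_here>` are the usual placeholders.

```python
import time
import requests

ENDPOINT = "<endpoint>"             # for example, https://<resource-name>.cognitiveservices.azure.com
KEY = "<enter_your_key_here>"
RETRYABLE_STATUS = {429, 500, 503}  # assumed mapping: TooManyRequests, InternalError, ServerBusy

def analyze_text_with_retry(text: str, max_attempts: int = 4) -> dict:
    """Call text:analyze, retrying transient failures with exponential backoff."""
    url = f"{ENDPOINT}/contentsafety/text:analyze?api-version=2023-04-30-preview"
    headers = {"Ocp-Apim-Subscription-Key": KEY, "Content-Type": "application/json"}
    for attempt in range(1, max_attempts + 1):
        response = requests.post(url, headers=headers, json={"text": text})
        if response.status_code in RETRYABLE_STATUS and attempt < max_attempts:
            time.sleep(2 ** attempt)  # wait a little longer after each failed attempt
            continue
        response.raise_for_status()   # non-retryable errors (or the final attempt) surface here
        return response.json()

# print(analyze_text_with_retry("sample text"))
```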
cognitive-services Encrypt Data At Rest https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/content-safety/how-to/encrypt-data-at-rest.md
- Title: Azure Content Safety Service encryption of data at rest
-description: Learn how Azure Content Safety encrypts your data when it's persisted to the cloud.
- Previously updated : 07/04/2023
-# Azure Content Safety encryption of data at rest
-
-Azure Content Safety automatically encrypts your data when it's uploaded to the cloud. The encryption protects your data and helps you meet your organizational security and compliance commitments. This article covers how Azure Content Safety handles encryption of data at rest.
-
-## About Cognitive Services encryption
-
-Azure Content Safety is part of Azure Cognitive Services. Cognitive Services data is encrypted and decrypted using [FIPS 140-2](https://en.wikipedia.org/wiki/FIPS_140-2) compliant [256-bit AES](https://en.wikipedia.org/wiki/Advanced_Encryption_Standard) encryption. Encryption and decryption are transparent, meaning encryption and access are managed for you. Your data is secure by default and you don't need to modify your code or applications to take advantage of encryption.
--
-## About encryption key management
-
-By default, your subscription uses Microsoft-managed encryption keys. There's also the option to manage your subscription with your own keys called customer-managed keys (CMK). CMK offers greater flexibility to create, rotate, disable, and revoke access controls. You can also audit the encryption keys used to protect your data.
-
-> [!IMPORTANT]
-> Blocklist names are always encrypted with Microsoft-managed keys (MMK); selecting CMK doesn't change this behavior. All other data is encrypted with either MMK or CMK, depending on what you've selected.
-
-## Customer-managed keys with Azure Key Vault
-
-Customer-managed keys (CMK), also known as Bring your own key (BYOK), offer greater flexibility to create, rotate, disable, and revoke access controls. You can also audit the encryption keys used to protect your data.
-
-You must use Azure Key Vault to store your customer-managed keys. You can either create your own keys and store them in a key vault, or you can use the Azure Key Vault APIs to generate keys. The Cognitive Services resource and the key vault must be in the same region and in the same Azure Active Directory (Azure AD) tenant, but they can be in different subscriptions. For more information about Azure Key Vault, see [What is Azure Key Vault?](/azure/key-vault/general/overview).
-
-To enable customer-managed keys, you must also enable both the **Soft Delete** and **Do Not Purge** properties on the key vault.
-
-Only RSA keys of size 2048 are supported with Cognitive Services encryption. For more information about keys, see **Key Vault keys** in [About Azure Key Vault keys, secrets and certificates](/azure/key-vault/general/about-keys-secrets-certificates).
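
If you want to generate the key yourself rather than through the portal, the constraint above (an RSA 2048 key in a soft-delete enabled key vault) can be met with the Azure Key Vault SDK. This is a minimal sketch, assuming the `azure-identity` and `azure-keyvault-keys` Python packages and placeholder vault URL and key name:

```python
from azure.identity import DefaultAzureCredential
from azure.keyvault.keys import KeyClient

VAULT_URL = "https://<your-key-vault-name>.vault.azure.net"  # placeholder

credential = DefaultAzureCredential()
key_client = KeyClient(vault_url=VAULT_URL, credential=credential)

# Create a 2048-bit RSA key, the only key type and size supported for this scenario.
key = key_client.create_rsa_key("contentsafety-cmk", size=2048)

# The key identifier (URI) is what you later paste into the resource's Encryption settings.
print(key.id)
```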
--
-## Enable customer-managed keys for your resource
-
-To enable customer-managed keys in the Azure portal, follow these steps:
-
-1. Go to your Cognitive Services resource.
-2. On the left, select **Encryption**.
-3. Under **Encryption type**, select **Customer Managed Keys**, as shown in the following screenshot.
---
-## Specify a key
-
-After you enable customer-managed keys, you can specify a key to associate with the Cognitive Services resource.
-
-### Specify a key as a URI
-
-To specify a key as a URI, follow these steps:
-
-1. In the Azure portal, go to your key vault.
-
-2. Under **Settings**, select **Keys**.
-
-3. Select the desired key, and then select the key to view its versions. Select a key version to view the settings for that version.
-
-4. Copy the **Key Identifier** value, which provides the URI.
-
- ![Screenshot of the Azure portal page for a key version. The Key Identifier box contains a placeholder for a key URI.](../../media/cognitive-services-encryption/key-uri-portal.png)
-
-5. Go back to your Cognitive Services resource, and then select **Encryption**.
-
-6. Under **Encryption key**, select **Enter key URI**.
-
-7. Paste the URI that you copied into the **Key URI** box.
-
- ![Screenshot of the Encryption page for a Cognitive Services resource. The Enter key URI option is selected, and the Key URI box contains a value.](../../media/cognitive-services-encryption/ssecmk2.png)
-
-8. Under **Subscription**, select the subscription that contains the key vault.
-
-9. Save your changes.
---
-### Specify a key from a key vault
-
-To specify a key from a key vault, first make sure that you have a key vault that contains a key. Then follow these steps:
-
-1. Go to your Cognitive Services resource, and then select **Encryption**.
-
-2. Under **Encryption key**, select **Select from Key Vault**.
-
-3. Select the key vault that contains the key that you want to use.
-
-4. Select the key that you want to use.
-
- ![Screenshot of the Select key from Azure Key Vault page in the Azure portal. The Subscription, Key vault, Key, and Version boxes contain values.](../../media/cognitive-services-encryption/ssecmk3.png)
-
-5. Save your changes.
--
-## Update the key version
-
-When you create a new version of a key, update the Cognitive Services resource to use the new version. Follow these steps:
-
-1. Go to your Cognitive Services resource, and then select **Encryption**.
-2. Enter the URI for the new key version. Alternately, you can select the key vault and then select the key again to update the version.
-3. Save your changes.
--
-## Use a different key
-
-To change the key that you use for encryption, follow these steps:
-
-1. Go to your Cognitive Services resource, and then select **Encryption**.
-2. Enter the URI for the new key. Alternately, you can select the key vault and then select a new key.
-3. Save your changes.
--
-## Rotate customer-managed keys
-
-You can rotate a customer-managed key in Key Vault according to your compliance policies. When the key is rotated, you must update the Cognitive Services resource to use the new key URI. To learn how to update the resource to use a new version of the key in the Azure portal, see [Update the key version](/azure/cognitive-services/openai/encrypt-data-at-rest#update-the-key-version).
-
-Rotating the key doesn't trigger re-encryption of data in the resource. No further action is required from the user.
--
-## Revoke a customer-managed key
-
-To revoke access to customer-managed keys, use PowerShell or Azure CLI. For more information, see [Azure Key Vault PowerShell](/powershell/module/az.keyvault//) or [Azure Key Vault CLI](/cli/azure/keyvault). Revoking access effectively blocks access to all data in the Cognitive Services resource, because the encryption key is inaccessible by Cognitive Services.
--
-## Disable customer-managed keys
-
-When you disable customer-managed keys, your Cognitive Services resource is then encrypted with Microsoft-managed keys. To disable customer-managed keys, follow these steps:
-
-1. Go to your Cognitive Services resource, and then select **Encryption**.
-2. Select **Microsoft Managed Keys** > **Save**.
-
-When you enabled customer-managed keys, a system-assigned managed identity (a feature of Azure AD) was also enabled, and the resource was registered with Azure Active Directory. After registration, the managed identity is given access to the key vault you selected during customer-managed key setup. You can learn more about [Managed Identities](/azure/active-directory/managed-identities-azure-resources/overview).
-
-> [!IMPORTANT]
-> If you disable the system-assigned managed identity, access to the key vault is removed and any data encrypted with the customer-managed keys is no longer accessible. Any features that depend on this data stop working.
-
-> [!IMPORTANT]
-> Managed identities do not currently support cross-directory scenarios. When you configure customer-managed keys in the Azure portal, a managed identity is automatically assigned under the covers. If you subsequently move the subscription, resource group, or resource from one Azure AD directory to another, the managed identity associated with the resource is not transferred to the new tenant, so customer-managed keys may no longer work. For more information, see **Transferring a subscription between Azure AD directories** in [FAQs and known issues with managed identities for Azure resources](/azure/active-directory/managed-identities-azure-resources/known-issues#transferring-a-subscription-between-azure-ad-directories).
-
-## Next steps
-* [Content Safety overview](../overview.md)
cognitive-services Use Blocklist https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/content-safety/how-to/use-blocklist.md
- Title: "Use blocklists for text moderation"-
-description: Learn how to customize text moderation in Content Safety by using your own list of blockItems.
- Previously updated : 04/21/2023
-keywords:
---
-# Use a blocklist
-
-> [!CAUTION]
-> The sample data in this guide may contain offensive content. User discretion is advised.
-
-The default AI classifiers are sufficient for most content moderation needs. However, you may need to screen for items that are specific to your use case.
-
-## Prerequisites
-
-* An Azure subscription - [Create one for free](https://azure.microsoft.com/free/cognitive-services/)
-* Once you have your Azure subscription, <a href="https://aka.ms/acs-create" title="Create a Content Safety resource" target="_blank">create a Content Safety resource </a> in the Azure portal to get your key and endpoint. Enter a unique name for your resource, select the subscription you entered on the application form, select a resource group, supported region, and supported pricing tier. Then select **Create**.
-  * The resource takes a few minutes to deploy. After it finishes, select **Go to resource**. In the left pane, under **Resource Management**, select **Subscription Key and Endpoint**. You use the endpoint and either of the keys to call the APIs.
-* One of the following installed:
- * [cURL](https://curl.haxx.se/) for REST API calls.
- * [Python 3.x](https://www.python.org/) installed
- * Your Python installation should include [pip](https://pip.pypa.io/en/stable/). You can check if you have pip installed by running `pip --version` on the command line. Get pip by installing the latest version of Python.
- * If you're using Python, you'll need to install the Azure AI Content Safety client library for Python. Run the command `pip install azure-ai-contentsafety` in your project directory.
- * [.NET Runtime](https://dotnet.microsoft.com/download/dotnet/) installed.
- * [.NET 6.0](https://dotnet.microsoft.com/download/dotnet-core) SDK or above installed.
- * If you're using .NET, you'll need to install the Azure AI Content Safety client library for .NET. Run the command `dotnet add package Azure.AI.ContentSafety --prerelease` in your project directory.
--
-## Analyze text with a blocklist
-
-You can create blocklists to use with the Text API. The following steps help you get started.
-
-### Create or modify a blocklist
-
-#### [REST API](#tab/rest)
-
-Copy the cURL command below to a text editor and make the following changes:
-
-1. Replace `<endpoint>` with your endpoint URL.
-1. Replace `<enter_your_key_here>` with your key.
-1. Replace `<your_list_id>` (in the URL) with a custom name for your list. Also replace the last term of the REST URL with the same name. Allowed characters: 0-9, A-Z, a-z, `- . _ ~`.
-1. Optionally replace the value of the `"description"` field with a custom description.
--
-```shell
-curl --location --request PATCH '<endpoint>/contentsafety/text/blocklists/<your_list_id>?api-version=2023-04-30-preview' \
--header 'Ocp-Apim-Subscription-Key: <enter_your_key_here>' \
--header 'Content-Type: application/json' \
--data-raw '{
- "description": "This is a violence list"
-}'
-```
-
-The response code should be `201` (created a new list) or `200` (updated an existing list).
-
-#### [C#](#tab/csharp)
-
-Create a new C# console app and open it in your preferred editor or IDE. Paste in the following code.
-
-```csharp
-string endpoint = "<endpoint>";
-string key = "<enter_your_key_here>";
-ContentSafetyClient client = new ContentSafetyClient(new Uri(endpoint), new AzureKeyCredential(key));
-
-var blocklistName = "<your_list_id>";
-var blocklistDescription = "<description>";
-
-var data = new
-{
- description = blocklistDescription,
-};
-
-var createResponse = client.CreateOrUpdateTextBlocklist(blocklistName, RequestContent.Create(data));
-if (createResponse.Status == 201)
-{
- Console.WriteLine("\nBlocklist {0} created.", blocklistName);
-}
-else if (createResponse.Status == 200)
-{
- Console.WriteLine("\nBlocklist {0} updated.", blocklistName);
-}
-```
-
-1. Replace `<endpoint>` with your endpoint URL.
-1. Replace `<enter_your_key_here>` with your key.
-1. Replace `<your_list_id>` with a custom name for your list. Also replace the last term of the REST URL with the same name. Allowed characters: 0-9, A-Z, a-z, `- . _ ~`.
-1. Optionally replace `<description>` with a custom description.
-1. Run the script.
-
-#### [Python](#tab/python)
-
-Create a new Python script and open it in your preferred editor or IDE. Paste in the following code.
-
-```python
-import os
-from azure.ai.contentsafety import ContentSafetyClient
-from azure.core.credentials import AzureKeyCredential
-from azure.ai.contentsafety.models import TextBlocklist
-from azure.core.exceptions import HttpResponseError
-
-endpoint = "<endpoint>"
-key = "<enter_your_key_here>"
-
-# Create a Content Safety client
-client = ContentSafetyClient(endpoint, AzureKeyCredential(key))
-
-def create_or_update_text_blocklist(name, description):
- try:
- return client.create_or_update_text_blocklist(
- blocklist_name=name, resource=TextBlocklist(description=description)
- )
- except HttpResponseError as e:
- print("\nCreate or update text blocklist failed: ")
- if e.error:
- print(f"Error code: {e.error.code}")
- print(f"Error message: {e.error.message}")
- raise
- print(e)
- raise
--
-if __name__ == "__main__":
- blocklist_name = "<your_list_id>"
- blocklist_description = "<description>"
-
- # create blocklist
- result = create_or_update_text_blocklist(name=blocklist_name, description=blocklist_description)
- if result is not None:
- print("Blocklist created: {}".format(result))
-```
-
-1. Replace `<endpoint>` with your endpoint URL.
-1. Replace `<enter_your_key_here>` with your key.
-1. Replace `<your_list_id>` with a custom name for your list. Also replace the last term of the REST URL with the same name. Allowed characters: 0-9, A-Z, a-z, `- . _ ~`.
-1. Optionally replace `<description>` with a custom description.
-1. Run the script.
---
-### Add blockItems in the list
-
-> [!NOTE]
->
-> There is a maximum limit of **10,000 terms** in total across all lists. You can add at most 100 blockItems in one request.
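
Because each request accepts at most 100 blockItems, a longer term list has to be uploaded in batches. The helper below is a small illustrative sketch only; `upload_batch` is a placeholder for whichever addBlockItems call you use (REST or one of the SDKs shown in the tabs that follow).

```python
def batched(items, batch_size=100):
    """Yield successive slices of at most batch_size items (the per-request limit)."""
    for start in range(0, len(items), batch_size):
        yield items[start:start + batch_size]

terms = [f"term-{i}" for i in range(250)]  # illustrative blocklist terms

for batch in batched(terms):
    # upload_batch is a placeholder for your own addBlockItems call (REST or SDK).
    # upload_batch(blocklist_name="<your_list_id>", items=batch)
    print(f"Would upload {len(batch)} blockItems in this request")
```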
-
-#### [REST API](#tab/rest)
-
-Copy the cURL command below to a text editor and make the following changes:
-
-1. Replace `<endpoint>` with your endpoint URL.
-1. Replace `<enter_your_key_here>` with your key.
-1. Replace `<your_list_id>` (in the URL) with the ID value you used in the list creation step.
-1. Optionally replace the value of the `"description"` field with a custom description.
-1. Replace the value of the `"text"` field with the item you'd like to add to your blocklist. The maximum length of a blockItem is 128 characters.
-
-```shell
-curl --location --request POST '<endpoint>/contentsafety/text/blocklists/<your_list_id>:addBlockItems?api-version=2023-04-30-preview' \
--header 'Ocp-Apim-Subscription-Key: <enter_your_key_here>' \
--header 'Content-Type: application/json' \
--data-raw '{
-  "blockItems": [{
-    "description": "string",
-    "text": "bleed"
-  }]
-}'
-```
-
-> [!TIP]
-> You can add multiple blockItems in one API call. Make the `"blockItems"` value a JSON array of multiple items:
->
-> [{
-> "description": "string",
-> "text": "bleed"
-> },
-> {
-> "description": "string",
-> "text": "blood"
-> }]
--
-The response code should be `200`.
-
-```json
-{
-    "blockItemId": "string",
-    "description": "string",
-    "text": "bleed"
-}
-```
-
-#### [C#](#tab/csharp)
-
-Create a new C# console app and open it in your preferred editor or IDE. Paste in the following code.
-
-```csharp
-string endpoint = "<endpoint>";
-string key = "<enter_your_key_here>";
-ContentSafetyClient client = new ContentSafetyClient(new Uri(endpoint), new AzureKeyCredential(key));
-
-var blocklistName = "<your_list_id>";
-
-string blockItemText1 = "k*ll";
-string blockItemText2 = "h*te";
-
-var blockItems = new TextBlockItemInfo[] { new TextBlockItemInfo(blockItemText1), new TextBlockItemInfo(blockItemText2) };
-var addedBlockItems = client.AddBlockItems(blocklistName, new AddBlockItemsOptions(blockItems));
-
-if (addedBlockItems != null && addedBlockItems.Value != null)
-{
- Console.WriteLine("\nBlockItems added:");
- foreach (var addedBlockItem in addedBlockItems.Value.Value)
- {
- Console.WriteLine("BlockItemId: {0}, Text: {1}, Description: {2}", addedBlockItem.BlockItemId, addedBlockItem.Text, addedBlockItem.Description);
- }
-}
-```
-
-1. Replace `<endpoint>` with your endpoint URL.
-1. Replace `<enter_your_key_here>` with your key.
-1. Replace `<your_list_id>` with the ID value you used in the list creation step.
-1. Replace the value of the `block_item_text_1` field with the item you'd like to add to your blocklist. The maximum length of a blockItem is 128 characters.
-1. Optionally add more blockItem strings to the `blockItems` parameter.
-1. Run the script.
-
-#### [Python](#tab/python)
-
-Create a new Python script and open it in your preferred editor or IDE. Paste in the following code.
-
-```python
-import os
-from azure.ai.contentsafety import ContentSafetyClient
-from azure.core.credentials import AzureKeyCredential
-from azure.ai.contentsafety.models import TextBlockItemInfo, AddBlockItemsOptions
-from azure.core.exceptions import HttpResponseError
-import time
-
-endpoint = "<endpoint>"
-key = "<enter_your_key_here>"
-
-# Create a Content Safety client
-client = ContentSafetyClient(endpoint, AzureKeyCredential(key))
-
-def add_block_items(name, items):
- block_items = [TextBlockItemInfo(text=i) for i in items]
- try:
- response = client.add_block_items(
- blocklist_name=name,
- body=AddBlockItemsOptions(block_items=block_items),
- )
- except HttpResponseError as e:
- print("\nAdd block items failed: ")
- if e.error:
- print(f"Error code: {e.error.code}")
- print(f"Error message: {e.error.message}")
- raise
- print(e)
- raise
-
- return response.value
--
-if __name__ == "__main__":
- blocklist_name = "<your_list_id>"
-
- block_item_text_1 = "k*ll"
- input_text = "I h*te you and I want to k*ll you."
-
- # add block items
- result = add_block_items(name=blocklist_name, items=[block_item_text_1])
- if result is not None:
- print("Block items added: {}".format(result))
-```
-
-1. Replace `<endpoint>` with your endpoint URL.
-1. Replace `<enter_your_key_here>` with your key.
-1. Replace `<your_list_id>` with the ID value you used in the list creation step.
-1. Replace the value of the `block_item_text_1` field with the item you'd like to add to your blocklist. The maximum length of a blockItem is 128 characters.
-1. Optionally add more blockItem strings to the `add_block_items` parameter.
-1. Run the script.
----
-> [!NOTE]
->
-> There will be some delay after you add or edit a blockItem before it takes effect on text analysis, usually **not more than five minutes**.
-
-### Analyze text with a blocklist
-
-#### [REST API](#tab/rest)
-
-Copy the cURL command below to a text editor and make the following changes:
-1. Replace `<endpoint>` with your endpoint URL.
-1. Replace `<enter_your_key_here>` with your key.
-1. Replace `<your_list_id>` with the ID value you used in the list creation step. The `"blocklistNames"` field can contain an array of multiple list IDs.
-1. Optionally change the value of `"breakByBlocklists"`. `true` indicates that once a blocklist is matched, the analysis will return immediately without model output. `false` will cause the model to continue to do analysis in the default categories.
-1. Optionally change the value of the `"text"` field to whatever text you want to analyze.
-
-```shell
-curl --location --request POST '<endpoint>/contentsafety/text:analyze?api-version=2023-04-30-preview' \
--header 'Ocp-Apim-Subscription-Key: <enter_your_key_here>' \
--header 'Content-Type: application/json' \
--data-raw '{
- "text": "I want to beat you till you bleed",
- "categories": [
- "Hate",
- "Sexual",
- "SelfHarm",
- "Violence"
- ],
- "blocklistNames":["<your_list_id>"],
- "breakByBlocklists": true
-}'
-```
-
-The JSON response contains a `"blocklistMatchResults"` field that indicates any matches with your blocklist. It reports the location in the text string where the match was found.
-
-```json
-{
- "blocklistMatchResults": [
- {
- "blocklistName": "string",
- "blockItemID": "string",
- "blockItemText": "bleed",
- "offset": "28",
- "length": "5"
- }
- ]
-}
-```
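
The `offset` and `length` fields locate each matched term inside the text you submitted. As a small sketch (not part of the API), the snippet below pulls those spans back out of the input string, assuming the response has already been parsed into a Python dictionary shaped like the JSON above; note that the sample values are strings, so they're converted to integers first.

```python
def matched_spans(text, response):
    """Return the substrings of `text` that each blocklist match points at."""
    spans = []
    for match in response.get("blocklistMatchResults", []):
        offset, length = int(match["offset"]), int(match["length"])
        spans.append(text[offset:offset + length])
    return spans

text = "I want to beat you till you bleed"
response = {
    "blocklistMatchResults": [
        {"blocklistName": "string", "blockItemID": "string",
         "blockItemText": "bleed", "offset": "28", "length": "5"}
    ]
}
print(matched_spans(text, response))  # ['bleed']
```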
-
-#### [C#](#tab/csharp)
-
-Create a new C# console app and open it in your preferred editor or IDE. Paste in the following code.
-
-```csharp
-string endpoint = "<endpoint>";
-string key = "<enter_your_key_here>";
-ContentSafetyClient client = new ContentSafetyClient(new Uri(endpoint), new AzureKeyCredential(key));
-
-var blocklistName = "<your_list_id>";
-
-// After you edit your blocklist, it usually takes up to 5 minutes for the change to take effect. Wait before you analyze text with the edited blocklist.
-var request = new AnalyzeTextOptions("I h*te you and I want to k*ll you");
-request.BlocklistNames.Add(blocklistName);
-request.BreakByBlocklists = true;
-
-Response<AnalyzeTextResult> response;
-try
-{
- response = client.AnalyzeText(request);
-}
-catch (RequestFailedException ex)
-{
- Console.WriteLine("Analyze text failed.\nStatus code: {0}, Error code: {1}, Error message: {2}", ex.Status, ex.ErrorCode, ex.Message);
- throw;
-}
-
-if (response.Value.BlocklistsMatchResults != null)
-{
- Console.WriteLine("\nBlocklist match result:");
- foreach (var matchResult in response.Value.BlocklistsMatchResults)
- {
- Console.WriteLine("Blockitem was hit in text: Offset: {0}, Length: {1}", matchResult.Offset, matchResult.Length);
- Console.WriteLine("BlocklistName: {0}, BlockItemId: {1}, BlockItemText: {2}, ", matchResult.BlocklistName, matchResult.BlockItemId, matchResult.BlockItemText);
- }
-}
-```
-
-1. Replace `<endpoint>` with your endpoint URL.
-1. Replace `<enter_your_key_here>` with your key.
-1. Replace `<your_list_id>` with the ID value you used in the list creation step.
-1. Replace the `request` input text with whatever text you want to analyze.
-1. Run the script.
-
-#### [Python](#tab/python)
-
-Create a new Python script and open it in your preferred editor or IDE. Paste in the following code.
-
-```python
-import os
-from azure.ai.contentsafety import ContentSafetyClient
-from azure.core.credentials import AzureKeyCredential
-from azure.ai.contentsafety.models import AnalyzeTextOptions
-from azure.core.exceptions import HttpResponseError
-import time
-
-endpoint = "<endpoint>"
-key = "<enter_your_key_here>"
-
-# Create a Content Safety client
-client = ContentSafetyClient(endpoint, AzureKeyCredential(key))
-
-def analyze_text_with_blocklists(name, text):
- try:
- response = client.analyze_text(
- AnalyzeTextOptions(text=text, blocklist_names=[name], break_by_blocklists=False)
- )
- except HttpResponseError as e:
- print("\nAnalyze text failed: ")
- if e.error:
- print(f"Error code: {e.error.code}")
- print(f"Error message: {e.error.message}")
- raise
- print(e)
- raise
-
- return response.blocklists_match_results
--
-if __name__ == "__main__":
- blocklist_name = "<your_list_id>"
- input_text = "I h*te you and I want to k*ll you."
-
- # analyze text
- match_results = analyze_text_with_blocklists(name=blocklist_name, text=input_text)
- for match_result in match_results:
- print("Block item {} in {} was hit, text={}, offset={}, length={}."
- .format(match_result.block_item_id, match_result.blocklist_name, match_result.block_item_text, match_result.offset, match_result.length))
-```
-
-1. Replace `<endpoint>` with your endpoint URL.
-1. Replace `<enter_your_key_here>` with your key.
-1. Replace `<your_list_id>` with the ID value you used in the list creation step.
-1. Replace the `input_text` variable with whatever text you want to analyze.
-1. Run the script.
--
-## Other blocklist operations
-
-This section contains more operations to help you manage and use the blocklist feature.
-
-### Get all blockItems in a list
-
-#### [REST API](#tab/rest)
--
-Copy the cURL command below to a text editor and make the following changes:
-
-1. Replace `<endpoint>` with your endpoint URL.
-1. Replace `<enter_your_key_here>` with your key.
-1. Replace `<your_list_id>` (in the request URL) with the ID value you used in the list creation step.
-
-```shell
-curl --location --request GET '<endpoint>/contentsafety/text/blocklists/<your_list_id>/blockItems?api-version=2023-04-30-preview' \
--header 'Ocp-Apim-Subscription-Key: <enter_your_key_here>' \
--header 'Content-Type: application/json'
-```
-
-The status code should be `200` and the response body should look like this:
-
-```json
-{
- "values": [
- {
- "blockItemId": "string",
- "description": "string",
- "text": "bleed",
- }
- ]
-}
-```
-
-#### [C#](#tab/csharp)
-
-Create a new C# console app and open it in your preferred editor or IDE. Paste in the following code.
-
-```csharp
-string endpoint = "<endpoint>";
-string key = "<enter_your_key_here>";
-ContentSafetyClient client = new ContentSafetyClient(new Uri(endpoint), new AzureKeyCredential(key));
-
-var blocklistName = "<your_list_id>";
-
-var allBlockitems = client.GetTextBlocklistItems(blocklistName);
-Console.WriteLine("\nList BlockItems:");
-foreach (var blocklistItem in allBlockitems)
-{
- Console.WriteLine("BlockItemId: {0}, Text: {1}, Description: {2}", blocklistItem.BlockItemId, blocklistItem.Text, blocklistItem.Description);
-}
-```
-
-1. Replace `<endpoint>` with your endpoint URL.
-1. Replace `<enter_your_key_here>` with your key.
-1. Replace `<your_list_id>` with the ID value you used in the list creation step.
-1. Run the script.
-
-#### [Python](#tab/python)
-
-Create a new Python script and open it in your preferred editor or IDE. Paste in the following code.
--
-```python
-import os
-from azure.ai.contentsafety import ContentSafetyClient
-from azure.core.credentials import AzureKeyCredential
-from azure.core.exceptions import HttpResponseError
-
-endpoint = "<endpoint>"
-key = "<enter_your_key_here>"
-
-# Create a Content Safety client
-client = ContentSafetyClient(endpoint, AzureKeyCredential(key))
-
-def list_block_items(name):
- try:
- response = client.list_text_blocklist_items(blocklist_name=name)
- return list(response)
- except HttpResponseError as e:
- print("\nList block items failed: ")
- if e.error:
- print(f"Error code: {e.error.code}")
- print(f"Error message: {e.error.message}")
- raise
- print(e)
- raise
--
-if __name__ == "__main__":
- blocklist_name = "<your_list_id>"
-
- result = list_block_items(name=blocklist_name)
- if result is not None:
- print("Block items: {}".format(result))
-```
-
-1. Replace `<endpoint>` with your endpoint URL.
-1. Replace `<enter_your_key_here>` with your key.
-1. Replace `<your_list_id>` with the ID value you used in the list creation step.
-1. Run the script.
----
-### Get all blocklists
-
-#### [REST API](#tab/rest)
-
-Copy the cURL command below to a text editor and make the following changes:
-
-1. Replace `<endpoint>` with your endpoint URL.
-1. Replace `<enter_your_key_here>` with your key.
--
-```shell
-curl --location --request GET '<endpoint>/contentsafety/text/blocklists?api-version=2023-04-30-preview' \
--header 'Ocp-Apim-Subscription-Key: <enter_your_key_here>' \
--header 'Content-Type: application/json'
-```
-
-The status code should be `200`. The JSON response looks like this:
-
-```json
-"value": [
- {
- "blocklistName": "string",
- "description": "string"
- }
-]
-```
-
-#### [C#](#tab/csharp)
-
-Create a new C# console app and open it in your preferred editor or IDE. Paste in the following code.
-
-```csharp
-string endpoint = "<endpoint>";
-string key = "<enter_your_key_here>";
-ContentSafetyClient client = new ContentSafetyClient(new Uri(endpoint), new AzureKeyCredential(key));
--
-var blocklists = client.GetTextBlocklists();
-Console.WriteLine("\nList blocklists:");
-foreach (var blocklist in blocklists)
-{
- Console.WriteLine("BlocklistName: {0}, Description: {1}", blocklist.BlocklistName, blocklist.Description);
-}
-```
-
-1. Replace `<endpoint>` with your endpoint URL.
-1. Replace `<enter_your_key_here>` with your key.
-1. Run the script.
-
-#### [Python](#tab/python)
-
-Create a new Python script and open it in your preferred editor or IDE. Paste in the following code.
-
-```python
-import os
-from azure.ai.contentsafety import ContentSafetyClient
-from azure.core.credentials import AzureKeyCredential
-from azure.core.exceptions import HttpResponseError
--
-endpoint = "<endpoint>"
-key = "<enter_your_key_here>"
--
-# Create a Content Safety client
-client = ContentSafetyClient(endpoint, AzureKeyCredential(key))
-
-def list_text_blocklists():
- try:
- return client.list_text_blocklists()
- except HttpResponseError as e:
- print("\nList text blocklists failed: ")
- if e.error:
- print(f"Error code: {e.error.code}")
- print(f"Error message: {e.error.message}")
- raise
- print(e)
- raise
-if __name__ == "__main__":
- # list blocklists
- result = list_text_blocklists()
- if result is not None:
- print("List blocklists: ")
- for l in result:
- print(l)
-```
-
-1. Replace `<endpoint>` with your endpoint URL.
-1. Replace `<enter_your_key_here>` with your key.
-1. Run the script.
---
-### Get a blocklist by name
-
-#### [REST API](#tab/rest)
-
-Copy the cURL command below to a text editor and make the following changes:
-
-1. Replace `<endpoint>` with your endpoint URL.
-1. Replace `<enter_your_key_here>` with your key.
-1. Replace `<your_list_id>` (in the request URL) with the ID value you used in the list creation step.
-
-```shell
-curl --location '<endpoint>/contentsafety/text/blocklists/<your_list_id>?api-version=2023-04-30-preview' \
--header 'Ocp-Apim-Subscription-Key: <enter_your_key_here>' \
--data ''
-```
-
-The status code should be `200`. The JSON response looks like this:
-
-```json
-{
- "blocklistName": "string",
- "description": "string"
-}
-```
-
-#### [C#](#tab/csharp)
-
-Create a new C# console app and open it in your preferred editor or IDE. Paste in the following code.
-
-```csharp
-string endpoint = "<endpoint>";
-string key = "<enter_your_key_here>";
-ContentSafetyClient client = new ContentSafetyClient(new Uri(endpoint), new AzureKeyCredential(key));
-
-var blocklistName = "<your_list_id>";
-
-var getBlocklist = client.GetTextBlocklist(blocklistName);
-if (getBlocklist != null && getBlocklist.Value != null)
-{
- Console.WriteLine("\nGet blocklist:");
- Console.WriteLine("BlocklistName: {0}, Description: {1}", getBlocklist.Value.BlocklistName, getBlocklist.Value.Description);
-}
-```
-1. Replace `<endpoint>` with your endpoint URL.
-1. Replace `<enter_your_key_here>` with your key.
-1. Replace `<your_list_id>` with the ID value you used in the list creation step.
-1. Run the script.
-
-#### [Python](#tab/python)
-
-Create a new Python script and open it in your preferred editor or IDE. Paste in the following code.
-
-```python
-import os
-from azure.ai.contentsafety import ContentSafetyClient
-from azure.core.credentials import AzureKeyCredential
-from azure.ai.contentsafety.models import TextBlocklist
-from azure.core.exceptions import HttpResponseError
-
-endpoint = "<endpoint>"
-key = "<enter_your_key_here>"
-
-# Create a Content Safety client
-client = ContentSafetyClient(endpoint, AzureKeyCredential(key))
-
-def get_text_blocklist(name):
- try:
- return client.get_text_blocklist(blocklist_name=name)
- except HttpResponseError as e:
- print("\nGet text blocklist failed: ")
- if e.error:
- print(f"Error code: {e.error.code}")
- print(f"Error message: {e.error.message}")
- raise
- print(e)
- raise
-
-if __name__ == "__main__":
- blocklist_name = "<your_list_id>"
-
- # get blocklist
- result = get_text_blocklist(blocklist_name)
- if result is not None:
- print("Get blocklist: {}".format(result))
-```
-
-1. Replace `<endpoint>` with your endpoint URL.
-1. Replace `<enter_your_key_here>` with your key.
-1. Replace `<your_list_id>` with the ID value you used in the list creation step.
-1. Run the script.
----
-### Get a blockItem by blockItem ID
-
-#### [REST API](#tab/rest)
-
-Copy the cURL command below to a text editor and make the following changes:
-
-1. Replace `<endpoint>` with your endpoint URL.
-1. Replace `<enter_your_key_here>` with your key.
-1. Replace `<your_list_id>` (in the request URL) with the ID value you used in the list creation step.
-1. Replace `<your_item_id>` with the ID value for the blockItem. This is the value of the `"blockItemId"` field from the **Add blockItem** or **Get all blockItems** API calls.
--
-```shell
-curl --location '<endpoint>/contentsafety/text/blocklists/<your_list_id>/blockitems/<your_item_id>?api-version=2023-04-30-preview' \
--header 'Ocp-Apim-Subscription-Key: <enter_your_key_here>' \
--data ''
-```
-
-The status code should be `200`. The JSON response looks like this:
-
-```json
-{
- "blockItemId": "string",
- "description": "string",
- "text": "string"
-}
-```
-
-#### [C#](#tab/csharp)
-
-Create a new C# console app and open it in your preferred editor or IDE. Paste in the following code.
-
-```csharp
-string endpoint = "<endpoint>";
-string key = "<enter_your_key_here>";
-ContentSafetyClient client = new ContentSafetyClient(new Uri(endpoint), new AzureKeyCredential(key));
-
-var blocklistName = "<your_list_id>";
-
-var getBlockItemId = "<your_item_id>"; // the ID of a previously added blockItem
-var getBlockItem = client.GetTextBlocklistItem(blocklistName, getBlockItemId);
-Console.WriteLine("\nGet BlockItem:");
-Console.WriteLine("BlockItemId: {0}, Text: {1}, Description: {2}", getBlockItem.Value.BlockItemId, getBlockItem.Value.Text, getBlockItem.Value.Description);
-```
-
-1. Replace `<endpoint>` with your endpoint URL.
-1. Replace `<enter_your_key_here>` with your key.
-1. Replace `<your_list_id>` with the ID value you used in the list creation step.
-1. Replace `<your_item_id>` in the `getBlockItemId` assignment with the ID value of a previously added blockItem. This is the value of the `"blockItemId"` field from the **Add blockItem** or **Get all blockItems** API calls.
-1. Run the script.
-
-#### [Python](#tab/python)
-
-Create a new Python script and open it in your preferred editor or IDE. Paste in the following code.
-
-```python
-import os
-from azure.ai.contentsafety import ContentSafetyClient
-from azure.core.credentials import AzureKeyCredential
-from azure.core.exceptions import HttpResponseError
-
-endpoint = "<endpoint>"
-key = "<enter_your_key_here>"
-
-# Create a Content Safety client
-client = ContentSafetyClient(endpoint, AzureKeyCredential(key))
-
-def get_block_item(name, item_id):
- try:
- return client.get_text_blocklist_item(blocklist_name=name, block_item_id=item_id)
- except HttpResponseError as e:
- print("Get block item failed.")
- print("Error code: {}".format(e.error.code))
- print("Error message: {}".format(e.error.message))
- return None
- except Exception as e:
- print(e)
- return None
-
-if __name__ == "__main__":
- blocklist_name = "<your_list_id>"
- block_item = get_block_item(blocklist_name, "<your_item_id>")
-```
-
-1. Replace `<endpoint>` with your endpoint URL.
-1. Replace `<enter_your_key_here>` with your key.
-1. Replace `<your_list_id>` with the ID value you used in the list creation step.
-1. Replace `<your_item_id>` with the ID value for the blockItem. This is the value of the `"blockItemId"` field from the **Add blockItem** or **Get all blockItems** API calls.
-1. Run the script.
---
-### Remove a blockItem from a list
-
-> [!NOTE]
->
-> There will be some delay after you delete an item before it takes effect on text analysis, usually **not more than five minutes**.
-
-#### [REST API](#tab/rest)
--
-Copy the cURL command below to a text editor and make the following changes:
-
-1. Replace `<endpoint>` with your endpoint URL.
-1. Replace `<enter_your_key_here>` with your key.
-1. Replace `<your_list_id>` (in the request URL) with the ID value you used in the list creation step.
-1. Replace `<item_id>` with the ID value for the blockItem. This is the value of the `"blockItemId"` field from the **Add blockItem** or **Get all blockItems** API calls.
--
-```shell
-curl --location --request DELETE '<endpoint>/contentsafety/text/blocklists/<your_list_id>/removeBlockItems?api-version=2023-04-30-preview' \
--header 'Ocp-Apim-Subscription-Key: <enter_your_key_here>' \
--header 'Content-Type: application/json' \
--data-raw '{
-  "blockItemIds": [
-    "<item_id>"
-  ]
-}'
-```
-
-> [!TIP]
-> You can delete multiple blockItems in one API call. Make the `"blockItemIds"` value an array of multiple ID values.
-
-The response code should be `204`.
-
-#### [C#](#tab/csharp)
-
-Create a new C# console app and open it in your preferred editor or IDE. Paste in the following code.
-
-```csharp
-string endpoint = "<endpoint>";
-string key = "<enter_your_key_here>";
-ContentSafetyClient client = new ContentSafetyClient(new Uri(endpoint), new AzureKeyCredential(key));
-
-var blocklistName = "<your_list_id>";
-
-var removeBlockItemId = "<your_item_id>"; // the ID of a previously added blockItem
-var removeBlockItemIds = new List<string> { removeBlockItemId };
-var removeResult = client.RemoveBlockItems(blocklistName, new RemoveBlockItemsOptions(removeBlockItemIds));
-
-if (removeResult != null && removeResult.Status == 204)
-{
- Console.WriteLine("\nBlockItem removed: {0}.", removeBlockItemId);
-}
-```
-
-1. Replace `<endpoint>` with your endpoint URL.
-1. Replace `<enter_your_key_here>` with your key.
-1. Replace `<your_list_id>` with the ID value you used in the list creation step.
-1. Replace `<your_item_id>` in the `removeBlockItemId` assignment with the ID value of a previously added blockItem. This is the value of the `"blockItemId"` field from the **Add blockItem** or **Get all blockItems** API calls.
-1. Run the script.
-
-#### [Python](#tab/python)
-
-Create a new Python script and open it in your preferred editor or IDE. Paste in the following code.
-
-```python
-import os
-from azure.ai.contentsafety import ContentSafetyClient
-from azure.core.credentials import AzureKeyCredential
-from azure.ai.contentsafety.models import RemoveBlockItemsOptions
-from azure.core.exceptions import HttpResponseError
--
-endpoint = "<endpoint>"
-key = "<enter_your_key_here>"
-
-# Create a Content Safety client
-client = ContentSafetyClient(endpoint, AzureKeyCredential(key))
-
-def remove_block_items(name, ids):
- request = RemoveBlockItemsOptions(block_item_ids=ids)
- try:
- client.remove_block_items(blocklist_name=name, body=request)
- return True
- except HttpResponseError as e:
- print("Remove block items failed.")
- print("Error code: {}".format(e.error.code))
- print("Error message: {}".format(e.error.message))
- return False
- except Exception as e:
- print(e)
- return False
-
-if __name__ == "__main__":
- blocklist_name = "<your_list_id>"
- remove_id = "<your_item_id>"
-
- # remove one blocklist item
- if remove_block_items(name=blocklist_name, ids=[remove_id]):
- print("Block item removed: {}".format(remove_id))
-```
-
-1. Replace `<endpoint>` with your endpoint URL.
-1. Replace `<enter_your_key_here>` with your key.
-1. Replace `<your_list_id>` with the ID value you used in the list creation step.
-1. Replace `<your_item_id>` with the ID value for the blockItem. This is the value of the `"blockItemId"` field from the **Add blockItem** or **Get all blockItems** API calls.
-1. Run the script.
---
-### Delete a list and all of its contents
-
-> [!NOTE]
->
-> There will be some delay after you delete a list before it takes effect on text analysis, usually **not more than five minutes**.
-
-#### [REST API](#tab/rest)
-
-Copy the cURL command below to a text editor and make the following changes:
--
-1. Replace `<endpoint>` with your endpoint URL.
-1. Replace `<enter_your_key_here>` with your key.
-1. Replace `<your_list_id>` (in the request URL) with the ID value you used in the list creation step.
-
-```shell
-curl --location --request DELETE '<endpoint>/contentsafety/text/blocklists/<your_list_id>?api-version=2023-04-30-preview' \
--header 'Ocp-Apim-Subscription-Key: <enter_your_key_here>' \
--header 'Content-Type: application/json'
-```
-
-The response code should be `204`.
-
-#### [C#](#tab/csharp)
-
-Create a new C# console app and open it in your preferred editor or IDE. Paste in the following code.
-
-```csharp
-string endpoint = "<endpoint>";
-string key = "<enter_your_key_here>";
-ContentSafetyClient client = new ContentSafetyClient(new Uri(endpoint), new AzureKeyCredential(key));
-
-var blocklistName = "<your_list_id>";
-
-var deleteResult = client.DeleteTextBlocklist(blocklistName);
-if (deleteResult != null && deleteResult.Status == 204)
-{
- Console.WriteLine("\nDeleted blocklist.");
-}
-```
-
-1. Replace `<endpoint>` with your endpoint URL.
-1. Replace `<enter_your_key_here>` with your key.
-1. Replace `<your_list_id>` with the ID value you used in the list creation step.
-1. Run the script.
-
-#### [Python](#tab/python)
-
-Create a new Python script and open it in your preferred editor or IDE. Paste in the following code.
-
-```python
-import os
-from azure.ai.contentsafety import ContentSafetyClient
-from azure.core.credentials import AzureKeyCredential
-from azure.core.exceptions import HttpResponseError
-
-endpoint = "<endpoint>"
-key = "<enter_your_key_here>"
-
-# Create a Content Safety client
-client = ContentSafetyClient(endpoint, AzureKeyCredential(key))
-def delete_blocklist(name):
- try:
- client.delete_text_blocklist(blocklist_name=name)
- return True
- except HttpResponseError as e:
- print("Delete blocklist failed.")
- print("Error code: {}".format(e.error.code))
- print("Error message: {}".format(e.error.message))
- return False
- except Exception as e:
- print(e)
- return False
-
-if __name__ == "__main__":
- blocklist_name = "<your_list_id>"
- # delete blocklist
- if delete_blocklist(name=blocklist_name):
- print("Blocklist {} deleted successfully.".format(blocklist_name))
-```
-
-1. Replace `<endpoint>` with your endpoint URL.
-1. Replace `<enter_your_key_here>` with your key.
-1. Replace `<your_list_id>` with the ID value you used in the list creation step.
-1. Run the script.
---
-## Next steps
-
-See the API reference documentation to learn more about the APIs used in this guide.
-
-* [Content Safety API reference](https://aka.ms/content-safety-api)
cognitive-services Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/content-safety/overview.md
- Title: What is Azure AI Content Safety? (preview)-
-description: Learn how to use Content Safety to track, flag, assess, and filter inappropriate material in user-generated content.
- Previously updated : 04/02/2023
-keywords: content safety, azure content safety, online content safety, content filtering software, content moderation service, content moderation
-
-#Customer intent: As a developer of content management software, I want to find out whether Azure AI Content Safety is the right solution for my moderation needs.
--
-# What is Azure AI Content Safety? (preview)
-
-Azure AI Content Safety detects harmful user-generated and AI-generated content in applications and services. Content Safety includes text and image APIs that allow you to detect material that is harmful. We also have an interactive Content Safety Studio that allows you to view, explore and try out sample code for detecting harmful content across different modalities.
-
-Content filtering software can help your app comply with regulations or maintain the intended environment for your users.
---
-This documentation contains the following article types:
-
-* **[Quickstarts](./quickstart-text.md)** are getting-started instructions to guide you through making requests to the service.
-* **[How-to guides](./how-to/use-blocklist.md)** contain instructions for using the service in more specific or customized ways.
-* **[Concepts](concepts/harm-categories.md)** provide in-depth explanations of the service functionality and features.
-
-## Where it's used
-
-The following are a few scenarios in which a software developer or team would require a content moderation service:
-- Online marketplaces that moderate product catalogs and other user-generated content.
-- Gaming companies that moderate user-generated game artifacts and chat rooms.
-- Social messaging platforms that moderate images and text added by their users.
-- Enterprise media companies that implement centralized moderation for their content.
-- K-12 education solution providers filtering out content that is inappropriate for students and educators.
-> [!IMPORTANT]
-> You cannot use Content Safety to detect illegal child exploitation images.
-
-## Product types
-
-There are different types of analysis available from this service. The following table describes the currently available APIs.
-
-| Type | Functionality |
-| :-- | :- |
-| Text Detection API | Scans text for sexual content, violence, hate, and self harm with multi-severity levels. |
-| Image Detection API | Scans images for sexual content, violence, hate, and self harm with multi-severity levels. |
-
-## Content Safety Studio
-
-[Azure AI Content Safety Studio](https://contentsafety.cognitive.azure.com) is an online tool designed to handle potentially offensive, risky, or undesirable content using cutting-edge content moderation ML models. It provides templates and customized workflows, enabling users to choose and build their own content moderation system. Users can upload their own content or try it out with provided sample content.
-
-Content Safety Studio not only contains the out-of-the-box AI models, but also includes Microsoft's built-in terms blocklists to flag profanities and stay up to date with new trends. You can also upload your own blocklists to enhance the coverage of harmful content that's specific to your use case.
-
-Studio also lets you set up a moderation workflow, where you can continuously monitor and improve content moderation performance. It can help you meet content requirements from all kinds of industries like gaming, media, education, E-commerce, and more. Businesses can easily connect their services to the Studio and have their content moderated in real time, whether user-generated or AI-generated.
-
-All of these capabilities are handled by the Studio and its backend; customers don't need to worry about model development. You can onboard your data for quick validation and monitor your KPIs accordingly, like technical metrics (latency, accuracy, recall), or business metrics (block rate, block volume, category proportions, language proportions and more). With simple operations and configurations, customers can test different solutions quickly and find the best fit, instead of spending time experimenting with custom models or doing moderation manually.
-
-> [!div class="nextstepaction"]
-> [Content Safety Studio](https://contentsafety.cognitive.azure.com)
--
-### Content Safety Studio features
-
-In Content Safety Studio, the following Content Safety service features are available:
-
-* [Moderate Text Content](https://contentsafety.cognitive.azure.com/text): With the text moderation tool, you can easily run tests on text content. Whether you want to test a single sentence or an entire dataset, our tool offers a user-friendly interface that lets you assess the test results directly in the portal. You can experiment with different sensitivity levels to configure your content filters and blocklist management, ensuring that your content is always moderated to your exact specifications. Plus, with the ability to export the code, you can implement the tool directly in your application, streamlining your workflow and saving time.
-
-* [Moderate Image Content](https://contentsafety.cognitive.azure.com/image): With the image moderation tool, you can easily run tests on images to ensure that they meet your content standards. Our user-friendly interface allows you to evaluate the test results directly in the portal, and you can experiment with different sensitivity levels to configure your content filters. Once you've customized your settings, you can easily export the code to implement the tool in your application.
-
-* [Monitor Online Activity](https://contentsafety.cognitive.azure.com/monitor): The powerful monitoring page allows you to easily track your moderation API usage and trends across different modalities. With this feature, you can access detailed response information, including category and severity distribution, latency, error, and blocklist detection. This information provides you with a complete overview of your content moderation performance, enabling you to optimize your workflow and ensure that your content is always moderated to your exact specifications. With our user-friendly interface, you can quickly and easily navigate the monitoring page to access the information you need to make informed decisions about your content moderation strategy. You have the tools you need to stay on top of your content moderation performance and achieve your content goals.
-
-## Input requirements
-
-The default maximum length for text submissions is 1000 characters. If you need to analyze longer blocks of text, you can split the input text (for example, by punctuation or spacing) across multiple related submissions.
-
-The maximum size for image submissions is 4 MB, and image dimensions must be between 50 x 50 pixels and 2,048 x 2,048 pixels. Images can be in JPEG, PNG, GIF, BMP, TIFF, or WEBP formats.
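
If your inputs can exceed these limits, it's easy to pre-check them on the client before you submit. The sketch below is illustrative only: the text splitter uses a plain fixed-size split (in practice you'd prefer punctuation or sentence boundaries, as noted above), and the image check assumes the Pillow package (`pip install pillow`).

```python
import os
from PIL import Image

MAX_TEXT_CHARS = 1000
MAX_IMAGE_BYTES = 4 * 1024 * 1024
MIN_DIM, MAX_DIM = 50, 2048

def split_text(text, limit=MAX_TEXT_CHARS):
    """Split long text into submissions of at most `limit` characters."""
    return [text[i:i + limit] for i in range(0, len(text), limit)]

def image_is_acceptable(path):
    """Check the documented file-size and dimension limits before submitting an image."""
    if os.path.getsize(path) > MAX_IMAGE_BYTES:
        return False
    with Image.open(path) as img:
        width, height = img.size
    return MIN_DIM <= width <= MAX_DIM and MIN_DIM <= height <= MAX_DIM

chunks = split_text("some very long text... " * 200)
print(len(chunks), "text submissions needed")
```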
-
-## Security
-
-### Use Azure Active Directory to manage access
-
-For enhanced security, you can use Azure Active Directory (Azure AD) to manage access to your resources. You can grant access to other users within your organization by assigning them the roles of **Cognitive Services Users** and **Reader**. To learn more about granting user access to Azure resources using the Azure Portal, please refer to the [Role-based access control guide](/azure/role-based-access-control/quickstart-assign-role-user-portal).
-
-### Encryption of data at rest
-
-Learn how Content Safety handles the [encryption and decryption of your data](./how-to/encrypt-data-at-rest.md). Customer-managed keys (CMK), also known as Bring your own key (BYOK), offer greater flexibility to create, rotate, disable, and revoke access controls. You can also audit the encryption keys used to protect your data.
-
-## Pricing
-
-Currently, the public preview features are available in the **F0 and S0** pricing tiers.
-
-## Service limits
-
-### Language availability
-
-This API supports eight languages: English, German, Japanese, Spanish, French, Italian, Portuguese, and Chinese. You don't need to specify a language code for text analysis; we automatically detect your input language.
-
-### Region / location
-
-To use the preview APIs, you must create your Azure AI Content Safety resource in a supported region. Currently, the public preview features are available in the following Azure regions:
-- East US
-- West Europe
-Feel free to [contact us](mailto:acm-team@microsoft.com) if you need other regions for your business.
-
-### Query rates
-
-| Pricing Tier | Requests per second (RPS) |
-| :-- | :-- |
-| F0 | 5 |
-| S0 | 10 |
-
-If you need a higher rate, please [contact us](mailto:acm-team@microsoft.com) to request it.
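If your application occasionally bursts above these limits, a common client-side pattern is to back off and retry when the service returns HTTP 429. A hedged sketch, where `send_request` is a placeholder for whatever call your application makes and is expected to return a `requests`-style response:

```python
import random
import time

def call_with_backoff(send_request, max_retries: int = 5):
    """Retry a callable whenever the service responds with HTTP 429 (throttled)."""
    for attempt in range(max_retries):
        response = send_request()
        if response.status_code != 429:
            return response
        # Honor Retry-After if present; otherwise use exponential backoff with jitter.
        retry_after = response.headers.get("Retry-After")
        delay = float(retry_after) if retry_after else (2 ** attempt) + random.random()
        time.sleep(delay)
    return send_request()  # final attempt
```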
--
-## Contact us
-
-If you get stuck, [email us](mailto:acm-team@microsoft.com) or use the feedback widget on the upper right of any docs page.
-
-## Next steps
-
-Follow a quickstart to get started using Content Safety in your application.
-
-> [!div class="nextstepaction"]
-> [Content Safety quickstart](./quickstart-text.md)
cognitive-services Quickstart Image https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/content-safety/quickstart-image.md
- Title: "Quickstart: Analyze image content"-
-description: Get started using Content Safety to analyze image content for objectionable material.
------- Previously updated : 05/08/2023-
-zone_pivot_groups: programming-languages-content-safety
-keywords:
--
-# QuickStart: Analyze image content
-
-Get started with Content Safety Studio, the REST API, or the client SDKs to do basic image moderation. The Content Safety service provides you with AI algorithms for flagging objectionable content. Follow these steps to try it out.
-
-> [!NOTE]
->
-> The sample data and code may contain offensive content. User discretion is advised.
------------
-## Clean up resources
-
-If you want to clean up and remove a Cognitive Services subscription, you can delete the resource or resource group. Deleting the resource group also deletes any other resources associated with it.
-- [Portal](/azure/cognitive-services/cognitive-services-apis-create-account#clean-up-resources)
-- [Azure CLI](/azure/cognitive-services/cognitive-services-apis-create-account-cli#clean-up-resources)
-## Next steps
-
-Configure filters for each category and test on datasets using [Content Safety Studio](studio-quickstart.md), then export the code and deploy it.
-
-> [!div class="nextstepaction"]
-> [Content Safety Studio quickstart](./studio-quickstart.md)
cognitive-services Quickstart Text https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/content-safety/quickstart-text.md
- Title: "Quickstart: Analyze image and text content"-
-description: Get started using Content Safety to analyze image and text content for objectionable material.
------- Previously updated : 04/06/2023-
-zone_pivot_groups: programming-languages-content-safety
-keywords:
--
-# QuickStart: Analyze text content
-
-Get started with the Content Safety Studio, REST API, or client SDKs to do basic text moderation. The Content Safety service provides you with AI algorithms for flagging objectionable content. Follow these steps to try it out.
-
-> [!NOTE]
->
-> The sample data and code may contain offensive content. User discretion is advised.
------------
-## Clean up resources
-
-If you want to clean up and remove a Cognitive Services subscription, you can delete the resource or resource group. Deleting the resource group also deletes any other resources associated with it.
-- [Portal](/azure/cognitive-services/cognitive-services-apis-create-account#clean-up-resources)
-- [Azure CLI](/azure/cognitive-services/cognitive-services-apis-create-account-cli#clean-up-resources)
-## Next steps
-Configure filters for each category and test on datasets using [Content Safety Studio](studio-quickstart.md), then export the code and deploy it.
-
-> [!div class="nextstepaction"]
-> [Content Safety Studio quickstart](./studio-quickstart.md)
cognitive-services Studio Quickstart https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/content-safety/studio-quickstart.md
- Title: "Quickstart: Content Safety Studio"-
-description: In this quickstart, get started with the Content Safety service using Content Safety Studio in your browser.
-------- Previously updated : 04/27/2023---
-# QuickStart: Azure AI Content Safety Studio
-
-In this quickstart, get started with the Content Safety service using Content Safety Studio in your browser.
-
-> [!CAUTION]
-> Some of the sample content provided by Content Safety Studio may be offensive. Sample images are blurred by default. User discretion is advised.
-
-## Prerequisites
-
-* An active Azure account. If you don't have one, you can [create one for free](https://azure.microsoft.com/free/cognitive-services/).
-* A [Content Safety](https://aka.ms/acs-create) Azure resource.
-* Sign in to [Content Safety Studio](https://contentsafety.cognitive.azure.com) with your Azure subscription and Content Safety resource.
--
-## Analyze text content
-The [Moderate text content](https://contentsafety.cognitive.azure.com/text) page lets you quickly try out text moderation.
--
-1. Select the **Moderate text content** panel.
-1. Add text to the input field, or select sample text from the panels on the page.
-1. Select **Run test**.
-
-The service returns all the categories that were detected, with the severity level for each (0-Safe, 2-Low, 4-Medium, 6-High). It also returns a binary **Accepted**/**Rejected** result, based on the filters you configure. Use the matrix in the **Configure filters** tab on the right to set your allowed/prohibited severity levels for each category. Then you can run the text again to see how the filter works.
-
-The **Use blocklist** tab on the right lets you create, edit, and add a blocklist to the moderation workflow. If you have a blocklist enabled when you run the test, you get a **Blocklist detection** panel under **Results**. It reports any matches with the blocklist.
-
-## Analyze image content
-The [Moderate image content](https://contentsafety.cognitive.azure.com/image) page lets you quickly try out image moderation.
--
-1. Select the **Moderate image content** panel.
-1. Select a sample image from the panels on the page, or upload your own image. The maximum size for image submissions is 4 MB, and image dimensions must be between 50 x 50 pixels and 2,048 x 2,048 pixels. Images can be in JPEG, PNG, GIF, BMP, TIFF, or WEBP formats.
-1. Select **Run test**.
-
-The service returns all the categories that were detected, with the severity level for each (0-Safe, 2-Low, 4-Medium, 6-High). It also returns a binary **Accepted**/**Rejected** result, based on the filters you configure. Use the matrix in the **Configure filters** tab on the right to set your allowed/prohibited severity levels for each category. Then you can run the test again to see how the filter works.
-
-## View and export code
-You can use the **View Code** feature on both the *Analyze text content* and *Analyze image content* pages to view and copy the sample code, which includes configuration for severity filtering, blocklists, and moderation functions. You can then deploy the code on your end.
---
-## Monitor online activity
-
-The [Monitor online activity](https://contentsafety.cognitive.azure.com/monitor) page lets you view your API usage and trends.
--
-You can choose which **Media type** to monitor. You can also specify the time range that you want to check by selecting **Show data for the last __**.
-
-In the **Reject rate per category** chart, you can also adjust the severity thresholds for each category.
--
-You can also edit blocklists if you want to change some terms, based on the **Top 10 blocked terms** chart.
-
-## Manage your resource
-To view resource details such as name and pricing tier, select the **Settings** icon in the top-right corner of the Content Safety Studio home page and select the **Resource** tab. If you have other resources, you can switch resources here as well.
--
-## Clean up resources
-
-If you want to clean up and remove a Cognitive Services resource, you can delete the resource or resource group. Deleting the resource group also deletes any other resources associated with it.
-- [Portal](/azure/cognitive-services/cognitive-services-apis-create-account#clean-up-resources)
-- [Azure CLI](/azure/cognitive-services/cognitive-services-apis-create-account-cli#clean-up-resources)
-## Next steps
-
-Next, get started using Content Safety through the REST APIs or a client SDK, so you can seamlessly integrate the service into your application.
-
-> [!div class="nextstepaction"]
-> [Quickstart: REST API and client SDKs](./quickstart-text.md)
cognitive-services Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/content-safety/whats-new.md
- Title: What's new in Content Safety?-
-description: Stay up to date on recent releases and updates to Azure Content Safety.
------- Previously updated : 04/07/2023---
-# What's new in Content Safety
-
-Learn what's new in the service. These items may be release notes, videos, blog posts, and other types of information. Bookmark this page to stay up to date with new features, enhancements, fixes, and documentation updates.
-
-## May 2023
-
-### Content Safety public preview
-
-Azure Content Safety is a Cognitive Service that detects material that is potentially offensive, risky, or otherwise undesirable. This service offers state-of-the-art text and image models that detect problematic content. Azure Content Safety helps make applications and services safer from harmful user-generated and AI-generated content. Follow a [quickstart](./quickstart-text.md) to get started.
-
-## Cognitive Services updates
-
-[Azure update announcements for Cognitive Services](https://azure.microsoft.com/updates/?product=cognitive-services)
cognitive-services Create Account Bicep https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/create-account-bicep.md
- Title: Create an Azure Cognitive Services resource using Bicep | Microsoft Docs
-description: Create an Azure Cognitive Service resource with Bicep.
-keywords: cognitive services, cognitive solutions, cognitive intelligence, cognitive artificial intelligence
---- Previously updated : 01/19/2023----
-# Quickstart: Create a Cognitive Services resource using Bicep
-
-Follow this quickstart to create a Cognitive Services resource using Bicep.
-
-Azure Cognitive Services are cloud-based artificial intelligence (AI) services that help developers build cognitive intelligence into applications without having direct AI or data science skills or knowledge. They are available through REST APIs and client library SDKs in popular development languages. Azure Cognitive Services enables developers to easily add cognitive features into their applications with cognitive solutions that can see, hear, speak, and analyze.
--
-## Things to consider
-
-Using Bicep to create a Cognitive Services resource lets you create a multi-service resource. This enables you to:
-
-* Access multiple Azure Cognitive Services with a single key and endpoint.
-* Consolidate billing from the services you use.
-* [!INCLUDE [terms-azure-portal](./includes/quickstarts/terms-azure-portal.md)]
-
-## Prerequisites
-
-* If you don't have an Azure subscription, [create one for free](https://azure.microsoft.com/free/cognitive-services).
-
-## Review the Bicep file
-
-The Bicep file used in this quickstart is from [Azure Quickstart Templates](https://azure.microsoft.com/resources/templates/cognitive-services-universalkey/).
-
-> [!NOTE]
-> * If you use a different resource `kind` (listed below), you may need to change the `sku` parameter to match the [pricing](https://azure.microsoft.com/pricing/details/cognitive-services/) tier you wish to use. For example, the `TextAnalytics` kind uses `S` instead of `S0`.
-> * Many of the Cognitive Services have a free `F0` pricing tier that you can use to try the service.
-
-Be sure to change the `sku` parameter to the [pricing](https://azure.microsoft.com/pricing/details/cognitive-services/) instance you want. The `sku` depends on the resource `kind` that you are using. For example, the `TextAnalytics` kind uses `S` instead of `S0`.
--
-One Azure resource is defined in the Bicep file: [Microsoft.CognitiveServices/accounts](/azure/templates/microsoft.cognitiveservices/accounts) specifies that it is a Cognitive Services resource. The `kind` field in the Bicep file defines the type of resource.
--
-## Deploy the Bicep file
-
-1. Save the Bicep file as **main.bicep** to your local computer.
-1. Deploy the Bicep file using either Azure CLI or Azure PowerShell.
-
- # [CLI](#tab/CLI)
-
- ```azurecli
- az group create --name exampleRG --location eastus
- az deployment group create --resource-group exampleRG --template-file main.bicep
- ```
-
- # [PowerShell](#tab/PowerShell)
-
- ```azurepowershell
- New-AzResourceGroup -Name exampleRG -Location eastus
- New-AzResourceGroupDeployment -ResourceGroupName exampleRG -TemplateFile ./main.bicep
- ```
-
-
-
- When the deployment finishes, you should see a message indicating the deployment succeeded.
-
-## Review deployed resources
-
-Use the Azure portal, Azure CLI, or Azure PowerShell to list the deployed resources in the resource group.
-
-# [CLI](#tab/CLI)
-
-```azurecli-interactive
-az resource list --resource-group exampleRG
-```
-
-# [PowerShell](#tab/PowerShell)
-
-```azurepowershell-interactive
-Get-AzResource -ResourceGroupName exampleRG
-```
---
-## Clean up resources
-
-When no longer needed, use the Azure portal, Azure CLI, or Azure PowerShell to delete the resource group and its resources.
-
-# [CLI](#tab/CLI)
-
-```azurecli-interactive
-az group delete --name exampleRG
-```
-
-# [PowerShell](#tab/PowerShell)
-
-```azurepowershell-interactive
-Remove-AzResourceGroup -Name exampleRG
-```
---
-If you need to recover a deleted resource, see [Recover deleted Cognitive Services resources](manage-resources.md).
-
-## See also
-
-* See **[Authenticate requests to Azure Cognitive Services](authentication.md)** on how to securely work with Cognitive Services.
-* See **[What are Azure Cognitive Services?](./what-are-cognitive-services.md)** to get a list of different categories within Cognitive Services.
-* See **[Natural language support](language-support.md)** to see the list of natural languages that Cognitive Services supports.
-* See **[Use Cognitive Services as containers](cognitive-services-container-support.md)** to understand how to use Cognitive Services on-prem.
-* See **[Plan and manage costs for Cognitive Services](plan-manage-costs.md)** to estimate cost of using Cognitive Services.
cognitive-services Create Account Resource Manager Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/create-account-resource-manager-template.md
- Title: Create an Azure Cognitive Services resource using ARM templates | Microsoft Docs
-description: Create an Azure Cognitive Services resource with an ARM template.
-keywords: cognitive services, cognitive solutions, cognitive intelligence, cognitive artificial intelligence
----- Previously updated : 09/01/2022----
-# Quickstart: Create a Cognitive Services resource using an ARM template
-
-This quickstart describes how to use an Azure Resource Manager template (ARM template) to create Cognitive Services.
-
-Azure Cognitive Services are cloud-based artificial intelligence (AI) services that help developers build cognitive intelligence into applications without having direct AI or data science skills or knowledge. They are available through REST APIs and client library SDKs in popular development languages. Azure Cognitive Services enables developers to easily add cognitive features into their applications with cognitive solutions that can see, hear, speak, and analyze.
-
-Create a resource using an Azure Resource Manager template (ARM template). This multi-service resource lets you:
-
-* Access multiple Azure Cognitive Services with a single key and endpoint.
-* Consolidate billing from the services you use.
-* [!INCLUDE [terms-azure-portal](./includes/quickstarts/terms-azure-portal.md)]
--
-## Prerequisites
-
-* If you don't have an Azure subscription, [create one for free](https://azure.microsoft.com/free/cognitive-services).
-
-## Review the template
-
-The template used in this quickstart is from [Azure Quickstart Templates](https://azure.microsoft.com/resources/templates/cognitive-services-universalkey/).
--
-One Azure resource is defined in the template: [Microsoft.CognitiveServices/accounts](/azure/templates/microsoft.cognitiveservices/accounts) specifies that it is a Cognitive Services resource. The `kind` field in the template defines the type of resource.
--
-## Deploy the template
-
-# [Azure portal](#tab/portal)
-
-1. Click the **Deploy to Azure** button.
-
- [![Deploy to Azure](../media/template-deployments/deploy-to-azure.svg)](https://portal.azure.com/#create/Microsoft.Template/uri/https%3A%2F%2Fraw.githubusercontent.com%2FAzure%2Fazure-quickstart-templates%2Fmaster%2Fquickstarts%2Fmicrosoft.cognitiveservices%2Fcognitive-services-universalkey%2Fazuredeploy.json)
-
-2. Enter the following values.
-
- |Value |Description |
- |||
- | **Subscription** | Select an Azure subscription. |
- | **Resource group** | Select **Create new**, enter a unique name for the resource group, and then click **OK**. |
- | **Region** | Select a region. For example, **East US** |
- | **Cognitive Service Name** | Replace with a unique name for your resource. You will need the name in the next section when you validate the deployment. |
- | **Location** | Replace with the region used above. |
- | **Sku** | The [pricing tier](https://azure.microsoft.com/pricing/details/cognitive-services/) for your resource. |
-
- :::image type="content" source="media/arm-template/universal-key-portal-template.png" alt-text="Resource creation screen.":::
-
-3. Select **Review + Create**, then **Create**. After the resource has successfully finished deploying, the **Go to resource** button will be highlighted.
-
-# [Azure CLI](#tab/CLI)
-
-> [!NOTE]
-> `az deployment group create` requires Azure CLI version 2.6 or later. To display the version, type `az --version`. For more information, see the [documentation](/cli/azure/deployment/group).
-
-Run the following script via the Azure CLI, either from [your local machine](/cli/azure/install-azure-cli), or from a browser by using the **Try it** button. Enter a name and location (for example `centralus`) for a new resource group, and the ARM template will be used to deploy a Cognitive Services resource within it. Remember the name you use. You will use it later to validate the deployment.
-
-```azurecli-interactive
-read -p "Enter a name for your new resource group:" resourceGroupName &&
-read -p "Enter the location (i.e. centralus):" location &&
-templateUri="https://raw.githubusercontent.com/Azure/azure-quickstart-templates/master/quickstarts/microsoft.cognitiveservices/cognitive-services-universalkey/azuredeploy.json" &&
-az group create --name $resourceGroupName --location "$location" &&
-az deployment group create --resource-group $resourceGroupName --template-uri $templateUri &&
-echo "Press [ENTER] to continue ..." &&
-read
-```
----
-## Review deployed resources
-
-# [Portal](#tab/portal)
-
-When your deployment finishes, you will be able to click the **Go to resource** button to see your new resource. You can also find the resource group by:
-
-1. Selecting **Resource groups** from the left navigation menu.
-2. Selecting the resource group name.
-
-# [Azure CLI](#tab/CLI)
-
-Using the Azure CLI, run the following script, and enter the name of the resource group you created earlier.
-
-```azurecli-interactive
-echo "Enter the resource group where the Cognitive Services resource exists:" &&
-read resourceGroupName &&
-az cognitiveservices account list -g $resourceGroupName
-```
----
-## Clean up resources
-
-If you want to clean up and remove a Cognitive Services subscription, you can delete the resource or resource group. Deleting the resource group also deletes any other resources contained in the group.
-
-# [Azure portal](#tab/portal)
-
-1. In the Azure portal, expand the menu on the left side to open the menu of services, and choose **Resource Groups** to display the list of your resource groups.
-2. Locate the resource group containing the resource to be deleted.
-3. Right-click on the resource group listing. Select **Delete resource group**, and confirm.
-
-# [Azure CLI](#tab/CLI)
-
-Using the Azure CLI, run the following script, and enter the name of the resource group you created earlier.
-
-```azurecli-interactive
-echo "Enter the resource group name, for deletion:" &&
-read resourceGroupName &&
-az group delete --name $resourceGroupName
-```
---
-If you need to recover a deleted resource, see [Recover deleted Cognitive Services resources](manage-resources.md).
-
-## See also
-
-* See **[Authenticate requests to Azure Cognitive Services](authentication.md)** on how to securely work with Cognitive Services.
-* See **[What are Azure Cognitive Services?](./what-are-cognitive-services.md)** to get a list of different categories within Cognitive Services.
-* See **[Natural language support](language-support.md)** to see the list of natural languages that Cognitive Services supports.
-* See **[Use Cognitive Services as containers](cognitive-services-container-support.md)** to understand how to use Cognitive Services on-prem.
-* See **[Plan and manage costs for Cognitive Services](plan-manage-costs.md)** to estimate cost of using Cognitive Services.
cognitive-services Create Account Terraform https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/create-account-terraform.md
- Title: 'Quickstart: Create an Azure Cognitive Services resource using Terraform'
-description: 'In this article, you create an Azure Cognitive Services resource using Terraform'
-keywords: cognitive services, cognitive solutions, cognitive intelligence, cognitive artificial intelligence
--- Previously updated : 4/14/2023---
-content_well_notification:
- - AI-contribution
--
-# Quickstart: Create an Azure Cognitive Services resource using Terraform
-
-This article shows how to create a [Cognitive Services account](/azure/cognitive-services/cognitive-services-apis-create-account) using [Terraform](/azure/developer/terraform/quickstart-configure).
-
-Azure Cognitive Services are cloud-based artificial intelligence (AI) services that help developers build cognitive intelligence into applications without having direct AI or data science skills or knowledge. They are available through REST APIs and client library SDKs in popular development languages. Azure Cognitive Services enables developers to easily add cognitive features into their applications with cognitive solutions that can see, hear, speak, and analyze.
--
-In this article, you learn how to:
-
-> [!div class="checklist"]
-> * Create a random pet name for the Azure resource group name using [random_pet](https://registry.terraform.io/providers/hashicorp/random/latest/docs/resources/pet)
-> * Create an Azure resource group using [azurerm_resource_group](https://registry.terraform.io/providers/hashicorp/azurerm/latest/docs/resources/resource_group)
-> * Create a random string using [random_string](https://registry.terraform.io/providers/hashicorp/random/latest/docs/resources/string)
-> * Create a Cognitive Services account using [azurerm_cognitive_account](https://registry.terraform.io/providers/hashicorp/azurerm/latest/docs/resources/cognitive_account)
-
-## Prerequisites
--- [Install and configure Terraform](/azure/developer/terraform/quickstart-configure)-
-## Implement the Terraform code
-
-> [!NOTE]
-> The sample code for this article is located in the [Azure Terraform GitHub repo](https://github.com/Azure/terraform/tree/master/quickstart/101-cognitive-services-account). You can view the log file containing the [test results from current and previous versions of Terraform](https://github.com/Azure/terraform/tree/master/quickstart/101-cognitive-services-account/TestRecord.md).
->
-> See more [articles and sample code showing how to use Terraform to manage Azure resources](/azure/terraform)
-
-1. Create a directory in which to test and run the sample Terraform code and make it the current directory.
-
-1. Create a file named `main.tf` and insert the following code:
-
- [!code-terraform[master](~/terraform_samples/quickstart/101-cognitive-services-account/main.tf)]
-
-1. Create a file named `outputs.tf` and insert the following code:
-
- [!code-terraform[master](~/terraform_samples/quickstart/101-cognitive-services-account/outputs.tf)]
-
-1. Create a file named `providers.tf` and insert the following code:
-
- [!code-terraform[master](~/terraform_samples/quickstart/101-cognitive-services-account/providers.tf)]
-
-1. Create a file named `variables.tf` and insert the following code:
-
- [!code-terraform[master](~/terraform_samples/quickstart/101-cognitive-services-account/variables.tf)]
-
-## Initialize Terraform
--
-## Create a Terraform execution plan
--
-## Apply a Terraform execution plan
--
-## Verify the results
-
-#### [Azure CLI](#tab/azure-cli)
-
-1. Get the Azure resource group name in which the Cognitive Services account was created.
-
- ```console
- resource_group_name=$(terraform output -raw resource_group_name)
- ```
-
-1. Get the Cognitive Services account name.
-
- ```console
- azurerm_cognitive_account_name=$(terraform output -raw azurerm_cognitive_account_name)
- ```
-
-1. Run [az cognitiveservices account show](/cli/azure/cognitiveservices/account#az-cognitiveservices-account-show) to show the Cognitive Services account you created in this article.
-
- ```azurecli
- az cognitiveservices account show --name $azurerm_cognitive_account_name \
- --resource-group $resource_group_name
- ```
-
-#### [Azure PowerShell](#tab/azure-powershell)
-
-1. Get the Azure resource group name in which the Cognitive Services account was created.
-
- ```console
- $resource_group_name=$(terraform output -raw resource_group_name)
- ```
-
-1. Get the Cognitive Services account name.
-
- ```console
- $azurerm_cognitive_account_name=$(terraform output -raw azurerm_cognitive_account_name)
- ```
-
-1. Run [Get-AzCognitiveServicesAccount](/powershell/module/az.cognitiveservices/get-azcognitiveservicesaccount) to display information about the new service.
-
- ```azurepowershell
- Get-AzCognitiveServicesAccount -ResourceGroupName $resource_group_name `
- -Name $azurerm_cognitive_account_name
- ```
---
-## Clean up resources
--
-## Troubleshoot Terraform on Azure
-
-[Troubleshoot common problems when using Terraform on Azure](/azure/developer/terraform/troubleshoot)
-
-## Next steps
-
-> [!div class="nextstepaction"]
-> [Recover deleted Cognitive Services resources](manage-resources.md)
cognitive-services Diagnostic Logging https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/diagnostic-logging.md
- Title: Diagnostic logging-
-description: This guide provides step-by-step instructions to enable diagnostic logging for an Azure Cognitive Service. These logs provide rich, frequent data about the operation of a resource that are used for issue identification and debugging.
----- Previously updated : 07/19/2021---
-# Enable diagnostic logging for Azure Cognitive Services
-
-This guide provides step-by-step instructions to enable diagnostic logging for an Azure Cognitive Service. These logs provide rich, frequent data about the operation of a resource that are used for issue identification and debugging. Before you continue, you must have an Azure account with a subscription to at least one Cognitive Service, such as [Speech Services](./speech-service/overview.md), or [LUIS](./luis/what-is-luis.md).
-
-## Prerequisites
-
-To enable diagnostic logging, you'll need somewhere to store your log data. This tutorial uses Azure Storage and Log Analytics.
-
-* [Azure storage](../azure-monitor/essentials/resource-logs.md#send-to-azure-storage) - Retains diagnostic logs for policy audit, static analysis, or backup. The storage account does not have to be in the same subscription as the resource emitting logs as long as the user who configures the setting has appropriate Azure RBAC access to both subscriptions.
-* [Log Analytics](../azure-monitor/essentials/resource-logs.md#send-to-log-analytics-workspace) - A flexible log search and analytics tool that allows for analysis of raw logs generated by an Azure resource.
-
-> [!NOTE]
-> * Additional configuration options are available. To learn more, see [Collect and consume log data from your Azure resources](../azure-monitor/essentials/platform-logs-overview.md).
-> * "Trace" in diagnostic logging is only available for [Custom question answering](./qnamaker/how-to/get-analytics-knowledge-base.md?tabs=v2).
-
-## Enable diagnostic log collection
-
-Let's start by enabling diagnostic logging using the Azure portal.
-
-> [!NOTE]
-> To enable this feature using PowerShell or the Azure CLI, use the instructions provided in [Collect and consume log data from your Azure resources](../azure-monitor/essentials/platform-logs-overview.md).
-
-1. Navigate to the Azure portal. Then locate and select a Cognitive Services resource. For example, your subscription to Speech Services.
-2. Next, from the left-hand navigation menu, locate **Monitoring** and select **Diagnostic settings**. This screen contains all previously created diagnostic settings for this resource.
-3. If there is a previously created resource that you'd like to use, you can select it now. Otherwise, select **+ Add diagnostic setting**.
-4. Enter a name for the setting. Then select **Archive to a storage account** and **Send to log Analytics**.
-5. When prompted to configure, select the storage account and OMS workspace that you'd like to use to store your diagnostic logs. **Note**: If you don't have a storage account or OMS workspace, follow the prompts to create one.
-6. Select **Audit**, **RequestResponse**, and **AllMetrics**. Then set the retention period for your diagnostic log data. If a retention policy is set to zero, events for that log category are stored indefinitely.
-7. Click **Save**.
-
-It can take up to two hours before logging data is available to query and analyze. So don't worry if you don't see anything right away.
-
-## View and export diagnostic data from Azure Storage
-
-Azure Storage is a robust object storage solution that is optimized for storing large amounts of unstructured data. In this section, you'll learn to query your storage account for total transactions over a 30-day timeframe and export the data to Excel.
-
-1. From the Azure portal, locate the Azure Storage resource that you created in the last section.
-2. From the left-hand navigation menu, locate **Monitoring** and select **Metrics**.
-3. Use the available drop-downs to configure your query. For this example, let's set the time range to **Last 30 days** and the metric to **Transaction**.
-4. When the query is complete, you'll see a visualization of transaction over the last 30 days. To export this data, use the **Export to Excel** button located at the top of the page.
-
-Learn more about what you can do with diagnostic data in [Azure Storage](../storage/blobs/storage-blobs-introduction.md).
-
-## View logs in Log Analytics
-
-Follow these instructions to explore log analytics data for your resource.
-
-1. From the Azure portal, locate and select **Log Analytics** from the left-hand navigation menu.
-2. Locate and select the resource you created when enabling diagnostics.
-3. Under **General**, locate and select **Logs**. From this page, you can run queries against your logs.
-
-### Sample queries
-
-Here are a few basic Kusto queries you can use to explore your log data.
-
-Run this query for all diagnostic logs from Azure Cognitive Services for a specified time period:
-
-```kusto
-AzureDiagnostics
-| where ResourceProvider == "MICROSOFT.COGNITIVESERVICES"
-```
-
-Run this query to see the 10 most recent logs:
-
-```kusto
-AzureDiagnostics
-| where ResourceProvider == "MICROSOFT.COGNITIVESERVICES"
-| take 10
-```
-
-Run this query to group operations by **Resource**:
-
-```kusto
-AzureDiagnostics
-| where ResourceProvider == "MICROSOFT.COGNITIVESERVICES"
-| summarize count() by Resource
-```
-Run this query to find the average time it takes to perform an operation:
-
-```kusto
-AzureDiagnostics
-| where ResourceProvider == "MICROSOFT.COGNITIVESERVICES"
-| summarize avg(DurationMs)
-by OperationName
-```
-
-Run this query to view the volume of operations over time, split by OperationName, with counts binned every 10 seconds.
-
-```kusto
-AzureDiagnostics
-| where ResourceProvider == "MICROSOFT.COGNITIVESERVICES"
-| summarize count()
-by bin(TimeGenerated, 10s), OperationName
-| render areachart kind=unstacked
-```
-
-## Next steps
-
-* To understand how to enable logging, and also the metrics and log categories that are supported by the various Azure services, read both the [Overview of metrics](../azure-monitor/data-platform.md) in Microsoft Azure and [Overview of Azure Diagnostic Logs](../azure-monitor/essentials/platform-logs-overview.md) articles.
-* Read these articles to learn about event hubs:
- * [What is Azure Event Hubs?](../event-hubs/event-hubs-about.md)
- * [Get started with Event Hubs](../event-hubs/event-hubs-dotnet-standard-getstarted-send.md)
-* Read [Understand log searches in Azure Monitor logs](../azure-monitor/logs/log-query-overview.md).
cognitive-services Configure Containers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/concepts/configure-containers.md
- Title: Configure containers - Language service-
-description: Language service provides each container with a common configuration framework, so that you can easily configure and manage storage, logging and telemetry, and security settings for your containers.
------- Previously updated : 11/02/2021---
-# Configure Language service docker containers
-
-Language service provides each container with a common configuration framework, so that you can easily configure and manage storage, logging and telemetry, and security settings for your containers. This article applies to the following containers:
-
-* sentiment analysis
-* language detection
-* key phrase extraction
-* Text Analytics for health
-
-## Configuration settings
--
-> [!IMPORTANT]
-> The [`ApiKey`](#apikey-configuration-setting), [`Billing`](#billing-configuration-setting), and [`Eula`](#eula-setting) settings are used together, and you must provide valid values for all three of them; otherwise your container won't start.
-
-## ApiKey configuration setting
-
-The `ApiKey` setting specifies the Azure resource key used to track billing information for the container. You must specify a value for the key and it must be a valid key for the _Language_ resource specified for the [`Billing`](#billing-configuration-setting) configuration setting.
-
-## ApplicationInsights setting
--
-## Billing configuration setting
-
-The `Billing` setting specifies the endpoint URI of the _Language_ resource on Azure used to meter billing information for the container. You must specify a value for this configuration setting, and the value must be a valid endpoint URI for a _Language_ resource on Azure. The container reports usage about every 10 to 15 minutes.
-
-|Required| Name | Data type | Description |
-|--||--|-|
-|Yes| `Billing` | String | Billing endpoint URI. |
--
-## Eula setting
--
-## Fluentd settings
--
-## Http proxy credentials settings
--
-## Logging settings
-
-
-## Mount settings
-
-Use bind mounts to read and write data to and from the container. You can specify an input mount or output mount by specifying the `--mount` option in the [docker run](https://docs.docker.com/engine/reference/commandline/run/) command.
-
-The Language service containers don't use input or output mounts to store training or service data.
-
-The exact syntax of the host mount location varies depending on the host operating system. Additionally, the host computer's mount location may not be accessible due to a conflict between permissions used by the docker service account and the host mount location permissions.
-
-|Optional| Name | Data type | Description |
-|-||--|-|
-|Not allowed| `Input` | String | Language service containers do not use this.|
-|Optional| `Output` | String | The target of the output mount. The default value is `/output`. This is the location of the logs. This includes container logs. <br><br>Example:<br>`--mount type=bind,src=c:\output,target=/output`|
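As an illustration only, the following sketch starts a Language container with the `ApiKey`, `Billing`, and `Eula` settings and an output mount described above, using the Docker SDK for Python rather than the `docker run` CLI shown earlier; the image name, endpoint, key, and host path are assumptions or placeholders:

```python
import docker  # pip install docker

client = docker.from_env()

container = client.containers.run(
    # Image name is an assumption; use the image documented for the feature you run.
    "mcr.microsoft.com/azure-cognitive-services/textanalytics/sentiment:latest",
    detach=True,
    environment={
        "Eula": "accept",
        "Billing": "https://<your-language-resource>.cognitiveservices.azure.com/",  # placeholder
        "ApiKey": "<your-key>",                                                       # placeholder
    },
    ports={"5000/tcp": 5000},
    # Optional output mount for container logs, matching the table above (host path is hypothetical).
    volumes={"/var/containers/output": {"bind": "/output", "mode": "rw"}},
)
print(container.id)
```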
-
-## Next steps
-
-* Use more [Cognitive Services Containers](../../cognitive-services-container-support.md)
cognitive-services Multi Region Deployment https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/concepts/custom-features/multi-region-deployment.md
- Title: Deploy custom language projects to multiple regions in Azure Cognitive Service for Language-
-description: Learn about deploying your language projects to multiple regions.
------ Previously updated : 10/11/2022----
-# Deploy custom language projects to multiple regions
-
-> [!NOTE]
-> This article applies to the following custom features in Azure Cognitive Service for Language:
-> * [Conversational language understanding](../../conversational-language-understanding/overview.md)
-> * [Custom text classification](../../custom-text-classification/overview.md)
-> * [Custom NER](../../custom-named-entity-recognition/overview.md)
-> * [Orchestration workflow](../../orchestration-workflow/overview.md)
-
-Custom Language service features enable you to deploy your project to more than one region, making it much easier to access your project globally while managing only one instance of your project in one place.
-
-Before you deploy a project, you can assign **deployment resources** in other regions. Each deployment resource is a different Language resource from the one you use to author your project. You deploy to those resources and then target your prediction requests to the resource in each region, so your queries are served directly from that region.
-
-When creating a deployment, you can select which of your assigned deployment resources and their corresponding regions you would like to deploy to. The model you deploy is then replicated to each region and accessible with its own endpoint dependent on the deployment resource's custom subdomain.
-
-## Example
-
-Suppose you want to make sure your project, which is used as part of a customer support chatbot, is accessible by customers across the US and India. You would author a project with the name **ContosoSupport** using a _West US 2_ Language resource named **MyWestUS2**. Before deployment, you would assign two deployment resources to your project - **MyEastUS** and **MyCentralIndia** in _East US_ and _Central India_, respectively.
-
-When deploying your project, you would select all three regions for deployment: the original _West US 2_ region and the assigned _East US_ and _Central India_ regions.
-
-You would now have three different endpoint URLs to access your project in all three regions:
-* West US 2: `https://mywestus2.cognitiveservices.azure.com/language/:analyze-conversations`
-* East US: `https://myeastus.cognitiveservices.azure.com/language/:analyze-conversations`
-* Central India: `https://mycentralindia.cognitiveservices.azure.com/language/:analyze-conversations`
-
-The same request body to each of those different URLs serves the exact same response directly from that region.
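For example, a client can keep a small map of the regional endpoints from the example above and send the identical body to whichever region is closest to the caller. This is a hedged sketch: the `api-version` value and the shape of `body` depend on the feature you call and are assumptions here, and the key is a placeholder.

```python
import requests

# Regional endpoints from the example above.
DEPLOYMENTS = {
    "westus2":      "https://mywestus2.cognitiveservices.azure.com/language/:analyze-conversations",
    "eastus":       "https://myeastus.cognitiveservices.azure.com/language/:analyze-conversations",
    "centralindia": "https://mycentralindia.cognitiveservices.azure.com/language/:analyze-conversations",
}

def query_region(region: str, key: str, body: dict, api_version: str = "2023-04-01"):
    """Send the same request body to the chosen region; every region returns the same response."""
    response = requests.post(
        DEPLOYMENTS[region],
        params={"api-version": api_version},
        headers={"Ocp-Apim-Subscription-Key": key, "Content-Type": "application/json"},
        json=body,
    )
    response.raise_for_status()
    return response.json()
```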
-
-## Validations and requirements
-
-Assigning deployment resources requires Microsoft Azure Active Directory (Azure AD) authentication. Azure AD is used to confirm you have access to the resources you are interested in assigning to your project for multi-region deployment. In the Language Studio, you can automatically [enable Azure AD authentication](https://aka.ms/rbac-language) by assigning yourself the _Cognitive Services Language Owner_ role to your original resource. To programmatically use Azure AD authentication, learn more from the [Cognitive Services documentation](../../../authentication.md?source=docs&tabs=powershell&tryIt=true#authenticate-with-azure-active-directory).
-
-Your project name and resource are used as its main identifiers. Therefore, a project name can be used only once per Language resource; any other project with the same name will not be deployable to that resource.
-
-For example, if a project **ContosoSupport** was created by resource **MyWestUS2** in _West US 2_ and deployed to resource **MyEastUS** in _East US_, the resource **MyEastUS** cannot create a different project called **ContosoSupport** and deploy a project to that region. Similarly, your collaborators cannot then create a project **ContosoSupport** with resource **MyCentralIndia** in _Central India_ and deploy it to either **MyWestUS2** or **MyEastUS**.
-
-You can only swap deployments that are available in the exact same regions, otherwise swapping will fail.
-
-If you remove an assigned resource from your project, all of the project deployments to that resource will then be deleted.
-
-> [!NOTE]
-> Orchestration workflow only:
->
-> You **cannot** assign deployment resources to orchestration workflow projects with custom question answering or LUIS connections. You subsequently cannot add custom question answering or LUIS connections to projects that have assigned resources.
->
-> For multi-region deployment to work as expected, the connected CLU projects **must also be deployed** to the same regional resources you've deployed the orchestration workflow project to. Otherwise the orchestration workflow project will attempt to route a request to a deployment in its region that doesn't exist.
-
-Some regions are only available for deployment and not for authoring projects.
-
-## Next steps
-
-Learn how to deploy models for:
-* [Conversational language understanding](../../conversational-language-understanding/how-to/deploy-model.md)
-* [Custom text classification](../../custom-text-classification/how-to/deploy-model.md)
-* [Custom NER](../../custom-named-entity-recognition/how-to/deploy-model.md)
-* [Orchestration workflow](../../orchestration-workflow/how-to/deploy-model.md)
cognitive-services Project Versioning https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/concepts/custom-features/project-versioning.md
- Title: Conversational Language Understanding Project Versioning-
-description: Learn how versioning works in conversational language understanding
------ Previously updated : 10/10/2022-----
-# Project versioning
-
-> [!NOTE]
-> This article applies to the following custom features in Azure Cognitive Service for Language:
-> * Conversational language understanding
-> * Custom text classification
-> * Custom NER
-> * Orchestration workflow
-
-Building your project typically happens in increments. You may add, remove, or edit intents, entities, labels, and data at each stage. Every time you train, a snapshot of your current project state is taken to produce a model, and that snapshot can be loaded back into the project at any time. Every model acts as its own version of the project.
-
-For example, if your project has 10 intents and/or entities, with 50 training documents or utterances, it can be trained to create a model named **v1**. Afterwards, you might make changes to the project to alter the amount of training data. The project can be trained again to create a new model named **v2**. If you don't like the changes you've made in **v2** and would like to continue from where you left off in model **v1**, then you would just need to load the model data from **v1** back into the project. Loading a model's data is possible through both the Language Studio and API. Once complete, the project will have the original amount and types of training data.
-
-If the project data is not saved in a trained model, it can be lost. For example, if you loaded model **v1**, your project now has the data that was used to train it. If you then made changes, didn't train, and loaded model **v2**, you would lose those changes as they weren't saved to any specific snapshot.
-
-If you overwrite a model with a new snapshot of data, you won't be able to revert back to any previous state of that model.
-
-You always have the option to locally export the data for every model.
-
-## Data location
-
-The data for your model versions will be saved in different locations, depending on the custom feature you're using.
-
-# [Custom NER](#tab/custom-ner)
-
-In custom named entity recognition, the data being saved to the snapshot is the labels file.
-
-# [Custom text classification](#tab/custom-text-classification)
-
-In custom text classification, the data being saved to the snapshot is the labels file.
-
-# [Orchestration workflow](#tab/orchestration-workflow)
-
-In orchestration workflow, you do not version or store the assets of the connected intents as part of the orchestration snapshot - those are managed separately. The only snapshot being taken is of the connection itself and the intents and utterances that do not have connections, including all the test data.
-
-# [Conversational language understanding](#tab/clu)
-
-In conversational language understanding, the data being saved to the snapshot are the intents and utterances included in the project.
----
-## Next steps
-Learn how to load or export model data for:
-* [Conversational language understanding](../../conversational-language-understanding/how-to/view-model-evaluation.md#export-model-data)
-* [Custom text classification](../../custom-text-classification/how-to/view-model-evaluation.md)
-* [Custom NER](../../custom-named-entity-recognition/how-to/view-model-evaluation.md)
-* [Orchestration workflow](../../orchestration-workflow/how-to/view-model-evaluation.md)
cognitive-services Data Limits https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/concepts/data-limits.md
- Title: Data limits for Language service features-
-description: Data and service limitations for Azure Cognitive Service for Language features.
------- Previously updated : 10/05/2022---
-# Service limits for Azure Cognitive Service for Language
-
-> [!NOTE]
-> This article only describes the limits for preconfigured features in Azure Cognitive Service for Language:
-> To see the service limits for customizable features, see the following articles:
-> * [Custom classification](../custom-classification/service-limits.md)
-> * [Custom NER](../custom-named-entity-recognition/service-limits.md)
-> * [Conversational language understanding](../conversational-language-understanding/service-limits.md)
-> * [Question answering](../question-answering/concepts/limits.md)
-
-Use this article to find the size and rate limits for sending data to the following features of the Language service.
-* [Named Entity Recognition (NER)](../named-entity-recognition/overview.md)
-* [Personally Identifiable Information (PII) detection](../personally-identifiable-information/overview.md)
-* [Key phrase extraction](../key-phrase-extraction/overview.md)
-* [Entity linking](../entity-linking/overview.md)
-* [Text Analytics for health](../text-analytics-for-health/overview.md)
-* [Sentiment analysis and opinion mining](../sentiment-opinion-mining/overview.md)
-* [Language detection](../language-detection/overview.md)
-
-When using features of the Language service, keep the following information in mind:
-
-* Pricing is independent of data or rate limits. Pricing is based on the number of text records you send to the API, and is subject to your Language resource's [pricing details](https://aka.ms/unifiedLanguagePricing).
- * A text record is measured as 1000 characters.
-* Data and rate limits are based on the number of documents you send to the API. If you need to analyze larger documents than the limit allows, you can break the text into smaller chunks of text before sending them to the API.
-* A document is a single string of text characters.
-
-## Maximum characters per document
-
-The following limit specifies the maximum number of characters that can be in a single document.
-
-| Feature | Value |
-|||
-| Text Analytics for health | 125,000 characters as measured by [StringInfo.LengthInTextElements](/dotnet/api/system.globalization.stringinfo.lengthintextelements). |
-| All other preconfigured features (synchronous) | 5,120 characters as measured by [StringInfo.LengthInTextElements](/dotnet/api/system.globalization.stringinfo.lengthintextelements). If you need to submit larger documents, consider using the feature asynchronously. |
-| All other preconfigured features ([asynchronous](use-asynchronously.md)) | 125,000 characters across all submitted documents, as measured by [StringInfo.LengthInTextElements](/dotnet/api/system.globalization.stringinfo.lengthintextelements) (maximum of 25 documents). |
-
-If a document exceeds the character limit, the API behaves differently depending on how you're sending requests.
-
-If you're sending requests synchronously:
-* The API doesn't process documents that exceed the maximum size, and returns an invalid document error for it. If an API request has multiple documents, the API continues processing them if they are within the character limit.
-
-If you're sending requests [asynchronously](use-asynchronously.md):
-* The API rejects the entire request and returns a `400 bad request` error if any document within it exceeds the maximum size.
-
-## Maximum request size
-
-The following limit specifies the maximum size of documents contained in the entire request.
-
-| Feature | Value |
-|||
-| All preconfigured features | 1 MB |
-
-## Maximum documents per request
-
-Exceeding the following document limits generates an HTTP 400 error code.
-
-> [!NOTE]
-> When sending asynchronous API requests, you can send a maximum of 25 documents per request.
-
-| Feature | Max Documents Per Request |
-|-|--|
-| Conversation summarization | 1 |
-| Language Detection | 1000 |
-| Sentiment Analysis | 10 |
-| Opinion Mining | 10 |
-| Key Phrase Extraction | 10 |
-| Named Entity Recognition (NER) | 5 |
-| Personally Identifying Information (PII) detection | 5 |
-| Document summarization | 25 |
-| Entity Linking | 5 |
-| Text Analytics for health |25 for the web-based API, 1000 for the container. (125,000 characters in total) |
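For example, when calling a synchronous feature such as sentiment analysis (10 documents per request, per the table above), you can batch a longer document list on the client side. A hedged sketch using the `azure-ai-textanalytics` package; the endpoint, key, and documents are placeholders:

```python
from azure.ai.textanalytics import TextAnalyticsClient
from azure.core.credentials import AzureKeyCredential

endpoint = "https://<your-language-resource>.cognitiveservices.azure.com/"  # placeholder
key = "<your-key>"                                                           # placeholder
client = TextAnalyticsClient(endpoint=endpoint, credential=AzureKeyCredential(key))

documents = [f"Sample document {i}" for i in range(37)]  # placeholder content
MAX_DOCS_PER_REQUEST = 10                                 # sentiment analysis limit from the table

results = []
for start in range(0, len(documents), MAX_DOCS_PER_REQUEST):
    batch = documents[start:start + MAX_DOCS_PER_REQUEST]
    results.extend(client.analyze_sentiment(documents=batch))

for doc in results:
    if not doc.is_error:
        print(doc.sentiment)
```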
-
-## Rate limits
-
-Your rate limit varies with your [pricing tier](https://azure.microsoft.com/pricing/details/cognitive-services/text-analytics/). These limits are the same for both versions of the API. These rate limits don't apply to the Text Analytics for health container, which doesn't have a set rate limit.
-
-| Tier | Requests per second | Requests per minute |
-||||
-| S / Multi-service | 1000 | 1000 |
-| S0 / F0 | 100 | 300 |
-
-Request rates are measured for each feature separately. You can send the maximum number of requests for your pricing tier to each feature at the same time. For example, if you're in the `S` tier and send 1000 requests at once, you wouldn't be able to send another request for 59 seconds.
-
-## See also
-
-* [What is Azure Cognitive Service for Language](../overview.md)
-* [Pricing details](https://aka.ms/unifiedLanguagePricing)
cognitive-services Developer Guide https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/concepts/developer-guide.md
- Title: Use the Language SDK and REST API-
-description: Learn about how to integrate the Language service SDK and REST API into your applications.
------ Previously updated : 02/14/2023---
-# SDK and REST developer guide for the Language service
-
-Use this article to find information on integrating the Language service SDKs and REST API into your applications.
-
-## Development options
-
-The Language service provides support through a REST API, and client libraries in several languages.
-
-# [Client library (Azure SDK)](#tab/language-studio)
-
-## Client libraries (Azure SDK)
-
-The Language service provides three namespaces for using the available features. Depending on which features and programming language you're using, you will need to download one or more of the following packages, and have the following framework/language version support:
-
-|Framework/Language | Minimum supported version |
-|||
-|.NET | .NET Framework 4.6.1 or newer, or .NET (formerly .NET Core) 2.0 or newer. |
-|Java | v8 or later |
-|JavaScript | v14 LTS or later |
-|Python| v3.7 or later |
-
-### Azure.AI.TextAnalytics
-
->[!NOTE]
-> If you're using custom named entity recognition or custom text classification, you will need to create a project and train a model before using the SDK. The SDK only provides the ability to analyze text using models you create. See the following quickstarts for information on creating a model.
-> * [Custom named entity recognition](../custom-named-entity-recognition/quickstart.md)
-> * [Custom text classification](../custom-text-classification/quickstart.md)
-
-The `Azure.AI.TextAnalytics` namespace enables you to use the following Language features. Use the links below for articles to help you send API requests using the SDK.
-
-* [Custom named entity recognition](../custom-named-entity-recognition/how-to/call-api.md?tabs=client#send-an-entity-recognition-request-to-your-model)
-* [Custom text classification](../custom-text-classification/how-to/call-api.md?tabs=client-libraries#send-a-text-classification-request-to-your-model)
-* [Document summarization](../summarization/quickstart.md)
-* [Entity linking](../entity-linking/quickstart.md)
-* [Key phrase extraction](../key-phrase-extraction/quickstart.md)
-* [Named entity recognition (NER)](../named-entity-recognition/quickstart.md)
-* [Personally Identifying Information (PII) detection](../personally-identifiable-information/quickstart.md)
-* [Sentiment analysis and opinion mining](../sentiment-opinion-mining/quickstart.md)
-* [Text analytics for health](../text-analytics-for-health/quickstart.md)
-
-As you use these features in your application, use the following documentation and code samples for additional information.
-
-| Language → Latest GA version |Reference documentation |Samples |
-||||
-| [C#/.NET → v5.2.0](https://www.nuget.org/packages/Azure.AI.TextAnalytics/5.2.0) | [C# documentation](/dotnet/api/azure.ai.textanalytics) | [C# samples](https://github.com/Azure/azure-sdk-for-net/tree/master/sdk/textanalytics/Azure.AI.TextAnalytics/samples) |
-| [Java → v5.2.0](https://mvnrepository.com/artifact/com.azure/azure-ai-textanalytics/5.2.0) | [Java documentation](/java/api/overview/azure/ai-textanalytics-readme) | [Java Samples](https://github.com/Azure/azure-sdk-for-java/tree/main/sdk/textanalytics/azure-ai-textanalytics/src/samples) |
-| [JavaScript → v1.0.0](https://www.npmjs.com/package/@azure/ai-language-text/v/1.0.0) | [JavaScript documentation](/javascript/api/overview/azure/ai-language-text-readme) | [JavaScript samples](https://github.com/Azure/azure-sdk-for-js/tree/main/sdk/textanalytics/ai-text-analytics/samples/v5) |
-| [Python → v5.2.0](https://pypi.org/project/azure-ai-textanalytics/5.2.0/) | [Python documentation](/python/api/overview/azure/ai-textanalytics-readme) | [Python samples](https://github.com/Azure/azure-sdk-for-python/tree/main/sdk/textanalytics/azure-ai-textanalytics/samples) |
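As a minimal Python illustration of this namespace (the endpoint and key are placeholders; see the linked samples for complete, authoritative examples):

```python
from azure.ai.textanalytics import TextAnalyticsClient
from azure.core.credentials import AzureKeyCredential

client = TextAnalyticsClient(
    endpoint="https://<your-language-resource>.cognitiveservices.azure.com/",  # placeholder
    credential=AzureKeyCredential("<your-key>"),                               # placeholder
)

# Extract key phrases from a small batch of documents.
docs = ["The food was delicious and the staff were wonderful."]
for result in client.extract_key_phrases(docs):
    if not result.is_error:
        print(result.key_phrases)
```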
--
-### Azure.AI.Language.Conversations
-
-> [!NOTE]
-> If you're using conversational language understanding or orchestration workflow, you will need to create a project and train a model before using the SDK. The SDK only provides the ability to analyze text using models you create. See the following quickstarts for more information.
-> * [Conversational language understanding](../conversational-language-understanding/quickstart.md)
-> * [Orchestration workflow](../orchestration-workflow/quickstart.md)
-
-The `Azure.AI.Language.Conversations` namespace enables you to use the following Language features. Use the links below for articles to help you send API requests using the SDK.
-
-* [Conversational language understanding](../conversational-language-understanding/how-to/call-api.md?tabs=azure-sdk#send-a-conversational-language-understanding-request)
-* [Orchestration workflow](../orchestration-workflow/how-to/call-api.md?tabs=azure-sdk#send-an-orchestration-workflow-request)
-* [Conversation summarization (Python only)](../summarization/quickstart.md?tabs=conversation-summarization&pivots=programming-language-python)
-* [Personally Identifying Information (PII) detection for conversations](../personally-identifiable-information/how-to-call-for-conversations.md?tabs=client-libraries#examples)
-
-As you use these features in your application, use the following documentation and code samples for additional information.
-
-| Language → Latest GA version | Reference documentation |Samples |
-||||
-| [C#/.NET → v1.0.0](https://www.nuget.org/packages/Azure.AI.Language.Conversations/1.0.0) | [C# documentation](/dotnet/api/overview/azure/ai.language.conversations-readme) | [C# samples](https://aka.ms/sdk-sample-conversation-dot-net) |
-| [Python → v1.0.0](https://pypi.org/project/azure-ai-language-conversations/) | [Python documentation](/python/api/overview/azure/ai-language-conversations-readme) | [Python samples](https://aka.ms/sdk-samples-conversation-python) |
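As a rough sketch of the calling pattern with the `azure-ai-language-conversations` package above, a conversational language understanding prediction request looks roughly like the following. The project name, deployment name, endpoint, and key are placeholders for a model you've already trained and deployed.

```python
# Minimal sketch: send an utterance to a deployed conversational language
# understanding model. Project/deployment names and credentials are placeholders.
from azure.core.credentials import AzureKeyCredential
from azure.ai.language.conversations import ConversationAnalysisClient

client = ConversationAnalysisClient(
    endpoint="https://<your-language-resource>.cognitiveservices.azure.com/",
    credential=AzureKeyCredential("<your-key>"),
)

result = client.analyze_conversation(
    task={
        "kind": "Conversation",
        "analysisInput": {
            "conversationItem": {
                "id": "1",
                "participantId": "1",
                "text": "Book me a flight to Seattle",
            }
        },
        "parameters": {
            "projectName": "<your-project>",
            "deploymentName": "<your-deployment>",
        },
    }
)

print(result["result"]["prediction"]["topIntent"])
```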
-
-### Azure.AI.Language.QuestionAnswering
-
-The `Azure.AI.Language.QuestionAnswering` namespace enables you to use the following Language features:
-
-* [Question answering](../question-answering/quickstart/sdk.md?pivots=programming-language-csharp)
- * Authoring - Automate common tasks like adding new question answer pairs and working with projects/knowledge bases.
- * Prediction - Answer questions based on passages of text.
-
-As you use these features in your application, use the following documentation and code samples for additional information.
-
-| Language → Latest GA version |Reference documentation |Samples |
-||||
-| [C#/.NET → v1.0.0](https://www.nuget.org/packages/Azure.AI.Language.QuestionAnswering/1.0.0#readme-body-tab) | [C# documentation](/dotnet/api/overview/azure/ai.language.questionanswering-readme) | [C# samples](https://github.com/Azure/azure-sdk-for-net/tree/main/sdk/cognitivelanguage/Azure.AI.Language.QuestionAnswering) |
-| [Python → v1.0.0](https://pypi.org/project/azure-ai-language-questionanswering/1.0.0/) | [Python documentation](/python/api/overview/azure/ai-language-questionanswering-readme) | [Python samples](https://github.com/Azure/azure-sdk-for-python/tree/main/sdk/cognitivelanguage/azure-ai-language-questionanswering) |
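For example, a minimal Python sketch using the prediction (text) path of the `azure-ai-language-questionanswering` package might look like the following. The endpoint and key are placeholders, and no project or knowledge base is required when answering over raw text.

```python
# Minimal sketch: answer a question from raw text passages (no knowledge base needed).
# Endpoint and key are placeholders for your Language resource.
from azure.core.credentials import AzureKeyCredential
from azure.ai.language.questionanswering import QuestionAnsweringClient

client = QuestionAnsweringClient(
    endpoint="https://<your-language-resource>.cognitiveservices.azure.com/",
    credential=AzureKeyCredential("<your-key>"),
)

output = client.get_answers_from_text(
    question="How long does an oven take to preheat?",
    text_documents=[
        "Preheating an oven usually takes 10 to 15 minutes.",
        "Always let meat rest for a few minutes before carving.",
    ],
)

print(output.answers[0].answer)
```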
-
-# [REST API](#tab/rest-api)
-
-## REST API
-
-The Language service provides multiple API endpoints depending on which feature you wish to use.
-
-### Conversation analysis authoring API
-
-The conversation analysis authoring API enables you to author custom models and create/manage projects for the following features.
-* [Conversational language understanding](../conversational-language-understanding/quickstart.md?pivots=rest-api)
-* [Orchestration workflow](../orchestration-workflow/quickstart.md?pivots=rest-api)
-
-As you use this API in your application, see the [reference documentation](/rest/api/language/2023-04-01/conversational-analysis-authoring) for additional information.
-
-### Conversation analysis runtime API
-
-The conversation analysis runtime API enables you to send requests to custom models you've created for:
-* [Conversational language understanding](../conversational-language-understanding/how-to/call-api.md?tabs=REST-APIs#send-a-conversational-language-understanding-request)
-* [Orchestration workflow](../orchestration-workflow/how-to/call-api.md?tabs=REST-APIs#send-an-orchestration-workflow-request)
-
-It additionally enables you to use the following features, without creating any models:
-* [Conversation summarization](../summarization/quickstart.md?pivots=rest-api&tabs=conversation-summarization)
-* [Personally Identifiable Information (PII) detection for conversations](../personally-identifiable-information/how-to-call-for-conversations.md?tabs=rest-api#examples)
-
-As you use this API in your application, see the [reference documentation](/rest/api/language/2023-04-01/conversation-analysis-runtime) for additional information.
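As a rough sketch, a runtime request for a deployed conversational language understanding model can be sent with plain HTTP as shown below (Python with the `requests` package); the endpoint, key, project name, and deployment name are placeholders.

```python
# Minimal sketch: call the conversation analysis runtime REST API directly.
# Endpoint, key, project, and deployment names are placeholders.
import requests

endpoint = "https://<your-language-resource>.cognitiveservices.azure.com"
url = f"{endpoint}/language/:analyze-conversations?api-version=2023-04-01"

body = {
    "kind": "Conversation",
    "analysisInput": {
        "conversationItem": {
            "id": "1",
            "participantId": "1",
            "text": "Book me a flight to Seattle",
        }
    },
    "parameters": {"projectName": "<your-project>", "deploymentName": "<your-deployment>"},
}

response = requests.post(url, headers={"Ocp-Apim-Subscription-Key": "<your-key>"}, json=body)
print(response.json()["result"]["prediction"]["topIntent"])
```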
--
-### Text analysis authoring API
-
-The text analysis authoring API enables you to author custom models and create/manage projects for:
-* [Custom named entity recognition](../custom-named-entity-recognition/quickstart.md?pivots=rest-api)
-* [Custom text classification](../custom-text-classification/quickstart.md?pivots=rest-api)
-
-As you use this API in your application, see the [reference documentation](/rest/api/language/2023-04-01/text-analysis-authoring) for additional information.
-
-### Text analysis runtime API
-
-The text analysis runtime API enables you to send requests to custom models you've created for:
-
-* [Custom named entity recognition](../custom-named-entity-recognition/quickstart.md?pivots=rest-api)
-* [Custom text classification](../custom-text-classification/quickstart.md?pivots=rest-api)
-
-It additionally enables you to use the following features, without creating any models:
-
-* [Document summarization](../summarization/quickstart.md?tabs=document-summarization&pivots=rest-api)
-* [Entity linking](../entity-linking/quickstart.md?pivots=rest-api)
-* [Key phrase extraction](../key-phrase-extraction/quickstart.md?pivots=rest-api)
-* [Named entity recognition (NER)](../named-entity-recognition/quickstart.md?pivots=rest-api)
-* [Personally Identifying Information (PII) detection](../personally-identifiable-information/quickstart.md?pivots=rest-api)
-* [Sentiment analysis and opinion mining](../sentiment-opinion-mining/quickstart.md?pivots=rest-api)
-* [Text analytics for health](../text-analytics-for-health/quickstart.md?pivots=rest-api)
-
-As you use this API in your application, see the [reference documentation](/rest/api/language/2023-04-01/text-analysis-runtime/analyze-text) for additional information.
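For example, a language detection request to this runtime endpoint can be sketched in Python as follows; the endpoint and key are placeholders for your Language resource.

```python
# Minimal sketch: call the text analysis runtime REST API for language detection.
# Endpoint and key are placeholders for your Language resource.
import requests

endpoint = "https://<your-language-resource>.cognitiveservices.azure.com"
url = f"{endpoint}/language/:analyze-text?api-version=2023-04-01"

body = {
    "kind": "LanguageDetection",
    "parameters": {"modelVersion": "latest"},
    "analysisInput": {
        "documents": [{"id": "1", "text": "Ce document est rédigé en français."}]
    },
}

response = requests.post(url, headers={"Ocp-Apim-Subscription-Key": "<your-key>"}, json=body)
for doc in response.json()["results"]["documents"]:
    print(doc["id"], doc["detectedLanguage"]["iso6391Name"])
```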
-
-### Question answering APIs
-
-The question answering APIs enable you to use the [question answering](../question-answering/quickstart/sdk.md?pivots=rest) feature.
-
-#### Reference documentation
-
-As you use this API in your application, see the following reference documentation for additional information.
-
-* [Prebuilt API](/rest/api/cognitiveservices/questionanswering/question-answering/get-answers-from-text) - Use the prebuilt runtime API to answer a specified question using text provided by users.
-* [Custom authoring API](/rest/api/cognitiveservices/questionanswering/question-answering-projects) - Create a knowledge base to answer questions.
-* [Custom runtime API](/rest/api/cognitiveservices/questionanswering/question-answering/get-answers) - Query a knowledge base to generate an answer.
----
-## See also
-
-[Azure Cognitive Service for Language overview](../overview.md)
cognitive-services Encryption Data At Rest https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/concepts/encryption-data-at-rest.md
- Title: Language service encryption of data at rest
-description: Learn how the Language service encrypts your data when it's persisted to the cloud.
----- Previously updated : 08/08/2022-
-#Customer intent: As a user of the Language service, I want to learn how encryption at rest works.
--
-# Language services encryption of data at rest
-
-The Language service automatically encrypts your data when it's persisted to the cloud. This encryption protects your data and helps you meet your organizational security and compliance commitments.
-
-## About Cognitive Services encryption
-
-Data is encrypted and decrypted using [FIPS 140-2](https://en.wikipedia.org/wiki/FIPS_140-2) compliant [256-bit AES](https://en.wikipedia.org/wiki/Advanced_Encryption_Standard) encryption. Encryption and decryption are transparent, meaning encryption and access are managed for you. Your data is secure by default and you don't need to modify your code or applications to take advantage of encryption.
-
-## About encryption key management
-
-By default, your subscription uses Microsoft-managed encryption keys. You also have the option to manage your subscription with your own keys, called customer-managed keys (CMK). CMK offers greater flexibility to create, rotate, disable, and revoke access controls. You can also audit the encryption keys used to protect your data.
-
-## Customer-managed keys with Azure Key Vault
-
-There is also an option to manage your subscription with your own keys. Customer-managed keys (CMK), also known as Bring your own key (BYOK), offer greater flexibility to create, rotate, disable, and revoke access controls. You can also audit the encryption keys used to protect your data.
-
-You must use Azure Key Vault to store your customer-managed keys. You can either create your own keys and store them in a key vault, or you can use the Azure Key Vault APIs to generate keys. The Cognitive Services resource and the key vault must be in the same region and in the same Azure Active Directory (Azure AD) tenant, but they can be in different subscriptions. For more information about Azure Key Vault, see [What is Azure Key Vault?](../../../key-vault/general/overview.md).
-
-### Customer-managed keys for Language services
-
-To request the ability to use customer-managed keys, fill out and submit the [Language service Customer-Managed Key Request Form](https://aka.ms/cogsvc-cmk). It will take approximately 3-5 business days to hear back on the status of your request. Depending on demand, you may be placed in a queue and approved as space becomes available. Once approved for using CMK with Language services, you'll need to create a new Language resource from the Azure portal.
--
-### Enable customer-managed keys
-
-A new Cognitive Services resource is always encrypted using Microsoft-managed keys. It's not possible to enable customer-managed keys at the time that the resource is created. Customer-managed keys are stored in Azure Key Vault, and the key vault must be provisioned with access policies that grant key permissions to the managed identity that is associated with the Cognitive Services resource. The managed identity is available only after the resource is created using the Pricing Tier for CMK.
-
-To learn how to use customer-managed keys with Azure Key Vault for Cognitive Services encryption, see:
-* [Configure customer-managed keys with Key Vault for Cognitive Services encryption from the Azure portal](../../encryption/cognitive-services-encryption-keys-portal.md)
-Enabling customer-managed keys will also enable a system-assigned managed identity, a feature of Azure AD. Once the system-assigned managed identity is enabled, this resource will be registered with Azure Active Directory. After being registered, the managed identity will be given access to the Key Vault selected during customer-managed key setup. You can learn more about [Managed Identities](../../../active-directory/managed-identities-azure-resources/overview.md).
-
-> [!IMPORTANT]
-> If you disable system-assigned managed identities, access to the key vault will be removed and any data encrypted with the customer keys will no longer be accessible. Any features that depend on this data will stop working.
-
-> [!IMPORTANT]
-> Managed identities do not currently support cross-directory scenarios. When you configure customer-managed keys in the Azure portal, a managed identity is automatically assigned under the covers. If you subsequently move the subscription, resource group, or resource from one Azure AD directory to another, the managed identity associated with the resource is not transferred to the new tenant, so customer-managed keys may no longer work. For more information, see **Transferring a subscription between Azure AD directories** in [FAQs and known issues with managed identities for Azure resources](../../../active-directory/managed-identities-azure-resources/known-issues.md#transferring-a-subscription-between-azure-ad-directories).
-
-### Store customer-managed keys in Azure Key Vault
-
-To enable customer-managed keys, you must use an Azure Key Vault to store your keys. You must enable both the **Soft Delete** and **Do Not Purge** properties on the key vault.
-
-Only RSA keys of size 2048 are supported with Cognitive Services encryption. For more information about keys, see **Key Vault keys** in [About Azure Key Vault keys, secrets and certificates](../../../key-vault/general/about-keys-secrets-certificates.md).
-
-### Rotate customer-managed keys
-
-You can rotate a customer-managed key in Azure Key Vault according to your compliance policies. When the key is rotated, you must update the Cognitive Services resource to use the new key URI. To learn how to update the resource to use a new version of the key in the Azure portal, see the section titled **Update the key version** in [Configure customer-managed keys for Cognitive Services by using the Azure portal](../../encryption/cognitive-services-encryption-keys-portal.md).
-
-Rotating the key does not trigger re-encryption of data in the resource. There is no further action required from the user.
-
-### Revoke access to customer-managed keys
-
-To revoke access to customer-managed keys, use PowerShell or Azure CLI. For more information, see [Azure Key Vault PowerShell](/powershell/module/az.keyvault//) or [Azure Key Vault CLI](/cli/azure/keyvault). Revoking access effectively blocks access to all data in the Cognitive Services resource, as the encryption key is inaccessible by Cognitive Services.
-
-## Next steps
-
-* [Language service Customer-Managed Key Request Form](https://aka.ms/cogsvc-cmk)
-* [Learn more about Azure Key Vault](../../../key-vault/general/overview.md)
cognitive-services Language Support https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/concepts/language-support.md
- Title: Language support for language features-
-description: This article explains which natural languages are supported by the different features of Azure Cognitive Service for Language.
------ Previously updated : 03/09/2023---
-# Language support for Language features
-
-Use this article to learn about the languages currently supported by different features.
-
-> [!NOTE]
-> Some of the languages listed below are only supported in some [model versions](../concepts/model-lifecycle.md#choose-the-model-version-used-on-your-data). See the linked feature-level language support article for details.
--
-| Language | Language code | [Custom text classification](../custom-text-classification/language-support.md) | [Custom named entity recognition(NER)](../custom-named-entity-recognition/language-support.md) | [Conversational language understanding](../conversational-language-understanding/language-support.md) | [Entity linking](../entity-linking/language-support.md) | [Language detection](../language-detection/language-support.md) | [Key phrase extraction](../key-phrase-extraction/language-support.md) | [Named entity recognition(NER)](../named-entity-recognition/language-support.md) | [Orchestration workflow](../orchestration-workflow/language-support.md) | [Personally Identifiable Information (PII)](../personally-identifiable-information/language-support.md?tabs=documents) | [Conversation PII](../personally-identifiable-information/language-support.md?tabs=conversations) | [Question answering](../question-answering/language-support.md) | [Sentiment analysis](../sentiment-opinion-mining/language-support.md#sentiment-analysis-language-support) | [Opinion mining](../sentiment-opinion-mining/language-support.md#opinion-mining-language-support) | [Text Analytics for health](../text-analytics-for-health/language-support.md) | [Summarization](../summarization/language-support.md?tabs=document-summarization) | [Conversation summarization](../summarization/language-support.md?tabs=conversation-summarization) |
-|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|
-| Afrikaans | `af` | &check; | &check; | &check; | | &check; | &check; | | | | | | &check; | &check; | | | |
-| Albanian | `sq` | &check; | &check; | &check; | | &check; | &check; | | | | | | &check; | &check; | | | |
-| Amharic | `am` | &check; | &check; | &check; | | &check; | &check; | | | | | | &check; | &check; | | | |
-| Arabic | `ar` | &check; | &check; | &check; | | &check; | &check; | &check; | | &check; | | &check; | &check; | &check; | | | |
-| Armenian | `hy` | &check; | &check; | &check; | | &check; | &check; | | | | | &check; | &check; | &check; | | | |
-| Assamese | `as` | &check; | &check; | &check; | | &check; | &check; | | | | | | &check; | &check; | | | |
-| Azerbaijani | `az` | &check; | &check; | &check; | | &check; | &check; | | | | | | &check; | &check; | | | |
-| Basque | `eu` | &check; | &check; | &check; | | &check; | &check; | | | | | &check; | &check; | &check; | | | |
-| Belarusian | `be` | &check; | &check; | &check; | | &check; | &check; | | | | | | &check; | &check; | | | |
-| Bengali | `bn` | &check; | &check; | &check; | | &check; | &check; | | | | | | &check; | &check; | | | |
-| Bosnian | `bs` | &check; | &check; | &check; | | &check; | &check; | | | | | | &check; | &check; | | | |
-| Breton | `br` | &check; | &check; | &check; | | &check; | &check; | | | | | | &check; | &check; | | | |
-| Bulgarian | `bg` | &check; | &check; | &check; | | &check; | &check; | | | | | &check; | &check; | &check; | | | |
-| Burmese | `my` | &check; | &check; | &check; | | &check; | &check; | | | | | | &check; | &check; | | | |
-| Catalan | `ca` | &check; | &check; | &check; | | &check; | &check; | | | | | &check; | &check; | &check; | | | |
-| Central Khmer | `km` | | | | | &check; | | | | | | | | | | | |
-| Chinese (Simplified) | `zh-hans` | &check; | &check; | &check; | | &check; | &check; | &check; | | &check; | | &check; | &check; | &check; | | &check; | |
-| Chinese (Traditional) | `zh-hant` | &check; | &check; | &check; | | &check; | &check; | &check; | | &check; | | &check; | &check; | &check; | | | |
-| Corsican | `co` | | | | | &check; | | | | | | | | | | | |
-| Croatian | `hr` | &check; | &check; | &check; | | &check; | &check; | | | | | &check; | &check; | &check; | | | |
-| Czech | `cs` | &check; | &check; | &check; | | &check; | &check; | &check; | | &check; | | &check; | &check; | &check; | | | |
-| Danish | `da` | &check; | &check; | &check; | | &check; | &check; | &check; | | &check; | | &check; | &check; | &check; | | | |
-| Dari | `prs` | | | | | &check; | | | | | | | | | | | |
-| Divehi | `dv` | | | | | &check; | | | | | | | | | | | |
-| Dutch | `nl` | &check; | &check; | &check; | | &check; | &check; | &check; | | &check; | | &check; | &check; | &check; | | | |
-| English (UK) | `en-gb` | &check; | &check; | &check; | | &check; | | | | | | | | | | | |
-| English (US) | `en-us` | &check; | &check; | &check; | &check; | &check; | &check; | &check; | &check; | &check; | &check; | &check; | &check; | &check; | &check; | &check; | &check; |
-| Esperanto | `eo` | &check; | &check; | &check; | | &check; | &check; | | | | | | &check; | &check; | | | |
-| Estonian | `et` | &check; | &check; | &check; | | &check; | &check; | | | | | &check; | &check; | &check; | | | |
-| Fijian | `fj` | | | | | &check; | | | | | | | | | | | |
-| Filipino | `tl` | &check; | &check; | &check; | | &check; | &check; | | | | | | &check; | &check; | | | |
-| Finnish | `fi` | &check; | &check; | &check; | | &check; | &check; | &check; | | &check; | | &check; | &check; | &check; | | | |
-| French | `fr` | &check; | &check; | &check; | | &check; | &check; | &check; | &check; | &check; | | &check; | &check; | &check; | &check; | &check; | |
-| Galician | `gl` | &check; | &check; | &check; | | &check; | &check; | | | | | &check; | &check; | &check; | | | |
-| Georgian | `ka` | &check; | &check; | &check; | | &check; | &check; | | | | | | &check; | &check; | | | |
-| German | `de` | &check; | &check; | &check; | | &check; | &check; | &check; | &check; | &check; | | &check; | &check; | &check; | &check; | &check; | |
-| Greek | `el` | &check; | &check; | &check; | | &check; | &check; | | | | | &check; | &check; | &check; | | | |
-| Gujarati | `gu` | &check; | &check; | &check; | | &check; | &check; | | | | | &check; | &check; | &check; | | | |
-| Haitian | `ht` | | | | | &check; | | | | | | | | | | | |
-| Hausa | `ha` | &check; | &check; | &check; | | &check; | &check; | | | | | | &check; | &check; | | | |
-| Hebrew | `he` | &check; | &check; | &check; | | &check; | &check; | &check; | | &check; | | &check; | &check; | &check; | &check; | | |
-| Hindi | `hi` | &check; | &check; | &check; | | &check; | &check; | &check; | | &check; | | &check; | &check; | &check; | | | |
-| Hmong Daw | `mww` | | | | | &check; | | | | | | | | | | | |
-| Hungarian | `hu` | &check; | &check; | &check; | | &check; | &check; | &check; | | &check; | | &check; | &check; | &check; | | | |
-| Icelandic | `is` | | | | | &check; | | | | | | &check; | | | | | |
-| Igbo | `ig` | | | | | &check; | | | | | | | | | | | |
-| Indonesian | `id` | &check; | &check; | &check; | | &check; | &check; | | | | | &check; | &check; | &check; | | | |
-| Inuktitut | `iu` | | | | | &check; | | | | | | | | | | | |
-| Irish | `ga` | &check; | &check; | &check; | | &check; | &check; | | | | | &check; | &check; | &check; | | | |
-| Italian | `it` | &check; | &check; | &check; | | &check; | &check; | &check; | &check; | &check; | | &check; | &check; | &check; | &check; | &check; | |
-| Japanese | `ja` | &check; | &check; | &check; | | &check; | &check; | &check; | | &check; | | &check; | &check; | &check; | | &check; | |
-| Javanese | `jv` | &check; | &check; | &check; | | &check; | &check; | | | | | | &check; | &check; | | | |
-| Kannada | `kn` | &check; | &check; | &check; | | &check; | &check; | | | | | &check; | &check; | &check; | | | |
-| Kazakh | `kk` | &check; | &check; | &check; | | &check; | &check; | | | | | | &check; | &check; | | | |
-| Khmer | `km` | &check; | &check; | &check; | | &check; | &check; | | | | | | &check; | &check; | | | |
-| Kinyarwanda | `rw` | | | | | &check; | | | | | | | | | | | |
-| Korean | `ko` | &check; | &check; | &check; | | &check; | &check; | &check; | | &check; | | &check; | &check; | &check; | | &check; | |
-| Kurdish (Kurmanji) | `ku` | &check; | &check; | &check; | | &check; | &check; | | | | | | &check; | &check; | | | |
-| Kyrgyz | `ky` | &check; | &check; | &check; | | &check; | &check; | | | | | | &check; | &check; | | | |
-| Lao | `lo` | &check; | &check; | &check; | | &check; | &check; | | | | | | &check; | &check; | | | |
-| Latin | `la` | &check; | &check; | &check; | | &check; | &check; | | | | | | &check; | &check; | | | |
-| Latvian | `lv` | &check; | &check; | &check; | | &check; | &check; | | | | | &check; | &check; | &check; | | | |
-| Lithuanian | `lt` | &check; | &check; | &check; | | &check; | &check; | | | | | &check; | &check; | &check; | | | |
-| Luxembourgish | `lb` | | | | | &check; | | | | | | | | | | | |
-| Macedonian | `mk` | &check; | &check; | &check; | | &check; | &check; | | | | | | &check; | &check; | | | |
-| Malagasy | `mg` | &check; | &check; | &check; | | &check; | &check; | | | | | | &check; | &check; | | | |
-| Malay | `ms` | &check; | &check; | &check; | | &check; | &check; | | | | | &check; | &check; | &check; | | | |
-| Malayalam | `ml` | &check; | &check; | &check; | | &check; | &check; | | | | | &check; | &check; | &check; | | | |
-| Maltese | `mt` | | | | | &check; | | | | | | | | | | | |
-| Maori | `mi` | | | | | &check; | | | | | | | | | | | |
-| Marathi | `mr` | &check; | &check; | &check; | | &check; | &check; | | | | | | &check; | &check; | | | |
-| Mongolian | `mn` | &check; | &check; | &check; | | &check; | &check; | | | | | | &check; | &check; | | | |
-| Nepali | `ne` | &check; | &check; | &check; | | &check; | &check; | | | | | | &check; | &check; | | | |
-| Norwegian (Bokmal) | `nb` | &check; | &check; | &check; | | &check; | &check; | &check; | | &check; | | &check; | | | | | |
-| Norwegian | `no` | | | | | &check; | | | | | | | &check; | &check; | | | |
-| Norwegian Nynorsk | `nn` | | | | | &check; | | | | | | | | | | | |
-| Oriya | `or` | &check; | &check; | &check; | | &check; | &check; | | | | | | &check; | &check; | | | |
-| Oromo | `om` | | | | | | &check; | | | | | | &check; | &check; | | | |
-| Pashto | `ps` | &check; | &check; | &check; | | &check; | &check; | | | | | | &check; | &check; | | | |
-| Persian (Farsi) | `fa` | &check; | &check; | &check; | | &check; | &check; | | | | | | &check; | &check; | | | |
-| Polish | `pl` | &check; | &check; | &check; | | &check; | &check; | &check; | | &check; | | &check; | &check; | &check; | | | |
-| Portuguese (Brazil) | `pt-br` | &check; | &check; | &check; | | &check; | &check; | &check; | &check; | &check; | | &check; | &check; | &check; | &check; | &check; | |
-| Portuguese (Portugal) | `pt-pt` | &check; | &check; | &check; | | &check; | &check; | &check; | | &check; | | | &check; | &check; | | &check; | |
-| Punjabi | `pa` | &check; | &check; | &check; | | &check; | &check; | | | | | &check; | &check; | &check; | | | |
-| Queretaro Otomi | `otq` | | | | | &check; | | | | | | | | | | | |
-| Romanian | `ro` | &check; | &check; | &check; | | &check; | &check; | | | | | &check; | &check; | &check; | | | |
-| Russian | `ru` | &check; | &check; | &check; | | &check; | &check; | &check; | | &check; | | &check; | &check; | &check; | | | |
-| Samoan | `sm` | | | | | &check; | | | | | | | | | | | |
-| Sanskrit | `sa` | &check; | &check; | &check; | | &check; | &check; | | | | | | &check; | &check; | | | |
-| Scottish Gaelic | `gd` | &check; | &check; | &check; | | &check; | &check; | | | | | | &check; | &check; | | | |
-| Serbian | `sr` | &check; | &check; | &check; | | &check; | &check; | | | | | &check; | | | | | |
-| Shona | `sn` | | | | | &check; | | | | | | | | | | | |
-| Sindhi | `sd` | &check; | &check; | &check; | | &check; | &check; | | | | | | &check; | &check; | | | |
-| Sinhala | `si` | &check; | &check; | &check; | | &check; | &check; | | | | | | &check; | &check; | | | |
-| Slovak | `sk` | &check; | &check; | &check; | | &check; | &check; | | | | | &check; | &check; | &check; | | | |
-| Slovenian | `sl` | &check; | &check; | &check; | | &check; | &check; | | | | | &check; | &check; | &check; | | | |
-| Somali | `so` | &check; | &check; | &check; | | &check; | &check; | | | | | | &check; | &check; | | | |
-| Spanish | `es` | &check; | &check; | &check; | &check; | &check; | &check; | &check; | &check; | &check; | | &check; | &check; | &check; | &check; | | |
-| Sundanese | `su` | &check; | &check; | &check; | | &check; | &check; | | | | | | &check; | &check; | | | |
-| Swahili | `sw` | &check; | &check; | &check; | | &check; | &check; | | | | | | &check; | &check; | | | |
-| Swedish | `sv` | &check; | &check; | &check; | | &check; | &check; | &check; | | &check; | | &check; | &check; | &check; | | | |
-| Tahitian | `ty` | | | | | &check; | | | | | | | | | | | |
-| Tajik | `tg` | | | | | &check; | | | | | | | | | | | |
-| Tamil | `ta` | &check; | &check; | &check; | | &check; | &check; | | | | | &check; | &check; | &check; | | | |
-| Tatar | `tt` | | | | | &check; | | | | | | | | | | | |
-| Telugu | `te` | &check; | &check; | &check; | | &check; | &check; | | | | | &check; | &check; | &check; | | | |
-| Thai | `th` | &check; | &check; | &check; | | &check; | &check; | | | | | &check; | &check; | &check; | | | |
-| Tibetan | `bo` | | | | | &check; | | | | | | | | | | | |
-| Tigrinya | `ti` | | | | | &check; | | | | | | | | | | | |
-| Tongan | `to` | | | | | &check; | | | | | | | | | | | |
-| Turkish | `tr` | &check; | &check; | &check; | | &check; | &check; | &check; | | &check; | | &check; | &check; | &check; | | | |
-| Turkmen | `tk` | | | | | &check; | | | | | | | | | | | |
-| Ukrainian | `uk` | &check; | &check; | &check; | | &check; | &check; | | | | | &check; | &check; | &check; | | | |
-| Urdu | `ur` | &check; | &check; | &check; | | &check; | &check; | | | | | &check; | &check; | &check; | | | |
-| Uyghur | `ug` | &check; | &check; | &check; | | &check; | &check; | | | | | | &check; | &check; | | | |
-| Uzbek | `uz` | &check; | &check; | &check; | | &check; | &check; | | | | | | &check; | &check; | | | |
-| Vietnamese | `vi` | &check; | &check; | &check; | | &check; | &check; | | | | | &check; | &check; | &check; | | | |
-| Welsh | `cy` | &check; | &check; | &check; | | &check; | &check; | | | | | | &check; | &check; | | | |
-| Western Frisian | `fy` | &check; | &check; | &check; | | &check; | &check; | | | | | | &check; | &check; | | | |
-| Xhosa | `xh` | &check; | &check; | &check; | | &check; | &check; | | | | | | &check; | &check; | | | |
-| Yiddish | `yi` | &check; | &check; | &check; | | &check; | &check; | | | | | | &check; | &check; | | | |
-| Yoruba | `yo` | | | | | &check; | | | | | | | | | | | |
-| Yucatec Maya | `yua` | | | | | &check; | | | | | | | | | | | |
-| Zulu | `zu` | &check; | &check; | &check; | | &check; | | | | | | | | | | | |
-
-## See also
-
-See the following service-level language support articles for information on model version support for each language:
-* [Custom text classification](../custom-text-classification/language-support.md)
-* [Custom named entity recognition(NER)](../custom-named-entity-recognition/language-support.md)
-* [Conversational language understanding](../conversational-language-understanding/language-support.md)
-* [Entity linking](../entity-linking/language-support.md)
-* [Language detection](../language-detection/language-support.md)
-* [Key phrase extraction](../key-phrase-extraction/language-support.md)
-* [Named entity recognition(NER)](../named-entity-recognition/language-support.md)
-* [Orchestration workflow](../orchestration-workflow/language-support.md)
-* [Personally Identifiable Information (PII)](../personally-identifiable-information/language-support.md?tabs=documents)
-* [Conversation PII](../personally-identifiable-information/language-support.md?tabs=conversations)
-* [Question answering](../question-answering/language-support.md)
-* [Sentiment analysis](../sentiment-opinion-mining/language-support.md#sentiment-analysis-language-support)
-* [Opinion mining](../sentiment-opinion-mining/language-support.md#opinion-mining-language-support)
-* [Text Analytics for health](../text-analytics-for-health/language-support.md)
-* [Summarization](../summarization/language-support.md?tabs=document-summarization)
-* [Conversation summarization](../summarization/language-support.md?tabs=conversation-summarization)
cognitive-services Migrate Language Service Latest https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/concepts/migrate-language-service-latest.md
- Title: Migrate to the latest version of Azure Cognitive Service for Language-
-description: Learn how to move your Text Analytics applications to use the latest version of the Language service.
------ Previously updated : 08/08/2022----
-# Migrate to the latest version of Azure Cognitive Service for Language
-
-> [!TIP]
-> Just getting started with Azure Cognitive Service for Language? See the [overview article](../overview.md) for details on the service, available features, and links to quickstarts for information on the current version of the API.
-
-If your applications are still using the Text Analytics API or client library (before stable v5.1.0), this article will help you upgrade them to use the latest version of the [Azure Cognitive Service for language](../overview.md) features.
-
-## Unified Language endpoint (REST API)
-
-This section applies to applications that use the older `/text/analytics/...` endpoint format for REST API calls. For example:
-
-```http
-https://<your-custom-subdomain>.cognitiveservices.azure.com/text/analytics/<version>/<feature>
-```
-
-If your application uses the above endpoint format, the REST API endpoint for the following Language service features has changed:
-
-* [Entity linking](../entity-linking/quickstart.md?pivots=rest-api)
-* [Key phrase extraction](../key-phrase-extraction/quickstart.md?pivots=rest-api)
-* [Language detection](../language-detection/quickstart.md?pivots=rest-api)
-* [Named entity recognition (NER)](../named-entity-recognition/quickstart.md?pivots=rest-api)
-* [Personally Identifying Information (PII) detection](../personally-identifiable-information/quickstart.md?pivots=rest-api)
-* [Sentiment analysis and opinion mining](../sentiment-opinion-mining/quickstart.md?pivots=rest-api)
-* [Text analytics for health](../text-analytics-for-health/quickstart.md?pivots=rest-api)
-
-The Language service now provides a unified endpoint for sending REST API requests to these features. If your application uses the REST API, update its request endpoint to use the current endpoint:
-
-```http
-https://<your-language-resource-endpoint>/language/:analyze-text?api-version=2022-05-01
-```
-
-Additionally, the format of the JSON request body has changed. You'll need to update the request structure that your application sends to the API, for example the following entity recognition JSON body:
-
-```json
-{
- "kind": "EntityRecognition",
- "parameters": {
- "modelVersion": "latest"
- },
- "analysisInput":{
- "documents":[
- {
- "id":"1",
- "language": "en",
- "text": "I had a wonderful trip to Seattle last week."
- }
- ]
- }
-}
-```
-
-Use the quickstarts linked above to see current example REST API calls for the feature(s) you're using, and the associated API output.
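As a quick illustration, the request body above can be sent to the unified endpoint with a few lines of Python; the endpoint and key are placeholders for your Language resource.

```python
# Minimal sketch: POST the entity recognition body shown above to the unified endpoint.
# Endpoint and key are placeholders for your Language resource.
import requests

endpoint = "https://<your-language-resource-endpoint>"
url = f"{endpoint}/language/:analyze-text?api-version=2022-05-01"

body = {
    "kind": "EntityRecognition",
    "parameters": {"modelVersion": "latest"},
    "analysisInput": {
        "documents": [
            {"id": "1", "language": "en", "text": "I had a wonderful trip to Seattle last week."}
        ]
    },
}

response = requests.post(url, headers={"Ocp-Apim-Subscription-Key": "<your-key>"}, json=body)
print(response.json())
```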
-
-## Client libraries
-
-To use the latest version of the client library, you will need to download the latest software package in the `Azure.AI.TextAnalytics` namespace. See the quickstart articles linked above for example code and instructions for using the client library in your preferred language.
-
-<!--[!INCLUDE [SDK target versions](../includes/sdk-target-versions.md)]-->
--
-## Version 2.1 functionality changes
-
-If you're migrating an application from v2.1 of the API, there are several changes to feature functionality you should be aware of.
-
-### Sentiment analysis v2.1
-
-[Sentiment Analysis](../sentiment-opinion-mining/quickstart.md) in version 2.1 returns sentiment scores between 0 and 1 for each document sent to the API, with scores closer to 1 indicating more positive sentiment. The current version of this feature returns sentiment labels (such as "positive" or "negative") for both the sentences and the document as a whole, and their associated confidence scores.
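For instance, with the current Python client library (`azure-ai-textanalytics`), the document-level label and confidence scores can be read as sketched below; the endpoint and key are placeholders.

```python
# Minimal sketch: current sentiment analysis output is a label plus per-label confidence
# scores, instead of the single 0-1 score returned by v2.1. Credentials are placeholders.
from azure.core.credentials import AzureKeyCredential
from azure.ai.textanalytics import TextAnalyticsClient

client = TextAnalyticsClient(
    endpoint="https://<your-language-resource>.cognitiveservices.azure.com/",
    credential=AzureKeyCredential("<your-key>"),
)

doc = client.analyze_sentiment(["The rooms were great, but the service was slow."])[0]
print(doc.sentiment)                   # for example: "mixed"
print(doc.confidence_scores.positive)  # confidence score for the "positive" label
for sentence in doc.sentences:
    print(sentence.text, sentence.sentiment)
```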
-
-### NER, PII, and entity linking v2.1
-
-In version 2.1, the Text Analytics API used one endpoint for Named Entity Recognition (NER) and entity linking. The current version of this feature provides expanded named entity detection, and has separate endpoints for [NER](../named-entity-recognition/quickstart.md?pivots=rest-api) and [entity linking](../entity-linking/quickstart.md?pivots=rest-api) requests. Additionally, you can use another feature offered in the Language service that lets you [detect personal (PII) and health (PHI) information](../personally-identifiable-information/overview.md).
-
-You will also need to update your application to use the [entity categories](../named-entity-recognition/concepts/named-entity-categories.md) returned in the [API's response](../named-entity-recognition/how-to-call.md).
-
-### Version 2.1 entity categories
-
-The following table lists the entity categories returned for NER v2.1.
-
-| Category | Description |
-||--|
-| Person | Names of people. |
-|Location | Natural and human-made landmarks, structures, geographical features, and geopolitical entities |
-|Organization | Companies, political groups, musical bands, sport clubs, government bodies, and public organizations. Nationalities and religions are not included in this entity type. |
-| PhoneNumber | Phone numbers (US and EU phone numbers only). |
-| Email | Email addresses. |
-| URL | URLs to websites. |
-| IP | Network IP addresses. |
-| DateTime | Dates and times of day.|
-| Date | Calendar dates. |
-| Time | Times of day. |
-| DateRange | Date ranges. |
-| TimeRange | Time ranges. |
-| Duration | Durations. |
-| Set | Set, repeated times. |
-| Quantity | Numbers and numeric quantities. |
-| Number | Numbers. |
-| Percentage | Percentages.|
-| Ordinal | Ordinal numbers. |
-| Age | Ages. |
-| Currency | Currencies. |
-| Dimension | Dimensions and measurements. |
-| Temperature | Temperatures. |
-
-### Language detection v2.1
-
-The [language detection](../language-detection/quickstart.md) feature output has changed in the current version. The JSON response will contain `ConfidenceScore` instead of `score`. The current version also only returns one language for each document.
-
-### Key phrase extraction v2.1
-
-Outside of the endpoint and request format, the key phrase extraction feature's functionality has not changed.
-
-## See also
-
-* [What is Azure Cognitive Service for Language?](../overview.md)
-* [Language service developer guide](developer-guide.md)
-* See the following reference documentation for information on previous API versions.
- * [Version 2.1](https://westcentralus.dev.cognitive.microsoft.com/docs/services/TextAnalytics-v2-1/operations/56f30ceeeda5650db055a3c9)
- * [Version 3.0](https://westus.dev.cognitive.microsoft.com/docs/services/TextAnalytics-v3-0/operations/Sentiment)
- * [Version 3.1](https://westcentralus.dev.cognitive.microsoft.com/docs/services/TextAnalytics-v3-1/operations/Sentiment)
-* Use the following quickstart guides to see examples for the current version of these features.
- * [Entity linking](../entity-linking/quickstart.md)
- * [Key phrase extraction](../key-phrase-extraction/quickstart.md)
- * [Named entity recognition (NER)](../named-entity-recognition/quickstart.md)
- * [Language detection](../language-detection/quickstart.md)
- * [Personally Identifying Information (PII) detection](../personally-identifiable-information/quickstart.md)
- * [Sentiment analysis and opinion mining](../sentiment-opinion-mining/quickstart.md)
- * [Text analytics for health](../text-analytics-for-health/quickstart.md)
-
cognitive-services Migrate https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/concepts/migrate.md
- Title: "Migrate to Azure Cognitive Service for Language from: LUIS, QnA Maker, and Text Analytics"-
-description: Use this article to learn if you need to migrate your applications from LUIS, QnA Maker, and Text Analytics.
----- Previously updated : 09/29/2022----
-# Migrating to Azure Cognitive Service for Language
-
-On November 2nd 2021, Azure Cognitive Service for Language was released into public preview. This language service unifies the Text Analytics, QnA Maker, and LUIS service offerings, and provides several new features as well.
-
-## Do I need to migrate to the language service if I am using Text Analytics?
-
-Text Analytics has been incorporated into the language service, and its features are still available. If you were using Text Analytics, your applications should continue to work without breaking changes. You can also see the [Text Analytics migration guide](migrate-language-service-latest.md), if you need to update an older application.
-
-Consider using one of the available quickstart articles to see the latest information on service endpoints, and API calls.
-
-## How do I migrate to the language service if I am using LUIS?
-
-If you're using Language Understanding (LUIS), you can [import your LUIS JSON file](../conversational-language-understanding/how-to/migrate-from-luis.md) to the new Conversational language understanding feature.
-
-## How do I migrate to the language service if I am using QnA Maker?
-
-If you're using QnA Maker, see the [migration guide](../question-answering/how-to/migrate-qnamaker.md) for information on migrating knowledge bases from QnA Maker to question answering.
-
-## See also
-
-* [Azure Cognitive Service for Language overview](../overview.md)
-* [Conversational language understanding overview](../conversational-language-understanding/overview.md)
-* [Question answering overview](../question-answering/overview.md)
cognitive-services Model Lifecycle https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/concepts/model-lifecycle.md
- Title: Model Lifecycle of Language service models-
-description: This article describes the timelines for models and model versions used by Language service features.
------- Previously updated : 11/29/2022---
-# Model lifecycle
-
-Language service features utilize AI models. We update the language service with new model versions to improve accuracy, support, and quality. As models become older, they are retired. Use this article for information on that process, and what you can expect for your applications.
-
-## Prebuilt features
--
-Our standard (not customized) language service features are built on AI models that we call pre-trained models.
-
-We regularly update the language service with new model versions to improve model accuracy, support, and quality.
-
-By default, all API requests will use the latest Generally Available (GA) model.
-
-### Choose the model-version used on your data
-
-We strongly recommend using the `latest` model version to utilize the latest and highest quality models. As our models improve, it's possible that some of your model results may change. Model versions may be deprecated, so we don't recommend pinning a specific model version in your implementation.
-
-Preview models used for preview features do not maintain a minimum retirement period and may be deprecated at any time.
-
-By default, API and SDK requests will use the latest Generally Available model. You can use an optional parameter to select the version of the model to be used (not recommended).
-
-> [!NOTE]
-> * If you are using a model version that is not listed in the table, then it was subject to the expiration policy.
-> * Abstractive document and conversation summarization do not provide model versions other than the latest available.
-
-Use the table below to find which model versions are supported by each feature:
-
-| Feature | Supported versions |
-|--|--|
-| Sentiment Analysis and opinion mining | `2021-10-01`, `2022-06-01`,`2022-10-01`,`2022-11-01*` |
-| Language Detection | `2021-11-20`, `2022-10-01*` |
-| Entity Linking | `2021-06-01*` |
-| Named Entity Recognition (NER) | `2021-06-01*`, `2022-10-01-preview`, `2023-02-01-preview**` |
-| Personally Identifiable Information (PII) detection | `2021-01-15*`, `2023-01-01-preview**` |
-| PII detection for conversations (Preview) | `2022-05-15-preview**` |
-| Question answering | `2021-10-01*` |
-| Text Analytics for health | `2021-05-15`, `2022-03-01*`, `2022-08-15-preview`, `2023-01-01-preview**` |
-| Key phrase extraction | `2021-06-01`, `2022-07-01`,`2022-10-01*` |
-| Document summarization - extractive only (preview) | `2022-08-31-preview**` |
-
-\* Latest Generally Available (GA) model version
-
-\*\* Latest preview version
--
-## Custom features
-
-### Expiration timeline
-
-As new training configs and new functionality become available, older and less accurate configs are retired. See the following timelines for config expiration:
-
-New configs are released every few months, and each publicly available training config expires **six months** after its release. If you've assigned a trained model to a deployment, the deployment expires **twelve months** after the training config expiration. If your models are about to expire, you can retrain and redeploy them with the latest training configuration version.
-
-After a training config version expires, API calls that use the expired config version will return an error. By default, training requests use the latest available training config version. To change the config version, use `trainingConfigVersion` when submitting a training job and assign the version you want.
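As a rough, non-authoritative sketch of what that looks like for a conversational language understanding project, a training job that pins a specific config version could be submitted as follows. The route, the fields other than `trainingConfigVersion`, and all placeholder values are assumptions here; check the authoring API reference for your feature before using them.

```python
# Rough sketch only: submit a training job that pins a training config version.
# The exact route and body fields vary by feature; verify against the authoring API reference.
import requests

endpoint = "https://<your-language-resource>.cognitiveservices.azure.com"
url = (
    f"{endpoint}/language/authoring/analyze-conversations/projects/"
    "<your-project>/:train?api-version=2023-04-01"
)

body = {
    "modelLabel": "<your-model-label>",
    "trainingMode": "standard",
    "trainingConfigVersion": "2022-09-01",  # pin the config version you want
}

response = requests.post(url, headers={"Ocp-Apim-Subscription-Key": "<your-key>"}, json=body)
print(response.status_code)  # training runs asynchronously; poll the operation-location header
```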
-
-> [!Tip]
-> It's recommended to use the latest supported config version
-
-You can train and deploy a custom AI model from the date of training config version release, up until the **Training config expiration** date. After this date, you'll have to use another supported training config version for submitting any training or deployment jobs.
-
-Deployment expiration is when your deployed model becomes unavailable for prediction.
-
-Use the table below to find which training config versions are supported by each feature:
-
-| Feature | Supported Training config versions | Training config expiration | Deployment expiration |
-||--|||
-| Custom text classification | `2022-05-01` | `2023-05-01` | `2024-04-30` |
-| Conversational language understanding | `2022-05-01` | `2022-10-28` | `2023-10-28` |
-| Conversational language understanding | `2022-09-01` | `2023-02-28` | `2024-02-28` |
-| Custom named entity recognition | `2022-05-01` | `2023-05-01` | `2024-04-30` |
-| Orchestration workflow | `2022-05-01` | `2023-05-01` | `2024-04-30` |
--
-## API versions
-
-When you're making API calls to the following features, you need to specify the `api-version` you want to use to complete your request. It's recommended to use the latest available API version.
-
-If you're using [Language Studio](https://aka.ms/languageStudio) for your projects, you'll use the latest API version available. Other API versions are only available through the REST APIs and client libraries.
-
-Use the following table to find which API versions are supported by each feature:
-
-| Feature | Supported versions | Latest Generally Available version | Latest preview version |
-|--||||
-| Custom text classification | `2022-05-01`, `2022-10-01-preview`, `2023-04-01` | `2022-05-01` | `2022-10-01-preview` |
-| Conversational language understanding | `2022-05-01`, `2022-10-01-preview`, `2023-04-01` | `2023-04-01` | `2022-10-01-preview` |
-| Custom named entity recognition | `2022-05-01`, `2022-10-01-preview`, `2023-04-01` | `2023-04-01` | `2022-10-01-preview` |
-| Orchestration workflow | `2022-05-01`, `2022-10-01-preview`, `2023-04-01` | `2023-04-01` | `2022-10-01-preview` |
-
-## Next steps
-
-[Azure Cognitive Service for Language overview](../overview.md)
cognitive-services Multilingual Emoji Support https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/concepts/multilingual-emoji-support.md
- Title: Multilingual and emoji support in Azure Cognitive Service for Language-
-description: Learn about offsets caused by multilingual and emoji encodings in Language service features.
------ Previously updated : 11/02/2021----
-# Multilingual and emoji support in Language service features
-
-Multilingual and emoji support has led to Unicode encodings that use more than one [code point](https://wikipedia.org/wiki/Code_point) to represent a single displayed character, called a grapheme. For example, emojis like 🌷 and 👍 may use several characters to compose the shape with additional characters for visual attributes, such as skin tone. Similarly, the Hindi word `अनुच्छेद` is encoded as five letters and three combining marks.
-
-Because of the different lengths of possible multilingual and emoji encodings, Language service features may return offsets in the response.
-
-## Offsets in the API response
-
-Whenever offsets are returned in the API response, remember:
-
-* Elements in the response may be specific to the endpoint that was called.
-* HTTP POST/GET payloads are encoded in [UTF-8](https://www.w3schools.com/charsets/ref_html_utf8.asp), which may or may not be the default character encoding on your client-side compiler or operating system.
-* Offsets refer to grapheme counts based on the [Unicode 8.0.0](https://unicode.org/versions/Unicode8.0.0) standard, not character counts.
-
-## Extracting substrings from text with offsets
-
-Offsets can cause problems when using character-based substring methods, for example the .NET [substring()](/dotnet/api/system.string.substring) method. One problem is that an offset may cause a substring method to start or end in the middle of a multi-character grapheme encoding instead of at a grapheme boundary.
-
-In .NET, consider using the [StringInfo](/dotnet/api/system.globalization.stringinfo) class, which enables you to work with a string as a series of textual elements, rather than individual character objects. You can also look for grapheme splitter libraries in your preferred software environment.
-
-The Language service features return these textual elements as well, for convenience.
-
-Endpoints that return an offset will support the `stringIndexType` parameter. This parameter adjusts the `offset` and `length` attributes in the API output to match the requested string iteration scheme. Currently, we support three types:
-* `textElement_v8` (default): iterates over graphemes as defined by the [Unicode 8.0.0](https://unicode.org/versions/Unicode8.0.0) standard
-* `unicodeCodePoint`: iterates over [Unicode Code Points](http://www.unicode.org/versions/Unicode13.0.0/ch02.pdf#G25564), the default scheme for Python 3
-* `utf16CodeUnit`: iterates over [UTF-16 Code Units](https://unicode.org/faq/utf_bom.html#UTF16), the default scheme for JavaScript, Java, and .NET
-If the `stringIndexType` requested matches the programming environment of choice, substring extraction can be done using standard substring or slice methods.
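To make the offset differences concrete, here's a small Python sketch; Python 3 strings iterate over Unicode code points, which matches the `unicodeCodePoint` scheme.

```python
# Minimal sketch: the same text measured under different string iteration schemes.
text = "Hello 👍🏽!"   # thumbs-up emoji followed by a skin-tone modifier

# Python 3 iterates over Unicode code points, matching the "unicodeCodePoint" scheme.
print(len(text))                           # 9 code points

# UTF-16 code units, matching "utf16CodeUnit": code points above U+FFFF take two units.
print(len(text.encode("utf-16-le")) // 2)  # 11 code units

# Under "textElement_v8" the emoji plus its modifier count as one grapheme (8 in total).
# Slicing is safe when offset and length use the same scheme as Python (unicodeCodePoint):
offset, length = 6, 2
print(text[offset:offset + length])        # the full 👍🏽 grapheme
```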
-
-## See also
-
-* [Language service overview](../overview.md)
cognitive-services Previous Updates https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/concepts/previous-updates.md
- Title: Previous language service updates-
-description: An archive of previous Azure Cognitive Service for Language updates.
------ Previously updated : 06/23/2022----
-# Previous updates for Azure Cognitive Service for Language
-
-This article contains a list of previously recorded updates for Azure Cognitive Service for Language. For more current service updates, see [What's new](../whats-new.md).
-
-## October 2021
-
-* Quality improvements for the extractive summarization feature in model-version `2021-08-01`.
-
-## September 2021
-
-* Starting with version `3.0.017010001-onprem-amd64`, the Text Analytics for health container can now be called using the client library.
-
-## July 2021
-
-* General availability for text analytics for health containers and API.
-* General availability for opinion mining.
-* General availability for PII extraction and redaction.
-* General availability for asynchronous operation.
-
-## June 2021
-
-### General API updates
-
-* New model-version `2021-06-01` for key phrase extraction based on transformers. It provides:
- * Support for 10 languages (Latin and CJK).
- * Improved key phrase extraction.
-* The `2021-06-01` model version for Named Entity Recognition (NER), which provides:
- * Improved AI quality and expanded language support for the *Skill* entity category.
- * Added Spanish, French, German, Italian, and Portuguese language support for the *Skill* entity category.
-
-### Text Analytics for health updates
-
-* A new model version `2021-05-15` for the `/health` endpoint and on-premises container, which provides:
- * 5 new entity types: `ALLERGEN`, `CONDITION_SCALE`, `COURSE`, `EXPRESSION` and `MUTATION_TYPE`.
- * 14 new relation types.
- * Expanded assertion detection for the new entity types.
- * Linking support for the `ALLERGEN` entity type.
-* A new image for the Text Analytics for health container with tag `3.0.016230002-onprem-amd64` and model version `2021-05-15`. This container is available for download from Microsoft Container Registry.
-
-## May 2021
-
-* Custom question answering (previously QnA Maker) can now be accessed using a Text Analytics resource.
-* Preview API release, including:
- * Asynchronous API now supports sentiment analysis and opinion mining.
- * A new query parameter, `LoggingOptOut`, is now available for customers who wish to opt out of logging input text for incident reports.
-* Text analytics for health and asynchronous operations are now available in all regions.
-
-## March 2021
-
-* Changes in the opinion mining JSON response body:
- * `aspects` is now `targets` and `opinions` is now `assessments`.
-* Changes in the JSON response body of the hosted web API of text analytics for health:
- * The `isNegated` boolean name of a detected entity object for negation is deprecated and replaced by assertion detection.
- * A new property called `role` is now part of the extracted relation between an attribute and an entity as well as the relation between entities. This adds specificity to the detected relation type.
-* Entity linking is now available as an asynchronous task.
-* A new `pii-categories` parameter for the PII feature.
- * This parameter lets you specify select PII entities, as well as those not supported by default for the input language.
-* Updated client libraries, which include asynchronous and text analytics for health operations.
-
-* A new model version `2021-03-01` for text analytics for health API and on-premises container which provides:
- * A rename of the `Gene` entity type to `GeneOrProtein`.
- * A new `Date` entity type.
- * Assertion detection which replaces negation detection.
- * A new preferred `name` property for linked entities that is normalized from various ontologies and coding systems.
-* A new text analytics for health container image with tag `3.0.015490002-onprem-amd64` and the new model-version `2021-03-01` has been released to the container preview repository.
- * This container image will no longer be available for download from `containerpreview.azurecr.io` after April 26th, 2021.
-* **Processed Text Records** is now available as a metric in the **Monitoring** section for your text analytics resource in the Azure portal.
-
-## February 2021
-
-* The `2021-01-15` model version for the PII feature, which provides:
- * Expanded support for 9 new languages
- * Improved AI quality
-* The S0 through S4 pricing tiers are being retired on March 8th, 2021.
-* The language detection container is now generally available.
-
-## January 2021
-
-* The `2021-01-15` model version for Named Entity Recognition (NER), which provides
- * Expanded language support.
- * Improved AI quality of general entity categories for all supported languages.
-* The `2021-01-05` model version for language detection, which provides additional language support.
-
-## November 2020
-
-* Portuguese (Brazil) `pt-BR` is now supported in sentiment analysis, starting with model version `2020-04-01`. It adds to the existing `pt-PT` support for Portuguese.
-* Updated client libraries, which include asynchronous and text analytics for health operations.
-
-## October 2020
-
-* Hindi support for sentiment analysis, starting with model version `2020-04-01`.
-* Model version `2020-09-01` for language detection, which adds additional language support and accuracy improvements.
-
-## September 2020
-
-* PII now includes the new `redactedText` property in the response JSON where detected PII entities in the input text are replaced by an `*` for each character of those entities.
-* Entity linking endpoint now includes the `bingID` property in the response JSON for linked entities.
-* The following updates are specific to the September release of the text analytics for health container only.
- * A new container image with tag `1.1.013530001-amd64-preview` with the new model-version `2020-09-03` has been released to the container preview repository.
- * This model version provides improvements in entity recognition, abbreviation detection, and latency enhancements.
-
-## August 2020
-
-* Model version `2020-07-01` for key phrase extraction, PII detection, and language detection. This update adds:
- * Additional government and country/region specific entity categories for Named Entity Recognition.
- * Norwegian and Turkish support in Sentiment Analysis.
-* An HTTP 400 error will now be returned for API requests that exceed the published data limits.
-* Endpoints that return an offset now support the optional `stringIndexType` parameter, which adjusts the returned `offset` and `length` values to match a supported string index scheme.
-
-The following updates are specific to the August release of the Text Analytics for health container only.
-
-* New model-version for Text Analytics for health: `2020-07-24`
-
-The following properties in the JSON response have changed:
-
-* `type` has been renamed to `category`
-* `score` has been renamed to `confidenceScore`
-* Entities in the `category` field of the JSON output are now in pascal case. The following entities have been renamed:
- * `EXAMINATION_RELATION` has been renamed to `RelationalOperator`.
- * `EXAMINATION_UNIT` has been renamed to `MeasurementUnit`.
- * `EXAMINATION_VALUE` has been renamed to `MeasurementValue`.
-  * `ROUTE_OR_MODE` has been renamed to `MedicationRoute`.
- * The relational entity `ROUTE_OR_MODE_OF_MEDICATION` has been renamed to `RouteOfMedication`.
-
-The following entities have been added:
-
-* Named Entity Recognition
- * `AdministrativeEvent`
- * `CareEnvironment`
- * `HealthcareProfession`
- * `MedicationForm`
-
-* Relation extraction
- * `DirectionOfCondition`
- * `DirectionOfExamination`
- * `DirectionOfTreatment`
-
-## May 2020
-
-* Model version `2020-04-01`:
- * Updated language support for sentiment analysis
- * New "Address" entity category in Named Entity Recognition (NER)
- * New subcategories in NER:
- * Location - Geographical
- * Location - Structural
- * Organization - Stock Exchange
- * Organization - Medical
- * Organization - Sports
- * Event - Cultural
- * Event - Natural
- * Event - Sports
-
-* The following properties in the JSON response have been added:
- * `SentenceText` in sentiment analysis
- * `Warnings` for each document
-
-* The names of the following properties in the JSON response have been changed, where applicable:
-
-* `score` has been renamed to `confidenceScore`
- * `confidenceScore` has two decimal points of precision.
-* `type` has been renamed to `category`
-* `subtype` has been renamed to `subcategory`
-
-* New sentiment analysis feature - opinion mining
-* New personal (`PII`) domain filter for protected health information (`PHI`).
-
-## February 2020
-
-Additional entity types are now available in the Named Entity Recognition (NER). This update introduces model version `2020-02-01`, which includes:
-
-* Recognition of the following general entity types (English only):
- * PersonType
- * Product
- * Event
- * Geopolitical Entity (GPE) as a subtype under Location
- * Skill
-
-* Recognition of the following personal information entity types (English only):
- * Person
- * Organization
- * Age as a subtype under Quantity
- * Date as a subtype under DateTime
- * Email
- * Phone Number (US only)
- * URL
- * IP Address
-
-## October 2019
-
-* Introduction of PII feature
-* Model version `2019-10-01`, which includes:
- * Named entity recognition:
- * Expanded detection and categorization of entities found in text.
- * Recognition of the following new entity types:
- * Phone number
- * IP address
- * Sentiment analysis:
- * Significant improvements in the accuracy and detail of the API's text categorization and scoring.
- * Automatic labeling for different sentiments in text.
- * Sentiment analysis and output on a document and sentence level.
-
- This model version supports: English (`en`), Japanese (`ja`), Chinese Simplified (`zh-Hans`), Chinese Traditional (`zh-Hant`), French (`fr`), Italian (`it`), Spanish (`es`), Dutch (`nl`), Portuguese (`pt`), and German (`de`).
-
-## Next steps
-
-See [What's new](../whats-new.md) for current service updates.
cognitive-services Role Based Access Control https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/concepts/role-based-access-control.md
- Title: Role-based access control for the Language service-
-description: Learn how to use Azure RBAC for managing individual access to Azure resources.
------ Previously updated : 10/31/2022----
-# Language role-based access control
-
-Azure Cognitive Service for Language supports Azure role-based access control (Azure RBAC), an authorization system for managing individual access to Azure resources. Using Azure RBAC, you assign different team members different levels of permissions for your project's authoring resources. See the [Azure RBAC documentation](../../../role-based-access-control/index.yml) for more information.
-
-## Enable Azure Active Directory authentication
-
-To use Azure RBAC, you must enable Azure Active Directory authentication. You can [create a new resource with a custom subdomain](../../authentication.md#create-a-resource-with-a-custom-subdomain) or [create a custom subdomain for your existing resource](../../cognitive-services-custom-subdomains.md#how-does-this-impact-existing-resources).
-
-## Add role assignment to Language resource
-
-Azure RBAC can be assigned to a Language resource. To grant access to an Azure resource, you add a role assignment.
-1. In the [Azure portal](https://portal.azure.com/), select **All services**.
-1. Select **Cognitive Services**, and navigate to your specific Language resource.
- > [!NOTE]
- > You can also set up Azure RBAC for whole resource groups, subscriptions, or management groups. Do this by selecting the desired scope level and then navigating to the desired item. For example, selecting **Resource groups** and then navigating to a specific resource group.
-
-1. Select **Access control (IAM)** on the left navigation pane.
-1. Select **Add**, then select **Add role assignment**.
-1. On the **Role** tab on the next screen, select a role you want to add.
-1. On the **Members** tab, select a user, group, service principal, or managed identity.
-1. On the **Review + assign** tab, select **Review + assign** to assign the role.
-
-Within a few minutes, the target will be assigned the selected role at the selected scope. For help with these steps, see [Assign Azure roles using the Azure portal](../../../role-based-access-control/role-assignments-portal.md).
-
-## Language role types
-
-Use the following table to determine access needs for your Language projects.
-
-These custom roles only apply to Language resources.
-> [!NOTE]
-> * All prebuilt capabilities are accessible to all roles
-> * *Owner* and *Contributor* roles take priority over the custom language roles
-> * Azure Active Directory (AAD) is only used for the custom Language roles
-> * If you are assigned as a *Contributor* on Azure, your role is shown as *Owner* in the Language Studio portal.
--
-### Cognitive Services Language reader
-
-A user who only needs to validate and review the Language apps, typically a tester who ensures the application is performing well before deploying the project. They may want to review the application's assets to notify the app developers of any changes that need to be made, but they don't have direct access to make them. Readers have access to view the evaluation results.
--
- :::column span="":::
- **Capabilities**
- :::column-end:::
- :::column span="":::
- **API Access**
- :::column-end:::
- :::column span="":::
- * Read
- * Test
- :::column-end:::
- :::column span="":::
- All GET APIs under:
- * [Language authoring conversational language understanding APIs](/rest/api/language/2023-04-01/conversational-analysis-authoring)
- * [Language authoring text analysis APIs](/rest/api/language/2023-04-01/text-analysis-authoring)
- * [Question answering projects](/rest/api/cognitiveservices/questionanswering/question-answering-projects)
- Only `TriggerExportProjectJob` POST operation under:
- * [Language authoring conversational language understanding export API](/rest/api/language/2023-04-01/text-analysis-authoring/export)
- * [Language authoring text analysis export API](/rest/api/language/2023-04-01/text-analysis-authoring/export)
- Only Export POST operation under:
- * [Question Answering Projects](/rest/api/cognitiveservices/questionanswering/question-answering-projects/export)
- All the Batch Testing Web APIs
-    * [Language Runtime CLU APIs](/rest/api/language/2023-04-01/conversation-analysis-runtime)
-    * [Language Runtime Text Analysis APIs](/rest/api/language/2023-04-01/text-analysis-runtime/analyze-text)
- :::column-end:::
-
-### Cognitive Services Language writer
-
-A user who is responsible for building and modifying an application, as a collaborator in a larger team. The collaborator can modify the Language apps in any way, train those changes, and validate/test those changes in the portal. However, this user shouldn't have access to deploy the application to the runtime, as they may accidentally push their changes to production. They also shouldn't be able to delete the application or alter its prediction resources and endpoint settings (assigning or unassigning prediction resources, making the endpoint public). This restricts this role from altering an application currently being used in production. They may also create new applications under this resource, but with the restrictions mentioned.
-
- :::column span="":::
- **Capabilities**
- :::column-end:::
- :::column span="":::
- **API Access**
- :::column-end:::
- :::column span="":::
- * All functionalities under Cognitive Services Language Reader.
- * Ability to:
- * Train
- * Write
- :::column-end:::
- :::column span="":::
- * All APIs under Language reader
- * All POST, PUT and PATCH APIs under:
- * [Language conversational language understanding APIs](/rest/api/language/2023-04-01/conversational-analysis-authoring)
- * [Language text analysis APIs](/rest/api/language/2023-04-01/text-analysis-authoring)
- * [question answering projects](/rest/api/cognitiveservices/questionanswering/question-answering-projects)
- Except for
- * Delete deployment
- * Delete trained model
- * Delete Project
- * Deploy Model
- :::column-end:::
-
-### Cognitive Services Language owner
-
-> [!NOTE]
-> If you are assigned as an *Owner* and *Language Owner*, you will be shown as *Cognitive Services Language owner* in the Language Studio portal.
--
-These users are the gatekeepers for the Language applications in production environments. They should have full access to any of the underlying functions, and thus can view everything in the application and have direct access to make any changes in both the authoring and runtime environments.
-
- :::column span="":::
- **Functionality**
- :::column-end:::
- :::column span="":::
- **API Access**
- :::column-end:::
- :::column span="":::
- * All functionalities under Cognitive Services Language Writer
- * Deploy
- * Delete
- :::column-end:::
- :::column span="":::
- All APIs available under:
- * [Language authoring conversational language understanding APIs](/rest/api/language/2023-04-01/conversational-analysis-authoring)
- * [Language authoring text analysis APIs](/rest/api/language/2023-04-01/text-analysis-authoring)
- * [question answering projects](/rest/api/cognitiveservices/questionanswering/question-answering-projects)
-
- :::column-end:::
cognitive-services Use Asynchronously https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/concepts/use-asynchronously.md
- Title: "How to: Use Language service features asynchronously"-
-description: Learn how to send Language service API requests asynchronously.
------ Previously updated : 10/31/2022---
-# How to use Language service features asynchronously
-
-The Language service enables you to send API requests asynchronously, using either the REST API or client library. You can also include multiple different Language service features in your request, to be performed on your data at the same time.
-
-Currently, the following features are available to be used asynchronously:
-* Entity linking
-* Document summarization
-* Conversation summarization
-* Key phrase extraction
-* Language detection
-* Named Entity Recognition (NER)
-* Customer content detection
-* Sentiment analysis and opinion mining
-* Text Analytics for health
-* Personally Identifiable Information (PII)
-
-When you send asynchronous requests, you will incur charges based on the number of text records you include in your request, for each feature used. For example, if you send a text record for sentiment analysis and NER, it's counted as sending two text records, and you will be charged for both according to your [pricing tier](https://azure.microsoft.com/pricing/details/cognitive-services/language-service/).
-
-## Submit an asynchronous job using the REST API
-
-To submit an asynchronous job, review the [reference documentation](/rest/api/language/2023-04-01/text-analysis-runtime/submit-job) for the JSON body you'll send in your request.
-1. Add your documents to the `analysisInput` object.
-1. In the `tasks` object, include the operations you want performed on your data. For example, if you wanted to perform sentiment analysis, you would include the `SentimentAnalysisLROTask` object.
-1. You can optionally:
- 1. Choose a specific [version of the model](model-lifecycle.md) used on your data.
- 1. Include additional Language service features in the `tasks` object, to be performed on your data at the same time.
-
-Once you've created the JSON body for your request, add your key to the `Ocp-Apim-Subscription-Key` header. Then send your API request to the job creation endpoint. For example:
-
-```http
-POST https://your-endpoint.cognitiveservices.azure.com/language/analyze-text/jobs?api-version=2022-05-01
-```
-
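-The following is a minimal sketch of the same call in Python with the `requests` package; the endpoint, key, and document text are placeholders you supply, and the body contains a single `SentimentAnalysisLROTask` as described above.
-
-```python
-import requests
-
-endpoint = "https://your-endpoint.cognitiveservices.azure.com"  # placeholder
-key = "<your-language-resource-key>"                            # placeholder
-
-body = {
-    "displayName": "Sample async job",
-    "analysisInput": {
-        "documents": [
-            {"id": "1", "language": "en", "text": "The hotel was lovely and the staff were helpful."}
-        ]
-    },
-    "tasks": [
-        {"kind": "SentimentAnalysisLROTask"}
-    ]
-}
-
-response = requests.post(
-    f"{endpoint}/language/analyze-text/jobs",
-    params={"api-version": "2022-05-01"},
-    headers={"Ocp-Apim-Subscription-Key": key},
-    json=body,
-)
-response.raise_for_status()  # a successful submission returns HTTP 202
-operation_location = response.headers["operation-location"]
-```
-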
-A successful call returns a 202 response code. The `operation-location` response header contains the URL you'll use to retrieve the API results. The value looks similar to the following URL:
-
-```http
-GET {Endpoint}/language/analyze-text/jobs/12345678-1234-1234-1234-12345678?api-version=2022-05-01
-```
-
-To [get the status and retrieve the results](/rest/api/language/2023-04-01/text-analysis-runtime/job-status) of the request, send a GET request to the URL you received in the `operation-location` header from the previous API response. Remember to include your key in the `Ocp-Apim-Subscription-Key` header. The response includes the results of your API call.
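-
-Continuing the Python sketch above (reusing `key` and `operation_location`), you can poll that URL until the job reaches a terminal status. The status values checked here are illustrative of the jobs endpoint rather than an exhaustive list.
-
-```python
-import time
-
-import requests
-
-while True:
-    poll = requests.get(operation_location, headers={"Ocp-Apim-Subscription-Key": key})
-    poll.raise_for_status()
-    job = poll.json()
-    if job.get("status") in ("succeeded", "failed", "cancelled"):
-        break
-    time.sleep(5)  # wait a few seconds between polls
-
-# The per-task results are returned in the job payload.
-print(job["tasks"])
-```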
-
-## Send asynchronous API requests using the client library
-
-First, make sure you have the client library installed for your language of choice. For steps on installing the client library, see the quickstart article for the feature you want to use.
-
-Afterwards, use the client object to send asynchronous calls to the API. The method calls to use will vary depending on your language. Use the available samples and reference documentation to help you get started.
-
-* [C#](/dotnet/api/overview/azure/ai.textanalytics-readme?preserve-view=true&view=azure-dotnet#async-examples)
-* [Java](/java/api/overview/azure/ai-textanalytics-readme?preserve-view=true&view=azure-java-preview#analyze-multiple-actions)
-* [JavaScript](/javascript/api/overview/azure/ai-text-analytics-readme?preserve-view=true&view=azure-node-preview#analyze-actions)
-* [Python](/python/api/overview/azure/ai-textanalytics-readme?preserve-view=true&view=azure-python-preview#multiple-analysis)
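-
-For example, the following is a minimal sketch using the Python client library (`azure-ai-textanalytics`), which submits one asynchronous job that runs sentiment analysis and entity recognition on the same documents; the endpoint and key are placeholders.
-
-```python
-from azure.core.credentials import AzureKeyCredential
-from azure.ai.textanalytics import (
-    AnalyzeSentimentAction,
-    RecognizeEntitiesAction,
-    TextAnalyticsClient,
-)
-
-client = TextAnalyticsClient(
-    endpoint="https://your-endpoint.cognitiveservices.azure.com",   # placeholder
-    credential=AzureKeyCredential("<your-language-resource-key>"),  # placeholder
-)
-
-documents = ["The restaurant was great. The bill came to 50 dollars."]
-
-# Submit one asynchronous job that runs both actions on the documents.
-poller = client.begin_analyze_actions(
-    documents,
-    actions=[AnalyzeSentimentAction(), RecognizeEntitiesAction()],
-)
-
-# poller.result() yields one list of action results per input document.
-for action_results in poller.result():
-    for result in action_results:
-        if result.is_error:
-            print("Error:", result.error.message)
-        elif hasattr(result, "sentiment"):
-            print("Sentiment:", result.sentiment)
-        elif hasattr(result, "entities"):
-            print("Entities:", [entity.text for entity in result.entities])
-```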
-
-## Result availability
-
-When you use this feature asynchronously, the API results are available for 24 hours from the time the request was ingested, as indicated in the response. After this time period, the results are purged and are no longer available for retrieval.
-
-## Automatic language detection
-
-Starting in version `2022-07-01-preview` of the REST API, you can request automatic [language detection](../language-detection/overview.md) on your documents. By setting the `language` parameter to `auto`, the detected language code of the text will be returned as a language value in the response. This language detection will not incur extra charges to your Language resource.
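-
-For example, in the REST request body sketched earlier, you can set the per-document `language` field to `auto` (the text here is only a placeholder):
-
-```python
-documents = [
-    {"id": "1", "language": "auto", "text": "Bonjour, je voudrais annuler mon abonnement."}
-]
-```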
-
-## Data limits
-
-> [!NOTE]
-> * If you need to analyze larger documents than the limit allows, you can break the text into smaller chunks of text before sending them to the API.
-> * A document is a single string of text characters.
-
-You can send up to 125,000 characters across all documents contained in the asynchronous request, as measured by [StringInfo.LengthInTextElements](/dotnet/api/system.globalization.stringinfo.lengthintextelements). This character limit is higher than the limit for synchronous requests, to enable higher throughput.
-
-If any document in the request exceeds the character limit, the API rejects the entire request and returns a `400 bad request` error.
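-
-If your input can exceed that limit, a simple pre-processing step can group documents into separate requests. The following sketch uses Python's `len()` as a rough approximation of text elements and doesn't split individual documents that are themselves over the limit.
-
-```python
-def batch_documents(texts, max_chars=125_000):
-    """Group texts into batches whose combined character count stays under the async limit."""
-    batches, current, count = [], [], 0
-    for text in texts:
-        if current and count + len(text) > max_chars:
-            batches.append(current)   # start a new batch once the limit would be exceeded
-            current, count = [], 0
-        current.append(text)
-        count += len(text)
-    if current:
-        batches.append(current)
-    return batches
-
-# Each returned batch can be sent as its own asynchronous request.
-```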
-
-## See also
-
-* [Azure Cognitive Service for Language overview](../overview.md)
-* [Multilingual and emoji support](../concepts/multilingual-emoji-support.md)
-* [What's new](../whats-new.md)
cognitive-services Best Practices https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/conversational-language-understanding/concepts/best-practices.md
- Title: Conversational language understanding best practices-
-description: Apply best practices when using conversational language understanding
------ Previously updated : 10/11/2022----
-# Best practices for conversational language understanding
-
-Use the following guidelines to create the best possible projects in conversational language understanding.
-
-## Choose a consistent schema
-
-Schema is the definition of your intents and entities. There are different approaches you could take when deciding what to create as an intent versus an entity. Ask yourself these questions:
-
-- What actions or queries am I trying to capture from my user?
-- What pieces of information are relevant in each action?
-
-You can typically think of actions and queries as _intents_, and the information required to fulfill those queries as _entities_.
-
-For example, assume you want your customers to cancel subscriptions for various products that you offer through your chatbot. You can create a _Cancel_ intent with various examples like _"Cancel the Contoso service"_, or _"stop charging me for the Fabrikam subscription"_. The user's intent here is to _cancel_; the _Contoso service_ or _Fabrikam subscription_ are the subscriptions they would like to cancel. Therefore, you can create an entity for _subscriptions_. You can then model your entire project to capture actions as intents and use entities to fill in those actions. This allows you to cancel anything you define as an entity, such as other products. You can then have intents for signing up, renewing, upgrading, etc. that all make use of the _subscriptions_ and other entities.
-
-The above schema design makes it easy for you to extend existing capabilities (canceling, upgrading, signing up) to new targets by creating a new entity.
-
-Another approach is to model the _information_ as intents and _actions_ as entities. Let's take the same example, allowing your customers to cancel subscriptions through your chatbot. You can create an intent for each subscription available, such as _Contoso_ with utterances like _"cancel Contoso"_, _"stop charging me for contoso services"_, _"Cancel the Contoso subscription"_. You would then create an entity to capture the action, _cancel_. You can define different entities for each action or consolidate actions as one entity with a list component to differentiate between actions with different keys.
-
-This schema design makes it easy for you to extend new actions to existing targets by adding new action entities or entity components.
-
-Avoid trying to funnel all the concepts into intents only. For example, don't try to create a _Cancel Contoso_ intent that only has the purpose of that one specific action. Intents and entities should work together to capture all the required information from the customer.
-
-You also want to avoid mixing different schema designs. Do not build half of your application with actions as intents and the other half with information as intents. Ensure your schema is consistent to get the best possible results.
--------
cognitive-services Data Formats https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/conversational-language-understanding/concepts/data-formats.md
- Title: conversational language understanding data formats-
-description: Learn about the data formats accepted by conversational language understanding.
------ Previously updated : 06/20/2023----
-# Data formats accepted by conversational language understanding
--
-If you're uploading your data into CLU, it has to follow a specific format. Use this article to learn more about the accepted data formats.
-
-## Import project file format
-
-If you're [importing a project](../how-to/create-project.md#import-project) into CLU, the uploaded file has to be in the following format.
-
-```json
-{
- "projectFileVersion": "2022-10-01-preview",
- "stringIndexType": "Utf16CodeUnit",
- "metadata": {
- "projectKind": "Conversation",
- "projectName": "{PROJECT-NAME}",
- "multilingual": true,
- "description": "DESCRIPTION",
- "language": "{LANGUAGE-CODE}",
- "settings": {
- "confidenceThreshold": 0
- }
- },
- "assets": {
- "projectKind": "Conversation",
- "intents": [
- {
- "category": "intent1"
- }
- ],
- "entities": [
- {
- "category": "entity1",
- "compositionSetting": "{COMPOSITION-SETTING}",
- "list": {
- "sublists": [
- {
- "listKey": "list1",
- "synonyms": [
- {
- "language": "{LANGUAGE-CODE}",
- "values": [
- "{VALUES-FOR-LIST}"
- ]
- }
- ]
- }
- ]
- },
- "prebuilts": [
- {
- "category": "{PREBUILT-COMPONENTS}"
- }
- ],
- "regex": {
- "expressions": [
- {
- "regexKey": "regex1",
- "language": "{LANGUAGE-CODE}",
- "regexPattern": "{REGEX-PATTERN}"
- }
- ]
- },
- "requiredComponents": [
- "{REQUIRED-COMPONENTS}"
- ]
- }
- ],
- "utterances": [
- {
- "text": "utterance1",
- "intent": "intent1",
- "language": "{LANGUAGE-CODE}",
- "dataset": "{DATASET}",
- "entities": [
- {
- "category": "ENTITY1",
- "offset": 6,
- "length": 4
- }
- ]
- }
- ]
- }
-}
-
-```
-
-|Key |Placeholder |Value | Example |
-|||-|--|
-| `api-version` | `{API-VERSION}` | The [version](../../concepts/model-lifecycle.md#api-versions) of the API you are calling. | `2023-04-01` |
-|`confidenceThreshold`|`{CONFIDENCE-THRESHOLD}`|This is the threshold score below which the intent will be predicted as [none intent](none-intent.md). Values are from `0` to `1`|`0.7`|
-| `projectName` | `{PROJECT-NAME}` | The name of your project. This value is case-sensitive. | `EmailApp` |
-| `multilingual` | `true`| A boolean value that enables you to have utterances in multiple languages in your dataset. When your model is deployed, you can query it in any supported language (not necessarily included in your training documents). See [Language support](../language-support.md#multi-lingual-option) for more information about supported language codes. | `true`|
-|`sublists`|`[]`|Array containing sublists. Each sublist is a key and its associated values.|`[]`|
-|`compositionSetting`|`{COMPOSITION-SETTING}`|Rule that defines how to manage multiple components in your entity. Options are `combineComponents` or `separateComponents`. |`combineComponents`|
-|`synonyms`|`[]`|Array containing all the synonyms|synonym|
-| `language` | `{LANGUAGE-CODE}` | A string specifying the language code for the utterances, synonyms, and regular expressions used in your project. If your project is a multilingual project, choose the [language code](../language-support.md) of the majority of the utterances. |`en-us`|
-| `intents` | `[]` | Array containing all the intents you have in the project. These are the intents that will be classified from your utterances.| `[]` |
-| `entities` | `[]` | Array containing all the entities in your project. These are the entities that will be extracted from your utterances. Every entity can have additional optional components defined with them: list, prebuilt, or regex. | `[]` |
-| `dataset` | `{DATASET}` | The dataset this utterance goes to when the data is split before training. Learn more about data splitting [here](../how-to/train-model.md#data-splitting). Possible values for this field are `Train` and `Test`. |`Train`|
-| `category` | ` ` | The type of entity associated with the span of text specified. | `Entity1`|
-| `offset` | ` ` | The inclusive character position of the start of the entity. |`5`|
-| `length` | ` ` | The character length of the entity. |`5`|
-| `listKey`| ` ` | A normalized value for the list of synonyms to map back to in prediction. | `Microsoft` |
-| `values`| `{VALUES-FOR-LIST}` | A list of comma separated strings that will be matched exactly for extraction and map to the list key. | `"msft", "microsoft", "MS"` |
-| `regexKey`| `{REGEX-PATTERN}` | A normalized value for the regular expression to map back to in prediction. | `ProductPattern1` |
-| `regexPattern`| `{REGEX-PATTERN}` | A regular expression. | `^pre` |
-| `prebuilts`| `{PREBUILT-COMPONENTS}` | The prebuilt components that can extract common types. You can find the list of prebuilts you can add [here](../prebuilt-component-reference.md). | `Quantity.Number` |
-| `requiredComponents` | `{REQUIRED-COMPONENTS}` | A setting that specifies a requirement that a specific component be present to return the entity. You can learn more [here](./entity-components.md#required-components). The possible values are `learned`, `regex`, `list`, or `prebuilts` |`"learned", "prebuilt"`|
----
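-As an illustration only (not part of the CLU tooling), the following Python sketch loads an exported project file and prints a quick summary of the assets described in the table above; the file name is hypothetical.
-
-```python
-import json
-
-with open("project.json", encoding="utf-8") as f:  # hypothetical exported project file
-    project = json.load(f)
-
-metadata = project["metadata"]
-assets = project["assets"]
-
-print("Project:", metadata["projectName"])
-print("Language:", metadata["language"], "| multilingual:", metadata.get("multilingual", False))
-print("Intents:", [intent["category"] for intent in assets["intents"]])
-print("Entities:", [entity["category"] for entity in assets["entities"]])
-print("Labeled utterances:", len(assets["utterances"]))
-```
-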
-## Utterance file format
-
-CLU offers the option to upload your utterances directly to the project rather than typing them in one by one. You can find this option on the [data labeling](../how-to/tag-utterances.md) page for your project.
-
-```json
-[
- {
- "text": "{Utterance-Text}",
- "language": "{LANGUAGE-CODE}",
- "dataset": "{DATASET}",
- "intent": "{intent}",
- "entities": [
- {
- "category": "{entity}",
- "offset": 19,
- "length": 10
- }
- ]
- },
- {
- "text": "{Utterance-Text}",
- "language": "{LANGUAGE-CODE}",
- "dataset": "{DATASET}",
- "intent": "{intent}",
- "entities": [
- {
- "category": "{entity}",
- "offset": 20,
- "length": 10
- },
- {
- "category": "{entity}",
- "offset": 31,
- "length": 5
- }
- ]
- }
-]
-
-```
-
-|Key |Placeholder |Value | Example |
-|||-|--|
-|`text`|`{Utterance-Text}`|Your utterance text|Testing|
-| `language` | `{LANGUAGE-CODE}` | A string specifying the language code for the utterances used in your project. If your project is a multilingual project, choose the language code of the majority of the utterances. See [Language support](../language-support.md) for more information about supported language codes. |`en-us`|
-| `dataset` | `{DATASET}` | The dataset this utterance goes to when the data is split before training. Learn more about data splitting [here](../how-to/train-model.md#data-splitting). Possible values for this field are `Train` and `Test`. |`Train`|
-|`intent`|`{intent}`|The assigned intent| intent1|
-|`entity`|`{entity}`|Entity to be extracted| entity1|
-| `category` | ` ` | The type of entity associated with the span of text specified. | `Entity1`|
-| `offset` | ` ` | The inclusive character position of the start of the text. |`0`|
-| `length` | ` ` | The character length of the entity, in UTF-16 code units. Training only considers the labeled text in this span. |`10`|
--
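-If you generate this file programmatically, the main thing to get right is the `offset`/`length` pair for each labeled entity. The following is a minimal, hypothetical Python sketch; the intent and entity names are examples only.
-
-```python
-import json
-
-def label_utterance(text, intent, entity_category, entity_text, language="en-us", dataset="Train"):
-    """Build one utterance entry, computing offset/length from the entity's position in the text."""
-    # The service counts offsets in UTF-16 code units; Python string indices match for most text,
-    # but differ for characters outside the Basic Multilingual Plane (for example, many emoji).
-    offset = text.index(entity_text)  # raises ValueError if the entity text isn't present
-    return {
-        "text": text,
-        "language": language,
-        "dataset": dataset,
-        "intent": intent,
-        "entities": [
-            {"category": entity_category, "offset": offset, "length": len(entity_text)}
-        ],
-    }
-
-utterances = [
-    label_utterance("Cancel the Contoso service", "Cancel", "subscription", "Contoso service")
-]
-print(json.dumps(utterances, indent=2))
-```
-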
-## Next steps
-
-* You can import your labeled data into your project directly. See [import project](../how-to/create-project.md#import-project) for more information.
-* See the [how-to article](../how-to/tag-utterances.md) for more information about labeling your data. When you're done labeling your data, you can [train your model](../how-to/train-model.md).
cognitive-services Entity Components https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/conversational-language-understanding/concepts/entity-components.md
- Title: Entity components in Conversational Language Understanding-
-description: Learn how Conversational Language Understanding extracts entities from text
------ Previously updated : 10/11/2022----
-# Entity components
-
-In Conversational Language Understanding, entities are relevant pieces of information that are extracted from your utterances. An entity can be extracted by different methods. They can be learned through context, matched from a list, or detected by a prebuilt recognized entity. Every entity in your project is composed of one or more of these methods, which are defined as your entity's components. When an entity is defined by more than one component, their predictions can overlap. You can determine the behavior of an entity prediction when its components overlap by using a fixed set of options in the **Entity options**.
-
-## Component types
-
-An entity component determines a way you can extract the entity. An entity can contain one component, which would determine the only method that would be used to extract the entity, or multiple components to expand the ways in which the entity is defined and extracted.
-
-### Learned component
-
-The learned component uses the entity tags you label your utterances with to train a machine learned model. The model learns to predict where the entity is, based on the context within the utterance. Your labels provide examples of where the entity is expected to be present in an utterance, based on the meaning of the surrounding words and the words that were labeled. This component is only defined if you add labels by tagging utterances for the entity. If you don't tag any utterances with the entity, it won't have a learned component.
--
-### List component
-
-The list component represents a fixed, closed set of related words along with their synonyms. The component performs an exact text match against the list of values you provide as synonyms. Each synonym belongs to a "list key", which can be used as the normalized, standard value for the synonym that will return in the output if the list component is matched. List keys are **not** used for matching.
-
-In multilingual projects, you can specify a different set of synonyms for each language. While using the prediction API, you can specify the language in the input request, which will only match the synonyms associated to that language.
---
-### Prebuilt component
-
-The prebuilt component allows you to select from a library of common types such as numbers, datetimes, and names. When added, a prebuilt component is automatically detected. You can have up to five prebuilt components per entity. See [the list of supported prebuilt components](../prebuilt-component-reference.md) for more information.
---
-### Regex component
-
-The regex component matches regular expressions to capture consistent patterns. When added, any text that matches the regular expression will be extracted. You can have multiple regular expressions within the same entity, each with a different key identifier. A matched expression will return the key as part of the prediction response.
-
-In multilingual projects, you can specify a different expression for each language. While using the prediction API, you can specify the language in the input request, which will only match the regular expression associated to that language.
---
-## Entity options
-
-When multiple components are defined for an entity, their predictions may overlap. When an overlap occurs, each entity's final prediction is determined by one of the following options.
-
-### Combine components
-
-Combine components as one entity when they overlap by taking the union of all the components.
-
-Use this to combine all components when they overlap. When components are combined, you get all the extra information that's tied to a list or prebuilt component when they are present.
-
-#### Example
-
-Suppose you have an entity called Software that has a list component, which contains "Proseware OS" as an entry. In your utterance data, you have "I want to buy Proseware OS 9" with "Proseware OS 9" tagged as Software:
--
-By using combine components, the entity will return with the full context as "Proseware OS 9" along with the key from the list component:
--
-Suppose you had the same utterance but only "OS 9" was predicted by the learned component:
--
-With combine components, the entity will still return as "Proseware OS 9" with the key from the list component:
---
-### Do not combine components
-
-Each overlapping component will return as a separate instance of the entity. Apply your own logic after prediction with this option.
-
-#### Example
-
-Suppose you have an entity called Software that has a list component, which contains "Proseware Desktop" as an entry. In your utterance data, you have "I want to buy Proseware Desktop Pro" with "Proseware Desktop Pro" tagged as Software:
--
-When you do not combine components, the entity will return twice:
--
-### Required components
-
-An entity can sometimes be defined by multiple components but requires one or more of them to be present. Every component can be set as **required**, which means the entity will **not** be returned if that component wasn't present. For example, if you have an entity with a list component and a required learned component, it is guaranteed that any returned entity includes a learned component; if it doesn't, the entity will not be returned.
-
-Required components are most frequently used with learned components, as they can restrict the other component types to a specific context, which is commonly associated with **roles**. You can also require all components to make sure that every component is present for an entity.
-
-In the Language Studio, every component in an entity has a toggle next to it that allows you to set it as required.
-
-#### Example
-
-Suppose you have an entity called **Ticket Quantity** that attempts to extract the number of tickets you want to reserve for flights, for utterances such as _"Book **two** tickets tomorrow to Cairo"_.
-
-Typically, you would add a prebuilt component for _Quantity.Number_ that already extracts all numbers. However, if your entity was defined with only the prebuilt component, it would also extract other numbers as part of the **Ticket Quantity** entity, such as _"Book **two** tickets tomorrow to Cairo at **3** PM"_.
-
-To resolve this, you would label a learned component in your training data for all the numbers that are meant to be **Ticket Quantity**. The entity now has 2 components, the prebuilt that knows all numbers, and the learned one that predicts where the Ticket Quantity is in a sentence. If you require the learned component, you make sure that Ticket Quantity only returns when the learned component predicts it in the right context. If you also require the prebuilt component, you can then guarantee that the returned Ticket Quantity entity is both a number and in the correct position.
--
-## How to use components and options
-
-Components give you the flexibility to define your entity in more than one way. When you combine components, you make sure that each component is represented and you reduce the number of entities returned in your predictions.
-
-A common practice is to extend a prebuilt component with a list of values that the prebuilt might not support. For example, if you have an **Organization** entity, which has a _General.Organization_ prebuilt component added to it, the entity may not predict all the organizations specific to your domain. You can use a list component to extend the values of the Organization entity, thereby extending the prebuilt component with your own organizations.
-
-Other times you may be interested in extracting an entity through context, such as a **Product** in a retail project. You would label the learned component of the product entity to learn _where_ a product is, based on its position within the sentence. You may also have a list of products that you already know beforehand and that you'd like to always extract. Combining both components in one entity gives you both options for the entity.
-
-When you do not combine components, you allow every component to act as an independent entity extractor. One way of using this option is to separate the entities extracted from a list from the ones extracted through the learned or prebuilt components, so you can handle and treat them differently.
-
-> [!NOTE]
-> Previously during the public preview of the service, there were 4 available options: **Longest overlap**, **Exact overlap**, **Union overlap**, and **Return all separately**. **Longest overlap** and **exact overlap** are deprecated and will only be supported for projects that previously had those options selected. **Union overlap** has been renamed to **Combine components**, while **Return all separately** has been renamed to **Do not combine components**.
--
-## Next steps
-
-[Supported prebuilt components](../prebuilt-component-reference.md)
cognitive-services Evaluation Metrics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/conversational-language-understanding/concepts/evaluation-metrics.md
- Title: Conversational Language Understanding evaluation metrics-
-description: Learn about evaluation metrics in Conversational Language Understanding
------ Previously updated : 05/13/2022----
-# Evaluation metrics for conversational language understanding models
-
-Your [dataset is split](../how-to/train-model.md#data-splitting) into two parts: a set for training, and a set for testing. The training set is used to train the model, while the testing set is used as a test for model after training to calculate the model performance and evaluation. The testing set isn't introduced to the model through the training process, to make sure that the model is tested on new data.
-
-Model evaluation is triggered automatically after training is completed successfully. The evaluation process starts by using the trained model to predict user-defined intents and entities for utterances in the test set, and compares them with the provided tags (which establishes a baseline of truth). The results are returned so you can review the model's performance. For evaluation, conversational language understanding uses the following metrics:
-
-* **Precision**: Measures how precise/accurate your model is. It's the ratio between the correctly identified positives (true positives) and all identified positives. The precision metric reveals how many of the predicted classes are correctly labeled.
-
- `Precision = #True_Positive / (#True_Positive + #False_Positive)`
-
-* **Recall**: Measures the model's ability to predict actual positive classes. It's the ratio between the predicted true positives and what was actually tagged. The recall metric reveals how many of the actual classes are correctly predicted.
-
- `Recall = #True_Positive / (#True_Positive + #False_Negatives)`
-
-* **F1 score**: The F1 score is a function of Precision and Recall. It's needed when you seek a balance between Precision and Recall.
-
- `F1 Score = 2 * Precision * Recall / (Precision + Recall)`
--
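-For reference, these formulas translate directly into code. A minimal Python sketch (not tied to any particular API):
-
-```python
-def precision(tp, fp):
-    return tp / (tp + fp) if (tp + fp) else 0.0
-
-def recall(tp, fn):
-    return tp / (tp + fn) if (tp + fn) else 0.0
-
-def f1_score(tp, fp, fn):
-    p, r = precision(tp, fp), recall(tp, fn)
-    return 2 * p * r / (p + r) if (p + r) else 0.0
-
-# Model-level counts from the example below: TP = 6, FP = 3, FN = 4
-print(round(precision(6, 3), 2), round(recall(6, 4), 2), round(f1_score(6, 3, 4), 2))  # 0.67 0.6 0.63
-```
-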
-Precision, recall, and F1 score are calculated for:
-* Each entity separately (entity-level evaluation)
-* Each intent separately (intent-level evaluation)
-* For the model collectively (model-level evaluation).
-
-The definitions of precision, recall, and F1 score are the same for entity-level, intent-level, and model-level evaluations. However, the counts for *True Positives*, *False Positives*, and *False Negatives* can differ. For example, consider the following text.
-
-### Example
-
-* Make a response with thank you very much.
-* reply with saying yes.
-* Check my email please.
-* email to cynthia that dinner last week was splendid.
-* send email to mike
-
-These are the intents used: *Reply*, *sendEmail*, *readEmail*. These are the entities: *contactName*, *message*.
-
-The model could make the following predictions:
-
-| Utterance | Predicted intent | Actual intent |Predicted entity| Actual entity|
-|--|--|--|--|--|
-|Make a response with thank you very much|Reply|Reply|`thank you very much` as `message` |`thank you very much` as `message` |
-|reply with saying yes| sendEmail|Reply|--|`yes` as `message`|
-|Check my email please|readEmail|readEmail|--|--|
-|email to cynthia that dinner last week was splendid|Reply|sendEmail|`dinner last week was splendid` as `message`| `cynthia` as `contactName`, `dinner last week was splendid` as `message`|
-|send email to mike|sendEmail|sendEmail|`mike` as `message`|`mike` as `contactName`|
--
-### Intent level evaluation for *Reply* intent
-
-| Key | Count | Explanation |
-|--|--|--|
-| True Positive | 1 | Utterance 1 was correctly predicted as *Reply*. |
-| False Positive | 1 |Utterance 4 was mistakenly predicted as *Reply*. |
-| False Negative | 1 | Utterance 2 was mistakenly predicted as *sendEmail*. |
-
-**Precision** = `#True_Positive / (#True_Positive + #False_Positive) = 1 / (1 + 1) = 0.5`
-
-**Recall** = `#True_Positive / (#True_Positive + #False_Negatives) = 1 / (1 + 1) = 0.5`
-
-**F1 Score** = `2 * Precision * Recall / (Precision + Recall) = (2 * 0.5 * 0.5) / (0.5 + 0.5) = 0.5 `
--
-### Intent level evaluation for *sendEmail* intent
-
-| Key | Count | Explanation |
-|--|--|--|
-| True Positive | 1 | Utterance 5 was correctly predicted as *sendEmail* |
-| False Positive | 1 |Utterance 2 was mistakenly predicted as *sendEmail*. |
-| False Negative | 1 | Utterance 4 was mistakenly predicted as *Reply*. |
-
-**Precision** = `#True_Positive / (#True_Positive + #False_Positive) = 1 / (1 + 1) = 0.5`
-
-**Recall** = `#True_Positive / (#True_Positive + #False_Negatives) = 1 / (1 + 1) = 0.5`
-
-**F1 Score** = `2 * Precision * Recall / (Precision + Recall) = (2 * 0.5 * 0.5) / (0.5 + 0.5) = 0.5 `
-
-### Intent level evaluation for *readEmail* intent
-
-| Key | Count | Explanation |
-|--|--|--|
-| True Positive | 1 | Utterance 3 was correctly predicted as *readEmail*. |
-| False Positive | 0 |--|
-| False Negative | 0 |--|
-
-**Precision** = `#True_Positive / (#True_Positive + #False_Positive) = 1 / (1 + 0) = 1`
-
-**Recall** = `#True_Positive / (#True_Positive + #False_Negatives) = 1 / (1 + 0) = 1`
-
-**F1 Score** = `2 * Precision * Recall / (Precision + Recall) = (2 * 1 * 1) / (1 + 1) = 1`
-
-### Entity level evaluation for *contactName* entity
-
-| Key | Count | Explanation |
-|--|--|--|
-| True Positive | 1 | `cynthia` was correctly predicted as `contactName` in utterance 4|
-| False Positive | 0 |--|
-| False Negative | 1 | `mike` was mistakenly predicted as `message` in utterance 5 |
-
-**Precision** = `#True_Positive / (#True_Positive + #False_Positive) = 1 / (1 + 0) = 1`
-
-**Recall** = `#True_Positive / (#True_Positive + #False_Negatives) = 1 / (1 + 1) = 0.5`
-
-**F1 Score** = `2 * Precision * Recall / (Precision + Recall) = (2 * 1 * 0.5) / (1 + 0.5) = 0.67`
-
-### Entity level evaluation for *message* entity
-
-| Key | Count | Explanation |
-|--|--|--|
-| True Positive | 2 |`thank you very much` was correctly predicted as `message` in utterance 1 and `dinner last week was splendid` was correctly predicted as `message` in utterance 4 |
-| False Positive | 1 |`mike` was mistakenly predicted as `message` in utterance 5 |
-| False Negative | 1 | `yes` was not predicted as `message` in utterance 2 |
-
-**Precision** = `#True_Positive / (#True_Positive + #False_Positive) = 2 / (2 + 1) = 0.67`
-
-**Recall** = `#True_Positive / (#True_Positive + #False_Negatives) = 2 / (2 + 1) = 0.67`
-
-**F1 Score** = `2 * Precision * Recall / (Precision + Recall) = (2 * 0.67 * 0.67) / (0.67 + 0.67) = 0.67`
--
-### Model-level evaluation for the collective model
-
-| Key | Count | Explanation |
-|--|--|--|
-| True Positive | 6 | Sum of TP for all intents and entities |
-| False Positive | 3| Sum of FP for all intents and entities |
-| False Negative | 4 | Sum of FN for all intents and entities |
-
-**Precision** = `#True_Positive / (#True_Positive + #False_Positive) = 6 / (6 + 3) = 0.67`
-
-**Recall** = `#True_Positive / (#True_Positive + #False_Negatives) = 6 / (6 + 4) = 0.60`
-
-**F1 Score** = `2 * Precision * Recall / (Precision + Recall) = (2 * 0.67 * 0.60) / (0.67 + 0.60) = 0.63`
-
-## Confusion matrix
-
-A Confusion matrix is an N x N matrix used for model performance evaluation, where N is the number of entities or intents.
-The matrix compares the expected labels with the ones predicted by the model.
-This gives a holistic view of how well the model is performing and what kinds of errors it is making.
-
-You can use the confusion matrix to identify intents or entities that are too close to each other and often get mistaken for each other (ambiguity). In this case, consider merging these intents or entities. If that isn't possible, consider adding more tagged examples of both to help the model differentiate between them.
-
-The highlighted diagonal of the confusion matrix shows the correctly predicted entities, where the predicted tag is the same as the actual tag.
--
-You can calculate the intent-level or entity-level and model-level evaluation metrics from the confusion matrix:
-
-* The values in the diagonal are the *True Positive* values of each intent or entity.
-* The sum of the values in the intent or entities rows (excluding the diagonal) is the *false positive* of the model.
-* The sum of the values in the intent or entities columns (excluding the diagonal) is the *false Negative* of the model.
-
-Similarly,
-
-* The *true positive* of the model is the sum of *true Positives* for all intents or entities.
-* The *false positive* of the model is the sum of *false positives* for all intents or entities.
-* The *false Negative* of the model is the sum of *false negatives* for all intents or entities.
--
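-A small sketch of those calculations, assuming the matrix is laid out with rows as predicted labels and columns as actual labels, consistent with the bullets above:
-
-```python
-import numpy as np
-
-def per_class_counts(confusion_matrix):
-    cm = np.asarray(confusion_matrix)
-    tp = np.diag(cm)          # diagonal values
-    fp = cm.sum(axis=1) - tp  # row sums, excluding the diagonal
-    fn = cm.sum(axis=0) - tp  # column sums, excluding the diagonal
-    return tp, fp, fn
-
-# Example with two intents (rows = predicted, columns = actual).
-tp, fp, fn = per_class_counts([[10, 2],
-                               [1, 7]])
-print(tp.sum(), fp.sum(), fn.sum())  # model-level TP, FP, FN -> 17 3 3
-```
-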
-## Guidance
-
-After you train your model, you'll see some guidance and recommendations on how to improve it. It's recommended to have a model that covers every point in the guidance section.
-
-* Training set has enough data: When an intent or entity has fewer than 15 labeled instances in the training data, it can lead to lower accuracy due to the model not being adequately trained on that intent. In this case, consider adding more labeled data in the training set. You should only consider adding more labeled data to your entity if your entity has a learned component. If your entity is defined only by list, prebuilt, and regex components, then this recommendation is not applicable.
-
-* All intents or entities are present in test set: When the testing data lacks labeled instances for an intent or entity, the model evaluation is less comprehensive due to untested scenarios. Consider having test data for every intent and entity in your model to ensure everything is being tested.
-
-* Unclear distinction between intents or entities: When data is similar for different intents or entities, it can lead to lower accuracy because they may be frequently misclassified as each other. Review the following intents and entities and consider merging them if they're similar. Otherwise, add more examples to better distinguish them from each other. You can check the *confusion matrix* tab for more guidance. If you are seeing two entities constantly being predicted for the same spans because they share the same list, prebuilt, or regex components, then make sure to add a **learned** component for each entity and make it **required**. Learn more about [entity components](./entity-components.md).
-
-## Next steps
-
-[Train a model in Language Studio](../how-to/train-model.md)
cognitive-services Multiple Languages https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/conversational-language-understanding/concepts/multiple-languages.md
- Title: Multilingual projects-
-description: Learn how to make use of multilingual projects in conversational language understanding
------ Previously updated : 01/10/2022----
-# Multilingual projects
-
-Conversational language understanding makes it easy for you to extend your project to several languages at once. When you enable multiple languages in a project, you can add language-specific utterances and synonyms to your project, and get multilingual predictions for your intents and entities.
-
-## Multilingual intent and learned entity components
-
-When you enable multiple languages in a project, you can train the project primarily in one language and immediately get predictions in others.
-
-For example, you can train your project entirely with English utterances, and query it in: French, German, Mandarin, Japanese, Korean, and others. Conversational language understanding makes it easy for you to scale your projects to multiple languages by using multilingual technology to train your models.
-
-Whenever you identify that a particular language is not performing as well as other languages, you can add utterances for that language in your project. In the [tag utterances](../how-to/tag-utterances.md) page in Language Studio, you can select the language of the utterance you're adding. When you introduce examples for that language to the model, it is introduced to more of the syntax of that language, and learns to predict it better.
-
-You aren't expected to add the same amount of utterances for every language. You should build the majority of your project in one language, and only add a few utterances in languages you observe aren't performing well. If you create a project that is primarily in English, and start testing it in French, German, and Spanish, you might observe that German doesn't perform as well as the other two languages. In that case, consider adding 5% of your original English examples in German, train a new model and test in German again. You should see better results for German queries. The more utterances you add, the more likely the results are going to get better.
-
-When you add data in another language, you shouldn't expect it to negatively affect other languages.
-
-## List and prebuilt components in multiple languages
-
-Projects with multiple languages enabled will allow you to specify synonyms **per language** for every list key. Depending on the language you query your project with, you will only get matches for the list component with synonyms of that language. When you query your project, you can specify the language in the request body:
-
-```json
-"query": "{query}"
-"language": "{language code}"
-```
-
-If you do not provide a language, it will fall back to the default language of your project. See the [language support](../language-support.md) article for a list of different language codes.
-
-Prebuilt components are similar: you should expect to get predictions only for prebuilt components that are available in specific languages. Again, the language of the request determines which components are used for prediction. See the [prebuilt components](../prebuilt-component-reference.md) reference article for the language support of each prebuilt component.
-
-## Next steps
-
-* [Tag utterances](../how-to/tag-utterances.md)
-* [Train a model](../how-to/train-model.md)
cognitive-services None Intent https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/conversational-language-understanding/concepts/none-intent.md
- Title: Conversational Language Understanding None Intent-
-description: Learn about the default None intent in conversational language understanding
------ Previously updated : 05/13/2022-----
-# None intent
-
-Every project in conversational language understanding includes a default **None** intent. The None intent is a required intent and can't be deleted or renamed. The intent is meant to categorize any utterances that do not belong to any of your other custom intents.
-
-An utterance can be predicted as the None intent if the top scoring intent's score is **lower** than the None score threshold. It can also be predicted if the utterance is similar to examples added to the None intent.
-
-## None score threshold
-
-You can go to the **project settings** of any project and set the **None score threshold**. The threshold is a decimal score from **0.0** to **1.0**.
-
-For any query or utterance, if the highest scoring intent's score is **lower** than the threshold score, the top intent is automatically replaced with the None intent. The scores of all the other intents remain unchanged.
-
-The score should be set according to your own observations of prediction scores, as they may vary by project. A higher threshold score forces the utterances to be more similar to the examples you have in your training data.
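-
-The threshold is applied by the service automatically; the following hypothetical Python sketch only illustrates the rule, assuming a response-style list of intents sorted by descending confidence score:
-
-```python
-def apply_none_threshold(intents, threshold):
-    """intents: list of {"category": ..., "confidenceScore": ...}, sorted by score descending."""
-    if intents and intents[0]["confidenceScore"] < threshold:
-        # The top intent is replaced with None; the other intents and their scores are unchanged.
-        intents[0] = {"category": "None", "confidenceScore": intents[0]["confidenceScore"]}
-    return intents
-
-print(apply_none_threshold(
-    [{"category": "BookFlight", "confidenceScore": 0.42},
-     {"category": "Cancel", "confidenceScore": 0.30}],
-    threshold=0.7,
-))
-```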
-
-When you export a project's JSON file, the None score threshold is defined in the _**"settings"**_ parameter of the JSON as the _**"confidenceThreshold"**_, which accepts a decimal value between 0.0 and 1.0.
-
-> [!NOTE]
-> During model evaluation of your test set, the None score threshold is not applied.
-
-## Adding examples to the None intent
-
-The None intent is also treated like any other intent in your project. If there are utterances that you want predicted as None, consider adding similar examples to them in your training data. For example, if you would like to categorize utterances that are not important to your project as None, such as greetings, yes and no answers, or responses to questions such as providing a number, then add those utterances to your None intent.
-
-You should also consider adding false positive examples to the None intent. For example, in a flight booking project it is likely that the utterance "I want to buy a book" could be confused with a Book Flight intent. Adding "I want to buy a book" or "I love reading books" as None training utterances helps alter the predictions of those types of utterances towards the None intent instead of Book Flight.
-
-## Next steps
-
-[Conversational language understanding overview](../overview.md)
cognitive-services Faq https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/conversational-language-understanding/faq.md
- Title: Frequently Asked Questions-
-description: Use this article to quickly get the answers to FAQ about conversational language understanding
------ Previously updated : 09/29/2022----
-# Frequently asked questions for conversational language understanding
-
-Use this article to quickly get the answers to common questions about conversational language understanding.
-
-## How do I create a project?
-
-See the [quickstart](./quickstart.md) to quickly create your first project, or the [how-to article](./how-to/create-project.md) for more details.
--
-## Can I use more than one conversational language understanding project together?
-
-Yes, using orchestration workflow. See the [orchestration workflow documentation](../orchestration-workflow/overview.md) for more information.
-
-## What is the difference between LUIS and conversational language understanding?
-
-Conversational language understanding is the next generation of LUIS.
-
-## Training is taking a long time, is this expected?
-
-For conversation projects, long training times are expected. Based on the number of examples you have, your training times may vary from 5 minutes to 1 hour or more.
-
-## How do I use entity components?
-
-See the [entity components](./concepts/entity-components.md) article.
-
-## Which languages are supported in this feature?
-
-See the [language support](./language-support.md) article.
-
-## How do I get more accurate results for my project?
-
-Take a look at the [recommended guidelines](./how-to/build-schema.md#guidelines-and-recommendations) for information on improving accuracy.
-
-## How do I get predictions in different languages?
-
-When you train and deploy a conversation project in any language, you can immediately try querying it in [multiple languages](./concepts/multiple-languages.md). You may get varied results for different languages. To improve the accuracy of any language, add utterances to your project in that language to introduce the trained model to more syntax of that language.
-
-## How many intents, entities, utterances can I add to a project?
-
-See the [service limits](./service-limits.md) article.
-
-## Can I label the same word as 2 different entities?
-
-Unlike LUIS, you cannot label the same text as 2 different entities. Learned components across different entities are mutually exclusive, and only one learned span is predicted for each set of characters.
-
-## Can I import a LUIS JSON file into conversational language understanding?
-
-Yes, you can [import any LUIS application](./how-to/migrate-from-luis.md) JSON file from the latest version in the service.
-
-## Can I import a LUIS `.LU` file into conversational language understanding?
-
-No, the service only supports JSON format. You can go to LUIS, import the `.LU` file and export it as a JSON file.
-
-## Can I use conversational language understanding with custom question answering?
-
-Yes, you can use [orchestration workflow](../orchestration-workflow/overview.md) to orchestrate between different conversational language understanding and [question answering](../question-answering/overview.md) projects. Start by creating orchestration workflow projects, then connect your conversational language understanding and custom question answering projects. To perform this action, make sure that your projects are under the same Language resource.
-
-## How do I handle out of scope or domain utterances that aren't relevant to my intents?
-
-Add any out of scope utterances to the [none intent](./concepts/none-intent.md).
-
-## How do I control the none intent?
-
-You can control the none intent threshold from the UI through the project settings, by changing the none intent threshold value. The value can be between 0.0 and 1.0. You can also change this threshold through the APIs by changing the *confidenceThreshold* in the settings object. Learn more about the [none intent](./concepts/none-intent.md#none-score-threshold).
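
For illustration, a hedged sketch of the settings object described above; the exact placement of this object in your project file or API request is an assumption, so verify it against your own project's export:

```python
# Hypothetical fragment (placement assumed): "confidenceThreshold" is the documented
# setting, and accepted values range from 0.0 to 1.0.
settings = {
    "confidenceThreshold": 0.7  # if no intent scores above 0.7, "None" is returned as the top intent
}
```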
-
-## Is there any SDK support?
-
-Yes, but only for predictions. Samples are available for [Python](https://aka.ms/sdk-samples-conversation-python) and [C#](https://aka.ms/sdk-sample-conversation-dot-net). There is currently no SDK support for authoring.
-
-## What are the training modes?
--
-|Training mode | Description | Language availability | Pricing |
-|||||
-|Standard training | Faster training times for quicker model iteration. | Can only train projects in English. | Included in your [pricing tier](https://azure.microsoft.com/pricing/details/cognitive-services/language-service/). |
-|Advanced training | Slower training times using fine-tuned neural network transformer models. | Can train [multilingual projects](language-support.md#multi-lingual-option). | May incur [additional charges](https://azure.microsoft.com/pricing/details/cognitive-services/language-service/).
-
-See [training modes](how-to/train-model.md#training-modes) for more information.
-
-## Are there APIs for this feature?
-
-Yes, all the APIs are available.
-* [Authoring APIs](https://aka.ms/clu-authoring-apis)
-* [Prediction API](/rest/api/language/2023-04-01/conversation-analysis-runtime/analyze-conversation)
-
-## Next steps
-
-[Conversational language understanding overview](overview.md)
cognitive-services Glossary https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/conversational-language-understanding/glossary.md
- Title: Definitions used in conversational language understanding-
-description: Learn about definitions used in conversational language understanding.
------ Previously updated : 05/13/2022----
-# Terms and definitions used in conversation language understanding
-
-Use this article to learn about some of the definitions and terms you may encounter when using conversation language understanding.
-
-## Entity
-Entities are words in utterances that describe information used to fulfill or identify an intent. If your entity is complex and you would like your model to identify specific parts, you can break your model into subentities. For example, you might want your model to predict an address, but also the subentities of street, city, state, and zipcode.
-
-## F1 score
-The F1 score is a function of Precision and Recall. It's needed when you seek a balance between [precision](#precision) and [recall](#recall).
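
For reference, the F1 score is the harmonic mean of precision and recall:

```latex
F_1 = 2 \cdot \frac{\text{precision} \cdot \text{recall}}{\text{precision} + \text{recall}}
```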
-
-## Intent
-An intent represents a task or action the user wants to perform. It's a purpose or goal expressed in a user's input, such as booking a flight, or paying a bill.
-
-## List entity
-A list entity represents a fixed, closed set of related words along with their synonyms. List entities are exact matches, unlike machine-learned entities.
-
-The entity is predicted whenever one of the words in the list appears in the utterance. For example, if you have a list entity called "size" that contains the words "small", "medium", and "large", then the size entity is predicted for all utterances where the words "small", "medium", or "large" are used, regardless of the context.
-
-## Model
-A model is an object that's trained to do a certain task, in this case conversation understanding tasks. Models are trained by providing labeled data to learn from so they can later be used to understand utterances.
-
-* **Model evaluation** is the process that happens right after training to determine how well your model performs.
-* **Deployment** is the process of assigning your model to a deployment to make it available for use via the [prediction API](https://aka.ms/ct-runtime-swagger).
-
-## Overfitting
-
-Overfitting happens when the model is overly fixated on the specific training examples and is not able to generalize well to new utterances.
-
-## Precision
-Measures how precise/accurate your model is. It's the ratio between the correctly identified positives (true positives) and all identified positives. The precision metric reveals how many of the predicted classes are correctly labeled.
-
-## Project
-A project is a work area for building your custom ML models based on your data. Your project can only be accessed by you and others who have access to the Azure resource being used.
-
-## Recall
-Measures the model's ability to predict actual positive classes. It's the ratio between the predicted true positives and what was actually tagged. The recall metric reveals how many of the actual classes are correctly predicted.
-
-## Regular expression
-A regular expression entity represents a regular expression. Regular expression entities are exact matches.
-
-## Schema
-The schema is defined as the combination of intents and entities within your project. Schema design is a crucial part of your project's success. When creating a schema, you want to think about which intents and entities should be included in your project.
-
-## Training data
-Training data is the set of information that is needed to train a model.
-
-## Utterance
-
-An utterance is user input that is short text representative of a sentence in a conversation. It is a natural language phrase such as "book 2 tickets to Seattle next Tuesday". Example utterances are added to train the model, and the model predicts on new utterances at runtime.
--
-## Next steps
-
-* [Data and service limits](service-limits.md).
-* [Conversation language understanding overview](../overview.md).
cognitive-services Build Schema https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/conversational-language-understanding/how-to/build-schema.md
- Title: How to build a Conversational Language Understanding project schema-
-description: Use this article to start building a Conversational Language Understanding project schema
------ Previously updated : 05/13/2022----
-# How to build your project schema
-
-In conversational language understanding projects, the *schema* is defined as the combination of intents and entities within your project. Schema design is a crucial part of your project's success. When creating a schema, you want to think about which intents and entities should be included in your project.
-
-## Guidelines and recommendations
-
-Consider the following guidelines when picking intents for your project:
-
 1. Create distinct, separable intents. An intent is best described as an action the user wants to perform. Think of the project you're building and identify all the different actions your users may take when interacting with your project. Sending, calling, and canceling are all actions that are best represented as different intents. "Canceling an order" and "canceling an appointment" are very similar, with the distinction being *what* they are canceling. Those two actions should be represented under the same intent, *Cancel*.
-
- 2. Create entities to extract relevant pieces of information within your text. The entities should be used to capture the relevant information needed to fulfill your user's action. For example, *order* or *appointment* could be different things a user is trying to cancel, and you should create an entity to capture that piece of information.
-
-You can *"send"* a *message*, *"send"* an *email*, or *"send"* a package. Creating an intent to capture each of those requirements will not scale over time, and you should use entities to identify *what* the user was sending. The combination of intents and entities should determine your conversation flow.
-
-For example, consider a company where the bot developers have identified the three most common actions their users take when using a calendar:
-
-* Set up new meetings
-* Respond to meeting requests
-* Cancel meetings
-
-They might create an intent to represent each of these actions. They might also include entities to help complete these actions, such as:
-
-* Meeting attendees
-* Date
-* Meeting durations
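
For illustration, a minimal sketch of how that calendar schema could be represented as data. The field names loosely follow the CLU project format but are assumptions here; see the data formats documentation for the exact schema.

```python
# Hypothetical sketch of the calendar bot's schema (names and layout are illustrative
# assumptions, not the exact CLU import format).
calendar_schema = {
    "intents": [
        {"category": "SetupMeeting"},
        {"category": "RespondToMeetingRequest"},
        {"category": "CancelMeeting"},
    ],
    "entities": [
        {"category": "MeetingAttendees"},
        {"category": "Date"},
        {"category": "MeetingDuration"},
    ],
}
```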
-
-## Add intents
-
-To build a project schema within [Language Studio](https://aka.ms/languageStudio):
-
-1. Select **Schema definition** from the left side menu.
-
-2. From the top pivots, you can change the view to be **Intents** or **Entities**.
-
-3. To create an intent, select **Add** from the top menu. You will be prompted to type in a name to finish creating the intent.
-
-4. Repeat the above step to create intents for all the actions that you think the user will want to perform while using the project.
-
- :::image type="content" source="../media/build-schema-page.png" alt-text="A screenshot showing the schema creation page for conversation projects in Language Studio." lightbox="../media/build-schema-page.png":::
-
-5. When you select an intent, you will be directed to the [Data labeling](tag-utterances.md) page, with a filter set for the intent you selected. You can add examples for intents and label them with entities.
-
-## Add entities
-
-1. Move to **Entities** pivot from the top of the page.
-
-2. To add an entity, select **Add** from the top menu. You will be prompted to type in a name to finish creating the entity.
-
-3. After creating an entity, you'll be routed to the entity details page where you can define the composition settings for this entity.
-
-4. Every entity can be defined by multiple components: learned, list or prebuilt. A learned component is added to all your entities once you label them in your utterances.
-
- :::image type="content" source="../media/entity-details.png" alt-text="A screenshot showing the entity details page for conversation projects in Language Studio." lightbox="../media/entity-details.png":::
-
-5. You can add a [list](../concepts/entity-components.md#list-component) or [prebuilt](../concepts/entity-components.md#prebuilt-component) component to each entity.
-
-### Add prebuilt component
-
-To add a **prebuilt** component, select **Add new prebuilt**, and from the drop-down menu, select the prebuilt type you want to add to this entity.
-
- <!--:::image type="content" source="../media/add-prebuilt-component.png" alt-text="A screenshot showing a prebuilt-component in Language Studio." lightbox="../media/add-prebuilt-component.png":::-->
-
-### Add list component
-
-To add a **list** component, select **Add new list**. You can add multiple lists to each entity.
-
-1. To create a new list, enter the list key in the *Enter value* text box. This is the normalized value that will be returned when any of the synonym values is extracted.
-
-2. From the *language* drop-down menu, select the language of the synonyms list, then type your synonyms and press Enter after each one. It is recommended to have synonym lists in multiple languages.
-
- <!--:::image type="content" source="../media/add-list-component.png" alt-text="A screenshot showing a list component in Language Studio." lightbox="../media/add-list-component.png":::-->
-
-### Define entity options
-
-Change to the **Entity options** pivot in the entity details page. When multiple components are defined for an entity, their predictions may overlap. When an overlap occurs, each entity's final prediction is determined based on the [entity option](../concepts/entity-components.md#entity-options) you select in this step. Select the one that you want to apply to this entity and click on the **Save** button at the top.
-
- <!--:::image type="content" source="../media/entity-options.png" alt-text="A screenshot showing an entity option in Language Studio." lightbox="../media/entity-options.png":::-->
--
-After you create your entities, you can come back and edit them. You can **Edit entity components** or **delete** them by selecting the corresponding option from the top menu.
-
-## Next Steps
-
-* [Add utterances and label your data](tag-utterances.md)
cognitive-services Call Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/conversational-language-understanding/how-to/call-api.md
- Title: Send prediction requests to a conversational language understanding deployment-
-description: Learn about sending prediction requests for conversational language understanding.
------ Previously updated : 06/28/2022----
-# Send prediction requests to a deployment
-
-After the deployment is added successfully, you can query the deployment for intent and entity predictions from your utterance, based on the model you assigned to the deployment.
-You can query the deployment programmatically through the [prediction API](https://aka.ms/ct-runtime-swagger) or through the client libraries (Azure SDK).
-
-## Test deployed model
-
-You can use Language Studio to submit an utterance, get predictions and visualize the results.
----
-## Send a conversational language understanding request
-
-# [Language Studio](#tab/language-studio)
--
-# [REST APIs](#tab/REST-APIs)
-
-First you will need to get your resource key and endpoint:
--
-### Query your model
--
-# [Client libraries (Azure SDK)](#tab/azure-sdk)
-
-### Use the client libraries (Azure SDK)
-
-You can also use the client libraries provided by the Azure SDK to send requests to your model.
-
-> [!NOTE]
-> The client library for conversational language understanding is only available for:
-> * .NET
-> * Python
-
-1. Go to your resource overview page in the [Azure portal](https://portal.azure.com/#home)
-
-2. From the menu on the left side, select **Keys and Endpoint**. Use the endpoint for your API requests, and use the key for the `Ocp-Apim-Subscription-Key` header.
-
- :::image type="content" source="../../custom-text-classification/media/get-endpoint-azure.png" alt-text="A screenshot showing a key and endpoint in the Azure portal." lightbox="../../custom-text-classification/media/get-endpoint-azure.png":::
--
-3. Download and install the client library package for your language of choice:
-
- |Language |Package version |
- |||
- |.NET | [1.0.0](https://www.nuget.org/packages/Azure.AI.Language.Conversations/1.0.0) |
- |Python | [1.0.0](https://pypi.org/project/azure-ai-language-conversations/1.0.0) |
-
-4. After you've installed the client library, use the following samples on GitHub to start calling the API.
-
- * [C#](https://github.com/Azure/azure-sdk-for-net/tree/Azure.AI.Language.Conversations_1.0.0/sdk/cognitivelanguage/Azure.AI.Language.Conversations)
- * [Python](https://github.com/Azure/azure-sdk-for-python/tree/azure-ai-language-conversations_1.0.0/sdk/cognitivelanguage/azure-ai-language-conversations)
-
-5. See the following reference documentation for more information:
-
- * [C#](/dotnet/api/azure.ai.language.conversations)
- * [Python](/python/api/azure-ai-language-conversations/azure.ai.language.conversations.aio)
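
For example, a minimal Python sketch based on the published samples for the `azure-ai-language-conversations` 1.0.0 package; the endpoint, key, project name, and deployment name are placeholders:

```python
from azure.core.credentials import AzureKeyCredential
from azure.ai.language.conversations import ConversationAnalysisClient

# Placeholders - use the values from your resource's Keys and Endpoint page.
endpoint = "https://<your-resource-name>.cognitiveservices.azure.com"
key = "<your-resource-key>"

client = ConversationAnalysisClient(endpoint, AzureKeyCredential(key))

with client:
    result = client.analyze_conversation(
        task={
            "kind": "Conversation",
            "analysisInput": {
                "conversationItem": {
                    "id": "1",
                    "participantId": "1",
                    "text": "Book 2 tickets to Seattle next Tuesday",
                }
            },
            "parameters": {
                "projectName": "<your-project-name>",
                "deploymentName": "<your-deployment-name>",
            },
        }
    )

# The response contains the top intent and any extracted entities.
prediction = result["result"]["prediction"]
print("Top intent:", prediction["topIntent"])
print("Entities:", prediction["entities"])
```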
-
--
-## Next steps
-
-* [Conversational language understanding overview](../overview.md)
cognitive-services Create Project https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/conversational-language-understanding/how-to/create-project.md
- Title: How to create projects in Conversational Language Understanding-
-description: Use this article to learn how to create projects in Conversational Language Understanding.
------ Previously updated : 09/29/2022----
-# How to create a CLU project
-
-Use this article to learn how to set up these requirements and create a project.
--
-## Prerequisites
-
-Before you start using CLU, you will need several things:
-
-* An Azure subscription - [Create one for free](https://azure.microsoft.com/free/cognitive-services).
-* An Azure Language resource
-
-### Create a Language resource
-
-Before you start using CLU, you will need an Azure Language resource.
-
-> [!NOTE]
-> * You need to have an **owner** role assigned on the resource group to create a Language resource.
----
-## Sign in to Language Studio
--
-## Create a conversation project
-
-Once you have a Language resource created, create a Conversational Language Understanding project.
-
-### [Language Studio](#tab/language-studio)
--
-### [REST APIs](#tab/rest-api)
----
-## Import project
-
-### [Language Studio](#tab/language-studio)
-
-You can export a Conversational Language Understanding project as a JSON file at any time by going to the conversation projects page, selecting a project, and from the top menu, clicking on **Export**.
--
-That project can be reimported as a new project. If you import a project with the exact same name, it replaces the project's data with the newly imported project's data.
-
-If you have an existing LUIS application, you can _import_ the LUIS application JSON to Conversational Language Understanding directly, and it will create a Conversation project with all the pieces that are currently available: Intents, ML entities, and utterances. See [the LUIS migration article](../how-to/migrate-from-luis.md) for more information.
-
-To import a project, click on the arrow button next to **Create a new project** and select **Import**, then select the LUIS or Conversational Language Understanding JSON file.
--
-### [REST APIs](#tab/rest-api)
-
-You can import a CLU JSON file into the service.
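
A hedged Python sketch of what the import call can look like; the endpoint path, `api-version` value, and `Ocp-Apim-Subscription-Key` header follow the request format documented for the authoring APIs elsewhere in this documentation, while the resource, project name, and file name are placeholders:

```python
import json
import requests

# Placeholders - replace with your own values.
endpoint = "https://<your-resource-name>.cognitiveservices.azure.com"
key = "<your-resource-key>"
project_name = "<your-project-name>"
api_version = "2023-04-01"

# A previously exported CLU project in JSON format.
with open("my-clu-project.json", "r", encoding="utf-8") as f:
    project_assets = json.load(f)

url = (
    f"{endpoint}/language/authoring/analyze-conversations/projects/"
    f"{project_name}/:import?api-version={api_version}"
)
response = requests.post(url, headers={"Ocp-Apim-Subscription-Key": key}, json=project_assets)

# Import is asynchronous; poll the URL returned in the operation-location header.
print(response.status_code, response.headers.get("operation-location"))
```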
----
-## Export project
-
-### [Language Studio](#tab/Language-Studio)
-
-You can export a Conversational Language Understanding project as a JSON file at any time by going to the conversation projects page, selecting a project, and pressing **Export**.
-
-### [REST APIs](#tab/rest-apis)
-
-You can export a Conversational Language Understanding project as a JSON file at any time.
----
-## Get CLU project details
-
-### [Language Studio](#tab/language-studio)
--
-### [REST APIs](#tab/rest-api)
----
-## Delete project
-
-### [Language Studio](#tab/language-studio)
--
-### [REST APIs](#tab/rest-api)
-
-When you don't need your project anymore, you can delete your project using the APIs.
----
-## Next Steps
-
-[Build schema](./build-schema.md)
-
cognitive-services Deploy Model https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/conversational-language-understanding/how-to/deploy-model.md
- Title: How to deploy a model for conversational language understanding-
-description: Use this article to learn how to deploy models for conversational language understanding.
------ Previously updated : 10/12/2022----
-# Deploy a model
-
-Once you are satisfied with how your model performs, it's ready to be deployed so you can query it for predictions from utterances. Deploying a model makes it available for use through the [prediction API](/rest/api/language/2023-04-01/conversation-analysis-runtime/analyze-conversation).
-
-## Prerequisites
-
-* A successfully [created project](create-project.md)
-* [Labeled utterances](tag-utterances.md) and successfully [trained model](train-model.md)
-* Reviewed the [model performance](view-model-evaluation.md) to determine how your model is performing.
-
-See [project development lifecycle](../overview.md#project-development-lifecycle) for more information.
-
-## Deploy model
-
-After you have reviewed the model's performance and decided it's fit to be used in your environment, you need to assign it to a deployment to be able to query it. Assigning the model to a deployment makes it available for use through the [prediction API](/rest/api/language/2023-04-01/conversation-analysis-runtime/analyze-conversation). It is recommended to create a deployment named `production` to which you assign the best model you have built so far and use it in your system. You can create another deployment called `staging` to which you can assign the model you're currently working on to be able to test it. You can have a maximum of 10 deployments in your project.
-
-# [Language Studio](#tab/language-studio)
-
-
-# [REST APIs](#tab/rest-api)
-
-### Submit deployment job
--
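
A hedged Python sketch of submitting a deployment job with the authoring REST API; the endpoint path and the `trainedModelLabel` body field are assumptions based on the 2023-04-01 authoring API, so verify them against the authoring API reference:

```python
import requests

# Placeholders - replace with your own values.
endpoint = "https://<your-resource-name>.cognitiveservices.azure.com"
key = "<your-resource-key>"
project_name = "<your-project-name>"
deployment_name = "production"
api_version = "2023-04-01"

url = (
    f"{endpoint}/language/authoring/analyze-conversations/projects/"
    f"{project_name}/deployments/{deployment_name}?api-version={api_version}"
)
# Assign a trained model to the deployment; the job runs asynchronously.
response = requests.put(
    url,
    headers={"Ocp-Apim-Subscription-Key": key},
    json={"trainedModelLabel": "<your-model-name>"},
)
print(response.status_code, response.headers.get("operation-location"))
```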
-### Get deployment job status
----
-## Swap deployments
-
-After you are done testing a model assigned to one deployment, you might want to assign it to another deployment. Swapping deployments involves:
-* Taking the model assigned to the first deployment, and assigning it to the second deployment.
-* Taking the model assigned to the second deployment and assigning it to the first deployment.
-
-This can be used to swap your `production` and `staging` deployments when you want to take the model assigned to `staging` and assign it to `production`.
-
-# [Language Studio](#tab/language-studio)
--
-# [REST APIs](#tab/rest-api)
----
-## Delete deployment
-
-# [Language Studio](#tab/language-studio)
--
-# [REST APIs](#tab/rest-api)
----
-## Assign deployment resources
-
-You can [deploy your project to multiple regions](../../concepts/custom-features/multi-region-deployment.md) by assigning different Language resources that exist in different regions.
-
-# [Language Studio](#tab/language-studio)
--
-# [REST APIs](#tab/rest-api)
----
-## Unassign deployment resources
-
-When unassigning or removing a deployment resource from a project, you will also delete all the deployments that have been deployed to that resource's region.
-
-# [Language Studio](#tab/language-studio)
--
-# [REST APIs](#tab/rest-api)
----
-## Next steps
-
-* Use [prediction API to query your model](call-api.md)
cognitive-services Fail Over https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/conversational-language-understanding/how-to/fail-over.md
- Title: Back up and recover your conversational language understanding models-
-description: Learn how to save and recover your conversational language understanding models.
------ Previously updated : 05/16/2022----
-# Back up and recover your conversational language understanding models
-
-When you create a Language resource in the Azure portal, you specify a region for it to be created in. From then on, your resource and all of the operations related to it take place in the specified Azure server region. It's rare, but not impossible, to encounter a network issue that hits an entire region. If your solution needs to always be available, you should design it to fail over to another region. This requires two Azure Language resources in different regions and the ability to sync your CLU models across regions.
-
-If your app or business depends on the use of a CLU model, we recommend that you create a replica of your project in another supported region, so that if a regional outage occurs, you can access your model in the fail-over region where you replicated your project.
-
-Replicating a project means that you export your project metadata and assets and import them into a new project. This only makes a copy of your project settings, intents, entities and utterances. You still need to [train](../how-to/train-model.md) and [deploy](../how-to/deploy-model.md) the models to be available for use with [runtime APIs](https://aka.ms/clu-apis).
--
-In this article, you will learn how to use the export and import APIs to replicate your project from one resource to another in a different supported geographical region, how to keep your projects in sync, and the changes needed to your runtime consumption.
-
-## Prerequisites
-
-* Two Azure Language resources, each in a different Azure region.
-
-## Get your resource keys and endpoints
-
-Use the following steps to get the keys and endpoint of your primary and secondary resources. These will be used in the following steps.
--
-> [!TIP]
-> Keep a note of keys and endpoints for both primary and secondary resources. Use these values to replace the following placeholders:
-`{PRIMARY-ENDPOINT}`, `{PRIMARY-RESOURCE-KEY}`, `{SECONDARY-ENDPOINT}` and `{SECONDARY-RESOURCE-KEY}`.
-> Also take note of your project name, your model name and your deployment name. Use these values to replace the following placeholders: `{PROJECT-NAME}`, `{MODEL-NAME}` and `{DEPLOYMENT-NAME}`.
-
-## Export your primary project assets
-
-Start by exporting the project assets from the project in your primary resource.
-
-### Submit export job
-
-Replace the placeholders in the following request with your `{PRIMARY-ENDPOINT}` and `{PRIMARY-RESOURCE-KEY}` that you obtained in the first step.
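
A hedged Python sketch of the export request; the path and query parameters are assumptions based on the 2023-04-01 authoring API, with the placeholder names matching those described above:

```python
import requests

# {PRIMARY-ENDPOINT}, {PRIMARY-RESOURCE-KEY}, and {PROJECT-NAME} placeholders.
primary_endpoint = "https://<primary-resource-name>.cognitiveservices.azure.com"
primary_key = "<primary-resource-key>"
project_name = "<project-name>"
api_version = "2023-04-01"

url = (
    f"{primary_endpoint}/language/authoring/analyze-conversations/projects/"
    f"{project_name}/:export?stringIndexType=Utf16CodeUnit&api-version={api_version}"
)
response = requests.post(url, headers={"Ocp-Apim-Subscription-Key": primary_key})

# The export job is asynchronous; poll the operation-location URL to get the job
# status and the location of the exported project assets.
print(response.status_code, response.headers.get("operation-location"))
```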
--
-### Get export job status
-
-Replace the placeholders in the following request with your `{PRIMARY-ENDPOINT}` and `{PRIMARY-RESOURCE-KEY}` that you obtained in the first step.
--
-Copy the response body as you will use it as the body for the next import job.
-
-## Import to a new project
-
-Now go ahead and import the exported project assets in your new project in the secondary region so you can replicate it.
-
-### Submit import job
-
-Replace the placeholders in the following request with your `{SECONDARY-ENDPOINT}` and `{SECONDARY-RESOURCE-KEY}` that you obtained in the first step.
--
-### Get import job status
-
-Replace the placeholders in the following request with your `{SECONDARY-ENDPOINT}` and `{SECONDARY-RESOURCE-KEY}` that you obtained in the first step.
---
-## Train your model
-
-After importing your project, you have only copied the project's metadata and assets. You still need to train your model, which will incur usage on your account.
-
-### Submit training job
-
-Replace the placeholders in the following request with your `{SECONDARY-ENDPOINT}` and `{SECONDARY-RESOURCE-KEY}` that you obtained in the first step.
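
A hedged Python sketch of submitting the training job; the body fields are assumptions based on the 2023-04-01 authoring API, so verify them against the authoring API reference:

```python
import requests

# {SECONDARY-ENDPOINT}, {SECONDARY-RESOURCE-KEY}, {PROJECT-NAME}, and {MODEL-NAME} placeholders.
secondary_endpoint = "https://<secondary-resource-name>.cognitiveservices.azure.com"
secondary_key = "<secondary-resource-key>"
project_name = "<project-name>"
api_version = "2023-04-01"

url = (
    f"{secondary_endpoint}/language/authoring/analyze-conversations/projects/"
    f"{project_name}/:train?api-version={api_version}"
)
body = {
    "modelLabel": "<model-name>",
    "trainingMode": "standard",  # or "advanced" for multilingual projects
    "evaluationOptions": {
        "kind": "percentage",
        "trainingSplitPercentage": 80,
        "testingSplitPercentage": 20,
    },
}
response = requests.post(url, headers={"Ocp-Apim-Subscription-Key": secondary_key}, json=body)

# Training is asynchronous; poll the operation-location URL for the job status.
print(response.status_code, response.headers.get("operation-location"))
```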
---
-### Get Training Status
-
-Replace the placeholders in the following request with your `{SECONDARY-ENDPOINT}` and `{SECONDARY-RESOURCE-KEY}` that you obtained in the first step.
--
-## Deploy your model
-
-This is the step where you make your trained model available for consumption via the [runtime prediction API](https://aka.ms/ct-runtime-swagger).
-
-> [!TIP]
-> Use the same deployment name as your primary project for easier maintenance and minimal changes to your system to handle redirecting your traffic.
-
-### Submit deployment job
-
-Replace the placeholders in the following request with your `{SECONDARY-ENDPOINT}` and `{SECONDARY-RESOURCE-KEY}` that you obtained in the first step.
--
-### Get the deployment status
-
-Replace the placeholders in the following request with your `{SECONDARY-ENDPOINT}` and `{SECONDARY-RESOURCE-KEY}` that you obtained in the first step.
--
-## Changes in calling the runtime
-
-Within your system, at the step where you call the [runtime API](https://aka.ms/clu-apis), check the response code returned from the submit task API. If you observe a **consistent** failure in submitting the request, this could indicate an outage in your primary region. A single failure doesn't mean an outage; it may be a transient issue. Retry submitting the job through the secondary resource you have created. For the second request, use your `{YOUR-SECONDARY-ENDPOINT}` and secondary key. If you have followed the steps above, `{PROJECT-NAME}` and `{DEPLOYMENT-NAME}` are the same, so no changes are required to the request body.
-
-If you revert to using your secondary resource, you may observe a slight increase in latency because of the difference in regions where your model is deployed.
-
-## Check if your projects are out of sync
-
-Maintaining the freshness of both projects is an important part of the process. You need to frequently check whether any updates were made to your primary project so that you can move them over to your secondary project. This way, if your primary region fails and you move to the secondary region, you should expect similar model performance since it already contains the latest updates. Setting the frequency of checking if your projects are in sync is an important choice. We recommend that you do this check daily in order to guarantee the freshness of data in your secondary model.
-
-### Get project details
-
-Use the following URL to get your project details. One of the keys returned in the body indicates the last modified date of the project.
-Repeat the following step twice, once for your primary project and once for your secondary project, and compare the timestamps returned for both of them to check if they are out of sync.
--
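
A hedged Python sketch of the project details request for both resources; the path is an assumption based on the 2023-04-01 authoring API, and `lastModifiedDateTime` is the key to compare:

```python
import requests

def get_last_modified(endpoint: str, key: str, project_name: str) -> str:
    """Return the lastModifiedDateTime reported for a CLU project."""
    url = (
        f"{endpoint}/language/authoring/analyze-conversations/projects/"
        f"{project_name}?api-version=2023-04-01"
    )
    response = requests.get(url, headers={"Ocp-Apim-Subscription-Key": key})
    response.raise_for_status()
    return response.json()["lastModifiedDateTime"]

primary = get_last_modified("https://<primary-resource>.cognitiveservices.azure.com", "<primary-key>", "<project-name>")
secondary = get_last_modified("https://<secondary-resource>.cognitiveservices.azure.com", "<secondary-key>", "<project-name>")

print("Primary last modified:  ", primary)
print("Secondary last modified:", secondary)
# If the primary project was modified more recently, repeat the export, import,
# train, and deploy steps for the secondary project.
```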
-Repeat the same steps for your replicated project using `{SECONDARY-ENDPOINT}` and `{SECONDARY-RESOURCE-KEY}`. Compare the returned `lastModifiedDateTime` from both projects. If your primary project was modified sooner than your secondary one, you need to repeat the steps of [exporting](#export-your-primary-project-assets), [importing](#import-to-a-new-project), [training](#train-your-model) and [deploying](#deploy-your-model) your model.
-
-## Next steps
-
-In this article, you have learned how to use the export and import APIs to replicate your project to a secondary Language resource in another region. Next, explore the API reference docs to see what else you can do with the authoring APIs.
-
-* [Authoring REST API reference ](https://aka.ms/clu-authoring-apis)
-
-* [Runtime prediction REST API reference ](https://aka.ms/clu-apis)
-
cognitive-services Migrate From Luis https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/conversational-language-understanding/how-to/migrate-from-luis.md
- Title: Conversational Language Understanding backwards compatibility-
-description: Learn about backwards compatibility between LUIS and Conversational Language Understanding
------ Previously updated : 09/08/2022----
-# Migrate from Language Understanding (LUIS) to conversational language understanding (CLU)
-
-[Conversational language understanding (CLU)](../overview.md) is a cloud-based AI offering in Azure Cognitive Services for Language. It's the newest generation of [Language Understanding (LUIS)](../../../luis/what-is-luis.md) and offers backwards compatibility with previously created LUIS applications. CLU employs state-of-the-art machine learning intelligence to allow users to build a custom natural language understanding model for predicting intents and entities in conversational utterances.
-
-CLU offers the following advantages over LUIS:
-- Improved accuracy with state-of-the-art machine learning models for better intent classification and entity extraction. LUIS required more examples to generalize certain concepts in intents and entities, while CLU's more advanced machine learning reduces the burden on customers by requiring significantly less data.
-- Multilingual support for model learning and training. Train projects in one language and immediately predict intents and entities across 96 languages.
-- Ease of integration with different CLU and [custom question answering](../../question-answering/overview.md) projects using [orchestration workflow](../../orchestration-workflow/overview.md).
-- The ability to add testing data within the experience using Language Studio and APIs for model performance evaluation prior to deployment.
-
-To get started, you can [create a new project](../quickstart.md?pivots=language-studio#create-a-conversational-language-understanding-project) or [migrate your LUIS application](#migrate-your-luis-applications).
-
-## Comparison between LUIS and CLU
-
-The following table presents a side-by-side comparison between the features of LUIS and CLU. It also highlights the changes to your LUIS application after migrating to CLU. Click on the linked concept to learn more about the changes.
-
-|LUIS features | CLU features | Post migration |
-|::|:-:|:--:|
-|Machine-learned and Structured ML entities| Learned [entity components](#how-are-entities-different-in-clu) |Machine-learned entities without subentities will be transferred as CLU entities. Structured ML entities will only transfer leaf nodes (lowest level subentities that do not have their own subentities) as entities in CLU. The name of the entity in CLU will be the name of the subentity concatenated with the parent. For example, _Order.Size_|
-|List, regex, and prebuilt entities| List, regex, and prebuilt [entity components](#how-are-entities-different-in-clu) | List, regex, and prebuilt entities will be transferred as entities in CLU with a populated entity component based on the entity type.|
-|`Pattern.Any` entities| Not currently available | `Pattern.Any` entities will be removed.|
-|Single culture for each application|[Multilingual models](#how-is-conversational-language-understanding-multilingual) enable multiple languages for each project. |The primary language of your project will be set as your LUIS application culture. Your project can be trained to extend to different languages.|
-|Entity roles |[Roles](#how-are-entity-roles-transferred-to-clu) are no longer needed. | Entity roles will be transferred as entities.|
-|Settings for: normalize punctuation, normalize diacritics, normalize word form, use all training data |[Settings](#how-is-the-accuracy-of-clu-better-than-luis) are no longer needed. |Settings will not be transferred. |
-|Patterns and phrase list features|[Patterns and Phrase list features](#how-is-the-accuracy-of-clu-better-than-luis) are no longer needed. |Patterns and phrase list features will not be transferred. |
-|Entity features| Entity components| List or prebuilt entities added as features to an entity will be transferred as added components to that entity. [Entity features](#how-do-entity-features-get-transferred-in-clu) will not be transferred for intents. |
-|Intents and utterances| Intents and utterances |All intents and utterances will be transferred. Utterances will be labeled with their transferred entities. |
-|Application GUIDs |Project names| A project will be created for each migrating application with the application name. Any special characters in the application names will be removed in CLU.|
-|Versioning| Every time you train, a model is created and acts as a version of your [project](#how-do-i-manage-versions-in-clu). | A project will be created for the selected application version. |
-|Evaluation using batch testing |Evaluation using testing sets | [Adding your testing dataset](../how-to/tag-utterances.md#how-to-label-your-utterances) will be required.|
-|Role-Based Access Control (RBAC) for LUIS resources |Role-Based Access Control (RBAC) available for Language resources |Language resource RBAC must be [manually added after migration](../../concepts/role-based-access-control.md). |
-|Single training mode| Standard and advanced [training modes](#how-are-the-training-times-different-in-clu-how-is-standard-training-different-from-advanced-training) | Training will be required after application migration. |
|Two publishing slots and version publishing |Ten deployment slots with custom naming | Deployment will be required after the application's migration and training. |
-|LUIS authoring APIs and SDK support in .NET, Python, Java, and Node.js |[CLU Authoring REST APIs](https://aka.ms/clu-authoring-apis). | For more information, see the [quickstart article](../quickstart.md?pivots=rest-api) for information on the CLU authoring APIs. [Refactoring](#do-i-have-to-refactor-my-code-if-i-migrate-my-applications-from-luis-to-clu) will be necessary to use the CLU authoring APIs. |
-|LUIS Runtime APIs and SDK support in .NET, Python, Java, and Node.js |[CLU Runtime APIs](https://aka.ms/clu-runtime-api). CLU Runtime SDK support for [.NET](/dotnet/api/overview/azure/ai.language.conversations-readme) and [Python](/python/api/overview/azure/ai-language-conversations-readme?view=azure-python-preview&preserve-view=true). | See [how to call the API](../how-to/call-api.md#use-the-client-libraries-azure-sdk) for more information. [Refactoring](#do-i-have-to-refactor-my-code-if-i-migrate-my-applications-from-luis-to-clu) will be necessary to use the CLU runtime API response. |
-
-## Migrate your LUIS applications
-
-Use the following steps to migrate your LUIS application using either the LUIS portal or REST API.
-
-# [LUIS portal](#tab/luis-portal)
-
-## Migrate your LUIS applications using the LUIS portal
-
-Follow these steps to begin migration using the [LUIS Portal](https://www.luis.ai/):
-
-1. After logging into the LUIS portal, click the button on the banner at the top of the screen to launch the migration wizard. The migration will only copy your selected LUIS applications to CLU.
-
- :::image type="content" source="../media/backwards-compatibility/banner.svg" alt-text="A screenshot showing the migration banner in the LUIS portal." lightbox="../media/backwards-compatibility/banner.svg":::
--
- The migration overview tab provides a brief explanation of conversational language understanding and its benefits. Press Next to proceed.
-
- :::image type="content" source="../media/backwards-compatibility/migration-overview.svg" alt-text="A screenshot showing the migration overview window." lightbox="../media/backwards-compatibility/migration-overview.svg":::
-
-1. Determine the Language resource that you wish to migrate your LUIS application to. If you have already created your Language resource, select your Azure subscription followed by your Language resource, and then click **Next**. If you don't have a Language resource, click the link to create a new Language resource. Afterwards, select the resource and click **Next**.
-
- :::image type="content" source="../media/backwards-compatibility/select-resource.svg" alt-text="A screenshot showing the resource selection window." lightbox="../media/backwards-compatibility/select-resource.svg":::
-
-1. Select all your LUIS applications that you want to migrate, and specify each of their versions. Click **Next**. After selecting your application and version, you will be prompted with a message informing you of any features that won't be carried over from your LUIS application.
-
- > [!NOTE]
- > Special characters are not supported by conversational language understanding. Any special characters in your selected LUIS application names will be removed in your new migrated applications.
- :::image type="content" source="../media/backwards-compatibility/select-applications.svg" alt-text="A screenshot showing the application selection window." lightbox="../media/backwards-compatibility/select-applications.svg":::
-
-1. Review your Language resource and LUIS applications selections. Click **Finish** to migrate your applications.
-
-1. A popup window will let you track the migration status of your applications. Applications that have not started migrating will have a status of **Not started**. Applications that have begun migrating will have a status of **In progress**, and once they have finished migrating their status will be **Succeeded**. A **Failed** application means that you must repeat the migration process. Once the migration has completed for all applications, select **Done**.
-
- :::image type="content" source="../media/backwards-compatibility/migration-progress.svg" alt-text="A screenshot showing the application migration progress window." lightbox="../media/backwards-compatibility/migration-progress.svg":::
-
-1. After your applications have migrated, you can perform the following steps:
-
- * [Train your model](../how-to/train-model.md?tabs=language-studio)
- * [Deploy your model](../how-to/deploy-model.md?tabs=language-studio)
- * [Call your deployed model](../how-to/call-api.md?tabs=language-studio)
-
-# [REST API](#tab/rest-api)
-
-## Migrate your LUIS applications using REST APIs
-
-Follow these steps to begin migration programmatically using the CLU Authoring REST APIs:
-
-1. Export your LUIS application in JSON format. You can use the [LUIS Portal](https://www.luis.ai/) to export your applications, or the [LUIS programmatic APIs](https://westus.dev.cognitive.microsoft.com/docs/services/luis-programmatic-apis-v3-0-preview/operations/5890b47c39e2bb052c5b9c40).
-
-1. Submit a POST request using the following URL, headers, and JSON body to import your LUIS application into your CLU project (a Python sketch of this request follows these steps). CLU does not support names with special characters, so remove any special characters from the project name.
-
- ### Request URL
- ```rest
- {ENDPOINT}/language/authoring/analyze-conversations/projects/{PROJECT-NAME}/:import?api-version={API-VERSION}&format=luis
- ```
-
- |Placeholder |Value | Example |
- ||||
- |`{ENDPOINT}` | The endpoint for authenticating your API request. | `https://<your-custom-subdomain>.cognitiveservices.azure.com` |
- |`{PROJECT-NAME}` | The name for your project. This value is case sensitive. | `myProject` |
- |`{API-VERSION}` | The [version](../../concepts/model-lifecycle.md#api-versions) of the API you are calling. | `2023-04-01` |
-
- ### Headers
-
- Use the following header to authenticate your request.
-
- |Key|Value|
- |--|--|
- |`Ocp-Apim-Subscription-Key`| The key to your resource. Used for authenticating your API requests.|
-
- ### JSON body
-
- Use the exported LUIS JSON data as your body.
-
-1. After your applications have migrated, you can perform the following steps:
-
- * [Train your model](../how-to/train-model.md?tabs=language-studio)
- * [Deploy your model](../how-to/deploy-model.md?tabs=language-studio)
- * [Call your deployed model](../how-to/call-api.md?tabs=language-studio)
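
As referenced in step 2, a minimal Python sketch of the import request; the URL format, `format=luis` parameter, and `Ocp-Apim-Subscription-Key` header come from the tables above, while the resource, file name, and project name are placeholders:

```python
import json
import requests

# Placeholders - replace with your own values.
endpoint = "https://<your-custom-subdomain>.cognitiveservices.azure.com"
key = "<your-resource-key>"
project_name = "myProject"  # no special characters
api_version = "2023-04-01"

# The LUIS application previously exported in JSON format.
with open("exported-luis-app.json", "r", encoding="utf-8") as f:
    luis_app = json.load(f)

url = (
    f"{endpoint}/language/authoring/analyze-conversations/projects/"
    f"{project_name}/:import?api-version={api_version}&format=luis"
)
response = requests.post(url, headers={"Ocp-Apim-Subscription-Key": key}, json=luis_app)

# The import runs asynchronously; poll the operation-location header for status.
print(response.status_code, response.headers.get("operation-location"))
```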
---
-## Frequently asked questions
-
-### Which LUIS JSON version is supported by CLU?
-
-CLU supports the model JSON version 7.0.0. If the JSON format is older, it would need to be imported into LUIS first, then exported from LUIS with the most recent version.
-
-### How are entities different in CLU?
-
-In CLU, a single entity can have multiple entity components, which are different methods for extraction. Those components are then combined together using rules you can define. The available components are:
-- Learned: Equivalent to ML entities in LUIS. Labels are used to train a machine-learned model to predict an entity based on the content and context of the provided labels.
-- List: Just like list entities in LUIS, list components exactly match a set of synonyms and map them back to a normalized value called a **list key**.
-- Prebuilt: Prebuilt components allow you to define an entity with the prebuilt extractors for common types available in both LUIS and CLU.
-- Regex: Regex components use regular expressions to capture custom defined patterns, exactly like regex entities in LUIS.
-
-Entities in LUIS will be transferred over as entities of the same name in CLU with the equivalent components transferred.
-
-After migrating, your structured machine-learned leaf nodes and bottom-level subentities will be transferred to the new CLU model while all the parent entities and higher-level entities will be ignored. The name of the entity will be the bottom-level entity's name concatenated with its parent entity.
-
-#### Example:
-
-LUIS entity:
-
-* Pizza Order
- * Topping
- * Size
-
-Migrated LUIS entity in CLU:
-
-* Pizza Order.Topping
-* Pizza Order.Size
-
-You also cannot label 2 different entities in CLU for the same span of characters. Learned components in CLU are mutually exclusive, so overlapping predictions are not provided for learned components. When migrating your LUIS application, entity labels that overlapped preserved only the longest label and ignored any others.
-
-For more information on entity components, see [Entity components](../concepts/entity-components.md).
-
-### How are entity roles transferred to CLU?
-
-Your roles will be transferred as distinct entities along with their labeled utterances. Each role's entity type will determine which entity component will be populated. For example, a list entity role will be transferred as an entity with the same name as the role, with a populated list component.
-
-### How do entity features get transferred in CLU?
-
-Entities used as features for intents will not be transferred. Entities used as features for other entities will populate the relevant component of the entity. For example, if a list entity named _SizeList_ was used as a feature to a machine-learned entity named _Size_, then the _Size_ entity will be transferred to CLU with the list values from _SizeList_ added to its list component. The same is applied for prebuilt and regex entities.
-
-### How are entity confidence scores different in CLU?
-
-Any extracted entity has a 100% confidence score and therefore entity confidence scores should not be used to make decisions between entities.
-
-### How is conversational language understanding multilingual?
-
-Conversational language understanding projects accept utterances in different languages. Furthermore, you can train your model in one language and extend it to predict in other languages.
-
-#### Example:
-
-Training utterance (English): *How are you?*
-
-Labeled intent: Greeting
-
-Runtime utterance (French): *Comment ça va?*
-
-Predicted intent: Greeting
-
-### How is the accuracy of CLU better than LUIS?
-
-CLU uses state-of-the-art models to enhance the machine learning performance of intent classification and entity extraction.
-
-These models are insensitive to minor variations, removing the need for the following settings: _Normalize punctuation_, _normalize diacritics_, _normalize word form_, and _use all training data_.
-
-Additionally, the new models do not support phrase list features as they no longer require supplementary information from the user to provide semantically similar words for better accuracy. Patterns were also used to provide improved intent classification using rule-based matching techniques that are not necessary in the new model paradigm. The question below explains this in more detail.
-
-### What do I do if the features I am using in LUIS are no longer present?
-
-There are several features that were present in LUIS that will no longer be available in CLU. This includes the ability to do feature engineering, having patterns and pattern.any entities, and structured entities. If you had dependencies on these features in LUIS, use the following guidance:
-- **Patterns**: Patterns were added in LUIS to assist intent classification by defining regular expression template utterances. This included the ability to define pattern-only intents (without utterance examples). CLU is capable of generalizing by leveraging the state-of-the-art models. You can provide a few utterances that match a specific pattern to the intent in CLU, and it will likely classify the different patterns as the top intent without the need for the pattern template utterance. This simplifies the requirement to formulate these patterns, which was limited in LUIS, and provides a better intent classification experience.
-
-- **Phrase list features**: The ability to associate features mainly occurred to assist the classification of intents by highlighting the key elements/features to use. This is no longer required since the deep models used in CLU already possess the ability to identify the elements that are inherent in the language. In turn, removing these features will have no effect on the classification ability of the model.
-
-- **Structured entities**: The ability to define structured entities was mainly to enable multilevel parsing of utterances. With the different possibilities of the sub-entities, LUIS needed all the different combinations of entities to be defined and presented to the model as examples. In CLU, these structured entities are no longer supported, since overlapping learned components are not supported. There are a few possible approaches to handling these structured extractions:
 - **Non-ambiguous extractions**: In most cases, the detection of the leaf entities is enough to understand the required items within a full span. For example, a structured entity such as _Trip_ that fully spanned a source and destination (_London to New York_ or _Home to work_) can be identified with the individual spans predicted for source and destination. Their presence as individual predictions would inform you of the _Trip_ entity.
 - **Ambiguous extractions**: When the boundaries of different sub-entities are not very clear. To illustrate, take the example "I want to order a pepperoni pizza and an extra cheese vegetarian pizza". While the different pizza types as well as the topping modifications can be extracted, extracting them without context leaves a degree of ambiguity about where the extra cheese is added. In this case, the extent of the span is context based and would require ML to determine this. For ambiguous extractions, you can use one of the following approaches:
-
-1. Combine sub-entities into different entity components within the same entity.
-
-#### Example:
-
-LUIS Implementation:
-
-* Pizza Order (entity)
- * Size (subentity)
- * Quantity (subentity)
-
-CLU Implementation:
-
-* Pizza Order (entity)
- * Size (list entity component: small, medium, large)
- * Quantity (prebuilt entity component: number)
-
-In CLU, you would label the entire span for _Pizza Order_ inclusive of the size and quantity, which would return the pizza order with a list key for size, and a number value for quantity in the same entity object.
-
-2. For more complex problems where entities contain several levels of depth, you can create a project for each level of depth in the entity structure. This gives you the option to:
-- Pass the utterance to each project.
-- Combine the analyses of each project in the stage that follows CLU.
-
-For a detailed example on this concept, check out the pizza sample projects available on [GitHub](https://aka.ms/clu-pizza).
-
-### How do I manage versions in CLU?
-
-CLU saves the data assets used to train your model. You can export a model's assets or load them back into the project at any point. So models act as different versions of your project.
-
-You can export your CLU projects using [Language Studio](https://language.cognitive.azure.com/home) or [programmatically](../how-to/fail-over.md#export-your-primary-project-assets) and store different versions of the assets locally.
-
-### Why is CLU classification different from LUIS? How does None classification work?
-
-CLU presents a different approach to training models by using multi-classification as opposed to binary classification. As a result, the interpretation of scores is different and also differs across training options. While you are likely to achieve better results, you have to observe the difference in scores and determine a new threshold for accepting intent predictions. You can easily add a confidence score threshold for the [None intent](../concepts/none-intent.md) in your project settings. This will return *None* as the top intent if the top intent did not exceed the confidence score threshold provided.
-
-### Do I need more data for CLU models than LUIS?
-
-The new CLU models have a better semantic understanding of language than LUIS, and in turn help models generalize with a significant reduction of data. While you shouldn't aim to reduce the amount of data that you have, you should expect better performance and resilience to variations and synonyms in CLU compared to LUIS.
-
-### If I don't migrate my LUIS apps, will they be deleted?
-
-Your existing LUIS applications will be available until October 1, 2025. After that time you will no longer be able to use those applications, the service endpoints will no longer function, and the applications will be permanently deleted.
-
-### Are .LU files supported on CLU?
-
-Only JSON format is supported by CLU. You can import your .LU files to LUIS and export them in JSON format, or you can follow the migration steps above for your application.
-
-### What are the service limits of CLU?
-
-See the [service limits](../service-limits.md) article for more information.
-
-### Do I have to refactor my code if I migrate my applications from LUIS to CLU?
-
-The API objects of CLU applications are different from LUIS and therefore code refactoring will be necessary.
-
-If you are using the LUIS [programmatic](https://westus.dev.cognitive.microsoft.com/docs/services/luis-programmatic-apis-v3-0-preview/operations/5890b47c39e2bb052c5b9c40) and [runtime](https://westus.dev.cognitive.microsoft.com/docs/services/luis-endpoint-api-v3-0/operations/5cb0a9459a1fe8fa44c28dd8) APIs, you can replace them with their equivalent APIs.
-
-[CLU authoring APIs](https://aka.ms/clu-authoring-apis): Instead of LUIS's specific CRUD APIs for individual actions such as _add utterance_, _delete entity_, and _rename intent_, CLU offers an [import API](/rest/api/language/2023-04-01/conversational-analysis-authoring/import) that replaces the full content of a project using the same name. If your service used LUIS programmatic APIs to provide a platform for other customers, you must consider this new design paradigm. All other APIs such as: _listing projects_, _training_, _deploying_, and _deleting_ are available. APIs for actions such as _importing_ and _deploying_ are asynchronous operations instead of synchronous as they were in LUIS.
-
-[CLU runtime APIs](https://aka.ms/clu-runtime-api): The new API request and response includes many of the same parameters such as: _query_, _prediction_, _top intent_, _intents_, _entities_, and their values. The CLU response object offers a more straightforward approach. Entity predictions are provided as they are within the utterance text, and any additional information such as resolution or list keys are provided in extra parameters called `extraInformation` and `resolution`.
-
-You can use the [.NET](https://github.com/Azure/azure-sdk-for-net/tree/Azure.AI.Language.Conversations_1.0.0-beta.3/sdk/cognitivelanguage/Azure.AI.Language.Conversations/samples/) or [Python](https://github.com/Azure/azure-sdk-for-python/blob/azure-ai-language-conversations_1.1.0b1/sdk/cognitivelanguage/azure-ai-language-conversations/samples/README.md) CLU runtime SDK to replace the LUIS runtime SDK. There is currently no authoring SDK available for CLU.
-
-### How are the training times different in CLU? How is standard training different from advanced training?
-
-CLU offers standard training, which trains and learns in English and is comparable to the training time of LUIS. It also offers advanced training, which takes a considerably longer duration as it extends the training to all other [supported languages](../language-support.md). The train API will continue to be an asynchronous process, and you will need to assess the change in the DevOps process you employ for your solution.
-
-### How has the experience changed in CLU compared to LUIS? How is the development lifecycle different?
-
-In LUIS you would Build-Train-Test-Publish, whereas in CLU you Build-Train-Evaluate-Deploy-Test.
-
-1. **Build**: In CLU, you can define your intents, entities, and utterances before you train. CLU additionally offers you the ability to specify _test data_ as you build your application to be used for model evaluation. Evaluation assesses how well your model is performing on your test data and provides you with precision, recall, and F1 metrics.
-2. **Train**: You create a model with a name each time you train. You can overwrite an already trained model. You can specify either _standard_ or _advanced_ training, and determine if you would like to use your test data for evaluation, or a percentage of your training data to be left out from training and used as testing data. After training is complete, you can evaluate how well your model performs on the held-out testing data.
-3. **Deploy**: After training is complete and you have a model with a name, it can be deployed for predictions. A deployment is also named and has an assigned model. You could have multiple deployments for the same model. A deployment can be overwritten with a different model, or you can swap models with other deployments in the project.
-4. **Test**: Once deployment is complete, you can use it for predictions through the deployment endpoint. You can also test it in the studio in the Test deployment page.
-
-This process is in contrast to LUIS, where the application ID was attached to everything, and you deployed a version of the application in either the staging or production slots.
-
-This will influence the DevOps processes you use.
-
-### Does CLU have container support?
-
-No, you cannot export CLU to containers.
-
-### How will my LUIS applications be named in CLU after migration?
-
-Any special characters in the LUIS application name will be removed. If the cleared name length is greater than 50 characters, the extra characters will be removed. If the name after removing special characters is empty (for example, if the LUIS application name was `@@`), the new name will be _untitled_. If there is already a conversational language understanding project with the same name, the migrated LUIS application will be appended with `_1` for the first duplicate and increase by 1 for each additional duplicate. In case the new name's length is 50 characters and it needs to be renamed, the last 1 or 2 characters will be removed to be able to concatenate the number and still be within the 50 characters limit.
-
-## Migration from LUIS Q&A
-
-If you have any questions that were unanswered in this article, consider leaving your questions at our [Microsoft Q&A thread](https://aka.ms/luis-migration-qna-thread).
-
-## Next steps
-* [Quickstart: create a CLU project](../quickstart.md)
-* [CLU language support](../language-support.md)
-* [CLU FAQ](../faq.md)
cognitive-services Tag Utterances https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/conversational-language-understanding/how-to/tag-utterances.md
- Title: How to tag utterances in Conversational Language Understanding-
-description: Use this article to tag your utterances in Conversational Language Understanding projects
------ Previously updated : 06/03/2022----
-# Label your utterances in Language Studio
-
-Once you have [built a schema](build-schema.md) for your project, you should add training utterances to your project. The utterances should be similar to what your users will use when interacting with the project. When you add an utterance, you have to assign which intent it belongs to. After the utterance is added, label the words within your utterance that you want to extract as entities.
-
-Data labeling is a crucial step in the development lifecycle; this data will be used in the next step when training your model so that your model can learn from the labeled data. If you already have labeled utterances, you can [import them directly into your project](create-project.md#import-project), but you need to make sure that your data follows the [accepted data format](../concepts/data-formats.md). See [create project](create-project.md#import-project) to learn more about importing labeled data into your project. Labeled data informs the model how to interpret text, and is used for training and evaluation.
-
-## Prerequisites
-
-Before you can label your data, you need:
-
-* A successfully [created project](create-project.md).
-
-See the [project development lifecycle](../overview.md#project-development-lifecycle) for more information.
-
-## Data labeling guidelines
-
-After [building your schema](build-schema.md) and [creating your project](create-project.md), you will need to label your data. Labeling your data is important so your model knows which words and sentences will be associated with the intents and entities in your project. You will want to spend time labeling your utterances - introducing and refining the data that will be used in training your models.
-
-As you add utterances and label them, keep in mind:
-
-* The machine learning models generalize based on the labeled examples you provide it; the more examples you provide, the more data points the model has to make better generalizations.
-
-* The precision, consistency and completeness of your labeled data are key factors in determining model performance.
-
- * **Label precisely**: Always label each intent and entity with its correct type. Only include what you want classified and extracted; avoid unnecessary data in your labels.
- * **Label consistently**: The same entity should have the same label across all the utterances.
- * **Label completely**: Provide varied utterances for every intent. Label all the instances of the entity in all your utterances.
-
-* For [Multilingual projects](../language-support.md#multi-lingual-option), adding utterances in other languages increases the model's performance in these languages, but avoid duplicating your data across all the languages you would like to support. For example, to improve a calendar bot's performance with users, a developer might add examples mostly in English, and a few in Spanish or French as well. They might add utterances such as:
-
- * "_Set a meeting with **Matt** and **Kevin** **tomorrow** at **12 PM**._" (English)
- * "_Reply as **tentative** to the **weekly update** meeting._" (English)
- * "_Cancelar mi **pr├│xima** reuni├│n_." (Spanish)
-
-## How to label your utterances
-
-Use the following steps to label your utterances:
-
-1. Go to your project page in [Language Studio](https://aka.ms/languageStudio).
-
-2. From the left side menu, select **Data labeling**. On this page, you can start adding your utterances and labeling them. You can also upload your utterances directly by selecting **Upload utterance file** from the top menu; make sure the file follows the [accepted format](../concepts/data-formats.md#utterance-file-format).
-
-3. From the top pivots, you can change the view to be **training set** or **testing set**. Learn more about [training and testing sets](train-model.md#data-splitting) and how they're used for model training and evaluation.
-
- :::image type="content" source="../media/tag-utterances.png" alt-text="A screenshot of the page for tagging utterances in Language Studio." lightbox="../media/tag-utterances.png":::
-
- > [!TIP]
- > If you plan to use the **Automatically split the testing set from training data** option, add all your utterances to the training set.
-
-
-4. Select one of the intents from the **Select intent** dropdown menu, set the language of the utterance (for multilingual projects), and type the utterance itself. Press the Enter key in the utterance's text box to add the utterance.
-
-5. You have two options to label entities in an utterance:
-
- |Option |Description |
- |||
- |Label using a brush | Select the brush icon next to an entity in the right pane, then highlight the text in the utterance you want to label. |
- |Label using inline menu | Highlight the word you want to label as an entity, and a menu will appear. Select the entity you want to label these words with. |
-
-6. In the right side pane, under the **Labels** pivot, you can find all the entity types in your project and the count of labeled instances for each.
-
-7. Under the **Distribution** pivot you can view the distribution across training and testing sets. You have the following options for viewing:
- * *Total instances per labeled entity* where you can view count of all labeled instances of a specific entity.
- * *Unique utterances per labeled entity* where each utterance is counted if it contains at least one labeled instance of this entity.
- * *Utterances per intent* where you can view count of utterances per intent.
---
- > [!NOTE]
- > List and prebuilt components are not shown in the data labeling page, and all labels here only apply to the **learned component**.
-
-To remove a label:
- 1. From within your utterance, select the entity you want to remove a label from.
- 2. Scroll through the menu that appears, and select **Remove label**.
-
-To delete an entity:
- 1. Select the entity you want to edit in the right side pane.
- 2. Click on the three dots next to the entity, and select the option you want from the drop-down menu.
-
-## Suggest utterances with Azure OpenAI
-
-In CLU, use Azure OpenAI to suggest utterances to add to your project using GPT models. You first need to get access and create a resource in Azure OpenAI. You'll then need to create a deployment for the GPT models. Follow the prerequisite steps [here](../../../openai/how-to/create-resource.md).
-
-Before you get started, note that the suggest utterances feature is only available if your Language resource is in one of the following regions:
-* East US
-* South Central US
-* West Europe
-
-In the Data Labeling page:
-
-1. Click on the **Suggest utterances** button. A pane will open up on the right side prompting you to select your Azure OpenAI resource and deployment.
-2. After selecting an Azure OpenAI resource, click **Connect**, which allows your Language resource to have direct access to your Azure OpenAI resource. It assigns the `Cognitive Services User` role to your Language resource on your Azure OpenAI resource, which allows your current Language resource to access Azure OpenAI's service. If the connection fails, follow [these steps](#add-required-configurations-to-azure-openai-resource) to add the correct role to your Azure OpenAI resource manually.
-3. Once the resource is connected, select the deployment. The recommended model for the Azure OpenAI deployment is `text-davinci-002`.
-4. Select the intent you'd like to get suggestions for. Make sure the intent you have selected has at least 5 saved utterances to be enabled for utterance suggestions. The suggestions provided by Azure OpenAI are based on the **most recent utterances** you've added for that intent.
-5. Click on **Generate utterances**. Once complete, the suggested utterances will show up with a dotted line around them, with the note *Generated by AI*. Those suggestions need to be accepted or rejected. Accepting a suggestion simply adds it to your project, as if you had added it yourself. Rejecting it deletes the suggestion entirely. Only accepted utterances will be part of your project and used for training or testing. You can accept or reject by clicking the green check or red cancel buttons beside each utterance. You can also use the `Accept all` and `Reject all` buttons in the toolbar.
--
-Using this feature incurs a charge to your Azure OpenAI resource based on the number of tokens in the suggested utterances that are generated. Details for Azure OpenAI's pricing can be found [here](https://azure.microsoft.com/pricing/details/cognitive-services/openai-service/).
-
-### Add required configurations to Azure OpenAI resource
-
-If connecting your Language resource to an Azure OpenAI resource fails, follow these steps:
-
-Enable identity management for your Language resource using the following options:
-
-### [Azure portal](#tab/portal)
-
-Your Language resource must have identity management enabled. To enable it using the [Azure portal](https://portal.azure.com/):
-
-1. Go to your Language resource
-2. From the left hand menu, under the **Resource Management** section, select **Identity**
-3. From the **System assigned** tab, make sure to set **Status** to **On**
-
-### [Language Studio](#tab/studio)
-
-Your Language resource must have identity management enabled. To enable it using [Language Studio](https://aka.ms/languageStudio):
-
-1. Click the settings icon in the top right corner of the screen
-2. Select **Resources**
-3. Select the check box **Managed Identity** for your Language resource.
---
-After enabling managed identity, assign the role `Cognitive Services User` to your Azure OpenAI resource using the managed identity of your Language resource.
-
- 1. Go to the [Azure portal](https://portal.azure.com/) and navigate to your Azure OpenAI resource.
- 2. Click on the Access Control (IAM) tab on the left.
- 3. Click on Add > Add role assignment.
- 4. Select "Job function roles" and click Next.
- 5. Select `Cognitive Services User` from the list of roles and click Next.
- 6. Select Assign access to "Managed identity" and click on "Select members".
- 7. Under "Managed identity" select "Language".
- 8. Search for your resource and select it. Then click the **Select** button below, and then **Next**, to complete the process.
- 9. Review the details and click on Review + Assign.
--
-After a few minutes, refresh the Language Studio and you will be able to successfully connect to Azure OpenAI.
-
-## Next Steps
-* [Train Model](./train-model.md)
cognitive-services Train Model https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/conversational-language-understanding/how-to/train-model.md
- Title: How to train and evaluate models in Conversational Language Understanding-
-description: Use this article to train a model and view its evaluation details to make improvements.
------ Previously updated : 05/12/2022----
-# Train your conversational language understanding model
-
-After you have completed [labeling your utterances](tag-utterances.md), you can start training a model. Training is the process where the model learns from your [labeled utterances](tag-utterances.md). <!--After training is completed, you will be able to [view model performance](view-model-evaluation.md).-->
-
-To train a model, start a training job. Only successfully completed jobs create a model. Training jobs expire after seven days; after this time, you will no longer be able to retrieve the job details. If your training job completed successfully and a model was created, it won't be affected by the job expiring. You can only have one training job running at a time, and you can't start other jobs in the same project.
-
-The training times can be anywhere from a few seconds when dealing with simple projects, up to a couple of hours when you reach the [maximum limit](../service-limits.md) of utterances.
-
-Model evaluation is triggered automatically after training is completed successfully. The evaluation process starts by using the trained model to run predictions on the utterances in the testing set, and compares the predicted results with the provided labels (which establishes a baseline of truth). <!--The results are returned so you can review the [model's performance](view-model-evaluation.md).-->
-
-## Prerequisites
-
-* A successfully [created project](create-project.md) with a configured Azure blob storage account
-* [Labeled utterances](tag-utterances.md)
-
-<!--See the [project development lifecycle](../overview.md#project-development-lifecycle) for more information.-->
-
-## Data splitting
-
-Before you start the training process, labeled utterances in your project are divided into a training set and a testing set. Each one of them serves a different function.
-The **training set** is used in training the model; this is the set from which the model learns the labeled utterances.
-The **testing set** is a blind set that isn't introduced to the model during training but only during evaluation.
-
-After the model is trained successfully, the model can be used to make predictions from the utterances in the testing set. These predictions are used to calculate [evaluation metrics](../concepts/evaluation-metrics.md).
-It is recommended to make sure that all your intents and entities are adequately represented in both the training and testing set.
-
-Conversational language understanding supports two methods for data splitting:
-
-* **Automatically splitting the testing set from training data**: The system will split your tagged data between the training and testing sets, according to the percentages you choose. The recommended percentage split is 80% for training and 20% for testing.
-
- > [!NOTE]
- > If you choose the **Automatically split the testing set from training data** option, only the data assigned to the training set will be split according to the percentages provided.
-
-* **Use a manual split of training and testing data**: This method enables users to define which utterances should belong to which set. This step is only enabled if you have added utterances to your testing set during [labeling](tag-utterances.md).
-
-## Training modes
-
-CLU supports two modes for training your models:
-
-* **Standard training** uses fast machine learning algorithms to train your models relatively quickly. This is currently only available for **English** and is disabled for any project that doesn't use English (US) or English (UK) as its primary language. This training option is free of charge. Standard training allows you to add utterances and test them quickly at no cost. The evaluation scores shown should guide you on where to make changes in your project and add more utterances. Once you've iterated a few times and made incremental improvements, you can consider using advanced training to train another version of your model.
-
-* **Advanced training** uses the latest in machine learning technology to customize models with your data. This is expected to show better performance scores for your models and will enable you to use the [multilingual capabilities](../language-support.md#multi-lingual-option) of CLU as well. Advanced training is priced differently. See the [pricing information](https://azure.microsoft.com/pricing/details/cognitive-services/language-service) for details.
-
-Use the evaluation scores to guide your decisions. There might be times when a specific example is predicted incorrectly in advanced training but correctly in standard training mode. However, if the overall evaluation results are better using advanced training, then it is recommended to use it for your final model. If that isn't the case and you are not looking to use any multilingual capabilities, you can continue to use the model trained with standard mode.
-
-> [!Note]
-> You should expect to see a difference in behavior in intent confidence scores between the training modes, as each algorithm calibrates its scores differently.
-
-## Train model
-
-# [Language Studio](#tab/language-studio)
--
-# [REST APIs](#tab/rest-api)
-
-### Start training job
--
-### Get training job status
-
-Training could take some time, depending on the size of your training data and the complexity of your schema. You can use the following request to keep polling the status of the training job until it successfully completes.
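-
-As a rough sketch (not the exact API contract), the polling can be automated with a short script; it assumes the training request returned an `operation-location` header pointing at the job status URL, and the key and URL below are placeholders.
-
-```python
-import time
-import requests
-
-# Placeholder values -- replace with your resource key and the job status URL
-# returned in the "operation-location" header of the training request.
-headers = {"Ocp-Apim-Subscription-Key": "<your-key>"}
-job_url = "<operation-location-url>"
-
-while True:
-    response = requests.get(job_url, headers=headers)
-    response.raise_for_status()
-    status = response.json().get("status")
-    print(f"Training job status: {status}")
-    # Terminal states assumed for the async job; adjust if your API version differs.
-    if status in ("succeeded", "failed", "cancelled"):
-        break
-    time.sleep(30)  # poll every 30 seconds
-```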
----
-### Cancel training job
-
-# [Language Studio](#tab/language-studio)
--
-# [REST APIs](#tab/rest-api)
-----
-## Next steps
-
-* [Model evaluation metrics](../concepts/evaluation-metrics.md)
-<!--* [Deploy and query the model](./deploy-model.md)-->
cognitive-services View Model Evaluation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/conversational-language-understanding/how-to/view-model-evaluation.md
- Title: How to view conversational language understanding models details
-description: Use this article to learn about viewing the details for a conversational language understanding model.
------- Previously updated : 05/16/2022----
-# View conversational language understanding model details
-
-After model training is completed, you can view your model details and see how well it performs against the test set.
-
-> [!NOTE]
-> Using the **Automatically split the testing set from training data** option may result in different model evaluation results every time you [train a new model](train-model.md), as the test set is selected randomly from your utterances. To make sure that the evaluation is calculated on the same test set every time you train a model, make sure to use the **Use a manual split of training and testing data** option when starting a training job, and define your **Testing set** when you [add your utterances](tag-utterances.md).
-
-## Prerequisites
-
-Before viewing a model's evaluation, you need:
-
-* A successfully [created project](create-project.md).
-* A successfully [trained model](train-model.md).
-
-See the [project development lifecycle](../overview.md#project-development-lifecycle) for more information.
-
-## Model details
-
-### [Language studio](#tab/Language-studio)
--
-### [REST APIs](#tab/REST-APIs)
----
-## Load or export model data
-
-### [Language studio](#tab/Language-studio)
---
-### [REST APIs](#tab/REST-APIs)
----
-## Delete model
-
-### [Language studio](#tab/Language-studio)
---
-### [REST APIs](#tab/REST-APIs)
------
-## Next steps
-
-* As you review how your model performs, learn about the [evaluation metrics](../concepts/evaluation-metrics.md) that are used.
-* If you're happy with your model performance, you can [deploy your model](deploy-model.md).
cognitive-services Language Support https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/conversational-language-understanding/language-support.md
- Title: Conversational language understanding language support-
-description: This article explains which natural languages are supported by the conversational language understanding feature of Azure Cognitive Service for Language.
------ Previously updated : 05/12/2022----
-# Language support for conversational language understanding
-
-Use this article to learn about the languages currently supported by the CLU feature.
-
-## Multi-lingual option
-
-> [!TIP]
-> See [How to train a model](how-to/train-model.md#training-modes) for information on which training mode you should use for multilingual projects.
-
-With conversational language understanding, you can train a model in one language and use it to predict intents and entities from utterances in another language. This feature is powerful because it helps save time and effort. Instead of building separate projects for every language, you can handle a multilingual dataset in one project. Your dataset doesn't have to be entirely in the same language, but you should enable the multilingual option for your project while creating it, or later in the project settings. If you notice your model performing poorly in certain languages during the evaluation process, consider adding more data in these languages to your training set.
-
-You can train your project entirely with English utterances, and query it in: French, German, Mandarin, Japanese, Korean, and others. Conversational language understanding makes it easy for you to scale your projects to multiple languages by using multilingual technology to train your models.
-
-Whenever you identify that a particular language is not performing as well as other languages, you can add utterances for that language in your project. In the tag utterances page in Language Studio, you can select the language of the utterance you're adding. When you introduce examples for that language to the model, it is introduced to more of the syntax of that language, and learns to predict it better.
-
-You aren't expected to add the same number of utterances for every language. You should build the majority of your project in one language, and only add a few utterances in languages you observe aren't performing well. If you create a project that is primarily in English, and start testing it in French, German, and Spanish, you might observe that German doesn't perform as well as the other two languages. In that case, consider adding 5% of your original English examples in German, then train a new model and test in German again. You should see better results for German queries. The more utterances you add, the more likely the results are to improve.
-
-When you add data in another language, you shouldn't expect it to negatively affect other languages.
-
-### List and prebuilt components in multiple languages
-
-Projects with multiple languages enabled will allow you to specify synonyms **per language** for every list key. Depending on the language you query your project with, you will only get matches for the list component with synonyms of that language. When you query your project, you can specify the language in the request body:
-
-```json
-"query": "{query}"
-"language": "{language code}"
-```
-
-If you do not provide a language, it will fall back to the default language of your project.
-
-Prebuilt components are similar, in that you should expect to get predictions only for prebuilt components that are available in specific languages. The language of the request again determines which components the service attempts to predict. <!--See the [prebuilt components](../prebuilt-component-reference.md) reference article for the language support of each prebuilt component.-->
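-
-As a hedged illustration only, a per-query language can also be passed through the Python runtime SDK in the `conversationItem`; the resource, project, and deployment values below are placeholders.
-
-```python
-from azure.core.credentials import AzureKeyCredential
-from azure.ai.language.conversations import ConversationAnalysisClient
-
-client = ConversationAnalysisClient(
-    "https://<your-language-resource>.cognitiveservices.azure.com",
-    AzureKeyCredential("<your-key>"),
-)
-
-# The "language" field controls which list synonyms and prebuilt components are
-# matched; omit it to fall back to the project's default language.
-result = client.analyze_conversation(
-    task={
-        "kind": "Conversation",
-        "analysisInput": {
-            "conversationItem": {
-                "id": "1",
-                "participantId": "1",
-                "language": "fr",
-                "text": "Annule ma prochaine réunion"
-            }
-        },
-        "parameters": {
-            "projectName": "<your-project>",
-            "deploymentName": "<your-deployment>"
-        }
-    }
-)
-print(result["result"]["prediction"]["topIntent"])
-```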
---
-## Languages supported by conversational language understanding
-
-Conversational language understanding supports utterances in the following languages:
-
-| Language | Language code |
-| | |
-| Afrikaans | `af` |
-| Amharic | `am` |
-| Arabic | `ar` |
-| Assamese | `as` |
-| Azerbaijani | `az` |
-| Belarusian | `be` |
-| Bulgarian | `bg` |
-| Bengali | `bn` |
-| Breton | `br` |
-| Bosnian | `bs` |
-| Catalan | `ca` |
-| Czech | `cs` |
-| Welsh | `cy` |
-| Danish | `da` |
-| German | `de` |
-| Greek | `el` |
-| English (US) | `en-us` |
-| English (UK) | `en-gb` |
-| Esperanto | `eo` |
-| Spanish | `es` |
-| Estonian | `et` |
-| Basque | `eu` |
-| Persian (Farsi) | `fa` |
-| Finnish | `fi` |
-| French | `fr` |
-| Western Frisian | `fy` |
-| Irish | `ga` |
-| Scottish Gaelic | `gd` |
-| Galician | `gl` |
-| Gujarati | `gu` |
-| Hausa | `ha` |
-| Hebrew | `he` |
-| Hindi | `hi` |
-| Croatian | `hr` |
-| Hungarian | `hu` |
-| Armenian | `hy` |
-| Indonesian | `id` |
-| Italian | `it` |
-| Japanese | `ja` |
-| Javanese | `jv` |
-| Georgian | `ka` |
-| Kazakh | `kk` |
-| Khmer | `km` |
-| Kannada | `kn` |
-| Korean | `ko` |
-| Kurdish (Kurmanji) | `ku` |
-| Kyrgyz | `ky` |
-| Latin | `la` |
-| Lao | `lo` |
-| Lithuanian | `lt` |
-| Latvian | `lv` |
-| Malagasy | `mg` |
-| Macedonian | `mk` |
-| Malayalam | `ml` |
-| Mongolian | `mn` |
-| Marathi | `mr` |
-| Malay | `ms` |
-| Burmese | `my` |
-| Nepali | `ne` |
-| Dutch | `nl` |
-| Norwegian (Bokmal) | `nb` |
-| Oriya | `or` |
-| Punjabi | `pa` |
-| Polish | `pl` |
-| Pashto | `ps` |
-| Portuguese (Brazil) | `pt-br` |
-| Portuguese (Portugal) | `pt-pt` |
-| Romanian | `ro` |
-| Russian | `ru` |
-| Sanskrit | `sa` |
-| Sindhi | `sd` |
-| Sinhala | `si` |
-| Slovak | `sk` |
-| Slovenian | `sl` |
-| Somali | `so` |
-| Albanian | `sq` |
-| Serbian | `sr` |
-| Sundanese | `su` |
-| Swedish | `sv` |
-| Swahili | `sw` |
-| Tamil | `ta` |
-| Telugu | `te` |
-| Thai | `th` |
-| Filipino | `tl` |
-| Turkish | `tr` |
-| Uyghur | `ug` |
-| Ukrainian | `uk` |
-| Urdu | `ur` |
-| Uzbek | `uz` |
-| Vietnamese | `vi` |
-| Xhosa | `xh` |
-| Yiddish | `yi` |
-| Chinese (Simplified) | `zh-hans` |
-| Chinese (Traditional) | `zh-hant` |
-| Zulu | `zu` |
---
-## Next steps
-
-* [Conversational language understanding overview](overview.md)
-* [Service limits](service-limits.md)
cognitive-services Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/conversational-language-understanding/overview.md
- Title: Conversational Language Understanding - Azure Cognitive Services-
-description: Customize an AI model to predict the intentions of utterances, and extract important information from them.
------ Previously updated : 10/26/2022----
-# What is conversational language understanding?
-
-Conversational language understanding is one of the custom features offered by [Azure Cognitive Service for Language](../overview.md). It is a cloud-based API service that applies machine-learning intelligence to enable you to build a natural language understanding component to be used in an end-to-end conversational application.
-
-Conversational language understanding (CLU) enables users to build custom natural language understanding models to predict the overall intention of an incoming utterance and extract important information from it. CLU only provides the intelligence to understand the input text for the client application and doesn't perform any actions. By creating a CLU project, developers can iteratively label utterances, train and evaluate model performance before making it available for consumption. The quality of the labeled data greatly impacts model performance. To simplify building and customizing your model, the service offers a custom web portal that can be accessed through the [Language studio](https://aka.ms/languageStudio). You can easily get started with the service by following the steps in this [quickstart](quickstart.md).
-
-This documentation contains the following article types:
-
-* [Quickstarts](quickstart.md) are getting-started instructions to guide you through making requests to the service.
-* [Concepts](concepts/evaluation-metrics.md) provide explanations of the service functionality and features.
-* [How-to guides](how-to/create-project.md) contain instructions for using the service in more specific or customized ways.
--
-## Example usage scenarios
-
-CLU can be used in multiple scenarios across a variety of industries. Some examples are:
-
-### End-to-end conversational bot
-
-Use CLU to build and train a custom natural language understanding model based on a specific domain and the expected users' utterances. Integrate it with any end-to-end conversational bot so that it can process and analyze incoming text in real time to identify the intention of the text and extract important information from it. Have the bot perform the desired action based on the intention and extracted information. An example would be a customized retail bot for online shopping or food ordering.
-
-### Human assistant bots
-
-One example of a human assistant bot is to help staff improve customer engagements by triaging customer queries and assigning them to the appropriate support engineer. Another example would be a human resources bot in an enterprise that allows employees to communicate in natural language and receive guidance based on the query.
-
-### Command and control application
-
-When you integrate a client application with a speech to text component, users can speak a command in natural language for CLU to process, identify intent, and extract information from the text for the client application to perform an action. This use case has many applications, such as to stop, play, forward, and rewind a song or turn lights on or off.
-
-### Enterprise chat bot
-
-In a large corporation, an enterprise chat bot may handle a variety of employee affairs. It might handle frequently asked questions served by a custom question answering knowledge base, a calendar specific skill served by conversational language understanding, and an interview feedback skill served by LUIS. Use Orchestration workflow to connect all these skills together and appropriately route the incoming requests to the correct service.
--
-## Project development lifecycle
-
-Creating a CLU project typically involves several different steps.
--
-Follow these steps to get the most out of your model:
-
-1. **Define your schema**: Know your data and define the actions and relevant information that need to be recognized from users' input utterances. In this step you create the [intents](glossary.md#intent) that you want to assign to users' utterances, and the relevant [entities](glossary.md#entity) you want extracted.
-
-2. **Label your data**: The quality of data labeling is a key factor in determining model performance.
-
-3. **Train the model**: Your model starts learning from your labeled data.
-
-4. **View the model's performance**: View the evaluation details for your model to determine how well it performs when introduced to new data.
-
-5. **Improve the model**: After reviewing the model's performance, you can then learn how you can improve the model.
-
-6. **Deploy the model**: Deploying a model makes it available for use via the [Runtime API](https://aka.ms/clu-apis).
-
-7. **Predict intents and entities**: Use your custom model to predict intents and entities from users' utterances.
-
-## Reference documentation and code samples
-
-As you use CLU, see the following reference documentation and samples for Azure Cognitive Services for Language:
-
-|Development option / language |Reference documentation |Samples |
-||||
-|REST APIs (Authoring) | [REST API documentation](https://aka.ms/clu-authoring-apis) | |
-|REST APIs (Runtime) | [REST API documentation](https://aka.ms/clu-apis) | |
-|C# (Runtime) | [C# documentation](/dotnet/api/overview/azure/ai.language.conversations-readme) | [C# samples](https://github.com/Azure/azure-sdk-for-net/tree/main/sdk/cognitivelanguage/Azure.AI.Language.Conversations/samples) |
-|Python (Runtime)| [Python documentation](/python/api/overview/azure/ai-language-conversations-readme?view=azure-python-preview&preserve-view=true) | [Python samples](https://github.com/Azure/azure-sdk-for-python/tree/main/sdk/cognitivelanguage/azure-ai-language-conversations/samples) |
-
-## Responsible AI
-
-An AI system includes not only the technology, but also the people who will use it, the people who will be affected by it, and the environment in which it's deployed. Read the transparency note for CLU to learn about responsible AI use and deployment in your systems. You can also see the following articles for more information:
--
-## Next steps
-
-* Use the [quickstart article](quickstart.md) to start using conversational language understanding.
-
-* As you go through the project development lifecycle, review the [glossary](glossary.md) to learn more about the terms used throughout the documentation for this feature.
-
-* Remember to view the [service limits](service-limits.md) for information such as [regional availability](service-limits.md#regional-availability).
-
cognitive-services Prebuilt Component Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/conversational-language-understanding/prebuilt-component-reference.md
- Title: Supported prebuilt entity components-
-description: Learn about which entities can be detected automatically in Conversational Language Understanding
------ Previously updated : 11/02/2021----
-# Supported prebuilt entity components
-
-Conversational Language Understanding allows you to add prebuilt components to entities. Prebuilt components automatically predict common types from utterances. See the [entity components](concepts/entity-components.md) article for information on components.
-
-## Reference
-
-The following prebuilt components are available in Conversational Language Understanding.
-
-| Type | Description | Supported languages |
-| | | |
-| Quantity.Age | Age of a person or thing. For example: "30 years old", "9 months old" | English, Chinese, French, Italian, German, Brazilian Portuguese, Spanish |
-| Quantity.Number | A cardinal number in numeric or text form. For example: "Thirty", "23", "14.5", "Two and a half" | English, Chinese, French, Italian, German, Brazilian Portuguese, Spanish |
-| Quantity.Percentage | A percentage using the symbol % or the word "percent". For example: "10%", "5.6 percent" | English, Chinese, French, Italian, German, Brazilian Portuguese, Spanish |
-| Quantity.Ordinal | An ordinal number in numeric or text form. For example: "first", "second", "last", "10th" | English, Chinese, French, Italian, German, Brazilian Portuguese, Spanish |
-| Quantity.Dimension | Special dimensions such as length, distance, volume, area, and speed. For example: "two miles", "650 square kilometers", "35 km/h" | English, Chinese, French, Italian, German, Brazilian Portuguese, Spanish |
-| Quantity.Temperature | A temperature in Celsius or Fahrenheit. For example: "32F", "34 degrees celsius", "2 deg C" | English, Chinese, French, Italian, German, Brazilian Portuguese, Spanish |
-| Quantity.Currency | Monetary amounts including currency. For example "1000.00 US dollars", "£20.00", "$67.5B" | English, Chinese, French, Italian, German, Brazilian Portuguese, Spanish |
-| Quantity.NumberRange | A numeric interval. For example: "between 25 and 35" | English, Chinese, French, Italian, German, Brazilian Portuguese, Spanish |
-| Datetime | Dates and times. For example: "June 23, 1976", "7 AM", "6:49 PM", "Tomorrow at 7 PM", "Next Week" | English, Chinese, French, Italian, German, Brazilian Portuguese, Spanish |
-| Person.Name | The name of an individual. For example: "Joe", "Ann" | English, Chinese, French, Italian, German, Brazilian Portuguese, Spanish |
-| Email | Email Addresses. For example: "user@contoso.com", "user_name@contoso.com", "user.name@contoso.com" | English, Chinese, French, Italian, German, Brazilian Portuguese, Spanish |
-| Phone Number | US Phone Numbers. For example: "123-456-7890", "+1 123 456 7890", "(123)456-7890" | English, Chinese, French, Italian, German, Brazilian Portuguese, Spanish |
-| URL | Website URLs and Links. | English, Chinese, French, Italian, German, Brazilian Portuguese, Spanish |
-| General.Organization | Companies and corporations. For example: "Microsoft" | English, Chinese, French, Italian, German, Brazilian Portuguese, Spanish |
-| Geography.Location | The name of a location. For example: "Tokyo" | English, Chinese, French, Italian, German, Brazilian Portuguese, Spanish |
-| IP Address | An IP address. For example: "192.168.0.4" | English, Chinese, French, Italian, German, Brazilian Portuguese, Spanish |
--
-## Prebuilt components in multilingual projects
-
-In multilingual conversation projects, you can enable any of the prebuilt components. The component is only predicted if the language of the query is supported by the prebuilt entity. The language is either specified in the request or defaults to the primary language of the application if not provided.
-
-## Next steps
-
-[Entity components](concepts/entity-components.md)
-
cognitive-services Quickstart https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/conversational-language-understanding/quickstart.md
- Title: Quickstart - create a conversational language understanding project-
-description: Quickly start building an AI model to extract information and predict the intentions of text-based utterances.
------ Previously updated : 03/14/2023--
-zone_pivot_groups: usage-custom-language-features
--
-# Quickstart: Conversational language understanding
-
-Use this article to get started with Conversational Language understanding using Language Studio and the REST API. Follow these steps to try out an example.
-------
-## Next steps
-
-* [Learn about entity components](concepts/entity-components.md)
cognitive-services Service Limits https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/conversational-language-understanding/service-limits.md
- Title: Conversational Language Understanding limits-
-description: Learn about the data, region, and throughput limits for Conversational Language Understanding
------ Previously updated : 10/12/2022----
-# Conversational language understanding limits
-
-Use this article to learn about the data and service limits when using conversational language understanding.
-
-## Language resource limits
-
-* Your Language resource must be one of the following [pricing tiers](https://azure.microsoft.com/pricing/details/cognitive-services/language-service/):
-
- |Tier|Description|Limit|
- |--|--|--|
- |F0|Free tier|You are only allowed **one** F0 Language resource **per subscription**.|
- |S |Paid tier|You can have up to 100 Language resources in the S tier per region.|
--
-See [pricing](https://azure.microsoft.com/pricing/details/cognitive-services/language-service/) for more information.
-
-* You can have up to **500** projects per resource.
-
-* Project names have to be unique within the same resource across all custom features.
-
-### Regional availability
-
-Conversational language understanding is only available in some Azure regions. Some regions are available for **both authoring and prediction**, while other regions are **prediction only**. Language resources in authoring regions allow you to create, edit, train, and deploy your projects. Language resources in prediction regions allow you to get [predictions from a deployment](../concepts/custom-features/multi-region-deployment.md).
-
-| Region | Authoring | Prediction |
-|--|--|-|
-| Australia East | ✓ | ✓ |
-| Brazil South | | ✓ |
-| Canada Central | | ✓ |
-| Central India | ✓ | ✓ |
-| Central US | | ✓ |
-| China East 2 | ✓ | ✓ |
-| China North 2 | | ✓ |
-| East Asia | | ✓ |
-| East US | ✓ | ✓ |
-| East US 2 | ✓ | ✓ |
-| France Central | | ✓ |
-| Japan East | | ✓ |
-| Japan West | | ✓ |
-| Jio India West | | ✓ |
-| Korea Central | | ✓ |
-| North Central US | | ✓ |
-| North Europe | ✓ | ✓ |
-| Norway East | | ✓ |
-| Qatar Central | | ✓ |
-| South Africa North | | ✓ |
-| South Central US | ✓ | ✓ |
-| Southeast Asia | | ✓ |
-| Sweden Central | | ✓ |
-| Switzerland North | ✓ | ✓ |
-| UAE North | | ✓ |
-| UK South | ✓ | ✓ |
-| West Central US | | ✓ |
-| West Europe | ✓ | ✓ |
-| West US | | ✓ |
-| West US 2 | ✓ | ✓ |
-| West US 3 | ✓ | ✓ |
-
-## API limits
-
-|Item|Request type| Maximum limit|
-|:-|:-|:-|
-|Authoring API|POST|10 per minute|
-|Authoring API|GET|100 per minute|
-|Prediction API|GET/POST|1,000 per minute|
-
-## Quota limits
-
-|Pricing tier |Item |Limit |
-| | | |
-|F|Training time| 1 hour per month |
-|S|Training time| Unlimited, Pay as you go |
-|F|Prediction Calls| 5,000 requests per month |
-|S|Prediction Calls| Unlimited, Pay as you go |
-
-## Data limits
-
-The following limits are observed for conversational language understanding.
-
-|Item|Lower Limit| Upper Limit |
-| | | |
-|Number of utterances per project | 1 | 25,000|
-|Utterance length in characters (authoring) | 1 | 500 |
-|Utterance length in characters (prediction) | 1 | 1000 |
-|Number of intents per project | 1 | 500|
-|Number of entities per project | 0 | 350|
-|Number of list synonyms per entity| 0 | 20,000 |
-|Number of list synonyms per project| 0 | 2,000,000 |
-|Number of prebuilt components per entity| 0 | 7 |
-|Number of regular expressions per project| 0 | 20 |
-|Number of trained models per project| 0 | 10 |
-|Number of deployments per project| 0 | 10 |
-
-## Naming limits
-
-| Item | Limits |
-|--|--|
-| Project name | You can only use letters `(a-z, A-Z)`, and numbers `(0-9)` , symbols `_ . -`, with no spaces. Maximum allowed length is 50 characters. |
-| Model name | You can only use letters `(a-z, A-Z)`, numbers `(0-9)` and symbols `_ . -`. Maximum allowed length is 50 characters. |
-| Deployment name | You can only use letters `(a-z, A-Z)`, numbers `(0-9)` and symbols `_ . -`. Maximum allowed length is 50 characters. |
-| Intent name| You can only use letters `(a-z, A-Z)`, numbers `(0-9)` and all symbols except ":", `$ & % * ( ) + ~ # / ?`. Maximum allowed length is 50 characters.|
-| Entity name| You can only use letters `(a-z, A-Z)`, numbers `(0-9)` and all symbols except ":", `$ & % * ( ) + ~ # / ?`. Maximum allowed length is 50 characters.|
-
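-As a rough sketch only (not an official validation routine), the project, model, and deployment name rules in the table can be expressed as a regular expression; the pattern below is an interpretation of the documented limits, not an API contract.
-
-```python
-import re
-
-# Letters, numbers, and the symbols _ . - with no spaces, up to 50 characters.
-NAME_PATTERN = re.compile(r"^[A-Za-z0-9_.\-]{1,50}$")
-
-def is_valid_name(name: str) -> bool:
-    """Return True if the name satisfies the documented project/model/deployment name limits."""
-    return bool(NAME_PATTERN.fullmatch(name))
-
-print(is_valid_name("FlightBooking_v1"))  # True
-print(is_valid_name("my project"))        # False: contains a space
-```
-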
-## Next steps
-
-* [Conversational language understanding overview](overview.md)
cognitive-services Bot Framework https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/conversational-language-understanding/tutorials/bot-framework.md
- Title: Add natural language understanding to your bot in Bot Framework SDK using conversational language understanding
-description: Learn how to train a bot to understand natural language.
-keywords: conversational language understanding, bot framework, bot, language understanding, nlu
------- Previously updated : 05/25/2022--
-# Integrate conversational language understanding with Bot Framework
-
-A dialog is the interaction that occurs between user queries and an application. Dialog management is the process that defines the automatic behavior that should occur for different customer interactions. While conversational language understanding can classify intents and extract information through entities, the [Bot Framework SDK](/azure/bot-service/bot-service-overview) allows you to configure the applied logic for the responses returned from it.
-
-This tutorial will explain how to integrate your own conversational language understanding (CLU) project for a flight booking project in the Bot Framework SDK that includes three intents: **Book Flight**, **Get Weather**, and **None**.
--
-## Prerequisites
--- Create a [Language resource](https://portal.azure.com/#create/Microsoft.CognitiveServicesTextAnalytics) in the Azure portal to get your key and endpoint. After it deploys, select **Go to resource**.
- - You will need the key and endpoint from the resource you create to connect your bot to the API. You'll paste your key and endpoint into the code below later in the tutorial.
-- Download the **CoreBotWithCLU** [sample](https://aka.ms/clu-botframework-overview).
- - Clone the entire samples repository to get access to this solution.
-
-## Import a project in conversational language understanding
-
-1. Download the [FlightBooking.json](https://aka.ms/clu-botframework-json) file in the **Core Bot with CLU** sample, in the _Cognitive Models_ folder.
-2. Sign into the [Language Studio](https://language.cognitive.azure.com/) and select your Language resource.
-3. Navigate to [Conversational Language Understanding](https://language.cognitive.azure.com/clu/projects) and click on the service. This will route you to the projects page. Click the Import button next to the Create New Project button. Import the FlightBooking.json file with the project name **FlightBooking**. This will automatically import the CLU project with all the intents, entities, and utterances.
-
- :::image type="content" source="../media/import.png" alt-text="A screenshot showing where to import a J son file." lightbox="../media/import.png":::
-
-4. Once the project is loaded, click on **Training jobs** on the left. Select **Start a training job**, provide the model name **v1**, and select **Train**. All other settings, such as **Standard Training** and the evaluation settings, can be left as is.
-
- :::image type="content" source="../media/train-model.png" alt-text="A screenshot of the training page in C L U." lightbox="../media/train-model.png":::
-
-5. Once training is complete, click on **Deploying a model** on the left. Click on **Add deployment**, create a new deployment with the name **Testing**, and assign model **v1** to the deployment.
-
- :::image type="content" source="../media/deploy-model-tutorial.png" alt-text="A screenshot of the deployment page within the deploy model screen in C L U." lightbox="../media/deploy-model-tutorial.png":::
-
-## Update the settings file
-
-Now that your CLU project is deployed and ready, update the settings that will connect to the deployment.
-
-In the **Core Bot** sample, update your [appsettings.json](https://aka.ms/clu-botframework-settings) with the appropriate values.
-- The _CluProjectName_ is **FlightBooking**.
-- The _CluDeploymentName_ is **Testing**.
-- The _CluAPIKey_ can be either of the keys in the **Keys and Endpoint** section for your Language resource in the [Azure portal](https://portal.azure.com). You can also copy your key from the Project Settings tab in CLU.
-- The _CluAPIHostName_ is the endpoint found in the **Keys and Endpoint** section for your Language resource in the Azure portal. Note the format should be ```<Language_Resource_Name>.cognitiveservices.azure.com``` without `https://`.
-```json
-{
- "MicrosoftAppId": "",
- "MicrosoftAppPassword": "",
- "CluProjectName": "",
- "CluDeploymentName": "",
- "CluAPIKey": "",
- "CluAPIHostName": ""
-}
-```
-
-## Identify integration points
-
-In the Core Bot sample, you can check out the **FlightBookingRecognizer.cs** file. Here is where the CLU API call to the deployed endpoint is made to retrieve the CLU prediction for intents and entities.
-
-```csharp
- public FlightBookingRecognizer(IConfiguration configuration)
- {
- var cluIsConfigured = !string.IsNullOrEmpty(configuration["CluProjectName"]) && !string.IsNullOrEmpty(configuration["CluDeploymentName"]) && !string.IsNullOrEmpty(configuration["CluAPIKey"]) && !string.IsNullOrEmpty(configuration["CluAPIHostName"]);
- if (cluIsConfigured)
- {
- var cluApplication = new CluApplication(
- configuration["CluProjectName"],
- configuration["CluDeploymentName"],
- configuration["CluAPIKey"],
- "https://" + configuration["CluAPIHostName"]);
- // Set the recognizer options depending on which endpoint version you want to use.
- var recognizerOptions = new CluOptions(cluApplication)
- {
- Language = "en"
- };
-
- _recognizer = new CluRecognizer(recognizerOptions);
- }
-```
--
-Under the Dialogs folder, find the **MainDialog**, which uses the following to make a CLU prediction.
-
-```csharp
- var cluResult = await _cluRecognizer.RecognizeAsync<FlightBooking>(stepContext.Context, cancellationToken);
-```
-
-The logic that determines what to do with the CLU result follows it.
-
-```csharp
- switch (cluResult.TopIntent().intent)
- {
- case FlightBooking.Intent.BookFlight:
- // Initialize BookingDetails with any entities we may have found in the response.
- var bookingDetails = new BookingDetails()
- {
- Destination = cluResult.Entities.toCity,
- Origin = cluResult.Entities.fromCity,
- TravelDate = cluResult.Entities.flightDate,
- };
-
- // Run the BookingDialog giving it whatever details we have from the CLU call, it will fill out the remainder.
- return await stepContext.BeginDialogAsync(nameof(BookingDialog), bookingDetails, cancellationToken);
-
- case FlightBooking.Intent.GetWeather:
- // We haven't implemented the GetWeatherDialog so we just display a TODO message.
- var getWeatherMessageText = "TODO: get weather flow here";
- var getWeatherMessage = MessageFactory.Text(getWeatherMessageText, getWeatherMessageText, InputHints.IgnoringInput);
- await stepContext.Context.SendActivityAsync(getWeatherMessage, cancellationToken);
- break;
-
- default:
- // Catch all for unhandled intents
- var didntUnderstandMessageText = $"Sorry, I didn't get that. Please try asking in a different way (intent was {cluResult.TopIntent().intent})";
- var didntUnderstandMessage = MessageFactory.Text(didntUnderstandMessageText, didntUnderstandMessageText, InputHints.IgnoringInput);
- await stepContext.Context.SendActivityAsync(didntUnderstandMessage, cancellationToken);
- break;
- }
-```
-
-## Run the bot locally
-
-Run the sample locally on your machine **OR** run the bot from a terminal or from Visual Studio:
-
-### Run the bot from a terminal
-
-From a terminal, navigate to the `cognitive-service-language-samples/CoreBotWithCLU` folder.
-
-Then run the following command
-
-```bash
-# run the bot
-dotnet run
-```
-
-### Run the bot from Visual Studio
-
-1. Launch Visual Studio
-1. From the top navigation menu, select **File**, **Open**, then **Project/Solution**
-1. Navigate to the `cognitive-service-language-samples/CoreBotWithCLU` folder
-1. Select the `CoreBotCLU.csproj` file
-1. Press `F5` to run the project
--
-## Testing the bot using Bot Framework Emulator
-
-[Bot Framework Emulator](https://github.com/microsoft/botframework-emulator) is a desktop application that allows bot developers to test and debug their bots on localhost or running remotely through a tunnel.
-- Install the [latest Bot Framework Emulator](https://github.com/Microsoft/BotFramework-Emulator/releases).
-## Connect to the bot using Bot Framework Emulator
-
-1. Launch Bot Framework Emulator
-1. Select **File**, then **Open Bot**
-1. Enter a Bot URL of `http://localhost:3978/api/messages` and press Connect and wait for it to load
-1. You can now query for different examples such as "Travel from Cairo to Paris" and observe the results
-
-If the top intent returned from CLU resolves to "_Book flight_", your bot will ask additional questions until it has enough information stored to create a travel booking. At that point, it will return this booking information back to your user.
-
-## Next steps
-
-Learn more about the [Bot Framework SDK](/azure/bot-service/bot-service-overview).
--
cognitive-services Data Formats https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/custom-named-entity-recognition/concepts/data-formats.md
- Title: Custom NER data formats-
-description: Learn about the data formats accepted by custom NER.
------ Previously updated : 10/17/2022----
-# Accepted custom NER data formats
-
-If you are trying to [import your data](../how-to/create-project.md#import-project) into custom NER, it has to follow a specific format. If you don't have data to import, you can [create your project](../how-to/create-project.md) and use Language Studio to [label your documents](../how-to/tag-data.md).
-
-## Labels file format
-
-Your Labels file should be in the `json` format below to be used in [importing](../how-to/create-project.md#import-project) your labels into a project.
-
-```json
-{
- "projectFileVersion": "2022-05-01",
- "stringIndexType": "Utf16CodeUnit",
- "metadata": {
- "projectKind": "CustomEntityRecognition",
- "storageInputContainerName": "{CONTAINER-NAME}",
- "projectName": "{PROJECT-NAME}",
- "multilingual": false,
- "description": "Project-description",
- "language": "en-us",
- "settings": {}
- },
- "assets": {
- "projectKind": "CustomEntityRecognition",
- "entities": [
- {
- "category": "Entity1"
- },
- {
- "category": "Entity2"
- }
- ],
- "documents": [
- {
- "location": "{DOCUMENT-NAME}",
- "language": "{LANGUAGE-CODE}",
- "dataset": "{DATASET}",
- "entities": [
- {
- "regionOffset": 0,
- "regionLength": 500,
- "labels": [
- {
- "category": "Entity1",
- "offset": 25,
- "length": 10
- },
- {
- "category": "Entity2",
- "offset": 120,
- "length": 8
- }
- ]
- }
- ]
- },
- {
- "location": "{DOCUMENT-NAME}",
- "language": "{LANGUAGE-CODE}",
- "dataset": "{DATASET}",
- "entities": [
- {
- "regionOffset": 0,
- "regionLength": 100,
- "labels": [
- {
- "category": "Entity2",
- "offset": 20,
- "length": 5
- }
- ]
- }
- ]
- }
- ]
- }
-}
-
-```
-
-|Key |Placeholder |Value | Example |
-|||-|--|
-| `multilingual` | `true`| A boolean value that enables you to have documents in multiple languages in your dataset and when your model is deployed you can query the model in any supported language (not necessarily included in your training documents). See [language support](../language-support.md#multi-lingual-option) to learn more about multilingual support. | `true`|
-|`projectName`|`{PROJECT-NAME}`|Project name|`myproject`|
-| `storageInputContainerName`|`{CONTAINER-NAME}`|Container name|`mycontainer`|
-| `entities` | | Array containing all the entity types you have in the project. These are the entity types that will be extracted from your documents.| |
-| `documents` | | Array containing all the documents in your project and list of the entities labeled within each document. | [] |
-| `location` | `{DOCUMENT-NAME}` | The location of the documents in the storage container. Since all the documents are in the root of the container, this should be the document name.|`doc1.txt`|
-| `dataset` | `{DATASET}` | The dataset to which this document is assigned when the data is split before training. Learn more about data splitting [here](../how-to/train-model.md#data-splitting). Possible values for this field are `Train` and `Test`. |`Train`|
-| `regionOffset` | | The inclusive character position of the start of the text. |`0`|
-| `regionLength` | | The length of the bounding box in terms of UTF16 characters. Training only considers the data in this region. |`500`|
-| `category` | | The type of entity associated with the span of text specified. | `Entity1`|
-| `offset` | | The start position for the entity text. | `25`|
-| `length` | | The length of the entity in terms of UTF16 characters. | `20`|
-| `language` | `{LANGUAGE-CODE}` | A string specifying the language code for the document used in your project. If your project is a multilingual project, choose the language code of the majority of the documents. See [Language support](../language-support.md) for more information about supported language codes. |`en-us`|
---
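-
-Because `stringIndexType` is `Utf16CodeUnit`, the `offset`, `length`, `regionOffset`, and `regionLength` values count UTF-16 code units rather than Unicode code points. The following minimal sketch (the document text and entity are made-up examples, not from this article) shows one way to compute such values in Python:
-
-```python
-def utf16_offset_and_length(document_text: str, entity_text: str):
-    """Return (offset, length) of the first occurrence of entity_text,
-    measured in UTF-16 code units as expected by the labels file format."""
-    start = document_text.index(entity_text)
-    # len(s.encode("utf-16-le")) // 2 counts UTF-16 code units.
-    offset = len(document_text[:start].encode("utf-16-le")) // 2
-    length = len(entity_text.encode("utf-16-le")) // 2
-    return offset, length
-
-# Made-up example: the emoji counts as two UTF-16 code units.
-text = "Contract signed by John Smith 😀 in Frederick."
-print(utf16_offset_and_length(text, "Frederick"))  # (36, 9)
-```
-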
-## Next steps
-* You can import your labeled data into your project directly. Learn how to [import project](../how-to/create-project.md#import-project)
-* See the [how-to article](../how-to/tag-data.md) more information about labeling your data. When you're done labeling your data, you can [train your model](../how-to/train-model.md).
cognitive-services Evaluation Metrics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/custom-named-entity-recognition/concepts/evaluation-metrics.md
- Title: Custom NER evaluation metrics-
-description: Learn about evaluation metrics in Custom Named Entity Recognition (NER)
------ Previously updated : 08/08/2022----
-# Evaluation metrics for custom named entity recognition models
-
-Your [dataset is split](../how-to/train-model.md#data-splitting) into two parts: a set for training, and a set for testing. The training set is used to train the model, while the testing set is used to test the model after training and calculate its performance. The testing set is not introduced to the model through the training process, to make sure that the model is tested on new data.
-
-Model evaluation is triggered automatically after training is completed successfully. The evaluation process starts by using the trained model to predict user defined entities for documents in the test set, and compares them with the provided data tags (which establishes a baseline of truth). The results are returned so you can review the model's performance. For evaluation, custom NER uses the following metrics:
-
-* **Precision**: Measures how precise/accurate your model is. It is the ratio between the correctly identified positives (true positives) and all identified positives. The precision metric reveals how many of the predicted entities are correctly labeled.
-
- `Precision = #True_Positive / (#True_Positive + #False_Positive)`
-
-* **Recall**: Measures the model's ability to predict actual positive classes. It is the ratio between the predicted true positives and what was actually tagged. The recall metric reveals how many of the actual entities are correctly predicted.
-
- `Recall = #True_Positive / (#True_Positive + #False_Negatives)`
-
-* **F1 score**: The F1 score is a function of Precision and Recall. It's needed when you seek a balance between Precision and Recall.
-
- `F1 Score = 2 * Precision * Recall / (Precision + Recall)` <br>
-
->[!NOTE]
-> Precision, recall and F1 score are calculated for each entity separately (*entity-level* evaluation) and for the model collectively (*model-level* evaluation).
-
-## Model-level and entity-level evaluation metrics
-
-Precision, recall, and F1 score are calculated for each entity separately (entity-level evaluation) and for the model collectively (model-level evaluation).
-
-The definitions of precision, recall, and evaluation are the same for both entity-level and model-level evaluations. However, the counts for *True Positives*, *False Positives*, and *False Negatives* can differ. For example, consider the following text.
-
-### Example
-
-*The first party of this contract is John Smith, resident of 5678 Main Rd., City of Frederick, state of Nebraska. And the second party is Forrest Ray, resident of 123-345 Integer Rd., City of Corona, state of New Mexico. There is also Fannie Thomas resident of 7890 River Road, city of Colorado Springs, State of Colorado.*
-
-The model extracting entities from this text could have the following predictions:
-
-| Entity | Predicted as | Actual type |
-|--|--|--|
-| John Smith | Person | Person |
-| Frederick | Person | City |
-| Forrest | City | Person |
-| Fannie Thomas | Person | Person |
-| Colorado Springs | City | City |
-
-### Entity-level evaluation for the *person* entity
-
-The model would have the following entity-level evaluation, for the *person* entity:
-
-| Key | Count | Explanation |
-|--|--|--|
-| True Positive | 2 | *John Smith* and *Fannie Thomas* were correctly predicted as *person*. |
-| False Positive | 1 | *Frederick* was incorrectly predicted as *person* while it should have been *city*. |
-| False Negative | 1 | *Forrest* was incorrectly predicted as *city* while it should have been *person*. |
-
-* **Precision**: `#True_Positive / (#True_Positive + #False_Positive)` = `2 / (2 + 1) = 0.67`
-* **Recall**: `#True_Positive / (#True_Positive + #False_Negatives)` = `2 / (2 + 1) = 0.67`
-* **F1 Score**: `2 * Precision * Recall / (Precision + Recall)` = `(2 * 0.67 * 0.67) / (0.67 + 0.67) = 0.67`
-
-### Entity-level evaluation for the *city* entity
-
-The model would have the following entity-level evaluation, for the *city* entity:
-
-| Key | Count | Explanation |
-|--|--|--|
-| True Positive | 1 | *Colorado Springs* was correctly predicted as *city*. |
-| False Positive | 1 | *Forrest* was incorrectly predicted as *city* while it should have been *person*. |
-| False Negative | 1 | *Frederick* was incorrectly predicted as *person* while it should have been *city*. |
-
-* **Precision** = `#True_Positive / (#True_Positive + #False_Positive)` = `1 / (1 + 1) = 0.5`
-* **Recall** = `#True_Positive / (#True_Positive + #False_Negatives)` = `1 / (1 + 1) = 0.5`
-* **F1 Score** = `2 * Precision * Recall / (Precision + Recall)` = `(2 * 0.5 * 0.5) / (0.5 + 0.5) = 0.5`
-
-### Model-level evaluation for the collective model
-
-The model would have the following evaluation for the model in its entirety:
-
-| Key | Count | Explanation |
-|--|--|--|
-| True Positive | 3 | *John Smith* and *Fannie Thomas* were correctly predicted as *person*. *Colorado Springs* was correctly predicted as *city*. This is the sum of true positives for all entities. |
-| False Positive | 2 | *Forrest* was incorrectly predicted as *city* while it should have been *person*. *Frederick* was incorrectly predicted as *person* while it should have been *city*. This is the sum of false positives for all entities. |
-| False Negative | 2 | *Forrest* was incorrectly predicted as *city* while it should have been *person*. *Frederick* was incorrectly predicted as *person* while it should have been *city*. This is the sum of false negatives for all entities. |
-
-* **Precision** = `#True_Positive / (#True_Positive + #False_Positive)` = `3 / (3 + 2) = 0.6`
-* **Recall** = `#True_Positive / (#True_Positive + #False_Negatives)` = `3 / (3 + 2) = 0.6`
-* **F1 Score** = `2 * Precision * Recall / (Precision + Recall)` = `(2 * 0.6 * 0.6) / (0.6 + 0.6) = 0.6`
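
As a quick check of the arithmetic above, the following sketch recomputes the entity-level and model-level metrics from the prediction table in this example. It's illustrative only and not part of the service:

```python
# Illustrative only: recompute the metrics for the example predictions above.
predictions = [
    ("Person", "Person"),   # John Smith
    ("Person", "City"),     # Frederick
    ("City",   "Person"),   # Forrest
    ("Person", "Person"),   # Fannie Thomas
    ("City",   "City"),     # Colorado Springs
]

def metrics(tp, fp, fn):
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    return round(precision, 2), round(recall, 2), round(f1, 2)

entity_types = sorted({p for p, _ in predictions} | {a for _, a in predictions})
total_tp = total_fp = total_fn = 0
for entity in entity_types:
    tp = sum(1 for p, a in predictions if p == a == entity)
    fp = sum(1 for p, a in predictions if p == entity and a != entity)
    fn = sum(1 for p, a in predictions if a == entity and p != entity)
    total_tp, total_fp, total_fn = total_tp + tp, total_fp + fp, total_fn + fn
    print(entity, metrics(tp, fp, fn))      # City (0.5, 0.5, 0.5), Person (0.67, 0.67, 0.67)

print("model", metrics(total_tp, total_fp, total_fn))   # model (0.6, 0.6, 0.6)
```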
-
-## Interpreting entity-level evaluation metrics
-
-So what does it actually mean to have high precision or high recall for a certain entity?
-
-| Recall | Precision | Interpretation |
-|--|--|--|
-| High | High | This entity is handled well by the model. |
-| Low | High | The model cannot always extract this entity, but when it does it is with high confidence. |
-| High | Low | The model extracts this entity well, however it is with low confidence as it is sometimes extracted as another type. |
-| Low | Low | This entity type is poorly handled by the model, because it is not usually extracted. When it is, it is not with high confidence. |
-
-## Guidance
-
-After you train your model, you'll see some guidance and recommendations on how to improve the model. It's recommended to have a model covering all points in the guidance section.
-
-* Training set has enough data: When an entity type has fewer than 15 labeled instances in the training data, it can lead to lower accuracy due to the model not being adequately trained on these cases. In this case, consider adding more labeled data in the training set. You can check the *data distribution* tab for more guidance.
-
-* All entity types are present in test set: When the testing data lacks labeled instances for an entity type, the model's test performance may become less comprehensive due to untested scenarios. You can check the *test set data distribution* tab for more guidance.
-
-* Entity types are balanced within training and test sets: When sampling bias causes an inaccurate representation of an entity type's frequency, it can lead to lower accuracy due to the model expecting that entity type to occur too often or too little. You can check the *data distribution* tab for more guidance.
-
-* Entity types are evenly distributed between training and test sets: When the mix of entity types doesn't match between training and test sets, it can lead to lower testing accuracy due to the model being trained differently from how it's being tested. You can check the *data distribution* tab for more guidance.
-
-* Unclear distinction between entity types in training set: When the training data is similar for multiple entity types, it can lead to lower accuracy because the entity types may be frequently misclassified as each other. Review the following entity types and consider merging them if they're similar. Otherwise, add more examples to better distinguish them from each other. You can check the *confusion matrix* tab for more guidance.
--
-## Confusion matrix
-
-A Confusion matrix is an N x N matrix used for model performance evaluation, where N is the number of entities.
-The matrix compares the expected labels with the ones predicted by the model.
-This gives a holistic view of how well the model is performing and what kinds of errors it is making.
-
-You can use the Confusion matrix to identify entities that are too close to each other and often get mistaken (ambiguity). In this case consider merging these entity types together. If that isn't possible, consider adding more tagged examples of both entities to help the model differentiate between them.
-
-The highlighted diagonal in the image below is the correctly predicted entities, where the predicted tag is the same as the actual tag.
--
-You can calculate the entity-level and model-level evaluation metrics from the confusion matrix:
-
-* The values in the diagonal are the *true positive* values of each entity.
-* The sum of the values in the entity rows (excluding the diagonal) is the *false positive* of that entity.
-* The sum of the values in the entity columns (excluding the diagonal) is the *false negative* of that entity.
-
-Similarly,
-
-* The *true positive* of the model is the sum of *true positives* for all entities.
-* The *false positive* of the model is the sum of *false positives* for all entities.
-* The *false negative* of the model is the sum of *false negatives* for all entities.
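
The following sketch illustrates these relationships on a hypothetical 2 x 2 confusion matrix, using the row/column convention described above (rows hold the model's predictions). It's an illustration, not the service's implementation:

```python
import numpy as np

# Hypothetical 2x2 confusion matrix following the convention described above:
# rows = predicted entity type, columns = actual entity type.
labels = ["Person", "City"]
cm = np.array([
    [2, 1],   # predicted Person: 2 actually Person, 1 actually City
    [1, 1],   # predicted City:   1 actually Person, 1 actually City
])

tp = np.diag(cm)              # true positives per entity (the diagonal)
fp = cm.sum(axis=1) - tp      # row sums minus diagonal -> false positives per entity
fn = cm.sum(axis=0) - tp      # column sums minus diagonal -> false negatives per entity

for i, name in enumerate(labels):
    print(name, "TP:", tp[i], "FP:", fp[i], "FN:", fn[i])

# Model-level counts are the sums over all entities.
print("model TP:", tp.sum(), "FP:", fp.sum(), "FN:", fn.sum())
```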
-
-## Next steps
-
-* [View a model's performance in Language Studio](../how-to/view-model-evaluation.md)
-* [Train a model](../how-to/train-model.md)
cognitive-services Fail Over https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/custom-named-entity-recognition/fail-over.md
- Title: Back up and recover your custom Named Entity Recognition (NER) models-
-description: Learn how to save and recover your custom NER models.
------ Previously updated : 04/25/2022----
-# Back up and recover your custom NER models
-
-When you create a Language resource, you specify a region for it to be created in. From then on, your resource and all of the operations related to it take place in the specified Azure server region. It's rare, but not impossible, to encounter a network issue that affects an entire region. If your solution needs to always be available, then you should design it to fail over into another region. This requires two Azure Language resources in different regions and synchronizing custom models across them.
-
-If your app or business depends on the use of a custom NER model, we recommend that you create a replica of your project in an additional supported region. If a regional outage occurs, you can then access your model in the other fail-over region where you replicated your project.
-
-Replicating a project means that you export your project metadata and assets, and import them into a new project. This only makes a copy of your project settings and tagged data. You still need to [train](./how-to/train-model.md) and [deploy](how-to/deploy-model.md) the models to be available for use with [prediction APIs](https://aka.ms/ct-runtime-swagger).
-
-In this article, you'll learn how to use the export and import APIs to replicate your project from one resource to another in a different supported geographical region, along with guidance on keeping your projects in sync and the changes needed to your runtime consumption.
-
-## Prerequisites
-
-* Two Azure Language resources in different Azure regions. [Create your resources](./how-to/create-project.md#create-a-language-resource) and connect them to an Azure storage account. It's recommended that you connect each of your Language resources to different storage accounts. Each storage account should be located in the same respective regions that your separate Language resources are in. You can follow the [quickstart](./quickstart.md?pivots=rest-api#create-a-new-azure-language-resource-and-azure-storage-account) to create an additional Language resource and storage account.
--
-## Get your resource keys endpoint
-
-Use the following steps to get the keys and endpoint of your primary and secondary resources. These will be used in the following steps.
--
-> [!TIP]
-> Keep a note of keys and endpoints for both primary and secondary resources as well as the primary and secondary container names. Use these values to replace the following placeholders:
-`{PRIMARY-ENDPOINT}`, `{PRIMARY-RESOURCE-KEY}`, `{PRIMARY-CONTAINER-NAME}`, `{SECONDARY-ENDPOINT}`, `{SECONDARY-RESOURCE-KEY}`, and `{SECONDARY-CONTAINER-NAME}`.
-> Also take note of your project name, your model name and your deployment name. Use these values to replace the following placeholders: `{PROJECT-NAME}`, `{MODEL-NAME}` and `{DEPLOYMENT-NAME}`.
-
-## Export your primary project assets
-
-Start by exporting the project assets from the project in your primary resource.
-
-### Submit export job
-
-Replace the placeholders in the following request with your `{PRIMARY-ENDPOINT}` and `{PRIMARY-RESOURCE-KEY}` that you obtained in the first step.
--
-### Get export job status
-
-Replace the placeholders in the following request with your `{PRIMARY-ENDPOINT}` and `{PRIMARY-RESOURCE-KEY}` that you obtained in the first step.
---
-Copy the response body as you will use it as the body for the next import job.
-
-## Import to a new project
-
-Now go ahead and import the exported project assets in your new project in the secondary region so you can replicate it.
-
-### Submit import job
-
-Replace the placeholders in the following request with your `{SECONDARY-ENDPOINT}`, `{SECONDARY-RESOURCE-KEY}`, and `{SECONDARY-CONTAINER-NAME}` that you obtained in the first step.
--
-### Get import job status
-
-Replace the placeholders in the following request with your `{SECONDARY-ENDPOINT}` and `{SECONDARY-RESOURCE-KEY}` that you obtained in the first step.
---
-## Train your model
-
-After importing your project, you've only copied the project's assets and metadata. You still need to train your model, which will incur usage on your account.
-
-### Submit training job
-
-Replace the placeholders in the following request with your `{SECONDARY-ENDPOINT}` and `{SECONDARY-RESOURCE-KEY}` that you obtained in the first step.
---
-### Get training status
-
-Replace the placeholders in the following request with your `{SECONDARY-ENDPOINT}` and `{SECONDARY-RESOURCE-KEY}` that you obtained in the first step.
--
-## Deploy your model
-
-This is the step where you make your trained model available for consumption via the [runtime prediction API](https://aka.ms/ct-runtime-swagger).
-
-> [!TIP]
-> Use the same deployment name as your primary project for easier maintenance and minimal changes to your system to handle redirecting your traffic.
-
-### Submit deployment job
-
-Replace the placeholders in the following request with your `{SECONDARY-ENDPOINT}` and `{SECONDARY-RESOURCE-KEY}` that you obtained in the first step.
--
-### Get the deployment status
-
-Replace the placeholders in the following request with your `{SECONDARY-ENDPOINT}` and `{SECONDARY-RESOURCE-KEY}` that you obtained in the first step.
--
-## Changes in calling the runtime
-
-Within your system, at the step where you call the [runtime prediction API](https://aka.ms/ct-runtime-swagger), check the response code returned from the submit task API. If you observe a **consistent** failure in submitting the request, this could indicate an outage in your primary region. A single failure doesn't mean an outage; it may be a transient issue. Retry submitting the job through the secondary resource you have created. For the second request use your `{SECONDARY-ENDPOINT}` and `{SECONDARY-RESOURCE-KEY}`. If you have followed the steps above, `{PROJECT-NAME}` and `{DEPLOYMENT-NAME}` are the same, so no changes are required to the request body.
-
-If you revert to using your secondary resource, you will observe a slight increase in latency because of the difference in regions where your model is deployed.
-
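A minimal sketch of this failover logic is shown below. The request path and body are whatever you already send to the prediction API (they're identical for both resources), and the placeholder values follow the naming used earlier in this article; treat this as an illustration rather than production-ready retry logic:

```python
import requests

# Placeholders described earlier in this article.
PRIMARY = {"endpoint": "{PRIMARY-ENDPOINT}", "key": "{PRIMARY-RESOURCE-KEY}"}
SECONDARY = {"endpoint": "{SECONDARY-ENDPOINT}", "key": "{SECONDARY-RESOURCE-KEY}"}

def submit_task(resource: dict, path: str, body: dict) -> requests.Response:
    """Submit the prediction job to one Language resource."""
    return requests.post(
        resource["endpoint"].rstrip("/") + path,
        headers={"Ocp-Apim-Subscription-Key": resource["key"]},
        json=body,
        timeout=30,
    )

def submit_with_failover(path: str, body: dict, retries: int = 3) -> requests.Response:
    """Try the primary region a few times; only fail over on consistent failures."""
    for _ in range(retries):
        try:
            response = submit_task(PRIMARY, path, body)
            if response.status_code < 500:
                return response          # success or a client error; not a regional outage
        except requests.RequestException:
            pass                         # transient network error; retry
    # Consistent failures: assume a primary-region outage and use the secondary resource.
    return submit_task(SECONDARY, path, body)
```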
-## Check if your projects are out of sync
-
-Maintaining the freshness of both projects is an important part of the process. You need to frequently check if any updates were made to your primary project so that you move them over to your secondary project. This way if your primary region fails and you move into the secondary region you should expect similar model performance since it already contains the latest updates. Setting the frequency of checking if your projects are in sync is an important choice. We recommend that you do this check daily in order to guarantee the freshness of data in your secondary model.
-
-### Get project details
-
-Use the following URL to get your project details. One of the keys returned in the body indicates the last modified date of the project.
-Repeat the following step twice, once for your primary project and once for your secondary project, and compare the timestamps returned for both of them to check if they are out of sync.
-
- [!INCLUDE [get project details](./includes/rest-api/get-project-details.md)]
--
-Repeat the same steps for your replicated project using `{SECONDARY-ENDPOINT}` and `{SECONDARY-RESOURCE-KEY}`. Compare the returned `lastModifiedDateTime` from both projects. If your primary project was modified sooner than your secondary one, you need to repeat the steps of [exporting](#export-your-primary-project-assets), [importing](#import-to-a-new-project), [training](#train-your-model) and [deploying](#deploy-your-model).
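
A minimal sketch of this comparison is shown below. The project-details path stands in for the request shown above, and `lastModifiedDateTime` is the key referenced in this section; treat the rest as an illustrative assumption:

```python
import requests

def last_modified(endpoint: str, key: str, project_details_path: str) -> str:
    """Return the lastModifiedDateTime reported by the get-project-details request."""
    response = requests.get(
        endpoint.rstrip("/") + project_details_path,   # use the URL from the request above
        headers={"Ocp-Apim-Subscription-Key": key},
        timeout=30,
    )
    response.raise_for_status()
    return response.json()["lastModifiedDateTime"]

primary = last_modified("{PRIMARY-ENDPOINT}", "{PRIMARY-RESOURCE-KEY}", "<project-details-path>")
secondary = last_modified("{SECONDARY-ENDPOINT}", "{SECONDARY-RESOURCE-KEY}", "<project-details-path>")

# ISO 8601 timestamps in the same format compare correctly as strings.
if primary > secondary:
    print("Projects are out of sync: re-export, import, train, and deploy.")
```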
--
-## Next steps
-
-In this article, you have learned how to use the export and import APIs to replicate your project to a secondary Language resource in other region. Next, explore the API reference docs to see what else you can do with authoring APIs.
-
-* [Authoring REST API reference](https://aka.ms/ct-authoring-swagger)
-
-* [Runtime prediction REST API reference](https://aka.ms/ct-runtime-swagger)
cognitive-services Faq https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/custom-named-entity-recognition/faq.md
- Title: Custom Named Entity Recognition (NER) FAQ-
-description: Learn about Frequently asked questions when using custom Named Entity Recognition.
------ Previously updated : 08/08/2022-----
-# Frequently asked questions for Custom Named Entity Recognition
-
-Find answers to commonly asked questions about concepts and scenarios related to custom NER in Azure Cognitive Service for Language.
-
-## How do I get started with the service?
-
-See the [quickstart](./quickstart.md) to quickly create your first project, or view [how to create projects](how-to/create-project.md) for more detailed information.
-
-## What are the service limits?
-
-See the [service limits article](service-limits.md) for more information.
-
-## How many tagged files are needed?
-
-Generally, diverse and representative [tagged data](how-to/tag-data.md) leads to better results, given that the tagging is done precisely, consistently, and completely. There is no set number of tagged instances that will make every model perform well. Performance is highly dependent on your schema and the ambiguity of your schema. Ambiguous entity types need more tags. Performance also depends on the quality of your tagging. The recommended number of tagged instances per entity is 50.
-
-## Training is taking a long time, is this expected?
-
-The training process can take a long time. As a rough estimate, the expected training time for files with a combined length of 12,800,000 chars is 6 hours.
-
-## How do I build my custom model programmatically?
--
-You can use the [REST APIs](https://westus.dev.cognitive.microsoft.com/docs/services/language-authoring-clu-apis-2022-03-01-preview/operations/Projects_TriggerImportProjectJob) to build your custom models. Follow this [quickstart](quickstart.md?pivots=rest-api) for examples of how to call the Authoring API to create a project and build a model.
-
-When you're ready to start [using your model to make predictions](#how-do-i-use-my-trained-model-for-predictions), you can use the REST API, or the client library.
-
-## What is the recommended CI/CD process?
-
-You can train multiple models on the same dataset within the same project. After you have trained your model successfully, you can [view its performance](how-to/view-model-evaluation.md). You can [deploy and test](quickstart.md#deploy-your-model) your model within [Language studio](https://aka.ms/languageStudio). You can add or remove labels from your data and train a **new** model and test it as well. View [service limits](service-limits.md) to learn about the maximum number of trained models within the same project. When you [train a model](how-to/train-model.md), you can determine how your dataset is split into training and testing sets. If your data is split randomly into training and testing sets, there is no guarantee that the model evaluation is performed on the same test set each time, so the results are not comparable. It's recommended that you develop your own test set and use it to evaluate both models so you can measure improvement.
-
-## Does a low or high model score guarantee bad or good performance in production?
-
-Model evaluation may not always be comprehensive. This depends on:
-* If the **test set** is too small, the good/bad scores are not representative of the model's actual performance. Also, if a specific entity type is missing or under-represented in your test set, it will affect model performance.
-* **Data diversity**: if your data only covers a few scenarios/examples of the text you expect in production, your model will not be exposed to all possible scenarios and might perform poorly on the scenarios it hasn't been trained on.
-* **Data representation**: if the dataset used to train the model is not representative of the data that would be introduced to the model in production, model performance will be affected greatly.
-
-See the [data selection and schema design](how-to/design-schema.md) article for more information.
-
-## How do I improve model performance?
-
-* View the model [confusion matrix](how-to/view-model-evaluation.md). If you notice that a certain entity type is frequently not predicted correctly, consider adding more tagged instances for this class. If you notice that two entity types are frequently predicted as each other, this means the schema is ambiguous, and you should consider merging them both into one entity type for better performance.
-
-* [Review test set predictions](how-to/view-model-evaluation.md). If one of the entity types has a lot more tagged instances than the others, your model may be biased towards this type. Add more data to the other entity types or remove examples from the dominating type.
-
-* Learn more about [data selection and schema design](how-to/design-schema.md).
-
-* [Review your test set](how-to/view-model-evaluation.md) to see predicted and tagged entities side-by-side so you can get a better idea of your model performance, and decide if any changes in the schema or the tags are necessary.
-
-## Why do I get different results when I retrain my model?
-
-* When you [train your model](how-to/train-model.md), you can determine if you want your data to be split randomly into train and test sets. If you do, there is no guarantee that the model evaluation is on the same test set, so results are not comparable.
-
-* If you're retraining the same model, your test set will be the same, but you might notice a slight change in predictions made by the model. This is because the trained model is not robust enough and this is a factor of how representative and distinct your data is and the quality of your tagged data.
-
-## How do I get predictions in different languages?
-
-First, you need to enable the multilingual option when [creating your project](how-to/create-project.md) or you can enable it later from the project settings page. After you train and deploy your model, you can start querying it in [multiple languages](language-support.md#multi-lingual-option). You may get varied results for different languages. To improve the accuracy of any language, add more tagged instances to your project in that language to introduce the trained model to more syntax of that language.
-
-## I trained my model, but I can't test it
-
-You need to [deploy your model](quickstart.md#deploy-your-model) before you can test it.
-
-## How do I use my trained model for predictions?
-
-After deploying your model, you [call the prediction API](how-to/call-api.md), using either the [REST API](how-to/call-api.md?tabs=rest-api) or [client libraries](how-to/call-api.md?tabs=client).
-
-## Data privacy and security
-
-Custom NER is a data processor for General Data Protection Regulation (GDPR) purposes. In compliance with GDPR policies, Custom NER users have full control to view, export, or delete any user content either through the [Language Studio](https://aka.ms/languageStudio) or programmatically by using [REST APIs](https://westus.dev.cognitive.microsoft.com/docs/services/language-authoring-clu-apis-2022-03-01-preview/operations/Projects_TriggerImportProjectJob).
-
-Your data is only stored in your Azure Storage account. Custom NER only has access to read from it during training.
-
-## How do I clone my project?
-
-To clone your project you need to use the export API to export the project assets, and then import them into a new project. See the [REST API](https://westus.dev.cognitive.microsoft.com/docs/services/language-authoring-clu-apis-2022-03-01-preview/operations/Projects_TriggerImportProjectJob) reference for both operations.
-
-## Next steps
-
-* [Custom NER overview](overview.md)
-* [Quickstart](quickstart.md)
cognitive-services Glossary https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/custom-named-entity-recognition/glossary.md
- Title: Definitions and terms used for Custom Named Entity Recognition (NER)-
-description: Definitions and terms you may encounter when building AI models using Custom Named Entity Recognition
------ Previously updated : 05/06/2022----
-# Custom named entity recognition definitions and terms
-
-Use this article to learn about some of the definitions and terms you may encounter when using custom NER.
-
-## Entity
-
-An entity is a span of text that indicates a certain type of information. The text span can consist of one or more words. In the scope of custom NER, entities represent the information that the user wants to extract from the text. Developers tag entities within their data with the needed entities before passing it to the model for training. For example "Invoice number", "Start date", "Shipment number", "Birthplace", "Origin city", "Supplier name" or "Client address".
-
-For example, in the sentence "*John borrowed 25,000 USD from Fred.*" the entities might be:
-
-| Entity name/type | Entity |
-|--|--|
-| Borrower Name | *John* |
-| Lender Name | *Fred* |
-| Loan Amount | *25,000 USD* |
-
-## F1 score
-The F1 score is a function of Precision and Recall. It's needed when you seek a balance between [precision](#precision) and [recall](#recall).
-
-## Model
-
-A model is an object that is trained to do a certain task, in this case custom entity recognition. Models are trained by providing labeled data to learn from so they can later be used for recognition tasks.
-
-* **Model training** is the process of teaching your model what to extract based on your labeled data.
-* **Model evaluation** is the process that happens right after training to know how well your model performs.
-* **Deployment** is the process of assigning your model to a deployment to make it available for use via the [prediction API](https://aka.ms/ct-runtime-swagger).
-
-## Precision
-Measures how precise/accurate your model is. It's the ratio between the correctly identified positives (true positives) and all identified positives. The precision metric reveals how many of the predicted classes are correctly labeled.
-
-## Project
-
-A project is a work area for building your custom ML models based on your data. Your project can only be accessed by you and others who have access to the Azure resource being used.
-As a prerequisite to creating a custom entity extraction project, you have to connect your resource to a storage account with your dataset when you [create a new project](how-to/create-project.md). Your project automatically includes all the `.txt` files available in your container.
-
-Within your project you can do the following actions:
-
-* **Label your data**: The process of labeling your data so that when you train your model it learns what you want to extract.
-* **Build and train your model**: The core step of your project, where your model starts learning from your labeled data.
-* **View model evaluation details**: Review your model performance to decide if there is room for improvement, or you are satisfied with the results.
-* **Deployment**: After you have reviewed the model's performance and decided it can be used in your environment, you need to assign it to a deployment to use it. Assigning the model to a deployment makes it available for use through the [prediction API](https://aka.ms/ct-runtime-swagger).
-* **Test model**: After deploying your model, test your deployment in [Language Studio](https://aka.ms/LanguageStudio) to see how it would perform in production.
-
-## Recall
-Measures the model's ability to predict actual positive classes. It's the ratio between the predicted true positives and what was actually tagged. The recall metric reveals how many of the actual classes are correctly predicted.
-
-## Next steps
-
-* [Data and service limits](service-limits.md).
-* [Custom NER overview](../overview.md).
cognitive-services Call Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/custom-named-entity-recognition/how-to/call-api.md
- Title: Send a Named Entity Recognition (NER) request to your custom model
-description: Learn how to send requests for custom NER.
------- Previously updated : 05/11/2023----
-# Query your custom model
-
-After the deployment is added successfully, you can query the deployment to extract entities from your text based on the model you assigned to the deployment.
-You can query the deployment programmatically using the [Prediction API](https://aka.ms/ct-runtime-api) or through the client libraries (Azure SDK).
-
-## Test deployed model
-
-You can use Language Studio to submit the custom entity recognition task and visualize the results.
----
-## Send an entity recognition request to your model
-
-# [Language Studio](#tab/language-studio)
--
-# [REST API](#tab/rest-api)
-
-First you need to get your resource key and endpoint:
---
-### Submit a custom NER task
--
-### Get task results
--
-# [Client libraries (Azure SDK)](#tab/client)
-
-First you need to get your resource key and endpoint:
--
-3. Download and install the client library package for your language of choice:
-
- |Language |Package version |
- |||
- |.NET | [5.2.0-beta.3](https://www.nuget.org/packages/Azure.AI.TextAnalytics/5.2.0-beta.3) |
- |Java | [5.2.0-beta.3](https://mvnrepository.com/artifact/com.azure/azure-ai-textanalytics/5.2.0-beta.3) |
- |JavaScript | [6.0.0-beta.1](https://www.npmjs.com/package/@azure/ai-text-analytics/v/6.0.0-beta.1) |
- |Python | [5.2.0b4](https://pypi.org/project/azure-ai-textanalytics/5.2.0b4/) |
-
-4. After you've installed the client library, use the following samples on GitHub to start calling the API. A minimal Python sketch also follows this list.
-
- * [C#](https://github.com/Azure/azure-sdk-for-net/blob/main/sdk/textanalytics/Azure.AI.TextAnalytics/samples/Sample8_RecognizeCustomEntities.md)
- * [Java](https://github.com/Azure/azure-sdk-for-java/blob/main/sdk/textanalytics/azure-ai-textanalytics/src/samples/java/com/azure/ai/textanalytics/lro/RecognizeCustomEntities.java)
- * [JavaScript](https://github.com/Azure/azure-sdk-for-js/blob/%40azure/ai-text-analytics_6.0.0-beta.1/sdk/textanalytics/ai-text-analytics/samples/v5/javascript/customText.js)
- * [Python](https://github.com/Azure/azure-sdk-for-python/blob/main/sdk/textanalytics/azure-ai-textanalytics/samples/sample_recognize_custom_entities.py)
-
-5. See the following reference documentation for more information on the client, and return object:
-
- * [C#](/dotnet/api/azure.ai.textanalytics?view=azure-dotnet-preview&preserve-view=true)
- * [Java](/java/api/overview/azure/ai-textanalytics-readme?view=azure-java-preview&preserve-view=true)
- * [JavaScript](/javascript/api/overview/azure/ai-text-analytics-readme?view=azure-node-preview&preserve-view=true)
- * [Python](/python/api/azure-ai-textanalytics/azure.ai.textanalytics?view=azure-python-preview&preserve-view=true)
-
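For orientation, here's a minimal Python sketch of calling a deployed custom NER model with the client library listed above. Method names and result shapes can differ between the preview package versions, so treat this as illustrative and rely on the linked samples and reference docs; the endpoint, key, project, and deployment names are placeholders:

```python
from azure.core.credentials import AzureKeyCredential
from azure.ai.textanalytics import TextAnalyticsClient

# Placeholders: use your Language resource endpoint/key and your project/deployment names.
client = TextAnalyticsClient(
    endpoint="https://<your-resource>.cognitiveservices.azure.com/",
    credential=AzureKeyCredential("<your-key>"),
)

documents = ["The first party of this contract is John Smith, resident of 5678 Main Rd."]

# Long-running operation: submit the custom entity recognition task and wait for results.
poller = client.begin_recognize_custom_entities(
    documents,
    project_name="<PROJECT-NAME>",
    deployment_name="<DEPLOYMENT-NAME>",
)

for result in poller.result():
    if not result.is_error:
        for entity in result.entities:
            print(entity.text, entity.category, entity.confidence_score)
```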
--
-## Next steps
-
-* [Frequently asked questions](../faq.md)
cognitive-services Create Project https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/custom-named-entity-recognition/how-to/create-project.md
- Title: Create custom NER projects and use Azure resources-
-description: Learn how to create and manage projects and Azure resources for custom NER.
------ Previously updated : 06/03/2022----
-# How to create custom NER project
-
-Use this article to learn how to set up the requirements for starting with custom NER and create a project.
-
-## Prerequisites
-
-Before you start using custom NER, you will need:
-
-* An Azure subscription - [Create one for free](https://azure.microsoft.com/free/cognitive-services).
-
-## Create a Language resource
-
-Before you start using custom NER, you will need an Azure Language resource. It is recommended to create your Language resource and connect a storage account to it in the Azure portal. Creating a resource in the Azure portal lets you create an Azure storage account at the same time, with all of the required permissions pre-configured. You can also read further in the article to learn how to use a pre-existing resource, and configure it to work with custom named entity recognition.
-
-You also will need an Azure storage account where you will upload your `.txt` documents that will be used to train a model to extract entities.
-
-> [!NOTE]
-> * You need to have an **owner** role assigned on the resource group to create a Language resource.
-> * If you will connect a pre-existing storage account, you should have an owner role assigned to it.
-
-## Create Language resource and connect storage account
-
-You can create a resource in the following ways:
-
-* The Azure portal
-* Language Studio
-* PowerShell
-
-> [!Note]
-> You shouldn't move the storage account to a different resource group or subscription once it's linked with the Language resource.
-----
-> [!NOTE]
-> * The process of connecting a storage account to your Language resource is irreversible; it cannot be disconnected later.
-> * You can only connect your language resource to one storage account.
-
-## Using a pre-existing Language resource
--
-## Create a custom named entity recognition project
-
-Once your resource and storage container are configured, create a new custom NER project. A project is a work area for building your custom AI models based on your data. Your project can only be accessed by you and others who have access to the Azure resource being used. If you have labeled data, you can use it to get started by [importing a project](#import-project).
-
-### [Language Studio](#tab/language-studio)
--
-### [REST APIs](#tab/rest-api)
----
-## Import project
-
-If you have already labeled data, you can use it to get started with the service. Make sure that your labeled data follows the [accepted data formats](../concepts/data-formats.md).
-
-### [Language Studio](#tab/language-studio)
--
-### [REST APIs](#tab/rest-api)
----
-## Get project details
-
-### [Language Studio](#tab/language-studio)
--
-### [REST APIs](#tab/rest-api)
----
-## Delete project
-
-### [Language Studio](#tab/language-studio)
--
-### [REST APIs](#tab/rest-api)
----
-## Next steps
-
-* You should have an idea of the [project schema](design-schema.md) you will use to label your data.
-
-* After your project is created, you can start [labeling your data](tag-data.md), which will inform your entity extraction model how to interpret text, and is used for training and evaluation.
cognitive-services Deploy Model https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/custom-named-entity-recognition/how-to/deploy-model.md
- Title: How to deploy a custom NER model-
-description: Learn how to deploy a model for custom NER.
------ Previously updated : 03/23/2023----
-# Deploy a model and extract entities from text using the runtime API
-
-Once you are satisfied with how your model performs, it is ready to be deployed and used to recognize entities in text. Deploying a model makes it available for use through the [prediction API](https://aka.ms/ct-runtime-swagger).
-
-## Prerequisites
-
-* A successfully [created project](create-project.md) with a configured Azure storage account.
-* Text data that has [been uploaded](design-schema.md#data-preparation) to your storage account.
-* [Labeled data](tag-data.md) and successfully [trained model](train-model.md)
-* Reviewed the [model evaluation details](view-model-evaluation.md) to determine how your model is performing.
-
-See [project development lifecycle](../overview.md#project-development-lifecycle) for more information.
-
-## Deploy model
-
-After you've reviewed your model's performance and decided it can be used in your environment, you need to assign it to a deployment. Assigning the model to a deployment makes it available for use through the [prediction API](https://aka.ms/ct-runtime-swagger). It is recommended to create a deployment named *production* to which you assign the best model you have built so far and use it in your system. You can create another deployment called *staging* to which you can assign the model you're currently working on to be able to test it. You can have a maximum of 10 deployments in your project.
-
-# [Language Studio](#tab/language-studio)
-
-
-# [REST APIs](#tab/rest-api)
-
-### Submit deployment job
--
-### Get deployment job status
----
-## Swap deployments
-
-After you are done testing a model assigned to one deployment and you want to assign this model to another deployment, you can swap these two deployments. Swapping deployments involves taking the model assigned to the first deployment and assigning it to the second deployment, and taking the model assigned to the second deployment and assigning it to the first deployment. You can use this process to swap your *production* and *staging* deployments when you want to take the model assigned to *staging* and assign it to *production*.
-
-# [Language Studio](#tab/language-studio)
--
-# [REST APIs](#tab/rest-api)
-----
-## Delete deployment
-
-# [Language Studio](#tab/language-studio)
--
-# [REST APIs](#tab/rest-api)
----
-## Assign deployment resources
-
-You can [deploy your project to multiple regions](../../concepts/custom-features/multi-region-deployment.md) by assigning different Language resources that exist in different regions.
-
-# [Language Studio](#tab/language-studio)
--
-# [REST APIs](#tab/rest-api)
----
-## Unassign deployment resources
-
-When unassigning or removing a deployment resource from a project, you will also delete all the deployments that have been deployed to that resource's region.
-
-# [Language Studio](#tab/language-studio)
--
-# [REST APIs](#tab/rest-api)
----
-## Next steps
-
-After you have a deployment, you can use it to [extract entities](call-api.md) from text.
cognitive-services Design Schema https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/custom-named-entity-recognition/how-to/design-schema.md
- Title: Preparing data and designing a schema for custom NER-
-description: Learn about how to select and prepare data, to be successful in creating custom NER projects.
------ Previously updated : 05/09/2022----
-# How to prepare data and define a schema for custom NER
-
-In order to create a custom NER model, you will need quality data to train it. This article covers how you should select and prepare your data, along with defining a schema. Defining the schema is the first step in [project development lifecycle](../overview.md#project-development-lifecycle), and it defines the entity types/categories that you need your model to extract from the text at runtime.
-
-## Schema design
-
-The schema defines the entity types/categories that you need your model to extract from text at runtime.
-
-* Review documents in your dataset to be familiar with their format and structure.
-
-* Identify the [entities](../glossary.md#entity) you want to extract from the data.
-
- For example, if you are extracting entities from support emails, you might need to extract "Customer name", "Product name", "Request date", and "Contact information".
-
-* Avoid ambiguity between entity types.
-
- **Ambiguity** happens when the entity types you select are similar to each other. The more ambiguous your schema, the more labeled data you will need to differentiate between different entity types.
-
- For example, if you are extracting data from a legal contract, to extract "Name of first party" and "Name of second party" you will need to add more examples to overcome ambiguity, since the names of both parties look similar. Avoiding ambiguity saves time and effort, and yields better results.
-
-* Avoid complex entities. Complex entities can be difficult to pick out precisely from text; consider breaking them down into multiple entities.
-
- For example, extracting "Address" would be challenging if it's not broken down into smaller entities. There are so many variations of how addresses appear that it would take a large number of labeled entities to teach the model to extract an address, as a whole, without breaking it down. However, if you replace "Address" with "Street Name", "PO Box", "City", "State" and "Zip", the model will require fewer labels per entity.
-
-## Data selection
-
-The quality of data you train your model with affects model performance greatly.
-
-* Use real-life data that reflects your domain's problem space to effectively train your model. You can use synthetic data to accelerate the initial model training process, but it will likely differ from your real-life data and make your model less effective when used.
-
-* Balance your data distribution as much as possible without deviating far from the distribution in real-life. For example, if you are training your model to extract entities from legal documents that may come in many different formats and languages, you should provide examples that exemplify the diversity as you would expect to see in real life.
-
-* Use diverse data whenever possible to avoid overfitting your model. Less diversity in training data may lead to your model learning spurious correlations that may not exist in real-life data.
-
-* Avoid duplicate documents in your data. Duplicate data has a negative effect on the training process, model metrics, and model performance.
-
-* Consider where your data comes from. If you are collecting data from one person, department, or part of your scenario, you are likely missing diversity that may be important for your model to learn about.
-
-> [!NOTE]
-> If your documents are in multiple languages, select the **enable multi-lingual** option during [project creation](../quickstart.md) and set the **language** option to the language of the majority of your documents.
-
-## Data preparation
-
-As a prerequisite for creating a project, your training data needs to be uploaded to a blob container in your storage account. You can create and upload training documents from Azure directly, or by using the Azure Storage Explorer tool. Using the Azure Storage Explorer tool allows you to upload more data quickly.
-
-* [Create and upload documents from Azure](../../../../storage/blobs/storage-quickstart-blobs-portal.md#create-a-container)
-* [Create and upload documents using Azure Storage Explorer](../../../../vs-azure-tools-storage-explorer-blobs.md)
-
-You can only use `.txt` documents. If your data is in another format, you can use the [CLUtils parse command](https://github.com/microsoft/CognitiveServicesLanguageUtilities/blob/main/CustomTextAnalytics.CLUtils/Solution/CogSLanguageUtilities.ViewLayer.CliCommands/Commands/ParseCommand/README.md) to change your document format.
-
-You can upload an annotated dataset, or you can upload an unannotated one and [label your data](../how-to/tag-data.md) in Language studio.
-
-## Test set
-
-When defining the testing set, make sure to include example documents that are not present in the training set. Defining the testing set is an important step to calculate the [model performance](view-model-evaluation.md#model-details). Also, make sure that the testing set includes documents that represent all entities used in your project.
-
-## Next steps
-
-If you haven't already, create a custom NER project. If it's your first time using custom NER, consider following the [quickstart](../quickstart.md) to create an example project. You can also see the [how-to article](../how-to/create-project.md) for more details on what you need to create a project.
cognitive-services Tag Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/custom-named-entity-recognition/how-to/tag-data.md
- Title: How to label your data for Custom Named Entity Recognition (NER)-
-description: Learn how to label your data for use with Custom Named Entity Recognition (NER).
------ Previously updated : 05/24/2022----
-# Label your data in Language Studio
-
-Before training your model you need to label your documents with the custom entities you want to extract. Data labeling is a crucial step in the development lifecycle. In this step, you can create the entity types you want to extract from your data and label these entities within your documents. This data will be used in the next step when training your model so that your model can learn from the labeled data. If you already have labeled data, you can directly [import](create-project.md#import-project) it into your project, but you need to make sure that your data follows the [accepted data format](../concepts/data-formats.md). See [create project](create-project.md#import-project) to learn more about importing labeled data into your project.
-
-Before creating a custom NER model, you need to have labeled data first. If your data isn't labeled already, you can label it in the [Language Studio](https://aka.ms/languageStudio). Labeled data informs the model how to interpret text, and is used for training and evaluation.
-
-## Prerequisites
-
-Before you can label your data, you need:
-
-* A successfully [created project](create-project.md) with a configured Azure blob storage account
-* Text data that [has been uploaded](design-schema.md#data-preparation) to your storage account.
-
-See the [project development lifecycle](../overview.md#project-development-lifecycle) for more information.
-
-## Data labeling guidelines
-
-After [preparing your data, designing your schema](design-schema.md) and [creating your project](create-project.md), you will need to label your data. Labeling your data is important so your model knows which words will be associated with the entity types you need to extract. When you label your data in [Language Studio](https://aka.ms/languageStudio) (or import labeled data), these labels will be stored in the JSON document in your storage container that you have connected to this project.
-
-As you label your data, keep in mind:
-
-* In general, more labeled data leads to better results, provided the data is labeled accurately.
-
-* The precision, consistency and completeness of your labeled data are key factors to determining model performance.
-
- * **Label precisely**: Always label each entity with its correct type. Only include what you want extracted, and avoid unnecessary data in your labels.
- * **Label consistently**: The same entity should have the same label across all the documents.
- * **Label completely**: Label all the instances of the entity in all your documents. You can use the [auto labelling feature](use-autolabeling.md) to ensure complete labeling.
-
- > [!NOTE]
- > There is no fixed number of labels that can guarantee your model will perform the best. Model performance is dependent on possible ambiguity in your [schema](design-schema.md), and the quality of your labeled data. Nevertheless, we recommend having around 50 labeled instances per entity type.
-
-## Label your data
-
-Use the following steps to label your data:
-
-1. Go to your project page in [Language Studio](https://aka.ms/languageStudio).
-
-2. From the left side menu, select **Data labeling**. You can find a list of all documents in your storage container.
-
- <!--:::image type="content" source="../media/tagging-files-view.png" alt-text="A screenshot showing the Language Studio screen for labeling data." lightbox="../media/tagging-files-view.png":::-->
-
- >[!TIP]
- > You can use the filters in top menu to view the unlabeled documents so that you can start labeling them.
- > You can also use the filters to view the documents that are labeled with a specific entity type.
-
-3. Change to a single document view from the left side in the top menu or select a specific document to start labeling. You can find a list of all `.txt` documents available in your project to the left. You can use the **Back** and **Next** button from the bottom of the page to navigate through your documents.
-
- > [!NOTE]
- > If you enabled multiple languages for your project, you will find a **Language** dropdown in the top menu, which lets you select the language of each document.
-
-4. In the right side pane, **Add entity type** to your project so you can start labeling your data with them.
-
- <!--:::image type="content" source="../media/tag-1.png" alt-text="A screenshot showing complete data labeling." lightbox="../media/tag-1.png":::-->
-
-6. You have two options to label your document:
-
- |Option |Description |
- |||
- |Label using a brush | Select the brush icon next to an entity type in the right pane, then highlight the text in the document you want to annotate with this entity type. |
- |Label using a menu | Highlight the word you want to label as an entity, and a menu will appear. Select the entity type you want to assign for this entity. |
-
- The below screenshot shows labeling using a brush.
-
- :::image type="content" source="../media/tag-options.png" alt-text="A screenshot showing the labeling options offered in Custom NER." lightbox="../media/tag-options.png":::
-
-6. In the right side pane under the **Labels** pivot you can find all the entity types in your project and the count of labeled instances for each.
-
-6. In the bottom section of the right side pane you can add the current document you are viewing to the training set or the testing set. By default all the documents are added to your training set. Learn more about [training and testing sets](train-model.md#data-splitting) and how they are used for model training and evaluation.
-
- > [!TIP]
- > If you are planning on using **Automatic** data splitting, use the default option of assigning all the documents into your training set.
-
-7. Under the **Distribution** pivot you can view the distribution across training and testing sets. You have two options for viewing:
- * *Total instances* where you can view count of all labeled instances of a specific entity type.
- * *documents with at least one label* where each document is counted if it contains at least one labeled instance of this entity.
-
-7. When you're labeling, your changes are synced periodically; if they haven't been saved yet, you'll see a warning at the top of your page. If you want to save manually, select the **Save labels** button at the bottom of the page.
-
-## Remove labels
-
-To remove a label
-
-1. Select the entity you want to remove a label from.
-2. Scroll through the menu that appears, and select **Remove label**.
-
-## Delete entities
-
-To delete an entity, select the delete icon next to the entity you want to remove. Deleting an entity will remove all its labeled instances from your dataset.
-
-## Next steps
-
-After you've labeled your data, you can begin [training a model](train-model.md) that will learn based on your data.
cognitive-services Train Model https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/custom-named-entity-recognition/how-to/train-model.md
- Title: How to train your Custom Named Entity Recognition (NER) model-
-description: Learn about how to train your model for Custom Named Entity Recognition (NER).
------ Previously updated : 05/06/2022----
-# Train your custom named entity recognition model
-
-Training is the process where the model learns from your [labeled data](tag-data.md). After training is completed, you'll be able to view the [model's performance](view-model-evaluation.md) to determine if you need to improve your model.
-
-To train a model, you start a training job and only successfully completed jobs create a model. Training jobs expire after seven days, which means you won't be able to retrieve the job details after this time. If your training job completed successfully and a model was created, the model won't be affected. You can only have one training job running at a time, and you can't start other jobs in the same project.
-
-The training times can be anywhere from a few minutes when dealing with few documents, up to several hours depending on the dataset size and the complexity of your schema.
--
-## Prerequisites
-
-* A successfully [created project](create-project.md) with a configured Azure blob storage account
-* Text data that [has been uploaded](design-schema.md#data-preparation) to your storage account.
-* [Labeled data](tag-data.md)
-
-See the [project development lifecycle](../overview.md#project-development-lifecycle) for more information.
-
-## Data splitting
-
-Before you start the training process, labeled documents in your project are divided into a training set and a testing set. Each one of them serves a different function.
-The **training set** is used in training the model; this is the set from which the model learns the labeled entities and what spans of text are to be extracted as entities.
-The **testing set** is a blind set that is not introduced to the model during training but only during evaluation.
-After model training is completed successfully, the model is used to make predictions on the documents in the testing set, and based on these predictions the [evaluation metrics](../concepts/evaluation-metrics.md) are calculated.
-It's recommended to make sure that all your entities are adequately represented in both the training and testing set.
-
-Custom NER supports two methods for data splitting:
-
-* **Automatically splitting the testing set from training data**: The system will split your labeled data between the training and testing sets, according to the percentages you choose. The recommended percentage split is 80% for training and 20% for testing (see the sketch after this list).
-
- > [!NOTE]
- > If you choose the **Automatically splitting the testing set from training data** option, only the data assigned to training set will be split according to the percentages provided.
-
-* **Use a manual split of training and testing data**: This method enables users to define which labeled documents should belong to which set. This step is only enabled if you have added documents to your testing set during [data labeling](tag-data.md).
-
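Conceptually, the automatic option behaves like a random percentage split of your labeled documents. The following sketch only illustrates the idea; the service performs the actual split internally:

```python
import random

def split_documents(documents: list[str], train_ratio: float = 0.8, seed: int = 0):
    """Illustrative 80/20 random split; the service performs its own split internally."""
    shuffled = documents[:]
    random.Random(seed).shuffle(shuffled)
    cutoff = int(len(shuffled) * train_ratio)
    return shuffled[:cutoff], shuffled[cutoff:]   # (training set, testing set)

train_docs, test_docs = split_documents([f"doc{i}.txt" for i in range(10)])
print(len(train_docs), "training /", len(test_docs), "testing")   # 8 training / 2 testing
```
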
-## Train model
-
-# [Language studio](#tab/Language-studio)
--
-# [REST APIs](#tab/REST-APIs)
-
-### Start training job
--
-### Get training job status
-
-Training could take some time depending on the size of your training data and the complexity of your schema. You can use the following request to keep polling the status of the training job until it's successfully completed.
-
- [!INCLUDE [get training model status](../includes/rest-api/get-training-status.md)]
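
A minimal polling sketch is shown below. The job-status URL is whatever the request above targets, and the exact status values should be confirmed against the API reference; treat the names here as assumptions:

```python
import time
import requests

def wait_for_job(job_status_url: str, key: str, poll_seconds: int = 30) -> dict:
    """Poll a job-status URL (such as the training-status request above) until it
    reaches a terminal state. The status names below are assumptions; check the API reference."""
    while True:
        response = requests.get(
            job_status_url,
            headers={"Ocp-Apim-Subscription-Key": key},
            timeout=30,
        )
        response.raise_for_status()
        job = response.json()
        if job.get("status") in ("succeeded", "failed", "cancelled"):
            return job
        time.sleep(poll_seconds)
```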
---
-### Cancel training job
-
-# [Language Studio](#tab/language-studio)
--
-# [REST APIs](#tab/rest-api)
----
-## Next steps
-
-After training is completed, you'll be able to view [model performance](view-model-evaluation.md) to optionally improve your model if needed. Once you're satisfied with your model, you can deploy it, making it available to use for [extracting entities](call-api.md) from text.
cognitive-services Use Autolabeling https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/custom-named-entity-recognition/how-to/use-autolabeling.md
- Title: How to use autolabeling in custom named entity recognition-
-description: Learn how to use autolabeling in custom named entity recognition.
------- Previously updated : 03/20/2023---
-# How to use autolabeling for Custom Named Entity Recognition
-
-The [labeling process](tag-data.md) is an important part of preparing your dataset. Since this process requires both time and effort, you can use the autolabeling feature to automatically label your entities. You can start autolabeling jobs based on a model you've previously trained or using GPT models. With autolabeling based on a model you've previously trained, you can start labeling a few of your documents, train a model, then create an autolabeling job to produce entity labels for other documents based on that model. With autolabeling with GPT, you can immediately trigger an autolabeling job without any prior model training. This feature can save you the time and effort of manually labeling your entities.
-
-## Prerequisites
-
-### [Autolabel based on a model you've trained](#tab/autolabel-model)
-
-Before you can use autolabeling based on a model you've trained, you need:
-* A successfully [created project](create-project.md) with a configured Azure blob storage account.
-* Text data that [has been uploaded](design-schema.md#data-preparation) to your storage account.
-* [Labeled data](tag-data.md)
-* A [successfully trained model](train-model.md)
--
-### [Autolabel with GPT](#tab/autolabel-gpt)
-Before you can use autolabeling with GPT, you need:
-* A successfully [created project](create-project.md) with a configured Azure blob storage account.
-* Text data that [has been uploaded](design-schema.md#data-preparation) to your storage account.
-* Entity names that are meaningful. The GPT models label entities in your documents based on the name of the entity you've provided.
-* [Labeled data](tag-data.md) isn't required.
-* An Azure OpenAI [resource and deployment](../../../openai/how-to/create-resource.md).
---
-## Trigger an autolabeling job
-
-### [Autolabel based on a model you've trained](#tab/autolabel-model)
-
-When you trigger an autolabeling job based on a model you've trained, there's a limit of 5,000 text records per month, per resource. This means the same limit applies across all projects within the same resource.
-
-> [!TIP]
-> A text record is calculated as the ceiling of (Number of characters in a document / 1,000). For example, if a document has 8921 characters, the number of text records is:
->
-> `ceil(8921/1000) = ceil(8.921)`, which is 9 text records.
-
-1. From the left navigation menu, select **Data labeling**.
-2. Select the **Autolabel** button under the Activity pane to the right of the page.
--
- :::image type="content" source="../media/trigger-autotag.png" alt-text="A screenshot showing how to trigger an autotag job." lightbox="../media/trigger-autotag.png":::
-
-3. Choose **Autolabel based on a model you've trained** and select **Next**.
-
- :::image type="content" source="../media/choose-models.png" alt-text="A screenshot showing model choice for auto labeling." lightbox="../media/choose-models.png":::
-
-4. Choose a trained model. It's recommended to check the model performance before using it for autolabeling.
-
- :::image type="content" source="../media/choose-model-trained.png" alt-text="A screenshot showing how to choose trained model for autotagging." lightbox="../media/choose-model-trained.png":::
-
-5. Choose the entities you want to be included in the autolabeling job. By default, all entities are selected. You can see the total labels, precision and recall of each entity. It's recommended to include entities that perform well to ensure the quality of the automatically labeled entities.
-
- :::image type="content" source="../media/choose-entities.png" alt-text="A screenshot showing which entities to be included in autotag job." lightbox="../media/choose-entities.png":::
-
-6. Choose the documents you want to be automatically labeled. The number of text records of each document is displayed. When you select one or more documents, you should see the number of text records selected. It's recommended to choose the unlabeled documents from the filter.
-
- > [!NOTE]
- > * If an entity was automatically labeled, but has a user defined label, only the user defined label is used and visible.
- > * You can view the documents by clicking on the document name.
-
- :::image type="content" source="../media/choose-files.png" alt-text="A screenshot showing which documents to be included in the autotag job." lightbox="../media/choose-files.png":::
-
-7. Select **Autolabel** to trigger the autolabeling job.
-You should see the model used, number of documents included in the autolabeling job, number of text records and entities to be automatically labeled. Autolabeling jobs can take anywhere from a few seconds to a few minutes, depending on the number of documents you included.
-
- :::image type="content" source="../media/review-autotag.png" alt-text="A screenshot showing the review screen for an autotag job." lightbox="../media/review-autotag.png":::
-
-### [Autolabel with GPT](#tab/autolabel-gpt)
-
-When you trigger an autolabeling job with GPT, you're charged to your Azure OpenAI resource as per your consumption. You're charged an estimate of the number of tokens in each document being autolabeled. Refer to the [Azure OpenAI pricing page](https://azure.microsoft.com/pricing/details/cognitive-services/openai-service/) for a detailed breakdown of pricing per token of different models.
-
-1. From the left navigation menu, select **Data labeling**.
-2. Select the **Autolabel** button under the Activity pane to the right of the page.
-
- :::image type="content" source="../media/trigger-autotag.png" alt-text="A screenshot showing how to trigger an autotag job from the activity pane." lightbox="../media/trigger-autotag.png":::
-
-4. Choose **Autolabel with GPT** and select **Next**.
-
- :::image type="content" source="../media/choose-models.png" alt-text="A screenshot showing model choice for auto labeling." lightbox="../media/choose-models.png":::
-
-5. Choose your Azure OpenAI resource and deployment. You must [create an Azure OpenAI resource and deploy a model](../../../openai/how-to/create-resource.md) in order to proceed.
-
- :::image type="content" source="../media/autotag-choose-open-ai.png" alt-text="A screenshot showing how to choose OpenAI resource and deployments" lightbox="../media/autotag-choose-open-ai.png":::
-
-6. Choose the entities you want to be included in the autolabeling job. By default, all entities are selected. Descriptive label names and examples for each label are recommended to achieve good quality labeling with GPT.
-
- :::image type="content" source="../media/choose-entities.png" alt-text="A screenshot showing which entities to be included in autotag job." lightbox="../media/choose-entities.png":::
-
-7. Choose the documents you want to be automatically labeled. It's recommended to choose the unlabeled documents from the filter.
-
- > [!NOTE]
- > * If an entity was automatically labeled, but has a user defined label, only the user defined label is used and visible.
- > * You can view the documents by clicking on the document name.
-
- :::image type="content" source="../media/choose-files.png" alt-text="A screenshot showing which documents to be included in the autotag job." lightbox="../media/choose-files.png":::
-
-8. Select **Start job** to trigger the autolabeling job.
-You should be directed to the autolabeling page displaying the autolabeling jobs initiated. Autolabeling jobs can take anywhere from a few seconds to a few minutes, depending on the number of documents you included.
-
- :::image type="content" source="../media/review-autotag.png" alt-text="A screenshot showing the review screen for an autotag job." lightbox="../media/review-autotag.png":::
----
-## Review the auto labeled documents
-
-When the autolabeling job is complete, you can see the output documents in the **Data labeling** page of Language Studio. Select **Review documents with autolabels** to view the documents with the **Auto labeled** filter applied.
--
-Entities that have been automatically labeled appear with a dotted line. These entities have two selectors (a checkmark and an "X") that allow you to accept or reject the automatic label.
-
-Once an entity is accepted, the dotted line changes to a solid one, and the label is included in any further model training, becoming a user defined label.
-
-Alternatively, you can accept or reject all automatically labeled entities within the document, using **Accept all** or **Reject all** in the top right corner of the screen.
-
-After you accept or reject the labeled entities, select **Save labels** to apply the changes.
-
-> [!NOTE]
-> * We recommend validating automatically labeled entities before accepting them.
-> * All labels that were not accepted are deleted when you train your model.
--
-## Next steps
-
-* Learn more about [labeling your data](tag-data.md).
cognitive-services Use Containers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/custom-named-entity-recognition/how-to/use-containers.md
- Title: Use Docker containers for Custom Named Entity Recognition on-premises-
-description: Learn how to use Docker containers for Custom Named Entity Recognition on-premises.
------ Previously updated : 05/08/2023--
-keywords: on-premises, Docker, container, natural language processing
--
-# Install and run Custom Named Entity Recognition containers
--
-Containers enable you to host the Custom Named Entity Recognition API on your own infrastructure using your own trained model. If you have security or data governance requirements that can't be fulfilled by calling Custom Named Entity Recognition remotely, then containers might be a good option.
-
-> [!NOTE]
-> * The free account is limited to 5,000 text records per month and only the **Free** and **Standard** [pricing tiers](https://azure.microsoft.com/pricing/details/cognitive-services/text-analytics) are valid for containers. For more information on transaction request rates, see [Data and service limits](../../concepts/data-limits.md).
--
-## Prerequisites
-
-* If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/cognitive-services/).
-* [Docker](https://docs.docker.com/) installed on a host computer. Docker must be configured to allow the containers to connect with and send billing data to Azure.
- * On Windows, Docker must also be configured to support Linux containers.
- * You should have a basic understanding of [Docker concepts](https://docs.docker.com/get-started/overview/).
-* A <a href="https://portal.azure.com/#create/Microsoft.CognitiveServicesTextAnalytics" title="Create a Language resource" target="_blank">Language resource </a> with the free (F0) or standard (S) [pricing tier](https://azure.microsoft.com/pricing/details/cognitive-services/text-analytics/).
-* A [trained and deployed Custom Named Entity Recognition model](../quickstart.md)
--
-## Host computer requirements and recommendations
--
-The following table describes the minimum and recommended specifications for Custom Named Entity Recognition containers. Each CPU core must be at least 2.6 gigahertz (GHz) or faster. The allowable Transactions Per Second (TPS) are also listed.
-
-| | Minimum host specs | Recommended host specs | Minimum TPS | Maximum TPS|
-|--|--|--|--|--|
-| **Custom Named Entity Recognition** | 1 core, 2 GB memory | 1 core, 4 GB memory |15 | 30|
-
-CPU core and memory correspond to the `--cpus` and `--memory` settings, which are used as part of the `docker run` command.
-
-## Export your Custom Named Entity Recognition model
-
-Before you proceed with running the Docker image, you need to export your trained model so the container can use it. Use the following command to export your model, replacing the placeholders with your own values:
-
-| Placeholder | Value | Format or example |
-|--|--|--|
-| **{API_KEY}** | The key for your Custom Named Entity Recognition resource. You can find it on your resource's **Key and endpoint** page, on the Azure portal. |`xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx`|
-| **{ENDPOINT_URI}** | The endpoint for accessing the Custom Named Entity Recognition API. You can find it on your resource's **Key and endpoint** page, on the Azure portal. | `https://<your-custom-subdomain>.cognitiveservices.azure.com` |
-| **{PROJECT_NAME}** | The name of the project containing the model that you want to export. You can find it on your projects tab in the Language Studio portal. |`myProject`|
-| **{TRAINED_MODEL_NAME}** | The name of the trained model you want to export. You can find your trained models on your model evaluation tab under your project in the Language Studio portal. |`myTrainedModel`|
-
-```bash
-curl --location --request PUT '{ENDPOINT_URI}/language/authoring/analyze-text/projects/{PROJECT_NAME}/exported-models/{TRAINED_MODEL_NAME}?api-version=2023-04-15-preview' \
---header 'Ocp-Apim-Subscription-Key: {API_KEY}' \
---header 'Content-Type: application/json' \
---data-raw '{
-    "TrainedmodelLabel": "{TRAINED_MODEL_NAME}"
-}'
-```
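-
-The export request is accepted asynchronously. To confirm it finished before starting the container, you can poll the URL returned in the response's `operation-location` header; that header name is an assumption based on other authoring operations, so verify it against the response you receive.
-
-```bash
-# Sketch only: {OPERATION-LOCATION} is the URL returned in the operation-location header of the export response.
-curl --location --request GET '{OPERATION-LOCATION}' \
---header 'Ocp-Apim-Subscription-Key: {API_KEY}'
-```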
-
-## Get the container image with `docker pull`
-
-The Custom Named Entity Recognition container image can be found on the `mcr.microsoft.com` container registry syndicate. It resides within the `azure-cognitive-services/textanalytics/` repository and is named `customner`. The fully qualified container image name is `mcr.microsoft.com/azure-cognitive-services/textanalytics/customner`.
-
-To use the latest version of the container, you can use the `latest` tag. You can also find a full list of [tags on the MCR](https://mcr.microsoft.com/product/azure-cognitive-services/textanalytics/customner/about).
-
-Use the [`docker pull`](https://docs.docker.com/engine/reference/commandline/pull/) command to download a container image from Microsoft Container Registry.
-
-```
-docker pull mcr.microsoft.com/azure-cognitive-services/textanalytics/customner:latest
-```
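-
-If you prefer to pin a specific container version instead of `latest`, pull it by tag. `<tag>` below is a placeholder for a version listed on the MCR tags page linked above.
-
-```
-docker pull mcr.microsoft.com/azure-cognitive-services/textanalytics/customner:<tag>
-```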
--
-## Run the container with `docker run`
-
-Once the container is on the host computer, use the [docker run](https://docs.docker.com/engine/reference/commandline/run/) command to run the containers. The container will continue to run until you stop it.
-
-> [!IMPORTANT]
-> * The docker commands in the following sections use the backslash, `\`, as a line continuation character. Replace or remove this based on your host operating system's requirements.
-> * The `Eula`, `Billing`, and `ApiKey` options must be specified to run the container; otherwise, the container won't start. For more information, see [Billing](#billing).
-
-To run the *Custom Named Entity Recognition* container, execute the following `docker run` command. Replace the placeholders below with your own values:
-
-| Placeholder | Value | Format or example |
-|--|--|--|
-| **{API_KEY}** | The key for your Custom Named Entity Recognition resource. You can find it on your resource's **Key and endpoint** page, on the Azure portal. |`xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx`|
-| **{ENDPOINT_URI}** | The endpoint for accessing the Custom Named Entity Recognition API. You can find it on your resource's **Key and endpoint** page, on the Azure portal. | `https://<your-custom-subdomain>.cognitiveservices.azure.com` |
-| **{PROJECT_NAME}** | The name of the project containing the model that you want to export. You can find it on your projects tab in the Language Studio portal. |`myProject`|
-| **{LOCAL_PATH}** | The path where the exported model from the previous step will be downloaded. You can choose any path you like. |`C:/custom-ner-model`|
-| **{TRAINED_MODEL_NAME}** | The name of the trained model you want to export. You can find your trained models on your model evaluation tab under your project in the Language Studio portal. |`myTrainedModel`|
--
-```bash
-docker run --rm -it -p5000:5000 --memory 4g --cpus 1 \
--v {LOCAL_PATH}:/modelPath \
-mcr.microsoft.com/azure-cognitive-services/textanalytics/customner:latest \
-EULA=accept \
-BILLING={ENDPOINT_URI} \
-APIKEY={API_KEY} \
-projectName={PROJECT_NAME} \
-exportedModelName={TRAINED_MODEL_NAME}
-```
-
-This command:
-
-* Runs a *Custom Named Entity Recognition* container and downloads your exported model to the local path specified.
-* Allocates one CPU core and 4 gigabytes (GB) of memory
-* Exposes TCP port 5000 and allocates a pseudo-TTY for the container
-* Automatically removes the container after it exits. The container image is still available on the host computer.
--
-## Query the container's prediction endpoint
-
-The container provides REST-based query prediction endpoint APIs.
-
-Use the host, `http://localhost:5000`, for container APIs.
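-
-Before sending prediction requests, you can confirm the container is healthy. Cognitive Services containers expose standard diagnostic endpoints; the paths below are the common ones and are listed as a hedged sketch rather than an exhaustive reference.
-
-```bash
-# Sketch only: standard container diagnostic endpoints.
-curl http://localhost:5000/         # container home page
-curl http://localhost:5000/ready    # returns success when the container is ready to accept requests
-curl http://localhost:5000/status   # validates the API key without running a query
-```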
---
-## Stop the container
--
-## Troubleshooting
-
-If you run the container with an output [mount](../../concepts/configure-containers.md#mount-settings) and logging enabled, the container generates log files that are helpful to troubleshoot issues that happen while starting or running the container.
--
-## Billing
-
-The Custom Named Entity Recognition containers send billing information to Azure, using a _Custom Named Entity Recognition_ resource on your Azure account.
--
-## Summary
-
-In this article, you learned the concepts and workflow for downloading, installing, and running Custom Named Entity Recognition containers. In summary:
-
-* Custom Named Entity Recognition provides Linux containers for Docker.
-* Container images are downloaded from the Microsoft Container Registry (MCR).
-* Container images run in Docker.
-* You can use either the REST API or SDK to call operations in Custom Named Entity Recognition containers by specifying the host URI of the container.
-* You must specify billing information when instantiating a container.
-
-> [!IMPORTANT]
-> Cognitive Services containers are not licensed to run without being connected to Azure for metering. Customers need to enable the containers to communicate billing information with the metering service at all times. Cognitive Services containers do not send customer data (e.g. text that is being analyzed) to Microsoft.
-
-## Next steps
-
-* See [Configure containers](../../concepts/configure-containers.md) for configuration settings.
cognitive-services View Model Evaluation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/custom-named-entity-recognition/how-to/view-model-evaluation.md
- Title: Evaluate a Custom Named Entity Recognition (NER) model-
-description: Learn how to evaluate and score your Custom Named Entity Recognition (NER) model
------ Previously updated : 02/28/2023-----
-# View the custom NER model's evaluation and details
-
-After your model has finished training, you can view the model performance and see the extracted entities for the documents in the test set.
-
-> [!NOTE]
-> Using the **Automatically split the testing set from training data** option may result in different model evaluation results every time you [train a new model](train-model.md), as the test set is selected randomly from the data. To make sure that the evaluation is calculated on the same test set every time you train a model, make sure to use the **Use a manual split of training and testing data** option when starting a training job and define your **Test** documents when [labeling data](tag-data.md).
-
-## Prerequisites
-
-Before viewing model evaluation, you need:
-
-* A successfully [created project](create-project.md) with a configured Azure blob storage account.
-* Text data that [has been uploaded](design-schema.md#data-preparation) to your storage account.
-* [Labeled data](tag-data.md)
-* A [successfully trained model](train-model.md)
-
-See the [project development lifecycle](../overview.md#project-development-lifecycle) for more information.
-
-## Model details
-
-### [Language studio](#tab/language-studio)
--
-### [REST APIs](#tab/rest-api)
----
-## Load or export model data
-
-### [Language studio](#tab/Language-studio)
---
-### [REST APIs](#tab/REST-APIs)
----
-## Delete model
-
-### [Language studio](#tab/language-studio)
--
-### [REST APIs](#tab/rest-api)
----
-## Next steps
-
-* [Deploy your model](deploy-model.md)
-* Learn about the [metrics used in evaluation](../concepts/evaluation-metrics.md).
cognitive-services Language Support https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/custom-named-entity-recognition/language-support.md
- Title: Language and region support for custom named entity recognition-
-description: Learn about the languages and regions supported by custom named entity recognition.
------ Previously updated : 05/06/2022----
-# Language support for custom named entity recognition
-
-Use this article to learn about the languages currently supported by the custom named entity recognition feature.
-
-## Multi-lingual option
-
-With custom NER, you can train a model in one language and use it to extract entities from documents in another language. This feature is powerful because it helps save time and effort. Instead of building separate projects for every language, you can handle a multilingual dataset in one project. Your dataset doesn't have to be entirely in the same language, but you should enable the multilingual option for your project when you create it, or later in the project settings. If you notice your model performing poorly in certain languages during the evaluation process, consider adding more data in these languages to your training set.
--
-You can train your project entirely with English documents, and query it in French, German, Mandarin, Japanese, Korean, and other languages. Custom named entity recognition
-makes it easy for you to scale your projects to multiple languages by using multilingual technology to train your models.
-
-Whenever you identify that a particular language is not performing as well as other languages, you can add more documents for that language in your project. In the [data labeling](how-to/tag-data.md) page in Language Studio, you can select the language of the document you're adding. When you introduce more documents for that language to the model, it is introduced to more of the syntax of that language, and learns to predict it better.
-
-You aren't expected to add the same number of documents for every language. You should build the majority of your project in one language, and only add a few documents in languages you observe aren't performing well. If you create a project that is primarily in English, and start testing it in French, German, and Spanish, you might observe that German doesn't perform as well as the other two languages. In that case, consider adding 5% of your original English documents in German, train a new model and test in German again. You should see better results for German queries. The more labeled documents you add, the more likely the results are going to get better.
-
-When you add data in another language, you shouldn't expect it to negatively affect other languages.
-
-## Language support
-
-Custom NER supports `.txt` files in the following languages:
-
-| Language | Language code |
-|--|--|
-| Afrikaans | `af` |
-| Amharic | `am` |
-| Arabic | `ar` |
-| Assamese | `as` |
-| Azerbaijani | `az` |
-| Belarusian | `be` |
-| Bulgarian | `bg` |
-| Bengali | `bn` |
-| Breton | `br` |
-| Bosnian | `bs` |
-| Catalan | `ca` |
-| Czech | `cs` |
-| Welsh | `cy` |
-| Danish | `da` |
-| German | `de` |
-| Greek | `el` |
-| English (US) | `en-us` |
-| Esperanto | `eo` |
-| Spanish | `es` |
-| Estonian | `et` |
-| Basque | `eu` |
-| Persian (Farsi) | `fa` |
-| Finnish | `fi` |
-| French | `fr` |
-| Western Frisian | `fy` |
-| Irish | `ga` |
-| Scottish Gaelic | `gd` |
-| Galician | `gl` |
-| Gujarati | `gu` |
-| Hausa | `ha` |
-| Hebrew | `he` |
-| Hindi | `hi` |
-| Croatian | `hr` |
-| Hungarian | `hu` |
-| Armenian | `hy` |
-| Indonesian | `id` |
-| Italian | `it` |
-| Japanese | `ja` |
-| Javanese | `jv` |
-| Georgian | `ka` |
-| Kazakh | `kk` |
-| Khmer | `km` |
-| Kannada | `kn` |
-| Korean | `ko` |
-| Kurdish (Kurmanji) | `ku` |
-| Kyrgyz | `ky` |
-| Latin | `la` |
-| Lao | `lo` |
-| Lithuanian | `lt` |
-| Latvian | `lv` |
-| Malagasy | `mg` |
-| Macedonian | `mk` |
-| Malayalam | `ml` |
-| Mongolian | `mn` |
-| Marathi | `mr` |
-| Malay | `ms` |
-| Burmese | `my` |
-| Nepali | `ne` |
-| Dutch | `nl` |
-| Norwegian (Bokmal) | `nb` |
-| Oriya | `or` |
-| Punjabi | `pa` |
-| Polish | `pl` |
-| Pashto | `ps` |
-| Portuguese (Brazil) | `pt-br` |
-| Portuguese (Portugal) | `pt-pt` |
-| Romanian | `ro` |
-| Russian | `ru` |
-| Sanskrit | `sa` |
-| Sindhi | `sd` |
-| Sinhala | `si` |
-| Slovak | `sk` |
-| Slovenian | `sl` |
-| Somali | `so` |
-| Albanian | `sq` |
-| Serbian | `sr` |
-| Sundanese | `su` |
-| Swedish | `sv` |
-| Swahili | `sw` |
-| Tamil | `ta` |
-| Telugu | `te` |
-| Thai | `th` |
-| Filipino | `tl` |
-| Turkish | `tr` |
-| Uyghur | `ug` |
-| Ukrainian | `uk` |
-| Urdu | `ur` |
-| Uzbek | `uz` |
-| Vietnamese | `vi` |
-| Xhosa | `xh` |
-| Yiddish | `yi` |
-| Chinese (Simplified) | `zh-hans` |
-| Zulu | `zu` |
-
-## Next steps
-
-* [Custom NER overview](overview.md)
-* [Service limits](service-limits.md)
cognitive-services Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/custom-named-entity-recognition/overview.md
- Title: Custom named entity recognition - Azure Cognitive Services-
-description: Customize an AI model to label and extract information from documents using Azure Cognitive Services.
------ Previously updated : 02/22/2023----
-# What is custom named entity recognition?
-
-Custom NER is one of the custom features offered by [Azure Cognitive Service for Language](../overview.md). It is a cloud-based API service that applies machine-learning intelligence to enable you to build custom models for custom named entity recognition tasks.
-
-Custom NER enables users to build custom AI models to extract domain-specific entities from unstructured text, such as contracts or financial documents. By creating a Custom NER project, developers can iteratively label data, train, evaluate, and improve model performance before making it available for consumption. The quality of the labeled data greatly impacts model performance. To simplify building and customizing your model, the service offers a custom web portal that can be accessed through the [Language studio](https://aka.ms/languageStudio). You can easily get started with the service by following the steps in this [quickstart](quickstart.md).
-
-This documentation contains the following article types:
-
-* [Quickstarts](quickstart.md) are getting-started instructions to guide you through making requests to the service.
-* [Concepts](concepts/evaluation-metrics.md) provide explanations of the service functionality and features.
-* [How-to guides](how-to/tag-data.md) contain instructions for using the service in more specific or customized ways.
-
-## Example usage scenarios
-
-Custom named entity recognition can be used in multiple scenarios across a variety of industries:
-
-### Information extraction
-
-Many financial and legal organizations extract and normalize data from thousands of complex, unstructured text sources on a daily basis. Such sources include bank statements, legal agreements, or bank forms. For example, manually extracting mortgage application data can take human reviewers several days. Automating these steps by building a custom NER model simplifies the process and saves cost, time, and effort.
-
-### Knowledge mining to enhance/enrich semantic search
-
-Search is foundational to any app that surfaces text content to users. Common scenarios include catalog or document search, retail product search, or knowledge mining for data science. Many enterprises across various industries want to build a rich search experience over private, heterogeneous content, which includes both structured and unstructured documents. As a part of their pipeline, developers can use custom NER for extracting entities from the text that are relevant to their industry. These entities can be used to enrich the indexing of the file for a more customized search experience.
-
-### Audit and compliance
-
-Instead of manually reviewing significantly long text files to audit and apply policies, IT departments in financial or legal enterprises can use custom NER to build automated solutions. These solutions can be helpful to enforce compliance policies, and set up necessary business rules based on knowledge mining pipelines that process structured and unstructured content.
-
-## Project development lifecycle
-
-Using custom NER typically involves several different steps.
--
-1. **Define your schema**: Know your data and identify the [entities](glossary.md#entity) you want extracted. Avoid ambiguity.
-
-2. **Label your data**: Labeling data is a key factor in determining model performance. Label precisely, consistently and completely.
- 1. **Label precisely**: Always label each entity with its correct type. Only include what you want extracted; avoid unnecessary data in your labels.
- 2. **Label consistently**: The same entity should have the same label across all the files.
- 3. **Label completely**: Label all the instances of the entity in all your files.
-
-3. **Train the model**: Your model starts learning from your labeled data.
-
-4. **View the model's performance**: After training is completed, view the model's evaluation details, its performance and guidance on how to improve it.
-
-5. **Deploy the model**: Deploying a model makes it available for use via the [Analyze API](https://aka.ms/ct-runtime-swagger).
-
-6. **Extract entities**: Use your custom models for entity extraction tasks.
-
-## Reference documentation and code samples
-
-As you use custom NER, see the following reference documentation and samples for Azure Cognitive Services for Language:
-
-|Development option / language |Reference documentation |Samples |
-||||
-|REST APIs (Authoring) | [REST API documentation](https://aka.ms/ct-authoring-swagger) | |
-|REST APIs (Runtime) | [REST API documentation](https://aka.ms/ct-runtime-swagger) | |
-|C# (Runtime) | [C# documentation](/dotnet/api/azure.ai.textanalytics?view=azure-dotnet-preview&preserve-view=true) | [C# samples](https://github.com/Azure/azure-sdk-for-net/blob/main/sdk/textanalytics/Azure.AI.TextAnalytics/samples/Sample8_RecognizeCustomEntities.md) |
-| Java (Runtime) | [Java documentation](/java/api/overview/azure/ai-textanalytics-readme?view=azure-java-preview&preserve-view=true) | [Java Samples](https://github.com/Azure/azure-sdk-for-java/blob/main/sdk/textanalytics/azure-ai-textanalytics/src/samples/java/com/azure/ai/textanalytics/lro/RecognizeCustomEntities.java) |
-|JavaScript (Runtime) | [JavaScript documentation](/javascript/api/overview/azure/ai-text-analytics-readme?view=azure-node-preview&preserve-view=true) | [JavaScript samples](https://github.com/Azure/azure-sdk-for-js/blob/%40azure/ai-text-analytics_6.0.0-beta.1/sdk/textanalytics/ai-text-analytics/samples/v5/javascript/customText.js) |
-|Python (Runtime) | [Python documentation](/python/api/azure-ai-textanalytics/azure.ai.textanalytics?view=azure-python-preview&preserve-view=true) | [Python samples](https://github.com/Azure/azure-sdk-for-python/blob/main/sdk/textanalytics/azure-ai-textanalytics/samples/sample_recognize_custom_entities.py) |
-
-## Responsible AI
-
-An AI system includes not only the technology, but also the people who will use it, the people who will be affected by it, and the environment in which it is deployed. Read the [transparency note for custom NER](/legal/cognitive-services/language-service/cner-transparency-note?context=/azure/cognitive-services/language-service/context/context) to learn about responsible AI use and deployment in your systems. You can also see the following articles for more information:
--
-## Next steps
-
-* Use the [quickstart article](quickstart.md) to start using custom named entity recognition.
-
-* As you go through the project development lifecycle, review the [glossary](glossary.md) to learn more about the terms used throughout the documentation for this feature.
-
-* Remember to view the [service limits](service-limits.md) for information such as [regional availability](service-limits.md#regional-availability).
cognitive-services Quickstart https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/custom-named-entity-recognition/quickstart.md
- Title: Quickstart - Custom named entity recognition (NER)-
-description: Quickly start building an AI model to categorize and extract information from unstructured text.
------ Previously updated : 01/25/2023--
-zone_pivot_groups: usage-custom-language-features
--
-# Quickstart: Custom named entity recognition
-
-Use this article to get started with creating a custom NER project where you can train custom models for custom entity recognition. A model is artificial intelligence software that's trained to do a certain task. For this system, the models extract named entities and are trained by learning from tagged data.
-
-In this article, we use Language Studio to demonstrate key concepts of custom Named Entity Recognition (NER). As an example, we'll build a custom NER model to extract relevant entities from loan agreements, such as the:
-* Date of the agreement
-* Borrower's name, address, city and state
-* Lender's name, address, city and state
-* Loan and interest amounts
-------
-## Next steps
-
-After you've created an entity extraction model, you can:
-
-* [Use the runtime API to extract entities](how-to/call-api.md)
-
-When you start to create your own custom NER projects, use the how-to articles to learn more about tagging, training, and consuming your model:
-
-* [Data selection and schema design](how-to/design-schema.md)
-* [Tag data](how-to/tag-data.md)
-* [Train a model](how-to/train-model.md)
-* [Model evaluation](how-to/view-model-evaluation.md)
-
cognitive-services Service Limits https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/custom-named-entity-recognition/service-limits.md
- Title: Custom Named Entity Recognition (NER) service limits-
-description: Learn about the data and service limits when using Custom Named Entity Recognition (NER).
------ Previously updated : 05/06/2022----
-# Custom named entity recognition service limits
-
-Use this article to learn about the data and service limits when using custom NER.
-
-## Language resource limits
-
-* Your Language resource has to be created in one of the [supported regions](#regional-availability).
-
-* Your resource must be one of the supported pricing tiers:
-
- |Tier|Description|Limit|
- |--|--|--|
- |F0 |Free tier|You are only allowed one F0 tier Language resource per subscription.|
- |S |Paid tier|You can have unlimited Language S tier resources per subscription. |
-
-
-* You can only connect one storage account per resource. This process is irreversible. If you connect a storage account to your resource, you cannot unlink it later. Learn more about [connecting a storage account](how-to/create-project.md#create-language-resource-and-connect-storage-account).
-
-* You can have up to 500 projects per resource.
-
-* Project names have to be unique within the same resource across all custom features.
-
-## Regional availability
-
-Custom named entity recognition is only available in some Azure regions. Some regions are available for **both authoring and prediction**, while other regions are **prediction only**. Language resources in authoring regions allow you to create, edit, train, and deploy your projects. Language resources in prediction regions allow you to get [predictions from a deployment](../concepts/custom-features/multi-region-deployment.md).
-
-| Region | Authoring | Prediction |
-|--|--|-|
-| Australia East | ✓ | ✓ |
-| Brazil South | | ✓ |
-| Canada Central | | ✓ |
-| Central India | ✓ | ✓ |
-| Central US | | ✓ |
-| East Asia | | ✓ |
-| East US | ✓ | ✓ |
-| East US 2 | ✓ | ✓ |
-| France Central | | ✓ |
-| Japan East | | ✓ |
-| Japan West | | ✓ |
-| Jio India West | | ✓ |
-| Korea Central | | ✓ |
-| North Central US | | ✓ |
-| North Europe | ✓ | ✓ |
-| Norway East | | ✓ |
-| Qatar Central | | ✓ |
-| South Africa North | | ✓ |
-| South Central US | ✓ | ✓ |
-| Southeast Asia | | ✓ |
-| Sweden Central | | ✓ |
-| Switzerland North | ✓ | ✓ |
-| UAE North | | ✓ |
-| UK South | ✓ | ✓ |
-| West Central US | | ✓ |
-| West Europe | ✓ | ✓ |
-| West US | | ✓ |
-| West US 2 | ✓ | ✓ |
-| West US 3 | ✓ | ✓ |
--
-## API limits
-
-|Item|Request type| Maximum limit|
-|:-|:-|:-|
-|Authoring API|POST|10 per minute|
-|Authoring API|GET|100 per minute|
-|Prediction API|GET/POST|1,000 per minute|
-|Document size|--|125,000 characters. You can send up to 25 documents as long as they collectively do not exceed 125,000 characters|
-
-> [!TIP]
-> If you need to send larger files than the limit allows, you can break the text into smaller chunks before sending them to the API. You can use the [chunk command from CLUtils](https://github.com/microsoft/CognitiveServicesLanguageUtilities/blob/main/CustomTextAnalytics.CLUtils/Solution/CogSLanguageUtilities.ViewLayer.CliCommands/Commands/ChunkCommand/README.md) for this process.
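-
-If you want a quick, local way to split a large `.txt` file without CLUtils, the following GNU coreutils sketch is one option. It splits by bytes rather than characters, so for non-ASCII text the chunks land below the 125,000-character limit by a wider margin; treat the chunk size as an approximation, and the file names as placeholders.
-
-```bash
-# Sketch only: split input.txt into line-preserving chunks of at most ~120,000 bytes each.
-split -C 120000 -d --additional-suffix=.txt input.txt chunk_
-```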
-
-## Quota limits
-
-|Pricing tier |Item |Limit |
-|--|--|--|
-|F|Training time| 1 hour per month |
-|S|Training time| Unlimited, Pay as you go |
-|F|Prediction Calls| 5,000 text records per month |
-|S|Prediction Calls| Unlimited, Pay as you go |
-
-## Document limits
-
-* You can only use `.txt` files. If your data is in another format, you can use the [CLUtils parse command](https://github.com/microsoft/CognitiveServicesLanguageUtilities/blob/main/CustomTextAnalytics.CLUtils/Solution/CogSLanguageUtilities.ViewLayer.CliCommands/Commands/ParseCommand/README.md) to open your document and extract the text.
-
-* All files uploaded in your container must contain data. Empty files are not allowed for training.
-
-* All files should be available at the root of your container.
-
-## Data limits
-
-The following limits are observed for custom named entity recognition.
-
-|Item|Lower Limit| Upper Limit |
-|--|--|--|
-|Documents count | 10 | 100,000 |
-|Document length in characters | 1 | 128,000 characters; approximately 28,000 words or 56 pages. |
-|Count of entity types | 1 | 200 |
-|Entity length in characters | 1 | 500 |
-|Count of trained models per project| 0 | 10 |
-|Count of deployments per project| 0 | 10 |
-
-## Naming limits
-
-| Item | Limits |
-|--|--|
-| Project name | You can only use letters `(a-z, A-Z)`, numbers `(0-9)`, and symbols `_ . -`, with no spaces. Maximum allowed length is 50 characters. |
-| Model name | You can only use letters `(a-z, A-Z)`, numbers `(0-9)` and symbols `_ . -`. Maximum allowed length is 50 characters. |
-| Deployment name | You can only use letters `(a-z, A-Z)`, numbers `(0-9)` and symbols `_ . -`. Maximum allowed length is 50 characters. |
-| Entity name| You can only use letters `(a-z, A-Z)`, numbers `(0-9)`, and all symbols except `:` and `$ & % * ( ) + ~ # / ?`. Maximum allowed length is 50 characters.|
-| Document name | You can only use letters `(a-z, A-Z)`, and numbers `(0-9)` with no spaces. |
--
-## Next steps
-
-* [Custom NER overview](overview.md)
cognitive-services Data Formats https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/custom-text-analytics-for-health/concepts/data-formats.md
- Title: Custom Text Analytics for health data formats-
-description: Learn about the data formats accepted by custom text analytics for health.
------ Previously updated : 04/14/2023----
-# Accepted data formats in custom text analytics for health
-
-Use this article to learn about formatting your data to be imported into custom text analytics for health.
-
-If you are trying to [import your data](../how-to/create-project.md#import-project) into custom Text Analytics for health, it has to follow a specific format. If you don't have data to import, you can [create your project](../how-to/create-project.md) and use the Language Studio to [label your documents](../how-to/label-data.md).
-
-Your Labels file should be in the `json` format below to be used when importing your labels into a project.
-
-```json
-{
- "projectFileVersion": "{API-VERSION}",
- "stringIndexType": "Utf16CodeUnit",
- "metadata": {
- "projectName": "{PROJECT-NAME}",
- "projectKind": "CustomHealthcare",
- "description": "Trying out custom Text Analytics for health",
- "language": "{LANGUAGE-CODE}",
- "multilingual": true,
- "storageInputContainerName": "{CONTAINER-NAME}",
- "settings": {}
- },
- "assets": {
- "projectKind": "CustomHealthcare",
- "entities": [
- {
- "category": "Entity1",
- "compositionSetting": "{COMPOSITION-SETTING}",
- "list": {
- "sublists": [
- {
- "listKey": "One",
- "synonyms": [
- {
- "language": "en",
- "values": [
- "EntityNumberOne",
- "FirstEntity"
- ]
- }
- ]
- }
- ]
- }
- },
- {
- "category": "Entity2"
- },
- {
- "category": "MedicationName",
- "list": {
- "sublists": [
- {
- "listKey": "research drugs",
- "synonyms": [
- {
- "language": "en",
- "values": [
- "rdrug a",
- "rdrug b"
- ]
- }
- ]
-
- }
- ]
-            },
- "prebuilts": "MedicationName"
- }
- ],
- "documents": [
- {
- "location": "{DOCUMENT-NAME}",
- "language": "{LANGUAGE-CODE}",
- "dataset": "{DATASET}",
- "entities": [
- {
- "regionOffset": 0,
- "regionLength": 500,
- "labels": [
- {
- "category": "Entity1",
- "offset": 25,
- "length": 10
- },
- {
- "category": "Entity2",
- "offset": 120,
- "length": 8
- }
- ]
- }
- ]
- },
- {
- "location": "{DOCUMENT-NAME}",
- "language": "{LANGUAGE-CODE}",
- "dataset": "{DATASET}",
- "entities": [
- {
- "regionOffset": 0,
- "regionLength": 100,
- "labels": [
- {
- "category": "Entity2",
- "offset": 20,
- "length": 5
- }
- ]
- }
- ]
- }
- ]
- }
-}
-
-```
-
-|Key |Placeholder |Value | Example |
-|--|--|--|--|
-| `multilingual` | `true`| A boolean value that enables you to have documents in multiple languages in your dataset and when your model is deployed you can query the model in any supported language (not necessarily included in your training documents). See [language support](../language-support.md) to learn more about multilingual support. | `true`|
-|`projectName`|`{PROJECT-NAME}`|Project name|`myproject`|
-| `storageInputContainerName` |`{CONTAINER-NAME}`|Container name|`mycontainer`|
-| `entities` | | Array containing all the entity types you have in the project. These are the entity types that will be extracted from your documents.| |
-| `category` | | The name of the entity type, which can be user defined for new entity definitions, or predefined for prebuilt entities. For more information, see the entity naming rules below.| |
-|`compositionSetting`|`{COMPOSITION-SETTING}`|Rule that defines how to manage multiple components in your entity. Options are `combineComponents` or `separateComponents`. |`combineComponents`|
-| `list` | | Array containing all the sublists you have in the project for a specific entity. Lists can be added to prebuilt entities or new entities with learned components.| |
-|`sublists`|`[]`|Array containing sublists. Each sublist is a key and its associated values.|`[]`|
-| `listKey`| `One` | A normalized value for the list of synonyms to map back to in prediction. | `One` |
-|`synonyms`|`[]`|Array containing all the synonyms|synonym|
-| `language` | `{LANGUAGE-CODE}` | A string specifying the language code for the synonym in your sublist. If your project is a multilingual project and you want to support your list of synonyms for all the languages in your project, you have to explicitly add your synonyms to each language. See [Language support](../language-support.md) for more information about supported language codes. |`en`|
-| `values`| `"EntityNumberOne"`, `"FirstEntity"` | A list of comma-separated strings that will be matched exactly for extraction and map to the list key. | `"EntityNumberOne"`, `"FirstEntity"` |
-| `prebuilts` | `MedicationName` | The name of the prebuilt component populating the prebuilt entity. [Prebuilt entities](../../text-analytics-for-health/concepts/health-entity-categories.md) are automatically loaded into your project by default but you can extend them with list components in your labels file. | `MedicationName` |
-| `documents` | | Array containing all the documents in your project and list of the entities labeled within each document. | [] |
-| `location` | `{DOCUMENT-NAME}` | The location of the documents in the storage container. Since all the documents are in the root of the container this should be the document name.|`doc1.txt`|
-| `dataset` | `{DATASET}` | The dataset split this document is assigned to before training. Learn more about data splitting [here](../how-to/train-model.md#data-splitting). Possible values for this field are `Train` and `Test`. |`Train`|
-| `regionOffset` | | The inclusive character position of the start of the text. |`0`|
-| `regionLength` | | The length of the bounding box in terms of UTF16 characters. Training only considers the data in this region. |`500`|
-| `category` | | The type of entity associated with the span of text specified. | `Entity1`|
-| `offset` | | The start position for the entity text. | `25`|
-| `length` | | The length of the entity in terms of UTF16 characters. | `20`|
-| `language` | `{LANGUAGE-CODE}` | A string specifying the language code for the document used in your project. If your project is a multilingual project, choose the language code of the majority of the documents. See [Language support](../language-support.md) for more information about supported language codes. |`en`|
-
-## Entity naming rules
-
-1. [Prebuilt entity names](../../text-analytics-for-health/concepts/health-entity-categories.md) are predefined. They must be populated with a prebuilt component and it must match the entity name.
-2. New user defined entities (entities with learned components or labeled text) can't use prebuilt entity names.
-3. New user defined entities can't be populated with prebuilt components, as prebuilt components must match their associated entity names and have no labeled data assigned to them in the documents array.
---
-## Next steps
-* You can import your labeled data into your project directly. Learn how to [import project](../how-to/create-project.md#import-project)
-* See the [how-to article](../how-to/label-data.md) more information about labeling your data.
-* When you're done labeling your data, you can [train your model](../how-to/train-model.md).
cognitive-services Entity Components https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/custom-text-analytics-for-health/concepts/entity-components.md
- Title: Entity components in custom Text Analytics for health-
-description: Learn how custom Text Analytics for health extracts entities from text
------ Previously updated : 04/14/2023----
-# Entity components in custom text analytics for health
-
-In custom Text Analytics for health, entities are relevant pieces of information that are extracted from your unstructured input text. An entity can be extracted by different methods. They can be learned through context, matched from a list, or detected by a prebuilt recognized entity. Every entity in your project is composed of one or more of these methods, which are defined as your entity's components. When an entity is defined by more than one component, their predictions can overlap. You can determine the behavior of an entity prediction when its components overlap by using a fixed set of options in the **Entity options**.
-
-## Component types
-
-An entity component determines a way you can extract the entity. An entity can contain one component, which would determine the only method that would be used to extract the entity, or multiple components to expand the ways in which the entity is defined and extracted.
-
-The [Text Analytics for health entities](../../text-analytics-for-health/concepts/health-entity-categories.md) are automatically loaded into your project as entities with prebuilt components. You can define list components for entities with prebuilt components but you can't add learned components. Similarly, you can create new entities with learned and list components, but you can't populate them with additional prebuilt components.
-
-### Learned component
-
-The learned component uses the entity tags you label your text with to train a machine learned model. The model learns to predict where the entity is, based on the context within the text. Your labels provide examples of where the entity is expected to be present in text, based on the meaning of the words around it and the words that were labeled. This component is only defined if you add labels to your data for the entity. If you do not label any data, it will not have a learned component.
-
-The Text Analytics for health entities, which by default have prebuilt components, can't be extended with learned components, meaning they do not require or accept further labeling to function.
--
-### List component
-
-The list component represents a fixed, closed set of related words along with their synonyms. The component performs an exact text match against the list of values you provide as synonyms. Each synonym belongs to a "list key", which can be used as the normalized, standard value for the synonym that will return in the output if the list component is matched. List keys are **not** used for matching.
-
-In multilingual projects, you can specify a different set of synonyms for each language. While using the prediction API, you can specify the language in the input request, which will only match the synonyms associated to that language.
---
-### Prebuilt component
-
-The [Text Analytics for health entities](../../text-analytics-for-health/concepts/health-entity-categories.md) are automatically loaded into your project as entities with prebuilt components. You can define list components for entities with prebuilt components but you cannot add learned components. Similarly, you can create new entities with learned and list components, but you cannot populate them with additional prebuilt components. Entities with prebuilt components are pretrained and can extract information relating to their categories without any labels.
---
-## Entity options
-
-When multiple components are defined for an entity, their predictions may overlap. When an overlap occurs, each entity's final prediction is determined by one of the following options.
-
-### Combine components
-
-Combine components as one entity when they overlap by taking the union of all the components.
-
-Use this to combine all components when they overlap. When components are combined, you get all the extra information that’s tied to a list or prebuilt component when they are present.
-
-#### Example
-
-Suppose you have an entity called Software that has a list component, which contains “Proseware OS” as an entry. In your input data, you have “I want to buy Proseware OS 9” with “Proseware OS 9” tagged as Software:
--
-By using combine components, the entity will return with the full context as “Proseware OS 9” along with the key from the list component:
--
-Suppose you had the same utterance but only “OS 9” was predicted by the learned component:
--
-With combine components, the entity will still return as “Proseware OS 9” with the key from the list component:
---
-### Don't combine components
-
-Each overlapping component will return as a separate instance of the entity. Apply your own logic after prediction with this option.
-
-#### Example
-
-Suppose you have an entity called Software that has a list component, which contains “Proseware Desktop” as an entry. In your labeled data, you have “I want to buy Proseware Desktop Pro” with “Proseware Desktop Pro” labeled as Software:
--
-When you do not combine components, the entity will return twice:
---
-## How to use components and options
-
-Components give you the flexibility to define your entity in more than one way. When you combine components, you make sure that each component is represented and you reduce the number of entities returned in your predictions.
-
-A common practice is to extend a prebuilt component with a list of values that the prebuilt might not support. For example, if you have a **Medication Name** entity, which has a `Medication.Name` prebuilt component added to it, the entity may not predict all the medication names specific to your domain. You can use a list component to extend the values of the Medication Name entity, thereby extending the prebuilt component with your own medication names.
-
-Other times you may be interested in extracting an entity through context, such as a **medical device**. You would label for the learned component of the medical device to learn _where_ a medical device is based on its position within the sentence. You may also have a list of medical devices that you already know beforehand that you'd like to always extract. Combining both components in one entity allows you to get both options for the entity.
-
-When you do not combine components, you allow every component to act as an independent entity extractor. One way of using this option is to separate the entities extracted by a list component from the ones extracted through the learned or prebuilt components, to handle and treat them differently.
--
-## Next steps
-
-* [Entities with prebuilt components](../../text-analytics-for-health/concepts/health-entity-categories.md)
cognitive-services Evaluation Metrics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/custom-text-analytics-for-health/concepts/evaluation-metrics.md
- Title: Custom text analytics for health evaluation metrics-
-description: Learn about evaluation metrics in custom Text Analytics for health
------ Previously updated : 04/14/2023----
-# Evaluation metrics for custom Text Analytics for health models
-
-Your [dataset is split](../how-to/train-model.md#data-splitting) into two parts: a set for training, and a set for testing. The training set is used to train the model, while the testing set is used as a test for the model after training to calculate the model performance and evaluation. The testing set is not introduced to the model through the training process, to make sure that the model is tested on new data.
-
-Model evaluation is triggered automatically after training is completed successfully. The evaluation process starts by using the trained model to predict user defined entities for documents in the test set, and compares them with the provided data labels (which establishes a baseline of truth). The results are returned so you can review the model’s performance. User defined entities are **included** in the evaluation factoring in Learned and List components; Text Analytics for health prebuilt entities are **not** factored in the model evaluation. For evaluation, custom Text Analytics for health uses the following metrics:
-
-* **Precision**: Measures how precise/accurate your model is. It is the ratio between the correctly identified positives (true positives) and all identified positives. The precision metric reveals how many of the predicted entities are correctly labeled.
-
- `Precision = #True_Positive / (#True_Positive + #False_Positive)`
-
-* **Recall**: Measures the model's ability to predict actual positive classes. It is the ratio between the predicted true positives and what was actually tagged. The recall metric reveals how many of the labeled entities are correctly predicted.
-
- `Recall = #True_Positive / (#True_Positive + #False_Negatives)`
-
-* **F1 score**: The F1 score is a function of Precision and Recall. It's needed when you seek a balance between Precision and Recall.
-
- `F1 Score = 2 * Precision * Recall / (Precision + Recall)` <br>
-
->[!NOTE]
-> Precision, recall and F1 score are calculated for each entity separately (*entity-level* evaluation) and for the model collectively (*model-level* evaluation).
-
-## Model-level and entity-level evaluation metrics
-
-Precision, recall, and F1 score are calculated for each entity separately (entity-level evaluation) and for the model collectively (model-level evaluation).
-
-The definitions of precision, recall, and F1 score are the same for both entity-level and model-level evaluations. However, the counts for *True Positives*, *False Positives*, and *False Negatives* can differ. For example, consider the following text.
-
-### Example
-
-*The first party of this contract is John Smith, resident of 5678 Main Rd., City of Frederick, state of Nebraska. And the second party is Forrest Ray, resident of 123-345 Integer Rd., City of Corona, state of New Mexico. There is also Fannie Thomas resident of 7890 River Road, city of Colorado Springs, State of Colorado.*
-
-The model extracting entities from this text could have the following predictions:
-
-| Entity | Predicted as | Actual type |
-|--|--|--|
-| John Smith | Person | Person |
-| Frederick | Person | City |
-| Forrest | City | Person |
-| Fannie Thomas | Person | Person |
-| Colorado Springs | City | City |
-
-### Entity-level evaluation for the *person* entity
-
-The model would have the following entity-level evaluation, for the *person* entity:
-
-| Key | Count | Explanation |
-|--|--|--|
-| True Positive | 2 | *John Smith* and *Fannie Thomas* were correctly predicted as *person*. |
-| False Positive | 1 | *Frederick* was incorrectly predicted as *person* while it should have been *city*. |
-| False Negative | 1 | *Forrest* was incorrectly predicted as *city* while it should have been *person*. |
-
-* **Precision**: `#True_Positive / (#True_Positive + #False_Positive)` = `2 / (2 + 1) = 0.67`
-* **Recall**: `#True_Positive / (#True_Positive + #False_Negatives)` = `2 / (2 + 1) = 0.67`
-* **F1 Score**: `2 * Precision * Recall / (Precision + Recall)` = `(2 * 0.67 * 0.67) / (0.67 + 0.67) = 0.67`
-
-### Entity-level evaluation for the *city* entity
-
-The model would have the following entity-level evaluation, for the *city* entity:
-
-| Key | Count | Explanation |
-|--|--|--|
-| True Positive | 1 | *Colorado Springs* was correctly predicted as *city*. |
-| False Positive | 1 | *Forrest* was incorrectly predicted as *city* while it should have been *person*. |
-| False Negative | 1 | *Frederick* was incorrectly predicted as *person* while it should have been *city*. |
-
-* **Precision** = `#True_Positive / (#True_Positive + #False_Positive)` = `1 / (1 + 1) = 0.5`
-* **Recall** = `#True_Positive / (#True_Positive + #False_Negatives)` = `1 / (1 + 1) = 0.5`
-* **F1 Score** = `2 * Precision * Recall / (Precision + Recall)` = `(2 * 0.5 * 0.5) / (0.5 + 0.5) = 0.5`
-
-### Model-level evaluation for the collective model
-
-The model would have the following evaluation for the model in its entirety:
-
-| Key | Count | Explanation |
-|--|--|--|
-| True Positive | 3 | *John Smith* and *Fannie Thomas* were correctly predicted as *person*. *Colorado Springs* was correctly predicted as *city*. This is the sum of true positives for all entities. |
-| False Positive | 2 | *Forrest* was incorrectly predicted as *city* while it should have been *person*. *Frederick* was incorrectly predicted as *person* while it should have been *city*. This is the sum of false positives for all entities. |
-| False Negative | 2 | *Forrest* was incorrectly predicted as *city* while it should have been *person*. *Frederick* was incorrectly predicted as *person* while it should have been *city*. This is the sum of false negatives for all entities. |
-
-* **Precision** = `#True_Positive / (#True_Positive + #False_Positive)` = `3 / (3 + 2) = 0.6`
-* **Recall** = `#True_Positive / (#True_Positive + #False_Negatives)` = `3 / (3 + 2) = 0.6`
-* **F1 Score** = `2 * Precision * Recall / (Precision + Recall)` = `(2 * 0.6 * 0.6) / (0.6 + 0.6) = 0.6`
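
To make the arithmetic concrete, the following Python sketch (illustrative only, not part of the service) reproduces the model-level numbers above from the raw counts:

```python
def evaluation_metrics(true_positives: int, false_positives: int, false_negatives: int):
    """Compute precision, recall, and F1 score from raw counts."""
    precision = true_positives / (true_positives + false_positives)
    recall = true_positives / (true_positives + false_negatives)
    f1 = 2 * precision * recall / (precision + recall)
    return precision, recall, f1

# Model-level counts from the example above: TP = 3, FP = 2, FN = 2
precision, recall, f1 = evaluation_metrics(3, 2, 2)
print(f"Precision={precision:.2f}, Recall={recall:.2f}, F1={f1:.2f}")  # 0.60, 0.60, 0.60
```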
-
-## Interpreting entity-level evaluation metrics
-
-So what does it actually mean to have high precision or high recall for a certain entity?
-
-| Recall | Precision | Interpretation |
-|--|--|--|
-| High | High | This entity is handled well by the model. |
-| Low | High | The model can't always extract this entity, but when it does, it does so with high confidence. |
-| High | Low | The model extracts this entity well, but with low confidence, because other entity types are sometimes incorrectly predicted as this one. |
-| Low | Low | This entity type is poorly handled by the model, because it isn't usually extracted. When it is, it's not with high confidence. |
-
-## Guidance
-
-After you train your model, you'll see guidance and recommendations on how to improve it. We recommend that your model covers every point in the guidance section.
-
-* Training set has enough data: When an entity type has fewer than 15 labeled instances in the training data, it can lead to lower accuracy due to the model not being adequately trained on these cases. In this case, consider adding more labeled data in the training set. You can check the *data distribution* tab for more guidance.
-
-* All entity types are present in test set: When the testing data lacks labeled instances for an entity type, the model's test performance may become less comprehensive due to untested scenarios. You can check the *test set data distribution* tab for more guidance.
-
-* Entity types are balanced within training and test sets: When sampling bias causes an inaccurate representation of an entity type's frequency, it can lead to lower accuracy due to the model expecting that entity type to occur too often or too rarely. You can check the *data distribution* tab for more guidance.
-
-* Entity types are evenly distributed between training and test sets: When the mix of entity types doesn't match between training and test sets, it can lead to lower testing accuracy due to the model being trained differently from how it's being tested. You can check the *data distribution* tab for more guidance.
-
-* Unclear distinction between entity types in training set: When the training data is similar for multiple entity types, it can lead to lower accuracy because the entity types may be frequently misclassified as each other. Review the following entity types and consider merging them if they're similar. Otherwise, add more examples to better distinguish them from each other. You can check the *confusion matrix* tab for more guidance.
--
-## Confusion matrix
-
-A confusion matrix is an N x N matrix used for model performance evaluation, where N is the number of entities.
-The matrix compares the expected labels with the ones predicted by the model.
-This gives a holistic view of how well the model is performing and what kinds of errors it is making.
-
-You can use the confusion matrix to identify entities that are too close to each other and often get mistaken (ambiguity). In this case, consider merging these entity types. If that isn't possible, consider adding more tagged examples of both entities to help the model differentiate between them.
-
-The highlighted diagonal of the confusion matrix shows the correctly predicted entities, where the predicted tag is the same as the actual tag.
--
-You can calculate the entity-level and model-level evaluation metrics from the confusion matrix:
-
-* The values on the diagonal are the *true positive* counts for each entity.
-* The sum of the values in an entity's row (excluding the diagonal) is the *false positive* count for that entity.
-* The sum of the values in an entity's column (excluding the diagonal) is the *false negative* count for that entity.
-
-Similarly,
-
-* The *true positive* count of the model is the sum of *true positives* for all entities.
-* The *false positive* count of the model is the sum of *false positives* for all entities.
-* The *false negative* count of the model is the sum of *false negatives* for all entities.
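
As an illustration, the following Python sketch derives the entity-level and model-level counts from a small, hypothetical confusion matrix. It assumes rows are the predicted tags and columns are the actual tags, matching the row/column description above; the values correspond to the *person*/*city* example earlier in this article.

```python
# Hypothetical confusion matrix: matrix[predicted][actual] = count
matrix = {
    "person": {"person": 2, "city": 1},  # Frederick was predicted as person but is actually a city
    "city":   {"person": 1, "city": 1},  # Forrest was predicted as city but is actually a person
}
entities = list(matrix)

for entity in entities:
    tp = matrix[entity][entity]                                                # diagonal value
    fp = sum(matrix[entity][other] for other in entities if other != entity)   # row, excluding diagonal
    fn = sum(matrix[other][entity] for other in entities if other != entity)   # column, excluding diagonal
    print(f"{entity}: TP={tp}, FP={fp}, FN={fn}")

# Model-level counts are the sums across all entities
model_tp = sum(matrix[e][e] for e in entities)
model_fp = sum(matrix[e][o] for e in entities for o in entities if o != e)
model_fn = sum(matrix[o][e] for e in entities for o in entities if o != e)
print(f"model: TP={model_tp}, FP={model_fp}, FN={model_fn}")  # TP=3, FP=2, FN=2
```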
-
-## Next steps
-
-* [Custom text analytics for health overview](../overview.md)
-* [View a model's performance in Language Studio](../how-to/view-model-evaluation.md)
-* [Train a model](../how-to/train-model.md)
cognitive-services Call Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/custom-text-analytics-for-health/how-to/call-api.md
- Title: Send a custom Text Analytics for health request to your custom model
-description: Learn how to send a request for custom text analytics for health.
------- Previously updated : 04/14/2023----
-# Send queries to your custom Text Analytics for health model
-
-After the deployment is added successfully, you can query the deployment to extract entities from your text based on the model you assigned to the deployment.
-You can query the deployment programmatically using the [Prediction API](https://aka.ms/ct-runtime-api).
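
If you prefer to script the call rather than use Language Studio, the following Python sketch shows the general asynchronous submit-and-poll pattern used by the Language runtime APIs. The task kind, API version, and placeholder values are assumptions for illustration; confirm the exact request shape against the [Prediction API](https://aka.ms/ct-runtime-api) reference.

```python
import time
import requests

# Placeholders: replace with your own resource endpoint, key, project, and deployment.
ENDPOINT = "https://<your-language-resource>.cognitiveservices.azure.com"
KEY = "<your-resource-key>"

body = {
    "displayName": "Extracting entities",
    "analysisInput": {"documents": [
        {"id": "1", "language": "en", "text": "Patient was discharged from Contoso Medical Center."}
    ]},
    "tasks": [{
        # The task kind and api-version below are illustrative assumptions; check the API reference.
        "kind": "CustomHealthcare",
        "taskName": "Custom TA4H task",
        "parameters": {"projectName": "<your-project>", "deploymentName": "<your-deployment>"}
    }]
}

headers = {"Ocp-Apim-Subscription-Key": KEY, "Content-Type": "application/json"}
submit = requests.post(
    f"{ENDPOINT}/language/analyze-text/jobs?api-version=2022-10-01-preview",
    headers=headers, json=body)
submit.raise_for_status()

# The job is asynchronous: poll the URL returned in the operation-location header.
job_url = submit.headers["operation-location"]
while True:
    job = requests.get(job_url, headers=headers).json()
    if job["status"] in ("succeeded", "failed"):
        break
    time.sleep(2)
print(job)
```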
-
-## Test deployed model
-
-You can use Language Studio to submit the custom Text Analytics for health task and visualize the results.
----
-## Send a custom text analytics for health request to your model
-
-# [Language Studio](#tab/language-studio)
---
-# [REST API](#tab/rest-api)
-
-First you will need to get your resource key and endpoint:
---
-### Submit a custom Text Analytics for health task
--
-### Get task results
-----
-## Next steps
-
-* [Custom text analytics for health](../overview.md)
cognitive-services Create Project https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/custom-text-analytics-for-health/how-to/create-project.md
- Title: Using Azure resources in custom Text Analytics for health-
-description: Learn about the steps for using Azure resources with custom text analytics for health.
------ Previously updated : 04/14/2023----
-# How to create custom Text Analytics for health project
-
-Use this article to learn how to set up the requirements for starting with custom text analytics for health and create a project.
-
-## Prerequisites
-
-Before you start using custom text analytics for health, you need:
-
-* An Azure subscription - [Create one for free](https://azure.microsoft.com/free/cognitive-services).
-
-## Create a Language resource
-
-Before you start using custom text analytics for health, you'll need an Azure Language resource. It's recommended to create your Language resource and connect a storage account to it in the Azure portal. Creating a resource in the Azure portal lets you create an Azure storage account at the same time, with all of the required permissions preconfigured. You can also read further in the article to learn how to use a pre-existing resource, and configure it to work with custom text analytics for health.
-
-You also will need an Azure storage account where you will upload your `.txt` documents that will be used to train a model to extract entities.
-
-> [!NOTE]
-> * You need to have an **owner** role assigned on the resource group to create a Language resource.
-> * If you connect a pre-existing storage account, you should have an owner role assigned to it.
-
-## Create Language resource and connect storage account
-
-You can create a resource in the following ways:
-
-* The Azure portal
-* Language Studio
-* PowerShell
-
-> [!Note]
-> You shouldn't move the storage account to a different resource group or subscription once it's linked with the Language resource.
-----
-> [!NOTE]
-> * The process of connecting a storage account to your Language resource is irreversible; it cannot be disconnected later.
-> * You can only connect your language resource to one storage account.
-
-## Using a pre-existing Language resource
--
-## Create a custom Text Analytics for health project
-
-Once your resource and storage container are configured, create a new custom text analytics for health project. A project is a work area for building your custom AI models based on your data. Your project can only be accessed by you and others who have access to the Azure resource being used. If you have labeled data, you can use it to get started by [importing a project](#import-project).
-
-### [Language Studio](#tab/language-studio)
--
-### [REST APIs](#tab/rest-api)
----
-## Import project
-
-If you have already labeled data, you can use it to get started with the service. Make sure that your labeled data follows the [accepted data formats](../concepts/data-formats.md).
-
-### [Language Studio](#tab/language-studio)
--
-### [REST APIs](#tab/rest-api)
----
-## Get project details
-
-### [Language Studio](#tab/language-studio)
--
-### [REST APIs](#tab/rest-api)
----
-## Delete project
-
-### [Language Studio](#tab/language-studio)
--
-### [REST APIs](#tab/rest-api)
----
-## Next steps
-
-* You should have an idea of the [project schema](design-schema.md) you will use to label your data.
-
-* After you define your schema, you can start [labeling your data](label-data.md), which will be used for model training, evaluation, and finally making predictions.
cognitive-services Deploy Model https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/custom-text-analytics-for-health/how-to/deploy-model.md
- Title: Deploy a custom Text Analytics for health model-
-description: Learn about deploying a model for custom Text Analytics for health.
------ Previously updated : 04/14/2023----
-# Deploy a custom text analytics for health model
-
-Once you're satisfied with how your model performs, it's ready to be deployed and used to recognize entities in text. Deploying a model makes it available for use through the [prediction API](https://aka.ms/ct-runtime-swagger).
-
-## Prerequisites
-
-* A successfully [created project](create-project.md) with a configured Azure storage account.
-* Text data that has [been uploaded](design-schema.md#data-preparation) to your storage account.
-* [Labeled data](label-data.md) and a successfully [trained model](train-model.md).
-* Reviewed the [model evaluation details](view-model-evaluation.md) to determine how your model is performing.
-
-For more information, see [project development lifecycle](../overview.md#project-development-lifecycle).
-
-## Deploy model
-
-After you've reviewed your model's performance and decided it can be used in your environment, you need to assign it to a deployment. Assigning the model to a deployment makes it available for use through the [prediction API](https://aka.ms/ct-runtime-swagger). It is recommended to create a deployment named *production* to which you assign the best model you have built so far and use it in your system. You can create another deployment called *staging* to which you can assign the model you're currently working on to be able to test it. You can have a maximum of 10 deployments in your project.
-
-# [Language Studio](#tab/language-studio)
-
-
-# [REST APIs](#tab/rest-api)
-
-### Submit deployment job
--
-### Get deployment job status
----
-## Swap deployments
-
-After you're done testing a model assigned to one deployment and you want to assign this model to another deployment, you can swap the two deployments. Swapping deployments involves taking the model assigned to the first deployment and assigning it to the second deployment, and taking the model assigned to the second deployment and assigning it to the first. You can use this process to swap your *production* and *staging* deployments when you want to take the model assigned to *staging* and assign it to *production*.
-
-# [Language Studio](#tab/language-studio)
--
-# [REST APIs](#tab/rest-api)
-----
-## Delete deployment
-
-# [Language Studio](#tab/language-studio)
--
-# [REST APIs](#tab/rest-api)
----
-## Assign deployment resources
-
-You can [deploy your project to multiple regions](../../concepts/custom-features/multi-region-deployment.md) by assigning different Language resources that exist in different regions.
-
-# [Language Studio](#tab/language-studio)
--
-# [REST APIs](#tab/rest-api)
----
-## Unassign deployment resources
-
-When unassigning or removing a deployment resource from a project, you will also delete all the deployments that have been deployed to that resource's region.
-
-# [Language Studio](#tab/language-studio)
--
-# [REST APIs](#tab/rest-api)
----
-## Next steps
-
-After you have a deployment, you can use it to [extract entities](call-api.md) from text.
cognitive-services Design Schema https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/custom-text-analytics-for-health/how-to/design-schema.md
- Title: Preparing data and designing a schema for custom Text Analytics for health-
-description: Learn about how to select and prepare data, to be successful in creating custom TA4H projects.
------ Previously updated : 04/14/2023----
-# How to prepare data and define a schema for custom Text Analytics for health
-
-In order to create a custom TA4H model, you will need quality data to train it. This article covers how you should select and prepare your data, along with defining a schema. Defining the schema is the first step in the [project development lifecycle](../overview.md#project-development-lifecycle), and it entails defining the entity types or categories that you need your model to extract from the text at runtime.
-
-## Schema design
-
-Custom Text Analytics for health allows you to extend and customize the Text Analytics for health entity map. The first step of the process is building your schema, which allows you to define the new entity types or categories that you need your model to extract from text in addition to the Text Analytics for health existing entities at runtime.
-
-* Review documents in your dataset to be familiar with their format and structure.
-
-* Identify the entities you want to extract from the data.
-
- For example, if you are extracting entities from support emails, you might need to extract "Customer name", "Product name", "Request date", and "Contact information".
-
-* Avoid entity types ambiguity.
-
- **Ambiguity** happens when entity types you select are similar to each other. The more ambiguous your schema the more labeled data you will need to differentiate between different entity types.
-
- For example, if you are extracting data from a legal contract, extracting "Name of first party" and "Name of second party" requires more examples to overcome ambiguity, since the names of both parties look similar. Avoiding ambiguity saves time and effort and yields better results.
-
-* Avoid complex entities. Complex entities can be difficult to pick out precisely from text; consider breaking them down into multiple simpler entities.
-
- For example, extracting "Address" would be challenging if it's not broken down to smaller entities. There are so many variations of how addresses appear, it would take large number of labeled entities to teach the model to extract an address, as a whole, without breaking it down. However, if you replace "Address" with "Street Name", "PO Box", "City", "State" and "Zip", the model will require fewer labels per entity.
--
-## Add entities
-
-To add entities to your project:
-
-1. Move to **Entities** pivot from the top of the page.
-
-2. [Text Analytics for health entities](../../text-analytics-for-health/concepts/health-entity-categories.md) are automatically loaded into your project. To add additional entity categories, select **Add** from the top menu. You'll be prompted to enter a name before the entity is created.
-
-3. After creating an entity, you'll be routed to the entity details page where you can define the composition settings for this entity.
-
-4. Entities are defined by [entity components](../concepts/entity-components.md): learned, list or prebuilt. Text Analytics for health entities are by default populated with the prebuilt component and cannot have learned components. Your newly defined entities can be populated with the learned component once you add labels for them in your data but cannot be populated with the prebuilt component.
-
-5. You can add a [list](../concepts/entity-components.md#list-component) component to any of your entities.
-
-
-### Add list component
-
-To add a **list** component, select **Add new list**. You can add multiple lists to each entity.
-
-1. To create a new list, enter the list key in the *Enter value* text box. This is the normalized value that will be returned when any of the synonym values is extracted.
-
-2. For multilingual projects, from the *language* drop-down menu, select the language of the synonyms list and start typing in your synonyms and hit enter after each one. It is recommended to have synonyms lists in multiple languages.
-
- <!--:::image type="content" source="../media/add-list-component.png" alt-text="A screenshot showing a list component in Language Studio." lightbox="../media/add-list-component.png":::-->
-
-### Define entity options
-
-Change to the **Entity options** pivot in the entity details page. When multiple components are defined for an entity, their predictions may overlap. When an overlap occurs, each entity's final prediction is determined based on the [entity option](../concepts/entity-components.md#entity-options) you select in this step. Select the one that you want to apply to this entity and click on the **Save** button at the top.
-
- <!--:::image type="content" source="../media/entity-options.png" alt-text="A screenshot showing an entity option in Language Studio." lightbox="../media/entity-options.png":::-->
--
-After you create your entities, you can come back and edit them. You can **Edit entity components** or **delete** them by selecting this option from the top menu.
--
-## Data selection
-
-The quality of data you train your model with affects model performance greatly.
-
-* Use real-life data that reflects your domain's problem space to effectively train your model. You can use synthetic data to accelerate the initial model training process, but it will likely differ from your real-life data and make your model less effective when used.
-
-* Balance your data distribution as much as possible without deviating far from the distribution in real life. For example, if you are training your model to extract entities from legal documents that may come in many different formats and languages, you should provide examples that reflect the diversity you would expect to see in real life.
-
-* Use diverse data whenever possible to avoid overfitting your model. Less diversity in training data may lead to your model learning spurious correlations that may not exist in real-life data.
-
-* Avoid duplicate documents in your data. Duplicate data has a negative effect on the training process, model metrics, and model performance.
-
-* Consider where your data comes from. If you are collecting data from one person, department, or part of your scenario, you are likely missing diversity that may be important for your model to learn about.
-
-> [!NOTE]
-> If your documents are in multiple languages, select the **enable multi-lingual** option during [project creation](../quickstart.md) and set the **language** option to the language of the majority of your documents.
-
-## Data preparation
-
-As a prerequisite for creating a project, your training data needs to be uploaded to a blob container in your storage account. You can create and upload training documents from Azure directly, or by using the Azure Storage Explorer tool. Using the Azure Storage Explorer tool allows you to upload more data quickly.
-
-* [Create and upload documents from Azure](../../../../storage/blobs/storage-quickstart-blobs-portal.md#create-a-container)
-* [Create and upload documents using Azure Storage Explorer](../../../../vs-azure-tools-storage-explorer-blobs.md)
-
-You can only use `.txt` documents. If your data is in another format, you can use the [CLUtils parse command](https://github.com/microsoft/CognitiveServicesLanguageUtilities/blob/main/CustomTextAnalytics.CLUtils/Solution/CogSLanguageUtilities.ViewLayer.CliCommands/Commands/ParseCommand/README.md) to change your document format.
-
-You can upload an annotated dataset, or you can upload an unannotated one and [label your data](../how-to/label-data.md) in Language studio.
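
If you have many documents, you can also script the upload with the Azure Storage Blob SDK for Python. The following is a minimal sketch; the connection string, container name, and local folder are placeholders for your own values.

```python
from pathlib import Path
from azure.storage.blob import ContainerClient  # pip install azure-storage-blob

# Placeholders: use your storage account connection string and the container linked to your project.
container = ContainerClient.from_connection_string(
    conn_str="<your-storage-connection-string>",
    container_name="<your-container-name>",
)

# Upload every .txt training document from a local folder to the container.
for path in Path("training-docs").glob("*.txt"):
    with open(path, "rb") as data:
        container.upload_blob(name=path.name, data=data, overwrite=True)
        print(f"Uploaded {path.name}")
```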
-
-## Test set
-
-When defining the testing set, make sure to include example documents that are not present in the training set. Defining the testing set is an important step to calculate the [model performance](view-model-evaluation.md#model-details). Also, make sure that the testing set includes documents that represent all entities used in your project.
-
-## Next steps
-
-If you haven't already, create a custom Text Analytics for health project. If it's your first time using custom Text Analytics for health, consider following the [quickstart](../quickstart.md) to create an example project. You can also see the [how-to article](../how-to/create-project.md) for more details on what you need to create a project.
cognitive-services Fail Over https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/custom-text-analytics-for-health/how-to/fail-over.md
- Title: Back up and recover your custom Text Analytics for health models-
-description: Learn how to save and recover your custom Text Analytics for health models.
------ Previously updated : 04/14/2023----
-# Back up and recover your custom Text Analytics for health models
-
-When you create a Language resource, you specify a region for it to be created in. From then on, your resource and all of the operations related to it take place in the specified Azure server region. It's rare, but not impossible, to encounter a network issue that affects an entire region. If your solution needs to always be available, then you should design it to fail over into another region. This requires two Azure Language resources in different regions and synchronizing custom models across them.
-
-If your app or business depends on the use of a custom Text Analytics for health model, we recommend that you create a replica of your project in an additional supported region. If a regional outage occurs, you can then access your model in the other fail-over region where you replicated your project.
-
-Replicating a project means that you export your project metadata and assets, and import them into a new project. This only makes a copy of your project settings and tagged data. You still need to [train](./train-model.md) and [deploy](./deploy-model.md) the models to be available for use with [prediction APIs](https://aka.ms/ct-runtime-swagger).
-
-In this article, you'll learn how to use the export and import APIs to replicate your project from one resource to another that exists in a different supported geographical region, along with guidance on keeping your projects in sync and the changes needed to your runtime consumption.
-
-## Prerequisites
-
-* Two Azure Language resources in different Azure regions. [Create your resources](./create-project.md#create-a-language-resource) and connect them to an Azure storage account. It's recommended that you connect each of your Language resources to different storage accounts. Each storage account should be located in the same respective regions that your separate Language resources are in. You can follow the [quickstart](../quickstart.md?pivots=rest-api#create-a-new-azure-language-resource-and-azure-storage-account) to create an additional Language resource and storage account.
--
-## Get your resource keys and endpoints
-
-Use the following steps to get the keys and endpoint of your primary and secondary resources. These will be used in the following steps.
--
-> [!TIP]
-> Keep a note of keys and endpoints for both primary and secondary resources as well as the primary and secondary container names. Use these values to replace the following placeholders:
-`{PRIMARY-ENDPOINT}`, `{PRIMARY-RESOURCE-KEY}`, `{PRIMARY-CONTAINER-NAME}`, `{SECONDARY-ENDPOINT}`, `{SECONDARY-RESOURCE-KEY}`, and `{SECONDARY-CONTAINER-NAME}`.
-> Also take note of your project name, your model name and your deployment name. Use these values to replace the following placeholders: `{PROJECT-NAME}`, `{MODEL-NAME}` and `{DEPLOYMENT-NAME}`.
-
-## Export your primary project assets
-
-Start by exporting the project assets from the project in your primary resource.
-
-### Submit export job
-
-Replace the placeholders in the following request with your `{PRIMARY-ENDPOINT}` and `{PRIMARY-RESOURCE-KEY}` that you obtained in the first step.
--
-### Get export job status
-
-Replace the placeholders in the following request with your `{PRIMARY-ENDPOINT}` and `{PRIMARY-RESOURCE-KEY}` that you obtained in the first step.
---
-Copy the response body as you will use it as the body for the next import job.
-
-## Import to a new project
-
-Now go ahead and import the exported project assets in your new project in the secondary region so you can replicate it.
-
-### Submit import job
-
-Replace the placeholders in the following request with your `{SECONDARY-ENDPOINT}`, `{SECONDARY-RESOURCE-KEY}`, and `{SECONDARY-CONTAINER-NAME}` that you obtained in the first step.
--
-### Get import job status
-
-Replace the placeholders in the following request with your `{SECONDARY-ENDPOINT}` and `{SECONDARY-RESOURCE-KEY}` that you obtained in the first step.
---
-## Train your model
-
-After importing your project, you have only copied the project's metadata and assets. You still need to train your model, which will incur usage on your account.
-
-### Submit training job
-
-Replace the placeholders in the following request with your `{SECONDARY-ENDPOINT}` and `{SECONDARY-RESOURCE-KEY}` that you obtained in the first step.
---
-### Get training status
-
-Replace the placeholders in the following request with your `{SECONDARY-ENDPOINT}` and `{SECONDARY-RESOURCE-KEY}` that you obtained in the first step.
--
-## Deploy your model
-
-This is the step where you make your trained model available for consumption via the [runtime prediction API](https://aka.ms/ct-runtime-swagger).
-
-> [!TIP]
-> Use the same deployment name as your primary project for easier maintenance and minimal changes to your system to handle redirecting your traffic.
-
-### Submit deployment job
-
-Replace the placeholders in the following request with your `{SECONDARY-ENDPOINT}` and `{SECONDARY-RESOURCE-KEY}` that you obtained in the first step.
--
-### Get the deployment status
-
-Replace the placeholders in the following request with your `{SECONDARY-ENDPOINT}` and `{SECONDARY-RESOURCE-KEY}` that you obtained in the first step.
--
-## Changes in calling the runtime
-
-Within your system, at the step where you call the [runtime prediction API](https://aka.ms/ct-runtime-swagger), check the response code returned from the submit task API. If you observe a **consistent** failure in submitting the request, this could indicate an outage in your primary region. A single failure doesn't mean an outage; it may be a transient issue. Retry submitting the job through the secondary resource you created. For the second request, use your `{SECONDARY-ENDPOINT}` and `{SECONDARY-RESOURCE-KEY}`. If you followed the steps above, `{PROJECT-NAME}` and `{DEPLOYMENT-NAME}` are the same, so no changes are required to the request body.
-
-If you revert to using your secondary resource, you may observe a slight increase in latency because of the difference in regions where your model is deployed.
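
The retry-then-fail-over behavior described above can be expressed as a small wrapper. The Python sketch below is illustrative: `submit_prediction_job` is a hypothetical helper, and the API version in the URL is an assumption; adapt both to the actual request your system makes.

```python
import requests

def submit_prediction_job(endpoint: str, key: str, body: dict) -> str:
    """Hypothetical helper: submit a runtime job and return the URL to poll for results."""
    response = requests.post(
        f"{endpoint}/language/analyze-text/jobs?api-version=2022-10-01-preview",
        headers={"Ocp-Apim-Subscription-Key": key, "Content-Type": "application/json"},
        json=body,
    )
    response.raise_for_status()
    return response.headers["operation-location"]

def submit_with_failover(body: dict, primary: tuple, secondary: tuple, attempts: int = 3) -> str:
    """Try the primary region a few times; on consistent failure, fall back to the secondary region."""
    primary_endpoint, primary_key = primary
    for _ in range(attempts):
        try:
            return submit_prediction_job(primary_endpoint, primary_key, body)
        except requests.RequestException:
            continue  # a single failure may be transient; retry before failing over
    secondary_endpoint, secondary_key = secondary
    return submit_prediction_job(secondary_endpoint, secondary_key, body)
```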
-
-## Check if your projects are out of sync
-
-Maintaining the freshness of both projects is an important part of the process. You need to frequently check whether any updates were made to your primary project so that you can move them over to your secondary project. This way, if your primary region fails and you move to the secondary region, you should expect similar model performance since it already contains the latest updates. Setting the frequency of checking if your projects are in sync is an important choice. We recommend that you do this check daily in order to guarantee the freshness of data in your secondary model.
-
-### Get project details
-
-Use the following URL to get your project details; one of the keys returned in the body indicates the last modified date of the project.
-Repeat the following step twice, once for your primary project and once for your secondary project, and compare the timestamps returned for both of them to check whether they are out of sync.
-
- [!INCLUDE [get project details](../includes/rest-api/get-project-details.md)]
--
-Repeat the same steps for your replicated project using `{SECONDARY-ENDPOINT}` and `{SECONDARY-RESOURCE-KEY}`. Compare the returned `lastModifiedDateTime` from both projects. If your primary project was modified more recently than your secondary one, you need to repeat the steps of [exporting](#export-your-primary-project-assets), [importing](#import-to-a-new-project), [training](#train-your-model) and [deploying](#deploy-your-model).
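
As a rough illustration, the freshness comparison can be scripted. The sketch below assumes the standard Language authoring endpoint shape, an illustrative API version, and that the response body contains the `lastModifiedDateTime` key mentioned above; verify the exact request against the [Authoring REST API reference](https://aka.ms/ct-authoring-swagger).

```python
import requests

def last_modified(endpoint: str, key: str, project_name: str) -> str:
    """Return the project's lastModifiedDateTime (URL shape and api-version are assumptions)."""
    response = requests.get(
        f"{endpoint}/language/authoring/analyze-text/projects/{project_name}?api-version=2022-10-01-preview",
        headers={"Ocp-Apim-Subscription-Key": key},
    )
    response.raise_for_status()
    return response.json()["lastModifiedDateTime"]

primary = last_modified("<PRIMARY-ENDPOINT>", "<PRIMARY-RESOURCE-KEY>", "<PROJECT-NAME>")
secondary = last_modified("<SECONDARY-ENDPOINT>", "<SECONDARY-RESOURCE-KEY>", "<PROJECT-NAME>")

# ISO 8601 timestamps in the same format compare correctly as strings.
if primary > secondary:
    print("Projects are out of sync: re-export, import, train, and deploy to the secondary resource.")
```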
--
-## Next steps
-
-In this article, you have learned how to use the export and import APIs to replicate your project to a secondary Language resource in other region. Next, explore the API reference docs to see what else you can do with authoring APIs.
-
-* [Authoring REST API reference](https://aka.ms/ct-authoring-swagger)
-
-* [Runtime prediction REST API reference](https://aka.ms/ct-runtime-swagger)
cognitive-services Label Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/custom-text-analytics-for-health/how-to/label-data.md
- Title: How to label your data for custom Text Analytics for health-
-description: Learn how to label your data for use with custom Text Analytics for health.
------ Previously updated : 04/14/2023----
-# Label your data using the Language Studio
-
-Data labeling is a crucial step in development lifecycle. In this step, you label your documents with the new entities you defined in your schema to populate their learned components. This data will be used in the next step when training your model so that your model can learn from the labeled data to know which entities to extract. If you already have labeled data, you can directly [import](create-project.md#import-project) it into your project, but you need to make sure that your data follows the [accepted data format](../concepts/data-formats.md). See [create project](create-project.md#import-project) to learn more about importing labeled data into your project. If your data isn't labeled already, you can label it in the [Language Studio](https://aka.ms/languageStudio).
-
-## Prerequisites
-
-Before you can label your data, you need:
-
-* A successfully [created project](create-project.md) with a configured Azure blob storage account
-* Text data that [has been uploaded](design-schema.md#data-preparation) to your storage account.
-
-See the [project development lifecycle](../overview.md#project-development-lifecycle) for more information.
-
-## Data labeling guidelines
-
-After preparing your data, designing your schema and creating your project, you will need to label your data. Labeling your data is important so your model knows which words will be associated with the entity types you need to extract. When you label your data in [Language Studio](https://aka.ms/languageStudio) (or import labeled data), these labels are stored in the JSON document in your storage container that you have connected to this project.
-
-As you label your data, keep in mind:
-
-* You can't add labels for Text Analytics for health entities as they're pretrained prebuilt entities. You can only add labels to new entity categories that you defined during schema definition.
-
-If you want to improve the recall for a prebuilt entity, you can extend it by adding a list component while you are [defining your schema](design-schema.md).
-
-* In general, more labeled data leads to better results, provided the data is labeled accurately.
-
-* The precision, consistency and completeness of your labeled data are key factors to determining model performance.
-
- * **Label precisely**: Label each entity to its right type always. Only include what you want extracted, avoid unnecessary data in your labels.
- * **Label consistently**: The same entity should have the same label across all the documents.
- * **Label completely**: Label all the instances of the entity in all your documents.
-
- > [!NOTE]
- > There is no fixed number of labels that can guarantee your model will perform the best. Model performance is dependent on possible ambiguity in your schema, and the quality of your labeled data. Nevertheless, we recommend having around 50 labeled instances per entity type.
-
-## Label your data
-
-Use the following steps to label your data:
-
-1. Go to your project page in [Language Studio](https://aka.ms/languageStudio).
-
-2. From the left side menu, select **Data labeling**. You can find a list of all documents in your storage container.
-
- <!--:::image type="content" source="../media/tagging-files-view.png" alt-text="A screenshot showing the Language Studio screen for labeling data." lightbox="../media/tagging-files-view.png":::-->
-
- >[!TIP]
- > You can use the filters in top menu to view the unlabeled documents so that you can start labeling them.
- > You can also use the filters to view the documents that are labeled with a specific entity type.
-
-3. Change to a single document view from the left side in the top menu or select a specific document to start labeling. You can find a list of all `.txt` documents available in your project to the left. You can use the **Back** and **Next** button from the bottom of the page to navigate through your documents.
-
- > [!NOTE]
- > If you enabled multiple languages for your project, you will find a **Language** dropdown in the top menu, which lets you select the language of each document. Hebrew is not supported with multi-lingual projects.
-
-4. In the right side pane, you can use the **Add entity type** button to add additional entities to your project that you missed during schema definition.
-
- <!--:::image type="content" source="../media/tag-1.png" alt-text="A screenshot showing complete data labeling." lightbox="../media/tag-1.png":::-->
-
-5. You have two options to label your document:
-
- |Option |Description |
- |||
- |Label using a brush | Select the brush icon next to an entity type in the right pane, then highlight the text in the document you want to annotate with this entity type. |
- |Label using a menu | Highlight the word you want to label as an entity, and a menu will appear. Select the entity type you want to assign for this entity. |
-
- The below screenshot shows labeling using a brush.
-
- :::image type="content" source="../media/tag-options.png" alt-text="A screenshot showing the labeling options offered in Custom NER." lightbox="../media/tag-options.png":::
-
-6. In the right side pane under the **Labels** pivot you can find all the entity types in your project and the count of labeled instances per each. The prebuilt entities will be shown for reference but you will not be able to label for these prebuilt entities as they are pretrained.
-
-7. In the bottom section of the right side pane you can add the current document you are viewing to the training set or the testing set. By default all the documents are added to your training set. See [training and testing sets](train-model.md#data-splitting) for information on how they are used for model training and evaluation.
-
- > [!TIP]
- > If you are planning on using **Automatic** data splitting, use the default option of assigning all the documents into your training set.
-
-8. Under the **Distribution** pivot you can view the distribution across training and testing sets. You have two options for viewing:
- * *Total instances* where you can view count of all labeled instances of a specific entity type.
- * *Documents with at least one label* where each document is counted if it contains at least one labeled instance of this entity.
-
-9. When you're labeling, your changes are synced periodically. If they haven't been saved yet, you'll see a warning at the top of your page. If you want to save manually, select the **Save labels** button at the bottom of the page.
-
-## Remove labels
-
-To remove a label:
-
-1. Select the entity you want to remove a label from.
-2. Scroll through the menu that appears, and select **Remove label**.
-
-## Delete entities
-
-You cannot delete any of the Text Analytics for health pretrained entities because they have a prebuilt component. You are only permitted to delete newly defined entity categories. To delete an entity, select the delete icon next to the entity you want to remove. Deleting an entity removes all its labeled instances from your dataset.
-
-## Next steps
-
-After you've labeled your data, you can begin [training a model](train-model.md) that will learn based on your data.
cognitive-services Train Model https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/custom-text-analytics-for-health/how-to/train-model.md
- Title: How to train your custom Text Analytics for health model-
-description: Learn about how to train your model for custom Text Analytics for health.
------ Previously updated : 04/14/2023----
-# Train your custom Text Analytics for health model
-
-Training is the process where the model learns from your [labeled data](label-data.md). After training is completed, you'll be able to view the [model's performance](view-model-evaluation.md) to determine if you need to improve your model.
-
-To train a model, you start a training job and only successfully completed jobs create a model. Training jobs expire after seven days, which means you won't be able to retrieve the job details after this time. If your training job completed successfully and a model was created, the model won't be affected. You can only have one training job running at a time, and you can't start other jobs in the same project.
-
-The training times can be anywhere from a few minutes when dealing with few documents, up to several hours depending on the dataset size and the complexity of your schema.
--
-## Prerequisites
-
-* A successfully [created project](create-project.md) with a configured Azure blob storage account
-* Text data that [has been uploaded](design-schema.md#data-preparation) to your storage account.
-* [Labeled data](label-data.md)
-
-See the [project development lifecycle](../overview.md#project-development-lifecycle) for more information.
-
-## Data splitting
-
-Before you start the training process, labeled documents in your project are divided into a training set and a testing set. Each one of them serves a different function.
-The **training set** is used in training the model; this is the set from which the model learns the labeled entities and what spans of text are to be extracted as entities.
-The **testing set** is a blind set that isn't introduced to the model during training but only during evaluation.
-After model training is completed successfully, the model is used to make predictions on the documents in the testing set, and based on these predictions the [evaluation metrics](../concepts/evaluation-metrics.md) are calculated. Model training and evaluation are only for newly defined entities with learned components; therefore, Text Analytics for health entities are excluded from model training and evaluation because they are entities with prebuilt components. It's recommended to make sure that all your labeled entities are adequately represented in both the training and testing set.
-
-Custom Text Analytics for health supports two methods for data splitting:
-
-* **Automatically splitting the testing set from training data**: The system splits your labeled data between the training and testing sets, according to the percentages you choose. The recommended percentage split is 80% for training and 20% for testing (see the sketch after this list).
-
- > [!NOTE]
- > If you choose the **Automatically splitting the testing set from training data** option, only the data assigned to training set will be split according to the percentages provided.
-
-* **Use a manual split of training and testing data**: This method enables users to define which labeled documents should belong to which set. This step is only enabled if you have added documents to your testing set during [data labeling](label-data.md).
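
To illustrate what the automatic option does in spirit, here's a minimal Python sketch of a random 80/20 split over document names. The service performs this split for you; the code is purely illustrative.

```python
import random

def split_documents(documents: list, training_ratio: float = 0.8, seed: int = 0):
    """Shuffle the documents and split them into training and testing sets."""
    shuffled = documents[:]
    random.Random(seed).shuffle(shuffled)
    cutoff = int(len(shuffled) * training_ratio)
    return shuffled[:cutoff], shuffled[cutoff:]

training_set, testing_set = split_documents([f"doc-{i}.txt" for i in range(100)])
print(len(training_set), len(testing_set))  # 80 20
```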
-
-## Train model
-
-# [Language studio](#tab/Language-studio)
--
-# [REST APIs](#tab/REST-APIs)
-
-### Start training job
--
-### Get training job status
-
-Training could take some time depending on the size of your training data and the complexity of your schema. You can use the following request to keep polling the status of the training job until it's successfully completed.
-
- [!INCLUDE [get training model status](../includes/rest-api/get-training-status.md)]
---
-### Cancel training job
-
-# [Language Studio](#tab/language-studio)
--
-# [REST APIs](#tab/rest-api)
----
-## Next steps
-
-After training is completed, you'll be able to view the [model's performance](view-model-evaluation.md) to optionally improve your model if needed. Once you're satisfied with your model, you can deploy it, making it available to use for [extracting entities](call-api.md) from text.
cognitive-services View Model Evaluation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/custom-text-analytics-for-health/how-to/view-model-evaluation.md
- Title: Evaluate a Custom Text Analytics for health model-
-description: Learn how to evaluate and score your Custom Text Analytics for health model
------ Previously updated : 04/14/2023-----
-# View a custom text analytics for health model's evaluation and details
-
-After your model has finished training, you can view the model performance and see the extracted entities for the documents in the test set.
-
-> [!NOTE]
-> Using the **Automatically split the testing set from training data** option may result in different model evaluation results every time you train a new model, as the test set is selected randomly from the data. To make sure that the evaluation is calculated on the same test set every time you train a model, make sure to use the **Use a manual split of training and testing data** option when starting a training job and define your **Test** documents when [labeling data](label-data.md).
-
-## Prerequisites
-
-Before viewing model evaluation, you need:
-
-* A successfully [created project](create-project.md) with a configured Azure blob storage account.
-* Text data that [has been uploaded](design-schema.md#data-preparation) to your storage account.
-* [Labeled data](label-data.md)
-* A [successfully trained model](train-model.md)
--
-## Model details
-
-There are several metrics you can use to evaluate your model. See the [performance metrics](../concepts/evaluation-metrics.md) article for more information on the model details described in this article.
-
-### [Language studio](#tab/language-studio)
--
-### [REST APIs](#tab/rest-api)
----
-## Load or export model data
-
-### [Language studio](#tab/Language-studio)
---
-### [REST APIs](#tab/REST-APIs)
----
-## Delete model
-
-### [Language studio](#tab/language-studio)
--
-### [REST APIs](#tab/rest-api)
----
-## Next steps
-
-* [Deploy your model](deploy-model.md)
-* Learn about the [metrics used in evaluation](../concepts/evaluation-metrics.md).
cognitive-services Language Support https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/custom-text-analytics-for-health/language-support.md
- Title: Language and region support for custom Text Analytics for health-
-description: Learn about the languages and regions supported by custom Text Analytics for health
------ Previously updated : 04/14/2023----
-# Language support for custom text analytics for health
-
-Use this article to learn about the languages currently supported by custom Text Analytics for health.
-
-## Multilingual option
-
-With custom Text Analytics for health, you can train a model in one language and use it to extract entities from documents in other languages. This feature saves you the trouble of building separate projects for each language; instead, you can combine your datasets in a single project, making it easy to scale your projects to multiple languages. You can train your project entirely with English documents and query it in French, German, Italian, and other languages. You can enable the multilingual option as part of the project creation process or later through the project settings.
-
-You aren't expected to add the same number of documents for every language. You should build the majority of your project in one language, and only add a few documents in languages you observe aren't performing well. If you create a project that is primarily in English, and start testing it in French, German, and Spanish, you might observe that German doesn't perform as well as the other two languages. In that case, consider adding 5% of your original English documents in German, train a new model and test in German again. In the [data labeling](how-to/label-data.md) page in Language Studio, you can select the language of the document you're adding. You should see better results for German queries. The more labeled documents you add, the more likely the results are going to get better. When you add data in another language, you shouldn't expect it to negatively affect other languages.
-
-Hebrew is not supported in multilingual projects. If the primary language of the project is Hebrew, you will not be able to add training data in other languages, or query the model with other languages. Similarly, if the primary language of the project is not Hebrew, you will not be able to add training data in Hebrew, or query the model in Hebrew.
-
-## Language support
-
-Custom Text Analytics for health supports `.txt` files in the following languages:
-
-| Language | Language code |
-| | |
-| English | `en` |
-| French | `fr` |
-| German | `de` |
-| Spanish | `es` |
-| Italian | `it` |
-| Portuguese (Portugal) | `pt-pt` |
-| Hebrew | `he` |
--
-## Next steps
-
-* [Custom Text Analytics for health overview](overview.md)
-* [Service limits](reference/service-limits.md)
cognitive-services Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/custom-text-analytics-for-health/overview.md
- Title: Custom Text Analytics for health - Azure Cognitive Services-
-description: Customize an AI model to label and extract healthcare information from documents using Azure Cognitive Services.
------ Previously updated : 04/14/2023----
-# What is custom Text Analytics for health?
-
-Custom Text Analytics for health is one of the custom features offered by [Azure Cognitive Service for Language](../overview.md). It is a cloud-based API service that applies machine-learning intelligence to enable you to build custom models on top of [Text Analytics for health](../text-analytics-for-health/overview.md) for custom healthcare entity recognition tasks.
-
-Custom Text Analytics for health enables users to build custom AI models to extract healthcare specific entities from unstructured text, such as clinical notes and reports. By creating a custom Text Analytics for health project, developers can iteratively define new vocabulary, label data, train, evaluate, and improve model performance before making it available for consumption. The quality of the labeled data greatly impacts model performance. To simplify building and customizing your model, the service offers a web portal that can be accessed through the [Language studio](https://aka.ms/languageStudio). You can easily get started with the service by following the steps in this [quickstart](quickstart.md).
-
-This documentation contains the following article types:
-
-* [Quickstarts](quickstart.md) are getting-started instructions to guide you through making requests to the service.
-* [Concepts](concepts/evaluation-metrics.md) provide explanations of the service functionality and features.
-* [How-to guides](how-to/label-data.md) contain instructions for using the service in more specific or customized ways.
-
-## Example usage scenarios
-
-Similarly to Text Analytics for health, custom Text Analytics for health can be used in multiple [scenarios](../text-analytics-for-health/overview.md#example-use-cases) across a variety of healthcare industries. However, the main usage of this feature is to provide a layer of customization on top of Text Analytics for health to extend its existing entity map.
--
-## Project development lifecycle
-
-Using custom Text Analytics for health typically involves several different steps.
--
-* **Define your schema**: Know your data and define the new entities you want extracted on top of the existing Text Analytics for health entity map. Avoid ambiguity.
-
-* **Label your data**: Labeling data is a key factor in determining model performance. Label precisely, consistently and completely.
- * **Label precisely**: Label each entity to its right type always. Only include what you want extracted, avoid unnecessary data in your labels.
- * **Label consistently**: The same entity should have the same label across all the files.
- * **Label completely**: Label all the instances of the entity in all your files.
-
-* **Train the model**: Your model starts learning from your labeled data.
-
-* **View the model's performance**: After training is completed, view the model's evaluation details, its performance and guidance on how to improve it.
-
-* **Deploy the model**: Deploying a model makes it available for use via an API.
-
-* **Extract entities**: Use your custom models for entity extraction tasks.
-
-## Reference documentation and code samples
-
-As you use custom Text Analytics for health, see the following reference documentation for Azure Cognitive Services for Language:
-
-|APIs| Reference documentation|
-||||
-|REST APIs (Authoring) | [REST API documentation](/rest/api/language/2022-10-01-preview/text-analysis-authoring) |
-|REST APIs (Runtime) | [REST API documentation](/rest/api/language/2022-10-01-preview/text-analysis-runtime/submit-job) |
--
-## Responsible AI
-
-An AI system includes not only the technology, but also the people who will use it, the people who will be affected by it, and the environment in which it is deployed. Read the [transparency note for Text Analytics for health](/legal/cognitive-services/language-service/transparency-note-health?context=/azure/cognitive-services/language-service/context/context) to learn about responsible AI use and deployment in your systems. You can also see the following articles for more information:
---
-## Next steps
-
-* Use the [quickstart article](quickstart.md) to start using custom Text Analytics for health.
-
-* As you go through the project development lifecycle, review the glossary to learn more about the terms used throughout the documentation for this feature.
-
-* Remember to view the [service limits](reference/service-limits.md) for information such as [regional availability](reference/service-limits.md#regional-availability).
cognitive-services Quickstart https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/custom-text-analytics-for-health/quickstart.md
- Title: Quickstart - Custom Text Analytics for health (Custom TA4H)-
-description: Quickly start building an AI model to categorize and extract information from healthcare unstructured text.
------ Previously updated : 04/14/2023--
-zone_pivot_groups: usage-custom-language-features
--
-# Quickstart: custom Text Analytics for health
-
-Use this article to get started with creating a custom Text Analytics for health project where you can train custom models on top of Text Analytics for health for custom entity recognition. A model is artificial intelligence software that's trained to do a certain task. For this system, the models extract healthcare related named entities and are trained by learning from labeled data.
-
-In this article, we use Language Studio to demonstrate key concepts of custom Text Analytics for health. As an example, we'll build a custom Text Analytics for health model to extract the Facility or treatment location from short discharge notes.
-------
-## Next steps
-
-* [Text analytics for health overview](./overview.md)
-
-After you've created entity extraction model, you can:
-
-* [Use the runtime API to extract entities](how-to/call-api.md)
-
-When you start to create your own custom Text Analytics for health projects, use the how-to articles to learn more about data labeling, training and consuming your model in greater detail:
-
-* [Data selection and schema design](how-to/design-schema.md)
-* [Tag data](how-to/label-data.md)
-* [Train a model](how-to/train-model.md)
-* [Model evaluation](how-to/view-model-evaluation.md)
-
cognitive-services Glossary https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/custom-text-analytics-for-health/reference/glossary.md
- Title: Definitions used in custom Text Analytics for health-
-description: Learn about definitions used in custom Text Analytics for health
------ Previously updated : 04/14/2023----
-# Terms and definitions used in custom Text Analytics for health
-
-Use this article to learn about some of the definitions and terms you may encounter when using custom Text Analytics for health.
-
-## Entity
-Entities are words in input data that describe information relating to a specific category or concept. If your entity is complex and you would like your model to identify specific parts, you can break your entity into subentities. For example, you might want your model to predict an address, but also the subentities of street, city, state, and zipcode.
-
-## F1 score
-The F1 score is a function of Precision and Recall. It's needed when you seek a balance between [precision](#precision) and [recall](#recall).
-
-## Prebuilt entity component
-
-Prebuilt entity components represent pretrained entity components that belong to the [Text Analytics for health entity map](../../text-analytics-for-health/concepts/health-entity-categories.md). These entities are automatically loaded into your project as entities with prebuilt components. You can define list components for entities with prebuilt components but you cannot add learned components. Similarly, you can create new entities with learned and list components, but you cannot populate them with additional prebuilt components.
--
-## Learned entity component
-
-The learned entity component uses the entity tags you label your text with to train a machine learned model. The model learns to predict where the entity is, based on the context within the text. Your labels provide examples of where the entity is expected to be present in text, based on the meaning of the words around it and as the words that were labeled. This component is only defined if you add labels by labeling your data for the entity. If you do not label any data with the entity, it will not have a learned component. Learned components cannot be added to entities with prebuilt components.
-
-## List entity component
-A list entity component represents a fixed, closed set of related words along with their synonyms. List entities are exact matches, unlike machine-learned entities.
-
-The entity will be predicted if a word in the list entity is included in the list. For example, if you have a list entity called "clinics" and you have the words "clinic a, clinic b, clinic c" in the list, then the *clinics* entity will be predicted for all instances of the input data where "clinic a", "clinic b", or "clinic c" are used, regardless of the context. List components can be added to all entities regardless of whether they are prebuilt or newly defined.
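-
-To make the exact-match behavior concrete, the following is a minimal Python sketch; the `clinics` list, its synonyms, and the matching function are purely illustrative assumptions, not the service's implementation:
-
-```python
-# Minimal illustration of how a list component behaves: exact membership in a
-# fixed set of terms and their synonyms, with no use of surrounding context.
-clinics_list = {
-    "clinic a": ["clinic a", "clinic alpha"],  # canonical term -> synonyms (hypothetical)
-    "clinic b": ["clinic b"],
-    "clinic c": ["clinic c"],
-}
-
-def match_list_component(text: str) -> list[str]:
-    """Return the canonical list terms whose surface forms appear in the text."""
-    lowered = text.lower()
-    return [
-        canonical
-        for canonical, synonyms in clinics_list.items()
-        if any(s in lowered for s in synonyms)
-    ]
-
-print(match_list_component("Patient was transferred to Clinic A for follow-up."))
-# ['clinic a']
-```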
-
-## Model
-A model is an object that's trained to do a certain task, in this case custom Text Analytics for health models perform all the features of Text Analytics for health in addition to custom entity extraction for the user's defined entities. Models are trained by providing labeled data to learn from so they can later be used to understand context from the input text.
-
-* **Model evaluation** is the process that happens right after training to measure how well your model performs.
-* **Deployment** is the process of assigning your model to a deployment to make it available for use via the [prediction API](https://aka.ms/ct-runtime-swagger).
-
-## Overfitting
-
-Overfitting happens when the model is fixated on the specific examples and is not able to generalize well.
-
-## Precision
-Measures how precise/accurate your model is. It's the ratio between the correctly identified positives (true positives) and all identified positives. The precision metric reveals how many of the predicted entities are correctly labeled.
-
-## Project
-A project is a work area for building your custom ML models based on your data. Your project can only be accessed by you and others who have access to the Azure resource being used.
-
-## Recall
-Measures the model's ability to predict actual positive entities. It's the ratio between the predicted true positives and what was actually labeled. The recall metric reveals how many of the predicted entities are correct.
--
-## Schema
-Schema is defined as the combination of entities within your project. Schema design is a crucial part of your project's success. When creating a schema, think about which new entities you should add to your project to extend the existing [Text Analytics for health entity map](../../text-analytics-for-health/concepts/health-entity-categories.md) and which new vocabulary you should add to the prebuilt entities using list components to enhance their recall. For example, you might add a new entity for patient name, or extend the prebuilt entity "Medication Name" with a new research drug (for example, research drug A).
-
-## Training data
-Training data is the set of information that is needed to train a model.
--
-## Next steps
-
-* [Data and service limits](service-limits.md).
-
cognitive-services Service Limits https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/custom-text-analytics-for-health/reference/service-limits.md
- Title: Custom Text Analytics for health service limits-
-description: Learn about the data and service limits when using Custom Text Analytics for health.
------ Previously updated : 04/14/2023----
-# Custom Text Analytics for health service limits
-
-Use this article to learn about the data and service limits when using custom Text Analytics for health.
-
-## Language resource limits
-
-* Your Language resource has to be created in one of the [supported regions](#regional-availability).
-
-* Your resource must be one of the supported pricing tiers:
-
- |Tier|Description|Limit|
- |--|--|--|
- |S |Paid tier|You can have unlimited Language S tier resources per subscription. |
-
-
-* You can only connect one storage account per resource. This process is irreversible. If you connect a storage account to your resource, you cannot unlink it later. Learn more about [connecting a storage account](../how-to/create-project.md#create-language-resource-and-connect-storage-account).
-
-* You can have up to 500 projects per resource.
-
-* Project names have to be unique within the same resource across all custom features.
-
-## Regional availability
-
-Custom Text Analytics for health is only available in some Azure regions since it is a preview service. Some regions may be available for **both authoring and prediction**, while other regions may be for **prediction only**. Language resources in authoring regions allow you to create, edit, train, and deploy your projects. Language resources in prediction regions allow you to get predictions from a deployment.
-
-| Region | Authoring | Prediction |
-|--|--|-|
-| East US | ✓ | ✓ |
-| UK South | ✓ | ✓ |
-| North Europe | ✓ | ✓ |
-
-## API limits
-
-|Item|Request type| Maximum limit|
-|:-|:-|:-|
-|Authoring API|POST|10 per minute|
-|Authoring API|GET|100 per minute|
-|Prediction API|GET/POST|1,000 per minute|
-|Document size|--|125,000 characters. You can send up to 20 documents as long as they collectively do not exceed 125,000 characters|
-
-> [!TIP]
-> If you need to send larger files than the limit allows, you can break the text into smaller chunks of text before sending them to the API. You can use the [chunk command from CLUtils](https://github.com/microsoft/CognitiveServicesLanguageUtilities/blob/main/CustomTextAnalytics.CLUtils/Solution/CogSLanguageUtilities.ViewLayer.CliCommands/Commands/ChunkCommand/README.md) for this process.
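-
-As a rough sketch (not an official sample), the following Python shows one way to split a long document into chunks that stay under the 125,000-character limit before sending them; splitting on paragraph boundaries is an assumption made for readability, and the CLUtils chunk command linked above remains the supported tool:
-
-```python
-# Hedged sketch: split a long document into chunks under the 125,000-character limit.
-# Note: a single paragraph longer than max_chars would still need further splitting.
-MAX_CHARS = 125_000
-
-def chunk_text(text: str, max_chars: int = MAX_CHARS) -> list[str]:
-    chunks, current = [], ""
-    for paragraph in text.split("\n\n"):
-        # +2 accounts for the separator re-added when the paragraph is appended.
-        if current and len(current) + len(paragraph) + 2 > max_chars:
-            chunks.append(current)
-            current = paragraph
-        else:
-            current = f"{current}\n\n{paragraph}" if current else paragraph
-    if current:
-        chunks.append(current)
-    return chunks
-
-# Each chunk can then be sent as a separate document in your requests.
-```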
-
-## Quota limits
-
-|Pricing tier |Item |Limit |
-| | | |
-|S|Training time| Unlimited, free |
-|S|Prediction Calls| 5,000 text records for free per language resource|
-
-## Document limits
-
-* You can only use `.txt` files. If your data is in another format, you can use the [CLUtils parse command](https://github.com/microsoft/CognitiveServicesLanguageUtilities/blob/main/CustomTextAnalytics.CLUtils/Solution/CogSLanguageUtilities.ViewLayer.CliCommands/Commands/ParseCommand/README.md) to open your document and extract the text.
-
-* All files uploaded in your container must contain data. Empty files are not allowed for training.
-
-* All files should be available at the root of your container.
-
-## Data limits
-
-The following limits are observed for authoring.
-
-|Item|Lower Limit| Upper Limit |
-| | | |
-|Documents count | 10 | 100,000 |
-|Document length in characters | 1 | 128,000 characters; approximately 28,000 words or 56 pages. |
-|Count of entity types | 1 | 200 |
-|Entity length in characters | 1 | 500 |
-|Count of trained models per project| 0 | 10 |
-|Count of deployments per project| 0 | 10 |
-
-## Naming limits
-
-| Item | Limits |
-|--|--|
-| Project name | You can only use letters `(a-z, A-Z)`, numbers `(0-9)`, and symbols `_ . -`, with no spaces. Maximum allowed length is 50 characters. |
-| Model name | You can only use letters `(a-z, A-Z)`, numbers `(0-9)` and symbols `_ . -`. Maximum allowed length is 50 characters. |
-| Deployment name | You can only use letters `(a-z, A-Z)`, numbers `(0-9)` and symbols `_ . -`. Maximum allowed length is 50 characters. |
-| Entity name| You can only use letters `(a-z, A-Z)`, numbers `(0-9)` and all symbols except ":", `$ & % * ( ) + ~ # / ?`. Maximum allowed length is 50 characters. See the supported [data format](../concepts/data-formats.md#entity-naming-rules) for more information on entity names when importing a labels file. |
-| Document name | You can only use letters `(a-z, A-Z)`, and numbers `(0-9)` with no spaces. |
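-
-If you validate names client-side before calling the authoring APIs, a small sketch such as the following can mirror the table above; the regular expressions are assumptions derived from the stated rules (entity names, which allow more symbols, are omitted), not an official validation routine:
-
-```python
-import re
-
-# Patterns derived from the naming rules above (assumptions, not an official spec).
-NAME_RULES = {
-    "project": re.compile(r"^[A-Za-z0-9_.\-]{1,50}$"),     # letters, digits, _ . -, no spaces
-    "model": re.compile(r"^[A-Za-z0-9_.\-]{1,50}$"),
-    "deployment": re.compile(r"^[A-Za-z0-9_.\-]{1,50}$"),
-}
-
-def is_valid_name(kind: str, name: str) -> bool:
-    """Check a name against the (assumed) pattern for its kind."""
-    return bool(NAME_RULES[kind].fullmatch(name))
-
-print(is_valid_name("project", "my_health-project.v1"))   # True
-print(is_valid_name("deployment", "staging deployment"))  # False: contains a space
-```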
--
-## Next steps
-
-* [Custom text analytics for health overview](../overview.md)
cognitive-services Data Formats https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/custom-text-classification/concepts/data-formats.md
- Title: Custom text classification data formats-
-description: Learn about the data formats accepted by custom text classification.
------ Previously updated : 05/24/2022----
-# Accepted data formats
-
-If you're trying to import your data into custom text classification, it has to follow a specific format. If you don't have data to import you can [create your project](../how-to/create-project.md) and use Language Studio to [label your documents](../how-to/tag-data.md).
-
-## Labels file format
-
-Your Labels file should be in the `json` format below. This will enable you to [import](../how-to/create-project.md#import-a-custom-text-classification-project) your labels into a project.
-
-# [Multi label classification](#tab/multi-classification)
-
-```json
-{
- "projectFileVersion": "2022-05-01",
- "stringIndexType": "Utf16CodeUnit",
- "metadata": {
- "projectKind": "CustomMultiLabelClassification",
- "storageInputContainerName": "{CONTAINER-NAME}",
- "projectName": "{PROJECT-NAME}",
- "multilingual": false,
- "description": "Project-description",
- "language": "en-us"
- },
- "assets": {
- "projectKind": "CustomMultiLabelClassification",
- "classes": [
- {
- "category": "Class1"
- },
- {
- "category": "Class2"
- }
- ],
- "documents": [
- {
- "location": "{DOCUMENT-NAME}",
- "language": "{LANGUAGE-CODE}",
- "dataset": "{DATASET}",
- "classes": [
- {
- "category": "Class1"
- },
- {
- "category": "Class2"
- }
- ]
- }
- ]
- }
-}
-```
-
-|Key |Placeholder |Value | Example |
-|||-|--|
-| multilingual | `true`| A boolean value that enables you to have documents in multiple languages in your dataset and when your model is deployed you can query the model in any supported language (not necessarily included in your training documents). See [language support](../language-support.md#multi-lingual-option) to learn more about multilingual support. | `true`|
-|projectName|`{PROJECT-NAME}`|Project name|myproject|
-| storageInputContainerName|`{CONTAINER-NAME}`|Container name|`mycontainer`|
-| classes | [] | Array containing all the classes you have in the project. These are the classes you want to classify your documents into.| [] |
-| documents | [] | Array containing all the documents in your project and the classes labeled for this document. | [] |
-| location | `{DOCUMENT-NAME}` | The location of the documents in the storage container. Since all the documents are in the root of the container, this value should be the document name.|`doc1.txt`|
-| dataset | `{DATASET}` | The set this file will be assigned to when the data is split before training. See [How to train a model](../how-to/train-model.md#data-splitting) for more information. Possible values for this field are `Train` and `Test`. |`Train`|
--
-# [Single label classification](#tab/single-classification)
-
-```json
-{
-
- "projectFileVersion": "2022-05-01",
- "stringIndexType": "Utf16CodeUnit",
- "metadata": {
- "projectKind": "CustomSingleLabelClassification",
- "storageInputContainerName": "{CONTAINER-NAME}",
- "settings": {},
- "projectName": "{PROJECT-NAME}",
- "multilingual": false,
- "description": "Project-description",
- "language": "en-us"
- },
- "assets": {
- "projectKind": "CustomSingleLabelClassification",
- "classes": [
- {
- "category": "Class1"
- },
- {
- "category": "Class2"
- }
- ],
- "documents": [
- {
- "location": "{DOCUMENT-NAME}",
- "language": "{LANGUAGE-CODE}",
- "dataset": "{DATASET}",
- "class": {
- "category": "Class2"
- }
- },
- {
- "location": "{DOCUMENT-NAME}",
- "language": "{LANGUAGE-CODE}",
- "dataset": "{DATASET}",
- "class": {
- "category": "Class1"
- }
- }
- ]
- }
-}
-```
-|Key |Placeholder |Value | Example |
-|||-|--|
-|projectName|`{PROJECT-NAME}`|Project name|myproject|
-| storageInputContainerName|`{CONTAINER-NAME}`|Container name|`mycontainer`|
-| multilingual | `true`| A boolean value that enables you to have documents in multiple languages in your dataset and when your model is deployed you can query the model in any supported language (not necessarily included in your training documents). See [language support](../language-support.md#multi-lingual-option) to learn more about multilingual support. | `true`|
-| classes | [] | Array containing all the classes you have in the project. These are the classes you want to classify your documents into.| [] |
-| documents | [] | Array containing all the documents in your project and which class this document belongs to. | [] |
-| location | `{DOCUMENT-NAME}` | The location of the documents in the storage container. Since all the documents are in the root of the container, this value should be the document name.|`doc1.txt`|
-| dataset | `{DATASET}` | The set this file will be assigned to when the data is split before training. See [How to train a model](../how-to/train-model.md#data-splitting) for more information. Possible values for this field are `Train` and `Test`. |`Train`|
----
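-
-If you generate the labels file programmatically, a minimal sketch like the one below can help; it only assembles the multi label JSON shape documented above and writes it to disk, and the container, project, document, and class names are placeholders:
-
-```python
-import json
-
-# Hedged sketch: build a multi label classification labels file in the format shown above.
-labels = {
-    "projectFileVersion": "2022-05-01",
-    "stringIndexType": "Utf16CodeUnit",
-    "metadata": {
-        "projectKind": "CustomMultiLabelClassification",
-        "storageInputContainerName": "mycontainer",
-        "projectName": "myproject",
-        "multilingual": False,
-        "description": "Project-description",
-        "language": "en-us",
-    },
-    "assets": {
-        "projectKind": "CustomMultiLabelClassification",
-        "classes": [{"category": "Class1"}, {"category": "Class2"}],
-        "documents": [
-            {
-                "location": "doc1.txt",
-                "language": "en-us",
-                "dataset": "Train",
-                "classes": [{"category": "Class1"}, {"category": "Class2"}],
-            }
-        ],
-    },
-}
-
-with open("labels.json", "w", encoding="utf-8") as f:
-    json.dump(labels, f, indent=2)
-```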
-## Next steps
-
-* You can import your labeled data into your project directly. See [How to create a project](../how-to/create-project.md#import-a-custom-text-classification-project) to learn more about importing projects.
-* See the [how-to article](../how-to/tag-data.md) more information about labeling your data. When you're done labeling your data, you can [train your model](../how-to/train-model.md).
cognitive-services Evaluation Metrics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/custom-text-classification/concepts/evaluation-metrics.md
- Title: Custom text classification evaluation metrics-
-description: Learn about evaluation metrics in custom text classification.
------ Previously updated : 08/08/2022----
-# Evaluation metrics
-
-Your [dataset is split](../how-to/train-model.md#data-splitting) into two parts: a set for training, and a set for testing. The training set is used to train the model, while the testing set is used to test the model after training and calculate its performance. The testing set isn't introduced to the model during training, to make sure that the model is tested on new data.
-
-Model evaluation is triggered automatically after training is completed successfully. The evaluation process starts by using the trained model to predict user defined classes for documents in the test set, and compares them with the provided data tags (which establish a baseline of truth). The results are returned so you can review the model's performance. For evaluation, custom text classification uses the following metrics:
-
-* **Precision**: Measures how precise/accurate your model is. It's the ratio between the correctly identified positives (true positives) and all identified positives. The precision metric reveals how many of the predicted classes are correctly labeled.
-
- `Precision = #True_Positive / (#True_Positive + #False_Positive)`
-
-* **Recall**: Measures the model's ability to predict actual positive classes. It's the ratio between the predicted true positives and what was actually tagged. The recall metric reveals how many of the predicted classes are correct.
-
- `Recall = #True_Positive / (#True_Positive + #False_Negatives)`
-
-* **F1 score**: The F1 score is a function of Precision and Recall. It's needed when you seek a balance between Precision and Recall.
-
- `F1 Score = 2 * Precision * Recall / (Precision + Recall)` <br>
-
->[!NOTE]
-> Precision, recall and F1 score are calculated for each class separately (*class-level* evaluation) and for the model collectively (*model-level* evaluation).
-## Model-level and Class-level evaluation metrics
-
-The definitions of precision, recall, and evaluation are the same for both class-level and model-level evaluations. However, the count of *True Positive*, *False Positive*, and *False Negative* differ as shown in the following example.
-
-The below sections use the following example dataset:
-
-| Document | Actual classes | Predicted classes |
-|--|--|--|
-| 1 | action, comedy | comedy|
-| 2 | action | action |
-| 3 | romance | romance |
-| 4 | romance, comedy | romance |
-| 5 | comedy | action |
-
-### Class-level evaluation for the *action* class
-
-| Key | Count | Explanation |
-|--|--|--|
-| True Positive | 1 | Document 2 was correctly classified as *action*. |
-| False Positive | 1 | Document 5 was mistakenly classified as *action*. |
-| False Negative | 1 | Document 1 wasn't classified as *action* though it should have been. |
-
-**Precision** = `#True_Positive / (#True_Positive + #False_Positive) = 1 / (1 + 1) = 0.5`
-
-**Recall** = `#True_Positive / (#True_Positive + #False_Negatives) = 1 / (1 + 1) = 0.5`
-
-**F1 Score** = `2 * Precision * Recall / (Precision + Recall) = (2 * 0.5 * 0.5) / (0.5 + 0.5) = 0.5`
-
-### Class-level evaluation for the *comedy* class
-
-| Key | Count | Explanation |
-|--|--|--|
-| True positive | 1 | Document 1 was correctly classified as *comedy*. |
-| False positive | 0 | No documents were mistakenly classified as *comedy*. |
-| False negative | 2 | Documents 4 and 5 weren't classified as *comedy* though they should have been. |
-
-**Precision** = `#True_Positive / (#True_Positive + #False_Positive) = 1 / (1 + 0) = 1`
-
-**Recall** = `#True_Positive / (#True_Positive + #False_Negatives) = 1 / (1 + 2) = 0.33`
-
-**F1 Score** = `2 * Precision * Recall / (Precision + Recall) = (2 * 1 * 0.33) / (1 + 0.33) = 0.5`
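-
-The per-class arithmetic above can be reproduced with a short Python sketch; the dataset is the example table earlier in this section, and the counting follows the class-level definitions:
-
-```python
-# Reproduce the class-level metrics for the example dataset above.
-docs = [
-    ({"action", "comedy"}, {"comedy"}),    # document 1: (actual, predicted)
-    ({"action"}, {"action"}),              # document 2
-    ({"romance"}, {"romance"}),            # document 3
-    ({"romance", "comedy"}, {"romance"}),  # document 4
-    ({"comedy"}, {"action"}),              # document 5
-]
-
-def class_metrics(cls: str) -> tuple[float, float, float]:
-    tp = sum(1 for actual, pred in docs if cls in actual and cls in pred)
-    fp = sum(1 for actual, pred in docs if cls not in actual and cls in pred)
-    fn = sum(1 for actual, pred in docs if cls in actual and cls not in pred)
-    precision = tp / (tp + fp) if tp + fp else 0.0
-    recall = tp / (tp + fn) if tp + fn else 0.0
-    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
-    return precision, recall, f1
-
-print(class_metrics("action"))  # (0.5, 0.5, 0.5)
-print(class_metrics("comedy"))  # (1.0, 0.33..., 0.5)
-```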
-
-### Model-level evaluation for the collective model
-
-| Key | Count | Explanation |
-|--|--|--|
-| True Positive | 4 | Documents 1, 2, 3 and 4 were given correct classes at prediction. |
-| False Positive | 1 | Document 5 was given a wrong class at prediction. |
-| False Negative | 2 | Documents 1 and 4 weren't given all the correct classes at prediction. |
-
-**Precision** = `#True_Positive / (#True_Positive + #False_Positive) = 4 / (4 + 1) = 0.8`
-
-**Recall** = `#True_Positive / (#True_Positive + #False_Negatives) = 4 / (4 + 2) = 0.67`
-
-**F1 Score** = `2 * Precision * Recall / (Precision + Recall) = (2 * 0.8 * 0.67) / (0.8 + 0.67) = 0.73`
-
-> [!NOTE]
-> For single-label classification models, the count of false negatives and false positives is always equal. Custom single-label classification models always predict one class for each document. If the prediction isn't correct, the FP count of the predicted class increases by one and the FN count of the actual class increases by one, so the overall counts of FP and FN for the model are always equal. This isn't the case for multi-label classification, because failing to predict one of the classes of a document is counted as a false negative.
-## Interpreting class-level evaluation metrics
-
-So what does it actually mean to have a high precision or a high recall for a certain class?
-
-| Recall | Precision | Interpretation |
-|--|--|--|
-| High | High | This class is perfectly handled by the model. |
-| Low | High | The model can't always predict this class, but when it does, it does so with high confidence. This may be because this class is underrepresented in the dataset, so consider balancing your data distribution.|
-| High | Low | The model predicts this class well, but with low confidence. This may be because this class is overrepresented in the dataset, so consider balancing your data distribution. |
-| Low | Low | This class is poorly handled by the model; it isn't usually predicted, and when it is, the prediction isn't made with high confidence. |
-
-Custom text classification models are expected to experience both false negatives and false positives. You need to consider how each will affect the overall system, and carefully think through scenarios where the model will ignore correct predictions, and recognize incorrect predictions. Depending on your scenario, either *precision* or *recall* could be more suitable for evaluating your model's performance.
-
-For example, if your scenario involves processing technical support tickets, predicting the wrong class could cause it to be forwarded to the wrong department/team. In this example, you should consider making your system more sensitive to false positives, and precision would be a more relevant metric for evaluation.
-
-As another example, if your scenario involves categorizing email as "*important*" or "*spam*", an incorrect prediction could cause you to miss a useful email if it's labeled "*spam*". However, if a spam email is labeled *important* you can disregard it. In this example, you should consider making your system more sensitive to false negatives, and recall would be a more relevant metric for evaluation.
-
-If you want to optimize for general purpose scenarios or when precision and recall are both important, you can utilize the F1 score. Evaluation scores are subjective depending on your scenario and acceptance criteria. There is no absolute metric that works for every scenario.
-
-## Guidance
-
-After you train your model, you'll see guidance and recommendations on how to improve it. We recommend having a model that covers all points in the guidance section.
-
-* Training set has enough data: When a class type has fewer than 15 labeled instances in the training data, it can lead to lower accuracy due to the model not being adequately trained on these cases.
-
-* All class types are present in test set: When the testing data lacks labeled instances for a class type, the model's test performance may become less comprehensive due to untested scenarios.
-
-* Class types are balanced within training and test sets: When sampling bias causes an inaccurate representation of a class type's frequency, it can lead to lower accuracy due to the model expecting that class type to occur too often or too little.
-
-* Class types are evenly distributed between training and test sets: When the mix of class types doesn't match between training and test sets, it can lead to lower testing accuracy due to the model being trained differently from how it's being tested.
-
-* Class types in training set are clearly distinct: When the training data is similar for multiple class types, it can lead to lower accuracy because the class types may be frequently misclassified as each other.
-
-## Confusion matrix
-
-> [!Important]
-> Confusion matrix is not available for multi-label classification projects.
-A confusion matrix is an N x N matrix used for model performance evaluation, where N is the number of classes.
-The matrix compares the expected labels with the ones predicted by the model.
-This gives a holistic view of how well the model is performing and what kinds of errors it is making.
-
-You can use the Confusion matrix to identify classes that are too close to each other and often get mistaken (ambiguity). In this case consider merging these classes together. If that isn't possible, consider labeling more documents with both classes to help the model differentiate between them.
-
-All correct predictions are located in the diagonal of the table, so it is easy to visually inspect the table for prediction errors, as they will be represented by values outside the diagonal.
--
-You can calculate the class-level and model-level evaluation metrics from the confusion matrix:
-
-* The values in the diagonal are the *True Positive* values of each class.
-* The sum of the values in a class's row (excluding the diagonal) is the *false positive* count of that class.
-* The sum of the values in a class's column (excluding the diagonal) is the *false negative* count of that class.
-
-Similarly,
-
-* The *true positive* count of the model is the sum of *true positives* for all classes.
-* The *false positive* count of the model is the sum of *false positives* for all classes.
-* The *false negative* count of the model is the sum of *false negatives* for all classes.
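-
-The relationship between the matrix and these counts can be sketched in a few lines of Python; the 3 x 3 matrix below is hypothetical, and the orientation (rows are predicted classes, columns are actual classes) is an assumption that matches the row/column description above:
-
-```python
-# Hedged sketch: derive per-class TP/FP/FN from a confusion matrix.
-classes = ["action", "comedy", "romance"]
-matrix = [        # hypothetical counts; rows = predicted class, columns = actual class
-    [10, 2, 1],   # predicted action
-    [0, 8, 3],    # predicted comedy
-    [1, 0, 12],   # predicted romance
-]
-
-for i, cls in enumerate(classes):
-    tp = matrix[i][i]
-    fp = sum(matrix[i][j] for j in range(len(classes)) if j != i)  # rest of the row
-    fn = sum(matrix[j][i] for j in range(len(classes)) if j != i)  # rest of the column
-    print(f"{cls}: TP={tp}, FP={fp}, FN={fn}")
-
-# Model-level counts are the sums of the per-class counts.
-```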
--
-## Next steps
-
-* [View a model's performance in Language Studio](../how-to/view-model-evaluation.md)
-* [Train a model](../how-to/train-model.md)
cognitive-services Fail Over https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/custom-text-classification/fail-over.md
- Title: Back up and recover your custom text classification models-
-description: Learn how to save and recover your custom text classification models.
------ Previously updated : 04/22/2022----
-# Back up and recover your custom text classification models
-
-When you create a Language resource, you specify a region for it to be created in. From then on, your resource and all of the operations related to it take place in the specified Azure server region. It's rare, but not impossible, to encounter a network issue that affects an entire region. If your solution needs to always be available, you should design it to fail over to another region. This requires two Azure Language resources in different regions and the ability to sync custom models across regions.
-
-If your app or business depends on the use of a custom text classification model, we recommend that you create a replica of your project in another supported region, so that if a regional outage occurs, you can access your model in the fail-over region where you replicated your project.
-
-Replicating a project means that you export your project metadata and assets and import them into a new project. This only makes a copy of your project settings and tagged data. You still need to [train](./how-to/train-model.md) and [deploy](how-to/deploy-model.md) the models to be available for use with [prediction APIs](https://aka.ms/ct-runtime-swagger).
-
-In this article, you'll learn how to use the export and import APIs to replicate your project from one resource to another in a different supported region, how to keep your projects in sync, and what changes you need to make to your runtime consumption.
-
-## Prerequisites
-
-* Two Azure Language resources in different Azure regions. [Create a Language resource](./how-to/create-project.md#create-a-language-resource) and connect them to an Azure storage account. It's recommended that you connect both of your Language resources to the same storage account, though this might introduce slightly higher latency when importing your project, and training a model.
-
-## Get your resource keys endpoint
-
-Use the following steps to get the keys and endpoint of your primary and secondary resources. These will be used in the following steps.
--
-> [!TIP]
-> Keep a note of keys and endpoints for both primary and secondary resources. Use these values to replace the following placeholders:
-`{PRIMARY-ENDPOINT}`, `{PRIMARY-RESOURCE-KEY}`, `{SECONDARY-ENDPOINT}` and `{SECONDARY-RESOURCE-KEY}`.
-> Also take note of your project name, your model name and your deployment name. Use these values to replace the following placeholders: `{PROJECT-NAME}`, `{MODEL-NAME}` and `{DEPLOYMENT-NAME}`.
-
-## Export your primary project assets
-
-Start by exporting the project assets from the project in your primary resource.
-
-### Submit export job
-
-Replace the placeholders in the following request with your `{PRIMARY-ENDPOINT}` and `{PRIMARY-RESOURCE-KEY}` that you obtained in the first step.
--
-### Get export job status
-
-Replace the placeholders in the following request with your `{PRIMARY-ENDPOINT}` and `{PRIMARY-RESOURCE-KEY}` that you obtained in the first step.
--
-Copy the response body as you will use it as the body for the next import job.
-
-## Import to a new project
-
-Now go ahead and import the exported project assets in your new project in the secondary region so you can replicate it.
-
-### Submit import job
-
-Replace the placeholders in the following request with your `{SECONDARY-ENDPOINT}` and `{SECONDARY-RESOURCE-KEY}` that you obtained in the first step.
--
-### Get import job status
-
-Replace the placeholders in the following request with your `{SECONDARY-ENDPOINT}` and `{SECONDARY-RESOURCE-KEY}` that you obtained in the first step.
---
-## Train your model
-
-After importing your project, you've only copied the project's metadata and assets. You still need to train your model, which will incur usage on your account.
-
-### Submit training job
-
-Replace the placeholders in the following request with your `{SECONDARY-ENDPOINT}` and `{SECONDARY-RESOURCE-KEY}` that you obtained in the first step.
---
-### Get Training Status
-
-Replace the placeholders in the following request with your `{SECONDARY-ENDPOINT}` and `{SECONDARY-RESOURCE-KEY}` that you obtained in the first step.
--
-## Deploy your model
-
-This is the step where you make your trained model available for consumption via the [runtime prediction API](https://aka.ms/ct-runtime-swagger).
-
-> [!TIP]
-> Use the same deployment name as your primary project for easier maintenance and minimal changes to your system to handle redirecting your traffic.
-
-### Submit deployment job
-
-Replace the placeholders in the following request with your `{SECONDARY-ENDPOINT}` and `{SECONDARY-RESOURCE-KEY}` that you obtained in the first step.
--
-### Get the deployment status
-
-Replace the placeholders in the following request with your `{SECONDARY-ENDPOINT}` and `{SECONDARY-RESOURCE-KEY}` that you obtained in the first step.
--
-## Changes in calling the runtime
-
-Within your system, at the step where you call the [runtime prediction API](https://aka.ms/ct-runtime-swagger), check the response code returned from the submit task API. If you observe a **consistent** failure in submitting the request, this could indicate an outage in your primary region. A single failure doesn't mean an outage; it may be a transient issue. Retry submitting the job through the secondary resource you created. For the second request, use your `{SECONDARY-ENDPOINT}` and `{SECONDARY-RESOURCE-KEY}`. If you followed the steps above, `{PROJECT-NAME}` and `{DEPLOYMENT-NAME}` are the same, so no changes are required to the request body.
-
-If you revert to using your secondary resource, you may observe a slight increase in latency because of the difference in regions where your model is deployed.
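-
-As a hedged illustration of this fail-over logic (the job-submission route and API version below are placeholders and should be verified against the runtime API reference before use):
-
-```python
-import requests
-
-# Hypothetical fail-over wrapper: submit to the primary resource first and only
-# switch to the secondary resource after several consecutive failures.
-PRIMARY = {"endpoint": "https://<primary>.cognitiveservices.azure.com", "key": "{PRIMARY-RESOURCE-KEY}"}
-SECONDARY = {"endpoint": "https://<secondary>.cognitiveservices.azure.com", "key": "{SECONDARY-RESOURCE-KEY}"}
-ROUTE = "/language/analyze-text/jobs?api-version=2022-05-01"  # assumed; check the Swagger reference
-
-def submit_job(resource: dict, body: dict) -> requests.Response:
-    return requests.post(
-        resource["endpoint"] + ROUTE,
-        headers={"Ocp-Apim-Subscription-Key": resource["key"]},
-        json=body,
-        timeout=30,
-    )
-
-def submit_with_failover(body: dict, max_primary_attempts: int = 3) -> requests.Response:
-    for _ in range(max_primary_attempts):
-        try:
-            response = submit_job(PRIMARY, body)
-            if response.status_code < 500:
-                return response  # accepted, or a client error that a retry won't fix
-        except requests.RequestException:
-            pass                 # transient network failure; retry
-    # Consistent failures may indicate a regional outage; fall back to the secondary resource.
-    return submit_job(SECONDARY, body)
-```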
-
-## Check if your projects are out of sync
-
-Maintaining the freshness of both projects is an important part of the process. You need to frequently check whether any updates were made to your primary project so that you can move them over to your secondary project. This way, if your primary region fails and you move to the secondary region, you can expect similar model performance because it already contains the latest updates. Setting the frequency for checking whether your projects are in sync is an important choice; we recommend doing this check daily to guarantee the freshness of data in your secondary model.
-
-### Get project details
-
-Use the following URL to get your project details. One of the keys returned in the body indicates the last modified date of the project.
-Repeat the following step twice, once for your primary project and once for your secondary project, then compare the timestamps returned for both of them to check whether they're out of sync.
--
-Repeat the same steps for your replicated project using `{SECONDARY-ENDPOINT}` and `{SECONDARY-RESOURCE-KEY}`. Compare the returned `lastModifiedDateTime` from both projects. If your primary project was modified more recently than your secondary one, you need to repeat the steps of [exporting](#export-your-primary-project-assets), [importing](#import-to-a-new-project), [training](#train-your-model) and [deploying](#deploy-your-model) your model.
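-
-A small script can automate this daily comparison; the project-details route below follows the authoring API pattern but is an assumption here, so confirm it against the authoring reference before relying on it:
-
-```python
-import requests
-
-ROUTE = "/language/authoring/analyze-text/projects/{project}?api-version=2022-05-01"  # assumed
-
-def last_modified(endpoint: str, key: str, project: str) -> str:
-    response = requests.get(
-        endpoint + ROUTE.format(project=project),
-        headers={"Ocp-Apim-Subscription-Key": key},
-        timeout=30,
-    )
-    response.raise_for_status()
-    return response.json()["lastModifiedDateTime"]
-
-primary = last_modified("https://<primary>.cognitiveservices.azure.com", "{PRIMARY-RESOURCE-KEY}", "{PROJECT-NAME}")
-secondary = last_modified("https://<secondary>.cognitiveservices.azure.com", "{SECONDARY-RESOURCE-KEY}", "{PROJECT-NAME}")
-
-# ISO 8601 timestamps compare correctly as strings.
-if primary > secondary:
-    print("Projects are out of sync: re-export, import, train, and deploy.")
-```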
--
-## Next steps
-
-In this article, you have learned how to use the export and import APIs to replicate your project to a secondary Language resource in another region. Next, explore the API reference docs to see what else you can do with authoring APIs.
-
-* [Authoring REST API reference](https://aka.ms/ct-authoring-swagger)
-* [Runtime prediction REST API reference](https://aka.ms/ct-runtime-swagger)
cognitive-services Faq https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/custom-text-classification/faq.md
- Title: Custom text classification FAQ-
-description: Learn about Frequently asked questions when using the custom text classification API.
------ Previously updated : 04/22/2022-----
-# Frequently asked questions
-
-Find answers to commonly asked questions about concepts, and scenarios related to custom text classification in Azure Cognitive Service for Language.
-
-## How do I get started with the service?
-
-See the [quickstart](./quickstart.md) to quickly create your first project, or view [how to create projects](how-to/create-project.md) for more details.
-
-## What are the service limits?
-
-See the [service limits article](service-limits.md).
-
-## Which languages are supported in this feature?
-
-See the [language support](./language-support.md) article.
-
-## How many tagged files are needed?
-
-Generally, diverse and representative [tagged data](how-to/tag-data.md) leads to better results, given that the tagging is done precisely, consistently and completely. There's no set number of tagged instances that will make every model perform well. Performance is highly dependent on your schema and the ambiguity of your schema. Ambiguous classes need more tags. Performance also depends on the quality of your tagging. The recommended number of tagged instances per class is 50.
-
-## Training is taking a long time, is this expected?
-
-The training process can take some time. As a rough estimate, the expected training time for files with a combined length of 12,800,000 chars is 6 hours.
-
-## How do I build my custom model programmatically?
-
-You can use the [REST APIs](https://westus.dev.cognitive.microsoft.com/docs/services/language-authoring-clu-apis-2022-03-01-preview/operations/Projects_TriggerImportProjectJob) to build your custom models. Follow this [quickstart](quickstart.md?pivots=rest-api) to get started with creating a project and creating a model through APIs for examples of how to call the Authoring API.
-
-When you're ready to start [using your model to make predictions](#how-do-i-use-my-trained-model-to-make-predictions), you can use the REST API, or the client library.
-
-## What is the recommended CI/CD process?
-
-You can train multiple models on the same dataset within the same project. After you have trained your model successfully, you can [view its evaluation](how-to/view-model-evaluation.md). You can [deploy and test](quickstart.md#deploy-your-model) your model within [Language studio](https://aka.ms/languageStudio). You can add or remove tags from your data and train a **new** model and test it as well. View the [service limits](service-limits.md) to learn about the maximum number of trained models within the same project. When you [tag your data](how-to/tag-data.md#label-your-data), you can determine how your dataset is split into training and testing sets.
-
-## Does a low or high model score guarantee bad or good performance in production?
-
-Model evaluation may not always be comprehensive, depending on:
-* If the **test set** is too small, the scores aren't representative of the model's actual performance. Also, if a specific class is missing or underrepresented in your test set, it will affect model performance.
-* **Data diversity**: if your data only covers a few scenarios/examples of the text you expect in production, your model won't be exposed to all possible scenarios and might perform poorly on the scenarios it hasn't been trained on.
-* **Data representation**: if the dataset used to train the model isn't representative of the data that would be introduced to the model in production, model performance will be affected greatly.
-
-See the [data selection and schema design](how-to/design-schema.md) article for more information.
-
-## How do I improve model performance?
-
-* View the model's [confusion matrix](how-to/view-model-evaluation.md). If you notice that a certain class is frequently classified incorrectly, consider adding more tagged instances for this class. If you notice that two classes are frequently classified as each other, the schema is ambiguous; consider merging them into one class for better performance.
-
-* [Examine data distribution](concepts/evaluation-metrics.md). If one of the classes has many more tagged instances than the others, your model may be biased toward this class. Add more data to the other classes, or remove most of the examples from the dominating class.
-
-* Review the [data selection and schema design](how-to/design-schema.md) article for more information.
-
-* [Review your test set](how-to/view-model-evaluation.md) to see predicted and tagged classes side-by-side so you can get a better idea of your model performance, and decide if any changes in the schema or the tags are necessary.
-
-## When I retrain my model I get different results, why is this?
-
-* When you [tag your data](how-to/tag-data.md#label-your-data), you can determine how your dataset is split into training and testing sets. You can also have your data split randomly into training and testing sets; in that case, there's no guarantee that the model evaluation uses the same test set, so results aren't comparable.
-
-* If you are retraining the same model, your test set will be the same, but you might notice a slight change in predictions made by the model. This is because the trained model is not robust enough, which is a factor of how representative and distinct your data is, and the quality of your tagged data.
-
-## How do I get predictions in different languages?
-
-First, you need to enable the multilingual option when [creating your project](how-to/create-project.md) or you can enable it later from the project settings page. After you train and deploy your model, you can start querying it in multiple languages. You may get varied results for different languages. To improve the accuracy of any language, add more tagged instances to your project in that language to introduce the trained model to more syntax of that language. See [language support](language-support.md#multi-lingual-option) for more information.
-
-## I trained my model, but I can't test it
-
-You need to [deploy your model](quickstart.md#deploy-your-model) before you can test it.
-
-## How do I use my trained model to make predictions?
-
-After deploying your model, you [call the prediction API](how-to/call-api.md), using either the [REST API](how-to/call-api.md?tabs=rest-api) or [client libraries](how-to/call-api.md?tabs=client).
-
-## Data privacy and security
-
-Custom text classification is a data processor for General Data Protection Regulation (GDPR) purposes. In compliance with GDPR policies, custom text classification users have full control to view, export, or delete any user content either through the [Language Studio](https://aka.ms/languageStudio) or programmatically by using [REST APIs](https://westus.dev.cognitive.microsoft.com/docs/services/language-authoring-clu-apis-2022-03-01-preview/operations/Projects_TriggerImportProjectJob).
-
-Your data is only stored in your Azure Storage account. Custom text classification only has access to read from it during training.
-
-## How do I clone my project?
-
-To clone your project, you need to use the export API to export the project assets, and then import them into a new project. See the [REST APIs](https://westus.dev.cognitive.microsoft.com/docs/services/language-authoring-clu-apis-2022-03-01-preview/operations/Projects_TriggerImportProjectJob) reference for both operations.
-
-## Next steps
-
-* [Custom text classification overview](overview.md)
-* [Quickstart](quickstart.md)
cognitive-services Glossary https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/custom-text-classification/glossary.md
- Title: Definitions used in custom text classification-
-description: Learn about definitions used in custom text classification.
------ Previously updated : 04/14/2022----
-# Terms and definitions used in custom text classification
-
-Use this article to learn about some of the definitions and terms you may encounter when using custom text classification.
-
-## Class
-
-A class is a user-defined category that indicates the overall classification of the text. Developers label their data with their classes before they pass it to the model for training.
-
-## F1 score
-The F1 score is a function of Precision and Recall. It's needed when you seek a balance between [precision](#precision) and [recall](#recall).
-
-## Model
-
-A model is an object that's trained to do a certain task, in this case text classification tasks. Models are trained by providing labeled data to learn from so they can later be used for classification tasks.
-
-* **Model training** is the process of teaching your model how to classify documents based on your labeled data.
-* **Model evaluation** is the process that happens right after training to measure how well your model performs.
-* **Deployment** is the process of assigning your model to a deployment to make it available for use via the [prediction API](https://aka.ms/ct-runtime-swagger).
-
-## Precision
-Measures how precise/accurate your model is. It's the ratio between the correctly identified positives (true positives) and all identified positives. The precision metric reveals how many of the predicted classes are correctly labeled.
-
-## Project
-
-A project is a work area for building your custom ML models based on your data. Your project can only be accessed by you and others who have access to the Azure resource being used.
-As a prerequisite to creating a custom text classification project, you have to connect your resource to a storage account with your dataset when you [create a new project](how-to/create-project.md). Your project automatically includes all the `.txt` files available in your container.
-
-Within your project you can do the following:
-
-* **Label your data**: The process of labeling your data so that when you train your model it learns what you want to extract.
-* **Build and train your model**: The core step of your project, where your model starts learning from your labeled data.
-* **View model evaluation details**: Review your model performance to decide if there is room for improvement, or you are satisfied with the results.
-* **Deployment**: After you've reviewed model performance and decided it's fit to be used in your environment, you need to assign it to a deployment to be able to query it. Assigning the model to a deployment makes it available for use through the [prediction API](https://aka.ms/ct-runtime-swagger).
-* **Test model**: After deploying your model, you can use this operation in [Language Studio](https://aka.ms/LanguageStudio) to try out your deployment and see how it would perform in production.
-
-### Project types
-
-Custom text classification supports two types of projects:
-
-* **Single label classification** - you can assign a single class for each document in your dataset. For example, a movie script could only be classified as "Romance" or "Comedy".
-* **Multi label classification** - you can assign multiple classes to each document in your dataset. For example, a movie script could be classified as "Comedy", "Romance", or both.
-
-## Recall
-Measures the model's ability to predict actual positive classes. It's the ratio between the predicted true positives and what was actually tagged. The recall metric reveals how many of the predicted classes are correct.
--
-## Next steps
-
-* [Data and service limits](service-limits.md).
-* [Custom text classification overview](../overview.md).
cognitive-services Call Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/custom-text-classification/how-to/call-api.md
- Title: Send a text classification request to your custom model
-description: Learn how to send requests for custom text classification.
------- Previously updated : 03/23/2023----
-# Send text classification requests to your model
-
-After you've successfully deployed a model, you can query the deployment to classify text based on the model you assigned to the deployment.
-You can query the deployment programmatically using the [prediction API](https://aka.ms/ct-runtime-api) or through the client libraries (Azure SDK).
-
-## Test deployed model
-
-You can use Language Studio to submit the custom text classification task and visualize the results.
-----
-## Send a text classification request to your model
-
-> [!TIP]
-> You can [test your model in Language Studio](../quickstart.md?pivots=language-studio#test-your-model) by sending sample text to classify it.
-
-# [Language Studio](#tab/language-studio)
---
-# [REST API](#tab/rest-api)
-
-First you need to get your resource key and endpoint:
----
-### Submit a custom text classification task
---
-### Get task results
--
-# [Client libraries (Azure SDK)](#tab/client-libraries)
-
-First you'll need to get your resource key and endpoint:
--
-3. Download and install the client library package for your language of choice:
-
- |Language |Package version |
- |||
- |.NET | [5.2.0-beta.3](https://www.nuget.org/packages/Azure.AI.TextAnalytics/5.2.0-beta.3) |
- |Java | [5.2.0-beta.3](https://mvnrepository.com/artifact/com.azure/azure-ai-textanalytics/5.2.0-beta.3) |
- |JavaScript | [6.0.0-beta.1](https://www.npmjs.com/package/@azure/ai-text-analytics/v/6.0.0-beta.1) |
- |Python | [5.2.0b4](https://pypi.org/project/azure-ai-textanalytics/5.2.0b4/) |
-
-4. After you've installed the client library, use the following samples on GitHub to start calling the API.
-
- Single label classification:
- * [C#](https://github.com/Azure/azure-sdk-for-net/blob/main/sdk/textanalytics/Azure.AI.TextAnalytics/samples/Sample9_SingleLabelClassify.md)
- * [Java](https://github.com/Azure/azure-sdk-for-java/blob/main/sdk/textanalytics/azure-ai-textanalytics/src/samples/java/com/azure/ai/textanalytics/lro/SingleLabelClassifyDocument.java)
- * [JavaScript](https://github.com/Azure/azure-sdk-for-js/blob/%40azure/ai-text-analytics_6.0.0-beta.1/sdk/textanalytics/ai-text-analytics/samples/v5/javascript/customText.js)
- * [Python](https://github.com/Azure/azure-sdk-for-python/blob/main/sdk/textanalytics/azure-ai-textanalytics/samples/sample_single_label_classify.py)
-
- Multi label classification:
- * [C#](https://github.com/Azure/azure-sdk-for-net/blob/main/sdk/textanalytics/Azure.AI.TextAnalytics/samples/Sample10_MultiLabelClassify.md)
- * [Java](https://github.com/Azure/azure-sdk-for-java/blob/main/sdk/textanalytics/azure-ai-textanalytics/src/samples/java/com/azure/ai/textanalytics/lro/MultiLabelClassifyDocument.java)
- * [JavaScript](https://github.com/Azure/azure-sdk-for-js/blob/%40azure/ai-text-analytics_6.0.0-beta.1/sdk/textanalytics/ai-text-analytics/samples/v5/javascript/customText.js)
- * [Python](https://github.com/Azure/azure-sdk-for-python/blob/main/sdk/textanalytics/azure-ai-textanalytics/samples/sample_multi_label_classify.py)
-
-5. See the following reference documentation for more information on the client, and return object:
-
- * [C#](/dotnet/api/azure.ai.textanalytics?view=azure-dotnet-preview&preserve-view=true)
- * [Java](/java/api/overview/azure/ai-textanalytics-readme?view=azure-java-preview&preserve-view=true)
- * [JavaScript](/javascript/api/overview/azure/ai-text-analytics-readme?view=azure-node-preview&preserve-view=true)
- * [Python](/python/api/azure-ai-textanalytics/azure.ai.textanalytics?view=azure-python-preview&preserve-view=true)
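-
-As a hedged end-to-end example with the Python client library (this assumes `azure-ai-textanalytics` 5.2.0 or later, where the single label method is named `begin_single_label_classify`; method names differed in some earlier previews, so check the sample links above for your installed version):
-
-```python
-from azure.core.credentials import AzureKeyCredential
-from azure.ai.textanalytics import TextAnalyticsClient
-
-# Placeholders: use your own resource endpoint and key, project name, and deployment name.
-client = TextAnalyticsClient(
-    endpoint="https://<your-resource>.cognitiveservices.azure.com",
-    credential=AzureKeyCredential("<your-resource-key>"),
-)
-
-documents = ["The film is a heartwarming story about friendship and loss."]
-
-poller = client.begin_single_label_classify(
-    documents,
-    project_name="<PROJECT-NAME>",
-    deployment_name="<DEPLOYMENT-NAME>",
-)
-
-for doc, result in zip(documents, poller.result()):
-    if result.is_error:
-        print(f"Error {result.error.code}: {result.error.message}")
-    else:
-        top = result.classifications[0]
-        print(f"'{doc[:40]}...' -> '{top.category}' ({top.confidence_score:.2f})")
-```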
--
-## Next steps
-
-* [Custom text classification overview](../overview.md)
cognitive-services Create Project https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/custom-text-classification/how-to/create-project.md
- Title: How to create custom text classification projects-
-description: Learn about the steps for using Azure resources with custom text classification.
------ Previously updated : 06/03/2022----
-# How to create custom text classification project
-
-Use this article to learn how to set up the requirements for starting with custom text classification and create a project.
-
-## Prerequisites
-
-Before you start using custom text classification, you will need:
-
-* An Azure subscription - [Create one for free](https://azure.microsoft.com/free/cognitive-services).
-
-## Create a Language resource
-
-Before you start using custom text classification, you will need an Azure Language resource. It is recommended to create your Language resource and connect a storage account to it in the Azure portal. Creating a resource in the Azure portal lets you create an Azure storage account at the same time, with all of the required permissions pre-configured. You can also read further in the article to learn how to use a pre-existing resource, and configure it to work with custom text classification.
-
-You also will need an Azure storage account where you will upload your `.txt` documents that will be used to train a model to classify text.
-
-> [!NOTE]
-> * You need to have an **owner** role assigned on the resource group to create a Language resource.
-> * If you will connect a pre-existing storage account, you should have an **owner** role assigned to it.
-
-## Create Language resource and connect storage account
--
-> [!Note]
-> You shouldn't move the storage account to a different resource group or subscription once it's linked with the Language resource.
-
-### [Using the Azure portal](#tab/azure-portal)
--
-### [Using Language Studio](#tab/language-studio)
--
-### [Using Azure PowerShell](#tab/azure-powershell)
----
-> [!NOTE]
-> * The process of connecting a storage account to your Language resource is irreversible; it cannot be disconnected later.
-> * You can only connect your language resource to one storage account.
-
-## Using a pre-existing Language resource
---
-## Create a custom text classification project
-
-Once your resource and storage container are configured, create a new custom text classification project. A project is a work area for building your custom AI models based on your data. Your project can only be accessed by you and others who have access to the Azure resource being used. If you have labeled data, you can [import it](#import-a-custom-text-classification-project) to get started.
-
-### [Language Studio](#tab/studio)
---
-### [REST APIs](#tab/apis)
----
-## Import a custom text classification project
-
-If you have already labeled data, you can use it to get started with the service. Make sure that your labeled data follows the [accepted data formats](../concepts/data-formats.md).
-
-### [Language Studio](#tab/studio)
--
-### [REST APIs](#tab/apis)
----
-## Get project details
-
-### [Language Studio](#tab/studio)
--
-### [REST APIs](#tab/apis)
----
-## Delete project
-
-### [Language Studio](#tab/studio)
--
-### [REST APIs](#tab/apis)
----
-## Next steps
-
-* You should have an idea of the [project schema](design-schema.md) you will use to label your data.
-
-* After your project is created, you can start [labeling your data](tag-data.md), which will inform your text classification model how to interpret text, and is used for training and evaluation.
cognitive-services Deploy Model https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/custom-text-classification/how-to/deploy-model.md
- Title: How to deploy a custom text classification model-
-description: Learn how to deploy a model for custom text classification.
------ Previously updated : 03/23/2023----
-# Deploy a model and classify text using the runtime API
-
-Once you're satisfied with how your model performs, it's ready to be deployed and used to classify text. Deploying a model makes it available for use through the [prediction API](https://aka.ms/ct-runtime-swagger).
-
-## Prerequisites
-
-* [A custom text classification project](create-project.md) with a configured Azure storage account.
-* Text data that has [been uploaded](design-schema.md#data-preparation) to your storage account.
-* [Labeled data](tag-data.md) and successfully [trained model](train-model.md)
-* Reviewed the [model evaluation details](view-model-evaluation.md) to determine how your model is performing.
-
-See the [project development lifecycle](../overview.md#project-development-lifecycle) for more information.
-
-## Deploy model
-
-After you have reviewed your model's performance and decided it can be used in your environment, you need to assign it to a deployment to be able to query it. Assigning the model to a deployment makes it available for use through the [prediction API](https://aka.ms/ct-runtime-swagger). It is recommended to create a deployment named `production` to which you assign the best model you have built so far and use it in your system. You can create another deployment called `staging` to which you can assign the model you're currently working on to be able to test it. You can have a maximum of 10 deployments in your project.
-
-# [Language Studio](#tab/language-studio)
-
-
-# [REST APIs](#tab/rest-api)
-
-### Submit deployment job
--
-### Get deployment job status
----
-## Swap deployments
-
-You can swap deployments after you've tested a model assigned to one deployment and want to assign it to another. Swapping deployments involves taking the model assigned to the first deployment and assigning it to the second deployment, then taking the model assigned to the second deployment and assigning it to the first deployment. This could be used to swap your `production` and `staging` deployments when you want to take the model assigned to `staging` and assign it to `production`.
-
-# [Language Studio](#tab/language-studio)
--
-# [REST APIs](#tab/rest-api)
-----
-## Delete deployment
-
-# [Language Studio](#tab/language-studio)
--
-# [REST APIs](#tab/rest-api)
----
-## Assign deployment resources
-
-You can [deploy your project to multiple regions](../../concepts/custom-features/multi-region-deployment.md) by assigning different Language resources that exist in different regions.
-
-# [Language Studio](#tab/language-studio)
--
-# [REST APIs](#tab/rest-api)
----
-## Unassign deployment resources
-
-When you unassign or remove a deployment resource from a project, you will also delete all the deployments that have been deployed to that resource's region.
-
-# [Language Studio](#tab/language-studio)
--
-# [REST APIs](#tab/rest-api)
----
-## Next steps
-
-* Use [prediction API to query your model](call-api.md)
cognitive-services Design Schema https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/custom-text-classification/how-to/design-schema.md
- Title: How to prepare data and define a custom classification schema-
-description: Learn about data selection, preparation, and creating a schema for custom text classification projects.
------ Previously updated : 05/05/2022----
-# How to prepare data and define a text classification schema
-
-In order to create a custom text classification model, you will need quality data to train it. This article covers how you should select and prepare your data, along with defining a schema. Defining the schema is the first step in [project development lifecycle](../overview.md#project-development-lifecycle), and it defines the classes that you need your model to classify your text into at runtime.
-
-## Schema design
-
-The schema defines the classes that you need your model to classify your text into at runtime.
-
-* **Review and identify**: Review documents in your dataset to be familiar with their structure and content, then identify how you want to classify your data.
-
- For example, if you are classifying support tickets, you might need the following classes: *login issue*, *hardware issue*, *connectivity issue*, and *new equipment request*.
-
-* **Avoid ambiguity in classes**: Ambiguity arises when the classes you specify share similar meaning to one another. The more ambiguous your schema is, the more labeled data you may need to differentiate between different classes.
-
- For example, if you are classifying food recipes, they may be similar to an extent. To differentiate between *dessert recipe* and *main dish recipe*, you may need to label more examples to help your model distinguish between the two classes. Avoiding ambiguity saves time and yields better results.
-
-* **Out of scope data**: When using your model in production, consider adding an *out of scope* class to your schema if you expect documents that don't belong to any of your classes. Then add a few documents to your dataset to be labeled as *out of scope*. The model can learn to recognize irrelevant documents, and predict their labels accordingly.
--
-## Data selection
-
-The quality of data you train your model with affects model performance greatly.
-
-* Use real-life data that reflects your domain's problem space to effectively train your model. You can use synthetic data to accelerate the initial model training process, but it will likely differ from your real-life data and make your model less effective when used.
-
-* Balance your data distribution as much as possible without deviating far from the distribution in real-life.
-
-* Use diverse data whenever possible to avoid overfitting your model. Less diversity in training data may lead to your model learning spurious correlations that may not exist in real-life data.
-
-* Avoid duplicate documents in your data. Duplicate data has a negative effect on the training process, model metrics, and model performance.
-
-* Consider where your data comes from. If you are collecting data from one person, department, or part of your scenario, you are likely missing diversity that may be important for your model to learn about.
-
-> [!NOTE]
-> If your documents are in multiple languages, select the **multiple languages** option during [project creation](../quickstart.md) and set the **language** option to the language of the majority of your documents.
-
-## Data preparation
-
-As a prerequisite for creating a custom text classification project, your training data needs to be uploaded to a blob container in your storage account. You can create and upload training documents from Azure directly, or by using the Azure Storage Explorer tool. Using the Azure Storage Explorer tool allows you to upload more data quickly.
-
-* [Create and upload documents from Azure](../../../../storage/blobs/storage-quickstart-blobs-portal.md#create-a-container)
-* [Create and upload documents using Azure Storage Explorer](../../../../vs-azure-tools-storage-explorer-blobs.md)
-
-You can only use `.txt` documents for custom text. If your data is in another format, you can use the [CLUtils parse command](https://github.com/microsoft/CognitiveServicesLanguageUtilities/blob/main/CustomTextAnalytics.CLUtils/Solution/CogSLanguageUtilities.ViewLayer.CliCommands/Commands/ParseCommand/README.md) to change your file format.
-
- You can upload an annotated dataset, or you can upload an unannotated one and [label your data](../how-to/tag-data.md) in Language Studio.
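-
-As one possible approach (not part of this article's documented steps), the following Python sketch uses the `azure-storage-blob` package to upload a local folder of `.txt` documents to the root of your container; the connection string, container name, and folder path are placeholders:
-
-```python
-from pathlib import Path
-from azure.storage.blob import BlobServiceClient
-
-# Connect to the storage account linked to your Language resource.
-service = BlobServiceClient.from_connection_string("<your-storage-connection-string>")
-container = service.get_container_client("<your-container-name>")
-
-# Upload every .txt document in a local folder to the root of the container.
-for path in Path("training-data").glob("*.txt"):
-    with open(path, "rb") as data:
-        container.upload_blob(name=path.name, data=data, overwrite=True)
-```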
-
-## Test set
-
-When defining the testing set, make sure to include example documents that are not present in the training set. Defining the testing set is an important step in calculating the [model performance](view-model-evaluation.md#model-details). Also, make sure that the testing set includes documents that represent all classes used in your project.
-
-## Next steps
-
-If you haven't already, create a custom text classification project. If it's your first time using custom text classification, consider following the [quickstart](../quickstart.md) to create an example project. You can also see the [project requirements](../how-to/create-project.md) for more details on what you need to create a project.
cognitive-services Tag Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/custom-text-classification/how-to/tag-data.md
- Title: How to label your data for custom classification - Azure Cognitive Services-
-description: Learn about how to label your data for use with the custom text classification.
------ Previously updated : 11/10/2022----
-# Label text data for training your model
-
-Before training your model you need to label your documents with the classes you want to categorize them into. Data labeling is a crucial step in the development lifecycle; in this step you create the classes you want to categorize your data into and label your documents with those classes. This data is used in the next step when training your model so that your model can learn from the labeled data. If you already have labeled data, you can directly [import](create-project.md) it into your project, but you need to make sure that your data follows the [accepted data format](../concepts/data-formats.md).
-
-Before creating a custom text classification model, you need to have labeled data first. If your data isn't labeled already, you can label it in the [Language Studio](https://aka.ms/languageStudio). Labeled data informs the model how to interpret text, and is used for training and evaluation.
-
-## Prerequisites
-
-Before you can label data, you need:
-
-* [A successfully created project](create-project.md) with a configured Azure blob storage account,
-* Documents containing text data that have [been uploaded](design-schema.md#data-preparation) to your storage account.
-
-See the [project development lifecycle](../overview.md#project-development-lifecycle) for more information.
-
-## Data labeling guidelines
-
-After [preparing your data, designing your schema](design-schema.md) and [creating your project](create-project.md), you will need to label your data. Labeling your data is important so your model knows which documents will be associated with the classes you need. When you label your data in [Language Studio](https://aka.ms/languageStudio) (or import labeled data), these labels will be stored in the JSON file in your storage container that you've connected to this project.
-
-As you label your data, keep in mind:
-
-* In general, more labeled data leads to better results, provided the data is labeled accurately.
-
-* There is no fixed number of labels that can guarantee your model will perform the best. Model performance depends on possible ambiguity in your [schema](design-schema.md) and on the quality of your labeled data. Nevertheless, we recommend 50 labeled documents per class.
-
-## Label your data
-
-Use the following steps to label your data:
-
-1. Go to your project page in [Language Studio](https://aka.ms/languageStudio).
-
-2. From the left side menu, select **Data labeling**. You can find a list of all documents in your storage container. See the image below.
-
- >[!TIP]
- > You can use the filters in top menu to view the unlabeled files so that you can start labeling them.
- > You can also use the filters to view the documents that are labeled with a specific class.
-
-3. Change to a single file view from the top menu on the left, or select a specific file to start labeling. You can find a list of all `.txt` files available in your project to the left. You can use the **Back** and **Next** buttons at the bottom of the page to navigate through your documents.
-
- > [!NOTE]
- > If you enabled multiple languages for your project, you will find a **Language** dropdown in the top menu, which lets you select the language of each document.
--
-4. In the right side pane, select **Add class** to add classes to your project so you can start labeling your data with them.
-
-5. Start labeling your files.
-
- # [Multi label classification](#tab/multi-classification)
-
-   **Multi label classification**: your file can be labeled with multiple classes. Select all the applicable check boxes next to the classes you want to label this document with.
-
- :::image type="content" source="../media/multiple.png" alt-text="A screenshot showing the multiple label classification tag page." lightbox="../media/multiple.png":::
-
- # [Single label classification](#tab/single-classification)
-
-   **Single label classification**: your file can only be labeled with one class. Select the button next to the class you want to label the document with.
-
- :::image type="content" source="../media/single.png" alt-text="A screenshot showing the single label classification tag page" lightbox="../media/single.png":::
-
-
-
- You can also use the [auto labeling feature](use-autolabeling.md) to ensure complete labeling.
-
-6. In the right side pane, under the **Labels** pivot, you can find all the classes in your project and the count of labeled instances for each.
-
-7. In the bottom section of the right side pane you can add the current file you are viewing to the training set or the testing set. By default all the documents are added to your training set. Learn more about [training and testing sets](train-model.md#data-splitting) and how they are used for model training and evaluation.
-
- > [!TIP]
-   > If you are planning on using **Automatic** data splitting, use the default option of assigning all the documents into your training set.
-
-8. Under the **Distribution** pivot you can view the distribution across training and testing sets. You have two options for viewing:
- * *Total instances* where you can view count of all labeled instances of a specific class.
- * *documents with at least one label* where each document is counted if it contains at least one labeled instance of this class.
-
-9. While you're labeling, your changes are synced periodically; if they haven't been saved yet, you'll see a warning at the top of your page. If you want to save manually, select the **Save labels** button at the bottom of the page.
-
-## Remove labels
-
-If you want to remove a label, uncheck the button next to the class.
-
-## Delete classes
-
-To delete a class, click on the delete icon next to the class you want to remove. Deleting a class will remove all its labeled instances from your dataset.
-
-## Next steps
-
-After you've labeled your data, you can begin [training a model](train-model.md) that will learn based on your data.
cognitive-services Train Model https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/custom-text-classification/how-to/train-model.md
- Title: How to train your custom text classification model - Azure Cognitive Services-
-description: Learn about how to train your model for custom text classification.
------ Previously updated : 08/08/2022----
-# How to train a custom text classification model
-
-Training is the process where the model learns from your [labeled data](tag-data.md). After training is completed, you will be able to [view the model's performance](view-model-evaluation.md) to determine if you need to improve your model.
-
-To train a model, start a training job. Only successfully completed jobs create a usable model. Training jobs expire after seven days. After this period, you won't be able to retrieve the job details. If your training job completed successfully and a model was created, it won't be affected by the job expiration. You can only have one training job running at a time, and you can't start other jobs in the same project.
-
-Training times can range from a few minutes for a small number of documents to several hours, depending on the dataset size and the complexity of your schema.
---
-## Prerequisites
-
-Before you train your model, you need:
-
-* [A successfully created project](create-project.md) with a configured Azure blob storage account,
-* Text data that has [been uploaded](design-schema.md#data-preparation) to your storage account.
-* [Labeled data](tag-data.md)
-
-See the [project development lifecycle](../overview.md#project-development-lifecycle) for more information.
-
-## Data splitting
-
-Before you start the training process, the labeled documents in your project are divided into a training set and a testing set. Each set serves a different function.
-The **training set** is used in training the model; it's the set from which the model learns the class or classes assigned to each document.
-The **testing set** is a blind set that isn't introduced to the model during training, but only during evaluation.
-After the model is trained successfully, it's used to make predictions on the documents in the testing set. Based on these predictions, the model's [evaluation metrics](../concepts/evaluation-metrics.md) are calculated.
-It's recommended to make sure that all your classes are adequately represented in both the training and testing sets. A small illustrative sketch of producing such a split yourself follows the two options below.
-
-Custom text classification supports two methods for data splitting:
-
-* **Automatically splitting the testing set from training data**: The system will split your labeled data between the training and testing sets, according to the percentages you choose. The system will attempt to have a representation of all classes in your training set. The recommended percentage split is 80% for training and 20% for testing.
-
- > [!NOTE]
- > If you choose the **Automatically splitting the testing set from training data** option, only the data assigned to training set will be split according to the percentages provided.
-
-* **Use a manual split of training and testing data**: This method enables users to define which labeled documents should belong to which set. This step is only enabled if you have added documents to your testing set during [data labeling](tag-data.md).
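-
-The following Python sketch illustrates the idea behind an 80% / 20% split per class. It's only an illustration of how you might prepare a manual split yourself, not the service's internal splitting logic, and the document names and labels are placeholders:
-
-```python
-import random
-from collections import defaultdict
-
-def split_dataset(labeled_docs, train_ratio=0.8, seed=42):
-    """labeled_docs: list of (document_name, class_label) pairs."""
-    random.seed(seed)
-    by_class = defaultdict(list)
-    for name, label in labeled_docs:
-        by_class[label].append(name)
-    train, test = [], []
-    # Split each class separately so every class is represented in both sets.
-    for label, docs in by_class.items():
-        random.shuffle(docs)
-        cut = max(1, int(len(docs) * train_ratio))
-        train.extend(docs[:cut])
-        test.extend(docs[cut:])
-    return train, test
-
-# Example: train_set, test_set = split_dataset([("doc1.txt", "login issue"), ...])
-```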
-
-## Train model
-
-# [Language studio](#tab/Language-studio)
--
-# [REST APIs](#tab/REST-APIs)
-
-### Start training job
--
-### Get training job status
-
-Training can take some time, depending on the size of your training data and the complexity of your schema. You can use the following request to keep polling the status of the training job until it completes successfully.
-
- [!INCLUDE [get training model status](../includes/rest-api/get-training-status.md)]
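-
-For illustration, here's a minimal Python polling loop built around the same request; the endpoint shape, `train/jobs` route, and `2022-05-01` API version are assumptions and should be checked against the REST reference shown in the include above:
-
-```python
-import time
-import requests
-
-ENDPOINT = "https://<your-language-resource>.cognitiveservices.azure.com"
-KEY = "<your-resource-key>"
-PROJECT = "<your-project-name>"
-JOB_ID = "<job-id-from-the-operation-location-header>"
-
-# Assumed route and api-version; verify against the REST reference.
-url = (f"{ENDPOINT}/language/authoring/analyze-text/projects/"
-       f"{PROJECT}/train/jobs/{JOB_ID}?api-version=2022-05-01")
-headers = {"Ocp-Apim-Subscription-Key": KEY}
-
-while True:
-    job = requests.get(url, headers=headers).json()
-    print(job.get("status"))
-    if job.get("status") in ("succeeded", "failed", "cancelled"):
-        break
-    time.sleep(30)  # wait 30 seconds between polls
-```
-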
---
-### Cancel training job
-
-# [Language Studio](#tab/language-studio)
--
-# [REST APIs](#tab/rest-api)
----
-## Next steps
-
-After training is completed, you will be able to [view the model's performance](view-model-evaluation.md) to optionally improve your model if needed. Once you're satisfied with your model, you can deploy it, making it available to use for [classifying text](call-api.md).
cognitive-services Use Autolabeling https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/custom-text-classification/how-to/use-autolabeling.md
- Title: How to use autolabeling in custom text classification-
-description: Learn how to use autolabeling in custom text classification.
------- Previously updated : 3/15/2023---
-# How to use autolabeling for Custom Text Classification
-
-The [labeling process](tag-data.md) is an important part of preparing your dataset. Because this process requires significant time and effort, you can use the autolabeling feature to automatically label your documents with the classes you want to categorize them into. You can currently trigger autolabeling jobs that use GPT models without any prior model training. This feature can save you the time and effort of manually labeling your documents.
-
-## Prerequisites
-
-Before you can use autolabeling with GPT, you need:
-* A successfully [created project](create-project.md) with a configured Azure blob storage account.
-* Text data that [has been uploaded](design-schema.md#data-preparation) to your storage account.
-* Class names that are meaningful. The GPT models label documents based on the names of the classes you've provided.
-* [Labeled data](tag-data.md) isn't required.
-* An Azure OpenAI [resource and deployment](../../../openai/how-to/create-resource.md).
---
-## Trigger an autolabeling job
-
-When you trigger an autolabeling job with GPT, you're charged to your Azure OpenAI resource as per your consumption. The charge is based on an estimate of the number of tokens in each document being autolabeled. Refer to the [Azure OpenAI pricing page](https://azure.microsoft.com/pricing/details/cognitive-services/openai-service/) for a detailed breakdown of pricing per token of different models.
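-
-If you want a rough sense of that token count before starting a job, one option (not part of this article) is the `tiktoken` package; the `cl100k_base` encoding below is an assumption, and actual billing depends on the model you deployed:
-
-```python
-# pip install tiktoken
-import tiktoken
-
-encoding = tiktoken.get_encoding("cl100k_base")  # assumed encoding; depends on your deployed model
-
-def estimate_tokens(documents):
-    """Rough token count for a list of document texts."""
-    return sum(len(encoding.encode(text)) for text in documents)
-
-docs = ["First support ticket text...", "Second support ticket text..."]
-print(f"Roughly {estimate_tokens(docs)} tokens would be sent for autolabeling.")
-```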
-
-1. From the left navigation menu, select **Data labeling**.
-2. Select the **Autolabel** button under the Activity pane to the right of the page.
-
- :::image type="content" source="../media/trigger-autotag.png" alt-text="A screenshot showing how to trigger an autotag job from the activity pane." lightbox="../media/trigger-autotag.png":::
-
-4. Choose **Autolabel with GPT** and select **Next**.
-
- :::image type="content" source="../media/choose-models.png" alt-text="A screenshot showing model choice for auto labeling." lightbox="../media/choose-models.png":::
-
-5. Choose your Azure OpenAI resource and deployment. You must [create an Azure OpenAI resource and deploy a model](../../../openai/how-to/create-resource.md) in order to proceed.
-
- :::image type="content" source="../media/autotag-choose-open-ai.png" alt-text="A screenshot showing how to choose OpenAI resource and deployments" lightbox="../media/autotag-choose-open-ai.png":::
-
-6. Select the classes you want to be included in the autolabeling job. By default, all classes are selected. Having descriptive names for classes, and including examples for each class is recommended to achieve good quality labeling with GPT.
-
- :::image type="content" source="../media/choose-classes.png" alt-text="A screenshot showing which labels to be included in autotag job." lightbox="../media/choose-classes.png":::
-
-7. Choose the documents you want to be automatically labeled. It's recommended to choose the unlabeled documents from the filter.
-
- > [!NOTE]
- > * If a document was automatically labeled, but this label was already user defined, only the user defined label is used.
- > * You can view the documents by clicking on the document name.
-
- :::image type="content" source="../media/choose-files.png" alt-text="A screenshot showing which documents to be included in the autotag job." lightbox="../media/choose-files.png":::
-
-8. Select **Start job** to trigger the autolabeling job.
-You should be directed to the autolabeling page displaying the autolabeling jobs initiated. Autolabeling jobs can take anywhere from a few seconds to a few minutes, depending on the number of documents you included.
-
- :::image type="content" source="../media/review-autotag.png" alt-text="A screenshot showing the review screen for an autotag job." lightbox="../media/review-autotag.png":::
----
-## Review the auto labeled documents
-
-When the autolabeling job is complete, you can see the output documents in the **Data labeling** page of Language Studio. Select **Review documents with autolabels** to view the documents with the **Auto labeled** filter applied.
--
-Documents that have been automatically classified have suggested labels in the activity pane highlighted in purple. Each suggested label has two selectors (a checkmark and a cancel icon) that allow you to accept or reject the automatic label.
-
-Once a label is accepted, the purple color changes to the default blue, and the label is included in any further model training, becoming a user-defined label.
-
-After you accept or reject the labels for the autolabeled documents, select **Save labels** to apply the changes.
-
-> [!NOTE]
-> * We recommend validating automatically labeled documents before accepting them.
-> * All labels that were not accepted are deleted when you train your model.
--
-## Next steps
-
-* Learn more about [labeling your data](tag-data.md).
cognitive-services View Model Evaluation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/custom-text-classification/how-to/view-model-evaluation.md
- Title: View a custom text classification model evaluation - Azure Cognitive Services-
-description: Learn how to view the evaluation scores for a custom text classification model
------ Previously updated : 10/12/2022----
-# View your text classification model's evaluation and details
-
-After your model has finished training, you can view the model performance and see the predicted classes for the documents in the test set.
-
-> [!NOTE]
-> Using the **Automatically split the testing set from training data** option may result in different model evaluation results every time you [train a new model](train-model.md), as the test set is selected randomly from the data. To make sure that the evaluation is calculated on the same test set every time you train a model, make sure to use the **Use a manual split of training and testing data** option when starting a training job and define your **Test** documents when [labeling data](tag-data.md).
-
-## Prerequisites
-
-Before viewing model evaluation you need:
-
-* [A custom text classification project](create-project.md) with a configured Azure blob storage account.
-* Text data that has [been uploaded](design-schema.md#data-preparation) to your storage account.
-* [Labeled data](tag-data.md)
-* A successfully [trained model](train-model.md)
-
-See the [project development lifecycle](../overview.md#project-development-lifecycle) for more information.
-
-## Model details
-
-### [Language studio](#tab/language-studio)
--
-### [REST APIs](#tab/rest-api)
-
-### Single label classification
-
-### Multi label classification
----
-## Load or export model data
-
-### [Language studio](#tab/Language-studio)
---
-### [REST APIs](#tab/REST-APIs)
----
-## Delete model
-
-### [Language studio](#tab/language-studio)
---
-### [REST APIs](#tab/rest-api)
-----
-## Next steps
-
-As you review how your model performs, learn about the [evaluation metrics](../concepts/evaluation-metrics.md) that are used. Once you know whether your model's performance needs to improve, you can begin improving the model.
cognitive-services Language Support https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/custom-text-classification/language-support.md
- Title: Language support in custom text classification-
-description: Learn about which languages are supported by custom text classification.
------ Previously updated : 05/06/2022----
-# Language support for custom text classification
-
-Use this article to learn about the languages currently supported by the custom text classification feature.
-
-## Multi-lingual option
-
-With custom text classification, you can train a model in one language and use it to classify documents in another language. This feature is useful because it helps save time and effort. Instead of building separate projects for every language, you can handle a multi-lingual dataset in one project. Your dataset doesn't have to be entirely in the same language, but you do need to enable the multi-lingual option for your project while creating it, or later in the project settings. If you notice your model performing poorly in certain languages during the evaluation process, consider adding more data in these languages to your training set.
-
-You can train your project entirely with English documents, and query it in French, German, Mandarin, Japanese, Korean, and others. Custom text classification makes it easy for you to scale your projects to multiple languages by using multilingual technology to train your models.
-
-Whenever you identify that a particular language is not performing as well as other languages, you can add more documents for that language in your project. In the [data labeling](how-to/tag-data.md) page in Language Studio, you can select the language of the document you're adding. When you introduce more documents for that language to the model, it is introduced to more of the syntax of that language, and learns to predict it better.
-
-You aren't expected to add the same number of documents for every language. You should build the majority of your project in one language, and only add a few documents in languages you observe aren't performing well. If you create a project that is primarily in English, and start testing it in French, German, and Spanish, you might observe that German doesn't perform as well as the other two languages. In that case, consider adding 5% of your original English documents in German, train a new model and test in German again. You should see better results for German queries. The more labeled documents you add, the more likely the results are going to get better.
-
-When you add data in another language, you shouldn't expect it to negatively affect other languages.
-
-## Languages supported by custom text classification
-
-Custom text classification supports `.txt` files in the following languages:
-
-| Language | Language Code |
-| | |
-| Afrikaans | `af` |
-| Amharic | `am` |
-| Arabic | `ar` |
-| Assamese | `as` |
-| Azerbaijani | `az` |
-| Belarusian | `be` |
-| Bulgarian | `bg` |
-| Bengali | `bn` |
-| Breton | `br` |
-| Bosnian | `bs` |
-| Catalan | `ca` |
-| Czech | `cs` |
-| Welsh | `cy` |
-| Danish | `da` |
-| German | `de` |
-| Greek | `el` |
-| English (US) | `en-us` |
-| Esperanto | `eo` |
-| Spanish | `es` |
-| Estonian | `et` |
-| Basque | `eu` |
-| Persian (Farsi) | `fa` |
-| Finnish | `fi` |
-| French | `fr` |
-| Western Frisian | `fy` |
-| Irish | `ga` |
-| Scottish Gaelic | `gd` |
-| Galician | `gl` |
-| Gujarati | `gu` |
-| Hausa | `ha` |
-| Hebrew | `he` |
-| Hindi | `hi` |
-| Croatian | `hr` |
-| Hungarian | `hu` |
-| Armenian | `hy` |
-| Indonesian | `id` |
-| Italian | `it` |
-| Japanese | `ja` |
-| Javanese | `jv` |
-| Georgian | `ka` |
-| Kazakh | `kk` |
-| Khmer | `km` |
-| Kannada | `kn` |
-| Korean | `ko` |
-| Kurdish (Kurmanji) | `ku` |
-| Kyrgyz | `ky` |
-| Latin | `la` |
-| Lao | `lo` |
-| Lithuanian | `lt` |
-| Latvian | `lv` |
-| Malagasy | `mg` |
-| Macedonian | `mk` |
-| Malayalam | `ml` |
-| Mongolian | `mn` |
-| Marathi | `mr` |
-| Malay | `ms` |
-| Burmese | `my` |
-| Nepali | `ne` |
-| Dutch | `nl` |
-| Norwegian (Bokmal) | `nb` |
-| Oriya | `or` |
-| Punjabi | `pa` |
-| Polish | `pl` |
-| Pashto | `ps` |
-| Portuguese (Brazil) | `pt-br` |
-| Portuguese (Portugal) | `pt-pt` |
-| Romanian | `ro` |
-| Russian | `ru` |
-| Sanskrit | `sa` |
-| Sindhi | `sd` |
-| Sinhala | `si` |
-| Slovak | `sk` |
-| Slovenian | `sl` |
-| Somali | `so` |
-| Albanian | `sq` |
-| Serbian | `sr` |
-| Sundanese | `su` |
-| Swedish | `sv` |
-| Swahili | `sw` |
-| Tamil | `ta` |
-| Telugu | `te` |
-| Thai | `th` |
-| Filipino | `tl` |
-| Turkish | `tr` |
-| Uyghur | `ug` |
-| Ukrainian | `uk` |
-| Urdu | `ur` |
-| Uzbek | `uz` |
-| Vietnamese | `vi` |
-| Xhosa | `xh` |
-| Yiddish | `yi` |
-| Chinese (Simplified) | `zh-hans` |
-| Zulu | `zu` |
-
-## Next steps
-
-* [Custom text classification overview](overview.md)
-* [Service limits](service-limits.md)
cognitive-services Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/custom-text-classification/overview.md
- Title: Custom text classification - Azure Cognitive Services-
-description: Customize an AI model to classify documents and other content using Azure Cognitive Services.
------ Previously updated : 06/17/2022----
-# What is custom text classification?
-
-Custom text classification is one of the custom features offered by [Azure Cognitive Service for Language](../overview.md). It is a cloud-based API service that applies machine-learning intelligence to enable you to build custom models for text classification tasks.
-
-Custom text classification enables users to build custom AI models to classify text into custom classes pre-defined by the user. By creating a custom text classification project, developers can iteratively label data, train, evaluate, and improve model performance before making it available for consumption. The quality of the labeled data greatly impacts model performance. To simplify building and customizing your model, the service offers a custom web portal that can be accessed through the [Language studio](https://aka.ms/languageStudio). You can easily get started with the service by following the steps in this [quickstart](quickstart.md).
-
-Custom text classification supports two types of projects:
-
-* **Single label classification** - you can assign a single class for each document in your dataset. For example, a movie script could only be classified as "Romance" or "Comedy".
-* **Multi label classification** - you can assign multiple classes for each document in your dataset. For example, a movie script could be classified as "Comedy", "Romance", or both.
-
-This documentation contains the following article types:
-
-* [Quickstarts](quickstart.md) are getting-started instructions to guide you through making requests to the service.
-* [Concepts](concepts/evaluation-metrics.md) provide explanations of the service functionality and features.
-* [How-to guides](how-to/tag-data.md) contain instructions for using the service in more specific or customized ways.
-
-## Example usage scenarios
-
-Custom text classification can be used in multiple scenarios across a variety of industries:
-
-### Automatic emails or ticket triage
-
-Support centers of all types receive a high volume of emails or tickets containing unstructured, freeform text and attachments. Timely review, acknowledgment, and routing to subject matter experts within internal teams is critical. Email triage at this scale requires people to review and route to the right departments, which takes time and resources. Custom text classification can be used to analyze incoming text, and triage and categorize the content to be automatically routed to the relevant departments for further action.
-
-### Knowledge mining to enhance/enrich semantic search
-
-Search is foundational to any app that surfaces text content to users. Common scenarios include catalog or document searches, retail product searches, or knowledge mining for data science. Many enterprises across various industries are seeking to build a rich search experience over private, heterogeneous content, which includes both structured and unstructured documents. As a part of their pipeline, developers can use custom text classification to categorize their text into classes that are relevant to their industry. The predicted classes can be used to enrich the indexing of the file for a more customized search experience.
-
-## Project development lifecycle
-
-Creating a custom text classification project typically involves several different steps.
--
-Follow these steps to get the most out of your model:
-
-1. **Define your schema**: Know your data and identify the [classes](glossary.md#class) you want to differentiate between, to avoid ambiguity.
-
-2. **Label your data**: The quality of data labeling is a key factor in determining model performance. Similar documents should always be labeled with the same class; if you have a document that can fall into two classes, use a **Multi label classification** project. Avoid class ambiguity by making sure that your classes are clearly separable from each other, especially with single label classification projects.
-
-3. **Train the model**: Your model starts learning from your labeled data.
-
-4. **View the model's performance**: View the evaluation details for your model to determine how well it performs when introduced to new data.
-
-5. **Deploy the model**: Deploying a model makes it available for use via the [Analyze API](https://aka.ms/ct-runtime-swagger).
-
-6. **Classify text**: Use your custom model for custom text classification tasks.
-
-## Reference documentation and code samples
-
-As you use custom text classification, see the following reference documentation and samples for Azure Cognitive Service for Language:
-
-|Development option / language |Reference documentation |Samples |
-||||
-|REST APIs (Authoring) | [REST API documentation](https://aka.ms/ct-authoring-swagger) | |
-|REST APIs (Runtime) | [REST API documentation](https://aka.ms/ct-runtime-swagger) | |
-|C# (Runtime) | [C# documentation](/dotnet/api/azure.ai.textanalytics?view=azure-dotnet-preview&preserve-view=true) | [C# samples - Single label classification](https://github.com/Azure/azure-sdk-for-net/blob/main/sdk/textanalytics/Azure.AI.TextAnalytics/samples/Sample9_SingleLabelClassify.md) [C# samples - Multi label classification](https://github.com/Azure/azure-sdk-for-net/blob/main/sdk/textanalytics/Azure.AI.TextAnalytics/samples/Sample10_MultiLabelClassify.md) |
-| Java (Runtime) | [Java documentation](/java/api/overview/azure/ai-textanalytics-readme?view=azure-java-preview&preserve-view=true) | [Java Samples - Single label classification](https://github.com/Azure/azure-sdk-for-java/blob/main/sdk/textanalytics/azure-ai-textanalytics/src/samples/java/com/azure/ai/textanalytics/lro/SingleLabelClassifyDocument.java) [Java Samples - Multi label classification](https://github.com/Azure/azure-sdk-for-java/blob/main/sdk/textanalytics/azure-ai-textanalytics/src/samples/java/com/azure/ai/textanalytics/lro/MultiLabelClassifyDocument.java) |
-|JavaScript (Runtime) | [JavaScript documentation](/javascript/api/overview/azure/ai-text-analytics-readme?view=azure-node-preview&preserve-view=true) | [JavaScript samples - Single label classification](https://github.com/Azure/azure-sdk-for-js/blob/%40azure/ai-text-analytics_6.0.0-beta.1/sdk/textanalytics/ai-text-analytics/samples/v5/javascript/customText.js) [JavaScript samples - Multi label classification](https://github.com/Azure/azure-sdk-for-js/blob/%40azure/ai-text-analytics_6.0.0-beta.1/sdk/textanalytics/ai-text-analytics/samples/v5/javascript/customText.js) |
-|Python (Runtime)| [Python documentation](/python/api/azure-ai-textanalytics/azure.ai.textanalytics?view=azure-python-preview&preserve-view=true) | [Python samples - Single label classification](https://github.com/Azure/azure-sdk-for-python/blob/main/sdk/textanalytics/azure-ai-textanalytics/samples/sample_single_label_classify.py) [Python samples - Multi label classification](https://github.com/Azure/azure-sdk-for-python/blob/main/sdk/textanalytics/azure-ai-textanalytics/samples/sample_multi_label_classify.py) |
-
-## Responsible AI
-
-An AI system includes not only the technology, but also the people who will use it, the people who will be affected by it, and the environment in which it is deployed. Read the [transparency note for custom text classification](/legal/cognitive-services/language-service/ctc-transparency-note?context=/azure/cognitive-services/language-service/context/context) to learn about responsible AI use and deployment in your systems. You can also see the following articles for more information:
--
-## Next steps
-
-* Use the [quickstart article](quickstart.md) to start using custom text classification.
-
-* As you go through the project development lifecycle, review the [glossary](glossary.md) to learn more about the terms used throughout the documentation for this feature.
-
-* Remember to view the [service limits](service-limits.md) for information such as regional availability.
cognitive-services Quickstart https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/custom-text-classification/quickstart.md
- Title: Quickstart - Custom text classification-
-description: Quickly start building an AI model to identify and apply labels (classify) unstructured text.
------ Previously updated : 01/25/2023--
-zone_pivot_groups: usage-custom-language-features
--
-# Quickstart: Custom text classification
-
-Use this article to get started with creating a custom text classification project where you can train custom models for text classification. A model is artificial intelligence software that's trained to do a certain task. For this system, the models classify text, and are trained by learning from tagged data.
-
-Custom text classification supports two types of projects:
-
-* **Single label classification** - you can assign a single class for each document in your dataset. For example, a movie script could only be classified as "Romance" or "Comedy".
-* **Multi label classification** - you can assign multiple classes for each document in your dataset. For example, a movie script could be classified as "Comedy", "Romance", or both.
-
-In this quickstart, you can use the provided sample datasets to build a multi label classification project where you classify movie scripts into one or more categories, or a single label classification project where you classify abstracts of scientific papers into one of the defined domains.
--------
-## Next steps
-
-After you've created a custom text classification model, you can:
-* [Use the runtime API to classify text](how-to/call-api.md)
-
-When you start to create your own custom text classification projects, use the how-to articles to learn more about developing your model in greater detail:
-
-* [Data selection and schema design](how-to/design-schema.md)
-* [Tag data](how-to/tag-data.md)
-* [Train a model](how-to/train-model.md)
-* [View model evaluation](how-to/view-model-evaluation.md)
cognitive-services Service Limits https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/custom-text-classification/service-limits.md
- Title: Custom text classification limits-
-description: Learn about the data and rate limits when using custom text classification.
--- Previously updated : 01/25/2022-------
-# Custom text classification limits
-
-Use this article to learn about the data and service limits when using custom text classification.
-
-## Language resource limits
-
-* Your Language resource has to be created in one of the supported regions and pricing tiers listed below.
-
-* You can only connect 1 storage account per resource. This process is irreversible. If you connect a storage account to your resource, you cannot unlink it later. Learn more about [connecting a storage account](how-to/create-project.md#create-language-resource-and-connect-storage-account)
-
-* You can have up to 500 projects per resource.
-
-* Project names have to be unique within the same resource across all custom features.
-
-## Pricing tiers
-
-Custom text classification is available with the following pricing tiers:
-
-|Tier|Description|Limit|
-|--|--|--|
-|F0 |Free tier|You are only allowed one F0 tier Language resource per subscription.|
-|S |Paid tier|You can have unlimited Language S tier resources per subscription. |
--
-See [pricing](https://azure.microsoft.com/pricing/details/cognitive-services/language-service/) for more information.
-
-## Regional availability
-
-Custom text classification is only available in some Azure regions. Some regions are available for **both authoring and prediction**, while other regions are **prediction only**. Language resources in authoring regions allow you to create, edit, train, and deploy your projects. Language resources in prediction regions allow you to get [predictions from a deployment](../concepts/custom-features/multi-region-deployment.md).
-
-| Region | Authoring | Prediction |
-|--|--|-|
-| Australia East | ✓ | ✓ |
-| Brazil South | | ✓ |
-| Canada Central | | ✓ |
-| Central India | ✓ | ✓ |
-| Central US | | ✓ |
-| East Asia | | ✓ |
-| East US | ✓ | ✓ |
-| East US 2 | ✓ | ✓ |
-| France Central | | ✓ |
-| Japan East | | ✓ |
-| Japan West | | ✓ |
-| Jio India West | | ✓ |
-| Korea Central | | ✓ |
-| North Central US | | ✓ |
-| North Europe | ✓ | ✓ |
-| Norway East | | ✓ |
-| Qatar Central | | ✓ |
-| South Africa North | | ✓ |
-| South Central US | ✓ | ✓ |
-| Southeast Asia | | ✓ |
-| Sweden Central | | ✓ |
-| Switzerland North | ✓ | ✓ |
-| UAE North | | ✓ |
-| UK South | ✓ | ✓ |
-| West Central US | | ✓ |
-| West Europe | ✓ | ✓ |
-| West US | | ✓ |
-| West US 2 | ✓ | ✓ |
-| West US 3 | ✓ | ✓ |
-
-## API limits
-
-|Item|Request type| Maximum limit|
-|:-|:-|:-|
-|Authoring API|POST|10 per minute|
-|Authoring API|GET|100 per minute|
-|Prediction API|GET/POST|1,000 per minute|
-|Document size|--|125,000 characters. You can send up to 25 documents as long as they collectively do not exceed 125,000 characters|
-
-> [!TIP]
-> If you need to send larger files than the limit allows, you can break the text into smaller chunks of text before sending them to the API. You can use the [chunk command from CLUtils](https://github.com/microsoft/CognitiveServicesLanguageUtilities/blob/main/CustomTextAnalytics.CLUtils/Solution/CogSLanguageUtilities.ViewLayer.CliCommands/Commands/ChunkCommand/README.md) for this process.
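-
-The CLUtils chunk command is the documented option. As a plain-Python alternative sketch, the function below breaks a long string into pieces under a chosen character budget; the 25,000-character budget is an arbitrary example, chosen only to stay well under the 125,000-character request limit:
-
-```python
-def chunk_text(text, max_chars=25_000):
-    """Split text into chunks of at most max_chars, breaking on whitespace where possible."""
-    chunks = []
-    while text:
-        if len(text) <= max_chars:
-            chunks.append(text)
-            break
-        cut = text.rfind(" ", 0, max_chars)
-        if cut <= 0:
-            cut = max_chars  # no space found; hard cut
-        chunks.append(text[:cut])
-        text = text[cut:].lstrip()
-    return chunks
-```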
-
-## Quota limits
-
-|Pricing tier |Item |Limit |
-| | | |
-|F|Training time| 1 hour per month|
-|S|Training time| Unlimited, Pay as you go |
-|F|Prediction Calls| 5,000 text records per month |
-|S|Prediction Calls| Unlimited, Pay as you go |
-
-## Document limits
-
-* You can only use `.txt` files. If your data is in another format, you can use the [CLUtils parse command](https://github.com/microsoft/CognitiveServicesLanguageUtilities/blob/main/CustomTextAnalytics.CLUtils/Solution/CogSLanguageUtilities.ViewLayer.CliCommands/Commands/ParseCommand/README.md) to open your document and extract the text.
-
-* All files uploaded in your container must contain data. Empty files are not allowed for training.
-
-* All files should be available at the root of your container.
-
-## Data limits
-
-The following limits are observed for custom text classification.
-
-|Item|Lower Limit| Upper Limit |
-| | | |
-|Documents count | 10 | 100,000 |
-|Document length in characters | 1 | 128,000 characters; approximately 28,000 words or 56 pages. |
-|Count of classes | 1 | 200 |
-|Count of trained models per project| 0 | 10 |
-|Count of deployments per project| 0 | 10 |
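-
-As an illustrative pre-check (not an official tool), the following Python sketch scans a local folder of `.txt` files against the document and data limits above before you upload; the folder name is a placeholder:
-
-```python
-from pathlib import Path
-
-MAX_DOC_CHARS = 128_000
-MIN_DOCS, MAX_DOCS = 10, 100_000
-
-def check_dataset(folder):
-    """Return a list of limit violations found in a local folder of .txt files."""
-    paths = list(Path(folder).glob("*.txt"))
-    problems = []
-    if not MIN_DOCS <= len(paths) <= MAX_DOCS:
-        problems.append(f"document count {len(paths)} is outside {MIN_DOCS}-{MAX_DOCS}")
-    for path in paths:
-        text = path.read_text(encoding="utf-8")
-        if not text.strip():
-            problems.append(f"{path.name}: empty files aren't allowed")
-        elif len(text) > MAX_DOC_CHARS:
-            problems.append(f"{path.name}: {len(text)} characters exceeds {MAX_DOC_CHARS}")
-    return problems
-
-print(check_dataset("training-data"))
-```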
-
-## Naming limits
-
-| Item | Limits |
-|--|--|
-| Project name | You can only use letters `(a-z, A-Z)`, numbers `(0-9)`, and symbols `_ . -`, with no spaces. Maximum allowed length is 50 characters. |
-| Model name | You can only use letters `(a-z, A-Z)`, numbers `(0-9)` and symbols `_ . -`. Maximum allowed length is 50 characters. |
-| Deployment name | You can only use letters `(a-z, A-Z)`, numbers `(0-9)` and symbols `_ . -`. Maximum allowed length is 50 characters. |
-| Class name| You can only use letters `(a-z, A-Z)`, numbers `(0-9)` and all symbols except ":", `$ & % * ( ) + ~ # / ?`. Maximum allowed length is 50 characters.|
-| Document name | You can only use letters `(a-z, A-Z)`, and numbers `(0-9)` with no spaces. |
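-
-For example, the project, model, and deployment name rules from the table map to a simple regular expression. This is just an illustrative local check, not something the service provides:
-
-```python
-import re
-
-# Letters, numbers, and the symbols _ . - with no spaces; 1-50 characters.
-NAME_PATTERN = re.compile(r"^[A-Za-z0-9_.\-]{1,50}$")
-
-def is_valid_name(name: str) -> bool:
-    """Check a project, model, or deployment name against the table above."""
-    return bool(NAME_PATTERN.match(name))
-
-print(is_valid_name("my-classifier_v1"))   # True
-print(is_valid_name("my classifier"))      # False (spaces aren't allowed)
-```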
-
-## Next steps
-
-* [Custom text classification overview](overview.md)
cognitive-services Triage Email https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/custom-text-classification/tutorials/triage-email.md
- Title: Triage incoming emails with Power Automate-
-description: Learn how to use custom text classification to categorize and triage incoming emails with Power Automate
------ Previously updated : 01/27/2023---
-# Tutorial: Triage incoming emails with Power Automate
-
-In this tutorial you will categorize and triage incoming email using custom text classification. Using this [Power Automate](/power-automate/getting-started) flow, when a new email is received, its contents will have a classification applied, and depending on the result, a message will be sent to a designated channel on [Microsoft Teams](https://www.microsoft.com/microsoft-teams).
--
-## Prerequisites
-
-* Azure subscription - [Create one for free](https://azure.microsoft.com/free/cognitive-services)
-* <a href="https://portal.azure.com/#create/Microsoft.CognitiveServicesTextAnalytics" title="Create a Language resource" target="_blank">A Language resource </a>
- * A trained [custom text classification](../overview.md) model.
- * You will need the key and endpoint from your Language resource to authenticate your Power Automate flow.
-* A successfully created and deployed [single text classification custom model](../quickstart.md)
--
-## Create a Power Automate flow
-
-1. [Sign in to power automate](https://make.powerautomate.com/)
-
-2. From the left side menu, select **My flows** and create an **Automated cloud flow**.
-
- :::image type="content" source="../media/create-flow.png" alt-text="A screenshot of the flow creation screen." lightbox="../media/create-flow.png":::
-
-3. Name your flow `EmailTriage`. Below **Choose your flow's triggers**, search for *email* and select **When a new email arrives**. Then select **Create**.
-
- :::image type="content" source="../media/email-flow.png" alt-text="A screenshot of the email flow triggers." lightbox="../media/email-flow.png":::
-
-4. Add the right connection to your email account. This connection will be used to access the email content.
-
-5. To add a Language service connector, search for *Azure Language*.
-
- :::image type="content" source="../media/language-connector.png" alt-text="A screenshot of available Azure Language service connectors." lightbox="../media/language-connector.png":::
-
-6. Search for *CustomSingleLabelClassification*.
-
- :::image type="content" source="../media/single-classification.png" alt-text="A screenshot of Classification connector." lightbox="../media/single-classification.png":::
-
-7. Start by adding the right connection to your connector. This connection will be used to access the classification project.
-
-8. In the documents ID field, add **1**.
-
-9. In the documents text field, add **body** from **dynamic content**.
-
-10. Fill in the project name and deployment name of your deployed custom text classification model.
-
- :::image type="content" source="../media/classification.png" alt-text="A screenshot project details." lightbox="../media/classification.png":::
-
-11. Add a condition to send a Microsoft Teams message to the right team by:
- 1. Select **results** from **dynamic content**, and add the condition. For this tutorial, we are looking for `Computer_science` related emails. In the **Yes** condition, choose your desired option to notify a team channel. In the **No** condition, you can add additional conditions to perform alternative actions.
-
- :::image type="content" source="../media/email-triage.png" alt-text="A screenshot of email flow." lightbox="../media/email-triage.png":::
--
-## Next steps
-
-* [Use the Language service with Power Automate](../../tutorials/power-automate.md)
-* [Available Language service connectors](/connectors/cognitiveservicestextanalytics)
cognitive-services Azure Machine Learning Labeling https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/custom/azure-machine-learning-labeling.md
- Title: Use Azure Machine Learning labeling in Language Studio
-description: Learn how to label your data in Azure Machine Learning, and import it for use in the Language service.
------- Previously updated : 04/17/2023---
-# Use Azure Machine Learning labeling in Language Studio
-
-Labeling data is an important part of preparing your dataset. Using the labeling experience in Azure Machine Learning, you can experience easier collaboration, more flexibility, and the ability to [outsource labeling tasks](/azure/machine-learning/how-to-outsource-data-labeling) to external labeling vendors from the [Azure Market Place](https://azuremarketplace.microsoft.com/marketplace/consulting-services?search=AzureMLVend). You can use Azure Machine Learning labeling for:
-* [custom text classification](../custom-text-classification/overview.md)
-* [custom named entity recognition](../custom-named-entity-recognition/overview.md)
-
-## Prerequisites
-
-Before you can connect your labeling project to Azure Machine Learning, you need:
-* A successfully created Language Studio project with a configured Azure blob storage account.
-* Text data that has been uploaded to your storage account.
-* At least:
- * One entity label for custom named entity recognition, or
- * Two class labels for custom text classification projects.
-* An [Azure Machine Learning workspace](/azure/machine-learning/how-to-manage-workspace) that has been [connected](/azure/machine-learning/v1/how-to-connect-data-ui?tabs=credential#create-datastores) to the same Azure blob storage account that your Language Studio account is using.
-
-## Limitations
-
-* Connecting your labeling project to Azure Machine Learning is a one-to-one connection. If you disconnect your project, you will not be able to connect your project back to the same Azure Machine Learning project.
-* You can't label in the Language Studio and Azure Machine Learning simultaneously. The labeling experience is enabled in one studio at a time.
-* The testing and training files in the labeling experience you switch away from will be ignored when training your model.
-* Only Azure Machine Learning's JSONL file format can be imported into Language Studio.
-* Projects with the multi-lingual option enabled can't be connected to Azure Machine Learning, and not all languages are supported.
- * Language support is provided by the Azure Machine Learning [TextDNNLanguages Class](/python/api/azureml-automl-core/azureml.automl.core.constants.textdnnlanguages?view=azure-ml-py&preserve-view=true&branch=main#azureml-automl-core-constants-textdnnlanguages-supported).
-* The Azure Machine Learning workspace you're connecting to must be assigned to the same Azure Storage account that Language Studio is connected to. Be sure that the Azure Machine Learning workspace has the storage blob data reader permission on the storage account. The workspace needs to have been linked to the storage account during the creation process in the [Azure portal](https://ms.portal.azure.com/#create/Microsoft.MachineLearningServices).
-* Switching between the two labeling experiences isn't instantaneous. It may take time to successfully complete the operation.
-
-## Import your Azure Machine Learning labels into Language Studio
-
-Language Studio supports the JSONL file format used by Azure Machine Learning. If you've been labeling data on Azure Machine Learning, you can import your up-to-date labels into a new custom project to utilize the features of both studios.
-
-1. Start by creating a new project for custom text classification or custom named entity recognition.
-
- 1. In the **Create a project** screen that appears, follow the prompts to connect your storage account, and enter the basic information about your project. Be sure that the Azure resource you're using doesn't have another storage account already connected.
-
- 1. In the **Choose container** section, choose the option indicating that you already have a correctly formatted file. Then select your most recent Azure Machine Learning labels file.
-
- :::image type="content" source="./media/select-label-file.png" alt-text="A screenshot showing the selection for a label file in Language Studio." lightbox="./media/select-label-file.png":::
-
-## Connect to Azure Machine Learning
-
-Before you connect to Azure Machine Learning, you need an Azure Machine Learning account with a pricing plan that can accommodate the compute needs of your project. See the [prerequisites section](#prerequisites) to make sure that you have successfully completed all the requirements to start connecting your Language Studio project to Azure Machine Learning.
-
-1. Use the [Azure portal](https://portal.azure.com/) to navigate to the Azure Blob Storage account connected to your language resource.
-2. Ensure that the *Storage Blob Data Contributor* role is assigned to your AML workspace within the role assignments for your Azure Blob Storage account.
-3. Navigate to your project in [Language Studio](https://language.azure.com/). From the left navigation menu of your project, select **Data labeling**.
-4. Select **use Azure Machine Learning to label** in either the **Data labeling** description, or under the **Activity pane**.
-
- :::image type="content" source="./media/azure-machine-learning-selection.png" alt-text="A screenshot showing the location of the Azure Machine Learning link." lightbox="./media/azure-machine-learning-selection.png":::
-
-1. Select **Connect to Azure Machine Learning** to start the connection process.
-
- :::image type="content" source="./media/activity-pane.png" alt-text="A screenshot showing the Azure Machine Learning connection button in Language Studio." lightbox="./media/activity-pane.png":::
-
-1. In the window that appears, follow the prompts. Select the Azure Machine Learning workspace you've created previously under the same Azure subscription. Enter a name for the new Azure Machine Learning project that will be created to enable labeling in Azure Machine Learning.
-
- >[!TIP]
- > Make sure your workspace is linked to the same Azure Blob Storage account and Language resource before continuing. You can create a new workspace and link to your storage account through the [Azure portal](https://ms.portal.azure.com/#create/Microsoft.MachineLearningServices). Ensure that the storage account is properly linked to the workspace.
-
-1. (Optional) Turn on the vendor labeling toggle to use labeling vendor companies. Before choosing the vendor labeling companies, contact the vendor labeling companies on the [Azure Marketplace](https://azuremarketplace.microsoft.com/marketplace/consulting-services?search=AzureMLVend) to finalize a contract with them. For more information about working with vendor companies, see [How to outsource data labeling](/azure/machine-learning/how-to-outsource-data-labeling).
-
- You can also leave labeling instructions for the human labelers that will help you in the labeling process. These instructions can help them understand the task by leaving clear definitions of the labels and including examples for better results.
-
-1. Review the settings for your connection to Azure Machine Learning and make changes if needed.
-
- > [!IMPORTANT]
- > Finalizing the connection **is permanent**. Attempting to disconnect your established connection at any point in time will permanently disable your Language Studio project from connecting to the same Azure Machine Learning project.
-
-1. After the connection has been initiated, your ability to label data in Language Studio will be disabled for a few minutes to prepare the new connection.
-
-## Switch to labeling with Azure Machine Learning from Language Studio
-
-Once the connection has been established, you can switch to Azure Machine Learning through the **Activity pane** in Language Studio at any time.
--
-When you switch, your ability to label data in Language Studio will be disabled, and you will be able to label data in Azure Machine Learning. You can switch back to labeling in Language Studio at any time through Azure Machine Learning.
-
-For information on how to label the text, see [Azure Machine Learning how to label](/azure/machine-learning/how-to-label-data#label-text). For information about managing and tracking the text labeling project, see [Azure Machine Learning set up and manage a text labeling project](/azure/machine-learning/how-to-create-text-labeling-projects).
-
-## Train your model using labels from Azure Machine Learning
-
-When you switch to labeling using Azure Machine Learning, you can still train, evaluate, and deploy your model in Language Studio. To train your model using updated labels from Azure Machine Learning:
-
-1. Select **Training jobs** from the navigation menu on the left of the Language studio screen for your project.
-
-1. Select **Import latest labels from Azure Machine Learning** from the **Choose label origin** section in the training page. This synchronizes the labels from Azure Machine Learning before starting the training job.
-
- :::image type="content" source="./media/azure-machine-learning-label-origin.png" alt-text="A screenshot showing the selector for using labels from Azure Machine Learning." lightbox="./media/azure-machine-learning-label-origin.png":::
-
-## Switch to labeling with Language Studio from Azure Machine Learning
-
-After you've switched to labeling with Azure Machine Learning, you can switch back to labeling with your Language Studio project at any time.
-
-> [!NOTE]
-> * Only users with the [correct roles](/azure/machine-learning/how-to-add-users) in Azure Machine Learning have the ability to switch labeling.
-> * When you switch to using Language Studio, labeling on Azure Machine Learning will be disabled.
-
-To switch back to labeling with Language Studio:
-
-1. Navigate to your project in Azure Machine Learning and select **Data labeling** from the left navigation menu.
-1. Select the **Language Studio** tab and select **Switch to Language Studio**.
-
- :::image type="content" source="./media/azure-machine-learning-studio.png" alt-text="A screenshot showing the selector for using labels from Language Studio." lightbox="./media/azure-machine-learning-studio.png":::
-
-3. The process takes a few minutes to complete, and your ability to label in Azure Machine Learning will be disabled until it's switched back from Language Studio.
-
-## Disconnecting from Azure Machine Learning
-
-Disconnecting your project from Azure Machine Learning is a permanent, irreversible process and can't be undone. You will no longer be able to access your labels in Azure Machine Learning, and you won't be able to reconnect the Azure Machine Learning project to any Language Studio project in the future. To disconnect from Azure Machine Learning:
-
-1. Ensure that any updated labels you want to maintain are synchronized with Azure Machine Learning by switching the labeling experience back to the Language Studio.
-1. Select **Project settings** from the navigation menu on the left in Language Studio.
-1. Select the **Disconnect from Azure Machine Learning** button from the **Manage Azure Machine Learning connections** section.
-
-## Next steps
-Learn more about labeling your data for [Custom Text Classification](../custom-text-classification/how-to/tag-data.md) and [Custom Named Entity Recognition](../custom-named-entity-recognition/how-to/tag-data.md).
-
cognitive-services Call Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/entity-linking/how-to/call-api.md
- Title: How to call the entity linking API-
-description: Learn how to identify and link entities found in text with the entity linking API.
------ Previously updated : 11/02/2021----
-# How to use entity linking
-
-The entity linking feature can be used to identify and disambiguate the identity of an entity found in text (for example, determining whether an occurrence of the word "*Mars*" refers to the planet, or to the Roman god of war). It will return the entities in the text with links to [Wikipedia](https://www.wikipedia.org/) as a knowledge base.
--
-## Development options
--
-## Determine how to process the data (optional)
-
-### Specify the entity linking model
-
-By default, entity linking will use the latest available AI model on your text. You can also configure your API requests to use a specific [model version](../../concepts/model-lifecycle.md).
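-
-For example, you can append the `model-version` query parameter to pin a request to a specific model. The following is a minimal sketch rather than an official sample: it assumes the v3.1 REST path for entity linking, and the subdomain, key, and sample text are placeholders.
-
-```bash
-# Minimal sketch (assumptions: v3.1 REST path, placeholder subdomain and key).
-# Omit model-version, or set it to "latest", to use the newest available model.
-curl -X POST "https://<your-custom-subdomain>.cognitiveservices.azure.com/text/analytics/v3.1/entities/linking?model-version=2020-02-01" \
-  -H "Ocp-Apim-Subscription-Key: YOUR_API_KEY_HERE" \
-  -H "Content-Type: application/json" \
-  -d '{ "documents": [ { "id": "1", "language": "en", "text": "Mars is the fourth planet from the Sun." } ] }'
-```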
-
-### Input languages
-
-When you submit documents to be processed by entity linking, you can specify which of [the supported languages](../language-support.md) they're written in. If you don't specify a language, entity linking will default to English. Due to [multilingual and emoji support](../../concepts/multilingual-emoji-support.md), the response may contain text offsets.
-
-## Submitting data
-
-Entity linking produces a higher-quality result when you give it smaller amounts of text to work on. This is the opposite of some features, like key phrase extraction, which performs better on larger blocks of text. To get the best results from both operations, consider restructuring the inputs accordingly.
-
-To send an API request, you will need a Language resource endpoint and key.
-
-> [!NOTE]
-> You can find the key and endpoint for your Language resource on the Azure portal. They will be located on the resource's **Key and endpoint** page, under **resource management**.
-
-Analysis is performed upon receipt of the request. Using entity linking synchronously is stateless. No data is stored in your account, and results are returned immediately in the response.
--
-### Getting entity linking results
-
-You can stream the results to an application, or save the output to a file on the local system.
-
-## Service and data limits
--
-## See also
-
-* [Entity linking overview](../overview.md)
cognitive-services Language Support https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/entity-linking/language-support.md
- Title: Language support for entity linking-
-description: A list of natural languages supported by the entity linking API
------ Previously updated : 11/02/2021----
-# Entity linking language support
-
-> [!NOTE]
-> Languages are added as new model versions are released for specific features. The current model version for Entity Linking is `2020-02-01`.
-
-| Language | Language code | v3 support | Starting with v3 model version: | Notes |
-|:|:-:|:-:|:--:|:--:|
-| English | `en` | ✓ | 2019-10-01 | |
-| Spanish | `es` | ✓ | 2019-10-01 | |
-
-## Next steps
-
-[Entity linking overview](overview.md)
cognitive-services Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/entity-linking/overview.md
- Title: What is entity linking in Azure Cognitive Service for Language?-
-description: An overview of entity linking in Azure Cognitive Services, which helps you extract entities from text, and provides links to an online knowledge base.
------ Previously updated : 01/10/2023----
-# What is entity linking in Azure Cognitive Service for Language?
-
-Entity linking is one of the features offered by [Azure Cognitive Service for Language](../overview.md), a collection of machine learning and AI algorithms in the cloud for developing intelligent applications that involve written language. Entity linking identifies and disambiguates the identity of entities found in text. For example, in the sentence "*We went to Seattle last week.*", the word "*Seattle*" would be identified, with a link to more information on Wikipedia.
-
-This documentation contains the following types of articles:
-
-* [**Quickstarts**](quickstart.md) are getting-started instructions to guide you through making requests to the service.
-* [**How-to guides**](how-to/call-api.md) contain instructions for using the service in more specific ways.
-
-## Get started with entity linking
---
-## Responsible AI
-
-An AI system includes not only the technology, but also the people who will use it, the people who will be affected by it, and the environment in which it is deployed. Read the [transparency note for entity linking](/legal/cognitive-services/language-service/transparency-note?context=/azure/cognitive-services/language-service/context/context) to learn about responsible AI use and deployment in your systems. You can also see the following articles for more information:
--
-## Next steps
-
-There are two ways to get started using the entity linking feature:
-* [Language Studio](../language-studio.md), which is a web-based platform that enables you to try several Azure Cognitive Service for Language features without needing to write code.
-* The [quickstart article](quickstart.md) for instructions on making requests to the service using the REST API and client library SDK.
cognitive-services Quickstart https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/entity-linking/quickstart.md
- Title: "Quickstart: Entity linking using the client library and REST API"-
-description: 'Use this quickstart to perform Entity Linking, using C#, Python, Java, JavaScript, and the REST API.'
------ Previously updated : 02/17/2023--
-keywords: text mining, entity linking
-zone_pivot_groups: programming-languages-text-analytics
--
-# Quickstart: Entity Linking using the client library and REST API
---------------
cognitive-services Call Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/key-phrase-extraction/how-to/call-api.md
- Title: How to call the Key Phrase Extraction API-
-description: How to extract key phrases by using the Key Phrase Extraction API.
------ Previously updated : 01/10/2023----
-# How to use key phrase extraction
-
-The key phrase extraction feature can evaluate unstructured text, and for each document, return a list of key phrases.
-
-This feature is useful if you need to quickly identify the main points in a collection of documents. For example, given input text "*The food was delicious and the staff was wonderful*", the service returns the main topics: "*food*" and "*wonderful staff*".
-
-> [!TIP]
-> If you want to start using this feature, you can follow the [quickstart article](../quickstart.md) to get started. You can also make example requests using [Language Studio](../../language-studio.md) without needing to write code.
-
-## Development options
--
-## Determine how to process the data (optional)
-
-### Specify the key phrase extraction model
-
-By default, key phrase extraction will use the latest available AI model on your text. You can also configure your API requests to use a specific [model version](../../concepts/model-lifecycle.md).
-
-### Input languages
-
-When you submit documents to be processed by key phrase extraction, you can specify which of [the supported languages](../language-support.md) they're written in. If you don't specify a language, key phrase extraction will default to English. The API may return offsets in the response to support different [multilingual and emoji encodings](../../concepts/multilingual-emoji-support.md).
-
-## Submitting data
-
-Key phrase extraction works best when you give it bigger amounts of text to work on. This is the opposite of sentiment analysis, which performs better on smaller amounts of text. To get the best results from both operations, consider restructuring the inputs accordingly.
-
-To send an API request, you will need your Language resource endpoint and key.
-
-> [!NOTE]
-> You can find the key and endpoint for your Language resource on the Azure portal. They will be located on the resource's **Key and endpoint** page, under **resource management**.
-
-Analysis is performed upon receipt of the request. Using the key phrase extraction feature synchronously is stateless. No data is stored in your account, and results are returned immediately in the response.
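-
-As a minimal illustration of a synchronous request (a sketch only; it assumes the v3.1 REST path, and the subdomain and key are placeholders):
-
-```bash
-# Minimal sketch: extract key phrases from a single English document.
-# <your-custom-subdomain> and YOUR_API_KEY_HERE are placeholders - substitute your own values.
-curl -X POST "https://<your-custom-subdomain>.cognitiveservices.azure.com/text/analytics/v3.1/keyPhrases" \
-  -H "Ocp-Apim-Subscription-Key: YOUR_API_KEY_HERE" \
-  -H "Content-Type: application/json" \
-  -d '{ "documents": [ { "id": "1", "language": "en", "text": "The food was delicious and the staff was wonderful." } ] }'
-```
-
-The response contains the list of key phrases detected for each submitted document.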
---
-## Getting key phrase extraction results
-
-When you receive results from the API, the order of the returned key phrases is determined internally, by the model. You can stream the results to an application, or save the output to a file on the local system.
-
-## Service and data limits
--
-## Next steps
-
-[Key Phrase Extraction overview](../overview.md)
cognitive-services Use Containers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/key-phrase-extraction/how-to/use-containers.md
- Title: Use Docker containers for Key Phrase Extraction on-premises-
-description: Learn how to use Docker containers for Key Phrase Extraction on-premises.
------ Previously updated : 04/11/2023--
-keywords: on-premises, Docker, container, natural language processing
--
-# Install and run Key Phrase Extraction containers
--
-Containers enable you to host the Key Phrase Extraction API on your own infrastructure. If you have security or data governance requirements that can't be fulfilled by calling Key Phrase Extraction remotely, then containers might be a good option.
--
-> [!NOTE]
-> * The free account is limited to 5,000 text records per month and only the **Free** and **Standard** [pricing tiers](https://azure.microsoft.com/pricing/details/cognitive-services/text-analytics) are valid for containers. For more information on transaction request rates, see [Data and service limits](../../concepts/data-limits.md).
-
-Containers enable you to run the Key Phrase Extraction APIs in your own environment and are great for your specific security and data governance requirements. The Key Phrase Extraction containers provide advanced natural language processing over raw text, and include three main functions: sentiment analysis, Key Phrase Extraction, and language detection.
-
-## Prerequisites
-
-* If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/cognitive-services/).
-* [Docker](https://docs.docker.com/) installed on a host computer. Docker must be configured to allow the containers to connect with and send billing data to Azure.
- * On Windows, Docker must also be configured to support Linux containers.
- * You should have a basic understanding of [Docker concepts](https://docs.docker.com/get-started/overview/).
-* A <a href="https://portal.azure.com/#create/Microsoft.CognitiveServicesTextAnalytics" title="Create a Language resource" target="_blank">Language resource </a> with the free (F0) or standard (S) [pricing tier](https://azure.microsoft.com/pricing/details/cognitive-services/text-analytics/).
--
-## Host computer requirements and recommendations
--
-The following table describes the minimum and recommended specifications for the available Key Phrase Extraction containers. Each CPU core must be at least 2.6 gigahertz (GHz) or faster. The allowable Transactions Per Second (TPS) are also listed.
-
-| | Minimum host specs | Recommended host specs | Minimum TPS | Maximum TPS|
-|||-|--|--|
-| **Key Phrase Extraction** | 1 core, 2GB memory | 1 core, 4GB memory |15 | 30|
-
-CPU core and memory correspond to the `--cpus` and `--memory` settings, which are used as part of the `docker run` command.
-
-## Get the container image with `docker pull`
-
-The key phrase extraction container image can be found on the `mcr.microsoft.com` container registry syndicate. It resides within the `azure-cognitive-services/textanalytics/` repository and is named `keyphrase`. The fully qualified container image name is `mcr.microsoft.com/azure-cognitive-services/textanalytics/keyphrase`.
-
-To use the latest version of the container, you can use the `latest` tag. You can also find a full list of [tags on the MCR](https://mcr.microsoft.com/product/azure-cognitive-services/textanalytics/keyphrase/about).
-
-Use the [`docker pull`](https://docs.docker.com/engine/reference/commandline/pull/) command to download a container image from Microsoft Container Registry.
-
-```
-docker pull mcr.microsoft.com/azure-cognitive-services/textanalytics/keyphrase:latest
-```
--
-## Run the container with `docker run`
-
-Once the container is on the host computer, use the [docker run](https://docs.docker.com/engine/reference/commandline/run/) command to run the containers. The container will continue to run until you stop it.
-
-> [!IMPORTANT]
-> * The docker commands in the following sections use the backslash, `\`, as a line continuation character. Replace or remove it based on your host operating system's requirements.
-> * The `Eula`, `Billing`, and `ApiKey` options must be specified to run the container; otherwise, the container won't start. For more information, see [Billing](#billing).
-> * The sentiment analysis and language detection containers use v3 of the API, and are generally available. The Key Phrase Extraction container uses v2 of the API, and is in preview.
-
-To run the *Key Phrase Extraction* container, execute the following `docker run` command. Replace the placeholders below with your own values:
-
-| Placeholder | Value | Format or example |
-|-|-||
-| **{API_KEY}** | The key for your Key Phrase Extraction resource. You can find it on your resource's **Key and endpoint** page, on the Azure portal. |`xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx`|
-| **{ENDPOINT_URI}** | The endpoint for accessing the Key Phrase Extraction API. You can find it on your resource's **Key and endpoint** page, on the Azure portal. | `https://<your-custom-subdomain>.cognitiveservices.azure.com` |
--
-```bash
-docker run --rm -it -p 5000:5000 --memory 4g --cpus 1 \
-mcr.microsoft.com/azure-cognitive-services/textanalytics/keyphrase \
-Eula=accept \
-Billing={ENDPOINT_URI} \
-ApiKey={API_KEY}
-```
-
-This command:
-
-* Runs a *Key Phrase Extraction* container from the container image
-* Allocates one CPU core and 4 gigabytes (GB) of memory
-* Exposes TCP port 5000 and allocates a pseudo-TTY for the container
-* Automatically removes the container after it exits. The container image is still available on the host computer.
--
-## Query the container's prediction endpoint
-
-The container provides REST-based query prediction endpoint APIs.
-
-Use the host, `http://localhost:5000`, for container APIs.
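-
-For example, the standard container diagnostics endpoints can be used to verify the container before sending prediction traffic. The following is a sketch that assumes the common Cognitive Services container endpoints; check the container's own Swagger page for the exact prediction routes your image version exposes.
-
-```bash
-# Sketch: basic checks against the locally running Key Phrase Extraction container.
-curl http://localhost:5000/          # information page hosted by the container
-curl http://localhost:5000/ready     # readiness probe: reports whether the model is loaded
-curl http://localhost:5000/status    # verifies the ApiKey used for billing without running a query
-# Browse http://localhost:5000/swagger for the prediction routes and request schemas.
-```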
-
-<!-- ## Validate container is running -->
---
-## Run the container disconnected from the internet
--
-## Stop the container
--
-## Troubleshooting
-
-If you run the container with an output [mount](../../concepts/configure-containers.md#mount-settings) and logging enabled, the container generates log files that are helpful to troubleshoot issues that happen while starting or running the container.
--
-## Billing
-
-The Key Phrase Extraction containers send billing information to Azure, using a _Key Phrase Extraction_ resource on your Azure account.
-
-## Summary
-
-In this article, you learned concepts and workflow for downloading, installing, and running Key Phrase Extraction containers. In summary:
-
-* Key Phrase Extraction provides Linux containers for Docker.
-* Container images are downloaded from the Microsoft Container Registry (MCR).
-* Container images run in Docker.
-* You can use either the REST API or SDK to call operations in Key Phrase Extraction containers by specifying the host URI of the container.
-* You must specify billing information when instantiating a container.
-
-> [!IMPORTANT]
-> Cognitive Services containers are not licensed to run without being connected to Azure for metering. Customers need to enable the containers to communicate billing information with the metering service at all times. Cognitive Services containers do not send customer data (e.g. text that is being analyzed) to Microsoft.
-
-## Next steps
-
-* See [Configure containers](../../concepts/configure-containers.md) for configuration settings.
cognitive-services Language Support https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/key-phrase-extraction/language-support.md
- Title: Language support for Key Phrase Extraction-
-description: Use this article to find the natural languages supported by Key Phrase Extraction.
------ Previously updated : 07/28/2022----
-# Language support for Key Phrase Extraction
-
-Use this article to find the natural languages supported by Key Phrase Extraction.
-
-## Supported languages
-
-> [!NOTE]
-> Languages are added as new [model versions](how-to/call-api.md#specify-the-key-phrase-extraction-model) are released for specific features. The current model version for Key Phrase Extraction is `2022-07-01`.
-
-Total supported language codes: 94
-
-| Language | Language code | Starting with model version | Notes |
-|--||-|--|
-| Afrikaans      |     `af`  |                2020-07-01                 |                    |
-| Albanian     |     `sq`  |                2022-10-01                 |                    |
-| Amharic     |     `am`  |                2022-10-01                 |                    |
-| Arabic    |     `ar`  |                2022-10-01                 |                    |
-| Armenian    |     `hy`  |                2022-10-01                 |                    |
-| Assamese    |     `as`  |                2022-10-01                 |                    |
-| Azerbaijani    |     `az`  |                2022-10-01                 |                    |
-| Basque    |     `eu`  |                2022-10-01                 |                    |
-| Belarusian |     `be`  |                2022-10-01                 |                    |
-| Bengali     |     `bn`  |                2022-10-01                 |                    |
-| Bosnian    |     `bs`  |                2022-10-01                 |                    |
-| Breton    |     `br`  |                2022-10-01                 |                    |
-| Bulgarian      |     `bg`  |                2020-07-01                 |                    |
-| Burmese    |     `my`  |                2022-10-01                 |                    |
-| Catalan    |     `ca`  |                2020-07-01                 |                    |
-| Chinese-Simplified    |     `zh-hans` |                2021-06-01                 |                    |
-| Chinese-Traditional |     `zh-hant` |                2022-10-01                 |                    |
-| Croatian | `hr` | 2020-07-01 | |
-| Czech    |     `cs`  |                2022-10-01                 |                    |
-| Danish | `da` | 2019-10-01 | |
-| Dutch                 |     `nl`      |                2019-10-01                 |                    |
-| English               |     `en`      |                2019-10-01                 |                    |
-| Esperanto    |     `eo`  |                2022-10-01                 |                    |
-| Estonian              |     `et`      |                2020-07-01                 |                    |
-| Filipino    |     `fil`  |                2022-10-01                 |                    |
-| Finnish               |     `fi`      |                2019-10-01                 |                    |
-| French                |     `fr`      |                2019-10-01                 |                    |
-| Galician    |     `gl`  |                2022-10-01                 |                    |
-| Georgian    |     `ka`  |                2022-10-01                 |                    |
-| German                |     `de`      |                2019-10-01                 |                    |
-| Greek    |     `el`  |                2020-07-01                 |                    |
-| Gujarati    |     `gu`  |                2022-10-01                 |                    |
-| Hausa      |     `ha`  |                2022-10-01                 |                    |
-| Hebrew    |     `he`  |                2022-10-01                 |                    |
-| Hindi      |     `hi`  |                2022-10-01                 |                    |
-| Hungarian    |     `hu`  |                2020-07-01                 |                    |
-| Indonesian            |     `id`      |                2020-07-01                 |                    |
-| Irish            |     `ga`      |                2022-10-01                 |                    |
-| Italian               |     `it`      |                2019-10-01                 |                    |
-| Japanese              |     `ja`      |                2019-10-01                 |                    |
-| Javanese            |     `jv`      |                2022-10-01                 |                    |
-| Kannada            |     `kn`      |                2022-10-01                 |                    |
-| Kazakh            |     `kk`      |                2022-10-01                 |                    |
-| Khmer            |     `km`      |                2022-10-01                 |                    |
-| Korean                |     `ko`      |                2019-10-01                 |                    |
-| Kurdish (Kurmanji)   |     `ku`      |                2022-10-01                 |                    |
-| Kyrgyz            |     `ky`      |                2022-10-01                 |                    |
-| Lao            |     `lo`      |                2022-10-01                 |                    |
-| Latin            |     `la`      |                2022-10-01                 |                    |
-| Latvian               |     `lv`      |                2020-07-01                 |                    |
-| Lithuanian            |     `lt`      |                2022-10-01                 |                    |
-| Macedonian            |     `mk`      |                2022-10-01                 |                    |
-| Malagasy            |     `mg`      |                2022-10-01                 |                    |
-| Malay            |     `ms`      |                2022-10-01                 |                    |
-| Malayalam            |     `ml`      |                2022-10-01                 |                    |
-| Marathi            |     `mr`      |                2022-10-01                 |                    |
-| Mongolian            |     `mn`      |                2022-10-01                 |                    |
-| Nepali            |     `ne`      |                2022-10-01                 |                    |
-| Norwegian (Bokmål)    |     `no`      |                2020-07-01                 | `nb` also accepted |
-| Oriya            |     `or`      |                2022-10-01                 |                    |
-| Oromo            |     `om`      |                2022-10-01                 |                    |
-| Pashto            |     `ps`      |                2022-10-01                 |                    |
-| Persian (Farsi)       |     `fa`      |                2022-10-01                 |                    |
-| Polish                |     `pl`      |                2019-10-01                 |                    |
-| Portuguese (Brazil)   |    `pt-BR`    |                2019-10-01                 |                    |
-| Portuguese (Portugal) |    `pt-PT`    |                2019-10-01                 | `pt` also accepted |
-| Punjabi            |     `pa`      |                2022-10-01                 |                    |
-| Romanian              |     `ro`      |                2020-07-01                 |                    |
-| Russian               |     `ru`      |                2019-10-01                 |                    |
-| Sanskrit            |     `sa`      |                2022-10-01                 |                    |
-| Scottish Gaelic       |     `gd`      |                2022-10-01                 |                    |
-| Serbian            |     `sr`      |                2022-10-01                 |                    |
-| Sindhi            |     `sd`      |                2022-10-01                 |                    |
-| Sinhala            |     `si`      |                2022-10-01                 |                    |
-| Slovak                |     `sk`      |                2020-07-01                 |                    |
-| Slovenian             |     `sl`      |                2020-07-01                 |                    |
-| Somali            |     `so`      |                2022-10-01                 |                    |
-| Spanish               |     `es`      |                2019-10-01                 |                    |
-| Sundanese            |     `su`      |                2022-10-01                 |                    |
-| Swahili            |     `sw`      |                2022-10-01                 |                    |
-| Swedish               |     `sv`      |                2019-10-01                 |                    |
-| Tamil            |     `ta`      |                2022-10-01                 |                    |
-| Telugu           |     `te`      |                2022-10-01                 |                    |
-| Thai            |     `th`      |                2022-10-01                 |                    |
-| Turkish              |     `tr`      |                2020-07-01                 |                    |
-| Ukrainian           |     `uk`      |                2022-10-01                 |                    |
-| Urdu            |     `ur`      |                2022-10-01                 |                    |
-| Uyghur            |     `ug`      |                2022-10-01                 |                    |
-| Uzbek            |     `uz`      |                2022-10-01                 |                    |
-| Vietnamese            |     `vi`      |                2022-10-01                 |                    |
-| Welsh            |     `cy`      |                2022-10-01                 |                    |
-| Western Frisian       |     `fy`      |                2022-10-01                 |                    |
-| Xhosa            |     `xh`      |                2022-10-01                 |                    |
-| Yiddish            |     `yi`      |                2022-10-01                 |                    |
-
-## Next steps
-
-* [How to call the API](how-to/call-api.md) for more information.
-* [Quickstart: Use the key phrase extraction client library and REST API](quickstart.md)
cognitive-services Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/key-phrase-extraction/overview.md
- Title: What is key phrase extraction in Azure Cognitive Service for Language?-
-description: An overview of key phrase extraction in Azure Cognitive Services, which helps you identify main concepts in unstructured text
------ Previously updated : 01/10/2023----
-# What is key phrase extraction in Azure Cognitive Service for Language?
-
-Key phrase extraction is one of the features offered by [Azure Cognitive Service for Language](../overview.md), a collection of machine learning and AI algorithms in the cloud for developing intelligent applications that involve written language. Use key phrase extraction to quickly identify the main concepts in text. For example, in the text "*The food was delicious and the staff were wonderful.*", key phrase extraction will return the main topics: "*food*" and "*wonderful staff*".
-
-This documentation contains the following types of articles:
-
-* [**Quickstarts**](quickstart.md) are getting-started instructions to guide you through making requests to the service.
-* [**How-to guides**](how-to/call-api.md) contain instructions for using the service in more specific or customized ways.
---
-## Get started with key phrase extraction
---
-## Responsible AI
-
-An AI system includes not only the technology, but also the people who will use it, the people who will be affected by it, and the environment in which it is deployed. Read the [transparency note for key phrase extraction](/legal/cognitive-services/language-service/transparency-note-key-phrase-extraction?context=/azure/cognitive-services/language-service/context/context) to learn about responsible AI use and deployment in your systems. You can also see the following articles for more information:
--
-## Next steps
-
-There are two ways to get started using the key phrase extraction feature:
-* [Language Studio](../language-studio.md), which is a web-based platform that enables you to try several Azure Cognitive Service for Language features without needing to write code.
-* The [quickstart article](quickstart.md) for instructions on making requests to the service using the REST API and client library SDK.
cognitive-services Quickstart https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/key-phrase-extraction/quickstart.md
- Title: "Quickstart: Use the Key Phrase Extraction client library"-
-description: Use this quickstart to start using the Key Phrase Extraction API.
------ Previously updated : 02/17/2023--
-keywords: text mining, key phrase
-zone_pivot_groups: programming-languages-text-analytics
--
-# Quickstart: using the Key Phrase Extraction client library and REST API
----------------
-## Clean up resources
-
-If you want to clean up and remove a Cognitive Services subscription, you can delete the resource or resource group. Deleting the resource group also deletes any other resources associated with it.
-
-* [Portal](../../cognitive-services-apis-create-account.md#clean-up-resources)
-* [Azure CLI](../../cognitive-services-apis-create-account-cli.md#clean-up-resources)
---
-## Next steps
-
-* [Key phrase extraction overview](overview.md)
cognitive-services Integrate Power Bi https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/key-phrase-extraction/tutorials/integrate-power-bi.md
- Title: 'Tutorial: Integrate Power BI with key phrase extraction'-
-description: Learn how to use the key phrase extraction feature to get text stored in Power BI.
------ Previously updated : 09/28/2022----
-# Tutorial: Extract key phrases from text stored in Power BI
-
-Microsoft Power BI Desktop is a free application that lets you connect to, transform, and visualize your data. Key phrase extraction, one of the features of Azure Cognitive Service for Language, provides natural language processing. Given raw unstructured text, it can extract the most important phrases, analyze sentiment, and identify well-known entities such as brands. Together, these tools can help you quickly see what your customers are talking about and how they feel about it.
-
-In this tutorial, you'll learn how to:
-
-> [!div class="checklist"]
-> * Use Power BI Desktop to import and transform data
-> * Create a custom function in Power BI Desktop
-> * Integrate Power BI Desktop with the Key Phrase Extraction feature of Azure Cognitive Service for Language
-> * Use Key Phrase Extraction to get the most important phrases from customer feedback
-> * Create a word cloud from customer feedback
-
-## Prerequisites
-- Microsoft Power BI Desktop. [Download at no charge](https://powerbi.microsoft.com/get-started/).
-- A Microsoft Azure account. [Create a free account](https://azure.microsoft.com/free/cognitive-services/) or [sign in](https://portal.azure.com/).
-- A Language resource. If you don't have one, you can [create one](https://portal.azure.com/#create/Microsoft.CognitiveServicesTextAnalytics).
-- The [Language resource key](../../../cognitive-services-apis-create-account.md#get-the-keys-for-your-resource) that was generated for you during sign-up.
-- Customer comments. You can [use our example data](https://github.com/Azure-Samples/cognitive-services-sample-data-files/blob/master/language-service/tutorials/comments.csv) or your own data. This tutorial assumes you're using our example data.
-
-## Load customer data
-
-To get started, open Power BI Desktop and load the comma-separated value (CSV) file that you downloaded as part of the [prerequisites](#prerequisites). This file represents a day's worth of hypothetical activity in a fictional small company's support forum.
-
-> [!NOTE]
-> Power BI can use data from a wide variety of web-based sources, such as SQL databases. See the [Power Query documentation](/power-query/connectors/) for more information.
-
-In the main Power BI Desktop window, select the **Home** ribbon. In the **External data** group of the ribbon, open the **Get Data** drop-down menu and select **Text/CSV**.
-
-![The Get Data button](../media/tutorials/power-bi/get-data-button.png)
-
-The Open dialog appears. Navigate to your Downloads folder, or to the folder where you downloaded the CSV file. Click on the name of the file, then the **Open** button. The CSV import dialog appears.
-
-![The CSV Import dialog](../media/tutorials/power-bi/csv-import.png)
-
-The CSV import dialog lets you verify that Power BI Desktop has correctly detected the character set, delimiter, header rows, and column types. This information is all correct, so click **Load**.
-
-To see the loaded data, click the **Data View** button on the left edge of the Power BI workspace. A table opens that contains the data, like in Microsoft Excel.
-
-![The initial view of the imported data](../media/tutorials/power-bi/initial-data-view.png)
-
-## Prepare the data
-
-You may need to transform your data in Power BI Desktop before it's ready to be processed by Key Phrase Extraction.
-
-The sample data contains a `subject` column and a `comment` column. With the Merge Columns function in Power BI Desktop, you can extract key phrases from the data in both these columns, rather than just the `comment` column.
-
-In Power BI Desktop, select the **Home** ribbon. In the **External data** group, click **Edit Queries**.
-
-![The External Data group in Home ribbon](../media/tutorials/power-bi/edit-queries.png)
-
-Select `FabrikamComments` in the **Queries** list at the left side of the window if it isn't already selected.
-
-Now select both the `subject` and `comment` columns in the table. You may need to scroll horizontally to see these columns. First click the `subject` column header, then hold down the Control key and click the `comment` column header.
-
-![Selecting fields to be merged](../media/tutorials/power-bi/select-columns.png)
-
-Select the **Transform** ribbon. In the **Text Columns** group of the ribbon, click **Merge Columns**. The Merge Columns dialog appears.
-
-![Merging fields using the Merge Columns dialog](../media/tutorials/power-bi/merge-columns.png)
-
-In the Merge Columns dialog, choose `Tab` as the separator, then click **OK.**
-
-You might also consider filtering out blank messages using the Remove Empty filter, or removing unprintable characters using the Clean transformation. If your data contains a column like the `spamscore` column in the sample file, you can skip "spam" comments using a Number Filter.
-
-## Understand the API
-
-[Key Phrase Extraction](https://westus.dev.cognitive.microsoft.com/docs/services/TextAnalytics-V3-1/operations/KeyPhrases) can process up to a thousand text documents per HTTP request. Power BI prefers to deal with records one at a time, so in this tutorial your calls to the API will include only a single document each. The Key Phrases API requires the following fields for each document being processed.
-
-| Field | Description |
-| - | - |
-| `id` | A unique identifier for this document within the request. The response also contains this field. That way, if you process more than one document, you can easily associate the extracted key phrases with the document they came from. In this tutorial, because you're processing only one document per request, you can hard-code the value of `id` to be the same for each request.|
-| `text` | The text to be processed. The value of this field comes from the `Merged` column you created in the [previous section](#prepare-the-data), which contains the combined subject line and comment text. The Key Phrases API requires this data be no longer than about 5,120 characters.|
-| `language` | The code for the natural language the document is written in. All the messages in the sample data are in English, so you can hard-code the value `en` for this field.|
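-
-Before building the custom function, it can help to see the shape of the single-document request that each call will send. The following sketch is illustrative only: the subdomain and key are placeholders, the sample text is made up, the v3.0 path matches the custom function in the next section, and the tab character in the text stands in for the separator added by the Merge Columns step.
-
-```bash
-# Sketch of the per-row request the custom function issues (hard-coded id and language).
-# <your-custom-subdomain>, YOUR_API_KEY_HERE, and the sample text are placeholders.
-curl -X POST "https://<your-custom-subdomain>.cognitiveservices.azure.com/text/analytics/v3.0/keyPhrases" \
-  -H "Ocp-Apim-Subscription-Key: YOUR_API_KEY_HERE" \
-  -H "Content-Type: application/json" \
-  -d '{ "documents": [ { "language": "en", "id": "0", "text": "Slow charging\tMy new tablet charges very slowly overnight." } ] }'
-```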
-
-## Create a custom function
-
-Now you're ready to create the custom function that will integrate Power BI and Key Phrase Extraction. The function receives the text to be processed as a parameter. It converts data to and from the required JSON format and makes the HTTP request to the Key Phrases API. The function then parses the response from the API and returns a string that contains a comma-separated list of the extracted key phrases.
-
-> [!NOTE]
-> Power BI Desktop custom functions are written in the [Power Query M formula language](/powerquery-m/power-query-m-reference), or just "M" for short. M is a functional programming language based on [F#](/dotnet/fsharp/). You don't need to be a programmer to finish this tutorial, though; the required code is included below.
-
-In Power BI Desktop, make sure you're still in the Query Editor window. If you aren't, select the **Home** ribbon, and in the **External data** group, click **Edit Queries**.
-
-Now, in the **Home** ribbon, in the **New Query** group, open the **New Source** drop-down menu and select **Blank Query**.
-
-A new query, initially named `Query1`, appears in the Queries list. Double-click this entry and name it `KeyPhrases`.
-
-Now, in the **Home** ribbon, in the **Query** group, click **Advanced Editor** to open the Advanced Editor window. Delete the code that's already in that window and paste in the following code.
-
-> [!NOTE]
-> Replace the example endpoint below (containing `<your-custom-subdomain>`) with the endpoint generated for your Language resource. You can find this endpoint by signing in to the [Azure portal](https://azure.microsoft.com/features/azure-portal/), navigating to your resource, and selecting **Key and endpoint**.
--
-```fsharp
-// Returns key phrases from the text in a comma-separated list
-(text) => let
- apikey = "YOUR_API_KEY_HERE",
- endpoint = "https://<your-custom-subdomain>.cognitiveservices.azure.com/text/analytics" & "/v3.0/keyPhrases",
- jsontext = Text.FromBinary(Json.FromValue(Text.Start(Text.Trim(text), 5000))),
- jsonbody = "{ documents: [ { language: ""en"", id: ""0"", text: " & jsontext & " } ] }",
- bytesbody = Text.ToBinary(jsonbody),
- headers = [#"Ocp-Apim-Subscription-Key" = apikey],
- bytesresp = Web.Contents(endpoint, [Headers=headers, Content=bytesbody]),
- jsonresp = Json.Document(bytesresp),
- keyphrases = Text.Lower(Text.Combine(jsonresp[documents]{0}[keyPhrases], ", "))
-in keyphrases
-```
-
-Replace `YOUR_API_KEY_HERE` with your Language resource key. You can also find this key by signing in to the [Azure portal](https://azure.microsoft.com/features/azure-portal/), navigating to your Language resource, and selecting the **Key and endpoint** page. Be sure to leave the quotation marks before and after the key. Then click **Done.**
-
-## Use the custom function
-
-Now you can use the custom function to extract the key phrases from each of the customer comments and store them in a new column in the table.
-
-In Power BI Desktop, in the Query Editor window, switch back to the `FabrikamComments` query. Select the **Add Column** ribbon. In the **General** group, click **Invoke Custom Function**.
-
-![Invoke Custom Function button](../media/tutorials/power-bi/invoke-custom-function-button.png)<br><br>
-
-The Invoke Custom Function dialog appears. In **New column name**, enter `keyphrases`. In **Function query**, select the custom function you created, `KeyPhrases`.
-
-A new field appears in the dialog, **text (optional)**. This field is asking which column we want to use to provide values for the `text` parameter of the Key Phrases API. (Remember that you already hard-coded the values for the `language` and `id` parameters.) Select `Merged` (the column you created [previously](#prepare-the-data) by merging the subject and message fields) from the drop-down menu.
-
-![Invoking a custom function](../media/tutorials/power-bi/invoke-custom-function.png)
-
-Finally, click **OK.**
-
-If everything is ready, Power BI calls your custom function once for each row in the table. It sends the queries to the Key Phrases API and adds a new column to the table to store the results. But before that happens, you may need to specify authentication and privacy settings.
-
-## Authentication and privacy
-
-After you close the Invoke Custom Function dialog, a banner may appear asking you to specify how to connect to the Key Phrases API.
-
-![credentials banner](../media/tutorials/power-bi/credentials-banner.png)
-
-Click **Edit Credentials,** make sure `Anonymous` is selected in the dialog, then click **Connect.**
-
-> [!NOTE]
-> You select `Anonymous` because Key Phrase Extraction authenticates requests using your access key, so Power BI does not need to provide credentials for the HTTP request itself.
-
-> [!div class="mx-imgBorder"]
-> ![setting authentication to anonymous](../media/tutorials/power-bi/access-web-content.png)
-
-If you see the Edit Credentials banner even after choosing anonymous access, you may have forgotten to paste your Language resource key into the code in the `KeyPhrases` [custom function](#create-a-custom-function).
-
-Next, a banner may appear asking you to provide information about your data sources' privacy.
-
-![privacy banner](../media/tutorials/power-bi/privacy-banner.png)
-
-Click **Continue** and choose `Public` for each of the data sources in the dialog. Then click **Save.**
-
-![setting data source privacy](../media/tutorials/power-bi/privacy-dialog.png)
-
-## Create the word cloud
-
-Once you have dealt with any banners that appear, click **Close & Apply** in the Home ribbon to close the Query Editor.
-
-Power BI Desktop takes a moment to make the necessary HTTP requests. For each row in the table, the new `keyphrases` column contains the key phrases detected in the text by the Key Phrases API.
-
-Now you'll use this column to generate a word cloud. To get started, click the **Report** button in the main Power BI Desktop window, to the left of the workspace.
-
-> [!NOTE]
-> Why use extracted key phrases to generate a word cloud, rather than the full text of every comment? The key phrases provide us with the *important* words from our customer comments, not just the *most common* words. Also, word sizing in the resulting cloud isn't skewed by the frequent use of a word in a relatively small number of comments.
-
-If you don't already have the Word Cloud custom visual installed, install it. In the Visualizations panel to the right of the workspace, click the three dots (**...**) and choose **Import From Market**. If the word "cloud" is not among the displayed visualization tools in the list, you can search for "cloud" and click the **Add** button next to the Word Cloud visual. Power BI installs the Word Cloud visual and lets you know that it installed successfully.
-
-![adding a custom visual](../media/tutorials/power-bi/add-custom-visuals.png)<br><br>
-
-First, click the Word Cloud icon in the Visualizations panel.
-
-![Word Cloud icon in visualizations panel](../media/tutorials/power-bi/visualizations-panel.png)
-
-A new report appears in the workspace. Drag the `keyphrases` field from the Fields panel to the Category field in the Visualizations panel. The word cloud appears inside the report.
-
-Now switch to the Format page of the Visualizations panel. In the Stop Words category, turn on **Default Stop Words** to eliminate short, common words like "of" from the cloud. However, because we're visualizing key phrases rather than full comments, the cloud might not contain many stop words to begin with.
-
-![activating default stop words](../media/tutorials/power-bi/default-stop-words.png)
-
-Down a little further in this panel, turn off **Rotate Text** and **Title**.
-
-![activate focus mode](../media/tutorials/power-bi/word-cloud-focus-mode.png)
-
-Click the Focus Mode tool in the report to get a better look at our word cloud. The tool expands the word cloud to fill the entire workspace, as shown below.
-
-![A Word Cloud](../media/tutorials/power-bi/word-cloud.png)
-
-## Using other features
-
-Azure Cognitive Service for Language also provides sentiment analysis and language detection. The language detection in particular is useful if your customer feedback isn't all in English.
-
-Both of these other APIs are similar to the Key Phrases API. That means you can integrate them with Power BI Desktop using custom functions that are nearly identical to the one you created in this tutorial. Just create a blank query and paste the appropriate code below into the Advanced Editor, as you did earlier. (Don't forget your access key!) Then, as before, use the function to add a new column to the table.
-
-The Sentiment Analysis function below returns a label, such as positive, negative, or mixed, for the sentiment expressed in the text.
-
-```fsharp
-// Returns the sentiment label of the text, for example, positive, negative or mixed.
-(text) => let
- apikey = "YOUR_API_KEY_HERE",
- endpoint = "https://<your-custom-subdomain>.cognitiveservices.azure.com" & "/text/analytics/v3.1/sentiment",
- jsontext = Text.FromBinary(Json.FromValue(Text.Start(Text.Trim(text), 5000))),
- jsonbody = "{ documents: [ { language: ""en"", id: ""0"", text: " & jsontext & " } ] }",
- bytesbody = Text.ToBinary(jsonbody),
- headers = [#"Ocp-Apim-Subscription-Key" = apikey],
- bytesresp = Web.Contents(endpoint, [Headers=headers, Content=bytesbody]),
- jsonresp = Json.Document(bytesresp),
- sentiment = jsonresp[documents]{0}[sentiment]
- in sentiment
-```
-
-Here are two versions of a Language Detection function. The first returns the ISO language code (for example, `en` for English), while the second returns the "friendly" name (for example, `English`). You may notice that only the last line of the body differs between the two versions.
-
-```fsharp
-// Returns the two-letter language code (for example, 'en' for English) of the text
-(text) => let
- apikey = "YOUR_API_KEY_HERE",
- endpoint = "https://<your-custom-subdomain>.cognitiveservices.azure.com" & "/text/analytics/v3.1/languages",
- jsontext = Text.FromBinary(Json.FromValue(Text.Start(Text.Trim(text), 5000))),
- jsonbody = "{ documents: [ { id: ""0"", text: " & jsontext & " } ] }",
- bytesbody = Text.ToBinary(jsonbody),
- headers = [#"Ocp-Apim-Subscription-Key" = apikey],
- bytesresp = Web.Contents(endpoint, [Headers=headers, Content=bytesbody]),
- jsonresp = Json.Document(bytesresp),
- language = jsonresp[documents]{0}[detectedLanguage][iso6391Name] in language
-```
-```fsharp
-// Returns the name (for example, 'English') of the language in which the text is written
-(text) => let
- apikey = "YOUR_API_KEY_HERE",
- endpoint = "https://<your-custom-subdomain>.cognitiveservices.azure.com" & "/text/analytics/v3.1/languages",
- jsontext = Text.FromBinary(Json.FromValue(Text.Start(Text.Trim(text), 5000))),
- jsonbody = "{ documents: [ { id: ""0"", text: " & jsontext & " } ] }",
- bytesbody = Text.ToBinary(jsonbody),
- headers = [#"Ocp-Apim-Subscription-Key" = apikey],
- bytesresp = Web.Contents(endpoint, [Headers=headers, Content=bytesbody]),
- jsonresp = Json.Document(bytesresp),
- language = jsonresp[documents]{0}[detectedLanguage][name] in language
-```
-
-Finally, here's a variant of the Key Phrases function already presented that returns the phrases as a list object, rather than as a single string of comma-separated phrases.
-
-> [!NOTE]
-> Returning a single string simplified our word cloud example. A list, on the other hand, is a more flexible format for working with the returned phrases in Power BI. You can manipulate list objects in Power BI Desktop using the Structured Column group in the Query Editor's Transform ribbon.
-
-```fsharp
-// Returns key phrases from the text as a list object
-(text) => let
- apikey = "YOUR_API_KEY_HERE",
- endpoint = "https://<your-custom-subdomain>.cognitiveservices.azure.com" & "/text/analytics/v3.1/keyPhrases",
- jsontext = Text.FromBinary(Json.FromValue(Text.Start(Text.Trim(text), 5000))),
- jsonbody = "{ documents: [ { language: ""en"", id: ""0"", text: " & jsontext & " } ] }",
- bytesbody = Text.ToBinary(jsonbody),
- headers = [#"Ocp-Apim-Subscription-Key" = apikey],
- bytesresp = Web.Contents(endpoint, [Headers=headers, Content=bytesbody]),
- jsonresp = Json.Document(bytesresp),
- keyphrases = jsonresp[documents]{0}[keyPhrases]
-in keyphrases
-```
-
-## Next steps
-
-Learn more about Azure Cognitive Service for Language, the Power Query M formula language, or Power BI.
-
-* [Azure Cognitive Service for Language overview](../../overview.md)
-* [Power Query M reference](/powerquery-m/power-query-m-reference)
-* [Power BI documentation](https://powerbi.microsoft.com/documentation/powerbi-landing-page/)
cognitive-services Call Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/language-detection/how-to/call-api.md
- Title: How to perform language detection-
-description: This article will show you how to detect the language of written text using language detection.
------ Previously updated : 03/01/2022----
-# How to use language detection
-
-The Language Detection feature can evaluate text, and return a language identifier that indicates the language a document was written in.
-
-Language detection is useful for content stores that collect arbitrary text, where language is unknown. You can parse the results of this analysis to determine which language is used in the input document. The response also returns a score between 0 and 1 that reflects the confidence of the model.
-
-The Language Detection feature can detect a wide range of languages, variants, dialects, and some regional or cultural languages.
-
-## Development options
--
-## Determine how to process the data (optional)
-
-### Specify the language detection model
-
-By default, language detection will use the latest available AI model on your text. You can also configure your API requests to use a specific [model version](../../concepts/model-lifecycle.md).
-
-### Input languages
-
-When you submit documents to be evaluated, language detection will attempt to determine if the text was written in any of [the supported languages](../language-support.md).
-
-If you have content expressed in a less frequently used language, you can try the Language Detection feature to see if it returns a code. The response for languages that can't be detected is `unknown`.
-
-## Submitting data
-
-> [!TIP]
-> You can use a [Docker container](use-containers.md) for language detection, so you can use the API on-premises.
-
-Analysis is performed upon receipt of the request. Using the language detection feature synchronously is stateless. No data is stored in your account, and results are returned immediately in the response.
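-
-A minimal sketch of a synchronous request is shown below; it assumes the v3.1 REST path, and the subdomain, key, and sample text are placeholders.
-
-```bash
-# Minimal sketch: detect the language of a single document.
-# <your-custom-subdomain> and YOUR_API_KEY_HERE are placeholders - substitute your own values.
-curl -X POST "https://<your-custom-subdomain>.cognitiveservices.azure.com/text/analytics/v3.1/languages" \
-  -H "Ocp-Apim-Subscription-Key: YOUR_API_KEY_HERE" \
-  -H "Content-Type: application/json" \
-  -d '{ "documents": [ { "id": "1", "text": "Ce document est rédigé en français." } ] }'
-```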
---
-## Getting language detection results
-
-When you get results from language detection, you can stream the results to an application or save the output to a file on the local system.
-
-Language detection will return one predominant language for each document you submit, along with its [ISO 639-1](https://www.iso.org/standard/22109.html) name, a human-readable name, and a confidence score. A positive score of 1 indicates the highest possible confidence level of the analysis.
-
-### Ambiguous content
-
-In some cases, it may be hard to disambiguate languages based on the input. You can use the `countryHint` parameter to specify an [ISO 3166-1 alpha-2](https://en.wikipedia.org/wiki/ISO_3166-1_alpha-2) country/region code. By default, the API uses "US" as the country hint. To remove this behavior, you can reset the parameter by setting it to an empty string (`countryHint = ""`).
-
-For example, "communication" is common to both English and French, so with limited context the response will be based on the "US" country/region hint. If the text is known to originate in France, you can pass "fr" as the hint.
-
-**Input**
-
-```json
-{
- "documents": [
- {
- "id": "1",
- "text": "communication"
- },
- {
- "id": "2",
- "text": "communication",
- "countryHint": "fr"
- }
- ]
-}
-```
-
-The language detection model now has additional context to make a better judgment:
-
-**Output**
-
-```json
-{
- "documents":[
- {
- "detectedLanguage":{
- "confidenceScore":0.62,
- "iso6391Name":"en",
- "name":"English"
- },
- "id":"1",
- "warnings":[
-
- ]
- },
- {
- "detectedLanguage":{
- "confidenceScore":1.0,
- "iso6391Name":"fr",
- "name":"French"
- },
- "id":"2",
- "warnings":[
-
- ]
- }
- ],
- "errors":[
-
- ],
- "modelVersion":"2022-10-01"
-}
-```
-
-If the analyzer can't parse the input, it returns `(Unknown)`. For example, this can happen if you submit a text string that consists solely of numbers.
-
-```json
-{
- "documents": [
- {
- "id": "1",
- "detectedLanguage": {
- "name": "(Unknown)",
- "iso6391Name": "(Unknown)",
- "confidenceScore": 0.0
- },
- "warnings": []
- }
- ],
- "errors": [],
- "modelVersion": "2021-01-05"
-}
-```
-
-### Mixed-language content
-
-Mixed-language content within the same document returns the language with the largest representation in the content, but with a lower positive rating. The rating reflects the marginal strength of the assessment. In the following example, input is a blend of English, Spanish, and French. The analyzer counts characters in each segment to determine the predominant language.
-
-**Input**
-
-```json
-{
- "documents": [
- {
- "id": "1",
- "text": "Hello, I would like to take a class at your University. ¿Se ofrecen clases en español? Es mi primera lengua y más fácil para escribir. Que diriez-vous des cours en français?"
- }
- ]
-}
-```
-
-**Output**
-
-The resulting output consists of the predominant language, with a score of less than 1.0, which indicates a weaker level of confidence.
-
-```json
-{
- "documents": [
- {
- "id": "1",
- "detectedLanguage": {
- "name": "Spanish",
- "iso6391Name": "es",
- "confidenceScore": 0.88
- },
- "warnings": []
- }
- ],
- "errors": [],
- "modelVersion": "2021-01-05"
-}
-```
-
-## Service and data limits
--
-## See also
-
-* [Language detection overview](../overview.md)
cognitive-services Use Containers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/language-detection/how-to/use-containers.md
- Title: Use language detection Docker containers on-premises-
-description: Use Docker containers for the Language Detection API to determine the language of written text, on-premises.
------ Previously updated : 04/11/2023--
-keywords: on-premises, Docker, container
--
-# Use language detection Docker containers on-premises
-
-Containers enable you to host the Language Detection API on your own infrastructure. If you have security or data governance requirements that can't be fulfilled by calling Language Detection remotely, then containers might be a good option.
-
-## Prerequisites
-
-* If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/cognitive-services/).
-* [Docker](https://docs.docker.com/) installed on a host computer. Docker must be configured to allow the containers to connect with and send billing data to Azure.
- * On Windows, Docker must also be configured to support Linux containers.
- * You should have a basic understanding of [Docker concepts](https://docs.docker.com/get-started/overview/).
-* A <a href="https://portal.azure.com/#create/Microsoft.CognitiveServicesTextAnalytics" title="Create a Language resource" target="_blank">Language resource </a> with the free (F0) or standard (S) [pricing tier](https://azure.microsoft.com/pricing/details/cognitive-services/text-analytics/).
--
-## Host computer requirements and recommendations
--
-The following table describes the minimum and recommended specifications for the language detection container. Each CPU core must be at least 2.6 gigahertz (GHz) or faster. The allowable Transactions Per Second (TPS) are also listed.
-
-| | Minimum host specs | Recommended host specs | Minimum TPS | Maximum TPS |
-|--|--|--|--|--|
-| **Language detection** | 1 core, 2 GB memory | 1 core, 4 GB memory | 15 | 30 |
-
-CPU core and memory correspond to the `--cpus` and `--memory` settings, which are used as part of the `docker run` command.
-
-## Get the container image with `docker pull`
-
-The Language Detection container image can be found on the `mcr.microsoft.com` container registry syndicate. It resides within the `azure-cognitive-services/textanalytics/` repository and is named `language`. The fully qualified container image name is `mcr.microsoft.com/azure-cognitive-services/textanalytics/language`.
-
-To use the latest version of the container, you can use the `latest` tag. You can also find a full list of [tags on the MCR](https://mcr.microsoft.com/product/azure-cognitive-services/textanalytics/language/tags).
-
-Use the [`docker pull`](https://docs.docker.com/engine/reference/commandline/pull/) command to download a container image from the Microsoft Container Registry.
-
-```
-docker pull mcr.microsoft.com/azure-cognitive-services/textanalytics/language:latest
-```
--
-## Run the container with `docker run`
-
-Once the container image is on the host computer, use the [docker run](https://docs.docker.com/engine/reference/commandline/run/) command to run the container. The container continues to run until you stop it.
-
-> [!IMPORTANT]
-> * The docker commands in the following sections use the backslash, `\`, as a line continuation character. Replace or remove it based on your host operating system's requirements.
-> * The `Eula`, `Billing`, and `ApiKey` options must be specified to run the container; otherwise, the container won't start. For more information, see [Billing](#billing).
-
-To run the *Language Detection* container, execute the following `docker run` command. Replace the placeholders below with your own values:
-
-| Placeholder | Value | Format or example |
-|--|--|--|
-| **{API_KEY}** | The key for your Language resource. You can find it on your resource's **Keys and endpoint** page in the Azure portal. | `xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx` |
-| **{ENDPOINT_URI}** | The endpoint for accessing the language detection API. You can find it on your resource's **Keys and endpoint** page in the Azure portal. | `https://<your-custom-subdomain>.cognitiveservices.azure.com` |
--
-```bash
-docker run --rm -it -p 5000:5000 --memory 4g --cpus 1 \
-mcr.microsoft.com/azure-cognitive-services/textanalytics/language \
-Eula=accept \
-Billing={ENDPOINT_URI} \
-ApiKey={API_KEY}
-```
-
-This command:
-
-* Runs a *Language Detection* container from the container image
-* Allocates one CPU core and 4 gigabytes (GB) of memory
-* Exposes TCP port 5000 and allocates a pseudo-TTY for the container
-* Automatically removes the container after it exits. The container image is still available on the host computer.
--
-## Query the container's prediction endpoint
-
-The container provides REST-based query prediction endpoint APIs.
-
-Use the host, `http://localhost:5000`, for container APIs.
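-
-For example, the following curl call is a minimal sketch of a language detection request against the local container. It assumes the container exposes the same `/text/analytics/v3.1/languages` route as the cloud Text Analytics API; browse to `http://localhost:5000` (Swagger) to confirm the exact routes your container version provides.
-
-```bash
-# Send two short documents to the container and let it detect their languages.
-curl -X POST "http://localhost:5000/text/analytics/v3.1/languages" \
-  -H "Content-Type: application/json" \
-  -d '{
-        "documents": [
-          { "id": "1", "text": "Hello world" },
-          { "id": "2", "text": "Bonjour tout le monde" }
-        ]
-      }'
-```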
-
-<!-- ## Validate container is running -->
--
-## Run the container disconnected from the internet
--
-## Stop the container
--
-## Troubleshooting
-
-If you run the container with an output [mount](../../concepts/configure-containers.md#mount-settings) and logging enabled, the container generates log files that are helpful for troubleshooting issues that occur while starting or running the container.
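-
-As a hedged example, the following `docker run` sketch adds an output mount and turns on disk logging using the shared Cognitive Services container settings; the host path is a placeholder, and the exact setting names should be confirmed in [Configure containers](../../concepts/configure-containers.md).
-
-```bash
-# Run the container with an output mount so log files are written to the host.
-docker run --rm -it -p 5000:5000 --memory 4g --cpus 1 \
-  --mount type=bind,src=/home/azureuser/output,target=/output \
-  mcr.microsoft.com/azure-cognitive-services/textanalytics/language \
-  Eula=accept \
-  Billing={ENDPOINT_URI} \
-  ApiKey={API_KEY} \
-  Logging:Disk:Format=json
-```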
--
-## Billing
-
-The language detection containers send billing information to Azure, using a _Language_ resource on your Azure account.
--
-For more information about these options, see [Configure containers](../../concepts/configure-containers.md).
-
-## Summary
-
-In this article, you learned the concepts and workflow for downloading, installing, and running language detection containers. In summary:
-
-* Language detection provides Linux containers for Docker
-* Container images are downloaded from the Microsoft Container Registry (MCR).
-* Container images run in Docker.
-* You must specify billing information when instantiating a container.
-
-> [!IMPORTANT]
-> This container is not licensed to run without being connected to Azure for metering. Customers need to enable the containers to communicate billing information with the metering service at all times. Cognitive Services containers do not send customer data (for example, the text being analyzed) to Microsoft.
-
-## Next steps
-
-* See [Configure containers](../../concepts/configure-containers.md) for configuration settings.
cognitive-services Language Support https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/language-detection/language-support.md
- Title: Language Detection language support-
-description: This article explains which natural languages are supported by the Language Detection API.
------ Previously updated : 11/02/2021----
-# Language support for Language Detection
-
-Use this article to learn which natural languages are supported by Language Detection.
--
-> [!NOTE]
-> Languages are added as new [model versions](how-to/call-api.md#specify-the-language-detection-model) are released. The current model version for Language Detection is `2022-10-01`.
-
-The Language Detection feature can detect a wide range of languages, variants, dialects, and some regional/cultural languages, and return detected languages with their name and code. The returned language code parameters conform to the [BCP-47](https://tools.ietf.org/html/bcp47) standard, with most of them corresponding to [ISO-639-1](https://www.iso.org/iso-639-language-codes.html) identifiers.
-
-If you have content expressed in a less frequently used language, you can try Language Detection to see if it returns a code. The response for languages that cannot be detected is `unknown`.
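-
-For example, the following curl sketch calls the service with the model version pinned explicitly, so you can check which code comes back for your text. The resource name, key, and sample text are placeholders for your own values.
-
-```bash
-# Detect the language of a sample document using the 2022-10-01 model version.
-curl -X POST "https://<your-language-resource>.cognitiveservices.azure.com/language/:analyze-text?api-version=2022-05-01" \
-  -H "Ocp-Apim-Subscription-Key: <your-key>" \
-  -H "Content-Type: application/json" \
-  -d '{
-        "kind": "LanguageDetection",
-        "parameters": { "modelVersion": "2022-10-01" },
-        "analysisInput": {
-          "documents": [ { "id": "1", "text": "Dit is een document geschreven in het Nederlands." } ]
-        }
-      }'
-```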
-
-## Languages supported by Language Detection
-
-| Language | Language Code | Starting with model version: |
-||||
-| Afrikaans | `af` | |
-| Albanian | `sq` | |
-| Amharic | `am` | 2021-01-05 |
-| Arabic | `ar` | |
-| Armenian | `hy` | |
-| Assamese | `as` | 2021-01-05 |
-| Azerbaijani | `az` | 2021-01-05 |
-| Bashkir | `ba` | 2022-10-01 |
-| Basque | `eu` | |
-| Belarusian | `be` | |
-| Bengali | `bn` | |
-| Bosnian | `bs` | 2020-09-01 |
-| Bulgarian | `bg` | |
-| Burmese | `my` | |
-| Catalan | `ca` | |
-| Central Khmer | `km` | |
-| Chinese | `zh` | |
-| Chinese Simplified | `zh_chs` | |
-| Chinese Traditional | `zh_cht` | |
-| Chuvash | `cv` | 2022-10-01 |
-| Corsican | `co` | 2021-01-05 |
-| Croatian | `hr` | |
-| Czech | `cs` | |
-| Danish | `da` | |
-| Dari | `prs` | 2020-09-01 |
-| Divehi | `dv` | |
-| Dutch | `nl` | |
-| English | `en` | |
-| Esperanto | `eo` | |
-| Estonian | `et` | |
-| Faroese | `fo` | 2022-10-01 |
-| Fijian | `fj` | 2020-09-01 |
-| Finnish | `fi` | |
-| French | `fr` | |
-| Galician | `gl` | |
-| Georgian | `ka` | |
-| German | `de` | |
-| Greek | `el` | |
-| Gujarati | `gu` | |
-| Haitian | `ht` | |
-| Hausa | `ha` | 2021-01-05 |
-| Hebrew | `he` | |
-| Hindi | `hi` | |
-| Hmong Daw | `mww` | 2020-09-01 |
-| Hungarian | `hu` | |
-| Icelandic | `is` | |
-| Igbo | `ig` | 2021-01-05 |
-| Indonesian | `id` | |
-| Inuktitut | `iu` | |
-| Irish | `ga` | |
-| Italian | `it` | |
-| Japanese | `ja` | |
-| Javanese | `jv` | 2021-01-05 |
-| Kannada | `kn` | |
-| Kazakh | `kk` | 2020-09-01 |
-| Kinyarwanda | `rw` | 2021-01-05 |
-| Kirghiz | `ky` | 2022-10-01 |
-| Korean | `ko` | |
-| Kurdish | `ku` | |
-| Lao | `lo` | |
-| Latin | `la` | |
-| Latvian | `lv` | |
-| Lithuanian | `lt` | |
-| Luxembourgish | `lb` | 2021-01-05 |
-| Macedonian | `mk` | |
-| Malagasy | `mg` | 2020-09-01 |
-| Malay | `ms` | |
-| Malayalam | `ml` | |
-| Maltese | `mt` | |
-| Maori | `mi` | 2020-09-01 |
-| Marathi | `mr` | 2020-09-01 |
-| Mongolian | `mn` | 2021-01-05 |
-| Nepali | `ne` | 2021-01-05 |
-| Norwegian | `no` | |
-| Norwegian Nynorsk | `nn` | |
-| Odia | `or` | |
-| Pashto | `ps` | |
-| Persian | `fa` | |
-| Polish | `pl` | |
-| Portuguese | `pt` | |
-| Punjabi | `pa` | |
-| Queretaro Otomi | `otq` | 2020-09-01 |
-| Romanian | `ro` | |
-| Russian | `ru` | |
-| Samoan | `sm` | 2020-09-01 |
-| Serbian | `sr` | |
-| Shona | `sn` | 2021-01-05 |
-| Sindhi | `sd` | 2021-01-05 |
-| Sinhala | `si` | |
-| Slovak | `sk` | |
-| Slovenian | `sl` | |
-| Somali | `so` | |
-| Spanish | `es` | |
-| Sundanese | `su` | 2021-01-05 |
-| Swahili | `sw` | |
-| Swedish | `sv` | |
-| Tagalog | `tl` | |
-| Tahitian | `ty` | 2020-09-01 |
-| Tajik | `tg` | 2021-01-05 |
-| Tamil | `ta` | |
-| Tatar | `tt` | 2021-01-05 |
-| Telugu | `te` | |
-| Thai | `th` | |
-| Tibetan | `bo` | 2021-01-05 |
-| Tigrinya | `ti` | 2021-01-05 |
-| Tongan | `to` | 2020-09-01 |
-| Turkish | `tr` | 2021-01-05 |
-| Turkmen | `tk` | 2021-01-05 |
-| Upper Sorbian | `hsb` | 2022-10-01 |
-| Uyghur | `ug` | 2022-10-01 |
-| Ukrainian | `uk` | |
-| Urdu | `ur` | |
-| Uzbek | `uz` | |
-| Vietnamese | `vi` | |
-| Welsh | `cy` | |
-| Xhosa | `xh` | 2021-01-05 |
-| Yiddish | `yi` | |
-| Yoruba | `yo` | 2021-01-05 |
-| Yucatec Maya | `yua` | |
-| Zulu | `zu` | 2021-01-05 |
-
-## Romanized Indic Languages supported by Language Detection
-
-| Language | Language Code | Starting with model version: |
-||||
-| Assamese | `as` | 2022-10-01 |
-| Bengali | `bn` | 2022-10-01 |
-| Gujarati | `gu` | 2022-10-01 |
-| Hindi | `hi` | 2022-10-01 |
-| Kannada | `kn` | 2022-10-01 |
-| Malayalam | `ml` | 2022-10-01 |
-| Marathi | `mr` | 2022-10-01 |
-| Odia | `or` | 2022-10-01 |
-| Punjabi | `pa` | 2022-10-01 |
-| Tamil | `ta` | 2022-10-01 |
-| Telugu | `te` | 2022-10-01 |
-| Urdu | `ur` | 2022-10-01 |
-
-## Next steps
-
-[Language detection overview](overview.md)
cognitive-services Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/language-detection/overview.md
- Title: What is language detection in Azure Cognitive Service for Language?-
-description: An overview of language detection in Azure Cognitive Services, which helps you detect the language that text is written in by returning language codes.
------ Previously updated : 07/27/2022----
-# What is language detection in Azure Cognitive Service for Language?
-
-Language detection is one of the features offered by [Azure Cognitive Service for Language](../overview.md), a collection of machine learning and AI algorithms in the cloud for developing intelligent applications that involve written language. Language detection can detect the language a document is written in, and returns a language code for a wide range of languages, variants, dialects, and some regional/cultural languages.
-
-This documentation contains the following types of articles:
-
-* [**Quickstarts**](quickstart.md) are getting-started instructions to guide you through making requests to the service.
-* [**How-to guides**](how-to/call-api.md) contain instructions for using the service in more specific or customized ways.
---
-## Get started with language detection
--
-## Responsible AI
-
-An AI system includes not only the technology, but also the people who will use it, the people who will be affected by it, and the environment in which it is deployed. Read the [transparency note for language detection](/legal/cognitive-services/language-service/transparency-note-language-detection?context=/azure/cognitive-services/language-service/context/context) to learn about responsible AI use and deployment in your systems. You can also see the following articles for more information:
--
-## Next steps
-
-There are two ways to get started using the language detection feature:
-* [Language Studio](../language-studio.md), which is a web-based platform that enables you to try several Azure Cognitive Service for Language features without needing to write code.
-* The [quickstart article](quickstart.md) for instructions on making requests to the service using the REST API and client library SDK.
cognitive-services Quickstart https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/language-detection/quickstart.md
- Title: "Quickstart: Use the Language Detection client library"-
-description: Use this quickstart to start using Language Detection.
------ Previously updated : 02/17/2023--
-keywords: text mining, language detection
-zone_pivot_groups: programming-languages-text-analytics
--
-# Quickstart: Use the Language Detection client library and REST API
----------------
-## Clean up resources
-
-If you want to clean up and remove a Cognitive Services subscription, you can delete the resource or resource group. Deleting the resource group also deletes any other resources associated with it.
-
-* [Portal](../../cognitive-services-apis-create-account.md#clean-up-resources)
-* [Azure CLI](../../cognitive-services-apis-create-account-cli.md#clean-up-resources)
---
-## Next steps
-
-* [Language detection overview](overview.md)
cognitive-services Language Studio https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/language-studio.md
- Title: "Quickstart: Get started with Language Studio"-
-description: Use this article to learn about Language Studio, and testing features of Azure Cognitive Service for Language
----- Previously updated : 01/03/2023----
-# Quickstart: Get started with Language Studio
-
-[Language Studio](https://aka.ms/languageStudio) is a set of UI-based tools that lets you explore, build, and integrate features from Azure Cognitive Service for Language into your applications.
-
-Language Studio provides you with a platform to try several service features and visually inspect what they return. It also provides an easy-to-use experience for creating custom projects and models that work on your data. Using the Studio, you can get started without needing to write code, and then use the available client libraries and REST APIs in your application.
-
-## Try Language Studio before signing up
-
-Language Studio lets you try available features without needing to create an Azure account or an Azure resource. From the main page of the studio, select one of the listed categories to see [available features](overview.md#available-features) you can try.
--
-Once you choose a feature, you'll be able to send several text examples to the service, and see example output.
--
-## Use Language Studio with your own text
-
-When you're ready to use Language Studio features on your own text data, you will need an Azure Language resource for authentication and [billing](https://aka.ms/unifiedLanguagePricing). You can also use this resource to call the REST APIs and client libraries programmatically. Follow these steps to get started.
-
-> [!IMPORTANT]
-> The setup process and requirements for custom features are different. If you're using one of the following custom features, we recommend using the quickstart articles linked below to get started more easily.
-> * [Conversational Language Understanding](./conversational-language-understanding/quickstart.md)
-> * [Custom Text Classification](./custom-classification/quickstart.md)
-> * [Custom Named Entity Recognition (NER)](./custom-named-entity-recognition/quickstart.md)
-> * [Orchestration workflow](./orchestration-workflow/quickstart.md)
-
-1. Create an Azure Subscription. You can [create one for free](https://azure.microsoft.com/free/ai/).
-
-2. [Log into Language Studio](https://aka.ms/languageStudio). If it's your first time logging in, you'll see a window appear that lets you choose a language resource.
-
- :::image type="content" source="./media/language-resource-small.png" alt-text="A screenshot showing the resource selection screen in Language Studio." lightbox="./media/language-resource.png":::
-
-3. Select **Create a new language resource**. Then enter information for your new resource, such as a name, location and resource group.
-
-
- > [!TIP]
- > * When selecting a location for your Azure resource, choose one that's closest to you for lower latency.
- > * We recommend turning the **Managed Identity** option **on**, to authenticate your requests across Azure.
- > * If you use the free pricing tier, you can keep using the Language service even after your Azure free trial or service credit expires.
-
- :::image type="content" source="./media/create-new-resource-small.png" alt-text="A screenshot showing the resource creation screen in Language Studio." lightbox="./media/create-new-resource.png":::
-
-4. Select **Done**. Your resource will be created, and you will be able to use the different features offered by the Language service with your own text.
--
-### Valid text formats for conversation features
-
-> [!NOTE]
-> This section applies to the following features:
-> * [PII detection for conversation](./personally-identifiable-information/overview.md)
-> * [Conversation summarization](./summarization/overview.md?tabs=conversation-summarization)
-
-If you're sending conversational text to supported features in Language Studio, be aware of the following input requirements:
-* The text you send must be a conversational dialog between two or more participants.
-* Each line must start with the name of the participant, followed by a `:`, and followed by what they say.
-* Each participant must be on a new line. If multiple participants' utterances appear on the same line, they're processed as a single line of the conversation.
-
-See the following example for how you should structure conversational text you want to send.
-
-*Agent: Hello, you're chatting with Rene. How may I help you?*
-
-*Customer: Hi, I tried to set up wifi connection for Smart Brew 300 espresso machine, but it didn't work.*
-
-*Agent: I'm sorry to hear that. Let's see what we can do to fix this issue.*
-
-Note that the names of the two participants in the conversation (*Agent* and *Customer*) begin each line, and that there is only one participant per line of dialog.
--
-## Clean up resources
-
-If you want to clean up and remove a Cognitive Services subscription, you can delete the resource or resource group. Deleting the resource group also deletes any other resources associated with it.
-
-* [Portal](../cognitive-services-apis-create-account.md#clean-up-resources)
-* [Azure CLI](../cognitive-services-apis-create-account-cli.md#clean-up-resources)
-
-> [!TIP]
-> In Language Studio, you can find your resource's details (such as its name and pricing tier) as well as switch resources by:
-> 1. Selecting the **Settings** icon in the top-right corner of the Language Studio screen.
-> 2. Selecting **Resources**.
->
-> You can't delete your resource from Language Studio.
-
-## Next steps
-
-* Go to the [Language Studio](https://aka.ms/languageStudio) to begin using features offered by the service.
-* For more information and documentation on the features offered, see the [Azure Cognitive Service for Language overview](overview.md).
cognitive-services Entity Metadata https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/named-entity-recognition/concepts/entity-metadata.md
- Title: Entity Metadata provided by Named Entity Recognition-
-description: Learn about entity metadata in the NER feature.
------ Previously updated : 06/13/2023---
-# Entity Metadata
-
-The Entity Metadata object captures optional additional information about detected entities, providing resolutions specifically for numeric and temporal entities. This attribute is populated only when there's supplementary data available, enhancing the comprehensiveness of the detected entities. The Metadata component encompasses resolutions designed for both numeric and temporal entities. It's important to handle cases where the Metadata attribute may be empty or absent, as its presence isn't guaranteed for every entity.
-
-Currently, metadata components handle resolutions to a standard format for an entity. Entities can be expressed in various forms and resolutions provide standard predictable formats for common quantifiable types. For example, "eighty" and "80" should both resolve to the integer `80`.
-
-You can use NER resolutions to implement actions or retrieve further information. For example, your service can extract datetime entities to obtain dates and times that are provided to a meeting scheduling system.
-
-> [!NOTE]
-> Entity metadata is only supported starting from **_api-version=2023-04-15-preview_**. For older API versions, see the [Entity Resolutions article](./entity-resolutions.md).
-
-This article documents the resolution objects returned for each entity category or subcategory under the metadata object.
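-
-As a minimal sketch (the resource name and key are placeholders), the following request targets the preview API version so that numeric and temporal entities in the response carry the metadata object documented below:
-
-```bash
-# Recognize entities with the preview API version that returns the metadata object.
-curl -X POST "https://<your-language-resource>.cognitiveservices.azure.com/language/:analyze-text?api-version=2023-04-15-preview" \
-  -H "Ocp-Apim-Subscription-Key: <your-key>" \
-  -H "Content-Type: application/json" \
-  -d '{
-        "kind": "EntityRecognition",
-        "analysisInput": {
-          "documents": [ { "id": "1", "language": "en", "text": "The dinner costs 80 USD and starts at 6 PM tomorrow." } ]
-        }
-      }'
-```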
-
-## Numeric Entities
-
-### Age
-
-Examples: "10 years old", "23 months old", "sixty Y.O."
-
-```json
-"metadata": {
- "unit": "Year",
- "value": 10
- }
-```
-
-Possible values for "unit":
-- Year
-- Month
-- Week
-- Day
-### Currency
-
-Examples: "30 Egyptian pounds", "77 USD"
-
-```json
-"metadata": {
- "unit": "Egyptian pound",
- "ISO4217": "EGP",
- "value": 30
- }
-```
-
-Possible values for "unit" and "ISO4217":
-- [ISO 4217 reference](https://docs.1010data.com/1010dataReferenceManual/DataTypesAndFormats/currencyUnitCodes.html).-
-## Datetime/Temporal entities
-
-Datetime includes several different subtypes that return different response objects.
-
-### Date
-
-Specific days.
-
-Examples: "January 1 1995", "12 april", "7th of October 2022", "tomorrow"
-
-```json
-"metadata": {
- "dateValues": [
- {
- "timex": "1995-01-01",
- "value": "1995-01-01"
- }
- ]
- }
-```
-
-Whenever an ambiguous date is provided, you're offered different options for your resolution. For example, "12 April" could refer to any year. Resolution provides this year and the next as options. The `timex` value `XXXX` indicates no year was specified in the query.
-
-```json
-"metadata": {
- "dateValues": [
- {
- "timex": "XXXX-04-12",
- "value": "2022-04-12"
- },
- {
- "timex": "XXXX-04-12",
- "value": "2023-04-12"
- }
- ]
- }
-```
-
-Ambiguity can occur even for a given day of the week. For example, saying "Monday" could refer to last Monday or this Monday. Once again the `timex` value indicates no year or month was specified, and uses a day of the week identifier (W) to indicate the first day of the week.
-
-```json
-"metadata" :{
- "dateValues": [
- {
- "timex": "XXXX-WXX-1",
- "value": "2022-10-03"
- },
- {
- "timex": "XXXX-WXX-1",
- "value": "2022-10-10"
- }
- ]
- }
-```
--
-### Time
-
-Specific times.
-
-Examples: "9:39:33 AM", "seven AM", "20:03"
-
-```json
-"metadata": {
- "timex": "T09:39:33",
- "value": "09:39:33"
- }
-```
-
-### Datetime
-
-Specific date and time combinations.
-
-Examples: "6 PM tomorrow", "8 PM on January 3rd", "Nov 1 19:30"
-
-```json
-"metadata": {
- "timex": "2022-10-07T18",
- "value": "2022-10-07 18:00:00"
- }
-```
-
-Similar to dates, you can have ambiguous datetime entities. For example, "May 3rd noon" could refer to any year. Resolution provides this year and the next as options. The `timex` value **XXXX** indicates no year was specified.
-
-```json
-"metadata": {
- "dateValues": [
- {
- "timex": "XXXX-05-03T12",
- "value": "2022-05-03 12:00:00"
- },
- {
- "timex": "XXXX-05-03T12",
- "value": "2023-05-03 12:00:00"
- }
- ]
- }
-```
-
-### Datetime ranges
-
-A datetime range is a period with a beginning and end date, time, or datetime.
-
-Examples: "from january 3rd 6 AM to april 25th 8 PM 2022", "between Monday to Thursday", "June", "the weekend"
-
-The "duration" parameter indicates the time passed in seconds (S), minutes (M), hours (H), or days (D). This parameter is only returned when an explicit start and end datetime are in the query. "Next week" would only return with "begin" and "end" parameters for the week.
-
-```json
-"metadata": {
- "duration": "PT2702H",
- "begin": "2022-01-03 06:00:00",
- "end": "2022-04-25 20:00:00"
- }
-```
-
-### Set
-
-A set is a recurring datetime period. Sets don't resolve to exact values, as they don't indicate an exact datetime.
-
-Examples: "every Monday at 6 PM", "every Thursday", "every weekend"
-
-For "every Monday at 6 PM", the `timex` value indicates no specified year with the starting **XXXX**, then every Monday through **WXX-1** to determine first day of every week, and finally **T18** to indicate 6 PM.
-
-```json
-"metadata": {
- "timex": "XXXX-WXX-1T18",
- "value": "not resolved"
- }
-```
-
-## Dimensions
-
-Examples: "24 km/hr", "44 square meters", "sixty six kilobytes"
-
-```json
-"metadata": {
- "unit": "KilometersPerHour",
- "value": 24
- }
-```
-
-Possible values for the "unit" field:
--- **For Measurements**:
- - SquareKilometer
- - SquareHectometer
- - SquareDecameter
- - SquareMeter
- - SquareDecimeter
- - SquareCentimeter
- - SquareMillimeter
- - SquareInch
- - SquareFoot
- - SquareMile
- - SquareYard
- - Acre
--- **For Information**:
- - Bit
- - Kilobit
- - Megabit
- - Gigabit
- - Terabit
- - Petabit
- - Byte
- - Kilobyte
- - Megabyte
- - Gigabyte
- - Terabyte
- - Petabyte
-
-- **For Length, width, height**:
- - Kilometer
- - Hectometer
- - Decameter
- - Meter
- - Decimeter
- - Centimeter
- - Millimeter
- - Micrometer
- - Nanometer
- - Picometer
- - Mile
- - Yard
- - Inch
- - Foot
- - Light year
- - Pt
--- **For Speed**:
- - MetersPerSecond
- - KilometersPerHour
- - KilometersPerMinute
- - KilometersPerSecond
- - MilesPerHour
- - Knot
- - FootPerSecond
- - FootPerMinute
- - YardsPerMinute
- - YardsPerSecond
- - MetersPerMillisecond
- - CentimetersPerMillisecond
- - KilometersPerMillisecond
--- **For Volume**:
- - CubicMeter
- - CubicCentimeter
- - CubicMillimiter
- - Hectoliter
- - Decaliter
- - Liter
- - Deciliter
- - Centiliter
- - Milliliter
- - CubicYard
- - CubicInch
- - CubicFoot
- - CubicMile
- - FluidOunce
- - Teaspoon
- - Tablespoon
- - Pint
- - Quart
- - Cup
- - Gill
- - Pinch
- - FluidDram
- - Barrel
- - Minim
- - Cord
- - Peck
- - Bushel
- - Hogshead
--- **For Weight**:
- - Kilogram
- - Gram
- - Milligram
- - Microgram
- - Gallon
- - MetricTon
- - Ton
- - Pound
- - Ounce
- - Grain
- - Pennyweight
- - LongTonBritish
- - ShortTonUS
- - ShortHundredweightUS
- - Stone
- - Dram
--
-## Ordinal
-
-Examples: "3rd", "first", "last"
-
-```json
-"metadata": {
- "offset": "3",
- "relativeTo": "Start",
- "value": "3"
- }
-```
-
-Possible values for "relativeTo":
-- Start
-- End
-## Temperature
-
-Examples: "88 deg fahrenheit", "twenty three degrees celsius"
-
-```json
-"metadata": {
- "unit": "Fahrenheit",
- "value": 88
- }
-```
-
-Possible values for "unit":
-- Celsius
-- Fahrenheit
-- Kelvin
-- Rankine
cognitive-services Entity Resolutions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/named-entity-recognition/concepts/entity-resolutions.md
- Title: Entity resolutions provided by Named Entity Recognition-
-description: Learn about entity resolutions in the NER feature.
------ Previously updated : 10/12/2022----
-# Resolve entities to standard formats
-
-A resolution is a standard format for an entity. Entities can be expressed in various forms and resolutions provide standard predictable formats for common quantifiable types. For example, "eighty" and "80" should both resolve to the integer `80`.
-
-You can use NER resolutions to implement actions or retrieve further information. For example, your service can extract datetime entities to obtain dates and times that will be provided to a meeting scheduling system.
-
-> [!NOTE]
-> Entity resolution responses are only supported starting from **_api-version=2022-10-01-preview_** and **_"modelVersion": "2022-10-01-preview"_**.
-
-This article documents the resolution objects returned for each entity category or subcategory.
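-
-As a minimal sketch (the resource name and key are placeholders), the following request sets both the preview API version and the preview model version so the response includes the resolutions documented below:
-
-```bash
-# Recognize entities with the preview API and model versions that return resolutions.
-curl -X POST "https://<your-language-resource>.cognitiveservices.azure.com/language/:analyze-text?api-version=2022-10-01-preview" \
-  -H "Ocp-Apim-Subscription-Key: <your-key>" \
-  -H "Content-Type: application/json" \
-  -d '{
-        "kind": "EntityRecognition",
-        "parameters": { "modelVersion": "2022-10-01-preview" },
-        "analysisInput": {
-          "documents": [ { "id": "1", "language": "en", "text": "The dinner costs 80 USD and starts at 6 PM tomorrow." } ]
-        }
-      }'
-```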
-
-## Age
-
-Examples: "10 years old", "23 months old", "sixty Y.O."
-
-```json
-"resolutions": [
- {
- "resolutionKind": "AgeResolution",
- "unit": "Year",
- "value": 10
- }
- ]
-```
-
-Possible values for "unit":
-- Year
-- Month
-- Week
-- Day
-## Currency
-
-Examples: "30 Egyptian pounds", "77 USD"
-
-```json
-"resolutions": [
- {
- "resolutionKind": "CurrencyResolution",
- "unit": "Egyptian pound",
- "ISO4217": "EGP",
- "value": 30
- }
- ]
-```
-
-Possible values for "unit" and "ISO4217":
-- [ISO 4217 reference](https://docs.1010data.com/1010dataReferenceManual/DataTypesAndFormats/currencyUnitCodes.html).-
-## Datetime
-
-Datetime includes several different subtypes that return different response objects.
-
-### Date
-
-Specific days.
-
-Examples: "January 1 1995", "12 april", "7th of October 2022", "tomorrow"
-
-```json
-"resolutions": [
- {
- "resolutionKind": "DateTimeResolution",
- "dateTimeSubKind": "Date",
- "timex": "1995-01-01",
- "value": "1995-01-01"
- }
- ]
-```
-
-Whenever an ambiguous date is provided, you're offered different options for your resolution. For example, "12 April" could refer to any year. Resolution provides this year and the next as options. The `timex` value `XXXX` indicates no year was specified in the query.
-
-```json
-"resolutions": [
- {
- "resolutionKind": "DateTimeResolution",
- "dateTimeSubKind": "Date",
- "timex": "XXXX-04-12",
- "value": "2022-04-12"
- },
- {
- "resolutionKind": "DateTimeResolution",
- "dateTimeSubKind": "Date",
- "timex": "XXXX-04-12",
- "value": "2023-04-12"
- }
- ]
-```
-
-Ambiguity can occur even for a given day of the week. For example, saying "Monday" could refer to last Monday or this Monday. Once again the `timex` value indicates no year or month was specified, and uses a day of the week identifier (W) to indicate the first day of the week.
-
-```json
-"resolutions": [
- {
- "resolutionKind": "DateTimeResolution",
- "dateTimeSubKind": "Date",
- "timex": "XXXX-WXX-1",
- "value": "2022-10-03"
- },
- {
- "resolutionKind": "DateTimeResolution",
- "dateTimeSubKind": "Date",
- "timex": "XXXX-WXX-1",
- "value": "2022-10-10"
- }
- ]
-```
--
-### Time
-
-Specific times.
-
-Examples: "9:39:33 AM", "seven AM", "20:03"
-
-```json
-"resolutions": [
- {
- "resolutionKind": "DateTimeResolution",
- "dateTimeSubKind": "Time",
- "timex": "T09:39:33",
- "value": "09:39:33"
- }
- ]
-```
-
-### Datetime
-
-Specific date and time combinations.
-
-Examples: "6 PM tomorrow", "8 PM on January 3rd", "Nov 1 19:30"
-
-```json
-"resolutions": [
- {
- "resolutionKind": "DateTimeResolution",
- "dateTimeSubKind": "DateTime",
- "timex": "2022-10-07T18",
- "value": "2022-10-07 18:00:00"
- }
- ]
-```
-
-Similar to dates, you can have ambiguous datetime entities. For example, "May 3rd noon" could refer to any year. Resolution provides this year and the next as options. The `timex` value **XXXX** indicates no year was specified.
-
-```json
-"resolutions": [
- {
- "resolutionKind": "DateTimeResolution",
- "dateTimeSubKind": "DateTime",
- "timex": "XXXX-05-03T12",
- "value": "2022-05-03 12:00:00"
- },
- {
- "resolutionKind": "DateTimeResolution",
- "dateTimeSubKind": "DateTime",
- "timex": "XXXX-05-03T12",
- "value": "2023-05-03 12:00:00"
- }
- ]
-```
-
-### Datetime ranges
-
-A datetime range is a period with a beginning and end date, time, or datetime.
-
-Examples: "from january 3rd 6 AM to april 25th 8 PM 2022", "between Monday to Thursday", "June", "the weekend"
-
-The "duration" parameter indicates the time passed in seconds (S), minutes (M), hours (H), or days (D). This parameter is only returned when an explicit start and end datetime are in the query. "Next week" would only return with "begin" and "end" parameters for the week.
-
-```json
-"resolutions": [
- {
- "resolutionKind": "TemporalSpanResolution",
- "duration": "PT2702H",
- "begin": "2022-01-03 06:00:00",
- "end": "2022-04-25 20:00:00"
- }
- ]
-```
-
-### Set
-
-A set is a recurring datetime period. Sets don't resolve to exact values, as they don't indicate an exact datetime.
-
-Examples: "every Monday at 6 PM", "every Thursday", "every weekend"
-
-For "every Monday at 6 PM", the `timex` value indicates no specified year with the starting **XXXX**, then every Monday through **WXX-1** to determine first day of every week, and finally **T18** to indicate 6 PM.
-
-```json
-"resolutions": [
- {
- "resolutionKind": "DateTimeResolution",
- "dateTimeSubKind": "Set",
- "timex": "XXXX-WXX-1T18",
- "value": "not resolved"
- }
- ]
-```
-
-## Dimensions
-
-Examples: "24 km/hr", "44 square meters", "sixty six kilobytes"
-
-```json
-"resolutions": [
- {
- "resolutionKind": "SpeedResolution",
- "unit": "KilometersPerHour",
- "value": 24
- }
- ]
-```
-
-Possible values for "resolutionKind" and their "unit" values:
--- **AreaResolution**:
- - SquareKilometer
- - SquareHectometer
- - SquareDecameter
- - SquareMeter
- - SquareDecimeter
- - SquareCentimeter
- - SquareMillimeter
- - SquareInch
- - SquareFoot
- - SquareMile
- - SquareYard
- - Acre
--- **InformationResolution**:
- - Bit
- - Kilobit
- - Megabit
- - Gigabit
- - Terabit
- - Petabit
- - Byte
- - Kilobyte
- - Megabyte
- - Gigabyte
- - Terabyte
- - Petabyte
-
-- **LengthResolution**:
- - Kilometer
- - Hectometer
- - Decameter
- - Meter
- - Decimeter
- - Centimeter
- - Millimeter
- - Micrometer
- - Nanometer
- - Picometer
- - Mile
- - Yard
- - Inch
- - Foot
- - Light year
- - Pt
--- **SpeedResolution**:
- - MetersPerSecond
- - KilometersPerHour
- - KilometersPerMinute
- - KilometersPerSecond
- - MilesPerHour
- - Knot
- - FootPerSecond
- - FootPerMinute
- - YardsPerMinute
- - YardsPerSecond
- - MetersPerMillisecond
- - CentimetersPerMillisecond
- - KilometersPerMillisecond
--- **VolumeResolution**:
- - CubicMeter
- - CubicCentimeter
- - CubicMillimiter
- - Hectoliter
- - Decaliter
- - Liter
- - Deciliter
- - Centiliter
- - Milliliter
- - CubicYard
- - CubicInch
- - CubicFoot
- - CubicMile
- - FluidOunce
- - Teaspoon
- - Tablespoon
- - Pint
- - Quart
- - Cup
- - Gill
- - Pinch
- - FluidDram
- - Barrel
- - Minim
- - Cord
- - Peck
- - Bushel
- - Hogshead
--- **WeightResolution**:
- - Kilogram
- - Gram
- - Milligram
- - Microgram
- - Gallon
- - MetricTon
- - Ton
- - Pound
- - Ounce
- - Grain
- - Pennyweight
- - LongTonBritish
- - ShortTonUS
- - ShortHundredweightUS
- - Stone
- - Dram
--
-## Number
-
-Examples: "27", "one hundred and three", "38.5", "2/3", "33%"
-
-```json
-"resolutions": [
- {
- "resolutionKind": "NumberResolution",
- "numberKind": "Integer",
- "value": 27
- }
- ]
-```
-
-Possible values for "numberKind":
-- Integer
-- Decimal
-- Fraction
-- Power
-- Percent
-## Ordinal
-
-Examples: "3rd", "first", "last"
-
-```json
-"resolutions": [
- {
- "resolutionKind": "OrdinalResolution",
- "offset": "3",
- "relativeTo": "Start",
- "value": "3"
- }
- ]
-```
-
-Possible values for "relativeTo":
-- Start
-- End
-## Temperature
-
-Examples: "88 deg fahrenheit", "twenty three degrees celsius"
-
-```json
-"resolutions": [
- {
- "resolutionKind": "TemperatureResolution",
- "unit": "Fahrenheit",
- "value": 88
- }
- ]
-```
-
-Possible values for "unit":
-- Celsius
-- Fahrenheit
-- Kelvin
-- Rankine
cognitive-services Ga Preview Mapping https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/named-entity-recognition/concepts/ga-preview-mapping.md
- Title: Preview API overview-
-description: Learn about the NER preview API.
------ Previously updated : 06/14/2023----
-# Preview API changes
-
-Use this article to get an overview of the new API changes starting from `2023-04-15-preview` version. This API change mainly introduces two new concepts (`entity types` and `entity tags`) replacing the `category` and `subcategory` fields in the current Generally Available API.
-
-## Entity types
-Entity types represent the lowest (or finest) granularity at which an entity is detected, and can be considered the base class of the detected entity.
-
-## Entity tags
-Entity tags further identify an entity: a detected entity is tagged with its entity type and with additional tags that differentiate it. The entity tags list can be thought of as covering categories, subcategories, sub-subcategories, and so on.
-
-## Changes from generally available API to preview API
-The changes introduce better flexibility for named entity recognition, including:
-* More granular entity recognition through the tags list, where an entity can be tagged with more than one entity tag.
-* Overlapping entities, where an entity can be recognized as more than one entity type; in that case, the entity is returned twice. If an entity is recognized as belonging to two entity tags under the same entity type, both entity tags are returned in the tags list.
-* Filtering entities using entity tags. You can learn more about this in the [how-to article](../how-to-call.md#select-which-entities-to-be-returned-preview-api-only).
-* Metadata objects, which contain additional information about the entity but currently only act as a wrapper for the existing entity resolution feature. You can learn more about this new feature in the [entity metadata article](entity-metadata.md). An illustrative sketch of a detected entity in the new shape follows this list.
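-
-As an illustration only, an entity such as "Seattle" detected with the preview API might be shaped along the following lines. The field names and scores here are assumptions based on the mapping table below, not output captured from the service; check the preview API reference for the authoritative schema.
-
-```json
-{
-  "text": "Seattle",
-  "type": "City",
-  "tags": [
-    { "name": "Location", "confidenceScore": 1.0 },
-    { "name": "GPE", "confidenceScore": 1.0 },
-    { "name": "City", "confidenceScore": 1.0 }
-  ],
-  "confidenceScore": 0.95
-}
-```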
-
-## Generally available to preview API entity mappings
-You can see a comparison between the structure of the entity categories/types in the [Supported Named Entity Recognition (NER) entity categories and entity types article](./named-entity-categories.md). The following table describes the mappings between the results you would expect to see from the Generally Available API and the Preview API.
-
-| Type | Tags |
-|-|-|
-| Date | Temporal, Date |
-| DateRange | Temporal, DateRange |
-| DateTime | Temporal, DateTime |
-| DateTimeRange | Temporal, DateTimeRange |
-| Duration | Temporal, Duration |
-| SetTemporal | Temporal, SetTemporal |
-| Time | Temporal, Time |
-| TimeRange | Temporal, TimeRange |
-| City | GPE, Location, City |
-| State | GPE, Location, State |
-| CountryRegion | GPE, Location, CountryRegion |
-| Continent | GPE, Location, Continent |
-| GPE | Location, GPE |
-| Location | Location |
-| Airport | Structural, Location |
-| Structural | Location, Structural |
-| Geological | Location, Geological |
-| Age | Numeric, Age |
-| Currency | Numeric, Currency |
-| Number | Numeric, Number |
-| NumberRange | Numeric, NumberRange |
-| Percentage | Numeric, Percentage |
-| Ordinal | Numeric, Ordinal |
-| Temperature | Numeric, Dimension, Temperature |
-| Speed | Numeric, Dimension, Speed |
-| Weight | Numeric, Dimension, Weight |
-| Height | Numeric, Dimension, Height |
-| Length | Numeric, Dimension, Length |
-| Volume | Numeric, Dimension, Volume |
-| Area | Numeric, Dimension, Area |
-| Information | Numeric, Dimension, Information |
-| Address | Address |
-| Person | Person |
-| PersonType | PersonType |
-| Organization | Organization |
-| Product | Product |
-| ComputingProduct | Product, ComputingProduct |
-| IP | IP |
-| Email | Email |
-| URL | URL |
-| Skill | Skill |
-| Event | Event |
-| CulturalEvent | Event, CulturalEvent |
-| SportsEvent | Event, SportsEvent |
-| NaturalEvent | Event, NaturalEvent |
-
cognitive-services Named Entity Categories https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/named-entity-recognition/concepts/named-entity-categories.md
- Title: Entity categories recognized by Named Entity Recognition in Azure Cognitive Service for Language-
-description: Learn about the entities the NER feature can recognize from unstructured text.
------ Previously updated : 11/02/2021----
-# Supported Named Entity Recognition (NER) entity categories and entity types
-
-Use this article to find the entity categories that can be returned by [Named Entity Recognition](../how-to-call.md) (NER). NER runs a predictive model to identify and categorize named entities from an input document.
-
-> [!NOTE]
-> * Starting from API version 2023-04-15-preview, the category and subcategory fields are replaced with entity types and tags to introduce better flexibility.
-
-# [Generally Available API](#tab/ga-api)
-
-## Category: Person
-
-This category contains the following entity:
-
- :::column span="":::
- **Entity**
-
- Person
-
- :::column-end:::
- :::column span="2":::
- **Details**
-
- Names of people.
-
- :::column-end:::
- :::column span="2":::
- **Supported document languages**
-
- `ar`, `cs`, `da`, `nl`, `en`, `fi`, `fr`, `de`, `he`, <br> `hu`, `it`, `ja`, `ko`, `no`, `pl`, `pt-br`, `pt-pt`, `ru`, `es`, `sv`, `tr`
-
- :::column-end:::
-
-## Category: PersonType
-
-This category contains the following entity:
--
- :::column span="":::
- **Entity**
-
- PersonType
-
- :::column-end:::
- :::column span="2":::
- **Details**
-
- Job types or roles held by a person
-
- :::column-end:::
- :::column span="2":::
- **Supported document languages**
-
- `en`, `es`, `fr`, `de`, `it`, `zh-hans`, `ja`, `ko`, `pt-pt`, `pt-br`
-
- :::column-end:::
-
-## Category: Location
-
-This category contains the following entity:
-
- :::column span="":::
- **Entity**
-
- Location
-
- :::column-end:::
- :::column span="2":::
- **Details**
-
- Natural and human-made landmarks, structures, geographical features, and geopolitical entities.
-
- :::column-end:::
- :::column span="2":::
- **Supported document languages**
-
- `ar`, `cs`, `da`, `nl`, `en`, `fi`, `fr`, `de`, `he`, `hu`, `it`, `ja`, `ko`, `no`, `pl`, `pt-br`, `pt-pt`, `ru`, `es`, `sv`, `tr`
-
- :::column-end:::
-
-#### Subcategories
-
-The entity in this category can have the following subcategories.
-
- :::column span="":::
- **Entity subcategory**
-
- Geopolitical Entity (GPE)
-
- :::column-end:::
- :::column span="2":::
- **Details**
-
- Cities, countries/regions, states.
-
- :::column-end:::
- :::column span="2":::
- **Supported document languages**
-
- `en`, `es`, `fr`, `de`, `it`, `zh-hans`, `ja`, `ko`, `pt-pt`, `pt-br`
-
- :::column-end:::
- :::column span="":::
-
- Structural
-
- :::column-end:::
- :::column span="2":::
-
- Manmade structures.
-
- :::column-end:::
- :::column span="2":::
-
- `en`
-
- :::column-end:::
- :::column span="":::
-
- Geographical
-
- :::column-end:::
- :::column span="2":::
-
- Geographic and natural features such as rivers, oceans, and deserts.
-
- :::column-end:::
- :::column span="2":::
-
- `en`
-
- :::column-end:::
-
-## Category: Organization
-
-This category contains the following entity:
-
- :::column span="":::
- **Entity**
-
- Organization
-
- :::column-end:::
- :::column span="2":::
- **Details**
-
- Companies, political groups, musical bands, sport clubs, government bodies, and public organizations. Nationalities and religions are not included in this entity type.
-
- :::column-end:::
- :::column span="2":::
- **Supported document languages**
-
- `ar`, `cs`, `da`, `nl`, `en`, `fi`, `fr`, `de`, `he`, `hu`, `it`, `ja`, `ko`, `no`, `pl`, `pt-br`, `pt-pt`, `ru`, `es`, `sv`, `tr`
-
- :::column-end:::
-
-#### Subcategories
-
-The entity in this category can have the following subcategories.
-
- :::column span="":::
- **Entity subcategory**
-
- Medical
-
- :::column-end:::
- :::column span="2":::
- **Details**
-
- Medical companies and groups.
-
- :::column-end:::
- :::column span="2":::
- **Supported document languages**
-
- `en`
-
- :::column-end:::
- :::column span="":::
-
- Stock exchange
-
- :::column-end:::
- :::column span="2":::
-
- Stock exchange groups.
-
- :::column-end:::
- :::column span="2":::
-
- `en`
-
- :::column-end:::
- :::column span="":::
-
- Sports
-
- :::column-end:::
- :::column span="2":::
-
- Sports-related organizations.
-
- :::column-end:::
- :::column span="2":::
-
- `en`
-
- :::column-end:::
-
-## Category: Event
-
-This category contains the following entity:
-
- :::column span="":::
- **Entity**
-
- Event
-
- :::column-end:::
- :::column span="2":::
- **Details**
-
- Historical, social, and naturally occurring events.
-
- :::column-end:::
- :::column span="2":::
- **Supported document languages**
-
- `en`, `es`, `fr`, `de`, `it`, `zh-hans`, `ja`, `ko`, `pt-pt` and `pt-br`
-
- :::column-end:::
-
-#### Subcategories
-
-The entity in this category can have the following subcategories.
-
- :::column span="":::
- **Entity subcategory**
-
- Cultural
-
- :::column-end:::
- :::column span="2":::
- **Details**
-
- Cultural events and holidays.
-
- :::column-end:::
- :::column span="2":::
- **Supported document languages**
-
- `en`
-
- :::column-end:::
- :::column span="":::
-
- Natural
-
- :::column-end:::
- :::column span="2":::
-
- Naturally occurring events.
-
- :::column-end:::
- :::column span="2":::
-
- `en`
-
- :::column-end:::
- :::column span="":::
-
- Sports
-
- :::column-end:::
- :::column span="2":::
-
- Sporting events.
-
- :::column-end:::
- :::column span="2":::
-
- `en`
-
- :::column-end:::
-
-## Category: Product
-
-This category contains the following entity:
-
- :::column span="":::
- **Entity**
-
- Product
-
- :::column-end:::
- :::column span="2":::
- **Details**
-
- Physical objects of various categories.
-
- :::column-end:::
- :::column span="2":::
- **Supported document languages**
-
- `en`, `es`, `fr`, `de`, `it`, `zh-hans`, `ja`, `ko`, `pt-pt`, `pt-br`
-
- :::column-end:::
--
-#### Subcategories
-
-The entity in this category can have the following subcategories.
-
- :::column span="":::
- **Entity subcategory**
-
- Computing products
- :::column-end:::
- :::column span="2":::
- **Details**
-
- Computing products.
-
- :::column-end:::
- :::column span="2":::
- **Supported document languages**
-
- `en`
-
- :::column-end:::
-
-## Category: Skill
-
-This category contains the following entity:
-
- :::column span="":::
- **Entity**
-
- Skill
-
- :::column-end:::
- :::column span="2":::
- **Details**
-
- A capability, skill, or expertise.
-
- :::column-end:::
- :::column span="2":::
- **Supported document languages**
-
- `en` , `es`, `fr`, `de`, `it`, `pt-pt`, `pt-br`
-
- :::column-end:::
-
-## Category: Address
-
-This category contains the following entity:
-
- :::column span="":::
- **Entity**
-
- Address
-
- :::column-end:::
- :::column span="2":::
- **Details**
-
- Full mailing address.
-
- :::column-end:::
- :::column span="2":::
- **Supported document languages**
-
- `en`, `es`, `fr`, `de`, `it`, `zh-hans`, `ja`, `ko`, `pt-pt`, `pt-br`
-
- :::column-end:::
-
-## Category: PhoneNumber
-
-This category contains the following entity:
-
- :::column span="":::
- **Entity**
-
- PhoneNumber
-
- :::column-end:::
- :::column span="2":::
- **Details**
-
- Phone numbers (US and EU phone numbers only).
-
- :::column-end:::
- :::column span="2":::
- **Supported document languages**
-
- `en`, `es`, `fr`, `de`, `it`, `zh-hans`, `ja`, `ko`, `pt-pt`, `pt-br`
-
- :::column-end:::
-
-## Category: Email
-
-This category contains the following entity:
-
- :::column span="":::
- **Entity**
-
- Email
-
- :::column-end:::
- :::column span="2":::
- **Details**
-
- Email addresses.
-
- :::column-end:::
- :::column span="2":::
- **Supported document languages**
-
- `en`, `es`, `fr`, `de`, `it`, `zh-hans`, `ja`, `ko`, `pt-pt`, `pt-br`
-
- :::column-end:::
-
-## Category: URL
-
-This category contains the following entity:
-
- :::column span="":::
- **Entity**
-
- URL
-
- :::column-end:::
- :::column span="2":::
- **Details**
-
- URLs to websites.
-
- :::column-end:::
- :::column span="2":::
- **Supported document languages**
-
- `en`, `es`, `fr`, `de`, `it`, `zh-hans`, `ja`, `ko`, `pt-pt`, `pt-br`
-
- :::column-end:::
-
-## Category: IP
-
-This category contains the following entity:
-
- :::column span="":::
- **Entity**
-
- IP
-
- :::column-end:::
- :::column span="2":::
- **Details**
-
- Network IP addresses.
-
- :::column-end:::
- :::column span="2":::
- **Supported document languages**
-
- `en`, `es`, `fr`, `de`, `it`, `zh-hans`, `ja`, `ko`, `pt-pt`, `pt-br`
-
- :::column-end:::
-
-## Category: DateTime
-
-This category contains the following entities:
-
- :::column span="":::
- **Entity**
-
- DateTime
-
- :::column-end:::
- :::column span="2":::
- **Details**
-
- Dates and times of day.
-
- :::column-end:::
- :::column span="2":::
- **Supported document languages**
-
- `en`, `es`, `fr`, `de`, `it`, `zh-hans`, `ja`, `ko`, `pt-pt`, `pt-br`
-
- :::column-end:::
-
-Entities in this category can have the following subcategories
-
-#### Subcategories
-
-The entity in this category can have the following subcategories.
-
- :::column span="":::
- **Entity subcategory**
-
- Date
-
- :::column-end:::
- :::column span="2":::
- **Details**
-
- Calendar dates.
-
- :::column-end:::
- :::column span="2":::
- **Supported document languages**
-
- `en`, `es`, `fr`, `de`, `it`, `zh-hans`, `pt-pt`, `pt-br`
-
- :::column-end:::
- :::column span="":::
-
- Time
-
- :::column-end:::
- :::column span="2":::
-
- Times of day.
-
- :::column-end:::
- :::column span="2":::
-
- `en`, `es`, `fr`, `de`, `it`, `zh-hans`, `pt-pt`, `pt-br`
-
- :::column-end:::
- :::column span="":::
-
- DateRange
-
- :::column-end:::
- :::column span="2":::
-
- Date ranges.
-
- :::column-end:::
- :::column span="2":::
-
- `en`, `es`, `fr`, `de`, `it`, `zh-hans`, `pt-pt`, `pt-br`
-
- :::column-end:::
- :::column span="":::
-
- TimeRange
-
- :::column-end:::
- :::column span="2":::
-
- Time ranges.
-
- :::column-end:::
- :::column span="2":::
-
- `en`, `es`, `fr`, `de`, `it`, `zh-hans`, `pt-pt`, `pt-br`
-
- :::column-end:::
- :::column span="":::
-
- Duration
-
- :::column-end:::
- :::column span="2":::
-
- Durations.
-
- :::column-end:::
- :::column span="2":::
-
- `en`, `es`, `fr`, `de`, `it`, `zh-hans`, `pt-pt`, `pt-br`
-
- :::column-end:::
- :::column span="":::
-
- Set
-
- :::column-end:::
- :::column span="2":::
-
- Set, repeated times.
-
- :::column-end:::
- :::column span="2":::
-
- `en`, `es`, `fr`, `de`, `it`, `zh-hans`, `pt-pt`, `pt-br`
-
- :::column-end:::
-
-## Category: Quantity
-
-This category contains the following entities:
-
- :::column span="":::
- **Entity**
-
- Quantity
-
- :::column-end:::
- :::column span="2":::
- **Details**
-
- Numbers and numeric quantities.
-
- :::column-end:::
- :::column span="2":::
- **Supported document languages**
-
- `en`, `es`, `fr`, `de`, `it`, `zh-hans`, `ja`, `ko`, `pt-pt`, `pt-br`
-
- :::column-end:::
-
-#### Subcategories
-
-The entity in this category can have the following subcategories.
-
- :::column span="":::
- **Entity subcategory**
-
- Number
-
- :::column-end:::
- :::column span="2":::
- **Details**
-
- Numbers.
-
- :::column-end:::
- :::column span="2":::
- **Supported document languages**
-
- `en`, `es`, `fr`, `de`, `it`, `zh-hans`, `pt-pt`, `pt-br`
-
- :::column-end:::
- :::column span="":::
- Percentage
-
- :::column-end:::
- :::column span="2":::
-
- Percentages
-
- :::column-end:::
- :::column span="2":::
-
- `en`, `es`, `fr`, `de`, `it`, `zh-hans`, `pt-pt`, `pt-br`
-
- :::column-end:::
- :::column span="":::
- Ordinal numbers
-
- :::column-end:::
- :::column span="2":::
-
- Ordinal numbers.
-
- :::column-end:::
- :::column span="2":::
-
- `en`, `es`, `fr`, `de`, `it`, `zh-hans`, `pt-pt`, `pt-br`
-
- :::column-end:::
- :::column span="":::
- Age
-
- :::column-end:::
- :::column span="2":::
-
- Ages.
-
- :::column-end:::
- :::column span="2":::
-
- `en`, `es`, `fr`, `de`, `it`, `zh-hans`, `pt-pt`, `pt-br`
-
- :::column-end:::
- :::column span="":::
- Currency
-
- :::column-end:::
- :::column span="2":::
-
- Currencies
-
- :::column-end:::
- :::column span="2":::
-
- `en`, `es`, `fr`, `de`, `it`, `zh-hans`, `pt-pt`, `pt-br`
-
- :::column-end:::
- :::column span="":::
- Dimensions
-
- :::column-end:::
- :::column span="2":::
-
- Dimensions and measurements.
-
- :::column-end:::
- :::column span="2":::
-
- `en`, `es`, `fr`, `de`, `it`, `zh-hans`, `pt-pt`, `pt-br`
-
- :::column-end:::
- :::column span="":::
- Temperature
-
- :::column-end:::
- :::column span="2":::
-
- Temperatures.
-
- :::column-end:::
- :::column span="2":::
-
- `en`, `es`, `fr`, `de`, `it`, `zh-hans`, `pt-pt`, `pt-br`
-
- :::column-end:::
-
-# [Preview API](#tab/preview-api)
-
-## Supported Named Entity Recognition (NER) entity categories
-
-Use this article to find the entity types and the additional tags that can be returned by [Named Entity Recognition](../how-to-call.md) (NER). NER runs a predictive model to identify and categorize named entities from an input document.
-
-### Type: Address
-
-Specific street-level mentions of locations: house/building numbers, streets, avenues, highways, intersections referenced by name.
-
-### Type: Numeric
-
-Numeric values.
-
-This entity type could be tagged by the following entity tags:
-
-#### Age
-
-**Description:** Ages
-
-#### Currency
-
-**Description:** Currencies
-
-#### Number
-
-**Description:** Numbers without a unit
-
-#### NumberRange
-
-**Description:** Range of numbers
-
-#### Percentage
-
-**Description:** Percentages
-
-#### Ordinal
-
-**Description:** Ordinal Numbers
-
-#### Temperature
-
-**Description:** Temperatures
-
-#### Dimension
-
-**Description:** Dimensions or measurements
-
-This entity tag also supports tagging the entity type with the following tags:
-
-|Entity tag |Details |
-|--|-|
-|Length |Length of an object|
-|Weight |Weight of an object|
-|Height |Height of an object|
-|Speed |Speed of an object |
-|Area |Area of an object |
-|Volume |Volume of an object|
-|Information|Unit of measure for digital information|
-
-## Type: Temporal
-
-Dates and times of day
-
-This entity type could be tagged by the following entity tags:
-
-#### Date
-
-**Description:** Calendar dates
-
-#### Time
-
-**Description:** Times of day
-
-#### DateTime
-
-**Description:** Calendar dates with time
-
-#### DateRange
-
-**Description:** Date range
-
-#### TimeRange
-
-**Description:** Time range
-
-#### DateTimeRange
-
-**Description:** Date Time range
-
-#### Duration
-
-**Description:** Durations
-
-#### SetTemporal
-
-**Description:** Set, repeated times
-
-## Type: Event
-
-Events with a timed period
-
-This entity type could be tagged by the following entity tags:
-
-#### SocialEvent
-
-**Description:** Social events
-
-#### CulturalEvent
-
-**Description:** Cultural events
-
-#### NaturalEvent
-
-**Description:** Natural events
-
-## Type: Location
-
-Particular point or place in physical space
-
-This entity type could be tagged by the following entity tags:
-#### GPE
-
-**Description:** Geopolitical entity
-
-This entity tag also supports tagging the entity type with the following tags:
-
-|Entity tag |Details |
-|-|-|
-|City |Cities |
-|State |States |
-|CountryRegion|Countries/Regions |
-|Continent |Continents |
-
-#### Structural
-
-**Description:** Manmade structures
-
-This entity tag also supports tagging the entity type with the following tags:
-
-|Entity tag |Details |
-|-|-|
-|Airport |Airports |
-
-#### Geological
-
-**Description:** Geographic and natural features
-
-This entity tag also supports tagging the entity type with the following tags:
-
-|Entity tag |Details |
-|-|-|
-|River |Rivers |
-|Ocean |Oceans |
-|Desert |Deserts |
-
-## Type: Organization
-
-Corporations, agencies, and other groups of people defined by some established organizational structure
-
-This entity type could be tagged by the following entity tags:
-
-#### MedicalOrganization
-
-**Description:** Medical companies and groups
-
-#### StockExchange
-
-**Description:** Stock exchange groups
-
-#### SportsOrganization
-
-**Description:** Sports-related organizations
-
-## Type: Person
-
-Names of individuals
-
-## Type: PersonType
-
-Human roles classified by group membership
-
-## Type: Email
-
-Email addresses
-
-## Type: URL
-
-URLs to websites
-
-## Type: IP
-
-Network IP addresses
-
-## Type: PhoneNumber
-
-Phone numbers
-
-## Type: Product
-
-Commercial, consumable objects
-
-This entity type could be tagged by the following entity tags:
-
-#### ComputingProduct
-
-**Description:** Computing products
-
-## Type: Skill
-
-Capabilities, skills, or expertise
---
-## Next steps
-
-* [NER overview](../overview.md)
cognitive-services How To Call https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/named-entity-recognition/how-to-call.md
- Title: How to perform Named Entity Recognition (NER)-
-description: This article shows you how to extract named entities from text.
------ Previously updated : 01/10/2023-----
-# How to use Named Entity Recognition (NER)
-
-The NER feature can evaluate unstructured text, and extract named entities from text in several predefined categories, for example: person, location, event, product, and organization.
-
-## Development options
--
-## Determine how to process the data (optional)
-
-### Specify the NER model
-
-By default, this feature uses the latest available AI model on your text. You can also configure your API requests to use a specific [model version](../concepts/model-lifecycle.md).
--
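-
-For example, a request body can pin the model version through the optional `parameters` object. This is a sketch only; `latest` is used by default when no version is specified.
-
-```bash
-{
-    "kind": "EntityRecognition",
-    "parameters":
-    {
-        "modelVersion": "latest"
-    },
-    "analysisInput":
-    {
-        "documents":
-        [
-            { "id": "1", "language": "en", "text": "I had a wonderful trip to Seattle last week." }
-        ]
-    }
-}
-```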
-### Input languages
-
-When you submit documents to be processed, you can specify which of [the supported languages](language-support.md) they're written in. If you don't specify a language, English is used as the default. The API may return offsets in the response to support different [multilingual and emoji encodings](../concepts/multilingual-emoji-support.md).
-
-## Submitting data
-
-Analysis is performed upon receipt of the request. Using the NER feature synchronously is stateless. No data is stored in your account, and results are returned immediately in the response.
--
-The API attempts to detect the [defined entity categories](concepts/named-entity-categories.md) for a given document language.
-
-## Getting NER results
-
-When you get results from NER, you can stream the results to an application or save the output to a file on the local system. The API response will include [recognized entities](concepts/named-entity-categories.md), including their categories and subcategories, and confidence scores.
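-
-For illustration only, an abridged response for a document that mentions Seattle might look like the following; the `kind` value, offsets, and scores shown here are assumptions based on the general analyze-text response shape, so verify them against the API reference:
-
-```json
-{
-    "kind": "EntityRecognitionResults",
-    "results": {
-        "documents": [
-            {
-                "id": "1",
-                "entities": [
-                    {
-                        "text": "Seattle",
-                        "category": "Location",
-                        "subcategory": "GPE",
-                        "offset": 26,
-                        "length": 7,
-                        "confidenceScore": 0.99
-                    }
-                ],
-                "warnings": []
-            }
-        ],
-        "errors": [],
-        "modelVersion": "latest"
-    }
-}
-```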
-
-## Select which entities to return (Preview API only)
-
-Starting with **API version 2023-04-15-preview**, the API attempts to detect the [defined entity types and tags](concepts/named-entity-categories.md) for a given document language. The entity types and tags replace the categories and subcategories structure used by older models, providing more flexibility. You can also specify which entities are detected and returned by using the optional `includeList` and `excludeList` parameters with the appropriate entity types. The following example detects only `Location` entities. You can specify one or more [entity types](concepts/named-entity-categories.md) to be returned. Given the types and tags hierarchy introduced for this version, you can filter at different levels of granularity:
-
-**Input:**
-
-> [!NOTE]
-> In this example, only the **Location** entity type is returned.
-
-```json
-{
-    "kind": "EntityRecognition",
-    "parameters": {
-        "includeList": [
-            "Location"
-        ]
-    },
-    "analysisInput": {
-        "documents": [
-            {
-                "id": "1",
-                "language": "en",
-                "text": "We went to Contoso foodplace located at downtown Seattle last week for a dinner party, and we adore the spot! They provide marvelous food and they have a great menu. The chief cook happens to be the owner (I think his name is John Doe) and he is super nice, coming out of the kitchen and greeted us all. We enjoyed very much dining in the place! The pasta I ordered was tender and juicy, and the place was impeccably clean. You can even pre-order from their online menu at www.contosofoodplace.com, call 112-555-0176 or send email to order@contosofoodplace.com! The only complaint I have is the food didn't come fast enough. Overall I highly recommend it!"
-            }
-        ]
-    }
-}
-```
-
-The above example returns entities falling under the `Location` entity type, such as entities tagged `GPE`, `Structural`, and `Geological`, as [outlined by entity types and tags](concepts/named-entity-categories.md). You can further filter the returned entities by using one of the entity tags for the `Location` entity type, such as filtering on the `GPE` tag only:
-
-```json
-"parameters": {
-    "includeList": [
-        "GPE"
-    ]
-}
-```
-
-This request returns only `Location` entities that fall under the `GPE` tag, and ignores any other entity under the `Location` type that is tagged with a different entity tag, such as `Structural` or `Geological`. You can drill down further on the results by using the `excludeList` parameter. `GPE` tagged entities can be tagged with the following tags: `City`, `State`, `CountryRegion`, and `Continent`. For example, you could exclude the `Continent` and `CountryRegion` tags:
-
-```json
-"parameters": {
-    "includeList": [
-        "GPE"
-    ],
-    "excludeList": [
-        "Continent",
-        "CountryRegion"
-    ]
-}
-```
-
-Using these parameters, you successfully filter on only the `Location` entity type, because the `GPE` entity tag included in the `includeList` parameter falls under the `Location` type. You then filter on only geopolitical entities and exclude any entities tagged with the `Continent` or `CountryRegion` tags.
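-
-Putting the pieces together, a complete request body that combines both parameters (assembled from the fragments above; the document text is shortened for illustration) looks like this:
-
-```json
-{
-    "kind": "EntityRecognition",
-    "parameters": {
-        "includeList": [ "GPE" ],
-        "excludeList": [ "Continent", "CountryRegion" ]
-    },
-    "analysisInput": {
-        "documents": [
-            { "id": "1", "language": "en", "text": "We went to Contoso foodplace located at downtown Seattle last week." }
-        ]
-    }
-}
-```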
-
-## Service and data limits
--
-## Next steps
-
-[Named Entity Recognition overview](overview.md)
cognitive-services Language Support https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/named-entity-recognition/language-support.md
- Title: Named Entity Recognition (NER) language support-
-description: This article explains which natural languages are supported by the NER feature of Azure Cognitive Service for Language.
------ Previously updated : 06/27/2022----
-# Named Entity Recognition (NER) language support
-
-Use this article to learn which natural languages are supported by the NER feature of Azure Cognitive Service for Language.
-
-> [!NOTE]
-> * Languages are added as new [model versions](how-to-call.md#specify-the-ner-model) are released.
-> * The language support below is for model version `2023-02-01-preview` for the Generally Available API.
-> * You can additionally find the language support for the Preview API in the second tab.
-
-## NER language support
-
-# [Generally Available API](#tab/ga-api)
-
-|Language|Language Code|Supports resolution|Notes|
-|:-|:-|:-|:-|
-|Afrikaans|`af`| | |
-|Albanian|`sq`| | |
-|Amharic|`am`| | |
-|Arabic|`ar`| | |
-|Armenian|`hy`| | |
-|Assamese|`as`| | |
-|Azerbaijani|`az`| | |
-|Basque|`eu`| | |
-|Bengali|`bn`| | |
-|Bosnian|`bs`| | |
-|Bulgarian|`bg`| | |
-|Burmese|`my`| | |
-|Catalan|`ca`| | |
-|Chinese (Simplified)|`zh-Hans`|✓ |`zh` also accepted|
-|Chinese (Traditional)|`zh-Hant`| | |
-|Croatian|`hr`| | |
-|Czech|`cs`| | |
-|Danish|`da`| | |
-|Dutch|`nl`|✓ | |
-|English|`en`|✓ | |
-|Estonian|`et`| | |
-|Finnish|`fi`| | |
-|French|`fr`|✓ | |
-|Galician|`gl`| | |
-|Georgian|`ka`| | |
-|German|`de`|✓ | |
-|Greek|`el`| | |
-|Gujarati|`gu`| | |
-|Hebrew|`he`| | |
-|Hindi|`hi`|✓ | |
-|Hungarian|`hu`| | |
-|Indonesian|`id`| | |
-|Irish|`ga`| | |
-|Italian|`it`|✓ | |
-|Japanese|`ji`|✓ | |
-|Kannada|`kn`| | |
-|Kazakh|`kk`| | |
-|Khmer|`km`| | |
-|Korean|`ko`| | |
-|Kurdish (Kurmanji)|`ku`| | |
-|Kyrgyz|`ky`| | |
-|Lao|`lo`| | |
-|Latvian|`lv`| | |
-|Lithuanian|`lt`| | |
-|Macedonian|`mk`| | |
-|Malagasy|`mg`| | |
-|Malay|`ms`| | |
-|Malayalam|`ml`| | |
-|Marathi|`mr`| | |
-|Mongolian|`mn`| | |
-|Nepali|`ne`| | |
-|Norwegian (Bokmal)|`no`| |`nb` also accepted|
-|Odia|`or`| | |
-|Pashto|`ps`| | |
-|Persian|`fa`| | |
-|Polish|`pl`| | |
-|Portuguese (Brazil)|`pt-BR`|✓ | |
-|Portuguese (Portugal)|`pt-PT`| |`pt` also accepted|
-|Punjabi|`pa`| | |
-|Romanian|`ro`| | |
-|Russian|`ru`| | |
-|Serbian|`sr`| | |
-|Slovak|`sk`| | |
-|Slovenian|`sl`| | |
-|Somali|`so`| | |
-|Spanish|`es`|✓ | |
-|Swahili|`sw`| | |
-|Swazi|`ss`| | |
-|Swedish|`sv`| | |
-|Tamil|`ta`| | |
-|Telugu|`te`| | |
-|Thai|`th`| | |
-|Turkish|`tr`|✓ | |
-|Ukrainian|`uk`| | |
-|Urdu|`ur`| | |
-|Uyghur|`ug`| | |
-|Uzbek|`uz`| | |
-|Vietnamese|`vi`| | |
-|Welsh|`cy`| | |
-
-# [Preview API](#tab/preview-api)
-
-|Language|Language Code|Supports metadata|Notes|
-|:-|:-|:-|:-|
-|Afrikaans|`af`|✓||
-|Albanian|`sq`|✓||
-|Amharic|`am`|✓||
-|Arabic|`ar`|✓||
-|Armenian|`hy`|✓||
-|Assamese|`as`|✓||
-|Azerbaijani|`az`|✓||
-|Basque|`eu`|✓||
-|Belarusian (new)|`be`|✓||
-|Bengali|`bn`|✓||
-|Bosnian|`bs`|✓||
-|Breton (new)|`br`|✓||
-|Bulgarian|`bg`|✓||
-|Burmese|`my`|✓||
-|Catalan|`ca`|✓||
-|Chinese (Simplified)|`zh-Hans`|✓||
-|Chinese (Traditional)|`zh-Hant`|✓||
-|Croatian|`hr`|✓||
-|Czech|`cs`|✓||
-|Danish|`da`|✓||
-|Dutch|`nl`|✓||
-|English|`en`|✓||
-|Esperanto (new)|`eo`|✓||
-|Estonian|`et`|✓||
-|Filipino|`fil`|✓||
-|Finnish|`fi`|✓||
-|French|`fr`|✓||
-|Galician|`gl`|✓||
-|Georgian|`ka`|✓||
-|German|`de`|✓||
-|Greek|`el`|✓||
-|Gujarati|`gu`|✓||
-|Hausa (new)|`ha`|✓||
-|Hebrew|`he`|✓||
-|Hindi|`hi`|✓||
-|Hungarian|`hu`|✓||
-|Indonesian|`id`|✓||
-|Irish|`ga`|✓||
-|Italian|`it`|✓||
-|Japanese|`ji`|✓||
-|Javanese (new)|`jv`|✓||
-|Kannada|`kn`|✓||
-|Kazakh|`kk`|✓||
-|Khmer|`km`|✓||
-|Korean|`ko`|✓||
-|Kurdish (Kurmanji)|`ku`|✓||
-|Kyrgyz|`ky`|✓||
-|Lao|`lo`|✓||
-|Latin (new)|`la`|✓||
-|Latvian|`lv`|✓||
-|Lithuanian|`lt`|✓||
-|Macedonian|`mk`|✓||
-|Malagasy|`mg`|✓||
-|Malay|`ms`|✓||
-|Malayalam|`ml`|✓||
-|Marathi|`mr`|✓||
-|Mongolian|`mn`|✓||
-|Nepali|`ne`|✓||
-|Norwegian|`no`|✓||
-|Odia|`or`|✓||
-|Oromo (new)|`om`|✓||
-|Pashto|`ps`|✓||
-|Persian|`fa`|✓||
-|Polish|`pl`|✓||
-|Portuguese (Brazil)|`pt-BR`|✓||
-|Portuguese (Portugal)|`pt-PT`|✓||
-|Punjabi|`pa`|✓||
-|Romanian|`ro`|✓||
-|Russian|`ru`|✓||
-|Sanskrit (new)|`sa`|✓||
-|Scottish Gaelic (new)|`gd`|✓||
-|Serbian|`sr`|✓||
-|Sindhi (new)|`sd`|✓||
-|Sinhala (new)|`si`|✓||
-|Slovak|`sk`|✓||
-|Slovenian|`sl`|✓||
-|Somali|`so`|✓||
-|Spanish|`es`|✓||
-|Sundanese (new)|`su`|✓||
-|Swahili|`sw`|✓||
-|Swedish|`sv`|✓||
-|Tamil|`ta`|✓||
-|Telugu|`te`|✓||
-|Thai|`th`|✓||
-|Turkish|`tr`|✓||
-|Ukrainian|`uk`|✓||
-|Urdu|`ur`|✓||
-|Uyghur|`ug`|✓||
-|Uzbek|`uz`|✓||
-|Vietnamese|`vi`|✓||
-|Welsh|`cy`|✓||
-|Western Frisian (new)|`fy`|✓||
-|Xhosa (new)|`xh`|✓||
-|Yiddish (new)|`yi`|✓||
---
-## Next steps
-
-[NER feature overview](overview.md)
cognitive-services Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/named-entity-recognition/overview.md
- Title: What is the Named Entity Recognition (NER) feature in Azure Cognitive Service for Language?-
-description: An overview of the Named Entity Recognition feature in Azure Cognitive Services, which helps you extract categories of entities in text.
------ Previously updated : 06/15/2022----
-# What is Named Entity Recognition (NER) in Azure Cognitive Service for Language?
-
-Named Entity Recognition (NER) is one of the features offered by [Azure Cognitive Service for Language](../overview.md), a collection of machine learning and AI algorithms in the cloud for developing intelligent applications that involve written language. The NER feature can identify and categorize entities in unstructured text. For example: people, places, organizations, and quantities.
-
-* [**Quickstarts**](quickstart.md) are getting-started instructions to guide you through making requests to the service.
-* [**How-to guides**](how-to-call.md) contain instructions for using the service in more specific or customized ways.
-* The [**conceptual articles**](concepts/named-entity-categories.md) provide in-depth explanations of the service's functionality and features.
---
-## Get started with named entity recognition
---
-## Responsible AI
-
-An AI system includes not only the technology, but also the people who will use it, the people who will be affected by it, and the environment in which it is deployed. Read the [transparency note for NER](/legal/cognitive-services/language-service/transparency-note-named-entity-recognition?context=/azure/cognitive-services/language-service/context/context) to learn about responsible AI use and deployment in your systems. You can also see the following articles for more information:
--
-## Scenarios
-
-* Enhance search capabilities and search indexing - Customers can build knowledge graphs based on entities detected in documents to enhance document search as tags.
-* Automate business processes - For example, when reviewing insurance claims, recognized entities like name and location could be highlighted to facilitate the review. Or a support ticket could be generated with a customer's name and company automatically from an email.
-* Customer analysis: Determine the most popular topics that customers bring up in reviews, emails, and calls, and identify trends over time.
-
-## Next steps
-
-There are two ways to get started using the Named Entity Recognition (NER) feature:
-* [Language Studio](../language-studio.md), which is a web-based platform that enables you to try several Azure Cognitive Service for Language features without needing to write code.
-* The [quickstart article](quickstart.md) for instructions on making requests to the service using the REST API and client library SDK.
cognitive-services Quickstart https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/named-entity-recognition/quickstart.md
- Title: "Quickstart: Use the NER client library"-
-description: Use this quickstart to start using the Named Entity Recognition (NER) API.
------ Previously updated : 02/17/2023--
-keywords: text mining, key phrase
-zone_pivot_groups: programming-languages-text-analytics
--
-# Quickstart: Detecting named entities (NER)
----------------
-## Clean up resources
-
-If you want to clean up and remove a Cognitive Services subscription, you can delete the resource or resource group. Deleting the resource group also deletes any other resources associated with it.
-
-* [Portal](../../cognitive-services-apis-create-account.md#clean-up-resources)
-* [Azure CLI](../../cognitive-services-apis-create-account-cli.md#clean-up-resources)
---
-## Next steps
-
-* [NER overview](overview.md)
cognitive-services Extract Excel Information https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/named-entity-recognition/tutorials/extract-excel-information.md
- Title: Extract information in Excel using Power Automate-
-description: Learn how to Extract Excel text without having to write code, using Named Entity Recognition and Power Automate.
------ Previously updated : 11/21/2022----
-# Extract information in Excel using Named Entity Recognition (NER) and Power Automate
-
-In this tutorial, you'll create a Power Automate flow to extract text in an Excel spreadsheet without having to write code.
-
-This flow will take a spreadsheet of issues reported about an apartment complex, and classify them into two categories: plumbing and other. It will also extract the names and phone numbers of the tenants who sent them. Lastly, the flow will append this information to the Excel sheet.
-
-In this tutorial, you'll learn how to:
-
-> [!div class="checklist"]
-> * Use Power Automate to create a flow
-> * Upload Excel data from OneDrive for Business
-> * Extract text from Excel, and send it for Named Entity Recognition (NER)
-> * Use the information from the API to update an Excel sheet.
-
-## Prerequisites
-- A Microsoft Azure account. [Create a free account](https://azure.microsoft.com/free/cognitive-services/) or [sign in](https://portal.azure.com/).
-- A Language resource. If you don't have one, you can [create one in the Azure portal](https://portal.azure.com/#create/Microsoft.CognitiveServicesTextAnalytics) and use the free tier to complete this tutorial.
-- The [key and endpoint](../../../cognitive-services-apis-create-account.md#get-the-keys-for-your-resource) that was generated for you during sign-up.
-- A spreadsheet containing tenant issues. Example data for this tutorial is [available on GitHub](https://github.com/Azure-Samples/cognitive-services-sample-data-files/blob/master/TextAnalytics/sample-data/ReportedIssues.xlsx).
-- Microsoft 365, with [OneDrive for business](https://www.microsoft.com/microsoft-365/onedrive/onedrive-for-business).
-## Add the Excel file to OneDrive for Business
-
-Download the example Excel file from [GitHub](https://github.com/Azure-Samples/cognitive-services-sample-data-files/blob/master/TextAnalytics/sample-data/ReportedIssues.xlsx). This file must be stored in your OneDrive for Business account.
--
-The issues are reported in raw text. We will use the NER feature to extract the person name and phone number. Then the flow will look for the word "plumbing" in the description to categorize the issues.
-
-## Create a new Power Automate workflow
-
-Go to the [Power Automate site](https://make.powerautomate.com/), and sign in. Then click **Create** and **Scheduled flow**.
--
-On the **Build a scheduled cloud flow** page, initialize your flow with the following fields:
-
-|Field |Value |
-|||
-|**Flow name** | **Scheduled Review** or another name. |
-|**Starting** | Enter the current date and time. |
-|**Repeat every** | **1 hour** |
-
-## Add variables to the flow
-
-Create variables representing the information that will be added to the Excel file. Click **New Step** and search for **Initialize variable**. Do this four times, to create four variables.
---
-Add the following information to the variables you created. They represent the columns of the Excel file. If any variables are collapsed, you can click on them to expand them.
-
-| Action |Name | Type | Value |
-|||||
-| Initialize variable | var_person | String | Person |
-| Initialize variable 2 | var_phone | String | Phone Number |
-| Initialize variable 3 | var_plumbing | String | plumbing |
-| Initialize variable 4 | var_other | String | other |
--
-## Read the Excel file
-
-Click **New Step** and type **Excel**, then select **List rows present in a table** from the list of actions.
--
-Add the Excel file to the flow by filling in the fields in this action. This tutorial requires the file to have been uploaded to OneDrive for Business.
--
-Click **New Step** and add an **Apply to each** action.
--
-Click on **Select an output from previous step**. In the Dynamic content box that appears, select **value**.
--
-## Send a request for entity recognition
-
-If you haven't already, you need to create a [Language resource](https://portal.azure.com/#create/Microsoft.CognitiveServicesTextAnalytics) in the Azure portal.
-
-### Create a Language service connection
-
-In the **Apply to each**, click **Add an action**. Go to your Language resource's **key and endpoint** page in the Azure portal, and get the key and endpoint for your Language resource.
-
-In your flow, enter the following information to create a new Language connection.
-
-> [!NOTE]
-> If you already have created a Language connection and want to change your connection details, Click on the ellipsis on the top right corner, and click **+ Add new connection**.
-
-| Field | Value |
-|--|-|
-| Connection Name | A name for the connection to your Language resource. For example, `TAforPowerAutomate`. |
-| Account key | The key for your Language resource. |
-| Site URL | The endpoint for your Language resource. |
---
-## Extract the Excel content
-
-After the connection is created, search for **Text Analytics** and select **Named Entity Recognition**. This will extract information from the description column of the issue.
--
-Click in the **Text** field and select **Description** from the Dynamic content window that appears. Enter `en` for Language, and a unique name as the document ID (you might need to click **Show advanced options**).
---
-Within the **Apply to each**, click **Add an action** and create another **Apply to each** action. Click inside the text box and select **documents** in the Dynamic Content window that appears.
---
-## Extract the person name
-
-Next, we will find the person entity type in the NER output. Within the **Apply to each 2**, click **Add an action**, and create another **Apply to each** action. Click inside the text box and select **Entities** in the Dynamic Content window that appears.
---
-Within the newly created **Apply to each 3** action, click **Add an action**, and add a **Condition** control.
---
-In the Condition window, click on the first text box. In the Dynamic content window, search for **Category** and select it.
---
-Make sure the second box is set to **is equal to**. Then select the third box, and search for `var_person` in the Dynamic content window.
---
-In the **If yes** condition, type in Excel then select **Update a Row**.
--
-Enter the Excel information, and update the **Key Column**, **Key Value** and **PersonName** fields. This will append the name detected by the API to the Excel sheet.
--
-## Get the phone number
-
-Minimize the **Apply to each 3** action by clicking on the name. Then add another **Apply to each** action to **Apply to each 2**, like before. It will be named **Apply to each 4**. Select the text box, and add **entities** as the output for this action.
--
-Within **Apply to each 4**, add a **Condition** control. It will be named **Condition 2**. In the first text box, search for, and add **categories** from the Dynamic content window. Be sure the center box is set to **is equal to**. Then, in the right text box, enter `var_phone`.
--
-In the **If yes** condition, add an **Update a row** action. Then enter the information like we did above, for the phone numbers column of the Excel sheet. This will append the phone number detected by the API to the Excel sheet.
--
-## Get the plumbing issues
-
-Minimize **Apply to each 4** by clicking on the name. Then create another **Apply to each** in the parent action. Select the text box, and add **Entities** as the output for this action from the Dynamic content window.
--
-Next, the flow will check if the issue description from the Excel table row contains the word "plumbing". If yes, it will add "plumbing" in the IssueType column. If not, we will enter "other."
-
-Inside the **Apply to each 4** action, add a **Condition** Control. It will be named **Condition 3**. In the first text box, search for, and add **Description** from the Excel file, using the Dynamic content window. Be sure the center box says **contains**. Then, in the right text box, find and select `var_plumbing`.
--
-In the **If yes** condition, click **Add an action**, and select **Update a row**. Then enter the information like before. In the IssueType column, select `var_plumbing`. This will apply a "plumbing" label to the row.
-
-In the **If no** condition, click **Add an action**, and select **Update a row**. Then enter the information like before. In the IssueType column, select `var_other`. This will apply an "other" label to the row.
--
-## Test the workflow
-
-In the top-right corner of the screen, click **Save**, then **Test**. Under **Test Flow**, select **manually**. Then click **Test**, and **Run flow**.
-
-The Excel file will get updated in your OneDrive account. It will look like the following.
--
-## Next steps
--
cognitive-services Data Formats https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/orchestration-workflow/concepts/data-formats.md
- Title: Orchestration workflow data formats-
-description: Learn about the data formats accepted by orchestration workflow.
------ Previously updated : 05/19/2022----
-# Data formats accepted by orchestration workflow
-
-When your model learns from your data, it expects the data to be in a specific format. When you tag your data in Language Studio, it gets converted to the JSON format described in this article. You can also manually tag your files.
--
-## JSON file format
-
-If you upload a tags file, it should follow this format.
-
-```json
-{
- "projectFileVersion": "{API-VERSION}",
- "stringIndexType": "Utf16CodeUnit",
- "metadata": {
- "projectKind": "Orchestration",
- "projectName": "{PROJECT-NAME}",
- "multilingual": false,
- "description": "This is a description",
- "language": "{LANGUAGE-CODE}"
- },
- "assets": {
- "projectKind": "Orchestration",
- "intents": [
- {
- "category": "{INTENT1}",
- "orchestration": {
- "targetProjectKind": "Luis|Conversation|QuestionAnswering",
- "luisOrchestration": {
- "appId": "{APP-ID}",
- "appVersion": "0.1",
- "slotName": "production"
- },
- "conversationOrchestration": {
- "projectName": "{PROJECT-NAME}",
- "deploymentName": "{DEPLOYMENT-NAME}"
- },
- "questionAnsweringOrchestration": {
- "projectName": "{PROJECT-NAME}"
- }
- }
- }
- ],
- "utterances": [
- {
- "text": "utterance 1",
- "language": "{LANGUAGE-CODE}",
- "dataset": "{DATASET}",
- "intent": "intent1"
- }
- ]
- }
-}
-```
-
-|Key |Placeholder |Value | Example |
-|||-|--|
-| `api-version` | `{API-VERSION}` | The version of the API you are calling. The value referenced here is for the latest released [model version](../../concepts/model-lifecycle.md#choose-the-model-version-used-on-your-data). | `2022-03-01-preview` |
-|`confidenceThreshold`|`{CONFIDENCE-THRESHOLD}`|This is the threshold score below which the intent will be predicted as [none intent](none-intent.md)|`0.7`|
-| `projectName` | `{PROJECT-NAME}` | The name of your project. This value is case-sensitive. | `EmailApp` |
-| `multilingual` | `false`| Orchestration doesn't support the multilingual feature | `false`|
-| `language` | `{LANGUAGE-CODE}` | A string specifying the language code for the utterances used in your project. See [Language support](../language-support.md) for more information about supported language codes. |`en-us`|
-| `intents` | `[]` | Array containing all the intent types you have in the project. These are the intents used in the orchestration project.| `[]` |
--
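-For reference, a hypothetical filled-in tags file for a project named `EmailApp` that routes an `EmailIntent` to a conversational language understanding project might look like the following. The project, deployment, and intent names, and the `Train` dataset value, are illustrative assumptions only; keep the placeholders from the template above for anything you're unsure of.
-
-```json
-{
-    "projectFileVersion": "{API-VERSION}",
-    "stringIndexType": "Utf16CodeUnit",
-    "metadata": {
-        "projectKind": "Orchestration",
-        "projectName": "EmailApp",
-        "multilingual": false,
-        "description": "Routes email requests",
-        "language": "en-us"
-    },
-    "assets": {
-        "projectKind": "Orchestration",
-        "intents": [
-            {
-                "category": "EmailIntent",
-                "orchestration": {
-                    "targetProjectKind": "Conversation",
-                    "conversationOrchestration": {
-                        "projectName": "EmailProject",
-                        "deploymentName": "production"
-                    }
-                }
-            }
-        ],
-        "utterances": [
-            {
-                "text": "Send an email to Carol about tomorrow's demo",
-                "language": "en-us",
-                "dataset": "Train",
-                "intent": "EmailIntent"
-            }
-        ]
-    }
-}
-```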
-## Utterance format
-
-```json
-[
-    {
-        "intent": "intent1",
-        "language": "{LANGUAGE-CODE}",
-        "text": "{Utterance-Text}"
-    },
-    {
-        "intent": "intent2",
-        "language": "{LANGUAGE-CODE}",
-        "text": "{Utterance-Text}"
-    }
-]
-```
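-
-A filled-in version of this array (values are illustrative only) would look like the following:
-
-```json
-[
-    {
-        "intent": "EmailIntent",
-        "language": "en-us",
-        "text": "Send an email to Carol about tomorrow's demo"
-    },
-    {
-        "intent": "EmailIntent",
-        "language": "en-us",
-        "text": "Reply to the message from my manager"
-    }
-]
-```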
---
-## Next steps
-* You can import your labeled data into your project directly. Learn how to [import a project](../how-to/create-project.md).
-* See the [how-to article](../how-to/tag-utterances.md) for more information about labeling your data. When you're done labeling your data, you can [train your model](../how-to/train-model.md).
cognitive-services Evaluation Metrics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/orchestration-workflow/concepts/evaluation-metrics.md
- Title: Orchestration workflow model evaluation metrics-
-description: Learn about evaluation metrics in orchestration workflow
------ Previously updated : 05/19/2022----
-# Evaluation metrics for orchestration workflow models
-
-Your dataset is split into two parts: a set for training, and a set for testing. The training set is used to train the model, while the testing set is used as a test for the model after training, to calculate the model performance and evaluation. The testing set isn't introduced to the model through the training process, to make sure that the model is tested on new data. <!--See [data splitting](../how-to/train-model.md#data-splitting) for more information-->
-
-Model evaluation is triggered automatically after training is completed successfully. The evaluation process starts by using the trained model to predict user defined intents for utterances in the test set, and compares them with the provided tags (which establishes a baseline of truth). The results are returned so you can review the model's performance. For evaluation, orchestration workflow uses the following metrics:
-
-* **Precision**: Measures how precise/accurate your model is. It's the ratio between the correctly identified positives (true positives) and all identified positives. The precision metric reveals how many of the predicted classes are correctly labeled.
-
- `Precision = #True_Positive / (#True_Positive + #False_Positive)`
-
-* **Recall**: Measures the model's ability to predict actual positive classes. It's the ratio between the predicted true positives and what was actually tagged. The recall metric reveals how many of the predicted classes are correct.
-
- `Recall = #True_Positive / (#True_Positive + #False_Negatives)`
-
-* **F1 score**: The F1 score is a function of Precision and Recall. It's needed when you seek a balance between Precision and Recall.
-
- `F1 Score = 2 * Precision * Recall / (Precision + Recall)`
--
-Precision, recall, and F1 score are calculated for:
-* Each intent separately (intent-level evaluation)
-* For the model collectively (model-level evaluation).
-
-The definitions of precision, recall, and evaluation are the same for intent-level and model-level evaluations. However, the counts for *True Positives*, *False Positives*, and *False Negatives* can differ. For example, consider the following text.
-
-### Example
-
-* Make a response with thank you very much
-* Call my friend
-* Hello
-* Good morning
-
-These are the intents used: *CLUEmail* and *Greeting*
-
-The model could make the following predictions:
-
-| Utterance | Predicted intent | Actual intent |
-|--|--|--|
-|Make a response with thank you very much|CLUEmail|CLUEmail|
-|Call my friend|Greeting|CLUEmail|
-|Hello|CLUEmail|Greeting|
-|Good morning| Greeting|Greeting|
-
-### Intent level evaluation for *CLUEmail* intent
-
-| Key | Count | Explanation |
-|--|--|--|
-| True Positive | 1 | Utterance 1 was correctly predicted as *CLUEmail*. |
-| False Positive | 1 |Utterance 3 was mistakenly predicted as *CLUEmail*. |
-| False Negative | 1 | Utterance 2 was mistakenly predicted as *Greeting*. |
-
-**Precision** = `#True_Positive / (#True_Positive + #False_Positive) = 1 / (1 + 1) = 0.5`
-
-**Recall** = `#True_Positive / (#True_Positive + #False_Negatives) = 1 / (1 + 1) = 0.5`
-
-**F1 Score** = `2 * Precision * Recall / (Precision + Recall) = (2 * 0.5 * 0.5) / (0.5 + 0.5) = 0.5`
-
-### Intent level evaluation for *Greeting* intent
-
-| Key | Count | Explanation |
-|--|--|--|
-| True Positive | 1 | Utterance 4 was correctly predicted as *Greeting*. |
-| False Positive | 1 |Utterance 2 was mistakenly predicted as *Greeting*. |
-| False Negative | 1 | Utterance 3 was mistakenly predicted as *CLUEmail*. |
-
-**Precision** = `#True_Positive / (#True_Positive + #False_Positive) = 1 / (1 + 1) = 0.5`
-
-**Recall** = `#True_Positive / (#True_Positive + #False_Negatives) = 1 / (1 + 1) = 0.5`
-
-**F1 Score** = `2 * Precision * Recall / (Precision + Recall) = (2 * 0.5 * 0.5) / (0.5 + 0.5) = 0.5`
--
-### Model-level evaluation for the collective model
-
-| Key | Count | Explanation |
-|--|--|--|
-| True Positive | 2 | Sum of TP for all intents |
-| False Positive | 2| Sum of FP for all intents |
-| False Negative | 2 | Sum of FN for all intents |
-
-**Precision** = `#True_Positive / (#True_Positive + #False_Positive) = 2 / (2 + 2) = 0.5`
-
-**Recall** = `#True_Positive / (#True_Positive + #False_Negatives) = 2 / (2 + 2) = 0.5`
-
-**F1 Score** = `2 * Precision * Recall / (Precision + Recall) = (2 * 0.5 * 0.5) / (0.5 + 0.5) = 0.5`
--
-## Confusion matrix
-
-A Confusion matrix is an N x N matrix used for model performance evaluation, where N is the number of intents.
-The matrix compares the actual tags with the tags predicted by the model.
-This gives a holistic view of how well the model is performing and what kinds of errors it is making.
-
-You can use the Confusion matrix to identify intents that are too close to each other and often get mistaken (ambiguity). In this case consider merging these intents together. If that isn't possible, consider adding more tagged examples of both intents to help the model differentiate between them.
-
-You can calculate the model-level evaluation metrics from the confusion matrix:
-
-* The *true positive* of the model is the sum of *true Positives* for all intents.
-* The *false positive* of the model is the sum of *false positives* for all intents.
-* The *false Negative* of the model is the sum of *false negatives* for all intents.
-
-## Next steps
-
-[Train a model in Language Studio](../how-to/train-model.md)
cognitive-services Fail Over https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/orchestration-workflow/concepts/fail-over.md
- Title: Save and recover orchestration workflow models-
-description: Learn how to save and recover your orchestration workflow models.
------ Previously updated : 05/19/2022----
-# Back up and recover your orchestration workflow models
-
-When you create a Language resource in the Azure portal, you specify a region for it to be created in. From then on, your resource and all of the operations related to it take place in the specified Azure server region. It's rare, but not impossible, to encounter a network issue that affects an entire region. If your solution needs to always be available, then you should design it to fail over into another region. This requires two Azure Language resources in different regions and the ability to sync your orchestration workflow models across regions.
-
-If your app or business depends on the use of an orchestration workflow model, we recommend that you create a replica of your project in another supported region, so that if a regional outage occurs, you can access your model in the fail-over region where you replicated your project.
-
-Replicating a project means that you export your project metadata and assets and import them into a new project. This only makes a copy of your project settings, intents and utterances. You still need to [train](../how-to/train-model.md) and [deploy](../how-to/deploy-model.md) the models before you can [query them](../how-to/call-api.md) with the [runtime APIs](https://aka.ms/clu-apis).
--
-In this article, you will learn how to use the export and import APIs to replicate your project from one resource to another in a different supported geographical region, how to keep your projects in sync, and the changes needed to your runtime consumption.
-
-## Prerequisites
-
-* Two Azure Language resources, each in a different Azure region.
-
-## Get your resource keys and endpoints
-
-Use the following steps to get the keys and endpoint of your primary and secondary resources. These will be used in the following steps.
--
-> [!TIP]
-> Keep a note of keys and endpoints for both primary and secondary resources. Use these values to replace the following placeholders:
-`{PRIMARY-ENDPOINT}`, `{PRIMARY-RESOURCE-KEY}`, `{SECONDARY-ENDPOINT}` and `{SECONDARY-RESOURCE-KEY}`.
-> Also take note of your project name, your model name and your deployment name. Use these values to replace the following placeholders: `{PROJECT-NAME}`, `{MODEL-NAME}` and `{DEPLOYMENT-NAME}`.
-
-## Export your primary project assets
-
-Start by exporting the project assets from the project in your primary resource.
-
-### Submit export job
-
-Replace the placeholders in the following request with your `{PRIMARY-ENDPOINT}` and `{PRIMARY-RESOURCE-KEY}` that you obtained in the first step.
--
-### Get export job status
-
-Replace the placeholders in the following request with your `{PRIMARY-ENDPOINT}` and `{PRIMARY-RESOURCE-KEY}` that you obtained in the first step.
--
-Copy the response body as you will use it as the body for the next import job.
-
-## Import to a new project
-
-Now go ahead and import the exported project assets in your new project in the secondary region so you can replicate it.
-
-### Submit import job
-
-Replace the placeholders in the following request with your `{SECONDARY-ENDPOINT}` and `{SECONDARY-RESOURCE-KEY}` that you obtained in the first step.
--
-### Get import job status
-
-Replace the placeholders in the following request with your `{SECONDARY-ENDPOINT}` and `{SECONDARY-RESOURCE-KEY}` that you obtained in the first step.
---
-## Train your model
-
-After importing your project, you have only copied the project's metadata and assets. You still need to train your model, which will incur usage on your account.
-
-### Submit training job
-
-Replace the placeholders in the following request with your `{SECONDARY-ENDPOINT}` and `{SECONDARY-RESOURCE-KEY}` that you obtained in the first step.
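-
-As a rough sketch, the training request body names the model label to train; the field names below are assumptions, so confirm them against the [authoring API reference](https://aka.ms/clu-authoring-apis):
-
-```json
-{
-    "modelLabel": "{MODEL-NAME}",
-    "trainingMode": "standard"
-}
-```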
---
-### Get Training Status
-
-Replace the placeholders in the following request with your `{SECONDARY-ENDPOINT}` and `{SECONDARY-RESOURCE-KEY}` that you obtained in the first step.
--
-## Deploy your model
-
-This is the step where you make your trained model available for consumption via the [runtime prediction API](https://aka.ms/ct-runtime-swagger).
-
-> [!TIP]
-> Use the same deployment name as your primary project for easier maintenance and minimal changes to your system to handle redirecting your traffic.
-
-### Submit deployment job
-
-Replace the placeholders in the following request with your `{SECONDARY-ENDPOINT}` and `{SECONDARY-RESOURCE-KEY}` that you obtained in the first step.
--
-### Get the deployment status
-
-Replace the placeholders in the following request with your `{SECONDARY-ENDPOINT}` and `{SECONDARY-RESOURCE-KEY}` that you obtained in the first step.
--
-## Changes in calling the runtime
-
-Within your system, at the step where you call the [runtime API](https://aka.ms/clu-apis), check the response code returned from the submit task API. If you observe a **consistent** failure in submitting the request, this could indicate an outage in your primary region. A single failure doesn't necessarily mean an outage; it may be a transient issue. Retry submitting the job through the secondary resource you have created. For the second request, use your `{YOUR-SECONDARY-ENDPOINT}` and secondary key. If you have followed the steps above, `{PROJECT-NAME}` and `{DEPLOYMENT-NAME}` would be the same, so no changes are required to the request body.
-
-If you revert to using your secondary resource, you may observe a slight increase in latency because of the difference in regions where your model is deployed.
-
-## Check if your projects are out of sync
-
-Maintaining the freshness of both projects is an important part of the process. You need to frequently check whether any updates were made to your primary project so that you can move them over to your secondary project. This way, if your primary region fails and you move to the secondary region, you can expect similar model performance, since it already contains the latest updates. Setting the frequency at which you check whether your projects are in sync is an important choice. We recommend that you do this check daily to guarantee the freshness of data in your secondary model.
-
-### Get project details
-
-Use the following URL to get your project details; one of the keys returned in the body indicates the last modified date of the project.
-Repeat the following step twice, once for your primary project and once for your secondary project, and compare the timestamps returned for both of them to check whether they are out of sync.
--
-Repeat the same steps for your replicated project using `{SECONDARY-ENDPOINT}` and `{SECONDARY-RESOURCE-KEY}`. Compare the returned `lastModifiedDateTime` from both projects. If your primary project was modified more recently than your secondary one, you need to repeat the steps of [exporting](#export-your-primary-project-assets), [importing](#import-to-a-new-project), [training](#train-your-model) and [deploying](#deploy-your-model) your model.
-
-## Next steps
-
-In this article, you have learned how to use the export and import APIs to replicate your project to a secondary Language resource in another region. Next, explore the API reference docs to see what else you can do with authoring APIs.
-
-* [Authoring REST API reference](https://aka.ms/clu-authoring-apis)
-
-* [Runtime prediction REST API reference](https://aka.ms/clu-apis)
cognitive-services None Intent https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/orchestration-workflow/concepts/none-intent.md
- Title: Orchestration workflow none intent-
-description: Learn about the default None intent in orchestration workflow.
------ Previously updated : 06/03/2022-----
-# The "None" intent in orchestration workflow
-
-Every project in orchestration workflow includes a default None intent. The None intent is a required intent and can't be deleted or renamed. The intent is meant to categorize any utterances that do not belong to any of your other custom intents.
-
-An utterance can be predicted as the None intent if the top scoring intent's score is **lower** than the None score threshold. It can also be predicted if the utterance is similar to examples added to the None intent.
-
-## None score threshold
-
-You can go to the **project settings** of any project and set the **None score threshold**. The threshold is a decimal score from **0.0** to **1.0**.
-
-For any query and utterance, if the highest scoring intent ends up **lower** than the threshold score, the top intent is automatically replaced with the None intent. The scores of all the other intents remain unchanged.
-
-The score should be set according to your own observations of prediction scores, as they may vary by project. A higher threshold score forces the utterances to be more similar to the examples you have in your training data.
-
-When you export a project's JSON file, the None score threshold is defined in the _**"settings"**_ parameter of the JSON as the _**"confidenceThreshold"**_, which accepts a decimal value between 0.0 and 1.0.
-
-The default score threshold for orchestration workflow projects is set at **0.5** when creating a new project in Language Studio.
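-
-For example, in the exported project JSON the setting appears as follows (the surrounding structure is abridged for illustration):
-
-```json
-{
-    "settings": {
-        "confidenceThreshold": 0.7
-    }
-}
-```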
-
-> [!NOTE]
-> During model evaluation of your test set, the None score threshold is not applied.
-
-## Adding examples to the None intent
-
-The None intent is also treated like any other intent in your project. If there are utterances that you want predicted as None, consider adding similar examples to them in your training data. For example, if you would like to categorize utterances that are not important to your project as None, then add those utterances to your None intent.
-
-## Next steps
-
-[Orchestration workflow overview](../overview.md)
cognitive-services Faq https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/orchestration-workflow/faq.md
- Title: Frequently Asked Questions for orchestration projects-
-description: Use this article to quickly get the answers to FAQ about orchestration projects
------ Previously updated : 06/21/2022----
-# Frequently asked questions for orchestration workflows
-
-Use this article to quickly get the answers to common questions about orchestration workflows.
-
-## How do I create a project?
-
-See the [quickstart](./quickstart.md) to quickly create your first project, or the [how-to article](./how-to/create-project.md) for more details.
-
-## How do I connect other service applications in orchestration workflow projects?
-
-See [How to create projects and build schemas](./how-to/create-project.md) for information on connecting another project as an intent.
-
-## Which LUIS applications can I connect to in orchestration workflow projects?
-
-LUIS applications that use the Language resource as their authoring resource will be available for connection. You can only connect to LUIS applications that are owned by the same resource. This option will only be available for resources in West Europe, as it's the only common available region between LUIS and CLU.
-
-## Which question answering project can I connect to in orchestration workflow projects?
-
-Question answering projects that use the Language resource will be available for connection. You can only connect to question answering projects that are in the same Language resource.
-
-## Training is taking a long time, is this expected?
-
-For orchestration projects, long training times are expected. Based on the number of examples you have, your training times may vary from 5 minutes to 1 hour or more.
-
-## Can I add entities to orchestration workflow projects?
-
-No. Orchestration projects are only enabled for intents that can be connected to other projects for routing.
-
-<!--
-## Which languages are supported in this feature?
-
-See the [language support](./language-support.md) article.
>
-## How do I get more accurate results for my project?
-
-See [evaluation metrics](./concepts/evaluation-metrics.md) for information on how models are evaluated, and metrics you can use to improve accuracy.
-<!--
-## How many intents, and utterances can I add to a project?
-
-See the [service limits](./service-limits.md) article.
>
-## Can I label the same word as 2 different entities?
-
-Unlike LUIS, you cannot label the same text as 2 different entities. Learned components across different entities are mutually exclusive, and only one learned span is predicted for each set of characters.
-
-## Is there any SDK support?
-
-Yes, only for predictions, and [samples are available](https://aka.ms/cluSampleCode). There is currently no authoring support for the SDK.
-
-## Are there APIs for this feature?
-
-Yes, all the APIs are available.
-* [Authoring APIs](https://aka.ms/clu-authoring-apis)
-* [Prediction API](/rest/api/language/2023-04-01/conversation-analysis-runtime/analyze-conversation)
-
-## Next steps
-
-[Orchestration workflow overview](overview.md)
cognitive-services Glossary https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/orchestration-workflow/glossary.md
- Title: Definitions used in orchestration workflow-
-description: Learn about definitions used in orchestration workflow.
------ Previously updated : 05/19/2022----
-# Terms and definitions used in orchestration workflow
-Use this article to learn about some of the definitions and terms you may encounter when using orchestration workflow.
-
-## F1 score
-The F1 score is a function of Precision and Recall. It's needed when you seek a balance between [precision](#precision) and [recall](#recall).
-
-## Intent
-An intent represents a task or action the user wants to perform. It is a purpose or goal expressed in a user's input, such as booking a flight, or paying a bill.
-
-## Model
-A model is an object that's trained to do a certain task, in this case conversation understanding tasks. Models are trained by providing labeled data to learn from so they can later be used to understand utterances.
-
-* **Model evaluation** is the process that happens right after training to know how well your model performs.
-* **Deployment** is the process of assigning your model to a deployment to make it available for use via the [prediction API](https://aka.ms/ct-runtime-swagger).
-
-## Overfitting
-Overfitting happens when the model is fixated on the specific examples and is not able to generalize well.
-
-## Precision
-Measures how precise/accurate your model is. It's the ratio between the correctly identified positives (true positives) and all identified positives. The precision metric reveals how many of the predicted classes are correctly labeled.
-
-## Project
-A project is a work area for building your custom ML models based on your data. Your project can only be accessed by you and others who have access to the Azure resource being used.
-
-## Recall
-Measures the model's ability to predict actual positive classes. It's the ratio between the predicted true positives and what was actually tagged. The recall metric reveals how many of the predicted classes are correct.
-
-## Schema
-Schema is defined as the combination of intents within your project. Schema design is a crucial part of your project's success. When creating a schema, you want to think about which intents should be included in your project.
-
-## Training data
-Training data is the set of information that is needed to train a model.
-
-## Utterance
-
-An utterance is user input that is short text representative of a sentence in a conversation. It is a natural language phrase such as "book 2 tickets to Seattle next Tuesday". Example utterances are added to train the model, and the model predicts on new utterances at runtime.
--
-## Next steps
-
-* [Data and service limits](service-limits.md).
-* [Orchestration workflow overview](../overview.md).
cognitive-services Build Schema https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/orchestration-workflow/how-to/build-schema.md
- Title: How to build an orchestration project schema
-description: Learn how to define intents for your orchestration workflow project.
------- Previously updated : 05/20/2022----
-# How to build your project schema for orchestration workflow
-
-In orchestration workflow projects, the *schema* is defined as the combination of intents within your project. Schema design is a crucial part of your project's success. When creating a schema, you want to think about which intents should be included in your project.
-
-## Guidelines and recommendations
-
-Consider the following guidelines and recommendations for your project:
-
-* Build orchestration projects when you need to manage the NLU for a multi-faceted virtual assistant or chatbot, where the intents, entities, and utterances would begin to be far more difficult to maintain over time in one project.
-* Orchestrate between different domains. A domain is a collection of intents and entities that serve the same purpose, such as Email commands vs. Restaurant commands.
-* If there is an overlap of similar intents between domains, create the common intents in a separate domain and remove them from the others for the best accuracy.
-* For intents that are general across domains, such as "Greeting", "Confirm", and "Reject", you can either add them in a separate domain or as direct intents in the Orchestration project.
-* Orchestrate to Custom question answering knowledge base when a domain has FAQ type questions with static answers. Ensure that the vocabulary and language used to ask questions is distinctive from the one used in the other Conversational Language Understanding projects and LUIS applications.
-* If an utterance is being misclassified and routed to an incorrect intent, then add similar utterances to the intent to influence its results. If the intent is connected to a project, then add utterances to the connected project itself. After you retrain your orchestration project, the new utterances in the connected project will influence predictions.
-* Add test data to your orchestration projects to validate there isn't confusion between linked projects and other intents.
--
-## Add intents
-
-To build a project schema within [Language Studio](https://aka.ms/languageStudio):
-
-1. Select **Schema definition** from the left side menu.
-
-2. To create an intent, select **Add** from the top menu. You will be prompted to type in a name for the intent.
-
-3. To connect your intent to other existing projects, select **Yes, I want to connect it to an existing project** option. You can alternatively create a non-connected intent by selecting the **No, I don't want to connect to a project** option.
-
-4. If you choose to create a connected intent, choose from **Connected service** the service you are connecting to, then choose the **project name**. You can connect your intent to only one project.
-
- :::image type="content" source="../media/build-schema-page.png" alt-text="A screenshot showing the schema creation page in Language Studio." lightbox="../media/build-schema-page.png":::
-
-> [!TIP]
-> Use connected intents to connect to other projects (conversational language understanding, LUIS, and question answering)
-
-5. Click on **Add intent** to add your intent.
-
-## Next steps
-
-* [Add utterances](tag-utterances.md)
-
cognitive-services Call Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/orchestration-workflow/how-to/call-api.md
- Title: How to send requests to orchestration workflow-
-description: Learn about sending requests for orchestration workflow.
------ Previously updated : 06/28/2022----
-# Query deployment for intent predictions
-
-After the deployment is added successfully, you can query the deployment for intent and entity predictions from your utterance, based on the model you assigned to the deployment.
-You can query the deployment programmatically through the [Prediction API](https://aka.ms/ct-runtime-swagger) or through the [Client libraries (Azure SDK)](#send-an-orchestration-workflow-request).
-
-## Test deployed model
-
-You can use Language Studio to submit an utterance, get predictions and visualize the results.
----
-## Send an orchestration workflow request
-
-# [Language Studio](#tab/language-studio)
--
-# [REST APIs](#tab/REST-APIs)
-
-First you will need to get your resource key and endpoint:
--
-### Query your model
--
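-
-As an illustrative sketch, assuming the `Conversation` task shape used by the runtime prediction API (confirm the exact fields against the [prediction API reference](https://aka.ms/ct-runtime-swagger)), the request body looks roughly like this:
-
-```json
-{
-    "kind": "Conversation",
-    "analysisInput": {
-        "conversationItem": {
-            "id": "1",
-            "participantId": "1",
-            "text": "Send an email to Carol about tomorrow's demo"
-        }
-    },
-    "parameters": {
-        "projectName": "{PROJECT-NAME}",
-        "deploymentName": "{DEPLOYMENT-NAME}"
-    }
-}
-```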
-# [Client libraries (Azure SDK)](#tab/azure-sdk)
-
-First you will need to get your resource key and endpoint:
--
-### Use the client libraries (Azure SDK)
-
-You can also use the client libraries provided by the Azure SDK to send requests to your model.
-
-> [!NOTE]
-> The client library for conversational language understanding is only available for:
-> * .NET
-> * Python
-
-1. Go to your resource overview page in the [Azure portal](https://portal.azure.com/#home)
-
-2. From the menu on the left side, select **Keys and Endpoint**. Use the endpoint for the API requests, and use the key for the `Ocp-Apim-Subscription-Key` header.
-
- :::image type="content" source="../../custom-text-classification/media/get-endpoint-azure.png" alt-text="Screenshot showing how to get the Azure endpoint." lightbox="../../custom-text-classification/media/get-endpoint-azure.png":::
-
-3. Download and install the client library package for your language of choice:
-
- |Language |Package version |
- |||
- |.NET | [1.0.0](https://www.nuget.org/packages/Azure.AI.Language.Conversations/1.0.0) |
- |Python | [1.0.0](https://pypi.org/project/azure-ai-language-conversations/1.0.0) |
-
-4. After you've installed the client library, use the following samples on GitHub to start calling the API.
-
- * [C#](https://github.com/Azure/azure-sdk-for-net/tree/Azure.AI.Language.Conversations_1.0.0/sdk/cognitivelanguage/Azure.AI.Language.Conversations)
- * [Python](https://github.com/Azure/azure-sdk-for-python/tree/azure-ai-language-conversations_1.0.0/sdk/cognitivelanguage/azure-ai-language-conversations)
-
-5. See the following reference documentation for more information:
-
- * [C#](/dotnet/api/azure.ai.language.conversations)
- * [Python](/python/api/azure-ai-language-conversations/azure.ai.language.conversations.aio)
-
--
-## Next steps
-
-* [Orchestration workflow overview](../overview.md)
cognitive-services Create Project https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/orchestration-workflow/how-to/create-project.md
- Title: Create orchestration workflow projects and use Azure resources-
-description: Use this article to learn how to create projects in orchestration workflow
------ Previously updated : 03/23/2023----
-# How to create projects in orchestration workflow
-
-Orchestration workflow allows you to create projects that connect your applications to:
-* Conversational Language Understanding
-* Question Answering
-* LUIS
-
-## Prerequisites
-
-Before you start using orchestration workflow, you will need several things:
-
-* An Azure subscription - [Create one for free](https://azure.microsoft.com/free/cognitive-services).
-* An Azure Language resource
-
-### Create a Language resource
-
-Before you start using orchestration workflow, you will need an Azure Language resource.
-
-> [!NOTE]
-> * You need to have an **owner** role assigned on the resource group to create a Language resource.
-> * If you are planning to use question answering, you have to enable question answering in resource creation
---
-## Sign in to Language Studio
-
-To create a new intent, click on the *+Add* button and start by giving your intent a **name**. You will see two options: to connect to a project or not. You can connect to LUIS, question answering, or conversational language understanding projects, or choose the **no** option.
--
-## Create an orchestration workflow project
-
-Once you have a Language resource created, create an orchestration workflow project.
-
-### [Language Studio](#tab/language-studio)
--
-### [REST APIs](#tab/rest-api)
---
-## Import an orchestration workflow project
-
-### [Language Studio](#tab/language-studio)
-
-You can export an orchestration workflow project as a JSON file at any time by going to the orchestration workflow projects page, selecting a project, and from the top menu, clicking on **Export**.
-
-That project can be reimported as a new project. If you import a project with the exact same name, it replaces the project's data with the newly imported project's data.
-
-To import a project, click on the arrow button next to **Create a new project** and select **Import**, then select the JSON file.
--
-### [REST APIs](#tab/rest-api)
-
-You can import an orchestration workflow JSON into the service
----
-## Export project
-
-### [Language Studio](#tab/language-studio)
-
-You can export an orchestration workflow project as a JSON file at any time by going to the orchestration workflow projects page, selecting a project, and pressing **Export**.
-
-### [REST APIs](#tab/rest-api)
-
-You can export an orchestration workflow project as a JSON file at any time.
---
-## Get orchestration project details
-
-### [Language Studio](#tab/language-studio)
--
-### [REST APIs](#tab/rest-api)
----
-## Delete project
-
-### [Language Studio](#tab/language-studio)
--
-### [REST APIs](#tab/rest-api)
-
-When you don't need your project anymore, you can delete your project using the APIs.
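A minimal Python sketch of the delete call is shown below; the route and `api-version` are assumptions based on the authoring REST reference.

```python
import requests

# Hypothetical values -- replace with your own resource details.
endpoint = "https://<your-language-resource>.cognitiveservices.azure.com"
headers = {"Ocp-Apim-Subscription-Key": "<your-api-key>"}
project_name = "MyOrchestrationProject"

# Assumed delete route; deleting a project runs as a long-running operation.
url = f"{endpoint}/language/authoring/analyze-conversations/projects/{project_name}"
response = requests.delete(url, params={"api-version": "2023-04-01"}, headers=headers)
response.raise_for_status()
print(response.status_code)
```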
-------
-## Next Steps
-
-[Build schema](./build-schema.md)
cognitive-services Deploy Model https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/orchestration-workflow/how-to/deploy-model.md
- Title: How to deploy an orchestration workflow project-
-description: Learn about deploying orchestration workflow projects.
------ Previously updated : 10/12/2022----
-# Deploy an orchestration workflow model
-
-Once you are satisfied with how your model performs, it's ready to be deployed and queried for predictions from utterances. Deploying a model makes it available for use through the [prediction API](https://aka.ms/ct-runtime-swagger).
-
-## Prerequisites
-
-* A successfully [created project](create-project.md)
-* [Labeled utterances](tag-utterances.md) and successfully [trained model](train-model.md)
-<!--* Reviewed the [model evaluation details](view-model-evaluation.md) to determine how your model is performing.-->
-
-See [project development lifecycle](../overview.md#project-development-lifecycle) for more information.
-
-## Deploy model
-
-After you have reviewed the model's performance and decided it's fit to be used in your environment, you need to assign it to a deployment to be able to query it. Assigning the model to a deployment makes it available for use through the [prediction API](https://aka.ms/clu-apis). It's recommended to create a deployment named `production`, to which you assign the best model you have built so far and use it in your system. You can create another deployment called `staging`, to which you can assign the model you're currently working on so you can test it. You can have a maximum of 10 deployments in your project.
-
-# [Language Studio](#tab/language-studio)
-
-
-# [REST APIs](#tab/rest-api)
-
-### Submit deployment job
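As an illustrative sketch (not the official sample), submitting a deployment job over the authoring REST API might look like the following Python snippet; the route, `api-version`, and body field (`trainedModelLabel`) are assumptions based on the [authoring REST API reference](https://aka.ms/clu-authoring-apis).

```python
import requests

# Hypothetical values -- replace with your own resource and project details.
endpoint = "https://<your-language-resource>.cognitiveservices.azure.com"
headers = {"Ocp-Apim-Subscription-Key": "<your-api-key>", "Content-Type": "application/json"}
project_name = "MyOrchestrationProject"
deployment_name = "production"

# Assumed deployment route; assigning a trained model to a deployment is a PUT on the
# deployment resource and runs as a long-running operation.
url = (f"{endpoint}/language/authoring/analyze-conversations/projects/"
       f"{project_name}/deployments/{deployment_name}")
body = {"trainedModelLabel": "v1"}  # the label of the model you trained earlier

response = requests.put(url, params={"api-version": "2023-04-01"}, headers=headers, json=body)
response.raise_for_status()
print("Deployment job:", response.headers.get("operation-location"))
```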
--
-### Get deployment job status
----
-## Swap deployments
-
-After you are done testing a model assigned to one deployment, you might want to assign it to another deployment. Swapping deployments involves:
-* Taking the model assigned to the first deployment and assigning it to the second deployment.
-* Taking the model assigned to the second deployment and assigning it to the first deployment.
-
-This can be used to swap your `production` and `staging` deployments when you want to take the model assigned to `staging` and assign it to `production`.
-
-# [Language Studio](#tab/language-studio)
--
-# [REST APIs](#tab/rest-api)
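A rough Python sketch of the swap request described above is shown here; the `deployments:swap` route and the body field names are assumptions based on the authoring REST reference, so verify them there.

```python
import requests

# Hypothetical values -- replace with your own resource and project details.
endpoint = "https://<your-language-resource>.cognitiveservices.azure.com"
headers = {"Ocp-Apim-Subscription-Key": "<your-api-key>", "Content-Type": "application/json"}
project_name = "MyOrchestrationProject"

# Assumed swap route; the operation exchanges the models assigned to the two named deployments.
url = f"{endpoint}/language/authoring/analyze-conversations/projects/{project_name}/deployments:swap"
body = {"firstDeploymentName": "production", "secondDeploymentName": "staging"}

response = requests.post(url, params={"api-version": "2023-04-01"}, headers=headers, json=body)
response.raise_for_status()
print("Swap job:", response.headers.get("operation-location"))
```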
----
-## Delete deployment
-
-# [Language Studio](#tab/language-studio)
--
-# [REST APIs](#tab/rest-api)
----
-## Assign deployment resources
-
-You can [deploy your project to multiple regions](../../concepts/custom-features/multi-region-deployment.md) by assigning different Language resources that exist in different regions.
-
-# [Language Studio](#tab/language-studio)
--
-# [REST APIs](#tab/rest-api)
----
-## Unassign deployment resources
-
-When you unassign or remove a deployment resource from a project, you also delete all the deployments that have been deployed to that resource's region.
-
-# [Language Studio](#tab/language-studio)
--
-# [REST APIs](#tab/rest-api)
----
-## Next steps
-
-Use [prediction API to query your model](call-api.md)
cognitive-services Tag Utterances https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/orchestration-workflow/how-to/tag-utterances.md
- Title: How to tag utterances in an orchestration workflow project-
-description: Use this article to tag utterances
------ Previously updated : 05/20/2022----
-# Add utterances in Language Studio
-
-Once you have [built a schema](build-schema.md), you should add training and testing utterances to your project. The utterances should be similar to what your users will use when interacting with the project. When you add an utterance, you have to assign which intent it belongs to.
-
-Adding utterances is a crucial step in the project development lifecycle; this data will be used in the next step when training your model so that your model can learn from the added data. If you already have utterances, you can directly [import them into your project](create-project.md#import-an-orchestration-workflow-project), but you need to make sure that your data follows the [accepted data format](../concepts/data-formats.md). Labeled data informs the model how to interpret text, and is used for training and evaluation.
-
-## Prerequisites
-
-* A successfully [created project](create-project.md).
-
-See the [project development lifecycle](../overview.md#project-development-lifecycle) for more information.
--
-## How to add utterances
-
-Use the following steps to add utterances:
-
-1. Go to your project page in [Language Studio](https://aka.ms/languageStudio).
-
-2. From the left side menu, select **Add utterances**.
-
-3. From the top pivots, you can change the view to be **training set** or **testing set**. Learn more about [training and testing sets](train-model.md#data-splitting) and how they're used for model training and evaluation.
-
-4. From the **Select intent** dropdown menu, select one of the intents. Type your utterance in the utterance text box and press the Enter key to add it. You can also upload your utterances directly by selecting **Upload utterance file** from the top menu; make sure the file follows the [accepted format](../concepts/data-formats.md#utterance-format).
-
- > [!Note]
- > If you are planning on using **Automatically split the testing set from training data** splitting, add all your utterances to the training set.
- > You can add training utterances to **non-connected** intents only.
-
- :::image type="content" source="../media/tag-utterances.png" alt-text="A screenshot of the page for tagging utterances in Language Studio." lightbox="../media/tag-utterances.png":::
-
-5. Under **Distribution** you can view the distribution across training and testing sets. You can **view utterances per intent**:
-
-* Utterances per non-connected intent
-* Utterances per connected intent
-
-## Next Steps
-* [Train Model](./train-model.md)
cognitive-services Train Model https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/orchestration-workflow/how-to/train-model.md
- Title: How to train and evaluate models in orchestration workflow
-description: Learn how to train a model for orchestration workflow projects.
------- Previously updated : 05/20/2022----
-# Train your orchestration workflow model
-
-Training is the process where the model learns from your [labeled utterances](tag-utterances.md). After training is completed, you will be able to [view model performance](view-model-evaluation.md).
-
-To train a model, start a training job. Only successfully completed jobs create a model. Training jobs expire after seven days; after this time, you will no longer be able to retrieve the job details. If your training job completed successfully and a model was created, the model won't be affected by the job expiring. You can only have one training job running at a time, and you can't start other jobs in the same project.
-
-The training times can be anywhere from a few seconds when dealing with simple projects, up to a couple of hours when you reach the [maximum limit](../service-limits.md) of utterances.
-
-Model evaluation is triggered automatically after training is completed successfully. The evaluation process starts by using the trained model to run predictions on the utterances in the testing set, and compares the predicted results with the provided labels (which establishes a baseline of truth). The results are returned so you can review the [model's performance](view-model-evaluation.md).
-
-## Prerequisites
-
-* A successfully [created project](create-project.md) with a configured Azure blob storage account
-
-See the [project development lifecycle](../overview.md#project-development-lifecycle) for more information.
-
-## Data splitting
-
-Before you start the training process, labeled utterances in your project are divided into a training set and a testing set. Each one of them serves a different function.
-The **training set** is used to train the model; this is the set from which the model learns the labeled utterances.
-The **testing set** is a blind set that isn't introduced to the model during training but only during evaluation.
-
-After the model is trained successfully, the model can be used to make predictions from the utterances in the testing set. These predictions are used to calculate [evaluation metrics](../concepts/evaluation-metrics.md).
-
-It is recommended to make sure that all your intents are adequately represented in both the training and testing set.
-
-Orchestration workflow supports two methods for data splitting:
-
-* **Automatically splitting the testing set from training data**: The system will split your tagged data between the training and testing sets, according to the percentages you choose. The recommended percentage split is 80% for training and 20% for testing.
-
- > [!NOTE]
- > If you choose the **Automatically splitting the testing set from training data** option, only the data assigned to training set will be split according to the percentages provided.
-
-* **Use a manual split of training and testing data**: This method enables users to define which utterances should belong to which set. This step is only enabled if you have added utterances to your testing set during [labeling](tag-utterances.md).
-
-> [!Note]
-> You can add utterances to the training dataset for non-connected intents only.
--
-## Train model
-
-### Start training job
-
-#### [Language Studio](#tab/language-studio)
--
-#### [REST APIs](#tab/rest-api)
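As a rough illustration, starting a training job over the authoring REST API might look like the following Python sketch; the `:train` route, `api-version`, and body fields (including the automatic 80/20 split in `evaluationOptions`) are assumptions based on the [authoring REST API reference](https://aka.ms/clu-authoring-apis), so confirm the exact request there.

```python
import requests

# Hypothetical values -- replace with your own resource and project details.
endpoint = "https://<your-language-resource>.cognitiveservices.azure.com"
headers = {"Ocp-Apim-Subscription-Key": "<your-api-key>", "Content-Type": "application/json"}
project_name = "MyOrchestrationProject"

# Assumed train route; this body requests an automatic 80/20 split of the training
# data for evaluation, matching the data splitting options described in this article.
url = f"{endpoint}/language/authoring/analyze-conversations/projects/{project_name}/:train"
body = {
    "modelLabel": "v1",
    "evaluationOptions": {
        "kind": "percentage",
        "trainingSplitPercentage": 80,
        "testingSplitPercentage": 20,
    },
}

response = requests.post(url, params={"api-version": "2023-04-01"}, headers=headers, json=body)
response.raise_for_status()
# Training is a long-running operation; poll the job URL returned in this header.
print("Training job:", response.headers.get("operation-location"))
```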
----
-### Get training job status
-
-#### [Language Studio](#tab/language-studio)
-
-Click on the training job ID in the list. A side pane appears where you can check the **Training progress**, **Job status**, and other details for this job.
-
-<!--:::image type="content" source="../../../media/train-pane.png" alt-text="A screenshot showing the training job details." lightbox="../../../media/train-pane.png":::-->
--
-#### [REST APIs](#tab/rest-api)
-
-Training could take some time depending on the size of your training data and the complexity of your schema. You can use the following request to keep polling the status of the training job until it successfully completes.
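For example, a simple polling loop in Python might look like the following sketch; it assumes you kept the job URL from the `operation-location` header of the train request, and the terminal status values shown are assumptions to verify against the REST reference.

```python
import time
import requests

# Hypothetical values -- replace with your key and the URL returned in the
# "operation-location" header when you submitted the training job.
headers = {"Ocp-Apim-Subscription-Key": "<your-api-key>"}
job_url = "<operation-location URL from the train request>"

# Poll until the training job reaches a terminal state.
while True:
    job = requests.get(job_url, headers=headers).json()
    status = job.get("status")
    print("Training status:", status)
    if status in ("succeeded", "failed", "cancelled"):
        break
    time.sleep(10)
```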
---
-### Cancel training job
-
-# [Language Studio](#tab/language-studio)
--
-# [REST APIs](#tab/rest-api)
----
-## Next steps
-* [Model evaluation metrics concepts](../concepts/evaluation-metrics.md)
-* How to [deploy a model](./deploy-model.md)
cognitive-services View Model Evaluation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/orchestration-workflow/how-to/view-model-evaluation.md
- Title: How to view orchestration workflow models details
-description: Learn how to view details for your model and evaluate its performance.
------- Previously updated : 10/12/2022----
-# View orchestration workflow model details
-
-After model training is completed, you can view your model details and see how well it performs against the test set. Observing how well your model performed is called evaluation. The test set consists of data that wasn't introduced to the model during the training process.
-
-> [!NOTE]
-> Using the **Automatically split the testing set from training data** option may result in different model evaluation results every time you [train a new model](train-model.md), because the test set is selected randomly from your utterances. To make sure that the evaluation is calculated on the same test set every time you train a model, use the **Use a manual split of training and testing data** option when starting a training job, and define your **Testing set** when [adding your utterances](tag-utterances.md).
--
-## Prerequisites
-
-Before viewing a model's evaluation, you need:
-
-* [An orchestration workflow project](create-project.md).
-* A successfully [trained model](train-model.md)
-
-See the [project development lifecycle](../overview.md#project-development-lifecycle) for more information.
-
-## Model details
-
-### [Language studio](#tab/Language-studio)
-
-In the **view model details** page, you'll be able to see all your models, with their current training status, and the date they were last trained.
--
-### [REST APIs](#tab/REST-APIs)
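For illustration, listing the trained models over the authoring REST API might look roughly like this Python sketch; the `/models` route and `api-version` are assumptions based on the authoring REST reference, and the response is printed as-is rather than assuming specific field names.

```python
import requests

# Hypothetical values -- replace with your own resource and project details.
endpoint = "https://<your-language-resource>.cognitiveservices.azure.com"
headers = {"Ocp-Apim-Subscription-Key": "<your-api-key>"}
project_name = "MyOrchestrationProject"

# Assumed route for listing the trained models in a project.
url = f"{endpoint}/language/authoring/analyze-conversations/projects/{project_name}/models"
response = requests.get(url, params={"api-version": "2023-04-01"}, headers=headers)
response.raise_for_status()

# Each entry describes a trained model, including its label and training details.
for model in response.json().get("value", []):
    print(model)
```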
----
-## Load or export model data
-
-### [Language studio](#tab/Language-studio)
---
-### [REST APIs](#tab/REST-APIs)
----
-## Delete model
-
-### [Language studio](#tab/Language-studio)
---
-### [REST APIs](#tab/REST-APIs)
------
-## Next steps
-
-* As you review how your model performs, learn about the [evaluation metrics](../concepts/evaluation-metrics.md) that are used.
-* If you're happy with your model performance, you can [deploy your model](deploy-model.md)
cognitive-services Language Support https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/orchestration-workflow/language-support.md
- Title: Language support for orchestration workflow-
-description: Learn about the languages supported by orchestration workflow.
------ Previously updated : 05/17/2022----
-# Language support for orchestration workflow projects
-
-Use this article to learn about the languages currently supported by orchestration workflow projects.
-
-## Multilingual options
-
-Orchestration workflow projects do not support the multi-lingual option.
--
-## Language support
-
-Orchestration workflow projects support the following languages:
-
-| Language | Language code |
-| | |
-| German | `de` |
-| English | `en-us` |
-| Spanish | `es` |
-| French | `fr` |
-| Italian | `it` |
-| Portuguese (Brazil) | `pt-br` |
--
-## Next steps
-
-* [Orchestration workflow overview](overview.md)
-* [Service limits](service-limits.md)
cognitive-services Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/orchestration-workflow/overview.md
- Title: Orchestration workflows - Azure Cognitive Services-
-description: Customize an AI model to connect your Conversational Language Understanding, question answering and LUIS applications.
------ Previously updated : 08/10/2022----
-# What is orchestration workflow?
--
-Orchestration workflow is one of the features offered by [Azure Cognitive Service for Language](../overview.md). It is a cloud-based API service that applies machine-learning intelligence to enable you to build orchestration models to connect [Conversational Language Understanding (CLU)](../conversational-language-understanding/overview.md), [Question Answering](../question-answering/overview.md) projects and [LUIS](../../luis/what-is-luis.md) applications.
-By creating an orchestration workflow, developers can iteratively tag utterances, train and evaluate model performance before making it available for consumption.
-To simplify building and customizing your model, the service offers a custom web portal that can be accessed through the [Language studio](https://aka.ms/languageStudio). You can easily get started with the service by following the steps in this [quickstart](quickstart.md).
--
-This documentation contains the following article types:
-
-* [Quickstarts](quickstart.md) are getting-started instructions to guide you through making requests to the service.
-* [Concepts](concepts/evaluation-metrics.md) provide explanations of the service functionality and features.
-* [How-to guides](how-to/create-project.md) contain instructions for using the service in more specific or customized ways.
--
-## Example usage scenarios
-
-Orchestration workflow can be used in multiple scenarios across a variety of industries. Some examples are:
-
-### Enterprise chat bot
-
-In a large corporation, an enterprise chat bot may handle a variety of employee affairs. It may be able to handle frequently asked questions served by a custom question answering knowledge base, a calendar specific skill served by conversational language understanding, and an interview feedback skill served by LUIS. The bot needs to be able to appropriately route incoming requests to the correct service. Orchestration workflow allows you to connect those skills to one project that handles the routing of incoming requests appropriately to power the enterprise bot.
-
-## Project development lifecycle
-
-Creating an orchestration workflow project typically involves several different steps.
--
-Follow these steps to get the most out of your model:
-
-1. **Define your schema**: Know your data and define the actions and relevant information that needs to be recognized from user's input utterances. Create the [intents](glossary.md#intent) that you want to assign to user's utterances and the projects you want to connect to your orchestration project.
-
-2. **Label your data**: The quality of data tagging is a key factor in determining model performance.
-
-3. **Train a model**: Your model starts learning from your tagged data.
-
-4. **View the model's performance**: View the evaluation details for your model to determine how well it performs when introduced to new data.
-
-5. **Improve the model**: After reviewing the model's performance, you can then learn how you can improve the model.
-
-6. **Deploy the model**: Deploying a model makes it available for use via the [prediction API](/rest/api/language/2023-04-01/conversation-analysis-runtime/analyze-conversation).
-
-7. **Predict intents**: Use your custom model to predict intents from user's utterances.
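For a sense of what that runtime call looks like, here's a minimal Python sketch; the route, `api-version`, payload shape, and the project and deployment names are assumptions based on the [prediction API](/rest/api/language/2023-04-01/conversation-analysis-runtime/analyze-conversation) reference, so confirm the exact request there.

```python
import requests

# Hypothetical values -- replace with your own resource, project, and deployment names.
endpoint = "https://<your-language-resource>.cognitiveservices.azure.com"
headers = {"Ocp-Apim-Subscription-Key": "<your-api-key>", "Content-Type": "application/json"}

# Assumed runtime route and payload shape for querying a deployed orchestration project.
url = f"{endpoint}/language/:analyze-conversations"
body = {
    "kind": "Conversation",
    "analysisInput": {
        "conversationItem": {"id": "1", "participantId": "user", "text": "Read the email from Matt"}
    },
    "parameters": {"projectName": "MyOrchestrationProject", "deploymentName": "production"},
}

response = requests.post(url, params={"api-version": "2023-04-01"}, headers=headers, json=body)
response.raise_for_status()
# The prediction result reports the routed project and the top-scoring intent.
print(response.json())
```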
-
-## Reference documentation and code samples
-
-As you use orchestration workflow, see the following reference documentation and samples for Azure Cognitive Services for Language:
-
-|Development option / language |Reference documentation |Samples |
-||||
-|REST APIs (Authoring) | [REST API documentation](https://aka.ms/clu-authoring-apis) | |
-|REST APIs (Runtime) | [REST API documentation](/rest/api/language/2023-04-01/conversation-analysis-runtime/analyze-conversation) | |
-|C# (Runtime) | [C# documentation](/dotnet/api/overview/azure/ai.language.conversations-readme) | [C# samples](https://github.com/Azure/azure-sdk-for-net/tree/main/sdk/cognitivelanguage/Azure.AI.Language.Conversations/samples) |
-|Python (Runtime)| [Python documentation](/python/api/overview/azure/ai-language-conversations-readme?view=azure-python-preview&preserve-view=true) | [Python samples](https://github.com/Azure/azure-sdk-for-python/tree/main/sdk/cognitivelanguage/azure-ai-language-conversations/samples) |
-
-## Responsible AI
-
-An AI system includes not only the technology, but also the people who will use it, the people who will be affected by it, and the environment in which it is deployed. Read the transparency note for CLU and orchestration workflow to learn about responsible AI use and deployment in your systems. You can also see the following articles for more information:
--
-## Next steps
-
-* Use the [quickstart article](quickstart.md) to start using orchestration workflow.
-
-* As you go through the project development lifecycle, review the [glossary](glossary.md) to learn more about the terms used throughout the documentation for this feature.
-
-* Remember to view the [service limits](service-limits.md) for information such as regional availability.
cognitive-services Quickstart https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/orchestration-workflow/quickstart.md
- Title: Quickstart - Orchestration workflow-
-description: Quickly start creating an AI model to connect your Conversational Language Understanding, question answering and LUIS applications.
------ Previously updated : 02/28/2023--
-zone_pivot_groups: usage-custom-language-features
--
-# Quickstart: Orchestration workflow
-
-Use this article to get started with Orchestration workflow projects using Language Studio and the REST API. Follow these steps to try out an example.
-------
-## Next steps
-
-* [Learn about orchestration workflows](overview.md)
cognitive-services Service Limits https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/orchestration-workflow/service-limits.md
- Title: Orchestration workflow limits-
-description: Learn about the data, region, and throughput limits for Orchestration workflow
------ Previously updated : 10/12/2022----
-# Orchestration workflow limits
-
-Use this article to learn about the data and service limits when using orchestration workflow.
-
-## Language resource limits
-
-* Your Language resource has to be created in one of the [supported regions](#regional-availability).
-
-* Pricing tiers
-
- |Tier|Description|Limit|
- |--|--|--|
- |F0 |Free tier|You are only allowed one Language resource with the F0 tier per subscription.|
- |S |Paid tier|You can have up to 100 Language resources in the S tier per region.|
--
-See [pricing](https://azure.microsoft.com/pricing/details/cognitive-services/language-service/) for more information.
-
-* You can have up to **500** projects per resource.
-
-* Project names have to be unique within the same resource across all custom features.
-
-## Regional availability
-
-Orchestration workflow is only available in some Azure regions. Some regions are available for **both authoring and prediction**, while other regions are **prediction only**. Language resources in authoring regions allow you to create, edit, train, and deploy your projects. Language resources in prediction regions allow you to get [predictions from a deployment](../concepts/custom-features/multi-region-deployment.md).
-
-| Region | Authoring | Prediction |
-|--|--|-|
-| Australia East | ✓ | ✓ |
-| Brazil South | | ✓ |
-| Canada Central | | ✓ |
-| Central India | ✓ | ✓ |
-| Central US | | ✓ |
-| China East 2 | ✓ | ✓ |
-| China North 2 | | ✓ |
-| East Asia | | ✓ |
-| East US | ✓ | ✓ |
-| East US 2 | ✓ | ✓ |
-| France Central | | ✓ |
-| Japan East | | ✓ |
-| Japan West | | ✓ |
-| Jio India West | | ✓ |
-| Korea Central | | ✓ |
-| North Central US | | ✓ |
-| North Europe | ✓ | ✓ |
-| Norway East | | ✓ |
-| Qatar Central | | ✓ |
-| South Africa North | | ✓ |
-| South Central US | ✓ | ✓ |
-| Southeast Asia | | ✓ |
-| Sweden Central | | ✓ |
-| Switzerland North | ✓ | ✓ |
-| UAE North | | ✓ |
-| UK South | ✓ | ✓ |
-| West Central US | | ✓ |
-| West Europe | ✓ | ✓ |
-| West US | | ✓ |
-| West US 2 | ✓ | ✓ |
-| West US 3 | ✓ | ✓ |
-
-## API limits
-
-|Item|Request type| Maximum limit|
-|:-|:-|:-|
-|Authoring API|POST|10 per minute|
-|Authoring API|GET|100 per minute|
-|Prediction API|GET/POST|1,000 per minute|
-
-## Quota limits
-
-|Pricing tier |Item |Limit |
-| | | |
-|F|Training time| 1 hour per month|
-|S|Training time| Unlimited, Pay as you go |
-|F|Prediction Calls| 5,000 requests per month |
-|S|Prediction Calls| Unlimited, Pay as you go |
-
-## Data limits
-
-The following limits are observed for orchestration workflow.
-
-|Item|Lower Limit| Upper Limit |
-| | | |
-|Count of utterances per project | 1 | 15,000|
-|Utterance length in characters | 1 | 500 |
-|Count of intents per project | 1 | 500|
-|Count of trained models per project| 0 | 10 |
-|Count of deployments per project| 0 | 10 |
-
-## Naming limits
-
-| Attribute | Limits |
-|--|--|
-| Project name | You can only use letters `(a-z, A-Z)`, and numbers `(0-9)` , symbols `_ . -`, with no spaces. Maximum allowed length is 50 characters. |
-| Model name | You can only use letters `(a-z, A-Z)`, numbers `(0-9)` and symbols `_ . -`. Maximum allowed length is 50 characters. |
-| Deployment name | You can only use letters `(a-z, A-Z)`, numbers `(0-9)` and symbols `_ . -`. Maximum allowed length is 50 characters. |
-| Intent name| You can only use letters `(a-z, A-Z)`, numbers `(0-9)` and all symbols except ":", `$ & % * ( ) + ~ # / ?`. Maximum allowed length is 50 characters.|
--
-## Next steps
-
-* [Orchestration workflow overview](overview.md)
cognitive-services Connect Services https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/orchestration-workflow/tutorials/connect-services.md
- Title: Integrate custom question answering and conversational language understanding with orchestration workflow
-description: Learn how to connect different projects with orchestration workflow.
-keywords: conversational language understanding, bot framework, bot, language understanding, nlu
------- Previously updated : 05/25/2022--
-# Connect different services with Orchestration workflow
-
-Orchestration workflow is a feature that allows you to connect different projects from LUIS, conversational language understanding, and custom question answering in one project. You can then use this project for predictions under one endpoint. The orchestration project predicts which project should be called, automatically routes the request to that project, and returns that project's response.
-
-In this tutorial, you will learn how to connect a custom question answering knowledge base with a conversational language understanding project. You will then call the project using the .NET SDK sample for orchestration.
-
-This tutorial will include creating a **chit chat** knowledge base and **email commands** project. Chit chat will deal with common niceties and greetings with static responses. Email commands will predict among a few simple actions for an email assistant. The tutorial will then teach you to call the Orchestrator using the SDK in a .NET environment using a sample solution.
--
-## Prerequisites
--- Create a [Language resource](https://portal.azure.com/#create/Microsoft.CognitiveServicesTextAnalytics) **and select the custom question answering feature** in the Azure portal to get your key and endpoint. After it deploys, select **Go to resource**.
- - You will need the key and endpoint from the resource you create to connect your bot to the API. You'll paste your key and endpoint into the code below later in the tutorial. Copy them from the **Keys and Endpoint** tab in your resource.
- - When you enable custom question answering, you must select an Azure search resource to connect to.
- - Make sure the region of your resource is supported by [conversational language understanding](../../conversational-language-understanding/service-limits.md#regional-availability).
-- Download the **OrchestrationWorkflowSample** [sample](https://aka.ms/orchestration-sample).-
-## Create a custom question answering knowledge base
-
-1. Sign into the [Language Studio](https://language.cognitive.azure.com/) and select your Language resource.
-2. Find and select the [Custom question answering](https://language.cognitive.azure.com/questionAnswering/projects/) card in the homepage.
-3. Click on **Create new project** and add the name **chitchat** with the language _English_ before clicking on **Create project**.
-4. When the project loads, click on **Add source** and select _Chit chat_. Select the professional personality for chit chat before adding the source.
-
- :::image type="content" source="../media/chit-chat.png" alt-text="A screenshot of the chit chat popup." lightbox="../media/chit-chat.png":::
-
-5. Go to **Deploy knowledge base** from the left navigation menu and click on **Deploy** and confirm the popup that shows up.
-
-You are now done with deploying your knowledge base for chit chat. You can explore the type of questions and answers to expect in the **Edit knowledge base** page.
-
-## Create a conversational language understanding project
-
-1. In Language Studio, go to the [Conversational language understanding](https://language.cognitive.azure.com/clu/projects) service.
-2. Download the **EmailProject.json** sample file [here](https://aka.ms/clu-sample-json).
-3. Click on the **Import** button. Browse to the EmailProject.json file you downloaded and press Done.
-
- :::image type="content" source="../media/import-export.png" alt-text="A screenshot showing where to import a J son file." lightbox="../media/import-export.png":::
-
-4. Once the project is loaded, click on **Training jobs** on the left. Select **Start a training job**, provide the model name **v1**, and select **Train**.
-
- :::image type="content" source="../media/train-model.png" alt-text="A screenshot of the training page." lightbox="../media/train-model.png":::
-
-5. Once training is complete, click on **Deploying a model** on the left. Click on **Add deployment**, create a new deployment with the name **Testing**, and assign model **v1** to the deployment.
-
- :::image type="content" source="../media/deploy-model-tutorial.png" alt-text="A screenshot showing the model deployment page." lightbox="../media/deploy-model-tutorial.png":::
-
-You are now done with deploying a conversational language understanding project for email commands. You can explore the different commands in the **Data labeling** page.
-
-## Create an Orchestration workflow project
-
-1. In Language Studio, go to the [Orchestration workflow](https://language.cognitive.azure.com/orchestration/projects) service.
-2. Click on **Create new project**. Use the name **Orchestrator** and the language _English_ before clicking next then done.
-3. Once the project is created, click on **Add** in the **Schema definition** page.
-4. Select _Yes, I want to connect it to an existing project_. Add the intent name **EmailIntent** and select **Conversational Language Understanding** as the connected service. Select the recently created **EmailProject** project for the project name before clicking on **Add Intent**.
--
-5. Add another intent but now select **Question Answering** as the service and select **chitchat** as the project name.
-6. Similar to conversational language understanding, go to **Training jobs** and start a new training job with the name **v1** and press Train.
-7. Once training is complete, click on **Deploying a model** on the left. Click on **Add deployment**, create a new deployment with the name **Testing**, assign model **v1** to the deployment, and press **Next**.
-8. On the next page, select the deployment name **Testing** for the **EmailIntent**. This tells the orchestrator to call the **Testing** deployment in **EmailProject** when it routes to it. Custom question answering projects only have one deployment by default.
--
-Now your orchestration project is ready to be used. Any incoming request will be routed to either **EmailIntent** and the **EmailProject** in conversational language understanding or **ChitChatIntent** and the **chitchat** knowledge base.
-
-## Call the orchestration project with the Conversations SDK
-
-1. In the downloaded sample, open OrchestrationWorkflowSample.sln in Visual Studio.
-
-2. In the OrchestrationWorkflowSample solution, make sure to install all the required packages. In Visual Studio, go to _Tools_ > _NuGet Package Manager_ > _Package Manager Console_, and run the following command.
-
-```powershell
-dotnet add package Azure.AI.Language.Conversations
-```
-Alternatively, you can search for "Azure.AI.Language.Conversations" in the NuGet package manager and install the latest release.
-
-3. In `Program.cs`, replace `{api-key}` and the `{endpoint}` variables. Use the key and endpoint for the Language resource you created earlier. You can find them in the **Keys and Endpoint** tab in your Language resource in Azure.
-
-```csharp
-Uri endpoint = new Uri("{endpoint}");
-AzureKeyCredential credential = new AzureKeyCredential("{api-key}");
-```
-
-4. Set the project and deployment parameters to **Orchestrator** and **Testing** as shown below, if they aren't set already.
-
-```csharp
-string projectName = "Orchestrator";
-string deploymentName = "Testing";
-```
-
-5. Run the project or press F5 in Visual Studio.
-6. Input a query such as "read the email from matt" or "hello how are you". You'll now observe different responses for each: a conversational language understanding **EmailProject** response for the first query, and an answer from the **chitchat** knowledge base for the second query.
-
-**Conversational Language Understanding**:
-
-**Custom Question Answering**:
-
-You can now connect other projects to your orchestrator and begin building complex architectures with various different projects.
-
-## Next steps
-- Learn more about [conversational language understanding](./../../conversational-language-understanding/overview.md).
-- Learn more about [custom question answering](./../../question-answering/overview.md).
cognitive-services Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/overview.md
- Title: What is Azure Cognitive Service for Language-
-description: Learn how to integrate AI into your applications that can extract information and understand written language.
------ Previously updated : 04/14/2023----
-# What is Azure Cognitive Service for Language?
-
-Azure Cognitive Service for Language is a cloud-based service that provides Natural Language Processing (NLP) features for understanding and analyzing text. Use this service to help build intelligent applications using the web-based Language Studio, REST APIs, and client libraries.
-
-## Available features
-
-The Language service unifies three previously available Cognitive Services: Text Analytics, QnA Maker, and Language Understanding (LUIS).
-
-The Language service also provides several new features as well, which can either be:
-
-* Preconfigured, which means the AI models that the feature uses are not customizable. You just send your data, and use the feature's output in your applications.
-* Customizable, which means you'll train an AI model using our tools to fit your data specifically.
-
-> [!TIP]
-> Unsure which feature to use? See [Which Language service feature should I use?](#which-language-service-feature-should-i-use) to help you decide.
-
-[**Language Studio**](./language-studio.md) enables you to use the below service features without needing to write code.
-
-### Named Entity Recognition (NER)
-
- :::column span="":::
- :::image type="content" source="media/studio-examples/named-entity-recognition.png" alt-text="A screenshot of a named entity recognition example." lightbox="media/studio-examples/named-entity-recognition.png":::
- :::column-end:::
- :::column span="":::
- [Named entity recognition](./named-entity-recognition/overview.md) is a preconfigured feature that categorizes entities (words or phrases) in unstructured text across several predefined category groups. For example: people, events, places, dates, [and more](./named-entity-recognition/concepts/named-entity-categories.md).
-
- :::column-end:::
-
-### Personally identifying (PII) and health (PHI) information detection
-
- :::column span="":::
- :::image type="content" source="media/studio-examples/personal-information-detection.png" alt-text="A screenshot of a PII detection example." lightbox="media/studio-examples/personal-information-detection.png":::
- :::column-end:::
- :::column span="":::
- [PII detection](./personally-identifiable-information/overview.md) is a preconfigured feature that identifies, categorizes, and redacts sensitive information in both [unstructured text documents](./personally-identifiable-information/how-to-call.md), and [conversation transcripts](./personally-identifiable-information/how-to-call-for-conversations.md). For example: phone numbers, email addresses, forms of identification, [and more](./personally-identifiable-information/concepts/entity-categories.md).
-
- :::column-end:::
-
-### Language detection
-
- :::column span="":::
- :::image type="content" source="media/studio-examples/language-detection.png" alt-text="A screenshot of a language detection example." lightbox="media/studio-examples/language-detection.png":::
- :::column-end:::
- :::column span="":::
- [Language detection](./language-detection/overview.md) is a preconfigured feature that can detect the language a document is written in, and returns a language code for a wide range of languages, variants, dialects, and some regional/cultural languages.
-
- :::column-end:::
-
-### Sentiment Analysis and opinion mining
-
- :::column span="":::
- :::image type="content" source="media/studio-examples/sentiment-analysis-example.png" alt-text="A screenshot of a sentiment analysis example." lightbox="media/studio-examples/sentiment-analysis-example.png":::
- :::column-end:::
- :::column span="":::
- [Sentiment analysis and opinion mining](./sentiment-opinion-mining/overview.md) are preconfigured features that help you find out what people think of your brand or topic by mining text for clues about positive or negative sentiment, and can associate them with specific aspects of the text.
-
- :::column-end:::
-
-### Summarization
-
- :::column span="":::
- :::image type="content" source="media/studio-examples/summarization-example.png" alt-text="A screenshot of a summarization example." lightbox="media/studio-examples/summarization-example.png":::
- :::column-end:::
- :::column span="":::
- [Summarization](./summarization/overview.md) is a preconfigured feature that uses extractive text summarization to produce a summary of documents and conversation transcriptions. It extracts sentences that collectively represent the most important or relevant information within the original content.
- :::column-end:::
-
-### Key phrase extraction
-
- :::column span="":::
- :::image type="content" source="media/studio-examples/key-phrases.png" alt-text="A screenshot of a key phrase extraction example." lightbox="media/studio-examples/key-phrases.png":::
- :::column-end:::
- :::column span="":::
- [Key phrase extraction](./key-phrase-extraction/overview.md) is a preconfigured feature that evaluates and returns the main concepts in unstructured text, and returns them as a list.
- :::column-end:::
-
-### Entity linking
-
- :::column span="":::
- :::image type="content" source="media/studio-examples/entity-linking.png" alt-text="A screenshot of an entity linking example." lightbox="media/studio-examples/entity-linking.png":::
- :::column-end:::
- :::column span="":::
- [Entity linking](./entity-linking/overview.md) is a preconfigured feature that disambiguates the identity of entities (words or phrases) found in unstructured text and returns links to Wikipedia.
- :::column-end:::
-
-### Text analytics for health
-
- :::column span="":::
- :::image type="content" source="text-analytics-for-health/media/call-api/health-named-entity-recognition.png" alt-text="A screenshot of a text analytics for health example." lightbox="text-analytics-for-health/media/call-api/health-named-entity-recognition.png":::
- :::column-end:::
- :::column span="":::
- [Text analytics for health](./text-analytics-for-health/overview.md) is a preconfigured feature that extracts and labels relevant medical information from unstructured texts such as doctor's notes, discharge summaries, clinical documents, and electronic health records.
- :::column-end:::
-
-### Custom text classification
-
- :::column span="":::
- :::image type="content" source="media/studio-examples/single-classification.png" alt-text="A screenshot of a custom text classification example." lightbox="media/studio-examples/single-classification.png":::
- :::column-end:::
- :::column span="":::
- [Custom text classification](./custom-text-classification/overview.md) enables you to build custom AI models to classify unstructured text documents into custom classes you define.
- :::column-end:::
-
-### Custom Named Entity Recognition (Custom NER)
--
- :::column span="":::
- :::image type="content" source="media/studio-examples/custom-named-entity-recognition.png" alt-text="A screenshot of a custom NER example." lightbox="media/studio-examples/custom-named-entity-recognition.png":::
- :::column-end:::
- :::column span="":::
- [Custom NER](custom-named-entity-recognition/overview.md) enables you to build custom AI models to extract custom entity categories (labels for words or phrases), using unstructured text that you provide.
- :::column-end:::
--
-### Conversational language understanding
-
- :::column span="":::
- :::image type="content" source="media/studio-examples/conversational-language-understanding.png" alt-text="A screenshot of a conversational language understanding example." lightbox="media/studio-examples/conversational-language-understanding.png":::
- :::column-end:::
- :::column span="":::
- [Conversational language understanding (CLU)](./conversational-language-understanding/overview.md) enables users to build custom natural language understanding models to predict the overall intention of an incoming utterance and extract important information from it.
- :::column-end:::
-
-### Orchestration workflow
-
- :::column span="":::
- :::image type="content" source="media/studio-examples/orchestration-workflow.png" alt-text="A screenshot of an orchestration workflow example." lightbox="media/studio-examples/orchestration-workflow.png":::
- :::column-end:::
- :::column span="":::
    [Orchestration workflow](./orchestration-workflow/overview.md) is a custom feature that enables you to connect [Conversational Language Understanding (CLU)](./conversational-language-understanding/overview.md), [question answering](./question-answering/overview.md), and [LUIS](../LUIS/what-is-luis.md) applications.
-
- :::column-end:::
-
-### Question answering
-
- :::column span="":::
- :::image type="content" source="media/studio-examples/question-answering.png" alt-text="A screenshot of a question answering example." lightbox="media/studio-examples/question-answering.png":::
- :::column-end:::
- :::column span="":::
- [Question answering](./question-answering/overview.md) is a custom feature that finds the most appropriate answer for inputs from your users, and is commonly used to build conversational client applications, such as social media applications, chat bots, and speech-enabled desktop applications.
-
- :::column-end:::
-
-### Custom text analytics for health
-
- :::column span="":::
- :::image type="content" source="text-analytics-for-health/media/call-api/health-named-entity-recognition.png" alt-text="A screenshot of a custom text analytics for health example." lightbox="text-analytics-for-health/media/call-api/health-named-entity-recognition.png":::
- :::column-end:::
- :::column span="":::
    [Custom text analytics for health](./custom-text-analytics-for-health/overview.md) is a custom feature that extracts healthcare-specific entities from unstructured text, using a model you create.
- :::column-end:::
-
-## Which Language service feature should I use?
-
-This section will help you decide which Language service feature you should use for your application:
-
-|What do you want to do? |Document format |Your best solution | Is this solution customizable?* |
-|||||
-| Detect and/or redact sensitive information such as PII and PHI. | Unstructured text, <br> transcribed conversations | [PII detection](./personally-identifiable-information/overview.md) | |
-| Extract categories of information without creating a custom model. | Unstructured text | The [preconfigured NER feature](./named-entity-recognition/overview.md) | |
-| Extract categories of information using a model specific to your data. | Unstructured text | [Custom NER](./custom-named-entity-recognition/overview.md) | ✓ |
-|Extract main topics and important phrases. | Unstructured text | [Key phrase extraction](./key-phrase-extraction/overview.md) | |
-| Determine the sentiment and opinions expressed in text. | Unstructured text | [Sentiment analysis and opinion mining](./sentiment-opinion-mining/overview.md) | |
-| Summarize long chunks of text or conversations. | Unstructured text, <br> transcribed conversations. | [Summarization](./summarization/overview.md) | |
-| Disambiguate entities and get links to Wikipedia. | Unstructured text | [Entity linking](./entity-linking/overview.md) | |
-| Classify documents into one or more categories. | Unstructured text | [Custom text classification](./custom-text-classification/overview.md) | ✓ |
-| Extract medical information from clinical/medical documents, without building a model. | Unstructured text | [Text analytics for health](./text-analytics-for-health/overview.md) | |
-| Extract medical information from clinical/medical documents using a model that's trained on your data. | Unstructured text | [Custom text analytics for health](./custom-text-analytics-for-health/overview.md) | |
-| Build a conversational application that responds to user inputs. | Unstructured user inputs | [Question answering](./question-answering/overview.md) | ✓ |
-| Detect the language that a text was written in. | Unstructured text | [Language detection](./language-detection/overview.md) | |
-| Predict the intention of user inputs and extract information from them. | Unstructured user inputs | [Conversational language understanding](./conversational-language-understanding/overview.md) | ✓ |
-| Connect apps from conversational language understanding, LUIS, and question answering. | Unstructured user inputs | [Orchestration workflow](./orchestration-workflow/overview.md) | ✓ |
-
-\* If a feature is customizable, you can train an AI model using our tools to fit your data specifically. Otherwise a feature is preconfigured, meaning the AI models it uses cannot be changed. You just send your data, and use the feature's output in your applications.
-
-## Migrate from Text Analytics, QnA Maker, or Language Understanding (LUIS)
-
-Azure Cognitive Services for Language unifies three individual language services in Cognitive Services - Text Analytics, QnA Maker, and Language Understanding (LUIS). If you have been using these three services, you can easily migrate to the new Azure Cognitive Services for Language. For instructions see [Migrating to Azure Cognitive Services for Language](concepts/migrate.md).
-
-## Tutorials
-
-After you've had a chance to get started with the Language service, try our tutorials that show you how to solve various scenarios.
-
-* [Extract key phrases from text stored in Power BI](key-phrase-extraction/tutorials/integrate-power-bi.md)
-* [Use Power Automate to sort information in Microsoft Excel](named-entity-recognition/tutorials/extract-excel-information.md)
-* [Use Flask to translate text, analyze sentiment, and synthesize speech](/training/modules/python-flask-build-ai-web-app/)
-* [Use Cognitive Services in canvas apps](/powerapps/maker/canvas-apps/cognitive-services-api?context=/azure/cognitive-services/language-service/context/context)
-* [Create a FAQ Bot](question-answering/tutorials/bot-service.md)
-
-## Additional code samples
-
-You can find more code samples on GitHub for the following languages:
-
-* [C#](https://github.com/Azure/azure-sdk-for-net/tree/main/sdk/textanalytics/Azure.AI.TextAnalytics/samples)
-* [Java](https://github.com/Azure/azure-sdk-for-java/tree/main/sdk/textanalytics/azure-ai-textanalytics/src/samples)
-* [JavaScript](https://github.com/Azure/azure-sdk-for-js/tree/main/sdk/textanalytics/ai-text-analytics/samples)
-* [Python](https://github.com/Azure/azure-sdk-for-python/tree/main/sdk/textanalytics/azure-ai-textanalytics/samples)
-
-## Deploy on premises using Docker containers
-Use Language service containers to deploy API features on-premises. These Docker containers enable you to bring the service closer to your data for compliance, security, or other operational reasons. The Language service offers the following containers:
-
-* [Sentiment analysis](sentiment-opinion-mining/how-to/use-containers.md)
-* [Language detection](language-detection/how-to/use-containers.md)
-* [Key phrase extraction](key-phrase-extraction/how-to/use-containers.md)
-* [Text Analytics for health](text-analytics-for-health/how-to/use-containers.md)
--
-## Responsible AI
-
-An AI system includes not only the technology, but also the people who will use it, the people who will be affected by it, and the environment in which it is deployed. Read the following articles to learn about responsible AI use and deployment in your systems:
-
-* [Transparency note for the Language service](/legal/cognitive-services/text-analytics/transparency-note)
-* [Integration and responsible use](/legal/cognitive-services/text-analytics/guidance-integration-responsible-use)
-* [Data, privacy, and security](/legal/cognitive-services/text-analytics/data-privacy)
cognitive-services Conversations Entity Categories https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/personally-identifiable-information/concepts/conversations-entity-categories.md
- Title: Entity categories recognized by Conversational Personally Identifiable Information (detection) in Azure Cognitive Service for Language-
-description: Learn about the entities the Conversational PII feature (preview) can recognize from conversation inputs.
------ Previously updated : 05/15/2022----
-# Supported customer content (PII) entity categories in conversations
-
-Use this article to find the entity categories that can be returned by the [conversational PII detection feature](../how-to-call-for-conversations.md). This feature runs a predictive model to identify, categorize, and redact sensitive information from an input conversation.
-
-The PII preview feature includes the ability to detect personal (`PII`) information from conversations.
-
-## Entity categories
-
-The following entity categories are returned when you send API requests to the conversational PII feature.
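To show where these category names go, here's a rough Python sketch of a conversational PII request. The asynchronous job route, the `ConversationalPIITask` kind, and the parameter casing (`piiCategories`, which this article refers to as the `pii-categories` parameter) are assumptions; check the [conversational PII how-to guide](../how-to-call-for-conversations.md) for the exact request.

```python
import requests

# Hypothetical values -- replace with your own Language resource details.
endpoint = "https://<your-language-resource>.cognitiveservices.azure.com"
headers = {"Ocp-Apim-Subscription-Key": "<your-api-key>", "Content-Type": "application/json"}

# Assumed asynchronous job route and task shape for conversational PII detection.
url = f"{endpoint}/language/analyze-conversations/jobs"
body = {
    "displayName": "Conversational PII example",
    "analysisInput": {
        "conversations": [{
            "id": "1",
            "language": "en",
            "modality": "text",
            "conversationItems": [
                {"id": "1", "participantId": "agent", "text": "Can I have your name and phone number?"},
                {"id": "2", "participantId": "customer", "text": "Sure, it's Chris, 555-0109."},
            ],
        }]
    },
    "tasks": [{
        "kind": "ConversationalPIITask",
        "taskName": "pii-task",
        # Restrict detection to specific categories from this article (parameter name assumed).
        "parameters": {"piiCategories": ["Name", "PhoneNumber"]},
    }],
}

response = requests.post(url, params={"api-version": "2023-04-01"}, headers=headers, json=body)
response.raise_for_status()
# The job is asynchronous; poll the URL in the "operation-location" header for results.
print("PII job:", response.headers.get("operation-location"))
```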
-
-## Category: Name
-
-This category contains the following entity:
-
- :::column span="":::
- **Entity**
-
- Name
-
- :::column-end:::
- :::column span="2":::
- **Details**
-
    Any first, middle, last, or full name is considered PII, regardless of whether it is the speaker's name, the agent's name, someone else's name, or a different version of the speaker's full name (Chris vs. Christopher).
-
- To get this entity category, add `Name` to the `pii-categories` parameter. `Name` will be returned in the API response if detected.
-
- :::column-end:::
-
- :::column span="":::
- **Supported document languages**
-
- `en`
- :::column-end:::
-
-## Category: PhoneNumber
-
-This category contains the following entity:
-
- :::column span="":::
- **Entity**
-
- PhoneNumber
-
- :::column-end:::
- :::column span="2":::
- **Details**
-
    All telephone numbers (including toll-free numbers or numbers that may be easily found or considered public knowledge) are considered PII.
-
- To get this entity category, add `PhoneNumber` to the `pii-categories` parameter. `PhoneNumber` will be returned in the API response if detected.
-
- :::column-end:::
-
- :::column span="":::
- **Supported document languages**
-
- `en`
-
- :::column-end:::
--
-## Category: Address
-
-This category contains the following entity:
-
- :::column span="":::
- **Entity**
-
- Address
-
- :::column-end:::
- :::column span="2":::
- **Details**
-
    Complete or partial addresses are considered PII. All addresses, regardless of the residence or institution the address belongs to (such as a personal residence, business, medical center, or government agency), are covered under this category.
    Note:
    * If the information is limited to city and state only, it isn't considered PII.
    * If the information contains a street, ZIP code, or house number, all of the information is considered Address PII, including the city and state.
-
- To get this entity category, add `Address` to the `pii-categories` parameter. `Address` will be returned in the API response if detected.
-
- :::column-end:::
-
- :::column span="":::
- **Supported document languages**
-
- `en`
-
- :::column-end:::
--
-## Category: Email
-
-This category contains the following entity:
-
- :::column span="":::
- **Entity**
-
- Email
-
- :::column-end:::
- :::column span="2":::
- **Details**
-
- All email addresses are considered PII.
-
- To get this entity category, add `Email` to the `pii-categories` parameter. `Email` will be returned in the API response if detected.
-
- :::column-end:::
- :::column span="":::
- **Supported document languages**
-
- `en`
-
- :::column-end:::
-
-## Category: NumericIdentifier
-
-This category contains the following entities:
-
- :::column span="":::
- **Entity**
-
- NumericIdentifier
-
- :::column-end:::
- :::column span="2":::
- **Details**
-
- Any numeric or alphanumeric identifier that could contain any PII information.
- Examples:
- * Case Number
- * Member Number
- * Ticket number
- * Bank account number
- * Installation ID
- * IP Addresses
- * Product Keys
- * Serial Numbers (1:1 relationship with a specific item/product)
- * Shipping tracking numbers, etc.
-
- To get this entity category, add `NumericIdentifier` to the `pii-categories` parameter. `NumericIdentifier` will be returned in the API response if detected.
-
- :::column-end:::
- :::column span="2":::
- **Supported document languages**
-
- `en`
-
- :::column-end:::
-
-## Category: Credit card
-
-This category contains the following entity:
-
- :::column span="":::
- **Entity**
-
- Credit card
-
- :::column-end:::
- :::column span="2":::
- **Details**
-
    Any credit card number, any security code on the back, or the expiration date is considered PII.
-
- To get this entity category, add `CreditCard` to the `pii-categories` parameter. `CreditCard` will be returned in the API response if detected.
-
- :::column-end:::
- :::column span="2":::
- **Supported document languages**
-
- `en`
-
- :::column-end:::
-
-## Next steps
-
-[How to detect PII in conversations](../how-to-call-for-conversations.md)
cognitive-services Entity Categories https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/personally-identifiable-information/concepts/entity-categories.md
- Title: Entity categories recognized by Personally Identifiable Information (detection) in Azure Cognitive Service for Language-
-description: Learn about the entities the PII feature can recognize from unstructured text.
------ Previously updated : 11/15/2021----
-# Supported Personally Identifiable Information (PII) entity categories
-
-Use this article to find the entity categories that can be returned by the [PII detection feature](../how-to-call.md). This feature runs a predictive model to identify, categorize, and redact sensitive information from an input document.
-
-The PII feature includes the ability to detect personal (`PII`) and health (`PHI`) information.
-
-## Entity categories
-
-> [!NOTE]
-> To detect protected health information (PHI), use the `domain=phi` parameter and model version `2020-04-01` or later.
--
-The following entity categories are returned when you send API requests to the PII feature.
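As an illustration of how these category names and the `domain=phi` setting fit into a request, here's a minimal Python sketch against the synchronous text analysis endpoint; the route, `api-version`, and parameter names are assumptions based on the [PII how-to guide](../how-to-call.md), so verify them there.

```python
import requests

# Hypothetical values -- replace with your own Language resource details.
endpoint = "https://<your-language-resource>.cognitiveservices.azure.com"
headers = {"Ocp-Apim-Subscription-Key": "<your-api-key>", "Content-Type": "application/json"}

# Assumed synchronous route for PII entity recognition.
url = f"{endpoint}/language/:analyze-text"
body = {
    "kind": "PiiEntityRecognition",
    "analysisInput": {
        "documents": [{"id": "1", "language": "en", "text": "Call Dr. Contoso at 555-0109."}]
    },
    "parameters": {
        "modelVersion": "latest",
        "domain": "phi",                             # also detect protected health information
        "piiCategories": ["Person", "PhoneNumber"],  # optionally restrict to specific categories
    },
}

response = requests.post(url, params={"api-version": "2022-05-01"}, headers=headers, json=body)
response.raise_for_status()
print(response.json())
```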
-
-## Category: Person
-
-This category contains the following entity:
-
- :::column span="":::
- **Entity**
-
- Person
-
- :::column-end:::
- :::column span="2":::
- **Details**
-
- Names of people. Returned as both PII and PHI.
-
- To get this entity category, add `Person` to the `piiCategories` parameter. `Person` will be returned in the API response if detected.
-
- :::column-end:::
-
- :::column span="":::
- **Supported document languages**
-
- `en`, `es`, `fr`, `de`, `it`, `zh-hans`, `ja`, `ko`, `pt-pt`, `pt-br`
-
- :::column-end:::
-
-## Category: PersonType
-
-This category contains the following entity:
--
- :::column span="":::
- **Entity**
-
- PersonType
-
- :::column-end:::
- :::column span="2":::
- **Details**
-
- Job types or roles held by a person.
-
- To get this entity category, add `PersonType` to the `piiCategories` parameter. `PersonType` will be returned in the API response if detected.
-
- :::column-end:::
-
- :::column span="":::
- **Supported document languages**
-
- `en`, `es`, `fr`, `de`, `it`, `zh-hans`, `ja`, `ko`, `pt-pt`, `pt-br`
-
- :::column-end:::
-
-## Category: PhoneNumber
-
-This category contains the following entity:
-
- :::column span="":::
- **Entity**
-
- PhoneNumber
-
- :::column-end:::
- :::column span="2":::
- **Details**
-
- Phone numbers (US and EU phone numbers only). Returned as both PII and PHI.
-
- To get this entity category, add `PhoneNumber` to the `piiCategories` parameter. `PhoneNumber` will be returned in the API response if detected.
-
- :::column-end:::
-
- :::column span="":::
- **Supported document languages**
-
- `en`, `es`, `fr`, `de`, `it`, `zh-hans`, `ja`, `ko`, `pt-pt` `pt-br`
-
- :::column-end:::
---
-## Category: Organization
-
-This category contains the following entity:
-
- :::column span="":::
- **Entity**
-
- Organization
-
- :::column-end:::
- :::column span="2":::
- **Details**
-
- Companies, political groups, musical bands, sport clubs, government bodies, and public organizations. Nationalities and religions are not included in this entity type. Returned as both PII and PHI.
-
- To get this entity category, add `Organization` to the `piiCategories` parameter. `Organization` will be returned in the API response if detected.
-
- :::column-end:::
-
- :::column span="":::
- **Supported document languages**
-
- `en`, `es`, `fr`, `de`, `it`, `zh-hans`, `ja`, `ko`, `pt-pt`, `pt-br`
-
- :::column-end:::
--
-#### Subcategories
-
-The entity in this category can have the following subcategories.
-
- :::column span="":::
- **Entity subcategory**
-
- Medical
-
- :::column-end:::
- :::column span="2":::
- **Details**
-
- Medical companies and groups.
-
- To get this entity category, add `OrganizationMedical` to the `piiCategories` parameter. `OrganizationMedical` will be returned in the API response if detected.
-
- :::column-end:::
-
- :::column span="":::
- **Supported document languages**
-
- `en`
-
- :::column-end:::
-
- :::column span="":::
-
- Stock exchange
-
- :::column-end:::
- :::column span="2":::
-
- Stock exchange groups.
-
- To get this entity category, add `OrganizationStockExchange` to the `piiCategories` parameter. `OrganizationStockExchange` will be returned in the API response if detected.
-
- :::column-end:::
-
- :::column span="":::
-
- `en`
-
- :::column-end:::
-
- :::column span="":::
-
- Sports
-
- :::column-end:::
- :::column span="2":::
-
- Sports-related organizations.
-
- To get this entity category, add `OrganizationSports` to the `piiCategories` parameter. `OrganizationSports` will be returned in the API response if detected.
-
- :::column-end:::
-
- :::column span="":::
-
- `en`
-
- :::column-end:::
---
-## Category: Address
-
-This category contains the following entity:
-
- :::column span="":::
- **Entity**
-
- Address
-
- :::column-end:::
- :::column span="2":::
- **Details**
-
- Full mailing address. Returned as both PII and PHI.
-
- To get this entity category, add `Address` to the `piiCategories` parameter. `Address` will be returned in the API response if detected.
-
- :::column-end:::
-
- :::column span="":::
- **Supported document languages**
-
- `en`, `es`, `fr`, `de`, `it`, `zh-hans`, `ja`, `ko`, `pt-pt`, `pt-br`
-
- :::column-end:::
--
-## Category: Email
-
-This category contains the following entity:
-
- :::column span="":::
- **Entity**
-
- Email
-
- :::column-end:::
- :::column span="2":::
- **Details**
-
- Email addresses. Returned as both PII and PHI.
-
- To get this entity category, add `Email` to the `piiCategories` parameter. `Email` will be returned in the API response if detected.
-
- :::column-end:::
- :::column span="":::
- **Supported document languages**
-
- `en`, `es`, `fr`, `de`, `it`, `zh-hans`, `ja`, `ko`, `pt-pt`, `pt-br`
-
- :::column-end:::
--
-## Category: URL
-
-This category contains the following entity:
-
- :::column span="":::
- **Entity**
-
- URL
-
- :::column-end:::
- :::column span="2":::
- **Details**
-
- URLs to websites. Returned as both PII and PHI.
-
- To get this entity category, add `URL` to the `piiCategories` parameter. `URL` will be returned in the API response if detected.
-
- :::column-end:::
-
- :::column span="":::
- **Supported document languages**
-
- `en`, `es`, `fr`, `de`, `it`, `zh-hans`, `ja`, `ko`, `pt-pt`, `pt-br`
-
- :::column-end:::
--
-## Category: IP Address
-
-This category contains the following entity:
-
- :::column span="":::
- **Entity**
-
- IPAddress
-
- :::column-end:::
- :::column span="2":::
- **Details**
-
- Network IP addresses. Returned as both PII and PHI.
-
- To get this entity category, add `IPAddress` to the `piiCategories` parameter. `IPAddress` will be returned in the API response if detected.
-
- :::column-end:::
-
- :::column span="":::
- **Supported document languages**
-
- `en`, `es`, `fr`, `de`, `it`, `zh-hans`, `ja`, `ko`, `pt-pt`, `pt-br`
-
- :::column-end:::
-
-## Category: DateTime
-
-This category contains the following entities:
-
- :::column span="":::
- **Entity**
-
- DateTime
-
- :::column-end:::
- :::column span="2":::
- **Details**
-
- Dates and times of day.
-
- To get this entity category, add `DateTime` to the `piiCategories` parameter. `DateTime` will be returned in the API response if detected.
-
- :::column-end:::
-
- :::column span="":::
- **Supported document languages**
-
- `en`, `es`, `fr`, `de`, `it`, `zh-hans`, `ja`, `ko`, `pt-pt`, `pt-br`
-
- :::column-end:::
-
-#### Subcategories
-
-The entity in this category can have the following subcategories.
-
- :::column span="":::
- **Entity subcategory**
-
- Date
-
- :::column-end:::
- :::column span="2":::
- **Details**
-
 Calendar dates. Returned as both PII and PHI.
-
- To get this entity category, add `Date` to the `piiCategories` parameter. `Date` will be returned in the API response if detected.
-
- :::column-end:::
- :::column span="2":::
- **Supported document languages**
-
- `en`, `es`, `fr`, `de`, `it`, `zh-hans`, `ja`, `ko`, `pt-pt`, `pt-br`
-
- :::column-end:::
--
-## Category: Age
-
-This category contains the following entities:
-
- :::column span="":::
- **Entity**
-
- Age
-
- :::column-end:::
- :::column span="2":::
- **Details**
-
- Ages.
-
- To get this entity category, add `Age` to the `piiCategories` parameter. `Age` will be returned in the API response if detected.
-
- :::column-end:::
- :::column span="2":::
- **Supported document languages**
-
- `en`, `es`, `fr`, `de`, `it`, `zh-hans`, `ja`, `ko`, `pt-pt`, `pt-br`
-
- :::column-end:::
-
-### Azure information
-
-These entity categories include identifiable Azure information like authentication information and connection strings. Not returned as PHI.
-
- :::column span="":::
- **Entity**
-
- Azure DocumentDB Auth Key
-
- :::column-end:::
- :::column span="2":::
- **Details**
-
- Authorization key for an Azure Cosmos DB server.
-
- To get this entity category, add `AzureDocumentDBAuthKey` to the `piiCategories` parameter. `AzureDocumentDBAuthKey` will be returned in the API response if detected.
-
- :::column-end:::
- :::column span="":::
- **Supported document languages**
-
- `en`
-
- :::column-end:::
- :::column span="":::
-
- Azure IAAS Database Connection String and Azure SQL Connection String.
-
-
- :::column-end:::
- :::column span="2":::
-
- Connection string for an Azure infrastructure as a service (IaaS) database, and SQL connection string.
-
- To get this entity category, add `AzureIAASDatabaseConnectionAndSQLString` to the `piiCategories` parameter. `AzureIAASDatabaseConnectionAndSQLString` will be returned in the API response if detected.
-
- :::column-end:::
- :::column span="":::
-
- `en`
-
- :::column-end:::
- :::column span="":::
-
- Azure IoT Connection String
-
- :::column-end:::
- :::column span="2":::
-
- Connection string for Azure IoT.
-
- To get this entity category, add `AzureIoTConnectionString` to the `piiCategories` parameter. `AzureIoTConnectionString` will be returned in the API response if detected.
-
- :::column-end:::
- :::column span="":::
-
- `en`
-
- :::column-end:::
- :::column span="":::
-
- Azure Publish Setting Password
-
- :::column-end:::
- :::column span="2":::
-
- Password for Azure publish settings.
-
- To get this entity category, add `AzurePublishSettingPassword` to the `piiCategories` parameter. `AzurePublishSettingPassword` will be returned in the API response if detected.
-
- :::column-end:::
- :::column span="":::
-
- `en`
-
- :::column-end:::
- :::column span="":::
-
- Azure Redis Cache Connection String
-
- :::column-end:::
- :::column span="2":::
-
- Connection string for a Redis cache.
-
- To get this entity category, add `AzureRedisCacheString` to the `piiCategories` parameter. `AzureRedisCacheString` will be returned in the API response if detected.
-
- :::column-end:::
- :::column span="":::
-
- `en`
-
- :::column-end:::
- :::column span="":::
-
- Azure SAS
-
- :::column-end:::
- :::column span="2":::
-
- Connection string for Azure software as a service (SaaS).
-
- To get this entity category, add `AzureSAS` to the `piiCategories` parameter. `AzureSAS` will be returned in the API response if detected.
-
- :::column-end:::
- :::column span="":::
-
- `en`
-
- :::column-end:::
- :::column span="":::
-
- Azure Service Bus Connection String
-
- :::column-end:::
- :::column span="2":::
-
- Connection string for an Azure service bus.
-
- To get this entity category, add `AzureServiceBusString` to the `piiCategories` parameter. `AzureServiceBusString` will be returned in the API response if detected.
-
- :::column-end:::
- :::column span="":::
-
- `en`
-
- :::column-end:::
- :::column span="":::
-
- Azure Storage Account Key
-
- :::column-end:::
- :::column span="2":::
-
- Account key for an Azure storage account.
-
- To get this entity category, add `AzureStorageAccountKey` to the `piiCategories` parameter. `AzureStorageAccountKey` will be returned in the API response if detected.
-
- :::column-end:::
- :::column span="":::
-
- `en`
-
- :::column-end:::
- :::column span="":::
-
- Azure Storage Account Key (Generic)
-
- :::column-end:::
- :::column span="2":::
-
- Generic account key for an Azure storage account.
-
- To get this entity category, add `AzureStorageAccountGeneric` to the `piiCategories` parameter. `AzureStorageAccountGeneric` will be returned in the API response if detected.
-
- :::column-end:::
- :::column span="":::
-
- `en`
-
- :::column-end:::
- :::column span="":::
-
- SQL Server Connection String
-
- :::column-end:::
- :::column span="2":::
-
- Connection string for a computer running SQL Server.
-
- To get this entity category, add `SQLServerConnectionString` to the `piiCategories` parameter. `SQLServerConnectionString` will be returned in the API response if detected.
-
- :::column-end:::
- :::column span="":::
-
- `en`
-
- :::column-end:::
-
-### Identification
---
-## Next steps
-
-* [NER overview](../overview.md)
cognitive-services How To Call For Conversations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/personally-identifiable-information/how-to-call-for-conversations.md
- Title: How to detect Personally Identifiable Information (PII) in conversations.-
-description: This article will show you how to extract PII from chat and spoken transcripts and redact identifiable information.
- Previously updated : 01/31/2023
-# How to detect and redact Personally Identifying Information (PII) in conversations
-
-The Conversational PII feature can evaluate conversations to extract and redact sensitive information (PII) across several pre-defined categories. This API operates on both transcribed text (referred to as transcripts) and chats.
-For transcripts, the API also enables redaction of the audio segments that contain PII by providing the audio timing information for those segments.
-
-## Determine how to process the data (optional)
-
-### Specify the PII detection model
-
-By default, this feature will use the latest available AI model on your input. You can also configure your API requests to use a specific [model version](../concepts/model-lifecycle.md).
-
-### Language support
-
-Currently, the conversational PII preview API supports only the English language.
-
-### Region support
-
-Currently the conversational PII preview API supports all Azure regions supported by the Language service.
-
-## Submitting data
-
-> [!NOTE]
-> See the [Language Studio](../language-studio.md#valid-text-formats-for-conversation-features) article for information on formatting conversational text to submit using Language Studio.
-
-You can submit the input to the API as a list of conversation items. Analysis is performed upon receipt of the request. Because the API is asynchronous, there may be a delay between sending an API request and receiving the results. For information on the size and number of requests you can send per minute and second, see the data limits below.
-
-When using the async feature, the API results are available for 24 hours from the time the request was ingested, as indicated in the response. After this time period, the results are purged and are no longer available for retrieval.
-
-When you submit data to conversational PII, you can send one conversation (chat or spoken) per request.
-
-The API will attempt to detect all the [defined entity categories](concepts/conversations-entity-categories.md) for a given conversation input. If you want to specify which entities will be detected and returned, use the optional `piiCategories` parameter with the appropriate entity categories.
-
-For spoken transcripts, the entities detected are returned based on the `redactionSource` parameter value provided. Currently, the supported values for `redactionSource` are `text`, `lexical`, `itn`, and `maskedItn` (which map to the Speech to text REST API's `display`/`displayText`, `lexical`, `itn`, and `maskedItn` formats respectively). Additionally, for spoken transcript input, this API also provides audio timing information to enable audio redaction. To use the audio redaction feature, set the optional `includeAudioRedaction` flag to `true`. The audio redaction is performed based on the lexical input format.
-
-> [!NOTE]
-> Conversation PII now supports 40,000 characters as document size.
--
-## Getting PII results
-
-When you get results from PII detection, you can stream the results to an application or save the output to a file on the local system. The API response will include [recognized entities](concepts/conversations-entity-categories.md), including their categories and subcategories, and confidence scores. The text string with the PII entities redacted will also be returned.
-
-## Examples
-
-# [Client libraries (Azure SDK)](#tab/client-libraries)
-
-1. Go to your resource overview page in the [Azure portal](https://portal.azure.com/#home)
-
-2. From the menu on the left side, select **Keys and Endpoint**. You'll need one of the keys and the endpoint to authenticate your API requests.
-
-3. Download and install the client library package for your language of choice:
-
- |Language |Package version |
- |||
- |.NET | [1.0.0](https://www.nuget.org/packages/Azure.AI.Language.Conversations/1.0.0) |
- |Python | [1.0.0](https://pypi.org/project/azure-ai-language-conversations/1.1.0b2) |
-
-4. See the following reference documentation for more information on the client, and return object:
-
- * [C#](/dotnet/api/azure.ai.language.conversations)
- * [Python](/python/api/azure-ai-language-conversations/azure.ai.language.conversations.aio)
-
-# [REST API](#tab/rest-api)
-
-## Submit transcripts using speech to text
-
-Use the following example if you have conversations transcribed using the Speech service's [speech to text](../../Speech-Service/speech-to-text.md) feature:
-
-```bash
-curl -i -X POST https://your-language-endpoint-here/language/analyze-conversations/jobs?api-version=2022-05-15-preview \
--H "Content-Type: application/json" \
--H "Ocp-Apim-Subscription-Key: your-key-here" \
--d \
-'
-{
- "displayName": "Analyze conversations from xxx",
- "analysisInput": {
- "conversations": [
- {
- "id": "23611680-c4eb-4705-adef-4aa1c17507b5",
- "language": "en",
- "modality": "transcript",
- "conversationItems": [
- {
- "participantId": "agent_1",
- "id": "8074caf7-97e8-4492-ace3-d284821adacd",
- "text": "Good morning.",
- "lexical": "good morning",
- "itn": "good morning",
- "maskedItn": "good morning",
- "audioTimings": [
- {
- "word": "good",
- "offset": 11700000,
- "duration": 2100000
- },
- {
- "word": "morning",
- "offset": 13900000,
- "duration": 3100000
- }
- ]
- },
- {
- "participantId": "agent_1",
- "id": "0d67d52b-693f-4e34-9881-754a14eec887",
- "text": "Can I have your name?",
- "lexical": "can i have your name",
- "itn": "can i have your name",
- "maskedItn": "can i have your name",
- "audioTimings": [
- {
- "word": "can",
- "offset": 44200000,
- "duration": 2200000
- },
- {
- "word": "i",
- "offset": 46500000,
- "duration": 800000
- },
- {
- "word": "have",
- "offset": 47400000,
- "duration": 1500000
- },
- {
- "word": "your",
- "offset": 49000000,
- "duration": 1500000
- },
- {
- "word": "name",
- "offset": 50600000,
- "duration": 2100000
- }
- ]
- },
- {
- "participantId": "customer_1",
- "id": "08684a7a-5433-4658-a3f1-c6114fcfed51",
- "text": "Sure that is John Doe.",
- "lexical": "sure that is john doe",
- "itn": "sure that is john doe",
- "maskedItn": "sure that is john doe",
- "audioTimings": [
- {
- "word": "sure",
- "offset": 5400000,
- "duration": 6300000
- },
- {
- "word": "that",
- "offset": 13600000,
- "duration": 2300000
- },
- {
- "word": "is",
- "offset": 16000000,
- "duration": 1300000
- },
- {
- "word": "john",
- "offset": 17400000,
- "duration": 2500000
- },
- {
- "word": "doe",
- "offset": 20000000,
- "duration": 2700000
- }
- ]
- }
- ]
- }
- ]
- },
- "tasks": [
- {
- "taskName": "analyze 1",
- "kind": "ConversationalPIITask",
- "parameters": {
- "modelVersion": "2022-05-15-preview",
- "redactionSource": "text",
- "includeAudioRedaction": true,
- "piiCategories": [
- "all"
- ]
- }
- }
- ]
-}
-'
-```
-
-## Submit text chats
-
-Use the following example if you have conversations that originated in text. For example, conversations through a text-based chat client.
-
-```bash
-curl -i -X POST https://your-language-endpoint-here/language/analyze-conversations/jobs?api-version=2022-05-15-preview \
--H "Content-Type: application/json" \
--H "Ocp-Apim-Subscription-Key: your-key-here" \
--d \
-'
-{
- "displayName": "Analyze conversations from xxx",
- "analysisInput": {
- "conversations": [
- {
- "id": "23611680-c4eb-4705-adef-4aa1c17507b5",
- "language": "en",
- "modality": "text",
- "conversationItems": [
- {
- "participantId": "agent_1",
- "id": "8074caf7-97e8-4492-ace3-d284821adacd",
- "text": "Good morning."
- },
- {
- "participantId": "agent_1",
- "id": "0d67d52b-693f-4e34-9881-754a14eec887",
- "text": "Can I have your name?"
- },
- {
- "participantId": "customer_1",
- "id": "08684a7a-5433-4658-a3f1-c6114fcfed51",
- "text": "Sure that is John Doe."
- }
- ]
- }
- ]
- },
- "tasks": [
- {
- "taskName": "analyze 1",
- "kind": "ConversationalPIITask",
- "parameters": {
- "modelVersion": "2022-05-15-preview"
- }
- }
- ]
-}
-'
-```
--
-## Get the result
-
-Get the `operation-location` from the response header. The value will look similar to the following URL:
-
-```rest
-https://your-language-endpoint/language/analyze-conversations/jobs/12345678-1234-1234-1234-12345678
-```
-
-To get the results of the request, use the following cURL command. Be sure to replace `my-job-id` with the numerical ID value you received from the previous `operation-location` response header:
-
-```bash
-curl -X GET https://your-language-endpoint/language/analyze-conversations/jobs/my-job-id \
--H "Content-Type: application/json" \
--H "Ocp-Apim-Subscription-Key: your-key-here"
-```
---
-## Service and data limits
--
cognitive-services How To Call https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/personally-identifiable-information/how-to-call.md
- Title: How to detect Personally Identifiable Information (PII)-
-description: This article will show you how to extract PII and health information (PHI) from text and detect identifiable information.
- Previously updated : 07/27/2022
-# How to detect and redact Personally Identifying Information (PII)
-
-The PII feature can evaluate unstructured text, extract and redact sensitive information (PII) and health information (PHI) in text across several [pre-defined categories](concepts/entity-categories.md).
--
-## Development options
--
-## Determine how to process the data (optional)
-
-### Specify the PII detection model
-
-By default, this feature will use the latest available AI model on your text. You can also configure your API requests to use a specific [model version](../concepts/model-lifecycle.md).
-
-### Input languages
-
-When you submit documents to be processed, you can specify which of [the supported languages](language-support.md) they're written in. If you don't specify a language, extraction defaults to English. The API may return offsets in the response to support different [multilingual and emoji encodings](../concepts/multilingual-emoji-support.md).
-
-## Submitting data
-
-Analysis is performed upon receipt of the request. Using the PII detection feature synchronously is stateless. No data is stored in your account, and results are returned immediately in the response.
--
-## Select which entities to be returned
-
-The API will attempt to detect the [defined entity categories](concepts/entity-categories.md) for a given document language. If you want to specify which entities will be detected and returned, use the optional `piiCategories` parameter with the appropriate entity categories. This parameter can also let you detect entities that aren't enabled by default for your document language. The following example would detect only `Person`. You can specify one or more [entity types](concepts/entity-categories.md) to be returned.
-
-> [!TIP]
-> If you don't include `default` when specifying entity categories, the API will only return the entity categories you specify.
-
-**Input:**
-
-> [!NOTE]
-> In this example, the API returns only the **Person** entity type:
-
-`https://<your-language-resource-endpoint>/language/:analyze-text?api-version=2022-05-01`
-
-```bash
-{
-    "kind": "PiiEntityRecognition",
-    "parameters": {
-        "modelVersion": "latest",
-        "piiCategories": [
-            "Person"
-        ]
-    },
-    "analysisInput": {
-        "documents": [
-            {
-                "id": "1",
-                "language": "en",
-                "text": "We went to Contoso foodplace located at downtown Seattle last week for a dinner party, and we adore the spot! They provide marvelous food and they have a great menu. The chief cook happens to be the owner (I think his name is John Doe) and he is super nice, coming out of the kitchen and greeted us all. We enjoyed very much dining in the place! The pasta I ordered was tender and juicy, and the place was impeccably clean. You can even pre-order from their online menu at www.contosofoodplace.com, call 112-555-0176 or send email to order@contosofoodplace.com! The only complaint I have is the food didn't come fast enough. Overall I highly recommend it!"
-            }
-        ]
-    }
-}
-
-```
-
-**Output:**
-
-```bash
-
-{
- "kind": "PiiEntityRecognitionResults",
- "results": {
- "documents": [
- {
- "redactedText": "We went to Contoso foodplace located at downtown Seattle last week for a dinner party, and we adore the spot! They provide marvelous food and they have a great menu. The chief cook happens to be the owner (I think his name is ********) and he is super nice, coming out of the kitchen and greeted us all. We enjoyed very much dining in the place! The pasta I ordered was tender and juicy, and the place was impeccably clean. You can even pre-order from their online menu at www.contosofoodplace.com, call 112-555-0176 or send email to order@contosofoodplace.com! The only complaint I have is the food didn't come fast enough. Overall I highly recommend it!",
- "id": "1",
- "entities": [
- {
- "text": "John Doe",
- "category": "Person",
- "offset": 226,
- "length": 8,
- "confidenceScore": 0.98
- }
- ],
- "warnings": []
- }
- ],
- "errors": [],
- "modelVersion": "2021-01-15"
- }
-}
-```
-
-## Getting PII results
-
-When you get results from PII detection, you can stream the results to an application or save the output to a file on the local system. The API response will include [recognized entities](concepts/entity-categories.md), including their categories and subcategories, and confidence scores. The text string with the PII entities redacted will also be returned.
-
-## Service and data limits
--
-## Next steps
-
-[Named Entity Recognition overview](overview.md)
cognitive-services Language Support https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/personally-identifiable-information/language-support.md
- Title: Personally Identifiable Information (PII) detection language support-
-description: This article explains which natural languages are supported by the PII detection feature of Azure Cognitive Service for Language.
- Previously updated : 08/02/2022
-# Personally Identifiable Information (PII) detection language support
-
-Use this article to learn which natural languages are supported by the PII and conversation PII (preview) features of Azure Cognitive Service for Language.
-
-> [!NOTE]
-> * Languages are added as new [model versions](how-to-call.md#specify-the-pii-detection-model) are released.
-
-# [PII for documents](#tab/documents)
-
-## PII language support
-
-| Language | Language code | Starting with model version | Notes |
-||-|--||
-|Arabic |`ar` | 2023-01-01-preview | |
-|Chinese-Simplified |`zh-hans` |2021-01-15 |`zh` also accepted|
-|Chinese-Traditional |`zh-hant` | 2023-01-01-preview | |
-|Czech |`cs` | 2023-01-01-preview | |
-|Danish |`da` |2023-01-01-preview | |
-|Dutch |`nl` |2023-01-01-preview | |
-|English |`en` |2020-07-01 | |
-|Finnish |`fi` | 2023-01-01-preview | |
-|French |`fr` |2021-01-15 | |
-|German |`de` | 2021-01-15 | |
-|Hebrew |`he` | 2023-01-01-preview | |
-|Hindi |`hi` |2023-01-01-preview | |
-|Hungarian |`hu` | 2023-01-01-preview | |
-|Italian |`it` |2021-01-15 | |
-|Japanese |`ja` | 2021-01-15 | |
-|Korean |`ko` | 2021-01-15 | |
-|Norwegian (Bokmål) |`no` | 2023-01-01-preview |`nb` also accepted|
-|Polish |`pl` | 2023-01-01-preview | |
-|Portuguese (Brazil) |`pt-BR` |2021-01-15 | |
-|Portuguese (Portugal)|`pt-PT` | 2021-01-15 |`pt` also accepted|
-|Russian |`ru` | 2023-01-01-preview | |
-|Spanish |`es` |2020-04-01 | |
-|Swedish |`sv` | 2023-01-01-preview | |
-|Turkish |`tr` |2023-01-01-preview | |
--
-# [PII for conversations (preview)](#tab/conversations)
-
-## PII language support
-
-| Language | Language code | Starting with model version | Notes |
-|:-|:-:|:-:|::|
-| English | `en` | 2022-05-15-preview | |
-| French | `fr` | XXXX-XX-XX-preview | |
-| German | `de` | XXXX-XX-XX-preview | |
-| Spanish | `es` | XXXX-XX-XX-preview | |
---
-## Next steps
-
-[PII feature overview](overview.md)
cognitive-services Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/personally-identifiable-information/overview.md
- Title: What is the Personally Identifying Information (PII) detection feature in Azure Cognitive Service for Language?-
-description: An overview of the PII detection feature in Azure Cognitive Services, which helps you extract entities and sensitive information (PII) in text.
- Previously updated : 01/10/2023
-# What is Personally Identifiable Information (PII) detection in Azure Cognitive Service for Language?
-
-PII detection is one of the features offered by [Azure Cognitive Service for Language](../overview.md), a collection of machine learning and AI algorithms in the cloud for developing intelligent applications that involve written language. The PII detection feature can **identify, categorize, and redact** sensitive information in unstructured text. For example: phone numbers, email addresses, and forms of identification. The method for utilizing PII detection in conversations is different from other use cases, and articles for this use are separate.
-
-* [**Quickstarts**](quickstart.md) are getting-started instructions to guide you through making requests to the service.
-* [**How-to guides**](how-to-call.md) contain instructions for using the service in more specific or customized ways.
-* The [**conceptual articles**](concepts/entity-categories.md) provide in-depth explanations of the service's functionality and features.
-
-PII detection comes in two forms:
-* [PII](how-to-call.md) - works on unstructured text.
-* [Conversation PII (preview)](how-to-call-for-conversations.md) - a model tailored to work on conversation transcripts.
---
-## Get started with PII detection
-----
-## Responsible AI
-
-An AI system includes not only the technology, but also the people who will use it, the people who will be affected by it, and the environment in which it's deployed. Read the [transparency note for PII](/legal/cognitive-services/language-service/transparency-note-personally-identifiable-information?context=/azure/cognitive-services/language-service/context/context) to learn about responsible AI use and deployment in your systems. You can also see the following articles for more information:
--
-## Example scenarios
-
-* **Apply sensitivity labels** - For example, based on the results from the PII service, a public sensitivity label might be applied to documents where no PII entities are detected. For documents where US addresses and phone numbers are recognized, a confidential label might be applied. A highly confidential label might be used for documents where bank routing numbers are recognized.
-* **Redact some categories of personal information from documents that get wider circulation** - For example, if customer contact records are accessible to first line support representatives, the company may want to redact the customer's personal information besides their name from the version of the customer history to preserve the customer's privacy.
-* **Redact personal information in order to reduce unconscious bias** - For example, during a company's resume review process, they may want to block name, address and phone number to help reduce unconscious gender or other biases.
-* **Replace personal information in source data for machine learning to reduce unfairness** - For example, if you want to remove names that might reveal gender when training a machine learning model, you could use the service to identify them and replace them with generic placeholders for model training.
-* **Remove personal information from call center transcription** - For example, if you want to remove names or other PII data exchanged between the agent and the customer in a call center scenario, you could use the service to identify and remove them.
-* **Data cleaning for data science** - PII detection can be used to prepare data for data scientists and engineers to train their machine learning models, redacting it so that customer data isn't exposed.
---
-## Next steps
-
-There are two ways to get started using the PII detection feature:
-* [Language Studio](../language-studio.md), which is a web-based platform that enables you to try several Language service features without needing to write code.
-* The [quickstart article](quickstart.md) for instructions on making requests to the service using the REST API and client library SDK.
cognitive-services Quickstart https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/personally-identifiable-information/quickstart.md
- Title: "Quickstart: Detect Personally Identifying Information (PII) in text"-
-description: Use this quickstart to start using the PII detection API.
- Previously updated : 02/17/2023
-zone_pivot_groups: programming-languages-text-analytics
--
-# Quickstart: Detect Personally Identifiable Information (PII)
-
-> [!NOTE]
-> This quickstart only covers PII detection in documents. To learn more about detecting PII in conversations, see [How to detect and redact PII in conversations](how-to-call-for-conversations.md).
----------------
-## Clean up resources
-
-If you want to clean up and remove a Cognitive Services subscription, you can delete the resource or resource group. Deleting the resource group also deletes any other resources associated with it.
-
-* [Portal](../../cognitive-services-apis-create-account.md#clean-up-resources)
-* [Azure CLI](../../cognitive-services-apis-create-account-cli.md#clean-up-resources)
---
-## Next steps
-
-* [Overview](overview.md)
cognitive-services Azure Resources https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/question-answering/concepts/azure-resources.md
- Title: Azure resources - question answering
-description: Question answering uses several Azure sources, each with a different purpose. Understanding how they are used individually allows you to plan for and select the correct pricing tier or know when to change your pricing tier. Understanding how they are used in combination allows you to find and fix problems when they occur.
- Previously updated : 08/08/2022
-# Azure resources for question answering
-
-Question answering uses several Azure sources, each with a different purpose. Understanding how they are used individually allows you to plan for and select the correct pricing tier or know when to change your pricing tier. Understanding how resources are used _in combination_ allows you to find and fix problems when they occur.
-
-## Resource planning
-
-> [!TIP]
-> "Knowledge base" and "project" are equivalent terms in question answering and can be used interchangeably.
-
-When you first develop a project, in the prototype phase, it is common to have a single resource for both testing and production.
-
-When you move into the development phase of the project, you should consider:
-
-* How many languages will your project hold?
-* How many regions you need your project to be available in?
-* How many documents will your system hold in each domain?
-
-## Pricing tier considerations
-
-Typically there are three parameters you need to consider:
-
-* **The throughput you need**:
-
- * The throughput for question answering is currently capped at 10 text records per second for both management APIs and prediction APIs.
-
- * This should also influence your Azure **Cognitive Search** SKU selection, see more details [here](../../../../search/search-sku-tier.md). Additionally, you may need to adjust Cognitive Search [capacity](../../../../search/search-capacity-planning.md) with replicas.
-
-* **Size and the number of projects**: Choose the appropriate [Azure search SKU](https://azure.microsoft.com/pricing/details/search/) for your scenario. Typically, you decide the number of projects you need based on number of different subject domains. One subject domain (for a single language) should be in one project.
-
- With custom question answering, you have a choice to set up your language resource in a single language or multiple languages. You can make this selection when you create your first project in the [Language Studio](https://language.azure.com/).
-
- > [!IMPORTANT]
- > You can publish N-1 projects of a single language or N/2 projects of different languages in a particular tier, where N is the maximum indexes allowed in the tier. Also check the maximum size and the number of documents allowed per tier.
-
- For example, if your tier has 15 allowed indexes, you can publish 14 projects of the same language (one index per published project). The 15th index is used for all the projects for authoring and testing. If you choose to have projects in different languages, then you can only publish seven projects.
-
-* **Number of documents as sources**: There are no limits to the number of documents you can add as sources in question answering.
-
-The following table gives you some high-level guidelines.
-
-| |Azure Cognitive Search | Limitations |
-| -- | | -- |
-| **Experimentation** |Free Tier | Publish Up to 2 KBs, 50 MB size |
-| **Dev/Test Environment** |Basic | Publish Up to 14 KBs, 2 GB size |
-| **Production Environment** |Standard | Publish Up to 49 KBs, 25 GB size |
-
-## Recommended settings
--
-The throughput for question answering is currently capped at 10 text records per second for both management APIs and prediction APIs. To target 10 text records per second for your service, we recommend the S1 (one instance) SKU of Azure Cognitive Search.
--
-## Keys in question answering
-
-Your custom question answering feature deals with two kinds of keys: **authoring keys** and **Azure Cognitive Search keys** used to access the service in the customer's subscription.
-
-Use these keys when making requests to the service through APIs.
-
-|Name|Location|Purpose|
-|--|--|--|
-|Authoring/Subscription key|[Azure portal](https://azure.microsoft.com/free/cognitive-services/)|These keys are used to access the Language service APIs. These APIs let you edit the questions and answers in your project, and publish your project. These keys are created when you create a new resource.<br><br>Find these keys on the **Cognitive Services** resource on the **Keys and Endpoint** page.|
-|Azure Cognitive Search Admin Key|[Azure portal](../../../../search/search-security-api-keys.md)|These keys are used to communicate with the Azure Cognitive Search service deployed in the user's Azure subscription. When you associate an Azure Cognitive Search resource with the custom question answering feature, the admin key is automatically passed to question answering.<br><br>You can find these keys on the **Azure Cognitive Search** resource on the **Keys** page.|
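-
-For example, a minimal sketch of calling a question answering authoring REST API operation with the authoring key passed in the `Ocp-Apim-Subscription-Key` header. The list-projects path and `2021-10-01` API version shown here are assumptions about the current REST API; the endpoint and key are placeholders:
-
-```bash
-curl -X GET "https://<your-language-resource-endpoint>/language/query-knowledgebases/projects?api-version=2021-10-01" \
- -H "Ocp-Apim-Subscription-Key: <your-authoring-key>"
-```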
-
-### Find authoring keys in the Azure portal
-
-You can view and reset your authoring keys from the Azure portal, where you added the custom question answering feature in your language resource.
-
-1. Go to the language resource in the Azure portal and select the resource that has the *Cognitive Services* type:
-
- > [!div class="mx-imgBorder"]
- > ![Screenshot of question answering resource list.](../../../qnamaker/media/qnamaker-how-to-setup-service/resources-created-question-answering.png)
-
-2. Go to **Keys and Endpoint**:
-
- > [!div class="mx-imgBorder"]
- > ![Screenshot of subscription key.](../../../qnamaker/media/qnamaker-how-to-key-management/custom-qna-keys-and-endpoint.png)
-
-### Management service region
-
-In custom question answering, both the management and the prediction services are colocated in the same region.
-
-## Resource purposes
-
-Each Azure resource created with Custom question answering feature has a specific purpose:
-
-* Language resource (Also referred to as a Text Analytics resource depending on the context of where you are evaluating the resource.)
-* Cognitive Search resource
-
-### Language resource
-
-The language resource with the custom question answering feature provides access to the authoring and publishing APIs, hosts the ranking runtime, and provides telemetry.
-
-### Azure Cognitive Search resource
-
-The [Cognitive Search](../../../../search/index.yml) resource is used to:
-
-* Store the question and answer pairs
-* Provide the initial ranking (ranker #1) of the question and answer pairs at runtime
-
-#### Index usage
-
-You can publish N-1 projects of a single language or N/2 projects of different languages in a particular tier, where N is the maximum number of indexes allowed in the Azure Cognitive Search tier. Also check the maximum size and the number of documents allowed per tier.
-
-For example, if your tier has 15 allowed indexes, you can publish 14 projects of the same language (one index per published project). The 15th index is used for all the projects for authoring and testing. If you choose to have projects in different languages, then you can only publish seven projects.
-
-#### Language usage
-
-With custom question answering, you have a choice to set up your service for projects in a single language or multiple languages. You make this choice during the creation of the first project in your language resource.
-
-## Next steps
-
-* Learn about the question answering [projects](../How-To/manage-knowledge-base.md)
cognitive-services Best Practices https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/question-answering/concepts/best-practices.md
- Title: Best practices - question answering
-description: Use these best practices to improve your project and provide better results to your application/chat bot's end users.
- Previously updated : 06/03/2022
-# Question answering best practices
-
-Use these best practices to improve your project and provide better results to your client application or chat bot's end users.
-
-## Extraction
-
-Question answering is continually improving the algorithms that extract question answer pairs from content and expanding the list of supported file and HTML formats. In general, FAQ pages should be stand-alone and not combined with other information. Product manuals should have clear headings and preferably an index page.
-
-## Creating good questions and answers
-
-We've used the following list of question and answer pairs as a representation of a project to highlight best practices when authoring projects for question answering.
-
-| Question | Answer |
-|-|-|
-| I want to buy a car |There are three options to buy a car.|
-| I want to purchase software license |Software license can be purchased online at no cost.|
-| What is the price of Microsoft stock? | $200. |
-| How to buy Microsoft Services | Microsoft services can be bought online.|
-| Want to sell car | Please send car pics and document.|
-| How to get access to identification card? | Apply via company portal to get identification card.|
-
-### When should you add alternate questions to question and answer pairs?
-
-Question answering employs a transformer-based ranker that takes care of user queries that are semantically similar to the question in the project. For example, consider the following question answer pair:
-
-*Question: What is the price of Microsoft Stock?*
-*Answer: $200.*
-
-The service can return the expected response for semantically similar queries such as:
-
-"How much is Microsoft stock worth?"
-"How much is Microsoft share value?"
-"How much does a Microsoft share cost?"
-"What is the market value of a Microsoft stock?"
-"What is the market value of a Microsoft share?"
-
-However, it's important to understand that the confidence score with which the system returns the correct response will vary based on the input query and how different it is from the original question answer pair.
-
-There are certain scenarios that require the customer to add an alternate question. When it's already verified that for a particular query the correct answer isn't returned despite being present in the project, we advise adding that query as an alternate question to the intended question answer pair.
-
-### How many alternate questions per question answer pair is optimal?
-
-Users can add as many alternate questions as they want, but only the first five are considered for core ranking. However, the rest are useful for exact match scenarios. It's also recommended to keep distinct alternate questions with different intents at the top for better relevance and score.
-
-Semantic understanding in question answering should be able to take care of similar alternate questions.
-
-The return on investment will start diminishing once you exceed 10 questions. Even if you're adding more than 10 alternate questions, try to make the initial 10 questions as semantically dissimilar as possible so that all kinds of intents for the answer are captured by these 10 questions. For the project at the beginning of this section, in question answer pair #1, adding alternate questions such as "How can I buy a car" or "I wanna buy a car" isn't required. Whereas adding alternate questions such as "How to purchase a car" or "What are the options of buying a vehicle" can be useful.
-
-### When to add synonyms to a project?
-
-Question answering provides the flexibility to use synonyms at the project level, unlike QnA Maker where synonyms are shared across projects for the entire service.
-
-For better relevance, you need to provide a list of acronyms that the end user intends to use interchangeably. The following is a list of acceptable acronyms:
-
-`MSFT` - Microsoft
-`ID` - Identification
-`ETA` - Estimated Time of Arrival
-
-Other than acronyms, if you think your words are similar in context of a particular domain and generic language models won't consider them similar, it's better to add them as synonyms. For instance, if an auto company producing a car model X receives queries such as "my car's audio isn't working" and the project has questions on "fixing audio for car X", then we need to add 'X' and 'car' as synonyms.
-
-The transformer-based model already takes care of most of the common synonym cases, for example: `Purchase - Buy`, `Sell - Auction`, `Price - Value`. For another example, consider the following question answer pair: Q: "What is the price of Microsoft Stock?" A: "$200".
-
-If we receive user queries like "Microsoft stock value", "Microsoft share value", "Microsoft stock worth", "Microsoft share worth", "stock value", etc., you should be able to get the correct answer even though these queries have words like "share", "value", and "worth", which aren't originally present in the project.
-
-Special characters are not allowed in synonyms.
-
-### How are lowercase/uppercase characters treated?
-
-Question answering takes casing into account, but it's intelligent enough to understand when it's to be ignored. You shouldn't see any perceivable difference due to wrong casing.
-
-### How are question answer pairs prioritized for multi-turn questions?
-
-When a project has hierarchical relationships (either added manually or via extraction) and the previous response was an answer related to other question answer pairs, for the next query we give slight preference to all the children question answer pairs, sibling question answer pairs, and grandchildren question answer pairs in that order. Along with any query, the [Question Answering REST API](/rest/api/cognitiveservices/questionanswering/question-answering/get-answers) expects a `context` object with the property `previousQnAId`, which denotes the last top answer. Based on this previous `QnAID`, all the related `QnAs` are boosted.
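-
-For example, a follow-up query can pass the ID of the previous top answer in the `context` object. This is a minimal sketch of the query request body; the endpoint, project name, deployment name, API version, and ID values are placeholders:
-
-```bash
-curl -X POST "https://<your-language-resource-endpoint>/language/:query-knowledgebases?projectName=<project-name>&deploymentName=production&api-version=2021-10-01" \
- -H "Ocp-Apim-Subscription-Key: <your-key>" \
- -H "Content-Type: application/json" \
- -d '{
-  "question": "How do I get one?",
-  "top": 3,
-  "context": {
-    "previousQnAId": 4,
-    "previousUserQuery": "How to get access to identification card?"
-  }
-}'
-```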
-
-### How are accents treated?
-
-Accents are supported for all major European languages. If the query has an incorrect accent, the confidence score might be slightly different, but the service still returns the relevant answer and takes care of minor errors by leveraging fuzzy search.
-
-### How is punctuation in a user query treated?
-
-Punctuation is ignored in a user query before sending it to the ranking stack. Ideally it shouldn’t impact the relevance scores. Punctuation that is ignored is as follows: `,?:;\"'(){}[]-+。./!*؟`
-
-## Chit-Chat
-
-Add chit-chat to your bot, to make your bot more conversational and engaging, with low effort. You can easily add chit-chat data sources from pre-defined personalities when creating your project, and change them at any time. Learn how to [add chit-chat to your KB](../How-To/chit-chat.md).
-
-Chit-chat is supported in [many languages](../how-to/chit-chat.md#language-support).
-
-### Choosing a personality
-
-Chit-chat is supported for several predefined personalities:
-
-|Personality |Question answering dataset file |
-||--|
-|Professional |[qna_chitchat_professional.tsv](https://qnamakerstore.blob.core.windows.net/qnamakerdata/editorial/qna_chitchat_professional.tsv) |
-|Friendly |[qna_chitchat_friendly.tsv](https://qnamakerstore.blob.core.windows.net/qnamakerdata/editorial/qna_chitchat_friendly.tsv) |
-|Witty |[qna_chitchat_witty.tsv](https://qnamakerstore.blob.core.windows.net/qnamakerdata/editorial/qna_chitchat_witty.tsv) |
-|Caring |[qna_chitchat_caring.tsv](https://qnamakerstore.blob.core.windows.net/qnamakerdata/editorial/qna_chitchat_caring.tsv) |
-|Enthusiastic |[qna_chitchat_enthusiastic.tsv](https://qnamakerstore.blob.core.windows.net/qnamakerdata/editorial/qna_chitchat_enthusiastic.tsv) |
-
-The responses range from formal to informal and irreverent. You should select the personality that is closest aligned with the tone you want for your bot. You can view the [datasets](https://github.com/Microsoft/BotBuilder-PersonalityChat/tree/master/CSharp/Datasets), and choose one that serves as a base for your bot, and then customize the responses.
-
-### Edit bot-specific questions
-
-There are some bot-specific questions that are part of the chit-chat data set, and have been filled in with generic answers. Change these answers to best reflect your bot details.
-
-We recommend making the following chit-chat question answer pairs more specific:
-
-* Who are you?
-* What can you do?
-* How old are you?
-* Who created you?
-* Hello
-
-### Adding custom chit-chat with a metadata tag
-
-If you add your own chit-chat question answer pairs, make sure to add metadata so these answers are returned. The metadata name/value pair is `editorial:chitchat`.
-
-## Searching for answers
-
-Question answering REST API uses both questions and the answer to search for best answers to a user's query.
-
-### Searching questions only when answer isn't relevant
-
-Use the [`RankerType=QuestionOnly`](#choosing-ranker-type) if you don't want to search answers.
-
-An example of this is when the project is a catalog of acronyms as questions with their full form as the answer. The value of the answer won't help to search for the appropriate answer.
-
-## Ranking/Scoring
-
-Make sure you're making the best use of the supported ranking features. Doing so will improve the likelihood that a given user query is answered with an appropriate response.
-
-### Choosing a threshold
-
-The default [confidence score](confidence-score.md) that is used as a threshold is 0, however you can [change the threshold](confidence-score.md#set-threshold) for your project based on your needs. Since every project is different, you should test and choose the threshold that is best suited for your project.
-
-### Choosing Ranker type
-
-By default, question answering searches through questions and answers. If you want to search through questions only, to generate an answer, use the `RankerType=QuestionOnly` in the POST body of the REST API request.
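-
-For example, a minimal sketch of a query request body that sets the ranker type, assuming the `rankerType` property and `2021-10-01` version of the query REST API; the endpoint, project, and key are placeholders:
-
-```bash
-curl -X POST "https://<your-language-resource-endpoint>/language/:query-knowledgebases?projectName=<project-name>&deploymentName=production&api-version=2021-10-01" \
- -H "Ocp-Apim-Subscription-Key: <your-key>" \
- -H "Content-Type: application/json" \
- -d '{
-  "question": "ETA",
-  "top": 1,
-  "rankerType": "QuestionOnly"
-}'
-```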
-
-### Add alternate questions
-
-Add alternate questions to improve the likelihood of a match with a user query. Alternate questions are useful when there are multiple ways in which the same question may be asked. This can include changes in the sentence structure and word-style.
-
-|Original query|Alternate queries|Change|
-|--|--|--|
-|Is parking available?|Do you have a car park?|sentence structure|
- |Hi|Yo<br>Hey there|word-style or slang|
-
-### Use metadata tags to filter questions and answers
-
-Metadata adds the ability for a client application to know it shouldn't take all answers but instead to narrow down the results of a user query based on metadata tags. The project answer can differ based on the metadata tag, even if the query is the same. For example, *"where is parking located"* can have a different answer if the location of the restaurant branch is different - that is, the metadata is *Location: Seattle* versus *Location: Redmond*.
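-
-For example, a sketch of a query request body that restricts answers to a metadata tag, assuming the `filters`/`metadataFilter` shape of the `2021-10-01` query REST API; the endpoint, project, and key are placeholders:
-
-```bash
-curl -X POST "https://<your-language-resource-endpoint>/language/:query-knowledgebases?projectName=<project-name>&deploymentName=production&api-version=2021-10-01" \
- -H "Ocp-Apim-Subscription-Key: <your-key>" \
- -H "Content-Type: application/json" \
- -d '{
-  "question": "where is parking located",
-  "top": 3,
-  "filters": {
-    "metadataFilter": {
-      "metadata": [
-        { "key": "Location", "value": "Seattle" }
-      ]
-    }
-  }
-}'
-```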
-
-### Use synonyms
-
-While there's some support for synonyms in the English language, use case-insensitive [word alterations](../tutorials/adding-synonyms.md) to add synonyms to keywords that take different forms.
-
-|Original word|Synonyms|
-|--|--|
-|buy|purchase<br>net-banking<br>net banking|
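-
-For example, a sketch of registering "buy" and "purchase" as alterations of each other through the authoring REST API. The update-synonyms path and request shape shown here are assumptions about the current REST API; the endpoint, project, and key are placeholders:
-
-```bash
-curl -X PUT "https://<your-language-resource-endpoint>/language/query-knowledgebases/projects/<project-name>/synonyms?api-version=2021-10-01" \
- -H "Ocp-Apim-Subscription-Key: <your-key>" \
- -H "Content-Type: application/json" \
- -d '{
-  "value": [
-    { "alterations": [ "buy", "purchase" ] }
-  ]
-}'
-```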
---
-### Use distinct words to differentiate questions
-
-The ranking algorithm that matches a user query with a question in the project works best if each question addresses a different need. Repetition of the same word set between questions reduces the likelihood that the right answer is chosen for a given user query with those words.
-
-For example, you might have two separate question answer pairs with the following questions:
-
-|Questions|
-|--|
-|where is the parking *location*|
-|where is the ATM *location*|
-
-Since these two questions are phrased with very similar words, this similarity could cause very similar scores for many user queries that are phrased like *"where is the `<x>` location"*. Instead, try to clearly differentiate with queries like *"where is the parking lot"* and *"where is the ATM"*, by avoiding words like "location" that could be in many questions in your project.
-
-## Collaborate
-
-Question answering allows users to collaborate on a project. Users need access to the associated Azure resource group in order to access the projects. Some organizations may want to outsource the project editing and maintenance, and still be able to protect access to their Azure resources. This editor-approver model is done by setting up two identical language resources with identical question answering projects in different subscriptions and selecting one for the edit-testing cycle. Once testing is finished, the project contents are exported and transferred with an [import-export](../how-to/migrate-knowledge-base.md) process to the language resource of the approver that will finally deploy the project and update the endpoint.
-
-## Active learning
-
-[Active learning](../tutorials/active-learning.md) does the best job of suggesting alternative questions when it has a wide range of quality and quantity of user-based queries. It's important to allow client applications' user queries to participate in the active learning feedback loop without censorship. Once questions are suggested in Language Studio, you can review and accept or reject those suggestions.
-
-## Next steps
--
cognitive-services Confidence Score https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/question-answering/concepts/confidence-score.md
- Title: Confidence score - question answering-
-description: When a user query is matched against a knowledge base, question answering returns relevant answers, along with a confidence score.
- Previously updated : 11/02/2021
-# Confidence score
-
-When a user query is matched against a project (also known as a knowledge base), question answering returns relevant answers, along with a confidence score. This score indicates the confidence that the answer is the right match for the given user query.
-
-The confidence score is a number between 0 and 100. A score of 100 is likely an exact match, while a score of 0 means that no matching answer was found. The higher the score, the greater the confidence in the answer. For a given query, there could be multiple answers returned. In that case, the answers are returned in order of decreasing confidence score.
-
-The following table indicates typical confidence associated for a given score.
-
-|Score Value|Score Meaning|
-|--|--|
-|90 - 100|A near exact match of the user query and a KB question|
-|> 70|High confidence - typically a good answer that completely answers the user's query|
-|50 - 70|Medium confidence - typically a fairly good answer that should answer the main intent of the user query|
-|30 - 50|Low confidence - typically a related answer that partially answers the user's intent|
-|< 30|Very low confidence - typically does not answer the user's query, but has some matching words or phrases|
-|0|No match, so the answer is not returned.|
-
-## Choose a score threshold
-
-The table above shows the range of scores that can occur when querying with question answering. However, since every project is different and has different types of words, intents, and goals, we recommend that you test and choose the threshold that works best for you. By default, the threshold is set to `0`, so that all possible answers are returned. The recommended threshold that should work for most projects is **50**.
-
-When choosing your threshold, keep in mind the balance between **Accuracy** and **Coverage**, and adjust your threshold based on your requirements.
-* If **Accuracy** (or precision) is more important for your scenario, then increase your threshold. This way, every time you return an answer, it will be a much more confident case, and much more likely to be the answer users are looking for. In this case, you might end up leaving more questions unanswered.
-
-* If **Coverage** (or recall) is more important, and you want to answer as many questions as possible even if there is only a partial relation to the user's question, then lower the threshold. This means there could be more cases where the answer does not answer the user's actual query, but gives some other somewhat related answer.
-## Set threshold
-
-Set the threshold score as a property of the [REST API JSON body](../quickstart/sdk.md). This means you set it for each call to the REST API.
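-
-For example, a minimal sketch of a query request body that sets the threshold. This assumes the `confidenceScoreThreshold` property of the `2021-10-01` query REST API, which is expressed on a 0 to 1 scale (a threshold of 50 corresponds to `0.5`); the endpoint, project, and key are placeholders:
-
-```bash
-curl -X POST "https://<your-language-resource-endpoint>/language/:query-knowledgebases?projectName=<project-name>&deploymentName=production&api-version=2021-10-01" \
- -H "Ocp-Apim-Subscription-Key: <your-key>" \
- -H "Content-Type: application/json" \
- -d '{
-  "question": "How to buy Microsoft Services",
-  "top": 3,
-  "confidenceScoreThreshold": 0.5
-}'
-```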
-
-## Improve confidence scores
-
-To improve the confidence score of a particular response to a user query, you can add the user query to the project as an alternate question on that response. You can also use case-insensitive [synonyms](../tutorials/adding-synonyms.md) to add synonyms to keywords in your project.
-
-## Similar confidence scores
-
-When multiple responses have a similar confidence score, it is likely that the query was too generic and therefore matched multiple answers with roughly equal likelihood. Try to structure your QnAs better so that every QnA entity has a distinct intent.
-
-## Confidence score differences between test and production
-
-The confidence score of an answer may change negligibly between the test and deployed version of the project even if the content is the same. This is because the content of the test and the deployed project are located in different Azure Cognitive Search indexes.
-
-The test index holds all the question and answer pairs of your project. When you query the test index, the query applies to the entire index, and then the results are restricted to the partition for that specific project. If the test query results are negatively impacting your ability to validate the project, you can:
-* Organize your project using one of the following:
- * One resource restricted to one project: restrict your single language resource (and the resulting Azure Cognitive Search test index) to a project.
- * Two resources - one for test, one for production: have two language resources, using one for testing (with its own test and production indexes) and one for production (also having its own test and production indexes)
-* Always use the same parameters when querying both your test and production projects.
-
-When you deploy a project, the question and answer contents of your project move from the test index to a production index in Azure Cognitive Search.
-
-If you have a project in different regions, each region uses its own Azure Cognitive Search index. Because different indexes are used, the scores will not be exactly the same.
-
-## No match found
-
-When the ranker finds no good match, a confidence score of 0.0 or "None" is returned, along with the default response. You can change the [default response](../how-to/change-default-answer.md).
-
-## Next steps
-
cognitive-services Plan https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/question-answering/concepts/plan.md
- Title: Plan your app - question answering
-description: Learn how to plan your question answering app. Understand how question answering works and interacts with other Azure services and some project concepts.
----- Previously updated : 06/03/2022--
-# Plan your question answering app
-
-To plan your question answering app, you need to understand how question answering works and interacts with other Azure services. You should also have a solid grasp of project concepts.
-
-## Azure resources
-
-Each [Azure resource](azure-resources.md#resource-purposes) created with question answering has a specific purpose, along with its own limits and [pricing tier](azure-resources.md#pricing-tier-considerations). It's important to understand the function of these resources so that you can apply that knowledge to your planning process.
-
-| Resource | Purpose |
-|--|--|
-| [Language resource](azure-resources.md) | Authoring, query prediction endpoint, and telemetry |
-| [Cognitive Search](azure-resources.md#azure-cognitive-search-resource) resource | Data storage and search |
-
-### Resource planning
-
-Question answering throughput is currently capped at 10 text records per second for both management APIs and prediction APIs. To target 10 text records per second for your service, we recommend the S1 (one instance) SKU of Azure Cognitive Search.
-
-### Language resource
-
-A single language resource with the custom question answering feature enabled can host more than one project. The number of projects is determined by the Cognitive Search pricing tier's quantity of supported indexes. Learn more about the [relationship of indexes to projects](azure-resources.md#index-usage).
-
-### Project size and throughput
-
-When you build a real app, plan sufficient resources for the size of your project and for your expected query prediction requests.
-
-A project size is controlled by the:
-* [Cognitive Search resource](../../../../search/search-limits-quotas-capacity.md) pricing tier limits
-* [Question answering limits](./limits.md)
-
-The project query prediction request is controlled by the web app plan and web app. Refer to [recommended settings](azure-resources.md#recommended-settings) to plan your pricing tier.
-
-### Understand the impact of resource selection
-
-Proper resource selection means your project responds to query prediction requests successfully.
-
-If your project isn't functioning properly, it's typically an issue of improper resource management.
-
-Improper resource selection requires investigation to determine which [resource needs to change](azure-resources.md#pricing-tier-considerations).
-
-## Project
-
-A project is directly tied to its language resource. It holds the question and answer (QnA) pairs that are used to answer query prediction requests.
-
-### Language considerations
-
-You can now have projects in different languages within the same language resource where the custom question answering feature is enabled. When you create the first project, you can choose whether you want to use the resource for projects in a single language that will apply to all subsequent projects or make a language selection each time a project is created.
-
-### Ingest data sources
-
-Question answering also supports unstructured content. You can upload a file that has unstructured content.
-
-Currently we do not support URLs for unstructured content.
-
-The ingestion process converts supported content types to markdown. All further editing of the *answer* is done with markdown. After you create a project, you can edit QnA pairs in Language Studio with rich text authoring.
-
-### Data format considerations
-
-Because the final format of a QnA pair is markdown, it's important to understand markdown support.
-
-### Bot personality
-
-Add a bot personality to your project with [chit-chat](../how-to/chit-chat.md). This personality comes through in answers delivered in a chosen conversational tone, such as *professional* or *friendly*. This chit-chat is provided as a conversational set, which you have total control to add, edit, and remove.
-
-A bot personality is recommended if your bot connects to your project. You can choose to use chit-chat in your project even if you also connect to other services, but you should review how the bot service interacts to know if that is the correct architectural design for your use.
-
-### Conversation flow with a project
-
-Conversation flow usually begins with a salutation from a user, such as `Hi` or `Hello`. Your project can answer with a general answer, such as `Hi, how can I help you`, and it can also provide a selection of follow-up prompts to continue the conversation.
-
-You should design your conversational flow with a loop in mind so that a user knows how to use your bot and isn't abandoned by the bot in the conversation. [Follow-up prompts](../tutorials/guided-conversations.md) provide linking between QnA pairs, which allow for the conversational flow.
-
-### Authoring with collaborators
-
-Collaborators may be other developers who share the full development stack of the project application or may be limited to just authoring the project.
-
-Project authoring supports several role-based access permissions that you apply in the Azure portal to limit the scope of a collaborator's abilities.
-
-## Integration with client applications
-
-Integration with client applications is accomplished by sending a query to the prediction runtime endpoint. A query is sent to your specific project with an SDK or REST-based request to your question answering web app endpoint.
-
-To authenticate a client request correctly, the client application must send the correct credentials and project ID. If you're using an Azure Bot Service, configure these settings as part of the bot configuration in the Azure portal.
-
-### Conversation flow in a client application
-
-Conversation flow in a client application, such as an Azure bot, may require functionality before and after interacting with the project.
-
-Does your client application support conversation flow, either by providing an alternate means to handle follow-up prompts or by including chit-chat? If so, design these early and make sure the client application query is handled correctly by another service or when sent to your project.
-
-### Active learning from a client application
-
-Question answering uses _active learning_ to improve your project by suggesting alternate questions to an answer. The client application is responsible for a part of this [active learning](../tutorials/active-learning.md). Through conversational prompts, the client application can determine that the project returned an answer that's not useful to the user, and it can determine a better answer. The client application needs to send that information back to the project to improve the prediction quality.
-
-### Providing a default answer
-
-If your project doesn't find an answer, it returns the _default answer_. This answer is configurable on the **Settings** page.
-
-This default answer is different from the Azure bot default answer. You configure the default answer for your Azure bot in the Azure portal as part of configuration settings. It's returned when the score threshold isn't met.
-
-## Prediction
-
-The prediction is the response from your project, and it includes more information than just the answer. To get a query prediction response, use the question answering API.
-
-### Prediction score fluctuations
-
-A score can change based on several factors:
-
-* Number of answers you requested in response with the `top` property
-* Variety of available alternate questions
-* Filtering for metadata
-* Query sent to `test` or `production` project.
-
-### Analytics with Azure Monitor
-
-In question answering, telemetry is offered through the [Azure Monitor service](../../../../azure-monitor/index.yml). Use our [top queries](../how-to/analytics.md) to understand your metrics.
-
-## Development lifecycle
-
-The development lifecycle of a project is ongoing: editing, testing, and publishing your project.
-
-### Project development of question answer pairs
-
-Your QnA pairs should be designed and developed based on your client application usage.
-
-Each pair can contain:
-* Metadata - filterable when querying to allow you to tag your QnA pairs with additional information about the source, content, format, and purpose of your data.
-* Follow-up prompts - helps to determine a path through your project so the user arrives at the correct answer.
-* Alternate questions - important to allow search to match to your answer from different forms of the question. [Active learning suggestions](../tutorials/active-learning.md) turn into alternate questions.
-
-### DevOps development
-
-Developing a project to insert into a DevOps pipeline requires that the project is isolated during batch testing.
-
-A project shares the Cognitive Search index with all other projects on the language resource. While the project is isolated by partition, sharing the index can cause a difference in the score when compared to the published project.
-
-To have the _same score_ on the `test` and `production` projects, isolate a language resource to a single project. In this architecture, the resource only needs to live as long as the isolated batch test.
-
-## Next steps
-
-* [Azure resources](./azure-resources.md)
cognitive-services Precise Answering https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/question-answering/concepts/precise-answering.md
- Title: Precise answering using answer span detection - question answering
-description: Understand Precise answering feature available in question answering.
--- Previously updated : 11/02/2021---
-# Precise answering
-
-The precise answering feature allows you to get a precise short answer from the best candidate answer passage present in the project for any user query. This feature uses a deep learning model at runtime, which understands the intent of the user query and detects the precise short answer from the answer passage, if a short answer is present as a fact in the answer passage.
-
-This feature benefits both content developers and end users. Content developers don't need to manually curate specific question answer pairs for every fact present in the project, and end users don't need to read through the whole answer passage returned from the service to find the fact that actually answers their query.
-
-## Precise answering via the portal
-
-In the [Language Studio portal](https://aka.ms/languageStudio), when you open the test pane, you will see an **Include short answer response** option at the top, above **Show advanced options**.
-
-When you enter a query in the test pane, you will see a short answer along with the answer passage, if a short answer is present in the answer passage.
-
->[!div class="mx-imgBorder"]
->[![Screenshot of test pane with short answer checked and a question containing a short answer response.](../media/precise-answering/short-answer.png)](../media/precise-answering/short-answer.png#lightbox)
-
-You can clear the **Include short answer response** option if you want to see only the **Long answer** passage in the test pane.
-
-The service also returns the confidence score of the precise answer as an **Answer-span confidence score**, which you can check by selecting the **Inspect** option and then selecting **Additional information**.
-
->[!div class="mx-imgBorder"]
->[![Screenshot of inspect pane with answer-span confidence score displayed.](../media/precise-answering/answer-confidence-score.png)](../media/precise-answering/answer-confidence-score.png#lightbox)
-
-## Deploying a bot
-
-When you publish a bot, you get the precise answering experience enabled by default in your application, where you will see the short answer along with the answer passage. Refer to the REST API reference to see how to use the precise answer (called AnswerSpan) in the response. Users have the flexibility to choose other experiences by updating the template through the bot app service.
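-
-To try the same behavior outside the portal, the following is a hedged sketch of a runtime query that requests a short answer. It assumes the 2021-10-01 query endpoint and the `answerSpanRequest` option; the placeholder values and the property names inside `answerSpanRequest` are assumptions, so check the REST API reference for the exact schema.
-
-```bash
-# Sketch: ask for a short answer (answer span) along with the long answer passage
-curl -X POST -H "Ocp-Apim-Subscription-Key: {API-KEY}" -H "Content-Type: application/json" -d '{
-    "question": "How long is the battery life?",
-    "top": 3,
-    "answerSpanRequest": {
-        "enable": true,
-        "confidenceScoreThreshold": 0.3,
-        "topAnswersWithSpan": 1
-    }
-  }' 'https://{ENDPOINT}.api.cognitive.microsoft.com/language/:query-knowledgebases?projectName={PROJECT-NAME}&deploymentName=production&api-version=2021-10-01'
-```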
cognitive-services Analytics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/question-answering/how-to/analytics.md
- Title: Analytics on projects - custom question answering-
-description: Custom question answering uses Azure diagnostic logging to store the telemetry data and chat logs
--
-displayName: chat history, history, chat logs, logs
--- Previously updated : 11/02/2021---
-# Get analytics for your project
-
-Custom question answering uses Azure diagnostic logging to store the telemetry data and chat logs. Follow the steps below to run sample queries and get analytics on the usage of your custom question answering project.
-
-1. [Enable diagnostics logging](../../../diagnostic-logging.md) for your language resource with custom question answering enabled.
-
-2. In the previous step, select **Trace** in addition to **Audit, RequestResponse, and AllMetrics** for logging.
-
- ![Enable trace logging in custom question answering](../media/analytics/qnamaker-v2-enable-trace-logging.png)
-
-## Kusto queries
-
-### Chat log
-
-```kusto
-// All QnA Traffic
-AzureDiagnostics
-| where ResourceProvider == "MICROSOFT.COGNITIVESERVICES"
-| where OperationName=="CustomQuestionAnswering QueryKnowledgebases" // This OperationName is valid for custom question answering enabled resources
-| extend answer_ = tostring(parse_json(properties_s).answer)
-| extend question_ = tostring(parse_json(properties_s).question)
-| extend score_ = tostring(parse_json(properties_s).score)
-| extend kbId_ = tostring(parse_json(properties_s).kbId)
-| project question_, answer_, score_, kbId_
-```
-
-### Traffic count per project and user in a time period
-
-```kusto
-// Traffic count per KB and user in a time period
-let startDate = todatetime('2019-01-01');
-let endDate = todatetime('2020-12-31');
-AzureDiagnostics
-| where ResourceProvider == "MICROSOFT.COGNITIVESERVICES"
-| where OperationName=="CustomQuestionAnswering QueryKnowledgebases" // This OperationName is valid for custom question answering enabled resources
-| where TimeGenerated <= endDate and TimeGenerated >=startDate
-| extend kbId_ = tostring(parse_json(properties_s).kbId)
-| extend userId_ = tostring(parse_json(properties_s).userId)
-| summarize ChatCount=count() by bin(TimeGenerated, 1d), kbId_, userId_
-```
-
-### Latency of GenerateAnswer API
-
-```kusto
-// Latency of GenerateAnswer
-AzureDiagnostics
-| where ResourceProvider == "MICROSOFT.COGNITIVESERVICES"
-| where OperationName=="Generate Answer"
-| project TimeGenerated, DurationMs
-| render timechart
-```
-
-### Average latency of all operations
-
-```kusto
-// Average Latency of all operations
-AzureDiagnostics
-| where ResourceProvider == "MICROSOFT.COGNITIVESERVICES"
-| project DurationMs, OperationName
-| summarize count(), avg(DurationMs) by OperationName
-| render barchart
-```
-
-### Unanswered questions
-
-```kusto
-// All unanswered questions
-AzureDiagnostics
-| where ResourceProvider == "MICROSOFT.COGNITIVESERVICES"
-| where OperationName=="CustomQuestionAnswering QueryKnowledgebases" // This OperationName is valid for custom question answering enabled resources
-| extend answer_ = tostring(parse_json(properties_s).answer)
-| extend question_ = tostring(parse_json(properties_s).question)
-| extend score_ = tostring(parse_json(properties_s).score)
-| extend kbId_ = tostring(parse_json(properties_s).kbId)
-| where score_ == 0
-| project question_, answer_, score_, kbId_
-```
-
-### Prebuilt question answering inference calls
-
-```kusto
-// Show logs from AzureDiagnostics table
-// Lists the latest logs in AzureDiagnostics table, sorted by time (latest first).
-AzureDiagnostics
-| where OperationName == "CustomQuestionAnswering QueryText"
-| extend answer_ = tostring(parse_json(properties_s).answer)
-| extend question_ = tostring(parse_json(properties_s).question)
-| extend score_ = tostring(parse_json(properties_s).score)
-| extend requestid = tostring(parse_json(properties_s)["apim-request-id"])
-| project TimeGenerated, requestid, question_, answer_, score_
-```
-
-## Next steps
--
cognitive-services Authoring https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/question-answering/how-to/authoring.md
- Title: Authoring API - question answering-
-description: Use the question answering Authoring API to automate common tasks like adding new question answer pairs, and creating, and publishing projects.
----- Previously updated : 11/23/2021--
-# Authoring API
-
-The question answering Authoring API is used to automate common tasks like adding new question answer pairs, as well as creating, publishing, and maintaining projects.
-
-> [!NOTE]
-> Authoring functionality is available via the REST API and [Authoring SDK (preview)](/dotnet/api/overview/azure/ai.language.questionanswering-readme). This article provides examples of using the REST API with cURL. For full documentation of all parameters and functionality available consult the [REST API reference content](/rest/api/cognitiveservices/questionanswering/question-answering-projects).
-
-## Prerequisites
-
-* The current version of [cURL](https://curl.haxx.se/). Several command-line switches are used in this article, which are noted in the [cURL documentation](https://curl.haxx.se/docs/manpage.html).
-* The commands in this article are designed to be executed in a Bash shell. These commands will not always work in a Windows command prompt or in PowerShell without modification. If you do not have a Bash shell installed locally, you can use the [Azure Cloud Shell's bash environment](../../../../cloud-shell/overview.md).
-
-## Create a project
-
-To create a project programmatically:
-
-|Variable name | Value |
-|--|-|
-| `ENDPOINT` | This value can be found in the **Keys & Endpoint** section when examining your resource from the Azure portal. Alternatively, you can find the value in **Language Studio** > **question answering** > **Deploy project** > **Get prediction URL**. An example endpoint is: `https://southcentralus.api.cognitive.microsoft.com/`. If the prior example was your endpoint in the code sample below, you would only need to add the region specific portion of `southcentral` as the rest of the endpoint path is already present.|
-| `API-KEY` | This value can be found in the **Keys & Endpoint** section when examining your resource from the Azure portal. You can use either Key1 or Key2. Always having two valid keys allows for secure key rotation with zero downtime. Alternatively, you can find the value in **Language Studio** > **question answering** > **Deploy project** > **Get prediction URL**. The key value is part of the sample request.|
-| `NEW-PROJECT-NAME` | The name for your new question answering project.|
-
-You can also adjust additional values like the project language, the default answer given when no answer can be found that meets or exceeds the confidence threshold, and whether this language resource will support multiple languages.
-
-### Example query
-
-```bash
-curl -X PATCH -H "Ocp-Apim-Subscription-Key: {API-KEY}" -H "Content-Type: application/json" -d '{
- "description": "proj1 is a test project.",
- "language": "en",
- "settings": {
- "defaultAnswer": "No good match found for your question in the project."
- },
- "multilingualResource": true
-    }' 'https://{ENDPOINT}.api.cognitive.microsoft.com/language/query-knowledgebases/projects/{NEW-PROJECT-NAME}?api-version=2021-10-01'
-```
-
-### Example response
-
-```json
-{
- "200": {
- "headers": {},
- "body": {
- "projectName": "proj1",
- "description": "proj1 is a test project.",
- "language": "en",
- "settings": {
- "defaultAnswer": "No good match found for your question in the project."
- },
- "multilingualResource": true,
- "createdDateTime": "2021-05-01T15:13:22Z",
- "lastModifiedDateTime": "2021-05-01T15:13:22Z",
- "lastDeployedDateTime": "2021-05-01T15:13:22Z"
- }
- }
-}
-```
-
-## Delete Project
-
-To delete a project programmatically:
-
-|Variable name | Value |
-|--|-|
-| `ENDPOINT` | This value can be found in the **Keys & Endpoint** section when examining your resource from the Azure portal. Alternatively you can find the value in **Language Studio** > **question answering** > **Deploy project** > **Get prediction URL**. An example endpoint is: `https://southcentralus.api.cognitive.microsoft.com/`. If the prior example was your endpoint in the code sample below, you would only need to add the region specific portion of `southcentral` as the rest of the endpoint path is already present.|
-| `API-KEY` | This value can be found in the **Keys & Endpoint** section when examining your resource from the Azure portal. You can use either Key1 or Key2. Always having two valid keys allows for secure key rotation with zero downtime. Alternatively you can find the value in **Language Studio** > **question answering** > **Deploy project** > **Get prediction URL**. The key value is part of the sample request.|
-| `PROJECT-NAME` | The name of project you would like to delete.|
-
-### Example query
-
-```bash
-curl -X DELETE -H "Ocp-Apim-Subscription-Key: {API-KEY}" -H "Content-Type: application/json" -i 'https://{ENDPOINT}.api.cognitive.microsoft.com/language/query-knowledgebases/projects/{PROJECT-NAME}?api-version=2021-10-01'
-```
-
-A successful call to delete a project results in an `Operation-Location` header being returned, which can be used to check the status of the delete project job. In most of our examples, we haven't needed to look at the response headers and thus haven't been displaying them. To retrieve the response headers, our curl command uses `-i`. Without this parameter prior to the endpoint address, the response to this command would appear empty as if no response occurred.
-
-### Example response
-
-```bash
-HTTP/2 202
-content-length: 0
-operation-location: https://southcentralus.api.cognitive.microsoft.com:443/language/query-knowledgebases/projects/sample-proj1/deletion-jobs/{JOB-ID-GUID}
-x-envoy-upstream-service-time: 324
-apim-request-id:
-strict-transport-security: max-age=31536000; includeSubDomains; preload
-x-content-type-options: nosniff
-date: Tue, 23 Nov 2021 20:56:18 GMT
-```
-
-If the project was already deleted or could not be found, you would receive a message like:
-
-```json
-{
- "error": {
- "code": "ProjectNotFound",
- "message": "The specified project was not found.",
- "details": [
- {
- "code": "ProjectNotFound",
- "message": "{GUID}"
- }
- ]
- }
-}
-```
-
-### Get project deletion status
-
-To check on the status of your delete project request:
-
-|Variable name | Value |
-|--|-|
-| `ENDPOINT` | This value can be found in the **Keys & Endpoint** section when examining your resource from the Azure portal. Alternatively, you can find the value in **Language Studio** > **question answering** > **Deploy project** > **Get prediction URL**. An example endpoint is: `https://southcentralus.api.cognitive.microsoft.com/`. If this was your endpoint in the code sample below, you would only need to add the region specific portion of `southcentral` as the rest of the endpoint path is already present.|
-| `API-KEY` | This value can be found in the **Keys & Endpoint** section when examining your resource from the Azure portal. You can use either Key1 or Key2. Always having two valid keys allows for secure key rotation with zero downtime. Alternatively you can find the value in **Language Studio** > **question answering** > **Deploy project** > **Get prediction URL**. The key value is part of the sample request.|
-| `PROJECT-NAME` | The name of project you would like to check on the deployment status for.|
-| `JOB-ID` | When you delete a project programmatically, a `JOB-ID` is generated as part of the `operation-location` response header to the deletion request. The `JOB-ID` is the guid at the end of the `operation-location`. For example: `operation-location: https://southcentralus.api.cognitive.microsoft.com:443/language/query-knowledgebases/projects/sample-proj1/deletion-jobs/{THIS GUID IS YOUR JOB ID}` |
-
-### Example query
-
-```bash
-curl -X GET -H "Ocp-Apim-Subscription-Key: {API-KEY}" -H "Content-Type: application/json" 'https://{ENDPOINT}.api.cognitive.microsoft.com/language/query-knowledgebases/projects/deletion-jobs/{JOB-ID}?api-version=2021-10-01'
-```
-
-### Example response
-
-```json
-{
- "createdDateTime": "2021-11-23T20:56:18+00:00",
- "expirationDateTime": "2021-11-24T02:56:18+00:00",
- "jobId": "GUID",
- "lastUpdatedDateTime": "2021-11-23T20:56:18+00:00",
- "status": "succeeded"
-}
-```
-
-## Get project settings
-
-To retrieve information about a given project, update the following values in the query below:
-
-|Variable name | Value |
-|--|-|
-| `ENDPOINT` | This value can be found in the **Keys & Endpoint** section when examining your resource from the Azure portal. Alternatively you can find the value in **Language Studio** > **question answering** > **Deploy project** > **Get prediction URL**. An example endpoint is: `https://southcentralus.api.cognitive.microsoft.com/`. If this was your endpoint in the code sample below, you would only need to add the region specific portion of `southcentral` as the rest of the endpoint path is already present.|
-| `API-KEY` | This value can be found in the **Keys & Endpoint** section when examining your resource from the Azure portal. You can use either Key1 or Key2. Always having two valid keys allows for secure key rotation with zero downtime. Alternatively you can find the value in **Language Studio** > **question answering** > **Deploy project** > **Get prediction URL**. The key value is part of the sample request.|
-| `PROJECT-NAME` | The name of project you would like to retrieve information about.|
-
-### Example query
-
-```bash
-
-curl -X GET -H "Ocp-Apim-Subscription-Key: {API-KEY}" -H "Content-Type: application/json" 'https://{ENDPOINT}.api.cognitive.microsoft.com/language/query-knowledgebases/projects/{PROJECT-NAME}?api-version=2021-10-01'
-```
-
-### Example response
-
-```json
- {
- "200": {
- "headers": {},
- "body": {
- "projectName": "proj1",
- "description": "proj1 is a test project.",
- "language": "en",
- "settings": {
- "defaultAnswer": "No good match found for your question in the project."
- },
- "createdDateTime": "2021-05-01T15:13:22Z",
- "lastModifiedDateTime": "2021-05-01T15:13:22Z",
- "lastDeployedDateTime": "2021-05-01T15:13:22Z"
- }
- }
- }
-```
-
-## Get question answer pairs
-
-To retrieve question answer pairs and related information for a given project, update the following values in the query below:
-
-|Variable name | Value |
-|--|-|
-| `ENDPOINT` | This value can be found in the **Keys & Endpoint** section when examining your resource from the Azure portal. Alternatively you can find the value in **Language Studio** > **question answering** > **Deploy project** > **Get prediction URL**. An example endpoint is: `https://southcentralus.api.cognitive.microsoft.com/`. If this was your endpoint in the code sample below, you would only need to add the region specific portion of `southcentral` as the rest of the endpoint path is already present.|
-| `API-KEY` | This value can be found in the **Keys & Endpoint** section when examining your resource from the Azure portal. You can use either Key1 or Key2. Always having two valid keys allows for secure key rotation with zero downtime. Alternatively you can find the value in **Language Studio** > **question answering** > **Deploy project** > **Get prediction URL**. The key value is part of the sample request.|
-| `PROJECT-NAME` | The name of project you would like to retrieve all the question answer pairs for.|
-
-### Example query
-
-```bash
-curl -X GET -H "Ocp-Apim-Subscription-Key: {API-KEY}" -H "Content-Type: application/json" 'https://{ENDPOINT}.api.cognitive.microsoft.com/language/query-knowledgebases/projects/{PROJECT-NAME}/qnas?api-version=2021-10-01'
-
-```
-
-### Example response
-
-```json
-{
- "200": {
- "headers": {},
- "body": {
- "value": [
- {
- "id": 1,
- "answer": "ans1",
- "source": "source1",
- "questions": [
- "question 1.1",
- "question 1.2"
- ],
- "metadata": {
- "k1": "v1",
- "k2": "v2"
- },
- "dialog": {
- "isContextOnly": false,
- "prompts": [
- {
- "displayOrder": 1,
- "qnaId": 11,
- "displayText": "prompt 1.1"
- },
- {
- "displayOrder": 2,
- "qnaId": 21,
- "displayText": "prompt 1.2"
- }
- ]
- },
- "lastUpdatedDateTime": "2021-05-01T17:21:14Z"
- },
- {
- "id": 2,
- "answer": "ans2",
- "source": "source2",
- "questions": [
- "question 2.1",
- "question 2.2"
- ],
- "lastUpdatedDateTime": "2021-05-01T17:21:14Z"
- }
- ]
- }
- }
- }
-```
-
-## Get sources
-
-To retrieve the sources and related information for a given project, update the following values in the query below:
-
-|Variable name | Value |
-|--|-|
-| `ENDPOINT` | This value can be found in the **Keys & Endpoint** section when examining your resource from the Azure portal. Alternatively you can find the value in **Language Studio** > **question answering** > **Deploy project** > **Get prediction URL**. An example endpoint is: `https://southcentralus.api.cognitive.microsoft.com/`. If this was your endpoint in the code sample below, you would only need to add the region specific portion of `southcentral` as the rest of the endpoint path is already present.|
-| `API-KEY` | This value can be found in the **Keys & Endpoint** section when examining your resource from the Azure portal. You can use either Key1 or Key2. Always having two valid keys allows for secure key rotation with zero downtime. Alternatively you can find the value in **Language Studio** > **question answering** > **Deploy project** > **Get prediction URL**. The key value is part of the sample request.|
-| `PROJECT-NAME` | The name of project you would like to retrieve all the source information for.|
-
-### Example query
-
-```bash
-curl -X GET -H "Ocp-Apim-Subscription-Key: {API-KEY}" -H "Content-Type: application/json" 'https://{ENDPOINT}.api.cognitive.microsoft.com/language/query-knowledgebases/projects/{PROJECT_NAME}/sources?api-version=2021-10-01'
-```
-
-### Example response
-
-```json
-{
- "200": {
- "headers": {},
- "body": {
- "value": [
- {
- "displayName": "source1",
- "sourceUri": "https://learn.microsoft.com/azure/cognitive-services/qnamaker/overview/overview",
- "sourceKind": "url",
- "lastUpdatedDateTime": "2021-05-01T15:13:22Z"
- },
- {
- "displayName": "source2",
- "sourceUri": "https://download.microsoft.com/download/2/9/B/29B20383-302C-4517-A006-B0186F04BE28/surface-pro-4-user-guide-EN.pdf",
- "sourceKind": "file",
- "contentStructureKind": "unstructured",
- "lastUpdatedDateTime": "2021-05-01T15:13:22Z"
- }
- ]
- }
- }
- }
-```
-
-## Get synonyms
-
-To retrieve synonyms and related information for a given project, update the following values in the query below:
-
-|Variable name | Value |
-|--|-|
-| `ENDPOINT` | This value can be found in the **Keys & Endpoint** section when examining your resource from the Azure portal. Alternatively you can find the value in **Language Studio** > **question answering** > **Deploy project** > **Get prediction URL**. An example endpoint is: `https://southcentralus.api.cognitive.microsoft.com/`. If this was your endpoint in the code sample below, you would only need to add the region specific portion of `southcentral` as the rest of the endpoint path is already present.|
-| `API-KEY` | This value can be found in the **Keys & Endpoint** section when examining your resource from the Azure portal. You can use either Key1 or Key2. Always having two valid keys allows for secure key rotation with zero downtime. Alternatively you can find the value in **Language Studio** > **question answering** > **Deploy project** > **Get prediction URL**. The key value is part of the sample request.|
-| `PROJECT-NAME` | The name of project you would like to retrieve synonym information for.|
-
-### Example query
-
-```bash
-
-curl -X GET -H "Ocp-Apim-Subscription-Key: {API-KEY}" -H "Content-Type: application/json" 'https://{ENDPOINT}.api.cognitive.microsoft.com/language/query-knowledgebases/projects/{PROJECT-NAME}/synonyms?api-version=2021-10-01'
-```
-
-### Example response
-
-```json
- {
- "200": {
- "headers": {},
- "body": {
- "value": [
- {
- "alterations": [
- "qnamaker",
- "qna maker"
- ]
- },
- {
- "alterations": [
- "botframework",
- "bot framework"
- ]
- }
- ]
- }
- }
- }
-```
-
-## Deploy project
-
-To deploy a project to production, update the following values in the query below:
-
-|Variable name | Value |
-|--|-|
-| `ENDPOINT` | This value can be found in the **Keys & Endpoint** section when examining your resource from the Azure portal. Alternatively you can find the value in **Language Studio** > **question answering** > **Deploy project** > **Get prediction URL**. An example endpoint is: `https://southcentralus.api.cognitive.microsoft.com/`. If this was your endpoint in the code sample below, you would only need to add the region specific portion of `southcentral` as the rest of the endpoint path is already present.|
-| `API-KEY` | This value can be found in the **Keys & Endpoint** section when examining your resource from the Azure portal. You can use either Key1 or Key2. Always having two valid keys allows for secure key rotation with zero downtime. Alternatively you can find the value in **Language Studio** > **question answering** > **Deploy project** > **Get prediction URL**. The key value is part of the sample request.|
-| `PROJECT-NAME` | The name of project you would like to deploy to production.|
-
-### Example query
-
-```bash
-curl -X PUT -H "Ocp-Apim-Subscription-Key: {API-KEY}" -H "Content-Type: application/json" -d '' -i 'https://{ENDPOINT}.api.cognitive.microsoft.com/language/query-knowledgebases/projects/{PROJECT-NAME}/deployments/production?api-version=2021-10-01'
-```
-
-A successful call to deploy a project results in an `Operation-Location` header being returned, which can be used to check the status of the deployment job. In most of our examples, we haven't needed to look at the response headers and thus haven't been displaying them. To retrieve the response headers, our curl command uses `-i`. Without this parameter prior to the endpoint address, the response to this command would appear empty as if no response occurred.
-
-### Example response
-
-```bash
-HTTP/2 202
-content-length: 0
-operation-location: https://southcentralus.api.cognitive.microsoft.com:443/language/query-knowledgebases/projects/sample-proj1/deployments/production/jobs/{JOB-ID-GUID}
-x-envoy-upstream-service-time: 31
-apim-request-id:
-strict-transport-security: max-age=31536000; includeSubDomains; preload
-x-content-type-options: nosniff
-date: Tue, 23 Nov 2021 20:35:00 GMT
-```
-
-### Get project deployment status
-
-|Variable name | Value |
-|--|-|
-| `ENDPOINT` | This value can be found in the **Keys & Endpoint** section when examining your resource from the Azure portal. Alternatively you can find the value in **Language Studio** > **question answering** > **Deploy project** > **Get prediction URL**. An example endpoint is: `https://southcentralus.api.cognitive.microsoft.com/`. If this was your endpoint in the code sample below, you would only need to add the region specific portion of `southcentral` as the rest of the endpoint path is already present.|
-| `API-KEY` | This value can be found in the **Keys & Endpoint** section when examining your resource from the Azure portal. You can use either Key1 or Key2. Always having two valid keys allows for secure key rotation with zero downtime. Alternatively you can find the value in **Language Studio** > **question answering** > **Deploy project** > **Get prediction URL**. The key value is part of the sample request.|
-| `PROJECT-NAME` | The name of project you would like to check on the deployment status for.|
-| `JOB-ID` | When you deploy a project programmatically, a `JOB-ID` is generated as part of the `operation-location` response header to the deployment request. The `JOB-ID` is the guid at the end of the `operation-location`. For example: `operation-location: https://southcentralus.api.cognitive.microsoft.com:443/language/query-knowledgebases/projects/sample-proj1/deployments/production/jobs/{THIS GUID IS YOUR JOB ID}` |
-
-### Example query
-
-```bash
-curl -X GET -H "Ocp-Apim-Subscription-Key: {API-KEY}" -H "Content-Type: application/json" -d '' 'https://{ENDPOINT}.api.cognitive.microsoft.com/language/query-knowledgebases/projects/{PROJECT-NAME}/deployments/production/jobs/{JOB-ID}?api-version=2021-10-01'
-```
-
-### Example response
-
-```json
- {
- "200": {
- "headers": {},
- "body": {
- "errors": [],
- "createdDateTime": "2021-05-01T17:21:14Z",
- "expirationDateTime": "2021-05-01T17:21:14Z",
- "jobId": "{JOB-ID-GUID}",
- "lastUpdatedDateTime": "2021-05-01T17:21:14Z",
- "status": "succeeded"
- }
- }
- }
-```
-
-## Export project metadata and assets
-
-|Variable name | Value |
-|--|-|
-| `ENDPOINT` | This value can be found in the **Keys & Endpoint** section when examining your resource from the Azure portal. Alternatively you can find the value in **Language Studio** > **question answering** > **Deploy project** > **Get prediction URL**. An example endpoint is: `https://southcentralus.api.cognitive.microsoft.com/`. If this was your endpoint in the code sample below, you would only need to add the region specific portion of `southcentral` as the rest of the endpoint path is already present.|
-| `API-KEY` | This value can be found in the **Keys & Endpoint** section when examining your resource from the Azure portal. You can use either Key1 or Key2. Always having two valid keys allows for secure key rotation with zero downtime. Alternatively you can find the value in **Language Studio** > **question answering** > **Deploy project** > **Get prediction URL**. The key value is part of the sample request.|
-| `PROJECT-NAME` | The name of project you would like to export.|
-
-### Example query
-
-```bash
-curl -X POST -H "Ocp-Apim-Subscription-Key: {API-KEY}" -H "Content-Type: application/json" -d '{"exportAssetTypes": ["qnas","synonyms"]}' -i 'https://{ENDPOINT}.api.cognitive.microsoft.com/language/query-knowledgebases/projects/{PROJECT-NAME}/:export?api-version=2021-10-01&format=tsv'
-```
-
-### Example response
-
-```bash
-HTTP/2 202
-content-length: 0
-operation-location: https://southcentralus.api.cognitive.microsoft.com:443/language/query-knowledgebases/projects/Sample-project/export/jobs/{JOB-ID_GUID}
-x-envoy-upstream-service-time: 214
-apim-request-id:
-strict-transport-security: max-age=31536000; includeSubDomains; preload
-x-content-type-options: nosniff
-date: Tue, 23 Nov 2021 21:24:03 GMT
-```
-
-### Check export status
-
-|Variable name | Value |
-|--|-|
-| `ENDPOINT` | This value can be found in the **Keys & Endpoint** section when examining your resource from the Azure portal. Alternatively you can find the value in **Language Studio** > **question answering** > **Deploy project** > **Get prediction URL**. An example endpoint is: `https://southcentralus.api.cognitive.microsoft.com/`. If this was your endpoint in the code sample below, you would only need to add the region specific portion of `southcentral` as the rest of the endpoint path is already present.|
-| `API-KEY` | This value can be found in the **Keys & Endpoint** section when examining your resource from the Azure portal. You can use either Key1 or Key2. Always having two valid keys allows for secure key rotation with zero downtime. Alternatively you can find the value in **Language Studio** > **question answering** > **Deploy project** > **Get prediction URL**. The key value is part of the sample request.|
-| `PROJECT-NAME` | The name of project you would like to check on the export status for.|
-| `JOB-ID` | When you export a project programmatically, a `JOB-ID` is generated as part of the `operation-location` response header to the export request. The `JOB-ID` is the guid at the end of the `operation-location`. For example: `operation-location: https://southcentralus.api.cognitive.microsoft.com/language/query-knowledgebases/projects/sample-proj1/export/jobs/{THIS GUID IS YOUR JOB ID}` |
-
-### Example query
-
-```bash
-curl -X GET -H "Ocp-Apim-Subscription-Key: {API-KEY}" -H "Content-Type: application/json" -d '' 'https://{ENDPOINT}.api.cognitive.microsoft.com/language/query-knowledgebases/projects/sample-proj1/export/jobs/{JOB-ID}?api-version=2021-10-01'
-```
-
-### Example response
-
-```json
-{
- "createdDateTime": "2021-11-23T21:24:03+00:00",
- "expirationDateTime": "2021-11-24T03:24:03+00:00",
- "jobId": "JOB-ID-GUID",
- "lastUpdatedDateTime": "2021-11-23T21:24:08+00:00",
- "status": "succeeded",
- "resultUrl": "https://southcentralus.api.cognitive.microsoft.com:443/language/query-knowledgebases/projects/sample-proj1/export/jobs/{JOB-ID_GUID}/result"
-}
-```
-
-If you try to access the resultUrl directly, you will get a 404 error. You must append `?api-version=2021-10-01` to the path to make it accessible by an authenticated request: `https://southcentralus.api.cognitive.microsoft.com:443/language/query-knowledgebases/projects/sample-proj1/export/jobs/{JOB-ID_GUID}/result?api-version=2021-10-01`
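-
-For example, here's a hedged sketch of downloading the exported file once the job succeeds. The URL is the `resultUrl` from the status response above with the API version appended; the `-o` flag and output file name are illustrative.
-
-```bash
-# Sketch: download the exported project (TSV format was requested in the export call)
-curl -X GET -H "Ocp-Apim-Subscription-Key: {API-KEY}" -o project-export.tsv 'https://southcentralus.api.cognitive.microsoft.com:443/language/query-knowledgebases/projects/sample-proj1/export/jobs/{JOB-ID_GUID}/result?api-version=2021-10-01'
-```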
-
-## Import project
-
-|Variable name | Value |
-|--|-|
-| `ENDPOINT` | This value can be found in the **Keys & Endpoint** section when examining your resource from the Azure portal. Alternatively you can find the value in **Language Studio** > **question answering** > **Deploy project** > **Get prediction URL**. An example endpoint is: `https://southcentralus.api.cognitive.microsoft.com/`. If this was your endpoint in the code sample below, you would only need to add the region specific portion of `southcentral` as the rest of the endpoint path is already present.|
-| `API-KEY` | This value can be found in the **Keys & Endpoint** section when examining your resource from the Azure portal. You can use either Key1 or Key2. Always having two valid keys allows for secure key rotation with zero downtime. Alternatively you can find the value in **Language Studio** > **question answering** > **Deploy project** > **Get prediction URL**. The key value is part of the sample request.|
-| `PROJECT-NAME` | The name of project you would like to be the destination for the import.|
-| `FILE-URI-PATH` | When you export a project programmatically and then check the status of the export, a `resultUrl` is generated as part of the response. For example: `"resultUrl": "https://southcentralus.api.cognitive.microsoft.com:443/language/query-knowledgebases/projects/sample-proj1/export/jobs/{JOB-ID_GUID}/result"`. You can use the resultUrl with the API version appended as a source file to import a project from: `https://southcentralus.api.cognitive.microsoft.com:443/language/query-knowledgebases/projects/sample-proj1/export/jobs/{JOB-ID_GUID}/result?api-version=2021-10-01`.|
-
-### Example query
-
-```bash
-curl -X POST -H "Ocp-Apim-Subscription-Key: {API-KEY}" -H "Content-Type: application/json" -d '{
- "fileUri": "FILE-URI-PATH"
- }' -i 'https://{ENDPOINT}.api.cognitive.microsoft.com/language/query-knowledgebases/projects/{PROJECT-NAME}/:import?api-version=2021-10-01&format=tsv'
-```
-
-A successful call to import a project results in an `Operation-Location` header being returned, which can be used to check the status of the import job. In many of our examples, we haven't needed to look at the response headers and thus haven't been displaying them. To retrieve the response headers our curl command uses `-i`. Without this additional parameter prior to the endpoint address, the response to this command would appear empty as if no response occurred.
-
-### Example response
-
-```bash
-HTTP/2 202
-content-length: 0
-operation-location: https://southcentralus.api.cognitive.microsoft.com:443/language/query-knowledgebases/projects/sample-proj1/import/jobs/{JOB-ID-GUID}
-x-envoy-upstream-service-time: 417
-apim-request-id:
-strict-transport-security: max-age=31536000; includeSubDomains; preload
-x-content-type-options: nosniff
-date: Wed, 24 Nov 2021 00:35:11 GMT
-```
-
-### Check import status
-
-|Variable name | Value |
-|--|-|
-| `ENDPOINT` | This value can be found in the **Keys & Endpoint** section when examining your resource from the Azure portal. Alternatively you can find the value in **Language Studio** > **question answering** > **Deploy project** > **Get prediction URL**. An example endpoint is: `https://southcentralus.api.cognitive.microsoft.com/`. If this was your endpoint in the code sample below, you would only need to add the region specific portion of `southcentral` as the rest of the endpoint path is already present.|
-| `API-KEY` | This value can be found in the **Keys & Endpoint** section when examining your resource from the Azure portal. You can use either Key1 or Key2. Always having two valid keys allows for secure key rotation with zero downtime. Alternatively you can find the value in **Language Studio** > **question answering** > **Deploy project** > **Get prediction URL**. The key value is part of the sample request.|
-| `PROJECT-NAME` | The name of project you would like to be the destination for the import.|
-| `JOB-ID` | When you import a project programmatically, a `JOB-ID` is generated as part of the `operation-location` response header to the export request. The `JOB-ID` is the GUID at the end of the `operation-location`. For example: `operation-location: https://southcentralus.api.cognitive.microsoft.com/language/query-knowledgebases/projects/sample-proj1/import/jobs/{THIS GUID IS YOUR JOB ID}` |
-
-### Example query
-
-```bash
-curl -X GET -H "Ocp-Apim-Subscription-Key: {API-KEY}" -H "Content-Type: application/json" -d '' 'https://{ENDPOINT}.api.cognitive.microsoft.com/language/query-knowledgebases/projects/{PROJECT-NAME}/import/jobs/{JOB-ID}?api-version=2021-10-01'
-```
-
-### Example response
-
-```json
-{
- "errors": [],
- "createdDateTime": "2021-05-01T17:21:14Z",
- "expirationDateTime": "2021-05-01T17:21:14Z",
- "jobId": "JOB-ID-GUID",
- "lastUpdatedDateTime": "2021-05-01T17:21:14Z",
- "status": "succeeded"
-}
-```
-
-## List deployments
-
-|Variable name | Value |
-|--|-|
-| `ENDPOINT` | This value can be found in the **Keys & Endpoint** section when examining your resource from the Azure portal. Alternatively you can find the value in **Language Studio** > **question answering** > **Deploy project** > **Get prediction URL**. An example endpoint is: `https://southcentralus.api.cognitive.microsoft.com/`. If this was your endpoint in the code sample below, you would only need to add the region specific portion of `southcentral` as the rest of the endpoint path is already present.|
-| `API-KEY` | This value can be found in the **Keys & Endpoint** section when examining your resource from the Azure portal. You can use either Key1 or Key2. Always having two valid keys allows for secure key rotation with zero downtime. Alternatively you can find the value in **Language Studio** > **question answering** > **Deploy project** > **Get prediction URL**. The key value is part of the sample request.|
-| `PROJECT-NAME` | The name of project you would like to generate a deployment list for.|
-
-### Example query
-
-```bash
-curl -X GET -H "Ocp-Apim-Subscription-Key: {API-KEY}" -H "Content-Type: application/json" -d '' 'https://{ENDPOINT}.api.cognitive.microsoft.com/language/query-knowledgebases/projects/{PROJECT-NAME}/deployments?api-version=2021-10-01'
-```
-
-### Example response
-
-```json
-[
- {
- "deploymentName": "production",
- "lastDeployedDateTime": "2021-10-26T15:12:02Z"
- }
-]
-```
-
-## List Projects
-
-Retrieve a list of all question answering projects your account has access to.
-
-|Variable name | Value |
-|--|-|
-| `ENDPOINT` | This value can be found in the **Keys & Endpoint** section when examining your resource from the Azure portal. Alternatively you can find the value in **Language Studio** > **question answering** > **Deploy project** > **Get prediction URL**. An example endpoint is: `https://southcentralus.api.cognitive.microsoft.com/`. If this was your endpoint in the code sample below, you would only need to add the region specific portion of `southcentral` as the rest of the endpoint path is already present.|
-| `API-KEY` | This value can be found in the **Keys & Endpoint** section when examining your resource from the Azure portal. You can use either Key1 or Key2. Always having two valid keys allows for secure key rotation with zero downtime. Alternatively you can find the value in **Language Studio** > **question answering** > **Deploy project** > **Get prediction URL**. The key value is part of the sample request.|
-
-### Example query
-
-```bash
-curl -X GET -H "Ocp-Apim-Subscription-Key: {API-KEY}" -H "Content-Type: application/json" -d '' 'https://{ENDPOINT}.api.cognitive.microsoft.com/language/query-knowledgebases/projects?api-version=2021-10-01'
-```
-
-### Example response
-
-```json
-{
- "value": [
- {
- "projectName": "Sample-project",
- "description": "My first question answering project",
- "language": "en",
- "multilingualResource": false,
- "createdDateTime": "2021-10-07T04:51:15Z",
- "lastModifiedDateTime": "2021-10-27T00:42:01Z",
- "lastDeployedDateTime": "2021-11-24T01:34:18Z",
- "settings": {
- "defaultAnswer": "No good match found in KB"
- }
- }
- ]
-}
-```
-
-## Update sources
-
-In this example, we add a new source to an existing project. You can also replace and delete existing sources with this command, depending on the operations you pass in the query body (a hedged delete sketch follows the example response below).
-
-|Variable name | Value |
-|--|-|
-| `ENDPOINT` | This value can be found in the **Keys & Endpoint** section when examining your resource from the Azure portal. Alternatively you can find the value in **Language Studio** > **question answering** > **Deploy project** > **Get prediction URL**. An example endpoint is: `https://southcentralus.api.cognitive.microsoft.com/`. If this was your endpoint in the code sample below, you would only need to add the region specific portion of `southcentral` as the rest of the endpoint path is already present.|
-| `API-KEY` | This value can be found in the **Keys & Endpoint** section when examining your resource from the Azure portal. You can use either Key1 or Key2. Always having two valid keys allows for secure key rotation with zero downtime. Alternatively you can find the value in **Language Studio** > **question answering** > **Deploy project** > **Get prediction URL**. The key value is part of the sample request.|
-| `PROJECT-NAME` | The name of project where you would like to update sources.|
-|`METHOD`| PATCH |
-
-### Example query
-
-```bash
-curl -X PATCH -H "Ocp-Apim-Subscription-Key: {API-KEY}" -H "Content-Type: application/json" -d '[
- {
- "op": "add",
- "value": {
- "displayName": "source5",
- "sourceKind": "url",
- "sourceUri": "https://download.microsoft.com/download/7/B/1/7B10C82E-F520-4080-8516-5CF0D803EEE0/surface-book-user-guide-EN.pdf",
- "sourceContentStructureKind": "semistructured"
- }
- }
-]' -i 'https://{ENDPOINT}.api.cognitive.microsoft.com/language/query-knowledgebases/projects/{PROJECT-NAME}/sources?api-version=2021-10-01'
-```
-
-A successful call to update a source results in an `Operation-Location` header being returned which can be used to check the status of the import job. In many of our examples, we haven't needed to look at the response headers and thus haven't always been displaying them. To retrieve the response headers our curl command uses `-i`. Without this parameter prior to the endpoint address, the response to this command would appear empty as if no response occurred.
-
-### Example response
-
-```bash
-HTTP/2 202
-content-length: 0
-operation-location: https://southcentralus.api.cognitive.microsoft.com:443/language/query-knowledgebases/projects/Sample-project/sources/jobs/{JOB_ID_GUID}
-x-envoy-upstream-service-time: 412
-apim-request-id: dda23d2b-f110-4645-8bce-1a6f8d504b33
-strict-transport-security: max-age=31536000; includeSubDomains; preload
-x-content-type-options: nosniff
-date: Wed, 24 Nov 2021 02:47:53 GMT
-```
-
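-As noted at the start of this section, the same request shape can also replace or delete a source. The following is a hedged sketch of removing the source added above; the `"op": "delete"` value and the fields used to identify the source are assumptions, so confirm them against the REST API reference before relying on this.
-
-```bash
-# Sketch: remove the source named "source5" that was added in the example above
-curl -X PATCH -H "Ocp-Apim-Subscription-Key: {API-KEY}" -H "Content-Type: application/json" -d '[
-  {
-    "op": "delete",
-    "value": {
-      "displayName": "source5",
-      "sourceKind": "url",
-      "sourceUri": "https://download.microsoft.com/download/7/B/1/7B10C82E-F520-4080-8516-5CF0D803EEE0/surface-book-user-guide-EN.pdf"
-    }
-  }
-]' -i 'https://{ENDPOINT}.api.cognitive.microsoft.com/language/query-knowledgebases/projects/{PROJECT-NAME}/sources?api-version=2021-10-01'
-```
-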
-### Get update source status
-
-|Variable name | Value |
-|--|-|
-| `ENDPOINT` | This value can be found in the **Keys & Endpoint** section when examining your resource from the Azure portal. Alternatively, you can find the value in **Language Studio** > **question answering** > **Deploy project** > **Get prediction URL**. An example endpoint is: `https://southcentralus.api.cognitive.microsoft.com/`. If this was your endpoint in the code sample below, you would only need to add the region specific portion of `southcentral` as the rest of the endpoint path is already present.|
-| `API-KEY` | This value can be found in the **Keys & Endpoint** section when examining your resource from the Azure portal. You can use either Key1 or Key2. Always having two valid keys allows for secure key rotation with zero downtime. Alternatively you can find the value in **Language Studio** > **question answering** > **Deploy project** > **Get prediction URL**. The key value is part of the sample request.|
-| `PROJECT-NAME` | The name of project you would like to be the destination for the import.|
-| `JOB-ID` | When you update a source programmatically, a `JOB-ID` is generated as part of the `operation-location` response header to the update source request. The `JOB-ID` is the GUID at the end of the `operation-location`. For example: `operation-location: https://southcentralus.api.cognitive.microsoft.com/language/query-knowledgebases/projects/sample-proj1/sources/jobs/{THIS GUID IS YOUR JOB ID}` |
-
-### Example query
-
-```bash
-curl -X GET -H "Ocp-Apim-Subscription-Key: {API-KEY}" -H "Content-Type: application/json" -d '' 'https://{ENDPOINT}.api.cognitive.microsoft.com/language/query-knowledgebases/projects/{PROJECT-NAME}/sources/jobs/{JOB-ID}?api-version=2021-10-01'
-```
-
-### Example response
-
-```json
-{
- "createdDateTime": "2021-11-24T02:47:53+00:00",
- "expirationDateTime": "2021-11-24T08:47:53+00:00",
- "jobId": "{JOB-ID-GUID}",
- "lastUpdatedDateTime": "2021-11-24T02:47:56+00:00",
- "status": "succeeded",
- "resultUrl": "/knowledgebases/Sample-project"
-}
-
-```
-
-## Update question and answer pairs
-
-In this example, we add a question answer pair to an existing source. You can also modify or delete existing question answer pairs with this query, depending on the operation you pass in the query body (a hedged delete sketch follows the example response below). If you don't have a source named `source5`, this example query will fail. You can adjust the source value in the body of the query to a source that exists for your target project.
-
-|Variable name | Value |
-|--|-|
-| `ENDPOINT` | This value can be found in the **Keys & Endpoint** section when examining your resource from the Azure portal. Alternatively, you can find the value in **Language Studio** > **question answering** > **Deploy project** > **Get prediction URL**. An example endpoint is: `https://southcentralus.api.cognitive.microsoft.com/`. If this was your endpoint in the code sample below, you would only need to add the region-specific portion, `southcentralus`, as the rest of the endpoint path is already present.|
-| `API-KEY` | This value can be found in the **Keys & Endpoint** section when examining your resource from the Azure portal. You can use either Key1 or Key2. Always having two valid keys allows for secure key rotation with zero downtime. Alternatively you can find the value in **Language Studio** > **question answering** > **Deploy project** > **Get prediction URL**. The key value is part of the sample request.|
-| `PROJECT-NAME` | The name of project you would like to be the destination for the import.|
-
-```bash
-curl -X PATCH -H "Ocp-Apim-Subscription-Key: {API-KEY}" -H "Content-Type: application/json" -d '[
- {
- "op": "add",
- "value":{
- "id": 1,
- "answer": "The latest question answering docs are on https://learn.microsoft.com",
- "source": "source5",
- "questions": [
- "Where do I find docs for question answering?"
- ],
- "metadata": {},
- "dialog": {
- "isContextOnly": false,
- "prompts": []
- }
- }
- }
-]' -i 'https://{ENDPOINT}.api.cognitive.microsoft.com/language/query-knowledgebases/projects/{PROJECT-NAME}/qnas?api-version=2021-10-01'
-```
-
-A successful call to update a question answer pair returns an `Operation-Location` header, which you can use to check the status of the update job. Because many of our examples don't need to inspect response headers, they aren't always displayed. To retrieve the response headers, this curl command uses `-i`; without that parameter before the endpoint address, the response would appear empty, as if no response occurred.
-
-### Example response
-
-```bash
-HTTP/2 202
-content-length: 0
-operation-location: https://southcentralus.api.cognitive.microsoft.com:443/language/query-knowledgebases/projects/Sample-project/qnas/jobs/{JOB-ID-GUID}
-x-envoy-upstream-service-time: 507
-apim-request-id:
-strict-transport-security: max-age=31536000; includeSubDomains; preload
-x-content-type-options: nosniff
-date: Wed, 24 Nov 2021 03:16:01 GMT
-```
-
-### Get update question answer pairs status
-
-|Variable name | Value |
-|--|-|
-| `ENDPOINT` | This value can be found in the **Keys & Endpoint** section when examining your resource from the Azure portal. Alternatively, you can find the value in **Language Studio** > **question answering** > **Deploy project** > **Get prediction URL**. An example endpoint is: `https://southcentralus.api.cognitive.microsoft.com/`. If this was your endpoint in the code sample below, you would only need to add the region-specific portion, `southcentralus`, as the rest of the endpoint path is already present.|
-| `API-KEY` | This value can be found in the **Keys & Endpoint** section when examining your resource from the Azure portal. You can use either Key1 or Key2. Always having two valid keys allows for secure key rotation with zero downtime. Alternatively you can find the value in **Language Studio** > **question answering** > **Deploy project** > **Get prediction URL**. The key value is part of the sample request.|
-| `PROJECT-NAME` | The name of project you would like to be the destination for the question answer pairs updates.|
-| `JOB-ID` | When you update a question answer pair programmatically, a `JOB-ID` is generated as part of the `operation-location` response header to the update request. The `JOB-ID` is the GUID at the end of the `operation-location`. For example: `operation-location: https://southcentralus.api.cognitive.microsoft.com/language/query-knowledgebases/projects/sample-proj1/qnas/jobs/{THIS GUID IS YOUR JOB ID}` |
-
-### Example query
-
-```bash
-curl -X GET -H "Ocp-Apim-Subscription-Key: {API-KEY}" -H "Content-Type: application/json" -d '' 'https://{ENDPOINT}.api.cognitive.microsoft.com/language/query-knowledgebases/projects/{PROJECT-NAME}/qnas/jobs/{JOB-ID}?api-version=2021-10-01'
-```
-
-### Example response
-
-```bash
-{
-  "createdDateTime": "2021-11-24T03:16:01+00:00",
-  "expirationDateTime": "2021-11-24T09:16:01+00:00",
-  "jobId": "{JOB-ID-GUID}",
-  "lastUpdatedDateTime": "2021-11-24T03:16:06+00:00",
-  "status": "succeeded",
-  "resultUrl": "/knowledgebases/Sample-project"
-}
-```
-
-## Update Synonyms
-
-|Variable name | Value |
-|--|-|
-| `ENDPOINT` | This value can be found in the **Keys & Endpoint** section when examining your resource from the Azure portal. Alternatively, you can find the value in **Language Studio** > **question answering** > **Deploy project** > **Get prediction URL**. An example endpoint is: `https://southcentralus.api.cognitive.microsoft.com/`. If this was your endpoint in the code sample below, you would only need to add the region-specific portion, `southcentralus`, as the rest of the endpoint path is already present.|
-| `API-KEY` | This value can be found in the **Keys & Endpoint** section when examining your resource from the Azure portal. You can use either Key1 or Key2. Always having two valid keys allows for secure key rotation with zero downtime. Alternatively you can find the value in **Language Studio** > **question answering** > **Deploy project** > **Get prediction URL**. The key value is part of the sample request.|
-| `PROJECT-NAME` | The name of project you would like to add synonyms.|
-
-### Example query
-
-```bash
-curl -X PUT -H "Ocp-Apim-Subscription-Key: {API-KEY}" -H "Content-Type: application/json" -d '{
-"value": [
- {
- "alterations": [
- "qnamaker",
- "qna maker"
- ]
- },
- {
- "alterations": [
- "botframework",
- "bot framework"
- ]
- }
- ]
-}' -i 'https://{ENDPOINT}.api.cognitive.microsoft.com/language/query-knowledgebases/projects/{PROJECT-NAME}/synonyms?api-version=2021-10-01'
-```
-
-### Example response
-
-```bash
-HTTP/2 200
-content-length: 17
-content-type: application/json; charset=utf-8
-x-envoy-upstream-service-time: 39
-apim-request-id: 5deb2692-dac8-43a8-82fe-36476e407ef6
-strict-transport-security: max-age=31536000; includeSubDomains; preload
-x-content-type-options: nosniff
-date: Wed, 24 Nov 2021 03:59:09 GMT
-
-{
- "value": []
-}
-```
-
-## Update active learning feedback
-
-|Variable name | Value |
-|--|-|
-| `ENDPOINT` | This value can be found in the **Keys & Endpoint** section when examining your resource from the Azure portal. Alternatively, you can find the value in **Language Studio** > **question answering** > **Deploy project** > **Get prediction URL**. An example endpoint is: `https://southcentralus.api.cognitive.microsoft.com/`. If this was your endpoint in the code sample below, you would only need to add the region-specific portion, `southcentralus`, as the rest of the endpoint path is already present.|
-| `API-KEY` | This value can be found in the **Keys & Endpoint** section when examining your resource from the Azure portal. You can use either Key1 or Key2. Always having two valid keys allows for secure key rotation with zero downtime. Alternatively you can find the value in **Language Studio** > **question answering** > **Deploy project** > **Get prediction URL**. The key value is part of the sample request.|
-| `PROJECT-NAME` | The name of project you would like to be the destination for the active learning feedback updates.|
-
-### Example query
-
-```bash
-curl -X POST -H "Ocp-Apim-Subscription-Key: {API-KEY}" -H "Content-Type: application/json" -d '{
-"records": [
- {
- "userId": "user1",
- "userQuestion": "hi",
- "qnaId": 1
- },
- {
- "userId": "user1",
- "userQuestion": "hello",
- "qnaId": 2
- }
- ]
-}' -i 'https://{ENDPOINT}.api.cognitive.microsoft.com/language/query-knowledgebases/projects/{PROJECT-NAME}/feedback?api-version=2021-10-01'
-```
-
-### Example response
-
-```bash
-HTTP/2 204
-x-envoy-upstream-service-time: 37
-apim-request-id: 92225e03-e83f-4c7f-b35a-223b1b0f29dd
-strict-transport-security: max-age=31536000; includeSubDomains; preload
-x-content-type-options: nosniff
-date: Wed, 24 Nov 2021 04:02:56 GMT
-```
cognitive-services Best Practices https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/question-answering/how-to/best-practices.md
- Title: Project best practices
-description: Best practices for Question Answering
-----
-recommendations: false
Previously updated : 06/03/2022--
-# Project best practices
-
-The following list of QnA pairs represents a sample project and is used to highlight best practices when authoring in custom question answering.
-
-|Question |Answer |
-|-|-|
-|I want to buy a car. |There are three options for buying a car. |
-|I want to purchase software license. |Software licenses can be purchased online at no cost. |
-|How to get access to WPA? |WPA can be accessed via the company portal. |
-|What is the price of Microsoft stock?|$200. |
-|How do I buy Microsoft Services? |Microsoft services can be bought online. |
-|I want to sell car. |Please send car pictures and documents. |
-|How do I get an identification card? |Apply via company portal to get an identification card.|
-|How do I use WPA? |WPA is easy to use with the provided manual. |
-|What is the utility of WPA? |WPA provides a secure way to access company resources. |
-
-## When should you add alternate questions to a QnA?
-
-- Question answering employs a transformer-based ranker that handles user queries that are semantically similar to questions in the project. For example, consider the following question answer pair:
-
- **Question: "What is the price of Microsoft Stock?"**
-
- **Answer: "$200".**
-
- The service can return expected responses for semantically similar queries such as:
-
- "How much is Microsoft stock worth?"
-
- "How much is Microsoft's share value?"
-
- "How much does a Microsoft share cost?"
-
- "What is the market value of Microsoft stock?"
-
- "What is the market value of a Microsoft share?"
-
- However, please note that the confidence score with which the system returns the correct response will vary based on the input query and how different it is from the original question answer pair.
-
-- There are certain scenarios that require the customer to add an alternate question. When a query doesn't return the correct answer even though the answer is present in the project, we advise adding that query as an alternate question to the intended QnA pair.
-
-## How many alternate questions per QnA is optimal?
-
-- Users can add up to 10 alternate questions, depending on their scenario. Alternate questions beyond the first 10 aren't considered by our core ranker, but they are evaluated in the other processing layers, resulting in better output overall. All the alternate questions are considered in the preprocessing step to look for an exact match.
-- Semantic understanding in question answering should be able to take care of similar alternate questions.
-- The return on investment starts diminishing once you exceed 10 questions. Even if you're adding more than 10 alternate questions, try to make the initial 10 questions as semantically dissimilar as possible so that all intents for the answer are captured by these 10 questions. For the project above, in QnA #1, adding alternate questions such as "How can I buy a car?" or "I wanna buy a car." isn't required, whereas adding alternate questions such as "How to purchase a car." or "What are the options for buying a vehicle?" can be useful. A sketch of adding such alternate questions programmatically follows this list.
-
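
The following is a minimal sketch of supplying alternate questions programmatically, reusing the question answer pair PATCH request shown earlier in this document; the `id`, `source`, and answer text are hypothetical and should be adjusted to your own project:

```bash
curl -X PATCH -H "Ocp-Apim-Subscription-Key: {API-KEY}" -H "Content-Type: application/json" -d '[
  {
    "op": "add",
    "value": {
      "id": 1,
      "answer": "There are three options for buying a car.",
      "source": "Editorial",
      "questions": [
        "I want to buy a car.",
        "How to purchase a car.",
        "What are the options for buying a vehicle?"
      ],
      "metadata": {},
      "dialog": {
        "isContextOnly": false,
        "prompts": []
      }
    }
  }
]' -i 'https://{ENDPOINT}.api.cognitive.microsoft.com/language/query-knowledgebases/projects/{PROJECT-NAME}/qnas?api-version=2021-10-01'
```
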
-## When to add synonyms to a project
-
-- Question answering provides the flexibility to use synonyms at the project level, unlike QnA Maker, where synonyms are shared across projects for the entire service.
-- For better relevance, the customer needs to provide a list of acronyms that the end user intends to use interchangeably. For instance, the following is a list of acceptable acronyms (a sketch of adding them programmatically appears at the end of this section):
-
- MSFT - Microsoft
-
- ID - Identification
-
- ETA - Estimated time of Arrival
-
-- Apart from acronyms, if you think your words are similar in the context of a particular domain and generic language models won't consider them similar, it's better to add them as synonyms. For instance, if an auto company producing a car model X receives queries such as "my car's audio isn't working" and the project has questions on "fixing audio for car X", then we need to add "X" and "car" as synonyms.
-- The transformer-based model already takes care of most common synonym cases, for example: Purchase - Buy, Sell - Auction, Price - Value. Consider the following QnA pair: Q: "What is the price of Microsoft Stock?" A: "$200".
-
-If we receive user queries like "Microsoft stock value", "Microsoft share value", "Microsoft stock worth", "Microsoft share worth", or "stock value", they should get the correct answer even though these queries contain words like share, value, and worth that aren't originally present in the knowledge base.
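
As a programmatic illustration only, the acronyms above could be added with the synonyms request shown earlier in this document, assuming the same `{API-KEY}`, `{ENDPOINT}`, and `{PROJECT-NAME}` placeholders:

```bash
curl -X PUT -H "Ocp-Apim-Subscription-Key: {API-KEY}" -H "Content-Type: application/json" -d '{
  "value": [
    { "alterations": [ "MSFT", "Microsoft" ] },
    { "alterations": [ "ID", "Identification" ] },
    { "alterations": [ "ETA", "Estimated time of Arrival" ] }
  ]
}' -i 'https://{ENDPOINT}.api.cognitive.microsoft.com/language/query-knowledgebases/projects/{PROJECT-NAME}/synonyms?api-version=2021-10-01'
```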
-
-## How are lowercase/uppercase characters treated?
-
-Question answering takes casing into account, but it's intelligent enough to understand when casing should be ignored. You shouldn't see any perceivable difference due to incorrect casing.
-
-## How are QnAs prioritized for multi-turn questions?
-
-When a KB has hierarchical relationships (either added manually or via extraction) and the previous response was an answer related to other QnAs, for the next query we give slight preference to all the child QnAs, sibling QnAs, and grandchild QnAs, in that order. Along with any query, the [Question Answering API](/rest/api/cognitiveservices/questionanswering/question-answering/get-answers) expects a "context" object with the property "previousQnAId", which denotes the last top answer. Based on this previous QnA ID, all the related QnAs are boosted.
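
For illustration, a multi-turn query might look like the following sketch. The runtime URL shape, deployment name, and exact property casing here are assumptions; verify them against the Get Answers REST reference linked above and the prediction URL shown in Language Studio:

```bash
# Hypothetical multi-turn request: the "context" object carries the previous
# top answer's QnA ID so its related prompts and child QnAs are boosted.
curl -X POST -H "Ocp-Apim-Subscription-Key: {API-KEY}" -H "Content-Type: application/json" -d '{
  "question": "How do I use it?",
  "top": 3,
  "context": {
    "previousQnAId": 8,
    "previousUserQuery": "How to get access to WPA?"
  }
}' 'https://{ENDPOINT}.api.cognitive.microsoft.com/language/:query-knowledgebases?projectName={PROJECT-NAME}&deploymentName=production&api-version=2021-10-01'
```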
-
-## How are accents treated?
-
-Accents are supported for all major European languages. If the query has an incorrect accent, the confidence score might be slightly different, but the service still returns the relevant answer and takes care of minor errors by leveraging fuzzy search.
-
-## How is punctuation in a user query treated?
-
-Punctuation is ignored in the user query before it is sent to the ranking stack, so ideally it should not impact the relevance scores. The following punctuation characters are ignored: ,?:;\"'(){}[]-+。./!*؟
-
-## Next steps
-
cognitive-services Change Default Answer https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/question-answering/how-to/change-default-answer.md
- Title: Get default answer - custom question answering
-description: The default answer is returned when there is no match to the question. You may want to change the default answer from the standard default answer in custom question answering.
--- Previously updated : 11/02/2021-----
-# Change default answer for question answering
-
-The default answer for a project is meant to be returned when an answer is not found. If you are using a client application, such as the [Azure Bot service](/azure/bot-service/bot-builder-howto-qna), it may also have a separate default answer, indicating no answer met the score threshold.
-
-## Default answer
--
-|Default answer|Description of answer|
-|--|--|
-|KB answer when no answer is determined|`No good match found in KB.` - When the question answering API finds no matching answer to the question, it displays a default text response. In custom question answering, you can set this text in the **Settings** of your project. |
-
-### Client application integration
-
-For a client application, such as a bot with the [Azure Bot service](/azure/bot-service/bot-builder-howto-qna), you can choose from the following scenarios:
-
-* Use your project's setting
-* Use different text in the client application to distinguish when an answer is returned but doesn't meet the score threshold. This text can either be static text stored in code, or can be stored in the client application's settings list.
-
-## Change default answer in Language Studio
-
-The project default answer is returned when no answer is returned from question answering.
-
-1. Sign in to the [Language Studio](https://language.azure.com). Go to Custom question answering and select your project from the list.
-1. Select **Settings** from the left navigation bar.
-1. Change the value of **Default answer when no answer found** > Select **Save**.
-
-> [!div class="mx-imgBorder"]
-> ![Screenshot of project settings with red box around the default answer](../media/change-default-answer/settings.png)
-
-## Next steps
-
-* [Create a project](manage-knowledge-base.md)
cognitive-services Chit Chat https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/question-answering/how-to/chit-chat.md
- Title: Adding chitchat to a custom question answering project-
-description: Adding personal chitchat to your bot makes it more conversational and engaging when you create a project. Custom question answering allows you to easily add a pre-populated set of top chitchat into your projects.
----- Previously updated : 11/02/2021---
-# Use chitchat with a project
-
-Adding chitchat to your bot makes it more conversational and engaging. The chitchat feature in custom question answering allows you to easily add a pre-populated set of top chitchat into your project. This can be a starting point for your bot's personality, and it will save you the time and cost of writing it from scratch.
-
-This dataset has about 100 scenarios of chitchat in the voice of multiple personas, like Professional, Friendly and Witty. Choose the persona that most closely resembles your bot's voice. Given a user query, question answering tries to match it with the closest known chitchat question and answer.
-
-Some examples of the different personalities are below. You can see all the personality [datasets](https://github.com/microsoft/botframework-cli/blob/main/packages/qnamaker/docs/chit-chat-dataset.md) along with details of the personalities.
-
-For the user query of `When is your birthday?`, each personality has a styled response:
-
-<!-- added quotes so acrolinx doesn't score these sentences -->
-|Personality|Example|
-|--|--|
-|Professional|Age doesn't really apply to me.|
-|Friendly|I don't really have an age.|
-|Witty|I'm age-free.|
-|Caring|I don't have an age.|
-|Enthusiastic|I'm a bot, so I don't have an age.|
-||
-
-## Language support
-
-Chitchat data sets are supported in the following languages:
-
-|Language|
-|--|
-|Chinese|
-|English|
-|French|
-|German|
-|Italian|
-|Japanese|
-|Korean|
-|Portuguese|
-|Spanish|
-
-## Add chitchat source
-After you create your project, you can add sources from URLs and files, as well as chitchat, from the **Manage sources** pane.
-
-> [!div class="mx-imgBorder"]
-> ![Add source chitchat](../media/chit-chat/add-source.png)
-
-Choose the personality that you want as your chitchat base.
-
-> [!div class="mx-imgBorder"]
-> ![Menu of different chitchat personalities](../media/chit-chat/personality.png)
-
-## Edit your chitchat questions and answers
-
-When you edit your project, you will see a new source for chitchat, based on the personality you selected. You can now add alternate questions or edit the responses, just like with any other source.
-
-> [!div class="mx-imgBorder"]
-> ![Edit chitchat question pairs](../media/chit-chat/edit-chit-chat.png)
-
-To turn the views for context and metadata on and off, select **Show columns** in the toolbar.
-
-## Add more chitchat questions and answers
-
-You can add a new chitchat question pair that is not in the predefined data set. Ensure that you are not duplicating a question pair that is already covered in the chitchat set. When you add any new chitchat question pair, it gets added to your **Editorial** source. To ensure the ranker understands that this is chitchat, add the metadata key/value pair "Editorial: chitchat", as seen in the following image:
--
-## Delete chitchat from your project
-
-Select the **Manage sources** pane, and choose your chitchat source. Your specific chitchat source is listed as a TSV file, with the selected personality name. Select **Delete** from the toolbar.
-
-> [!div class="mx-imgBorder"]
-> ![Delete chitchat source](../media/chit-chat/delete-chit-chat.png)
-
-## Next steps
--
cognitive-services Configure Resources https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/question-answering/how-to/configure-resources.md
- Title: Configure Question Answering service
-description: This document outlines advanced configurations for custom question answering enabled resources.
--- Previously updated : 11/02/2021---
-# Configure custom question answering enabled resources
-
-You can configure question answering to use a different Cognitive Search resource.
-
-## Change Cognitive Search resource
-
-> [!WARNING]
-> If you change the Azure Search service associated with your language resource, you will lose access to all the projects already present in it. Make sure you export the existing projects before you change the Azure Search service.
-
-If you create a language resource and its dependencies (such as Search) through the Azure portal, a Search service is created for you and linked to the language resource. After these resources are created, you can update the Search resource in the **Features** tab.
-
-1. Go to your language resource in the Azure portal.
-
-2. Select **Features** and select the Azure Cognitive Search service you want to link with your language resource.
-
- > [!NOTE]
- > Your Language resource will retain your Azure Cognitive Search keys. If you update your search resource (for example, regenerating your keys), you will need to select **Update Azure Cognitive Search keys for the current search service**.
-
- > [!div class="mx-imgBorder"]
- > ![Add QnA to TA](../media/configure-resources/update-custom-feature.png)
-
-3. Select **Save**.
-
-## Next steps
-
-* [Encrypt data at rest](./encrypt-data-at-rest.md)
cognitive-services Create Test Deploy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/question-answering/how-to/create-test-deploy.md
- Title: Create, test, and deploy your question answering project
-description: You can create a question answering project from your own content, such as FAQs or product manuals. This article includes an example of creating a question answering project from a simple FAQ webpage, to answer questions.
--- Previously updated : 11/02/2021---
-# Create, test, and deploy a custom question answering project
-
-You can create a question answering project from your own content, such as FAQs or product manuals. This article includes an example of creating a question answering project from a product manual, to answer questions.
-
-## Prerequisites
-
-> [!div class="checklist"]
-> * If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/cognitive-services/) before you begin.
-> * A [language resource](https://aka.ms/create-language-resource) with the custom question answering feature enabled.
-
-## Create your first question answering project
-
-1. Sign in to the [Language Studio](https://language.azure.com/) with your Azure credentials.
-
-2. Scroll down to the **Understand questions and conversational language** section and select **Open custom question answering**.
-
- > [!div class="mx-imgBorder"]
- > ![Open custom question answering](../media/create-test-deploy/open-custom-question-answering.png)
-
-3. If your resource is not yet connected to Azure Search, select **Connect to Azure Search**. This will open a new browser tab to the **Features** pane of your resource in the Azure portal.
-
- > [!div class="mx-imgBorder"]
- > ![Connect to Azure Search](../media/create-test-deploy/connect-to-azure-search.png)
-
-4. Select **Enable custom question answering**, choose the Azure Search resource to link to, and then select **Apply**.
-
- > [!div class="mx-imgBorder"]
- > ![Enable custom question answering](../media/create-test-deploy/enable-custom-question-answering.png)
-
-5. Return to the Language Studio tab. You may need to refresh this page for it to register the change to your resource. Select **Create new project**.
-
-6. Choose the option **I want to set the language for all projects created in this resource** > select **English** > Select **Next**.
-
-7. Enter a project name of **Sample-project**, a description of **My first question answering project**, and leave the default answer with a setting of **No answer found**.
-
-8. Review your choices and select **Create project**
-
-9. From the **Manage sources** page select **Add source** > **URLS**.
-
-10. Select **Add url**, enter the following values, and then select **Add all**:
-
- |URL Name|URL Value|
- |--||
- |Surface Book User Guide |https://download.microsoft.com/download/7/B/1/7B10C82E-F520-4080-8516-5CF0D803EEE0/surface-book-user-guide-EN.pdf |
-
- The extraction process takes a few moments to read the document and identify questions and answers. Question answering will determine if the underlying content is structured or unstructured.
-
- After successfully adding the source, you can then edit the source contents to add more custom question answer sets.
-
-## Test your project
-
-1. Select the link to your source. This will open the edit project page.
-
-2. Select **Test** from the menu bar > Enter the question **How do I set up my Surface Book?**. An answer will be generated based on the question answer pairs that were automatically identified and extracted from your source URL:
-
- > [!div class="mx-imgBorder"]
- > ![Test question chat interface](../media/create-test-deploy/test-question.png)
-
- If you check the box for **include short answer response** you will also see a precise answer, if available, along with the answer passage in the test pane when you ask a question.
-
-3. Select **Inspect** to examine the response in more detail. The test window is used to test your changes to your project before deploying your project.
-
- > [!div class="mx-imgBorder"]
- > ![See the confidence interval](../media/create-test-deploy/inspect-test.png)
-
- From the **Inspect** interface, you can see the level of confidence that this response will answer the question and directly edit a given question and answer response pair.
-
-## Deploy your project
-
-1. Select the Deploy project icon to enter the deploy project menu.
-
- > [!div class="mx-imgBorder"]
- > ![Deploy project](../media/create-test-deploy/deploy-knowledge-base.png)
-
- When you deploy a project, the contents of your project move from the `test` index to a `prod` index in Azure Search.
-
-2. Select **Deploy** > and then when prompted select **Deploy** again.
-
- > [!div class="mx-imgBorder"]
- > ![Successful deployment](../media/create-test-deploy/successful-deployment.png)
-
- Your project is now successfully deployed. You can use the endpoint to answer questions in your own custom application or in a bot, as in the sketch below.
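
If you want to call the deployed endpoint directly, the request resembles the following sketch. Copy the exact URL and headers from **Deploy project** > **Get prediction URL** in Language Studio; the `{PREDICTION-URL}` and `{API-KEY}` placeholders here are stand-ins for those values:

```bash
# Hypothetical sketch: ask the deployed project a question. {PREDICTION-URL} is the
# full prediction URL shown in Language Studio, including project and deployment names.
curl -X POST -H "Ocp-Apim-Subscription-Key: {API-KEY}" -H "Content-Type: application/json" -d '{
  "question": "How do I set up my Surface Book?",
  "top": 1
}' '{PREDICTION-URL}'
```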
-
-## Clean up resources
-
-If you will not continue to test custom question answering, you can delete the associated resource.
-
-## Next steps
--
cognitive-services Encrypt Data At Rest https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/question-answering/how-to/encrypt-data-at-rest.md
- Title: Custom question answering encryption of data at rest-
-description: Microsoft offers Microsoft-managed encryption keys, and also lets you manage your Cognitive Services subscriptions with your own keys, called customer-managed keys (CMK). This article covers data encryption at rest for custom question answering, and how to enable and manage CMK.
----- Previously updated : 06/03/2022----
-# Custom question answering encryption of data at rest
-
-Question answering automatically encrypts your data when it is persisted to the cloud, helping to meet your organizational security and compliance goals.
-
-## About encryption key management
-
-By default, your subscription uses Microsoft-managed encryption keys. There is also the option to manage your resource with your own keys called customer-managed keys (CMK). CMK offers greater flexibility to create, rotate, disable, and revoke access controls. You can also audit the encryption keys used to protect your data. If CMK is configured for your subscription, double encryption is provided, which offers a second layer of protection, while allowing you to control the encryption key through your Azure Key Vault.
-
-Question answering uses the CMK support from Azure Search and associates the provided CMK to encrypt the data stored in the Azure Search index. Please follow the steps listed in [this article](../../../../search/search-security-manage-encryption-keys.md) to configure Key Vault access for the Azure Search service.
-
-> [!IMPORTANT]
-> Your Azure Search service resource must have been created after January 2019 and cannot be in the free (shared) tier. There is no support to configure customer-managed keys in the Azure portal.
-
-## Enable customer-managed keys
-
-Follow these steps to enable CMKs:
-
-1. Go to the **Encryption** tab of your language resource with custom question answering enabled.
-2. Select the **Customer Managed Keys** option. Provide the details of your [customer-managed keys](../../../../storage/common/customer-managed-keys-configure-key-vault.md?tabs=portal) and select **Save**.
-
-> [!div class="mx-imgBorder"]
-> ![Question Answering CMK](../media/encrypt-data-at-rest/question-answering-cmk.png)
-
-3. On a successful save, the CMK will be used to encrypt the data stored in the Azure Search Index.
-
-> [!IMPORTANT]
-> It is recommended to set your CMK in a fresh Azure Cognitive Search service before any projects are created. If you set CMK in a language resource with existing projects, you might lose access to them. Read more about [working with encrypted content](../../../../search/search-security-manage-encryption-keys.md#work-with-encrypted-content) in Azure Cognitive search.
-
-> [!NOTE]
-> To request the ability to use customer-managed keys, fill out and submit the [Cognitive Services Customer-Managed Key Request Form](https://aka.ms/cogsvc-cmk).
-
-## Regional availability
-
-Customer-managed keys are available in all Azure Search regions.
-
-## Encryption of data in transit
-
-Language Studio runs in the user's browser. Every action triggers a direct call to the respective Cognitive Service API. Hence, question answering is compliant for data in transit.
-
-## Next steps
-
-* [Encryption in Azure Search using CMKs in Azure Key Vault](../../../../search/search-security-manage-encryption-keys.md)
-* [Data encryption at rest](../../../../security/fundamentals/encryption-atrest.md)
-* [Learn more about Azure Key Vault](../../../../key-vault/general/overview.md)
cognitive-services Export Import Refresh https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/question-answering/how-to/export-import-refresh.md
- Title: Export/import/refresh | question answering projects
-description: Learn about backing up your question answering projects
-----
-recommendations: false
Previously updated : 01/25/2022-
-# Export-import-refresh in question answering
-
-You may want to create a copy of your question answering project or related question and answer pairs for several reasons:
-
-* To implement a backup and restore process
-* To integrate with your CI/CD pipeline
-* To move your data to different regions
-
-## Prerequisites
-
-* If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/cognitive-services/) before you begin.
-* A [language resource](https://aka.ms/create-language-resource) with the custom question answering feature enabled. Remember your Azure Active Directory ID, Subscription, and language resource name you selected when you created the resource.
-
-## Export a project
-
-1. Sign in to the [Language Studio](https://language.azure.com/) with your Azure credentials.
-
-2. Scroll down to the **Answer questions** section and select **Open custom question answering**.
-
-3. Select the project you wish to export > Select **Export** > You'll have the option to export as an **Excel** or **TSV** file.
-
-4. You'll be prompted to save your exported file locally as a zip file.
-
-### Export a project programmatically
-
-To automate the export process, use the [export functionality of the authoring API](./authoring.md#export-project-metadata-and-assets)
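
As a rough sketch only (the exact route, `format` parameter, and request body below are assumptions to verify against the authoring article linked above), an export call typically returns an `Operation-Location` header that you poll in the same way as the source and QnA jobs shown earlier in this document:

```bash
# Hypothetical export request; confirm the exact path and parameters in the
# authoring API article before relying on this.
curl -X POST -H "Ocp-Apim-Subscription-Key: {API-KEY}" -H "Content-Type: application/json" -d '{
  "exportAssetTypes": [ "qnas", "synonyms" ]
}' -i 'https://{ENDPOINT}.api.cognitive.microsoft.com/language/query-knowledgebases/projects/{PROJECT-NAME}/:export?api-version=2021-10-01&format=tsv'
```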
-
-## Import a project
-
-1. Sign in to the [Language Studio](https://language.azure.com/) with your Azure credentials.
-
-2. Scroll down to the **Answer questions** section and select **Open custom question answering**.
-
-3. Select **Import** and specify the file type you selected for the export process. Either **Excel**, or **TSV**.
-
-4. Select Choose File and browse to the local zipped copy of your project that you exported previously.
-
-5. Provide a unique name for the project you're importing.
-
-6. Remember that a project that has only been imported still needs to be deployed/published if you want it to be live.
-
-### Import a project programmatically
-
-To automate the import process, use the [import functionality of the authoring API](./authoring.md#import-project)
-
-## Refresh source url
-
-1. Sign in to the [Language Studio](https://language.azure.com/) with your Azure credentials.
-
-2. Scroll down to the **Answer questions** section and select **Open custom question answering**.
-
-3. Select the project that contains the source you want to refresh > select **Manage sources**.
-
-4. We recommend having a backup of your project/question answer pairs prior to running each refresh so that you can always roll-back if needed.
-
-5. Select a url-based source to refresh > Select **Refresh URL**.
-6. Only one URL can be refreshed at a time.
-
-### Refresh a URL programmatically
-
-To automate the URL refresh process, use the [update sources functionality of the authoring API](./authoring.md#update-sources)
-
-The update sources example in the [Authoring API docs](./authoring.md#update-sources) shows the syntax for adding a new URL-based source. An example query for an update would be as follows:
-
-|Variable name | Value |
-|--|-|
-| `ENDPOINT` | This value can be found in the **Keys & Endpoint** section when examining your resource from the Azure portal. Alternatively, you can find the value in **Language Studio** > **question answering** > **Deploy project** > **Get prediction URL**. An example endpoint is: `https://southcentralus.api.cognitive.microsoft.com/`. If this was your endpoint in the following code sample, you would only need to add the region-specific portion, `southcentralus`, as the rest of the endpoint path is already present.|
-| `API-KEY` | This value can be found in the **Keys & Endpoint** section when examining your resource from the Azure portal. You can use either Key1 or Key2. Always having two valid keys allows for secure key rotation with zero downtime. Alternatively you can find the value in **Language Studio** > **question answering** > **Deploy project** > **Get prediction URL**. The key value is part of the sample request.|
-| `PROJECT-NAME` | The name of project where you would like to update sources.|
-
-```bash
-curl -X PATCH -H "Ocp-Apim-Subscription-Key: {API-KEY}" -H "Content-Type: application/json" -d '[
- {
- "op": "replace",
- "value": {
- "displayName": "source5",
- "sourceKind": "url",
-      "sourceUri": "https://download.microsoft.com/download/7/B/1/7B10C82E-F520-4080-8516-5CF0D803EEE0/surface-book-user-guide-EN.pdf",
- "refresh": "true"
- }
- }
-]' -i 'https://{ENDPOINT}.api.cognitive.microsoft.com/language/query-knowledgebases/projects/{PROJECT-NAME}/sources?api-version=2021-10-01'
-```
-
-## Export questions and answers
-
-It's also possible to export/import a specific set of question and answer pairs rather than the entire question answering project.
-
-1. Sign in to the [Language Studio](https://language.azure.com/) with your Azure credentials.
-
-2. Scroll down to the **Answer questions** section and select **Open custom question answering**.
-
-3. Select the project that contains the question and answer pairs you want to export.
-
-4. Select **Edit project**.
-
-5. To the right of **Show columns** is an ellipsis (`...`) button. Select the `...` > a dropdown will reveal the option to export/import questions and answers.
-
- Depending on the size of your web browser, you may experience the UI differently. Smaller browsers will see two separate ellipsis buttons.
-
- > [!div class="mx-imgBorder"]
- > ![Screenshot of selecting multiple UI ellipsis buttons to get to import/export question and answer pair option](../media/export-import-refresh/export-questions.png)
-
-## Import questions and answers
-
-It's also possible to export/import a specific set of question and answer pairs rather than the entire question answering project.
-
-1. Sign in to the [Language Studio](https://language.azure.com/) with your Azure credentials.
-
-2. Scroll down to the **Answer questions** section and select **Open custom question answering**.
-
-3. Select the project into which you want to import question and answer pairs.
-
-4. Select **Edit project**.
-
-5. To the right of **Show columns** is an ellipsis (`...`) button. Select the `...` > a dropdown will reveal the option to export/import questions and answers.
-
- Depending on the size of your web browser, you may experience the UI differently. Smaller browsers will see two separate ellipsis buttons.
-
- > [!div class="mx-imgBorder"]
- > ![Screenshot of selecting multiple UI ellipsis buttons to get to import/export question and answer pair option](../media/export-import-refresh/export-questions.png)
-
-## Next steps
-
-* [Learn how to use the Authoring API](./authoring.md)
cognitive-services Manage Knowledge Base https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/question-answering/how-to/manage-knowledge-base.md
- Title: Manage projects - question answering
-description: Custom question answering allows you to manage projects by providing access to the project settings and content.
--- Previously updated : 06/03/2022---
-# Create and manage project settings
-
-Question answering allows you to manage your projects by providing access to the project settings and data sources. If you haven't created a question answering project before we recommend starting with the [getting started article](create-test-deploy.md).
-
-## Prerequisites
-
-> * If you don't have an Azure subscription, [create a free account](https://azure.microsoft.com/free/cognitive-services/) before you begin.
-> * A [Language resource](https://aka.ms/create-language-resource) with the custom question answering feature enabled in the Azure portal. Remember your Azure Active Directory ID, Subscription, and language resource name you selected when you created the resource.
-
-## Create a project
-
-1. Sign in to the [Language Studio](https://language.azure.com/) portal with your Azure credentials.
-
-2. Open the [question answering](https://language.azure.com/languageStudio/questionAnswering/projects) page.
-
-3. Select **create new project**.
-
-4. If you are creating the first project associated with your language resource, you have the option of creating future projects with multiple languages for the same resource. If you choose to explicitly set the language to a single language in your first project, you will not be able to modify this setting later and all subsequent projects for that resource will use the language selected during the creation of your first project.
-
- > [!div class="mx-imgBorder"]
- > ![Screenshot of language selection UI.](../media/manage-knowledge-base/choose-language-option.png)
-
-5. Enter basic project settings:
-
- |Setting| Value|
- |-||
- |**Name** | Enter your unique project name here|
- |**Description** | Enter a description for your project |
-    |**Source language** | Whether or not this value is greyed out depends on the selection made when the first project associated with the language resource was created. |
-    |**Default answer** | The default answer the system will send if no answer is found for the question. You can change this at any time in Project settings. |
-
-## Manage projects
-
-From the main question answering page in Language Studio you can:
-
-- Create projects
-- Delete projects
-- Export existing projects for backup or to migrate to other language resources
-- Import projects. (The expected file format is a `.zip` file containing a project that was exported in `excel` or `.tsv` format.)
-- Projects can be ordered by either **Last modified** or **Last published** date.
-
-## Manage sources
-
-1. Select **Manage sources** in the left navigation bar.
-
-1. There are three types of sources: **URLS**, **Files**, and **Chitchat**
-
- |Goal|Action|
- |--|--|
- |Add Source|You can add new sources and FAQ content to your project by selecting **Add source** > and choosing **URLs**, **Files**, or **Chitchat**|
-    |Delete Source|You can delete existing sources by selecting the area to the left of the source (a blue circle with a checkmark appears) > select the trash can icon. |
- |Mark content as unstructured|If you want to mark the uploaded file content as unstructured select **Unstructured content** from the dropdown when adding the source.|
- |Auto-detect| Allow question and answering to attempt to determine if content is structured versus unstructured.|
-
-## Manage large projects
-
-From the **Edit project page** you can:
-
-* **Search project**: You can search the project by typing in the text box at the top of question answer panel. Hit enter to search on the question, answer, or metadata content.
-
-* **Pagination**: Quickly move through data sources to manage large projects. Page numbers appear at the bottom of the UI and are sometimes off screen.
-
-## Delete project
-
-Deleting a project is a permanent operation. It can't be undone. Before deleting a project, you should export the project from the main question answering page within Language Studio.
-
-If you share your project with collaborators and then later delete it, everyone loses access to the project.
-
-## Next steps
-
-* [Configure resources](./configure-resources.md)
cognitive-services Migrate Knowledge Base https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/question-answering/how-to/migrate-knowledge-base.md
- Title: Move projects - custom question answering
-description: Moving a custom question answering project requires exporting a project from one resource, and then importing into another.
--- Previously updated : 1/11/2023--
-# Move projects and question answer pairs
-
-> [!NOTE]
-> This article deals with the process to export and move projects and sources from one Language resource to another.
-
-You may want to create copies of your projects or sources for several reasons:
-
-* To implement a backup and restore process
-* Integrate with your CI/CD pipeline
-* When you wish to move your data to different regions
-
-## Prerequisites
-
-* If you don't have an Azure subscription, [create a free account](https://azure.microsoft.com/free/cognitive-services/) before you begin.
-* A [language resource](https://aka.ms/create-language-resource) with the custom question answering feature enabled in the Azure portal. Remember your Azure Active Directory ID, Subscription, and the Language resource name you selected when you created the resource.
-
-## Export a project
-
-Exporting a project allows you to back up all the question answer sources that are contained within a single project.
-
-1. Sign in to the [Language Studio](https://language.azure.com/).
-1. Select the Language resource you want to move a project from.
-1. Go to the Custom Question Answering service. On the **Projects** page, you have the option to export in two formats, Excel or TSV; this determines the contents of the file. The file itself will be exported as a .zip containing the contents of your project.
-2. You can export only one project at a time.
-
-## Import a project
-
-1. Select the Language resource, which will be the destination for your previously exported project.
-1. Go to Custom Question Answering service. On the **Projects** page, select **Import** and choose the format used when you selected export. Then browse to the local .zip file containing your exported project. Enter a name for your newly imported project and select **Done**.
-
-## Export sources
-
-1. Select the language resource you want to move an individual question answer source from.
-1. Select the project that contains the question and answer source you wish to export.
-1. On the Edit project page, select the ellipsis (`...`) icon to the right of **Enable rich text** in the toolbar. You have the option to export in either Excel or TSV.
-
-## Import question and answers
-
-1. Select the language resource, which will be the destination for your previously exported question and answer source.
-1. Select the project where you want to import a question and answer source.
-1. On the Edit project page, select the ellipsis (`...`) icon to the right of **Enable rich text** in the toolbar. You have the option to import either an Excel or TSV file.
-1. Browse to the local location of the file with the **Choose File** option and select **Done**.
-
-<!-- TODO: Replace Link-->
-### Test
-
-**Test** the question answer source by selecting the **Test** option from the toolbar in the **Edit project** page which will launch the test panel. Learn how to [test your project](../../../qnamaker/How-To/test-knowledge-base.md).
-
-### Deploy
-
-<!-- TODO: Replace Link-->
-**Deploy** the project and create a chat bot. Learn how to [deploy your project](../../../qnamaker/Quickstarts/create-publish-knowledge-base.md#publish-the-knowledge-base).
-
-## Chat logs
-
-There is no way to move chat logs with projects. If diagnostic logs are enabled, chat logs are stored in the associated Azure Monitor resource.
-
-## Next steps
-
-<!-- TODO: Replace Link-->
-
cognitive-services Migrate Qnamaker To Question Answering https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/question-answering/how-to/migrate-qnamaker-to-question-answering.md
- Title: Migrate from QnA Maker to Question Answering
-description: Details on features, requirements, and examples for migrating from QnA Maker to Question Answering
----
-ms.
- Previously updated : 08/08/2022--
-# Migrate from QnA Maker to Question Answering
-
-**Purpose of this document:** This article aims to provide information that can be used to successfully migrate applications that use QnA Maker to Question Answering. Using this article, we hope customers will gain clarity on the following:
-
- - Comparison of features across QnA Maker and Question Answering
- - Pricing
- - Simplified Provisioning and Development Experience
- - Migration phases
- - Common migration scenarios
- - Migration steps
-
-**Intended Audience:** Existing QnA Maker customers
-
-> [!IMPORTANT]
-> Question Answering, a feature of Azure Cognitive Service for Language, was introduced in November 2021 with several new capabilities including enhanced relevance using a deep learning ranker, precise answers, and end-to-end region support. Each question answering project is equivalent to a knowledge base in QnA Maker. Resource level settings such as Role-based access control (RBAC) are not migrated to the new resource. These resource level settings would have to be reconfigured for the language resource post migration:
->
-> - Automatic RBAC to Language project (not resource)
-> - Automatic enabling of analytics.
-
-You will also need to [re-enable analytics](analytics.md) for the language resource.
-
-## Comparison of features
-
-In addition to a new set of features, Question Answering provides many technical improvements to common features.
-
-|Feature|QnA Maker|Question Answering|Details|
-|-|||-|
-|State of the art transformer-based models|➖|✔️|Turing-based models that enable searching QnA at web scale.|
-|Pre-built capability|➖|✔️|Using this capability, you can leverage the power of question answering without having to ingest content and manage resources.|
-|Precise answering|➖|✔️|Question Answering supports precise answering with the help of state-of-the-art (SOTA) models.|
-|Smart URL Refresh|➖|✔️|Question Answering provides a means to refresh ingested content from public sources with a single click.|
-|Q&A over knowledge base (hierarchical extraction)|✔️|✔️| |
-|Active learning|✔️|✔️|Question Answering has an improved active learning model.|
-|Alternate Questions|✔️|✔️|The improved models in question answering reduce the need to add alternate questions.|
-|Synonyms|✔️|✔️| |
-|Metadata|✔️|✔️| |
-|Question Generation (private preview)|➖|✔️|This new feature will allow generation of questions over text.|
-|Support for unstructured documents|➖|✔️|Users can now ingest unstructured documents as input sources and query the content for responses|
-|.NET SDK|✔️|✔️| |
-|API|✔️|✔️| |
-|Unified Authoring experience|➖|✔️|A single authoring experience across all Azure Cognitive Services for Language|
-|Multi region support|➖|✔️|
-
-## Pricing
-
-When you are looking at migrating to Question Answering, please consider the following:
-
-|Component |QnA Maker|Question Answering|Details |
-|-||||
-|QnA Maker Service cost |✔️ |➖ |The fixed cost per resource per month. Only applicable for QnA Maker. |
-|Question Answering service cost|➖ |✔️ |The Question Answering cost according to the pay as you go model. Only applicable for Question Answering.|
-|Azure Search cost |✔️ |✔️ |Applicable for both QnA Maker and Question Answering. |
-|App Service cost |✔️ |➖ |Only applicable for QnA Maker. This is the biggest cost savings for users moving to Question Answering. |
-- Users may select a higher tier with higher capacity, which will impact the overall price they pay. It doesn't impact the price of the language component of Custom Question Answering.
-- "Text Records" in Question Answering refers to the query submitted by the user to the runtime, and it is a concept common to all features within the Language service. Sometimes a query may have more text records when the query length is higher.
-
-**Example price estimations**
-
-|Usage |Number of resources in QnA Maker|Number of app services in QnA Maker (Tier)|Monthly inference calls in QnA Maker|Search Partitions x search replica (Tier)|Relative cost in Question Answering |
-||--|||--|--|
-|High |5 |5(P1) |8M |9x3(S2) |More expensive |
-|High |100 |100(P1) |6M |9x3(S2) |Less expensive |
-|Medium|10 |10(S1) |800K |4x3(S1) |Less expensive |
-|Low |4 |4(B1) |100K |3x3(S1) |Less expensive |
-
- Summary: Customers should save costs across the most common configurations, as seen in the relative cost column.
-
-Here you can find the pricing details for [Question Answering](https://azure.microsoft.com/pricing/details/cognitive-services/language-service/) and [QnA Maker](https://azure.microsoft.com/pricing/details/cognitive-services/qna-maker/).
-
-The [Azure pricing calculator](https://azure.microsoft.com/pricing/calculator/) can provide even more detail.
-
-## Simplified Provisioning and Development Experience
-
-With the Language service, QnA Maker customers now benefit from a single service that provides Text Analytics, LUIS and Question Answering as features of the language resource. The Language service provides:
-- One Language resource to access all the above capabilities
-- A single pane of authoring experience across capabilities
-- A unified set of APIs across all the capabilities
-- A cohesive, simpler, and more powerful product
-
-Learn how to get started in [Language Studio](../../language-studio.md)
-
-## Migration Phases
-
-If you or your organization have applications in development or production that use QnA Maker, you should update them to use Question Answering as soon as possible. See the following links for available APIs, SDKs, Bot SDKs and code samples.
-
-Following are the broad migration phases to consider:
-
-![A chart showing the phases of a successful migration](../media/migrate-qnamaker-to-question-answering/migration-phases.png)
-
-Additional links which can help you are given below:
-- [Authoring portal](https://language.cognitive.azure.com/home)
-- [API](authoring.md)
-- [SDK](/dotnet/api/microsoft.azure.cognitiveservices.knowledge.qnamaker)
-- Bot SDK: For bots to use custom question answering, use the [Bot.Builder.AI.QnA](https://www.nuget.org/packages/Microsoft.Bot.Builder.AI.QnA/) SDK. We recommend that customers continue to use this for their bot integrations. Here are some sample usages in the bot's code: [Sample 1](https://github.com/microsoft/BotBuilder-Samples/tree/main/samples/csharp_dotnetcore/48.customQABot-all-features), [Sample 2](https://github.com/microsoft/BotBuilder-Samples/tree/main/samples/csharp_dotnetcore/12.customQABot)
-
-## Common migration scenarios
-
-This topic compares two hypothetical scenarios when migrating from QnA Maker to Question Answering. These scenarios can help you to determine the right set of migration steps to execute for the given scenario.
-
-> [!NOTE]
-> An attempt has been made to ensure these scenarios are representative of real customer migrations; however, individual customer scenarios will of course differ. Also, this article doesn't include pricing details. See the [pricing page](https://azure.microsoft.com/pricing/details/cognitive-services/language-service/) for details.
-
-> [!IMPORTANT]
-> Each question answering project is equivalent to a knowledge base in QnA Maker. Resource level settings such as Role-based access control (RBAC) are not migrated to the new resource. These resource level settings would have to be reconfigured for the language resource post migration. You will also need to [re-enable analytics](analytics.md) for the language resource.
-
-### Migration scenario 1: No custom authoring portal
-
-In the first migration scenario, the customer uses qnamaker.ai as the authoring portal and they want to migrate their QnA Maker knowledge bases to Custom Question Answering.
-
-[Migrate your project from QnA Maker to Question Answering](migrate-qnamaker.md)
-
-Once migrated to Question Answering:
-
-- The resource level settings need to be reconfigured for the language resource
-- Customer validations should start on the migrated knowledge bases on:
- - Size validation
- - Number of QnA pairs in all KBs to match pre and post migration
-- Customers need to establish new thresholds for their knowledge bases in custom question answering as the Confidence score mapping is different when compared to QnA Maker.
- - Answers for sample questions in pre and post migration
- - Response time for Questions answered in v1 vs v2
- - Retaining of prompts
- - Customers can use the batch testing tool post migration to test the newly created project in custom question answering.
-
-Old QnA Maker resources need to be manually deleted.
-
-Here are some [detailed steps](migrate-qnamaker.md) on migration scenario 1.
-
-### Migration scenario 2
-
-In this migration scenario, the customer may have created their own authoring frontend leveraging the QnA Maker authoring APIs or QnA Maker SDKs.
-
-They should perform these steps required for migration of SDKs:
-
-This [SDK Migration Guide](https://github.com/Azure/azure-sdk-for-net/blob/Azure.AI.Language.QuestionAnswering_1.1.0-beta.1/sdk/cognitivelanguage/Azure.AI.Language.QuestionAnswering/MigrationGuide.md) is intended to assist in the migration to the new Question Answering client library, [Azure.AI.Language.QuestionAnswering](https://www.nuget.org/packages/Azure.AI.Language.QuestionAnswering), from the old one, [Microsoft.Azure.CognitiveServices.Knowledge.QnAMaker](https://www.nuget.org/packages/Microsoft.Azure.CognitiveServices.Knowledge.QnAMaker). It will focus on side-by-side comparisons for similar operations between the two packages.
-
-They should perform the steps required for migration of Knowledge bases to the new Project within Language resource.
-
-Once migrated to Question Answering:
-- The resource level settings need to be reconfigured for the language resource
-- Customer validations should start on the migrated knowledge bases on:
- - Size validation
- - Number of QnA pairs in all KBs to match pre and post migration
- - Confidence score mapping
- - Answers for sample questions in pre and post migration
- - Response time for Questions answered in v1 vs v2
- - Retaining of prompts
- - Batch testing pre and post migration
-- Old QnA Maker resources need to be manually deleted.
-
-Additionally, customers who need to migrate and upgrade their bot can use the upgraded bot code, which is published as a NuGet package.
-
-Here you can find some code samples: [Sample 1](https://github.com/microsoft/BotBuilder-Samples/tree/main/samples/csharp_dotnetcore/48.customQABot-all-features) [Sample 2](https://github.com/microsoft/BotBuilder-Samples/tree/main/samples/csharp_dotnetcore/12.customQABot)
-
-Here are [detailed steps on migration scenario 2](https://github.com/Azure/azure-sdk-for-net/blob/Azure.AI.Language.QuestionAnswering_1.1.0-beta.1/sdk/cognitivelanguage/Azure.AI.Language.QuestionAnswering/MigrationGuide.md)
-
-Learn more about the [pre-built API](../../../QnAMaker/How-To/using-prebuilt-api.md)
-
-Learn more about the [Question Answering Get Answers REST API](/rest/api/cognitiveservices/questionanswering/question-answering/get-answers)
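To script the pre- and post-migration comparison of answers, confidence scores, and response times described above, you can call the Get Answers REST API from a small test harness. The following sketch is illustrative only: the endpoint, key, project name, and sample questions are placeholders, and the `2021-10-01` API version is an assumption to adjust for your own resource.

```python
import time
import requests

# Hypothetical values - replace with your own language resource and project details.
ENDPOINT = "https://<your-language-resource>.cognitiveservices.azure.com"
API_KEY = "<your-resource-key>"
PROJECT = "<your-project-name>"

def get_answers(question, top=3):
    """Query a deployed custom question answering project and return its answers."""
    response = requests.post(
        f"{ENDPOINT}/language/:query-knowledgebases",
        params={
            "projectName": PROJECT,
            "deploymentName": "production",
            "api-version": "2021-10-01",
        },
        headers={"Ocp-Apim-Subscription-Key": API_KEY},
        json={"question": question, "top": top},
    )
    response.raise_for_status()
    return response.json().get("answers", [])

# Compare confidence scores and latency for a fixed set of sample questions,
# pre and post migration, to establish new thresholds for the migrated project.
for question in ["How do I reset my password?", "What is the refund policy?"]:
    start = time.perf_counter()
    answers = get_answers(question)
    elapsed_ms = (time.perf_counter() - start) * 1000
    top_answer = answers[0] if answers else {}
    print(f"{question!r}: {top_answer.get('confidenceScore', 0):.2f} "
          f"({elapsed_ms:.0f} ms) {top_answer.get('answer', '')[:60]}")
```

Running the same script against the old QnA Maker endpoint and the new project endpoint gives a side-by-side view of how the confidence score mapping shifts after migration.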
-
-## Migration steps
-
-Some of these steps are needed only for certain architectures. Review the migration scenarios above to determine which steps apply to your migration.
-
-![A chart showing the steps of a successful migration](../media/migrate-qnamaker-to-question-answering/migration-steps.png)
cognitive-services Migrate Qnamaker https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/question-answering/how-to/migrate-qnamaker.md
- Title: Migrate QnA Maker knowledge bases to custom question answering
-description: Migrate your legacy QnAMaker knowledge bases to custom question answering to take advantage of the latest features.
----- Previously updated : 01/23/2022---
-# Migrate from QnA Maker to custom question answering
-
-Custom question answering, a feature of Azure Cognitive Service for Language, was introduced in May 2021 with several new capabilities including enhanced relevance using a deep learning ranker, precise answers, and end-to-end region support. Each custom question answering project is equivalent to a knowledge base in QnA Maker. You can easily migrate knowledge bases from a QnA Maker resource to custom question answering projects within a [language resource](https://aka.ms/create-language-resource). You can also choose to migrate knowledge bases from multiple QnA Maker resources to a specific language resource.
-
-To successfully migrate knowledge bases, **the account performing the migration needs contributor access to the selected QnA Maker and language resource**. When a knowledge base is migrated, the following details are copied to the new custom question answering project:
-
-- QnA pairs including active learning suggestions.
-- Synonyms and default answer from the QnA Maker resource.
-- Knowledge base name is copied to project description field.
-
-Resource level settings such as Role-based access control (RBAC) are not migrated to the new resource. These resource level settings would have to be reconfigured for the language resource post migration. You will also need to [re-enable analytics](analytics.md) for the language resource.
-
-## Steps to migrate SDKs
-
-This [SDK Migration Guide](https://github.com/Azure/azure-sdk-for-net/blob/Azure.AI.Language.QuestionAnswering_1.1.0-beta.1/sdk/cognitivelanguage/Azure.AI.Language.QuestionAnswering/MigrationGuide.md) is intended to assist in the migration to the new Question Answering client library, [Azure.AI.Language.QuestionAnswering](https://www.nuget.org/packages/Azure.AI.Language.QuestionAnswering), from the old one, [Microsoft.Azure.CognitiveServices.Knowledge.QnAMaker](https://www.nuget.org/packages/Microsoft.Azure.CognitiveServices.Knowledge.QnAMaker). It will focus on side-by-side comparisons for similar operations between the two packages.
-
-## Steps to migrate knowledge bases
-
-You can follow the steps below to migrate knowledge bases:
-
-1. Create a [language resource](https://aka.ms/create-language-resource) with custom question answering enabled in advance. When you create the language resource in the Azure portal, you will see the option to enable custom question answering. When you select that option and proceed, you will be asked for Azure Search details to save the knowledge bases.
-
-2. If you want to add knowledge bases in multiple languages to your language resource, visit [Language Studio](https://language.azure.com/) to create your first custom question answering project and select the first option as shown below. Language settings for the language resource can be specified only when creating a project. If you want to migrate existing knowledge bases in a single language to the language resource, you can skip this step.
-
- > [!div class="mx-imgBorder"]
-   > ![Screenshot of choose language UI screen](../media/migrate-qnamaker/choose-language.png)
-
-3. Visit [https://www.qnamaker.ai](https://www.qnamaker.ai) and select **Start Migration** in the migration note on the knowledge base page. A dialog box will open to initiate the migration.
-
- :::image type="content" source="../media/migrate-qnamaker/start-migration.png" alt-text="Start Migration button that appears in a banner on qnamaker.ai" lightbox="../media/migrate-qnamaker/start-migration.png":::
-
-4. Fill in the details required to initiate migration. The tenant will be auto-selected. You can choose to switch the tenant.
-
- > [!div class="mx-imgBorder"]
- > ![Migrate QnAMaker with red selection box around the tenant selection option](../media/migrate-qnamaker/tenant-selection.png)
-
-5. Select the QnA Maker resource, which contains the knowledge bases to be migrated.
-
- > [!div class="mx-imgBorder"]
- > ![Migrate QnAMaker with red selection box around the QnAMaker resource selection option](../media/migrate-qnamaker/select-resource.png)
-
-6. Select the language resource to which you want to migrate the knowledge bases. You will only be able to see those language resources that have custom question answering enabled. The language setting for the language resource is displayed in the options. You won't be able to migrate knowledge bases in multiple languages from QnA Maker resources to a language resource if its language setting is not specified.
-
- > [!div class="mx-imgBorder"]
- > ![Migrate QnAMaker with red selection box around the language resource option currently selected resource contains the information that language is unspecified](../media/migrate-qnamaker/language-setting.png)
-
- If you want to migrate knowledge bases in multiple languages to the language resource, you must enable the multiple language setting when creating the first custom question answering project for the language resource. You can do so by following the instructions in step #2. **If the language setting for the language resource is not specified, it is assigned the language of the selected QnA Maker resource**.
-
-7. Select all the knowledge bases that you wish to migrate > select **Next**.
-
- > [!div class="mx-imgBorder"]
- > ![Migrate QnAMaker with red selection box around the knowledge base selection option with a drop-down displaying three knowledge base names](../media/migrate-qnamaker/select-knowledge-bases.png)
-
-8. You can review the knowledge bases you plan to migrate. There could be some validation errors in project names as we follow stricter validation rules for custom question answering projects. To resolve these errors occurring due to invalid characters, select the checkbox (in red) and select **Next**. This is a one-click method to replace the problematic characters in the name with the accepted characters. If there's a duplicate, a new unique project name is generated by the system.
-
- > [!CAUTION]
- > If you migrate a knowledge base with the same name as a project that already exists in the target language resource, **the content of the project will be overridden** by the content of the selected knowledge base.
-
- > [!div class="mx-imgBorder"]
- > ![Screenshot of an error message starting project names can't contain special characters](../media/migrate-qnamaker/migration-kb-name-validation.png)
-
-9. After resolving the validation errors, select **Start migration**
-
- > [!div class="mx-imgBorder"]
- > ![Screenshot with special characters removed](../media/migrate-qnamaker/migration-kb-name-validation-success.png)
-
-10. It will take a few minutes for the migration to occur. Do not cancel the migration while it is in progress. You can navigate to the migrated projects within the [Language Studio](https://language.azure.com/) post migration.
-
- > [!div class="mx-imgBorder"]
- > ![Screenshot of successfully migrated knowledge bases with information that you can publish by using Language Studio](../media/migrate-qnamaker/migration-success.png)
-
- If any knowledge bases fail to migrate to custom question answering projects, an error will be displayed. The most common migration errors occur when:
-
- - Your source and target resources are invalid.
- - You are trying to migrate an empty knowledge base (KB).
- - You have reached the limit for an Azure Search instance linked to your target resources.
-
- > [!div class="mx-imgBorder"]
- > ![Screenshot of a failed migration with an example error](../media/migrate-qnamaker/migration-errors.png)
-
- Once you resolve these errors, you can rerun the migration.
-
-11. The migration will only copy the test instances of your knowledge bases. Once your migration is complete, you will need to manually deploy the knowledge bases to copy the test index to the production index.
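If you prefer to script this final deployment step instead of using Language Studio, a hedged sketch along the following lines can work. The route mirrors the question answering projects (authoring) REST API, but confirm the exact path, verb, and `api-version` against the Authoring API reference before relying on it; the endpoint, key, and project name are placeholders.

```python
import requests

# Hypothetical values - replace with your migrated language resource details.
ENDPOINT = "https://<your-language-resource>.cognitiveservices.azure.com"
API_KEY = "<your-resource-key>"
PROJECT = "<your-migrated-project-name>"

# Deploy the project so the test index is copied to the production index.
# Assumed route based on the question answering authoring REST API; verify it
# against the Authoring API reference for your api-version.
url = f"{ENDPOINT}/language/query-knowledgebases/projects/{PROJECT}/deployments/production"
response = requests.put(
    url,
    params={"api-version": "2021-10-01"},
    headers={"Ocp-Apim-Subscription-Key": API_KEY},
)
response.raise_for_status()

# Deployment is a long-running operation; a status URL is typically returned in a header.
print("Status:", response.status_code)
print("Operation location:", response.headers.get("Operation-Location"))
```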
-
-## Next steps
--- Learn how to re-enable analytics with [Azure Monitor diagnostic logs](analytics.md).
cognitive-services Network Isolation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/question-answering/how-to/network-isolation.md
- Title: Network isolation and Private Link -question answering
-description: Users can restrict public access to question answering resources.
--- Previously updated : 11/02/2021---
-# Network isolation and private endpoints
-
-The steps below describe how to restrict public access to question answering resources as well as how to enable Azure Private Link. Protect a Cognitive Services resource from public access by [configuring the virtual network](../../../cognitive-services-virtual-networks.md?tabs=portal).
-
-## Private Endpoints
-
-Azure Private Endpoint is a network interface that connects you privately and securely to a service powered by Azure Private Link. Question answering supports creating private endpoints to the Azure Cognitive Search service.
-
-Private endpoints are provided by [Azure Private Link](../../../../private-link/private-link-overview.md), as a separate service. For more information about costs, see the [pricing page.](https://azure.microsoft.com/pricing/details/private-link/)
-
-## Steps to enable private endpoint
-
-1. Assign the *Contributor* role to the language resource (depending on the context, this may appear as a Text Analytics resource) in the Azure Cognitive Search service instance. This operation requires *Owner* access to the subscription. Go to the **Identity** tab of the language resource to get the identity.
-
-> [!div class="mx-imgBorder"]
-> ![Text Analytics Identity](../../../QnAMaker/media/qnamaker-reference-private-endpoints/private-endpoints-identity.png)
-
-2. Add the above identity as *Contributor* by going to the Azure Cognitive Search service IAM tab.
-
-![Managed service IAM](../../../QnAMaker/media/qnamaker-reference-private-endpoints/private-endpoint-access-control.png)
-
-3. Select *Add role assignments*, add the identity, and select *Save*.
-
-![Managed role assignment](../../../QnAMaker/media/qnamaker-reference-private-endpoints/private-endpoint-role-assignment.png)
-
-4. Now, go to the *Networking* tab in the Azure Cognitive Search service instance and switch the endpoint connectivity data from *Public* to *Private*. This operation is a long-running process and can take up to 30 minutes to complete.
-
-![Managed Azure search networking](../../../QnAMaker/media/qnamaker-reference-private-endpoints/private-endpoint-networking.png)
-
-5. Go to the *Networking* tab of the language resource. Under *Allow access from*, select the *Selected Networks and private endpoints* option, and then select *Save*.
-
-> [!div class="mx-imgBorder"]
-> ![Text Analytics networking](../../../QnAMaker/media/qnamaker-reference-private-endpoints/private-endpoint-networking-custom-qna.png)
-
-This will establish a private endpoint connection between the language resource and the Azure Cognitive Search service instance. You can verify the private endpoint connection on the *Networking* tab of the Azure Cognitive Search service instance. Once the whole operation is completed, you can use your language resource with question answering enabled.
-
-![Managed Networking Service](../../../QnAMaker/media/qnamaker-reference-private-endpoints/private-endpoint-networking-3.png)
-
-## Support details
- * We don't support changes to the Azure Cognitive Search service once you enable private access to your language resources. If you change the Azure Cognitive Search service via the 'Features' tab after you've enabled private access, the language resource will become unusable.
-
- * After establishing a private endpoint connection, if you switch the Azure Cognitive Search service networking to 'Public', you won't be able to use the language resource. The Azure Cognitive Search service networking needs to be 'Private' for the private endpoint connection to work.
-
-## Restrict access to Cognitive Search resource
-
-Follow the steps below to restrict public access to question answering language resources. Protect a Cognitive Services resource from public access by [configuring the virtual network](../../../cognitive-services-virtual-networks.md?tabs=portal).
-
-After restricting access to the Cognitive Services resource based on a VNet, do the following to browse projects in Language Studio from your on-premises network or your local browser:
-- Grant access to [on-premises network](../../../cognitive-services-virtual-networks.md?tabs=portal#configuring-access-from-on-premises-networks).
-- Grant access to your [local browser/machine](../../../cognitive-services-virtual-networks.md?tabs=portal#managing-ip-network-rules).
-- Add the **public IP address of the machine under the Firewall** section of the **Networking** tab. By default, `portal.azure.com` shows the current browsing machine's public IP; select this entry, and then select **Save**.
-
- > [!div class="mx-imgBorder"]
- > [ ![Screenshot of firewall and virtual networks configuration UI]( ../../../qnamaker/media/network-isolation/firewall.png) ]( ../../../qnamaker/media/network-isolation/firewall.png#lightbox)
cognitive-services Prebuilt https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/question-answering/how-to/prebuilt.md
- Title: Prebuilt API - question answering-
-description: Use the question answering Prebuilt API to ask and receive answers to questions without having to create a project.
----- Previously updated : 11/03/2021--
-# Prebuilt API
-
-The question answering **prebuilt API** provides the capability to answer questions based on a passage of text without having to create projects, maintain question and answer pairs, or incur costs for underutilized infrastructure. This functionality is provided as an API and can be used to meet question answering needs without having to learn the details of question answering.
-
-Given a user query and a block of text or passage, the API returns an answer and a precise answer (if available).
-
-## Example API usage
-
-Imagine that you have one or more blocks of text from which you would like to get answers for a given question. Normally, you would have had to create as many sources as there are blocks of text. With the prebuilt API, however, you can query the blocks of text without having to define content sources in a project.
-
-Some other scenarios where this API can be used are:
-
-* You are developing an ebook reader app for end users, which allows them to highlight text, enter a question and find answers over a highlighted passage of text.
-* A browser extension that allows users to ask a question over the content being currently displayed on the browser page.
-* A health bot that takes queries from users and provides answers based on the medical content that the bot identifies as most relevant to the user query.
-
-Below is an example of a sample request:
-
-## Sample request
-
-```
-POST https://{Unique-to-your-endpoint}.api.cognitive.microsoft.com/language/:query-text
-```
-
-### Sample query over a single block of text
-
-Request Body
-
-```json
-{
- "parameters": {
- "Endpoint": "{Endpoint}",
- "Ocp-Apim-Subscription-Key": "{API key}",
- "Content-Type": "application/json",
- "api-version": "2021-10-01",
- "stringIndexType": "TextElements_v8",
- "textQueryOptions": {
- "question": "how long it takes to charge surface?",
- "records": [
- {
- "id": "1",
-          "text": "Power and charging. It takes two to four hours to charge the Surface Pro 4 battery fully from an empty state. It can take longer if you're using your Surface for power-intensive activities like gaming or video streaming while you're charging it."
- },
- {
- "id": "2",
- "text": "You can use the USB port on your Surface Pro 4 power supply to charge other devices, like a phone, while your Surface charges. The USB port on the power supply is only for charging, not for data transfer. If you want to use a USB device, plug it into the USB port on your Surface."
- }
- ],
- "language": "en"
- }
- }
-}
-```
-
-## Sample response
-
-In the above request body, we query over a single block of text. A sample response for the above query is shown below:
-
-```json
-{
-"responses": {
- "200": {
- "headers": {},
- "body": {
- "answers": [
- {
-            "answer": "Power and charging. It takes two to four hours to charge the Surface Pro 4 battery fully from an empty state. It can take longer if you're using your Surface for power-intensive activities like gaming or video streaming while you're charging it.",
- "confidenceScore": 0.93,
- "id": "1",
- "answerSpan": {
- "text": "two to four hours",
- "confidenceScore": 0,
- "offset": 28,
- "length": 45
- },
- "offset": 0,
- "length": 224
- },
- {
-            "answer": "It takes two to four hours to charge the Surface Pro 4 battery fully from an empty state. It can take longer if you're using your Surface for power-intensive activities like gaming or video streaming while you're charging it.",
- "confidenceScore": 0.92,
- "id": "1",
- "answerSpan": {
- "text": "two to four hours",
- "confidenceScore": 0,
- "offset": 8,
- "length": 25
- },
- "offset": 20,
- "length": 224
- },
- {
-            "answer": "It can take longer if you're using your Surface for power-intensive activities like gaming or video streaming while you're charging it.",
- "confidenceScore": 0.05,
- "id": "1",
- "answerSpan": null,
- "offset": 110,
- "length": 244
- }
- ]
- }
- }
-    }
-}
-```
-
-Multiple answers are received as part of the API response. Each answer has a confidence score that indicates the overall relevance of the answer, and the answer span indicates whether a potential short answer was also detected. Users can use this confidence score to determine which answers to provide in response to the query.
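As an illustrative sketch of how a client might send the request above and pick answers by confidence score, consider the following Python snippet. The endpoint and key are placeholders, the `2021-10-01` API version is an assumption, and it assumes the live service returns the `answers` array at the top level of the response (the sample above wraps it in REST-spec example metadata).

```python
import requests

# Hypothetical values - replace with your own language resource details.
ENDPOINT = "https://<your-language-resource>.cognitiveservices.azure.com"
API_KEY = "<your-resource-key>"

body = {
    "question": "how long it takes to charge surface?",
    "records": [
        {"id": "1", "text": "Power and charging. It takes two to four hours to charge the Surface Pro 4 battery fully from an empty state."},
        {"id": "2", "text": "You can use the USB port on your Surface Pro 4 power supply to charge other devices, like a phone, while your Surface charges."},
    ],
    "language": "en",
    "stringIndexType": "TextElements_v8",
}

response = requests.post(
    f"{ENDPOINT}/language/:query-text",
    params={"api-version": "2021-10-01"},
    headers={"Ocp-Apim-Subscription-Key": API_KEY},
    json=body,
)
response.raise_for_status()

# Sort by confidence score and show each answer with any precise answer span.
for answer in sorted(response.json()["answers"], key=lambda a: a["confidenceScore"], reverse=True):
    span = answer.get("answerSpan") or {}
    print(f'{answer["confidenceScore"]:.2f}  {span.get("text", "-")}  {answer["answer"][:60]}')
```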
-
-## Prebuilt API limits
-
-### API call limits
-If you need to use larger documents than the limit allows, you can break the text into smaller chunks before sending them to the API (see the sketch after the list below). In this context, a document is a single string of text characters.
-
-These numbers represent the **per individual API call limits**:
-
-* Number of documents: 5.
-* Maximum size of a single document: 5,120 characters.
-* Maximum three responses per document.
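To stay within these limits, a long passage can be split into records of at most 5,120 characters and grouped into batches of at most five records per call. The helper below is a minimal, service-agnostic sketch (the `long-passage.txt` file name is hypothetical); note that naive character-based chunking can split sentences, so you may prefer to split on paragraph boundaries instead.

```python
def chunk_text(text, max_chars=5120):
    """Split a long passage into prebuilt API records of at most max_chars characters."""
    chunks = [text[i:i + max_chars] for i in range(0, len(text), max_chars)]
    return [{"id": str(i + 1), "text": chunk} for i, chunk in enumerate(chunks)]

def batch_records(records, batch_size=5):
    """Group records into batches of at most batch_size documents per API call."""
    return [records[i:i + batch_size] for i in range(0, len(records), batch_size)]

with open("long-passage.txt", encoding="utf-8") as f:
    records = chunk_text(f.read())

for batch in batch_records(records):
    # Each batch can be sent as the "records" array of a single :query-text request.
    print(f"{len(batch)} record(s), largest is {max(len(r['text']) for r in batch)} characters")
```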
-
-### Language codes supported
-The following language codes are supported by the prebuilt API. These language codes are in accordance with the [ISO 639-1 codes standard](https://en.wikipedia.org/wiki/List_of_ISO_639-1_codes).
-
-Language code|Language
--|-
-af|Afrikaans
-am|Amharic
-ar|Arabic
-as|Assamese
-az|Azerbaijani
-ba|Bashkir
-be|Belarusian
-bg|Bulgarian
-bn|Bengali
-ca|Catalan, Valencian
-ckb|Central Kurdish
-cs|Czech
-cy|Welsh
-da|Danish
-de|German
-el|Greek, Modern (1453–)
-en|English
-eo|Esperanto
-es|Spanish, Castilian
-et|Estonian
-eu|Basque
-fa|Persian
-fi|Finnish
-fr|French
-ga|Irish
-gl|Galician
-gu|Gujarati
-he|Hebrew
-hi|Hindi
-hr|Croatian
-hu|Hungarian
-hy|Armenian
-id|Indonesian
-is|Icelandic
-it|Italian
-ja|Japanese
-ka|Georgian
-kk|Kazakh
-km|Central Khmer
-kn|Kannada
-ko|Korean
-ky|Kirghiz, Kyrgyz
-la|Latin
-lo|Lao
-lt|Lithuanian
-lv|Latvian
-mk|Macedonian
-ml|Malayalam
-mn|Mongolian
-mr|Marathi
-ms|Malay
-mt|Maltese
-my|Burmese
-ne|Nepali
-nl|Dutch, Flemish
-nn|Norwegian Nynorsk
-no|Norwegian
-or|Oriya
-pa|Punjabi, Panjabi
-pl|Polish
-ps|Pashto, Pushto
-pt|Portuguese
-ro|Romanian, Moldavian, Moldovan
-ru|Russian
-sa|Sanskrit
-sd|Sindhi
-si|Sinhala, Sinhalese
-sk|Slovak
-sl|Slovenian
-sq|Albanian
-sr|Serbian
-sv|Swedish
-sw|Swahili
-ta|Tamil
-te|Telugu
-tg|Tajik
-th|Thai
-tl|Tagalog
-tr|Turkish
-tt|Tatar
-ug|Uighur, Uyghur
-uk|Ukrainian
-ur|Urdu
-uz|Uzbek
-vi|Vietnamese
-yi|Yiddish
-zh|Chinese
-
-## Prebuilt API reference
-
-Visit the [full prebuilt API samples](https://github.com/Azure/azure-rest-api-specs/blob/main/specification/cognitiveservices/data-plane/Language/stable/2021-10-01/examples/questionanswering/SuccessfulQueryText.json) documentation to understand the input and output parameters required for calling the API.
cognitive-services Smart Url Refresh https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/question-answering/how-to/smart-url-refresh.md
- Title: Smart URL refresh - question answering-
-description: Use the question answering smart URL refresh feature to keep your project up to date.
----- Previously updated : 02/28/2022--
-# Use smart URL refresh with a project
-
-Custom question answering gives you the ability to refresh your source contents by getting the latest content from a source URL and updating the corresponding project with one click. The service will ingest content from the URL and either create, merge, or delete question-and-answer pairs in the project.
-
-This functionality is provided to support scenarios where the content in the source URL changes frequently, such as the FAQ page of a product that's updated often. The service will refresh the source and update the project to the latest content while retaining any manual edits made previously.
-
-> [!NOTE]
-> This feature is only applicable to URL sources, and they must be refreshed individually, not in bulk.
-
-> [!IMPORTANT]
-> This feature is only available in the `2021-10-01` version of the Language API.
-
-## How it works
-
-If you have a project with a URL source that has changed, you can trigger a smart URL refresh to keep your project up to date. The service will scan the URL for updated content and generate QnA pairs. It will add any new QnA pairs to your project and also delete any pairs that have disappeared from the source (with exceptions&mdash;see below). It also merges old and new QnA pairs in some situations (see below).
-
-> [!IMPORTANT]
-> Because smart URL refresh can involve deleting old content from your project, you may want to [create a backup](./export-import-refresh.md) of your project before you do any refresh operations.
-
-You can trigger a URL refresh in Language Studio by opening your project, selecting the source in the **Manage sources** list, and selecting **Refresh URL**.
--
-You can also trigger a refresh programmatically using the REST API. See the **[Update Sources](/rest/api/cognitiveservices/questionanswering/question-answering-projects/update-sources)** reference documentation for parameters and a sample request.
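For illustration only, a programmatic refresh might be scripted along these lines. The endpoint, key, project, and source values are placeholders, and the payload shape (including the refresh flag) is an assumption; take the authoritative field names and route from the Update Sources reference linked above.

```python
import requests

# Hypothetical values - replace with your own language resource and project details.
ENDPOINT = "https://<your-language-resource>.cognitiveservices.azure.com"
API_KEY = "<your-resource-key>"
PROJECT = "<your-project-name>"

# Assumed payload: a "replace" operation on an existing URL source that asks the
# service to re-ingest the latest content. Confirm field names in the reference.
body = [
    {
        "op": "replace",
        "value": {
            "displayName": "product-faq",
            "source": "https://www.contoso.com/faq",
            "sourceUri": "https://www.contoso.com/faq",
            "sourceKind": "url",
            "refresh": True,
        },
    }
]

response = requests.patch(
    f"{ENDPOINT}/language/query-knowledgebases/projects/{PROJECT}/sources",
    params={"api-version": "2021-10-01"},
    headers={"Ocp-Apim-Subscription-Key": API_KEY},
    json=body,
)
response.raise_for_status()
# Updating sources is a long-running operation; poll the returned operation URL.
print("Operation location:", response.headers.get("Operation-Location"))
```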
-
-## Smart refresh behavior
-
-When the user refreshes content using this feature, the project of QnA pairs may be updated in the following ways:
-
-### Delete old pair
-
-If the content of the URL is updated so that an existing QnA pair from the old content of the URL is no longer found in the source, that pair is deleted from the refreshed project. For example, if a QnA pair Q1A1 existed in the old project, but after refreshing, there's no A1 answer generated by the newly refreshed source, then the pair Q1A1 is considered outdated and is dropped from the project altogether.
-
-However, if the old QnA pairs have been manually edited in the authoring portal, they won't be deleted.
-
-### Add new pair
-
-If the content of the URL is updated in such a way that a new QnA pair exists which didn't exist in the old KB, then it's added to the KB. For example, if the service finds that a new answer A2 can be generated, then the QnA pair Q2A2 is inserted into the KB.
-
-### Merge pairs
-
-If the answer of a new QnA pair matches the answer of an old QnA pair, the two pairs are merged. The new pair's question is added as an alternate question to the old QnA pair. For example, consider Q3A3 exists in the old source. When you refresh the source, a new QnA pair Q3'A3 is introduced. In that case, the two QnA pairs are merged: Q3' is added to Q3 as an alternate question.
-
-If the old QnA pair has a metadata value, that data is retained and persisted in the newly merged pair.
-
-If the old QnA pair has follow-up prompts associated with it, then the following scenarios may arise:
-* If the prompt attached to the old pair is from the source being refreshed, then it's deleted, and the prompt of the new pair (if any exists) is appended to the newly merged QnA pair.
-* If the prompt attached to the old pair is from a different source, then it's maintained as-is and the prompt from the new question (if any exists) is appended to the newly merged QnA pair.
--
-#### Merge example
-See the following example of a merge operation with differing questions and prompts:
-
-|Source iteration|Question |Answer |Prompts |
-||||--|
-|old |"What is the new HR policy?" | "You may have to choose among the following options:" | P1, P2 |
-|new |"What is the new payroll policy?" | "You may have to choose among the following options:" | P3, P4 |
-
-The prompts P1 and P2 come from the original source and are different from prompts P3 and P4 of the new QnA pair. They both have the same answer, `You may have to choose among the following options:`, but it leads to different prompts. In this case, the resulting QnA pair would look like this:
-
-|Question |Answer |Prompts |
-|||--|
-|"What is the new HR policy?" </br>(alternate question: "What is the new payroll policy?") | "You may have to choose among the following options:" | P3, P4 |
-
-#### Duplicate answers scenario
-
-When the original source has two or more QnA pairs with the same answer (as in, Q1A1 and Q2A1), the merge behavior may be more complex.
-
-If these two QnA pairs have individual prompts attached to them (for example, Q1A1+P1 and Q2A1+P2), and the refreshed source content has a new QnA pair generated with the same answer A1 and a new prompt P3 (Q1'A1+P3), then the new question will be added as an alternate question to the original pairs (as described above). But all of the original attached prompts will be overwritten by the new prompt. So the final pair set will look like this:
-
-|Question |Answer |Prompts |
-|||--|
-|Q1 </br>(alternate question: Q1') | A1 | P3 |
-|Q2 </br>(alternate question: Q1') | A1 | P3 |
-
-## Next steps
-
-* [Question answering quickstart](../quickstart/sdk.md?pivots=studio)
-* [Update Sources API reference](/rest/api/cognitiveservices/questionanswering/question-answering-projects/update-sources)
cognitive-services Troubleshooting https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/question-answering/how-to/troubleshooting.md
-
Title: Troubleshooting - question answering
-description: The curated list of the most frequently asked questions regarding question answering will help you adopt the feature faster and with better results.
----- Previously updated : 11/02/2021--
-# Troubleshooting for question answering
-
-The curated list of the most frequently asked questions regarding question answering will help you adopt the feature faster and with better results.
-
-## Manage predictions
-
-<details>
-<summary><b>How can I improve the throughput performance for query predictions?</b></summary>
-
-**Answer**:
-Throughput performance issues indicate you need to scale up your Cognitive Search. Consider adding a replica to your Cognitive Search to improve performance.
-
-Learn more about [pricing tiers](../Concepts/azure-resources.md).
-</details>
-
-## Manage your project
-
-<details>
-<summary><b>Why is my URL(s)/file(s) not extracting question-answer pairs?</b></summary>
-
-**Answer**:
-It's possible that question answering can't auto-extract some question-and-answer (QnA) content from valid FAQ URLs. In such cases, you can paste the QnA content in a .txt file and see if the tool can ingest it. Alternatively, you can add content to your project manually through the [Language Studio portal](https://language.azure.com).
-
-</details>
-
-<details>
-<summary><b>How large a project can I create?</b></summary>
-
-**Answer**:
-The size of the project depends on the SKU of Azure search you choose when creating the QnA Maker service. Read [here](../concepts/azure-resources.md) for more details.
-
-</details>
-
-<details>
-<summary><b>How do I share a project with others?</b></summary>
-
-**Answer**:
-Sharing works at the level of the language resource, that is, all projects associated with a language resource can be shared.
-</details>
-
-<details>
-<summary><b>Can you share a project with a contributor that is not in the same Azure Active Directory tenant, to modify a project?</b></summary>
-
-**Answer**:
-Sharing is based on Azure role-based access control (Azure RBAC). If you can share _any_ resource in Azure with another user, you can also share question answering.
-
-</details>
-
-<details>
-<summary><b>Can you assign read/write rights to 5 different users so each of them can access only 1 question answering project?</b></summary>
-
-**Answer**:
-You can share an entire language resource, not individual projects.
-
-</details>
-
-<details>
-<summary><b>The updates that I made to my project are not reflected in production. Why not?</b></summary>
-
-**Answer**:
-Every edit operation, whether in a table update, test, or setting, needs to be saved before it can be deployed. Be sure to select **Save** after making changes and then re-deploy your project for those changes to be reflected in production.
-
-</details>
-
-<details>
-<summary><b>Does the project support rich data or multimedia?</b></summary>
-
-**Answer**:
-
-#### Multimedia auto-extraction for files and URLs
-
-* URLs - limited HTML-to-Markdown conversion capability.
-* Files - not supported
-
-#### Answer text in markdown
-
-Once QnA pairs are in the project, you can edit an answer's markdown text to include links to media available from public URLs.
-
-</details>
-
-<details>
-<summary><b>Does question answering support non-English languages?</b></summary>
-
-**Answer**:
-See more details about [supported languages](../language-support.md).
-
-If you have content from multiple languages, be sure to create a separate project for each language.
-
-</details>
-
-## Manage service
-
-<details>
-<summary><b>I deleted my existing Search service. How can I fix this?</b></summary>
-
-**Answer**:
-If you delete an Azure Cognitive Search index, the operation is final and the index cannot be recovered.
-
-</details>
-
-<details>
-<summary><b>I deleted my `testkbv2` index in my Search service. How can I fix this?</b></summary>
-
-**Answer**:
-If you deleted the `testkbv2` index in your Search service, you can restore the data from the last published knowledge base. Use the recovery tool [RestoreTestKBIndex](https://github.com/pchoudhari/QnAMakerBackupRestore/tree/master/RestoreTestKBFromProd) available on GitHub.
-
-</details>
-
-<details>
-<summary><b>Can I use the same Azure Cognitive Search resource for projects using multiple languages?</b></summary>
-
-**Answer**:
-To use multiple languages and multiple projects, you have to create a project for each language, and the first project created for the language resource has to select the option **I want to select the language when I create a project in this resource**. This will create a separate Azure search service per language.
-
-</details>
-
-## Integrate with other services including Bots
-
-<details>
-<summary><b>Do I need to use Bot Framework in order to use question answering?</b></summary>
-
-**Answer**:
-No, you do not need to use the [Bot Framework](https://github.com/Microsoft/botbuilder-dotnet) with question answering. However, question answering is offered as one of several templates in [Azure Bot Service](/azure/bot-service/). Bot Service enables rapid intelligent bot development through Microsoft Bot Framework, and it runs in a serverless environment.
-
-</details>
-
-<details>
-<summary><b>How can I create a new bot with question answering?</b></summary>
-
-**Answer**:
-Follow the instructions in [this documentation](../tutorials/bot-service.md) to create your bot with Azure Bot Service.
-
-</details>
-
-<details>
-<summary><b>How do I use a different project with an existing Azure bot service?</b></summary>
-
-**Answer**:
-You need to have the following information about your project:
-
-* Project ID.
-* Project's published endpoint custom subdomain name, known as `host`, found on **Settings** page after you publish.
-* Project's published endpoint key - found on **Settings** page after you publish.
-
-With this information, go to your bot's app service in the Azure portal. Under **Settings -> Configuration -> Application settings**, change those values.
-
-The project's endpoint key is labeled `QnAAuthkey` in the ABS service.
-
-</details>
-
-<details>
-<summary><b>Can two or more client applications share a project?</b></summary>
-
-**Answer**:
-Yes, the project can be queried from any number of clients.
-
-</details>
-
-<details>
-<summary><b>How do I embed question answering in my website?</b></summary>
-
-**Answer**:
-Follow these steps to embed the question answering service as a web-chat control in your website:
-
-1. Create your FAQ bot by following the instructions [here](../tutorials/bot-service.md).
-2. Enable the web chat by following the steps [here](../tutorials/bot-service.md#integrate-the-bot-with-channels).
-
-</details>
-
-## Data storage
-
-<details>
-<summary><b>What data is stored and where is it stored?</b></summary>
-
-**Answer**:
-
-When you create your language resource for question answering, you select an Azure region. Your projects and log files are stored in this region.
-
-</details>
cognitive-services Language Support https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/question-answering/language-support.md
- Title: Language support - custom question answering-
-description: A list of culture, natural languages supported by custom question answering for your project. Do not mix languages in the same project.
----
-recommendations: false
--- Previously updated : 11/02/2021---
-# Language support for custom question answering and projects
-
-This article describes the language support options for custom question answering enabled resources and projects.
-
-In custom question answering, you can either select the language each time you add a new project to a resource (allowing multiple language support), or select a language that applies to all future projects for a resource.
-
-## Supporting multiple languages in one custom question answering enabled resource
-
-> [!div class="mx-imgBorder"]
-> ![Multi-lingual project selection](./media/language-support/choose-language.png)
-
-* When you are creating the first project in your service, you get the choice to pick the language each time you create a new project. Select this option to create projects belonging to different languages within one service.
-* The language setting cannot be modified for the service once the first project is created.
-* If you enable multiple languages for the project, then instead of having one test index for the service you will have one test index per project.
-
-## Supporting multiple languages in one project
-
-If you need to support a project that includes several languages, you can:
-
-* Use the [Translator service](../../translator/translator-info-overview.md) to translate a question into a single language before sending the question to your project. This allows you to focus on the quality of a single language and the quality of the alternate questions and answers.
-* Create a custom question answering enabled language resource, and a project inside that resource, for every language. This allows you to manage separate alternate questions and answer text that is more nuanced for each language. This gives you much more flexibility but requires a much higher maintenance cost when the questions or answers change across all languages.
-
-## Single language per resource
-
-If you **select the option to set the language used by all projects associated with the resource**, consider the following:
-* A language resource, and all its projects, will support one language only.
-* The language is explicitly set when the first project of the service is created.
-* The language can't be changed for any other projects associated with the resource.
-* The language is used by the Cognitive Search service (ranker #1) and Custom question answering (ranker #2) to generate the best answer to a query.
-
-## Languages supported
-
-The following list contains the languages supported for a question answering resource.
-
-| Language |
-|--|
-| Arabic |
-| Armenian |
-| Bangla |
-| Basque |
-| Bulgarian |
-| Catalan |
-| Chinese_Simplified |
-| Chinese_Traditional |
-| Croatian |
-| Czech |
-| Danish |
-| Dutch |
-| English |
-| Estonian |
-| Finnish |
-| French |
-| Galician |
-| German |
-| Greek |
-| Gujarati |
-| Hebrew |
-| Hindi |
-| Hungarian |
-| Icelandic |
-| Indonesian |
-| Irish |
-| Italian |
-| Japanese |
-| Kannada |
-| Korean |
-| Latvian |
-| Lithuanian |
-| Malayalam |
-| Malay |
-| Norwegian |
-| Polish |
-| Portuguese |
-| Punjabi |
-| Romanian |
-| Russian |
-| Serbian_Cyrillic |
-| Serbian_Latin |
-| Slovak |
-| Slovenian |
-| Spanish |
-| Swedish |
-| Tamil |
-| Telugu |
-| Thai |
-| Turkish |
-| Ukrainian |
-| Urdu |
-| Vietnamese |
-
-## Query matching and relevance
-Custom question answering depends on [Azure Cognitive Search language analyzers](/rest/api/searchservice/language-support) for providing results.
-
-While the Azure Cognitive Search capabilities are on par for supported languages, question answering has an additional ranker that sits above the Azure search results. In this ranker model, we use some special semantic and word-based features in the following languages.
-
-|Languages with additional ranker|
-|--|
-|Chinese|
-|Czech|
-|Dutch|
-|English|
-|French|
-|German|
-|Hungarian|
-|Italian|
-|Japanese|
-|Korean|
-|Polish|
-|Portuguese|
-|Spanish|
-|Swedish|
-
-This additional ranking is an internal part of the custom question answering ranker.
-
-## Next steps
--
cognitive-services Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/question-answering/overview.md
- Title: What is question answering?
-description: Question answering is a cloud-based Natural Language Processing (NLP) service that easily creates a natural conversational layer over your data. It can be used to find the most appropriate answer for any given natural language input, from your custom project.
----
-recommendations: false
- Previously updated : 06/03/2022
-keywords: "qna maker, low code chat bots, multi-turn conversations"
---
-# What is question answering?
-
-Question answering provides cloud-based Natural Language Processing (NLP) that allows you to create a natural conversational layer over your data. It is used to find appropriate answers from customer input or from a project.
-
-Question answering is commonly used to build conversational client applications, which include social media applications, chat bots, and speech-enabled desktop applications. This offering includes features like enhanced relevance using a deep learning ranker, precise answers, and end-to-end region support.
-
-Question answering comprises two capabilities:
-
-* Custom question answering: Using this capability, users can customize different aspects, such as editing question and answer pairs extracted from the content source, defining synonyms and metadata, and accepting question suggestions.
-* Prebuilt question answering: This capability allows users to get a response by querying a text passage without needing to create and manage knowledge bases.
-
-This documentation contains the following article types:
-
-* The [quickstarts](./quickstart/sdk.md) are step-by-step instructions that let you make calls to the service and get results in a short period of time.
-* The [how-to guides](./how-to/manage-knowledge-base.md) contain instructions for using the service in more specific or customized ways.
-* The [conceptual articles](./concepts/precise-answering.md) provide in-depth explanations of the service's functionality and features.
-* [**Tutorials**](./tutorials/bot-service.md) are longer guides that show you how to use the service as a component in broader business solutions.
-
-## When to use question answering
-
-* **When you have static information** - Use question answering when you have static information in your project. This project is custom to your needs, which you've built with documents such as PDFs and URLs.
-* **When you want to provide the same answer to a request, question, or command** - when different users submit the same question, the same answer is returned.
-* **When you want to filter static information based on meta-information** - add [metadata](./tutorials/multiple-domains.md) tags to provide additional filtering options relevant to your client application's users and the information. Common metadata information includes [chit-chat](./how-to/chit-chat.md), content type or format, content purpose, and content freshness.
-* **When you want to manage a bot conversation that includes static information** - your project takes a user's conversational text or command and answers it. If the answer is part of a pre-determined conversation flow, represented in your project with [multi-turn context](./tutorials/guided-conversations.md), the bot can easily provide this flow.
-
-## What is a project?
-
-Question answering [imports your content](./how-to/manage-knowledge-base.md) into a project full of question and answer pairs. The import process extracts information about the relationship between the parts of your structured and semi-structured content to imply relationships between the question and answer pairs. You can edit these question and answer pairs or add new pairs.
-
-The content of the question and answer pair includes:
-* All the alternate forms of the question
-* Metadata tags used to filter answer choices during the search
-* Follow-up prompts to continue the search refinement
-
-After you publish your project, a client application sends a user's question to your endpoint. Your question answering service processes the question and responds with the best answer.
-
-## Create a chat bot programmatically
-
-Once a question answering project is published, a client application sends a question to your project endpoint and receives the results as a JSON response. A common client application for question answering is a chat bot.
-
-![Ask a bot a question and get answer from project content](../../qnamaker/media/qnamaker-overview-learnabout/bot-chat-with-qnamaker.png)
-
-|Step|Action|
-|:--|:--|
-|1|The client application sends the user's _question_ (text in their own words), "How do I programmatically update my project?" to your project endpoint.|
-|2|Question answering uses the trained project to provide the correct answer and any follow-up prompts that can be used to refine the search for the best answer. Question answering returns a JSON-formatted response.|
-|3|The client application uses the JSON response to make decisions about how to continue the conversation. These decisions can include showing the top answer and presenting more choices to refine the search for the best answer. |
-|||
-
-## Build low code chat bots
-
-The [Language Studio](https://language.cognitive.azure.com/) portal provides the complete project authoring experience. You can import documents, in their current form, to your project. These documents (such as an FAQ, product manual, spreadsheet, or web page) are converted into question and answer pairs. Each pair is analyzed for follow-up prompts and connected to other pairs. The final _markdown_ format supports rich presentation including images and links.
-
-Once your project is edited, publish the project to a working [Azure Web App bot](https://azure.microsoft.com/services/bot-service/) without writing any code. Test your bot in the [Azure portal](https://portal.azure.com) or download it and continue development.
-
-## High quality responses with layered ranking
-
-The question answering system uses a layered ranking approach. The data is stored in Azure search, which also serves as the first ranking layer. The top results from Azure search are then passed through question answering's NLP re-ranking model to produce the final results and confidence score.
-
-## Multi-turn conversations
-
-Question answering provides multi-turn prompts and active learning to help you improve your basic question and answer pairs.
-
-**Multi-turn prompts** give you the opportunity to connect question and answer pairs. This connection allows the client application to provide a top answer and provides more questions to refine the search for a final answer.
-
-After the project receives questions from users at the published endpoint, question answering applies **active learning** to these real-world questions to suggest changes to your project to improve the quality.
-
-## Development lifecycle
-
-Question answering provides authoring, training, and publishing along with collaboration permissions to integrate into the full development life cycle.
-
-> [!div class="mx-imgBorder"]
-> ![Conceptual image of development cycle](../../qnamaker/media/qnamaker-overview-learnabout/development-cycle.png)
-
-## Complete a quickstart
-
-We offer quickstarts in the most popular programming languages, each designed to teach you basic design patterns and have you running code in less than 10 minutes.
-
-* [Get started with the question answering client library](./quickstart/sdk.md)
-
-## Next steps
-Question answering provides everything you need to build, manage, and deploy your custom project.
-
cognitive-services Sdk https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/question-answering/quickstart/sdk.md
- Title: "Quickstart: Use SDK to create and manage project - custom question answering"
-description: This quickstart shows you how to create and manage your project using custom question answering.
--- Previously updated : 06/06/2022--
-recommendations: false
-
-zone_pivot_groups: custom-qna-quickstart
--
-# Quickstart: question answering
-
-> [!NOTE]
-> Are you looking to migrate your workloads from QnA Maker? See our [migration guide](../how-to/migrate-qnamaker-to-question-answering.md) for information on feature comparisons and migration steps.
-
-Get started with the custom question answering client library. Follow these steps to install the package and try out the example code for basic tasks.
-----
-## Clean up resources
-
-If you want to clean up and remove a Cognitive Services subscription, you can delete the resource or resource group. Deleting the resource group also deletes any other resources associated with it.
-
-* [Portal](../../../cognitive-services-apis-create-account.md#clean-up-resources)
-* [Azure CLI](../../../cognitive-services-apis-create-account-cli.md#clean-up-resources)
---
-## Explore the REST API
-
-To learn about automating your question answering pipeline, consult the REST API documentation. Currently, authoring functionality is only available via the REST API:
-
-* [Authoring API reference](/rest/api/cognitiveservices/questionanswering/question-answering-projects)
-* [Authoring API cURL examples](../how-to/authoring.md)
-* [Runtime API reference](/rest/api/cognitiveservices/questionanswering/question-answering)
-
-## Next steps
-
-* [Tutorial: Create an FAQ bot](../tutorials/bot-service.md)
cognitive-services Markdown Format https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/question-answering/reference/markdown-format.md
- Title: Markdown format - question answering
-description: Following is the list of markdown formats that you can use in your answer text.
----- Previously updated : 01/21/2022--
-# Markdown format supported in answer text
-
-Question answering stores answer text as markdown. There are many flavors of markdown. In order to make sure the answer text is returned and displayed correctly, use this reference.
-
-Use the **[CommonMark](https://commonmark.org/help/tutorial/index.html)** tutorial to validate your markdown. The tutorial has a **Try it** feature for quick copy/paste validation.
-
-## When to use rich-text editing versus markdown
-
-Rich-text editing of answers allows you, as the author, to use a formatting toolbar to quickly select and format text.
-
-Markdown is a better tool when you need to autogenerate content to create projects to be imported as part of a CI/CD pipeline or for batch testing.
-
-## Supported markdown format
-
-Following is the list of markdown formats that you can use in your answer text.
-
-|Purpose|Format|Example markdown|
-|--|--|--|
-A new line between 2 sentences.|`\n\n`|`How can I create a bot with \n\n question answering?`|
-|Headers from h1 to h6, the number of `#` denotes which header. 1 `#` is the h1.|`\n# text \n## text \n### text \n####text \n#####text` |`## Creating a bot \n ...text.... \n### Important news\n ...text... \n### Related Information\n ....text...`<br><br>`\n# my h1 \n## my h2\n### my h3 \n#### my h4 \n##### my h5`|
-|Italics |`*text*`|`How do I create a bot with *question answering*?`|
-|Strong (bold)|`**text**`|`How do I create a bot with **question answering**?`|
-|URL for link|`[text](https://www.my.com)`|`How do I create a bot with [question answering](https://language.cognitive.azure.com/)?`|
-|*URL for public image|`![text](https://www.my.com/image.png)`|`How can I create a bot with ![question answering](path-to-your-image.png)`|
-|Strikethrough|`~~text~~`|`some ~~questions~~ questions need to be asked`|
-|Bold and italics|`***text***`|`How can I create a ***question answering*** bot?`|
-|Bold URL for link|`[**text**](https://www.my.com)`|`How do I create a bot with [**question answering**](https://language.cognitive.azure.com/)?`|
-|Italics URL for link|`[*text*](https://www.my.com)`|`How do I create a bot with [*question answering*](https://language.cognitive.azure.com/)?`|
-|Escape markdown symbols|`\*text\*`|`How do I create a bot with \*question answering\*?`|
-|Ordered list|`\n 1. item1 \n 1. item2`|`This is an ordered list: \n 1. List item 1 \n 1. List item 2`<br>The preceding example uses automatic numbering built into markdown.<br>`This is an ordered list: \n 1. List item 1 \n 2. List item 2`<br>The preceding example uses explicit numbering.|
-|Unordered list|`\n * item1 \n * item2`<br>or<br>`\n - item1 \n - item2`|`This is an unordered list: \n * List item 1 \n * List item 2`|
-|Nested lists|`\n * Parent1 \n\t * Child1 \n\t * Child2 \n * Parent2`<br><br>`\n * Parent1 \n\t 1. Child1 \n\t * Child2 \n 1. Parent2`<br><br>You can nest ordered and unordered lists together. The tab, `\t`, indicates the indentation level of the child element.|`This is an unordered list: \n * List item 1 \n\t * Child1 \n\t * Child2 \n * List item 2`<br><br>`This is an ordered nested list: \n 1. Parent1 \n\t 1. Child1 \n\t 1. Child2 \n 1. Parent2`|
-
-* Question answering doesn't process the image in any way. It is the client application's role to render the image.
-
-If you want to add content using the update/replace project APIs and the content or file contains HTML tags, you can preserve the HTML in your file by ensuring that the opening and closing tags are converted to the encoded format.
-
-| Preserve HTML | Representation in the API request | Representation in KB |
-|--||-|
-| Yes | \&lt;br\&gt; | &lt;br&gt; |
-| Yes | \&lt;h3\&gt;header\&lt;/h3\&gt; | &lt;h3&gt;header&lt;/h3&gt; |
-
-Additionally, `CR LF(\r\n)` is converted to `\n` in the KB, and `LF(\n)` is kept as-is. If you want to escape any escape sequence like \t or \n, you can use a backslash, for example: '\\\\r\\\\n' and '\\\\t'
-
-## Next steps
-
-* [Import a project](../how-to/migrate-knowledge-base.md)
cognitive-services Bot Service https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/question-answering/tutorials/bot-service.md
- Title: "Tutorial: Create an FAQ bot with question answering and Azure Bot Service"
-description: In this tutorial, create a no code FAQ Bot with question answering and Azure Bot Service.
--- Previously updated : 11/02/2021---
-# Tutorial: Create a FAQ bot
-
-Create a FAQ Bot with custom question answering and Azure [Bot Service](https://azure.microsoft.com/services/bot-service/) with no code.
-
-In this tutorial, you learn how to:
-
-<!-- green checkmark -->
-> [!div class="checklist"]
-> * Link a question answering project to an Azure Bot Service
-> * Deploy a Bot
-> * Chat with the Bot in web chat
-> * Enable the Bot in supported channels
-
-## Create and publish a project
-
-Follow the [getting started article](../how-to/create-test-deploy.md). Once the project has been successfully deployed, you will be ready to start this article.
-
-## Create a bot
-
-After deploying your project, you can create a bot from the **Deploy project** page:
-
-* You can create several bots quickly, all pointing to the same project for different regions or pricing plans for the individual bots.
-
-* When you make changes to the project and redeploy, you don't need to take further action with the bot. It's already configured to work with the project, and works with all future changes to the project. Every time you publish a project, all the bots connected to it are automatically updated.
-
-1. In Language Studio, on the question answering **Deploy project** page, select the **Create a bot** button.
-
- > [!div class="mx-imgBorder"]
- > ![Screenshot of UI with option to create a bot in Azure.](../media/bot-service/create-bot-in-azure.png)
-
-1. A new browser tab opens for the Azure portal, with the Azure Bot Service's creation page. Configure the Azure Bot Service and select the **Create** button.
-
- |Setting |Value|
- |-||
- | Bot handle| Unique identifier for your bot. This value needs to be distinct from your App name |
- | Subscription | Select your subscription |
- | Resource group | Select an existing resource group or create a new one |
- | Location | Select your desired location |
- | Pricing tier | Choose pricing tier |
- |App name | App service name for your bot |
- |SDK language | C# or Node.js. Once the bot is created, you can download the code to your local development environment and continue the development process. |
-    | Language Resource Key | This key is automatically populated from the deployed question answering project |
-    | App service plan/Location | This value is automatically populated; do not change it |
-
-1. After the bot is created, open the **Bot service** resource.
-1. Under **Settings**, select **Test in Web Chat**.
-
- > [!div class="mx-imgBorder"]
- > ![Screenshot of Azure Bot Service UI button that reads "Test web chat".](../media/bot-service/test-in-web-chat.png)
-
-1. At the chat prompt of **Type your message**, enter:
-
- `How do I setup my surface book?`
-
- The chat bot responds with an answer from your project.
-
- > [!div class="mx-imgBorder"]
- > ![Screenshot of bot test chat response containing a question and bot generated answer.](../media/bot-service/bot-chat.png)
-
-## Integrate the bot with channels
-
-Select **Channels** in the Bot service resource that you have created. You can activate the Bot in additional [supported channels](/azure/bot-service/bot-service-manage-channels).
-
- >[!div class="mx-imgBorder"]
- >![Screenshot of integration with teams with product icons.](../media/bot-service/channels.png)
-
-## Clean up resources
-
-If you're not going to continue to use this application, delete the associated question answering and bot service resources.
-
-## Next steps
-
-Advance to the next article to learn how to customize your FAQ bot with multi-turn prompts.
cognitive-services Multiple Domains https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/question-answering/tutorials/multiple-domains.md
- Title: "Tutorial: Create a FAQ bot for multiple categories with Azure Bot Service"
-description: In this tutorial, create a no code FAQ Bot for production use cases with question answering and Azure Bot Service.
----- Previously updated : 11/02/2021---
-# Add multiple categories to your FAQ bot
-
-In this tutorial, you learn how to:
-
-> [!div class="checklist"]
-> * Create a project and tag question answer pairs into distinct categories with metadata
-> * Create a separate project for each domain
-> * Create a separate language resource for each domain
-
-When building a FAQ bot, you may encounter use cases that require you to address queries across multiple domains. Let's say the marketing team at Microsoft wants to build a customer support bot that answers common user queries on multiple Surface Products. For the sake of simplicity here, we will be using two FAQ URLs, [Surface Pen](https://support.microsoft.com/surface/how-to-use-your-surface-pen-8a403519-cd1f-15b2-c9df-faa5aa924e98), and [Surface Earbuds](https://support.microsoft.com/surface/use-surface-earbuds-aea108c3-9344-0f11-e5f5-6fc9f57b21f9) to create the project.
-
-## Create project with domain specific metadata
-
-Content authors can use documents to extract question answer pairs or add custom question answer pairs to the project. To group these question answer pairs into specific domains or categories, you can add metadata.
-
-For the bot on Surface products, you can take the following steps to create a bot that answers queries for both product types:
-
-1. Add the following FAQ URLs as sources by selecting **Add source** > **URLs**, and then selecting **Add all** once you have added each of the URLs below:
-
- [Surface Pen FAQ](https://support.microsoft.com/surface/how-to-use-your-surface-pen-8a403519-cd1f-15b2-c9df-faa5aa924e98)<br>[Surface Earbuds FAQ](https://support.microsoft.com/surface/use-surface-earbuds-aea108c3-9344-0f11-e5f5-6fc9f57b21f9)
-
- >[!div class="mx-imgBorder"]
- >[![Screenshot of add URL UI.](../media/multiple-domains/add-url.png)](../media/multiple-domains/add-url.png#lightbox)
-
-2. In this project, we have question answer pairs for two products, and we would like to distinguish between them so that we can search for responses among the question answer pairs for a given product. To do this, we can update the metadata field for the question answer pairs.
-
- As you can see in the example below, we have added a metadata with **product** as key and **surface_pen** or **surface_earbuds** as values wherever applicable. You can extend this example to extract data on multiple products and add a different value for each product.
-
- >[!div class="mx-imgBorder"]
- >[![Screenshot of metadata example.](../media/multiple-domains/product-metadata.png)](../media/multiple-domains/product-metadata.png#lightbox)
-
-3. Now, to restrict the system to search for the response within a particular product, you need to pass that product as a filter in the question answering REST API.
-
- The REST API prediction URL can be retrieved from the Deploy project pane:
-
- >[!div class="mx-imgBorder"]
- >[![Screenshot of the Deploy project page with the prediction URL displayed.](../media/multiple-domains/prediction-url.png)](../media/multiple-domains/prediction-url.png#lightbox)
-
-   In the JSON body for the API call, we pass *surface_pen* as the value for the *product* metadata key, so the system only looks for a response among the question answer pairs with the same metadata. A curl sketch of the full request is shown after the list below.
-
- ```json
-   {
-     "question": "What is the price?",
-     "top": 3,
-     "answerSpanRequest": {
-       "enable": true,
-       "confidenceScoreThreshold": 0.3,
-       "topAnswersWithSpan": 1
-     },
-     "filters": {
-       "metadataFilter": {
-         "metadata": [
-           {
-             "key": "product",
-             "value": "surface_pen"
-           }
-         ]
-       }
-     }
-   }
- ```
-
- You can obtain metadata value based on user input in the following ways:
-
- * Explicitly take the domain as input from the user through the bot client. For instance as shown below, you can take product category as input from the user when the conversation is initiated.
-
- ![Take metadata input](../media/multiple-domains/explicit-metadata-input.png)
-
- * Implicitly identify domain based on bot context. For instance, in case the previous question was on a particular Surface product, it can be saved as context by the client. If the user doesn't specify the product in the next query, you could pass on the bot context as metadata to the Generate Answer API.
-
- ![Pass context](../media/multiple-domains/extract-metadata-from-context.png)
-
- * Extract entity from user query to identify domain to be used for metadata filter. You can use other Cognitive Services such as [Named Entity Recognition (NER)](../../named-entity-recognition/overview.md) and [conversational language understanding](../../conversational-language-understanding/overview.md) for entity extraction.
-
- ![Extract metadata from query](../media/multiple-domains/extract-metadata-from-query.png)
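-
-The following is a minimal curl sketch of the filtered request shown above, sent to the prediction URL. The resource endpoint, project name, and key are placeholders; substitute the values from your own **Get prediction URL** page.
-
-```bash
-# Placeholder endpoint, project name, and key: replace with your own values.
-curl -X POST "https://<your-resource>.cognitiveservices.azure.com/language/:query-knowledgebases?projectName=<your-project>&api-version=2021-10-01&deploymentName=production" \
-  -H "Ocp-Apim-Subscription-Key: <your-language-resource-key>" \
-  -H "Content-Type: application/json" \
-  -d '{
-        "question": "What is the price?",
-        "top": 3,
-        "filters": {
-          "metadataFilter": {
-            "metadata": [
-              { "key": "product", "value": "surface_pen" }
-            ]
-          }
-        }
-      }'
-```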
-
-### How large can our projects be?
-
-You can add up to 50,000 question answer pairs to a single project. If your data exceeds 50,000 question answer pairs, you should consider splitting the project.
-
-## Create a separate project for each domain
-
-You can also create a separate project for each domain and maintain the projects separately. All APIs require the user to pass the project name to make any update to the project or to fetch an answer to the user's question.
-
-When the service receives the user question, you need to pass the `projectName` in the REST API endpoint shown below to fetch a response from the relevant project. You can locate the URL on the **Deploy project** page under **Get prediction URL**:
-
-`https://southcentralus.api.cognitive.microsoft.com/language/:query-knowledgebases?projectName=Test-Project-English&api-version=2021-10-01&deploymentName=production`
-
-## Create a separate language resource for each domain
-
-Let's say the marketing team at Microsoft wants to build a customer support bot that answers user queries on Surface and Xbox products. They plan to assign distinct teams to access projects on Surface and Xbox. In this case, it is advised to create two question answering resources - one for Surface and another for Xbox. You can however define distinct roles for users accessing the same resource.
cognitive-services Multiple Languages https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/question-answering/tutorials/multiple-languages.md
- Title: Create projects in multiple languages - question answering
-description: In this tutorial, you will learn how to create projects with multiple languages.
----- Previously updated : 11/02/2021---
-# Create projects in multiple languages
-
-In this tutorial, you learn how to:
-
-<!-- green checkmark -->
-> [!div class="checklist"]
-> * Create a project that supports English
-> * Create a project that supports German
-
-This tutorial will walk through the process of creating projects in multiple languages. We use the [Surface Pen FAQ](https://support.microsoft.com/surface/how-to-use-your-surface-pen-8a403519-cd1f-15b2-c9df-faa5aa924e98) URL to create projects in German and English. We then deploy the project and use the question answering REST API to query and get answers to FAQs in the desired language.
-
-## Create project in German
-
-To be able to create a project in more than one language, the multiple language setting must be set at the creation of the first project that is associated with the language resource.
-
-> [!div class="mx-imgBorder"]
-> [ ![Screenshot of UI for create project with I want to select the language when I create a project in this resource selected.]( ../media/multiple-languages/multiple-languages.png) ](../media/multiple-languages/multiple-languages.png#lightbox)
-
-1. From the [Language Studio](https://aka.ms/languageStudio) home page, open custom question answering. Then select **Create new project** > **I want to select the language when I create a project in this resource** > **Next**.
-
-2. Fill out the **Enter basic information** page and select **Next** > **Create project**.
-
- |Setting| Value|
- ||-|
- |Name | Unique name for your project|
- |Description | Unique description to help identify the project |
- |Source language | For this tutorial, select German |
- |Default answer | Default answer when no answer is returned |
-
- > [!div class="mx-imgBorder"]
-   > [ ![Screenshot of the create project UI with the German language selected.]( ../media/multiple-languages/choose-german.png) ](../media/multiple-languages/choose-german.png#lightbox)
-
-3. **Add source** > **URLs** > **Add url** > **Add all**.
-
- |Setting| Value |
- |-||
- | Url Name | Surface Pen German |
- | URL | https://support.microsoft.com/de-de/surface/how-to-use-your-surface-pen-8a403519-cd1f-15b2-c9df-faa5aa924e98 |
- | Classify file structure | Auto-detect |
-
- Question answering reads the document and extracts question answer pairs from the source URL to create the project in the German language. If you select the link to the source, the project page opens where we can edit the contents.
-
- > [!div class="mx-imgBorder"]
- > [ ![Screenshot of UI with German questions and answers](../media/multiple-languages/german-language.png) ]( ../media/multiple-languages/german-language.png#lightbox)
-
-## Create project in English
-
-We now repeat the steps above, but this time select English and provide an English URL as a source.
-
-1. From the [Language Studio](https://aka.ms/languageStudio) open the question answering page > **Create new project**.
-
-2. Fill out the **Enter basic information** page and select **Next** > **Create project**.
-
- |Setting| Value|
- ||-|
- |Name | Unique name for your project|
- |Description | Unique description to help identify the project |
- |Source language | For this tutorial, select English |
- |Default answer | Default answer when no answer is returned |
-
-3. **Add source** > **URLs** > **Add url** > **Add all**.
-
- |Setting| Value |
- |--|--|
- | Url Name | Surface Pen English |
- | URL | https://support.microsoft.com/en-us/surface/how-to-use-your-surface-pen-8a403519-cd1f-15b2-c9df-faa5aa924e98 |
- | Classify file structure | Auto-detect |
-
-## Deploy and query project
-
-We're now ready to deploy the two projects and query them in the desired language using the question answering REST API. Once a project is deployed, the following page is shown, which provides details for querying the project.
-
-> [!div class="mx-imgBorder"]
-> [ ![Screenshot of UI with English questions and answers](../media/multiple-languages/get-prediction-url.png) ](../media/multiple-languages/get-prediction-url.png#lightbox)
-
-The language for the incoming user query can be detected with the [Language Detection API](../../language-detection/how-to/call-api.md) and the user can call the appropriate endpoint and project depending on the detected language.
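-
-As a rough sketch (resource name, project names, and key are placeholders), a client could route a German query to the German project and an English query to the English project like this:
-
-```bash
-# Placeholder values: use the endpoint, project names, and key from your own Get prediction URL page.
-KEY="<your-language-resource-key>"
-ENDPOINT="https://<your-resource>.cognitiveservices.azure.com"
-
-# German question goes to the German project.
-curl -X POST "$ENDPOINT/language/:query-knowledgebases?projectName=<german-project-name>&api-version=2021-10-01&deploymentName=production" \
-  -H "Ocp-Apim-Subscription-Key: $KEY" -H "Content-Type: application/json" \
-  -d '{"question": "Wie verwende ich meinen Surface Pen?"}'
-
-# English question goes to the English project.
-curl -X POST "$ENDPOINT/language/:query-knowledgebases?projectName=<english-project-name>&api-version=2021-10-01&deploymentName=production" \
-  -H "Ocp-Apim-Subscription-Key: $KEY" -H "Content-Type: application/json" \
-  -d '{"question": "How do I use my Surface Pen?"}'
-```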
cognitive-services Power Virtual Agents https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/question-answering/tutorials/power-virtual-agents.md
- Title: "Tutorial: Add your Question Answering project to Power Virtual Agents"
-description: In this tutorial, you will learn how to add your Question Answering project to Power Virtual Agents.
----- Previously updated : 10/11/2022---
-# Add your Question Answering project to Power Virtual Agents
-
-Create and extend a [Power Virtual Agents](https://powervirtualagents.microsoft.com/) bot to provide answers from your project.
-
-> [!NOTE]
-> The integration demonstrated in this tutorial is in preview and is not intended for deployment to production environments.
-
-In this tutorial, you learn how to:
-> [!div class="checklist"]
-> * Create a Power Virtual Agents bot
-> * Create a system fallback topic
-> * Add Question Answering as an action to a topic as a Power Automate flow
-> * Create a Power Automate solution
-> * Add a Power Automate flow to your solution
-> * Publish Power Virtual Agents
-> * Test Power Virtual Agents, and receive an answer from your Question Answering project
-
-> [!NOTE]
-> The QnA Maker service is being retired on the 31st of March, 2025. A newer version of the question answering capability is now available as part of [Azure Cognitive Service for Language](../../index.yml). For question answering capabilities within the Language Service, see [question answering](../overview.md). Starting 1st October, 2022 you won't be able to create new QnA Maker resources. For information on migrating existing QnA Maker knowledge bases to question answering, consult the [migration guide](../how-to/migrate-qnamaker.md).
-
-## Create and publish a project
-1. Follow the [quickstart](../quickstart/sdk.md?pivots=studio) to create and deploy a Question Answering project.
-2. After deploying your project from Language Studio, select **Get Prediction URL**.
-3. Use the hostname of the prediction URL as your Site URL, and use the Ocp-Apim-Subscription-Key value as your Account key.
-
-> [!div class="mx-imgBorder"]
-> [ ![Screenshot of how to obtain the prediction URL and subscription key displayed.]( ../media/power-virtual-agents/get-prediction-url.png) ]( ../media/power-virtual-agents/get-prediction-url.png#lightbox)
-
-4. Create a Custom Question Answering connector: Follow the [connector documentation](/connectors/languagequestionansw/) to create a connection to Question Answering.
-5. Use this tutorial to create a Bot with Power Virtual Agents instead of creating a bot from Language Studio.
-
-## Create a bot in Power Virtual Agents
-[Power Virtual Agents](https://powervirtualagents.microsoft.com/) allows teams to create powerful bots by using a guided, no-code graphical interface. You don't need data scientists or developers.
-
-Create a bot by following the steps in [Create and delete Power Virtual Agents bots](/power-virtual-agents/authoring-first-bot).
-
-## Create the system fallback topic
-In Power Virtual Agents, you create a bot with a series of topics (subject areas), in order to answer user questions by performing actions.
-
-Although the bot can connect to your project from any topic, this tutorial uses the system fallback topic. The fallback topic is used when the bot can't find an answer. The bot passes the user's text to Question Answering Query knowledgebase API, receives the answer from your project, and displays it to the user as a message.
-
-Create a fallback topic by following the steps in [Configure the system fallback topic in Power Virtual Agents](/power-virtual-agents/authoring-system-fallback-topic).
-
-## Use the authoring canvas to add an action
-Use the Power Virtual Agents authoring canvas to connect the fallback topic to your project. The topic starts with the unrecognized user text. Add an action that passes that text to Question Answering, and then shows the answer as a message. The last step of displaying an answer is handled as a [separate step](../../../QnAMaker/Tutorials/integrate-with-power-virtual-assistant-fallback-topic.md#add-your-solutions-flow-to-power-virtual-agents), later in this tutorial.
-
-This section creates the fallback topic conversation flow.
-
-The new fallback action might already have conversation flow elements. Delete the **Escalate** item by selecting the **Options** menu.
-
-> [!div class="mx-imgBorder"]
-> [ ![Screenshot of how to delete existing conversation flow elements.]( ../media/power-virtual-agents/delete-action.png) ]( ../media/power-virtual-agents/delete-action.png#lightbox)
-
-Below the *Message* node, select the (**+**) icon, then select **Call an action**.
-
-> [!div class="mx-imgBorder"]
-> [ ![Screenshot of how to select the Call an action feature.]( ../media/power-virtual-agents/trigger-action-for-power-automate.png) ]( ../media/power-virtual-agents/trigger-action-for-power-automate.png#lightbox)
-
-Select **Create a flow**. This takes you to the Power Automate portal.
-
-> [!div class="mx-imgBorder"]
-> [ ![Screenshot of where to find the Create a flow action.]( ../media/power-virtual-agents/create-flow.png) ]( ../media/power-virtual-agents/create-flow.png#lightbox)
-
-Power Automate opens a new template as shown below.
-
-> [!div class="mx-imgBorder"]
-> [ ![Screenshot of the default Power Automate template.]( ../media/power-virtual-agents/power-automate-actions.png) ]( ../media/power-virtual-agents/power-automate-actions.png#lightbox)
-**Do not use the template shown above.**
-
-Instead, follow the steps below to create a Power Automate flow. This flow:
-- Takes the incoming user text as a question, and sends it to Question Answering.
-- Returns the top response back to your bot.
-
-Click on **Create** in the left panel, then click **OK** to leave the page.
-
-> [!div class="mx-imgBorder"]
-> [ ![Screenshot of the Create action in the left panel and a confirmation message for navigating away from the page.]( ../media/power-virtual-agents/power-automate-create-new.png) ]( ../media/power-virtual-agents/power-automate-create-new.png#lightbox)
-
-Select "Instant Cloud flow"
-
-> [!div class="mx-imgBorder"]
-> [ ![Screenshot of the Instant cloud flow selection box.]( ../media/power-virtual-agents/create-instant-cloud-flow.png) ]( ../media/power-virtual-agents/create-instant-cloud-flow.png#lightbox)
-
-To test this connector, click on "When Power Virtual Agents calls a flow" and then click **Create**.
-
-> [!div class="mx-imgBorder"]
-> [ ![Screenshot of the When Power Virtual Agents calls a flow selection in the Choose how to trigger this flow list.]( ../media/power-virtual-agents/create-trigger.png) ]( ../media/power-virtual-agents/create-trigger.png#lightbox)
-
-Click on "New Step" and search for "Power Virtual Agents". Choose "Add an input" and select text. Next, provide the keyword and the value.
-
-> [!div class="mx-imgBorder"]
-> [ ![Screenshot of the Add an input option.]( ../media/power-virtual-agents/flow-step-1.png) ]( ../media/power-virtual-agents/flow-step-1.png#lightbox)
-
-Click on "New Step" and search "Language - Question Answering" and choose "Generate answer from Project" from the three actions.
-
-> [!div class="mx-imgBorder"]
-> [ ![Screenshot of the Generate answer from Project selection in the Action list.]( ../media/power-virtual-agents/flow-step-2.png) ]( ../media/power-virtual-agents/flow-step-2.png#lightbox)
-
-This action answers the specified question using your project. Type in the project name, deployment name, and API version, and select the question from the previous step.
-
-> [!div class="mx-imgBorder"]
-> [ ![Screenshot of fields for the Generate answer from Project action.]( ../media/power-virtual-agents/flow-step-3.png) ]( ../media/power-virtual-agents/flow-step-3.png#lightbox)
-
-Click on "New Step" and search for "Initialize variable". Choose a name for your variable, and select the "String" type.
-
-> [!div class="mx-imgBorder"]
-> [ ![Screenshot of the Initialize variable action fields.]( ../media/power-virtual-agents/flow-step-4.png) ]( ../media/power-virtual-agents/flow-step-4.png#lightbox)
-
-Click on "New Step" again, and search for "Apply to each", then select the output from the previous steps and add an action of "Set variable" and select the connector action.
-
-> [!div class="mx-imgBorder"]
-> [ ![Screenshot of the Set variable action within the Apply to each step.]( ../media/power-virtual-agents/flow-step-5.png) ]( ../media/power-virtual-agents/flow-step-5.png#lightbox)
-
-Click on "New Step" and search for "Return value(s) to Power Virtual Agents" and type in a keyword, then choose the previous variable name in the answer.
-
-> [!div class="mx-imgBorder"]
-> [ ![Screenshot of the Return value(s) to Power Virtual Agents step, containing the previous variable.]( ../media/power-virtual-agents/flow-step-6.png) ]( ../media/power-virtual-agents/flow-step-6.png#lightbox)
-
-The list of completed steps should look like this.
-
-> [!div class="mx-imgBorder"]
-> [ ![Screenshot of the full list of completed steps within Power Virtual Agents.]( ../media/power-virtual-agents/flow-step-7.png) ]( ../media/power-virtual-agents/flow-step-7.png#lightbox)
-
-Select **Save** to save the flow.
-
-## Create a solution and add the flow
-
-For the bot to find and connect to the flow, the flow must be included in a Power Automate solution.
-
-1. While still in the Power Automate portal, select Solutions from the left-side navigation.
-2. Select **+ New solution**.
-3. Enter a display name. The list of solutions includes every solution in your organization or school. Choose a naming convention that helps you filter to just your solutions. For example, you might prefix your email to your solution name: jondoe-power-virtual-agent-question-answering-fallback.
-4. Select your publisher from the list of choices.
-5. Accept the default values for the name and version.
-6. Select **Create** to finish the process.
-
-**Add your flow to the solution**
-
-1. In the list of solutions, select the solution you just created. It should be at the top of the list. If it isn't, search by your email name, which is part of the solution name.
-2. In the solution, select **+ Add existing**, and then select Flow from the list.
-3. Find your flow from the **Outside solutions** list, and then select Add to finish the process. If there are many flows, look at the **Modified** column to find the most recent flow.
-
-## Add your solution's flow to Power Virtual Agents
-
-1. Return to the browser tab with your bot in Power Virtual Agents. The authoring canvas should still be open.
-2. To insert a new step in the flow, above the **Message** action box, select the plus (+) icon. Then select **Call an action**.
-3. From the **Flow** pop-up window, select the new flow named **Generate answers using Question Answering Project...**. The new action appears in the flow.
-
-> [!div class="mx-imgBorder"]
-> [ ![Screenshot of the Call an action using the Generate answers using Question Answering Project flow.]( ../media/power-virtual-agents/flow-step-8.png) ]( ../media/power-virtual-agents/flow-step-8.png#lightbox)
-
-4. To correctly set the input variable to the QnA Maker action, select **Select a variable**, then select **bot.UnrecognizedTriggerPhrase**.
-
-> [!div class="mx-imgBorder"]
-> [ ![Screenshot of the selected bot.UnrecognizedTriggerPhrase variable within the action call.]( ../media/power-virtual-agents/flow-step-9.png) ]( ../media/power-virtual-agents/flow-step-9.png#lightbox)
-
-5. To correctly set the output variable to the Question Answering action, in the **Message** action, select **UnrecognizedTriggerPhrase**, then select the icon to insert a variable, {x}, then select **FinalAnswer**.
-6. From the context toolbar, select **Save**, to save the authoring canvas details for the topic.
-
-Here's what the final bot canvas looks like:
-
-> [!div class="mx-imgBorder"]
-> [ ![Screenshot of the completed bot canvas.]( ../media/power-virtual-agents/flow-step-10.png) ]( ../media/power-virtual-agents/flow-step-10.png#lightbox)
-
-## Test the bot
-
-As you design your bot in Power Virtual Agents, you can use the [Test bot pane](/power-virtual-agents/authoring-test-bot) to see how the bot leads a customer through the bot conversation.
-
-1. In the test pane, toggle **Track between topics**. This allows you to watch the progression between topics, as well as within a single topic.
-2. Test the bot by entering the user text in the following order. The authoring canvas reports the successful steps with a green check mark.
-
-|**Question order**|**Test questions** |**Purpose** |
-||-|--|
-|1 |Hello |Begin conversation |
-|2 |Store hours |Sample topic. This is configured for you without any additional work on your part. |
-|3 |Yes |In reply to "Did that answer your question?" |
-|4 |Excellent |In reply to "Please rate your experience." |
-|5 |Yes |In reply to "Can I help with anything else?" |
-|6 |How can I improve the throughput performance for query predictions?|This question triggers the fallback action, which sends the text to your project to answer. Then the answer is shown. The green check marks for the individual actions indicate success for each action.|
-
-> [!div class="mx-imgBorder"]
-> [ ![Screenshot of the completed test bot running alongside the tutorial flow.]( ../media/power-virtual-agents/flow-step-11.png) ](../media/power-virtual-agents/flow-step-11.png#lightbox)
-
-## Publish your bot
-
-To make the bot available to all members of your organization, you need to publish it.
-
-Publish your bot by following the steps in [Publish your bot](/power-virtual-agents/publication-fundamentals-publish-channels).
-
-## Share your bot
-
-To make your bot available to others, you first need to publish it to a channel. For this tutorial we'll use the demo website.
-
-Configure the demo website by following the steps in [Configure a chatbot for a live or demo website](/power-virtual-agents/publication-connect-bot-to-web-channels).
-
-Then you can share your website URL with your school or organization members.
-
-## Clean up resources
-
-When you are done with the project, remove the QnA Maker resources in the Azure portal.
-
-## See also
-
-* [Tutorial: Create an FAQ bot](../tutorials/bot-service.md)
cognitive-services Call Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/sentiment-opinion-mining/how-to/call-api.md
- Title: How to perform sentiment analysis and opinion mining-
-description: This article will show you how to detect sentiment, and mine for opinions in text.
------ Previously updated : 07/27/2022----
-# How to: Use Sentiment analysis and Opinion Mining
-
-Sentiment analysis and opinion mining are two ways of detecting positive and negative sentiment. Using sentiment analysis, you can get sentiment labels (such as "negative", "neutral" and "positive") and confidence scores at the sentence and document-level. Opinion Mining provides granular information about the opinions related to words (such as the attributes of products or services) in the text.
-
-## Sentiment Analysis
-
-Sentiment Analysis applies sentiment labels to text, which are returned at a sentence and document level, with a confidence score for each.
-
-The labels are *positive*, *negative*, and *neutral*. At the document level, the *mixed* sentiment label can also be returned. The sentiment of the document is determined as follows:
-
-| Sentence sentiment | Returned document label |
-|--|-|
-| At least one `positive` sentence is in the document. The rest of the sentences are `neutral`. | `positive` |
-| At least one `negative` sentence is in the document. The rest of the sentences are `neutral`. | `negative` |
-| At least one `negative` sentence and at least one `positive` sentence are in the document. | `mixed` |
-| All sentences in the document are `neutral`. | `neutral` |
-
-Confidence scores range from 0 to 1. Scores closer to 1 indicate higher confidence in the label's classification, while lower scores indicate lower confidence. For each document or sentence, the predicted scores associated with the labels (positive, negative, and neutral) add up to 1. For more information, see the [Responsible AI transparency note](/legal/cognitive-services/text-analytics/transparency-note?context=/azure/cognitive-services/text-analytics/context/context).
-
-## Opinion Mining
-
-Opinion Mining is a feature of Sentiment Analysis. Also known as Aspect-based Sentiment Analysis in Natural Language Processing (NLP), this feature provides more granular information about the opinions related to attributes of products or services in text. The API surfaces opinions as a target (noun or verb) and an assessment (adjective).
-
-For example, if a customer leaves feedback about a hotel such as "The room was great, but the staff was unfriendly.", Opinion Mining will locate targets (aspects) in the text, and their associated assessments (opinions) and sentiments. Sentiment Analysis might only report a negative sentiment.
--
-If you're using the REST API, to get Opinion Mining in your results, you must include the `opinionMining=true` flag in a request for sentiment analysis. The Opinion Mining results will be included in the sentiment analysis response. Opinion mining is an extension of Sentiment Analysis and is included in your current [pricing tier](https://azure.microsoft.com/pricing/details/cognitive-services/text-analytics/).
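-
-As a minimal sketch (resource name and key are placeholders), a REST request with the flag set might look like the following, reusing the hotel review from the example above:
-
-```bash
-# Placeholder resource name and key; opinionMining=true adds targets and assessments to the response.
-curl -X POST "https://<your-resource>.cognitiveservices.azure.com/text/analytics/v3.1/sentiment?opinionMining=true" \
-  -H "Ocp-Apim-Subscription-Key: <your-language-resource-key>" \
-  -H "Content-Type: application/json" \
-  -d '{
-        "documents": [
-          { "id": "1", "language": "en", "text": "The room was great, but the staff was unfriendly." }
-        ]
-      }'
-```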
-
-## Development options
--
-## Determine how to process the data (optional)
-
-### Specify the sentiment analysis model
-
-By default, sentiment analysis will use the latest available AI model on your text. You can also configure your API requests to use a specific [model version](../../concepts/model-lifecycle.md).
-
-<!--### Using a preview model version
-
-To use the a preview model version in your API calls, you must specify the model version using the model version parameter. For example, if you were sending a request using Python:
-
-```python
-result = text_analytics_client.analyze_sentiment(documents, show_opinion_mining=True, model_version="2021-10-01-preview")
-```
-
-or if you were using the REST API:
-
-```rest
-https://your-resource-name.cognitiveservices.azure.com/text/analytics/v3.1/sentiment?opinionMining=true&model-version=2021-10-01-preview
-```
-
-See the reference documentation for more information.
-* [REST API](https://westus2.dev.cognitive.microsoft.com/docs/services/TextAnalytics-v3-1/operations/Sentiment)
-
-* [.NET](/dotnet/api/azure.ai.textanalytics.analyzesentimentaction#properties)
-* [Python](/python/api/azure-ai-textanalytics/azure.ai.textanalytics.textanalyticsclient#analyze-sentiment-documents-kwargs-)
-* [Java](/java/api/com.azure.ai.textanalytics.models.analyzesentimentoptions.setmodelversion#com_azure_ai_textanalytics_models_AnalyzeSentimentOptions_setModelVersion_java_lang_String_)
-* [JavaScript](/javascript/api/@azure/ai-text-analytics/analyzesentimentoptions)
>-
-### Input languages
-
-When you submit documents to be processed by sentiment analysis, you can specify which of [the supported languages](../language-support.md) they're written in. If you don't specify a language, sentiment analysis will default to English. The API may return offsets in the response to support different [multilingual and emoji encodings](../../concepts/multilingual-emoji-support.md).
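-
-Building on the request sketch above, the language is set per document, so a single batch can mix languages. A minimal example body (with hypothetical review text for the Spanish document):
-
-```bash
-# Each document carries its own language code; documents without one default to English.
-# Pass this as the -d payload of a sentiment request like the one shown earlier.
-BODY='{
-  "documents": [
-    { "id": "1", "language": "en", "text": "The room was great, but the staff was unfriendly." },
-    { "id": "2", "language": "es", "text": "La habitación era estupenda, pero el personal fue poco amable." }
-  ]
-}'
-```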
-
-## Submitting data
-
-Sentiment analysis and opinion mining produce higher-quality results when you give them smaller amounts of text to work on. This is the opposite of some features, like key phrase extraction, which performs better on larger blocks of text.
-
-To send an API request, you'll need your Language resource endpoint and key.
-
-> [!NOTE]
-> You can find the key and endpoint for your Language resource on the Azure portal. They will be located on the resource's **Key and endpoint** page, under **resource management**.
-
-Analysis is performed upon receipt of the request. Using the sentiment analysis and opinion mining features synchronously is stateless. No data is stored in your account, and results are returned immediately in the response.
--
-## Getting sentiment analysis and opinion mining results
-
-When you receive results from the API, the order of the returned results is determined internally by the model. You can stream the results to an application, or save the output to a file on the local system.
-
-Sentiment analysis returns a sentiment label and confidence score for the entire document, and each sentence within it. Scores closer to 1 indicate a higher confidence in the label's classification, while lower scores indicate lower confidence. A document can have multiple sentences, and the confidence scores within each document or sentence add up to 1.
-
-Opinion Mining will locate targets (nouns or verbs) in the text, and their associated assessment (adjective). For example, the sentence "*The restaurant had great food and our server was friendly*" has two targets: *food* and *server*. Each target has an assessment. For example, the assessment for *food* would be *great*, and the assessment for *server* would be *friendly*.
-
-The API returns opinions as a target (noun or verb) and an assessment (adjective).
-
-## Service and data limits
--
-## See also
-
-* [Sentiment analysis and opinion mining overview](../overview.md)
cognitive-services Use Containers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/sentiment-opinion-mining/how-to/use-containers.md
- Title: Install and run Docker containers for Sentiment Analysis-
-description: Use the Docker containers for the Sentiment Analysis API to perform natural language processing such as sentiment analysis, on-premises.
------ Previously updated : 04/11/2023--
-keywords: on-premises, Docker, container, sentiment analysis, natural language processing
--
-# Install and run Sentiment Analysis containers
-
-Containers enable you to host the Sentiment Analysis API on your own infrastructure. If you have security or data governance requirements that can't be fulfilled by calling Sentiment Analysis remotely, then containers might be a good option.
-
-If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/cognitive-services/) before you begin.
-
-## Prerequisites
-
-You must meet the following prerequisites before using Sentiment Analysis containers. If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/cognitive-services/) before you begin.
-
-* [Docker](https://docs.docker.com/) installed on a host computer. Docker must be configured to allow the containers to connect with and send billing data to Azure.
- * On Windows, Docker must also be configured to support Linux containers.
- * You should have a basic understanding of [Docker concepts](https://docs.docker.com/get-started/overview/).
-* A <a href="https://portal.azure.com/#create/Microsoft.CognitiveServicesTextAnalytics" title="Create a Language resource" target="_blank">Language resource </a> with the free (F0) or standard (S) [pricing tier](https://azure.microsoft.com/pricing/details/cognitive-services/text-analytics/).
--
-## Host computer requirements and recommendations
--
-The following table describes the minimum and recommended specifications for the available container. Each CPU core must be at least 2.6 gigahertz (GHz) or faster. The allowable Transactions Per Second (TPS) are also listed.
-
-| | Minimum host specs | Recommended host specs | Minimum TPS | Maximum TPS|
-|||-|--|--|
-| **Sentiment Analysis** | 1 core, 2GB memory | 4 cores, 8GB memory |15 | 30|
-
-CPU core and memory correspond to the `--cpus` and `--memory` settings, which are used as part of the `docker run` command.
-
-## Get the container image with `docker pull`
-
-The Sentiment Analysis container image can be found on the `mcr.microsoft.com` container registry syndicate. It resides within the `azure-cognitive-services/textanalytics/` repository and is named `sentiment`. The fully qualified container image name is `mcr.microsoft.com/azure-cognitive-services/textanalytics/sentiment`.
-
-To use the latest version of the container, you can use the `latest` tag. You can also find a full list of [tags on the MCR](https://mcr.microsoft.com/product/azure-cognitive-services/textanalytics/sentiment/tags).
-
-The v3 sentiment analysis container is available in several languages. To download the English container, use the command below.
-
-```
-docker pull mcr.microsoft.com/azure-cognitive-services/textanalytics/sentiment:3.0-en
-```
-
-To download the container for another language, replace `3.0-en` with one of the image tags below.
-
-| Sentiment Analysis Container | Image tag |
-|--|--|
-| Chinese-Simplified | `3.0-zh-hans` |
-| Chinese-Traditional | `3.0-zh-hant` |
-| Dutch | `3.0-nl` |
-| English | `3.0-en` |
-| French | `3.0-fr` |
-| German | `3.0-de` |
-| Hindi | `3.0-hi` |
-| Italian | `3.0-it` |
-| Japanese | `3.0-ja` |
-| Korean | `3.0-ko` |
-| Norwegian (Bokmål) | `3.0-no` |
-| Portuguese (Brazil) | `3.0-pt-BR` |
-| Portuguese (Portugal) | `3.0-pt-PT` |
-| Spanish | `3.0-es` |
-| Turkish | `3.0-tr` |
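-
-For example, to download the German container instead of the English one:
-
-```bash
-docker pull mcr.microsoft.com/azure-cognitive-services/textanalytics/sentiment:3.0-de
-```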
--
-## Run the container with `docker run`
-
-Once the container is on the host computer, use the [docker run](https://docs.docker.com/engine/reference/commandline/run/) command to run the containers. The container will continue to run until you stop it.
-
-> [!IMPORTANT]
-> * The docker commands in the following sections use the back slash, `\`, as a line continuation character. Replace or remove this based on your host operating system's requirements.
-> * The `Eula`, `Billing`, and `ApiKey` options must be specified to run the container; otherwise, the container won't start. For more information, see [Billing](#billing).
-
-To run the Sentiment Analysis container, execute the following `docker run` command. Replace the placeholders below with your own values:
-
-| Placeholder | Value | Format or example |
-|-|-||
-| **{API_KEY}** | The key for your Language resource. You can find it on your resource's **Key and endpoint** page, on the Azure portal. |`xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx`|
-| **{ENDPOINT_URI}** | The endpoint for accessing the API. You can find it on your resource's **Key and endpoint** page, on the Azure portal. | `https://<your-custom-subdomain>.cognitiveservices.azure.com` |
-| **{IMAGE_TAG}** | The image tag representing the language of the container you want to run. Make sure this matches the `docker pull` command you used. | `3.0-en` |
-
-```bash
-docker run --rm -it -p 5000:5000 --memory 8g --cpus 1 \
-mcr.microsoft.com/azure-cognitive-services/textanalytics/sentiment:{IMAGE_TAG} \
-Eula=accept \
-Billing={ENDPOINT_URI} \
-ApiKey={API_KEY}
-```
-
-This command:
-
-* Runs a *Sentiment Analysis* container from the container image
-* Allocates one CPU core and 8 gigabytes (GB) of memory
-* Exposes TCP port 5000 and allocates a pseudo-TTY for the container
-* Automatically removes the container after it exits. The container image is still available on the host computer.
--
-## Query the container's prediction endpoint
-
-The container provides REST-based query prediction endpoint APIs.
-
-Use the host, `http://localhost:5000`, for container APIs.
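-
-For example, a quick local smoke test might look like the sketch below. The `/text/analytics/v3.0/sentiment` route is an assumption based on the v3.0 image tag; check the container's own Swagger page on the same host for the exact routes it exposes.
-
-```bash
-# Assumed route for the v3.0 sentiment container; no Azure key is needed for local queries.
-curl -X POST "http://localhost:5000/text/analytics/v3.0/sentiment" \
-  -H "Content-Type: application/json" \
-  -d '{
-        "documents": [
-          { "id": "1", "language": "en", "text": "The room was great, but the staff was unfriendly." }
-        ]
-      }'
-```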
-
-<!-- ## Validate container is running -->
--
-## Run the container disconnected from the internet
--
-## Stop the container
--
-## Troubleshooting
-
-If you run the container with an output [mount](../../concepts/configure-containers.md#mount-settings) and logging enabled, the container generates log files that are helpful to troubleshoot issues that happen while starting or running the container.
--
-## Billing
-
-The Sentiment Analysis containers send billing information to Azure, using a _Language_ resource on your Azure account.
--
-For more information about these options, see [Configure containers](../../concepts/configure-containers.md).
-
-## Summary
-
-In this article, you learned concepts and workflow for downloading, installing, and running Sentiment Analysis containers. In summary:
-
-* Sentiment Analysis provides Linux containers for Docker
-* Container images are downloaded from the Microsoft Container Registry (MCR).
-* Container images run in Docker.
-* You must specify billing information when instantiating a container.
-
-> [!IMPORTANT]
-> Cognitive Services containers are not licensed to run without being connected to Azure for metering. Customers need to enable the containers to communicate billing information with the metering service at all times. Cognitive Services containers do not send customer data (e.g. text that is being analyzed) to Microsoft.
-
-## Next steps
-
-* See [Configure containers](../../concepts/configure-containers.md) for configuration settings.
cognitive-services Language Support https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/sentiment-opinion-mining/language-support.md
- Title: Sentiment Analysis and Opinion Mining language support-
-description: This article explains which languages are supported by the Sentiment Analysis and Opinion Mining features of Azure Cognitive Service for Language.
------ Previously updated : 10/31/2022----
-# Sentiment Analysis and Opinion Mining language support
-
-Use this article to learn which languages are supported by Sentiment Analysis and Opinion Mining.
-
-> [!NOTE]
-> Languages are added as new [model versions](../concepts/model-lifecycle.md) are released.
-
-## Sentiment Analysis language support
-
-Total supported language codes: 94
-
-| Language | Language code | Starting with model version | Notes |
-|-|-|-|-|
-| Afrikaans | `af` | 2022-10-01 | |
-| Albanian | `sq` | 2022-10-01 | |
-| Amharic | `am` | 2022-10-01 | |
-| Arabic | `ar` | 2022-06-01 | |
-| Armenian | `hy` | 2022-10-01 | |
-| Assamese | `as` | 2022-10-01 | |
-| Azerbaijani | `az` | 2022-10-01 | |
-| Basque | `eu` | 2022-10-01 | |
-| Belarusian (new) | `be` | 2022-10-01 | |
-| Bengali | `bn` | 2022-10-01 | |
-| Bosnian | `bs` | 2022-10-01 | |
-| Breton (new) | `br` | 2022-10-01 | |
-| Bulgarian | `bg` | 2022-10-01 | |
-| Burmese | `my` | 2022-10-01 | |
-| Catalan | `ca` | 2022-10-01 | |
-| Chinese (Simplified) | `zh-hans` | 2019-10-01 | `zh` also accepted |
-| Chinese (Traditional) | `zh-hant` | 2019-10-01 | |
-| Croatian | `hr` | 2022-10-01 | |
-| Czech | `cs` | 2022-10-01 | |
-| Danish | `da` | 2022-06-01 | |
-| Dutch | `nl` | 2019-10-01 | |
-| English | `en` | 2019-10-01 | |
-| Esperanto (new) | `eo` | 2022-10-01 | |
-| Estonian | `et` | 2022-10-01 | |
-| Filipino | `fil` | 2022-10-01 | |
-| Finnish | `fi` | 2022-06-01 | |
-| French | `fr` | 2019-10-01 | |
-| Galician | `gl` | 2022-10-01 | |
-| Georgian | `ka` | 2022-10-01 | |
-| German | `de` | 2019-10-01 | |
-| Greek | `el` | 2022-06-01 | |
-| Gujarati | `gu` | 2022-10-01 | |
-| Hausa (new) | `ha` | 2022-10-01 | |
-| Hebrew | `he` | 2022-10-01 | |
-| Hindi | `hi` | 2020-04-01 | |
-| Hungarian | `hu` | 2022-10-01 | |
-| Indonesian | `id` | 2022-10-01 | |
-| Irish | `ga` | 2022-10-01 | |
-| Italian | `it` | 2019-10-01 | |
-| Japanese | `ja` | 2019-10-01 | |
-| Javanese (new) | `jv` | 2022-10-01 | |
-| Kannada | `kn` | 2022-10-01 | |
-| Kazakh | `kk` | 2022-10-01 | |
-| Khmer | `km` | 2022-10-01 | |
-| Korean | `ko` | 2019-10-01 | |
-| Kurdish (Kurmanji) | `ku` | 2022-10-01 | |
-| Kyrgyz | `ky` | 2022-10-01 | |
-| Lao | `lo` | 2022-10-01 | |
-| Latin (new) | `la` | 2022-10-01 | |
-| Latvian | `lv` | 2022-10-01 | |
-| Lithuanian | `lt` | 2022-10-01 | |
-| Macedonian | `mk` | 2022-10-01 | |
-| Malagasy | `mg` | 2022-10-01 | |
-| Malay | `ms` | 2022-10-01 | |
-| Malayalam | `ml` | 2022-10-01 | |
-| Marathi | `mr` | 2022-10-01 | |
-| Mongolian | `mn` | 2022-10-01 | |
-| Nepali | `ne` | 2022-10-01 | |
-| Norwegian | `no` | 2019-10-01 | |
-| Odia | `or` | 2022-10-01 | |
-| Oromo (new) | `om` | 2022-10-01 | |
-| Pashto | `ps` | 2022-10-01 | |
-| Persian | `fa` | 2022-10-01 | |
-| Polish | `pl` | 2022-06-01 | |
-| Portuguese (Portugal) | `pt-PT` | 2019-10-01 | `pt` also accepted |
-| Portuguese (Brazil) | `pt-BR` | 2019-10-01 | |
-| Punjabi | `pa` | 2022-10-01 | |
-| Romanian | `ro` | 2022-10-01 | |
-| Russian | `ru` | 2022-06-01 | |
-| Sanskrit (new) | `sa` | 2022-10-01 | |
-| Scottish Gaelic (new) | `gd` | 2022-10-01 | |
-| Serbian | `sr` | 2022-10-01 | |
-| Sindhi (new) | `sd` | 2022-10-01 | |
-| Sinhala (new) | `si` | 2022-10-01 | |
-| Slovak | `sk` | 2022-10-01 | |
-| Slovenian | `sl` | 2022-10-01 | |
-| Somali | `so` | 2022-10-01 | |
-| Spanish | `es` | 2019-10-01 | |
-| Sundanese (new) | `su` | 2022-10-01 | |
-| Swahili | `sw` | 2022-10-01 | |
-| Swedish | `sv` | 2022-06-01 | |
-| Tamil | `ta` | 2022-10-01 | |
-| Telugu | `te` | 2022-10-01 | |
-| Thai | `th` | 2022-10-01 | |
-| Turkish | `tr` | 2022-10-01 | |
-| Ukrainian | `uk` | 2022-10-01 | |
-| Urdu | `ur` | 2022-10-01 | |
-| Uyghur | `ug` | 2022-10-01 | |
-| Uzbek | `uz` | 2022-10-01 | |
-| Vietnamese | `vi` | 2022-10-01 | |
-| Welsh | `cy` | 2022-10-01 | |
-| Western Frisian (new) | `fy` | 2022-10-01 | |
-| Xhosa (new) | `xh` | 2022-10-01 | |
-| Yiddish (new) | `yi` | 2022-10-01 | |
-
-### Opinion Mining language support
-
-Total supported language codes: 94
-
-| Language | Language code | Starting with model version | Notes |
-|-|-|-|-|
-| Afrikaans (new) | `af` | 2022-11-01 | |
-| Albanian (new) | `sq` | 2022-11-01 | |
-| Amharic (new) | `am` | 2022-11-01 | |
-| Arabic | `ar` | 2022-11-01 | |
-| Armenian (new) | `hy` | 2022-11-01 | |
-| Assamese (new) | `as` | 2022-11-01 | |
-| Azerbaijani (new) | `az` | 2022-11-01 | |
-| Basque (new) | `eu` | 2022-11-01 | |
-| Belarusian (new) | `be` | 2022-11-01 | |
-| Bengali | `bn` | 2022-11-01 | |
-| Bosnian (new) | `bs` | 2022-11-01 | |
-| Breton (new) | `br` | 2022-11-01 | |
-| Bulgarian (new) | `bg` | 2022-11-01 | |
-| Burmese (new) | `my` | 2022-11-01 | |
-| Catalan (new) | `ca` | 2022-11-01 | |
-| Chinese (Simplified) | `zh-hans` | 2022-11-01 | `zh` also accepted |
-| Chinese (Traditional) (new) | `zh-hant` | 2022-11-01 | |
-| Croatian (new) | `hr` | 2022-11-01 | |
-| Czech (new) | `cs` | 2022-11-01 | |
-| Danish | `da` | 2022-11-01 | |
-| Dutch | `nl` | 2022-11-01 | |
-| English | `en` | 2020-04-01 | |
-| Esperanto (new) | `eo` | 2022-11-01 | |
-| Estonian (new) | `et` | 2022-11-01 | |
-| Filipino (new) | `fil` | 2022-11-01 | |
-| Finnish | `fi` | 2022-11-01 | |
-| French | `fr` | 2021-10-01 | |
-| Galician (new) | `gl` | 2022-11-01 | |
-| Georgian (new) | `ka` | 2022-11-01 | |
-| German | `de` | 2021-10-01 | |
-| Greek | `el` | 2022-11-01 | |
-| Gujarati (new) | `gu` | 2022-11-01 | |
-| Hausa (new) | `ha` | 2022-11-01 | |
-| Hebrew (new) | `he` | 2022-11-01 | |
-| Hindi | `hi` | 2022-11-01 | |
-| Hungarian | `hu` | 2022-11-01 | |
-| Indonesian | `id` | 2022-11-01 | |
-| Irish (new) | `ga` | 2022-11-01 | |
-| Italian | `it` | 2021-10-01 | |
-| Japanese | `ja` | 2022-11-01 | |
-| Javanese (new) | `jv` | 2022-11-01 | |
-| Kannada (new) | `kn` | 2022-11-01 | |
-| Kazakh (new) | `kk` | 2022-11-01 | |
-| Khmer (new) | `km` | 2022-11-01 | |
-| Korean | `ko` | 2022-11-01 | |
-| Kurdish (Kurmanji) | `ku` | 2022-11-01 | |
-| Kyrgyz (new) | `ky` | 2022-11-01 | |
-| Lao (new) | `lo` | 2022-11-01 | |
-| Latin (new) | `la` | 2022-11-01 | |
-| Latvian (new) | `lv` | 2022-11-01 | |
-| Lithuanian (new) | `lt` | 2022-11-01 | |
-| Macedonian (new) | `mk` | 2022-11-01 | |
-| Malagasy (new) | `mg` | 2022-11-01 | |
-| Malay (new) | `ms` | 2022-11-01 | |
-| Malayalam (new) | `ml` | 2022-11-01 | |
-| Marathi | `mr` | 2022-11-01 | |
-| Mongolian (new) | `mn` | 2022-11-01 | |
-| Nepali (new) | `ne` | 2022-11-01 | |
-| Norwegian | `no` | 2022-11-01 | |
-| Oriya (new) | `or` | 2022-11-01 | |
-| Oromo (new) | `om` | 2022-11-01 | |
-| Pashto (new) | `ps` | 2022-11-01 | |
-| Persian (Farsi) (new) | `fa` | 2022-11-01 | |
-| Polish | `pl` | 2022-11-01 | |
-| Portuguese (Portugal) | `pt-PT` | 2021-10-01 | `pt` also accepted |
-| Portuguese (Brazil) | `pt-BR` | 2021-10-01 | |
-| Punjabi (new) | `pa` | 2022-11-01 | |
-| Romanian (new) | `ro` | 2022-11-01 | |
-| Russian | `ru` | 2022-11-01 | |
-| Sanskrit (new) | `sa` | 2022-11-01 | |
-| Scottish Gaelic (new) | `gd` | 2022-11-01 | |
-| Serbian (new) | `sr` | 2022-11-01 | |
-| Sindhi (new) | `sd` | 2022-11-01 | |
-| Sinhala (new) | `si` | 2022-11-01 | |
-| Slovak (new) | `sk` | 2022-11-01 | |
-| Slovenian (new) | `sl` | 2022-11-01 | |
-| Somali (new) | `so` | 2022-11-01 | |
-| Spanish | `es` | 2021-10-01 | |
-| Sundanese (new) | `su` | 2022-11-01 | |
-| Swahili (new) | `sw` | 2022-11-01 | |
-| Swedish | `sv` | 2022-11-01 | |
-| Tamil | `ta` | 2022-11-01 | |
-| Telugu | `te` | 2022-11-01 | |
-| Thai (new) | `th` | 2022-11-01 | |
-| Turkish | `tr` | 2022-11-01 | |
-| Ukrainian (new) | `uk` | 2022-11-01 | |
-| Urdu (new) | `ur` | 2022-11-01 | |
-| Uyghur (new) | `ug` | 2022-11-01 | |
-| Uzbek (new) | `uz` | 2022-11-01 | |
-| Vietnamese (new) | `vi` | 2022-11-01 | |
-| Welsh (new) | `cy` | 2022-11-01 | |
-| Western Frisian (new) | `fy` | 2022-11-01 | |
-| Xhosa (new) | `xh` | 2022-11-01 | |
-| Yiddish (new) | `yi` | 2022-11-01 | |
--
-## Next steps
-
-* See [how to call the API](how-to/call-api.md#specify-the-sentiment-analysis-model) for more information.
-* [Quickstart: Use the Sentiment Analysis client library and REST API](quickstart.md)
cognitive-services Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/sentiment-opinion-mining/overview.md
- Title: What is sentiment analysis and opinion mining in Azure Cognitive Service for Language?-
-description: An overview of the sentiment analysis feature in Azure Cognitive Services, which helps you find out what people think of a topic by mining text for clues.
------ Previously updated : 01/12/2023----
-# What is sentiment analysis and opinion mining in Azure Cognitive Service for Language?
-
-Sentiment analysis and opinion mining are features offered by [Azure Cognitive Service for Language](../overview.md), a collection of machine learning and AI algorithms in the cloud for developing intelligent applications that involve written language. These features help you find out what people think of your brand or topic by mining text for clues about positive or negative sentiment, and can associate them with specific aspects of the text.
-
-Both sentiment analysis and opinion mining work with a variety of [written languages](./language-support.md).
-
-## Sentiment analysis
-
-The sentiment analysis feature provides sentiment labels (such as "negative", "neutral" and "positive") based on the highest confidence score found by the service at the sentence and document level. This feature also returns confidence scores between 0 and 1 for each document, and for the sentences within it, for positive, neutral and negative sentiment.
-
-## Opinion mining
-
-Opinion mining is a feature of sentiment analysis. Also known as aspect-based sentiment analysis in Natural Language Processing (NLP), this feature provides more granular information about the opinions related to words (such as the attributes of products or services) in text.
--
-## Get started with sentiment analysis
---
-## Responsible AI
-
-An AI system includes not only the technology, but also the people who use it, the people who will be affected by it, and the environment in which it's deployed. Read the [transparency note for sentiment analysis](/legal/cognitive-services/language-service/transparency-note-sentiment-analysis?context=/azure/cognitive-services/language-service/context/context) to learn about responsible AI use and deployment in your systems. You can also see the following articles for more information:
--
-## Next steps
-
-There are two ways to get started using the sentiment analysis feature:
-* [Language Studio](../language-studio.md), which is a web-based platform that enables you to try several Language service features without needing to write code.
-* The [quickstart article](quickstart.md) for instructions on making requests to the service using the REST API and client library SDK.
cognitive-services Quickstart https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/sentiment-opinion-mining/quickstart.md
- Title: "Quickstart: Use the Sentiment Analysis client library and REST API"-
-description: Use this quickstart to start using the Sentiment Analysis API.
------ Previously updated : 02/17/2023--
-keywords: text mining, key phrase
-zone_pivot_groups: programming-languages-text-analytics
--
-# Quickstart: Sentiment analysis and opinion mining
---------------
cognitive-services Data Formats https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/summarization/custom/how-to/data-formats.md
- Title: Prepare data for custom summarization-
-description: Learn about how to select and prepare data, to be successful in creating custom summarization projects.
------ Previously updated : 06/01/2022----
-# Format data for custom Summarization
-
-This page contains information about how to select and prepare data in order to be successful in creating custom summarization projects.
-
-> [!NOTE]
-> Throughout this document, we refer to a summary of a document as a "label".
-
-## Custom summarization document sample format
-
-In the abstractive document summarization scenario, each document (whether it has a provided label or not) is expected to be provided in a plain .txt file. The file contains one or more lines. If multiple lines are provided, each is assumed to be a paragraph of the document. The following is an example document with three paragraphs.
-
-*At Microsoft, we have been on a quest to advance AI beyond existing techniques, by taking a more holistic, human-centric approach to learning and understanding. As Chief Technology Officer of Azure AI Cognitive Services, I have been working with a team of amazing scientists and engineers to turn this quest into a reality.*
-
-*In my role, I enjoy a unique perspective in viewing the relationship among three attributes of human cognition: monolingual text (X), audio or visual sensory signals (Y), and multilingual (Z). At the intersection of all three, there's magic: what we call XYZ-code, as illustrated in Figure 1, a joint representation to create more powerful AI that can speak, hear, see, and understand humans better. We believe XYZ-code will enable us to fulfill our long-term vision: cross-domain transfer learning, spanning modalities and languages.*
-
-*The goal is to have pre-trained models that can jointly learn representations to support a broad range of downstream AI tasks, much in the way humans do today. Over the past five years, we have achieved human performance on benchmarks in conversational speech recognition, machine translation, conversational question answering, machine reading comprehension, and image captioning. These five breakthroughs provided us with strong signals toward our more ambitious aspiration to produce a leap in AI capabilities, achieving multi-sensory and multilingual learning that is closer in line with how humans learn and understand. I believe the joint XYZ-code is a foundational component of this aspiration, if grounded with external knowledge sources in the downstream AI tasks.*
-
-## Custom summarization conversation sample format
-
-In the abstractive conversation summarization scenario, each conversation (whether it has a provided label or not) is expected to be provided in a plain .txt file. Each conversation turn must be provided on a single line formatted as Speaker + ": " + text (that is, the speaker and text are separated by a colon followed by a space). The following is an example conversation of three turns between two speakers (Agent and Customer).
-
-Agent: Hello, how can I help you?
-
-Customer: How do I upgrade office? I have been getting error messages all day.
-
-Agent: Please press the upgrade button, then sign in and follow the instructions.
--
-## Custom summarization document and sample mapping JSON format
-
-In both document and conversation summarization scenarios, a set of documents and corresponding labels can be provided in a single JSON file that references individual document/conversation and summary files.
-
-<!-- The JSON file is expected to contain the following fields:
-
-```json
-projectFileVersion": TODO,
-"stringIndexType": TODO,
-"metadata": {
- "projectKind": TODO,
- "storageInputContainerName": TODO,
- "projectName": a string project name,
- "multilingual": TODO,
- "description": a string project description,
- "language": TODO:
-},
-"assets": {
- "projectKind": TODO,
- "documents": a list of document-label pairs, each is defined with three fields:
- [
- {
- "summaryLocation": a string path to the summary txt file,
- "location": a string path to the document txt file,
- "language": TODO
- }
- ]
-}
-}
-``` -->
-
-The following is an example mapping file for the abstractive document summarization scenario with three documents and corresponding labels.
-
-```json
-{
- "projectFileVersion": "2022-10-01-preview",
- "stringIndexType": "Utf16CodeUnit",
- "metadata": {
- "projectKind": "CustomAbstractiveSummarization",
- "storageInputContainerName": "abstractivesummarization",
- "projectName": "sample_custom_summarization",
- "multilingual": false,
- "description": "Creating a custom summarization model",
- "language": "en-us"
-  },
- "assets": {
- "projectKind": "CustomAbstractiveSummarization",
- "documents": [
- {
- "summaryLocation": "doc1_summary.txt",
- "location": "doc1.txt",
- "language": "en-us"
- },
- {
- "summaryLocation": "doc2_summary.txt",
- "location": "doc2.txt",
- "language": "en-us"
- },
- {
- "summaryLocation": "doc3_summary.txt",
- "location": "doc3.txt",
- "language": "en-us"
- }
- ]
- }
-}
-```
-
-## Next steps
-
-[Get started with custom summarization](../../custom/quickstart.md)
cognitive-services Deploy Model https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/summarization/custom/how-to/deploy-model.md
- Title: Deploy a custom summarization model-
-description: Learn about deploying a model for Custom summarization.
------ Previously updated : 06/02/2023---
-# Deploy a custom summarization model
-
-Once you're satisfied with how your model performs, it's ready to be deployed and used to summarize text documents. Deploying a model makes it available for use through the [prediction API](https://aka.ms/ct-runtime-swagger).
-
-<!--## Prerequisites
-
-* A successfully [created project](create-project.md) with a configured Azure storage account.
-* Text data that has [been uploaded](design-schema.md#data-preparation) to your storage account.
-* [Labeled data](label-data.md) and a successfully [trained model](train-model.md).
-* Reviewed the [model evaluation details](view-model-evaluation.md) to determine how your model is performing.
-
-For more information, see [project development lifecycle](../overview.md#project-development-lifecycle).-->
-
-## Deploy model
-
-After you've reviewed your model's performance and decided it can be used in your environment, you need to assign it to a deployment. Assigning the model to a deployment makes it available for use through the [prediction API](https://aka.ms/ct-runtime-swagger). We recommend creating a deployment named *production*, assigning it the best model you've built so far, and using that deployment in your system. You can create another deployment called *staging*, assign the model you're currently working on to it, and use it for testing. You can have a maximum of 10 deployments in your project.
-
-
-## Swap deployments
-
-After you're done testing a model assigned to one deployment and you want to assign that model to another deployment, you can swap the two deployments. Swapping deployments assigns the model from the first deployment to the second deployment, and the model from the second deployment to the first. You can use this process to swap your *production* and *staging* deployments when you want to take the model assigned to *staging* and assign it to *production*.
--
-## Delete deployment
--
-## Assign deployment resources
-
-You can [deploy your project to multiple regions](../../../concepts/custom-features/multi-region-deployment.md) by assigning different Language resources that exist in different regions.
--
-## Unassign deployment resources
-
-When unassigning or removing a deployment resource from a project, you will also delete all the deployments that have been deployed to that resource's region.
--
-## Next steps
-
-Check out the [summarization feature overview](../../overview.md).
cognitive-services Test Evaluate https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/summarization/custom/how-to/test-evaluate.md
- Title: Test and evaluate models in custom summarization-
-description: Learn about how to test and evaluate custom summarization models.
------ Previously updated : 06/01/2022---
-# Test and evaluate your custom summarization models
-
-As you create your custom summarization model, you want to make sure that you end up with a quality model. You need to test and evaluate your custom summarization model to ensure it performs well.
-
-## Guidance on split test and training sets
-
-An important stage of creating a customized summarization model is validating that the created model is satisfactory in terms of quality and generates summaries as expected. That validation process has to be performed with a separate set of examples (called test examples) from the examples used for training. There are three important guidelines we recommend following when splitting the available data into training and testing:
-
-- **Size**: To establish enough confidence in the model's quality, the test set should be of a reasonable size. Testing the model on just a handful of examples can give a misleading evaluation outcome. We recommend evaluating on hundreds of examples. When a large number of documents/conversations is available, we recommend reserving at least 10% of them for testing.
-- **No Overlap**: It's crucial to make sure that the same document isn't used for training and testing at the same time. Testing should be performed on documents that were never used for training at any stage; otherwise, the quality of the model will be highly overestimated.
-- **Diversity**: The test set should cover as many input characteristics as possible. For example, it's always better to include documents of different lengths, topics, and styles when applicable. Similarly, for conversation summarization, it's a good idea to include conversations with different numbers of turns and speakers.
-
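-
-To make the size and no-overlap guidelines concrete, the following is a minimal Python sketch (not part of the service; the document names are hypothetical) that reserves roughly 10% of a labeled set for testing with no overlap between the two sets.
-
-```python
-# A minimal sketch (not part of the service): deterministically split labeled
-# examples into training and test sets with no overlap, reserving ~10% for test.
-import random
-
-def split_train_test(example_ids, test_fraction=0.1, seed=42):
-    """Shuffle example IDs and reserve a fraction of them for testing."""
-    ids = list(example_ids)
-    random.Random(seed).shuffle(ids)  # a fixed seed keeps the split reproducible
-    test_size = max(1, int(len(ids) * test_fraction))
-    test_ids = set(ids[:test_size])
-    train_ids = [i for i in ids if i not in test_ids]  # no example appears in both sets
-    return train_ids, sorted(test_ids)
-
-# Example with hypothetical document names:
-# train, test = split_train_test([f"doc{i}.txt" for i in range(1, 101)])
-# print(len(train), len(test))  # 90 10
-```
-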
-## Guidance to evaluate a custom summarization model
-
-When evaluating a custom model, we recommend using both automatic and manual evaluation together. Automatic evaluation helps quickly judge the quality of summaries produced for the entire test set, hence covering a wide range of input variations. However, automatic evaluation gives an approximation of the quality and isn't enough by itself to establish confidence in the model quality. So, we also recommend inspecting the summaries produced for as many test documents as possible.
-
-### Automatic evaluation
-
-Currently, we use a metric called ROUGE (Recall-Oriented Understudy for Gisting Evaluation). This technique includes measures for automatically determining the quality of a summary by comparing it to ideal summaries created by humans. The measures count the number of overlapping units, like n-grams, word sequences, and word pairs, between the computer-generated summary being evaluated and the ideal summaries. To learn more about ROUGE, see the [ROUGE Wikipedia entry](https://en.wikipedia.org/wiki/ROUGE_(metric)) and the [paper on the ROUGE package](https://aclanthology.org/W04-1013.pdf).
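-
-For intuition only, the following minimal Python sketch computes a simplified ROUGE-1 recall (the fraction of reference-summary unigrams that also appear in the generated summary). It isn't the service's scoring code; use a full ROUGE package for real evaluation.
-
-```python
-# An illustrative sketch only: simplified ROUGE-1 recall based on unigram overlap.
-from collections import Counter
-
-def rouge1_recall(candidate: str, reference: str) -> float:
-    cand = Counter(candidate.lower().split())
-    ref = Counter(reference.lower().split())
-    overlap = sum(min(count, cand[token]) for token, count in ref.items())
-    return overlap / max(1, sum(ref.values()))
-
-# Example:
-# print(rouge1_recall("the cat sat on the mat", "the cat was on the mat"))  # ~0.83
-```
-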
-
-### Manual evaluation
-
-When you manually inspect the quality of a summary, there are general qualities of a summary that we recommend checking for, besides any desired expectations that the custom model was trained to adhere to, such as style, format, or length. The general qualities we recommend checking are:
-
-- **Fluency**: The summary should have no formatting problems, capitalization errors, or ungrammatical sentences.
-- **Coherence**: The summary should be well-structured and well-organized. The summary shouldn't just be a heap of related information, but should build from sentence to sentence into a coherent body of information about a topic.
-- **Coverage**: The summary should cover all important information in the document/conversation.
-- **Relevance**: The summary should include only important information from the source document/conversation without redundancies.
-- **Hallucinations**: The summary shouldn't contain incorrect information that isn't supported by the source document/conversation.
-
-To learn more about summarization evaluation, see the [MIT Press article on SummEval](https://direct.mit.edu/tacl/article/doi/10.1162/tacl_a_00373/100686/SummEval-Re-evaluating-Summarization-Evaluation).
cognitive-services Quickstart https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/summarization/custom/quickstart.md
- Title: Quickstart - Custom summarization (preview)-
-description: Quickly start building an AI model to summarize text.
------ Previously updated : 05/26/2023---
-<!-- zone_pivot_groups: usage-custom-language-features -->
-# Quickstart: custom summarization (preview)
-
-Use this article to get started with creating a custom summarization project where you can train custom models on top of the summarization feature. A model is artificial intelligence software that's trained to do a certain task. For this system, the models summarize text and are trained by learning from imported data.
-
-In this article, we use Language Studio to demonstrate key concepts of custom summarization. As an example, we'll build a custom summarization model to extract the Facility or treatment location from short discharge notes.
-
-<! >::: zone pivot="language-studio" >
--
-<!::: zone-end
----
-## Next steps
-
-* [Summarization overview](../overview.md)
cognitive-services Conversation Summarization https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/summarization/how-to/conversation-summarization.md
- Title: Summarize text with the conversation summarization API-
-description: This article will show you how to summarize chat logs with the conversation summarization API.
------ Previously updated : 01/31/2023----
-# How to use conversation summarization
--
-## Conversation summarization types
-
-- Chapter title and narrative (general conversation) are designed to summarize a conversation into chapter titles, and a summarization of the conversation's contents. This summarization type works on conversations with any number of parties.
-
-- Issue and resolution (call center focused) is designed to summarize text chat logs between customers and customer-service agents. This feature is capable of providing both issues and resolutions present in these logs, which occur between two parties.
-
-The AI models used by the API are provided by the service; you just have to send content for analysis.
-
-## Features
-
-The conversation summarization API uses natural language processing techniques to summarize conversations into shorter summaries per request. Conversation summarization can summarize for issues and resolutions discussed in a two-party conversation or summarize a long conversation into chapters and a short narrative for each chapter.
-
-There's another feature in Azure Cognitive Service for Language named [document summarization](../overview.md?tabs=document-summarization) that is more suitable to summarize documents into concise summaries. When you're deciding between document summarization and conversation summarization, consider the following points:
-* Input genre: Conversation summarization operates on both chat text and speech transcripts, which have speakers and their utterances. Document summarization operates on text.
-* Purpose of summarization: For example, conversation issue and resolution summarization returns a reason and the resolution for a chat between a customer and a customer service agent.
-
-## Submitting data
-
-> [!NOTE]
-> See the [Language Studio](../../language-studio.md#valid-text-formats-for-conversation-features) article for information on formatting conversational text to submit using Language Studio.
-
-You submit documents to the API as strings of text. Analysis is performed upon receipt of the request. Because the API is [asynchronous](../../concepts/use-asynchronously.md), there may be a delay between sending an API request and receiving the results. For information on the size and number of requests you can send per minute and second, see the data limits below.
-
-When you use this feature, the API results are available for 24 hours from the time the request was ingested, as indicated in the response. After this time period, the results are purged and are no longer available for retrieval.
-
-When you submit data to conversation summarization, we recommend sending one chat log per request, for better latency.
-
-### Get summaries from text chats
-
-You can use conversation issue and resolution summarization to get summaries as you need. To see an example using text chats, see the [quickstart article](../quickstart.md).
-
-### Get summaries from speech transcriptions
-
-Conversation issue and resolution summarization also enables you to get summaries from speech transcripts by using the [Speech service's speech to text feature](../../../Speech-Service/call-center-overview.md). The following example shows a short conversation that you might include in your API requests.
-
-```json
-"conversations":[
- {
- "id":"abcdefgh-1234-1234-1234-1234abcdefgh",
- "language":"en",
- "modality":"transcript",
- "conversationItems":[
- {
- "modality":"transcript",
- "participantId":"speaker",
- "id":"12345678-abcd-efgh-1234-abcd123456",
- "content":{
- "text":"Hi.",
- "lexical":"hi",
- "itn":"hi",
- "maskedItn":"hi",
- "audioTimings":[
- {
- "word":"hi",
- "offset":4500000,
- "duration":2800000
- }
- ]
- }
- }
- ]
- }
-]
-```
-
-### Get chapter titles
-
-Conversation chapter title summarization lets you get chapter titles from input conversations. A guided example scenario is provided below:
-
-1. Copy the command below into a text editor. The BASH example uses the `\` line continuation character. If your console or terminal uses a different line continuation character, use that character.
-
-```bash
-curl -i -X POST https://<your-language-resource-endpoint>/language/analyze-conversations/jobs?api-version=2022-10-01-preview \
--H "Content-Type: application/json" \--H "Ocp-Apim-Subscription-Key: <your-language-resource-key>" \--d \
-'
-{
- "displayName": "Conversation Task Example",
- "analysisInput": {
- "conversations": [
- {
- "conversationItems": [
- {
-            "text": "Hello, you're chatting with Rene. How may I help you?",
- "id": "1",
- "role": "Agent",
- "participantId": "Agent_1"
- },
- {
-            "text": "Hi, I tried to set up wifi connection for Smart Brew 300 espresso machine, but it didn't work.",
- "id": "2",
- "role": "Customer",
- "participantId": "Customer_1"
- },
- {
-            "text": "I'm sorry to hear that. Let's see what we can do to fix this issue. Could you please try the following steps for me? First, could you push the wifi connection button, hold for 3 seconds, then let me know if the power light is slowly blinking on and off every second?",
- "id": "3",
- "role": "Agent",
- "participantId": "Agent_1"
- },
- {
- "text": "Yes, I pushed the wifi connection button, and now the power light is slowly blinking.",
- "id": "4",
- "role": "Customer",
- "participantId": "Customer_1"
- },
- {
- "text": "Great. Thank you! Now, please check in your Contoso Coffee app. Does it prompt to ask you to connect with the machine? ",
- "id": "5",
- "role": "Agent",
- "participantId": "Agent_1"
- },
- {
- "text": "No. Nothing happened.",
- "id": "6",
- "role": "Customer",
- "participantId": "Customer_1"
- },
- {
-            "text": "I'm very sorry to hear that. Let me see if there's another way to fix the issue. Please hold on for a minute.",
- "id": "7",
- "role": "Agent",
- "participantId": "Agent_1"
- }
- ],
- "modality": "text",
- "id": "conversation1",
- "language": "en"
- }
- ]
- },
- "tasks": [
- {
- "taskName": "Conversation Task 1",
- "kind": "ConversationalSummarizationTask",
- "parameters": {
- "summaryAspects": [
- "chapterTitle"
- ]
- }
- }
- ]
-}
-'
-```
-
-2. Make the following changes in the command where needed:
-- Replace the value `your-language-resource-key` with your key.
-- Replace the first part of the request URL `your-language-resource-endpoint` with your endpoint URL.
-
-3. Open a command prompt window (for example: BASH).
-
-4. Paste the command from the text editor into the command prompt window, then run the command.
-
-5. Get the `operation-location` from the response header. The value will look similar to the following URL:
-
-```http
-https://<your-language-resource-endpoint>/language/analyze-conversations/jobs/12345678-1234-1234-1234-12345678?api-version=2022-10-01-preview
-```
-
-6. To get the results of the request, use the following cURL command. Be sure to replace `<my-job-id>` with the GUID value you received from the previous `operation-location` response header:
-
-```curl
-curl -X GET https://<your-language-resource-endpoint>/language/analyze-conversations/jobs/<my-job-id>?api-version=2022-10-01-preview \
--H "Content-Type: application/json" \--H "Ocp-Apim-Subscription-Key: <your-language-resource-key>"
-```
-
-Example chapter title summarization JSON response:
-
-```json
-{
- "jobId": "d874a98c-bf31-4ac5-8b94-5c236f786754",
- "lastUpdatedDateTime": "2022-09-29T17:36:42Z",
- "createdDateTime": "2022-09-29T17:36:39Z",
- "expirationDateTime": "2022-09-30T17:36:39Z",
- "status": "succeeded",
- "errors": [],
- "displayName": "Conversation Task Example",
- "tasks": {
- "completed": 1,
- "failed": 0,
- "inProgress": 0,
- "total": 1,
- "items": [
- {
- "kind": "conversationalSummarizationResults",
- "taskName": "Conversation Task 1",
- "lastUpdateDateTime": "2022-09-29T17:36:42.895694Z",
- "status": "succeeded",
- "results": {
- "conversations": [
- {
- "summaries": [
- {
- "aspect": "chapterTitle",
- "text": "Smart Brew 300 Espresso Machine WiFi Connection",
- "contexts": [
- { "conversationItemId": "1", "offset": 0, "length": 53 },
- { "conversationItemId": "2", "offset": 0, "length": 94 },
- { "conversationItemId": "3", "offset": 0, "length": 266 },
- { "conversationItemId": "4", "offset": 0, "length": 85 },
- { "conversationItemId": "5", "offset": 0, "length": 119 },
- { "conversationItemId": "6", "offset": 0, "length": 21 },
- { "conversationItemId": "7", "offset": 0, "length": 109 }
- ]
- }
- ],
- "id": "conversation1",
- "warnings": []
- }
- ],
- "errors": [],
- "modelVersion": "latest"
- }
- }
- ]
- }
-}
-```
-For long conversations, the model might segment the conversation into multiple cohesive parts and summarize each segment. Each summary also has a lengthy `contexts` field that indicates which range of the input conversation the summary was generated from.
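-
-The submit-and-poll steps above can also be scripted. The following is a minimal Python sketch, assuming the third-party `requests` package; the endpoint, key, and request body are the same placeholders and structure used in the cURL example, so adjust them for your resource.
-
-```python
-# A minimal sketch of the submit-and-poll pattern shown above (assumes the
-# third-party `requests` package and the placeholders from the cURL example).
-import time
-import requests
-
-ENDPOINT = "https://<your-language-resource-endpoint>"
-KEY = "<your-language-resource-key>"
-
-def submit_and_poll(body: dict) -> dict:
-    headers = {"Ocp-Apim-Subscription-Key": KEY, "Content-Type": "application/json"}
-    url = f"{ENDPOINT}/language/analyze-conversations/jobs?api-version=2022-10-01-preview"
-    submit = requests.post(url, headers=headers, json=body)
-    submit.raise_for_status()
-    job_url = submit.headers["operation-location"]  # returned in the response header
-    while True:
-        result = requests.get(job_url, headers=headers).json()
-        if result["status"] in ("succeeded", "failed"):
-            return result
-        time.sleep(2)  # poll until the job finishes
-```
-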
-
- ### Get narrative summarization
-
-Conversation summarization also lets you get narrative summaries from input conversations. A guided example scenario is provided below:
-
-1. Copy the command below into a text editor. The BASH example uses the `\` line continuation character. If your console or terminal uses a different line continuation character, use that character.
-
-```bash
-curl -i -X POST https://<your-language-resource-endpoint>/language/analyze-conversations/jobs?api-version=2022-10-01-preview \
--H "Content-Type: application/json" \--H "Ocp-Apim-Subscription-Key: <your-language-resource-key>" \--d \
-'
-{
- "displayName": "Conversation Task Example",
- "analysisInput": {
- "conversations": [
- {
- "conversationItems": [
- {
-            "text": "Hello, you're chatting with Rene. How may I help you?",
- "id": "1",
- "role": "Agent",
- "participantId": "Agent_1"
- },
- {
-            "text": "Hi, I tried to set up wifi connection for Smart Brew 300 espresso machine, but it didn't work.",
- "id": "2",
- "role": "Customer",
- "participantId": "Customer_1"
- },
- {
-            "text": "I'm sorry to hear that. Let's see what we can do to fix this issue. Could you please try the following steps for me? First, could you push the wifi connection button, hold for 3 seconds, then let me know if the power light is slowly blinking on and off every second?",
- "id": "3",
- "role": "Agent",
- "participantId": "Agent_1"
- },
- {
- "text": "Yes, I pushed the wifi connection button, and now the power light is slowly blinking.",
- "id": "4",
- "role": "Customer",
- "participantId": "Customer_1"
- },
- {
- "text": "Great. Thank you! Now, please check in your Contoso Coffee app. Does it prompt to ask you to connect with the machine? ",
- "id": "5",
- "role": "Agent",
- "participantId": "Agent_1"
- },
- {
- "text": "No. Nothing happened.",
- "id": "6",
- "role": "Customer",
- "participantId": "Customer_1"
- },
- {
-            "text": "I'm very sorry to hear that. Let me see if there's another way to fix the issue. Please hold on for a minute.",
- "id": "7",
- "role": "Agent",
- "participantId": "Agent_1"
- }
- ],
- "modality": "text",
- "id": "conversation1",
- "language": "en"
- }
- ]
- },
- "tasks": [
- {
- "taskName": "Conversation Task 1",
- "kind": "ConversationalSummarizationTask",
- "parameters": {
- "summaryAspects": [
- "narrative"
- ]
- }
- }
- ]
-}
-'
-```
-
-2. Make the following changes in the command where needed:
-- Replace the value `your-language-resource-key` with your key.
-- Replace the first part of the request URL `your-language-resource-endpoint` with your endpoint URL.
-
-3. Open a command prompt window (for example: BASH).
-
-4. Paste the command from the text editor into the command prompt window, then run the command.
-
-5. Get the `operation-location` from the response header. The value will look similar to the following URL:
-
-```http
-https://<your-language-resource-endpoint>/language/analyze-conversations/jobs/12345678-1234-1234-1234-12345678?api-version=2022-10-01-preview
-```
-
-6. To get the results of a request, use the following cURL command. Be sure to replace `<my-job-id>` with the GUID value you received from the previous `operation-location` response header:
-
-```curl
-curl -X GET https://<your-language-resource-endpoint>/language/analyze-conversations/jobs/<my-job-id>?api-version=2022-10-01-preview \
--H "Content-Type: application/json" \--H "Ocp-Apim-Subscription-Key: <your-language-resource-key>"
-```
-
-Example narrative summarization JSON response:
-
-```json
-{
- "jobId": "d874a98c-bf31-4ac5-8b94-5c236f786754",
- "lastUpdatedDateTime": "2022-09-29T17:36:42Z",
- "createdDateTime": "2022-09-29T17:36:39Z",
- "expirationDateTime": "2022-09-30T17:36:39Z",
- "status": "succeeded",
- "errors": [],
- "displayName": "Conversation Task Example",
- "tasks": {
- "completed": 1,
- "failed": 0,
- "inProgress": 0,
- "total": 1,
- "items": [
- {
- "kind": "conversationalSummarizationResults",
- "taskName": "Conversation Task 1",
- "lastUpdateDateTime": "2022-09-29T17:36:42.895694Z",
- "status": "succeeded",
- "results": {
- "conversations": [
- {
- "summaries": [
- {
- "aspect": "narrative",
- "text": "Agent_1 helps customer to set up wifi connection for Smart Brew 300 espresso machine.",
- "contexts": [
- { "conversationItemId": "1", "offset": 0, "length": 53 },
- { "conversationItemId": "2", "offset": 0, "length": 94 },
- { "conversationItemId": "3", "offset": 0, "length": 266 },
- { "conversationItemId": "4", "offset": 0, "length": 85 },
- { "conversationItemId": "5", "offset": 0, "length": 119 },
- { "conversationItemId": "6", "offset": 0, "length": 21 },
- { "conversationItemId": "7", "offset": 0, "length": 109 }
- ]
- }
- ],
- "id": "conversation1",
- "warnings": []
- }
- ],
- "errors": [],
- "modelVersion": "latest"
- }
- }
- ]
- }
-}
-```
-
-For long conversations, the model might segment the conversation into multiple cohesive parts and summarize each segment. Each summary also has a lengthy `contexts` field that indicates which range of the input conversation the summary was generated from.
-
-## Getting conversation issue and resolution summarization results
-
-The following text is an example of content you might submit for conversation issue and resolution summarization. This is only an example; the API can accept much longer input text. See [data limits](../../concepts/data-limits.md) for more information.
-
-**Agent**: "*Hello, how can I help you*?"
-
-**Customer**: "*How can I upgrade my Contoso subscription? I've been trying the entire day.*"
-
-**Agent**: "*Press the upgrade button please. Then sign in and follow the instructions.*"
-
-Summarization is performed upon receipt of the request by creating a job for the API backend. If the job succeeded, the output of the API will be returned. The output will be available for retrieval for 24 hours. After this time, the output is purged. Due to multilingual and emoji support, the response may contain text offsets. See [how to process offsets](../../concepts/multilingual-emoji-support.md) for more information.
-
-In the above example, the API might return the following summarized sentences:
-
-|Summarized text | Aspect |
-||-|
-| "Customer wants to upgrade their subscription. Customer doesn't know how." | issue |
-| "Customer needs to press upgrade button, and sign in." | resolution |
-
-## See also
-
-* [Summarization overview](../overview.md)
cognitive-services Document Summarization https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/summarization/how-to/document-summarization.md
- Title: Summarize text with the extractive summarization API-
-description: This article will show you how to summarize text with the extractive summarization API.
------ Previously updated : 09/26/2022----
-# How to use document summarization
-
-Document summarization is designed to shorten content that users consider too long to read. Both extractive and abstractive summarization condense articles, papers, or documents to key sentences.
-
-**Extractive summarization**: Produces a summary by extracting sentences that collectively represent the most important or relevant information within the original content.
-
-**Abstractive summarization**: Produces a summary by generating summarized sentences from the document that capture the main idea.
-
-The AI models used by the API are provided by the service; you just have to send content for analysis.
-
-## Features
-
-> [!TIP]
-> If you want to start using these features, you can follow the [quickstart article](../quickstart.md) to get started. You can also make example requests using [Language Studio](../../language-studio.md) without needing to write code.
-
-The document summarization API uses natural language processing techniques to locate key sentences in an unstructured text document. These sentences collectively convey the main idea of the document.
-
-Document summarization returns a rank score as a part of the system response along with extracted sentences and their position in the original documents. A rank score is an indicator of how relevant a sentence is determined to be, to the main idea of a document. The model gives a score between 0 and 1 (inclusive) to each sentence and returns the highest scored sentences per request. For example, if you request a three-sentence summary, the service returns the three highest scored sentences.
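-
-Because the rank score is returned with each sentence, you can also re-rank the extracted sentences client side. The following is a minimal Python sketch; the `sentences`, `rankScore`, and `text` field names are assumptions based on the shape of the extractive summarization response, so verify them against the reference documentation.
-
-```python
-# A minimal sketch (field names assumed from the extractive summarization
-# response shape): return the highest-scoring extracted sentences first.
-def top_sentences(document_result: dict, count: int = 3) -> list:
-    sentences = document_result.get("sentences", [])
-    ranked = sorted(sentences, key=lambda s: s["rankScore"], reverse=True)
-    return [s["text"] for s in ranked[:count]]
-```
-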
-
-There is another feature in Azure Cognitive Service for Language, [key phrases extraction](./../../key-phrase-extraction/how-to/call-api.md), that can extract key information. When deciding between key phrase extraction and extractive summarization, consider the following:
-* Key phrase extraction returns phrases while extractive summarization returns sentences.
-* Extractive summarization returns sentences together with a rank score, and top ranked sentences will be returned per request.
-* Extractive summarization also returns the following positional information:
- * Offset: The start position of each extracted sentence.
- * Length: The length of each extracted sentence.
--
-## Determine how to process the data (optional)
-
-### Submitting data
-
-You submit documents to the API as strings of text. Analysis is performed upon receipt of the request. Because the API is [asynchronous](../../concepts/use-asynchronously.md), there may be a delay between sending an API request, and receiving the results.
-
-When using this feature, the API results are available for 24 hours from the time the request was ingested, as indicated in the response. After this time period, the results are purged and are no longer available for retrieval.
-
-### Getting document summarization results
-
-When you get results from document summarization, you can stream the results to an application or save the output to a file on the local system.
-
-The following is an example of content you might submit for summarization, which is extracted from the Microsoft blog article [A holistic representation toward integrative AI](https://www.microsoft.com/research/blog/a-holistic-representation-toward-integrative-ai/). This article is only an example; the API can accept much longer input text. See the data limits section for more information.
-
-*"At Microsoft, we have been on a quest to advance AI beyond existing techniques, by taking a more holistic, human-centric approach to learning and understanding. As Chief Technology Officer of Azure AI Cognitive Services, I have been working with a team of amazing scientists and engineers to turn this quest into a reality. In my role, I enjoy a unique perspective in viewing the relationship among three attributes of human cognition: monolingual text (X), audio or visual sensory signals, (Y) and multilingual (Z). At the intersection of all three, thereΓÇÖs magicΓÇöwhat we call XYZ-code as illustrated in Figure 1ΓÇöa joint representation to create more powerful AI that can speak, hear, see, and understand humans better. We believe XYZ-code will enable us to fulfill our long-term vision: cross-domain transfer learning, spanning modalities and languages. The goal is to have pre-trained models that can jointly learn representations to support a broad range of downstream AI tasks, much in the way humans do today. Over the past five years, we have achieved human performance on benchmarks in conversational speech recognition, machine translation, conversational question answering, machine reading comprehension, and image captioning. These five breakthroughs provided us with strong signals toward our more ambitious aspiration to produce a leap in AI capabilities, achieving multi-sensory and multilingual learning that is closer in line with how humans learn and understand. I believe the joint XYZ-code is a foundational component of this aspiration, if grounded with external knowledge sources in the downstream AI tasks."*
-
-The document summarization API request is processed upon receipt of the request by creating a job for the API backend. If the job succeeded, the output of the API will be returned. The output will be available for retrieval for 24 hours. After this time, the output is purged. Due to multilingual and emoji support, the response may contain text offsets. See [how to process offsets](../../concepts/multilingual-emoji-support.md) for more information.
-
-Using the above example, the API might return the following summarized sentences:
-
-**Extractive summarization**:
-- "At Microsoft, we have been on a quest to advance AI beyond existing techniques, by taking a more holistic, human-centric approach to learning and understanding."-- "We believe XYZ-code will enable us to fulfill our long-term vision: cross-domain transfer learning, spanning modalities and languages."-- "The goal is to have pre-trained models that can jointly learn representations to support a broad range of downstream AI tasks, much in the way humans do today."-
-**Abstractive summarization**:
-- "Microsoft is taking a more holistic, human-centric approach to learning and understanding. We believe XYZ-code will enable us to fulfill our long-term vision: cross-domain transfer learning, spanning modalities and languages. Over the past five years, we have achieved human performance on benchmarks in."-
-### Try document extractive summarization
-
-You can use document extractive summarization to get summaries of articles, papers, or documents. To see an example, see the [quickstart article](../quickstart.md).
-
-You can use the `sentenceCount` parameter to specify how many sentences will be returned, with `3` being the default. The range is from 1 to 20.
-
-You can also use the `sortby` parameter to specify in what order the extracted sentences will be returned - either `Offset` or `Rank`, with `Offset` being the default.
-
-|parameter value |Description |
-|||
-|Rank | Orders sentences according to their relevance to the input document, as determined by the service. |
-|Offset | Keeps the original order in which the sentences appear in the input document. |
-
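-
-For reference, the following hedged Python sketch shows how these two parameters might appear in an extractive summarization task definition for the analyze-text jobs endpoint used in the next section; treat the exact parameter casing as an assumption to verify against the reference documentation.
-
-```python
-# A hedged sketch of an extractive summarization task definition using the
-# sentence count and sort order parameters described above. Verify the exact
-# casing (for example, `sortBy` vs. `sortby`) against the reference documentation.
-extractive_task = {
-    "kind": "ExtractiveSummarization",
-    "taskName": "Document Extractive Summarization Task 1",
-    "parameters": {
-        "sentenceCount": 3,   # number of sentences to return (1-20, default 3)
-        "sortBy": "Rank",     # or "Offset" to keep the original document order
-    },
-}
-```
-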
-### Try document abstractive summarization
-
-[Reference documentation](https://go.microsoft.com/fwlink/?linkid=2211684)
-
-The following example will get you started with document abstractive summarization:
-
-1. Copy the command below into a text editor. The BASH example uses the `\` line continuation character. If your console or terminal uses a different line continuation character, use that character instead.
-
-```bash
-curl -i -X POST https://<your-language-resource-endpoint>/language/analyze-text/jobs?api-version=2022-10-01-preview \
--H "Content-Type: application/json" \--H "Ocp-Apim-Subscription-Key: <your-language-resource-key>" \--d \
-'
-{
- "displayName": "Document Abstractive Summarization Task Example",
- "analysisInput": {
- "documents": [
- {
- "id": "1",
- "language": "en",
-        "text": "At Microsoft, we have been on a quest to advance AI beyond existing techniques, by taking a more holistic, human-centric approach to learning and understanding. As Chief Technology Officer of Azure AI Cognitive Services, I have been working with a team of amazing scientists and engineers to turn this quest into a reality. In my role, I enjoy a unique perspective in viewing the relationship among three attributes of human cognition: monolingual text (X), audio or visual sensory signals, (Y) and multilingual (Z). At the intersection of all three, there's magic (what we call XYZ-code, as illustrated in Figure 1): a joint representation to create more powerful AI that can speak, hear, see, and understand humans better. We believe XYZ-code will enable us to fulfill our long-term vision: cross-domain transfer learning, spanning modalities and languages. The goal is to have pre-trained models that can jointly learn representations to support a broad range of downstream AI tasks, much in the way humans do today. Over the past five years, we have achieved human performance on benchmarks in conversational speech recognition, machine translation, conversational question answering, machine reading comprehension, and image captioning. These five breakthroughs provided us with strong signals toward our more ambitious aspiration to produce a leap in AI capabilities, achieving multi-sensory and multilingual learning that is closer in line with how humans learn and understand. I believe the joint XYZ-code is a foundational component of this aspiration, if grounded with external knowledge sources in the downstream AI tasks."
- }
- ]
- },
- "tasks": [
- {
- "kind": "AbstractiveSummarization",
- "taskName": "Document Abstractive Summarization Task 1",
- "parameters": {
- "sentenceCount": 1
- }
- }
- ]
-}
-'
-```
-If you don't specify `sentenceCount`, the model determines the summary length. Note that `sentenceCount` is an approximation of the number of sentences in the output summary, with a range of 1 to 20.
-
-2. Make the following changes in the command where needed:
-- Replace the value `your-language-resource-key` with your key.
-- Replace the first part of the request URL `your-language-resource-endpoint` with your endpoint URL.
-
-3. Open a command prompt window (for example: BASH).
-
-4. Paste the command from the text editor into the command prompt window, then run the command.
-
-5. Get the `operation-location` from the response header. The value will look similar to the following URL:
-
-```http
-https://<your-language-resource-endpoint>/language/analyze-text/jobs/12345678-1234-1234-1234-12345678?api-version=2022-10-01-preview
-```
-
-6. To get the results of the request, use the following cURL command. Be sure to replace `<my-job-id>` with the GUID value you received from the previous `operation-location` response header:
-
-```bash
-curl -X GET https://<your-language-resource-endpoint>/language/analyze-text/jobs/<my-job-id>?api-version=2022-10-01-preview \
--H "Content-Type: application/json" \--H "Ocp-Apim-Subscription-Key: <your-language-resource-key>"
-```
---
-### Abstractive document summarization example JSON response
-
-```json
-{
- "jobId": "cd6418fe-db86-4350-aec1-f0d7c91442a6",
- "lastUpdateDateTime": "2022-09-08T16:45:14Z",
- "createdDateTime": "2022-09-08T16:44:53Z",
- "expirationDateTime": "2022-09-09T16:44:53Z",
- "status": "succeeded",
- "errors": [],
- "displayName": "Document Abstractive Summarization Task Example",
- "tasks": {
- "completed": 1,
- "failed": 0,
- "inProgress": 0,
- "total": 1,
- "items": [
- {
- "kind": "AbstractiveSummarizationLROResults",
- "taskName": "Document Abstractive Summarization Task 1",
- "lastUpdateDateTime": "2022-09-08T16:45:14.0717206Z",
- "status": "succeeded",
- "results": {
- "documents": [
- {
- "summaries": [
- {
- "text": "Microsoft is taking a more holistic, human-centric approach to AI. We've developed a joint representation to create more powerful AI that can speak, hear, see, and understand humans better. We've achieved human performance on benchmarks in conversational speech recognition, machine translation, ...... and image captions.",
- "contexts": [
- {
- "offset": 0,
- "length": 247
- }
- ]
- }
- ],
- "id": "1"
- }
- ],
- "errors": [],
- "modelVersion": "latest"
- }
- }
- ]
- }
-}
-```
-
-|parameter |Description |
-|||
-|`-X POST <endpoint>` | Specifies your endpoint for accessing the API. |
-|`-H Content-Type: application/json` | The content type for sending JSON data. |
-|`-H "Ocp-Apim-Subscription-Key: <key>"` | Specifies the key for accessing the API. |
-|`-d <documents>` | The JSON containing the documents you want to send. |
-
-The following cURL commands are executed from a BASH shell. Edit these commands with your own resource name, resource key, and JSON values.
--
-## Service and data limits
--
-## See also
-
-* [Summarization overview](../overview.md)
cognitive-services Language Support https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/summarization/language-support.md
- Title: Summarization language support-
-description: Learn about which languages are supported by document summarization.
------ Previously updated : 09/28/2022----
-# Summarization language support
-
-Use this article to learn which natural languages are supported by document and conversation summarization.
-
-# [Document summarization](#tab/document-summarization)
-
-## Languages supported by extractive and abstractive document summarization
-
-| Language | Language code | Notes |
-|--|||
-| Chinese-Simplified | `zh-hans` | `zh` also accepted |
-| English | `en` | |
-| French | `fr` | |
-| German | `de` | |
-| Italian | `it` | |
-| Japanese | `ja` | |
-| Korean | `ko` | |
-| Spanish | `es` | |
-| Portuguese | `pt` | |
-
-# [Conversation summarization](#tab/conversation-summarization)
-
-## Languages supported by conversation summarization
-
-Conversation summarization supports the following languages:
-
-| Language | Language code | Notes |
-|--|||
-| English | `en` | |
---
-## Languages supported by custom summarization
-
-Custom summarization supports the following languages:
-
-| Language | Language code | Notes |
-|--|||
-| English | `en` | |
-
-## Next steps
-
-* [Summarization overview](overview.md)
cognitive-services Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/summarization/overview.md
- Title: What is document and conversation summarization (preview)?-
-description: Learn about summarizing text.
------ Previously updated : 01/12/2023----
-# What is document and conversation summarization?
--
-Summarization is one of the features offered by [Azure Cognitive Service for Language](../overview.md), a collection of machine learning and AI algorithms in the cloud for developing intelligent applications that involve written language. Use this article to learn more about this feature, and how to use it in your applications.
-
-Custom Summarization enables users to build custom AI models to summarize unstructured text, such as contracts or novels. By creating a Custom Summarization project, developers can iteratively label data, train, evaluate, and improve model performance before making it available for consumption. The quality of the labeled data greatly impacts model performance. To simplify building and customizing your model, the service offers a custom web portal that can be accessed through the [Language studio](https://aka.ms/languageStudio). You can easily get started with the service by following the steps in this [quickstart](custom/quickstart.md).
-
-# [Document summarization](#tab/document-summarization)
-
-This documentation contains the following article types:
-
-* **[Quickstarts](quickstart.md?pivots=rest-api&tabs=document-summarization)** are getting-started instructions to guide you through making requests to the service.
-* **[How-to guides](how-to/document-summarization.md)** contain instructions for using the service in more specific or customized ways.
-
-Document summarization uses natural language processing techniques to generate a summary for documents. There are two general approaches to automatic summarization, both of which are supported by the API: extractive and abstractive.
-
-Extractive summarization extracts sentences that collectively represent the most important or relevant information within the original content. Abstractive summarization generates a summary with concise, coherent sentences or words that aren't simply extracted from the original document. These features are designed to shorten content that could be considered too long to read.
-
-## Key features
-
-This API provides two types of document summarization:
-
-* **Extractive summarization**: Produces a summary by extracting salient sentences within the document.
-    * Multiple extracted sentences: These sentences collectively convey the main idea of the document. They're original sentences extracted from the input document's content.
- * Rank score: The rank score indicates how relevant a sentence is to a document's main topic. Document summarization ranks extracted sentences, and you can determine whether they're returned in the order they appear, or according to their rank.
-    * Multiple returned sentences: Determine the maximum number of sentences to be returned. For example, if you request a three-sentence summary, extractive summarization will return the three highest scored sentences.
- * Positional information: The start position and length of extracted sentences.
-* **Abstractive summarization**: Generates a summary that may not use the same words as those in the document, but captures the main idea.
- * Summary texts: Abstractive summarization returns a summary for each contextual input range within the document. A long document may be segmented so multiple groups of summary texts may be returned with their contextual input range.
- * Contextual input range: The range within the input document that was used to generate the summary text.
-
-As an example, consider the following paragraph of text:
-
-*"At Microsoft, we have been on a quest to advance AI beyond existing techniques, by taking a more holistic, human-centric approach to learning and understanding. As Chief Technology Officer of Azure AI Cognitive Services, I have been working with a team of amazing scientists and engineers to turn this quest into a reality. In my role, I enjoy a unique perspective in viewing the relationship among three attributes of human cognition: monolingual text (X), audio or visual sensory signals, (Y) and multilingual (Z). At the intersection of all three, thereΓÇÖs magicΓÇöwhat we call XYZ-code as illustrated in Figure 1ΓÇöa joint representation to create more powerful AI that can speak, hear, see, and understand humans better. We believe XYZ-code will enable us to fulfill our long-term vision: cross-domain transfer learning, spanning modalities and languages. The goal is to have pre-trained models that can jointly learn representations to support a broad range of downstream AI tasks, much in the way humans do today. Over the past five years, we have achieved human performance on benchmarks in conversational speech recognition, machine translation, conversational question answering, machine reading comprehension, and image captioning. These five breakthroughs provided us with strong signals toward our more ambitious aspiration to produce a leap in AI capabilities, achieving multi-sensory and multilingual learning that is closer in line with how humans learn and understand. I believe the joint XYZ-code is a foundational component of this aspiration, if grounded with external knowledge sources in the downstream AI tasks."*
-
-The document summarization API request is processed upon receipt of the request by creating a job for the API backend. If the job succeeded, the output of the API will be returned. The output will be available for retrieval for 24 hours. After this time, the output is purged. Due to multilingual and emoji support, the response may contain text offsets. See [how to process offsets](../concepts/multilingual-emoji-support.md) for more information.
-
-Using the above example, the API might return the following summarized sentences:
-
-**Extractive summarization**:
-- "At Microsoft, we have been on a quest to advance AI beyond existing techniques, by taking a more holistic, human-centric approach to learning and understanding."-- "We believe XYZ-code will enable us to fulfill our long-term vision: cross-domain transfer learning, spanning modalities and languages."-- "The goal is to have pre-trained models that can jointly learn representations to support a broad range of downstream AI tasks, much in the way humans do today."-
-**Abstractive summarization**:
-- "Microsoft is taking a more holistic, human-centric approach to learning and understanding. We believe XYZ-code will enable us to fulfill our long-term vision: cross-domain transfer learning, spanning modalities and languages. Over the past five years, we have achieved human performance on benchmarks in."-
-# [Conversation summarization](#tab/conversation-summarization)
-
-> [!IMPORTANT]
-> Conversation summarization is only available in English.
-
-This documentation contains the following article types:
-
-* **[Quickstarts](quickstart.md?pivots=rest-api&tabs=conversation-summarization)** are getting-started instructions to guide you through making requests to the service.
-* **[How-to guides](how-to/conversation-summarization.md)** contain instructions for using the service in more specific or customized ways.
-
-## Key features
-
-Conversation summarization supports the following features:
-
-* **Issue/resolution summarization**: A call center specific feature that gives a summary of issues and resolutions in conversations between customer-service agents and your customers.
-* **Chapter title summarization**: Gives suggested chapter titles of the input conversation.
-* **Narrative summarization**: Gives call notes, meeting notes or chat summaries of the input conversation.
-
-## When to use issue and resolution summarization
-
-* When there are aspects of an "issue" and "resolution", such as:
-    * The reason for a service chat/call (the issue).
-    * The resolution for the issue.
-* You only want a summary that focuses on related information about issues and resolutions.
-* When there are two participants in the conversation, and you want to summarize what each had said.
-
-As an example, consider the following example conversation:
-
-**Agent**: "*Hello, youΓÇÖre chatting with Rene. How may I help you?*"
-
-**Customer**: "*Hi, I tried to set up wifi connection for Smart Brew 300 espresso machine, but it didnΓÇÖt work.*"
-
-**Agent**: "*IΓÇÖm sorry to hear that. LetΓÇÖs see what we can do to fix this issue. Could you push the wifi connection button, hold for 3 seconds, then let me know if the power light is slowly blinking?*"
-
-**Customer**: "*Yes, I pushed the wifi connection button, and now the power light is slowly blinking.*"
-
-**Agent**: "*Great. Thank you! Now, please check in your Contoso Coffee app. Does it prompt to ask you to connect with the machine?*"
-
-**Customer**: "*No. Nothing happened.*"
-
-**Agent**: "*I see. Thanks. LetΓÇÖs try if a factory reset can solve the issue. Could you please press and hold the center button for 5 seconds to start the factory reset.*"
-
-**Customer**: *"IΓÇÖve tried the factory reset and followed the above steps again, but it still didnΓÇÖt work."*
-
-**Agent**: "*IΓÇÖm very sorry to hear that. Let me see if thereΓÇÖs another way to fix the issue. Please hold on for a minute.*"
-
-The conversation summarization feature would simplify the text into the following:
-
-|Example summary | Format | Conversation aspect |
-||-|-|
-| Customer wants to use the wifi connection on their Smart Brew 300. But it didn't work. | One or two sentences | issue |
-| Checked if the power light is blinking slowly. Checked the Contoso coffee app. It had no prompt. Tried to do a factory reset. | One or more sentences, generated from multiple lines of the transcript. | resolution |
---
-## Get started with summarization
---
-## Input requirements and service limits
-
-# [Document summarization](#tab/document-summarization)
-
-* Summarization takes raw unstructured text for analysis. See [Data and service limits](../concepts/data-limits.md) in the how-to guide for more information.
-* Summarization works with a variety of written languages. See [language support](language-support.md?tabs=document-summarization) for more information.
-
-# [Conversation summarization](#tab/conversation-summarization)
-
-* Conversation summarization takes structured text for analysis. See the [data and service limits](../concepts/data-limits.md) for more information.
-* Conversation summarization accepts text in English. See [language support](language-support.md?tabs=conversation-summarization) for more information.
---
-## Reference documentation and code samples
-
-As you use document summarization in your applications, see the following reference documentation and samples for Azure Cognitive Service for Language:
-
-|Development option / language |Reference documentation |Samples |
-||||
-|REST API | [REST API documentation](https://go.microsoft.com/fwlink/?linkid=2211684) | |
-|C# | [C# documentation](/dotnet/api/azure.ai.textanalytics?view=azure-dotnet-preview&preserve-view=true) | [C# samples](https://github.com/Azure/azure-sdk-for-net/tree/master/sdk/textanalytics/Azure.AI.TextAnalytics/samples) |
-| Java | [Java documentation](/java/api/overview/azure/ai-textanalytics-readme?view=azure-java-preview&preserve-view=true) | [Java Samples](https://github.com/Azure/azure-sdk-for-java/tree/main/sdk/textanalytics/azure-ai-textanalytics/src/samples) |
-|JavaScript | [JavaScript documentation](/javascript/api/overview/azure/ai-text-analytics-readme?view=azure-node-preview&preserve-view=true) | [JavaScript samples](https://github.com/Azure/azure-sdk-for-js/tree/main/sdk/textanalytics/ai-text-analytics/samples/v5) |
-|Python | [Python documentation](/python/api/overview/azure/ai-textanalytics-readme?view=azure-python-preview&preserve-view=true) | [Python samples](https://github.com/Azure/azure-sdk-for-python/tree/main/sdk/textanalytics/azure-ai-textanalytics/samples) |
-
-## Responsible AI
-
-An AI system includes not only the technology, but also the people who will use it, the people who will be affected by it, and the environment in which it's deployed. Read the [transparency note for summarization](/legal/cognitive-services/language-service/transparency-note-extractive-summarization?context=/azure/cognitive-services/language-service/context/context) to learn about responsible AI use and deployment in your systems. You can also see the following articles for more information:
-
-* [Transparency note for Azure Cognitive Service for Language](/legal/cognitive-services/language-service/transparency-note?context=/azure/cognitive-services/language-service/context/context)
-* [Integration and responsible use](/legal/cognitive-services/language-service/guidance-integration-responsible-use-summarization?context=/azure/cognitive-services/language-service/context/context)
-* [Characteristics and limitations of summarization](/legal/cognitive-services/language-service/characteristics-and-limitations-summarization?context=/azure/cognitive-services/language-service/context/context)
-* [Data, privacy, and security](/legal/cognitive-services/language-service/data-privacy?context=/azure/cognitive-services/language-service/context/context)
cognitive-services Quickstart https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/summarization/quickstart.md
- Title: "Quickstart: Use Document Summarization"-
-description: Use this quickstart to start using Document Summarization.
------ Previously updated : 02/17/2023--
-zone_pivot_groups: programming-languages-text-analytics
--
-# Quickstart: using document summarization and conversation summarization
-----------------
-## Clean up resources
-
-If you want to clean up and remove a Cognitive Services subscription, you can delete the resource or resource group. Deleting the resource group also deletes any other resources associated with it.
-
-* [Portal](../../cognitive-services-apis-create-account.md#clean-up-resources)
-* [Azure CLI](../../cognitive-services-apis-create-account-cli.md#clean-up-resources)
---
-## Next steps
-
-* [How to call document summarization](./how-to/document-summarization.md)
-* [How to call conversation summarization](./how-to/conversation-summarization.md)
cognitive-services Region Support https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/summarization/region-support.md
- Title: Summarization region support-
-description: Learn about which regions are supported by document summarization.
------- Previously updated : 06/11/2023---
-# Regional availability
-
-Some summarization features are only available in limited regions. More regions will be added to this list as they become available.
-
-## Regional availability table
-
-|Region|Document abstractive summarization|Conversation issue and resolution summarization|Conversation narrative summarization with chapters|Custom summarization|
-||||||-|
-|North Europe|&#9989;|&#9989;|&#9989;|&#10060;|
-|East US|&#9989;|&#9989;|&#9989;|&#9989;|
-|UK South|&#9989;|&#9989;|&#9989;|&#10060;|
-|Southeast Asia|&#9989;|&#9989;|&#9989;|&#10060;|
-
-## Next steps
-
-* [Summarization overview](overview.md)
cognitive-services Assertion Detection https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/text-analytics-for-health/concepts/assertion-detection.md
- Title: Assertion detection in Text Analytics for health-
-description: Learn about assertion detection.
------ Previously updated : 01/04/2023----
-# Assertion detection
-
-The meaning of medical content is highly affected by modifiers, such as negative or conditional assertions, which can have critical implications if misrepresented. Text Analytics for health supports four categories of assertion detection for entities in the text:
-
-* Certainty
-* Conditional
-* Association
-* Temporal
-
-## Assertion output
-
-Text Analytics for health returns assertion modifiers, which are informative attributes assigned to medical concepts that provide a deeper understanding of the concepts' context within the text. These modifiers are divided into four categories, each focusing on a different aspect and containing a set of mutually exclusive values. Only one value per category is assigned to each entity. The most common value for each category is the Default value. The service's output response contains only assertion modifiers that are different from the default value. In other words, if no assertion is returned, the implied assertion is the default value.
-
-**CERTAINTY** - provides information regarding the presence (present vs. absent) of the concept and how certain the text is regarding its presence (definite vs. possible).
-* **Positive** [Default]: the concept exists or has happened.
-* **Negative**: the concept does not exist now or never happened.
-* **Positive_Possible**: the concept likely exists but there is some uncertainty.
-* **Negative_Possible**: the concept's existence is unlikely but there is some uncertainty.
-* **Neutral_Possible**: the concept may or may not exist without a tendency to either side.
-
-An example of assertion detection is shown below where a negated entity is returned with a negative value for the certainty category:
-
-```json
-{
- "offset": 381,
- "length": 3,
- "text": "SOB",
- "category": "SymptomOrSign",
- "confidenceScore": 0.98,
- "assertion": {
- "certainty": "negative"
- },
- "name": "Dyspnea",
- "links": [
- {
- "dataSource": "UMLS",
- "id": "C0013404"
- },
- {
- "dataSource": "AOD",
- "id": "0000005442"
- },
- ...
-  ]
-}
-```
-
-**CONDITIONALITY** - provides information regarding whether the existence of a concept depends on certain conditions.
-* **None** [Default]: the concept is a fact and not hypothetical and does not depend on certain conditions.
-* **Hypothetical**: the concept may develop or occur in the future.
-* **Conditional**: the concept exists or occurs only under certain conditions.
-
-**ASSOCIATION** - describes whether the concept is associated with the subject of the text or someone else.
-* **Subject** [Default]: the concept is associated with the subject of the text, usually the patient.
-* **Other**: the concept is associated with someone who is not the subject of the text.
-
-**TEMPORAL** - provides additional temporal information for a concept detailing whether it is an occurrence related to the past, present, or future.
-* **Current** [Default]: the concept is related to conditions/events that belong to the current encounter. For example, medical symptoms that have brought the patient to seek medical attention (e.g., “started having headaches 5 days prior to their arrival to the ER”). This includes newly made diagnoses, symptoms experienced during or leading to this encounter, treatments and examinations done within the encounter.
-* **Past**: the concept is related to conditions, examinations, treatments, medication events that are mentioned as something that existed or happened prior to the current encounter, as might be indicated by hints like s/p, recently, ago, previously, in childhood, at age X. For example, diagnoses that were given in the past, treatments that were done, past examinations and their results, past admissions, etc. Medical background is considered as PAST.
-* **Future**: the concept is related to conditions/events that are planned/scheduled/suspected to happen in the future, e.g., will be obtained, will undergo, is scheduled in two weeks from now.
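-
-Temporal assertions are surfaced through the preview versions of the service (see the how-to guide for the exact version values). The following is a minimal request sketch, assuming the asynchronous `analyze-text/jobs` route of the Language REST API; `{ENDPOINT}` and `{API_KEY}` are placeholders for your own Language resource values, and the exact route and version strings should be confirmed against the current API reference.
-
-```bash
-# Hedged sketch: submit a Healthcare task that opts into the preview API and model versions.
-curl -X POST "{ENDPOINT}/language/analyze-text/jobs?api-version=2023-04-15-preview" \
-  -H "Ocp-Apim-Subscription-Key: {API_KEY}" \
-  -H "Content-Type: application/json" \
-  -d '{
-    "analysisInput": { "documents": [ { "id": "1", "language": "en", "text": "No shortness of breath. Headaches started 5 days prior to arrival." } ] },
-    "tasks": [ { "kind": "Healthcare", "parameters": { "modelVersion": "2023-04-15-preview" } } ]
-  }'
-```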
---
-## Next steps
-
-[How to call the Text Analytics for health](../how-to/call-api.md)
cognitive-services Health Entity Categories https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/text-analytics-for-health/concepts/health-entity-categories.md
- Title: Entity categories recognized by Text Analytics for health-
-description: Learn about categories recognized by Text Analytics for health
------ Previously updated : 01/04/2023----
-# Supported Text Analytics for health entity categories
-
-Text Analytics for health processes and extracts insights from unstructured medical data. The service detects and surfaces medical concepts, assigns assertions to concepts, infers semantic relations between concepts and links them to common medical ontologies.
-
-Text Analytics for health detects medical concepts that fall under the following categories.
-
-## Anatomy
-
-### Entities
-
-**BODY_STRUCTURE** - Body systems, anatomic locations or regions, and body sites. For example, arm, knee, abdomen, nose, liver, head, respiratory system, lymphocytes.
--
-## Demographics
-
-### Entities
-
-**AGE** - All age terms and phrases, including ones for patients, family members, and others. For example, 40-year-old, 51 yo, 3 months old, adult, infant, elderly, young, minor, middle-aged.
-
-**ETHNICITY** - Phrases that indicate the ethnicity of the subject. For example, African American or Asian.
---
-**GENDER** - Terms that disclose the gender of the subject. For example, male, female, woman, gentleman, lady.
--
-## Examinations
-
-### Entities
-
-**EXAMINATION_NAME** - Diagnostic procedures and tests, including vital signs and body measurements. For example, MRI, ECG, HIV test, hemoglobin, platelet count, scale systems such as *Bristol stool scale*.
--
-## External Influence
-
-### Entities
-
-**ALLERGEN** - An antigen triggering an allergic reaction. For example, cats, peanuts.
---
-## General attributes
-
-### Entities
-
-**COURSE** - Description of a change in another entity over time, such as condition progression (for example: improvement, worsening, resolution, remission), a course of treatment or medication (for example: increase in medication dosage).
--
-**DATE** - Full date relating to a medical condition, examination, treatment, medication, or administrative event.
--
-**DIRECTION** - Directional terms that may relate to a body structure, medical condition, examination, or treatment, such as: left, lateral, upper, posterior.
--
-**FREQUENCY** - Describes how often a medical condition, examination, treatment, or medication occurred, occurs, or should occur.
---
-**TIME** - Temporal terms relating to the beginning and/or length (duration) of a medical condition, examination, treatment, medication, or administrative event.
-
-**MEASUREMENT_UNIT** - The unit of measurement related to an examination or a medical condition measurement.
-
-**MEASUREMENT_VALUE** - The value related to an examination or a medical condition measurement.
--
-**RELATIONAL_OPERATOR** - Phrases that express the quantitative relation between an entity and some additional information.
--
-## Genomics
-
-### Entities
-
-**VARIANT** - All mentions of gene variations and mutations. For example, `c.524C>T`, `(MTRR):r.1462_1557del96`
-
-**GENE_OR_PROTEIN** - All mentions of names and symbols of human genes as well as chromosomes and parts of chromosomes and proteins. For example, MTRR, F2.
-
-**MUTATION_TYPE** - Description of the mutation, including its type, effect, and location. For example, trisomy, germline mutation, loss of function.
--
-**EXPRESSION** - Gene expression level. For example, positive for-, negative for-, overexpressed, detected in high/low levels, elevated.
---
-## Healthcare
-
-### Entities
-
-**ADMINISTRATIVE_EVENT** - Events that relate to the healthcare system but are of an administrative/semi-administrative nature. For example, registration, admission, trial, study entry, transfer, discharge, hospitalization, hospital stay.
-
-**CARE_ENVIRONMENT** - An environment or location where patients are given care. For example, emergency room, physician's office, cardio unit, hospice, hospital.
--
-**HEALTHCARE_PROFESSION** - A licensed or non-licensed healthcare practitioner. For example, dentist, pathologist, neurologist, radiologist, pharmacist, nutritionist, physical therapist, chiropractor.
--
-## Medical condition
-
-### Entities
-
-**DIAGNOSIS** - Disease, syndrome, poisoning. For example, breast cancer, Alzheimer's, HTN, CHF, spinal cord injury.
-
-**SYMPTOM_OR_SIGN** - Subjective or objective evidence of disease or other diagnoses. For example, chest pain, headache, dizziness, rash, SOB, abdomen was soft, good bowel sounds, well nourished.
--
-**CONDITION_QUALIFIER** - Qualitative terms that are used to describe a medical condition. All the following subcategories are considered qualifiers:
-
-* Time-related expressions: terms that describe the time dimension qualitatively, such as sudden, acute, chronic, longstanding.
-* Quality expressions: terms that describe the "nature" of the medical condition, such as burning, sharp.
-* Severity expressions: severe, mild, a bit, uncontrolled.
-* Extensivity expressions: local, focal, diffuse.
--
-**CONDITION_SCALE** - Qualitative terms that characterize the condition by a scale, which is a finite ordered list of values.
--
-## Medication
-
-### Entities
-
-**MEDICATION_CLASS** - A set of medications that have a similar mechanism of action, a related mode of action, a similar chemical structure, and/or are used to treat the same disease. For example, ACE inhibitor, opioid, antibiotics, pain relievers.
--
-**MEDICATION_NAME** - Medication mentions, including copyrighted brand names and non-brand names. For example, Ibuprofen.
-
-**DOSAGE** - Amount of medication ordered. For example, Infuse Sodium Chloride solution *1000 mL*.
-
-**MEDICATION_FORM** - The form of the medication. For example, solution, pill, capsule, tablet, patch, gel, paste, foam, spray, drops, cream, syrup.
--
-**MEDICATION_ROUTE** - The administration method of medication. For example, oral, topical, inhaled.
--
-## Social
-
-### Entities
-
-**FAMILY_RELATION** - Mentions of family relatives of the subject. For example, father, daughter, siblings, parents.
--
-**EMPLOYMENT** - Mentions of employment status, including specific professions, such as unemployed, retired, firefighter, student.
--
-**LIVING_STATUS** - Mentions of the housing situation, including homeless, living with parents, living alone, living with others.
--
-**SUBSTANCE_USE** - Mentions of use of legal or illegal drugs, tobacco, or alcohol. For example, smoking, drinking, or heroin use.
--
-**SUBSTANCE_USE_AMOUNT** - Mentions of specific amounts of substance use. For example, a pack (of cigarettes) or a few glasses (of wine).
---
-## Treatment
-
-### Entities
-
-**TREATMENT_NAME** - Therapeutic procedures. For example, knee replacement surgery, bone marrow transplant, TAVI, diet.
----
-## Next steps
-
-* [How to call the Text Analytics for health](../how-to/call-api.md)
cognitive-services Relation Extraction https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/text-analytics-for-health/concepts/relation-extraction.md
- Title: Relation extraction in Text Analytics for health-
-description: Learn about relation extraction
------ Previously updated : 01/04/2023----
-# Relation extraction
-
-Text Analytics for health features relation extraction, which is used to identify meaningful connections between concepts, or entities, mentioned in the text. For example, a "time of condition" relation is found by associating a condition name with a time. Another example is a "dosage of medication" relation, which is found by relating an extracted medication to its extracted dosage. The following example shows how relations are expressed in the JSON output.
-
-> [!NOTE]
-> * Relations referring to CONDITION may refer to either the DIAGNOSIS entity type or the SYMPTOM_OR_SIGN entity type.
-> * Relations referring to MEDICATION may refer to either the MEDICATION_NAME entity type or the MEDICATION_CLASS entity type.
-> * Relations referring to TIME may refer to either the TIME entity type or the DATE entity type.
-
-Relation extraction output contains URI references and assigned roles of the entities of the relation type. For example, in the following JSON:
-
-```json
- "relations": [
- {
- "relationType": "DosageOfMedication",
- "entities": [
- {
- "ref": "#/results/documents/0/entities/0",
- "role": "Dosage"
- },
- {
- "ref": "#/results/documents/0/entities/1",
- "role": "Medication"
- }
- ]
- },
- {
- "relationType": "RouteOfMedication",
- "entities": [
- {
- "ref": "#/results/documents/0/entities/1",
- "role": "Medication"
- },
- {
- "ref": "#/results/documents/0/entities/2",
- "role": "Route"
- }
- ]
-    },
-...
-    ]
-```
-
-## Recognized relations
-
-The following list presents all of the relations recognized by the Text Analytics for health API.
-
-**ABBREVIATION**
-
-**AMOUNT_OF_SUBSTANCE_USE**
-
-**BODY_SITE_OF_CONDITION**
-
-**BODY_SITE_OF_TREATMENT**
-
-**COURSE_OF_CONDITION**
-
-**COURSE_OF_EXAMINATION**
-
-**COURSE_OF_MEDICATION**
-
-**COURSE_OF_TREATMENT**
-
-**DIRECTION_OF_BODY_STRUCTURE**
-
-**DIRECTION_OF_CONDITION**
-
-**DIRECTION_OF_EXAMINATION**
-
-**DIRECTION_OF_TREATMENT**
-
-**DOSAGE_OF_MEDICATION**
-
-**EXAMINATION_FINDS_CONDITION**
-
-**EXPRESSION_OF_GENE**
-
-**EXPRESSION_OF_VARIANT**
-
-**FORM_OF_MEDICATION**
-
-**FREQUENCY_OF_CONDITION**
-
-**FREQUENCY_OF_MEDICATION**
-
-**FREQUENCY_OF_SUBSTANCE_USE**
-
-**FREQUENCY_OF_TREATMENT**
-
-**MUTATION_TYPE_OF_GENE**
-
-**MUTATION_TYPE_OF_VARIANT**
-
-**QUALIFIER_OF_CONDITION**
-
-**RELATION_OF_EXAMINATION**
-
-**ROUTE_OF_MEDICATION**
-
-**SCALE_OF_CONDITION**
-
-**TIME_OF_CONDITION**
-
-**TIME_OF_EVENT**
-
-**TIME_OF_EXAMINATION**
-
-**TIME_OF_MEDICATION**
-
-**TIME_OF_TREATMENT**
-
-**UNIT_OF_CONDITION**
-
-**UNIT_OF_EXAMINATION**
-
-**VALUE_OF_CONDITION**
-
-**VALUE_OF_EXAMINATION**
-
-**VARIANT_OF_GENE**
-
-## Next steps
-
-* [How to call the Text Analytics for health](../how-to/call-api.md)
cognitive-services Call Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/text-analytics-for-health/how-to/call-api.md
- Title: How to call Text Analytics for health-
-description: Learn how to extract and label medical information from unstructured clinical text with Text Analytics for health.
------ Previously updated : 01/04/2023----
-# How to use Text Analytics for health
--
-Text Analytics for health can be used to extract and label relevant medical information from unstructured texts such as doctors' notes, discharge summaries, clinical documents, and electronic health records. The service performs [named entity recognition](../concepts/health-entity-categories.md), [relation extraction](../concepts/relation-extraction.md), [entity linking](https://www.nlm.nih.gov/research/umls/sourcereleasedocs/index.html), and [assertion detection](../concepts/assertion-detection.md) to uncover insights from the input text. For information on the returned confidence scores, see the [transparency note](/legal/cognitive-services/text-analytics/transparency-note#general-guidelines-to-understand-and-improve-performance?context=/azure/cognitive-services/text-analytics/context/context).
-
-> [!TIP]
-> If you want to test out the feature without writing any code, use the [Language Studio](../../language-studio.md).
-
-There are two ways to call the service:
-
-* A [Docker container](use-containers.md) (synchronous)
-* Using the web-based API and client libraries (asynchronous)
-
-## Development options
----
-## Specify the Text Analytics for health model
-
-By default, Text Analytics for health uses the `2022-03-01` model version on your text. You can also configure your API requests to use a specific model version, which is then used to perform the operation. Extraction of social determinants of health entities, along with their assertions and relationships (**only in English**), is supported with the latest preview model version `2023-04-15-preview`.
-
-| Supported Versions | Status |
-|--|--|
-| `2023-04-15-preview` | Preview |
-| `2023-04-01` | Generally available |
-| `2023-01-01-preview` | Preview |
-| `2022-08-15-preview` | Preview |
-| `2022-03-01` | Generally available |
-
-## Specify the Text Analytics for health API version
-
-When you make a Text Analytics for health API call, you must specify an API version. The latest generally available API version is `2023-04-01`, which supports relationship confidence scores in the results. The latest preview API version is `2023-04-15-preview`, which adds support for [temporal assertions](../concepts/assertion-detection.md). An example request showing where to set the API version and model version follows the table below.
-
-| Supported Versions | Status |
-|--|--|
-| `2023-04-15-preview`| Preview |
-| `2023-04-01`| Generally available |
-| `2022-10-01-preview` | Preview |
-| `2022-05-01` | Generally available |
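-
-As a reference sketch, the following request shows where each value goes: the API version is the `api-version` query string parameter, and the model version is passed in the task's `parameters` object. The asynchronous `analyze-text/jobs` route and header name are assumptions drawn from the Language REST reference; verify them there before use.
-
-```bash
-# Hedged sketch: api-version in the query string, modelVersion in the Healthcare task parameters.
-curl -X POST "{ENDPOINT}/language/analyze-text/jobs?api-version=2023-04-01" \
-  -H "Ocp-Apim-Subscription-Key: {API_KEY}" \
-  -H "Content-Type: application/json" \
-  -d '{
-    "analysisInput": { "documents": [ { "id": "1", "language": "en", "text": "Patient was prescribed 200 mg of ibuprofen." } ] },
-    "tasks": [ { "taskName": "analyze 1", "kind": "Healthcare", "parameters": { "modelVersion": "2023-04-01" } } ]
-  }'
-```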
--
-### Text Analytics for health container
-
-The [Text Analytics for health container](use-containers.md) uses separate model versioning from the REST API and client libraries. Only one model version is available per container image.
-
-| Endpoint | Container Image Tag | Model version |
-||--||
-| `/entities/health` | `3.0.59413252-onprem-amd64` (latest) | `2022-03-01` |
-| `/entities/health` | `3.0.59413252-latin-onprem-amd64` (latin) | `2022-08-15-preview` |
-| `/entities/health` | `3.0.59413252-semitic-onprem-amd64` (semitic) | `2022-08-15-preview` |
-| `/entities/health` | `3.0.016230002-onprem-amd64` | `2021-05-15` |
-| `/entities/health` | `3.0.015370001-onprem-amd64` | `2021-03-01` |
-| `/entities/health` | `1.1.013530001-amd64-preview` | `2020-09-03` |
-| `/entities/health` | `1.1.013150001-amd64-preview` | `2020-07-24` |
-| `/domains/health` | `1.1.012640001-amd64-preview` | `2020-05-08` |
-| `/domains/health` | `1.1.012420001-amd64-preview` | `2020-05-08` |
-| `/domains/health` | `1.1.012070001-amd64-preview` | `2020-04-16` |
-
-### Input languages
-
-Text Analytics for health supports English in addition to multiple languages that are currently in preview. You can use the hosted API or deploy the API in a container, as detailed [under Text Analytics for health languages support](../language-support.md).
-
-## Submitting data
-
-To send an API request, you will need your Language resource endpoint and key.
-
-> [!NOTE]
-> You can find the key and endpoint for your Language resource on the Azure portal. They will be located on the resource's **Key and endpoint** page, under **resource management**.
-
-Analysis is performed upon receipt of the request. If you send a request using the REST API or client library, the results will be returned asynchronously. If you're using the Docker container, they will be returned synchronously.
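-
-For the REST API, the asynchronous flow is: submit the job, read the `operation-location` header from the response, and poll that URL until the job completes. A minimal sketch, assuming a request body like the one shown earlier saved as `request.json` (a hypothetical file name) and the same placeholder endpoint and key:
-
-```bash
-# Submit the job; the 202 response includes an operation-location header with the job URL.
-curl -isS -X POST "{ENDPOINT}/language/analyze-text/jobs?api-version=2023-04-01" \
-  -H "Ocp-Apim-Subscription-Key: {API_KEY}" \
-  -H "Content-Type: application/json" \
-  -d @request.json | grep -i "operation-location"
-
-# Poll the returned job URL until its status is "succeeded", then read the results.
-curl -sS "{JOB_URL}" -H "Ocp-Apim-Subscription-Key: {API_KEY}"
-```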
---
-## Submitting a Fast Healthcare Interoperability Resources (FHIR) request
-
-Fast Healthcare Interoperability Resources (FHIR) is the health industry communication standard developed by the Health Level Seven International (HL7) organization. The standard defines the data formats (resources) and API structure for exchanging electronic healthcare data. To receive your result using the **FHIR** structure, you must send the FHIR version in the API request body, as shown in the example after the following table.
-
-| Parameter Name | Type | Value |
-|--|--|--|
-| fhirVersion | string | `4.0.1` |
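-
-For example, the following request body sketch adds the parameter to the Healthcare task. Placing `fhirVersion` inside the task's `parameters` object is an assumption based on the task parameter format shown earlier; verify the placement against the API reference.
-
-```bash
-# Hedged sketch: request FHIR-structured output by sending fhirVersion with the Healthcare task.
-curl -X POST "{ENDPOINT}/language/analyze-text/jobs?api-version=2023-04-01" \
-  -H "Ocp-Apim-Subscription-Key: {API_KEY}" \
-  -H "Content-Type: application/json" \
-  -d '{
-    "analysisInput": { "documents": [ { "id": "1", "language": "en", "text": "Patient was given 200 mg of ibuprofen." } ] },
-    "tasks": [ { "kind": "Healthcare", "parameters": { "fhirVersion": "4.0.1" } } ]
-  }'
-```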
---
-## Getting results from the feature
-
-Depending on your API request and the data you submit to Text Analytics for health, you'll get:
---
-## Service and data limits
--
-## See also
-
-* [Text Analytics for health overview](../overview.md)
-* [Text Analytics for health entity categories](../concepts/health-entity-categories.md)
cognitive-services Configure Containers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/text-analytics-for-health/how-to/configure-containers.md
- Title: Configure Text Analytics for health containers-
-description: Text Analytics for health containers uses a common configuration framework, so that you can easily configure and manage storage, logging and telemetry, and security settings for your containers.
------ Previously updated : 11/02/2021----
-# Configure Text Analytics for health docker containers
-
-Text Analytics for health provides each container with a common configuration framework, so that you can easily configure and manage storage, logging and telemetry, and security settings for your containers. Several [example docker run commands](use-containers.md#run-the-container-with-docker-run) are also available.
-
-## Configuration settings
--
-> [!IMPORTANT]
-> The [`ApiKey`](#apikey-configuration-setting), [`Billing`](#billing-configuration-setting), and [`Eula`](#eula-setting) settings are used together, and you must provide valid values for all three of them; otherwise your container won't start. For more information about using these configuration settings to instantiate a container, see [Billing](use-containers.md#billing).
-
-## ApiKey configuration setting
-
-The `ApiKey` setting specifies the Azure resource key used to track billing information for the container. You must specify a value for the ApiKey and the value must be a valid key for the _Language_ resource specified for the [`Billing`](#billing-configuration-setting) configuration setting.
-
-This setting can be found in the following place:
-
-* Azure portal: **Language** resource management, under **Keys and endpoint**
-
-## ApplicationInsights setting
--
-## Billing configuration setting
-
-The `Billing` setting specifies the endpoint URI of the _Language_ resource on Azure used to meter billing information for the container. You must specify a value for this configuration setting, and the value must be a valid endpoint URI for a _Language_ resource on Azure. The container reports usage about every 10 to 15 minutes.
-
-This setting can be found in the following place:
-
-* Azure portal: **Language** Overview, labeled `Endpoint`
-
-|Required| Name | Data type | Description |
-|--||--|-|
-|Yes| `Billing` | String | Billing endpoint URI. For more information on obtaining the billing URI, see [gather required parameters](use-containers.md#gather-required-parameters). For more information and a complete list of regional endpoints, see [Custom subdomain names for Cognitive Services](../../../cognitive-services-custom-subdomains.md). |
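-
-For convenience, here is a minimal `docker run` sketch that passes the three required settings together. It mirrors the full example in the installation article; the endpoint and key placeholders are your own Language resource values.
-
-```bash
-# ApiKey, Billing, and Eula must all be valid or the container won't start.
-docker run --rm -it -p 5000:5000 --cpus 6 --memory 12g \
-mcr.microsoft.com/azure-cognitive-services/textanalytics/healthcare:latest \
-Eula=accept \
-rai_terms=accept \
-Billing={ENDPOINT_URI} \
-ApiKey={API_KEY}
-```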
-
-## Eula setting
--
-## Fluentd settings
--
-## Http proxy credentials settings
--
-## Logging settings
-
-
-## Mount settings
-
-Use bind mounts to read and write data to and from the container. You can specify an input mount or output mount by specifying the `--mount` option in the [docker run](https://docs.docker.com/engine/reference/commandline/run/) command.
-
-Text Analytics for health containers don't use input or output mounts to store training or service data.
-
-The exact syntax of the host mount location varies depending on the host operating system. Additionally, the [host computer](use-containers.md#host-computer-requirements-and-recommendations)'s mount location may not be accessible due to a conflict between permissions used by the docker service account and the host mount location permissions.
-
-|Optional| Name | Data type | Description |
-|-||--|-|
-|Not allowed| `Input` | String | Text Analytics for health containers do not use this.|
-|Optional| `Output` | String | The target of the output mount. The default value is `/output`. This is the location of the logs. This includes container logs. <br><br>Example:<br>`--mount type=bind,src=c:\output,target=/output`|
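-
-For example, the following run sketch adds an output mount so that container logs are written to the host. The host path `/home/azureuser/output` is a hypothetical example; use any writable directory on your host.
-
-```bash
-# Output mount only; Text Analytics for health containers don't use an input mount.
-docker run --rm -it -p 5000:5000 --cpus 6 --memory 12g \
---mount type=bind,src=/home/azureuser/output,target=/output \
-mcr.microsoft.com/azure-cognitive-services/textanalytics/healthcare:latest \
-Eula=accept rai_terms=accept Billing={ENDPOINT_URI} ApiKey={API_KEY}
-```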
-
-## Next steps
-
-* Review [How to install and run containers](use-containers.md)
-* Use more [Cognitive Services Containers](../../../cognitive-services-container-support.md)
cognitive-services Use Containers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/text-analytics-for-health/how-to/use-containers.md
- Title: How to use Text Analytics for health containers-
-description: Learn how to extract and label medical information on premises using Text Analytics for health Docker container.
------ Previously updated : 01/18/2023----
-# Use Text Analytics for health containers
-
-Containers enable you to host the Text Analytics for health API on your own infrastructure. If you have security or data governance requirements that can't be fulfilled by calling Text Analytics for health remotely, then containers might be a good option.
-
-If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/cognitive-services/) before you begin.
-
-## Prerequisites
-
-You must meet the following prerequisites before using Text Analytics for health containers:
-
-* [Docker](https://docs.docker.com/) installed on a host computer. Docker must be configured to allow the containers to connect with and send billing data to Azure.
- * On Windows, Docker must also be configured to support Linux containers.
- * You should have a basic understanding of [Docker concepts](https://docs.docker.com/get-started/overview/).
-* A <a href="https://portal.azure.com/#create/Microsoft.CognitiveServicesTextAnalytics" title="Create a Language resource" target="_blank">Language resource </a> with the free (F0) or standard (S) [pricing tier](https://azure.microsoft.com/pricing/details/cognitive-services/text-analytics/).
--
-## Host computer requirements and recommendations
--
-The following table describes the minimum and recommended specifications for the Text Analytics for health containers. Each CPU core must be at least 2.6 gigahertz (GHz) or faster. The allowable Transactions Per Second (TPS) are also listed.
-
-| | Minimum host specs | Recommended host specs | Minimum TPS | Maximum TPS|
-|||-|--|--|
-| **1 document/request** | 4 core, 12GB memory | 6 core, 12GB memory |15 | 30|
-| **10 documents/request** | 6 core, 16GB memory | 8 core, 20GB memory |15 | 30|
-
-CPU core and memory correspond to the `--cpus` and `--memory` settings, which are used as part of the `docker run` command.
-
-## Get the container image with `docker pull`
-
-The Text Analytics for health container image can be found on the `mcr.microsoft.com` container registry syndicate. It resides within the `azure-cognitive-services/textanalytics/` repository and is named `healthcare`. The fully qualified container image name is `mcr.microsoft.com/azure-cognitive-services/textanalytics/healthcare`
-
-To use the latest version of the container, you can use the `latest` tag. You can also find a full list of [tags on the MCR](https://mcr.microsoft.com/product/azure-cognitive-services/textanalytics/healthcare/tags).
-
-Use the [`docker pull`](https://docs.docker.com/engine/reference/commandline/pull/) command to download this container image from the Microsoft public container registry. You can find the featured tags on the [dockerhub page](https://hub.docker.com/_/microsoft-azure-cognitive-services-textanalytics-healthcare)
-
-```
-docker pull mcr.microsoft.com/azure-cognitive-services/textanalytics/healthcare:<tag-name>
-```
--
-## Run the container with `docker run`
-
-Once the container is on the host computer, use the [docker run](https://docs.docker.com/engine/reference/commandline/run/) command to run the containers. The container will continue to run until you stop it.
-
-> [!IMPORTANT]
-> * The docker commands in the following sections use the backslash, `\`, as a line continuation character. Replace or remove this based on your host operating system's requirements.
-> * The `Eula`, `Billing`, and `ApiKey` options must be specified to run the container; otherwise, the container won't start. For more information, see [Billing](#billing).
-> * The [responsible AI](/legal/cognitive-services/text-analytics/transparency-note-health) (RAI) acknowledgment must also be present with a value of `accept`.
-> * The sentiment analysis and language detection containers use v3 of the API, and are generally available. The key phrase extraction container uses v2 of the API, and is in preview.
-
-There are multiple ways you can install and run the Text Analytics for health container.
-
-- Use the Azure portal to create a Language resource, and use Docker to get your container.
-- Use an Azure VM with Docker to run the container. Refer to [Docker on Azure](../../../../docker/index.yml).
-- Use the following PowerShell and Azure CLI scripts to automate resource deployment and container configuration.
-
-When you use the Text Analytics for health container, the data contained in your API requests and responses is not visible to Microsoft, and is not used for training the model applied to your data.
-
-### Run the container locally
-
-To run the container in your own environment after downloading the container image, execute the following `docker run` command. Replace the placeholders below with your own values:
-
-| Placeholder | Value | Format or example |
-|-|-||
-| **{API_KEY}** | The key for your Language resource. You can find it on your resource's **Key and endpoint** page, on the Azure portal. |`xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx`|
-| **{ENDPOINT_URI}** | The endpoint for accessing the API. You can find it on your resource's **Key and endpoint** page, on the Azure portal. | `https://<your-custom-subdomain>.cognitiveservices.azure.com` |
-
-```bash
-docker run --rm -it -p 5000:5000 --cpus 6 --memory 12g \
-mcr.microsoft.com/azure-cognitive-services/textanalytics/healthcare:<tag-name> \
-Eula=accept \
-rai_terms=accept \
-Billing={ENDPOINT_URI} \
-ApiKey={API_KEY}
-```
-
-This command:
-
-- Runs the Text Analytics for health container from the container image
-- Allocates 6 CPU cores and 12 gigabytes (GB) of memory
-- Exposes TCP port 5000 and allocates a pseudo-TTY for the container
-- Accepts the end user license agreement (EULA) and responsible AI (RAI) terms
-- Automatically removes the container after it exits. The container image is still available on the host computer.
-
-### Demo UI to visualize output
-
-The container provides REST-based query prediction endpoint APIs. We have also provided a visualization tool in the container that is accessible by appending `/demo` to the endpoint of the container. For example:
-
-```
-http://<serverURL>:5000/demo
-```
-
-Use the example cURL request below to submit a query to the container you have deployed, replacing the `serverURL` variable with the appropriate value.
-
-```bash
-curl -X POST 'http://<serverURL>:5000/text/analytics/v3.1/entities/health' --header 'Content-Type: application/json' --header 'accept: application/json' --data-binary @example.json
-
-```
-
-### Install the container using Azure Web App for Containers
-
-Azure [Web App for Containers](https://azure.microsoft.com/services/app-service/containers/) is an Azure resource dedicated to running containers in the cloud. It brings out-of-the-box capabilities such as autoscaling, support for docker containers and docker compose, HTTPS support and much more.
-
-> [!NOTE]
-> Using Azure Web App you will automatically get a domain in the form of `<appservice_name>.azurewebsites.net`
-
-Run this PowerShell script using the Azure CLI to create a Web App for Containers, using your subscription and the container image over HTTPS. Wait for the script to complete (approximately 25-30 minutes) before submitting the first request.
-
-```azurecli
-$subscription_name = "" # THe name of the subscription you want you resource to be created on.
-$resource_group_name = "" # The name of the resource group you want the AppServicePlan
- # and AppSerivce to be attached to.
-$resources_location = "" # This is the location you wish the AppServicePlan to be deployed to.
- # You can use the "az account list-locations -o table" command to
- # get the list of available locations and location code names.
-$appservice_plan_name = "" # This is the AppServicePlan name you wish to have.
-$appservice_name = "" # This is the AppService resource name you wish to have.
-$TEXT_ANALYTICS_RESOURCE_API_KEY = "" # This should be taken from the Language resource.
-$TEXT_ANALYTICS_RESOURCE_API_ENDPOINT = "" # This should be taken from the Language resource.
-$DOCKER_IMAGE_NAME = "mcr.microsoft.com/azure-cognitive-services/textanalytics/healthcare:latest"
-
-az login
-az account set -s $subscription_name
-az appservice plan create -n $appservice_plan_name -g $resource_group_name --is-linux -l $resources_location --sku P3V2
-az webapp create -g $resource_group_name -p $appservice_plan_name -n $appservice_name -i $DOCKER_IMAGE_NAME
-az webapp config appsettings set -g $resource_group_name -n $appservice_name --settings Eula=accept rai_terms=accept Billing=$TEXT_ANALYTICS_RESOURCE_API_ENDPOINT ApiKey=$TEXT_ANALYTICS_RESOURCE_API_KEY
-
-# Once deployment complete, the resource should be available at: https://<appservice_name>.azurewebsites.net
-```
-
-### Install the container using Azure Container Instance
-
-You can also use an Azure Container Instance (ACI) to make deployment easier. ACI is a resource that allows you to run Docker containers on-demand in a managed, serverless Azure environment.
-
-See [How to use Azure Container Instances](../../../containers/azure-container-instance-recipe.md) for steps on deploying an ACI resource using the Azure portal. You can also use the following PowerShell script with the Azure CLI, which creates an ACI in your subscription using the container image. Wait for the script to complete (approximately 25-30 minutes) before submitting the first request. Due to the limit on the maximum number of CPUs per ACI resource, do not select this option if you expect to submit more than 5 large documents (approximately 5000 characters each) per request.
-See the [ACI regional support](../../../../container-instances/container-instances-region-availability.md) article for availability information.
-
-> [!NOTE]
-> Azure Container Instances don't include HTTPS support for the built-in domains. If you need HTTPS, you will need to manually configure it, including creating a certificate and registering a domain. You can find instructions to do this with NGINX below.
-
-```azurecli
-$subscription_name = "" # The name of the subscription you want you resource to be created on.
-$resource_group_name = "" # The name of the resource group you want the AppServicePlan
- # and AppService to be attached to.
-$resources_location = "" # This is the location you wish the web app to be deployed to.
- # You can use the "az account list-locations -o table" command to
- # Get the list of available locations and location code names.
-$azure_container_instance_name = "" # This is the AzureContainerInstance name you wish to have.
-$TEXT_ANALYTICS_RESOURCE_API_KEY = "" # This should be taken from the Language resource.
-$TEXT_ANALYTICS_RESOURCE_API_ENDPOINT = "" # This should be taken from the Language resource.
-$DNS_LABEL = "" # This is the DNS label name you wish your ACI will have
-$DOCKER_IMAGE_NAME = "mcr.microsoft.com/azure-cognitive-services/textanalytics/healthcare:latest"
-
-az login
-az account set -s $subscription_name
-az container create --resource-group $resource_group_name --name $azure_container_instance_name --image $DOCKER_IMAGE_NAME --cpu 4 --memory 12 --port 5000 --dns-name-label $DNS_LABEL --environment-variables Eula=accept rai_terms=accept Billing=$TEXT_ANALYTICS_RESOURCE_API_ENDPOINT ApiKey=$TEXT_ANALYTICS_RESOURCE_API_KEY
-
-# Once deployment complete, the resource should be available at: http://<unique_dns_label>.<resource_group_region>.azurecontainer.io:5000
-```
-
-### Secure ACI connectivity
-
-By default, there is no security provided when using ACI with the container API. This is because containers typically run as part of a pod, which is protected from the outside by a network bridge. You can, however, deploy a front-facing component that keeps the container endpoint private. The following examples use [NGINX](https://www.nginx.com) as an ingress gateway to support HTTPS/SSL and client-certificate authentication.
-
-> [!NOTE]
-> NGINX is an open-source, high-performance HTTP server and proxy. An NGINX container can be used to terminate a TLS connection for a single container. More complex NGINX ingress-based TLS termination solutions are also possible.
-
-#### Set up NGINX as an ingress gateway
-
-NGINX uses [configuration files](https://docs.nginx.com/nginx/admin-guide/basic-functionality/managing-configuration-files/) to enable features at runtime. In order to enable TLS termination for another service, you must specify an SSL certificate to terminate the TLS connection and `proxy_pass` to specify an address for the service. A sample is provided below.
--
-> [!NOTE]
-> `ssl_certificate` expects a path to be specified within the NGINX container's local filesystem. The address specified for `proxy_pass` must be available from within the NGINX container's network.
-
-The NGINX container loads all of the `*.conf` files that are mounted under `/etc/nginx/conf.d/` into the HTTP configuration path.
-
-```nginx
-server {
- listen 80;
- return 301 https://$host$request_uri;
-}
-server {
- listen 443 ssl;
- # replace with .crt and .key paths
- ssl_certificate /cert/Local.crt;
- ssl_certificate_key /cert/Local.key;
-
- location / {
- proxy_pass http://cognitive-service:5000;
- proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
- proxy_set_header X-Real-IP $remote_addr;
- }
-}
-```
-
-#### Example Docker compose file
-
-The below example shows how a [docker compose](https://docs.docker.com/compose/reference/overview) file can be created to deploy NGINX and health containers:
-
-```yaml
-version: "3.7"
-
-services:
- cognitive-service:
- image: {IMAGE_ID}
- ports:
- - 5000:5000
- environment:
- - eula=accept
- - billing={ENDPOINT_URI}
- - apikey={API_KEY}
- volumes:
- # replace with path to logs folder
- - <path-to-logs-folder>:/output
- nginx:
- image: nginx
- ports:
- - 443:443
- volumes:
- # replace with paths for certs and conf folders
- - <path-to-certs-folder>:/cert
- - <path-to-conf-folder>:/etc/nginx/conf.d/
-```
-
-To initiate this Docker compose file, execute the following command from a console at the root level of the file:
-
-```bash
-docker-compose up
-```
-
-For more information, see NGINX's documentation on [NGINX SSL Termination](https://docs.nginx.com/nginx/admin-guide/security-controls/terminating-ssl-http/).
--
-## Query the container's prediction endpoint
-
-The container provides REST-based query prediction endpoint APIs.
-
-Use the host, `http://localhost:5000`, for container APIs.
---
-### Structure the API request for the container
-
-You can use Postman or the example cURL request below to submit a query to the container you deployed, replacing the `serverURL` variable with the appropriate value. Note that the version of the API in the URL for the container is different from the hosted API.
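-
-For reference, here is a minimal sketch against a locally running container. It reuses the container route shown earlier in this article and assumes an `example.json` file containing a documents payload.
-
-```bash
-# The container exposes the v3.1 health route rather than the hosted Language route.
-curl -X POST 'http://localhost:5000/text/analytics/v3.1/entities/health' \
---header 'Content-Type: application/json' \
---header 'accept: application/json' \
---data-binary @example.json
-```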
---
-## Run the container with client library support
-
-Starting with container version `3.0.017010001-onprem-amd64` (or if you use the `latest` container), you can run the Text Analytics for health container using the [client library](../quickstart.md). To do so, add the following parameter to the `docker run` command:
-
-`enablelro=true`
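-
-As a sketch, the earlier local run command with the flag appended (placeholders as before):
-
-```bash
-docker run --rm -it -p 5000:5000 --cpus 6 --memory 12g \
-mcr.microsoft.com/azure-cognitive-services/textanalytics/healthcare:<tag-name> \
-Eula=accept \
-rai_terms=accept \
-Billing={ENDPOINT_URI} \
-ApiKey={API_KEY} \
-enablelro=true
-```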
-
-Afterwards when you authenticate the client object, use the endpoint that your container is running on:
-
-`http://localhost:5000`
-
-For example, if you're using C# you would use the following code:
-
-```csharp
-var client = new TextAnalyticsClient(new Uri("http://localhost:5000"), new AzureKeyCredential("your-text-analytics-key"));
-```
-
-## Stop the container
--
-## Troubleshooting
-
-If you run the container with an output [mount](configure-containers.md#mount-settings) and logging enabled, the container generates log files that are helpful to troubleshoot issues that happen while starting or running the container.
--
-## Billing
-
-Text Analytics for health containers send billing information to Azure, using a _Language_ resource on your Azure account.
--
-## Summary
-
-In this article, you learned concepts and workflow for downloading, installing, and running Text Analytics for health containers. In summary:
-
-* Text Analytics for health provides a Linux container for Docker
-* Container images are downloaded from the Microsoft Container Registry (MCR).
-* Container images run in Docker.
-* You can use either the REST API or SDK to call operations in Text Analytics for health containers by specifying the host URI of the container.
-* You must specify billing information when instantiating a container.
-
-> [!IMPORTANT]
-> Cognitive Services containers are not licensed to run without being connected to Azure for metering. Customers need to enable the containers to communicate billing information with the metering service at all times. Cognitive Services containers do not send customer data (e.g. text that is being analyzed) to Microsoft.
-
-## Next steps
-
-* See [Configure containers](configure-containers.md) for configuration settings.
cognitive-services Language Support https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/text-analytics-for-health/language-support.md
- Title: Text Analytics for health language support-
-description: "This article explains which natural languages are supported by the Text Analytics for health."
------ Previously updated : 01/04/2023----
-# Language support for Text Analytics for health
-
-Use this article to learn which natural languages are supported by Text Analytics for health and its Docker container.
-
-## Hosted API Service
-
-The hosted API service supports the English language with model version 2022-03-01. Additional languages (Spanish, French, German, Italian, Portuguese, and Hebrew) are supported with model version 2022-08-15-preview.
-
-When structuring the API request, the relevant language tags must be added for these languages:
-
-```
-English - "en"
-Spanish - "es"
-French - "fr"
-German - "de"
-Italian - "it"
-Portuguese - "pt"
-Hebrew - "he"
-```
-
-The following JSON is an example of a request body for a Spanish document, using the preview model version:
-
-```json
-{
- "analysisInput": {
- "documents": [
- {
- "text": "El médico prescrió 200 mg de ibuprofeno.",
- "language": "es",
- "id": "1"
- }
- ]
- },
- "tasks": [
- {
- "taskName": "analyze 1",
- "kind": "Healthcare",
- "parameters":
- {
- "modelVersion": "2022-08-15-preview"
- }
- }
- ]
-}
-```
-
-## Docker container
-
-The Docker container supports the English language with model version 2022-03-01.
-Additional languages are also supported when using a Docker container to deploy the API: Spanish, French, German, Italian, Portuguese, and Hebrew. This functionality is currently in preview with model version 2022-08-15-preview.
-Full details for deploying the service in a container can be found [here](../text-analytics-for-health/how-to/use-containers.md).
-
-In order to download the new container images from the Microsoft public container registry, use the following [docker pull](https://docs.docker.com/engine/reference/commandline/pull/) command.
-
-For English, Spanish, Italian, French, German and Portuguese:
-
-```
-docker pull mcr.microsoft.com/azure-cognitive-services/textanalytics/healthcare:latin
-```
-
-For Hebrew:
-
-```
-docker pull mcr.microsoft.com/azure-cognitive-services/textanalytics/healthcare:semitic
-```
--
-When structuring the API request, the relevant language tags must be added for these languages:
-
-```
-English - "en"
-Spanish - "es"
-French - "fr"
-German - "de"
-Italian - "it"
-Portuguese - "pt"
-Hebrew - "he"
-```
-
-The following JSON is an example of the POST body of a Language request for a Spanish document:
-
-```json
-
-{
-    "analysisInput": {
-        "documents": [
-            {
-                "text": "El médico prescrió 200 mg de ibuprofeno.",
-                "language": "es",
-                "id": "1"
-            }
-        ]
-    },
-    "tasks": [
-        {
-            "taskName": "analyze 1",
-            "kind": "Healthcare"
-        }
-    ]
-}
-```
-## Supported model versions for each language
--
-| Language Code | Model Version | Featured Tag | Specific Tag |
-|:--|:-:|:-:|:-:|
-| `en` | 2022-03-01 | latest | 3.0.59413252-onprem-amd64 |
-| `en`, `es`, `it`, `fr`, `de`, `pt` | 2022-08-15-preview | latin | 3.0.60903415-latin-onprem-amd64 |
-| `he` | 2022-08-15-preview | semitic | 3.0.60903415-semitic-onprem-amd64 |
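-
-To pin a deployment to one of the specific tags above rather than a featured tag, pass the full tag to `docker pull`. For example, using the Latin-script tag from the table (substitute the tag that matches your language):
-
-```bash
-docker pull mcr.microsoft.com/azure-cognitive-services/textanalytics/healthcare:3.0.60903415-latin-onprem-amd64
-```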
------
-## See also
-
-[Text Analytics for health overview](overview.md)
cognitive-services Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/text-analytics-for-health/overview.md
- Title: What is the Text Analytics for health in Azure Cognitive Service for Language?-
-description: An overview of Text Analytics for health in Azure Cognitive Services, which helps you extract medical information from unstructured text, like clinical documents.
------ Previously updated : 01/06/2023----
-# What is Text Analytics for health?
--
-Text Analytics for health is one of the prebuilt features offered by [Azure Cognitive Service for Language](../overview.md). It is a cloud-based API service that applies machine-learning intelligence to extract and label relevant medical information from a variety of unstructured texts such as doctor's notes, discharge summaries, clinical documents, and electronic health records.
-
-This documentation contains the following types of articles:
-* The [**quickstart article**](quickstart.md) provides a short tutorial that guides you with making your first request to the service.
-* The [**how-to guides**](how-to/call-api.md) contain detailed instructions on how to make calls to the service using the hosted API or using the on-premises Docker container.
-* The [**conceptual articles**](concepts/health-entity-categories.md) provide in-depth information on each of the service's features, named entity recognition, relation extraction, entity linking, and assertion detection.
-
-## Text Analytics for health features
-
-Text Analytics for health performs four key functions: named entity recognition, relation extraction, entity linking, and assertion detection, all with a single API call.
--
-Text Analytics for health can receive unstructured text in English as part of its generally available offering. Additional languages such as German, French, Italian, Spanish, Portuguese, and Hebrew are currently supported in preview.
-
-Additionally, Text Analytics for health can return the processed output using the Fast Healthcare Interoperability Resources (FHIR) structure which enables the service's integration with other electronic health systems.
---
-> [!VIDEO https://learn.microsoft.com/Shows/AI-Show/Introducing-Text-Analytics-for-Health/player]
-
-## Usage scenarios
-
-Text Analytics for health can be used in multiple scenarios across a variety of industries.
-Some common customer motivations for using Text Analytics for health include:
-* Assisting and automating the processing of medical documents by proper medical coding to ensure accurate care and billing.
-* Increasing the efficiency of analyzing healthcare data to help drive the success of value-based care models similar to Medicare.
-* Minimizing healthcare provider effort by automating the aggregation of key patient data for trend and pattern monitoring.
-* Facilitating and supporting the adoption of HL7 standards for improved exchange, integration, sharing, retrieval, and delivery of electronic health information in all healthcare services.
-
-### Example use cases
-
-|Use case|Description|
-|--|--|
-|Extract insights and statistics|Identify medical entities such as symptoms, medications, diagnosis from clinical and research documents in order to extract insights and statistics for different patient cohorts.|
-|Develop predictive models using historic data|Power solutions for planning, decision support, risk analysis and more, based on prediction models created from historic data.|
-|Annotate and curate medical information|Support solutions for clinical data annotation and curation such as automating clinical coding and digitizing manually created data.|
-|Review and report medical information|Support solutions for reporting and flagging possible errors in medical information resulting from reviewal processes such as quality assurance.|
-|Assist with decision support|Enable solutions that provide humans with assistive information relating to patients' medical information for faster and more reliable decisions.|
-
-## Get started with Text Analytics for health
---
-## Input requirements and service limits
-
-Text Analytics for health is designed to receive unstructured text for analysis. For more information, see [data and service limits](../concepts/data-limits.md).
-
-Text Analytics for health works with a variety of input languages. For more information, see [language support](language-support.md).
---
-## Responsible use of AI
-
-An AI system includes the technology, the people who will use it, the people who will be affected by it, and the environment in which it is deployed. Read the [transparency note for Text Analytics for health](/legal/cognitive-services/language-service/transparency-note-health?context=/azure/cognitive-services/language-service/context/context) to learn about responsible AI use and deployment in your systems. You can also refer to the following articles for more information:
-
cognitive-services Quickstart https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/text-analytics-for-health/quickstart.md
- Title: "Quickstart: Use the Text Analytics for health REST API and client library"-
-description: Use this quickstart to start using Text Analytics for health.
------ Previously updated : 02/17/2023--
-keywords: text mining, health, text analytics for health
-zone_pivot_groups: programming-languages-text-analytics
--
-# Quickstart: Using Text Analytics for health client library and REST API
-
-This article contains Text Analytics for health quickstarts that help you use the supported client libraries (C#, Java, NodeJS, and Python) as well as the REST API.
----------------
-## Clean up resources
-
-If you want to clean up and remove a Cognitive Services subscription, you can delete the resource or resource group. Deleting the resource group also deletes any other resources associated with it.
-
-* [Portal](../../cognitive-services-apis-create-account.md#clean-up-resources)
-* [Azure CLI](../../cognitive-services-apis-create-account-cli.md#clean-up-resources)
---
-## Next steps
-
-* [How to call the hosted API](./how-to/call-api.md)
-* [How to use the service with Docker containers](./how-to/use-containers.md)
cognitive-services Power Automate https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/tutorials/power-automate.md
- Title: Use Language services in power automate-
-description: Learn how to use Azure Cognitive Service for Language in power automate, without writing code.
------ Previously updated : 03/02/2023-----
-# Use the Language service in Power Automate
-
-You can use [Power Automate](/power-automate/getting-started) flows to automate repetitive tasks and bring efficiency to your organization. Using Azure Cognitive Service for Language, you can automate tasks like:
-* Send incoming emails to different departments based on their contents.
-* Analyze the sentiment of new tweets.
-* Extract entities from incoming documents.
-* Summarize meetings.
-* Remove personal data from files before saving them.
-
-In this tutorial, you'll create a Power Automate flow to extract entities found in text, using [Named entity recognition](../named-entity-recognition/overview.md).
-
-## Prerequisites
-* Azure subscription - [Create one for free](https://azure.microsoft.com/free/cognitive-services)
-* Once you have your Azure subscription, <a href="https://portal.azure.com/#create/Microsoft.CognitiveServicesTextAnalytics" title="Create a Language resource" target="_blank">create a Language resource </a> in the Azure portal to get your key and endpoint. After it deploys, select **Go to resource**.
- * You will need the key and endpoint from the resource you create to connect your application to the API. You'll paste your key and endpoint into the code below later in the quickstart.
- * You can use the free pricing tier (`Free F0`) to try the service, and upgrade later to a paid tier for production.
-* Optional for this tutorial: A trained model is required if you're using a custom capability such as [custom NER](../custom-named-entity-recognition/overview.md), [custom text classification](../custom-text-classification/overview.md), or [conversational language understanding](../conversational-language-understanding/overview.md).
-
-## Create a Power Automate flow
-
-For this tutorial, you will create a flow that extracts named entities from text.
-
-1. [Sign in to Power Automate](https://make.powerautomate.com/).
-
-1. From the left side menu, select **My flows**. Then select **New flow** > **Automated cloud flow**.
-
- :::image type="content" source="../media/create-flow.png" alt-text="A screenshot of the menu for creating an automated cloud flow." lightbox="../media/create-flow.png":::
-
-1. Enter a name for your flow such as `LanguageFlow`. Then select **Skip** to continue without choosing a trigger.
-
- :::image type="content" source="../media/language-flow.png" alt-text="A screenshot of automated cloud flow screen." lightbox="../media/language-flow.png":::
-
-1. Under **Triggers** select **Manually trigger a flow**.
-
- :::image type="content" source="../media/trigger-flow.png" alt-text="A screenshot of how to manually trigger a flow." lightbox="../media/trigger-flow.png":::
-
-1. Select **+ New step** to begin adding a Language service connector.
-
-1. Under **Choose an operation** search for **Azure Language**. Then select **Azure Cognitive Service for Language**. This will narrow down the list of actions to only those that are available for Language.
-
- :::image type="content" source="../media/language-connector.png" alt-text="A screenshot of An Azure language connector." lightbox="../media/language-connector.png":::
-
-1. Under **Actions** search for **Named Entity Recognition**, and select the connector.
-
- :::image type="content" source="../media/entity-connector.png" alt-text="A screenshot of a named entity recognition connector." lightbox="../media/entity-connector.png":::
-
-1. Get the endpoint and key for your Language resource, which will be used for authentication. You can find your key and endpoint by navigating to your resource in the [Azure portal](https://portal.azure.com), and selecting **Keys and Endpoint** from the left side menu.
-
- :::image type="content" source="../media/azure-portal-resource-credentials.png" alt-text="A screenshot of A language resource key and endpoint in the Azure portal." lightbox="../media/azure-portal-resource-credentials.png":::
-
-1. Once you have your key and endpoint, add it to the connector in Power Automate.
-
- :::image type="content" source="../media/language-auth.png" alt-text="A screenshot of adding the language key and endpoint to the Power Automate flow." lightbox="../media/language-auth.png":::
-
-1. Add the data in the connector
-
- > [!NOTE]
- > You will need deployment name and project name if you are using custom language capability.
-
-1. From the top navigation menu, save the flow and select **Test the flow**. In the window that appears, select **Test**.
-
-1. After the flow runs, you will see the response in the **outputs** field.
-
- :::image type="content" source="../media/response-connector.png" alt-text="A screenshot of flow response." lightbox="../media/response-connector.png":::
-
-## Next steps
-
-* [Triage incoming emails with custom text classification](../custom-text-classification/tutorials/triage-email.md)
-* [Available Language service connectors](/connectors/cognitiveservicestextanalytics)
--
cognitive-services Use Kubernetes Service https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/tutorials/use-kubernetes-service.md
- Title: Deploy a key phrase extraction container to Azure Kubernetes Service-
-description: Deploy a key phrase extraction container image to Azure Kubernetes Service, and test it in a web browser.
------ Previously updated : 05/27/2022----
-# Deploy a key phrase extraction container to Azure Kubernetes Service
-
-Learn how to deploy a [key phrase extraction Docker container](../key-phrase-extraction/how-to/use-containers.md) image to Azure Kubernetes Service (AKS). This procedure shows how to create a Language resource, how to associate a container image, and how to exercise this orchestration of the two from a browser. Using containers can shift your attention away from managing infrastructure and toward application development. While this article uses the key phrase extraction container as an example, you can use this process for other containers offered by Azure Cognitive Service for Language.
-
-## Prerequisites
-
-This procedure requires several tools that must be installed and run locally. Don't use Azure Cloud Shell. You need the following:
-
-* An Azure subscription. If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/cognitive-services) before you begin.
-* A text editor, for example, [Visual Studio Code](https://code.visualstudio.com/download).
-* The [Azure CLI](/cli/azure/install-azure-cli) installed.
-* The [Kubernetes CLI](https://kubernetes.io/docs/tasks/tools/install-kubectl/) installed.
-* An Azure resource with the correct pricing tier. Not all pricing tiers work with this container:
- * **Azure Language** resource with F0 or standard pricing tiers only.
- * **Azure Cognitive Services** resource with the S0 pricing tier.
----
-## Deploy the Key Phrase Extraction container to an AKS cluster
-
-1. Open the Azure CLI, and sign in to Azure.
-
- ```azurecli
- az login
- ```
-
-1. Sign in to the AKS cluster. Replace `your-cluster-name` and `your-resource-group` with the appropriate values.
-
- ```azurecli
-    az aks get-credentials -n your-cluster-name -g your-resource-group
- ```
-
- After this command runs, it reports a message similar to the following:
-
- ```output
- Merged "your-cluster-name" as current context in /home/username/.kube/config
- ```
-
- > [!WARNING]
- > If you have multiple subscriptions available to you on your Azure account and the `az aks get-credentials` command returns with an error, a common problem is that you're using the wrong subscription. Set the context of your Azure CLI session to use the same subscription that you created the resources with and try again.
- > ```azurecli
- > az account set -s subscription-id
- > ```
-
-1. Open the text editor of choice. This example uses Visual Studio Code.
-
- ```console
- code .
- ```
-
-1. Within the text editor, create a new file named *keyphrase.yaml*, and paste the following YAML into it. Be sure to replace `billing/value` and `apikey/value` with your own information.
-
- ```yaml
-    apiVersion: apps/v1
-    kind: Deployment
-    metadata:
-      name: keyphrase
-    spec:
-      selector:
-        matchLabels:
-          app: keyphrase-app
-      template:
- metadata:
- labels:
- app: keyphrase-app
- spec:
- containers:
- - name: keyphrase
- image: mcr.microsoft.com/azure-cognitive-services/keyphrase
- ports:
- - containerPort: 5000
- resources:
- requests:
- memory: 2Gi
- cpu: 1
- limits:
- memory: 4Gi
- cpu: 1
- env:
- - name: EULA
- value: "accept"
- - name: billing
- value: # {ENDPOINT_URI}
- - name: apikey
- value: # {API_KEY}
-
-    ---
-
- apiVersion: v1
- kind: Service
- metadata:
- name: keyphrase
- spec:
- type: LoadBalancer
- ports:
- - port: 5000
- selector:
- app: keyphrase-app
- ```
-
- > [!IMPORTANT]
- > Remember to remove the key from your code when you're done, and never post it publicly. For production, use a secure way of storing and accessing your credentials like [Azure Key Vault](../../../key-vault/general/overview.md). See the Cognitive Services [security](../../cognitive-services-security.md) article for more information.
-
-1. Save the file, and close the text editor.
-1. Run the Kubernetes `apply` command with the *keyphrase.yaml* file as its target:
-
- ```console
- kubectl apply -f keyphrase.yaml
- ```
-
- After the command successfully applies the deployment configuration, a message appears similar to the following output:
-
- ```output
- deployment.apps "keyphrase" created
- service "keyphrase" created
- ```
-1. Verify that the pod was deployed:
-
- ```console
- kubectl get pods
- ```
-
- The output for the running status of the pod:
-
- ```output
- NAME READY STATUS RESTARTS AGE
- keyphrase-5c9ccdf575-mf6k5 1/1 Running 0 1m
- ```
-
-1. Verify that the service is available, and get the IP address.
-
- ```console
- kubectl get services
- ```
-
- The output for the running status of the *keyphrase* service in the pod:
-
- ```output
- NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
- kubernetes ClusterIP 10.0.0.1 <none> 443/TCP 2m
- keyphrase LoadBalancer 10.0.100.64 168.61.156.180 5000:31234/TCP 2m
- ```
-
-## Verify the Key Phrase Extraction container instance
-
-1. Get the external IP address of the *keyphrase* service from the output of `kubectl get services` in the previous section.
-1. Open a new browser tab, and enter the IP address on port 5000. For example, enter `http://<IP-address>:5000` (such as `http://55.55.55.55:5000`). The container's home page is displayed, which lets you know the container is running.
-
- ![View the container home page to verify that it's running](../media/swagger-container-documentation.png)
-
-1. Select the **Service API Description** link to go to the container's Swagger page.
-
-1. Choose any of the **POST** APIs, and select **Try it out**. The parameters are displayed, including this example input:
-
- ```json
- {
- "documents": [
- {
- "id": "1",
- "text": "Hello world"
- },
- {
- "id": "2",
- "text": "Bonjour tout le monde"
- },
- {
- "id": "3",
- "text": "La carretera estaba atascada. Había mucho tráfico el día de ayer."
- },
- {
- "id": "4",
- "text": ":) :( :D"
- }
- ]
- }
- ```
-
-1. Replace the input with the following JSON content:
-
- ```json
- {
- "documents": [
- {
- "language": "en",
- "id": "7",
- "text": "I was fortunate to attend the KubeCon Conference in Barcelona, it is one of the best conferences I have ever attended. Great people, great sessions and I thoroughly enjoyed it!"
- }
- ]
- }
- ```
-
-1. Set **showStats** to `true`.
-
-1. Select **Execute** to extract key phrases from the text.
-
-    The model that's packaged in the container extracts key phrases from the submitted text.
-
-    The JSON response that's returned includes the key phrases for the updated text input:
-
- ```json
- {
- "documents": [
- {
- "id": "7",
- "keyPhrases": [
- "Great people",
- "great sessions",
- "KubeCon Conference",
- "Barcelona",
- "best conferences"
- ],
- "statistics": {
- "charactersCount": 176,
- "transactionsCount": 1
- }
- }
- ],
- "errors": [],
- "statistics": {
- "documentsCount": 1,
- "validDocumentsCount": 1,
- "erroneousDocumentsCount": 0,
- "transactionsCount": 1
- }
- }
- ```
-
-We can now correlate the document `id` of the response payload's JSON data to the original request payload document `id`. The resulting document has a `keyPhrases` array, which contains the list of key phrases that have been extracted from the corresponding input document. Additionally, there are various statistics such as `charactersCount` and `transactionsCount` for each resulting document.
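-
-To call the deployed container from code instead of the Swagger page, you can use a minimal Python sketch like the following. It isn't part of the original walkthrough: it assumes the container exposes the key phrase route at `/text/analytics/v3.1/keyPhrases` (confirm the exact path on the container's Swagger page) and that you replace the IP address with the external IP of the *keyphrase* service.
-
-```python
-# Minimal sketch (assumptions noted above): post a document to the container
-# and print the key phrases extracted for each document id.
-import requests
-
-external_ip = "55.55.55.55"  # replace with the EXTERNAL-IP from `kubectl get services`
-url = f"http://{external_ip}:5000/text/analytics/v3.1/keyPhrases"
-
-payload = {
-    "documents": [
-        {
-            "language": "en",
-            "id": "7",
-            "text": "I was fortunate to attend the KubeCon Conference in Barcelona."
-        }
-    ]
-}
-
-response = requests.post(url, json=payload, params={"showStats": "true"})
-response.raise_for_status()
-
-for document in response.json()["documents"]:
-    print(document["id"], document["keyPhrases"])
-```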
-
-## Next steps
-
-* Use more [Cognitive Services containers](../../cognitive-services-container-support.md)
-* [Key phrase extraction overview](../overview.md)
cognitive-services Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/whats-new.md
- Title: What's new in Azure Cognitive Service for Language?-
-description: Find out about new releases and features for the Azure Cognitive Service for Language.
------ Previously updated : 04/14/2023----
-# What's new in Azure Cognitive Service for Language?
-
-Azure Cognitive Service for Language is updated on an ongoing basis. To stay up-to-date with recent developments, this article provides you with information about new releases and features.
-
-## May 2023
-
-* [Custom Named Entity Recognition (NER) Docker containers](./custom-named-entity-recognition/how-to/use-containers.md) are now available for on-premises deployment.
-
-## April 2023
-
-* [Custom Text analytics for health](./custom-text-analytics-for-health/overview.md) is available in public preview, which enables you to build custom AI models to extract healthcare-specific entities from unstructured text.
-* You can now use Azure OpenAI to automatically label or generate data during authoring. Learn more with the links below.
- * Auto-label your documents in [Custom text classification](./custom-text-classification/how-to/use-autolabeling.md) or [Custom named entity recognition](./custom-named-entity-recognition/how-to/use-autolabeling.md).
- * Generate suggested utterances in [Conversational language understanding](./conversational-language-understanding/how-to/tag-utterances.md#suggest-utterances-with-azure-openai).
-* The latest model version (2022-10-01) for Language Detection now supports 6 more international languages and 12 Romanized Indic languages.
-
-## March 2023
-
-* New model version ('2023-01-01-preview') for Personally Identifiable Information (PII) detection with quality updates and new [language support](./personally-identifiable-information/language-support.md)
-
-* New versions of the text analysis client library are available in preview:
-
- ### [C#](#tab/csharp)
-
- [**Package (NuGet)**](https://www.nuget.org/packages/Azure.AI.TextAnalytics/5.3.0-beta.2)
-
- [**Changelog/Release History**](https://github.com/Azure/azure-sdk-for-net/blob/Azure.AI.TextAnalytics_5.3.0-beta.2/sdk/textanalytics/Azure.AI.TextAnalytics/CHANGELOG.md)
-
- [**ReadMe**](https://github.com/Azure/azure-sdk-for-net/blob/Azure.AI.TextAnalytics_5.3.0-beta.2/sdk/textanalytics/Azure.AI.TextAnalytics/README.md)
-
- [**Samples**](https://github.com/Azure/azure-sdk-for-net/blob/Azure.AI.TextAnalytics_5.3.0-beta.2/sdk/textanalytics/Azure.AI.TextAnalytics/samples/README.md)
-
- ### [Java](#tab/java)
-
- [**Package (Maven)**](https://mvnrepository.com/artifact/com.azure/azure-ai-textanalytics/5.3.0-beta.2)
-
- [**Changelog/Release History**](https://github.com/Azure/azure-sdk-for-jav#530-beta2-2023-03-07)
-
- [**ReadMe**](https://github.com/Azure/azure-sdk-for-jav)
-
- [**Samples**](https://github.com/Azure/azure-sdk-for-java/tree/main/sdk/textanalytics/azure-ai-textanalytics/src/samples)
-
- ### [JavaScript](#tab/javascript)
-
- [**Package (npm)**](https://www.npmjs.com/package/@azure/ai-language-text/v/1.1.0-beta.2)
-
- [**Changelog/Release History**](https://github.com/Azure/azure-sdk-for-js/blob/main/sdk/cognitivelanguage/ai-language-text/CHANGELOG.md)
-
- [**ReadMe**](https://github.com/Azure/azure-sdk-for-js/tree/main/sdk/cognitivelanguage/ai-language-text)
-
- [**Samples**](https://github.com/Azure/azure-sdk-for-js/tree/main/sdk/cognitivelanguage/ai-language-text/samples/v1-beta)
-
- ### [Python](#tab/python)
-
- [**Package (PyPi)**](https://pypi.org/project/azure-ai-textanalytics/5.3.0b2/)
-
- [**Changelog/Release History**](https://github.com/Azure/azure-sdk-for-python/blob/azure-ai-textanalytics_5.3.0b2/sdk/textanalytics/azure-ai-textanalytics/CHANGELOG.md)
-
- [**ReadMe**](https://github.com/Azure/azure-sdk-for-python/blob/azure-ai-textanalytics_5.3.0b2/sdk/textanalytics/azure-ai-textanalytics/README.md)
-
- [**Samples**](https://github.com/Azure/azure-sdk-for-python/tree/azure-ai-textanalytics_5.3.0b2/sdk/textanalytics/azure-ai-textanalytics/samples)
-
-
-
-## February 2023
-
-* Conversational language understanding and orchestration workflow is now available in the following regions in the sovereign cloud for China:
- * China East 2 (Authoring and Prediction)
- * China North 2 (Prediction)
-* New model evaluation updates for Conversational language understanding and Orchestration workflow.
-* New model version ('2023-01-01-preview') for Text Analytics for health featuring new [entity categories](./text-analytics-for-health/concepts/health-entity-categories.md) for social determinants of health.
-* New model version ('2023-02-01-preview') for named entity recognition features improved accuracy and more [language support](./named-entity-recognition/language-support.md) with up to 79 languages.
-
-## December 2022
-
-* New version (v5.2.0-beta.1) of the text analysis client library is available in preview for C#/.NET:
- * [**Package (NuGet)**](https://www.nuget.org/packages/Azure.AI.TextAnalytics/5.3.0-beta.1)
- * [**Changelog/Release History**](https://github.com/Azure/azure-sdk-for-net/blob/main/sdk/textanalytics/Azure.AI.TextAnalytics/CHANGELOG.md)
- * [**ReadMe**](https://github.com/Azure/azure-sdk-for-net/blob/main/sdk/textanalytics/Azure.AI.TextAnalytics/README.md)
- * [**Samples**](https://github.com/Azure/azure-sdk-for-net/tree/main/sdk/textanalytics/Azure.AI.TextAnalytics/samples)
-* New model version (`2022-10-01`) released for Language Detection. The new model version comes with improvements in language detection quality on short texts.
-
-## November 2022
-
-* Expanded language support for:
- * [Opinion mining](./sentiment-opinion-mining/language-support.md)
-* Conversational PII now supports up to 40,000 characters as document size.
-* New versions of the text analysis client library are available in preview:
-
- * Java
- * [**Package (Maven)**](https://mvnrepository.com/artifact/com.azure/azure-ai-textanalytics/5.3.0-beta.1)
- * [**Changelog/Release History**](https://github.com/Azure/azure-sdk-for-jav)
- * [**ReadMe**](https://github.com/Azure/azure-sdk-for-jav)
- * [**Samples**](https://github.com/Azure/azure-sdk-for-java/tree/azure-ai-textanalytics_5.3.0-beta.1/sdk/textanalytics/azure-ai-textanalytics/src/samples)
-
- * JavaScript
- * [**Package (npm)**](https://www.npmjs.com/package/@azure/ai-language-text/v/1.1.0-beta.1)
- * [**Changelog/Release History**](https://github.com/Azure/azure-sdk-for-js/blob/main/sdk/cognitivelanguage/ai-language-text/CHANGELOG.md)
- * [**ReadMe**](https://github.com/Azure/azure-sdk-for-js/blob/main/sdk/cognitivelanguage/ai-language-text/README.md)
- * [**Samples**](https://github.com/Azure/azure-sdk-for-js/tree/main/sdk/cognitivelanguage/ai-language-text/samples/v1)
-
- * Python
- * [**Package (PyPi)**](https://pypi.org/project/azure-ai-textanalytics/5.3.0b1/)
- * [**Changelog/Release History**](https://github.com/Azure/azure-sdk-for-python/blob/azure-ai-textanalytics_5.3.0b1/sdk/textanalytics/azure-ai-textanalytics/CHANGELOG.md)
- * [**ReadMe**](https://github.com/Azure/azure-sdk-for-python/blob/azure-ai-textanalytics_5.3.0b1/sdk/textanalytics/azure-ai-textanalytics/README.md)
- * [**Samples**](https://github.com/Azure/azure-sdk-for-python/tree/azure-ai-textanalytics_5.3.0b1/sdk/textanalytics/azure-ai-textanalytics/samples)
-
-## October 2022
-
-* The summarization feature now has the following capabilities:
- * [Document summarization](./summarization/overview.md):
- * Abstractive summarization, which generates a summary of a document that may not use the same words as those in the document, but captures the main idea.
-  * [Conversation summarization](./summarization/overview.md?tabs=conversation-summarization)
- * Chapter title summarization, which returns suggested chapter titles of input conversations.
- * Narrative summarization, which returns call notes, meeting notes or chat summaries of input conversations.
-* Expanded language support for:
- * [Sentiment analysis](./sentiment-opinion-mining/language-support.md)
- * [Key phrase extraction](./key-phrase-extraction/language-support.md)
- * [Named entity recognition](./named-entity-recognition/language-support.md)
- * [Text Analytics for health](./text-analytics-for-health/language-support.md)
-* [Multi-region deployment](./concepts/custom-features/multi-region-deployment.md) and [project asset versioning](./concepts/custom-features/project-versioning.md) for:
- * [Conversational language understanding](./conversational-language-understanding/overview.md)
- * [Orchestration workflow](./orchestration-workflow/overview.md)
- * [Custom text classification](./custom-text-classification/overview.md)
- * [Custom named entity recognition](./custom-named-entity-recognition/overview.md)
-* [Regular expressions](./conversational-language-understanding/concepts/entity-components.md#regex-component) in conversational language understanding and [required components](./conversational-language-understanding/concepts/entity-components.md#required-components), offering an additional ability to influence entity predictions.
-* [Entity resolution](./named-entity-recognition/concepts/entity-resolutions.md) in named entity recognition
-* New region support for:
- * [Conversational language understanding](./conversational-language-understanding/service-limits.md#regional-availability)
- * [Orchestration workflow](./orchestration-workflow/service-limits.md#regional-availability)
- * [Custom text classification](./custom-text-classification/service-limits.md#regional-availability)
- * [Custom named entity recognition](./custom-named-entity-recognition/service-limits.md#regional-availability)
-* Document type as an input supported for [Text Analytics for health](./text-analytics-for-health/how-to/call-api.md) FHIR requests
-
-## September 2022
-
-* [Conversational language understanding](./conversational-language-understanding/overview.md) is available in the following regions:
- * Central India
- * Switzerland North
- * West US 2
-* Text Analytics for Health now [supports additional languages](./text-analytics-for-health/language-support.md) in preview: Spanish, French, German, Italian, Portuguese, and Hebrew. These languages are available when using a docker container to deploy the API service.
-* The Azure.AI.TextAnalytics client library v5.2.0 is generally available and ready for use in production applications. For more information on Language service client libraries, see the [**Developer overview**](./concepts/developer-guide.md).
- * Java
- * [**Package (Maven)**](https://mvnrepository.com/artifact/com.azure/azure-ai-textanalytics/5.2.0)
- * [**Changelog/Release History**](https://github.com/Azure/azure-sdk-for-jav)
- * [**ReadMe**](https://github.com/Azure/azure-sdk-for-jav)
- * [**Samples**](https://github.com/Azure/azure-sdk-for-java/tree/main/sdk/textanalytics/azure-ai-textanalytics/src/samples)
- * Python
- * [**Package (PyPi)**](https://pypi.org/project/azure-ai-textanalytics/5.2.0/)
- * [**Changelog/Release History**](https://github.com/Azure/azure-sdk-for-python/blob/main/sdk/textanalytics/azure-ai-textanalytics/CHANGELOG.md)
- * [**ReadMe**](https://github.com/Azure/azure-sdk-for-python/blob/main/sdk/textanalytics/azure-ai-textanalytics/README.md)
- * [**Samples**](https://github.com/Azure/azure-sdk-for-python/tree/main/sdk/textanalytics/azure-ai-textanalytics/samples)
- * C#/.NET
- * [**Package (NuGet)**](https://www.nuget.org/packages/Azure.AI.TextAnalytics/5.2.0)
- * [**Changelog/Release History**](https://github.com/Azure/azure-sdk-for-net/blob/main/sdk/textanalytics/Azure.AI.TextAnalytics/CHANGELOG.md)
- * [**ReadMe**](https://github.com/Azure/azure-sdk-for-net/blob/main/sdk/textanalytics/Azure.AI.TextAnalytics/README.md)
- * [**Samples**](https://github.com/Azure/azure-sdk-for-net/tree/main/sdk/textanalytics/Azure.AI.TextAnalytics/samples)
-
-## August 2022
-
-* [Role-based access control](./concepts/role-based-access-control.md) for the Language service.
-
-## July 2022
-
-* New AI models for [sentiment analysis](./sentiment-opinion-mining/overview.md) and [key phrase extraction](./key-phrase-extraction/overview.md) based on [z-code models](https://www.microsoft.com/research/project/project-zcode/), providing:
- * Performance and quality improvements for the following 11 [languages](./sentiment-opinion-mining/language-support.md) supported by sentiment analysis: `ar`, `da`, `el`, `fi`, `hi`, `nl`, `no`, `pl`, `ru`, `sv`, `tr`
- * Performance and quality improvements for the following 20 [languages](./key-phrase-extraction/language-support.md) supported by key phrase extraction: `af`, `bg`, `ca`, `hr`, `da`, `nl`, `et`, `fi`, `el`, `hu`, `id`, `lv`, `no`, `pl`, `ro`, `ru`, `sk`, `sl`, `sv`, `tr`
-
-* Conversational PII is now available in all Azure regions supported by the Language service.
-
-* A new version of the Language API (`2022-07-01-preview`) has been released. It provides:
- * [Automatic language detection](./concepts/use-asynchronously.md#automatic-language-detection) for asynchronous tasks.
- * Text Analytics for health confidence scores are now returned in relations.
-
- To use this version in your REST API calls, use the following URL:
-
- ```http
- <your-language-resource-endpoint>/language/:analyze-text?api-version=2022-07-01-preview
- ```
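-
-    For illustration, here's a minimal Python sketch (not part of the original announcement) that calls this endpoint for language detection. It assumes your resource endpoint and key are stored in the `LANGUAGE_ENDPOINT` and `LANGUAGE_KEY` environment variables.
-
-    ```python
-    # Minimal sketch, assuming LANGUAGE_ENDPOINT and LANGUAGE_KEY environment variables.
-    import os
-    import requests
-
-    url = f"{os.environ['LANGUAGE_ENDPOINT']}/language/:analyze-text"
-    headers = {"Ocp-Apim-Subscription-Key": os.environ["LANGUAGE_KEY"]}
-    body = {
-        "kind": "LanguageDetection",
-        "analysisInput": {
-            "documents": [{"id": "1", "text": "Bonjour tout le monde"}]
-        }
-    }
-
-    response = requests.post(
-        url,
-        params={"api-version": "2022-07-01-preview"},
-        headers=headers,
-        json=body,
-    )
-    print(response.json())
-    ```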
-
-## June 2022
-* v1.0 client libraries for [conversational language understanding](./conversational-language-understanding/how-to/call-api.md?tabs=azure-sdk#send-a-conversational-language-understanding-request) and [orchestration workflow](./orchestration-workflow/how-to/call-api.md?tabs=azure-sdk#send-an-orchestration-workflow-request) are Generally Available for the following languages:
- * [C#](https://github.com/Azure/azure-sdk-for-net/tree/Azure.AI.Language.Conversations_1.0.0/sdk/cognitivelanguage/Azure.AI.Language.Conversations)
- * [Python](https://github.com/Azure/azure-sdk-for-python/tree/azure-ai-language-conversations_1.0.0/sdk/cognitivelanguage/azure-ai-language-conversations)
-* v1.1.0b1 client library for [conversation summarization](summarization/quickstart.md?tabs=conversation-summarization&pivots=programming-language-python) is available as a preview for:
- * [Python](https://github.com/Azure/azure-sdk-for-python/blob/azure-ai-language-conversations_1.1.0b1/sdk/cognitivelanguage/azure-ai-language-conversations/samples/README.md)
-* There is a new endpoint URL and request format for making REST API calls to prebuilt Language service features. See the following quickstart guides and reference documentation for information on structuring your API calls. All text analytics 3.2-preview.2 API users can begin migrating their workloads to this new endpoint.
- * [Entity linking](./entity-linking/quickstart.md?pivots=rest-api)
- * [Language detection](./language-detection/quickstart.md?pivots=rest-api)
- * [Key phrase extraction](./key-phrase-extraction/quickstart.md?pivots=rest-api)
- * [Named entity recognition](./named-entity-recognition/quickstart.md?pivots=rest-api)
- * [PII detection](./personally-identifiable-information/quickstart.md?pivots=rest-api)
- * [Sentiment analysis and opinion mining](./sentiment-opinion-mining/quickstart.md?pivots=rest-api)
- * [Text analytics for health](./text-analytics-for-health/quickstart.md?pivots=rest-api)
--
-## May 2022
-
-* PII detection for conversations.
-* Rebranded Text Summarization to Document summarization.
-* Conversation summarization is now available in public preview.
-
-* The following features are now Generally Available (GA):
- * Custom text classification
- * Custom Named Entity Recognition (NER)
- * Conversational language understanding
- * Orchestration workflow
-
-* The following updates for custom text classification, custom Named Entity Recognition (NER), conversational language understanding, and orchestration workflow:
- * Data splitting controls.
- * Ability to cancel training jobs.
- * Custom deployments can be named. You can have up to 10 deployments.
- * Ability to swap deployments.
- * Auto labeling (preview) for custom named entity recognition
- * Enterprise readiness support
- * Training modes for conversational language understanding
- * Updated service limits
- * Ability to use free (F0) tier for Language resources
- * Expanded regional availability
- * Updated model life cycle to add training configuration versions
---
-## April 2022
-
-* Fast Healthcare Interoperability Resources (FHIR) support is available in the [Language REST API preview](text-analytics-for-health/quickstart.md?pivots=rest-api&tabs=language) for Text Analytics for health.
-
-## March 2022
-
-* Expanded language support for:
- * [Custom text classification](custom-classification/language-support.md)
- * [Custom Named Entity Recognition (NER)](custom-named-entity-recognition/language-support.md)
- * [Conversational language understanding](conversational-language-understanding/language-support.md)
-
-## February 2022
-
-* Model improvements for the latest model version of [text summarization](summarization/overview.md).
-
-* Model 2021-10-01 is Generally Available (GA) for [Sentiment Analysis and Opinion Mining](sentiment-opinion-mining/overview.md), featuring enhanced modeling for emojis and better accuracy across all supported languages.
-
-* [Question Answering](question-answering/overview.md): Active learning v2 incorporates better clustering logic, providing improved accuracy of suggestions. It considers user actions when suggestions are accepted or rejected to avoid duplicate suggestions and improve query suggestions.
-
-## December 2021
-
-* The version 3.1-preview.x REST endpoints and 5.1.0-beta.x client library have been retired. Please upgrade to the generally available version of the API (v3.1). If you're using the client libraries, use package version 5.1.0 or higher. See the [migration guide](./concepts/migrate-language-service-latest.md) for details.
-
-## November 2021
-
-* Based on ongoing customer feedback, we have increased the character limit per document for Text Analytics for health from 5,120 to 30,720.
-
-* Azure Cognitive Service for Language release, with support for:
-
- * [Question Answering (now Generally Available)](question-answering/overview.md)
- * [Sentiment Analysis and opinion mining](sentiment-opinion-mining/overview.md)
- * [Key Phrase Extraction](key-phrase-extraction/overview.md)
- * [Named Entity Recognition (NER), Personally Identifying Information (PII)](named-entity-recognition/overview.md)
- * [Language Detection](language-detection/overview.md)
- * [Text Analytics for health](text-analytics-for-health/overview.md)
- * [Text summarization preview](summarization/overview.md)
- * [Custom Named Entity Recognition (Custom NER) preview](custom-named-entity-recognition/overview.md)
- * [Custom Text Classification preview](custom-classification/overview.md)
- * [Conversational Language Understanding preview](conversational-language-understanding/overview.md)
-
-* Preview model version `2021-10-01-preview` for [Sentiment Analysis and Opinion mining](sentiment-opinion-mining/overview.md), which provides:
-
- * Improved prediction quality.
- * [Additional language support](sentiment-opinion-mining/language-support.md?tabs=sentiment-analysis) for the opinion mining feature.
- * For more information, see the [project z-code site](https://www.microsoft.com/research/project/project-zcode/).
- * To use this [model version](sentiment-opinion-mining/how-to/call-api.md#specify-the-sentiment-analysis-model), you must specify it in your API calls, using the model version parameter.
-
-* SDK support for sending requests to custom models:
-
- * Custom Named Entity Recognition
- * Custom text classification
- * Custom language understanding
-
-## Next steps
-
-* See the [previous updates](./concepts/previous-updates.md) article for service updates not listed here.
cognitive-services Language Support https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-support.md
- Title: Language support-
-description: Azure Cognitive Services enables you to build applications that see, hear, speak with, and understand your users. Between these services, more than three dozen languages are supported, allowing users to communicate with your application in natural ways.
----- Previously updated : 10/28/2021----
-# Natural language support for Azure Cognitive Services
-
-Azure Cognitive Services enable you to build applications that see, hear, speak with, and understand your users. Between these services, more than three dozen languages are supported, allowing users to communicate with your application in natural ways. Use the links below to view language availability by service.
-
-These Cognitive Services are language agnostic and don't have limitations based on human language.
-
-* [Anomaly Detector](./anomaly-detector/index.yml)
-* [Custom Vision](./custom-vision-service/index.yml)
-* [Face](./computer-vision/index-identity.yml)
-* [Personalizer](./personalizer/index.yml)
-
-## Vision
-
-* [Computer Vision](./computer-vision/language-support.md)
-* [Ink Recognizer (Preview)](/previous-versions/azure/cognitive-services/Ink-Recognizer/language-support)
-* [Video Indexer](../azure-video-indexer/language-identification-model.md#guidelines-and-limitations)
-
-## Language
-
-* [Language Understanding (LUIS)](./luis/luis-language-support.md)
-* [QnA Maker](./qnamaker/overview/language-support.md)
-* [Language service](./text-analytics/language-support.md)
-* [Translator](./translator/language-support.md)
-
-## Speech
-
-* [Speech Service: Speech to text](./speech-service/language-support.md?tabs=stt)
-* [Speech Service: Text to speech](./speech-service/language-support.md?tabs=tts)
-* [Speech Service: Speech Translation](./speech-service/language-support.md?tabs=speech-translation)
-
-## Decision
-
-* [Content Moderator](./content-moderator/language-support.md)
-
-## See also
-
-* [What are the Cognitive Services?](./what-are-cognitive-services.md)
-* [Create an account](cognitive-services-apis-create-account.md)
cognitive-services Manage Resources https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/manage-resources.md
- Title: Recover deleted Cognitive Services resource-
-description: This article provides instructions on how to recover an already-deleted Cognitive Services resource.
----- Previously updated : 07/05/2023---
-# Recover deleted Cognitive Services resources
-
-This article provides instructions on how to recover a Cognitive Services resource that is already deleted. The article also provides instructions on how to purge a deleted resource.
-
-> [!NOTE]
-> The instructions in this article are applicable to both a multi-service resource and a single-service resource. A multi-service resource enables access to multiple cognitive services using a single key and endpoint. On the other hand, a single-service resource enables access to just that specific cognitive service for which the resource was created.
-
-## Prerequisites
-
-* The resource to be recovered must have been deleted within the past 48 hours.
-* The resource to be recovered must not have been purged already. A purged resource cannot be recovered.
-* Before you attempt to recover a deleted resource, make sure that the resource group for that account exists. If the resource group was deleted, you must recreate it. Recovering a resource group is not possible. For more information, see [Manage resource groups](../azure-resource-manager/management/manage-resource-groups-portal.md).
-* If the deleted resource used customer-managed keys with Azure Key Vault and the key vault has also been deleted, then you must restore the key vault before you restore the Cognitive Services resource. For more information, see [Azure Key Vault recovery management](../key-vault/general/key-vault-recovery.md).
-* If the deleted resource used a customer-managed storage and storage account has also been deleted, you must restore the storage account before you restore the Cognitive Services resource. For instructions, see [Recover a deleted storage account](../storage/common/storage-account-recover.md).
-
-Your subscription must have `Microsoft.CognitiveServices/locations/resourceGroups/deletedAccounts/delete` permissions to purge resources, such as [Cognitive Services Contributor](../role-based-access-control/built-in-roles.md#cognitive-services-contributor) or [Contributor](../role-based-access-control/built-in-roles.md#contributor).
-
-## Recover a deleted resource
-
-To recover a deleted cognitive service resource, use the following commands. Where applicable, replace:
-
-* `{subscriptionID}` with your Azure subscription ID
-* `{resourceGroup}` with your resource group
-* `{resourceName}` with your resource name
-* `{location}` with the location of your resource
--
-# [Azure portal](#tab/azure-portal)
-
-If you need to recover a deleted resource, navigate to the hub of the cognitive services API type and select "Manage deleted resources" from the menu. For example, if you would like to recover an "Anomaly detector" resource, search for "Anomaly detector" in the search bar and select the service. Then select **Manage deleted resources**.
-
-Select the subscription in the dropdown list to locate the deleted resource you would like to recover. Select one or more of the deleted resources and click **Recover**.
--
-> [!NOTE]
-> It can take a couple of minutes for your deleted resource(s) to recover and show up in the list of the resources. Click on the **Refresh** button in the menu to update the list of resources.
-
-# [Rest API](#tab/rest-api)
-
-Use the following `PUT` command:
-
-```rest-api
-https://management.azure.com/subscriptions/{subscriptionID}/resourceGroups/{resourceGroup}/providers/Microsoft.CognitiveServices/accounts/{resourceName}?Api-Version=2021-04-30
-```
-
-In the request body, use the following JSON format:
-
-```json
-{
- "location": "{location}",
- "properties": {
- "restore": true
- }
-}
-```
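-
-If you prefer to script this call, the following Python sketch (an illustration, not part of the original steps) sends the same `PUT` request with a token obtained from `DefaultAzureCredential`. It assumes the `azure-identity` and `requests` packages are installed and uses the same placeholder values as the list above.
-
-```python
-# Minimal sketch: restore a soft-deleted Cognitive Services resource through the ARM REST API.
-import requests
-from azure.identity import DefaultAzureCredential
-
-subscription_id = "{subscriptionID}"
-resource_group = "{resourceGroup}"
-resource_name = "{resourceName}"
-location = "{location}"
-
-# Acquire an Azure Resource Manager token (works with `az login`, managed identity, and more).
-token = DefaultAzureCredential().get_token("https://management.azure.com/.default").token
-
-url = (
-    f"https://management.azure.com/subscriptions/{subscription_id}"
-    f"/resourceGroups/{resource_group}/providers/Microsoft.CognitiveServices"
-    f"/accounts/{resource_name}"
-)
-body = {"location": location, "properties": {"restore": True}}
-
-response = requests.put(
-    url,
-    params={"api-version": "2021-04-30"},
-    headers={"Authorization": f"Bearer {token}"},
-    json=body,
-)
-print(response.status_code, response.json())
-```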
-
-# [PowerShell](#tab/powershell)
-
-Use the following command to restore the resource:
-
-```powershell
-New-AzResource -Location {location} -Properties @{restore=$true} -ResourceId /subscriptions/{subscriptionID}/resourceGroups/{resourceGroup}/providers/Microsoft.CognitiveServices/accounts/{resourceName} -ApiVersion 2021-04-30
-```
-
-If you need to find the name of your deleted resources, you can get a list of deleted resource names with the following command:
-
-```powershell
-Get-AzResource -ResourceId /subscriptions/{subscriptionId}/providers/Microsoft.CognitiveServices/deletedAccounts -ApiVersion 2021-04-30
-```
-
-# [Azure CLI](#tab/azure-cli)
-
-```azurecli-interactive
-az resource create --subscription {subscriptionID} -g {resourceGroup} -n {resourceName} --location {location} --namespace Microsoft.CognitiveServices --resource-type accounts --properties "{\"restore\": true}"
-```
---
-## Purge a deleted resource
-
-Once you delete a resource, you won't be able to create another one with the same name for 48 hours. To create a resource with the same name, you will need to purge the deleted resource.
-
-To purge a deleted cognitive service resource, use the following commands. Where applicable, replace:
-
-* `{subscriptionID}` with your Azure subscription ID
-* `{resourceGroup}` with your resource group
-* `{resourceName}` with your resource name
-* `{location}` with the location of your resource
-
-> [!NOTE]
-> Once a resource is purged, it is permanently deleted and cannot be restored. You will lose all data and keys associated with the resource.
--
-# [Azure portal](#tab/azure-portal)
-
-If you need to purge a deleted resource, the steps are similar to recovering a deleted resource.
-
-Navigate to the hub of the cognitive services API type of your deleted resource. For example, if you would like to purge an "Anomaly detector" resource, search for "Anomaly detector" in the search bar and select the service. Then select **Manage deleted resources** from the menu.
-
-Select the subscription in the dropdown list to locate the deleted resource you would like to purge.
-Select one or more deleted resources and click **Purge**.
-Purging will permanently delete a Cognitive Services resource.
---
-# [Rest API](#tab/rest-api)
-
-Use the following `DELETE` command:
-
-```rest-api
-https://management.azure.com/subscriptions/{subscriptionID}/providers/Microsoft.CognitiveServices/locations/{location}/resourceGroups/{resourceGroup}/deletedAccounts/{resourceName}?Api-Version=2021-04-30
-```
-
-# [PowerShell](#tab/powershell)
-
-```powershell
-Remove-AzResource -ResourceId /subscriptions/{subscriptionID}/providers/Microsoft.CognitiveServices/locations/{location}/resourceGroups/{resourceGroup}/deletedAccounts/{resourceName} -ApiVersion 2021-04-30
-```
-
-# [Azure CLI](#tab/azure-cli)
-
-```azurecli-interactive
-az resource delete --ids /subscriptions/{subscriptionId}/providers/Microsoft.CognitiveServices/locations/{location}/resourceGroups/{resourceGroup}/deletedAccounts/{resourceName}
-```
----
-## See also
-* [Create a new resource using the Azure portal](cognitive-services-apis-create-account.md)
-* [Create a new resource using the Azure CLI](cognitive-services-apis-create-account-cli.md)
-* [Create a new resource using the client library](cognitive-services-apis-create-account-client-library.md)
-* [Create a new resource using an ARM template](create-account-resource-manager-template.md)
cognitive-services Chatgpt Quickstart https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/openai/chatgpt-quickstart.md
- Title: 'Quickstart - Get started using GPT-35-Turbo and GPT-4 with Azure OpenAI Service'-
-description: Walkthrough on how to get started with GPT-35-Turbo and GPT-4 on Azure OpenAI Service.
-------- Previously updated : 05/23/2023
-zone_pivot_groups: openai-quickstart-new
-recommendations: false
--
-# Quickstart: Get started using GPT-35-Turbo and GPT-4 with Azure OpenAI Service
-
-Use this article to get started using Azure OpenAI.
------------------
cognitive-services Abuse Monitoring https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/openai/concepts/abuse-monitoring.md
- Title: Azure OpenAI Service abuse monitoring-
-description: Learn about the abuse monitoring capabilities of Azure OpenAI Service
----- Previously updated : 06/16/2023--
-keywords:
--
-# Abuse Monitoring
-
-Azure OpenAI Service detects and mitigates instances of recurring content and/or behaviors that suggest use of the service in a manner that may violate the [Code of Conduct](/legal/cognitive-services/openai/code-of-conduct?context=%2Fazure%2Fcognitive-services%2Fopenai%2Fcontext%2Fcontext) or other applicable product terms. Details on how data is handled can be found on the [Data, Privacy and Security page](/legal/cognitive-services/openai/data-privacy?context=%2Fazure%2Fcognitive-services%2Fopenai%2Fcontext%2Fcontext).
-
-## Components of abuse monitoring
-
-There are several components to abuse monitoring:
-- **Content Classification**: Classifier models detect harmful language and/or images in user prompts (inputs) and completions (outputs). The system looks for categories of harms as defined in the [Content Requirements](/legal/cognitive-services/openai/code-of-conduct?context=%2Fazure%2Fcognitive-services%2Fopenai%2Fcontext%2Fcontext), and assigns severity levels as described in more detail on the [Content Filtering page](/azure/cognitive-services/openai/concepts/content-filter).
-
-- **Abuse Pattern Capture**: Azure OpenAI Service's abuse monitoring looks at customer usage patterns and employs algorithms and heuristics to detect indicators of potential abuse. Detected patterns consider, for example, the frequency and severity at which harmful content is detected in a customer's prompts and completions.
-
-- **Human Review and Decision**: When prompts and/or completions are flagged through content classification and abuse pattern capture as described above, authorized Microsoft employees may assess the flagged content, and either confirm or correct the classification or determination based on predefined guidelines and policies. Data can be accessed for human review <u>only</u> by authorized Microsoft employees via Secure Access Workstations (SAWs) with Just-In-Time (JIT) request approval granted by team managers. For Azure OpenAI Service resources deployed in the European Economic Area, the authorized Microsoft employees are located in the European Economic Area.
-
-- **Notification and Action**: When a threshold of abusive behavior has been confirmed based on the preceding three steps, the customer is informed of the determination by email. Except in cases of severe or recurring abuse, customers typically are given an opportunity to explain or remediate the abusive behavior and to implement mechanisms to prevent its recurrence. Failure to address the behavior, or recurring or severe abuse, may result in suspension or termination of the customer's access to Azure OpenAI resources and/or capabilities.
-
-## Next steps
-- Learn more about the [underlying models that power Azure OpenAI](../concepts/models.md).
-- Learn more about understanding and mitigating risks associated with your application: [Overview of Responsible AI practices for Azure OpenAI models](/legal/cognitive-services/openai/overview?context=%2Fazure%2Fcognitive-services%2Fopenai%2Fcontext%2Fcontext).
-- Learn more about how data is processed in connection with content filtering and abuse monitoring: [Data, privacy, and security for Azure OpenAI Service](/legal/cognitive-services/openai/data-privacy?context=%2Fazure%2Fcognitive-services%2Fopenai%2Fcontext%2Fcontext#preventing-abuse-and-harmful-content-generation).
cognitive-services Advanced Prompt Engineering https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/openai/concepts/advanced-prompt-engineering.md
- Title: Prompt engineering techniques with Azure OpenAI-
-description: Learn about the options for how to use prompt engineering with GPT-3, GPT-35-Turbo, and GPT-4 models
----- Previously updated : 04/20/2023-
-keywords: ChatGPT, GPT-4, prompt engineering, meta prompts, chain of thought
-zone_pivot_groups: openai-prompt
--
-# Prompt engineering techniques
-
-This guide will walk you through some advanced techniques in prompt design and prompt engineering. If you're new to prompt engineering, we recommend starting with our [introduction to prompt engineering guide](prompt-engineering.md).
-
-While the principles of prompt engineering can be generalized across many different model types, certain models expect a specialized prompt structure. For Azure OpenAI GPT models, there are currently two distinct APIs where prompt engineering comes into play:
-- Chat Completion API.
-- Completion API.
-
-Each API requires input data to be formatted differently, which in turn impacts overall prompt design. The **Chat Completion API** supports the GPT-35-Turbo and GPT-4 models. These models are designed to take input formatted in a [specific chat-like transcript](../how-to/chatgpt.md) stored inside an array of dictionaries.
-
-The **Completion API** supports the older GPT-3 models and has much more flexible input requirements in that it takes a string of text with no specific format rules. Technically the GPT-35-Turbo models can be used with either API, but we strongly recommend using the Chat Completion API for these models. To learn more, please consult our [in-depth guide on using these APIs](../how-to/chatgpt.md).
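-
-For example, the following minimal Python sketch (with placeholder environment variables and a placeholder deployment name, not taken from this guide) shows the difference: the Chat Completion API takes the conversation as an array of role/content dictionaries, while the Completion API takes a single free-form string.
-
-```python
-# Minimal sketch of the two input formats, assuming the openai Python library is
-# configured for Azure OpenAI and "<DEPLOYMENT_NAME>" is replaced with your deployment name.
-import os
-import openai
-
-openai.api_type = "azure"
-openai.api_base = os.getenv("AZURE_OPENAI_ENDPOINT")
-openai.api_version = "2023-05-15"
-openai.api_key = os.getenv("AZURE_OPENAI_KEY")
-
-# Chat Completion API: the prompt is a chat-like transcript (an array of dictionaries).
-chat_response = openai.ChatCompletion.create(
-    engine="<DEPLOYMENT_NAME>",
-    messages=[
-        {"role": "system", "content": "You are a helpful assistant."},
-        {"role": "user", "content": "What is prompt engineering?"},
-    ],
-)
-
-# Completion API: the prompt is a single string with no required structure.
-completion_response = openai.Completion.create(
-    engine="<DEPLOYMENT_NAME>",
-    prompt="Explain prompt engineering in one sentence.",
-)
-```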
-
-The techniques in this guide will teach you strategies for increasing the accuracy and grounding of responses you generate with a Large Language Model (LLM). It is, however, important to remember that even when using prompt engineering effectively you still need to validate the responses the models generate. Just because a carefully crafted prompt worked well for a particular scenario doesn't necessarily mean it will generalize more broadly to certain use cases. Understanding the [limitations of LLMs](/legal/cognitive-services/openai/transparency-note?context=%2Fazure%2Fcognitive-services%2Fopenai%2Fcontext%2Fcontext#limitations) is just as important as understanding how to leverage their strengths.
------
cognitive-services Content Filter https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/openai/concepts/content-filter.md
- Title: Azure OpenAI Service content filtering-
-description: Learn about the content filtering capabilities of Azure OpenAI in Azure Cognitive Services
----- Previously updated : 06/08/2023--
-keywords:
--
-# Content filtering
-
-Azure OpenAI Service includes a content filtering system that works alongside core models. This system works by running both the prompt and completion through an ensemble of classification models aimed at detecting and preventing the output of harmful content. The content filtering system detects and takes action on specific categories of potentially harmful content in both input prompts and output completions. Variations in API configurations and application design may affect completions and thus filtering behavior. The content filtering system supports the following languages: English, German, Japanese, Spanish, French, Italian, Portuguese, and Chinese. It might not be able to detect inappropriate content in languages that it has not been trained or tested to process.
-
-In addition to the content filtering system, the Azure OpenAI Service performs monitoring to detect content and/or behaviors that suggest use of the service in a manner that may violate applicable product terms. For more information about understanding and mitigating risks associated with your application, see the [Transparency Note for Azure OpenAI](/legal/cognitive-services/openai/transparency-note?tabs=text). For more information about how data is processed in connection with content filtering and abuse monitoring, see [Data, privacy, and security for Azure OpenAI Service](/legal/cognitive-services/openai/data-privacy?context=%2Fazure%2Fcognitive-services%2Fopenai%2Fcontext%2Fcontext#preventing-abuse-and-harmful-content-generation).
-
-The following sections provide information about the content filtering categories, the filtering severity levels and their configurability, and API scenarios to be considered in application design and implementation.
-
-## Content filtering categories
-
-The content filtering system integrated in the Azure OpenAI Service contains neural multi-class classification models aimed at detecting and filtering harmful content; the models cover four categories (hate, sexual, violence, and self-harm) across four severity levels (safe, low, medium, and high). Content detected at the 'safe' severity level is labeled in annotations but is not subject to filtering and is not configurable.
-
-### Categories
-
-|Category|Description|
-|--|--|
-| Hate |The hate category describes language attacks or uses that include pejorative or discriminatory language with reference to a person or identity group on the basis of certain differentiating attributes of these groups including but not limited to race, ethnicity, nationality, gender identity and expression, sexual orientation, religion, immigration status, ability status, personal appearance, and body size. |
-| Sexual | The sexual category describes language related to anatomical organs and genitals, romantic relationships, acts portrayed in erotic or affectionate terms, physical sexual acts, including those portrayed as an assault or a forced sexual violent act against one's will, prostitution, pornography, and abuse. |
-| Violence | The violence category describes language related to physical actions intended to hurt, injure, damage, or kill someone or something; describes weapons, etc. |
-| Self-Harm | The self-harm category describes language related to physical actions intended to purposely hurt, injure, or damage one's body, or kill oneself.|
-
-### Severity levels
-
-|Severity level|Description|
-|--|--|
-|Safe | Content may be related to violence, self-harm, sexual, or hate categories but the terms are used in general, journalistic, scientific, medical, and similar professional contexts, which are appropriate for most audiences. |
-|Low | Content that expresses prejudiced, judgmental, or opinionated views, includes offensive use of language, stereotyping, use cases exploring a fictional world (for example, gaming, literature) and depictions at low intensity.|
-| Medium | Content that uses offensive, insulting, mocking, intimidating, or demeaning language towards specific identity groups, includes depictions of seeking and executing harmful instructions, fantasies, glorification, promotion of harm at medium intensity. |
-|High | Content that displays explicit and severe harmful instructions, actions, damage, or abuse; includes endorsement, glorification, or promotion of severe harmful acts, extreme or illegal forms of harm, radicalization, or non-consensual power exchange or abuse.|
-
-## Configurability (preview)
-
-The default content filtering configuration is set to filter at the medium severity threshold for all four content harm categories for both prompts and completions. That means that content that is detected at severity level medium or high is filtered, while content detected at severity level low is not filtered by the content filters. The configurability feature is available in preview and allows customers to adjust the settings, separately for prompts and completions, to filter content for each content category at different severity levels as described in the table below:
-
-| Severity filtered | Configurable for prompts | Configurable for completions | Descriptions |
-|-|--||--|
-| Low, medium, high | Yes | Yes | Strictest filtering configuration. Content detected at severity levels low, medium and high is filtered.|
-| Medium, high | Yes | Yes | Default setting. Content detected at severity level low is not filtered, content at medium and high is filtered.|
-| High | If approved<sup>\*</sup>| If approved<sup>\*</sup> | Content detected at severity levels low and medium is not filtered. Only content at severity level high is filtered. Requires approval<sup>\*</sup>.|
-| No filters | If approved<sup>\*</sup>| If approved<sup>\*</sup>| No content is filtered regardless of severity level detected. Requires approval<sup>\*</sup>.|
-
-<sup>\*</sup> Only customers who have been approved for modified content filtering have full content filtering control, including configuring content filters at severity level high only or turning content filters off. Apply for modified content filters via this form: [Azure OpenAI Limited Access Review: Modified Content Filters and Abuse Monitoring (microsoft.com)](https://customervoice.microsoft.com/Pages/ResponsePage.aspx?id=v4j5cvGGr0GRqy180BHbR7en2Ais5pxKtso_Pz4b1_xURE01NDY1OUhBRzQ3MkQxMUhZSE1ZUlJKTiQlQCN0PWcu)
-
-Content filtering configurations are created within a Resource in Azure AI Studio, and can be associated with Deployments. [Learn more about configurability here](../how-to/content-filters.md).
-
- :::image type="content" source="../media/content-filters/configuration.png" alt-text="Screenshot of the content filter configuration UI" lightbox="../media/content-filters/configuration.png":::
-
-## Scenario details
-
-When the content filtering system detects harmful content, you receive either an error on the API call (if the prompt was deemed inappropriate), or a `finish_reason` value of `content_filter` on the response to signify that some of the completion was filtered. When building your application or system, you'll want to account for these scenarios where the content returned by the Completions API is filtered, which may result in content that is incomplete. How you act on this information will be application specific. The behavior can be summarized in the following points, and a handling sketch follows the list:
-- Prompts that are classified at a filtered category and severity level will return an HTTP 400 error.
-- Non-streaming completions calls won't return any content when the content is filtered. The `finish_reason` value will be set to `content_filter`. In rare cases with longer responses, a partial result can be returned. In these cases, the `finish_reason` will be updated.
-- For streaming completions calls, segments will be returned back to the user as they're completed. The service will continue streaming until either reaching a stop token, length, or when content that is classified at a filtered category and severity level is detected.
-
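-The following minimal Python sketch (using the same Azure OpenAI setup as the annotation examples later in this article, with a placeholder deployment name) shows one way to handle both outcomes: a filtered prompt raises an error, while filtered completions are flagged per choice through `finish_reason`.
-
-```python
-# Minimal sketch: handle a filtered prompt (HTTP 400) and filtered completions separately.
-import os
-import openai
-
-openai.api_type = "azure"
-openai.api_base = os.getenv("AZURE_OPENAI_ENDPOINT")
-openai.api_version = "2023-06-01-preview"
-openai.api_key = os.getenv("AZURE_OPENAI_KEY")
-
-try:
-    response = openai.Completion.create(
-        engine="<MODEL_DEPLOYMENT_NAME>",  # your deployment name
-        prompt="Text example",
-        n=3,
-    )
-except openai.error.InvalidRequestError as e:
-    # The prompt itself was classified at a filtered category and severity level.
-    print(f"Prompt rejected by the content filter: {e}")
-else:
-    for choice in response["choices"]:
-        if choice["finish_reason"] == "content_filter":
-            print(f"Choice {choice['index']} was filtered; handle or omit it.")
-        else:
-            print(choice["text"])
-```
-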
-### Scenario: You send a non-streaming completions call asking for multiple outputs; no content is classified at a filtered category and severity level
-
-The table below outlines the various ways content filtering can appear:
-
- **HTTP response code** | **Response behavior** |
-|--|--|
-| 200 | In cases when all generations pass the filters as configured, no content moderation details are added to the response. The `finish_reason` for each generation will be either `stop` or `length`. |
-
-**Example request payload:**
-
-```json
-{
- "prompt":"Text example",
- "n": 3,
- "stream": false
-}
-```
-
-**Example response JSON:**
-
-```json
-{
- "id": "example-id",
- "object": "text_completion",
- "created": 1653666286,
- "model": "davinci",
- "choices": [
- {
- "text": "Response generated text",
- "index": 0,
- "finish_reason": "stop",
- "logprobs": null
- }
- ]
-}
-
-```
-
-### Scenario: Your API call asks for multiple responses (N>1) and at least 1 of the responses is filtered
-
-| **HTTP Response Code** | **Response behavior**|
-|--|--|
-| 200 | The generations that were filtered will have a `finish_reason` value of `content_filter`. |
-
-**Example request payload:**
-
-```json
-{
- "prompt":"Text example",
- "n": 3,
- "stream": false
-}
-```
-
-**Example response JSON:**
-
-```json
-{
- "id": "example",
- "object": "text_completion",
- "created": 1653666831,
- "model": "ada",
- "choices": [
- {
- "text": "returned text 1",
- "index": 0,
- "finish_reason": "length",
- "logprobs": null
- },
- {
- "text": "returned text 2",
- "index": 1,
- "finish_reason": "content_filter",
- "logprobs": null
- }
- ]
-}
-```
-
-### Scenario: An inappropriate input prompt is sent to the completions API (either for streaming or non-streaming)
-
-**HTTP Response Code** | **Response behavior**
-|--|--|
-|400 |The API call will fail when the prompt triggers a content filter as configured. Modify the prompt and try again.|
-
-**Example request payload:**
-
-```json
-{
- "prompt":"Content that triggered the filtering model"
-}
-```
-
-**Example response JSON:**
-
-```json
-"error": {
- "message": "The response was filtered",
- "type": null,
- "param": "prompt",
- "code": "content_filter",
- "status": 400
-}
-```
-
-### Scenario: You make a streaming completions call; no output content is classified at a filtered category and severity level
-
-**HTTP Response Code** | **Response behavior**
-|--|--|
-|200|In this case, the call will stream back with the full generation and `finish_reason` will be either 'length' or 'stop' for each generated response.|
-
-**Example request payload:**
-
-```json
-{
- "prompt":"Text example",
- "n": 3,
- "stream": true
-}
-```
-
-**Example response JSON:**
-
-```json
-{
- "id": "cmpl-example",
- "object": "text_completion",
- "created": 1653670914,
- "model": "ada",
- "choices": [
- {
- "text": "last part of generation",
- "index": 2,
- "finish_reason": "stop",
- "logprobs": null
- }
- ]
-}
-```
-
-### Scenario: You make a streaming completions call asking for multiple completions and at least a portion of the output content is filtered
-
-**HTTP Response Code** | **Response behavior**
-|--|--|
-| 200 | For a given generation index, the last chunk of the generation will include a non-null `finish_reason` value. The value will be `content_filter` when the generation was filtered.|
-
-**Example request payload:**
-
-```json
-{
- "prompt":"Text example",
- "n": 3,
- "stream": true
-}
-```
-
-**Example response JSON:**
-
-```json
- {
- "id": "cmpl-example",
- "object": "text_completion",
- "created": 1653670515,
- "model": "ada",
- "choices": [
- {
- "text": "Last part of generated text streamed back",
- "index": 2,
- "finish_reason": "content_filter",
- "logprobs": null
- }
- ]
-}
-```
-
-### Scenario: Content filtering system doesn't run on the completion
-
-**HTTP Response Code** | **Response behavior**
-|--|--|
-| 200 | If the content filtering system is down or otherwise unable to complete the operation in time, your request will still complete without content filtering. You can determine that the filtering wasn't applied by looking for an error message in the `content_filter_result` object.|
-
-**Example request payload:**
-
-```json
-{
- "prompt":"Text example",
- "n": 1,
- "stream": false
-}
-```
-
-**Example response JSON:**
-
-```json
-{
- "id": "cmpl-example",
- "object": "text_completion",
- "created": 1652294703,
- "model": "ada",
- "choices": [
- {
- "text": "generated text",
- "index": 0,
- "finish_reason": "length",
- "logprobs": null,
- "content_filter_result": {
- "error": {
- "code": "content_filter_error",
- "message": "The contents are not filtered"
- }
- }
- }
- ]
-}
-```
-
-## Annotations (preview)
-
-When annotations are enabled as shown in the code snippet below, the following information is returned via the API: content filtering category (hate, sexual, violence, self-harm); within each content filtering category, the severity level (safe, low, medium or high); filtering status (true or false).
-
-Annotations are currently in preview for Completions and Chat Completions (GPT models); the following code snippet shows how to use annotations in preview:
-
-```python
-# Note: The openai-python library support for Azure OpenAI is in preview.
-# os.getenv() for the endpoint and key assumes that you are using environment variables.
-
-import os
-import openai
-openai.api_type = "azure"
-openai.api_base = os.getenv("AZURE_OPENAI_ENDPOINT")
-openai.api_version = "2023-06-01-preview" # API version required to test out Annotations preview
-openai.api_key = os.getenv("AZURE_OPENAI_KEY")
-
-response = openai.Completion.create(
- engine="text-davinci-003", # engine = "deployment_name".
- prompt="{Example prompt where a severity level of low is detected}"
- # Content that is detected at severity level medium or high is filtered,
- # while content detected at severity level low isn't filtered by the content filters.
-)
-
-print(response)
-
-```
-
-### Output
-
-```json
-{
- "choices": [
- {
- "content_filter_results": {
- "hate": {
- "filtered": false,
- "severity": "safe"
- },
- "self_harm": {
- "filtered": false,
- "severity": "safe"
- },
- "sexual": {
- "filtered": false,
- "severity": "safe"
- },
- "violence": {
- "filtered": false,
- "severity": "low"
- }
- },
- "finish_reason": "length",
- "index": 0,
- "logprobs": null,
- "text": {"\")(\"Example model response will be returned\").}"
- }
- ],
- "created": 1685727831,
- "id": "cmpl-7N36VZAVBMJtxycrmiHZ12aK76a6v",
- "model": "text-davinci-003",
- "object": "text_completion",
- "prompt_annotations": [
- {
- "content_filter_results": {
- "hate": {
- "filtered": false,
- "severity": "safe"
- },
- "self_harm": {
- "filtered": false,
- "severity": "safe"
- },
- "sexual": {
- "filtered": false,
- "severity": "safe"
- },
- "violence": {
- "filtered": false,
- "severity": "safe"
- }
- },
- "prompt_index": 0
- }
- ],
- "usage": {
- "completion_tokens": 16,
- "prompt_tokens": 5,
- "total_tokens": 21
- }
-}
-```
-
-The following code snippet shows how to retrieve annotations when content was filtered:
-
-```python
-# Note: The openai-python library support for Azure OpenAI is in preview.
-# os.getenv() for the endpoint and key assumes that you are using environment variables.
-
-import os
-import openai
-openai.api_type = "azure"
-openai.api_base = os.getenv("AZURE_OPENAI_ENDPOINT")
-openai.api_version = "2023-06-01-preview" # API version required to test out Annotations preview
-openai.api_key = os.getenv("AZURE_OPENAI_KEY")
-
-try:
- response = openai.Completion.create(
- prompt="<PROMPT>",
- engine="<MODEL_DEPLOYMENT_NAME>",
- )
- print(response)
-
-except openai.error.InvalidRequestError as e:
- if e.error.code == "content_filter" and e.error.innererror:
- content_filter_result = e.error.innererror.content_filter_result
- # print the formatted JSON
- print(content_filter_result)
-
- # or access the individual categories and details
- for category, details in content_filter_result.items():
- print(f"{category}:\n filtered={details['filtered']}\n severity={details['severity']}")
-
-```
-
-For details on the inference REST API endpoints for Azure OpenAI and how to create Chat and Completions, see the [Azure OpenAI Service REST API reference guidance](../reference.md). Annotations are returned for all scenarios when using `2023-06-01-preview`.
-
-### Example scenario: An input prompt containing content that is classified at a filtered category and severity level is sent to the completions API
-
-```json
-{
- "error": {
- "message": "The response was filtered due to the prompt triggering Azure Content management policy.
- Please modify your prompt and retry. To learn more about our content filtering policies
- please read our documentation: https://go.microsoft.com/fwlink/?linkid=21298766",
- "type": null,
- "param": "prompt",
- "code": "content_filter",
- "status": 400,
- "innererror": {
- "code": "ResponsibleAIPolicyViolation",
- "content_filter_result": {
- "hate": {
- "filtered": true,
- "severity": "high"
- },
- "self-harm": {
- "filtered": true,
- "severity": "high"
- },
- "sexual": {
- "filtered": false,
- "severity": "safe"
- },
- "violence": {
- "filtered":true,
- "severity": "medium"
- }
- }
- }
- }
-}
-```
-
-## Best practices
-
-As part of your application design, consider the following best practices to deliver a positive experience with your application while minimizing potential harms:
-
-- Decide how you want to handle scenarios where your users send prompts containing content that is classified at a filtered category and severity level or otherwise misuse your application.
-- Check the `finish_reason` to see if a completion is filtered (see the sketch after this list).
-- Check that there's no error object in the `content_filter_result` (indicating that content filters didn't run).
-
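-The following is a minimal, hedged sketch of the last two checks, assuming the pre-1.0 openai Python library configured for Azure and a placeholder deployment name; the field names follow the example responses shown earlier in this article.
-
-```python
-# Minimal sketch only (not official sample code): check finish_reason and the
-# content filter error object after a non-streaming Completions call.
-import os
-import openai
-
-openai.api_type = "azure"
-openai.api_base = os.getenv("AZURE_OPENAI_ENDPOINT")
-openai.api_version = "2023-06-01-preview"
-openai.api_key = os.getenv("AZURE_OPENAI_KEY")
-
-response = openai.Completion.create(
-    engine="text-davinci-003",  # placeholder deployment name
-    prompt="Text example",
-)
-
-for choice in response["choices"]:
-    # A finish_reason of "content_filter" means this completion was filtered.
-    if choice.get("finish_reason") == "content_filter":
-        print(f"Choice {choice['index']} was filtered.")
-
-    # An error object here indicates the content filtering system didn't run.
-    results = choice.get("content_filter_results") or choice.get("content_filter_result") or {}
-    if "error" in results:
-        print("Content filters didn't run:", results["error"].get("message"))
-```
-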
-## Next steps
-
-- Learn more about the [underlying models that power Azure OpenAI](../concepts/models.md).
-- Apply for modified content filters via [this form](https://customervoice.microsoft.com/Pages/ResponsePage.aspx?id=v4j5cvGGr0GRqy180BHbR7en2Ais5pxKtso_Pz4b1_xURE01NDY1OUhBRzQ3MkQxMUhZSE1ZUlJKTiQlQCN0PWcu).
-- Azure OpenAI content filtering is powered by [Azure AI Content Safety](https://azure.microsoft.com/products/cognitive-services/ai-content-safety).
-- Learn more about understanding and mitigating risks associated with your application: [Overview of Responsible AI practices for Azure OpenAI models](/legal/cognitive-services/openai/overview?context=%2Fazure%2Fcognitive-services%2Fopenai%2Fcontext%2Fcontext).
-- Learn more about how data is processed in connection with content filtering and abuse monitoring: [Data, privacy, and security for Azure OpenAI Service](/legal/cognitive-services/openai/data-privacy?context=%2Fazure%2Fcognitive-services%2Fopenai%2Fcontext%2Fcontext#preventing-abuse-and-harmful-content-generation).
cognitive-services Models https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/openai/concepts/models.md
- Title: Azure OpenAI Service models
-description: Learn about the different model capabilities that are available with Azure OpenAI.
- Previously updated : 07/12/2023
-recommendations: false
-keywords:
-
-# Azure OpenAI Service models
-
-Azure OpenAI Service is powered by a diverse set of models with different capabilities and price points. Model availability varies by region. For GPT-3 and other models retiring in July 2024, see [Azure OpenAI Service legacy models](./legacy-models.md).
-
-| Models | Description |
-|--|--|
-| [GPT-4](#gpt-4) | A set of models that improve on GPT-3.5 and can understand as well as generate natural language and code. |
-| [GPT-3.5](#gpt-35) | A set of models that improve on GPT-3 and can understand as well as generate natural language and code. |
-| [Embeddings](#embeddings-models) | A set of models that can convert text into numerical vector form to facilitate text similarity. |
-| [DALL-E](#dall-e-models-preview) (Preview) | A series of models in preview that can generate original images from natural language. |
-
-## GPT-4
-
- GPT-4 can solve difficult problems with greater accuracy than any of OpenAI's previous models. Like GPT-3.5 Turbo, GPT-4 is optimized for chat and works well for traditional completions tasks. Use the Chat Completions API to use GPT-4. To learn more about how to interact with GPT-4 and the Chat Completions API check out our [in-depth how-to](../how-to/chatgpt.md).
-
-Due to high demand, access to this model series is currently only available by request. To request access, existing Azure OpenAI customers can [apply by filling out this form](https://aka.ms/oai/get-gpt4).
-
-- `gpt-4`
-- `gpt-4-32k`
-
-The `gpt-4` model supports 8192 max input tokens and the `gpt-4-32k` model supports up to 32,768 tokens.
-
-## GPT-3.5
-
-GPT-3.5 models can understand and generate natural language or code. The most capable and cost effective model in the GPT-3.5 family is GPT-3.5 Turbo, which has been optimized for chat and works well for traditional completions tasks as well. We recommend using GPT-3.5 Turbo over [legacy GPT-3.5 and GPT-3 models](./legacy-models.md).
-
-- `gpt-35-turbo`
-- `gpt-35-turbo-16k`
-
-The `gpt-35-turbo` model supports 4096 max input tokens and the `gpt-35-turbo-16k` model supports up to 16,384 tokens.
-
-`gpt-35-turbo` and `gpt-35-turbo-16k` share the same [quota](../how-to/quota.md).
-
-Like GPT-4, use the Chat Completions API to use GPT-3.5 Turbo. To learn more about how to interact with GPT-3.5 Turbo and the Chat Completions API, check out our [in-depth how-to](../how-to/chatgpt.md).
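-
-As a quick, non-authoritative illustration of the Chat Completions call described above, the following sketch assumes the pre-1.0 openai Python library and a chat model deployment named `gpt-35-turbo` (a placeholder; use your own deployment name).
-
-```python
-# Hedged sketch only. Assumes the pre-1.0 openai Python library, environment variables
-# for the endpoint and key, and a chat model deployment named "gpt-35-turbo".
-import os
-import openai
-
-openai.api_type = "azure"
-openai.api_base = os.getenv("AZURE_OPENAI_ENDPOINT")
-openai.api_version = "2023-05-15"
-openai.api_key = os.getenv("AZURE_OPENAI_KEY")
-
-response = openai.ChatCompletion.create(
-    engine="gpt-35-turbo",  # engine = deployment name (assumption)
-    messages=[
-        {"role": "system", "content": "You are a helpful assistant."},
-        {"role": "user", "content": "What is the difference between GPT-3.5 Turbo and GPT-4?"},
-    ],
-)
-
-print(response["choices"][0]["message"]["content"])
-```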
-
-## Embeddings models
-
-> [!IMPORTANT]
-> We strongly recommend using `text-embedding-ada-002 (Version 2)`. This model/version provides parity with OpenAI's `text-embedding-ada-002`. To learn more about the improvements offered by this model, please refer to [OpenAI's blog post](https://openai.com/blog/new-and-improved-embedding-model). Even if you are currently using Version 1 you should migrate to Version 2 to take advantage of the latest weights/updated token limit. Version 1 and Version 2 are not interchangeable, so document embedding and document search must be done using the same version of the model.
--
-Currently, we offer three families of Embeddings models for different functionalities.
-The following table indicates the length of the numerical vector returned by the service, based on model capability:
-
-| Base Model | Model(s) | Dimensions |
-|---|---|---|
-| Ada | models ending in -001 (Version 1) | 1024 |
-| Ada | text-embedding-ada-002 (Version 2) | 1536 |
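-
-As an illustrative sketch only, the following call shows one way to confirm the vector length reported in the table above; it assumes the pre-1.0 openai Python library and an embeddings deployment named `text-embedding-ada-002` (a placeholder).
-
-```python
-# Hedged sketch only. Assumes the pre-1.0 openai Python library and an embeddings
-# deployment named "text-embedding-ada-002" (Version 2).
-import os
-import openai
-
-openai.api_type = "azure"
-openai.api_base = os.getenv("AZURE_OPENAI_ENDPOINT")
-openai.api_version = "2023-05-15"
-openai.api_key = os.getenv("AZURE_OPENAI_KEY")
-
-response = openai.Embedding.create(
-    engine="text-embedding-ada-002",  # deployment name (assumption)
-    input="The food was delicious and the waiter was friendly.",
-)
-
-vector = response["data"][0]["embedding"]
-print(len(vector))  # expect 1536 dimensions for text-embedding-ada-002 (Version 2)
-```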
-
-## DALL-E (Preview)
-
-The DALL-E models, currently in preview, generate images from text prompts that the user provides.
--
-## Model summary table and region availability
-
-> [!IMPORTANT]
-> South Central US is temporarily unavailable for creating new resources due to high demand.
-
-### GPT-4 models
-
-These models can only be used with the Chat Completion API.
-
-| Model ID | Base model Regions | Fine-Tuning Regions | Max Request (tokens) | Training Data (up to) |
-|---|---|---|---|---|
-| `gpt-4` <sup>1,</sup><sup>2</sup> (0314) | East US, France Central | N/A | 8,192 | September 2021 |
-| `gpt-4-32k` <sup>1,</sup><sup>2</sup> (0314) | East US, France Central | N/A | 32,768 | September 2021 |
-| `gpt-4` <sup>1</sup> (0613) | East US, France Central | N/A | 8,192 | September 2021 |
-| `gpt-4-32k` <sup>1</sup> (0613) | East US, France Central | N/A | 32,768 | September 2021 |
-
-<sup>1</sup> The model is [only available by request](https://aka.ms/oai/get-gpt4).<br>
-<sup>2</sup> Version `0314` of gpt-4 and gpt-4-32k will be retired on January 4, 2024. See [model updates](#model-updates) for model upgrade behavior.
-
-### GPT-3.5 models
-
-GPT-3.5 Turbo is used with the Chat Completions API. GPT-3.5 Turbo (0301) can also be used with the Completions API. GPT-3.5 Turbo (0613) only supports the Chat Completions API.
-
-| Model ID | Base model Regions | Fine-Tuning Regions | Max Request (tokens) | Training Data (up to) |
-|---|---|---|---|---|
-| `gpt-35-turbo`<sup>1</sup> (0301) | East US, France Central, South Central US, UK South, West Europe | N/A | 4,096 | Sep 2021 |
-| `gpt-35-turbo` (0613) | East US, France Central, UK South | N/A | 4,096 | Sep 2021 |
-| `gpt-35-turbo-16k` (0613) | East US, France Central, UK South | N/A | 16,384 | Sep 2021 |
-
-<sup>1</sup> Version `0301` of gpt-35-turbo will be retired on January 4, 2024. See [model updates](#model-updates) for model upgrade behavior.
--
-### Embeddings models
-
-These models can only be used with Embedding API requests.
-
-> [!NOTE]
-> We strongly recommend using `text-embedding-ada-002 (Version 2)`. This model/version provides parity with OpenAI's `text-embedding-ada-002`. To learn more about the improvements offered by this model, please refer to [OpenAI's blog post](https://openai.com/blog/new-and-improved-embedding-model). Even if you are currently using Version 1 you should migrate to Version 2 to take advantage of the latest weights/updated token limit. Version 1 and Version 2 are not interchangeable, so document embedding and document search must be done using the same version of the model.
-
-| Model ID | Base model Regions | Fine-Tuning Regions | Max Request (tokens) | Training Data (up to) |
-|---|---|---|---|---|
-| text-embedding-ada-002 (version 2) | East US, South Central US, West Europe | N/A |8,191 | Sep 2021 |
-| text-embedding-ada-002 (version 1) | East US, South Central US, West Europe | N/A |2,046 | Sep 2021 |
-
-### DALL-E models (Preview)
-
-| Model ID | Base model Regions | Fine-Tuning Regions | Max Request (characters) | Training Data (up to) |
-|---|---|---|---|---|
-| dalle2 | East US | N/A | 1000 | N/A |
-
-## Working with models
-
-### Finding what models are available
-
-You can get a list of the models that are available for both inference and fine-tuning in your Azure OpenAI resource by using the [Models List API](/rest/api/cognitiveservices/azureopenaistable/models/list).
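-
-The sketch below is one hedged way to call that operation from Python with the `requests` package; the endpoint shape, API version, and response fields are assumptions to verify against the REST reference.
-
-```python
-# Hedged sketch: call the data-plane Models List operation with the requests library.
-# The path, api-version, and response fields are assumptions; confirm against the REST reference.
-import os
-import requests
-
-endpoint = os.getenv("AZURE_OPENAI_ENDPOINT")  # e.g. https://YOUR-RESOURCE.openai.azure.com
-api_key = os.getenv("AZURE_OPENAI_KEY")
-
-response = requests.get(
-    f"{endpoint}/openai/models",
-    params={"api-version": "2023-05-15"},
-    headers={"api-key": api_key},
-)
-response.raise_for_status()
-
-for model in response.json().get("data", []):
-    # Each entry reports the model ID and its capabilities (for example, inference or fine-tuning).
-    print(model.get("id"), model.get("capabilities"))
-```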
-
-### Model updates
-
-Azure OpenAI now supports automatic updates for select model deployments. On models where automatic update support is available, a model version drop-down will be visible in Azure OpenAI Studio under **Create new deployment** and **Edit deployment**.
--
-### Auto update to default
-
-When **Auto-update to default** is selected, your model deployment is automatically updated within two weeks of a new version being released.
-
-If you are still in the early testing phases for completion and chat completion based models, we recommend deploying models with **auto-update to default** set whenever it is available.
-
-### Specific model version
-
-As your use of Azure OpenAI evolves and you start to build and integrate with applications, you will likely want to manually control model updates so that you can first test and validate that model performance remains consistent for your use case before upgrading.
-
-When you select a specific model version for a deployment, this version remains selected until you either choose to manually update it yourself or the model reaches its retirement date. When the retirement date is reached, the model auto-upgrades to the default version at the time of retirement.
-
-### GPT-35-Turbo 0301 and GPT-4 0314 retirement
-
-The `gpt-35-turbo` (`0301`) model and both `gpt-4` (`0314`) models will be retired on January 4, 2024. Upon retirement, deployments are automatically upgraded to the default version at the time of retirement. If you would prefer your deployment to stop accepting completion requests rather than upgrade, you will be able to set the model upgrade option to expire through the API. We will publish guidelines on this by September 1.
-
-### Viewing deprecation dates
-
-For currently deployed models, from Azure OpenAI Studio select **Deployments**.
--
-To view deprecation/expiration dates for all available models in a given region, from Azure OpenAI Studio select **Models** > **Column options**, and then select **Deprecation fine tune** and **Deprecation inference**.
--
-### Update & deploy models via the API
-
-```http
-PUT https://management.azure.com/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.CognitiveServices/accounts/{accountName}/deployments/{deploymentName}?api-version=2023-05-01
-```
-
-**Path parameters**
-
-| Parameter | Type | Required? | Description |
-|--|--|--|--|
-| ```accountName``` | string | Required | The name of your Azure OpenAI Resource. |
-| ```deploymentName``` | string | Required | The deployment name you chose when you deployed an existing model or the name you would like a new model deployment to have. |
-| ```resourceGroupName``` | string | Required | The name of the associated resource group for this model deployment. |
-| ```subscriptionId``` | string | Required | Subscription ID for the associated subscription. |
-| ```api-version``` | string | Required |The API version to use for this operation. This follows the YYYY-MM-DD format. |
-
-**Supported versions**
-
-- `2023-05-01` [Swagger spec](https://github.com/Azure/azure-rest-api-specs/blob/1e71ad94aeb8843559d59d863c895770560d7c93/specification/cognitiveservices/resource-manager/Microsoft.CognitiveServices/stable/2023-05-01/cognitiveservices.json)
-
-**Request body**
-
-This is only a subset of the available request body parameters. For the full list of the parameters, you can refer to the [REST API spec](https://github.com/Azure/azure-rest-api-specs/blob/1e71ad94aeb8843559d59d863c895770560d7c93/specification/cognitiveservices/resource-manager/Microsoft.CognitiveServices/stable/2023-05-01/cognitiveservices.json).
-
-|Parameter|Type| Description |
-|--|--|--|
-|versionUpgradeOption | String | Deployment model version upgrade options:<br>`OnceNewDefaultVersionAvailable`<br>`OnceCurrentVersionExpired`<br>`NoAutoUpgrade`|
-|capacity|integer|This represents the amount of [quota](../how-to/quota.md) you are assigning to this deployment. A value of 1 equals 1,000 Tokens per Minute (TPM).|
-
-#### Example request
-
-```Bash
-curl -X PUT "https://management.azure.com/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/resource-group-temp/providers/Microsoft.CognitiveServices/accounts/docs-openai-test-001/deployments/text-embedding-ada-002-test-1?api-version=2023-05-01" \
- -H "Content-Type: application/json" \
- -H 'Authorization: Bearer YOUR_AUTH_TOKEN' \
- -d '{"sku":{"name":"Standard","capacity":1},"properties": {"model": {"format": "OpenAI","name": "text-embedding-ada-002","version": "2"},"versionUpgradeOption":"OnceCurrentVersionExpired"}}'
-```
-
-> [!NOTE]
-> There are multiple ways to generate an authorization token. The easiest method for initial testing is to launch the Cloud Shell from https://portal.azure.com. Then run [`az account get-access-token`](/cli/azure/account?view=azure-cli-latest#az-account-get-access-token&preserve-view=true). You can use this token as your temporary authorization token for API testing.
-
-#### Example response
-
-```json
-{
- "id": "/subscriptions/{subscription-id}/resourceGroups/resource-group-temp/providers/Microsoft.CognitiveServices/accounts/docs-openai-test-001/deployments/text-embedding-ada-002-test-1",
- "type": "Microsoft.CognitiveServices/accounts/deployments",
- "name": "text-embedding-ada-002-test-1",
- "sku": {
- "name": "Standard",
- "capacity": 1
- },
- "properties": {
- "model": {
- "format": "OpenAI",
- "name": "text-embedding-ada-002",
- "version": "2"
- },
- "versionUpgradeOption": "OnceCurrentVersionExpired",
- "capabilities": {
- "embeddings": "true",
- "embeddingsMaxInputs": "1"
- },
- "provisioningState": "Succeeded",
- "ratelimits": [
- {
- "key": "request",
- "renewalPeriod": 10,
- "count": 2
- },
- {
- "key": "token",
- "renewalPeriod": 60,
- "count": 1000
- }
- ]
- },
- "systemData": {
- "createdBy": "docs@contoso.com",
- "createdByType": "User",
- "createdAt": "2023-06-13T00:12:38.885937Z",
- "lastModifiedBy": "docs@contoso.com",
- "lastModifiedByType": "User",
- "lastModifiedAt": "2023-06-13T02:41:04.8410965Z"
- },
- "etag": "\"{GUID}\""
-}
-```
-
-## Next steps
-
-- [Learn more about Azure OpenAI](../overview.md)
-- [Learn more about fine-tuning Azure OpenAI models](../how-to/fine-tuning.md)
cognitive-services Red Teaming https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/openai/concepts/red-teaming.md
- Title: Introduction to red teaming large language models (LLMs)
-description: Learn how red teaming and adversarial testing are an essential practice in the responsible development of systems and features using large language models (LLMs).
- Previously updated : 05/18/2023
-recommendations: false
-keywords:
-
-# Introduction to red teaming large language models (LLMs)
-
-The term *red teaming* has historically described systematic adversarial attacks for testing security vulnerabilities. With the rise of LLMs, the term has extended beyond traditional cybersecurity and evolved in common usage to describe many kinds of probing, testing, and attacking of AI systems. With LLMs, both benign and adversarial usage can produce potentially harmful outputs, which can take many forms, including harmful content such as hate speech, incitement or glorification of violence, or sexual content.
-
-**Red teaming is an essential practice in the responsible development of systems and features using LLMs**. While not a replacement for systematic [measurement and mitigation](/legal/cognitive-services/openai/overview?context=/azure/cognitive-services/openai/context/context) work, red teamers help to uncover and identify harms and, in turn, enable measurement strategies to validate the effectiveness of mitigations.
-
-Microsoft has conducted red teaming exercises and implemented safety systems (including [content filters](content-filter.md) and other [mitigation strategies](prompt-engineering.md)) for its Azure OpenAI Service models (see this [Responsible AI Overview](/legal/cognitive-services/openai/overview?context=/azure/cognitive-services/openai/context/context)). However, the context of your LLM application will be unique and you also should conduct red teaming to:
-
-- Test the LLM base model and determine whether there are gaps in the existing safety systems, given the context of your application system.
-- Identify and mitigate shortcomings in the existing default filters or mitigation strategies.
-- Provide feedback on failures so we can make improvements.
-
-Here is how you can get started in your process of red teaming LLMs. Advance planning is critical to a productive red teaming exercise.
-
-## Getting started
-
-### Managing your red team
-
-**Assemble a diverse group of red teamers.**
-
-LLM red teamers should be a mix of people with diverse social and professional backgrounds, demographic groups, and interdisciplinary expertise that fits the deployment context of your AI system. For example, if you’re designing a chatbot to help health care providers, medical experts can help identify risks in that domain.
-
-**Recruit red teamers with both benign and adversarial mindsets.**
-
-Having red teamers with an adversarial mindset and security-testing experience is essential for understanding security risks, but red teamers who are ordinary users of your application system and haven’t been involved in its development can bring valuable perspectives on harms that regular users might encounter.
-
-**Remember that handling potentially harmful content can be mentally taxing.**
-
-You will need to take care of your red teamers, not only by limiting the amount of time they spend on an assignment, but also by letting them know they can opt out at any time. Also, avoid burnout by switching red teamers’ assignments to different focus areas.
-
-### Planning your red teaming
-
-#### Where to test
-
-Because a system is developed using an LLM base model, you may need to test at several different layers:
-
-- The LLM base model with its [safety system](./content-filter.md) in place to identify any gaps that may need to be addressed in the context of your application system. (Testing is usually through an API endpoint.)
-- Your application system. (Testing is usually through a UI.)
-- Both the LLM base model and your application system before and after mitigations are in place.
-
-#### How to test
-
-Consider conducting iterative red teaming in at least two phases:
-
-1. Open-ended red teaming, where red teamers are encouraged to discover a variety of harms. This can help you develop a taxonomy of harms to guide further testing. Note that developing a taxonomy of undesired LLM outputs for your application system is crucial to being able to measure the success of specific mitigation efforts.
-2. Guided red teaming, where red teamers are assigned to focus on specific harms listed in the taxonomy while staying alert for any new harms that may emerge. Red teamers can also be instructed to focus testing on specific features of a system for surfacing potential harms.
-
-Be sure to:
-
-- Provide your red teamers with clear instructions for what harms or system features they will be testing.
-- Give your red teamers a place for recording their findings. For example, this could be a simple spreadsheet specifying the types of data that red teamers should provide, including basics such as:
- - The type of harm that was surfaced.
- - The input prompt that triggered the output.
- - An excerpt from the problematic output.
- - Comments about why the red teamer considered the output problematic.
-- Maximize the effort of responsible AI red teamers who have expertise for testing specific types of harms or undesired outputs. For example, have security subject matter experts focus on jailbreaks, metaprompt extraction, and content related to aiding cyberattacks.
-
-### Reporting red teaming findings
-
-You will want to summarize and report top red teaming findings at regular intervals to key stakeholders, including teams involved in the measurement and mitigation of LLM failures, so that the findings can inform critical decision-making and prioritization.
-
-## Next steps
-
-[Learn about other mitigation strategies like prompt engineering](./prompt-engineering.md)
cognitive-services System Message https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/openai/concepts/system-message.md
- Title: System message framework and template recommendations for Large Language Models (LLMs)
-description: Learn how to construct system messages, also known as metaprompts, to guide an AI system's behavior.
- Previously updated : 05/19/2023
-recommendations: false
-keywords:
-
-# System message framework and template recommendations for Large Language Models (LLMs)
-
-This article provides a recommended framework and example templates to help write an effective system message, sometimes referred to as a metaprompt or [system prompt](/azure/cognitive-services/openai/concepts/advanced-prompt-engineering?pivots=programming-language-completions#meta-prompts) that can be used to guide an AI system’s behavior and improve system performance. If you're new to prompt engineering, we recommend starting with our [introduction to prompt engineering](prompt-engineering.md) and [prompt engineering techniques guidance](advanced-prompt-engineering.md).
-
-This guide provides system message recommendations and resources that, along with other prompt engineering techniques, can help increase the accuracy and grounding of responses you generate with a Large Language Model (LLM). However, it is important to remember that even when using these templates and guidance, you still need to validate the responses the models generate. Just because a carefully crafted system message worked well for a particular scenario doesn't necessarily mean it will work more broadly across other scenarios. Understanding the [limitations of LLMs](/legal/cognitive-services/openai/transparency-note?context=/azure/cognitive-services/openai/context/context#limitations) and the [mechanisms for evaluating and mitigating those limitations](/legal/cognitive-services/openai/overview?context=/azure/cognitive-services/openai/context/context) is just as important as understanding how to leverage their strengths.
-
-The LLM system message framework described here covers four concepts:
-
-- Define the model’s profile, capabilities, and limitations for your scenario
-- Define the model’s output format
-- Provide example(s) to demonstrate the intended behavior of the model
-- Provide additional behavioral guardrails
-
-## Define the model’s profile, capabilities, and limitations for your scenario
-
-- **Define the specific task(s)** you would like the model to complete. Describe who the users of the model will be, what inputs they will provide to the model, and what you expect the model to do with the inputs.
-
-- **Define how the model should complete the tasks**, including any additional tools (like APIs, code, plug-ins) the model can use. If it doesn’t use additional tools, it can rely on its own parametric knowledge.
-
-- **Define the scope and limitations** of the model’s performance. Provide clear instructions on how the model should respond when faced with any limitations. For example, define how the model should respond if prompted on subjects or for uses that are off topic or otherwise outside of what you want the system to do.
-
-- **Define the posture and tone** the model should exhibit in its responses.
-
-## Define the model's output format
-
-When using the system message to define the model’s desired output format in your scenario, consider and include the following types of information:
-
-- **Define the language and syntax** of the output format. If you want the output to be machine parse-able, you may want the output to be in formats like JSON, XSON or XML.
-
-- **Define any styling or formatting** preferences for better user or machine readability. For example, you may want relevant parts of the response to be bolded or citations to be in a specific format.
-
-## Provide example(s) to demonstrate the intended behavior of the model
-
-When using the system message to demonstrate the intended behavior of the model in your scenario, it is helpful to provide specific examples. When providing examples, consider the following:
-
-- Describe difficult use cases where the prompt is ambiguous or complicated, to give the model additional visibility into how to approach such cases.
-- Show the potential “inner monologue” and chain-of-thought reasoning to better inform the model on the steps it should take to achieve the desired outcomes.
-
-## Define additional behavioral guardrails
-
-When defining additional safety and behavioral guardrails, it’s helpful to first identify and prioritize [the harms](/legal/cognitive-services/openai/overview?context=/azure/cognitive-services/openai/context/context) you’d like to address. Depending on the application, the sensitivity and severity of certain harms could be more important than others.
-
-## Next steps
-
-- Learn more about [Azure OpenAI](../overview.md)
-- Learn more about [deploying Azure OpenAI responsibly](/legal/cognitive-services/openai/overview?context=/azure/cognitive-services/openai/context/context)
-- For more examples, check out the [Azure OpenAI Samples GitHub repository](https://github.com/Azure-Samples/openai)
cognitive-services Use Your Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/openai/concepts/use-your-data.md
- Title: 'Using your data with Azure OpenAI Service'
-description: Use this article to learn about using your data for better text generation in Azure OpenAI.
- Previously updated : 07/12/2023
-recommendations: false
-
-# Azure OpenAI on your data (preview)
-
-Azure OpenAI on your data enables you to run supported chat models such as GPT-35-Turbo and GPT-4 on your data without needing to train or fine-tune models. Running models on your data enables you to chat on top of, and analyze, your data with greater accuracy and speed. By doing so, you can unlock valuable insights that can help you make better business decisions, identify trends and patterns, and optimize your operations. One of the key benefits of Azure OpenAI on your data is its ability to tailor the content of conversational AI.
-
-To get started, [connect your data source](../use-your-data-quickstart.md) using [Azure OpenAI Studio](https://oai.azure.com/) and start asking questions and chatting on your data.
-
-Because the model has access to, and can reference specific sources to support its responses, answers are not only based on its pretrained knowledge but also on the latest information available in the designated data source. This grounding data also helps the model avoid generating responses based on outdated or incorrect information.
-
-> [!NOTE]
-> To get started, you need to already have been approved for [Azure OpenAI access](../overview.md#how-do-i-get-access-to-azure-openai) and have an [Azure OpenAI Service resource](../how-to/create-resource.md) with either the gpt-35-turbo or the gpt-4 models deployed.
-
-## What is Azure OpenAI on your data
-
-Azure OpenAI on your data works with OpenAI's powerful GPT-35-Turbo and GPT-4 language models, enabling them to provide responses based on your data. You can access Azure OpenAI on your data using a REST API or the web-based interface in the [Azure OpenAI Studio](https://oai.azure.com/) to create a solution that connects to your data to enable an enhanced chat experience.
-
-One of the key features of Azure OpenAI on your data is its ability to retrieve and utilize data in a way that enhances the model's output. Azure OpenAI on your data, together with Azure Cognitive Search, determines what data to retrieve from the designated data source based on the user input and provided conversation history. This data is then augmented and resubmitted as a prompt to the OpenAI model, with retrieved information being appended to the original prompt. Although retrieved data is being appended to the prompt, the resulting input is still processed by the model like any other prompt. Once the data has been retrieved and the prompt has been submitted to the model, the model uses this information to provide a completion. See the [Data, privacy, and security for Azure OpenAI Service](/legal/cognitive-services/openai/data-privacy?context=%2Fazure%2Fcognitive-services%2Fopenai%2Fcontext%2Fcontext) article for more information.
-
-## Data source options
-
-Azure OpenAI on your data uses an [Azure Cognitive Search](/azure/search/search-what-is-azure-search) index to determine what data to retrieve based on user inputs and provided conversation history. We recommend using Azure OpenAI Studio to create your index from a blob storage or local files. See the [quickstart article](../use-your-data-quickstart.md?pivots=programming-language-studio) for more information.
-
-## Ingesting your data into Azure cognitive search
-
-For documents and datasets with long text, you should use the available [data preparation script](https://github.com/microsoft/sample-app-aoai-chatGPT/tree/main/scripts) to ingest the data into cognitive search. The script chunks the data so that responses from the service are more accurate. This script also supports scanned PDF files and images and ingests the data using [Form Recognizer](../../../applied-ai-services/form-recognizer/overview.md).
--
-## Data formats and file types
-
-Azure OpenAI on your data supports the following filetypes:
-
-* `.txt`
-* `.md`
-* `.html`
-* Microsoft Word files
-* Microsoft PowerPoint files
-* PDF
-
-There are some caveats about document structure and how it might affect the quality of responses from the model:
-
-* The model provides the best citation titles from markdown (`.md`) files.
-
-* If a document is a PDF file, the text contents are extracted as a preprocessing step (unless you're connecting your own Azure Cognitive Search index). If your document contains images, graphs, or other visual content, the model's response quality depends on the quality of the text that can be extracted from them.
-
-* If you're converting data from an unsupported format into a supported format, make sure the conversion:
-
- * Doesn't lead to significant data loss.
- * Doesn't add unexpected noise to your data.
-
- This will impact the quality of Azure Cognitive Search and the model response.
-
-## Virtual network support & private link support
-
-Azure OpenAI on your data does not currently support private endpoints.
-
-## Azure Role-based access controls (Azure RBAC)
-
-To add a new data source to your Azure OpenAI resource, you need the following Azure RBAC roles.
--
-|Azure RBAC role |Needed when |
-|---|---|
-|[Cognitive Services OpenAI Contributor](/azure/role-based-access-control/built-in-roles#cognitive-services-openai-contributor) | You want to use Azure OpenAI on your data. |
-|[Search Index Data Contributor](/azure/role-based-access-control/built-in-roles#search-index-data-contributor) | You have an existing Azure Cognitive Search index that you want to use, instead of creating a new one. |
-|[Storage Blob Data Contributor](/azure/role-based-access-control/built-in-roles#storage-blob-data-contributor) | You have an existing Blob storage container that you want to use, instead of creating a new one. |
-
-## Recommended settings
-
-Use the following sections to help you configure Azure OpenAI on your data for optimal results.
-
-### System message
-
-Give the model instructions about how it should behave and any context it should reference when generating a response. You can describe the assistant’s personality, what it should and shouldn’t answer, and how to format responses. There’s no token limit for the system message, but it will be included with every API call and counted against the overall token limit. The system message will be truncated if it's greater than 200 tokens.
-
-For example, if you're creating a chatbot where the data consists of transcriptions of quarterly financial earnings calls, you might use the following system message:
-
-*"You are a financial chatbot useful for answering questions from financial reports. You are given excerpts from the earnings call. Please answer the questions by parsing through all dialogue."*
-
-This system message can help improve the quality of the response by specifying the domain (in this case finance) and mentioning that the data consists of call transcriptions. It helps set the necessary context for the model to respond appropriately.
-
-> [!NOTE]
-> The system message is only guidance. The model might not adhere to every instruction specified because it has been primed with certain behaviors, such as objectivity and avoiding controversial statements. Unexpected behavior may occur if the system message contradicts these behaviors.
-
-### Maximum response
-
-Set a limit on the number of tokens per model response. The upper limit for Azure OpenAI on Your Data is 1500. This is equivalent to setting the `max_tokens` parameter in the API.
-
-### Limit responses to your data
-
-This option encourages the model to respond using your data only, and is selected by default. If you unselect this option, the model may more readily apply its internal knowledge to respond. Determine the correct selection based on your use case and scenario.
-
-### Semantic search
-
-> [!IMPORTANT]
-> * Semantic search is subject to [additional pricing](/azure/search/semantic-search-overview#availability-and-pricing)
-> * Currently Azure OpenAI on your data supports semantic search for English data only. Only enable semantic search if both your documents and use case are in English.
-
-If [semantic search](/azure/search/semantic-search-overview) is enabled for your Azure Cognitive Search service, you are more likely to produce better retrieval of your data, which can improve response and citation quality.
-
-### Index field mapping
-
-If you're using your own index, you will be prompted in the Azure OpenAI Studio to define which fields you want to map for answering questions when you add your data source. You can provide multiple fields for *Content data*, and should include all fields that have text pertaining to your use case.
--
-In this example, the fields mapped to **Content data** and **Title** provide information to the model to answer questions. **Title** is also used to title citation text. The field mapped to **File name** generates the citation names in the response.
-
-Mapping these fields correctly helps ensure the model has better response and citation quality.
-
-### Interacting with the model
-
-Use the following practices for best results when chatting with the model.
-
-**Conversation history**
-
-* Before starting a new conversation (or asking a question that is not related to the previous ones), clear the chat history.
-* Getting different responses for the same question between the first conversational turn and subsequent turns can be expected because the conversation history changes the current state of the model. If you receive incorrect answers, report it as a quality bug.
-
-**Model response**
-
-* If you are not satisfied with the model response for a specific question, try either making the question more specific or more generic to see how the model responds, and reframe your question accordingly.
-
-* [Chain-of-thought prompting](advanced-prompt-engineering.md?pivots=programming-language-chat-completions#chain-of-thought-prompting) has been shown to be effective in getting the model to produce desired outputs for complex questions/tasks.
-
-**Question length**
-
-Avoid asking long questions and break them down into multiple questions if possible. The GPT models have limits on the number of tokens they can accept. The following count toward the token limit: the user question, the system message, the retrieved search documents (chunks), internal prompts, the conversation history (if any), and the response. If the question exceeds the token limit, it will be truncated.
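-
-If you want a rough, local estimate of how long a question is before sending it, one option (an illustrative sketch, not a service feature) is to count tokens with the `tiktoken` package; the encoding name is an assumption that matches the GPT-3.5 Turbo and GPT-4 family.
-
-```python
-# Hedged sketch: local token counting with tiktoken (pip install tiktoken).
-# "cl100k_base" is assumed to match the GPT-3.5 Turbo / GPT-4 tokenizer; verify for your model.
-import tiktoken
-
-encoding = tiktoken.get_encoding("cl100k_base")
-
-question = "What were the key revenue drivers mentioned in the Q2 earnings call?"
-token_count = len(encoding.encode(question))
-print(f"Question uses {token_count} tokens, before the system message, history, and retrieved chunks.")
-```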
-
-**Multi-lingual support**
-
-* Azure OpenAI on your data supports queries that are in the same language as the documents. For example, if your data is in Japanese, then queries need to be in Japanese too.
-
-* Currently Azure OpenAI on your data supports [semantic search](/azure/search/semantic-search-overview) for English data only. Don't enable semantic search if your data is in other languages.
-
-* We recommend using a system message to inform the model that your data is in another language. For example:
-
-    *“You are an AI assistant that helps people find information. You retrieve Japanese documents, and you should read them carefully in Japanese and answer in Japanese.”*
-
-* If you have documents in multiple languages, we recommend building a new index for each language and connecting them separately to Azure OpenAI.
-
-### Using the web app
-
-You can use the available web app to interact with your model using a graphical user interface, which you can deploy using either [Azure OpenAI studio](../use-your-data-quickstart.md?pivots=programming-language-studio#deploy-a-web-app) or a [manual deployment](https://github.com/microsoft/sample-app-aoai-chatGPT).
-
-![A screenshot of the web app interface.](../media/use-your-data/web-app.png)
-
-You can also customize the app's frontend and backend logic. For example, you could change the icon that appears in the center of the app by updating `/frontend/src/assets/Azure.svg` and then redeploying the app [using the Azure CLI](https://github.com/microsoft/sample-app-aoai-chatGPT#deploy-with-the-azure-cli). See the source code for the web app, and more information [on GitHub](https://github.com/microsoft/sample-app-aoai-chatGPT).
-
-When customizing the app, we recommend:
-
-- Resetting the chat session (clear chat) if the user changes any settings. Notify the user that their chat history will be lost.
-
-- Clearly communicating the impact on the user experience that each setting you implement will have.
-
-- When you rotate API keys for your Azure OpenAI or Azure Cognitive Search resource, be sure to update the app settings for each of your deployed apps to use the new keys.
-
-- Pulling changes from the `main` branch for the web app's source code frequently to ensure you have the latest bug fixes and improvements.
-
-### Using the API
-
-Consider setting the following parameters even if they are optional for using the API.
--
-|Parameter |Recommendation |
-|---|---|
-|`fieldsMapping` | Explicitly set the title and content fields of your index. This impacts the search retrieval quality of Azure Cognitive Search, which impacts the overall response and citation quality. |
-|`roleInformation` | Corresponds to the “System Message” in the Azure OpenAI Studio. See the [System message](#system-message) section above for recommendations. |
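-
-As a hedged sketch of how these two parameters fit into a request body (the extensions endpoint path, API version, and `fieldsMapping` property names are assumptions to verify against the reference for your API version):
-
-```python
-# Hedged sketch only: shape of a request that sets fieldsMapping and roleInformation.
-# Endpoint path, api-version, and fieldsMapping property names are assumptions.
-import os
-import requests
-
-endpoint = os.getenv("AZURE_OPENAI_ENDPOINT")   # e.g. https://YOUR-RESOURCE.openai.azure.com
-api_key = os.getenv("AZURE_OPENAI_KEY")
-deployment = "gpt-35-turbo"                     # placeholder deployment name
-
-body = {
-    "dataSources": [
-        {
-            "type": "AzureCognitiveSearch",
-            "parameters": {
-                "endpoint": os.getenv("SEARCH_ENDPOINT"),
-                "key": os.getenv("SEARCH_KEY"),
-                "indexName": os.getenv("SEARCH_INDEX"),
-                # Explicitly map the title and content fields of your index (names are assumptions).
-                "fieldsMapping": {
-                    "titleField": "title",
-                    "contentFields": ["content"],
-                    "filepathField": "filepath",
-                },
-                # Serves the same purpose as the "System Message" box in Azure OpenAI Studio.
-                "roleInformation": "You are a financial chatbot that answers questions from earnings call transcripts.",
-            },
-        }
-    ],
-    "messages": [{"role": "user", "content": "Summarize the latest earnings call."}],
-    "max_tokens": 800,
-}
-
-response = requests.post(
-    f"{endpoint}/openai/deployments/{deployment}/extensions/chat/completions",
-    params={"api-version": "2023-06-01-preview"},
-    headers={"api-key": api_key, "Content-Type": "application/json"},
-    json=body,
-)
-print(response.json())
-```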
-
-#### Streaming data
-
-You can send a streaming request using the `stream` parameter, allowing data to be sent and received incrementally, without waiting for the entire API response. This can improve performance and user experience, especially for large or dynamic data.
-
-```json
-{
- "stream": true,
- "dataSources": [
- {
- "type": "AzureCognitiveSearch",
- "parameters": {
- "endpoint": "'$SearchEndpoint'",
- "key": "'$SearchKey'",
- "indexName": "'$SearchIndex'"
- }
- }
- ],
- "messages": [
- {
- "role": "user",
- "content": "What are the differences between Azure Machine Learning and Azure Cognitive Services?"
- }
- ]
-}
-```
-
-#### Conversation history for better results
-
-When chatting with a model, providing a history of the chat will help the model return higher quality results.
-
-```json
-{
- "dataSources": [
- {
- "type": "AzureCognitiveSearch",
- "parameters": {
- "endpoint": "'$SearchEndpoint'",
- "key": "'$SearchKey'",
- "indexName": "'$SearchIndex'"
- }
- }
- ],
- "messages": [
- {
- "role": "user",
- "content": "What are the differences between Azure Machine Learning and Azure Cognitive Services?"
- },
- {
- "role": "tool",
- "content": "{\"citations\": [{\"content\": \" Title: Cognitive Services and Machine Learning\\n
- },
- {
- "role": "assistant",
- "content": " \nAzure Machine Learning is a product and service tailored for data scientists to build, train, and deploy machine learning models [doc1]..."
- },
- {
- "role": "user",
- "content": "How do I use Azure machine learning?"
- }
- ]
-}
-```
--
-## Next steps
-* [Get started using your data with Azure OpenAI](../use-your-data-quickstart.md)
-
-* [Introduction to prompt engineering](./prompt-engineering.md)
--
cognitive-services Encrypt Data At Rest https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/openai/encrypt-data-at-rest.md
- Title: Azure OpenAI Service encryption of data at rest
-description: Learn how Azure OpenAI encrypts your data when it's persisted to the cloud.
- Previously updated : 11/14/2022
-
-# Azure OpenAI Service encryption of data at rest
-
-Azure OpenAI automatically encrypts your data when it's persisted to the cloud. The encryption protects your data and helps you meet your organizational security and compliance commitments. This article covers how Azure OpenAI handles encryption of data at rest, specifically training data and fine-tuned models. For information on how data provided by you to the service is processed, used, and stored, consult the [data, privacy, and security article](/legal/cognitive-services/openai/data-privacy?context=%2Fazure%2Fcognitive-services%2Fopenai%2Fcontext%2Fcontext).
-
-## About Cognitive Services encryption
-
-Azure OpenAI is part of Azure Cognitive Services. Cognitive Services data is encrypted and decrypted using [FIPS 140-2](https://en.wikipedia.org/wiki/FIPS_140-2) compliant [256-bit AES](https://en.wikipedia.org/wiki/Advanced_Encryption_Standard) encryption. Encryption and decryption are transparent, meaning encryption and access are managed for you. Your data is secure by default and you don't need to modify your code or applications to take advantage of encryption.
-
-## About encryption key management
-
-By default, your subscription uses Microsoft-managed encryption keys. There's also the option to manage your subscription with your own keys called customer-managed keys (CMK). CMK offers greater flexibility to create, rotate, disable, and revoke access controls. You can also audit the encryption keys used to protect your data.
-
-## Customer-managed keys with Azure Key Vault
-
-Customer-managed keys (CMK), also known as Bring your own key (BYOK), offer greater flexibility to create, rotate, disable, and revoke access controls. You can also audit the encryption keys used to protect your data.
-
-You must use Azure Key Vault to store your customer-managed keys. You can either create your own keys and store them in a key vault, or you can use the Azure Key Vault APIs to generate keys. The Cognitive Services resource and the key vault must be in the same region and in the same Azure Active Directory (Azure AD) tenant, but they can be in different subscriptions. For more information about Azure Key Vault, see [What is Azure Key Vault?](../../key-vault/general/overview.md).
-
-To request the ability to use customer-managed keys, fill out and submit the [Cognitive Services Customer-Managed Key Request Form](https://aka.ms/cogsvc-cmk). It will take approximately 3-5 business days to hear back on the status of your request.
-
-To enable customer-managed keys, you must also enable both the **Soft Delete** and **Do Not Purge** properties on the key vault.
-
-Only RSA keys of size 2048 are supported with Cognitive Services encryption. For more information about keys, see **Key Vault keys** in [About Azure Key Vault keys, secrets and certificates](../../key-vault/general/about-keys-secrets-certificates.md).
-
-## Enable customer-managed keys for your resource
-
-To enable customer-managed keys in the Azure portal, follow these steps:
-
-1. Go to your Cognitive Services resource.
-1. On the left, select **Encryption**.
-1. Under **Encryption type**, select **Customer Managed Keys**, as shown in the following screenshot.
-
-> [!div class="mx-imgBorder"]
-> ![Screenshot of create a resource user experience](./media/encryption/encryption.png)
-
-## Specify a key
-
-After you enable customer-managed keys, you can specify a key to associate with the Cognitive Services resource.
-
-### Specify a key as a URI
-
-To specify a key as a URI, follow these steps:
-
-1. In the Azure portal, go to your key vault.
-1. Under **Settings**, select **Keys**.
-1. Select the desired key, and then select the key to view its versions. Select a key version to view the settings for that version.
-1. Copy the **Key Identifier** value, which provides the URI.
-
- :::image type="content" source="../media/cognitive-services-encryption/key-uri-portal.png" alt-text="Screenshot of the Azure portal page for a key version. The Key Identifier box contains a placeholder for a key URI.":::
-
-1. Go back to your Cognitive Services resource, and then select **Encryption**.
-1. Under **Encryption key**, select **Enter key URI**.
-1. Paste the URI that you copied into the **Key URI** box.
-
- :::image type="content" source="../media/cognitive-services-encryption/ssecmk2.png" alt-text="Screenshot of the Encryption page for a Cognitive Services resource. The Enter key URI option is selected, and the Key URI box contains a value.":::
-
-1. Under **Subscription**, select the subscription that contains the key vault.
-1. Save your changes.
-
-### Specify a key from a key vault
-
-To specify a key from a key vault, first make sure that you have a key vault that contains a key. Then follow these steps:
-
-1. Go to your Cognitive Services resource, and then select **Encryption**.
-1. Under **Encryption key**, select **Select from Key Vault**.
-1. Select the key vault that contains the key that you want to use.
-1. Select the key that you want to use.
-
- :::image type="content" source="../media/cognitive-services-encryption/ssecmk3.png" alt-text="Screenshot of the Select key from Azure Key Vault page in the Azure portal. The Subscription, Key vault, Key, and Version boxes contain values.":::
-
-1. Save your changes.
-
-## Update the key version
-
-When you create a new version of a key, update the Cognitive Services resource to use the new version. Follow these steps:
-
-1. Go to your Cognitive Services resource, and then select **Encryption**.
-1. Enter the URI for the new key version. Alternately, you can select the key vault and then select the key again to update the version.
-1. Save your changes.
-
-## Use a different key
-
-To change the key that you use for encryption, follow these steps:
-
-1. Go to your Cognitive Services resource, and then select **Encryption**.
-1. Enter the URI for the new key. Alternately, you can select the key vault and then select a new key.
-1. Save your changes.
-
-## Rotate customer-managed keys
-
-You can rotate a customer-managed key in Key Vault according to your compliance policies. When the key is rotated, you must update the Cognitive Services resource to use the new key URI. To learn how to update the resource to use a new version of the key in the Azure portal, see [Update the key version](#update-the-key-version).
-
-Rotating the key doesn't trigger re-encryption of data in the resource. No further action is required from the user.
-
-## Revoke a customer-managed key
-
-You can revoke a customer-managed encryption key by changing the access policy, by changing the permissions on the key vault, or by deleting the key.
-
-To change the access policy of the managed identity that your resource uses, run the [az-keyvault-delete-policy](/cli/azure/keyvault#az-keyvault-delete-policy) command:
-
-```azurecli
-az keyvault delete-policy \
- --resource-group <resource-group-name> \
- --name <key-vault-name> \
- --object-id <managed-identity-principal-id>
-```
-
-To delete the individual versions of a key, run the [az-keyvault-key-delete](/cli/azure/keyvault/key#az-keyvault-key-delete) command. This operation requires the *keys/delete* permission.
-
-```azurecli
-az keyvault key delete \
- --vault-name <key-vault-name> \
- --name <key-name>
-```
-
-> [!IMPORTANT]
-> Revoking access to an active customer-managed key while CMK is still enabled will prevent downloading of training data and results files, fine-tuning new models, and deploying fine-tuned models. However, previously deployed fine-tuned models will continue to operate and serve traffic until those deployments are deleted.
-
-### Delete training, validation, and training results data
-
- The Files API allows customers to upload their training data for the purpose of fine-tuning a model. This data is stored in Azure Storage, within the same region as the resource and logically isolated with their Azure subscription and API Credentials. Uploaded files can be deleted by the user via the [DELETE API operation](./how-to/fine-tuning.md?pivots=programming-language-python#delete-your-training-files).
-
-### Delete fine-tuned models and deployments
-
-The Fine-tunes API allows customers to create their own fine-tuned version of the OpenAI models based on the training data that you've uploaded to the service via the Files APIs. The trained fine-tuned models are stored in Azure Storage in the same region, encrypted at rest (either with Microsoft-managed keys or customer-managed keys) and logically isolated with their Azure subscription and API credentials. Fine-tuned models and deployments can be deleted by the user by calling the [DELETE API operation](./how-to/fine-tuning.md?pivots=programming-language-python#delete-your-model-deployment).
-
-## Disable customer-managed keys
-
-When you disable customer-managed keys, your Cognitive Services resource is then encrypted with Microsoft-managed keys. To disable customer-managed keys, follow these steps:
-
-1. Go to your Cognitive Services resource, and then select **Encryption**.
-1. Select **Microsoft Managed Keys** > **Save**.
-
-When you previously enabled customer-managed keys, this also enabled a system-assigned managed identity, a feature of Azure AD. Once the system-assigned managed identity is enabled, this resource is registered with Azure Active Directory. After being registered, the managed identity is given access to the key vault selected during customer-managed key setup. You can learn more about [Managed Identities](../../active-directory/managed-identities-azure-resources/overview.md).
-
-> [!IMPORTANT]
-> If you disable system assigned managed identities, access to the key vault will be removed and any data encrypted with the customer keys will no longer be accessible. Any features that depend on this data will stop working.
-
-> [!IMPORTANT]
-> Managed identities do not currently support cross-directory scenarios. When you configure customer-managed keys in the Azure portal, a managed identity is automatically assigned under the covers. If you subsequently move the subscription, resource group, or resource from one Azure AD directory to another, the managed identity associated with the resource is not transferred to the new tenant, so customer-managed keys may no longer work. For more information, see **Transferring a subscription between Azure AD directories** in [FAQs and known issues with managed identities for Azure resources](../../active-directory/managed-identities-azure-resources/known-issues.md#transferring-a-subscription-between-azure-ad-directories).
-
-## Next steps
-
-* [Language service Customer-Managed Key Request Form](https://aka.ms/cogsvc-cmk)
-* [Learn more about Azure Key Vault](../../key-vault/general/overview.md)
cognitive-services Content Filters https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/openai/how-to/content-filters.md
- Title: 'How to use content filters (preview) with Azure OpenAI Service'
-description: Learn how to use content filters (preview) with Azure OpenAI Service
- Previously updated : 6/5/2023
-recommendations: false
-keywords:
-
-# How to configure content filters with Azure OpenAI Service
-
-> [!NOTE]
-> All customers have the ability to modify the content filters to be stricter (for example, to filter content at lower severity levels than the default). Approval is required for full content filtering control, including (i) configuring content filters at severity level high only, or (ii) turning the content filters off. Managed customers only may apply for full content filtering control via this form: [Azure OpenAI Limited Access Review: Modified Content Filters and Abuse Monitoring (microsoft.com)](https://customervoice.microsoft.com/Pages/ResponsePage.aspx?id=v4j5cvGGr0GRqy180BHbR7en2Ais5pxKtso_Pz4b1_xURE01NDY1OUhBRzQ3MkQxMUhZSE1ZUlJKTiQlQCN0PWcu).
-
-The content filtering system integrated into Azure OpenAI Service runs alongside the core models and uses an ensemble of multi-class classification models to detect four categories of harmful content (violence, hate, sexual, and self-harm) at four severity levels respectively (safe, low, medium, and high). The default content filtering configuration is set to filter at the medium severity threshold for all four content harms categories for both prompts and completions. That means that content that is detected at severity level medium or high is filtered, while content detected at severity level low or safe is not filtered by the content filters. Learn more about content categories, severity levels, and the behavior of the content filtering system [here](../concepts/content-filter.md).
-
-Content filters can be configured at resource level. Once a new configuration is created, it can be associated with one or more deployments. For more information about model deployment, see the [resource deployment guide](create-resource.md).
-
-The configurability feature is available in preview and allows customers to adjust the settings, separately for prompts and completions, to filter content for each content category at different severity levels as described in the table below. Content detected at the 'safe' severity level is labeled in annotations but is not subject to filtering and is not configurable.
-
-| Severity filtered | Configurable for prompts | Configurable for completions | Descriptions |
-|-|--||--|
-| Low, medium, high | Yes | Yes | Strictest filtering configuration. Content detected at severity levels low, medium and high is filtered.|
-| Medium, high | Yes | Yes | Default setting. Content detected at severity level low is not filtered, content at medium and high is filtered.|
-| High | If approved<sup>\*</sup>| If approved<sup>\*</sup> | Content detected at severity levels low and medium is not filtered. Only content at severity level high is filtered. Requires approval<sup>\*</sup>.|
-| No filters | If approved<sup>\*</sup>| If approved<sup>\*</sup>| No content is filtered regardless of severity level detected. Requires approval<sup>\*</sup>.|
-
-<sup>\*</sup> Only approved customers have full content filtering control, including configuring content filters at severity level high only or turning the content filters off. Only managed customers can apply for full content filtering control via this form: [Azure OpenAI Limited Access Review: Modified Content Filters and Abuse Monitoring (microsoft.com)](https://customervoice.microsoft.com/Pages/ResponsePage.aspx?id=v4j5cvGGr0GRqy180BHbR7en2Ais5pxKtso_Pz4b1_xURE01NDY1OUhBRzQ3MkQxMUhZSE1ZUlJKTiQlQCN0PWcu)
-
-## Configuring content filters via Azure OpenAI Studio (preview)
-
-The following steps show how to set up a customized content filtering configuration for your resource.
-
-1. Go to Azure OpenAI Studio and navigate to the Content Filters tab (in the bottom left navigation, as designated by the red box below).
-
- :::image type="content" source="../media/content-filters/studio.png" alt-text="Screenshot of the AI Studio UI with Content Filters highlighted" lightbox="../media/content-filters/studio.png":::
-
-2. Create a new customized content filtering configuration.
-
- :::image type="content" source="../media/content-filters/create-filter.png" alt-text="Screenshot of the content filtering configuration UI with create selected" lightbox="../media/content-filters/create-filter.png":::
-
- This leads to the following configuration view, where you can choose a name for the custom content filtering configuration.
-
- :::image type="content" source="../media/content-filters/filter-view.png" alt-text="Screenshot of the content filtering configuration UI" lightbox="../media/content-filters/filter-view.png":::
-
-3. This is the view of the default content filtering configuration, where content is filtered at medium and high severity levels for all categories. You can modify the content filtering severity level for both prompts and completions separately (configuration for prompts is in the left column and configuration for completions is in the right column, as designated with the blue boxes below) for each of the four content categories (content categories are listed on the left side of the screen, as designated with the green box below). There are three severity levels for each category that are partially or fully configurable: Low, medium, and high (labeled at the top of each column, as designated with the red box below).
-
- :::image type="content" source="../media/content-filters/severity-level.png" alt-text="Screenshot of the content filtering configuration UI with user prompts and model completions highlighted" lightbox="../media/content-filters/severity-level.png":::
-
-4. If you determine that your application or usage scenario requires stricter filtering for some or all content categories, you can configure the settings, separately for prompts and completions, to filter at more severity levels than the default setting. An example is shown in the image below, where the filtering level for user prompts is set to the strictest configuration for hate and sexual, with low severity content filtered along with content classified as medium and high severity (outlined in the red box below). In the example, the filtering levels for model completions are set at the strictest configuration for all content categories (blue box below). With this modified filtering configuration in place, low, medium, and high severity content will be filtered for the hate and sexual categories in user prompts; medium and high severity content will be filtered for the self-harm and violence categories in user prompts; and low, medium, and high severity content will be filtered for all content categories in model completions.
-
- :::image type="content" source="../media/content-filters/settings.png" alt-text="Screenshot of the content filtering configuration with low, medium, high, highlighted." lightbox="../media/content-filters/settings.png":::
-
-5. If your use case was approved for modified content filters as outlined above, you will receive full control over content filtering configurations. With full control, you can choose to turn filtering off, or filter only at severity level high, while accepting low and medium severity content. In the image below, filtering for the categories of self-harm and violence is turned off for user prompts (red box below), while default configurations are retained for other categories for user prompts. For model completions, only high severity content is filtered for the category self-harm (blue box below), and filtering is turned off for violence (green box below), while default configurations are retained for other categories.
-
- :::image type="content" source="../media/content-filters/off.png" alt-text="Screenshot of the content filtering configuration with self harm and violence set to off." lightbox="../media/content-filters/off.png":::
-
- You can create multiple content filtering configurations as per your requirements.
-
- :::image type="content" source="../media/content-filters/multiple.png" alt-text="Screenshot of the content filtering configuration with multiple content filters configured." lightbox="../media/content-filters/multiple.png":::
-
-6. Next, to make a custom content filtering configuration operational, assign a configuration to one or more deployments in your resource. To do this, go to the **Deployments** tab and select **Edit deployment** (outlined near the top of the screen in a red box below).
-
- :::image type="content" source="../media/content-filters/edit-deployment.png" alt-text="Screenshot of the content filtering configuration with edit deployment highlighted." lightbox="../media/content-filters/edit-deployment.png":::
-
-7. Go to advanced options (outlined in the blue box below) and select the content filter configuration suitable for that deployment from the **Content Filter** dropdown (outlined near the bottom of the dialog box in the red box below).
-
- :::image type="content" source="../media/content-filters/advanced.png" alt-text="Screenshot of edit deployment configuration with advanced options selected." lightbox="../media/content-filters/select-filter.png":::
-
-8. Select **Save and close** to apply the selected configuration to the deployment.
-
- :::image type="content" source="../media/content-filters/select-filter.png" alt-text="Screenshot of edit deployment configuration with content filter selected." lightbox="../media/content-filters/select-filter.png":::
-
-9. You can also edit and delete a content filter configuration if required. To do this, navigate to the content filters tab and select the desired action (options outlined near the top of the screen in the red box below). You can edit/delete only one filtering configuration at a time.
-
- :::image type="content" source="../media/content-filters/delete.png" alt-text="Screenshot of content filter configuration with edit and delete highlighted." lightbox="../media/content-filters/delete.png":::
-
- > [!NOTE]
- > Before deleting a content filtering configuration, you will need to unassign it from any deployment in the Deployments tab.
-
-## Best practices
-
-We recommend informing your content filtering configuration decisions through an iterative identification (for example, red team testing, stress-testing, and analysis) and measurement process to address the potential harms that are relevant for a specific model, application, and deployment scenario. After implementing mitigations such as content filtering, repeat measurement to test effectiveness. Recommendations and best practices for Responsible AI for Azure OpenAI, grounded in the [Microsoft Responsible AI Standard](https://aka.ms/RAI) can be found in the [Responsible AI Overview for Azure OpenAI](/legal/cognitive-services/openai/overview?context=%2Fazure%2Fcognitive-services%2Fopenai%2Fcontext%2Fcontext).
-
-## Next steps
--- Learn more about Responsible AI practices for Azure OpenAI: [Overview of Responsible AI practices for Azure OpenAI models](/legal/cognitive-services/openai/overview?context=%2Fazure%2Fcognitive-services%2Fopenai%2Fcontext%2Fcontext).-- Read more about [content filtering categories and severity levels](../concepts/content-filter.md) with Azure OpenAI Service.-- Learn more about red teaming from our: [Introduction to red teaming large language models (LLMs) article](../concepts/red-teaming.md).
cognitive-services Integrate Synapseml https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/openai/how-to/integrate-synapseml.md
- Title: 'How-to - Use Azure OpenAI Service with large datasets'-
-description: Walkthrough on how to integrate Azure OpenAI with SynapseML and Apache Spark to apply large language models at a distributed scale.
------ Previously updated : 08/04/2022--
-recommendations: false
--
-# Use Azure OpenAI with large datasets
-
-Azure OpenAI can be used to solve a large number of natural language tasks by prompting the completion API. To make it easier to scale your prompting workflows from a few examples to large datasets of examples, we have integrated the Azure OpenAI Service with the distributed machine learning library [SynapseML](https://www.microsoft.com/research/blog/synapseml-a-simple-multilingual-and-massively-parallel-machine-learning-library/). This integration makes it easy to use the [Apache Spark](https://spark.apache.org/) distributed computing framework to process millions of prompts with the OpenAI service. This tutorial shows how to apply large language models at a distributed scale using Azure OpenAI and Azure Synapse Analytics.
-
-## Prerequisites
--- An Azure subscription - <a href="https://azure.microsoft.com/free/cognitive-services" target="_blank">Create one for free</a>-- Access granted to Azure OpenAI in the desired Azure subscription-
 - Currently, access to this service is granted only by application. You can apply for access to Azure OpenAI by completing the form at <a href="https://aka.ms/oai/access" target="_blank">https://aka.ms/oai/access</a>. If you run into problems, open an issue on this repo to contact us.
-- An Azure OpenAI resource - [create a resource](create-resource.md?pivots=web-portal#create-a-resource)-- An Apache Spark cluster with SynapseML installed - create a serverless Apache Spark pool [here](../../../synapse-analytics/get-started-analyze-spark.md#create-a-serverless-apache-spark-pool)-
-We recommend [creating a Synapse workspace](../../../synapse-analytics/get-started-create-workspace.md), but Azure Databricks, HDInsight, Spark on Kubernetes, or even a Python environment with the `pyspark` package will also work.
-
-## Import this guide as a notebook
-
-The next step is to add this code into your Spark cluster. You can either create a notebook in your Spark platform and copy the code into this notebook to run the demo, or download the notebook and import it into Synapse Analytics.
-
-1. [Download this demo as a notebook](https://github.com/microsoft/SynapseML/blob/master/notebooks/features/cognitive_services/CognitiveServices%20-%20OpenAI.ipynb) (click Raw, then save the file)
-1. Import the notebook [into the Synapse Workspace](../../../synapse-analytics/spark/apache-spark-development-using-notebooks.md#create-a-notebook) or, if using Databricks, [into the Databricks Workspace](/azure/databricks/notebooks/notebooks-manage#create-a-notebook)
-1. Install SynapseML on your cluster. See the installation instructions for Synapse at the bottom of [the SynapseML website](https://microsoft.github.io/SynapseML/). This requires pasting another cell at the top of the notebook you imported
-1. Connect your notebook to a cluster and follow along, editing and running the cells below.
-
-## Fill in your service information
-
-Next, edit the cell in the notebook to point to your service. In particular, set the `resource_name`, `deployment_name`, `location`, and `key` variables to the corresponding values for your Azure OpenAI resource.
-
-> [!IMPORTANT]
-> Remember to remove the key from your code when you're done, and never post it publicly. For production, use a secure way of storing and accessing your credentials like [Azure Key Vault](../../../key-vault/general/overview.md). See the Cognitive Services [security](../../cognitive-services-security.md) article for more information.
-
-```python
-import os
-
-# Replace the following values with your Azure OpenAI resource information
-resource_name = "RESOURCE_NAME" # The name of your Azure OpenAI resource.
-deployment_name = "DEPLOYMENT_NAME" # The name of your Azure OpenAI deployment.
-location = "RESOURCE_LOCATION" # The location or region ID for your resource.
-key = "RESOURCE_API_KEY" # The key for your resource.
-
-assert key is not None and resource_name is not None
-```
-
-## Create a dataset of prompts
-
-Next, create a dataframe consisting of a series of rows, with one prompt per row.
-
-You can also load data directly from Azure Data Lake Storage (ADLS) or other databases. For more information about loading and preparing Spark dataframes, see the [Apache Spark data loading guide](https://spark.apache.org/docs/latest/sql-data-sources.html).
-
-```python
-df = spark.createDataFrame(
- [
- ("Hello my name is",),
- ("The best code is code that's",),
- ("SynapseML is ",),
- ]
-).toDF("prompt")
-```
-
-## Create the OpenAICompletion Apache Spark client
-
-To apply the OpenAI Completion service to the dataframe that you just created, create an `OpenAICompletion` object that serves as a distributed client. Parameters of the service can be set either with a single value, or by a column of the dataframe with the appropriate setters on the `OpenAICompletion` object. Here, we're setting `maxTokens` to 200. A token is around four characters, and this limit applies to the sum of the prompt and the result. We're also setting the `promptCol` parameter with the name of the prompt column in the dataframe.
-
-```python
-from synapse.ml.cognitive import OpenAICompletion
-
-completion = (
- OpenAICompletion()
- .setSubscriptionKey(key)
- .setDeploymentName(deployment_name)
- .setUrl("https://{}.openai.azure.com/".format(resource_name))
- .setMaxTokens(200)
- .setPromptCol("prompt")
- .setErrorCol("error")
- .setOutputCol("completions")
-)
-```
-
-## Transform the dataframe with the OpenAICompletion client
-
-Now that you have the dataframe and the completion client, you can transform your input dataset and add a column called `completions` with all of the information the service adds. We'll select out just the text for simplicity.
-
-```python
-from pyspark.sql.functions import col
-
-completed_df = completion.transform(df).cache()
-display(completed_df.select(
- col("prompt"), col("error"), col("completions.choices.text").getItem(0).alias("text")))
-```
-
-Your output should look something like the following example; note that the completion text can vary.
-
-| **prompt** | **error** | **text** |
-||--| |
-| Hello my name is | undefined | Makaveli I'm eighteen years old and I want to<br>be a rapper when I grow up I love writing and making music I'm from Los<br>Angeles, CA |
-| The best code is code that's | undefined | understandable This is a subjective statement,<br>and there is no definitive answer. |
-| SynapseML is | undefined | A machine learning algorithm that is able to learn how to predict the future outcome of events. |
-
-## Other usage examples
-
-### Improve throughput with request batching
-
-The example above makes several requests to the service, one for each prompt. To complete multiple prompts in a single request, use batch mode. First, create a dataframe with a list of prompts per row. Then, in the `OpenAICompletion` object, set the `batchPrompt` column (via `setBatchPromptCol`) instead of the prompt column.
-
-> [!NOTE]
-> There is currently a limit of 20 prompts in a single request and a limit of 2048 "tokens", or approximately 1500 words.
-
-```python
-batch_df = spark.createDataFrame(
- [
- (["The time has come", "Pleased to", "Today stocks", "Here's to"],),
- (["The only thing", "Ask not what", "Every litter", "I am"],),
- ]
-).toDF("batchPrompt")
-```
-
-Next, create the `OpenAICompletion` object. Because the prompts column is of type `Array[String]`, set the `batchPrompt` column rather than the prompt column.
-
-```python
-batch_completion = (
- OpenAICompletion()
- .setSubscriptionKey(key)
- .setDeploymentName(deployment_name)
- .setUrl("https://{}.openai.azure.com/".format(resource_name))
- .setMaxTokens(200)
- .setBatchPromptCol("batchPrompt")
- .setErrorCol("error")
- .setOutputCol("completions")
-)
-```
-
-In the call to transform, a request will then be made per row. Because there are multiple prompts in a single row, each request will be sent with all prompts in that row. The results will contain a row for each row in the request.
-
-```python
-completed_batch_df = batch_completion.transform(batch_df).cache()
-display(completed_batch_df)
-```
-
-> [!NOTE]
-> There is currently a limit of 20 prompts in a single request and a limit of 2048 "tokens", or approximately 1500 words.
-
-### Using an automatic mini-batcher
-
-If your data is in column format, you can transpose it to row format using SynapseML's `FixedMiniBatchTransformer`.
-
-```python
-from pyspark.sql.types import StringType
-from synapse.ml.stages import FixedMiniBatchTransformer
-from synapse.ml.core.spark import FluentAPI
-
-completed_autobatch_df = (df
- .coalesce(1) # Force a single partition so that our little 4-row dataframe makes a batch of size 4, you can remove this step for large datasets
- .mlTransform(FixedMiniBatchTransformer(batchSize=4))
- .withColumnRenamed("prompt", "batchPrompt")
- .mlTransform(batch_completion))
-
-display(completed_autobatch_df)
-```
-
-### Prompt engineering for translation
-
-Azure OpenAI can solve many different natural language tasks through [prompt engineering](completions.md). Here, we show an example of prompting for language translation:
-
-```python
-translate_df = spark.createDataFrame(
- [
- ("Japanese: Ookina hako \nEnglish: Big box \nJapanese: Midori tako\nEnglish:",),
- ("French: Quelle heure est-il à Montréal? \nEnglish: What time is it in Montreal? \nFrench: Où est le poulet? \nEnglish:",),
- ]
-).toDF("prompt")
-
-display(completion.transform(translate_df))
-```
-
-### Prompt for question answering
-
-Here, we prompt the GPT-3 model for general-knowledge question answering:
-
-```python
-qa_df = spark.createDataFrame(
- [
- (
- "Q: Where is the Grand Canyon?\nA: The Grand Canyon is in Arizona.\n\nQ: What is the weight of the Burj Khalifa in kilograms?\nA:",
- )
- ]
-).toDF("prompt")
-
-display(completion.transform(qa_df))
-```
cognitive-services Manage Costs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/openai/how-to/manage-costs.md
- Title: Plan to manage costs for Azure OpenAI Service
-description: Learn how to plan for and manage costs for Azure OpenAI by using cost analysis in the Azure portal.
------ Previously updated : 04/05/2023---
-# Plan to manage costs for Azure OpenAI Service
-
-This article describes how you plan for and manage costs for Azure OpenAI Service. Before you deploy the service, you can use the Azure pricing calculator to estimate costs for Azure OpenAI. Later, as you deploy Azure resources, review the estimated costs. After you've started using Azure OpenAI resources, use Cost Management features to set budgets and monitor costs. You can also review forecasted costs and identify spending trends to identify areas where you might want to act. Costs for Azure OpenAI Service are only a portion of the monthly costs in your Azure bill. Although this article explains how to plan for and manage costs for Azure OpenAI, you're billed for all Azure services and resources used in your Azure subscription, including the third-party services.
-
-## Prerequisites
-
-Cost analysis in Cost Management supports most Azure account types, but not all of them. To view the full list of supported account types, see [Understand Cost Management data](../../../cost-management-billing/costs/understand-cost-mgt-data.md?WT.mc_id=costmanagementcontent_docsacmhorizontal_-inproduct-learn). To view cost data, you need at least read access for an Azure account. For information about assigning access to Azure Cost Management data, see [Assign access to data](../../../cost-management/assign-access-acm-data.md?WT.mc_id=costmanagementcontent_docsacmhorizontal_-inproduct-learn).
-
-## Estimate costs before using Azure OpenAI
-
-Use the [Azure pricing calculator](https://azure.microsoft.com/pricing/calculator/) to estimate the costs of using Azure OpenAI.
-
-## Understand the full billing model for Azure OpenAI Service
-
-Azure OpenAI Service runs on Azure infrastructure that accrues costs when you deploy new resources. It's important to understand that additional infrastructure costs might accrue.
-
-### How you're charged for Azure OpenAI Service
-
-### Base series and Codex series models
-
-Azure OpenAI base series and Codex series models are charged per 1,000 tokens. Costs vary depending on which model series you choose: Ada, Babbage, Curie, Davinci, or Code-Cushman.
-
-Our models understand and process text by breaking it down into tokens. For reference, each token is roughly four characters for typical English text.
-
-Token costs are for both input and output. For example, suppose you have a 1,000-token JavaScript code sample that you ask an Azure OpenAI model to convert to Python. You would be charged approximately 1,000 tokens for the initial input request sent, and 1,000 more tokens for the output received in response, for a total of 2,000 tokens.
-
-In practice, for this type of completion call, the token input/output wouldn't be perfectly 1:1. A conversion from one programming language to another could result in a longer or shorter output depending on many factors, including the value assigned to the `max_tokens` parameter.
-
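As a rough illustration of how per-token billing adds up, the following sketch estimates the cost of a single completion call. The token counts and the price per 1,000 tokens are placeholder assumptions, not published rates; use the Azure pricing calculator for current pricing.

```python
# Rough, illustrative cost estimate for a single completion call.
# price_per_1k is a placeholder; look up the actual rate for your model series.
def estimate_completion_cost(prompt_tokens: int, completion_tokens: int, price_per_1k: float) -> float:
    total_tokens = prompt_tokens + completion_tokens  # billed for both input and output
    return total_tokens / 1000 * price_per_1k

# Example: ~1,000 tokens in, ~1,000 tokens out, at an assumed $0.02 per 1,000 tokens.
print(f"${estimate_completion_cost(1000, 1000, 0.02):.4f}")  # $0.0400
```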
-### Base Series and Codex series fine-tuned models
-
-Azure OpenAI fine-tuned models are charged based on three factors:
--- Training hours-- Hosting hours-- Inference per 1,000 tokens-
-The hosting hours cost is important to be aware of: once a fine-tuned model is deployed, it continues to incur an hourly cost regardless of whether you're actively using it. Monitor fine-tuned model costs closely.
--
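To make the three cost factors concrete, here's a minimal sketch of a monthly estimate for a fine-tuned deployment. Every rate used below is an assumption for illustration only, not a published price.

```python
# Illustrative monthly estimate for a fine-tuned model; all rates are assumptions.
def estimate_finetune_monthly_cost(
    training_hours: float,
    hosting_hours: float,
    inference_tokens: int,
    training_rate_per_hour: float,
    hosting_rate_per_hour: float,
    inference_rate_per_1k: float,
) -> float:
    return (
        training_hours * training_rate_per_hour      # one-time training cost
        + hosting_hours * hosting_rate_per_hour      # accrues even when the deployment is idle
        + inference_tokens / 1000 * inference_rate_per_1k
    )

# A deployment hosted for a full month (~730 hours) keeps billing even with low traffic.
print(estimate_finetune_monthly_cost(4, 730, 2_000_000, 20.0, 3.0, 0.002))
```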
-### Other costs that might accrue with Azure OpenAI Service
-
-Keep in mind that enabling capabilities like sending data to Azure Monitor Logs, alerting, etc. incurs additional costs for those services. These costs are visible under those other services and at the subscription level, but aren't visible when scoped just to your Azure OpenAI resource.
-
-### Using Azure Prepayment with Azure OpenAI Service
-
-You can pay for Azure OpenAI Service charges with your Azure Prepayment credit. However, you can't use Azure Prepayment credit to pay for charges for third party products and services including those from the Azure Marketplace.
-
-## Monitor costs
-
-As you use Azure resources with Azure OpenAI, you incur costs. Azure resource usage unit costs vary by time intervals (seconds, minutes, hours, and days) or by unit usage (bytes, megabytes, and so on). As soon as Azure OpenAI use starts, costs are incurred, and you can see them in [cost analysis](../../../cost-management/quick-acm-cost-analysis.md?WT.mc_id=costmanagementcontent_docsacmhorizontal_-inproduct-learn).
-
-When you use cost analysis, you view Azure OpenAI costs in graphs and tables for different time intervals. Some examples are by day, current and prior month, and year. You also view costs against budgets and forecasted costs. Switching to longer views over time can help you identify spending trends. And you see where overspending might have occurred. If you've created budgets, you can also easily see where they're exceeded.
-
-To view Azure OpenAI costs in cost analysis:
-
-1. Sign in to the Azure portal.
-2. Select one of your Azure OpenAI resources.
-3. Under **Resource Management**, select **Cost analysis**.
-4. By default, cost analysis is scoped to the individual Azure OpenAI resource.
--
-To understand the breakdown of what makes up that cost, it can help to modify **Group by** to **Meter** and in this case switching the chart type to **Line**. You can now see that for this particular resource the source of the costs is from three different model series with **Text-Davinci Tokens** representing the bulk of the costs.
--
-It's important to understand scope when evaluating costs associated with Azure OpenAI. If your resources are part of the same resource group you can scope Cost Analysis at that level to understand the effect on costs. If your resources are spread across multiple resource groups you can scope to the subscription level.
-
-However, when scoped at a higher level you often need to add additional filters to be able to zero in on Azure OpenAI usage. When scoped at the subscription level we see a number of other resources that we may not care about in the context of Azure OpenAI cost management. When scoping at the subscription level, we recommend navigating to the full **Cost analysis tool** under the **Cost Management** service. Search for **"Cost Management"** in the top Azure search bar to navigate to the full service experience, which includes more options like creating budgets.
--
-If you try to add a filter by service, you'll find that Azure OpenAI isn't in the list. That's because Azure OpenAI is technically part of Cognitive Services, so the service-level filter is **Cognitive Services**. If you want to see all Azure OpenAI resources across a subscription without any other types of Cognitive Services resources, scope to **Service tier: Azure OpenAI** instead:
--
-## Create budgets
-
-You can create [budgets](../../../cost-management/tutorial-acm-create-budgets.md?WT.mc_id=costmanagementcontent_docsacmhorizontal_-inproduct-learn) to manage costs and create [alerts](../../../cost-management-billing/costs/cost-mgt-alerts-monitor-usage-spending.md?WT.mc_id=costmanagementcontent_docsacmhorizontal_-inproduct-learn) that automatically notify stakeholders of spending anomalies and overspending risks. Alerts are based on spending compared to budget and cost thresholds. Budgets and alerts are created for Azure subscriptions and resource groups, so they're useful as part of an overall cost monitoring strategy.
-
-Budgets can be created with filters for specific resources or services in Azure if you want more granularity present in your monitoring. Filters help ensure that you don't accidentally create new resources that cost you additional money. For more information about the filter options available when you create a budget, see [Group and filter options](../../../cost-management-billing/costs/group-filter.md?WT.mc_id=costmanagementcontent_docsacmhorizontal_-inproduct-learn).
-
-> [!IMPORTANT]
-> While OpenAI has an option for hard limits that will prevent you from going over your budget, Azure OpenAI does not currently provide this functionality. You are able to kick off automation from action groups as part of your budget notifications to take more advanced actions, but this requires additional custom development on your part.
-
-## Export cost data
-
-You can also [export your cost data](../../../cost-management-billing/costs/tutorial-export-acm-data.md?WT.mc_id=costmanagementcontent_docsacmhorizontal_-inproduct-learn) to a storage account. This is helpful when you or others need to do additional data analysis for costs. For example, a finance team can analyze the data using Excel or Power BI. You can export your costs on a daily, weekly, or monthly schedule and set a custom date range. Exporting cost data is the recommended way to retrieve cost datasets.
-
-## Next steps
--- Learn [how to optimize your cloud investment with Azure Cost Management](../../../cost-management-billing/costs/cost-mgt-best-practices.md?WT.mc_id=costmanagementcontent_docsacmhorizontal_-inproduct-learn).-- Learn more about managing costs with [cost analysis](../../../cost-management-billing/costs/quick-acm-cost-analysis.md?WT.mc_id=costmanagementcontent_docsacmhorizontal_-inproduct-learn).-- Learn about how to [prevent unexpected costs](../../../cost-management-billing/understand/analyze-unexpected-charges.md?WT.mc_id=costmanagementcontent_docsacmhorizontal_-inproduct-learn).-- Take the [Cost Management](/training/paths/control-spending-manage-bills?WT.mc_id=costmanagementcontent_docsacmhorizontal_-inproduct-learn) guided learning course.
cognitive-services Managed Identity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/openai/how-to/managed-identity.md
- Title: How to configure Azure OpenAI Service with managed identities-
-description: Provides guidance on how to set managed identity with Azure Active Directory
--- Previously updated : 06/24/2022--
-recommendations: false
---
-# How to configure Azure OpenAI Service with managed identities
-
-More complex security scenarios require Azure role-based access control (Azure RBAC). This document covers how to authenticate to your OpenAI resource using Azure Active Directory (Azure AD).
-
-In the following sections, you'll use the Azure CLI to assign roles, and obtain a bearer token to call the OpenAI resource. If you get stuck, links are provided in each section with all available options for each command in Azure Cloud Shell/Azure CLI.
-
-## Prerequisites
--- An Azure subscription - <a href="https://azure.microsoft.com/free/cognitive-services" target="_blank">Create one for free</a>-- Access granted to the Azure OpenAI service in the desired Azure subscription-
 - Currently, access to this service is granted only by application. You can apply for access to Azure OpenAI by completing the form at <a href="https://aka.ms/oai/access" target="_blank">https://aka.ms/oai/access</a>. If you run into problems, open an issue on this repo to contact us.
-- Azure CLI - [Installation Guide](/cli/azure/install-azure-cli)-- The following Python libraries: os, requests, json-
-## Sign into the Azure CLI
-
-To sign in to the Azure CLI, run the following command and complete the sign-in. You may need to do it again if your session has been idle for too long.
-
-```azurecli
-az login
-```
-
-## Assign yourself to the Cognitive Services User role
-
-Assigning yourself to the Cognitive Services User role allows you to use your account for access to the specific Cognitive Services resource.
-
-1. Get your user information
-
- ```azurecli
- export user=$(az account show -o json | jq -r .user.name)
- ```
-
-2. Assign yourself to the "Cognitive Services User" role.
-
- ```azurecli
- export resourceId=$(az group show -g $myResourceGroupName -o json | jq -r .id)
- az role assignment create --role "Cognitive Services User" --assignee $user --scope $resourceId
- ```
-
- > [!NOTE]
- > Role assignment change will take ~5 mins to become effective.
-
-3. Acquire an Azure AD access token. Access tokens expire in one hour; you'll then need to acquire another one.
-
- ```azurecli
- export accessToken=$(az account get-access-token --resource https://cognitiveservices.azure.com -o json | jq -r .accessToken)
- ```
-
-4. Make an API call
-Use the access token to authorize your API call by setting the `Authorization` header value.
-
- ```bash
- curl ${endpoint%/}/openai/deployments/YOUR_DEPLOYMENT_NAME/completions?api-version=2023-05-15 \
- -H "Content-Type: application/json" \
- -H "Authorization: Bearer $accessToken" \
- -d '{ "prompt": "Once upon a time" }'
- ```
-
-## Authorize access to managed identities
-
-OpenAI supports Azure Active Directory (Azure AD) authentication with [managed identities for Azure resources](../../../active-directory/managed-identities-azure-resources/overview.md). Managed identities for Azure resources can authorize access to Cognitive Services resources using Azure AD credentials from applications running in Azure virtual machines (VMs), function apps, virtual machine scale sets, and other services. By using managed identities for Azure resources together with Azure AD authentication, you can avoid storing credentials with your applications that run in the cloud.
-
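As a minimal sketch of how application code running with a managed identity could call the service without storing a key, the example below uses `ManagedIdentityCredential` from the `azure-identity` package together with the OpenAI Python library, mirroring the Azure AD token pattern used elsewhere in this documentation. The endpoint and deployment names are placeholders, and the code assumes it runs on an Azure resource with a managed identity enabled.

```python
import openai
from azure.identity import ManagedIdentityCredential

# Runs on an Azure resource (for example, a VM) that has a managed identity enabled.
credential = ManagedIdentityCredential()
token = credential.get_token("https://cognitiveservices.azure.com/.default")

openai.api_type = "azuread"
openai.api_key = token.token  # tokens expire after about an hour; refresh as needed
openai.api_base = "https://example-endpoint.openai.azure.com"  # placeholder endpoint
openai.api_version = "2023-05-15"

completion = openai.Completion.create(
    deployment_id="YOUR_DEPLOYMENT_NAME",  # placeholder deployment
    prompt="Once upon a time",
)
print(completion.choices[0].text)
```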
-## Enable managed identities on a VM
-
-Before you can use managed identities for Azure resources to authorize access to Cognitive Services resources from your VM, you must enable managed identities for Azure resources on the VM. To learn how to enable managed identities for Azure Resources, see:
--- [Azure portal](../../../active-directory/managed-identities-azure-resources/qs-configure-portal-windows-vm.md)-- [Azure PowerShell](../../../active-directory/managed-identities-azure-resources/qs-configure-powershell-windows-vm.md)-- [Azure CLI](../../../active-directory/managed-identities-azure-resources/qs-configure-cli-windows-vm.md)-- [Azure Resource Manager template](../../../active-directory/managed-identities-azure-resources/qs-configure-template-windows-vm.md)-- [Azure Resource Manager client libraries](../../../active-directory/managed-identities-azure-resources/qs-configure-sdk-windows-vm.md)-
-For more information about managed identities, see [Managed identities for Azure resources](../../../active-directory/managed-identities-azure-resources/overview.md).
cognitive-services Monitoring https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/openai/how-to/monitoring.md
- Title: Monitoring Azure OpenAI Service
-description: Start here to learn how to monitor Azure OpenAI Service
------ Previously updated : 03/15/2023--
-# Monitoring Azure OpenAI Service
-
-When you have critical applications and business processes relying on Azure resources, you want to monitor those resources for their availability, performance, and operation.
-
-This article describes the monitoring data generated by Azure OpenAI Service. Azure OpenAI is part of Cognitive Services, which uses [Azure Monitor](../../../azure-monitor/overview.md). If you're unfamiliar with the features of Azure Monitor common to all Azure services that use it, read [Monitoring Azure resources with Azure Monitor](../../../azure-monitor/essentials/monitor-azure-resource.md).
-
-## Monitoring data
-
-Azure OpenAI collects the same kinds of monitoring data as other Azure resources that are described in [Monitoring data from Azure resources](/azure/azure-monitor/essentials/monitor-azure-resource#monitoring-data-from-azure-resources).
-
-## Collection and routing
-
-Platform metrics and the Activity log are collected and stored automatically, but can be routed to other locations by using a diagnostic setting.
-
-Resource Logs aren't collected and stored until you create a diagnostic setting and route them to one or more locations.
-
-See [Create diagnostic setting to collect platform logs and metrics in Azure](/azure/azure-monitor/platform/diagnostic-settings) for the detailed process for creating a diagnostic setting using the Azure portal, CLI, or PowerShell. When you create a diagnostic setting, you specify which categories of logs to collect.
-
-Keep in mind that using diagnostic settings and sending data to Azure Monitor Logs has additional costs associated with it. To understand more, consult the [Azure Monitor cost calculation guide](/azure/azure-monitor/logs/cost-logs).
-
-The metrics and logs you can collect are discussed in the following sections.
-
-## Analyzing metrics
-
-You can analyze metrics for *Azure OpenAI* by opening **Metrics**, which can be found under the **Monitoring** section when viewing your Azure OpenAI resource in the Azure portal. See [Getting started with Azure Metrics Explorer](../../../azure-monitor/essentials/metrics-getting-started.md) for details on using this tool.
-
-Azure OpenAI is a part of Cognitive Services. For a list of all platform metrics collected for Cognitive Services and Azure OpenAI, see [Cognitive Services supported metrics](../../../azure-monitor/essentials/metrics-supported.md#microsoftcognitiveservicesaccounts).
-
-For the current subset of metrics available in Azure OpenAI:
-
-### Azure OpenAI Metrics
-
-|Metric|Exportable via Diagnostic Settings?|Metric Display Name|Unit|Aggregation Type|Description|Dimensions|
-||||||||
-|BlockedCalls |Yes |Blocked Calls |Count |Total |Number of calls that exceeded rate or quota limit. |ApiName, OperationName, Region, RatelimitKey |
-|ClientErrors |Yes |Client Errors |Count |Total |Number of calls with client side error (HTTP response code 4xx). |ApiName, OperationName, Region, RatelimitKey |
-|DataIn |Yes |Data In |Bytes |Total |Size of incoming data in bytes. |ApiName, OperationName, Region |
-|DataOut |Yes |Data Out |Bytes |Total |Size of outgoing data in bytes. |ApiName, OperationName, Region |
-|FineTunedTrainingHours |Yes |Processed FineTuned Training Hours |Count |Total |Number of Training Hours Processed on an OpenAI FineTuned Model |ApiName, ModelDeploymentName, FeatureName, UsageChannel, Region |
-|Latency |Yes |Latency |MilliSeconds |Average |Latency in milliseconds. |ApiName, OperationName, Region, RatelimitKey |
-|Ratelimit |Yes |Ratelimit |Count |Total |The current ratelimit of the ratelimit key. |Region, RatelimitKey |
-|ServerErrors |Yes |Server Errors |Count |Total |Number of calls with service internal error (HTTP response code 5xx). |ApiName, OperationName, Region, RatelimitKey |
-|SuccessfulCalls |Yes |Successful Calls |Count |Total |Number of successful calls. |ApiName, OperationName, Region, RatelimitKey |
-|TokenTransaction |Yes |Processed Inference Tokens |Count |Total |Number of Inference Tokens Processed on an OpenAI Model |ApiName, ModelDeploymentName, FeatureName, UsageChannel, Region |
-|TotalCalls |Yes |Total Calls |Count |Total |Total number of calls. |ApiName, OperationName, Region, RatelimitKey |
-|TotalErrors |Yes |Total Errors |Count |Total |Total number of calls with error response (HTTP response code 4xx or 5xx). |ApiName, OperationName, Region, RatelimitKey |
-
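If you prefer to pull these platform metrics programmatically rather than through Metrics Explorer, one option is the `azure-monitor-query` Python library. The sketch below is an illustrative assumption rather than part of this article's walkthrough; the resource ID is a placeholder.

```python
from datetime import timedelta

from azure.identity import DefaultAzureCredential
from azure.monitor.query import MetricAggregationType, MetricsQueryClient

# Placeholder resource ID for an Azure OpenAI resource.
resource_id = (
    "/subscriptions/<subscription-id>/resourceGroups/<resource-group>"
    "/providers/Microsoft.CognitiveServices/accounts/<resource-name>"
)

client = MetricsQueryClient(DefaultAzureCredential())
response = client.query_resource(
    resource_id,
    metric_names=["TotalCalls", "TokenTransaction"],
    timespan=timedelta(days=1),
    granularity=timedelta(hours=1),
    aggregations=[MetricAggregationType.TOTAL],
)

# Print the hourly totals for each requested metric.
for metric in response.metrics:
    for series in metric.timeseries:
        for point in series.data:
            print(metric.name, point.timestamp, point.total)
```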
-## Analyzing logs
-
-Data in Azure Monitor Logs is stored in tables where each table has its own set of unique properties.
-
-All resource logs in Azure Monitor have the same fields followed by service-specific fields. The common schema is outlined in [Azure Monitor resource log schema](../../../azure-monitor/essentials/resource-logs-schema.md).
-
-The [Activity log](/azure/azure-monitor/essentials/activity-log) is a type of platform log in Azure that provides insight into subscription-level events. You can view it independently or route it to Azure Monitor Logs, where you can do much more complex queries using Log Analytics.
-
-For a list of the types of resource logs available for Azure OpenAI and other Cognitive Services, see [Resource provider operations for Cognitive Services](/azure/role-based-access-control/resource-provider-operations#microsoftcognitiveservices)
-
-### Kusto queries
-
-> [!IMPORTANT]
-> When you select **Logs** from the Azure OpenAI menu, Log Analytics is opened with the query scope set to the current Azure OpenAI resource. This means that log queries will only include data from that resource. If you want to run a query that includes data from other resources or data from other Azure services, select **Logs** from the **Azure Monitor** menu. See [Log query scope and time range in Azure Monitor Log Analytics](../../../azure-monitor/logs/scope.md) for details.
-
-To get a sense of what type of information is available for your Azure OpenAI resource, once you have deployed a model and sent some completion calls through the playground, a useful starting query is:
-
-```kusto
-AzureDiagnostics
-| take 100
-| project TimeGenerated, _ResourceId, Category,OperationName, DurationMs, ResultSignature, properties_s
-```
-
-Here we return a sample of 100 entries and are displaying a subset of the available columns of data in the logs. The results are as follows:
--
-If you wish to see all available columns of data, you can remove the scoping that is provided by the `| project` line:
-
-```kusto
-AzureDiagnostics
-| take 100
-```
-
-You can also select the arrow next to the table name to view all available columns and associated data types.
-
-To examine AzureMetrics run:
-
-```kusto
-AzureMetrics
-| take 100
-| project TimeGenerated, MetricName, Total, Count, TimeGrain, UnitName
-```
--
-## Alerts
-
-Azure Monitor alerts proactively notify you when important conditions are found in your monitoring data. They allow you to identify and address issues in your system before your customers notice them. You can set alerts on [metrics](/azure/azure-monitor/alerts/alerts-metric-overview), [logs](/azure/azure-monitor/alerts/alerts-unified-log), and the [activity log](/azure/azure-monitor/alerts/activity-log-alerts). Different types of alerts have different benefits and drawbacks.
-
-Every organization's alerting needs are going to vary, and will also evolve over time. Generally all alerts should be actionable, with a specific intended response if the alert occurs. If there's no action for someone to take, then it might be something you want to capture in a report, but not in an alert. Some use cases may require alerting anytime certain error conditions exist. But in many environments, it might only be in cases where errors exceed a certain threshold for a period of time where sending an alert is warranted.
-
-Errors below certain thresholds can often be evaluated through regular analysis of data in Azure Monitor Logs. As you analyze your log data over time, you may also find that a certain condition not occurring for a long enough period of time might be valuable to track with alerts. Sometimes the absence of an event in a log is just as important a signal as an error.
-
-Depending on what type of application you're developing in conjunction with your use of Azure OpenAI, [Azure Monitor Application Insights](../../../azure-monitor/overview.md) may offer additional monitoring benefits at the application layer.
--
-## Next steps
--- See [Monitoring Azure resources with Azure Monitor](../../../azure-monitor/essentials/monitor-azure-resource.md) for details on monitoring Azure resources.-- Read [Understand log searches in Azure Monitor logs](../../../azure-monitor/logs/log-query-overview.md).
cognitive-services Quota https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/openai/how-to/quota.md
- Title: Manage Azure OpenAI Service quota-
-description: Learn how to use Azure OpenAI to control your deployments rate limits.
------ Previously updated : 06/07/2023---
-# Manage Azure OpenAI Service quota
-
-Quota provides the flexibility to actively manage the allocation of rate limits across the deployments within your subscription. This article walks through the process of managing your Azure OpenAI quota.
-
-## Introduction to quota
-
-Azure OpenAI's quota feature enables assignment of rate limits to your deployments, up to a global limit called your "quota." Quota is assigned to your subscription on a per-region, per-model basis in units of **Tokens-per-Minute (TPM)**. When you onboard a subscription to Azure OpenAI, you'll receive default quota for most available models. Then, you'll assign TPM to each deployment as it is created, and the available quota for that model will be reduced by that amount. You can continue to create deployments and assign them TPM until you reach your quota limit. Once that happens, you can only create new deployments of that model by reducing the TPM assigned to other deployments of the same model (thus freeing TPM for use), or by requesting and being approved for a model quota increase in the desired region.
-
-> [!NOTE]
-> With a quota of 240,000 TPM for GPT-35-Turbo in East US, a customer can create a single deployment of 240K TPM, 2 deployments of 120K TPM each, or any number of deployments in one or multiple Azure OpenAI resources as long as their TPM adds up to less than 240K total in that region.
-
-When a deployment is created, the assigned TPM will directly map to the tokens-per-minute rate limit enforced on its inferencing requests. A **Requests-Per-Minute (RPM)** rate limit will also be enforced whose value is set proportionally to the TPM assignment using the following ratio:
-
-6 RPM per 1000 TPM.
-
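For example, the ratio works out as in this small sketch (illustrative arithmetic only):

```python
# 6 RPM are granted for every 1,000 TPM assigned to a deployment.
def rpm_for_tpm(tpm: int) -> int:
    return tpm // 1000 * 6

print(rpm_for_tpm(120_000))  # 720 requests per minute
print(rpm_for_tpm(240_000))  # 1440 requests per minute
```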
-The flexibility to distribute TPM globally within a subscription and region has allowed Azure OpenAI Service to loosen other restrictions:
--- The maximum resources per region are increased to 30.-- The limit on creating no more than one deployment of the same model in a resource has been removed.-
-## Assign quota
-
-When you create a model deployment, you have the option to assign Tokens-Per-Minute (TPM) to that deployment. TPM can be modified in increments of 1,000, and will map to the TPM and RPM rate limits enforced on your deployment, as discussed above.
-
-To create a new deployment from within the Azure AI Studio under **Management** select **Deployments** > **Create new deployment**.
-
-The option to set the TPM is under the **Advanced options** drop-down:
--
-Post deployment you can adjust your TPM allocation by selecting **Edit deployment** under **Management** > **Deployments** in Azure AI Studio. You can also modify this selection within the new quota management experience under **Management** > **Quotas**.
-
-> [!IMPORTANT]
-> Quotas and limits are subject to change; for the most up-to-date information, consult our [quotas and limits article](../quotas-limits.md).
-
-## Model specific settings
-
-Different model deployments, also called model classes, have unique max TPM values that you're now able to control. **This represents the maximum amount of TPM that can be allocated to that type of model deployment in a given region.** While each model type represents its own unique model class, the max TPM value is currently only different for certain model classes:
--- GPT-4-- GPT-4-32K-- Text-Davinci-003-
-All other model classes have a common max TPM value.
-
-> [!NOTE]
-> Quota Tokens-Per-Minute (TPM) allocation is not related to the max input token limit of a model. Model input token limits are defined in the [models table](../concepts/models.md) and are not impacted by changes made to TPM.
-
-## View and request quota
-
-For an all-up view of your quota allocations across deployments in a given region, select **Management** > **Quota** in Azure AI Studio:
---- **Quota Name**: There's one quota value per region for each model type. The quota covers all versions of that model. The quota name can be expanded in the UI to show the deployments that are using the quota.-- **Deployment**: Model deployments divided by model class.-- **Usage/Limit**: For the quota name, this shows how much quota is used by deployments and the total quota approved for this subscription and region. This amount of quota used is also represented in the bar graph.-- **Request Quota**: The icon in this field navigates to a form where requests to increase quota can be submitted.-
-## Migrating existing deployments
-
-As part of the transition to the new quota system and TPM based allocation, all existing Azure OpenAI model deployments have been automatically migrated to use quota. In cases where the existing TPM/RPM allocation exceeds the default values due to previous custom rate-limit increases, equivalent TPM were assigned to the impacted deployments.
-
-## Understanding rate limits
-
-Assigning TPM to a deployment sets the Tokens-Per-Minute (TPM) and Requests-Per-Minute (RPM) rate limits for the deployment, as described above. TPM rate limits are based on the maximum number of tokens that are estimated to be processed by a request at the time the request is received. It isn't the same as the token count used for billing, which is computed after all processing is completed.
-
-As each request is received, Azure OpenAI computes an estimated max processed-token count that includes the following:
--- Prompt text and count-- The max_tokens parameter setting-- The best_of parameter setting-
-As requests come into the deployment endpoint, the estimated max-processed-token count is added to a running token count of all requests that is reset each minute. If at any time during that minute, the TPM rate limit value is reached, then further requests will receive a 429 response code until the counter resets.
-
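As a rough mental model of that estimate (the service's exact formula isn't documented here, so treat this as an assumption):

```python
# Illustrative only: approximate the tokens counted against the TPM limit
# at the time a request is received, before any processing happens.
def estimated_max_processed_tokens(prompt_tokens: int, max_tokens: int, best_of: int = 1) -> int:
    return prompt_tokens + max_tokens * best_of

# A 500-token prompt with max_tokens=200 and best_of=2 counts as roughly 900 tokens.
print(estimated_max_processed_tokens(500, 200, best_of=2))
```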
-RPM rate limits are based on the number of requests received over time. The rate limit expects that requests be evenly distributed over a one-minute period. If this average flow isn't maintained, then requests may receive a 429 response even though the limit isn't met when measured over the course of a minute. To implement this behavior, Azure OpenAI Service evaluates the rate of incoming requests over a small period of time, typically 1 or 10 seconds. If the number of requests received during that time exceeds what would be expected at the set RPM limit, then new requests will receive a 429 response code until the next evaluation period. For example, if Azure OpenAI is monitoring request rate on 1-second intervals, then rate limiting will occur for a 600-RPM deployment if more than 10 requests are received during each 1-second period (600 requests per minute = 10 requests per second).
-
-### Rate limit best practices
-
-To minimize issues related to rate limits, it's a good idea to use the following techniques:
-- Set max_tokens and best_of to the minimum values that serve the needs of your scenario. For example, don't set a large max-tokens value if you expect your responses to be small.-- Use quota management to increase TPM on deployments with high traffic, and to reduce TPM on deployments with limited needs.-- Implement retry logic in your application (a minimal sketch follows this list).-- Avoid sharp changes in the workload. Increase the workload gradually.-- Test different load increase patterns.-
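As referenced in the list above, a minimal retry sketch with exponential backoff for 429 responses might look like the following. The helper name and backoff values are illustrative assumptions, not a prescribed pattern, and the code assumes `openai.api_type`, `openai.api_base`, `openai.api_key`, and `openai.api_version` are already configured for your Azure OpenAI resource.

```python
import time

import openai
from openai.error import RateLimitError

def create_completion_with_retries(max_retries: int = 5, **kwargs):
    """Retry a completion call with exponential backoff when the deployment returns 429."""
    for attempt in range(max_retries):
        try:
            return openai.Completion.create(**kwargs)
        except RateLimitError:
            if attempt == max_retries - 1:
                raise
            time.sleep(2 ** attempt)  # back off: 1s, 2s, 4s, ...

completion = create_completion_with_retries(
    deployment_id="YOUR_DEPLOYMENT_NAME",  # placeholder deployment
    prompt="Once upon a time",
    max_tokens=50,
)
```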
-## Next steps
--- To review quota defaults for Azure OpenAI, consult the [quotas & limits article](../quotas-limits.md)
cognitive-services Switching Endpoints https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/openai/how-to/switching-endpoints.md
- Title: How to switch between OpenAI and Azure OpenAI Service endpoints with Python-
-description: Learn about the changes you need to make to your code to swap back and forth between OpenAI and Azure OpenAI endpoints.
------ Previously updated : 05/24/2023---
-# How to switch between OpenAI and Azure OpenAI endpoints with Python
-
-While OpenAI and Azure OpenAI Service rely on a [common Python client library](https://github.com/openai/openai-python), there are small changes you need to make to your code in order to swap back and forth between endpoints. This article walks you through the common changes and differences you'll experience when working across OpenAI and Azure OpenAI.
-
-> [!NOTE]
-> This library is maintained by OpenAI and is currently in preview. Refer to the [release history](https://github.com/openai/openai-python/releases) or the [version.py commit history](https://github.com/openai/openai-python/commits/main/openai/version.py) to track the latest updates to the library.
-
-## Authentication
-
-We recommend using environment variables. If you haven't done this before, our [Python quickstarts](../quickstart.md) walk you through this configuration.
-
-### API key
-
-<table>
-<tr>
-<td> OpenAI </td> <td> Azure OpenAI </td>
-</tr>
-<tr>
-<td>
-
-```python
-import openai
-
-openai.api_key = "sk-..."
-openai.organization = "..."
--
-```
-
-</td>
-<td>
-
-```python
-import openai
-
-openai.api_type = "azure"
-openai.api_key = "..."
-openai.api_base = "https://example-endpoint.openai.azure.com"
-openai.api_version = "2023-05-15" # subject to change
-```
-
-</td>
-</tr>
-</table>
-
-### Azure Active Directory authentication
-
-<table>
-<tr>
-<td> OpenAI </td> <td> Azure OpenAI </td>
-</tr>
-<tr>
-<td>
-
-```python
-import openai
-
-openai.api_key = "sk-..."
-openai.organization = "..."
------
-```
-
-</td>
-<td>
-
-```python
-import openai
-from azure.identity import DefaultAzureCredential
-
-credential = DefaultAzureCredential()
-token = credential.get_token("https://cognitiveservices.azure.com/.default")
-
-openai.api_type = "azuread"
-openai.api_key = token.token
-openai.api_base = "https://example-endpoint.openai.azure.com"
-openai.api_version = "2023-05-15" # subject to change
-```
-
-</td>
-</tr>
-</table>
-
-## Keyword argument for model
-
-OpenAI uses the `model` keyword argument to specify what model to use. Azure OpenAI has the concept of [deployments](/azure/cognitive-services/openai/how-to/create-resource?pivots=web-portal#deploy-a-model) and uses the `deployment_id` keyword argument to describe which model deployment to use. Azure OpenAI also supports the use of `engine` interchangeably with `deployment_id`.
-
-For OpenAI, `engine` still works in most instances, but it's deprecated and `model` is preferred.
-
-<table>
-<tr>
-<td> OpenAI </td> <td> Azure OpenAI </td>
-</tr>
-<tr>
-<td>
-
-```python
-completion = openai.Completion.create(
- prompt="<prompt>",
- model="text-davinci-003"
-)
-
-chat_completion = openai.ChatCompletion.create(
- messages="<messages>",
- model="gpt-4"
-)
-
-embedding = openai.Embedding.create(
- input="<input>",
- model="text-embedding-ada-002"
-)
----
-```
-
-</td>
-<td>
-
-```python
-completion = openai.Completion.create(
- prompt="<prompt>",
- deployment_id="text-davinci-003"
- #engine="text-davinci-003"
-)
-
-chat_completion = openai.ChatCompletion.create(
- messages="<messages>",
- deployment_id="gpt-4"
- #engine="gpt-4"
-
-)
-
-embedding = openai.Embedding.create(
- input="<input>",
- deployment_id="text-embedding-ada-002"
- #engine="text-embedding-ada-002"
-)
-```
-
-</td>
-</tr>
-</table>
-
-## Azure OpenAI embeddings doesn't support multiple inputs
-
-Many examples show passing multiple inputs into the embeddings API. For Azure OpenAI, you must currently pass a single text input per call.
-
-<table>
-<tr>
-<td> OpenAI </td> <td> Azure OpenAI </td>
-</tr>
-<tr>
-<td>
-
-```python
-inputs = ["A", "B", "C"]
-
-embedding = openai.Embedding.create(
- input=inputs,
- model="text-embedding-ada-002"
-)
--
-```
-
-</td>
-<td>
-
-```python
-inputs = ["A", "B", "C"]
-
-for text in inputs:
- embedding = openai.Embedding.create(
- input=text,
- deployment_id="text-embedding-ada-002"
- #engine="text-embedding-ada-002"
- )
-```
-
-</td>
-</tr>
-</table>
-
-## Next steps
-
-* Learn more about how to work with GPT-35-Turbo and the GPT-4 models with [our how-to guide](../how-to/chatgpt.md).
-* For more examples, check out the [Azure OpenAI Samples GitHub repository](https://aka.ms/AOAICodeSamples)
cognitive-services Work With Code https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/openai/how-to/work-with-code.md
- Title: 'How to use the Codex models to work with code'-
-description: Learn how to use the Codex models on Azure OpenAI to handle a variety of coding tasks
----- Previously updated : 06/24/2022--
-keywords:
--
-# Codex models and Azure OpenAI Service
-
-The Codex model series is a descendant of our GPT-3 series that's been trained on both natural language and billions of lines of code. It's most capable in Python and proficient in over a dozen languages including C#, JavaScript, Go, Perl, PHP, Ruby, Swift, TypeScript, SQL, and even Shell.
-
-You can use Codex for a variety of tasks including:
-
-- Turn comments into code
-- Complete your next line or function in context
-- Bring knowledge to you, such as finding a useful library or API call for an application
-- Add comments
-- Rewrite code for efficiency
-
-## How to use the Codex models
-
-Here are a few examples of using Codex that can be tested in [Azure OpenAI Studio's](https://oai.azure.com) playground with a deployment of a Codex series model, such as `code-davinci-002`.
-
-### Saying "Hello" (Python)
-
-```python
-"""
-Ask the user for their name and say "Hello"
-"""
-```
-
-### Create random names (Python)
-
-```python
-"""
-1. Create a list of first names
-2. Create a list of last names
-3. Combine them randomly into a list of 100 full names
-"""
-```
-
-### Create a MySQL query (Python)
-
-```python
-"""
-Table customers, columns = [CustomerId, FirstName, LastName, Company, Address, City, State, Country, PostalCode, Phone, Fax, Email, SupportRepId]
-Create a MySQL query for all customers in Texas named Jane
-"""
-query =
-```
-
-### Explaining code (JavaScript)
-
-```javascript
-// Function 1
-var fullNames = [];
-for (var i = 0; i < 50; i++) {
- fullNames.push(names[Math.floor(Math.random() * names.length)]
- + " " + lastNames[Math.floor(Math.random() * lastNames.length)]);
-}
-
-// What does Function 1 do?
-```
-
-## Best practices
-
-### Start with a comment, data or code
-
-You can experiment using one of the Codex models in our playground (styling instructions as comments when needed).
-
-To get Codex to create a useful completion, it's helpful to think about what information a programmer would need to perform a task. This could simply be a clear comment or the data needed to write a useful function, like the names of variables or what class a function handles.
-
-In this example we tell Codex what to call the function and what task it's going to perform.
-
-```python
-# Create a function called 'nameImporter' to add a first and last name to the database
-```
-
-This approach scales even to the point where you can provide Codex with a comment and an example of a database schema to get it to write useful query requests for various databases. Here's an example where we provide the columns and table names for the query.
-
-```python
-# Table albums, columns = [AlbumId, Title, ArtistId]
-# Table artists, columns = [ArtistId, Name]
-# Table media_types, columns = [MediaTypeId, Name]
-# Table playlists, columns = [PlaylistId, Name]
-# Table playlist_track, columns = [PlaylistId, TrackId]
-# Table tracks, columns = [TrackId, Name, AlbumId, MediaTypeId, GenreId, Composer, Milliseconds, Bytes, UnitPrice]
-
-# Create a query for all albums with more than 10 tracks
-```
-
-When you show Codex the database schema, it's able to make an informed guess about how to format a query.
-
-### Specify the programming language
-
-Codex understands dozens of different programming languages. Many share similar conventions for comments, functions and other programming syntax. By specifying the language and what version in a comment, Codex is better able to provide a completion for what you want. That said, Codex is fairly flexible with style and syntax. Here's an example for R and Python.
-
-```r
-# R language
-# Calculate the mean distance between an array of points
-```
-
-```python
-# Python 3
-# Calculate the mean distance between an array of points
-```
-
-### Prompt Codex with what you want it to do
-
-If you want Codex to create a webpage, placing the first line of code in an HTML document (`<!DOCTYPE html>`) after your comment tells Codex what it should do next. The same method works for creating a function from a comment (following the comment with a new line starting with func or def).
-
-```html
-<!-- Create a web page with the title 'Kat Katman attorney at paw' -->
-<!DOCTYPE html>
-```
-
-Placing `<!DOCTYPE html>` after our comment makes it very clear to Codex what we want it to do.
-
-Or if we want to write a function we could start the prompt as follows and Codex will understand what it needs to do next.
-
-```python
-# Create a function to count to 100
-
-def counter
-```
-
-### Specifying libraries will help Codex understand what you want
-
-Codex is aware of a large number of libraries, APIs and modules. By telling Codex which ones to use, either from a comment or importing them into your code, Codex will make suggestions based upon them instead of alternatives.
-
-```html
-<!-- Use A-Frame version 1.2.0 to create a 3D website -->
-<!-- https://aframe.io/releases/1.2.0/aframe.min.js -->
-```
-
-By specifying the version, you can make sure Codex uses the most current library.
-
-> [!NOTE]
-> Codex can suggest helpful libraries and APIs, but always be sure to do your own research to make sure that they're safe for your application.
-
-### Comment style can affect code quality
-
-With some languages, the style of comments can improve the quality of the output. For example, when working with Python, in some cases using doc strings (comments wrapped in triple quotes) can give higher quality results than using the pound (`#`) symbol.
-
-```python
-"""
-Create an array of users and email addresses
-"""
-```
-
-### Comments inside of functions can be helpful
-
-Recommended coding standards usually suggest placing the description of a function inside the function. Using this format helps Codex more clearly understand what you want the function to do.
-
-```python
-def getUserBalance(id):
- """
-    Look up the user in the database 'UserData' and return their current account balance.
- """
-```
-
-### Provide examples for more precise results
-
-If you have a particular style or format you need Codex to use, providing examples or demonstrating it in the first part of the request will help Codex more accurately match what you need.
-
-```python
-"""
-Create a list of random animals and species
-"""
-animals = [ {"name": "Chomper", "species": "Hamster"}, {"name":
-```
-
-### Lower temperatures give more precise results
-
-Setting the API temperature to 0, or close to zero (such as 0.1 or 0.2) tends to give better results in most cases. Unlike GPT-3 models, where a higher temperature can provide useful creative and random results, higher temperatures with Codex models may give you really random or erratic responses.
-
-In cases where you need Codex to provide different potential results, start at zero and then increment upwards by 0.1 until you find suitable variation.
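-
-A minimal sketch of such a temperature sweep, assuming the Azure `openai` client settings are already configured and a Codex deployment named `code-davinci-002` (a hypothetical deployment name used only for illustration):
-
-```python
-import openai
-
-# Assumes api_type, api_base, api_version, and api_key are already set for Azure,
-# and that "code-davinci-002" is the name of your Codex deployment (hypothetical).
-prompt = "# Python 3\n# Calculate the mean distance between an array of points\n"
-
-# Start at temperature 0 and step up by 0.1 until the completions vary enough.
-for temperature in [0.0, 0.1, 0.2]:
-    completion = openai.Completion.create(
-        prompt=prompt,
-        deployment_id="code-davinci-002",
-        temperature=temperature,
-        max_tokens=64,
-    )
-    print(f"temperature={temperature}")
-    print(completion.choices[0].text)
-```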
-
-### Organize tasks into functions
-
-We can get Codex to write functions by specifying what the function should do in as precise terms as possible in a comment. For example, by writing the following comment, Codex creates a JavaScript timer function that's triggered when a user presses a button:
-
-A simple JavaScript timer
-
-```javascript
-// Create a timer that creates an alert in 10 seconds
-```
-
-### Creating example data
-
-Testing applications often requires using example data. Because Codex is a language model that understands how to comprehend and write natural language, you can ask Codex to create data like arrays of made up names, products and other variables. For example, here we ask Codex to create an array of weather temperatures.
-
-```javascript
-/* Create an array of weather temperatures for San Francisco */
-```
-
-Asking Codex to perform this task produces an array like this:
-
-```javascript
-var weather = [
- { month: 'January', high: 58, low: 48 },
- { month: 'February', high: 61, low: 50 },
- { month: 'March', high: 64, low: 53 },
- { month: 'April', high: 67, low: 55 },
- { month: 'May', high: 70, low: 58 },
- { month: 'June', high: 73, low: 61 },
- { month: 'July', high: 76, low: 63 },
- { month: 'August', high: 77, low: 64 },
- { month: 'September', high: 76, low: 63 },
- { month: 'October', high: 73, low: 61 },
- { month: 'November', high: 68, low: 57 },
- { month: 'December', high: 64, low: 54 }
-];
-```
-
-### Compound functions and small applications
-
-We can provide Codex with a comment consisting of a complex request like creating a random name generator or performing tasks with user input and Codex can generate the rest provided there are enough tokens.
-
-```javascript
-/*
-Create a list of animals
-Create a list of cities
-Use the lists to generate stories about what I saw at the zoo in each city
-*/
-```
-
-### Limit completion size for more precise results or lower latency
-
-Requesting longer completions in Codex can lead to imprecise answers and repetition. Limit the size of the query by reducing max_tokens and setting stop tokens. For instance, add `\n` as a stop sequence to limit completions to one line of code. Smaller completions also incur less latency.
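-
-As a hedged sketch (the deployment name and prompt are illustrative assumptions), limiting `max_tokens` and adding a newline stop sequence might look like this:
-
-```python
-import openai
-
-# Assumes the Azure openai settings are already configured; "code-davinci-002"
-# is a hypothetical deployment name.
-completion = openai.Completion.create(
-    prompt="# Python 3\n# Return the square of a number\ndef square(",
-    deployment_id="code-davinci-002",
-    temperature=0,
-    max_tokens=32,   # keep completions short for precision and lower latency
-    stop="\n"        # stop at the end of the current line of code
-)
-print(completion.choices[0].text)
-```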
-
-### Use streaming to reduce latency
-
-Large Codex queries can take tens of seconds to complete. To build applications that require lower latency, such as coding assistants that perform autocompletion, consider using streaming. Responses will be returned before the model finishes generating the entire completion. Applications that need only part of a completion can reduce latency by cutting off a completion either programmatically or by using creative values for `stop`.
-
-Users can combine streaming with duplication to reduce latency by requesting more than one solution from the API, and using the first response returned. Do this by setting `n > 1`. This approach consumes more token quota, so use carefully (for example, by using reasonable settings for `max_tokens` and `stop`).
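-
-A minimal streaming sketch (the deployment name is an assumption) that prints partial completions as they arrive:
-
-```python
-import openai
-
-# Assumes the Azure openai settings are already configured; "code-davinci-002"
-# is a hypothetical deployment name.
-response = openai.Completion.create(
-    prompt="# Python 3\n# Read a CSV file into a pandas DataFrame\n",
-    deployment_id="code-davinci-002",
-    temperature=0,
-    max_tokens=128,
-    stream=True  # tokens arrive as server-sent events instead of one final payload
-)
-
-# Print partial completions as soon as each chunk arrives.
-for chunk in response:
-    if chunk.choices:
-        print(chunk.choices[0].text, end="", flush=True)
-```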
-
-### Use Codex to explain code
-
-Codex's ability to create and understand code allows us to use it to perform tasks like explaining what the code in a file does. One way to accomplish this is by putting a comment after a function that starts with "This function" or "This application is." Codex will usually interpret this as the start of an explanation and complete the rest of the text.
-
-```javascript
-/* Explain what the previous function is doing: It
-```
-
-### Explaining an SQL query
-
-In this example, we use Codex to explain in a human readable format what an SQL query is doing.
-
-```sql
-SELECT DISTINCT department.name
-FROM department
-JOIN employee ON department.id = employee.department_id
-JOIN salary_payments ON employee.id = salary_payments.employee_id
-WHERE salary_payments.date BETWEEN '2020-06-01' AND '2020-06-30'
-GROUP BY department.name
-HAVING COUNT(employee.id) > 10;
--- Explanation of the above query in human readable format
-```
-
-### Writing unit tests
-
-Creating a unit test can be accomplished in Python simply by adding the comment "Unit test" and starting a function.
-
-```python
-# Python 3
-def sum_numbers(a, b):
- return a + b
-
-# Unit test
-def
-```
-
-### Checking code for errors
-
-By using examples, you can show Codex how to identify errors in code. In some cases no examples are required; however, demonstrating the level of detail to provide in a description can help Codex understand what to look for and how to explain it. (A check by Codex for errors shouldn't replace careful review by the user.)
-
-```javascript
-/* Explain why the previous function doesn't work. */
-```
-
-### Using source data to write database functions
-
-Just as a human programmer would benefit from understanding the database structure and the column names, Codex can use this data to help you write accurate query requests. In this example, we insert the schema for a database and tell Codex what to query the database for.
-
-```python
-# Table albums, columns = [AlbumId, Title, ArtistId]
-# Table artists, columns = [ArtistId, Name]
-# Table media_types, columns = [MediaTypeId, Name]
-# Table playlists, columns = [PlaylistId, Name]
-# Table playlist_track, columns = [PlaylistId, TrackId]
-# Table tracks, columns = [TrackId, Name, AlbumId, MediaTypeId, GenreId, Composer, Milliseconds, Bytes, UnitPrice]
-
-# Create a query for all albums with more than 10 tracks
-```
-
-### Converting between languages
-
-You can get Codex to convert from one language to another by following a simple format where you list the language of the code you want to convert in a comment, followed by the code and then a comment with the language you want it translated into.
-
-```python
-# Convert this from Python to R
-# Python version
-
-[ Python code ]
-
-# End
-
-# R version
-```
-
-### Rewriting code for a library or framework
-
-If you want Codex to make a function more efficient, you can provide it with the code to rewrite followed by an instruction on what format to use.
-
-```javascript
-// Rewrite this as a React component
-var input = document.createElement('input');
-input.setAttribute('type', 'text');
-document.body.appendChild(input);
-var button = document.createElement('button');
-button.innerHTML = 'Say Hello';
-document.body.appendChild(button);
-button.onclick = function() {
- var name = input.value;
- var hello = document.createElement('div');
- hello.innerHTML = 'Hello ' + name;
- document.body.appendChild(hello);
-};
-
-// React version:
-```
-
-## Next steps
-
-Learn more about the [underlying models that power Azure OpenAI](../concepts/models.md).
cognitive-services Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/openai/overview.md
- Title: What is Azure OpenAI Service?-
-description: Apply advanced language models to a variety of use cases with Azure OpenAI
------ Previously updated : 07/06/2023-
-recommendations: false
-keywords:
--
-# What is Azure OpenAI Service?
-
-Azure OpenAI Service provides REST API access to OpenAI's powerful language models including the GPT-4, GPT-35-Turbo, and Embeddings model series. In addition, the new GPT-4 and gpt-35-turbo model series have now reached general availability. These models can be easily adapted to your specific task including but not limited to content generation, summarization, semantic search, and natural language to code translation. Users can access the service through REST APIs, Python SDK, or our web-based interface in the Azure OpenAI Studio.
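-
-As a minimal, hedged sketch of calling the service through the Python SDK (the endpoint, key, and the deployment name `gpt-35-turbo` are placeholders, not fixed values):
-
-```python
-import openai
-
-# Placeholder values; replace with your own resource endpoint, key, and deployment name.
-openai.api_type = "azure"
-openai.api_base = "https://YOUR_RESOURCE_NAME.openai.azure.com"
-openai.api_version = "2023-05-15"  # subject to change
-openai.api_key = "YOUR_API_KEY"
-
-response = openai.ChatCompletion.create(
-    deployment_id="gpt-35-turbo",  # the name you gave your model deployment
-    messages=[{"role": "user", "content": "Count to 5 in a for loop"}],
-)
-print(response.choices[0].message.content)
-```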
-
-### Features overview
-
-| Feature | Azure OpenAI |
-| | |
-| Models available | **GPT-4 series** <br>**GPT-35-Turbo series**<br> Embeddings series <br> Learn more in our [Models](./concepts/models.md) page.|
-| Fine-tuning | Ada <br> Babbage <br> Curie <br> Cushman <br> Davinci <br>**Fine-tuning is currently unavailable to new customers**.|
-| Price | [Available here](https://azure.microsoft.com/pricing/details/cognitive-services/openai-service/) |
-| Virtual network support & private link support | Yes, unless using [Azure OpenAI on your data](./concepts/use-your-data.md). |
-| Managed Identity| Yes, via Azure Active Directory |
-| UI experience | **Azure portal** for account & resource management, <br> **Azure OpenAI Service Studio** for model exploration and fine tuning |
-| Model regional availability | [Model availability](./concepts/models.md) |
-| Content filtering | Prompts and completions are evaluated against our content policy with automated systems. High severity content will be filtered. |
-
-## Responsible AI
-
-At Microsoft, we're committed to the advancement of AI driven by principles that put people first. Generative models such as the ones available in Azure OpenAI have significant potential benefits, but without careful design and thoughtful mitigations, such models have the potential to generate incorrect or even harmful content. Microsoft has made significant investments to help guard against abuse and unintended harm, which includes requiring applicants to show well-defined use cases, incorporating Microsoft's <a href="https://www.microsoft.com/ai/responsible-ai?activetab=pivot1:primaryr6" target="_blank">principles for responsible AI use</a>, building content filters to support customers, and providing responsible AI implementation guidance to onboarded customers.
-
-## How do I get access to Azure OpenAI?
-
-Access is currently limited as we navigate high demand, upcoming product improvements, and <a href="https://www.microsoft.com/ai/responsible-ai?activetab=pivot1:primaryr6" target="_blank">Microsoft's commitment to responsible AI</a>. For now, we're working with customers with an existing partnership with Microsoft, lower risk use cases, and those committed to incorporating mitigations.
-
-More specific information is included in the application form. We appreciate your patience as we work to responsibly enable broader access to Azure OpenAI.
-
-Apply here for access:
-
-<a href="https://aka.ms/oaiapply" target="_blank">Apply now</a>
-
-## Comparing Azure OpenAI and OpenAI
-
-Azure OpenAI Service gives customers advanced language AI with OpenAI GPT-4, GPT-3, Codex, and DALL-E models with the security and enterprise promise of Azure. Azure OpenAI co-develops the APIs with OpenAI, ensuring compatibility and a smooth transition from one to the other.
-
-With Azure OpenAI, customers get the security capabilities of Microsoft Azure while running the same models as OpenAI. Azure OpenAI offers private networking, regional availability, and responsible AI content filtering.
-
-## Key concepts
-
-### Prompts & completions
-
-The completions endpoint is the core component of the API service. This API provides access to the model's text-in, text-out interface. Users simply need to provide an input **prompt** containing the English text command, and the model will generate a text **completion**.
-
-Here's an example of a simple prompt and completion:
-
->**Prompt**:
- ```
- """
- count to 5 in a for loop
- """
- ```
->
->**Completion**:
- ```
- for i in range(1, 6):
- print(i)
- ```
-
-### Tokens
-
-Azure OpenAI processes text by breaking it down into tokens. Tokens can be words or just chunks of characters. For example, the word "hamburger" gets broken up into the tokens "ham", "bur" and "ger", while a short and common word like "pear" is a single token. Many tokens start with a whitespace, for example " hello" and " bye".
-
-The total number of tokens processed in a given request depends on the length of your input, output and request parameters. The quantity of tokens being processed will also affect your response latency and throughput for the models.
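-
-A quick way to see how a piece of text is tokenized is the open-source `tiktoken` library. This is a hedged sketch; the `cl100k_base` encoding shown here applies to the newer chat and embedding models, not necessarily every model.
-
-```python
-import tiktoken
-
-# cl100k_base is the encoding used by newer chat and embedding models; older
-# completion models use other encodings, so treat this as illustrative.
-encoding = tiktoken.get_encoding("cl100k_base")
-
-text = "Azure OpenAI processes text by breaking it down into tokens."
-tokens = encoding.encode(text)
-
-print(len(tokens))              # number of tokens this input consumes
-print(encoding.decode(tokens))  # round-trips back to the original text
-```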
-
-### Resources
-
-Azure OpenAI is a new product offering on Azure. You can get started with Azure OpenAI the same way as any other Azure product where you [create a resource](how-to/create-resource.md), or instance of the service, in your Azure Subscription. You can read more about Azure's [resource management design](../../azure-resource-manager/management/overview.md).
-
-### Deployments
-
-Once you create an Azure OpenAI Resource, you must deploy a model before you can start making API calls and generating text. This action can be done using the Deployment APIs. These APIs allow you to specify the model you wish to use.
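-
-As a hedged example, one way to check which deployments already exist on a resource is to call the deployments endpoint directly; the environment variable names and the `api-version` value below are assumptions for illustration.
-
-```python
-import os
-
-import requests
-
-# Assumes AZURE_OPENAI_ENDPOINT and AZURE_OPENAI_API_KEY are set as environment
-# variables; the api-version value is illustrative and subject to change.
-endpoint = os.getenv("AZURE_OPENAI_ENDPOINT")
-api_key = os.getenv("AZURE_OPENAI_API_KEY")
-
-url = endpoint + "/openai/deployments?api-version=2022-12-01"
-response = requests.get(url, headers={"api-key": api_key})
-
-# Each entry includes the deployment id you pass as deployment_id/engine in API calls.
-print(response.json())
-```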
-
-### Prompt engineering
-
-GPT-3, GPT-3.5, and GPT-4 models from OpenAI are prompt-based. With prompt-based models, the user interacts with the model by entering a text prompt, to which the model responds with a text completion. This completion is the model's continuation of the input text.
-
-While these models are extremely powerful, their behavior is also very sensitive to the prompt. This makes [prompt engineering](./concepts/prompt-engineering.md) an important skill to develop.
-
-Prompt construction can be difficult. In practice, the prompt acts to configure the model weights to complete the desired task, but it's more of an art than a science, often requiring experience and intuition to craft a successful prompt.
-
-### Models
-
-The service provides users access to several different models. Each model provides a different capability and price point.
-
-GPT-4 models are the latest available models. Due to high demand, access to this model series is currently only available by request. To request access, existing Azure OpenAI customers can [apply by filling out this form](https://aka.ms/oai/get-gpt4).
-
-The DALL-E models, currently in preview, generate images from text prompts that the user provides.
-
-Learn more about each model on our [models concept page](./concepts/models.md).
-
-## Next steps
-
-Learn more about the [underlying models that power Azure OpenAI](./concepts/models.md).
cognitive-services Quotas Limits https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/openai/quotas-limits.md
- Title: Azure OpenAI Service quotas and limits-
-description: Quick reference, detailed description, and best practices on the quotas and limits for the OpenAI service in Azure Cognitive Services.
------ Previously updated : 06/08/2023---
-# Azure OpenAI Service quotas and limits
-
-This article contains a quick reference and a detailed description of the quotas and limits for Azure OpenAI in Azure Cognitive Services.
-
-## Quotas and limits reference
-
-The following sections provide you with a quick guide to the default quotas and limits that apply to Azure OpenAI:
-
-| Limit Name | Limit Value |
-|--|--|
-| OpenAI resources per region per Azure subscription | 30 |
-| Default quota per model and region (in tokens-per-minute)<sup>1</sup> |Text-Davinci-003: 120 K <br> GPT-4: 20 K <br> GPT-4-32K: 60 K <br> All others: 240 K |
-| Default DALL-E quota limits | 2 concurrent requests |
-| Maximum prompt tokens per request | Varies per model. For more information, see [Azure OpenAI Service models](./concepts/models.md)|
-| Max fine-tuned model deployments | 2 |
-| Total number of training jobs per resource | 100 |
-| Max simultaneous running training jobs per resource | 1 |
-| Max training jobs queued | 20 |
-| Max Files per resource | 30 |
-| Total size of all files per resource | 1 GB |
-| Max training job time (job will fail if exceeded) | 720 hours |
-| Max training job size (tokens in training file) x (# of epochs) | 2 Billion |
-
-<sup>1</sup> Default quota limits are subject to change.
-
-### General best practices to remain within rate limits
-
-To minimize issues related to rate limits, it's a good idea to use the following techniques:
-
-- Implement retry logic in your application (a minimal sketch follows this list).
-- Avoid sharp changes in the workload. Increase the workload gradually.
-- Test different load increase patterns.
-- Increase the quota assigned to your deployment. Move quota from another deployment, if necessary.
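-
-A minimal retry-with-backoff sketch; the deployment name, retry count, and delays are illustrative assumptions, not production recommendations.
-
-```python
-import time
-
-import openai
-
-# Assumes the Azure openai settings are already configured; "gpt-35-turbo" is a
-# hypothetical deployment name.
-def create_chat_completion_with_retries(messages, max_retries=5):
-    delay = 1
-    for attempt in range(max_retries):
-        try:
-            return openai.ChatCompletion.create(
-                deployment_id="gpt-35-turbo",
-                messages=messages,
-            )
-        except openai.error.RateLimitError:
-            if attempt == max_retries - 1:
-                raise
-            time.sleep(delay)  # back off before the next attempt
-            delay *= 2
-
-response = create_chat_completion_with_retries(
-    [{"role": "user", "content": "Hello"}]
-)
-print(response.choices[0].message.content)
-```
-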
-### How to request increases to the default quotas and limits
-
-Quota increase requests can be submitted from the [Quotas](./how-to/quota.md) page of Azure OpenAI Studio. Please note that due to overwhelming demand, we are not currently approving new quota increase requests. Your request will be queued until it can be filled at a later time.
-
-For other rate limits, please [submit a service request](/azure/cognitive-services/cognitive-services-support-options?context=%2Fazure%2Fcognitive-services%2Fopenai%2Fcontext%2Fcontext).
-
-## Next steps
-
-Explore how to [manage quota](./how-to/quota.md) for your Azure OpenAI deployments.
-Learn more about the [underlying models that power Azure OpenAI](./concepts/models.md).
cognitive-services Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/openai/reference.md
- Title: Azure OpenAI Service REST API reference-
-description: Learn how to use Azure OpenAI's REST API. In this article, you'll learn about authorization options, how to structure a request and receive a response.
----- Previously updated : 05/15/2023--
-recommendations: false
---
-# Azure OpenAI Service REST API reference
-
-This article provides details on the inference REST API endpoints for Azure OpenAI.
-
-## Authentication
-
-Azure OpenAI provides two methods for authentication. You can use either API Keys or Azure Active Directory; both request styles are illustrated in the sketch after the following list.
-
-- **API Key authentication**: For this type of authentication, all API requests must include the API Key in the ```api-key``` HTTP header. The [Quickstart](./quickstart.md) provides guidance for how to make calls with this type of authentication.
-
-- **Azure Active Directory authentication**: You can authenticate an API call using an Azure Active Directory token. Authentication tokens are included in a request as the ```Authorization``` header. The token provided must be preceded by ```Bearer```, for example ```Bearer YOUR_AUTH_TOKEN```. You can read our how-to guide on [authenticating with Azure Active Directory](./how-to/managed-identity.md).
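-
-A minimal sketch showing both header styles against the completions endpoint; the resource name, deployment name, key, and token are placeholders.
-
-```python
-import requests
-
-endpoint = "https://YOUR_RESOURCE_NAME.openai.azure.com"
-url = f"{endpoint}/openai/deployments/YOUR_DEPLOYMENT_NAME/completions?api-version=2023-05-15"
-body = {"prompt": "Once upon a time", "max_tokens": 5}
-
-# API Key authentication: pass the key in the api-key header.
-response = requests.post(url, headers={"api-key": "YOUR_API_KEY"}, json=body)
-
-# Azure Active Directory authentication: pass a bearer token in the Authorization
-# header instead (token acquisition is covered in the how-to guide linked above).
-response = requests.post(
-    url, headers={"Authorization": "Bearer YOUR_AUTH_TOKEN"}, json=body
-)
-print(response.json())
-```
-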
-### REST API versioning
-
-The service APIs are versioned using the ```api-version``` query parameter. All versions follow the YYYY-MM-DD date structure. For example:
-
-```http
-POST https://YOUR_RESOURCE_NAME.openai.azure.com/openai/deployments/YOUR_DEPLOYMENT_NAME/completions?api-version=2023-05-15
-```
-
-## Completions
-
-With the Completions operation, the model will generate one or more predicted completions based on a provided prompt. The service can also return the probabilities of alternative tokens at each position.
-
-**Create a completion**
-
-```http
-POST https://{your-resource-name}.openai.azure.com/openai/deployments/{deployment-id}/completions?api-version={api-version}
-```
-
-**Path parameters**
-
-| Parameter | Type | Required? | Description |
-|--|--|--|--|
-| ```your-resource-name``` | string | Required | The name of your Azure OpenAI Resource. |
-| ```deployment-id``` | string | Required | The deployment name you chose when you deployed the model. |
-| ```api-version``` | string | Required |The API version to use for this operation. This follows the YYYY-MM-DD format. |
-
-**Supported versions**
--- `2023-03-15-preview` [Swagger spec](https://github.com/Azure/azure-rest-api-specs/blob/main/specification/cognitiveservices/data-plane/AzureOpenAI/inference/preview/2023-03-15-preview/inference.json)-- `2022-12-01` [Swagger spec](https://github.com/Azure/azure-rest-api-specs/blob/main/specification/cognitiveservices/data-plane/AzureOpenAI/inference/stable/2022-12-01/inference.json)-- `2023-05-15` [Swagger spec](https://github.com/Azure/azure-rest-api-specs/blob/main/specification/cognitiveservices/data-plane/AzureOpenAI/inference/stable/2023-05-15/inference.json)-- `2023-06-01-preview` [Swagger spec](https://github.com/Azure/azure-rest-api-specs/blob/main/specification/cognitiveservices/data-plane/AzureOpenAI/inference/preview/2023-06-01-preview/inference.json)-
-**Request body**
-
-| Parameter | Type | Required? | Default | Description |
-|--|--|--|--|--|
-| ```prompt``` | string or array | Optional | ```<\|endoftext\|>``` | The prompt(s) to generate completions for, encoded as a string, or array of strings. Note that ```<\|endoftext\|>``` is the document separator that the model sees during training, so if a prompt isn't specified the model will generate as if from the beginning of a new document. |
-| ```max_tokens``` | integer | Optional | 16 | The maximum number of tokens to generate in the completion. The token count of your prompt plus max_tokens can't exceed the model's context length. Most models have a context length of 2048 tokens (except for the newest models, which support 4096). |
-| ```temperature``` | number | Optional | 1 | What sampling temperature to use, between 0 and 2. Higher values means the model will take more risks. Try 0.9 for more creative applications, and 0 (`argmax sampling`) for ones with a well-defined answer. We generally recommend altering this or top_p but not both. |
-| ```top_p``` | number | Optional | 1 | An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered. We generally recommend altering this or temperature but not both. |
-| ```logit_bias``` | map | Optional | null | Modify the likelihood of specified tokens appearing in the completion. Accepts a json object that maps tokens (specified by their token ID in the GPT tokenizer) to an associated bias value from -100 to 100. You can use this tokenizer tool (which works for both GPT-2 and GPT-3) to convert text to token IDs. Mathematically, the bias is added to the logits generated by the model prior to sampling. The exact effect will vary per model, but values between -1 and 1 should decrease or increase likelihood of selection; values like -100 or 100 should result in a ban or exclusive selection of the relevant token. As an example, you can pass {"50256": -100} to prevent the <\|endoftext\|> token from being generated. |
-| ```user``` | string | Optional | | A unique identifier representing your end-user, which can help monitoring and detecting abuse |
-| ```n``` | integer | Optional | 1 | How many completions to generate for each prompt. Note: Because this parameter generates many completions, it can quickly consume your token quota. Use carefully and ensure that you have reasonable settings for max_tokens and stop. |
-| ```stream``` | boolean | Optional | False | Whether to stream back partial progress. If set, tokens will be sent as data-only server-sent events as they become available, with the stream terminated by a data: [DONE] message.|
-| ```logprobs``` | integer | Optional | null | Include the log probabilities on the logprobs most likely tokens, as well as the chosen tokens. For example, if logprobs is 10, the API will return a list of the 10 most likely tokens. The API will always return the logprob of the sampled token, so there may be up to logprobs+1 elements in the response. This parameter cannot be used with `gpt-35-turbo`. |
-| ```suffix```| string | Optional | null | The suffix that comes after a completion of inserted text. |
-| ```echo``` | boolean | Optional | False | Echo back the prompt in addition to the completion. This parameter cannot be used with `gpt-35-turbo`. |
-| ```stop``` | string or array | Optional | null | Up to four sequences where the API will stop generating further tokens. The returned text won't contain the stop sequence. |
-| ```presence_penalty``` | number | Optional | 0 | Number between -2.0 and 2.0. Positive values penalize new tokens based on whether they appear in the text so far, increasing the model's likelihood to talk about new topics. |
-| ```frequency_penalty``` | number | Optional | 0 | Number between -2.0 and 2.0. Positive values penalize new tokens based on their existing frequency in the text so far, decreasing the model's likelihood to repeat the same line verbatim. |
-| ```best_of``` | integer | Optional | 1 | Generates best_of completions server-side and returns the "best" (the one with the lowest log probability per token). Results can't be streamed. When used with n, best_of controls the number of candidate completions and n specifies how many to return; best_of must be greater than n. Note: Because this parameter generates many completions, it can quickly consume your token quota. Use carefully and ensure that you have reasonable settings for max_tokens and stop. This parameter cannot be used with `gpt-35-turbo`. |
-
-#### Example request
-
-```console
-curl https://YOUR_RESOURCE_NAME.openai.azure.com/openai/deployments/YOUR_DEPLOYMENT_NAME/completions?api-version=2023-05-15\
- -H "Content-Type: application/json" \
- -H "api-key: YOUR_API_KEY" \
- -d "{
- \"prompt\": \"Once upon a time\",
- \"max_tokens\": 5
-}"
-```
-
-#### Example response
-
-```json
-{
- "id": "cmpl-4kGh7iXtjW4lc9eGhff6Hp8C7btdQ",
- "object": "text_completion",
- "created": 1646932609,
- "model": "ada",
- "choices": [
- {
- "text": ", a dark line crossed",
- "index": 0,
- "logprobs": null,
- "finish_reason": "length"
- }
- ]
-}
-```
-
-In the example response, `finish_reason` equals `length`, meaning the completion reached the `max_tokens` limit. If `finish_reason` equals `content_filter`, consult our [content filtering guide](./concepts/content-filter.md) to understand why this is occurring.
-
-## Embeddings
-Get a vector representation of a given input that can be easily consumed by machine learning models and other algorithms.
-
-> [!NOTE]
-> We currently do not support batching of embeddings into a single API call. If you receive the error `InvalidRequestError: Too many inputs. The max number of inputs is 1. We hope to increase the number of inputs per request soon.`, this typically occurs when an array of embeddings is attempted to be passed as a batch rather than a single string. The string can be up to 8191 tokens in length when using the text-embedding-ada-002 (Version 2) model.
-
-**Create an embedding**
-
-```http
-POST https://{your-resource-name}.openai.azure.com/openai/deployments/{deployment-id}/embeddings?api-version={api-version}
-```
-
-**Path parameters**
-
-| Parameter | Type | Required? | Description |
-|--|--|--|--|
-| ```your-resource-name``` | string | Required | The name of your Azure OpenAI Resource. |
-| ```deployment-id``` | string | Required | The name of your model deployment. You're required to first deploy a model before you can make calls |
-| ```api-version``` | string | Required |The API version to use for this operation. This follows the YYYY-MM-DD format. |
-
-**Supported versions**
--- `2023-03-15-preview` [Swagger spec](https://github.com/Azure/azure-rest-api-specs/blob/main/specification/cognitiveservices/data-plane/AzureOpenAI/inference/preview/2023-03-15-preview/inference.json)-- `2022-12-01` [Swagger spec](https://github.com/Azure/azure-rest-api-specs/blob/main/specification/cognitiveservices/data-plane/AzureOpenAI/inference/stable/2022-12-01/inference.json)-- `2023-05-15` [Swagger spec](https://github.com/Azure/azure-rest-api-specs/blob/main/specification/cognitiveservices/data-plane/AzureOpenAI/inference/stable/2023-05-15/inference.json)-
-**Request body**
-
-| Parameter | Type | Required? | Default | Description |
-|--|--|--|--|--|
-| ```input```| string | Yes | N/A | Input text to get embeddings for, encoded as a string. The number of input tokens varies depending on what [model you are using](./concepts/models.md). <br> Unless you're embedding code, we suggest replacing newlines (\n) in your input with a single space, as we have observed inferior results when newlines are present.|
-| ```user``` | string | No | Null | A unique identifier representing your end-user. This will help Azure OpenAI monitor and detect abuse. **Do not pass PII identifiers; instead, use pseudonymized values such as GUIDs.** |
-
-#### Example request
-
-```console
-curl https://YOUR_RESOURCE_NAME.openai.azure.com/openai/deployments/YOUR_DEPLOYMENT_NAME/embeddings?api-version=2023-05-15 \
- -H "Content-Type: application/json" \
- -H "api-key: YOUR_API_KEY" \
- -d "{\"input\": \"The food was delicious and the waiter...\"}"
-```
-
-#### Example response
-
-```json
-{
- "object": "list",
- "data": [
- {
- "object": "embedding",
- "embedding": [
- 0.018990106880664825,
- -0.0073809814639389515,
- .... (1024 floats total for ada)
- 0.021276434883475304,
- ],
- "index": 0
- }
- ],
- "model": "text-similarity-babbage:001"
-}
-```
-
-## Chat completions
-
-Create completions for chat messages with the GPT-35-Turbo and GPT-4 models.
-
-**Create chat completions**
-
-```http
-POST https://{your-resource-name}.openai.azure.com/openai/deployments/{deployment-id}/chat/completions?api-version={api-version}
-```
-
-**Path parameters**
-
-| Parameter | Type | Required? | Description |
-|--|--|--|--|
-| ```your-resource-name``` | string | Required | The name of your Azure OpenAI Resource. |
-| ```deployment-id``` | string | Required | The name of your model deployment. You're required to first deploy a model before you can make calls |
-| ```api-version``` | string | Required |The API version to use for this operation. This follows the YYYY-MM-DD format. |
-
-**Supported versions**
--- `2023-03-15-preview` [Swagger spec](https://github.com/Azure/azure-rest-api-specs/blob/main/specification/cognitiveservices/data-plane/AzureOpenAI/inference/preview/2023-03-15-preview/inference.json)-- `2023-05-15` [Swagger spec](https://github.com/Azure/azure-rest-api-specs/blob/main/specification/cognitiveservices/data-plane/AzureOpenAI/inference/stable/2023-05-15/inference.json)-- `2023-06-01-preview` [Swagger spec](https://github.com/Azure/azure-rest-api-specs/blob/main/specification/cognitiveservices/data-plane/AzureOpenAI/inference/preview/2023-06-01-preview/inference.json)-
-#### Example request
-
-```console
-curl https://YOUR_RESOURCE_NAME.openai.azure.com/openai/deployments/YOUR_DEPLOYMENT_NAME/chat/completions?api-version=2023-05-15 \
- -H "Content-Type: application/json" \
- -H "api-key: YOUR_API_KEY" \
- -d '{"messages":[{"role": "system", "content": "You are a helpful assistant."},{"role": "user", "content": "Does Azure OpenAI support customer managed keys?"},{"role": "assistant", "content": "Yes, customer managed keys are supported by Azure OpenAI."},{"role": "user", "content": "Do other Azure Cognitive Services support this too?"}]}'
-
-```
-
-#### Example response
-
-```console
-{"id":"chatcmpl-6v7mkQj980V1yBec6ETrKPRqFjNw9",
-"object":"chat.completion","created":1679072642,
-"model":"gpt-35-turbo",
-"usage":{"prompt_tokens":58,
-"completion_tokens":68,
-"total_tokens":126},
-"choices":[{"message":{"role":"assistant",
-"content":"Yes, other Azure Cognitive Services also support customer managed keys. Azure Cognitive Services offer multiple options for customers to manage keys, such as using Azure Key Vault, customer-managed keys in Azure Key Vault or customer-managed keys through Azure Storage service. This helps customers ensure that their data is secure and access to their services is controlled."},"finish_reason":"stop","index":0}]}
-```
-
-In the example response, `finish_reason` equals `stop`. If `finish_reason` equals `content_filter`, consult our [content filtering guide](./concepts/content-filter.md) to understand why this is occurring.
-
-Output formatting is adjusted for ease of reading; the actual output is a single block of text without line breaks.
-
-| Parameter | Type | Required? | Default | Description |
-|--|--|--|--|--|
-| ```messages``` | array | Required | | The messages to generate chat completions for, in the chat format. |
-| ```temperature``` | number | Optional | 1 | What sampling temperature to use, between 0 and 2. Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic. We generally recommend altering this or `top_p` but not both. |
-| ```n``` | integer | Optional | 1 | How many chat completion choices to generate for each input message. |
-| ```stream``` | boolean | Optional | false | If set, partial message deltas will be sent, like in ChatGPT. Tokens will be sent as data-only server-sent events as they become available, with the stream terminated by a `data: [DONE]` message. |
-| ```stop``` | string or array | Optional | null | Up to 4 sequences where the API will stop generating further tokens.|
-| ```max_tokens``` | integer | Optional | inf | The maximum number of tokens allowed for the generated answer. By default, the number of tokens the model can return will be (4096 - prompt tokens).|
-| ```presence_penalty``` | number | Optional | 0 | Number between -2.0 and 2.0. Positive values penalize new tokens based on whether they appear in the text so far, increasing the model's likelihood to talk about new topics.|
-| ```frequency_penalty``` | number | Optional | 0 | Number between -2.0 and 2.0. Positive values penalize new tokens based on their existing frequency in the text so far, decreasing the model's likelihood to repeat the same line verbatim.|
-| ```logit_bias``` | object | Optional | null | Modify the likelihood of specified tokens appearing in the completion. Accepts a json object that maps tokens (specified by their token ID in the tokenizer) to an associated bias value from -100 to 100. Mathematically, the bias is added to the logits generated by the model prior to sampling. The exact effect will vary per model, but values between -1 and 1 should decrease or increase likelihood of selection; values like -100 or 100 should result in a ban or exclusive selection of the relevant token.|
-| ```user``` | string | Optional | | A unique identifier representing your end-user, which can help Azure OpenAI to monitor and detect abuse.|
-
-## Completions extensions
-
-Extensions for chat completions, for example Azure OpenAI on your data.
-
-**Use chat completions extensions**
-
-```http
-POST {your-resource-name}/openai/deployments/{deployment-id}/extensions/chat/completions?api-version={api-version}
-```
-
-**Path parameters**
-
-| Parameter | Type | Required? | Description |
-|--|--|--|--|
-| ```your-resource-name``` | string | Required | The name of your Azure OpenAI Resource. |
-| ```deployment-id``` | string | Required | The name of your model deployment. You're required to first deploy a model before you can make calls |
-| ```api-version``` | string | Required |The API version to use for this operation. This follows the YYYY-MM-DD format. |
-
-**Supported versions**
-
-- `2023-06-01-preview` [Swagger spec](https://github.com/Azure/azure-rest-api-specs/blob/main/specification/cognitiveservices/data-plane/AzureOpenAI/inference/preview/2023-06-01-preview/inference.json)
-
-#### Example request
-
-```Console
-curl -i -X POST YOUR_RESOURCE_NAME/openai/deployments/YOUR_DEPLOYMENT_NAME/extensions/chat/completions?api-version=2023-06-01-preview \
--H "Content-Type: application/json" \
--H "api-key: YOUR_API_KEY" \
--H "chatgpt_url: YOUR_RESOURCE_URL" \
--H "chatgpt_key: YOUR_API_KEY" \
--d \
-'
-{
- "dataSources": [
- {
- "type": "AzureCognitiveSearch",
- "parameters": {
- "endpoint": "'YOUR_AZURE_COGNITIVE_SEARCH_ENDPOINT'",
- "key": "'YOUR_AZURE_COGNITIVE_SEARCH_KEY'",
- "indexName": "'YOUR_AZURE_COGNITIVE_SEARCH_INDEX_NAME'"
- }
- }
- ],
- "messages": [
- {
- "role": "user",
- "content": "What are the differences between Azure Machine Learning and Azure Cognitive Services?"
- }
- ]
-}
-'
-```
-
-#### Example response
-
-```json
-{
- "id": "12345678-1a2b-3c4e5f-a123-12345678abcd",
- "model": "",
- "created": 1684304924,
- "object": "chat.completion",
- "choices": [
- {
- "index": 0,
- "messages": [
- {
- "role": "tool",
- "content": "{\"citations\": [{\"content\": \"\\nCognitive Services are cloud-based artificial intelligence (AI) services...\", \"id\": null, \"title\": \"What is Cognitive Services\", \"filepath\": null, \"url\": null, \"metadata\": {\"chunking\": \"orignal document size=250. Scores=0.4314117431640625 and 1.72564697265625.Org Highlight count=4.\"}, \"chunk_id\": \"0\"}], \"intent\": \"[\\\"Learn about Azure Cognitive Services.\\\"]\"}",
- "end_turn": false
- },
- {
- "role": "assistant",
- "content": " \nAzure Cognitive Services are cloud-based artificial intelligence (AI) services that help developers build cognitive intelligence into applications without having direct AI or data science skills or knowledge. [doc1]. Azure Machine Learning is a cloud service for accelerating and managing the machine learning project lifecycle. [doc1].",
- "end_turn": true
- }
- ]
- }
- ]
-}
-```
-
-| Parameters | Type | Required? | Default | Description |
-|--|--|--|--|--|
-| `messages` | array | Required | null | The messages to generate chat completions for, in the chat format. |
-| `dataSources` | array | Required | | The data sources to be used for the Azure OpenAI on your data feature. |
-| `temperature` | number | Optional | 0 | What sampling temperature to use, between 0 and 2. Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic. We generally recommend altering this or `top_p` but not both. |
-| `top_p` | number | Optional | 1 |An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with `top_p` probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered. We generally recommend altering this or temperature but not both.|
-| `stream` | boolean | Optional | false | If set, partial message deltas will be sent, like in ChatGPT. Tokens will be sent as data-only server-sent events as they become available, with the stream terminated by a message `"messages": [{"delta": {"content": "[DONE]"}, "index": 2, "end_turn": true}]` |
-| `stop` | string or array | Optional | null | Up to 2 sequences where the API will stop generating further tokens. |
-| `max_tokens` | integer | Optional | 1000 | The maximum number of tokens allowed for the generated answer. By default, the number of tokens the model can return is `4096 - prompt_tokens`. |
-
-The following parameters can be used inside of the `parameters` field inside of `dataSources`.
-
-| Parameters | Type | Required? | Default | Description |
-|--|--|--|--|--|
-| `type` | string | Required | null | The data source to be used for the Azure OpenAI on your data feature. For Azure Cognitive search the value is `AzureCognitiveSearch`. |
-| `endpoint` | string | Required | null | The data source endpoint. |
-| `key` | string | Required | null | One of the Azure Cognitive Search admin keys for your service. |
-| `indexName` | string | Required | null | The search index to be used. |
-| `fieldsMapping` | dictionary | Optional | null | Index data column mapping. |
-| `inScope` | boolean | Optional | true | If set, this value limits responses to the grounding data content. |
-| `topNDocuments` | number | Optional | 5 | Number of documents that need to be fetched for document augmentation. |
-| `queryType` | string | Optional | simple | Indicates which query option will be used for Azure Cognitive Search. |
-| `semanticConfiguration` | string | Optional | null | The semantic search configuration. Only available when `queryType` is set to `semantic`. |
-| `roleInformation` | string | Optional | null | Gives the model instructions about how it should behave and the context it should reference when generating a response. Corresponds to the "System Message" in Azure OpenAI Studio. <!--See [Using your data](./concepts/use-your-data.md#system-message) for more information.--> There's a 100 token limit, which counts towards the overall token limit. |
-
-## Image generation
-
-### Request a generated image
-
-Generate a batch of images from a text caption. Image generation is currently only available with `api-version=2023-06-01-preview`.
-
-```http
-POST https://{your-resource-name}.openai.azure.com/openai/images/generations:submit?api-version={api-version}
-```
-
-**Path parameters**
-
-| Parameter | Type | Required? | Description |
-|--|--|--|--|
-| ```your-resource-name``` | string | Required | The name of your Azure OpenAI Resource. |
-| ```api-version``` | string | Required |The API version to use for this operation. This follows the YYYY-MM-DD format. |
-
-**Supported versions**
--- `2023-06-01-preview`-
-**Request body**
-
-| Parameter | Type | Required? | Default | Description |
-|--|--|--|--|--|
-| ```prompt``` | string | Required | | A text description of the desired image(s). The maximum length is 1000 characters. |
-| ```n``` | integer | Optional | 1 | The number of images to generate. Must be between 1 and 5. |
-| ```size``` | string | Optional | 1024x1024 | The size of the generated images. Must be one of `256x256`, `512x512`, or `1024x1024`. |
-
-#### Example request
-
-```console
-curl -X POST https://YOUR_RESOURCE_NAME.openai.azure.com/openai/images/generations:submit?api-version=2023-06-01-preview \
- -H "Content-Type: application/json" \
- -H "api-key: YOUR_API_KEY" \
- -d '{
-"prompt": "An avocado chair",
-"size": "512x512",
-"n": 3
-}'
-```
-
-#### Example response
-
-The operation returns a `202` status code and a `GenerateImagesResponse` JSON object containing the ID and status of the operation.
-
-```json
-{
- "id": "f508bcf2-e651-4b4b-85a7-58ad77981ffa",
- "status": "notRunning"
-}
-```
-
-### Get a generated image result
--
-Use this API to retrieve the results of an image generation operation. Image generation is currently only available with `api-version=2023-06-01-preview`.
-
-```http
-GET https://{your-resource-name}.openai.azure.com/openai/operations/images/{operation-id}?api-version={api-version}
-```
--
-**Path parameters**
-
-| Parameter | Type | Required? | Description |
-|--|--|--|--|
-| ```your-resource-name``` | string | Required | The name of your Azure OpenAI Resource. |
-| ```operation-id``` | string | Required | The GUID that identifies the original image generation request. |
-
-**Supported versions**
--- `2023-06-01-preview`-
-#### Example request
-
-```console
-curl -X GET "https://{your-resource-name}.openai.azure.com/openai/operations/images/{operation-id}?api-version=2023-06-01-preview"
--H "Content-Type: application/json"
--H "Api-Key: {api key}"
-```
-
-#### Example response
-
-Upon success the operation returns a `200` status code and an `OperationResponse` JSON object. The `status` field can be `"notRunning"` (task is queued but hasn't started yet), `"running"`, `"succeeded"`, `"canceled"` (task has timed out), `"failed"`, or `"deleted"`. A `succeeded` status indicates that the generated image is available for download at the given URL. If multiple images were generated, their URLs are all returned in the `result.data` field.
-
-```json
-{
- "created": 1685064331,
- "expires": 1685150737,
- "id": "4b755937-3173-4b49-bf3f-da6702a3971a",
- "result": {
- "data": [
- {
- "url": "<URL_TO_IMAGE>"
- },
- {
- "url": "<URL_TO_NEXT_IMAGE>"
- },
- ...
- ]
- },
- "status": "succeeded"
-}
-```
-
-### Delete a generated image from the server
-
-You can use the operation ID returned by the request to delete the corresponding image from the Azure server. Generated images are automatically deleted after 24 hours by default, but you can trigger the deletion earlier if you want to.
-
-```http
-DELETE https://{your-resource-name}.openai.azure.com/openai/operations/images/{operation-id}?api-version={api-version}
-```
-
-**Path parameters**
-
-| Parameter | Type | Required? | Description |
-|--|--|--|--|
-| ```your-resource-name``` | string | Required | The name of your Azure OpenAI Resource. |
-| ```operation-id``` | string | Required | The GUID that identifies the original image generation request. |
-
-**Supported versions**
--- `2023-06-01-preview`-
-#### Example request
-
-```console
-curl -X DELETE "https://{your-resource-name}.openai.azure.com/openai/operations/images/{operation-id}?api-version=2023-06-01-preview"
--H "Content-Type: application/json"
--H "Api-Key: {api key}"
-```
-
-#### Response
-
-The operation returns a `204` status code if successful. This API only succeeds if the operation is in an end state (not `running`).
-
-## Management APIs
-
-Azure OpenAI is deployed as part of Azure Cognitive Services. All Cognitive Services rely on the same set of management APIs for creation, update, and delete operations. The management APIs are also used for deploying models within an OpenAI resource.
-
-[**Management APIs reference documentation**](/rest/api/cognitiveservices/)
-
-## Next steps
-
-Learn about [managing deployments, models, and fine-tuning with the REST API](/rest/api/cognitiveservices/azureopenaistable/deployments/create).
-Learn more about the [underlying models that power Azure OpenAI](./concepts/models.md).
cognitive-services Embeddings https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/openai/tutorials/embeddings.md
- Title: Azure OpenAI Service embeddings tutorial-
-description: Learn how to use Azure OpenAI's embeddings API for document search with the BillSum dataset
----- Previously updated : 06/14/2023--
-recommendations: false
----
-# Tutorial: Explore Azure OpenAI Service embeddings and document search
-
-This tutorial will walk you through using the Azure OpenAI [embeddings](../concepts/understand-embeddings.md) API to perform **document search** where you'll query a knowledge base to find the most relevant document.
-
-In this tutorial, you learn how to:
-
-> [!div class="checklist"]
-> * Install Azure OpenAI and other dependent Python libraries.
-> * Download the BillSum dataset and prepare it for analysis.
-> * Create environment variables for your resource's endpoint and API key.
-> * Use the **text-embedding-ada-002 (Version 2)** model
-> * Use [cosine similarity](../concepts/understand-embeddings.md) to rank search results.
-
-> [!Important]
-> We strongly recommend using `text-embedding-ada-002 (Version 2)`. This model/version provides parity with OpenAI's `text-embedding-ada-002`. To learn more about the improvements offered by this model, please refer to [OpenAI's blog post](https://openai.com/blog/new-and-improved-embedding-model). Even if you are currently using Version 1, you should migrate to Version 2 to take advantage of the latest weights/updated token limit. Version 1 and Version 2 are not interchangeable, so document embedding and document search must be done using the same version of the model.
-
-## Prerequisites
-
-* An Azure subscription - [Create one for free](https://azure.microsoft.com/free/cognitive-services?azure-portal=true)
-* Access granted to Azure OpenAI in the desired Azure subscription
- Currently, access to this service is granted only by application. You can apply for access to Azure OpenAI by completing the form at <a href="https://aka.ms/oai/access" target="_blank">https://aka.ms/oai/access</a>. Open an issue on this repo to contact us if you have an issue.
-* <a href="https://www.python.org/" target="_blank">Python 3.7.1 or later version</a>
-* The following Python libraries: openai, num2words, matplotlib, plotly, scipy, scikit-learn, pandas, tiktoken.
-* [Jupyter Notebooks](https://jupyter.org/)
-* An Azure OpenAI resource with the **text-embedding-ada-002 (Version 2)** model deployed. This model is currently only available in [certain regions](../concepts/models.md#model-summary-table-and-region-availability). If you don't have a resource the process of creating one is documented in our [resource deployment guide](../how-to/create-resource.md).
-
-## Set up
-
-### Python libraries
-
-If you haven't already, you need to install the following libraries:
-
-```cmd
-pip install openai num2words matplotlib plotly scipy scikit-learn pandas tiktoken
-```
-
-<!--Alternatively, you can use our [requirements.txt file](https://github.com/Azure-Samples/Azure-OpenAI-Docs-Samples/blob/main/Samples/Tutorials/Embeddings/requirements.txt).-->
-
-### Download the BillSum dataset
-
-BillSum is a dataset of United States Congressional and California state bills. For illustration purposes, we'll look only at the US bills. The corpus consists of bills from the 103rd-115th (1993-2018) sessions of Congress. The data was split into 18,949 train bills and 3,269 test bills. The BillSum corpus focuses on mid-length legislation from 5,000 to 20,000 characters in length. More information on the project and the original academic paper from which this dataset is derived can be found on the [BillSum project's GitHub repository](https://github.com/FiscalNote/BillSum).
-
-This tutorial uses the `bill_sum_data.csv` file that can be downloaded from our [GitHub sample data](https://github.com/Azure-Samples/Azure-OpenAI-Docs-Samples/blob/main/Samples/Tutorials/Embeddings/data/bill_sum_data.csv).
-
-You can also download the sample data by running the following command on your local machine:
-
-```cmd
-curl "https://raw.githubusercontent.com/Azure-Samples/Azure-OpenAI-Docs-Samples/main/Samples/Tutorials/Embeddings/data/bill_sum_data.csv" --output bill_sum_data.csv
-```
-
-### Retrieve key and endpoint
-
-To successfully make a call against Azure OpenAI, you'll need an **endpoint** and a **key**.
-
-|Variable name | Value |
-|--|-|
-| `ENDPOINT` | This value can be found in the **Keys & Endpoint** section when examining your resource from the Azure portal. Alternatively, you can find the value in **Azure OpenAI Studio** > **Playground** > **Code View**. An example endpoint is: `https://docs-test-001.openai.azure.com`.|
-| `API-KEY` | This value can be found in the **Keys & Endpoint** section when examining your resource from the Azure portal. You can use either `KEY1` or `KEY2`.|
-
-Go to your resource in the Azure portal. The **Endpoint and Keys** can be found in the **Resource Management** section. Copy your endpoint and access key as you'll need both for authenticating your API calls. You can use either `KEY1` or `KEY2`. Always having two keys allows you to securely rotate and regenerate keys without causing a service disruption.
-
-Create and assign persistent environment variables for your key and endpoint.
-
-### Environment variables
-
-# [Command Line](#tab/command-line)
-
-```CMD
-setx AZURE_OPENAI_API_KEY "REPLACE_WITH_YOUR_KEY_VALUE_HERE"
-```
-
-```CMD
-setx AZURE_OPENAI_ENDPOINT "REPLACE_WITH_YOUR_ENDPOINT_HERE"
-```
-
-# [PowerShell](#tab/powershell)
-
-```powershell
-[System.Environment]::SetEnvironmentVariable('AZURE_OPENAI_API_KEY', 'REPLACE_WITH_YOUR_KEY_VALUE_HERE', 'User')
-```
-
-```powershell
-[System.Environment]::SetEnvironmentVariable('AZURE_OPENAI_ENDPOINT', 'REPLACE_WITH_YOUR_ENDPOINT_HERE', 'User')
-```
-
-# [Bash](#tab/bash)
-
-```Bash
-echo export AZURE_OPENAI_API_KEY="REPLACE_WITH_YOUR_KEY_VALUE_HERE" >> /etc/environment
-echo export AZURE_OPENAI_ENDPOINT="REPLACE_WITH_YOUR_ENDPOINT_HERE" >> /etc/environment
-
-source /etc/environment
-```
---
-After setting the environment variables, you may need to close and reopen Jupyter Notebooks, or whatever IDE you're using, for the environment variables to become accessible. While we strongly recommend using Jupyter Notebooks, if you can't, you'll need to modify any code that returns a pandas DataFrame by wrapping it in `print(dataframe_name)` rather than calling `dataframe_name` directly, as is often done at the end of a code block.
-
-Run the following code in your preferred Python IDE:
-
-<!--If you wish to view the Jupyter notebook that corresponds to this tutorial you can download the tutorial from our [samples repo](https://github.com/Azure-Samples/Azure-OpenAI-Docs-Samples/blob/main/Samples/Tutorials/Embeddings/embedding_billsum.ipynb).-->
-
-## Import libraries and list models
-
-```python
-import openai
-import os
-import re
-import requests
-import sys
-from num2words import num2words
-import pandas as pd
-import numpy as np
-from openai.embeddings_utils import get_embedding, cosine_similarity
-import tiktoken
-
-API_KEY = os.getenv("AZURE_OPENAI_API_KEY")
-RESOURCE_ENDPOINT = os.getenv("AZURE_OPENAI_ENDPOINT")
-
-openai.api_type = "azure"
-openai.api_key = API_KEY
-openai.api_base = RESOURCE_ENDPOINT
-openai.api_version = "2022-12-01"
-
-url = openai.api_base + "/openai/deployments?api-version=2022-12-01"
-
-r = requests.get(url, headers={"api-key": API_KEY})
-
-print(r.text)
-```
-
-```output
-{
- "data": [
- {
- "scale_settings": {
- "scale_type": "standard"
- },
- "model": "text-embedding-ada-002",
- "owner": "organization-owner",
- "id": "text-embedding-ada-002",
- "status": "succeeded",
- "created_at": 1657572678,
- "updated_at": 1657572678,
- "object": "deployment"
- },
- {
- "scale_settings": {
- "scale_type": "standard"
- },
- "model": "code-cushman-001",
- "owner": "organization-owner",
- "id": "code-cushman-001",
- "status": "succeeded",
- "created_at": 1657572712,
- "updated_at": 1657572712,
- "object": "deployment"
- },
- {
- "scale_settings": {
- "scale_type": "standard"
- },
- "model": "text-search-curie-doc-001",
- "owner": "organization-owner",
- "id": "text-search-curie-doc-001",
- "status": "succeeded",
- "created_at": 1668620345,
- "updated_at": 1668620345,
- "object": "deployment"
- },
- {
- "scale_settings": {
- "scale_type": "standard"
- },
- "model": "text-search-curie-query-001",
- "owner": "organization-owner",
- "id": "text-search-curie-query-001",
- "status": "succeeded",
- "created_at": 1669048765,
- "updated_at": 1669048765,
- "object": "deployment"
- }
- ],
- "object": "list"
-}
-```
-
-The output of this command will vary based on the number and type of models you've deployed. In this case, we need to confirm that we have an entry for **text-embedding-ada-002**. If you find that you're missing this model, you'll need to [deploy the model](../how-to/create-resource.md#deploy-a-model) to your resource before proceeding.
-
-Now we need to read our csv file and create a pandas DataFrame. After the initial DataFrame is created, we can view the contents of the table by running `df`.
-
-```python
-df=pd.read_csv(os.path.join(os.getcwd(),'bill_sum_data.csv')) # This assumes that you have placed the bill_sum_data.csv in the same directory you are running Jupyter Notebooks
-df
-```
-
-**Output:**
--
-The initial table has more columns than we need, so we'll create a new, smaller DataFrame called `df_bills`, which contains only the columns for `text`, `summary`, and `title`.
-
-```python
-df_bills = df[['text', 'summary', 'title']]
-df_bills
-```
-
-**Output:**
--
-Next we'll perform some light data cleaning by removing redundant whitespace and cleaning up the punctuation to prepare the data for tokenization.
-
-```python
-pd.options.mode.chained_assignment = None #https://pandas.pydata.org/pandas-docs/stable/user_guide/indexing.html#evaluation-order-matters
-
-# s is the input text
-def normalize_text(s, sep_token = " \n "):
-    # collapse runs of whitespace into single spaces
-    s = re.sub(r'\s+', ' ', s).strip()
-    s = re.sub(r". ,","",s)
-    # clean up stray punctuation
-    s = s.replace("..",".")
-    s = s.replace(". .",".")
-    s = s.replace("\n", "")
-    s = s.strip()
-
- return s
-
-df_bills['text']= df_bills["text"].apply(lambda x : normalize_text(x))
-```
-
-Now we need to remove any bills that are too long for the token limit (8192 tokens).
-
-```python
-tokenizer = tiktoken.get_encoding("cl100k_base")
-df_bills['n_tokens'] = df_bills["text"].apply(lambda x: len(tokenizer.encode(x)))
-df_bills = df_bills[df_bills.n_tokens<8192]
-len(df_bills)
-```
-
-```output
-20
-```
-
->[!NOTE]
->In this case all bills are under the embedding model input token limit, but you can use the technique above to remove entries that would otherwise cause embedding to fail. When faced with content that exceeds the embedding limit, you can also chunk the content into smaller pieces and then embed those one at a time.
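-
-A minimal chunking sketch, assuming the same `cl100k_base` tokenizer and a hypothetical `chunk_text` helper (not part of this tutorial's pipeline), might look like this:
-
-```python
-# Minimal sketch: split an over-long document into token-bounded pieces
-# so each piece can be embedded separately. chunk_text is a hypothetical helper.
-import tiktoken
-
-def chunk_text(text, max_tokens=8192, encoding_name="cl100k_base"):
-    tokenizer = tiktoken.get_encoding(encoding_name)
-    tokens = tokenizer.encode(text)
-    # Decode each max_tokens-sized slice of tokens back into text.
-    return [
-        tokenizer.decode(tokens[i : i + max_tokens])
-        for i in range(0, len(tokens), max_tokens)
-    ]
-
-# Example: embed each chunk individually instead of the full document.
-# chunks = chunk_text(long_bill_text)
-# chunk_embeddings = [get_embedding(c, engine='text-embedding-ada-002') for c in chunks]
-```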
-
-We'll once again examine **df_bills**.
-
-```python
-df_bills
-```
-
-**Output:**
--
-To better understand the `n_tokens` column, as well as how text is ultimately tokenized, it can be helpful to run the following code:
-
-```python
-sample_encode = tokenizer.encode(df_bills.text[0])
-decode = tokenizer.decode_tokens_bytes(sample_encode)
-decode
-```
-
-For our docs, we're intentionally truncating the output, but running this command in your environment returns the full text from index zero tokenized into chunks. You can see that in some cases an entire word is represented by a single token, whereas in others, parts of words are split across multiple tokens.
-
-```output
-[b'SECTION',
- b' ',
- b'1',
- b'.',
- b' SHORT',
- b' TITLE',
- b'.',
- b' This',
- b' Act',
- b' may',
- b' be',
- b' cited',
- b' as',
- b' the',
- b' ``',
- b'National',
- b' Science',
- b' Education',
- b' Tax',
- b' In',
- b'cent',
- b'ive',
- b' for',
- b' Businesses',
- b' Act',
- b' of',
- b' ',
- b'200',
- b'7',
- b"''.",
- b' SEC',
- b'.',
- b' ',
- b'2',
- b'.',
- b' C',
- b'RED',
- b'ITS',
- b' FOR',
- b' CERT',
- b'AIN',
- b' CONTRIBUT',
- b'IONS',
- b' BEN',
- b'EF',
- b'IT',
- b'ING',
- b' SC',
-```
-
-If you then check the length of the `decode` variable, you'll find it matches the first number in the n_tokens column.
-
-```python
-len(decode)
-```
-
-```output
-1466
-```
-
-Now that we understand more about how tokenization works, we can move on to embedding. It's important to note that we haven't actually tokenized the documents yet. The `n_tokens` column is simply a way of making sure none of the data we pass to the model for tokenization and embedding exceeds the input token limit of 8,192. When we pass the documents to the embeddings model, it breaks the documents into tokens similar (though not necessarily identical) to the examples above and then converts the tokens to a series of floating point numbers that will be accessible via vector search. These embeddings can be stored locally or in an Azure database. As a result, each bill will have its own corresponding embedding vector in the new `ada_v2` column on the right side of the DataFrame.
-
-```python
-df_bills['ada_v2'] = df_bills["text"].apply(lambda x : get_embedding(x, engine = 'text-embedding-ada-002')) # engine should be set to the deployment name you chose when you deployed the text-embedding-ada-002 (Version 2) model
-```
-
-```python
-df_bills
-```
-
-**Output:**
--
-When we run the search code block below, we embed the search query *"Can I get information on cable company tax revenue?"* with the same **text-embedding-ada-002 (Version 2)** model. Next, we find the bill embeddings closest to the newly embedded query text, ranked by [cosine similarity](../concepts/understand-embeddings.md).
-
-```python
-# search through the bills for the documents most relevant to the user query
-def search_docs(df, user_query, top_n=3, to_print=True):
- embedding = get_embedding(
- user_query,
- engine="text-embedding-ada-002" # engine should be set to the deployment name you chose when you deployed the text-embedding-ada-002 (Version 2) model
- )
- df["similarities"] = df.ada_v2.apply(lambda x: cosine_similarity(x, embedding))
-
- res = (
- df.sort_values("similarities", ascending=False)
- .head(top_n)
- )
- if to_print:
- display(res)
- return res
--
-res = search_docs(df_bills, "Can I get information on cable company tax revenue?", top_n=4)
-```
-
-**Output**:
--
-Finally, we'll show the top result from the document search, based on the user query, against the entire knowledge base. This returns the "Taxpayer's Right to View Act of 1993", which has a cosine similarity score of 0.76 between the query and the document:
-
-```python
-res["summary"][9]
-```
-
-```output
-"Taxpayer's Right to View Act of 1993 - Amends the Communications Act of 1934 to prohibit a cable operator from assessing separate charges for any video programming of a sporting, theatrical, or other entertainment event if that event is performed at a facility constructed, renovated, or maintained with tax revenues or by an organization that receives public financial support. Authorizes the Federal Communications Commission and local franchising authorities to make determinations concerning the applicability of such prohibition. Sets forth conditions under which a facility is considered to have been constructed, maintained, or renovated with tax revenues. Considers events performed by nonprofit or public organizations that receive tax subsidies to be subject to this Act if the event is sponsored by, or includes the participation of a team that is part of, a tax exempt organization."
-```
-
-Using this approach, you can use embeddings as a search mechanism across documents in a knowledge base. The user can then take the top search result and use it in the downstream task that prompted their initial query.
-
-## Clean up resources
-
-If you created an OpenAI resource solely for completing this tutorial and want to clean up, you'll need to delete your deployed models and then delete the resource, or the associated resource group if it's dedicated to your test resources. Deleting the resource group also deletes any other resources associated with it.
-
-- [Portal](../../cognitive-services-apis-create-account.md#clean-up-resources)
-- [Azure CLI](../../cognitive-services-apis-create-account-cli.md#clean-up-resources)
-
-## Next steps
-
-Learn more about Azure OpenAI's models:
-> [!div class="nextstepaction"]
-> [Azure OpenAI Service models](../concepts/models.md)
cognitive-services Use Your Data Quickstart https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/openai/use-your-data-quickstart.md
- Title: 'Use your own data with Azure OpenAI service'-
-description: Use this article to import and use your data in Azure OpenAI.
------- Previously updated : 07/12/2023
-recommendations: false
-zone_pivot_groups: openai-use-your-data
--
-# Quickstart: Chat with Azure OpenAI models using your own data
-
-In this quickstart you can use your own data with Azure OpenAI models. Using Azure OpenAI's models on your data can provide you with a powerful conversational AI platform that enables faster and more accurate communication.
-
-## Prerequisites
-- An Azure subscription - <a href="https://azure.microsoft.com/free/cognitive-services" target="_blank">Create one for free</a>.
-- Access granted to Azure OpenAI in the desired Azure subscription.
-
- Azure OpenAI requires registration and is currently only available to approved enterprise customers and partners. [See Limited access to Azure OpenAI Service](/legal/cognitive-services/openai/limited-access?context=%2Fazure%2Fcognitive-services%2Fopenai%2Fcontext%2Fcontext) for more information. You can apply for access to Azure OpenAI by completing the form at <a href="https://aka.ms/oai/access" target="_blank">https://aka.ms/oai/access</a>. Open an issue on this repo to contact us if you have an issue.
-- An Azure OpenAI resource with a chat model deployed (for example, GPT-3 or GPT-4). For more information about model deployment, see the [resource deployment guide](./how-to/create-resource.md).
-- Be sure that you are assigned at least the [Cognitive Services OpenAI Contributor](/azure/role-based-access-control/built-in-roles#cognitive-services-openai-contributor) role for the Azure OpenAI resource.
-
-> [!div class="nextstepaction"]
-> [I ran into an issue with the prerequisites.](https://microsoft.qualtrics.com/jfe/form/SV_0Cl5zkG3CnDjq6O?PLanguage=OVERVIEW&Pillar=AOAI&Product=ownData&Page=quickstart&Section=Prerequisites)
--------
-## Clean up resources
-
-If you want to clean up and remove an OpenAI or Azure Cognitive Search resource, you can delete the resource or resource group. Deleting the resource group also deletes any other resources associated with it.
-- [Cognitive Services resources](../cognitive-services-apis-create-account.md#clean-up-resources)
-- [Azure Cognitive Search resources](/azure/search/search-get-started-portal#clean-up-resources)
-- [Azure app service resources](/azure/app-service/quickstart-dotnetcore?pivots=development-environment-vs#clean-up-resources)
-
-## Next steps
-- Learn more about [using your data in Azure OpenAI Service](./concepts/use-your-data.md)
-- [Chat app sample code on GitHub](https://github.com/microsoft/sample-app-aoai-chatGPT/tree/main).
cognitive-services Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/openai/whats-new.md
- Title: What's new in Azure OpenAI Service?-
-description: Learn about the latest news and features updates for Azure OpenAI
------ Previously updated : 06/12/2023
-recommendations: false
-keywords:
--
-# What's new in Azure OpenAI Service
-
-## June 2023
-
-### Use Azure OpenAI on your own data (preview)
-- [Azure OpenAI on your data](./concepts/use-your-data.md) is now available in preview, enabling you to chat with OpenAI models such as GPT-35-Turbo and GPT-4 and receive responses based on your data.
-### New versions of gpt-35-turbo and gpt-4 models
-- gpt-35-turbo (version 0613)
-- gpt-35-turbo-16k (version 0613)
-- gpt-4 (version 0613)
-- gpt-4-32k (version 0613)
-
-### UK South
-- Azure OpenAI is now available in the UK South region. Check the [models page](concepts/models.md) for the latest information on model availability in each region.
-### Content filtering & annotations (Preview)
-- How to [configure content filters](how-to/content-filters.md) with Azure OpenAI Service.
-- [Enable annotations](concepts/content-filter.md) to view content filtering category and severity information as part of your GPT based Completion and Chat Completion calls.
-
-### Quota
-- Quota provides the flexibility to actively [manage the allocation of rate limits across the deployments](how-to/quota.md) within your subscription.
-## May 2023
-
-### Java & JavaScript SDK support
-- NEW Azure OpenAI preview SDKs offering support for [JavaScript](/azure/cognitive-services/openai/quickstart?tabs=command-line&pivots=programming-language-javascript) and [Java](/azure/cognitive-services/openai/quickstart?tabs=command-line&pivots=programming-language-java).
-### Azure OpenAI Chat Completion General Availability (GA)
--- General availability support for:
- - Chat Completion API version `2023-05-15`.
- - GPT-35-Turbo models.
- - GPT-4 model series. Due to high demand access to this model series is currently only available by request. To request access, existing Azure OpenAI customers can [apply by filling out this form](https://aka.ms/oai/get-gpt4)
-
-If you're currently using the `2023-03-15-preview` API, we recommend migrating to the GA `2023-05-15` API. If you're currently using API version `2022-12-01`, this API remains GA, but doesn't include the latest Chat Completion capabilities.
-
-> [!IMPORTANT]
-> Using the current versions of the GPT-35-Turbo models with the completion endpoint remains in preview.
-
-### France Central
-- Azure OpenAI is now available in the France Central region. Check the [models page](concepts/models.md) for the latest information on model availability in each region.
-## April 2023
-- **DALL-E 2 public preview**. Azure OpenAI Service now supports image generation APIs powered by OpenAI's DALL-E 2 model. Get AI-generated images based on the descriptive text you provide. To learn more, check out the [quickstart](./dall-e-quickstart.md). To request access, existing Azure OpenAI customers can [apply by filling out this form](https://aka.ms/oai/access).
-
-- **Inactive deployments of customized models will now be deleted after 15 days; models will remain available for redeployment.** If a customized (fine-tuned) model is deployed for more than fifteen (15) days during which no completions or chat completions calls are made to it, the deployment will automatically be deleted (and no further hosting charges will be incurred for that deployment). The underlying customized model will remain available and can be redeployed at any time. To learn more, check out the [how-to article](/azure/cognitive-services/openai/how-to/fine-tuning?pivots=programming-language-studio#deploy-a-customized-model).
-
-## March 2023
-- **GPT-4 series models are now available in preview on Azure OpenAI**. To request access, existing Azure OpenAI customers can [apply by filling out this form](https://aka.ms/oai/get-gpt4). These models are currently available in the East US and South Central US regions.
-
-- **New Chat Completion API for GPT-35-Turbo and GPT-4 models released in preview on 3/21**. To learn more, check out the [updated quickstarts](./quickstart.md) and [how-to article](./how-to/chatgpt.md).
-
-- **GPT-35-Turbo preview**. To learn more, check out the [how-to article](./how-to/chatgpt.md).
-
-- Increased training limits for fine-tuning: The max training job size (tokens in training file) x (# of epochs) is 2 billion tokens for all models. We have also increased the max training job from 120 to 720 hours.
-- Adding additional use cases to your existing access. Previously, the process for adding new use cases required customers to reapply to the service. Now, we're releasing a new process that allows you to quickly add new use cases to your use of the service. This process follows the established Limited Access process within Azure Cognitive Services. [Existing customers can attest to any and all new use cases here](https://customervoice.microsoft.com/Pages/ResponsePage.aspx?id=v4j5cvGGr0GRqy180BHbR7en2Ais5pxKtso_Pz4b1_xUM003VEJPRjRSOTZBRVZBV1E5N1lWMk1XUyQlQCN0PWcu). Note that this is required anytime you would like to use the service for a new use case you did not originally apply for.
-## February 2023
-
-### New Features
-- .NET SDK (inference) [preview release](https://www.nuget.org/packages/Azure.AI.OpenAI/1.0.0-beta.3) | [Samples](https://github.com/Azure/azure-sdk-for-net/tree/main/sdk/openai/Azure.AI.OpenAI/tests/Samples)
-- [Terraform SDK update](https://registry.terraform.io/providers/hashicorp/azurerm/3.37.0/docs/resources/cognitive_deployment) to support Azure OpenAI management operations.
-- Inserting text at the end of a completion is now supported with the `suffix` parameter.
-
-### Updates
-- Content filtering is on by default.
-New articles on:
-- [Monitoring an Azure OpenAI Service](./how-to/monitoring.md)
-- [Plan and manage costs for Azure OpenAI](./how-to/manage-costs.md)
-
-New training course:
-- [Intro to Azure OpenAI](/training/modules/explore-azure-openai/)
-## January 2023
-
-### New Features
-
-* **Service GA**. Azure OpenAI Service is now generally available.
-
-* **New models**: Addition of the latest text model, text-davinci-003 (East US, West Europe), and text-ada-embeddings-002 (East US, South Central US, West Europe).
--
-## December 2022
-
-### New features
-
-* **The latest models from OpenAI.** Azure OpenAI provides access to all the latest models, including the GPT-3.5 series.
-
-* **New API version (2022-12-01).** This update includes several requested enhancements, including token usage information in the API response, improved error messages for files, alignment with OpenAI on fine-tuning creation data structure, and support for the suffix parameter to allow custom naming of fine-tuned jobs.
-
-* **Higher request per second limits.** 50 for non-Davinci models. 20 for Davinci models.
-
-* **Faster fine-tune deployments.** Deploy Ada and Curie fine-tuned models in under 10 minutes.
-
-* **Higher training limits:** 40M training tokens for Ada, Babbage, and Curie. 10M for Davinci.
-
-* **Process for requesting modifications to the abuse and misuse data logging and human review.** Today, the service logs request/response data for the purposes of abuse and misuse detection to ensure that these powerful models aren't abused. However, many customers have strict data privacy and security requirements that require greater control over their data. To support these use cases, we're releasing a new process for customers to modify the content filtering policies or turn off the abuse logging for low-risk use cases. This process follows the established Limited Access process within Azure Cognitive Services, and [existing OpenAI customers can apply here](https://aka.ms/oai/modifiedaccess).
-
-* **Customer managed key (CMK) encryption.** CMK provides customers greater control over managing their data in Azure OpenAI by providing their own encryption keys used for storing training data and customized models. Customer-managed keys (CMK), also known as bring your own key (BYOK), offer greater flexibility to create, rotate, disable, and revoke access controls. You can also audit the encryption keys used to protect your data. [Learn more from our encryption at rest documentation](encrypt-data-at-rest.md).
-
-* **Lockbox support**
-
-* **SOC-2 compliance**
-
-* **Logging and diagnostics** through Azure Resource Health, Cost Analysis, and Metrics & Diagnostic settings.
-
-* **Studio improvements.** Numerous usability improvements to the Studio workflow including Azure AD role support to control who in the team has access to create fine-tuned models and deploy.
-
-### Changes (breaking)
-
-The **Fine-tuning** create API request has been updated to match OpenAI's schema.
-
-**Preview API versions:**
-
-```json
-{
-  "training_file": "file-XGinujblHPwGLSztz8cPS8XY",
-  "hyperparams": {
-    "batch_size": 4,
-    "learning_rate_multiplier": 0.1,
-    "n_epochs": 4,
-    "prompt_loss_weight": 0.1
-  }
-}
-```
-
-**API version 2022-12-01:**
-
-```json
-{
-  "training_file": "file-XGinujblHPwGLSztz8cPS8XY",
-  "batch_size": 4,
-  "learning_rate_multiplier": 0.1,
-  "n_epochs": 4,
-  "prompt_loss_weight": 0.1
-}
-```
-
-**Content filtering is temporarily off** by default. Azure content moderation works differently from OpenAI's. Azure OpenAI runs content filters during the generation call to detect harmful or abusive content and filters it from the response. [Learn more](./concepts/content-filter.md)
-
-These models will be re-enabled in Q1 2023 and be on by default.
-
-**Customer actions**
-
-* [Contact Azure Support](https://portal.azure.com/#view/Microsoft_Azure_Support/HelpAndSupportBlade/~/overview) if you would like these turned on for your subscription.
-* [Apply for filtering modifications](https://aka.ms/oai/modifiedaccess) if you would like to have them remain off. (This option is for low-risk use cases only.)
-
-## Next steps
-
-Learn more about the [underlying models that power Azure OpenAI](./concepts/models.md).
cognitive-services Concept Apprentice Mode https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/personalizer/concept-apprentice-mode.md
- Title: Apprentice mode - Personalizer
-description: Learn how to use apprentice mode to gain confidence in a model without changing any code.
--
-ms.
--- Previously updated : 07/26/2022--
-# Use Apprentice mode to train Personalizer without affecting your existing application
-
-When you deploy a new Personalizer resource, it's initialized with an untrained, or blank, model. That is, it hasn't learned from any data and therefore won't perform well in practice. This is known as the "cold start" problem, and it's resolved over time by training the model with real data from your production environment. **Apprentice mode** is a learning behavior that helps mitigate the "cold start" problem and allows you to gain confidence in the model _before_ it makes decisions in production, all without requiring any code change.
-
-<!--
-
-## What is Apprentice mode?
-
-Similar to how an apprentice can learn a craft by observing an expert, Apprentice mode enables Personalizer to learn by observing the decisions made by your application's current logic. The Personalizer model trains by mimicking the same decision output as the application. With each Rank API call, Personalizer can learn without impacting the existing logic and outcomes. Metrics, available from the Azure portal and the API, help you understand the performance as the model learns; specifically, how well Personalizer is matching your existing logic (also known as the baseline policy).
-
-Once Personalizer is able to reasonably match your existing logic 60-80% of the time, you can change the behavior from Apprentice mode to *Online mode*. At that time, Personalizer returns the best actions in the Rank API as determined by the underlying model and can learn how to make better decisions than your baseline policy.
-
-## Why use Apprentice mode?
-
-Apprentice mode provides a way for your model to mimic your existing decision logic before it makes online decisions used by your application. This helps to mitigate the aforementioned cold start problem and provides you with more trust in the Personalizer service and reassurance that the data sent to Personalizer is valuable for training the model. This is done without risking or affecting your online traffic and customer experiences.
-
-The two main reasons to use Apprentice mode are:
-
-* Mitigating **Cold Starts**: Apprentice mode helps mitigate the cost of training a "new" model in production by learning without the need to make uninformed decisions. The model learns to mimic your existing application logic.
-* **Validating Action and Context Features**: Context and Action features may be inadequate, inaccurate, or suboptimally engineered. If there are too few, too many, incorrect, noisy, or malformed features, Personalizer will have difficulty training a well-performing model. Conducting a [feature evaluation](how-to-feature-evaluation.md) while in Apprentice mode enables you to discover how effective the features are at training Personalizer and can identify areas for improving feature quality.
-
-## When should you use Apprentice mode?
-
-Use Apprentice mode to train Personalizer to improve its effectiveness through the following scenarios while leaving the experience of your users unaffected by Personalizer:
-
-* You're implementing Personalizer in a new scenario.
-* You've made major changes to the Context or Action features.
-
-However, Apprentice mode isn't an effective way of measuring the impact Personalizer can have on improving the average reward or your business KPIs. It can only evaluate how well the service is learning your existing logic given the current data you're providing. To measure how effective Personalizer is at choosing the best possible action for each Rank call, Personalizer must be in Online mode, or you can use [Offline evaluations](concepts-offline-evaluation.md) over a period of time when Personalizer was in Online mode.
-
-## Who should use Apprentice mode?
-
-Apprentice mode is useful for developers, data scientists, and business decision makers:
-
-* **Developers** can use Apprentice mode to ensure the Rank and Reward APIs are implemented correctly in the application, and that features being sent to Personalizer are free from errors and common mistakes. Learn more about creating good [Context and Action features](concepts-features.md).
-
-* **Data scientists** can use Apprentice mode to validate that the features are effective at training the Personalizer models. That is, the features contain useful information that allow Personalizer to learn the existing decision logic.
-
-* **Business Decision Makers** can use Apprentice mode to assess the potential of Personalizer to improve results (that is, rewards) compared to existing business logic; specifically, whether or not Personalizer can learn from the provided data before going into Online mode. This allows them to make an informed decision about impacting user experience, where real revenue and user satisfaction are at stake.
-
-## Comparing Behaviors - Apprentice mode and Online mode
-
-Learning when in Apprentice mode differs from Online mode in the following ways.
-
-|Area|Apprentice mode|Online mode|
-|--|--|--|
-|Impact on User Experience| The users' experience and business metrics won't change. Personalizer is trained by observing the **baseline actions** of your current application logic, without affecting them. | Your users' experience may change as the decision is made by Personalizer and not your baseline action.|
-|Learning speed|Personalizer will learn more slowly when in Apprentice mode compared to learning in Online mode. Apprentice mode can only learn by observing the rewards obtained by your default action without [exploration](concepts-exploration.md), which limits how much Personalizer can learn.|Learns faster because it can both _exploit_ the best action from the current model and _explore_ other actions for potentially better results.|
-|Learning effectiveness "Ceiling"|Personalizer can only approximate, and never exceed, the performance of your application's current logic (the total average reward achieved by the baseline action). It's unlikely that Personalizer will achieve a 100% match with your current application's logic, and it's recommended that once 60%-80% matching is achieved, Personalizer should be switched to Online mode.|Personalizer should exceed the performance of your baseline application logic. If Personalizer's performance stalls over time, you can conduct an [offline evaluation](concepts-offline-evaluation.md) and [feature evaluation](concept-feature-evaluation.md) to pursue additional improvements. |
-|Rank API return value for rewardActionId| The _rewardActionId_ will always be the Id of the default action. That is, the action you send as the first action in the Rank API request JSON. In other words, the Rank API does nothing visible for your application during Apprentice mode. |The _rewardActionId_ will be one of the Ids provided in the Rank API call, as determined by the Personalizer model.|
-|Evaluations|Personalizer keeps a comparison of the reward totals received by your current application logic, and the reward totals Personalizer would be getting if it was in Online mode at that point. This comparison is available to view in the *Monitor* blade of your Personalizer resource in the Azure portal.|Evaluate Personalizer's effectiveness by running [Offline evaluations](concepts-offline-evaluation.md), which let you compare the total rewards Personalizer has achieved against the potential rewards of the application's baseline.|
-
-Note that Personalizer is unlikely to achieve a 100% performance match with the application's baseline logic, and it will never exceed it. Performance matching of 60%-80% should be sufficient to switch Personalizer to Online mode, where Personalizer can learn better decisions and exceed the performance of your application's baseline logic.
-
-## Limitations of Apprentice Mode
-Apprentice mode trains the Personalizer model by attempting to imitate your existing application's baseline logic, using the Context and Action features present in the Rank calls. The following factors will affect Apprentice mode's ability to learn.
-
-### Scenarios where Apprentice Mode May Not be Appropriate:
-
-#### Editorially chosen Content:
-In some scenarios, such as news or entertainment, the baseline item could be manually assigned by an editorial team. This means humans are using their knowledge about the broader world, and their understanding of what may be appealing content, to choose specific articles or media out of a pool and flag them as "preferred" or "hero" articles. Because these editors aren't an algorithm, the factors they consider can be subjective and possibly unrelated to the Context or Action features, so Apprentice mode may have difficulty predicting the baseline action. In these situations, you can:
-
-* Test Personalizer in Online Mode: Consider putting Personalizer in Online Mode for time or in an A/B test if you have the infrastructure, and then run an Offline Evaluation to assess the difference between your application's baseline logic and Personalizer.
-* Add editorial considerations and recommendations as features: Ask your editors what factors influence their choices, and see if you can add those as features in your context and action. For example, editors in a media company may highlight content when a certain celebrity is often in the news: This knowledge could be added as a Context feature.
-
-### Factors that will improve and accelerate Apprentice Mode
-If apprentice mode is learning and attaining a matching performance above zero, but performance is improving slowly (not getting to 60% to 80% matched rewards within two weeks), it's possible that there's too little data being sent to Personalizer. The following steps may help facilitate faster learning:
-
-1. Adding differentiating features: You can do a visual inspection of the actions in a Rank call and their features. Does the baseline action have features that are differentiated from other actions? If they look mostly the same, add more features that will increase the diversity of the feature values.
-2. Reducing Actions per Event: Personalizer will use the "% of Rank calls to use for exploration" setting to discover preferences and trends. When a Rank call has more actions, the chance of any particular Action being chosen for exploration becomes lower. Reducing the number of actions sent in each Rank call to a smaller number (under 10) can be a temporary adjustment that may indicate whether or not Apprentice Mode has sufficient data to learn.
-
-## Using Apprentice mode to train with historical data
-
-If you have a significant amount of historical data that you would like to use to train Personalizer, you can use Apprentice mode to replay the data through Personalizer.
-
-Set up Personalizer in Apprentice mode and create a script that calls the Rank API with the actions and context features from the historical data. Call the Reward API based on your calculations of the records in this data. You may need approximately 50,000 historical events to see Personalizer attain a 60-80% match with your application's baseline logic. You may be able to achieve satisfactory results with fewer or more events.
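-
-As an illustration only, a minimal replay script might look like the following sketch. The endpoint, key, and `historical_events` records are placeholders; adapt the feature layout and reward calculation to your own data.
-
-```python
-# A minimal sketch, assuming a placeholder endpoint/key and hypothetical historical
-# records already converted to the same feature layout you use in production.
-import uuid
-import requests
-
-endpoint = "https://<your-resource-name>.cognitiveservices.azure.com"
-headers = {"Ocp-Apim-Subscription-Key": "<your-personalizer-key>"}
-
-historical_events = [  # hypothetical records derived from your historical data
-    {
-        "context": [{"timeOfDay": "morning", "device": "mobile"}],
-        "actions": [
-            {"id": "article-a", "features": [{"topic": "sports"}]},
-            {"id": "article-b", "features": [{"topic": "politics"}]},
-        ],
-        "reward": 1.0,  # your own calculation from the historical outcome
-    },
-]
-
-for event in historical_events:
-    event_id = str(uuid.uuid4())
-    # Rank call: replays the historical context and actions.
-    requests.post(
-        f"{endpoint}/personalizer/v1.0/rank",
-        headers=headers,
-        json={"eventId": event_id, "contextFeatures": event["context"], "actions": event["actions"]},
-    )
-    # Reward call: sends the reward you calculated for this historical record.
-    requests.post(
-        f"{endpoint}/personalizer/v1.0/events/{event_id}/reward",
-        headers=headers,
-        json={"value": event["reward"]},
-    )
-```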
-
-When training from historical data, it's recommended that the data sent in (the features for context and actions, their layout in the JSON used for Rank requests, and the calculation of reward in this training data set) matches the data (features and calculation of reward) available from your existing application.
-
-Offline and historical data tends to be more incomplete and noisier and can differ in format from your in-production (or online) scenario. While training from historical data is possible, the results from doing so may be inconclusive and aren't necessarily a good predictor of how well Personalizer will learn in Online mode, especially if the features vary between the historical data and the current scenario.
-
-## Using Apprentice Mode versus A/B Tests
-
-It's only useful to do A/B tests of Personalizer treatments once it has been validated and is learning in Online mode, since in Apprentice mode only the baseline action is used and the existing logic is learned. This essentially means Personalizer is returning the action of the "control" arm of your A/B test, so an A/B test in Apprentice mode has no value.
-
-Once you have a use case using Personalizer and learning online, A/B experiments can allow you to create controlled cohorts and conduct comparisons of results that may be more complex than the signals used for rewards. An example question an A/B test could answer is: "In a retail website, Personalizer optimizes a layout and gets more users to _check out_ earlier, but does this reduce total revenue per transaction?"
-
-## Next steps
-
-* Learn about [active and inactive events](concept-active-inactive-events.md)
cognitive-services Concept Auto Optimization https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/personalizer/concept-auto-optimization.md
- Title: Auto-optimize - Personalizer
-description: This article provides a conceptual overview of the auto-optimize feature for Azure Personalizer service.
--
-ms.
--- Previously updated : 03/08/2021--
-# Personalizer Auto-Optimize (Preview)
--
-## Introduction
-Personalizer automatic optimization saves you manual effort in keeping a Personalizer loop at its best machine learning performance by automatically searching for improved Learning Settings used to train your models and applying them. Personalizer has strict criteria for applying new Learning Settings, to ensure that improvements are unlikely to introduce a loss in rewards.
-
-Personalizer Auto-Optimize is in public preview, and its features, approaches, and processes will change based on user feedback.
-
-## When to use Auto-Optimize
-In most cases, the best option is to have Auto-Optimize turned on. Auto-Optimize is *on* by default for new Personalizer loops.
-
-Auto-optimize may help in the following situations:
-* You build applications that are used by many tenants, and each gets their own Personalizer loop(s); for example, if you host multiple e-commerce sites. Auto-Optimize allows you to avoid the manual effort you'd need to tune learning settings for large numbers of Personalizer loops.
-* You have deployed Personalizer and validated that it is working well, getting good rewards, and you have made sure there are no bugs or problems in your features.
-
-> [!NOTE]
-> Auto-Optimize will periodically overwrite Personalizer Learning Settings. If your use case or industry requires audit and archive of models and settings, or if you need backups of previous settings, you can use the Personalizer API to retrieve Learning Settings, or download them via the Azure portal.
-
-## How to enable and disable Auto-Optimize
-To Enable Auto-Optimize, use the toggle switch in the "Model and Learning Settings" blade in the Azure portal.
-
-Alternatively, you can activate Auto-Optimize using the Personalizer `/configurations/service` API.
-
-To disable Auto-Optimize, turn off the toggle.
-
-## Auto-Optimize reports
-
-In the Model and Learning Settings blade you can see the history of auto-optimize runs and the action taken on each.
-
-The table shows:
-* When an auto-optimize run happened,
-* What data window was included,
-* What was the reward performance of online, baseline, and best found Learning Settings,
-* Actions taken: if Learning Settings were updated or not.
-
-Reward performance of different learning settings in each auto-optimization history row are shown in absolute numbers, and as percentages relative to baseline performance.
-
-**Example**: If your baseline average reward is estimated to be 0.20, and the online Personalizer behavior is achieving 0.30, these will be shown as 100% and 150% respectively. If the auto optimization found learning settings capable of achieving 0.40 average reward, it will be shown as 200% (0.40 is 200% of 0.20). Assuming the confidence margins allow for it, the new settings would be applied, and then these would drive Personalizer as the Online settings until the next run.
-
-A history of up to 24 previous Auto-Optimize runs is kept for your analysis. You can seek out more details about those Offline Evaluations and reports for each. Also, the reports contain any Learning Settings that are in this history, which you can find and download or apply.
-
-## How it works
-Personalizer is constantly training the AI models it uses based on rewards. This training is done following some *Learning Settings*, which contain hyper-parameters and other values used in the training process. These learning settings can be "tuned" to your specific Personalizer instance.
-
-Personalizer also has the ability to perform *Offline Evaluations*. Offline Evaluations look at past data and can produce a statistical estimation of the average reward that Personalizer's different algorithms and models could have attained. During this process, Personalizer will also search for better Learning Settings, estimating their performance (how many rewards they would have gotten) over that past time period.
-
-#### Auto-Optimize frequency
-Auto-Optimize runs periodically and performs the optimization based on past data:
-* If your application has sent Personalizer more than approximately 20 MB of data in the last two weeks, it uses the last two weeks of data.
-* If your application has sent less than this amount, Personalizer adds data from previous days until there's enough data to optimize, or it reaches the earliest data stored (up to the Data Retention number of days).
-
-The exact times and days when Auto-Optimize runs are determined by the Personalizer service and will fluctuate over time.
-
-#### Criteria for updating learning settings
-
-Personalizer uses these reward estimations to decide whether to replace the current Learning Settings with others. Each estimation is a distribution curve, with upper and lower 95% confidence bounds. Personalizer will only apply new Learning Settings if:
- * They showed higher average rewards in the evaluation period, AND
- * They have a lower bound of the 95% confidence interval, that is *higher* than the lower bound of the 95% confidence interval of the online Learning Settings.
-This criterion, which maximizes the reward improvement while trying to eliminate the probability of losing future rewards, is managed by Personalizer and draws from research in [Seldonian algorithms](https://aisafety.cs.umass.edu/overview.html) and AI safety.
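-
-As an illustration only (not Personalizer's actual implementation), the update rule described above can be sketched as:
-
-```python
-# Illustrative sketch of the decision rule; the function and values are hypothetical.
-def should_apply_new_settings(candidate_mean, candidate_ci_lower,
-                              online_mean, online_ci_lower):
-    """Apply candidate Learning Settings only if they beat the online settings on
-    average AND their 95% confidence lower bound is higher than the online one."""
-    return candidate_mean > online_mean and candidate_ci_lower > online_ci_lower
-
-# Example: candidate averages 0.40 reward (lower bound 0.33) vs. online 0.30 (lower bound 0.28)
-print(should_apply_new_settings(0.40, 0.33, 0.30, 0.28))  # True -> settings would be updated
-```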
-
-#### Limitations of Auto-Optimize
-
-Personalizer Auto-Optimize relies on an evaluation of a past period to estimate performance in the future. It's possible that, due to external factors in the world, your application, and your users, the estimations and predictions made about Personalizer's models for the past period aren't representative of the future.
-
-Automatic Optimization Preview is unavailable for Personalizer loops that have enabled the Multi-Slot personalization API Preview functionality.
-
-## Next steps
-
-* [Offline evaluations](concepts-offline-evaluation.md)
-* [Learning Policy and Settings](concept-active-learning.md)
-* [How To Analyze Personalizer with an Offline Evaluation](how-to-offline-evaluation.md)
-* [Research in AI Safety](https://aisafety.cs.umass.edu/overview.html)
cognitive-services Concept Rewards https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/personalizer/concept-rewards.md
- Title: Reward score - Personalizer
-description: The reward score indicates how well the personalization choice, RewardActionID, resulted for the user. The value of the reward score is determined by your business logic, based on observations of user behavior. Personalizer trains its machine learning models by evaluating the rewards.
--
-ms.
-- Previously updated : 02/20/2020---
-# Reward scores indicate success of personalization
-
-The reward score indicates how well the personalization choice, [RewardActionID](/rest/api/personalizer/1.0/rank/rank#response), resulted for the user. The value of the reward score is determined by your business logic, based on observations of user behavior.
-
-Personalizer trains its machine learning models by evaluating the rewards.
-
-Learn [how to](how-to-settings.md#configure-rewards-for-the-feedback-loop) configure the default reward score in the Azure portal for your Personalizer resource.
-
-## Use Reward API to send reward score to Personalizer
-
-Rewards are sent to Personalizer by the [Reward API](/rest/api/personalizer/1.0/events/reward). Typically, a reward is a number from 0 to 1. A negative reward, with the value of -1, is possible in certain scenarios and should only be used if you are experienced with reinforcement learning (RL). Personalizer trains the model to achieve the highest possible sum of rewards over time.
-
-Rewards are sent after the user behavior has happened, which could be days later. The maximum amount of time Personalizer will wait until an event is considered to have no reward, or a default reward, is configured with the [Reward Wait Time](#reward-wait-time) in the Azure portal.
-
-If the reward score for an event hasn't been received within the **Reward Wait Time**, then the **Default Reward** will be applied. Typically, the **[Default Reward](how-to-settings.md#configure-reward-settings-for-the-feedback-loop-based-on-use-case)** is configured to be zero.
--
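-
-For illustration, a minimal sketch of sending a reward score with the REST API might look like this; the endpoint, key, and event ID are placeholders:
-
-```python
-# Minimal sketch: send a reward score of 1.0 for a previously ranked event.
-# The endpoint, key, and event ID below are placeholders.
-import requests
-
-endpoint = "https://<your-resource-name>.cognitiveservices.azure.com"
-event_id = "<eventId used in the corresponding Rank call>"
-
-response = requests.post(
-    f"{endpoint}/personalizer/v1.0/events/{event_id}/reward",
-    headers={"Ocp-Apim-Subscription-Key": "<your-personalizer-key>"},
-    json={"value": 1.0},  # reward score computed by your business logic
-)
-response.raise_for_status()  # raises if the call failed
-```
-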
-## Behaviors and data to consider for rewards
-
-Consider these signals and behaviors for the context of the reward score:
-
-* Direct user input for suggestions when options are involved ("Do you mean X?").
-* Session length.
-* Time between sessions.
-* Sentiment analysis of the user's interactions.
-* Direct questions and mini surveys where the bot asks the user for feedback about usefulness, accuracy.
-* Response to alerts, or delay to response to alerts.
-
-## Composing reward scores
-
-A Reward score must be computed in your business logic. The score can be represented as:
-
-* A single number sent once
-* A score sent immediately (such as 0.8) and an additional score sent later (typically 0.2).
-
-## Default Rewards
-
-If no reward is received within the [Reward Wait Time](#reward-wait-time), measured from the time of the Rank call, Personalizer implicitly applies the **Default Reward** to that Rank event.
-
-## Building up rewards with multiple factors
-
-For effective personalization, you can build up the reward score based on multiple factors.
-
-For example, you could apply these rules for personalizing a list of video content:
-
-|User behavior|Partial score value|
-|--|--|
-|The user clicked on the top item.|+0.5 reward|
-|The user opened the actual content of that item.|+0.3 reward|
-|The user watched 5 minutes of the content or 30%, whichever is longer.|+0.2 reward|
-|||
-
-You can then send the total reward to the API.
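-
-For example, a minimal sketch of how your business logic might accumulate these partial scores before sending the total (the behavior names are hypothetical):
-
-```python
-# Hypothetical partial scores mirroring the table above.
-partial_scores = {
-    "clicked_top_item": 0.5,
-    "opened_content": 0.3,
-    "watched_5_min_or_30_percent": 0.2,
-}
-
-# Accumulate only the behaviors that actually happened for this event.
-observed = ["clicked_top_item", "opened_content"]
-total_reward = sum(partial_scores[behavior] for behavior in observed)
-print(total_reward)  # 0.8 -> send this value in the Reward API call
-```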
-
-## Calling the Reward API multiple times
-
-You can also call the Reward API using the same event ID, sending different reward scores. When Personalizer gets those rewards, it determines the final reward for that event by aggregating them as specified in the Personalizer configuration.
-
-Aggregation values:
-
-* **First**: Takes the first reward score received for the event, and discards the rest.
-* **Sum**: Takes all reward scores collected for the eventId, and adds them together.
-
-All rewards for an event, which are received after the **Reward Wait Time**, are discarded and do not affect the training of models.
-
-By adding up reward scores, your final reward may be outside the expected score range. This won't make the service fail.
-
-## Best Practices for calculating reward score
-
-* **Consider true indicators of successful personalization**: It is easy to think in terms of clicks, but a good reward is based on what you want your users to *achieve* instead of what you want people to *do*. For example, rewarding on clicks may lead to selecting content that is clickbait prone.
-
-* **Use a reward score for how good the personalization worked**: Personalizing a movie suggestion would hopefully result in the user watching the movie and giving it a high rating. Since the movie rating probably depends on many things (the quality of the acting, the mood of the user), it is not a good reward signal for how well *the personalization* worked. The user watching the first few minutes of the movie, however, may be a better signal of personalization effectiveness and sending a reward of 1 after 5 minutes will be a better signal.
-
-* **Rewards only apply to RewardActionID**: Personalizer applies the rewards to understand the efficacy of the action specified in RewardActionID. If you choose to display other actions and the user clicks on them, the reward should be zero.
-
-* **Consider unintended consequences**: Create reward functions that lead to responsible outcomes with [ethics and responsible use](ethics-responsible-use.md).
-
-* **Use Incremental Rewards**: Adding partial rewards for smaller user behaviors helps Personalizer achieve better rewards. This incremental reward allows the algorithm to know it's getting closer to engaging the user in the final desired behavior.
- * If you're showing a list of movies and the user hovers over the first one for a while to see more information, you can determine that some user engagement happened. That behavior can count for a reward score of 0.1.
- * If the user opened the page and then exited, the reward score can be 0.2.
-
-## Reward wait time
-
-Personalizer will correlate the information from a Rank call with the rewards sent in Reward calls, which may come at different times, to train the model. Personalizer waits for the reward score for a defined, limited time, starting when the corresponding Rank call occurred. This is done even if the Rank call was made using [deferred activation](concept-active-inactive-events.md).
-
-If the **Reward Wait Time** expires and there has been no reward information, a default reward is applied to that event for training. You can select a reward wait time of 10 minutes, 4 hours, 12 hours, or 24 hours. If your scenario requires longer reward wait times (for example, for marketing email campaigns), we offer a private preview of longer wait times. Open a support ticket in the Azure portal to get in contact with the team and see whether you qualify and whether it can be offered to you.
-
-## Best practices for reward wait time
-
-Follow these recommendations for better results.
-
-* Make the Reward Wait Time as short as you can, while leaving enough time to get user feedback.
-
-* Don't choose a duration that is shorter than the time needed to get feedback. For example, if some of your rewards come in after a user has watched 1 minute of a video, the experiment length should be at least double that.
-
-## Next steps
-
-* [Reinforcement learning](concepts-reinforcement-learning.md)
-* [Try the Rank API](https://westus2.dev.cognitive.microsoft.com/docs/services/personalizer-api/operations/Rank/console)
-* [Try the Reward API](https://westus2.dev.cognitive.microsoft.com/docs/services/personalizer-api/operations/Reward)
cognitive-services Concepts Exploration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/personalizer/concepts-exploration.md
- Title: Exploration - Personalizer-
-description: With exploration, Personalizer is able to continuously deliver good results, even as user behavior changes. Choosing an exploration setting is a business decision about the proportion of user interactions to explore with, in order to improve the model.
--
-ms.
--- Previously updated : 08/28/2022--
-# Exploration
-
-With exploration, Personalizer is able to continuously deliver good results, even as user behavior changes.
-
-When Personalizer receives a Rank call, it returns a RewardActionID that either:
-* Uses known relevance to match the most probable user behavior based on the current machine learning model.
-* Uses exploration, which does not match the action that has the highest probability in the rank.
-
-Personalizer currently uses an algorithm called *epsilon greedy* to explore.
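-
-As a conceptual illustration only (not the service's implementation), epsilon greedy can be sketched as:
-
-```python
-# Conceptual sketch of epsilon greedy; the function and values are hypothetical.
-import random
-
-def choose_action(actions, best_action, exploration_percentage=0.2):
-    """With probability `exploration_percentage`, pick a random action (explore);
-    otherwise return the model's most probable action (exploit)."""
-    if random.random() < exploration_percentage:
-        return random.choice(actions)
-    return best_action
-
-print(choose_action(["article-a", "article-b", "article-c"], best_action="article-a"))
-```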
-
-## Choosing an exploration setting
-
-You configure the percentage of traffic to use for exploration in the Azure portal's **Configuration** page for Personalizer. This setting determines the percentage of Rank calls that perform exploration.
-
-Personalizer determines whether to explore or use the model's most probable action on each Rank call. This is different from the behavior of some A/B frameworks, which lock a treatment on specific user IDs.
-
-## Best practices for choosing an exploration setting
-
-Choosing an exploration setting is a business decision about the proportion of user interactions to explore with, in order to improve the model.
-
-A setting of zero will negate many of the benefits of Personalizer. With this setting, Personalizer uses no user interactions to discover better user interactions. This leads to model stagnation, drift, and ultimately lower performance.
-
-A setting that is too high will negate the benefits of learning from user behavior. Setting it to 100% implies a constant randomization, and any learned behavior from users would not influence the outcome.
-
-It is important not to change the application behavior based on whether you see if Personalizer is exploring or using the learned best action. This would lead to learning biases that ultimately would decrease the potential performance.
-
-## Next steps
-
-[Reinforcement learning](concepts-reinforcement-learning.md)
cognitive-services Concepts Features https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/personalizer/concepts-features.md
- Title: "Features: Action and Context - Personalizer" -
-description: Personalizer uses features, information about actions and context, to make better ranking suggestions. Features can be generic, or specific to an item.
--
-ms.
--- Previously updated : 12/28/2022--
-# Context and actions
-
-Personalizer works by learning what your application should show to users in a given context. Context and actions are the two most important pieces of information that you pass into Personalizer. The **context** represents the information you have about the current user or the state of your system, and the **actions** are the options to be chosen from.
-
-## Context
-
-Information for the _context_ depends on each application and use case, but it typically may include information such as:
-
-* Demographic and profile information about your user.
-* Information extracted from HTTP headers such as user agent, or derived from HTTP information such as reverse geographic lookups based on IP addresses.
-* Information about the current time, such as day of the week, weekend or not, morning or afternoon, holiday season or not, etc.
-* Information extracted from mobile applications, such as location, movement, or battery level.
-* Historical aggregates of the behavior of users - such as what are the movie genres this user has viewed the most.
-* Information about the state of the system.
-
-Your application is responsible for loading the information about the context from the relevant databases, sensors, and systems you may have. If your context information doesn't change, you can add logic in your application to cache this information, before sending it to the Rank API.
-
-## Actions
-
-Actions represent a list of options.
-
-Don't send in more than 50 actions when ranking actions. They may be the same 50 actions every time, or they may change. For example, if you have a product catalog of 10,000 items for an e-commerce application, you may use a recommendation or filtering engine to determine the top 40 a customer may like, and use Personalizer to find the one that will generate the most reward for the current context.
-
-### Examples of actions
-
-The actions you send to the Rank API will depend on what you are trying to personalize.
-
-Here are some examples:
-
-|Purpose|Action|
-|--|--|
-|Personalize which article is highlighted on a news website.|Each action is a potential news article.|
-|Optimize ad placement on a website.|Each action will be a layout or rules to create a layout for the ads (for example, on the top, on the right, small images, large images).|
-|Display personalized ranking of recommended items on a shopping website.|Each action is a specific product.|
-|Suggest user interface elements such as filters to apply to a specific photo.|Each action may be a different filter.|
-|Choose a chat bot's response to clarify user intent or suggest an action.|Each action is an option of how to interpret the response.|
-|Choose what to show at the top of a list of search results|Each action is one of the top few search results.|
-
-### Load actions from the client application
-
-Action features typically come from content management systems, catalogs, and recommender systems. Your application is responsible for loading the information about the actions from the relevant databases and systems you have. If your actions don't change, or if loading them every time has an unnecessary performance impact, you can add logic in your application to cache this information.
-
-### Prevent actions from being ranked
-
-In some cases, there are actions that you don't want to display to users. The best way to prevent an action from being ranked is by adding it to the [Excluded Actions](/dotnet/api/microsoft.azure.cognitiveservices.personalizer.models.rankrequest.excludedactions) list, or not passing it to the Rank Request.
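-
-For example, a Rank request can list action IDs to exclude in the `excludedActions` property. The snippet below is a minimal illustration; the features and IDs are placeholder values:
-
-```json
-{
-    "contextFeatures": [
-        { "timeOfDay": "morning" }
-    ],
-    "actions": [
-        { "id": "pasta", "features": [ { "taste": "salty" } ] },
-        { "id": "ice cream", "features": [ { "taste": "sweet" } ] }
-    ],
-    "excludedActions": [ "ice cream" ]
-}
-```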
-
-In some cases, you might not want events to be trained on by default. In other words, you only want to train events when a specific condition is met. For example, the personalized part of your webpage is below the fold (users have to scroll before interacting with the personalized content). In this case you'll render the entire page, but only want an event to be trained on when the user scrolls and has a chance to interact with the personalized content. For these cases, you should [Defer Event Activation](concept-active-inactive-events.md) to avoid assigning a default reward to (and training on) events that the end user didn't have a chance to interact with.
-
-## Features
-
-Both the **context** and possible **actions** are described using **features**. The features represent all information you think is important for the decision making process to maximize rewards. A good starting point is to imagine you're tasked with selecting the best action at each timestamp and ask yourself: "What information do I need to make an informed decision? What information do I have available to describe the context and each possible action?" Features can be generic, or specific to an item.
-
-Personalizer does not prescribe, limit, or fix what features you can send for actions and context:
-
-* Over time, you may add and remove features about context and actions. Personalizer continues to learn from available information.
-* For categorical features, there's no need to pre-define the possible values.
-* For numeric features, there's no need to pre-define ranges.
-* Feature names starting with an underscore `_` will be ignored.
-* The list of features can be large (hundreds), but we recommend starting with a concise feature set and expanding as necessary.
-* **action** features may or may not have any correlation with **context** features.
-* If the value of a specific feature is not available for a given request, omit the feature for that request rather than sending it empty.
-* Avoid sending features with a null value. A null value is processed as the string "null", which is undesirable.
-
-It's ok and natural for features to change over time. However, keep in mind that Personalizer's machine learning model adapts based on the features it sees. If you send a request containing all new features, Personalizer's model won't be able to use past events to select the best action for the current event. Having a 'stable' feature set (with recurring features) will help the performance of Personalizer's machine learning algorithms.
-
-### Context features
-* Some context features may only be available part of the time. For example, if a user is logged into the online grocery store website, the context will contain features describing purchase history. These features won't be available for a guest user.
-* There must be at least one context feature. Personalizer does not support an empty context.
-* If the context features are identical for every request, Personalizer will choose the globally best action.
-
-### Action features
-* Not all actions need to contain the same features. For example, in the online grocery store scenario, microwavable popcorn will have a "cooking time" feature, while a cucumber won't.
-* Features for a certain action ID may be available one day, but later on become unavailable.
-
-Examples:
-
-The following are good examples of action features. They will depend a lot on each application.
-
-* Features with characteristics of the actions. For example, is it a movie or a TV series?
-* Features about how users may have interacted with this action in the past. For example, this movie is mostly seen by people in demographics A or B, it's typically played no more than one time.
-* Features about the characteristics of how the user *sees* the actions. For example, does the poster for the movie shown in the thumbnail include faces, cars, or landscapes?
-
-## Supported feature types
-
-Personalizer supports features of string, numeric, and boolean types. It's likely that your application will mostly use string features, with a few exceptions.
-
-### How feature types affect machine learning in Personalizer
-
-* **Strings**: For string types, every key-value (feature name, feature value) combination is treated as a One-Hot feature (for example, category:"Produce" and category:"Meat" would internally be represented as different features in the machine learning model).
-* **Numeric**: Only use numeric values when the number is a magnitude that should proportionally affect the personalization result. This is very scenario dependent. Features that are based on numeric units but where the meaning isn't linear - such as Age, Temperature, or Person Height - are best encoded as categorical strings (see the sketch after this list). For example, Age could be encoded as "Age":"0-5", "Age":"6-10", etc. Height could be bucketed as "Height": "<5'0", "Height": "5'0-5'4", "Height": "5'5-5'11", "Height": "6'0-6'4", "Height": ">6'4".
-* **Boolean**
-* **Arrays**: Only numeric arrays are supported.
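-
-As an illustration of the bucketing approach described for numeric features, here's a short sketch; the bucket boundaries and feature names are examples only:
-
-```python
-def age_bucket(age: int) -> str:
-    """Encode a non-linear numeric value as a categorical string feature."""
-    if age <= 5:
-        return "0-5"
-    if age <= 10:
-        return "6-10"
-    if age <= 17:
-        return "11-17"
-    if age <= 34:
-        return "18-34"
-    return "35+"
-
-# The bucketed value is sent as a string feature, not a number:
-context_features = [{"user": {"age": age_bucket(29)}}]  # -> {"age": "18-34"}
-```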
-
-## Feature engineering
-
-* Use categorical and string types for features that are not a magnitude.
-* Make sure there are enough features to drive personalization. The more precisely targeted the content needs to be, the more features are needed.
-* There are features of diverse *densities*. A feature is *dense* if many items are grouped in a few buckets. For example, thousands of videos can be classified as "Long" (over 5 min long) and "Short" (under 5 min long). This is a *very dense* feature. On the other hand, the same thousands of items can have an attribute called "Title", which will almost never have the same value from one item to another. This is a very non-dense or *sparse* feature.
-
-Having features of high density helps Personalizer extrapolate learning from one item to another. But if there are only a few features and they are too dense, Personalizer will try to precisely target content with only a few buckets to choose from.
-
-### Common issues with feature design and formatting
-
-* **Sending features with high cardinality**, that is, features with unique values that are unlikely to repeat over many events. For example, PII specific to one individual (such as name, phone number, credit card number, IP address) shouldn't be used with Personalizer.
-* **Sending user IDs.** With large numbers of users, it's unlikely that this information is relevant to Personalizer learning to maximize the average reward score. Sending user IDs (even if non-PII) will likely add more noise to the model and is not recommended.
-* **Sending unique values that will rarely occur more than a few times.** It's recommended to bucket your features to a higher level of detail. For example, having features such as `"Context.TimeStamp.Day":"Monday"` or `"Context.TimeStamp.Hour":13` can be useful as there are only 7 and 24 unique values, respectively. However, `"Context.TimeStamp":"1985-04-12T23:20:50.52Z"` is very precise and has an extremely large number of unique values, which makes it very difficult for Personalizer to learn from. A sketch of this kind of bucketing follows this list.
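-
-A minimal sketch of deriving the coarser time features mentioned above from a precise timestamp (the feature names follow the examples in this section):
-
-```python
-from datetime import datetime
-
-def time_features(timestamp: str) -> dict:
-    """Replace a high-cardinality timestamp with low-cardinality day and hour features."""
-    parsed = datetime.fromisoformat(timestamp)
-    return {
-        "Context.TimeStamp.Day": parsed.strftime("%A"),  # e.g. "Friday" (only 7 values)
-        "Context.TimeStamp.Hour": parsed.hour,           # 0-23 (only 24 values)
-    }
-
-print(time_features("1985-04-12T23:20:50"))
-# {'Context.TimeStamp.Day': 'Friday', 'Context.TimeStamp.Hour': 23}
-```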
-
-### Improve feature sets
-
-Analyze the user behavior by running a [Feature Evaluation Job](how-to-feature-evaluation.md). This allows you to look at past data to see what features are heavily contributing to positive rewards versus those that are contributing less. You can see what features are helping, and it will be up to you and your application to find better features to send to Personalizer to improve results even further.
-
-### Expand feature sets with artificial intelligence and cognitive services
-
-Artificial Intelligence and ready-to-run Cognitive Services can be a very powerful addition to Personalizer.
-
-By preprocessing your items using artificial intelligence services, you can automatically extract information that is likely to be relevant for personalization.
-
-For example:
-
-* You can run a movie file via [Video Indexer](https://azure.microsoft.com/services/media-services/video-indexer/) to extract scene elements, text, sentiment, and many other attributes. These attributes can then be made more dense to reflect characteristics that the original item metadata didn't have.
-* Images can be run through object detection, faces through emotion detection, and so on.
-* Information in text can be augmented by extracting entities, sentiment, and expanding entities with Bing knowledge graph.
-
-You can use several other [Azure Cognitive Services](https://www.microsoft.com/cognitive-services), like
-
-* [Entity Linking](../text-analytics/index.yml)
-* [Language service](../language-service/index.yml)
-* [Emotion](../face/overview.md)
-* [Computer Vision](../computer-vision/overview.md)
-
-### Use embeddings as features
-
-Embeddings from various machine learning models have proven to be effective features for Personalizer:
-
-* Embeddings from Large Language Models
-* Embeddings from Computer Vision Models
-
-## Namespaces
-
-Optionally, features can be organized using namespaces (relevant for both context and action features). Namespaces can be used to group features by topic, by source, or any other grouping that makes sense in your application. You determine if namespaces are used and what they should be. Namespaces organize features into distinct sets, and disambiguate features with similar names. You can think of namespaces as a 'prefix' that is added to feature names. Namespaces should not be nested.
-
-The following are examples of feature namespaces used by applications:
-
-* User_Profile_from_CRM
-* Time
-* Mobile_Device_Info
-* http_user_agent
-* VideoResolution
-* DeviceInfo
-* Weather
-* Product_Recommendation_Ratings
-* current_time
-* NewsArticle_TextAnalytics
-
-### Namespace naming conventions and guidelines
-
-* Namespaces should not be nested.
-* Namespaces must start with unique ASCII characters (we recommend using namespace names that are UTF-8 based). Currently, namespaces with the same first character could result in collisions, so it's strongly recommended that your namespaces start with characters that are distinct from each other.
-* Namespaces are case sensitive. For example `user` and `User` will be considered different namespaces.
-* Feature names can be repeated across namespaces, and will be treated as separate features.
-* The following characters cannot be used: codes < 32 (not printable), 32 (space), 58 (colon), 124 (pipe), and 126–140.
-* All namespaces starting with an underscore `_` will be ignored.
-
-## JSON examples
-
-### Actions
-When calling Rank, you will send multiple actions to choose from:
-
-JSON objects can include nested JSON objects and simple property/values. An array can be included only if the array items are numbers.
-
-```json
-{
- "actions": [
- {
- "id": "pasta",
- "features": [
- {
- "taste": "salty",
- "spiceLevel": "medium",
- "grams": [400,800]
- },
- {
- "nutritionLevel": 5,
- "cuisine": "italian"
- }
- ]
- },
- {
- "id": "ice cream",
- "features": [
- {
- "taste": "sweet",
- "spiceLevel": "none",
- "grams": [150, 300, 450]
- },
- {
- "nutritionalLevel": 2
- }
- ]
- },
- {
- "id": "juice",
- "features": [
- {
- "taste": "sweet",
- "spiceLevel": "none",
- "grams": [300, 600, 900]
- },
- {
- "nutritionLevel": 5
- },
- {
- "drink": true
- }
- ]
- },
- {
- "id": "salad",
- "features": [
- {
- "taste": "salty",
- "spiceLevel": "low",
- "grams": [300, 600]
- },
- {
- "nutritionLevel": 8
- }
- ]
- }
- ]
-}
-```
-
-### Context
-
-Context is expressed as a JSON object that is sent to the Rank API:
-
-JSON objects can include nested JSON objects and simple property/values. An array can be included only if the array items are numbers.
-
-```JSON
-{
- "contextFeatures": [
- {
- "state": {
- "timeOfDay": "noon",
- "weather": "sunny"
- }
- },
- {
- "device": {
- "mobile":true,
- "Windows":true,
- "screensize": [1680,1050]
- }
- }
- ]
-}
-```
-
-### Namespaces
-
-In the following JSON, `user`, `environment`, `device`, and `activity` are namespaces.
-
-> [!Note]
-> We strongly recommend using names for feature namespaces that are UTF-8 based and start with different letters. For example, `user`, `environment`, `device`, and `activity` start with `u`, `e`, `d`, and `a`. Currently, namespaces with the same first character could result in collisions.
-
-```JSON
-{
- "contextFeatures": [
- {
- "user": {
- "profileType":"AnonymousUser",
- "Location": "New York, USA"
- }
- },
- {
- "environment": {
- "monthOfYear": "8",
- "timeOfDay": "Afternoon",
- "weather": "Sunny"
- }
- },
- {
- "device": {
- "mobile":true,
- "Windows":true
- }
- },
- {
- "activity" : {
- "itemsInCart": "3-5",
- "cartValue": "250-300",
- "appliedCoupon": true
- }
- }
- ]
-}
-```
-
-## Next steps
-
-[Reinforcement learning](concepts-reinforcement-learning.md)
cognitive-services Concepts Offline Evaluation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/personalizer/concepts-offline-evaluation.md
- Title: Use the Offline Evaluation method - Personalizer
-description: This article will explain how to use offline evaluation to measure effectiveness of your app and analyze your learning loop.
- Previously updated : 02/20/2020
-# Offline evaluation
-
-Offline evaluation is a method that allows you to test and assess the effectiveness of the Personalizer Service without changing your code or affecting user experience. Offline evaluation uses past data, sent from your application to the Rank and Reward APIs, to compare how different ranks have performed.
-
-Offline evaluation is performed on a date range. The range can finish as late as the current time. The beginning of the range can't be further in the past than the number of days specified for [data retention](how-to-settings.md).
-
-Offline evaluation can help you answer the following questions:
-
-* How effective are Personalizer ranks for successful personalization?
- * What are the average rewards achieved by the Personalizer online machine learning policy?
- * How does Personalizer compare to the effectiveness of what the application would have done by default?
- * What would have been the comparative effectiveness of a random choice for Personalization?
- * What would have been the comparative effectiveness of different learning policies specified manually?
-* Which features of the context are contributing more or less to successful personalization?
-* Which features of the actions are contributing more or less to successful personalization?
-
-In addition, Offline Evaluation can be used to discover more optimized learning policies that Personalizer can use to improve results in the future.
-
-Offline evaluations do not provide guidance as to the percentage of events to use for exploration.
-
-## Prerequisites for offline evaluation
-
-The following are important considerations for the representative offline evaluation:
-
-* Have enough data. The recommended minimum is at least 50,000 events.
-* Collect data from periods with representative user behavior and traffic.
-
-## Discovering the optimized learning policy
-
-Personalizer can use the offline evaluation process to discover a more optimal learning policy automatically.
-
-After performing the offline evaluation, you can see the comparative effectiveness of Personalizer with that new policy compared to the current online policy. You can then apply that learning policy to make it effective immediately in Personalizer, by downloading it and uploading it in the Models and Policy panel. You can also download it for future analysis or use.
-
-Current policies included in the evaluation:
-
-| Learning settings | Purpose|
-|--|--|
-|**Online Policy**| The current Learning Policy used in Personalizer |
-|**Baseline**|The application's default (as determined by the first Action sent in Rank calls)|
-|**Random Policy**|An imaginary Rank behavior that always returns random choice of Actions from the supplied ones.|
-|**Custom Policies**|Additional Learning Policies uploaded when starting the evaluation.|
-|**Optimized Policy**|If the evaluation was started with the option to discover an optimized policy, it will also be compared, and you will be able to download it or make it the online learning policy, replacing the current one.|
-
-## Understanding the relevance of offline evaluation results
-
-When you run an offline evaluation, it is very important to analyze _confidence bounds_ of the results. If they are wide, it means your application hasn't received enough data for the reward estimates to be precise or significant. As the system accumulates more data, and you run offline evaluations over longer periods, the confidence intervals become narrower.
-
-## How offline evaluations are done
-
-Offline Evaluations are done using a method called **Counterfactual Evaluation**.
-
-Personalizer is built on the assumption that users' behavior (and thus rewards) is impossible to predict retrospectively (Personalizer can't know what would have happened if the user had been shown something different from what they did see), so it learns only from measured rewards.
-
-This is the conceptual process used for evaluations:
-
-```
-[For a given learning policy, such as the online learning policy, uploaded learning policies, or optimized candidate policies]:
-{
- Initialize a virtual instance of Personalizer with that policy and a blank model;
-
- [For every chronological event in the logs]
- {
- - Perform a Rank call
-
- - Compare the reward of the results against the logged user behavior.
- - If they match, train the model on the observed reward in the logs.
- - If they don't match, then what the user would have done is unknown, so the event is discarded and not used for training or measurement.
-
- }
-
- Add up the rewards and statistics that were predicted, do some aggregation to aid visualizations, and save the results.
-}
-```
-
-The offline evaluation only uses observed user behavior. This process discards large volumes of data, especially if your application does Rank calls with large numbers of actions.
--
-## Evaluation of features
-
-Offline evaluations can provide information about how much specific features of actions or context contribute to higher rewards. The information is computed using the evaluation against the given time period and data, and may vary with time.
-
-We recommend looking at feature evaluations and asking:
-
-* What additional features could your application or system provide, along the lines of those that are more effective?
-* What features can be removed due to low effectiveness? Low effectiveness features add _noise_ into the machine learning.
-* Are there any features that are accidentally included? Examples of these are: user identifiable information, duplicate IDs, etc.
-* Are there any undesirable features that shouldn't be used to personalize due to regulatory or responsible use considerations? Are there features that could proxy (that is, closely mirror or correlate with) undesirable features?
--
-## Next steps
-
-[Configure Personalizer](how-to-settings.md)
-[Run Offline Evaluations](how-to-offline-evaluation.md)
-Understand [How Personalizer Works](how-personalizer-works.md)
cognitive-services Concepts Reinforcement Learning https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/personalizer/concepts-reinforcement-learning.md
- Title: Reinforcement Learning - Personalizer
-description: Personalizer uses information about actions and current context to make better ranking suggestions. The information about these actions and context are attributes or properties that are referred to as features.
- Previously updated : 05/07/2019
-# What is Reinforcement Learning?
-
-Reinforcement Learning is an approach to machine learning that learns behaviors by getting feedback from its use.
-
-Reinforcement Learning works by:
-
-* Providing an opportunity or degree of freedom to enact a behavior - such as making decisions or choices.
-* Providing contextual information about the environment and choices.
-* Providing feedback about how well the behavior achieves a certain goal.
-
-While there are many subtypes and styles of reinforcement learning, this is how the concept works in Personalizer:
-
-* Your application provides the opportunity to show one piece of content from a list of alternatives.
-* Your application provides information about each alternative and the context of the user.
-* Your application computes a _reward score_.
-
-Unlike some approaches to reinforcement learning, Personalizer doesn't require a simulation to work in. Its learning algorithms are designed to react to an outside world (versus control it) and learn from each data point with an understanding that it's a unique opportunity that cost time and money to create, and that there's a non-zero regret (loss of possible reward) if suboptimal performance happens.
-
-## What type of reinforcement learning algorithms does Personalizer use?
-
-The current version of Personalizer uses **contextual bandits**, an approach to reinforcement learning that is framed around making decisions or choices between discrete actions, in a given context.
-
-The _decision memory_, the model that has been trained to capture the best possible decision, given a context, uses a set of linear models. These have repeatedly shown business results and are a proven approach, partially because they can learn from the real world very rapidly without needing multi-pass training, and partially because they can complement supervised learning models and deep neural network models.
-
-The explore / best action traffic allocation is made randomly following the percentage set for exploration, and the default algorithm for exploration is epsilon-greedy.
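-
-Conceptually, an epsilon-greedy allocation can be sketched as follows. This is only an illustration of the idea, not Personalizer's internal implementation:
-
-```python
-import random
-
-def choose_action(actions, best_action, exploration_percentage=0.2):
-    """Epsilon-greedy: explore with some probability, otherwise exploit the model's best action."""
-    if random.random() < exploration_percentage:
-        return random.choice(actions)  # explore: try an alternative action
-    return best_action                 # exploit: use the learned best action
-```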
-
-### History of Contextual Bandits
-
-John Langford coined the name Contextual Bandits (Langford and Zhang [2007]) to describe a tractable subset of reinforcement learning and has worked on a half-dozen papers improving our understanding of how to learn in this paradigm:
-
-* Beygelzimer et al. [2011]
-* Dudík et al. [2011a, b]
-* Agarwal et al. [2014, 2012]
-* Beygelzimer and Langford [2009]
-* Li et al. [2010]
-
-John has also given several tutorials on topics such as Joint Prediction (ICML 2015), Contextual Bandit Theory (NIPS 2013), Active Learning (ICML 2009), and Sample Complexity Bounds (ICML 2003).
-
-## What machine learning frameworks does Personalizer use?
-
-Personalizer currently uses [Vowpal Wabbit](https://github.com/VowpalWabbit/vowpal_wabbit/wiki) as the foundation for the machine learning. This framework allows for maximum throughput and lowest latency when making personalization ranks and training the model with all events.
-
-## References
-
-* [Making Contextual Decisions with Low Technical Debt](https://arxiv.org/abs/1606.03966)
-* [A Reductions Approach to Fair Classification](https://arxiv.org/abs/1803.02453)
-* [Efficient Contextual Bandits in Non-stationary Worlds](https://arxiv.org/abs/1708.01799)
-* [Residual Loss Prediction: Reinforcement Learning With No Incremental Feedback](https://openreview.net/pdf?id=HJNMYceCW)
-* [Mapping Instructions and Visual Observations to Actions with Reinforcement Learning](https://arxiv.org/abs/1704.08795)
-* [Learning to Search Better Than Your Teacher](https://arxiv.org/abs/1502.02206)
-
-## Next steps
-
-[Offline evaluation](concepts-offline-evaluation.md)
cognitive-services Concepts Scalability Performance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/personalizer/concepts-scalability-performance.md
- Title: Scalability and Performance - Personalizer
-description: "High-performance and high-traffic websites and applications have two main factors to consider with Personalizer for scalability and performance: latency and training throughput."
- Previously updated : 10/24/2019
-# Scalability and Performance
-
-High-performance and high-traffic websites and applications have two main factors to consider with Personalizer for scalability and performance:
-
-* Keeping low latency when making Rank API calls
-* Making sure training throughput keeps up with event input
-
-Personalization can return a rank rapidly, with most of the call duration dedicated to communication through the REST API. Azure autoscales to respond to requests rapidly.
-
-## Low-latency scenarios
-
-Some applications require low latencies when returning a rank. Low latencies are necessary:
-
-* To keep the user from waiting a noticeable amount of time before displaying ranked content.
-* To help a server that is experiencing extreme traffic avoid tying up scarce compute time and network connections.
--
-## Scalability and training throughput
-
-Personalizer works by updating a model that is retrained based on messages sent asynchronously by Personalizer after Rank and Reward API calls. These messages are sent using an Azure Event Hub for the application.
-
-It's unlikely that most applications will reach the maximum joining and training throughput of Personalizer. While reaching this maximum won't slow down the application, it would imply that Event Hub queues are getting filled internally faster than they can be drained.
-
-## How to estimate your throughput requirements
-
-* Estimate the average number of bytes per ranking event by adding the lengths of the context and action JSON documents.
-* Divide 20 MB/sec by this estimated average event size.
-
-For example, if your average payload has 500 features and each is an estimated 20 characters, then each event is approximately 10 KB. With these estimates, 20,000,000 / 10,000 = 2,000 events/sec, which is about 173 million events/day.
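-
-The same estimate expressed as a short calculation (the 20 MB/sec figure comes from the step above, and the payload sizes are the example values):
-
-```python
-avg_features_per_event = 500
-avg_chars_per_feature = 20                                         # rough name + value length
-bytes_per_event = avg_features_per_event * avg_chars_per_feature   # ~10,000 bytes
-
-throughput_bytes_per_sec = 20_000_000                        # 20 MB/sec
-events_per_sec = throughput_bytes_per_sec / bytes_per_event  # 2,000 events/sec
-events_per_day = events_per_sec * 60 * 60 * 24               # ~173 million events/day
-
-print(f"{events_per_sec:,.0f} events/sec, about {events_per_day / 1e6:,.0f} million events/day")
-```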
-
-If you're reaching these limits, please contact our support team for architecture advice.
-
-## Next steps
-
-[Create and configure Personalizer](how-to-settings.md).
cognitive-services Encrypt Data At Rest https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/personalizer/encrypt-data-at-rest.md
- Title: Data-at-rest encryption in Personalizer
-description: Learn about the keys that you use for data-at-rest encryption in Personalizer. See how to use Azure Key Vault to configure customer-managed keys.
- Previously updated : 06/02/2022
-#Customer intent: As a user of the Personalizer service, I want to learn how encryption at rest works.
-# Encryption of data at rest in Personalizer
-
-Personalizer is a service in Azure Cognitive Services that uses a machine learning model to provide apps with user-tailored content. When Personalizer persists data to the cloud, it encrypts that data. This encryption protects your data and helps you meet organizational security and compliance commitments.
--
-> [!IMPORTANT]
-> Customer-managed keys are only available with the E0 pricing tier. To request the ability to use customer-managed keys, fill out and submit the [Personalizer Service Customer-Managed Key Request Form](https://aka.ms/cogsvc-cmk). It takes approximately 3-5 business days to hear back about the status of your request. If demand is high, you might be placed in a queue and approved when space becomes available.
->
-> After you're approved to use customer-managed keys with Personalizer, create a new Personalizer resource and select E0 as the pricing tier. After you've created that resource, you can use Azure Key Vault to set up your managed identity.
--
-## Next steps
-
-* [Personalizer Service Customer-Managed Key Request Form](https://aka.ms/cogsvc-cmk)
-* [Learn more about Azure Key Vault](../../key-vault/general/overview.md)
cognitive-services How Personalizer Works https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/personalizer/how-personalizer-works.md
- Title: How Personalizer Works - Personalizer
-description: The Personalizer _loop_ uses machine learning to build the model that predicts the top action for your content. The model is trained exclusively on your data that you sent to it with the Rank and Reward calls.
- Previously updated : 02/18/2020
-# How Personalizer works
-
-The Personalizer resource, your _learning loop_, uses machine learning to build the model that predicts the top action for your content. The model is trained exclusively on the data that you send to it with the **Rank** and **Reward** calls. Each loop is completely independent of the others.
-
-## Rank and Reward APIs impact the model
-
-You send _actions with features_ and _context features_ to the Rank API. The **Rank** API decides to use either:
-
-* _Exploit_: The current model to decide the best action based on past data.
-* _Explore_: Select a different action instead of the top action. You [configure this percentage](how-to-settings.md#configure-exploration-to-allow-the-learning-loop-to-adapt) for your Personalizer resource in the Azure portal.
-
-You determine the reward score and send that score to the Reward API. The **Reward** API:
-
-* Collects data to train the model by recording the features and reward scores of each rank call.
-* Uses that data to update the model based on the configuration specified in the _Learning Policy_.
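-
-For example, the Reward call references the event ID returned by Rank and carries a single score in its body. The snippet below is a minimal illustration; how you compute the score is up to your application's business logic:
-
-```json
-{
-    "value": 1.0
-}
-```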
-
-## Your system calling Personalizer
-
-The following image shows the architectural flow of calling the Rank and Reward calls:
-
-![alt text](./media/how-personalizer-works/personalization-how-it-works.png "How Personalization Works")
-
-1. You send _actions with features_ and _context features_ to the Rank API.
-
- * Personalizer decides whether to exploit the current model or explore new choices for the model.
- * The ranking result is sent to EventHub.
-1. The top rank is returned to your system as _reward action ID_.
- Your system presents that content and determines a reward score based on your own business rules.
-1. Your system returns the reward score to the learning loop.
- * When Personalizer receives the reward, the reward is sent to EventHub.
- * The rank and reward are correlated.
- * The AI model is updated based on the correlation results.
- * The inference engine is updated with the new model.
-
-## Personalizer retrains your model
-
-Personalizer retrains your model based on your **Model frequency update** setting on your Personalizer resource in the Azure portal.
-
-Personalizer uses all the data currently retained, based on the **Data retention** setting in number of days on your Personalizer resource in the Azure portal.
-
-## Research behind Personalizer
-
-Personalizer is based on cutting-edge science and research in the area of [Reinforcement Learning](concepts-reinforcement-learning.md) including papers, research activities, and ongoing areas of exploration in Microsoft Research.
-
-## Next steps
-
-Learn about [top scenarios](where-can-you-use-personalizer.md) for Personalizer
cognitive-services How To Feature Evaluation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/personalizer/how-to-feature-evaluation.md
- Title: Personalizer feature evaluations
-description: When you run a Feature Evaluation in your Personalizer resource from the Azure portal, Personalizer creates a report containing Feature Scores, a measure of how influential each feature was to the model during the evaluation period.
- Previously updated : 09/22/2022
-# Evaluate feature importances
-
-You can assess how important each feature was to Personalizer's machine learning model by conducting a _feature evaluation_ on your historical log data. Feature evaluations are useful to:
-
-* Understand which features are most or least important to the model.
-* Brainstorm extra features that may be beneficial to learning, by deriving inspiration from what features are currently important in the model.
-* Identify potentially unimportant or non-useful features that should be considered for further analysis or removal.
-* Troubleshoot common problems and errors that may occur when designing features and sending them to Personalizer. For example, using GUIDs, timestamps, or other features that are generally _sparse_ may be problematic. Learn more about [improving features](concepts-features.md).
-
-## What is a feature evaluation?
-
-Feature evaluations are conducted by training and running a copy of your current model configuration on historically collected log data in a specified time period. Features are ignored one at a time to measure the difference in model performance with and without each feature. Because the feature evaluations are performed on historical data, there's no guarantee that these patterns will be observed in future data. However, these insights may still be relevant to future data if your logged data has captured sufficient variability or non-stationary properties of your data. Your current model's performance isn't affected by running a feature evaluation.
-
-A _feature importance_ score is a measure of the relative impact of the feature on the reward over the evaluation period. Feature importance scores are a number between 0 (least important) and 100 (most important) and are shown in the feature evaluation. Since the evaluation is run over a specific time period, the feature importances can change as additional data is sent to Personalizer and as your users, scenarios, and data change over time.
-
-## Creating a feature evaluation
-
-To obtain feature importance scores, you must create a feature evaluation over a period of logged data to generate a report containing the feature importance scores. This report is viewable in the Azure portal. To create a feature evaluation:
-
-1. Go to the [Azure portal](https://portal.azure.com) website
-1. Select your Personalizer resource
-1. Select the _Monitor_ section from the side navigation pane
-1. Select the _Features_ tab
-1. Select "Create report" and a new screen should appear
-1. Choose a name for your report
-1. Choose _start_ and _end_ times for your evaluation period
-1. Select "Create report"
-
-![Screenshot that shows how to create a Feature Evaluation in your Personalizer resource by clicking on "Monitor" blade, the "Feature" tab, then "Create a report".](media/feature-evaluation/create-report.png)
--
-![Screenshot that shows in the creation window and how to fill in the fields for your report including the name, start date, and end date.](media/feature-evaluation/create-report-window.png)
-
-Next, your report name should appear in the reports table below. Creating a feature evaluation is a long running process, where the time to completion depends on the volume of data sent to Personalizer during the evaluation period. While the report is being generated, the _Status_ column will indicate "Running" for your evaluation, and will update to "Succeeded" once completed. Check back periodically to see if your evaluation has finished.
-
-You can run multiple feature evaluations over various periods of time that your Personalizer resource has log data. Make sure that your [data retention period](how-to-settings.md#data-retention) is set sufficiently long to enable you to perform evaluations over older data.
-
-## Interpreting feature importance scores
-
-### Features with a high importance score
-
-Features with higher importance scores were more influential to the model during the evaluation period as compared to the other features. Important features can provide inspiration for designing additional features to be included in the model. For example, if you see the context features "IsWeekend" or "IsWeekday" have high importance for grocery shopping, it may be the case that holidays or long-weekends may also be important factors, so you may want to consider adding features that capture this information.
-
-### Features with a low importance score
-
-Features with low importance scores are good candidates for further analysis. Not all low scoring features are necessarily _bad_ or not useful, as low scores can occur for one or more of several reasons. The list below can help you get started with analyzing why your features may have low scores:
-
-* The feature was rarely observed in the data during the evaluation period.
- * If the number of occurrences of this feature is low in comparison to other features, this may indicate that feature wasn't present often enough for the model to determine if it's valuable or not.
-* The feature values didn't have a lot of diversity or variation.
- * If the number of unique values for this feature is lower than you would expect, this may indicate that the feature didn't vary much during the evaluation period and won't provide significant insight.
-
-* The feature values were too noisy (random), or too distinct, and provided little value.
- * Check the _Number of unique values_ in your feature evaluation. If the number of unique values for this feature is higher than you expected, or high in comparison to other features, this may indicate that the feature was too noisy during the evaluation period.
-* There's a data or formatting issue.
- * Check to make sure the features are formatted and sent to Personalizer in the way you expect.
-* The feature may not be valuable to model learning and performance if the feature score is low and the reasons above do not apply.
- * Consider removing the feature as it's not helping your model maximize the average reward.
-
-Removing features with low importance scores can help speed up model training by reducing the amount of data needed to learn. It can also potentially improve the performance of the model. However, this isn't guaranteed and further analysis may be needed. [Learn more about designing context and action features.](concepts-features.md)
-
-### Common issues and steps to improve features
-
-- **Sending features with high cardinality.** Features with high cardinality are those that have many distinct values that are not likely to repeat over many events. For example, personal information specific to one individual (such as name, phone number, credit card number, IP address) shouldn't be used with Personalizer.
-
-- **Sending user IDs.** With large numbers of users, it's unlikely that this information is relevant to Personalizer learning to maximize the average reward score. Sending user IDs (even if not personal information) will likely add more noise to the model and isn't recommended.
-
-- **Features are too sparse. Values are distinct and rarely occur more than a few times.** Precise timestamps down to the second can be very sparse. They can be made more dense (and therefore, more effective) by grouping times into "morning", "midday", or "afternoon", for example.
-Location information also typically benefits from creating broader classifications. For example, latitude-longitude coordinates such as Lat: 47.67402° N, Long: 122.12154° W are too precise and force the model to learn latitude and longitude as distinct dimensions. When you're trying to personalize based on location information, it helps to group location information into larger sectors. An easy way to do that is to choose an appropriate rounding precision for the lat-long numbers, and combine latitude and longitude into "areas" by making them one string. For example, a good way to represent Lat: 47.67402° N, Long: 122.12154° W in regions approximately a few kilometers wide would be "location":"47.7, -122.1" (see the sketch after this list).
-
-- **Expand feature sets with extrapolated information.**
-You can also get more features by thinking of unexplored attributes that can be derived from information you already have. For example, in a fictitious movie list personalization, is it possible that a weekend vs weekday elicits different behavior from users? Time could be expanded to have a "weekend" or "weekday" attribute. Do national/regional cultural holidays drive attention to certain movie types? For example, a "Halloween" attribute is useful in places where it's relevant. Is it possible that rainy weather has significant impact on the choice of a movie for many people? With time and place, a weather service could provide that information and you can add it as an extra feature.
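-
-A small sketch of the location-rounding approach described earlier in this list (one decimal place of precision is just an example; pick the precision that matches the sector size you need):
-
-```python
-def location_area(lat: float, long: float, precision: int = 1) -> str:
-    """Combine rounded latitude and longitude into a single categorical 'area' feature."""
-    return f"{round(lat, precision)}, {round(long, precision)}"
-
-context_features = [{"location": location_area(47.67402, -122.12154)}]  # {'location': '47.7, -122.1'}
-```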
--
-## Next steps
-
-[Analyze policy performances with an offline evaluation](how-to-offline-evaluation.md) with Personalizer.
-
cognitive-services How To Inference Explainability https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/personalizer/how-to-inference-explainability.md
- Title: "How-to: Use Inference Explainability" -
-description: Personalizer can return feature scores in each Rank call to provide insight on what features are important to the model's decision.
--
-ms.
--- Previously updated : 09/20/2022--
-# Inference Explainability
-Personalizer can help you to understand which features of a chosen action are the most and least influential to the model during inference. When enabled, inference explainability includes feature scores from the underlying model in the Rank API response, so your application receives this information at the time of inference.
-
-Feature scores empower you to better understand the relationship between features and the decisions made by Personalizer. They can be used to provide insight to your end-users into why a particular recommendation was made, or to analyze whether your model is exhibiting bias toward or against certain contextual settings, users, and actions.
-
-## How do I enable inference explainability?
-
-Setting the service configuration flag IsInferenceExplainabilityEnabled in your service configuration enables Personalizer to include feature values and weights in the Rank API response. To update your current service configuration, use the [Service Configuration - Update API](/rest/api/personalizer/1.1preview1/service-configuration/update?tabs=HTTP). In the JSON request body, include your current service configuration and add the additional entry: `"IsInferenceExplainabilityEnabled": true`. If you don't know your current service configuration, you can obtain it from the [Service Configuration - Get API](/rest/api/personalizer/1.1preview1/service-configuration/get?tabs=HTTP).
-
-```JSON
-{
- "rewardWaitTime": "PT10M",
- "defaultReward": 0,
- "rewardAggregation": "earliest",
- "explorationPercentage": 0.2,
- "modelExportFrequency": "PT5M",
- "logMirrorEnabled": true,
- "logMirrorSasUri": "https://testblob.blob.core.windows.net/container?se=2020-08-13T00%3A00Z&sp=rwl&spr=https&sv=2018-11-09&sr=c&sig=signature",
- "logRetentionDays": 7,
- "lastConfigurationEditDate": "0001-01-01T00:00:00Z",
- "learningMode": "Online",
- "isAutoOptimizationEnabled": true,
- "autoOptimizationFrequency": "P7D",
- "autoOptimizationStartDate": "2019-01-19T00:00:00Z",
-"isInferenceExplainabilityEnabled": true
-}
-```
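-
-As an illustration, the update could be sent with a simple HTTP call. The URL path and API version below are assumptions based on the preview API reference linked above, and the endpoint and key are placeholders, so verify them against your own resource:
-
-```python
-import requests
-
-ENDPOINT = "https://<your-resource-name>.cognitiveservices.azure.com"  # placeholder
-HEADERS = {"Ocp-Apim-Subscription-Key": "<your-personalizer-key>"}     # placeholder
-
-# Assumed path for the Service Configuration - Get/Update operations (1.1-preview.1).
-url = f"{ENDPOINT}/personalizer/v1.1-preview.1/configurations/service"
-
-config = requests.get(url, headers=HEADERS).json()  # fetch the current configuration
-config["isInferenceExplainabilityEnabled"] = True    # add or update the flag
-requests.put(url, json=config, headers=HEADERS).raise_for_status()
-```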
-
-> [!NOTE]
-> Enabling inference explainability will significantly increase the latency of calls to the Rank API. We recommend experimenting with this capability and measuring the latency in your scenario to see if it satisfies your application's latency requirements.
--
-## How to interpret feature scores?
-Enabling inference explainability will add a collection to the JSON response from the Rank API called *inferenceExplanation*. This contains a list of feature names and values that were submitted in the Rank request, along with feature scores learned by Personalizer's underlying model. The feature scores provide you with insight on how influential each feature was in the model choosing the action.
-
-```JSON
-
-{
- "ranking": [
- {
- "id": "EntertainmentArticle",
- "probability": 0.8
- },
- {
- "id": "SportsArticle",
- "probability": 0.15
- },
- {
- "id": "NewsArticle",
- "probability": 0.05
- }
- ],
- "eventId": "75269AD0-BFEE-4598-8196-C57383D38E10",
- "rewardActionId": "EntertainmentArticle",
- "inferenceExplanation": [
- {
- "idΓÇ¥: "EntertainmentArticle",
- "features": [
- {
- "name": "user.profileType",
- "score": 3.0
- },
- {
- "name": "user.latLong",
- "score": -4.3
- },
- {
- "name": "user.profileType^user.latLong",
- "score" : 12.1
- },
- ]
- ]
-}
-```
-
-In the example above, three action IDs are returned in the _ranking_ collection along with their respective probability scores. The action with the largest probability is the _best action_ as determined by the model trained on data sent to the Personalizer APIs, which in this case is `"id": "EntertainmentArticle"`. The action ID can be seen again in the _inferenceExplanation_ collection, along with the feature names and scores determined by the model for that action and the features and values sent to the Rank API.
-
-Recall that Personalizer will either return the _best action_ or an _exploratory action_ chosen by the exploration policy. The best action is the one that the model has determined has the highest probability of maximizing the average reward, whereas exploratory actions are chosen among the set of all possible actions provided in the Rank API call. Actions taken during exploration do not leverage the feature scores in determining which action to take, therefore **feature scores for exploratory actions should not be used to gain an understanding of why the action was taken.** [You can learn more about exploration here](concepts-exploration.md).
-
-For the best actions returned by Personalizer, the feature scores can provide general insight where:
-* Larger positive scores provide more support for the model choosing this action.
-* Larger negative scores provide more support for the model not choosing this action.
-* Scores close to zero have a small effect on the decision to choose this action.
-
-## Important considerations for Inference Explainability
-* **Increased latency.** Enabling _Inference Explainability_ will significantly increase the latency of Rank API calls due to processing of the feature information. Run experiments and measure the latency in your scenario to see if it satisfies your application's latency requirements.
-
-* **Correlated Features.** Features that are highly correlated with each other can reduce the utility of feature scores. For example, suppose Feature A is highly correlated with Feature B. It may be that Feature A's score is a large positive value while Feature B's score is a large negative value. In this case, the two features may effectively cancel each other out and have little to no impact on the model. While Personalizer is very robust to highly correlated features, when using _Inference Explainability_, ensure that features sent to Personalizer are not highly correlated.
-* **Default exploration only.** Currently, Inference Explainability supports only the default exploration algorithm.
--
-## Next steps
-
-[Reinforcement learning](concepts-reinforcement-learning.md)
cognitive-services How To Multi Slot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/personalizer/how-to-multi-slot.md
- Title: How to use multi-slot with Personalizer
-description: Learn how to use multi-slot with Personalizer to improve content recommendations provided by the service.
- Previously updated : 05/24/2021
-zone_pivot_groups: programming-languages-set-six
-# Get started with multi-slot for Azure Personalizer
-
-Multi-slot personalization (Preview) allows you to target content in web layouts, carousels, and lists where more than one action (such as a product or piece of content) is shown to your users. With Personalizer multi-slot APIs, you can have the AI models in Personalizer learn what user contexts and products drive certain behaviors, considering and learning from the placement in your user interface. For example, Personalizer may learn that certain products or content drive more clicks as a sidebar or a footer than as a main highlight on a page.
-
-In this guide, you'll learn how to use the Personalizer multi-slot APIs.
----------
-## Next steps
-
-* [Learn more about multi-slot](concept-multi-slot-personalization.md)
cognitive-services How To Offline Evaluation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/personalizer/how-to-offline-evaluation.md
- Title: How to perform offline evaluation - Personalizer
-description: This article will show you how to use offline evaluation to measure effectiveness of your app and analyze your learning loop.
- Previously updated : 02/20/2020
-# Analyze your learning loop with an offline evaluation
-
-Learn how to create an offline evaluation and interpret the results.
-
-Offline Evaluations allow you to measure how effective Personalizer is compared to your application's default behavior over a period of logged (historical) data, and assess how well other model configuration settings may perform for your model.
-
-When you create an offline evaluation, the _Optimization discovery_ option will run offline evaluations over a variety of learning policy values to find one that may improve the performance of your model. You can also provide additional policies to assess in the offline evaluation.
-
-Read about [Offline Evaluations](concepts-offline-evaluation.md) to learn more.
-
-## Prerequisites
-
-* A configured Personalizer resource
-* The Personalizer resource must have a representative amount of logged data - as a ballpark figure, we recommend at least 50,000 events in its logs for meaningful evaluation results. Optionally, you may also have previously exported _learning policy_ files that you wish to test and compare in this evaluation.
-
-## Run an offline evaluation
-
-1. In the [Azure portal](https://azure.microsoft.com/free/cognitive-services), locate your Personalizer resource.
-1. In the Azure portal, go to the **Evaluations** section and select **Create Evaluation**.
- ![In the Azure portal, go to the **Evaluations** section and select **Create Evaluation**.](./media/offline-evaluation/create-new-offline-evaluation.png)
-1. Fill out the options in the _Create an evaluation_ window:
-
- * An evaluation name.
- * Start and end date - these are dates that specify the range of data to use in the evaluation. This data must be present in the logs, as specified in the [Data Retention](how-to-settings.md) value.
- * Set _Optimization discovery_ to **yes**, if you wish Personalizer to attempt to find more optimal learning policies.
- * Add learning settings - upload a learning policy file if you wish to evaluate a custom or previously exported policy.
-
- > [!div class="mx-imgBorder"]
- > ![Choose offline evaluation settings](./media/offline-evaluation/create-an-evaluation-form.png)
-
-1. Start the Evaluation by selecting **Start evaluation**.
-
-## Review the evaluation results
-
-Evaluations can take a long time to run, depending on the amount of data to process, number of learning policies to compare, and whether an optimization was requested.
-
-1. Once completed, you can select the evaluation from the list of evaluations, then select **Compare the score of your application with other potential learning settings**. Select this feature when you want to see how your current learning policy performs compared to a new policy.
-
-1. Next, review the performance of the [learning policies](concepts-offline-evaluation.md#discovering-the-optimized-learning-policy).
-
- > [!div class="mx-imgBorder"]
- > [![Review evaluation results](./media/offline-evaluation/evaluation-results.png)](./media/offline-evaluation/evaluation-results.png#lightbox)
-
-You'll see various learning policies on the chart, along with their estimated average reward, confidence intervals, and options to download or apply a specific policy.
-- "Online" - Personalizer's current policy-- "Baseline1" - Your application's baseline policy-- "BaselineRand" - A policy of taking actions at random-- "Inter-len#" or "Hyper#" - Policies created by Optimization discovery.-
-Select **Apply** to apply the policy that improves the model best for your data.
--
-## Next steps
-
-* Learn more about [how offline evaluations work](concepts-offline-evaluation.md).
cognitive-services How To Thick Client https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/personalizer/how-to-thick-client.md
- Title: How to use local inference with the Personalizer SDK
-description: Learn how to use local inference to improve latency.
- Previously updated : 09/06/2022
-# Get started with the local inference SDK for Azure Personalizer
-
-The Personalizer local inference SDK (Preview) downloads the Personalizer model locally, and thus significantly reduces the latency of Rank calls by eliminating network calls. Every minute the client will download the most recent model in the background and use it for inference.
-
-In this guide, you'll learn how to use the Personalizer local inference SDK.
-
cognitive-services Quickstart Personalizer Sdk https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/personalizer/quickstart-personalizer-sdk.md
- Title: "Quickstart: Create and use learning loop with client SDK - Personalizer"
-description: This quickstart shows you how to create and manage a Personalizer learning loop using the client library.
--
-ms.
--- Previously updated : 02/02/2023-
-keywords: personalizer, Azure personalizer, machine learning
-zone_pivot_groups: programming-languages-set-six
--
-# Quickstart: Personalizer client library
-
-Get started with the Azure Personalizer client libraries to set up a basic learning loop. A learning loop is a system of decisions and feedback: an application requests a decision ranking from the service, then it uses the top-ranked choice and calculates a reward score from the outcome. It returns the reward score to the service. Over time, Personalizer uses AI algorithms to make better decisions for any given context. Follow these steps to set up a sample application.
-
-## Example scenario
-
-In this quickstart, a grocery e-retailer wants to increase revenue by showing relevant and personalized products to each customer on its website. On the main page, there's a "Featured Product" section that displays a prepared meal product to prospective customers. The e-retailer would like to determine how to show the right product to the right customer in order to maximize the likelihood of a purchase.
-
-The Personalizer service solves this problem in an automated, scalable, and adaptable way using reinforcement learning. You'll learn how to create _actions_ and their features, _context features_, and _reward scores_. You'll use the Personalizer client library to make calls to the [Rank and Reward APIs](what-is-personalizer.md#rank-and-reward-apis).
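-
-Before the language-specific steps, here's a compact sketch of the learning loop against the REST endpoints. The v1.0 paths below are the standard Rank and Reward routes; the resource name, key, and reward logic are placeholders to adapt to your own application:
-
-```python
-import uuid
-import requests
-
-ENDPOINT = "https://<your-resource-name>.cognitiveservices.azure.com"  # placeholder
-HEADERS = {"Ocp-Apim-Subscription-Key": "<your-personalizer-key>"}     # placeholder
-
-def rank(context_features, actions):
-    """Ask Personalizer which action to show for this context."""
-    body = {"eventId": str(uuid.uuid4()), "contextFeatures": context_features, "actions": actions}
-    response = requests.post(f"{ENDPOINT}/personalizer/v1.0/rank", json=body, headers=HEADERS)
-    response.raise_for_status()
-    return response.json()  # includes rewardActionId and the full ranking
-
-def reward(event_id, score):
-    """Report how well the chosen action worked, for example 1.0 for a purchase and 0.0 otherwise."""
-    response = requests.post(f"{ENDPOINT}/personalizer/v1.0/events/{event_id}/reward",
-                             json={"value": score}, headers=HEADERS)
-    response.raise_for_status()
-```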
----
-## Download the trained model
-
-If you'd like to download a Personalizer model that has been trained on 5,000 events from the example above, visit the [Personalizer Samples repository](https://github.com/Azure-Samples/cognitive-services-personalizer-samples/tree/master/quickstarts) and download the _Personalizer_QuickStart_Model.zip_ file. Then go to your Personalizer resource in the Azure portal, open the **Setup** page and the **Import/export** tab, and import the file.
-
-## Clean up resources
-
-To clean up your Cognitive Services subscription, you can delete the resource or delete the resource group, which will delete any associated resources.
-
-* [Portal](../cognitive-services-apis-create-account.md#clean-up-resources)
-* [Azure CLI](../cognitive-services-apis-create-account-cli.md#clean-up-resources)
-
-## Next steps
-
-> [!div class="nextstepaction"]
-> [How Personalizer works](how-personalizer-works.md)
-> [Where to use Personalizer](where-can-you-use-personalizer.md)
cognitive-services Responsible Characteristics And Limitations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/personalizer/responsible-characteristics-and-limitations.md
- Title: Characteristics and limitations of Personalizer-
-description: Characteristics and limitations of Personalizer
----- Previously updated : 05/23/2022----
-# Characteristics and limitations of Personalizer
-
-Azure Personalizer can work in many scenarios. To understand where you can apply Personalizer, make sure the requirements of your scenario meet the [expectations for Personalizer to work](where-can-you-use-personalizer.md#expectations-required-to-use-personalizer). To understand whether Personalizer should be used and how to integrate it into your applications, see [Use Cases for Personalizer](responsible-use-cases.md). You'll find criteria and guidance on choosing use cases, designing features, and reward functions for your uses of Personalizer.
-
-Before you read this article, it's helpful to understand some background information about [how Personalizer works](how-personalizer-works.md).
--
-## Select features for Personalizer
-
-Personalizing content depends on having useful information about the content and the user. For some applications and industries, some user features can be directly or indirectly considered discriminatory and potentially illegal. See the [Personalizer integration and responsible use guidelines](responsible-guidance-integration.md) on assessing features to use with Personalizer.
--
-## Computing rewards for Personalizer
-
-Personalizer learns to improve action choices based on the reward score provided by your application business logic.
-A well-built reward score will act as a short-term proxy to a business goal that's tied to an organization's mission.
-For example, rewarding on clicks will make Personalizer seek clicks at the expense of everything else, even if what's clicked is distracting to the user or not tied to a business outcome.
-In contrast, a news site might want to set rewards tied to something more meaningful than clicks, such as "Did the user spend enough time to read the content?" or "Did the user click relevant articles or references?" With Personalizer, it's easy to tie metrics closely to rewards. However, you will need to be careful not to confound short-term user engagement with desired outcomes.
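-
-As an illustration only, application business logic along the following lines rewards an engaged read more than a bare click; the signal names and thresholds are hypothetical and would come from your own telemetry and business goals.
-
-```python
-# Hypothetical reward logic that favors engaged reads over raw clicks.
-# Signal names and thresholds are illustrative; they are not part of the Personalizer API.
-def compute_reward(clicked: bool, seconds_on_article: float, opened_related_link: bool) -> float:
-    if not clicked:
-        return 0.0
-    reward = 0.2                      # a click alone is weak evidence of value
-    if seconds_on_article >= 30:      # the user spent enough time to read the content
-        reward += 0.5
-    if opened_related_link:           # the user clicked relevant articles or references
-        reward += 0.3
-    return min(reward, 1.0)           # Personalizer expects a score between 0 and 1
-
-print(compute_reward(clicked=True, seconds_on_article=45, opened_related_link=False))  # 0.7
-```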
--
-## Unintended consequences from reward scores
-
-Even if built with the best intentions, reward scores might create unexpected consequences or unintended results because of how Personalizer ranks content.
-
-Consider the following examples:
-
-- Rewarding video content personalization on the percentage of the video length watched will probably tend to rank shorter videos higher than longer videos (see the sketch after this list).
-- Rewarding social media shares, without sentiment analysis of how it's shared or the content itself, might lead to ranking offensive, unmoderated, or inflammatory content. This type of content tends to incite a lot of engagement but is often damaging.
-- Rewarding the action on user interface elements that users don't expect to change might interfere with the usability and predictability of the user interface. For example, buttons that change location or purpose without warning might make it harder for certain groups of users to stay productive.
-
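-
-To make the first example concrete, the short sketch below shows why a plain percent-watched reward favors short clips, along with one possible (hypothetical) mitigation that ignores watches below a minimum engagement time. The numbers and threshold are illustrative only.
-
-```python
-# Illustrative only: a plain percent-watched reward favors short videos.
-def percent_watched_reward(seconds_watched: float, video_length_seconds: float) -> float:
-    return min(seconds_watched / video_length_seconds, 1.0)
-
-print(round(percent_watched_reward(25, 30), 2))    # short clip, casual viewing  -> 0.83
-print(round(percent_watched_reward(300, 900), 2))  # long video, real engagement -> 0.33
-
-# One possible mitigation: require a minimum watch time before any reward is given.
-def engaged_watch_reward(seconds_watched: float, video_length_seconds: float,
-                         minimum_seconds: float = 60) -> float:
-    if seconds_watched < min(minimum_seconds, video_length_seconds):
-        return 0.0
-    return min(seconds_watched / video_length_seconds, 1.0)
-```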
-Implement these best practices:
-
-- Run offline experiments with your system by using different reward approaches to understand impact and side effects.
-- Evaluate your reward functions, and ask yourself how a naïve person might alter its interpretation, which may result in unintentional or undesirable outcomes.
-- Archive information and assets, such as models, learning policies, and other data, that Personalizer uses to function, so that results can be reproducible.
-
-## General guidelines to understand and improve performance
-
-Because Personalizer is based on Reinforcement Learning and learns from rewards to make better choices over time, performance isn't measured in traditional supervised learning terms used in classifiers, such as precision and recall. The performance of Personalizer is directly measured as the sum of reward scores it receives from your application via the Reward API.
-
-When you use Personalizer, the product user interface in the Azure portal provides performance information so you can monitor and act on it. The performance can be seen in the following ways:
-- If Personalizer is in Online Learning mode, you can perform [offline evaluations](concepts-offline-evaluation.md).
-- If Personalizer is in [Apprentice mode](concept-apprentice-mode.md), you can see the performance metrics (events imitated and rewards imitated) in the Evaluation pane in the Azure portal.
-
-We recommend you perform frequent offline evaluations to maintain oversight. This task will help you monitor trends and ensure effectiveness. For example, you could decide to temporarily put Personalizer in Apprentice Mode if reward performance has a dip.
-
-### Personalizer performance estimates shown in Offline Evaluations: Limitations
-
-We define the "performance" of Personalizer as the total rewards it obtains during use. Personalizer performance estimates shown in Offline Evaluations are computed instead of measured. It is important to understand the limitations of these estimates:
-
-- The estimates are based on past data, so future performance may vary as the world and your users change.
-- The estimates for baseline performance are computed probabilistically. For this reason, the confidence band for the baseline average reward is important. The estimate will get more precise with more events. If you use a smaller number of actions in each Rank call, the performance estimate may increase in confidence as there is a higher probability that Personalizer may choose any one of them (including the baseline action) for every event.
-- Personalizer constantly trains a model in near real time to improve the actions chosen for each event, and as a result, it will affect the total rewards obtained. The model performance will vary over time, depending on the recent past training data.
-- Exploration and action choice are stochastic processes guided by the Personalizer model. The random numbers used for these stochastic processes are seeded from the Event Id. To ensure reproducibility of explore-exploit and other stochastic processes, use the same Event Id.
-- Online performance may be capped by [exploration](concepts-exploration.md). Lowering exploration settings will limit how much information is harvested to stay on top of changing trends and usage patterns, so the balance depends on each use case. Some use cases merit starting off with higher exploration settings and reducing them over time (e.g., start with 30% and reduce to 10%).
-
-### Check existing models that might accidentally bias Personalizer
-
-Existing recommendations, customer segmentation, and propensity model outputs can be used by your application as inputs to Personalizer. Personalizer learns to disregard features that don't contribute to rewards. Review and evaluate any propensity models to determine whether they're good at predicting rewards and whether they contain strong biases that might generate harm as a side effect. For example, look for recommendations that might be based on harmful stereotypes. Consider using tools such as [FairLearn](https://fairlearn.org/) to facilitate the process.
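-
-For example, if an existing propensity model's output will be passed to Personalizer as a feature, a quick disaggregated check along the lines of the following sketch can surface large differences between groups before that signal reaches Personalizer. The column names and data are hypothetical; see the Fairlearn documentation for complete guidance.
-
-```python
-# Rough sketch of a disaggregated check on an existing model's recommendations.
-# Column names and data are hypothetical.
-import pandas as pd
-from fairlearn.metrics import MetricFrame, selection_rate
-
-data = pd.DataFrame({
-    "recommended": [1, 0, 1, 1, 0, 1, 0, 0],   # whether the existing model would recommend the offer
-    "group":       ["A", "A", "A", "B", "B", "B", "B", "A"],
-})
-
-frame = MetricFrame(
-    metrics=selection_rate,
-    y_true=data["recommended"],    # selection_rate only uses y_pred, but MetricFrame requires y_true
-    y_pred=data["recommended"],
-    sensitive_features=data["group"],
-)
-print(frame.by_group)  # recommendation rate per group; large gaps deserve a closer look
-```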
--
-## Proactive assessments during your project lifecycle
-
-Consider creating methods for team members, users, and business owners to report concerns regarding responsible use and a process that prioritizes their resolution. Consider treating tasks for responsible use just like other crosscutting tasks in the application lifecycle, such as tasks related to user experience, security, or DevOps. Tasks related to responsible use and their requirements shouldn't be afterthoughts. Responsible use should be discussed and implemented throughout the application lifecycle.
--
-## Next steps
-
-- [Responsible use and integration](responsible-guidance-integration.md)
-- [Offline evaluations](concepts-offline-evaluation.md)
-- [Features for context and actions](concepts-features.md)
cognitive-services Responsible Data And Privacy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/personalizer/responsible-data-and-privacy.md
- Title: Data and privacy for Personalizer-
-description: Data and privacy for Personalizer
----- Previously updated : 05/23/2022---
-# Data and privacy for Personalizer
-
-This article provides information about what data Azure Personalizer uses to work, how it processes that data, and how you can control that data. It assumes basic familiarity with [what Personalizer is](what-is-personalizer.md) and [how Personalizer works](how-personalizer-works.md). Specific terms can be found in Terminology.
--
-## What data does Personalizer process?
-
-Personalizer processes the following types of data:
-- **Context features and Action features**: Your application sends information about users, and the products or content to personalize, in aggregated form. This data is sent to Personalizer in each Rank API call in arguments for Context and Actions. You decide what to send to the API and how to aggregate it. The data is expressed as attributes or features. You provide information about your users, such as their device and their environment, as Context features. You shouldn't send features specific to a user, such as a phone number, email address, or user ID. Action features include information about your content and product, such as movie genre or product price. For more information, see [Features for Actions and Context](concepts-features.md).
-- **Reward information**: A reward score (a number between 0 and 1) ranks how well the user interaction resulting from the personalization choice mapped to a business goal. For example, an event might get a reward of "1" if a recommended article was clicked on. For more information, see [Rewards](concept-rewards.md).
-
-To understand more about what information you typically use with Personalizer, see [Features are information about Actions and Context](concepts-features.md).
-
-> [!TIP]
-> You decide which features to use, how to aggregate them, and where the information comes from when you call the Personalizer Rank API in your application. You also determine how to create reward scores.
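-
-As a purely illustrative example (the attribute names below are hypothetical), aggregated Context and Action features sent in a Rank call might look like the following, while direct identifiers such as email addresses, phone numbers, or user IDs should stay out of the payload.
-
-```python
-# Illustrative shape of aggregated features in a Rank call; attribute names are hypothetical.
-context_features = [
-    {"device": "mobile", "timeOfDay": "evening", "weather": "rainy"},  # the user's situation, aggregated
-    {"isReturningVisitor": True},
-]
-
-actions = [
-    {"id": "article-sports", "features": [{"topic": "sports", "length": "short"}]},
-    {"id": "article-finance", "features": [{"topic": "finance", "length": "long"}]},
-]
-
-# Do NOT send direct identifiers as features, for example:
-#   {"email": "user@contoso.com"}, {"userId": "12345"}, {"phone": "+1 555 0100"}
-```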
-
-## How does Personalizer process data?
-
-The following diagram illustrates how your data is processed.
-
-![Diagram that shows how Personalizer processes data.](media/how-personalizer-works/personalization-how-it-works.png)
-
-Personalizer processes data as follows:
-
-1. Personalizer receives data each time the application calls the Rank API for a personalization event. The data is sent via the arguments for the Context and Actions.
-
-2. Personalizer uses the information in the Context and Actions, its internal AI models, and service configuration to return the rank response for the ID of the action to use. The contents of the Context and Actions are stored for no more than 48 hours in transient caches with the EventID used or generated in the Rank API.
-3. The application then calls the Reward API with one or more reward scores. This information is also stored in transient caches and matched with the Actions and Context information.
-4. After the rank and reward information for events is correlated, it's removed from transient caches and placed in more permanent storage. It remains in permanent storage until the number of days specified in the Data Retention setting has gone by, at which time the information is deleted. If you choose not to specify a number of days in the Data Retention setting, this data will be saved as long as the Personalizer Azure Resource is not deleted or until you choose to Clear Data via the UI or APIs. You can change the Data Retention setting at any time.
-5. Personalizer continuously trains internal Personalizer AI models specific to this Personalizer loop by using the data in the permanent storage and machine learning configuration parameters in [Learning settings](concept-active-learning.md).
-6. Personalizer creates [offline evaluations](concepts-offline-evaluation.md) either automatically or on demand.
-Offline evaluations contain a report of rewards obtained by Personalizer models during a past time period. An offline evaluation embeds the models active at the time of their creation, and the learning settings used to create them, as well as a historical aggregate of average reward per event for that time window. Evaluations also include [feature importance](how-to-feature-evaluation.md), which is a list of features observed in the time period, and their relative importance in the model.
--
-### Independence of Personalizer loops
-
-Each Personalizer loop is separate and independent from others, as follows:
-
-- **No external data augmentation**: Each Personalizer loop only uses the data supplied to it by you via Rank and Reward API calls to train models. Personalizer doesn't use any additional information from any origin, such as other Personalizer loops in your own Azure subscription, Microsoft, third-party sources or subprocessors.
-- **No data, model, or information sharing**: A Personalizer loop won't share information about events, features, and models with any other Personalizer loop in your subscription, Microsoft, third parties or subprocessors.
-
-## How is data retained and what customer controls are available?
-
-Personalizer retains different types of data in different ways and provides the following controls for each.
--
-### Personalizer rank and reward data
-
-Personalizer stores the features about Actions and Context sent via rank and reward calls for the number of days specified in configuration under Data Retention.
-To control this data retention, you can:
-
-1. Specify the number of days to retain log storage in the [Azure portal for the Personalizer resource](how-to-settings.md) under **Configuration** > **Data Retention** or via the API. The default **Data Retention** setting is seven days. Personalizer deletes all Rank and Reward data older than this number of days automatically.
-
-2. Clear data for logged personalization and reward data in the Azure portal under **Model and learning settings** > **Clear data** > **Logged personalization and reward data** or via the API.
-
-3. Delete the Personalizer loop from your subscription in the Azure portal or via Azure resource management APIs.
-
-You can't access past data from Rank and Reward API calls in the Personalizer resource directly. If you want to see all the data that's being saved, configure log mirroring to create a copy of this data on an Azure Blob Storage resource you've created and are responsible for managing.
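-
-You can also read the current retention value programmatically. The following sketch calls the service configuration endpoint over REST (the endpoint and key are placeholders) and prints `logRetentionDays`, the retention period in days.
-
-```python
-# Read the loop's current data retention setting (sketch; endpoint and key are placeholders).
-import requests
-
-endpoint = "https://<your-resource-name>.cognitiveservices.azure.com/"
-headers = {"Ocp-Apim-Subscription-Key": "<your-resource-key>"}
-
-config = requests.get(endpoint + "personalizer/v1.0/configurations/service", headers=headers).json()
-print("logRetentionDays:", config["logRetentionDays"])
-```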
--
-### Personalizer transient cache
-
-Personalizer stores partial data about an event separate from rank and reward calls in transient caches. Events are automatically purged from the transient cache 48 hours from the time the event occurred.
-
-To delete transient data, you can:
-
-1. Clear data for logged personalization and reward data in the Azure portal under **Model and learning settings** > **Clear data** or via the API.
-
-2. Delete the Personalizer loop from your subscription in the Azure portal or via Azure resource management APIs.
--
-### Personalizer models and learning settings
-
-A Personalizer loop trains models with data from Rank and Reward API calls, driven by the hyperparameters and configuration specified in **Model and learning settings** in the Azure portal. Models are volatile. They're constantly changing and being trained on additional data in near real time. Personalizer doesn't automatically save older models and keeps overwriting them with the latest models. For more information, see [How to manage models and learning settings](how-to-manage-model.md). To clear models and learning settings:
-
-1. Reset them in the Azure portal under **Model and learning settings** > **Clear data** or via the API.
-
-2. Delete the Personalizer loop from your subscription in the Azure portal or via Azure resource management APIs.
--
-### Personalizer evaluation reports
-
-Personalizer also retains the information generated in [offline evaluations](concepts-offline-evaluation.md) for reports.
-
-To delete offline evaluation reports, you can:
-
-1. Go to the Personalizer loop under the Azure portal. Go to **Evaluations** and delete the relevant evaluation.
-
-2. Delete evaluations via the Evaluations API.
-
-3. Delete the Personalizer loop from your subscription in the Azure portal or via Azure resource management APIs.
--
-### Further storage considerations
-
-- **Customer managed keys**: Customers can configure the service to encrypt data at rest with their own managed keys. This second layer of encryption is on top of Microsoft's own encryption.
-- **Geography**: In all cases, the incoming data, models, and evaluations are processed and stored in the same geography where the Personalizer resource was created.
-
-Also see:
-
-- [How to manage model and learning settings](how-to-manage-model.md)
-- [Configure Personalizer learning loop](how-to-settings.md)
-
-## Next steps
--- [See Responsible use guidelines for Personalizer](responsible-use-cases.md).-
-To learn more about Microsoft's privacy and security commitments, see the [Microsoft Trust Center](https://www.microsoft.com/trust-center).
cognitive-services Responsible Guidance Integration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/personalizer/responsible-guidance-integration.md
- Title: Guidance for integration and responsible use of Personalizer-
-description: Guidance for integration and responsible use of Personalizer
----- Previously updated : 05/23/2022----
-# Guidance for integration and responsible use of Personalizer
-
-Microsoft works to help customers responsibly develop and deploy solutions by using Azure Personalizer. Our principled approach upholds personal agency and dignity by considering the AI system's:
-
-- Fairness, reliability, and safety.
-- Privacy and security.
-- Inclusiveness.
-- Transparency.
-- Human accountability.
-
-These considerations reflect our commitment to developing responsible AI.
--
-## General guidelines for integration and responsible use principles
-
-When you get ready to integrate and responsibly use AI-powered products or features, the following activities will help to set you up for success:
-
-- **Understand what it can do**. Fully assess the potential of Personalizer to understand its capabilities and limitations. Understand how it will perform in your particular scenario and context by thoroughly testing it with real-life conditions and data.
-
-- **Respect an individual's right to privacy**. Only collect data and information from individuals for lawful and justifiable purposes. Only use data and information that you have consent to use for this purpose.
-
-- **Obtain legal review**. Obtain appropriate legal advice to review Personalizer and how you are using it in your solution, particularly if you will use it in sensitive or high-risk applications. Understand what restrictions you might need to work within and your responsibility to resolve any issues that might come up in the future.
-
-- **Have a human in the loop**. Include human oversight as a consistent pattern area to explore. Ensure constant human oversight of the AI-powered product or feature. Maintain the role of humans in decision making. Make sure you can have real-time human intervention in the solution to prevent harm and manage situations when the AI system doesn't perform as expected.
-
-- **Build trust with affected stakeholders**. Communicate the expected benefits and potential risks to affected stakeholders. Help people understand why the data is needed and how the use of the data will lead to their benefit. Describe data handling in an understandable way.
-
-- **Create a customer feedback loop**. Provide a feedback channel that allows users and individuals to report issues with the service after it's deployed. After you've deployed an AI-powered product or feature, it requires ongoing monitoring and improvement. Be ready to implement any feedback and suggestions for improvement. Establish channels to collect questions and concerns from affected stakeholders. People who might be directly or indirectly affected by the system include employees, visitors, and the general public.
-
-- **Feedback**: Seek feedback from a diverse sampling of the community during the development and evaluation process (for example, historically marginalized groups, people with disabilities, and service workers). For more information, see Community jury.
-
-- **User Study**: Any consent or disclosure recommendations should be framed in a user study. Evaluate the first and continuous-use experience with a representative sample of the community to validate that the design choices lead to effective disclosure. Conduct user research with 10-20 community members (affected stakeholders) to evaluate their comprehension of the information and to determine if their expectations are met.
-
-- **Transparency & Explainability**: Consider enabling and using Personalizer's [inference explainability](./concepts-features.md?branch=main) capability to better understand which features play a significant role in Personalizer's decision choice in each Rank call. This capability empowers you to provide your users with transparency regarding how their data played a role in producing the recommended best action. For example, you can give your users a button labeled "Why These Suggestions?" that shows which top features played a role in producing the Personalizer results. This information can also be used to better understand what data attributes about your users, contexts, and actions are working in favor of Personalizer's choice of best action, which are working against it, and which may have little or no effect. This capability can also provide insights about your user segments and help you identify and address potential biases.
-
-- **Adversarial use**: Consider establishing a process to detect and act on malicious manipulation. There are actors that will take advantage of machine learning and AI systems' ability to learn from their environment. With coordinated attacks, they can artificially fake patterns of behavior that shift the data and AI models toward their goals. If your use of Personalizer could influence important choices, make sure you have the appropriate means to detect and mitigate these types of attacks in place.
-
-- **Opt out**: Consider providing a control for users to opt out of receiving personalized recommendations. For these users, the Personalizer Rank API will not be called from your application. Instead, your application can use an alternative mechanism for deciding what action is taken. For example, by opting out of personalized recommendations and choosing the default or baseline action, the user would experience the action that would be taken without Personalizer's recommendation. Alternatively, your application can use recommendations based on aggregate or population-based measures (e.g., now trending, top 10 most popular, etc.). A minimal sketch of this fallback follows this list.
-
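-
-The following sketch shows one way an application might honor an opt-out preference: it calls Rank only for users who haven't opted out, and otherwise falls back to the application's own default or aggregate choice. The helper name and the opt-out flag are hypothetical; the Rank endpoint and request shape match the rest of this documentation.
-
-```python
-# Hypothetical opt-out handling: only call Personalizer for users who haven't opted out.
-import requests
-
-def choose_featured_item(user_opted_out: bool, default_action_id: str,
-                         rank_request: dict, rank_url: str, headers: dict) -> str:
-    if user_opted_out:
-        # Respect the preference: use the baseline/default action, or an aggregate
-        # measure such as "most popular", without sending any Rank request.
-        return default_action_id
-    response = requests.post(rank_url, headers=headers, json=rank_request)
-    return response.json()["rewardActionId"]
-```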
-## Your responsibility
-
-All guidelines for responsible implementation build on the foundation that developers and businesses using Personalizer are responsible and accountable for the effects of using these algorithms in society. If you're developing an application that your organization will deploy, you should recognize your role and responsibility for its operation and how it affects people. If you're designing an application to be deployed by a third party, come to a shared understanding of who is ultimately responsible for the behavior of the application. Make sure to document that understanding.
--
-## Questions and feedback
-
-Microsoft is continuously upgrading tools and documents to help you act on these responsibilities. Our team invites you to [provide feedback to Microsoft](mailto:cogsvcs-RL-feedback@microsoft.com?subject%3DPersonalizer%20Responsible%20Use%20Feedback&body%3D%5BPlease%20share%20any%20question%2C%20idea%20or%20concern%5D) if you believe other tools, product features, and documents would help you implement these guidelines for using Personalizer.
--
-## Recommended reading
-- See Microsoft's six principles for the responsible development of AI published in the January 2018 book, [The Future Computed](https://news.microsoft.com/futurecomputed/).--
-## Next steps
-
-Understand how the Personalizer API receives features: [Features: Action and Context](concepts-features.md)
cognitive-services Responsible Use Cases https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/personalizer/responsible-use-cases.md
- Title: Transparency note for Personalizer-
-description: Transparency Note for Personalizer
----- Previously updated : 05/23/2022---
-# Use cases for Personalizer
-
-## What is a Transparency Note?
-
-An AI system includes not only the technology, but also the people who will use it, the people who will be affected by it, and the environment in which it is deployed. Creating a system that is fit for its intended purpose requires an understanding of how the technology works, its capabilities and limitations, and how to achieve the best performance.
-
-Microsoft provides *Transparency Notes* to help you understand how our AI technology works. This includes the choices system owners can make that influence system performance and behavior, and the importance of thinking about the whole system, including the technology, the people, and the environment. You can use Transparency Notes when developing or deploying your own system, or share them with the people who will use or be affected by your system.
-
-Transparency Notes are part of a broader effort at Microsoft to put our AI principles into practice. To find out more, see [Microsoft AI Principles](https://www.microsoft.com/ai/responsible-ai).
-
-## Introduction to Personalizer
-
-Azure Personalizer is a cloud-based service that helps your applications choose the best content item to show your users. You can use Personalizer to determine what product to suggest to shoppers or to figure out the optimal position for an advertisement. After the content is shown to the user, your application monitors the user's reaction and reports a reward score back to Personalizer. The reward score is used to continuously improve the machine learning model using reinforcement learning. This enhances the ability of Personalizer to select the best content item in subsequent interactions based on the contextual information it receives for each.
-
-For more information, see:
-
-- [What is Personalizer?](what-is-personalizer.md)
-- [Where can you use Personalizer](where-can-you-use-personalizer.md)
-- [How Personalizer works](how-personalizer-works.md)
-
-## Key terms
-
-|Term| Definition|
-|:--|:-|
-|**Learning Loop** | You create a Personalizer resource, called a learning loop, for every part of your application that can benefit from personalization. If you have more than one experience to personalize, create a loop for each. |
-|**Online model** | The default [learning behavior](terminology.md#learning-behavior) for Personalizer, where your learning loop uses machine learning to build the model that predicts the **top action** for your content. |
-|**Apprentice mode** | A [learning behavior](terminology.md#learning-behavior) that helps warm-start a Personalizer model to train without impacting the application's outcomes and actions. |
-|**Rewards**| A measure of how the user responded to the Rank API's returned reward action ID, as a score between 0 and 1. The value is set by your business logic, based on how the choice helped achieve your business goals of personalization. The learning loop doesn't store this reward as individual user history. |
-|**Exploration**| The Personalizer service is exploring when, instead of returning the best action, it chooses a different action for the user. By exploring, the Personalizer service avoids drift and stagnation, and it can adapt to ongoing user behavior. |
-
-For more information, and additional key terms, please refer to the [Personalizer Terminology](terminology.md) and [conceptual documentation](how-personalizer-works.md).
-
-## Example use cases
-
-Some common customer motivations for using Personalizer are to:
-
-- **User engagement**: Capture user interest by choosing content to increase clickthrough, or to prioritize the next best action to improve average revenue. Other mechanisms to increase user engagement might include selecting videos or music in a dynamic channel or playlist.
-- **Content optimization**: Images can be optimized for a product (such as selecting a movie poster from a set of options) to optimize clickthrough, or the UI layout, colors, images, and blurbs can be optimized on a web page to increase conversion and purchase.
-- **Maximize conversions using discounts and coupons**: To get the best balance of margin and conversion choose which discounts the application will provide to users, or decide which product to highlight from the results of a recommendation engine to maximize conversion.
-- **Maximize positive behavior change**: Select which wellness tip question to send in a notification, messaging, or SMS push to maximize positive behavior change.
-- **Increase productivity** in customer service and technical support by highlighting the most relevant next best actions or the appropriate content when users are looking for documents, manuals, or database items.
-
-## Considerations when choosing a use case
-
-- Using a service that learns to personalize content and user interfaces is useful. However, it can also be misapplied if the personalization creates harmful side effects in the real world. Consider how personalization also helps your users achieve their goals.
-- Consider what the negative consequences in the real world might be if Personalizer isn't suggesting particular items because the system is trained with a bias to the behavior patterns of the majority of the system users.
-- Consider situations where the exploration behavior of Personalizer might cause harm.
-- Carefully consider personalizing choices that are consequential or irreversible, and that should not be determined by short-term signals and rewards.
-- Don't provide actions to Personalizer that shouldn't be chosen. For example, inappropriate movies should be filtered out of the actions to personalize if making a recommendation for an anonymous or underage user.
-
-Here are some scenarios where the above guidance will play a role in whether, and how, to apply Personalizer:
-
-- Avoid using Personalizer for ranking offers on specific loan, financial, and insurance products, where personalization features are regulated, based on data the individuals don't know about, can't obtain, or can't dispute; and choices needing years and information "beyond the click" to truly assess how good recommendations were for the business and the users.
-- Carefully consider personalizing highlights of school courses and education institutions where recommendations without enough exploration might propagate biases and reduce users' awareness of other options.
-- Avoid using Personalizer to synthesize content algorithmically with the goal of influencing opinions in democracy and civic participation, as it is consequential in the long term, and can be manipulative if the user's goal for the visit is to be informed, not influenced.
-
-## Next steps
-
-* [Characteristics and limitations for Personalizer](responsible-characteristics-and-limitations.md)
-* [Where can you use Personalizer?](where-can-you-use-personalizer.md)
cognitive-services Terminology https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/personalizer/terminology.md
- Title: Terminology - Personalizer
-description: Personalizer uses terminology from reinforcement learning. These terms are used in the Azure portal and the APIs.
--
-ms.
--- Previously updated : 09/16/2022-
-# Personalizer terminology
-
-Personalizer uses terminology from reinforcement learning. These terms are used in the Azure portal and the APIs.
-
-## Conceptual terminology
-
-* **Learning Loop**: You create a Personalizer resource, called a _learning loop_, for every part of your application that can benefit from personalization. If you have more than one experience to personalize, create a loop for each.
-
-* **Model**: A Personalizer model captures all data learned about user behavior, getting training data from the combination of the arguments you send to Rank and Reward calls, and with a training behavior determined by the Learning Policy.
-
-* **Online mode**: The default [learning behavior](#learning-behavior) for Personalizer, where your learning loop uses machine learning to build the model that predicts the **top action** for your content.
-
-* **Apprentice mode**: A [learning behavior](#learning-behavior) that helps warm-start a Personalizer model to train without impacting the application's outcomes and actions.
-
-## Learning behavior
-
-* **Online mode**: Return the best action. Your model will respond to Rank calls with the best action and will use Reward calls to learn and improve its selections over time.
-* **[Apprentice mode](concept-apprentice-mode.md)**: Learn as an apprentice. Your model will learn by observing the behavior of your existing system. Rank calls will always return the application's **default action** (baseline).
-
-## Personalizer configuration
-
-Personalizer is configured from the [Azure portal](https://portal.azure.com).
-
-* **Rewards**: configure the default values for reward wait time, default reward, and reward aggregation policy.
-
-* **Exploration**: configure the percentage of Rank calls to use for exploration
-
-* **Model update frequency**: How often the model is retrained.
-
-* **Data retention**: How many days' worth of data to store. This can impact offline evaluations, which are used to improve your learning loop.
-
-## Use Rank and Reward APIs
-
-* **Rank**: Given the actions with features and the context features, use explore or exploit to return the top action (content item).
-
- * **Actions**: Actions are the content items, such as products or promotions, to choose from. Personalizer chooses the top action (returned reward action ID) to show to your users via the Rank API.
-
- * **Context**: To provide a more accurate rank, provide information about your context, for example:
- * Your user.
- * The device they are on.
- * The current time.
- * Other data about the current situation.
- * Historical data about the user or context.
-
- Your specific application may have different context information.
-
- * **[Features](concepts-features.md)**: A unit of information about a content item or a user context. Make sure to only use features that are aggregated. Do not use specific times, user IDs, or other non-aggregated data as features.
-
- * An _action feature_ is metadata about the content.
- * A _context feature_ is metadata about the context in which the content is presented.
-
-* **Exploration**: The Personalizer service is exploring when, instead of returning the best action, it chooses a different action for the user. By exploring, the Personalizer service avoids drift and stagnation, and it can adapt to ongoing user behavior.
-
-* **Learned Best Action**: The Personalizer service uses the current model to decide the best action based on past data.
-
-* **Experiment Duration**: The amount of time the Personalizer service waits for a reward, starting from the moment the Rank call happened for that event.
-
-* **Inactive Events**: An inactive event is one where you called Rank, but you're not sure the user will ever see the result, due to client application decisions. Inactive events allow you to create and store personalization results, then decide to discard them later without impacting the machine learning model.
--
-* **Reward**: A measure of how the user responded to the Rank API's returned reward action ID, as a score between 0 and 1. The value is set by your business logic, based on how the choice helped achieve your business goals of personalization. The learning loop doesn't store this reward as individual user history.
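-
-To connect these terms, the sketch below shows a minimal Rank call: the event ID ties the Rank and Reward calls together, the context and action features describe the situation and the content items, and the returned `rewardActionId` is the action to display. The endpoint, key, and feature values are placeholders.
-
-```python
-# Minimal Rank call tying the terms together (endpoint, key, and features are placeholders).
-import uuid
-import requests
-
-endpoint = "https://<your-resource-name>.cognitiveservices.azure.com/"
-headers = {"Ocp-Apim-Subscription-Key": "<your-resource-key>", "Content-Type": "application/json"}
-
-rank_request = {
-    "eventId": uuid.uuid4().hex,                                    # ties Rank and Reward together
-    "contextFeatures": [{"device": "tablet", "timeOfDay": "morning"}],
-    "actions": [                                                    # the content items to choose from
-        {"id": "promo-a", "features": [{"category": "coffee", "price": 4}]},
-        {"id": "promo-b", "features": [{"category": "tea", "price": 3}]},
-    ],
-    "deferActivation": False,                                       # set to True for inactive events
-}
-
-response = requests.post(endpoint + "personalizer/v1.0/rank", headers=headers, json=rank_request).json()
-print(response["rewardActionId"])   # the learned best action, or an exploration choice
-```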
-
-## Evaluations
-
-### Offline evaluations
-
-* **Evaluation**: An offline evaluation determines the best learning policy for your loop based on your application's data.
-
-* **Learning Policy**: The set of parameters that determines how Personalizer trains a model on every event and how the underlying machine learning algorithm works. A new learning loop starts with a default **Learning Policy**, which can yield moderate performance. When running [Evaluations](concepts-offline-evaluation.md), Personalizer creates new learning policies specifically optimized to the use cases of your loop. Personalizer will perform significantly better with policies optimized for each specific loop, generated during Evaluation. The learning policy is named _learning settings_ on the **Model and learning settings** page for the Personalizer resource in the Azure portal. The current learning settings can also be read over REST, as sketched below.
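-
-As a quick reference, the following sketch reads the learning settings from the policy configuration endpoint; the endpoint and key are placeholders.
-
-```python
-# Read the loop's current learning policy (learning settings); endpoint and key are placeholders.
-import requests
-
-endpoint = "https://<your-resource-name>.cognitiveservices.azure.com/"
-headers = {"Ocp-Apim-Subscription-Key": "<your-resource-key>"}
-
-policy = requests.get(endpoint + "personalizer/v1.0/configurations/policy", headers=headers).json()
-print(policy)   # typically includes the policy name and its training arguments
-```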
-
-### Apprentice mode evaluations
-
-Apprentice mode provides the following **evaluation metrics**:
-* **Baseline – average reward**: Average rewards of the application's default (baseline).
-* **Personalizer – average reward**: Average of total rewards Personalizer would potentially have reached.
-* **Average rolling reward**: Ratio of Baseline and Personalizer reward – normalized over the most recent 1000 events.
-
-## Next steps
-
-* Learn about [ethics and responsible use](ethics-responsible-use.md)
cognitive-services Tutorial Use Azure Notebook Generate Loop Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/personalizer/tutorial-use-azure-notebook-generate-loop-data.md
- Title: "Tutorial: Azure Notebook - Personalizer"-
-description: This tutorial simulates a Personalizer loop _system_ in an Azure Notebook, which suggests which type of coffee a customer should order. The users and their preferences are stored in a user dataset. Information about the coffee is also available and stored in a coffee dataset.
--
-ms.
--- Previously updated : 04/27/2020-
-#Customer intent: As a Python developer, I want to use Personalizer in an Azure Notebook so that I can understand the end to end lifecycle of a Personalizer loop.
--
-# Tutorial: Use Personalizer in Azure Notebook
-
-This tutorial runs a Personalizer loop in an Azure Notebook, demonstrating the end to end life cycle of a Personalizer loop.
-
-The loop suggests which type of coffee a customer should order. The users and their preferences are stored in a user dataset. Information about the coffee is stored in a coffee dataset.
-
-## Users and coffee
-
-The notebook, simulating user interaction with a website, selects a random user, time of day, and type of weather from the dataset. A summary of the user information is:
-
-|Customers - context features|Times of Day|Types of weather|
-|--|--|--|
-|Alice<br>Bob<br>Cathy<br>Dave|Morning<br>Afternoon<br>Evening|Sunny<br>Rainy<br>Snowy|
-
-To help Personalizer learn, over time, the _system_ also knows details about the coffee selection for each person.
-
-|Coffee - action features|Types of temperature|Places of origin|Types of roast|Organic|
-|--|--|--|--|--|
-|Cappucino|Hot|Kenya|Dark|Organic|
-|Cold brew|Cold|Brazil|Light|Organic|
-|Iced mocha|Cold|Ethiopia|Light|Not organic|
-|Latte|Hot|Brazil|Dark|Not organic|
-
-The **purpose** of the Personalizer loop is to find the best match between the users and the coffee as much of the time as possible.
-
-The code for this tutorial is available in the [Personalizer Samples GitHub repository](https://github.com/Azure-Samples/cognitive-services-personalizer-samples/tree/master/samples/azurenotebook).
-
-## How the simulation works
-
-At the beginning of the running system, Personalizer's suggestions are successful only 20% to 30% of the time. This success is indicated by the reward sent back to the Personalizer Reward API, with a score of 1. After some Rank and Reward calls, the system improves.
-
-After the initial requests, run an offline evaluation. This allows Personalizer to review the data and suggest a better learning policy. Apply the new learning policy and run the notebook again with 20% of the previous request count. The loop will perform better with the new learning policy.
-
-## Rank and reward calls
-
-For each of the few thousand calls to the Personalizer service, the Azure Notebook sends the **Rank** request to the REST API:
-
-* A unique ID for the Rank/Request event
-* Context features - A random choice of the user, weather, and time of day - simulating a user on a website or mobile device
-* Actions with Features - _All_ the coffee data - from which Personalizer makes a suggestion
-
-The system receives the request, then compares that prediction with the user's known choice for the same time of day and weather. If the known choice is the same as the predicted choice, the **Reward** of 1 is sent back to Personalizer. Otherwise the reward sent back is 0.
-
-> [!Note]
-> This is a simulation so the algorithm for the reward is simple. In a real-world scenario, the algorithm should use business logic, possibly with weights for various aspects of the customer's experience, to determine the reward score.
--
-## Prerequisites
-
-* An [Azure Notebook](https://notebooks.azure.com/) account.
-* An [Azure Personalizer resource](https://portal.azure.com/#create/Microsoft.CognitiveServicesPersonalizer).
- * If you have already used the Personalizer resource, make sure to [clear the data](how-to-settings.md#clear-data-for-your-learning-loop) in the Azure portal for the resource.
-* Upload all the files for [this sample](https://github.com/Azure-Samples/cognitive-services-personalizer-samples/tree/master/samples/azurenotebook) into an Azure Notebook project.
-
-File descriptions:
-
-* [Personalizer.ipynb](https://github.com/Azure-Samples/cognitive-services-personalizer-samples/blob/master/samples/azurenotebook/Personalizer.ipynb) is the Jupyter notebook for this tutorial.
-* [User dataset](https://github.com/Azure-Samples/cognitive-services-personalizer-samples/blob/master/samples/azurenotebook/users.json) is stored in a JSON object.
-* [Coffee dataset](https://github.com/Azure-Samples/cognitive-services-personalizer-samples/blob/master/samples/azurenotebook/coffee.json) is stored in a JSON object.
-* [Example Request JSON](https://github.com/Azure-Samples/cognitive-services-personalizer-samples/blob/master/samples/azurenotebook/example-rankrequest.json) is the expected format for a POST request to the Rank API.
-
-## Configure Personalizer resource
-
-In the Azure portal, configure your [Personalizer resource](https://portal.azure.com/#create/Microsoft.CognitiveServicesPersonalizer) with the **update model frequency** set to 15 seconds and a **reward wait time** of 10 minutes. These values are found on the **[Configuration](how-to-settings.md#configure-service-settings-in-the-azure-portal)** page.
-
-|Setting|Value|
-|--|--|
-|update model frequency|15 seconds|
-|reward wait time|10 minutes|
-
-These values have a very short duration in order to show changes in this tutorial. These values shouldn't be used in a production scenario without validating they achieve your goal with your Personalizer loop.
-
-## Set up the Azure Notebook
-
-1. Change the Kernel to `Python 3.6`.
-1. Open the `Personalizer.ipynb` file.
-
-## Run Notebook cells
-
-Run each executable cell and wait for it to return. You know it is done when the brackets next to the cell display a number instead of a `*`. The following sections explain what each cell does programmatically and what to expect for the output.
-
-### Include the Python modules
-
-Include the required Python modules. The cell has no output.
-
-```python
-# datetime is used by the currentDateTime() helper defined later in the notebook
-import datetime
-import json
-import matplotlib.pyplot as plt
-import random
-import requests
-import time
-import uuid
-```
-
-### Set Personalizer resource key and name
-
-From the Azure portal, find your key and endpoint on the **Quickstart** page of your Personalizer resource. Change the value of `<your-resource-name>` to your Personalizer resource's name. Change the value of `<your-resource-key>` to your Personalizer key.
-
-```python
-# Replace 'personalization_base_url' and 'resource_key' with your valid endpoint values.
-personalization_base_url = "https://<your-resource-name>.cognitiveservices.azure.com/"
-resource_key = "<your-resource-key>"
-```
-
-### Print current date and time
-Use this function to note the start and end times of the iterative function, iterations.
-
-These cells have no output. The function does output the current date and time when called.
-
-```python
-# Print out current datetime
-def currentDateTime():
- currentDT = datetime.datetime.now()
- print (str(currentDT))
-```
-
-### Get the last model update time
-
-When the function, `get_last_updated`, is called, the function prints out the last modified date and time that the model was updated.
-
-These cells have no output. The function does output the last model training date when called.
-
-The function uses a GET REST API to [get model properties](https://westus2.dev.cognitive.microsoft.com/docs/services/personalizer-api/operations/GetModelProperties).
-
-```python
-# initialize variable for model's last modified date
-modelLastModified = ""
-```
-
-```python
-def get_last_updated(currentModifiedDate):
-
- print('--checking model')
-
- # get model properties
- response = requests.get(personalization_model_properties_url, headers = headers, params = None)
-
- print(response)
- print(response.json())
-
- # get lastModifiedTime
- lastModifiedTime = json.dumps(response.json()["lastModifiedTime"])
-
- if (currentModifiedDate != lastModifiedTime):
- currentModifiedDate = lastModifiedTime
- print(f'--model updated: {lastModifiedTime}')
-```
-
-### Get policy and service configuration
-
-Validate the state of the service with these two REST calls.
-
-These cells have no output. The function does output the service values when called.
-
-```python
-def get_service_settings():
-
- print('--checking service settings')
-
- # get learning policy
- response = requests.get(personalization_model_policy_url, headers = headers, params = None)
-
- print(response)
- print(response.json())
-
- # get service settings
- response = requests.get(personalization_service_configuration_url, headers = headers, params = None)
-
- print(response)
- print(response.json())
-```
-
-### Construct URLs and read JSON data files
-
-This cell
-
-* builds the URLs used in REST calls
-* sets the security header using your Personalizer resource key
-* seeds the random number generator used to select the user, weather, and time of day
-* reads in the JSON data files
-* calls `get_last_updated` method - learning policy has been removed in example output
-* calls `get_service_settings` method
-
-The cell has output from the call to `get_last_updated` and `get_service_settings` functions.
-
-```python
-# build URLs
-personalization_rank_url = personalization_base_url + "personalizer/v1.0/rank"
-personalization_reward_url = personalization_base_url + "personalizer/v1.0/events/" #add "{eventId}/reward"
-personalization_model_properties_url = personalization_base_url + "personalizer/v1.0/model/properties"
-personalization_model_policy_url = personalization_base_url + "personalizer/v1.0/configurations/policy"
-personalization_service_configuration_url = personalization_base_url + "personalizer/v1.0/configurations/service"
-
-headers = {'Ocp-Apim-Subscription-Key' : resource_key, 'Content-Type': 'application/json'}
-
-# context
-users = "users.json"
-
-# action features
-coffee = "coffee.json"
-
-# empty JSON for Rank request
-requestpath = "example-rankrequest.json"
-
-# initialize random
-random.seed(time.time())
-
-userpref = None
-rankactionsjsonobj = None
-actionfeaturesobj = None
-
-with open(users) as handle:
- userpref = json.loads(handle.read())
-
-with open(coffee) as handle:
- actionfeaturesobj = json.loads(handle.read())
-
-with open(requestpath) as handle:
- rankactionsjsonobj = json.loads(handle.read())
-
-get_last_updated(modelLastModified)
-get_service_settings()
-
-print(f'User count {len(userpref)}')
-print(f'Coffee count {len(actionfeaturesobj)}')
-```
-
-Verify that the output's `rewardWaitTime` is set to 10 minutes and `modelExportFrequency` is set to 15 seconds.
-
-```console
---checking model
-<Response [200]>
-{'creationTime': '0001-01-01T00:00:00+00:00', 'lastModifiedTime': '0001-01-01T00:00:00+00:00'}
---model updated: "0001-01-01T00:00:00+00:00"
---checking service settings
-<Response [200]>
-{...learning policy...}
-<Response [200]>
-{'rewardWaitTime': '00:10:00', 'defaultReward': 0.0, 'rewardAggregation': 'earliest', 'explorationPercentage': 0.2, 'modelExportFrequency': '00:00:15', 'logRetentionDays': -1}
-User count 4
-Coffee count 4
-```
-
-### Troubleshooting the first REST call
-
-This previous cell is the first cell that calls out to Personalizer. Make sure the REST status code in the output is `<Response [200]>`. If you get an error, such as 404, but you are sure your resource key and name are correct, reload the notebook.
-
-Make sure the counts of coffee and users are both 4. If you get an error, check that you uploaded all 3 JSON files.
-
-### Set up metric chart in Azure portal
-
-Later in this tutorial, the long-running process of 10,000 requests is visible from the browser with an updating text box. It may be easier to see this information in a chart, or as a total sum, when the long-running process ends. To view this information, use the metrics provided with the resource. You can create the chart now that you have completed a request to the service, then refresh the chart periodically while the long-running process is underway.
-
-1. In the Azure portal, select your Personalizer resource.
-1. In the resource navigation, select **Metrics** underneath Monitoring.
-1. In the chart, select **Add metric**.
-1. The resource and metric namespace are already set. You only need to select the metric of **successful calls** and the aggregation of **sum**.
-1. Change the time filter to the last 4 hours.
-
- ![Set up metric chart in Azure portal, adding metric for successful calls for the last 4 hours.](./media/tutorial-azure-notebook/metric-chart-setting.png)
-
- You should see three successful calls in the chart.
-
-### Generate a unique event ID
-
-This function generates a unique ID for each rank call. The ID is used to identify the rank and reward call information. This value could come from a business process such as a web view ID or transaction ID.
-
-The cell has no output. The function does output the unique ID when called.
-
-```python
-def add_event_id(rankjsonobj):
- eventid = uuid.uuid4().hex
- rankjsonobj["eventId"] = eventid
- return eventid
-```
-
-### Get random user, weather, and time of day
-
-This function selects a random user, weather, and time of day, then adds those items to the JSON object to send to the Rank request.
-
-The cell has no output. When the function is called, it returns the random user's name, random weather, and random time of day.
-
-The list of 4 users and their preferences - only some preferences are shown for brevity:
-
-```json
-{
- "Alice": {
- "Sunny": {
- "Morning": "Cold brew",
- "Afternoon": "Iced mocha",
- "Evening": "Cold brew"
- }...
- },
- "Bob": {
- "Sunny": {
- "Morning": "Cappucino",
- "Afternoon": "Iced mocha",
- "Evening": "Cold brew"
- }...
- },
- "Cathy": {
- "Sunny": {
- "Morning": "Latte",
- "Afternoon": "Cold brew",
- "Evening": "Cappucino"
- }...
- },
- "Dave": {
- "Sunny": {
- "Morning": "Iced mocha",
- "Afternoon": "Iced mocha",
- "Evening": "Iced mocha"
- }...
- }
-}
-```
-
-```python
-def add_random_user_and_contextfeatures(namesoption, weatheropt, timeofdayopt, rankjsonobj):
- name = namesoption[random.randint(0,3)]
- weather = weatheropt[random.randint(0,2)]
- timeofday = timeofdayopt[random.randint(0,2)]
- rankjsonobj['contextFeatures'] = [{'timeofday': timeofday, 'weather': weather, 'name': name}]
- return [name, weather, timeofday]
-```
--
-### Add all coffee data
-
-This function adds the entire list of coffee to the JSON object to send to the Rank request.
-
-The cell has no output. The function does change the `rankjsonobj` when called.
--
-The example of a single coffee's features is:
-
-```json
-{
- "id": "Cappucino",
- "features": [
- {
- "type": "hot",
- "origin": "kenya",
- "organic": "yes",
- "roast": "dark"
-    }
-    ]
-}
-```
-
-```python
-def add_action_features(rankjsonobj):
- rankjsonobj["actions"] = actionfeaturesobj
-```
-
-### Compare prediction with known user preference
-
-This function is called after the Rank API is called, for each iteration.
-
-This function compares the user's preference for coffee, based on weather and time of day, with the Personalizer's suggestion for the user for those filters. If the suggestion matches, a score of 1 is returned, otherwise the score is 0. The cell has no output. The function does output the score when called.
-
-```python
-def get_reward_from_simulated_data(name, weather, timeofday, prediction):
- if(userpref[name][weather][timeofday] == str(prediction)):
- return 1
- return 0
-```
-
-### Loop through calls to Rank and Reward
-
-The next cell is the _main_ work of the Notebook: it gets a random user, gets the coffee list, and sends both to the Rank API. It then compares the prediction with the user's known preferences and sends the reward back to the Personalizer service.
-
-The loop runs for `num_requests` times. Personalizer needs a few thousand calls to Rank and Reward to create a model.
-
-An example of the JSON sent to the Rank API follows. The list of coffee is not complete, for brevity. You can see the entire JSON for coffee in `coffee.json`.
-
-JSON sent to the Rank API:
-
-```json
-{
- 'contextFeatures':[
- {
- 'timeofday':'Evening',
- 'weather':'Snowy',
- 'name':'Alice'
- }
- ],
- 'actions':[
- {
- 'id':'Cappucino',
- 'features':[
- {
- 'type':'hot',
- 'origin':'kenya',
- 'organic':'yes',
- 'roast':'dark'
- }
- ]
- }
- ...rest of coffee list
- ],
- 'excludedActions':[
-
- ],
- 'eventId':'b5c4ef3e8c434f358382b04be8963f62',
- 'deferActivation':False
-}
-```
-
-JSON response from the Rank API:
-
-```json
-{
- 'ranking': [
- {'id': 'Latte', 'probability': 0.85 },
- {'id': 'Iced mocha', 'probability': 0.05 },
- {'id': 'Cappucino', 'probability': 0.05 },
- {'id': 'Cold brew', 'probability': 0.05 }
- ],
- 'eventId': '5001bcfe3bb542a1a238e6d18d57f2d2',
- 'rewardActionId': 'Latte'
-}
-```
-
-Finally, each loop shows the random selection of user, weather, time of day, and determined reward. The reward of 1 indicates the Personalizer resource selected the correct coffee type for the given user, weather, and time of day.
-
-```console
-1 Alice Rainy Morning Latte 1
-```
-
-The function uses:
-
-* Rank: a POST REST API to [get rank](https://westus2.dev.cognitive.microsoft.com/docs/services/personalizer-api/operations/Rank).
-* Reward: a POST REST API to [report reward](https://westus2.dev.cognitive.microsoft.com/docs/services/personalizer-api/operations/Reward).
-
-```python
-def iterations(n, modelCheck, jsonFormat):
-
- i = 1
-
- # default reward value - assumes failed prediction
- reward = 0
-
- # Print out dateTime
- currentDateTime()
-
- # collect results to aggregate in graph
- total = 0
- rewards = []
- count = []
-
- # default list of user, weather, time of day
- namesopt = ['Alice', 'Bob', 'Cathy', 'Dave']
- weatheropt = ['Sunny', 'Rainy', 'Snowy']
- timeofdayopt = ['Morning', 'Afternoon', 'Evening']
--
- while(i <= n):
-
- # create unique id to associate with an event
- eventid = add_event_id(jsonFormat)
-
- # generate a random sample
- [name, weather, timeofday] = add_random_user_and_contextfeatures(namesopt, weatheropt, timeofdayopt, jsonFormat)
-
- # add action features to rank
- add_action_features(jsonFormat)
-
- # show JSON to send to Rank
- print('To: ', jsonFormat)
-
- # choose an action - get prediction from Personalizer
- response = requests.post(personalization_rank_url, headers = headers, params = None, json = jsonFormat)
-
- # show Rank prediction
- print ('From: ',response.json())
-
- # compare personalization service recommendation with the simulated data to generate a reward value
- prediction = json.dumps(response.json()["rewardActionId"]).replace('"','')
- reward = get_reward_from_simulated_data(name, weather, timeofday, prediction)
-
- # show result for iteration
- print(f' {i} {currentDateTime()} {name} {weather} {timeofday} {prediction} {reward}')
-
- # send the reward to the service
- response = requests.post(personalization_reward_url + eventid + "/reward", headers = headers, params= None, json = { "value" : reward })
-
- # for every N rank requests, compute total correct total
- total = total + reward
-
- # every N iteration, get last updated model date and time
- if(i % modelCheck == 0):
-
- print("**** 10% of loop found")
-
- get_last_updated(modelLastModified)
-
- # aggregate so chart is easier to read
- if(i % 10 == 0):
- rewards.append( total)
- count.append(i)
- total = 0
-
- i = i + 1
-
- # Print out dateTime
- currentDateTime()
-
- return [count, rewards]
-```
-
-## Run for 10,000 iterations
-Run the Personalizer loop for 10,000 iterations. This is a long-running operation. Do not close the browser that is running the notebook. Refresh the metrics chart in the Azure portal periodically to see the total calls to the service. When you have around 20,000 calls (a Rank and a Reward call for each iteration of the loop), the iterations are done.
-
-```python
-# max iterations
-num_requests = 10000
-
-# check last mod date N% of time - currently 10%
-lastModCheck = int(num_requests * .10)
-
-jsonTemplate = rankactionsjsonobj
-
-# main iterations
-[count, rewards] = iterations(num_requests, lastModCheck, jsonTemplate)
-```
---
-## Chart results to see improvement
-
-Create a chart from the `count` and `rewards`.
-
-```python
-def createChart(x, y):
- plt.plot(x, y)
- plt.xlabel("Batch of rank events")
- plt.ylabel("Correct recommendations per batch")
- plt.show()
-```
-
-## Run chart for 10,000 rank requests
-
-Run the `createChart` function.
-
-```python
-createChart(count,rewards)
-```
-
-## Reading the chart
-
-This chart shows the success of the model for the current default learning policy.
-
-![This chart shows the success of the current learning policy for the duration of the test.](./media/tutorial-azure-notebook/azure-notebook-chart-results.png)
--
-The ideal target is that, by the end of the test, the loop is averaging a success rate close to 100 percent minus the exploration percentage. The default value of exploration is 20%.
-
-`100-20=80`
-
-This exploration value is found in the Azure portal, for the Personalizer resource, on the **Configuration** page.
-
-In order to find a better learning policy, based on the data your loop sent to the Rank API, run an [offline evaluation](how-to-offline-evaluation.md) in the portal for your Personalizer loop.
-
-## Run an offline evaluation
-
-1. In the Azure portal, open the Personalizer resource's **Evaluations** page.
-1. Select **Create Evaluation**.
-1. Enter the required data: an evaluation name and the date range for the loop evaluation. The date range should include only the days you are focusing on for your evaluation.
- ![In the Azure portal, open the Personalizer resource's Evaluations page. Select Create Evaluation. Enter the evaluation name and date range.](./media/tutorial-azure-notebook/create-offline-evaluation.png)
-
- The purpose of running this offline evaluation is to determine if there is a better learning policy for the features and actions used in this loop. To find that better learning policy, make sure **Optimization Discovery** is turned on.
-
-1. Select **OK** to begin the evaluation.
-1. This **Evaluations** page lists the new evaluation and its current status. Depending on how much data you have, this evaluation can take some time. You can come back to this page after a few minutes to see the results.
-1. When the evaluation is completed, select the evaluation then select **Comparison of different learning policies**. This shows the available learning policies and how they would behave with the data.
-1. Select the top-most learning policy in the table and select **Apply**. This applies the _best_ learning policy to your model and retrains it.
-
-## Change update model frequency to 5 minutes
-
-1. In the Azure portal, still on the Personalizer resource, select the **Configuration** page.
-1. Change the **model update frequency** and **reward wait time** to 5 minutes and select **Save**.
-
-Learn more about the [reward wait time](concept-rewards.md#reward-wait-time) and [model update frequency](how-to-settings.md#model-update-frequency).
-
-```python
-#Verify new learning policy and times
-get_service_settings()
-```
-
-Verify that the output's `rewardWaitTime` and `modelExportFrequency` are both set to 5 minutes.
-```console
checking model
-<Response [200]>
-{'creationTime': '0001-01-01T00:00:00+00:00', 'lastModifiedTime': '0001-01-01T00:00:00+00:00'}
model updated: "0001-01-01T00:00:00+00:00"
checking service settings
-<Response [200]>
-{...learning policy...}
-<Response [200]>
-{'rewardWaitTime': '00:05:00', 'defaultReward': 0.0, 'rewardAggregation': 'earliest', 'explorationPercentage': 0.2, 'modelExportFrequency': '00:05:00', 'logRetentionDays': -1}
-User count 4
-Coffee count 4
-```
-
-## Validate new learning policy
-
-Return to the Azure Notebooks file and continue by running the same loop, but for only 2,000 iterations. Refresh the metrics chart in the Azure portal periodically to see the total calls to the service. When you have around 4,000 calls (a Rank and a Reward call for each iteration of the loop), the iterations are done.
-
-```python
-# max iterations
-num_requests = 2000
-
-# check last mod date N% of time - currently 10%
-lastModCheck2 = int(num_requests * .10)
-
-jsonTemplate2 = rankactionsjsonobj
-
-# main iterations
-[count2, rewards2] = iterations(num_requests, lastModCheck2, jsonTemplate2)
-```
-
-## Run chart for 2,000 rank requests
-
-Run the `createChart` function.
-
-```python
-createChart(count2,rewards2)
-```
-
-## Review the second chart
-
-The second chart should show a visible increase in Rank predictions aligning with user preferences.
-
-![The second chart should show a visible increase in Rank predictions aligning with user preferences.](./media/tutorial-azure-notebook/azure-notebook-chart-results-happy-graph.png)
-
-## Clean up resources
-
-If you do not intend to continue the tutorial series, clean up the following resources:
-
-* Delete your Azure Notebook project.
-* Delete your Personalizer resource.
-
-## Next steps
-
-The [Jupyter notebook and data files](https://github.com/Azure-Samples/cognitive-services-personalizer-samples/tree/master/samples/azurenotebook) used in this sample are available on the GitHub repo for Personalizer.
cognitive-services Tutorial Use Personalizer Chat Bot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/personalizer/tutorial-use-personalizer-chat-bot.md
- Title: Use Personalizer in chat bot - Personalizer
-description: Customize a C# .NET chat bot with a Personalizer loop to provide the correct content to a user based on actions (with features) and context features.
--
-ms.
--- Previously updated : 05/17/2021---
-# Tutorial: Use Personalizer in .NET chat bot
-
-Use a C# .NET chat bot with a Personalizer loop to provide the correct content to a user. This chat bot suggests a specific coffee or tea to a user. The user can accept or reject that suggestion. This gives Personalizer information to help make the next suggestion more appropriate.
-
-**In this tutorial, you learn how to:**
-
-<!-- green checkmark -->
-> [!div class="checklist"]
-> * Set up Azure resources
-> * Configure and run bot
-> * Interact with bot using Bot Framework Emulator
-> * Understand where and how the bot uses Personalizer
--
-## How does the chat bot work?
-
-A chat bot is typically a back-and-forth conversation with a user. This specific chat bot uses Personalizer to select the best action (coffee or tea) to offer the user. Personalizer uses reinforcement learning to make that selection.
-
-The chat bot needs to manage turns in conversation. The chat bot uses [Bot Framework](https://github.com/microsoft/botframework-sdk) to manage the bot architecture and conversation and uses the Cognitive Service, [Language Understanding](../LUIS/index.yml) (LUIS), to understand the intent of the natural language from the user.
-
-The chat bot is a web site with a specific route available to answer requests, `http://localhost:3978/api/messages`. You can use the Bot Framework Emulator to visually interact with the running chat bot while you are developing a bot locally.
-
-### User interactions with the bot
-
-This is a simple chat bot that allows you to enter text queries.
-
-|User enters text|Bot responds with text|Description of action bot takes to determine response text|
-|--|--|--|
-|No text entered - bot begins the conversation.|`This is a simple chatbot example that illustrates how to use Personalizer. The bot learns what coffee or tea order is preferred by customers given some context information (such as weather, temperature, and day of the week) and information about the user.`<br>`To use the bot, just follow the prompts. To try out a new imaginary context, type "Reset" and a new one will be randomly generated.`<br>`Welcome to the coffee bot, please tell me if you want to see the menu or get a coffee or tea suggestion for today. Once I've given you a suggestion, you can reply with 'like' or 'don't like'. It's Tuesday today and the weather is Snowy.`|The bot begins the conversation with instructional text and lets you know what the context is: `Tuesday`, `Snowy`.|
-|`Show menu`|`Here is our menu: Coffee: Cappuccino Espresso Latte Macchiato Mocha Tea: GreenTea Rooibos`|Determine intent of query using LUIS, then display menu choices of coffee and tea items. Features of the actions are |
-|`What do you suggest`|`How about Latte?`|Determine intent of query using LUIS, then call **Rank API**, and display top choice as a question `How about {response.RewardActionId}?`. Also displays JSON call and response for illustration purposes.|
-|`I like it`|`That's great! I'll keep learning your preferences over time.`<br>`Would you like to get a new suggestion or reset the simulated context to a new day?`|Determine intent of query using LUIS, then call **Reward API** with reward of `1`, and display the JSON call and response for illustration purposes.|
-|`I don't like it`|`Oh well, maybe I'll guess better next time.`<br>`Would you like to get a new suggestion or reset the simulated context to a new day?`|Determine intent of query using LUIS, then call **Reward API** with reward of `0`, and display the JSON call and response for illustration purposes.|
-|`Reset`|Returns instructional text.|Determine intent of query using LUIS, then displays the instructional text and resets the context.|
--
-### Personalizer in this bot
-
-This chat bot uses Personalizer to select the top action (specific coffee or tea), based on a list of _actions_ (some type of content) and context features.
-
-The bot sends the list of actions, along with context features, to the Personalizer loop. Personalizer returns the single best action to your bot, which your bot displays.
-
-In this tutorial, the **actions** are types of coffee and tea:
-
-|Coffee|Tea|
-|--|--|
-|Cappuccino<br>Espresso<br>Latte<br>Mocha|GreenTea<br>Rooibos|
-
-**Rank API:** To help Personalizer learn about your actions, the bot sends the following with each Rank API request:
-
-* Actions _with features_
-* Context features
-
-A **feature** of the model is information about the action or context that can be aggregated (grouped) across members of your chat bot user base. A feature _isn't_ individually specific (such as a user ID) or highly specific (such as an exact time of day).
-
-Features are used to align actions to the current context in the model. The model is a representation of Personalizer's past knowledge about actions, context, and their features that allows it to make educated decisions.
-
-The model, including features, is updated on a schedule based on your **Model update frequency** setting in the Azure portal.
-
-Features should be selected with the same planning and design that you would apply to any schema or model in your technical architecture. The feature values can be set with business logic or third-party systems.
-
-> [!CAUTION]
-> Features in this application are for demonstration and may not necessarily be the best features to use in your web app for your use case.
-
-#### Action features
-
-Each action (content item) has features to help distinguish the coffee or tea item.
-
-The features aren't configured as part of the loop configuration in the Azure portal. Instead they are sent as a JSON object with each Rank API call. This allows flexibility for the actions and their features to grow, change, and shrink over time, which allows Personalizer to follow trends.
-
-Features for coffee and tea include:
-
-* Origin location of coffee bean such as Kenya and Brazil
-* Is the coffee or tea organic?
-* Light or dark roast of coffee
-
-While the coffee has three features in the preceding list, tea only has one. Only pass features to Personalizer that make sense to the action. Don't pass in an empty value for a feature if it doesn't apply to the action.
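-
-As a rough illustration of that guidance, the following minimal sketch builds one coffee action with its three features and one tea action with its single feature, using the `RankableAction` type from the SDK shown later in this tutorial. The hard-coded feature values are illustrative; the sample app generates them randomly in `ChooseRankAsync`.
-
-```csharp
-using System.Collections.Generic;
-using Microsoft.Azure.CognitiveServices.Personalizer.Models;
-
-// Illustrative only: a coffee action carries three features, a tea action carries one.
-// Features that don't apply to an action are omitted entirely, not sent as empty values.
-IList<RankableAction> actions = new List<RankableAction>
-{
-    new RankableAction
-    {
-        Id = "Cappuccino",
-        Features = new List<object>
-        {
-            new { BeansOrigin = "Kenya" },
-            new { Organic = "yes" },
-            new { Roast = "dark" },
-        },
-    },
-    new RankableAction
-    {
-        Id = "GreenTea",
-        Features = new List<object>
-        {
-            new { Organic = "yes" },
-        },
-    },
-};
-```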
-
-#### Context features
-
-Context features help Personalizer understand the context of the environment such as the display device, the user, the location, and other features that are relevant to your use case.
-
-The context for this chat bot includes:
-
-* Type of weather (snowy, rainy, sunny)
-* Day of the week
-
-Selection of features is randomized in this chat bot. In a real bot, use real data for your context features.
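-
-For example, here is a minimal sketch of building the same two context features from real data instead of random values. The `getCurrentWeatherAsync` delegate is a hypothetical stand-in for whatever weather source you use; it is not part of the sample.
-
-```csharp
-using System;
-using System.Collections.Generic;
-using System.Threading.Tasks;
-
-public static class ContextFeatureBuilder
-{
-    // Illustrative only: derive context features from real data rather than randomizing them.
-    // getCurrentWeatherAsync is a hypothetical stand-in for your own weather source.
-    public static async Task<IList<object>> BuildAsync(Func<Task<string>> getCurrentWeatherAsync)
-    {
-        string weather = await getCurrentWeatherAsync();         // for example "Snowy", "Rainy", "Sunny"
-        string dayOfWeek = DateTime.UtcNow.DayOfWeek.ToString(); // "Monday", "Tuesday", ...
-
-        return new List<object>
-        {
-            new { weather },
-            new { dayofweek = dayOfWeek },
-        };
-    }
-}
-```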
-
-### Design considerations for this bot
-
-There are a few cautions to note about this conversation:
-* **Bot interaction**: The conversation is very simple because it is demonstrating Rank and Reward in a simple use case. It is not demonstrating the full functionality of the Bot Framework SDK or of the Emulator.
-* **Personalizer**: The features are selected randomly to simulate usage. Do not randomize features in a production Personalizer scenario.
-* **Language Understanding (LUIS)**: The few example utterances of the LUIS model are meant for this sample only. Do not use so few example utterances in your production LUIS application.
--
-## Install required software
-- [Visual Studio 2019](https://visualstudio.microsoft.com/downloads/). The downloadable samples repository includes instructions if you would prefer to use the .NET Core CLI.
-- [Microsoft Bot Framework Emulator](https://aka.ms/botframeworkemulator) is a desktop application that allows bot developers to test and debug their bots on localhost or running remotely through a tunnel.
-
-## Download the sample code of the chat bot
-
-The chat bot is available in the Personalizer samples repository. Clone or [download](https://github.com/Azure-Samples/cognitive-services-personalizer-samples/archive/master.zip) the repository, then open the sample in the `/samples/ChatbotExample` directory with Visual Studio 2019.
-
-To clone the repository, use the following Git command in a Bash shell (terminal).
-
-```bash
-git clone https://github.com/Azure-Samples/cognitive-services-personalizer-samples.git
-```
-
-## Create and configure Personalizer and LUIS resources
-
-### Create Azure resources
-
-To use this chat bot, you need to create Azure resources for Personalizer and Language Understanding (LUIS).
-
-* [Create LUIS resources](../luis/luis-how-to-azure-subscription.md). Create both an authoring and prediction resource.
-* [Create Personalizer resource](how-to-create-resource.md) then copy the key and endpoint from the Azure portal. You will need to set these values in the `appsettings.json` file of the .NET project.
-
-### Create LUIS app
-
-If you are new to LUIS, you need to [sign in](https://www.luis.ai) and immediately migrate your account. You don't need to create new resources; instead, select the resources you created in the previous section of this tutorial.
-
-1. To create a new LUIS application, in the [LUIS portal](https://www.luis.ai), select your subscription and authoring resource.
-1. Then, still on the same page, select **+ New app for conversation**, then **Import as JSON**.
-1. In the pop-up dialog, select **Choose file** then select the `/samples/ChatbotExample/CognitiveModels/coffeebot.json` file. Enter the name `Personalizer Coffee bot`.
-1. Select the **Train** button in the top-right navigation of the LUIS portal.
-1. Select the **Publish** button to publish the app to the **Production slot** for the prediction runtime.
-1. Select **Manage**, then **Settings**. Copy the value of the **App ID**. You will need to set this value in the `appsettings.json` file of the .NET project.
-1. Still in the **Manage** section, select **Azure Resources**. This displays the associated resources on the app.
-1. Select **Add prediction resource**. In the pop-up dialog, select your subscription, and the prediction resource created in a previous section of this tutorial, then select **Done**.
-1. Copy the values of the **Primary key** and **Endpoint URL**. You will need to set these values in the `appsettings.json` file of the .NET project.
-
-### Configure bot with appsettings.json file
-
-1. Open the chat bot solution file, `ChatbotSamples.sln`, with Visual Studio 2019.
-1. Open `appsettings.json` in the root directory of the project.
-1. Set all five settings copied in the previous section of this tutorial.
-
- ```json
- {
- "PersonalizerChatbot": {
- "LuisAppId": "",
- "LuisAPIKey": "",
- "LuisServiceEndpoint": "",
- "PersonalizerServiceEndpoint": "",
- "PersonalizerAPIKey": ""
- }
- }
- ```
-
-### Build and run the bot
-
-Once you have configured the `appsettings.json`, you are ready to build and run the chat bot. When you do, a browser opens to the running website, `http://localhost:3978`.
--
-Keep the website running. The rest of the tutorial explains what the bot is doing, so you can interact with the bot as you go.
--
-## Set up the Bot Framework Emulator
-
-1. Open the Bot Framework Emulator, and select **Open Bot**.
-
- :::image type="content" source="media/tutorial-chat-bot/bot-emulator-startup.png" alt-text="Screenshot of Bot Framework Emulator startup screen.":::
--
-1. Configure the bot with the following **bot URL** then select **Connect**:
-
- `http://localhost:3978/api/messages`
-
- :::image type="content" source="media/tutorial-chat-bot/bot-emulator-open-bot-settings.png" alt-text="Screenshot of Bot Framework Emulator open bot settings.":::
-
- The emulator connects to the chat bot and displays the instructional text, along with logging and debug information helpful for local development.
-
- :::image type="content" source="media/tutorial-chat-bot/bot-emulator-bot-conversation-first-turn.png" alt-text="Screenshot of Bot Framework Emulator in first turn of conversation.":::
-
-## Use the bot in the Bot Framework Emulator
-
-1. Ask to see the menu by entering `I would like to see the menu`. The chat bot displays the items.
-1. Let the bot suggest an item by entering `Please suggest a drink for me.`
-    The emulator displays the Rank request and response in the chat window, so you can see the full JSON, and the bot makes a suggestion, something like `How about Latte?`
-1. Accept Personalizer's top-ranked selection by entering `I like it.`
-    The emulator displays the Reward request, with a reward score of 1, and the response in the chat window, so you can see the full JSON. The bot responds with `That's great! I'll keep learning your preferences over time.` and `Would you like to get a new suggestion or reset the simulated context to a new day?`
-
-    If you respond with `I don't like it`, a reward score of 0 is sent to Personalizer.
--
-## Understand the .NET code using Personalizer
-
-The .NET solution is a simple bot framework chat bot. The code related to Personalizer is in the following folders:
-* `/samples/ChatbotExample/Bots`
- * `PersonalizerChatbot.cs` file for the interaction between bot and Personalizer
-* `/samples/ChatbotExample/ReinforcementLearning` - manages the actions and features for the Personalizer model
-* `/samples/ChatbotExample/Model` - files for the Personalizer actions and features, and for the LUIS intents
-
-### PersonalizerChatbot.cs - working with Personalizer
-
-The `PersonalizerChatbot` class is derived from the `Microsoft.Bot.Builder.ActivityHandler`. It has three properties and methods to manage the conversation flow.
-
-> [!CAUTION]
-> Do not copy code from this tutorial. Use the sample code in the [Personalizer samples repository](https://github.com/Azure-Samples/cognitive-services-personalizer-samples).
-
-```csharp
-public class PersonalizerChatbot : ActivityHandler
-{
-
- private readonly LuisRecognizer _luisRecognizer;
- private readonly PersonalizerClient _personalizerClient;
-
- private readonly RLContextManager _rlFeaturesManager;
-
- public PersonalizerChatbot(LuisRecognizer luisRecognizer, RLContextManager rlContextManager, PersonalizerClient personalizerClient)
- {
- _luisRecognizer = luisRecognizer;
- _rlFeaturesManager = rlContextManager;
- _personalizerClient = personalizerClient;
- }
-
- public override async Task OnTurnAsync(ITurnContext turnContext, CancellationToken cancellationToken = default(CancellationToken))
- {
- await base.OnTurnAsync(turnContext, cancellationToken);
-
- if (turnContext.Activity.Type == ActivityTypes.Message)
- {
- // Check LUIS model
- var recognizerResult = await _luisRecognizer.RecognizeAsync(turnContext, cancellationToken);
- var topIntent = recognizerResult?.GetTopScoringIntent();
- if (topIntent != null && topIntent.HasValue && topIntent.Value.intent != "None")
- {
- Intents intent = (Intents)Enum.Parse(typeof(Intents), topIntent.Value.intent);
- switch (intent)
- {
- case Intents.ShowMenu:
- await turnContext.SendActivityAsync($"Here is our menu: \n Coffee: {CoffeesMethods.DisplayCoffees()}\n Tea: {TeaMethods.DisplayTeas()}", cancellationToken: cancellationToken);
- break;
- case Intents.ChooseRank:
- // Here we generate the event ID for this Rank.
- var response = await ChooseRankAsync(turnContext, _rlFeaturesManager.GenerateEventId(), cancellationToken);
- _rlFeaturesManager.CurrentPreference = response.Ranking;
- await turnContext.SendActivityAsync($"How about {response.RewardActionId}?", cancellationToken: cancellationToken);
- break;
- case Intents.RewardLike:
- if (!string.IsNullOrEmpty(_rlFeaturesManager.CurrentEventId))
- {
- await RewardAsync(turnContext, _rlFeaturesManager.CurrentEventId, 1, cancellationToken);
- await turnContext.SendActivityAsync($"That's great! I'll keep learning your preferences over time.", cancellationToken: cancellationToken);
- await SendByebyeMessageAsync(turnContext, cancellationToken);
- }
- else
- {
- await turnContext.SendActivityAsync($"Not sure what you like. Did you ask for a suggestion?", cancellationToken: cancellationToken);
- }
-
- break;
- case Intents.RewardDislike:
- if (!string.IsNullOrEmpty(_rlFeaturesManager.CurrentEventId))
- {
- await RewardAsync(turnContext, _rlFeaturesManager.CurrentEventId, 0, cancellationToken);
- await turnContext.SendActivityAsync($"Oh well, maybe I'll guess better next time.", cancellationToken: cancellationToken);
- await SendByebyeMessageAsync(turnContext, cancellationToken);
- }
- else
- {
- await turnContext.SendActivityAsync($"Not sure what you dislike. Did you ask for a suggestion?", cancellationToken: cancellationToken);
- }
-
- break;
- case Intents.Reset:
- _rlFeaturesManager.GenerateRLFeatures();
- await SendResetMessageAsync(turnContext, cancellationToken);
- break;
- default:
- break;
- }
- }
- else
- {
- var msg = @"Could not match your message with any of the following LUIS intents:
- 'ShowMenu'
- 'ChooseRank'
- 'RewardLike'
- 'RewardDislike'.
- Try typing 'Show me the menu','What do you suggest','I like it','I don't like it'.";
- await turnContext.SendActivityAsync(msg);
- }
- }
- else if (turnContext.Activity.Type == ActivityTypes.ConversationUpdate)
- {
- // Generate a new weekday and weather condition
- // These will act as the context features when we call rank with Personalizer
- _rlFeaturesManager.GenerateRLFeatures();
-
- // Send a welcome message to the user and tell them what actions they may perform to use this bot
- await SendWelcomeMessageAsync(turnContext, cancellationToken);
- }
- else
- {
- await turnContext.SendActivityAsync($"{turnContext.Activity.Type} event detected", cancellationToken: cancellationToken);
- }
- }
-
- // code removed for brevity, full sample code available for download
- private async Task SendWelcomeMessageAsync(ITurnContext turnContext, CancellationToken cancellationToken)
- private async Task SendResetMessageAsync(ITurnContext turnContext, CancellationToken cancellationToken)
- private async Task SendByebyeMessageAsync(ITurnContext turnContext, CancellationToken cancellationToken)
- private async Task<RankResponse> ChooseRankAsync(ITurnContext turnContext, string eventId, CancellationToken cancellationToken)
- private async Task RewardAsync(ITurnContext turnContext, string eventId, double reward, CancellationToken cancellationToken)
-}
-```
-
-The methods prefixed with `Send` manage conversation with the bot and LUIS. The methods `ChooseRankAsync` and `RewardAsync` interact with Personalizer.
-
-#### Calling the Rank API and displaying results
-
-The method `ChooseRankAsync` builds the JSON data to send to the Personalizer Rank API by collecting the actions with features and the context features.
-
-```csharp
-private async Task<RankResponse> ChooseRankAsync(ITurnContext turnContext, string eventId, CancellationToken cancellationToken)
-{
- IList<object> contextFeature = new List<object>
- {
- new { weather = _rlFeaturesManager.RLFeatures.Weather.ToString() },
- new { dayofweek = _rlFeaturesManager.RLFeatures.DayOfWeek.ToString() },
- };
-
- Random rand = new Random(DateTime.UtcNow.Millisecond);
- IList<RankableAction> actions = new List<RankableAction>();
- var coffees = Enum.GetValues(typeof(Coffees));
- var beansOrigin = Enum.GetValues(typeof(CoffeeBeansOrigin));
- var organic = Enum.GetValues(typeof(Organic));
- var roast = Enum.GetValues(typeof(CoffeeRoast));
- var teas = Enum.GetValues(typeof(Teas));
-
- foreach (var coffee in coffees)
- {
- actions.Add(new RankableAction
- {
- Id = coffee.ToString(),
- Features =
- new List<object>()
- {
- new { BeansOrigin = beansOrigin.GetValue(rand.Next(0, beansOrigin.Length)).ToString() },
- new { Organic = organic.GetValue(rand.Next(0, organic.Length)).ToString() },
- new { Roast = roast.GetValue(rand.Next(0, roast.Length)).ToString() },
- },
- });
- }
-
- foreach (var tea in teas)
- {
- actions.Add(new RankableAction
- {
- Id = tea.ToString(),
- Features =
- new List<object>()
- {
- new { Organic = organic.GetValue(rand.Next(0, organic.Length)).ToString() },
- },
- });
- }
-
- // Sending a rank request to Personalizer
- // Here we are asking Personalizer to decide which drink the user is most likely to want
- // based on the current context features (weather, day of the week generated in RLContextManager)
- // and the features of the drinks themselves
- var request = new RankRequest(actions, contextFeature, null, eventId);
- await turnContext.SendActivityAsync(
- "===== DEBUG MESSAGE CALL TO RANK =====\n" +
- "This is what is getting sent to Rank:\n" +
- $"{JsonConvert.SerializeObject(request, Formatting.Indented)}\n",
- cancellationToken: cancellationToken);
- var response = await _personalizerClient.RankAsync(request, cancellationToken);
- await turnContext.SendActivityAsync(
- $"===== DEBUG MESSAGE RETURN FROM RANK =====\n" +
- "This is what Rank returned:\n" +
- $"{JsonConvert.SerializeObject(response, Formatting.Indented)}\n",
- cancellationToken: cancellationToken);
- return response;
-}
-```
-
-#### Calling the Reward API and displaying results
-
-The method `RewardAsync` builds the JSON data to send to the Personalizer Reward API by determining the reward score. The score is determined from the LUIS intent identified in the user text and sent from the `OnTurnAsync` method.
-
-```csharp
-private async Task RewardAsync(ITurnContext turnContext, string eventId, double reward, CancellationToken cancellationToken)
-{
- await turnContext.SendActivityAsync(
- "===== DEBUG MESSAGE CALL REWARD =====\n" +
- "Calling Reward:\n" +
- $"eventId = {eventId}, reward = {reward}\n",
- cancellationToken: cancellationToken);
-
- // Sending a reward request to Personalizer
- // Here we are responding to the drink ranking Personalizer provided us
- // If the user liked the highest ranked drink, we give a high reward (1)
- // If they did not, we give a low reward (0)
- await _personalizerClient.RewardAsync(eventId, new RewardRequest(reward), cancellationToken);
-}
-```
-
-## Design considerations for a bot
-
-This sample is meant to demonstrate a simple end-to-end solution of Personalizer in a bot. Your use case may be more complex.
-
-If you intend to use Personalizer in a production bot, plan for:
-* Real-time access to Personalizer _every time_ you need a ranked selection. The Rank API can't be batched or cached. The Reward call can be delayed or offloaded to a separate process, but if you don't return a reward within the reward wait time, a default reward value is set for the event.
-* Use-case based calculation of the reward: This example used only two reward values, zero and one, with no range between them and no negative values. Your system may need more granular scoring; see the sketch after this list.
-* Bot channels: This sample uses a single channel, but if you intend to use more than one channel, or variations of bots on a single channel, that may need to be considered as part of the context features of the Personalizer model.
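-
-As a hedged example of more granular scoring, the hypothetical helper below (not part of the sample) maps several feedback signals onto a reward between 0 and 1 before it is passed to `RewardAsync`. The signal names and weights are assumptions you would replace with your own business logic.
-
-```csharp
-using System;
-
-// Hypothetical reward calculation for a production bot; signals and weights are illustrative.
-public static class RewardCalculator
-{
-    public static double Calculate(bool acceptedSuggestion, bool completedOrder, TimeSpan timeToRespond)
-    {
-        if (!acceptedSuggestion)
-        {
-            return 0; // explicit rejection
-        }
-
-        double reward = 0.5;                                         // accepted the suggestion
-        if (completedOrder) reward += 0.4;                           // business goal met
-        if (timeToRespond < TimeSpan.FromSeconds(10)) reward += 0.1; // responded quickly
-
-        return Math.Min(reward, 1.0);                                // Personalizer expects a score from 0 to 1
-    }
-}
-```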
-
-## Clean up resources
-
-When you are done with this tutorial, clean up the following resources:
-
-* Delete your sample project directory.
-* Delete your Personalizer and LUIS resource in the Azure portal.
-
-## Next steps
-* [How Personalizer works](how-personalizer-works.md)
-* [Features](concepts-features.md): learn about features used with actions and context
-* [Rewards](concept-rewards.md): learn about calculating rewards
cognitive-services Tutorial Use Personalizer Web App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/personalizer/tutorial-use-personalizer-web-app.md
- Title: Use web app - Personalizer
-description: Customize a C# .NET web app with a Personalizer loop to provide the correct content to a user based on actions (with features) and context features.
--
-ms.
--- Previously updated : 06/10/2020--
-# Tutorial: Add Personalizer to a .NET web app
-
-Customize a C# .NET web app with a Personalizer loop to provide the correct content to a user based on actions (with features) and context features.
-
-**In this tutorial, you learn how to:**
-
-<!-- green checkmark -->
-> [!div class="checklist"]
-> * Set up Personalizer key and endpoint
-> * Collect features
-> * Call Rank and Reward APIs
-> * Display top action, designated as _rewardActionId_
---
-## Select the best content for a web app
-
-A web app should use Personalizer when there is a list of _actions_ (some type of content) on the web page that needs to be personalized to a single top item (rewardActionId) to display. Examples of action lists include news articles, button placement locations, and word choices for product names.
-
-You send the list of actions, along with context features, to the Personalizer loop. Personalizer selects the single best action, then your web app displays that action.
-
-In this tutorial, the actions are types of food:
-
-* pasta
-* ice cream
-* juice
-* salad
-* popcorn
-* coffee
-* soup
-
-To help Personalizer learn about your actions, send both _actions with features_ and _context features_ with each Rank API request.
-
-A **feature** of the model is information about the action or context that can be aggregated (grouped) across members of your web app user base. A feature _isn't_ individually specific (such as a user ID) or highly specific (such as an exact time of day).
-
-### Actions with features
-
-Each action (content item) has features to help distinguish the food item.
-
-The features aren't configured as part of the loop configuration in the Azure portal. Instead they are sent as a JSON object with each Rank API call. This allows flexibility for the actions and their features to grow, change, and shrink over time, which allows Personalizer to follow trends.
-
-```csharp
- /// <summary>
- /// Creates personalizer actions feature list.
- /// </summary>
- /// <returns>List of actions for personalizer.</returns>
- private IList<RankableAction> GetActions()
- {
- IList<RankableAction> actions = new List<RankableAction>
- {
- new RankableAction
- {
- Id = "pasta",
- Features =
- new List<object>() { new { taste = "savory", spiceLevel = "medium" }, new { nutritionLevel = 5, cuisine = "italian" } }
- },
-
- new RankableAction
- {
- Id = "ice cream",
- Features =
- new List<object>() { new { taste = "sweet", spiceLevel = "none" }, new { nutritionalLevel = 2 } }
- },
-
- new RankableAction
- {
- Id = "juice",
- Features =
- new List<object>() { new { taste = "sweet", spiceLevel = "none" }, new { nutritionLevel = 5 }, new { drink = true } }
- },
-
- new RankableAction
- {
- Id = "salad",
- Features =
- new List<object>() { new { taste = "sour", spiceLevel = "low" }, new { nutritionLevel = 8 } }
- },
-
- new RankableAction
- {
- Id = "popcorn",
- Features =
- new List<object>() { new { taste = "salty", spiceLevel = "none" }, new { nutritionLevel = 3 } }
- },
-
- new RankableAction
- {
- Id = "coffee",
- Features =
- new List<object>() { new { taste = "bitter", spiceLevel = "none" }, new { nutritionLevel = 3 }, new { drink = true } }
- },
-
- new RankableAction
- {
- Id = "soup",
- Features =
- new List<object>() { new { taste = "sour", spiceLevel = "high" }, new { nutritionLevel = 7} }
- }
- };
-
- return actions;
- }
-```
--
-## Context features
-
-Context features help Personalizer understand the context of the actions. The context for this sample application includes:
-
-* time of day - morning, afternoon, evening, night
-* user's preference for taste - salty, sweet, bitter, sour, or savory
-* browser's context - user agent, geographical location, referrer
-
-```csharp
-/// <summary>
-/// Get users time of the day context.
-/// </summary>
-/// <returns>Time of day feature selected by the user.</returns>
-private string GetUsersTimeOfDay()
-{
- Random rnd = new Random();
- string[] timeOfDayFeatures = new string[] { "morning", "noon", "afternoon", "evening", "night", "midnight" };
- int timeIndex = rnd.Next(timeOfDayFeatures.Length);
- return timeOfDayFeatures[timeIndex];
-}
-
-/// <summary>
-/// Gets user food preference.
-/// </summary>
-/// <returns>Food taste feature selected by the user.</returns>
-private string GetUsersTastePreference()
-{
- Random rnd = new Random();
- string[] tasteFeatures = new string[] { "salty", "bitter", "sour", "savory", "sweet" };
- int tasteIndex = rnd.Next(tasteFeatures.Length);
- return tasteFeatures[tasteIndex];
-}
-```
-
-## How does the web app use Personalizer?
-
-The web app uses Personalizer to select the best action from the list of food choices. It does this by sending the following information with each Rank API call:
-* **actions** with their features such as `taste` and `spiceLevel`
-* **context** features such as the `time` of day, the user's `taste` preference, and the browser's user agent information
-* **actions to exclude** such as juice
-* **eventId**, which is different for each call to Rank API.
-
-## Personalizer model features in a web app
-
-Personalizer needs features for the actions (content) and the current context (user and environment). Features are used to align actions to the current context in the model. The model is a representation of Personalizer's past knowledge about actions, context, and their features that allows it to make educated decisions.
-
-The model, including features, is updated on a schedule based on your **Model update frequency** setting in the Azure portal.
-
-> [!CAUTION]
-> Features in this application are meant to illustrate features and feature values, but are not necessarily the best features to use in a web app.
-
-### Plan for features and their values
-
-Features should be selected with the same planning and design that you would apply to any schema or model in your technical architecture. The feature values can be set with business logic or third-party systems. Feature values should not be so highly specific that they don't apply across a group or class of features.
-
-### Generalize feature values
-
-#### Generalize into categories
-
-This app uses `time` as a feature but groups time into categories such as `morning`, `afternoon`, `evening`, and `night`. That is an example of using the information of time but not in a highly specific way, such as `10:05:01 UTC+2`.
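-
-For example, here is a minimal sketch (assuming the server clock is an acceptable source) of bucketing an exact hour into the categories this app uses:
-
-```csharp
-using System;
-
-public static class TimeOfDayFeature
-{
-    // Generalize an exact timestamp into a coarse category that applies across many users.
-    public static string FromHour(int hour)
-    {
-        if (hour >= 5 && hour < 12) return "morning";
-        if (hour >= 12 && hour < 17) return "afternoon";
-        if (hour >= 17 && hour < 22) return "evening";
-        return "night";
-    }
-}
-
-// Usage: string timeOfDay = TimeOfDayFeature.FromHour(DateTime.Now.Hour);
-```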
-
-#### Generalize into parts
-
-This app uses the HTTP Request features from the browser. This starts with a very specific string with all the data, for example:
-
-```http
-Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/530.99 (KHTML, like Gecko) Chrome/80.0.3900.140 Safari/537.36
-```
-
-The **HttpRequestFeatures** class library generalizes this string into a **userAgentInfo** object with individual values. Any values that are too specific are set to an empty string. When the context features for the request are sent, they use the following JSON format:
-
-```JSON
-{
- "httpRequestFeatures": {
- "_synthetic": false,
- "OUserAgent": {
- "_ua": "",
- "_DeviceBrand": "",
- "_DeviceFamily": "Other",
- "_DeviceIsSpider": false,
- "_DeviceModel": "",
- "_OSFamily": "Windows",
- "_OSMajor": "10",
- "DeviceType": "Desktop"
- }
- }
-}
-```
--
-## Using sample web app
-
-The sample browser-based web app (all code is provided) needs the following software installed to run.
-
-Install the following software:
-
-* [.NET Core 2.1](https://dotnet.microsoft.com/download/dotnet-core/2.1) - the sample back-end server uses .NET core
-* [Node.js](https://nodejs.org/) - the client/front end depends on this application
-* [Visual Studio 2019](https://visualstudio.microsoft.com/vs/), or [.NET Core CLI](/dotnet/core/tools/) - use either the developer environment of Visual Studio 2019 or the .NET Core CLI to build and run the app
-
-### Set up the sample
-1. Clone the Azure Personalizer Samples repo.
-
- ```bash
- git clone https://github.com/Azure-Samples/cognitive-services-personalizer-samples.git
- ```
-
-1. Navigate to _samples/HttpRequestFeatures_ to open the solution, `HttpRequestFeaturesExample.sln`.
-
- If requested, allow Visual Studio to update the .NET package for Personalizer.
-
-### Set up Azure Personalizer Service
-
-1. [Create a Personalizer resource](https://portal.azure.com/#create/Microsoft.CognitiveServicesPersonalizer) in the Azure portal.
-
-1. In the Azure portal, find the `Endpoint` and either `Key1` or `Key2` (either works) in the **Keys and Endpoints** tab. These are your `PersonalizerServiceEndpoint` and your `PersonalizerApiKey`.
-1. Fill in the `PersonalizerServiceEndpoint` in **appsettings.json**.
-1. Configure the `PersonalizerApiKey` as an [app secret](/aspnet/core/security/app-secrets) in one of the following ways:
-
- * If you are using the .NET Core CLI, you can use the `dotnet user-secrets set "PersonalizerApiKey" "<API Key>"` command.
- * If you are using Visual Studio, you can right-click the project and select the **Manage User Secrets** menu option to configure the Personalizer keys. By doing this, Visual Studio will open a `secrets.json` file where you can add the keys as follows:
-
- ```JSON
- {
- "PersonalizerApiKey": "<your personalizer key here>",
- }
- ```
-
-## Run the sample
-
-Build and run HttpRequestFeaturesExample with one of the following methods:
-
-* Visual Studio 2019: Press **F5**
-* .NET Core CLI: `dotnet build` then `dotnet run`
-
-Through a web browser, you can send a Rank request and a Reward request and see their responses, as well as the http request features extracted from your environment.
-
-> [!div class="mx-imgBorder"]
-> ![Screenshot shows an example of the Http Request Feature in a web browser.](./media/tutorial-web-app/web-app-single-page.png)
-
-## Demonstrate the Personalizer loop
-
-1. Select the **Generate new Rank Request** button to create a new JSON object for the Rank API call. This creates the actions (with features) and context features and displays the values so you can see what the JSON looks like.
-
- For your own future application, generation of actions and features may happen on the client, on the server, a mix of the two, or with calls to other services.
-
-1. Select **Send Rank Request** to send the JSON object to the server. The server calls the Personalizer Rank API. The server receives the response and returns the top ranked action to the client to display.
-
-1. Set the reward value, then select the **Send Reward Request** button. If you don't change the reward value, the client application always sends the value of `1` to Personalizer.
-
- > [!div class="mx-imgBorder"]
- > ![Screenshot shows the Reward Request section.](./media/tutorial-web-app/reward-score-api-call.png)
-
- For your own future application, generation of the reward score may happen after collecting information from the user's behavior on the client, along with business logic on the server.
-
-## Understand the sample web app
-
-The sample web app has a **C# .NET** server, which manages the collection of features and sending and receiving HTTP calls to your Personalizer endpoint.
-
-The sample web app uses a **knockout front-end client application** to capture features and process user interface actions such as clicking on buttons, and sending data to the .NET server.
-
-The following sections explain the parts of the server and client that a developer needs to understand to use Personalizer.
-
-## Rank API: Client application sends context to server
-
-The client application collects the user's browser _user agent_.
-
-> [!div class="mx-imgBorder"]
-> ![Build and run the HTTPRequestFeaturesExample project. A browser window opens to display the single page application.](./media/tutorial-web-app/user-agent.png)
-
-## Rank API: Server application calls Personalizer
-
-This is a typical .NET web app with a client application; much of the boilerplate code is provided for you. Any code not specific to Personalizer is removed from the following code snippets so you can focus on the Personalizer-specific code.
-
-### Create Personalizer client
-
-In the server's **Startup.cs**, the Personalizer endpoint and key are used to create the Personalizer client. The client application doesn't need to communicate with Personalizer in this app, instead relying on the server to make those SDK calls.
-
-The web server's .NET start-up code is:
-
-```csharp
-using Microsoft.Azure.CognitiveServices.Personalizer;
-// ... other using statements removed for brevity
-
-namespace HttpRequestFeaturesExample
-{
- public class Startup
- {
- public Startup(IConfiguration configuration)
- {
- Configuration = configuration;
- }
-
- public IConfiguration Configuration { get; }
-
- // This method gets called by the runtime. Use this method to add services to the container.
- public void ConfigureServices(IServiceCollection services)
- {
- string personalizerApiKey = Configuration.GetSection("PersonalizerApiKey").Value;
- string personalizerEndpoint = Configuration.GetSection("PersonalizerConfiguration:ServiceEndpoint").Value;
- if (string.IsNullOrEmpty(personalizerEndpoint) || string.IsNullOrEmpty(personalizerApiKey))
- {
- throw new ArgumentException("Missing Azure Personalizer endpoint and/or api key.");
- }
- services.AddSingleton(client =>
- {
- return new PersonalizerClient(new ApiKeyServiceClientCredentials(personalizerApiKey))
- {
- Endpoint = personalizerEndpoint
- };
- });
-
- services.AddMvc();
- }
-
- // ... code removed for brevity
- }
-}
-```
-
-### Select best action
-
-In the server's **PersonalizerController.cs**, the **GenerateRank** server API summarizes the preparation needed to call the Rank API:
-
-* Create new `eventId` for the Rank call
-* Get the list of actions
-* Get the list of features from the user and create context features
-* Optionally, set any excluded actions
-* Call Rank API, return results to client
-
-```csharp
-/// <summary>
-/// Creates a RankRequest with user time of day, HTTP request features,
-/// and taste as the context and several different foods as the actions
-/// </summary>
-/// <returns>RankRequest with user info</returns>
-[HttpGet("GenerateRank")]
-public RankRequest GenerateRank()
-{
- string eventId = Guid.NewGuid().ToString();
-
- // Get the actions list to choose from personalizer with their features.
- IList<RankableAction> actions = GetActions();
-
- // Get context information from the user.
- HttpRequestFeatures httpRequestFeatures = GetHttpRequestFeaturesFromRequest(Request);
- string timeOfDayFeature = GetUsersTimeOfDay();
- string tasteFeature = GetUsersTastePreference();
-
- // Create current context from user specified data.
- IList<object> currentContext = new List<object>() {
- new { time = timeOfDayFeature },
- new { taste = tasteFeature },
- new { httpRequestFeatures }
- };
-
- // Exclude an action for personalizer ranking. This action will be held at its current position.
- IList<string> excludeActions = new List<string> { "juice" };
-
- // Rank the actions
- return new RankRequest(actions, currentContext, excludeActions, eventId);
-}
-```
-
-The JSON sent to Personalizer, containing both actions (with features) and the current context features, looks like:
-
-```json
-{
- "contextFeatures": [
- {
- "time": "morning"
- },
- {
- "taste": "savory"
- },
- {
- "httpRequestFeatures": {
- "_synthetic": false,
- "MRefer": {
- "referer": "http://localhost:51840/"
- },
- "OUserAgent": {
- "_ua": "",
- "_DeviceBrand": "",
- "_DeviceFamily": "Other",
- "_DeviceIsSpider": false,
- "_DeviceModel": "",
- "_OSFamily": "Windows",
- "_OSMajor": "10",
- "DeviceType": "Desktop"
- }
- }
- }
- ],
- "actions": [
- {
- "id": "pasta",
- "features": [
- {
- "taste": "savory",
- "spiceLevel": "medium"
- },
- {
- "nutritionLevel": 5,
- "cuisine": "italian"
- }
- ]
- },
- {
- "id": "ice cream",
- "features": [
- {
- "taste": "sweet",
- "spiceLevel": "none"
- },
- {
- "nutritionalLevel": 2
- }
- ]
- },
- {
- "id": "juice",
- "features": [
- {
- "taste": "sweet",
- "spiceLevel": "none"
- },
- {
- "nutritionLevel": 5
- },
- {
- "drink": true
- }
- ]
- },
- {
- "id": "salad",
- "features": [
- {
- "taste": "sour",
- "spiceLevel": "low"
- },
- {
- "nutritionLevel": 8
- }
- ]
- },
- {
- "id": "popcorn",
- "features": [
- {
- "taste": "salty",
- "spiceLevel": "none"
- },
- {
- "nutritionLevel": 3
- }
- ]
- },
- {
- "id": "coffee",
- "features": [
- {
- "taste": "bitter",
- "spiceLevel": "none"
- },
- {
- "nutritionLevel": 3
- },
- {
- "drink": true
- }
- ]
- },
- {
- "id": "soup",
- "features": [
- {
- "taste": "sour",
- "spiceLevel": "high"
- },
- {
- "nutritionLevel": 7
- }
- ]
- }
- ],
- "excludedActions": [
- "juice"
- ],
- "eventId": "82ac52da-4077-4c7d-b14e-190530578e75",
- "deferActivation": null
-}
-```
-
-### Return Personalizer rewardActionId to client
-
-The Rank API returns the selected best action **rewardActionId** to the server.
-
-Display the action returned in **rewardActionId**.
-
-```json
-{
- "ranking": [
- {
- "id": "popcorn",
- "probability": 0.833333254
- },
- {
- "id": "salad",
- "probability": 0.03333333
- },
- {
- "id": "juice",
- "probability": 0
- },
- {
- "id": "soup",
- "probability": 0.03333333
- },
- {
- "id": "coffee",
- "probability": 0.03333333
- },
- {
- "id": "pasta",
- "probability": 0.03333333
- },
- {
- "id": "ice cream",
- "probability": 0.03333333
- }
- ],
- "eventId": "82ac52da-4077-4c7d-b14e-190530578e75",
- "rewardActionId": "popcorn"
-}
-```
-
-### Client displays the rewardActionId action
-
-In this tutorial, the `rewardActionId` value is displayed.
-
-In your own future application, that may be some exact text, a button, or a section of the web page highlighted. The list is returned for any post-analysis of scores, not an ordering of the content. Only the `rewardActionId` content should be displayed.
-
-## Reward API: collect information for reward
-
-The [reward score](concept-rewards.md) should be carefully planned, just as the features are planned. The reward score typically should be a value from 0 to 1. The value _can_ be calculated partially in the client application, based on user behaviors, and partially on the server, based on business logic and goals.
-
-If the server doesn't call the Reward API within the **Reward wait time** configured in the Azure portal for your Personalizer resource, then the **Default reward** (also configured in the Azure portal) is used for that event.
-
-In this sample application, you can select a value to see how the reward impacts the selections.
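-
-Here is a hedged sketch of how a server-side reward might be assembled in this kind of app (the sample itself simply forwards the value you pick in the browser). The `clientBehaviorScore` and `purchaseCompleted` signals are assumptions, not part of the sample; the `RewardAsync` call uses the same SDK client shown earlier.
-
-```csharp
-using System;
-using System.Threading.Tasks;
-using Microsoft.Azure.CognitiveServices.Personalizer;
-using Microsoft.Azure.CognitiveServices.Personalizer.Models;
-
-// Hypothetical helper: combine a client-reported behavior score with server-side business
-// logic and send it before the reward wait time expires (otherwise the default reward is used).
-public class RewardSender
-{
-    private readonly PersonalizerClient _client;
-
-    public RewardSender(PersonalizerClient client) => _client = client;
-
-    public async Task SendAsync(string eventId, double clientBehaviorScore, bool purchaseCompleted)
-    {
-        // clientBehaviorScore (0..1) might reflect clicks or dwell time reported by the browser.
-        double reward = Math.Min(0.7 * clientBehaviorScore + (purchaseCompleted ? 0.3 : 0.0), 1.0);
-        await _client.RewardAsync(eventId, new RewardRequest(reward));
-    }
-}
-```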
-
-## Additional ways to learn from this sample
-
-The sample uses several time-based events configured in the Azure portal for your Personalizer resource. Play with those values then return to this sample web app to see how the changes impact the Rank and Reward calls:
-
-* Reward wait time
-* Model update frequency
-
-Additional settings to play with include:
-* Default reward
-* Exploration percentage
--
-## Clean up resources
-
-When you are done with this tutorial, clean up the following resources:
-
-* Delete your sample project directory.
-* Delete your Personalizer resource. A Personalizer resource is dedicated to a specific set of actions and context, so reuse it only if you are still using food items as the actions.
--
-## Next steps
-* [How Personalizer works](how-personalizer-works.md)
-* [Features](concepts-features.md): learn about features used with actions and context
-* [Rewards](concept-rewards.md): learn about calculating rewards
cognitive-services What Is Personalizer https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/personalizer/what-is-personalizer.md
- Title: What is Personalizer?
-description: Personalizer is a cloud-based service that allows you to choose the best experience to show to your users, learning from their real-time behavior.
--
-ms.
--- Previously updated : 11/17/2022-
-keywords: personalizer, Azure personalizer, machine learning
--
-# What is Personalizer?
-
-Azure Personalizer is an AI service that helps your applications make smarter decisions at scale using **reinforcement learning**. Personalizer processes information about the state of your application, scenario, and/or users (*contexts*), and a set of possible decisions and related attributes (*actions*), to determine the best decision to make. Feedback from your application (*rewards*) is sent to Personalizer to learn how to improve its decision-making ability in near-real time.
-
-Personalizer can determine the best actions to take in a variety of scenarios:
-* E-commerce: What product should be shown to customers to maximize the likelihood of a purchase?
-* Content recommendation: What article should be shown to increase the click-through rate?
-* Content design: Where should an advertisement be placed to optimize user engagement on a website?
-* Communication: When and how should a notification be sent to maximize the chance of a response?
-
-To get started with Personalizer, follow the [**quickstart guide**](quickstart-personalizer-sdk.md), or try Personalizer in your browser with this [interactive demo](https://personalizerdevdemo.azurewebsites.net/).
-
-This documentation contains the following types of articles:
-
-* [**Quickstarts**](quickstart-personalizer-sdk.md) provide step-by-step instructions to guide you through setup and sample code to start making API requests to the service.
-* [**How-to guides**](how-to-settings.md) contain instructions for using Personalizer features and advanced capabilities.
-* [**Code samples**](https://github.com/Azure-Samples/cognitive-services-personalizer-samples) demonstrate how to use Personalizer and help you to easily interface your application with the service.
-* [**Tutorials**](tutorial-use-personalizer-web-app.md) are longer walk-throughs implementing Personalizer as a part of a broader business solution.
-* [**Concepts**](how-personalizer-works.md) provide further detail on Personalizer features, capabilities, and fundamentals.
--
-## How does Personalizer work?
-
-Personalizer uses reinforcement learning to select the best *action* for a given *context* across all users in order to maximize an average *reward*.
-* **Context**: Information that describes the state of your application, scenario, or user that may be relevant to making a decision.
- * Example: The location, device type, age, and favorite topics of users visiting a web site.
-* **Actions**: A discrete set of items that can be chosen, along with attributes describing each item.
- * Example: A set of news articles and the topics that are discussed in each article.
-* **Reward**: A numerical score between 0 and 1 that indicates whether the decision was *bad* (0), or *good* (1)
- * Example: A "1" indicates that a user clicked on the suggested article, whereas a "0" indicates the user did not.
-
-### Rank and Reward APIs
-
-Personalizer empowers you to take advantage of the power and flexibility of reinforcement learning using just two primary APIs.
-
-The **Rank** [API](https://westus2.dev.cognitive.microsoft.com/docs/services/personalizer-api/operations/Rank) is called by your application each time there's a decision to be made. The application sends a JSON containing a set of actions, features that describe each action, and features that describe the current context. Each Rank API call is known as an **event** and noted with a unique _event ID_. Personalizer then returns the ID of the best action that maximizes the total average reward as determined by the underlying model.
-
-The **Reward** [API](https://westus2.dev.cognitive.microsoft.com/docs/services/personalizer-api/operations/Reward) is called by your application whenever there's feedback that can help Personalizer learn whether the action ID returned in the *Rank* call provided value. For example, a user clicked on the suggested news article or completed the purchase of a suggested product. A call to the Reward API can be made in real time (just after the Rank call) or delayed to better fit the needs of the scenario. The reward score is determined by your business metrics and objectives and can be generated by an algorithm or rules in your application. The score is a real-valued number between 0 and 1.
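-
-As a rough sketch of that two-call pattern with the .NET SDK used in the tutorials (the action IDs, features, and reward value here are illustrative, not a complete application):
-
-```csharp
-using System;
-using System.Collections.Generic;
-using System.Threading.Tasks;
-using Microsoft.Azure.CognitiveServices.Personalizer;
-using Microsoft.Azure.CognitiveServices.Personalizer.Models;
-
-public static class RankRewardSketch
-{
-    public static async Task RunAsync(PersonalizerClient client)
-    {
-        // Each decision is an event, identified by an event ID.
-        string eventId = Guid.NewGuid().ToString();
-
-        var actions = new List<RankableAction>
-        {
-            new RankableAction { Id = "sports-article", Features = new List<object> { new { topic = "sports" } } },
-            new RankableAction { Id = "politics-article", Features = new List<object> { new { topic = "politics" } } },
-        };
-        var context = new List<object> { new { device = "mobile", timeOfDay = "morning" } };
-
-        // Rank: ask Personalizer which action to show for this context.
-        RankResponse response = await client.RankAsync(new RankRequest(actions, context, null, eventId));
-        Console.WriteLine($"Show: {response.RewardActionId}");
-
-        // Reward: report how well the decision worked (here, 1 means the user clicked the suggestion).
-        await client.RewardAsync(eventId, new RewardRequest(1));
-    }
-}
-```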
-
-### Learning modes
-
-* **[Apprentice mode](concept-apprentice-mode.md)** Similar to how an apprentice learns a craft from observing an expert, Apprentice mode enables Personalizer to learn by observing your application's current decision logic. This helps to mitigate the so-called "cold start" problem with a new untrained model, and allows you to validate the action and context features that are sent to Personalizer. In Apprentice mode, each call to the Rank API returns the _baseline action_ or _default action_, which is the action that the application would have taken without using Personalizer. This is sent by your application to Personalizer in the Rank API as the first item in the set of possible actions.
-
-* **Online mode** Personalizer will return the best action, given the context, as determined by the underlying RL model and explores other possible actions that may improve performance. Personalizer learns from feedback provided in calls to the Reward API.
-
-Note that Personalizer uses collective information across all users to learn the best actions based on the current context. The service does not:
-* Persist and manage user profile information. Unique user IDs should not be sent to Personalizer.
-* Log individual users' preferences or historical data.
--
-## Example scenarios
-
-Here are a few examples where Personalizer can be used to select the best content to render for a user.
-
-|Content type|Actions {features}|Context features|Returned Reward Action ID<br>(display this content)|
-|--|--|--|--|
-|News articles|a. `The president...`, {national, politics, [text]}<br>b. `Premier League ...` {global, sports, [text, image, video]}<br> c. `Hurricane in the ...` {regional, weather, [text,image]}|Country='USA',<br>Recent_Topics=('politics', 'business'),<br>Month='October'<br>|a `The president...`|
-|Movies|1. `Star Wars` {1977, [action, adventure, fantasy], George Lucas}<br>2. `Hoop Dreams` {1994, [documentary, sports], Steve James}<br>3. `Casablanca` {1942, [romance, drama, war], Michael Curtiz}|Device='smart TV',<br>Screen_Size='large',<br>Favorite_Genre='classics'<br>|3. `Casablanca`|
-|E-commerce Products|i. `Product A` {3 kg, $$$$, deliver in 1 day}<br>ii. `Product B` {20 kg, $$, deliver in 7 days}<br>iii. `Product C` {3 kg, $$$, deliver in 2 days}| Device='iPhone',<br>Spending_Tier='low',<br>Month='June'|ii. `Product B`|
--
-## Scenario requirements
-
-Use Personalizer when your scenario has:
-
-* A limited set of actions or items to select from in each personalization event. We recommend no more than ~50 actions in each Rank API call. If you have a larger set of possible actions, we suggest [using a recommendation engine](where-can-you-use-personalizer.md#how-to-use-personalizer-with-a-recommendation-solution) or another mechanism to reduce the list of actions prior to calling the Rank API.
-* Information describing the actions (_action features_).
-* Information describing the current context (_contextual features_).
-* Sufficient data volume to enable Personalizer to learn. In general, we recommend a minimum of ~1,000 events per day to enable Personalizer to learn effectively. If Personalizer doesn't receive sufficient data, the service takes longer to determine the best actions.
-
-## Responsible use of AI
-
-At Microsoft, we're committed to the advancement of AI driven by principles that put people first. AI models such as the ones available in the Personalizer service have significant potential benefits,
-but without careful design and thoughtful mitigations, such models have the potential to generate incorrect or even harmful content. Microsoft has made significant investments to help guard against abuse and unintended harm, incorporating [Microsoft's principles for responsible AI use](https://www.microsoft.com/ai/responsible-ai), building content filters to support customers, and providing responsible AI implementation guidance to onboarded customers. See the [Responsible AI docs for Personalizer](responsible-use-cases.md).
-
-## Integrate Personalizer into an application
-
-1. [Design](concepts-features.md) and plan the **_actions_** and **_context_**. Determine how to interpret feedback as a **_reward_** score.
-1. Each [Personalizer Resource](how-to-settings.md) you create is defined as one _Learning Loop_. The loop will receive both the Rank and Reward calls for that content or user experience and train an underlying RL model. There are several resource types available:
-
- |Resource type| Purpose|
- |--|--|
- |[Apprentice mode](concept-apprentice-mode.md) - `E0`| Train Personalizer to mimic your current decision-making logic without impacting your existing application, before using _Online mode_ to learn better policies in a production environment.|
- |_Online mode_ - Standard, `S0`| Personalizer uses RL to determine best actions in production.|
- |_Online mode_ - Free, `F0`| Try Personalizer in a limited non-production environment.|
-
-1. Add Personalizer to your application, website, or system:
- 1. Add a **Rank** call to Personalizer in your application, website, or system to determine the best action.
- 1. Use the best action, returned as a _reward action ID_, in your scenario.
- 1. Apply _business logic_ to user behavior or feedback data to determine the **reward** score (a minimal sketch of this mapping follows these steps). For example:
-
- |Behavior|Calculated reward score|
- |--|--|
- |User selected a news article suggested by Personalizer |**1**|
- |User selected a news article _not_ suggested by Personalizer |**0**|
- |User hesitated to select a news article, scrolled around indecisively, and ultimately selected the news article suggested by Personalizer |**0.5**|
-
- 1. Add a **Reward** call sending a reward score between 0 and 1
- * Immediately after feedback is received.
- * Or sometime later in scenarios where delayed feedback is expected.
- 1. Evaluate your loop with an [offline evaluation](concepts-offline-evaluation.md) after a period of time when Personalizer has received significant data to make online decisions. An offline evaluation allows you to test and assess the effectiveness of the Personalizer Service without code changes or user impact.
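As an illustration, the reward mapping in the table above could be expressed as a small function like the one below. The input signals (which article the user selected and whether they hesitated) are assumptions about what your application already tracks.

```python
def compute_reward(selected_id: str, suggested_id: str, hesitated: bool) -> float:
    """Map observed user behavior to a reward score between 0 and 1 (mirrors the table above)."""
    if selected_id != suggested_id:
        return 0.0   # user selected an article Personalizer didn't suggest
    if hesitated:
        return 0.5   # user eventually selected the suggestion, but indecisively
    return 1.0       # user promptly selected the suggested article
```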
--
-## Next steps
-
-> [!div class="nextstepaction"]
-> [Personalizer quickstart](quickstart-personalizer-sdk.md)
-
-* [How Personalizer works](how-personalizer-works.md)
-* [What is Reinforcement Learning?](concepts-reinforcement-learning.md)
cognitive-services Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/personalizer/whats-new.md
- Title: What's new - Personalizer-
-description: This article contains news about Personalizer.
--
-ms.
--- Previously updated : 05/28/2021-
-# What's new in Personalizer
-
-Learn what's new in Azure Personalizer. These items may include release notes, videos, blog posts, and other types of information. Bookmark this page to keep up-to-date with the service.
-
-## Release notes
-
-### September 2022
-* Personalizer Inference Explainability is now available as a Public Preview. Enabling inference explainability returns feature scores on every Rank API call, providing insight into how influential each feature is to the actions chosen by your Personalizer model. [Learn more about Inference Explainability](how-to-inference-explainability.md).
-* Personalizer SDK now available in [Java](https://search.maven.org/artifact/com.azure/azure-ai-personalizer/1.0.0-beta.1/jar) and [JavaScript](https://www.npmjs.com/package/@azure-rest/ai-personalizer).
-
-### April 2022
-* Local inference SDK (Preview): Personalizer now supports near-realtime (sub-10ms) inference without the need to wait for network API calls. Your Personalizer models can be used locally for lightning fast Rank calls using the [C# SDK (Preview)](https://www.nuget.org/packages/Azure.AI.Personalizer/2.0.0-beta.2), empowering your applications to personalize quickly and efficiently. Your model continues to train in Azure while your local model is seamlessly updated.
-
-### May 2021 - //Build conference
-
-* Auto-Optimize (Preview): You can configure a Personalizer loop that you're using to continuously improve over time with less work. Personalizer will automatically run offline evaluations, discover better machine learning settings, and apply them. To learn more, see [Personalizer Auto-Optimize (Preview)](concept-auto-optimization.md).
-* Multi-slot personalization (Preview): If you have tiled layouts, carousels, and/or sidebars, it's easier to use Personalizer for each place where you recommend products or content on the same page. Personalizer can now take a list of slots in the Rank API, assign an action to each, and learn from the reward scores you send for each slot. To learn more, see [Multi-slot personalization (Preview)](concept-multi-slot-personalization.md).
-* Personalizer is now available in more regions.
-* Updated code samples (GitHub) and documentation. Use the links below to view updated samples:
- * [C#/.NET](https://github.com/Azure-Samples/cognitive-services-quickstart-code/tree/master/dotnet/Personalizer)
- * [JavaScript](https://github.com/Azure-Samples/cognitive-services-quickstart-code/tree/master/javascript/Personalizer)
- * [Python samples](https://github.com/Azure-Samples/cognitive-services-quickstart-code/tree/master/python/Personalizer)
-
-### July 2020
-
-* New tutorial - [using Personalizer in a chat bot](tutorial-use-personalizer-chat-bot.md)
-
-### June 2020
-
-* New tutorial - [using Personalizer in a web app](tutorial-use-personalizer-web-app.md)
-
-### May 2020 - //Build conference
-
-The following are available in **Public Preview**:
-
- * [Apprentice mode](concept-apprentice-mode.md) as a learning behavior.
-
-### March 2020
-
-* TLS 1.2 is now enforced for all HTTP requests to this service. For more information, see [Azure Cognitive Services security](../cognitive-services-security.md).
-
-### November 2019 - Ignite conference
-
-* Personalizer is generally available (GA)
-* Azure Notebooks [tutorial](tutorial-use-azure-notebook-generate-loop-data.md) with entire lifecycle
-
-### May 2019 - //Build conference
-
-The following preview features were released at the Build 2019 Conference:
-
-* [Rank and reward learning loop](what-is-personalizer.md)
-
-## Videos
-
-### 2019 Build videos
-
-* [Deliver the Right Experiences & Content like Xbox with Cognitive Services Personalizer](https://azure.microsoft.com/resources/videos/build-2019-deliver-the-right-experiences-and-content-with-cognitive-services-personalizer/)
-
-## Service updates
-
-[Azure update announcements for Cognitive Services](https://azure.microsoft.com/updates/?product=cognitive-services)
-
-## Next steps
-
-* [Quickstart: Create a feedback loop in C#](./quickstart-personalizer-sdk.md?pivots=programming-language-csharp)
-* [Use the interactive demo](https://personalizerdevdemo.azurewebsites.net/)
cognitive-services Where Can You Use Personalizer https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/personalizer/where-can-you-use-personalizer.md
- Title: Where and how to use - Personalizer
-description: Personalizer can be applied in any situation where your application can select the right item, action, or product to display - in order to make the experience better, achieve better business results, or improve productivity.
--
-ms.
--- Previously updated : 02/18/2020-
-# Where and how to use Personalizer
-
-Use Personalizer in any situation where your application needs to select the correct action (content) to display - in order to make the experience better, achieve better business results, or improve productivity.
-
-Personalizer uses reinforcement learning to select which action (content) to show the user. The selection can vary drastically depending on the quantity, quality, and distribution of data sent to the service.
-
-## Example use cases for Personalizer
-
-* **Intent clarification & disambiguation**: help your users have a better experience when their intent is not clear by providing an option that is personalized.
-* **Default suggestions** for menus & options: have the bot suggest the most likely item in a personalized way as a first step, instead of presenting an impersonal menu or list of alternatives.
-* **Bot traits & tone**: for bots that can vary tone, verbosity, and writing style, consider varying these traits.
-* **Notification & alert content**: decide what text to use for alerts in order to engage users more.
-* **Notification & alert timing**: have personalized learning of when to send notifications to users to engage them more.
--
-## Expectations required to use Personalizer
-
-You can apply Personalizer in situations where you meet or can implement the following guidelines.
-
-|Guideline|Explanation|
-|--|--|
-|Business goal|You have a business or usability goal for your application.|
-|Content|You have a place in your application where making a contextual decision of what to show to users will improve that goal.|
-|Content quantity|You have fewer than 50 actions to rank per call.|
-|Aggregate data|The best choice can and should be learned from collective user behavior and total reward score.|
-|Ethical use|The use of machine learning for personalization follows [responsible use guidelines](ethics-responsible-use.md) and the choices you have made.|
-|Best single option|The contextual decision can be expressed as ranking the best option (action) from a limited set of choices.|
-|Scored result|How well the ranked choice worked for your application can be determined by measuring some aspect of user behavior, and expressing it in a _[reward score](concept-rewards.md)_.|
-|Relevant timing|The reward score doesn't bring in too many confounding or external factors. The experiment duration is short enough that the reward score can be computed while it's still relevant.|
-|Sufficient context features|You can express the context for the rank as a list of at least 5 [features](concepts-features.md) that you think would help make the right choice, and that doesn't include user-specific identifiable information.|
-|Sufficient action features|You have information about each content choice, _action_, as a list of at least 5 [features](concepts-features.md) that you think will help Personalizer make the right choice.|
-|Daily data|There are enough events to stay on top of optimal personalization if the problem drifts over time (such as preferences in news or fashion). Personalizer will adapt to continuous change in the real world, but results won't be optimal if there aren't enough events and data to learn from to discover and settle on new patterns. You should choose a use case that happens often enough. Consider looking for use cases that happen at least 500 times per day.|
-|Historical data|Your application can retain data for long enough to accumulate a history of at least 100,000 interactions. This allows Personalizer to collect enough data to perform offline evaluations and policy optimization.|
-
-**Don't use Personalizer** where the personalized behavior isn't something that can be discovered across all users. For example, using Personalizer to suggest a first pizza order from a list of 20 possible menu items is useful, but which contact to call from the users' contact list when requiring help with childcare (such as "Grandma") is not something that is personalizable across your user base.
-
-## How to use Personalizer in a web application
-
-Adding a learning loop to a web application includes the following steps (a minimal sketch follows the list):
-
-1. Determine which experience to personalize, what actions and features you have, what context features to use, and what reward you'll set.
-1. Add a reference to the Personalizer SDK in your application.
-1. Call the Rank API when you're ready to personalize.
-1. Store the eventId. You send a reward with the Reward API later.
-1. Call Activate for the event once you're sure the user has seen your personalized page.
-1. Wait for the user to select from the ranked content.
-1. Call the Reward API to specify how well the output of the Rank API did.
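A minimal Python sketch of this loop is shown below. It assumes the v1.0 REST paths for Rank, Activate, and Reward, a `deferActivation` flag on the Rank request, and placeholder endpoint, key, action, and context values; verify these against the API reference for your service version.

```python
import uuid

import requests

ENDPOINT = "https://<your-resource>.cognitiveservices.azure.com"  # placeholder
HEADERS = {"Ocp-Apim-Subscription-Key": "<your-key>", "Content-Type": "application/json"}

# 1. Rank with activation deferred so an unseen page doesn't train the model.
event_id = str(uuid.uuid4())
rank_body = {
    "eventId": event_id,
    "deferActivation": True,
    "contextFeatures": [{"device": "desktop", "timeOfDay": "morning"}],
    "actions": [
        {"id": "hero-banner-a", "features": [{"style": "minimal"}]},
        {"id": "hero-banner-b", "features": [{"style": "colorful"}]},
    ],
}
best = requests.post(
    f"{ENDPOINT}/personalizer/v1.0/rank", headers=HEADERS, json=rank_body
).json()["rewardActionId"]

# 2. Store event_id (for example in the user's session), render `best`, and call
#    Activate once you're sure the user has actually seen the personalized page.
requests.post(f"{ENDPOINT}/personalizer/v1.0/events/{event_id}/activate", headers=HEADERS)

# 3. After the user selects (or ignores) the ranked content, send the reward score.
requests.post(
    f"{ENDPOINT}/personalizer/v1.0/events/{event_id}/reward",
    headers=HEADERS,
    json={"value": 1.0},
)
```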
-
-## How to use Personalizer with a chat bot
-
-In this example, you see how to use Personalizer to make a default suggestion instead of sending the user down a series of menus or choices every time. A minimal sketch of the intent-ranking logic follows the steps below.
-
-* Get the [code](https://github.com/Azure-Samples/cognitive-services-personalizer-samples/tree/master/samples/ChatbotExample) for this sample.
-* Set up your bot solution. Make sure to publish your LUIS application.
-* Manage Rank and Reward API calls for bot.
- * Add code to manage LUIS intent processing. If **None** is returned as the top intent, or the top intent's score is below your business logic threshold, send the list of intents to Personalizer to rank them.
- * Show intent list to user as selectable links with the first intent being the top-ranked intent from Rank API response.
- * Capture the user's selection and send this in the Reward API call.
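The sketch below covers only the selection logic in the first bullet: decide whether LUIS is confident, and if not, build the action list to send to the Rank API. The LUIS response shape, the 0.5 confidence threshold, and the intent names are illustrative assumptions.

```python
LUIS_THRESHOLD = 0.5  # illustrative confidence cutoff; tune to your business logic


def intents_to_rank(luis_result):
    """Return (resolved_intent, None) if LUIS is confident, else (None, actions) where
    `actions` is the candidate-intent list to send in a Personalizer Rank call."""
    top_intent, score = luis_result["topIntent"], luis_result["score"]
    if top_intent != "None" and score >= LUIS_THRESHOLD:
        return top_intent, None
    actions = [
        {"id": name, "features": [{"luisScore": s}]}
        for name, s in luis_result["intents"].items()
        if name != "None"
    ]
    return None, actions


# Example with a hypothetical LUIS result shape:
resolved, actions = intents_to_rank(
    {"topIntent": "None", "score": 0.2,
     "intents": {"OrderCoffee": 0.40, "OrderTea": 0.35, "None": 0.20}}
)
# If `actions` is not None, send it in a Rank call, list the returned rewardActionId
# first in the suggestions, and reward 1 if the user picks it (0 otherwise).
```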
-
-### Recommended bot patterns
-
-* Make Personalizer Rank API calls every time a disambiguation is needed, as opposed to caching results for each user. The result of disambiguating intent may change over time for one person, and allowing the Rank API to explore variances will accelerate overall learning.
-* Choose an interaction that is common with many users so that you have enough data to personalize. For example, introductory questions may be better fits than smaller clarifications deep in the conversation graph that only a few users may get to.
-* Use Rank API calls to enable "first suggestion is right" conversations, where the user gets asked "Would you like X?" or "Did you mean X?" and can just confirm, as opposed to making the user choose from a menu of options. For example: User: "I'd like to order a coffee." Bot: "Would you like a double espresso?" This way the reward signal is also strong, as it pertains directly to the one suggestion.
-
-## How to use Personalizer with a recommendation solution
-
-Many companies use recommendation engines, marketing and campaigning tools, audience segmentation and clustering, collaborative filtering, and other means to recommend products from a large catalog to customers.
-
-The [Microsoft Recommenders GitHub repository](https://github.com/Microsoft/Recommenders) provides examples and best practices for building recommendation systems, provided as Jupyter notebooks. It provides working examples for preparing data, building models, evaluating, tuning, and operationalizing the recommendation engines, for many common approaches including xDeepFM, SAR, ALS, RBM, DKN.
-
-Personalizer can work with a recommendation engine when it's present.
-
-* Recommendation engines take large amounts of items (for example, 500,000) and recommend a subset (such as the top 20) from hundreds or thousands of options.
-* Personalizer takes a small number of actions with lots of information about them and ranks them in real time for a given rich context, while most recommendation engines only use a few attributes about users, products and their interactions.
-* Personalizer is designed to autonomously explore user preferences all the time, which will yield better results where content is changing rapidly, such as news, live events, live community content, content with daily updates, or seasonal content.
-
-A common use is to take the output of a recommendation engine (for example, the top 20 products for a certain customer) and use that as the input actions for Personalizer.
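A sketch of that hand-off is shown below. The `recommend_top_n` function stands in for your recommendation engine, and the product fields and feature names are illustrative assumptions; the resulting list is what you would pass as `actions` in a Rank call.

```python
def recommend_top_n(user_id: str, n: int = 20):
    """Placeholder for your recommendation engine (for example, SAR or ALS output)."""
    return [{"sku": f"SKU-{i}", "category": "outdoor", "priceBand": "$$"} for i in range(n)]


def to_rank_actions(products):
    """Convert recommended products into the action list expected by the Rank API."""
    return [
        {"id": p["sku"], "features": [{"category": p["category"], "priceBand": p["priceBand"]}]}
        for p in products
    ]


actions = to_rank_actions(recommend_top_n("user-123"))
# Send `actions` plus context features (device, time of day, ...) in a Rank call;
# Personalizer then picks the single best item for the current context.
```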
-
-## Adding content safeguards to your application
-
-If your application allows for large variances in content shown to users, and some of that content may be unsafe or inappropriate for some users, you should plan ahead to make sure that the right safeguards are in place to prevent your users from seeing unacceptable content. The best pattern to implement safeguards is:
- * Obtain the list of actions to rank.
- * Filter out the ones that are not viable for the audience.
- * Only rank these viable actions.
- * Display the top ranked action to the user.
-
-In some architectures, the above sequence may be hard to implement. In that case, there's an alternative approach: implement safeguards after ranking, but make provisions so that actions that fall outside the safeguards aren't used to train the Personalizer model (a sketch of this pattern follows the steps below).
-
-* Obtain the list of actions to rank, with learning deactivated.
-* Rank actions.
-* Check if the top action is viable.
- * If the top action is viable, activate learning for this rank, then show it to the user.
- * If the top action is not viable, do not activate learning for this ranking, and decide through your own logic or alternative approaches what to show to the user. Even if you use the second-best ranked option, do not activate learning for this ranking.
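The Python sketch below illustrates this alternative pattern. It assumes the v1.0 Rank request accepts a `deferActivation` flag and that there's an `/events/{eventId}/activate` endpoint (check the API reference for your service version); the content policy, endpoint, key, and action IDs are purely illustrative.

```python
import uuid

import requests

ENDPOINT = "https://<your-resource>.cognitiveservices.azure.com"  # placeholder
HEADERS = {"Ocp-Apim-Subscription-Key": "<your-key>", "Content-Type": "application/json"}


def is_viable(action_id: str, audience: str) -> bool:
    """Your own content policy; this age-rating rule is purely illustrative."""
    return not (audience == "minor" and action_id.startswith("mature-"))


event_id = str(uuid.uuid4())
body = {
    "eventId": event_id,
    "deferActivation": True,  # don't learn from this event unless we decide to
    "contextFeatures": [{"audience": "minor"}],
    "actions": [
        {"id": "family-clip", "features": [{"rating": "G"}]},
        {"id": "mature-clip", "features": [{"rating": "R"}]},
    ],
}
top = requests.post(
    f"{ENDPOINT}/personalizer/v1.0/rank", headers=HEADERS, json=body
).json()["rewardActionId"]

if is_viable(top, audience="minor"):
    # Viable: activate the event so this ranking is used for learning, then show it.
    requests.post(f"{ENDPOINT}/personalizer/v1.0/events/{event_id}/activate", headers=HEADERS)
    show = top
else:
    # Not viable: never activate this event; fall back to your own logic for what to show.
    show = "family-clip"
```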
--
-## Next steps
-
-[Ethics & responsible use](ethics-responsible-use.md).
cognitive-services Plan Manage Costs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/plan-manage-costs.md
- Title: Plan to manage costs for Azure Cognitive Services
-description: Learn how to plan for and manage costs for Azure Cognitive Services by using cost analysis in the Azure portal.
----- Previously updated : 11/03/2021---
-# Plan and manage costs for Azure Cognitive Services
-
-This article describes how to plan for and manage costs for Azure Cognitive Services. First, use the Azure pricing calculator to estimate Cognitive Services costs before you add any resources for the service. Next, as you add Azure resources, review the estimated costs. After you've started using Cognitive Services resources (for example, Speech, Computer Vision, LUIS, Language service, or Translator), use Cost Management features to set budgets and monitor costs. You can also review forecasted costs and identify spending trends to find areas where you might want to act. Costs for Cognitive Services are only a portion of the monthly costs in your Azure bill. Although this article explains how to plan for and manage costs for Cognitive Services, you're billed for all Azure services and resources used in your Azure subscription, including third-party services.
-
-## Prerequisites
-
-Cost analysis in Cost Management supports most Azure account types, but not all of them. To view the full list of supported account types, see [Understand Cost Management data](../cost-management-billing/costs/understand-cost-mgt-data.md?WT.mc_id=costmanagementcontent_docsacmhorizontal_-inproduct-learn). To view cost data, you need at least read access for an Azure account. For information about assigning access to Azure Cost Management data, see [Assign access to data](../cost-management-billing/costs/assign-access-acm-data.md?WT.mc_id=costmanagementcontent_docsacmhorizontal_-inproduct-learn).
-
-<!--Note for Azure service writer: If you have other prerequisites for your service, insert them here -->
-
-## Estimate costs before using Cognitive Services
-
-Use the [Azure pricing calculator](https://azure.microsoft.com/pricing/calculator/) to estimate costs before you add Cognitive Services.
--
-As you add new resources to your workspace, return to this calculator and add the same resource here to update your cost estimates.
-
-For more information, see [Azure Cognitive Services pricing](https://azure.microsoft.com/pricing/details/cognitive-services/).
-
-## Understand the full billing model for Cognitive Services
-
-Cognitive Services runs on Azure infrastructure that [accrues costs](https://azure.microsoft.com/pricing/details/cognitive-services/) when you deploy the new resource. It's important to understand that more infrastructure might accrue costs. You need to manage that cost when you make changes to deployed resources.
-
-When you create or use Cognitive Services resources, you might get charged based on the services that you use. There are two billing models available for Cognitive Services: pay-as-you-go and commitment tiers.
-
-## Pay-as-you-go
-
-With Pay-As-You-Go pricing, you are billed according to the Cognitive Services offering you use, based on its billing information.
-
-| Service | Instance(s) | Billing information |
-||-||
-| **Vision** | | |
-| [Computer Vision](https://azure.microsoft.com/pricing/details/cognitive-services/computer-vision/) | Free, Standard (S1) | Billed by the number of transactions. Price per transaction varies per feature (Read, OCR, Spatial Analysis). For full details, see [Pricing](https://azure.microsoft.com/pricing/details/cognitive-services/computer-vision/). |
-| [Custom Vision](https://azure.microsoft.com/pricing/details/cognitive-services/custom-vision-service/) | Free, Standard | <li>Predictions are billed by the number of transactions.</li><li>Training is billed by compute hour(s).</li><li>Image storage is billed by number of images (up to 6 MB per image).</li>|
-| [Face](https://azure.microsoft.com/pricing/details/cognitive-services/face-api/) | Free, Standard | Billed by the number of transactions. |
-| **Speech** | | |
-| [Speech service](https://azure.microsoft.com/pricing/details/cognitive-services/speech-services/) | Free, Standard | Billing varies by feature (speech-to-text, text to speech, speech translation, speaker recognition). Primarily, billing is by transaction count or character count. For full details, see [Pricing](https://azure.microsoft.com/pricing/details/cognitive-services/speech-services/). |
-| **Language** | | |
-| [Language Understanding (LUIS)](https://azure.microsoft.com/pricing/details/cognitive-services/language-understanding-intelligent-services/) | Free Authoring, Free Prediction, Standard | Billed by number of transactions. Price per transaction varies by feature (speech requests, text requests). For full details, see [Pricing](https://azure.microsoft.com/pricing/details/cognitive-services/language-understanding-intelligent-services/). |
-| [QnA Maker](https://azure.microsoft.com/pricing/details/cognitive-services/qna-maker/) | Free, Standard | Subscription fee billed monthly. For full details, see [Pricing](https://azure.microsoft.com/pricing/details/cognitive-services/qna-maker/). |
-| [Language service](https://azure.microsoft.com/pricing/details/cognitive-services/text-analytics/) | Free, Standard | Billed by number of text records. |
-| [Translator](https://azure.microsoft.com/pricing/details/cognitive-services/translator/) | Free, Pay-as-you-go (S1), Volume discount (S2, S3, S4, C2, C3, C4, D3) | Pricing varies by meter and feature. For full details, see [Pricing](https://azure.microsoft.com/pricing/details/cognitive-services/translator/). <li>Text translation is billed by number of characters translated.</li><li>Document translation is billed by characters translated.</li><li>Custom translation is billed by characters of source and target training data.</li> |
-| **Decision** | | |
-| [Anomaly Detector](https://azure.microsoft.com/pricing/details/cognitive-services/anomaly-detector/) | Free, Standard | Billed by the number of transactions. |
-| [Content Moderator](https://azure.microsoft.com/pricing/details/cognitive-services/content-moderator/) | Free, Standard | Billed by the number of transactions. |
-| [Personalizer](https://azure.microsoft.com/pricing/details/cognitive-services/personalizer/) | Free, Standard (S0) | Billed by transactions per month. There are storage and transaction quotas. For full details, see [Pricing](https://azure.microsoft.com/pricing/details/cognitive-services/personalizer/). |
-
-## Commitment tier
-
-In addition to the pay-as-you-go model, Cognitive Services has commitment tiers, which let you commit to using several service features for a fixed fee, enabling you to have a predictable total cost based on the needs of your workload.
-
-With commitment tier pricing, you are billed according to the plan you choose. See [Quickstart: purchase commitment tier pricing](commitment-tier.md) for information on available services, how to sign up, and considerations when purchasing a plan.
-
-> [!NOTE]
-> If you use the resource above the quota provided by the commitment plan, you will be charged for the additional usage as per the overage amount mentioned in the Azure portal when you purchase a commitment plan. For more information, see [Azure Cognitive Services pricing](https://azure.microsoft.com/pricing/details/cognitive-services/).
----
-### Costs that typically accrue with Cognitive Services
-
-Typically, after you deploy an Azure resource, costs are determined by your pricing tier and the API calls you make to your endpoint. If the service you're using has a commitment tier, going over the allotted calls in your tier may incur an overage charge.
-
-Extra costs may accrue when you use these services:
-
-#### QnA Maker
-
-When you create resources for QnA Maker, resources for other Azure services may also be created. They include:
-- [Azure App Service (for the runtime)](https://azure.microsoft.com/pricing/details/app-service/)
-- [Azure Cognitive Search (for the data)](https://azure.microsoft.com/pricing/details/search/)
-
-### Costs that might accrue after resource deletion
-
-#### QnA Maker
-
-After you delete QnA Maker resources, the following resources might continue to exist. They continue to accrue costs until you delete them.
-- [Azure App Service (for the runtime)](https://azure.microsoft.com/pricing/details/app-service/)
-- [Azure Cognitive Search (for the data)](https://azure.microsoft.com/pricing/details/search/)
-
-### Using Azure Prepayment credit with Cognitive Services
-
-You can pay for Cognitive Services charges with your Azure Prepayment (previously called monetary commitment) credit. However, you can't use Azure Prepayment credit to pay for charges for third-party products and services including those from the Azure Marketplace.
-
-## Monitor costs
-
-<!-- Note to Azure service writer: Modify the following as needed for your service. Replace example screenshots with ones taken for your service. If you need assistance capturing screenshots, ask banders for help. -->
-
-As you use Azure resources with Cognitive Services, you incur costs. Azure resource usage unit costs vary by time intervals (seconds, minutes, hours, and days) or by unit usage (bytes, megabytes, and so on). As soon as use of a Cognitive Service (or Cognitive Services) starts, costs are incurred and you can see the costs in [cost analysis](../cost-management-billing/costs/quick-acm-cost-analysis.md?WT.mc_id=costmanagementcontent_docsacmhorizontal_-inproduct-learn).
-
-When you use cost analysis, you view Cognitive Services costs in graphs and tables for different time intervals. Some examples are by day, current and prior month, and year. You also view costs against budgets and forecasted costs. Switching to longer views over time can help you identify spending trends. And you see where overspending might have occurred. If you've created budgets, you can also easily see where they're exceeded.
-
-To view Cognitive Services costs in cost analysis:
-
-1. Sign in to the Azure portal.
-2. Open the scope in the Azure portal and select **Cost analysis** in the menu. For example, go to **Subscriptions**, select a subscription from the list, and then select **Cost analysis** in the menu. Select **Scope** to switch to a different scope in cost analysis.
-3. By default, costs for services are shown in the first donut chart. Select the area in the chart labeled Cognitive Services.
-
-Actual monthly costs are shown when you initially open cost analysis. Here's an example showing all monthly usage costs.
-
-- To narrow costs for a single service, like Cognitive Services, select **Add filter** and then select **Service name**. Then, select **Cognitive Services**.
-
-Here's an example showing costs for just Cognitive Services.
--
-In the preceding example, you see the current cost for the service. Costs by Azure regions (locations) and Cognitive Services costs by resource group are also shown. From here, you can explore costs on your own.
-
-## Create budgets
-
-You can create [budgets](../cost-management-billing/costs/tutorial-acm-create-budgets.md?WT.mc_id=costmanagementcontent_docsacmhorizontal_-inproduct-learn) to manage costs and create [alerts](../cost-management-billing/costs/cost-mgt-alerts-monitor-usage-spending.md?WT.mc_id=costmanagementcontent_docsacmhorizontal_-inproduct-learn) that automatically notify stakeholders of spending anomalies and overspending risks. Alerts are based on spending compared to budget and cost thresholds. Budgets and alerts are created for Azure subscriptions and resource groups, so they're useful as part of an overall cost monitoring strategy.
-
-Budgets can be created with filters for specific resources or services in Azure if you want more granularity present in your monitoring. Filters help ensure that you don't accidentally create new resources that cost you more money. For more about the filter options when you create a budget, see [Group and filter options](../cost-management-billing/costs/group-filter.md?WT.mc_id=costmanagementcontent_docsacmhorizontal_-inproduct-learn).
-
-## Export cost data
-
-You can also [export your cost data](../cost-management-billing/costs/tutorial-export-acm-data.md?WT.mc_id=costmanagementcontent_docsacmhorizontal_-inproduct-learn) to a storage account. This is helpful when you or others need to do more data analysis for costs. For example, finance teams can analyze the data using Excel or Power BI. You can export your costs on a daily, weekly, or monthly schedule and set a custom date range. Exporting cost data is the recommended way to retrieve cost datasets.
-
-## Next steps
-- Learn [how to optimize your cloud investment with Azure Cost Management](../cost-management-billing/costs/cost-mgt-best-practices.md?WT.mc_id=costmanagementcontent_docsacmhorizontal_-inproduct-learn).
-- Learn more about managing costs with [cost analysis](../cost-management-billing/costs/quick-acm-cost-analysis.md?WT.mc_id=costmanagementcontent_docsacmhorizontal_-inproduct-learn).
-- Learn about how to [prevent unexpected costs](../cost-management-billing/cost-management-billing-overview.md?WT.mc_id=costmanagementcontent_docsacmhorizontal_-inproduct-learn).
-- Take the [Cost Management](/training/paths/control-spending-manage-bills?WT.mc_id=costmanagementcontent_docsacmhorizontal_-inproduct-learn) guided learning course.
cognitive-services Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/policy-reference.md
- Title: Built-in policy definitions for Azure Cognitive Services
-description: Lists Azure Policy built-in policy definitions for Azure Cognitive Services. These built-in policy definitions provide common approaches to managing your Azure resources.
Previously updated : 07/06/2023------
-# Azure Policy built-in policy definitions for Azure Cognitive Services
-
-This page is an index of [Azure Policy](../governance/policy/overview.md) built-in policy
-definitions for Azure Cognitive Services. For additional Azure Policy built-ins for other services,
-see [Azure Policy built-in definitions](../governance/policy/samples/built-in-policies.md).
-
-The name of each built-in policy definition links to the policy definition in the Azure portal. Use
-the link in the **Version** column to view the source on the
-[Azure Policy GitHub repo](https://github.com/Azure/azure-policy).
-
-## Azure Cognitive Services
--
-## Next steps
-- See the built-ins on the [Azure Policy GitHub repo](https://github.com/Azure/azure-policy).
-- Review the [Azure Policy definition structure](../governance/policy/concepts/definition-structure.md).
-- Review [Understanding policy effects](../governance/policy/concepts/effects.md).
cognitive-services Responsible Use Of Ai Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/responsible-use-of-ai-overview.md
- Title: Overview of Responsible use of AI-
-description: Azure Cognitive Services provides information and guidelines on how to responsibly use our AI services in applications. Below are the links to articles that provide this guidance for the different services within the Cognitive Services suite.
----- Previously updated : 1/10/2022---
-# Responsible use of AI with Cognitive Services
-
-Azure Cognitive Services provides information and guidelines on how to responsibly use artificial intelligence in applications. Below are the links to articles that provide this guidance for the different services within the Cognitive Services suite.
-
-## Anomaly Detector
-
-* [Transparency note and use cases](/legal/cognitive-services/anomaly-detector/ad-transparency-note?context=/azure/cognitive-services/anomaly-detector/context/context)
-* [Data, privacy, and security](/legal/cognitive-services/anomaly-detector/data-privacy-security?context=/azure/cognitive-services/anomaly-detector/context/context)
-
-## Computer Vision - OCR
-
-* [Transparency note and use cases](/legal/cognitive-services/computer-vision/ocr-transparency-note?context=/azure/cognitive-services/computer-vision/context/context)
-* [Characteristics and limitations](/legal/cognitive-services/computer-vision/ocr-characteristics-and-limitations?context=/azure/cognitive-services/computer-vision/context/context)
-* [Integration and responsible use](/legal/cognitive-services/computer-vision/ocr-guidance-integration-responsible-use?context=/azure/cognitive-services/computer-vision/context/context)
-* [Data, privacy, and security](/legal/cognitive-services/computer-vision/ocr-data-privacy-security?context=/azure/cognitive-services/computer-vision/context/context)
-
-## Computer Vision - Image Analysis
-
-* [Transparency note](/legal/cognitive-services/computer-vision/imageanalysis-transparency-note?context=/azure/cognitive-services/computer-vision/context/context)
-* [Characteristics and limitations](/legal/cognitive-services/computer-vision/imageanalysis-characteristics-and-limitations?context=/azure/cognitive-services/computer-vision/context/context)
-* [Integration and responsible use](/legal/cognitive-services/computer-vision/imageanalysis-guidance-for-integration?context=/azure/cognitive-services/computer-vision/context/context)
-* [Data, privacy, and security](/legal/cognitive-services/computer-vision/imageanalysis-data-privacy-security?context=/azure/cognitive-services/computer-vision/context/context)
-* [Limited Access features](/legal/cognitive-services/computer-vision/limited-access?context=/azure/cognitive-services/computer-vision/context/context)
-
-## Computer Vision - Face
-
-* [Transparency note and use cases](/legal/cognitive-services/face/transparency-note?context=/azure/cognitive-services/computer-vision/context/context)
-* [Characteristics and limitations](/legal/cognitive-services/face/characteristics-and-limitations?context=/azure/cognitive-services/computer-vision/context/context)
-* [Integration and responsible use](/legal/cognitive-services/face/guidance-integration-responsible-use?context=/azure/cognitive-services/computer-vision/context/context)
-* [Data privacy and security](/legal/cognitive-services/face/data-privacy-security?context=/azure/cognitive-services/computer-vision/context/context)
-* [Limited Access features](/legal/cognitive-services/computer-vision/limited-access-identity?context=/azure/cognitive-services/computer-vision/context/context)
-
-## Computer Vision - Spatial Analysis
-
-* [Transparency note and use cases](/legal/cognitive-services/computer-vision/transparency-note-spatial-analysis?context=/azure/cognitive-services/computer-vision/context/context)
-* [Characteristics and limitations](/legal/cognitive-services/computer-vision/accuracy-and-limitations?context=/azure/cognitive-services/computer-vision/context/context)
-* [Responsible use in AI deployment](/legal/cognitive-services/computer-vision/responsible-use-deployment?context=/azure/cognitive-services/computer-vision/context/context)
-* [Disclosure design guidelines](/legal/cognitive-services/computer-vision/disclosure-design?context=/azure/cognitive-services/computer-vision/context/context)
-* [Research insights](/legal/cognitive-services/computer-vision/research-insights?context=/azure/cognitive-services/computer-vision/context/context)
-* [Data, privacy, and security](/legal/cognitive-services/computer-vision/compliance-privacy-security-2?context=/azure/cognitive-services/computer-vision/context/context)
-
-## Custom Vision
-
-* [Transparency note and use cases](/legal/cognitive-services/custom-vision/custom-vision-cvs-transparency-note?context=/azure/cognitive-services/custom-vision-service/context/context)
-* [Characteristics and limitations](/legal/cognitive-services/custom-vision/custom-vision-cvs-characteristics-and-limitations?context=/azure/cognitive-services/custom-vision-service/context/context)
-* [Integration and responsible use](/legal/cognitive-services/custom-vision/custom-vision-cvs-guidance-integration-responsible-use?context=/azure/cognitive-services/custom-vision-service/context/context)
-* [Data, privacy, and security](/legal/cognitive-services/custom-vision/custom-vision-cvs-data-privacy-security?context=/azure/cognitive-services/custom-vision-service/context/context)
-
-## Language service
-
-* [Transparency note](/legal/cognitive-services/language-service/transparency-note?context=/azure/cognitive-services/language-service/context/context)
-* [Integration and responsible use](/legal/cognitive-services/language-service/guidance-integration-responsible-use?context=/azure/cognitive-services/language-service/context/context)
-* [Data, privacy, and security](/legal/cognitive-services/language-service/data-privacy?context=/azure/cognitive-services/language-service/context/context)
-
-## Language - Custom text classification
-
-* [Transparency note](/legal/cognitive-services/language-service/ctc-transparency-note?context=/azure/cognitive-services/language-service/context/context)
-* [Integration and responsible use](/legal/cognitive-services/language-service/ctc-guidance-integration-responsible-use?context=/azure/cognitive-services/language-service/context/context)
-* [Characteristics and limitations](/legal/cognitive-services/language-service/ctc-characteristics-and-limitations?context=/azure/cognitive-services/language-service/context/context)
-* [Data, privacy, and security](/legal/cognitive-services/language-service/ctc-data-privacy-security?context=/azure/cognitive-services/language-service/context/context)
-
-## Language - Named entity recognition
-
-* [Transparency note](/legal/cognitive-services/language-service/transparency-note-named-entity-recognition?context=/azure/cognitive-services/language-service/context/context)
-* [Integration and responsible use](/legal/cognitive-services/language-service/guidance-integration-responsible-use?context=/azure/cognitive-services/language-service/context/context)
-* [Data, privacy, and security](/legal/cognitive-services/language-service/data-privacy?context=/azure/cognitive-services/language-service/context/context)
-
-## Language - Custom named entity recognition
-
-* [Transparency note](/legal/cognitive-services/language-service/cner-transparency-note?context=/azure/cognitive-services/language-service/context/context)
-* [Integration and responsible use](/legal/cognitive-services/language-service/cner-guidance-integration-responsible-use?context=/azure/cognitive-services/language-service/context/context)
-* [Characteristics and limitations](/legal/cognitive-services/language-service/cner-characteristics-and-limitations?context=/azure/cognitive-services/language-service/context/context)
-* [Data, privacy, and security](/legal/cognitive-services/language-service/cner-data-privacy-security?context=/azure/cognitive-services/language-service/context/context)
-
-## Language - Entity linking
-
-* [Integration and responsible use](/legal/cognitive-services/language-service/guidance-integration-responsible-use?context=/azure/cognitive-services/language-service/context/context)
-* [Data, privacy, and security](/legal/cognitive-services/language-service/data-privacy?context=/azure/cognitive-services/language-service/context/context)
-
-## Language - Language detection
-
-* [Transparency note](/legal/cognitive-services/language-service/transparency-note-language-detection?context=/azure/cognitive-services/language-service/context/context)
-* [Integration and responsible use](/legal/cognitive-services/language-service/guidance-integration-responsible-use?context=/azure/cognitive-services/language-service/context/context)
-* [Data, privacy, and security](/legal/cognitive-services/language-service/data-privacy?context=/azure/cognitive-services/language-service/context/context)
-
-## Language - Key phrase extraction
-
-* [Transparency note](/legal/cognitive-services/language-service/transparency-note-key-phrase-extraction?context=/azure/cognitive-services/language-service/context/context)
-* [Integration and responsible use](/legal/cognitive-services/language-service/guidance-integration-responsible-use?context=/azure/cognitive-services/language-service/context/context)
-* [Data, privacy, and security](/legal/cognitive-services/language-service/data-privacy?context=/azure/cognitive-services/language-service/context/context)
-
-## Language - Personally identifiable information detection
-
-* [Transparency note](/legal/cognitive-services/language-service/transparency-note-personally-identifiable-information?context=/azure/cognitive-services/language-service/context/context)
-* [Integration and responsible use](/legal/cognitive-services/language-service/guidance-integration-responsible-use?context=/azure/cognitive-services/language-service/context/context)
-* [Data, privacy, and security](/legal/cognitive-services/language-service/data-privacy?context=/azure/cognitive-services/language-service/context/context)
-
-## Language - Question Answering
-
-* [Transparency note](/legal/cognitive-services/language-service/transparency-note-question-answering?context=/azure/cognitive-services/language-service/context/context)
-* [Integration and responsible use](/legal/cognitive-services/language-service/guidance-integration-responsible-use-question-answering?context=/azure/cognitive-services/language-service/context/context)
-* [Data, privacy, and security](/legal/cognitive-services/language-service/data-privacy-security-question-answering?context=/azure/cognitive-services/language-service/context/context)
-
-## Language - Sentiment Analysis and opinion mining
-
-* [Transparency note](/legal/cognitive-services/language-service/transparency-note-sentiment-analysis?context=/azure/cognitive-services/language-service/context/context)
-* [Integration and responsible use](/legal/cognitive-services/language-service/guidance-integration-responsible-use?context=/azure/cognitive-services/language-service/context/context)
-* [Data, privacy, and security](/legal/cognitive-services/language-service/data-privacy?context=/azure/cognitive-services/language-service/context/context)
-
-## Language - Text Analytics for health
-
-* [Transparency note](/legal/cognitive-services/language-service/transparency-note-health?context=/azure/cognitive-services/language-service/context/context)
-* [Integration and responsible use](/legal/cognitive-services/language-service/guidance-integration-responsible-use?context=/azure/cognitive-services/language-service/context/context)
-* [Data, privacy, and security](/legal/cognitive-services/language-service/data-privacy?context=/azure/cognitive-services/language-service/context/context)
-
-## Language - Summarization
-
-* [Transparency note](/legal/cognitive-services/language-service/transparency-note-extractive-summarization?context=/azure/cognitive-services/language-service/context/context)
-* [Integration and responsible use](/legal/cognitive-services/language-service/guidance-integration-responsible-use-summarization?context=/azure/cognitive-services/language-service/context/context)
-* [Characteristics and limitations](/legal/cognitive-services/language-service/characteristics-and-limitations-summarization?context=/azure/cognitive-services/language-service/context/context)
-* [Data, privacy, and security](/legal/cognitive-services/language-service/data-privacy?context=/azure/cognitive-services/language-service/context/context)
-
-## Language Understanding
-
-* [Transparency note and use cases](/legal/cognitive-services/luis/luis-transparency-note?context=/azure/cognitive-services/LUIS/context/context)
-* [Characteristics and limitations](/legal/cognitive-services/luis/characteristics-and-limitations?context=/azure/cognitive-services/LUIS/context/context)
-* [Integration and responsible use](/legal/cognitive-services/luis/guidance-integration-responsible-use?context=/azure/cognitive-services/LUIS/context/context)
-* [Data, privacy, and security](/legal/cognitive-services/luis/data-privacy-security?context=/azure/cognitive-services/LUIS/context/context)
-
-## Azure OpenAI Service
-
-* [Transparency note](/legal/cognitive-services/openai/transparency-note?context=/azure/cognitive-services/openai/context/context)
-* [Limited access](/legal/cognitive-services/openai/limited-access?context=/azure/cognitive-services/openai/context/context)
-* [Code of conduct](/legal/cognitive-services/openai/code-of-conduct?context=/azure/cognitive-services/openai/context/context)
-* [Data, privacy, and security](/legal/cognitive-services/openai/data-privacy?context=/azure/cognitive-services/openai/context/context)
-
-## Personalizer
-
-* [Transparency note and use cases](./personalizer/responsible-use-cases.md)
-* [Characteristics and limitations](./personalizer/responsible-characteristics-and-limitations.md)
-* [Integration and responsible use](./personalizer/responsible-guidance-integration.md)
-* [Data and privacy](./personalizer/responsible-data-and-privacy.md)
-
-## QnA Maker
-
-* [Transparency note and use cases](/legal/cognitive-services/qnamaker/transparency-note-qnamaker?context=/azure/cognitive-services/qnamaker/context/context)
-* [Characteristics and limitations](/legal/cognitive-services/qnamaker/characteristics-and-limitations-qnamaker?context=/azure/cognitive-services/qnamaker/context/context)
-* [Integration and responsible use](/legal/cognitive-services/qnamaker/guidance-integration-responsible-use-qnamaker?context=/azure/cognitive-services/qnamaker/context/context)
-* [Data, privacy, and security](/legal/cognitive-services/qnamaker/data-privacy-security-qnamaker?context=/azure/cognitive-services/qnamaker/context/context)
-
-## Speech - Pronunciation Assessment
-
-* [Transparency note and use cases](/legal/cognitive-services/speech-service/pronunciation-assessment/transparency-note-pronunciation-assessment?context=/azure/cognitive-services/speech-service/context/context)
-* [Characteristics and limitations](/legal/cognitive-services/speech-service/pronunciation-assessment/characteristics-and-limitations-pronunciation-assessment?context=/azure/cognitive-services/speech-service/context/context)
-
-## Speech - Speaker Recognition
-
-* [Transparency note and use cases](/legal/cognitive-services/speech-service/speaker-recognition/transparency-note-speaker-recognition?context=/azure/cognitive-services/speech-service/context/context)
-* [Characteristics and limitations](/legal/cognitive-services/speech-service/speaker-recognition/characteristics-and-limitations-speaker-recognition?context=/azure/cognitive-services/speech-service/context/context)
-* [Limited access](/legal/cognitive-services/speech-service/speaker-recognition/limited-access-speaker-recognition?context=/azure/cognitive-services/speech-service/context/context)
-* [General guidelines](/legal/cognitive-services/speech-service/speaker-recognition/guidance-integration-responsible-use-speaker-recognition?context=/azure/cognitive-services/speech-service/context/context)
-* [Data, privacy, and security](/legal/cognitive-services/speech-service/speaker-recognition/data-privacy-speaker-recognition?context=/azure/cognitive-services/speech-service/context/context)
-
-## Speech - Custom Neural Voice
-
-* [Transparency note and use cases](/legal/cognitive-services/speech-service/custom-neural-voice/transparency-note-custom-neural-voice?context=/azure/cognitive-services/speech-service/context/context)
-* [Characteristics and limitations](/legal/cognitive-services/speech-service/custom-neural-voice/characteristics-and-limitations-custom-neural-voice?context=/azure/cognitive-services/speech-service/context/context)
-* [Limited access](/legal/cognitive-services/speech-service/custom-neural-voice/limited-access-custom-neural-voice?context=/azure/cognitive-services/speech-service/context/context)
-* [Responsible deployment of synthetic speech](/legal/cognitive-services/speech-service/custom-neural-voice/concepts-guidelines-responsible-deployment-synthetic?context=/azure/cognitive-services/speech-service/context/context)
-* [Disclosure of voice talent](/legal/cognitive-services/speech-service/disclosure-voice-talent?context=/azure/cognitive-services/speech-service/context/context)
-* [Disclosure of design guidelines](./speech-service/concepts-disclosure-guidelines.md)
-* [Disclosure of design patterns](./speech-service/concepts-disclosure-patterns.md)
-* [Code of conduct](/legal/cognitive-services/speech-service/tts-code-of-conduct?context=/azure/cognitive-services/speech-service/context/context)
-* [Data, privacy, and security](/legal/cognitive-services/speech-service/custom-neural-voice/data-privacy-security-custom-neural-voice?context=/azure/cognitive-services/speech-service/context/context)
-
-## Speech - Speech to text
-
-* [Transparency note and use cases](/legal/cognitive-services/speech-service/speech-to-text/transparency-note?context=/azure/cognitive-services/speech-service/context/context)
-* [Characteristics and limitations](/legal/cognitive-services/speech-service/speech-to-text/characteristics-and-limitations?context=/azure/cognitive-services/speech-service/context/context)
-* [Integration and responsible use](/legal/cognitive-services/speech-service/speech-to-text/guidance-integration-responsible-use?context=/azure/cognitive-services/speech-service/context/context)
-* [Data, privacy, and security](/legal/cognitive-services/speech-service/speech-to-text/data-privacy-security?context=/azure/cognitive-services/speech-service/context/context)
cognitive-services Rotate Keys https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/rotate-keys.md
- Title: Rotate keys in Azure Cognitive Services-
-description: "Learn how to rotate API keys for better security, without interrupting service"
----- Previously updated : 11/08/2022---
-# Rotate subscription keys in Cognitive Services
-
-Each Cognitive Services resource has two API keys to enable secret rotation. This is a security precaution that lets you regularly change the keys that can access your service, protecting the privacy of your resource if a key gets leaked.
-
-## How to rotate keys
-
-Keys can be rotated using the following procedure (a sample management call is sketched after the steps):
-
-1. If you're using both keys in production, change your code so that only one key is in use. In this guide, assume it's key 1.
-
- This is a necessary step because once a key is regenerated, the older version of that key will stop working immediately. This would cause clients using the older key to get 401 access denied errors.
-1. Once only key 1 is in use, you can regenerate key 2. Go to your resource's page in the Azure portal, select the **Keys and Endpoint** tab, and select the **Regenerate Key 2** button at the top of the page.
-1. Next, update your code to use the newly generated key 2.
-
- It helps to have logging or monitoring available to confirm that users of the key have successfully switched from key 1 to key 2 before you proceed.
-1. Now you can regenerate key 1 using the same process.
-1. Finally, update your code to use the new key 1.
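As an example, the regeneration steps could be scripted against the Azure Resource Manager `regenerateKey` action, sketched below in Python. The subscription, resource group, account name, `api-version` value, and response shape are placeholders and assumptions to check against the current Microsoft.CognitiveServices REST reference; the portal button or the Azure CLI work just as well.

```python
import requests
from azure.identity import DefaultAzureCredential

# Placeholders for your resource; the api-version is an assumption, use the current one.
SUB, RG, ACCOUNT = "<subscription-id>", "<resource-group>", "<account-name>"
API_VERSION = "2023-05-01"

token = DefaultAzureCredential().get_token("https://management.azure.com/.default").token
url = (
    f"https://management.azure.com/subscriptions/{SUB}/resourceGroups/{RG}"
    f"/providers/Microsoft.CognitiveServices/accounts/{ACCOUNT}"
    f"/regenerateKey?api-version={API_VERSION}"
)

# Regenerate key 2 while production traffic uses key 1 only.
resp = requests.post(url, headers={"Authorization": f"Bearer {token}"}, json={"keyName": "Key2"})
resp.raise_for_status()
new_key2 = resp.json().get("key2")  # response is assumed to echo both keys, as listKeys does

# Update your application's configuration to use new_key2, confirm clients have switched,
# then repeat the call with {"keyName": "Key1"} to rotate the other key.
```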
-
-## See also
-
-* [What is Cognitive Services?](./what-are-cognitive-services.md)
-* [Cognitive Services security features](./security-features.md)
cognitive-services Security Controls Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/security-controls-policy.md
- Title: Azure Policy Regulatory Compliance controls for Azure Cognitive Services
-description: Lists Azure Policy Regulatory Compliance controls available for Azure Cognitive Services. These built-in policy definitions provide common approaches to managing the compliance of your Azure resources.
Previously updated : 07/06/2023------
-# Azure Policy Regulatory Compliance controls for Azure Cognitive Services
-
-[Regulatory Compliance in Azure Policy](../governance/policy/concepts/regulatory-compliance.md)
-provides Microsoft created and managed initiative definitions, known as _built-ins_, for the
-**compliance domains** and **security controls** related to different compliance standards. This
-page lists the **compliance domains** and **security controls** for Azure Cognitive Services. You can
-assign the built-ins for a **security control** individually to help make your Azure resources
-compliant with the specific standard.
---
-## Next steps
-- Learn more about [Azure Policy Regulatory Compliance](../governance/policy/concepts/regulatory-compliance.md).
-- See the built-ins on the [Azure Policy GitHub repo](https://github.com/Azure/azure-policy).
cognitive-services Security Features https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/security-features.md
- Title: Azure Cognitive Services security-
-description: Learn about the security considerations for Cognitive Services usage.
----- Previously updated : 12/02/2022----
-# Azure Cognitive Services security
-
-Security should be considered a top priority in the development of all applications, and with the growth of artificial intelligence enabled applications, security is even more important. This article outlines various security features available for Azure Cognitive Services. Each feature addresses a specific liability, so multiple features can be used in the same workflow.
-
-For a comprehensive list of Azure service security recommendations, see the [Cognitive Services security baseline](/security/benchmark/azure/baselines/cognitive-services-security-baseline?toc=%2Fazure%2Fcognitive-services%2FTOC.json) article.
-
-## Security features
-
-|Feature | Description |
-|:|:|
| [Transport Layer Security (TLS)](/dotnet/framework/network-programming/tls) | All of the Cognitive Services endpoints exposed over HTTP enforce the TLS 1.2 protocol. With an enforced security protocol, consumers attempting to call a Cognitive Services endpoint should follow these guidelines: </br>- The client operating system (OS) needs to support TLS 1.2.</br>- The language (and platform) used to make the HTTP call needs to specify TLS 1.2 as part of the request. Depending on the language and platform, specifying TLS is done either implicitly or explicitly.</br>- For .NET users, consider the [Transport Layer Security best practices](/dotnet/framework/network-programming/tls). |
-| [Authentication options](./authentication.md)| Authentication is the act of verifying a user's identity. Authorization, by contrast, is the specification of access rights and privileges to resources for a given identity. An identity is a collection of information about a <a href="https://en.wikipedia.org/wiki/Principal_(computer_security)" target="_blank">principal</a>, and a principal can be either an individual user or a service.</br></br>By default, you authenticate your own calls to Cognitive Services using the subscription keys provided; this is the simplest method but not the most secure. The most secure authentication method is to use managed roles in Azure Active Directory. To learn about this and other authentication options, see [Authenticate requests to Cognitive Services](./authentication.md). |
-| [Key rotation](./authentication.md)| Each Cognitive Services resource has two API keys to enable secret rotation. This is a security precaution that lets you regularly change the keys that can access your service, protecting the privacy of your service in the event that a key gets leaked. To learn about this and other authentication options, see [Rotate keys](./rotate-keys.md). |
-| [Environment variables](cognitive-services-environment-variables.md) | Environment variables are name-value pairs that are stored within a specific development environment. You can store your credentials in this way as a more secure alternative to using hardcoded values in your code. However, if your environment is compromised, the environment variables are compromised as well, so this is not the most secure approach.</br></br> For instructions on how to use environment variables in your code, see the [Environment variables guide](cognitive-services-environment-variables.md). |
-| [Customer-managed keys (CMK)](./encryption/cognitive-services-encryption-keys-portal.md) | This feature is for services that store customer data at rest (longer than 48 hours). While this data is already double-encrypted on Azure servers, users can get extra security by adding another layer of encryption, with keys they manage themselves. You can link your service to Azure Key Vault and manage your data encryption keys there. </br></br>You need special approval to get the E0 SKU for your service, which enables CMK. Within 3-5 business days after you submit the [request form](https://aka.ms/cogsvc-cmk), you'll get an update on the status of your request. Depending on demand, you may be placed in a queue and approved as space becomes available. Once you're approved for using the E0 SKU, you'll need to create a new resource from the Azure portal and select E0 as the Pricing Tier. You won't be able to upgrade from F0 to the new E0 SKU. </br></br>Only some services can use CMK; look for your service on the [Customer-managed keys](./encryption/cognitive-services-encryption-keys-portal.md) page.|
-| [Virtual networks](./cognitive-services-virtual-networks.md) | Virtual networks allow you to specify which endpoints can make API calls to your resource. The Azure service will reject API calls from devices outside of your network. You can set a formula-based definition of the allowed network, or you can define an exhaustive list of endpoints to allow. This is another layer of security that can be used in combination with others. |
-| [Data loss prevention](./cognitive-services-data-loss-prevention.md) | The data loss prevention feature lets an administrator decide what types of URIs their Azure resource can take as inputs (for those API calls that take URIs as input). This can be done to prevent the possible exfiltration of sensitive company data: If a company stores sensitive information (such as a customer's private data) in URL parameters, a bad actor inside that company could submit the sensitive URLs to an Azure service, which surfaces that data outside the company. Data loss prevention lets you configure the service to reject certain URI forms on arrival.|
-| [Customer Lockbox](../security/fundamentals/customer-lockbox-overview.md) |The Customer Lockbox feature provides an interface for customers to review and approve or reject data access requests. It's used in cases where a Microsoft engineer needs to access customer data during a support request. For information on how Customer Lockbox requests are initiated, tracked, and stored for later reviews and audits, see the [Customer Lockbox guide](../security/fundamentals/customer-lockbox-overview.md).</br></br>Customer Lockbox is available for the following
-| [Bring your own storage (BYOS)](./speech-service/speech-encryption-of-data-at-rest.md)| The Speech service doesn't currently support Customer Lockbox. However, you can arrange for your service-specific data to be stored in your own storage resource using bring-your-own-storage (BYOS). BYOS allows you to achieve similar data controls to Customer Lockbox. Keep in mind that Speech service data stays and is processed in the Azure region where the Speech resource was created. This applies to any data at rest and data in transit. For customization features like Custom Speech and Custom Voice, all customer data is transferred, stored, and processed in the same region where the Speech service resource and BYOS resource (if used) reside. </br></br>To use BYOS with Speech, follow the [Speech encryption of data at rest](./speech-service/speech-encryption-of-data-at-rest.md) guide.</br></br> Microsoft does not use customer data to improve its Speech models. Additionally, if endpoint logging is disabled and no customizations are used, then no customer data is stored by Speech. |
-
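As a minimal sketch of the TLS guideline above, assuming a placeholder endpoint host, the following Python snippet pins the client to TLS 1.2 or later:

```python
import http.client
import ssl

# Require TLS 1.2 or later for the connection.
context = ssl.create_default_context()
context.minimum_version = ssl.TLSVersion.TLSv1_2

# Placeholder host; substitute your resource's endpoint.
host = "your-resource-name.cognitiveservices.azure.com"

connection = http.client.HTTPSConnection(host, context=context)
connection.request("GET", "/")
response = connection.getresponse()
print(response.status)
connection.close()
```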
-## Next steps
-
-* Explore [Cognitive Services](./what-are-cognitive-services.md) and choose a service to get started.
cognitive-services Use Key Vault https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/use-key-vault.md
- Title: Develop Azure Cognitive Services applications with Key Vault
-description: Learn how to develop Cognitive Services applications securely by using Key Vault.
----- Previously updated : 09/13/2022
-zone_pivot_groups: programming-languages-set-twenty-eight
--
-# Develop Azure Cognitive Services applications with Key Vault
-
-Use this article to learn how to develop Cognitive Services applications securely by using [Azure Key Vault](../key-vault/general/overview.md).
-
-Key Vault reduces the chances that secrets may be accidentally leaked, because you won't store security information in your application.
-
-## Prerequisites
--
-* A valid Azure subscription - [Create one for free](https://azure.microsoft.com/free)
-* [Visual Studio IDE](https://visualstudio.microsoft.com/vs/)
-* An [Azure Key Vault](../key-vault/general/quick-create-portal.md)
-* [A multi-service resource or a resource for a specific service](./cognitive-services-apis-create-account.md)
---
-* A valid Azure subscription - [Create one for free](https://azure.microsoft.com/free).
-* [Python 3.7 or later](https://www.python.org/)
-* [Azure CLI](/cli/azure/install-azure-cli) or [Azure PowerShell](/powershell/azure/install-azure-powershell)
-* An [Azure Key Vault](../key-vault/general/quick-create-portal.md)
-* [A multi-service resource or a resource for a specific service](./cognitive-services-apis-create-account.md)
---
-* A valid Azure subscription - [Create one for free](https://azure.microsoft.com/free).
-* [Java Development Kit (JDK) version 8 or above](/azure/developer/java/fundamentals/)
-* [Azure CLI](/cli/azure/install-azure-cli) or [Azure PowerShell](/powershell/azure/install-azure-powershell)
-* An [Azure Key Vault](../key-vault/general/quick-create-portal.md)
-* [A multi-service resource or a resource for a specific service](./cognitive-services-apis-create-account.md)
---
-* A valid Azure subscription - [Create one for free](https://azure.microsoft.com/free).
-* [Current Node.js v14 LTS or later](https://nodejs.org/)
-* [Azure CLI](/cli/azure/install-azure-cli) or [Azure PowerShell](/powershell/azure/install-azure-powershell)
-* An [Azure Key Vault](../key-vault/general/quick-create-portal.md)
-* [A multi-service resource or a resource for a specific service](./cognitive-services-apis-create-account.md)
--
-> [!NOTE]
-> Review the documentation and quickstart articles for the Cognitive Service you're using to get an understanding of:
-> * The credentials and other information you will need to send API calls.
-> * The packages and code you will need to run your application.
-
-## Get your credentials from your Cognitive Services resource
-
-Before you add your credential information to your Azure key vault, you need to retrieve it from your Cognitive Services resource. For example, if your service needs a key and endpoint, you would find them by using the following steps:
-
-1. Navigate to your Azure resource in the [Azure portal](https://portal.azure.com/).
-1. From the collapsible menu on the left, select **Keys and Endpoint**.
-
- :::image type="content" source="language-service/custom-text-classification/media/get-endpoint-azure.png" alt-text="A screenshot showing the key and endpoint page in the Azure portal." lightbox="language-service/custom-text-classification/media/get-endpoint-azure.png":::
-
-Some Cognitive Services require different information to authenticate API calls, such as a key and region. Make sure to retrieve this information before continuing on.
-
-## Add your credentials to your key vault
-
-For your application to retrieve and use your credentials to authenticate API calls, you will need to add them to your [key vault secrets](../key-vault/secrets/about-secrets.md).
-
-Repeat these steps to generate a secret for each required resource credential. For example, a key and endpoint. These secret names will be used later to authenticate your application.
-
-1. Open a new browser tab or window. Navigate to your key vault in the <a href="https://portal.azure.com/#create/Microsoft.CognitiveServicesSpeechServices" title="Go to Azure portal" target="_blank">Azure portal</a>.
-1. From the collapsible menu on the left, select **Objects** > **Secrets**.
-1. Select **Generate/Import**.
-
- :::image type="content" source="media/key-vault/store-secrets.png" alt-text="A screenshot showing the key vault key page in the Azure portal." lightbox="media/key-vault/store-secrets.png":::
-
-1. On the **Create a secret** screen, enter the following values:
-
- |Name | Value |
- |||
- |Upload options | Manual |
- |Name | A secret name for your key or endpoint. For example: "CognitiveServicesKey" or "CognitiveServicesEndpoint" |
- |Value | Your Azure Cognitive Services resource key or endpoint. |
-
- Later your application will use the secret "Name" to securely access the "Value".
-
-1. Leave the other values as their defaults. Select **Create**.
-
- >[!TIP]
- > Make sure to remember the names that you set for your secrets, as you'll use them later in your application.
-
-You should now have named secrets for your resource information.
-
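If you prefer to create the secrets from code rather than the portal, a minimal Python sketch with the `azure-keyvault-secrets` and `azure-identity` libraries, assuming the placeholder vault and secret names shown above, looks like this:

```python
from azure.identity import DefaultAzureCredential
from azure.keyvault.secrets import SecretClient

# Placeholder values; use your own key vault name and resource values.
vault_url = "https://Your-Key-Vault-Name.vault.azure.net"
client = SecretClient(vault_url=vault_url, credential=DefaultAzureCredential())

# Store the resource key and endpoint under the secret names you'll use later.
client.set_secret("CognitiveServicesKey", "<your-resource-key>")
client.set_secret("CognitiveServicesEndpoint", "<your-resource-endpoint>")

print("Secrets created.")
```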
-## Create an environment variable for your key vault's name
-
-We recommend creating an environment variable for your Azure key vault's name. Your application will read this environment variable at runtime to retrieve your key and endpoint information.
-
-To set environment variables, use one of the following commands, where `KEY_VAULT_NAME` is the name of the environment variable, and replace `Your-Key-Vault-Name` with the name of your key vault, which will be stored in the environment variable.
-
-# [Azure CLI](#tab/azure-cli)
-
-Create and assign a persisted environment variable with your key vault name as the value.
-
-```CMD
-setx KEY_VAULT_NAME "Your-Key-Vault-Name"
-```
-
-In a new instance of the **Command Prompt**, read the environment variable.
-
-```CMD
-echo %KEY_VAULT_NAME%
-```
-
-# [PowerShell](#tab/powershell)
-
-Create and assign a persisted environment variable. Replace `Your-Key-Vault-Name` with the name of your key vault.
-
-```powershell
-[System.Environment]::SetEnvironmentVariable('KEY_VAULT_NAME', 'Your-Key-Vault-Name', 'User')
-```
-
-In a new instance of **Windows PowerShell**, read the environment variable.
-
-```powershell
-[System.Environment]::GetEnvironmentVariable('KEY_VAULT_NAME')
-```
----
-## Authenticate to Azure using Visual Studio
-
-Developers using Visual Studio 2017 or later can authenticate an Azure Active Directory account through Visual Studio. This enables you to access secrets in your key vault by signing into your Azure subscription from within the IDE.
-
-To authenticate in Visual Studio, select **Tools** from the top navigation menu, and select **Options**. Navigate to the **Azure Service Authentication** option to sign in with your user name and password.
-
-## Authenticate using the command line
--
-## Create a new C# application
-
-Using the Visual Studio IDE, create a new .NET Core console app. This will create a "Hello World" project with a single C# source file: `program.cs`.
-
-Install the following client libraries by right-clicking on the solution in the **Solution Explorer** and selecting **Manage NuGet Packages**. In the package manager that opens, select **Browse**, search for the following libraries, and select **Install** for each:
-
-* `Azure.Security.KeyVault.Secrets`
-* `Azure.Identity`
-
-## Import the example code
-
-Copy the following example code into your `program.cs` file. Replace `Your-Key-Secret-Name` and `Your-Endpoint-Secret-Name` with the secret names that you set in your key vault.
-
-```csharp
-using System;
-using System.Threading.Tasks;
-using Azure;
-using Azure.Identity;
-using Azure.Security.KeyVault.Secrets;
-using System.Net;
-
-namespace key_vault_console_app
-{
- class Program
- {
- static async Task Main(string[] args)
- {
- //Name of your key vault
- var keyVaultName = Environment.GetEnvironmentVariable("KEY_VAULT_NAME");
-
- //variables for retrieving the key and endpoint from your key vault.
- //Set these variables to the names you created for your secrets
- const string keySecretName = "Your-Key-Secret-Name";
- const string endpointSecretName = "Your-Endpoint-Secret-Name";
-
- //Endpoint for accessing your key vault
- var kvUri = $"https://{keyVaultName}.vault.azure.net";
-
- var keyVaultClient = new SecretClient(new Uri(kvUri), new DefaultAzureCredential());
-
- Console.WriteLine($"Retrieving your secrets from {keyVaultName}.");
-
- //Key and endpoint secrets retrieved from your key vault
- var keySecret = await keyVaultClient.GetSecretAsync(keySecretName);
- var endpointSecret = await keyVaultClient.GetSecretAsync(endpointSecretName);
- Console.WriteLine($"Your key secret value is: {keySecret.Value.Value}");
- Console.WriteLine($"Your endpoint secret value is: {endpointSecret.Value.Value}");
- Console.WriteLine("Secrets retrieved successfully");
-
- }
- }
-}
-```
-
-## Run the application
-
-Run the application by selecting the **Debug** button at the top of Visual Studio. Your key and endpoint secrets will be retrieved from your key vault.
-
-## Send a test Language service call (optional)
-
-If you're using a multi-service resource or Language resource, you can update [your application](#create-a-new-c-application) by following these steps to send an example Named Entity Recognition call by retrieving a key and endpoint from your key vault.
-
-1. Install the `Azure.AI.TextAnalytics` library by right-clicking on the solution in the **Solution Explorer** and selecting **Manage NuGet Packages**. In the package manager that opens, select **Browse**, search for `Azure.AI.TextAnalytics`, and select **Install**.
-
-1. Add the following directive to the top of your `program.cs` file.
-
- ```csharp
- using Azure.AI.TextAnalytics;
- ```
-
-1. Add the following code sample to your application.
-
- ```csharp
- // Example method for extracting named entities from text
- private static void EntityRecognitionExample(string keySecret, string endpointSecret)
- {
- //String to be sent for Named Entity Recognition
- var exampleString = "I had a wonderful trip to Seattle last week.";
-
- AzureKeyCredential azureKeyCredential = new AzureKeyCredential(keySecret);
- Uri endpoint = new Uri(endpointSecret);
- var languageServiceClient = new TextAnalyticsClient(endpoint, azureKeyCredential);
-
- Console.WriteLine($"Sending a Named Entity Recognition (NER) request");
- var response = languageServiceClient.RecognizeEntities(exampleString);
- Console.WriteLine("Named Entities:");
- foreach (var entity in response.Value)
- {
- Console.WriteLine($"\tText: {entity.Text},\tCategory: {entity.Category},\tSub-Category: {entity.SubCategory}");
- Console.WriteLine($"\t\tScore: {entity.ConfidenceScore:F2},\tLength: {entity.Length},\tOffset: {entity.Offset}\n");
- }
- }
- ```
-
-3. Add the following code to call `EntityRecognitionExample()` from your main method, with your key and endpoint values.
-
- ```csharp
- EntityRecognitionExample(keySecret.Value.Value, endpointSecret.Value.Value);
- ```
-
-4. Run the application.
---
-## Authenticate your application
--
-## Create a Python application
-
-Create a new folder named `keyVaultExample`. Then use your preferred code editor to create a file named `program.py` inside the newly created folder.
-
-### Install Key Vault and Language service packages
-
-1. In a terminal or command prompt, navigate to your project folder and install the Azure Active Directory identity library:
-
- ```terminal
- pip install azure-identity
- ```
-
-1. Install the Key Vault secrets library:
-
- ```terminal
- pip install azure-keyvault-secrets
- ```
-
-## Import the example code
-
-Add the following code sample to the file named `program.py`. Replace `Your-Key-Secret-Name` and `Your-Endpoint-Secret-Name` with the secret names that you set in your key vault.
-
-```python
-import os
-from azure.keyvault.secrets import SecretClient
-from azure.identity import DefaultAzureCredential
-from azure.core.credentials import AzureKeyCredential
-
-keyVaultName = os.environ["KEY_VAULT_NAME"]
-
-# Set these variables to the names you created for your secrets
-keySecretName = "Your-Key-Secret-Name"
-endpointSecretName = "Your-Endpoint-Secret-Name"
-
-# URI for accessing key vault
-KVUri = f"https://{keyVaultName}.vault.azure.net"
-
-# Instantiate the client and retrieve secrets
-credential = DefaultAzureCredential()
-kv_client = SecretClient(vault_url=KVUri, credential=credential)
-
-print(f"Retrieving your secrets from {keyVaultName}.")
-
-retrieved_key = kv_client.get_secret(keySecretName).value
-retrieved_endpoint = kv_client.get_secret(endpointSecretName).value
-
-print(f"Your secret key value is {retrieved_key}.");
-print(f"Your secret endpoint value is {retrieved_endpoint}.");
-
-```
-
-## Run the application
-
-Use the following command to run the application. Your key and endpoint secrets will be retrieved from your key vault.
-
-```terminal
-python ./program.py
-```
-
-## Send a test Language service call (optional)
-
-If you're using a multi-service resource or Language resource, you can update [your application](#create-a-python-application) by following these steps to send an example Named Entity Recognition call by retrieving a key and endpoint from your key vault.
-
-1. Install the Language service library:
-
- ```console
- pip install azure-ai-textanalytics==5.1.0
- ```
-
-1. Add the following code to your application
-
- ```python
- from azure.ai.textanalytics import TextAnalyticsClient
- # Authenticate the key vault secrets client using your key and endpoint
- azure_key_credential = AzureKeyCredential(retrieved_key)
- # Now you can use key vault credentials with the Language service
- language_service_client = TextAnalyticsClient(
- endpoint=retrieved_endpoint,
- credential=azure_key_credential)
-
- # Example of recognizing entities from text
-
- print("Sending NER request")
-
- try:
- documents = ["I had a wonderful trip to Seattle last week."]
- result = language_service_client.recognize_entities(documents = documents)[0]
- print("Named Entities:\n")
- for entity in result.entities:
- print("\tText: \t", entity.text, "\tCategory: \t", entity.category, "\tSubCategory: \t", entity.subcategory,
- "\n\tConfidence Score: \t", round(entity.confidence_score, 2), "\tLength: \t", entity.length, "\tOffset: \t", entity.offset, "\n")
-
- except Exception as err:
- print("Encountered exception. {}".format(err))
- ```
-
-1. Run the application.
---
-## Authenticate your application
--
-## Create a Java application
-
-In your preferred IDE, create a new Java console application project, and create a class named `Example`.
-
-## Add dependencies
-
-In your project, add the following dependencies to your `pom.xml` file.
-
-```xml
-<dependencies>
-
- <dependency>
- <groupId>com.azure</groupId>
- <artifactId>azure-security-keyvault-secrets</artifactId>
- <version>4.2.3</version>
- </dependency>
- <dependency>
- <groupId>com.azure</groupId>
- <artifactId>azure-identity</artifactId>
- <version>1.2.0</version>
- </dependency>
- </dependencies>
-```
-
-## Import the example code
-
-Copy the following code into a file named `Example.java`. Replace `Your-Key-Secret-Name` and `Your-Endpoint-Secret-Name` with the secret names that you set in your key vault.
-
-```java
-import com.azure.identity.DefaultAzureCredentialBuilder;
-import com.azure.security.keyvault.secrets.SecretClient;
-import com.azure.security.keyvault.secrets.SecretClientBuilder;
-import com.azure.core.credential.AzureKeyCredential;
-
-public class Example {
-
- public static void main(String[] args) {
-
- String keyVaultName = System.getenv("KEY_VAULT_NAME");
- String keyVaultUri = "https://" + keyVaultName + ".vault.azure.net";
-
- //variables for retrieving the key and endpoint from your key vault.
- //Set these variables to the names you created for your secrets
- String keySecretName = "Your-Key-Secret-Name";
- String endpointSecretName = "Your-Endpoint-Secret-Name";
-
- //Create key vault secrets client
- SecretClient secretClient = new SecretClientBuilder()
- .vaultUrl(keyVaultUri)
- .credential(new DefaultAzureCredentialBuilder().build())
- .buildClient();
-
- //retrieve key and endpoint from key vault
- String keyValue = secretClient.getSecret(keySecretName).getValue();
- String endpointValue = secretClient.getSecret(endpointSecretName).getValue();
- System.out.printf("Your secret key value is: %s", keyValue)
- System.out.printf("Your secret endpoint value is: %s", endpointValue)
- }
-}
-```
-
-## Send a test Language service call (optional)
-
-If you're using a multi-service resource or Language resource, you can update [your application](#create-a-java-application) by following these steps to send an example Named Entity Recognition call by retrieving a key and endpoint from your key vault.
-
-1. In your application, add the following dependency:
-
- ```xml
- <dependency>
- <groupId>com.azure</groupId>
- <artifactId>azure-ai-textanalytics</artifactId>
- <version>5.1.12</version>
- </dependency>
- ```
-
-1. Add the following import statements to your file.
-
- ```java
- import com.azure.ai.textanalytics.models.*;
- import com.azure.ai.textanalytics.TextAnalyticsClientBuilder;
- import com.azure.ai.textanalytics.TextAnalyticsClient;
- ```
-
-1. Add the following code to the `main()` method in your application:
-
- ```java
-
- TextAnalyticsClient languageClient = new TextAnalyticsClientBuilder()
- .credential(new AzureKeyCredential(keyValue))
- .endpoint(endpointValue)
- .buildClient();
-
- // Example for recognizing entities in text
- String text = "I had a wonderful trip to Seattle last week.";
-
- for (CategorizedEntity entity : languageClient.recognizeEntities(text)) {
- System.out.printf(
- "Recognized entity: %s, entity category: %s, entity sub-category: %s, score: %s, offset: %s, length: %s.%n",
- entity.getText(),
- entity.getCategory(),
- entity.getSubcategory(),
- entity.getConfidenceScore(),
- entity.getOffset(),
- entity.getLength());
- }
- ```
-
-1. Run your application.
---
-## Authenticate your application
--
-## Create a new Node.js application
-
-Create a Node.js application that uses your key vault.
-
-In a terminal, create a folder named `key-vault-js-example` and change into that folder:
-
-```terminal
-mkdir key-vault-js-example && cd key-vault-js-example
-```
-
-Initialize the Node.js project:
-
-```terminal
-npm init -y
-```
-
-### Install Key Vault and Language service packages
-
-1. Using the terminal, install the Azure Key Vault secrets library, [@azure/keyvault-secrets](https://www.npmjs.com/package/@azure/keyvault-secrets) for Node.js.
-
- ```terminal
- npm install @azure/keyvault-secrets
- ```
-
-1. Install the Azure Identity library, [@azure/identity](https://www.npmjs.com/package/@azure/identity) package to authenticate to a Key Vault.
-
- ```terminal
- npm install @azure/identity
- ```
-
-## Import the code sample
-
-Add the following code sample to a file named `index.js`. Replace `Your-Key-Secret-Name` and `Your-Endpoint-Secret-Name` with the secret names that you set in your key vault.
-
-```javascript
-const { SecretClient } = require("@azure/keyvault-secrets");
-const { DefaultAzureCredential } = require("@azure/identity");
-// Load the .env file if it exists
-const dotenv = require("dotenv");
-dotenv.config();
-
-async function main() {
- const credential = new DefaultAzureCredential();
-
- const keyVaultName = process.env["KEY_VAULT_NAME"];
- const url = "https://" + keyVaultName + ".vault.azure.net";
-
- const kvClient = new SecretClient(url, credential);
-
- // Set these variables to the names you created for your secrets
- const keySecretName = "Your-Key-Secret-Name";
- const endpointSecretName = "Your-Endpoint-Secret-Name";
-
- console.log("Retrieving secrets from ", keyVaultName);
- const retrievedKey = (await kvClient.getSecret(keySecretName)).value;
- const retrievedEndpoint = (await kvClient.getSecret(endpointSecretName)).value;
- console.log("Your secret key value is: ", retrievedKey);
- console.log("Your secret endpoint value is: ", retrievedEndpoint);
-}
-
-main().catch((error) => {
- console.error("An error occurred:", error);
- process.exit(1);
-});
-```
-
-## Run the sample application
-
-Use the following command to run the application. Your key and endpoint secrets will be retrieved from your key vault.
-
-```terminal
-node index.js
-```
-
-## Send a test Language service call (optional)
-
-If you're using a multi-service resource or Language resource, you can update [your application](#create-a-new-nodejs-application) by following these steps to send an example Named Entity Recognition call by retrieving a key and endpoint from your key vault.
-
-1. Install the Azure Cognitive Service for Language library, [@azure/ai-text-analytics](https://www.npmjs.com/package/@azure/ai-text-analytics/) to send API requests to the [Language service](./language-service/overview.md).
-
- ```terminal
- npm install @azure/ai-text-analytics@5.1.0
- ```
-
-2. Add the following code to your application:
-
- ```javascript
- const { TextAnalyticsClient, AzureKeyCredential } = require("@azure/ai-text-analytics");
- // Authenticate the language client with your key and endpoint
- const languageClient = new TextAnalyticsClient(retrievedEndpoint, new AzureKeyCredential(retrievedKey));
-
- // Example for recognizing entities in text
- console.log("Sending NER request")
- const entityInputs = [
- "I had a wonderful trip to Seattle last week."
- ];
- const entityResults = await languageClient.recognizeEntities(entityInputs);
- entityResults.forEach(document => {
- console.log(`Document ID: ${document.id}`);
- document.entities.forEach(entity => {
- console.log(`\tName: ${entity.text} \tCategory: ${entity.category} \tSubcategory: ${entity.subCategory ? entity.subCategory : "N/A"}`);
- console.log(`\tScore: ${entity.confidenceScore}`);
- });
- });
- ```
-
-3. Run the application.
--
-## Next steps
-
-* See [What are Cognitive Services](./what-are-cognitive-services.md) for available features you can develop along with [Azure Key Vault](../key-vault/general/index.yml).
-* For additional information on secure application development, see:
- * [Best practices for using Azure Key Vault](../key-vault/general/best-practices.md)
- * [Cognitive Services security](cognitive-services-security.md)
- * [Azure security baseline for Cognitive Services](/security/benchmark/azure/baselines/cognitive-services-security-baseline?toc=/azure/cognitive-services/TOC.json)
cognitive-services What Are Cognitive Services https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/what-are-cognitive-services.md
- Title: What are Azure Cognitive Services?-
-description: Cognitive Services makes AI accessible to every developer without requiring machine-learning and data-science expertise. You make an API call from your application to add the ability to see (advanced image search and recognition), hear, speak, search, and make decisions in your apps.
---
-keywords: cognitive services, cognitive intelligence, cognitive solutions, ai services, cognitive understanding, cognitive features
-- Previously updated : 07/04/2023----
-# What are Azure Cognitive Services?
-
-Azure Cognitive Services are cloud-based artificial intelligence (AI) services that help developers build cognitive intelligence into applications without having direct AI or data science skills or knowledge. They are available through REST APIs and client library SDKs in popular development languages. Azure Cognitive Services enables developers to easily add cognitive features into their applications with cognitive solutions that can see, hear, speak, and analyze.
-
-## Categories of Cognitive Services
-
-Cognitive Services can be categorized into five main areas:
-
-* Vision
-* Speech
-* Language
-* Decision
-* Azure OpenAI Service
-
-See the tables below to learn about the services offered within those categories.
-
-## Vision APIs
-
-|Service Name|Service Description|Quickstart
-|:--|:|--|
-|[Computer Vision](./computer-vision/index.yml "Computer Vision")|The Computer Vision service provides you with access to advanced cognitive algorithms for processing images and returning information.| [Computer Vision quickstart](./computer-vision/quickstarts-sdk/client-library.md)|
-|[Custom Vision](./custom-vision-service/index.yml "Custom Vision Service")|The Custom Vision Service lets you build, deploy, and improve your own image classifiers. An image classifier is an AI service that applies labels to images, based on their visual characteristics. | [Custom Vision quickstart](./custom-vision-service/getting-started-build-a-classifier.md)|
-|[Face](./computer-vision/index-identity.yml "Face")| The Face service provides access to advanced face algorithms, enabling face attribute detection and recognition.| [Face quickstart](./face/quickstarts/client-libraries.md)|
-
-## Speech APIs
-
-|Service Name|Service Description| Quickstart|
-|:--|:|--|
-|[Speech service](./speech-service/index.yml "Speech service")|Speech service adds speech-enabled features to applications. Speech service includes various capabilities like speech to text, text to speech, speech translation, and many more.| Go to the [Speech documentation](./speech-service/index.yml) to choose a subservice quickstart.|
-<!--
-|[Speaker Recognition API](./speech-service/speaker-recognition-overview.md "Speaker Recognition API") (Preview)|The Speaker Recognition API provides algorithms for speaker identification and verification.|
-|[Bing Speech](./speech-service/how-to-migrate-from-bing-speech.md "Bing Speech") (Retiring)|The Bing Speech API provides you with an easy way to create speech-enabled features in your applications.|
-|[Translator Speech](/azure/cognitive-services/translator-speech/ "Translator Speech") (Retiring)|Translator Speech is a machine translation service.|
>-
-## Language APIs
-
-|Service Name|Service Description| Quickstart|
-|:--|:|--|
-|[Language service](./language-service/index.yml "Language service")| Azure Language service provides several Natural Language Processing (NLP) features to understand and analyze text.| Go to the [Language documentation](./language-service/index.yml) to choose a subservice quickstart.|
-|[Translator](./translator/index.yml "Translator")|Translator provides machine-based text translation in near real time.| [Translator quickstart](./translator/quickstart-translator.md)|
-|[Language Understanding LUIS](./luis/index.yml "Language Understanding")|Language Understanding (LUIS) is a cloud-based conversational AI service that applies custom machine-learning intelligence to a user's conversational or natural language text to predict overall meaning and pull out relevant information. |[LUIS quickstart](./luis/luis-get-started-create-app.md)|
-|[QnA Maker](./qnamaker/index.yml "QnA Maker")|QnA Maker allows you to build a question and answer service from your semi-structured content.| [QnA Maker quickstart](./qnamaker/quickstarts/create-publish-knowledge-base.md) |
-
-## Decision APIs
-
-|Service Name|Service Description| Quickstart|
-|:--|:|--|
-|[Anomaly Detector](./anomaly-detector/index.yml "Anomaly Detector") |Anomaly Detector allows you to monitor and detect abnormalities in your time series data.| [Anomaly Detector quickstart](./anomaly-detector/quickstarts/client-libraries.md) |
-|[Content Moderator](./content-moderator/overview.md "Content Moderator")|Content Moderator provides monitoring for possible offensive, undesirable, and risky content. | [Content Moderator quickstart](./content-moderator/client-libraries.md)|
-|[Personalizer](./personalizer/index.yml "Personalizer")|Personalizer allows you to choose the best experience to show to your users, learning from their real-time behavior. |[Personalizer quickstart](./personalizer/quickstart-personalizer-sdk.md)|
-
-## Azure OpenAI
-
-|Service Name | Service Description| Quickstart|
-|:|:-|--|
-|[Azure OpenAI](./openai/index.yml "Azure OpenAI") |Powerful language models including the GPT-3, Codex and Embeddings model series for content generation, summarization, semantic search, and natural language to code translation. | [Azure OpenAI quickstart](./openai/quickstart.md) |
-
-## Create a Cognitive Services resource
-
-You can create a Cognitive Services resource with hands-on quickstarts using any of the following methods:
-
-* [Azure portal](cognitive-services-apis-create-account.md?tabs=multiservice%2Cwindows "Azure portal")
-* [Azure CLI](cognitive-services-apis-create-account-cli.md?tabs=windows "Azure CLI")
-* [Azure SDK client libraries](cognitive-services-apis-create-account-client-library.md?tabs=windows "cognitive-services-apis-create-account-client-library?pivots=programming-language-csharp")
-* [Azure Resource Manager (ARM template)](./create-account-resource-manager-template.md?tabs=portal "Azure Resource Manager (ARM template)")
-
-## Use Cognitive Services in different development environments
-
-With Azure and Cognitive Services, you have access to several development options, such as:
-
-* Automation and integration tools like Logic Apps and Power Automate.
-* Deployment options such as Azure Functions and the App Service.
-* Cognitive Services Docker containers for secure access.
-* Tools like Apache Spark, Azure Databricks, Azure Synapse Analytics, and Azure Kubernetes Service for big data scenarios.
-
-To learn more, see [Cognitive Services development options](./cognitive-services-development-options.md).
-
-### Containers for Cognitive Services
-
-Azure Cognitive Services also provides several Docker containers that let you use the same APIs that are available from Azure, on-premises. These containers give you the flexibility to bring Cognitive Services closer to your data for compliance, security, or other operational reasons. For more information, see [Cognitive Services Containers](cognitive-services-container-support.md "Cognitive Services Containers").
-
-<!--
-## Subscription management
-
-Once you are signed in with your Microsoft Account, you can access [My subscriptions](https://www.microsoft.com/cognitive-services/subscriptions "My subscriptions") to show the products you are using, the quota remaining, and the ability to add additional products to your subscription.
-
-## Upgrade to unlock higher limits
-
-All APIs have a free tier, which has usage and throughput limits. You can increase these limits by using a paid offering and selecting the appropriate pricing tier option when deploying the service in the Azure portal. [Learn more about the offerings and pricing](https://azure.microsoft.com/pricing/details/cognitive-services/ "offerings and pricing"). You'll need to set up an Azure subscriber account with a credit card and a phone number. If you have a special requirement or simply want to talk to sales, click "Contact us" button at the top the pricing page.
>--
-## Regional availability
-
-The APIs in Cognitive Services are hosted on a growing network of Microsoft-managed data centers. You can find the regional availability for each API in [Azure region list](https://azure.microsoft.com/regions "Azure region list").
-
-Looking for a region we don't support yet? Let us know by filing a feature request on our [UserVoice forum](https://feedback.azure.com/d365community/forum/09041fae-0b25-ec11-b6e6-000d3a4f0858).
-
-## Language support
-
-Cognitive Services supports a wide range of cultural languages at the service level. You can find the language availability for each API in the [supported languages list](language-support.md "Supported languages list").
-
-## Security
-
-Azure Cognitive Services provides a layered security model, including [authentication](authentication.md "Authentication") with Azure Active Directory credentials, a valid resource key, and [Azure Virtual Networks](cognitive-services-virtual-networks.md "Azure Virtual Networks").
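As a minimal sketch of these layers, assuming a placeholder endpoint and using the Language client as an example, the same client can authenticate with either a resource key or an Azure Active Directory credential:

```python
from azure.ai.textanalytics import TextAnalyticsClient
from azure.core.credentials import AzureKeyCredential
from azure.identity import DefaultAzureCredential

# Placeholder endpoint; substitute your resource's endpoint.
endpoint = "https://your-resource-name.cognitiveservices.azure.com"

# Option 1: authenticate with the resource key.
key_client = TextAnalyticsClient(endpoint, AzureKeyCredential("<your-resource-key>"))

# Option 2: authenticate with Azure Active Directory (for example, a managed identity).
# Azure AD authentication requires the resource to use a custom subdomain endpoint.
aad_client = TextAnalyticsClient(endpoint, DefaultAzureCredential())
```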
-
-## Certifications and compliance
-
-Cognitive Services has been awarded certifications such as CSA STAR Certification, FedRAMP Moderate, and HIPAA BAA. You can [download](https://gallery.technet.microsoft.com/Overview-of-Azure-c1be3942 "Download") certifications for your own audits and security reviews.
-
-To understand privacy and data management, go to the [Trust Center](https://servicetrust.microsoft.com/ "Trust Center").
-
-## Help and support
-
-Cognitive Services provides several support options to help you move forward with creating intelligent applications. Cognitive Services also has a strong community of developers that can help answer your specific questions. For a full list of support options available to you, see [Cognitive Services support and help options](cognitive-services-support-options.md "Cognitive Services support and help options").
-
-## Next steps
-
-* Select a service from the tables above and learn how it can help you meet your development goals.
-* [Create a Cognitive Services resource using the Azure portal](cognitive-services-apis-create-account.md "Create a Cognitive Services account")
-* [Plan and manage costs for Cognitive Services](plan-manage-costs.md)
communication-services Play Action https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/call-automation/play-action.md
The play action provided through the call automation SDK allows you to play audio prompts to participants in the call. This action can be accessed through the server-side implementation of your application. The play action allows you to provide ACS access to your pre-recorded audio files with support for authentication. > [!NOTE]
-> ACS currently only supports WAV files formatted as mono channel audio recorded at 16KHz. You can create your own audio files using [Speech synthesis with Audio Content Creation tool](../../../cognitive-services/Speech-Service/how-to-audio-content-creation.md).
+> ACS currently only supports WAV files formatted as mono channel audio recorded at 16 kHz. You can create your own audio files using the [Speech synthesis with Audio Content Creation tool](../../../ai-services/Speech-Service/how-to-audio-content-creation.md).
The Play action allows you to provide access to a pre-recorded audio file of WAV format that ACS can access with support for authentication.
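As a quick check that a prompt meets the mono, 16 kHz WAV requirement, here is a minimal Python sketch with the standard `wave` module, assuming a placeholder file name:

```python
import wave

# Placeholder path to the prompt you plan to play.
with wave.open("prompt.wav", "rb") as prompt:
    channels = prompt.getnchannels()
    sample_rate = prompt.getframerate()

print(f"channels={channels}, sample_rate={sample_rate} Hz")
if channels != 1 or sample_rate != 16000:
    print("The file isn't mono 16 kHz; re-export it before using it with the play action.")
```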
communication-services Concepts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/chat/concepts.md
For more information, see [Push Notifications](../notifications.md).
## Build intelligent, AI powered chat experiences
-You can use [Azure Cognitive APIs](../../../cognitive-services/index.yml) with the Chat SDK to build use cases like:
+You can use [Azure Cognitive APIs](../../../ai-services/index.yml) with the Chat SDK to build use cases like:
- Enable users to chat with each other in different languages. - Help a support agent prioritize tickets by detecting a negative sentiment of an incoming message from a customer.
You can use [Azure Cognitive APIs](../../../cognitive-services/index.yml) with t
One way to achieve this is by having your trusted service act as a participant of a chat thread. Let's say you want to enable language translation. This service is responsible for listening to the messages exchanged by other participants [1], calling Cognitive APIs to translate content to desired language[2,3] and sending the translated result as a message in the chat thread[4].
-This way, the message history contains both original and translated messages. In the client application, you can add logic to show the original or translated message. See [this quickstart](../../../cognitive-services/translator/quickstart-translator.md) to understand how to use Cognitive APIs to translate text to different languages.
+This way, the message history contains both original and translated messages. In the client application, you can add logic to show the original or translated message. See [this quickstart](../../../ai-services/translator/quickstart-text-rest-api.md) to understand how to use Cognitive APIs to translate text to different languages.
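A minimal Python sketch of steps [2,3], assuming the Translator resource key and region are stored in illustrative environment variables:

```python
import os
import uuid
import requests

# Illustrative environment variable names for the Translator resource.
key = os.environ["TRANSLATOR_KEY"]
region = os.environ["TRANSLATOR_REGION"]

def translate(text, to_language):
    """Translate a single chat message and return the translated text."""
    response = requests.post(
        "https://api.cognitive.microsofttranslator.com/translate",
        params={"api-version": "3.0", "to": to_language},
        headers={
            "Ocp-Apim-Subscription-Key": key,
            "Ocp-Apim-Subscription-Region": region,
            "Content-Type": "application/json",
            "X-ClientTraceId": str(uuid.uuid4()),
        },
        json=[{"text": text}],
    )
    response.raise_for_status()
    return response.json()[0]["translations"][0]["text"]

print(translate("Hello, how can I help you today?", "fr"))
```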
:::image type="content" source="../media/chat/cognitive-services.png" alt-text="Diagram showing Cognitive Services interacting with Communication Services.":::
confidential-computing Quick Create Confidential Vm Azure Cli Amd https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/confidential-computing/quick-create-confidential-vm-azure-cli-amd.md
Create a VM with the [az vm create](/cli/azure/vm) command.
The following example creates a VM named *myVM* and adds a user account named *azureuser*. The `--generate-ssh-keys` parameter is used to automatically generate an SSH key, and put it in the default key location(*~/.ssh*). To use a specific set of keys instead, use the `--ssh-key-values` option. For `size`, select a confidential VM size. For more information, see [supported confidential VM families](virtual-machine-solutions-amd.md).
-Choose `VMGuestStateOnly` for no OS disk confidential encryption. Or, choose `DiskWithVMGuestState` for OS disk confidential encryption with a platform-managed key. Enabling secure boot is optional, but recommended. For more information, see [secure boot and vTPM](../virtual-machines/trusted-launch.md). For more information on disk encryption, see [confidential OS disk encryption](confidential-vm-overview.md).
+Choose `VMGuestStateOnly` for no OS disk confidential encryption. Or, choose `DiskWithVMGuestState` for OS disk confidential encryption with a platform-managed key. Enabling secure boot is optional, but recommended. For more information, see [secure boot and vTPM](../virtual-machines/trusted-launch.md). For more information on disk encryption and encryption at host, see [confidential OS disk encryption](confidential-vm-overview.md) and [encryption at host](/azure/virtual-machines/linux/disks-enable-host-based-encryption-cli).
```azurecli-interactive az vm create \
az vm create \
--security-type ConfidentialVM \ --os-disk-security-encryption-type VMGuestStateOnly \ --enable-secure-boot true \
+ --encryption-at-host true
``` It takes a few minutes to create the VM and supporting resources. The following example output shows the VM create operation was successful.
connectors Connectors Native Http Swagger https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/connectors/connectors-native-http-swagger.md
With the built-in **HTTP + Swagger** operation and [Azure Logic Apps](../logic-a
* The Swagger file must have [Cross-Origin Resource Sharing (CORS)](/rest/api/storageservices/cross-origin-resource-sharing--cors--support-for-the-azure-storage-services) enabled.
- The examples in this topic use the [Cognitive Services Face API](../cognitive-services/face/overview.md), which requires a [Cognitive Services account and access key](../cognitive-services/cognitive-services-apis-create-account.md).
+ The examples in this topic use [Azure AI Face](../ai-services/computer-vision/overview-identity.md), which requires an [Azure AI services resource key and region](../ai-services/multi-service-resource.md?pivots=azportal).
> [!NOTE] > To reference a Swagger file that's unhosted or that doesn't meet the security and cross-origin requirements,
container-apps Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/policy-reference.md
Title: Built-in policy definitions for Azure Container Apps
description: Lists Azure Policy built-in policy definitions for Azure Container Apps. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 07/06/2023 Last updated : 07/18/2023
container-instances Container Instances Environment Variables https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-instances/container-instances-environment-variables.md
New-AzContainerGroup `
Now run the following [New-AzContainerGroup][new-Azcontainergroup] command. This one specifies the *NumWords* and *MinLength* environment variables after populating an array variable, `envVars`: ```azurepowershell-interactive
-$envVars = @{'NumWords'='5';'MinLength'='8'}
-New-AzContainerGroup `
- -ResourceGroupName myResourceGroup `
- -Name mycontainer2 `
- -Image mcr.microsoft.com/azuredocs/aci-wordcount:latest `
- -RestartPolicy OnFailure `
- -EnvironmentVariable $envVars
+$envVars = @(
+ New-AzContainerInstanceEnvironmentVariableObject -Name "NumWords" -Value "5"
+ New-AzContainerInstanceEnvironmentVariableObject -Name "MinLength" -Value "8"
+)
+
+# Depending on your Az.ContainerInstance version, you may also need to pass -Location and -OsType.
+$containerGroup = New-AzContainerGroup -ResourceGroupName "myResourceGroup" `
+    -Name "mycontainer2" `
+    -RestartPolicy "OnFailure" `
+    -Container @(
+        New-AzContainerInstanceObject -Name "mycontainer2" `
+            -Image "mcr.microsoft.com/azuredocs/aci-wordcount:latest" `
+            -EnvironmentVariable $envVars
+    )
Once both containers' state is *Terminated* (use `Get-AzContainerGroup` to check state), pull their logs with the [Get-AzContainerInstanceLog][azure-instance-log] command.
container-instances Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-instances/policy-reference.md
Previously updated : 07/06/2023 Last updated : 07/18/2023 # Azure Policy built-in definitions for Azure Container Instances
container-registry Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-registry/policy-reference.md
Title: Built-in policy definitions for Azure Container Registry
description: Lists Azure Policy built-in policy definitions for Azure Container Registry. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 07/06/2023 Last updated : 07/18/2023
container-registry Tutorial Registry Cache https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-registry/tutorial-registry-cache.md
Cache for ACR currently supports the following upstream registries:
| Docker | Supports both authenticated pulls and unauthenticated pulls. | Azure CLI, Azure portal | | Microsoft Artifact Registry | Supports unauthenticated pulls only. | Azure CLI, Azure portal | | ECR Public | Supports unauthenticated pulls only. | Azure CLI |
-| Quay.io | Supports both authenticated pulls and unauthenticated pulls. | Azure CLI |
| GitHub Container Registry | Supports both authenticated pulls and unauthenticated pulls. | Azure CLI | ## Preview Limitations
container-registry Tutorial Troubleshoot Registry Cache https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-registry/tutorial-troubleshoot-registry-cache.md
The Azure portal autofills these fields for you. However, many Docker repositori
## Unhealthy Credential Set
-Credential sets are a set of Key Vault secrets that operate as a Username and Password for private repositories. Unhealthy Credential sets are often a result of these secrets no longer being valid. Inside the Azure portal you can select the credential set, to edit and apply changes.
+Credential sets are a set of Key Vault secrets that operate as a username and password for private repositories. Unhealthy credential sets are often a result of these secrets no longer being valid. In the Azure portal, you can select the credential set to edit and apply changes.
- Verify the secrets in Azure Key Vault haven't expired. - Verify the secrets in Azure Key Vault are valid.
Cache for ACR currently supports the following upstream registries:
| Docker | Supports both authenticated pulls and unauthenticated pulls. | Azure CLI, Azure portal | | Microsoft Artifact Registry | Supports unauthenticated pulls only. | Azure CLI, Azure portal | | ECR Public | Supports unauthenticated pulls only. | Azure CLI |
-| Quay.io | Supports both authenticated pulls and unauthenticated pulls. | Azure CLI |
| GitHub Container Registry | Supports both authenticated pulls and unauthenticated pulls. | Azure CLI |
cosmos-db Migration Options https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/mongodb/vcore/migration-options.md
Last updated 03/09/2023 + # Options for migrating data to Azure Cosmos DB for MongoDB vCore
+## Pre-migration assessment
+++
+Assessment involves finding out whether you're using the [features and syntax that are supported](./compatibility.md). The aim of this stage is to create a list of incompatibilities and warnings, if any. After you have the assessment results, you can try to address the findings during the rest of the migration planning.
+
+The [Azure Cosmos DB Migration for MongoDB extension](/sql/azure-data-studio/extensions/database-migration-for-mongo-extension) in Azure Data Studio helps you assess a MongoDB workload for migrating to Azure Cosmos DB for MongoDB. You can use this extension to run an end-to-end assessment on your workload and find out the actions that you may need to take to seamlessly migrate your workloads on Azure Cosmos DB. During the assessment of a MongoDB endpoint, the extension reports all the discovered resources.
++
+> [!NOTE]
+> We recommend that you go through [the supported features and syntax](./compatibility.md) in detail and perform a proof of concept prior to the actual migration.
++ This document describes the various options to lift and shift your MongoDB workloads to Azure Cosmos DB for MongoDB vCore-based offering. ## Native MongoDB tools (Offline)
cosmos-db Vector Search https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/mongodb/vcore/vector-search.md
Last updated 05/10/2023
[!INCLUDE[MongoDB vCore](../../includes/appliesto-mongodb-vcore.md)]
-Use vector search in Azure Cosmos DB for MongoDB vCore to seamlessly integrate your AI-based applications, including apps that you built by using [Azure OpenAI embeddings](../../../cognitive-services/openai/tutorials/embeddings.md), with your data that's stored in Azure Cosmos DB. Vector search enables you to efficiently store, index, and query high-dimensional vector data that's stored directly in Azure Cosmos DB for MongoDB vCore. It eliminates the need to transfer your data to more expensive alternatives for vector search capabilities.
+Use vector search in Azure Cosmos DB for MongoDB vCore to seamlessly integrate your AI-based applications, including apps that you built by using [Azure OpenAI embeddings](../../../ai-services/openai/tutorials/embeddings.md), with your data that's stored in Azure Cosmos DB. Vector search enables you to efficiently store, index, and query high-dimensional vector data that's stored directly in Azure Cosmos DB for MongoDB vCore. It eliminates the need to transfer your data to more expensive alternatives for vector search capabilities.
## What is vector search?
-Vector search is a method that helps you find similar items based on their data characteristics rather than by exact matches on a property field. This technique is useful in applications such as searching for similar text, finding related images, making recommendations, or even detecting anomalies. It works by taking the vector representations (lists of numbers) of your data that you created by using a machine learning model by using or an embeddings API. Examples of embeddings APIs are [Azure OpenAI Embeddings](/azure/cognitive-services/openai/how-to/embeddings) or [Hugging Face on Azure](https://azure.microsoft.com/solutions/hugging-face-on-azure/). It then measures the distance between the data vectors and your query vector. The data vectors that are closest to your query vector are the ones that are found to be most similar semantically.
+Vector search is a method that helps you find similar items based on their data characteristics rather than by exact matches on a property field. This technique is useful in applications such as searching for similar text, finding related images, making recommendations, or even detecting anomalies. It works by taking the vector representations (lists of numbers) of your data that you created by using a machine learning model or an embeddings API. Examples of embeddings APIs are [Azure OpenAI Embeddings](/azure/ai-services/openai/how-to/embeddings) or [Hugging Face on Azure](https://azure.microsoft.com/solutions/hugging-face-on-azure/). It then measures the distance between the data vectors and your query vector. The data vectors that are closest to your query vector are the ones that are found to be most similar semantically.
By integrating vector search capabilities natively, you can unlock the full potential of your data in applications that are built on top of the OpenAI API. You can also create custom-built solutions that use vector embeddings.
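As a minimal, service-agnostic illustration of the distance measurement described above (not the Cosmos DB API itself), cosine similarity between a query vector and a data vector can be computed like this:

```python
import math

def cosine_similarity(a, b):
    """Return the cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

# Toy 4-dimensional "embeddings"; real embeddings have hundreds of dimensions.
query = [0.1, 0.3, 0.5, 0.7]
document = [0.09, 0.31, 0.52, 0.68]
print(f"similarity = {cosine_similarity(query, document):.4f}")
```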
cosmos-db How To Delete By Partition Key https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/how-to-delete-by-partition-key.md
For example, suppose you've provisioned 1000 RU/s on a container. There's an ong
### Known issues For certain scenarios, the effects of a delete by partition key operation isn't guaranteed to be immediately reflected. The effect may be partially seen as the operation progresses. -- [Aggregate queries](query/aggregate-functions.md) that use the index - for example, COUNT queries - that are issued during an ongoing delete by partition key operation may contain the results of the documents to be deleted. This may occur until the delete operation is fully complete.
+- Aggregate queries that use the index - for example, COUNT queries - that are issued during an ongoing delete by partition key operation may contain the results of the documents to be deleted. This may occur until the delete operation is fully complete.
- Queries issued against the [analytical store](../analytical-store-introduction.md) during an ongoing delete by partition key operation may contain the results of the documents to be deleted. This may occur until the delete operation is fully complete. - [Continuous backup (point in time restore)](../continuous-backup-restore-introduction.md) - a restore that is triggered during an ongoing delete by partition key operation may contain the results of the documents to be deleted in the restored collection. It isn't recommended to use this preview feature if you have a scenario that requires continuous backup.
cosmos-db How To Use Change Feed Estimator https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/how-to-use-change-feed-estimator.md
Identifying this scenario helps understand if we need to scale our change feed p
## Implement the change feed estimator
-### As a push model for automatic notifications
+### [.NET](#tab/dotnet)
+
+#### As a push model for automatic notifications
Like the [change feed processor](./change-feed-processor.md), the change feed estimator can work as a push model. The estimator will measure the difference between the last processed item (defined by the state of the leases container) and the latest change in the container, and push this value to a delegate. The interval at which the measurement is taken can also be customized with a default value of 5 seconds.
An example of a delegate that receives the estimation is:
You can send this estimation to your monitoring solution and use it to understand how your progress is behaving over time.
-### As an on-demand detailed estimation
+#### As an on-demand detailed estimation
In contrast with the push model, there's an alternative that lets you obtain the estimation on demand. This model also provides more detailed information:
And whenever you want it, with the frequency you require, you can obtain the det
Each `ChangeFeedProcessorState` will contain the lease and lag information, and also who is the current instance owning it.
-## Estimator deployment
+#### Estimator deployment
The change feed estimator does not need to be deployed as part of your change feed processor, nor be part of the same project. We recommend deploying the estimator on an independent and completely different instance from your processors. A single estimator instance can track the progress for all the leases and instances in your change feed processor deployment. Each estimation will consume [request units](../request-units.md) from your [monitored and lease containers](change-feed-processor.md#components-of-the-change-feed-processor). A frequency of 1 minute in-between is a good starting point; the shorter the interval between estimations, the more request units consumed.
+### [Java](#tab/java)
+
+The provided example is a sample Java application that demonstrates an implementation of the Change Feed Processor together with estimation of the lag in processing change feed events. In the application, documents are inserted into one container (the "feed container") while another worker thread or worker application pulls the inserted documents from the feed container's Change Feed and operates on them in some way.
+
+The Change Feed Processor is built and started like this:
+[!code-java[](~/azure-cosmos-java-sql-api-samples/src/main/java/com/azure/cosmos/examples/changefeed/SampleChangeFeedEstimator.java?name=ChangeFeedProcessorBuilder)]
+
+Change Feed Processor lag checking can be performed on a separate application (like a health monitor) as long as the same input containers (feedContainer and leaseContainer) and the exact same lease prefix (```CONTAINER_NAME + "-lease"```) are used. The estimator code requires that the Change Feed Processor has had an opportunity to fully initialize the leaseContainer's documents.
+
+The estimator calculates the accumulated lag by retrieving the current state of the Change Feed Processor and summing up the estimated lag values for each event being processed:
+[!code-java[](~/azure-cosmos-java-sql-api-samples/src/main/java/com/azure/cosmos/examples/changefeed/SampleChangeFeedEstimator.java?name=EstimatedLag)]
+
+The total lag should initially be zero and should finally be greater than or equal to the number of documents created. The total lag value can be logged or used for further analysis, allowing you to monitor the performance of the change feed processing and identify any potential bottlenecks or delays in the system:
+[!code-java[](~/azure-cosmos-java-sql-api-samples/src/main/java/com/azure/cosmos/examples/changefeed/SampleChangeFeedEstimator.java?name=FinalLag)]
+
+An example of a delegate that receives changes and handles them with a lag is:
+[!code-java[](~/azure-cosmos-java-sql-api-samples/src/main/java/com/azure/cosmos/examples/changefeed/SampleChangeFeedEstimator.java?name=HandleChangesWithLag)]
+++ ## Additional resources * [Azure Cosmos DB SDK](sdk-dotnet-v3.md)
-* [Usage samples on GitHub](https://github.com/Azure/azure-cosmos-dotnet-v3/tree/master/Microsoft.Azure.Cosmos.Samples/Usage/ChangeFeed)
+* [Usage samples on GitHub (.NET)](https://github.com/Azure/azure-cosmos-dotnet-v3/tree/master/Microsoft.Azure.Cosmos.Samples/Usage/ChangeFeed)
+* [Usage samples on GitHub (Java)](https://github.com/Azure-Samples/azure-cosmos-java-sql-api-samples/tree/main/src/main/java/com/azure/cosmos/examples/changefeed)
* [Additional samples on GitHub](https://github.com/Azure-Samples/cosmos-dotnet-change-feed-processor) ## Next steps
cosmos-db Abs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/query/abs.md
Previously updated : 07/01/2023 Last updated : 07/17/2023
Returns a numeric expression.
The following example shows the results of using this function on three different numbers. :::code language="json" source="~/cosmos-db-nosql-query-samples/scripts/absolute-value/result.json":::
cosmos-db Acos https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/query/acos.md
Previously updated : 07/01/2023 Last updated : 07/17/2023
Returns a numeric expression.
The following example calculates the arccosine of the specified values using the function.
-```sql
-SELECT VALUE {
- arccosine: ACOS(-1)
-}
-```
-
-```json
-[
- {
- "arccosine": 3.141592653589793
- }
-]
-```
+ ## Remarks
cosmos-db Aggregate Avg https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/query/aggregate-avg.md
- Title: AVG in Azure Cosmos DB query language
-description: Learn about the Average (AVG) SQL system function in Azure Cosmos DB.
---- Previously updated : 12/02/2020----
-# AVG (Azure Cosmos DB)
-
-This aggregate function returns the average of the values in the expression.
-
-## Syntax
-
-```sql
-AVG(<numeric_expr>)
-```
-
-## Arguments
-
-*numeric_expr*
- Is a numeric expression.
-
-## Return types
-
-Returns a numeric expression.
-
-## Examples
-
-The following example returns the average value of `propertyA`:
-
-```sql
-SELECT AVG(c.propertyA)
-FROM c
-```
-
-## Remarks
-
-This system function will benefit from a [range index](../../index-policy.md#includeexclude-strategy). If any arguments in `AVG` are string, boolean, or null, the entire aggregate system function will return `undefined`. If any argument has an `undefined` value, it will not impact the `AVG` calculation.
-
-## Next steps
--- [System functions in Azure Cosmos DB](system-functions.yml)-- [Aggregate functions in Azure Cosmos DB](aggregate-functions.md)
cosmos-db Aggregate Count https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/query/aggregate-count.md
- Title: COUNT in Azure Cosmos DB query language
-description: Learn about the Count (COUNT) SQL system function in Azure Cosmos DB.
---- Previously updated : 12/02/2020----
-# COUNT (Azure Cosmos DB)
-
-This system function returns the count of the values in the expression.
-
-## Syntax
-
-```sql
-COUNT(<scalar_expr>)
-```
-
-## Arguments
-
-*scalar_expr*
- Any expression that results in a scalar value
-
-## Return types
-
-Returns a numeric (scalar) value
-
-## Examples
-
-The following example returns the total count of items in a container:
-
-```sql
-SELECT COUNT(1)
-FROM c
-```
-
-In the first example, the parameter of the `COUNT` function is any scalar value or expression, but the parameter does not influence the result. The first example passes in a scalar value of `1` to the `COUNT` function. This second example will produce an identical result even though a different scalar expression is used. In the second example, the scalar expression of `2 + 3` is passed in to the `COUNT` function, but the result will be equivalent to the first function.
-
-```sql
-SELECT COUNT(2 + 3)
-FROM c
-```
-
-## Remarks
-
-This system function will benefit from a [range index](../../index-policy.md#includeexclude-strategy) for any properties in the query's filter.
-
-## Next steps
--- [System functions in Azure Cosmos DB](system-functions.yml)-- [Aggregate functions in Azure Cosmos DB](aggregate-functions.md)
cosmos-db Aggregate Functions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/query/aggregate-functions.md
- Title: Aggregate functions in Azure Cosmos DB
-description: Learn about SQL aggregate function syntax, types of aggregate functions supported by Azure Cosmos DB.
----- Previously updated : 12/02/2020---
-# Aggregate functions in Azure Cosmos DB
-
-Aggregate functions perform a calculation on a set of values in the `SELECT` clause and return a single value. For example, the following query returns the count of items within a container:
-
-```sql
- SELECT COUNT(1)
- FROM c
-```
-
-## Types of aggregate functions
-
-The API for NoSQL supports the following aggregate functions. `SUM` and `AVG` operate on numeric values, and `COUNT`, `MIN`, and `MAX` work on numbers, strings, Booleans, and nulls.
-
-| Function | Description |
-|-|-|
-| [AVG](aggregate-avg.md) | Returns the average of the values in the expression. |
-| [COUNT](aggregate-count.md) | Returns the number of items in the expression. |
-| [MAX](aggregate-max.md) | Returns the maximum value in the expression. |
-| [MIN](aggregate-min.md) | Returns the minimum value in the expression. |
-| [SUM](aggregate-sum.md) | Returns the sum of all the values in the expression. |
--
-You can also return only the scalar value of the aggregate by using the VALUE keyword. For example, the following query returns the count of values as a single number:
-
-```sql
- SELECT VALUE COUNT(1)
- FROM Families f
-```
-
-The results are:
-
-```json
- [ 2 ]
-```
-
-You can also combine aggregations with filters. For example, the following query returns the count of items with the address state of `WA`.
-
-```sql
- SELECT VALUE COUNT(1)
- FROM Families f
- WHERE f.address.state = "WA"
-```
-
-The results are:
-
-```json
- [ 1 ]
-```
-
-## Remarks
-
-These aggregate system functions will benefit from a [range index](../../index-policy.md#includeexclude-strategy). If you expect to do an `AVG`, `COUNT`, `MAX`, `MIN`, or `SUM` on a property, you should [include the relevant path in the indexing policy](../../index-policy.md#includeexclude-strategy).
-
-## Next steps
--- [Introduction to Azure Cosmos DB](../../introduction.md)-- [System functions](system-functions.yml)-- [User defined functions](udfs.md)
cosmos-db Aggregate Max https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/query/aggregate-max.md
- Title: MAX in Azure Cosmos DB query language
-description: Learn about the Max (MAX) SQL system function in Azure Cosmos DB.
---- Previously updated : 12/02/2020----
-# MAX (Azure Cosmos DB)
-
-This aggregate function returns the maximum of the values in the expression.
-
-## Syntax
-
-```sql
-MAX(<scalar_expr>)
-```
-
-## Arguments
-
-*scalar_expr*
- Is a scalar expression.
-
-## Return types
-
-Returns a scalar expression.
-
-## Examples
-
-The following example returns the maximum value of `propertyA`:
-
-```sql
-SELECT MAX(c.propertyA)
-FROM c
-```
-
-## Remarks
-
-This system function will benefit from a [range index](../../index-policy.md#includeexclude-strategy). The arguments in `MAX` can be number, string, boolean, or null. Any undefined values will be ignored.
-
-When comparing different types data, the following priority order is used (in descending order):
--- string-- number-- boolean-- null-
-## Next steps
--- [System functions in Azure Cosmos DB](system-functions.yml)-- [Aggregate functions in Azure Cosmos DB](aggregate-functions.md)
cosmos-db Aggregate Min https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/query/aggregate-min.md
- Title: MIN in Azure Cosmos DB query language
-description: Learn about the Min (MIN) SQL system function in Azure Cosmos DB.
---- Previously updated : 12/02/2020----
-# MIN (Azure Cosmos DB)
-
-This aggregate function returns the minimum of the values in the expression.
-
-## Syntax
-
-```sql
-MIN(<scalar_expr>)
-```
-
-## Arguments
-
-*scalar_expr*
- Is a scalar expression.
-
-## Return types
-
-Returns a scalar expression.
-
-## Examples
-
-The following example returns the minimum value of `propertyA`:
-
-```sql
-SELECT MIN(c.propertyA)
-FROM c
-```
-
-## Remarks
-
-This system function will benefit from a [range index](../../index-policy.md#includeexclude-strategy). The arguments in `MIN` can be number, string, boolean, or null. Any undefined values will be ignored.
-
-When comparing different types data, the following priority order is used (in ascending order):
--- null-- boolean-- number-- string-
-## Next steps
--- [System functions in Azure Cosmos DB](system-functions.yml)-- [Aggregate functions in Azure Cosmos DB](aggregate-functions.md)
cosmos-db Aggregate Sum https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/query/aggregate-sum.md
- Title: SUM in Azure Cosmos DB query language
-description: Learn about the Sum (SUM) SQL system function in Azure Cosmos DB.
---- Previously updated : 12/02/2020----
-# SUM (Azure Cosmos DB)
-
-This aggregate function returns the sum of the values in the expression.
-
-## Syntax
-
-```sql
-SUM(<numeric_expr>)
-```
-
-## Arguments
-
-*numeric_expr*
- Is a numeric expression.
-
-## Return types
-
-Returns a numeric expression.
-
-## Examples
-
-The following example returns the sum of `propertyA`:
-
-```sql
-SELECT SUM(c.propertyA)
-FROM c
-```
-
-## Remarks
-
-This system function will benefit from a [range index](../../index-policy.md#includeexclude-strategy). If any arguments in `SUM` are string, boolean, or null, the entire aggregate system function will return `undefined`. If any argument has an `undefined` value, it will be not impact the `SUM` calculation.
-
-## Next steps
--- [System functions in Azure Cosmos DB](system-functions.yml)-- [Aggregate functions in Azure Cosmos DB](aggregate-functions.md)
cosmos-db Array Concat https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/query/array-concat.md
Title: ARRAY_CONCAT in Azure Cosmos DB query language
-description: Learn about how the Array Concat SQL system function in Azure Cosmos DB returns an array that is the result of concatenating two or more array values
-
+ Title: ARRAY_CONCAT
+
+description: An Azure Cosmos DB for NoSQL system function that returns an array that is the result of concatenating two or more array values
+++ - Previously updated : 03/03/2020--+ Last updated : 07/17/2023+
-# ARRAY_CONCAT (Azure Cosmos DB)
+
+# ARRAY_CONCAT (NoSQL query)
+ [!INCLUDE[NoSQL](../../includes/appliesto-nosql.md)]
- Returns an array that is the result of concatenating two or more array values.
-
+Returns an array that is the result of concatenating two or more array values.
+ ## Syntax
-
+ ```sql
-ARRAY_CONCAT (<arr_expr1>, <arr_expr2> [, <arr_exprN>])
+ARRAY_CONCAT(<array_expr_1>, <array_expr_2> [, <array_expr_N>])
```
-
+ ## Arguments
-
-*arr_expr*
- Is an array expression to concatenate to the other values. The `ARRAY_CONCAT` function requires at least two *arr_expr* arguments.
-
+
+| | Description |
+| | |
+| **`array_expr_1`** | The first expression in the list. |
+| **`array_expr_2`** | The second expression in the list. |
+| **`array_expr_N` *(Optional)*** | Optional expression\[s\], which can contain a variable number of expressions up to the `N`th item in the list. |
+
+> [!NOTE]
+> The `ARRAY_CONCAT` function requires at least two array expression arguments.
+ ## Return types
-
- Returns an array expression.
-
+
+Returns an array expression.
+ ## Examples
-
- The following example how to concatenate two arrays.
+The following example shows how to concatenate two arrays.
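As a rough sketch (the array literals here are placeholders chosen for illustration, not necessarily the values used in the linked sample), a concatenation query can look like this:

```sql
SELECT VALUE ARRAY_CONCAT(["apples", "strawberries"], ["bananas"])
```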
+ :::code language="json" source="~/cosmos-db-nosql-query-samples/scripts/array-concat/result.json":::
-
+ ## Remarks
-This system function will not utilize the index.
+- This function doesn't utilize the index.
## Next steps - [System functions Azure Cosmos DB](system-functions.yml) - [Introduction to Azure Cosmos DB](../../introduction.md)
+- [`ARRAY_SLICE`](array-slice.md)
cosmos-db Array Contains https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/query/array-contains.md
Previously updated : 07/01/2023 Last updated : 07/18/2023
Returns a boolean indicating whether the array contains the specified value. You
## Syntax ```sql
-ARRAY_CONTAINS (<array_expr>, <expr> [, <bool_expr>])
+ARRAY_CONTAINS(<array_expr>, <expr> [, <bool_expr>])
``` ## Arguments
Returns a boolean value.
## Examples The following example illustrates how to check for specific values or objects in an array using this function.
-
-```sql
-SELECT VALUE {
- containsItem: ARRAY_CONTAINS(["coats", "jackets", "sweatshirts"], "coats"),
- missingItem: ARRAY_CONTAINS(["coats", "jackets", "sweatshirts"], "hoodies"),
- containsFullMatchObject: ARRAY_CONTAINS([{ category: "shirts", color: "blue" }], { category: "shirts", color: "blue" }),
- missingFullMatchObject: ARRAY_CONTAINS([{ category: "shirts", color: "blue" }], { category: "shirts" }),
- containsPartialMatchObject: ARRAY_CONTAINS([{ category: "shirts", color: "blue" }], { category: "shirts" }, true),
- missingPartialMatchObject: ARRAY_CONTAINS([{ category: "shirts", color: "blue" }], { category: "shorts", color: "blue" }, true)
-}
-```
-
-```json
-[
- {
- "containsItem": true,
- "missingItem": false,
- "containsFullMatchObject": true,
- "missingFullMatchObject": false,
- "containsPartialMatchObject": true,
- "missingPartialMatchObject": false
- }
-]
-```
++ ## Remarks
cosmos-db Array Length https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/query/array-length.md
Previously updated : 07/01/2023 Last updated : 07/18/2023
Returns a numeric expression.
The following example illustrates how to get the length of an array using the function.
-```sql
-SELECT VALUE {
- length: ARRAY_LENGTH([70, 86, 92, 99, 85, 90, 82]),
- emptyLength: ARRAY_LENGTH([]),
- nullLength: ARRAY_LENGTH(null)
-}
-```
-```json
-[
- {
- "length": 7,
- "emptyLength": 0
- }
-]
-```
## Remarks
cosmos-db Array Slice https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/query/array-slice.md
Previously updated : 07/01/2023 Last updated : 07/18/2023
Returns an array expression.
The following example shows how to get different slices of an array using the function.
-```sql
-SELECT VALUE {
- sliceFromStart: ARRAY_SLICE([70, 86, 92, 99, 85, 90, 82], 0),
- sliceFromSecond: ARRAY_SLICE([70, 86, 92, 99, 85, 90, 82], 1),
- sliceFromLast: ARRAY_SLICE([70, 86, 92, 99, 85, 90, 82], -1),
- sliceFromSecondToLast: ARRAY_SLICE([70, 86, 92, 99, 85, 90, 82], -2),
- sliceThreeFromStart: ARRAY_SLICE([70, 86, 92, 99, 85, 90, 82], 0, 3),
- sliceTwelveFromStart: ARRAY_SLICE([70, 86, 92, 99, 85, 90, 82], 0, 12),
- sliceFiveFromThird: ARRAY_SLICE([70, 86, 92, 99, 85, 90, 82], 3, 5),
- sliceOneFromSecondToLast: ARRAY_SLICE([70, 86, 92, 99, 85, 90, 82], -2, 1)
-}
-```
-```json
-[
- {
- "sliceFromStart": [70, 86, 92, 99, 85, 90, 82],
- "sliceFromSecond": [86, 92, 99, 85, 90, 82],
- "sliceFromLast": [82],
- "sliceFromSecondToLast": [90, 82],
- "sliceThreeFromStart": [70, 86, 92],
- "sliceTwelveFromStart": [70, 86, 92, 99, 85, 90, 82],
- "sliceFiveFromThird": [99, 85, 90, 82],
- "sliceOneFromSecondToLast": [90]
- }
-]
-```
## Remarks
cosmos-db Asin https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/query/asin.md
Previously updated : 07/01/2023 Last updated : 07/18/2023
Returns a numeric expression.
The following example calculates the arcsine of the specified angle using the function.
-```sql
-SELECT VALUE {
- arcsine: ACOS(-1)
-}
-```
-
-```json
-[
- {
- "arcsine": 3.141592653589793
- }
-]
-```
+ ## Remarks
cosmos-db Atan https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/query/atan.md
Previously updated : 07/01/2023 Last updated : 07/18/2023
ATAN(<numeric_expr>)
| | Description | | | |
-| **`numeric_expr`** | A numeric expression. |
+| **`numeric_expr`** | A numeric expression. |
## Return types
Returns a numeric expression.
The following example calculates the arctangent of the specified angle using the function.
-```sql
-SELECT VALUE {
- arctangent: ATAN(-45.01)
-}
-```
-
-```json
-[
- {
- "arctangent": -1.5485826962062663
- }
-]
-```
+ ## Remarks
SELECT VALUE {
- [System functions Azure Cosmos DB](system-functions.yml) - [`TAN`](tan.md)
+- [`ATN2`](atn2.md)
cosmos-db Atn2 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/query/atn2.md
Title: ATN2 in Azure Cosmos DB query language
-description: Learn about how the ATN2 SQL system function in Azure Cosmos DB returns the principal value of the arc tangent of y/x, expressed in radians
-
+ Title: ATN2
+
+description: An Azure Cosmos DB for NoSQL system function that returns the trigonometric arctangent of y / x in radians.
+++ - Previously updated : 03/03/2020--+ Last updated : 07/18/2023+
-# ATN2 (Azure Cosmos DB)
+
+# ATN2 (NoSQL query)
+ [!INCLUDE[NoSQL](../../includes/appliesto-nosql.md)]
- Returns the principal value of the arc tangent of y/x, expressed in radians.
-
+Returns the principal value of the arctangent of `y/x`, expressed in radians.
+ ## Syntax
-
+ ```sql ATN2(<numeric_expr>, <numeric_expr>) ```
-
+ ## Arguments
-
-*numeric_expr*
- Is a numeric expression.
-
+
+| | Description |
+| | |
+| **`numeric_expr_y`** | A numeric expression for the `y` component. |
+| **`numeric_expr_x`** | A numeric expression for the `x` component. |
+ ## Return types
-
- Returns a numeric expression.
-
+
+Returns a numeric expression.
+ ## Examples
-
- The following example calculates the ATN2 for the specified x and y components.
-
-```sql
-SELECT ATN2(35.175643, 129.44) AS atn2
-```
-
- Here is the result set.
-
-```json
-[{"atn2": 1.3054517947300646}]
-```
+
+The following example calculates the arctangent for the specified `x` and `y` components.
++ ## Remarks
-This system function will not utilize the index.
+- This system function doesn't use the index.
## Next steps - [System functions Azure Cosmos DB](system-functions.yml)-- [Introduction to Azure Cosmos DB](../../introduction.md)
+- [`TAN`](tan.md)
+- [`ATAN`](atan.md)
cosmos-db Average https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/query/average.md
+
+ Title: AVG
+
+description: An Azure Cosmos DB for NoSQL system function that returns the average of multiple numeric values.
++++++ Last updated : 07/17/2023+++
+# AVG (NoSQL query)
++
+Returns the average of the values in the expression.
+
+## Syntax
+
+```sql
+AVG(<numeric_expr>)
+```
+
+## Arguments
+
+| | Description |
+| | |
+| **`numeric_expr`** | A numeric expression. |
+
+## Return types
+
+Returns a numeric expression.
+
+## Examples
+
+For this example, consider a container with multiple items that each contain a `price` field.
++
+In this example, the function is used to average the values of a specific field into a single aggregated value.
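A minimal sketch, assuming a container aliased here as `products` whose items carry the numeric `price` field mentioned above, could look like this:

```sql
SELECT VALUE AVG(p.price)
FROM products p
```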
+++
+## Remarks
+
+- This function benefits from a [range index](../../index-policy.md#includeexclude-strategy).
+- If any arguments in `AVG` are string, boolean, or null, the entire aggregate system function returns `undefined`.
+- If any individual argument has an `undefined` value, that value isn't included in the `AVG` calculation.
+
+## Next steps
+
+- [System functions in Azure Cosmos DB](system-functions.yml)
+- [`SUM`](sum.md)
cosmos-db Ceiling https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/query/ceiling.md
Previously updated : 07/01/2023 Last updated : 07/18/2023
Returns a numeric expression.
The following example shows positive numeric, negative, and zero values evaluated with this function.
-```sql
-SELECT VALUE {
- ceilingPostiveNumber: CEILING(62.6),
- ceilingNegativeNumber: CEILING(-145.12),
- ceilingSmallNumber: CEILING(0.2989),
- ceilingZero: CEILING(0.0),
- ceilingNull: CEILING(null)
-}
-```
-```json
-[
- {
- "ceilingPostiveNumber": 63,
- "ceilingNegativeNumber": -145,
- "ceilingSmallNumber": 1,
- "ceilingZero": 0
- }
-]
-```
## Remarks
SELECT VALUE {
## Next steps - [System functions Azure Cosmos DB](system-functions.yml)-- [Introduction to Azure Cosmos DB](../../introduction.md)
+- [`FLOOR`](floor.md)
cosmos-db Choose https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/query/choose.md
Previously updated : 07/01/2023 Last updated : 07/18/2023
Returns an expression, which could be of any type.
The following example uses a static list to demonstrate various return values at different indexes.
-```sql
-SELECT VALUE
- CHOOSE(1, "abc", 1, true, [1])
-```
-```json
-[
- "abc"
-]
-```
This example uses a static list to demonstrate various return values at different indexes.
-```sql
-SELECT VALUE {
- index0: CHOOSE(0, "abc", 1, true, [1]),
- index1: CHOOSE(1, "abc", 1, true, [1]),
- index2: CHOOSE(2, "abc", 1, true, [1]),
- index3: CHOOSE(3, "abc", 1, true, [1]),
- index4: CHOOSE(4, "abc", 1, true, [1]),
- index5: CHOOSE(5, "abc", 1, true, [1])
-}
-```
-```json
-[
- {
- "index1": "abc",
- "index2": 1,
- "index3": true,
- "index4": [
- 1
- ]
- }
-]
-```
-This final example uses an existing item in a container and selects an expression from existing paths in the item.
-
-```json
-[
- {
- "id": "68719519522",
- "name": "Gremon Fins",
- "sku": "73311",
- "tags": [
- "Science Blue",
- "Turbo"
- ]
- }
-]
-```
+This final example uses an existing item in a container with three relevant fields.
-```sql
-SELECT
- CHOOSE(3, p.id, p.name, p.sku) AS barcode
-FROM
- products p
-```
-```json
-[
- {
- "barcode": "73311"
- }
-]
-```
+This example selects an expression from existing paths in the item.
++ ## Remarks
cosmos-db Concat https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/query/concat.md
Title: CONCAT in Azure Cosmos DB query language
-description: Learn about how the CONCAT SQL system function in Azure Cosmos DB returns a string that is the result of concatenating two or more string values
-
+ Title: CONCAT
+
+description: An Azure Cosmos DB for NoSQL system function that returns a string that is the result of concatenating two or more string values.
+++ - Previously updated : 03/03/2020--+ Last updated : 07/18/2023+
-# CONCAT (Azure Cosmos DB)
+
+# CONCAT (NoSQL query)
+ [!INCLUDE[NoSQL](../../includes/appliesto-nosql.md)]
- Returns a string that is the result of concatenating two or more string values.
-
+Returns a string that is the result of concatenating two or more string values.
+ ## Syntax
-
+ ```sql
-CONCAT(<str_expr1>, <str_expr2> [, <str_exprN>])
+CONCAT(<string_expr_1>, <string_expr_2> [, <string_expr_N>])
```
-
+ ## Arguments
-
-*str_expr*
- Is a string expression to concatenate to the other values. The `CONCAT` function requires at least two *str_expr* arguments.
-
+
+| | Description |
+| | |
+| **`string_expr_1`** | The first string expression in the list. |
+| **`string_expr_2`** | The second string expression in the list. |
+| **`string_expr_N` *(Optional)*** | Optional string expression\[s\], which can contain a variable number of expressions up to the `N`th item in the list. |
+
+> [!NOTE]
+> The `CONCAT` function requires at least two string expression arguments.
+ ## Return types
-
- Returns a string expression.
-
+
+Returns a string expression.
+ ## Examples
-
- The following example returns the concatenated string of the specified values.
-
-```sql
-SELECT CONCAT("abc", "def") AS concat
-```
-
- Here is the result set.
-
-```json
-[{"concat": "abcdef"}]
-```
-
+
+This first example returns the concatenated string of two string expressions.
+++
+This next example uses an existing item in a container with various relevant fields.
++
+This example uses the function to select two expressions from the item.
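A minimal sketch of such a query, assuming a container aliased here as `products` and hypothetical `name` and `sku` string fields on the item (the field names in the linked sample may differ), could look like this:

```sql
SELECT VALUE CONCAT(p.name, "-", p.sku)
FROM products p
```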
+++ ## Remarks
-This system function will not utilize the index.
+- This function doesn't use the index.
## Next steps - [System functions Azure Cosmos DB](system-functions.yml)-- [Introduction to Azure Cosmos DB](../../introduction.md)
+- [`CONTAINS`](contains.md)
cosmos-db Contains https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/query/contains.md
Title: Contains in Azure Cosmos DB query language
-description: Learn about how the CONTAINS SQL system function in Azure Cosmos DB returns a Boolean indicating whether the first string expression contains the second
-
+ Title: CONTAINS
+
+description: An Azure Cosmos DB for NoSQL system function that returns whether the first string contains the second.
+++ - Previously updated : 04/01/2021--+ Last updated : 07/18/2023+
-# CONTAINS (Azure Cosmos DB)
+
+# CONTAINS (NoSQL query)
+ [!INCLUDE[NoSQL](../../includes/appliesto-nosql.md)]
-Returns a Boolean indicating whether the first string expression contains the second.
-
+Returns a boolean indicating whether the first string expression contains the second string expression.
+ ## Syntax
-
+ ```sql
-CONTAINS(<str_expr1>, <str_expr2> [, <bool_expr>])
+CONTAINS(<string_expr_1>, <string_expr_2> [, <bool_expr>])
```
-
+ ## Arguments
-
-*str_expr1*
- Is the string expression to be searched.
-
-*str_expr2*
- Is the string expression to find.
-
-*bool_expr*
- Optional value for ignoring case. When set to true, CONTAINS does a case-insensitive search. When unspecified, this value is false.
-
+
+| | Description |
+| | |
+| **`string_expr_1`** | The first string to search. |
+| **`string_expr_2`** | The second string to find. |
+| **`bool_expr` *(Optional)*** | Optional boolean value for ignoring case. When set to `true`, `CONTAINS` performs a case-insensitive search. When unspecified, this value defaults to `false`. |
+ ## Return types
-
- Returns a Boolean expression.
-
+
+Returns a boolean expression.
+ ## Examples
-
- The following example checks if "abc" contains "ab" and if "abc" contains "A."
-
-```sql
-SELECT CONTAINS("abc", "ab", false) AS c1, CONTAINS("abc", "A", false) AS c2, CONTAINS("abc", "A", true) AS c3
-```
-
- Here's the result set.
-
-```json
-[
- {
- "c1": true,
- "c2": false,
- "c3": true
- }
-]
-```
+
+The following example checks if various static substrings exist in a string.
++ ## Remarks
SELECT CONTAINS("abc", "ab", false) AS c1, CONTAINS("abc", "A", false) AS c2, CO
## Next steps - [System functions Azure Cosmos DB](system-functions.yml)-- [Introduction to Azure Cosmos DB](../../introduction.md)
+- [`CONCAT`](concat.md)
cosmos-db Cos https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/query/cos.md
Previously updated : 07/01/2023 Last updated : 07/18/2023
Returns a numeric expression.
The following example calculates the cosine of the specified angle using the function.
-```sql
-SELECT VALUE {
- cosine: COS(14.78)
-}
-```
-```json
-[
- {
- "cosine": -0.5994654261946543
- }
-]
-```
## Remarks
cosmos-db Count https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/query/count.md
+
+ Title: COUNT
+
+description: An Azure Cosmos DB for NoSQL system function that counts the number of occurrences of a value.
++++++ Last updated : 07/17/2023+++
+# COUNT (NoSQL query)
++
+Returns the count of the values in the expression.
+
+## Syntax
+
+```sql
+COUNT(<scalar_expr>)
+```
+
+## Arguments
+
+| | Description |
+| | |
+| **`scalar_expr`** | A scalar expression. |
+
+## Return types
+
+Returns a numeric scalar value.
+
+## Examples
+
+This first example passes in either a scalar value or a numeric expression to the `COUNT` function. The expression is first evaluated to a scalar, so both uses of the function return the same value.
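As a quick sketch of that equivalence, the following query returns the same count as `SELECT VALUE COUNT(1) FROM c`, because `2 + 3` is evaluated to a scalar before the items are counted:

```sql
SELECT VALUE COUNT(2 + 3)
FROM c
```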
+++
+This next example assumes that there's a container with two items that have a `/name` field and one item that doesn't have that field.
++
+In this example, the function counts the number of times the specified scalar field occurs in the filtered data. Here, the function looks for the number of times the `/name` field occurs, which is two out of the three items.
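A minimal sketch of that query, assuming the generic container alias `c`, could look like this; items where the field is undefined aren't counted:

```sql
SELECT VALUE COUNT(c.name)
FROM c
```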
+++
+## Remarks
+
+- This function benefits from a [range index](../../index-policy.md#includeexclude-strategy) for any properties in the query's filter.
+
+## Next steps
+
+- [System functions in Azure Cosmos DB](system-functions.yml)
+- [`AVG`](average.md)
cosmos-db Getting Started https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/query/getting-started.md
- Title: Getting started with queries-
-description: Learn how to access data using the built-in query syntax of Azure Cosmos DB for NoSQL. Use queries to find data in a container that allows a flexible schema.
------ Previously updated : 04/03/2023---
-# Getting started with queries
--
-In Azure Cosmos DB for NoSQL accounts, there are two ways to read data:
--- **Point reads**
- - You can do a key/value lookup on a single *item ID* and partition key. The *item ID* and partition key combination is the key and the item itself is the value. For a 1-KB document, point reads typically cost one [request unit](../../request-units.md) with a latency under 10 ms. Point reads return a single whole item, not a partial item or a specific field.
- - Here are some examples of how to do **Point reads** with each SDK:
- - [.NET SDK](/dotnet/api/microsoft.azure.cosmos.container.readitemasync)
- - [Java SDK](/java/api/com.azure.cosmos.cosmoscontainer.readitem#com-azure-cosmos-cosmoscontainer-(t)readitem(java-lang-string-com-azure-cosmos-models-partitionkey-com-azure-cosmos-models-cosmositemrequestoptions-java-lang-class(t)))
- - [Node.js SDK](/javascript/api/@azure/cosmos/item#@azure-cosmos-item-read)
- - [Python SDK](/python/api/azure-cosmos/azure.cosmos.containerproxy#azure-cosmos-containerproxy-read-item)
- - [Go SDK](https://pkg.go.dev/github.com/Azure/azure-sdk-for-go/sdk/data/azcosmos#ContainerClient.ReadItem)
-- **Queries**
- - You can query data by writing queries using the Structured Query Language (SQL) as a JSON query language. Queries always cost at least 2.3 request units and, in general, have a higher and more variable latency than point reads. Queries can return many items.
- - Most read-heavy workloads on Azure Cosmos DB use a combination of both point reads and queries. If you just need to read a single item, point reads are cheaper and faster than queries. Point reads don't need to use the query engine to access data and can read the data directly. It's not possible for all workloads to exclusively read data using point reads, so support of a custom query language and [schema-agnostic indexing](../../index-overview.md) provide a more flexible way to access your data.
- - Here are some examples of how to do **queries** with each SDK:
- - [.NET SDK](../samples-dotnet.md#items)
- - [Java SDK](../samples-java.md#query-examples)
- - [Node.js SDK](../samples-nodejs.md#item-examples)
- - [Python SDK](../samples-python.md#item-examples)
- - [Go SDK](../samples-go.md#item-examples)
-
-> [!IMPORTANT]
-> The query language is only used to query items in Azure Cosmos DB for NoSQL. You cannot use the query language to perform operations (Create, Update, Delete, etc.) on items.
-
-The remainder of this article shows how to get started writing queries in Azure Cosmos DB. queries can be run through either the SDK or Azure portal.
-
-## Upload sample data
-
-In your API for NoSQL Azure Cosmos DB account, use the [Data Explorer](../../data-explorer.md) to create a container called `products`. After the container is created, use the data structures browser to expand the `products` container. Finally, select **Items**.
-
-The menu bar should now have the **New Item** option. You use this option to create the JSON items in this article.
-
-> [!TIP]
-> For this quick guide, you can use `/id` as the partition key. For a real-world container, you should consider your overall workload when selecting a partition key strategy. For more information, see [partitioning and horizontal scaling](../../partitioning-overview.md).
-
-### Create JSON items
-
-These two JSON items represent two example products from [AdventureWorks](https://www.adventure-works.com/). They include information about the product, its manufacturer, and metadata tags. Each item includes nested properties, arrays, strings, numbers, GeoJSON data, and booleans.
-
-The first item has strings, numbers, Booleans, arrays, and nested properties:
-
-```json
-{
- "id": "863e778d-21c9-4e2a-a984-d31f947c665c",
- "categoryName": "Surfboards",
- "name": "Teapo Surfboard (6'10\") Grape",
- "sku": "teapo-surfboard-72109",
- "price": 690.00,
- "manufacturer": {
- "name": "Taepo",
- "location": {
- "type": "Point",
- "coordinates": [
- 34.15562788533047, -118.4633004882891
- ]
- }
- },
- "tags": [
- { "name": "Tail Shape: Swallow" },
- { "name": "Color Group: Purple" }
- ]
-}
-```
-
-The second item includes a different set of fields than the first item:
-
-```json
-{
- "id": "6e9f51c1-6b45-440f-af5a-2abc96cd083d",
- "categoryName": "Sleeping Bags",
- "name": "Vareno Sleeping Bag (6') Turmeric",
- "price": 120.00,
- "closeout": true,
- "manufacturer": {
- "name": "Vareno"
- },
- "tags": [
- { "name": "Color Group: Yellow" },
- { "name": "Bag Shape: Mummy" }
- ]
-}
-```
-
-### Query the JSON items
-
-To understand some of the key aspects of the API for NoSQL's query language, it's useful to try a few sample queries.
--- This first query simply returns the entire JSON item for each item in the container. The identifier `products` in this example is arbitrary and you can use any name to reference your container.-
- ```sql
- SELECT *
- FROM products
- ```
-
- The query results are:
-
- ```sql
- [
- {
- "id": "863e778d-21c9-4e2a-a984-d31f947c665c",
- "categoryName": "Surfboards",
- "name": "Teapo Surfboard (6'10\") Grape",
- "sku": "teapo-surfboard-72109",
- "price": 690,
- "manufacturer": {
- "name": "Taepo",
- "location": {
- "type": "Point",
- "coordinates": [
- 34.15562788533047, -118.4633004882891
- ]
- }
- },
- "tags": [
- { "name": "Tail Shape: Swallow" },
- { "name": "Color Group: Purple" }
- ]
- },
- {
- "id": "6e9f51c1-6b45-440f-af5a-2abc96cd083d",
- "categoryName": "Sleeping Bags",
- "name": "Vareno Sleeping Bag (6') Turmeric",
- "price": 120,
- "closeout": true,
- "manufacturer": {
- "name": "Vareno"
- },
- "tags": [
- { "name": "Color Group: Yellow" },
- { "name": "Bag Shape: Mummy" }
- ]
- }
- ]
- ```
-
- > [!NOTE]
- > The JSON items typically include additional fields that are managed by Azure Cosmos DB. These fields include, but are not limited to:
- >
- > - `_rid`
- > - `_self`
- > - `_etag`
- > - `_ts`
- >
- > For this article, these fields are removed to make the output shorter and easier to understand.
--- This query returns the items where the `categoryName` field matches `Sleeping Bags`. Since it's a `SELECT *` query, the output of the query is the complete JSON item. For more information about `SELECT` syntax, see [`SELECT` statement](select.md). This query also uses the shorter `p` alias for the container.-
- ```sql
- SELECT *
- FROM products p
- WHERE p.categoryName = "Sleeping Bags"
- ```
-
- The query results are:
-
- ```json
- [
- {
- "id": "6e9f51c1-6b45-440f-af5a-2abc96cd083d",
- "categoryName": "Sleeping Bags",
- "name": "Vareno Sleeping Bag (6') Turmeric",
- "price": 120,
- "closeout": true,
- "manufacturer": {
- "name": "Vareno"
- },
- "tags": [
- { "name": "Color Group: Yellow" },
- { "name": "Bag Shape: Mummy" }
- ]
- }
- ]
- ```
--- This next query uses a different filter and then reformats the JSON output into a different shape. The query projects a new JSON object with three fields: `name`, `manufacturer.name` and `sku`. This example uses various aliases including `product` and `vendor` to reshape the output object into a format our application can use.-
- ```sql
- SELECT {
- "name": p.name,
- "sku": p.sku,
- "vendor": p.manufacturer.name
- } AS product
- FROM products p
- WHERE p.sku = "teapo-surfboard-72109"
- ```
-
- The query results are:
-
- ```json
- [
- {
- "product": {
- "name": "Teapo Surfboard (6'10\") Grape",
- "sku": "teapo-surfboard-72109",
- "vendor": "Taepo"
- }
- }
- ]
- ```
--- You can also flatten custom JSON results by using `SELECT VALUE`.-
- ```sql
- SELECT VALUE {
- "name": p.name,
- "sku": p.sku,
- "vendor": p.manufacturer.name
- }
- FROM products p
- WHERE p.sku = "teapo-surfboard-72109"
- ```
-
- The query results are:
-
- ```json
- [
- {
- "name": "Teapo Surfboard (6'10\") Grape",
- "sku": "teapo-surfboard-72109",
- "vendor": "Taepo"
- }
- ]
- ```
--- Finally, this last query returns all of the tags by using the `JOIN` keyword. For more information, see [`JOIN` keyword](join.md). This query only returns keywords for products manufactured by `Taepo`.-
- ```sql
- SELECT t.name
- FROM products p
- JOIN t in p.tags
- WHERE p.manufacturer.name = "Taepo"
- ```
-
- The results are:
-
- ```json
- [
- { "name": "Tail Shape: Swallow" },
- { "name": "Color Group: Purple" }
- ]
- ```
-
-## Wrapping up
-
-The examples in this article show several aspects of the Azure Cosmos DB query language:
--- API for NoSQL works on JSON values. The API deals with tree-shaped entities instead of rows and columns. You can refer to the tree nodes at any arbitrary depth, like `Node1.Node2.Node3…..Nodem`, similar to the two-part reference of `<table>.<column>` in ANSI SQL.-- Because the query language works with schemaless data, the type system must be bound dynamically. The same expression could yield different types on different items. The result of a query is a valid JSON value, but isn't guaranteed to be of a fixed schema. -- Azure Cosmos DB supports strict JSON items only. The type system and expressions are restricted to deal only with JSON types. For more information, see the [JSON specification](https://www.json.org/). -- A container is a schema-free collection of JSON items. Containment implicitly captures relations within and across container items, not by primary key and foreign key relations. This feature is important for the intra-item joins. For more information, see [`JOIN` keyword](join.md).-
-## Next steps
--- Explore a few query features:
- - [`SELECT` clause](select.md)
- - [`JOIN` keyword](join.md)
- - [Subqueries](subquery.md)
cosmos-db Group By https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/query/group-by.md
The GROUP BY clause divides the query's results according to the values of one o
## Remarks
- When a query uses a GROUP BY clause, the SELECT clause can only contain the subset of properties and system functions included in the GROUP BY clause. One exception is [aggregate functions](aggregate-functions.md), which can appear in the SELECT clause without being included in the GROUP BY clause. You can also always include literal values in the SELECT clause.
+ When a query uses a GROUP BY clause, the SELECT clause can only contain the subset of properties and system functions included in the GROUP BY clause. One exception is aggregate functions, which can appear in the SELECT clause without being included in the GROUP BY clause. You can also always include literal values in the SELECT clause.
The GROUP BY clause must be after the SELECT, FROM, and WHERE clause and before the OFFSET LIMIT clause. You cannot use GROUP BY with an ORDER BY clause.
GROUP BY f.lastName
) AS UniqueLastNames ```
-Additionally, cross-partition `GROUP BY` queries can have a maximum of 21 [aggregate system functions](aggregate-functions.md).
+Additionally, cross-partition `GROUP BY` queries can have a maximum of 21 aggregate system functions.
## Examples
The results are:
- [Getting started](getting-started.md) - [SELECT clause](select.md)-- [Aggregate functions](aggregate-functions.md)
cosmos-db Linq To Sql https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/query/linq-to-sql.md
The query provider supports the following scalar expressions:
family.children[n].grade; ``` -- Arithmetic expressions, including common arithmetic expressions on numerical and Boolean values. For the complete list, see the [Azure Cosmos DB SQL specification](aggregate-functions.md).
+- Arithmetic expressions, including common arithmetic expressions on numerical and Boolean values.
```csharp 2 * family.children[0].grade;
The LINQ provider included with the SQL .NET SDK supports the following operator
- **Where**: Filters translate to [WHERE](where.md), and support translation between `&&`, `||`, and `!` to the SQL operators - **SelectMany**: Allows unwinding of arrays to the [JOIN](join.md) clause. Use to chain or nest expressions to filter on array elements. - **OrderBy** and **OrderByDescending**: Translate to [ORDER BY](order-by.md) with ASC or DESC.-- **Count**, **Sum**, **Min**, **Max**, and **Average** operators for [aggregation](aggregate-functions.md), and their async equivalents **CountAsync**, **SumAsync**, **MinAsync**, **MaxAsync**, and **AverageAsync**.
+- **Count**, **Sum**, **Min**, **Max**, and **Average** operators for aggregation, and their async equivalents **CountAsync**, **SumAsync**, **MinAsync**, **MaxAsync**, and **AverageAsync**.
- **CompareTo**: Translates to range comparisons. This operator is commonly used for strings, since they're not comparable in .NET. - **Skip** and **Take**: Translates to [OFFSET and LIMIT](offset-limit.md) for limiting results from a query and doing pagination. - **Math functions**: Supports translation from .NET `Abs`, `Acos`, `Asin`, `Atan`, `Ceiling`, `Cos`, `Exp`, `Floor`, `Log`, `Log10`, `Pow`, `Round`, `Sign`, `Sin`, `Sqrt`, `Tan`, and `Truncate` to the equivalent [built-in mathematical functions](system-functions.yml).
cosmos-db Max https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/query/max.md
+
+ Title: MAX
+
+description: An Azure Cosmos DB for NoSQL system function that returns the maximum value.
++++++ Last updated : 07/17/2023+++
+# MAX (NoSQL query)
++
+Returns the maximum of the values in the expression.
+
+## Syntax
+
+```sql
+MAX(<scalar_expr>)
+```
+
+## Arguments
+
+| | Description |
+| | |
+| **`scalar_expr`** | A scalar expression. |
+
+## Return types
+
+Returns a numeric scalar value.
+
+## Examples
+
+This example uses a container with multiple items that each have a `/price` numeric field.
+
+
+For this example, the `MAX` function is used in a query that includes the numeric field that was mentioned.
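A minimal sketch, assuming a container aliased here as `products` with the `/price` field noted above, could look like this:

```sql
SELECT VALUE MAX(p.price)
FROM products p
```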
+++
+## Remarks
+
+- This function benefits from a [range index](../../index-policy.md#includeexclude-strategy).
+- The arguments in `MAX` can be number, string, boolean, or null.
+- Any `undefined` values are ignored.
+- The following priority order is used (in descending order) when comparing different types of data:
+ 1. string
+ 1. number
+ 1. boolean
+ 1. null
+
+## Next steps
+
+- [System functions in Azure Cosmos DB](system-functions.yml)
+- [`MIN`](min.md)
cosmos-db Min https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/query/min.md
+
+ Title: MIN
+
+description: An Azure Cosmos DB for NoSQL system function that returns the minimum value.
++++++ Last updated : 07/17/2023+++
+# MIN (NoSQL query)
++
+Returns the minimum of the values in the expression.
+
+## Syntax
+
+```sql
+MIN(<scalar_expr>)
+```
+
+## Arguments
+
+| | Description |
+| | |
+| **`scalar_expr`** | A scalar expression. |
+
+## Return types
+
+Returns a numeric scalar value.
+
+## Examples
+
+This example uses a container with multiple items that each have a `/price` numeric field.
+
+
+For this example, the `MIN` function is used in a query that includes the numeric field that was mentioned.
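The equivalent sketch for the minimum, under the same assumptions about the container alias and `/price` field, is:

```sql
SELECT VALUE MIN(p.price)
FROM products p
```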
+++
+## Remarks
+
+- This function benefits from a [range index](../../index-policy.md#includeexclude-strategy).
+- The arguments in `MIN` can be number, string, boolean, or null.
+- Any `undefined` values are ignored.
+- The following priority order is used (in ascending order) when comparing different types of data:
+ 1. null
+ 1. boolean
+ 1. number
+ 1. string
+
+## Next steps
+
+- [System functions in Azure Cosmos DB](system-functions.yml)
+- [`MAX`](max.md)
cosmos-db Sum https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/query/sum.md
+
+ Title: SUM
+
+description: An Azure Cosmos DB for NoSQL system function that sums the specified values.
++++++ Last updated : 07/17/2023+++
+# SUM (NoSQL query)
++
+Returns the sum of the values in the expression.
+
+## Syntax
+
+```sql
+SUM(<numeric_expr>)
+```
+
+## Arguments
+
+| | Description |
+| | |
+| **`numeric_expr`** | A numeric expression. |
+
+## Return types
+
+Returns a numeric expression.
+
+## Examples
+
+For this example, consider a container with multiple items that may contain a `quantity` field.
+
+
+The `SUM` function is used to sum the values of the `quantity` field, when it exists, into a single aggregated value.
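A minimal sketch, assuming a container aliased here as `products` and the `quantity` field mentioned above, could look like this:

```sql
SELECT VALUE SUM(p.quantity)
FROM products p
```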
+++
+## Remarks
+
+- This function benefits from a [range index](../../index-policy.md#includeexclude-strategy).
+- If any arguments in `SUM` are string, boolean, or null, the entire aggregate system function returns `undefined`.
+- If any individual argument has an `undefined` value, that value isn't included in the `SUM` calculation.
+
+## Next steps
+
+- [System functions in Azure Cosmos DB](system-functions.yml)
+- [`AVG`](average.md)
cosmos-db Udfs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/query/udfs.md
As the preceding examples show, UDFs integrate the power of JavaScript language
- [Introduction to Azure Cosmos DB](../../introduction.md) - [System functions](system-functions.yml)-- [Aggregates](aggregate-functions.md)
cosmos-db Sdk Java V2 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/sdk-java-v2.md
This is the original Azure Cosmos DB Sync Java SDK v2 for API for NoSQL which su
* Fixed a few bugs in the session container that may cause an "Owner resource not found" exception for requests immediately after collection creation. ### <a name="1.9.5"></a>1.9.5
-* Added support for aggregation queries (COUNT, MIN, MAX, SUM, and AVG). See [Aggregation support](query/aggregate-functions.md).
+* Added support for aggregation queries (COUNT, MIN, MAX, SUM, and AVG).
* Added support for change feed. * Added support for collection quota information through RequestOptions.setPopulateQuotaInfo. * Added support for stored procedure script logging through RequestOptions.setScriptLoggingEnabled.
cosmos-db Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/policy-reference.md
Title: Built-in policy definitions for Azure Cosmos DB description: Lists Azure Policy built-in policy definitions for Azure Cosmos DB. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 07/06/2023 Last updated : 07/18/2023
cost-management-billing Purchase Recommendations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/savings-plan/purchase-recommendations.md
Azure savings plan purchase recommendations are provided through [Azure Advisor]
## How hourly commitment recommendations are generated
-The goal of our savings plan recommendation is to help you make the most cost-effective commitment. Calculations are based on your actual on-demand costs, and don't include usage covered by existing reservations or savings plans is excluded.
+The goal of our savings plan recommendation is to help you make the most cost-effective commitment. Calculations are based on your actual on-demand costs, and don't include usage covered by existing reservations or savings plans.
We start by looking at your hourly and total on-demand usage costs incurred from savings plan-eligible resources in the last 7, 30, and 60 days. These costs are inclusive of any negotiated discounts that you have. We then run hundreds of simulations of what your total cost would have been if you had purchased either a one-year or a three-year savings plan with an hourly commitment equivalent to your hourly costs.
The minimum value doesn't necessarily represent the hourly commitment necessary
## Next steps - Learn about [how the Azure savings plan discount is applied](discount-application.md).-- Learn about how to [trade in reservations](reservation-trade-in.md) for a savings plan.
+- Learn about how to [trade in reservations](reservation-trade-in.md) for a savings plan.
data-factory Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/policy-reference.md
Previously updated : 07/06/2023 Last updated : 07/18/2023 # Azure Policy built-in definitions for Data Factory
data-lake-analytics Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-lake-analytics/policy-reference.md
Title: Built-in policy definitions for Azure Data Lake Analytics description: Lists Azure Policy built-in policy definitions for Azure Data Lake Analytics. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 07/06/2023 Last updated : 07/18/2023
data-lake-store Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-lake-store/policy-reference.md
Title: Built-in policy definitions for Azure Data Lake Storage Gen1 description: Lists Azure Policy built-in policy definitions for Azure Data Lake Storage Gen1. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 07/06/2023 Last updated : 07/18/2023
databox-online Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox-online/policy-reference.md
Title: Built-in policy definitions for Azure Stack Edge description: Lists Azure Policy built-in policy definitions for Azure Stack Edge. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 07/06/2023 Last updated : 07/18/2023
databox Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox/policy-reference.md
Title: Built-in policy definitions for Azure Data Box description: Lists Azure Policy built-in policy definitions for Azure Data Box. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 07/06/2023 Last updated : 07/18/2023
ddos-protection Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ddos-protection/policy-reference.md
Previously updated : 07/06/2023 Last updated : 07/18/2023
defender-for-cloud Attack Path Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/attack-path-reference.md
Prerequisite: For a list of prerequisites, see the [Availability table](how-to-m
| VM has high severity vulnerabilities and read permission to a data store with sensitive data | A virtual machine has high severity vulnerabilities and read permission to a data store containing sensitive data. <br/>Prerequisite: [Enable data-aware security for storage accounts in Defender CSPM](data-security-posture-enable.md), or [leverage Microsoft Purview Data Catalog to protect sensitive data](information-protection.md). | | VM has high severity vulnerabilities and read permission to a key vault | A virtual machine has high severity vulnerabilities and read permission to a key vault. | | VM has high severity vulnerabilities and read permission to a data store | A virtual machine has high severity vulnerabilities and read permission to a data store. |
+| Internet exposed VM has high severity vulnerability and insecure SSH private key that can authenticate to another VM | An Azure virtual machine is reachable from the internet, has high severity vulnerabilities and has a plaintext SSH private key that can authenticate to another virtual machine |
+| Internet exposed VM has high severity vulnerabilities and has insecure secret that is used to authenticate to a SQL server | An Azure virtual machine is reachable from the internet, has high severity vulnerabilities and has a plaintext secret that can authenticate to a SQL server |
+| VM has high severity vulnerabilities and has insecure secret that is used to authenticate to a SQL server | An Azure virtual machine has high severity vulnerabilities and has a plaintext secret that can authenticate to a SQL server |
+| VM has high severity vulnerabilities and has insecure plaintext secret that is used to authenticate to storage account | An Azure virtual machine has high severity vulnerabilities and has a plaintext secret that can authenticate to an Azure storage account |
+| Internet exposed VM has high severity vulnerabilities and has insecure secret that is used to authenticate to storage account | An Azure virtual machine is reachable from the internet, has high severity vulnerabilities and has a secret that can authenticate to an Azure storage account |
### AWS EC2 instances
Prerequisite: [Enable agentless scanning](enable-vulnerability-assessment-agentl
| EC2 instance with high severity vulnerabilities has read permissions to a data store |An AWS EC2 instance has high severity vulnerabilities and has an IAM role attached which is granted with permissions to an S3 bucket via an IAM policy or via a bucket policy, or via both an IAM policy and a bucket policy. | | EC2 instance with high severity vulnerabilities has read permissions to a data store with sensitive data | An AWS EC2 instance has high severity vulnerabilities and has an IAM role attached which is granted with permissions to an S3 bucket containing sensitive data via an IAM policy or via a bucket policy, or via both an IAM and bucket policy. <br/> Prerequisite: [Enable data-aware security for S3 buckets in Defender CSPM](data-security-posture-enable.md), or [leverage Microsoft Purview Data Catalog to protect sensitive data](information-protection.md). | | EC2 instance with high severity vulnerabilities has read permissions to a KMS key | An AWS EC2 instance has high severity vulnerabilities and has an IAM role attached which is granted with permissions to an AWS Key Management Service (KMS) key via an IAM policy, or via an AWS Key Management Service (KMS) policy, or via both an IAM and AWS KMS policy. |
+| Internet exposed EC2 instance has high severity vulnerability and insecure SSH private key that can authenticate to another AWS EC2 instance | An AWS EC2 instance is reachable from the internet, has high severity vulnerabilities, and has a plaintext SSH private key that can authenticate to another AWS EC2 instance |
+| Internet exposed EC2 instance has high severity vulnerabilities and has insecure secret that is used to authenticate to a RDS resource | An AWS EC2 instance is reachable from the internet, has high severity vulnerabilities, and has a plaintext secret that can authenticate to an AWS RDS resource |
+| EC2 instance has high severity vulnerabilities and has insecure plaintext secret that is used to authenticate to a RDS resource | An AWS EC2 instance has high severity vulnerabilities and has a plaintext secret that can authenticate to an AWS RDS resource |
+| Internet exposed AWS EC2 instance has high severity vulnerabilities and has insecure secret that has permission to S3 bucket via an IAM policy, or via a bucket policy, or via both an IAM policy and a bucket policy. | An AWS EC2 instance is reachable from the internet, has high severity vulnerabilities, and has an insecure secret that has permissions to an S3 bucket via an IAM policy, a bucket policy, or both |
### Azure data
Prerequisite: [Enable agentless scanning](enable-vulnerability-assessment-agentl
| SQL on VM has a user account with commonly used username and known vulnerabilities (Preview) | SQL on VM has a local user account with a commonly used username (which is prone to brute force attacks), and has known vulnerabilities (CVEs). <br/> Prerequisite: [Enable Microsoft Defender for SQL servers on machines](defender-for-sql-usage.md)| | Managed database with excessive internet exposure allows basic (local user/password) authentication (Preview) | Database can be accessed through the internet from any public IP and allows authentication using username and password (basic authentication mechanism) which exposes the DB to brute force attacks. | | Internet exposed VM has high severity vulnerabilities and a hosted database installed (Preview) | An attacker with network access to the DB machine can exploit the vulnerabilities and gain remote code execution.|
-| Private Azure blob storage container replicates data to internet exposed and publicly accessible Azure blob storage container | An internal Azure storage container replicates its data to another Azure storage container which is reachable from the internet and allows public access, and poses this data at risk. |
+| Private Azure blob storage container replicates data to internet exposed and publicly accessible Azure blob storage container | An internal Azure storage container replicates its data to another Azure storage container that is reachable from the internet and allows public access, placing this data at risk. |
| Internet exposed Azure Blob Storage container with sensitive data is publicly accessible | A blob storage account container with sensitive data is reachable from the internet and allows public read access without authorization required. <br/> Prerequisite: [Enable data-aware security for storage accounts in Defender CSPM](data-security-posture-enable.md).| ### AWS data
defender-for-cloud Concept Agentless Data Collection https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/concept-agentless-data-collection.md
Agentless scanning for VMs provides vulnerability assessment and software invent
||| |Release state:| GA | |Pricing:|Requires either [Defender Cloud Security Posture Management (CSPM)](concept-cloud-security-posture-management.md) or [Microsoft Defender for Servers Plan 2](plan-defender-for-servers-select-plan.md#plan-features)|
-| Supported use cases:| :::image type="icon" source="./media/icons/yes-icon.png"::: Vulnerability assessment (powered by Defender Vulnerability Management)<br>:::image type="icon" source="./media/icons/yes-icon.png"::: Software inventory (powered by Defender Vulnerability Management) |
+| Supported use cases:| :::image type="icon" source="./media/icons/yes-icon.png"::: Vulnerability assessment (powered by Defender Vulnerability Management)<br>:::image type="icon" source="./media/icons/yes-icon.png"::: Software inventory (powered by Defender Vulnerability Management)<br>:::image type="icon" source="./media/icons/yes-icon.png"::: Secret scanning (Preview) |
| Clouds: | :::image type="icon" source="./media/icons/yes-icon.png"::: Azure Commercial clouds<br> :::image type="icon" source="./media/icons/no-icon.png"::: Azure Government<br>:::image type="icon" source="./media/icons/no-icon.png"::: Azure China 21Vianet<br>:::image type="icon" source="./media/icons/yes-icon.png"::: Connected AWS accounts<br>:::image type="icon" source="./media/icons/no-icon.png"::: Connected GCP accounts | | Operating systems: | :::image type="icon" source="./media/icons/yes-icon.png"::: Windows<br>:::image type="icon" source="./media/icons/yes-icon.png"::: Linux | | Instance types: | **Azure**<br>:::image type="icon" source="./media/icons/yes-icon.png"::: Standard VMs<br>:::image type="icon" source="./media/icons/yes-icon.png"::: Virtual machine scale set - Flex<br>:::image type="icon" source="./media/icons/no-icon.png"::: Virtual machine scale set - Uniform<br><br>**AWS**<br>:::image type="icon" source="./media/icons/yes-icon.png"::: EC2<br>:::image type="icon" source="./media/icons/yes-icon.png"::: Auto Scale instances<br>:::image type="icon" source="./media/icons/no-icon.png"::: Instances with a ProductCode (Paid AMIs) |
defender-for-cloud Exempt Resource https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/exempt-resource.md
In such cases, you can create an exemption for a recommendation to:
## Availability
-| Aspect | Details |
+| Aspect | Details |
| - | -- | | Release state: | Preview<br>[!INCLUDE [Legalese](../../includes/defender-for-cloud-preview-legal-text.md)] | | Pricing: | This is a premium Azure Policy capability that's offered at no additional cost for customers with Microsoft Defender for Cloud's enhanced security features enabled. For other users, charges might apply in the future.
-| Required roles and permissions: | **Owner** or **Resource Policy Contributor** to create an exemption<br>To create a rule, you need permissions to edit policies in Azure Policy.<br>Learn more in [Azure RBAC permissions in Azure Policy](../governance/policy/overview.md#azure-rbac-permissions-in-azure-policy). |
+| Required roles and permissions: | **Owner** or **Security Admin** or **Resource Policy Contributor** to create an exemption<br>To create a rule, you need permissions to edit policies in Azure Policy.<br>Learn more in [Azure RBAC permissions in Azure Policy](../governance/policy/overview.md#azure-rbac-permissions-in-azure-policy). |
| Limitations: | Exemptions can be created only for recommendations included in Defender for Cloud's default initiative, [Microsoft cloud security benchmark](/security/benchmark/azure/introduction), or any of the supplied regulatory standard initiatives. Recommendations that are generated from custom initiatives can't be exempted. Learn more about the relationships between [policies, initiatives, and recommendations](security-policy-concept.md). | | Clouds: | :::image type="icon" source="./media/icons/yes-icon.png"::: Commercial clouds<br>:::image type="icon" source="./media/icons/no-icon.png"::: National (Azure Government, Azure China 21Vianet) |
To create an exemption rule:
1. Enter a description. 1. Select **Create**.
- :::image type="content" source="media/exempt-resource/defining-recommendation-exemption.png" alt-text="Steps to create an exemption rule to exempt a recommendation from your subscription or management group.":::
+ :::image type="content" source="media/exempt-resource/defining-recommendation-exemption.png" alt-text="Steps to create an exemption rule to exempt a recommendation from your subscription or management group." lightbox="media/exempt-resource/defining-recommendation-exemption.png":::
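If you prefer to create the exemption from a script instead of the portal, the Azure CLI's `az policy exemption` command group can be used. The following is a minimal sketch; the assignment name, dates, and descriptive text are placeholder values, and the parameter names should be confirmed against your installed CLI version.

```bash
# Look up the policy assignment the recommendation comes from (placeholder assignment name).
assignment_id=$(az policy assignment show --name "SecurityCenterBuiltIn" --query id --output tsv)

# Create an exemption at subscription scope that expires on the given date.
az policy exemption create \
  --name "waive-example-recommendation" \
  --display-name "Waiver for example recommendation" \
  --policy-assignment "$assignment_id" \
  --exemption-category "Waiver" \
  --expires-on "2024-01-31T00:00:00Z" \
  --description "Risk accepted for this subscription"
```

For an initiative assignment, you would typically also limit the exemption to specific policies with the policy definition reference IDs parameter; check `az policy exemption create --help` for the exact option name in your CLI version.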
When the exemption takes effect (it might take up to 30 minutes): - The recommendation or resources won't impact your secure score. - If you've exempted specific resources, they'll be listed in the **Not applicable** tab of the recommendation details page. - If you've exempted a recommendation, it will be hidden by default on Defender for Cloud's recommendations page. This is because the default options of the **Recommendation status** filter on that page are to exclude **Not applicable** recommendations. The same is true if you exempt all recommendations in a security control.
- :::image type="content" source="media/exempt-resource/recommendations-filters-hiding-not-applicable.png" alt-text="Default filters on Microsoft Defender for Cloud's recommendations page hide the not applicable recommendations and security controls":::
+ :::image type="content" source="media/exempt-resource/recommendations-filters-hiding-not-applicable.png" alt-text="Default filters on Microsoft Defender for Cloud's recommendations page hide the not applicable recommendations and security controls." lightbox="media/exempt-resource/recommendations-filters-hiding-not-applicable.png":::
- The information strip at the top of the recommendation details page updates the number of exempted resources:
To create an exemption rule:
1. To review your exempted resources, open the **Not applicable** tab:
- :::image type="content" source="./media/exempt-resource/modifying-exemption.png" alt-text="Modifying an exemption.":::
+ :::image type="content" source="./media/exempt-resource/modifying-exemption.png" alt-text="Modifying an exemption." lightbox="media/exempt-resource/modifying-exemption.png":::
The reason for each exemption is included in the table (1).
To create an exemption rule:
> [!IMPORTANT] > To see the specific exemptions relevant to one recommendation, filter the list according to the relevant scope and recommendation name.
- :::image type="content" source="./media/exempt-resource/policy-page-exemption.png" alt-text="Azure Policy's exemption page":::
+ :::image type="content" source="./media/exempt-resource/policy-page-exemption.png" alt-text="Azure Policy's exemption page." lightbox="media/exempt-resource/policy-page-exemption.png":::
> [!TIP] > Alternatively, [use Azure Resource Graph to find recommendations with exemptions](#find-recommendations-with-exemptions-using-azure-resource-graph).
As explained earlier on this page, exemption rules are a powerful tool providing
To keep track of how your users are exercising this capability, we've created an Azure Resource Manager (ARM) template that deploys a Logic App Playbook and all necessary API connections to notify you when an exemption has been created. -- To learn more about the playbook, see the tech community blog post [How to keep track of Resource Exemptions in Microsoft Defender for Cloud](https://techcommunity.microsoft.com/t5/azure-security-center/how-to-keep-track-of-resource-exemptions-in-azure-security/ba-p/1770580)
+- To learn more about the playbook, see the tech community blog post [How to keep track of Resource Exemptions in Microsoft Defender for Cloud](https://techcommunity.microsoft.com/t5/azure-security-center/how-to-keep-track-of-resource-exemptions-in-azure-security/ba-p/1770580).
- You'll find the ARM template in the [Microsoft Defender for Cloud GitHub repository](https://github.com/Azure/Azure-Security-Center/tree/master/Workflow%20automation/Notify-ResourceExemption)-- To deploy all the necessary components, [use this automated process](https://portal.azure.com/#create/Microsoft.Template/uri/https%3A%2F%2Fraw.githubusercontent.com%2FAzure%2FAzure-Security-Center%2Fmaster%2FWorkflow%2520automation%2FNotify-ResourceExemption%2Fazuredeploy.json)
+- To deploy all the necessary components, [use this automated process](https://portal.azure.com/#create/Microsoft.Template/uri/https%3A%2F%2Fraw.githubusercontent.com%2FAzure%2FAzure-Security-Center%2Fmaster%2FWorkflow%2520automation%2FNotify-ResourceExemption%2Fazuredeploy.json).
## Use the inventory to find resources that have exemptions applied
The asset inventory page of Microsoft Defender for Cloud provides a single page
The inventory page includes many filters to let you narrow the list of resources to the ones of most interest for any given scenario. One such filter is the **Contains exemptions**. Use this filter to find all resources that have been exempted from one or more recommendations. ## Find recommendations with exemptions using Azure Resource Graph
To view all recommendations that have exemption rules:
1. Open **Azure Resource Graph Explorer**.
- :::image type="content" source="./media/multi-factor-authentication-enforcement/opening-resource-graph-explorer.png" alt-text="Launching Azure Resource Graph Explorer** recommendation page" :::
+    :::image type="content" source="./media/multi-factor-authentication-enforcement/opening-resource-graph-explorer.png" alt-text="Launching Azure Resource Graph Explorer recommendation page." lightbox="media/multi-factor-authentication-enforcement/opening-resource-graph-explorer.png":::
1. Enter the following query and select **Run query**.
To view all recommendations that have exemption rules:
Learn more in the following pages: - [Learn more about Azure Resource Graph](../governance/resource-graph/index.yml).-- [How to create queries with Azure Resource Graph Explorer](../governance/resource-graph/first-query-portal.md)-- [Kusto Query Language (KQL)](/azure/data-explorer/kusto/query/)
+- [How to create queries with Azure Resource Graph Explorer](../governance/resource-graph/first-query-portal.md).
+- [Kusto Query Language (KQL)](/azure/data-explorer/kusto/query/).
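If you prefer to run this kind of lookup from a shell instead of the portal, the Azure CLI's Resource Graph extension can execute a similar query. This is only a sketch; it assumes the `resource-graph` extension is installed and that exemption resources are surfaced in the `policyresources` table, so adjust the projection to your needs.

```bash
# One-time setup for the extension, if needed: az extension add --name resource-graph
az graph query --output table -q "
policyresources
| where type =~ 'microsoft.authorization/policyexemptions'
| project name, category = properties.exemptionCategory, assignment = properties.policyAssignmentId, expiresOn = properties.expiresOn
"
```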
## Next steps
-In this article, you learned how to exempt a resource from a recommendation so that it doesn't impact your secure score. For more information about secure score, see:
--- [Secure score in Microsoft Defender for Cloud](secure-score-security-controls.md)
+In this article, you learned how to exempt a resource from a recommendation so that it doesn't impact your secure score. For more information about secure score, see [Secure score in Microsoft Defender for Cloud](secure-score-security-controls.md).
defender-for-cloud Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/policy-reference.md
Title: Built-in policy definitions for Microsoft Defender for Cloud description: Lists Azure Policy built-in policy definitions for Microsoft Defender for Cloud. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 07/06/2023 Last updated : 07/18/2023
defender-for-cloud Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/release-notes.md
Title: Release notes for Microsoft Defender for Cloud description: This page is updated frequently with the latest updates in Defender for Cloud. Previously updated : 07/12/2023 Last updated : 07/18/2023 # What's new in Microsoft Defender for Cloud?
Updates in July include:
|Date |Update | |||
+| July 18 | [Agentless secret scanning for virtual machines in Defender for servers P2 & DCSPM](#agentless-secret-scanning-for-virtual-machines-in-defender-for-servers-p2--dcspm) |
| July 12 | [New Security alert in Defender for Servers plan 2: Detecting Potential Attacks leveraging Azure VM GPU driver extensions](#new-security-alert-in-defender-for-servers-plan-2-detecting-potential-attacks-leveraging-azure-vm-gpu-driver-extensions) | July 9 | [Support for disabling specific vulnerability findings](#support-for-disabling-specific-vulnerability-findings) | July 1 | [Data Aware Security Posture is now Generally Available](#data-aware-security-posture-is-now-generally-available) |
+### Agentless secret scanning for virtual machines in Defender for servers P2 & DCSPM
+
+July 18, 2023
+
+Secret scanning is now available as part of the agentless scanning in Defender for Servers P2 and DCSPM. This capability helps to detect unmanaged and insecure secrets saved on virtual machines, in both Azure and AWS resources, that can be used to move laterally in the network. If secrets are detected, Defender for Cloud can help to prioritize and take actionable remediation steps to minimize the risk of lateral movement, all without affecting your machine's performance.
+
+For more information about how to protect your secrets with secret scanning, see [Manage secrets with agentless secret scanning](secret-scanning.md).
### New security alert in Defender for Servers plan 2: detecting potential attacks leveraging Azure VM GPU driver extensions
This alert focuses on identifying suspicious activities leveraging Azure virtual
For a complete list of alerts, see the [reference table for all security alerts in Microsoft Defender for Cloud](alerts-reference.md).
- ### Support for disabling specific vulnerability findings
+### Support for disabling specific vulnerability findings
July 9, 2023
Release of support for disabling vulnerability findings for your container regis
Learn how to [disable vulnerability assessment findings on Container registry images](disable-vulnerability-findings-containers.md). - ### Data Aware Security Posture is now Generally Available July 1, 2023
defender-for-cloud Secret Scanning https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/secret-scanning.md
+
+ Title: Manage secrets with agentless secret scanning in Microsoft Defender for Cloud
+description: Learn how to scan your servers for secrets with Defender for Server's agentless secret scanning.
+ Last updated : 07/18/2023++
+# Manage secrets with agentless secret scanning
+
+By accessing internet-facing workloads and exploiting exposed credentials and secrets, attackers can move laterally across networks, find sensitive data, and exploit vulnerabilities to damage critical information systems.
+
+Defender for Cloud's agentless secret scanning for Virtual Machines (VM) locates plaintext secrets that exist in your environment. If secrets are detected, Defender for Cloud can assist your security team to prioritize and take actionable remediation steps to minimize the risk of lateral movement, all without affecting your machine's performance.
+
+By using agentless secret scanning, you can proactively discover the following types of secrets across your environments:
+
+- **Insecure SSH private keys** - supports the RSA algorithm for PuTTY files, and the PKCS#8 and PKCS#1 standards
+- **Plaintext Azure SQL connection strings** - supports SQL PaaS
+- **Plaintext Azure storage account connection strings**
+- **Plaintext Azure storage account SAS tokens**
+- **Plaintext AWS access keys**
+- **Plaintext AWS RDS SQL connection string** - supports SQL PaaS
+
+In addition to detecting SSH private keys, the agentless scanner verifies whether they can be used to move laterally in the network. Keys that couldn't be verified are categorized as **unverified** in the **Recommendation** pane.
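+
+The scanner's exact detection logic isn't documented here, but if you want to spot-check a Linux machine yourself for the kinds of findings listed above, a rough approximation is to search the file system for private key headers and obvious connection strings, and to flag key files with broad permissions. The following bash commands are only an illustrative sketch, not how Defender for Cloud performs the scan, and the paths are examples.
+
+```bash
+# Find files that contain unencrypted private key headers (PEM, PKCS#1, OpenSSH).
+sudo grep -rlI -e 'BEGIN RSA PRIVATE KEY' -e 'BEGIN OPENSSH PRIVATE KEY' \
+  -e 'BEGIN PRIVATE KEY' /home /root /etc 2>/dev/null
+
+# Flag SSH private key files that are readable by group or others.
+sudo find /home /root -name 'id_*' ! -name '*.pub' -perm /044 -ls 2>/dev/null
+
+# Look for plaintext storage or SQL connection strings in common locations.
+sudo grep -rlI -e 'AccountKey=' -e 'Server=tcp:' /home /etc 2>/dev/null
+```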
+
+## Prerequisites
+
+- An Azure account. If you don't already have an Azure account, you can [create your Azure free account today](https://azure.microsoft.com/free/).
+
+- Access to [Defender for Cloud](get-started.md)
+
+- [Enable](enable-enhanced-security.md#enable-defender-plans-to-get-the-enhanced-security-features) either or both of the following two plans:
+ - [Defender for Servers Plan 2](plan-defender-for-servers-select-plan.md)
+ - [Defender CSPM](concept-cloud-security-posture-management.md)
+
+> [!NOTE]
+> If you don't enable both plans, you will only have limited access to the features available from Defender for Server's agentless secret scanning capabilities. Check out [which features are available with each plan](#feature-capability).
+
+- [Enable agentless scanning for machines](enable-vulnerability-assessment-agentless.md#enabling-agentless-scanning-for-machines).
+
+For requirements for agentless scanning, see [Learn about agentless scanning](concept-agentless-data-collection.md#availability).
+
+## Feature capability
+
+You must enable [Defender for Servers Plan 2](plan-defender-for-servers-select-plan.md#plan-features) and [Defender CSPM](concept-cloud-security-posture-management.md) to gain access to all of the agentless secret scanning capabilities.
+
+If you only enable one of the two plans, you gain only part of the available features of the agentless secret scanning capabilities. The following table shows which plans enable which features:
+
+| Plan Feature | Defender for servers plan 2 | Defender CSPM |
+|--|--|--|
+| [Attack path](#remediate-secrets-with-attack-path) | No | Yes |
+| [Cloud security explorer](#remediate-secrets-with-cloud-security-explorer) | Yes | Yes |
+| [Recommendations](#remediate-secrets-with-recommendations) | Yes | Yes |
+| [Asset Inventory](#remediate-secrets-from-your-asset-inventory) - Secrets | Yes | No |
+
+## Remediate secrets with attack path
+
+Attack path analysis is a graph-based algorithm that scans your [cloud security graph](concept-attack-path.md#what-is-cloud-security-graph). These scans expose exploitable paths that attackers may use to breach your environment and reach your high-impact assets. Attack path analysis exposes attack paths and suggests recommendations on how best to remediate issues, which breaks the attack path and prevents a successful breach.
+
+Attack path analysis takes into account the contextual information of your environment to identify issues that may compromise it. This analysis helps prioritize the riskiest issues for faster remediation.
+
+The attack path page shows an overview of your attack paths, affected resources and a list of active attack paths.
+
+### Azure VM supported attack path scenarios
+
+Agentless secret scanning for Azure VMs supports the following attack path scenarios:
+
+- `Exposed Vulnerable VM has an insecure SSH private key that is used to authenticate to a VM`.
+
+- `Exposed Vulnerable VM has insecure secrets that are used to authenticate to a storage account`.
+
+- `Vulnerable VM has insecure secrets that are used to authenticate to a storage account`.
+
+- `Exposed Vulnerable VM has insecure secrets that are used to authenticate to an SQL server`.
+
+### AWS instances supported attack path scenarios
+
+Agentless secret scanning for AWS instances supports the following attack path scenarios:
+
+- `Exposed Vulnerable EC2 instance has an insecure SSH private key that is used to authenticate to an EC2 instance`.
+
+- `Exposed Vulnerable EC2 instance has an insecure secret that is used to authenticate to a storage account`.
+
+- `Exposed Vulnerable EC2 instance has insecure secrets that are used to authenticate to an AWS RDS server`.
+
+- `Vulnerable EC2 instance has insecure secrets that are used to authenticate to an AWS RDS server`.
+
+**To investigate secrets with Attack path**:
+
+1. Sign in to the [Azure portal](https://portal.azure.com).
+
+1. Navigate to **Microsoft Defender for Cloud** > **Recommendations** > **Attack path**.
+
+ :::image type="content" source="media/secret-scanning/attack-path.png" alt-text="Screenshot that shows how to navigate to your attack path in Defender for Cloud.":::
+
+1. Select the relevant attack path.
+
+1. Follow the remediation steps to remediate the attack path.
+
+## Remediate secrets with recommendations
+
+If a secret is found on your resource, that resource triggers an associated recommendation, located under the **Remediate vulnerabilities** security control on the recommendations page. Depending on your resources, either or both of the following recommendations appear:
+
+- **Azure resources**: `Machines should have secrets findings resolved`
+
+- **AWS resources**: `EC2 instances should have secret findings resolved`
+
+**To remediate secrets from the recommendations page**:
+
+1. Sign in to the [Azure portal](https://portal.azure.com).
+
+1. Navigate to **Microsoft Defender for Cloud** > **Recommendations**.
+
+1. Expand the **Remediate vulnerabilities** security control.
+
+1. Select either:
+
+ - **Azure resources**: `Machines should have secrets findings resolved`
+
+ - **AWS resources**: `EC2 instances should have secret findings resolved`
+
+ :::image type="content" source="media/secret-scanning/recommendation-findings.png" alt-text="Screenshot that shows either of the two results under the Remediate vulnerabilities security control." lightbox="media/secret-scanning/recommendation-findings.png":::
+
+1. Expand **Affected resources** to review the list of all resources that contain secrets.
+
+1. In the Findings section, select a secret to view detailed information about the secret.
+
+ :::image type="content" source="media/secret-scanning/select-findings.png" alt-text="Screenshot that shows the detailed information of a secret after you have selected the secret in the findings section." lightbox="media/secret-scanning/select-findings.png":::
+
+1. Expand **Remediation steps** and follow the listed steps.
+
+1. Expand **Affected resources** to review the resources affected by this secret.
+
+1. (Optional) You can select an affected resource to see that resource's information.
+
+Secrets that don't have a known attack path are referred to as `secrets without an identified target resource`.
+
+## Remediate secrets with cloud security explorer
+
+The [cloud security explorer](concept-attack-path.md#what-is-cloud-security-explorer) allows you to proactively identify potential security risks within your cloud environment by querying the [cloud security graph](concept-attack-path.md#what-is-cloud-security-graph), the context engine of Defender for Cloud. The cloud security explorer allows your security team to prioritize any concerns, while also considering the specific context and conventions of your organization.
+
+**To remediate secrets with cloud security explorer**:
+
+1. Sign in to the [Azure portal](https://portal.azure.com).
+
+1. Navigate to **Microsoft Defender for Cloud** > **Cloud Security Explorer**.
+
+1. Select one of the following templates:
+
+ - **VM with plaintext secret that can authenticate to another VM** - Returns all Azure VMs or AWS EC2 instances with plaintext secret that can access other VMs or EC2s.
+ - **VM with plaintext secret that can authenticate to a storage account** - Returns all Azure VMs or AWS EC2 instances with plaintext secret that can access storage accounts.
+ - **VM with plaintext secret that can authenticate to a SQL database** - Returns all Azure VMs or AWS EC2 instances with plaintext secret that can access SQL databases.
+
+If you don't want to use any of the available templates, you can also [build your own query](how-to-manage-cloud-security-explorer.md) on the cloud security explorer.
+
+## Remediate secrets from your asset inventory
+
+Your [asset inventory](asset-inventory.md) shows the [security posture](concept-cloud-security-posture-management.md) of the resources you've connected to Defender for Cloud. Defender for Cloud periodically analyzes the security state of resources connected to your subscriptions to identify potential security issues and provides you with active recommendations.
+
+The asset inventory allows you to view the secrets discovered on a specific machine.
+
+**To remediate secrets from your asset inventory**:
+
+1. Sign in to the [Azure portal](https://portal.azure.com).
+
+1. Navigate to **Microsoft Defender for Cloud** > **Inventory**.
+
+1. Select the relevant VM.
+
+1. Go to the **Secrets** tab.
+
+1. Review each plaintext secret that appears with the relevant metadata.
+
+1. Select a secret to view extra details of that secret.
+
+Different types of secrets have different sets of additional information. For example, for plaintext SSH private keys, the information includes related public keys (mapping between the private key and the authorized keys file we discovered, or mapping to a different virtual machine that contains the same SSH private key identifier).
+
+## Next steps
+
+- [Use asset inventory to manage your resources' security posture](asset-inventory.md)
defender-for-cloud Support Matrix Defender For Servers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/support-matrix-defender-for-servers.md
This table summarizes Azure cloud support for Defender for Servers features.
[File integrity monitoring](./file-integrity-monitoring-overview.md) | GA | GA | GA [Adaptive application controls](./adaptive-application-controls.md) | GA | GA | GA [Adaptive network hardening](./adaptive-network-hardening.md) | GA | NA | NA
-[Docker host hardening](./harden-docker-hosts.md) | GA | GA | GA
+[Docker host hardening](./harden-docker-hosts.md) | GA | GA | GA
+[Agentless secret scanning](secret-scanning.md) | Preview | NA | NA
## Windows machine support
The following table shows feature support for AWS and GCP machines.
| Third-party vulnerability assessment | - | - | | [Network security assessment](protect-network-resources.md) | - | - | | [Cloud security explorer](how-to-manage-cloud-security-explorer.md) | ✔ | - |
+| [Agentless secret scanning](secret-scanning.md) | ✔ | - |
## Endpoint protection support
defender-for-iot Overview Firmware Analysis https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/device-builders/overview-firmware-analysis.md
+
+ Title: Firmware analysis for device builders - Microsoft Defender for IoT
+description: Learn how Microsoft Defender for IoT's firmware analysis helps device builders to market and deploy highly secure IoT/OT devices.
+ Last updated : 06/15/2023
+#Customer intent: As a device builder, I want to understand how firmware analysis can help secure my IoT/OT devices and products.
++
+# Firmware analysis for device builders
+
+Just like computers have operating systems, IoT devices have firmware, and it's the firmware that runs and controls IoT devices. For IoT device builders, security is a near-universal concern as IoT devices have traditionally lacked basic security measures.
+
+For example, IoT attack vectors typically use easily exploitable--but easily correctable--weaknesses such as hardcoded user accounts, outdated and vulnerable open-source packages, or a manufacturer's private cryptographic signing key.
+
+Use Microsoft Defender for IoT's firmware analysis to identify embedded security threats, vulnerabilities, and common weaknesses that may be otherwise undetectable.
+
+> [!NOTE]
+> The Defender for IoT **Firmware analysis** page is in PREVIEW. The [Azure Preview Supplemental Terms](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) include other legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
+>
+
+## How to be sure your firmware is secure
+
+Defender for IoT can analyze your firmware for common weaknesses and vulnerabilities, and provide insight into your firmware security. This analysis is useful whether you build the firmware in-house or receive firmware from your supply chain.
+
+- **Software bill of materials (SBOM)**: Receive a detailed listing of open-source packages used during the firmware's build process. See the package version and what license governs the use of the open-source package.
+
+- **CVE analysis**: See which firmware components have publicly known security vulnerabilities and exposures.
+
+- **Binary hardening analysis**: Identify binaries that were compiled without specific security flags, such as buffer overflow protection, position independent executables, and other common hardening techniques (see the command sketch after this list).
+
+- **SSL certificate analysis**: Reveal expired and revoked TLS/SSL certificates.
+
+- **Public and private key analysis**: Verify that the public and private cryptographic keys discovered in the firmware are necessary and not accidental.
+
+- **Password hash extraction**: Ensure that user account password hashes use secure cryptographic algorithms.
+
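+If you want to manually reproduce a quick version of the binary hardening checks on a single Linux executable, standard ELF tooling reports most of the same flags. The following bash sketch is illustrative only; Defender for IoT runs its analysis on the uploaded image rather than with these commands, and `./my_binary` is a placeholder path.
+
+```bash
+BIN=./my_binary   # placeholder: a binary extracted from the firmware image
+
+# PIE: position-independent executables report "DYN" rather than "EXEC".
+readelf -h "$BIN" | grep 'Type:'
+
+# NX: a GNU_STACK segment with "RWE" flags means the stack is executable.
+readelf -l "$BIN" | grep -A1 GNU_STACK
+
+# RELRO: look for a GNU_RELRO segment; full RELRO also requires BIND_NOW.
+readelf -l "$BIN" | grep GNU_RELRO
+readelf -d "$BIN" | grep BIND_NOW
+
+# Stack canary: count references to the __stack_chk symbols.
+readelf -s "$BIN" | grep -c __stack_chk
+
+# Stripped: the file utility reports "stripped" or "not stripped".
+file "$BIN"
+```
+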
+## Next steps
+
+- [Analyze a firmware image](tutorial-analyze-firmware.md)
defender-for-iot Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/device-builders/release-notes.md
Noted features are in PREVIEW. The [Azure Preview Supplemental Terms](https://az
For more information, see [Upgrade the Microsoft Defender for IoT micro agent](upgrade-micro-agent.md).
+## July 2023
+
+**Firmware analysis public preview announcement**
+
+Microsoft Defender for IoT Firmware analysis is now available in public preview. Defender for IoT can analyze your device firmware for common weaknesses and vulnerabilities, and provide insight into your firmware security. This analysis is useful whether you build the firmware in-house or receive firmware from your supply chain.
+
+For more information, see [Firmware analysis for device builders](overview-firmware-analysis.md).
+++ ## December 2022 **Version 4.6.2**:
defender-for-iot Tutorial Analyze Firmware https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/device-builders/tutorial-analyze-firmware.md
+
+ Title: Analyze a firmware image with Microsoft Defender for IoT.
+description: Learn to analyze a compiled firmware image within Microsoft Defender for IoT.
+ Last updated : 06/15/2023
+#Customer intent: As a device builder, I want to see what vulnerabilities or weaknesses might exist in my firmware image.
++
+# Tutorial: Analyze an IoT/OT firmware image
+
+This tutorial describes how to use Defender for IoT's **Firmware analysis** page to upload a firmware image for security analysis and view analysis results.
+
+> [!NOTE]
+> The Defender for IoT **Firmware analysis** page is in PREVIEW. The [Azure Preview Supplemental Terms](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) include other legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
+>
+
+## Prerequisites
+
+> [!NOTE]
+> The Defender for IoT **Firmware Analysis** feature is automatically available if you currently access Defender for IoT using the Security Admin, Contributor, or Owner role. If you only have the SecurityReader role or want to use Firmware Analysis as a standalone feature, your admin must assign you the FirmwareAnalysisAdmin role. For more information, see [Azure User Roles and Permissions](/azure/role-based-access-control/built-in-roles).
+>
+
+To use the **Firmware analysis** page to analyze your firmware security, your firmware image must have the following prerequisites:
+
+- You must have access to the compiled firmware image.
+
+- Your image must be an unencrypted, Linux-based firmware image.
+
+- Your image must be less than 1 GB in size.
+
+## Select the region for storing firmware images
+
+If this is your first interaction with **Firmware analysis**, you'll need to select a region in which to upload and store your firmware images.
+
+1. Sign into the Azure portal and go to Defender for IoT.
+
+ :::image type="content" source="media/tutorial-firmware-analysis/defender-portal.png" alt-text="Screenshot that shows the Defender for IoT portal.":::
+
+1. Select **Firmware analysis**.
+1. Select a region to use for storage.
+
+ :::image type="content" source="media/tutorial-firmware-analysis/select-region.png" alt-text="Screenshot that shows selecting an Azure Region.":::
+
+## Upload a firmware image for analysis
+
+1. Sign into the Azure portal and go to Defender for IoT.
+
+1. Select **Firmware analysis** > **Upload**.
+
+1. In the **Upload a firmware image** pane, select **Choose file**. Browse to and select the firmware image file you want to upload.
+
+ :::image type="content" source="media/tutorial-firmware-analysis/upload.png" alt-text="Screenshot that shows clicking the Upload option within Firmware Analysis.":::
+
+1. Enter the following details:
+
+ - The firmware's vendor
+ - The firmware's model
+ - The firmware's version
+ - An optional description of your firmware
+
+1. Select **Upload** to upload your firmware for analysis.
+
+ Your firmware appears in the grid on the **Firmware analysis** page.
+
+## View firmware analysis results
+
+The analysis time will vary based on the size of the firmware image and the number of files discovered in the image. While the analysis is taking place, the status will say *Extracting* and then *Analysis*. When the status is *Ready*, you can see the firmware analysis results.
+
+1. Sign into the Azure portal and go to Microsoft Defender for IoT > **Firmware analysis**.
+
+1. Select the row of the firmware you want to view. The **Firmware overview** pane shows basic data about the firmware on the right.
+
+ :::image type="content" source="media/tutorial-firmware-analysis/firmware-details.png" alt-text="Screenshot that shows clicking the row with the firmware image to see the side panel details.":::
+
+1. Select **View results** to drill down for more details.
+
+ :::image type="content" source="media/tutorial-firmware-analysis/overview.png" alt-text="Screenshot that shows clicking view results button for a detailed analysis of the firmware image.":::
+
+1. The firmware details page shows security analysis results on the following tabs:
+
+ |Name |Description |
+ |||
+ |**Overview** | View an overview of all of the analysis results.|
+ |**Software Components** | View a software bill of materials with the following details: <br><Br> - A list of open source components used to create firmware image <br>- Component version information <br>- Component license <br>- Executable path of the binary |
+ |**Weaknesses** | View a listing of common vulnerabilities and exposures (CVEs). <br><br>Select a specific CVE to view more details. |
+ |**Binary Hardening** | View if executables compiled using recommended security settings: <br><br>- NX <br>- PIE<br>- RELRO<br>- CANARY<br>- STRIPPED<br><br> Select a specific binary to view more details.|
+ |**Password Hashes** | View embedded accounts and their associated password hashes.<br><br>Select a specific user account to view more details.|
+ |**Certificates** | View a list of TLS/SSL certificates found in the firmware.<br><br>Select a specific certificate to view more details.|
+ |**Keys** | View a list of public and private crypto keys in the firmware.<br><br>Select a specific key to view more details.|
+
+ :::image type="content" source="media/tutorial-firmware-analysis/weaknesses.png" alt-text="Screenshot that shows the weaknesses (CVE) analysis of the firmware image.":::
+
+## Delete a firmware image
+
+Delete a firmware image from Defender for IoT when you no longer need it analyzed.
+
+After you delete an image, there's no way to retrieve the image or the associated analysis results. If you need the results, you'll need to upload the firmware image again for analysis.
+
+**To delete a firmware image**:
+
+1. Select the checkbox for the firmware image you want to delete and then select **Delete**.
+
+## Next steps
+
+For more information, see [Firmware analysis for device builders](overview-firmware-analysis.md).
+
defender-for-iot Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/release-notes.md
Title: OT monitoring software versions - Microsoft Defender for IoT description: This article lists Microsoft Defender for IoT on-premises OT monitoring software versions, including release and support dates and highlights for new features. Previously updated : 1/02/2023 Last updated : 07/03/2023 # OT monitoring software versions
Cloud features may be dependent on a specific sensor version. Such features are
| Version / Patch | Release date | Scope | Supported until | | - | | -- | - | | **22.3** | | | |
+| 22.3.10 | 07/2023 | Patch | 06/2024 |
| 22.3.9 | 05/2023 | Patch | 04/2024 | | 22.3.8 | 04/2023 | Patch | 03/2024 | | 22.3.7 | 03/2023 | Patch | 02/2024 |
To understand whether a feature is supported in your sensor version, check the r
## Versions 22.3.x
+### 22.3.10
+
+**Release date**: 07/2023
+
+**Supported until**: 06/2024
+
+This version includes bug fixes for stability improvements.
+ ### 22.3.9 **Release date**: 05/2023 **Supported until**: 04/2024
-This version includes bug fixes for stability improvements.
+This version includes:
- [Improved monitoring and support for OT sensor logs](whats-new.md#improved-monitoring-and-support-for-ot-sensor-logs)
+- Bug fixes for stability improvements.
### 22.3.8
This version includes the following new updates and fixes:
## Next steps For more information about the features listed in this article, see [What's new in Microsoft Defender for IoT?](whats-new.md) and [What's new archive for Microsoft Defender for IoT for organizations](release-notes-archive.md).+
dev-box How To Generate Visual Studio Caches https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dev-box/how-to-generate-visual-studio-caches.md
With [Visual Studio 17.7 Preview 3](https://visualstudio.microsoft.com/vs/previe
Benefits of precaching your Visual Studio solution on a dev box image include: - You can reduce the time it takes to load your solution for the first time. -- You can quickly access and use key IDE features like Find In Files and IntelliSense in Visual Studio.
+- You can quickly access and use key IDE features like [**Find In Files**](/visualstudio/ide/find-in-files) and [**IntelliSense**](/visualstudio/ide/using-intellisense) in Visual Studio.
- You can improve the Git performance on large repositories. > [!NOTE]
To leverage precaching of your source code and Visual Studio IDE customizations
- Create a dev center and configure the Microsoft Dev Box service. If you don't have one available, follow the steps in [Quickstart: Configure Microsoft Dev Box](quickstart-configure-dev-box-service.md) to create a dev center and configure a dev box. - [Create a custom VM image for dev box](how-to-customize-devbox-azure-image-builder.md) that includes your source code and pregenerated caches.
- This article guides you through the creation of an Azure Resource Manager template. In the following sections, you'll modify that template to include processes to [generate the Visual Studio solution cache](#enable-caches-in-dev-box-images) and further improve Visual Studio performance by [preparing the git commit graph](#enable-git-commit-graph-optimizations) for your project.
+ This article guides you through the creation of an Azure Resource Manager template. In the following sections, you'll modify that template to include processes to [generate the Visual Studio solution cache](#enable-visual-studio-caches-in-dev-box-images) and further improve Visual Studio performance by [preparing the git commit graph](#enable-git-commit-graph-optimizations-in-dev-box-images) for your project.
+ You can then use the resulting image to [create new dev boxes](quickstart-configure-dev-box-service.md#3-create-a-dev-box-definition) for your team.
-You can then use the resulting image to [create new dev boxes](quickstart-configure-dev-box-service.md#3-create-a-dev-box-definition) for your team.
+## Enable Visual Studio caches in dev box images
-## Enable caches in dev box images
-
-You can generate caches for your Visual Studio solution as part of an automated pipeline that builds custom dev box images. To do so, you must meet the following requirements:
+You can generate caches for your Visual Studio solution as part of an automated pipeline that builds custom dev box images. To enable Visual Studio caches in your dev box image:
* Within the [Azure Resource Manager template](../azure-resource-manager/templates/overview.md), add a customized step to clone the source repository of your project into a nonuser specific location on the VM. * With the project source located on disk you can now run the `PopulateSolutionCache` feature to generate the project caches. To do this, add the following PowerShell command to your template's customized steps:
You can generate caches for your Visual Studio solution as part of an automated
When a dev box user opens the solution on a dev box based off the customized image, Visual Studio will read the already generated caches and skip the cache generation altogether.
-## Enable Git commit-graph optimizations
+## Enable Git commit-graph optimizations in dev box images
-Beyond the [standalone commit-graph feature that was made available with Visual Studio 17.2 Preview 3](https://devblogs.microsoft.com/visualstudio/supercharge-your-git-experience-in-vs/), you can also enable commit-graph optimizations as part of an automated pipeline that generates custom dev box images. To do so, you must meet the following requirements:
+Beyond the [standalone commit-graph feature that was made available with Visual Studio 17.2 Preview 3](https://aka.ms/devblogs-commit-graph), you can also enable commit-graph optimizations as part of an automated pipeline that generates custom dev box images.
+You can enable Git commit-graph optimizations in your dev box image if you meet the following requirements:
* You're using a [Microsoft Dev Box](overview-what-is-microsoft-dev-box.md) as your development workstation. * The source code for your project is saved in a non-user specific location to be included in the image. * You can [create a custom dev box image](how-to-customize-devbox-azure-image-builder.md) that includes the Git source code repository for your project. * You're using [Visual Studio 17.7 Preview 3 or higher](https://visualstudio.microsoft.com/vs/preview/).
-To enable this optimization, execute the following `git` commands from your Git repositoryΓÇÖs location as part of your image build process:
+To enable the commit-graph optimization, execute the following `git` commands from your Git repository's location as part of your custom image build process:
```bash # Enables the Git repo to use the commit-graph file, if the file is present
digital-twins Concepts Data Ingress Egress https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/digital-twins/concepts-data-ingress-egress.md
Once the data has been historized, you can query this data in Azure Data Explore
You can also use data history in combination with [Azure Synapse Analytics](../synapse-analytics/overview-what-is.md) to aggregate data from disparate sources. This can be useful in many scenarios. Here are two examples: * Combine information technology (IT) data from ERP or CRM systems (like Dynamics 365, SAP, or Salesforce) with operational technology (OT) data from IoT devices and production management systems. For an example that illustrates how a company might combine this data, see the following blog post: [Integrating IT and OT Data with Azure Digital Twins, Azure Data Explorer, and Azure Synapse](https://techcommunity.microsoft.com/t5/internet-of-things-blog/integrating-it-and-ot-data-with-azure-digital-twins-azure-data/ba-p/3401981).
-* Integrate with the Azure AI and Cognitive Services [Multivariate Anomaly Detector](../cognitive-services/anomaly-detector/overview-multivariate.md), to quickly connect your Azure Digital Twins data with a downstream AI/machine learning solution that specializes in anomaly detection. The [Azure Digital Twins Multivariate Anomaly Detection Toolkit](/samples/azure-samples/digital-twins-mvad-integration/adt-mvad-integration/) is a sample project that provides a workflow for training multiple Multivariate Anomaly Detector models for several scenario analyses, based on historical digital twin data. It then leverages the trained models to detect abnormal operations and anomalies in modeled Azure Digital Twins environments, in near real-time.
+* Integrate with the Azure AI and Cognitive Services [Multivariate Anomaly Detector](../ai-services/anomaly-detector/overview.md), to quickly connect your Azure Digital Twins data with a downstream AI/machine learning solution that specializes in anomaly detection. The [Azure Digital Twins Multivariate Anomaly Detection Toolkit](/samples/azure-samples/digital-twins-mvad-integration/adt-mvad-integration/) is a sample project that provides a workflow for training multiple Multivariate Anomaly Detector models for several scenario analyses, based on historical digital twin data. It then leverages the trained models to detect abnormal operations and anomalies in modeled Azure Digital Twins environments, in near real-time.
## Next steps
event-grid Event Schema Event Grid Namespace https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/event-schema-event-grid-namespace.md
Last updated 12/02/2022
-# Azure Event Grid Namespace as an Event Grid source
+# Azure Event Grid Namespace (Preview) as an Event Grid source
This article provides the properties and schema for Azure Event Grid namespace events. For an introduction to event schemas, see [Azure Event Grid event schema](event-schema.md). ## Available event types
-Azure Event Grid namespace emits the following event types:
+Azure Event Grid namespace (Preview) emits the following event types:
| Event type | Description | | - | -- |
event-grid Mqtt Certificate Chain Client Authentication https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/mqtt-certificate-chain-client-authentication.md
Using the CA files generated to create certificate for the client.
## Upload the CA certificate to the namespace 1. In Azure portal, navigate to your Event Grid namespace.
-2. Under the MQTT section in left rail, navigate to CA certificates menu.
+1. Under the MQTT section in the left rail, navigate to the CA certificates menu.
:::image type="content" source="./media/mqtt-certificate-chain-client-authentication/event-grid-namespace-upload-certificate-authority-certificate.png" alt-text="Screenshot showing the CA certificate page under MQTT section in Event Grid namespace.":::
-3. Select **+ Certificate** to launch the Upload certificate page.
+1. Select **+ Certificate** to launch the Upload certificate page.
+1. Add a certificate name, browse to the intermediate certificate (.step/certs/intermediate_ca.crt), and select **Upload**.
> [!NOTE] > - CA certificate name can be 3-50 characters long.
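Before uploading, you can optionally sanity-check the intermediate certificate from a shell. The following commands are a hedged example that uses standard OpenSSL tooling; the file paths assume the step CLI default layout referenced in the step above, so adjust them to wherever your files actually live.

```bash
# Show the intermediate CA certificate's subject, issuer, and validity window.
openssl x509 -in .step/certs/intermediate_ca.crt -noout -subject -issuer -dates

# Confirm that the intermediate certificate chains to the root CA (root path is an assumption).
openssl verify -CAfile .step/certs/root_ca.crt .step/certs/intermediate_ca.crt
```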
event-grid Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/policy-reference.md
Title: Built-in policy definitions for Azure Event Grid description: Lists Azure Policy built-in policy definitions for Azure Event Grid. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 07/06/2023 Last updated : 07/18/2023
event-hubs Event Hubs C Getstarted Send https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-hubs/event-hubs-c-getstarted-send.md
In this section shows how to write a C app to send events to your event hub. The
#include <string.h> #include <unistd.h> #include <stdlib.h>
+ #include <signal.h>
+
+ volatile sig_atomic_t stop = 0;
#define check(messenger) \ { \
In this section shows how to write a C app to send events to your event hub. The
die(__FILE__, __LINE__, pn_error_text(pn_messenger_error(messenger))); \ } \ }
-
+
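+ /* Set the stop flag when Ctrl-C (SIGINT) is received so the send loop below can exit cleanly. */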
+ void interrupt_handler(int signum){
+ if(signum == SIGINT){
+ stop = 1;
+ }
+ }
+
pn_timestamp_t time_now(void) { struct timeval now;
In this section shows how to write a C app to send events to your event hub. The
int main(int argc, char** argv) { printf("Press Ctrl-C to stop the sender process\n");
+ signal(SIGINT, interrupt_handler);
pn_messenger_t *messenger = pn_messenger(NULL); pn_messenger_set_outgoing_window(messenger, 1); pn_messenger_start(messenger);
- while(true) {
+ while(!stop) {
sendMessage(messenger); printf("Sent message\n"); sleep(1);
event-hubs Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-hubs/policy-reference.md
Title: Built-in policy definitions for Azure Event Hubs description: Lists Azure Policy built-in policy definitions for Azure Event Hubs. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 07/06/2023 Last updated : 07/18/2023
governance Attestation Structure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/concepts/attestation-structure.md
# Azure Policy attestation structure
-Attestations are used by Azure Policy to set compliance states of resources or scopes targeted by [manual policies](effects.md#manual-preview). They also allow users to provide additional metadata or link to evidence which accompanies the attested compliance state.
+Attestations are used by Azure Policy to set compliance states of resources or scopes targeted by [manual policies](effects.md#manual). They also allow users to provide additional metadata or link to evidence that accompanies the attested compliance state.
> [!NOTE]
-> In preview, Attestations are available only through the [Azure Resource Manager (ARM) API](/rest/api/policy/attestations).
+> Attestations are available only through the [Azure Resource Manager (ARM) API](/rest/api/policy/attestations).
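As a hedged illustration of calling that API, the following Azure CLI sketch uses `az rest` to create an attestation at subscription scope. The attestation name, policy assignment ID, and evidence URI are placeholders, and the API version shown is an assumption to verify against the current Attestations REST reference.

```bash
sub_id=$(az account show --query id --output tsv)

az rest --method put \
  --url "https://management.azure.com/subscriptions/${sub_id}/providers/Microsoft.PolicyInsights/attestations/contoso-manual-attestation?api-version=2021-01-01" \
  --body '{
    "properties": {
      "policyAssignmentId": "/subscriptions/<subscription-id>/providers/Microsoft.Authorization/policyAssignments/<assignment-name>",
      "complianceState": "Compliant",
      "comments": "Control reviewed manually; see linked evidence.",
      "evidence": [
        { "description": "Review record", "sourceUri": "https://contoso.example/evidence/review-123" }
      ]
    }
  }'
```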
## Best practices
governance Effects https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/concepts/effects.md
These effects are currently supported in a policy definition:
- [DenyAction (preview)](#denyaction-preview) - [DeployIfNotExists](#deployifnotexists) - [Disabled](#disabled)-- [Manual (preview)](#manual-preview)
+- [Manual](#manual)
- [Modify](#modify) ## Interchanging effects
When **enforcementMode** is **Disabled**_**, resources are still evaluated. Logg
logs, and the policy effect don't occur. For more information, see [policy assignment - enforcement mode](./assignment-structure.md#enforcement-mode).
-## Manual (preview)
+## Manual
-The new `manual` (preview) effect enables you to self-attest the compliance of resources or scopes. Unlike other policy definitions that actively scan for evaluation, the Manual effect allows for manual changes to the compliance state. To change the compliance of a resource or scope targeted by a manual policy, you'll need to create an [attestation](attestation-structure.md). The [best practice](attestation-structure.md#best-practices) is to design manual policies that target the scope that defines the boundary of resources whose compliance need attesting.
+The new `manual` effect enables you to self-attest the compliance of resources or scopes. Unlike other policy definitions that actively scan for evaluation, the Manual effect allows for manual changes to the compliance state. To change the compliance of a resource or scope targeted by a manual policy, you'll need to create an [attestation](attestation-structure.md). The [best practice](attestation-structure.md#best-practices) is to design manual policies that target the scope that defines the boundary of resources whose compliance need attesting.
> [!NOTE]
-> During Public Preview, support for manual policy is available through various Microsoft Defender
+> Support for manual policy is available through various Microsoft Defender
> for Cloud regulatory compliance initiatives. If you are a Microsoft Defender for Cloud [Premium tier](https://azure.microsoft.com/pricing/details/defender-for-cloud/) customer, refer to their experience overview. Currently, the following regulatory policy initiatives include policy definitions containing the manual effect:
governance Built In Initiatives https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/samples/built-in-initiatives.md
Title: List of built-in policy initiatives description: List built-in policy initiatives for Azure Policy. Categories include Regulatory Compliance, Guest Configuration, and more. Previously updated : 07/06/2023 Last updated : 07/18/2023
governance Built In Policies https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/samples/built-in-policies.md
Title: List of built-in policy definitions description: List built-in policy definitions for Azure Policy. Categories include Tags, Regulatory Compliance, Key Vault, Kubernetes, Guest Configuration, and more. Previously updated : 07/06/2023 Last updated : 07/18/2023
side of the page. Otherwise, use <kbd>Ctrl</kbd>-<kbd>F</kbd> to use your browse
[!INCLUDE [azure-policy-reference-policies-sql](../../../../includes/policy/reference/bycat/policies-sql.md)]
+## SQL Managed Instance
++ ## SQL Server [!INCLUDE [azure-policy-reference-policies-sql-server](../../../../includes/policy/reference/bycat/policies-sql-server.md)]
governance Gov Dod Impact Level 4 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/samples/gov-dod-impact-level-4.md
initiative definition.
|[A maximum of 3 owners should be designated for your subscription](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F4f11b553-d42e-4e3a-89be-32ca364cad4c) |It is recommended to designate up to 3 subscription owners in order to reduce the potential for breach by a compromised owner. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Azure%20Government/Security%20Center/ASC_DesignateLessThanXOwners_Audit.json) | |[An Azure Active Directory administrator should be provisioned for SQL servers](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F1f314764-cb73-4fc9-b863-8eca98ac36e9) |Audit provisioning of an Azure Active Directory administrator for your SQL server to enable Azure AD authentication. Azure AD authentication enables simplified permission management and centralized identity management of database users and other Microsoft services |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/SQL_DB_AuditServerADAdmins_Audit.json) | |[Audit usage of custom RBAC rules](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fa451c1ef-c6ca-483d-87ed-f49761e3ffb5) |Audit built-in roles such as 'Owner, Contributer, Reader' instead of custom RBAC roles, which are error prone. Using custom roles is treated as an exception and requires a rigorous review and threat modeling |Audit, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/General/Subscription_AuditCustomRBACRoles_Audit.json) |
-|[Cognitive Services accounts should have local authentication methods disabled](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F71ef260a-8f18-47b7-abcb-62d0673d94dc) |Disabling local authentication methods improves security by ensuring that Cognitive Services accounts require Azure Active Directory identities exclusively for authentication. Learn more at: [https://aka.ms/cs/auth](../../../cognitive-services/authentication.md). |Audit, Deny, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Cognitive%20Services/CognitiveServices_DisableLocalAuth_Audit.json) |
+|[Cognitive Services accounts should have local authentication methods disabled](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F71ef260a-8f18-47b7-abcb-62d0673d94dc) |Disabling local authentication methods improves security by ensuring that Cognitive Services accounts require Azure Active Directory identities exclusively for authentication. Learn more at: [https://aka.ms/cs/auth](../../../ai-services/authentication.md). |Audit, Deny, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Cognitive%20Services/CognitiveServices_DisableLocalAuth_Audit.json) |
|[Deprecated accounts should be removed from your subscription](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F6b1cbf55-e8b6-442f-ba4c-7246b6381474) |Deprecated accounts should be removed from your subscriptions. Deprecated accounts are accounts that have been blocked from signing in. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Azure%20Government/Security%20Center/ASC_RemoveDeprecatedAccounts_Audit.json) | |[Deprecated accounts with owner permissions should be removed from your subscription](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Febb62a0c-3560-49e1-89ed-27e074e9f8ad) |Deprecated accounts with owner permissions should be removed from your subscription. Deprecated accounts are accounts that have been blocked from signing in. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Azure%20Government/Security%20Center/ASC_RemoveDeprecatedAccountsWithOwnerPermissions_Audit.json) | |[External accounts with owner permissions should be removed from your subscription](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Ff8456c1c-aa66-4dfb-861a-25d127b775c9) |External accounts with owner permissions should be removed from your subscription in order to prevent unmonitored access. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Azure%20Government/Security%20Center/ASC_RemoveExternalAccountsWithOwnerPermissions_Audit.json) |
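The "Cognitive Services accounts should have local authentication methods disabled" definition that recurs throughout these tables audits (or denies) accounts that still allow key-based authentication. On the resource side this corresponds to the `disableLocalAuth` property of `Microsoft.CognitiveServices/accounts`. A minimal ARM template sketch of a compliant account follows; the account name, region, kind, SKU, and API version are illustrative placeholders, not values taken from the initiative.

```json
{
  "type": "Microsoft.CognitiveServices/accounts",
  "apiVersion": "2022-12-01",
  "name": "contoso-language-account",
  "location": "usgovvirginia",
  "sku": {
    "name": "S0"
  },
  "kind": "TextAnalytics",
  "identity": {
    "type": "SystemAssigned"
  },
  "properties": {
    "disableLocalAuth": true
  }
}
```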
initiative definition.
|Name<br /><sub>(Azure portal)</sub> |Description |Effect(s) |Version<br /><sub>(GitHub)</sub> | ||||| |[An Azure Active Directory administrator should be provisioned for SQL servers](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F1f314764-cb73-4fc9-b863-8eca98ac36e9) |Audit provisioning of an Azure Active Directory administrator for your SQL server to enable Azure AD authentication. Azure AD authentication enables simplified permission management and centralized identity management of database users and other Microsoft services |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/SQL_DB_AuditServerADAdmins_Audit.json) |
-|[Cognitive Services accounts should have local authentication methods disabled](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F71ef260a-8f18-47b7-abcb-62d0673d94dc) |Disabling local authentication methods improves security by ensuring that Cognitive Services accounts require Azure Active Directory identities exclusively for authentication. Learn more at: [https://aka.ms/cs/auth](../../../cognitive-services/authentication.md). |Audit, Deny, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Cognitive%20Services/CognitiveServices_DisableLocalAuth_Audit.json) |
+|[Cognitive Services accounts should have local authentication methods disabled](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F71ef260a-8f18-47b7-abcb-62d0673d94dc) |Disabling local authentication methods improves security by ensuring that Cognitive Services accounts require Azure Active Directory identities exclusively for authentication. Learn more at: [https://aka.ms/cs/auth](../../../ai-services/authentication.md). |Audit, Deny, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Cognitive%20Services/CognitiveServices_DisableLocalAuth_Audit.json) |
|[Managed identity should be used in your API App](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fc4d441f8-f9d9-4a9e-9cef-e82117cb3eef) |Use a managed identity for enhanced authentication security |AuditIfNotExists, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/App%20Service/AppService_UseManagedIdentity_ApiApp_Audit.json) | |[Managed identity should be used in your Function App](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F0da106f2-4ca3-48e8-bc85-c638fe6aea8f) |Use a managed identity for enhanced authentication security |AuditIfNotExists, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/App%20Service/AppService_UseManagedIdentity_FunctionApp_Audit.json) | |[Managed identity should be used in your Web App](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F2b9ad585-36bc-4615-b300-fd4435808332) |Use a managed identity for enhanced authentication security |AuditIfNotExists, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/App%20Service/AppService_UseManagedIdentity_WebApp_Audit.json) |
initiative definition.
|[Azure Synapse workspaces should use customer-managed keys to encrypt data at rest](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Ff7d52b2d-e161-4dfa-a82b-55e564167385) |Use customer-managed keys to control the encryption at rest of the data stored in Azure Synapse workspaces. Customer-managed keys deliver double encryption by adding a second layer of encryption on top of the default encryption with service-managed keys. |Audit, Deny, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Synapse/SynapseWorkspaceCMK_Audit.json) | |[Bot Service should be encrypted with a customer-managed key](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F51522a96-0869-4791-82f3-981000c2c67f) |Azure Bot Service automatically encrypts your resource to protect your data and meet organizational security and compliance commitments. By default, Microsoft-managed encryption keys are used. For greater flexibility in managing keys or controlling access to your subscription, select customer-managed keys, also known as bring your own key (BYOK). Learn more about Azure Bot Service encryption: [https://docs.microsoft.com/azure/bot-service/bot-service-encryption](/azure/bot-service/bot-service-encryption). |disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Azure%20Government/Bot%20Service/BotService_CMKEnabled_Audit.json) | |[Both operating systems and data disks in Azure Kubernetes Service clusters should be encrypted by customer-managed keys](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F7d7be79c-23ba-4033-84dd-45e2a5ccdd67) |Encrypting OS and data disks using customer-managed keys provides more control and greater flexibility in key management. This is a common requirement in many regulatory and industry compliance standards. |Audit, Deny, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Kubernetes/AKS_CMK_Deny.json) |
-|[Cognitive Services accounts should enable data encryption with a customer-managed key](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F67121cc7-ff39-4ab8-b7e3-95b84dab487d) |Customer-managed keys are commonly required to meet regulatory compliance standards. Customer-managed keys enable the data stored in Cognitive Services to be encrypted with an Azure Key Vault key created and owned by you. You have full control and responsibility for the key lifecycle, including rotation and management. Learn more about customer-managed keys at [https://go.microsoft.com/fwlink/?linkid=2121321](../../../cognitive-services/encryption/cognitive-services-encryption-keys-portal.md). |Audit, Deny, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Cognitive%20Services/CognitiveServices_CustomerManagedKey_Audit.json) |
+|[Cognitive Services accounts should enable data encryption with a customer-managed key](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F67121cc7-ff39-4ab8-b7e3-95b84dab487d) |Customer-managed keys are commonly required to meet regulatory compliance standards. Customer-managed keys enable the data stored in Cognitive Services to be encrypted with an Azure Key Vault key created and owned by you. You have full control and responsibility for the key lifecycle, including rotation and management. Learn more about customer-managed keys at [https://go.microsoft.com/fwlink/?linkid=2121321](../../../ai-services/encryption/cognitive-services-encryption-keys-portal.md). |Audit, Deny, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Cognitive%20Services/CognitiveServices_CustomerManagedKey_Audit.json) |
|[Container registries should be encrypted with a customer-managed key](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F5b9159ae-1701-4a6f-9a7a-aa9c8ddd0580) |Use customer-managed keys to manage the encryption at rest of the contents of your registries. By default, the data is encrypted at rest with service-managed keys, but customer-managed keys are commonly required to meet regulatory compliance standards. Customer-managed keys enable the data to be encrypted with an Azure Key Vault key created and owned by you. You have full control and responsibility for the key lifecycle, including rotation and management. Learn more at [https://aka.ms/acr/CMK](../../../container-registry/tutorial-enable-customer-managed-keys.md). |Audit, Deny, Disabled |[1.1.2](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Container%20Registry/ACR_CMKEncryptionEnabled_Audit.json) | |[Event Hub namespaces should use a customer-managed key for encryption](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fa1ad735a-e96f-45d2-a7b2-9a4932cab7ec) |Azure Event Hubs supports the option of encrypting data at rest with either Microsoft-managed keys (default) or customer-managed keys. Choosing to encrypt data using customer-managed keys enables you to assign, rotate, disable, and revoke access to the keys that Event Hub will use to encrypt data in your namespace. Note that Event Hub only supports encryption with customer-managed keys for namespaces in dedicated clusters. |Audit, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Event%20Hub/EventHub_CustomerManagedKeyEnabled_Audit.json) | |[Logic Apps Integration Service Environment should be encrypted with customer-managed keys](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F1fafeaf6-7927-4059-a50a-8eb2a7a6f2b5) |Deploy into Integration Service Environment to manage encryption at rest of Logic Apps data using customer-managed keys. By default, customer data is encrypted with service-managed keys, but customer-managed keys are commonly required to meet regulatory compliance standards. Customer-managed keys enable the data to be encrypted with an Azure Key Vault key created and owned by you. You have full control and responsibility for the key lifecycle, including rotation and management. |Audit, Deny, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Azure%20Government/Logic%20Apps/LogicApps_ISEWithCustomerManagedKey_AuditDeny.json) |
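Several of the rows above require customer-managed keys for encryption at rest. For the Cognitive Services case, the account references a Key Vault key through its `encryption` block. The ARM template fragment below is a rough sketch, assuming a pre-existing key vault and an identity that has been granted access to the key; all names, the key vault URI, the key version, and the API version are placeholders.

```json
{
  "type": "Microsoft.CognitiveServices/accounts",
  "apiVersion": "2022-12-01",
  "name": "contoso-cmk-account",
  "location": "usgovvirginia",
  "sku": {
    "name": "E0"
  },
  "kind": "Face",
  "identity": {
    "type": "SystemAssigned"
  },
  "properties": {
    "encryption": {
      "keySource": "Microsoft.KeyVault",
      "keyVaultProperties": {
        "keyVaultUri": "https://contoso-kv.vault.usgovcloudapi.net/",
        "keyName": "cs-account-cmk",
        "keyVersion": "<key-version>"
      }
    }
  }
}
```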
initiative definition.
|Name<br /><sub>(Azure portal)</sub> |Description |Effect(s) |Version<br /><sub>(GitHub)</sub> | ||||| |[An Azure Active Directory administrator should be provisioned for SQL servers](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F1f314764-cb73-4fc9-b863-8eca98ac36e9) |Audit provisioning of an Azure Active Directory administrator for your SQL server to enable Azure AD authentication. Azure AD authentication enables simplified permission management and centralized identity management of database users and other Microsoft services |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/SQL_DB_AuditServerADAdmins_Audit.json) |
-|[Cognitive Services accounts should have local authentication methods disabled](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F71ef260a-8f18-47b7-abcb-62d0673d94dc) |Disabling local authentication methods improves security by ensuring that Cognitive Services accounts require Azure Active Directory identities exclusively for authentication. Learn more at: [https://aka.ms/cs/auth](../../../cognitive-services/authentication.md). |Audit, Deny, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Cognitive%20Services/CognitiveServices_DisableLocalAuth_Audit.json) |
+|[Cognitive Services accounts should have local authentication methods disabled](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F71ef260a-8f18-47b7-abcb-62d0673d94dc) |Disabling local authentication methods improves security by ensuring that Cognitive Services accounts require Azure Active Directory identities exclusively for authentication. Learn more at: [https://aka.ms/cs/auth](../../../ai-services/authentication.md). |Audit, Deny, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Cognitive%20Services/CognitiveServices_DisableLocalAuth_Audit.json) |
|[Managed identity should be used in your API App](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fc4d441f8-f9d9-4a9e-9cef-e82117cb3eef) |Use a managed identity for enhanced authentication security |AuditIfNotExists, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/App%20Service/AppService_UseManagedIdentity_ApiApp_Audit.json) | |[Managed identity should be used in your Function App](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F0da106f2-4ca3-48e8-bc85-c638fe6aea8f) |Use a managed identity for enhanced authentication security |AuditIfNotExists, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/App%20Service/AppService_UseManagedIdentity_FunctionApp_Audit.json) | |[Managed identity should be used in your Web App](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F2b9ad585-36bc-4615-b300-fd4435808332) |Use a managed identity for enhanced authentication security |AuditIfNotExists, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/App%20Service/AppService_UseManagedIdentity_WebApp_Audit.json) |
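The three managed identity rows above (API App, Function App, Web App) all come down to the same resource-level setting: the `Microsoft.Web/sites` resource carries an `identity` block. A minimal sketch follows; the site name, plan reference, region, and API version are placeholders.

```json
{
  "type": "Microsoft.Web/sites",
  "apiVersion": "2022-09-01",
  "name": "contoso-function-app",
  "location": "usgovvirginia",
  "kind": "functionapp",
  "identity": {
    "type": "SystemAssigned"
  },
  "properties": {
    "serverFarmId": "<app-service-plan-resource-id>",
    "httpsOnly": true
  }
}
```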
initiative definition.
|Name<br /><sub>(Azure portal)</sub> |Description |Effect(s) |Version<br /><sub>(GitHub)</sub> | ||||| |[An Azure Active Directory administrator should be provisioned for SQL servers](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F1f314764-cb73-4fc9-b863-8eca98ac36e9) |Audit provisioning of an Azure Active Directory administrator for your SQL server to enable Azure AD authentication. Azure AD authentication enables simplified permission management and centralized identity management of database users and other Microsoft services |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/SQL_DB_AuditServerADAdmins_Audit.json) |
-|[Cognitive Services accounts should have local authentication methods disabled](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F71ef260a-8f18-47b7-abcb-62d0673d94dc) |Disabling local authentication methods improves security by ensuring that Cognitive Services accounts require Azure Active Directory identities exclusively for authentication. Learn more at: [https://aka.ms/cs/auth](../../../cognitive-services/authentication.md). |Audit, Deny, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Cognitive%20Services/CognitiveServices_DisableLocalAuth_Audit.json) |
+|[Cognitive Services accounts should have local authentication methods disabled](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F71ef260a-8f18-47b7-abcb-62d0673d94dc) |Disabling local authentication methods improves security by ensuring that Cognitive Services accounts require Azure Active Directory identities exclusively for authentication. Learn more at: [https://aka.ms/cs/auth](../../../ai-services/authentication.md). |Audit, Deny, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Cognitive%20Services/CognitiveServices_DisableLocalAuth_Audit.json) |
|[Service Fabric clusters should only use Azure Active Directory for client authentication](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fb54ed75b-3e1a-44ac-a333-05ba39b99ff0) |Audit usage of client authentication only via Azure Active Directory in Service Fabric |Audit, Deny, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Service%20Fabric/ServiceFabric_AuditADAuth_Audit.json) | ### Identify unauthorized use of organizational systems.
initiative definition.
||||| |[An Azure Active Directory administrator should be provisioned for SQL servers](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F1f314764-cb73-4fc9-b863-8eca98ac36e9) |Audit provisioning of an Azure Active Directory administrator for your SQL server to enable Azure AD authentication. Azure AD authentication enables simplified permission management and centralized identity management of database users and other Microsoft services |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/SQL_DB_AuditServerADAdmins_Audit.json) | |[Audit usage of custom RBAC rules](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fa451c1ef-c6ca-483d-87ed-f49761e3ffb5) |Audit built-in roles such as 'Owner, Contributer, Reader' instead of custom RBAC roles, which are error prone. Using custom roles is treated as an exception and requires a rigorous review and threat modeling |Audit, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/General/Subscription_AuditCustomRBACRoles_Audit.json) |
-|[Cognitive Services accounts should have local authentication methods disabled](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F71ef260a-8f18-47b7-abcb-62d0673d94dc) |Disabling local authentication methods improves security by ensuring that Cognitive Services accounts require Azure Active Directory identities exclusively for authentication. Learn more at: [https://aka.ms/cs/auth](../../../cognitive-services/authentication.md). |Audit, Deny, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Cognitive%20Services/CognitiveServices_DisableLocalAuth_Audit.json) |
+|[Cognitive Services accounts should have local authentication methods disabled](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F71ef260a-8f18-47b7-abcb-62d0673d94dc) |Disabling local authentication methods improves security by ensuring that Cognitive Services accounts require Azure Active Directory identities exclusively for authentication. Learn more at: [https://aka.ms/cs/auth](../../../ai-services/authentication.md). |Audit, Deny, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Cognitive%20Services/CognitiveServices_DisableLocalAuth_Audit.json) |
|[Service Fabric clusters should only use Azure Active Directory for client authentication](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fb54ed75b-3e1a-44ac-a333-05ba39b99ff0) |Audit usage of client authentication only via Azure Active Directory in Service Fabric |Audit, Deny, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Service%20Fabric/ServiceFabric_AuditADAuth_Audit.json) | |[Service principals should be used to protect your subscriptions instead of management certificates](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F6646a0bd-e110-40ca-bb97-84fcee63c414) |Management certificates allow anyone who authenticates with them to manage the subscription(s) they are associated with. To manage subscriptions more securely, use of service principals with Resource Manager is recommended to limit the impact of a certificate compromise. |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Azure%20Government/Security%20Center/ASC_UseServicePrincipalToProtectSubscriptions.json) |
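For the Service Fabric row above, Azure Active Directory client authentication is configured on the cluster resource itself through an `azureActiveDirectory` block. The fragment below sketches only that part of a `Microsoft.ServiceFabric/clusters` resource; the tenant and application IDs, cluster name, region, and API version are placeholders, and the many other required cluster properties are omitted.

```json
{
  "type": "Microsoft.ServiceFabric/clusters",
  "apiVersion": "2021-06-01",
  "name": "contoso-sf-cluster",
  "location": "usgovvirginia",
  "properties": {
    "azureActiveDirectory": {
      "tenantId": "<tenant-id>",
      "clusterApplication": "<cluster-application-id>",
      "clientApplication": "<client-application-id>"
    }
  }
}
```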
initiative definition.
|[Add system-assigned managed identity to enable Guest Configuration assignments on VMs with a user-assigned identity](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F497dff13-db2a-4c0f-8603-28fa3b331ab6) |This policy adds a system-assigned managed identity to virtual machines hosted in Azure that are supported by Guest Configuration and have at least one user-assigned identity but do not have a system-assigned managed identity. A system-assigned managed identity is a prerequisite for all Guest Configuration assignments and must be added to machines before using any Guest Configuration policy definitions. For more information on Guest Configuration, visit [https://aka.ms/gcpol](../../machine-configuration/overview.md). |modify |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Azure%20Government/Guest%20Configuration/GuestConfiguration_AddSystemIdentityWhenUser_Prerequisite.json) | |[An Azure Active Directory administrator should be provisioned for SQL servers](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F1f314764-cb73-4fc9-b863-8eca98ac36e9) |Audit provisioning of an Azure Active Directory administrator for your SQL server to enable Azure AD authentication. Azure AD authentication enables simplified permission management and centralized identity management of database users and other Microsoft services |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/SQL_DB_AuditServerADAdmins_Audit.json) | |[Audit Linux machines that have accounts without passwords](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Ff6ec09a3-78bf-4f8f-99dc-6c77182d0f99) |Requires that prerequisites are deployed to the policy assignment scope. For details, visit [https://aka.ms/gcpol](../../machine-configuration/overview.md). Machines are non-compliant if Linux machines that have accounts without passwords |AuditIfNotExists, Disabled |[1.2.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Azure%20Government/Guest%20Configuration/GuestConfiguration_LinuxPassword232_AINE.json) |
-|[Cognitive Services accounts should have local authentication methods disabled](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F71ef260a-8f18-47b7-abcb-62d0673d94dc) |Disabling local authentication methods improves security by ensuring that Cognitive Services accounts require Azure Active Directory identities exclusively for authentication. Learn more at: [https://aka.ms/cs/auth](../../../cognitive-services/authentication.md). |Audit, Deny, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Cognitive%20Services/CognitiveServices_DisableLocalAuth_Audit.json) |
+|[Cognitive Services accounts should have local authentication methods disabled](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F71ef260a-8f18-47b7-abcb-62d0673d94dc) |Disabling local authentication methods improves security by ensuring that Cognitive Services accounts require Azure Active Directory identities exclusively for authentication. Learn more at: [https://aka.ms/cs/auth](../../../ai-services/authentication.md). |Audit, Deny, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Cognitive%20Services/CognitiveServices_DisableLocalAuth_Audit.json) |
|[Deploy the Linux Guest Configuration extension to enable Guest Configuration assignments on Linux VMs](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F331e8ea8-378a-410f-a2e5-ae22f38bb0da) |This policy deploys the Linux Guest Configuration extension to Linux virtual machines hosted in Azure that are supported by Guest Configuration. The Linux Guest Configuration extension is a prerequisite for all Linux Guest Configuration assignments and must be deployed to machines before using any Linux Guest Configuration policy definition. For more information on Guest Configuration, visit [https://aka.ms/gcpol](../../machine-configuration/overview.md). |deployIfNotExists |[1.2.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Azure%20Government/Guest%20Configuration/GuestConfiguration_DeployExtensionLinux_Prerequisite.json) | |[Managed identity should be used in your API App](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fc4d441f8-f9d9-4a9e-9cef-e82117cb3eef) |Use a managed identity for enhanced authentication security |AuditIfNotExists, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/App%20Service/AppService_UseManagedIdentity_ApiApp_Audit.json) | |[Managed identity should be used in your Function App](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F0da106f2-4ca3-48e8-bc85-c638fe6aea8f) |Use a managed identity for enhanced authentication security |AuditIfNotExists, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/App%20Service/AppService_UseManagedIdentity_FunctionApp_Audit.json) |
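The Guest Configuration rows above describe prerequisites rather than the checks themselves: the machine needs a system-assigned managed identity, and Linux machines need the Guest Configuration extension. The sketch below shows both pieces as template resources, assuming the commonly documented Linux extension publisher and type; the VM name, region, API versions are placeholders, and the VM's hardware and OS properties are omitted.

```json
[
  {
    "type": "Microsoft.Compute/virtualMachines",
    "apiVersion": "2023-03-01",
    "name": "contoso-linux-vm",
    "location": "usgovvirginia",
    "identity": {
      "type": "SystemAssigned"
    },
    "properties": {}
  },
  {
    "type": "Microsoft.Compute/virtualMachines/extensions",
    "apiVersion": "2023-03-01",
    "name": "contoso-linux-vm/AzurePolicyforLinux",
    "location": "usgovvirginia",
    "properties": {
      "publisher": "Microsoft.GuestConfiguration",
      "type": "ConfigurationforLinux",
      "typeHandlerVersion": "1.0",
      "autoUpgradeMinorVersion": true,
      "enableAutomaticUpgrade": true
    }
  }
]
```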
governance Gov Dod Impact Level 5 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/samples/gov-dod-impact-level-5.md
initiative definition.
|[A maximum of 3 owners should be designated for your subscription](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F4f11b553-d42e-4e3a-89be-32ca364cad4c) |It is recommended to designate up to 3 subscription owners in order to reduce the potential for breach by a compromised owner. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Azure%20Government/Security%20Center/ASC_DesignateLessThanXOwners_Audit.json) | |[An Azure Active Directory administrator should be provisioned for SQL servers](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F1f314764-cb73-4fc9-b863-8eca98ac36e9) |Audit provisioning of an Azure Active Directory administrator for your SQL server to enable Azure AD authentication. Azure AD authentication enables simplified permission management and centralized identity management of database users and other Microsoft services |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/SQL_DB_AuditServerADAdmins_Audit.json) | |[Audit usage of custom RBAC rules](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fa451c1ef-c6ca-483d-87ed-f49761e3ffb5) |Audit built-in roles such as 'Owner, Contributer, Reader' instead of custom RBAC roles, which are error prone. Using custom roles is treated as an exception and requires a rigorous review and threat modeling |Audit, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/General/Subscription_AuditCustomRBACRoles_Audit.json) |
-|[Cognitive Services accounts should have local authentication methods disabled](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F71ef260a-8f18-47b7-abcb-62d0673d94dc) |Disabling local authentication methods improves security by ensuring that Cognitive Services accounts require Azure Active Directory identities exclusively for authentication. Learn more at: [https://aka.ms/cs/auth](../../../cognitive-services/authentication.md). |Audit, Deny, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Cognitive%20Services/CognitiveServices_DisableLocalAuth_Audit.json) |
+|[Cognitive Services accounts should have local authentication methods disabled](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F71ef260a-8f18-47b7-abcb-62d0673d94dc) |Disabling local authentication methods improves security by ensuring that Cognitive Services accounts require Azure Active Directory identities exclusively for authentication. Learn more at: [https://aka.ms/cs/auth](../../../ai-services/authentication.md). |Audit, Deny, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Cognitive%20Services/CognitiveServices_DisableLocalAuth_Audit.json) |
|[Deprecated accounts should be removed from your subscription](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F6b1cbf55-e8b6-442f-ba4c-7246b6381474) |Deprecated accounts should be removed from your subscriptions. Deprecated accounts are accounts that have been blocked from signing in. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Azure%20Government/Security%20Center/ASC_RemoveDeprecatedAccounts_Audit.json) | |[Deprecated accounts with owner permissions should be removed from your subscription](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Febb62a0c-3560-49e1-89ed-27e074e9f8ad) |Deprecated accounts with owner permissions should be removed from your subscription. Deprecated accounts are accounts that have been blocked from signing in. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Azure%20Government/Security%20Center/ASC_RemoveDeprecatedAccountsWithOwnerPermissions_Audit.json) | |[External accounts with owner permissions should be removed from your subscription](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Ff8456c1c-aa66-4dfb-861a-25d127b775c9) |External accounts with owner permissions should be removed from your subscription in order to prevent unmonitored access. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Azure%20Government/Security%20Center/ASC_RemoveExternalAccountsWithOwnerPermissions_Audit.json) |
initiative definition.
|Name<br /><sub>(Azure portal)</sub> |Description |Effect(s) |Version<br /><sub>(GitHub)</sub> | ||||| |[An Azure Active Directory administrator should be provisioned for SQL servers](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F1f314764-cb73-4fc9-b863-8eca98ac36e9) |Audit provisioning of an Azure Active Directory administrator for your SQL server to enable Azure AD authentication. Azure AD authentication enables simplified permission management and centralized identity management of database users and other Microsoft services |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/SQL_DB_AuditServerADAdmins_Audit.json) |
-|[Cognitive Services accounts should have local authentication methods disabled](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F71ef260a-8f18-47b7-abcb-62d0673d94dc) |Disabling local authentication methods improves security by ensuring that Cognitive Services accounts require Azure Active Directory identities exclusively for authentication. Learn more at: [https://aka.ms/cs/auth](../../../cognitive-services/authentication.md). |Audit, Deny, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Cognitive%20Services/CognitiveServices_DisableLocalAuth_Audit.json) |
+|[Cognitive Services accounts should have local authentication methods disabled](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F71ef260a-8f18-47b7-abcb-62d0673d94dc) |Disabling local authentication methods improves security by ensuring that Cognitive Services accounts require Azure Active Directory identities exclusively for authentication. Learn more at: [https://aka.ms/cs/auth](../../../ai-services/authentication.md). |Audit, Deny, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Cognitive%20Services/CognitiveServices_DisableLocalAuth_Audit.json) |
|[Managed identity should be used in your API App](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fc4d441f8-f9d9-4a9e-9cef-e82117cb3eef) |Use a managed identity for enhanced authentication security |AuditIfNotExists, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/App%20Service/AppService_UseManagedIdentity_ApiApp_Audit.json) | |[Managed identity should be used in your Function App](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F0da106f2-4ca3-48e8-bc85-c638fe6aea8f) |Use a managed identity for enhanced authentication security |AuditIfNotExists, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/App%20Service/AppService_UseManagedIdentity_FunctionApp_Audit.json) | |[Managed identity should be used in your Web App](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F2b9ad585-36bc-4615-b300-fd4435808332) |Use a managed identity for enhanced authentication security |AuditIfNotExists, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/App%20Service/AppService_UseManagedIdentity_WebApp_Audit.json) |
initiative definition.
|[Azure Synapse workspaces should use customer-managed keys to encrypt data at rest](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Ff7d52b2d-e161-4dfa-a82b-55e564167385) |Use customer-managed keys to control the encryption at rest of the data stored in Azure Synapse workspaces. Customer-managed keys deliver double encryption by adding a second layer of encryption on top of the default encryption with service-managed keys. |Audit, Deny, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Synapse/SynapseWorkspaceCMK_Audit.json) | |[Bot Service should be encrypted with a customer-managed key](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F51522a96-0869-4791-82f3-981000c2c67f) |Azure Bot Service automatically encrypts your resource to protect your data and meet organizational security and compliance commitments. By default, Microsoft-managed encryption keys are used. For greater flexibility in managing keys or controlling access to your subscription, select customer-managed keys, also known as bring your own key (BYOK). Learn more about Azure Bot Service encryption: [https://docs.microsoft.com/azure/bot-service/bot-service-encryption](/azure/bot-service/bot-service-encryption). |disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Azure%20Government/Bot%20Service/BotService_CMKEnabled_Audit.json) | |[Both operating systems and data disks in Azure Kubernetes Service clusters should be encrypted by customer-managed keys](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F7d7be79c-23ba-4033-84dd-45e2a5ccdd67) |Encrypting OS and data disks using customer-managed keys provides more control and greater flexibility in key management. This is a common requirement in many regulatory and industry compliance standards. |Audit, Deny, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Kubernetes/AKS_CMK_Deny.json) |
-|[Cognitive Services accounts should enable data encryption with a customer-managed key](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F67121cc7-ff39-4ab8-b7e3-95b84dab487d) |Customer-managed keys are commonly required to meet regulatory compliance standards. Customer-managed keys enable the data stored in Cognitive Services to be encrypted with an Azure Key Vault key created and owned by you. You have full control and responsibility for the key lifecycle, including rotation and management. Learn more about customer-managed keys at [https://go.microsoft.com/fwlink/?linkid=2121321](../../../cognitive-services/encryption/cognitive-services-encryption-keys-portal.md). |Audit, Deny, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Cognitive%20Services/CognitiveServices_CustomerManagedKey_Audit.json) |
+|[Cognitive Services accounts should enable data encryption with a customer-managed key](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F67121cc7-ff39-4ab8-b7e3-95b84dab487d) |Customer-managed keys are commonly required to meet regulatory compliance standards. Customer-managed keys enable the data stored in Cognitive Services to be encrypted with an Azure Key Vault key created and owned by you. You have full control and responsibility for the key lifecycle, including rotation and management. Learn more about customer-managed keys at [https://go.microsoft.com/fwlink/?linkid=2121321](../../../ai-services/encryption/cognitive-services-encryption-keys-portal.md). |Audit, Deny, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Cognitive%20Services/CognitiveServices_CustomerManagedKey_Audit.json) |
|[Container registries should be encrypted with a customer-managed key](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F5b9159ae-1701-4a6f-9a7a-aa9c8ddd0580) |Use customer-managed keys to manage the encryption at rest of the contents of your registries. By default, the data is encrypted at rest with service-managed keys, but customer-managed keys are commonly required to meet regulatory compliance standards. Customer-managed keys enable the data to be encrypted with an Azure Key Vault key created and owned by you. You have full control and responsibility for the key lifecycle, including rotation and management. Learn more at [https://aka.ms/acr/CMK](../../../container-registry/tutorial-enable-customer-managed-keys.md). |Audit, Deny, Disabled |[1.1.2](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Container%20Registry/ACR_CMKEncryptionEnabled_Audit.json) | |[Event Hub namespaces should use a customer-managed key for encryption](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fa1ad735a-e96f-45d2-a7b2-9a4932cab7ec) |Azure Event Hubs supports the option of encrypting data at rest with either Microsoft-managed keys (default) or customer-managed keys. Choosing to encrypt data using customer-managed keys enables you to assign, rotate, disable, and revoke access to the keys that Event Hub will use to encrypt data in your namespace. Note that Event Hub only supports encryption with customer-managed keys for namespaces in dedicated clusters. |Audit, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Event%20Hub/EventHub_CustomerManagedKeyEnabled_Audit.json) | |[Logic Apps Integration Service Environment should be encrypted with customer-managed keys](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F1fafeaf6-7927-4059-a50a-8eb2a7a6f2b5) |Deploy into Integration Service Environment to manage encryption at rest of Logic Apps data using customer-managed keys. By default, customer data is encrypted with service-managed keys, but customer-managed keys are commonly required to meet regulatory compliance standards. Customer-managed keys enable the data to be encrypted with an Azure Key Vault key created and owned by you. You have full control and responsibility for the key lifecycle, including rotation and management. |Audit, Deny, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Azure%20Government/Logic%20Apps/LogicApps_ISEWithCustomerManagedKey_AuditDeny.json) |
initiative definition.
|Name<br /><sub>(Azure portal)</sub> |Description |Effect(s) |Version<br /><sub>(GitHub)</sub> | ||||| |[An Azure Active Directory administrator should be provisioned for SQL servers](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F1f314764-cb73-4fc9-b863-8eca98ac36e9) |Audit provisioning of an Azure Active Directory administrator for your SQL server to enable Azure AD authentication. Azure AD authentication enables simplified permission management and centralized identity management of database users and other Microsoft services |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/SQL_DB_AuditServerADAdmins_Audit.json) |
-|[Cognitive Services accounts should have local authentication methods disabled](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F71ef260a-8f18-47b7-abcb-62d0673d94dc) |Disabling local authentication methods improves security by ensuring that Cognitive Services accounts require Azure Active Directory identities exclusively for authentication. Learn more at: [https://aka.ms/cs/auth](../../../cognitive-services/authentication.md). |Audit, Deny, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Cognitive%20Services/CognitiveServices_DisableLocalAuth_Audit.json) |
+|[Cognitive Services accounts should have local authentication methods disabled](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F71ef260a-8f18-47b7-abcb-62d0673d94dc) |Disabling local authentication methods improves security by ensuring that Cognitive Services accounts require Azure Active Directory identities exclusively for authentication. Learn more at: [https://aka.ms/cs/auth](../../../ai-services/authentication.md). |Audit, Deny, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Cognitive%20Services/CognitiveServices_DisableLocalAuth_Audit.json) |
|[Managed identity should be used in your API App](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fc4d441f8-f9d9-4a9e-9cef-e82117cb3eef) |Use a managed identity for enhanced authentication security |AuditIfNotExists, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/App%20Service/AppService_UseManagedIdentity_ApiApp_Audit.json) | |[Managed identity should be used in your Function App](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F0da106f2-4ca3-48e8-bc85-c638fe6aea8f) |Use a managed identity for enhanced authentication security |AuditIfNotExists, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/App%20Service/AppService_UseManagedIdentity_FunctionApp_Audit.json) | |[Managed identity should be used in your Web App](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F2b9ad585-36bc-4615-b300-fd4435808332) |Use a managed identity for enhanced authentication security |AuditIfNotExists, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/App%20Service/AppService_UseManagedIdentity_WebApp_Audit.json) |
initiative definition.
|Name<br /><sub>(Azure portal)</sub> |Description |Effect(s) |Version<br /><sub>(GitHub)</sub> | ||||| |[An Azure Active Directory administrator should be provisioned for SQL servers](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F1f314764-cb73-4fc9-b863-8eca98ac36e9) |Audit provisioning of an Azure Active Directory administrator for your SQL server to enable Azure AD authentication. Azure AD authentication enables simplified permission management and centralized identity management of database users and other Microsoft services |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/SQL_DB_AuditServerADAdmins_Audit.json) |
-|[Cognitive Services accounts should have local authentication methods disabled](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F71ef260a-8f18-47b7-abcb-62d0673d94dc) |Disabling local authentication methods improves security by ensuring that Cognitive Services accounts require Azure Active Directory identities exclusively for authentication. Learn more at: [https://aka.ms/cs/auth](../../../cognitive-services/authentication.md). |Audit, Deny, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Cognitive%20Services/CognitiveServices_DisableLocalAuth_Audit.json) |
+|[Cognitive Services accounts should have local authentication methods disabled](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F71ef260a-8f18-47b7-abcb-62d0673d94dc) |Disabling local authentication methods improves security by ensuring that Cognitive Services accounts require Azure Active Directory identities exclusively for authentication. Learn more at: [https://aka.ms/cs/auth](../../../ai-services/authentication.md). |Audit, Deny, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Cognitive%20Services/CognitiveServices_DisableLocalAuth_Audit.json) |
|[Service Fabric clusters should only use Azure Active Directory for client authentication](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fb54ed75b-3e1a-44ac-a333-05ba39b99ff0) |Audit usage of client authentication only via Azure Active Directory in Service Fabric |Audit, Deny, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Service%20Fabric/ServiceFabric_AuditADAuth_Audit.json) | ### Identify unauthorized use of organizational systems.
initiative definition.
||||| |[An Azure Active Directory administrator should be provisioned for SQL servers](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F1f314764-cb73-4fc9-b863-8eca98ac36e9) |Audit provisioning of an Azure Active Directory administrator for your SQL server to enable Azure AD authentication. Azure AD authentication enables simplified permission management and centralized identity management of database users and other Microsoft services |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/SQL_DB_AuditServerADAdmins_Audit.json) | |[Audit usage of custom RBAC rules](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fa451c1ef-c6ca-483d-87ed-f49761e3ffb5) |Audit built-in roles such as 'Owner, Contributer, Reader' instead of custom RBAC roles, which are error prone. Using custom roles is treated as an exception and requires a rigorous review and threat modeling |Audit, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/General/Subscription_AuditCustomRBACRoles_Audit.json) |
-|[Cognitive Services accounts should have local authentication methods disabled](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F71ef260a-8f18-47b7-abcb-62d0673d94dc) |Disabling local authentication methods improves security by ensuring that Cognitive Services accounts require Azure Active Directory identities exclusively for authentication. Learn more at: [https://aka.ms/cs/auth](../../../cognitive-services/authentication.md). |Audit, Deny, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Cognitive%20Services/CognitiveServices_DisableLocalAuth_Audit.json) |
+|[Cognitive Services accounts should have local authentication methods disabled](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F71ef260a-8f18-47b7-abcb-62d0673d94dc) |Disabling local authentication methods improves security by ensuring that Cognitive Services accounts require Azure Active Directory identities exclusively for authentication. Learn more at: [https://aka.ms/cs/auth](../../../ai-services/authentication.md). |Audit, Deny, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Cognitive%20Services/CognitiveServices_DisableLocalAuth_Audit.json) |
|[Service Fabric clusters should only use Azure Active Directory for client authentication](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fb54ed75b-3e1a-44ac-a333-05ba39b99ff0) |Audit usage of client authentication only via Azure Active Directory in Service Fabric |Audit, Deny, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Service%20Fabric/ServiceFabric_AuditADAuth_Audit.json) | |[Service principals should be used to protect your subscriptions instead of management certificates](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F6646a0bd-e110-40ca-bb97-84fcee63c414) |Management certificates allow anyone who authenticates with them to manage the subscription(s) they are associated with. To manage subscriptions more securely, use of service principals with Resource Manager is recommended to limit the impact of a certificate compromise. |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Azure%20Government/Security%20Center/ASC_UseServicePrincipalToProtectSubscriptions.json) |
initiative definition.
|[Add system-assigned managed identity to enable Guest Configuration assignments on VMs with a user-assigned identity](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F497dff13-db2a-4c0f-8603-28fa3b331ab6) |This policy adds a system-assigned managed identity to virtual machines hosted in Azure that are supported by Guest Configuration and have at least one user-assigned identity but do not have a system-assigned managed identity. A system-assigned managed identity is a prerequisite for all Guest Configuration assignments and must be added to machines before using any Guest Configuration policy definitions. For more information on Guest Configuration, visit [https://aka.ms/gcpol](../../machine-configuration/overview.md). |modify |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Azure%20Government/Guest%20Configuration/GuestConfiguration_AddSystemIdentityWhenUser_Prerequisite.json) |
|[An Azure Active Directory administrator should be provisioned for SQL servers](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F1f314764-cb73-4fc9-b863-8eca98ac36e9) |Audit provisioning of an Azure Active Directory administrator for your SQL server to enable Azure AD authentication. Azure AD authentication enables simplified permission management and centralized identity management of database users and other Microsoft services |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/SQL_DB_AuditServerADAdmins_Audit.json) |
|[Audit Linux machines that have accounts without passwords](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Ff6ec09a3-78bf-4f8f-99dc-6c77182d0f99) |Requires that prerequisites are deployed to the policy assignment scope. For details, visit [https://aka.ms/gcpol](../../machine-configuration/overview.md). Machines are non-compliant if Linux machines that have accounts without passwords |AuditIfNotExists, Disabled |[1.2.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Azure%20Government/Guest%20Configuration/GuestConfiguration_LinuxPassword232_AINE.json) |
-|[Cognitive Services accounts should have local authentication methods disabled](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F71ef260a-8f18-47b7-abcb-62d0673d94dc) |Disabling local authentication methods improves security by ensuring that Cognitive Services accounts require Azure Active Directory identities exclusively for authentication. Learn more at: [https://aka.ms/cs/auth](../../../cognitive-services/authentication.md). |Audit, Deny, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Cognitive%20Services/CognitiveServices_DisableLocalAuth_Audit.json) |
+|[Cognitive Services accounts should have local authentication methods disabled](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F71ef260a-8f18-47b7-abcb-62d0673d94dc) |Disabling local authentication methods improves security by ensuring that Cognitive Services accounts require Azure Active Directory identities exclusively for authentication. Learn more at: [https://aka.ms/cs/auth](../../../ai-services/authentication.md). |Audit, Deny, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Cognitive%20Services/CognitiveServices_DisableLocalAuth_Audit.json) |
|[Deploy the Linux Guest Configuration extension to enable Guest Configuration assignments on Linux VMs](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F331e8ea8-378a-410f-a2e5-ae22f38bb0da) |This policy deploys the Linux Guest Configuration extension to Linux virtual machines hosted in Azure that are supported by Guest Configuration. The Linux Guest Configuration extension is a prerequisite for all Linux Guest Configuration assignments and must be deployed to machines before using any Linux Guest Configuration policy definition. For more information on Guest Configuration, visit [https://aka.ms/gcpol](../../machine-configuration/overview.md). |deployIfNotExists |[1.2.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Azure%20Government/Guest%20Configuration/GuestConfiguration_DeployExtensionLinux_Prerequisite.json) |
|[Managed identity should be used in your API App](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fc4d441f8-f9d9-4a9e-9cef-e82117cb3eef) |Use a managed identity for enhanced authentication security |AuditIfNotExists, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/App%20Service/AppService_UseManagedIdentity_ApiApp_Audit.json) |
|[Managed identity should be used in your Function App](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F0da106f2-4ca3-48e8-bc85-c638fe6aea8f) |Use a managed identity for enhanced authentication security |AuditIfNotExists, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/App%20Service/AppService_UseManagedIdentity_FunctionApp_Audit.json) |
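
For the Guest Configuration prerequisites listed in these rows, the `modify` policy adds a system-assigned identity alongside any existing user-assigned identities. The following is a minimal sketch of the resulting `identity` block on a `Microsoft.Compute/virtualMachines` resource; the VM name, `apiVersion`, and the user-assigned identity resource ID (all-zero subscription GUID) are placeholder assumptions, and the hardware, storage, OS, and network profiles a real VM requires are omitted for brevity.

```json
{
  "type": "Microsoft.Compute/virtualMachines",
  "apiVersion": "2023-03-01",
  "name": "contoso-vm",
  "location": "eastus",
  "identity": {
    "type": "SystemAssigned, UserAssigned",
    "userAssignedIdentities": {
      "/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/contoso-rg/providers/Microsoft.ManagedIdentity/userAssignedIdentities/contoso-uami": {}
    }
  }
}
```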
governance Gov Nist Sp 800 53 R4 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/samples/gov-nist-sp-800-53-r4.md
initiative definition.
|[A maximum of 3 owners should be designated for your subscription](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F4f11b553-d42e-4e3a-89be-32ca364cad4c) |It is recommended to designate up to 3 subscription owners in order to reduce the potential for breach by a compromised owner. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Azure%20Government/Security%20Center/ASC_DesignateLessThanXOwners_Audit.json) |
|[An Azure Active Directory administrator should be provisioned for SQL servers](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F1f314764-cb73-4fc9-b863-8eca98ac36e9) |Audit provisioning of an Azure Active Directory administrator for your SQL server to enable Azure AD authentication. Azure AD authentication enables simplified permission management and centralized identity management of database users and other Microsoft services |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/SQL_DB_AuditServerADAdmins_Audit.json) |
|[Audit usage of custom RBAC rules](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fa451c1ef-c6ca-483d-87ed-f49761e3ffb5) |Audit built-in roles such as 'Owner, Contributer, Reader' instead of custom RBAC roles, which are error prone. Using custom roles is treated as an exception and requires a rigorous review and threat modeling |Audit, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/General/Subscription_AuditCustomRBACRoles_Audit.json) |
-|[Cognitive Services accounts should have local authentication methods disabled](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F71ef260a-8f18-47b7-abcb-62d0673d94dc) |Disabling local authentication methods improves security by ensuring that Cognitive Services accounts require Azure Active Directory identities exclusively for authentication. Learn more at: [https://aka.ms/cs/auth](../../../cognitive-services/authentication.md). |Audit, Deny, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Cognitive%20Services/CognitiveServices_DisableLocalAuth_Audit.json) |
+|[Cognitive Services accounts should have local authentication methods disabled](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F71ef260a-8f18-47b7-abcb-62d0673d94dc) |Disabling local authentication methods improves security by ensuring that Cognitive Services accounts require Azure Active Directory identities exclusively for authentication. Learn more at: [https://aka.ms/cs/auth](../../../ai-services/authentication.md). |Audit, Deny, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Cognitive%20Services/CognitiveServices_DisableLocalAuth_Audit.json) |
|[Deprecated accounts should be removed from your subscription](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F6b1cbf55-e8b6-442f-ba4c-7246b6381474) |Deprecated accounts should be removed from your subscriptions. Deprecated accounts are accounts that have been blocked from signing in. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Azure%20Government/Security%20Center/ASC_RemoveDeprecatedAccounts_Audit.json) |
|[Deprecated accounts with owner permissions should be removed from your subscription](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Febb62a0c-3560-49e1-89ed-27e074e9f8ad) |Deprecated accounts with owner permissions should be removed from your subscription. Deprecated accounts are accounts that have been blocked from signing in. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Azure%20Government/Security%20Center/ASC_RemoveDeprecatedAccountsWithOwnerPermissions_Audit.json) |
|[External accounts with owner permissions should be removed from your subscription](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Ff8456c1c-aa66-4dfb-861a-25d127b775c9) |External accounts with owner permissions should be removed from your subscription in order to prevent unmonitored access. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Azure%20Government/Security%20Center/ASC_RemoveExternalAccountsWithOwnerPermissions_Audit.json) |
initiative definition.
|Name<br /><sub>(Azure portal)</sub> |Description |Effect(s) |Version<br /><sub>(GitHub)</sub> |
|||||
|[An Azure Active Directory administrator should be provisioned for SQL servers](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F1f314764-cb73-4fc9-b863-8eca98ac36e9) |Audit provisioning of an Azure Active Directory administrator for your SQL server to enable Azure AD authentication. Azure AD authentication enables simplified permission management and centralized identity management of database users and other Microsoft services |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/SQL_DB_AuditServerADAdmins_Audit.json) |
-|[Cognitive Services accounts should have local authentication methods disabled](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F71ef260a-8f18-47b7-abcb-62d0673d94dc) |Disabling local authentication methods improves security by ensuring that Cognitive Services accounts require Azure Active Directory identities exclusively for authentication. Learn more at: [https://aka.ms/cs/auth](../../../cognitive-services/authentication.md). |Audit, Deny, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Cognitive%20Services/CognitiveServices_DisableLocalAuth_Audit.json) |
+|[Cognitive Services accounts should have local authentication methods disabled](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F71ef260a-8f18-47b7-abcb-62d0673d94dc) |Disabling local authentication methods improves security by ensuring that Cognitive Services accounts require Azure Active Directory identities exclusively for authentication. Learn more at: [https://aka.ms/cs/auth](../../../ai-services/authentication.md). |Audit, Deny, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Cognitive%20Services/CognitiveServices_DisableLocalAuth_Audit.json) |
|[Service Fabric clusters should only use Azure Active Directory for client authentication](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fb54ed75b-3e1a-44ac-a333-05ba39b99ff0) |Audit usage of client authentication only via Azure Active Directory in Service Fabric |Audit, Deny, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Service%20Fabric/ServiceFabric_AuditADAuth_Audit.json) |

### Role-based Schemes
initiative definition.
|||||
|[An Azure Active Directory administrator should be provisioned for SQL servers](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F1f314764-cb73-4fc9-b863-8eca98ac36e9) |Audit provisioning of an Azure Active Directory administrator for your SQL server to enable Azure AD authentication. Azure AD authentication enables simplified permission management and centralized identity management of database users and other Microsoft services |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/SQL_DB_AuditServerADAdmins_Audit.json) |
|[Audit usage of custom RBAC rules](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fa451c1ef-c6ca-483d-87ed-f49761e3ffb5) |Audit built-in roles such as 'Owner, Contributer, Reader' instead of custom RBAC roles, which are error prone. Using custom roles is treated as an exception and requires a rigorous review and threat modeling |Audit, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/General/Subscription_AuditCustomRBACRoles_Audit.json) |
-|[Cognitive Services accounts should have local authentication methods disabled](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F71ef260a-8f18-47b7-abcb-62d0673d94dc) |Disabling local authentication methods improves security by ensuring that Cognitive Services accounts require Azure Active Directory identities exclusively for authentication. Learn more at: [https://aka.ms/cs/auth](../../../cognitive-services/authentication.md). |Audit, Deny, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Cognitive%20Services/CognitiveServices_DisableLocalAuth_Audit.json) |
+|[Cognitive Services accounts should have local authentication methods disabled](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F71ef260a-8f18-47b7-abcb-62d0673d94dc) |Disabling local authentication methods improves security by ensuring that Cognitive Services accounts require Azure Active Directory identities exclusively for authentication. Learn more at: [https://aka.ms/cs/auth](../../../ai-services/authentication.md). |Audit, Deny, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Cognitive%20Services/CognitiveServices_DisableLocalAuth_Audit.json) |
|[Service Fabric clusters should only use Azure Active Directory for client authentication](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fb54ed75b-3e1a-44ac-a333-05ba39b99ff0) |Audit usage of client authentication only via Azure Active Directory in Service Fabric |Audit, Deny, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Service%20Fabric/ServiceFabric_AuditADAuth_Audit.json) |
|[Service principals should be used to protect your subscriptions instead of management certificates](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F6646a0bd-e110-40ca-bb97-84fcee63c414) |Management certificates allow anyone who authenticates with them to manage the subscription(s) they are associated with. To manage subscriptions more securely, use of service principals with Resource Manager is recommended to limit the impact of a certificate compromise. |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Azure%20Government/Security%20Center/ASC_UseServicePrincipalToProtectSubscriptions.json) |
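
The definition IDs shown in these tables can be assigned directly at the scope you want to govern (for example, a subscription). The following is a minimal sketch of a policy assignment that applies the Cognitive Services local-authentication definition from the rows above with the `Deny` effect; the assignment name, `apiVersion`, and the assumption that the built-in exposes its effect through a parameter named `effect` are illustrative.

```json
{
  "type": "Microsoft.Authorization/policyAssignments",
  "apiVersion": "2022-06-01",
  "name": "deny-cs-local-auth",
  "properties": {
    "displayName": "Cognitive Services accounts should have local authentication methods disabled",
    "policyDefinitionId": "/providers/Microsoft.Authorization/policyDefinitions/71ef260a-8f18-47b7-abcb-62d0673d94dc",
    "parameters": {
      "effect": { "value": "Deny" }
    }
  }
}
```

The same pattern applies to the other built-ins in these tables; only the `policyDefinitionId` and the allowed effect values change.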
initiative definition.
|[Add system-assigned managed identity to enable Guest Configuration assignments on VMs with a user-assigned identity](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F497dff13-db2a-4c0f-8603-28fa3b331ab6) |This policy adds a system-assigned managed identity to virtual machines hosted in Azure that are supported by Guest Configuration and have at least one user-assigned identity but do not have a system-assigned managed identity. A system-assigned managed identity is a prerequisite for all Guest Configuration assignments and must be added to machines before using any Guest Configuration policy definitions. For more information on Guest Configuration, visit [https://aka.ms/gcpol](../../machine-configuration/overview.md). |modify |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Azure%20Government/Guest%20Configuration/GuestConfiguration_AddSystemIdentityWhenUser_Prerequisite.json) |
|[An Azure Active Directory administrator should be provisioned for SQL servers](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F1f314764-cb73-4fc9-b863-8eca98ac36e9) |Audit provisioning of an Azure Active Directory administrator for your SQL server to enable Azure AD authentication. Azure AD authentication enables simplified permission management and centralized identity management of database users and other Microsoft services |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/SQL_DB_AuditServerADAdmins_Audit.json) |
|[Audit Linux machines that have accounts without passwords](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Ff6ec09a3-78bf-4f8f-99dc-6c77182d0f99) |Requires that prerequisites are deployed to the policy assignment scope. For details, visit [https://aka.ms/gcpol](../../machine-configuration/overview.md). Machines are non-compliant if Linux machines that have accounts without passwords |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Azure%20Government/Guest%20Configuration/GuestConfiguration_LinuxPassword232_AINE.json) |
-|[Cognitive Services accounts should have local authentication methods disabled](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F71ef260a-8f18-47b7-abcb-62d0673d94dc) |Disabling local authentication methods improves security by ensuring that Cognitive Services accounts require Azure Active Directory identities exclusively for authentication. Learn more at: [https://aka.ms/cs/auth](../../../cognitive-services/authentication.md). |Audit, Deny, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Cognitive%20Services/CognitiveServices_DisableLocalAuth_Audit.json) |
+|[Cognitive Services accounts should have local authentication methods disabled](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F71ef260a-8f18-47b7-abcb-62d0673d94dc) |Disabling local authentication methods improves security by ensuring that Cognitive Services accounts require Azure Active Directory identities exclusively for authentication. Learn more at: [https://aka.ms/cs/auth](../../../ai-services/authentication.md). |Audit, Deny, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Cognitive%20Services/CognitiveServices_DisableLocalAuth_Audit.json) |
|[Deploy the Linux Guest Configuration extension to enable Guest Configuration assignments on Linux VMs](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F331e8ea8-378a-410f-a2e5-ae22f38bb0da) |This policy deploys the Linux Guest Configuration extension to Linux virtual machines hosted in Azure that are supported by Guest Configuration. The Linux Guest Configuration extension is a prerequisite for all Linux Guest Configuration assignments and must be deployed to machines before using any Linux Guest Configuration policy definition. For more information on Guest Configuration, visit [https://aka.ms/gcpol](../../machine-configuration/overview.md). |deployIfNotExists |[1.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Azure%20Government/Guest%20Configuration/GuestConfiguration_DeployExtensionLinux_Prerequisite.json) |
|[Managed identity should be used in your API App](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fc4d441f8-f9d9-4a9e-9cef-e82117cb3eef) |Use a managed identity for enhanced authentication security |AuditIfNotExists, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/App%20Service/AppService_UseManagedIdentity_ApiApp_Audit.json) |
|[Managed identity should be used in your Function App](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F0da106f2-4ca3-48e8-bc85-c638fe6aea8f) |Use a managed identity for enhanced authentication security |AuditIfNotExists, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/App%20Service/AppService_UseManagedIdentity_FunctionApp_Audit.json) |
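
As a concrete illustration of the managed-identity policies for App Service listed above, here is a minimal sketch of a function app declared with a system-assigned identity. The app name, plan resource ID (all-zero subscription GUID), and `apiVersion` are placeholder assumptions.

```json
{
  "type": "Microsoft.Web/sites",
  "apiVersion": "2022-09-01",
  "name": "contoso-function-app",
  "location": "eastus",
  "kind": "functionapp",
  "identity": { "type": "SystemAssigned" },
  "properties": {
    "serverFarmId": "/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/contoso-rg/providers/Microsoft.Web/serverfarms/contoso-plan",
    "httpsOnly": true
  }
}
```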
initiative definition.
|Name<br /><sub>(Azure portal)</sub> |Description |Effect(s) |Version<br /><sub>(GitHub)</sub> |
|||||
|[An Azure Active Directory administrator should be provisioned for SQL servers](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F1f314764-cb73-4fc9-b863-8eca98ac36e9) |Audit provisioning of an Azure Active Directory administrator for your SQL server to enable Azure AD authentication. Azure AD authentication enables simplified permission management and centralized identity management of database users and other Microsoft services |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/SQL_DB_AuditServerADAdmins_Audit.json) |
-|[Cognitive Services accounts should have local authentication methods disabled](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F71ef260a-8f18-47b7-abcb-62d0673d94dc) |Disabling local authentication methods improves security by ensuring that Cognitive Services accounts require Azure Active Directory identities exclusively for authentication. Learn more at: [https://aka.ms/cs/auth](../../../cognitive-services/authentication.md). |Audit, Deny, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Cognitive%20Services/CognitiveServices_DisableLocalAuth_Audit.json) |
+|[Cognitive Services accounts should have local authentication methods disabled](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F71ef260a-8f18-47b7-abcb-62d0673d94dc) |Disabling local authentication methods improves security by ensuring that Cognitive Services accounts require Azure Active Directory identities exclusively for authentication. Learn more at: [https://aka.ms/cs/auth](../../../ai-services/authentication.md). |Audit, Deny, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Cognitive%20Services/CognitiveServices_DisableLocalAuth_Audit.json) |
|[Managed identity should be used in your API App](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fc4d441f8-f9d9-4a9e-9cef-e82117cb3eef) |Use a managed identity for enhanced authentication security |AuditIfNotExists, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/App%20Service/AppService_UseManagedIdentity_ApiApp_Audit.json) |
|[Managed identity should be used in your Function App](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F0da106f2-4ca3-48e8-bc85-c638fe6aea8f) |Use a managed identity for enhanced authentication security |AuditIfNotExists, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/App%20Service/AppService_UseManagedIdentity_FunctionApp_Audit.json) |
|[Managed identity should be used in your Web App](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F2b9ad585-36bc-4615-b300-fd4435808332) |Use a managed identity for enhanced authentication security |AuditIfNotExists, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/App%20Service/AppService_UseManagedIdentity_WebApp_Audit.json) |
initiative definition.
|Name<br /><sub>(Azure portal)</sub> |Description |Effect(s) |Version<br /><sub>(GitHub)</sub> |
|||||
|[An Azure Active Directory administrator should be provisioned for SQL servers](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F1f314764-cb73-4fc9-b863-8eca98ac36e9) |Audit provisioning of an Azure Active Directory administrator for your SQL server to enable Azure AD authentication. Azure AD authentication enables simplified permission management and centralized identity management of database users and other Microsoft services |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/SQL_DB_AuditServerADAdmins_Audit.json) |
-|[Cognitive Services accounts should have local authentication methods disabled](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F71ef260a-8f18-47b7-abcb-62d0673d94dc) |Disabling local authentication methods improves security by ensuring that Cognitive Services accounts require Azure Active Directory identities exclusively for authentication. Learn more at: [https://aka.ms/cs/auth](../../../cognitive-services/authentication.md). |Audit, Deny, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Cognitive%20Services/CognitiveServices_DisableLocalAuth_Audit.json) |
+|[Cognitive Services accounts should have local authentication methods disabled](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F71ef260a-8f18-47b7-abcb-62d0673d94dc) |Disabling local authentication methods improves security by ensuring that Cognitive Services accounts require Azure Active Directory identities exclusively for authentication. Learn more at: [https://aka.ms/cs/auth](../../../ai-services/authentication.md). |Audit, Deny, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Cognitive%20Services/CognitiveServices_DisableLocalAuth_Audit.json) |
|[Managed identity should be used in your API App](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fc4d441f8-f9d9-4a9e-9cef-e82117cb3eef) |Use a managed identity for enhanced authentication security |AuditIfNotExists, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/App%20Service/AppService_UseManagedIdentity_ApiApp_Audit.json) |
|[Managed identity should be used in your Function App](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F0da106f2-4ca3-48e8-bc85-c638fe6aea8f) |Use a managed identity for enhanced authentication security |AuditIfNotExists, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/App%20Service/AppService_UseManagedIdentity_FunctionApp_Audit.json) |
|[Managed identity should be used in your Web App](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F2b9ad585-36bc-4615-b300-fd4435808332) |Use a managed identity for enhanced authentication security |AuditIfNotExists, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/App%20Service/AppService_UseManagedIdentity_WebApp_Audit.json) |
initiative definition.
|[Azure Synapse workspaces should use customer-managed keys to encrypt data at rest](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Ff7d52b2d-e161-4dfa-a82b-55e564167385) |Use customer-managed keys to control the encryption at rest of the data stored in Azure Synapse workspaces. Customer-managed keys deliver double encryption by adding a second layer of encryption on top of the default encryption with service-managed keys. |Audit, Deny, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Synapse/SynapseWorkspaceCMK_Audit.json) |
|[Bot Service should be encrypted with a customer-managed key](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F51522a96-0869-4791-82f3-981000c2c67f) |Azure Bot Service automatically encrypts your resource to protect your data and meet organizational security and compliance commitments. By default, Microsoft-managed encryption keys are used. For greater flexibility in managing keys or controlling access to your subscription, select customer-managed keys, also known as bring your own key (BYOK). Learn more about Azure Bot Service encryption: [https://docs.microsoft.com/azure/bot-service/bot-service-encryption](/azure/bot-service/bot-service-encryption). |disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Azure%20Government/Bot%20Service/BotService_CMKEnabled_Audit.json) |
|[Both operating systems and data disks in Azure Kubernetes Service clusters should be encrypted by customer-managed keys](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F7d7be79c-23ba-4033-84dd-45e2a5ccdd67) |Encrypting OS and data disks using customer-managed keys provides more control and greater flexibility in key management. This is a common requirement in many regulatory and industry compliance standards. |Audit, Deny, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Kubernetes/AKS_CMK_Deny.json) |
-|[Cognitive Services accounts should enable data encryption with a customer-managed key](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F67121cc7-ff39-4ab8-b7e3-95b84dab487d) |Customer-managed keys are commonly required to meet regulatory compliance standards. Customer-managed keys enable the data stored in Cognitive Services to be encrypted with an Azure Key Vault key created and owned by you. You have full control and responsibility for the key lifecycle, including rotation and management. Learn more about customer-managed keys at [https://go.microsoft.com/fwlink/?linkid=2121321](../../../cognitive-services/encryption/cognitive-services-encryption-keys-portal.md). |Audit, Deny, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Cognitive%20Services/CognitiveServices_CustomerManagedKey_Audit.json) |
+|[Cognitive Services accounts should enable data encryption with a customer-managed key](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F67121cc7-ff39-4ab8-b7e3-95b84dab487d) |Customer-managed keys are commonly required to meet regulatory compliance standards. Customer-managed keys enable the data stored in Cognitive Services to be encrypted with an Azure Key Vault key created and owned by you. You have full control and responsibility for the key lifecycle, including rotation and management. Learn more about customer-managed keys at [https://go.microsoft.com/fwlink/?linkid=2121321](../../../ai-services/encryption/cognitive-services-encryption-keys-portal.md). |Audit, Deny, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Cognitive%20Services/CognitiveServices_CustomerManagedKey_Audit.json) |
|[Container registries should be encrypted with a customer-managed key](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F5b9159ae-1701-4a6f-9a7a-aa9c8ddd0580) |Use customer-managed keys to manage the encryption at rest of the contents of your registries. By default, the data is encrypted at rest with service-managed keys, but customer-managed keys are commonly required to meet regulatory compliance standards. Customer-managed keys enable the data to be encrypted with an Azure Key Vault key created and owned by you. You have full control and responsibility for the key lifecycle, including rotation and management. Learn more at [https://aka.ms/acr/CMK](../../../container-registry/tutorial-enable-customer-managed-keys.md). |Audit, Deny, Disabled |[1.1.2](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Container%20Registry/ACR_CMKEncryptionEnabled_Audit.json) |
|[Event Hub namespaces should use a customer-managed key for encryption](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fa1ad735a-e96f-45d2-a7b2-9a4932cab7ec) |Azure Event Hubs supports the option of encrypting data at rest with either Microsoft-managed keys (default) or customer-managed keys. Choosing to encrypt data using customer-managed keys enables you to assign, rotate, disable, and revoke access to the keys that Event Hub will use to encrypt data in your namespace. Note that Event Hub only supports encryption with customer-managed keys for namespaces in dedicated clusters. |Audit, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Event%20Hub/EventHub_CustomerManagedKeyEnabled_Audit.json) |
|[IoT Hub device provisioning service data should be encrypted using customer-managed keys (CMK)](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F47031206-ce96-41f8-861b-6a915f3de284) |Use customer-managed keys to manage the encryption at rest of your IoT Hub device provisioning service. The data is automatically encrypted at rest with service-managed keys, but customer-managed keys (CMK) are commonly required to meet regulatory compliance standards. CMKs enable the data to be encrypted with an Azure Key Vault key created and owned by you. Learn more about CMK encryption at [https://aka.ms/dps/CMK](../../../iot-dps/iot-dps-customer-managed-keys.md). |Audit, Deny, Disabled |[1.0.0-preview](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Internet%20of%20Things/IoTDps_CMKEncryptionEnabled_AuditDeny.json) |
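
To illustrate the customer-managed key (CMK) policies above, the following is a minimal sketch of a Cognitive Services account configured for CMK encryption with an Azure Key Vault key, which is the pattern the diff lines in this section describe. The account name, `kind`, `sku`, key vault URI, key name, key version, and `apiVersion` are assumptions, and the account's managed identity must separately be granted access to the key in Key Vault for the configuration to work.

```json
{
  "type": "Microsoft.CognitiveServices/accounts",
  "apiVersion": "2022-12-01",
  "name": "contoso-cmk-account",
  "location": "eastus",
  "sku": { "name": "E0" },
  "kind": "Face",
  "identity": { "type": "SystemAssigned" },
  "properties": {
    "encryption": {
      "keySource": "Microsoft.KeyVault",
      "keyVaultProperties": {
        "keyVaultUri": "https://contoso-kv.vault.azure.net/",
        "keyName": "cs-cmk",
        "keyVersion": "00000000000000000000000000000000"
      }
    }
  }
}
```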
governance Nist Sp 800 53 R4 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/samples/nist-sp-800-53-r4.md
initiative definition.
|[A maximum of 3 owners should be designated for your subscription](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F4f11b553-d42e-4e3a-89be-32ca364cad4c) |It is recommended to designate up to 3 subscription owners in order to reduce the potential for breach by a compromised owner. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_DesignateLessThanXOwners_Audit.json) |
|[An Azure Active Directory administrator should be provisioned for SQL servers](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F1f314764-cb73-4fc9-b863-8eca98ac36e9) |Audit provisioning of an Azure Active Directory administrator for your SQL server to enable Azure AD authentication. Azure AD authentication enables simplified permission management and centralized identity management of database users and other Microsoft services |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/SQL_DB_AuditServerADAdmins_Audit.json) |
|[Audit usage of custom RBAC rules](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fa451c1ef-c6ca-483d-87ed-f49761e3ffb5) |Audit built-in roles such as 'Owner, Contributer, Reader' instead of custom RBAC roles, which are error prone. Using custom roles is treated as an exception and requires a rigorous review and threat modeling |Audit, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/General/Subscription_AuditCustomRBACRoles_Audit.json) |
-|[Cognitive Services accounts should have local authentication methods disabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F71ef260a-8f18-47b7-abcb-62d0673d94dc) |Disabling local authentication methods improves security by ensuring that Cognitive Services accounts require Azure Active Directory identities exclusively for authentication. Learn more at: [https://aka.ms/cs/auth](../../../cognitive-services/authentication.md). |Audit, Deny, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Cognitive%20Services/CognitiveServices_DisableLocalAuth_Audit.json) |
+|[Cognitive Services accounts should have local authentication methods disabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F71ef260a-8f18-47b7-abcb-62d0673d94dc) |Disabling local authentication methods improves security by ensuring that Cognitive Services accounts require Azure Active Directory identities exclusively for authentication. Learn more at: [https://aka.ms/cs/auth](../../../ai-services/authentication.md). |Audit, Deny, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Cognitive%20Services/CognitiveServices_DisableLocalAuth_Audit.json) |
|[Deprecated accounts should be removed from your subscription](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F6b1cbf55-e8b6-442f-ba4c-7246b6381474) |Deprecated accounts should be removed from your subscriptions. Deprecated accounts are accounts that have been blocked from signing in. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_RemoveDeprecatedAccounts_Audit.json) |
|[Deprecated accounts with owner permissions should be removed from your subscription](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Febb62a0c-3560-49e1-89ed-27e074e9f8ad) |Deprecated accounts with owner permissions should be removed from your subscription. Deprecated accounts are accounts that have been blocked from signing in. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_RemoveDeprecatedAccountsWithOwnerPermissions_Audit.json) |
|[External accounts with owner permissions should be removed from your subscription](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Ff8456c1c-aa66-4dfb-861a-25d127b775c9) |External accounts with owner permissions should be removed from your subscription in order to prevent unmonitored access. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_RemoveExternalAccountsWithOwnerPermissions_Audit.json) |
initiative definition.
|Name<br /><sub>(Azure portal)</sub> |Description |Effect(s) |Version<br /><sub>(GitHub)</sub> |
|||||
|[An Azure Active Directory administrator should be provisioned for SQL servers](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F1f314764-cb73-4fc9-b863-8eca98ac36e9) |Audit provisioning of an Azure Active Directory administrator for your SQL server to enable Azure AD authentication. Azure AD authentication enables simplified permission management and centralized identity management of database users and other Microsoft services |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/SQL_DB_AuditServerADAdmins_Audit.json) |
-|[Cognitive Services accounts should have local authentication methods disabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F71ef260a-8f18-47b7-abcb-62d0673d94dc) |Disabling local authentication methods improves security by ensuring that Cognitive Services accounts require Azure Active Directory identities exclusively for authentication. Learn more at: [https://aka.ms/cs/auth](../../../cognitive-services/authentication.md). |Audit, Deny, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Cognitive%20Services/CognitiveServices_DisableLocalAuth_Audit.json) |
+|[Cognitive Services accounts should have local authentication methods disabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F71ef260a-8f18-47b7-abcb-62d0673d94dc) |Disabling local authentication methods improves security by ensuring that Cognitive Services accounts require Azure Active Directory identities exclusively for authentication. Learn more at: [https://aka.ms/cs/auth](../../../ai-services/authentication.md). |Audit, Deny, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Cognitive%20Services/CognitiveServices_DisableLocalAuth_Audit.json) |
|[Microsoft Managed Control 1013 - Account Management \| Automated System Account Management](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F8fd7b917-d83b-4379-af60-51e14e316c61) |Microsoft implements this Access Control control |audit |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/MicrosoftManagedControl1013.json) |
|[Service Fabric clusters should only use Azure Active Directory for client authentication](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fb54ed75b-3e1a-44ac-a333-05ba39b99ff0) |Audit usage of client authentication only via Azure Active Directory in Service Fabric |Audit, Deny, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Service%20Fabric/ServiceFabric_AuditADAuth_Audit.json) |
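
For the SQL server Azure AD administrator policy that appears throughout these tables, the following is a minimal sketch of the child resource that provisions such an administrator on a logical server. The server name, login display name, and the placeholder object ID (`sid`) and tenant ID are assumptions; only the child resource named `ActiveDirectory` with `administratorType` set to `ActiveDirectory` is prescribed by the resource schema.

```json
{
  "type": "Microsoft.Sql/servers/administrators",
  "apiVersion": "2021-11-01",
  "name": "contoso-sql-server/ActiveDirectory",
  "properties": {
    "administratorType": "ActiveDirectory",
    "login": "SQL-Admins",
    "sid": "11111111-1111-1111-1111-111111111111",
    "tenantId": "22222222-2222-2222-2222-222222222222"
  }
}
```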
initiative definition.
|||||
|[An Azure Active Directory administrator should be provisioned for SQL servers](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F1f314764-cb73-4fc9-b863-8eca98ac36e9) |Audit provisioning of an Azure Active Directory administrator for your SQL server to enable Azure AD authentication. Azure AD authentication enables simplified permission management and centralized identity management of database users and other Microsoft services |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/SQL_DB_AuditServerADAdmins_Audit.json) |
|[Audit usage of custom RBAC rules](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fa451c1ef-c6ca-483d-87ed-f49761e3ffb5) |Audit built-in roles such as 'Owner, Contributer, Reader' instead of custom RBAC roles, which are error prone. Using custom roles is treated as an exception and requires a rigorous review and threat modeling |Audit, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/General/Subscription_AuditCustomRBACRoles_Audit.json) |
-|[Cognitive Services accounts should have local authentication methods disabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F71ef260a-8f18-47b7-abcb-62d0673d94dc) |Disabling local authentication methods improves security by ensuring that Cognitive Services accounts require Azure Active Directory identities exclusively for authentication. Learn more at: [https://aka.ms/cs/auth](../../../cognitive-services/authentication.md). |Audit, Deny, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Cognitive%20Services/CognitiveServices_DisableLocalAuth_Audit.json) |
+|[Cognitive Services accounts should have local authentication methods disabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F71ef260a-8f18-47b7-abcb-62d0673d94dc) |Disabling local authentication methods improves security by ensuring that Cognitive Services accounts require Azure Active Directory identities exclusively for authentication. Learn more at: [https://aka.ms/cs/auth](../../../ai-services/authentication.md). |Audit, Deny, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Cognitive%20Services/CognitiveServices_DisableLocalAuth_Audit.json) |
|[Microsoft Managed Control 1018 - Account Management \| Role-Based Schemes](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fc9121abf-e698-4ee9-b1cf-71ee528ff07f) |Microsoft implements this Access Control control |audit |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/MicrosoftManagedControl1018.json) |
|[Microsoft Managed Control 1019 - Account Management \| Role-Based Schemes](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F6a3ee9b2-3977-459c-b8ce-2db583abd9f7) |Microsoft implements this Access Control control |audit |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/MicrosoftManagedControl1019.json) |
|[Microsoft Managed Control 1020 - Account Management \| Role-Based Schemes](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F0b291ee8-3140-4cad-beb7-568c077c78ce) |Microsoft implements this Access Control control |audit |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/MicrosoftManagedControl1020.json) |
initiative definition.
|[An Azure Active Directory administrator should be provisioned for SQL servers](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F1f314764-cb73-4fc9-b863-8eca98ac36e9) |Audit provisioning of an Azure Active Directory administrator for your SQL server to enable Azure AD authentication. Azure AD authentication enables simplified permission management and centralized identity management of database users and other Microsoft services |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/SQL_DB_AuditServerADAdmins_Audit.json) |
|[Audit Linux machines that have accounts without passwords](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Ff6ec09a3-78bf-4f8f-99dc-6c77182d0f99) |Requires that prerequisites are deployed to the policy assignment scope. For details, visit [https://aka.ms/gcpol](../../machine-configuration/overview.md). Machines are non-compliant if Linux machines that have accounts without passwords |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Guest%20Configuration/GuestConfiguration_LinuxPassword232_AINE.json) |
|[Authentication to Linux machines should require SSH keys](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F630c64f9-8b6b-4c64-b511-6544ceff6fd6) |Although SSH itself provides an encrypted connection, using passwords with SSH still leaves the VM vulnerable to brute-force attacks. The most secure option for authenticating to an Azure Linux virtual machine over SSH is with a public-private key pair, also known as SSH keys. Learn more: [https://docs.microsoft.com/azure/virtual-machines/linux/create-ssh-keys-detailed](../../../virtual-machines/linux/create-ssh-keys-detailed.md). |AuditIfNotExists, Disabled |[2.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Guest%20Configuration/GuestConfiguration_LinuxNoPasswordForSSH_AINE.json) |
-|[Cognitive Services accounts should have local authentication methods disabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F71ef260a-8f18-47b7-abcb-62d0673d94dc) |Disabling local authentication methods improves security by ensuring that Cognitive Services accounts require Azure Active Directory identities exclusively for authentication. Learn more at: [https://aka.ms/cs/auth](../../../cognitive-services/authentication.md). |Audit, Deny, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Cognitive%20Services/CognitiveServices_DisableLocalAuth_Audit.json) |
+|[Cognitive Services accounts should have local authentication methods disabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F71ef260a-8f18-47b7-abcb-62d0673d94dc) |Disabling local authentication methods improves security by ensuring that Cognitive Services accounts require Azure Active Directory identities exclusively for authentication. Learn more at: [https://aka.ms/cs/auth](../../../ai-services/authentication.md). |Audit, Deny, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Cognitive%20Services/CognitiveServices_DisableLocalAuth_Audit.json) |
|[Deploy the Linux Guest Configuration extension to enable Guest Configuration assignments on Linux VMs](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F331e8ea8-378a-410f-a2e5-ae22f38bb0da) |This policy deploys the Linux Guest Configuration extension to Linux virtual machines hosted in Azure that are supported by Guest Configuration. The Linux Guest Configuration extension is a prerequisite for all Linux Guest Configuration assignments and must be deployed to machines before using any Linux Guest Configuration policy definition. For more information on Guest Configuration, visit [https://aka.ms/gcpol](../../machine-configuration/overview.md). |deployIfNotExists |[1.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Guest%20Configuration/GuestConfiguration_DeployExtensionLinux_Prerequisite.json) |
|[Managed identity should be used in your API App](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fc4d441f8-f9d9-4a9e-9cef-e82117cb3eef) |Use a managed identity for enhanced authentication security |AuditIfNotExists, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/App%20Service/AppService_UseManagedIdentity_ApiApp_Audit.json) |
|[Managed identity should be used in your Function App](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F0da106f2-4ca3-48e8-bc85-c638fe6aea8f) |Use a managed identity for enhanced authentication security |AuditIfNotExists, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/App%20Service/AppService_UseManagedIdentity_FunctionApp_Audit.json) |
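
To show what the SSH-key and password policies listed above look for on a Linux VM, here is a minimal sketch of the `osProfile` fragment of a `Microsoft.Compute/virtualMachines` resource, with password authentication disabled and a public key provisioned; the computer name, admin user name, and truncated `keyData` value are placeholder assumptions, and the other required VM profiles are omitted.

```json
{
  "osProfile": {
    "computerName": "contoso-linux-vm",
    "adminUsername": "azureuser",
    "linuxConfiguration": {
      "disablePasswordAuthentication": true,
      "ssh": {
        "publicKeys": [
          {
            "path": "/home/azureuser/.ssh/authorized_keys",
            "keyData": "ssh-rsa AAAA-example-placeholder-public-key"
          }
        ]
      }
    }
  }
}
```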
initiative definition.
|Name<br /><sub>(Azure portal)</sub> |Description |Effect(s) |Version<br /><sub>(GitHub)</sub> |
|||||
|[An Azure Active Directory administrator should be provisioned for SQL servers](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F1f314764-cb73-4fc9-b863-8eca98ac36e9) |Audit provisioning of an Azure Active Directory administrator for your SQL server to enable Azure AD authentication. Azure AD authentication enables simplified permission management and centralized identity management of database users and other Microsoft services |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/SQL_DB_AuditServerADAdmins_Audit.json) |
-|[Cognitive Services accounts should have local authentication methods disabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F71ef260a-8f18-47b7-abcb-62d0673d94dc) |Disabling local authentication methods improves security by ensuring that Cognitive Services accounts require Azure Active Directory identities exclusively for authentication. Learn more at: [https://aka.ms/cs/auth](../../../cognitive-services/authentication.md). |Audit, Deny, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Cognitive%20Services/CognitiveServices_DisableLocalAuth_Audit.json) |
+|[Cognitive Services accounts should have local authentication methods disabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F71ef260a-8f18-47b7-abcb-62d0673d94dc) |Disabling local authentication methods improves security by ensuring that Cognitive Services accounts require Azure Active Directory identities exclusively for authentication. Learn more at: [https://aka.ms/cs/auth](../../../ai-services/authentication.md). |Audit, Deny, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Cognitive%20Services/CognitiveServices_DisableLocalAuth_Audit.json) |
|[Managed identity should be used in your API App](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fc4d441f8-f9d9-4a9e-9cef-e82117cb3eef) |Use a managed identity for enhanced authentication security |AuditIfNotExists, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/App%20Service/AppService_UseManagedIdentity_ApiApp_Audit.json) | |[Managed identity should be used in your Function App](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F0da106f2-4ca3-48e8-bc85-c638fe6aea8f) |Use a managed identity for enhanced authentication security |AuditIfNotExists, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/App%20Service/AppService_UseManagedIdentity_FunctionApp_Audit.json) | |[Managed identity should be used in your Web App](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F2b9ad585-36bc-4615-b300-fd4435808332) |Use a managed identity for enhanced authentication security |AuditIfNotExists, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/App%20Service/AppService_UseManagedIdentity_WebApp_Audit.json) |
initiative definition.
|Name<br /><sub>(Azure portal)</sub> |Description |Effect(s) |Version<br /><sub>(GitHub)</sub> | ||||| |[An Azure Active Directory administrator should be provisioned for SQL servers](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F1f314764-cb73-4fc9-b863-8eca98ac36e9) |Audit provisioning of an Azure Active Directory administrator for your SQL server to enable Azure AD authentication. Azure AD authentication enables simplified permission management and centralized identity management of database users and other Microsoft services |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/SQL_DB_AuditServerADAdmins_Audit.json) |
-|[Cognitive Services accounts should have local authentication methods disabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F71ef260a-8f18-47b7-abcb-62d0673d94dc) |Disabling local authentication methods improves security by ensuring that Cognitive Services accounts require Azure Active Directory identities exclusively for authentication. Learn more at: [https://aka.ms/cs/auth](../../../cognitive-services/authentication.md). |Audit, Deny, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Cognitive%20Services/CognitiveServices_DisableLocalAuth_Audit.json) |
+|[Cognitive Services accounts should have local authentication methods disabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F71ef260a-8f18-47b7-abcb-62d0673d94dc) |Disabling local authentication methods improves security by ensuring that Cognitive Services accounts require Azure Active Directory identities exclusively for authentication. Learn more at: [https://aka.ms/cs/auth](../../../ai-services/authentication.md). |Audit, Deny, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Cognitive%20Services/CognitiveServices_DisableLocalAuth_Audit.json) |
|[Managed identity should be used in your API App](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fc4d441f8-f9d9-4a9e-9cef-e82117cb3eef) |Use a managed identity for enhanced authentication security |AuditIfNotExists, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/App%20Service/AppService_UseManagedIdentity_ApiApp_Audit.json) | |[Managed identity should be used in your Function App](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F0da106f2-4ca3-48e8-bc85-c638fe6aea8f) |Use a managed identity for enhanced authentication security |AuditIfNotExists, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/App%20Service/AppService_UseManagedIdentity_FunctionApp_Audit.json) | |[Managed identity should be used in your Web App](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F2b9ad585-36bc-4615-b300-fd4435808332) |Use a managed identity for enhanced authentication security |AuditIfNotExists, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/App%20Service/AppService_UseManagedIdentity_WebApp_Audit.json) |
initiative definition.
|[Azure Synapse workspaces should use customer-managed keys to encrypt data at rest](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Ff7d52b2d-e161-4dfa-a82b-55e564167385) |Use customer-managed keys to control the encryption at rest of the data stored in Azure Synapse workspaces. Customer-managed keys deliver double encryption by adding a second layer of encryption on top of the default encryption with service-managed keys. |Audit, Deny, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Synapse/SynapseWorkspaceCMK_Audit.json) | |[Bot Service should be encrypted with a customer-managed key](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F51522a96-0869-4791-82f3-981000c2c67f) |Azure Bot Service automatically encrypts your resource to protect your data and meet organizational security and compliance commitments. By default, Microsoft-managed encryption keys are used. For greater flexibility in managing keys or controlling access to your subscription, select customer-managed keys, also known as bring your own key (BYOK). Learn more about Azure Bot Service encryption: [https://docs.microsoft.com/azure/bot-service/bot-service-encryption](/azure/bot-service/bot-service-encryption). |disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Bot%20Service/BotService_CMKEnabled_Audit.json) | |[Both operating systems and data disks in Azure Kubernetes Service clusters should be encrypted by customer-managed keys](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F7d7be79c-23ba-4033-84dd-45e2a5ccdd67) |Encrypting OS and data disks using customer-managed keys provides more control and greater flexibility in key management. This is a common requirement in many regulatory and industry compliance standards. |Audit, Deny, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Kubernetes/AKS_CMK_Deny.json) |
-|[Cognitive Services accounts should enable data encryption with a customer-managed key](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F67121cc7-ff39-4ab8-b7e3-95b84dab487d) |Customer-managed keys are commonly required to meet regulatory compliance standards. Customer-managed keys enable the data stored in Cognitive Services to be encrypted with an Azure Key Vault key created and owned by you. You have full control and responsibility for the key lifecycle, including rotation and management. Learn more about customer-managed keys at [https://go.microsoft.com/fwlink/?linkid=2121321](../../../cognitive-services/encryption/cognitive-services-encryption-keys-portal.md). |Audit, Deny, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Cognitive%20Services/CognitiveServices_CustomerManagedKey_Audit.json) |
+|[Cognitive Services accounts should enable data encryption with a customer-managed key](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F67121cc7-ff39-4ab8-b7e3-95b84dab487d) |Customer-managed keys are commonly required to meet regulatory compliance standards. Customer-managed keys enable the data stored in Cognitive Services to be encrypted with an Azure Key Vault key created and owned by you. You have full control and responsibility for the key lifecycle, including rotation and management. Learn more about customer-managed keys at [https://go.microsoft.com/fwlink/?linkid=2121321](../../../ai-services/encryption/cognitive-services-encryption-keys-portal.md). |Audit, Deny, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Cognitive%20Services/CognitiveServices_CustomerManagedKey_Audit.json) |
|[Container registries should be encrypted with a customer-managed key](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F5b9159ae-1701-4a6f-9a7a-aa9c8ddd0580) |Use customer-managed keys to manage the encryption at rest of the contents of your registries. By default, the data is encrypted at rest with service-managed keys, but customer-managed keys are commonly required to meet regulatory compliance standards. Customer-managed keys enable the data to be encrypted with an Azure Key Vault key created and owned by you. You have full control and responsibility for the key lifecycle, including rotation and management. Learn more at [https://aka.ms/acr/CMK](../../../container-registry/tutorial-enable-customer-managed-keys.md). |Audit, Deny, Disabled |[1.1.2](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Container%20Registry/ACR_CMKEncryptionEnabled_Audit.json) | |[Event Hub namespaces should use a customer-managed key for encryption](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fa1ad735a-e96f-45d2-a7b2-9a4932cab7ec) |Azure Event Hubs supports the option of encrypting data at rest with either Microsoft-managed keys (default) or customer-managed keys. Choosing to encrypt data using customer-managed keys enables you to assign, rotate, disable, and revoke access to the keys that Event Hub will use to encrypt data in your namespace. Note that Event Hub only supports encryption with customer-managed keys for namespaces in dedicated clusters. |Audit, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Event%20Hub/EventHub_CustomerManagedKeyEnabled_Audit.json) | |[HPC Cache accounts should use customer-managed key for encryption](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F970f84d8-71b6-4091-9979-ace7e3fb6dbb) |Manage encryption at rest of Azure HPC Cache with customer-managed keys. By default, customer data is encrypted with service-managed keys, but customer-managed keys are commonly required to meet regulatory compliance standards. Customer-managed keys enable the data to be encrypted with an Azure Key Vault key created and owned by you. You have full control and responsibility for the key lifecycle, including rotation and management. |Audit, Disabled, Deny |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Storage/StorageCache_CMKEnabled.json) |
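As an illustration of how one of the built-in definitions listed above can be put to use, the following hedged sketch assigns the local-authentication definition (its GUID is taken from the table) at subscription scope with the Azure CLI. The assignment name and scope are placeholders, and the `effect` parameter is an assumption about how the definition is parameterized:

```azurecli
# Assign the "Cognitive Services accounts should have local authentication methods disabled"
# built-in definition (GUID from the table above) at subscription scope, using the Deny effect.
az policy assignment create \
  --name deny-cogsvc-local-auth \
  --policy 71ef260a-8f18-47b7-abcb-62d0673d94dc \
  --scope "/subscriptions/<subscription-id>" \
  --params '{ "effect": { "value": "Deny" } }'
```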
hdinsight Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/policy-reference.md
Title: Built-in policy definitions for Azure HDInsight description: Lists Azure Policy built-in policy definitions for Azure HDInsight. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 07/06/2023 Last updated : 07/18/2023
healthcare-apis Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/azure-api-for-fhir/policy-reference.md
Title: Built-in policy definitions for Azure API for FHIR description: Lists Azure Policy built-in policy definitions for Azure API for FHIR. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 07/06/2023 Last updated : 07/18/2023
healthcare-apis Overview Of Samples https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/iot/overview-of-samples.md
+
+ Title: The MedTech service scenario-based mappings samples - Azure Health Data Services
+description: Learn about the MedTech service scenario-based mappings samples.
+++++ Last updated : 07/17/2023+++
+# Overview of the MedTech service scenario-based mappings samples
+
+> [!NOTE]
+> [Fast Healthcare Interoperability Resources (FHIR&#174;)](https://www.hl7.org/fhir/) is an open healthcare specification.
++
+The [MedTech service](overview.md) scenario-based [samples](https://github.com/Azure-Samples/azure-health-data-services-samples/tree/main/samples/medtech-service-mappings) provide conforming and valid [device](overview-of-device-mapping.md) and [FHIR destination](overview-of-fhir-destination-mapping.md) mappings and test device messages. These samples can help you author and troubleshoot your own MedTech service mappings.
+
+## Sample resources
+
+Each MedTech service scenario-based sample contains the following resources:
+
+* Device mapping
+* FHIR destination mapping
+* README
+* Test device message(s)
+
+> [!TIP]
+> You can use the MedTech service [Mapping debugger](how-to-use-mapping-debugger.md) to help create, update, and troubleshoot the MedTech service device and FHIR destination mappings. The Mapping debugger enables you to easily view and make inline adjustments in real time, without ever having to leave the Azure portal. You can also use the Mapping debugger to upload test device messages and see how they'll look after being processed into normalized messages and transformed into FHIR Observations.
+
+## CalculatedContent
+
+[Conversions using functions](https://github.com/Azure-Samples/azure-health-data-services-samples/tree/main/samples/medtech-service-mappings/calculatedcontent/conversions-using-functions)
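+
+To explore these samples locally, you can clone the samples repository and open the CalculatedContent sample folder. This is a minimal sketch: the repository URL and folder path come from the links in this article, and the local directory name is simply what `git clone` creates by default.
+
+```bash
+# Clone the Azure Health Data Services samples repository, which hosts the MedTech service mappings samples.
+git clone https://github.com/Azure-Samples/azure-health-data-services-samples.git
+
+# Open the CalculatedContent "conversions using functions" sample, which contains the device mapping,
+# FHIR destination mapping, README, and test device message(s) described earlier.
+cd azure-health-data-services-samples/samples/medtech-service-mappings/calculatedcontent/conversions-using-functions
+```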
+
+## Next steps
+
+In this article, you learned about the MedTech service scenario-based mappings samples.
+
+* To learn about the MedTech service, see [What is MedTech service?](overview.md)
+
+* To learn about the MedTech service device data processing stages, see [Overview of the MedTech service device data processing stages](overview-of-device-data-processing-stages.md).
+
+* To learn about the different deployment methods for the MedTech service, see [Choose a deployment method for the MedTech service](deploy-choose-method.md).
+
+* For an overview of the MedTech service device mapping, see [Overview of the MedTech service device mapping](overview-of-device-mapping.md).
+
+* For an overview of the MedTech service FHIR destination mapping, see [Overview of the MedTech service FHIR destination mapping](overview-of-fhir-destination-mapping.md).
+
+FHIR&#174; is a registered trademark of Health Level Seven International, registered in the U.S. Trademark Office and is used with their permission.
iot-edge Quickstart Linux https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-edge/quickstart-linux.md
Title: Quickstart creates an Azure IoT Edge device on Linux | Microsoft Docs
-description: In this quickstart, learn how to create an IoT Edge device on Linux and then deploy prebuilt code remotely from the Azure portal.
+ Title: Quickstart creates an Azure IoT Edge device on Linux
+description: Learn how to create an IoT Edge device on Linux and then deploy prebuilt code remotely from the Azure portal.
Previously updated : 1/31/2023 Last updated : 07/18/2023
During the runtime configuration, you provide a device connection string. This i
This section uses an Azure Resource Manager template to create a new virtual machine and install the IoT Edge runtime on it. If you want to use your own Linux device instead, you can follow the installation steps in [Manually provision a single Linux IoT Edge device](how-to-provision-single-device-linux-symmetric.md), then return to this quickstart.
-Use the following CLI command to create your IoT Edge device based on the prebuilt [iotedge-vm-deploy](https://github.com/Azure/iotedge-vm-deploy/tree/1.4) template.
+Use the **Deploy to Azure** button or the CLI commands to create your IoT Edge device based on the prebuilt [iotedge-vm-deploy](https://github.com/Azure/iotedge-vm-deploy/tree/1.4) template.
+
+* Deploy using the IoT Edge Azure Resource Manager template.
+
+ [![Deploy to Azure](https://aka.ms/deploytoazurebutton)](https://portal.azure.com/#create/Microsoft.Template/uri/https%3A%2F%2Fraw.githubusercontent.com%2Fazure%2Fiotedge-vm-deploy%2Fmaster%2FedgeDeploy.json)
* For bash or Cloud Shell users, copy the following command into a text editor, replace the placeholder text with your information, then copy into your bash or Cloud Shell window:
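  A sketch of that deployment command is shown below. The template branch and file name are inferred from the links above, and the resource group, device name, and parameter names (`dnsLabelPrefix`, `adminUsername`, `deviceConnectionString`, `authenticationType`, `adminPasswordOrKey`) are illustrative placeholders you'd replace with your own values; the inner `az iot hub device-identity connection-string show` call assumes the azure-iot CLI extension is installed.

  ```azurecli
  az deployment group create \
    --resource-group IoTEdgeResources \
    --template-uri "https://raw.githubusercontent.com/Azure/iotedge-vm-deploy/1.4/edgeDeploy.json" \
    --parameters dnsLabelPrefix='<REPLACE_WITH_VM_NAME>' \
    --parameters adminUsername='azureUser' \
    --parameters deviceConnectionString=$(az iot hub device-identity connection-string show --device-id myEdgeDevice --hub-name '<REPLACE_WITH_HUB_NAME>' -o tsv) \
    --parameters authenticationType='password' \
    --parameters adminPasswordOrKey='<REPLACE_WITH_PASSWORD>'
  ```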
iot-edge Troubleshoot Common Errors https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-edge/troubleshoot-common-errors.md
Ensure that there's a route to the internet for the IP addresses assigned to thi
#### Symptoms
-The device has trouble starting modules defined in the deployment. Only the *edgeAgent* is running but continually reporting 'empty config file...'.
+* The device has trouble starting modules defined in the deployment. Only the *edgeAgent* is running and it continually reports *empty config file...*.
+
+* When you run `sudo iotedge check` on a device, it reports *Container engine is not configured with DNS server setting, which may impact connectivity to IoT Hub. Please see https://aka.ms/iotedge-prod-checklist-dns for best practices.*
#### Cause
iot-edge Tutorial Deploy Custom Vision https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-edge/tutorial-deploy-custom-vision.md
Title: Tutorial - Deploy Custom Vision classifier to a device using Azure IoT Ed
description: In this tutorial, learn how to make a computer vision model run as a container using Custom Vision and IoT Edge. - Last updated 07/13/2023
[!INCLUDE [iot-edge-version-all-supported](includes/iot-edge-version-all-supported.md)]
-Azure IoT Edge can make your IoT solution more efficient by moving workloads out of the cloud and to the edge. This capability lends itself well to services that process large amounts of data, like computer vision models. The [Custom Vision Service](../cognitive-services/custom-vision-service/overview.md) lets you build custom image classifiers and deploy them to devices as containers. Together, these two services enable you to find insights from images or video streams without having to transfer all of the data off site first. Custom Vision provides a classifier that compares an image against a trained model to generate insights.
+Azure IoT Edge can make your IoT solution more efficient by moving workloads out of the cloud and to the edge. This capability lends itself well to services that process large amounts of data, like computer vision models. [Azure AI Custom Vision](../ai-services/custom-vision-service/overview.md) lets you build custom image classifiers and deploy them to devices as containers. Together, these two services enable you to find insights from images or video streams without having to transfer all of the data off site first. Custom Vision provides a classifier that compares an image against a trained model to generate insights.
For example, Custom Vision on an IoT Edge device could determine whether a highway is experiencing higher or lower traffic than normal, or whether a parking garage has available parking spots in a row. These insights can be shared with another service to take action.
In this tutorial, you learn how to:
## Prerequisites
->[!TIP]
->This tutorial is a simplified version of the [Custom Vision and Azure IoT Edge on a Raspberry Pi 3](https://github.com/Azure-Samples/custom-vision-service-iot-edge-raspberry-pi) sample project. This tutorial was designed to run on a cloud VM and uses static images to train and test the image classifier, which is useful for someone just starting to evaluate Custom Vision on IoT Edge. The sample project uses physical hardware and sets up a live camera feed to train and test the image classifier, which is useful for someone who wants to try a more detailed, real-life scenario.
+> [!TIP]
+> This tutorial is a simplified version of the [Custom Vision and Azure IoT Edge on a Raspberry Pi 3](https://github.com/Azure-Samples/custom-vision-service-iot-edge-raspberry-pi) sample project. This tutorial was designed to run on a cloud VM and uses static images to train and test the image classifier, which is useful for someone just starting to evaluate Custom Vision on IoT Edge. The sample project uses physical hardware and sets up a live camera feed to train and test the image classifier, which is useful for someone who wants to try a more detailed, real-life scenario.
* Configure your environment for Linux container development by completing [Tutorial: Develop IoT Edge modules using Visual Studio Code](tutorial-develop-for-linux.md). After completing the tutorial, you should have the following prerequisites available in your development environment: * A free or standard-tier [IoT Hub](../iot-hub/iot-hub-create-through-portal.md) in Azure.
In this tutorial, you learn how to:
## Build an image classifier with Custom Vision
-To build an image classifier, you need to create a Custom Vision project and provide training images. For more information about the steps that you take in this section, see [How to build a classifier with Custom Vision](../cognitive-services/custom-vision-service/getting-started-build-a-classifier.md).
+To build an image classifier, you need to create a Custom Vision project and provide training images. For more information about the steps that you take in this section, see [How to build a classifier with Custom Vision](../ai-services/custom-vision-service/getting-started-build-a-classifier.md).
Once your image classifier is built and trained, you can export it as a Docker container and deploy it to an IoT Edge device.
iot-hub Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/policy-reference.md
Title: Built-in policy definitions for Azure IoT Hub description: Lists Azure Policy built-in policy definitions for Azure IoT Hub. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 07/06/2023 Last updated : 07/18/2023
key-vault Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/policy-reference.md
Title: Built-in policy definitions for Key Vault description: Lists Azure Policy built-in policy definitions for Key Vault. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 07/06/2023 Last updated : 07/18/2023
kinect-dk About Azure Kinect Dk https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/kinect-dk/about-azure-kinect-dk.md
The Speech SDK enables Azure-connected speech services.
>[!NOTE] >The Azure Kinect DK does not have speakers.
-For additional details and information, visit [Speech Service documentation](../cognitive-services/speech-service/index.yml).
+For additional details and information, visit [Speech Service documentation](../ai-services/speech-service/index.yml).
## Vision services
kinect-dk Access Mics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/kinect-dk/access-mics.md
keywords: kinect, azure, sensor, sdk, microphone, access mics, mic data
# Access Azure Kinect DK microphone input data
-The [Speech SDK quickstarts](../cognitive-services/speech-service/index.yml) provide examples of how to use the Azure Kinect DK microphone array in various programming languages.
+The [Speech SDK quickstarts](../ai-services/speech-service/index.yml) provide examples of how to use the Azure Kinect DK microphone array in various programming languages.
For example, see the **Recognize speech in C++ on Windows by using the Speech SDK** quickstart. The code is available [from GitHub](https://github.com/Azure-Samples/cognitive-services-speech-sdk/tree/master/quickstart/cpp). You can also access the microphone array through the Windows API. For details, see the following Windows documentation:
You can also review the [microphone array hardware specification](hardware-speci
## Next steps >[!div class="nextstepaction"]
->[Speech Services SDK](../cognitive-services/speech-service/index.yml)
+>[Speech Services SDK](../ai-services/speech-service/index.yml)
lab-services Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/lab-services/policy-reference.md
Title: Built-in policy definitions for Lab Services description: Lists Azure Policy built-in policy definitions for Azure Lab Services. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 07/06/2023 Last updated : 07/18/2023
lighthouse Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/lighthouse/samples/policy-reference.md
Title: Built-in policy definitions for Azure Lighthouse description: Lists Azure Policy built-in policy definitions for Azure Lighthouse. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 07/06/2023 Last updated : 07/18/2023
load-balancer Load Balancer Multiple Virtual Machine Scale Set https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/load-balancer-multiple-virtual-machine-scale-set.md
In this section, you'll learn how to attach your Virtual Machine Scale Sets be
az vmss update\ --resource-group <resource-group> \ --name <vmss-name> \
- --add virtualMachineProfile.networkProfile.networkInterfaceConfigurations[0].ipConfigurations[0].loadBalancerBackendAddressPools "{'id':'/subscriptions/<SubscriptionID>/resourceGroups/<Resource Group> /providers/Microsoft.Network/loadBalancers/<Load Balancer Name>/backendAddressPools/<Backend address pool name >}"
+ --add virtualMachineProfile.networkProfile.networkInterfaceConfigurations[0].ipConfigurations[0].loadBalancerBackendAddressPools "{'id':'/subscriptions/<SubscriptionID>/resourceGroups/<Resource Group> /providers/Microsoft.Network/loadBalancers/<Load Balancer Name>/backendAddressPools/<Backend address pool name >'}"
``` This example deploys a Virtual Machine Scale Set with the following defined values:
load-balancer Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/whats-new.md
You can also find the latest Azure Load Balancer updates and subscribe to the RS
| Type |Name |Description |Date added | | ||||
-| Feature | [ Azure's cross-region Load Balancer is now generally available ](https://azure.microsoft.com/updates/azure-s-crossregion-load-balancer-is-now-generally-available/) | Azure Load Balancer's Global tier is a cloud-native global network load balancing solution. With cross-region Load Balancer, you can distribute traffic across multiple Azure regions with ultra-low latency and high performance. Azure cross-region Load Balancer provides customers a static globally anycast IP address. Through this global IP address, you can easily add or remove regional deployments without interruption. Learn more about [cross-region load balancer](cross-region-overview.md) | July 2023 |
-
-Documentation: Learn more about cross-region load balancer
+| Feature | [ Azure's cross-region Load Balancer is now generally available ](https://azure.microsoft.com/updates/azure-s-crossregion-load-balancer-is-now-generally-available/) | Azure Load Balancer's Global tier is a cloud-native global network load balancing solution. With cross-region Load Balancer, you can distribute traffic across multiple Azure regions with ultra-low latency and high performance. Azure cross-region Load Balancer provides customers a static globally anycast IP address. Through this global IP address, you can easily add or remove regional deployments without interruption. Learn more about [cross-region load balancer](cross-region-overview.md) | July 2023 |
| Feature | [Inbound ICMPv6 pings and traceroute are now supported on Azure Load Balancer (General Availability)](https://azure.microsoft.com/updates/general-availability-inbound-icmpv6-pings-and-traceroute-are-now-supported-on-azure-load-balancer/) | Azure Load Balancer now supports ICMPv6 pings to its frontend and inbound traceroute support to both IPv4 and IPv6 frontends. Learn more about [how to test reachability of your load balancer](load-balancer-test-frontend-reachability.md). | June 2023 | | Feature | [Inbound ICMPv4 pings are now supported on Azure Load Balancer (General Availability)](https://azure.microsoft.com/updates/general-availability-inbound-icmpv4-pings-are-now-supported-on-azure-load-balancer/) | Azure Load Balancer now supports ICMPv4 pings to its frontend, enabling the ability to test reachability of your load balancer. Learn more about [how to test reachability of your load balancer](load-balancer-test-frontend-reachability.md). | May 2023 | | SKU | [Basic Load Balancer is retiring on September 30, 2025](https://azure.microsoft.com/updates/azure-basic-load-balancer-will-be-retired-on-30-september-2025-upgrade-to-standard-load-balancer/) | Basic Load Balancer will retire on 30 September 2025. Make sure to [migrate to Standard SKU](load-balancer-basic-upgrade-guidance.md) before this date. | September 2022 |
logic-apps Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/policy-reference.md
Title: Built-in policy definitions for Azure Logic Apps description: Lists Azure Policy built-in policy definitions for Azure Logic Apps. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 07/06/2023 Last updated : 07/18/2023 ms.suite: integration
machine-learning Text Ner https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/component-reference-v2/text-ner.md
This article describes a component in Azure Machine Learning designer.
Use this component to create a machine learning model that is based on the AutoML Text NER.
-Named Entity Recognition (NER) is one of the features offered by Azure Cognitive Service for Language. The NER feature can identify and categorize entities in unstructured text. For [more information on NER](../../cognitive-services/language-service/named-entity-recognition/overview.md)
+Named Entity Recognition (NER) is one of the features offered by Azure AI Language. The NER feature can identify and categorize entities in unstructured text. For [more information on NER](../../ai-services/language-service/named-entity-recognition/overview.md)
## How to configure
machine-learning Concept Automl Forecasting Deep Learning https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/concept-automl-forecasting-deep-learning.md
show_latex: true
This article focuses on the deep learning methods for time series forecasting in AutoML. Instructions and examples for training forecasting models in AutoML can be found in our [set up AutoML for time series forecasting](./how-to-auto-train-forecast.md) article.
-Deep learning has made a major impact in fields ranging from [language modeling](../cognitive-services/openai/concepts/models.md) to [protein folding](https://www.deepmind.com/research/highlighted-research/alphafold), among many others. Time series forecasting has likewise benefitted from recent advances in deep learning technology. For example, deep neural network (DNN) models feature prominently in the top performing models from the [fourth](https://www.uber.com/blog/m4-forecasting-competition/) and [fifth](https://www.sciencedirect.com/science/article/pii/S0169207021001874) iterations of the high-profile Makridakis forecasting competition.
+Deep learning has made a major impact in fields ranging from [language modeling](../ai-services/openai/concepts/models.md) to [protein folding](https://www.deepmind.com/research/highlighted-research/alphafold), among many others. Time series forecasting has likewise benefitted from recent advances in deep learning technology. For example, deep neural network (DNN) models feature prominently in the top performing models from the [fourth](https://www.uber.com/blog/m4-forecasting-competition/) and [fifth](https://www.sciencedirect.com/science/article/pii/S0169207021001874) iterations of the high-profile Makridakis forecasting competition.
In this article, we'll describe the structure and operation of the TCNForecaster model in AutoML to help you best apply the model to your scenario.
machine-learning Concept Designer https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/concept-designer.md
# What is Azure Machine Learning designer(v2)?
-Azure Machine Learning designer is a drag-and-drop UI interface to build pipeline in Azure Machine Learning.
+Azure Machine Learning designer is a drag-and-drop user interface for building machine learning pipelines in Azure Machine Learning workspaces.
-As shown in below GIF, you can build a pipeline visually by drag and drop your building blocks and connect them.
+As shown in the following GIF, you can build a pipeline visually by dragging and dropping building blocks and connecting them.
:::image type="content" source="./media/concept-designer/designer-drag-and-drop.gif" alt-text="GIF of building a pipeline in the designer." lightbox="./media/concept-designer/designer-drag-and-drop.gif":::
machine-learning Concept Onnx https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/concept-onnx.md
You can obtain ONNX models in several ways:
+ Train a new ONNX model in Azure Machine Learning (see examples at the bottom of this article) or by using [automated Machine Learning capabilities](concept-automated-ml.md#automl--onnx) + Convert existing model from another format to ONNX (see the [tutorials](https://github.com/onnx/tutorials)) + Get a pre-trained ONNX model from the [ONNX Model Zoo](https://github.com/onnx/models)
-+ Generate a customized ONNX model from [Azure Custom Vision service](../cognitive-services/custom-vision-service/index.yml)
++ Generate a customized ONNX model from [Azure AI Custom Vision service](../ai-services/custom-vision-service/index.yml) Many models including image classification, object detection, and text processing can be represented as ONNX models. If you run into an issue with a model that cannot be converted successfully, please file an issue in the GitHub of the respective converter that you used. You can continue using your existing format model until the issue is addressed.
machine-learning How To Configure Network Isolation With V2 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-configure-network-isolation-with-v2.md
The Azure CLI [extension v1 for machine learning](./v1/reference-azure-machine-l
> The `v1-legacy-mode` parameter is only available in version 1.41.0 or newer of the Azure CLI extension for machine learning v1 (`azure-cli-ml`). Use the `az version` command to view version information. ```azurecli
-az ml workspace update -g <myresourcegroup> -w <myworkspace> --v1-legacy-mode False
+az ml workspace update -g <myresourcegroup> -n <myworkspace> --v1-legacy-mode False
``` The return value of the `az ml workspace update` command may not show the updated value. To view the current state of the parameter, use the following command: ```azurecli
-az ml workspace show -g <myresourcegroup> -w <myworkspace> --query v1LegacyMode
+az ml workspace show -g <myresourcegroup> -n <myworkspace> --query v1LegacyMode
```
machine-learning How To Create Image Labeling Projects https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-create-image-labeling-projects.md
View and change details of your project. On this tab, you can:
### Vision Studio tab
-If your project was created from [Vision Studio](../cognitive-services/computer-vision/how-to/model-customization.md), you'll also see a **Vision Studio** tab. Select **Go to Vision Studio** to return to Vision Studio. Once you return to Vision Studio, you will be able to import your labeled data.
+If your project was created from [Vision Studio](../ai-services/computer-vision/how-to/model-customization.md), you'll also see a **Vision Studio** tab. Select **Go to Vision Studio** to return to Vision Studio. Once you return to Vision Studio, you will be able to import your labeled data.
### Access for labelers
You can export an image label as:
* An [Azure MLTable data asset](./how-to-mltable.md). :::moniker-end
-When you export a CSV or COCO file, a notification appears briefly when the file is ready to download. You'll also find the notification in the **Notification** section on the top bar:
+When you export a CSV or COCO file, a notification appears briefly when the file is ready to download. Select the **Download file** link to download your results. You'll also find the notification in the **Notification** section on the top bar:
Access exported Azure Machine Learning datasets and data assets in the **Data** section of Machine Learning. The data details page also provides sample code you can use to access your labels by using Python.
machine-learning How To Create Text Labeling Projects https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-create-text-labeling-projects.md
View and change details of your project. On this tab, you can:
### Language Studio tab
-If your project was created from [Language Studio](../cognitive-services/language-service/custom/azure-machine-learning-labeling.md), you'll also see a **Language Studio** tab.
+If your project was created from [Language Studio](../ai-services/language-service/custom/azure-machine-learning-labeling.md), you'll also see a **Language Studio** tab.
* If labeling is active in Language Studio, you can't also label in Azure Machine Learning. In that case, Language Studio is the only tab available. Select **View in Language Studio** to go to the active labeling project in Language Studio. From there, you can switch to labeling in Azure Machine Learning if you wish.
For **Text Named Entity Recognition** projects, you can export label data as:
* A CoNLL file. For this export, you'll also have to assign a compute resource. The export process runs offline and generates the file as part of an experiment run. Azure Machine Learning creates the CoNLL file in a folder inside*Labeling/export/conll*. :::moniker-end
-When you export a CSV or CoNLL file, a notification appears briefly when the file is ready to download. You'll also find the notification in the **Notification** section on the top bar:
+When you export a CSV or CoNLL file, a notification appears briefly when the file is ready to download. Select the **Download file** link to download your results. You'll also find the notification in the **Notification** section on the top bar:
Access exported Azure Machine Learning datasets and data assets in the **Data** section of Machine Learning. The data details page also provides sample code you can use to access your labels by using Python.
machine-learning How To Create Vector Index https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-create-vector-index.md
Once you have created a Vector Index, you can add it to a prompt flow from the p
1. On the top menu, select **More Tools** and select Vector Index Lookup
- :::image type="content" source="media/how-to-create-vector-index/new-vector-creation.png" alt-text="Screenshot showing the location of the More Tools button.":::
+ :::image type="content" source="media/how-to-create-vector-index/vector-lookup.png" alt-text="Screenshot showing the location of the More Tools button.":::
1. The Vector Index lookup tool gets added to the canvas. If you don't see the tool immediately, scroll to the bottom of the canvas.
machine-learning How To Deploy Online Endpoints https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-deploy-online-endpoints.md
Previously updated : 06/07/2023 Last updated : 07/17/2023 reviewer: msakande
The following table describes the key attributes of a deployment:
| Instance count | The number of instances to use for the deployment. Base the value on the workload you expect. For high availability, we recommend that you set the value to at least `3`. We reserve an extra 20% for performing upgrades. For more information, see [managed online endpoint quotas](how-to-manage-quotas.md#azure-machine-learning-managed-online-endpoints). | > [!NOTE]
-> The model and container image (as defined in Environment) can be referenced again at any time by the deployment when the instances behind the deployment go through security patches and/or other recovery operations. If you used a registered model or container image in Azure Container Registry for deployment and removed the model or the container image, the deployments relying on these assets can fail when reimaging happens. If you removed the model or the container image, ensure the dependent deployments are re-created or updated with alternative model or container image.
+> - The model and container image (as defined in Environment) can be referenced again at any time by the deployment when the instances behind the deployment go through security patches and/or other recovery operations. If you used a registered model or container image in Azure Container Registry for deployment and removed the model or the container image, the deployments relying on these assets can fail when reimaging happens. If you removed the model or the container image, ensure the dependent deployments are re-created or updated with alternative model or container image.
+> - The container registry that the environment refers to can be private only if the endpoint identity has the permission to access it via Azure Active Directory authentication and Azure RBAC. For the same reason, private Docker registries other than Azure Container Registry are not supported.
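If the environment references an image in a private Azure Container Registry, one way to grant the endpoint's system-assigned identity pull access is an Azure RBAC role assignment like the following sketch. The endpoint, workspace, registry, and resource group names are placeholders, and the `identity.principal_id` query path is an assumption about the shape of the CLI output.

```azurecli
# Get the principal ID of the endpoint's system-assigned managed identity
# (the identity.principal_id property path is assumed).
principal_id=$(az ml online-endpoint show --name my-endpoint \
  --resource-group my-rg --workspace-name my-ws \
  --query identity.principal_id --output tsv)

# Grant that identity AcrPull on the private registry so the deployment can pull its environment image.
acr_id=$(az acr show --name myregistry --resource-group my-rg --query id --output tsv)
az role assignment create --assignee "$principal_id" --role AcrPull --scope "$acr_id"
```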
# [Azure CLI](#tab/azure-cli)
machine-learning How To Use Openai Models In Azure Ml https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-use-openai-models-in-azure-ml.md
In this article, you learn how to discover, finetune and deploy Azure Open AI models at scale, using Azure Machine Learning. ## Prerequisites-- [You must have access](../cognitive-services/openai/overview.md#how-do-i-get-access-to-azure-openai) to the Azure Open AI service-- You must be in an Azure OpenAI service [supported region](../cognitive-services/openai/concepts/models.md#model-summary-table-and-region-availability)
+- [You must have access](../ai-services/openai/overview.md#how-do-i-get-access-to-azure-openai) to the Azure Open AI service
+- You must be in an Azure OpenAI service [supported region](../ai-services/openai/concepts/models.md#model-summary-table-and-region-availability)
## What is OpenAI Models in Azure Machine Learning? In recent years, advancements in AI have led to the rise of large foundation models that are trained on a vast quantity of data. These models can be easily adapted to a wide variety of applications across various industries. This emerging trend gives rise to a unique opportunity for enterprises to build and use these foundation models in their deep learning workloads. **OpenAI Models in AzureML** provides Azure Machine Learning native capabilities that enable customers to build and operationalize OpenAI models at scale: -- Accessing [Azure OpenAI](../cognitive-services/openai/overview.md) in Azure Machine Learning, made available in the Azure Machine Learning Model catalog
+- Accessing [Azure OpenAI](../ai-services/openai/overview.md) in Azure Machine Learning, made available in the Azure Machine Learning Model catalog
- Make connection with the Azure OpenAI service - Finetuning Azure OpenAI Models with Azure Machine Learning - Deploying Azure OpenAI Models with Azure Machine Learning to the Azure OpenAI service
In recent years, advancements in AI have led to the rise of large foundation mod
The model catalog (preview) in Azure Machine Learning studio is your starting point to explore various collections of foundation models. The Azure Open AI models collection is a collection of models, exclusively available on Azure. These models enable customers to access prompt engineering, finetuning, evaluation, and deployment capabilities for large language models available in Azure OpenAI Service. You can view the complete list of supported OpenAI models in the [model catalog](https://ml.azure.com/model/catalog), under the `Azure OpenAI Service` collection. > [!TIP]
->Supported OpenAI models are published to the AzureML Model Catalog. View a complete list of [Azure OpenAI models](../cognitive-services/openai/concepts/models.md).
+>Supported OpenAI models are published to the AzureML Model Catalog. View a complete list of [Azure OpenAI models](../ai-services/openai/concepts/models.md).
:::image type="content" source="./media/how-to-use-openai-models-in-azure-ml/model-catalog.png" lightbox="./media/how-to-use-openai-models-in-azure-ml/model-card.png" alt-text="Screenshot showing the Azure OpenAI models collection in the model catalog.":::
To deploy an Azure Open Model from Azure Machine Learning, in order to deploy an
1. Select the **Azure OpenAI** tab and find the deployment you created. When you select the deployment, you'll be redirected to the OpenAI resource that is linked to the deployment. > [!NOTE]
-> Azure Machine Learning will automatically deploy [all base Azure OpenAI models](../cognitive-services/openai/concepts/models.md) for you so you can using interact with the models when getting started.
+> Azure Machine Learning will automatically deploy [all base Azure OpenAI models](../ai-services/openai/concepts/models.md) for you so you can interact with the models when getting started.
## Finetune Azure OpenAI models using your own training data
You might receive any of the following errors when you try to deploy an Azure Op
- **Fix**: Azure OpenAI failed to create. This is due to Quota issues, make sure you have enough quota for the deployment. - **Failed to fetch Azure OpenAI deployments**
- - **Fix**: Unable to create the resource. Due to one of, the following reasons. You aren't in correct region, or you have exceeded the maximum limit of three Azure Open AI resources. You need to delete an existing Azure OpenAI resource or you need to make sure you created a workspace in one of the [supported regions](../cognitive-services/openai/concepts/models.md#model-summary-table-and-region-availability).
+ - **Fix**: Unable to create the resource due to one of the following reasons: you aren't in the correct region, or you have exceeded the maximum limit of three Azure OpenAI resources. You need to delete an existing Azure OpenAI resource, or make sure you created a workspace in one of the [supported regions](../ai-services/openai/concepts/models.md#model-summary-table-and-region-availability).
- **Failed to get Azure OpenAI resource**
- - **Fix**: Unable to create the resource. Due to one of, the following reasons. You aren't in correct region, or you have exceeded the maximum limit of three Azure Open AI resources. You need to delete an existing Azure OpenAI resource or you need to make sure you created a workspace in one of the [supported regions](../cognitive-services/openai/concepts/models.md#model-summary-table-and-region-availability).
+ - **Fix**: Unable to create the resource due to one of the following reasons: you aren't in the correct region, or you have exceeded the maximum limit of three Azure OpenAI resources. You need to delete an existing Azure OpenAI resource, or make sure you created a workspace in one of the [supported regions](../ai-services/openai/concepts/models.md#model-summary-table-and-region-availability).
- **Failed to get Azure OpenAI resource**
- - **Fix**: Unable to create the resource. Due to one of, the following reasons. You aren't in correct region, or you have exceeded the maximum limit of three Azure Open AI resources. You need to delete an existing Azure OpenAI resource or you need to make sure you created a workspace in one of the [supported regions](../cognitive-services/openai/concepts/models.md#model-summary-table-and-region-availability).
+ - **Fix**: Unable to create the resource due to one of the following reasons: you aren't in the correct region, or you have exceeded the maximum limit of three Azure OpenAI resources. You need to delete an existing Azure OpenAI resource, or make sure you created a workspace in one of the [supported regions](../ai-services/openai/concepts/models.md#model-summary-table-and-region-availability).
- **Model Not Deployable** - **Fix**: This usually happens while trying to deploy a GPT-4 model. Due to high demand you need to [apply for access to use GPT-4 models](https://learn.microsoft.com/azure/cognitive-services/openai/concepts/models#gpt-4-models).
machine-learning Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/policy-reference.md
Title: Built-in policy definitions for Azure Machine Learning description: Lists Azure Policy built-in policy definitions for Azure Machine Learning. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 07/06/2023 Last updated : 07/18/2023
mariadb Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mariadb/policy-reference.md
Previously updated : 07/06/2023 Last updated : 07/18/2023 # Azure Policy built-in definitions for Azure Database for MariaDB
migrate How To Upgrade Windows https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/how-to-upgrade-windows.md
This article describes how to upgrade Windows Server OS while migrating to Azure. Azure Migrate OS upgrade allows you to move from an older operating system to a newer one while keeping your settings, server roles, and data intact. You can move your on-premises server to Azure with an upgraded OS version of Windows Server using Windows upgrade. > [!NOTE]
-> This feature is currently available only for [VMWare agentless migration](tutorial-migrate-vmware.md).
+> This feature is currently available only for [VMware agentless migration](tutorial-migrate-vmware.md).
## Prerequisites
Windows Server 2019 | Windows Server 2022
To upgrade Windows during the test migration, follow these steps:
-1. On the **Get started** page > **Servers, databases and web apps**, select **Replicate**.
+1. Go to **Get started** > **Servers, databases and web apps**, and select **Replicate**.
A Start Replication job begins. When the Start Replication job finishes successfully, the machines begin their initial replication to Azure.
To upgrade Windows during the test migration, follow these steps:
After you've verified that the test migration works as expected, you can migrate the on-premises machines. To upgrade Windows during the migration, follow these steps:
-1. On the **Get started** page > **Servers, databases and web apps**, select **Replicate**. A Start Replication job begins.
+1. On the **Get started** page, in **Servers, databases and web apps**, select **Replicate**. A Start Replication job begins.
2. InΓÇ»**Replicating machines**, right-click the VM and selectΓÇ»**Migrate**. :::image type="content" source="./media/how-to-upgrade-windows/migration.png" alt-text="Screenshot displays the Migrate option.":::
migrate Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/policy-reference.md
Title: Built-in policy definitions for Azure Migrate description: Lists Azure Policy built-in policy definitions for Azure Migrate. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 07/06/2023 Last updated : 07/18/2023
mysql Concepts Version Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/concepts-version-policy.md
A: Some compatibility issues may arise due to changes in MySQL v8.0. It's import
__Q: What support is available if I encounter issues during the upgrade process?__
-A: If you have questions, get answers from community experts in [Microsoft Q&A](https://aka.ms/microsoft-azure-mysql-qa). If you have a support plan and you need technical help, create a [support request](https://portal.azure.com/#blade/Microsoft_Azure_Support/HelpAndSupportBlade/newsupportrequest). You can also reach out to the Azure Database for MySQL product team at mailto:AskAzureDBforMySQL@service.microsoft.com.
+A: If you have questions, get answers from community experts in [Microsoft Q&A](https://aka.ms/microsoft-azure-mysql-qa). If you have a support plan and you need technical help, create a [support request](https://portal.azure.com/#blade/Microsoft_Azure_Support/HelpAndSupportBlade/newsupportrequest). You can also email the [Azure Database for MySQL product team](mailto:AskAzureDBforMySQL@service.microsoft.com).
__Q: What will happen to my data during the upgrade?__
mysql Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/policy-reference.md
Previously updated : 07/06/2023 Last updated : 07/18/2023 # Azure Policy built-in definitions for Azure Database for MySQL
nat-gateway Tutorial Nat Gateway Load Balancer Internal Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/nat-gateway/tutorial-nat-gateway-load-balancer-internal-portal.md
# Tutorial: Integrate a NAT gateway with an internal load balancer using the Azure portal
-In this tutorial, you'll learn how to integrate a NAT gateway with an internal load balancer.
+In this tutorial, you learn how to integrate a NAT gateway with an internal load balancer.
By default, an Azure Standard Load Balancer is secure. Outbound connectivity is explicitly defined by enabling outbound SNAT (Source Network Address Translation).
In this tutorial, you learn how to:
An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
-## Create the virtual network
+Sign in to the [Azure portal](https://portal.azure.com) with your Azure account.
-In this section, you'll create a virtual network and subnet.
-1. Sign in to the [Azure portal](https://portal.azure.com).
-1. On the upper-left side of the screen, select **Create a resource > Networking > Virtual network** or search for **Virtual network** in the search box.
-
-2. Select **Create**.
-
-3. In **Create virtual network**, enter or select this information in the **Basics** tab:
-
- | **Setting** | **Value** |
- ||--|
- | **Project Details** | |
- | Subscription | Select your Azure subscription |
- | Resource Group | Select **Create new**. Enter **TutorIntLBNAT-rg**. </br> Select **OK**. |
- | **Instance details** | |
- | Name | Enter **myVNet** |
- | Region | Select **(US) East US** |
-
-4. Select the **IP Addresses** tab or select the **Next: IP Addresses** button at the bottom of the page.
-
-5. In the **IP Addresses** tab, enter this information:
-
- | Setting | Value |
- |--|-|
- | IPv4 address space | Enter **10.1.0.0/16** |
-
-6. Under **Subnet name**, select the word **default**.
-
-7. In **Edit subnet**, enter this information:
-
- | Setting | Value |
- |--|-|
- | Subnet name | Enter **myBackendSubnet** |
- | Subnet address range | Enter **10.1.0.0/24** |
-
-8. Select **Save**.
-
-9. Select the **Security** tab or select the **Next: Security** button at the bottom of the page.
-
-10. Under **BastionHost**, select **Enable**. Enter this information:
-
- | Setting | Value |
- |--|-|
- | Bastion name | Enter **myBastionHost** |
- | AzureBastionSubnet address space | Enter **10.1.1.0/24** |
- | Public IP Address | Select **Create new**. </br> For **Name**, enter **myBastionIP**. </br> Select **OK**. |
--
-11. Select the **Review + create** tab or select the **Review + create** button.
-
-12. Select **Create**.
-
-## Create load balancer
-
-In this section, you create a load balancer that load balances virtual machines.
-
-During the creation of the load balancer, you'll configure:
-
-* Frontend IP address
-* Backend pool
-* Inbound load-balancing rules
-
-1. In the search box at the top of the portal, enter **Load balancer**. Select **Load balancers** in the search results.
-
-2. In the **Load balancer** page, select **Create**.
-
-3. In the **Basics** tab of the **Create load balancer** page, enter, or select the following information:
-
- | Setting | Value |
- | | |
- | **Project details** | |
- | Subscription | Select your subscription. |
- | Resource group | Select **TutorIntLBNAT-rg**. |
- | **Instance details** | |
- | Name | Enter **myLoadBalancer** |
- | Region | Select **(US) East US**. |
- | SKU | Leave the default **Standard**. |
- | Type | Select **Internal**. |
-
-4. Select **Next: Frontend IP configuration** at the bottom of the page.
-
-5. In **Frontend IP configuration**, select **+ Add a frontend IP configuration**.
-
-6. Enter **LoadBalancerFrontend** in **Name**.
-
-7. Select **myVNet** in **Virtual network**.
-
-8. Select **myBackendSubnet** in **Subnet**.
-
-9. Select **Dynamic** for **Assignment**.
-
-10. Select **Zone-redundant** in **Availability zone**.
-
- > [!NOTE]
- > In regions with [Availability Zones](../availability-zones/az-overview.md?toc=%2fazure%2fvirtual-network%2ftoc.json#availability-zones), you have the option to select no-zone (default option), a specific zone, or zone-redundant. The choice will depend on your specific domain failure requirements. In regions without Availability Zones, this field won't appear. </br> For more information on availability zones, see [Availability zones overview](../availability-zones/az-overview.md).
-
-11. Select **Add**.
-
-12. Select **Next: Backend pools** at the bottom of the page.
-
-13. In the **Backend pools** tab, select **+ Add a backend pool**.
-
-14. Enter **myBackendPool** for **Name** in **Add backend pool**.
-
-15. Select **NIC** or **IP Address** for **Backend Pool Configuration**.
-
-16. Select **IPv4** or **IPv6** for **IP version**.
-
-17. Select **Add**.
-
-18. Select the **Next: Inbound rules** button at the bottom of the page.
-
-19. In **Load balancing rule** in the **Inbound rules** tab, select **+ Add a load balancing rule**.
-
-20. In **Add load balancing rule**, enter or select the following information:
-
- | Setting | Value |
- | - | -- |
- | Name | Enter **myHTTPRule** |
- | IP Version | Select **IPv4** or **IPv6** depending on your requirements. |
- | Frontend IP address | Select **LoadBalancerFrontend**. |
- | Backend pool | Select **myBackendPool**. |
- | Protocol | Select **TCP**. |
- | Port | Enter **80**. |
- | Backend port | Enter **80**. |
- | Health probe | Select **Create new**. </br> In **Name**, enter **myHealthProbe**. </br> Select **HTTP** in **Protocol**. </br> Leave the rest of the defaults, and select **OK**. |
- | Session persistence | Select **None**. |
- | Idle timeout (minutes) | Enter or select **15**. |
- | TCP reset | Select **Enabled**. |
- | Floating IP | Select **Disabled**. |
-
-21. Select **Add**.
-
-22. Select the blue **Review + create** button at the bottom of the page.
-
-23. Select **Create**.
-
-## Create virtual machines
-
-In this section, you'll create two VMs (**myVM1** and **myVM2**) in two different zones (**Zone 1** and **Zone 2**).
-
-These VMs are added to the backend pool of the load balancer that was created earlier.
-
-1. On the upper-left side of the portal, select **Create a resource** > **Compute** > **Virtual machine**.
-
-2. In **Create a virtual machine**, type or select the values in the **Basics** tab:
-
- | Setting | Value |
- |--|-|
- | **Project Details** | |
- | Subscription | Select your Azure subscription |
- | Resource Group | Select **TutorIntLBNAT-rg** |
- | **Instance details** | |
- | Virtual machine name | Enter **myVM1** |
- | Region | Select **(US) East US** |
- | Availability Options | Select **Availability zones** |
- | Availability zone | Select **1** |
- | Image | Select **Windows Server 2019 Datacenter** |
- | Azure Spot instance | Leave the default |
- | Size | Choose VM size or take default setting |
- | **Administrator account** | |
- | Username | Enter a username |
- | Password | Enter a password |
- | Confirm password | Reenter password |
- | **Inbound port rules** | |
- | Public inbound ports | Select **None** |
-
-3. Select the **Networking** tab, or select **Next: Disks**, then **Next: Networking**.
-
-4. In the Networking tab, select or enter:
-
- | Setting | Value |
- |-|-|
- | **Network interface** | |
- | Virtual network | **myVNet** |
- | Subnet | **myBackendSubnet** |
- | Public IP | Select **None**. |
- | NIC network security group | Select **Advanced**|
- | Configure network security group | Select **Create new**. </br> In the **Create network security group**, enter **myNSG** in **Name**. </br> Under **Inbound rules**, select **+Add an inbound rule**. </br> Under **Destination port ranges**, enter **80**. </br> Under **Priority**, enter **100**. </br> In **Name**, enter **myNSGRule** </br> Select **Add** </br> Select **OK** |
- | **Load balancing** |
- | Place this virtual machine behind an existing load-balancing solution? | Select the check box. |
- | **Load balancing settings** |
- | Load balancing options | Select **Azure load balancer** |
- | Select a load balancer | Select **myLoadBalancer** |
- | Select a backend pool | Select **myBackendPool** |
-
-5. Select **Review + create**.
-
-6. Review the settings, and then select **Create**.
-
-7. Follow the steps 1 to 6 to create a VM with the following values and all the other settings the same as **myVM1**:
-
- | Setting | VM 2 |
- | - | -- |
- | Name | **myVM2** |
- | Availability zone | **2** |
- | Network security group | Select the existing **myNSG**|
-
-## Create NAT gateway
-
-In this section, you'll create a NAT gateway and assign it to the subnet in the virtual network you created previously.
-
-1. On the upper-left side of the screen, select **Create a resource > Networking > NAT gateway** or search for **NAT gateway** in the search box.
-
-2. Select **Create**.
-
-3. In **Create network address translation (NAT) gateway**, enter or select this information in the **Basics** tab:
-
- | **Setting** | **Value** |
- ||--|
- | **Project Details** | |
- | Subscription | Select your Azure subscription. |
- | Resource Group | Select **TutorIntLBNAT-rg**. |
- | **Instance details** | |
- | Name | Enter **myNATGateway** |
- | Region | Select **(US) East US** |
- | Availability Zone | Select **None**. |
- | Idle timeout (minutes) | Enter **10**. |
-
-4. Select the **Outbound IP** tab, or select the **Next: Outbound IP** button at the bottom of the page.
-
-5. In the **Outbound IP** tab, enter or select the following information:
-
- | **Setting** | **Value** |
- | -- | |
- | Public IP addresses | Select **Create a new public IP address**. </br> In **Name**, enter **myNATgatewayIP**. </br> Select **OK**. |
-
-6. Select the **Subnet** tab, or select the **Next: Subnet** button at the bottom of the page.
-
-7. In the **Subnet** tab, select **myVNet** in the **Virtual network** pull-down.
-
-8. Check the box next to **myBackendSubnet**.
-
-9. Select the **Review + create** tab, or select the blue **Review + create** button at the bottom of the page.
-
-10. Select **Create**.
## Test NAT gateway
-In this section, we'll test the NAT gateway. We'll first discover the public IP of the NAT gateway. We'll then connect to the test virtual machine and verify the outbound connection through the NAT gateway.
+In this section, you test the NAT gateway. You first discover the public IP of the NAT gateway. You then connect to the test virtual machine and verify the outbound connection through the NAT gateway.
-1. Select **Resource groups** in the left-hand menu, select the **TutorIntLBNAT-rg** resource group, and then from the resources list, select **myNATgatewayIP**.
-
-2. Make note of the public IP address:
+1. In the search box at the top of the portal, enter **Public IP**. Select **Public IP addresses** in the search results.
- :::image type="content" source="./media/tutorial-nat-gateway-load-balancer-internal-portal/find-public-ip.png" alt-text="Screenshot of discover public IP address of NAT gateway." border="true":::
+1. Select **public-ip-nat**.
-3. Select **Resource groups** in the left-hand menu, select the **TutorIntLBNAT-rg** resource group, and then from the resources list, select **myVM1**.
+1. Make note of the public IP address:
-4. On the **Overview** page, select **Connect**, then **Bastion**.
+ :::image type="content" source="./media/quickstart-create-nat-gateway-portal/find-public-ip.png" alt-text="Screenshot of public IP address of NAT gateway." border="true":::
-5. Enter the username and password entered during VM creation.
+1. In the search box at the top of the portal, enter **Virtual machine**. Select **Virtual machines** in the search results.
-6. Open **Internet Explorer** on **myVM1**.
+1. Select **vm-1**.
-7. Enter **https://whatsmyip.com** in the address bar.
+1. On the **Overview** page, select **Connect**, then select the **Bastion** tab.
-8. Verify the IP address displayed matches the NAT gateway address you noted in the previous step:
+1. Select **Use Bastion**.
- :::image type="content" source="./media/tutorial-nat-gateway-load-balancer-internal-portal/my-ip.png" alt-text="Screenshot of Internet Explorer showing external outbound IP." border="true":::
+1. Enter the username and password entered during VM creation. Select **Connect**.
-## Clean up resources
+1. In the bash prompt, enter the following command:
-If you're not going to continue to use this application, delete
-the virtual network, virtual machine, and NAT gateway with the following steps:
+ ```bash
+ curl ifconfig.me
+ ```
-1. From the left-hand menu, select **Resource groups**.
+1. Verify the IP address returned by the command matches the public IP address of the NAT gateway.
-2. Select the **TutorIntLBNAT-rg** resource group.
+ ```output
+ azureuser@vm-1:~$ curl ifconfig.me
+ 20.7.200.36
+ ```
-3. Select **Delete resource group**.
+1. Close the bastion connection to **vm-1**.
-4. Enter **TutorIntLBNAT-rg** and select **Delete**.
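+
+If you prefer the Azure CLI over the portal for looking up the NAT gateway's public IP noted in the steps above, here's a minimal sketch; the resource group name is a placeholder for whichever group contains **public-ip-nat**:
+
+```azurecli
+az network public-ip show \
+    --resource-group <resource-group-name> \
+    --name public-ip-nat \
+    --query ipAddress \
+    --output tsv
+```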
## Next steps
nat-gateway Tutorial Nat Gateway Load Balancer Public Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/nat-gateway/tutorial-nat-gateway-load-balancer-public-portal.md
# Tutorial: Integrate a NAT gateway with a public load balancer using the Azure portal
-In this tutorial, you'll learn how to integrate a NAT gateway with a public load balancer.
+In this tutorial, you learn how to integrate a NAT gateway with a public load balancer.
By default, an Azure Standard Load Balancer is secure. Outbound connectivity is explicitly defined by enabling outbound SNAT (Source Network Address Translation). SNAT is enabled in a load-balancing rule or outbound rules.
The NAT gateway integration replaces the need for outbound rules for backend poo
In this tutorial, you learn how to: > [!div class="checklist"]
+> * Create a NAT gateway
> * Create an Azure Load Balancer > * Create two virtual machines for the backend pool of the Azure Load Balancer
-> * Create a NAT gateway
> * Validate outbound connectivity of the virtual machines in the load balancer backend pool ## Prerequisites An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
-## Create the virtual network
-
-In this section, you'll create a virtual network and subnet.
-
-1. In the search box at the top of the portal, enter **Virtual network**. Select **Virtual networks** in the search results.
-
-2. In **Virtual networks**, select **+ Create**.
-
-3. In **Create virtual network**, enter or select this information in the **Basics** tab:
-
- | **Setting** | **Value** |
- ||--|
- | **Project Details** | |
- | Subscription | Select your Azure subscription |
- | Resource Group | Select **Create new**. </br> In **Name** enter **TutorPubLBNAT-rg**. </br> Select **OK**. |
- | **Instance details** | |
- | Name | Enter **myVNet** |
- | Region | Select **(US) East US** |
-
-4. Select the **IP Addresses** tab or select the **Next: IP Addresses** button at the bottom of the page.
-
-5. In the **IP Addresses** tab, enter this information:
-
- | Setting | Value |
- |--|-|
- | IPv4 address space | Enter **10.1.0.0/16** |
-
-6. Under **Subnet name**, select the word **default**.
-
-7. In **Edit subnet**, enter this information:
-
- | Setting | Value |
- |--|-|
- | Subnet name | Enter **myBackendSubnet** |
- | Subnet address range | Enter **10.1.0.0/24** |
-
-8. Select **Save**.
-
-9. Select the **Security** tab or select the **Next: Security** button at the bottom of the page.
-
-10. Under **BastionHost**, select **Enable**. Enter this information:
-
- | Setting | Value |
- |--|-|
- | Bastion name | Enter **myBastionHost** |
- | AzureBastionSubnet address space | Enter **10.1.1.0/27** |
- | Public IP Address | Select **Create new**. </br> For **Name**, enter **myBastionIP**. </br> Select **OK**. |
--
-11. Select the **Review + create** tab or select the **Review + create** button.
-
-12. Select **Create**.
-
-## Create load balancer
-
-In this section, you'll create a zone redundant load balancer that load balances virtual machines. With zone-redundancy, one or more availability zones can fail and the data path survives as long as one zone in the region remains healthy.
-
-During the creation of the load balancer, you'll configure:
-
-* Frontend IP address
-* Backend pool
-* Inbound load-balancing rules
-
-1. In the search box at the top of the portal, enter **Load balancer**. Select **Load balancers** in the search results.
-
-2. In the **Load balancer** page, select **Create**.
-
-3. In the **Basics** tab of the **Create load balancer** page, enter, or select the following information:
-
- | Setting | Value |
- | | |
- | **Project details** | |
- | Subscription | Select your subscription. |
- | Resource group | Select **TutorPubLBNAT-rg**. |
- | **Instance details** | |
- | Name | Enter **myLoadBalancer** |
- | Region | Select **(US) East US**. |
- | SKU | Leave the default **Standard**. |
- | Type | Select **Public**. |
- | Tier | Leave the default **Regional**. |
--
-4. Select **Next: Frontend IP configuration** at the bottom of the page.
-
-5. In **Frontend IP configuration**, select **+ Add a frontend IP configuration**.
-
-6. Enter **LoadBalancerFrontend** in **Name**.
-
-7. Select **IPv4** or **IPv6** for the **IP version**.
-
- > [!NOTE]
- > IPv6 isn't currently supported with Routing Preference or Cross-region load-balancing (Global Tier).
-
-8. Select **IP address** for the **IP type**.
-
- > [!NOTE]
- > For more information on IP prefixes, see [Azure Public IP address prefix](../virtual-network/ip-services/public-ip-address-prefix.md).
-
-9. Select **Create new** in **Public IP address**.
-
-10. In **Add a public IP address**, enter **myPublicIP** for **Name**.
-
-11. Select **Zone-redundant** in **Availability zone**.
+## Sign in to Azure
- > [!NOTE]
- > In regions with [Availability Zones](../availability-zones/az-overview.md?toc=%2fazure%2fvirtual-network%2ftoc.json#availability-zones), you have the option to select no-zone (default option), a specific zone, or zone-redundant. The choice will depend on your specific domain failure requirements. In regions without Availability Zones, this field won't appear. </br> For more information on availability zones, see [Availability zones overview](../availability-zones/az-overview.md).
+Sign in to the [Azure portal](https://portal.azure.com) with your Azure account.
-12. Leave the default of **Microsoft Network** for **Routing preference**.
-13. Select **OK**.
-14. Select **Add**.
-
-15. Select **Next: Backend pools** at the bottom of the page.
-
-16. In the **Backend pools** tab, select **+ Add a backend pool**.
-
-17. Enter **myBackendPool** for **Name** in **Add backend pool**.
-
-18. Select **myVNet** in **Virtual network**.
-
-19. Select **NIC** or **IP Address** for **Backend Pool Configuration**.
-
-20. Select **IPv4** or **IPv6** for **IP version**.
-
-21. Select **Add**.
-
-22. Select the **Next: Inbound rules** button at the bottom of the page.
-
-23. In **Load balancing rule** in the **Inbound rules** tab, select **+ Add a load balancing rule**.
-
-24. In **Add load balancing rule**, enter or select the following information:
-
- | Setting | Value |
- | - | -- |
- | Name | Enter **myHTTPRule** |
- | IP Version | Select **IPv4** or **IPv6** depending on your requirements. |
- | Frontend IP address | Select **LoadBalancerFrontend**. |
- | Backend pool | Select **myBackendPool**. |
- | Protocol | Select **TCP**. |
- | Port | Enter **80**. |
- | Backend port | Enter **80**. |
- | Health probe | Select **Create new**. </br> In **Name**, enter **myHealthProbe**. </br> Select **HTTP** in **Protocol**. </br> Leave the rest of the defaults, and select **OK**. |
- | Session persistence | Select **None**. |
- | Idle timeout (minutes) | Enter or select **15**. |
- | TCP reset | Select **Enabled**. |
- | Floating IP | Select **Disabled**. |
- | Outbound source network address translation (SNAT) | Leave the default of **(Recommended) Use outbound rules to provide backend pool members access to the internet.** |
-
-25. Select **Add**.
-
-26. Select the blue **Review + create** button at the bottom of the page.
-
-27. Select **Create**.
-
-## Create virtual machines
-
-In this section, you'll create two VMs (**myVM1** and **myVM2**) in two different zones (**Zone 1** and **Zone 2**).
-
-These VMs are added to the backend pool of the load balancer that was created earlier.
-
-1. On the upper-left side of the portal, select **Create a resource** > **Compute** > **Virtual machine**.
-
-2. In **Create a virtual machine**, type or select the values in the **Basics** tab:
-
- | Setting | Value |
- |--|-|
- | **Project Details** | |
- | Subscription | Select your Azure subscription |
- | Resource Group | Select **TutorPubLBNAT-rg** |
- | **Instance details** | |
- | Virtual machine name | Enter **myVM1** |
- | Region | Select **(US) East US** |
- | Availability Options | Select **Availability zones** |
- | Availability zone | Select **1** |
- | Image | Select **Windows Server 2019 Datacenter** |
- | Azure Spot instance | Leave the default |
- | Size | Choose VM size or take default setting |
- | **Administrator account** | |
- | Username | Enter a username |
- | Password | Enter a password |
- | Confirm password | Reenter password |
- | **Inbound port rules** | |
- | Public inbound ports | Select **None** |
-
-3. Select the **Networking** tab, or select **Next: Disks**, then **Next: Networking**.
-
-4. In the Networking tab, select or enter:
-
- | Setting | Value |
- |-|-|
- | **Network interface** | |
- | Virtual network | **myVNet** |
- | Subnet | **myBackendSubnet** |
- | Public IP | Select **None**. |
- | NIC network security group | Select **Advanced**|
- | Configure network security group | Select **Create new**. </br> In the **Create network security group**, enter **myNSG** in **Name**. </br> Under **Inbound rules**, select **+Add an inbound rule**. </br> In **Destination port ranges**, enter **80**. </br> Under **Priority**, enter **100**. </br> In **Name**, enter **myNSGRule** </br> Select **Add** </br> Select **OK** |
- | **Load balancing** |
- | Place this virtual machine behind an existing load-balancing solution? | Select the check box.|
- | **Load balancing settings** |
- | Load-balancing options | Select **Azure load balancer** |
- | Select a load balancer | Select **myLoadBalancer** |
- | Select a backend pool | Select **myBackendPool** |
-
-5. Select **Review + create**.
-
-6. Review the settings, and then select **Create**.
-
-7. Follow the steps 1 to 7 to create a VM with the following values and all the other settings the same as **myVM1**:
-
- | Setting | VM 2 |
- | - | -- |
- | Name | **myVM2** |
- | Availability zone | **2** |
- | Network security group | Select the existing **myNSG** |
-
-## Create NAT gateway
-
-In this section, you'll create a NAT gateway and assign it to the subnet in the virtual network you created previously.
-
-1. On the upper-left side of the screen, select **Create a resource > Networking > NAT gateway** or search for **NAT gateway** in the search box.
-
-2. Select **Create**.
-
-3. In **Create network address translation (NAT) gateway**, enter or select this information in the **Basics** tab:
-
- | **Setting** | **Value** |
- ||--|
- | **Project Details** | |
- | Subscription | Select your Azure subscription. |
- | Resource Group | Select **TutorPubLBNAT-rg**. |
- | **Instance details** | |
- | Name | Enter **myNATGateway** |
- | Region | Select **(US) East US** |
- | Availability Zone | Select **None**. |
- | Idle timeout (minutes) | Enter **10**. |
-
-4. Select the **Outbound IP** tab, or select the **Next: Outbound IP** button at the bottom of the page.
-
-5. In the **Outbound IP** tab, enter or select the following information:
-
- | **Setting** | **Value** |
- | -- | |
- | Public IP addresses | Select **Create a new public IP address**. </br> In **Name**, enter **myNATgatewayIP**. </br> Select **OK**. |
-
-6. Select the **Subnet** tab, or select the **Next: Subnet** button at the bottom of the page.
-
-7. In the **Subnet** tab, select **myVNet** in the **Virtual network** pull-down.
-
-8. Check the box next to **myBackendSubnet**.
-
-9. Select the **Review + create** tab, or select the blue **Review + create** button at the bottom of the page.
-
-10. Select **Create**.
## Test NAT gateway
-In this section, we'll test the NAT gateway. We'll first discover the public IP of the NAT gateway. We'll then connect to the test virtual machine and verify the outbound connection through the NAT gateway.
+In this section, you test the NAT gateway. You first discover the public IP of the NAT gateway. You then connect to the test virtual machine and verify the outbound connection through the NAT gateway.
-1. Select **Resource groups** in the left-hand menu, select the **TutorPubLBNAT-rg** resource group, and then from the resources list, select **myNATgatewayIP**.
-
-2. Make note of the public IP address:
+1. In the search box at the top of the portal, enter **Public IP**. Select **Public IP addresses** in the search results.
- :::image type="content" source="./media/tutorial-nat-gateway-load-balancer-public-portal/find-public-ip.png" alt-text="Screenshot discover public IP address of NAT gateway." border="true":::
+1. Select **public-ip-nat**.
-3. Select **Resource groups** in the left-hand menu, select the **TutorPubLBNAT-rg** resource group, and then from the resources list, select **myVM1**.
+1. Make note of the public IP address:
-4. On the **Overview** page, select **Connect**, then **Bastion**.
+ :::image type="content" source="./media/quickstart-create-nat-gateway-portal/find-public-ip.png" alt-text="Screenshot of public IP address of NAT gateway." border="true":::
-5. Enter the username and password entered during VM creation.
+1. In the search box at the top of the portal, enter **Virtual machine**. Select **Virtual machines** in the search results.
-6. Open **Internet Explorer** on **myVM1**.
+1. Select **vm-1**.
-7. Enter **https://whatsmyip.com** in the address bar.
+1. On the **Overview** page, select **Connect**, then select the **Bastion** tab.
-8. Verify the IP address displayed matches the NAT gateway address you noted in the previous step:
+1. Select **Use Bastion**.
- :::image type="content" source="./media/tutorial-nat-gateway-load-balancer-public-portal/my-ip.png" alt-text="Screenshot Internet Explorer showing external outbound IP." border="true":::
+1. Enter the username and password entered during VM creation. Select **Connect**.
-## Clean up resources
+1. In the bash prompt, enter the following command:
-If you're not going to continue to use this application, delete
-the virtual network, virtual machine, and NAT gateway with the following steps:
+ ```bash
+ curl ifconfig.me
+ ```
-1. From the left-hand menu, select **Resource groups**.
+1. Verify the IP address returned by the command matches the public IP address of the NAT gateway.
-2. Select the **TutorPubLBNAT-rg** resource group.
+ ```output
+ azureuser@vm-1:~$ curl ifconfig.me
+ 20.7.200.36
+ ```
-3. Select **Delete resource group**.
+1. Close the bastion connection to **vm-1**.
-4. Enter **TutorPubLBNAT-rg** and select **Delete**.
## Next steps
networking Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/networking/policy-reference.md
Title: Built-in policy definitions for Azure networking services description: Lists Azure Policy built-in policy definitions for Azure networking services. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 07/06/2023 Last updated : 07/18/2023
notification-hubs Create Notification Hub Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/notification-hubs/create-notification-hub-portal.md
Previously updated : 05/03/2023 Last updated : 07/17/2023
notification-hubs Notification Hubs Baidu China Android Notifications Get Started https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/notification-hubs/notification-hubs-baidu-china-android-notifications-get-started.md
documentationcenter: android - ms.devlang: java mobile-baidu Previously updated : 03/18/2020 Last updated : 07/17/2023 -
-ms.lastreviewed: 06/19/2019
notification-hubs Notification Hubs High Availability https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/notification-hubs/notification-hubs-high-availability.md
Previously updated : 05/04/2023 Last updated : 07/17/2023
Last updated 05/04/2023
Android, Windows, etc.) from any back-end (cloud or on-premises). This article describes the configuration options to achieve the availability characteristics required by your solution. For more information about our SLA, see the [Notification Hubs SLA][]. > [!NOTE]
-> The following features are available in preview. If you are interested in using these features, contact your customer success manager at Microsoft, or create an Azure ticket, which will be triaged by the support team:
+> The following features are available in preview:
> > - Ability to edit your cross region disaster recovery options > - Availability zones > > If you're not participating in the preview, your failover region defaults to one of the [Azure paired regions][].
+>
+> Availability zones support incurs an additional cost on top of existing tier pricing. You won't be charged while the feature is in preview. Once it becomes generally available, you'll be billed automatically.
Notification Hubs offers two availability configurations:
operator-nexus Howto Install Cli Extensions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/operator-nexus/howto-install-cli-extensions.md
If you haven't already installed Azure CLI: [Install Azure CLI][installation-ins
``` -- Install and test the `networkcloud` CLI extension
+- Install and test the latest version of the `networkcloud` CLI extension
```azurecli az extension add --name networkcloud az networkcloud --help ```
+For a list of available versions, see [the extension release history][az-cli-networkcloud-cli-versions].
+
+To install a specific version of the `networkcloud` CLI extension, add the `--version` parameter to the command. For example, the following command installs version 0.4.1:
+
+```azurecli
+az extension add --name networkcloud --version 0.4.1
+```
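+
+If you'd rather check available versions from the CLI instead of the release history page, the Azure CLI can usually list the versions published to the extension index; a minimal sketch:
+
+```azurecli
+az extension list-versions --name networkcloud --output table
+```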
+ ## Install `managednetworkfabric` CLI extension - Remove any previously installed version of the extension
ssh 1.1.6
<!-- LINKS - External --> [installation-instruction]: https://aka.ms/azcli+
+[az-cli-networkcloud-cli-versions]: https://github.com/Azure/azure-cli-extensions/blob/main/src/networkcloud/HISTORY.rst
operator-nexus Howto Use Vm Console Service https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/operator-nexus/howto-use-vm-console-service.md
+
+ Title: "Azure Operator Nexus: VM Console Service"
+description: Learn how to use the VM console service.
++++ Last updated : 06/16/2023+++
+# Introduction to the Virtual Machine console service
+
+The Virtual Machine (VM) console service provides managed SSH access to a VM hosted in an Operator Nexus Instance. It relies on the Azure Private Link Service (PLS) to establish a private network connection between the user's network and the Azure Operator Nexus Cluster Manager's private network.
++
+For more information about the networking resources that enable private connectivity to an Operator Nexus Instance, see [Introduction to Azure Private Link](https://learn.microsoft.com/training/modules/introduction-azure-private-link/).
+
+This document provides guided instructions on how to use the VM Console service to establish an SSH session with a Virtual Machine in an Operator Nexus Instance.
+
+This guide helps you to:
+
+1. Establish secure private network connectivity between your network and the Cluster Manager's private network
+1. Create a Console resource with the `az networkcloud virtualmachine console` CLI command
+1. Initiate an SSH session with a Virtual Machine
+
+> [!NOTE]
+> In order to avoid passing the `--subscription` parameter to each Azure CLI command, execute the following command:
+>
+> ```azurecli
+> az account set --subscription "your-subscription-ID"
+> ```
+
+## Before you begin
+
+1. Install the latest version of the [appropriate CLI extensions](./howto-install-cli-extensions.md)
+
+## Setting variables
+
+To help set up the environment for SSHing to Virtual Machines, define these environment variables that are used throughout this guide.
+
+> [!NOTE]
+> These environment variable values do not reflect a real deployment and users MUST change them to match their environments.
+
+```bash
+ # Cluster Manager environment variables
+ export CM_CLUSTER_NAME="contoso-cluster-manager-1234"
+ export CM_MANAGED_RESOURCE_GROUP="contoso-cluster-manager-1234-rg"
+ export CM_EXTENDED_LOCATION="/subscriptions/subscriptionId/resourceGroups/resourceGroupName/providers/Microsoft.ExtendedLocation/customLocations/clusterManagerExtendedLocationName"
+ export CM_HOSTED_RESOURCES_RESOURCE_GROUP="my-contoso-console-rg"
+
+ # Your Console resource environment variables
+ export VIRTUAL_MACHINE_NAME="my-undercloud-vm"
+ export CONSOLE_PUBLIC_KEY="xxxx-xxxx-xxxxxx-xxxx"
+ export CONSOLE_EXPIRATION_TIME="2023-06-01T01:27:03.008Z"
+
+ # Your environment variables
+ export PRIVATE_ENDPOINT_RG="my-work-env-rg"
+ export PRIVATE_ENDPOINT_NAME="my-work-env-ple"
+ export PRIVATE_ENDPOINT_REGION="eastus"
+ export PRIVATE_ENDPOINT_VNET="my-work-env-ple-vnet"
+ export PRIVATE_ENDPOINT_SUBNET="my-work-env-ple-subnet"
+ export PRIVATE_ENDPOINT_CONNECTION_NAME="my-contoso-ple-pls-connection"
+```
+
+## Establishing Private Network Connectivity
+
+In order to establish a secure SSH session with a Virtual Machine, you need to establish private network connectivity between your network and the Cluster Manager's private network.
+
+This private network relies on the Azure Private Link Endpoint (PLE) and the Azure Private Link Service (PLS).
+
+The Cluster Manager automatically creates a PLS so that you can establish a private network connection between your network and the Cluster Manager's private network.
+
+This section provides a step-by-step guide to help you establish private network connectivity.
++
+1. You need to retrieve the resource identifier for the PLS associated with the VM Console service running in the Cluster Manager.
+
+ ```bash
+ # retrieve the infrastructure resource group of the AKS cluster
+ export pls_resource_group=$(az aks show --name ${CM_CLUSTER_NAME} -g ${CM_MANAGED_RESOURCE_GROUP} --query "nodeResourceGroup" -o tsv)
+
+ # retrieve the Private Link Service resource id
+ export pls_resourceid=$(az network private-link-service show \
+ --name console-pls \
+ --resource-group ${pls_resource_group} \
+ --query id \
+ --output tsv)
+ ```
+
+1. Create the PLE to establish a private and secure connection between your network and the Cluster Manager's private network. You need the PLS resource ID obtained in the previous step.
+
+ ```bash
+ az network private-endpoint create \
+ --connection-name "${PRIVATE_ENDPOINT_CONNECTION_NAME}" \
+ --name "${PRIVATE_ENDPOINT_NAME}" \
+ --private-connection-resource-id "${pls_resourceid}" \
+ --resource-group "${PRIVATE_ENDPOINT_RG}" \
+ --vnet-name "${PRIVATE_ENDPOINT_VNET}" \
+ --subnet "${PRIVATE_ENDPOINT_SUBNET}" \
+ --manual-request false
+ ```
+
+1. Retrieve the private IP address allocated to the PLE, which you need when establishing the `ssh` session.
+
+ ```bash
+ ple_interface_id=$(az network private-endpoint list --resource-group ${PRIVATE_ENDPOINT_RG} --query "[0].networkInterfaces[0].id" -o tsv)
+
+ sshmux_ple_ip=$(az network nic show --ids $ple_interface_id --query 'ipConfigurations[0].privateIPAddress' -o tsv)
+
+ echo "sshmux_ple_ip: ${sshmux_ple_ip}"
+ ```
+
+## Creating Console Resource
+
+The Console resource provides the information about the VM, such as the VM name, the public SSH key, and the expiration date for the SSH session.
+
+This section provides a step-by-step guide to help you create a Console resource using Azure CLI commands.
++
+1. Before you can establish an SSH session with a VM, create a ***Console*** resource in the Cluster Manager.
+
+ ```bash
+ az networkcloud virtualmachine console create \
+ --virtual-machine-name "${VIRTUAL_MACHINE_NAME}" \
+ --resource-group "${CM_HOSTED_RESOURCES_RESOURCE_GROUP}" \
+ --extended-location name="${CM_EXTENDED_LOCATION}" type="CustomLocation" \
+ --enabled True \
+ --key-data "${CONSOLE_PUBLIC_KEY}" \
+ [--expiration "${CONSOLE_EXPIRATION_TIME}"]
+ ```
+
+   If you omit the `--expiration` parameter, the Cluster Manager automatically sets the expiration to one day after the creation of the Console resource. The `expiration` date and time format **must** comply with RFC3339; otherwise, the creation of the Console resource fails.
+
+ > [!NOTE]
+ > For a complete synopsis for this command, invoke `az networkcloud console create --help`.
+
+1. Upon successful creation of the Console resource, retrieve the VM Access ID. You use this unique identifier as the user for the `ssh` session.
+
+ ```bash
+ virtual_machine_access_id=$(az networkcloud virtualmachine console show \
+ --virtual-machine-name "${VIRTUAL_MACHINE_NAME}" \
+ --resource-group "${CM_HOSTED_RESOURCES_RESOURCE_GROUP}" \
+ --query "virtualMachineAccessId" \
+ --output tsv)
+ ```
+
+> [!NOTE]
+> For a complete synopsis for this command, invoke `az networkcloud virtualmachine console show --help`.
+
+## Establishing an SSH session with a Virtual Machine
+
+At this point, you have all the information needed to establish an `ssh` session to the VM: `virtual_machine_access_id` and `sshmux_ple_ip`.
+
+The VM Console service is an `ssh` server that "relays" the session to the designated VM. The `sshmux_ple_ip` indirectly references the VM Console service, and the `virtual_machine_access_id` identifies the VM.
+
+> [!IMPORTANT]
+> The VM Console service listens to port `2222`, therefore you **must** specify this port number in the `ssh` command.
+>
+> ```bash
+> ssh [-i path-to-private-SSH-key] -p 2222 $virtual_machine_access_id@$sshmux_ple_ip
+> ```
++
+The VM Console service is designed to allow **only** one `ssh` session per Virtual Machine. When a new `ssh` session to a VM is established successfully, any existing session is closed.
+
+> [!IMPORTANT]
+> The private SSH key used for authenticating the `ssh` session (default: `$HOME/.ssh/id_rsa`) MUST match the public SSH key passed as a parameter when creating the Console resource.
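+
+As an illustration, here's a minimal sketch of generating a dedicated key pair and using it end to end; the key path `~/.ssh/console_vm_key` is a hypothetical choice, not a requirement of the service:
+
+```bash
+# Generate a dedicated key pair for console access (hypothetical path)
+ssh-keygen -t rsa -b 4096 -f ~/.ssh/console_vm_key -N ""
+
+# Pass the public key when creating or updating the Console resource
+export CONSOLE_PUBLIC_KEY="$(cat ~/.ssh/console_vm_key.pub)"
+
+# Authenticate the session with the matching private key
+ssh -i ~/.ssh/console_vm_key -p 2222 "${virtual_machine_access_id}@${sshmux_ple_ip}"
+```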
+
+## Updating Console Resource
+
+By updating the Console resource, you can disable `ssh` access to a given VM, change the expiration date/time, or rotate the public SSH key.
+
+```bash
+ az networkcloud virtualmachine console update \
+ --virtual-machine-name "${VIRTUAL_MACHINE_NAME}" \
+ --resource-group "${CM_HOSTED_RESOURCES_RESOURCE_GROUP}" \
+ [--enabled True | False] \
+ [--key-data "${CONSOLE_PUBLIC_KEY}"] \
+ [--expiration "${CONSOLE_EXPIRATION_TIME}"]
+```
+
+If you want to disable `ssh` access to a VM, update the Console resource with the parameter `--enabled False`. This update closes any active `ssh` session and prevents any subsequent sessions.
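+
+For example, a minimal sketch of disabling access, reusing the variables defined earlier in this guide:
+
+```bash
+az networkcloud virtualmachine console update \
+    --virtual-machine-name "${VIRTUAL_MACHINE_NAME}" \
+    --resource-group "${CM_HOSTED_RESOURCES_RESOURCE_GROUP}" \
+    --enabled False
+```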
+
+> [!NOTE]
+> Before anyone can create an `ssh` session to a VM, the corresponding Console resource **must** be set to `--enabled True`.
+
+When a Console resource's `--expiration` time passes, any `ssh` session corresponding to the Console resource is closed. You'll need to update the expiration time with a future value so that you can establish a new session.
+
+When you update the Console's public SSH key, the VM Console service closes any `ssh` session referenced by the Console resource. You have to provide the private SSH key that matches the new public key when you establish a new `ssh` session.
+
+## Cleaning Up (Optional)
+
+To clean up your VM Console environment setup, you need to delete the Console resource and your Private Link Endpoint.
+
+1. Deleting your Console resource
+
+ ```bash
+ az networkcloud virtualmachine console delete \
+ --virtual-machine-name "${VIRTUAL_MACHINE_NAME}" \
+ --resource-group "${CM_HOSTED_RESOURCES_RESOURCE_GROUP}"
+ ```
+
+1. Deleting the Private Link Endpoint
+
+ ```bash
+ az network private-endpoint delete \
+ --name ${PRIVATE_ENDPOINT_NAME} \
+ --resource-group ${PRIVATE_ENDPOINT_RG}
+ ```
operator-nexus Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/operator-nexus/overview.md
Operator Nexus meets operators' security, resiliency, observability, and perform
The platform seamlessly integrates compute, network, and storage. Operator Nexus is self service and uses the Azure portal, CLI, SDKs, and other tools to interact with the platform.
-<! IMG ![Operator Nexus HL overview diagram](Docs/media/hl-architecture.png) IMG >
:::image type="content" source="media/hl-architecture.png" alt-text="Figure of Operator Nexus overview."::: Figure: Operator Nexus Overview
Operator Nexus utilizes a curated and certified hardware Bill of Materials (BOM)
The service that manages the Operator Nexus infrastructure is hosted in Azure. Operators can choose an Azure region that supports Operator Nexus for any on-premises Operator Nexus instance. The diagram illustrates the architecture of the Operator Nexus service.
-<! IMG ![How Operator Nexus works diagram](Docs/media/architecture-overview.png) IMG >
:::image type="content" source="media/architecture-overview.png" alt-text="Screenshot of how Operator Nexus works."::: Figure: How Operator Nexus works
One important component of the service is Cluster Manager.
### Network Fabric Automation
-Operator Nexus includes [Network Fabric Automation (NFA)](./howto-configure-network-fabric-controller.md) which enables operators to build, operate and manage carrier grade network fabrics. The reliable and distributed cloud services model supports the operators' telco network functions. Operators have the ability to interact with Operator Nexus to provision the network fabric via Zero-Touch Provisioning (ZTP), as well as perform complex network implementations via a workflow driven, API model.
+Operator Nexus includes [Network Fabric Automation (NFA)](./howto-configure-network-fabric-controller.md), which enables operators to build, operate, and manage carrier-grade network fabrics. The reliable and distributed cloud services model supports the operators' telco network functions. Operators can interact with Operator Nexus to provision the network fabric via Zero-Touch Provisioning (ZTP) and perform complex network implementations via a workflow-driven API model.
### Network Packet Broker Network Packet Broker (NPB) is an integral part of the network fabric in Operator Nexus. NPB enables multiple scenarios from network performance monitoring to security intrusion detection. Operators can monitor every single packet in Operator Nexus and replicate it. They can apply packet filters dynamically and send filtered packets to multiple destinations for further processing.
-### Azure Hybrid Kubernetes Service
+### Nexus Kubernetes
-Azure Kubernetes Service (AKS) is a managed Kubernetes service on Azure. It lets users focus on developing and deploying their workloads while letting Azure handle the management and operational overheads. In [AKS-Hybrid](/azure/aks/hybrid/) the Kubernetes cluster is deployed on-premises. Azure still performs the traditional operational management activities such as updates, certificate rotations, etc.
+Nexus Kubernetes is an Operator Nexus version of Azure Kubernetes Service (AKS) for on-premises use. It's optimized to automate the creation of containers that run tenant network function workloads. A Nexus Kubernetes cluster is deployed on-premises, and the traditional operational management activities (CRUD) are managed via Azure. See [Nexus Kubernetes](./concepts-nexus-kubernetes-cluster.md) to learn more.
### Network functions virtualization infrastructure capabilities
Log Analytics has a rich analytical tool-set that operators can use for troubles
## Next steps * Learn more about Operator Nexus [resource models](./concepts-resource-types.md)
-* Review [Operator Nexus deployment prerequisites and steps](./howto-azure-operator-nexus-prerequisites.md)
+* Review [Operator Nexus deployment prerequisites and steps](./howto-azure-operator-nexus-prerequisites.md)
+* Learn [how to deploy a Nexus Kubernetes cluster](./quickstarts-kubernetes-cluster-deployment-bicep.md)
postgresql Concepts Data Encryption https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/concepts-data-encryption.md
Some of the reasons why server state can become *Inaccessible* are:
- If you revoke the Key Vault's list, get, wrapKey, and unwrapKey access policies from the [managed identity](../../active-directory/managed-identities-azure-resources/how-manage-user-assigned-managed-identities.md) that is used to retrieve a key from KeyVault, the Azure Database for PostgreSQL- Flexible Server will be unable to access the key and will move to *Inaccessible* state. [Add required access policies](../../key-vault/general/assign-access-policy.md) to the identity in KeyVault. - If you set up overly restrictive Azure KeyVault firewall rules that cause Azure Database for PostgreSQL- Flexible Server inability to communicate with Azure KeyVault to retrieve keys. If you enable [KeyVault firewall](../../key-vault/general/overview-vnet-service-endpoints.md#trusted-services), make sure you check an option to *'Allow Trusted Microsoft Services to bypass this firewall.'*
+> [!NOTE]
+> When a key is disabled, deleted, expired, or not reachable, a server with data encrypted using that key becomes **inaccessible**, as stated above. The server doesn't become available until the key is enabled again or you assign a new key (see the example after this note).
+> Generally, the server becomes **inaccessible** within 60 minutes after a key is disabled, deleted, expired, or can't be reached. Similarly, after the key becomes available again, it may take up to 60 minutes for the server to become accessible.
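+
+If the server became inaccessible because the key was disabled, here's a minimal sketch of checking and re-enabling the key in Key Vault with the Azure CLI, assuming hypothetical vault and key names:
+
+```azurecli
+# Check whether the key is currently enabled (hypothetical names)
+az keyvault key show --vault-name contoso-kv --name pgsql-cmk --query "attributes.enabled"
+
+# Re-enable the key so the server can regain access to it
+az keyvault key set-attributes --vault-name contoso-kv --name pgsql-cmk --enabled true
+```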
## Using Data Encryption with Customer Managed Key (CMK) and Geo-redundant Business Continuity features, such as Replicas and Geo-redundant backup
postgresql Concepts Major Version Upgrade https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/concepts-major-version-upgrade.md
Here are some of the important considerations with in-place major version upgrad
- In-place major version upgrade process for Flexible Server automatically deploys the latest supported minor version. -- In-place major version upgrade process is an offline operation and it involves a short downtime.
+- The process of performing an in-place major version upgrade is an offline operation that results in a brief period of downtime. Typically, the downtime is under 15 minutes, although the duration may vary depending on the number of system tables involved.
- Long-running transactions or high workload before the upgrade might increase the time taken to shut down the database and increase upgrade time.
postgresql Concepts Supported Versions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/concepts-supported-versions.md
Azure Database for PostgreSQL - Flexible Server currently supports the following
## PostgreSQL version 15
-PostgreSQL version 15 is now generally available in all Azure regions. The current minor release is **15.2**.Refer to the [PostgreSQL documentation](https://www.postgresql.org/docs/release/15.2/) to learn more about improvements and fixes in this release. New servers will be created with this minor version.
+PostgreSQL version 15 is now generally available in all Azure regions. The current minor release is **15.3**. Refer to the [PostgreSQL documentation](https://www.postgresql.org/docs/release/15.3/) to learn more about improvements and fixes in this release. New servers will be created with this minor version.
## PostgreSQL version 14
-The current minor release is **14.7**. Refer to the [PostgreSQL documentation](https://www.postgresql.org/docs/release/14.7/) to learn more about improvements and fixes in this release. New servers will be created with this minor version.
+The current minor release is **14.8**. Refer to the [PostgreSQL documentation](https://www.postgresql.org/docs/release/14.8/) to learn more about improvements and fixes in this release. New servers will be created with this minor version.
## PostgreSQL version 13
-The current minor release is **13.10**. Refer to the [PostgreSQL documentation](https://www.postgresql.org/docs/release/13.10/) to learn more about improvements and fixes in this release. New servers will be created with this minor version.
+The current minor release is **13.11**. Refer to the [PostgreSQL documentation](https://www.postgresql.org/docs/release/13.11/) to learn more about improvements and fixes in this release. New servers will be created with this minor version.
## PostgreSQL version 12
-The current minor release is **12.14**. Refer to the [PostgreSQL documentation](https://www.postgresql.org/docs/release/12.14/) to learn more about improvements and fixes in this release. New servers will be created with this minor version. Your existing servers will be automatically upgraded to the latest supported minor version in your future scheduled maintenance window.
+The current minor release is **12.15**. Refer to the [PostgreSQL documentation](https://www.postgresql.org/docs/release/12.15/) to learn more about improvements and fixes in this release. New servers will be created with this minor version. Your existing servers will be automatically upgraded to the latest supported minor version in your future scheduled maintenance window.
## PostgreSQL version 11
-The current minor release is **11.19**. Refer to the [PostgreSQL documentation](https://www.postgresql.org/docs/release/11.19/) to learn more about improvements and fixes in this release. New servers will be created with this minor version. Your existing servers will be automatically upgraded to the latest supported minor version in your future scheduled maintenance window.
+The current minor release is **11.20**. Refer to the [PostgreSQL documentation](https://www.postgresql.org/docs/release/11.20/) to learn more about improvements and fixes in this release. New servers will be created with this minor version. Your existing servers will be automatically upgraded to the latest supported minor version in your future scheduled maintenance window.
## PostgreSQL version 10 and older
postgresql Create Automation Tasks https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/create-automation-tasks.md
+
+
+ Title: Stop/Start - Automation tasks - Azure Database for PostgreSQL Flexible Server
+description: This article describes how to Stop/Start Azure Database for PostgreSQL Flexible Server instance using the Automation tasks.
+++++ Last updated : 07/13/2023++
+# Manage Azure Database for PostgreSQL - Flexible Server using automation tasks (preview)
+
+> [!IMPORTANT]
+> This capability is in preview and is subject to the
+> [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
+
+To help you manage [Azure Database for PostgreSQL Flexible Server](./overview.md) resources more efficiently, you can create automation tasks for your Flexible Server. One example of such a task is starting or stopping the PostgreSQL Flexible Server on a predefined schedule. You can set this task to automatically start or stop the server a specific number of times every day, week, or month by setting the Interval and Frequency values on the task's Configure tab. The automation task continues to work until you delete or disable it.
+
+In addition, you can also set up automation tasks for other routine operations, such as 'Send monthly cost for resource' and 'Scale PostgreSQL Flexible Server'.
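+
+For context, the stop and start operations that these task templates automate can also be run on demand with the Azure CLI; a minimal sketch, assuming a hypothetical server named `mydemoserver` in resource group `myresourcegroup`:
+
+```azurecli
+# Stop the flexible server
+az postgres flexible-server stop --resource-group myresourcegroup --name mydemoserver
+
+# Start it again
+az postgres flexible-server start --resource-group myresourcegroup --name mydemoserver
+```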
+
+## How do automation tasks differ from Azure Automation?
+
+Automation tasks are more basic and lightweight than [Azure Automation](../../automation/overview.md). Currently, you can create an automation task only at the Azure resource level. An automation task is actually a logic app resource that runs a workflow, powered by the [*multi-tenant* Azure Logic Apps service](../../logic-apps/logic-apps-overview.md). You can view and edit the underlying workflow by opening the task in the workflow designer after it has completed at least one run.
+
+In contrast, Azure Automation is a comprehensive cloud-based automation and configuration service that provides consistent management across your Azure and non-Azure environments.
+
+## Pricing
+
+Creating an automation task doesn't immediately incur charges. Underneath, an automation task is powered by a workflow in a logic app resource hosted in multi-tenant Azure Logic Apps, thus the [Consumption pricing model](../../logic-apps/logic-apps-pricing.md) applies to automation tasks. Metering and billing are based on the trigger and action executions in the underlying logic app workflow.
++
+## Prerequisites
+* An Azure account and subscription.
+* Azure Database for PostgreSQL Flexible Server that you want to manage.
+
+## Create an automation task
+
+1. In the [Azure portal](https://portal.azure.com), find the PostgreSQL Flexible Server resource that you want to manage.
+1. On the resource navigation menu, in the **Automation** section, select **Tasks (preview)**.
+![Screenshot showing Azure portal and Azure PostgreSQL resource menu with "Tasks (preview)" selected.](media/create-automation-tasks/azure-postgres-menu-automation-section.png)
+
+1. On the **Tasks** pane, select **Add a task** to select a task template.
+![Screenshot that shows the "Tasks (preview)" pane with "Add a task" selected.](media/create-automation-tasks/add-automation-task.png)
+
+1. Under **Select a template**, select the task for **Starting** or **Stopping** your Azure PostgreSQL Flexible Server.
+![Screenshot that shows the "Add a task" pane with "Stop PostgreSQL Flexible Server" template selected.](media/create-automation-tasks/select-task-template.png)
+
+1. Under **Authenticate**, in the **Connections** section, select **Create** for every connection that appears in the task so that you can provide authentication credentials for all the connections. The types of connections in each task vary based on the task.
+![Screenshot that shows the selected "Create" option for the Azure Resource Manager connection.](media/create-automation-tasks/create-authenticate-connections.png)
+
+1. When you're prompted, **sign in with your Azure account** credentials.
+![Screenshot that shows the selection, "Sign in".](media/create-automation-tasks/create-connection-sign-in.png)
+
+1. Each successfully authenticated connection looks similar to this example:
+![Screenshot that shows successfully created connection.](media/create-automation-tasks/create-connection-success.png)
+
+1. After you authenticate all the connections, select **Next: Configure**.
+
+1. Under **Configure**, provide a name for the task and any other information required for the task. When you're done, select **Review + create**.
+![Screenshot that shows the required information for the selected task.](media/create-automation-tasks/provide-task-information.png)
+
+1. Tasks that send email notifications require an email address.
+
+> [!NOTE]
+> You can't change the task name after creation, so consider a name that still applies if you edit the underlying workflow. Changes that you make to the underlying workflow apply only to the task that you created, not the task template.
+>
+> For example, if you name your task `Stop-Instance-Weekly`, but you later edit the underlying workflow to run daily, you can't change your task's name to `Stop-Instance-Daily`.
+
+The task you've created, which is automatically live and running, will appear on the **Tasks** list.
+
+![Screenshot that shows the automation tasks list.](media/create-automation-tasks/automation-tasks-list.png)
+
+## Review task history
+
+To view a task's history of runs along with their status:
+
+1. In the [Azure portal](https://portal.azure.com), find the PostgreSQL Flexible Server resource that you want to manage.
+2. On the resource navigation menu, in the **Automation** section, select **Tasks (preview)**.
+3. In the tasks list, find the task that you want to review. In that task's **Runs** column, select **View**.
+
+Here are the possible statuses for a run:
+
+ | Status | Description |
+ |--|-|
+ | **Canceled** | The task was canceled while running. |
+ | **Failed** | The task has at least one failed action, but no subsequent actions existed to handle the failure. |
+ | **Running** | The task is currently running. |
+ | **Succeeded** | All actions succeeded. A task can still finish successfully if an action failed, but a subsequent action existed to handle the failure. |
+ | **Waiting** | The run hasn't started yet and is paused because an earlier instance of the task is still running. |
+ |||
+
+ For more information, see [Review runs history in monitoring view](../../logic-apps/monitor-logic-apps.md#review-runs-history).
+
+## Edit the task
+
+To change a task, you have these options:
+
+* Edit the task "inline" so that you can change the task's properties, such as connection information or configuration information, for example, your email address.
+* Edit the task's underlying workflow in the workflow designer.
+
+### Edit the task inline
+
+1. In the [Azure portal](https://portal.azure.com), find the PostgreSQL Flexible Server resource that you want to manage.
+1. On the resource navigation menu, in the **Automation** section, select **Tasks (preview)**.
+1. In the tasks list, find the task that you want to update. Open the task's ellipses (**...**) menu, and select **Edit in-line**.
+1. By default, the **Authenticate** tab appears and shows the existing connections.
+1. To add new authentication credentials or select different existing authentication credentials for a connection, open the connection's ellipses (**...**) menu, and select either **Add new connection** or, if available, different authentication credentials.
+1. To update other task properties, select **Next: Configure**.
+1. When you're done, select **Save**.
+
+### Edit the task's underlying workflow
+
+* For details on editing the underlying workflow, see [Edit the task's underlying workflow](../../logic-apps/create-automation-tasks-azure-resources.md#edit-the-tasks-underlying-workflow).
+
+## Next steps
+
+* [Manage logic apps in the Azure portal](../../logic-apps/manage-logic-apps-with-azure-portal.md)
++
postgresql How To Restore Dropped Server https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/how-to-restore-dropped-server.md
When a server is dropped, the database server backup is retained for five days i
To restore a dropped Azure Database for PostgreSQL Flexible server, you need - Azure Subscription name hosting the original server - Location where the server was created
+- Use the 2023-03-01-preview **api-version**
## Steps to restore
To restore a dropped Azure Database for PostgreSQL Flexible server, you need
} } ```
-
+## Common errors
+
+1. If you use an incorrect API version, you might experience restore failures or timeouts. Use the 2023-03-01-preview API version to avoid such issues.
+2. To avoid potential DNS errors, use a different server name when you initiate the restore process, because some restore operations can fail when the same name is used.
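To illustrate both points, here's a minimal sketch of submitting the restore request with `az rest`. The subscription, resource group, server names, region, and restore point below are placeholders, and the request body is assumed to follow the same `ReviveDropped` pattern shown in the steps above:

```bash
# Submit a restore request for a dropped flexible server (placeholder values).
# Note the 2023-03-01-preview api-version and the new (different) server name.
az rest --method put \
  --url "https://management.azure.com/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.DBforPostgreSQL/flexibleServers/<new-server-name>?api-version=2023-03-01-preview" \
  --body '{
    "location": "<region-of-original-server>",
    "properties": {
      "createMode": "ReviveDropped",
      "sourceServerResourceId": "/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.DBforPostgreSQL/flexibleServers/<dropped-server-name>",
      "pointInTimeUTC": "2023-07-10T00:00:00Z"
    }
  }'
```

Using a new server name and the preview API version addresses both of the common errors listed above.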
## Next steps
postgresql Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/release-notes.md
This page provides latest news and updates regarding feature additions, engine v
## Release: February 2023 * Public preview of [Autovacuum Metrics](./concepts-monitoring.md#autovacuum-metrics) for Azure Database for PostgreSQL ΓÇô Flexible Server.
-* Support for [extension](concepts-extensions.md) server with new servers<sup>$</sup>
+* Support for [extension](concepts-extensions.md) semver with new servers<sup>$</sup>
* Public Preview of [Major Version Upgrade](concepts-major-version-upgrade.md) for Azure Database for PostgreSQL ΓÇô Flexible Server. * Support for [Geo-redundant backup feature](./concepts-backup-restore.md#geo-redundant-backup-and-restore) when using [Disk Encryption with Customer Managed Key (CMK)](./concepts-data-encryption.md#how-data-encryption-with-a-customer-managed-key-work) feature. * Support for [minor versions](./concepts-supported-versions.md) 14.6, 13.9, 12.13, 11.18 <sup>$</sup>
We continue to support Single Server and encourage you to adopt Flexible Server,
## Next steps Now that you've read an introduction to Azure Database for PostgreSQL flexible server deployment mode, you're ready to create your first server: [Create an Azure Database for PostgreSQL - Flexible Server using Azure portal](./quickstart-create-server-portal.md)+
postgresql Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/single-server/policy-reference.md
Previously updated : 07/06/2023 Last updated : 07/18/2023 # Azure Policy built-in definitions for Azure Database for PostgreSQL
private-link Availability https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/private-link/availability.md
The following tables list the Private Link services and the regions where they'r
|:-|:--|:-|:--| |Azure Machine Learning | All public regions | | GA <br/> [Learn how to create a private endpoint for Azure Machine Learning.](../machine-learning/how-to-configure-private-link.md) | |Azure Bot Service | All public regions | Supported only on Direct Line App Service extension | GA </br> [Learn how to create a private endpoint for Azure Bot Service](/azure/bot-service/dl-network-isolation-concept) |
-| Azure Cognitive Services | All public regions<br/>All Government regions | | GA <br/> [Use private endpoints.](../cognitive-services/cognitive-services-virtual-networks.md#use-private-endpoints) |
+| Azure Cognitive Services | All public regions<br/>All Government regions | | GA <br/> [Use private endpoints.](../ai-services/cognitive-services-virtual-networks.md#use-private-endpoints) |
| Azure Cognitive Search | All public regions | | GA </br> [Learn how to create a private endpoint for Azure Cognitive Search](../search/service-create-private-endpoint.md) | ### Analytics
private-link Create Private Endpoint Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/private-link/create-private-endpoint-portal.md
You can create private endpoints for various Azure services, such as Azure SQL a
- An Azure account with an active subscription. If you don't already have an Azure account, [create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F). -- An Azure App Services web app with a *PremiumV2-tier* or higher app service plan, deployed in your Azure subscription.
+- An Azure App Service web app with a Basic, Standard, PremiumV2, PremiumV3, IsolatedV2, or Functions Premium (sometimes referred to as the Elastic Premium plan) App Service plan, deployed in your Azure subscription.
- For more information and an example, see [Quickstart: Create an ASP.NET Core web app in Azure](../app-service/quickstart-dotnetcore.md).
purview Catalog Managed Vnet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/catalog-managed-vnet.md
Previously updated : 02/17/2023 Last updated : 07/18/2023 # Customer intent: As a Microsoft Purview admin, I want to set up Managed Virtual Network and managed private endpoints for my Microsoft Purview account.
Before deploying a Managed VNet and Managed VNet Runtime for a Microsoft Purview
### Deploy managed private endpoints for data sources
-To scan any data sources using Managed VNet Runtime, you need to deploy and approve a managed private endpoint for the data source prior to create a new scan. To deploy and approve a managed private endpoint for a data source, follow these steps selecting data source of your choice from the list:
+You can use managed private endpoints to connect to your data sources and help secure data in transit. If your data source allows public access and you want to connect over the public network, you can skip this step. A scan run can be executed as long as the integration runtime can connect to your data source.
+
+To deploy and approve a managed private endpoint for a data source, follow these steps, selecting the data source of your choice from the list:
1. Navigate to **Management**, and select **Managed private endpoints**.
purview How To Managed Attributes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/how-to-managed-attributes.md
Once you have created managed attributes, you can refine your [data catalog sear
Below are the known limitations of the managed attribute feature as it currently exists in Microsoft Purview. -- Managed attributes can only be expired, not deleted.
+- Managed attributes can only be deleted if they have not been applied to any assets.
- Managed attributes can't be applied via the bulk edit experience. - After creating an attribute group, you can't edit the name of the attribute group. - After creating a managed attribute, you can't update the attribute name, attribute group or the field type.
purview How To Monitor Data Map Population https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/how-to-monitor-data-map-population.md
In Microsoft Purview, you can scan various types of data sources and view the sc
## Monitor scan runs
-1. Open the the Microsoft Purview governance portal by:
+1. Open the Microsoft Purview governance portal by:
- Browsing directly to [https://web.purview.azure.com](https://web.purview.azure.com) and selecting your Microsoft Purview account. - Opening the [Azure portal](https://portal.azure.com), searching for and selecting the Microsoft Purview account. Selecting the [**Microsoft Purview governance portal**](https://web.purview.azure.com/) button.
You can click the **run ID** to check more about the scan run details:
| Failed | The scan phase fails. You can check the error details by clicking the "More info" link next to it. | | Canceled | The scan run is canceled by user. | | In Progress | The scan is running in progress. |
- | Queued | The scan run is waiting for available integration runtime resource.<br>If you use self-hosted integration runtime, note each node can run a number of concurrent scans at the same time depending on your machine specification (CPU and memory). More scans will be in Queued status. |
- | Throttled | The scan run is being throttled. It means this Microsoft Purview account at the moment has more ongoing scan runs than the allowed max concurrent count. Learn more about the limit [here](how-to-manage-quotas.md). This particular scan run will be waiting and be executed once your other ongoing scan(s) finishes. |
+ | Queued | The scan run is waiting for an available integration runtime resource.<br>If you use a self-hosted integration runtime, note that each node can run a number of concurrent scans at the same time, depending on your machine specification (CPU and memory). Scans beyond that limit remain in Queued status. |
+ | Throttled | The scan run is being throttled, which means this Microsoft Purview account currently has more ongoing scan runs than the allowed maximum concurrent count. Learn more about the limit [here](how-to-manage-quotas.md). This scan run waits and is executed once your other ongoing scans finish. |
The scan run is not charged during "Throttled" or "Queued" status.
You can click the **run ID** to check more about the scan run details:
- **Duration**: The ingestion duration and the start/end time.
+## View the exception log (Preview)
+
+When some assets or relationships fail to be ingested into the data map during a scan, for example when the ingestion status ends up as partially completed, you can see a **Download log** button in the [scan run details](#scan-run-details) panel. It provides exception log files that capture the details of the failures.
+
+The following table shows the schema of a log file.
+
+| Column | Description |
+| | -- |
+| TimeStamp | The UTC timestamp when the ingestion operation happens.|
+| ErrorCode | Error code of the exception. |
+| OperationItem | Identifier for the failed asset/relationship, usually using the fully qualified name. |
+| Message | More information about which asset/relationship failed to ingest and why. If there's an ingestion failure for a resource set, it might apply to multiple assets matching the same naming pattern, and the message includes the impacted count. |
+
+Currently, the exception log doesn't include failures that happened during the scan phase (metadata discovery). That coverage will be added later.
+ ## Monitor links You can connect other services with Microsoft Purview to establish a "link", which will make the metadata and lineage of that service's assets available to Microsoft Purview. Currently, link is supported for [Azure Data Factory](how-to-link-azure-data-factory.md) and [Azure Synapse Analytics](how-to-lineage-azure-synapse-analytics.md).
purview How To Share Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/how-to-share-data.md
There are a couple possible reasons:
### Can't view list of shares in the storage account asset * You don't have enough permissions to the data store that you want to see shares of. You need a minimum of **Reader** role on the source storage account to see a read-only view of sent shares and received shares. You can find more details on the [ADLS Gen2](register-scan-adls-gen2.md#data-sharing) or [Blob storage](register-scan-azure-blob-storage-source.md#data-sharing) data source page.
+ * Review [storage account prerequisites](#azure-storage-account-prerequisites) and make sure your storage account region, performance, and redundancy options are all supported.
## Next steps
purview Manage Integration Runtimes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/manage-integration-runtimes.md
Previously updated : 05/08/2023 Last updated : 07/18/2023 # Create and manage a self-hosted integration runtime
Make sure the account has the permission of Log-on as a service. Otherwise self-
You can associate a self-hosted integration runtime with multiple on-premises machines or virtual machines in Azure. These machines are called nodes. You can have up to four nodes associated with a self-hosted integration runtime. The benefits of having multiple nodes are: - Higher availability of the self-hosted integration runtime so that it's no longer the single point of failure for scan. This availability helps ensure continuity when you use up to four nodes.-- Run more concurrent scans. Each self-hosted integration runtime can empower many scans at the same time, auto determined based on the machine's CPU/memory. You can install more nodes if you have more concurrency need. Each scan will be executed on one of the nodes. Having more nodes doesn't improve the performance of a single scan execution.
+- Run more concurrent scans. Each self-hosted integration runtime can run many scan runs at the same time; the concurrency is automatically determined based on the machine's CPU and memory. You can install more nodes if you need more concurrency.
+- When scanning sources like Azure Blob, Azure Data Lake Storage Gen1, Azure Data Lake Storage Gen2, and Azure Files, each scan run can use all of those nodes to boost the scan performance. For other sources, the scan is executed on one of the nodes.
You can associate multiple nodes by installing the self-hosted integration runtime software from [Download Center](https://www.microsoft.com/download/details.aspx?id=39717). Then, register it by using the same authentication key.
purview Register Scan Cassandra Source https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/register-scan-cassandra-source.md
Previously updated : 04/20/2023 Last updated : 07/18/2023
To create and run a new scan:
1. **Maximum memory available** (applicable when using self-hosted integration runtime): Specify the maximum memory (in GB) available on your VM to be used for scanning processes. This value depends on the size of Cassandra server to be scanned. :::image type="content" source="media/register-scan-cassandra-source/scan.png" alt-text="scan Cassandra source" border="true":::
-1. Select **Test connection** (available when using self-hosted integration runtime).
+1. Select **Test connection** to validate the settings.
1. Select **Continue**.
purview Register Scan Looker Source https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/register-scan-looker-source.md
Previously updated : 04/20/2023 Last updated : 07/18/2023
To create and run a new scan, follow these steps:
:::image type="content" source="media/register-scan-looker-source/setup-scan.png" alt-text="trigger scan" border="true":::
-1. Select **Test connection** (available when using self-hosted integration runtime).
+1. Select **Test connection** to validate the settings.
1. Select **Continue**.
purview Register Scan Mysql https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/register-scan-mysql.md
Previously updated : 04/20/2023 Last updated : 07/18/2023
To create and run a new scan, follow these steps:
:::image type="content" source="media/register-scan-mysql/scan.png" alt-text="scan MySQL" border="true":::
+1. Select **Test connection** to validate the settings (available when using Azure Integration Runtime).
+ 1. Select **Continue**. 1. Choose your **scan trigger**. You can set up a schedule or run the scan once.
purview Register Scan Postgresql https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/register-scan-postgresql.md
Previously updated : 04/20/2023 Last updated : 07/18/2023
To create and run a new scan, follow these steps:
:::image type="content" source="media/register-scan-postgresql/scan.png" alt-text="scan PostgreSQL" border="true":::
+1. Select **Test connection** to validate the settings (available when using Azure Integration Runtime).
+ 1. Select **Continue**. 1. Choose your **scan trigger**. You can set up a schedule or run the scan once.
purview Register Scan Power Bi Tenant https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/register-scan-power-bi-tenant.md
Previously updated : 06/08/2023 Last updated : 07/17/2023
Use any of the following deployment checklists during the setup or for troublesh
3. Under **Authentication**, **Allow public client flows** is enabled. 2. Review network configuration and validate if:
- 1. A [private endpoint for Power BI tenant](/power-bi/enterprise/service-security-private-links) is deployed.
+ 1. If your Power BI tenant doesn't allow public access, make sure a [private endpoint for the Power BI tenant](/power-bi/enterprise/service-security-private-links) is deployed.
2. All required [private endpoints for Microsoft Purview](./catalog-private-link-end-to-end.md) are deployed. 3. Network connectivity from Self-hosted runtime to Power BI tenant is enabled. The following endpoints must be reachable from self-hosted runtime VM: - `*.powerbi.com`
purview Register Scan Salesforce https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/register-scan-salesforce.md
Previously updated : 04/20/2023 Last updated : 07/18/2023
When object is deleted from the data source, currently the subsequent scan won't
- You need Data Source Administrator and Data Reader permissions to register a source and manage it in the Microsoft Purview governance portal. For more information about permissions, see [Access control in Microsoft Purview](catalog-permissions.md). - A Salesforce connected app, which will be used to access your Salesforce information. - If you need to create a connected app, you can follow the [Salesforce documentation](https://help.salesforce.com/s/articleView?id=sf.connected_app_create_basics.htm&type=5).
- - You will need to [enable OAuth for you Salesforce application](https://help.salesforce.com/s/articleView?id=sf.connected_app_create_api_integration.htm&type=5).
+ - You will need to [enable OAuth for your Salesforce application](https://help.salesforce.com/s/articleView?id=sf.connected_app_create_api_integration.htm&type=5).
> [!NOTE] > **If your data store is not publicly accessible** (if your data store limits access from on-premises network, private network or specific IPs, etc.), **you will need to configure a self hosted integration runtime to connect to it**.
To create and run a new scan, follow these steps:
:::image type="content" source="media/register-scan-salesforce/scan.png" alt-text="scan Salesforce" border="true":::
+1. Select **Test connection** to validate the settings (available when using Azure Integration Runtime).
+ 1. Select **Continue**. 1. Choose your **scan trigger**. You can set up a schedule or run the scan once.
purview Register Scan Snowflake https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/register-scan-snowflake.md
Previously updated : 06/12/2023 Last updated : 07/17/2023
When scanning Snowflake source, Microsoft Purview supports:
- Tasks - Sequences -- Fetching static lineage on assets relationships among tables, views, and streams.
+- Fetching static lineage on asset relationships among tables, views, streams, and stored procedures.
-When setting up scan, you can choose to scan one or more Snowflake database(s) entirely, or further scope the scan to a subset of schemas matching the given name(s) or name pattern(s).
+For stored procedures, you can choose the level of detail to extract in [scan settings](#scan). Stored procedure lineage is supported for the Snowflake Scripting (SQL) and JavaScript languages, and is generated based on the procedure definition.
+
+When setting up a scan, you can choose to scan one or more Snowflake databases entirely based on the given name(s) or name pattern(s), or further scope the scan to a subset of schemas matching the given name(s) or name pattern(s).
### Known limitations
-When object is deleted from the data source, currently the subsequent scan won't automatically remove the corresponding asset in Microsoft Purview.
+- When an object is deleted from the data source, currently the subsequent scan doesn't automatically remove the corresponding asset in Microsoft Purview.
+- Stored procedure lineage isn't supported for the following patterns:
+  - Stored procedures defined in the Java, Python, and Scala languages.
+  - Stored procedures that use SQL [EXECUTE IMMEDIATE](https://docs.snowflake.com/en/sql-reference/sql/execute-immediate) with a static SQL query as a variable.
## Prerequisites
On the **Register sources (Snowflake)** screen, follow these steps:
1. Enter a **Name** that the data source will be listed within the Catalog.
-1. Enter the **server** URL used to connect to the Snowflake account in the form of `<account_identifier>.snowflakecomputing.com`, for example, `xy12345.east-us-2.azure.snowflakecomputing.com`. Learn more about Snowflake [account identifier](https://docs.snowflake.com/en/user-guide/admin-account-identifier.html#).
+1. Enter the **server** URL used to connect to the Snowflake account in the form of `<account_identifier>.snowflakecomputing.com`, for example, `orgname-accountname.snowflakecomputing.com`. Learn more about Snowflake [account identifier](https://docs.snowflake.com/en/user-guide/admin-account-identifier.html#).
1. Select a collection or create a new one (Optional)
To create and run a new scan, follow these steps:
1. **Stored procedure details**: Controls the number of details imported from stored procedures:
- - Signature: The name and parameters of stored procedures.
+ - Signature (default): The name and parameters of stored procedures.
- Code, signature: The name, parameters and code of stored procedures. - Lineage, code, signature: The name, parameters and code of stored procedures, and the data lineage derived from the code. - None: Stored procedure details aren't included.
+ > [!Note]
+ > If you use a self-hosted integration runtime for the scan, customized settings other than the default Signature are supported since version 5.30.8541.1. Earlier versions always extract the name and parameters of stored procedures.
+ 1. **Maximum memory available** (applicable when using self-hosted integration runtime): Maximum memory (in GB) available on customer's VM to be used by scanning processes. It's dependent on the size of Snowflake source to be scanned. > [!Note]
To create and run a new scan, follow these steps:
:::image type="content" source="media/register-scan-snowflake/scan.png" alt-text="scan Snowflake" border="true":::
+1. Select **Test connection** to validate the settings (available when using Azure Integration Runtime).
+ 1. Select **Continue**. 1. Select a **scan rule set** for classification. You can choose between the system default, existing custom rule sets, or [create a new rule set](create-a-scan-rule-set.md) inline. Check the [Classification](apply-classifications.md) article to learn more.
reliability Migrate Sql Database https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/reliability/migrate-sql-database.md
Enabling zone redundancy for Azure SQL Database guarantees high availability as
Before you migrate to availability zone support, refer to the following table to ensure that your Azure SQL Database is in a supported service tier and deployment model. Make sure that your tier and model is offered in a [region that supports availability zones](/azure/reliability/availability-zones-service-support).
-| Service tier | Zone redundancy availability |
-|--||
-| Premium | [All regions that support availability zones](availability-zones-service-support.md#azure-regions-with-availability-zone-support)|
-| Business Critical | [All regions that support availability zones](availability-zones-service-support.md#azure-regions-with-availability-zone-support) |
-| General Purpose | [Selected regions that support availability zones](/azure/azure-sql/database/high-availability-sla?view=azuresql&tabs=azure-powershell&preserve-view=true#general-purpose-service-tier-zone-redundant-availability)|
-| Hyperscale (Preview) |[All regions that support availability zones](availability-zones-service-support.md#azure-regions-with-availability-zone-support) |
+| Service tier | Deployment model | Zone redundancy availability |
+|--|||
+| Premium | Single database or Elastic Pool | [All regions that support availability zones](availability-zones-service-support.md#azure-regions-with-availability-zone-support)|
+| Business Critical | Single database or Elastic Pool | [All regions that support availability zones](availability-zones-service-support.md#azure-regions-with-availability-zone-support) |
+| General Purpose | Single database or Elastic Pool | [Selected regions that support availability zones](/azure/azure-sql/database/high-availability-sla?view=azuresql&tabs=azure-powershell&preserve-view=true#general-purpose-service-tier-zone-redundant-availability)|
+| Hyperscale | Single database | [All regions that support availability zones](availability-zones-service-support.md#azure-regions-with-availability-zone-support) |
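As a quick reference, zone redundancy on an existing single database or elastic pool in one of the supported tiers above can typically be enabled with the Azure CLI. This is a minimal sketch with placeholder resource names; confirm tier and region support in the table first:

```bash
# Enable zone redundancy on an existing single database (placeholder names).
az sql db update \
  --resource-group <resource-group> \
  --server <logical-server-name> \
  --name <database-name> \
  --zone-redundant true

# Enable zone redundancy on an existing elastic pool (placeholder names).
az sql elastic-pool update \
  --resource-group <resource-group> \
  --server <logical-server-name> \
  --name <pool-name> \
  --zone-redundant true
```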
## Downtime requirements
To create a geo-replica of the database:
## Next steps > [!div class="nextstepaction"]
-> [Azure services and regions that support availability zones](availability-zones-service-support.md)
+> [Azure services and regions that support availability zones](availability-zones-service-support.md)
reliability Reliability Guidance Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/reliability/reliability-guidance-overview.md
Azure reliability guidance contains the following:
[Azure Storage: Blob Storage](../storage/common/storage-disaster-recovery-guidance.md?toc=/azure/reliability/toc.json&bc=/azure/reliability/breadcrumb/toc.json)| [Azure Storage Mover](reliability-azure-storage-mover.md)| [Azure Virtual Machine Scale Sets](../virtual-machines/availability.md?toc=/azure/reliability/toc.json&bc=/azure/reliability/breadcrumb/toc.json)|
-[Azure Virtual Machines](../virtual-machines/virtual-machines-reliability.md?toc=/azure/reliability/toc.json&bc=/azure/reliability/breadcrumb/toc.json)|
+[Azure Virtual Machines](../virtual-machines/reliability-virtual-machines.md?toc=/azure/reliability/toc.json&bc=/azure/reliability/breadcrumb/toc.json)|
[Azure Virtual Network](../vpn-gateway/create-zone-redundant-vnet-gateway.md?toc=/azure/reliability/toc.json&bc=/azure/reliability/breadcrumb/toc.json)| [Azure VPN Gateway](../vpn-gateway/about-zone-redundant-vnet-gateways.md?toc=/azure/reliability/toc.json&bc=/azure/reliability/breadcrumb/toc.json)|
reliability Sovereign Cloud China https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/reliability/sovereign-cloud-china.md
This section outlines variations and considerations when using Azure Bot Service
| Product | Unsupported, limited, and/or modified features | Notes | ||--|| |Azure Machine Learning| See [Azure Machine Learning feature availability across Azure in China cloud regions](../machine-learning/reference-machine-learning-cloud-parity.md#azure-china-21vianet). | |
-| Cognitive
-| Cognitive
+| Cognitive
+| Cognitive
### Azure AD External Identities
role-based-access-control Built In Roles https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/role-based-access-control/built-in-roles.md
Can perform all actions within an Azure Machine Learning workspace, except for c
### Cognitive Services Contributor
-Lets you create, read, update, delete and manage keys of Cognitive Services. [Learn more](../cognitive-services/cognitive-services-virtual-networks.md)
+Lets you create, read, update, delete and manage keys of Cognitive Services. [Learn more](../ai-services/cognitive-services-virtual-networks.md)
> [!div class="mx-tableFixed"] > | Actions | Description |
Lets you create, read, update, delete and manage keys of Cognitive Services. [Le
### Cognitive Services Custom Vision Contributor
-Full access to the project, including the ability to view, create, edit, or delete projects. [Learn more](../cognitive-services/custom-vision-service/role-based-access-control.md)
+Full access to the project, including the ability to view, create, edit, or delete projects. [Learn more](../ai-services/custom-vision-service/role-based-access-control.md)
> [!div class="mx-tableFixed"] > | Actions | Description |
Full access to the project, including the ability to view, create, edit, or dele
### Cognitive Services Custom Vision Deployment
-Publish, unpublish or export models. Deployment can view the project but can't update. [Learn more](../cognitive-services/custom-vision-service/role-based-access-control.md)
+Publish, unpublish or export models. Deployment can view the project but can't update. [Learn more](../ai-services/custom-vision-service/role-based-access-control.md)
> [!div class="mx-tableFixed"] > | Actions | Description |
Publish, unpublish or export models. Deployment can view the project but can't u
### Cognitive Services Custom Vision Labeler
-View, edit training images and create, add, remove, or delete the image tags. Labelers can view the project but can't update anything other than training images and tags. [Learn more](../cognitive-services/custom-vision-service/role-based-access-control.md)
+View, edit training images and create, add, remove, or delete the image tags. Labelers can view the project but can't update anything other than training images and tags. [Learn more](../ai-services/custom-vision-service/role-based-access-control.md)
> [!div class="mx-tableFixed"] > | Actions | Description |
View, edit training images and create, add, remove, or delete the image tags. La
### Cognitive Services Custom Vision Reader
-Read-only actions in the project. Readers can't create or update the project. [Learn more](../cognitive-services/custom-vision-service/role-based-access-control.md)
+Read-only actions in the project. Readers can't create or update the project. [Learn more](../ai-services/custom-vision-service/role-based-access-control.md)
> [!div class="mx-tableFixed"] > | Actions | Description |
Read-only actions in the project. Readers can't create or update the project. [L
### Cognitive Services Custom Vision Trainer
-View, edit projects and train the models, including the ability to publish, unpublish, export the models. Trainers can't create or delete the project. [Learn more](../cognitive-services/custom-vision-service/role-based-access-control.md)
+View, edit projects and train the models, including the ability to publish, unpublish, export the models. Trainers can't create or delete the project. [Learn more](../ai-services/custom-vision-service/role-based-access-control.md)
> [!div class="mx-tableFixed"] > | Actions | Description |
Lets you perform detect, verify, identify, group, and find similar operations on
### Cognitive Services Metrics Advisor Administrator
-Full access to the project, including the system level configuration. [Learn more](../applied-ai-services/metrics-advisor/how-tos/alerts.md)
+Full access to the project, including the system level configuration. [Learn more](../ai-services/metrics-advisor/how-tos/alerts.md)
> [!div class="mx-tableFixed"] > | Actions | Description |
Read access to view files, models, deployments. The ability to create completion
### Cognitive Services QnA Maker Editor
-Let's you create, edit, import and export a KB. You cannot publish or delete a KB. [Learn more](../cognitive-services/qnamaker/reference-role-based-access-control.md)
+Let's you create, edit, import and export a KB. You cannot publish or delete a KB. [Learn more](../ai-services/qnamaker/index.yml)
> [!div class="mx-tableFixed"] > | Actions | Description |
Let's you create, edit, import and export a KB. You cannot publish or delete a K
### Cognitive Services QnA Maker Reader
-Let's you read and test a KB only. [Learn more](../cognitive-services/qnamaker/reference-role-based-access-control.md)
+Let's you read and test a KB only. [Learn more](../ai-services/qnamaker/index.yml)
> [!div class="mx-tableFixed"] > | Actions | Description |
Let's you read and test a KB only. [Learn more](../cognitive-services/qnamaker/r
### Cognitive Services User
-Lets you read and list keys of Cognitive Services. [Learn more](../cognitive-services/authentication.md)
+Lets you read and list keys of Cognitive Services. [Learn more](../ai-services/authentication.md)
> [!div class="mx-tableFixed"] > | Actions | Description |
role-based-access-control Check Access https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/role-based-access-control/check-access.md
Title: Quickstart - Check access for a user to Azure resources - Azure RBAC
-description: In this quickstart, you learn how to check the access for yourself or another user to Azure resources using the Azure portal and Azure role-based access control (Azure RBAC).
+ Title: Quickstart - Check access for a user to a single Azure resource - Azure RBAC
+description: In this quickstart, you learn how to check the access for yourself or another user to an Azure resource using the Azure portal and Azure role-based access control (Azure RBAC).
Previously updated : 08/26/2022 Last updated : 07/18/2023 #Customer intent: As a new user, I want to quickly see access for myself, user, group, or application, to make sure they have the appropriate permissions.
-# Quickstart: Check access for a user to Azure resources
+# Quickstart: Check access for a user to a single Azure resource
-Sometimes you need to check what access a user has to a set of Azure resources. You check their access by listing their assignments. A quick way to check the access for a single user is to use the **Check access** feature on the **Access control (IAM)** page.
+Sometimes you need to check what access a user has to an Azure resource. You check their access by listing their assignments. A quick way to check the access for a single user is to use the **Check access** feature on the **Access control (IAM)** page.
-## Step 1: Open the Azure resources
+## Step 1: Open the Azure resource
-To check the access for a user, you first need to open the Azure resources you want to check access for. Azure resources are organized into levels that are typically called the *scope*. In Azure, you can specify a scope at four levels from broad to narrow: management group, subscription, resource group, and resource.
+To check the access for a user, you first need to open the Azure resource you want to check access for. Azure resources are organized into levels that are typically called the *scope*. In Azure, you can specify a scope at four levels from broad to narrow: management group, subscription, resource group, and resource.
![Diagram that shows scope levels for Azure RBAC.](../../includes/role-based-access-control/media/scope-levels.png)
-Follow these steps to open the set of Azure resources that you want to check access for.
+Follow these steps to open the Azure resource that you want to check access for.
1. Open the [Azure portal](https://portal.azure.com).
-1. Open the set of Azure resources you want to check access for, such as **Management groups**, **Subscriptions**, **Resource groups**, or a particular resource.
+1. Open the Azure resource you want to check access for, such as **Management groups**, **Subscriptions**, **Resource groups**, or a particular resource.
1. Click the specific resource in that scope.
Follow these steps to open the set of Azure resources that you want to check acc
## Step 2: Check access for a user
-Follow these steps to check the access for a single user, group, service principal, or managed identity to the previously selected Azure resources.
+Follow these steps to check the access for a single user, group, service principal, or managed identity to the previously selected Azure resource.
1. Click **Access control (IAM)**.
Follow these steps to check the access for a single user, group, service princip
1. Click the user to open the **assignments** pane.
- On this pane, you can see the access for the selected user at this scope and inherited to this scope. Assignments at child scopes are not listed. You see the following assignments:
+ On this pane, you can see the access for the selected user at this scope and inherited to this scope. Assignments at child scopes aren't listed. You see the following assignments:
- Role assignments added with Azure RBAC. - Deny assignments added using Azure Blueprints or Azure managed apps.
Follow these steps to check the access for a single user, group, service princip
## Step 3: Check your access
-Follow these steps to check your access to the previously selected Azure resources.
+Follow these steps to check your access to the previously selected Azure resource.
1. Click **Access control (IAM)**. 1. On the **Check access** tab, click the **View my access** button.
- An assignments pane appears that lists your access at this scope and inherited to this scope. Assignments at child scopes are not listed.
+ An assignments pane appears that lists your access at this scope and inherited to this scope. Assignments at child scopes aren't listed.
![Screenshot of role and deny assignments pane.](./media/check-access/rg-check-access-assignments.png)
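Outside the portal, a similar check can be approximated with the Azure CLI by listing a user's role assignments at the scope of the resource. This is a minimal sketch with placeholder values; it lists Azure RBAC role assignments only, not deny assignments or classic administrators:

```bash
# List role assignments for a user at a resource group scope,
# including assignments inherited from parent scopes (placeholder values).
az role assignment list \
  --assignee "user@contoso.com" \
  --scope "/subscriptions/<subscription-id>/resourceGroups/<resource-group>" \
  --include-inherited \
  --output table
```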
role-based-access-control Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/role-based-access-control/policy-reference.md
Title: Built-in policy definitions for Azure RBAC description: Lists Azure Policy built-in policy definitions for Azure RBAC. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 07/06/2023 Last updated : 07/18/2023
sap High Availability Guide Suse Netapp Files https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/workloads/high-availability-guide-suse-netapp-files.md
Previously updated : 06/20/2023 Last updated : 07/17/2023
The following items are prefixed with either **[A]** - applicable to all nodes,
If using enqueue server 2 architecture ([ENSA2](https://help.sap.com/viewer/cff8531bc1d9416d91bb6781e628d4e0/1709%20001/en-US/6d655c383abf4c129b0e5c8683e7ecd8.html)), define the resources as follows:
+ > [!NOTE]
+ > If you have a two-node cluster running ENSA2, you have the option to configure the priority-fencing-delay cluster property. This property introduces an additional delay in fencing a node that has higher total resource priority when a split-brain scenario occurs. For more information, see the [SUSE Linux Enterprise Server high availability extension administration guide](https://documentation.suse.com/sle-ha/15-SP3/single-html/SLE-HA-administration/#pro-ha-storage-protect-fencing).
+ >
+ > The priority-fencing-delay property is only applicable for ENSA2 running on a two-node cluster.
+ ```bash sudo crm configure property maintenance-mode="true"+
+ sudo crm configure property priority-fencing-delay=30
# If using NFSv3 sudo crm configure primitive rsc_sap_QAS_ASCS00 SAPInstance \
The following items are prefixed with either **[A]** - applicable to all nodes,
op monitor interval=11 timeout=60 on-fail=restart \ params InstanceName=QAS_ASCS00_anftstsapvh START_PROFILE="/sapmnt/QAS/profile/QAS_ASCS00_anftstsapvh" \ AUTOMATIC_RECOVER=false \
- meta resource-stickiness=5000
+ meta resource-stickiness=5000 priority=100
# If using NFSv4.1 sudo crm configure primitive rsc_sap_QAS_ASCS00 SAPInstance \
The following items are prefixed with either **[A]** - applicable to all nodes,
op monitor interval=11 timeout=105 on-fail=restart \ params InstanceName=QAS_ASCS00_anftstsapvh START_PROFILE="/sapmnt/QAS/profile/QAS_ASCS00_anftstsapvh" \ AUTOMATIC_RECOVER=false \
- meta resource-stickiness=5000
+ meta resource-stickiness=5000 priority=100
# If using NFSv3 sudo crm configure primitive rsc_sap_QAS_ERS01 SAPInstance \
Follow these steps to install an SAP application server.
## Test the cluster setup
-The following tests are a copy of the test cases in the [best practices guides of SUSE][suse-ha-guide]. They're copied for your convenience. Always also read the best practices guides and perform all additional tests that might have been added.
-
-1. Test HAGetFailoverConfig, HACheckConfig, and HACheckFailoverConfig
-
- Run the following commands as \<sapsid>adm on the node where the ASCS instance is currently running. If the commands fail with FAIL: Insufficient memory, it might be caused by dashes in your hostname. This is a known issue and will be fixed by SUSE in the sap-suse-cluster-connector package.
-
- ```bash
- anftstsapcl1:qasadm 52> sapcontrol -nr 00 -function HAGetFailoverConfig
- 07.03.2019 20:08:59
- HAGetFailoverConfig
- OK
- HAActive: TRUE
- HAProductVersion: SUSE Linux Enterprise Server for SAP Applications 12 SP3
- HASAPInterfaceVersion: SUSE Linux Enterprise Server for SAP Applications 12 SP3 (sap_suse_cluster_connector 3.1.0)
- HADocumentation: https://www.suse.com/products/sles-for-sap/resource-library/sap-best-practices/
- HAActiveNode: anftstsapcl1
- HANodes: anftstsapcl1, anftstsapcl2
-
- anftstsapcl1:qasadm 54> sapcontrol -nr 00 -function HACheckConfig
- 07.03.2019 23:28:29
- HACheckConfig
- OK
- state, category, description, comment
- SUCCESS, SAP CONFIGURATION, Redundant ABAP instance configuration, 2 ABAP instances detected
- SUCCESS, SAP CONFIGURATION, Redundant Java instance configuration, 0 Java instances detected
- SUCCESS, SAP CONFIGURATION, Enqueue separation, All Enqueue server separated from application server
- SUCCESS, SAP CONFIGURATION, MessageServer separation, All MessageServer separated from application server
- SUCCESS, SAP CONFIGURATION, ABAP instances on multiple hosts, ABAP instances on multiple hosts detected
- SUCCESS, SAP CONFIGURATION, Redundant ABAP SPOOL service configuration, 2 ABAP instances with SPOOL service detected
- SUCCESS, SAP STATE, Redundant ABAP SPOOL service state, 2 ABAP instances with active SPOOL service detected
- SUCCESS, SAP STATE, ABAP instances with ABAP SPOOL service on multiple hosts, ABAP instances with active ABAP SPOOL service on multiple hosts detected
- SUCCESS, SAP CONFIGURATION, Redundant ABAP BATCH service configuration, 2 ABAP instances with BATCH service detected
- SUCCESS, SAP STATE, Redundant ABAP BATCH service state, 2 ABAP instances with active BATCH service detected
- SUCCESS, SAP STATE, ABAP instances with ABAP BATCH service on multiple hosts, ABAP instances with active ABAP BATCH service on multiple hosts detected
- SUCCESS, SAP CONFIGURATION, Redundant ABAP DIALOG service configuration, 2 ABAP instances with DIALOG service detected
- SUCCESS, SAP STATE, Redundant ABAP DIALOG service state, 2 ABAP instances with active DIALOG service detected
- SUCCESS, SAP STATE, ABAP instances with ABAP DIALOG service on multiple hosts, ABAP instances with active ABAP DIALOG service on multiple hosts detected
- SUCCESS, SAP CONFIGURATION, Redundant ABAP UPDATE service configuration, 2 ABAP instances with UPDATE service detected
- SUCCESS, SAP STATE, Redundant ABAP UPDATE service state, 2 ABAP instances with active UPDATE service detected
- SUCCESS, SAP STATE, ABAP instances with ABAP UPDATE service on multiple hosts, ABAP instances with active ABAP UPDATE service on multiple hosts detected
- SUCCESS, SAP STATE, SCS instance running, SCS instance status ok
- SUCCESS, SAP CONFIGURATION, SAPInstance RA sufficient version (anftstsapvh_QAS_00), SAPInstance includes is-ers patch
- SUCCESS, SAP CONFIGURATION, Enqueue replication (anftstsapvh_QAS_00), Enqueue replication enabled
- SUCCESS, SAP STATE, Enqueue replication state (anftstsapvh_QAS_00), Enqueue replication active
-
- anftstsapcl1:qasadm 55> sapcontrol -nr 00 -function HACheckFailoverConfig
- 07.03.2019 23:30:48
- HACheckFailoverConfig
- OK
- state, category, description, comment
- SUCCESS, SAP CONFIGURATION, SAPInstance RA sufficient version, SAPInstance includes is-ers patch
- ```
-
-2. Manually migrate the ASCS instance
-
- Resource state before starting the test:
-
- ```text
- Resource Group: g-QAS_ASCS
- fs_QAS_ASCS (ocf::heartbeat:Filesystem): Started anftstsapcl2
- nc_QAS_ASCS (ocf::heartbeat:azure-lb): Started anftstsapcl2
- vip_QAS_ASCS (ocf::heartbeat:IPaddr2): Started anftstsapcl2
- rscsap_QAS_ASCS00 (ocf::heartbeat:SAPInstance): Started anftstsapcl2
- stonith-sbd (stonith:external/sbd): Started anftstsapcl1
- Resource Group: g-QAS_ERS
- fs_QAS_ERS (ocf::heartbeat:Filesystem): Started anftstsapcl1
- nc_QAS_ERS (ocf::heartbeat:azure-lb): Started anftstsapcl1
- vip_QAS_ERS (ocf::heartbeat:IPaddr2): Started anftstsapcl1
- rsc_sap_QAS_ERS01 (ocf::heartbeat:SAPInstance): Starting anftstsapcl1
- ```
-
- Run the following commands as root to migrate the ASCS instance.
-
- ```bash
- anftstsapcl1:~ # crm resource migrate rsc_sap_QAS_ASCS00 force
- INFO: Move constraint created for rsc_sap_QAS_ASCS00
-
- anftstsapcl1:~ # crm resource unmigrate rsc_sap_QAS_ASCS00
- INFO: Removed migration constraints for rsc_sap_QAS_ASCS00
-
- # Remove failed actions for the ERS that occurred as part of the migration
- anftstsapcl1:~ # crm resource cleanup rsc_sap_QAS_ERS01
- ```
-
- Resource state after the test:
-
- ```text
- Resource Group: g-QAS_ASCS
- fs_QAS_ASCS (ocf::heartbeat:Filesystem): Started anftstsapcl1
- nc_QAS_ASCS (ocf::heartbeat:azure-lb): Started anftstsapcl1
- vip_QAS_ASCS (ocf::heartbeat:IPaddr2): Started anftstsapcl1
- rsc_sap_QAS_ASCS00 (ocf::heartbeat:SAPInstance): Started anftstsapcl1
- stonith-sbd (stonith:external/sbd): Started anftstsapcl1
- Resource Group: g-QAS_ERS
- fs_QAS_ERS (ocf::heartbeat:Filesystem): Started anftstsapcl2
- nc_QAS_ERS (ocf::heartbeat:azure-lb): Started anftstsapcl2
- vip_QAS_ERS (ocf::heartbeat:IPaddr2): Started anftstsapcl2
- rsc_sap_QAS_ERS01 (ocf::heartbeat:SAPInstance): Started anftstsapcl2
- ```
-
-3. Test HAFailoverToNode
-
- Resource state before starting the test:
-
- ```text
- Resource Group: g-QAS_ASCS
- fs_QAS_ASCS (ocf::heartbeat:Filesystem): Started anftstsapcl1
- nc_QAS_ASCS (ocf::heartbeat:azure-lb): Started anftstsapcl1
- vip_QAS_ASCS (ocf::heartbeat:IPaddr2): Started anftstsapcl1
- rsc_sap_QAS_ASCS00 (ocf::heartbeat:SAPInstance): Started anftstsapcl1
- stonith-sbd (stonith:external/sbd): Started anftstsapcl1
- Resource Group: g-QAS_ERS
- fs_QAS_ERS (ocf::heartbeat:Filesystem): Started anftstsapcl2
- nc_QAS_ERS (ocf::heartbeat:azure-lb): Started anftstsapcl2
- vip_QAS_ERS (ocf::heartbeat:IPaddr2): Started anftstsapcl2
- rsc_sap_QAS_ERS01 (ocf::heartbeat:SAPInstance): Started anftstsapcl2
- ```
-
- Run the following commands as \<sapsid>adm to migrate the ASCS instance.
-
- ```bash
- anftstsapcl1:qasadm 53> sapcontrol -nr 00 -host anftstsapvh -user qasadm <password> -function HAFailoverToNode ""
-
- # run as root
- # Remove failed actions for the ERS that occurred as part of the migration
- anftstsapcl1:~ # crm resource cleanup rsc_sap_QAS_ERS01
- # Remove migration constraints
- anftstsapcl1:~ # crm resource clear rsc_sap_QAS_ASCS00
- #INFO: Removed migration constraints for rsc_sap_QAS_ASCS00
- ```
-
- Resource state after the test:
-
- ```text
- Resource Group: g-QAS_ASCS
- fs_QAS_ASCS (ocf::heartbeat:Filesystem): Started anftstsapcl2
- nc_QAS_ASCS (ocf::heartbeat:azure-lb): Started anftstsapcl2
- vip_QAS_ASCS (ocf::heartbeat:IPaddr2): Started anftstsapcl2
- rsc_sap_QAS_ASCS00 (ocf::heartbeat:SAPInstance): Started anftstsapcl2
- stonith-sbd (stonith:external/sbd): Started anftstsapcl1
- Resource Group: g-QAS_ERS
- fs_QAS_ERS (ocf::heartbeat:Filesystem): Started anftstsapcl1
- nc_QAS_ERS (ocf::heartbeat:azure-lb): Started anftstsapcl1
- vip_QAS_ERS (ocf::heartbeat:IPaddr2): Started anftstsapcl1
- rsc_sap_QAS_ERS01 (ocf::heartbeat:SAPInstance): Started anftstsapcl1
- ```
-
-4. Simulate node crash
-
- Resource state before starting the test:
-
- ```text
- Resource Group: g-QAS_ASCS
- fs_QAS_ASCS (ocf::heartbeat:Filesystem): Started anftstsapcl2
- nc_QAS_ASCS (ocf::heartbeat:azure-lb): Started anftstsapcl2
- vip_QAS_ASCS (ocf::heartbeat:IPaddr2): Started anftstsapcl2
- rsc_sap_QAS_ASCS00 (ocf::heartbeat:SAPInstance): Started anftstsapcl2
- stonith-sbd (stonith:external/sbd): Started anftstsapcl1
- Resource Group: g-QAS_ERS
- fs_QAS_ERS (ocf::heartbeat:Filesystem): Started anftstsapcl1
- nc_QAS_ERS (ocf::heartbeat:azure-lb): Started anftstsapcl1
- vip_QAS_ERS (ocf::heartbeat:IPaddr2): Started anftstsapcl1
- rsc_sap_QAS_ERS01 (ocf::heartbeat:SAPInstance): Started anftstsapcl1
- ```
-
- Run the following command as root on the node where the ASCS instance is running
-
- ```bash
- anftstsapcl2:~ # echo b > /proc/sysrq-trigger
- ```
-
- If you use SBD, Pacemaker shouldn't automatically start on the killed node. The status after the node is started again should look like this.
-
- ```bash
- Online:
- Online: [ anftstsapcl1 ]
- OFFLINE: [ anftstsapcl2 ]
-
- Full list of resources:
-
- Resource Group: g-QAS_ASCS
- fs_QAS_ASCS (ocf::heartbeat:Filesystem): Started anftstsapcl1
- nc_QAS_ASCS (ocf::heartbeat:azure-lb): Started anftstsapcl1
- vip_QAS_ASCS (ocf::heartbeat:IPaddr2): Started anftstsapcl1
- rsc_sap_QAS_ASCS00 (ocf::heartbeat:SAPInstance): Started anftstsapcl1
- stonith-sbd (stonith:external/sbd): Started anftstsapcl1
- Resource Group: g-QAS_ERS
- fs_QAS_ERS (ocf::heartbeat:Filesystem): Started anftstsapcl1
- nc_QAS_ERS (ocf::heartbeat:azure-lb): Started anftstsapcl1
- vip_QAS_ERS (ocf::heartbeat:IPaddr2): Started anftstsapcl1
- rsc_sap_QAS_ERS01 (ocf::heartbeat:SAPInstance): Started anftstsapcl1
-
- Failed Actions:
- * rsc_sap_QAS_ERS01_monitor_11000 on anftstsapcl1 'not running' (7): call=166, status=complete, exitreason='',
- last-rc-change='Fri Mar 8 18:26:10 2019', queued=0ms, exec=0ms
- ```
-
- Use the following commands to start Pacemaker on the killed node, clean the SBD messages, and clean the failed resources.
-
- ```bash
- # run as root
- # list the SBD device(s)
- anftstsapcl2:~ # cat /etc/sysconfig/sbd | grep SBD_DEVICE=
- # SBD_DEVICE="/dev/disk/by-id/scsi-36001405b730e31e7d5a4516a2a697dcf;/dev/disk/by-id/scsi-36001405f69d7ed91ef54461a442c676e;/dev/disk/by-id/scsi-360014058e5f335f2567488882f3a2c3a"
-
- anftstsapcl2:~ # sbd -d /dev/disk/by-id/scsi-36001405772fe8401e6240c985857e11 -d /dev/disk/by-id/scsi-36001405f69d7ed91ef54461a442c676e -d /dev/disk/by-id/scsi-360014058e5f335f2567488882f3a2c3a message anftstsapcl2 clear
-
- anftstsapcl2:~ # systemctl start pacemaker
- anftstsapcl2:~ # crm resource cleanup rsc_sap_QAS_ASCS00
- anftstsapcl2:~ # crm resource cleanup rsc_sap_QAS_ERS01
- ```
-
- Resource state after the test:
-
- ```text
- Full list of resources:
-
- Resource Group: g-QAS_ASCS
- fs_QAS_ASCS (ocf::heartbeat:Filesystem): Started anftstsapcl1
- nc_QAS_ASCS (ocf::heartbeat:azure-lb): Started anftstsapcl1
- vip_QAS_ASCS (ocf::heartbeat:IPaddr2): Started anftstsapcl1
- rsc_sap_QAS_ASCS00 (ocf::heartbeat:SAPInstance): Started anftstsapcl1
- stonith-sbd (stonith:external/sbd): Started anftstsapcl1
- Resource Group: g-QAS_ERS
- fs_QAS_ERS (ocf::heartbeat:Filesystem): Started anftstsapcl2
- nc_QAS_ERS (ocf::heartbeat:azure-lb): Started anftstsapcl2
- vip_QAS_ERS (ocf::heartbeat:IPaddr2): Started anftstsapcl2
- rsc_sap_QAS_ERS01 (ocf::heartbeat:SAPInstance): Started anftstsapcl2
- ```
-
-5. Test manual restart of ASCS instance
-
- Resource state before starting the test:
-
- ```text
- Resource Group: g-QAS_ASCS
- fs_QAS_ASCS (ocf::heartbeat:Filesystem): Started anftstsapcl2
- nc_QAS_ASCS (ocf::heartbeat:azure-lb): Started anftstsapcl2
- vip_QAS_ASCS (ocf::heartbeat:IPaddr2): Started anftstsapcl2
- rsc_sap_QAS_ASCS00 (ocf::heartbeat:SAPInstance): Started anftstsapcl2
- stonith-sbd (stonith:external/sbd): Started anftstsapcl1
- Resource Group: g-QAS_ERS
- fs_QAS_ERS (ocf::heartbeat:Filesystem): Started anftstsapcl1
- nc_QAS_ERS (ocf::heartbeat:azure-lb): Started anftstsapcl1
- vip_QAS_ERS (ocf::heartbeat:IPaddr2): Started anftstsapcl1
- rsc_sap_QAS_ERS01 (ocf::heartbeat:SAPInstance): Started anftstsapcl1
- ```
-
- Create an enqueue lock by, for example edit a user in transaction su01. Run the following commands as <sapsid\>adm on the node where the ASCS instance is running. The commands will stop the ASCS instance and start it again. If using enqueue server 1 architecture, the enqueue lock is expected to be lost in this test. If using enqueue server 2 architecture, the enqueue will be retained.
-
- ```bash
- anftstsapcl2:qasadm 51> sapcontrol -nr 00 -function StopWait 600 2
- ```
-
- The ASCS instance should now be disabled in Pacemaker
-
- ```text
- rsc_sap_QAS_ASCS00 (ocf::heartbeat:SAPInstance): Stopped (disabled)
- ```
-
- Start the ASCS instance again on the same node.
-
- ```bash
- anftstsapcl2:qasadm 52> sapcontrol -nr 00 -function StartWait 600 2
- ```
-
- The enqueue lock of transaction su01 should be lost, if using enqueue server replication 1 architecture and the back-end should have been reset. Resource state after the test:
-
- ```text
- Resource Group: g-QAS_ASCS
- fs_QAS_ASCS (ocf::heartbeat:Filesystem): Started anftstsapcl2
- nc_QAS_ASCS (ocf::heartbeat:azure-lb): Started anftstsapcl2
- vip_QAS_ASCS (ocf::heartbeat:IPaddr2): Started anftstsapcl2
- rsc_sap_QAS_ASCS00 (ocf::heartbeat:SAPInstance): Started anftstsapcl2
- stonith-sbd (stonith:external/sbd): Started anftstsapcl1
- Resource Group: g-QAS_ERS
- fs_QAS_ERS (ocf::heartbeat:Filesystem): Started anftstsapcl1
- nc_QAS_ERS (ocf::heartbeat:azure-lb): Started anftstsapcl1
- vip_QAS_ERS (ocf::heartbeat:IPaddr2): Started anftstsapcl1
- rsc_sap_QAS_ERS01 (ocf::heartbeat:SAPInstance): Started anftstsapcl1
- ```
-
-6. Kill message server process
-
- Resource state before starting the test:
-
- ```text
- Resource Group: g-QAS_ASCS
- fs_QAS_ASCS (ocf::heartbeat:Filesystem): Started anftstsapcl2
- nc_QAS_ASCS (ocf::heartbeat:azure-lb): Started anftstsapcl2
- vip_QAS_ASCS (ocf::heartbeat:IPaddr2): Started anftstsapcl2
- rsc_sap_QAS_ASCS00 (ocf::heartbeat:SAPInstance): Started anftstsapcl2
- stonith-sbd (stonith:external/sbd): Started anftstsapcl1
- Resource Group: g-QAS_ERS
- fs_QAS_ERS (ocf::heartbeat:Filesystem): Started anftstsapcl1
- nc_QAS_ERS (ocf::heartbeat:azure-lb): Started anftstsapcl1
- vip_QAS_ERS (ocf::heartbeat:IPaddr2): Started anftstsapcl1
- rsc_sap_QAS_ERS01 (ocf::heartbeat:SAPInstance): Started anftstsapcl1
- ```
-
- Run the following commands as root to identify the process of the message server and kill it.
-
- ```bash
- anftstsapcl2:~ # pgrep ms.sapQAS | xargs kill -9
- ```
-
- If you only kill the message server once, it will be restarted by `sapstart`. If you kill it often enough, Pacemaker will eventually move the ASCS instance to the other node. Run the following commands as root to clean up the resource state of the ASCS and ERS instance after the test.
-
- ```bash
- anftstsapcl2:~ # crm resource cleanup rsc_sap_QAS_ASCS00
- anftstsapcl2:~ # crm resource cleanup rsc_sap_QAS_ERS01
- ```
-
- Resource state after the test:
-
- ```text
- Resource Group: g-QAS_ASCS
- fs_QAS_ASCS (ocf::heartbeat:Filesystem): Started anftstsapcl1
- nc_QAS_ASCS (ocf::heartbeat:azure-lb): Started anftstsapcl1
- vip_QAS_ASCS (ocf::heartbeat:IPaddr2): Started anftstsapcl1
- rsc_sap_QAS_ASCS00 (ocf::heartbeat:SAPInstance): Started anftstsapcl1
- stonith-sbd (stonith:external/sbd): Started anftstsapcl1
- Resource Group: g-QAS_ERS
- fs_QAS_ERS (ocf::heartbeat:Filesystem): Started anftstsapcl2
- nc_QAS_ERS (ocf::heartbeat:azure-lb): Started anftstsapcl2
- vip_QAS_ERS (ocf::heartbeat:IPaddr2): Started anftstsapcl2
- rsc_sap_QAS_ERS01 (ocf::heartbeat:SAPInstance): Started anftstsapcl2
- ```
-
-7. Kill enqueue server process
-
- Resource state before starting the test:
-
- ```text
- Resource Group: g-QAS_ASCS
- fs_QAS_ASCS (ocf::heartbeat:Filesystem): Started anftstsapcl1
- nc_QAS_ASCS (ocf::heartbeat:azure-lb): Started anftstsapcl1
- vip_QAS_ASCS (ocf::heartbeat:IPaddr2): Started anftstsapcl1
- rsc_sap_QAS_ASCS00 (ocf::heartbeat:SAPInstance): Started anftstsapcl1
- stonith-sbd (stonith:external/sbd): Started anftstsapcl1
- Resource Group: g-QAS_ERS
- fs_QAS_ERS (ocf::heartbeat:Filesystem): Started anftstsapcl2
- nc_QAS_ERS (ocf::heartbeat:azure-lb): Started anftstsapcl2
- vip_QAS_ERS (ocf::heartbeat:IPaddr2): Started anftstsapcl2
- rsc_sap_QAS_ERS01 (ocf::heartbeat:SAPInstance): Started anftstsapcl2
- ```
-
- Run the following commands as root on the node where the ASCS instance is running to kill the enqueue server.
-
- ```bash
- #If using ENSA1
- anftstsapcl1:~ # pgrep en.sapQAS | xargs kill -9
- #If using ENSA2
- anftstsapcl1:~ # pgrep -f enq.sapQAS | xargs kill -9
- ```
-
-   In the case of ENSA1, the ASCS instance should immediately fail over to the other node. The ERS instance should also fail over after the ASCS instance is started. Run the following commands as root to clean up the resource state of the ASCS and ERS instance after the test.
-
- ```bash
- anftstsapcl1:~ # crm resource cleanup rsc_sap_QAS_ASCS00
- anftstsapcl1:~ # crm resource cleanup rsc_sap_QAS_ERS01
- ```
-
- Resource state after the test:
-
- ```text
- Resource Group: g-QAS_ASCS
- fs_QAS_ASCS (ocf::heartbeat:Filesystem): Started anftstsapcl2
- nc_QAS_ASCS (ocf::heartbeat:azure-lb): Started anftstsapcl2
- vip_QAS_ASCS (ocf::heartbeat:IPaddr2): Started anftstsapcl2
- rsc_sap_QAS_ASCS00 (ocf::heartbeat:SAPInstance): Started anftstsapcl2
- stonith-sbd (stonith:external/sbd): Started anftstsapcl1
- Resource Group: g-QAS_ERS
- fs_QAS_ERS (ocf::heartbeat:Filesystem): Started anftstsapcl1
- nc_QAS_ERS (ocf::heartbeat:azure-lb): Started anftstsapcl1
- vip_QAS_ERS (ocf::heartbeat:IPaddr2): Started anftstsapcl1
- rsc_sap_QAS_ERS01 (ocf::heartbeat:SAPInstance): Started anftstsapcl1
- ```
-
-8. Kill enqueue replication server process
-
- Resource state before starting the test:
-
- ```text
- Resource Group: g-QAS_ASCS
- fs_QAS_ASCS (ocf::heartbeat:Filesystem): Started anftstsapcl2
- nc_QAS_ASCS (ocf::heartbeat:azure-lb): Started anftstsapcl2
- vip_QAS_ASCS (ocf::heartbeat:IPaddr2): Started anftstsapcl2
- rsc_sap_QAS_ASCS00 (ocf::heartbeat:SAPInstance): Started anftstsapcl2
- stonith-sbd (stonith:external/sbd): Started anftstsapcl1
- Resource Group: g-QAS_ERS
- fs_QAS_ERS (ocf::heartbeat:Filesystem): Started anftstsapcl1
- nc_QAS_ERS (ocf::heartbeat:azure-lb): Started anftstsapcl1
- vip_QAS_ERS (ocf::heartbeat:IPaddr2): Started anftstsapcl1
- rsc_sap_QAS_ERS01 (ocf::heartbeat:SAPInstance): Started anftstsapcl1
- ```
-
- Run the following command as root on the node where the ERS instance is running to kill the enqueue replication server process.
-
- ```bash
- anftstsapcl1:~ # pgrep er.sapQAS | xargs kill -9
- ```
-
- If you only run the command once, `sapstart` will restart the process. If you run it often enough, `sapstart` will not restart the process, and the resource will be in a stopped state. Run the following commands as root to clean up the resource state of the ERS instance after the test.
-
- ```bash
- anftstsapcl1:~ # crm resource cleanup rsc_sap_QAS_ERS01
- ```
-
- Resource state after the test:
-
- ```text
- Resource Group: g-QAS_ASCS
- fs_QAS_ASCS (ocf::heartbeat:Filesystem): Started anftstsapcl2
- nc_QAS_ASCS (ocf::heartbeat:azure-lb): Started anftstsapcl2
- vip_QAS_ASCS (ocf::heartbeat:IPaddr2): Started anftstsapcl2
- rsc_sap_QAS_ASCS00 (ocf::heartbeat:SAPInstance): Started anftstsapcl2
- stonith-sbd (stonith:external/sbd): Started anftstsapcl1
- Resource Group: g-QAS_ERS
- fs_QAS_ERS (ocf::heartbeat:Filesystem): Started anftstsapcl1
- nc_QAS_ERS (ocf::heartbeat:azure-lb): Started anftstsapcl1
- vip_QAS_ERS (ocf::heartbeat:IPaddr2): Started anftstsapcl1
- rsc_sap_QAS_ERS01 (ocf::heartbeat:SAPInstance): Started anftstsapcl1
- ```
-
-9. Kill enqueue sapstartsrv process
-
- Resource state before starting the test:
-
- ```text
- Resource Group: g-QAS_ASCS
- fs_QAS_ASCS (ocf::heartbeat:Filesystem): Started anftstsapcl2
- nc_QAS_ASCS (ocf::heartbeat:azure-lb): Started anftstsapcl2
- vip_QAS_ASCS (ocf::heartbeat:IPaddr2): Started anftstsapcl2
- rsc_sap_QAS_ASCS00 (ocf::heartbeat:SAPInstance): Started anftstsapcl2
- stonith-sbd (stonith:external/sbd): Started anftstsapcl1
- Resource Group: g-QAS_ERS
- fs_QAS_ERS (ocf::heartbeat:Filesystem): Started anftstsapcl1
- nc_QAS_ERS (ocf::heartbeat:azure-lb): Started anftstsapcl1
- vip_QAS_ERS (ocf::heartbeat:IPaddr2): Started anftstsapcl1
- rsc_sap_QAS_ERS01 (ocf::heartbeat:SAPInstance): Started anftstsapcl1
- ```
-
- Run the following commands as root on the node where the ASCS is running.
-
- ```bash
- anftstsapcl2:~ # pgrep -fl ASCS00.*sapstartsrv
- #67625 sapstartsrv
-
- anftstsapcl2:~ # kill -9 67625
- ```
-
- The sapstartsrv process should always be restarted by the Pacemaker resource agent. Resource state after the test:
-
- ```text
- Resource Group: g-QAS_ASCS
- fs_QAS_ASCS (ocf::heartbeat:Filesystem): Started anftstsapcl2
- nc_QAS_ASCS (ocf::heartbeat:azure-lb): Started anftstsapcl2
- vip_QAS_ASCS (ocf::heartbeat:IPaddr2): Started anftstsapcl2
- rsc_sap_QAS_ASCS00 (ocf::heartbeat:SAPInstance): Started anftstsapcl2
- stonith-sbd (stonith:external/sbd): Started anftstsapcl1
- Resource Group: g-QAS_ERS
- fs_QAS_ERS (ocf::heartbeat:Filesystem): Started anftstsapcl1
- nc_QAS_ERS (ocf::heartbeat:azure-lb): Started anftstsapcl1
- vip_QAS_ERS (ocf::heartbeat:IPaddr2): Started anftstsapcl1
- rsc_sap_QAS_ERS01 (ocf::heartbeat:SAPInstance): Started anftstsapcl1
- ```
+Thoroughly test your Pacemaker cluster. [Execute the typical failover tests](./high-availability-guide-suse.md#test-the-cluster-setup).
## Next steps
sap High Availability Guide Suse Nfs Azure Files https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/workloads/high-availability-guide-suse-nfs-azure-files.md
Previously updated : 12/06/2022 Last updated : 07/17/2023
The following items are prefixed with either **[A]** - applicable to all nodes,
If using enqueue server 2 architecture ([ENSA2](https://help.sap.com/viewer/cff8531bc1d9416d91bb6781e628d4e0/1709%20001/en-US/6d655c383abf4c129b0e5c8683e7ecd8.html)), define the resources as follows:
+ > [!NOTE]
+ > If you have a two-node cluster running ENSA2, you have the option to configure the priority-fencing-delay cluster property. This property introduces an additional delay in fencing the node that has the higher total resource priority when a split-brain scenario occurs. For more information, see the [SUSE Linux Enterprise Server high availability extension administration guide](https://documentation.suse.com/sle-ha/15-SP3/single-html/SLE-HA-administration/#pro-ha-storage-protect-fencing).
+ >
+ > The priority-fencing-delay property is only applicable to ENSA2 running on a two-node cluster.
+ ```bash
+ sudo crm configure property maintenance-mode="true"
+ sudo crm configure property priority-fencing-delay=30
sudo crm configure primitive rsc_sap_NW1_ASCS00 SAPInstance \ operations \$id=rsc_sap_NW1_ASCS00-operations \ op monitor interval=11 timeout=60 on-fail=restart \ params InstanceName=NW1_ASCS00_sapascs START_PROFILE="/sapmnt/NW1/profile/NW1_ASCS00_sapascs" \ AUTOMATIC_RECOVER=false \
- meta resource-stickiness=5000
+ meta resource-stickiness=5000 priority=100
sudo crm configure primitive rsc_sap_NW1_ERS01 SAPInstance \ operations \$id=rsc_sap_NW1_ERS01-operations \
sap High Availability Guide Suse Nfs Simple Mount https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/workloads/high-availability-guide-suse-nfs-simple-mount.md
Previously updated : 06/20/2023 Last updated : 07/17/2023
The instructions in this section are applicable only if you're using Azure NetAp
sudo crm configure property maintenance-mode="false" ```
- SAP introduced support for [ENSA2](https://help.sap.com/viewer/cff8531bc1d9416d91bb6781e628d4e0/1709%20001/en-US/6d655c383abf4c129b0e5c8683e7ecd8.html), including replication, in SAP NetWeaver 7.52. Starting with ABAP Platform 1809, ENSA2 is installed by default. For ENSA2 support, see SAP Note [2630416](https://launchpad.support.sap.com/#/notes/2630416).
+ SAP introduced support for [ENSA2](https://help.sap.com/viewer/cff8531bc1d9416d91bb6781e628d4e0/1709%20001/en-US/6d655c383abf4c129b0e5c8683e7ecd8.html), including replication, in SAP NetWeaver 7.52. Starting with ABAP Platform 1809, ENSA2 is installed by default. For ENSA2 support, see SAP Note [2630416](https://launchpad.support.sap.com/#/notes/2630416).
+
+ > [!NOTE]
+ > If you have a two-node cluster running ENSA2, you have the option to configure the priority-fencing-delay cluster property. This property introduces an additional delay in fencing the node that has the higher total resource priority when a split-brain scenario occurs. For more information, see the [SUSE Linux Enterprise Server high availability extension administration guide](https://documentation.suse.com/sle-ha/15-SP3/single-html/SLE-HA-administration/#pro-ha-storage-protect-fencing).
+ >
+ > The priority-fencing-delay property is only applicable to ENSA2 running on a two-node cluster. For more information, see [Enqueue Replication 2 High Availability cluster with simple mount](https://documentation.suse.com/sbp/sap/html/SAP-S4HA10-setupguide-simplemount-sle15/index.html#multicluster).
If you're using an ENSA2 architecture, define the resources as follows.

```bash
sudo crm configure property maintenance-mode="true"
+ sudo crm configure property priority-fencing-delay=30
sudo crm configure primitive rsc_sapstartsrv_NW1_ASCS00 ocf:suse:SAPStartSrv \ params InstanceName=NW1_ASCS00_sapascs
The instructions in this section are applicable only if you're using Azure NetAp
op monitor interval=11 timeout=60 on-fail=restart \ params InstanceName=NW1_ASCS00_sapascs START_PROFILE="/sapmnt/NW1/profile/NW1_ASCS00_sapascs" \ AUTOMATIC_RECOVER=false MINIMAL_PROBE=true \
- meta resource-stickiness=5000
+ meta resource-stickiness=5000 priority=100
# If you're using NFS on Azure Files or NFSv3 on Azure NetApp Files: sudo crm configure primitive rsc_sap_NW1_ERS01 SAPInstance \
The instructions in this section are applicable only if you're using Azure NetAp
op monitor interval=11 timeout=105 on-fail=restart \ params InstanceName=NW1_ASCS00_sapascs START_PROFILE="/sapmnt/NW1/profile/NW1_ASCS00_sapascs" \ AUTOMATIC_RECOVER=false MINIMAL_PROBE=true \
- meta resource-stickiness=5000
+ meta resource-stickiness=5000 priority=100
# If you're using NFSv4.1 on Azure NetApp Files: sudo crm configure primitive rsc_sap_NW1_ERS01 SAPInstance \
sap High Availability Guide Suse https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/workloads/high-availability-guide-suse.md
Previously updated : 06/20/2023 Last updated : 07/17/2023
The following items are prefixed with either **[A]** - applicable to all nodes,
If using enqueue server 2 architecture ([ENSA2](https://help.sap.com/viewer/cff8531bc1d9416d91bb6781e628d4e0/1709%20001/en-US/6d655c383abf4c129b0e5c8683e7ecd8.html)), define the resources as follows:
+ > [!NOTE]
+ > If you have a two-node cluster running ENSA2, you have the option to configure the priority-fencing-delay cluster property. This property introduces an additional delay in fencing the node that has the higher total resource priority when a split-brain scenario occurs. For more information, see the [SUSE Linux Enterprise Server high availability extension administration guide](https://documentation.suse.com/sle-ha/15-SP3/single-html/SLE-HA-administration/#pro-ha-storage-protect-fencing).
+ >
+ > The priority-fencing-delay property is only applicable to ENSA2 running on a two-node cluster.
+ ```bash
+ sudo crm configure property maintenance-mode="true"
+ sudo crm configure property priority-fencing-delay=30
+
sudo crm configure primitive rsc_sap_NW1_ASCS00 SAPInstance \ operations \$id=rsc_sap_NW1_ASCS00-operations \ op monitor interval=11 timeout=60 on-fail=restart \ params InstanceName=NW1_ASCS00_nw1-ascs START_PROFILE="/sapmnt/NW1/profile/NW1_ASCS00_nw1-ascs" \ AUTOMATIC_RECOVER=false \
- meta resource-stickiness=5000
+ meta resource-stickiness=5000 priority=100
sudo crm configure primitive rsc_sap_NW1_ERS02 SAPInstance \ operations \$id=rsc_sap_NW1_ERS02-operations \
The following tests are a copy of the test cases in the best practices guides of
rsc_sap_NW1_ERS02 (ocf::heartbeat:SAPInstance): Started nw1-cl-0 ```
+1. Blocking network communication
+
+ Resource state before starting the test:
+
+ ```bash
+ stonith-sbd (stonith:external/sbd): Started nw1-cl-1
+ Resource Group: g-NW1_ASCS
+ fs_NW1_ASCS (ocf::heartbeat:Filesystem): Started nw1-cl-1
+ nc_NW1_ASCS (ocf::heartbeat:azure-lb): Started nw1-cl-1
+ vip_NW1_ASCS (ocf::heartbeat:IPaddr2): Started nw1-cl-1
+ rsc_sap_NW1_ASCS00 (ocf::heartbeat:SAPInstance): Started nw1-cl-1
+ Resource Group: g-NW1_ERS
+ fs_NW1_ERS (ocf::heartbeat:Filesystem): Started nw1-cl-0
+ nc_NW1_ERS (ocf::heartbeat:azure-lb): Started nw1-cl-0
+ vip_NW1_ERS (ocf::heartbeat:IPaddr2): Started nw1-cl-0
+ rsc_sap_NW1_ERS02 (ocf::heartbeat:SAPInstance): Started nw1-cl-0
+ ```
+
+   Execute a firewall rule to drop communication on one of the nodes:
+
+ ```bash
+   # Execute an iptables rule on nw1-cl-0 (10.0.0.5) to block incoming and outgoing traffic to nw1-cl-1 (10.0.0.6)
+ iptables -A INPUT -s 10.0.0.6 -j DROP; iptables -A OUTPUT -d 10.0.0.6 -j DROP
+ ```
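+
+   If you want to clear the block without waiting for the node to be fenced and rebooted (for example, while you're only validating the rule), a minimal cleanup sketch is to delete the same two rules on nw1-cl-0; the addresses are the same example values used above.
+
+   ```bash
+   # Remove the drop rules again (only needed if the node wasn't fenced and rebooted)
+   iptables -D INPUT -s 10.0.0.6 -j DROP; iptables -D OUTPUT -d 10.0.0.6 -j DROP
+   ```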
+
+   When cluster nodes can't communicate with each other, there's a risk of a split-brain scenario. In such situations, the cluster nodes try to fence each other simultaneously, resulting in a fence race.
+
+   When configuring a fencing device, it's recommended to set the [`pcmk_delay_max`](https://www.suse.com/support/kb/doc/?id=000019110) property. In the event of a split-brain scenario, the cluster then introduces a random delay, up to the `pcmk_delay_max` value, to the fencing action on each node. The node with the shortest delay is selected for fencing.
+
+   Additionally, in an ENSA2 configuration, to prioritize the node hosting the ASCS resource over the other node during a split-brain scenario, it's recommended to configure the [`priority-fencing-delay`](https://documentation.suse.com/sle-ha/15-SP3/single-html/SLE-HA-administration/#pro-ha-storage-protect-fencing) property in the cluster. Enabling the priority-fencing-delay property lets the cluster introduce an extra delay in the fencing action that targets the node hosting the ASCS resource, allowing that node to win the fence race.
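+
+   As an illustration only (not part of the original test steps), the following sketch shows one way to apply both settings with `crm`, assuming the fencing resource is named `stonith-sbd` as in the examples above and the ASCS resource carries `priority=100`; adjust names and values to your cluster.
+
+   ```bash
+   # Random fencing delay of up to 15 seconds on the SBD fencing resource (mitigates the fence race)
+   sudo crm resource param stonith-sbd set pcmk_delay_max 15
+
+   # Extra delay before fencing the node that hosts the higher-priority (ASCS) resource
+   sudo crm configure property priority-fencing-delay=30
+   ```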
+ 1. Test manual restart of ASCS instance Resource state before starting the test:
The following tests are a copy of the test cases in the best practices guides of
rsc_sap_NW1_ERS02 (ocf::heartbeat:SAPInstance): Started nw1-cl-1 ```
-1. Kill enqueue server process
+2. Kill enqueue server process
Resource state before starting the test:
The following tests are a copy of the test cases in the best practices guides of
rsc_sap_NW1_ERS02 (ocf::heartbeat:SAPInstance): Started nw1-cl-0 ```
-1. Kill enqueue replication server process
+3. Kill enqueue replication server process
Resource state before starting the test:
The following tests are a copy of the test cases in the best practices guides of
rsc_sap_NW1_ERS02 (ocf::heartbeat:SAPInstance): Started nw1-cl-0 ```
-1. Kill enqueue sapstartsrv process
+4. Kill enqueue sapstartsrv process
Resource state before starting the test:
search Cognitive Search Attach Cognitive Services https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/cognitive-search-attach-cognitive-services.md
Title: Attach Cognitive Services to a skillset
+ Title: Attach Azure AI services to a skillset
-description: Learn how to attach a multi-service Cognitive Services resource to an AI enrichment pipeline in Azure Cognitive Search.
-
+description: Learn how to attach an Azure AI multi-service resource to an AI enrichment pipeline in Azure Cognitive Search.
Last updated 05/31/2023
-# Attach a Cognitive Services resource to a skillset in Azure Cognitive Search
+# Attach an Azure AI multi-service resource to a skillset in Azure Cognitive Search
-When configuring an optional [AI enrichment pipeline](cognitive-search-concept-intro.md) in Azure Cognitive Search, you can enrich a limited number of documents free of charge. For larger and more frequent workloads, you should attach a billable [**multi-service Cognitive Services resource**](../cognitive-services/cognitive-services-apis-create-account.md).
+When configuring an optional [AI enrichment pipeline](cognitive-search-concept-intro.md) in Azure Cognitive Search, you can enrich a limited number of documents free of charge. For larger and more frequent workloads, you should attach a billable [**Azure AI multi-service resource**](../ai-services/multi-service-resource.md?pivots=azportal).
-A multi-service resource references "Cognitive Services" as the offering, rather than individual services, with access granted through a single API key. This key is specified in a [**skillset**](/rest/api/searchservice/create-skillset) and allows Microsoft to charge you for using these APIs:
+A multi-service resource references a subset of "Azure AI services" as the offering, rather than individual services, with access granted through a single API key. This key is specified in a [**skillset**](/rest/api/searchservice/create-skillset) and allows Microsoft to charge you for using these services:
-+ [Computer Vision](../cognitive-services/computer-vision/overview.md) for image analysis and optical character recognition (OCR)
-+ [Language service](../cognitive-services/language-service/overview.md) for language detection, entity recognition, sentiment analysis, and key phrase extraction
-+ [Translator](../cognitive-services/translator/translator-overview.md) for machine text translation
++ [Azure AI Vision](../ai-services/computer-vision/overview.md) for image analysis and optical character recognition (OCR)
++ [Azure AI Language](../ai-services/language-service/overview.md) for language detection, entity recognition, sentiment analysis, and key phrase extraction
++ [Azure AI Speech](../ai-services/speech-service/overview.md) for speech to text and text to speech
++ [Azure AI Translator](../ai-services/translator/translator-overview.md) for machine text translation

> [!TIP]
-> Azure provides infrastructure for you to monitor billing and budgets. For more information about monitoring Cognitive Services, see [Plan and manage costs for Azure Cognitive Services](../cognitive-services/plan-manage-costs.md).
+> Azure provides infrastructure for you to monitor billing and budgets. For more information about monitoring Azure AI services, see [Plan and manage costs for Azure AI services](../ai-services/plan-manage-costs.md).
## Set the resource key
If you leave the property unspecified, your search service attempts to use the f
1. [Sign in to Azure portal](https://portal.azure.com).
-1. Create a [multi-service Cognitive Services resource](../cognitive-services/cognitive-services-apis-create-account.md) in the [same region](#same-region-requirement) as your search service.
+1. Create an [Azure AI multi-service resource](../ai-services/multi-service-resource.md?pivots=azportal) in the [same region](#same-region-requirement) as your search service.
1. Add the key to a skillset definition: + If using the [Import data wizard](search-import-data-portal.md), enter the key in the second step, "Add AI enrichments".
- + If adding the key to a new or existing skillset, provide the key in the **Cognitive Services** tab.
+ + If adding the key to a new or existing skillset, provide the key in the **Azure AI services** tab.
:::image type="content" source="media/cognitive-search-attach-cognitive-services/attach-existing2.png" alt-text="Screenshot of the key page." border="true"::: ### [**REST**](#tab/cogkey-rest)
-1. Create a [multi-service Cognitive Services resource](../cognitive-services/cognitive-services-apis-create-account.md) in the [same region](#same-region-requirement) as your search service.
+1. Create an [Azure AI multi-service resource](../ai-services/multi-service-resource.md?pivots=azportal) in the [same region](#same-region-requirement) as your search service.
1. Create or update a skillset, specifying `cognitiveServices` section in the body of the [skillset request](/rest/api/searchservice/create-skillset):
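   For illustration, here's a minimal sketch of such a request using `curl`. The service name, skillset name, API version, and keys are placeholders, and the `skills` array is left empty for brevity (a real skillset needs at least one skill):

   ```bash
   # Sketch only: replace <your-service>, <your-skillset>, and both keys; use an api-version your service supports.
   curl -X PUT "https://<your-service>.search.windows.net/skillsets/<your-skillset>?api-version=2020-06-30" \
     -H "Content-Type: application/json" \
     -H "api-key: <your-search-admin-api-key>" \
     -d '{
           "name": "<your-skillset>",
           "skills": [],
           "cognitiveServices": {
             "@odata.type": "#Microsoft.Azure.Search.CognitiveServicesByKey",
             "description": "An Azure AI services resource in the same region as Azure Cognitive Search",
             "key": "<your-azure-ai-multi-service-key>"
           }
         }'
   ```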
SearchIndexerSkillset skillset = CreateOrUpdateDemoSkillSet(indexerClient, skill
## Remove the key
-Enrichments are a billable feature. If you no longer need to call Cognitive Services, follow these instructions to remove the multi-region key and prevent use of the external resource. Without the key, the skillset reverts to the default allocation of 20 free transactions per indexer, per day. Execution of billable skills stops at 20 transactions and a "Time Out" message appears in indexer execution history when the allocation is used up.
+Enrichments are a billable feature. If you no longer need to call Azure AI services, follow these instructions to remove the multi-region key and prevent use of the external resource. Without the key, the skillset reverts to the default allocation of 20 free transactions per indexer, per day. Execution of billable skills stops at 20 transactions and a "Time Out" message appears in indexer execution history when the allocation is used up.
### [**Azure portal**](#tab/portal-remove)
Enrichments are a billable feature. If you no longer need to call Cognitive Serv
## How the key is used
-Key-based billing applies when API calls to Cognitive Services resources exceed 20 API calls per indexer, per day.
+Key-based billing applies when API calls to Azure AI services resources exceed 20 API calls per indexer, per day.
-The key is used for billing, but not for enrichment operations' connections. For connections, a search service [connects over the internal network](search-security-overview.md#internal-traffic) to a Cognitive Services resource that's colocated in the [same physical region](https://azure.microsoft.com/global-infrastructure/services/?products=search). Most regions that offer Cognitive Search also offer Cognitive Services. If you attempt AI enrichment in a region that doesn't have both services, you'll see this message: "Provided key isn't a valid CognitiveServices type key for the region of your search service."
+The key is used for billing, but not for enrichment operations' connections. For connections, a search service [connects over the internal network](search-security-overview.md#internal-traffic) to an Azure AI services resource that's colocated in the [same physical region](https://azure.microsoft.com/global-infrastructure/services/?products=search). Most regions that offer Cognitive Search also offer other Azure AI services such as Language. If you attempt AI enrichment in a region that doesn't have both services, you'll see this message: "Provided key isn't a valid CognitiveServices type key for the region of your search service."
-Currently, billing for [built-in skills](cognitive-search-predefined-skills.md) requires a public connection from Cognitive Search to Cognitive Services. Disabling public network access breaks billing. If disabling public networks is a requirement, you can configure a [Custom Web API skill](cognitive-search-custom-skill-interface.md) implemented with an [Azure Function](cognitive-search-create-custom-skill-example.md) that supports [private endpoints](../azure-functions/functions-create-vnet.md) and add the [Cognitive Service resource to the same VNET](/azure/cognitive-services/cognitive-services-virtual-networks). In this way, you can call Cognitive Services resource directly from the custom skill using private endpoints.
+Currently, billing for [built-in skills](cognitive-search-predefined-skills.md) requires a public connection from Cognitive Search to another Azure AI service. Disabling public network access breaks billing. If disabling public networks is a requirement, you can configure a [Custom Web API skill](cognitive-search-custom-skill-interface.md) implemented with an [Azure Function](cognitive-search-create-custom-skill-example.md) that supports [private endpoints](../azure-functions/functions-create-vnet.md) and add the [Azure AI services resource to the same VNET](../ai-services/cognitive-services-virtual-networks.md). In this way, you can call the Azure AI services resource directly from the custom skill by using private endpoints.
> [!NOTE]
-> Some built-in skills are based on non-regional Cognitive Services (for example, the [Text Translation Skill](cognitive-search-skill-text-translation.md)). Using a non-regional skill means that your request might be serviced in a region other than the Azure Cognitive Search region. For more information on non-regional services, see the [Cognitive Services product by region](https://aka.ms/allinoneregioninfo) page.
+> Some built-in skills are based on non-regional Azure AI services (for example, the [Text Translation Skill](cognitive-search-skill-text-translation.md)). Using a non-regional skill means that your request might be serviced in a region other than the Azure Cognitive Search region. For more information on non-regional services, see the [Azure AI services product by region](https://aka.ms/allinoneregioninfo) page.
### Key requirements special cases
-[Custom Entity Lookup](cognitive-search-skill-custom-entity-lookup.md) is metered by Azure Cognitive Search, not Cognitive Services, but it requires a Cognitive Services resource key to unlock transactions beyond 20 per indexer, per day. For this skill only, the resource key unblocks the number of transactions, but is unrelated to billing.
+[Custom Entity Lookup](cognitive-search-skill-custom-entity-lookup.md) is metered by Azure Cognitive Search, not Azure AI services, but it requires an Azure AI multi-service resource key to unlock transactions beyond 20 per indexer, per day. For this skill only, the resource key unblocks the number of transactions, but is unrelated to billing.
## Free enrichments
-AI enrichment offers a small quantity of free processing of billable enrichments so that you can complete short exercises without having to attach a Cognitive Services resource. Free enrichments are 20 documents per day, per indexer. You can [reset the indexer](search-howto-run-reset-indexers.md) to reset the counter if you want to repeat an exercise.
+AI enrichment offers a small quantity of free processing of billable enrichments so that you can complete short exercises without having to attach an Azure AI multi-service resource. Free enrichments are 20 documents per day, per indexer. You can [reset the indexer](search-howto-run-reset-indexers.md) to reset the counter if you want to repeat an exercise.
Some enrichments are always free:
-+ Utility skills that don't call Cognitive Services (namely, [Conditional](cognitive-search-skill-conditional.md), [Document Extraction](cognitive-search-skill-document-extraction.md), [Shaper](cognitive-search-skill-shaper.md), [Text Merge](cognitive-search-skill-textmerger.md), and [Text Split skills](cognitive-search-skill-textsplit.md)) aren't billable.
++ Utility skills that don't call Azure AI services (namely, [Conditional](cognitive-search-skill-conditional.md), [Document Extraction](cognitive-search-skill-document-extraction.md), [Shaper](cognitive-search-skill-shaper.md), [Text Merge](cognitive-search-skill-textmerger.md), and [Text Split skills](cognitive-search-skill-textsplit.md)) aren't billable.

+ Text extraction from PDF documents and other application files is nonbillable. Text extraction occurs during the [document cracking](search-indexer-overview.md#document-cracking) phase and isn't technically an enrichment, but it occurs during AI enrichment and is thus noted here.

## Billable enrichments
- During AI enrichment, Cognitive Search calls the Cognitive Services APIs for [built-in skills](cognitive-search-predefined-skills.md) that are based on Computer Vision, Translator, and Azure Cognitive Services for Language.
+ During AI enrichment, Cognitive Search calls the Azure AI services APIs for [built-in skills](cognitive-search-predefined-skills.md) that are based on Azure AI Vision, Translator, and Azure AI Language.
-Billable built-in skills that make backend calls to Cognitive Services include [Entity Linking](cognitive-search-skill-entity-linking-v3.md), [Entity Recognition](cognitive-search-skill-entity-recognition-v3.md), [Image Analysis](cognitive-search-skill-image-analysis.md), [Key Phrase Extraction](cognitive-search-skill-keyphrases.md), [Language Detection](cognitive-search-skill-language-detection.md), [OCR](cognitive-search-skill-ocr.md), [Personally Identifiable Information (PII) Detection](cognitive-search-skill-pii-detection.md), [Sentiment](cognitive-search-skill-sentiment-v3.md), and [Text Translation](cognitive-search-skill-text-translation.md).
+Billable built-in skills that make backend calls to Azure AI services include [Entity Linking](cognitive-search-skill-entity-linking-v3.md), [Entity Recognition](cognitive-search-skill-entity-recognition-v3.md), [Image Analysis](cognitive-search-skill-image-analysis.md), [Key Phrase Extraction](cognitive-search-skill-keyphrases.md), [Language Detection](cognitive-search-skill-language-detection.md), [OCR](cognitive-search-skill-ocr.md), [Personally Identifiable Information (PII) Detection](cognitive-search-skill-pii-detection.md), [Sentiment](cognitive-search-skill-sentiment-v3.md), and [Text Translation](cognitive-search-skill-text-translation.md).
Image extraction is an Azure Cognitive Search operation that occurs when documents are cracked prior to enrichment. Image extraction is billable on all tiers, except for 20 free daily extractions on the free tier. Image extraction costs apply to image files inside blobs, embedded images in other files (PDF and other app files), and for images extracted using [Document Extraction](cognitive-search-skill-document-extraction.md). For image extraction pricing, see the [Azure Cognitive Search pricing page](https://azure.microsoft.com/pricing/details/search/).
To estimate the costs associated with Cognitive Search indexing, start with an i
Assume a pipeline that consists of document cracking of each PDF, image and text extraction, optical character recognition (OCR) of images, and entity recognition of organizations.
-The prices shown in this article are hypothetical. They're used to illustrate the estimation process. Your costs could be lower. For the actual price of transactions, see [Cognitive Services pricing](https://azure.microsoft.com/pricing/details/cognitive-services).
+The prices shown in this article are hypothetical. They're used to illustrate the estimation process. Your costs could be lower. For the actual price of transactions, see [Azure AI services pricing](https://azure.microsoft.com/pricing/details/cognitive-services).
1. For document cracking with text and image content, text extraction is currently free. For 6,000 images, assume $1 for every 1,000 images extracted. That's a cost of $6.00 for this step.
search Cognitive Search Common Errors Warnings https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/cognitive-search-common-errors-warnings.md
This error occurs when the indexer is attempting to [project data into a knowled
## `Error: The cognitive service for skill '<skill-name>' has been throttled`
-Skill execution failed because the call to Cognitive Services was throttled. Typically, this class of failure occurs when too many skills are executing in parallel. If you're using the Microsoft.Search.Documents client library to run the indexer, you can use the [SearchIndexingBufferedSender](https://github.com/Azure/azure-sdk-for-net/blob/main/sdk/search/Azure.Search.Documents/samples/Sample05_IndexingDocuments.md#searchindexingbufferedsender) to get automatic retry on failed steps. Otherwise, you can [reset and rerun the indexer](search-howto-run-reset-indexers.md).
+Skill execution failed because the call to Azure AI services was throttled. Typically, this class of failure occurs when too many skills are executing in parallel. If you're using the Microsoft.Search.Documents client library to run the indexer, you can use the [SearchIndexingBufferedSender](https://github.com/Azure/azure-sdk-for-net/blob/main/sdk/search/Azure.Search.Documents/samples/Sample05_IndexingDocuments.md#searchindexingbufferedsender) to get automatic retry on failed steps. Otherwise, you can [reset and rerun the indexer](search-howto-run-reset-indexers.md).
<a name="could-not-execute-skill-because-a-skill-input-was-invalid"></a>
If you know that your data set contains multiple languages and thus you need the
Here are some references for the currently supported languages for each of the skills that may produce this error message:
-* [EntityRecognitionSkill supported languages](../cognitive-services/language-service/named-entity-recognition/language-support.md)
-* [EntityLinkingSkill supported languages](../cognitive-services/language-service/entity-linking/language-support.md)
-* [KeyPhraseExtractionSkill supported languages](../cognitive-services/language-service/key-phrase-extraction/language-support.md)
-* [LanguageDetectionSkill supported languages](../cognitive-services/language-service/language-detection/language-support.md)
-* [PIIDetectionSkill supported languages](../cognitive-services/language-service/personally-identifiable-information/language-support.md)
-* [SentimentSkill supported languages](../cognitive-services/language-service/sentiment-opinion-mining/language-support.md)
-* [Translator supported languages](../cognitive-services/translator/language-support.md)
+* [EntityRecognitionSkill supported languages](../ai-services/language-service/named-entity-recognition/language-support.md)
+* [EntityLinkingSkill supported languages](../ai-services/language-service/entity-linking/language-support.md)
+* [KeyPhraseExtractionSkill supported languages](../ai-services/language-service/key-phrase-extraction/language-support.md)
+* [LanguageDetectionSkill supported languages](../ai-services/language-service/language-detection/language-support.md)
+* [PIIDetectionSkill supported languages](../ai-services/language-service/personally-identifiable-information/language-support.md)
+* [SentimentSkill supported languages](../ai-services/language-service/sentiment-opinion-mining/language-support.md)
+* [Translator supported languages](../ai-services/translator/language-support.md)
* [Text SplitSkill](cognitive-search-skill-textsplit.md) supported languages: `da, de, en, es, fi, fr, it, ko, pt` <a name="skill-input-was-truncated"></a>
Collections with [Lazy](../cosmos-db/index-policy.md#indexing-mode) indexing pol
## `Warning: The document contains very long words (longer than 64 characters). These words may result in truncated and/or unreliable model predictions.`
-This warning is passed from the Language service of Azure Cognitive Services. In some cases, it's safe to ignore this warning, such as when your document contains a long URL (which likely isn't a key phrase or driving sentiment, etc.). Be aware that when a word is longer than 64 characters, it will be truncated to 64 characters which can affect model predictions.
+This warning is passed from the Language service of Azure AI services. In some cases, it's safe to ignore this warning, such as when your document contains a long URL (which likely isn't a key phrase or driving sentiment, etc.). Be aware that when a word is longer than 64 characters, it will be truncated to 64 characters, which can affect model predictions.
search Cognitive Search Concept Annotations Syntax https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/cognitive-search-concept-annotations-syntax.md
Title: Reference inputs and outputs in skillsets description: Explains the annotation syntax and how to reference inputs and outputs of a skillset in an AI enrichment pipeline in Azure Cognitive Search.-
search Cognitive Search Concept Image Scenarios https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/cognitive-search-concept-image-scenarios.md
Title: Extract text from images description: Use Optical Character Recognition (OCR) and image analysis to extract text, layout, captions, and tags from image files in Azure Cognitive Search pipelines.-
This section supplements the [skill reference](cognitive-search-predefined-skill
1. Add templates for OCR and Image Analysis from the portal, or copy the definitions from the [skill reference](cognitive-search-predefined-skills.md) documentation. Insert them into the skills array of your skillset definition.
-1. If necessary, [include multi-service key](cognitive-search-attach-cognitive-services.md) in the Cognitive Services property of the skillset. Cognitive Search makes calls to a billable Azure Cognitive Services resource for OCR and image analysis for transactions that exceed the free limit (20 per indexer per day). Cognitive Services must be in the same region as your search service.
+1. If necessary, [include a multi-service key](cognitive-search-attach-cognitive-services.md) in the Azure AI services property of the skillset. Cognitive Search makes calls to a billable Azure AI services resource for OCR and image analysis for transactions that exceed the free limit (20 per indexer per day). The Azure AI services resource must be in the same region as your search service.
1. If original images are embedded in PDF or application files like PPTX or DOCX, you'll need to add a Text Merge skill if you want image output and text output together. Working with embedded images is discussed further on in this article.
-Once the basic framework of your skillset is created and Cognitive Services is configured, you can focus on each individual image skill, defining inputs and source context, and mapping outputs to fields in either an index or knowledge store.
+Once the basic framework of your skillset is created and Azure AI services is configured, you can focus on each individual image skill, defining inputs and source context, and mapping outputs to fields in either an index or knowledge store.
> [!NOTE] > See [REST Tutorial: Use REST and AI to generate searchable content from Azure blobs](cognitive-search-tutorial-blob.md) for an example skillset that combines image processing with downstream natural language processing. It shows how to feed skill imaging output into entity recognition and key phrase extraction.
Skill outputs include "text" (OCR), "layoutText" (OCR), "merged_content", "capti
}, { "@search.score": 1,
- "metadata_storage_name": "Cognitive Services and Content Intelligence.pptx",
+ "metadata_storage_name": "Azure AI services and Content Intelligence.pptx",
"text": [ "", "Microsoft", "", "", "",
- "Cognitive Search and Augmentation Combining Microsoft Cognitive Services and Azure Search"
+ "Cognitive Search and Augmentation Combining Microsoft Azure AI services and Azure Search"
] } ]
search Cognitive Search Concept Intro https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/cognitive-search-concept-intro.md
Open-source, third-party, or first-party code can be integrated into the pipelin
### Use-cases for built-in skills
-Built-in skills are based on the Cognitive Services APIs: [Computer Vision](../cognitive-services/computer-vision/index.yml) and [Language Service](../cognitive-services/language-service/overview.md). Unless your content input is small, expect to [attach a billable Cognitive Services resource](cognitive-search-attach-cognitive-services.md) to run larger workloads.
+Built-in skills are based on the Azure AI services APIs: [Azure AI Vision](../ai-services/computer-vision/index.yml) and [Language Service](../ai-services/language-service/overview.md). Unless your content input is small, expect to [attach a billable Azure AI services resource](cognitive-search-attach-cognitive-services.md) to run larger workloads.
A [skillset](cognitive-search-defining-skillset.md) that's assembled using built-in skills is well suited for the following application scenarios:
A [skillset](cognitive-search-defining-skillset.md) that's assembled using built
[**Custom skills**](cognitive-search-create-custom-skill-example.md) execute external code that you provide. Custom skills can support more complex scenarios, such as recognizing forms, or custom entity detection using a model that you provide and wrap in the [custom skill web interface](cognitive-search-custom-skill-interface.md). Several examples of custom skills include:
-+ [Forms Recognizer](../applied-ai-services/form-recognizer/overview.md)
++ [Document Intelligence](../ai-services/document-intelligence/overview.md)
+ [Bing Entity Search API](./cognitive-search-create-custom-skill-example.md)
+ [Custom entity recognition](https://github.com/Microsoft/SkillsExtractorCognitiveSearch)
In Azure Storage, a [knowledge store](knowledge-store-concept-intro.md) can assu
## Availability and pricing
-Enrichment is available in regions that have Azure Cognitive Services. You can check the availability of enrichment on the [Azure products available by region](https://azure.microsoft.com/global-infrastructure/services/?products=search) page. Enrichment is available in all regions except:
+Enrichment is available in regions that have Azure AI services. You can check the availability of enrichment on the [Azure products available by region](https://azure.microsoft.com/global-infrastructure/services/?products=search) page. Enrichment is available in all regions except:
+ Australia Southeast + China North 2 + Germany West Central
-Billing follows a pay-as-you-go pricing model. The costs of using built-in skills are passed on when a multi-region Cognitive Services key is specified in the skillset. There are also costs associated with image extraction, as metered by Cognitive Search. Text extraction and utility skills, however, aren't billable. For more information, see [How you're charged for Azure Cognitive Search](search-sku-manage-costs.md#how-youre-charged-for-azure-cognitive-search).
+Billing follows a pay-as-you-go pricing model. The costs of using built-in skills are passed on when a multi-region Azure AI services key is specified in the skillset. There are also costs associated with image extraction, as metered by Cognitive Search. Text extraction and utility skills, however, aren't billable. For more information, see [How you're charged for Azure Cognitive Search](search-sku-manage-costs.md#how-youre-charged-for-azure-cognitive-search).
## Checklist: A typical workflow
Start with a subset of data in a [supported data source](search-indexer-overview
1. Create a [data source](/rest/api/searchservice/create-data-source) that specifies a connection to your data.
-1. [Create a skillset](cognitive-search-defining-skillset.md). Unless your project is small, you'll want to [attach a Cognitive Services resource](cognitive-search-attach-cognitive-services.md). If you're [creating a knowledge store](knowledge-store-create-rest.md), define it within the skillset.
+1. [Create a skillset](cognitive-search-defining-skillset.md). Unless your project is small, you'll want to [attach an Azure AI multi-service resource](cognitive-search-attach-cognitive-services.md). If you're [creating a knowledge store](knowledge-store-create-rest.md), define it within the skillset.
1. [Create an index schema](search-how-to-create-search-index.md) that defines a search index.
To repeat any of the above steps, [reset the indexer](search-howto-reindex.md) b
+ [Skillset concepts](cognitive-search-working-with-skillsets.md) + [Knowledge store concepts](knowledge-store-concept-intro.md) + [Create a skillset](cognitive-search-defining-skillset.md)
-+ [Create a knowledge store](knowledge-store-create-rest.md)
++ [Create a knowledge store](knowledge-store-create-rest.md)
search Cognitive Search Concept Troubleshooting https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/cognitive-search-concept-troubleshooting.md
Title: Tips for AI enrichment design description: Tips and troubleshooting for setting up AI enrichment pipelines in Azure Cognitive Search.-
search Cognitive Search Custom Skill Interface https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/cognitive-search-custom-skill-interface.md
Title: Custom skill interface description: Integrate a custom skill with an AI enrichment pipeline in Azure Cognitive Search through a web interface that defines compatible inputs and outputs in a skillset.-
search Cognitive Search Custom Skill Scale https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/cognitive-search-custom-skill-scale.md
Congratulations! Your custom skill is now scaled right to maximize throughput on
+ [Power Skills: a repository of custom skills](https://github.com/Azure-Samples/azure-search-power-skills) + [Add a custom skill to an AI enrichment pipeline](cognitive-search-custom-skill-interface.md)
-+ [Add a Azure Machine Learning skill](./cognitive-search-aml-skill.md)
++ [Add an Azure Machine Learning skill](./cognitive-search-aml-skill.md)
+ [Use debug sessions to test changes](./cognitive-search-debug-session.md)
search Cognitive Search Custom Skill Web Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/cognitive-search-custom-skill-web-api.md
Title: Custom Web API skill in skillsets description: Extend capabilities of Azure Cognitive Search skillsets by calling out to Web APIs. Use the Custom Web API skill to integrate your custom code.-
search Cognitive Search Defining Skillset https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/cognitive-search-defining-skillset.md
Title: Create a skillset description: A skillset defines data extraction, natural language processing, and image analysis steps. A skillset is attached to indexer. It's used to enrich and extract information from source data for use in Azure Cognitive Search.-
Start with the basic structure. In the [Create Skillset REST API](/rest/api/sear
], "cognitiveServices":{ "@odata.type":"#Microsoft.Azure.Search.CognitiveServicesByKey",
- "description":"A Cognitive Services resource in the same region as Azure Cognitive Search",
+ "description":"An Azure AI services resource in the same region as Azure Cognitive Search",
"key":"<Your-Cognitive-Services-Multiservice-Key>" }, "knowledgeStore":{
Start with the basic structure. In the [Create Skillset REST API](/rest/api/sear
After the name and description, a skillset has four main properties:
-+ `skills` array, an unordered [collection of skills](cognitive-search-predefined-skills.md). Skills can be utilitarian (like splitting text), transformational (based on AI from Cognitive Services), or custom skills that you provide. An example of a skills array is provided in the next section.
++ `skills` array, an unordered [collection of skills](cognitive-search-predefined-skills.md). Skills can be utilitarian (like splitting text), transformational (based on AI from Azure AI services), or custom skills that you provide. An example of a skills array is provided in the next section.
-+ `cognitiveServices` is used for [billable skills](cognitive-search-predefined-skills.md) that call Cognitive Services APIs. Remove this section if you aren't using billable skills or Custom Entity Lookup. [Attach a resource](cognitive-search-attach-cognitive-services.md) if you are.
++ `cognitiveServices` is used for [billable skills](cognitive-search-predefined-skills.md) that call Azure AI services APIs. Remove this section if you aren't using billable skills or Custom Entity Lookup. [Attach a resource](cognitive-search-attach-cognitive-services.md) if you are.

+ `knowledgeStore` (optional) specifies an Azure Storage account and settings for projecting skillset output into tables, blobs, and files in Azure Storage. Remove this section if you don't need it, otherwise [specify a knowledge store](knowledge-store-create-rest.md).
search Cognitive Search Incremental Indexing Conceptual https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/cognitive-search-incremental-indexing-conceptual.md
Title: Incremental enrichment concepts (preview) description: Cache intermediate content and incremental changes from AI enrichment pipeline in Azure Storage to preserve investments in existing processed documents. This feature is currently in public preview.-
search Cognitive Search Output Field Mapping https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/cognitive-search-output-field-mapping.md
Title: Map enrichments in indexers description: Export the enriched content created by a skillset by mapping its output fields to fields in a search index.-
search Cognitive Search Predefined Skills https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/cognitive-search-predefined-skills.md
Title: Built-in skills description: Data extraction, natural language, and image processing skills add semantics and structure to raw content in an Azure Cognitive Search enrichment pipeline.-
This article describes the skills provided with Azure Cognitive Search that you
## Built-in skills
-Built-in skills are based on pre-trained models from Microsoft, which means you cannot train the model using your own training data. Skills that call the Cognitive Resources APIs have a dependency on those services and are billed at the Cognitive Services pay-as-you-go price when you [attach a resource](cognitive-search-attach-cognitive-services.md). Other skills are metered by Azure Cognitive Search, or are utility skills that are available at no charge.
+Built-in skills are based on pre-trained models from Microsoft, which means you cannot train the model using your own training data. Skills that call the Azure AI services APIs have a dependency on those services and are billed at the Azure AI services pay-as-you-go price when you [attach a resource](cognitive-search-attach-cognitive-services.md). Other skills are metered by Azure Cognitive Search, or are utility skills that are available at no charge.
The following table enumerates and describes the built-in skills.

| OData type | Description | Metered by |
|-|-|-|
|[Microsoft.Skills.Text.CustomEntityLookupSkill](cognitive-search-skill-custom-entity-lookup.md) | Looks for text from a custom, user-defined list of words and phrases.| Azure Cognitive Search ([pricing](https://azure.microsoft.com/pricing/details/search/)) |
-| [Microsoft.Skills.Text.KeyPhraseExtractionSkill](cognitive-search-skill-keyphrases.md) | This skill uses a pretrained model to detect important phrases based on term placement, linguistic rules, proximity to other terms, and how unusual the term is within the source data. | Cognitive Services ([pricing](https://azure.microsoft.com/pricing/details/cognitive-services/)) |
-| [Microsoft.Skills.Text.LanguageDetectionSkill](cognitive-search-skill-language-detection.md) | This skill uses a pretrained model to detect which language is used (one language ID per document). When multiple languages are used within the same text segments, the output is the LCID of the predominantly used language. | Cognitive Services ([pricing](https://azure.microsoft.com/pricing/details/cognitive-services/)) |
+| [Microsoft.Skills.Text.KeyPhraseExtractionSkill](cognitive-search-skill-keyphrases.md) | This skill uses a pretrained model to detect important phrases based on term placement, linguistic rules, proximity to other terms, and how unusual the term is within the source data. | Azure AI services ([pricing](https://azure.microsoft.com/pricing/details/cognitive-services/)) |
+| [Microsoft.Skills.Text.LanguageDetectionSkill](cognitive-search-skill-language-detection.md) | This skill uses a pretrained model to detect which language is used (one language ID per document). When multiple languages are used within the same text segments, the output is the LCID of the predominantly used language. | Azure AI services ([pricing](https://azure.microsoft.com/pricing/details/cognitive-services/)) |
| [Microsoft.Skills.Text.MergeSkill](cognitive-search-skill-textmerger.md) | Consolidates text from a collection of fields into a single field. | Not applicable |
-| [Microsoft.Skills.Text.V3.EntityLinkingSkill](cognitive-search-skill-entity-linking-v3.md) | This skill uses a pretrained model to generate links for recognized entities to articles in Wikipedia. | Cognitive Services ([pricing](https://azure.microsoft.com/pricing/details/cognitive-services/)) |
-| [Microsoft.Skills.Text.V3.EntityRecognitionSkill](cognitive-search-skill-entity-recognition-v3.md) | This skill uses a pretrained model to establish entities for a fixed set of categories: `"Person"`, `"Location"`, `"Organization"`, `"Quantity"`, `"DateTime"`, `"URL"`, `"Email"`, `"PersonType"`, `"Event"`, `"Product"`, `"Skill"`, `"Address"`, `"Phone Number"` and `"IP Address"` fields. | Cognitive Services ([pricing](https://azure.microsoft.com/pricing/details/cognitive-services/)) |
-| [Microsoft.Skills.Text.PIIDetectionSkill](cognitive-search-skill-pii-detection.md) | This skill uses a pretrained model to extract personal information from a given text. The skill also gives various options for masking the detected personal information entities in the text. | Cognitive Services ([pricing](https://azure.microsoft.com/pricing/details/cognitive-services/)) |
-| [Microsoft.Skills.Text.V3.SentimentSkill](cognitive-search-skill-sentiment-v3.md) | This skill uses a pretrained model to assign sentiment labels (such as "negative", "neutral" and "positive") based on the highest confidence score found by the service at a sentence and document-level on a record by record basis. | Cognitive Services ([pricing](https://azure.microsoft.com/pricing/details/cognitive-services/)) |
+| [Microsoft.Skills.Text.V3.EntityLinkingSkill](cognitive-search-skill-entity-linking-v3.md) | This skill uses a pretrained model to generate links for recognized entities to articles in Wikipedia. | Azure AI services ([pricing](https://azure.microsoft.com/pricing/details/cognitive-services/)) |
+| [Microsoft.Skills.Text.V3.EntityRecognitionSkill](cognitive-search-skill-entity-recognition-v3.md) | This skill uses a pretrained model to establish entities for a fixed set of categories: `"Person"`, `"Location"`, `"Organization"`, `"Quantity"`, `"DateTime"`, `"URL"`, `"Email"`, `"PersonType"`, `"Event"`, `"Product"`, `"Skill"`, `"Address"`, `"Phone Number"` and `"IP Address"` fields. | Azure AI services ([pricing](https://azure.microsoft.com/pricing/details/cognitive-services/)) |
+| [Microsoft.Skills.Text.PIIDetectionSkill](cognitive-search-skill-pii-detection.md) | This skill uses a pretrained model to extract personal information from a given text. The skill also gives various options for masking the detected personal information entities in the text. | Azure AI services ([pricing](https://azure.microsoft.com/pricing/details/cognitive-services/)) |
+| [Microsoft.Skills.Text.V3.SentimentSkill](cognitive-search-skill-sentiment-v3.md) | This skill uses a pretrained model to assign sentiment labels (such as "negative", "neutral" and "positive") based on the highest confidence score found by the service at a sentence and document-level on a record by record basis. | Azure AI services ([pricing](https://azure.microsoft.com/pricing/details/cognitive-services/)) |
| [Microsoft.Skills.Text.SplitSkill](cognitive-search-skill-textsplit.md) | Splits text into pages so that you can enrich or augment content incrementally. | Not applicable |
-| [Microsoft.Skills.Text.TranslationSkill](cognitive-search-skill-text-translation.md) | This skill uses a pretrained model to translate the input text into a variety of languages for normalization or localization use cases. | Cognitive Services ([pricing](https://azure.microsoft.com/pricing/details/cognitive-services/)) |
-| [Microsoft.Skills.Vision.ImageAnalysisSkill](cognitive-search-skill-image-analysis.md) | This skill uses an image detection algorithm to identify the content of an image and generate a text description. | Cognitive Services ([pricing](https://azure.microsoft.com/pricing/details/cognitive-services/)) |
-| [Microsoft.Skills.Vision.OcrSkill](cognitive-search-skill-ocr.md) | Optical character recognition. | Cognitive Services ([pricing](https://azure.microsoft.com/pricing/details/cognitive-services/)) |
+| [Microsoft.Skills.Text.TranslationSkill](cognitive-search-skill-text-translation.md) | This skill uses a pretrained model to translate the input text into a variety of languages for normalization or localization use cases. | Azure AI services ([pricing](https://azure.microsoft.com/pricing/details/cognitive-services/)) |
+| [Microsoft.Skills.Vision.ImageAnalysisSkill](cognitive-search-skill-image-analysis.md) | This skill uses an image detection algorithm to identify the content of an image and generate a text description. | Azure AI services ([pricing](https://azure.microsoft.com/pricing/details/cognitive-services/)) |
+| [Microsoft.Skills.Vision.OcrSkill](cognitive-search-skill-ocr.md) | Optical character recognition. | Azure AI services ([pricing](https://azure.microsoft.com/pricing/details/cognitive-services/)) |
| [Microsoft.Skills.Util.ConditionalSkill](cognitive-search-skill-conditional.md) | Allows filtering, assigning a default value, and merging data based on a condition. | Not applicable | | [Microsoft.Skills.Util.DocumentExtractionSkill](cognitive-search-skill-document-extraction.md) | Extracts content from a file within the enrichment pipeline. | Azure Cognitive Search ([pricing](https://azure.microsoft.com/pricing/details/search/)) | [Microsoft.Skills.Util.ShaperSkill](cognitive-search-skill-shaper.md) | Maps output to a complex type (a multi-part data type, which might be used for a full name, a multi-line address, or a combination of last name and a personal identifier.) | Not applicable |
search Cognitive Search Quickstart Blob https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/cognitive-search-quickstart-blob.md
Before you begin, have the following prerequisites in place:
+ Azure Storage account with Blob Storage. > [!NOTE]
-> This quickstart uses [Cognitive Services](https://azure.microsoft.com/services/cognitive-services/) for the AI. Because the workload is so small, Cognitive Services is tapped behind the scenes for free processing for up to 20 transactions. You can complete this exercise without having to create a Cognitive Services resource.
+> This quickstart uses [Azure AI services](https://azure.microsoft.com/services/cognitive-services/) for the AI. Because the workload is so small, Azure AI services is tapped behind the scenes for free processing for up to 20 transactions. You can complete this exercise without having to create an Azure AI multi-service resource.
## Set up your data
If you get "Error detecting index schema from data source", the indexer that's p
Next, configure AI enrichment to invoke OCR, image analysis, and natural language processing.
-1. For this quickstart, we're using the **Free** Cognitive Services resource. The sample data consists of 14 files, so the free allotment of 20 transaction on Cognitive Services is sufficient for this quickstart.
+1. For this quickstart, we're using the **Free** Azure AI services resource. The sample data consists of 14 files, so the free allotment of 20 transactions on Azure AI services is sufficient for this quickstart.
- :::image type="content" source="media/cognitive-search-quickstart-blob/cog-search-attach.png" alt-text="Screenshot of the Attach Cognitive Services tab." border="true":::
+ :::image type="content" source="media/cognitive-search-quickstart-blob/cog-search-attach.png" alt-text="Screenshot of the Attach Azure AI services tab." border="true":::
1. Expand **Add enrichments** and make six selections.
If you're using a free service, remember that you're limited to three indexes, i
You can create skillsets using the portal, .NET SDK, or REST API. To further your knowledge, try the REST API using Postman and more sample data. > [!div class="nextstepaction"]
-> [Tutorial: Extract text and structure from JSON blobs using REST APIs ](cognitive-search-tutorial-blob.md)
+> [Tutorial: Extract text and structure from JSON blobs using REST APIs](cognitive-search-tutorial-blob.md)
search Cognitive Search Skill Conditional https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/cognitive-search-skill-conditional.md
Title: Conditional cognitive skill description: The conditional skill in Azure Cognitive Search enables filtering, creating defaults, and merging values in a skillset definition.-
else
``` > [!NOTE]
-> This skill isn't bound to Cognitive Services. It is non-billable and has no Cognitive Services key requirement.
+> This skill isn't bound to Azure AI services. It is non-billable and has no Azure AI services key requirement.
## @odata.type Microsoft.Skills.Util.ConditionalSkill
search Cognitive Search Skill Custom Entity Lookup https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/cognitive-search-skill-custom-entity-lookup.md
Title: Custom Entity Lookup cognitive search skill description: Extract different custom entities from text in an Azure Cognitive Search cognitive search pipeline.-
Last updated 09/07/2022
The **Custom Entity Lookup** skill is used to detect or recognize entities that you define. During skillset execution, the skill looks for text from a custom, user-defined list of words and phrases. The skill uses this list to label any matching entities found within source documents. The skill also supports a degree of fuzzy matching that can be applied to find matches that are similar but not exact. > [!NOTE]
-> This skill isn't bound to a Cognitive Services API but requires a Cognitive Services key to allow more than 20 transactions. This skill is [metered by Cognitive Search](https://azure.microsoft.com/pricing/details/search/#pricing).
+> This skill isn't bound to an Azure AI services API but requires an Azure AI services key to allow more than 20 transactions. This skill is [metered by Cognitive Search](https://azure.microsoft.com/pricing/details/search/#pricing).
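As a rough illustration of the user-defined list described above, a skill definition might supply entities inline. This is only a sketch: the `inlineEntitiesDefinition` shape, the alias format, and the field paths shown here are assumptions to verify against the skill reference, not values taken from this article.
```json
{
  "@odata.type": "#Microsoft.Skills.Text.CustomEntityLookupSkill",
  "context": "/document",
  "inlineEntitiesDefinition": [
    {
      "name": "Contoso",
      "aliases": [ { "text": "Contoso Ltd" } ],
      "fuzzyEditDistance": 1
    }
  ],
  "inputs": [
    { "name": "text", "source": "/document/content" }
  ],
  "outputs": [
    { "name": "entities", "targetName": "matchedEntities" }
  ]
}
```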
## @odata.type
search Cognitive Search Skill Deprecated https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/cognitive-search-skill-deprecated.md
Title: Deprecated Cognitive Skills description: This page contains a list of cognitive skills that are considered deprecated and will not be supported in the near future in Azure Cognitive Search skillsets.-
search Cognitive Search Skill Document Extraction https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/cognitive-search-skill-document-extraction.md
Last updated 12/12/2021
The **Document Extraction** skill extracts content from a file within the enrichment pipeline. This allows you to take advantage of the document extraction step that normally happens before the skillset execution with files that may be generated by other skills. > [!NOTE]
-> This skill isn't bound to Cognitive Services and has no Cognitive Services key requirement.
+> This skill isn't bound to Azure AI services and has no Azure AI services key requirement.
> This skill extracts text and images. Text extraction is free. Image extraction is [metered by Azure Cognitive Search](https://azure.microsoft.com/pricing/details/search/). On a free search service, the cost of 20 transactions per indexer per day is absorbed so that you can complete quickstarts, tutorials, and small projects at no charge. For Basic, Standard, and above, image extraction is billable. >
The file reference object can be generated one of three ways:
+ [Built-in skills](cognitive-search-predefined-skills.md) + [How to define a skillset](cognitive-search-defining-skillset.md)
-+ [How to process and extract information from images in cognitive search scenarios](cognitive-search-concept-image-scenarios.md)
++ [How to process and extract information from images in cognitive search scenarios](cognitive-search-concept-image-scenarios.md)
search Cognitive Search Skill Entity Linking V3 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/cognitive-search-skill-entity-linking-v3.md
Title: Entity Linking cognitive skill (v3) description: Extract different linked entities from text in an enrichment pipeline in Azure Cognitive Search.-
Last updated 08/17/2022
The **Entity Linking** skill (v3) returns a list of recognized entities with links to articles in a well-known knowledge base (Wikipedia). > [!NOTE]
-> This skill is bound to the [Entity Linking](../cognitive-services/language-service/entity-linking/overview.md) machine learning models in [Azure Cognitive Services for Language](../cognitive-services/language-service/overview.md) and requires [a billable resource](cognitive-search-attach-cognitive-services.md) for transactions that exceed 20 documents per indexer per day. Execution of built-in skills is charged at the existing [Cognitive Services pay-as-you go price](https://azure.microsoft.com/pricing/details/cognitive-services/).
+> This skill is bound to the [Entity Linking](../ai-services/language-service/entity-linking/overview.md) machine learning models in [Azure AI Language](../ai-services/language-service/overview.md) and requires [a billable resource](cognitive-search-attach-cognitive-services.md) for transactions that exceed 20 documents per indexer per day. Execution of built-in skills is charged at the existing [Azure AI services pay-as-you-go price](https://azure.microsoft.com/pricing/details/cognitive-services/).
> ## @odata.type
Parameter names are case-sensitive and are all optional.
| Parameter name | Description | |--|-|
-| `defaultLanguageCode` | Language code of the input text. If the default language code is not specified, English (en) will be used as the default language code. <br/> See the [full list of supported languages](../cognitive-services/language-service/entity-linking/language-support.md). |
+| `defaultLanguageCode` | Language code of the input text. If the default language code is not specified, English (en) will be used as the default language code. <br/> See the [full list of supported languages](../ai-services/language-service/entity-linking/language-support.md). |
| `minimumPrecision` | A value between 0 and 1. If the confidence score (in the `entities` output) is lower than this value, the entity is not returned. The default is 0. |
-| `modelVersion` | (Optional) Specifies the [version of the model](../cognitive-services/language-service/concepts/model-lifecycle.md) to use when calling entity linking. It will default to the latest available when not specified. We recommend you do not specify this value unless it's necessary.|
+| `modelVersion` | (Optional) Specifies the [version of the model](../ai-services/language-service/concepts/model-lifecycle.md) to use when calling entity linking. It will default to the latest available when not specified. We recommend you do not specify this value unless it's necessary.|
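To show how these parameters fit together, here is a minimal sketch of an Entity Linking (v3) skill definition. The `context`, input `source` paths, and output `targetName` are illustrative assumptions based on common skillset conventions, not values taken from this article.
```json
{
  "@odata.type": "#Microsoft.Skills.Text.V3.EntityLinkingSkill",
  "context": "/document",
  "defaultLanguageCode": "en",
  "minimumPrecision": 0.5,
  "inputs": [
    { "name": "text", "source": "/document/content" },
    { "name": "languageCode", "source": "/document/languageCode" }
  ],
  "outputs": [
    { "name": "entities", "targetName": "linkedEntities" }
  ]
}
```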
## Skill inputs | Input name | Description | ||-|
-| `languageCode` | A string indicating the language of the records. If this parameter is not specified, the default language code will be used to analyze the records. <br/>See the [full list of supported languages](../cognitive-services/language-service/entity-linking/language-support.md). |
+| `languageCode` | A string indicating the language of the records. If this parameter is not specified, the default language code will be used to analyze the records. <br/>See the [full list of supported languages](../ai-services/language-service/entity-linking/language-support.md). |
| `text` | The text to analyze. | ## Skill outputs
Parameter names are case-sensitive and are all optional.
} ```
-The offsets returned for entities in the output of this skill are directly returned from the [Language Service APIs](../cognitive-services/language-service/overview.md), which means if you are using them to index into the original string, you should use the [StringInfo](/dotnet/api/system.globalization.stringinfo) class in .NET in order to extract the correct content. For more information, see [Multilingual and emoji support in Language service features](../cognitive-services/language-service/concepts/multilingual-emoji-support.md).
+The offsets returned for entities in the output of this skill are directly returned from the [Language Service APIs](../ai-services/language-service/overview.md), which means if you are using them to index into the original string, you should use the [StringInfo](/dotnet/api/system.globalization.stringinfo) class in .NET in order to extract the correct content. For more information, see [Multilingual and emoji support in Language service features](../ai-services/language-service/concepts/multilingual-emoji-support.md).
## Warning cases
search Cognitive Search Skill Entity Recognition V3 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/cognitive-search-skill-entity-recognition-v3.md
Title: Entity Recognition cognitive skill (v3)
-description: Extract different types of entities using the machine learning models of Azure Cognitive Services for Language in an AI enrichment pipeline in Azure Cognitive Search.
-
+description: Extract different types of entities using the machine learning models of Azure AI Language in an AI enrichment pipeline in Azure Cognitive Search.
Last updated 08/17/2022
# Entity Recognition cognitive skill (v3)
-The **Entity Recognition** skill (v3) extracts entities of different types from text. These entities fall under 14 distinct categories, ranging from people and organizations to URLs and phone numbers. This skill uses the [Named Entity Recognition](../cognitive-services/language-service/named-entity-recognition/overview.md) machine learning models provided by [Azure Cognitive Services for Language](../cognitive-services/language-service/overview.md).
+The **Entity Recognition** skill (v3) extracts entities of different types from text. These entities fall under 14 distinct categories, ranging from people and organizations to URLs and phone numbers. This skill uses the [Named Entity Recognition](../ai-services/language-service/named-entity-recognition/overview.md) machine learning models provided by [Azure AI Language](../ai-services/language-service/overview.md).
> [!NOTE]
-> This skill is bound to Cognitive Services and requires [a billable resource](cognitive-search-attach-cognitive-services.md) for transactions that exceed 20 documents per indexer per day. Execution of built-in skills is charged at the existing [Cognitive Services pay-as-you go price](https://azure.microsoft.com/pricing/details/cognitive-services/).
+> This skill is bound to Azure AI services and requires [a billable resource](cognitive-search-attach-cognitive-services.md) for transactions that exceed 20 documents per indexer per day. Execution of built-in skills is charged at the existing [Azure AI services pay-as-you-go price](https://azure.microsoft.com/pricing/details/cognitive-services/).
> ## @odata.type
Parameters are case-sensitive and are all optional.
| Parameter name | Description | |--|-| | `categories` | Array of categories that should be extracted. Possible category types: `"Person"`, `"Location"`, `"Organization"`, `"Quantity"`, `"DateTime"`, `"URL"`, `"Email"`, `"personType"`, `"Event"`, `"Product"`, `"Skill"`, `"Address"`, `"phoneNumber"`, `"ipAddress"`. If no category is provided, all types are returned.|
-| `defaultLanguageCode` | Language code of the input text. If the default language code is not specified, English (en) will be used as the default language code. <br/> See the [full list of supported languages](../cognitive-services/language-service/named-entity-recognition/language-support.md). Not all entity categories are supported for all languages; see note below.|
+| `defaultLanguageCode` | Language code of the input text. If the default language code is not specified, English (en) will be used as the default language code. <br/> See the [full list of supported languages](../ai-services/language-service/named-entity-recognition/language-support.md). Not all entity categories are supported for all languages; see note below.|
| `minimumPrecision` | A value between 0 and 1. If the confidence score (in the `namedEntities` output) is lower than this value, the entity is not returned. The default is 0. |
-| `modelVersion` | (Optional) Specifies the [version of the model](../cognitive-services/language-service/concepts/model-lifecycle.md) to use when calling the entity recognition API. It will default to the latest available when not specified. We recommend you do not specify this value unless it's necessary. |
+| `modelVersion` | (Optional) Specifies the [version of the model](../ai-services/language-service/concepts/model-lifecycle.md) to use when calling the entity recognition API. It will default to the latest available when not specified. We recommend you do not specify this value unless it's necessary. |
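For orientation, a minimal Entity Recognition (v3) skill definition using the parameters above might look like the following sketch. The `context`, `source` paths, and the choice to map only the consolidated `namedEntities` output are illustrative assumptions.
```json
{
  "@odata.type": "#Microsoft.Skills.Text.V3.EntityRecognitionSkill",
  "context": "/document",
  "categories": [ "Person", "Organization", "Location" ],
  "defaultLanguageCode": "en",
  "minimumPrecision": 0.5,
  "inputs": [
    { "name": "text", "source": "/document/content" },
    { "name": "languageCode", "source": "/document/languageCode" }
  ],
  "outputs": [
    { "name": "namedEntities", "targetName": "namedEntities" }
  ]
}
```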
## Skill inputs | Input name | Description | ||-|
-| `languageCode` | A string indicating the language of the records. If this parameter is not specified, the default language code will be used to analyze the records. <br/>See the [full list of supported languages](../cognitive-services/language-service/named-entity-recognition/language-support.md). |
+| `languageCode` | A string indicating the language of the records. If this parameter is not specified, the default language code will be used to analyze the records. <br/>See the [full list of supported languages](../ai-services/language-service/named-entity-recognition/language-support.md). |
| `text` | The text to analyze. | ## Skill outputs > [!NOTE]
-> Not all entity categories are supported for all languages. See [Supported Named Entity Recognition (NER) entity categories](../cognitive-services/language-service/named-entity-recognition/concepts/named-entity-categories.md) to know which entity categories are supported for the language you will be using.
+> Not all entity categories are supported for all languages. See [Supported Named Entity Recognition (NER) entity categories](../ai-services/language-service/named-entity-recognition/concepts/named-entity-categories.md) to know which entity categories are supported for the language you will be using.
| Output name | Description | ||-|
Parameters are case-sensitive and are all optional.
} ```
-The offsets returned for entities in the output of this skill are directly returned from the [Language Service APIs](../cognitive-services/language-service/overview.md), which means if you are using them to index into the original string, you should use the [StringInfo](/dotnet/api/system.globalization.stringinfo) class in .NET in order to extract the correct content. For more information, see [Multilingual and emoji support in Language service features](../cognitive-services/language-service/concepts/multilingual-emoji-support.md).
+The offsets returned for entities in the output of this skill are directly returned from the [Language Service APIs](../ai-services/language-service/overview.md), which means if you are using them to index into the original string, you should use the [StringInfo](/dotnet/api/system.globalization.stringinfo) class in .NET in order to extract the correct content. For more information, see [Multilingual and emoji support in Language service features](../ai-services/language-service/concepts/multilingual-emoji-support.md).
## Warning cases
search Cognitive Search Skill Entity Recognition https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/cognitive-search-skill-entity-recognition.md
Title: Entity Recognition cognitive skill (v2) description: Extract different types of entities from text in an enrichment pipeline in Azure Cognitive Search.-
Last updated 08/17/2022
# Entity Recognition cognitive skill (v2)
-The **Entity Recognition** skill (v2) extracts entities of different types from text. This skill uses the machine learning models provided by [Text Analytics](../cognitive-services/text-analytics/overview.md) in Cognitive Services.
+The **Entity Recognition** skill (v2) extracts entities of different types from text. This skill uses the machine learning models provided by [Text Analytics](../ai-services/language-service/overview.md) in Azure AI services.
> [!IMPORTANT] > The Entity Recognition skill (v2) (**Microsoft.Skills.Text.EntityRecognitionSkill**) is now discontinued and replaced by [Microsoft.Skills.Text.V3.EntityRecognitionSkill](cognitive-search-skill-entity-recognition-v3.md). Follow the recommendations in [Deprecated cognitive search skills](cognitive-search-skill-deprecated.md) to migrate to a supported skill. > [!NOTE]
-> As you expand scope by increasing the frequency of processing, adding more documents, or adding more AI algorithms, you will need to [attach a billable Cognitive Services resource](cognitive-search-attach-cognitive-services.md). Charges accrue when calling APIs in Cognitive Services, and for image extraction as part of the document-cracking stage in Azure Cognitive Search. There are no charges for text extraction from documents.
+> As you expand scope by increasing the frequency of processing, adding more documents, or adding more AI algorithms, you will need to [attach a billable Azure AI services resource](cognitive-search-attach-cognitive-services.md). Charges accrue when calling APIs in Azure AI services, and for image extraction as part of the document-cracking stage in Azure Cognitive Search. There are no charges for text extraction from documents.
>
-> Execution of built-in skills is charged at the existing [Cognitive Services pay-as-you go price](https://azure.microsoft.com/pricing/details/cognitive-services/). Image extraction pricing is described on the [Azure Cognitive Search pricing page](https://azure.microsoft.com/pricing/details/search/).
+> Execution of built-in skills is charged at the existing [Azure AI services pay-as-you-go price](https://azure.microsoft.com/pricing/details/cognitive-services/). Image extraction pricing is described on the [Azure Cognitive Search pricing page](https://azure.microsoft.com/pricing/details/search/).
## @odata.type
Parameters are case-sensitive and are all optional.
## Skill outputs > [!NOTE]
-> Not all entity categories are supported for all languages. The `"Person"`, `"Location"`, and `"Organization"` entity category types are supported for the full list of languages above. Only _de_, _en_, _es_, _fr_, and _zh-hans_ support extraction of `"Quantity"`, `"Datetime"`, `"URL"`, and `"Email"` types. For more information, see [Language and region support for the Text Analytics API](../cognitive-services/text-analytics/language-support.md).
+> Not all entity categories are supported for all languages. The `"Person"`, `"Location"`, and `"Organization"` entity category types are supported for the full list of languages above. Only _de_, _en_, _es_, _fr_, and _zh-hans_ support extraction of `"Quantity"`, `"Datetime"`, `"URL"`, and `"Email"` types. For more information, see [Language and region support for the Text Analytics API](../ai-services/language-service/language-detection/overview.md).
| Output name | Description | ||-|
Parameters are case-sensitive and are all optional.
} ```
-Note that the offsets returned for entities in the output of this skill are directly returned from the [Text Analytics API](../cognitive-services/text-analytics/overview.md), which means if you are using them to index into the original string, you should use the [StringInfo](/dotnet/api/system.globalization.stringinfo) class in .NET in order to extract the correct content. [More details can be found here.](../cognitive-services/text-analytics/concepts/text-offsets.md)
+Note that the offsets returned for entities in the output of this skill are directly returned from the [Text Analytics API](../ai-services/language-service/overview.md), which means if you are using them to index into the original string, you should use the [StringInfo](/dotnet/api/system.globalization.stringinfo) class in .NET in order to extract the correct content. [More details can be found here.](../ai-services/language-service/concepts/multilingual-emoji-support.md)
## Warning cases If the language code for the document is unsupported, a warning is returned and no entities are extracted.
search Cognitive Search Skill Image Analysis https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/cognitive-search-skill-image-analysis.md
Last updated 06/24/2022
The **Image Analysis** skill extracts a rich set of visual features based on the image content. For example, you can generate a caption from an image, generate tags, or identify celebrities and landmarks. This article is the reference documentation for the **Image Analysis** skill. See [Extract text and information from images](cognitive-search-concept-image-scenarios.md) for usage instructions.
-This skill uses the machine learning models provided by [Computer Vision](../cognitive-services/computer-vision/overview.md) in Cognitive Services. **Image Analysis** works on images that meet the following requirements:
+This skill uses the machine learning models provided by [Azure AI Vision](../ai-services/computer-vision/overview.md) in Azure AI services. **Image Analysis** works on images that meet the following requirements:
+ The image must be presented in JPEG, PNG, GIF or BMP format + The file size of the image must be less than 4 megabytes (MB) + The dimensions of the image must be greater than 50 x 50 pixels > [!NOTE]
-> This skill is bound to Cognitive Services and requires [a billable resource](cognitive-search-attach-cognitive-services.md) for transactions that exceed 20 documents per indexer per day. Execution of built-in skills is charged at the existing [Cognitive Services pay-as-you go price](https://azure.microsoft.com/pricing/details/cognitive-services/).
+> This skill is bound to Azure AI services and requires [a billable resource](cognitive-search-attach-cognitive-services.md) for transactions that exceed 20 documents per indexer per day. Execution of built-in skills is charged at the existing [Azure AI services pay-as-you-go price](https://azure.microsoft.com/pricing/details/cognitive-services/).
> > In addition, image extraction is [billable by Azure Cognitive Search](https://azure.microsoft.com/pricing/details/search/). >
Parameters are case-sensitive.
| Parameter name | Description | |--|-|
-| `defaultLanguageCode` | A string indicating the language to return. The service returns recognition results in a specified language. If this parameter isn't specified, the default value is "en". <br/><br/>Supported languages include all of the [generally available languages](../cognitive-services/computer-vision/language-support.md#image-analysis) of Cognitive Services Computer Vision. |
-| `visualFeatures` | An array of strings indicating the visual feature types to return. Valid visual feature types include: <ul><li>*adult* - detects if the image is pornographic (depicts nudity or a sex act), gory (depicts extreme violence or blood) or suggestive (also known as racy content). </li><li>*brands* - detects various brands within an image, including the approximate location. </li><li> *categories* - categorizes image content according to a [taxonomy](../cognitive-services/Computer-vision/Category-Taxonomy.md) defined by Cognitive Services. </li><li>*description* - describes the image content with a complete sentence in supported languages.</li><li>*faces* - detects if faces are present. If present, generates coordinates, gender and age. </li><li>*objects* - detects various objects within an image, including the approximate location. </li><li> *tags* - tags the image with a detailed list of words related to the image content.</li></ul> Names of visual features are case-sensitive. Both *color* and *imageType* visual features have been deprecated, but you can access this functionality through a [custom skill](./cognitive-search-custom-skill-interface.md). Refer to the [Computer Vision Image Analysis documentation](../cognitive-services/computer-vision/language-support.md#image-analysis) on which visual features are supported with each `defaultLanguageCode`.|
+| `defaultLanguageCode` | A string indicating the language to return. The service returns recognition results in a specified language. If this parameter isn't specified, the default value is "en". <br/><br/>Supported languages include all of the [generally available languages](../ai-services/computer-vision/language-support.md#image-analysis) of Azure AI Vision. |
+| `visualFeatures` | An array of strings indicating the visual feature types to return. Valid visual feature types include: <ul><li>*adult* - detects if the image is pornographic (depicts nudity or a sex act), gory (depicts extreme violence or blood) or suggestive (also known as racy content). </li><li>*brands* - detects various brands within an image, including the approximate location. </li><li> *categories* - categorizes image content according to a [taxonomy](../ai-services/Computer-vision/Category-Taxonomy.md) defined by Azure AI services. </li><li>*description* - describes the image content with a complete sentence in supported languages.</li><li>*faces* - detects if faces are present. If present, generates coordinates, gender and age. </li><li>*objects* - detects various objects within an image, including the approximate location. </li><li> *tags* - tags the image with a detailed list of words related to the image content.</li></ul> Names of visual features are case-sensitive. Both *color* and *imageType* visual features have been deprecated, but you can access this functionality through a [custom skill](./cognitive-search-custom-skill-interface.md). Refer to the [Azure AI Vision Image Analysis documentation](../ai-services/computer-vision/language-support.md#image-analysis) on which visual features are supported with each `defaultLanguageCode`.|
| `details` | An array of strings indicating which domain-specific details to return. Valid visual feature types include: <ul><li>*celebrities* - identifies celebrities if detected in the image.</li><li>*landmarks* - identifies landmarks if detected in the image. </li></ul> | ## Skill inputs
Parameters are case-sensitive.
| Output name | Description | ||-|
-| `adult` | Output is a single [adult](../cognitive-services/computer-vision/concept-detecting-adult-content.md) object of a complex type, consisting of boolean fields (`isAdultContent`, `isGoryContent`, `isRacyContent`) and double type scores (`adultScore`, `goreScore`, `racyScore`). |
-| `brands` | Output is an array of [brand](../cognitive-services/computer-vision/concept-brand-detection.md) objects, where the object is a complex type consisting of `name` (string) and a `confidence` score (double). It also returns a `rectangle` with four bounding box coordinates (`x`, `y`, `w`, `h`, in pixels) indicating placement inside the image. For the rectangle, `x` and `y` are the top left. Bottom left is `x`, `y+h`. Top right is `x+w`, `y`. Bottom right is `x+w`, `y+h`.|
-| `categories` | Output is an array of [category](../cognitive-services/computer-vision/concept-categorizing-images.md) objects, where each category object is a complex type consisting of a `name` (string), `score` (double), and optional `detail` that contains celebrity or landmark details. See the [category taxonomy](../cognitive-services/Computer-vision/Category-Taxonomy.md) for the full list of category names. A detail is a nested complex type. A celebrity detail consists of a name, confidence score, and face bounding box. A landmark detail consists of a name and confidence score.|
-| `description` | Output is a single [description](../cognitive-services/computer-vision/concept-describing-images.md) object of a complex type, consisting of lists of `tags` and `caption` (an array consisting of `Text` (string) and `confidence` (double)). |
+| `adult` | Output is a single [adult](../ai-services/computer-vision/concept-detecting-adult-content.md) object of a complex type, consisting of boolean fields (`isAdultContent`, `isGoryContent`, `isRacyContent`) and double type scores (`adultScore`, `goreScore`, `racyScore`). |
+| `brands` | Output is an array of [brand](../ai-services/computer-vision/concept-brand-detection.md) objects, where the object is a complex type consisting of `name` (string) and a `confidence` score (double). It also returns a `rectangle` with four bounding box coordinates (`x`, `y`, `w`, `h`, in pixels) indicating placement inside the image. For the rectangle, `x` and `y` are the top left. Bottom left is `x`, `y+h`. Top right is `x+w`, `y`. Bottom right is `x+w`, `y+h`.|
+| `categories` | Output is an array of [category](../ai-services/computer-vision/concept-categorizing-images.md) objects, where each category object is a complex type consisting of a `name` (string), `score` (double), and optional `detail` that contains celebrity or landmark details. See the [category taxonomy](../ai-services/Computer-vision/Category-Taxonomy.md) for the full list of category names. A detail is a nested complex type. A celebrity detail consists of a name, confidence score, and face bounding box. A landmark detail consists of a name and confidence score.|
+| `description` | Output is a single [description](../ai-services/computer-vision/concept-describing-images.md) object of a complex type, consisting of lists of `tags` and `caption` (an array consisting of `Text` (string) and `confidence` (double)). |
| `faces` | Complex type consisting of `age`, `gender`, and `faceBoundingBox` having four bounding box coordinates (in pixels) indicating placement inside the image. Coordinates are `top`, `left`, `width`, `height`.|
-| `objects` | Output is an array of [visual feature objects](../cognitive-services/computer-vision/concept-object-detection.md). Each object is a complex type, consisting of `object` (string), `confidence` (double), `rectangle` (with four bounding box coordinates indicating placement inside the image), and a `parent` that contains an object name and confidence . |
-| `tags` | Output is an array of [imageTag](../cognitive-services/computer-vision/concept-detecting-image-types.md) objects, where a tag object is a complex type consisting of `name` (string), `hint` (string), and `confidence` (double). The addition of a hint is rare. It's only generated if a tag is ambiguous. For example, an image tagged as "curling" might have a hint of "sports" to better indicate its content. |
+| `objects` | Output is an array of [visual feature objects](../ai-services/computer-vision/concept-object-detection.md). Each object is a complex type, consisting of `object` (string), `confidence` (double), `rectangle` (with four bounding box coordinates indicating placement inside the image), and a `parent` that contains an object name and confidence. |
+| `tags` | Output is an array of [imageTag](../ai-services/computer-vision/concept-detecting-image-types.md) objects, where a tag object is a complex type consisting of `name` (string), `hint` (string), and `confidence` (double). The addition of a hint is rare. It's only generated if a tag is ambiguous. For example, an image tagged as "curling" might have a hint of "sports" to better indicate its content. |
## Sample skill definition
If you get the error similar to `"One or more skills are invalid. Details: Error
## See also
-+ [What is Image Analysis?](../cognitive-services/computer-vision/overview-image-analysis.md)
++ [What is Image Analysis?](../ai-services/computer-vision/overview-image-analysis.md) + [Built-in skills](cognitive-search-predefined-skills.md) + [How to define a skillset](cognitive-search-defining-skillset.md) + [Extract text and information from images](cognitive-search-concept-image-scenarios.md)
search Cognitive Search Skill Keyphrases https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/cognitive-search-skill-keyphrases.md
Title: Key Phrase Extraction cognitive skill description: Evaluates unstructured text, and for each record, returns a list of key phrases in an AI enrichment pipeline in Azure Cognitive Search.-
Last updated 12/09/2021
# Key Phrase Extraction cognitive skill
-The **Key Phrase Extraction** skill evaluates unstructured text, and for each record, returns a list of key phrases. This skill uses the [Key Phrase](../cognitive-services/language-service/key-phrase-extraction/overview.md) machine learning models provided by [Azure Cognitive Services for Language](../cognitive-services/language-service/overview.md).
+The **Key Phrase Extraction** skill evaluates unstructured text, and for each record, returns a list of key phrases. This skill uses the [Key Phrase](../ai-services/language-service/key-phrase-extraction/overview.md) machine learning models provided by [Azure AI Language](../ai-services/language-service/overview.md).
This capability is useful if you need to quickly identify the main talking points in the record. For example, given input text "The food was delicious and there were wonderful staff", the service returns "food" and "wonderful staff". > [!NOTE]
-> This skill is bound to Cognitive Services and requires [a billable resource](cognitive-search-attach-cognitive-services.md) for transactions that exceed 20 documents per indexer per day. Execution of built-in skills is charged at the existing [Cognitive Services pay-as-you go price](https://azure.microsoft.com/pricing/details/cognitive-services/).
+> This skill is bound to Azure AI services and requires [a billable resource](cognitive-search-attach-cognitive-services.md) for transactions that exceed 20 documents per indexer per day. Execution of built-in skills is charged at the existing [Azure AI services pay-as-you-go price](https://azure.microsoft.com/pricing/details/cognitive-services/).
> ## @odata.type
Parameters are case-sensitive.
| Inputs | Description | ||-|
-| `defaultLanguageCode` | (Optional) The language code to apply to documents that don't specify language explicitly. If the default language code is not specified, English (en) will be used as the default language code. <br/> See the [full list of supported languages](../cognitive-services/language-service/key-phrase-extraction/language-support.md). |
+| `defaultLanguageCode` | (Optional) The language code to apply to documents that don't specify language explicitly. If the default language code is not specified, English (en) will be used as the default language code. <br/> See the [full list of supported languages](../ai-services/language-service/key-phrase-extraction/language-support.md). |
| `maxKeyPhraseCount` | (Optional) The maximum number of key phrases to produce. |
-| `modelVersion` | (Optional) Specifies the [version of the model](../cognitive-services/language-service/concepts/model-lifecycle.md) to use when calling the key phrase API. It will default to the latest available when not specified. We recommend you do not specify this value unless it's necessary. |
+| `modelVersion` | (Optional) Specifies the [version of the model](../ai-services/language-service/concepts/model-lifecycle.md) to use when calling the key phrase API. It will default to the latest available when not specified. We recommend you do not specify this value unless it's necessary. |
## Skill inputs | Input | Description | |--|-| | `text` | The text to be analyzed.|
-| `languageCode` | A string indicating the language of the records. If this parameter is not specified, the default language code will be used to analyze the records. <br/>See the [full list of supported languages](../cognitive-services/language-service/key-phrase-extraction/language-support.md). |
+| `languageCode` | A string indicating the language of the records. If this parameter is not specified, the default language code will be used to analyze the records. <br/>See the [full list of supported languages](../ai-services/language-service/key-phrase-extraction/language-support.md). |
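A minimal sketch of how these parameters and inputs might be wired together follows. The `context`, `source` paths, and output `targetName` are illustrative assumptions, not values taken from this article.
```json
{
  "@odata.type": "#Microsoft.Skills.Text.KeyPhraseExtractionSkill",
  "context": "/document",
  "defaultLanguageCode": "en",
  "maxKeyPhraseCount": 10,
  "inputs": [
    { "name": "text", "source": "/document/content" },
    { "name": "languageCode", "source": "/document/languageCode" }
  ],
  "outputs": [
    { "name": "keyPhrases", "targetName": "keyPhrases" }
  ]
}
```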
## Skill outputs
If your text is larger than 50,000 characters, only the first 50,000 characters
+ [Built-in skills](cognitive-search-predefined-skills.md) + [How to define a skillset](cognitive-search-defining-skillset.md)
-+ [How to define output fields mappings](cognitive-search-output-field-mapping.md)
++ [How to define output fields mappings](cognitive-search-output-field-mapping.md)
search Cognitive Search Skill Language Detection https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/cognitive-search-skill-language-detection.md
Title: Language detection cognitive skill description: Evaluates unstructured text, and for each record, returns a language identifier with a score indicating the strength of the analysis in an AI enrichment pipeline in Azure Cognitive Search.-
Last updated 12/09/2021
# Language detection cognitive skill
-The **Language Detection** skill detects the language of input text and reports a single language code for every document submitted on the request. The language code is paired with a score indicating the strength of the analysis. This skill uses the machine learning models provided in [Azure Cognitive Services for Language](../cognitive-services/language-service/overview.md).
+The **Language Detection** skill detects the language of input text and reports a single language code for every document submitted on the request. The language code is paired with a score indicating the strength of the analysis. This skill uses the machine learning models provided in [Azure AI Language](../ai-services/language-service/overview.md).
This capability is especially useful when you need to provide the language of the text as input to other skills (for example, the [Sentiment Analysis skill](cognitive-search-skill-sentiment-v3.md) or [Text Split skill](cognitive-search-skill-textsplit.md)).
-See [supported languages](../cognitive-services/language-service/language-detection/language-support.md) for Language Detection. If you have content expressed in an unsupported language, the response is `(Unknown)`.
+See [supported languages](../ai-services/language-service/language-detection/language-support.md) for Language Detection. If you have content expressed in an unsupported language, the response is `(Unknown)`.
> [!NOTE]
-> This skill is bound to Cognitive Services and requires [a billable resource](cognitive-search-attach-cognitive-services.md) for transactions that exceed 20 documents per indexer per day. Execution of built-in skills is charged at the existing [Cognitive Services pay-as-you go price](https://azure.microsoft.com/pricing/details/cognitive-services/).
+> This skill is bound to Azure AI services and requires [a billable resource](cognitive-search-attach-cognitive-services.md) for transactions that exceed 20 documents per indexer per day. Execution of built-in skills is charged at the existing [Azure AI services pay-as-you-go price](https://azure.microsoft.com/pricing/details/cognitive-services/).
> ## @odata.type
Parameters are case-sensitive.
| Inputs | Description | ||-|
-| `defaultCountryHint` | (Optional) An ISO 3166-1 alpha-2 two letter country code can be provided to use as a hint to the language detection model if it cannot [disambiguate the language](../cognitive-services/language-service/language-detection/how-to/call-api.md#ambiguous-content). Specifically, the `defaultCountryHint` parameter is used with documents that don't specify the `countryHint` input explicitly. |
-| `modelVersion` | (Optional) Specifies the [version of the model](../cognitive-services/language-service/concepts/model-lifecycle.md) to use when calling language detection. It will default to the latest available when not specified. We recommend you do not specify this value unless it's necessary. |
+| `defaultCountryHint` | (Optional) An ISO 3166-1 alpha-2 two letter country code can be provided to use as a hint to the language detection model if it cannot [disambiguate the language](../ai-services/language-service/language-detection/how-to/call-api.md#ambiguous-content). Specifically, the `defaultCountryHint` parameter is used with documents that don't specify the `countryHint` input explicitly. |
+| `modelVersion` | (Optional) Specifies the [version of the model](../ai-services/language-service/concepts/model-lifecycle.md) to use when calling language detection. It will default to the latest available when not specified. We recommend you do not specify this value unless it's necessary. |
## Skill inputs
Parameters are case-sensitive.
| Inputs | Description | |--|-| | `text` | The text to be analyzed.|
-| `countryHint` | An ISO 3166-1 alpha-2 two letter country code to use as a hint to the language detection model if it cannot [disambiguate the language](../cognitive-services/language-service/language-detection/how-to/call-api.md#ambiguous-content). |
+| `countryHint` | An ISO 3166-1 alpha-2 two letter country code to use as a hint to the language detection model if it cannot [disambiguate the language](../ai-services/language-service/language-detection/how-to/call-api.md#ambiguous-content). |
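To make the wiring concrete, here is a minimal sketch of a Language Detection skill definition. The `context`, `source` path, and output `targetName` values are illustrative assumptions; the output names shown are the ones the skill typically emits.
```json
{
  "@odata.type": "#Microsoft.Skills.Text.LanguageDetectionSkill",
  "context": "/document",
  "defaultCountryHint": "US",
  "inputs": [
    { "name": "text", "source": "/document/content" }
  ],
  "outputs": [
    { "name": "languageCode", "targetName": "languageCode" },
    { "name": "languageName", "targetName": "languageName" },
    { "name": "score", "targetName": "languageScore" }
  ]
}
```
The detected `languageCode` can then feed the `languageCode` input of downstream skills such as Sentiment Analysis or Text Split, as mentioned earlier.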
## Skill outputs
Parameters are case-sensitive.
## See also + [Built-in skills](cognitive-search-predefined-skills.md)
-+ [How to define a skillset](cognitive-search-defining-skillset.md)
++ [How to define a skillset](cognitive-search-defining-skillset.md)
search Cognitive Search Skill Named Entity Recognition https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/cognitive-search-skill-named-entity-recognition.md
Title: Named Entity Recognition cognitive skill (v2) description: Extract named entities for person, location and organization from text in an AI enrichment pipeline in Azure Cognitive Search.-
The **Named Entity Recognition** skill (v2) extracts named entities from text. A
> Named entity recognition skill (v2) (**Microsoft.Skills.Text.NamedEntityRecognitionSkill**) is now discontinued and replaced by [Microsoft.Skills.Text.V3.EntityRecognitionSkill](cognitive-search-skill-entity-recognition-v3.md). Follow the recommendations in [Deprecated cognitive search skills](cognitive-search-skill-deprecated.md) to migrate to a supported skill. > [!NOTE]
-> As you expand scope by increasing the frequency of processing, adding more documents, or adding more AI algorithms, you will need to [attach a billable Cognitive Services resource](cognitive-search-attach-cognitive-services.md). Charges accrue when calling APIs in Cognitive Services, and for image extraction as part of the document-cracking stage in Azure Cognitive Search. There are no charges for text extraction from documents. Execution of built-in skills is charged at the existing [Cognitive Services pay-as-you go price](https://azure.microsoft.com/pricing/details/cognitive-services/).
+> As you expand scope by increasing the frequency of processing, adding more documents, or adding more AI algorithms, you will need to [attach a billable Azure AI services resource](cognitive-search-attach-cognitive-services.md). Charges accrue when calling APIs in Azure AI services, and for image extraction as part of the document-cracking stage in Azure Cognitive Search. There are no charges for text extraction from documents. Execution of built-in skills is charged at the existing [Azure AI services pay-as-you-go price](https://azure.microsoft.com/pricing/details/cognitive-services/).
> > Image extraction is an extra charge metered by Azure Cognitive Search, as described on the [pricing page](https://azure.microsoft.com/pricing/details/search/). Text extraction is free. >
search Cognitive Search Skill Ocr https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/cognitive-search-skill-ocr.md
Last updated 06/24/2022
The **Optical character recognition (OCR)** skill recognizes printed and handwritten text in image files. This article is the reference documentation for the OCR skill. See [Extract text from images](cognitive-search-concept-image-scenarios.md) for usage instructions.
-An OCR skill uses the machine learning models provided by [Computer Vision](../cognitive-services/computer-vision/overview.md) API [v3.2](https://westus.dev.cognitive.microsoft.com/docs/services/computer-vision-v3-2/operations/5d986960601faab4bf452005) in Cognitive Services. The **OCR** skill maps to the following functionality:
+An OCR skill uses the machine learning models provided by [Azure AI Vision](../ai-services/computer-vision/overview.md) API [v3.2](https://westus.dev.cognitive.microsoft.com/docs/services/computer-vision-v3-2/operations/5d986960601faab4bf452005) in Azure AI services. The **OCR** skill maps to the following functionality:
-+ For the languages listed under [Cognitive Services Computer Vision language support](../cognitive-services/computer-vision/language-support.md#optical-character-recognition-ocr), the [Read API](../cognitive-services/computer-vision/overview-ocr.md) is used.
++ For the languages listed under [Azure AI Vision language support](../ai-services/computer-vision/language-support.md#optical-character-recognition-ocr), the [Read API](../ai-services/computer-vision/overview-ocr.md) is used. + For Greek and Serbian Cyrillic, the [legacy OCR](https://westus.dev.cognitive.microsoft.com/docs/services/computer-vision-v3-2/operations/56f91f2e778daf14a499f20d) API is used. The **OCR** skill extracts text from image files. Supported file formats include:
The **OCR** skill extracts text from image files. Supported file formats include
+ .TIFF > [!NOTE]
-> This skill is bound to Cognitive Services and requires [a billable resource](cognitive-search-attach-cognitive-services.md) for transactions that exceed 20 documents per indexer per day. Execution of built-in skills is charged at the existing [Cognitive Services pay-as-you go price](https://azure.microsoft.com/pricing/details/cognitive-services/).
+> This skill is bound to Azure AI services and requires [a billable resource](cognitive-search-attach-cognitive-services.md) for transactions that exceed 20 documents per indexer per day. Execution of built-in skills is charged at the existing [Azure AI services pay-as-you-go price](https://azure.microsoft.com/pricing/details/cognitive-services/).
> > In addition, image extraction is [billable by Azure Cognitive Search](https://azure.microsoft.com/pricing/details/search/). >
Parameters are case-sensitive.
| Parameter name | Description | |--|-| | `detectOrientation` | Detects image orientation. Valid values are `true` or `false`. </p>This parameter only applies if the [legacy OCR](https://westus.dev.cognitive.microsoft.com/docs/services/computer-vision-v3-2/operations/56f91f2e778daf14a499f20d) API is used. |
-| `defaultLanguageCode` | Language code of the input text. Supported languages include all of the [generally available languages](../cognitive-services/computer-vision/language-support.md#image-analysis) of Cognitive Services Computer Vision. You can also specify `unk` (Unknown). </p>If the language code is unspecified or null, the language is set to English. If the language is explicitly set to `unk`, all languages found are auto-detected and returned.|
+| `defaultLanguageCode` | Language code of the input text. Supported languages include all of the [generally available languages](../ai-services/computer-vision/language-support.md#image-analysis) of Azure AI Vision. You can also specify `unk` (Unknown). </p>If the language code is unspecified or null, the language is set to English. If the language is explicitly set to `unk`, all languages found are auto-detected and returned.|
| `lineEnding` | The value to use as a line separator. Possible values: "Space", "CarriageReturn", "LineFeed". The default is "Space". | In previous versions, there was a parameter called "textExtractionAlgorithm" to specify extraction of "printed" or "handwritten" text. This parameter is deprecated because the current Read API algorithm extracts both types of text at once. If your skill includes this parameter, you don't need to remove it, but it won't be used during skill execution.
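For reference, a minimal OCR skill definition using the parameters above might look like the following sketch. It assumes image extraction is enabled so that `/document/normalized_images/*` exists; the `context` and output `targetName` values are illustrative.
```json
{
  "@odata.type": "#Microsoft.Skills.Vision.OcrSkill",
  "context": "/document/normalized_images/*",
  "defaultLanguageCode": "en",
  "detectOrientation": true,
  "lineEnding": "Space",
  "inputs": [
    { "name": "image", "source": "/document/normalized_images/*" }
  ],
  "outputs": [
    { "name": "text", "targetName": "text" },
    { "name": "layoutText", "targetName": "layoutText" }
  ]
}
```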
The above skillset example assumes that a normalized-images field exists. To gen
## See also
-+ [What is optical character recognition](../cognitive-services/computer-vision/overview-ocr.md)
++ [What is optical character recognition](../ai-services/computer-vision/overview-ocr.md) + [Built-in skills](cognitive-search-predefined-skills.md) + [TextMerger skill](cognitive-search-skill-textmerger.md) + [How to define a skillset](cognitive-search-defining-skillset.md)
search Cognitive Search Skill Pii Detection https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/cognitive-search-skill-pii-detection.md
Last updated 12/09/2021
# Personally Identifiable Information (PII) Detection cognitive skill
-The **PII Detection** skill extracts personal information from an input text and gives you the option of masking it. This skill uses the [detection models](../cognitive-services/language-service/personally-identifiable-information/overview.md) provided in [Azure Cognitive Services for Language](../cognitive-services/language-service/overview.md).
+The **PII Detection** skill extracts personal information from an input text and gives you the option of masking it. This skill uses the [detection models](../ai-services/language-service/personally-identifiable-information/overview.md) provided in [Azure AI Language](../ai-services/language-service/overview.md).
> [!NOTE]
-> This skill is bound to Cognitive Services and requires [a billable resource](cognitive-search-attach-cognitive-services.md) for transactions that exceed 20 documents per indexer per day. Execution of built-in skills is charged at the existing [Cognitive Services pay-as-you go price](https://azure.microsoft.com/pricing/details/cognitive-services/).
+> This skill is bound to Azure AI services and requires [a billable resource](cognitive-search-attach-cognitive-services.md) for transactions that exceed 20 documents per indexer per day. Execution of built-in skills is charged at the existing [Azure AI services pay-as-you-go price](https://azure.microsoft.com/pricing/details/cognitive-services/).
> ## @odata.type
Parameters are case-sensitive and all are optional.
| Parameter name | Description | |--|-|
-| `defaultLanguageCode` | (Optional) The language code to apply to documents that don't specify language explicitly. If the default language code is not specified, English (en) will be used as the default language code. <br/> See the [full list of supported languages](../cognitive-services/language-service/personally-identifiable-information/language-support.md). |
+| `defaultLanguageCode` | (Optional) The language code to apply to documents that don't specify language explicitly. If the default language code is not specified, English (en) will be used as the default language code. <br/> See the [full list of supported languages](../ai-services/language-service/personally-identifiable-information/language-support.md). |
| `minimumPrecision` | A value between 0.0 and 1.0. If the confidence score (in the `piiEntities` output) is lower than the set `minimumPrecision` value, the entity is not returned or masked. The default is 0.0. | | `maskingMode` | A parameter that provides various ways to mask the personal information detected in the input text. The following options are supported: <ul><li>`"none"` (default): No masking occurs and the `maskedText` output will not be returned. </li><li> `"replace"`: Replaces the detected entities with the character given in the `maskingCharacter` parameter. The character will be repeated to the length of the detected entity so that the offsets will correctly correspond to both the input text and the output `maskedText`.</li></ul> <br/> When this skill was in public preview, the `maskingMode` option `redact` was also supported, which allowed removing the detected entities entirely without replacement. The `redact` option has since been deprecated and is no longer supported. | | `maskingCharacter` | The character used to mask the text if the `maskingMode` parameter is set to `replace`. The following option is supported: `*` (default). This parameter can only be `null` if `maskingMode` is not set to `replace`. <br/><br/> When this skill was in public preview, there was support for the `maskingCharacter` options `X` and `#`. Both `X` and `#` options have since been deprecated and are no longer supported. | | `domain` | (Optional) A string value, if specified, will set the domain to include only a subset of the entity categories. Possible values include: `"phi"` (detect confidential health information only), `"none"`. |
-| `piiCategories` | (Optional) If you want to specify which entities will be detected and returned, use this optional parameter (defined as a list of strings) with the appropriate entity categories. This parameter can also let you detect entities that aren't enabled by default for your document language. See [Supported Personally Identifiable Information entity categories](../cognitive-services/language-service/personally-identifiable-information/concepts/entity-categories.md) for the full list. |
-| `modelVersion` | (Optional) Specifies the [version of the model](../cognitive-services/language-service/concepts/model-lifecycle.md) to use when calling personally identifiable information detection. It will default to the most recent version when not specified. We recommend you do not specify this value unless it's necessary. |
+| `piiCategories` | (Optional) If you want to specify which entities will be detected and returned, use this optional parameter (defined as a list of strings) with the appropriate entity categories. This parameter can also let you detect entities that aren't enabled by default for your document language. See [Supported Personally Identifiable Information entity categories](../ai-services/language-service/personally-identifiable-information/concepts/entity-categories.md) for the full list. |
+| `modelVersion` | (Optional) Specifies the [version of the model](../ai-services/language-service/concepts/model-lifecycle.md) to use when calling personally identifiable information detection. It will default to the most recent version when not specified. We recommend you do not specify this value unless it's necessary. |
## Skill inputs | Input name | Description | ||-|
-| `languageCode` | A string indicating the language of the records. If this parameter is not specified, the default language code will be used to analyze the records. <br/>See the [full list of supported languages](../cognitive-services/language-service/personally-identifiable-information/language-support.md). |
+| `languageCode` | A string indicating the language of the records. If this parameter is not specified, the default language code will be used to analyze the records. <br/>See the [full list of supported languages](../ai-services/language-service/personally-identifiable-information/language-support.md). |
| `text` | The text to analyze. | ## Skill outputs | Output name | Description | ||-|
-| `piiEntities` | An array of complex types that contains the following fields: <ul><li>`"text"` (The actual personally identifiable information as extracted)</li> <li>`"type"`</li><li>`"subType"`</li><li>`"score"` (Higher value means it's more likely to be a real entity)</li><li>`"offset"` (into the input text)</li><li>`"length"`</li></ul> </br> See [Supported Personally Identifiable Information entity categories](../cognitive-services/language-service/personally-identifiable-information/concepts/entity-categories.md) for the full list. |
+| `piiEntities` | An array of complex types that contains the following fields: <ul><li>`"text"` (The actual personally identifiable information as extracted)</li> <li>`"type"`</li><li>`"subType"`</li><li>`"score"` (Higher value means it's more likely to be a real entity)</li><li>`"offset"` (into the input text)</li><li>`"length"`</li></ul> </br> See [Supported Personally Identifiable Information entity categories](../ai-services/language-service/personally-identifiable-information/concepts/entity-categories.md) for the full list. |
| `maskedText` | If `maskingMode` is set to a value other than `none`, this output will be the string result of the masking performed on the input text as described by the selected `maskingMode`. If `maskingMode` is set to `none`, this output will not be present. | ## Sample definition
Parameters are case-sensitive and all are optional.
} ```
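The sample definition above is truncated in this changelog view. As a rough sketch of how the parameters described earlier might combine in a skill definition, consider the following fragment; the `@odata.type` value, `context`, input `source` path, and output `targetName` values are assumptions for illustration and aren't taken from this excerpt:
```json
{
  "@odata.type": "#Microsoft.Skills.Text.PIIDetectionSkill",
  "defaultLanguageCode": "en",
  "minimumPrecision": 0.5,
  "maskingMode": "replace",
  "maskingCharacter": "*",
  "context": "/document",
  "inputs": [
    { "name": "text", "source": "/document/content" }
  ],
  "outputs": [
    { "name": "piiEntities", "targetName": "pii" },
    { "name": "maskedText", "targetName": "maskedText" }
  ]
}
```
With `minimumPrecision` raised above 0.0, low-confidence entities are dropped from both `piiEntities` and the masking applied to `maskedText`.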
-The offsets returned for entities in the output of this skill are directly returned from the [Language Service APIs](../cognitive-services/language-service/overview.md), which means if you are using them to index into the original string, you should use the [StringInfo](/dotnet/api/system.globalization.stringinfo) class in .NET in order to extract the correct content. For more information, see [Multilingual and emoji support in Language service features](../cognitive-services/language-service/concepts/multilingual-emoji-support.md).
+The offsets returned for entities in the output of this skill are directly returned from the [Language Service APIs](../ai-services/language-service/overview.md), which means if you are using them to index into the original string, you should use the [StringInfo](/dotnet/api/system.globalization.stringinfo) class in .NET in order to extract the correct content. For more information, see [Multilingual and emoji support in Language service features](../ai-services/language-service/concepts/multilingual-emoji-support.md).
## Errors and warnings
If the skill returns a warning, the output `maskedText` may be empty, which can
## See also + [Built-in skills](cognitive-search-predefined-skills.md)
-+ [How to define a skillset](cognitive-search-defining-skillset.md)
++ [How to define a skillset](cognitive-search-defining-skillset.md)
search Cognitive Search Skill Sentiment V3 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/cognitive-search-skill-sentiment-v3.md
Last updated 08/17/2022
# Sentiment cognitive skill (v3)
-The **Sentiment** skill (v3) evaluates unstructured text and for each record, provides sentiment labels (such as "negative", "neutral" and "positive") based on the highest confidence score found by the service at a sentence and document-level. This skill uses the machine learning models provided by version 3 of [Language Service](../cognitive-services/language-service/overview.md) in Cognitive Services. It also exposes [opinion mining capabilities](../cognitive-services/language-service/sentiment-opinion-mining/overview.md), which provides more granular information about the opinions related to attributes of products or services in text.
+The **Sentiment** skill (v3) evaluates unstructured text and for each record, provides sentiment labels (such as "negative", "neutral" and "positive") based on the highest confidence score found by the service at a sentence and document-level. This skill uses the machine learning models provided by version 3 of [Language Service](../ai-services/language-service/overview.md) in Azure AI services. It also exposes [opinion mining capabilities](../ai-services/language-service/sentiment-opinion-mining/overview.md), which provides more granular information about the opinions related to attributes of products or services in text.
> [!NOTE]
-> This skill is bound to Cognitive Services and requires [a billable resource](cognitive-search-attach-cognitive-services.md) for transactions that exceed 20 documents per indexer per day. Execution of built-in skills is charged at the existing [Cognitive Services pay-as-you go price](https://azure.microsoft.com/pricing/details/cognitive-services/).
+> This skill is bound to Azure AI services and requires [a billable resource](cognitive-search-attach-cognitive-services.md) for transactions that exceed 20 documents per indexer per day. Execution of built-in skills is charged at the existing [Azure AI services pay-as-you go price](https://azure.microsoft.com/pricing/details/cognitive-services/).
> ## @odata.type
Parameters are case-sensitive.
| Parameter Name | Description | |-|-|
-| `defaultLanguageCode` | (optional) The language code to apply to documents that don't specify language explicitly. <br/> See the [full list of supported languages](../cognitive-services/language-service/sentiment-opinion-mining/language-support.md). |
-| `modelVersion` | (optional) Specifies the [version of the model](../cognitive-services/language-service/concepts/model-lifecycle.md) to use when calling sentiment analysis. It will default to the most recent version when not specified. We recommend you do not specify this value unless it's necessary. |
-| `includeOpinionMining` | If set to `true`, enables [the opinion mining feature](../cognitive-services/language-service/sentiment-opinion-mining/overview.md#opinion-mining), which allows aspect-based sentiment analysis to be included in your output results. Defaults to `false`. |
+| `defaultLanguageCode` | (optional) The language code to apply to documents that don't specify language explicitly. <br/> See the [full list of supported languages](../ai-services/language-service/sentiment-opinion-mining/language-support.md). |
+| `modelVersion` | (optional) Specifies the [version of the model](../ai-services/language-service/concepts/model-lifecycle.md) to use when calling sentiment analysis. It will default to the most recent version when not specified. We recommend you do not specify this value unless it's necessary. |
+| `includeOpinionMining` | If set to `true`, enables [the opinion mining feature](../ai-services/language-service/sentiment-opinion-mining/overview.md#opinion-mining), which allows aspect-based sentiment analysis to be included in your output results. Defaults to `false`. |
## Skill inputs | Input Name | Description | |--|-| | `text` | The text to be analyzed.|
-| `languageCode` | (optional) A string indicating the language of the records. If this parameter is not specified, the default value is "en". <br/>See the [full list of supported languages](../cognitive-services/language-service/sentiment-opinion-mining/language-support.md).|
+| `languageCode` | (optional) A string indicating the language of the records. If this parameter is not specified, the default value is "en". <br/>See the [full list of supported languages](../ai-services/language-service/sentiment-opinion-mining/language-support.md).|
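As a minimal sketch of how the parameters and inputs above fit together in a v3 Sentiment skill definition (the `context`, `source` path, and output mapping are placeholder assumptions, and only a `sentiment` output is shown because the outputs table isn't part of this excerpt):
```json
{
  "@odata.type": "#Microsoft.Skills.Text.V3.SentimentSkill",
  "defaultLanguageCode": "en",
  "includeOpinionMining": true,
  "context": "/document",
  "inputs": [
    { "name": "text", "source": "/document/content" }
  ],
  "outputs": [
    { "name": "sentiment", "targetName": "sentiment" }
  ]
}
```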
## Skill outputs
search Cognitive Search Skill Sentiment https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/cognitive-search-skill-sentiment.md
Title: Sentiment cognitive skill (v2) description: Extract a positive-negative sentiment score from text in an AI enrichment pipeline in Azure Cognitive Search.-
Last updated 08/17/2022
# Sentiment cognitive skill (v2)
-The **Sentiment** skill (v2) evaluates unstructured text along a positive-negative continuum, and for each record, returns a numeric score between 0 and 1. Scores close to 1 indicate positive sentiment, and scores close to 0 indicate negative sentiment. This skill uses the machine learning models provided by [Text Analytics](../cognitive-services/text-analytics/overview.md) in Cognitive Services.
+The **Sentiment** skill (v2) evaluates unstructured text along a positive-negative continuum, and for each record, returns a numeric score between 0 and 1. Scores close to 1 indicate positive sentiment, and scores close to 0 indicate negative sentiment. This skill uses the machine learning models provided by [Text Analytics](../ai-services/language-service/overview.md) in Azure AI services.
> [!IMPORTANT] > The Sentiment skill (v2) (**Microsoft.Skills.Text.SentimentSkill**) is now discontinued and replaced by [Microsoft.Skills.Text.V3.SentimentSkill](cognitive-search-skill-sentiment-v3.md). Follow the recommendations in [Deprecated cognitive search skills](cognitive-search-skill-deprecated.md) to migrate to a supported skill. > [!NOTE]
-> As you expand scope by increasing the frequency of processing, adding more documents, or adding more AI algorithms, you will need to [attach a billable Cognitive Services resource](cognitive-search-attach-cognitive-services.md). Charges accrue when calling APIs in Cognitive Services, and for image extraction as part of the document-cracking stage in Azure Cognitive Search. There are no charges for text extraction from documents.
+> As you expand scope by increasing the frequency of processing, adding more documents, or adding more AI algorithms, you will need to [attach a billable Azure AI services resource](cognitive-search-attach-cognitive-services.md). Charges accrue when calling APIs in Azure AI services, and for image extraction as part of the document-cracking stage in Azure Cognitive Search. There are no charges for text extraction from documents.
>
-> Execution of built-in skills is charged at the existing [Cognitive Services pay-as-you go price](https://azure.microsoft.com/pricing/details/cognitive-services/). Image extraction pricing is described on the [Azure Cognitive Search pricing page](https://azure.microsoft.com/pricing/details/search/).
+> Execution of built-in skills is charged at the existing [Azure AI services pay-as-you go price](https://azure.microsoft.com/pricing/details/cognitive-services/). Image extraction pricing is described on the [Azure Cognitive Search pricing page](https://azure.microsoft.com/pricing/details/search/).
## @odata.type
Parameters are case-sensitive.
| Parameter Name | Description | |-|-|
-| `defaultLanguageCode` | (optional) The language code to apply to documents that don't specify language explicitly. <br/> See the [full list of supported languages](../cognitive-services/text-analytics/language-support.md). |
+| `defaultLanguageCode` | (optional) The language code to apply to documents that don't specify language explicitly. <br/> See the [full list of supported languages](../ai-services/language-service/language-detection/overview.md). |
## Skill inputs | Input Name | Description | |--|-| | `text` | The text to be analyzed.|
-| `languageCode` | (Optional) A string indicating the language of the records. If this parameter is not specified, the default value is "en". <br/>See the [full list of supported languages](../cognitive-services/text-analytics/language-support.md).|
+| `languageCode` | (Optional) A string indicating the language of the records. If this parameter is not specified, the default value is "en". <br/>See the [full list of supported languages](../ai-services/language-service/language-detection/overview.md).|
## Skill outputs
search Cognitive Search Skill Shaper https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/cognitive-search-skill-shaper.md
Title: Shaper cognitive skill description: Extract metadata and structured information from unstructured data and shape it as a complex type in an AI enrichment pipeline in Azure Cognitive Search.-
Additionally, the **Shaper** skill illustrated in [scenario 3](#nested-complex-t
The output name is always "output". Internally, the pipeline can map a different name, such as "analyzedText" as shown in the examples below, but the **Shaper** skill itself returns "output" in the response. This might be important if you are debugging enriched documents and notice the naming discrepancy, or if you build a custom skill and are structuring the response yourself. > [!NOTE]
-> This skill isn't bound to Cognitive Services. It is non-billable and has no Cognitive Services key requirement.
+> This skill isn't bound to Azure AI services. It is non-billable and has no Azure AI services key requirement.
## @odata.type Microsoft.Skills.Util.ShaperSkill
In this case, the **Shaper** creates a complex type. This structure exists in-me
+ [How to define a skillset](cognitive-search-defining-skillset.md) + [How to use complex types](search-howto-complex-data-types.md) + [Knowledge store](knowledge-store-concept-intro.md)
-+ [Create a knowledge store in REST](knowledge-store-create-rest.md)
++ [Create a knowledge store in REST](knowledge-store-create-rest.md)
search Cognitive Search Skill Text Translation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/cognitive-search-skill-text-translation.md
Last updated 09/19/2022
# Text Translation cognitive skill
-The **Text Translation** skill evaluates text and, for each record, returns the text translated to the specified target language. This skill uses the [Translator Text API v3.0](../cognitive-services/translator/reference/v3-0-translate.md) available in Cognitive Services.
+The **Text Translation** skill evaluates text and, for each record, returns the text translated to the specified target language. This skill uses the [Translator Text API v3.0](../ai-services/translator/reference/v3-0-translate.md) available in Azure AI services.
This capability is useful if you expect that your documents may not all be in one language, in which case you can normalize the text to a single language before indexing for search by translating it. It's also useful for localization use cases, where you may want to have copies of the same text available in multiple languages.
-The [Translator Text API v3.0](../cognitive-services/translator/reference/v3-0-reference.md) is a non-regional Cognitive Service, meaning that your data isn't guaranteed to stay in the same region as your Azure Cognitive Search or attached Cognitive Services resource.
+The [Translator Text API v3.0](../ai-services/translator/reference/v3-0-reference.md) is a non-regional Cognitive Service, meaning that your data isn't guaranteed to stay in the same region as your Azure Cognitive Search or attached Azure AI services resource.
> [!NOTE]
-> This skill is bound to Cognitive Services and requires [a billable resource](cognitive-search-attach-cognitive-services.md) for transactions that exceed 20 documents per indexer per day. Execution of built-in skills is charged at the existing [Cognitive Services pay-as-you go price](https://azure.microsoft.com/pricing/details/cognitive-services/).
+> This skill is bound to Azure AI services and requires [a billable resource](cognitive-search-attach-cognitive-services.md) for transactions that exceed 20 documents per indexer per day. Execution of built-in skills is charged at the existing [Azure AI services pay-as-you go price](https://azure.microsoft.com/pricing/details/cognitive-services/).
> > When using this skill, all documents in the source are processed and billed for translation, even if the source and target languages are the same. This behavior is useful for multi-language support within the same document, but it can result in unnecessary processing. To avoid unexpected billing charges from documents that don't need processing, move them out of the data source container prior to running the skill. >
Parameters are case-sensitive.
| Inputs | Description | ||-|
-| defaultToLanguageCode | (Required) The language code to translate documents into for documents that don't specify the "to" language explicitly. <br/> See the [full list of supported languages](../cognitive-services/translator/language-support.md). |
-| defaultFromLanguageCode | (Optional) The language code to translate documents from for documents that don't specify the "from" language explicitly. If the defaultFromLanguageCode isn't specified, the automatic language detection provided by the Translator Text API will be used to determine the "from" language. <br/> See the [full list of supported languages](../cognitive-services/translator/language-support.md). |
-| suggestedFrom | (Optional) The language code to translate documents from if `fromLanguageCode` or `defaultFromLanguageCode` are unspecified, and the automatic language detection is unsuccessful. If the suggestedFrom language isn't specified, English (en) will be used as the suggestedFrom language. <br/> See the [full list of supported languages](../cognitive-services/translator/language-support.md). |
+| defaultToLanguageCode | (Required) The language code to translate documents into for documents that don't specify the "to" language explicitly. <br/> See the [full list of supported languages](../ai-services/translator/language-support.md). |
+| defaultFromLanguageCode | (Optional) The language code to translate documents from for documents that don't specify the "from" language explicitly. If the defaultFromLanguageCode isn't specified, the automatic language detection provided by the Translator Text API will be used to determine the "from" language. <br/> See the [full list of supported languages](../ai-services/translator/language-support.md). |
+| suggestedFrom | (Optional) The language code to translate documents from if `fromLanguageCode` or `defaultFromLanguageCode` are unspecified, and the automatic language detection is unsuccessful. If the suggestedFrom language isn't specified, English (en) will be used as the suggestedFrom language. <br/> See the [full list of supported languages](../ai-services/translator/language-support.md). |
## Skill inputs | Input name | Description | |--|-| | text | The text to be translated.|
-| toLanguageCode | A string indicating the language the text should be translated to. If this input isn't specified, the defaultToLanguageCode will be used to translate the text. <br/>See the [full list of supported languages](../cognitive-services/translator/language-support.md). |
-| fromLanguageCode | A string indicating the current language of the text. If this parameter isn't specified, the defaultFromLanguageCode (or automatic language detection if the defaultFromLanguageCode isn't provided) will be used to translate the text. <br/>See the [full list of supported languages](../cognitive-services/translator/language-support.md). |
+| toLanguageCode | A string indicating the language the text should be translated to. If this input isn't specified, the defaultToLanguageCode will be used to translate the text. <br/>See the [full list of supported languages](../ai-services/translator/language-support.md). |
+| fromLanguageCode | A string indicating the current language of the text. If this parameter isn't specified, the defaultFromLanguageCode (or automatic language detection if the defaultFromLanguageCode isn't provided) will be used to translate the text. <br/>See the [full list of supported languages](../ai-services/translator/language-support.md). |
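For orientation, a hedged sketch of a Text Translation skill definition using the parameters above; the `#Microsoft.Skills.Text.TranslationSkill` type, the `translatedText` output name, and the `context`/`source` paths are assumptions not listed in this excerpt:
```json
{
  "@odata.type": "#Microsoft.Skills.Text.TranslationSkill",
  "defaultToLanguageCode": "fr",
  "suggestedFrom": "en",
  "context": "/document",
  "inputs": [
    { "name": "text", "source": "/document/content" }
  ],
  "outputs": [
    { "name": "translatedText", "targetName": "translated_text" }
  ]
}
```
Here `suggestedFrom` only matters when no "from" language is supplied and automatic detection fails, per the parameter description above.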
## Skill outputs
search Cognitive Search Skill Textmerger https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/cognitive-search-skill-textmerger.md
Title: Text Merge cognitive skill description: Merge text from a collection of fields into one consolidated field. Use this cognitive skill in an AI enrichment pipeline in Azure Cognitive Search.-
Last updated 04/20/2023
The **Text Merge** skill consolidates text from an array of strings into a single field. > [!NOTE]
-> This skill isn't bound to Cognitive Services. It is non-billable and has no Cognitive Services key requirement.
+> This skill isn't bound to Azure AI services. It is non-billable and has no Azure AI services key requirement.
## @odata.type Microsoft.Skills.Text.MergeSkill
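A minimal sketch of a Text Merge skill definition, assuming the common scenario of merging OCR text from images back into document content; the `insertPreTag`/`insertPostTag` parameters, input names, and `source` paths shown here are assumptions not included in this excerpt:
```json
{
  "@odata.type": "#Microsoft.Skills.Text.MergeSkill",
  "insertPreTag": " ",
  "insertPostTag": " ",
  "context": "/document",
  "inputs": [
    { "name": "text", "source": "/document/content" },
    { "name": "itemsToInsert", "source": "/document/normalized_images/*/text" },
    { "name": "offsets", "source": "/document/normalized_images/*/contentOffset" }
  ],
  "outputs": [
    { "name": "mergedText", "targetName": "merged_text" }
  ]
}
```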
search Cognitive Search Skill Textsplit https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/cognitive-search-skill-textsplit.md
Last updated 08/12/2021
The **Text Split** skill breaks text into chunks of text. You can specify whether you want to break the text into sentences or into pages of a particular length. This skill is especially useful if there are maximum text length requirements in other skills downstream. > [!NOTE]
-> This skill isn't bound to Cognitive Services. It is non-billable and has no Cognitive Services key requirement.
+> This skill isn't bound to Azure AI services. It is non-billable and has no Azure AI services key requirement.
## @odata.type Microsoft.Skills.Text.SplitSkill
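To illustrate splitting into fixed-length pages as described above, here's a hedged sketch of a Text Split skill definition; the parameter names (`textSplitMode`, `maximumPageLength`), output name, and paths are assumptions not shown in this excerpt:
```json
{
  "@odata.type": "#Microsoft.Skills.Text.SplitSkill",
  "textSplitMode": "pages",
  "maximumPageLength": 4000,
  "defaultLanguageCode": "en",
  "context": "/document",
  "inputs": [
    { "name": "text", "source": "/document/content" }
  ],
  "outputs": [
    { "name": "textItems", "targetName": "pages" }
  ]
}
```
The resulting chunks can then be fed to downstream skills that impose maximum text length limits.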
search Cognitive Search Tutorial Blob Dotnet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/cognitive-search-tutorial-blob-dotnet.md
Title: 'C# tutorial: AI on Azure blobs' description: Step through an example of text extraction and natural language processing over content in Blob storage using C# and the Azure Cognitive Search .NET SDK. -
The sample data consists of 14 files of mixed content type that you will upload
## 1 - Create services
-This tutorial uses Azure Cognitive Search for indexing and queries, Cognitive Services on the backend for AI enrichment, and Azure Blob Storage to provide the data. This tutorial stays under the free allocation of 20 transactions per indexer per day on Cognitive Services, so the only services you need to create are search and storage.
+This tutorial uses Azure Cognitive Search for indexing and queries, Azure AI services on the backend for AI enrichment, and Azure Blob Storage to provide the data. This tutorial stays under the free allocation of 20 transactions per indexer per day on Azure AI services, so the only services you need to create are search and storage.
If possible, create both in the same region and resource group for proximity and manageability. In practice, your Azure Storage account can be in any region.
If possible, create both in the same region and resource group for proximity and
+ **Storage account name**. If you think you might have multiple resources of the same type, use the name to disambiguate by type and region, for example *blobstoragewestus*.
- + **Location**. If possible, choose the same location used for Azure Cognitive Search and Cognitive Services. A single location voids bandwidth charges.
+ + **Location**. If possible, choose the same location used for Azure Cognitive Search and Azure AI services. A single location voids bandwidth charges.
+ **Account Kind**. Choose the default, *StorageV2 (general purpose v2)*.
If possible, create both in the same region and resource group for proximity and
<!-- The next section says that a key isn't required, but the code won't run without it. Is there a way to make the key declaration work as null? It would be nice to keep the appsetting so that people know how to set it up, but at the same time, the other versions of this sample don't require a key, so this one shouldn't either. -->
-### Cognitive Services
+### Azure AI services
-AI enrichment is backed by Cognitive Services, including Language service and Computer Vision for natural language and image processing. If your objective was to complete an actual prototype or project, you would at this point provision Cognitive Services (in the same region as Azure Cognitive Search) so that you can attach it to indexing operations.
+AI enrichment is backed by Azure AI services, including Language service and Azure AI Vision for natural language and image processing. If your objective was to complete an actual prototype or project, you would at this point provision Azure AI services (in the same region as Azure Cognitive Search) so that you can attach it to indexing operations.
-For this exercise, however, you can skip resource provisioning because Azure Cognitive Search can connect to Cognitive Services behind the scenes and give you 20 free transactions per indexer run. Since this tutorial uses 14 transactions, the free allocation is sufficient. For larger projects, plan on provisioning Cognitive Services at the pay-as-you-go S0 tier. For more information, see [Attach Cognitive Services](cognitive-search-attach-cognitive-services.md).
+For this exercise, however, you can skip resource provisioning because Azure Cognitive Search can connect to Azure AI services behind the scenes and give you 20 free transactions per indexer run. Since this tutorial uses 14 transactions, the free allocation is sufficient. For larger projects, plan on provisioning Azure AI services at the pay-as-you-go S0 tier. For more information, see [Attach Azure AI services](cognitive-search-attach-cognitive-services.md).
### Azure Cognitive Search
search Cognitive Search Tutorial Blob Python https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/cognitive-search-tutorial-blob-python.md
Optionally, you can also download the source code for this tutorial. Source code
## 1 - Create services
-This tutorial uses Azure Cognitive Search for indexing and queries, Cognitive Services on the backend for AI enrichment, and Azure Blob Storage to provide the data. This tutorial stays under the Cognitive Search free allocation of 20 transactions per indexer per day on Cognitive Services, so the only services you need to create are search and storage.
+This tutorial uses Azure Cognitive Search for indexing and queries, Azure AI services on the backend for AI enrichment, and Azure Blob Storage to provide the data. This tutorial stays under the Cognitive Search free allocation of 20 transactions per indexer per day on Azure AI services, so the only services you need to create are search and storage.
If possible, create both in the same region and resource group for proximity and manageability. In practice, your Azure Storage account can be in any region.
If possible, create both in the same region and resource group for proximity and
+ **Storage account name**. If you think you might have multiple resources of the same type, use the name to disambiguate by type and region, for example *blobstoragewestus*.
- + **Location**. If possible, choose the same location used for Azure Cognitive Search and Cognitive Services. A single location voids bandwidth charges.
+ + **Location**. If possible, choose the same location used for Azure Cognitive Search and Azure AI services. A single location voids bandwidth charges.
+ **Account Kind**. Choose the default, *StorageV2 (general purpose v2)*.
If possible, create both in the same region and resource group for proximity and
1. Save the connection string to Notepad. You'll need it later when setting up the data source connection.
-### Cognitive Services
+### Azure AI services
-AI enrichment is backed by Cognitive Services, including Language service and Computer Vision for natural language and image processing. If your objective was to complete an actual prototype or project, you would at this point provision Cognitive Services (in the same region as Azure Cognitive Search) so that you can attach it to indexing operations.
+AI enrichment is backed by Azure AI services, including Language service and Azure AI Vision for natural language and image processing. If your objective was to complete an actual prototype or project, you would at this point provision Azure AI services (in the same region as Azure Cognitive Search) so that you can attach it to indexing operations.
-Since this tutorial only uses 14 transactions, you can skip resource provisioning because Azure Cognitive Search can connect to Cognitive Services for 20 free transactions per indexer run. For larger projects, plan on provisioning Cognitive Services at the pay-as-you-go S0 tier. For more information, see [Attach Cognitive Services](cognitive-search-attach-cognitive-services.md).
+Since this tutorial only uses 14 transactions, you can skip resource provisioning because Azure Cognitive Search can connect to Azure AI services for 20 free transactions per indexer run. For larger projects, plan on provisioning Azure AI services at the pay-as-you-go S0 tier. For more information, see [Attach Azure AI services](cognitive-search-attach-cognitive-services.md).
### Azure Cognitive Search
You can find and manage resources in the portal, using the All resources or Reso
Now that you're familiar with all of the objects in an AI enrichment pipeline, let's take a closer look at skillset definitions and individual skills. > [!div class="nextstepaction"]
-> [How to create a skillset](cognitive-search-defining-skillset.md)
+> [How to create a skillset](cognitive-search-defining-skillset.md)
search Cognitive Search Tutorial Blob https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/cognitive-search-tutorial-blob.md
Title: 'REST Tutorial: AI on Azure blobs' description: Step through an example of text extraction and natural language processing over content in Blob Storage using Postman and the Azure Cognitive Search REST APIs. -
The sample data consists of 14 files of mixed content type that you'll upload to
## 1 - Create services
-This tutorial uses Azure Cognitive Search for indexing and queries, Cognitive Services on the backend for AI enrichment, and Azure Blob Storage to provide the data. This tutorial stays under the free allocation of 20 transactions per indexer per day on Cognitive Services, so the only services you need to create are search and storage.
+This tutorial uses Azure Cognitive Search for indexing and queries, Azure AI services on the backend for AI enrichment, and Azure Blob Storage to provide the data. This tutorial stays under the free allocation of 20 transactions per indexer per day on Azure AI services, so the only services you need to create are search and storage.
If possible, create both in the same region and resource group for proximity and manageability. In practice, your Azure Storage account can be in any region.
If possible, create both in the same region and resource group for proximity and
+ **Storage account name**. If you think you might have multiple resources of the same type, use the name to disambiguate by type and region, for example *blobstoragewestus*.
- + **Location**. If possible, choose the same location used for Azure Cognitive Search and Cognitive Services. A single location voids bandwidth charges.
+ + **Location**. If possible, choose the same location used for Azure Cognitive Search and Azure AI services. A single location voids bandwidth charges.
+ **Account Kind**. Choose the default, *StorageV2 (general purpose v2)*.
If possible, create both in the same region and resource group for proximity and
1. Save the connection string to Notepad. You'll need it later when setting up the data source connection.
-### Cognitive Services
+### Azure AI services
-AI enrichment is backed by Cognitive Services, including Language service and Computer Vision for natural language and image processing. If your objective was to complete an actual prototype or project, you would at this point provision Cognitive Services (in the same region as Azure Cognitive Search) so that you can [attach it to a skillset](cognitive-search-attach-cognitive-services.md).
+AI enrichment is backed by Azure AI services, including Language service and Azure AI Vision for natural language and image processing. If your objective was to complete an actual prototype or project, you would at this point provision Azure AI services (in the same region as Azure Cognitive Search) so that you can [attach it to a skillset](cognitive-search-attach-cognitive-services.md).
-For this exercise, however, you can skip resource provisioning because Azure Cognitive Search can connect to Cognitive Services execute 20 transactions per indexer run, free of charge. Since this tutorial uses 14 transactions, the free allocation is sufficient. For larger projects, plan on provisioning Cognitive Services at the pay-as-you-go S0 tier.
+For this exercise, however, you can skip resource provisioning because Azure Cognitive Search can connect to Azure AI services and execute 20 transactions per indexer run, free of charge. Since this tutorial uses 14 transactions, the free allocation is sufficient. For larger projects, plan on provisioning Azure AI services at the pay-as-you-go S0 tier.

### Azure Cognitive Search
search Cognitive Search Tutorial Debug Sessions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/cognitive-search-tutorial-debug-sessions.md
Title: 'Tutorial: Debug skillsets' description: Debug sessions is an Azure portal tool used to find, diagnose, and repair problems in a skillset.-
Before you begin, have the following prerequisites in place:
+ [Sample PDFs (clinical trials)](https://github.com/Azure-Samples/azure-search-sample-data/tree/master/clinical-trials/clinical-trials-pdf-19). > [!NOTE]
-> This tutorial also uses [Azure Cognitive Services](https://azure.microsoft.com/services/cognitive-services/) for language detection, entity recognition, and key phrase extraction. Because the workload is so small, Cognitive Services is tapped behind the scenes for free processing for up to 20 transactions. This means that you can complete this exercise without having to create a billable Cognitive Services resource.
+> This tutorial also uses [Azure AI services](https://azure.microsoft.com/services/cognitive-services/) for language detection, entity recognition, and key phrase extraction. Because the workload is so small, Azure AI services is tapped behind the scenes for free processing for up to 20 transactions. This means that you can complete this exercise without having to create a billable Azure AI services resource.
## Set up your data
This tutorial touched on various aspects of skillset definition and processing.
+ [Skillsets in Azure Cognitive Search](cognitive-search-working-with-skillsets.md)
-+ [How to configure caching for incremental enrichment](cognitive-search-incremental-indexing-conceptual.md)
++ [How to configure caching for incremental enrichment](cognitive-search-incremental-indexing-conceptual.md)
search Cognitive Search Working With Skillsets https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/cognitive-search-working-with-skillsets.md
Title: Skillset concepts description: Skillsets are where you author an AI enrichment pipeline in Azure Cognitive Search. Learn important concepts and details about skillset composition.-
search Index Add Custom Analyzers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/index-add-custom-analyzers.md
Title: Add custom analyzers to string fields description: Configure text tokenizers and character filters to perform text analysis on strings during indexing and queries.-
In the table below, the token filters that are implemented using Apache Lucene a
|[shingle](https://lucene.apache.org/core/6_6_1/analyzers-common/org/apache/lucene/analysis/shingle/ShingleFilter.html)|ShingleTokenFilter|Creates combinations of tokens as a single token.<br /><br /> **Options**<br /><br /> maxShingleSize (type: int) - Defaults to 2.<br /><br /> minShingleSize (type: int) - Defaults to 2.<br /><br /> outputUnigrams (type: bool) - if true, the output stream contains the input tokens (unigrams) as well as shingles. The default is true.<br /><br /> outputUnigramsIfNoShingles (type: bool) - If true, override the behavior of outputUnigrams==false for those times when no shingles are available. The default is false.<br /><br /> tokenSeparator (type: string) - The string to use when joining adjacent tokens to form a shingle. The default is " ".<br /><br /> filterToken (type: string) - The string to insert for each position for which there is no token. The default is "_".| |[snowball](https://lucene.apache.org/core/6_6_1/analyzers-common/org/apache/lucene/analysis/snowball/SnowballFilter.html)|SnowballTokenFilter|Snowball Token Filter.<br /><br /> **Options**<br /><br /> language (type: string) - Allowed values include: "armenian", "basque", "catalan", "danish", "dutch", "english", "finnish", "french", "german", "german2", "hungarian", "italian", "kp", "lovins", "norwegian", "porter", "portuguese", "romanian", "russian", "spanish", "swedish", "turkish"| |[sorani_normalization](https://lucene.apache.org/core/6_6_1/analyzers-common/org/apache/lucene/analysis/ckb/SoraniNormalizationFilter.html)|SoraniNormalizationTokenFilter|Normalizes the Unicode representation of Sorani text.<br /><br /> **Options**<br /><br /> None.|
-|stemmer|StemmerTokenFilter|Language-specific stemming filter.<br /><br /> **Options**<br /><br /> language (type: string) - Allowed values include: <br /> - ["arabic"](https://lucene.apache.org/core/6_6_1/analyzers-common/org/apache/lucene/analysis/ar/ArabicStemmer.html)<br />- ["armenian"](https://snowballstem.org/algorithms/armenian/stemmer.html)<br />- ["basque"](https://snowballstem.org/algorithms/basque/stemmer.html)<br />- ["brazilian"](https://lucene.apache.org/core/6_6_1/analyzers-common/org/apache/lucene/analysis/br/BrazilianStemmer.html)<br />- "bulgarian"<br />- ["catalan"](https://snowballstem.org/algorithms/catalan/stemmer.html)<br />- ["czech"](https://portal.acm.org/citation.cfm?id=1598600)<br />- ["danish"](https://snowballstem.org/algorithms/danish/stemmer.html)<br />- ["dutch"](https://snowballstem.org/algorithms/dutch/stemmer.html)<br />- ["dutchKp"](https://snowballstem.org/algorithms/kraaij_pohlmann/stemmer.html)<br />- ["english"](https://snowballstem.org/algorithms/porter/stemmer.html)<br />- ["lightEnglish"](https://ciir.cs.umass.edu/pubfiles/ir-35.pdf)<br />- ["minimalEnglish"](https://www.researchgate.net/publication/220433848_How_effective_is_suffixing)<br />- ["possessiveEnglish"](https://lucene.apache.org/core/6_6_1/analyzers-common/org/apache/lucene/analysis/en/EnglishPossessiveFilter.html)<br />- ["porter2"](https://snowballstem.org/algorithms/english/stemmer.html)<br />- ["lovins"](https://snowballstem.org/algorithms/lovins/stemmer.html)<br />- ["finnish"](https://snowballstem.org/algorithms/finnish/stemmer.html)<br />- "lightFinnish"<br />- ["french"](https://snowballstem.org/algorithms/french/stemmer.html)<br />- ["lightFrench"](https://dl.acm.org/citation.cfm?id=1141523)<br />- ["minimalFrench"](https://dl.acm.org/citation.cfm?id=318984)<br />- "galician"<br />- "minimalGalician"<br />- ["german"](https://snowballstem.org/algorithms/german/stemmer.html)<br />- ["german2"](https://snowballstem.org/algorithms/german2/stemmer.html)<br />- ["lightGerman"](https://dl.acm.org/citation.cfm?id=1141523)<br />- "minimalGerman"<br />- ["greek"](https://sais.se/mthprize/2007/ntais2007.pdf)<br />- "hindi"<br />- ["hungarian"](https://snowballstem.org/algorithms/hungarian/stemmer.html)<br />- ["lightHungarian"](https://dl.acm.org/citation.cfm?id=1141523&dl=ACM&coll=DL&CFID=179095584&CFTOKEN=80067181)<br />- ["indonesian"](https://eprints.illc.uva.nl/741/2/MoL-2003-03.text.pdf)<br />- ["irish"](https://snowballstem.org/algorithms/irish/stemmer.html)<br />- ["italian"](https://snowballstem.org/algorithms/italian/stemmer.html)<br />- ["lightItalian"](https://www.ercim.eu/publication/ws-proceedings/CLEF2/savoy.pdf)<br />- ["sorani"](https://lucene.apache.org/core/6_6_1/analyzers-common/org/apache/lucene/analysis/ckb/SoraniStemmer.html)<br />- ["latvian"](https://lucene.apache.org/core/6_6_1/analyzers-common/org/apache/lucene/analysis/lv/LatvianStemmer.html)<br />- ["norwegian"](https://snowballstem.org/algorithms/norwegian/stemmer.html)<br />- ["lightNorwegian"](https://lucene.apache.org/core/6_6_1/analyzers-common/org/apache/lucene/analysis/no/NorwegianLightStemmer.html)<br />- ["minimalNorwegian"](https://lucene.apache.org/core/6_6_1/analyzers-common/org/apache/lucene/analysis/no/NorwegianMinimalStemmer.html)<br />- ["lightNynorsk"](https://lucene.apache.org/core/6_6_1/analyzers-common/org/apache/lucene/analysis/no/NorwegianLightStemmer.html)<br />- 
["minimalNynorsk"](https://lucene.apache.org/core/6_6_1/analyzers-common/org/apache/lucene/analysis/no/NorwegianMinimalStemmer.html)<br />- ["portuguese"](https://snowballstem.org/algorithms/portuguese/stemmer.html)<br />- ["lightPortuguese"](https://dl.acm.org/citation.cfm?id=1141523&dl=ACM&coll=DL&CFID=179095584&CFTOKEN=80067181)<br />- ["minimalPortuguese"](https://www.inf.ufrgs.br/~buriol/papers/Orengo_CLEF07.pdf)<br />- ["portugueseRslp"](https://www.inf.ufrgs.br//~viviane/rslp/index.htm)<br />- ["romanian"](https://snowballstem.org/otherapps/romanian/)<br />- ["russian"](https://snowballstem.org/algorithms/russian/stemmer.html)<br />- ["lightRussian"](https://doc.rero.ch/lm.php?url=1000%2C43%2C4%2C20091209094227-CA%2FDolamic_Ljiljana_-_Indexing_and_Searching_Strategies_for_the_Russian_20091209.pdf)<br />- ["spanish"](https://snowballstem.org/algorithms/spanish/stemmer.html)<br />- ["lightSpanish"](https://www.ercim.eu/publication/ws-proceedings/CLEF2/savoy.pdf)<br />- ["swedish"](https://snowballstem.org/algorithms/swedish/stemmer.html)<br />- "lightSwedish"<br />- ["turkish"](https://snowballstem.org/algorithms/turkish/stemmer.html)|
+|stemmer|StemmerTokenFilter|Language-specific stemming filter.<br /><br /> **Options**<br /><br /> language (type: string) - Allowed values include: <br /> - ["arabic"](https://lucene.apache.org/core/6_6_1/analyzers-common/org/apache/lucene/analysis/ar/ArabicStemmer.html)<br />- ["armenian"](https://snowballstem.org/algorithms/armenian/stemmer.html)<br />- ["basque"](https://snowballstem.org/algorithms/basque/stemmer.html)<br />- ["brazilian"](https://lucene.apache.org/core/6_6_1/analyzers-common/org/apache/lucene/analysis/br/BrazilianStemmer.html)<br />- "bulgarian"<br />- ["catalan"](https://snowballstem.org/algorithms/catalan/stemmer.html)<br />- ["czech"](https://portal.acm.org/citation.cfm?id=1598600)<br />- ["danish"](https://snowballstem.org/algorithms/danish/stemmer.html)<br />- ["dutch"](https://snowballstem.org/algorithms/dutch/stemmer.html)<br />- ["dutchKp"](https://snowballstem.org/algorithms/kraaij_pohlmann/stemmer.html)<br />- ["english"](https://snowballstem.org/algorithms/porter/stemmer.html)<br />- ["lightEnglish"](https://ciir.cs.umass.edu/pubfiles/ir-35.pdf)<br />- ["minimalEnglish"](https://www.researchgate.net/publication/220433848_How_effective_is_suffixing)<br />- ["possessiveEnglish"](https://lucene.apache.org/core/6_6_1/analyzers-common/org/apache/lucene/analysis/en/EnglishPossessiveFilter.html)<br />- ["porter2"](https://snowballstem.org/algorithms/english/stemmer.html)<br />- ["lovins"](https://snowballstem.org/algorithms/lovins/stemmer.html)<br />- ["finnish"](https://snowballstem.org/algorithms/finnish/stemmer.html)<br />- "lightFinnish"<br />- ["french"](https://snowballstem.org/algorithms/french/stemmer.html)<br />- ["lightFrench"](https://dl.acm.org/citation.cfm?id=1141523)<br />- ["minimalFrench"](https://dl.acm.org/citation.cfm?id=318984)<br />- "galician"<br />- "minimalGalician"<br />- ["german"](https://snowballstem.org/algorithms/german/stemmer.html)<br />- ["german2"](https://snowballstem.org/algorithms/german2/stemmer.html)<br />- ["lightGerman"](https://dl.acm.org/citation.cfm?id=1141523)<br />- "minimalGerman"<br />- ["greek"](https://sais.se/mthprize/2007/ntais2007.pdf)<br />- "hindi"<br />- ["hungarian"](https://snowballstem.org/algorithms/hungarian/stemmer.html)<br />- ["lightHungarian"](https://dl.acm.org/citation.cfm?id=1141523&dl=ACM&coll=DL&CFID=179095584&CFTOKEN=80067181)<br />- ["indonesian"](https://eprints.illc.uva.nl/741/2/MoL-2003-03.text.pdf)<br />- ["irish"](https://snowballstem.org/algorithms/irish/stemmer.html)<br />- ["italian"](https://snowballstem.org/algorithms/italian/stemmer.html)<br />- ["lightItalian"](https://www.ercim.eu/publication/ws-proceedings/CLEF2/savoy.pdf)<br />- ["sorani"](https://lucene.apache.org/core/6_6_1/analyzers-common/org/apache/lucene/analysis/ckb/SoraniStemmer.html)<br />- ["latvian"](https://lucene.apache.org/core/6_6_1/analyzers-common/org/apache/lucene/analysis/lv/LatvianStemmer.html)<br />- ["norwegian"](https://snowballstem.org/algorithms/norwegian/stemmer.html)<br />- ["lightNorwegian"](https://lucene.apache.org/core/6_6_1/analyzers-common/org/apache/lucene/analysis/no/NorwegianLightStemmer.html)<br />- ["minimalNorwegian"](https://lucene.apache.org/core/6_6_1/analyzers-common/org/apache/lucene/analysis/no/NorwegianMinimalStemmer.html)<br />- ["lightNynorsk"](https://lucene.apache.org/core/6_6_1/analyzers-common/org/apache/lucene/analysis/no/NorwegianLightStemmer.html)<br />- 
["minimalNynorsk"](https://lucene.apache.org/core/6_6_1/analyzers-common/org/apache/lucene/analysis/no/NorwegianMinimalStemmer.html)<br />- ["portuguese"](https://snowballstem.org/algorithms/portuguese/stemmer.html)<br />- ["lightPortuguese"](https://dl.acm.org/citation.cfm?id=1141523&dl=ACM&coll=DL&CFID=179095584&CFTOKEN=80067181)<br />- ["minimalPortuguese"](https://www.inf.ufrgs.br/~buriol/papers/Orengo_CLEF07.pdf)<br />- ["portugueseRslp"](https://www.inf.ufrgs.br/~viviane/rslp/index.htm)<br />- ["romanian"](https://snowballstem.org/otherapps/romanian/)<br />- ["russian"](https://snowballstem.org/algorithms/russian/stemmer.html)<br />- ["lightRussian"](https://doc.rero.ch/lm.php?url=1000%2C43%2C4%2C20091209094227-CA%2FDolamic_Ljiljana_-_Indexing_and_Searching_Strategies_for_the_Russian_20091209.pdf)<br />- ["spanish"](https://snowballstem.org/algorithms/spanish/stemmer.html)<br />- ["lightSpanish"](https://www.ercim.eu/publication/ws-proceedings/CLEF2/savoy.pdf)<br />- ["swedish"](https://snowballstem.org/algorithms/swedish/stemmer.html)<br />- "lightSwedish"<br />- ["turkish"](https://snowballstem.org/algorithms/turkish/stemmer.html)|
|[stemmer_override](https://lucene.apache.org/core/6_6_1/analyzers-common/org/apache/lucene/analysis/miscellaneous/StemmerOverrideFilter.html)|StemmerOverrideTokenFilter|Any dictionary-Stemmed terms are marked as keywords, which prevents stemming down the chain. Must be placed before any stemming filters.<br /><br /> **Options**<br /><br /> rules (type: string array) - Stemming rules in the following format "word => stem" for example "ran => run". The default is an empty list. Required.| |[stopwords](https://lucene.apache.org/core/6_6_1/analyzers-common/org/apache/lucene/analysis/core/StopFilter.html)|StopwordsTokenFilter|Removes stop words from a token stream. By default, the filter uses a predefined stop word list for English.<br /><br /> **Options**<br /><br /> stopwords (type: string array) - A list of stopwords. Can't be specified if a stopwordsList is specified.<br /><br /> stopwordsList (type: string) - A predefined list of stopwords. Can't be specified if "stopwords" is specified. Allowed values include:"arabic", "armenian", "basque", "brazilian", "bulgarian", "catalan", "czech", "danish", "dutch", "english", "finnish", "french", "galician", "german", "greek", "hindi", "hungarian", "indonesian", "irish", "italian", "latvian", "norwegian", "persian", "portuguese", "romanian", "russian", "sorani", "spanish", "swedish", "thai", "turkish", default: "english". Can't be specified if "stopwords" is specified. <br /><br /> ignoreCase (type: bool) - If true, all words are lower cased first. The default is false.<br /><br /> removeTrailing (type: bool) - If true, ignore the last search term if it's a stop word. The default is true. |[synonym](https://lucene.apache.org/core/6_6_1/analyzers-common/org/apache/lucene/analysis/synonym/SynonymFilter.html)|SynonymTokenFilter|Matches single or multi word synonyms in a token stream.<br /><br /> **Options**<br /><br /> synonyms (type: string array) - Required. List of synonyms in one of the following two formats:<br /><br /> -incredible, unbelievable, fabulous => amazing - all terms on the left side of => symbol are replaced with all terms on its right side.<br /><br /> -incredible, unbelievable, fabulous, amazing - A comma-separated list of equivalent words. Set the expand option to change how this list is interpreted.<br /><br /> ignoreCase (type: bool) - Case-folds input for matching. The default is false.<br /><br /> expand (type: bool) - If true, all words in the list of synonyms (if => notation is not used) map to one another. <br />The following list: incredible, unbelievable, fabulous, amazing is equivalent to: incredible, unbelievable, fabulous, amazing => incredible, unbelievable, fabulous, amazing<br /><br />- If false, the following list: incredible, unbelievable, fabulous, amazing are equivalent to: incredible, unbelievable, fabulous, amazing => incredible.|
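To show where token filters such as `stemmer` plug in, here's a hedged sketch of an index definition fragment that wires a custom analyzer to a language-specific stemming filter; the analyzer, filter, and field names are invented, and the `standard_v2` tokenizer and `#Microsoft.Azure.Search.*` type names are assumptions not shown in this excerpt:
```json
{
  "name": "hotels-sample",
  "fields": [
    { "name": "id", "type": "Edm.String", "key": true },
    { "name": "description", "type": "Edm.String", "searchable": true, "analyzer": "my_english_analyzer" }
  ],
  "analyzers": [
    {
      "name": "my_english_analyzer",
      "@odata.type": "#Microsoft.Azure.Search.CustomAnalyzer",
      "tokenizer": "standard_v2",
      "tokenFilters": [ "lowercase", "my_stemmer" ]
    }
  ],
  "tokenFilters": [
    {
      "name": "my_stemmer",
      "@odata.type": "#Microsoft.Azure.Search.StemmerTokenFilter",
      "language": "english"
    }
  ]
}
```
Filter order matters: lowercasing before stemming keeps the stemmer's input consistent, mirroring the placement rules called out for filters like `stemmer_override`.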
search Index Add Language Analyzers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/index-add-language-analyzers.md
Title: Add language analyzers to string fields description: Multi-lingual lexical analysis for non-English queries and indexes in Azure Cognitive Search.-
search Index Ranking Similarity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/index-ranking-similarity.md
Title: Configure relevance scoring description: Enable Okapi BM25 ranking to upgrade the search ranking and relevance behavior on older Azure Search services.-
PUT [service-name].search.windows.net/indexes/[index name]?api-version=2020-06-3
+ [REST API Reference](/rest/api/searchservice/) + [Add scoring profiles to your index](index-add-scoring-profiles.md) + [Create Index API](/rest/api/searchservice/create-index)
-+ [Azure Cognitive Search .NET SDK](/dotnet/api/overview/azure/search)
++ [Azure Cognitive Search .NET SDK](/dotnet/api/overview/azure/search)
search Index Similarity And Scoring https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/index-similarity-and-scoring.md
Title: Relevance and scoring description: Explains the concepts of relevance and scoring in Azure Cognitive Search, and what a developer can do to customize the scoring result.-
search Index Sql Relational Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/index-sql-relational-data.md
Title: Model SQL relational data for import and indexing description: Learn how to model relational data, de-normalized into a flat result set, for indexing and full text search in Azure Cognitive Search.-
Using your own data set, you can use the [Import data wizard](search-import-data
Try the following quickstart to learn the basic steps of the Import data wizard. > [!div class="nextstepaction"]
-> [Quickstart: Create a search index using Azure portal](search-get-started-portal.md)
+> [Quickstart: Create a search index using Azure portal](search-get-started-portal.md)
search Knowledge Store Connect Power Bi https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/knowledge-store-connect-power-bi.md
Title: Connect to a knowledge store with Power BI description: Connect an Azure Cognitive Search knowledge store with Power BI for analysis and exploration.-
For a demonstration of using Power BI with a knowledge store, watch the followin
## Next steps > [!div class="nextstepaction"]
-> [Tables in Power BI reports and dashboards](/power-bi/visuals/power-bi-visualization-tables)
+> [Tables in Power BI reports and dashboards](/power-bi/visuals/power-bi-visualization-tables)
search Knowledge Store Create Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/knowledge-store-create-portal.md
Before you begin, have the following prerequisites in place:
[Upload the file to a blob container](../storage/blobs/storage-quickstart-blobs-portal.md) in Azure Storage.
-This quickstart also uses [Cognitive Services](https://azure.microsoft.com/services/cognitive-services/) for AI enrichment. Because the workload is so small, Cognitive Services is tapped behind the scenes for free processing for up to 20 transactions. This means that you can complete this exercise without having to create an extra Cognitive Services resource.
+This quickstart also uses [Azure AI services](https://azure.microsoft.com/services/cognitive-services/) for AI enrichment. Because the workload is so small, Azure AI services is tapped behind the scenes for free processing for up to 20 transactions. This means that you can complete this exercise without having to create an extra Azure AI multi-service resource.
## Start the wizard
Because the data is multiple rows in one CSV file, set the *parsing mode* to get
In this wizard step, add skills for AI enrichment. The source data consists of customer reviews in English and French. Skills that are relevant for this data set include key phrase extraction, sentiment detection, and text translation. In a later step, these enrichments are "projected" into a knowledge store as Azure tables.
-1. Expand **Attach Cognitive Services**. **Free (Limited enrichments)** is selected by default. You can use this resource because the number of records in HotelReviews-Free.csv is 19 and this free resource allows up to 20 transactions a day.
+1. Expand **Attach Azure AI services**. **Free (Limited enrichments)** is selected by default. You can use this resource because the number of records in HotelReviews-Free.csv is 19 and this free resource allows up to 20 transactions a day.
1. Expand **Add enrichments**.
search Knowledge Store Create Rest https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/knowledge-store-create-rest.md
Title: Create a knowledge store using REST description: Use the REST API and Postman to create an Azure Cognitive Search knowledge store for persisting AI enrichments from skillset.-
To make the initial data set available, the hotel reviews are first imported int
## Load data
-This step uses Azure Cognitive Search, Azure Blob Storage, and [Azure Cognitive Services](https://azure.microsoft.com/services/cognitive-services/) for the AI. Because the workload is so small, Cognitive Services is tapped behind the scenes to provide free processing for up to 20 transactions daily. A small workload means that you can skip creating or attaching a Cognitive Services resource.
+This step uses Azure Cognitive Search, Azure Blob Storage, and [Azure AI services](https://azure.microsoft.com/services/cognitive-services/) for the AI. Because the workload is so small, Azure AI services is tapped behind the scenes to provide free processing for up to 20 transactions daily. A small workload means that you can skip creating or attaching an Azure AI multi-service resource.
1. [Download HotelReviews_Free.csv](https://github.com/Azure-Samples/azure-search-sample-data/blob/master/hotelreviews/HotelReviews_data.csv). This CSV contains 19 pieces of customer feedback about a single hotel (originates from Kaggle.com). The file is in a repo with other sample data. If you don't want the whole repo, copy the raw content and paste it into a spreadsheet app on your device.
If you're using a free service, remember that you're limited to three indexes, i
## Next steps
-Now that you've enriched your data by using Cognitive Services and projected the results to a knowledge store, you can use Storage Explorer or other apps to explore your enriched data set.
+Now that you've enriched your data by using Azure AI services and projected the results to a knowledge store, you can use Storage Explorer or other apps to explore your enriched data set.
> [!div class="nextstepaction"]
-> [Get started with Storage Explorer](../vs-azure-tools-storage-manage-with-storage-explorer.md)
+> [Get started with Storage Explorer](../vs-azure-tools-storage-manage-with-storage-explorer.md)
search Knowledge Store Projection Example Long https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/knowledge-store-projection-example-long.md
Pay close attention to skill outputs (targetNames). Outputs written to the enric
], "cognitiveServices": { "@odata.type": "#Microsoft.Azure.Search.CognitiveServicesByKey",
- "description": "A Cognitive Services resource in the same region as Search.",
- "key": "<COGNITIVE SERVICES All-in-ONE KEY>"
+ "description": "An Azure AI services resource in the same region as Search.",
+ "key": "<Azure AI services All-in-ONE KEY>"
}, "knowledgeStore": null }
search Knowledge Store Projection Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/knowledge-store-projection-overview.md
Recall that projections are exclusive to knowledge stores, and are not used to s
Review syntax and examples for each projection type. > [!div class="nextstepaction"]
-> [Define projections in a knowledge store](knowledge-store-projections-examples.md)
+> [Define projections in a knowledge store](knowledge-store-projections-examples.md)
search Knowledge Store Projection Shape https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/knowledge-store-projection-shape.md
Title: Shaping data for knowledge store description: Define the data structures in a knowledge store by creating data shapes and passing them to a projection.-
search Monitor Azure Cognitive Search https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/monitor-azure-cognitive-search.md
The monitoring framework for Azure Cognitive Search is provided by [Azure Monito
+ [Analyze performance in Azure Cognitive Search](search-performance-analysis.md) + [Monitor queries](search-monitor-queries.md)
-+ [Monitor indexer-based indexing](search-howto-monitor-indexers.md)
++ [Monitor indexer-based indexing](search-howto-monitor-indexers.md)
search Performance Benchmarks https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/performance-benchmarks.md
Title: Performance benchmarks description: Learn about the performance of Azure Cognitive Search through various performance benchmarks-
search Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/policy-reference.md
Title: Built-in policy definitions for Azure Cognitive Search description: Lists Azure Policy built-in policy definitions for Azure Cognitive Search. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 07/06/2023 Last updated : 07/18/2023
search Reference Stopwords https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/reference-stopwords.md
For the stopword list for Lucene analyzers, see the [Apache Lucene source code o
+ [Add language analyzers to string fields](index-add-language-analyzers.md) + [Add custom analyzers to string fields](index-add-custom-analyzers.md) + [Full text search in Azure Cognitive Search](search-lucene-query-architecture.md)
-+ [Analyzers for text processing in Azure Cognitive Search](search-analyzers.md)
++ [Analyzers for text processing in Azure Cognitive Search](search-analyzers.md)
search Resource Partners Knowledge Mining https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/resource-partners-knowledge-mining.md
Get expert help from Microsoft partners who build comprehensive solutions that i
| ![Enlighten Designs](media/resource-partners/enlighten-ver2.png "Enlighten Designs company logo") | [**Enlighten Designs**](https://www.enlighten.co.nz) is an award-winning innovation studio that has been enabling client value and delivering digitally transformative experiences for over 22 years. We are pushing the boundaries of the Microsoft technology toolbox, harnessing Cognitive Search, application development, and advanced Azure services that have the potential to transform our world. As experts in Power BI and data visualization, we hold the titles for the most viewed, and the most downloaded Power BI visuals in the world and are Microsoft's Data Journalism agency of record when it comes to data storytelling. | [Product page](https://www.enlighten.co.nz/Services/Data-Visualisation/Azure-Cognitive-Search) | | ![Neudesic](media/resource-partners/neudesic-logo.png "Neudesic company logo") | [**Neudesic**](https://www.neudesic.com/) is the trusted technology partner in business innovation, delivering impactful business results to clients through digital modernization and evolution. Our consultants bring business and technology expertise together, offering a wide range of cloud and data-driven solutions, including custom application development, data and artificial intelligence, comprehensive managed services, and business software products. Founded in 2002, Neudesic is a privately held company headquartered in Irvine, California. | [Product page](https://www.neudesic.com/services/modern-workplace/document-intelligence-platform-schedule-demo/)| | ![OrangeNXT](media/resource-partners/orangenxt-beldmerk-boven-160px.png "OrangeNXT company logo") | [**OrangeNXT**](https://orangenxt.com/) offers expertise in data consolidation, data modeling, and building skillsets that include custom logic developed for specific use-cases.</br></br>digitalNXT Search is an OrangeNXT solution that combines AI, optical character recognition (OCR), and natural language processing in an Azure Cognitive Search pipeline to help you extract search results from multiple structured and unstructured data sources. Integral to digitalNXT Search are advanced custom cognitive skills for interpreting and correlating selected data.</br></br>| [Product page](https://orangenxt.com/solutions/digitalnxt/digitalnxt-search/)|
-| ![Plain Concepts](media/resource-partners/plain-concepts-logo.png "Plain Concepts company logo") | [**Plain Concepts**](https://www.plainconcepts.com/contact/) is a Microsoft Partner with over 15 years of cloud, data, and AI expertise on Azure, and more than 12 Microsoft MVP awards. We specialize in the creation of new data relationships among heterogeneous information sources, which combined with our experience with Artificial Intelligence, Machine Learning, and Cognitive Services, exponentially increases the productivity of both machines and human teams. We help customers to face the digital revolution with the AI-based solutions that best suits their company requirements.| [Product page](https://www.plainconcepts.com/artificial-intelligence/) |
+| ![Plain Concepts](media/resource-partners/plain-concepts-logo.png "Plain Concepts company logo") | [**Plain Concepts**](https://www.plainconcepts.com/contact/) is a Microsoft Partner with over 15 years of cloud, data, and AI expertise on Azure, and more than 12 Microsoft MVP awards. We specialize in the creation of new data relationships among heterogeneous information sources, which combined with our experience with Artificial Intelligence, Machine Learning, and Azure AI services, exponentially increases the productivity of both machines and human teams. We help customers face the digital revolution with the AI-based solutions that best suit their company requirements.| [Product page](https://www.plainconcepts.com/artificial-intelligence/) |
| ![Raytion](media/resource-partners/raytion-logo-blue.png "Raytion company logo") | [**Raytion**](https://www.raytion.com/) is an internationally operating IT business consultancy with a strategic focus on collaboration, search and cloud. Raytion offers intelligent and fully featured search solutions based on Microsoft Azure Cognitive Search and the Raytion product suite. Raytion's solutions enable an easy indexation of a broad range of enterprise content systems and provide a sophisticated search experience, which can be tailored to individual requirements. They are the foundation of enterprise search, knowledge searches, service desk agent support and many more applications. | [Product page](https://www.raytion.com/connectors) |
search Resource Training https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/resource-training.md
Training modules deliver an end-to-end experience that helps you build skills an
+ [Introduction to Azure Cognitive Search (Microsoft)](/training/modules/intro-to-azure-search/) + [Implement knowledge mining with Azure Cognitive Search (Microsoft)](/training/paths/implement-knowledge-mining-azure-cognitive-search/) + [Add search to apps (Pluralsight)](https://www.pluralsight.com/courses/azure-adding-search-abilities-apps)
-+ [Developer course (Pluralsight)](https://www.pluralsight.com/courses/microsoft-azure-textual-content-search-enabling)
++ [Developer course (Pluralsight)](https://www.pluralsight.com/courses/microsoft-azure-textual-content-search-enabling)
search Search Analyzers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-analyzers.md
Title: Analyzers for linguistic and text processing description: Assign analyzers to searchable text fields in an index to replace default standard Lucene with custom, predefined or language-specific alternatives.-
To learn more about analyzers, see the following articles:
+ [Add a language analyzer](index-add-language-analyzers.md) + [Add a custom analyzer](index-add-custom-analyzers.md) + [Create a search index](search-what-is-an-index.md)
-+ [Create a multi-language index](search-language-support.md)
++ [Create a multi-language index](search-language-support.md)
search Search Api Migration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-api-migration.md
Version 2019-05-06 is the previous generally available release of the REST API.
* [Autocomplete](index-add-suggesters.md) is a typeahead feature that completes a partially specified term input. * [Complex types](search-howto-complex-data-types.md) provides native support for structured object data in a search index. * [JsonLines parsing modes](search-howto-index-json-blobs.md), part of Azure Blob indexing, creates one search document per JSON entity that is separated by a newline.
-* [AI enrichment](cognitive-search-concept-intro.md) provides indexing that uses the AI enrichment engines of Cognitive Services.
+* [AI enrichment](cognitive-search-concept-intro.md) provides indexing that uses the AI enrichment engines of Azure AI services.
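To illustrate the JsonLines parsing mode mentioned in the list above, here's a minimal, hedged sketch of creating a blob indexer with that configuration through the REST API. The service, indexer, data source, and index names are placeholders, not values from this article.

```python
# Hypothetical sketch: create a blob indexer that uses the jsonLines parsing mode,
# so each newline-delimited JSON entity becomes its own search document.
import requests

SEARCH_SERVICE = "my-search-service"        # placeholder
ADMIN_KEY = "<search-admin-api-key>"        # placeholder

indexer = {
    "name": "jsonlines-indexer",
    "dataSourceName": "my-blob-datasource",  # placeholder data source
    "targetIndexName": "my-index",           # placeholder index
    "parameters": {
        "configuration": {"parsingMode": "jsonLines"}
    },
}

resp = requests.put(
    f"https://{SEARCH_SERVICE}.search.windows.net/indexers/jsonlines-indexer"
    "?api-version=2020-06-30",
    headers={"Content-Type": "application/json", "api-key": ADMIN_KEY},
    json=indexer,
)
resp.raise_for_status()
```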
### Breaking changes
search Search Blob Metadata Properties https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-blob-metadata-properties.md
Title: Content metadata properties description: Metadata properties can provide content to fields in a search index. This article lists metadata properties supported in Azure Cognitive Search.-
The following table summarizes processing done for each document format, and des
* [Indexers in Azure Cognitive Search](search-indexer-overview.md) * [AI enrichment overview](cognitive-search-concept-intro.md) * [Blob indexing overview](search-blob-storage-integration.md)
-* [SharePoint indexing](search-howto-index-sharepoint-online.md)
+* [SharePoint indexing](search-howto-index-sharepoint-online.md)
search Search Capacity Planning https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-capacity-planning.md
SUs, pricing, and capacity are explained in detail on the Azure website. For mor
## Next steps > [!div class="nextstepaction"]
-> [Manage costs](search-sku-manage-costs.md)
+> [Manage costs](search-sku-manage-costs.md)
search Search Create Service Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-create-service-portal.md
Previously updated : 01/25/2023 Last updated : 07/17/2023 # Create an Azure Cognitive Search service in the portal
The following service properties are fixed for the lifetime of the service. Beca
## Subscribe (free or paid)
-To try search for free, [open a free Azure account](https://azure.microsoft.com/pricing/free-trial/?WT.mc_id=A261C142F) and then create your search service using the **Free** tier. Because the tier is fixed, it will never transition to become a billable tier.
+To try search for free, [open a free Azure account](https://azure.microsoft.com/pricing/free-trial/?WT.mc_id=A261C142F) and then create your search service by choosing the **Free** tier. You can have one free search service per Azure subscription. Free search services are intended for short-term evaluation of the product for non-production applications. If you decide you would like to continue using the service for a production application, create a new search service on a billable tier.
Alternatively, you can use free credits to try out paid Azure services, which means you can create your search service at **Basic** or above to get more capacity. Your credit card is never charged unless you explicitly change your settings and ask to be charged. Another approach is to [activate Azure credits in a Visual Studio subscription](https://azure.microsoft.com/pricing/member-offers/msdn-benefits-details/?WT.mc_id=A261C142F). A Visual Studio subscription gives you credits every month you can use for paid Azure services.
Two notable exceptions might lead to provisioning one or more search services in
Some features are subject to regional availability. If you require any of following features, choose a region that provides them:
-+ [AI enrichment](cognitive-search-concept-intro.md) requires Cognitive Services to be in the same physical region as Azure Cognitive Search. There are just a few regions that *don't* provide both. The [Products available by region](https://azure.microsoft.com/global-infrastructure/services/?products=search) page indicates a common regional presence by showing two stacked check marks. An unavailable combination has a missing check mark. The time piece icon indicates future availability.
++ [AI enrichment](cognitive-search-concept-intro.md) requires Azure AI services to be in the same physical region as Azure Cognitive Search. There are just a few regions that *don't* provide both. The [Products available by region](https://azure.microsoft.com/global-infrastructure/services/?products=search) page indicates a common regional presence by showing two stacked check marks. An unavailable combination has a missing check mark. The time piece icon indicates future availability. :::image type="content" source="media/search-create-service-portal/region-availability.png" lightbox="media/search-create-service-portal/region-availability.png" alt-text="Screenshot of the regional availability page." border="true":::
search Search Data Sources Gallery https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-data-sources-gallery.md
The M-Files connector enables indexing of content managed by the M-Files platfor
by [BA Insight](https://www.bainsight.com/)
-BA Insight's MediaPlatform PrimeTime indexing connector makes it possible to make the content accessible to users via an organization's enterprise search platform, combining the connector with BA Insight's SmartHub. The BA Insight MediaPlatform PrimeTime Connector retrieves information about channels and videos from MediaPlatform PrimeTime and indexes them via Azure Cognitive Search.
+BA Insight's MediaPlatform PrimeTime indexing connector makes it possible to make the content accessible to users via an organization's enterprise search platform, combining the connector with BA Insight's SmartHub. The BA Insight MediaPlatform PrimeTime Connector retrieves information about channels and videos from MediaPlatform PrimeTime and indexes them via Azure Cognitive Search.
[More details](https://www.bainsight.com/connectors/mediaplatform-primetime-connector-sharepoint-azure-elasticsearch/)
search Search Data Sources Terms Of Use https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-data-sources-terms-of-use.md
Title: Terms of Use (partner data sources) description: Terms of use for partner and third-party data source connectors.-
The use of [partner data source connectors](search-data-sources-gallery.md#data-
Microsoft makes no representation, warranty, or guarantee in relation to the third-party data prior to indexing into your Azure tenant. Microsoft assumes no liability for errors that may occur during the indexing of third-party data. You agree that any data sourced from a third-party provider is provided "as is," and with respect to the index, is provided subject to your configuration. Indexed data in your Azure tenant will be considered "Customer Data" for purposes of the [Online Services Terms](https://www.microsoftvolumelicensing.com/DocumentSearch.aspx?Mode=3&DocumentTypeId=31), [Privacy Statement](https://privacy.microsoft.com/privacystatement), and Microsoft's other data obligations.
-For more information on how we use Customer Data in Azure, see the [Online Services Terms](https://www.microsoftvolumelicensing.com/DocumentSearch.aspx?Mode=3&DocumentTypeId=31) and [Privacy Statement](https://privacy.microsoft.com/privacystatement). You can also find out more about Microsoft's commitments to data protection and privacy by visiting our [Trust Center](https://www.microsoft.com/trust-center).
+For more information on how we use Customer Data in Azure, see the [Online Services Terms](https://www.microsoftvolumelicensing.com/DocumentSearch.aspx?Mode=3&DocumentTypeId=31) and [Privacy Statement](https://privacy.microsoft.com/privacystatement). You can also find out more about Microsoft's commitments to data protection and privacy by visiting our [Trust Center](https://www.microsoft.com/trust-center).
search Search Dotnet Mgmt Sdk Migration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-dotnet-mgmt-sdk-migration.md
Version 3.0 adds private endpoint protection by restricting access to IP ranges,
| [NetworkRuleSet](/rest/api/searchmanagement/2021-04-01-preview/services/create-or-update#networkruleset) | IP firewall | Restrict access to a service endpoint to a list of allowed IP addresses. See [Configure IP firewall](service-configure-firewall.md) for concepts and portal instructions. | | [Shared Private Link Resource](/rest/api/searchmanagement/2021-04-01-preview/shared-private-link-resources) | Private Link | Create a shared private link resource to be used by a search service. | | [Private Endpoint Connections](/rest/api/searchmanagement/2021-04-01-preview/private-endpoint-connections) | Private Link | Establish and manage connections to a search service through private endpoint. See [Create a private endpoint](service-create-private-endpoint.md) for concepts and portal instructions.|
-| [Private Link Resources](/rest/api/searchmanagement/2021-04-01-preview/private-link-resources) | Private Link | For a search service that has a private endpoint connection, get a list of all services used in the same virtual network. If your search solution includes indexers that pull from Azure data sources (such as Azure Storage, Azure Cosmos DB, Azure SQL), or uses Cognitive Services or Key Vault, then all of those resources should have endpoints in the virtual network, and this API should return a list. |
+| [Private Link Resources](/rest/api/searchmanagement/2021-04-01-preview/private-link-resources) | Private Link | For a search service that has a private endpoint connection, get a list of all services used in the same virtual network. If your search solution includes indexers that pull from Azure data sources (such as Azure Storage, Azure Cosmos DB, Azure SQL), or uses Azure AI services or Key Vault, then all of those resources should have endpoints in the virtual network, and this API should return a list. |
| [PublicNetworkAccess](/rest/api/searchmanagement/2021-04-01-preview/services/create-or-update#publicnetworkaccess)| Private Link | This is a property on Create or Update Service requests. When disabled, private link is the only access modality. | ### Breaking changes
search Search Get Started Arm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-get-started-arm.md
Select the following image to sign in to Azure and open a template. The template
[![Deploy to Azure](../media/template-deployments/deploy-to-azure.svg)](https://portal.azure.com/#create/Microsoft.Template/uri/https%3A%2F%2Fraw.githubusercontent.com%2Fazure%2Fazure-quickstart-templates%2Fmaster%2Fquickstarts%2Fmicrosoft.search%2Fazure-search-create%2Fazuredeploy.json)
-The portal displays a form that allows you to easily provide parameter values. Some parameters are pre-filled with the default values from the template. You will need to provide your subscription, resource group, location, and service name. If you want to use Cognitive Services in an [AI enrichment](cognitive-search-concept-intro.md) pipeline, for example to analyze binary image files for text, choose a location that offers both Cognitive Search and Cognitive Services. Both services are required to be in the same region for AI enrichment workloads. Once you have completed the form, you will need to agree to the terms and conditions and then select the purchase button to complete your deployment.
+The portal displays a form that allows you to easily provide parameter values. Some parameters are pre-filled with the default values from the template. You will need to provide your subscription, resource group, location, and service name. If you want to use Azure AI services in an [AI enrichment](cognitive-search-concept-intro.md) pipeline, for example to analyze binary image files for text, choose a location that offers both Cognitive Search and Azure AI services. Both services are required to be in the same region for AI enrichment workloads. Once you have completed the form, you will need to agree to the terms and conditions and then select the purchase button to complete your deployment.
> [!div class="mx-imgBorder"] > ![Azure portal display of template](./media/search-get-started-arm/arm-portalscrnsht.png)
search Search Get Started Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-get-started-portal.md
An indexer is a source-specific crawler that can read metadata and content from
### Step 2 - Skip the "Enrich content" page
-The wizard supports the creation of an [AI enrichment pipeline](cognitive-search-concept-intro.md) for incorporating the Cognitive Services AI algorithms into indexing.
+The wizard supports the creation of an [AI enrichment pipeline](cognitive-search-concept-intro.md) for incorporating Azure AI services algorithms into indexing.
We'll skip this step for now, and move directly on to **Customize target index**.
search Search How To Alias https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-how-to-alias.md
Title: Create an index alias description: Create an alias to define a secondary name that can be used to refer to an index for querying, indexing, and other operations.-
search Search How To Create Search Index https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-how-to-create-search-index.md
Use the following links to become familiar with loading an index with data, or e
+ [Data import overview](search-what-is-data-import.md) + [Add, Update or Delete Documents (REST)](/rest/api/searchservice/addupdate-or-delete-documents)
-+ [Synonym maps](search-synonyms.md)
++ [Synonym maps](search-synonyms.md)
search Search How To Load Search Index https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-how-to-load-search-index.md
Azure Cognitive Search supports document-level operations so that you can look u
+ [Search indexes overview](search-what-is-an-index.md) + [Data import overview](search-what-is-data-import.md) + [Import data wizard overview](search-import-data-portal.md)
-+ [Indexers overview](search-indexer-overview.md)
++ [Indexers overview](search-indexer-overview.md)
search Search Howto Aad https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-howto-aad.md
Title: Configure search apps for Azure AD description: Acquire a token from Azure Active Directory to authorize search requests to an app built on Azure Cognitive Search.-
This article shows you how to configure your client for Azure AD:
+ For authorization, you'll assign an Azure role to the managed identity that grants permissions to run queries or manage indexing jobs.
-+ Update your client code to call [`TokenCredential()`](/dotnet/api/azure.core.tokencredential). For example, you can get started with new SearchClient(endpoint, new `DefaultAzureCredential()`) to authenticate via Azure AD using [Azure.Identity](https://github.com/Azure/azure-sdk-for-net/blob/main/sdk/identity/Azure.Identity/README.md).
++ Update your client code to call [`TokenCredential()`](/dotnet/api/azure.core.tokencredential). For example, you can get started with `new SearchClient(endpoint, new DefaultAzureCredential())` to authenticate via Azure AD using [Azure.Identity](https://github.com/Azure/azure-sdk-for-net/blob/main/sdk/identity/Azure.Identity/README.md). ## Configure role-based access for data plane
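The bullet above describes the .NET pattern. For illustration only, here's a hedged Python equivalent using the `azure-identity` and `azure-search-documents` packages; the endpoint and index name are placeholders.

```python
# Hypothetical sketch: authenticate search requests with Azure AD via
# DefaultAzureCredential instead of an API key.
# Requires: pip install azure-search-documents azure-identity
from azure.identity import DefaultAzureCredential
from azure.search.documents import SearchClient

endpoint = "https://my-search-service.search.windows.net"  # placeholder
index_name = "hotels-sample-index"                          # placeholder

client = SearchClient(
    endpoint=endpoint,
    index_name=index_name,
    credential=DefaultAzureCredential(),  # picks up managed identity, CLI login, etc.
)

results = client.search(search_text="*", top=5)
for doc in results:
    print(doc)
```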
search Search Howto Concurrency https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-howto-concurrency.md
Try modifying other samples to exercise ETags or AccessCondition objects.
+ [Common HTTP request and response headers](/rest/api/searchservice/common-http-request-and-response-headers-used-in-azure-search) + [HTTP status codes](/rest/api/searchservice/http-status-codes)
-+ [Index operations (REST API)](/rest/api/searchservice/index-operations)
++ [Index operations (REST API)](/rest/api/searchservice/index-operations)
search Search Howto Connecting Azure Sql Iaas To Azure Search Using Indexers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-howto-connecting-azure-sql-iaas-to-azure-search-using-indexers.md
Title: Indexer connection to SQL Server on Azure VMs description: Enable encrypted connections and configure the firewall to allow connections to SQL Server on an Azure virtual machine (VM) from an indexer on Azure Cognitive Search.-
Clusters in different regions connect to different traffic managers. Regardless
## Next steps
-With configuration out of the way, you can now specify a SQL Server on Azure VM as the data source for an Azure Cognitive Search indexer. For more information, see [Index data from Azure SQL](search-howto-connecting-azure-sql-database-to-azure-search-using-indexers.md).
+With configuration out of the way, you can now specify a SQL Server on Azure VM as the data source for an Azure Cognitive Search indexer. For more information, see [Index data from Azure SQL](search-howto-connecting-azure-sql-database-to-azure-search-using-indexers.md).
search Search Howto Incremental Index https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-howto-incremental-index.md
Title: Enable caching for incremental enrichment (preview) description: Enable caching of enriched content for potential reuse when modifying downstream skills and projections in an AI enrichment pipeline. -
Incremental enrichment is applicable on indexers that contain skillsets, providi
+ [Incremental enrichment (lifecycle and management)](cognitive-search-incremental-indexing-conceptual.md) + [Skillset concepts and composition](cognitive-search-working-with-skillsets.md) + [Create a skillset](cognitive-search-defining-skillset.md)
-+ [Tutorial: Use REST and AI to generate searchable content from Azure blobs](cognitive-search-tutorial-blob.md)
++ [Tutorial: Use REST and AI to generate searchable content from Azure blobs](cognitive-search-tutorial-blob.md)
search Search Howto Index Azure Data Lake Storage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-howto-index-azure-data-lake-storage.md
Title: Azure Data Lake Storage Gen2 indexer description: Set up an Azure Data Lake Storage (ADLS) Gen2 indexer to automate indexing of content and metadata for full text search in Azure Cognitive Search.-
search Search Howto Index Changed Deleted Blobs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-howto-index-changed-deleted-blobs.md
Title: Changed and deleted blobs description: Indexers that index from Azure Storage can pick up new and changed content automatically. This article describes the strategies.-
search Search Howto Index Cosmosdb Gremlin https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-howto-index-cosmosdb-gremlin.md
Because terminology can be confusing, it's worth noting that [Azure Cosmos DB in
The data source definition specifies the data to index, credentials, and policies for identifying changes in the data. A data source is defined as an independent resource so that it can be used by multiple indexers.
-For this call, specify a [preview REST API version](search-api-preview.md) (2020-06-30-Preview or 2021-04-30-Preview) to create a data source that connects via Azure Cosmos DB for Apache Gremlin.
+For this call, specify a [preview REST API version](search-api-preview.md) (2020-06-30-Preview or 2021-04-30-Preview) to create a data source that connects via Azure Cosmos DB for Apache Gremlin.
1. [Create or update a data source](/rest/api/searchservice/preview-api/create-or-update-data-source) to set its definition:
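The data source definition itself isn't shown in this excerpt. As a minimal sketch only, the request below shows the general shape of such a definition; the account, database, and graph names are placeholders, and the connection-string format (including the `ApiKind=Gremlin` suffix) is an assumption for illustration rather than a verified value from this article.

```python
# Hypothetical sketch: create a data source for Azure Cosmos DB for Apache Gremlin
# using a preview REST API version.
import requests

SEARCH_SERVICE = "my-search-service"      # placeholder
ADMIN_KEY = "<search-admin-api-key>"      # placeholder

data_source = {
    "name": "cosmos-gremlin-datasource",
    "type": "cosmosdb",
    "credentials": {
        "connectionString": (
            "AccountEndpoint=https://<cosmos-account>.documents.azure.com;"
            "AccountKey=<cosmos-account-key>;Database=<database-name>;ApiKind=Gremlin"
        )
    },
    "container": {"name": "<graph-name>"},
}

resp = requests.put(
    f"https://{SEARCH_SERVICE}.search.windows.net/datasources/cosmos-gremlin-datasource"
    "?api-version=2021-04-30-Preview",
    headers={"Content-Type": "application/json", "api-key": ADMIN_KEY},
    json=data_source,
)
resp.raise_for_status()
```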
Notice how the Output Field Mapping starts with `/document` and does not include
+ For more information about Azure Cognitive Search scenarios and pricing, see the [Search service page on azure.microsoft.com](https://azure.microsoft.com/services/search/).
-+ To learn about network configuration for indexers, see the [Indexer access to content protected by Azure network security features](search-indexer-securing-resources.md).
++ To learn about network configuration for indexers, see the [Indexer access to content protected by Azure network security features](search-indexer-securing-resources.md).
search Search Howto Index Cosmosdb Mongodb https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-howto-index-cosmosdb-mongodb.md
Title: Indexing with Azure Cosmos DB for MongoDB description: Set up a search indexer to index data stored in Azure Cosmos DB for full text search in Azure Cognitive Search. This article explains how index data in Azure Cosmos DB for MongoDB.-
You can now control how you [run the indexer](search-howto-run-reset-indexers.md
+ [Set up an indexer connection to an Azure Cosmos DB database using a managed identity](search-howto-managed-identities-cosmos-db.md) + [Index large data sets](search-howto-large-index.md)
-+ [Indexer access to content protected by Azure network security features](search-indexer-securing-resources.md)
++ [Indexer access to content protected by Azure network security features](search-indexer-securing-resources.md)
search Search Howto Index Encrypted Blobs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-howto-index-encrypted-blobs.md
If you don't have an Azure subscription, open a [free account](https://azure.mic
Custom skill deployment creates an Azure Function app and an Azure Storage account. Since these resources are created for you, they aren't listed as a prerequisite. When you're finished with this tutorial, remember to clean up the resources so that you aren't billed for services you're not using. > [!NOTE]
-> Skillsets often require [attaching a Cognitive Services resource](cognitive-search-attach-cognitive-services.md). As written, this skillset has no dependency on Cognitive Services and thus no key is required. If you later add enrichments that invoke built-in skills, remember to update your skillset accordingly.
+> Skillsets often require [attaching an Azure AI multi-service resource](cognitive-search-attach-cognitive-services.md). As written, this skillset has no dependency on Azure AI services and thus no key is required. If you later add enrichments that invoke built-in skills, remember to update your skillset accordingly.
## 1 - Create services and collect credentials
You can find and manage resources in the portal, using the All resources or Reso
Now that you have successfully indexed encrypted files, you can [iterate on this pipeline by adding more cognitive skills](cognitive-search-defining-skillset.md). This will allow you to enrich your data and gain additional insights into it.
-If you are working with doubly encrypted data, you might want to investigate the index encryption features available in Azure Cognitive Search. Although the indexer needs decrypted data for indexing purposes, once the index exists, it can be encrypted in a search index using a customer-managed key. This will ensure that your data is always encrypted when at rest. For more information, see [Configure customer-managed keys for data encryption in Azure Cognitive Search](search-security-manage-encryption-keys.md).
+If you are working with doubly encrypted data, you might want to investigate the index encryption features available in Azure Cognitive Search. Although the indexer needs decrypted data for indexing purposes, once the index exists, it can be encrypted in a search index using a customer-managed key. This will ensure that your data is always encrypted when at rest. For more information, see [Configure customer-managed keys for data encryption in Azure Cognitive Search](search-security-manage-encryption-keys.md).
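As a rough sketch of the customer-managed key option described above, the following request adds an `encryptionKey` section to an index definition. The index schema, Key Vault URI, key name, and key version are all placeholders, and omitting `accessCredentials` assumes the search service has a managed identity with access to the vault.

```python
# Hypothetical sketch: create an index whose content at rest is encrypted
# with a customer-managed key stored in Azure Key Vault.
import requests

SEARCH_SERVICE = "my-search-service"      # placeholder
ADMIN_KEY = "<search-admin-api-key>"      # placeholder

index = {
    "name": "encrypted-index",
    "fields": [
        {"name": "id", "type": "Edm.String", "key": True},
        {"name": "content", "type": "Edm.String", "searchable": True},
    ],
    "encryptionKey": {
        "keyVaultUri": "https://<your-key-vault>.vault.azure.net",
        "keyVaultKeyName": "<key-name>",
        "keyVaultKeyVersion": "<key-version>",
        # accessCredentials omitted: assumes a managed identity with Key Vault access
    },
}

resp = requests.put(
    f"https://{SEARCH_SERVICE}.search.windows.net/indexes/encrypted-index"
    "?api-version=2020-06-30",
    headers={"Content-Type": "application/json", "api-key": ADMIN_KEY},
    json=index,
)
resp.raise_for_status()
```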
search Search Howto Index Json Blobs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-howto-index-json-blobs.md
You can also refer to individual array elements by using a zero-based index. For
+ [Define field mappings](search-indexer-field-mappings.md) + [Indexers overview](search-indexer-overview.md) + [How to index CSV blobs with a blob indexer](search-howto-index-csv-blobs.md)
-+ [Tutorial: Search semi-structured data from Azure Blob Storage](search-semi-structured-data.md)
++ [Tutorial: Search semi-structured data from Azure Blob Storage](search-semi-structured-data.md)
search Search Howto Index Mysql https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-howto-index-mysql.md
Title: Azure DB for MySQL (preview) description: Learn how to set up a search indexer to index data stored in Azure Database for MySQL for full text search in Azure Cognitive Search.-
search Search Howto Index Plaintext Blobs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-howto-index-plaintext-blobs.md
api-key: [admin key]
+ [Indexers in Azure Cognitive Search](search-indexer-overview.md) + [How to configure a blob indexer](search-howto-indexing-azure-blob-storage.md)
-+ [Blob indexing overview](search-blob-storage-integration.md)
++ [Blob indexing overview](search-blob-storage-integration.md)
search Search Howto Index Sharepoint Online https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-howto-index-sharepoint-online.md
Title: SharePoint indexer (preview) description: Set up a SharePoint indexer to automate indexing of document library content in Azure Cognitive Search.-
search Search Howto Indexing Azure Blob Storage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-howto-indexing-azure-blob-storage.md
Title: Azure Blob indexer description: Set up an Azure Blob indexer to automate indexing of blob content for full text search operations and knowledge mining in Azure Cognitive Search.-
search Search Howto Large Index https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-howto-large-index.md
Although multiple indexer-data-source sets can target the same index, be careful
## Index big data on Spark
-If you have a big data architecture and your data is on a Spark cluster, we recommend [SynapseML for loading and indexing data](search-synapseml-cognitive-services.md). The tutorial includes steps for calling Cognitive Services for AI enrichment, but you can also use the AzureSearchWriter API for text indexing.
+If you have a big data architecture and your data is on a Spark cluster, we recommend [SynapseML for loading and indexing data](search-synapseml-cognitive-services.md). The tutorial includes steps for calling Azure AI services for AI enrichment, but you can also use the AzureSearchWriter API for text indexing.
## See also
search Search Howto Managed Identities Cosmos Db https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-howto-managed-identities-cosmos-db.md
Title: Set up an indexer connection to Azure Cosmos DB via a managed identity description: Learn how to set up an indexer connection to an Azure Cosmos DB account via a managed identity-
Check to see if the Azure Cosmos DB account has its access restricted to select
## See also
-* [Indexing via Azure Cosmos DB for NoSQL](search-howto-index-cosmosdb.md)
-* [Indexing via Azure Cosmos DB for MongoDB](search-howto-index-cosmosdb-mongodb.md)
-* [Indexing via Azure Cosmos DB for Apache Gremlin](search-howto-index-cosmosdb-gremlin.md)
+* [Indexing via Azure Cosmos DB for NoSQL](search-howto-index-cosmosdb.md)
+* [Indexing via Azure Cosmos DB for MongoDB](search-howto-index-cosmosdb-mongodb.md)
+* [Indexing via Azure Cosmos DB for Apache Gremlin](search-howto-index-cosmosdb-gremlin.md)
search Search Howto Managed Identities Sql https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-howto-managed-identities-sql.md
Title: Connect to Azure SQL description: Learn how to set up an indexer connection to Azure SQL Database using a managed identity-
You can also rule out any firewall issues by trying the connection with and with
## See also
-* [Azure SQL indexer](search-howto-connecting-azure-sql-database-to-azure-search-using-indexers.md)
+* [Azure SQL indexer](search-howto-connecting-azure-sql-database-to-azure-search-using-indexers.md)
search Search Howto Managed Identities Storage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-howto-managed-identities-storage.md
Title: Connect to Azure Storage description: Learn how to set up an indexer connection to an Azure Storage account using a managed identity-
Azure storage accounts can be further secured using firewalls and virtual networ
* [Azure Blob indexer](search-howto-indexing-azure-blob-storage.md) * [Azure Data Lake Storage Gen2 indexer](search-howto-index-azure-data-lake-storage.md) * [Azure Table indexer](search-howto-indexing-azure-tables.md)
-* [C# Example: Index Data Lake Gen2 using Azure AD (GitHub)](https://github.com/Azure-Samples/azure-search-dotnet-samples/blob/master/data-lake-gen2-acl-indexing/README.md)
+* [C# Example: Index Data Lake Gen2 using Azure AD (GitHub)](https://github.com/Azure-Samples/azure-search-dotnet-samples/blob/master/data-lake-gen2-acl-indexing/README.md)
search Search Howto Monitor Indexers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-howto-monitor-indexers.md
If there were document-specific problems during the run, they'll be listed in th
![Indexer details with errors](media/search-monitor-indexers/indexer-execution-error.png "Indexer details with errors")
-Warnings are common with some types of indexers, and don't always indicate a problem. For example indexers that use Cognitive Services can report warnings when image or PDF files don't contain any text to process.
+Warnings are common with some types of indexers, and don't always indicate a problem. For example, indexers that use Azure AI services can report warnings when image or PDF files don't contain any text to process.
For more information about investigating indexer errors and warnings, see [Indexer troubleshooting guidance](search-indexer-troubleshooting.md).
search Search Howto Move Across Regions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-howto-move-across-regions.md
Occasionally, customers ask about moving a search service to another region. Cur
1. Identify dependencies and related services to understand the full impact of relocating a service, in case you need to move more than just Azure Cognitive Search.
- Azure Storage is used for logging, creating a knowledge store, and is a commonly used external data source for AI enrichment and indexing. Cognitive Services is a dependency in AI enrichment. Both Cognitive Services and your search service are required to be in the same region if you are using AI enrichment.
   Azure Storage is used for logging and for creating a knowledge store, and it's a commonly used external data source for AI enrichment and indexing. Azure AI services is a dependency in AI enrichment. Both Azure AI services and your search service are required to be in the same region if you are using AI enrichment.
1. Create an inventory of all objects on the service so that you know what to move: indexes, synonym maps, indexers, data sources, skillsets. If you enabled logging, create and archive any reports you might need for a historical record. 1. Check pricing and availability in the new region to ensure availability of Azure Cognitive Search plus any related services in the new region. The majority of features are available in all regions, but some preview features have restricted availability.
-1. Create a service in the new region and republish from source code any existing indexes, synonym maps, indexers, data sources, and skillsets. Remember that service names must be unique so you cannot reuse the existing name. Check each skillset to see if connections to Cognitive Services are still valid in terms of the same-region requirement. Also, if knowledge stores are created, check the connection strings for Azure Storage if you are using a different service.
+1. Create a service in the new region and republish from source code any existing indexes, synonym maps, indexers, data sources, and skillsets. Remember that service names must be unique so you cannot reuse the existing name. Check each skillset to see if connections to Azure AI services are still valid in terms of the same-region requirement. Also, if knowledge stores are created, check the connection strings for Azure Storage if you are using a different service.
1. Reload indexes and knowledge stores, if applicable. You'll either use application code to push JSON data into an index, or rerun indexers to pull documents in from external sources.
To commit the changes and complete the move of your service account, delete the
[Create a skillset](./cognitive-search-quickstart-blob.md)
-[Create a knowledge store](./knowledge-store-create-portal.md) -->
+[Create a knowledge store](./knowledge-store-create-portal.md) -->
search Search Howto Powerapps https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-howto-powerapps.md
Title: 'Tutorial: Query from Power Apps' description: Step-by-step guidance on how to build a Power App that connects to an Azure Cognitive Search index, sends queries, and renders results.-
search Search Howto Reindex https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-howto-reindex.md
If you added or renamed a field, use [$select](search-query-odata-select.md) to
+ [Azure Cosmos DB indexer](search-howto-index-cosmosdb.md) + [Azure Blob Storage indexer](search-howto-indexing-azure-blob-storage.md) + [Azure Table Storage indexer](search-howto-indexing-azure-tables.md)
-+ [Security in Azure Cognitive Search](search-security-overview.md)
++ [Security in Azure Cognitive Search](search-security-overview.md)
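As a quick illustration of the `$select` tip mentioned just before these links, here's a hedged sketch of a query that returns only the fields you're verifying; in a POST request the parameter is the `select` key in the body, and the service, index, key, and field names below are placeholders.

```python
# Hypothetical sketch: verify an added or renamed field by selecting only that field.
import requests

SEARCH_SERVICE = "my-search-service"        # placeholder
QUERY_KEY = "<search-query-api-key>"        # placeholder

resp = requests.post(
    f"https://{SEARCH_SERVICE}.search.windows.net/indexes/hotels/docs/search"
    "?api-version=2020-06-30",
    headers={"Content-Type": "application/json", "api-key": QUERY_KEY},
    json={"search": "*", "select": "HotelName,Description", "top": 5},
)
resp.raise_for_status()
for doc in resp.json()["value"]:
    print(doc)
```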
search Search Howto Schedule Indexers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-howto-schedule-indexers.md
Title: Schedule indexer execution description: Learn how to schedule Azure Cognitive Search indexers to index content at specific intervals, or at specific dates and times.-
search Search Import Data Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-import-data-portal.md
Title: Import data into a search index using Azure portal description: Learn about the Import Data wizard in the Azure portal used to create and load an index, and optionally invoke AI enrichment using built-in skills for natural language processing, translation, OCR, and image analysis.-
The wizard will output the objects in the following table. After the objects are
| [Indexer](/rest/api/searchservice/create-indexer) | A configuration object specifying a data source, target index, an optional skillset, optional schedule, and optional configuration settings for error handling and base-64 encoding. | | [Data Source](/rest/api/searchservice/create-data-source) | Persists connection information to a [supported data source](search-indexer-overview.md#supported-data-sources) on Azure. A data source object is used exclusively with indexers. | | [Index](/rest/api/searchservice/create-index) | Physical data structure used for full text search and other queries. |
-| [Skillset](/rest/api/searchservice/create-skillset) | Optional. A complete set of instructions for manipulating, transforming, and shaping content, including analyzing and extracting information from image files. Unless the volume of work fall under the limit of 20 transactions per indexer per day, the skillset must include a reference to a Cognitive Services resource that provides enrichment. |
+| [Skillset](/rest/api/searchservice/create-skillset) | Optional. A complete set of instructions for manipulating, transforming, and shaping content, including analyzing and extracting information from image files. Unless the volume of work falls under the limit of 20 transactions per indexer per day, the skillset must include a reference to an Azure AI multi-service resource that provides enrichment. |
| [Knowledge store](knowledge-store-concept-intro.md) | Optional. Stores output from an [AI enrichment pipeline](cognitive-search-concept-intro.md) in tables and blobs in Azure Storage for independent analysis or downstream processing. | ## Benefits and limitations
search Search Index Azure Sql Managed Instance With Managed Identity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-index-azure-sql-managed-instance-with-managed-identity.md
Title: Connect to Azure SQL Managed Instance using managed identity description: Learn how to set up an Azure Cognitive Search indexer connection to an Azure SQL Managed Instance using a managed identity-
search Search Indexer Field Mappings https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-indexer-field-mappings.md
When errors occur that are related to document key length exceeding 1024 charact
## See also + [Supported data types in Cognitive Search](/rest/api/searchservice/supported-data-types)
-+ [SQL data type map](search-howto-connecting-azure-sql-database-to-azure-search-using-indexers.md#mapping-data-types)
++ [SQL data type map](search-howto-connecting-azure-sql-database-to-azure-search-using-indexers.md#mapping-data-types)
search Search Indexer How To Access Private Sql https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-indexer-how-to-access-private-sql.md
Here are some reminders for testing:
+ [Index data from Azure SQL](search-howto-connecting-azure-sql-database-to-azure-search-using-indexers.md) + [Management REST API](/rest/api/searchmanagement/) + [Search REST API](/rest/api/searchservice/)
-+ [Quickstart: Get started with REST](search-get-started-rest.md)
++ [Quickstart: Get started with REST](search-get-started-rest.md)
search Search Indexer Howto Access Private https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-indexer-howto-access-private.md
Learn more about private endpoints and other secure connection methods:
+ [Troubleshoot issues with shared private link resources](troubleshoot-shared-private-link-resources.md) + [What are private endpoints?](../private-link/private-endpoint-overview.md) + [DNS configurations needed for private endpoints](../private-link/private-endpoint-dns.md)
-+ [Indexer access to content protected by Azure network security features](search-indexer-securing-resources.md)
++ [Indexer access to content protected by Azure network security features](search-indexer-securing-resources.md)
search Search Indexer Securing Resources https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-indexer-securing-resources.md
A list of all possible Azure resource types that an indexer might access in a ty
| Azure Functions | Attached to a skillset and used to host for custom web API skills | > [!NOTE]
-> An indexer also connects to Cognitive Services for built-in skills. However, that connection is made over the internal network and isn't subject to any network provisions under your control.
+> An indexer also connects to Azure AI services for built-in skills. However, that connection is made over the internal network and isn't subject to any network provisions under your control.
## Supported network protections
search Search Language Support https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-language-support.md
Non-string fields and non-searchable string fields don't undergo lexical analysi
## Add text translation
-This article assumes you have translated strings in place. If that's not the case, you can attach Cognitive Services to an [enrichment pipeline](cognitive-search-concept-intro.md), invoking text translation during data ingestion. Text translation takes a dependency on the indexer feature and Cognitive Services, but all setup is done within Azure Cognitive Search.
+This article assumes you have translated strings in place. If that's not the case, you can attach Azure AI services to an [enrichment pipeline](cognitive-search-concept-intro.md), invoking text translation during data ingestion. Text translation takes a dependency on the indexer feature and Azure AI services, but all setup is done within Azure Cognitive Search.
To add text translation, follow these steps:
To add text translation, follow these steps:
1. Create an index that includes fields for translated strings. Most of this article covers index design and field definitions for indexing and querying multi-language content.
-1. [Attach a multi-region Cognitive Services resource](cognitive-search-attach-cognitive-services.md) to your skillset.
+1. [Attach a multi-region Azure AI services resource](cognitive-search-attach-cognitive-services.md) to your skillset.
1. [Create and run the indexer](search-howto-create-indexers.md), and then apply the guidance in this article to query just the fields of interest.
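For illustration of the translation step referenced in the list above, here's a minimal sketch of a Text Translation skill definition expressed as a Python dictionary; the field paths, target name, and target language are placeholders rather than values from this article.

```python
# Hypothetical sketch: a Text Translation skill that writes an English translation
# of a source field into the enriched document.
translation_skill = {
    "@odata.type": "#Microsoft.Skills.Text.TranslationSkill",
    "name": "translate-description",
    "defaultToLanguageCode": "en",
    "context": "/document",
    "inputs": [
        {"name": "text", "source": "/document/description"}   # placeholder field path
    ],
    "outputs": [
        {"name": "translatedText", "targetName": "description_en"}  # placeholder target
    ],
}
```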
POST /indexes/hotels/docs/search?api-version=2020-06-30
+ [How full text search works in Azure Cognitive Search](search-lucene-query-architecture.md) + [Search Documents REST API](/rest/api/searchservice/search-documents) + [AI enrichment overview](cognitive-search-concept-intro.md)
-+ [Skillsets overview](cognitive-search-working-with-skillsets.md)
++ [Skillsets overview](cognitive-search-working-with-skillsets.md)
search Search Limits Quotas Capacity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-limits-quotas-capacity.md
Previously updated : 08/09/2022 Last updated : 07/17/2023
Maximum running times exist to provide balance and stability to the service as a
| Blob indexer: maximum blob size, MB |16 |16 |128 |256 |256 |N/A |256 |256 | | Blob indexer: maximum characters of content extracted from a blob |32,000 |64,000 |4&nbsp;million |8&nbsp;million |16&nbsp;million |N/A |4&nbsp;million |4&nbsp;million |
-<sup>1</sup> Free services have indexer maximum execution time of 3 minutes for blob sources and 1 minute for all other data sources. Indexer invocation is once every 180 seconds. For AI indexing that calls into Cognitive Services, free services are limited to 20 free transactions per indexer per day, where a transaction is defined as a document that successfully passes through the enrichment pipeline (tip: you can reset an indexer to reset its count).
+<sup>1</sup> Free services have indexer maximum execution time of 3 minutes for blob sources and 1 minute for all other data sources. Indexer invocation is once every 180 seconds. For AI indexing that calls into Azure AI services, free services are limited to 20 free transactions per indexer per day, where a transaction is defined as a document that successfully passes through the enrichment pipeline (tip: you can reset an indexer to reset its count).
<sup>2</sup> Basic services created before December 2017 have lower limits (5 instead of 15) on indexers, data sources, and skillsets.
Maximum number of [index aliases](search-how-to-alias.md) varies by tier. In all
## Data limits (AI enrichment)
-An [AI enrichment pipeline](cognitive-search-concept-intro.md) that makes calls to Azure Cognitive Services for Language resource for [entity recognition](cognitive-search-skill-entity-recognition-v3.md), [entity linking](cognitive-search-skill-entity-linking-v3.md), [key phrase extraction](cognitive-search-skill-keyphrases.md), [sentiment analysis](cognitive-search-skill-sentiment-v3.md), [language detection](cognitive-search-skill-language-detection.md), and [personal-information detection](cognitive-search-skill-pii-detection.md) is subject to data limits. The maximum size of a record should be 50,000 characters as measured by [`String.Length`](/dotnet/api/system.string.length). If you need to break up your data before sending it to the sentiment analyzer, use the [Text Split skill](cognitive-search-skill-textsplit.md).
+An [AI enrichment pipeline](cognitive-search-concept-intro.md) that makes calls to an Azure AI Language resource for [entity recognition](cognitive-search-skill-entity-recognition-v3.md), [entity linking](cognitive-search-skill-entity-linking-v3.md), [key phrase extraction](cognitive-search-skill-keyphrases.md), [sentiment analysis](cognitive-search-skill-sentiment-v3.md), [language detection](cognitive-search-skill-language-detection.md), and [personal-information detection](cognitive-search-skill-pii-detection.md) is subject to data limits. The maximum size of a record should be 50,000 characters as measured by [`String.Length`](/dotnet/api/system.string.length). If you need to break up your data before sending it to the sentiment analyzer, use the [Text Split skill](cognitive-search-skill-textsplit.md).
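To make the splitting approach concrete, here's a minimal sketch of a Text Split skill definition that chunks large records before they reach downstream language skills; the page length and field paths are placeholder assumptions.

```python
# Hypothetical sketch: a Text Split skill that breaks a long field into pages
# well under the 50,000-character record limit.
split_skill = {
    "@odata.type": "#Microsoft.Skills.Text.SplitSkill",
    "name": "split-content",
    "textSplitMode": "pages",
    "maximumPageLength": 5000,          # placeholder page size
    "context": "/document",
    "inputs": [
        {"name": "text", "source": "/document/content"}   # placeholder field path
    ],
    "outputs": [
        {"name": "textItems", "targetName": "pages"}
    ],
}
```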
## Throttling limits
search Search Manage Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-manage-powershell.md
Build an [index](search-what-is-an-index.md), [query an index](search-query-over
* [Create an Azure Cognitive Search index in the Azure portal](search-get-started-portal.md) * [Set up an indexer to load data from other services](search-indexer-overview.md) * [Query an Azure Cognitive Search index using Search explorer in the Azure portal](search-explorer.md)
-* [How to use Azure Cognitive Search in .NET](search-howto-dotnet-sdk.md)
+* [How to use Azure Cognitive Search in .NET](search-howto-dotnet-sdk.md)
search Search Manage Rest https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-manage-rest.md
Title: Manage with REST description: Create and configure an Azure Cognitive Search service with the Management REST API. The Management REST API is comprehensive in scope, with access to generally available and preview features.-
search Search Manage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-manage.md
Also by default, search services start with API keys for content-related tasks t
* Review [monitoring capabilities](monitor-azure-cognitive-search.md) available in the portal * Automate with [PowerShell](search-manage-powershell.md) or [Azure CLI](search-manage-azure-cli.md) * Review [security features](search-security-overview.md) to protect content and operations
-* Enable [resource logging](monitor-azure-cognitive-search.md) to monitor query and indexing workloads
+* Enable [resource logging](monitor-azure-cognitive-search.md) to monitor query and indexing workloads
search Search Modeling Multitenant Saas Applications https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-modeling-multitenant-saas-applications.md
Title: Multitenancy and content isolation description: Learn about common design patterns for multitenant SaaS applications while using Azure Cognitive Search.-
search Search Monitor Logs Powerbi https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-monitor-logs-powerbi.md
Title: Visualize logs and metrics with Power BI description: Visualize Azure Cognitive Search logs and metrics with Power BI.-
If you find that you cannot see your data, follow these troubleshooting steps:
+ [Monitor search operations and activity](monitor-azure-cognitive-search.md) + [What is Power BI?](/power-bi/fundamentals/power-bi-overview)
-+ [Basic concepts for designers in the Power BI service](/power-bi/service-basic-concepts)
++ [Basic concepts for designers in the Power BI service](/power-bi/service-basic-concepts)
search Search Monitor Queries https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-monitor-queries.md
If you specified an email notification, you will receive an email from "Microsof
If you haven't done so already, review the fundamentals of search service monitoring to learn about the full range of oversight capabilities. > [!div class="nextstepaction"]
-> [Monitor operations and activity in Azure Cognitive Search](monitor-azure-cognitive-search.md)
+> [Monitor operations and activity in Azure Cognitive Search](monitor-azure-cognitive-search.md)
search Search Normalizers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-normalizers.md
Title: Text normalization for filters, facets, sort description: Specify normalizers to text fields in an index to customize the strict keyword matching behavior in filtering, faceting and sorting.-
search Search Performance Analysis https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-performance-analysis.md
Title: Analyze performance description: TBD-
search Search Performance Tips https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-performance-tips.md
Title: Performance tips description: Learn about tips and best practices for maximizing performance on a search service.-
search Search Query Odata Collection Operators https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-query-odata-collection-operators.md
For more details on these limitations as well as examples, see [Troubleshooting
- [Filters in Azure Cognitive Search](search-filters.md) - [OData expression language overview for Azure Cognitive Search](query-odata-filter-orderby-syntax.md) - [OData expression syntax reference for Azure Cognitive Search](search-query-odata-syntax-reference.md)-- [Search Documents &#40;Azure Cognitive Search REST API&#41;](/rest/api/searchservice/Search-Documents)
+- [Search Documents &#40;Azure Cognitive Search REST API&#41;](/rest/api/searchservice/Search-Documents)
search Search Query Odata Comparison Operators https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-query-odata-comparison-operators.md
Rooms/any(room: room/Type eq 'Deluxe Room')
- [Filters in Azure Cognitive Search](search-filters.md) - [OData expression language overview for Azure Cognitive Search](query-odata-filter-orderby-syntax.md) - [OData expression syntax reference for Azure Cognitive Search](search-query-odata-syntax-reference.md)-- [Search Documents &#40;Azure Cognitive Search REST API&#41;](/rest/api/searchservice/Search-Documents)
+- [Search Documents &#40;Azure Cognitive Search REST API&#41;](/rest/api/searchservice/Search-Documents)
search Search Query Odata Full Text Search Functions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-query-odata-full-text-search-functions.md
Find documents that have a word that starts with the letters "lux" in the Descri
- [Filters in Azure Cognitive Search](search-filters.md) - [OData expression language overview for Azure Cognitive Search](query-odata-filter-orderby-syntax.md) - [OData expression syntax reference for Azure Cognitive Search](search-query-odata-syntax-reference.md)-- [Search Documents &#40;Azure Cognitive Search REST API&#41;](/rest/api/searchservice/Search-Documents)
+- [Search Documents &#40;Azure Cognitive Search REST API&#41;](/rest/api/searchservice/Search-Documents)
search Search Query Odata Geo Spatial Functions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-query-odata-geo-spatial-functions.md
Sort hotels in descending order by `search.score` and `rating`, and then in asce
- [Filters in Azure Cognitive Search](search-filters.md) - [OData expression language overview for Azure Cognitive Search](query-odata-filter-orderby-syntax.md) - [OData expression syntax reference for Azure Cognitive Search](search-query-odata-syntax-reference.md)-- [Search Documents &#40;Azure Cognitive Search REST API&#41;](/rest/api/searchservice/Search-Documents)
+- [Search Documents &#40;Azure Cognitive Search REST API&#41;](/rest/api/searchservice/Search-Documents)
search Search Query Odata Logical Operators https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-query-odata-logical-operators.md
Match documents for hotels in Vancouver, Canada where there is a deluxe room wit
- [Filters in Azure Cognitive Search](search-filters.md) - [OData expression language overview for Azure Cognitive Search](query-odata-filter-orderby-syntax.md) - [OData expression syntax reference for Azure Cognitive Search](search-query-odata-syntax-reference.md)-- [Search Documents &#40;Azure Cognitive Search REST API&#41;](/rest/api/searchservice/Search-Documents)
+- [Search Documents &#40;Azure Cognitive Search REST API&#41;](/rest/api/searchservice/Search-Documents)
search Search Query Odata Orderby https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-query-odata-orderby.md
Sort hotels in descending order by search.score and rating, and then in ascendin
- [How to work with search results in Azure Cognitive Search](search-pagination-page-layout.md) - [OData expression language overview for Azure Cognitive Search](query-odata-filter-orderby-syntax.md) - [OData expression syntax reference for Azure Cognitive Search](search-query-odata-syntax-reference.md)-- [Search Documents &#40;Azure Cognitive Search REST API&#41;](/rest/api/searchservice/Search-Documents)
+- [Search Documents &#40;Azure Cognitive Search REST API&#41;](/rest/api/searchservice/Search-Documents)
search Search Query Odata Search In Function https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-query-odata-search-in-function.md
Find all hotels without the tag 'motel' or 'cabin':
- [Filters in Azure Cognitive Search](search-filters.md) - [OData expression language overview for Azure Cognitive Search](query-odata-filter-orderby-syntax.md) - [OData expression syntax reference for Azure Cognitive Search](search-query-odata-syntax-reference.md)-- [Search Documents &#40;Azure Cognitive Search REST API&#41;](/rest/api/searchservice/Search-Documents)
+- [Search Documents &#40;Azure Cognitive Search REST API&#41;](/rest/api/searchservice/Search-Documents)
search Search Query Odata Select https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-query-odata-select.md
An example result might look like this:
- [How to work with search results in Azure Cognitive Search](search-pagination-page-layout.md) - [OData expression language overview for Azure Cognitive Search](query-odata-filter-orderby-syntax.md) - [OData expression syntax reference for Azure Cognitive Search](search-query-odata-syntax-reference.md)-- [Search Documents &#40;Azure Cognitive Search REST API&#41;](/rest/api/searchservice/Search-Documents)
+- [Search Documents &#40;Azure Cognitive Search REST API&#41;](/rest/api/searchservice/Search-Documents)
search Search Query Odata Syntax Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-query-odata-syntax-reference.md
To visually explore the OData language grammar supported by Azure Cognitive Sear
- [Filters in Azure Cognitive Search](search-filters.md) - [Search Documents &#40;Azure Cognitive Search REST API&#41;](/rest/api/searchservice/Search-Documents) - [Lucene query syntax](query-lucene-syntax.md)-- [Simple query syntax in Azure Cognitive Search](query-simple-syntax.md)
+- [Simple query syntax in Azure Cognitive Search](query-simple-syntax.md)
search Search Query Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-query-overview.md
For a closer look at query implementation, review the examples for each syntax.
+ [Simple query examples](search-query-simple-examples.md) + [Lucene syntax query examples for building advanced queries](search-query-lucene-examples.md)
-+ [How full text search works in Azure Cognitive Search](search-lucene-query-architecture.md)git
++ [How full text search works in Azure Cognitive Search](search-lucene-query-architecture.md)
search Search Query Troubleshoot Collection Filters https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-query-troubleshoot-collection-filters.md
If you write filters often, and understanding the rules from first principles wo
- [Filters in Azure Cognitive Search](search-filters.md) - [OData expression language overview for Azure Cognitive Search](query-odata-filter-orderby-syntax.md) - [OData expression syntax reference for Azure Cognitive Search](search-query-odata-syntax-reference.md)-- [Search Documents &#40;Azure Cognitive Search REST API&#41;](/rest/api/searchservice/Search-Documents)
+- [Search Documents &#40;Azure Cognitive Search REST API&#41;](/rest/api/searchservice/Search-Documents)
search Search Query Understand Collection Filters https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-query-understand-collection-filters.md
For specific examples of which kinds of filters are allowed and which aren't, se
- [Filters in Azure Cognitive Search](search-filters.md) - [OData expression language overview for Azure Cognitive Search](query-odata-filter-orderby-syntax.md) - [OData expression syntax reference for Azure Cognitive Search](search-query-odata-syntax-reference.md)-- [Search Documents &#40;Azure Cognitive Search REST API&#41;](/rest/api/searchservice/Search-Documents)
+- [Search Documents &#40;Azure Cognitive Search REST API&#41;](/rest/api/searchservice/Search-Documents)
search Search Reliability https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-reliability.md
Title: Reliability in Azure Cognitive Search description: Find out about reliability in Azure Cognitive Search.-
Otherwise, your application code used for creating and populating an index is th
[1]: ./media/search-reliability/geo-redundancy.png [2]: ./media/search-reliability/scale-indexers.png [3]: ./media/search-reliability/geo-search-traffic-mgr.png
-[4]: ./media/search-reliability/azure-function-search-traffic-mgr.png
+[4]: ./media/search-reliability/azure-function-search-traffic-mgr.png
search Search Security Api Keys https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-security-api-keys.md
Note that it's not possible to use [customer-managed key encryption](search-secu
+ [Security in Azure Cognitive Search](search-security-overview.md) + [Azure role-based access control in Azure Cognitive Search](search-security-rbac.md)
-+ [Manage using PowerShell](search-manage-powershell.md)
++ [Manage using PowerShell](search-manage-powershell.md)
search Search Security Manage Encryption Keys https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-security-manage-encryption-keys.md
Encryption is performed over the following content:
+ All content within indexes and synonym lists, including descriptions.
-+ For indexers, data sources, and skillsets, only those fields that store connection strings, descriptions, keys, and user inputs are encrypted. For example, skillsets have Cognitive Services keys, and some skills accept user inputs, such as custom entities. In both cases, keys and user inputs into skills are encrypted.
++ For indexers, data sources, and skillsets, only those fields that store connection strings, descriptions, keys, and user inputs are encrypted. For example, skillsets have Azure AI services keys, and some skills accept user inputs, such as custom entities. In both cases, keys and user inputs into skills are encrypted. ## Full double encryption
search Search Security Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-security-overview.md
Internal requests are secured and managed by Microsoft. You can't configure or c
Internal traffic consists of: + Service-to-service calls for tasks like authentication and authorization through Azure Active Directory, resource logging sent to Azure Monitor, and private endpoint connections that utilize Azure Private Link.
-+ Requests made to Cognitive Services APIs for [built-in skills](cognitive-search-predefined-skills.md).
++ Requests made to Azure AI services APIs for [built-in skills](cognitive-search-predefined-skills.md). + Requests made to the machine learning models that support [semantic search](semantic-search-overview.md#availability-and-pricing). <a name="service-access-and-authentication"></a>
search Search Security Trimming For Azure Search With Aad https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-security-trimming-for-azure-search-with-aad.md
The response includes a filtered list of documents, consisting of those that the
In this walkthrough, you learned a pattern for using Azure AD sign-ins to filter documents in Azure Cognitive Search results, trimming the results of documents that don't match the filter provided on the request. For an alternative pattern that might be simpler, or to revisit other security features, see the following links. - [Security filters for trimming results](search-security-trimming-for-azure-search.md)-- [Security in Azure Cognitive Search](search-security-overview.md)
+- [Security in Azure Cognitive Search](search-security-overview.md)
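As a minimal sketch of the trimming pattern summarized above (not taken from the walkthrough), the query below assumes a hypothetical `group_ids` collection field in the index and passes the signed-in user's group IDs as an OData security filter through the `azure-search-documents` Python client:

```python
# Sketch only: assumes an index with a hypothetical "group_ids" Collection(Edm.String) field
# and that the caller has already resolved the user's Azure AD group IDs at sign-in.
from azure.core.credentials import AzureKeyCredential
from azure.search.documents import SearchClient

endpoint = "https://<your-service>.search.windows.net"   # placeholder
index_name = "hotels-sample"                              # placeholder index name
client = SearchClient(endpoint, index_name, AzureKeyCredential("<query-key>"))

user_groups = ["group-id-1", "group-id-2"]                # resolved from the Azure AD sign-in
joined = ",".join(user_groups)
security_filter = f"group_ids/any(g: search.in(g, '{joined}'))"

# Only documents tagged with at least one of the user's groups are returned.
results = client.search(search_text="*", filter=security_filter)
for doc in results:
    print(doc)
```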
search Search Semi Structured Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-semi-structured-data.md
If possible, create both in the same region and resource group for proximity and
+ **Storage account name**. If you think you might have multiple resources of the same type, use the name to disambiguate by type and region, for example *blobstoragewestus*.
- + **Location**. If possible, choose the same location used for Azure Cognitive Search and Cognitive Services. A single location voids bandwidth charges.
+ + **Location**. If possible, choose the same location used for Azure Cognitive Search and Azure AI services. A single location voids bandwidth charges.
+ **Account Kind**. Choose the default, *StorageV2 (general purpose v2)*.
search Search Sku Manage Costs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-sku-manage-costs.md
Billing is based on capacity (SUs) and the costs of running premium features, su
|-|| | Image extraction (AI enrichment) <sup>1, 2</sup> | Per 1000 images. See the [pricing page](https://azure.microsoft.com/pricing/details/search/#pricing). | | Custom Entity Lookup skill (AI enrichment) <sup>1</sup> | Per 1000 text records. See the [pricing page](https://azure.microsoft.com/pricing/details/search/#pricing) |
-| [Built-in skills](cognitive-search-predefined-skills.md) (AI enrichment) <sup>1</sup> | Number of transactions, billed at the same rate as if you had performed the task by calling Cognitive Services directly. You can process 20 documents per indexer per day for free. Larger or more frequent workloads require a multi-resource Cognitive Services key. |
+| [Built-in skills](cognitive-search-predefined-skills.md) (AI enrichment) <sup>1</sup> | Number of transactions, billed at the same rate as if you had performed the task by calling Azure AI services directly. You can process 20 documents per indexer per day for free. Larger or more frequent workloads require a multi-resource Azure AI services key. |
| [Semantic search](semantic-search-overview.md) <sup>1</sup> | Number of queries of "queryType=semantic", billed at a progressive rate. See the [pricing page](https://azure.microsoft.com/pricing/details/search/#pricing). | | [Shared private link](search-indexer-howto-access-private.md) <sup>1</sup> | [Billed for bandwidth](https://azure.microsoft.com/pricing/details/private-link/) as long as the shared private link exists and is used. |
Several premium features such as [knowledge store](knowledge-store-concept-intro
[Customer-managed keys](search-security-manage-encryption-keys.md) provide double encryption of sensitive content. This feature requires a billable [Azure Key Vault](https://azure.microsoft.com/pricing/details/key-vault/).
-Skillsets can include [billable built-in skills](cognitive-search-predefined-skills.md), non-billable built-in utility skills, and custom skills. Non-billable utility skills include Conditional, Shaper, Text Merge, Text Split. There is no billing impact when using them, no Cognitive Services key requirement, and no 20 document limit.
+Skillsets can include [billable built-in skills](cognitive-search-predefined-skills.md), non-billable built-in utility skills, and custom skills. Non-billable utility skills include Conditional, Shaper, Text Merge, Text Split. There is no billing impact when using them, no Azure AI services key requirement, and no 20 document limit.
-A custom skill is functionality you provide. The cost of using a custom skill depends entirely on whether custom code is calling other metered services. There is no Cognitive Services key requirement and no 20 document limit on custom skills.
+A custom skill is functionality you provide. The cost of using a custom skill depends entirely on whether custom code is calling other metered services. There is no Azure AI services key requirement and no 20 document limit on custom skills.
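As a rough illustration of what providing a custom skill involves, the sketch below shows the JSON request/response shape a custom Web API skill typically exchanges with the indexer. The handler is a plain function you would host behind your own HTTP endpoint (for example, an Azure Function), and the `text` and `summary` field names are placeholders, not part of the article above:

```python
# Sketch only: a custom skill is just your own web endpoint. This plain function shows the
# typical request/response shape; "text" and "summary" are illustrative field names, and
# hosting (Azure Functions, a container, and so on) is up to you.
from typing import Any

def handle_custom_skill(request_body: dict[str, Any]) -> dict[str, Any]:
    """Process each enrichment record and return results keyed by recordId."""
    output: dict[str, Any] = {"values": []}
    for record in request_body.get("values", []):
        text = record.get("data", {}).get("text", "")
        output["values"].append({
            "recordId": record["recordId"],
            "data": {"summary": text[:200]},   # placeholder "enrichment": first 200 characters
            "errors": [],
            "warnings": [],
        })
    return output

# Example payload in the shape an indexer sends to a custom Web API skill.
sample = {"values": [{"recordId": "1", "data": {"text": "A long document ..."}}]}
print(handle_custom_skill(sample))
```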
## Monitor costs
search Search Sku Tier https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-sku-tier.md
Previously updated : 06/17/2022 Last updated : 07/17/2023
Pricing - or the estimated monthly cost of running the service - are shown in th
## Tier descriptions
-Tiers include **Free**, **Basic**, **Standard**, and **Storage Optimized**. Standard and Storage Optimized are available with several configurations and capacities. The following screenshot from Azure portal shows the available tiers, minus pricing (which you can find in the portal and on the [pricing page](https://azure.microsoft.com/pricing/details/search/).
+Tiers include **Free**, **Basic**, **Standard**, and **Storage Optimized**. Standard and Storage Optimized are available with several configurations and capacities. The following screenshot from Azure portal shows the available tiers, minus pricing (which you can find in the portal and on the [pricing page](https://azure.microsoft.com/pricing/details/search/)).
:::image type="content" source="media/search-sku-tier/tiers.png" alt-text="Pricing tier chart" border="true":::
-**Free** creates a limited search service for smaller projects, like running tutorials and code samples. Internally, system resources are shared among multiple subscribers. You cannot scale a free service or run significant workloads.
+**Free** creates a [limited search service](search-limits-quotas-capacity.md#subscription-limits) for smaller projects, like running tutorials and code samples. Internally, system resources are shared among multiple subscribers. You cannot scale a free service or run significant workloads. You can only have one free search service per Azure subscription.
The most commonly used billable tiers include the following:
The best way to choose a pricing tier is to start with a least-cost tier, and th
+ [Create a search service](search-create-service-portal.md) + [Estimate costs](search-sku-manage-costs.md)
-+ [Estimate capacity](search-sku-manage-costs.md)
++ [Estimate capacity](search-sku-manage-costs.md)
search Search Synapseml Cognitive Services https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-synapseml-cognitive-services.md
You'll need the `synapseml` library and several Azure resources. If possible, us
+ [SynapseML package](https://microsoft.github.io/SynapseML/docs/getting_started/installation/#python) <sup>1</sup> + [Azure Cognitive Search](search-create-service-portal.md) (any tier) <sup>2</sup>
-+ [Azure Cognitive Services](../cognitive-services/cognitive-services-apis-create-account.md?tabs=multiservice%2cwindows#create-a-new-azure-cognitive-services-resource) (any tier) <sup>3</sup>
++ [Azure AI services](../ai-services/multi-service-resource.md?pivots=azportal) (any tier) <sup>3</sup> + [Azure Databricks](/azure/databricks/scenarios/quickstart-create-databricks-workspace-portal?tabs=azure-portal) (any tier) <sup>4</sup> <sup>1</sup> This link resolves to a tutorial for loading the package. <sup>2</sup> You can use the free search tier to index the sample data, but [choose a higher tier](search-sku-tier.md) if your data volumes are large. For non-free tiers, you'll need to provide the [search API key](search-security-api-keys.md#find-existing-keys) in the [Set up dependencies](#2set-up-dependencies) step further on.
-<sup>3</sup> This tutorial uses Azure Forms Recognizer and Azure Translator. In the instructions that follow, you'll provide a [Cognitive Services multi-service key](../cognitive-services/cognitive-services-apis-create-account.md?tabs=multiservice%2cwindows#get-the-keys-for-your-resource) and the region, and it will work for both services.
+<sup>3</sup> This tutorial uses Azure AI Document Intelligence and Azure AI Translator. In the instructions that follow, you'll provide a [multi-service key](../ai-services/multi-service-resource.md?pivots=azportal#get-the-keys-for-your-resource) and the region, and it will work for both services.
<sup>4</sup> In this tutorial, Azure Databricks provides the Spark computing platform and the instructions in the link will tell you how to set up the workspace. For this tutorial, we used the portal steps in "Create a workspace".
df2 = (spark.read.format("binaryFile")
display(df2) ```
-## 4 - Add form recognition
+## 4 - Add document intelligence
Paste the following code into the third cell. No modifications are required, so run the code when you're ready.
-This code loads the [AnalyzeInvoices transformer](https://microsoft.github.io/SynapseML/docs/documentation/transformers/transformers_cognitive/#analyzeinvoices) and passes a reference to the data frame containing the invoices. It calls the pre-built [invoice model](../applied-ai-services/form-recognizer/concept-invoice.md) of Azure Forms Analyzer.
+This code loads the [AnalyzeInvoices transformer](https://microsoft.github.io/SynapseML/docs/documentation/transformers/transformers_cognitive/#analyzeinvoices) and passes a reference to the data frame containing the invoices. It calls the pre-built [invoice model](../ai-services/document-intelligence/concept-invoice.md) of Azure AI Document Intelligence to extract information from the invoices.
```python from synapse.ml.cognitive import AnalyzeInvoices
The output from this step should look similar to the next screenshot. Notice how
:::image type="content" source="media/search-synapseml-cognitive-services/analyze-forms-output.png" alt-text="Screenshot of the AnalyzeInvoices output." border="true":::
-## 5 - Restructure form recognition output
+## 5 - Restructure document intelligence output
Paste the following code into the fourth cell and run it. No modifications are required.
-This code loads [FormOntologyLearner](https://mmlspark.blob.core.windows.net/docs/0.10.0/pyspark/synapse.ml.cognitive.html#module-synapse.ml.cognitive.FormOntologyTransformer), a transformer that analyzes the output of Form Recognizer transformers and infers a tabular data structure. The output of AnalyzeInvoices is dynamic and varies based on the features detected in your content. Furthermore, the transformer consolidates output into a single column. Because the output is dynamic and consolidated, it's difficult to use in downstream transformations that require more structure.
+This code loads [FormOntologyLearner](https://mmlspark.blob.core.windows.net/docs/0.10.0/pyspark/synapse.ml.cognitive.html#module-synapse.ml.cognitive.FormOntologyTransformer), a transformer that analyzes the output of Document Intelligence transformers and infers a tabular data structure. The output of AnalyzeInvoices is dynamic and varies based on the features detected in your content. Furthermore, the transformer consolidates output into a single column. Because the output is dynamic and consolidated, it's difficult to use in downstream transformations that require more structure.
FormOntologyLearner extends the utility of the AnalyzeInvoices transformer by looking for patterns that can be used to create a tabular data structure. Organizing the output into multiple columns and rows makes the content consumable in other transformers, like AzureSearchWriter.
Notice how this transformation recasts the nested fields into a table, which ena
Paste the following code into the fifth cell. No modifications are required, so run the code when you're ready.
-This code loads [Translate](https://microsoft.github.io/SynapseML/docs/documentation/transformers/transformers_cognitive/#translate), a transformer that calls the Azure Translator service in Cognitive Services. The original text, which is in English in the "Description" column, is machine-translated into various languages. All of the output is consolidated into "output.translations" array.
+This code loads [Translate](https://microsoft.github.io/SynapseML/docs/documentation/transformers/transformers_cognitive/#translate), a transformer that calls the Azure AI Translator service in Azure AI services. The original text, which is in English in the "Description" column, is machine-translated into various languages. All of the output is consolidated into "output.translations" array.
```python from synapse.ml.cognitive import Translate
You can find and manage resources in the portal, using the **All resources** or
## Next steps
-In this tutorial, you learned about the [AzureSearchWriter](https://microsoft.github.io/SynapseML/docs/documentation/transformers/transformers_cognitive/#azuresearch) transformer in SynapseML, which is a new way of creating and loading search indexes in Azure Cognitive Search. The transformer takes structured JSON as an input. The FormOntologyLearner can provide the necessary structure for output produced by the Forms Recognizer transformers in SynapseML.
+In this tutorial, you learned about the [AzureSearchWriter](https://microsoft.github.io/SynapseML/docs/documentation/transformers/transformers_cognitive/#azuresearch) transformer in SynapseML, which is a new way of creating and loading search indexes in Azure Cognitive Search. The transformer takes structured JSON as an input. The FormOntologyLearner can provide the necessary structure for output produced by the Document Intelligence transformers in SynapseML.
As a next step, review the other SynapseML tutorials that produce transformed content you might want to explore through Azure Cognitive Search: > [!div class="nextstepaction"]
-> [Tutorial: Text Analytics with Cognitive Services](../synapse-analytics/machine-learning/tutorial-text-analytics-use-mmlspark.md)
+> [Tutorial: Text Analytics with Azure AI services](../synapse-analytics/machine-learning/tutorial-text-analytics-use-mmlspark.md)
search Search Synonyms Tutorial Sdk https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-synonyms-tutorial-sdk.md
The fastest way to clean up after an example is by deleting the resource group c
This example demonstrated the synonyms feature in C# code to create and post mapping rules and then call the synonym map on a query. Additional information can be found in the [.NET SDK](/dotnet/api/overview/azure/search.documents-readme) and [REST API](/rest/api/searchservice/) reference documentation. > [!div class="nextstepaction"]
-> [How to use synonyms in Azure Cognitive Search](search-synonyms.md)
+> [How to use synonyms in Azure Cognitive Search](search-synonyms.md)
search Search Synonyms https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-synonyms.md
If you have an existing index in a development (non-production) environment, exp
## Next steps > [!div class="nextstepaction"]
-> [Create a synonym map (REST API)](/rest/api/searchservice/create-synonym-map)
+> [Create a synonym map (REST API)](/rest/api/searchservice/create-synonym-map)
search Search Traffic Analytics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-traffic-analytics.md
Title: Telemetry for search traffic analytics description: Enable search traffic analytics for Azure Cognitive Search, collect telemetry and user-initiated events using Application Insights, and then analyze findings in a Power BI report.-
search Search What Is An Index https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-what-is-an-index.md
But you'll also want to become familiar with methodologies for loading an index
+ [Data import overview](search-what-is-data-import.md)
-+ [Add, Update or Delete Documents (REST)](/rest/api/searchservice/addupdate-or-delete-documents)
++ [Add, Update or Delete Documents (REST)](/rest/api/searchservice/addupdate-or-delete-documents)
search Search What Is Azure Search https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-what-is-azure-search.md
Search is foundational to any app that surfaces text to users, where common scen
+ Rich indexing, with [lexical analysis](search-analyzers.md) and [optional AI enrichment](cognitive-search-concept-intro.md) for content extraction and transformation + Rich query syntax for [vector queries](vector-search-how-to-query.md), text search, fuzzy search, autocomplete, geo-search and more + Programmability through REST APIs and client libraries in Azure SDKs
-+ Azure integration at the data layer, machine learning layer, and AI (Cognitive Services)
++ Azure integration at the data layer, machine learning layer, and AI (Azure AI services) > [!div class="nextstepaction"] > [Create a search service](search-create-service-portal.md)
Architecturally, a search service sits between the external data stores that con
In your client app, the search experience is defined using APIs from Azure Cognitive Search, and can include relevance tuning, semantic ranking, autocomplete, synonym matching, fuzzy matching, pattern matching, filter, and sort.
-Across the Azure platform, Cognitive Search can integrate with other Azure services in the form of *indexers* that automate data ingestion/retrieval from Azure data sources, and *skillsets* that incorporate consumable AI from Cognitive Services, such as image and natural language processing, or custom AI that you create in Azure Machine Learning or wrap inside Azure Functions.
+Across the Azure platform, Cognitive Search can integrate with other Azure services in the form of *indexers* that automate data ingestion/retrieval from Azure data sources, and *skillsets* that incorporate consumable AI from Azure AI services, such as image and natural language processing, or custom AI that you create in Azure Machine Learning or wrap inside Azure Functions.
## Inside a search service
Among cloud providers, Azure Cognitive Search is strongest for full text search
Key strengths include: + Data integration (crawlers) at the indexing layer.
-+ AI and machine learning integration with Azure Cognitive Services, useful if you need to make unsearchable content full text-searchable.
++ AI and machine learning integration with Azure AI services, useful if you need to make unsearchable content full text-searchable. + Security integration with Azure Active Directory for trusted connections, and with Azure Private Link integration to support private connections to a search index in no-internet scenarios. + Linguistic and custom text analysis in 56 languages. + [Full search experience](search-features-list.md): rich query language, relevance tuning and semantic ranking, faceting, autocomplete queries and suggested results, and synonyms.
search Service Configure Firewall https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/service-configure-firewall.md
Providing IP addresses for clients ensures that the request isn't rejected outri
If your client application is a static Web app on Azure, learn how to determine its IP range for inclusion in a search service IP firewall rule. > [!div class="nextstepaction"]
-> [Inbound and outbound IP addresses in Azure App Service](../app-service/overview-inbound-outbound-ips.md)
+> [Inbound and outbound IP addresses in Azure App Service](../app-service/overview-inbound-outbound-ips.md)
search Service Create Private Endpoint https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/service-create-private-endpoint.md
In this article, you'll learn how to secure an Azure Cognitive Search service so
+ [Create an Azure virtual network](#create-the-virtual-network) (or use an existing one) + [Create a search service to use a private endpoint](#create-a-search-service-with-a-private-endpoint)
-+ [Create a Azure virtual machine in the same virtual network](#create-a-virtual-machine)
++ [Create an Azure virtual machine in the same virtual network](#create-a-virtual-machine) + [Connect to search using a browser session on the virtual machine](#connect-to-the-vm) Private endpoints are provided by [Azure Private Link](../private-link/private-link-overview.md), as a separate billable service. For more information about costs, see the [pricing page](https://azure.microsoft.com/pricing/details/private-link/).
search Troubleshoot Shared Private Link Resources https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/troubleshoot-shared-private-link-resources.md
Some common errors that occur during the deletion phase are listed below.
Learn more about shared private link resources and how to use it for secure access to protected content. + [Accessing protected content via indexers](search-indexer-howto-access-private.md)
-+ [REST API reference](/rest/api/searchmanagement/2021-04-01-preview/shared-private-link-resources)
++ [REST API reference](/rest/api/searchmanagement/2021-04-01-preview/shared-private-link-resources)
search Tutorial Create Custom Analyzer https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/tutorial-create-custom-analyzer.md
Title: 'Tutorial: create a custom analyzer' description: Learn how to build a custom analyzer to improve the quality of search results in Azure Cognitive Search.-
You can find and manage resources in the portal, using the All resources or Reso
Now that you're familiar with how to create a custom analyzer, let's take a look at all of the different filters, tokenizers, and analyzers available to you to build a rich search experience. > [!div class="nextstepaction"]
-> [Custom Analyzers in Azure Cognitive Search](index-add-custom-analyzers.md)
+> [Custom Analyzers in Azure Cognitive Search](index-add-custom-analyzers.md)
search Tutorial Optimize Indexing Push Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/tutorial-optimize-indexing-push-api.md
Title: 'C# tutorial optimize indexing with the push API' description: Learn how to efficiently index data using Azure Cognitive Search's push API. This tutorial and sample code are in C#.-
You can find and manage resources in the portal, using the **All resources** or
Now that you're familiar with the concept of ingesting data efficiently, let's take a closer look at Lucene query architecture and how full text search works in Azure Cognitive Search. > [!div class="nextstepaction"]
-> [How full text search works in Azure Cognitive Search](search-lucene-query-architecture.md)
+> [How full text search works in Azure Cognitive Search](search-lucene-query-architecture.md)
search Tutorial Python Create Load Index https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/tutorial-python-create-load-index.md
The script uses the Azure SDK for Cognitive Search:
## Next steps
-[Deploy your Static Web App](tutorial-python-deploy-static-web-app.md)
+[Deploy your Static Web App](tutorial-python-deploy-static-web-app.md)
search Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/whats-new.md
Learn about the latest updates to Azure Cognitive Search functionality, docs, an
| October | [Beiersdorf customer story using Azure Cognitive Search](https://customers.microsoft.com/story/1552642769228088273-Beiersdorf-consumer-goods-azure-cognitive-search). This customer story showcases semantic search and document summarization to provide researchers with ready access to institutional knowledge. | | September | [Azure Cognitive Search Lab](https://github.com/Azure-Samples/azure-search-lab/blob/main/README.md). This C# sample provides the source code for building a web front-end that accesses all of the REST API calls against an index. This tool is used by support engineers to investigate customer support issues. You can try this [demo site](https://azuresearchlab.azurewebsites.net/) before building your own copy. | | September | [Event-driven indexing for Cognitive Search](https://github.com/aditmer/Event-Driven-Indexing-For-Cognitive-Search/blob/main/README.md). This C# sample is an Azure Function app that demonstrates event-driven indexing in Azure Cognitive Search. If you've used indexers and skillsets before, you know that indexers can run on demand or on a schedule, but not in response to events. This demo shows you how to set up an indexing pipeline that responds to data update events. |
-| August | [Tutorial: Index large data from Apache Spark](search-synapseml-cognitive-services.md). This tutorial explains how to use the SynapseML open-source library to push data from Apache Spark into a search index. It also shows you how to make calls to Cognitive Services to get AI enrichment without skillsets and indexers. |
+| August | [Tutorial: Index large data from Apache Spark](search-synapseml-cognitive-services.md). This tutorial explains how to use the SynapseML open-source library to push data from Apache Spark into a search index. It also shows you how to make calls to Azure AI services to get AI enrichment without skillsets and indexers. |
| June | [Semantic search (preview)](semantic-search-overview.md). New support for Storage Optimized tiers (L1, L2). | | June | **General availability** - [Debug Sessions](cognitive-search-debug-session.md).| | May | **Retired** - [Power Query connector preview](/previous-versions/azure/search/search-how-to-index-power-query-data-sources). |
security Ransomware Features Resources https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/security/fundamentals/ransomware-features-resources.md
Microsoft focuses heavily on both security of our cloud and providing you the se
We look forward to partnering with you in addressing ransomware protection, detection, and prevention in a holistic manner. Connect with us:-- [AskAzureSecurity@microsoft.com](mailto:AskAzureSecurity&#64;microsoft.com)
+- [AskAzureSecurity@microsoft.com](mailto:AskAzureSecurity@microsoft.com)
- [www.microsoft.com/services](https://www.microsoft.com/en-us/msservices) For detailed information on how Microsoft secures our cloud, visit the [service trust portal](https://servicetrust.microsoft.com/).
sentinel Billing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/billing.md
Use the [Microsoft Sentinel pricing calculator](https://azure.microsoft.com/pric
For example, enter the GB of daily data you expect to ingest in Microsoft Sentinel, and the region for your workspace. The calculator provides the aggregate monthly cost across these components: -- Azure Monitor data ingestion: Analytics logs and basic logs - Microsoft Sentinel: Analytics logs and basic logs - Azure Monitor: Retention - Azure Monitor: Data Restore
Learn more about how to [connect data sources](connect-data-sources.md), includi
In this article, you learned how to plan costs and understand the billing for Microsoft Sentinel. > [!div class="nextstepaction"]
-> >[Deploy Microsoft Sentinel](deploy-overview.md)
+> [Deploy Microsoft Sentinel](deploy-overview.md)
sentinel Normalization About Schemas https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/normalization-about-schemas.md
The allowed values for a user ID type are:
| **AADID**| An Azure Active Directory user ID.| `9267d02c-5f76-40a9-a9eb-b686f3ca47aa` | | **OktaId** | An Okta user ID. | `00urjk4znu3BcncfY0h7` | | **AWSId** | An AWS user ID. | `72643944673` |
-| **PUID** | A Microsoft 365 User ID. | `10032001582F435C` |
+| **PUID** | A Microsoft 365 user ID. | `10032001582F435C` |
+| **SalesforceId** | A Salesforce user ID. | `00530000009M943` |
#### The user name
sentinel Watchlists Create https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/watchlists-create.md
If you don't use AzCopy, upload your file by using the Azure portal. Go to your
Create a shared access signature URL for Microsoft Sentinel to retrieve the watchlist data.
-1. Follow the steps in [Create SAS tokens for blobs in the Azure portal](../cognitive-services/translator/document-translation/create-sas-tokens.md?tabs=blobs#create-sas-tokens-in-the-azure-portal).
+1. Follow the steps in [Create SAS tokens for blobs in the Azure portal](../ai-services/translator/document-translation/how-to-guides/create-sas-tokens.md?tabs=blobs#create-sas-tokens-in-the-azure-portal).
1. Set the shared access signature token expiry time to be at minimum 6 hours. 1. Copy the value for **Blob SAS URL**.
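If you'd rather script the SAS token step than use the portal, the following sketch (an assumption, not part of the steps above) uses the `azure-storage-blob` Python package to build a read-only Blob SAS URL with the minimum six-hour expiry; the account, container, and blob names are placeholders:

```python
# Sketch only: generates a read-only blob SAS URL that expires in six hours,
# matching the minimum expiry called out in the steps above. Names are placeholders.
from datetime import datetime, timedelta, timezone
from azure.storage.blob import BlobSasPermissions, generate_blob_sas

account_name = "<storage-account>"
account_key = "<account-key>"
container_name = "watchlists"
blob_name = "high-value-assets.csv"

sas_token = generate_blob_sas(
    account_name=account_name,
    container_name=container_name,
    blob_name=blob_name,
    account_key=account_key,
    permission=BlobSasPermissions(read=True),
    expiry=datetime.now(timezone.utc) + timedelta(hours=6),
)

blob_sas_url = (
    f"https://{account_name}.blob.core.windows.net/{container_name}/{blob_name}?{sas_token}"
)
print(blob_sas_url)  # use this value as the Blob SAS URL when creating the watchlist
```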
service-bus-messaging Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-bus-messaging/policy-reference.md
Title: Built-in policy definitions for Azure Service Bus Messaging description: Lists Azure Policy built-in policy definitions for Azure Service Bus Messaging. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 07/06/2023 Last updated : 07/18/2023
service-fabric Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/policy-reference.md
Previously updated : 07/06/2023 Last updated : 07/18/2023 # Azure Policy built-in definitions for Azure Service Fabric
service-fabric Service Fabric Application Arm Resource https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/service-fabric-application-arm-resource.md
Last updated 07/14/2022
# Manage applications and services as Azure Resource Manager resources
-You can deploy applications and services onto your Service Fabric cluster via Azure Resource Manager. This means that instead of deploying and managing applications via PowerShell or CLI after having to wait for the cluster to be ready, you can now express applications and services in JSON and deploy them in the same Resource Manager template as your cluster. The process of application registration, provisioning, and deployment all happens in one step.
+You can deploy applications and services onto your Service Fabric cluster via Azure Resource Manager. This means that instead of deploying and managing applications via PowerShell or CLI after waiting for the cluster to be ready, you can now express applications and services in JSON and deploy them in the same Resource Manager template as your cluster. The process of application registration, provisioning, and deployment all happens in one step.
This is the recommended way for you to deploy any setup, governance, or cluster management applications that you require in your cluster. This includes the [Patch Orchestration Application](service-fabric-patch-orchestration-application.md), Watchdogs, or any applications that need to be running in your cluster before other applications or services are deployed. When applicable, manage your applications as Resource Manager resources to improve: * Audit trail: Resource Manager audits every operation and keeps a detailed *Activity Log* that can help you trace any changes made to these applications and your cluster.
-* Azure role-based access control (Azure RBAC): Managing access to clusters as well as applications deployed on the cluster can be done via the same Resource Manager template.
+* Azure role-based access control (Azure RBAC): Managing access to clusters and applications deployed on the cluster can be done via the same Resource Manager template.
* Azure Resource Manager (via Azure portal) becomes a one-stop-shop for managing your cluster and critical application deployments. The following snippet shows the different kinds of resources that can be managed via a template:
The following snippet shows the different kinds of resources that can be managed
## Add a new application to your Resource Manager template
-1. Prepare your cluster's Resource Manager template for deployment. See [Create a Service Fabric cluster by using Azure Resource Manager](service-fabric-cluster-creation-via-arm.md) for more information on this.
+1. Prepare your cluster's Resource Manager template for deployment. For more information, see [Create a Service Fabric cluster by using Azure Resource Manager](service-fabric-cluster-creation-via-arm.md).
2. Think about some of the applications you plan on deploying in the cluster. Are there any that will always be running that other applications may take dependencies on? Do you plan on deploying any cluster governance or setup applications? These sorts of applications are best managed via a Resource Manager template, as discussed above. 3. Once you have figured out what applications you want to be deployed this way, the applications have to be packaged, zipped, and placed on a storage share. The share needs to be accessible through a REST endpoint for Azure Resource Manager to consume during deployment. See [Create a storage account](service-fabric-concept-resource-model.md#create-a-storage-account) for details.
-4. In your Resource Manager template, below your cluster declaration, describe each application's properties. These properties include replica or instance count and any dependency chains between resources (other applications or services). Note that this does not replace the Application or Service manifests, but rather describes some of what is in them as part of the cluster's Resource Manager template. Here is a sample template that includes deploying a stateless service *Service1* and a stateful service *Service2* as part of *Application1*:
+4. In your Resource Manager template, below your cluster declaration, describe each application's properties. These properties include replica or instance count and any dependency chains between resources (other applications or services). Note that this doesn't replace the Application or Service manifests, but rather describes some of what is in them as part of the cluster's Resource Manager template. Here's a sample template that includes deploying a stateless service *Service1* and a stateful service *Service2* as part of *Application1*:
```json {
The following snippet shows the different kinds of resources that can be managed
5. Deploy! ## Remove Service Fabric Resource Provider Application resource
-The following will trigger the app package to be un-provisioned from the cluster, and this will clean up the disk space used:
+The following commands trigger unprovisioning of the application package from the cluster, which frees the disk space it used:
```powershell
-Get-AzureRmResource -ResourceId /subscriptions/{sid}/resourceGroups/{rg}/providers/Microsoft.ServiceFabric/clusters/{cluster}/applicationTypes/{apptType}/versions/{version} -ApiVersion "2019-03-01" | Remove-AzureRmResource -Force -ApiVersion "2017-07-01-preview"
+$resourceGroup = 'sftestcluster'
+$cluster = $resourceGroup
+$applicationType = 'VotingType'
+$application = 'Voting'
+$applicationVersion = '1.0.0'
+
+$sf = Get-AzResource -ResourceGroupName $resourceGroup -ResourceName $cluster
+$app = Get-AzResource -ResourceId "$($sf.Id)/applications/$application"
+$appType = Get-AzResource -ResourceId "$($sf.Id)/applicationTypes/$applicationType"
+$appTypeVersion = Get-AzResource -ResourceId "$($appType.Id)/versions/$applicationVersion"
+
+# remove application
+Remove-AzResource -ResourceId $app.Id
+
+# remove application type version
+Remove-AzResource -ResourceId $appTypeVersion.Id
+
+# remove application type
+# Remove-AzResource -ResourceId $appType.Id
```
-Simply removing Microsoft.ServiceFabric/clusters/application from your ARM template will not unprovision the Application
+Simply removing `Microsoft.ServiceFabric/clusters/application` from your ARM template won't unprovision the application. To unprovision it, either run the `Remove-AzResource` PowerShell commands shown above or issue a REST DELETE directly through [Application Type Versions - Delete](https://learn.microsoft.com/rest/api/servicefabric/application/application-type-versions/delete), as sketched below.
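As a rough sketch of the REST option (not from the article; the resource ID segments and api-version are taken from the older PowerShell example above, and newer API versions exist, so check the linked reference), the DELETE can be issued with an Azure AD token:

```python
# Sketch only: deletes an application type version through the ARM REST API.
# The resource ID segments mirror the PowerShell example above; names are placeholders.
import requests
from azure.identity import DefaultAzureCredential

subscription_id = "<subscription-id>"
resource_group = "sftestcluster"
cluster = "sftestcluster"
app_type = "VotingType"
version = "1.0.0"

resource_id = (
    f"/subscriptions/{subscription_id}/resourceGroups/{resource_group}"
    f"/providers/Microsoft.ServiceFabric/clusters/{cluster}"
    f"/applicationTypes/{app_type}/versions/{version}"
)

token = DefaultAzureCredential().get_token("https://management.azure.com/.default").token
response = requests.delete(
    f"https://management.azure.com{resource_id}",
    params={"api-version": "2019-03-01"},   # assumed; confirm against the linked reference
    headers={"Authorization": f"Bearer {token}"},
)
response.raise_for_status()
```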
>[!NOTE] > Once the removal is complete, you should not see the package version in SFX or ARM anymore. You cannot delete the application type version resource that the application is running with; ARM/SFRP will prevent this. If you try to unprovision the running package, SF runtime will prevent it.
Simply removing Microsoft.ServiceFabric/clusters/application from your ARM templ
## Manage an existing application via Resource Manager
-If your cluster is already up and some applications that you would like to manage as Resource Manager resources are already deployed on it, instead of removing the applications and redeploying them, you can use a PUT call using the same APIs to have the applications get acknowledged as Resource Manager resources. For additional information, refer to [What is the Service Fabric application resource model?](./service-fabric-concept-resource-model.md)
+If your cluster is already up and some applications that you would like to manage as Resource Manager resources are already deployed on it, instead of removing the applications and redeploying them, you can use a PUT call using the same APIs to have the applications get acknowledged as Resource Manager resources. For additional information, see [What is the Service Fabric application resource model?](./service-fabric-concept-resource-model.md)
> [!NOTE] > To allow a cluster upgrade to ignore unhealthy apps, the customer can specify "maxPercentUnhealthyApplications: 100" in the "upgradeDescription/healthPolicy" section; detailed descriptions for all settings are in [Service Fabric's REST API Cluster Upgrade Policy documentation](/rest/api/servicefabric/sfrp-model-clusterupgradepolicy).
spring-apps Concept Metrics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/concept-metrics.md
In an Azure Spring Apps instance, you can view metrics on the following pages:
* The application overview page, which shows quick status charts. To view this page, select **Apps** from the navigation pane, and then select an app.
-* The common metrics page, which shows common metrics available to all apps in the Azure Spring Apps instance. To view this page, select **Metrics** from the navigation pane. You can build your own charts in the common metrics page and pin them to your Dashboard.
+* The common metrics page, which shows common metrics available to all apps in the Azure Spring Apps instance. For the Enterprise plan, it also shows common metrics for Tanzu Spring Cloud Gateway. To view this page, select **Metrics** from the navigation pane. You can build your own charts in the common metrics page and pin them to your Dashboard.
:::image type="content" source="media/concept-metrics/navigation-pane.png" alt-text="Screenshot of the Azure portal showing the Azure Spring Apps Overview page with Apps and Metrics highlighted in the navigation pane." lightbox="media/concept-metrics/navigation-pane.png":::
Next, select the aggregation type for each metric:
:::image type="content" source="media/concept-metrics/aggregation-dropdown.png" alt-text="Screenshot of the Azure portal showing the Azure Spring Apps Metrics page with the Aggregation dropdown menu open." lightbox="media/concept-metrics/aggregation-dropdown.png":::
-The aggregation type indicates how to aggregate metric points in the chart by time. There is one raw metric point every minute, and the pre-aggregation type within a minute is pre-defined by metrics type.
+The aggregation type indicates how to aggregate metric points in the chart by time. There's one raw metric point every minute, and the pre-aggregation type within a minute is predefined by the metric type.
* Sum: Sum all values as target output. * Average: Use the Average value in the period as target output.
You can use two kinds of filters (properties):
:::image type="content" source="media/concept-metrics/add-filter.png" alt-text="Screenshot of the Azure portal showing the Azure Spring Apps Metrics page with a chart selected and the Add filter controls highlighted." lightbox="media/concept-metrics/add-filter.png":::
-You can also use the **Apply splitting** option, which will draw multiple lines for one app:
+You can also use the **Apply splitting** option, which draws multiple lines for one app:
:::image type="content" source="media/concept-metrics/apply-splitting.png" alt-text="Screenshot of the Azure portal showing the Azure Spring Apps Metrics page with a chart selected and the Apply splitting option highlighted." lightbox="media/concept-metrics/apply-splitting.png":::
The following tables show the available metrics and details.
### Error >[!div class="mx-tdCol2BreakAll"]
->| Name | Spring Actuator Metric Name | Unit | Details |
+>| Name | Spring Boot Actuator metric name | Unit | Description |
>|-|-|-||
->| tomcat.global.error | tomcat.global.error | Count | Number of errors that occurred in processed requests |
+>| `tomcat.global.error` | `tomcat.global.error` | Count | Number of errors that occurred in processed requests. |
### Performance >[!div class="mx-tdCol2BreakAll"]
->| Name | Spring Actuator Metric Name | Unit | Details |
+>| Name | Spring Boot Actuator metric name | Unit | Description |
>|-|-|-||
->| system.cpu.usage | system.cpu.usage | Percent | Recent CPU usage for the whole system (Obsolete and don't suggest using it). This value is a double in the [0.0,1.0] interval. A value of 0.0 means that all CPUs were idle during the recent period of time observed, while a value of 1.0 means that all CPUs were actively running 100% of the time during the recent period being observed.|
->| process.cpu.usage | App CPU Usage Percentage | Percent | Recent CPU usage for the Java Virtual Machine process (Obsolete and don't suggest using it). This value is a double in the [0.0,1.0] interval. A value of 0.0 means that none of the CPUs were running threads from the JVM process during the recent period of time observed, while a value of 1.0 means that all CPUs were actively running threads from the JVM 100% of the time during the recent period being observed. Threads from the JVM include the application threads as well as the JVM internal threads.|
+>| `system.cpu.usage` | `system.cpu.usage` | Percent | Recent CPU usage for the whole system (obsolete; not recommended for use). This value is a double in the [0.0,1.0] interval. A value of 0.0 means that all CPUs were idle during the recent period of time observed, while a value of 1.0 means that all CPUs were actively running 100% of the time during the recent period being observed.|
+>| `process.cpu.usage` | App CPU Usage Percentage | Percent | Recent CPU usage for the Java Virtual Machine process (obsolete; not recommended for use). This value is a double in the [0.0,1.0] interval. A value of 0.0 means that none of the CPUs were running threads from the JVM process during the recent period of time observed, while a value of 1.0 means that all CPUs were actively running threads from the JVM 100% of the time during the recent period being observed. Threads from the JVM include the application threads as well as the JVM internal threads.|
>| App CPU Usage | | Percent | Recent CPU usage of the JVM process against the CPU allocated to this app. This value is a double in the [0.0,1.0] interval. A value of 0.0 means that none of the CPUs were running threads from the JVM process during the recent period of time observed, while a value of 1.0 means that all CPUs were actively running threads from the JVM 100% of the time during the recent period being observed. Threads from the JVM include the application threads as well as the JVM internal threads.| >| App CPU Usage (Deprecated) | | Percent | Deprecated metric of App CPU Usage. Use the new App CPU Usage metric instead.| >| App Memory Usage | | Percent | Recent memory usage of the JVM process against the memory allocated to this app. This value is a double in the [0.0,1.0] interval. A value of 0.0 means that none of the memory was allocated by threads from the JVM process during the recent period of time observed, while a value of 1.0 means that all memory was allocated by threads from the JVM 100% of the time during the recent period being observed. Threads from the JVM include the application threads as well as the JVM internal threads.|
->| jvm.memory.committed | jvm.memory.committed | Bytes | Represents the amount of memory that is guaranteed to be available for use by the JVM. The JVM may release memory to the system and committed could be less than init. committed will always be greater than or equal to used. |
->| jvm.memory.used | jvm.memory.used | Bytes | Represents the amount of memory currently used in bytes. |
->| jvm.memory.max | jvm.memory.max | Bytes | Represents the maximum amount of memory that can be used for memory management. The amount of used and committed memory will always be less than or equal to max if max is defined. A memory allocation may fail if it attempts to increase the used memory such that used > committed even if used <= max would still be true (for example, when the system is low on virtual memory). |
->| jvm.gc.max.data.size | jvm.gc.max.data.size | Bytes | The peak memory usage of the old generation memory pool since the Java virtual machine was started. |
->| jvm.gc.live.data.size | jvm.gc.live.data.size | Bytes | Size of old generation memory pool after a full GC. |
->| jvm.gc.memory.promoted | jvm.gc.memory.promoted | Bytes | Count of positive increases in the size of the old generation memory pool before GC to after GC. |
->| jvm.gc.memory.allocated | jvm.gc.memory.allocated | Bytes | Incremented for an increase in the size of the young generation memory pool after one GC to before the next. |
->| jvm.gc.pause.total.count | jvm.gc.pause (total-count) | Count | Total GC count after this JMV started, including Young and Old GC. |
->| jvm.gc.pause.total.time | jvm.gc.pause (total-time) | Milliseconds | Total GC time consumed after this JMV started, including Young and Old GC. |
+>| `jvm.memory.committed` | `jvm.memory.committed` | Bytes | Represents the amount of memory that is guaranteed to be available for use by the JVM. The JVM may release memory to the system, so the committed amount can be less than the initial amount. The committed amount is always greater than or equal to the used amount. |
+>| `jvm.memory.used` | `jvm.memory.used` | Bytes | Represents the amount of memory currently used in bytes. |
+>| `jvm.memory.max` | `jvm.memory.max` | Bytes | Represents the maximum amount of memory that can be used for memory management. The amount of used and committed memory will always be less than or equal to max if max is defined. A memory allocation may fail if it attempts to increase the used memory such that used > committed even if used <= max would still be true (for example, when the system is low on virtual memory). |
+>| `jvm.gc.max.data.size` | `jvm.gc.max.data.size` | Bytes | The peak memory usage of the old generation memory pool since the Java virtual machine was started. |
+>| `jvm.gc.live.data.size` | `jvm.gc.live.data.size` | Bytes | Size of old generation memory pool after a full garbage collection (GC). |
+>| `jvm.gc.memory.promoted` | `jvm.gc.memory.promoted` | Bytes | Count of positive increases in the size of the old generation memory pool before GC to after GC. |
+>| `jvm.gc.memory.allocated` | `jvm.gc.memory.allocated` | Bytes | Incremented for an increase in the size of the young generation memory pool after one GC to before the next. |
+>| `jvm.gc.pause.total.count` | `jvm.gc.pause` (total-count) | Count | Total GC count after this JVM started, including Young and Old GC. |
+>| `jvm.gc.pause.total.time` | `jvm.gc.pause` (total-time) | Milliseconds | Total GC time consumed after this JVM started, including Young and Old GC. |
### Performance (.NET) >[!div class="mx-tdCol2BreakAll"]
->| Name | Spring Actuator Metric Name | Unit | Details |
+>| Name | Spring Boot Actuator metric name | Unit | Description |
>||--|||
->| CPU usage | cpu-usage | Percent | The percent of the process's CPU usage relative to all of the system CPU resources [0-100]. |
->| Working set | working-set | Megabytes | Amount of working set used by the process. |
->| GC heap size | gc-heap-size | Megabytes | Total heap size reported by the garbage collector. |
->| Gen 0 GC count | gen-0-gc-count | Count | Number of Generation 0 garbage collections per second. |
->| Gen 1 GC count | gen-1-gc-count | Count | Number of Generation 1 garbage collections per second. |
->| Gen 2 GC count | gen-2-gc-count | Count | Number of Generation 2 garbage collections per second. |
->| Time in GC | timein-gc | Percent | The percent of time in garbage collection since the last garbage collection. |
->| Gen 0 heap size | gen-0-size | Bytes | Generation 0 heap size. |
->| Gen 1 heap size | gen-1-size | Bytes | Generation 1 heap size. |
->| Gen 2 heap size | gen-2-size | Bytes | Generation 2 heap size. |
->| LOH heap size | loh-size | Bytes | Large Object Heap heap size. |
->| Allocation rate | alloc-rate | Bytes | Number of bytes allocated per second. |
->| Assembly count | assembly-count | Count | Number of assemblies loaded. |
->| Exception count | exception-count | Count | Number of exceptions per second. |
->| Thread pool thread count | threadpool-thread-count | Count | Number of thread pool threads. |
->| Monitor lock contention count | monitor-lock-contention-count | Count | The number of times per second there was contention when trying to take a monitor's lock. |
->| Thread pool queue length | threadpool-queue-length | Count | Thread pool work items queue length. |
->| Thread pool completed items count | threadpool-completed-items-count | Count | Thread pool completed work items count. |
->| Active timers count | active-timer-count | Count | The number of timers that are currently active. An active timer is one that is registered to tick at some point in the future, and has not yet been canceled. |
-
-For more information, see [dotnet counters](/dotnet/core/diagnostics/dotnet-counters).
+>| CPU usage | `cpu-usage` | Percent | The percent of the process's CPU usage relative to all of the system CPU resources [0-100]. |
+>| Working set | `working-set` | Megabytes | Amount of working set used by the process. |
+>| GC heap size | `gc-heap-size` | Megabytes | Total heap size reported by the garbage collector. |
+>| Gen 0 GC count | `gen-0-gc-count` | Count | Number of Generation 0 garbage collections per second. |
+>| Gen 1 GC count | `gen-1-gc-count` | Count | Number of Generation 1 garbage collections per second. |
+>| Gen 2 GC count | `gen-2-gc-count` | Count | Number of Generation 2 garbage collections per second. |
+>| Time in GC | `timein-gc` | Percent | The percent of time in garbage collection since the last garbage collection. |
+>| Gen 0 heap size | `gen-0-size` | Bytes | Generation 0 heap size. |
+>| Gen 1 heap size | `gen-1-size` | Bytes | Generation 1 heap size. |
+>| Gen 2 heap size | `gen-2-size` | Bytes | Generation 2 heap size. |
+>| LOH heap size | `loh-size` | Bytes | Large Object Heap heap size. |
+>| Allocation rate | `alloc-rate` | Bytes | Number of bytes allocated per second. |
+>| Assembly count | `assembly-count` | Count | Number of assemblies loaded. |
+>| Exception count | `exception-count` | Count | Number of exceptions per second. |
+>| Thread pool thread count | `threadpool-thread-count` | Count | Number of thread pool threads. |
+>| Monitor lock contention count | `monitor-lock-contention-count` | Count | The number of times per second there was contention when trying to take a monitor's lock. |
+>| Thread pool queue length | `threadpool-queue-length` | Count | Thread pool work items queue length. |
+>| Thread pool completed items count | `threadpool-completed-items-count` | Count | Thread pool completed work items count. |
+>| Active timers count | `active-timer-count` | Count | The number of timers that are currently active. An active timer is one that is registered to tick at some point in the future, and has not yet been canceled. |
+
+For more information, see [Investigate performance counters (dotnet-counters)](/dotnet/core/diagnostics/dotnet-counters).
### Request >[!div class="mx-tdCol2BreakAll"]
->| Name | Spring Actuator Metric Name | Unit | Details |
+>| Name | Spring Boot Actuator metric name | Unit | Description |
>|-|-|-||
->| tomcat.global.sent | tomcat.global.sent | Bytes | Amount of data Tomcat web server sent |
->| tomcat.global.received | tomcat.global.received | Bytes | Amount of data Tomcat web server received |
->| tomcat.global.request.total.count | tomcat.global.request (total-count) | Count | Total count of Tomcat web server processed requests |
->| tomcat.global.request.max | tomcat.global.request.max | Milliseconds | Maximum time of Tomcat web server to process a request |
+>| `tomcat.global.sent` | `tomcat.global.sent` | Bytes | Amount of data the Tomcat web server sent. |
+>| `tomcat.global.received` | `tomcat.global.received` | Bytes | Amount of data the Tomcat web server received. |
+>| `tomcat.global.request.total.count` | `tomcat.global.request` (total-count) | Count | Total count of requests processed by the Tomcat web server. |
+>| `tomcat.global.request.max` | `tomcat.global.request.max` | Milliseconds | Maximum time for the Tomcat web server to process a request. |
### Request (.NET) >[!div class="mx-tdCol2BreakAll"]
->| Name | Spring Actuator Metric Name | Unit | Details |
+>| Name | Spring Boot Actuator metric name | Unit | Description |
>||--|||
->| Requests per second | requests-per-second | Count | Request rate. |
->| Total requests | total-requests | Count | Total number of requests. |
->| Current requests | current-requests | Count | Number of current requests. |
->| Failed requests | failed-requests | Count | Number of failed requests. |
+>| Requests per second | `requests-per-second` | Count | Request rate. |
+>| Total requests | `total-requests` | Count | Total number of requests. |
+>| Current requests | `current-requests` | Count | Number of current requests. |
+>| Failed requests | `failed-requests` | Count | Number of failed requests. |
-For more information, see [dotnet counters](/dotnet/core/diagnostics/dotnet-counters).
+For more information, see [Investigate performance counters (dotnet-counters)](/dotnet/core/diagnostics/dotnet-counters).
### Session >[!div class="mx-tdCol2BreakAll"]
->| Name | Spring Actuator Metric Name | Unit | Details |
+>| Name | Spring Boot Actuator metric name | Unit | Description |
>|-|-|-||
->| tomcat.sessions.active.max | tomcat.sessions.active.max | Count | Maximum number of sessions that have been active at the same time |
->| tomcat.sessions.alive.max | tomcat.sessions.alive.max | Milliseconds | Longest time (in seconds) that an expired session was alive |
->| tomcat.sessions.created | tomcat.sessions.created | Count | Number of sessions that have been created |
->| tomcat.sessions.expired | tomcat.sessions.expired | Count | Number of sessions that have expired |
->| tomcat.sessions.rejected | tomcat.sessions.rejected | Count | Number of sessions that were not created because the maximum number of active sessions reached. |
->| tomcat.sessions.active.current | tomcat.sessions.active.current | Count | Tomcat Session Active Count |
+>| `tomcat.sessions.active.max` | `tomcat.sessions.active.max` | Count | Maximum number of sessions that have been active at the same time. |
+>| `tomcat.sessions.alive.max` | `tomcat.sessions.alive.max` | Milliseconds | Longest time (in seconds) that an expired session was alive. |
+>| `tomcat.sessions.created` | `tomcat.sessions.created` | Count | Number of sessions that have been created. |
+>| `tomcat.sessions.expired` | `tomcat.sessions.expired` | Count | Number of sessions that have expired. |
+>| `tomcat.sessions.rejected` | `tomcat.sessions.rejected` | Count | Number of sessions that were not created because the maximum number of active sessions was reached. |
+>| `tomcat.sessions.active.current` | `tomcat.sessions.active.current` | Count | Number of currently active Tomcat sessions. |
### Ingress >[!div class="mx-tdCol2BreakAll"]
->| Display Name | Azure Metric Name | Unit | Details |
+>| Display name | Azure metric name | Unit | Description |
>|--|--|-|-|
->| Bytes Received | IngressBytesReceived | Bytes | Count of bytes received by Azure Spring Apps from the clients |
->| Bytes Sent | IngressBytesSent | Bytes | Count of bytes sent by Azure Spring Apps to the clients |
->| Requests | IngressRequests | Count | Count of requests by Azure Spring Apps from the clients |
->| Failed Requests | IngressFailedRequests | Count | Count of failed requests by Azure Spring Apps from the clients |
->| Response Status | IngressResponseStatus | Count | HTTP response status returned by Azure Spring Apps. The response status code distribution can be further categorized to show responses in 2xx, 3xx, 4xx, and 5xx categories |
->| Response Time | IngressResponseTime | Seconds | Http response time return by Azure Spring Apps |
->| Throughput In (bytes/s) | IngressBytesReceivedRate | BytesPerSecond | Bytes received per second by Azure Spring Apps from the clients |
->| Throughput Out (bytes/s) | IngressBytesSentRate | BytesPerSecond | Bytes sent per second by Azure Spring Apps to the clients |
+>| Bytes Received | `IngressBytesReceived` | Bytes | Count of bytes received by Azure Spring Apps from the clients. |
+>| Bytes Sent | `IngressBytesSent` | Bytes | Count of bytes sent by Azure Spring Apps to the clients. |
+>| Requests | `IngressRequests` | Count | Count of requests by Azure Spring Apps from the clients. |
+>| Failed Requests | `IngressFailedRequests` | Count | Count of failed requests by Azure Spring Apps from the clients. |
+>| Response Status | `IngressResponseStatus` | Count | HTTP response status returned by Azure Spring Apps. The response status code distribution can be further categorized to show responses in 2xx, 3xx, 4xx, and 5xx categories. |
+>| Response Time | `IngressResponseTime` | Seconds | HTTP response time returned by Azure Spring Apps. |
+>| Throughput In (bytes/s) | `IngressBytesReceivedRate`| BytesPerSecond | Bytes received per second by Azure Spring Apps from the clients. |
+>| Throughput Out (bytes/s) | `IngressBytesSentRate` | BytesPerSecond | Bytes sent per second by Azure Spring Apps to the clients. |
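The Azure metric names in the preceding table are the names you pass when querying Azure Monitor programmatically. The following is a minimal sketch, assuming the `azure-monitor-query` and `azure-identity` packages; the resource ID segments, time window, and chosen metric names are illustrative placeholders:

```python
from datetime import timedelta

from azure.identity import DefaultAzureCredential
from azure.monitor.query import MetricsQueryClient, MetricAggregationType

# Placeholder resource ID of an Azure Spring Apps instance; replace with your own.
resource_id = (
    "/subscriptions/<subscription-id>/resourceGroups/<resource-group>"
    "/providers/Microsoft.AppPlatform/Spring/<service-name>"
)

client = MetricsQueryClient(DefaultAzureCredential())

# Query the last hour of ingress request counts at 5-minute granularity.
response = client.query_resource(
    resource_id,
    metric_names=["IngressRequests", "IngressFailedRequests"],
    timespan=timedelta(hours=1),
    granularity=timedelta(minutes=5),
    aggregations=[MetricAggregationType.TOTAL],
)

for metric in response.metrics:
    for series in metric.timeseries:
        for point in series.data:
            print(f"{metric.name} {point.timestamp}: total={point.total}")
```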
+
+### Gateway
+
+The following table applies only to the Tanzu Spring Cloud Gateway in the Enterprise plan.
+
+>[!div class="mx-tdCol2BreakAll"]
+>| Display name | Azure metric name | Unit | Description |
+>| | - | | - |
+>| `jvm.gc.live.data.size` | `GatewayJvmGcLiveDataSizeBytes` | Bytes | Size of old generation memory pool after a full GC. |
+>| `jvm.gc.max.data.size` | `GatewayJvmGcMaxDataSizeBytes` | Bytes | Max size of old generation memory pool. |
+>| `jvm.gc.memory.promoted` | `GatewayJvmGcMemoryPromotedBytesTotal` | Bytes | Count of positive increases in the size of the old generation memory pool before GC to after GC. |
+>| `jvm.gc.pause.max.time` | `GatewayJvmGcPauseSecondsMax` | Seconds | Maximum GC pause time. |
+>| `jvm.gc.pause.total.count` | `GatewayJvmGcPauseSecondsCount` | Count | Total GC pause count. |
+>| `jvm.gc.pause.total.time` | `GatewayJvmGcPauseSecondsSum` | Seconds | Total GC pause time. |
+>| `jvm.memory.committed` | `GatewayJvmMemoryCommittedBytes` | Bytes | Memory assigned to the JVM, in bytes. |
+>| `jvm.memory.used` | `GatewayJvmMemoryUsedBytes` | Bytes | Memory used, in bytes. |
+>| Max time of requests | `GatewayHttpServerRequestsMilliSecondsMax` | Milliseconds | The maximum time of requests. |
+>| `process.cpu.usage` | `GatewayProcessCpuUsage` | Percent | The recent CPU usage for the JVM process. |
+>| Request count | `GatewayHttpServerRequestsSecondsCount` | Count | The number of requests. |
+>| `system.cpu.usage` | `GatewaySystemCpuUsage` | Percent | The recent CPU usage for the whole system. |
+>| Throttled requests count | `GatewayRatelimitThrottledCount` | Count | The count of the throttled requests. |
## Next steps
spring-apps How To Troubleshoot Enterprise Spring Cloud Gateway https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/how-to-troubleshoot-enterprise-spring-cloud-gateway.md
This article shows you how to troubleshoot Spring Cloud Gateway for VMware Tanzu
- [Azure CLI](/cli/azure/install-azure-cli) version 2.45.0 or later. Use the following command to install the Azure Spring Apps extension: `az extension add --name spring`.
+## Check Gateway metrics
+
+For more information on how to check metrics on the Azure portal, see the [Common metrics page](./concept-metrics.md#common-metrics-page) section of [Metrics for Azure Spring Apps](concept-metrics.md).
+
+For more information on each supported metric, see the [Gateway](./concept-metrics.md#gateway) section of [Metrics for Azure Spring Apps](./concept-metrics.md).
+ ## Check Gateway logs There are two components that make up the Spring Cloud Gateway for VMware Tanzu: the Gateway itself and the Gateway operator. You can infer from the name that the Gateway operator is for managing the Gateway, while the Gateway itself fulfills the features. The logs of both components are available. The following sections describe how to check these logs.
Use the following steps to adjust the log levels:
1. Select **Save** to save your changes. 1. After the change is successful, you can find more detailed logs for troubleshooting, such as information about how requests are routed.
+## Set up alert rules
+
+You can create alert rules based on logs and metrics. For more information, see [Create or edit an alert rule](../azure-monitor/alerts/alerts-create-new-alert-rule.md).
+
+Use the following steps to directly create alert rules from the Azure portal for Azure Spring Apps:
+
+1. Open your Azure Spring Apps instance.
+1. Navigate to **Logs** or **Metrics**.
+1. Write the log query in the **Logs** pane, or add a metrics chart.
+1. Select **New alert rule**. This action takes you to the **Create an alert rule** pane, and the log query or the metrics is filled out automatically.
+
+You can now configure the alert rule details.
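As an alternative to the portal flow, you can define a metric alert rule programmatically. The following sketch uses the `azure-mgmt-monitor` package; the subscription ID, resource IDs, metric name, and threshold are illustrative assumptions rather than values from this article:

```python
from azure.identity import DefaultAzureCredential
from azure.mgmt.monitor import MonitorManagementClient
from azure.mgmt.monitor.models import (
    MetricAlertResource,
    MetricAlertSingleResourceMultipleMetricCriteria,
    MetricCriteria,
)

subscription_id = "<subscription-id>"   # placeholder
resource_group = "<resource-group>"     # placeholder
spring_apps_id = (
    "/subscriptions/<subscription-id>/resourceGroups/<resource-group>"
    "/providers/Microsoft.AppPlatform/Spring/<service-name>"
)

client = MonitorManagementClient(DefaultAzureCredential(), subscription_id)

# Example rule: alert when more than 10 failed ingress requests occur in a 15-minute window.
rule = MetricAlertResource(
    location="global",
    description="Example alert on failed ingress requests",
    severity=3,
    enabled=True,
    scopes=[spring_apps_id],
    evaluation_frequency="PT5M",
    window_size="PT15M",
    criteria=MetricAlertSingleResourceMultipleMetricCriteria(
        all_of=[
            MetricCriteria(
                name="FailedRequests",
                metric_name="IngressFailedRequests",
                time_aggregation="Total",
                operator="GreaterThan",
                threshold=10,
            )
        ]
    ),
    actions=[],
)

client.metric_alerts.create_or_update(resource_group, "failed-ingress-alert", rule)
```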
+
+## Monitor Gateway with application performance monitor
+
+ For more information on supported application performance monitors and how to configure them, see the [Configure application performance monitoring](./how-to-configure-enterprise-spring-cloud-gateway.md#configure-application-performance-monitoring) section of [Configure VMware Spring Cloud Gateway](./how-to-configure-enterprise-spring-cloud-gateway.md).
+ ## Restart Gateway For some errors, a restart might help solve the issue. For more information, see the [Restart Spring Cloud Gateway](./how-to-configure-enterprise-spring-cloud-gateway.md#restart-spring-cloud-gateway) section of [Configure VMware Spring Cloud Gateway](./how-to-configure-enterprise-spring-cloud-gateway.md).
spring-apps Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/policy-reference.md
Title: Built-in policy definitions for Azure Spring Apps description: Lists Azure Policy built-in policy definitions for Azure Spring Apps. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 07/06/2023 Last updated : 07/18/2023
spring-apps Quickstart Deploy Event Driven App Standard Consumption https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/quickstart-deploy-event-driven-app-standard-consumption.md
az spring app create \
--assign-endpoint true ```
-### Create an app with the dedicated workload profile
+### (Optional) Create an app with the dedicated workload profile
-Dedicated workload profiles support running apps with customized hardware and increased cost predictability.
+>[!NOTE]
+> This step is optional. Use this step only if you wish to create apps in the dedicated workload profile.
-Use the following command to create a dedicated workload profile:
+Dedicated workload profiles support running apps with customized hardware and increased cost predictability. Use the following command to create a dedicated workload profile:
```azurecli az containerapp env workload-profile set \
spring-apps Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/whats-new.md
This article is updated quarterly, so revisit it regularly. You can also visit [
## June 2023
-The following updates offer two new plans:
+The following update announces a new plan:
-- **Azure Spring Apps consumption plan**: This plan provides a new way to pay for Azure Spring Apps. With super-efficient pricing and serverless capabilities, you can deploy apps that scale based on usage, paying only for resources consumed. For more information, see [Start from zero and scale to zero – Azure Spring Apps consumption plan](https://techcommunity.microsoft.com/t5/apps-on-azure-blog/start-from-zero-and-scale-to-zero-azure-spring-apps-consumption/ba-p/3774825).--- **Azure Spring Apps Consumption and Dedicated plans**: The Standard Dedicated plan provides a fully managed, dedicated environment for running Spring applications on Azure. This plan offers you customizable compute options (including memory optimization), single tenancy, and high availability to help you achieve price predictability, cost savings, and performance for running Spring applications at scale. For more information, see [Unleash Spring apps in a flex environment with Azure Spring Apps Consumption and Dedicated plans](https://techcommunity.microsoft.com/t5/apps-on-azure-blog/unleash-spring-apps-in-a-flex-environment-with-azure-spring-apps/ba-p/3828232).
+- **Azure Spring Apps Consumption and Dedicated plan**: This plan offers you customizable compute options (including memory optimization), single tenancy, and high availability to help you achieve price predictability, cost savings, and performance for running Spring applications at scale. For more information, see [Unleash Spring apps in a flex environment with Azure Spring Apps Consumption and Dedicated plans](https://techcommunity.microsoft.com/t5/apps-on-azure-blog/unleash-spring-apps-in-a-flex-environment-with-azure-spring-apps/ba-p/3828232).
The following update is now available in all plans:
storage Access Tiers Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/access-tiers-overview.md
The cold tier is currently in PREVIEW and is available in all public regions.
### Enrolling in the preview
-To get started, enroll in the preview by using this [form](https://forms.office.com/r/788B1gr3Nq).
-
-You'll receive an email notification when your application is approved and the `ColdTier` feature flag will be registered on your subscription.
-
-### Verifying that you enrolled in the preview
-
-If the `ColdTier` feature flag is registered on your subscription, then you are enrolled in the preview, and you can begin using the cold tier. Use the following steps to ensure that the feature is registered.
-
-#### [Portal](#tab/azure-portal)
-
-In the **Preview features** page of your subscription, locate the **ColdTier** feature, and then make sure that **Registered** appears in the **State** column.
-
-> [!div class="mx-imgBorder"]
-> ![Verify that the feature is registered in Azure portal](./media/access-tiers-overview/cold-tier-feature-registration.png)
-
-#### [PowerShell](#tab/powershell)
-
-To verify that the registration is complete, use the [Get-AzProviderFeature](/powershell/module/az.resources/get-azproviderfeature) command.
-
-```powershell
-Get-AzProviderFeature -ProviderNamespace Microsoft.Storage -FeatureName ColdTier
-```
-
-#### [Azure CLI](#tab/azure-cli)
-
-To verify that the registration is complete, use the [az feature](/cli/azure/feature#az_feature_show) command.
-
-```azurecli
-az feature show --namespace Microsoft.Storage --name ColdTier
-```
--
+You can validate the cold tier on a general-purpose v2 storage account in any subscription in the Azure public cloud. We still recommend that you share your scenario through the [preview form](https://forms.office.com/r/788B1gr3Nq).
### Limitations and known issues
storage Blob Upload Function Trigger Javascript https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/blob-upload-function-trigger-javascript.md
Azure CLI commands can be run in the [Azure Cloud Shell](https://shell.azure.com
-## Create the Computer Vision service account
-Create the Computer Vision service account that processes our uploaded files. Computer Vision is part of Azure Cognitive Services and offers various features for extracting data out of images. You can learn more about Computer Vision on the [overview page](../../cognitive-services/computer-vision/overview.md).
+## Create the Azure AI Vision service
+
+Next, create the Azure AI Vision service account that will process our uploaded files. Vision is part of Azure AI services and offers various features for extracting data out of images. You can learn more about Azure AI Vision on the [overview page](../../ai-services/computer-vision/overview.md).
### [Azure portal](#tab/computer-vision-azure-portal)
storage Blob Upload Function Trigger https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/blob-upload-function-trigger.md
Copy the value of the `connectionString` property and paste it somewhere to use
## Create the Computer Vision service
-Next, create the Computer Vision service account that will process our uploaded files. Computer Vision is part of Azure Cognitive Services and offers a variety of features for extracting data out of images. You can learn more about Computer Vision on the [overview page](../../cognitive-services/computer-vision/overview.md).
+Next, create the Computer Vision service account that will process our uploaded files. Computer Vision is part of Azure Cognitive Services and offers a variety of features for extracting data out of images. You can learn more about Computer Vision on the [overview page](../../ai-services/computer-vision/overview.md).
### [Azure portal](#tab/azure-portal)
storage Data Lake Storage Directory File Acl Python https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/data-lake-storage-directory-file-acl-python.md
Previously updated : 02/07/2023 Last updated : 07/18/2023 ms.devlang: python-+ # Use Python to manage directories and files in Azure Data Lake Storage Gen2
This article shows you how to use Python to create and manage directories and fi
To learn about how to get, set, and update the access control lists (ACL) of directories and files, see [Use Python to manage ACLs in Azure Data Lake Storage Gen2](data-lake-storage-acl-python.md).
-[Package (Python Package Index)](https://pypi.org/project/azure-storage-file-datalake/) | [Samples](https://github.com/Azure/azure-sdk-for-python/tree/master/sdk/storage/azure-storage-file-datalake/samples) | [API reference](/python/api/azure-storage-file-datalake/azure.storage.filedatalake) | [Gen1 to Gen2 mapping](https://github.com/Azure/azure-sdk-for-python/tree/master/sdk/storage/azure-storage-file-datalake/GEN1_GEN2_MAPPING.md) | [Give Feedback](https://github.com/Azure/azure-sdk-for-python/issues)
+[Package (PyPI)](https://pypi.org/project/azure-storage-file-datalake/) | [Samples](https://github.com/Azure/azure-sdk-for-python/tree/master/sdk/storage/azure-storage-file-datalake/samples) | [API reference](/python/api/azure-storage-file-datalake/azure.storage.filedatalake) | [Gen1 to Gen2 mapping](https://github.com/Azure/azure-sdk-for-python/tree/master/sdk/storage/azure-storage-file-datalake/GEN1_GEN2_MAPPING.md) | [Give Feedback](https://github.com/Azure/azure-sdk-for-python/issues)
## Prerequisites
pip install azure-storage-file-datalake azure-identity
Then open your code file and add the necessary import statements. In this example, we add the following to our *.py* file: ```python
-import os, uuid, sys
+import os
+from azure.storage.filedatalake import (
+ DataLakeServiceClient,
+ DataLakeDirectoryClient,
+ FileSystemClient
+)
from azure.identity import DefaultAzureCredential
-from azure.storage.filedatalake import DataLakeServiceClient
-from azure.core._match_conditions import MatchConditions
-from azure.storage.filedatalake._models import ContentSettings
``` ## Authorize access and connect to data resources
-To work with the code examples in this article, you need to create an authorized [DataLakeServiceClient](/python/api/azure-storage-file-datalake/azure.storage.filedatalake.datalakeserviceclient) instance that represents the storage account. You can authorize a `DataLakeServiceClient` using Azure Active Directory (Azure AD), an account access key, or a shared access signature (SAS).
+To work with the code examples in this article, you need to create an authorized [DataLakeServiceClient](/python/api/azure-storage-file-datalake/azure.storage.filedatalake.datalakeserviceclient) instance that represents the storage account. You can authorize a `DataLakeServiceClient` object using Azure Active Directory (Azure AD), an account access key, or a shared access signature (SAS).
### [Azure AD](#tab/azure-ad)
To learn more about generating and managing SAS tokens, see the following articl
### [Account key](#tab/account-key)
-You can authorize access to data using your account access keys (Shared Key). This example creates a [DataLakeServiceClient](/python/api/azure-storage-file-datalake/azure.storage.filedatalake.datalakeserviceclient) instance that is authorized with the account key.
+You can authorize access to data using your account access keys (Shared Key). The following code example creates a [DataLakeServiceClient](/python/api/azure-storage-file-datalake/azure.storage.filedatalake.datalakeserviceclient) instance that is authorized with the account key:
:::code language="python" source="~/azure-storage-snippets/blobs/howto/python/python-v12/crud_datalake.py" id="Snippet_AuthorizeWithKey":::
You can authorize access to data using your account access keys (Shared Key). Th
## Create a container
-A container acts as a file system for your files. You can create one by calling the [DataLakeServiceClient.create_file_system method](/python/api/azure-storage-file-datalake/azure.storage.filedatalake.datalakeserviceclient#azure-storage-filedatalake-datalakeserviceclient-create-file-system).
+A container acts as a file system for your files. You can create a container by using the following method:
-This example creates a container named `my-file-system`.
+- [DataLakeServiceClient.create_file_system](/python/api/azure-storage-file-datalake/azure.storage.filedatalake.datalakeserviceclient#azure-storage-filedatalake-datalakeserviceclient-create-file-system)
+
+The following code example creates a container and returns a `FileSystemClient` object for later use:
:::code language="python" source="~/azure-storage-snippets/blobs/howto/python/python-v12/crud_datalake.py" id="Snippet_CreateContainer"::: ## Create a directory
-Create a directory reference by calling the [FileSystemClient.create_directory](/python/api/azure-storage-file-datalake/azure.storage.filedatalake.filesystemclient#azure-storage-filedatalake-filesystemclient-create-directory) method.
+You can create a directory reference in the container by using the following method:
+
+- [FileSystemClient.create_directory](/python/api/azure-storage-file-datalake/azure.storage.filedatalake.filesystemclient#azure-storage-filedatalake-filesystemclient-create-directory)
-This example adds a directory named `my-directory` to a container.
+The following code example adds a directory to a container and returns a `DataLakeDirectoryClient` object for later use:
:::code language="python" source="~/azure-storage-snippets/blobs/howto/python/python-v12/crud_datalake.py" id="Snippet_CreateDirectory"::: ## Rename or move a directory
-Rename or move a directory by calling the [DataLakeDirectoryClient.rename_directory](/python/api/azure-storage-file-datalake/azure.storage.filedatalake.datalakedirectoryclient#azure-storage-filedatalake-datalakedirectoryclient-rename-directory) method. Pass the path of the desired directory a parameter.
+You can rename or move a directory by using the following method:
+
+- [DataLakeDirectoryClient.rename_directory](/python/api/azure-storage-file-datalake/azure.storage.filedatalake.datalakedirectoryclient#azure-storage-filedatalake-datalakedirectoryclient-rename-directory)
-This example renames a subdirectory to the name `my-directory-renamed`.
+Pass the path that contains the new directory name in the `new_name` argument. The value must have the format `{filesystem}/{directory}/{subdirectory}`.
+
+The following code example shows how to rename a subdirectory:
:::code language="python" source="~/azure-storage-snippets/blobs/howto/python/python-v12/crud_datalake.py" id="Snippet_RenameDirectory":::
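For example, a minimal sketch of composing the `new_name` value, assuming an existing `DataLakeDirectoryClient` (as created in the previous section) and a hypothetical target name:

```python
from azure.storage.filedatalake import DataLakeDirectoryClient

def rename_directory(directory_client: DataLakeDirectoryClient, new_dir_name: str) -> DataLakeDirectoryClient:
    # new_name must include the file system (container) name,
    # for example "my-file-system/my-directory-renamed".
    return directory_client.rename_directory(
        new_name=f"{directory_client.file_system_name}/{new_dir_name}"
    )
```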
-## Delete a directory
+## Upload a file to a directory
-Delete a directory by calling the [DataLakeDirectoryClient.delete_directory](/python/api/azure-storage-file-datalake/azure.storage.filedatalake.datalakedirectoryclient#azure-storage-filedatalake-datalakedirectoryclient-delete-directory) method.
+You can upload content to a new or existing file by using the following method:
-This example deletes a directory named `my-directory`.
+- [DataLakeFileClient.upload_data](/python/api/azure-storage-file-datalake/azure.storage.filedatalake.datalakefileclient#azure-storage-filedatalake-datalakefileclient-upload-data)
+The following code example shows how to upload a file to a directory using the [upload_data](/python/api/azure-storage-file-datalake/azure.storage.filedatalake.datalakefileclient#azure-storage-filedatalake-datalakefileclient-upload-data) method:
-## Upload a file to a directory
-First, create a file reference in the target directory by creating an instance of the **DataLakeFileClient** class. Upload a file by calling the [DataLakeFileClient.append_data](/python/api/azure-storage-file-datalake/azure.storage.filedatalake.datalakefileclient#azure-storage-filedatalake-datalakefileclient-append-data) method. Make sure to complete the upload by calling the [DataLakeFileClient.flush_data](/python/api/azure-storage-file-datalake/azure.storage.filedatalake.datalakefileclient#azure-storage-filedatalake-datalakefileclient-flush-data) method.
+You can use this method to create and upload content to a new file, or you can set the `overwrite` argument to `True` to overwrite an existing file.
-This example uploads a text file to a directory named `my-directory`.
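A minimal sketch of this approach, assuming an existing `DataLakeDirectoryClient` and a hypothetical local file path:

```python
from azure.storage.filedatalake import DataLakeDirectoryClient

def upload_file_to_directory(directory_client: DataLakeDirectoryClient, local_path: str, file_name: str):
    # Get a client for the target file; the file doesn't need to exist yet.
    file_client = directory_client.get_file_client(file_name)

    with open(local_path, mode="rb") as data:
        # overwrite=True creates the file if it doesn't exist and replaces its contents if it does.
        file_client.upload_data(data, overwrite=True)
```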
+## Append data to a file
+You can upload data to be appended to a file by using the following method:
+
+- [DataLakeFileClient.append_data](/python/api/azure-storage-file-datalake/azure.storage.filedatalake.datalakefileclient#azure-storage-filedatalake-datalakefileclient-append-data)
-> [!TIP]
-> If your file size is large, your code will have to make multiple calls to the **DataLakeFileClient** [append_data](/python/api/azure-storage-file-datalake/azure.storage.filedatalake.datalakefileclient#azure-storage-filedatalake-datalakefileclient-append-data) method. Consider using the [upload_data](/python/api/azure-storage-file-datalake/azure.storage.filedatalake.datalakefileclient#azure-storage-filedatalake-datalakefileclient-upload-data) method instead. That way, you can upload the entire file in a single call.
+The following code example shows how to append data to the end of a file using these steps:
-## Upload a large file to a directory
+- Create a `DataLakeFileClient` object to represent the file resource you're working with.
+- Upload data to the file using the [append_data](/python/api/azure-storage-file-datalake/azure.storage.filedatalake.datalakefileclient#azure-storage-filedatalake-datalakefileclient-append-data) method.
+- Complete the upload by calling the [flush_data](/python/api/azure-storage-file-datalake/azure.storage.filedatalake.datalakefileclient#azure-storage-filedatalake-datalakefileclient-flush-data) method to write the previously uploaded data to the file.
-Use the [DataLakeFileClient.upload_data](/python/api/azure-storage-file-datalake/azure.storage.filedatalake.datalakefileclient#azure-storage-filedatalake-datalakefileclient-upload-data) method to upload large files without having to make multiple calls to the [DataLakeFileClient.append_data](/python/api/azure-storage-file-datalake/azure.storage.filedatalake.datalakefileclient#azure-storage-filedatalake-datalakefileclient-append-data) method.
+With this method, data can only be appended to a file and the operation is limited to 4000 MiB per request.
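A minimal sketch of these steps, assuming an existing `DataLakeFileClient` for a file that already exists:

```python
from azure.storage.filedatalake import DataLakeFileClient

def append_data_to_file(file_client: DataLakeFileClient, data: bytes):
    # Find where the existing file ends so the new data is appended after it.
    file_size = file_client.get_file_properties().size

    # Stage the data, then flush to commit it to the file.
    file_client.append_data(data, offset=file_size, length=len(data))
    file_client.flush_data(file_size + len(data))
```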
## Download from a directory
-Open a local file for writing. Then, create a **DataLakeFileClient** instance that represents the file that you want to download. Call the [DataLakeFileClient.download_file](/python/api/azure-storage-file-datalake/azure.storage.filedatalake.datalakefileclient#azure-storage-filedatalake-datalakefileclient-download-file) to read bytes from the file and then write those bytes to the local file.
+The following code example shows how to download a file from a directory to a local file using these steps:
+
+- Create a `DataLakeFileClient` object to represent the file you want to download.
+- Open a local file for writing.
+- Call the [DataLakeFileClient.download_file](/python/api/azure-storage-file-datalake/azure.storage.filedatalake.datalakefileclient#azure-storage-filedatalake-datalakefileclient-download-file) method to read from the file, then write the data to the local file.
:::code language="python" source="~/azure-storage-snippets/blobs/howto/python/python-v12/crud_datalake.py" id="Snippet_DownloadFromDirectory"::: ## List directory contents
-List directory contents by calling the [FileSystemClient.get_paths](/python/api/azure-storage-file-datalake/azure.storage.filedatalake.filesystemclient#azure-storage-filedatalake-filesystemclient-get-paths) method, and then enumerating through the results.
+You can list directory contents by using the following method and enumerating the result:
+
+- [FileSystemClient.get_paths](/python/api/azure-storage-file-datalake/azure.storage.filedatalake.filesystemclient#azure-storage-filedatalake-filesystemclient-get-paths)
+
+Enumerating the paths in the result may make multiple requests to the service while fetching the values.
-This example, prints the path of each subdirectory and file that is located in a directory named `my-directory`.
+The following code example prints the path of each subdirectory and file that is located in a directory:
:::code language="python" source="~/azure-storage-snippets/blobs/howto/python/python-v12/crud_datalake.py" id="Snippet_ListFilesInDirectory":::
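For example, a sketch that distinguishes subdirectories from files while enumerating, assuming an existing `FileSystemClient` and a hypothetical directory name:

```python
from azure.storage.filedatalake import FileSystemClient

def list_directory_contents(file_system_client: FileSystemClient, directory_name: str):
    paths = file_system_client.get_paths(path=directory_name)

    for path in paths:
        # PathProperties.is_directory indicates whether the entry is a subdirectory or a file.
        kind = "directory" if path.is_directory else "file"
        print(f"{kind}: {path.name}")
```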
+## Delete a directory
+
+You can delete a directory by using the following method:
+
+- [DataLakeDirectoryClient.delete_directory](/python/api/azure-storage-file-datalake/azure.storage.filedatalake.datalakedirectoryclient#azure-storage-filedatalake-datalakedirectoryclient-delete-directory)
+
+The following code example shows how to delete a directory:
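As a minimal sketch, assuming an existing `DataLakeDirectoryClient` as in the earlier examples:

```python
from azure.storage.filedatalake import DataLakeDirectoryClient

def delete_directory(directory_client: DataLakeDirectoryClient):
    # Delete the directory.
    directory_client.delete_directory()
```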
++ ## See also - [API reference documentation](/python/api/azure-storage-file-datalake/azure.storage.filedatalake)
This example, prints the path of each subdirectory and file that is located in a
- [Samples](https://github.com/Azure/azure-sdk-for-python/tree/master/sdk/storage/azure-storage-file-datalake/samples) - [Gen1 to Gen2 mapping](https://github.com/Azure/azure-sdk-for-python/tree/master/sdk/storage/azure-storage-file-datalake/GEN1_GEN2_MAPPING.md) - [Known issues](data-lake-storage-known-issues.md#api-scope-data-lake-client-library)-- [Give Feedback](https://github.com/Azure/azure-sdk-for-python/issues)
+- [Give Feedback](https://github.com/Azure/azure-sdk-for-python/issues)
storage Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/policy-reference.md
Title: Built-in policy definitions for Azure Storage description: Lists Azure Policy built-in policy definitions for Azure Storage. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 07/06/2023 Last updated : 07/18/2023
stream-analytics Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/stream-analytics/policy-reference.md
Title: Built-in policy definitions for Azure Stream Analytics description: Lists Azure Policy built-in policy definitions for Azure Stream Analytics. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 07/06/2023 Last updated : 07/18/2023
synapse-analytics Overview Cognitive Services https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/machine-learning/overview-cognitive-services.md
Title: Cognitive Services in Azure Synapse Analytics
-description: Enrich your data with artificial intelligence (AI) in Azure Synapse Analytics using pretrained models from Azure Cognitive Services.
+ Title: Azure AI services in Azure Synapse Analytics
+description: Enrich your data with artificial intelligence (AI) in Azure Synapse Analytics using pretrained models from Azure AI services.
Last updated 02/14/2023
-# Cognitive Services in Azure Synapse Analytics
+# Azure AI services in Azure Synapse Analytics
-Using pretrained models from Azure Cognitive Services, you can enrich your data with artificial intelligence (AI) in Azure Synapse Analytics.
+Using pretrained models from Azure AI services, you can enrich your data with artificial intelligence (AI) in Azure Synapse Analytics.
-[Azure Cognitive Services](../../cognitive-services/what-are-cognitive-services.md) are cloud-based services that add cognitive intelligence to your data even if you don't have AI or data science skills. There are a few ways you can use these services with your data in Synapse Analytics:
+[Azure AI services](../../ai-services/what-are-ai-services.md) help developers and organizations rapidly create intelligent, cutting-edge, market-ready, and responsible applications with out-of-the-box, prebuilt, and customizable APIs and models.
-- The Cognitive Services wizard in Synapse Analytics generates PySpark code in a Synapse notebook that connects to a cognitive service using data in a Spark table. Then, using pretrained machine learning models, the service does the work for you to add AI to your data. Check out [Sentiment analysis wizard](tutorial-cognitive-services-sentiment.md) and [Anomaly detection wizard](tutorial-cognitive-services-anomaly.md) for more details.
+There are a few ways that you can use a subset of Azure AI services with your data in Synapse Analytics:
-- Synapse Machine Learning ([SynapseML](https://github.com/microsoft/SynapseML)) allows you to build powerful and highly scalable predictive and analytical models from various Spark data sources. Synapse Spark provide built-in SynapseML libraries including [Cognitive Services on Spark](https://github.com/microsoft/SynapseML/tree/master/notebooks/features/cognitive_services).
+- The "Cognitive Services" wizard in Synapse Analytics generates PySpark code in a Synapse notebook that connects to Azure AI services using data in a Spark table. Then, using pretrained machine learning models, the service does the work for you to add AI to your data. Check out [Sentiment analysis wizard](tutorial-cognitive-services-sentiment.md) and [Anomaly detection wizard](tutorial-cognitive-services-anomaly.md) for more details.
-- Starting from the PySpark code generated by the wizard, or the example SynapseML code provided in the tutorial, you can write your own code to use other cognitive services with your data. See [What are Azure Cognitive Services?](../../cognitive-services/what-are-cognitive-services.md) for more information about available services.
+- Synapse Machine Learning ([SynapseML](https://github.com/microsoft/SynapseML)) allows you to build powerful and highly scalable predictive and analytical models from various Spark data sources. Synapse Spark provides built-in SynapseML libraries, including [synapse.ml.cognitive](https://github.com/microsoft/SynapseML/tree/master/notebooks/features/cognitive_services).
-## Get started
-
-The tutorial, [Pre-requisites for using Cognitive Services in Azure Synapse](tutorial-configure-cognitive-services-synapse.md), walks you through a couple steps you need to perform before using Cognitive Services in Synapse Analytics.
+- Starting from the PySpark code generated by the wizard, or the example SynapseML code provided in the tutorial, you can write your own code to use other Azure AI services with your data. See [What are Azure AI services?](../../ai-services/what-are-ai-services.md) for more information about available services.
-## Cognitive Services
-
+## Get started
-[Azure Cognitive Services](https://azure.microsoft.com/services/cognitive-services/) are a suite of APIs, SDKs, and services available to help developers build intelligent applications without having direct AI or data science skills or knowledge by enabling developers to easily add cognitive features into their applications. The goal of Azure Cognitive Services is to help developers create applications that can see, hear, speak, understand, and even begin to reason. The catalog of services within Azure Cognitive Services can be categorized into five main pillars - Vision, Speech, Language, Web Search, and Decision.
+The tutorial, [Pre-requisites for using Azure AI services in Azure Synapse](tutorial-configure-cognitive-services-synapse.md), walks you through a couple of steps you need to perform before using Azure AI services in Synapse Analytics.
## Usage
The tutorial, [Pre-requisites for using Cognitive Services in Azure Synapse](tut
- Dictionary Examples: Provides examples that show how terms in the dictionary are used in context. ([Scala](https://mmlspark.blob.core.windows.net/docs/0.10.2/scala/com/microsoft/azure/synapse/ml/cognitive/DictionaryExamples.html), [Python](https://mmlspark.blob.core.windows.net/docs/0.10.2/pyspark/synapse.ml.cognitive.html#module-synapse.ml.cognitive.DictionaryExamples)) - Document Translation: Translates documents across all supported languages and dialects while preserving document structure and data format. ([Scala](https://mmlspark.blob.core.windows.net/docs/0.10.2/scala/com/microsoft/azure/synapse/ml/cognitive/DocumentTranslator.html), [Python](https://mmlspark.blob.core.windows.net/docs/0.10.2/pyspark/synapse.ml.cognitive.html#module-synapse.ml.cognitive.DocumentTranslator))
-### Form Recognizer
-[**Form Recognizer**](https://azure.microsoft.com/services/form-recognizer/)
+### Document Intelligence
+[**Document Intelligence**](https://azure.microsoft.com/services/form-recognizer/) (formerly known as Form Recognizer)
- Analyze Layout: Extract text and layout information from a given document. ([Scala](https://mmlspark.blob.core.windows.net/docs/0.10.2/scala/com/microsoft/azure/synapse/ml/cognitive/AnalyzeLayout.html), [Python](https://mmlspark.blob.core.windows.net/docs/0.10.2/pyspark/synapse.ml.cognitive.html#module-synapse.ml.cognitive.AnalyzeLayout)) - Analyze Receipts: Detects and extracts data from receipts using optical character recognition (OCR) and our receipt model, enabling you to easily extract structured data from receipts such as merchant name, merchant phone number, transaction date, transaction total, and more. ([Scala](https://mmlspark.blob.core.windows.net/docs/0.10.2/scala/com/microsoft/azure/synapse/ml/cognitive/AnalyzeReceipts.html), [Python](https://mmlspark.blob.core.windows.net/docs/0.10.2/pyspark/synapse.ml.cognitive.html#module-synapse.ml.cognitive.AnalyzeReceipts)) - Analyze Business Cards: Detects and extracts data from business cards using optical character recognition (OCR) and our business card model, enabling you to easily extract structured data from business cards such as contact names, company names, phone numbers, emails, and more. ([Scala](https://mmlspark.blob.core.windows.net/docs/0.10.2/scala/com/microsoft/azure/synapse/ml/cognitive/AnalyzeBusinessCards.html), [Python](https://mmlspark.blob.core.windows.net/docs/0.10.2/pyspark/synapse.ml.cognitive.html#module-synapse.ml.cognitive.AnalyzeBusinessCards))
The tutorial, [Pre-requisites for using Cognitive Services in Azure Synapse](tut
### Search - [Bing Image search](https://azure.microsoft.com/services/cognitive-services/bing-image-search-api/) ([Scala](https://mmlspark.blob.core.windows.net/docs/0.10.2/scala/com/microsoft/azure/synapse/ml/cognitive/BingImageSearch.html), [Python](https://mmlspark.blob.core.windows.net/docs/0.10.2/pyspark/synapse.ml.cognitive.html#module-synapse.ml.cognitive.BingImageSearch))-- [Azure Cognitive search](../../search/search-what-is-azure-search.md) ([Scala](https://mmlspark.blob.core.windows.net/docs/0.10.2/scala/https://docsupdatetracker.net/index.html#com.microsoft.azure.synapse.ml.cognitive.search.AzureSearchWriter$), [Python](https://mmlspark.blob.core.windows.net/docs/0.10.2/pyspark/synapse.ml.cognitive.html#module-synapse.ml.cognitive.AzureSearchWriter))
+- [Azure Cognitive Search](../../search/search-what-is-azure-search.md) ([Scala](https://mmlspark.blob.core.windows.net/docs/0.10.2/scala/https://docsupdatetracker.net/index.html#com.microsoft.azure.synapse.ml.cognitive.search.AzureSearchWriter$), [Python](https://mmlspark.blob.core.windows.net/docs/0.10.2/pyspark/synapse.ml.cognitive.html#module-synapse.ml.cognitive.AzureSearchWriter))
## Prerequisites
-1. Follow the steps in [Getting started](../../cognitive-services/big-dat) to set up your Azure Databricks and Cognitive Services environment. This tutorial shows you how to install SynapseML and how to create your Spark cluster in Databricks.
+1. Follow the steps in [Setup environment for Azure AI services](./setup-environment-cognitive-services.md) to set up your Azure Databricks and Azure AI services environment. This tutorial shows you how to install SynapseML and how to create your Spark cluster in Databricks.
1. After you create a new notebook in Azure Databricks, copy the following **Shared code** and paste into a new cell in your notebook. 1. Choose one of the following service samples and copy paste it into a second new cell in your notebook. 1. Replace any of the service subscription key placeholders with your own key.
from synapse.ml.core.platform import materializing_display as display
```python from synapse.ml.cognitive import *
-# A general Cognitive Services key for Text Analytics, Computer Vision and Form Recognizer (or use separate keys that belong to each service)
+# A multi-service resource key for Text Analytics, Computer Vision and Document Intelligence (or use separate keys that belong to each service)
service_key = find_secret("cognitive-api-key") service_loc = "eastus"
df = spark.createDataFrame(
[ ("I am so happy today, its sunny!", "en-US"), ("I am frustrated by this rush hour traffic", "en-US"),
- ("The cognitive services on spark aint bad", "en-US"),
+ ("The Azure AI services on spark aint bad", "en-US"),
], ["text", "language"], )
display(
## Text Analytics for Health Sample
-The [Text Analytics for Health Service](../../cognitive-services/language-service/text-analytics-for-health/overview.md?tabs=ner) extracts and labels relevant medical information from unstructured texts such as doctor's notes, discharge summaries, clinical documents, and electronic health records.
+The [Text Analytics for Health Service](../../ai-services/language-service/text-analytics-for-health/overview.md?tabs=ner) extracts and labels relevant medical information from unstructured texts such as doctor's notes, discharge summaries, clinical documents, and electronic health records.
```python
display(healthcare.transform(df))
``` ## Translator sample
-[Translator](https://azure.microsoft.com/services/cognitive-services/translator/) is a cloud-based machine translation service and is part of the Azure Cognitive Services family of cognitive APIs used to build intelligent apps. Translator is easy to integrate in your applications, websites, tools, and solutions. It allows you to add multi-language user experiences in 90 languages and dialects and can be used for text translation with any operating system. In this sample, we do a simple text translation by providing the sentences you want to translate and target languages you want to translate to.
+[Translator](https://azure.microsoft.com/services/cognitive-services/translator/) is a cloud-based machine translation service and is part of the Azure AI services family of APIs used to build intelligent apps. Translator is easy to integrate in your applications, websites, tools, and solutions. It allows you to add multi-language user experiences in 90 languages and dialects and can be used for text translation with any operating system. In this sample, we do a simple text translation by providing the sentences you want to translate and target languages you want to translate to.
```python
display(
) ```
-## Form Recognizer sample
-[Form Recognizer](https://azure.microsoft.com/services/form-recognizer/) is a part of Azure Applied AI Services that lets you build automated data processing software using machine learning technology. Identify and extract text, key/value pairs, selection marks, tables, and structure from your documents. The service outputs structured data that includes the relationships in the original file, bounding boxes, confidence and more. In this sample, we analyze a business card image and extract its information into structured data.
+## Document Intelligence sample
+[Document Intelligence](https://azure.microsoft.com/services/form-recognizer/) (formerly known as "Form Recognizer") is a part of Azure AI services that lets you build automated data processing software using machine learning technology. Identify and extract text, key/value pairs, selection marks, tables, and structure from your documents. The service outputs structured data that includes the relationships in the original file, bounding boxes, confidence and more. In this sample, we analyze a business card image and extract its information into structured data.
```python
imageDf = spark.createDataFrame(
], )
-# Run the Form Recognizer service
+# Run the Document Intelligence service
analyzeBusinessCards = ( AnalyzeBusinessCards() .setSubscriptionKey(service_key)
display(
) ```
-## Azure Cognitive search sample
+## Azure Cognitive Search sample
In this example, we show how you can enrich data using Cognitive Skills and write to an Azure Search Index using SynapseML.
tdf.writeToAzureSearch(
## Other Tutorials
-The following tutorials provide complete examples of using Cognitive Services in Synapse Analytics.
+The following tutorials provide complete examples of using Azure AI services in Synapse Analytics.
-- [Sentiment analysis with Cognitive Services](tutorial-cognitive-services-sentiment.md) - Using an example data set of customer comments, you build a Spark table with a column that indicates the sentiment of the comments in each row.
+- [Sentiment analysis with Azure AI services](tutorial-cognitive-services-sentiment.md) - Using an example data set of customer comments, you build a Spark table with a column that indicates the sentiment of the comments in each row.
-- [Anomaly detection with Cognitive Services](tutorial-cognitive-services-anomaly.md) - Using an example data set of time series data, you build a Spark table with a column that indicates whether the data in each row is an anomaly.
+- [Anomaly detection with Azure AI services](tutorial-cognitive-services-anomaly.md) - Using an example data set of time series data, you build a Spark table with a column that indicates whether the data in each row is an anomaly.
-- [Build machine learning applications using Microsoft Machine Learning for Apache Spark](tutorial-build-applications-use-mmlspark.md) - This tutorial demonstrates how to use SynapseML to access several models from Cognitive Services.
+- [Build machine learning applications using Microsoft Machine Learning for Apache Spark](tutorial-build-applications-use-mmlspark.md) - This tutorial demonstrates how to use SynapseML to access several models from Azure AI services.
-- [Form recognizer with Applied AI Service](tutorial-form-recognizer-use-mmlspark.md) demonstrates how to use [Form Recognizer](../../applied-ai-services/form-recognizer/index.yml) to analyze your forms and documents, extracts text and data on Azure Synapse Analytics.
+- [Document Intelligence with Azure AI services](tutorial-form-recognizer-use-mmlspark.md) demonstrates how to use [Document Intelligence](../../ai-services/document-intelligence/index.yml) to analyze your forms and documents, extracts text and data on Azure Synapse Analytics.
-- [Text Analytics with Cognitive Service](tutorial-text-analytics-use-mmlspark.md) shows how to use [Text Analytics](../../cognitive-services/text-analytics/index.yml) to analyze unstructured text on Azure Synapse Analytics.
+- [Text Analytics with Azure AI services](tutorial-text-analytics-use-mmlspark.md) shows how to use [Text Analytics](../../ai-services/language-service/index.yml) to analyze unstructured text on Azure Synapse Analytics.
-- [Translator with Cognitive Service](tutorial-translator-use-mmlspark.md) shows how to use [Translator](../../cognitive-services/Translator/index.yml) to build intelligent, multi-language solutions on Azure Synapse Analytics
+- [Translator with Azure AI services](tutorial-translator-use-mmlspark.md) shows how to use [Translator](../../ai-services/Translator/index.yml) to build intelligent, multi-language solutions on Azure Synapse Analytics.
-- [Computer Vision with Cognitive Service](tutorial-computer-vision-use-mmlspark.md) demonstrates how to use [Computer Vision](../../cognitive-services/computer-vision/index.yml) to analyze images on Azure Synapse Analytics.
+- [Computer Vision with Azure AI services](tutorial-computer-vision-use-mmlspark.md) demonstrates how to use [Computer Vision](../../ai-services/computer-vision/index.yml) to analyze images on Azure Synapse Analytics.
-## Available Cognitive Services APIs
+## Available Azure AI services APIs
### Bing Image Search
-| API Type | SynapseML APIs | Cognitive Service APIs (Versions) | DEP VNet Support |
+| API Type | SynapseML APIs | Azure AI services APIs (Versions) | DEP VNet Support |
| | | - | - | |Bing Image Search|BingImageSearch|Images - Visual Search V7.0| Not Supported| ### Anomaly Detector
-| API Type | SynapseML APIs | Cognitive Service APIs (Versions) | DEP VNet Support |
+| API Type | SynapseML APIs | Azure AI services APIs (Versions) | DEP VNet Support |
| | | - | - | | Detect Last Anomaly | DetectLastAnomaly | Detect Last Point V1.0 | Supported | | Detect Anomalies | DetectAnomalies | Detect Entire Series V1.0 | Supported |
The following tutorials provide complete examples of using Cognitive Services in
### Computer vision
-| API Type | SynapseML APIs | Cognitive Service APIs (Versions) | DEP VNet Support |
+| API Type | SynapseML APIs | Azure AI services APIs (Versions) | DEP VNet Support |
| | | - | - | | OCR | OCR | Recognize Printed Text V2.0 | Supported | | Recognize Text | RecognizeText | Recognize Text V2.0 | Supported |
The following tutorials provide complete examples of using Cognitive Services in
### Translator
-| API Type | SynapseML APIs | Cognitive Service APIs (Versions) | DEP VNet Support |
+| API Type | SynapseML APIs | Azure AI services APIs (Versions) | DEP VNet Support |
| | | - | - | | Translate Text | Translate | Translate V3.0 | Not Supported | | Transliterate Text | Transliterate | Transliterate V3.0 | Not Supported |
The following tutorials provide complete examples of using Cognitive Services in
### Face
-| API Type | SynapseML APIs | Cognitive Service APIs (Versions) | DEP VNet Support |
+| API Type | SynapseML APIs | Azure AI services APIs (Versions) | DEP VNet Support |
| | | - | - | | Detect Face | DetectFace | Detect With Url V1.0 | Supported | | Find Similar Face | FindSimilarFace | Find Similar V1.0 | Supported |
The following tutorials provide complete examples of using Cognitive Services in
| Identify Faces | IdentifyFaces | Identify V1.0 | Supported | | Verify Faces | VerifyFaces | Verify Face To Face V1.0 | Supported |
-### Form Recognizer
-| API Type | SynapseML APIs | Cognitive Service APIs (Versions) | DEP VNet Support |
+### Document Intelligence
+| API Type | SynapseML APIs | Azure AI services APIs (Versions) | DEP VNet Support |
| | | - | - | | Analyze Layout | AnalyzeLayout | Analyze Layout Async V2.1 | Supported | | Analyze Receipts | AnalyzeReceipts | Analyze Receipt Async V2.1 | Supported |
The following tutorials provide complete examples of using Cognitive Services in
| Analyze Custom Model | AnalyzeCustomModel | Analyze With Custom Model V2.1 | Supported | ### Speech-to-text
-| API Type | SynapseML APIs | Cognitive Service APIs (Versions) | DEP VNet Support |
+| API Type | SynapseML APIs | Azure AI services APIs (Versions) | DEP VNet Support |
| | | - | - | | Speech To Text | SpeechToText | SpeechToText V1.0 | Not Supported | | Speech To Text SDK | SpeechToTextSDK | Using Speech SDK Version 1.14.0 | Not Supported |
The following tutorials provide complete examples of using Cognitive Services in
### Text Analytics
-| API Type | SynapseML APIs | Cognitive Service APIs (Versions) | DEP VNet Support |
+| API Type | SynapseML APIs | Azure AI services APIs (Versions) | DEP VNet Support |
| | | - | - | | Text Sentiment V2 | TextSentimentV2 | Sentiment V2.0 | Supported | | Language Detector V2 | LanguageDetectorV2 | Languages V2.0 | Supported |
The following tutorials provide complete examples of using Cognitive Services in
## Next steps - [Machine Learning capabilities in Azure Synapse Analytics](what-is-machine-learning.md)-- [What are Cognitive Services?](../../cognitive-services/what-are-cognitive-services.md)
+- [What are Azure AI services?](../../ai-services/what-are-ai-services.md)
- [Use a sample notebook from the Synapse Analytics gallery](quickstart-gallery-sample-notebook.md)
synapse-analytics Setup Environment Cognitive Services https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/machine-learning/setup-environment-cognitive-services.md
+
+ Title: "Setup environment for Azure AI services for big data"
+description: Set up your SynapseML or MMLSpark pipeline with Azure AI services in Azure Databricks and run a sample.
++++++ Last updated : 7/18/2023
+ms.devlang: python
+++
+# Set up the environment for Azure AI services for big data
+
+Setting up your environment is the first step to building a pipeline for your data. After your environment is ready, running a sample is quick and easy.
+
+In this article, you'll perform these steps to get started:
+
+> [!div class="checklist"]
+> * [Create an Azure AI services resource](#create-an-azure-ai-services-resource)
+> * [Create an Apache Spark cluster](#create-an-apache-spark-cluster)
+> * [Try a sample](#try-a-sample)
+
+## Create an Azure AI services resource
+
+To work with big data in Azure AI services, first create an Azure AI services resource for your workflow. There are two main types of Azure AI services: cloud services and containerized services.
+
+### Cloud services
+
+Cloud-based Azure AI services are intelligent algorithms hosted in Azure. These services are ready to use without training; you just need an internet connection. You can [create resources for Azure AI services in the Azure portal](../../ai-services/multi-service-resource.md?pivots=azportal) or with the [Azure CLI](../../ai-services/multi-service-resource.md?pivots=azcli).
+
+### Containerized services (optional)
+
+If your application or workload uses large datasets, requires private networking, or can't contact the cloud, communicating with cloud services might be impossible. In this situation, containerized Azure AI services have these benefits:
+
+* **Low Connectivity**: You can deploy containerized Azure AI services in any computing environment, both on-cloud and off. If your application can't contact the cloud, consider deploying containerized Azure AI services alongside your application.
+
+* **Low Latency**: Because containerized services don't require the round-trip communication to/from the cloud, responses are returned with much lower latencies.
+
+* **Privacy and Data Security**: You can deploy containerized services into private networks, so that sensitive data doesn't leave the network.
+
+* **High Scalability**: Containerized services don't have "rate limits" and run on user-managed computers. As a result, you can scale Azure AI services as needed to handle much larger workloads.
+
+Follow [this guide](../../ai-services/cognitive-services-container-support.md) to create a containerized Azure AI service.
+
+## Create an Apache Spark cluster
+
+[Apache Spark&trade;](http://spark.apache.org/) is a distributed computing framework designed for big-data processing. Users can work with Apache Spark in Azure with services like Azure Databricks, Azure Synapse Analytics, HDInsight, and Azure Kubernetes Services. To use the big data Azure AI services, you must first create a cluster. If you already have a Spark cluster, feel free to try an example.
+
+### Azure Databricks
+
+Azure Databricks is an Apache Spark-based analytics platform with a one-click setup, streamlined workflows, and an interactive workspace. It enables collaboration among data scientists, engineers, and business analysts. To use the big data Azure AI services on Azure Databricks, follow these steps:
+
+1. [Create an Azure Databricks workspace](/azure/databricks/scenarios/quickstart-create-databricks-workspace-portal#create-an-azure-databricks-workspace)
+
+1. [Create a Spark cluster in Databricks](/azure/databricks/scenarios/quickstart-create-databricks-workspace-portal#create-a-spark-cluster-in-databricks)
+
+1. Install the SynapseML open-source library (or MMLSpark library if you're supporting a legacy application):
+
+ * Create a new library in your databricks workspace
+ <img src="media/setup-environment-cognitive-services/create-library.png" alt="Create library" width="50%"/>
+
+ * For SynapseML: input the following maven coordinates
+ Coordinates: `com.microsoft.azure:synapseml_2.12:0.10.0`
+ Repository: default
+
+ * For MMLSpark (legacy): input the following maven coordinates
+ Coordinates: `com.microsoft.ml.spark:mmlspark_2.11:1.0.0-rc3`
+ Repository: `https://mmlspark.azureedge.net/maven`
+ <img src="media/setup-environment-cognitive-services/library-coordinates.png" alt="Library Coordinates" width="50%"/>
+
+ * Install the library onto a cluster
+ <img src="media/setup-environment-cognitive-services/install-library.png" alt="Install Library on Cluster" width="50%"/>
+
+### Azure Synapse Analytics (optional)
+
+Optionally, you can use Synapse Analytics to create a Spark cluster. Azure Synapse Analytics brings together enterprise data warehousing and big data analytics. It gives you the freedom to query data on your terms, using either serverless on-demand or provisioned resources at scale. To get started using Azure Synapse Analytics, follow these steps:
+
+1. [Create a Synapse Workspace (preview)](../quickstart-create-workspace.md).
+
+1. [Create a new serverless Apache Spark pool (preview) using the Azure portal](../quickstart-create-apache-spark-pool-portal.md).
+
+In Azure Synapse Analytics, the big data support for Azure AI services is installed by default.
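As a quick sanity check in a notebook attached to your Synapse Spark pool, you can confirm that the preinstalled library resolves before running any samples (a minimal sketch, nothing here is specific to your workspace):

```python
# SynapseML ships with the Azure Synapse Spark runtime, so the cognitive
# transformers import without installing anything extra.
import synapse.ml
from synapse.ml.cognitive import TextSentiment

print(synapse.ml.__name__)  # prints "synapse.ml" when the library is available
```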
+
+### Azure Kubernetes Service
+
+If you're using containerized Azure AI services, one popular option for deploying Spark alongside containers is the Azure Kubernetes Service.
+
+To get started on Azure Kubernetes Service, follow these steps:
+
+1. [Deploy an Azure Kubernetes Service (AKS) cluster using the Azure portal](../../aks/learn/quick-kubernetes-deploy-portal.md)
+
+1. [Install the Apache Spark 2.4.0 helm chart](https://hub.helm.sh/charts/microsoft/spark)
+
+1. [Install an Azure AI container using Helm](../../ai-services/computer-vision/deploy-computer-vision-on-premises.md)
+
+## Try a sample
+
+After you set up your Spark cluster and environment, you can run a short sample. This sample assumes Azure Databricks and the `mmlspark.cognitive` package. For an example using `synapseml.cognitive`, see [Add search to AI-enriched data from Apache Spark using SynapseML](../../search/search-synapseml-cognitive-services.md).
+
+First, you can create a notebook in Azure Databricks. For other Spark cluster providers, use their notebooks or Spark Submit.
+
+1. Create a new Databricks notebook by choosing **New Notebook** from the **Azure Databricks** menu.
+
+ <img src="media/setup-environment-cognitive-services/new-notebook.png" alt="Create a new notebook" width="50%"/>
+
+1. In the **Create Notebook** dialog, enter a name, select **Python** as the language, and select the Spark cluster that you created earlier.
+
+ <img src="media/setup-environment-cognitive-services/databricks-notebook-details.jpg" alt="New notebook details" width="50%"/>
+
+ Select **Create**.
+
+1. Paste this code snippet into your new notebook.
+
+ ```python
+ from mmlspark.cognitive import *
+ from pyspark.sql.functions import col
+
+ # Add your region and subscription key from the Language service
+ service_key = "ADD-SUBSCRIPTION-KEY-HERE"
+ service_region = "ADD-SERVICE-REGION-HERE"
+
+ df = spark.createDataFrame([
+ ("I am so happy today, its sunny!", "en-US"),
+ ("I am frustrated by this rush hour traffic", "en-US"),
+ ("The Azure AI services on spark aint bad", "en-US"),
+ ], ["text", "language"])
+
+ sentiment = (TextSentiment()
+ .setTextCol("text")
+ .setLocation(service_region)
+ .setSubscriptionKey(service_key)
+ .setOutputCol("sentiment")
+ .setErrorCol("error")
+ .setLanguageCol("language"))
+
+ results = sentiment.transform(df)
+
+ # Show the results in a table
+ display(results.select("text", col("sentiment")[0].getItem("score").alias("sentiment")))
+ ```
+
+1. Get your region and subscription key from the **Keys and Endpoint** menu from your Language resource in the Azure portal.
+
+1. Replace the region and subscription key placeholders in your Databricks notebook code with values that are valid for your resource.
+
+1. Select the play, or triangle, symbol in the upper right of your notebook cell to run the sample. Optionally, select **Run All** at the top of your notebook to run all cells. The results appear below the cell in a table.
+
+### Expected results
+
+| text | sentiment |
+|:|:|
+| I am so happy today, its sunny! | 0.978959 |
+| I am frustrated by this rush hour traffic | 0.0237956 |
+| The Azure AI services on spark aint bad | 0.888896 |
+
+## Next steps
+
+- [Azure AI services in Azure Synapse Analytics](./overview-cognitive-services.md)
+- [Tutorial: Sentiment analysis with Azure AI Language](./tutorial-cognitive-services-sentiment.md)
synapse-analytics Synapse Machine Learning Library https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/machine-learning/synapse-machine-learning-library.md
A unified API standardizes many tools, frameworks, algorithms and streamlines th
### Use pre-built intelligent models
-Many tools in SynapseML don't require a large labeled training dataset. Instead, SynapseML provides simple APIs for pre-built intelligent services, such as Azure Cognitive Services, to quickly solve large-scale AI challenges related to both business and research. SynapseML enables developers to embed over 50 different state-of-the-art ML services directly into their systems and databases. These ready-to-use algorithms can parse a wide variety of documents, transcribe multi-speaker conversations in real time, and translate text to over 100 different languages. For more examples of how to use pre-built AI to solve tasks quickly, see [the SynapseML cognitive service examples](https://microsoft.github.io/SynapseML/docs/features/cognitive_services/CognitiveServices%20-%20Overview/).
+Many tools in SynapseML don't require a large labeled training dataset. Instead, SynapseML provides simple APIs for pre-built intelligent services, such as Azure AI services, to quickly solve large-scale AI challenges related to both business and research. SynapseML enables developers to embed over 50 different state-of-the-art ML services directly into their systems and databases. These ready-to-use algorithms can parse a wide variety of documents, transcribe multi-speaker conversations in real time, and translate text to over 100 different languages. For more examples of how to use pre-built AI to solve tasks quickly, see [the SynapseML "cognitive" examples](https://microsoft.github.io/SynapseML/docs/features/cognitive_services/CognitiveServices%20-%20Overview/).
-To make SynapseML's integration with Azure Cognitive Services fast and efficient SynapseML introduces many optimizations for service-oriented workflows. In particular, SynapseML automatically parses common throttling responses to ensure that jobs donΓÇÖt overwhelm backend services. Additionally, it uses exponential back-offs to handle unreliable network connections and failed responses. Finally, SparkΓÇÖs worker machines stay busy with new asynchronous parallelism primitives for Spark. Asynchronous parallelism allows worker machines to send requests while waiting on a response from the server and can yield a tenfold increase in throughput.
+To make SynapseML's integration with Azure AI services fast and efficient, SynapseML introduces many optimizations for service-oriented workflows. In particular, SynapseML automatically parses common throttling responses to ensure that jobs don't overwhelm backend services. Additionally, it uses exponential back-offs to handle unreliable network connections and failed responses. Finally, Spark's worker machines stay busy with new asynchronous parallelism primitives for Spark. Asynchronous parallelism allows worker machines to send requests while waiting on a response from the server and can yield a tenfold increase in throughput.
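As an illustration, these service-oriented optimizations surface as simple settings on each transformer. The sketch below uses placeholder credentials and assumes the concurrency setter is exposed on `TextSentiment` the same way it is on the other transformers shown in these articles:

```python
from synapse.ml.cognitive import TextSentiment

# The concurrency setting controls how many asynchronous requests each Spark
# worker keeps in flight while waiting on service responses (assumed setter).
sentiment = (TextSentiment()
    .setSubscriptionKey("<your-azure-ai-services-key>")  # placeholder
    .setLocation("eastus")                               # placeholder region
    .setTextCol("text")
    .setOutputCol("sentiment")
    .setConcurrency(5))
```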
### Broad ecosystem compatibility with ONNX
After building a model, it's imperative that researchers and engineers underst
## Enterprise support on Azure Synapse Analytics
-SynapseML is generally available on Azure Synapse Analytics with enterprise support. You can build large-scale machine learning pipelines using Azure Cognitive Services, LightGBM, ONNX, and other [selected SynapseML features](https://techcommunity.microsoft.com/t5/azure-synapse-analytics-blog/streamline-collaboration-and-insights-with-simplified-machine/ba-p/2924707). It even includes templates to quickly prototype distributed machine learning systems, such as visual search engines, predictive maintenance pipelines, document translation, and more.
+SynapseML is generally available on Azure Synapse Analytics with enterprise support. You can build large-scale machine learning pipelines using Azure AI services, LightGBM, ONNX, and other [selected SynapseML features](https://techcommunity.microsoft.com/t5/azure-synapse-analytics-blog/streamline-collaboration-and-insights-with-simplified-machine/ba-p/2924707). It even includes templates to quickly prototype distributed machine learning systems, such as visual search engines, predictive maintenance pipelines, document translation, and more.
## Next steps
synapse-analytics Tutorial Build Applications Use Mmlspark https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/machine-learning/tutorial-build-applications-use-mmlspark.md
# Tutorial: Build machine learning applications using Synapse Machine Learning In this article, you will learn how to use Synapse Machine Learning ([SynapseML](https://github.com/microsoft/SynapseML)) to create machine learning applications.
-SynapseML expands the distributed machine learning solution of Apache Spark by adding many deep learning and data science tools, such as [Azure Cognitive Services](../../cognitive-services/big-dat), [OpenCV](https://opencv.org/), [LightGBM](https://github.com/Microsoft/LightGBM) and more. SynapseML allows you to build powerful and highly scalable predictive and analytical models from various Spark data sources.
+SynapseML expands the distributed machine learning solution of Apache Spark by adding many deep learning and data science tools, such as [Azure AI services](../../ai-services/what-are-ai-services.md), [OpenCV](https://opencv.org/), [LightGBM](https://github.com/Microsoft/LightGBM) and more. SynapseML allows you to build powerful and highly scalable predictive and analytical models from various Spark data sources.
Synapse Spark provides built-in SynapseML libraries including: - [Vowpal Wabbit](https://github.com/VowpalWabbit/vowpal_wabbit) – Library services for machine learning to enable text analytics like sentiment analysis in tweets.-- [Cognitive Services on Spark](https://arxiv.org/abs/1810.08744) – To combine the feature of Azure Cognitive Services in SparkML pipelines in order to derive solution design for cognitive data modeling services like anomaly detection.
+- [MMLSpark: Unifying Machine Learning Ecosystems at Massive Scales](https://arxiv.org/abs/1810.08744) – Combines Azure AI services features in SparkML pipelines to derive solution designs for cognitive data modeling services, such as anomaly detection.
- [LightGBM](https://github.com/Microsoft/LightGBM) – LightGBM is a gradient boosting framework that uses tree-based learning algorithms. It is designed to be distributed and highly efficient. - Conditional KNN – Scalable KNN models with conditional queries. - HTTP on Spark – Enables distributed microservices orchestration by integrating Spark with HTTP protocol-based accessibility.
-This tutorial covers samples using Azure Cognitive Services in SynapseML for
+This tutorial covers samples using Azure AI services in SynapseML for
- Text Analytics - get the sentiment (or mood) of a set of sentences. - Computer Vision - get the tags (one-word descriptions) associated with a set of images.
If you don't have an Azure subscription, [create a free account before you begin
- [Azure Synapse Analytics workspace](../get-started-create-workspace.md) with an Azure Data Lake Storage Gen2 storage account configured as the default storage. You need to be the *Storage Blob Data Contributor* of the Data Lake Storage Gen2 file system that you work with. - Spark pool in your Azure Synapse Analytics workspace. For details, see [Create a Spark pool in Azure Synapse](../get-started-analyze-spark.md).-- Pre-configuration steps described in the tutorial [Configure Cognitive Services in Azure Synapse](./tutorial-configure-cognitive-services-synapse.md).
+- Pre-configuration steps described in the tutorial [Configure Azure AI services in Azure Synapse](./tutorial-configure-cognitive-services-synapse.md).
## Get started
To get started, import SynapseML and configure service keys.
```python import synapse.ml- from synapse.ml.cognitive import * from notebookutils import mssparkutils
-# A general Cognitive Services key for Text Analytics and Computer Vision (or use separate keys that belong to each service)
-cognitive_service_key = mssparkutils.credentials.getSecret("ADD_YOUR_KEY_VAULT_NAME", "ADD_YOUR_SERVICE_KEY","ADD_YOUR_KEY_VAULT_LINKED_SERVICE_NAME")
+# An Azure AI services multi-service resource key for Text Analytics and Computer Vision (or use separate keys that belong to each service)
+ai_service_key = mssparkutils.credentials.getSecret("ADD_YOUR_KEY_VAULT_NAME", "ADD_YOUR_SERVICE_KEY","ADD_YOUR_KEY_VAULT_LINKED_SERVICE_NAME")
# A Bing Search v7 subscription key bingsearch_service_key = mssparkutils.credentials.getSecret("ADD_YOUR_KEY_VAULT_NAME", "ADD_YOUR_BING_SEARCH_KEY","ADD_YOUR_KEY_VAULT_LINKED_SERVICE_NAME") # An Anomaly Dectector subscription key anomalydetector_key = mssparkutils.credentials.getSecret("ADD_YOUR_KEY_VAULT_NAME", "ADD_YOUR_ANOMALY_KEY","ADD_YOUR_KEY_VAULT_LINKED_SERVICE_NAME")-- ``` - ## Text analytics sample
-The [Text Analytics](../../cognitive-services/text-analytics/index.yml) service provides several algorithms for extracting intelligent insights from text. For example, we can find the sentiment of given input text. The service will return a score between 0.0 and 1.0 where low scores indicate negative sentiment and high score indicates positive sentiment. This sample uses three simple sentences and returns the sentiment for each.
+The [Text Analytics](../../ai-services/language-service/index.yml) service provides several algorithms for extracting intelligent insights from text. For example, we can find the sentiment of given input text. The service returns a score between 0.0 and 1.0, where low scores indicate negative sentiment and high scores indicate positive sentiment. This sample uses three simple sentences and returns the sentiment for each.
```python from pyspark.sql.functions import col
df_sentences = spark.createDataFrame([
# Run the Text Analytics service with options sentiment = (TextSentiment() .setTextCol("text")
- .setLocation("eastasia") # Set the location of your cognitive service
- .setSubscriptionKey(cognitive_service_key)
+ .setLocation("eastasia") # Set the location of your Azure AI services resource
+ .setSubscriptionKey(ai_service_key)
.setOutputCol("sentiment") .setErrorCol("error") .setLanguageCol("language"))
display(sentiment.transform(df_sentences).select("text", col("sentiment")[0].get
| I am so happy today, its sunny! | positive | ## Computer vision sample
-[Computer Vision](../../cognitive-services/computer-vision/index.yml) analyzes images to identify structure such as faces, objects, and natural-language descriptions. In this sample, we tag the follow image. Tags are one-word descriptions of things in the image like recognizable objects, people, scenery, and actions.
-
+[Computer Vision](../../ai-services/computer-vision/index.yml) analyzes images to identify structure such as faces, objects, and natural-language descriptions. In this sample, we tag the following image. Tags are one-word descriptions of things in the image like recognizable objects, people, scenery, and actions.
![image](https://raw.githubusercontent.com/Azure-Samples/cognitive-services-sample-data-files/master/ComputerVision/Images/objects.jpg)
df_images = spark.createDataFrame([
# Run the Computer Vision service. Analyze Image extracts information from/about the images. analysis = (AnalyzeImage()
- .setLocation("eastasia") # Set the location of your cognitive service
- .setSubscriptionKey(cognitive_service_key)
+ .setLocation("eastasia") # Set the location of your Azure AI services resource
+ .setSubscriptionKey(ai_service_key)
.setVisualFeatures(["Categories","Color","Description","Faces","Objects","Tags"]) .setOutputCol("analysis_results") .setImageUrlCol("image")
display(res_bingsearch.dropDuplicates())
## Anomaly detector sample
-[Anomaly Detector](../../cognitive-services/anomaly-detector/index.yml) is great for detecting irregularities in your time series data. In this sample, we use the service to find anomalies in the entire time series.
+[Anomaly Detector](../../ai-services/anomaly-detector/index.yml) is great for detecting irregularities in your time series data. In this sample, we use the service to find anomalies in the entire time series.
```python from pyspark.sql.functions import lit
anamoly_detector = (SimpleDetectAnomalies()
# Show the full results of the analysis with the anomalies marked as "True" display(anamoly_detector.transform(df_timeseriesdata).select("timestamp", "value", "anomalies.isAnomaly"))- ``` ### Expected results
display(anamoly_detector.transform(df_timeseriesdata).select("timestamp", "value
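Because the diff shows only fragments of the anomaly detection code, here is a hedged, self-contained sketch of the same pattern. The sample time series, region, and column names are illustrative assumptions; `anomalydetector_key` is the Key Vault secret retrieved earlier.

```python
from pyspark.sql.functions import lit
from synapse.ml.cognitive import SimpleDetectAnomalies

# Illustrative monthly time series with one obvious spike to flag.
df_timeseriesdata = spark.createDataFrame([
    ("1972-01-01T00:00:00Z", 826.0), ("1972-02-01T00:00:00Z", 799.0),
    ("1972-03-01T00:00:00Z", 890.0), ("1972-04-01T00:00:00Z", 900.0),
    ("1972-05-01T00:00:00Z", 766.0), ("1972-06-01T00:00:00Z", 805.0),
    ("1972-07-01T00:00:00Z", 821.0), ("1972-08-01T00:00:00Z", 20000.0),
    ("1972-09-01T00:00:00Z", 883.0), ("1972-10-01T00:00:00Z", 898.0),
    ("1972-11-01T00:00:00Z", 957.0), ("1972-12-01T00:00:00Z", 924.0),
], ["timestamp", "value"]).withColumn("group", lit("series1"))

anomaly_detector = (SimpleDetectAnomalies()
    .setSubscriptionKey(anomalydetector_key)  # secret retrieved from Key Vault earlier
    .setLocation("eastasia")                  # region of your Anomaly Detector resource (assumed)
    .setTimestampCol("timestamp")
    .setValueCol("value")
    .setOutputCol("anomalies")
    .setGroupbyCol("group")
    .setGranularity("monthly"))

# Rows with anomalies.isAnomaly == True are flagged as outliers.
display(anomaly_detector.transform(df_timeseriesdata)
        .select("timestamp", "value", "anomalies.isAnomaly"))
```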
## Speech-to-text sample
-The [Speech-to-text](../../cognitive-services/speech-service/index-text-to-speech.yml) service converts streams or files of spoken audio to text. In this sample, we transcribe one audio file to text.
+The [Speech-to-text](../../ai-services/speech-service/index-text-to-speech.yml) service converts streams or files of spoken audio to text. In this sample, we transcribe one audio file to text.
```python # Create a dataframe with our audio URLs, tied to the column called "url"
df = spark.createDataFrame([("https://mmlspark.blob.core.windows.net/datasets/Sp
# Run the Speech-to-text service to translate the audio into text speech_to_text = (SpeechToTextSDK() .setSubscriptionKey(service_key)
- .setLocation("northeurope") # Set the location of your cognitive service
+ .setLocation("northeurope") # Set the location of your Azure AI services resource
.setOutputCol("text") .setAudioDataCol("url") .setLanguage("en-US")
synapse-analytics Tutorial Cognitive Services Anomaly https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/machine-learning/tutorial-cognitive-services-anomaly.md
Title: 'Tutorial: Anomaly detection with Cognitive Services'
-description: Learn how to use Cognitive Services for anomaly detection in Azure Synapse Analytics.
+ Title: 'Tutorial: Anomaly detection with Azure AI services'
+description: Learn how to use Azure AI Anomaly Detector for anomaly detection in Azure Synapse Analytics.
-# Tutorial: Anomaly detection with Cognitive Services
+# Tutorial: Anomaly detection with Azure AI services
-In this tutorial, you'll learn how to easily enrich your data in Azure Synapse Analytics with [Azure Cognitive Services](../../cognitive-services/index.yml). You'll use [Anomaly Detector](../../cognitive-services/anomaly-detector/index.yml) to find anomalies. A user in Azure Synapse can simply select a table to enrich for detection of anomalies.
+In this tutorial, you'll learn how to easily enrich your data in Azure Synapse Analytics with [Azure AI services](../../ai-services/index.yml). You'll use [Azure AI Anomaly Detector](../../ai-services/anomaly-detector/index.yml) to find anomalies. A user in Azure Synapse can simply select a table to enrich for detection of anomalies.
This tutorial covers: > [!div class="checklist"] > - Steps for getting a Spark table dataset that contains time series data.
-> - Use of a wizard experience in Azure Synapse to enrich data by using Anomaly Detector in Cognitive Services.
+> - Use of a wizard experience in Azure Synapse to enrich data by using Anomaly Detector.
If you don't have an Azure subscription, [create a free account before you begin](https://azure.microsoft.com/free/).
If you don't have an Azure subscription, [create a free account before you begin
- [Azure Synapse Analytics workspace](../get-started-create-workspace.md) with an Azure Data Lake Storage Gen2 storage account configured as the default storage. You need to be the *Storage Blob Data Contributor* of the Data Lake Storage Gen2 file system that you work with. - Spark pool in your Azure Synapse Analytics workspace. For details, see [Create a Spark pool in Azure Synapse](../quickstart-create-sql-pool-studio.md).-- Completion of the pre-configuration steps in the [Configure Cognitive Services in Azure Synapse](tutorial-configure-cognitive-services-synapse.md) tutorial.
+- Completion of the pre-configuration steps in the [Configure Azure AI services in Azure Synapse](tutorial-configure-cognitive-services-synapse.md) tutorial.
## Sign in to the Azure portal
A Spark table named **anomaly_detector_testing_data** should now appear in the d
![Screenshot that shows selections for opening the scoring wizard.](media/tutorial-cognitive-services/tutorial-cognitive-services-anomaly-00g.png)
-2. A configuration panel appears, and you're asked to select a Cognitive Services model. Select **Anomaly Detector**.
+2. A configuration panel appears, and you're asked to select a pre-trained model. Select **Anomaly Detector**.
![Screenshot that shows selection of Anomaly Detector as a model.](media/tutorial-cognitive-services/tutorial-cognitive-services-anomaly-00c.png)
A Spark table named **anomaly_detector_testing_data** should now appear in the d
Provide the following details to configure Anomaly Detector: -- **Azure Cognitive Services linked service**: As part of the prerequisite steps, you created a linked service to your [Cognitive Services](tutorial-configure-cognitive-services-synapse.md) . Select it here.
+- **Azure Cognitive Services linked service**: As part of the prerequisite steps, you created a linked service to your [Azure AI service](tutorial-configure-cognitive-services-synapse.md). Select it here.
- **Granularity**: The rate at which your data is sampled. Choose **monthly**.
Provide the following details to configure Anomaly Detector:
- **Grouping column**: The column that groups the series. That is, all rows that have the same value in this column should form one time series. Choose **group (string)**.
-When you're done, select **Open notebook**. This will generate a notebook for you with PySpark code that uses Azure Cognitive Services to detect anomalies.
+When you're done, select **Open notebook**. This will generate a notebook for you with PySpark code that uses Azure AI services to detect anomalies.
![Screenshot that shows configuration details for Anomaly Detector.](media/tutorial-cognitive-services/tutorial-cognitive-services-anomaly-config.png) ## Run the notebook
-The notebook that you just opened uses the [SynapseML library](https://github.com/microsoft/SynapseML) to connect to Cognitive Services. The Azure Cognitive Services linked service that you provided allow you to securely reference your cognitive service from this experience without revealing any secrets.
+The notebook that you just opened uses the [SynapseML library](https://github.com/microsoft/SynapseML) to connect to Azure AI services. The Azure AI services linked service that you provided allows you to securely reference your Azure AI service from this experience without revealing any secrets.
-You can now run all cells to perform anomaly detection. Select **Run All**. [Learn more about Anomaly Detector in Cognitive Services](../../cognitive-services/anomaly-detector/index.yml).
+You can now run all cells to perform anomaly detection. Select **Run All**. [Learn more about Anomaly Detector in Azure AI services](../../ai-services/anomaly-detector/index.yml).
![Screenshot that shows anomaly detection.](media/tutorial-cognitive-services/tutorial-cognitive-services-anomaly-notebook.png) ## Next steps -- [Tutorial: Sentiment analysis with Azure Cognitive Services](tutorial-cognitive-services-sentiment.md)
+- [Tutorial: Sentiment analysis with Azure AI services](tutorial-cognitive-services-sentiment.md)
- [Tutorial: Machine learning model scoring in Azure Synapse dedicated SQL pools](tutorial-sql-pool-model-scoring-wizard.md)-- [Tutorial: Use Multivariate Anomaly Detector in Azure Synapse Analytics](../../cognitive-services/Anomaly-Detector/tutorials/multivariate-anomaly-detection-synapse.md)
+- [Tutorial: Use Multivariate Anomaly Detector in Azure Synapse Analytics](../../ai-services/Anomaly-Detector/tutorials/multivariate-anomaly-detection-synapse.md)
- [SynapseML anomaly detection](https://microsoft.github.io/SynapseML/docs/documentation/transformers/transformers_cognitive/#anomaly-detection) - [Machine Learning capabilities in Azure Synapse Analytics](what-is-machine-learning.md)
synapse-analytics Tutorial Cognitive Services Sentiment https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/machine-learning/tutorial-cognitive-services-sentiment.md
Title: 'Tutorial: Sentiment analysis with Cognitive Services'
-description: Learn how to use Cognitive Services for sentiment analysis in Azure Synapse Analytics
+ Title: 'Tutorial: Sentiment analysis with Azure AI services'
+description: Learn how to use Azure AI Language for sentiment analysis in Azure Synapse Analytics
-# Tutorial: Sentiment analysis with Cognitive Services
+# Tutorial: Sentiment analysis with Azure AI services
-In this tutorial, you'll learn how to easily enrich your data in Azure Synapse Analytics with [Azure Cognitive Services](../../cognitive-services/index.yml). You'll use the [Text Analytics](../../cognitive-services/text-analytics/index.yml) capabilities to perform sentiment analysis.
+In this tutorial, you'll learn how to easily enrich your data in Azure Synapse Analytics with [Azure AI services](../../ai-services/index.yml). You'll use the [Azure AI Language](../../ai-services/language-service/index.yml) text analytics capabilities to perform sentiment analysis.
A user in Azure Synapse can simply select a table that contains a text column to enrich with sentiments. These sentiments can be positive, negative, mixed, or neutral. A probability will also be returned.
This tutorial covers:
> [!div class="checklist"] > - Steps for getting a Spark table dataset that contains a text column for sentiment analysis.
-> - Using a wizard experience in Azure Synapse to enrich data by using Text Analytics in Cognitive Services.
+> - Using a wizard experience in Azure Synapse to enrich data by using Text Analytics in Azure AI Language.
If you don't have an Azure subscription, [create a free account before you begin](https://azure.microsoft.com/free/).
If you don't have an Azure subscription, [create a free account before you begin
- [Azure Synapse Analytics workspace](../get-started-create-workspace.md) with an Azure Data Lake Storage Gen2 storage account configured as the default storage. You need to be the *Storage Blob Data Contributor* of the Data Lake Storage Gen2 file system that you work with. - Spark pool in your Azure Synapse Analytics workspace. For details, see [Create a Spark pool in Azure Synapse](../quickstart-create-sql-pool-studio.md).-- Pre-configuration steps described in the tutorial [Configure Cognitive Services in Azure Synapse](tutorial-configure-cognitive-services-synapse.md).
+- Pre-configuration steps described in the tutorial [Configure Azure AI services in Azure Synapse](tutorial-configure-cognitive-services-synapse.md).
## Sign in to the Azure portal
You'll need a Spark table for this tutorial.
![Screenshot that shows selections for opening the scoring wizard.](media/tutorial-cognitive-services/tutorial-cognitive-services-sentiment-00d.png)
-2. A configuration panel appears, and you're asked to select a Cognitive Services model. Select **Sentiment Analysis**.
+2. A configuration panel appears, and you're asked to select a pre-trained model. Select **Sentiment Analysis**.
- ![Screenshot that shows selection of a Cognitive Services model.](media/tutorial-cognitive-services/tutorial-cognitive-services-sentiment-choose.png)
+ ![Screenshot that shows selection of a pre-trained sentiment analysis model.](media/tutorial-cognitive-services/tutorial-cognitive-services-sentiment-choose.png)
## Configure sentiment analysis Next, configure the sentiment analysis. Select the following details:-- **Azure Cognitive Services linked service**: As part of the prerequisite steps, you created a linked service to your [Cognitive Services](tutorial-configure-cognitive-services-synapse.md) . Select it here.
+- **Azure Cognitive Services linked service**: As part of the prerequisite steps, you created a linked service to your [Azure AI service](tutorial-configure-cognitive-services-synapse.md). Select it here.
- **Language**: Select **English** as the language of the text that you want to perform sentiment analysis on. - **Text column**: Select **comment (string)** as the text column in your dataset that you want to analyze to determine the sentiment.
-When you're done, select **Open notebook**. This generates a notebook for you with PySpark code that performs the sentiment analysis with Azure Cognitive Services.
+When you're done, select **Open notebook**. This generates a notebook for you with PySpark code that performs the sentiment analysis with Azure AI services.
![Screenshot that shows selections for configuring sentiment analysis.](media/tutorial-cognitive-services/tutorial-cognitive-services-sentiment-config.png) ## Run the notebook
-The notebook that you just opened uses the [SynapseML library](https://github.com/microsoft/SynapseML) to connect to Cognitive Services. The Azure Cognitive Services linked service that you provided allow you to securely reference your cognitive service from this experience without revealing any secrets.
+The notebook that you just opened uses the [SynapseML library](https://github.com/microsoft/SynapseML) to connect to Azure AI services. The Azure AI services linked service that you provided allows you to securely reference your Azure AI service from this experience without revealing any secrets.
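The generated notebook isn't reproduced here, but its code follows a pattern roughly like the sketch below. The linked service name, table name, and column names are placeholders, and the linked-service and language setters are assumed to work on `TextSentiment` the same way they do in the other tutorials in this set.

```python
from synapse.ml.cognitive import TextSentiment

# Placeholders; the wizard fills these in from your selections.
linked_service_name = "<your Azure AI services linked service>"
df = spark.read.table("default.your_table_name")

sentiment = (TextSentiment()
    .setLinkedService(linked_service_name)  # no keys in the notebook; secrets stay in Key Vault
    .setTextCol("comment")                  # the text column chosen in the wizard
    .setLanguage("en")                      # the language chosen in the wizard (assumed setter)
    .setOutputCol("sentiment")
    .setErrorCol("error"))

display(sentiment.transform(df).select("comment", "sentiment"))
```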
You can now run all cells to enrich your data with sentiments. Select **Run all**.
-The sentiments are returned as **positive**, **negative**, **neutral**, or **mixed**. You also get probabilities per sentiment. [Learn more about sentiment analysis in Cognitive Services](../../cognitive-services/text-analytics/how-tos/text-analytics-how-to-sentiment-analysis.md).
+The sentiments are returned as **positive**, **negative**, **neutral**, or **mixed**. You also get probabilities per sentiment. [Learn more about sentiment analysis in Azure AI services](../../ai-services/language-service/sentiment-opinion-mining/overview.md).
![Screenshot that shows sentiment analysis.](media/tutorial-cognitive-services/tutorial-cognitive-services-sentiment-notebook.png) ## Next steps-- [Tutorial: Anomaly detection with Azure Cognitive Services](tutorial-cognitive-services-anomaly.md)
+- [Tutorial: Anomaly detection with Azure AI services](tutorial-cognitive-services-anomaly.md)
- [Tutorial: Machine learning model scoring in Azure Synapse dedicated SQL pools](tutorial-sql-pool-model-scoring-wizard.md) - [SynapseML text sentiment analysis](https://microsoft.github.io/SynapseML/docs/documentation/transformers/transformers_cognitive/#textsentiment) - [Machine Learning capabilities in Azure Synapse Analytics](what-is-machine-learning.md)
synapse-analytics Tutorial Computer Vision Use Mmlspark https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/machine-learning/tutorial-computer-vision-use-mmlspark.md
Title: 'Tutorial: Computer Vision with Cognitive Service'
-description: Learn how to use computer vision in Azure Synapse Analytics.
+ Title: 'Tutorial: Vision with Azure AI services'
+description: Learn how to use Azure AI Vision in Azure Synapse Analytics.
-# Tutorial: Computer Vision with Cognitive Service
+# Tutorial: Vision with Azure AI services
-[Computer Vision](../../cognitive-services/computer-vision/index.yml) is an [Azure Cognitive Service](../../cognitive-services/index.yml) that enables you to process images and return information based on the visual features. In this tutorial, you'll learn how to use [Computer Vision](../../cognitive-services/computer-vision/index.yml) to analyze images on Azure Synapse Analytics.
+[Azure AI Vision](../../ai-services/computer-vision/index.yml) is an [Azure AI service](../../ai-services/index.yml) that enables you to process images and return information based on the visual features. In this tutorial, you'll learn how to use [Azure AI Vision](../../ai-services/computer-vision/index.yml) to analyze images on Azure Synapse Analytics.
This tutorial demonstrates using text analytics with [SynapseML](https://github.com/microsoft/SynapseML) to:
df = spark.createDataFrame([
("<replace with your file path>/dog.jpg", ) ], ["image", ])
-# Run the Computer Vision service. Analyze Image extracts infortmation from/about the images.
+# Run the Azure AI Vision service. Analyze Image extracts information from/about the images.
analysis = (AnalyzeImage()
- .setLinkedService(cognitive_service_name)
+ .setLinkedService(ai_service_name)
.setVisualFeatures(["Categories","Color","Description","Faces","Objects","Tags"]) .setOutputCol("analysis_results") .setImageUrlCol("image")
df = spark.createDataFrame([
], ["url", ]) ri = (ReadImage()
- .setLinkedService(cognitive_service_name)
+ .setLinkedService(ai_service_name)
.setImageUrlCol("url") .setOutputCol("ocr"))
display(ri.transform(df))
![Screenshot of the expected results from the example OCR analysis.](./media/tutorial-computer-vision-use-mmlspark/ocr-output.png) ## Generate thumbnails
-Analyze the contents of an image to generate an appropriate thumbnail for that image. Computer Vision first generates a high-quality thumbnail and then analyzes the objects within the image to determine the area of interest. Computer Vision then crops the image to fit the requirements of the area of interest. The generated thumbnail can be presented using an aspect ratio that is different from the aspect ratio of the original image, depending on your needs.
+Analyze the contents of an image to generate an appropriate thumbnail for that image. The Vision service first generates a high-quality thumbnail and then analyzes the objects within the image to determine the area of interest. Vision then crops the image to fit the requirements of the area of interest. The generated thumbnail can be presented using an aspect ratio that is different from the aspect ratio of the original image, depending on your needs.
### Example input ![Photograph of the face of Satya Nadella.](./media/tutorial-computer-vision-use-mmlspark/satya.jpeg)
df = spark.createDataFrame([
], ["url", ]) gt = (GenerateThumbnails()
- .setLinkedService(cognitive_service_name)
+ .setLinkedService(ai_service_name)
.setHeight(50) .setWidth(50) .setSmartCropping(True)
df = spark.createDataFrame([
], ["url", ]) ti = (TagImage()
- .setLinkedService(cognitive_service_name)
+ .setLinkedService(ai_service_name)
.setImageUrlCol("url") .setOutputCol("tags"))
display(ti.transform(df))
![Screenshot of the expected output of the tags generated from the image of Satya Nadella.](./media/tutorial-computer-vision-use-mmlspark/tag-image-output.png) ## Describe image
-Generate a description of an entire image in human-readable language, using complete sentences. Computer Vision's algorithms generate various descriptions based on the objects identified in the image. The descriptions are each evaluated and a confidence score generated. A list is then returned ordered from highest confidence score to lowest.
+Generate a description of an entire image in human-readable language, using complete sentences. The Vision service's algorithms generate various descriptions based on the objects identified in the image. The descriptions are each evaluated and a confidence score is generated. A list is then returned ordered from highest confidence score to lowest.
Let's continue using Satya's image as an example.
df = spark.createDataFrame([
], ["url", ]) di = (DescribeImage()
- .setLinkedService(cognitive_service_name)
+ .setLinkedService(ai_service_name)
.setMaxCandidates(3) .setImageUrlCol("url") .setOutputCol("descriptions"))
display(di.transform(df))
![Screenshot of the expected output of the description of the image of Satya Nadella.](./media/tutorial-computer-vision-use-mmlspark/describe-image-output.png) ## Recognize domain-specific content
-Use domain models to detect and identify domain-specific content in an image, such as celebrities and landmarks. For example, if an image contains people, Computer Vision can use a domain model for celebrities to determine if the people detected in the image are known celebrities.
+Use domain models to detect and identify domain-specific content in an image, such as celebrities and landmarks. For example, if an image contains people, Vision can use a domain model for celebrities to determine if the people detected in the image are known celebrities.
Let's continue using Satya's image as an example.
df = spark.createDataFrame([
], ["url", ]) celeb = (RecognizeDomainSpecificContent()
- .setLinkedService(cognitive_service_name)
+ .setLinkedService(ai_service_name)
.setModel("celebrities") .setImageUrlCol("url") .setOutputCol("celebs"))
synapse-analytics Tutorial Configure Cognitive Services Synapse https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/machine-learning/tutorial-configure-cognitive-services-synapse.md
Title: 'Quickstart: Prerequisites for Cognitive Services in Azure Synapse Analytics'
-description: Learn how to configure the prerequisites for using Cognitive Services in Azure Synapse.
+ Title: 'Quickstart: Prerequisites for Azure AI services in Azure Synapse Analytics'
+description: Learn how to configure the prerequisites for using Azure AI services in Azure Synapse.
-# Quickstart: Configure prerequisites for using Cognitive Services in Azure Synapse Analytics
+# Quickstart: Configure prerequisites for using Azure AI services in Azure Synapse Analytics
-In this quickstart, you'll learn how set up the prerequisites for securely using Azure Cognitive Services in Azure Synapse Analytics. Linking these Azure Cognitive Services allows you to leverage Azure Cognitive Services from various experiences in Synapse.
+In this quickstart, you'll learn how to set up the prerequisites for securely using Azure AI services in Azure Synapse Analytics. Linking these Azure AI services allows you to leverage Azure AI services from various experiences in Synapse.
This quickstart covers: > [!div class="checklist"]
-> - Create a Cognitive Services resource like Text Analytics or Anomaly Detector.
-> - Store an authentication key to Cognitive Services resources as secrets in Azure Key Vault, and configure access for an Azure Synapse Analytics workspace.
+> - Create an Azure AI services resource like Text Analytics or Anomaly Detector.
+> - Store an authentication key to Azure AI services resources as secrets in Azure Key Vault, and configure access for an Azure Synapse Analytics workspace.
> - Create an Azure Key Vault linked service in your Azure Synapse Analytics workspace.
-> - Create an Azure Cognitive Services linked service in your Azure Synapse Analytics workspace.
+> - Create an Azure AI services linked service in your Azure Synapse Analytics workspace.
If you don't have an Azure subscription, [create a free account before you begin](https://azure.microsoft.com/free/).
If you don't have an Azure subscription, [create a free account before you begin
Sign in to the [Azure portal](https://portal.azure.com/).
-## Create a Cognitive Services resource
+## Create an Azure AI services resource
-[Azure Cognitive Services](../../cognitive-services/index.yml) includes many types of services. Follow services are examples used in the Azure Synapse tutorials.
+[Azure AI services](../../ai-services/index.yml) includes many types of services. The following services are examples used in the Azure Synapse tutorials.
You can create a [Text Analytics](https://portal.azure.com/#create/Microsoft.CognitiveServicesTextAnalytics) resource in the Azure portal:
You can create an [Anomaly Detector](https://portal.azure.com/#create/Microsoft.
![Screenshot that shows Anomaly Detector in the portal, with the Create button.](media/tutorial-configure-cognitive-services/tutorial-configure-cognitive-services-00a.png)
-You can create an [Form Recognizer](https://portal.azure.com/#create/Microsoft.CognitiveServicesFormRecognizer) resource in the Azure portal:
+You can create a [Form Recognizer](https://portal.azure.com/#create/Microsoft.CognitiveServicesFormRecognizer) resource (for Document Intelligence) in the Azure portal:
![Screenshot that shows Form Recognizer in the portal, with the Create button.](media/tutorial-configure-cognitive-services/tutorial-configure-form-recognizer.png)
You can create a [Speech](https://portal.azure.com/#create/Microsoft.CognitiveSe
![Screenshot that shows selections for adding an access policy.](media/tutorial-configure-cognitive-services/tutorial-configure-cognitive-services-00c.png)
-3. Go to your Cognitive Services resource. For example, go to **Anomaly Detector** > **Keys and Endpoint**. Then copy either of the two keys to the clipboard.
+3. Go to your Azure AI services resource. For example, go to **Anomaly Detector** > **Keys and Endpoint**. Then copy either of the two keys to the clipboard.
4. Go to **Key Vault** > **Secret** to create a new secret. Specify the name of the secret, and then paste the key from the previous step into the **Value** field. Finally, select **Create**. ![Screenshot that shows selections for creating a secret.](media/tutorial-configure-cognitive-services/tutorial-configure-cognitive-services-00d.png) > [!IMPORTANT]
- > Make sure you remember or note down this secret name. You'll use it later when you create the Azure Cognitive Services linked service.
+ > Make sure you remember or note down this secret name. You'll use it later when you create the Azure AI services linked service.
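Later, in a Synapse notebook, you can retrieve that secret through the Key Vault linked service without exposing the key. A short sketch, with placeholder names:

```python
from notebookutils import mssparkutils

# Retrieve the Azure AI services key stored in Key Vault.
# All three names below are placeholders for your own resources.
ai_service_key = mssparkutils.credentials.getSecret(
    "<your-key-vault-name>",
    "<your-secret-name>",
    "<your-key-vault-linked-service-name>")
```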
## Create an Azure Key Vault linked service in Azure Synapse
You can create a [Speech](https://portal.azure.com/#create/Microsoft.CognitiveSe
![Screenshot that shows Azure Key Vault as a new linked service.](media/tutorial-configure-cognitive-services/tutorial-configure-cognitive-services-00e.png)
-## Create an Azure Cognitive Service linked service in Azure Synapse
+## Create an Azure AI services linked service in Azure Synapse
1. Open your workspace in Synapse Studio.
-2. Go to **Manage** > **Linked Services**. Create an **Azure Cognitive Services** linked service by pointing to the Cognitive Service that you just created.
+2. Go to **Manage** > **Linked Services**. Create an **Azure Cognitive Services** linked service by pointing to the Azure AI service that you just created.
3. Verify the connection by selecting the **Test connection** button. If the connection is green, select **Create** and then select **Publish all** to save your change.
-![Screenshot that shows Azure Cognitive Service as a new linked service.](media/tutorial-configure-cognitive-services/tutorial-configure-cognitive-services-linked-service.png)
+![Screenshot that shows Azure AI services as a new linked service.](media/tutorial-configure-cognitive-services/tutorial-configure-cognitive-services-linked-service.png)
-You're now ready to continue with one of the tutorials for using the Azure Cognitive Services experience in Synapse Studio.
+You're now ready to continue with one of the tutorials for using the Azure AI services experience in Synapse Studio.
## Next steps -- [Tutorial: Sentiment analysis with Azure Cognitive Services](tutorial-cognitive-services-sentiment.md)-- [Tutorial: Anomaly detection with Azure Cognitive Services](tutorial-cognitive-services-sentiment.md)
+- [Tutorial: Sentiment analysis with Azure AI services](tutorial-cognitive-services-sentiment.md)
+- [Tutorial: Anomaly detection with Azure AI services](tutorial-cognitive-services-anomaly.md)
- [Tutorial: Machine learning model scoring in Azure Synapse dedicated SQL Pools](tutorial-sql-pool-model-scoring-wizard.md). - [Machine Learning capabilities in Azure Synapse Analytics](what-is-machine-learning.md)
synapse-analytics Tutorial Form Recognizer Use Mmlspark https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/machine-learning/tutorial-form-recognizer-use-mmlspark.md
Title: 'Tutorial: Form Recognizer with Azure Applied AI Service'
-description: Learn how to use form recognizer in Azure Synapse Analytics.
+ Title: 'Tutorial: Document Intelligence with Azure AI services'
+description: Learn how to use Azure AI Document Intelligence in Azure Synapse Analytics.
-# Tutorial: Form Recognizer with Applied AI Service
+# Tutorial: Document Intelligence with Azure AI services
-[Azure Form Recognizer](../../applied-ai-services/form-recognizer/index.yml) is an [Azure Applied AI Service](../../applied-ai-services/index.yml) that enables you to build automated data processing application using machine learning technology. In this tutorial, you'll learn how to easily enrich your data in Azure Synapse Analytics. You'll use [Form Recognizer](../../applied-ai-services/form-recognizer/index.yml) to analyze your forms and documents, extracts text and data, and returns a structured JSON output. You quickly get accurate results that are tailored to your specific content without excessive manual intervention or extensive data science expertise.
+[Azure AI Document Intelligence](../../ai-services/document-intelligence/index.yml) is an [Azure AI service](../../ai-services/index.yml) that enables you to build automated data processing applications using machine learning technology. In this tutorial, you'll learn how to easily enrich your data in Azure Synapse Analytics. You'll use [Document Intelligence](../../ai-services/document-intelligence/index.yml) to analyze your forms and documents, extract text and data, and return structured JSON output. You quickly get accurate results that are tailored to your specific content without excessive manual intervention or extensive data science expertise.
-This tutorial demonstrates using form recognizer with [SynapseML](https://github.com/microsoft/SynapseML) to:
+This tutorial demonstrates using Document Intelligence with [SynapseML](https://github.com/microsoft/SynapseML) to:
> [!div class="checklist"] > - Extract text and layout from a given document
If you don't have an Azure subscription, [create a free account before you begin
- [Azure Synapse Analytics workspace](../get-started-create-workspace.md) with an Azure Data Lake Storage Gen2 storage account configured as the default storage. You need to be the *Storage Blob Data Contributor* of the Data Lake Storage Gen2 file system that you work with. - Spark pool in your Azure Synapse Analytics workspace. For details, see [Create a Spark pool in Azure Synapse](../quickstart-create-sql-pool-studio.md).-- Pre-configuration steps described in the tutorial [Configure Cognitive Services in Azure Synapse](tutorial-configure-cognitive-services-synapse.md).
+- Pre-configuration steps described in the tutorial [Configure Azure AI services in Azure Synapse](tutorial-configure-cognitive-services-synapse.md).
## Get started
import synapse.ml
from synapse.ml.cognitive import * ```
-## Configure form recognizer
+## Configure Document Intelligence
-Use the linked form recognizer you configured in the [pre-configuration steps](tutorial-configure-cognitive-services-synapse.md) .
+Use the linked Document Intelligence service you configured in the [pre-configuration steps](tutorial-configure-cognitive-services-synapse.md).
```python
-cognitive_service_name = "<Your linked service for form recognizer>"
+ai_service_name = "<Your linked service for Document Intelligence>"
```
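
For reference, here's a minimal sketch of how that linked service name is typically passed to a SynapseML analyzer and how the returned JSON can be unpacked. The document URL and the nested column path (`layout.analyzeResult.readResults`) are illustrative assumptions rather than values from this tutorial, and the output schema can vary by Document Intelligence API version.

```python
from pyspark.sql.functions import col, flatten
from synapse.ml.cognitive import *  # mirrors the wildcard import from the Get started step

# Hypothetical input: one row per document URL to analyze (placeholder URL)
docs_df = spark.createDataFrame(
    [("https://example.com/sample-invoice.pdf",)],
    ["source"],
)

# Wire the analyzer to the linked Document Intelligence service configured above
analyze_layout = (AnalyzeLayout()
    .setLinkedService(ai_service_name)
    .setImageUrlCol("source")      # column holding the document URLs
    .setOutputCol("layout")        # nested JSON result lands in this column
    .setConcurrency(5))

# Flatten the per-page line results into a single array of extracted text.
# The schema path below is an assumption and may differ between API versions.
layout_df = (analyze_layout.transform(docs_df)
    .withColumn("lines", flatten(col("layout.analyzeResult.readResults.lines")))
    .select("source", col("lines.text").alias("extracted_text")))

display(layout_df)
```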
imageDf = spark.createDataFrame([
], ["source",]) analyzeLayout = (AnalyzeLayout()
- .setLinkedService(cognitive_service_name)
+ .setLinkedService(ai_service_name)
.setImageUrlCol("source") .setOutputCol("layout") .setConcurrency(5))
imageDf2 = spark.createDataFrame([
], ["image",]) analyzeReceipts = (AnalyzeReceipts()
- .setLinkedService(cognitive_service_name)
+ .setLinkedService(ai_service_name)
.setImageUrlCol("image") .setOutputCol("parsed_document") .setConcurrency(5))
imageDf3 = spark.createDataFrame([
], ["source",]) analyzeBusinessCards = (AnalyzeBusinessCards()
- .setLinkedService(cognitive_service_name)
+ .setLinkedService(ai_service_name)
.setImageUrlCol("source") .setOutputCol("businessCards") .setConcurrency(5))
imageDf4 = spark.createDataFrame([
], ["source",]) analyzeInvoices = (AnalyzeInvoices()
- .setLinkedService(cognitive_service_name)
+ .setLinkedService(ai_service_name)
.setImageUrlCol("source") .setOutputCol("invoices") .setConcurrency(5))
imageDf5 = spark.createDataFrame([
], ["source",]) analyzeIDDocuments = (AnalyzeIDDocuments()
- .setLinkedService(cognitive_service_name)
+ .setLinkedService(ai_service_name)
.setImageUrlCol("source") .setOutputCol("ids") .setConcurrency(5))
To ensure the Spark instance is shut down, end any connected sessions(notebooks)
## Next steps
-* [Train a custom form recognizer model](../../applied-ai-services/form-recognizer/label-tool.md)
+* [Train a custom Document Intelligence model](../../ai-services/document-intelligence/label-tool.md)
* [Check out Synapse sample notebooks](https://github.com/Azure-Samples/Synapse/tree/main/MachineLearning) * [SynapseML GitHub Repo](https://github.com/microsoft/SynapseML)
synapse-analytics Tutorial Text Analytics Use Mmlspark https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/machine-learning/tutorial-text-analytics-use-mmlspark.md
Title: 'Tutorial: Text Analytics with Cognitive Service'
+ Title: 'Tutorial: Text Analytics with Azure AI services'
description: Learn how to use text analytics in Azure Synapse Analytics.
-# Tutorial: Text Analytics with Cognitive Service
+# Tutorial: Text Analytics with Azure AI services
-[Text Analytics](../../cognitive-services/text-analytics/index.yml) is an [Azure Cognitive Service](../../cognitive-services/index.yml) that enables you to perform text mining and text analysis with Natural Language Processing (NLP) features. In this tutorial, you'll learn how to use [Text Analytics](../../cognitive-services/text-analytics/index.yml) to analyze unstructured text on Azure Synapse Analytics.
+[Text Analytics](../../ai-services/language-service/index.yml) is an [Azure AI service](../../ai-services/index.yml) that enables you to perform text mining and text analysis with Natural Language Processing (NLP) features. In this tutorial, you'll learn how to use [Text Analytics](../../ai-services/language-service/index.yml) to analyze unstructured text on Azure Synapse Analytics.
This tutorial demonstrates using text analytics with [SynapseML](https://github.com/microsoft/SynapseML) to:
If you don't have an Azure subscription, [create a free account before you begin
- [Azure Synapse Analytics workspace](../get-started-create-workspace.md) with an Azure Data Lake Storage Gen2 storage account configured as the default storage. You need to be the *Storage Blob Data Contributor* of the Data Lake Storage Gen2 file system that you work with. - Spark pool in your Azure Synapse Analytics workspace. For details, see [Create a Spark pool in Azure Synapse](../quickstart-create-sql-pool-studio.md).-- Pre-configuration steps described in the tutorial [Configure Cognitive Services in Azure Synapse](tutorial-configure-cognitive-services-synapse.md).
+- Pre-configuration steps described in the tutorial [Configure Azure AI services in Azure Synapse](tutorial-configure-cognitive-services-synapse.md).
## Get started
from pyspark.sql.functions import col
Use the linked text analytics you configured in the [pre-configuration steps](tutorial-configure-cognitive-services-synapse.md). ```python
-cognitive_service_name = "<Your linked service for text analytics>"
+ai_service_name = "<Your linked service for text analytics>"
``` ## Text Sentiment
-The Text Sentiment Analysis provides a way for detecting the sentiment labels (such as "negative", "neutral" and "positive") and confidence scores at the sentence and document-level. See the [Supported languages in Text Analytics API](../../cognitive-services/text-analytics/language-support.md?tabs=sentiment-analysis) for the list of enabled languages.
+Text Sentiment Analysis provides a way to detect sentiment labels (such as "negative", "neutral", and "positive") and confidence scores at the sentence and document level. See the [Supported languages in Text Analytics API](../../ai-services/language-service/language-detection/overview.md?tabs=sentiment-analysis) for the list of enabled languages.
```python
The Text Sentiment Analysis provides a way for detecting the sentiment labels (s
df = spark.createDataFrame([ ("I am so happy today, its sunny!", "en-US"), ("I am frustrated by this rush hour traffic", "en-US"),
- ("The cognitive services on spark aint bad", "en-US"),
+ ("The Azure AI services on spark aint bad", "en-US"),
], ["text", "language"]) # Run the Text Analytics service with options
display(results
||| |I am so happy today, its sunny!|positive| |I am frustrated by this rush hour traffic|negative|
-|The cognitive services on spark aint bad|positive|
+|The Azure AI services on spark aint bad|positive|
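
For reference, here's a compact, end-to-end sketch of the sentiment pipeline summarized above. It assumes the wildcard import from the Get started step, the `ai_service_name` variable from the configuration step, and SynapseML's `TextSentiment` transformer; the nested output path used to pull out the label is an assumption about the response shape and may differ across Language service API versions.

```python
from pyspark.sql.functions import col

# Sample input: text plus the language hint expected by the service
df = spark.createDataFrame([
    ("I am so happy today, its sunny!", "en-US"),
    ("I am frustrated by this rush hour traffic", "en-US"),
], ["text", "language"])

sentiment = (TextSentiment()
    .setLinkedService(ai_service_name)  # linked Text Analytics service
    .setTextCol("text")
    .setLanguageCol("language")
    .setOutputCol("sentiment")
    .setErrorCol("error"))

# Extract the document-level label from the nested response (path is an assumption)
display(sentiment.transform(df)
    .select("text", col("sentiment.document.sentiment").alias("sentiment_label")))
```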
## Language Detector
-The Language Detector evaluates text input for each document and returns language identifiers with a score that indicates the strength of the analysis. This capability is useful for content stores that collect arbitrary text, where language is unknown. See the [Supported languages in Text Analytics API](../../cognitive-services/text-analytics/language-support.md?tabs=language-detection) for the list of enabled languages.
+The Language Detector evaluates text input for each document and returns language identifiers with a score that indicates the strength of the analysis. This capability is useful for content stores that collect arbitrary text, where language is unknown. See the [Supported languages in Text Analytics API](../../ai-services/language-service/language-detection/overview.md?tabs=language-detection) for the list of enabled languages.
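
As a hedged sketch of what this looks like in practice (the tutorial's own snippet follows below), the example assumes SynapseML's `LanguageDetector` transformer, the `ai_service_name` variable from the configuration step, and a nested response path that may differ by API version:

```python
from pyspark.sql.functions import col

# Sample text in a few different languages
df = spark.createDataFrame([
    ("Hello world",),
    ("Bonjour tout le monde",),
    ("La carretera estaba atascada. Había mucho tráfico el día de ayer.",),
], ["text"])

language = (LanguageDetector()
    .setLinkedService(ai_service_name)  # linked Text Analytics service
    .setTextCol("text")
    .setOutputCol("language")
    .setErrorCol("error"))

# Pull the detected language name and confidence score out of the nested result
# (the response path below is an assumption about the API version in use)
display(language.transform(df).select(
    "text",
    col("language.document.detectedLanguage.name").alias("detected_language"),
    col("language.document.detectedLanguage.confidenceScore").alias("confidence")))
```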
```python # Create a dataframe that's tied to it's column names
display(language.transform(df))
## Entity Detector
-The Entity Detector returns a list of recognized entities with links to a well-known knowledge base. See the [Supported languages in Text Analytics API](../../cognitive-services/text-analytics/language-support.md?tabs=entity-linking) for the list of enabled languages.
+The Entity Detector returns a list of recognized entities with links to a well-known knowledge base. See the [Supported languages in Text Analytics API](../../ai-services/language-service/language-detection/overview.md?tabs=entity-linking) for the list of enabled languages.
```python df = spark.createDataFrame([
display(entity.transform(df).select("if", "text", col("replies").getItem("docume
## Key Phrase Extractor
-The Key Phrase Extraction evaluates unstructured text and returns a list of key phrases. This capability is useful if you need to quickly identify the main points in a collection of documents. See the [Supported languages in Text Analytics API](../../cognitive-services/text-analytics/language-support.md?tabs=key-phrase-extraction) for the list of enabled languages.
+The Key Phrase Extraction evaluates unstructured text and returns a list of key phrases. This capability is useful if you need to quickly identify the main points in a collection of documents. See the [Supported languages in Text Analytics API](../../ai-services/language-service/language-detection/overview.md?tabs=key-phrase-extraction) for the list of enabled languages.
```python df = spark.createDataFrame([
display(keyPhrase.transform(df).select("text", col("replies").getItem("document"
## Named Entity Recognition (NER)
-Named Entity Recognition (NER) is the ability to identify different entities in text and categorize them into pre-defined classes or types such as: person, location, event, product, and organization. See the [Supported languages in Text Analytics API](../../cognitive-services/text-analytics/language-support.md?tabs=named-entity-recognition) for the list of enabled languages.
+Named Entity Recognition (NER) is the ability to identify different entities in text and categorize them into pre-defined classes or types such as: person, location, event, product, and organization. See the [Supported languages in Text Analytics API](../../ai-services/language-service/language-detection/overview.md?tabs=named-entity-recognition) for the list of enabled languages.
```python df = spark.createDataFrame([
display(ner.transform(df).select("text", col("replies").getItem("document").getI
## Personally Identifiable Information (PII) V3.1
-The PII feature is part of NER and it can identify and redact sensitive entities in text that are associated with an individual person such as: phone number, email address, mailing address, passport number. See the [Supported languages in Text Analytics API](../../cognitive-services/text-analytics/language-support.md?tabs=pii) for the list of enabled languages.
+The PII feature is part of NER, and it can identify and redact sensitive entities in text that are associated with an individual person, such as phone numbers, email addresses, mailing addresses, and passport numbers. See the [Supported languages in Text Analytics API](../../ai-services/language-service/language-detection/overview.md?tabs=pii) for the list of enabled languages.
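
For reference, a minimal sketch of redacting PII with SynapseML is shown below. It assumes the `PII` transformer, the `ai_service_name` variable from the configuration step, and a nested `redactedText` field in the response; these naming details are assumptions and may vary by API version (the tutorial's own snippet follows).

```python
from pyspark.sql.functions import col

# Sample rows containing personal data to detect and redact
df = spark.createDataFrame([
    ("1", "en", "My SSN is 859-98-0987"),
    ("2", "en", "Is 998.214.865-68 your Brazilian CPF number?"),
    ("3", "en", "My phone number is 555-555-5555"),
], ["id", "language", "text"])

pii = (PII()
    .setLinkedService(ai_service_name)  # linked Text Analytics service
    .setLanguageCol("language")
    .setTextCol("text")
    .setOutputCol("response"))

# Show the redacted text alongside the original
# (the nested path below is an assumption about the response shape)
display(pii.transform(df).select(
    "text",
    col("response.document.redactedText").alias("redacted_text")))
```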
```python df = spark.createDataFrame([
synapse-analytics Tutorial Translator Use Mmlspark https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/machine-learning/tutorial-translator-use-mmlspark.md
Title: 'Tutorial: Translator with Cognitive Service'
+ Title: 'Tutorial: Translator with Azure AI services'
description: Learn how to use translator in Azure Synapse Analytics.
-# Tutorial: Translator with Cognitive Service
+# Tutorial: Translator with Azure AI services
-[Translator](../../cognitive-services/Translator/index.yml) is an [Azure Cognitive Service](../../cognitive-services/index.yml) that enables you to perform language translation and other language-related operations. In this tutorial, you'll learn how to use [Translator](../../cognitive-services/Translator/index.yml) to build intelligent, multi-language solutions on Azure Synapse Analytics.
+[Translator](../../ai-services/Translator/index.yml) is an [Azure AI service](../../ai-services/index.yml) that enables you to perform language translation and other language-related operations. In this tutorial, you'll learn how to use [Translator](../../ai-services/Translator/index.yml) to build intelligent, multi-language solutions on Azure Synapse Analytics.
This tutorial demonstrates using translator with [MMLSpark](https://github.com/Azure/mmlspark) to:
If you don't have an Azure subscription, [create a free account before you begin
- [Azure Synapse Analytics workspace](../get-started-create-workspace.md) with an Azure Data Lake Storage Gen2 storage account configured as the default storage. You need to be the *Storage Blob Data Contributor* of the Data Lake Storage Gen2 file system that you work with. - Spark pool in your Azure Synapse Analytics workspace. For details, see [Create a Spark pool in Azure Synapse](../quickstart-create-sql-pool-studio.md).-- Pre-configuration steps described in the tutorial [Configure Cognitive Services in Azure Synapse](tutorial-configure-cognitive-services-synapse.md).
+- Pre-configuration steps described in the tutorial [Configure Azure AI services in Azure Synapse](tutorial-configure-cognitive-services-synapse.md).
## Get started Open Synapse Studio and create a new notebook. To get started, import [MMLSpark](https://github.com/Azure/mmlspark).
from pyspark.sql.functions import col, flatten
Use the linked translator you configured in the [pre-configuration steps](tutorial-configure-cognitive-services-synapse.md). ```python
-cognitive_service_name = "<Your linked service for translator>"
+ai_service_name = "<Your linked service for translator>"
``` ## Translate Text
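
Before the tutorial's own snippet below, here's a hedged sketch of translating a column of text into multiple target languages with the `Translate` transformer. The sample sentences and the nested output path used to flatten the result are illustrative assumptions.

```python
from pyspark.sql.functions import col, flatten

# Sample input: each row holds an array of strings to translate
df = spark.createDataFrame([
    (["Hello, what is your name?", "Bye"],),
], ["text"])

translate = (Translate()
    .setLinkedService(ai_service_name)  # linked Translator service
    .setTextCol("text")
    .setToLanguage(["zh-Hans", "fr"])   # translate into two target languages
    .setOutputCol("translation")
    .setConcurrency(5))

# Flatten the nested translations into a simple array of translated strings
# (the response path below is an assumption and may vary by API version)
result = (translate.transform(df)
    .withColumn("translation", flatten(col("translation.translations")))
    .withColumn("translation", col("translation.text")))

display(result.select("text", "translation"))
```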
df = spark.createDataFrame([
], ["text",]) translate = (Translate()
- .setLinkedService(cognitive_service_name)
+ .setLinkedService(ai_service_name)
.setTextCol("text") .setToLanguage(["zh-Hans", "fr"]) .setOutputCol("translation")
transliterateDf = spark.createDataFrame([
], ["text",]) transliterate = (Transliterate()
- .setLinkedService(cognitive_service_name)
+ .setLinkedService(ai_service_name)
.setLanguage("ja") .setFromScript("Jpan") .setToScript("Latn")
detectDf = spark.createDataFrame([
], ["text",]) detect = (Detect()
- .setLinkedService(cognitive_service_name)
+ .setLinkedService(ai_service_name)
.setTextCol("text") .setOutputCol("result"))
bsDf = spark.createDataFrame([
], ["text",]) breakSentence = (BreakSentence()
- .setLinkedService(cognitive_service_name)
+ .setLinkedService(ai_service_name)
.setTextCol("text") .setOutputCol("result"))
dictDf = spark.createDataFrame([
], ["text",]) dictionaryLookup = (DictionaryLookup()
- .setLinkedService(cognitive_service_name)
+ .setLinkedService(ai_service_name)
.setFromLanguage("en") .setToLanguage("es") .setTextCol("text")
dictDf = spark.createDataFrame([
], ["textAndTranslation",]) dictionaryExamples = (DictionaryExamples()
- .setLinkedService(cognitive_service_name)
+ .setLinkedService(ai_service_name)
.setFromLanguage("en") .setToLanguage("es") .setTextAndTranslationCol("textAndTranslation")
synapse-analytics Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/policy-reference.md
Title: Built-in policy definitions description: Lists Azure Policy built-in policy definitions for Azure Synapse Analytics. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 07/06/2023 Last updated : 07/18/2023
synapse-analytics Apache Spark Machine Learning Training https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/spark/apache-spark-machine-learning-training.md
When using automated ML within Azure Synapse Analytics, you can leverage the dee
> You can learn more about creating an Azure Machine Learning automated ML experiment by following this [tutorial](./spark/../apache-spark-azure-machine-learning-tutorial.md). ## Azure Cognitive Services
-[Azure Cognitive Services](../../cognitive-services/what-are-cognitive-services.md) provides machine learning capabilities to solve general problems such as analyzing text for emotional sentiment or analyzing images to recognize objects or faces. You don't need special machine learning or data science knowledge to use these services. A Cognitive Service provides part or all of the components in a machine learning solution: data, algorithm, and trained model. These services are meant to require general knowledge about your data without needing experience with machine learning or data science. You can leverage these pre-trained Cognitive Services automatically within Azure Synapse Analytics.
+[Azure Cognitive Services](../../ai-services/what-are-ai-services.md) provides machine learning capabilities to solve general problems such as analyzing text for emotional sentiment or analyzing images to recognize objects or faces. You don't need special machine learning or data science knowledge to use these services. A Cognitive Service provides part or all of the components in a machine learning solution: data, algorithm, and trained model. These services are meant to require general knowledge about your data without needing experience with machine learning or data science. You can leverage these pre-trained Cognitive Services automatically within Azure Synapse Analytics.
## Next steps This article provides an overview of the various options to train machine learning models within Apache Spark pools in Azure Synapse Analytics. You can learn more about model training by following the tutorial below:
synapse-analytics Sql Data Warehouse Videos https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql-data-warehouse/sql-data-warehouse-videos.md
Previously updated : 02/15/2019 Last updated : 07/18/2023 - # Azure Synapse Analytics - dedicated SQL pool (formerly SQL DW) Videos
To get started, select the overview video below to learn about the new updates t
[:::image type="content" source="./mediCWgxPnVY&list=PLXtHYVsvn_b_v4EKljH6dGo9qJ7JjItWL&index=2) Additional videos describing specific functionalities can be viewed on: -- [YouTube: Advanced Analytics with Azure](https://www.youtube.com/playlist?list=PLLasX02E8BPClOvjNV9bXk3LUuf3nQiS2)-- [Azure Videos](https://azure.microsoft.com/resources/videos/index/?services=sql-data-warehouse)-
+- [Azure Synapse demo videos](https://azure.microsoft.com/resources/videos/synapse-analytics/)
synapse-analytics Whats New Archive https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/whats-new-archive.md
This section is an archive of guidance and sample project resources for Azure Sy
|**Month** | **Feature** | **Learn more**| |:-- |:-- | :-- |
-| June 2022 | **Azure Orbital analytics with Synapse Analytics** | We now offer an [Azure Orbital analytics sample solution](https://github.com/Azure/Azure-Orbital-Analytics-Samples) showing an end-to-end implementation of extracting, loading, transforming, and analyzing spaceborne data by using geospatial libraries and AI models with Azure Synapse Analytics. The sample solution also demonstrates how to integrate geospatial-specific [Azure Cognitive Services](../cognitive-services/index.yml) models, AI models from partners, and bring-your-own-data models. |
+| June 2022 | **Azure Orbital analytics with Synapse Analytics** | We now offer an [Azure Orbital analytics sample solution](https://github.com/Azure/Azure-Orbital-Analytics-Samples) showing an end-to-end implementation of extracting, loading, transforming, and analyzing spaceborne data by using geospatial libraries and AI models with Azure Synapse Analytics. The sample solution also demonstrates how to integrate geospatial-specific [Azure Cognitive Services](../ai-services/index.yml) models, AI models from partners, and bring-your-own-data models. |
| June 2022 | **Migration guides for Oracle** | A new Microsoft-authored migration guide for Oracle to Azure Synapse Analytics is now available. [Design and performance for Oracle migrations](migration-guides/oracle/1-design-performance-migration.md). | | June 2022 | **Azure Synapse success by design** | The [Azure Synapse proof of concept playbook](./guidance/proof-of-concept-playbook-overview.md) provides a guide to scope, design, execute, and evaluate a proof of concept for SQL or Spark workloads. | | June 2022 | **Migration guides for Teradata** | A new Microsoft-authored migration guide for Teradata to Azure Synapse Analytics is now available. [Design and performance for Teradata migrations](migration-guides/teradat). |
What follows are the previous format of monthly news updates for Synapse Analyti
### General
-* **Azure Orbital analytics with Synapse Analytics** - We now offer an [Azure Orbital analytics sample solution](https://github.com/Azure/Azure-Orbital-Analytics-Samples) showing an end-to-end implementation of extracting, loading, transforming, and analyzing spaceborne data by using geospatial libraries and AI models with [Azure Synapse Analytics](overview-what-is.md). The sample solution also demonstrates how to integrate geospatial-specific [Azure Cognitive Services](../cognitive-services/index.yml) models, AI models from partners, and bring-your-own-data models.
+* **Azure Orbital analytics with Synapse Analytics** - We now offer an [Azure Orbital analytics sample solution](https://github.com/Azure/Azure-Orbital-Analytics-Samples) showing an end-to-end implementation of extracting, loading, transforming, and analyzing spaceborne data by using geospatial libraries and AI models with [Azure Synapse Analytics](overview-what-is.md). The sample solution also demonstrates how to integrate geospatial-specific [Azure Cognitive Services](../ai-services/index.yml) models, AI models from partners, and bring-your-own-data models.
* **Azure Synapse success by design** - Project success is no accident and requires careful planning and execution. The Synapse Analytics' Success by Design playbooks are now available. The [Azure Synapse proof of concept playbook](./guidance/proof-of-concept-playbook-overview.md) provides a guide to scope, design, execute, and evaluate a proof of concept for SQL or Spark workloads. These guides contain best practices from the most challenging and complex solution implementations incorporating Azure Synapse. To learn more about the Azure Synapse proof of concept playbook, read [Success by Design](./guidance/success-by-design-introduction.md).
synapse-analytics Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/whats-new.md
This section summarizes new guidance and sample project resources for Azure Syna
| September 2022 | **What is the difference between Synapse dedicated SQL pool (formerly SQL DW) and Serverless SQL pool?** | Understand dedicated vs serverless pools and their concurrency. Read more at [basic concepts of dedicated SQL pools and serverless SQL pools](https://techcommunity.microsoft.com/t5/azure-synapse-analytics-blog/understand-synapse-dedicated-sql-pool-formerly-sql-dw-and/ba-p/3594628).| | September 2022 | **Reading Delta Lake in dedicated SQL Pool** | [Sample script](https://github.com/microsoft/Azure_Synapse_Toolbox/tree/master/TSQL_Queries/Delta%20Lake) to import Delta Lake files directly into the dedicated SQL Pool and support features like time-travel. For an explanation, see [Reading Delta Lake in dedicated SQL Pool](https://techcommunity.microsoft.com/t5/azure-synapse-analytics-blog/reading-delta-lake-in-dedicated-sql-pool/ba-p/3571053).| | September 2022 | **Azure Synapse Customer Success Engineering blog series** | The new [Azure Synapse Customer Success Engineering blog series](https://aka.ms/synapsecseblog) launches with a detailed introduction to [Building the Lakehouse - Implementing a Data Lake Strategy with Azure Synapse](https://techcommunity.microsoft.com/t5/azure-synapse-analytics-blog/building-the-lakehouse-implementing-a-data-lake-strategy-with/ba-p/3612291).|
-| June 2022 | **Azure Orbital analytics with Synapse Analytics** | We now offer an [Azure Orbital analytics sample solution](https://github.com/Azure/Azure-Orbital-Analytics-Samples) showing an end-to-end implementation of extracting, loading, transforming, and analyzing spaceborne data by using geospatial libraries and AI models with Azure Synapse Analytics. The sample solution also demonstrates how to integrate geospatial-specific [Azure Cognitive Services](../cognitive-services/index.yml) models, AI models from partners, and bring-your-own-data models. |
+| June 2022 | **Azure Orbital analytics with Synapse Analytics** | We now offer an [Azure Orbital analytics sample solution](https://github.com/Azure/Azure-Orbital-Analytics-Samples) showing an end-to-end implementation of extracting, loading, transforming, and analyzing spaceborne data by using geospatial libraries and AI models with Azure Synapse Analytics. The sample solution also demonstrates how to integrate geospatial-specific [Azure Cognitive Services](../ai-services/index.yml) models, AI models from partners, and bring-your-own-data models. |
| June 2022 | **Migration guides for Oracle** | A new Microsoft-authored migration guide for Oracle to Azure Synapse Analytics is now available. [Design and performance for Oracle migrations](migration-guides/oracle/1-design-performance-migration.md). | | June 2022 | **Azure Synapse success by design** | The [Azure Synapse proof of concept playbook](./guidance/proof-of-concept-playbook-overview.md) provides a guide to scope, design, execute, and evaluate a proof of concept for SQL or Spark workloads. | | June 2022 | **Migration guides for Teradata** | A new Microsoft-authored migration guide for Teradata to Azure Synapse Analytics is now available. [Design and performance for Teradata migrations](migration-guides/teradat). |
virtual-desktop App Attach Azure Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/app-attach-azure-portal.md
host pool.
## Publish MSIX apps to an application group
-Next, you'll need to publish the apps to an application group. You'll need to do this for both desktop and remote app application groups.
+Next, you'll need to publish the apps to an application group. You must do this for both desktop and RemoteApp application groups.
To publish the apps:
To publish the apps:
2. Select the application group you want to publish the apps to. >[!NOTE]
- >MSIX applications can be delivered with MSIX app attach to both remote app and desktop application groups. When a MSIX package is assigned to a RemoteApp application group and Desktop application group from the same host pool the Desktop application group will be displayed in the feed.
+ >MSIX applications can be delivered with MSIX app attach to both RemoteApp and desktop application groups. When an MSIX package is assigned to a RemoteApp application group and a Desktop application group from the same host pool, the Desktop application group will be displayed in the feed.
3. Once you're in the application group, select the **Applications** tab. The **Applications** grid will display all existing apps within the application group.
virtual-desktop Authentication https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/authentication.md
Azure Virtual Desktop supports both NT LAN Manager (NTLM) and Kerberos for sessi
## In-session authentication
-Once you're connected to your remote app or desktop, you may be prompted for authentication inside the session. This section explains how to use credentials other than username and password in this scenario.
+Once you're connected to your RemoteApp or desktop, you may be prompted for authentication inside the session. This section explains how to use credentials other than username and password in this scenario.
### In-session passwordless authentication (preview)
virtual-desktop Autoscale Glossary https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/autoscale-glossary.md
The minimum percentage of hosts is the lowest percentage of all session hosts in
## Active user session
-A user session is considered "active" when the user signs in and connects to their remote app or desktop resource.
+A user session is considered "active" when the user signs in and connects to their RemoteApp or desktop resource.
## Disconnected user session
virtual-desktop Autoscale Scaling Plan https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/autoscale-scaling-plan.md
To use scaling plans, make sure you follow these guidelines:
## Assign the Desktop Virtualization Power On Off Contributor role with the Azure portal
-Before creating your first scaling plan, you'll need to assign the *Desktop Virtualization Power On Off Contributor* RBAC role with your Azure subscription as the assignable scope. Assigning this role at any level lower than your subscription, such as the resource group, host pool, or VM, will prevent autoscale from working properly. You'll need to add each Azure subscription as an assignable scope that contains host pools and session host VMs you want to use with autoscale. This role and assignment will allow Azure Virtual Desktop to manage the power state of any VMs in those subscriptions. It will also let the service apply actions on both host pools and VMs when there are no active user sessions.
+Before creating your first scaling plan, you'll need to assign the *Desktop Virtualization Power On Off Contributor* RBAC role to the Azure Virtual Desktop service principal with your Azure subscription as the assignable scope. Assigning this role at any level lower than your subscription, such as the resource group, host pool, or VM, will prevent autoscale from working properly. You'll need to add each Azure subscription as an assignable scope that contains host pools and session host VMs you want to use with autoscale. This role and assignment will allow Azure Virtual Desktop to manage the power state of any VMs in those subscriptions. It will also let the service apply actions on both host pools and VMs when there are no active user sessions.
To assign the *Desktop Virtualization Power On Off Contributor* role with the Azure portal to the Azure Virtual Desktop service principal on the subscription your host pool is deployed to:
Now that you've created your scaling plan, here are some things you can do:
- [Assign your scaling plan to new and existing host pools](autoscale-new-existing-host-pool.md) - [Enable diagnostics for your scaling plan](autoscale-diagnostics.md)
-If you'd like to learn more about terms used in this article, check out our [autoscale glossary](autoscale-glossary.md). For examples of how autoscale works, see [Autoscale example scenarios](autoscale-scenarios.md). You can also look at our [Autoscale FAQ](autoscale-faq.yml) if you have other questions.
+If you'd like to learn more about terms used in this article, check out our [autoscale glossary](autoscale-glossary.md). For examples of how autoscale works, see [Autoscale example scenarios](autoscale-scenarios.md). You can also look at our [Autoscale FAQ](autoscale-faq.yml) if you have other questions.
virtual-desktop Compare Remote Desktop Clients https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/compare-remote-desktop-clients.md
The following table compares the features of each Remote Desktop client when con
| Feature | Windows Desktop<br />&<br />Azure Virtual Desktop Store app | Remote Desktop app | Android or Chrome OS | iOS or iPadOS | macOS | Web | Description | |--|--|--|--|--|--|--|--| | Remote Desktop sessions | X | X | X | X | X | X | Desktop of a remote computer presented in a full screen or windowed mode. |
-| Integrated RemoteApp sessions | X | | | | X | | Individual remote apps integrated into the local desktop as if they are running locally. |
-| Immersive RemoteApp sessions | | X | X | X | | X | Individual remote apps presented in a window or maximized to a full screen. |
-| Multiple monitors | 16 monitor limit | | | | 16 monitor limit | | Lets the user run Remote Desktop or remote apps on all local monitors.<br /><br />Each monitor can have a maximum resolution of 8K, with the total resolution limited to 32K. These limits depend on factors such as session host specification and network connectivity. |
+| Integrated RemoteApp sessions | X | | | | X | | Individual applications integrated into the local desktop as if they are running locally. |
+| Immersive RemoteApp sessions | | X | X | X | | X | Individual applications presented in a window or maximized to a full screen. |
+| Multiple monitors | 16 monitor limit | | | | 16 monitor limit | | Enables the remote session to use all local monitors.<br /><br />Each monitor can have a maximum resolution of 8K, with the total resolution limited to 32K. These limits depend on factors such as session host specification and network connectivity. |
| Dynamic resolution | X | X | | | X | X | Resolution and orientation of local monitors is dynamically reflected in the remote session. If the client is running in windowed mode, the remote desktop is resized dynamically to the size of the client window. | | Smart sizing | X | X | | | X | | Remote Desktop in Windowed mode is dynamically scaled to the window's size. | | Localization | X | X | English only | X | | X | Client user interface is available in multiple languages. |
virtual-desktop Create Host Pool https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/create-host-pool.md
Here's how to create a host pool using the Azure portal.
| Host pool name | Enter a name for the host pool, for example *hostpool01*. | | Location | Select the Azure region where your host pool will be deployed. | | Validation environment | Select **Yes** to create a host pool that is used as a [validation environment](create-validation-host-pool.md).<br /><br />Select **No** (*default*) to create a host pool that isn't used as a validation environment. |
- | Preferred app group type | Select the preferred [application group type](environment-setup.md#app-groups) for this host pool from *Desktop* or *Remote App*. |
+ | Preferred app group type | Select the preferred [application group type](environment-setup.md#app-groups) for this host pool from *Desktop* or *RemoteApp*. |
| Host pool type | Select whether your host pool will be Personal or Pooled.<br /><br />If you select **Personal**, a new option will appear for **Assignment type**. Select either **Automatic** or **Direct**.<br /><br />If you select **Pooled**, two new options will appear for **Load balancing algorithm** and **Max session limit**.<br /><br />- For **Load balancing algorithm**, choose either **breadth-first** or **depth-first**, based on your usage pattern.<br /><br />- For **Max session limit**, enter the maximum number of users you want load-balanced to a single session host. | > [!TIP]
virtual-desktop Customize Feed For Virtual Desktop Users https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/customize-feed-for-virtual-desktop-users.md
To add or change a session host's friendly name, use the [Session Host - Update
You can change the display name for a published RemoteApp by setting the friendly name. By default, the friendly name is the same as the name of the RemoteApp program.
-To retrieve a list of published RemoteApps for an application group, run the following PowerShell cmdlet:
+To retrieve a list of published applications for an application group, run the following PowerShell cmdlet:
```powershell Get-AzWvdApplication -ResourceGroupName <resourcegroupname> -ApplicationGroupName <appgroupname>
virtual-desktop Insights Glossary https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/insights-glossary.md
The most urgent items that you need to take care of right away. If you don't add
## Time to connect
-Time to connect is the time between when a user opens a resource to start their session and when their desktop has loaded and is ready to use. For example, for RemoteApps, this is the time it takes to launch the application.
+Time to connect is the time between when a user opens a resource to start their session and when their desktop has loaded and is ready to use. For example, for a RemoteApp, this is the time it takes to launch the application.
Time to connect has two stages:
Time to connect has two stages:
When monitoring time to connect, keep in mind the following things: -- Time to connect is measured with the following checkpoints from Azure Virtual Desktop service diagnostics data. The checkpoints Insights uses to determine when the connection is established are different for a desktop versus a remote application scenario.
+- Time to connect is measured with the following checkpoints from Azure Virtual Desktop service diagnostics data. The checkpoints Insights uses to determine when the connection is established are different for a desktop versus a RemoteApp scenario.
- Begins: [WVDConnection](/azure/azure-monitor/reference/tables/wvdconnections) state = started - Ends: [WVDCheckpoints](/azure/azure-monitor/reference/tables/wvdcheckpoints) Name = ShellReady (desktops); Name = RdpShellAppExecuted (RemoteApp. For timing, consider the first app launch only)
-For example, Insights measures the time for a desktop experience to launch based on how long it takes to launch Windows Explorer. Insights also measures the time for a remote application to launch based on the time taken to launch the first instance of the shell app for a connection.
+For example, Insights measures the time for a desktop experience to launch based on how long it takes to launch Windows Explorer. Insights also measures the time for a RemoteApp to launch based on the time taken to launch the first instance of the shell app for a connection.
>[!NOTE]
->If a user launches more than one remote application, sometimes the shell app can execute multiple times during a single connection. For an accurate measurement of time to connect, you should only use the first execution checkpoint for each connection.
+>If a user launches more than one RemoteApp, sometimes the shell app can execute multiple times during a single connection. For an accurate measurement of time to connect, you should only use the first execution checkpoint for each connection.
- Establishing new sessions usually takes longer than reestablishing connections to existing sessions due to differences in the "logon" process for new and established connections.
virtual-desktop Manage App Groups Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/manage-app-groups-powershell.md
To create a RemoteApp group with PowerShell:
4. Run the following cmdlet to install the application based on `AppAlias`. `AppAlias` becomes visible when you run the output from step 3. ```powershell
- New-AzWvdApplication -AppAlias <appalias> -GroupName <appgroupname> -Name <remoteappname> -ResourceGroupName <resourcegroupname> -CommandLineSetting <DoNotAllow|Allow|Require>
+ New-AzWvdApplication -AppAlias <appalias> -GroupName <appgroupname> -Name <RemoteAppName> -ResourceGroupName <resourcegroupname> -CommandLineSetting <DoNotAllow|Allow|Require>
``` 5. (Optional) Run the following cmdlet to publish a new RemoteApp program to the application group created in step 1. ```powershell
- New-AzWvdApplication -GroupName <appgroupname> -Name <remoteappname> -ResourceGroupName <resourcegroupname> -Filepath <filepath> -IconPath <iconpath> -IconIndex <iconindex> -CommandLineSetting <DoNotAllow|Allow|Require>
+ New-AzWvdApplication -GroupName <appgroupname> -Name <RemoteAppName> -ResourceGroupName <resourcegroupname> -Filepath <filepath> -IconPath <iconpath> -IconIndex <iconindex> -CommandLineSetting <DoNotAllow|Allow|Require>
``` 6. To verify that the app was published, run the following cmdlet.
virtual-desktop Manage App Groups https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/manage-app-groups.md
The deployment process will do the following things for you:
- Register the application group, if you chose to do so. - Create a link to an Azure Resource Manager template based on your configuration that you can download and save for later.
-Once a user connects to a RemoteApp, any other RemoteApps that they connect to during the same session will be from the same session host.
+Once a user connects to a RemoteApp, any other RemoteApp that they connect to during the same session will be from the same session host.
>[!IMPORTANT] >You can only create 500 application groups for each Azure Active Directory tenant. We added this limit because of service limitations for retrieving feeds for our users. This limit doesn't apply to application groups created in Azure Virtual Desktop (classic).
virtual-desktop Multimedia Redirection Intro https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/multimedia-redirection-intro.md
Title: Understanding multimedia redirection on Azure Virtual Desktop - Azure
description: An overview of multimedia redirection on Azure Virtual Desktop. Previously updated : 04/07/2023 Last updated : 07/18/2023 # Understanding multimedia redirection for Azure Virtual Desktop
-Multimedia redirection (MMR) gives you smooth video playback while watching videos in a browser in Azure Virtual Desktop. Multimedia redirection redirects the media content from Azure Virtual Desktop to your local machine for faster processing and rendering. Both Microsoft Edge and Google Chrome support this feature.
+> [!IMPORTANT]
+> Call redirection is currently in PREVIEW.
+> See the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) for legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
-> [!NOTE]
-> Multimedia redirection on Azure Virtual Desktop is only available for the [Windows Desktop client, version 1.2.3916 or later](/windows-server/remote/remote-desktop-services/clients/windowsdesktop-whatsnew). on Windows 11, Windows 10, or Windows 10 IoT Enterprise devices.
+Multimedia redirection redirects media content from Azure Virtual Desktop to your local machine for faster processing and rendering. Both Microsoft Edge and Google Chrome support this feature when using the Windows Desktop client.
+
+Multimedia redirection has two key components:
+
+- Video playback redirection, which optimizes the video playback experience for streaming sites and websites with embedded videos like YouTube and Facebook. For more information about which sites are compatible with this feature, see [Video playback redirection](#video-playback-redirection).
+- Call redirection (preview), which optimizes audio calls for WebRTC-based calling apps. For more information about which sites are compatible with this feature, see [Call redirection](#call-redirection).
+
+Call redirection only affects the connection between the local client device and the telephony app server, as shown in the following diagram.
++
+Call redirection offloads WebRTC calls from session hosts to local client devices to reduce latency and improve call quality. However, after the connection is established, call quality becomes dependent on the website or app providers just as it would with a non-redirected call.
## Websites that work with multimedia redirection
-The following list shows websites that are known to work with MMR. MMR works with these sites by default.
+The following lists show websites that are known to work with multimedia redirection. Multimedia redirection works with these sites by default.
+
+### Video playback redirection
+
+The following sites work with video playback redirection:
:::row::: :::column span="":::
The following list shows websites that are known to work with MMR. MMR works wit
:::column-end::: :::row-end:::
-Microsoft Teams live events aren't media-optimized for Azure Virtual Desktop and Windows 365 when using the native Teams app. However, if you use Teams live events with a supported browser, MMR is a workaround that provides smoother Teams live events playback on Azure Virtual Desktop. MMR supports Enterprise Content Delivery Network (ECDN) for Teams live events.
+### Call redirection
+
+The following websites work with call redirection:
+
+- [WebRTC Sample Site](https://webrtc.github.io/samples)
+
+Microsoft Teams live events aren't media-optimized for Azure Virtual Desktop and Windows 365 when using the native Teams app. However, if you use Teams live events with a browser that supports both live events and multimedia redirection, multimedia redirection provides a workaround for smoother live events playback on Azure Virtual Desktop. Multimedia redirection supports Enterprise Content Delivery Network (ECDN) for Teams live events.
-### The multimedia redirection status icon
+### Check if multimedia redirection is active
To quickly tell if multimedia redirection is active in your browser, we've added the following icon states: | Icon State | Definition | |--|--|
-| :::image type="content" source="./media/mmr-extension-unsupported.png" alt-text="The MMR extension icon greyed out, indicating that the website can't be redirected or the extension isn't loading."::: | A greyed out icon means that multimedia content on the website can't be redirected or the extension isn't loading. |
-| :::image type="content" source="./media/mmr-extension-disconnect.png" alt-text="The MMR extension icon with a red square with an x that indicates the client can't connect to multimedia redirection."::: | The red square with an "X" inside of it means that the client can't connect to multimedia redirection. You may need to uninstall and reinstall the extension, then try again. |
-| :::image type="content" source="./media/mmr-extension-supported.png" alt-text="The MMR extension icon with no status applied."::: | The default icon appearance with no status applied. This icon state means that multimedia content on the website can be redirected and is ready to use. |
-| :::image type="content" source="./media/mmr-extension-playback.png" alt-text="The MMR extension icon with a green square with a play button icon inside of it, indicating that multimedia redirection is working."::: | The green square with a play button icon inside of it means that the extension is currently redirecting video playback. |
-| :::image type="content" source="./media/mmr-extension-webrtc.png" alt-text="The MMR extension icon with a green square with telephone icon inside of it, indicating that multimedia redirection is working."::: | The green square with a phone icon inside of it means that the extension is currently redirecting a WebRTC call. |
+| :::image type="content" source="./media/mmr-extension-unsupported.png" alt-text="The multimedia redirection extension icon greyed out, indicating that the website can't be redirected or the extension isn't loading."::: | A greyed out icon means that multimedia content on the website can't be redirected or the extension isn't loading. |
+| :::image type="content" source="./media/mmr-extension-disconnect.png" alt-text="The multimedia redirection extension icon with a red square with an x that indicates the client can't connect to multimedia redirection."::: | The red square with an "X" inside of it means that the client can't connect to multimedia redirection. You may need to uninstall and reinstall the extension, then try again. |
+| :::image type="content" source="./media/mmr-extension-supported.png" alt-text="The multimedia redirection extension icon with no status applied."::: | The default icon appearance with no status applied. This icon state means that multimedia content on the website can be redirected and is ready to use. |
+| :::image type="content" source="./media/mmr-extension-playback.png" alt-text="The multimedia redirection extension icon with a green square with a play button icon inside of it, indicating that multimedia redirection is working."::: | The green square with a play button icon inside of it means that the extension is currently redirecting video playback. |
+| :::image type="content" source="./media/mmr-extension-webrtc.png" alt-text="The multimedia redirection extension icon with a green square with telephone icon inside of it, indicating that multimedia redirection is working."::: | The green square with a phone icon inside of it means that the extension is currently redirecting a WebRTC call. This icon also appears when both video playback and calls are being redirected at the same time. |
-Selecting the icon in your browser will display a pop-up menu where it lists the features supported on the current page, you can select to enable or disable multimedia redirection on all websites, and collect logs. It also lists the version numbers for each component of the service.
+Selecting the icon in your browser displays a pop-up menu that lists the features supported on the current page. From the menu, you can enable or disable video playback redirection and call redirection on all websites, and collect logs. It also lists the version numbers for each component of multimedia redirection.
You can use the icon to check the status of the extension by following the directions in [Check the extension status](multimedia-redirection.md#check-the-extension-status).
virtual-desktop Multimedia Redirection https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/multimedia-redirection.md
Title: Use multimedia redirection on Azure Virtual Desktop - Azure
description: How to use multimedia redirection on Azure Virtual Desktop. Previously updated : 05/10/2023 Last updated : 07/18/2023 # Use multimedia redirection on Azure Virtual Desktop
-This article will show you how to use multimedia redirection (MMR) for Azure Virtual Desktop with Microsoft Edge or Google Chrome browsers. For more information about how multimedia redirection works, see [Understanding multimedia redirection for Azure Virtual Desktop](multimedia-redirection-intro.md).
+> [!IMPORTANT]
+> Multimedia redirection call redirection is currently in PREVIEW.
+> See the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) for legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
+
+This article will show you how to use multimedia redirection for Azure Virtual Desktop with Microsoft Edge or Google Chrome browsers. For more information about how multimedia redirection works, see [Understanding multimedia redirection for Azure Virtual Desktop](multimedia-redirection-intro.md).
## Prerequisites
Before you can use multimedia redirection on Azure Virtual Desktop, you'll need
- An Azure Virtual Desktop deployment. - Microsoft Edge or Google Chrome installed on your session hosts.-- Windows Desktop client, version 1.2.3916 or later on Windows 11, Windows 10, or Windows 10 IoT Enterprise devices. This includes the multimedia redirection plugin (`C:\Program Files\Remote Desktop\MsMmrDVCPlugin.dll`), which is required on the client device. Your device must meet the [hardware requirements for Teams on a Windows PC](/microsoftteams/hardware-requirements-for-the-teams-app#hardware-requirements-for-teams-on-a-windows-pc/).
+- Windows Desktop client:
+ - To use video playback redirection, you must install the [Windows Desktop client, version 1.2.3916 or later](/windows-server/remote/remote-desktop-services/clients/windowsdesktop-whatsnew); earlier versions of the client aren't compatible with this feature.
+
+ - To use call redirection, you must install the Windows Desktop client, version 1.2.4237 or later with [Insider releases enabled](./users/client-features-windows.md#enable-windows-insider-releases).
+ - Microsoft Visual C++ Redistributable 2015-2022, version 14.32.31332.0 or later installed on your session hosts and Windows client devices. You can download the latest version from [Microsoft Visual C++ Redistributable latest supported downloads](/cpp/windows/latest-supported-vc-redist).
+- Your device must meet the [hardware requirements for Teams on a Windows PC](/microsoftteams/hardware-requirements-for-the-teams-app#hardware-requirements-for-teams-on-a-windows-pc/).
+
+> [!NOTE]
+> Multimedia redirection isn't supported on Azure Virtual Desktop for Microsoft 365 Government (GCC), GCC-High environments, and Microsoft 365 DoD.
+ ## Install the multimedia redirection extension
-For multimedia redirection to work, there are two parts to install on your session hosts: the host component and the browser extension for Edge or Chrome. You install the host component and browser extension from an MSI file, and you can also get and install the browser extension from Microsoft Edge Add-ons or the Chrome Web Store, depending on which browser you're using.
+For multimedia redirection to work, there are two parts to install on your session hosts: the host component and the browser extension for Edge or Chrome. Both are installed from a single MSI file. You can also get and install the browser extension separately from Microsoft Edge Add-ons or the Chrome Web Store.
-### Install the host component
+### Install the host component and browser extension from an MSI file
To install the host component on your session hosts, you can install the MSI manually on each session host or use your enterprise deployment tool with `msiexec`. To install the MSI manually, you'll need to: 1. Sign in to a session host as a local administrator.
-1. Download the [MMR host MSI installer](https://aka.ms/avdmmr/msi).
+1. Download the [multimedia redirection host MSI installer](https://aka.ms/avdmmr/msi).
1. Open the file that you downloaded to run the setup wizard.
-1. Follow the prompts. Once it's completed, select **Finish**.
+1. Follow the prompts. Once it's finished installing, select **Finish**.
+
+### Enable the browser extension
-### Install the browser extension
+Next, users need to enable the browser extension in a remote session to use multimedia redirection with Edge or Chrome.
-Next, you'll need to install the browser extension on session hosts where you already have Edge or Chrome available. When you install the host component, it also installs the browser extension. In order to use multimedia redirection, they'll need to enable the extension:
+> [!TIP]
+> You can also automate installing and enabling the browser extension from Microsoft Edge Add-ons or the Chrome Web Store for all users by [using Group Policy](#install-the-browser-extension-using-group-policy).
1. Sign in to Azure Virtual Desktop and open Edge or Chrome.
-1. When opening the browser, after a short while users will see a prompt that says **New Extension added**, then they should select **Turn on extension**. Users should also pin the extension so that they can see from the icon if multimedia redirection is connected.
+1. Shortly after the browser opens, users will see a prompt that says **New Extension added**. When the prompt appears, users should select **Turn on extension**. Users should also pin the extension so that they can see from the icon whether multimedia redirection is connected.
:::image type="content" source="./media/mmr-extension-enable.png" alt-text="A screenshot of the prompt to enable the extension."::: >[!IMPORTANT]
- >If the user selects **Remove extension**, it will be removed from the browser and they will need to add it from Microsoft Edge Add-ons or the Chrome Web Store. To install it again, see [Installing the browser extension manually](#install-the-browser-extension-manually).
-
-You can also automate installing the browser extension from Microsoft Edge Add-ons or the Chrome Web Store for all users by [using Group Policy](#install-the-browser-extension-using-group-policy).
+ >If the user selects **Remove extension**, it will be removed from the browser and they will need to add it from Microsoft Edge Add-ons or the Chrome Web Store. To install it again, see [Install the browser extension manually (optional)](#install-the-browser-extension-manually-optional).
Using Group Policy has the following benefits:
Using Group Policy has the following benefits:
- You can restrict which websites use multimedia redirection. - You can pin the extension icon in Google Chrome by default.
-#### Install the browser extension manually
+#### Install the browser extension manually (optional)
-If you need to install the browser extension separately, you can download it from Microsoft Edge Add-ons or the Chrome Web Store.
+If installing the host component doesn't automatically install the extension, you can also download it from Microsoft Edge Add-ons or the Chrome Web Store.
To install the multimedia redirection extension manually, follow these steps:
To install the multimedia redirection extension manually, follow these steps:
1. In your browser, open one of the following links, depending on which browser you're using:
- - For **Microsoft Edge**: [Microsoft Multimedia Redirection Extension](https://microsoftedge.microsoft.com/addons/detail/wvd-multimedia-redirectio/joeclbldhdmoijbaagobkhlpfjglcihd)
+ - For **Microsoft Edge**: [Microsoft multimedia redirection extension](https://microsoftedge.microsoft.com/addons/detail/wvd-multimedia-redirectio/joeclbldhdmoijbaagobkhlpfjglcihd)
- - For **Google Chrome**: [Microsoft Multimedia Redirection Extension](https://chrome.google.com/webstore/detail/wvd-multimedia-redirectio/lfmemoeeciijgkjkgbgikoonlkabmlno)
+ - For **Google Chrome**: [Microsoft multimedia redirection Extension](https://chrome.google.com/webstore/detail/wvd-multimedia-redirectio/lfmemoeeciijgkjkgbgikoonlkabmlno)
1. Install the extension by selecting **Get** (for Microsoft Edge) or **Add to Chrome** (for Google Chrome), then at the additional prompt, select **Add extension**. Once the installation is finished, you'll see a confirmation message saying that you've successfully added the extension.
You can install the multimedia redirection extension using Group Policy, either
+## Configure call redirection (preview) for the Remote Desktop client only
+
+If you want to test the call redirection (preview) feature, you first need to configure the Remote Desktop client to use [Insider features](./users/client-features-windows.md#enable-windows-insider-releases).
+ ## Check the extension status
-Once you've installed the extension, you can check its status by visiting a website with media content, such as one from the list at [Websites that work with multimedia redirection](multimedia-redirection-intro.md#websites-that-work-with-multimedia-redirection), and hovering your mouse cursor over [the multimedia redirection extension icon](multimedia-redirection-intro.md#the-multimedia-redirection-status-icon) in the extension bar on the top-right corner of your browser. A message will appear and tell you about the current status, as shown in the following screenshot.
+Once you've installed the extension, you can check its status by visiting a website with media content, such as one from the list at [Websites that work with multimedia redirection](multimedia-redirection-intro.md#websites-that-work-with-multimedia-redirection), and hovering your mouse cursor over [the multimedia redirection extension icon](multimedia-redirection-intro.md#check-if-multimedia-redirection-is-active) in the extension bar on the top-right corner of your browser. A message will appear and tell you about the current status, as shown in the following screenshot.
++
+### Features supported on current page
+To find out what types of redirection are enabled on the webpage you're visiting, open the extension menu and look for the section named **Features supported on current page**. If a feature is currently enabled, you'll see a green check mark next to it, as shown in the following screenshot.
-Another way you can check the extension status is by selecting the extension icon, then you'll see a list of **Features supported on this website** with a green check mark if the website supports that feature.
## Teams live events
To use multimedia redirection with Teams live events:
1. Open the link to the Teams live event in either the Edge or Chrome browser.
-1. Make sure you can see a green play icon as part of the [multimedia redirection status icon](multimedia-redirection-intro.md#the-multimedia-redirection-status-icon). If the green play icon is there, MMR is enabled for Teams live events.
+1. Make sure you can see a green play icon as part of the [multimedia redirection status icon](multimedia-redirection-intro.md#check-if-multimedia-redirection-is-active). If the green play icon is there, multimedia redirection is enabled for Teams live events.
-1. Select **Watch on the web instead**. The Teams live event should automatically start playing in your browser. Make sure you only select **Watch on the web instead**, as shown in the following screenshot. If you use the native Teams app, MMR won't work.
+1. Select **Watch on the web instead**. The Teams live event should automatically start playing in your browser. Make sure you only select **Watch on the web instead**, as shown in the following screenshot. If you use the native Teams app, multimedia redirection won't work.
:::image type="content" source="./media/teams-live-events.png" alt-text="A screenshot of the 'Watch the live event in Microsoft Teams' page. The status icon and 'watch on the web instead' options are highlighted in red.":::
-## Enable video playback for all sites
+## Advanced settings
+
+The following sections describe additional settings you can configure in multimedia redirection.
+
+### Video playback redirection
+
+The following sections will show you how to enable and use various features related to video playback redirection for Azure Virtual Desktop.
-Multimedia redirection is currently limited to the sites listed in [Websites that work with multimedia redirection](multimedia-redirection-intro.md#websites-that-work-with-multimedia-redirection) by default. However, you can enable video playback for all sites to allow you to test the feature with other websites. To enable video playback for all sites:
+#### Enable video playback for all sites
+
+Video playback redirection is currently limited to the sites listed in [Websites that work with multimedia redirection](multimedia-redirection-intro.md#websites-that-work-with-multimedia-redirection) by default. However, you can enable video playback redirection for all sites so that you can test the feature with other websites. To enable video playback redirection for all sites:
1. Select the extension icon in your browser. 1. Select **Show Advanced Settings**.
-1. Toggle **Enable video playback for all sites(beta)** to **on**.
+1. Toggle **Enable video playback for all sites (beta)** to **on**.
-## Redirected video outlines
+#### Enable redirected video outlines
Redirected video outlines highlight the video elements that are currently being redirected. When this setting is enabled, you'll see a bright highlighted border around each video element that's being redirected. To enable redirected video outlines:
1. Toggle **Redirected video outlines** to **on**. You will need to refresh the webpage for the change to take effect.
-### Video status overlay
+#### Video status overlay
When you enable video status overlay, you'll see a short message at the top of the video player that indicates the redirection status of the current video. The message will disappear after five seconds. To enable video status overlay: 1. Select the extension icon in your browser.
-2. Select **Show Advanced Settings**.
+1. Select **Show Advanced Settings**.
+
+1. Toggle **Video Status Overlay** to **on**. You'll need to refresh the webpage for the change to take effect.
+
+### Call redirection
+
+The following section will show you how to use advanced features for call redirection.
+
+#### Enable call redirection for all sites
+
+Call redirection is currently limited to the web apps listed in [Websites that work with multimedia redirection](multimedia-redirection-intro.md#websites-that-work-with-multimedia-redirection) by default. If you're using a listed calling app with an internal URL, you must turn on the **Enable WebRTC for all sites** setting to use call redirection. You can also enable call redirection for all sites to test the feature with web apps that aren't officially supported yet.
+
+To enable call redirection for all sites:
+
+1. On your client device, create a registry key with the following values (a PowerShell sketch for this step follows the end of these steps):
+
+ - **Key**: HKCU\Software\Microsoft\MMR
+ - **Type**: REG_DWORD
+ - **Name**: AllowCallRedirectionAllSites
+ - **Value data**: 1
+
+1. Next, connect to a remote session, then select the **extension icon** in your browser.
+
+1. Select **Show Advanced Settings**.
-3. Toggle **Video Status Overlay** to **on**. You'll need to refresh the webpage for the change to take effect.
+1. Toggle **Enable call redirection for all sites (experimental)** on.
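For step 1, here's a minimal PowerShell sketch that creates the registry value described above. Run it on the client device in the signed-in user's context, since the key lives under HKCU:

```powershell
# Minimal sketch for step 1: create the AllowCallRedirectionAllSites value described above.
# Run on the client device in the signed-in user's context (the key is under HKCU).
$path = "HKCU:\Software\Microsoft\MMR"

# Create the key if it doesn't exist yet, then set the REG_DWORD value to 1.
New-Item -Path $path -Force | Out-Null
New-ItemProperty -Path $path -Name "AllowCallRedirectionAllSites" -PropertyType DWord -Value 1 -Force
```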
## Next steps
virtual-desktop Network Connectivity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/network-connectivity.md
Client connection sequence described below:
2. Azure Active Directory authenticates the user and returns the token used to enumerate resources available to a user 3. Client passes token to the Azure Virtual Desktop feed subscription service 4. Azure Virtual Desktop feed subscription service validates the token
-5. Azure Virtual Desktop feed subscription service passes the list of available desktops and RemoteApps back to the client in the form of digitally signed connection configuration
+5. Azure Virtual Desktop feed subscription service passes the list of available desktops and applications back to the client in the form of digitally signed connection configuration
6. Client stores the connection configuration for each available resource in a set of .rdp files 7. When a user selects the resource to connect, the client uses the associated .rdp file and establishes the secure TLS 1.2 connection to an Azure Virtual Desktop gateway instance with the help of [Azure Front Door](../frontdoor/concept-end-to-end-tls.md#supported-cipher-suites) and passes the connection information. The latency from all gateways is evaluated, and the gateways are put into groups of 10ms. The gateway with the lowest latency and then lowest number of existing connections is chosen. 8. Azure Virtual Desktop gateway validates the request and asks the Azure Virtual Desktop broker to orchestrate the connection
virtual-desktop Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/overview.md
With Azure Virtual Desktop, you can set up a scalable and flexible environment:
You can deploy and manage virtual desktops: - Use the Azure portal, Azure CLI, PowerShell and REST API to configure the host pools, create application groups, assign users, and publish resources.-- Publish full desktop or individual remote apps from a single host pool, create individual application groups for different sets of users, or even assign users to multiple application groups to reduce the number of images.
+- Publish a full desktop or individual applications from a single host pool, create individual application groups for different sets of users, or even assign users to multiple application groups to reduce the number of images.
- As you manage your environment, use built-in delegated access to assign roles and collect diagnostics to understand various configuration or user errors. - Use the new Diagnostics service to troubleshoot errors. - Only manage the image and virtual machines, not the infrastructure. You don't need to personally manage the Remote Desktop roles like you do with Remote Desktop Services, just the virtual machines in your Azure subscription.
virtual-desktop Prerequisites https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/prerequisites.md
You also need to make sure you've registered the *Microsoft.DesktopVirtualizatio
## Identity
-To access virtual desktops and remote apps from your session hosts, your users need to be able to authenticate. [Azure Active Directory](../active-directory/fundamentals/active-directory-whatis.md) (Azure AD) is Microsoft's centralized cloud identity service that enables this capability. Azure AD is always used to authenticate users for Azure Virtual Desktop. Session hosts can be joined to the same Azure AD tenant, or to an Active Directory domain using [Active Directory Domain Services](/windows-server/identity/ad-ds/get-started/virtual-dc/active-directory-domain-services-overview) (AD DS) or [Azure Active Directory Domain Services](../active-directory-domain-services/overview.md) (Azure AD DS), providing you with a choice of flexible configuration options.
+To access desktops and applications from your session hosts, your users need to be able to authenticate. [Azure Active Directory](../active-directory/fundamentals/active-directory-whatis.md) (Azure AD) is Microsoft's centralized cloud identity service that enables this capability. Azure AD is always used to authenticate users for Azure Virtual Desktop. Session hosts can be joined to the same Azure AD tenant, or to an Active Directory domain using [Active Directory Domain Services](/windows-server/identity/ad-ds/get-started/virtual-dc/active-directory-domain-services-overview) (AD DS) or [Azure Active Directory Domain Services](../active-directory-domain-services/overview.md) (Azure AD DS), providing you with a choice of flexible configuration options.
### Session hosts
-You need to join session hosts that provide virtual desktops and remote apps to the same Azure AD tenant as your users, or an Active Directory domain (either AD DS or Azure AD DS).
+You need to join session hosts that provide desktops and applications to the same Azure AD tenant as your users, or an Active Directory domain (either AD DS or Azure AD DS).
To join session hosts to Azure AD or an Active Directory domain, you need the following permissions:
You'll need to enter the following identity parameters when deploying session ho
## Operating systems and licenses
-You have a choice of operating systems (OS) that you can use for session hosts to provide virtual desktops and remote apps. You can use different operating systems with different host pools to provide flexibility to your users. We support the following 64-bit versions of these operating systems, where supported versions and dates are inline with the [Microsoft Lifecycle Policy](/lifecycle/).
+You have a choice of operating systems (OS) that you can use for session hosts to provide desktops and applications. You can use different operating systems with different host pools to provide flexibility to your users. We support the following 64-bit versions of these operating systems, where supported versions and dates are inline with the [Microsoft Lifecycle Policy](/lifecycle/).
|Operating system |User access rights| |||
If your license entitles you to use Azure Virtual Desktop, you don't need to ins
## Network
-There are several network requirements you'll need to meet to successfully deploy Azure Virtual Desktop. This lets users connect to their virtual desktops and remote apps while also giving them the best possible user experience.
+There are several network requirements you'll need to meet to successfully deploy Azure Virtual Desktop. This lets users connect to their desktops and applications while also giving them the best possible user experience.
Users connecting to Azure Virtual Desktop securely establish a reverse connection to the service, which means you don't need to open any inbound ports. Transmission Control Protocol (TCP) on port 443 is used by default, however RDP Shortpath can be used for [managed networks](shortpath.md) and [public networks](shortpath-public.md) that establishes a direct User Datagram Protocol (UDP)-based transport.
Consider the following when managing session hosts:
## Remote Desktop clients
-Your users will need a [Remote Desktop client](/windows-server/remote/remote-desktop-services/clients/remote-desktop-clients) to connect to virtual desktops and remote apps. The following clients support Azure Virtual Desktop:
+Your users will need a [Remote Desktop client](/windows-server/remote/remote-desktop-services/clients/remote-desktop-clients) to connect to desktops and applications. The following clients support Azure Virtual Desktop:
- [Windows Desktop client](./users/connect-windows.md) - [Azure Virtual Desktop Store app for Windows](./users/connect-windows-azure-virtual-desktop-app.md)
virtual-desktop Rdp Quality Of Service Qos https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/rdp-quality-of-service-qos.md
Without some form of QoS, you might see the following issues:
* Jitter – RDP packets arriving at different rates, which can result in visual and audio glitches * Packet loss – packets dropped, which results in retransmission that requires additional time
-* Delayed round-trip time (RTT) – RDP packets taking a long time to reach their destinations, which result in noticeable delays between input and reaction from the remote application.
+* Delayed round-trip time (RTT) – RDP packets taking a long time to reach their destinations, which result in noticeable delays between input and reaction from the desktop or application.
The least complicated way to address these issues is to increase the size of the data connections, both internally and out to the internet. Since that's often cost-prohibitive, QoS provides a way to manage the resources you have more effectively instead of adding bandwidth. To address quality issues, we recommend that you first use QoS, then add bandwidth only where necessary.
virtual-desktop Architecture Recs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/remote-app-streaming/architecture-recs.md
This article will provide guidance for your Azure Virtual Desktop deployment str
If you're making an Azure Virtual Desktop deployment for users inside your organization, you can host all your users and resources in the same Azure tenant. You can also use Azure Virtual Desktop's currently supported identity management methods to keep your users secure.
-These are the most basic requirements for an Azure Virtual Desktop deployment that can serve RemoteApps and desktops to users within your organization:
+These are the most basic requirements for an Azure Virtual Desktop deployment that can serve desktops and applications to users within your organization:
- One host pool to host user sessions - One Azure subscription to host the host pool
virtual-desktop Custom Apps https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/remote-app-streaming/custom-apps.md
If you must install your apps manually, you'll need to remote into your session
## Microsoft Store applications
-We don't recommend using Microsoft Store apps for remote app streaming in Azure Virtual Desktop at this time.
+We don't recommend using Microsoft Store apps for RemoteApp streaming in Azure Virtual Desktop at this time.
## Next steps
virtual-desktop Identities https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/remote-app-streaming/identities.md
Title: Create user accounts for remote app streaming - Azure Virtual Desktop
-description: How to create user accounts for remote app streaming for your customers in Azure Virtual Desktop with Azure AD, Azure AD DS, or AD DS.
+ Title: Create user accounts for RemoteApp streaming - Azure Virtual Desktop
+description: How to create user accounts for RemoteApp streaming for your customers in Azure Virtual Desktop with Azure AD, Azure AD DS, or AD DS.
Last updated 08/06/2021
-# Create user accounts for remote app streaming
+# Create user accounts for RemoteApp streaming
-Because Azure Virtual Desktop doesn't currently support external profiles, or "identities," your users won't be able to access the apps you host with their own corporate credentials. Instead, you'll need to create identities for them in the Active Directory Domain that you'll use for remote app streaming and sync user objects to the associated Azure Active Directory (Azure AD) tenant.
+Because Azure Virtual Desktop doesn't currently support external profiles, or "identities," your users won't be able to access the apps you host with their own corporate credentials. Instead, you'll need to create identities for them in the Active Directory Domain that you'll use for RemoteApp streaming and sync user objects to the associated Azure Active Directory (Azure AD) tenant.
In this article, we'll explain how you can manage user identities to provide a secure environment for your customers. We'll also talk about the different parts that make up an identity.
virtual-desktop Licensing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/remote-app-streaming/licensing.md
Title: Understanding Azure Virtual Desktop per-user access pricing for remote app streaming - Azure
-description: An overview of Azure Virtual Desktop licensing considerations for remote app streaming.
+ Title: Understanding Azure Virtual Desktop per-user access pricing for RemoteApp streaming - Azure
+description: An overview of Azure Virtual Desktop licensing considerations for RemoteApp streaming.
# Understanding licensing and per-user access pricing
-This article explains the licensing requirements for using Azure Virtual Desktop to stream remote applications to external users. In this article, you'll learn how licensing Azure Virtual Desktop for external users is different than for internal users, how per-user access pricing works in detail, and how to license other products you plan to use with Azure Virtual Desktop.
+This article explains the licensing requirements for using Azure Virtual Desktop to stream applications remotely to external users. In this article, you'll learn how licensing Azure Virtual Desktop for external users is different than for internal users, how per-user access pricing works in detail, and how to license other products you plan to use with Azure Virtual Desktop.
## Internal users and external users
virtual-desktop Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/remote-app-streaming/overview.md
Title: What is Azure Virtual Desktop remote app streaming? - Azure
-description: An overview of Azure Virtual Desktop remote app streaming.
+ Title: What is Azure Virtual Desktop RemoteApp streaming? - Azure
+description: An overview of Azure Virtual Desktop RemoteApp streaming.
Last updated 11/12/2021
-# What is Azure Virtual Desktop remote app streaming?
+# What is Azure Virtual Desktop RemoteApp streaming?
-Azure Virtual Desktop is a desktop and app virtualization service that runs on the cloud and lets you access your remote desktop anytime, anywhere. However, did you know you can also use Azure Virtual Desktop as a Platform as a Service (PaaS) to provide your organization's apps as Software as a Service (SaaS) to your customers? With Azure Virtual Desktop remote app streaming, you can now use Azure Virtual Desktop to deliver apps to your customers over a secure network through virtual machines.
+Azure Virtual Desktop is a desktop and app virtualization service that runs on the cloud and lets you access your remote desktop anytime, anywhere. However, did you know you can also use Azure Virtual Desktop as a Platform as a Service (PaaS) to provide your organization's apps as Software as a Service (SaaS) to your customers? With Azure Virtual Desktop RemoteApp streaming, you can now use Azure Virtual Desktop to deliver apps to your customers over a secure network through virtual machines.
If you're unfamiliar with Azure Virtual Desktop (or are new to app virtualization in general), we've gathered some resources here that can help you get your deployment up and running. ## Requirements
-Before you get started, we recommend you take a look at the [overview for Azure Virtual Desktop](../overview.md) for a more in-depth list of system requirements for running Azure Virtual Desktop. While you're there, you can browse the rest of the Azure Virtual Desktop documentation if you want a more IT-focused look into the service, as most of the articles also apply to remote app streaming for Azure Virtual Desktop. Once you understand the basics, you can use the remote app streaming documentation more effectively.
+Before you get started, we recommend you take a look at the [overview for Azure Virtual Desktop](../overview.md) for a more in-depth list of system requirements for running Azure Virtual Desktop. While you're there, you can browse the rest of the Azure Virtual Desktop documentation if you want a more IT-focused look into the service, as most of the articles also apply to RemoteApp streaming for Azure Virtual Desktop. Once you understand the basics, you can use the RemoteApp streaming documentation more effectively.
In order to set up an Azure Virtual Desktop deployment for your custom apps that's available to customers outside your organization, you'll need the following things:
virtual-desktop Sandbox https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/sandbox.md
This topic will walk you through how to publish Windows Sandbox for your users i
## Prerequisites
-Before you get started, here's what you need to configureWindows Sandbox in Azure Virtual Desktop:
+Before you get started, here's what you need to configure Windows Sandbox in Azure Virtual Desktop:
- A working Azure profile that can access the Azure portal. - A functioning Azure Virtual Desktop deployment. To learn how to deploy Azure Virtual Desktop (classic), see [Create a tenant in Azure Virtual Desktop](./virtual-desktop-fall-2019/tenant-setup-azure-active-directory.md). To learn how to deploy Azure Virtual Desktop with Azure Resource Manager integration, see [Create a host pool with the Azure portal](create-host-pools-azure-marketplace.md).
Before you get started, here's what you need to configureWindows Sandbox in Azur
## Prepare the VHD image for Azure
-First, you'll need to create a master VHD image. If you haven't created your master VHD image yet, go to [Prepare and customize a master VHD image](set-up-customize-master-image.md) and follow the instructions there. When you're given the option to select an operating system (OS) for your master image, select either Windows 10 or Windows 11.
+First, you'll need to create a custom VHD image. If you haven't created your custom VHD image yet, go to [Prepare and customize a master VHD image](set-up-customize-master-image.md) and follow the instructions there. When you're given the option to select an operating system (OS) for your master image, select either Windows 10 or Windows 11.
When customizing your master image, you'll need to enable the **Containers-DisposableClientVM** feature by running the following command:
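The exact command isn't reproduced in this excerpt. As a hedged illustration, enabling an optional Windows feature such as **Containers-DisposableClientVM** from an elevated PowerShell session typically looks like this:

```powershell
# Illustrative only: enable the Containers-DisposableClientVM (Windows Sandbox) optional
# feature on the image VM from an elevated PowerShell session. This may not be the exact
# command shown in the source article.
Enable-WindowsOptionalFeature -Online -FeatureName "Containers-DisposableClientVM" -All
```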
To publish Windows Sandbox to your host pool using PowerShell:
Set-AzContext -Tenant $tenantId -Subscription <subscription name or id> ```
-1. Run the following command to create a Sandbox remote app:
+1. Run the following command to create a Sandbox RemoteApp:
```azurepowershell-interactive New-AzWvdApplication -ResourceGroupName <Resource Group Name> -GroupName <Application Group Name> -FilePath C:\windows\system32\WindowsSandbox.exe -IconIndex 0 -IconPath C:\windows\system32\WindowsSandbox.exe -CommandLineSetting 'Allow' -ShowInPortal:$true -SubscriptionId <Workspace Subscription ID>
To publish Windows Sandbox to your host pool using PowerShell:
-That's it! Leave the rest of the options default. You should now have Windows Sandbox Remote App published for your users.
+That's it! Leave the rest of the options default. You should now have Windows Sandbox published as a RemoteApp for your users.
## Next steps
virtual-desktop Security Guide https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/security-guide.md
Enabling audit log collection lets you view user and admin activity related to A
- [Session hosts](../azure-monitor/agents/agent-windows.md) - [Key Vault logs](../key-vault/general/logging.md)
-### Use RemoteApps
+### Use RemoteApp
-When choosing a deployment model, you can either provide remote users access to entire virtual desktops or only select applications. Remote applications, or RemoteApps, provide a seamless experience as the user works with apps on their virtual desktop. RemoteApps reduce risk by only letting the user work with a subset of the remote machine exposed by the application.
+When choosing a deployment model, you can either provide remote users with access to entire desktops, or publish only select applications as RemoteApps. RemoteApp provides a seamless experience as the user works with apps from their virtual desktop. RemoteApp reduces risk by only letting the user work with the subset of the remote machine that the application exposes.
### Monitor usage with Azure Monitor
virtual-desktop Start Virtual Machine Connect https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/start-virtual-machine-connect.md
To use Start VM on Connect, make sure you follow these guidelines:
## Assign the Desktop Virtualization Power On Contributor role with the Azure portal
-Before you can configure Start VM on Connect, you'll need to assign the *Desktop Virtualization Power On Contributor* role-based access control (RBAC) role with your Azure subscription as the assignable scope. Assigning this role at any level lower than your subscription, such as the resource group, host pool, or VM, will prevent Start VM on Connect from working properly.
+Before you can configure Start VM on Connect, you'll need to assign the *Desktop Virtualization Power On Contributor* role-based access control (RBAC) role to the Azure Virtual Desktop service principal with your Azure subscription as the assignable scope. Assigning this role at any level lower than your subscription, such as the resource group, host pool, or VM, will prevent Start VM on Connect from working properly.
You'll need to add each Azure subscription as an assignable scope that contains host pools and session host VMs you want to use with Start VM on Connect. This role and assignment will allow Azure Virtual Desktop to power on VMs, check their status, and report diagnostic information in those subscriptions.
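As a sketch of what this assignment can look like in Azure PowerShell (the service principal display name and the placeholder subscription ID are assumptions to replace for your environment):

```powershell
# Sketch only: assign the Desktop Virtualization Power On Contributor role to the
# Azure Virtual Desktop service principal at subscription scope. The display name may be
# "Windows Virtual Desktop" in some tenants; replace the placeholder values for your environment.
$subscriptionId = "<subscription-id>"
$servicePrincipal = Get-AzADServicePrincipal -DisplayName "Azure Virtual Desktop"

New-AzRoleAssignment -ObjectId $servicePrincipal.Id `
    -RoleDefinitionName "Desktop Virtualization Power On Contributor" `
    -Scope "/subscriptions/$subscriptionId"
```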
virtual-desktop Teams On Avd https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/teams-on-avd.md
To enable hardware encode:
1. Set the value to **1** to enable the feature. 1. Repeat these instructions for every client device.
-### Enable content sharing for Teams for Remote App
+### Enable content sharing for Teams for RemoteApp
-Enabling content sharing for Teams on Azure Virtual Desktop lets you share your screen or application window. To enable this feature, your session host VM needs to be running version 1.31.2211.15001 or later of [the WebRTC service](whats-new-webrtc.md) and version 1.2.3401 or later of the [Windows Desktop client](whats-new-client-windows.md).
+Enabling content sharing for Teams on Azure Virtual Desktop lets you share your screen or application window. To enable this feature, your session host VM needs to be running version 1.31.2211.15001 or later of [the WebRTC Redirector Service](whats-new-webrtc.md) and version 1.2.3401 or later of the [Windows Desktop client](whats-new-client-windows.md).
To enable content sharing:
1. Add the **ShareClientDesktop** as a DWORD value. 1. Set the value to **1** to enable the feature.
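The registry location for these values isn't shown in this excerpt. As a sketch under that assumption, the WebRTC Redirector policy key commonly used for these Teams settings is shown below; adjust the path if your environment documents a different one:

```powershell
# Sketch only: the registry path below is an assumption (this excerpt doesn't show it);
# it's the WebRTC Redirector policy key commonly used for these Teams settings.
# Run on the session host from an elevated PowerShell session.
$path = "HKLM:\SYSTEM\CurrentControlSet\Control\Terminal Server\AddIns\WebRTC Redirector\Policy"

New-Item -Path $path -Force | Out-Null
New-ItemProperty -Path $path -Name "ShareClientDesktop" -PropertyType DWord -Value 1 -Force
```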
-### Disable desktop screen share for Teams for Remote App
+### Disable desktop screen share for Teams for RemoteApp
You can disable desktop screen sharing for Teams on Azure Virtual Desktop. To enable this feature, your session host VM needs to be running version 1.31.2211.15001 or later of [the WebRTC service](whats-new-webrtc.md) and version 1.2.3401 or later of the [Windows Desktop client](whats-new-client-windows.md). >[!NOTE]
->You must [enable the ShareClientDesktop key](#enable-content-sharing-for-teams-for-remote-app) before you can use this key.
+>You must [enable the ShareClientDesktop key](#enable-content-sharing-for-teams-for-remoteapp) before you can use this key.
To disable desktop screen share:
1. Add the **DisableRAILScreensharing** as a DWORD value. 1. Set the value to **1** to disable desktop screen share.
-### Disable application window sharing for Teams for Remote App
+### Disable application window sharing for Teams for RemoteApp
You can disable application window sharing for Teams on Azure Virtual Desktop. To enable this feature, your session host VM needs to be running version 1.31.2211.15001 or later of [the WebRTC service](whats-new-webrtc.md) and version 1.2.3401 or later of the [Windows Desktop client](whats-new-client-windows.md). >[!NOTE]
->You must [enable the ShareClientDesktop key](#enable-content-sharing-for-teams-for-remote-app) before you can use this key.
+>You must [enable the ShareClientDesktop key](#enable-content-sharing-for-teams-for-remoteapp) before you can use this key.
To disable application window sharing:
virtual-desktop Terminology https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/terminology.md
>[!IMPORTANT] >This content applies to Azure Virtual Desktop with Azure Resource Manager Azure Virtual Desktop objects. If you're using Azure Virtual Desktop (classic) without Azure Resource Manager objects, see [this article](./virtual-desktop-fall-2019/environment-setup-2019.md).
-Azure Virtual Desktop is a service that gives users easy and secure access to their virtualized desktops and RemoteApps. This topic will tell you a bit more about the terminology and general structure of Azure Virtual Desktop.
+Azure Virtual Desktop is a service that gives users easy and secure access to their virtualized desktops and applications. This topic will tell you a bit more about the terminology and general structure of Azure Virtual Desktop.
## Host pools
An application group is a logical grouping of applications installed on session
An application group can be one of two types: -- RemoteApp, where users access the RemoteApps you individually select and publish to the application group. Available with pooled host pools only.
+- RemoteApp, where users access the applications you individually select and publish to the application group. Available with pooled host pools only.
- Desktop, where users access the full desktop. Available with pooled or personal host pools.
-Pooled host pools have a preferred application group type that dictates whether users see RemoteApp or Desktop apps in their feed if both resources have been published to the same user. By default, Azure Virtual Desktop automatically creates a Desktop application group with the friendly name **Default Desktop** whenever you create a host pool and sets the host pool's preferred application group type to **Desktop**. You can remove the Desktop application group at any time. If you want your users to only see RemoteApps in their feed, you should set the **preferred application group type** value to **RemoteApp**. If you want your users to only see session desktops in their feed, you should set the **preferred application group type** value to **Desktop**. You can't create another Desktop application group in a host pool while a Desktop application group exists.
+Pooled host pools have a preferred application group type that dictates whether users see RemoteApp or Desktop apps in their feed if both resources have been published to the same user. By default, Azure Virtual Desktop automatically creates a Desktop application group with the friendly name **Default Desktop** whenever you create a host pool and sets the host pool's preferred application group type to **Desktop**. You can remove the Desktop application group at any time. If you want your users to only see applications in their feed, you should set the **preferred application group type** value to **RemoteApp**. If you want your users to only see session desktops in their feed, you should set the **preferred application group type** value to **Desktop**. You can't create another Desktop application group in a host pool while a Desktop application group exists.
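As a sketch only, assuming the Az.DesktopVirtualization module's `Update-AzWvdHostPool` cmdlet exposes a `-PreferredAppGroupType` parameter, switching a host pool's feed preference to RemoteApp could look like this:

```powershell
# Sketch, assuming Update-AzWvdHostPool (Az.DesktopVirtualization) accepts
# -PreferredAppGroupType. Switches the host pool's feed preference to RemoteApp.
Update-AzWvdHostPool -ResourceGroupName "<resource-group>" -Name "<host-pool-name>" `
    -PreferredAppGroupType "RemoteApp"
```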
To publish resources to users, you must assign them to application groups. When assigning users to application groups, consider the following things:
## Workspaces
-A workspace is a logical grouping of application groups in Azure Virtual Desktop. Each Azure Virtual Desktop application group must be associated with a workspace for users to see the remote apps and desktops published to them.
+A workspace is a logical grouping of application groups in Azure Virtual Desktop. Each Azure Virtual Desktop application group must be associated with a workspace for users to see the desktops and applications published to them.
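As a sketch of that association, assuming the Az.DesktopVirtualization cmdlet shown here, registering an application group with a workspace could look like this:

```powershell
# Sketch: associate an existing application group with a workspace so its resources show
# up in users' feeds. Replace the placeholder names and subscription ID for your environment.
Register-AzWvdApplicationGroup -ResourceGroupName "<resource-group>" `
    -WorkspaceName "<workspace-name>" `
    -ApplicationGroupPath "/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.DesktopVirtualization/applicationGroups/<application-group-name>"
```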
## End users
In this section, we'll go over each of the three types of user sessions that end
### Active user session
-A user session is considered "active" when a user signs in and connects to their remote app or desktop resource.
+A user session is considered *active* when a user signs in and connects to their desktop or RemoteApp resource.
### Disconnected user session
virtual-desktop Troubleshoot Multimedia Redirection https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/troubleshoot-multimedia-redirection.md
Title: Troubleshoot Multimedia redirection on Azure Virtual Desktop - Azure
description: Known issues and troubleshooting instructions for multimedia redirection for Azure Virtual Desktop. Previously updated : 04/07/2023 Last updated : 07/18/2023 # Troubleshoot multimedia redirection for Azure Virtual Desktop
->[!NOTE]
-> Multimedia redirection on Azure Virtual Desktop is only available for the [Windows Desktop client, version 1.2.3916 or later](/windows-server/remote/remote-desktop-services/clients/windowsdesktop-whatsnew). on Windows 11, Windows 10, or Windows 10 IoT Enterprise devices.
+> [!IMPORTANT]
+> Call redirection is currently in PREVIEW.
+> See the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) for legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
-This article describes known issues and troubleshooting instructions for multimedia redirection (MMR) for Azure Virtual Desktop.
+This article describes known issues and troubleshooting instructions for multimedia redirection for Azure Virtual Desktop.
## Known issues and limitations The following issues are ones we're already aware of, so you won't need to report them: -- In the first browser tab a user opens, the extension pop-up might show a message that says, "The extension is not loaded", or a message that says video playback or calling redirection isn't supported while redirection is working correctly in the tab. You can resolve this issue by opening a second tab.
+- In the first browser tab a user opens, the extension pop-up might show a message that says, "The extension is not loaded", or a message that says video playback or call redirection isn't supported, even though redirection is working correctly in that tab. You can resolve this issue by opening a second tab.
-- Multimedia redirection only works on the [Windows Desktop client](users/connect-windows.md).
+- Multimedia redirection only works on the [Windows Desktop client](users/connect-windows.md). Any other clients, such as the macOS, iOS, Android or Web client, don't support multimedia redirection.
-- Multimedia redirection doesn't currently support protected content, so videos from Netflix, for example, won't work.--- When you resize the video window, the window's size will adjust faster than the video itself. You'll also see this issue when minimizing and maximizing the window.--- You might run into issue where you are stuck in the loading state on every video site. This is a known issue that we're currently investigating. To temporarily mitigate this issue, sign out of Azure Virtual Desktop and restart your session.
+- Multimedia redirection won't work as expected if the session hosts in your deployment are blocking cmd.exe.
- If you aren't using the default Windows size settings for video players (such as not fitting the player to the window, not maximizing the window, and so on), parts of video players may not appear correctly. If you encounter this issue, change the settings back to Default mode.
The following issues are ones we're already aware of, so you won't need to repor
- Sometimes the host and client version number disappears from the extension status message, which prevents the extension from loading on websites that support it. If you've installed the extension correctly, this issue is because your host machine doesn't have the latest C++ Redistributable installed. To fix this issue, install the [latest supported Visual C++ Redistributable downloads](/cpp/windows/latest-supported-vc-redist).
-### Video playback redirection
--- Video playback redirection only works on the [Windows Desktop client](/windows-server/remote/remote-desktop-services/clients/windowsdesktop#install-the-client), not the web client or other platforms such as macOS, Linux, and so on.
+### Known issues for video playback redirection
-- Video playback redirection doesn't currently support protected content, so videos from Pluralsight and Netflix won't work.
+- Video playback redirection doesn't currently support protected content, so videos that use protected content, such as from Pluralsight and Netflix, won't work.
- When you resize the video window, the window's size will adjust faster than the video itself. You'll also see this issue when minimizing and maximizing the window.
+- If you access a video site, sometimes the video will remain in a loading or buffering state but never actually start playing. We're aware of this issue and are currently investigating it. For now, you can make videos load again by signing out of Azure Virtual Desktop and restarting your session.
+
+### Known issues for call redirection
+
+- Call redirection only works for WebRTC-based audio calls on the sites listed in [Call redirection](multimedia-redirection-intro.md#call-redirection).
+
+- When disconnecting from a remote session, call redirection might stop working. You can make redirection start working again by refreshing the webpage.
+
+- If you enabled the **Enable video playback for all sites** setting in the multimedia redirection extension pop-up and see issues on a supported WebRTC audio calling site, disable the setting and try again.
+
+## Getting help for call redirection and video playback
+
+If you can start a call with multimedia redirection enabled and can see the green phone icon on the extension icon while calling, but the call quality is low, you should contact the app provider for help.
+
+If calls aren't going through, certain features don't work as expected while multimedia redirection is enabled, or multimedia redirection won't enable at all, you must submit a [Microsoft support ticket](../azure-portal/supportability/how-to-create-azure-support-request.md).
+
+If you encounter any video playback issues that this guide doesn't address or resolve, submit a [Microsoft support ticket](../azure-portal/supportability/how-to-create-azure-support-request.md).
+ ## Log collection If you encounter any issues, you can collect logs from the extension and provide them to your IT admin or support.
To enable log collection:
1. Select **Show Advanced Settings**.
-2. For **Collect logs**, select **Start**.
+1. For **Collect logs**, select **Start**.
## Next steps
virtual-desktop Troubleshoot Set Up Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/troubleshoot-set-up-overview.md
Use the following table to identify and resolve issues you may encounter when se
| Connected but no feed | Troubleshoot using the [User connects but nothing is displayed (no feed)](troubleshoot-service-connection.md#user-connects-but-nothing-is-displayed-no-feed) section of [Azure Virtual Desktop service connections](troubleshoot-service-connection.md). <br> <br> If your users have been assigned to an application group, [open an Azure support request](https://azure.microsoft.com/support/create-ticket/), select **Azure Virtual Desktop** for the service, then select **Remote Desktop Clients** for the problem type. | | Feed discovery problems due to the network | Your users need to contact their network administrator. | | Connecting clients | See [Azure Virtual Desktop service connections](troubleshoot-service-connection.md) and if that doesn't solve your issue, see [Session host virtual machine configuration](troubleshoot-vm-configuration.md). |
-| Responsiveness of remote applications or desktop | If issues are tied to a specific application or product, contact the team responsible for that product. |
+| Responsiveness of desktops or applications | If issues are tied to a specific application or product, contact the team responsible for that product. |
| Licensing messages or errors | If issues are tied to a specific application or product, contact the team responsible for that product. | | Issues with third-party authentication methods or tools | Verify that your third-party provider supports Azure Virtual Desktop scenarios and approach them regarding any known issues. | | Issues using Log Analytics for Azure Virtual Desktop | For issues with the diagnostics schema, [open an Azure support request](https://azure.microsoft.com/support/create-ticket/).<br><br>For queries, visualization, or other issues in Log Analytics, select the appropriate problem type under Log Analytics. |
virtual-desktop Tutorial Create Connect Personal Desktop https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/tutorial-create-connect-personal-desktop.md
To create a personal host pool, workspace, application group, and session host V
| Host pool name | Enter a name for the host pool, for example *aad-hp01*. | | Location | Select the Azure region from the list where the host pool, workspace, and application group will be deployed. | | Validation environment | Select **No**. This setting enables your host pool to receive service updates before all other production host pools, but isn't needed for this tutorial.|
- | Preferred app group type | Select **Desktop**. This setting designates what type of resource users see in their feed if they're assigned both *Desktop* and *Remote App* application groups in the same host pool.|
+ | Preferred app group type | Select **Desktop**. This setting designates what type of resource users see in their feed if they're assigned both *Desktop* and *RemoteApp* application groups in the same host pool.|
| **Host pool type** | | | Host pool type | Select **Personal**. This means that end users have a dedicated assigned session host that they'll always connect to. Selecting **Personal** shows a new option for **Assignment type**. | | Assignment type | Select **Automatic**. Automatic assignment means that a user will automatically get assigned the first available session host when they first sign in, which will then be dedicated to that user. |
virtual-desktop Uri Scheme https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/uri-scheme.md
Last updated 01/19/2023
> The *ms-avd* Uniform Resource Identifier scheme for Azure Virtual Desktop is currently in PREVIEW. > See the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) for legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
-You can use Uniform Resource Identifier (URI) schemes to invoke the Remote Desktop client with specific commands, parameters, and values for use with Azure Virtual Desktop. For example, you can subscribe to a workspace or connect to a particular desktop or Remote App.
+You can use Uniform Resource Identifier (URI) schemes to invoke the Remote Desktop client with specific commands, parameters, and values for use with Azure Virtual Desktop. For example, you can subscribe to a workspace or connect to a particular desktop or RemoteApp.
This article details the available commands and parameters, along with some examples.
Here's the list of currently supported commands for *ms-avd* and their correspon
| Parameter | Values | Description | |--|--|--| | workspaceid | Object ID (GUID). | Specify the object ID of a valid workspace.<br /><br />To get the object ID value using PowerShell, see [Retrieve the object ID of a host pool, workspace, application group, or application](powershell-module.md#retrieve-the-object-id-of-a-host-pool-workspace-application-group-or-application). You can also use [Desktop Virtualization REST APIs](/rest/api/desktopvirtualization). |
-| resourceid | Object ID (GUID). | Specify the object ID of a published resource contained in the workspace. The value can be for a desktop or Remote App.<br /><br />To get the object ID value using PowerShell, see [Retrieve the object ID of a host pool, workspace, application group, or application](powershell-module.md#retrieve-the-object-id-of-a-host-pool-workspace-application-group-or-application). You can also use [Desktop Virtualization REST APIs](/rest/api/desktopvirtualization). |
+| resourceid | Object ID (GUID). | Specify the object ID of a published resource contained in the workspace. The value can be for a desktop or RemoteApp.<br /><br />To get the object ID value using PowerShell, see [Retrieve the object ID of a host pool, workspace, application group, or application](powershell-module.md#retrieve-the-object-id-of-a-host-pool-workspace-application-group-or-application). You can also use [Desktop Virtualization REST APIs](/rest/api/desktopvirtualization). |
| user | User Principal Name (UPN), for example `user@contoso.com`. | Specify a valid user with access to the specified resource. | | env *(optional)* | **avdarm** (commercial Azure)<br />**avdgov** (Azure Government) | Specify the Azure cloud where resources are located. | | version | **0** | Specify the version of the connect URI scheme to use. |
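As an illustration of how these parameters combine (the GUID and UPN values are placeholders, and the `connect` command plus standard query syntax are assumed):

```powershell
# Illustration only: launch a connection using the ms-avd URI scheme from PowerShell.
# Assumes the "connect" command and standard query syntax; replace the placeholder
# workspace/resource object IDs and the UPN with your own values.
Start-Process "ms-avd:connect?workspaceid=<workspace-object-id>&resourceid=<resource-object-id>&user=user@contoso.com&version=0"
```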
virtual-desktop Client Features Windows Azure Virtual Desktop App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/users/client-features-windows-azure-virtual-desktop-app.md
To deploy the Azure Virtual Desktop Store app in an enterprise, you can use Micr
### URI to subscribe to a workspace
-The Azure Virtual Desktop Store app supports Uniform Resource Identifier (URI) schemes to invoke the Remote Desktop client with specific commands, parameters, and values for use with Azure Virtual Desktop. For example, you can subscribe to a workspace or connect to a particular desktop or Remote App.
+The Azure Virtual Desktop Store app supports Uniform Resource Identifier (URI) schemes to invoke the Remote Desktop client with specific commands, parameters, and values for use with Azure Virtual Desktop. For example, you can subscribe to a workspace or connect to a particular desktop or RemoteApp.
For more information, see [Uniform Resource Identifier schemes with the Remote Desktop client for Azure Virtual Desktop](../uri-scheme.md).
virtual-desktop Client Features Windows https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/users/client-features-windows.md
You can set the *AutomaticUpdates* registry key to one of the following values:
### URI to subscribe to a workspace
-The Remote Desktop client for Windows supports the *ms-rd* and *ms-avd* (preview) Uniform Resource Identifier (URI) schemes. This enables you to invoke the Remote Desktop client with specific commands, parameters, and values for use with Azure Virtual Desktop. For example, you can subscribe to a workspace or connect to a particular desktop or Remote App.
+The Remote Desktop client for Windows supports the *ms-rd* and *ms-avd* (preview) Uniform Resource Identifier (URI) schemes. This enables you to invoke the Remote Desktop client with specific commands, parameters, and values for use with Azure Virtual Desktop. For example, you can subscribe to a workspace or connect to a particular desktop or RemoteApp.
For more information and the available commands, see [Uniform Resource Identifier schemes with the Remote Desktop client for Azure Virtual Desktop](../uri-scheme.md?toc=%2Fazure%2Fvirtual-desktop%2Fusers%2Ftoc.json)
virtual-desktop Create Host Pools Powershell 2019 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/virtual-desktop-fall-2019/create-host-pools-powershell-2019.md
To register the Azure Virtual Desktop agents, do the following on each virtual m
## Next steps
-Now that you've made a host pool, you can populate it with RemoteApps. To learn more about how to manage apps in Azure Virtual Desktop, see the Manage application groups tutorial.
+Now that you've made a host pool, you can populate it with applications. To learn more about how to manage applications in Azure Virtual Desktop, see the Manage application groups tutorial.
> [!div class="nextstepaction"] > [Manage application groups tutorial](../manage-app-groups.md)
virtual-desktop Customize Feed Virtual Desktop Users 2019 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/virtual-desktop-fall-2019/customize-feed-virtual-desktop-users-2019.md
Add-RdsAccount -DeploymentUrl "https://rdbroker.wvd.microsoft.com"
You can change the display name for a published RemoteApp by setting the friendly name. By default, the friendly name is the same as the name of the RemoteApp program.
-To retrieve a list of published RemoteApps for an application group, run the following PowerShell cmdlet:
+To retrieve a list of published applications for an application group, run the following PowerShell cmdlet:
```powershell Get-RdsRemoteApp -TenantName <tenantname> -HostPoolName <hostpoolname> -AppGroupName <appgroupname>
virtual-desktop Customize Rdp Properties 2019 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/virtual-desktop-fall-2019/customize-rdp-properties-2019.md
Add-RdsAccount -DeploymentUrl "https://rdbroker.wvd.microsoft.com"
By default, published RDP files contain the following properties:
-|RDP properties | Desktops | RemoteApps |
+|RDP properties | Desktop session | RemoteApp session |
||| | | Multi-monitor mode | Enabled | N/A | | Drive redirections enabled | Drives, clipboard, printers, COM ports, USB devices and smartcards| Drives, clipboard, and printers |
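As a sketch for overriding these defaults with the classic (fall 2019) PowerShell module, assuming `Set-RdsHostPool` accepts a `-CustomRdpProperty` parameter of semicolon-separated RDP settings:

```powershell
# Sketch for the classic (fall 2019) RDS PowerShell module, assuming Set-RdsHostPool
# accepts -CustomRdpProperty. Enables audio capture redirection and disables printer
# redirection for every connection to the host pool.
Set-RdsHostPool -TenantName "<tenantname>" -Name "<hostpoolname>" `
    -CustomRdpProperty "audiocapturemode:i:1;redirectprinters:i:0;"
```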
virtual-desktop Diagnostics Role Service 2019 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/virtual-desktop-fall-2019/diagnostics-role-service-2019.md
The following table lists common errors your admins might run into.
|5001|SessionHostNotFound|The session host you queried might be offline. Check the host pool's status.| |5008|SessionHostUserSessionsExist |You must sign out all users on the session host before executing your intended management activity.| |6000|AppGroupNotFound|The application group name you entered doesn't match any existing application groups. Review the application group name for typos and try again.|
-|6022|RemoteAppNotFound|The RemoteApp name you entered doesn't match any RemoteApps. Review RemoteApp name for typos and try again.|
+|6022|RemoteAppNotFound|The RemoteApp name you entered doesn't match any application. Review RemoteApp name for typos and try again.|
|6010|PublishedItemsExist|The name of the resource you're trying to publish is the same as a resource that already exists. Change the resource name and try again.| |7002|NameNotValidWhiteSpace|Don't use white space in the name.| |8000|InvalidAuthorizationRoleScope|The role name you entered doesn't match any existing role names. Review the role name for typos and try again. |
virtual-desktop Environment Setup 2019 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/virtual-desktop-fall-2019/environment-setup-2019.md
>[!IMPORTANT] >This content applies to Azure Virtual Desktop (classic), which doesn't support Azure Resource Manager Azure Virtual Desktop objects. If you're trying to manage Azure Resource Manager Azure Virtual Desktop objects, see [this article](../environment-setup.md).
-Azure Virtual Desktop is a service that gives users easy and secure access to their virtualized desktops and RemoteApps. This topic will tell you a bit more about the general structure of the Azure Virtual Desktop environment.
+Azure Virtual Desktop is a service that gives users easy and secure access to their virtualized desktops and applications. This topic will tell you a bit more about the general structure of the Azure Virtual Desktop environment.
## Tenants
You can set additional properties on the host pool to change its load-balancing
An application group is a logical grouping of applications installed on session hosts in the host pool. An application group can be one of two types: -- RemoteApp, where users access the RemoteApps you individually select and publish to the application group
+- RemoteApp, where users access the applications you individually select and publish to the application group
- Desktop, where users access the full desktop
-By default, a desktop application group (named "Desktop Application Group") is automatically created whenever you create a host pool. You can remove this application group at any time. However, you can't create another desktop application group in the host pool while a desktop application group exists. To publish RemoteApps, you must create a RemoteApp application group. You can create multiple RemoteApp application groups to accommodate different worker scenarios. Different RemoteApp application groups can also contain overlapping RemoteApps.
+By default, a desktop application group (named "Desktop Application Group") is automatically created whenever you create a host pool. You can remove this application group at any time. However, you can't create another desktop application group in the host pool while a desktop application group exists. To publish an application, you must create a RemoteApp application group. You can create multiple RemoteApp application groups to accommodate different worker scenarios. Different RemoteApp application groups can also contain overlapping applications.
To publish resources to users, you must assign them to application groups. When assigning users to application groups, consider the following things:
virtual-desktop Manage App Groups 2019 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/virtual-desktop-fall-2019/manage-app-groups-2019.md
Add-RdsAccount -DeploymentUrl "https://rdbroker.wvd.microsoft.com"
4. Run the following cmdlet to install the application based on `AppAlias`. `AppAlias` becomes visible when you run the output from step 3. ```powershell
- New-RdsRemoteApp -TenantName <tenantname> -HostPoolName <hostpoolname> -AppGroupName <appgroupname> -Name <remoteappname> -AppAlias <appalias>
+ New-RdsRemoteApp -TenantName <tenantname> -HostPoolName <hostpoolname> -AppGroupName <appgroupname> -Name <RemoteAppName> -AppAlias <appalias>
``` 5. (Optional) Run the following cmdlet to publish a new RemoteApp program to the application group created in step 1. ```powershell
- New-RdsRemoteApp -TenantName <tenantname> -HostPoolName <hostpoolname> -AppGroupName <appgroupname> -Name <remoteappname> -Filepath <filepath> -IconPath <iconpath> -IconIndex <iconindex>
+ New-RdsRemoteApp -TenantName <tenantname> -HostPoolName <hostpoolname> -AppGroupName <appgroupname> -Name <RemoteAppName> -Filepath <filepath> -IconPath <iconpath> -IconIndex <iconindex>
``` 6. To verify that the app was published, run the following cmdlet.
virtual-desktop Publish Apps 2019 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/virtual-desktop-fall-2019/publish-apps-2019.md
To publish a built-in app:
3. Finally, run the following cmdlet with `<PackageFamilyName>` replaced by the **PackageFamilyName** you found in the previous step: ```powershell
- New-RdsRemoteApp <tenantname> <hostpoolname> <appgroupname> -Name <remoteappname> -FriendlyName <remoteappname> -FilePath "shell:appsFolder\<PackageFamilyName>!App"
+ New-RdsRemoteApp <tenantname> <hostpoolname> <appgroupname> -Name <RemoteAppName> -FriendlyName <RemoteAppName> -FilePath "shell:appsFolder\<PackageFamilyName>!App"
``` >[!NOTE]
After you publish an app, it will have the default Windows app icon instead of i
The process you use to publish Microsoft Edge is a little different from the publishing process for other apps. To publish Microsoft Edge with the default homepage, run this cmdlet: ```powershell
-New-RdsRemoteApp <tenantname> <hostpoolname> <appgroupname> -Name <remoteappname> -FriendlyName <remoteappname> -FilePath "shell:Appsfolder\Microsoft.MicrosoftEdge_8wekyb3d8bbwe!MicrosoftEdge"
+New-RdsRemoteApp <tenantname> <hostpoolname> <appgroupname> -Name <RemoteAppName> -FriendlyName <RemoteAppName> -FilePath "shell:Appsfolder\Microsoft.MicrosoftEdge_8wekyb3d8bbwe!MicrosoftEdge"
``` ## Next steps
virtual-desktop Tenant Setup Azure Active Directory https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/virtual-desktop-fall-2019/tenant-setup-azure-active-directory.md
> >Learn about how to create a host pool in Azure Virtual Desktop at [Tutorial: Create a host pool](../create-host-pools-azure-marketplace.md).
-Creating a tenant in Azure Virtual Desktop is the first step toward building your desktop virtualization solution. A tenant is a group of one or more host pools. Each host pool consists of multiple session hosts, running as virtual machines in Azure and registered to the Azure Virtual Desktop service. Each host pool also consists of one or more application groups that are used to publish remote desktop and remote application resources to users. With a tenant, you can build host pools, create application groups, assign users, and make connections through the service.
+Creating a tenant in Azure Virtual Desktop is the first step toward building your desktop virtualization solution. A tenant is a group of one or more host pools. Each host pool consists of multiple session hosts, running as virtual machines in Azure and registered to the Azure Virtual Desktop service. Each host pool also consists of one or more application groups that are used to publish desktop and application resources to users. With a tenant, you can build host pools, create application groups, assign users, and make connections through the service.
In this tutorial, learn how to:
virtual-desktop Troubleshoot Powershell 2019 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/virtual-desktop-fall-2019/troubleshoot-powershell-2019.md
Add-RdsAppGroupUser -TenantName <TenantName> -HostPoolName <HostPoolName> -AppGr
**Cause:** The username used has been already assigned to an application group of a different type. Users can't be assigned to both a remote desktop and RemoteApp application group under the same session host pool.
-**Fix:** If user needs both remote apps and remote desktop, create different host pools or grant user access to the remote desktop, which will permit the use of any application on the session host VM.
+**Fix:** If a user needs both a RemoteApp and a desktop, create different host pools, or grant the user access only to the remote desktop, which permits the use of any application on the session host VM.
### Error: Add-RdsAppGroupUser command -- The specified UserPrincipalName doesn't exist in the Azure Active Directory associated with the Remote Desktop tenant
virtual-desktop Troubleshoot Set Up Overview 2019 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/virtual-desktop-fall-2019/troubleshoot-set-up-overview-2019.md
Use the following table to identify and resolve issues you may encounter when se
| Connected but no feed | Troubleshoot using the [User connects but nothing is displayed (no feed)](troubleshoot-service-connection-2019.md#user-connects-but-nothing-is-displayed-no-feed) section of [Azure Virtual Desktop service connections](troubleshoot-service-connection-2019.md). <br> <br> If your users have been assigned to an application group, [open an Azure support request](https://azure.microsoft.com/support/create-ticket/), select **Azure Virtual Desktop** for the service, then select **Remote Desktop Clients** for the problem type. |
| Feed discovery problems due to the network | Your users need to contact their network administrator. |
| Connecting clients | See [Azure Virtual Desktop service connections](troubleshoot-service-connection-2019.md) and if that doesn't solve your issue, see [Session host virtual machine configuration](troubleshoot-vm-configuration-2019.md). |
-| Responsiveness of remote applications or desktop | If issues are tied to a specific application or product, contact the team responsible for that product. |
+| Responsiveness of applications or desktop | If issues are tied to a specific application or product, contact the team responsible for that product. |
| Licensing messages or errors | If issues are tied to a specific application or product, contact the team responsible for that product. |
| Issues when using Azure Virtual Desktop tools on GitHub (Azure Resource Manager templates, diagnostics tool, management tool) | See [Azure Resource Manager Templates for Remote Desktop Services](https://github.com/Azure/RDS-Templates/blob/master/README.md) to report issues. |
virtual-desktop Watermarking https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/watermarking.md
Here's a screenshot showing what watermarking looks like when it's enabled:
> [!IMPORTANT]
> - Once watermarking is enabled on a session host, only clients that support watermarking can connect to that session host. If you try to connect from an unsupported client, the connection fails with a generic error message.
>
-> - Watermarking is for remote desktops only. With remote apps, watermarking is not applied and the connection is allowed.
+> - Watermarking is for remote desktops only. With RemoteApp, watermarking is not applied and the connection is allowed.
> > - If you connect to a session host directly (not through Azure Virtual Desktop) using the Remote Desktop Connection app (`mstsc.exe`), watermarking is not applied and the connection is allowed.
virtual-desktop Whats New Client Windows https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/whats-new-client-windows.md
In this release, we've made the following changes:
- Added a page to the installer warning users running the client on Windows 7 that support for Windows 7 will end on January 10, 2023.
- Improved client logging, diagnostics, and error classification to help admins troubleshoot connection and feed issues.
- Updates to multimedia redirection (MMR) for Azure Virtual Desktop, including the following:
- - MMR now works on remote app browser and supports up to 30 sites. For more information, see [Understanding multimedia redirection for Azure Virtual Desktop](./multimedia-redirection-intro.md).
+ - MMR now works on a browser published as a RemoteApp and supports up to 30 sites. For more information, see [Understanding multimedia redirection for Azure Virtual Desktop](./multimedia-redirection-intro.md).
- MMR introduces better diagnostic tools with the new status icon and one-click Tracelog. For more information, see [Multimedia redirection for Azure Virtual Desktop](./multimedia-redirection.md). ## Updates for version 1.2.3497
In this release, we've made the following changes:
- Added the auto-update feature, which allows the client to install the latest updates automatically.
- The client now distinguishes between different feeds in the Connection Center.
- Fixed an issue where the subscription account doesn't match the account the user signed in with.
-- Fixed an issue where some users couldn't access remote apps through a downloaded file.
+- Fixed an issue where some users couldn't access a RemoteApp through a downloaded file.
- Fixed an issue with Smartcard redirection. ## Updates for version 1.2.1364
In this release, we've made the following changes:
- Added functionality to enable custom URL subscriptions for all users.
- Fixed an issue with app pinning on the feed taskbar.
- Fixed a crash when subscribing with URL.
-- Improved experience when dragging remote app windows with touch or pen.
+- Improved experience when dragging a RemoteApp window with touch or pen.
- Fixed an issue with localization. ## Updates for version 1.2.1186
virtual-desktop Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/whats-new.md
Title: What's new in Azure Virtual Desktop? - Azure
description: New features and product updates for Azure Virtual Desktop. Previously updated : 06/03/2023 Last updated : 07/18/2023
Make sure to check back here often to keep up with new updates.
> [!TIP] > See [What's new in documentation](whats-new-documentation.md), where we highlight new and updated articles for Azure Virtual Desktop.
+## July 2023
+
+Here's what changed in July 2023:
+
+### Autoscale for personal host pools is currently in preview
+
+Autoscale for personal host pools is now in preview. Autoscale lets you scale your session host virtual machines (VMs) in a host pool up or down according to a schedule to optimize deployment costs.
+
+To learn more about autoscale for personal host pools, see [Autoscale scaling plans and example scenarios in Azure Virtual Desktop](autoscale-scenarios.md).
+
+### Confidential VMs are now generally available in Azure Virtual Desktop
+
+Confidential VM and Trusted Launch security features for Azure Virtual Desktop host pool provisioning are now generally available.
+
+Azure confidential virtual machines (VMs) offer VM memory encryption with integrity protection, which strengthens guest protections to deny the hypervisor and other host management components code access to the VM memory and state. For more information about the security benefits of confidential VMs, see our [confidential computing documentation](../confidential-computing/confidential-vm-overview.md).
+
+Trusted Launch protects against advanced and persistent attack techniques. This feature allows you to securely deploy your VMs with verified boot loaders, OS kernels, and drivers. Trusted Launch also protects keys, certificates, and secrets in VMs. For more information about the benefits of Trusted Launch, see our [Trusted Launch documentation](../virtual-machines/trusted-launch.md). Trusted Launch is now enabled by default for all Windows images used with Azure Virtual Desktop.
+
+For more information about this announcement, see [Announcing General Availability of confidential VMs in Azure Virtual Desktop](https://techcommunity.microsoft.com/t5/azure-virtual-desktop-blog/announcing-general-availability-of-confidential-vms-in-azure/ba-p/3857974).
+
+### Private Link with Azure Virtual Desktop is now generally available
+
+Private Link with Azure Virtual Desktop allows users to establish secure connections to remote resources using private endpoints. With Private Link, traffic between your virtual network and the Azure Virtual Desktop service is routed through the Microsoft *backbone* network. This routing eliminates the need to expose your service to the public internet, enhancing the overall security of your infrastructure. By keeping traffic within this protected network, Private Link adds an extra layer of security for your Azure Virtual Desktop environment. For more information about Private Link, see [Azure Private Link with Azure Virtual Desktop](private-link-overview.md) or read [our blog post](https://techcommunity.microsoft.com/t5/azure-virtual-desktop-blog/announcing-the-general-availability-of-private-link-for-azure/ba-p/3874429).
+
+## June 2023
+
+Here's what changed in June 2023:
+
+### Azure Virtual Desktop Insights support for the Azure Monitor Agent now in preview
+
+Azure Virtual Desktop Insights is a dashboard built on Azure Monitor workbooks that helps IT professionals understand their Azure Virtual Desktop environments. Azure Virtual Desktop Insights support for the Azure Monitor Agent is now in preview. For more information, see [Use Azure Virtual Desktop Insights to monitor your deployment](insights.md?tabs=monitor).
+
+### Administrative template for Azure Virtual Desktop now available in Intune
+
+We've created an administrative template for Azure Virtual Desktop to help you configure certain features in Azure Virtual Desktop. This administrative template is now available in Intune, which enables you to centrally configure session hosts that are enrolled in Intune and either joined to Azure Active Directory (Azure AD) or hybrid Azure AD joined.
+
+For more information, see [Administrative template for Azure Virtual Desktop](administrative-template.md?tabs=intune).
+
+## May 2023
+
+Here's what changed in May 2023:
+
+### Custom image templates is now in preview
+
+Custom image templates is now in preview. Custom image templates help you easily create a custom image that you can use when deploying session host VMs. With custom images, you can standardize the configuration of your session host VMs for your organization. Custom image templates is built on [Azure Image Builder](../virtual-machines/image-builder-overview.md) and tailored for Azure Virtual Desktop. For more information about the preview, check out [Custom image templates](custom-image-templates.md) or read [our blog post](https://techcommunity.microsoft.com/t5/azure-virtual-desktop-blog/announcing-the-public-preview-of-azure-virtual-desktop-custom/ba-p/3784361).
+
## April 2023

Here's what changed in April 2023:

### Azure Virtual Desktop Store app for Windows in public preview
-The [Azure Virtual Desktop Store app for Windows](users/connect-windows-azure-virtual-desktop-app.md?toc=%2Fazure%2Fvirtual-desktop%2Ftoc.json&bc=%2Fazure%2Fvirtual-desktop%2Fbreadcrumb%2Ftoc.json) is now in public preview for Windows 10 and 11. With the Store App, you can now automatically update the client, unlike with the Remote Desktop client. You can also pin remote apps to your Start menu to personalize your desktop and reduce clutter.
+The [Azure Virtual Desktop Store app for Windows](users/connect-windows-azure-virtual-desktop-app.md?toc=%2Fazure%2Fvirtual-desktop%2Ftoc.json&bc=%2Fazure%2Fvirtual-desktop%2Fbreadcrumb%2Ftoc.json) is now in public preview for Windows 10 and 11. With the Store App, you can now automatically update the client, unlike with the Remote Desktop client. You can also pin a RemoteApp to your Start menu to personalize your desktop and reduce clutter.
For more information about the public preview release version, check out [Use features of the Azure Virtual Desktop Store app for Windows when connecting to Azure Virtual Desktop (preview)](users/client-features-windows.md?toc=%2Fazure%2Fvirtual-desktop%2Ftoc.json&bc=%2Fazure%2Fvirtual-desktop%2Fbreadcrumb%2Ftoc.json), [What's new in the Azure Virtual Desktop Store App (preview)](whats-new-client-windows-azure-virtual-desktop-app.md), or read [our blog post](https://techcommunity.microsoft.com/t5/azure-virtual-desktop-blog/announcing-public-preview-of-the-new-azure-virtual-desktop-app/ba-p/3785698).
Windows 10 and 11 22H2 Enterprise and Enterprise multi-session images are now vi
### Uniform Resource Identifier Schemes in public preview
-Uniform Resource Identifier (URI) schemes with the Remote Desktop client for Azure Virtual Desktop is now in public preview. This new feature lets you subscribe to a workspace or connect to a particular desktop or Remote App using URI schemes. URI schemes also provide fast and efficient end-user connection to Azure Virtual Desktop resources. For more information, see [our blog post](https://techcommunity.microsoft.com/t5/azure-virtual-desktop-blog/announcing-the-public-preview-of-uniform-resource-identifier/ba-p/3763075) and [URI schemes with the Remote Desktop client for Azure Virtual Desktop (preview)](uri-scheme.md).
+Uniform Resource Identifier (URI) schemes with the Remote Desktop client for Azure Virtual Desktop is now in public preview. This new feature lets you subscribe to a workspace or connect to a particular desktop or RemoteApp using URI schemes. URI schemes also provide fast and efficient end-user connection to Azure Virtual Desktop resources. For more information, see [our blog post](https://techcommunity.microsoft.com/t5/azure-virtual-desktop-blog/announcing-the-public-preview-of-uniform-resource-identifier/ba-p/3763075) and [URI schemes with the Remote Desktop client for Azure Virtual Desktop (preview)](uri-scheme.md).
### Azure Virtual Desktop Insights at Scale now generally available
The ability to collect graphics data for your Azure Virtual Desktop connections
### Multimedia redirection enhancements now in public preview
-An upgraded version of multimedia redirection (MMR) for Azure Virtual Desktop is now in public preview. We've made various improvements to this version, including more supported websites, remote app browser support, and enhancements to media controls for better clarity and one-click tracing. Learn more at [Use multimedia redirection on Azure Virtual Desktop (preview)](multimedia-redirection.md) and [our blog post](https://techcommunity.microsoft.com/t5/azure-virtual-desktop/new-multimedia-redirection-upgrades-on-azure-virtual-desktop-are/m-p/3639520).
+An upgraded version of multimedia redirection (MMR) for Azure Virtual Desktop is now in public preview. We've made various improvements to this version, including more supported websites, RemoteApp browser support, and enhancements to media controls for better clarity and one-click tracing. Learn more at [Use multimedia redirection on Azure Virtual Desktop (preview)](multimedia-redirection.md) and [our blog post](https://techcommunity.microsoft.com/t5/azure-virtual-desktop/new-multimedia-redirection-upgrades-on-azure-virtual-desktop-are/m-p/3639520).
### Grouping costs by Azure Virtual Desktop host pool now in public preview
This feature offers a streamlined onboarding experience in the Azure portal to s
The start VM on connect feature is now generally available. This feature helps you optimize costs by letting you turn off deallocated or stopped VMs, letting your deployment be flexible with user demands. For more information, see [Start Virtual Machine on Connect](start-virtual-machine-connect.md).
-### Remote app streaming documentation
+### RemoteApp streaming
-We recently announced a new pricing option for remote app streaming for using Azure Virtual Desktop to deliver apps as a service to your customers and business partners. For example, software vendors can use remote app streaming to deliver apps as a software as a service (SaaS) solution that's accessible to their customers. To learn more about remote app streaming, check out [our documentation](./remote-app-streaming/overview.md).
+We recently announced a new pricing option for RemoteApp streaming for using Azure Virtual Desktop to deliver apps as a service to your customers and business partners. For example, software vendors can use RemoteApp streaming to deliver apps as a software as a service (SaaS) solution that's accessible to their customers. To learn more about RemoteApp streaming, check out [our documentation](./remote-app-streaming/overview.md).
-From July 14, 2021 to December 31, 2021, we're giving customers who use remote app streaming a promotional offer that lets their business partners and customers access Azure Virtual Desktop for no charge. This offer only applies to external user access rights. Regular billing will resume on January 1, 2022. In the meantime, you can continue to use your existing Windows license entitlements found in licenses like Microsoft 365 E3 or Windows E3. To learn more about this offer, see the [Azure Virtual Desktop pricing page](https://azure.microsoft.com/pricing/details/virtual-desktop/).
### New Azure Virtual Desktop handbooks
Here's what changed in June 2021:
### Windows Virtual Desktop is now Azure Virtual Desktop
-To better align with our vision of a flexible cloud desktop and remote application platform, we've renamed Windows Virtual Desktop to Azure Virtual Desktop. Learn more at [the announcement post in our blog](https://azure.microsoft.com/blog/azure-virtual-desktop-the-desktop-and-app-virtualization-platform-for-the-hybrid-workplace/).
+To better align with our vision of a flexible cloud desktop and application platform, we've renamed Windows Virtual Desktop to Azure Virtual Desktop. Learn more at [the announcement post in our blog](https://azure.microsoft.com/blog/azure-virtual-desktop-the-desktop-and-app-virtualization-platform-for-the-hybrid-workplace/).
### EU, UK, and Canada geographies are now generally available
Here's what this change does for you:
- Azure Virtual Desktop is now integrated with the Azure portal. This means you can manage everything directly in the portal, no PowerShell, web apps, or third-party tools required. To get started, check out our tutorial at [Create a host pool with the Azure portal](create-host-pools-azure-marketplace.md). -- Before this update, you could only publish RemoteApps and Desktops to individual users. With Azure Resource Manager, you can now publish resources to Azure Active Directory groups.
+- Before this update, you could only publish desktops and applications to individual users. With Azure Resource Manager, you can now publish resources to Azure Active Directory groups.
- The earlier version of Azure Virtual Desktop had four built-in admin roles that you could assign to a tenant or host pool. These roles are now in [Azure role-based access control (Azure RBAC)](../role-based-access-control/overview.md). You can apply these roles to every Azure Virtual Desktop Azure Resource Manager object, which lets you have a full, rich delegation model.
virtual-machine-scale-sets Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machine-scale-sets/policy-reference.md
Previously updated : 07/06/2023 Last updated : 07/18/2023 # Azure Policy built-in definitions for Azure Virtual Machine Scale Sets
virtual-machine-scale-sets Virtual Machine Scale Sets Automatic Upgrade https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machine-scale-sets/virtual-machine-scale-sets-automatic-upgrade.md
Automatic OS image upgrade is supported for custom images deployed through [Azur
- The new image version should not be excluded from the latest version for that gallery image. Image versions excluded from the gallery image's latest version are not rolled out to the scale set through automatic OS image upgrade. > [!NOTE]
->It can take up to 3 hours for a scale set to trigger the first image upgrade rollout after the scale set is first configured for automatic OS upgrades. This is a one-time delay per scale set. Subsequent image rollouts are triggered on the scale set within 30-60 minutes.
+> It can take up to 3 hours for a scale set to trigger the first image upgrade rollout after the scale set is first configured for automatic OS upgrades, due to factors such as maintenance windows or other restrictions. Customers on the latest image may not get an upgrade until a new image version is available.
## Configure automatic OS image upgrade
virtual-machines Backup Recovery https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/backup-recovery.md
Learn more about [working with VM restore points](virtual-machines-create-restor
## Next steps You can try out Azure Backup by following the [Azure Backup quickstart](../backup/quick-backup-vm-portal.md).
-You can also plan and implement reliability for your virtual machine configuration. For more information see [Virtual Machine Reliability](./virtual-machines-reliability.md).
+You can also plan and implement reliability for your virtual machine configuration. For more information, see [Virtual Machine Reliability](./reliability-virtual-machines.md).
virtual-machines Concepts Restore Points https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/concepts-restore-points.md
The following table summarizes the support matrix for VM restore points.
**VMs using Accelerated Networking** | Yes
**Minimum Frequency at which App consistent restore point can be taken** | 3 hours
**Minimum Frequency at which [crash consistent restore points (preview)](https://github.com/Azure/Virtual-Machine-Restore-Points/tree/main/Crash%20consistent%20VM%20restore%20points%20(preview)) can be taken** | 1 hour
+**API version for Application consistent restore point** | 2021-03-01 or later
+**API version for Crash consistent restore point (in preview)** | 2021-07-01 or later
> [!Note]
> Restore points (application consistent or crash consistent) can be created at the minimum supported frequency listed above. Attempting to take restore points more frequently than supported results in failure.
-## Operating system support
+## Operating system support for application consistency
### Windows
For Azure VM Linux VMs, restore points support the list of Linux [distributions
- Other bring-your-own Linux distributions might work as long as the [Azure VM agent for Linux](../virtual-machines/extensions/agent-linux.md) is available on the VM, and as long as Python is supported.
- Restore points don't support a proxy-configured Linux VM if it doesn't have Python version 2.7 or higher installed.
- Restore points don't back up NFS files that are mounted from storage, or from any other NFS server, to Linux or Windows machines. They only back up disks that are locally attached to the VM.
+
+## Operating system support for crash consistency (in preview)
+
+- All operating systems are supported.
## Other limitations
- Restore points are supported only for managed disks.
- Ultra-disks, Ephemeral OS disks, and Shared disks aren't supported.
-- Restore points APIs require an API of version 2021-03-01 or later.
+- Restore points APIs require API version 2021-03-01 or later for application consistency.
+- Restore points APIs require API version 2021-07-01 or later for crash consistency (in preview).
- A maximum of 500 VM restore points can be retained at any time for a VM, irrespective of the number of restore point collections.
- Concurrent creation of restore points for a VM isn't supported.
- Movement of Virtual Machines (VM) between Resource Groups (RG), or Subscriptions isn't supported when the VM has restore points. Moving the VM between Resource Groups or Subscriptions won't update the source VM reference in the restore point and will cause a mismatch of Azure Resource Manager (ARM) IDs between the actual VM and the restore points.
virtual-machines Create Restore Points https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/create-restore-points.md
To create a restore point collection, call the restore point collection's [Creat
### Step 2: Create a VM restore point
-After you create the restore point collection, the next step is to create a VM restore point within the restore point collection. For more information about restore point creation, see the [Restore Points - Create](/rest/api/compute/restore-points/create) API documentation.
+After you create the restore point collection, the next step is to create a VM restore point within the restore point collection. For more information about restore point creation, see the [Restore Points - Create](/rest/api/compute/restore-points/create) API documentation. To create a crash consistent restore point (in preview), set the `consistencyMode` property to `CrashConsistent` in the creation request.
> [!TIP]
> To save space and costs, you can exclude any disk from either local region or cross-region VM restore points. To exclude a disk, add its identifier to the `excludeDisks` property in the request body.
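As an illustration only, here's a minimal sketch of such a creation request issued through `az rest`. The subscription, resource group, collection, restore point, and disk names are placeholders, and the body shape should be verified against the Restore Points - Create API reference before use.

```azurecli
# Minimal sketch (placeholder names): create a crash consistent restore point
# and exclude one disk from it. Verify the request schema against the
# Restore Points - Create API reference.
az rest --method put \
  --uri "https://management.azure.com/subscriptions/<subId>/resourceGroups/<rgName>/providers/Microsoft.Compute/restorePointCollections/<rpCollectionName>/restorePoints/<rpName>?api-version=2021-07-01" \
  --body '{
    "properties": {
      "consistencyMode": "CrashConsistent",
      "excludeDisks": [
        { "id": "/subscriptions/<subId>/resourceGroups/<rgName>/providers/Microsoft.Compute/disks/<diskToExclude>" }
      ]
    }
  }'
```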
Restore point creation in your local region will be completed within a few secon
## Get restore point copy or replication status
-Creation of a cross-region VM restore point is a long running operation. The VM restore point can be used to restore a VM only after the operation is completed for all disk restore points. To track the operation's status, call the [Restore Point - Get](/rest/api/compute/restore-points/get) API on the target VM restore point and include the `instanceView` parameter. The return will include the percentage of data that has been copied at the time of the request.
+Copying the first VM restore point to another region is a long running operation. The VM restore point can be used to restore a VM only after the operation is completed for all disk restore points. To track the operation's status, call the [Restore Point - Get](/rest/api/compute/restore-points/get) API on the target VM restore point and include the `instanceView` parameter. The return will include the percentage of data that has been copied at the time of the request.
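As a rough sketch, polling the target restore point with the instance view expanded could look like the following; the names are placeholders and the exact query syntax (`$expand=instanceView`) is an assumption to confirm against the Restore Point - Get reference.

```azurecli
# Sketch (placeholder names): read the target restore point and expand its
# instance view to check the copy progress of each disk restore point.
az rest --method get \
  --uri 'https://management.azure.com/subscriptions/<subId>/resourceGroups/<rgName>/providers/Microsoft.Compute/restorePointCollections/<rpCollectionName>/restorePoints/<rpName>?$expand=instanceView&api-version=2021-07-01'
```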
During restore point creation, the `ProvisioningState` will appear as `Creating` in the response. If creation fails, `ProvisioningState` is set to `Failed`.
virtual-machines Azure Hybrid Benefit Linux https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/linux/azure-hybrid-benefit-linux.md
Title: Azure Hybrid Benefit for Linux virtual machines description: Learn how Azure Hybrid Benefit can save you money on Linux virtual machines. -+
Last updated 05/02/2023-+
az vm update -g myResourceGroup -n myVmName --license-type SLES_BYOS
#### Convert to PAYG using the Azure CLI
-To return a VM to a PAYG model, use a `--license-type` value of `None`:
+If the system was originally a PAYG image and you want to return the VM to a PAYG model, use a `--license-type` value of `None`. For example:
```azurecli # This will enable PAYG on a virtual machine using Azure Hybrid Benefit az vm update -g myResourceGroup -n myVmName --license-type None ```
+If you have a BYOS image and want to convert the VM to PAYG, use a `--license-type` value that covers your VM's needs, as described further in this article. For example, for RHEL systems you can use any of the following: RHEL_BASE, RHEL_EUS, RHEL_SAPAPPS, RHEL_SAPHA, RHEL_BASESAPAPPS or RHEL_BASESAPHA.
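For example, the following sketch converts a BYOS RHEL VM to PAYG billing using one of the license types above; the resource and VM names are illustrative.

```azurecli
# Sketch (illustrative names): convert a BYOS RHEL VM to PAYG billing
az vm update -g myResourceGroup -n myVmName --license-type RHEL_BASE
```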
+ #### Convert multiple VM license models simultaneously using the Azure CLI To switch the licensing model on a large number of virtual machines, you can use the `--ids` parameter in the Azure CLI:
virtual-machines Manage Restore Points https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/manage-restore-points.md
To track the status of the copy operation, follow the guidance in the [Get resto
## Get restore point copy or replication status
-Creation of a cross-region VM restore point is a long running operation. The VM restore point can be used to restore a VM only after the operation is completed for all disk restore points. To track the operation's status, call the [Restore Point - Get](/rest/api/compute/restore-points/get) API on the target VM restore point and include the `instanceView` parameter. The return will include the percentage of data that has been copied at the time of the request.
+Copying the first VM restore point to another region is a long running operation. The VM restore point can be used to restore a VM only after the operation is completed for all disk restore points. To track the operation's status, call the [Restore Point - Get](/rest/api/compute/restore-points/get) API on the target VM restore point and include the `instanceView` parameter. The return will include the percentage of data that has been copied at the time of the request.
During restore point creation, the `ProvisioningState` will appear as `Creating` in the response. If creation fails, `ProvisioningState` is set to `Failed`.
virtual-machines Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/policy-reference.md
Title: Built-in policy definitions for Azure Virtual Machines description: Lists Azure Policy built-in policy definitions for Azure Virtual Machines. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 07/06/2023 Last updated : 07/18/2023
virtual-machines Reliability Virtual Machines https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/reliability-virtual-machines.md
+
+ Title: Reliability in Azure Virtual Machines
+description: Find out about reliability in Azure Virtual Machines
+++++ Last updated : 07/18/2023++
+# Reliability in Virtual Machines
+
+This article contains [specific reliability recommendations for Virtual Machines](#reliability-recommendations), as well as detailed information on VM regional resiliency with [availability zones](#availability-zone-support) and [cross-region resiliency with disaster recovery](#disaster-recovery-cross-region-failover).
+
+For an architectural overview of reliability in Azure, see [Azure reliability](/azure/architecture/framework/resiliency/overview).
++
+## Reliability recommendations
+
+
+### Reliability recommendations summary
+
+| Category | Priority |Recommendation |
+||--||
| [**High Availability**](#high-availability) |:::image type="icon" source="../reliability/media/icon-recommendation-high.svg":::| [VM-1: Run production workloads on two or more VMs using Azure Virtual Machine Scale Sets (VMSS) Flex](#-vm-1-run-production-workloads-on-two-or-more-vms-using-vmss-flex) |
+||:::image type="icon" source="../reliability/media/icon-recommendation-high.svg"::: |[VM-2: Deploy VMs across availability zones or use VMSS Flex with zones](#-vm-2-deploy-vms-across-availability-zones-or-use-vmss-flex-with-zones) |
+||:::image type="icon" source="../reliability/media/icon-recommendation-high.svg":::|[VM-3: Migrate VMs using availability sets to VMSS Flex](#-vm-3-migrate-vms-using-availability-sets-to-vmss-flex) |
+||:::image type="icon" source="../reliability/media/icon-recommendation-high.svg"::: |[VM-5: Use managed disks for VM disks](#-vm-5-use-managed-disks-for-vm-disks)|
|[**Disaster Recovery**](#disaster-recovery)| :::image type="icon" source="../reliability/media/icon-recommendation-medium.svg"::: |[VM-4: Replicate VMs using Azure Site Recovery](#-vm-4-replicate-vms-using-azure-site-recovery) |
+||:::image type="icon" source="../reliability/media/icon-recommendation-medium.svg"::: |[VM-7: Backup data on your VMs with Azure Backup service](#-vm-7-backup-data-on-your-vms-with-azure-backup-service) |
+|[**Performance**](#performance) |:::image type="icon" source="../reliability/media/icon-recommendation-low.svg"::: | [VM-6: Host application and database data on a data disk](#-vm-6-host-application-and-database-data-on-a-data-disk)|
+||:::image type="icon" source="../reliability/media/icon-recommendation-high.svg"::: | [VM-8: Production VMs should be using SSD disks](#-vm-8-production-vms-should-be-using-ssd-disks)|
+||:::image type="icon" source="../reliability/media/icon-recommendation-medium.svg"::: |[VM-10: Enable Accelerated Networking (AccelNet)](#-vm-10-enable-accelerated-networking-accelnet) |
||:::image type="icon" source="../reliability/media/icon-recommendation-low.svg"::: |[VM-11: When AccelNet is enabled, you must manually update the GuestOS NIC driver every six months](#-vm-11-when-accelnet-is-enabled-you-must-manually-update-the-guestos-nic-driver) |
+|[**Management**](#management)|:::image type="icon" source="../reliability/media/icon-recommendation-low.svg"::: |[VM-9: Watch for VMs in Stopped state](#-vm-9-review-vms-in-stopped-state) |
+||:::image type="icon" source="../reliability/media/icon-recommendation-high.svg"::: |[VM-22: Use maintenance configurations for the VM](#-vm-22-use-maintenance-configurations-for-the-vm) |
+|[**Security**](#security)|:::image type="icon" source="../reliability/media/icon-recommendation-medium.svg"::: |[VM-12: VMs should not have a Public IP directly associated](#-vm-12-vms-should-not-have-a-public-ip-directly-associated) |
+||:::image type="icon" source="../reliability/media/icon-recommendation-low.svg"::: |[VM-13: Virtual Network Interfaces have an NSG associated](#-vm-13-vm-network-interfaces-have-a-network-security-group-nsg-associated) |
+||:::image type="icon" source="../reliability/media/icon-recommendation-medium.svg"::: |[VM-14: IP Forwarding should only be enabled for Network Virtual Appliances](#-vm-14-ip-forwarding-should-only-be-enabled-for-network-virtual-appliances) |
+||:::image type="icon" source="../reliability/media/icon-recommendation-low.svg"::: |[VM-17: Network access to the VM disk should be set to "Disable public access and enable private access"](#-vm-17-network-access-to-the-vm-disk-should-be-set-to-disable-public-access-and-enable-private-access) |
+||:::image type="icon" source="../reliability/media/icon-recommendation-medium.svg"::: |[VM-19: Enable disk encryption and data at rest encryption by default](#-vm-19-enable-disk-encryption-and-data-at-rest-encryption-by-default) |
+|[**Networking**](#networking) | :::image type="icon" source="../reliability/media/icon-recommendation-low.svg"::: |[VM-15: Customer DNS Servers should be configured in the Virtual Network level](#-vm-15-dns-servers-should-be-configured-in-the-virtual-network-level) |
+|[**Storage**](#storage) |:::image type="icon" source="../reliability/media/icon-recommendation-medium.svg"::: |[VM-16: Shared disks should only be enabled in clustered servers](#-vm-16-shared-disks-should-only-be-enabled-in-clustered-servers) |
+|[**Compliance**](#compliance)| :::image type="icon" source="../reliability/media/icon-recommendation-low.svg"::: |[VM-18: Ensure that your VMs are compliant with Azure Policies](#-vm-18-ensure-that-your-vms-are-compliant-with-azure-policies) |
+|[**Monitoring**](#monitoring)| :::image type="icon" source="../reliability/media/icon-recommendation-low.svg"::: |[VM-20: Enable VM Insights](#-vm-20-enable-vm-insights) |
+||:::image type="icon" source="../reliability/media/icon-recommendation-low.svg"::: |[VM-21: Configure diagnostic settings for all Azure resources](#-vm-21-configure-diagnostic-settings-for-all-azure-resources) |
++
+### High availability
+
+#### :::image type="icon" source="../reliability/media/icon-recommendation-high.svg"::: **VM-1: Run production workloads on two or more VMs using VMSS Flex**
+
+To safeguard application workloads from downtime due to the temporary unavailability of a disk or VM, it's recommended that you run production workloads on two or more VMs using VMSS Flex.
+
+To achieve this, you can use the following (a CLI sketch follows the list):
+
+- [Azure Virtual Machine Scale Sets](/azure/virtual-machine-scale-sets/overview) to create and manage a group of load balanced VMs. The number of VM instances can automatically increase or decrease in response to demand or a defined schedule.
+- **Availability zones**. For more information on availability zones and VMs, see [Availability zone support](#availability-zone-support).
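A minimal sketch of creating a Flexible scale set that spreads instances across three zones is shown below; the resource names, image alias, instance count, and zone list are illustrative assumptions, not a prescribed configuration.

```azurecli
# Sketch (illustrative names/values): create a VMSS Flex deployment that spreads
# three VMs across availability zones 1, 2, and 3.
az vmss create \
  --resource-group myResourceGroup \
  --name myScaleSet \
  --orchestration-mode Flexible \
  --image Ubuntu2204 \
  --instance-count 3 \
  --zones 1 2 3 \
  --admin-username azureuser \
  --generate-ssh-keys
```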
++
+# [Azure Resource Graph](#tab/graph)
++
+-
+
+#### :::image type="icon" source="../reliability/media/icon-recommendation-high.svg"::: **VM-2: Deploy VMs across availability zones or use VMSS Flex with zones**
+
+When you create your VMs, use availability zones to protect your applications and data against unlikely datacenter failure. For more information about availability zones for VMs, see [Availability zone support](#availability-zone-support) in this document.
+
+For information on how to enable availability zones support when you create your VM, see [create availability zone support](#create-a-resource-with-availability-zone-enabled).
+
+For information on how to migrate your existing VMs to availability zone support, see [Availability zone support redeployment and migration](#availability-zone-redeployment-and-migration).
++
+# [Azure Resource Graph](#tab/graph)
++
+-
++
+#### :::image type="icon" source="../reliability/media/icon-recommendation-high.svg"::: **VM-3: Migrate VMs using availability sets to VMSS Flex**
+
+Availability sets will be retired in the near future. Modernize your workloads by migrating them from VMs to VMSS Flex.
+
+With VMSS Flex, you can deploy your VMs in one of two ways:
+
+- Across zones
+- In the same zone, but across fault domains (FDs) and update domains (UD) automatically.
+
+In an N-tier application, it's recommended that you place each application tier into its own VMSS Flex.
+
+# [Azure Resource Graph](#tab/graph)
++
+-
++
+#### :::image type="icon" source="../reliability/media/icon-recommendation-high.svg"::: **VM-5: Use managed disks for VM disks**
+
+To provide better reliability for VMs in an availability set, use managed disks. Managed disks are sufficiently isolated from each other to avoid single points of failure. Also, managed disks aren't subject to the IOPS limits of VHDs created in a storage account.
++
+# [Azure Resource Graph](#tab/graph)
++++
+### Disaster recovery
+
+#### :::image type="icon" source="../reliability/media/icon-recommendation-low.svg"::: **VM-4: Replicate VMs using Azure Site Recovery**
+When you replicate Azure VMs using Site Recovery, all the VM disks are continuously replicated to the target region asynchronously. The recovery points are created every few minutes. This gives you a Recovery Point Objective (RPO) in the order of minutes. You can conduct disaster recovery drills as many times as you want, without affecting the production application or the ongoing replication.
+
+To learn how to run a disaster recovery drill, see [Run a test failover](/azure/site-recovery/site-recovery-test-failover-to-azure).
++
+# [Azure Resource Graph](#tab/graph)
++++
+#### :::image type="icon" source="../reliability/media/icon-recommendation-medium.svg"::: **VM-7: Backup data on your VMs with Azure Backup service**
+
+The Azure Backup service provides simple, secure, and cost-effective solutions to back up your data and recover it from the Microsoft Azure cloud. For more information, see [What is the Azure Backup Service](/azure/backup/backup-overview).
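As a rough sketch, assuming you already have a Recovery Services vault with its default policy, enabling backup for a VM from the CLI could look like this (names are illustrative):

```azurecli
# Sketch (illustrative names; assumes an existing Recovery Services vault)
az backup protection enable-for-vm \
  --resource-group myResourceGroup \
  --vault-name myRecoveryServicesVault \
  --vm myVM \
  --policy-name DefaultPolicy
```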
+
+# [Azure Resource Graph](#tab/graph)
++++
+### Performance
+
+#### :::image type="icon" source="../reliability/media/icon-recommendation-low.svg"::: **VM-6: Host application and database data on a data disk**
+
+A data disk is a managed disk that's attached to a VM. Use the data disk to store application data, or other data you need to keep. Data disks are registered as SCSI drives and are labeled with a letter that you choose. Hosting your data on a data disk makes it easy to back up or restore your data. You can also migrate the disk without having to move the entire VM and operating system. Also, you'll be able to select a different disk SKU, with different type, size, and performance that meet your requirements. For more information on data disks, see [Data Disks](/azure/virtual-machines/managed-disks-overview#data-disk).
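For instance, here's a minimal sketch that creates and attaches a new data disk to an existing VM; the names and size are illustrative assumptions.

```azurecli
# Sketch (illustrative names/values): create and attach a new 128 GiB data disk
az vm disk attach \
  --resource-group myResourceGroup \
  --vm-name myVM \
  --name myDataDisk \
  --new \
  --size-gb 128
```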
+
+# [Azure Resource Graph](#tab/graph)
+++++
+#### :::image type="icon" source="../reliability/media/icon-recommendation-high.svg"::: **VM-8: Production VMs should be using SSD disks**
++
+Premium SSD disks offer high-performance, low-latency disk support for I/O-intensive applications and production workloads. Standard SSD Disks are a cost-effective storage option optimized for workloads that need consistent performance at lower IOPS levels.
+
+It is recommended that you:
+
+- Use Standard HDD disks for Dev/Test scenarios and less critical workloads at lowest cost.
+- Use Premium SSD disks instead of Standard HDD disks with your premium-capable VMs. For any Single Instance VM using premium storage for all Operating System Disks and Data Disks, Azure guarantees VM connectivity of at least 99.9%.
+
+If you want to upgrade from Standard HDD to Premium SSD disks, consider the following issues:
+
+- Upgrading requires a VM reboot and this process takes 3-5 minutes to complete.
+- If VMs are mission-critical production VMs, evaluate the improved availability against the cost of premium disks.
+
+
+For more information on Azure managed disks and disks types, see [Azure managed disk types](/azure/virtual-machines/disks-types#premium-ssd).
+++
+# [Azure Resource Graph](#tab/graph)
++++
+#### :::image type="icon" source="../reliability/media/icon-recommendation-medium.svg"::: **VM-10: Enable Accelerated Networking (AccelNet)**
+
+AccelNet enables single root I/O virtualization (SR-IOV) to a VM, greatly improving its networking performance. This high-performance path bypasses the host from the data path, which reduces latency, jitter, and CPU utilization for the most demanding network workloads on supported VM types.
+
+For more information on Accelerated Networking, see [Accelerated Networking](/azure/virtual-network/accelerated-networking-overview?tabs=redhat).
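As a sketch, enabling Accelerated Networking on an existing network interface might look like the following; the names are illustrative, the attached VM size must support AccelNet, and depending on the VM's state you may need to deallocate it first.

```azurecli
# Sketch (illustrative names): enable Accelerated Networking on an existing NIC.
# The VM size must support AccelNet; you may need to deallocate the VM first.
az network nic update \
  --resource-group myResourceGroup \
  --name myNic \
  --accelerated-networking true
```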
++
+# [Azure Resource Graph](#tab/graph)
+++++
+#### :::image type="icon" source="../reliability/media/icon-recommendation-low.svg"::: **VM-11: When AccelNet is enabled, you must manually update the GuestOS NIC driver**
+
+When AccelNet is enabled, the default Azure Virtual Network interface in the GuestOS is replaced with a Mellanox interface. As a result, the GuestOS NIC driver is provided by Mellanox, a third-party vendor. Although Marketplace images maintained by Microsoft are offered with the latest version of Mellanox drivers, once the VM is deployed, you'll need to manually update the GuestOS NIC driver every six months.
+
+# [Azure Resource Graph](#tab/graph)
++++
+### Management
+
+#### :::image type="icon" source="../reliability/media/icon-recommendation-low.svg"::: **VM-9: Review VMs in stopped state**
+VM instances go through different states, including provisioning and power states. If a VM is in a stopped state, the VM may be facing an issue or is no longer necessary and could be removed to help reduce costs.
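As an example, the following sketch lists VMs whose power state is stopped or deallocated so you can review them; the query relies on the fields returned by `--show-details`, and the exact power state strings are an assumption to verify in your environment.

```azurecli
# Sketch: list VMs that are stopped or deallocated across the current subscription
az vm list --show-details \
  --query "[?powerState=='VM stopped' || powerState=='VM deallocated'].{Name:name, ResourceGroup:resourceGroup, PowerState:powerState}" \
  --output table
```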
+
+# [Azure Resource Graph](#tab/graph)
++++
+#### :::image type="icon" source="../reliability/media/icon-recommendation-high.svg"::: **VM-22: Use maintenance configurations for the VM**
+
+To ensure that VM updates/interruptions are done in a planned time frame, use maintenance configuration settings to schedule and manage updates. For more information on managing VM updates with maintenance configurations, see [Managing VM updates with Maintenance Configurations](maintenance-configurations.md).
++
+# [Azure Resource Graph](#tab/graph)
++++
+### Security
+
+#### :::image type="icon" source="../reliability/media/icon-recommendation-medium.svg"::: **VM-12: VMs should not have a Public IP directly associated**
+
+If a VM requires outbound internet connectivity, it's recommended that you use NAT Gateway or Azure Firewall. NAT Gateway or Azure Firewall help to increase security and resiliency of the service, since both services have much higher availability and [Source Network Address Translation (SNAT)](/azure/load-balancer/load-balancer-outbound-connections) ports. For inbound internet connectivity, it's recommended that you use a load balancing solution such as Azure Load Balancer and Application Gateway.
++
+# [Azure Resource Graph](#tab/graph)
+++++
+#### :::image type="icon" source="../reliability/media/icon-recommendation-low.svg"::: **VM-13: VM network interfaces have a Network Security Group (NSG) associated**
+
+It's recommended that you associate an NSG with a subnet, or a network interface, but not both. Since rules in an NSG associated with a subnet can conflict with rules in an NSG associated with a network interface, you can have unexpected communication problems that require troubleshooting. For more information, see [Intra-Subnet traffic](/azure/virtual-network/network-security-group-how-it-works#intra-subnet-traffic).
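For example, here's a sketch of associating an existing NSG with a network interface (choose either the NIC or its subnet, not both); the names are illustrative.

```azurecli
# Sketch (illustrative names): attach an existing NSG to a NIC.
# Alternatively, associate the NSG with the subnet instead (not both).
az network nic update \
  --resource-group myResourceGroup \
  --name myNic \
  --network-security-group myNSG
```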
++
+# [Azure Resource Graph](#tab/graph)
+++++
+#### :::image type="icon" source="../reliability/media/icon-recommendation-medium.svg"::: **VM-14: IP forwarding should only be enabled for network virtual appliances**
+
+IP forwarding enables the virtual machine network interface to:
+
+- Receive network traffic not destined for one of the IP addresses assigned to any of the IP configurations assigned to the network interface.
+
+- Send network traffic with a different source IP address than the one assigned to one of a network interface's IP configurations.
+
+The IP forwarding setting must be enabled for every network interface that's attached to the VM receiving traffic to be forwarded. A VM can forward traffic whether it has multiple network interfaces, or a single network interface attached to it. While IP forwarding is an Azure setting, the VM must also run an application that's able to forward the traffic, such as firewall, WAN optimization, and load balancing applications.
+
+To learn how to enable or disable IP forwarding, see [Enable or disable IP forwarding](/azure/virtual-network/virtual-network-network-interface?tabs=azure-portal#enable-or-disable-ip-forwarding).
++
+# [Azure Resource Graph](#tab/graph)
+++++
+#### :::image type="icon" source="../reliability/media/icon-recommendation-low.svg"::: **VM-17: Network access to the VM disk should be set to "Disable public access and enable private access"**
+
+It's recommended that you set VM disk network access to "Disable public access and enable private access" and create a private endpoint. To learn how to create a private endpoint, see [Create a private endpoint](/azure/virtual-machines/disks-enable-private-links-for-import-export-portal#create-a-private-endpoint).
++
+# [Azure Resource Graph](#tab/graph)
+++++
+#### :::image type="icon" source="../reliability/media/icon-recommendation-medium.svg"::: **VM-19: Enable disk encryption and data at rest encryption by default**
+
+There are several types of encryption available for your managed disks, including Azure Disk Encryption (ADE), Server-Side Encryption (SSE) and encryption at host.
+
+- Azure Disk Encryption helps protect and safeguard your data to meet your organizational security and compliance commitments.
+- Azure Disk Storage Server-Side Encryption (also referred to as encryption-at-rest or Azure Storage encryption) automatically encrypts data stored on Azure managed disks (OS and data disks) when persisting on the Storage Clusters.
+- Encryption at host ensures that data stored on the VM host hosting your VM is encrypted at rest and flows encrypted to the Storage clusters.
+- Confidential disk encryption binds disk encryption keys to the VM's TPM and makes the protected disk content accessible only to the VM.
+
+For more information about managed disk encryption options, see [Overview of managed disk encryption options](./disk-encryption-overview.md).
+
+# [Azure Resource Graph](#tab/graph)
++++
+### Networking
+
+#### :::image type="icon" source="../reliability/media/icon-recommendation-low.svg"::: **VM-15: DNS Servers should be configured in the Virtual Network level**
+
+Configure the DNS Server in the Virtual Network to avoid name resolution inconsistency across the environment. For more information on name resolution for resources in Azure virtual networks, see [Name resolution for VMs and cloud services](/azure/virtual-network/virtual-networks-name-resolution-for-vms-and-role-instances?tabs=redhat).
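A minimal sketch of setting custom DNS servers at the virtual network level follows; the virtual network name and IP addresses are illustrative.

```azurecli
# Sketch (illustrative names/addresses): set custom DNS servers on a virtual network
az network vnet update \
  --resource-group myResourceGroup \
  --name myVNet \
  --dns-servers 10.0.0.4 10.0.0.5
```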
++
+# [Azure Resource Graph](#tab/graph)
+++++
+### Storage
+
+#### :::image type="icon" source="../reliability/media/icon-recommendation-medium.svg"::: **VM-16: Shared disks should only be enabled in clustered servers**
+
+Azure shared disks is a feature for Azure managed disks that enables you to attach a managed disk to multiple VMs simultaneously. Attaching a managed disk to multiple VMs allows you to either deploy new or migrate existing clustered applications to Azure and should only be used in those situations where the disk will be assigned to more than one VM member of a cluster.
+
+To learn more about how to enable shared disks for managed disks, see [Enable shared disk](/azure/virtual-machines/disks-shared-enable?tabs=azure-portal).
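As a sketch, creating a data disk that two clustered VMs can share might look like this; the disk name, size, and SKU are illustrative, and shared-disk support varies by disk type and size.

```azurecli
# Sketch (illustrative values): create a Premium SSD data disk that up to
# two clustered VMs can attach simultaneously.
az disk create \
  --resource-group myResourceGroup \
  --name mySharedDisk \
  --size-gb 1024 \
  --sku Premium_LRS \
  --max-shares 2
```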
++
+# [Azure Resource Graph](#tab/graph)
+++++
+### Compliance
+
+#### :::image type="icon" source="../reliability/media/icon-recommendation-low.svg"::: **VM-18: Ensure that your VMs are compliant with Azure Policies**
+
+It's important to keep your virtual machine (VM) secure for the applications that you run. Securing your VMs can include one or more Azure services and features that cover secure access to your VMs and secure storage of your data. For more information on how to keep your VM and applications secure, see [Azure Policy Regulatory Compliance controls for Azure Virtual Machines](/azure/virtual-machines/security-controls-policy).
++
+# [Azure Resource Graph](#tab/graph)
++++
+### Monitoring
+
+#### :::image type="icon" source="../reliability/media/icon-recommendation-low.svg"::: **VM-20: Enable VM Insights**
+
+Enable [VM Insights](/azure/azure-monitor/vm/vminsights-overview) to get more visibility into the health and performance of your virtual machine. VM Insights gives you information on the performance and health of your VMs and virtual machine scale sets, by monitoring their running processes and dependencies on other resources. VM Insights can help deliver predictable performance and availability of vital applications by identifying performance bottlenecks and network issues. Insights can also help you understand whether an issue is related to other dependencies.
++
+# [Azure Resource Graph](#tab/graph)
+++++
+#### :::image type="icon" source="../reliability/media/icon-recommendation-low.svg"::: **VM-21: Configure diagnostic settings for all Azure resources**
+
+Platform metrics are sent automatically to Azure Monitor Metrics by default and without configuration. Platform logs provide detailed diagnostic and auditing information for Azure resources and the Azure platform they depend on and are one of the following types:
+
+- **Resource logs** that aren't collected until they're routed to a destination.
+- **Activity logs** that exist on their own but can be routed to other locations.
+
+Each Azure resource requires its own diagnostic setting, which defines the following criteria:
+
+- **Sources**: The type of metric and log data to send to the destinations defined in the setting. The available types vary by resource type.
+- **Destinations**: One or more destinations to send to.
+
+A single diagnostic setting can define no more than one of each of the destinations. If you want to send data to more than one of a particular destination type (for example, two different Log Analytics workspaces), create multiple settings. Each resource can have up to five diagnostic settings.
+
+For more information, see [Diagnostic settings in Azure Monitor](/azure/azure-monitor/essentials/diagnostic-settings?tabs=portal).
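For example, a minimal Azure CLI sketch that routes a VM's platform metrics to a Log Analytics workspace; the resource IDs are placeholders, and resource types that emit resource logs can also add `--logs` entries:

```
# Send all platform metrics for a VM to a Log Analytics workspace.
az monitor diagnostic-settings create \
  --name send-metrics-to-workspace \
  --resource "/subscriptions/<sub-id>/resourceGroups/myRG/providers/Microsoft.Compute/virtualMachines/myVM" \
  --workspace "/subscriptions/<sub-id>/resourceGroups/myRG/providers/Microsoft.OperationalInsights/workspaces/myWorkspace" \
  --metrics '[{"category": "AllMetrics", "enabled": true}]'
```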
++
+# [Azure Resource Graph](#tab/graph)
++++++
+## Availability zone support
+
+Azure availability zones are at least three physically separate groups of datacenters within each Azure region. Datacenters within each zone are equipped with independent power, cooling, and networking infrastructure. In the case of a local zone failure, availability zones are designed so that if one zone is affected, regional services, capacity, and high availability are supported by the remaining two zones.
+
+Failures can range from software and hardware failures to events such as earthquakes, floods, and fires. Tolerance to failures is achieved with redundancy and logical isolation of Azure services. For more detailed information on availability zones in Azure, see [Regions and availability zones](/azure/availability-zones/az-overview).
+
+Azure availability zones-enabled services are designed to provide the right level of reliability and flexibility. They can be configured in two ways. They can be either zone redundant, with automatic replication across zones, or zonal, with instances pinned to a specific zone. You can also combine these approaches. For more information on zonal vs. zone-redundant architecture, see [Build solutions with availability zones](/azure/architecture/high-availability/building-solutions-for-high-availability).
+
+Virtual machines support availability zones, with three availability zones per supported Azure region, and can be deployed as either zone-redundant or zonal. For more information, see [availability zones support](../reliability/availability-zones-service-support.md). You're responsible for configuring and migrating your virtual machines for availability. Refer to the following readiness options for availability zone enablement:
+
+- See [availability options for VMs](./availability.md)
+- Review [availability zone service and region support](../reliability/availability-zones-service-support.md)
+- [Migrate existing VMs](../reliability/migrate-vm.md) to availability zones
+
+
+### Prerequisites
+
+- Your virtual machine SKUs must be available across the zones in your region. To review which regions support availability zones, see the [list of supported regions](../reliability/availability-zones-service-support.md#azure-regions-with-availability-zone-support).
+
+- Your VM SKUs must be available across the zones in your region. To check for VM SKU availability, use one of the following methods (a CLI sketch follows this list):
+
+ - Use PowerShell to [Check VM SKU availability](windows/create-powershell-availability-zone.md#check-vm-sku-availability).
+ - Use the Azure CLI to [Check VM SKU availability](linux/create-cli-availability-zone.md#check-vm-sku-availability).
+ - Go to [Foundational Services](../reliability/availability-zones-service-support.md#an-icon-that-signifies-this-service-is-foundational-foundational-services).
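For example, a minimal Azure CLI sketch that lists zone availability for a size family in a region; the region and size filter are placeholders:

```
# The Zones column shows 1, 2, 3 for sizes that support availability zones in the region.
az vm list-skus --location eastus --size Standard_D --zone --output table
```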
+
+
+### SLA improvements
+
+Because availability zones are physically separate and provide distinct power source, network, and cooling, SLAs (Service-level agreements) increase. For more information, see the [SLA for Virtual Machines](https://azure.microsoft.com/support/legal/sla/virtual-machines/v1_9/).
+
+#### Create a resource with availability zone enabled
+
+Get started by creating a virtual machine (VM) with availability zone enabled by using one of the following deployment options (a CLI sketch follows the list):
+- [Azure CLI](./linux/create-cli-availability-zone.md)
+- [PowerShell](./windows/create-powershell-availability-zone.md)
+- [Azure portal](./create-portal-availability-zone.md)
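If you prefer the command line, here's a hedged Azure CLI sketch that creates a VM pinned to availability zone 1; the names, image alias, and size are placeholders:

```
# Create a zonal VM in availability zone 1 of the East US region.
az vm create \
  --resource-group myResourceGroup \
  --name myZonalVM \
  --location eastus \
  --zone 1 \
  --image Ubuntu2204 \
  --size Standard_D2s_v3 \
  --generate-ssh-keys
```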
+
+### Zonal failover support
+
+Customers can set up virtual machines to failover to another zone using the Site Recovery service. For more information, see [Site Recovery](../site-recovery/site-recovery-overview.md).
+
+### Fault tolerance
+
+Virtual machines can fail over to another server in a cluster, with the VM's operating system restarting on the new server. Refer to the failover process for disaster recovery, gather virtual machines into recovery plans, and run disaster recovery drills to ensure that your fault tolerance solution is successful.
+
+For more information, see the [site recovery processes](../site-recovery/site-recovery-failover.md#before-you-start).
++
+### Zone down experience
+
+During a zone-wide outage, you should expect a brief degradation of performance until the virtual machine service self-healing re-balances underlying capacity to adjust to healthy zones. This isn't dependent on zone restoration; it's expected that the Microsoft-managed service self-healing state will compensate for a lost zone, leveraging capacity from other zones.
+
+Customers should also prepare for the possibility that there's an outage of an entire region. If there's a service disruption for an entire region, the locally redundant copies of your data would temporarily be unavailable. If geo-replication is enabled, three additional copies of your Azure Storage blobs and tables are stored in a different region. In the event of a complete regional outage or a disaster in which the primary region isn't recoverable, Azure remaps all of the DNS entries to the geo-replicated region.
+
+#### Zone outage preparation and recovery
+
+The following guidance is provided for Azure virtual machines in the case of a service disruption of the entire region where your Azure virtual machine application is deployed:
+
+- Configure [Azure Site Recovery](/azure/virtual-machines/virtual-machines-disaster-recovery-guidance#option-1-initiate-a-failover-by-using-azure-site-recovery) for your VMs
+- Check the [Azure Service Health Dashboard](/azure/virtual-machines/virtual-machines-disaster-recovery-guidance#option-2-wait-for-recovery) status if Azure Site Recovery hasn't been configured
+- Review how the [Azure Backup service](../backup/backup-azure-vms-introduction.md) works for VMs
+ - See the [support matrix](../backup/backup-support-matrix-iaas.md) for Azure VM backups
+- Determine which [VM restore option and scenario](../backup/about-azure-vm-restore.md) will work best for your environment
+
+### Low-latency design
+
+Cross Region (secondary region), Cross Subscription (preview), and Cross Zonal (preview) are available options to consider when designing a low-latency virtual machine solution. For more information on these options, see the [supported restore methods](../backup/backup-support-matrix-iaas.md#supported-restore-methods).
+
+>[!IMPORTANT]
+>By opting out of zone-aware deployment, you forego protection from isolation of underlying faults. Use of SKUs that don't support availability zones or opting out from availability zone configuration forces reliance on resources that don't obey zone placement and separation (including underlying dependencies of these resources). These resources shouldn't be expected to survive zone-down scenarios. Solutions that leverage such resources should define a disaster recovery strategy and configure a recovery of the solution in another region.
+
+### Safe deployment techniques
+
+When you opt for availability zone isolation, use safe deployment techniques for application code and application upgrades. In addition to configuring Azure Site Recovery, the following are recommended safe deployment techniques for VMs:
+
+- [Virtual Machine Scale Sets](/azure/virtual-machines/flexible-virtual-machine-scale-sets)
+- [Availability Sets](./availability-set-overview.md)
+- [Azure Load Balancer](../load-balancer/load-balancer-overview.md)
+- [Azure Storage Redundancy](../storage/common/storage-redundancy.md)
+++
+ As Microsoft periodically performs planned maintenance updates, there may be rare instances when these updates require a reboot of your virtual machine to apply the required updates to the underlying infrastructure. To learn more, see [availability considerations](./maintenance-and-updates.md#availability-considerations-during-scheduled-maintenance) during scheduled maintenance.
+
+Follow the health signals below for monitoring before upgrading your next set of nodes in another zone:
+
+- Check the [Azure Service Health Dashboard](https://azure.microsoft.com/status/) for the virtual machines service status for your expected regions
+- Ensure that [replication](../site-recovery/azure-to-azure-quickstart.md) is enabled on your VMs
++
+### Availability zone redeployment and migration
+
+To migrate existing virtual machine resources to a zone-redundant configuration, refer to the following resources:
+
+- Move a VM to another subscription or resource group
+ - [CLI](/azure/azure-resource-manager/management/move-resource-group-and-subscription#use-azure-cli)
+ - [PowerShell](/azure/azure-resource-manager/management/move-resource-group-and-subscription#use-azure-powershell)
+- [Azure Resource Mover](/azure/resource-mover/tutorial-move-region-virtual-machines)
+- [Move Azure VMs to availability zones](../site-recovery/move-azure-vms-avset-azone.md)
+- [Move region maintenance configuration resources](./move-region-maintenance-configuration-resources.md)
+
+## Disaster recovery: cross-region failover
+
+In the case of a region-wide disaster, Azure can provide protection from regional or large-geography disasters by using disaster recovery in another region. For more information on Azure disaster recovery architecture, see [Azure to Azure disaster recovery architecture](../site-recovery/azure-to-azure-architecture.md).
+
+Customers can use Cross Region to restore Azure VMs via paired regions. You can restore all the Azure VMs for the selected recovery point if the backup is done in the secondary region. For more details on Cross Region restore, refer to the Cross Region table row entry in our [restore options](../backup/backup-azure-arm-restore-vms.md#restore-options).
++
+### Cross-region disaster recovery in multi-region geography
+
+While Microsoft is working diligently to restore the virtual machine service for region-wide service disruptions, customers will have to rely on other application-specific backup strategies to achieve the highest level of availability. For more information, see the section on [Data strategies for disaster recovery](/azure/architecture/reliability/disaster-recovery#disaster-recovery-plan).
+
+#### Outage detection, notification, and management
+
+The hardware or the physical infrastructure for the virtual machine might fail unexpectedly. Failures can include local network failures, local disk failures, or other rack-level failures. When a failure is detected, the Azure platform automatically migrates (heals) your virtual machine to a healthy physical machine in the same datacenter. During the healing procedure, virtual machines experience downtime (reboot) and in some cases loss of the temporary drive. The attached OS and data disks are always preserved.
+
+For more detailed information on virtual machine service disruptions, see [disaster recovery guidance](/azure/virtual-machines/virtual-machines-disaster-recovery-guidance).
+
+#### Set up disaster recovery and outage detection
+
+When setting up disaster recovery for virtual machines, understand what [Azure Site Recovery provides](../site-recovery/site-recovery-overview.md#what-does-site-recovery-provide). Enable disaster recovery for virtual machines with the below methods:
+
+- Set up disaster recovery to a [secondary Azure region for an Azure VM](../site-recovery/azure-to-azure-quickstart.md)
+- Create a Recovery Services vault
+ - [Bicep](../site-recovery/quickstart-create-vault-bicep.md)
+ - [ARM template](../site-recovery/quickstart-create-vault-template.md)
+- Enable disaster recovery for [Linux virtual machines](./linux/tutorial-disaster-recovery.md)
+- Enable disaster recovery for [Windows virtual machines](./windows/tutorial-disaster-recovery.md)
+- Failover virtual machines to [another region](../site-recovery/azure-to-azure-tutorial-failover-failback.md)
+- Failover virtual machines to the [primary region](../site-recovery/azure-to-azure-tutorial-failback.md#fail-back-to-the-primary-region)
+
+### Single-region geography disaster recovery
+
+With disaster recovery set up, Azure VMs will continuously replicate to a different target region. If an outage occurs, you can fail over VMs to the secondary region, and access them from there.
+
+When you replicate Azure VMs using [Site Recovery](../site-recovery/site-recovery-overview.md), all the VM disks are continuously replicated to the target region asynchronously. The recovery points are created every few minutes. This gives you a Recovery Point Objective (RPO) in the order of minutes. You can conduct disaster recovery drills as many times as you want, without affecting the production application or the ongoing replication. For more information, see [Run a disaster recovery drill to Azure](../site-recovery/tutorial-dr-drill-azure.md).
+
+For more information, see [Azure VMs architectural components](../site-recovery/azure-to-azure-architecture.md#architectural-components) and [region pairing](./regions.md#region-pairs).
+
+### Capacity and proactive disaster recovery resiliency
+
+Microsoft and its customers operate under the Shared Responsibility Model. This means that for customer-enabled DR (customer-responsible services), the customer must address DR for any service they deploy and control. To ensure that recovery is proactive, customers should always pre-deploy secondaries because there's no guarantee of capacity at time of impact for those who haven't pre-allocated.
+
+For deploying virtual machines, customers can use [flexible orchestration](../virtual-machine-scale-sets/virtual-machine-scale-sets-orchestration-modes.md#scale-sets-with-flexible-orchestration) mode on Virtual Machine Scale Sets. All VM sizes can be used with flexible orchestration mode. Flexible orchestration mode also offers high availability guarantees (up to 1000 VMs) by spreading VMs across fault domains in a region or within an Availability Zone.
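As an illustration, a minimal Azure CLI sketch of a scale set in flexible orchestration mode spread across three availability zones; the names, image alias, and counts are placeholders:

```
# Create a Flexible-orchestration scale set with instances spread across zones 1-3.
az vmss create \
  --resource-group myResourceGroup \
  --name myFlexScaleSet \
  --orchestration-mode Flexible \
  --image Ubuntu2204 \
  --instance-count 3 \
  --zones 1 2 3 \
  --platform-fault-domain-count 1 \
  --generate-ssh-keys
```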
+
+## Additional guidance
+
+- [Well-Architected Framework for virtual machines](/azure/architecture/framework/services/compute/virtual-machines/virtual-machines-review)
+- [Azure to Azure disaster recovery architecture](/azure/site-recovery/azure-to-azure-architecture)
+- [Accelerated networking with Azure VM disaster recovery](/azure/site-recovery/azure-vm-disaster-recovery-with-accelerated-networking)
+- [Express Route with Azure VM disaster recovery](../site-recovery/azure-vm-disaster-recovery-with-expressroute.md)
+- [Virtual Machine Scale Sets](../virtual-machine-scale-sets/index.yml)
+
+## Next steps
+> [!div class="nextstepaction"]
+> [Resiliency in Azure](/azure/reliability/availability-zones-overview)
virtual-machines Restore Point Troubleshooting https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/restore-point-troubleshooting.md
Most common restore point failures can be resolved by following the troubleshoot
- Navigate to **services.msc** and ensure **Windows Azure VM Guest Agent service** is up and running. Also, ensure the [latest version](https://go.microsoft.com/fwlink/?LinkID=394789&clcid=0x409) is installed. [Learn more](#the-agent-is-installed-in-the-vm-but-its-unresponsive-for-windows-vms). - The Azure VM Agent is installed by default on any Windows VM deployed from an Azure Marketplace image from the portal, PowerShell, Command Line Interface, or an Azure Resource Manager template. A [manual installation of the Agent](../virtual-machines/extensions/agent-windows.md#manual-installation) may be necessary when you create a custom VM image that's deployed to Azure.-- Review the support matrix to check if VM runs on the [supported Windows operating system](concepts-restore-points.md#operating-system-support).
+- Review the support matrix to check if the VM runs on a [supported Windows operating system](concepts-restore-points.md#operating-system-support-for-application-consistency).
# [On Linux VM](#tab/linux) - Ensure the Azure VM Guest Agent service is running by executing the command `ps -e`. Also, ensure the [latest version](../virtual-machines/extensions/update-linux-agent.md) is installed. [Learn more](#the-agent-installed-in-the-vm-is-out-of-date-for-linux-vms). - Ensure the [Linux VM agent dependencies on system packages](../virtual-machines/extensions/agent-linux.md#requirements) have the supported configuration. For example: Supported Python version is 2.6 and above.-- Review the support matrix to check if VM runs on the [supported Linux operating system.](concepts-restore-points.md#operating-system-support).
+- Review the support matrix to check if the VM runs on a [supported Linux operating system](concepts-restore-points.md#operating-system-support-for-application-consistency).
virtual-machines Virtual Machines Create Restore Points Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/virtual-machines-create-restore-points-cli.md
Create a VM restore point with the [az restore-point create](/cli/azure/restore-
``` az restore-point create --resource-group "ExampleRg" --collection-name "ExampleRpc" --name "ExampleRp" ```
+To create a crash-consistent restore point, set the optional parameter `consistency-mode` to `CrashConsistent`. This feature is currently in preview.
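For example, a hedged sketch of the same command with the preview parameter added; this assumes your Azure CLI version exposes `--consistency-mode` on `az restore-point create`:

```
az restore-point create --resource-group "ExampleRg" --collection-name "ExampleRpc" --name "ExampleCrashRp" --consistency-mode "CrashConsistent"
```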
+ ### Exclude disks when creating a restore point Exclude the disks that you do not want to be a part of the restore point with the `--exclude-disks` parameter, as follows: ```
virtual-machines Virtual Machines Create Restore Points Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/virtual-machines-create-restore-points-powershell.md
New-AzRestorePointCollection -ResourceGroupName ExampleRG -Name ExampleRPC -VmId
## Step 2: Create a VM restore point Create a VM restore point with the [New-AzRestorePoint](/powershell/module/az.compute/new-azrestorepoint) cmdlet as shown below:+ ``` New-AzRestorePoint -ResourceGroupName ExampleRG -RestorePointCollectionName ExampleRPC -Name ExampleRP ```
+To create a crash-consistent restore point, set the optional parameter `ConsistencyMode` to `CrashConsistent`. This feature is currently in preview.
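For example, a hedged sketch of the same cmdlet with the preview parameter added; this assumes your Az.Compute module version exposes `-ConsistencyMode` on `New-AzRestorePoint`:

```
New-AzRestorePoint -ResourceGroupName ExampleRG -RestorePointCollectionName ExampleRPC -Name ExampleRP -ConsistencyMode CrashConsistent
```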
### Exclude disks from the restore point Exclude certain disks that you do not want to be a part of the restore point with the `-DisksToExclude` parameter, as follows:
virtual-machines Virtual Machines Create Restore Points https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/virtual-machines-create-restore-points.md
You can protect your data and guard against extended downtime by creating virtua
An individual VM restore point is a resource that stores VM configuration and point-in-time application consistent snapshots of all the managed disks attached to the VM. You can use VM restore points to easily capture multi-disk consistent backups. VM restore points contain a disk restore point for each of the attached disks and a disk restore point consists of a snapshot of an individual managed disk.
-VM restore points support application consistency for VMs running Windows operating systems and support file system consistency for VMs running Linux operating system. Application consistent restore points use VSS writers (or pre/post scripts for Linux) to ensure the consistency of the application data before a restore point is created. To get an application consistent restore point, the application running in the VM needs to provide a VSS writer (for Windows), or pre and post scripts (for Linux) to achieve application consistency.
+VM restore points support both application consistency and crash consistency (in preview).
+Application consistency is supported for VMs running Windows operating systems, and file system consistency is supported for VMs running Linux operating systems. Application-consistent restore points use VSS writers (or pre/post scripts for Linux) to ensure the consistency of the application data before a restore point is created. To get an application-consistent restore point, the application running in the VM needs to provide a VSS writer (for Windows), or pre and post scripts (for Linux), to achieve application consistency.
+
+A crash-consistent VM restore point stores the VM configuration and point-in-time, write-order-consistent snapshots for all managed disks attached to a virtual machine. This is the same as the state of data in the VM after a power outage or a crash. To create one, set the optional `consistencyMode` parameter to `crashConsistent` in the creation request. This feature is currently in preview.
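For illustration, a hedged sketch of the request body for creating a crash-consistent restore point; the property shape is an assumption based on the restore point REST API (api-version 2021-07-01 or later):

```json
{
  "properties": {
    "consistencyMode": "crashConsistent"
  }
}
```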
VM restore points are organized into restore point collections. A restore point collection is an Azure Resource Management resource that contains the restore points for a specific VM. If you want to utilize ARM templates for creating restore points and restore point collections, visit the public [Virtual-Machine-Restore-Points](https://github.com/Azure/Virtual-Machine-Restore-Points) repository on GitHub.
Currently, restore points can only be created in one VM at a time, that is, you
- Restore points are supported only for managed disks. - Ultra-disks, Ephemeral OS disks, and Shared disks are not supported. -- Restore points APIs require an API of version 2021-03-01 or later.
+- The API version for application-consistent restore points is 2021-03-01 or later.
+- The API version for crash-consistent restore points (in preview) is 2021-07-01 or later.
- A maximum of 500 VM restore points can be retained at any time for a VM, irrespective of the number of restore point collections. - Concurrent creation of restore points for a VM is not supported. - Restore points for Virtual Machine Scale Sets in Uniform orchestration mode are not supported.
virtual-machines Virtual Machines Reliability https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/virtual-machines-reliability.md
- Title: Reliability in Azure Virtual Machines
-description: Find out about reliability in Azure Virtual Machines
----- Previously updated : 01/12/2023--
-# What is reliability in Virtual Machines?
-
-This article describes reliability support in Virtual Machines (VM), and covers both regional resiliency with availability zones and cross-region resiliency with disaster recovery. For a more detailed overview of reliability in Azure, see [Azure reliability](/azure/architecture/framework/resiliency/overview).
--
-## Availability zone support
-
-Azure availability zones are at least three physically separate groups of datacenters within each Azure region. Datacenters within each zone are equipped with independent power, cooling, and networking infrastructure. In the case of a local zone failure, availability zones are designed so that if the one zone is affected, regional services, capacity, and high availability are supported by the remaining two zones.
-
-Failures can range from software and hardware failures to events such as earthquakes, floods, and fires. Tolerance to failures is achieved with redundancy and logical isolation of Azure services. For more detailed information on availability zones in Azure, see [Regions and availability zones](/azure/availability-zones/az-overview).
-
-Azure availability zones-enabled services are designed to provide the right level of reliability and flexibility. They can be configured in two ways. They can be either zone redundant, with automatic replication across zones, or zonal, with instances pinned to a specific zone. You can also combine these approaches. For more information on zonal vs. zone-redundant architecture, see [Build solutions with availability zones](/azure/architecture/high-availability/building-solutions-for-high-availability).
-
-Virtual machines support availability zones with three availability zones per supported Azure region and are also zone-redundant and zonal. For more information, see [availability zones support](../reliability/availability-zones-service-support.md). The customer will be responsible for configuring and migrating their virtual machines for availability. Refer to the following readiness options below for availability zone enablement:
--- See [availability options for VMs](./availability.md)-- Review [availability zone service and region support](../reliability/availability-zones-service-support.md)-- [Migrate existing VMs](../reliability/migrate-vm.md) to availability zones--
-### Prerequisites
-
-Your virtual machine SKUs must be available across the zones in for your region. To review which regions support availability zones, see the [list of supported regions](../reliability/availability-zones-service-support.md#azure-regions-with-availability-zone-support). Check for VM SKU availability by using PowerShell, the Azure CLI, or review list of foundational services. For more information, see [reliability prerequisites](../reliability/migrate-vm.md#prerequisites).
-
-### SLA improvements
-
-Because availability zones are physically separate and provide distinct power source, network, and cooling, SLAs (Service-level agreements) increase. For more information, see the [SLA for Virtual Machines](https://azure.microsoft.com/support/legal/sla/virtual-machines/v1_9/).
-
-#### Create a resource with availability zone enabled
-
-Get started by creating a virtual machine (VM) with availability zone enabled from the following deployment options below:
-- [Azure CLI](./linux/create-cli-availability-zone.md)-- [PowerShell](./windows/create-powershell-availability-zone.md)-- [Azure portal](./create-portal-availability-zone.md)-
-### Zonal failover support
-
-Customers can set up virtual machines to failover to another zone using the Site Recovery service. For more information, see [Site Recovery](../site-recovery/site-recovery-overview.md).
-
-### Fault tolerance
-
-Virtual machines can failover to another server in a cluster, with the VM's operating system restarting on the new server. Customers should refer to the failover process for disaster recovery, gathering virtual machines in recovery planning, and running disaster recovery drills to ensure their fault tolerance solution is successful.
-
-For more information, see the [site recovery processes](../site-recovery/site-recovery-failover.md#before-you-start).
--
-### Zone down experience
-
-During a zone-wide outage, the customer should expect brief degradation of performance, until the virtual machine service self-healing re-balances underlying capacity to adjust to healthy zones. This isn't dependent on zone restoration; it is expected that the Microsoft-managed service self-healing state will compensate for a lost zone, leveraging capacity from other zones.
-
-Customers should also prepare for the possibility that there's an outage of an entire region. If there's a service disruption for an entire region, the locally redundant copies of your data would temporarily be unavailable. If geo-replication is enabled, three additional copies of your Azure Storage blobs and tables are stored in a different region. In the event of a complete regional outage or a disaster in which the primary region isn't recoverable, Azure remaps all of the DNS entries to the geo-replicated region.
----
-#### Zone outage preparation and recovery
-
-The following guidance is provided for Azure virtual machines in the case of a service disruption of the entire region where your Azure virtual machine application is deployed:
--- Configure [Azure Site Recovery](/azure/virtual-machines/virtual-machines-disaster-recovery-guidance#option-1-initiate-a-failover-by-using-azure-site-recovery) for your VMs-- Check the [Azure Service Health Dashboard](/azure/virtual-machines/virtual-machines-disaster-recovery-guidance#option-2-wait-for-recovery) status if Azure Site Recovery hasn't been configured-- Review how the [Azure Backup service](../backup/backup-azure-vms-introduction.md) works for VMs
- - See the [support matrix](../backup/backup-support-matrix-iaas.md) for Azure VM backups
-- Determine which [VM restore option and scenario](../backup/about-azure-vm-restore.md) will work best for your environment---
-### Low-latency design
-
-Cross Region (secondary region), Cross Subscription (preview), and Cross Zonal (preview) are available options to consider when designing a low-latency virtual machine solution. For more information on these options, see the [supported restore methods](../backup/backup-support-matrix-iaas.md#supported-restore-methods).
-
->[!IMPORTANT]
->By opting out of zone-aware deployment, you forego protection from isolation of underlying faults. Use of SKUs that don't support availability zones or opting out from availability zone configuration forces reliance on resources that don't obey zone placement and separation (including underlying dependencies of these resources). These resources shouldn't be expected to survive zone-down scenarios. Solutions that leverage such resources should define a disaster recovery strategy and configure a recovery of the solution in another region.
-
-### Safe deployment techniques
-
-When you opt for availability zones isolation, you should utilize safe deployment techniques for application code, as well as application upgrades. In addition to configuring Azure Site Recovery, below are recommended safe deployment techniques for VMs:
--- [Virtual Machine Scale Sets](/azure/virtual-machines/flexible-virtual-machine-scale-sets)-- [Availability Sets](./availability-set-overview.md)-- [Azure Load Balancer](../load-balancer/load-balancer-overview.md)-- [Azure Storage Redundancy](../storage/common/storage-redundancy.md)---
- As Microsoft periodically performs planned maintenance updates, there may be rare instances when these updates require a reboot of your virtual machine to apply the required updates to the underlying infrastructure. To learn more, see [availability considerations](./maintenance-and-updates.md#availability-considerations-during-scheduled-maintenance) during scheduled maintenance.
-
-Follow the health signals below for monitoring before upgrading your next set of nodes in another zone:
--- Check the [Azure Service Health Dashboard](https://azure.microsoft.com/status/) for the virtual machines service status for your expected regions-- Ensure that [replication](../site-recovery/azure-to-azure-quickstart.md) is enabled on your VMs----
-### Availability zone redeployment and migration
-
-For migrating existing virtual machine resources to a zone redundant configuration, refer to the below resources:
--- Move a VM to another subscription or resource group
- - [CLI](/azure/azure-resource-manager/management/move-resource-group-and-subscription#use-azure-cli)
- - [PowerShell](/azure/azure-resource-manager/management/move-resource-group-and-subscription#use-azure-powershell)
-- [Azure Resource Mover](/azure/resource-mover/tutorial-move-region-virtual-machines)-- [Move Azure VMs to availability zones](../site-recovery/move-azure-vms-avset-azone.md)-- [Move region maintenance configuration resources](./move-region-maintenance-configuration-resources.md)-
-## Disaster recovery: cross-region failover
-
-In the case of a region-wide disaster, Azure can provide protection from regional or large geography disasters with disaster recovery by making use of another region. For more information on Azure disaster recovery architecture, see [Azure to Azure disaster recovery architecture](../site-recovery/azure-to-azure-architecture.md).
-
-Customers can use Cross Region to restore Azure VMs via paired regions. You can restore all the Azure VMs for the selected recovery point if the backup is done in the secondary region. For more details on Cross Region restore, refer to the Cross Region table row entry in our [restore options](../backup/backup-azure-arm-restore-vms.md#restore-options).
--
-### Cross-region disaster recovery in multi-region geography
-
-While Microsoft is working diligently to restore the virtual machine service for region-wide service disruptions, customers will have to rely on other application-specific backup strategies to achieve the highest level of availability. For more information, see the section on [Data strategies for disaster recovery](/azure/architecture/reliability/disaster-recovery#disaster-recovery-plan).
-
-#### Outage detection, notification, and management
-
-When the hardware or the physical infrastructure for the virtual machine fails unexpectedly. This can include local network failures, local disk failures, or other rack level failures. When detected, the Azure platform automatically migrates (heals) your virtual machine to a healthy physical machine in the same data center. During the healing procedure, virtual machines experience downtime (reboot) and in some cases loss of the temporary drive. The attached OS and data disks are always preserved.
-
-For more detailed information on virtual machine service disruptions, see [disaster recovery guidance](/azure/virtual-machines/virtual-machines-disaster-recovery-guidance).
-
-#### Set up disaster recovery and outage detection
-
-When setting up disaster recovery for virtual machines, understand what [Azure Site Recovery provides](../site-recovery/site-recovery-overview.md#what-does-site-recovery-provide). Enable disaster recovery for virtual machines with the below methods:
--- Set up disaster recovery to a [secondary Azure region for an Azure VM](../site-recovery/azure-to-azure-quickstart.md)-- Create a Recovery Services vault
- - [Bicep](../site-recovery/quickstart-create-vault-bicep.md)
- - [ARM template](../site-recovery/quickstart-create-vault-template.md)
-- Enable disaster recovery for [Linux virtual machines](./linux/tutorial-disaster-recovery.md)-- Enable disaster recovery for [Windows virtual machines](./windows/tutorial-disaster-recovery.md)-- Failover virtual machines to [another region](../site-recovery/azure-to-azure-tutorial-failover-failback.md)-- Failover virtual machines to the [primary region](../site-recovery/azure-to-azure-tutorial-failback.md#fail-back-to-the-primary-region)-
-### Single-region geography disaster recovery
--
-With disaster recovery set up, Azure VMs will continuously replicate to a different target region. If an outage occurs, you can fail over VMs to the secondary region, and access them from there.
-
-For more information, see [Azure VMs architectural components](../site-recovery/azure-to-azure-architecture.md#architectural-components) and [region pairing](./regions.md#region-pairs).
-
-### Capacity and proactive disaster recovery resiliency
-
-Microsoft and its customers operate under the Shared responsibility model. This means that for customer-enabled DR (customer-responsible services), the customer must address DR for any service they deploy and control. To ensure that recovery is proactive, customers should always pre-deploy secondaries because there's no guarantee of capacity at time of impact for those who haven't pre-allocated.
-
-For deploying virtual machines, customers can use [flexible orchestration](../virtual-machine-scale-sets/virtual-machine-scale-sets-orchestration-modes.md#scale-sets-with-flexible-orchestration) mode on Virtual Machine Scale Sets. All VM sizes can be used with flexible orchestration mode. Flexible orchestration mode also offers high availability guarantees (up to 1000 VMs) by spreading VMs across fault domains in a region or within an Availability Zone.
-
-## Additional guidance
--- [Well-Architected Framework for virtual machines](/azure/architecture/framework/services/compute/virtual-machines/virtual-machines-review)-- [Azure to Azure disaster recovery architecture](../site-recovery/azure-to-azure-architecture.md)-- [Accelerated networking with Azure VM disaster recovery](/azure/site-recovery/azure-vm-disaster-recovery-with-accelerated-networking)-- [Express Route with Azure VM disaster recovery](../site-recovery/azure-vm-disaster-recovery-with-expressroute.md)-- [Virtual Machine Scale Sets](../virtual-machine-scale-sets/index.yml)-
-## Next steps
-> [!div class="nextstepaction"]
-> [Resiliency in Azure](/azure/reliability/availability-zones-overview)
virtual-network-manager How To Define Network Group Membership Azure Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network-manager/how-to-define-network-group-membership-azure-policy.md
Previously updated : 05/11/2023 Last updated : 07/18/2023
Assume you have the following virtual networks in your subscription. Each virtua
| myVNet02-WestUS | Test | | myVNet03-WestUS | Test |
-You only want to select virtual networks that contain **VNet-A** in the name. To begin using the basic editor to create your conditional statement, you need to create a new network group.
+You only want to select virtual networks that contain **WestUS** in the name. To begin using the basic editor to create your conditional statement, you need to create a new network group.
1. Go to your Azure Virtual Network Manager instance and select **Network Groups** under **Settings**. Then select **+ Create** to create a new network group. 1. Enter a **Name** and an optional **Description** for the network group, and select **Add**. 1. Select the network group from the list and select **Create Azure Policy**. 1. Enter a **Policy name** and leave the **Scope** selections unless changes are needed. 1. Under **Criteria**, select **Name** from the drop-down under **Parameter** and then select **Contains** from the drop-down under *Operator*.
-1. Enter **WestUS** under **Condition**, then select **Save**.
+1. Enter **WestUS** under **Condition** and select **Preview Resources**. You should see myVNet01-WestUS, myVNet02-WestUS, and myVNet03-WestUS show up in the list.
+1. Select **Close** and **Save**.
1. After a few minutes, select your network group and select **Group Members** under **Settings**. You should only see myVNet01-WestUS, myVNet02-WestUS, and myVNet03-WestUS show up in the list. > [!IMPORTANT]
In this example, a conditional statement is created that finds virtual networks
- Learn about [Network groups](concept-network-groups.md). - Create an [Azure Virtual Network Manager](create-virtual-network-manager-portal.md) instance.-- [Create and manage policies to enforce compliance](../governance/policy/tutorials/create-and-manage.md)
+- [Create and manage policies to enforce compliance](../governance/policy/tutorials/create-and-manage.md)
virtual-network Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/policy-reference.md
Title: Built-in policy definitions for Azure Virtual Network description: Lists Azure Policy built-in policy definitions for Azure Virtual Network. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 07/06/2023 Last updated : 07/18/2023
virtual-network Tutorial Filter Network Traffic https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/tutorial-filter-network-traffic.md
An [application security group (ASGs)](application-security-groups.md) enables y
A [network security group (NSG)](network-security-groups-overview.md) secures network traffic in your virtual network.
-1. From the Azure portal menu, select **+ Create a resource** > **Networking** > **Network security group**, or search for *Network security group* in the portal search box.
+1. From the Azure portal menu, select **+ Create a resource** > **Networking** > **Network security group**, or use the portal search box to search for **Network security group** (not *Network security group (classic)*).
1. Select **Create**.
virtual-network Virtual Network Service Endpoints Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/virtual-network-service-endpoints-overview.md
Service endpoints are available for the following Azure services and regions. Th
- **[Azure Event Hubs](../event-hubs/event-hubs-service-endpoints.md?toc=%2fazure%2fvirtual-network%2ftoc.json)** (*Microsoft.EventHub*): Generally available in all Azure regions. - **[Azure Data Lake Store Gen 1](../data-lake-store/data-lake-store-network-security.md?toc=%2fazure%2fvirtual-network%2ftoc.json)** (*Microsoft.AzureActiveDirectory*): Generally available in all Azure regions where ADLS Gen1 is available. - **[Azure App Service](../app-service/app-service-ip-restrictions.md)** (*Microsoft.Web*): Generally available in all Azure regions where App service is available.-- **[Azure Cognitive Services](../cognitive-services/cognitive-services-virtual-networks.md?tabs=portal)** (*Microsoft.CognitiveServices*): Generally available in all Azure regions where Cognitive services are available.
+- **[Azure Cognitive Services](../ai-services/cognitive-services-virtual-networks.md?tabs=portal)** (*Microsoft.CognitiveServices*): Generally available in all Azure regions where Cognitive services are available.
**Public Preview**
virtual-wan How To Routing Policies https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-wan/how-to-routing-policies.md
# How to configure Virtual WAN Hub routing intent and routing policies -- Virtual WAN Hub routing intent allows you to set up simple and declarative routing policies to send traffic to bump-in-the-wire security solutions like Azure Firewall, Network Virtual Appliances or software-as-a-service (SaaS) solutions deployed within the Virtual WAN hub. ## Background
Connectivity across ExpressRoute circuits via a Firewall appliance in the hub is
#### Routing considerations with ExpressRoute
+> [!NOTE]
+> The routing considerations below apply to all Virtual hubs in the subscription(s) that are enabled by Microsoft Support to allow ExpressRoute to ExpressRoute connectivity via a security appliance in the hub.
+ After transit connectivity across ExpressRoute circuits using a firewall appliance deployed in the Virtual Hub is enabled, you can expect the following changes in behavior in how routes are advertised to ExpressRoute on-premises: * Virtual WAN automatically advertises RFC1918 aggregate prefixes (10.0.0.0/8, 192.168.0.0/16, 172.16.0.0/12) to the ExpressRoute-connected on-premises. These aggregate routes are advertised in addition to the routes described in the previous section. * Virtual WAN automatically advertises all static routes in the defaultRouteTable to ExpressRoute circuit-connected on-premises. This means Virtual WAN advertises the routes specified in the private traffic prefix text box to on-premises.
The following steps describe how to configure routing intent and routing policie
:::image type="content" source="./media/routing-policies/routing-policies-private-nva.png"alt-text="Screenshot showing how to configure NVA private routing policies."lightbox="./media/routing-policies/routing-policies-private-nva.png":::
-4. If you want to configure a Private Traffic Routing Policy and have branches or virtual networks using non-IANA RFC1918 Prefixes, select **Additional Prefixes** and specify the non-IANA RFC1918 prefix ranges in the text box that comes up. Select **Done**.
--
+4. If you want to configure a Private Traffic Routing Policy and have branches or virtual networks using non-IANA RFC1918 prefixes, select **Additional Prefixes** and specify the non-IANA RFC1918 prefix ranges in the text box that appears. Select **Done**. Make sure you add the same prefixes to the Private Traffic prefix text box in all Virtual Hubs configured with Private Routing Policies to ensure that the correct routes are advertised to all hubs.
:::image type="content" source="./media/routing-policies/private-prefixes-nva.png"alt-text="Screenshot showing how to configure additional private prefixes for NVA routing policies."lightbox="./media/routing-policies/private-prefixes-nva.png":::
Assuming you have already reviewed the [Known Limitations](#knownlimitations) s
### Troubleshooting Azure Firewall routing issues * Make sure the provisioning state of the Azure Firewall is **succeeded** before trying to configure routing intent.
-* If you're using non-[IANA RFC1918](https://datatracker.ietf.org/doc/html/rfc1918) prefixes in your branches/Virtual Networks, make sure you have specified those prefixes in the "Private Prefixes" text box.
+* If you're using non-[IANA RFC1918](https://datatracker.ietf.org/doc/html/rfc1918) prefixes in your branches/Virtual Networks, make sure you have specified those prefixes in the "Private Prefixes" text box. The configured "Private Prefixes" don't propagate automatically to other hubs in the Virtual WAN that are configured with routing intent. To ensure connectivity, add these prefixes to the "Private Prefixes" text box in every hub that has routing intent.
* If you have specified non RFC1918 addresses as part of the **Private Traffic Prefixes** text box in Firewall Manager, you may need to configure SNAT policies on your Firewall to disable SNAT for non-RFC1918 private traffic. For more information, reference [Azure Firewall SNAT ranges](../firewall/snat-private-range.md). * Configure and view Azure Firewall logs to help troubleshoot and analyze your network traffic. For more information on how to set-up monitoring for Azure Firewall, reference [Azure Firewall diagnostics](../firewall/firewall-diagnostics.md). For an overview of the different types of Firewall logs, see [Azure Firewall logs and metrics](../firewall/logs-and-metrics.md). * For more information on Azure Firewall, review [Azure Firewall Documentation](../firewall/overview.md).
Assuming you have already reviewed the [Known Limitations](#knownlimitations) s
### Troubleshooting Network Virtual Appliances * Make sure the provisioning state of the Network Virtual Appliance is **succeeded** before trying to configure routing intent.
-* If you're using non [IANA RFC1918](https://datatracker.ietf.org/doc/html/rfc1918) prefixes in your connected on-premises or Virtual Networks, make sure you have specified those prefixes in the "Private Prefixes" text box.
+* If you're using non-[IANA RFC1918](https://datatracker.ietf.org/doc/html/rfc1918) prefixes in your connected on-premises or Virtual Networks, make sure you have specified those prefixes in the "Private Prefixes" text box. The configured "Private Prefixes" don't propagate automatically to other hubs in the Virtual WAN that are configured with routing intent. To ensure connectivity, add these prefixes to the "Private Prefixes" text box in every hub that has routing intent.
* If you have specified non RFC1918 addresses as part of the **Private Traffic Prefixes** text box, you may need to configure SNAT policies on your NVA to disable SNAT for certain non-RFC1918 private traffic. * Check NVA Firewall logs to see if traffic is being dropped or denied by your Firewall rules. * Reach out to your NVA provider for more support and guidance on troubleshooting.
vpn-gateway Active Active Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/vpn-gateway/active-active-portal.md
Use the following steps to convert active-active mode gateway to active-standby
To configure connections, see the following articles: * [Site-to-Site VPN connections](./tutorial-site-to-site-portal.md)
-* [VNet-to-VNet connections](vpn-gateway-howto-vnet-vnet-resource-manager-portal.md#configure-the-vnet1-gateway-connection)
+* [VNet-to-VNet connections](vpn-gateway-howto-vnet-vnet-resource-manager-portal.md)
vpn-gateway Vpn Gateway Howto Vnet Vnet Resource Manager Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/vpn-gateway/vpn-gateway-howto-vnet-vnet-resource-manager-portal.md
description: Learn how to create a VPN gateway connection between VNets.
Previously updated : 09/14/2022 Last updated : 07/17/2023
You can create this configuration using various tools, depending on the deployme
> * [Connect different deployment models - Azure portal](vpn-gateway-connect-different-deployment-models-portal.md) > * [Connect different deployment models - PowerShell](vpn-gateway-connect-different-deployment-models-powershell.md) >
->
## About connecting VNets
You may want to connect virtual networks by using a VNet-to-VNet connection for
* Within the same region, you can set up multi-tier applications with multiple virtual networks that are connected together because of isolation or administrative requirements.
-VNet-to-VNet communication can be combined with multi-site configurations. These configurations lets you establish network topologies that combine cross-premises connectivity with inter-virtual network connectivity, as shown in the following diagram:
+VNet-to-VNet communication can be combined with multi-site configurations. These configurations let you establish network topologies that combine cross-premises connectivity with inter-virtual network connectivity, as shown in the following diagram:
:::image type="content" source="./media/vpn-gateway-howto-vnet-vnet-resource-manager-portal/connections-diagram.png" alt-text="VNet connections diagram.":::
This article shows you how to connect VNets by using the VNet-to-VNet connection
* **Gateway type**: Select **VPN**. * **VPN type**: Select **Route-based**. * **SKU**: VpnGw2
+ * **Generation**: Generation2
* **Virtual network**: VNet1 * **Gateway subnet address range**: 10.1.255.0/27 * **Public IP address**: Create new * **Public IP address name**: VNet1GWpip
+ * **Enable active-active mode**: Disabled
+ * **Configure BGP**: Disabled
* **Connection** * **Name**: VNet1toVNet4
This article shows you how to connect VNets by using the VNet-to-VNet connection
* **Gateway type**: Select **VPN**. * **VPN type**: Select **Route-based**. * **SKU**: VpnGw2
+ * **Generation**: Generation2
* **Virtual network**: VNet4 * **Gateway subnet address range**: 10.41.255.0/27 * **Public IP address**: Create new * **Public IP address name**: VNet4GWpip
+ * **Enable active-active mode**: Disabled
+ * **Configure BGP**: Disabled
* **Connection** * **Name**: VNet4toVNet1
If you already have a VNet, verify that the settings are compatible with your VP
## Create the VNet1 gateway
-In this step, you create the virtual network gateway for your VNet. Creating a gateway can often take 45 minutes or more, depending on the selected gateway SKU. If you're creating this configuration as an exercise, see the [Example settings](#example-settings).
+In this step, you create the virtual network gateway for your VNet. Creating a gateway can often take 45 minutes or more, depending on the selected gateway SKU. For gateway SKU pricing, see [Pricing](https://azure.microsoft.com/pricing/details/vpn-gateway/). If you're creating this configuration as an exercise, see the [Example settings](#example-settings).
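If you script your deployments, here's a hedged Azure CLI sketch that creates the gateway with the example settings; the resource group name is a placeholder, and it assumes VNet1 already contains a GatewaySubnet. The portal steps that follow remain the path this tutorial uses:

```
# Create the public IP and the VNet1 gateway (VpnGw2, Generation2); provisioning can take 45 minutes or more.
az network public-ip create \
  --resource-group TestRG1 \
  --name VNet1GWpip \
  --sku Standard

az network vnet-gateway create \
  --resource-group TestRG1 \
  --name VNet1GW \
  --vnet VNet1 \
  --public-ip-addresses VNet1GWpip \
  --gateway-type Vpn \
  --vpn-type RouteBased \
  --sku VpnGw2 \
  --vpn-gateway-generation Generation2 \
  --no-wait
```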
[!INCLUDE [About gateway subnets](../../includes/vpn-gateway-about-gwsubnet-portal-include.md)]
You can see the deployment status on the Overview page for your gateway. A gatew
After you've configured VNet1, create VNet4 and the VNet4 gateway by repeating the previous steps and replacing the values with VNet4 values. You don't need to wait until the virtual network gateway for VNet1 has finished creating before you configure VNet4. If you're using your own values, make sure the address spaces don't overlap with any of the VNets to which you want to connect.
-## Configure the VNet1 gateway connection
+## Configure your connections
-When the virtual network gateways for both VNet1 and VNet4 have completed, you can create your virtual network gateway connections. In this section, you create a connection from VNet1 to VNet4. VNets in the same subscription can be connected using the portal, even if they are in different resource groups. However, if your VNets are in different subscriptions, you must use [PowerShell](vpn-gateway-vnet-vnet-rm-ps.md) to make the connections.
+When the VPN gateways for both VNet1 and VNet4 have finished deploying, you can create your virtual network gateway connections.
-1. In the portal, go to your virtual network gateway. For example, **VNet1GW**.
-1. On the virtual network gateway page, go to **Connections**. Select **+Add**.
+VNets in the same subscription can be connected using the portal, even if they are in different resource groups. However, if your VNets are in different subscriptions, you must use [PowerShell](vpn-gateway-vnet-vnet-rm-ps.md) to make the connections.
- :::image type="content" source="./media/vpn-gateway-howto-vnet-vnet-resource-manager-portal/connections-add.png" alt-text="Screenshot showing the connections page." lightbox="./media/vpn-gateway-howto-vnet-vnet-resource-manager-portal/connections-add.png":::
+You can create either a bidirectional or a single-direction connection. For this exercise, we'll specify a bidirectional connection, which creates two separate connections so that traffic can flow in both directions.
-1. On the **Add connection** page, fill in the connection values.
+1. In the portal, go to **VNet1GW**.
+1. On the virtual network gateway page, go to **Connections**. Select **+Add**.
- :::image type="content" source="./media/vpn-gateway-howto-vnet-vnet-resource-manager-portal/add-connection.png" alt-text="Screenshot showing the Add Connection page." lightbox="./media/vpn-gateway-howto-vnet-vnet-resource-manager-portal/add-connection.png":::
+ :::image type="content" source="./media/vpn-gateway-howto-vnet-vnet-resource-manager-portal/connections-add.png" alt-text="Screenshot showing the connections page." lightbox="./media/vpn-gateway-howto-vnet-vnet-resource-manager-portal/connections-add.png":::
- * **Name**: Enter a name for your connection. For example, *VNet1toVNet4*.
+1. On the **Create connection** page, fill in the connection values.
+ :::image type="content" source="./media/vpn-gateway-howto-vnet-vnet-resource-manager-portal/bidirectional-connectivity.png" alt-text="Screenshot showing the Create Connection page." lightbox="./media/vpn-gateway-howto-vnet-vnet-resource-manager-portal/bidirectional-connectivity.png":::
+
* **Connection type**: Select **VNet-to-VNet** from the drop-down.
+ * **Establish bidirectional connectivity**: Select this value.
+ * **First connection name**: VNet1-to-VNet4
+ * **Second connection name**: VNet4-to-VNet1
+ * **Region**: East US (the region for VNet1GW)
- * **First virtual network gateway**: This field value is automatically filled in because you're creating this connection from the specified virtual network gateway.
-
- * **Second virtual network gateway**: This field is the virtual network gateway of the VNet that you want to create a connection to. Select **Choose another virtual network gateway** to open the **Choose virtual network gateway** page.
-
- :::image type="content" source="./media/vpn-gateway-howto-vnet-vnet-resource-manager-portal/choose-gateway.png" alt-text="Screenshot showing Choose a virtual network gateway page with another gateway selected."lightbox="./media/vpn-gateway-howto-vnet-vnet-resource-manager-portal/choose-gateway.png":::
-
- * View the virtual network gateways that are listed on this page. Notice that only virtual network gateways that are in your subscription are listed. If you want to connect to a virtual network gateway that isn't in your subscription, use the [PowerShell](vpn-gateway-vnet-vnet-rm-ps.md).
-
- * Select the virtual network gateway to which you want to connect.
+1. Click **Next : Settings >** at the bottom of the page to advance to the **Settings** page.
+1. On the **Settings** page, specify the following values:
+ * **First virtual network gateway**: Select **VNet1GW** from the dropdown.
+ * **Second virtual network gateway**: Select **VNet4GW** from the dropdown.
* **Shared key (PSK)**: In this field, enter a shared key for your connection. You can generate or create this key yourself. In a site-to-site connection, the key you use is the same for your on-premises device and your virtual network gateway connection. The concept is similar here, except that rather than connecting to a VPN device, you're connecting to another virtual network gateway.
-1. Select **OK** to save your changes.
-
-## Configure the VNet4 gateway connection
-
-Next, create a connection from VNet4 to VNet1. In the portal, locate the virtual network gateway associated with VNet4. Follow the steps from the previous section, replacing the values to create a connection from VNet4 to VNet1. Make sure that you use the same shared key.
+ * **IKE Protocol**: IKEv2
+1. For this exercise, you can leave the rest of the settings as their default values.
+1. Select **Review + create**, then **Create** to validate and create your connections.
## Verify your connections
-1. Locate the virtual network gateway in the Azure portal.
+1. Locate the virtual network gateway in the Azure portal. For example, **VNet1GW**.
1. On the **Virtual network gateway** page, select **Connections** to view the **Connections** page for the virtual network gateway. After the connection is established, you'll see the **Status** values change to **Connected**.
- :::image type="content" source="./media/vpn-gateway-howto-vnet-vnet-resource-manager-portal/view-connections.png" alt-text="Screenshot showing the Connections page to verify the connections." border="false" lightbox="./media/vpn-gateway-howto-vnet-vnet-resource-manager-portal/view-connections.png":::
+ :::image type="content" source="./media/vpn-gateway-howto-vnet-vnet-resource-manager-portal/connection-status.png" alt-text="Screenshot showing the connection status." lightbox="./media/vpn-gateway-howto-vnet-vnet-resource-manager-portal/connection-status.png":::
+1. Under the **Name** column, select one of the connections to view more information. When data begins flowing, you'll see values for **Data in** and **Data out**. You can also check these values with Azure PowerShell, as shown in the sketch after these steps.
- :::image type="content" source="./media/vpn-gateway-howto-vnet-vnet-resource-manager-portal/status.png" alt-text="Screenshot shows a resource group with values for Data in and Data out." border="false" lightbox="./media/vpn-gateway-howto-vnet-vnet-resource-manager-portal/status.png":::
+ :::image type="content" source="./media/vpn-gateway-howto-vnet-vnet-resource-manager-portal/status.png" alt-text="Screenshot shows a resource group with values for Data in and Data out." lightbox="./media/vpn-gateway-howto-vnet-vnet-resource-manager-portal/status.png":::
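The connection state can also be checked without the portal. The following is a minimal Azure PowerShell sketch; the resource group name is a placeholder, and the byte counters correspond to the **Data in** and **Data out** values shown in the portal.

```powershell
# Check one direction of the connection (resource group name is a placeholder).
$conn = Get-AzVirtualNetworkGatewayConnection -Name 'VNet1-to-VNet4' -ResourceGroupName TestRG1

# 'Connected' means the tunnel is up.
$conn.ConnectionStatus

# Approximate equivalents of Data in / Data out in the portal.
$conn.IngressBytesTransferred
$conn.EgressBytesTransferred
```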
## Add additional connections
web-application-firewall Protect Api Hosted Apim By Waf https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/web-application-firewall/afds/protect-api-hosted-apim-by-waf.md
Last updated 07/13/2023
There are a growing number of enterprises adhering to an API-first approach for their internal applications, and the number and complexity of security attacks against web applications are constantly evolving. This situation requires enterprises to adopt a strong security strategy to protect APIs from various web application attacks.
-[Azure Web Application Firewall (WAF)](../overview.md) is an Azure Networking product that protects APIs from various of [OWASP top 10](https://owasp.org/www-project-top-ten/) web attacks, CVE's, and malicious bot attacks.
+[Azure Web Application Firewall (WAF)](../overview.md) is an Azure Networking product that protects APIs from various [OWASP top 10](https://owasp.org/www-project-top-ten/) web attacks, CVEs, and malicious bot attacks.
This article describes how to use [Azure Web Application Firewall on Azure Front Door](afds-overview.md) to protect APIs hosted on [Azure API Management](../../api-management/api-management-key-concepts.md).
All other settings remain at default values.
Select the "bookwafpolicy" Azure WAF policy and ensure the Policy mode is set to ["Prevention" in the overview tab of the policy](waf-front-door-create-portal.md#change-mode)
-Azure WAF detection mode is used for testing and validating the policy. Detection doesn't block the call but logs all threats detected, while prevention mode blocks the call if an attack is detected. Typically, you test the scenario before switching to prevention model. For this exercise, we switch to prevention mode.
+Azure WAF detection mode is used for testing and validating the policy. Detection doesn't block the call but logs all threats detected, while prevention mode blocks the call if an attack is detected. Typically, you test the scenario before switching to prevention mode. For this exercise, we switch to prevention mode.
[Azure Web Application Firewall on Azure Front Door](afds-overview.md#waf-modes) has more information about various WAF policy modes.
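If you manage the policy by script rather than through the portal, the mode can also be switched with the Az.FrontDoor PowerShell module. A minimal sketch; the resource group name is a placeholder.

```powershell
# Switch the "bookwafpolicy" Front Door WAF policy from Detection to Prevention mode.
# 'MyResourceGroup' is a placeholder; use the resource group that contains the policy.
Update-AzFrontDoorWafPolicy -Name bookwafpolicy -ResourceGroupName MyResourceGroup -Mode Prevention
```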
At this point, the end-to-end call is set up, and the API is protected by Azure Web Application Firewall.
- [What is Azure Web Application Firewall?](../overview.md)
- [Recommendations to mitigate OWASP API Security Top 10 threats using API Management](../../api-management/mitigate-owasp-api-threats.md)
-- [Configure Front Door Standard/Premium in front of Azure API Management](../../api-management/front-door-api-management.md)
+- [Configure Front Door Standard/Premium in front of Azure API Management](../../api-management/front-door-api-management.md)